id: 2305.01395
title: A Novel Approach for Solving Security Constrained Optimal Power Flow Using the Inverse Matrix Modification Lemma and Benders Decomposition
abstract: With the increasing complexity of power systems, faster methods for power system reliability analysis are needed. We propose a novel methodology to solve the security constrained optimal power flow (SCOPF) problem that reduces the computational time by using the Sherman-Morrison-Woodbury identity and Benders decomposition. The case study suggests that in a 500 node system, the run time is reduced by 83.5% while ensuring a reliable operation of the system considering short- and long-term post-contingency limits and reducing the operational costs, compared to a preventive 'N-1' strategy.
authors: Matias Vistnes, Vijay Venu Vadlamudi, Sigurd Hofsmo Jakobsen, Oddbjørn Gjerde
published_date: 2023-05-02T13:13:43Z
link: http://arxiv.org/abs/2305.01395v1
markdown:
# A Novel Approach for Solving Security Constrained Optimal Power Flow Using the Inverse Matrix Modification Lemma and Benders Decomposition
###### Abstract
With the increasing complexity of power systems, faster methods for power system reliability analysis are needed. We propose a novel methodology to solve the security constrained optimal power flow (SCOPF) problem that reduces the computational time by using the Sherman-Morrison-Woodbury identity and Benders decomposition. The case study suggests that in a 500 node system, the run time is reduced by 83.5% while ensuring a reliable operation of the system considering short- and long-term post-contingency limits and reducing the operational costs, compared to a preventive 'N-1' strategy.
Optimal Power Flow, Benders Decomposition, Schur-Complement, Sherman-Morrison-Woodbury Identity.
## I Introduction
Even with increased processing capability of computers, all the desired modeling details in power system analysis cannot be realized without computational bottlenecks in the ever-growing power systems. More concerning for system operators and planners is that the system's state is changing faster as a result of higher penetration of renewable energy and more frequent extreme weather. Also, with the energy transition and electrification, power system investment is not keeping up with demand growth, leading to operation closer to the equipment limits. Consequently, it is vital to have accurate solutions that can ensure the reliability of electric supply for customers; society is more dependent on it than ever before.
Using the traditional approach of reliability management, reliability could be accomplished through grid development solutions that meet a deterministic reliability criterion (DRC) at all times (typically N-1). Preventive rescheduling of the economic dispatch is often used by operators to meet a DRC. However, such solutions may be prohibitively expensive and not necessarily socio-economically efficient. Recommendations toward more probabilistic and risk-based approaches to reliability management, balancing reliability and costs, were made in the EU-funded project GARPUR [1], where the proposed reliability management approach and criterion (RMAC) contains a socio-economic objective, a reliability target, a discarding principle, and a relaxation principle. Recent research shows the need for better methods for RMAC, especially methods that scale better with the system size [1, 2, 3].
Security constrained optimal power flow (SCOPF) is an extensively used tool in RMAC studies. It can be used to optimize the generation schedule and power system operation. A comprehensive SCOPF formulation should include preventive, corrective, and restorative actions to meet system limits both in the current state and in potential contingency states using an AC power system model. It is a large-scale non-convex mixed-integer nonlinear program (MINLP) [2], and no all-inclusive solution methods exist to analyze large realistic systems [4, 5]. There are many sources of computational burden which make the problem intractable, especially the system size, the number of contingencies considered, and the optimization of actions--preventive, corrective, and restorative. Risk and chance constraints increase the complexity of the problem [4]. Capitanescu [2] noted that several sets of post-contingency limits (on different timescales) should be included to better model the actions taken by operators and the different timescales of equipment operating limits. Many SCOPF formulations have been proposed in the literature. Wang _et al._[4] solve a risk-based DC SCOPF for a large power system using Lagrangian relaxation and Benders decomposition. Kardos _et al._[3] use parallel, distributed algorithms to solve a non-probabilistic AC SCOPF with preventive actions using Schur-Complements on a large-scale system. Karangelos and Wehenkel [5] solve a chance-constrained AC SCOPF including preventive and corrective actions on a smaller power system using an iterative scheme.
Solving an SCOPF for large power systems involves a large number of nodes, branches, and contingencies. Two approaches are in focus here: the inverse matrix modification lemma (IMML) and Benders decomposition.
The IMML shows how to efficiently compute the effect of small modifications to large matrices; it was first known as the Sherman-Morrison-Woodbury identity [6] and was later used in the power system context [7]. While underused in the literature, the IMML is an application of Schur-Complements that could have great potential for use in SCOPF formulations through its reduced calculation burden for contingency cases. The IMML is combined with Benders decomposition, a widely used, highly efficient method to decompose large optimization problems [4, 8, 9].
This paper proposes a new, efficient and scalable methodology to find the optimal socio-economic operation of a power system using both preventive and corrective actions. Restorative actions are planned to be incorporated in future work. The proposal integrates the IMML and Benders decomposition into a probabilistic DC SCOPF framework; the developed method is modular and can be combined with other SCOPF formulations and solution techniques. A DC SCOPF is used to enable analysis of large power systems. The central contribution lies in the combination of efficient solving of the power flow and the unique formation of Benders' cuts including both preventive and corrective actions.
The rest of the paper is organized as follows: Section II presents the SCOPF formulation, the IMML, and Benders decomposition, resulting in the proposed methodology. Section III presents the details of the case study. Section IV presents a concluding summary of the work.
## II Methodology
The power system consists of sets of nodes \(\mathcal{N}\), branches \(\mathcal{B}\), generators \(\mathcal{G}\), and demands \(\mathcal{D}\), where \(N=|\mathcal{N}|\) and \(B=|\mathcal{B}|\). Contingencies are denoted \(c\) and taken from the contingency set \(\mathcal{C}\). To capture short- and long-term operational limits on transmission lines and the effect of generator ramping, we consider a power system with one pre-contingency state and two post-contingency states. The first post-contingency state is set after the contingency when circuit breakers have tripped. The second state is set minutes later when all frequency reserves are activated. We assume that the system has a stable trajectory between all states; although this assumption is common, it is not always valid [10].
### _Security Constrained Optimal Power Flow (SCOPF)_
A preventive-corrective SCOPF from Capitanescu and Wehenkel [11] is extended and presented below (matrix and vector variables are in bold):
\[\min_{\mathbf{x}_{0},\mathbf{u}_{0},\mathbf{x}_{c},\mathbf{u}_{c}}\ f=f(\mathbf{x}_{0},\mathbf{u}_{0},\mathbf{x}_{c},\mathbf{u}_{c}) \tag{1a}\]
\[\text{s.t.}\quad\mathbf{g}_{0}(\mathbf{x}_{0},\mathbf{u}_{0})=0 \tag{1b}\]
\[\mathbf{h}_{0}(\mathbf{x}_{0},\mathbf{u}_{0})\leq\overline{\mathbf{h}}_{0}^{LT} \tag{1c}\]
\[\mathbf{g}_{c}(\mathbf{x}_{c},\mathbf{u}_{c})=0\qquad c\in\mathcal{C} \tag{1d}\]
\[\mathbf{h}_{c}(\mathbf{x}_{c},\mathbf{u}_{0})\leq\overline{\mathbf{h}}_{c}^{ST}\qquad c\in\mathcal{C} \tag{1e}\]
\[\mathbf{h}_{c}(\mathbf{x}_{c},\mathbf{u}_{c})\leq\overline{\mathbf{h}}_{c}^{LT}\qquad c\in\mathcal{C} \tag{1f}\]
\[|\mathbf{u}_{c}-\mathbf{u}_{0}|\leq\Delta\mathbf{u}_{c}\qquad c\in\mathcal{C} \tag{1g}\]
where \(\mathbf{x}\) are the state variables, \(\mathbf{u}\) are the control variables, \(f\) is the function to be minimized with respect to \(\mathbf{x}_{0}\), \(\mathbf{u}_{0}\), \(\mathbf{x}_{c}\), and \(\mathbf{u}_{c}\), \(\mathbf{g}\) are the power flow equations, \(\mathbf{h}\) are the operating limits, and \(\Delta\mathbf{u}_{c}\) is the maximum allowed change of the control variables. Subscripts \(\cdot_{0}\) and \(\cdot_{c}\), respectively, represent the base case and a post-contingency case. \(\overline{\mathbf{h}}_{0}^{LT}\), \(\overline{\mathbf{h}}_{c}^{ST}\), and \(\overline{\mathbf{h}}_{c}^{LT}\) are the operating limits for normal operation, the short-term limits after a contingency, and the long-term limits after a contingency, respectively.
In this paper branch contingencies are considered, which include transmission lines and transformers. The operational limits considered are maximum current in branches (short- and long-term), generator maximum active power output, generator ramping limit, and a maximum of 10% load shedding on each node after a contingency.
### _Inverse matrix modification lemma_
When performing a contingency analysis we solve the DC power flow equation:
\[(\mathbf{H}+\mathbf{\Delta}\mathbf{H})\cdot\mathbf{\theta}=\mathbf{P} \tag{2}\]
where \(\mathbf{H}\) is a sparse (\(N\times N\)) susceptance matrix, \(\mathbf{\Delta}\mathbf{H}\) is a system modification due to a contingency, \(\mathbf{\theta}\) is the node voltage angles, and \(\mathbf{P}\) is the node injected active power. If \(\mathbf{\Delta}\mathbf{H}\) is symmetric\({}^{2}\), it can be written as:
Footnote 2: For an asymmetric \(\mathbf{\Delta}\mathbf{H}\) (e.g., when using phase shifting transformers), the formula is still valid but the equations differ slightly [7].
\[\mathbf{\Delta}\mathbf{H}=\mathbf{\Phi}_{m}\cdot\mathbf{\delta}\mathbf{h}\cdot\mathbf{\Phi}_{m}^{ \mathsf{T}} \tag{3}\]
where \(\mathbf{\delta}\mathbf{h}\) is an (\(M\times M\)) matrix of the modifications in \(\mathbf{H}\), \(\mathbf{\Phi}_{m}\) is an (\(N\times M\)) connectivity matrix, and \(\mathbf{\Phi}^{\mathsf{T}}\) is the transposed matrix of \(\mathbf{\Phi}\). \(M\) is the number of modified components. This low-rank update structure allows us to use the IMML, which facilitates a cheaper computation than solving the full equation system. Further, solving for \(\mathbf{\theta}\) using the IMML yields [7]:
\[\mathbf{\theta}=\left(\mathbf{I}-\mathbf{H}^{-1}\mathbf{\Phi}_{m}\mathbf{c}\mathbf{\Phi}_ {m}^{\mathsf{T}}\right)\mathbf{H}^{-1}\mathbf{P} \tag{4}\] \[\mathbf{c}=\left(\mathbf{\delta}\mathbf{h}^{-1}+\mathbf{\Phi}_{m}^{\mathsf{T}} \mathbf{H}^{-1}\mathbf{\Phi}_{m}\right)^{-1} \tag{5}\]
where \(\mathbf{I}\) is the identity matrix. (4) is valid for any number of system element modifications. In the case of a single component modification, \(\mathbf{c}\) becomes a scalar, \(c\). In general, if \(\mathbf{c}^{-1}\) is singular, the system modification separates the system into islands [7]. Separated systems cannot be analyzed using the IMML. In Big O notation, the scheme is O(\(NM+M^{2}\)), greatly reducing the number of operations when \(N\gg M\), compared to solving the full linear problem, O(\(N^{3}\)).
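As a quick numerical illustration of (4)-(5), the following sketch (Python/NumPy rather than the authors' Julia implementation; the \(8\times 8\) matrix and the two-component modification are random, made-up data) checks the IMML solution against a direct solve of (2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data, for illustration only: a dense invertible N x N matrix H,
# an injection vector P, and a rank-M modification Delta_H = Phi_m @ dh @ Phi_m.T.
N, M = 8, 2
H = rng.normal(size=(N, N))
H = H @ H.T + N * np.eye(N)           # make it symmetric and well conditioned
P = rng.normal(size=N)
Phi_m = np.zeros((N, M))
Phi_m[1, 0], Phi_m[4, 0] = 1.0, -1.0  # first modification touches nodes 1 and 4
Phi_m[2, 1], Phi_m[5, 1] = 1.0, -1.0  # second modification touches nodes 2 and 5
dh = np.diag([-0.8, 0.5])             # delta-h: change in the two branch susceptances

# Direct solution of (2): (H + Delta_H) theta = P
theta_direct = np.linalg.solve(H + Phi_m @ dh @ Phi_m.T, P)

# IMML solution, eqs. (4)-(5): only an (M x M) system changes per modification.
# H^-1 is assumed precomputed (in practice, a factorization of H is reused).
H_inv = np.linalg.inv(H)
c = np.linalg.inv(np.linalg.inv(dh) + Phi_m.T @ H_inv @ Phi_m)       # eq. (5)
theta_imml = (np.eye(N) - H_inv @ Phi_m @ c @ Phi_m.T) @ H_inv @ P   # eq. (4)

assert np.allclose(theta_direct, theta_imml)
print("max difference:", np.abs(theta_direct - theta_imml).max())
```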
The procedure using the IMML to calculate the branch power flows \(\mathbf{F}_{c}\) after a contingency of branch \(l\) from node \(i\) to node \(j\) consists of the following equations:
\[\mathbf{\theta}_{c}=\mathbf{\theta}-\frac{\mathbf{\delta}\cdot(\theta[i]- \theta[j])}{1/H[i,j]+\delta[i]-\delta[j]} \tag{6}\] \[\mathbf{\delta}=\frac{x_{l}}{H[i,j]}\cdot(\mathbf{X}[:,i]-\mathbf{X}[:,j])\] (7) \[\mathbf{F}_{c}=\mathbf{\Psi}\cdot\mathbf{\Phi}\cdot\mathbf{\theta}_{c} \tag{8}\]
where \(x_{l}\) is the reactance of branch \(l\), \(\mathbf{X}\) is the inverse susceptance matrix, \(\mathbf{\Psi}\) is the diagonal branch susceptance matrix, and \(\mathbf{X}[:,i]\) (\(\mathbf{X}[i,:]\)) is notation for the \(i\)th column (row) of matrix \(\mathbf{X}\). Details on the matrices are given in the appendix, Section V.
In the proposed methodology, the inverse susceptance matrix after a contingency, \(\mathbf{X}_{c}\), is used to calculate contingency power transfer distribution factors (PTDF) matrix (effectively the line outage distribution factors (LODF), \(\mathbf{\varphi}\)) and the power flow after the outage.
\[\mathbf{\varphi}_{c}=\mathbf{\Psi}\cdot\mathbf{\Phi}\cdot\mathbf{X}_{c} \tag{9}\] \[\mathbf{F}=\mathbf{\varphi}_{c}\cdot\mathbf{P} \tag{10}\]
Direct calculation of \(\mathbf{X}_{c}\) is done by inverting the contingency susceptance matrix, \(\mathbf{X}_{c}=(\mathbf{H}+\mathbf{\Delta H})^{-1}\). However, using the IMML to find \(\mathbf{X}_{c}\) is more efficient.
\[\mathbf{X}_{c}=\mathbf{X}-\frac{\mathbf{X}\cdot\mathbf{\Phi}[i,:]\cdot H[i,j] \cdot\mathbf{\delta}}{1+H[i,j]\cdot\mathbf{\delta}\cdot\mathbf{\Phi}[i,:]} \tag{11}\] \[\mathbf{\delta}=\frac{x_{b}}{H[i,j]}\cdot(\mathbf{X}[:,i]-\mathbf{X}[:,j]) \tag{12}\]
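The update (11)-(12) can be sketched numerically. The snippet below (Python/NumPy rather than the authors' Julia implementation; the 4-node network and its injections are made up) writes the single-branch outage update in the standard Sherman-Morrison form, which is the same rank-one update up to notation, and cross-checks the contingency PTDF and flows of (9)-(10) against a full rebuild:

```python
import numpy as np

# Hypothetical 4-node, 5-branch test network (values made up for illustration).
# Each branch is (from node, to node, susceptance b_l); node 0 is the reference node.
branches = [(0, 1, 10.0), (0, 2, 8.0), (1, 2, 5.0), (1, 3, 4.0), (2, 3, 6.0)]
N, B = 4, len(branches)
P = np.array([0.0, 0.5, -0.3, -0.2])    # nodal injections (sum to zero)

# Connectivity and diagonal susceptance matrices, eqs. (21)-(23)
Phi = np.zeros((B, N))
for l, (i, j, b) in enumerate(branches):
    Phi[l, i], Phi[l, j] = 1.0, -1.0
Psi = np.diag([b for (_, _, b) in branches])

def reduced_inverse(Psi_):
    """X = H^-1 with the reference node (node 0) eliminated, eq. (24) and Algorithm 1."""
    H = Phi.T @ Psi_ @ Phi
    X = np.zeros((N, N))
    X[1:, 1:] = np.linalg.inv(H[1:, 1:])
    return X

X = reduced_inverse(Psi)

# Outage of branch l: a rank-one change of -b_l along its incidence column.
l = 2                                    # take branch 1-2 out of service
_, _, b_l = branches[l]
a = Phi[l, :].reshape(-1, 1)             # incidence column of the outaged branch
# Sherman-Morrison rank-one update; equivalent to eqs. (11)-(12) up to notation.
X_c = X + (X @ a) @ (a.T @ X) / (1.0 / b_l - (a.T @ X @ a).item())

# Contingency PTDF and post-contingency flows, eqs. (9)-(10)
Psi_c = Psi.copy(); Psi_c[l, l] = 0.0
phi_c = Psi_c @ Phi @ X_c
F_c = phi_c @ P

# Cross-check against rebuilding the network without the branch
F_check = Psi_c @ Phi @ reduced_inverse(Psi_c) @ P
assert np.allclose(F_c, F_check)
print(F_c)
```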
### _Benders decomposition_
Benders decomposition is a method to divide an optimization problem into a main problem and several sub-problems [8]. From the sub-problems a Benders' cut is added to the main problem formulation in such a way that an optimal solution of the main problem is a feasible solution of (1). In our case the main problem is (1a) - (1c) and the constraints in the sub-problem are (1d) - (1g). Often the contingencies with active constraints in the optimal solution are a small sub-set of \(\mathcal{C}\). Thus not all sub-problems need to be added to the main problem to make a feasible solution. The branch power flow constraints in the sub-problem can be reformulated to reduce the complexity through a Benders' cut.
Starting from the equation for branch power flow (10), the Benders' cut can be deduced. Using (10) a constraint can be set up to limit the branch power flow to the rated value \(\overline{\mathbf{h}_{l}}\):
\[-\overline{\mathbf{h}_{l}}\leq\mathbf{\varphi}\cdot\mathbf{P}\leq\overline{\mathbf{h}_{l}}. \tag{13}\]
Only the lower or the upper bound can be active in the optimal solution. (10) is also valid for a change in injected power \(\Delta P\). Thus, a constraint which requires a power flow change \(\Delta F\) can be expressed.
\[\mathbf{\varphi}\cdot\mathbf{\Delta P}\ \begin{cases}\geq\Delta F_{l},&\text{if }F_{l} \geq 0\\ \leq\Delta F_{l},&\text{if }F_{l}<0\end{cases}\quad\forall l\in\mathcal{B} \tag{14}\]
where \(F_{l}\) is the current flow on branch \(l\). The Benders' cut added to the main problem formulation in (1a) - (1c) to mitigate overload on branch \(l\) after a contingency on branch \(i\) is given as follows:
\[\sum_{n\in\mathcal{N}}\varphi_{i,n}\,\Delta P_{n}\ \begin{cases}\geq F_{l}-\overline{h}_{l},&\text{if }F_{l}>\overline{h}_{l}\\ \leq F_{l}+\overline{h}_{l},&\text{if }F_{l}<-\overline{h}_{l}\end{cases} \tag{15}\]
\[\Delta P_{n}=P_{c,n}-P_{0,n}^{g}-P_{c,n}^{g+}+P_{c,n}^{g-}+P_{c,n}^{d} \tag{16}\]
where \(\varphi_{i,n}\) is the value of the LODF at the contingency branch \(i\) and node \(n\), \(P_{c,n}\) is the injected active power at node \(n\) using the current generation schedule solution for contingency \(c\), \(P_{0,n}^{g}\) and \(P_{c,n}^{g\pm}\) are the active power generation changes at node \(n\) as a preventive action and as a corrective action, respectively (\(P^{g+}\) is an increase and \(P^{g-}\) is a decrease), \(P_{c,n}^{d}\) is the post-contingency load shedding at node \(n\), and \(F_{l}\) is the flow on the overloaded branch \(l\). (15) is not defined for branches within their limit, \(-\overline{h}_{l}\leq F_{l}\leq\overline{h}_{l}\), as there is no overload to mitigate.
If only considering preventive actions or only short-term operating limits, (16) is reduced to, respectively, as follows:
\[\Delta P_{n}=P_{0,n}-P_{0,n}^{g} \tag{17}\] \[\Delta P_{n}=P_{c,n}-P_{0,n}^{g}+P_{c,n}^{g-}+P_{c,n}^{d} \tag{18}\]
where \(P_{0,n}\) is the injected active power at node \(n\) using the current generation schedule solution in the base case. The corresponding \(\overline{h}_{l}\) for the time-frame is used.
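As a sketch of how a cut of the form (15)-(16) can be assembled in code, the snippet below (Python/NumPy, not the authors' Julia/JuMP implementation; the function name, the dictionary-of-coefficients representation, and the 3-node numbers are made up for illustration) treats \(P_{c,n}\) as a constant evaluated at the current schedule and moves it to the right-hand side:

```python
import numpy as np

def benders_cut(phi_row, F_l, h_bar_l, P_c):
    """Build one Benders' cut, eqs. (15)-(16), as coefficient vectors over the
    main-problem variables.  A sketch only: how P0g, Pc_g+, Pc_g-, and Pc_d enter
    the master problem is application specific.

    phi_row : LODF/PTDF row of the overloaded branch for this contingency (length N)
    F_l     : post-contingency flow on the overloaded branch
    h_bar_l : branch rating for the relevant time frame (short- or long-term)
    P_c     : nodal injections under the current schedule for this contingency
    """
    if abs(F_l) <= h_bar_l:
        return None                        # no overload, no cut (see text after (16))
    # Delta P_n = P_c,n - P0g_n - Pc_g+_n + Pc_g-_n + Pc_d_n, eq. (16):
    # the P_c term is a constant here, the rest are per-node variable coefficients.
    const = float(phi_row @ P_c)
    coeff = {"P0g": -phi_row, "Pc_gplus": -phi_row,
             "Pc_gminus": +phi_row, "Pc_d": +phi_row}
    if F_l > h_bar_l:                      # upper overload: require a flow reduction
        sense, rhs = ">=", F_l - h_bar_l - const
    else:                                  # lower overload (F_l < -h_bar_l)
        sense, rhs = "<=", F_l + h_bar_l - const
    return coeff, sense, rhs

# Tiny made-up example: 3 nodes, a branch rated 1.0 p.u. loaded at 1.3 p.u.
cut = benders_cut(np.array([0.6, -0.2, 0.1]), F_l=1.3, h_bar_l=1.0,
                  P_c=np.array([0.8, -0.5, -0.3]))
print(cut)
```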
In addition to the Benders' cut modelling the flow of power, the objective function is extended using (19) to account for the added costs of mitigating the branch overload when using corrective actions, or (20) for only short-term operating limits.
\[\min\ (k^{\mathcal{G}})^{\mathsf{T}}\left(P_{0}^{\mathcal{G}}+\pi_{c}P_{c}^{\mathcal{G}+}\right)+\text{voll}^{\mathsf{T}}\left(P_{0}^{\mathcal{D}}+\pi_{c}P_{c}^{\mathcal{D}}\right) \tag{19}\]
\[\min\ \text{voll}^{\mathsf{T}}\left(P_{0}^{\mathcal{D}}+\pi_{c}P_{c}^{\mathcal{D}}\right) \tag{20}\]
where \(k^{\mathcal{G}}\) is the generation cost, \(\pi_{c}\) is the contingency probability, voll is the value-of-lost-load, and \(P^{\mathcal{D}}\) is the active power load shedding.
### _Algorithm_
The proposed methodology, illustrated in Fig. 1, employs the IMML to identify contingencies where branch overloads may occur. Benders' cuts are added to the model to represent potential preventive or corrective actions that can be taken to mitigate overloaded branches.
Initially, an SCOPF is solved without considering any contingencies, using any linear programming algorithm. This generates an 'N-0' safe generation schedule for the system in its current state and gives the lowest system cost. As cuts are later added as constraints, the system cost increases.
For a given branch contingency, the power flow is calculated with the base case generation schedule (with possible preventive actions) using (8). If the calculation finds overloaded branches for the contingency, (10) is calculated. (10) is O(\(N^{2}M+NM^{2}\)) while (8) is considerably faster at O(\(NM+M^{2}\)).
If the system is separated into islands after a branch contingency, Goderya's algorithm [12] is used to detect them. Then the reference island (the connected nodes which include the reference node) is found. Islands without the reference node are assumed to black out. The reference island nodes of \(\mathbf{H}\) are used in Algorithm 1 to calculate \(\mathbf{X}_{c}\); then (26) and (27) are used to find the new PTDF matrix and power flow, respectively, followed by the Benders' cut if there are any overloaded branches. If not, short-term variables are added to the main problem, though no cut is added to the formulation.
When a contingency results in the overload of one or more branches, Benders' cuts for preventive and corrective actions are added for each overload, as outlined in Section II-C. The model is then re-solved to find the new optimal point and update the base case power flow.
The algorithm ends once all contingencies have been checked, and no overloaded branches are found. The output is an optimal generation schedule for a base case together with optimal corrective actions after a contingency for the optimal system cost.
## III Case study
A case study is run to compare a standard SCOPF (direct optimization of (1)) and the proposed methodology. The methods are implemented in Julia language [13] with the packages JuMP [14] and PowerSystems [15] and problem optimization using the Gurobi solver [16]. Results and run time are compared for the IEEE RTS-79 system [17] and the SouthCarolina500 (ACTIVSg500) test system [18] in Table I. Where generator ramping limits were not provided, limits of 1% of rated power per minute were set. The results show that using the proposed methodology significantly reduces the solution time while yielding highly similar optimal cost values.
One hypothesis in the proposed methodology is that it is faster to calculate only the PTDF matrix when needed, instead of calculating it for every contingency. For the ACTIVSg500 system, calculating contingency branch power flow using \(\boldsymbol{\theta}_{c}\) in (8) takes \(2.6\,\mu\mathrm{s}\), while it is much slower using \(\boldsymbol{\varphi}_{c}\) in (10) (or (26)), taking \(1.8\,\mathrm{ms}\) (\(7.0\,\mathrm{ms}\)). When applying the proposed methodology on the ACTIVSg500 system, approximately 1/3rd of the run time is used by the solver to optimize the model and approximately 1/5th of the run time is used to compute the branch power flow and PTDF. If (8) is skipped and the branch power flow is always calculated using \(\boldsymbol{\varphi}_{c}\), the branch power flow run time increases. When solving the RTS-79 system using the proposed methodology, 1/4th of the run time is the solver run time and 1/10th of the run time is the branch power flow run time; branch power flow run time increases when skipping (8). The RTS-79 system, which has 38 branches, has only one single branch contingency case that results in system separation. The ACTIVSg500 system, which has 597 branches, has 268 single branch contingency cases resulting in system separation. Thus, even on systems with a high ratio of branch contingencies that result in separation of the system, first calculating \(\boldsymbol{\theta}_{c}\) and only calculating \(\boldsymbol{\varphi}_{c}\) when needed is shown to decrease the run time.
The IEEE RTS-79 system is used to compare the proposed and the standard methodology. For the proposed methodology, the Benders' cuts of branch contingencies are shown in Table II. Branches are denoted as follows: branch from node \(i\) to node \(j\) is i-j.
From Table II, it is evident from the first iteration of the algorithm that all overloads on the initial base case generation schedule are mitigated, for all contingencies. The second iteration shows overload only on branch 7-8. These overloads are generated by corrective actions to mitigate the overloads found in iteration one. By analyzing the Benders' cut (15) it can be seen that a power injection change at any node can be used to fulfill the constraint. However, only the rating of the overloaded branch is used to make the cut, and thus other branches can be overloaded by the corrective actions found after adding the Benders' cut. In theory, the algorithm would need \(B^{2}\) iterations to ensure that all branch ratings are within limits. In the study of both the RTS-79 system and the ACTIVSg500 system, this resulted in one additional iteration, two iterations in total for each.
A comparison of the generation schedule produced by the proposed and the standard methodology is shown in Table III.
Fig. 1: The proposed algorithm for solving SCOPF.
The schedule is optimized on a per-generator and per-load basis, but aggregated to a per-node basis in the tables for the sake of compactness. The base case shows equal injected power between the methods for all nodes (not shown). Both methods find optimal short-term corrective actions for the contingency of branch 7-8. This contingency will isolate node 7 from the rest of the system, and thus actions must be taken to account for the lost generation on node 7. While the generation schedules differ, the optimal costs are equal down to the fifth significant digit.
## IV Conclusion
A novel methodology to solve an SCOPF with short- and long-term post-contingency limits by using the IMML and Benders decomposition has been presented and applied to a case study. The case study suggests that the proposed methodology reduces the computational time significantly. The proposed methodology is specialized towards the SCOPF framework, and is more complex to implement when compared to a standard direct solution method. This disadvantage needs to be balanced against the desired run time to solution. For system operation, reduced run time could mean closer to real-time analysis of critical contingencies if the system changes abruptly. And for planning, more scenarios can be explored within the same time-frame.
To further improve the run time of the proposed methodology, the iteration through contingencies could be executed in parallel. It should be noted that a parallel contingency overload search will often find more contingencies with overloads than if Benders' cuts are added in between, as some cuts mitigate overloads after other contingencies; solving again before adding cuts would mitigate this issue.
Further studies of the methodology's impact on system cost and reliability should be conducted. One useful addition to the SCOPF is to include a risk index [4] or chance constraints [5]. A risk constraint can control the calculated risk in the system to balance the operational cost against risk averse operation. Also, extending the methodology to an AC SCOPF would open the door to considering more types of operational limits.
## V Appendix
The state of a power system can be analyzed using matrices. Here matrices are deduced for the linearized (DC) power flow equations. The first fundamental matrix is the connectivity matrix \(\mathbf{\Phi}\) with dimensions (\(B\times N\)). This matrix shows how branches and nodes are connected. Specifically, it has a value of plus or minus one at the two end nodes of each branch, while all other entries are zero.
\[\Phi[l,n]=\begin{cases}1,&\text{if }l_{i}=n\\ -1,&\text{if }l_{j}=n\\ 0,&\text{otherwise}\end{cases} \tag{21}\]
where each branch \(l\) has an origin node \(l_{i}\) and an end node \(l_{j}\), and \(n\) is a node in the set \(\mathcal{N}\).
The second fundamental matrix is the diagonal susceptance matrix \(\mathbf{\Psi}\) (\(B\times B\)), which contains the susceptance of all branches on its diagonal. All other values are zero.
\[\Psi[k,k]=b_{k} \tag{22}\] \[\Psi[k,l]=0\quad\forall k\neq l \tag{23}\]
where \(b_{k}\) is the branch susceptance. Combining these matrices results in the susceptance matrix \(\mathbf{H}\).
\[\mathbf{H}=\mathbf{\Phi}^{\mathsf{T}}\cdot\mathbf{\Psi}\cdot\mathbf{\Phi} \tag{24}\]
where \(\mathbf{\Phi}^{\mathsf{T}}\) is the transposed matrix of \(\mathbf{\Phi}\).
The inverse susceptance matrix \(\mathbf{X}\) is calculated from the susceptance matrix using Algorithm 1. \(\mathbf{H}\) is singular when built from all system nodes, thus the reference node is eliminated before inverting \(\mathbf{H}\). Further, \(\mathbf{X}\) is used to solve the power flow and calculate the PTDF matrix \(\mathbf{\varphi}\).
\[\mathbf{\theta}=\left[\mathbf{\mathbf{B}}^{*}\right]^{-1}\cdot\mathbf{\mathbf{P}}= \mathbf{\mathbf{X}}\cdot\mathbf{\mathbf{P}} \tag{25}\]
\[\mathbf{\varphi}=\mathbf{\Psi}\cdot\mathbf{\Phi}\cdot\mathbf{\mathbf{X}} \tag{26}\]
where \(\mathbf{\theta}\) is the node voltage angles, and \(\mathbf{\mathbf{P}}\) is the node injected active power. Branch power flow \(\mathbf{\mathbf{F}}\) can then be calculated in two ways:
\[\mathbf{\mathbf{F}}=\mathbf{\Psi}\cdot\mathbf{\Phi}\cdot\mathbf{\theta} \tag{27}\]
\[\mathbf{\mathbf{F}}=\mathbf{\varphi}\cdot\mathbf{\mathbf{P}} \tag{28}\]
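For concreteness, a minimal sketch (Python/NumPy; the 3-node network data are made up for illustration) that builds the matrices of (21)-(26) and verifies that (27) and (28) give the same branch flows:

```python
import numpy as np

# Hypothetical 3-node, 3-branch example (values made up for illustration).
# Branch l goes from node l_i to node l_j with susceptance b_l; node 0 is the reference.
branches = [(0, 1, 5.0), (0, 2, 4.0), (1, 2, 2.0)]
N, B = 3, len(branches)
P = np.array([0.4, -0.1, -0.3])       # nodal injections, sum to zero

# Connectivity matrix, eq. (21)
Phi = np.zeros((B, N))
for l, (li, lj, _) in enumerate(branches):
    Phi[l, li], Phi[l, lj] = 1.0, -1.0

# Diagonal branch susceptance matrix, eqs. (22)-(23)
Psi = np.diag([b for (_, _, b) in branches])

# Susceptance matrix, eq. (24); singular until the reference node is eliminated
H = Phi.T @ Psi @ Phi
X = np.zeros((N, N))
X[1:, 1:] = np.linalg.inv(H[1:, 1:])  # inverse susceptance matrix, reference removed

theta = X @ P                         # eq. (25)
ptdf = Psi @ Phi @ X                  # eq. (26)
F_from_theta = Psi @ Phi @ theta      # eq. (27)
F_from_ptdf = ptdf @ P                # eq. (28)
assert np.allclose(F_from_theta, F_from_ptdf)
print("branch flows:", F_from_theta)
```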

---

id: 2301.13266
title: Stream-based Decentralized Runtime Verification
abstract: Industrial Control Systems (ICS) are often built from geographically distributed components and often use programmable logic controllers for localized processes. Since verification of such systems is challenging because of both time sensitivity of the system specifications and the inherent asynchrony in distributed components, developing runtime assurance that verifies not just the correctness of different components, but also generates aggregated statistics of the systems is of interest. In this paper, we first present a general technique for runtime monitoring of distributed applications whose behavior can be modeled as input/output _streams_ with an internal computation module in the partially synchronous semantics, where an imperfect clock synchronization algorithm is assumed. Second, we propose a generalized stream-based decentralized runtime verification technique. We also rigorously evaluate our algorithm on extensive synthetic experiments and several ICS and aircraft SBS message datasets.
authors: Ritam Ganguly, Borzoo Bonakdarpour
published_date: 2023-01-30T20:22:30Z
link: http://arxiv.org/abs/2301.13266v1
markdown:
# Stream-based Decentralized Runtime Verification
###### Abstract
Industrial Control Systems (ICS) are often built from geographically distributed components and often use programmable logic controllers for localized processes. Since verification of such systems is challenging because of both time sensitivity of the system specifications and the inherent asynchrony in distributed components, developing runtime assurance that verifies not just the correctness of different components, but also generates aggregated statistics of the systems is of interest. In this paper, we first present a general technique for runtime monitoring of distributed applications whose behavior can be modeled as input/output _streams_ with an internal computation module in the partially synchronous semantics, where an imperfect clock synchronization algorithm is assumed. Second, we propose a generalized stream-based decentralized runtime verification technique. We also rigorously evaluate our algorithm on extensive synthetic experiments and several ICS and aircraft SBS message datasets.
## I Introduction
Industrial Control Systems (ICS) are information systems used to control industrial processes such as manufacturing, product handling, and distribution. They include supervisory control and data acquisition systems used to control geographically dispersed assets, and distributed control systems using a programmable logic controller for each of the localized processes. A typical programmable logic controller (PLC) receives data produced by a large number of sensors fitted across the system. The data produced by these components are often the target of cyber and ransomware attacks, putting the security of the system in jeopardy. Since these systems are linked to essential services, any attack on these facilities puts users' lives on the front line. The integrity of the data produced by these distributed components is very important, as the PLC's behavior is dictated by it. Recent attacks have shown that an attack on a company's ICS costs the company around $5 million and 50 days of system downtime. Additionally, according to a recent report [1], it takes the affected company around 191 days to fully recover, and around 54% of all organizations are vulnerable to such attacks.
In this paper, we advocate for a runtime verification (RV) approach to monitor the behavior of a distributed system with respect to a formal specification. Applying RV to multiple components of an ICS can be viewed as the general problem of distributed RV, where centralized or decentralized monitors observe the behavior of a distributed system in which the processes do not share a global clock. Although RV deals with finite executions, the lack of a common global clock prohibits a total ordering of events in a distributed setting. In other words, the monitor can only form a partial ordering of events, which may yield different evaluations. Enumerating all possible interleavings of the system at runtime incurs an exponential blowup, making the approach not scalable. To add to this already complex task, a PLC often requires time-sensitive aggregation of data from multiple sources.
We propose an effective, sound, and complete solution to distributed RV for the popular _stream-based_ specification language Lola [2]. Compared to other temporal logics, Lola can describe both correctness/failure assertions and statistical measures that can be used for system profiling and coverage analysis. As a high-level example of Lola, consider two input streams \(x\) and \(y\) and an output stream \(sum\), as shown in Fig. 1. Stream \(x\) has the value \(3\) until time instance \(2\), when it changes to \(5\), and so on.
We consider a fault-proof, decentralized set of monitors where each monitor only has a partial view of the system and has no access to a global clock. In order to limit the blow-up of states posed by the absence of the global clock, we make a practical assumption about the presence of a bounded clock skew \(\epsilon\) between all the local clocks, guaranteed by a clock synchronization algorithm (like NTP [3]). This setting is known to be _partially synchronous_. As can be seen in Fig. 1, any two events less than \(\epsilon=2\) time units apart are considered to be concurrent, and thus the non-determinism of the time of occurrence of each event is restricted to \(\epsilon-1\) on either side.
Fig. 1: Partially Synchronous LOLA
When attempting to evaluate the output stream \(\mathit{sum}\), we need to take into consideration all possible times of occurrence of the values. For example, when evaluating the value of \(\mathit{sum}\) at time \(1\), we need to consider the value of \(x\) (resp. \(y\)) as \(3\) and \(5\) (resp. \(1\) and \(3\)), which evaluates to \(4\), \(6\), and \(8\). The same can be observed for evaluations across all time instances.
Our first contribution in this paper is introducing a partially synchronous semantics for Lola. In other words, we define a semantics for Lola that takes a clock skew of \(\epsilon\) into consideration when evaluating a stream expression. Second, we introduce an SMT-based associated-equation rewriting technique over a partially observable distributed system, which takes into consideration the values observed by the monitor and rewrites the associated equations. The monitors are able to communicate among themselves and are able to resolve the partially evaluated equations into completely evaluated ones.
We have proved the correctness of our approach and the upper and lower bounds of the message complexity. Additionally, we have fully implemented our technique and report the results of rigorous synthetic experiments, as well as monitoring correctness and aggregated results of several ICS. As identified in [4], most attacks on ICS components try to alter the value reported to the PLC in order to make the PLC behave erroneously. Through our approach, we were able to detect these attacks, in spite of the clock asynchrony among the different components, with deterministic guarantees. We also argue that our approach was able to evaluate system behavior aggregates that make studying these systems easier for a human operator. Unlike machine learning approaches (e.g., [5, 6, 7]), our approach will never raise false negatives. We put our monitoring technique to the test, studying the effects of different parameters on the runtime and the size of the messages sent from one monitor to another, and report on each of them.
_Organization._ Section II presents the background concepts. Partially synchronous Lola and the formal problem statement are introduced in Section III. Our RV technique is collectively presented in Sections IV-VII, followed by the experimental results in Section VIII. Related work is discussed in Section IX before we make concluding remarks in Section X. Details of the syntax of Lola, proofs of correctness, and more details about the ICS case studies can be found in Appendix XI.
## II Preliminaries - Stream-based Specification Language (Lola) [2]
A Lola[2] specification describes the computation of output streams given a set of input streams. A _stream_\(\alpha\) of type \(\mathsf{T}\) is a finite sequence of values, \(t\in\mathsf{T}\). Let \(\alpha(i)\), where \(i\geq 0\), denote the value of the stream at time stamp \(i\). We denote a stream of finite length (resp. infinite length) by \(\mathsf{T}^{*}\) (resp. \(\mathsf{T}^{\omega}\)).
**Definition 1**: _A Lola specification is a set of equations over typed stream variables of the form:_
\[s_{1}=e_{1}(t_{1},\cdots,t_{m},s_{1},\cdots,s_{n})\]
\[\vdots\]
\[s_{n}=e_{n}(t_{1},\cdots,t_{m},s_{1},\cdots,s_{n})\]
_where \(s_{1},s_{2},\cdots,s_{n}\) are called the dependent variables, \(t_{1},t_{2},\cdots,t_{m}\) are called the independent variables, and \(e_{1},e_{2},\cdots,e_{n}\) are the stream expressions over \(s_{1},\cdots,s_{n},t_{1},\cdots,t_{m}\). \(\blacksquare\)_
Typically, _input_ streams are referred to as independent variables, whereas _output_ streams are referred to as dependent variables. For example, consider the following Lola specification, where \(t_{1}\) and \(t_{2}\) are independent stream variables of type boolean and \(t_{3}\) is an independent stream variable of type integer.
\[s_{1}=\texttt{true}\] \[s_{2}=t_{1}\vee(t_{3}\leq 1)\] \[s_{3}=\texttt{ite}(s_{2},s_{4},s_{4}+1)\] \[s_{4}=s_{4}[-1,0]+(t_{3}\bmod 2)\]
where \(\mathtt{ite}\) is the abbreviated form of _if-then-else_, and the stream variables \(s_{7}\) and \(s_{8}\) of the full specification (omitted here) refer to the stream \(t_{1}\) with an offset of \(+1\) and \(-1\), respectively. Due to space constraints we present the full syntax of Lola in Appendix XI-A.
The semantics of Lola specifications is defined in terms of the evaluation model, which describes the relation between input and output streams.
**Definition 2**: _Given a Lola specification \(\varphi\) over independent variables, \(t_{1},\cdots,t_{m}\), of type, \(\mathsf{T}_{1},\cdots,\mathsf{T}_{m}\), and dependent variables, \(s_{1},\cdots,s_{n}\) with type, \(\mathsf{T}_{m+1},\cdots,\mathsf{T}_{m+n}\), let \(\tau_{1},\cdots,\tau_{m}\) be the streams of length \(N+1\), with \(\tau_{i}\) of type \(\mathsf{T}_{i}\). The tuple \(\langle\alpha_{1},\cdots,\alpha_{n}\rangle\) of streams of length \(N+1\) is called the evaluation model, if for every equation in \(\varphi\)_
\[s_{i}=e_{i}(t_{1},\cdots,t_{m},s_{1},\cdots,s_{n})\]
\(\langle\alpha_{1},\cdots,\alpha_{n}\rangle\) satisfies the following associated equations:
\[\alpha_{i}(j)=\mathit{val}(e_{i})(j)\qquad\text{for }(1\leq i\leq n)\wedge(0 \leq j\leq N)\]
where \(\mathit{val}(e_{i})(j)\) is defined as follows. For the base cases:
\[\mathit{val}(c)(j) =c\] \[\mathit{val}(t_{i})(j) =\tau_{i}(j)\] \[\mathit{val}(s_{i})(j) =\alpha_{i}(j)\]
For the inductive cases, where \(f\) is a function (e.g., arithmetic):
\[\mathit{val}\Big{(}f(e_{1},\cdots,e_{k})\Big{)}(j)=f\Big{(}\mathit{val}(e_{1})(j),\cdots,\mathit{val}(e_{k})(j)\Big{)}\]
\[\mathit{val}\Big{(}\mathtt{ite}(b,e_{1},e_{2})\Big{)}(j)=\textsf{if }\mathit{val}(b)(j)\textsf{ then }\mathit{val}(e_{1})(j)\textsf{ else }\mathit{val}(e_{2})(j)\]
\[\mathit{val}(e[k,c])(j)=\begin{cases}\mathit{val}(e)(j+k)&\text{if }0\leq j+k\leq N\\ c&\text{otherwise}\end{cases}\]
**Definition 3**: _A dependency graph for a Lola specification, \(\varphi\) is a weighted and directed graph \(G=\langle V,E\rangle\), with vertex set \(V=\{s_{1},\cdots,s_{n},t_{1},\cdots,t_{m}\}\). An edge \(e:\langle s_{i},s_{k},w\rangle\) (resp. \(e:\langle s_{i},t_{k},w\rangle\)) labeled with a weight \(w\) is in \(E\) iff the equation for \(\alpha_{i}(j)\) in \(\varphi_{\alpha}\) contains \(\alpha_{k}(j+w)\) (resp. \(\tau_{k}(j+w)\)) as a subexpression. Intuitively, an edge records that \(s_{i}\) at a particular position depends on the value of \(s_{k}\) (resp. \(t_{k}\)), offset by \(w\) positions._
Given a set of synchronous input streams \(\{\alpha_{1},\alpha_{2},\cdots,\alpha_{m}\}\) of respective type \(\mathbb{T}=\{\mathsf{T}_{1},\mathsf{T}_{2},\cdots,\mathsf{T}_{m}\}\) and a Lola specification, \(\varphi\), we evaluate the Lola specification, given by:
\[(\alpha_{1},\alpha_{2},\cdots,\alpha_{m})\models_{S}\varphi\]
given the above semantics, where \(\models_{S}\) denotes the synchronous evaluation.
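As an illustration of the evaluation model of Definition 2, the following minimal sketch (Python; not the incremental monitoring algorithm of [2], and the stream values, equations, and helper names are made up) evaluates a small specification by memoized recursion over the associated equations:

```python
from typing import Callable, Dict, List

def evaluate_lola(inputs: Dict[str, List], outputs: Dict[str, Callable], N: int) -> Dict[str, List]:
    """Minimal sketch of the synchronous evaluation model of Definition 2.
    Each output equation is a Python function e(val, j), where
    val(name, j, offset=0, default=None) mirrors val(e[k, c])(j).  This assumes a
    well-formed specification (no zero-weight cycles), so a memoized pass over j
    terminates; it is not an incremental online algorithm."""
    cache: Dict[tuple, object] = {}

    def val(name, j, offset=0, default=None):
        k = j + offset
        if not (0 <= k <= N):
            return default                         # out-of-range offset: default value c
        if name in inputs:
            return inputs[name][k]                 # val(t_i)(j) = tau_i(j)
        if (name, k) not in cache:
            cache[(name, k)] = outputs[name](val, k)   # alpha_i(j) = val(e_i)(j)
        return cache[(name, k)]

    return {s: [val(s, j) for j in range(N + 1)] for s in outputs}

# Example (names and values are illustrative): a running counter and a bound check.
read = [False, False, True, False, True, True]
spec = {
    "countRead": lambda val, j: val("countRead", j, -1, 0) + (1 if val("read", j) else 0),
    "check":     lambda val, j: val("countRead", j) <= 2,
}
print(evaluate_lola({"read": read}, spec, N=len(read) - 1))
```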
## III Partially Synchronous Lola
In this section, we extend the semantics of Lola to one that can accommodate reasoning about distributed systems.
### _Distributed Streams_
Here, we refer to a global clock which will act as the "real" timekeeper. It is to be noted that the presence of this global clock is just for theoretical reasons and it is not available to any of the individual streams.
We assume a _partially synchronous_ system of \(n\) streams, denoted by \(\mathcal{A}=\{\alpha_{1},\alpha_{2},\cdots,\alpha_{n}\}\). For each stream \(\alpha_{i}\), where \(i\in[1,|\mathcal{A}|]\), the local clock can be represented as a monotonically increasing function \(c_{i}:\mathbb{Z}_{\geq 0}\rightarrow\mathbb{Z}_{\geq 0}\), where \(c_{i}(\mathcal{G})\) is the value of the local clock at global time \(\mathcal{G}\). Since we are dealing with discrete-time systems, for simplicity and without loss of generality, we represent time with non-negative integers \(\mathbb{Z}_{\geq 0}\). For any two streams \(\alpha_{i}\) and \(\alpha_{j}\), where \(i\neq j\), we assume:
\[\forall\mathcal{G}\in\mathbb{Z}_{\geq 0}\cdot\mid c_{i}(\mathcal{G})-c_{j}( \mathcal{G})\mid<\epsilon,\]
where \(\epsilon>0\) is the maximum clock skew. The value of \(\epsilon\) is constant and is known (e.g., to a monitor). This assumption is met by the presence of an off-the-shelf clock synchronization algorithm, like NTP [3], to ensure bounded clock skew among all streams. The local state of stream \(\alpha_{i}\) at time \(\sigma\) is given by \(\alpha_{i}(\sigma)\), where \(\sigma=c_{i}(\mathcal{G})\), that is the local time of occurrence of the event at some global time \(\mathcal{G}\).
**Definition 4**: _A distributed stream consisting of \(\mathcal{A}=\{\alpha_{1},\alpha_{2},\ldots,\alpha_{n}\}\) streams of length \(N+1\) is represented by the pair \((\mathcal{E},\rightsquigarrow)\), where \(\mathcal{E}\) is a set of all local states (i.e., \(\mathcal{E}=\cup_{i\in[1,n],j\in[0,N]}\alpha_{i}(j)\)) partially ordered by Lamport's happened-before (\(\rightsquigarrow\)) relation [8], subject to the partial synchrony assumption:_
* _For every stream_ \(\alpha_{i}\)_,_ \(1\leq i\leq|\mathcal{A}|\)_, all the events happening on it are totally ordered, that is,_ \[\forall i,j,k\in\mathbb{Z}_{\geq 0}:(j<k)\rightarrow(\alpha_{i}(j)\rightsquigarrow \alpha_{i}(k))\]
* _For any two streams_ \(\alpha_{i}\) _and_ \(\alpha_{j}\) _and two corresponding events_ \(\alpha_{i}(k),\alpha_{j}(l)\in\mathcal{E}\)_, if_ \(k+\epsilon<l\) _then,_ \(\alpha_{i}(k)\rightsquigarrow\alpha_{j}(l)\)_, where_ \(\epsilon\) _is the maximum clock skew._
* _For events,_ \(e\)_,_ \(f\)_, and_ \(g\)_, if_ \(e\rightsquigarrow f\) _and_ \(f\rightsquigarrow g\)_, then_ \(e\rightsquigarrow g\)_._
**Definition 5**: _Given a distributed stream \((\mathcal{E},\rightsquigarrow)\), a subset of events \(\mathcal{C}\subseteq\mathcal{E}\) is said to form a consistent cut if and only if when \(\mathcal{C}\) contains an event \(e\), then it should also contain all such events that happened before \(e\). Formally,_
\[\forall e,f\in\mathcal{E}.(e\in\mathcal{C})\wedge(f\rightsquigarrow e) \to f\in\mathcal{C}.\]
The frontier of a consistent cut \(\mathcal{C}\), denoted by \(\mathsf{front}(\mathcal{C})\), is the set of all events that happened last in each stream in the cut. That is, \(\mathsf{front}(\mathcal{C})\) is the set of \(\alpha_{i}(\textit{last})\) for each \(i\in[1,|\mathcal{A}|]\) with \(\alpha_{i}(\textit{last})\in\mathcal{C}\), where \(\alpha_{i}(\textit{last})\) denotes the last event of \(\alpha_{i}\) in \(\mathcal{C}\), i.e., \(\alpha_{i}(\sigma)\rightsquigarrow\alpha_{i}(\textit{last})\) for every other \(\alpha_{i}(\sigma)\in\mathcal{C}\).
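The following sketch (Python; the event encoding, the value of \(\epsilon\), and the helper names are illustrative assumptions) spells out the happened-before relation of Definition 4, the consistent-cut check of Definition 5, and the frontier computation on a toy two-stream example:

```python
from itertools import product

EPS = 2   # clock-synchronization constant epsilon (illustrative value)

def happened_before(e, f):
    """Happened-before under the partial-synchrony assumption of Definition 4.
    Events are (stream index, local timestamp) pairs; transitivity follows from
    these two rules, so no closure computation is needed here."""
    (i, k), (j, l) = e, f
    if i == j:
        return k < l                 # same stream: totally ordered
    return k + EPS < l               # different streams: bounded clock skew

def is_consistent_cut(cut, events):
    """Definition 5: the cut must contain every event that happened before any member."""
    return all(f in cut
               for e, f in product(cut, events)
               if happened_before(f, e))

def frontier(cut):
    """front(C): the last event of each stream present in the cut."""
    last = {}
    for (i, k) in cut:
        last[i] = max(last.get(i, k), k)
    return {(i, k) for i, k in last.items()}

# Two streams with three events each (made-up timestamps).
events = {(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)}
C = {(0, 0), (0, 1), (1, 0)}
print(is_consistent_cut(C, events))      # True: no missing predecessors
print(frontier(C))                       # {(0, 1), (1, 0)}
```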
### _Partially Synchronous Lola_
We define the semantics of Lola specifications for partially synchronous distributed streams in terms of the evaluation model. The absence of a common global clock among the stream variables and the presence of clock synchronization mean that an output stream may have multiple values at any given time instance. Thus, we update the evaluation model so that \(\alpha_{i}(j)\) and \(\textit{val}(t_{i})(j)\) are now defined as sets rather than single values. This is due to the nondeterminism caused by partial synchrony, i.e., the bounded clock skew \(\epsilon\).
**Definition 6**: _Given a Lola[2] specification \(\varphi\) over independent variables, \(t_{1},\cdots,t_{m}\) of type \(\mathsf{T}_{1},\cdots,\mathsf{T}_{m}\) and dependent variables, \(s_{1},\cdots,s_{n}\) of type \(\mathsf{T}_{m+1},\cdots,\mathsf{T}_{m+n}\) and \(\tau_{1},\cdots,\tau_{m}\) be the streams of length \(N+1\), with \(\tau_{i}\) of type \(\mathsf{T}_{i}\). The tuple of streams \(\langle\alpha_{1},\cdots,\alpha_{n}\rangle\) of length \(N+1\) with corresponding types is called the evaluation model in the partially synchronous setting, if for every equation in \(\varphi\):_
\[s_{i}=e_{i}(t_{1},\cdots,t_{m},s_{1},\cdots,s_{n}),\]
\(\langle\alpha_{1},\cdots,\alpha_{n}\rangle\) _satisfies the following associated equations:_
\[\alpha_{i}(j)=\left\{\textit{val}(e_{i})(k)\mid\max\{0,j-\epsilon+1\}\leq k \leq\min\{N,j+\epsilon-1\}\right\}\]
_where \(\textit{val}(e_{i})(j)\) is defined as follows. For the base cases:_
\[\textit{val}(c)(j) =\left\{c\right\}\] \[\textit{val}(t_{i})(j) =\left\{\tau_{i}(k)\mid\max\{0,j-\epsilon+1\}\leq k\leq\min\{N,j+ \epsilon-1\}\right\}\] \[\textit{val}(s_{i})(j) =\alpha_{i}(j)\]
_For the inductive cases:_
\[\textit{val}\Big{(}f(e_{1},\cdots,e_{p})\Big{)}(j)=\Big{\{}f(e^{\prime}_{1},\cdots,e^{\prime}_{p})\mid e^{\prime}_{1}\in\textit{val}(e_{1})(j),\cdots,e^{\prime}_{p}\in\textit{val}(e_{p})(j)\Big{\}}\]
\[\textit{val}\Big{(}\texttt{ite}(b,e_{1},e_{2})\Big{)}(j)=\begin{cases}\textit{val}(e_{1})(j)&\text{if }\texttt{true}\in\textit{val}(b)(j)\\ \textit{val}(e_{2})(j)&\text{if }\texttt{false}\in\textit{val}(b)(j)\end{cases}\]
\[\textit{val}(e[k,c])(j)=\begin{cases}\textit{val}(e)(j+k)&\text{if }0\leq j+k\leq N\\ \{c\}&\text{otherwise}\end{cases}\]
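As a small executable illustration of Definition 6, the sketch below (Python; the stream prefixes are chosen to be consistent with the Fig. 1 discussion in Section I, and everything else is made up) reproduces the set \(\{4,6,8\}\) obtained for \(\mathit{sum}\) at time \(1\) with \(\epsilon=2\):

```python
EPS = 2                    # clock-synchronization constant, as in the Fig. 1 example
x = [3, 3, 5]              # prefixes consistent with the description of Fig. 1
y = [1, 1, 3]              # (values beyond this window are not needed here)
N = len(x) - 1

def val_input(tau, j):
    """val(t_i)(j) of Definition 6: all values the input may take at time j,
    given the bounded clock skew."""
    lo, hi = max(0, j - EPS + 1), min(N, j + EPS - 1)
    return {tau[k] for k in range(lo, hi + 1)}

def val_sum(j):
    """val(x + y)(j): lift the function pointwise over the value sets."""
    return {vx + vy for vx in val_input(x, j) for vy in val_input(y, j)}

print(val_sum(1))          # {4, 6, 8}, matching the discussion of Fig. 1
```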
**Example 1**: _Consider the Lola specification, \(\varphi\), over the independent boolean variables read and write:_
input read : bool
input write : bool
output countRead := ite(read, countRead[-1,0] + 1, countRead[-1,0])
output countWrite := ite(write, countWrite[-1,0] + 1, countWrite[-1,0])
output check := (countWrite - countRead) <= 2
In Fig. 2, we have two input stream _read_ and _write_ which denotes the time instances where the corresponding events take place. It can be imagined that _read_ and _write_ are streams of type boolean with true values at time instances \(4,6,7\) and \(2,3,5,6\) and false values at all other time instances respectively. We evaluate the above mentioned Lola specification considering a time synchronization constant, \(\epsilon=2\). The corresponding associated equations, \(\varphi_{\alpha}\), are:
\[\mathit{countRead}(j)=\begin{cases}\mathtt{ite}(\mathit{read}(j),1,0)&j=0\\ \mathtt{ite}\big{(}\mathit{read}(j),\mathit{countRead}(j-1)+1,\mathit{countRead}(j-1)\big{)}&j\in[1,N]\end{cases}\]
\[\mathit{countWrite}(j)=\begin{cases}\mathtt{ite}(\mathit{write}(j),1,0)&j=0\\ \mathtt{ite}\big{(}\mathit{write}(j),\mathit{countWrite}(j-1)+1,\mathit{countWrite}(j-1)\big{)}&j\in[1,N]\end{cases}\]
\[\mathit{check}(j)=\big{(}\mathit{countWrite}(j)-\mathit{countRead}(j)\big{)}\leq 2\]
Similar to the synchronous case, evaluation of the partially synchronous Lola specification involves creating the dependency graph.
**Definition 7**: _A dependency graph for a Lola specification, \(\varphi\) is a weighted directed multi-graph \(G=\langle V,E\rangle\), with vertex set \(V=\{s_{1},\cdots,s_{n},t_{1},\cdots,t_{m}\}\). An edge \(e:\langle s_{i},s_{k},w\rangle\) (resp. \(e:\langle s_{i},t_{k},w\rangle\)) labeled with a weight \(w=\{\omega\mid p-\epsilon<\omega<p+\epsilon\}\) is in \(E\) iff the equation for \(\alpha_{i}(j)\) contains \(\alpha_{k}(j+p)\) (resp. \(\tau_{k}(j+p)\)) as a sub-expression, for some \(j\) and offset \(p\)._
Intuitively, the dependency graph records that the evaluation of \(s_{i}\) at a particular position depends on the value of \(s_{k}\) (resp. \(t_{k}\)), with an offset in \(w\). It is to be noted that there can be more than one edge between a pair of vertices \((s_{i},s_{k})\) (resp. \((s_{i},t_{k})\)). Vertices labeled by \(t_{i}\) do not have any outgoing edges.
**Example 2**: _Consider the Lola specification over the independent integer variable a:_
input a : uint
output b1 := b2[1, 0] + ite(b2[-1,7] <= a[1, 0], b2[-2,0], 6)
output b2 := b1[-1,8]
Its dependency graph, shown in Fig. 3 for \(\epsilon=2\), has one edge from b1 to a with weight \(\{0,1,2\}\). Similarly, there are three edges from b1 to b2 with weights \(\{0,1,2\}\), \(\{-2,-1,0\}\), and \(\{-3,-2,-1\}\), and one edge from b2 to b1 with a weight of \(\{-2,-1,0\}\).
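The weight sets of Definition 7 are easy to compute from the syntactic offsets; a minimal sketch (Python; the tuple encoding of edges is an illustrative choice) that reproduces the edges of Example 2 for \(\epsilon=2\):

```python
EPS = 2   # clock-synchronization constant used in Example 2

def edge_weight(p, eps=EPS):
    """Weight set of Definition 7: w = { omega | p - eps < omega < p + eps }."""
    return set(range(p - eps + 1, p + eps))

# (source, target, offset p) pairs appearing in the Example 2 specification.
offsets = [("b1", "a", 1), ("b1", "b2", 1), ("b1", "b2", -1), ("b1", "b2", -2), ("b2", "b1", -1)]
edges = [(src, dst, edge_weight(p)) for src, dst, p in offsets]
for src, dst, w in edges:
    print(f"{src} -> {dst}: {sorted(w)}")
# b1 -> a: [0, 1, 2]; b1 -> b2: [0, 1, 2], [-2, -1, 0], [-3, -2, -1]; b2 -> b1: [-2, -1, 0]
```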
Given a set of partially synchronous input streams \(\{\alpha_{1},\alpha_{2},\cdots,\alpha_{|\mathcal{A}|}\}\) of respective type \(\mathbb{T}=\{\mathsf{T}_{1},\mathsf{T}_{2},\cdots,\mathsf{T}_{|\mathcal{A}|}\}\) and a Lola specification, \(\varphi\), the evaluation of \(\varphi\) is given by
\[(\alpha_{1},\alpha_{2},\cdots,\alpha_{|\mathcal{A}|})\models_{PS}\varphi\]
where, \(\models_{PS}\) denotes the partially synchronous evaluation.
## IV Decentralized Monitoring Architecture
### _Overall Picture_
We consider a decentralized online monitoring system comprising a fixed number of \(|\mathcal{M}|\) reliable monitor processes \(\mathcal{M}=\{M_{1},M_{2},\cdots,M_{|\mathcal{M}|}\}\) that can communicate with each other by sending and receiving messages through complete point-to-point bidirectional communication links. Each communication link is also assumed to be reliable, i.e., there is no loss or alteration of messages. Similar to the distributed system under observation, we assume the clocks on the individual monitors are asynchronous, with clock synchronization constant \(\epsilon_{M}\).
Throughout this section we assume that the global distributed stream consisting of complete observations of \(|\mathcal{A}|\) streams is only partially visible to each monitor. Each monitor process locally executes an identical sequential algorithm which consists of the following steps (we will generalize this approach in Section VII). In other words, an evaluation iteration of each monitor consists of the following steps:
1. Reads a subset of the events in \(\mathcal{E}\) (those visible to \(M_{i}\)), along with the corresponding time and valuation of the events, which results in the construction of a _partial distributed stream_;
Fig. 3: Dependency Graph Example
Fig. 2: Partially Synchronous Lola Example
2. Each monitor evaluates the Lola specification \(\varphi\) given the partial distributed stream;
3. Every monitor broadcasts a message containing rewritten associated equations of \(\varphi\), denoted \(LS\), and
4. Based on the messages received containing associated equations, each monitor amalgamates the observations of all the monitors to compose a set of associated equations. After an evaluation iteration, each monitor will have the same set of associated equations to be evaluated on the upcoming distributed stream.
The message sent from monitor \(M_{i}\) at time \(\pi\) to another monitor \(M_{j}\), for all \(i,j\in[1,|\mathcal{M}|]\), during an evaluation iteration of the monitor is assumed to reach its destination at the latest by time \(\pi+\epsilon_{M}\). Thus, the length of an _evaluation iteration_ \(k\) can be adjusted to make sure the messages from all other monitors arrive before the start of the next evaluation iteration.
### _Detailed Description_
We now explain in detail the computation model (see Algorithm 1). Each monitor process \(M_{i}\in\mathcal{M}\), where \(i\in[1,|\mathcal{M}|]\), attempts to read \(e\in\mathcal{E}\), given the distributed stream, \((\mathcal{E},\leadsto)\). An event can either be observable, or not observable. Due to distribution, this results in obtaining a partial distributed stream \((\mathcal{E}_{i},\leadsto)\) defined below.
**Definition 8**: _Let \((\mathcal{E},\leadsto)\) be a distributed stream. We say that \((\mathcal{E}^{\prime},\leadsto)\) is a partial distributed stream for \((\mathcal{E},\leadsto)\) and denote it by \((\mathcal{E}^{\prime},\leadsto)\sqsubseteq(\mathcal{E},\leadsto)\) iff \(\mathcal{E}^{\prime}\subseteq\mathcal{E}\) (the happened before relation is obviously preserved). \(\blacksquare\)_
We now tie partial distributed streams to a set of decentralized monitors and the fact that decentralized monitors can only partially observe a distributed stream. First, all unobserved events are replaced by \(\natural\), i.e., for all \(\alpha_{i}(\sigma)\in\mathcal{E}\), if \(\alpha_{i}(\sigma)\not\in\mathcal{E}_{i}\), then \(\mathcal{E}_{i}=\mathcal{E}_{i}\cup\{\alpha_{i}(\sigma)=\natural\}\).
**Definition 9**: _Let \((\mathcal{E},\leadsto)\) be a distributed stream and \(\mathcal{M}=\{M_{1},M_{2},\cdots,M_{|\mathcal{M}|}\}\) be a set of monitors, where each monitor \(M_{i}\), for \(i\in[1,|\mathcal{M}|]\) is associated with a partial distributed stream \((\mathcal{E}_{i},\leadsto)\sqsubseteq(\mathcal{E},\leadsto)\). We say that these monitor observations are consistent if_
* \(\forall e\in\mathcal{E}.\exists i\in[1,|\mathcal{M}|].e\in\mathcal{E}_{i}\)_, and_
* \(\forall e\in\mathcal{E}_{i}.\forall e^{\prime}\in\mathcal{E}_{j}.(e=e^{ \prime}\wedge e\neq\natural)\oplus\left((e=\natural\lor e^{\prime}=\natural) \right)\)_,_
_where \(\oplus\) denoted the exclusive-or operator._
In a partially synchronous system, there are different orderings of events, and each unique ordering of events might evaluate to different values. Given a distributed stream \((\mathcal{E},\leadsto)\), a sequence of consistent cuts is of the form \(\mathcal{C}_{0}\mathcal{C}_{1}\mathcal{C}_{2}\cdots\mathcal{C}_{N}\), where for all \(i\geq 0\): (1) \(\mathcal{C}_{i}\subseteq\mathcal{E}\), and (2) \(\mathcal{C}_{i}\subseteq\mathcal{C}_{i+1}\).
Given the semantics of partially synchronous Lola, evaluation of output stream variable \(s_{i}\) at time instance \(j\) requires events \(\alpha_{i}(k)\), where \(i\in[1,|\mathcal{A}|]\) and \(k\in\left\{\pi\mid\max\{0,j-\epsilon+1\}\leq\pi\leq\min\{N,j+\epsilon-1\}\right\}\). To translate monitoring of a distributed stream to a synchronous stream, we make sure that the events in the frontier of a consistent cut \(\mathcal{C}_{j}\) are \(\alpha_{i}(k)\).
Let \(\mathbb{C}\) denote the set of all valid sequences of consistent cuts. We define the set of all synchronous streams of \((\mathcal{E},\leadsto)\) as follows:
\[\mathsf{Sr}(\mathcal{E},\leadsto)=\left\{\mathsf{front}(\mathcal{C}_{0}) \mathsf{front}(\mathcal{C}_{1})\cdots\mid\mathcal{C}_{0}\mathcal{C}_{1}\cdots \in\mathbb{C}\right\}\]
Intuitively, \(\mathsf{Sr}(\mathcal{E},\leadsto)\) can be interpreted as the set of all possible "interleavings". The evaluation of the Lola specification, \(\varphi\), with respect to \((\mathcal{E},\leadsto)\) is the following :
\[\big{[}(\mathcal{E},\leadsto)\models_{PS}\varphi\big{]}=\big{\{}(\alpha_{1},\cdots,\alpha_{n})\models_{S}\varphi\mid(\alpha_{1},\cdots,\alpha_{n})\in\mathsf{Sr}(\mathcal{E},\leadsto)\big{\}}\]
This means that evaluating a partially synchronous distributed stream with respect to a Lola specification results in a set of evaluated results, as the computation may involve several streams. This also enables reducing the problem from the evaluation of a partially synchronous distributed system to the evaluation of multiple synchronous streams, each evaluating to unique values for the output stream, with message complexity bounded above by
\[O\big{(}\epsilon^{|\mathcal{A}|}N|\mathcal{M}|^{2}\big{)}\]
and below by
\[\Omega\big{(}N|\mathcal{M}|^{2}\big{)}.\]
### _Problem Statement_
The overall problem statement requires that, upon the termination of Algorithm 1, the verdict of all the monitors in the decentralized monitoring architecture is the same as that of a centralized monitor which has the global view of the system:
\[\forall i\in[1,|\mathcal{M}|]:\mathsf{Result}_{i}=\left[(\mathcal{E},\leadsto)\models_{PS}\varphi\right]\]
where \((\mathcal{E},\leadsto)\) is the global distributed stream and \(\varphi\) is the Lola specification with \(\mathsf{Result}_{i}\) as the evaluated result by monitor \(M_{i}\).
## V Calculating \(LS\)
In this section, we introduce the rules for rewriting Lola associated equations given the evaluated results and observations of the system. In our distributed setting, evaluation of a Lola specification involves generating a set of synchronous streams and evaluating the given Lola specification on it (explained in Section VI). Here, we make use of the evaluation of the Lola specification to form the local observation to be shared with other monitors in the system.
Given the set of synchronous streams \((\alpha_{1},\alpha_{2},\cdots,\alpha_{|\mathcal{A}|})\), the symbolic locally computed result \(LS\) (see Algorithm 1) consists of associated Lola equations which either need more information (data that was unobserved) from other monitors to be evaluated, or for which the concerned monitor needs to wait (positive offset). In either case, the associated Lola equations are shared with all other monitors in the system, as the missing data may have been observed by other monitors. We divide the rewriting rules into three cases, depending upon the observability of the values of the variables required for evaluating the expression \(e_{i}\), for all \(i\in[1,n]\). Each stream expression is categorized into one of three cases: (1) completely observed, (2) completely unobserved, or (3) partially observed. This can be done easily by going over the dependency graph and checking against the partial distributed stream read by the corresponding monitor.
**Case 1 (Completely Observed).** Formally, a completely observed stream expression \(s_{i}\) can be identified from the dependency graph, \(G=\langle V,E\rangle\), as follows: for all \(s_{k}\) (resp. \(t_{k}\)) with \(\langle s_{i},s_{k},w\rangle\in E\) (resp. \(\langle s_{i},t_{k},w\rangle\in E\)), \(s_{k}(j+w)\neq\natural\) (resp. \(t_{k}(j+w)\neq\natural\)) is observed for time instance \(j\). This signifies that all independent and dependent variables required to evaluate \(s_{i}(j)\) are observed by the monitor \(M\), thereby allowing the evaluation \(s_{i}(j)=e_{i}(s_{1},\cdots,s_{n},t_{1},\cdots,t_{m})\) and the addition of \(s_{i}(j)\) to \(LS\).
**Case 2 (Completely Unobserved).** Formally, we identify a completely unobserved stream expression \(s_{i}\) from the dependency graph, \(G=\langle V,E\rangle\), as follows: for all \(s_{k}\) (resp. \(t_{k}\)) with \(\langle s_{i},s_{k},w\rangle\in E\) (resp. \(\langle s_{i},t_{k},w\rangle\in E\)), \(s_{k}(j+w)=\natural\) (resp. \(t_{k}(j+w)=\natural\)) is unobserved for time instance \(j\). This signifies that the valuation of none of the variables is known to the monitor \(M\). Thus, we rewrite the following stream expressions
\[s^{\prime}_{k}(j) =\begin{cases}s_{k}(j+w)&0\leq j+w\leq N\\ \texttt{default}&\text{otherwise}\\ \end{cases}\] \[t^{\prime}_{k}(j) =\begin{cases}t_{k}(j+w)&0\leq j+w\leq N\\ \texttt{default}&\text{otherwise}\\ \end{cases}\]
for all \(\langle s_{i},s_{k},w\rangle\in E\) and \(\langle s_{i},t_{k},w\rangle\in E\), and include the rewritten associated equation for evaluating \(s_{i}(j)\) as
\[s_{i}(j)=e_{i}(s^{\prime}_{1},\cdots,s^{\prime}_{n},t^{\prime}_{1},\cdots,t^{ \prime}_{m})\]
It is to be noted that the default value of a stream variable, \(s_{k}\) (resp. \(t_{k}\)), depends on the corresponding type \(\mathsf{T}_{k}\) (resp. \(\mathsf{T}_{m+k}\)) of the stream.
**Case 3 (Partially Observed).** Formally, we identify a partially observed stream expression \(s_{i}\) from the dependency graph, \(G=\langle V,E\rangle\), as follows: among all \(s_{k}\) (resp. \(t_{k}\)), some are observed and some are unobserved for time instance \(j\). In other words, we can form a set \(\mathbb{V}_{o}=\{s_{k}\mid s_{k}(j+w)\neq\natural\}\) of all observed dependent stream variables and a set \(\mathbb{V}_{u}=\{s_{k}\mid s_{k}(j+w)=\natural\}\) of all unobserved dependent stream variables, for all \(\langle s_{i},s_{k},w\rangle\in E\). The sets can be expanded to include independent variables as well. All \(s_{k}\in\mathbb{V}_{u}\) (resp. \(t_{k}\in\mathbb{V}_{u}\)) that are unobserved are replaced by:
\[s^{u}_{k}(j) =\begin{cases}s_{k}(j+w)&0\leq j+w\leq N\\ \texttt{default}&\text{otherwise}\\ \end{cases}\] \[t^{u}_{k}(j) =\begin{cases}t_{k}(j+w)&0\leq j+w\leq N\\ \texttt{default}&\text{otherwise}\\ \end{cases}\]
and all \(s_{k}\in\mathbb{V}_{o}\) (resp. \(t_{k}\in\mathbb{V}_{o}\)) that are observed are replaced by:
\[s^{o}_{k}(j+w) =\texttt{value}\] \[t^{o}_{k}(j+w) =\texttt{value}\]
thereby partially evaluating \(s_{i}(j)\) as
\[s_{i}(j)=e_{i}(s^{o}_{1},\cdots,s^{o}_{n},t^{o}_{1},\cdots,t^{o}_{m},s^{u}_{1 },\cdots,s^{u}_{n},t^{u}_{1},\cdots,t^{u}_{m})\]
followed by adding the partially evaluated associated equation for \(s_{i}(j)\) to \(LS\). It is to be noted that a consistent partial distributed stream ensures that each \(s_{k}\) (resp. \(t_{k}\)) is either observed or unobserved, and not both or neither.
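A small Python sketch of the three cases is given below: the dependencies of an output instance are checked against a monitor's partial stream and either a value or a rewritten (partial) equation is added to \(LS\). The dependency lists, the partial stream, the default value, and the toy expression that simply sums its dependencies (standing in for \(e_{i}\)) are illustrative assumptions, not the tool's implementation.

```python
N = 6
default = 0

def rewrite(var, j, deps, partial_stream, LS):
    """deps: list of (name, offset) edges of the dependency graph for `var`."""
    terms = {}
    unobserved = 0
    for name, w in deps:
        t = j + w
        if not (0 <= t <= N):                 # out of range -> default value
            terms[(name, w)] = default
        elif (name, t) in partial_stream:     # observed -> substitute the value
            terms[(name, w)] = partial_stream[(name, t)]
        else:                                 # unobserved -> keep symbolic
            terms[(name, w)] = f"{name}({t})"
            unobserved += 1
    # fully observed: evaluate; otherwise store the rewritten (partial) equation
    LS[f"{var}({j})"] = terms if unobserved else sum(terms.values())

LS = {}
observed = {("a", 1): 1, ("a", 3): 5, ("b", 2): 5, ("b", 3): 9}   # a monitor's partial stream
rewrite("c", 2, [("a", -1), ("b", +1)], observed, LS)   # completely observed -> a value
rewrite("c", 3, [("a", -1), ("b", +1)], observed, LS)   # partially observed -> rewritten equation
print(LS)
```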
**Example 3**: _Consider the Lola specification mentioned below and the stream input of length \(N=6\) divided into two evaluation rounds and \(\epsilon=2\) as shown in Fig. 4 with the monitors \(M_{1}\) and \(M_{2}\)._
The associated equation for the output stream is:
\[c=\begin{cases}\texttt{ite}(0\leq b(i+1),a(i+1),0)&i=1\\ \texttt{ite}(a(i-1)\leq b(i+1),a(i+1),b(i-1))&2\leq i\leq N-1\\ \texttt{ite}(a(i-1)\leq 0,0,b(i-1))&i=N\\ \end{cases}\]
Let the partial distributed stream read by monitor \(M_{1}\) include \(\{a,(1,1),(3,5)\},\{b,(2,5),(3,9)\}\) and the partial distributed stream read by monitor \(M_{2}\) include \(\{a,(1,1),(2,7)\},\{b,(1,3),(3,9)\}\). Monitor \(M_{1}\) evaluates \(c(2)=5\) and partially evaluates \(c(1)\) and \(c(3)\). Thus \(LS^{1}_{1}=\{c(1)=\texttt{ite}(0\leq b(2),a(2),0),c(2)=a(3),c(3)=\texttt{ite}(7\leq b(4),a(4),b(2))\}\).
Let the partial distributed stream read by monitor \(M_{1}\) include \(\{a,(4,4),(5,4)\},\{b,(4,3),(6,1)\}\) and the partial distributed stream read by monitor \(M_{2}\) include \(\{a,(5,4),(6,7)\},\{b,(4,3),(5,5)\}\). Monitor \(M_{1}\) evaluates \(c(4)=9\) and \(c(5)=3\) and partially evaluates \(c(6)\). Thus \(LS^{1}_{2}=\{c(4)=9,c(5)=3,c(6)=b(5)\}\). Monitor \(M_{2}\) evaluates \(c(6)=5\) and partially evaluates \(c(4)\) and \(c(5)\) and thus \(LS^{2}_{2}=\{c(4)=\texttt{ite}(a(3)\leq 5,4,9),c(5)=\texttt{ite}(a(4)\leq b(6),7,3),c(6)=5\}\).
It is to be noted that after the first round of evaluation, the corresponding local states \(LS^{1}_{1}\) and \(LS^{2}_{1}\) will be shared, which will enable evaluating a few of the partially evaluated output stream expressions (this will be discussed in Section VII-A). These will be included in the local state of the following evaluation round.
Fig. 4: Example of generating \(LS\)
Note that generating \(LS\) assumes an ordered stream, one where the times of occurrence of events and their values are comparable. Generating the same for the distributed stream involves generating it for all possible orderings of events. This will be discussed in detail in the following sections.
## VI SMT-based Solution
### _SMT Entities_
SMT entities represent (1) Lola equations, and (2) variables used to represent the distributed stream. Once we have generated a sequence of consistent cuts, we use the rules discussed in Section V to construct the set of all locally computed or partially computed Lola equations.
**Distributed Stream.** In our SMT encoding, the set of events, \(\mathcal{E}\), is represented by a bit vector, where each bit corresponds to an individual event in the distributed stream, \((\mathcal{E},\leadsto)\). The length of the stream segment under observation is \(k\), which makes \(|\mathcal{E}|=k\times|\mathcal{A}|\), while the length of the entire stream is \(N\). We conduct a pre-processing of the distributed stream where we create a \(|\mathcal{E}|\times|\mathcal{E}|\) matrix, hbSet, to incorporate the happened-before relations. We populate hbSet as hbSet[e][f] = 1 iff \(e\leadsto f\), and hbSet[e][f] = 0 otherwise. In order to map each event to its respective stream, we introduce a function, \(\mu:\mathcal{E}\rightarrow\mathcal{A}\).
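A minimal sketch of this pre-processing is shown below; the cross-stream test used for \(\leadsto\) (two events are ordered once their local timestamps differ by at least \(\epsilon\)) is an assumption standing in for the relation defined earlier, and all names and sizes are illustrative.

```python
EPS = 3

# one event per stream per local time instance: (stream, local time)
streams = ["a", "b", "c"]
k = 4                                         # length of the segment under observation
E = [(s, t) for s in streams for t in range(k)]
idx = {e: i for i, e in enumerate(E)}         # bit position of each event

def happened_before(e, f):
    (s1, t1), (s2, t2) = e, f
    if s1 == s2:
        return t1 < t2                        # same stream: local order
    return t1 + EPS <= t2                     # across streams: beyond the skew window (assumption)

hbSet = [[1 if happened_before(e, f) else 0 for f in E] for e in E]
mu = {idx[e]: e[0] for e in E}                # maps an event's index back to its stream
print(sum(map(sum, hbSet)), "happened-before pairs recorded")
```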
We introduce a valuation function, \(val:\mathcal{E}\rightarrow\mathsf{T}\) (whatever the type is in the Lola specification), in order to represent the values of the individual events. Due to the partially synchronous assumption on the system, the possible time of occurrence of an event is defined by a function \(\delta:\mathcal{E}\rightarrow\mathbb{Z}_{\geq 0}\), where \(\forall\alpha(\sigma)\in\mathcal{E}.\exists\sigma^{\prime}\in[\max\{0,\sigma-\epsilon+1\},\min\{\sigma+\epsilon-1,N\}].\delta\big{(}\alpha(\sigma)\big{)}=\sigma^{\prime}\). We update the \(\delta\) function when referring to events on output streams by updating the time synchronization constant to \(\epsilon_{M}\). This accounts for the clock skew between two monitors. Finally, we introduce an uninterpreted function \(\rho:\mathbb{Z}_{\geq 0}\rightarrow 2^{\mathcal{E}}\) that identifies a sequence of consistent cuts for computing all possible evaluations of the Lola specification, while satisfying a number of given constraints explained in Section VI-B.
### _SMT Constraints_
Once we have defined the necessary SMT entities, we move onto the SMT constraints. We first define the SMT constraints for generating a sequence of consistent cuts, followed by the ones for evaluating the given Lola equations \(\varphi_{\alpha}\).
**Constraints for consistent cuts over \(\rho\):** In order to make sure that the uninterpreted function \(\rho\) identifies a sequence of consistent cuts, we enforce certain constraints. The first constraint enforces that each element in the range of \(\rho\) is in fact a consistent cut:
\[\forall i\in[0,k].\forall e,e^{\prime}\in\mathcal{E}.\Big{(}(e\leadsto e^{ \prime})\wedge(e^{\prime}\in\rho(i))\Big{)}\rightarrow(e\in\rho(i))\]
Next, we enforce that each successive consistent cut consists of all events included in the previous consistent cut:
\[\forall i\in[0,k-1].\rho(i)\subseteq\rho(i+1)\]
Next, we make sure that the front of each consistent cut consists of events whose possible times of occurrence are in accordance with the semantics of partially-synchronous Lola:
\[\forall i\in[0,k].\forall e\in\mathsf{front}(\rho(i)).\delta(e)=i\]
Finally, we make sure that every consistent cut consists of events from all streams:
\[\forall i\in[0,k].\forall\alpha\in\mathcal{A}.\exists e\in\mathsf{front}(\rho (i)).\mu(e)=\alpha\]
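A hedged sketch of these constraints in Z3's Python API is given below. Cut membership is modelled with Boolean variables instead of the uninterpreted function \(\rho\), and the frontier-time constraint \(\delta(e)=i\) is omitted for brevity; the event indices, happened-before pairs, and parameter values are illustrative.

```python
from z3 import Bool, Int, Solver, Implies, Or, sat

n_events, n_cuts, EPS, N = 6, 3, 2, 5
hb = [(0, 1), (2, 3), (4, 5)]                  # e ~> f pairs (here: per-stream order only)
stream_of = [0, 0, 1, 1, 2, 2]                 # mu: event index -> stream
local_time = [0, 1, 0, 1, 0, 1]

in_cut = [[Bool(f"in_{i}_{e}") for e in range(n_events)] for i in range(n_cuts)]
delta = [Int(f"delta_{e}") for e in range(n_events)]
s = Solver()

for e in range(n_events):                      # delta stays inside the epsilon window
    s.add(delta[e] >= max(0, local_time[e] - EPS + 1))
    s.add(delta[e] <= min(N, local_time[e] + EPS - 1))

for i in range(n_cuts):
    for (e, f) in hb:                          # each cut is downward closed under ~>
        s.add(Implies(in_cut[i][f], in_cut[i][e]))
    for st in set(stream_of):                  # each cut contains events from every stream
        s.add(Or([in_cut[i][e] for e in range(n_events) if stream_of[e] == st]))

for i in range(n_cuts - 1):                    # successive cuts only grow
    for e in range(n_events):
        s.add(Implies(in_cut[i][e], in_cut[i + 1][e]))

print(s.check() == sat)                        # a sequence of consistent cuts exists
```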
**Constraints for the Lola specification:** These constraints evaluate the Lola specification and make sure that \(\rho\) not only represents a valid sequence of consistent cuts but also that this sequence of consistent cuts evaluates the Lola equations, given the stream expressions. As is evident, a distributed stream can often evaluate to multiple values at each instance of time. Thus, we need to check for both satisfaction and violation of logical expressions and evaluate all possible values for arithmetic expressions. Note that monitoring any Lola specification can be reduced to evaluating expressions that are either logical or arithmetic. Below, we mention the SMT constraints for evaluating the different Lola equations at time instance \(j\):
\[t_{i}[p,c]=\begin{cases}val(e)&0\leq j+p\leq N\\ c&\text{otherwise}\end{cases}\]
\[s_{i}(j)=\mathsf{true}\iff\mathsf{front}(\rho(j))\models\varphi_{\alpha}\quad\text{(logical expression, satisfaction)}\]
\[s_{i}(j)=e_{i}\big{(}\forall e\in\mathsf{front}(\rho(j)).\,val(e)\big{)}\quad\text{(arithmetic expression, evaluation)}\]
The previously evaluated result is included in the SMT instance as an entity, and an additional constraint is added that forces the specification to evaluate to a new unique value, in order to generate all possible evaluations. The SMT instance returns a satisfiable result iff there exists at least one new unique evaluation of the equation. This is repeated until we are unable to generate a sequence of consistent cuts that satisfies the constraints, i.e., that yields a new unique value. It is to be noted that stream expressions of the form \(\mathtt{ite}(s_{i},s_{k},s_{j})\) can be reduced to a set of expressions where we first evaluate \(s_{i}\) as a logical expression, followed by evaluating \(s_{j}\) and \(s_{k}\) accordingly.
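The enumeration loop can be sketched as follows: after every satisfiable check, the value found for the output instance is blocked and the solver is re-run, so that all unique evaluations are collected. The helper `encode_instance` and the toy instance at the bottom are hypothetical stand-ins for the entities and constraints above.

```python
from z3 import Solver, Int, Bool, If, sat

def all_evaluations(encode_instance, out_name="s_i_j"):
    s = Solver()
    out = Int(out_name)
    encode_instance(s, out)                 # adds cut + Lola constraints over `out`
    values = set()
    while s.check() == sat:
        v = s.model()[out].as_long()
        values.add(v)
        s.add(out != v)                     # blocking constraint: demand a new unique value
    return values

# toy instance: the output is 3 or 5 depending on an unconstrained ordering choice
print(all_evaluations(lambda s, out: s.add(out == If(Bool("order"), 3, 5))))  # {3, 5}
```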
## VII Runtime Verification of Lola specifications
Now that both the rules of generating rewritten Lola equations (Section V) and the working of the SMT encoding (Section VI) have been discussed, we can finally bring them together in order to solve the problem introduced in Section IV.
### _Computing \(LC\)_
Given the set of local states computed from the SMT encoding, each monitor process receives a set of rewritten Lola associated equations, denoted by \(LS_{j}^{i}\), where \(i\in[1,|\mathcal{M}|]\), for the \(j\)-th computation round. Our idea to compute \(LC\) from these sets is to simply take a prioritized union of all the associated equations.
\[LC(\Pi_{j}^{i})=\biguplus_{i\in[1,|\mathcal{M}|]}LS_{j}^{i}\]
The intuition behind the priority is that an evaluated Lola equation takes precedence over a partially evaluated/unevaluated Lola equation, and two partially evaluated Lola equations are combined to form an evaluated or partially evaluated Lola equation. For example, taking the locally computed \(LS_{1}^{1}\) and \(LS_{1}^{2}\) from Example 3, \(LC(LS_{1}^{1},LS_{1}^{2})\) is computed to be \(\{c(1)=a(2),c(2)=5,c(3)=\mathtt{ite}(7\leq b(4),a(4),5)\}\) at Monitor \(M_{1}\) and \(\{c(1)=7,c(2)=5,c(3)=\mathtt{ite}(7\leq b(4),a(4),5)\}\) at Monitor \(M_{2}\). Subsequently, \(LC(LS_{2}^{1},LS_{2}^{2})\) is computed to be \(\{c(4)=9,c(5)=3,c(6)=5\}\) at Monitor \(M_{1}\) and \(\{c(4)=9,c(5)=3,c(6)=5\}\) at Monitor \(M_{2}\).
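A minimal sketch of the prioritized union is given below; representing partially evaluated equations as strings and leaving the substitution-based combination of two partial entries abstract are simplifying assumptions.

```python
def prioritized_union(*local_states):
    merged = {}
    for LS in local_states:
        for key, entry in LS.items():
            evaluated = not isinstance(entry, str)     # str = still symbolic (assumption)
            if key not in merged:
                merged[key] = entry
            elif evaluated and isinstance(merged[key], str):
                merged[key] = entry                    # a concrete value takes precedence
            # two partial entries would be combined by substitution (omitted here)
    return merged

LS1 = {"c(4)": 9, "c(5)": 3, "c(6)": "b(5)"}
LS2 = {"c(4)": "ite(a(3)<=5,4,9)", "c(5)": "ite(a(4)<=b(6),7,3)", "c(6)": 5}
print(prioritized_union(LS1, LS2))   # -> {'c(4)': 9, 'c(5)': 3, 'c(6)': 5}
```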
### _Bringing it all Together_
As stated in Section IV-A, the monitors are decentralized and online. Since setting up an SMT instance is costly (as seen in our evaluation results in Section VIII), we often find it more efficient to evaluate the Lola specification after every \(k\) time instances. This reduces the number of computation rounds to \(\lceil N/k\rceil\), as well as the number of messages transmitted over the network, at the cost of an increase in the size of the messages. We update Algorithm 1 to Algorithm 2 to reflect our solution more closely.
Each evaluation round starts by reading the \(r\)-th partial distributed stream, which consists of events occurring between time \(\max\{0,(r-1)\times k\}\) and \(\min\{N,r\times k\}\) (line 3). We assume that the partial distributed stream is consistent, in accordance with the assumption that each event has been read by at least one monitor. To account for any concurrency between the events of the \((r-1)\)-th computation round and those of the \(r\)-th computation round, we expand the segment by \(\epsilon\) time instances, thereby making the \(r\)-th computation round span from \(\max\{0,(r-1)\times k-\epsilon+1\}\) to \(\min\{N,r\times k\}\).
Next, we reduce the evaluation of the distributed stream to an SMT problem (line 7). We represent the distributed stream using SMT entities and then, with the help of SMT constraints, we evaluate the Lola specification on the generated sequence of consistent cuts. Each sequence of consistent cuts presents a unique ordering of the events, which evaluates to a unique value for the stream expression (line 8). This is repeated until we can no longer generate a sequence of consistent cuts that evaluates \(\varphi_{\alpha}\) to a new unique value (line 9). Both the evaluated and the partially evaluated results are included in \(LS\) as associated Lola equations. This is followed by the communication phase, where each monitor shares its locally computed \(LS_{r}^{i}\), for all \(i\in[1,|\mathcal{M}|]\) and evaluation round \(r\) (lines 10-11).
Once the local states of all the monitors are received, we take a prioritized union of all the associated equations and include them in the \(LS_{r+1}^{i}\) set of associated equations (line 12). Following this, the computation shifts to the next computation round and the above-mentioned steps repeat. Once we reach the end of the computation, all the evaluated values are contained in \(\mathsf{Result}^{i}\).
**Lemma 1**: _Let \(\mathcal{A}=\{S_{1},S_{2},\cdots,S_{n}\}\) be a distributed system and \(\varphi\) be a Lola specification. Algorithm 1 terminates when monitoring a terminating distributed system._
**Theorem 1**: _Algorithm 2 solves the problem stated in Section IV._
**Theorem 2**: _Let \(\varphi\) be a Lola specification and \((\mathcal{E},\rightsquigarrow)\) be a distributed stream consisting of \(|\mathcal{A}|\) streams. The message complexity of Algorithm 2 with \(|\mathcal{M}|\) monitors is_
\[O\big{(}\epsilon^{|\mathcal{A}|}N|\mathcal{M}|^{2}\big{)}\ \ \ \Omega(N| \mathcal{M}|^{2})\]
## VIII Case Study and Evaluation
In this section, we analyze our SMT-based decentralized monitoring solution. We note that we are not concerned with data collection, data transfer, etc., since in a distributed setting the runtime of the actual SMT encoding is the dominating aspect of the monitoring process. We evaluate our proposed solution using traces collected from synthetic experiments (Section VIII-A) and case studies involving several industrial control systems and the RACE dataset (Section VIII-B). The implementation of our approach can be found on Google Drive ([https://tinyurl.com/2p6ddjnr](https://tinyurl.com/2p6ddjnr)).
### _Synthetic Experiments_
#### VIII-A1 Setup
Each experiment consists of two stages: (1) generation of the distributed stream and (2) verification. For data generation, we develop a synthetic program that randomly generates a distributed stream (i.e., the state of the local computation for a set of streams). We assume that streams are of the type Float, Integer or Boolean. For the streams of the type Float and Integer, the initial value is a random value s[0] and we generate the subsequent values by s[i-1] + N(0, 2), for all \(i\geq 1\). We also make sure that the value of a stream is always non-negative. On the other
hand, for streams of the type Boolean, we start with either true or false and then for the subsequent values, we stay at the same value or alter using a Bernoulli distribution of \(B(0.8)\), where a true signifies the same value and a false denotes a change in value.
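A minimal Python sketch of this generator is given below, assuming a uniform random initial value and interpreting \(N(0,2)\) as a normal distribution with standard deviation 2; it is an illustration of the setup, not the exact script used.

```python
import random

def numeric_stream(length, seed=0):
    rng = random.Random(seed)
    values = [rng.uniform(0, 10)]                               # random initial value s[0] (assumption)
    for _ in range(1, length):
        values.append(max(0.0, values[-1] + rng.gauss(0, 2)))   # s[i-1] + N(0, 2), kept non-negative
    return values

def boolean_stream(length, seed=0):
    rng = random.Random(seed)
    values = [rng.random() < 0.5]
    for _ in range(1, length):
        stay = rng.random() < 0.8                               # Bernoulli(0.8): True keeps the value
        values.append(values[-1] if stay else not values[-1])
    return values

print(numeric_stream(5), boolean_stream(5))
```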
For the monitors, we study the approach using Bernoulli distributions \(B(0.2)\), \(B(0.5)\) and \(B(0.8)\) as the read distribution of the events. A higher read probability means that each event is read by a larger number of monitors. We also make sure that each event is read by at least one monitor, in accordance with the proposed approach. To test the approach with respect to different types of stream expressions, we use the following arithmetic and logical expressions.
```
input a1 : uint
input a2 : uint
output arithExp := a1 + a2
output logExp := (a1 > 2) && (a2 < 8)
```
#### VIII-A2 Result Analysis
We study different parameters and analyze how they affect the runtime and the message size in our approach. All experiments were conducted on a 2017 MacBook Pro with a 3.5GHz Dual-Core Intel Core i7 processor and 16GB of 2133 MHz LPDDR3 RAM. Unless specified otherwise, all experiments consider number of streams \(|\mathcal{A}|=3\), time synchronization constant \(\epsilon_{M}=\epsilon=3s\), number of monitors equal to the number of streams, computation length \(N=100\), \(k=3\), and a read distribution \(B(0.8)\).
**Time Synchronization Constant.** Increasing the value of the time synchronization constant \(\epsilon\) increases the number of possibly concurrent events that need to be considered. This increases the complexity of evaluating the Lola specification and thereby the runtime of the algorithm. In addition, a higher value of \(\epsilon\) corresponds to a higher number of possible streams that need to be considered. We observe in Fig. 5(a) that the runtime increases exponentially with the value of \(\epsilon\), as expected. An interesting observation is that with increasing values of \(k\), the runtime increases at a higher rate until it reaches the threshold where \(k=\epsilon\). This is due to the fact that the number of streams to be considered increases exponentially but ultimately gets bounded by the number of events present in the computation.
Increasing the value of the time synchronization constant is also directly proportional to the number of evaluated results at each instance of time. This is because each stream corresponds to a unique value being evaluated, until it gets bounded by the total number of possible evaluations, as can be seen in Fig. 6(a). However, comparing Figs. 5(a) and 6(a), we see that the runtime increases at a faster rate than the size of the message. This owes to the fact that initially an SMT instance evaluates unique values at all instances of time. However, as we start reaching all possible evaluations for certain instances of time, only a fraction of the time instances evaluates to new unique values. This is the reason behind the size of the message reaching its threshold faster than the runtime of the monitor.
**Type of Stream Expression.** Stream expressions can be divided into two major types, one consisting of arithmetic operations and the other involving logical operations. Arithmetic operations can evaluate to a number of values in the order of \(O(|\mathcal{A}|\cdot\epsilon)\), whereas logical operations can only evaluate to either true or false. When the monitors have high readability of the distributed stream, it is mostly the case that the monitor is able to evaluate the stream expression. Thus, we observe in Fig. 5(c) that the runtime grows exponentially for evaluating arithmetic expressions but is linear for logical expressions. However, with low readability of the computation, irrespective of the type of expression, both take exponential time, since neither can completely evaluate the stream expression and each monitor has to generate all possible streams.
Similarly, for high readability and logical expressions, the message size is constant, given that the monitor was able to evaluate the stream expression. However, with low readability, the message size for evaluating logical expressions matches that of its arithmetic counterpart. This can be seen in Fig. 6(c) and is due to the fact that, with low readability, complete evaluation of the expression is not possible at a monitor, which thus needs to send the rewritten expression with the observed values to the other monitors, where it will be evaluated.
**Number of Streams.** As the number of streams increases, the number of events increases linearly, thereby causing an exponential increase in the number of possible synchronous streams (due to interleavings). This can be seen in Fig. 5(b), where the runtime increases exponentially with the number of streams in the distributed stream. Similarly, in Fig. 6(b), an increase in the number of streams linearly affects the number of unique values that the Lola expression can evaluate to, thereby increasing the size of the message.
### _Case Studies: Decentralized ICS and Flight Control RV_
We put our runtime verification approach to the test on several industrial control system datasets, which include data generated by (1) a Secure Water Treatment plant (SWaT) [9], comprising six processes corresponding to different physical and control components; (2) a Power Distribution system [10] that includes readings from four phasor measurement units (PMUs) that measure the electric waves on an electric grid; and (3) a Gas Distribution system [11] that includes messages to and from the PLC. In these ICS, we monitor the correctness of system properties. Additionally, we monitor the mutual separation between all pairs of aircraft in the RACE [12] dataset, which consists of SBS messages from aircraft. For more details about each of the systems, along with the Lola specifications, refer to Appendix XI-C.
For our setting, we assume each component has its own asynchronous local clock, with varying time synchronization constants. Next, we discuss the results of verifying the different ICS with respect to Lola specifications.
**Result Analysis:** We employed the same number of monitors as the number of components for each of the ICS case studies, and divided the entire airspace into 9 regions with one monitor responsible for each. We observe that our approach does not report satisfaction of a system property when there has been an attack on the system in reality (no false negatives). However, due to the assumption of partial synchrony among the components, our approach may report false positives, i.e., it may report a violation of the system property even when there was no attack on the system. As can be seen in Fig. 7, with a decreasing time synchronization constant, the number of false positives reduces as well. This is due to the fact that with decreasing \(\epsilon\), fewer events are considered to be concurrent by the monitors. This makes the partial ordering of events as observed by the monitor closer to the actual ordering of events taking place in the system.
We get significantly better results for aircraft monitoring, with fewer false positives compared to the other datasets. This can be attributed to Air Traffic Controllers maintaining greater separation between two aircraft than the recommended minimum. As part of our monitoring of the other ICS, we report that our monitoring approach could successfully detect several attacks, including underflow and overflow of tanks and sudden changes in water quality in SWaT; it could also differentiate manual tripping of the breaker from the breaker being tripped due to a short circuit in the Power Distribution system, and detect single-point data injection in the Gas Distribution system.
## IX Related Work
Online predicate detection for both centralized and decentralized monitoring settings has been extensively studied in [13, 14]. Extensions to more expressive temporal operators are introduced in [15, 16]. The monitoring approaches introduced in [13, 15, 16] consider a fully asynchronous distributed system. An SMT-based predicate detection solution has been introduced in [17]. Runtime verification for _synchronous_ distributed systems has been studied in [18, 19, 20]. The assumption of a common global clock shared among all the components is a major shortcoming of this approach. Finally, fault-tolerant monitoring, where monitors can crash, has been investigated in [21] for asynchronous and in [22] for synchronized distributed processes.
Fig. 5: Impact of different parameters on runtime for synthetic data.
Fig. 6: Impact of different parameters on message size for synthetic data.
Fig. 7: False-Positives for ICS Case-Studies
Runtime verification of stream-based specifications was introduced in [23, 2], where the occurrence of events was assumed to be synchronous. To extend stream-based runtime verification to more complex systems, where the occurrence of events is asynchronous, real-time based logics were introduced in [24, 25, 26]. However, these methods fall short of verifying large, geographically separated distributed systems, due to their assumption regarding the presence of a shared global clock. On the contrary, we assume the presence of a clock synchronization algorithm that limits the maximum clock skew among components to a constant. This is a realistic assumption, since different components of a large industrial system have their own clocks and a skew between them is unavoidable. A similar SMT-based solution was studied for LTL and MTL specifications in [27, 28], respectively, which we extend to the more expressive stream-based specifications.
## X Conclusion
In this paper, we studied distributed runtime verification w.r.t. the popular stream-based specification language Lola. We propose an online decentralized monitoring approach where each monitor takes a set of associated Lola specifications and a partial distributed stream as input. By assuming partial synchrony among all streams and by reducing the verification problem to an SMT problem, we were able to reduce the complexity of our approach so that it is no longer dependent on the time synchronization constant. We also conducted extensive synthetic experiments, verified system properties of large Industrial Control Systems, and performed airspace monitoring of SBS messages. Compared to machine learning-based approaches for verifying the correctness of these systems, our approach was able to produce sound and correct results with deterministic guarantees. As a better practice, one can also use our RV approach along with machine learning-based approaches during training, or as a safety net when detecting system violations.
For future work, we plan to study monitoring of distributed systems where monitors themselves are vulnerable to faults such as crash and Byzantine faults. This will let us design a technique with faults and vulnerabilities mimicking a real life monitoring system and thereby expanding the reach and application of runtime verification on more real-life safety critical systems.
|
2308.07718
|
Global biasing using a Hardware-based artificial Zeeman term in Spinwave
Ising Machines
|
A spinwave Ising machine (SWIM) is a newly proposed type of time-multiplexed
hardware solver for combinatorial optimization that employs feedback coupling
and phase sensitive amplification to map an Ising Hamiltonian into
phase-binarized propagating spin-wave RF pulses in an Yttrium-Iron-Garnet (YIG)
film. In this work, we increase the mathematical complexity of the SWIM by
adding a global Zeeman term to a 4-spin MAX-CUT Hamiltonian using a continuous
external electrical signal with the same frequency as the spin pulses and phase
locked with one of the two possible states. We are able to induce
ferromagnetic ordering in both directions of the spin states despite
antiferromagnetic pairwise coupling. Embedding a planar antiferromagnetic spin
system in a magnetic field has been proven to increase the complexity of the
graph associated to its Hamiltonian and thus this straightforward
implementation helps explore higher degrees of complexity in this evolving
solver.
|
Victor H. González, Artem Litvinenko, Roman Khymyn, Johan Åkerman
|
2023-08-15T11:51:39Z
|
http://arxiv.org/abs/2308.07718v1
|
# Global biasing using a Hardware-based artificial Zeeman term in Spinwave Ising Machines
###### Abstract
A spinwave Ising machine (SWIM) is a newly proposed type of time-multiplexed hardware solver for combinatorial optimization that employs feedback coupling and phase sensitive amplification to map an Ising Hamiltonian into phase-binarized propagating spin-wave RF pulses in an Yttrium-Iron-Garnet (YIG) film. In this work, we increase the mathematical complexity of the SWIM by adding a global Zeeman term to a 4-spin MAX-CUT Hamiltonian using a continuous external electrical signal with the same frequency as the spin pulses and phase locked with one of the two possible states. We are able to induce ferromagnetic ordering in both directions of the spin states despite antiferromagnetic pairwise coupling. Embedding a planar antiferromagnetic spin system in a magnetic field has been proven to increase the complexity of the graph associated with its Hamiltonian, and thus this straightforward implementation helps explore higher degrees of complexity in this evolving solver.
combinatorial optimization problems, Ising machines, spinwaves, unconventional computing, physical computing.
In the landscape of physical computation schemes, Ising machines (IM) have attracted considerable attention and investment over the last decade, in both academic and industrial research [1, 2, 3, 4, 5, 6], for their applicability to combinatorial optimization problems, potential for scalability and progressive increase in mathematical complexity. In this work, we contribute to the latter as we implement a global bias to artificial spin states in a spinwave Ising machine (SWIM).
A SWIM is a newly proposed [2] time-multiplexed hardware solver circuit for problems in the NP (nondeterministic polynomial time) complexity class that employs feedback coupling and phase sensitive amplification to map an Ising Hamiltonian into spinwave (SW) RF pulses propagating in an Yttrium-Iron-Garnet (YIG) film. Spinwaves are suitable for Ising machines because of their GHz oscillation frequencies, which permit the development of multiphysical systems using cheap and efficient off-the-shelf microwave components for signal processing [7, 8] and amplification, resulting in a small circuitry footprint and high per-spin power efficiency. In this work, we use an external continuous microwave signal to implement a global bias on propagating spinwave RF artificial spin states. Exploring the complexity limits of this circuit implementation thus holds significant interest for technical and commercial applications.
An IM operates as a mapping of an objective function into the Ising Hamiltonian of a device composed of an array of \(N\) binarized physical units referred to as spins \(s_{i}=\pm 1\):
\[H=-\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}J_{ij}s_{i}s_{j}-\sum_{i=1}^{N}h_{i}s _{i} \tag{1}\]
The objective function associated with the NP problem to solve is encoded into the pairwise coupling \(J_{ij}\) and external Zeeman bias \(h_{i}\) such that the ground state of the system represents its solution. Combinatorial problems of practical use, such as the traveling salesman and knapsack with integer weights, have been shown to be encodable if one can add constraints to the degrees of freedom of the system [9]. Adding a global bias (i.e., \(h_{i}\) is identical for all \(s_{i}\)) to an antiferromagnetic planar graph has been shown to be a straightforward way to increase its mathematical complexity [10]. Implementation of a global Zeeman term using software has also been shown to improve the stability and performance of time-multiplexed IMs with frustrated lattices [11]. Thus, exploring hardware-based alternative schemes can lend versatility to small scale low-power devices.
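For intuition, the brute-force Python sketch below evaluates Eq. (1) on a 4-spin antiferromagnetic ring for a few illustrative values of a uniform bias \(h\): the ground state switches from the alternating (MAX-CUT) configuration to a fully aligned one once the bias is large enough, which is precisely the ordering change demonstrated experimentally in this work. The coupling and field values are illustrative, not device parameters.

```python
from itertools import product

def energy(spins, J, h):
    # Eq. (1): H = -1/2 * sum_ij J_ij s_i s_j - h * sum_i s_i
    n = len(spins)
    pair = -0.5 * sum(J[i][j] * spins[i] * spins[j] for i in range(n) for j in range(n))
    return pair - h * sum(spins)

n, J_afm = 4, -1.0                                   # negative J_ij = antiferromagnetic bond
J = [[0.0] * n for _ in range(n)]
for i in range(n):                                   # ring topology: spin i coupled to i+1
    J[i][(i + 1) % n] = J[(i + 1) % n][i] = J_afm

for h in (0.0, 1.0, 3.0):                            # illustrative bias values
    ground = min(product((-1, 1), repeat=n), key=lambda s: energy(s, J, h))
    print(f"h = {h:+.1f}: ground state {ground}, E = {energy(ground, J, h):+.1f}")
```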
Fig. 1: Zeeman-biased spinwave Ising machine. PSA and LNA stand for phase-sensitive and low noise amplifiers, respectively. The propagating RF pulses have a frequency of 3.13 GHz. The sign of the coupling is controlled by the total phase accumulation in the coupling delay, and the Zeeman field amplitude and sign are controlled by the amplitude and phase of the injected signal \(\omega_{bias}\).
The circuit of the SWIM with a hardware-implemented Zeeman term is shown in Fig. 1. The SWIM's construction of the artificial spin state relies on phase-sensitive amplification (PSA) of a reference signal \(\omega_{ref}=\)3.13 GHz. The PSA binarizes the phase of the signal by amplifying only its in-phase (\(\phi=0\)) and out-of-phase (\(\phi=\pi\)) components. The signal is then simultaneously injected into a YIG waveguide and a coaxial delay line using coupler C2. The electrical signal excites spin waves within the waveguide that propagate at a much slower speed, allowing the coupling delay to re-inject a shifted signal using coupler C1. Switch 1 (S1) then pulses the signal and a low noise amplifier (LNA) compensates propagation losses as the cycle starts again. The resulting time-multiplexed pulse train is our artificial spin state, where each pulse is an artificial spin, with its electrical amplitude representing the norm of \(s_{i}\) and its individual phase representing its sign. The coupling delay's length is such that every pulse interferes (or couples) with the previous nearest neighbor. Phase shifter PS ensures that the coupling term \(J_{i,i+1}\) is antiferromagnetic and variable attenuator VA controls its strength. The Zeeman term is implemented with an external signal \(\omega_{bias}\) applied to the propagating pulses after PSA.
The role of \(\omega_{bias}\) is to unbalance the potential landscape of the artificial spin states and favor one phase over the other. Fig.2(a) shows the changes in PSA amplitude (\(A_{PSA}\)) for different signs of h. The effective sign of the Zeeman term is given by its relative phase with respect to the reference signal, with negative \(h\) being in-phase and positive out-of-phase. The effective magnitude of \(h\) is given by the signals' amplitude, -15 dBm in this case. The amplification imbalance allows us to change the phase sensitivity of the circuit and globally bias the state of the spins. The consequences of the bias are shown in Fig.2(b) and (c). In (b), we observe that the modulation harmonics present in the spectra depend on the spin state, with the biased signal containing a central carrier corresponding to \(\omega_{ref}\) that is absent for the unbiased solutions. \(\omega_{bias}\), whose frequency is the same as the reference signal's, drives the oscillators as they all acquire the same phase (i.e. align their spin direction). We complement this picture in (c) using the time traces of the RF pulses colored with their respective instantaneous phase and associated graph. It is clear that despite antiferromagnetic coupling, we are able to induce a change in ordering in both possible directions using \(\omega_{bias}\) alone. Combining time trace and spectrum analyses, we have different tools to study the effectiveness of the biasing as well as develop differentiation and operation protocols that allow us to program more complex problems. Although this is very promising in terms of exploration of higher complexity schemes, unexpected states also appear for intermediate amplitudes of \(h\).
While it is expected that there will be a sudden breaking of up and down spin symmetry at \(h=2\), a gradual spin flipping occurs. Mixed spin states appear at intermediate values of \(\omega_{bias}\) amplitude (\(\approx\)-20 dBm), as seen in Fig.3(a), resembling a chain with magnetic domains. Although the system is indeed synchronized with \(\omega_{bias}\), as shown in fig.3(b), these 3+1 states (three spins up and one down, or vice versa) are not a minimum energy solution of eq.1 and suggest an unintended increase of the degrees of freedom of the system.
In Fig.3(c) we show the phase diagram for a ring-shaped 4 spin system under an external Zeeman field where one of the spin's amplitude \(|S_{1}|\) can be less than one, i.e. one spin is shorter. The shortening of \(S_{1}\) results in a phase transition of the 4-spin system and appearance of the 3+1 states. In fact, from the time traces shown in Fig.3, we can directly observe that the spins have different electrical amplitudes and the odd one out has the smallest one. Since electrical amplitude and spin amplitude are proportional to each other, the emergence of the 3+1 states can give us information about the operation of our circuit and its limitations.
Fig. 2: Influence of the Zeeman term at different signs of magnetic field. (a) Measured potential landscape shift of the artificial spin states. The sign of \(h\) is given by the phase between \(\omega_{ref}\) and \(\omega_{bias}\). The total phase sensitive amplification of the circuit \(A_{PSA}\) favors either 0 or \(\pi\) depending on \(h\) and thus changes the magnetic ordering of the spins. (b) Power spectral densities (PSD) as functions of frequency for different \(h\) signs. We observe that the modulation harmonics can be used to identify the type of solution achieved, with both ferromagnetic states having the same spectra. (c) Time traces of the RF pulses colored with their respective instantaneous phase for different signs of \(h\). The non-zero signals have an amplitude of -15 dBm. \(\omega_{bias}\) allows us to change the ordering and direction of the artificial spin state.
Probing into the origin of this amplitude mismatch, we propose that the emergence of the 3+1 states depends on the non-linearity of the saturation of the LNA. Since we are injecting additional power to the system with the Zeeman signal, the LNA saturates and thus gain compression occurs beyond its linear range. Even if they are very close in the power transfer curve [12], its non-linearity results in spins of different signs being amplified differently with the minority spin shortening. An amplifier with a higher linear regime would mitigate this state degeneration. If compression gain is unavoidable, a stronger coupling between spins and a smoother saturation curve for the LNA would suppress 3+1 states. Digital feedback using a field programmable gate array (FPGA) has been employed successfully in previous time-multiplexed Ising machines to improve amplitude stability [13] and can also be used instead to the delay line to modify pairwise or all-to-all coupling. These findings can be implemented in future circuit designs to improve the quality and complexity of the solutions with larger amounts of spins.
It is worthwhile to mention that spectral analysis can help us understand the synchronization dynamics of the system as well as be used for solution differentiation. As we see in fig.2(b) and fig.3(b), the peaks at \(\omega_{ref}\) show that \(\omega_{bias}\) drives the oscillators. Additionally, both biased solutions have their own characteristic spectrum. Thus, including this information in the digital feedback can allow us to design differentiation metrics and stopping conditions for relaxation and annealing protocols in bigger and more complex systems.
Finally, we tried to recreate a stable globally biased 3+2 solution with five spins, as it would appear in a spin ring. Evidently, a ring with an odd number of spins will not have a stable unbiased solution, as the phase difference over each circulation period will never be zero. It is clear that for a large enough bias we do see all spins parallel to each other, as shown for -15 dBm in fig.4(a). The phase transition mentioned before and shown in fig.3 could be an indication that a 3+2 state could be viable and could allow us to construct a magnetic system analog with two clearly defined domains. An amplitude of -23 dBm (fig.4(a)) is unable to produce such a solution, because the spins do not synchronize with the external field (we can observe that the highest peak in fig.4(b) is not at \(\omega_{ref}\)). Instead, the resulting state's phase is not binarized and produces spins that do not comply with the definition of eq.1. We believe that frustration is responsible for this phase slip and that stable solutions are achievable with digital coupling and individual bias, both implementable with the aforementioned FPGA.
We have shown the broad features of a globally biased artificial spin state space composed of RF pulses in a YIG waveguide. Despite antiferromagnetic coupling between nearest neighbors in a 4-spin ring, we are able to induce ferromagnetic ordering by injecting an external signal of the same frequency to emulate the role of the Zeeman term in the Ising Hamiltonian. Intermediate values of this Zeeman signal introduce degeneracy in the amplification of the pulses and, consequently, 3+1 spin states. These effects can be mitigated
Fig. 4: Five spin states at two different \(\omega_{bias}\) amplitudes. (a) Time traces at -15 dBm and -20 dBm. We can observe that although the bias manages to stabilize the phase, the solution is not phase binarized. (b) Power spectral densities (PSD) of the solutions at the same amplitudes. We see that each solution is synchronized to \(\omega_{bias}\) and has a characteristic spectrum.
Fig. 3: Mixed artificial spin states for \(\omega_{bias}\) with -20 dBm. (a) Colored time traces of the RF spins. The appearance of 3+1 states is evidence of additional degrees of freedom in the Hamiltonian of the system. (b) Phase diagram of a ring-shaped 4-spin Ising machine with variable spin amplitude. The 3+1 states are a consequence of phase-dependent amplification of the spins. (c) PSD for both solutions with 3+1 states; we observe the same synchronization as in fig.2(b), but the modulation peaks are characteristic of these solutions.
by alternative amplification schemes whose implementation would guide future work in enabling all-to-all spin coupling for tackling non-trivial optimization tasks. The present work improves upon the emerging technology of commercially feasible IM hardware accelerators. The SWIM concept has a high potential for further scaling in terms of spin capacity, physical size and low-power low-footprint circuits for applied combinatorial optimization.
|
2302.05154
|
Industrial and Medical Anomaly Detection Through Cycle-Consistent
Adversarial Networks
|
In this study, a new Anomaly Detection (AD) approach for industrial and
medical images is proposed. This method leverages the theoretical strengths of
unsupervised learning and the data availability of both normal and abnormal
classes. Indeed, the AD is often formulated as an unsupervised task, implying
only normal images during training. These normal images are devoted to be
reconstructed, through an autoencoder architecture for instance. However, the
information contained in abnormal data, when available, is also valuable for
this reconstruction. The model would be able to identify its weaknesses by
better learning how to transform an abnormal (respectively normal) image into a
normal (respectively abnormal) one, helping the entire model to learn better
than a single normal to normal reconstruction. To address this challenge, the
proposed method uses Cycle-Generative Adversarial Networks (Cycle-GAN) for
(ab)normal-to-normal translation. After an input image has been reconstructed
by the normal generator, an anomaly score quantifies the differences between
the input and its reconstruction. Based on a threshold set to satisfy a
business quality constraint, the input image is then flagged as normal or not.
The proposed method is evaluated on industrial and medical datasets. The
results demonstrate accurate performance with a zero false negative constraint
compared to state-of-the-art methods. The code is available at
https://github.com/ValDelch/CycleGANS-AnomalyDetection.
|
Arnaud Bougaham, Valentin Delchevalerie, Mohammed El Adoui, Benoît Frénay
|
2023-02-10T10:25:12Z
|
http://arxiv.org/abs/2302.05154v2
|
# Industrial and Medical Anomaly Detection Through Cycle-Consistent Adversarial Networks
###### Abstract
In this study, a new Anomaly Detection (AD) approach for real-world images is proposed. This method leverages the theoretical strengths of unsupervised learning and the data availability of both normal and abnormal classes. The AD is often formulated as an unsupervised task motivated by the frequent imbalanced nature of the datasets, as well as the challenge of capturing the entirety of the abnormal class. Such methods only rely on normal images during training, which are devoted to be reconstructed through an autoencoder architecture for instance. However, the information contained in the abnormal data is also valuable for this reconstruction. Indeed, the model would be able to identify its weaknesses by better learning how to transform an abnormal (or normal) image into a normal (or abnormal) image. Each of these tasks could help the entire model to learn with higher precision than a single normal to normal reconstruction. To address this challenge, the proposed method utilizes Cycle-Generative Adversarial Networks (Cycle-GANs) for abnormal-to-normal translation. To the best of our knowledge, this is the first time that Cycle-GANs have been studied for this purpose. After an input image has been reconstructed by the normal generator, an anomaly score describes the differences between the input and reconstructed images. Based on a threshold set with a business quality constraint, the input image is then flagged as normal or not. The proposed method is evaluated on industrial and medical images, including cases with balanced datasets and others with as few as 30 abnormal images. The results demonstrate accurate performance and good generalization for all kinds of anomalies, specifically for texture-shaped images where the method reaches an average accuracy of 97.2% (85.4% with an additional zero false negative constraint).
Cycle-GANs, Industry 4.0, Industrial Images, Medical Images, Anomaly Detection, Zero False Negative
## I Introduction
This work proposes a new approach with a Generative Adversarial Network (GAN) architecture for the task of Anomaly Detection (AD), which aims to combine the advantages of both unsupervised learning and the data availability of the normal and abnormal classes. Indeed, AD is often formulated as an unsupervised task due to the frequent high imbalance between normal and abnormal data, and the need for generalization across a wide range of anomalies. Therefore, in an autoencoder architecture for instance, the AD method is trained by reconstructing normal data only. Nevertheless, valuable information is missing during the training step. Only the normal class is taken into account, and the reconstruction from the abnormal to the normal class is not included in this learning process. Yet, this is precisely the task we are expecting from an AD method during the inference step. The proposed method seeks to overcome this limitation by learning how to transform an abnormal image into a normal image by exploiting samples from both classes. The objective is to generate a reconstructed image where any abnormal pixel is replaced by a normal one in a visually-coherent manner. During the training step, the "normal generator" is constrained by another "abnormal generator" in an adversarial framework, using Cycle-Generative Adversarial Networks (Cycle-GANs) [1]. Also, reconstructing the abnormal data during the learning step yields a better normal generator than the classical methods using only the normal class. Even if the abnormal datasets can be small (as is usually the case in the AD context), the normal generator performs better, because its performance is also constrained by the abnormal generator, resulting in a good reconstruction. We still consider this as an unsupervised learning task because the abnormal data used during training is not necessarily representative of all anomalies that could occur. Abnormal data are just given to help during the training phase by giving more feedback to the generators. Therefore, the generalization is guaranteed as in a classical GAN context, except that the normal reconstruction is much less noisy.
Cycle-GAN is a well-known architecture proposed a few years ago. It constitutes an elegant way to learn conditional mappings from two different domains \(\mathcal{X}\) and \(\mathcal{Y}\) (for image-to-image translation) by applying a cycle-consistent constraint on the transformations. The popularity of cycle-GANs lies in the fact that they only need a dataset of unpaired images to learn the mappings. In other words, they do not need the one-to-one correspondence between data from \(\mathcal{X}\) and \(\mathcal{Y}\), but only two independent sets of data \(\{x_{i}\in\mathcal{X}\}\) and \(\{y_{i}\in\mathcal{Y}\}\). As an example, one can consider two unpaired datasets \(\{x_{i}\}\) and \(\{y_{i}\}\) made of unrelated aerial images and Google maps, respectively. A cycle-GAN can be trained to learn meaningful mappings from \(\mathcal{X}\) to \(\mathcal{Y}\) and \(\mathcal{Y}\) to \(\mathcal{X}\). Fig. 1 presents an example generated with this cycle-GAN.
Many works have shown that cycle-GANs can be used in diverse image analysis tasks. Nonetheless, cycle-GANs remain seldom used in practice to solve problems in the industrial and medical areas. This is, for example, the case for AD, where, to the best of our knowledge, no prior work directly exploits cycle-GANs. Compared to the state-of-the-art for AD in images, cycle-GANs seem to be more suitable than concurrent methods that rely on simple GANs. Furthermore, we show that the formalism of cycle-GANs makes them really efficient and well-suited for AD. Moreover, the use of an identity loss allows the reconstruction of the normal-to-normal (and abnormal-to-abnormal) generator to be much less noisy than with traditional GANs, making it possible to better discriminate normal and abnormal images. To illustrate this, we focus our experiments on several industrial and medical problems of AD. This is motivated by (i) the abundance of AD problems in these domains, and (ii) the positive societal impact of developing efficient AD algorithms for them. From our results, it clearly appears that cycle-GANs can be very efficient for specific types of images.
The main contributions of our work are as follow:
* Utilize abnormal data in the learning process to reinforce the normal generator, by reconstructing from both classes.
* Consider a cycle-GAN architecture, broadly used for image generation, as a powerful approach for AD.
* Apply an identity loss that allows a better discrimination between normal and abnormal images.
* Characterize and discuss the performances of the method for diverse industrial and medical AD problems.
* Give insights on the reasons why cycle-GANs fit well to the problem of AD for specific natures of images, and consider further investigations on the use of cycle-GANs in the industrial and medical domains.
First, we present the theoretical prerequisites to understand cycle-GANs in Section II. After that, Section III presents the previous works, and highlights that most of them only use simple GAN architectures by training with only normal data. The proposed AD method with cycle-GANs is described in Section IV. Section V then introduces the considered datasets, the experimental setup as well as the results. A discussion and some limitations with the use of cycle-GANs are presented in Section VI before concluding with Section VII.
## II Background on Cycle-GANs
This section introduces the formalism for cycle-GANs. The building blocks and the loss functions are described.
### _Building Blocks_
Cycle-Generative Adversarial Networks (Cycle-GANs) learn image-to-image mappings from an unpaired dataset constituted of two types of images from domains \(\mathcal{X}\) and \(\mathcal{Y}\). Cycle-GANs are obtained by tying together two distinct conditional GANs with a cycle-consistent constraint. The first GAN is made of a generator
\[G:\mathcal{X}\cup\mathcal{Y}\longrightarrow\mathcal{Y}:G\left(z\right)= \tilde{y}, \tag{1}\]
and a discriminator \(D_{Y}\), and the other is made of a generator
\[F:\mathcal{X}\cup\mathcal{Y}\longrightarrow\mathcal{X}:F\left(z\right)= \tilde{x}, \tag{2}\]
and a discriminator \(D_{X}\). For convenience, let's already consider an AD task where \(\mathcal{X}\) are abnormal images, while \(\mathcal{Y}\) are normal ones. On the one hand, the aim of \(G\) is to generate from \(x\in\mathcal{X}\cup\mathcal{Y}\) an image such that \(D_{Y}\) cannot distinguish it from real normal images in \(\mathcal{Y}\). On the other hand, \(F\) aims to generate images such that \(D_{X}\) is fooled and cannot distinguish it from real abnormal images in \(\mathcal{X}\). To achieve this, cycle-GANs are trained by optimizing a combination of different losses that are described in the next section.
### _Objective Function_
A cycle-GAN requires to train two GANs, and to tie them together with a cycle-consistent constraint. The loss can be broken down into three parts such that
\[G^{*},F^{*}=\text{arg}\min_{F,G}\max_{D_{X},D_{Y}}\mathcal{L}_{\text{adv}}+ \lambda_{\text{cyc}}\mathcal{L}_{\text{cyc}}+\lambda_{\text{ide}}\mathcal{L} _{\text{ide}}, \tag{3}\]
where \(\lambda_{\text{cyc}}\) and \(\lambda_{\text{ide}}\) are meta-parameters that weight the different parts of the loss.
The first part is made of two classical adversarial losses [3]
\[\mathcal{L}_{\text{adv}}=\mathcal{L}_{\text{GAN}}\left(G,D_{Y}\right)+ \mathcal{L}_{\text{GAN}}\left(F,D_{X}\right), \tag{4}\]
where,
\[\mathcal{L}_{\text{GAN}}\left(G,D\right) =\mathbb{E}_{y}\left[\log\left(D\left(y\right)\right)\right]\] \[+\mathbb{E}_{x}\left[\log\left(1-D\left(G\left(x\right)\right) \right)\right]. \tag{5}\]
On the one side, by enforcing \(G\) (resp. \(F\)) to minimize \(\mathcal{L}_{\text{adv}}\), the generator will try to generate images that look similar to images from \(\mathcal{Y}\) (resp. \(\mathcal{X}\)). On the other side, by enforcing \(D_{Y}\) (resp. \(D_{X}\)) to maximize \(\mathcal{L}_{\text{adv}}\), the discriminator will try to distinguish between images coming from the generator \(G\) (resp. \(F\)) and real images in \(\mathcal{Y}\) (resp. \(\mathcal{X}\)).
The second part is motivated by the fact that the reconstructed images \(F\left(G\left(x\right)\right)\) and \(G\left(F\left(y\right)\right)\) should be close to \(x\) and \(y\), respectively (i.e., the pair of GANs should be cycle-consistent). This is achieved by the cycle-consistent loss
\[\mathcal{L}_{\text{cyc}}=\mathbb{E}_{x}\left[\|F\left(G\left(x\right)\right)-x \|_{1}\right]+\mathbb{E}_{y}\left[\|G\left(F\left(y\right)\right)-y\|_{1} \right], \tag{6}\]
Fig. 1: Example generated from a cycle-GAN (see Section V for training details) that learns mappings between aerial photos \(\mathcal{X}\) and Google maps \(\mathcal{Y}\) (dataset from [2]). The initial image \(x\in\mathcal{X}\) can be mapped to \(\tilde{y}\in\mathcal{Y}\) thanks to a first generator \(G\). The second one \(F\) can then go back from \(\tilde{y}\in\mathcal{Y}\) to \(\tilde{x}\in\mathcal{X}\). A cycle-consistent constraint enforces \(\tilde{x}\) to be close to \(x\).
where the L1 norm is used, as in the original work on cycle-GANs [1].
In addition to the \(\mathcal{L}_{\text{adv}}\) and \(\mathcal{L}_{\text{cyc}}\), an identity loss is added to constrain the generators to leave the images unmodified if they are already in the desired output domain, defined as
\[\mathcal{L}_{\text{ide}}=\mathbb{E}_{x}\left[\left\|F\left(x\right)-x\right\|_{ 1}\right]+\mathbb{E}_{y}\left[\left\|G\left(y\right)-y\right\|_{1}\right], \tag{7}\]
so as to enforce \(F\left(x\right)=x\) and \(G\left(y\right)=y\). In other words, \(F\) should not add anomalies if the input image is already abnormal, and \(G\) should not make any modification if it is already normal. Although the identity loss is present in the implementation of cycle-GANs from the original paper, it is not discussed there and is seldom used in practice. However, in the context of AD, the identity loss is particularly relevant. Indeed, \(G\) is expected to erase any abnormal pixel from the image; nonetheless, when the image does not contain any, it should learn to leave the image unmodified. This important property is exactly the one enforced by the identity loss. At inference time, it also allows us to use only one of the two generators (the one that goes from abnormal to normal data, i.e., \(G\)), as potential abnormal pixels are revealed by comparing the reconstructed image with the original one.
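To make the interplay of the three terms concrete, the following is a hedged PyTorch-style sketch of the generator-side objective of Eq. (3). The weights \(\lambda_{\text{cyc}}=10\) and \(\lambda_{\text{ide}}=5\) follow Section V-B, while the network interfaces and the non-saturating form of the adversarial term are illustrative assumptions.

```python
import torch

def generator_objective(G, F, D_X, D_Y, x, y, lambda_cyc=10.0, lambda_ide=5.0):
    """Generator-side loss of Eq. (3) for a batch of abnormal images x and
    normal images y. D_X and D_Y are assumed to output probabilities."""
    fake_y, fake_x = G(x), F(y)
    # adversarial terms: fool D_Y with G(x) and D_X with F(y)
    # (non-saturating form commonly used for the generator update)
    adv = -(torch.log(D_Y(fake_y) + 1e-7).mean()
            + torch.log(D_X(fake_x) + 1e-7).mean())
    # cycle-consistency, Eq. (6): F(G(x)) should recover x, G(F(y)) should recover y
    cyc = (F(fake_y) - x).abs().mean() + (G(fake_x) - y).abs().mean()
    # identity, Eq. (7): G leaves normal images untouched, F leaves abnormal ones
    ide = (G(y) - y).abs().mean() + (F(x) - x).abs().mean()
    return adv + lambda_cyc * cyc + lambda_ide * ide
```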
## III Related Works
AD has long been an area of great concern in a wide range of fields such as biomedical [4], industrial [5] and security [6, 7]. Furthermore, a significant number of works have been published to characterize the AD approaches in the literature. The scope of this section is focused on previous works based on GANs and cycle-GANs, applied to the industrial and medical domains. GANs are used for many image-related tasks such as AD [8, 9, 10, 11], segmentation, data augmentation, etc. However, to the best of our knowledge, cycle-GANs have mostly been used for data augmentation [12, 13, 14]. Since the purpose of our research is focused on AD in medical and industrial applications, in this section we review the most relevant research applied to these two fields. An in-depth analysis of the state-of-the-art methods associated with our research issue shows that recent AD methods mainly rely on GANs alone. Yet, cycle-GANs can be highly useful for AD thanks to the combination of unsupervised learning and the availability of data for both the normal and abnormal classes.
Regarding the industrial studies, Bougaham et al. [5] propose to use intermediate patches for the inference step after a Wasserstein GAN learning process. The objective is to produce an efficient approach for AD on real industrial images of electronic Printed Circuit Board Assembly (PCBA). The technique can be used to assist current industrial image processing algorithms and to avoid tedious manual processing. Nevertheless, due to the wide variety of possible anomalies in a PCBA and the high complexity of the autoencoder architecture, a real-world implementation remains a challenging task, specifically for small anomalies, even if the method evolved to overcome some limitations in [11]. Rippel et al. [15] and Zhang et al. [16] suggest to use cycle-GANs to perform data augmentation by generating synthetic images for industrial inspection. Recently, J. Liu et al. [17] developed an autoencoder-based AD technique using images of aluminum surfaces. The challenge of this work is to detect manufacturing errors using unlabeled data. They introduced a dual prototype loss to encourage encoder-generated feature vectors to match their own prototype. The root mean square error between feature vectors is then used as an indicator of anomalies.
Regarding the medical field, Schlegl et al. [8] proposed an unsupervised GAN-based AD framework (f-AnoGAN) that can detect unseen anomalies in medical subjects after being trained on healthy tomography images. Among the other previous studies in medical imaging, the authors of [18, 19] can be cited. These authors use cycle-GANs to perform data augmentation for MRI and CT scan images, respectively. Again, they show that using cycle-GANs for data augmentation leads to better segmentation performances afterwards.
Despite being better than previous approaches, these deep AD approaches use unsupervised deep learning techniques, such as autoencoders and GANs, to characterize the normal class, without using the insights given by the anomalies. In contrast, in this work, the anomaly images are leveraged directly in the training phase, used as prior knowledge to strengthen the model at recognizing anomalies. Furthermore, we evaluate our method on both industrial and medical images while enforcing a zero false negative (ZFN) constraint.
In short, a thorough analysis of the most important studies in the literature shows that in the industrial and medical domains, cycle-GANs have mainly been used for data augmentation. This paper demonstrates the suitability of cycle-GANs for AD, in particular for industrial and medical images, which, to our knowledge, has never been covered in the literature.
## IV Methods
This section introduces the developed approach and shows its relevance for AD with cycle-GANs. The architecture for the training and inference steps is illustrated in Fig. 2.
The basic idea behind the use of cycle-GANs for AD is to exploit the conditional mapping learned by one of the two generators: the one that goes from abnormal to normal images. Indeed, by forward-propagating an abnormal image in this generator, it is expected to obtain a new image where the anomaly is erased. Nonetheless, and thanks to the identity loss, if a normal image is forward-propagated in the generator, it is expected to remain unchanged. Therefore, by comparing the output of the generator with its input, anomalies in the input images can be located. The other generator (normal-to-abnormal) is not really useful from a practical point of view. It is only useful to jointly train the first one, similarly to the two discriminators, but not for the AD inference step.
To perform the AD task, the normal and abnormal test images are given to the learned abnormal-to-normal generator. Then, an anomaly score is computed to measure the distance between the original test image and the reconstructed one. In
this paper, two metrics are considered: a per-pixel sum of the squared differences (SSE), and a Fréchet Inception Distance (FID) [20]. The FID anomaly score is more elaborate and focuses on perceptual differences thanks to the use of a pre-trained Inception V3 network [21]. For each anomaly score, its potential for AD with cycle-GANs is assessed by building an anomaly detector with two different thresholds. The first threshold is set by minimizing the number of classification errors, which yields an anomaly detector with maximum accuracy (ACC). The second one is set so that all true positives are detected, i.e., only false alarms can be raised but no anomaly can be missed. This setting yields an anomaly detector with zero false negative (ZFN), which is the most useful in the business applications of the industrial or medical fields, where false negatives have large consequences for customers or patients. In summary, four anomaly detectors are built, i.e., one for each pair of metric and threshold. Note that the thresholds are set on the test sets, and make it possible to assess how well the two distributions (SSE and FID for normal and abnormal data) are discriminated. The use of an additional validation set should be preferred, but it would have been too costly in terms of abnormal data for several datasets (due to the scarcity of abnormal data available). Therefore, the accuracy values are an overestimate of the classification performances, and should be seen as a metric quantifying how much the method highlights abnormal images compared to normal images on the test sets. Our goal is indeed to measure the discriminative power of cycle-GANs for AD, so as to prospectively validate the practical interest of our idea in the industrial and medical domains.
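A minimal sketch of how such detectors can be built from a set of test scores is given below; the SSE score follows its definition directly, whereas the FID score would additionally require a pre-trained Inception V3 network and is therefore not expanded here. Function names and tie-breaking details are illustrative.

```python
import numpy as np

def sse_score(image, reconstruction):
    """Per-image anomaly score: sum of squared pixel differences (SSE)."""
    return float(((image - reconstruction) ** 2).sum())

def zfn_and_acc_thresholds(scores_normal, scores_abnormal):
    """Higher score = more anomalous. The ZFN threshold is the smallest
    abnormal score, so that no anomaly is ever missed; the ACC threshold
    maximizes classification accuracy on the (balanced) test set."""
    scores_normal = np.asarray(scores_normal)
    scores_abnormal = np.asarray(scores_abnormal)
    zfn = scores_abnormal.min()
    candidates = np.unique(np.concatenate([scores_normal, scores_abnormal]))
    acc = max(candidates, key=lambda t: (scores_abnormal >= t).sum()
                                        + (scores_normal < t).sum())
    return zfn, acc
```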
## V Experiments
This section presents the experiments carried out to evaluate the proposed AD method. First, the datasets and the data pre-processing steps are introduced. Next, the model architecture used for all the experiments is detailed. Finally, qualitative and quantitative results are presented, then discussed in Section VI.
### _Datasets_
Eight datasets are used for both the industrial and medical domains, where several types of anomalies may occur. To assess the strengths and weaknesses of our method, we defined four categories of anomalies: small/large object-shaped, or small/large texture-shaped anomalies. Indeed, the anomaly can either arise on a specific object characterized by abrupt changes in pixel intensity (a screw for instance), or where the pixel intensity changes are more progressive and more homogeneous (in the structure of wood for instance).
To cover the industrial side, the public MVTEC-AD dataset [22] is used. It consists of different high resolution industrial images from 15 different categories of object and texture-shaped products with and without anomalies. In this work, 4 datasets were selected from MVTEC-AD to cover the different natures of images: the Screw (small object-shaped anomalies), the Hazelnut (large object-shaped anomalies), the Tile (large texture-shaped anomalies) and the Wood (small texture-shaped anomalies) dataset, which are made of \(480\), \(501\), \(347\) and \(326\) images, respectively. All of these datasets are clearly imbalanced with a minority of abnormal images.
To investigate the medical side, four datasets of object and texture-shaped images are used, coming from healthy and unhealthy subjects. First, PCAM (large texture-shaped anomalies) [23] consists of \(220,025\) histological images. Second, Breast Ultrasound (small texture-shaped anomalies) [24] is made of \(789\) images. One should mention that for this dataset, many images were manually labeled by experts by highlighting the tumor on the images. Therefore, in order to avoid any bias during training, we removed all those annotated images from the dataset, resulting in a dataset of \(654\) images. The third dataset is made of \(253\) Brain MRI images (large object-shaped anomalies) [25]. Finally, the retinal OCT dataset [26] (small and large texture-shaped anomalies) contains \(83,600\) images of Optical Coherence Tomography. All of these datasets are imbalanced with a minority of normal images, except for PCAM, where the abnormal images are in a slight minority. Table I provides an overview of the number of images for each dataset.
Fig. 2: Figure inspired from [1] presenting the architecture of the training (left side) and the inference (right side) steps. During the training step, the first generator \(G\) tries to map abnormal to normal images by fooling the discriminator \(D_{Y}\) that should not detect fake images. \(F\) and \(D_{X}\) follow the same idea but for normal images as input. During the inference step, only \(G\) is used even if the input can either be normal or abnormal images. This is possible thanks to the identity loss that enforces \(G\) to only modify abnormal pixels if any, and leaves the image unmodified otherwise.
The same data preprocessing steps are performed for all the datasets, as the industrial and medical images are similar from an object-shaped or texture-shaped point of view. Some of the aforementioned datasets come with different types of anomalies. In this case, a single abnormal class is created by aggregating all the abnormal classes together. Also, the normal and abnormal classes are sometimes imbalanced. Therefore, in order to avoid the use of an imbalanced metric to evaluate the results, a specific split of the dataset is applied to ensure that the test sets are fully balanced. This is obtained by keeping half of the minority class images for the testing set, as well as the same number of randomly picked images from the majority class. All the remaining images are left to the training set1. Because AD may sometimes be a very imbalanced predictive problem, the majority class is generally overpopulated in the training sets. However, the test sets are perfectly balanced, which allows us to assess the performances with a simple accuracy metric. Furthermore, even if using imbalanced training sets may hurt the performances of most supervised machine learning algorithms, the training process of cycle-GANs is less sensitive to this. The task of a cycle-GAN differs from simple label prediction, and each image gives feedback to the two generators, directly or indirectly. Images are resized to a resolution of \(256\times 256\) pixels by using a bicubic interpolation method, except for PCAM where the images remained in their original lower resolution of \(96\times 96\). Data augmentation is also performed so that objects and textures are rotated and flipped along both axes (except for retinal OCT where only flipping along the horizontal axis is pertinent).
Footnote 1: An exception is made for the PCAM dataset. Due to its larger size, only \(10\%\) of the minority class is taken for testing instead of \(50\%\).
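The split described above can be sketched as follows; the code is illustrative, and the fraction of the minority class kept for testing is 0.5 everywhere except PCAM, where it is 0.1 (see footnote 1).

```python
import random

def balanced_split(minority, majority, test_fraction=0.5, seed=0):
    """Keep `test_fraction` of the minority class for the test set, plus the
    same number of randomly drawn majority-class images, so that the test set
    is perfectly balanced; all remaining images form the training set."""
    rng = random.Random(seed)
    minority, majority = list(minority), list(majority)
    rng.shuffle(minority)
    rng.shuffle(majority)
    n_test = round(len(minority) * test_fraction)
    test = minority[:n_test] + majority[:n_test]
    train = minority[n_test:] + majority[n_test:]
    return train, test
```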
### _Network Architecture and Training Procedure_
For convenience and practical purposes, the architectures used in this work as well as the training procedures are similar for the different applications, and follow the experimental setup presented in the initial paper on cycle-GANs [1]. The generators are formed by three convolution layers, several residual blocks [27], two fractionally-strided convolution layers and one final convolution layer. We use 9 residual blocks for images resized at \(256\times 256\) resolution, and only 6 for the PCAM (\(96\times 96\) resolution). For the discriminators, we use \(70\times 70\) PatchGANs [28, 29, 30]. All the models are trained through 200 iterations of the Adam optimizer with a learning rate of \(2\times 10^{-4}\). An exception is made for PCAM, for which only 40 training iterations are performed due to its large size. A linear learning decay is introduced at the middle of the training. The meta-parameters \(\lambda_{\text{cyc}}\) and \(\lambda_{\text{ide}}\) are fixed to 10 and 5, respectively. To give an idea of the computation time, training the cycle-GAN on 500 images with a \(256\times 256\) resolution for 200 iterations roughly takes 24 hours on a single Nvidia RTX A6000 GPU.
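For illustration, the optimizer and learning-rate schedule described above could be set up as follows; the generator modules are stand-ins, and the Adam momentum parameters are a common choice rather than a detail taken from the paper.

```python
import torch
import torch.nn as nn

n_epochs = 200
# stand-in generator networks; the real ones use ResNet-style residual blocks
G_net = nn.Conv2d(3, 3, kernel_size=3, padding=1)
F_net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

optimizer = torch.optim.Adam(
    list(G_net.parameters()) + list(F_net.parameters()),
    lr=2e-4, betas=(0.5, 0.999))  # betas are an assumption, not from the text

def linear_decay(epoch):
    """Constant learning rate for the first half of training, then a linear
    decay to zero, as described in Section V-B."""
    half = n_epochs // 2
    return 1.0 if epoch < half else max(0.0, 1.0 - (epoch - half) / half)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=linear_decay)
```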
### _Experimental Results_
#### V-C1 Qualitative Assessments
The quality of the reconstruction, as well as the highlighting of anomalies are presented in Fig. 3. It shows the original image, the normal (generated) version and their squared pixel difference image, for the selected industrial and medical datasets. These images have been specifically chosen to illustrate different typical cases. However, the following quantitative assessment evaluates the global performances on all the test sets, which are in agreement with the qualitative examples presented here.
#### V-C2 Quantitative Assessments
Table II summarizes the accuracy (for the FID and SSE anomaly scores) under the zero-false-negative constraint (ZFN thr. columns), and, in a more standard way, without this constraint (ACC thr. columns) for all the different datasets. The distributions of the FID anomaly scores for the normal and abnormal test sets are shown in Fig. 4, with the accuracy calculated for the threshold set with the ZFN constraint, or without it.
## VI Discussion and Limitations
This section outlines the discussion and the limitations with the use of cycle-GANs, for industrial and medical images AD. We observe from the qualitative results presented in Fig. 3 that the anomaly reconstruction strongly depends on the nature of the image. Indeed, for the textured appearance images (Wood, Tile, Breast Ultrasound, PCAM and OCT Retina), small holes,
| _Dataset_ | _Domain_ | _Type_ | _FID, ZFN thr._ | _FID, ACC thr._ | _SSE, ZFN thr._ | _SSE, ACC thr._ |
| --- | --- | --- | --- | --- | --- | --- |
| Wood | Indust. | Texture | 93.33 | **95.00** | 73.33 | 83.33 |
| Tile | Indust. | Texture | **100.00** | **100.00** | 89.02 | 97.56 |
| Hazelnut | Indust. | Object | **100.00** | **100.00** | 72.86 | 94.29 |
| Screw | Indust. | Object | 50.83 | **57.50** | 52.50 | 52.50 |
| Breast | Med. | Texture | 65.15 | 93.18 | 87.12 | **94.70** |
| PCAM | Med. | Texture | 94.41 | 97.79 | 82.42 | **99.89** |
| Retina | Med. | Texture | 52.17 | **96.54** | 50.29 | 95.46 |
| Brain | Med. | Object | 51.02 | 62.24 | 62.24 | **68.37** |

TABLE II: AD accuracy (in %), based on the FID or SSE anomaly scores, with thresholds set to enforce a zero-false-negative constraint (ZFN thr.) or to maximize the AD accuracy (ACC thr.). Best scores are highlighted in bold.
| _Dataset_ | _Training set, # Normal_ | _Training set, # Abnormal_ | _Test set, # Normal_ | _Test set, # Abnormal_ |
| --- | --- | --- | --- | --- |
| Wood | 236 | 30 | 30 | 30 |
| Tile | 222 | 43 | 41 | 41 |
| Hazelnut | 396 | 35 | 35 | 35 |
| Screw | 301 | 59 | 60 | 60 |
| Breast | 67 | 455 | 66 | 66 |
| PCAM | 121,998 | 80,207 | 8,910 | 8,910 |
| Retina | 13,172 | 44,084 | 13,172 | 13,172 |
| Brain | 49 | 106 | 49 | 49 |

TABLE I: Sizes of the training and test sets for each dataset, regarding the number of normal and abnormal data. A particular split that guarantees balanced test sets is always chosen, even if the initial datasets are imbalanced.
large cracks, blurred areas or colorization contained in the abnormal images (left images of the red-framed blocks) are erased by the estimated normal texture (middle images of the red-framed blocks) and result in a pixel difference image (right images of the red-framed blocks) that faithfully highlights the anomalies. We can note that for complex random textures such as Tile, the model struggles to perfectly restore the normal areas, resulting in a noisy difference image. However, the anomaly can still be localized due to the even greater difficulty in restoring the abnormal area.
Good reconstruction is also observed for the Hazelnut and Brain MRI datasets, where these object-shaped images have large anomalies such as scratches or spots shown at the red-framed block of each set. They are not well erased but attenuated, which provides enough information in the difference image to detect and localize the anomaly. However, for an object-shaped dataset with small defects such as the Screw dataset (red-framed block), the anomaly does not disappear after reconstruction, making it impossible to highlight in the difference image.
We can also observe in the green-framed block (normal images) of each set that the reconstructions (middle images) are more or less identical to the original input (left images), resulting in an almost zero difference image (right images). The model has extracted the features of the normal distributions, and is able to restore normal images without changing the pixels value, thanks to the identity loss. Pixel areas with high discontinuity, as shown in the Brain MRI or Screw images, do not fully follow this observation, resulting in slight differences in the generated image that disturb the anomaly score, and make it difficult to obtain good separable thresholds, in the quantitative step.
For the quantitative assessment, we conclude from Table II that the Tile and the Hazelnut datasets achieve perfect classification on the test set under the FID anomaly score, with or without the zero false negative constraint (ZFN). Nevertheless, this result could be overestimated due to the absence of many challenging examples that might occur in the real world. As expected from the qualitative assessment, the Wood and the PCAM datasets also perform well, with 93.33% and 94.41% accuracy with the FID anomaly score under the ZFN constraint, better than the 65.15% accuracy for the Breast Ultrasound dataset (even if the standard accuracy is 93.18%) or the 52.17% accuracy for the OCT Retina dataset (even if the standard accuracy is 96.54%). Indeed, the Breast and the OCT Retina datasets contain challenging abnormal images, resulting in a lower anomaly score that explains this high impact of the ZFN constraint on the accuracy. Despite the fact that the method achieves fairly good anomaly localization on the Brain MRI dataset, it has trouble correctly discriminating the two classes, with an accuracy of 51.02%, due to its noisy normal reconstruction. Finally, the Screw dataset also performs poorly, with a near-random classification accuracy of about 50%, which was expected from the qualitative assessment. Fig. 4 shows how the normal and abnormal FID anomaly score distributions overlap for the Screw, the Breast, the OCT Retina and the Brain MRI test sets, resulting in relatively poor accuracy compared to the other datasets. Regarding these results, the method reaches an average accuracy of 97.2% (85.4% with the additional ZFN constraint).
Overall, we can also conclude that the FID anomaly score improves the accuracy for the industrial images, by getting rid of the noisy pixel-by-pixel reconstruction described above. However, this score cannot avoid poor accuracy with an object
Fig. 3: Industrial (left set) and medical (right set) image examples. For each set of datasets, the left green-framed block presents normal images and the right red-framed block shows abnormal images, with the original image (1st column), the normal version generated (2nd column), and their squared pixel-wise difference image (3rd column). We manually added a small frame around each defect (abnormal images) for the reader’s convenience.
-shaped dataset like Screw, where small anomalies are not well captured by the model and the generated images still show the anomalies. For the Breast Ultrasound and Brain MRI datasets, it appears that under the ZFN constraint, the anomaly score based on SSE leads to better results. This could come from the fact that the Inception V3 model is not pre-trained on many medical-like images, leading to poor feature extraction. We can also state that the ZFN constraint reduces the accuracy, due to a non-optimal classification threshold, except for the perfect classification of the Hazelnut or Tile datasets. In a way, this is the price to pay for adapting to a real case such as those we may encounter in the industrial or medical domains.
To conclude this section, we would like to highlight the advantage of our method compared to state-of-the-art methods. To do so, Table III shows a comparison with a non-exhaustive list of them regarding several criteria. Among those criteria, we checked whether other methods use a ZFN constraint, as it can be of fundamental importance in business and medical applications. We also compared the types of data, the different loss functions and whether they use abnormal data during the training phase. It clearly appears that our work covers a wider spectrum of criteria, while being the first GAN-based approach to reintroduce abnormal data into the training loop. As already presented in Section III, GANs are today the most widely used deep learning architecture for AD. Given a large set of normal images, GANs are able to learn a correct representation of them, and generate new samples from it. Afterwards, when feeding the model with abnormal images, differences between the input and reconstructed images may highlight anomalies. However, with such frameworks, the abnormal images are generally not used during training, even though they are sometimes easily available (although often in small amounts). In such situations, a cycle-GAN based method can benefit from the use of abnormal data during training in order to refine its representation of normal data.
## VII Conclusion
In this work, we propose and characterize for the first time an approach using Cycle-Generative Adversarial Networks (cycle-GANs) for Anomaly Detection (AD) on industrial and medical images. This method also allows us to exploit the abnormal images, when they are at our disposal, to refine the representation of normal data, by giving more insights on what is normal or abnormal. Furthermore, thanks to the use of the identity loss, we show that the formalism of cycle-GANs is naturally well-adapted to perform AD. Particular attention has been given to industrial and medical applications, due to the societal impact they may offer, and motivated by the lack of studies of this kind in these areas to date. The proposed method differs from previous work by exploiting both normal and abnormal images to learn mappings that can generate new matched data from one domain to another, under a cycle consistency constraint. The mapping of interest for our AD method is the one that can generate normal images. From this perspective, any differences between the test image and its normal (generated) version can be easily identified. Qualitatively, the pixel squared difference image is used to locate abnormal areas, and then quantitatively, an anomaly score is created to indicate whether the image contains abnormal areas, based on a preselected threshold. Ultimately, the method identifies anomalies at the pixel level while the labels are initially at the image level, i.e., without the requirement for tedious annotation at the pixel level.
The achieved results demonstrate that, independent of the application, images with a texture appearance (with continuous
Fig. 4: FID anomaly score distributions of normal (solid-green line and bars) and abnormal (dashed-red line and bars) images for the test datasets, with the threshold value in the ZFN setting (vertical dashed line in grey) or in the ACC setting (vertical dashed line in black).
pixel value variability, such as colorization or progressive blurred areas) tend to benefit from a better domain-change mapping than those with an object appearance (with drastic pixel value changes, such as strong contours or structural shapes). An exception is observed for images of objects with coarse defects, where the localization and detection of anomalies always meet expectations. We argue in this work that when both normal and abnormal data are available for training, the use of cycle-GAN architectures should be considered by the community, mainly when the anomalies are known to be in the form of textures or coarse objects.
New applications may also be explored for future work, such as object segmentation or object counting for industrial and medical fields using the same type of cycle-consistent models. This work is a first step and a proof-of-concept for cycle-GANs in AD for industrial and medical domains.
## Acknowledgment
V.D. benefits from the support of the Walloon region with a Ph.D. grant from FRIA (F.R.S.-FNRS). M.E. benefits from the support of the Belgian Walloon region for funding SMARTSENS project which is part of Win\({}^{2}\)WAL program (agreement 2110108). The authors thank Jerome Fink and Geraldin Nanfack for their insightful comments and discussions on this paper.
|
2308.10264
|
Quantum Codes on Graphs
|
We consider some questions related to codes constructed using various graphs,
in particular focusing on graphs which are not lattices in two or three
dimensions. We begin by considering Floquet codes which can be constructed
using ``emergent fermions". Here, we are considering codes that in some sense
generalize the honeycomb code[1] to more general, non-planar graphs. We then
consider a class of these codes that is related to (generalized) toric codes on
$2$-complexes. For (generalized) toric codes on $2$-complexes, the following
question arises: can the distance of these codes grow faster than square-root?
We answer the question negatively, and remark on recent systolic
inequalities[2]. We then turn to the case of planar codes with vacancies,
or ``dead qubits", and consider the statistical mechanics of decoding in this
setting. Although we do not prove a threshold, our results should be
asymptotically correct for low error probability and high degree decoding
graphs (high degree taken before low error probability). In an appendix, we
discuss a toy model of vacancies in planar quantum codes, giving a
phenomenological discussion of how errors occur when ``super-stabilizers" are
not measured, and in a separate appendix we discuss a relation between Floquet
codes and chain maps.
|
M. B. Hastings
|
2023-08-20T13:22:58Z
|
http://arxiv.org/abs/2308.10264v1
|
# Quantum Codes on Graphs
###### Abstract
We consider some questions related to codes constructed using various graphs, in particular focusing on graphs which are not lattices in two or three dimensions. We begin by considering Floquet codes which can be constructed using "emergent fermions". Here, we are considering codes that in some sense generalize the honeycomb code[1] to more general, non-planar graphs. We then consider a class of these codes that is related to (generalized) toric codes on 2-complexes. For (generalized) toric codes on 2-complexes, the following question arises: can the distance of these codes grow faster than square-root? We answer the question negatively, and remark on recent systolic inequalities[2]. We then turn to the case of planar codes with vacancies, or "dead qubits", and consider the statistical mechanics of decoding in this setting. Although we do not prove a threshold, our results should be asymptotically correct for low error probability and high degree decoding graphs (high degree taken before low error probability). In an appendix, we discuss a toy model of vacancies in planar quantum codes, giving a phenomenological discussion of how errors occur when "superstabilizers" are not measured, and in a separate appendix we discuss a relation between Floquet codes and chain maps.
The toric code[3] is a quantum stabilizer code where the qubits are arranged on a two-dimensional surface. One can consider a variety of different lattices for the qubits, and indeed one may even consider non-translationally invariant cellulations of the surface to define the code, but in general all these forms involve some notion of geometric locality of the stabilizers. In contrast, this paper considers quantum codes where checks are defined on general graphs, considering both Floquet codes and stabilizer codes.
In Section I, we give some background on Floquet codes and discuss the relation between certain Floquet codes and the Kramers-Wannier transform. In Section II, we consider Floquet codes which can be regarded as emergent Majorana fermions coupled to a \(\mathbb{Z}_{2}\) gauge field, and discuss possible patterns of checks. One particular class of these codes gives rise to what we call a "(twisted) toric code on a 2-complex", a particular kind of CSS stabilizer code; in Section III we bound the distance of this class of codes, assuming that the code is LDPC, showing that it is not possible to have the minimum distance of the code grow faster than the square-root of the number of qubits. As we discuss, there are examples of this code[4; 5] for which the _product_ of distances grows faster than the number of qubits, though recent systolic inequalities[2] show that it cannot grow _polynomially_ faster than the number of qubits. However, we show that it is not possible to have both \(Z\)- and \(X\)-distances grow faster than square-root for this kind of code. This contrasts with more general codes for which one can have distances greater than square-root[6] or even linear in the number of qubits[7; 8], and these codes can in turn be used to construct linear distance toric codes on 11-manifolds with degrees of freedom on 4-cells[9].
In Section IV, we consider the case of a toric code or planar Floquet code with some vacancies or dead qubits. We consider the case that one can measure "superstabilizers" near these vacancies; these can give rise to a check graph with high degree vertices and we consider the statistical mechanics of decoding in this case. Finally, in Appendix A, we give a homological interpretation of certain Floquet codes as chain maps and in Appendix B, we give a toy model of vacancies in planar codes, describing how error probability increases with time if superstabilizers are not measured; one interesting feature is a possible superlinear growth in error probability with time.
## I Floquet Code Background, Measuring Checks and the Kramers-Wannier Transform
Floquet codes[1; 10; 11; 12] are quantum codes where some operators called "checks" are measured in some sequence, which may be periodic in time or non-periodic[13]. In this paper, we consider codes on qubits, rather than qudits. After each check is measured, in the kinds of codes considered here, the qubits will be described by some stabilizer code, stabilized by some group called the "instantaneous stabilizer group" (ISG). This ISG changes in time.
After measuring some check, the check itself will be part of the ISG at that time. Any other previously measured checks \(C\) will also be in the ISG if no measurement at an intervening time anti-commuted with the given check \(C\). There may also be elements of the ISG which are formed by products of checks.
All Floquet codes in this paper will involve choosing some sequence of one or two qubit measurements as checks. Each single qubit check will measure one of the three possible Pauli operators on that qubit. Each two qubit check will measure the product of a pair of Pauli operators; any of the 9 possible products may be measured.
The honeycomb code and related Floquet codes (on other trivalent planar graphs with 3-colorable plaquettes) make use of a certain pattern of checks. This pattern is as follows. Consider a ring of sites (of size 6 in the case of the honeycomb code), and label the sites by integers \(1,\ldots,m\) for even \(m\), with the label periodic in \(m\). Then in some round one measures a set of checks that can be chosen to be (by choosing a basis of Pauli operators on each qubit appropriately) \(X_{2j-1}X_{2j}\) for \(j=1,\ldots,m/2\). Then in the following round one measures a set of checks that can be chosen to be (again by choosing a basis of Paulis) \(Z_{2j}Z_{2j+1}\) for \(j=1,2,\ldots,m/2\).
In fact, the same pattern also occurs in the \(e\leftrightarrow m\) automorphism code[11], where this pattern is used as part of a Kramers-Wannier transform by preceding it by single qubit measurements and following with single qubit measurements. These measurements can be chosen as follows: first measure single qubit \(Z\) on all even qubits, then implement those pairwise measurements, then follow by measuring single qubit \(X\) on all odd qubits. Then this sequence implements a Kramers-Wannier transform, mapping
\[Z_{2j-1}\rightarrow\pm Z_{2j-2}Z_{2j}, \tag{1}\]
and mapping
\[X_{2j-1}X_{2j+1}\rightarrow\pm X_{2j}, \tag{2}\]
where the signs are determined by the measurement outcomes.
Let's consider what happens in the first two measurement rounds. Consider a given pair of qubits labeled \(2j-1,2j\). Then, we first measure \(Z_{2j}\) and next measure \(X_{2j-1}X_{2j}\). The measurement \(X_{2j-1}X_{2j}\) defines a stabilizer code, with a single "logical qubit". The effect of these two measurements is to encode qubit \(2j-1\) into this logical qubit. To see this, we can consider the two Pauli operators \(X_{2j-1}\) and \(Z_{2j-1}\). The operator \(X_{2j-1}\) commutes with all of these measurements and is also a logical operator of the code defined by \(X_{2j-1}X_{2j}\). The operator \(Z_{2j-1}\) can be multiplied by the operator \(Z_{2j}\) after the measurement of \(Z_{2j}\), to give \(Z_{2j-1}Z_{2j}\), which is another logical operator of the code defined by \(X_{2j-1}X_{2j}\).
Similarly, the measurement of \(Z_{2j}Z_{2j+1}\) defines a stabilizer code, with logical operators \(Z_{2j}\) and \(X_{2j}X_{2j+1}\). After measuring \(X_{2j+1}\), this effectively decodes this code, as we can multiply \(X_{2j}X_{2j+1}\) by \(X_{2j+1}\) to get \(X_{2j}\), so these logical operators of the code defined by \(Z_{2j}Z_{2j+1}\) become operator \(Z_{2j}\) and \(X_{2j}\) on qubit \(2j\).
As shown in [11], the effect of these four rounds of measurements (single qubit \(Z\), pairwise \(XX\), pairwise \(ZZ\), single qubit \(X\)) is to implement a Kramers-Wannier transform. So, other than these decoding and encoding operations effected by single qubit measurements, the effect of \(Z_{2j}Z_{2j+1}\) for \(j=1,2,\ldots,m/2\) on a state stabilized by \(X_{2j-1}X_{2j}\) for \(j=1,\ldots,m/2\) is to map the qubits encoded in this code stabilized by \(X_{2j-1}X_{2j}\) to qubits encoded in a code stabilized by \(Z_{2j}Z_{2j+1}\), while performing a Kramers-Wannier transform.
This Kramers-Wannier transform effectively measures a certain product of Paulis. Note that the map Eqs. (1) and (2) will map \(\prod_{j}Z_{2j-1}\) to plus or minus identity. Indeed, the measurements effectively measure this product. Similarly, the measurements produce a state of definite \(\prod_{j}X_{2j}\). It should be no surprise that this happens; indeed, the product of measurements \(X_{2j-1}X_{2j}\) is equal to the product of \(X_{j}\) over all \(j\), and this product commutes with the subsequent measurements of \(Z_{2j}Z_{2j+1}\), and after measuring \(X\) on odd qubits, we are left with definite \(\prod_{j}X_{2j}\).
The Kramers-Wannier transform can be implemented in a different way following[14]. In this way, one uses single qubit \(X\) measurements and controlled-\(Z\) gates, instead of pairwise measurements. The effect again is to perform the Kramers-Wannier transform while measuring a product of single qubit operators on the input state.
Unfortunately, this Kramers-Wannier transform has problems if it is used as a general technique to measure stabilizers. That is, one might try the following. Suppose one has some stabilizer code with some high weight stabilizer. Without loss of generality this stabilizer can be written as a product of Pauli \(X\) over some set of qubits, and so one could use those qubits as input to a Kramers-Wannier transform to measure that stabilizer, implementing that transform twice to return the qubits to their original state. However, this method of measuring stabilizers can lead to an increase in the weight of errors. By Eq. (2), the map implements \(X_{2j-1}X_{2j+1}\rightarrow\pm X_{2j}\). Hence, \(X_{1}X_{2j+1}\to X_{2}X_{4}\ldots X_{2j}\), so a two qubit error on the input state can map to an arbitrarily high weight error under the Kramers-Wannier transform. Indeed, avoiding this production of high weight errors when measuring high weight stabilizers is the reason for the Shor[15], Steane[16], and Knill[17] fault tolerant schemes.
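To spell out this weight growth with a small example of our own (taking \(m\geq 8\)), write a weight-two error as a telescoping product of the operators appearing in Eq. (2):

\[X_{1}X_{7}=\left(X_{1}X_{3}\right)\left(X_{3}X_{5}\right)\left(X_{5}X_{7}\right)\longrightarrow\pm X_{2}X_{4}X_{6},\]

and more generally \(X_{1}X_{2j+1}\) maps to the weight-\(j\) operator \(X_{2}X_{4}\cdots X_{2j}\), so the weight of the output error is set by the separation of the two input errors rather than by their number.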
## II "Majorana-gauge field" codes
We now consider a class of Floquet codes where we do not use single qubit measurements and where the two qubit measurements have a particular form. Each such code will be defined by some trivalent graph, \(G\), as well as by some further data giving a sequence of edges of the graph, which will correspond to a sequence of measurements. There is a one-to-one correspondence between qubits of the code and vertices of the graph.
The codes we consider here have some interpretation in terms of Majorana fermions coupled to a gauge field. They are related to codes of [18], but we consider more general graphs. One class of codes that we consider below is related to so-called "matching codes"[19]. Our codes differ from (and generalize) these in two ways. First, rather than a stabilizer code with a fixed stabilizer group, we consider a Floquet code, whose stabilizer group changes in time. Second, while the ISG at a given time is related to a matching code, it generalizes it because rather than considering a two-dimensional lattice, we consider matchings on more general trivalent graphs.
More generally, \(G\) need not be a graph but may be a _multigraph_. That is, in a multigraph there may be more than one edge between any pair of vertices; further, edges have an identity, so if there are multiple edges between a pair of vertices, different edges will correspond to different checks. However, for brevity, we refer to \(G\) as a graph.
We will assume that \(G\) is connected. The case where \(G\) is not connected can be easily understood in terms of the connected case, as it will correspond to having several codes, one for each connected component.
For each of the three edges attached to a given qubit, we make an arbitrary assignment of the 3 Pauli operators, \(X,Y,Z\), choosing a different Pauli operator on each edge. This choice has no effect on the distance or rate of the code; indeed, it has no effect on the properties of the code unless the external noise is biased in some way (for example, \(X\) errors more likely than \(Z\) errors).
Thus, each edge is labeled by a pair of Pauli operators, one for each of the two vertices attached to that edge. Then each edge corresponds to a two qubit measurement, where we measure the product of the given Pauli operators on those two qubits. For example, given an edge between vertices 1 and 2, where we choose Pauli \(X\) for that edge on qubit 1 and Pauli \(Y\) for that edge on qubit 2, the edge corresponds to a measurement of \(X_{1}Y_{2}\).
Then, given a sequence of edges of the code, we measure the corresponding checks in that order, repeating the sequence periodically.
There is a natural interpretation of these checks in terms of Majoranas, similar to that of the Kitaev honeycomb model[3]. For each qubit \(j\), introduce four Majorana operators, \(\gamma_{0}^{j},\gamma_{x}^{j},\gamma_{y}^{j},\gamma_{z}^{j}\). Impose a constraint \(\gamma_{0}^{j}\gamma_{x}^{j}\gamma_{y}^{j}\gamma_{z}^{j}=1\) for all \(j\). Represent Pauli operators \(X,Y,Z\) on that qubit by \(i\gamma_{0}^{j}\gamma_{x}^{j},i\gamma_{0}^{j}\gamma_{y}^{j},i\gamma_{0}^{j} \gamma_{z}^{j}\), respectively. See [1] also. We will not use this representation here, but it accounts for our choice of the term Majorana-Gauge Field code. 1
Footnote 1: In fact, one can consider graphs of higher degree than 3. In this case, if we have \(d\) edges attached to some vertex, then we take Majorana operators \(\gamma_{0},\gamma_{1},\ldots,\gamma_{d}\) on that vertex. Then, one can construct \(d\) anticommuting operators on that vertex of the form \(i\gamma_{0}\gamma_{j}\) for \(j\in\{1,\ldots,d\}\), with each operator having even fermion parity, and each check will be the product of one such operator on one vertex and another such operator on another vertex. Assume \(d\) odd and fix a definite value for the product \(\gamma_{0}\gamma_{1}\ldots\gamma_{d}\), which commutes with all checks. Even products of these operators \(\gamma_{0},\ldots,\gamma_{d}\) can be realized as products of Pauli operators on \(m\) qubits, so long as \(2^{m}\geq 2^{d/2-1}\). However, for now we stick to degree 3.
### Homology
These checks generate some group. This group is sometimes called the "gauge group" in the subsystem code literature, but we avoid using this term as it may cause confusion with other uses of the term "gauge", e.g., "gauge fields". Let us define \(\mathcal{G}\) to be the group generated by checks, as well as by \(-1\) (i.e., by a sign). Let \(\mathcal{Q}\) be \(\mathcal{G}\) modulo sign and let \(\mathcal{N}\) be the group generated by \(-1\), i.e., we have a short exact sequence \(1\to\mathcal{N}\to\mathcal{G}\to\mathcal{Q}\to 1\). The group \(\mathcal{Q}\) is abelian.
Let us determine the rank of \(\mathcal{Q}\). Let there be \(n_{V}\) vertices in graph \(G\). Hence, there are
\[n_{E}=\frac{3}{2}n_{V}\]
edges. So, \(\mathcal{Q}\) is generated by this set of cardinality \(n_{E}\). However, if we take a product of checks corresponding to some set of edges, then that product is equal (up to sign) to the identity if and only if, for each vertex, either all three edges attached to that vertex appear in the given set or no edges attached to that vertex appear in the given set. Since we have assumed that \(G\) is connected, the only nontrivial product of checks equal (up to sign) to the identity is the product of _all_ checks. Hence, the rank of \(\mathcal{Q}\) is equal to \(n_{E}-1\).
Let the "stabilizer group" be the image in \(\mathcal{Q}\) of the center of \(\mathcal{G}\), i.e., the stabilizer group is the center of \(\mathcal{G}\) up to sign. Let \(\mathcal{S}\) denote the stabilizer group. We emphasize that \(\mathcal{S}\) is _not_ the ISG, though if some element of \(\mathcal{S}\) is in the ISG then it will remain in the ISG for all subsequent rounds.
Before considering \(\mathcal{S}\), we pause to define a certain homology theory. A 1-chain will be some formal sum of edges of \(G\) with \(\mathbb{Z}_{2}\) coefficients, i.e., we have a \(\mathbb{Z}_{2}\) vector space with dimension equal to the number of edges, with basis elements of the vector space in one-to-one correspondence with edges. A 0-chain will be a formal sum of vertices of
\(G\) with \(\mathbb{Z}_{2}\) coefficients, and we define the obvious boundary operator \(\partial\) mapping 1-chains to 0-chains, mapping each edge to the sum of vertices in that edge. A 1-cycle is a 1-chain whose boundary vanishes. Hence, a 1-cycle is a sum of edges where, for every vertex, an _even_ number of edges are in that sum, so a 1-cycle corresponds to a sum of closed loops on the graph, disconnected from each other. Each such closed loop is a simple cycle, in the language of graph theory.
Corresponding to every 1-chain is some element of \(\mathcal{Q}\), namely the product of checks corresponding to edges whose coefficient in that chain is equal to 1, modulo sign. One may verify that every 1-cycle corresponds to some product of checks which commutes with every check individually and hence corresponds to some element of \(\mathcal{S}\). Let us show indeed that these 1-cycles generate \(\mathcal{S}\). Consider some edge \(e=(i,j)\) between vertices \(i\) and \(j\), and consider some 1-chain \(v\). One may verify that the product of checks corresponding to \(v\) commutes with the check corresponding to \(e\) if and only if \(\partial v\) has the same coefficient on vertex \(i\) as it does on vertex \(j\). That is, it occurs if and only if the number of edges in \(v\) attached to \(i\) has the same parity (mod 2) as the number of edges attached to \(j\). However, since \(G\) is assumed connected, in order for the product of checks corresponding to \(v\) to commute with all checks, either \(\partial v=0\) identically or \(\partial v\) is equal to 1 on every vertex, i.e., either every vertex has an even number of edges of \(v\) attached to it, or every vertex has an odd number. If \(\partial v=0\) identically, then \(v\) is a 1-cycle. If \(\partial v\) equals 1 on every vertex, then \(v=x+y\), where \(x\) is some 1-cycle and \(y\) is the sum of _all_ edges. Since the product of checks corresponding to \(y\) is equal to the identity as explained above, the product of checks corresponding to \(v\) is the same as the product of checks corresponding to some 1-cycle, namely \(x\).
Hence, the rank of \(\mathcal{S}\) is equal to the rank of the group of 1-cycles. This group is called the first homology group of \(G\), and by the Euler characteristic of the chain complex we are studying, the rank of the first homology group is equal to \(n_{E}-n_{V}+1\), since the rank of the zeroth homology group (the group of 0-chains, modulo boundaries of 1-chains) is equal to 1 since \(G\) is connected. Hence, the rank of the stabilizer group \(\mathcal{S}\) equals
\[n_{E}-n_{V}+1=\frac{1}{2}n_{V}+1.\]
Since \(\mathcal{Q}\) has rank \(n_{E}-1=(3/2)n_{V}-1\), and \(\mathcal{S}\) has rank \(s\equiv(1/2)n_{V}+1\), the difference between the rank of \(\mathcal{Q}\) and the rank of \(\mathcal{S}\) is \(n_{V}-2\). So, we can write \(\mathcal{G}\) as generated by (modulo sign)
\[\tilde{Z}_{1},\ldots,\tilde{Z}_{s},\tilde{X}_{s+1},\tilde{Z}_{s+1},\ldots,\tilde{X}_{s+r},\tilde{Z}_{s+r},\]
where
\[r=\frac{n_{V}-2}{2}=\frac{1}{2}n_{V}-1,\]
and where the operators \(\tilde{Z}_{i},\tilde{X}_{j}\) are products of Paulis which obey the usual Pauli anticommutation relations, and where
\[s=\frac{1}{2}n_{V}+1\]
is the rank of \(\mathcal{S}\).
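As a concrete check of these counts (an illustration, not an example from the text), take \(G\) to be the cube graph, which is trivalent and connected. Then

\[n_{V}=8,\qquad n_{E}=\frac{3}{2}n_{V}=12,\qquad\text{rank}\,\mathcal{Q}=n_{E}-1=11,\qquad s=\frac{1}{2}n_{V}+1=5,\qquad r=\frac{1}{2}n_{V}-1=3,\]

and indeed \(s+2r=11=\text{rank}\,\mathcal{Q}\).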
After these generalities, let us consider some specific measurement sequences. Start with the system in a maximally mixed state, and measure checks in some sequence. After each measurement, the state is a stabilizer state, which is stabilized by some group called the _instantaneous stabilizer group (ISG)_. Every element of the ISG is in \(\mathcal{G}\).
### Floquet Codes from Sequences of Perfect Matchings
Define a _perfect matching_ to be a subset of edges of the graph \(G\) such that every vertex is in exactly one edge in that subset. Suppose we perform some arbitrary sequence of measurements and then measure the checks on some perfect matching. Then, the ISG contains the checks corresponding to the edges in the perfect matching. We claim, and now show, that _the ISG is then generated by the checks in the perfect matching and possibly by some products of checks corresponding to \(1\)-cycles_. Every element of the ISG commutes with these checks. By the discussion above, if an element \(O\) of the ISG (which is a product of checks, and hence corresponds to some 1-chain \(v\)) commutes with a given check \((i,j)\) in the perfect matching, then \(\partial v\) has the same coefficient on vertex \(i\) as it does on \(j\). If \(\partial v\) is equal to \(1\) on \(i\) and \(j\), then we may multiply \(O\) by the check corresponding to edge \((i,j)\); this gives some other element of the ISG, and that element corresponds to some chain \(v^{\prime}\) such that \(\partial v^{\prime}\) vanishes on \(i\) and \(j\). In this way, since the matching is perfect, given any element \(O\) of the ISG, we may multiply it by checks in the perfect matching so that the resulting operator \(O^{\prime}\) corresponds to a 1-cycle, as claimed.
Let \(C\) be any even length simple cycle in \(G\) such that half the edges in \(C\) have their corresponding check in the ISG, i.e., following the edges in \(C\), they alternate between being in the ISG and not being in the ISG. Then we can measure the element of \(\mathcal{S}\) corresponding to that cycle \(C\) by measuring the checks corresponding to the edges in \(C\) which are not in the matching.
So, we can choose the ISG to contain the checks corresponding to edges in some matching, as well as products of checks corresponding to any cycle of even length, by measuring the appropriate checks.
So, in this subsection we consider codes for which the sequence of checks measured is given by picking some sequence of perfect matchings, called \(m_{1},m_{2},\ldots,m_{k}\), for some \(k\), and then measuring all the checks in \(m_{1}\) in an arbitrary order (the order does not matter), then all the checks in \(m_{2}\), and so on, repeating the sequence periodically with period \(k\), identifying \(m_{k+i}\) with \(m_{i}\).
Given any two perfect matchings, \(m,m^{\prime}\), we can define a set of simple cycles that we call \(C(m,m^{\prime})\) as follows. Pictorially, simply draw on the graph the edges that appear in exactly one of the two matchings (but not both of them), and this will give a set of simple cycles. Formally, each matching \(m\) or \(m^{\prime}\) can be identified with some \(1\)-chain \(v_{m}\) or \(v_{m^{\prime}}\) where the chain has a coefficient \(1\) on the edges in the matching. Then, \(v_{m}+v_{m^{\prime}}\) is a \(1\)-cycle and hence corresponds to a union of simple cycles; these simple cycles are those in \(C(m,m^{\prime})\).
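As an illustration (not part of the construction itself), \(C(m,m^{\prime})\) can be computed directly from the symmetric difference of the two matchings; the sketch below uses networkx, with names of our own choosing. Every vertex touched by the symmetric difference has degree two, so each connected component is a single simple cycle.

```python
import networkx as nx

def matching_difference_cycles(m, m_prime):
    """Return C(m, m'): the simple cycles formed by edges lying in exactly
    one of the two perfect matchings m and m' (given as lists of vertex pairs)."""
    diff = {frozenset(e) for e in m} ^ {frozenset(e) for e in m_prime}
    g = nx.Graph(list(map(tuple, diff)))
    # each connected component of the symmetric difference is one simple cycle
    return [nx.cycle_basis(g.subgraph(comp))[0]
            for comp in nx.connected_components(g)]

# two perfect matchings of a 4-cycle on vertices 0..3 differ by that 4-cycle
print(matching_difference_cycles([(0, 1), (2, 3)], [(1, 2), (3, 0)]))
```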
It is easy to verify the following: _suppose after measuring all the checks in some matching \(m_{i}\), the ISG is generated (up to signs) by those checks and by products of checks corresponding to \(1\)-cycles in some given set \(S\); then after measuring all the checks in matching \(m_{i+1}\), the ISG is generated by the checks in \(m_{i+1}\), and by products of checks corresponding to cycles in \(S\), and by products of checks corresponding to simple cycles in \(C(m_{i},m_{i+1})\)._
**Definition 1**.: _Let us define a graph matching code to be a stabilizer code on a trivalent graph, with "checks" on the graph as defined above, such that the stabilizer group of the graph matching code is generated by the checks corresponding to edges of some perfect matching, and by products of checks corresponding to simple cycles, for some set of simple cycles._
Thus, what we have shown is that for such a measurement sequence using perfect matchings, after measuring any perfect matching the ISG is that of a graph matching code.
Indeed, following such a sequence of measurements, if we start from a maximally mixed state, it follows that the \(1\)-cycles in \(S\) correspond to simple cycles, rather than sums of more than one simple cycle. After measuring the checks in \(m_{i+1}\), we may update the set \(S\) to some set \(S^{\prime}\) containing the cycles in \(S\) as well as those in \(C(m_{i},m_{i+1})\). Of course, it may be the case that there is some redundancy between these generators of the ISG, i.e., some product may be the identity.
One source of redundancy may be some redundancy in the generators which corresponds to cycles in \(S^{\prime}\). If that occurs, we may remove some cycles from \(S^{\prime}\) to obtain a set of cycles with no redundancy. In practice, however, this case is important for fault tolerance: such a redundancy may occur when some element of \(C(m_{i},m_{i+1})\) is already in \(S\), and that is how we can detect whether some error has occurred, because we are measuring some operator whose value is already determined by the previous measurements.
Now let us compute the number of logical qubits. First, suppose that \(S\) is maximal: every product of checks corresponding to a \(1\)-cycle is in the ISG. These products of checks are simply the elements of the center of \(\mathcal{G}\) and the rank of the group generated by them is \((1/2)n_{V}+1\) as given above. The number of checks in the perfect matching is \((1/2)n_{V}\). However, there is a redundancy: the product of all checks in the perfect matching is equal to the product of checks corresponding to some \(1\)-cycle. This \(1\)-cycle is simply the sum of all edges _not_ in the perfect matching. It is easy to verify that this is the only redundancy: suppose some product of checks in the matching times some product of checks corresponding to a \(1\)-cycle is equal to the identity. Then, every vertex has either zero or three edges attached to it in this product (i.e., in the product of checks in the matching times the product of checks corresponding to \(1\)-cycles). Suppose some vertex \(v\) has three edges attached to it in the product; then, every neighbor \(w\) of \(v\) also has three edges attached to it in the product, as either \(w\) is attached to \(v\) by a check in the matching or by a check in the \(1\)-cycles. Hence, since \(G\) is connected, the claim follows. Indeed, this redundancy occurs by taking the product of all checks in the matching times the product of all checks in all the simple cycles which do _not_ use any checks in the matching.
So, the rank of the group generated by the center of \(\mathcal{G}\) and by checks in a perfect matching is equal to \((1/2)n_{V}+1+(1/2)n_{V}-1=n_{V}\), and so if \(S\) is maximal, there are no logical qubits. If \(S\) is less than maximal, there may be logical qubits; if \(S\) contains all the simple cycles which do _not_ use any checks in the matching (so that the single redundancy above still applies), then the number of logical qubits is equal to \((1/2)n_{V}+1\) minus the rank of \(S\), consistent with the maximal case.
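As an illustration of this counting (our example, continuing with the cube graph from above): take the perfect matching to consist of the four "vertical" edges, so that deleting them leaves two disjoint \(4\)-cycles, and let \(S\) consist of exactly these two cycles. Then the ISG is generated by \(4\) matching checks and \(2\) cycle stabilizers, subject to the single redundancy above, i.e., by \(5\) independent generators on \(n_{V}=8\) qubits, giving \(8-5=3=\frac{1}{2}n_{V}+1-\mathrm{rank}(S)\) logical qubits.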
## III Toric Codes on \(2\)-complexes
Let us now consider an interesting class of graph matching codes. We consider two types of simple cycles. One type of simple cycle we call an "avoiding cycle". Such an avoiding cycle is a simple cycle that does not contain any edges of the matching. (Deleting the edges of the matching from the graph leaves the union of all avoiding cycles.) The other type of simple cycle we call an "alternating cycle". An alternating cycle is one of even length, such that the edges alternate between being in the matching and not being in the matching.
We will consider in this section graph matching codes in which the set of simple cycles \(S\) defining the code includes all avoiding cycles and some set \(S^{\prime}\) of other cycles which is independent of the set of avoiding cycles. The honeycomb code is an example of such a code where \(S^{\prime}\) is the set of alternating cycles, but we may consider more general \(S^{\prime}\). Here, by "independent", we regard a cycle as corresponding to a \(1\)-chain on the graph with \(\mathbb{Z}_{2}\) coefficients.
We will show that, up to a low depth Clifford circuit and use of ancillas, and ignoring certain degenerate cases discussed below, these codes are equivalent to codes that we may call "twisted toric codes on \(2\)-complexes". First, note that there are two qubits in each check in the matching, but in the \(+1\) eigenspace of that check there is effectively only a single qubit, i.e., a logical qubit of that code defined by that check. Here is where the use of the low depth Clifford and ancillas comes in: we may apply a Clifford gate (such as a CNOT or other gate, depending on the check) to replace those two qubits in the check with a single effective qubit, and some ancilla. The check forces that ancilla to be in a particular state (say \(Z=+1\)), and so we ignore the ancillas.
Each vertex is in exactly one avoiding cycle. Except for certain degenerate cases, the two vertices in any given edge are in _different_ avoiding cycles from each other; a case where they are the same avoiding cycle is that the graph \(G\) has two vertices and there are three edges connecting those two vertices. So, ignoring the case where the cycles are the same, each effective qubit is in two checks corresponding to avoiding cycles. We may choose (by applying single qubit Cliffords) to have these checks act as Pauli \(Z\) on the effective qubit.
Each effective qubit is also in some number of checks corresponding to cycles \(S^{\prime}\). These checks act as Pauli \(X\) or Pauli \(Y\) operators on the effective qubits in that cycle, possibly multiplied by Pauli \(Z\) operator on qubits which are adjacent to that cycle, i.e., which correspond to edges attached to a vertex in that cycle.
Let us define twisted and untwisted toric codes on a \(2\)-complex as follows. We define a graph \(G_{eff}\) whose vertices correspond to avoiding cycles in \(G\), with an edge between two vertices if some edge in the perfect matching in \(G\) connects a vertex in one avoiding cycle to a vertex in the other. (Remark: if one really wants to include the degenerate case above where an effective qubit is in only a single avoiding cycle, this can also be treated by allowing \(G_{eff}\) to have self-loops.) We place a qubit on each edge of \(G_{eff}\). We promote \(G_{eff}\) to a \(2\)-complex by attaching \(2\)-cells corresponding to cycles of \(G\) in \(S^{\prime}\): given a cycle \(C\) in \(S^{\prime}\), the edges in \(C\) which are in the matching define a sequence of edges of \(G_{eff}\). Then, we may define the untwisted toric code on that \(2\)-complex by placing a qubit on each edge, placing a \(Z\)-stabilizer generator on each vertex, and placing an \(X\)-stabilizer generator on each plaquette. We define a twisted toric code on that \(2\)-complex by placing a qubit on each edge, placing a \(Z\)-stabilizer generator on each vertex, and on each plaquette we place a stabilizer generator which is a product of Pauli \(X\) operators on the qubits corresponding to edges in that plaquette possibly multiplied by some product of Pauli \(Z\) operators supported on qubits in that plaquette or adjacent to that plaquette, and further possibly multiplied by a factor of \(i=\sqrt{-1}\) if needed for Hermiticity.
So, ignoring the degenerate case, the graph matching codes where \(S\) includes all avoiding cycles are (twisted) toric codes.
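The passage from a graph matching code to \(G_{eff}\) can be made concrete in a few lines: the avoiding cycles are the connected components left after deleting the matching, and each matching edge becomes an edge of \(G_{eff}\). The sketch below does this; the prism-graph example, the chosen matching, and all names are our own choices.

```python
# Build G_eff: vertices are avoiding cycles of G (components of G minus the
# matching); each matching edge becomes an edge of G_eff (a multigraph, so
# parallel edges and the degenerate self-loop case are both representable).
import networkx as nx

G = nx.circular_ladder_graph(3)             # prism: two triangles plus 3 rungs
matching = {(u, u + 3) for u in range(3)}   # the rungs form a perfect matching

G_avoid = G.copy()
G_avoid.remove_edges_from(matching)
components = list(nx.connected_components(G_avoid))   # the avoiding cycles
which_cycle = {v: i for i, comp in enumerate(components) for v in comp}

G_eff = nx.MultiGraph()
G_eff.add_nodes_from(range(len(components)))
for (u, v) in matching:                     # one effective qubit per check
    G_eff.add_edge(which_cycle[u], which_cycle[v])

print(G_eff.number_of_nodes(), "avoiding cycles;",
      G_eff.number_of_edges(), "effective qubits")
```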
In this and other \(2\)-complexes, we will often continue to refer to \(1\)-cells as edges and \(0\)-cells as vertices, using the terminology of graphs rather than complexes, as we will make use of some other graphs later.
We will say generally that a code is a (possibly twisted) toric code on a \(2\)-complex if some \(2\)-complex exists giving that code following the prescription of the above paragraph. For us, a \(2\)-complex will have some graph for its \(1\)-skeleton, and the \(2\)-cells will be polygons attached to cycles of the graph.
Note indeed that a CSS code is a toric code on a \(2\)-complex if and only if every qubit participates in exactly two \(Z\)-stabilizer generators. The "only if" direction is obvious: each edge has two endpoints. We will discuss the "if" direction later, in Section III.4, when talking about integer homology.
The following question arises: if we consider toric codes on \(2\)-complexes, what is the maximum possible distance of the code? For an untwisted toric code, there are in fact two distances: \(d_{X}\), the distance against \(X\) errors and \(d_{Z}\), the distance against \(Z\) errors. The value of \(d_{X}\) is equal to the minimum weight of a representative of a nontrivial first homology class while \(d_{Z}\) is equal to the minimum weight of a representative of a nontrivial first cohomology class. The weight of a representative is defined to be the number of qubits on which it acts nontrivially.
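For very small instances these two distances can simply be computed by exhaustive search. The sketch below does this for the standard \(2\times 2\) toric code on a torus (8 qubits); the lattice indexing and the GF(2) routines are our own, and the search scales exponentially, so this is purely illustrative.

```python
# Brute-force d_X and d_Z for the 2x2 toric code: d_X is the minimum weight
# of a nontrivial first homology representative, d_Z of a nontrivial first
# cohomology representative.
import itertools
import numpy as np

L = 2
edges = [("h", x, y) for x in range(L) for y in range(L)] + \
        [("v", x, y) for x in range(L) for y in range(L)]
idx = {e: i for i, e in enumerate(edges)}
n = len(edges)

HZ = np.zeros((L * L, n), dtype=np.uint8)   # vertex (Z) stabilizers
HX = np.zeros((L * L, n), dtype=np.uint8)   # plaquette (X) stabilizers
for x in range(L):
    for y in range(L):
        row = x * L + y
        for e in [("h", x, y), ("h", (x - 1) % L, y),
                  ("v", x, y), ("v", x, (y - 1) % L)]:
            HZ[row, idx[e]] = 1             # star of vertex (x, y)
        for e in [("h", x, y), ("h", x, (y + 1) % L),
                  ("v", x, y), ("v", (x + 1) % L, y)]:
            HX[row, idx[e]] = 1             # boundary of plaquette (x, y)

def rank_gf2(M):
    M, r = M.copy() % 2, 0
    for c in range(M.shape[1]):
        pivots = np.nonzero(M[r:, c])[0]
        if len(pivots) == 0:
            continue
        M[[r, r + pivots[0]]] = M[[r + pivots[0], r]]
        M[(M[:, c] == 1) & (np.arange(M.shape[0]) != r)] ^= M[r]
        r += 1
    return r

def distance(H_commute, H_stab):
    """Minimum weight vector in ker(H_commute) not in rowspace(H_stab)."""
    r0 = rank_gf2(H_stab)
    for w in range(1, n + 1):
        for supp in itertools.combinations(range(n), w):
            c = np.zeros(n, dtype=np.uint8)
            c[list(supp)] = 1
            if ((H_commute @ c) % 2).any():
                continue                     # not a (co)cycle
            if rank_gf2(np.vstack([H_stab, c])) > r0:
                return w                     # nontrivial class found
    return n

print("d_X =", distance(HZ, HX), " d_Z =", distance(HX, HZ))   # both 2
```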
What about the distance of a twisted toric code? We will show that, roughly speaking, the twisting obtained from graph matching codes cannot improve the distance compared to an untwisted code. Consider some twisted toric code \(\mathcal{T}\) that arises from some graph matching code. Let \(\mathcal{T}^{\prime}\) be an untwisted toric code defined in the obvious way from \(\mathcal{T}\), using the same \(Z\)-stabilizer generators, and making the plaquette stabilizers products of Pauli \(X\) around
each plaquette. Then, any \(Z\)-type logical operator \(O\) of the untwisted \({\cal T}^{\prime}\) is a logical operator of the twisted \({\cal T}\); one may verify that \(O\) commutes with the stabilizers of \({\cal T}\) since they are the same as those of \({\cal T}^{\prime}\) up to possibly multiplying by Pauli \(Z\) operators and one may verify that \(O\) is not in the stabilizer group of \({\cal T}\) because any nontrivial product of plaquette stabilizers of \({\cal T}\) corresponds to some nontrivial product of cycles in \(S^{\prime}\), and each such product (by assumption on independence of \(S^{\prime}\)) will include at least one edge in the matching and hence will act as Pauli \(X\) or \(Y\) on at least one effective qubit. Further, any \(X\)-type logical operator of the untwisted \({\cal T}^{\prime}\) corresponds to some cycle in \(G_{eff}\) and hence to some cycle in \(G\) in the obvious way (each edge in \(G_{eff}\) is some edge in the matching, and we combine those edges with edges not in the matching to obtain a cycle in \(G\)), and that cycle in \(G\) then corresponds to some logical operator of the twisted \({\cal T}\): it will be a product of Pauli \(X\) on the edges in the cycle in \(G_{eff}\), possibly multiplied by Pauli \(Z\) on qubits in or adjacent to that cycle.
The simplest example of a toric code on a 2-complex is to consider "the" toric code on a cellulation of a 2-torus. In this case, one can obtain a code with \(N\) qubits and with \(d_{X},d_{Z}\) both being \(\Theta(\sqrt{N})\), where \(\Theta(\cdot)\) is computer science big-O notation. One may ask if one can do better.
If we demand that the code be LDPC (low-density parity check) also then we will show in Section III.3 that it is not possible that \(d_{X}\) and \(d_{Z}\) both be \(\omega(\sqrt{N})\), where again we use big-O notation, and \(\omega(\sqrt{N})\) means asymptotically larger than \(\sqrt{N}\). Here,
**Definition 2**.: _A CSS code is LDPC if all \(X\) and \(Z\) stabilizers act on \({\cal O}(1)\) qubits and all qubits are only in \({\cal O}(1)\) stabilizers (note that by construction, each qubit is in exactly \(2={\cal O}(1)\) \(Z\)-stabilizers)._
Remark: we use computer science big-O notation throughout. When we give a bound on distance, later, such as saying something in \({\cal O}(\sqrt{N})\), we mean that it is bounded by a constant times \(\sqrt{N}\) with the constant depending on the constants hidden in the definition of LDPC.
### Systolic Freedom for Codes on \(2\)-Complexes
There are examples of LDPC toric codes on 2-complexes where the product \(d_{X}d_{Z}\) is polylogarithmically larger than \(N\). The first example of this is in fact the first example[4] of any code to achieve \(d_{X}d_{Z}\) asymptotically larger than \(N\), and it is a toric code on a cellulation of a particular 3-manifold. The distances \(d_{X},d_{Z}\) are related to so-called "systoles" of this manifold (least area surfaces representing nontrivial homology), and the property that \(d_{X}d_{Z}\gg N\) is derived from a property of this manifold called systolic freedom (namely that the product of the areas of certain systoles is asymptotically larger than the volume of the manifold). So, we use the term "systolic freedom" to refer to \(d_{X}d_{Z}\gg N\).
The second example[5] gives \(d_{X}=\Theta(\log(N))\) and \(d_{Z}=\Theta(N)\), slightly improving the power of \(\log(N)\) in the product \(d_{X}d_{Z}\). In this case, the code is a toric code on a particular 2-complex which is a high dimensional expander.
To turn those toric codes with unbalanced distance (i.e., \(d_{Z}\gg d_{X}\) but \(d_{X}d_{Z}\gg N\)) into a code with \(d_{Z},d_{X}\gg\sqrt{N}\), it is possible to use a distance balancing trick. In the first example of a 3-manifold code[4], this distance balancing was done explicitly by constructing a toric code on a _four_-manifold, with degrees of freedom on _two_-cells. In later examples, an abstract distance balancing trick was used (see [20] which was then generalized and improved in Ref. [5]), but when applied to the toric code on a 2-complex, the result is again a toric code on a higher dimensional complex with degrees of freedom on cells of dimension \(>1\). So, in neither case do we obtain a toric code on a \(2\)-complex. In particular, in both cases, each qubit is in more than 2 \(Z\)-stabilizers.
There is however a simple way to balance distance, albeit at the cost of breaking the LDPC property. Choose some integer \(\ell>1\), and insert \(\ell-1\) vertices into each edge of the 2-complex. This subdivides each edge into \(\ell\) edges and \(\ell-1\) vertices in the obvious way. As a result, the boundaries of 2-cells get subdivided, and the weight of each \(X\)-stabilizer hence increases by a factor \(\ell\), so for \(\ell\gg 1\) the resulting code is not LDPC. This subdivision multiplies the number of qubits by \(\ell\), and multiplies \(d_{X}\) by \(\ell\) while leaving \(d_{Z}\) unchanged. So, choosing \(\ell=d_{Z}/d_{X}\) we obtain a code on \(Nd_{Z}/d_{X}\) qubits with \(d_{Z}=d_{X}\). Applying this to the construction of Ref. [5], we get a code on \(N^{\prime}\) qubits with \(d_{Z}=d_{X}=\Omega(\sqrt{N^{\prime}\log(N^{\prime})})\).
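To spell out the arithmetic behind the last two sentences (a routine expansion of the parameters already introduced):

\[N^{\prime}=N\ell=\frac{Nd_{Z}}{d_{X}},\qquad d_{X}^{\prime}=\ell d_{X}=d_{Z},\qquad d_{Z}^{\prime}=d_{Z}.\]

With \(d_{X}=\Theta(\log N)\) and \(d_{Z}=\Theta(N)\) this gives \(N^{\prime}=\Theta(N^{2}/\log N)\), so \(\log N^{\prime}=\Theta(\log N)\) and

\[d_{X}^{\prime}=d_{Z}^{\prime}=\Theta(N)=\Theta(\sqrt{N^{\prime}\log(N^{\prime})})=\Omega(\sqrt{N^{\prime}\log(N^{\prime})}).\]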
Remark: the reader may verify that this subdivision of each edge into \(\ell\) edges is equivalent to concatenating the given code with a repetition code with stabilizers \(Z_{1}Z_{2},Z_{2}Z_{3},\ldots,Z_{\ell-1}Z_{\ell}\). From standard results, this concatenation increases \(d_{X}\) by a factor \(\ell\), while leaving \(d_{Z}\) unchanged. The increase in weight of the \(X\)-stabilizers of the concatenated code then follows since we must express \(X\)-stabilizers in terms of logical operators of the repetition code.
Remark: one might attempt to restore the LDPC property by subdividing the 2-cells. Let us sketch a way to do this, giving a pictorial description. Unfortunately, this way does not give the desired distance properties of the code, but it is worth discussing. Consider some 2-cell. Its boundary is a circle, subdivided with \(m=O(\ell)\) vertices. Draw this boundary as a circle in the plane. Draw another circle inside it, subdivided with \(\lfloor m/2\rfloor\) vertices, and draw edges from one circle to the other, drawing the edges without crossing. For example, label the vertices in the outer circle by
\(1,2,\ldots,m\) in order around the circle, periodic mod \(m\), and label the vertices in the inner circle by \(1^{\prime},2^{\prime},\ldots,(\lfloor m/2\rfloor)^{\prime}\) in order, and then draw edges from vertex \(j^{\prime}\) to vertices \(2j,2j+1\) for each \(j\). Fill all the holes between the two circles in the drawing with 2-cells, so that there are 2-cells with boundary containing vertices \(j^{\prime},2j,2j+1\) and with boundary containing vertices \(2j+1,2j+2,j^{\prime},j^{\prime}+1\). Then, the local geometry is bounded, meaning that every vertex attaches to only \(\mathcal{O}(1)\) edges and that all 2-cells between the circles have only \(\mathcal{O}(1)\) cells in their boundary. Repeat this by again drawing a circle of \(\approx m/4\) vertices inside the previous one, and so on, approximately halving the number of vertices each time, until finally some circle has only \(\mathcal{O}(1)\) vertices and can be filled with a 2-cell. This subdivision can be thought of as introducing some "hyperbolic geometry" inside the outermost circle. Unfortunately, this subdivision creates "shortcuts" across the outermost circle, and the distance \(d_{X}\) of the code may be reduced by a factor proportional to \(\log(\ell)/\ell\). So, it does not suffice to give us an LDPC code with distance \(\omega(\sqrt{N})\).
### Systolic Almost-Rigidity
These examples of [4; 5] both have \(d_{X}d_{Z}\) only polylogarithmically larger than \(N\). However, there are several examples of stabilizer quantum codes where \(d_{X}d_{Z}\) is polynomially larger than \(N\); indeed, it is now possible to have \(d_{X}d_{Z}\sim N^{2}\). One may wonder: can one have \(d_{X}d_{Z}\) polynomially larger than \(N\) if one restricts to toric codes on 2-complexes?
The answer is no if one restricts to LDPC codes. This follows from a result called "systolic almost-rigidity" for simplicial complexes[2].
To use that result, first note that given any toric code on a 2-complex, we may turn the corresponding 2-complex into a simplicial complex (so that all 2-cells are triangles) by triangulating each 2-cell: pictorially, the triangulation can be seen quite simply by drawing each 2-cell as some polygon in the plane, adding an extra vertex in the center, and drawing edges from each vertex in the polygon to the center. This triangulation may reduce the distance \(d_{X}\) by creating "shortcuts" across the 2-cell, but given that the initial code is LDPC, the distance \(d_{X}\) is reduced by at most a constant factor. So, it suffices to consider toric codes on simplicial 2-complexes.
For toric codes on simplicial 2-complexes, theorem 2 of Ref. [2] shows that for any \(\epsilon>0\), we have \(d_{X}d_{Z}\leq\mathcal{O}(N^{1+\epsilon})\). To translate that theorem into the language of codes, their choice of a nontrivial first cohomology class \(\alpha\) corresponds to the choice of a nontrivial \(Z\)-type logical operator. Their quantity \(\operatorname{cut}^{\alpha}(M)\) equals the least weight of a representative of that \(Z\)-type logical operator; see Lemma 1 below. Their quantity \(\operatorname{sys}_{\alpha}(M)\) corresponds to the minimum, over all \(X\)-type logical operators which anticommute with the given \(Z\)-type logical operator, of the least weight representative of that \(X\)-type logical operator.
Thus, the theorem actually says a stronger statement than that \(d_{X}d_{Z}\leq\mathcal{O}(N^{1+\epsilon})\). It says that given any nontrivial \(Z\)-type logical operator, there is some \(X\)-type logical operator which anticommutes with it, and some representatives of those \(Z\)-type and \(X\)-type logical operators, such that the product of the distances is \(\leq\mathcal{O}(N^{1+\epsilon})\).
The authors of Ref. [2] conjecture that their theorem can be strengthened to bound \(d_{X}d_{Z}\leq\mathcal{O}(N\operatorname{polylog}(N))\), but do not prove this. It is also interesting to ask whether a similar result applies if we relax the LDPC assumption on the code.
Here is the lemma needed to translate their results to quantum information theory language. Remark: similar results appear in [2] when the complex is obtained by triangulating a manifold. There is nothing novel in our proof of the lemma below and we give it primarily to translate their results to quantum information theory terms.
**Lemma 1**.: _Given a choice of \(Z\)-type logical operator \(\alpha\), define a "cut" to be a subset of qubits \(H\) such that any representative of an \(X\)-type logical operator which anticommutes with \(\alpha\) must have support on \(H\). Define \(\operatorname{cut}^{\alpha}(M)\) to be the minimum cardinality of a cut. Remark: this is a paraphrase of the definition of Ref. [2] in quantum information theory terms. Then, \(\operatorname{cut}^{\alpha}(M)\) equals the minimum weight of a representative of logical operator \(\alpha\) and the minimum is attained when \(H\) is the support of such a minimum weight representative._
Proof.: First we show that if \(O\) is a representative of \(\alpha\), and if \(H\) is the support of \(O\), then \(H\) indeed is a cut. This is obvious, since if an operator anticommutes with logical operator \(\alpha\), and hence with \(O\), it must have support on the support of \(O\).
Next we will show that given any cut \(H\), there must be some representative \(O\) of \(\alpha\) supported on \(H\). Then, by the above paragraph, it follows that the minimum cut is attained on the support of a minimum weight representative.
To show that there is a representative of \(\alpha\) supported on \(H\), note that by CNOT gates supported on \(H\), and by CNOT gates supported on the complement of \(H\), we can bring the stabilizer group to the following form. There are some stabilizers, supported on \(H\) or on its complement, which are either \(X\) or \(Z\) on a single qubit. There are some qubits in \(H\) which are in EPR pairs with qubits outside \(H\); i.e., we have two stabilizers of the form \(XX^{\prime}\) and \(ZZ^{\prime}\) where the primed qubits are outside \(H\) and the unprimed qubits are inside \(H\). Finally, there are some qubits with
stabilizers \(XX^{\prime}\) or \(ZZ^{\prime}\), but not both, again where the primed qubits are outside \(H\) and the unprimed qubits are inside \(H\).
Now consider the logicals once the stabilizer group is in that form. If there is a pair of qubits with stabilizer \(XX^{\prime}\) but not \(ZZ^{\prime}\), then there are logicals \(X\) (or \(X^{\prime}\)) and \(ZZ^{\prime}\), while if they have stabilizer \(ZZ^{\prime}\) but not \(XX^{\prime}\), then there are logicals \(Z\) (or \(Z^{\prime}\)) and \(XX^{\prime}\). There may also be logical qubits where \(X\) and \(Z\) logical operators are both supported on \(H\) or both supported on the complement of \(H\).
Any \(Z\)-type logical operator then is of the form \(OO^{\prime}PQ\), where \(O\) is a product of \(Z\)-type logical operators on some logical qubits where both \(X\)- and \(Z\)-type logicals are supported on \(H\), where \(O^{\prime}\) is similar except the support is the complement of \(H\), and where \(P\) is a product of \(Z\)-type logical operators on logical qubits where the stabilizer group (after the transformation above) is of the form \(XX^{\prime}\) (i.e., \(P\) can be represented by a product of operators \(ZZ^{\prime}\) on those qubits) and \(Q\) is a product of \(Z\)-type logical operators on logical qubits where the stabilizer group (after the transformation above) is of the form \(ZZ^{\prime}\) (i.e., \(Q\) can be represented by a product of operators \(Z\) on those qubits).
However, by the assumption that \(H\) is a cut, \(O^{\prime}\) must be the identity operator, as otherwise there is an \(X\)-type logical supported outside \(H\) which anticommutes with it. Similarly, \(P\) must be the identity. But then any \(Z\)-type logical operator \(OQ\) has a representative on \(H\).
### Bound on Minimum Distance
**Lemma 2**.: _Let \(M\) be some \(2\)-complex corresponding to some LDPC code. Consider the corresponding chain complex with \(\mathbb{Z}_{2}\) coefficients. Let \(\mathrm{weight}(\cdot)\) of some cycle or cocycle denote the Hamming weight. Consider some nontrivial \(1\)-cocycle \(S\) which has the property that \(S\) cannot be written as the sum of two cocycles, each with lower weight than \(S\) and such that \(S\) is a minimum weight representative of some given cohomology class. Then \(\mathrm{weight}(S)=\mathcal{O}(\sqrt{N})\) or there is some \(1\)-cycle \(C\) which has inner product \(1\) with \(S\) such that \(\mathrm{weight}(C)=\mathcal{O}(\sqrt{N})\)._
Proof.: We use a graph metric for distance on the \(1\)-skeleton of \(M\).
Let \(\mathrm{supp}(S)\) be the support of \(S\), i.e., the set of edges with nonvanishing coefficient in \(S\).
Define a "boundary set" of vertices to be the set of vertices which are in the boundary of some edge in \(\mathrm{supp}(S)\).
Consider the following "boundary graph" \(B\): the vertex set is the boundary set and there is an edge between two vertices if the distance between them, using a graph metric on \(M\setminus\mathrm{supp}(S)\), is bounded by the largest diameter of a \(2\)-cell in \(M\). We claim that this graph is connected: indeed, if not, then for any connected component, the sum of edges in \(\mathrm{supp}(S)\) which are in the coboundary of a vertex in that connected component would define some cocycle with lower weight than \(S\), because (by definition of the boundary graph) no \(2\)-cell is in the coboundary of two different edges, with one edge in the coboundary of a vertex in one connected component and the other edge in the coboundary of a vertex in another connected component. The sum of these cocycles over connected components would equal \(S\).
Consider any vertex \(i\) in the boundary set. For any \(r\), let \(b_{r}(i)\) be the set of vertices of \(M\) which can be reached from \(i\) by a path of length at most \(r\) which avoids the support of \(S\). Here we use the graph metric on \(M\) for the path length.
Let \(c_{0}=\mathcal{O}(1)\) be a constant chosen later. We consider two cases: either, for some \(r\leq c_{0}\sqrt{N}\), there is some edge \(e\) in \(\mathrm{supp}(S)\) which is in the coboundary of two different vertices in \(b_{r}(i)\), or there is no such edge for any such \(r\).
In the first case, we can identify a \(1\)-cycle \(C\) with \(\mathrm{weight}\leq 1+2c_{0}\sqrt{N}\): the edge \(e\) has two vertices \(j,k\) in its boundary, and there is some path \(P\) of length \(\leq 2c_{0}\sqrt{N}\) which avoids \(\mathrm{supp}(S)\) from \(j\) to \(k\). Then, concatenate \(P\) with \((j,k)\) to get a path of length \(\mathcal{O}(\sqrt{N})\) which intersects \(\mathrm{supp}(S)\) exactly once. Let \(C\) be the \(1\)-cycle which is the sum of edges in that path.
So, consider the second case. In an abuse of notation, let \(b_{r}(i)\) also denote the \(0\)-cochain which is the sum of vertices in the given set \(b_{r}(i)\). Consider the \(1\)-cochain \(S^{\prime}=S+\partial^{T}b_{r}(i)\), where \(\partial^{T}\) is the coboundary operator. The weight of \(S^{\prime}\) is equal to the sum of two weights, the first being the weight of \(S^{\prime}\) restricted to the support of \(S\), and the second being the weight of \(S^{\prime}\) restricted to the complement of the support of \(S\). Call these weights \(w_{1},w_{2}\) respectively.
We claim \(\mathrm{weight}(S)-w_{1}=\Omega(r)\). Indeed, assume \(\mathrm{weight}(S)\geq c_{0}\sqrt{N}\), as otherwise the lemma follows. Then, since graph \(B\) is connected, for \(r\leq c_{0}\sqrt{N}\), the intersection of \(b_{r}(i)\) with the boundary set has cardinality \(\Omega(r)\), where the constant hidden in the big-O notation depends only on the constants hidden in the definition of LDPC, not on the choice of \(c_{0}\). Note that by the assumption that we are in the second case, there is no edge in \(S\) which is in the coboundary of two distinct vertices in \(b_{r}(i)\).
So, since \(S\) is minimal, \(w_{2}=\Omega(r)\), as otherwise \(S^{\prime}\) would be lower weight than \(S\). Hence, for all \(r=\mathcal{O}(\sqrt{N})\), \(b_{r}(i)\) has \(\Omega(r)\) edges in its coboundary, and hence, \(b_{r}(i)\) has cardinality \(\Omega(r^{2})=\Omega(c_{0}^{2}N)\). Let us put back in the constants hidden in the big-O notation: the cardinality of \(b_{r}(i)\) is \(\geq c^{\prime}c_{0}^{2}N\), for some constant \(c^{\prime}\) which depends on the constants hidden in the definition of LDPC. However, there are only \(N/2\) vertices in \(M\), as there are \(N\) edges. Hence \(c^{\prime}c_{0}^{2}\leq 1/2\).
Since \(c^{\prime}\) is fixed, we may choose \(c_{0}\) large enough to give a contradiction, i.e., \(S\) has weight \(\leq c_{0}\sqrt{N}\) or there is some \(C\) with inner product \(1\) with \(S\) which has weight \(\leq 2c_{0}\sqrt{N}+1\).
A figure may help understand Lemma 2. See Fig. 1.
### A Remark on Distances with Integer Homology
Out of interest, let us consider some related bounds on "distances" in the case of integer homology. That is, we consider the homology of the chain complex _with integer coefficients_ associated with the given 2-complex defined by the quantum code. At this point, we must first remark that in the case of integer coefficients there may be more than one choice of chain complex associated with a given quantum code. For example, suppose we have a code on three qubits, with stabilizer generators \(Z_{1}Z_{2},Z_{2}Z_{3},Z_{3}Z_{1}\) and \(X_{1}X_{2}X_{3}\). This of course corresponds to a 2-complex with a single 2-cell which is a triangle, with the edges corresponding to qubits \(1,2,3\) respectively. Depending on the orientation of the triangle, one defines different chain complexes with integer coefficients which agree, mod 2, with the \(\mathbb{Z}_{2}\) chain complex corresponding to the given quantum code.
In [9], such an integer chain complex was called a "lift" of the \(\mathbb{Z}_{2}\) chain complex. While every \(\mathbb{Z}_{2}\) chain complex admits a lift, in Ref. [9], it was conjectured that not every \(\mathbb{Z}_{2}\) chain complex corresponding to an LDPC code admits a "sparse lift". Here, a "sparse lift", means a lift such that every row and column of every boundary operator of the chain complex has the sum of the absolute values of its entries bounded by \(\mathcal{O}(1)\).
However, in the case of a code where every qubit participates in two \(Z\)-stabilizer generators, the corresponding \(\mathbb{Z}_{2}\) chain complex does admit some sparse lift. This sparse lift is given by the chain complex with integer coefficients corresponding to the 2-complex corresponding to that code. We now explain how to construct that 2-complex. First define the graph corresponding to the \(Z\)-stabilizer generators. Consider some 2-cell, which corresponds to some \(X\)-stabilizer of form \(X_{1}X_{2}\ldots X_{k}\) on some number \(k\) of qubits. Each qubit corresponds to some edge \((i,j)\) of the graph where \(i,j\) are vertices. We identify edge \((i,j)\) with edge \((j,i)\). By the assumption that the \(Z\)- and \(X\)-stabilizers commute, the qubits \(1,2,\ldots,k\) can be ordered so that the corresponding edges define some cycle on the graph: they are in some sequence \((i_{1},i_{2}),(i_{2},i_{3}),\ldots,(i_{k},i_{1})\). Then, simply define the boundary of the 2-cell to be those edges \((i_{1},i_{2}),(i_{2},i_{3}),\ldots,(i_{k},i_{1})\) in that sequence, orienting each edge from \(i_{a}\) to \(i_{a+1}\). In the corresponding chain complex, to choose a basis, we will pick an arbitrary orientation of each edge. Thus, the boundary operator acting on the given 2-cell will map it to \(\pm(i_{1},i_{2})\pm(i_{2},i_{3})\pm\ldots\), where each sign depends on whether the arbitrary orientation of each edge is from \(i_{a}\) to \(i_{a+1}\) or from \(i_{a+1}\) to \(i_{a}\).
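A short sketch of this orientation step may be useful: given the support of one \(X\)-stabilizer as a set of edges of the \(Z\)-check graph, it orders the edges into a closed walk and emits the signed boundary coefficients relative to an arbitrary reference orientation. We assume, as in the construction above, that the support is a single simple cycle; the function and variable names are ours, and the example is the triangle code mentioned earlier.

```python
def oriented_boundary(stab_edges, reference_orientation):
    """stab_edges: set of frozenset({i, j}) edges supporting one X-stabilizer.
    reference_orientation: dict frozenset({i, j}) -> (i, j), arbitrary but fixed.
    Returns {edge: +1 or -1}, the integer boundary of the attached 2-cell."""
    edges = set(stab_edges)
    start = at = reference_orientation[next(iter(edges))][0]
    coeffs = {}
    while edges:
        e = next(e for e in edges if at in e)   # next edge of the closed walk
        edges.remove(e)
        i, j = reference_orientation[e]
        coeffs[e] = +1 if i == at else -1       # along or against the reference
        at = j if i == at else i
    assert at == start                          # the walk closes up
    return coeffs

# The triangle example: X_1 X_2 X_3 on qubits sitting on the edges
# (a,b), (b,c), (c,a) of the Z-check graph, with arbitrary orientations.
E = [frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "a")]]
ref = {E[0]: ("a", "b"), E[1]: ("c", "b"), E[2]: ("c", "a")}
print(oriented_boundary(set(E), ref))
```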
Now that we can define a chain complex with integer coefficients corresponding to some quantum code, we consider the integer homology. A representative of homology or cohomology is some vector in a vector space with integer coefficients, and we define the weight of that representative to be the sum of absolute values of the coefficients. We denote this weight by weight(\(\cdot\)).
Figure 1: A sketch to illustrate the proof of Lemma 2. Suppose \(M\) has the topology of a cylinder. The curving lines on top and bottom of the figure represent circles in the boundary of the cylinder, and the left side of the figure is attached to the right side to give a cylinder. \(M\) occupies the region between these lines; we do not show the cells in \(M\) but they are implicit in the figure. The vertical line is the support of \(S\), i.e., the edges in \(S\) are those which cut the vertical line. The curving line in the middle of the figure represents part of the coboundary of \(b_{r}(i)\), namely the part not touching the vertical line, in particular those edges in the coboundary are those cut by the curving line. The number of edges cut by this curving line represents \(w_{2}\). So, as we increase \(r\), the “bubble” increases in size, and \(w_{2}\) increases while \(w_{1}\) decreases.
**Lemma 3**.: _Let \(M\) be some \(2\)-complex corresponding to some LDPC code. Consider the corresponding chain complex with integer coefficients. Consider some nontrivial cocycle \(S\) which has the property that \(S\) cannot be written as the sum of two cocycles, each with lower weight than \(S\), and such that \(S\) is a minimum weight representative of some given cohomology class. Then \(\operatorname{weight}(S)=\mathcal{O}(\sqrt{N})\) or there is some \(1\)-cycle \(C\) which has nonvanishing inner product with \(S\) such that \(\operatorname{weight}(C)=\mathcal{O}(\sqrt{N})\)._
Proof.: We use a graph metric for distance on the \(1\)-skeleton of \(M\). Let \(\operatorname{supp}(S)\) be the support of \(S\).
If necessary, we subdivide2 the edges of \(M\) so that the cohomology representative has the property that it has coefficients either \(0\) or \(\pm 1\) on each edge, e.g., if \(S\) has coefficient \(2\) on some edge of \(M\), we subdivide that edge in two and give the representative coefficient \(1\) on each edge. Then, we define "positive" and "negative" sets of vertices: if some edge \((i,j)\) is in \(\operatorname{supp}(\tilde{S})\), then \(i\) is in the positive set if the edge is oriented from \(j\) to \(i\) and the coefficient is \(+1\) or if the edge is oriented from \(i\) to \(j\) and the coefficient is \(-1\); otherwise, \(i\) is in the negative set. The vertex \(j\) is in the positive set if \(i\) is in the negative set and vice-versa. If needed, we further subdivide the edges so that no vertex is in the positive set from some edge and in the negative set from some other edge, e.g., in fact if \(S\) has coefficient \(2\) on some edge of \(M\), we should subdivide that edge in three, and give \(S\) coefficients \(1,0,1\) on the three edges in order; then, the first vertex is positive, the second is negative, the third is positive, and the fourth is negative. Further, we subdivide the \(2\)-cells of \(M\), adding additional edges, so that each \(2\)-cell has only two edges with nonvanishing coefficients in its boundary.
Footnote 2: By “subdivide”, we mean precisely that: subdivide the \(1\)-cell, by replacing it with \(k\) \(1\)-cells and \(k-1\) \(0\)-cells arranged in a line for some \(k\). This may increase the number of \(1\)-cells in the boundary of a \(2\)-cell, but this effect on the LDPC property does not matter for this proof. This is primarily a bookkeeping device.
We refer to the subdivided \(M\) simply as \(\tilde{M}\), and refer to the corresponding cohomology representative as \(\tilde{S}\). On the edges of \(\tilde{M}\) arising from subdivision, we use any metric so long as the total length of a subdivided edge of \(M\) is equal to \(1\); any added edges from subdividing \(2\)-cells are given length \(1\).
Consider the following "positive graph" \(G^{+}\): the vertex set is the set of positive vertices and there is an edge between two vertices if the distance between them, using a graph metric on \(\tilde{M}\setminus\operatorname{supp}(\tilde{S})\), is bounded by the largest diameter of a \(2\)-cell in \(\tilde{M}\). We claim that this graph is connected: indeed, if not, for each connected component, restricting \(\tilde{S}\) to the edges which are in the coboundary of some vertex in that connected component would define some cocycle with lower weight than \(S\); more precisely, it gives some cocycle of \(\tilde{M}\) which pulls back to some cocycle of \(M\) with lower weight than \(S\). Define a "negative graph" \(G^{-}\) similarly.
Let \(\ell\) be the distance from the positive set to the negative set on the \(1\)-skeleton of \(\tilde{M}\) with \(\operatorname{supp}(\tilde{S})\) removed, i.e., \(\ell\) is the minimal length of a path from positive set to negative set avoiding \(\operatorname{supp}(\tilde{S})\). We claim that \(\operatorname{weight}(S)=\operatorname{weight}(\tilde{S})\leq N/\ell\). Indeed, consider any \(d\), \(0<d<\ell\). Consider the cochain equal to \(\tilde{S}\) plus the sum, over all vertices \(v\) which can be reached by a path of distance at most \(d\) which starts at the positive set and avoids the support of \(\tilde{S}\), of the coboundary of \(v\). This is some other cochain in the same cohomology class as \(\tilde{S}\), and for distinct \(d\) these cochains are disjoint. By assumption that the given \(S\) is minimal (and hence \(\tilde{S}\) is minimal), each of these other cochains has weight \(\geq\operatorname{weight}(S)\) and the sum of their weights is at most \(N\).
Now, for any integer \(k\geq 1\), define a \(k\)-fold cover of \(\tilde{M}\) in the obvious way, using the \(\tilde{S}\) as a "cut" to define the cover3. Call this cover \(\tilde{M}(k)\). Choose some pre-image of \(\tilde{S}\) in this cover such that it gives a nontrivial cocycle of the same weight as \(\tilde{S}\); call this pre-image \(\tilde{S}(k)\), and define positive and negative graphs and positive and negative sets as before. Note that \(\tilde{S}(k)\) is minimal weight, as any cocycle of lower weight in the cover \(\tilde{M}(k)\) would define some cocycle of lower weight than \(\tilde{S}\) in \(\tilde{M}\) by applying the covering map. Now we claim that the distance from the positive to the negative set in the \(1\)-skeleton of \(\tilde{M}(k)\) with \(\operatorname{supp}(\tilde{S}(k))\) removed is at most \(k\ell\leq kN/\operatorname{weight}(S)\). Indeed, this follows by the same argument as in the above paragraph.
Footnote 3: i.e., vertices in the cover are labelled by a pair, giving a vertex in \(\tilde{M}\) and an integer taken periodic mod \(k\). Going from positive to negative set in \(\tilde{S}\) increases this integer by one, while going on other edges does not change the integer.
So, there is some path of length at most \(kN/\operatorname{weight}(S)\) from the positive to the negative set in the \(k\)-fold cover, and under the covering map this defines some path in \(\tilde{M}\) which starts on the positive set, goes from the negative set to the positive set \(k-1\) times by an edge in \(\tilde{S}\) without ever going from positive set to negative set by an edge in \(\tilde{S}\), and finally ends on the negative set. Extend this path by a single step so that it finally ends on the positive set. Call the resulting path \(P\). This path \(P\) crosses from negative to positive set a total of \(k\) times by an edge in \(\tilde{S}\). Let \(i_{1}\) be the start of path \(P\). When path \(P\) crosses from negative set to positive set for the \(m\)-th time, call that point in the positive set \(i_{m+1}\). So, we have a sequence of points \(i_{1},i_{2},\ldots,i_{k+1}\) on the positive set.
Pick \(k=\lceil c\cdot\operatorname{weight}(S)/\sqrt{N}\rceil\) for a constant \(c=\mathcal{O}(1)\) chosen later, so that \(P\) has length \(\mathcal{O}(\sqrt{N})\). Note that we may assume \(\operatorname{weight}(S)=\Omega(\sqrt{N})\) as otherwise the lemma follows.
Since the positive set is connected, and since by assumption \(\text{weight}(S)=\Omega(\sqrt{N})\), then within distance \(\sqrt{N}\) of any point in the positive set there are \(\Omega(\sqrt{N})\) other points in the positive set. Picking \(c=\mathcal{O}(1)\) large enough, then, by the pigeonhole principle, there must be some \(m,n\) with \(1\leq m<n\leq k\) such that some point within distance \(\sqrt{N}\) of \(i_{m}\) is also within distance \(\sqrt{N}\) of \(i_{n}\) by a path which avoids \(\text{supp}(\tilde{S})\), and so \(i_{m}\) is within distance \(2\sqrt{N}\) of \(i_{n}\) by such a path; denote that path from \(i_{m}\) to \(i_{n}\) by \(Q\). Now form the following closed path \(C\): use some segment in the middle of path \(P\) to get a path from \(i_{m}\) to \(i_{n}\) which crosses from negative set to positive set \(n-m\) times and so has inner product \(n-m\) with \(\tilde{S}\), and then concatenate that segment with \(Q\). This gives a closed path \(C\) of total length at most \(\mathcal{O}(\sqrt{N})\). This path corresponds to a 1-cycle of weight \(\mathcal{O}(\sqrt{N})\) which has inner product \(n-m\) with \(\tilde{S}\).
If we had done some subdivision at the start of the proof, we deform this 1-cycle \(C\) so it lies on the 1-skeleton of \(M\), avoiding any edges in \(\tilde{M}\) used to subdivide 2-cells, leading to at most an \(\mathcal{O}(1)\) factor increase in weight.
Remark: note that the inner product of \(C\) with \(S\) may be even. So, if we had considered the case of \(\mathbb{Z}_{2}\) coefficients, and we had assumed that some nontrivial cocycle \(S_{0}\) with \(\mathbb{Z}_{2}\) coefficients lifts to some cocycle \(S\) with integer coefficients, it is possible that \(C\) constructed in the above lemma might not give a cycle with \(\mathbb{Z}_{2}\) coefficients which has nontrivial inner product with \(S_{0}\).
## IV Vacancies and High Weight Vertices in Check Graph
### Vacancies and Check Graph
An interesting question studied by several authors is how to deal with dead qubits in a planar quantum code. Suppose qubits are chosen randomly and independently to be dead with some probability \(P_{\text{fail}}\). For small enough \(P_{\text{fail}}>0\), for small enough noise probability \(p\), is the probability of a logical error vanishing in the thermodynamic limit?
We are interested in the question of "circuit level noise". In a simple model of this, one alternately applies noise and measures stabilizers with some probability of error in the stabilizer measurement; the goal is to preserve logical information for some time which diverges as the system becomes large.
To have a threshold, it is necessary to measure "super-stabilizers". That is, dead qubits create holes in the lattice, which create extra logical qubits. The "super-stabilizers" measure these new logical operators. Various schemes have been proposed in both surface and Floquet codes[21; 22; 23; 24; 25; 26] and a threshold has been proven in one particular scheme[24] with a carefully chosen schedule to measure the super-stabilizers where one has long periods (with a time depending on the size of the hole) in which the hole has one boundary condition followed by long periods in which the hole has a different boundary condition.
However, no threshold has been proven in what are perhaps the most natural schemes, in which the hole alternates its boundary conditions on a time \(\mathcal{O}(1)\). These schemes lead to a "check graph" with columns of high weight vertices in spacetime, making a simple Peierls argument fail[26]. We analyze the statistical mechanics of this situation. Our goal is not to prove a threshold; rather, our goal is to give statistical mechanics arguments for the correct asymptotics of decoding in this case.
First, let us say what a check graph is. We are considering the question of using a quantum code as a memory, where it is run for some long time, and we repeatedly measure stabilizers or checks of the code, while errors can occur at arbitrary times. For simplicity, time is taken to be discrete. For some codes (such as surface code or the honeycomb code), it is possible to find a basis of Pauli errors (i.e., choosing to expand errors in a particular basis of two of the three possible single qubit Pauli operators) such that each error will cause one or two "detection events" to occur. If each error causes two detection events, we can construct a check graph, where each edge corresponds to some possible error location in spacetime and some type (i.e., Pauli \(X,Y\), or \(Z\)) of error, and where each vertex corresponds to some detection event, where a detection event is some particular product of measurements that should equal some fixed value \(\pm 1\) if no error occurs. An error on an edge flips the detection events in the vertices attached to that edge. For some given error pattern (which is a 1-chain, i.e., a sum of edges) that occurs, and some given resulting pattern of detection events, a decoder finds a decoding by choosing some other error pattern such that the sum of the two error patterns is a 1-cycle.
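As a toy illustration of this last step, the following sketch samples independent edge errors on a small check graph in which every error flips two detection events, pairs up the resulting detection events by minimum-weight matching along shortest paths, and verifies that the actual errors plus the correction form a \(1\)-cycle. This is a generic matching decoder written with networkx; the graph, parameters, and names are arbitrary choices of ours, not the specific decoders analyzed below.

```python
import itertools
import random
import networkx as nx

random.seed(0)
G = nx.grid_2d_graph(5, 5, periodic=True)        # a small check graph
p = 0.05
errors = {e for e in G.edges() if random.random() < p}

# Detection events: vertices touched by an odd number of error edges.
parity = {v: 0 for v in G.nodes()}
for (u, v) in errors:
    parity[u] ^= 1
    parity[v] ^= 1
defects = [v for v, flipped in parity.items() if flipped]

# Minimum-weight pairing of defects: maximum-weight matching on negated lengths.
K = nx.Graph()
K.add_nodes_from(defects)
for a, b in itertools.combinations(defects, 2):
    K.add_edge(a, b, weight=-nx.shortest_path_length(G, a, b))
pairing = nx.max_weight_matching(K, maxcardinality=True)

correction = set()                                # mod-2 sum of matched paths
for a, b in pairing:
    path = nx.shortest_path(G, a, b)
    for u, v in zip(path, path[1:]):
        correction ^= {frozenset((u, v))}

# The sum (symmetric difference) of errors and correction is a 1-cycle:
combined = {frozenset(e) for e in errors} ^ correction
for v in G.nodes():
    assert sum(v in e for e in combined) % 2 == 0
print(len(errors), "errors,", len(defects), "detection events decoded")
```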
We have said that an error may cause one or two detection events. This means we also allow "dangling edges" in a check graph. A dangling edge only has a detection event on one of its vertices, and corresponds to an error which causes only one detection event. These will be convenient to use later. Edges are not dangling unless otherwise specified.
In a code like the surface code, there actually are two distinct check graphs, one for \(Z\)-type errors and one for \(X\)-type errors.
There is a useful homological description. We have associated error patterns with 1-chains with \(\mathbb{Z}_{2}\) coefficients: the chain has a coefficient 1 on edges in the given error pattern. Implicitly, we also have some 2-cells attached to the graph, and any two 1-chains in the same homology class describe the same decoding.
For example, let us first describe the check graph and 2-cells for a simple model without vacancies. Consider a square lattice (of size depending on the number of qubits). There is a 2-cell for each square in the lattice. Call this the "spatial check graph". Now take this spatial check graph and cross it with a cellulation of the interval with \(T\) 0-cells for some integer \(T\) giving the total time for which we run error correction, i.e., take the homological product of this 2-complex with this 1-complex. The product gives some graph in three dimensions. Every vertex in the product then corresponds to a pair: a vertex in the spatial check graph and a time coordinate. We describe edges in the product as "spacelike" or "timelike" in the obvious way: spacelike edges connect vertices with the same time coordinate. We use the term "time slice" to describe a subcomplex consisting of all 0-, 1-, and 2-cells at a given time coordinate.
The spatial check graph can be considered on various topologies to give nontrivial homology, i.e., to encode logical qubits. For example, it could be on a torus, or, to make it planar, it could correspond to a patch of surface code with "rough" boundary conditions on one pair of opposite edges and "smooth" boundary conditions on the opposite pair of edges. In the second case, rough boundaries have dangling edges for one type of Pauli error, while smooth boundaries have dangling edges for the opposite type of Pauli error, and nontrivial homology representatives are described by chains stretching from one face to the opposite face.
Now we describe the check graph for these planar quantum codes with vacancies. We actually give a "cartoon" of the check graph, giving a slightly simplified model that skips over some details. However, this model captures the essential features of high degree vertices, and the results in this model should apply also to more specific models with some modifications of details.
We will call the model described here the "vacancy model".
First, a spatial check graph is obtained in a few steps starting with the square lattice. First, remove edges, corresponding to the dead qubits. Remove any vertices and 2-cells attached to removed edges. This will create some "holes" in the graph. Note our terminology: a "hole" is created by removing one or more edges corresponding to dead qubits. At the center of each hole, add a single vertex. We will call this vertex the "hole center" later. Attach one edge from the hole center to each vertex around the boundary of the hole, drawing the edges so that they do not cross, and attach 2-cells to each region in the spatial check graph bounded by these added edges.
Importantly, note that if there are many dead qubits, the hole center may have high degree.
Now again we take this spatial check graph and cross it with a cellulation of the interval with \(T\) 0-cells for some integer \(T\) giving the total time for which we run error correction, i.e., take the homological product of this 2-complex with this 1-complex.
This gives some three-dimensional graph. We will call any instance of this graph \(G_{\rm bulk}\) below. The "bulk" of \(G_{\rm bulk}\) refers to any area away from a high degree vertex.
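A rough sketch of this construction, ignoring the \(2\)-cells and the super-stabilizer schedule, is below; the lattice size, the dead-qubit set, and the names are our own choices.

```python
# Spatial check graph of the vacancy model: delete dead-qubit edges and their
# endpoints, attach a "hole center" to the surviving boundary vertices, then
# cross with a cellulation of the time interval to get (the 1-skeleton of)
# G_bulk.
import networkx as nx

G = nx.grid_2d_graph(8, 8)
dead_edges = [((3, 3), (3, 4)), ((3, 4), (4, 4))]      # hypothetical dead qubits

dead_vertices = {v for e in dead_edges for v in e}
boundary = {w for v in dead_vertices for w in G.neighbors(v)} - dead_vertices

spatial = G.copy()
spatial.remove_nodes_from(dead_vertices)               # removes attached edges too
center = "hole-0"
spatial.add_edges_from((center, w) for w in boundary)
print("hole center degree:", spatial.degree(center))   # high if many dead qubits

T = 10
G_bulk = nx.cartesian_product(spatial, nx.path_graph(T))
print(G_bulk.number_of_nodes(), "spacetime vertices")
```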
The error model that we consider has a probability of \(p\) that there is an error on each edge, independently of other edges. This includes both spacelike and timelike edges.
Our general picture is the following. As is well-known and as we review below, various decoders (both Monte Carlo and minimum weight decoders) lead to error chains which are 1-cycles by combining the actual errors with the error pattern given by the decoder. These 1-cycles can be decomposed into closed paths (i.e., a single cycle may decompose into several closed paths), and a logical error may occur if homologically nontrivial paths exist. Away from the high degree hole centers, a Peierls argument (reviewed below) works to bound the probability of long chains and we in fact believe that this picture is quantitatively correct (up to subleading corrections) at small enough \(p\) in the thermodynamic limit.
However, this Peierls argument breaks down near the high degree vertices. Further, as we show below, near a high degree vertex, even a maximum likelihood decoder has its limitations. We begin with a simplified model in which errors can occur only on edges very close to the vertex, and find that even a maximum likelihood decoder has a performance which is exponentially poor in the degree of the vertex. In the simplest problem, we just consider a column of high degree vertices, with some nontrivial homology (induced by either taking it periodic in the time direction or adding dangling edges at the initial and final times), and we find that the distance in the time direction must be exponentially large in the degree to decode significantly better than random chance. Then, of course, if a column of high degree vertices exists in a larger graph \(G_{\rm bulk}\), the decoding can only get worse. However, we believe that our toy model essentially captures the problem of decoding near a high degree vertex. Our general picture then is that the dominant way in which we can have a homologically nontrivial error chain is that in the bulk the error chains are as short as possible, and can be described by a Peierls argument, but near a hole center it is hard to distinguish different error chains. We believe that an effective model is to consider a new graph as follows. The spatial check graph is given by taking one vertex for each hole center. In addition, we have one vertex for every vertex in \(G_{\rm bulk}\) with a dangling edge. There is an edge between every pair of vertices. Then take the product of this spatial check graph with a cellulation of an interval. The probability of error on each edge is as follows: for a spacelike edge,
the error probability is as given by a Peierls argument (roughly \(p\) raised to the power of half the distance between centers), and is exponentially small in the spacing between the two hole centers (on a torus, one takes the shortest path; if homologically inequivalent paths have the same lengths, then the error probability is exponentially small in system size and so is negligible in the thermodynamic limit). For a timelike edge, the error probability is described by our simplified models of decoding near a hole center below, and so the error probability is exponentially close to \(1/2\). Given this simplified model, it is possible that a Peierls argument works, depending on the error probabilities on the various edges: while the error probability is very close to \(1/2\) on timelike edges, there is only one such path and weights for the paths to other hole centers are exponentially small. Indeed, if the hole centers are sufficiently separated then a Peierls argument will work. However, the Peierls argument will fail if two sufficiently high degree hole centers are sufficiently close in space. We analyze this case in the end, and argue that a correct description is that they are replaced by a single vertex of higher degree. For typical disorder configurations, ultimately one ends at an effective model that can be treated by a Peierls argument.
_An Application of this Effective Model--_ Before explaining where the effective model (with one vertex per hole center) arises, we give a brief application. Consider a toric code on a square patch of linear size \(\ell\), with smooth boundaries on two opposite sides and rough boundaries on the other sides. If there are no vacancies, then we use a Peierls argument to estimate the error probability. After a time \(T\), the error probability is \(\approx T\exp(-c\ell)\) for some constant \(c\) depending on \(p\) determined by the Peierls argument. Of course, for large enough \(T\), the error probability saturates at \(1/2\). Suppose instead we have a single hole, with hole center of degree \(d\), at distance \(r\) from one side and distance \(\ell-r\) from the opposite side, with \(r\leq\ell/2\).
Now, we can have a path of errors from one side to the opposite side going through the hole center. For small \(T\), the error probability is \(\approx T^{2}\exp(-c\ell)\). The error probability on the timelike edge on the hole center is \(1/2\) minus \(\approx\exp(-c^{\prime}d)\) for some constant \(c^{\prime}\) depending on \(p\). Once \(T\) becomes large compared to \(\min(\exp(cr),\exp(c^{\prime}d))\), then the error probability crosses over to linear behavior. If \(\exp(-cr)\ll\exp(-c^{\prime}d)\), then the error probability crosses over to \(T\exp(c^{\prime}d)\exp(-c\ell)\). If \(\exp(-cr)\gg\exp(-c^{\prime}d)\), then the error probability crosses over to \(T\exp(-c(\ell-r))\).
_Maximum Likelihood and Monte Carlo Decoding--_ Recall how maximum likelihood decoding works for a decoding graph. Maximum likelihood decoding can be obtained by defining a certain statistical mechanical model. In this model, the only allowed error patterns are those which give the observed syndrome, and each error pattern has a probability proportional to \((p/(1-p))^{n_{E}}\), where \(n_{E}\) is the total number of errors in the given pattern[27]. We can then compute the probability that the error pattern is in a given homology class, and the most likely homology class defines the maximum likelihood decoding. This can equivalently be described by saying: compute, for each homology class, a "partition function in the homology class" which is the sum over all patterns in that class with weight \((p/(1-p))^{n_{E}}\), and take the homology class with the largest partition function. The relative partition functions in different classes give the relative probabilities of the different decodings.
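For a small enough check graph this prescription can be carried out exactly. The sketch below does so for the check graph of a \(3\times 3\) toric code for one error type, enumerating the syndrome coset as the actual error plus every element of the cycle space; the error pattern, the choice of winding cuts used to label homology classes, and all names are our own.

```python
# Maximum likelihood decoding as "partition functions per homology class",
# by exhaustive enumeration (2^10 patterns share the syndrome here).
from collections import defaultdict
from itertools import combinations

L, p = 3, 0.05
def plaquette(x, y):                    # boundary of the 2-cell at (x, y)
    return frozenset({("h", x, y), ("h", x, (y + 1) % L),
                      ("v", x, y), ("v", (x + 1) % L, y)})

# Basis of the cycle space: 8 independent plaquettes plus two nontrivial loops.
basis = [plaquette(x, y) for x in range(L) for y in range(L)][:-1]
basis.append(frozenset({("h", x, 0) for x in range(L)}))   # horizontal loop
basis.append(frozenset({("v", 0, y) for y in range(L)}))   # vertical loop

def homology_class(pattern):            # winding parities across two fixed cuts
    wx = sum(1 for (k, x, y) in pattern if k == "h" and x == L - 1) % 2
    wy = sum(1 for (k, x, y) in pattern if k == "v" and y == L - 1) % 2
    return (wx, wy)

actual = frozenset({("h", 0, 0), ("v", 1, 1)})    # a hypothetical error pattern

# Every pattern with the same detection events is actual + some 1-cycle.
Z = defaultdict(float)
for r in range(len(basis) + 1):
    for combo in combinations(basis, r):
        cycle = frozenset()
        for b in combo:
            cycle ^= b
        pattern = actual ^ cycle
        Z[homology_class(pattern)] += (p / (1 - p)) ** len(pattern)

print("true class:", homology_class(actual), " ML class:", max(Z, key=Z.get))
```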
We can give these error patterns a pictorial description that will be useful later. Let us color edges where an error actually occurs with a red color. The partition function in a given homology class is a weighted sum over error patterns that give the same boundary. Consider any such error pattern and color edges of the graph in that error pattern blue. Thus, the edges of the graph may be colored red, blue, neither, or both. The edges which are colored once form a closed \(1\)-chain.
We will refer below to the particular pattern of red colored edges as the "quenched disorder" as that is what it is in the context of a statistical mechanics model.
We can define a different decoder that we call a Monte Carlo (MC) decoder. This decoder picks a random blue coloring, such that it has the same boundary as the red coloring, with a probability proportional to \((p/(1-p))^{n_{E}}\).
**Lemma 4**.: _Suppose that the maximum likelihood decoder has some probability \(P\) of making an error in a case where there are only two homology classes. Then, the probability that the MC decoder makes an error is \(2P(1-P)\)._
Proof.: For given \(P\), the probability that the correct homology class is the one found by the maximum likelihood decoder is \(1-P\), while the probability that the other class is correct is \(P\). The MC decoder then picks from the class found by the maximum likelihood decoder with probability \(1-P\) while it picks from the other class with probability \(P\).
Note then that if \(P\) is exponentially small, the probability that the MC decoder makes an error is also exponentially small.
The reason we define the MC decoder is that it is useful in results like Lemma 6 below. We will break our analysis of the decoding problem into analyzing decoding on some subproblems, and then put those results together. This is slightly easier to describe with the MC decoder. The reason is, we have several probabilities to track, describing random error patterns and describing the confidence that a maximum likelihood decoder has that it gives the correct decoding on a subproblem (this confidence is important when combining problems), while for the MC decoder we can combine these into a single probability.
_Peierls Arguments--_ Next let us recall how a Peierls argument for a threshold works. We discuss it for both maximum likelihood and minimum weight decoders. The Peierls argument works on graphs of bounded degree, but can break down with unbounded degree graphs.
We begin with an MC decoder.
If there are no dangling edges in the graph, then edges colored with exactly one color form a collection of closed loops, while if there are dangling edges in the decoding graph there may also be paths beginning and ending at dangling edges. If some vertices in the decoding graph have degree greater than three, there may be some ambiguity in how to write it as loops. For example, we could have a "figure-eight" configuration that could be regarded as either one or two loops. This ambiguity does not matter for what is below.
For a Peierls argument, consider any path \(C\), which forms either a closed loop or begins and ends at dangling edges. Let \(C\) have \(|C|\) edges. We compute the probability that every edge in \(C\) is colored exactly once. The probability that we consider here is an average over disorder (which may color some of the edges in \(C\) red) of the probability (using statistical weight proportional to \((p/(1-p))^{n_{E}}\)) that the remaining edges are colored blue and the edges colored red do not get colored blue.
Let's first emphasize what we need to calculate by giving a _wrong_ way of calculating this statistical weight. To upper bound this probability, one might consider the relative statistical weight of configurations where each edge in \(C\) is colored exactly once to those where no edge is colored. There are \(2^{|C|}\) possible ways of coloring each edge in \(C\) exactly once. For each way, there is a one-to-one correspondence between configurations with that coloring of edges in \(C\) and configurations where no edge in \(C\) is colored: simply take any configuration with the given coloring of edges in \(C\), and erase the colors on all edges in \(C\). One might then naively consider the relative probability (considering again both the statistical weight of the statistical mechanical model and the probability that there was an error pattern which gave the given red coloring of the edges in \(C\)) of these two configurations and think that it is \((p/(1-p))^{|C|}\).
However, this is not correct! For any given quenched disorder, the statistical weight of the blue coloring is proportional to \((p/(1-p))^{n_{E}}\), but the weight depends upon the quenched disorder! Indeed, it should be clear that the calculation is not correct: if slightly more than half of the edges in \(C\) are colored red, then (taking the coloring of edges not in \(C\) fixed) it is likely that the remaining edges will be colored blue, so the probability of having all edges colored once is more like \((p/(1-p))^{|C|/2}\) rather than \((p/(1-p))^{|C|}\).
Rather, a correct way is to consider each of the \(2^{|C|}\) subsets of edges in \(C\). Call a subset \(S\). We consider the probability that the edges in \(S\) are colored red and the edges in \(C\setminus S\) are not colored red. Then, for that \(S\), we consider the probability of an event \(E_{S}\) that the edges in \(C\setminus S\) are colored blue while those in \(S\) are not colored blue; to upper bound this probability, we consider the relative probability of two events: one where event \(E_{S}\) occurs, and the other where event \(E_{S}^{\prime}\) occurs in which the edges in \(S\) are colored blue while those in \(C\setminus S\) are not colored blue. There is again a one-to-one correspondence between such configurations: simply change which edges on \(C\) are colored blue. The relative statistical weight of the two events \(E_{S}\) and \(E_{S}^{\prime}\) is \((p/(1-p))^{|C|-2|S|}\). So, the probability of event \(E_{S}\) occurring is at most
\[\frac{(p/(1-p))^{|C|-2|S|}}{(p/(1-p))^{|C|-2|S|}+1}.\]
So, averaging over choices of \(S\), with the probability of a given \(S\) being \(p^{|S|}(1-p)^{|C|-|S|}\), we obtain the following.
**Lemma 5**.: _The probability that each edge in \(C\) is colored exactly once in an MC decoder is bounded by_
\[\sum_{m=0}^{|C|}\binom{|C|}{m}p^{m}(1-p)^{|C|-m}\frac{(p/(1-p))^{|C|-2m}}{(p/( 1-p))^{|C|-2m}+1},\]
_which is exponentially small in \(|C|\)._
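As a sanity check, the bound in Lemma 5 is easy to evaluate numerically. The short Python sketch below is our own illustration (the values of \(p\) and \(|C|\) are arbitrary choices), showing the expected exponential decay in the path length.

```python
from math import comb

def lemma5_bound(p, path_len):
    """Upper bound from Lemma 5 on the probability that every edge of a path C
    with |C| = path_len is colored exactly once, averaged over quenched disorder."""
    total = 0.0
    for m in range(path_len + 1):
        w = (p / (1 - p)) ** (path_len - 2 * m)        # relative weight of event E_S
        total += comb(path_len, m) * p**m * (1 - p) ** (path_len - m) * w / (w + 1)
    return total

# The bound shrinks roughly geometrically with |C| at fixed small p:
for length in (4, 8, 16, 32):
    print(length, lemma5_bound(0.05, length))
```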
A similar Peierls argument may be made for a minimum weight decoder. Indeed, the minimum weight decoder may be obtained by a statistical mechanical model where we sum over error patterns with some weight \((p^{\prime}/(1-p^{\prime}))^{n_{E}}\) and take a limit as \(p^{\prime}\to 0\). One may use the same argument as above with this modified weight.
We can apply these Peierls arguments to prove a threshold if the maximum vertex degree is \(\mathcal{O}(1)\). Consider a spatial check graph of linear size \(L\) for some \(L\). Suppose all nontrivial homology representatives have length at least \(L\) and end on some boundary of the spatial decoding graph at arbitrary time so that there are \(\mathcal{O}(L)T\) possible starting points. If an error is made, then there must be a path of edges colored once which gives such a nontrivial representative. For any given path of length \(\ell\), the probability that the edges in that path are colored once is \(\mathcal{O}(p)^{\ell}\), while the number of paths of length \(\ell\) for any \(\ell\) with a given start position is bounded by \(\mathcal{O}(1)^{\ell}\) where the \(\mathcal{O}(1)\) depends on the maximum vertex degree. So, by a union bound, the probability of an error is bounded by
\[\sum_{\ell\geq L}\mathcal{O}(p)^{\ell}\mathcal{O}(L)T,\]
and for small enough \(p\), this is bounded by \(T\) times some quantity exponentially small in \(L\).
_Decoding With a Column of High Degree Vertices: Simplest Model--_ To begin, consider the following toy model of a high degree check graph. We have \(T\) vertices, labeled \(i=0,\ldots,T-1\). We have \(d\) edges from vertex \(i\) to vertex \(i+1\mod T\), for all \(i\), so that each vertex has degree \(2d\) and there are \(Td\) edges.
This check graph actually arises in a familiar code, a generalization of Shor's 9 qubit code. Take a repetition code in the \(Z\) basis on \(T\) qubits and concatenate it with a repetition code in the \(X\) basis on \(d\) qubits. Label qubits by pairs \((x,y)\) for \(x\in 0,\ldots,T-1\) and \(y\in 1,\ldots,d\). Then there are stabilizers \(X_{x,y}X_{x,y+1\mod d}\) for all \(x,y\), as well as stabilizers \((\prod_{y}Z_{x,y})(\prod_{z}Z_{x+1,z})\) for all \(x\) (the \(x\) coordinate is periodic in \(T\) so \(x=T\) is the same as \(x=0\)). The check graph we have given describes correction of \(X\) errors when errors occur only at a single time (i.e., we use the quantum code in a communication channel rather than as a memory). Each vertex is a single \(Z\)-stabilizer. This quantum code encodes a single logical qubit.
Consider some random pattern of \(X\) errors, choosing \(X\) errors on each edge of this check graph independently with some probability \(p\). Thus, on each \(d\)-tuple of edges between any pair of vertices \(i,i+1\mod T\), we have either an even or odd number of \(X\) errors. The probability that there are an even number of \(X\) errors on that \(d\)-tuple equals
\[\frac{1+(1-2p)^{d}}{2},\]
while the probability that there are an odd number of \(X\) errors on that \(d\)-tuple equals
\[\frac{1-(1-2p)^{d}}{2},\]
so that as \(d\) becomes large, at fixed \(p\), both probabilities approach \(1/2\) exponentially.
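The two parity probabilities above are straightforward to tabulate; a minimal sketch (the values of \(p\) and \(d\) are illustrative):

```python
def parity_probs(p, d):
    """Probabilities of an even / odd number of errors on a d-tuple of edges,
    each failing independently with probability p."""
    even = (1 + (1 - 2 * p) ** d) / 2
    return even, 1 - even

# Both probabilities approach 1/2 exponentially fast as d grows at fixed p:
for d in (1, 5, 20, 80):
    print(d, parity_probs(0.05, d))
```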
A decoder then finds some other pattern of errors so that in the sum of the two error patterns, all edges have the same parity of errors on each \(d\)-tuple, either even or odd, so that the pattern of errors found by the decoder produces the same pattern of detection events as the actual errors. The decoder will decode correctly if the sum of the two error patterns has even parity on all edges.
Indeed, then, all that matters is whether the decoder chooses an even or odd number of errors on each \(d\)-tuple, so it suffices to have the decoder take either \(0\) or \(1\) error on each \(d\)-tuple, say, so that there are only two possible decodings that the decoder considers on each \(d\)-tuple.
This decoding problem is then the same as decoding a simpler check graph: a check graph with \(T\) vertices of degree \(2\), with one edge from vertex \(i\) to vertex \(i+1\mod T\), for all \(i\), where there is a probability \(\frac{1-(1-2p)^{d}}{2}\) of having an error on an edge. One may see that it requires \(T\) exponentially large in \(d\) for the decoder to have a decoding probability significantly better than \(1/2\). Indeed, _the probability that a maximum likelihood or MC decoder makes an error on this check graph with \(T\) vertices of degree \(2d\) is exponentially small in \(T(1-2p)^{d}\)._ To see this, note that this is the same as the problem of decoding a classical one-dimensional Ising model or repetition code: we have \(T\) spins, each initialized in some unknown configuration, either all up or all down, and we flip each spin with probability \(\frac{1-(1-2p)^{d}}{2}\). Then, maximum likelihood decoding is the same as majority decoding, with the given probability of error. Indeed, let's suppose that \(T\) is odd. Then error occurs if we flip more than \(\lfloor T/2\rfloor\) spins, so the error probability is
\[\sum_{S>\lfloor T/2\rfloor}{T\choose S}p_{eff}^{S}(1-p_{eff})^{T-S},\]
where \(p_{eff}=\frac{1-(1-2p)^{d}}{2}\).
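A hedged numerical illustration of this claim (our own sketch, with illustrative parameter values): majority decoding of the effective repetition code with the flip probability \(p_{eff}\) above, showing how quickly the decoding degrades as \(d\) grows at fixed \(p\) and \(T\).

```python
from math import comb

def column_model_error(p, d, T):
    """Decoding-error probability for the degree-2d column/ring model with odd T:
    majority vote over T effective spins, each flipped with probability
    p_eff = (1 - (1 - 2p)**d) / 2."""
    assert T % 2 == 1
    p_eff = (1 - (1 - 2 * p) ** d) / 2
    return sum(comb(T, s) * p_eff**s * (1 - p_eff) ** (T - s)
               for s in range(T // 2 + 1, T + 1))

# At fixed p and T, increasing d rapidly pushes the error probability toward 1/2:
for d in (2, 5, 10, 20):
    print(d, column_model_error(0.05, d, T=101))
```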
We may consider a slightly modified version of this check graph which has dangling edges. Simply "cut" the edges from vertex \(0\) to vertex \(T-1\), so that those two vertices each have \(d\) dangling edges, and also \(d\) edges going to other vertices. Call this graph \(C_{\mathrm{column}}\). Again, _the probability that a maximum likelihood or MC decoder makes an error on this check graph \(C_{\mathrm{column}}\) with \(T\) vertices of degree \(2d\) and dangling edges is exponentially small in \(T(1-2p)^{d}\)._
Remark: in fact, a minimum weight matching decoder will give the _same decoding_ as a maximum likelihood decoder for both of these check graphs. However, the minimum weight matching decoder is, in some sense, "too confident" in the decoding. That is, we can consider the minimum weight decoding for a given choice of the two possible logical decodings. Both of these minimum weight decodings have either \(0\) or \(1\) error on each \(d\)-tuple of edges and they are complements of each other. Suppose, for example, we take a fixed \(T=1\); then, the difference in the weight of these two decodings is always equal to \(1\), independently of \(d\), even though one has only exponentially small information about the true decoding. That is, the difference in the weights of the two different minimum weight decodings does not accurately reflect how much information we have. Indeed, this is why we consider minimum weight and MC decoders.
Finally, it is useful to generalize this model to the case in which the edges in the check graph may have different error probabilities. We take \(T\) vertices, labeled \(i=0,\ldots,T-1\). We have \(d\) edges from vertex \(i\) to vertex \(i+1\mod T\), for all \(i\), so that each vertex has degree \(2d\) and there are \(Td\) edges. We label the edges by a pair \((x,y)\) for \(x\in 0,\ldots,T-1\) and \(y\in 1,\ldots,d\) and we let the error probability of edge \((x,y)\) be \(p_{y}\). Note then that in this model the error probabilities do not depend on \(x\). Then, again on each \(d\)-tuple of edges between any pair of vertices \(i,i+1\mod T\), we have either an even or odd number of errors and the probability that there are an even number of errors on that \(d\)-tuple equals
\[\frac{1+\prod_{y=1}^{d}(1-2p_{y})}{2},\]
while the probability that there are an odd number of errors on that \(d\)-tuple equals
\[\frac{1-\prod_{y=1}^{d}(1-2p_{y})}{2}.\]
Thus, it is the same as decoding in the case \(d=1\) with an error probability on an edge equal to \(\frac{1-\prod_{y=1}^{d}(1-2p_{y})}{2}.\) So, the probability of making an error in decoding is exponentially small in \(T\prod_{y=1}^{d}(1-2p_{y})\).
_Decoding Near The Hole Center--_ Now we consider a model of decoding _near_ a hole center of degree \(d\) in the graph \(G_{\text{bulk}}\). Specifically, we assume that all error probabilities are zero except for spacelike edges connecting some hole center to one of its neighbors and for timelike edges attached to either the hole center or one of its neighbors. This decoding problem is the same as taking the spatial check graph to contain just the hole center and its neighbors, with only edges from the hole center to neighbors, rather than between neighbors.
If there is no quenched disorder, then the statistical mechanics of the MC decoder here would be easy to solve using transfer matrix methods, being a one-dimensional system. With quenched disorder, this becomes more difficult. So, we give an approximate treatment valid for small \(p\).
First, the shortest path from the center hole, back to the center hole, but at a different time, is length 3. This has a spacelike edge from the center hole to a neighbor at some time \(s\), \(0\leq s\leq T\), then a timelike edge from that neighbor to itself at a different time \(s+1\), then a spacelike edge back to the center hole. It is possible that all three of these edges are colored red. This occurs with probability \(p^{3}\) for any such path. There is some "interference" between these events: if we have such a path with some given \(s\), then it overlaps with the analogous path going to the same neighbor at some time \(s^{\prime}=s\pm 1\). However, for small \(p\), we expect that this interference is negligible. So, we expect that the effect is as if we considered the simplified model \(C_{\text{column}}\) where there are \(d\) edges which each have error probability \(p^{\prime}=p^{3}\).
However, it is also possible that only some of those edges are colored red but that the decoder makes a mistake. Indeed, we observe violated checks in the graph on some neighbor at time \(s\) and at time \(s+1\). This can occur if we color red the timelike edge from that neighbor at time \(s\) to itself at time \(s+1\), but do not color red any of the spacelike edges attached to it. This occurs with probability \(p-o(p^{2})\) on any given path of three edges. Alternatively, it can occur if we color red the two spacelike edges attached to that neighbor at times \(s\) and \(s+1\), which occurs with probability \(p^{2}-o(p^{3})\). So, typically for any time \(s\), there will be \(\approx pd\) neighbors for which we have violated checks at the time \(s\) and at \(s+1\). If we color blue the timelike edge connecting those checks, then with probability \(p+o(p^{2})\) we color each edge in the length 3 path once. So, we expect that the effect is as if we had \(\approx pd\) edges in the simplified model \(C_{\text{column}}\) which each had error probability \(p\). Again, some "interference" is possible, between this path at some time \(s\) and the analogous path at times \(s^{\prime}=s\pm 1\), but again for small \(p\), we expect that this interference is negligible.
Another possibility is that we observe a violated check in the graph on some neighbor at time \(s\) but no violated checks at some \(s+1\) or \(s-1\). This can occur if we color red the spacelike edge from that neighbor at time \(s\) to the center hole, but do not color red any of the timelike edges attached to it. This occurs with probability \(p-o(p^{2})\) on any given path of three edges. Alternatively, it can occur if we color red exactly one of the two timelike edges attached to it, going from that neighbor to itself at time \(s\pm 1\), and also color red the spacelike edge from that neighbor at time \(s\pm 1\) to the center hole. So, typically for any time \(s\), there will be \(\approx pd\) neighbors for which we have violated checks at the time \(s\) but not at \(s\pm 1\). If we color blue the spacelike edge from that check to the center hole, then with probability \(2p+o(p^{2})\) we color each edge in the length 3 path once; note the factor of 2 in \(2p\), due to the two possible ways in which we can make an error. So, we expect that the effect is as if we had \(\approx pd\) edges with error probability \(2p\). Again, some "interference" is possible but expected to be negligible at small \(p\).
Thus, an approximate description should be by a model \(C_{\text{column}}\) with \(d\) edges with effective error probability \(p^{3}\), and \(pd\) edges with effective error probability \(p\) and \(pd\) edges with effective error probability \(2p\). From the discussion above, the probability of making an error in decoding is then exponentially small in \(T\prod_{y=1}^{d}(1-2p_{y})\), where \(p_{y}\) are
the error probabilities on these effective edges. In this case, this is \(\prod_{y=1}^{d}(1-2p_{y})\approx\exp(-3(p^{2}+o(p^{3}))d)\) and so the effective error probability is exponentially small in \(T\exp(-3(p^{2}d+o(p^{3})))\).
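The effective error probability of this approximate description can be evaluated directly from the formula for unequal edge probabilities. The sketch below is our own check, not part of the original analysis; the values of \(p\) and \(d\) are illustrative, and the composition of effective edges follows the description above.

```python
import numpy as np

def effective_flip_probability(p_list):
    """Probability of an odd number of errors on one d-tuple with per-edge error
    probabilities p_list: (1 - prod(1 - 2 p_y)) / 2."""
    return (1 - np.prod(1 - 2 * np.array(p_list))) / 2

# Approximate model near a hole center: d edges with p**3, ~pd edges with p,
# and ~pd edges with 2p (illustrative p and d):
p, d = 0.05, 100
p_list = [p**3] * d + [p] * int(p * d) + [2 * p] * int(p * d)
print(effective_flip_probability(p_list))   # approaches 1/2 as p**2 * d grows
```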
Now suppose we wish to consider decoding with a single high degree hole in \(G_{\rm bulk}\), with dangling edges from the hole center at times \(0\) and \(T\). The above analysis considered only paths of length \(3\) going from the hole center to itself. We expect that longer paths are negligible, being higher order in \(d\), and so the effective error probability will still be exponentially small in \(T\exp(-3(p^{2}d+o(p^{3})))\).
While we will not prove that the longer paths are negligible, it is possible to give a power series treatment of their effects in the limit of small \(p\). If it could be proven that this series converges, then this would give a proof of our conjecture for the decoding of a single high degree hole. The decoder we analyze does the following. In the first step, it does an MC decoder on a modified graph, where we remove the center hole, and replace all the edges from neighbors of the center hole to the center hole with dangling edges attached to those neighbors. Note that the degree of this modified graph is bounded! In the second step, it takes the blue coloring that it finds, and uses that blue coloring on the original graph \(G_{\rm bulk}\), coloring each edge the same way as on the modified graph; an edge from a neighbor of the hole center to the hole center is colored the same way as the dangling edge. Having done this, there are only two possible colorings of the timelike edges on the hole center. It picks the minimum weight such pattern.
To determine whether or not the decoder makes an error, consider each time \(S\). Consider all the timelike edges from time \(S\) to time \(S+1\), including both those on the hole center and not on it. Count the number of such edges colored red. Also count the number of such edges colored blue in the first step. Add these two totals, modulo \(2\). If the total is odd, then say that an "overall error" occurs between times \(S,S+1\). If more than \(T/2\) overall errors occur, then in the second step the decoder makes a logical error; if less than \(T/2\) occur, then it does not.
Now, can we compute the probability that more than \(T/2\) overall errors occur? First, consider the average number of overall errors. Pick any given time \(S\). For each timelike edge \(e\) from \(S\) to \(S+1\), let \(Z_{e}\) equal \(-1\) if edge \(e\) is colored an odd number of times, and \(+1\) otherwise. If we can compute the average of \(\prod_{e}Z_{e}\), then we can straightforwardly compute the probability that there is an overall error between times \(S,S+1\), as this product is \(-1\) if there is an overall error.
This average is a correlation function in a statistical mechanical model with quenched disorder. If we wanted to compute the product of the average instead of the average of the product, this would be much easier as we would be computing a product of correlation functions each involving a single degree of freedom. This, indeed, is what happens in the graph \(G_{\rm column}\): we just need to compute whether or not there are an odd number of red edges from one time to the next, and the probability that a given edge is red is independent of the other edges.
However, we can expand this correlation function (the average of \(\prod_{e}Z_{e}\)) as a sum of connected correlation functions. We expect that connected correlation functions involving more degrees of freedom are suppressed in powers of \(p\). Hence, while the sum of connected correlation functions is likely not to have a well-behaved power series, we believe that the power series in \(p\) for the _logarithm_ of the average of \(\prod_{e}Z_{e}\) is a convergent expansion; the leading term of this series is the quantity \(-3p^{2}d\) found above.
Even once we have computed the average of \(\prod_{e}Z_{e}\), this is not yet sufficient, as the probability of having an overall error between times \(S,S+1\) is not independent of having overall errors at other times. However, again we believe that the power series expansion of the logarithm of quantities such as \(\prod_{e}Z_{e}\prod_{e^{\prime}}Z_{e^{\prime}}\), where edges \(e\) are from times \(S\) to \(S+1\) and edges \(e^{\prime}\) are from times \(S^{\prime}\) to \(S^{\prime}+1\), is a convergent expansion.
More importantly, we believe that the correlations between the overall errors at different times are negligible. To see this, let's consider another effective model. In this model, we consider a model of Ising spins, labelled \(1,\ldots,T\), each initialized to the state \(+1\), then we flip each spin independently with probability \(1/2-\epsilon\) for some small \(\epsilon\), and then for each pair of neighboring spins we flip both spins in that pair with probability \(1/2-\epsilon^{\prime}\) for some small \(\epsilon^{\prime}\), independently of the other pairs. This is an effective model of the following situation: the shortest paths (of length \(3\)) considered above can create an overall error at some time. However, length \(4\) paths can create an overall error at two neighboring times at a higher order in \(p\); we'll ignore the possibility of even further paths because they are further suppressed in \(p\), but the treatment is similar to the case here. The quantity \(\epsilon\) is then exponentially small in \(p^{2}d\), while the quantity \(\epsilon^{\prime}\) is exponentially small in \(p^{3}d\) and so \(\epsilon\ll\epsilon^{\prime}\). Consider any given configuration \(s_{1},s_{2},\ldots,s_{T}\) of Ising spins. We will compute the probability that this can arise in this effective model. This can be computed with a transfer matrix technique. Let \(b_{1},\ldots\) be binary variables corresponding to the Ising spins with \(b_{j}=(1-s_{j})/2\). Let \(f_{1},f_{2},\ldots\) be binary variables, such that if \(f_{i}=1\), we flip Ising spins \(i,i+1\) in the above process.
Then, the probability of any given \(b_{1},b_{2},\ldots\) is given by
\[c^{-1}\sum_{\{f_{i}\}}\prod_{j}z^{b_{j}\oplus f_{j-1}\oplus f_{j}}y^{f_{j}}\]
where \(y=(1/2-\epsilon^{\prime})/(1/2+\epsilon^{\prime})\) and \(z=(1/2-\epsilon)/(1/2+\epsilon)\), and where the normalization factor is \(c^{-1}=(1/2+\epsilon)^{T}(1/2+\epsilon^{\prime})^{T-1}\). Introduce matrices
\[M_{0}=\begin{pmatrix}1&z\sqrt{y}\\ \sqrt{y}z&y\end{pmatrix},\]
and
\[M_{1}=\begin{pmatrix}z&\sqrt{y}\\ \sqrt{y}&zy\end{pmatrix},\]
so that the probability of given \(b_{1},b_{2},\ldots\) is given by \(\langle\psi_{L}|M|\psi_{R}\rangle\), where
\[M=M_{b_{2}}M_{b_{3}}M_{b_{4}}\ldots M_{b_{T-1}},\]
and where the vectors \(\psi_{L},\psi_{R}\) depend on the spins \(s_{1},s_{T}\), respectively. The exact form of the vectors \(\psi_{L},\psi_{R}\) is not so important, and we omit it (with periodic boundary conditions we replace this with a trace). Then, it is convenient to change basis to compute the matrix product. Introduce a new basis of vectors
\[\frac{1}{\sqrt{1+y}}(1,\sqrt{y}),\quad\frac{1}{\sqrt{1+y}}(\sqrt{y},-1).\]
This is a basis of eigenvectors of both \(M_{0}\) and \(M_{1}\) at \(z=1\). In this basis, we have
\[M_{0}=\frac{1}{1+y}\begin{pmatrix}1+2zy+y^{2}&(1-y)(1-z)\sqrt{y}\\ (1-y)(1-z)\sqrt{y}&1-2zy+y^{2}\end{pmatrix},\]
and
\[M_{1}=\frac{1}{1+y}\begin{pmatrix}z+2y+zy^{2}&-(1-y)(1-z)\sqrt{y}\\ -(1-y)(1-z)\sqrt{y}&2(z-1)y\end{pmatrix}.\]
Note that in this basis, for both matrices, the off diagonal terms are \(\mathcal{O}(\epsilon\epsilon^{\prime})\), and the term in the lower right is \(\mathcal{O}(\epsilon+\epsilon^{\prime 2})\) for \(M_{0}\) and \(\mathcal{O}(\epsilon)\) for \(M_{1}\). At the same time, the term in the upper left is of order unity. Thus, so long as \(T\) is small compared to \(\epsilon^{-2}\epsilon^{\prime-2}(\epsilon+\epsilon^{\prime 2})^{-1}\), _we can approximate the matrix product as the product of terms in the upper left corner_; at the same time, we will see from the solution in this regime that if \(T\) is not small compared to this quantity, then the probability of a decoding error is negligible. So, we approximate the probability as (putting in the correct normalization)
\[\frac{\Big{(}1+2zy+y^{2}\Big{)}^{n_{\uparrow}}\Big{(}z+2y+zy^{2}\Big{)}^{n_{\downarrow}}}{\Big{(}1+2zy+y^{2}+z+2y+zy^{2}\Big{)}^{T}},\]
where \(n_{\uparrow}\) and \(n_{\downarrow}\) are the number of Ising spins in the \(+1\) and \(-1\) states, respectively. One may recognize that this is the probability corresponding to a process where we first flip each spin independently with probability \(z/(1+z)\), then flip each spin again independently with probability \(y/(1+y)\), and finally again flip each spin independently with probability \(y/(1+y)\). This of course is correct for any given spin in the original effective model: each spin is subject to those three possible spin flips; however, the important thing is that we can ignore any correlation between flips of neighboring spins.
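The factorization claim can be checked numerically on small \(T\) by enumerating the pair-flip variables exactly and comparing with the per-spin three-coin approximation. The code below is a minimal sketch of that comparison; the parameters are illustrative, and the factorized formula ignores boundary spins and inter-spin correlations, as discussed above.

```python
import itertools
import numpy as np

def exact_prob(b, eps, eps_p):
    """Exact probability of the overall flip pattern b (b[j]=1: spin j flipped) in the
    effective model: single-spin flips with probability 1/2-eps, then flips of both
    spins in each neighboring pair with probability 1/2-eps_p (open chain)."""
    T = len(b)
    q, qp = 0.5 - eps, 0.5 - eps_p
    total = 0.0
    for f in itertools.product((0, 1), repeat=T - 1):
        fpad = (0,) + f + (0,)
        w = 1.0
        for j in range(T):
            flipped = b[j] ^ fpad[j] ^ fpad[j + 1]
            w *= q if flipped else 1 - q
        for fi in f:
            w *= qp if fi else 1 - qp
        total += w
    return total

def factorized_prob(b, eps, eps_p):
    """Bulk approximation: each spin flipped independently by three coins with odds
    z = (1/2-eps)/(1/2+eps) and y = (1/2-eps_p)/(1/2+eps_p) (the latter twice)."""
    z = (0.5 - eps) / (0.5 + eps)
    y = (0.5 - eps_p) / (0.5 + eps_p)
    num0, num1 = 1 + 2 * z * y + y**2, z + 2 * y + z * y**2
    return float(np.prod([(num1 if bj else num0) / (num0 + num1) for bj in b]))

b = (0, 1, 0, 0, 1, 0, 0, 0)
print(exact_prob(b, 0.01, 0.02), factorized_prob(b, 0.01, 0.02))
```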
_Gluing Together_-- Next we have a general principle. We have discussed decoding in various cases with dangling edges, either considering decoding near a hole with dangling edges at some time, or considering Peierls arguments for some path with dangling edges. Now suppose we have two such decoding problems.
**Lemma 6**.: _Consider two separate decoding graphs \(G_{1},G_{2}\), each with dangling edges. Suppose the probability that an MC decoder makes an error on graph \(G_{1}\) is at most some probability \(P_{1}\), and similarly the probability that it makes an error on graph \(G_{2}\) is at most some \(P_{2}\). Consider the graph obtained by attaching some dangling edge \(e_{1}\) of \(G_{1}\) to some other dangling edge \(e_{2}\) of \(G_{2}\), and assume there are only two homology classes on \(G_{1}\), and on \(G_{2}\), and on the combined graph, so that making any error on the combined graph requires coloring \(e_{1}\) and \(e_{2}\) both once. The probability that it makes an error on that graph is at most \(2P_{1}P_{2}\)._
Proof.: Consider some fixed quenched disorder on each graph. Suppose for that given quenched disorder, the MC decoder makes an error with probability \(Q_{1}\) on the first graph and probability \(Q_{2}\) on the second graph. The MC
decoding is two separate decoding problems, constrained by the requirement that \(e_{1}\) must be colored the same, mod \(2\), as \(e_{2}\). So, the probability of an error on the combined graph is
\[\frac{Q_{1}Q_{2}}{Q_{1}Q_{2}+(1-Q_{1})(1-Q_{2})}\leq 2Q_{1}Q_{2}.\]
Further, we know that the average of \(Q_{1}\) over quenched disorder is equal to \(P_{1}\) and the average of \(Q_{2}\) is equal to \(P_{2}\). So, the average of \(2Q_{1}Q_{2}\) equals \(2P_{1}P_{2}\).
Remark: this worst case is achieved if there is a probability \(2P_{1}\) that the quenched disorder is such that \(Q_{1}=1/2\) and similarly a probability \(2P_{2}\) that \(Q_{2}=1/2\).
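The key inequality in the proof, \(Q_{1}Q_{2}/(Q_{1}Q_{2}+(1-Q_{1})(1-Q_{2}))\leq 2Q_{1}Q_{2}\) for \(Q_{1},Q_{2}\leq 1/2\), can be spot-checked numerically; a minimal sketch:

```python
import random

def glued_error(q1, q2):
    """Error probability on the glued graph for fixed quenched disorder (proof of Lemma 6)."""
    return q1 * q2 / (q1 * q2 + (1 - q1) * (1 - q2))

random.seed(0)
for _ in range(100000):
    q1, q2 = random.uniform(0, 0.5), random.uniform(0, 0.5)
    assert glued_error(q1, q2) <= 2 * q1 * q2 + 1e-12
print("bound holds on random samples")
```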
Then, in our general picture at the start, we believe that we can use the Peierls argument for paths between columns of high degree vertices, use the effective model \(C_{\mathrm{column}}\) above for high degree vertices with dangling edges on the vertices where a Peierls path arrives at a vertex, and use this lemma to glue the paths, to get the effective model with weak edges connecting different hole centers discussed at the start.
_Two Nearby High Degree Vertices--_ Finally, suppose we have two nearby high degree vertices in our effective model with one vertex per hole center and edges between every pair of vertices. We can describe these two nearby high degree vertices by a check graph with a "ladder" shape. The "rungs" of the ladder are the spacelike edges between vertices. Suppose we have some error probability \(r\) on this edge, while we have error probabilities \(p_{1}=1/2-\epsilon_{1}\) and \(p_{2}=1/2-\epsilon_{2}\) on timelike edges attached to one or the other hole center.
If \(r\ll\epsilon_{1}\) or \(r\ll\epsilon_{2}\), then we can use a Peierls argument on this effective ladder model. However, suppose \(r\gg\epsilon_{1}\) and \(r\gg\epsilon_{2}\). Then, a Peierls argument does not work. In this case, in the limit of large \(r\epsilon_{1}^{-1}\) and large \(r\epsilon_{2}^{-1}\), we cannot effectively determine whether or not any of the spacelike edges are colored red. Then, we arrive at an effective model \(C_{\mathrm{column}}\) with \(d=2\) and with error probabilities \(p_{1}=1/2-\epsilon_{1}\) and \(p_{2}=1/2-\epsilon_{2}\). Thus, this is the same as a model \(C_{\mathrm{column}}\) with \(d=1\) and error probability
\[\frac{1-\prod_{y=1}^{2}(1-2p_{y})}{2}=1/2-2\epsilon_{1}\epsilon_{2}.\]
## Appendix A Chain Maps
Many constructions in homological algebra have some relationship to some construction in quantum error correcting codes. Most obviously, quantum codes correspond to chain complexes over \(\mathbb{F}_{2}\), and logical \(X\) and \(Z\) operators correspond to homology or cohomology classes.
Here we will discuss an interesting relationship between measurements on quantum codes and chain maps.
Consider the case of a CSS quantum code \(C\). We may define a corresponding chain complex \(\mathcal{C}\) with three degrees, which we label by \(Z,Q,X\), with \(Z\to Q\to X\), and two boundary operators, \(\partial_{Z}:Z\to Q\) and \(\partial_{Q}:Q\to X\). The different degrees are each given some preferred basis and \(Z\)-stabilizer generators correspond to basis elements of \(Z\), qubits to basis elements of \(Q\), and \(X\)-stabilizer generators to basis elements of \(X\), with the boundary operator determined by the stabilizers of the quantum code. See Ref. [28] for a review and a dictionary between these languages.
Remark: this reverses the order we used previously, where we used \(Z\)-stabilizers for \(0\)-cells and \(X\)-stabilizers for \(2\)-cells. It is chosen for consistency with other literature.
While this mapping to stabilizers seems restricted to the case of CSS codes, given any more general quantum stabilizer code on qubits, we may construct a corresponding CSS code which is self-dual. See Ref. [29] for the general construction. Roughly, this is done by regarding such a more general code as an instance of a code whose stabilizers are products of Majorana operators, encoding each qubit into \(4\) Majoranas; then, the stabilizers of this Majorana code can be interpreted as defining both the \(X\)- and \(Z\)-stabilizers of some self-dual CSS code.
So, we will consider just this case of CSS codes.
Now, suppose we measure some operator \(O\) which is a product of Pauli \(X\) and is not in the stabilizer group; the case of measuring a product of Pauli \(Z\) is similar. Suppose \(O\) does not commute with one of the \(Z\)-stabilizers (so \(O\) is not a logical operator of the code). As a result, this measurement of \(O\) does not increase the rank of the stabilizer group: it increases the rank of the group generated by \(X\)-stabilizers by one but it decreases the rank of the group generated by \(Z\)-stabilizers by one. Of course, in the case of a self-dual code derived as above, a measurement of an arbitrary operator would map to a measurement of some product of Pauli \(X\) and "the same" product of Pauli \(Z\); since these two measurements commute (necessarily it is an even weight product) we could do them in an arbitrary order.
We identify this operator \(O\) with some vector \(w\in Q\). This vector is a sum of basis elements corresponding to qubits in the product defining \(O\). By assumption, there is some \(Z\)-stabilizer which anticommutes with \(O\); choose an
arbitrary such \(Z\) stabilizer and identify that \(Z\)-stabilizer with some vector \(v\in Q\), that vector being a sum of basis elements in the product defining the given \(Z\)-stabilizer. Then, the anticommutation can be expressed as
\[\langle v,w\rangle=1,\]
where the angle brackets denote an inner product.
After this measurement, we have some new stabilizers of some new code \(C^{\prime}\) and some corresponding chain complex \(\mathcal{C}^{\prime}\), \(Z^{\prime}\to Q^{\prime}\to X^{\prime}\), with boundary operators \(\partial^{\prime}_{Z^{\prime}}\) and \(\partial^{\prime}_{Q^{\prime}}\). Of course, \(Q^{\prime}\) and \(Q\) are of the same dimension.
We will define a chain map \(f\) from \(\mathcal{C}\) to \(\mathcal{C}^{\prime}\). A chain map is a collection of maps, \(f_{Z}:Z\to Z^{\prime}\), \(f_{Q}:Q\to Q^{\prime}\), and \(f_{X}:X\to X^{\prime}\), such that the chain map commutes with the boundary operators.
We define, for \(q\in Q\), that
\[f_{Q}(q)=q+\langle q,w\rangle v. \tag{100}\]
This map is such that \(f_{Q}(v)=0\), so it "kills" the corresponding \(Z\) stabilizer.
To define \(f_{Z}\), let us first define \(Z^{\prime}\). In a basis independent fashion, we define \(Z^{\prime}\) to be the subspace of \(Z\) containing vectors \(z\) such that \(\langle\partial_{Z}z,w\rangle=0\). We may give it a basis by picking a basis for \(Z\) that corresponds to a choice of \(Z\)-stabilizer generators where all generators commute with \(O\), except for one generator corresponding to vector \(v\), and then a basis for \(Z^{\prime}\) is chosen in the obvious way: the \(Z\)-stabilizer generators of \(C^{\prime}\) can be chosen to be the \(Z\)-stabilizer generators of \(C\) which commute with \(O\).
The boundary operator \(\partial^{\prime}_{Z^{\prime}}\) is defined in the obvious way from the stabilizers of \(C\). In the basis independent definition of \(Z^{\prime}\), the boundary operator \(\partial^{\prime}_{Z^{\prime}}\) is the boundary operator \(\partial_{Z}\) on the given subspace.
In the first, basis independent method, we define
\[f_{Z}(z)=z+\langle\partial_{Z}z,w\rangle\partial^{-1}v, \tag{101}\]
where \(\partial^{-1}v\) is any vector in the pre-image of \(v\), chosen arbitrarily (of course, we make this choice one time, and then use the same choice for all \(z\)). Note that if there is no redundancy among the stabilizer generators then \(\partial^{-1}v\) is unique.
In the second, basis dependent method, we define \(f_{Z}\) in the obvious way: each basis element of \(Z\) corresponding to a generator which commutes with \(O\) is mapped to the corresponding basis element of \(Z^{\prime}\), while the basis element which does not commute with \(O\) is mapped to zero. One may verify that Eq. (101) indeed defines this map on the basis elements of \(Z\) if we choose \(\partial^{-1}v\) to be the basis element.
One may verify the chain map condition that
\[\partial^{\prime}_{Z^{\prime}}f_{Z}(z)=f_{Q}(\partial_{Z}z). \tag{102}\]
We may define \(X^{\prime}\) to be the direct sum of \(X\) with a 1-dimensional vector space, corresponding to the one additional \(X\)-stabilizer in \(C^{\prime}\). Thus, \(X^{\prime}\) is defined by ordered pairs \((x,b)\) where \(x\in X\) and \(b\in\{0,1\}\). We define \(\partial^{\prime}_{Q^{\prime}}\) in the obvious way:
\[(\partial^{\prime}_{Q^{\prime}})^{T}(x,b)=(\partial_{Q})^{T}x+bw,\]
where the superscript \(T\) denotes the adjoint, so
\[\partial^{\prime}_{Q^{\prime}}q=(\partial_{Q}q,\langle q,w\rangle).\]
Then define \(f_{X}\) to be the obvious map from \(X\) to \(X^{\prime}\):
\[f_{X}(x)=(x,0). \tag{103}\]
We claim that we also have the chain map condition
\[\partial^{\prime}_{Q^{\prime}}f_{Q}(q)=f_{X}(\partial_{Q}q). \tag{104}\]
That is, we claim \(\partial^{\prime}_{Q^{\prime}}q+\langle q,w\rangle\partial^{\prime}_{Q^{\prime}}v=(\partial_{Q}q,0)\). We have \(\partial^{\prime}_{Q^{\prime}}v=(0,1)\) since, by assumption, \(\langle v,w\rangle=1\) and \(\partial_{Q}v=0\) since \(\mathcal{C}\) is a chain complex (i.e., \(v\) corresponds to a \(Z\)-stabilizer), and then the claim follows from the definition of \(\partial^{\prime}_{Q^{\prime}}\).
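The constructions above are easy to verify mechanically over \(\mathbb{F}_{2}\). The sketch below uses a small toy example of our own choosing (one \(Z\)-stabilizer \(ZZZZ\), one \(X\)-stabilizer \(XXXX\) on four qubits, and \(O=X_{1}\)); it is intended only to illustrate the maps (100)-(103) and the chain-map conditions (102) and (104), not any construction beyond what is stated above.

```python
import numpy as np

def F2(a):
    return np.mod(a, 2)

# Toy CSS code (our example): one Z-stabilizer ZZZZ, one X-stabilizer XXXX on 4 qubits.
HZ = np.array([[1, 1, 1, 1]])        # rows: supports of Z-stabilizer generators
HX = np.array([[1, 1, 1, 1]])        # rows: supports of X-stabilizer generators
dZ = HZ.T                            # boundary operator Z -> Q
dQ = HX                              # boundary operator Q -> X
assert not F2(dQ @ dZ).any()         # chain complex: dQ dZ = 0

w = np.array([1, 0, 0, 0])           # measured operator O = X_1, support w in Q
z0 = np.array([1])                   # generator of Z whose stabilizer anticommutes with O
v = F2(dZ @ z0)                      # its support in Q
assert F2(v @ w) == 1

fQ = lambda q: F2(q + (q @ w % 2) * v)                     # Eq. (100)
fZ = lambda z: F2(z + (F2(dZ @ z) @ w % 2) * z0)           # Eq. (101), with d^{-1} v := z0
dQp = lambda q: (F2(dQ @ q), q @ w % 2)                    # boundary Q' -> X' = X (+) F2
fX = lambda x: (x, 0)                                      # Eq. (103)

for q in map(np.array, [(1, 0, 0, 0), (0, 1, 1, 0), (1, 1, 1, 1)]):
    lhs, rhs = dQp(fQ(q)), fX(F2(dQ @ q))                  # Eq. (104)
    assert (lhs[0] == rhs[0]).all() and lhs[1] == rhs[1]
for z in map(np.array, [(1,), (0,)]):
    assert (F2(dZ @ fZ(z)) == fQ(F2(dZ @ z))).all()        # Eq. (102)
print("chain-map conditions verified")
```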
## Appendix B Toy Model of Vacancies Without Measuring Superstabilizers
We now consider a toy model of decoding in the presence of vacancies, but _without_ measuring superstabilizers (i.e., without measuring logical operators of the code created by the vacancies). In this case, anyons can move onto the holes created by the defects without being detected, destroying the error correction properties of the code. Logical information can be lost in time \(\mathcal{O}(1)\), independently of system size, even if all the holes are size \(\mathcal{O}(1)\), so long as many holes are present. We are interested in how the logical error probability grows with time if we start from some state with no anyons anywhere and then evolve for some time in the presence of noise without measuring superstabilizers. We argue that the growth can be superlinear with time in some cases.
While our main interest is in decoding quantum codes in two dimensions (e.g., surface codes, Floquet codes such as the honeycomb code, and so on), we consider here a simpler toy model of decoding _classical_ information in a one-dimensional Ising model with vacancies. To motivate this model, and argue that its behavior should be analogous to that of quantum codes with defects in two dimensions, note that in many stabilizer codes, the defects of the code have certain integer spatial dimensions. For example, anyons in the two-dimensional toric code are pointlike particles (dimension \(=0\)), while the three-dimensional toric code has both pointlike (dimension \(=0\)) and looplike (dimension \(=1\)) excitations. The classical Ising model in two dimensions has one-dimensional domain walls but the one-dimensional Ising model, like the two-dimensional toric code, has pointlike excitations. Thus, the two-dimensional toric code and the one-dimensional Ising model are in some way similar as the defects in both are pointlike. Similarly, vacancies in a theory could be pointlike (sites are randomly deleted from a system), linelike (lines are randomly removed), and so on, so vacancies can also be \(0,1,2,\ldots\) dimensional. The questions we consider here arise when the vacancy dimension matches the defect dimension.
We consider the following model of a one-dimensional Ising model with vacancies. We have \(N\) spins, labeled \(1,2,\ldots,N\). Each spin \(i\) corresponds to some classical variable \(Z_{i}=\pm 1\). We assume that the system is initialized in some unknown state, either all spins \(+1\) or all spins \(-1\), with equal probability for each choice. Then, the system proceeds through \(T\) rounds of errors. In each round, each spin is flipped independently with some probability \(p\). Then, checks \(Z_{i}Z_{i+1}\) are measured, for each \(i\in\{1,2,\ldots,N-1\}\setminus V\), where \(V\) is some set of "vacancies". That is, we measure checks on every neighboring pair of spins, except for certain vacancies. Finally, after \(T\) rounds, all spins are measured perfectly (i.e., with no error), and we attempt to reconstruct the initial state from those final spin measurements and from the check measurements.
_Vacancies Everywhere--_ As a warmup, we begin with a simple case where \(V=\{1,2,\ldots,N-1\}\). That is, there are vacancies everywhere and no checks are measured. After \(T\) rounds of flips, each spin is flipped from its initial state with probability
\[P_{flip}=\frac{1-(1-2p)^{T}}{2}. \tag{12}\]
We can regard this as defining some biased coin, which is tails with probability \(P_{flip}\) and heads with probability \(1-P_{flip}\). Maximal likelihood decoding in this case is simply majority decoding: we assume that the initial state is where all spins had the value given by the majority after \(T\) rounds. Thus, decoding gives the correct answer if \(N\) flips of this biased coin have more than \(N/2\) heads and fails if \(N\) flips have more than \(N/2\) tails. If \(N\) is even and there are exactly \(N/2\) heads, then maximum likelihood decoding returns that no decoding is possible: both initial states are equally likely.
Taking the case of odd \(N\) for simplicity, the probability of an error is then equal to
\[P_{err}=\sum_{j=\lfloor N/2\rfloor+1}^{N}{N\choose j}P_{flip}^{j}(1-P_{flip} )^{N-j}. \tag{13}\]
For \(P_{flip}\ll 1\), this is approximated by
\[(P_{flip})^{\lfloor N/2\rfloor+1}{N\choose\lfloor N/2\rfloor+1}\sim\frac{2^{N }}{\sqrt{N}}(P_{flip})^{\lfloor N/2\rfloor+1},\]
and for \(pT\ll 1\) we have \(P_{flip}=pT-\mathcal{O}(p^{2}T^{2})\) so the error probability \(P_{err}\) grows _superlinearly_ in \(T\). Indeed, it grows as a power \(\lfloor N/2\rfloor+1\).
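A small numerical sketch of Eqs. (12)-(13) (the values of \(p\), \(T\), and \(N\) are illustrative choices of ours) makes the superlinear growth visible:

```python
from math import comb

def p_flip(p, T):
    return (1 - (1 - 2 * p) ** T) / 2

def p_err_all_vacancies(p, T, N):
    """Majority-decoding error probability with vacancies everywhere (odd N), Eq. (13)."""
    pf = p_flip(p, T)
    return sum(comb(N, j) * pf**j * (1 - pf) ** (N - j)
               for j in range(N // 2 + 1, N + 1))

# For small pT the error probability grows roughly like (pT)**(N//2 + 1):
for T in (1, 2, 4, 8, 16):
    print(T, p_err_all_vacancies(0.01, T, N=5))
```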
_Intervals: Generalities--_ We next consider the general case, of arbitrary vacancy locations. We can describe this by saying that there are intervals of lengths \(\ell_{1},\ell_{2},\ldots,\ell_{k}\) for some \(k\), i.e. \(N=\sum_{i=1}^{k}\ell_{i}\) and \(V=\{\ell_{1},\ell_{1}+\ell_{2},\ell_{1}+\ell_{2}+\ell_{3},\ldots\}\). Consider a given interval of some length \(\ell\). In a given round, the probability of \(m\) errors in that interval is
\[P(m,\ell)=p^{m}(1-p)^{\ell-m}{\ell\choose m}.\]
Because all checks are present in an interval, the decoder can determine that either \(m\) or \(\ell-m\) errors occurred; indeed, it knows that either some specific pattern of \(m\) errors occurred or the "complementary" pattern of \(\ell-m\) errors occurred. The decoder can then correct the errors that occurred, up to possibly an overall flip of all spins in the interval; if the decoder chooses to correct the minimal number of errors (i.e., for \(m<\ell/2\), it assumes \(m\) errors rather than \(\ell-m\) occurred), then this can be described by flipping a biased coin: with probability \(P(\ell-m,\ell)/(P(m,\ell)+P(\ell-m,\ell))\) the value of all spins in the interval gets flipped.
Thus, we can describe this by the following effective model (and we emphasize that a maximal likelihood decoding of this effective model will give a maximum likelihood decoding of the original model, so that no meaningful information is lost when going to the effective model). We describe the state of each interval by a single _effective spin_, \(\pm 1\). We then draw some integer \(m\) at random, \(0\leq m\leq\ell/2\), from the following distribution. For \(m<\ell/2\), the probability of having that given \(m\) is \(P(m,\ell)+P(\ell-m,\ell)\), while for \(m=\ell/2\), which is possible if \(\ell\) is even, the probability of having that given \(m\) is \(P(m,\ell)\). The decoder knows the value of this \(m\). Then, for \(m<\ell/2\), we toss a biased coin and flip the effective spin in that interval with probability \(P(\ell-m,\ell)/(P(m,\ell)+P(\ell-m,\ell))\). If \(m=\ell/2\), we flip the effective spin with probability \(1/2\). We emphasize, there is only a single effective spin for each interval.
Thus, in this effective model, each interval behaves independently over the \(T\) rounds. Each interval is subject to a sequence of biased coin tosses, with the bias known to the decoder, flipping the effective spin in the interval if the coin is tails.
Let \(P_{flip}(i,t)\) denote the probability of flipping the effective spin in the \(i\)-th interval on the \(t\)-th round. Remark: in the previous case where we had vacancies everywhere, all \(P_{flip}(i,t)\) are equal to each other (indeed, they equal \(p\)), but now we allow them to vary from round to round and from interval to interval. Thus, the effect of \(T\) rounds of flips on a given effective spin is that it is flipped, from its initial state at round \(0\), with probability
\[P_{flip}(i)=\frac{1-\prod_{t=1}^{T}\Bigl{(}1-2P_{flip}(i,t)\Bigr{)}}{2}. \tag{101}\]
This reduces to Eq. (100) if \(P_{flip}(i,t)=p\) for all \(i,t\). Again, these probabilities \(P_{flip}(i)\) are known to the decoder.
Finally, we apply maximum likelihood decoding after \(T\) rounds. The decoder can determine that either some given pattern of errors on the _effective_ spins occurred (i.e., some specific set of effective spins are flipped) or that the complementary pattern of errors occurred (i.e., that the complement of that set of effective spins is flipped). Maximum likelihood decoding gives an error if some pattern occurs but the complementary pattern is more likely. If the complementary pattern is equally likely, then maximum likelihood decoding knows that both initial states were equally likely.
_Two Intervals--_ The above analysis of intervals gives some general results reducing to an effective model. However, this effective model still must be analyzed: to determine asymptotics, we need to determine which sequences of biased coin flips are most likely. We work now on the case of two intervals, of lengths \(\ell_{1},\ell_{2}\).
We assume without loss of generality that \(\ell_{1}\leq\ell_{2}\).
There are two ways in which an error can occur. One way is when the effective spin in both intervals is flipped after \(T\) time steps. The second way is when the effective spin in one interval is flipped while the other is not flipped, but the decoder still makes an error as it determines that it is more likely that the other effective spin flips. Remark: we say "is flipped" rather than "flips" to emphasize that the important thing is whether there is a total of an odd number of flips after \(T\) rounds.
Let us consider the first way first. Eq. (101) gives the probability of this occurring for each choice of \(P_{flip}(i,t)\). However, since Eq. (101) is linear in each \(P_{flip}(i,t)\), we can compute the probability that the effective spin is flipped for random choice of \(P_{flip}(i,t)\) by replacing \(P_{flip}(i,t)\) with its average. Thus, the probability that the effective spin in an interval of length \(\ell\) is flipped is given by
\[\frac{1-(1-2P_{eff})^{T}}{2},\]
where for odd \(\ell\) we have
\[P_{eff}=\sum_{0\leq m<\ell/2}\Bigl{(}P(m,\ell)+P(\ell-m,\ell)\Bigr{)}\Bigl{(}\frac{P(\ell-m,\ell)}{P(m,\ell)+P(\ell-m,\ell)}\Bigr{)}=\sum_{0\leq m<\ell/2}P(\ell-m,\ell).\]
From now on, we consider the limiting case \(p\ll 1\), where this is dominated by the contribution with \(m=\lfloor\ell/2\rfloor\), so in this case
\[P_{eff}\approx Cp^{\lfloor\ell/2\rfloor+1}2^{\ell}\ell^{-1/2},\]
where the factor of \(C2^{\ell}\ell^{-1/2}\) is an approximation to \(\binom{\ell}{\lfloor\ell/2\rfloor}\) and \(C=\sqrt{2/\pi}\) from Stirling's formula.
Let
\[P_{1,eff}=Cp^{\lfloor\ell_{1}/2\rfloor+1}2^{\ell_{1}}\ell_{1}^{-1/2},\]
and
\[P_{2,eff}=Cp^{\lfloor\ell_{2}/2\rfloor+1}2^{\ell_{2}}\ell_{2}^{-1/2}.\]
Since \(\ell_{1}\leq\ell_{2}\), \(P_{2,eff}\leq P_{1,eff}\). We will consider three different asymptotic time regimes below: first, \(TP_{1,eff}\ll 1\); second, \(TP_{2,eff}\ll 1\ll TP_{1,eff}\); and finally, \(1\ll TP_{2,eff}\).
The probability that both effective spins are flipped is
\[P_{both}\approx\Big{(}\frac{1-(1-2P_{1,eff})^{T}}{2}\Big{)}\Big{(}\frac{1-(1- 2P_{2,eff})^{T}}{2}\Big{)}. \tag{100}\]
Each term in the product in Eq. (100) asymptotes as \(T\to\infty\) at \(1/2\) so the product asymptotes at \(1/4\). The first term displays a linear behavior for \(P_{1,eff}T\ll 1\), and is roughly constant for \(P_{1,eff}T\gg 1\) and similarly for the second term.
In the regime \(P_{1,eff}T\ll 1\) we see a quadratic growth in \(P_{both}\):
\[P_{both}\sim T^{2}P_{1,eff}P_{2,eff}.\]
In the regime \(P_{1,eff}T\gg 1\) but \(P_{2,eff}T\ll 1\), we see a linear growth in \(P_{both}\): \(P_{both}\approx\frac{1}{2}TP_{2,eff}\).
Finally, in the regime \(P_{2,eff}T\gg 1\), \(P_{both}\approx 1/4\).
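The crossover in \(P_{both}\) is easy to reproduce numerically from the exact per-round flip probabilities (rather than the Stirling approximation); the sketch below is ours, with illustrative values of \(p\), \(\ell_{1}\), and \(\ell_{2}\).

```python
from math import comb

def p_eff(p, ell):
    """Per-round flip probability of the effective spin of an interval of odd length ell:
    sum over 0 <= m < ell/2 of P(ell - m, ell)."""
    assert ell % 2 == 1
    return sum(comb(ell, ell - m) * p ** (ell - m) * (1 - p) ** m
               for m in range((ell + 1) // 2))

def p_both(p, ell1, ell2, T):
    """Probability that both effective spins are flipped after T rounds."""
    flip = lambda pe: (1 - (1 - 2 * pe) ** T) / 2
    return flip(p_eff(p, ell1)) * flip(p_eff(p, ell2))

# Quadratic, then linear, then constant behavior as T increases:
for T in (10, 100, 1000, 10000, 100000):
    print(T, p_both(0.05, 3, 5, T))
```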
Now, let us consider the second way, where one effective spin is flipped and the other is not, but the decoder still decodes incorrectly.
First consider the regime \(P_{1,eff}T\ll 1\). Suppose that a single coin toss at some time in one of the intervals is tails, and all other tosses are heads. To leading order in \(P_{1,eff}T\), it suffices to consider this possibility.
Suppose, for example, a coin toss in the first interval is tails (the case where a toss in the second interval is tails is similar to this case), and occurs at some time when we have some \(m_{1}\leq\ell_{1}/2\) errors in the first interval. If at any time we have \(m_{2}\leq\ell_{2}/2\) errors in the second interval with \(\ell_{1}/2-m_{1}>\ell_{2}/2-m_{2}\) then the decoder will make an error because in this case it is _more likely that the first coin toss is heads and the second is tails, than vice versa_.
The probability that this occurs (i.e., that we have some given \(m_{1}\) at some time, that that coin toss is tails, and that we have such an \(m_{2}\) at some, possibly different, time) is at the same order in the physical error probability as in the case above where we made an error in each interval. That is, it is at order \(p^{\lfloor\ell_{1}/2\rfloor+\lfloor\ell_{2}/2\rfloor+2}\). Further, for \(m_{1}\approx\lfloor\ell_{1}/2\rfloor\) and \(m_{2}\approx\lfloor\ell_{2}/2\rfloor\), the probability of an error here is similar to that in the case above: the probability that such \(m_{1},m_{2}\) occur gives roughly the same quadratic growth in time as both events (\(m_{1}\approx\lfloor\ell_{1}/2\rfloor\) and \(m_{2}\approx\lfloor\ell_{2}/2\rfloor\)) are unlikely and there are \(T\) different times that each could occur at.
However, we could also have a case with \(\ell_{1}=\ell_{2}\) and have an event where at some time with \(m_{1}=0\) the coin toss in the first interval is tails. While this event is rare (having small \(m_{1}\) is not rare, but having the toss be tails is rare), the decoder can make a mistake without any rare event in the second interval occurring (as it can make a decoding mistake even if \(m_{2}\) is always small in this case). So, this kind of contribution gives rise to only a linear growth in \(T\). However, as \(m_{1}\) becomes smaller than \(\ell_{1}/2\) or \(m_{2}\) becomes smaller than \(\ell_{2}/2\), these events become _less likely_ because the factor of \(\binom{\ell_{1}}{m_{1}}\binom{\ell_{2}}{m_{2}}\) is smaller. So, the dominant contribution is quadratic in \(T\), as it was when we considered the first way of making an error.
In the regime \(P_{2,eff}T\ll 1\ll P_{1,eff}T\), it is likely that several times we had \(m=\lfloor\ell_{1}/2\rfloor\) in the first interval. Indeed, the expected number of times that this occurs is proportional to \(P_{1,eff}T\). In this case, a maximal likelihood decoder knows that the first effective spin likely flips several times4. Indeed, the first effective spin is flipped with probability close to \(1/2\). So, a maximal likelihood decoder will typically use the second effective spin to determine the correct decoding, largely ignoring the first. The probability that the second effective spin is flipped is approximately \(TP_{2,eff}\) in this regime. So, the probability of an error occurring in this way is roughly \((1/2)TP_{2,eff}\).
Footnote 4: There is an exponentially small (in \(P_{1,eff}T\)) probability that we never see \(m=\lfloor\ell_{1}/2\rfloor\); this possibility does not contribute significantly to the decoding error probability.
Thus, counting both ways of making an error in this regime (in the first way, both effective spins are flipped, while in the second way the dominant contribution is that the first effective spin is not flipped and the second effective spin
is flipped), we claim that for \(P_{1,eff}T\gg 1\) but \(P_{2,eff}T\ll 1\), there is a linear growth in the probability of an error in decoding so that it is roughly \((1/2)TP_{2,eff}\).
Similarly, in the regime \(P_{2,eff}T\gg 1\), the probability of an error in decoding is roughly \(1/2\) when we consider both ways of making an error.
So, there is a crossover from quadratic, to linear, to constant error probability as \(T\) increases.
2305.09911
Karol Kowalski, Bo Peng, Nicholas P. Bauman
2023-05-17T02:42:24Z
http://arxiv.org/abs/2305.09911v1
# Impact of high-rank excitations on accuracy of the unitary coupled cluster downfolding formalism
###### Abstract
In this paper, we evaluate the accuracy of the Hermitian form of the downfolding procedure utilizing the double unitary coupled cluster Ansatz (DUCC) on the H6 and H8 benchmark systems. The computational infrastructure employs the occupation-number-representation codes to construct the matrix representation of arbitrary second-quantized operators, enabling the exact representation of exponentials of various operators. The tests utilize external excitations estimated from standard single-reference coupled cluster methods (SR-CC) to demonstrate that higher-rank SR-CC external amplitudes were necessary to describe the energies in the strongly correlated regime adequately. We show that this approach can offset problems of the corresponding SR-CC theories associated with losing the variational character of corresponding energies.
## I Introduction
Applying many-body methods for dimensionality/cost reduction (DCR) of _ab-initio_ formulations is imperative in expanding the range of system sizes amenable to accurate many-body formulations in chemistry and material sciences. Additionally, these techniques are vital in effectively using early quantum computing resources, commonly referred to as the noisy intermediate-scale quantum devices (NISQ).[1; 2; 3; 4] DCR methods primarily focus on minimizing the number of qubits required to represent a given quantum problem. One should mention several techniques developed to take full advantage of the ubiquitous Variational Quantum Eigensolvers (VQE) approach[5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] in addressing problems beyond the situation where few electrons are correlated.
In the context of quantum algorithms for quantum chemistry, the utilization of DCR techniques is linked to the partitioning of electron correlation effects into static and dynamic partitions. In terms of methodology, the coupled cluster (CC) formalism[17; 18; 19; 20; 21; 22; 23; 24; 25] offers an effective means of describing these effects in a many-body language. Although static effects can be incorporated for small-scale systems using presently accessible quantum hardware, the implementation of necessary dynamical correlation effects, which typically involve numerous parameters with minute values, remains beyond the scope of contemporary quantum computing technologies.
We recently introduced and tested downfolding techniques based on the double unitary coupled cluster Ansatz (DUCC)[26] to address the abovementioned problem. The downfolding procedure utilizes the properties of the ground-state DUCC Ansatz, which, in analogy to single-reference sub-system embedding sub-algebras (SES-CC),[27; 28] allows one to construct effective Hamiltonians that integrate out-of-active-space degrees of freedom usually identified with dynamical amplitudes. In contrast to the SES-CC approach, the DUCC formalism yields the Hermitian form of the effective Hamiltonian in the active space.
The DUCC-driven downfolded Hamiltonians are critical components of hybrid computing. Classical computing resources are employed to calculate the second quantized form of effective Hamiltonians, and quantum computing is invoked to diagonalize them in active spaces that ideally match the available quantum resources. This type of approach provides a much-needed algorithmic transition mechanism from current NISQ technologies[29] to mature error-corrected quantum computers of the future, where the size of the active space is adjusted to the available quantum resources. For this purpose, several approximations were tested to validate the efficiency of the downfolding procedure. These approximations, due to the non-commutativity of the components defining DUCC cluster operators, were based on the finite low-rank commutator expansions, the limited rank of interactions included in the downfolded Hamiltonians (one- and two-body interactions), and a simple form of the external amplitudes extracted from the single-reference CC (SR-CC) model with singles and doubles (CCSD).[23]
Our team has recently developed a novel full configuration interaction (FCI) code called stringMB, which employs a string-based approach to emulate quantum systems and represent operators in matrix form. This code has been integrated into the NWChem software, enabling us to (1) work with the exact representations of operator exponents and (2) leverage various sources for external CC amplitudes. Consequently, we have a unique opportunity to study the exact nature of downfolded Hamiltonians using the DUCC method. In this study, we investigate the impact of higher-rank external excitations obtained through CCSD,[23] CCSDT,[30; 31; 32] and CCSDTQ[33; 34; 35] simulations, as well as the active space size, on the accuracy of ground-state energies for small benchmark systems H6 and H8 representing linear chains of hydrogen atoms.
## II Theory
The DUCC formulations have been amply discussed in recent papers (see Refs.[36; 26; 37]). Here we overview only the salient features of these approaches. While the SES-CC technique[27; 28] forms the basis for non-Hermitian downfolding, the DUCC expansions provide its Hermitian formulations. The Hermitian form of the downfolded Hamiltonian is obtained as a consequence of utilizing the active
space-dependent DUCC representation of the wave function
\[|\Psi\rangle=e^{\sigma_{\rm ext}}e^{\sigma_{\rm int}}|\Phi\rangle\, \tag{1}\]
where \(\sigma_{\rm ext}\) and \(\sigma_{\rm int}\), referred to as the external and internal cluster operators, are general-type anti-Hermitian operators
\[\sigma_{\rm int}^{\dagger} = -\sigma_{\rm int}\, \tag{2}\] \[\sigma_{\rm ext}^{\dagger} = -\sigma_{\rm ext}. \tag{3}\]
In analogy to the non-Hermitian case, the \(\sigma_{\rm ext}\) and \(\sigma_{\rm int}\) operators are defined by parameters carrying active spin-orbital labels only and at least one in-active spin-orbital label, respectively. The DUCC Ansatz falls into a broad class of active space coupled cluster methods.[38; 39; 40]
The use of the DUCC Ansatz (1), in analogy to the SES-CC case, leads to an alternative way of determining the energy, which can be obtained by solving the active-space Hermitian eigenvalue problem:
\[H^{\rm eff}e^{\sigma_{\rm int}}|\Phi\rangle=Ee^{\sigma_{\rm int}}|\Phi\rangle, \tag{4}\]
where
\[H^{\rm eff}=(P+Q_{\rm int})\tilde{H}_{\rm ext}(P+Q_{\rm int}) \tag{5}\]
and
\[\tilde{H}_{\rm ext}=e^{-\sigma_{\rm ext}}He^{\sigma_{\rm ext}}. \tag{6}\]
When the external cluster amplitudes are known (or can be effectively approximated), the energy (or its approximation) can be calculated by diagonalizing the Hermitian effective/downfolded Hamiltonian (5) in the active space using various quantum or classical diagonalizers. The \(Q_{\rm int}\) operator is a projection onto the excited (with respect to \(|\Phi\rangle\)) configurations in the complete active space (CAS), and the projection onto the reference function is denoted as \(P\).
For quantum computing applications, a second-quantized representation of \(H^{\rm eff}\) is required. In light of the non-commuting character of the components defining the \(\sigma_{\rm ext}\) operator, one has to rely on finite-rank commutator expansions, i.e.,
\[\tilde{H}_{\rm ext}\simeq H+\sum_{i=1}^{\rm Max_{R}}\frac{1}{i!}\underbrace{[\ldots[[H,\sigma_{\rm ext}],\sigma_{\rm ext}],\ldots,\sigma_{\rm ext}]}_{i\ \text{times}}\, \tag{7}\]
where \(\rm Max_{\rm R}\) stands for the length of commutator expansion. Due to the numerical costs associated with the contractions of multi-dimensional tensors, only approximations based on including low-rank commutators are feasible. In recent studies, approximations based on single, double, and part of triple commutators were explored where one- and two-body interactions were retained in the second quantized form of \(H^{\rm eff}\). In practical applications, one also has to determine the approximate form of \(\sigma_{\rm ext}\). For practical reasons we used the following approximation
\[\sigma_{\rm ext}\simeq T_{\rm ext}-T_{\rm ext}^{\dagger}\, \tag{8}\]
where \(T_{\rm ext}\) can be defined through the external parts of the typical SR-CC cluster operators.
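To make the truncated expansion in Eq. (7) concrete, the following minimal NumPy sketch evaluates the finite-rank commutator series for generic matrix representations of \(H\) and \(\sigma_{\rm ext}\) and compares it against the exact similarity transform. The function name and the random test matrices are illustrative only and do not correspond to stringMB routines.

```python
import numpy as np
from scipy.linalg import expm

def bch_truncated(H, sigma, max_rank):
    """Finite-rank commutator expansion of e^{-sigma} H e^{sigma}, cf. Eq. (7).

    H        : Hermitian matrix representation of the Hamiltonian.
    sigma    : anti-Hermitian matrix (sigma^dagger = -sigma).
    max_rank : number of nested commutators retained (Max_R).
    """
    H_ext = H.copy()
    nested = H.copy()                  # running nested commutator [...[H, sigma]...]
    factorial = 1.0
    for i in range(1, max_rank + 1):
        nested = nested @ sigma - sigma @ nested   # one more commutator with sigma
        factorial *= i
        H_ext = H_ext + nested / factorial
    return H_ext

# The truncated series should approach the exact similarity transform as Max_R grows.
rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                      # Hermitian "Hamiltonian"
B = 0.1 * rng.normal(size=(n, n))
sigma = B - B.T                        # real antisymmetric, hence anti-Hermitian
exact = expm(-sigma) @ H @ expm(sigma)
for r in (1, 2, 4, 8):
    err = np.max(np.abs(bch_truncated(H, sigma, r) - exact))
    print(f"Max_R = {r}: max deviation from the exact transform = {err:.2e}")
```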
Given the progress achieved in the development of the Hermitian form of the downfolded Hamiltonians, two pressing questions play a pivotal role in further understanding and advancing CC downfolding techniques: (1) what is the impact of the choice of \(T_{\rm ext}\) on the quality of the ground-state energy of \(H^{\rm eff}\)? and (2) what are the energy values corresponding to the untruncated (exact) form of \(H^{\rm eff}\)? We answer these questions using the stringMB code, which allows us to deal with exact matrix representations of second-quantized operators and their functions in the FCI space.
## III Implementation
For interacting fermionic systems, the action of the creation/annihilation operators for an electron in the \(p\)-th spin-orbital (\(a_{p}^{\dagger}/a_{p}\)) on the Slater determinants can be conveniently described using the occupation-number representation, where each Slater determinant is represented as a vector
\[|n_{M}\ n_{M-1}\ \ldots\ n_{i+1}\ n_{i}\ n_{i-1}\ \ldots\ n_{1}\rangle \tag{9}\]
where the occupation numbers \(n_{i}\) are equal to either 1 (an electron occupies the \(i\)-th spin-orbital) or 0 (no electron occupies the \(i\)-th spin-orbital). In (9), \(M\) stands for the total number of spin-orbitals used to describe the quantum system and \(M=2n\), where \(n\) is the number of orbitals.
The following formulas give the non-trivial action of creation/annihilation operators on the state vectors
\[a_{i}^{\dagger}|n_{M}\ n_{M-1}\ \ldots\ n_{i+1}\ 0\ n_{i-1}\ \ldots n_{1}\ \rangle = (-1)^{\sum_{k=1}^{i-1}n_{k}}|n_{M}\ n_{M-1}\ \ldots\ n_{i+1}\ 1\ n_{i-1}\ \ldots n_{1}\ \rangle \tag{10}\] \[a_{i}|n_{M}\ n_{M-1}\ \ldots\ n_{i+1}\ 1\ n_{i-1}\ \ldots n_{1}\ \rangle = (-1)^{\sum_{k=1}^{i-1}n_{k}}|n_{M}\ n_{M-1}\ \ldots\ n_{i+1}\ 0\ n_{i-1}\ \ldots n_{1}\ \rangle. \tag{11}\]
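The sketch below illustrates how matrix representations of \(a_{p}^{\dagger}\) and \(a_{p}\) consistent with Eqs. (10)-(11) can be built over the full occupation-number basis. It is a toy reimplementation of the idea, not the stringMB code itself, and the function names are hypothetical.

```python
import numpy as np
from itertools import product

def build_creation(M, p):
    """Matrix of a_p^dagger over the occupation-number basis of M spin-orbitals.

    Basis states are stored as tuples (n_1, ..., n_M); the ket in the text lists
    the occupations in reverse order, which is only a bookkeeping convention.
    The phase follows Eq. (10): (-1)**sum(n_k for k < p), with p = 1, ..., M.
    """
    basis = list(product((0, 1), repeat=M))
    index = {b: i for i, b in enumerate(basis)}
    a_dag = np.zeros((len(basis), len(basis)))
    for b in basis:
        if b[p - 1] == 0:                           # orbital p must be empty
            phase = (-1) ** sum(b[:p - 1])          # parity of occupations below p
            new = list(b)
            new[p - 1] = 1
            a_dag[index[tuple(new)], index[b]] = phase
    return a_dag

def build_annihilation(M, p):
    """a_p is the Hermitian conjugate of a_p^dagger (real matrices here)."""
    return build_creation(M, p).T

# Sanity check of the canonical anticommutation relations {a_p, a_q^dagger} = delta_pq
# on a tiny 3-spin-orbital (2^3-dimensional) space.
M = 3
identity = np.eye(2 ** M)
for p in range(1, M + 1):
    for q in range(1, M + 1):
        anti = build_annihilation(M, p) @ build_creation(M, q) \
             + build_creation(M, q) @ build_annihilation(M, p)
        assert np.allclose(anti, identity if p == q else 0.0)
print("Canonical anticommutation relations verified.")
```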
Using the occupation-number representation, the stringMB code allows one to construct a matrix representation (**A**) of a general second-quantized operator \(A\), where \(A\) can be identified with the electronic Hamiltonian, the external part of the cluster operator \(T_{\text{ext}}\), and exponents of \(T_{\text{ext}}-T_{\text{ext}}^{\dagger}\), i.e.,
\[H \rightarrow \mathbf{H}, \tag{12}\]
\[T_{\text{ext}} \rightarrow \mathbf{T}_{\text{ext}}, \tag{13}\]
\[e^{\sigma_{\text{ext}}}\simeq e^{T_{\text{ext}}-T_{\text{ext}}^{\dagger}} \rightarrow e^{\mathbf{T}_{\text{ext}}-\mathbf{T}_{\text{ext}}^{\dagger}}, \tag{14}\]
\[e^{-\sigma_{\text{ext}}}\simeq e^{-(T_{\text{ext}}-T_{\text{ext}}^{\dagger})} \rightarrow e^{-(\mathbf{T}_{\text{ext}}-\mathbf{T}_{\text{ext}}^{\dagger})}, \tag{15}\]
\[\tilde{H}_{\text{ext}} \rightarrow \tilde{\mathbf{H}}_{\text{ext}}, \tag{16}\]
\[H^{\text{eff}} \rightarrow \mathbf{H}^{\text{eff}}. \tag{17}\]
Moreover, stringMB can extract sub-blocks of matrices or their products corresponding to an arbitrary active space. This feature is used to form the matrix representations of the effective Hamiltonians \(H^{\text{eff}}\).
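Assuming such full-space matrices are available, the downfolding of Eqs. (5), (6), and (8) reduces to a similarity transformation followed by extraction and diagonalization of the CAS block, as in the hedged sketch below. The matrix and index names are illustrative, and for brevity the toy example uses a random Hermitian matrix rather than a molecular Hamiltonian.

```python
import numpy as np
from scipy.linalg import expm

def downfold(H, T_ext, cas_indices):
    """Exact matrix-level DUCC downfolding, Eqs. (5)-(6) with sigma_ext from Eq. (8).

    H           : FCI-space matrix of the Hamiltonian.
    T_ext       : FCI-space matrix of the external cluster operator.
    cas_indices : indices of the determinants spanning P + Q_int (the CAS block).
    Returns the effective Hamiltonian block and its lowest eigenvalue.
    """
    sigma = T_ext - T_ext.conj().T                      # anti-Hermitian by construction
    H_tilde = expm(-sigma) @ H @ expm(sigma)            # Eq. (6); Hermiticity is preserved
    H_eff = H_tilde[np.ix_(cas_indices, cas_indices)]   # Eq. (5)
    return H_eff, np.linalg.eigvalsh(H_eff)[0]

# Toy illustration on a random Hermitian "Hamiltonian": with T_ext = 0 the
# downfolded ground-state energy is just the lowest eigenvalue of the bare CAS block.
rng = np.random.default_rng(1)
dim, cas = 20, [0, 1, 2, 3]
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2
H_eff, e0 = downfold(H, np.zeros((dim, dim)), cas)
print("Bare CAS ground-state energy:", e0)
```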
## IV Results
To diagonalize the downfolded Hamiltonians we also employed the PDS(\(K\)) formalism, an approach that can be effectively implemented on quantum computers. The PDS results are collated in Table 5, which also provides Hartree-Fock, complete-active-space self-consistent-field (CASSCF, using four active electrons distributed over four active orbitals), and active-space FCI energies for reference. For the H6 and H8 models, we used \(\{2,3,4,5\}\)- and \(\{4,5,6,7\}\)-generated active spaces, respectively. Before discussing the PDS results, we should stress the efficiency of the downfolding procedures (here illustrated on the example of the DUCC-CCSDTQ approach) in capturing the out-of-active-space correlation effects. This is best illustrated by comparing DUCC-CCSDTQ vs. CASSCF(4,4) and active-space FCI energies. Despite using the same active-space definitions, the CASSCF(4,4) and active-space FCI energies for all geometries
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \(R_{\mathrm{H-H}}\) & FCI & SD & SDT & SDTQ & DUCC-SD & DUCC-SDT & DUCC-SDTQ \\ \hline
1.50 & -4.235775 & -4.235111 & -4.235846 & -4.235775 & -4.235071 & -4.235757 & -4.235774 \\
1.75 & -4.315273 & -4.314347 & -4.315504 & -4.315273 & -4.314173 & -4.315222 & -4.315271 \\
2.00 & -4.286011 & -4.284844 & -4.286688 & -4.286013 & -4.284235 & -4.285862 & -4.286005 \\
2.25 & -4.208339 & -4.207232 & -4.210169 & -4.208337 & -4.205334 & -4.207876 & -4.208316 \\
2.50 & -4.114829 & -4.115000 & -4.119502 & -4.114795 & -4.109473 & -4.113350 & -4.114739 \\
2.75 & -4.023783 & -4.029321 & -4.035510 & -4.023578 & -4.013082 & -4.018712 & -4.023447 \\
3.00 & -3.944748 & -3.972672 & -3.978401 & -3.943920 & -3.912005 & -3.921323 & -3.943614 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of energies of the downfolded Hamiltonians for the linear H8 system in the STO-3G basis set based on various sources of the external amplitudes \(T_{\mathrm{ext}}\) used to approximate the \(\sigma_{\mathrm{ext}}\) operator (\(\sigma_{\mathrm{ext}}\simeq T_{\mathrm{ext}}-T_{\mathrm{ext}}^{\dagger}\)). All simulations used restricted Hartree-Fock molecular orbitals 2,3, and 4,5 as active occupied and virtual orbitals, respectively. In the linear chain of the H atoms, the geometry is defined by the distance between neighboring hydrogen atoms (\(R_{\mathrm{H-H}}\)) in a.u.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(R_{\mathrm{H-H}}\) & FCI & SD & SDT & SDTQ & DUCC-SD & DUCC-SDT & DUCC-SDTQ \\ \hline
1.50 & -3.199566 & -3.199332 & -3.199601 & -3.199566 & -3.199324 & -3.199562 & -3.199566 \\
1.75 & -3.245936 & -3.245603 & -3.246054 & -3.245936 & -3.245547 & -3.245923 & -3.245936 \\
2.00 & -3.217699 & -3.217277 & -3.218047 & -3.217699 & -3.217040 & -3.217655 & -3.217697 \\
2.25 & -3.156624 & -3.156266 & -3.157559 & -3.156621 & -3.155447 & -3.156484 & -3.156618 \\
2.50 & -3.085398 & -3.085691 & -3.087713 & -3.085380 & -3.083217 & -3.084962 & -3.085374 \\
2.75 & -3.016841 & -3.019512 & -3.022159 & -3.016770 & -3.012642 & -3.015537 & -3.016758 \\
3.00 & -2.957646 & -2.967326 & -2.969163 & -2.957405 & -2.948732 & -2.953850 & -2.957384 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of energies of the downfolded Hamiltonians for the linear H6 system in the STO-3G basis set based on various sources of the external amplitudes \(T_{\mathrm{ext}}\) used to approximate the \(\sigma_{\mathrm{ext}}\) operator (\(\sigma_{\mathrm{ext}}\simeq T_{\mathrm{ext}}-T_{\mathrm{ext}}^{\dagger}\)). All simulations used restricted Hartree-Fock molecular orbitals 2,3, and 4,5 as active occupied and virtual orbitals, respectively. In the linear chain of the H atoms, the geometry is defined by the distance between neighboring hydrogen atoms (\(R_{\mathrm{H-H}}\)) in a.u.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(R_{\mathrm{H-H}}\) & FCI & CCSDTQ & DUCC-CCSDTQ \\ \hline
2.00 & -4.286011 & -4.286013 & -4.286008 \\
2.50 & -4.114829 & -4.114795 & -4.114782 \\
3.00 & -3.944748 & -3.943920 & -3.944137 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The DUCC-CCSDTQ results were obtained for various geometries of the H8 model in the STO-3G basis set for various choices of active spaces.
of H6 and H8, in contrast to the DUCC-CCSDTQ approach, are characterized by significant errors with respect to the FCI energies. Although, in many cases, quantum simulations are performed for small-dimensionality active spaces using bare Hamiltonians, the quality of the results can be significantly improved, without a significant increase in quantum computing resources, by using a downfolded form of the Hamiltonian. As seen from Table 5, the PDS(3) approach can provide much better quality results than active-space FCI. The PDS(4) approach can further refine the accuracy of PDS(3), reducing the errors with respect to the exact DUCC-CCSDTQ energies to within 0.7 milliHartree for the H6 and H8 model systems.

In the last part of our discussion, we analyze the accuracies of the finite-rank (Max\({}_{R}\)) approximations for the downfolded Hamiltonians. The results for the H6 and H8 models are collected in Table 6. The convergence of the commutator expansions is illustrated on the example of the 10th-rank commutator expansion. In all cases discussed in Table 6, the Max\({}_{R}\)=10 approximation reproduces virtually the exact DUCC-CCSDTQ energies. In practical applications based on the many-body form of the downfolded Hamiltonian, only low-rank commutator expansions (Max\({}_{R}\)=1,...,4) are numerically feasible (Max\({}_{R}\)=0 corresponds to the active-space FCI results). One can observe that for weakly correlated situations (\(R_{\rm H-H}\)=2.0 a.u.) Max\({}_{R}\)=3 provides a satisfactory approximation of the exact DUCC-CCSDTQ energies. For the strongly correlated case (\(R_{\rm H-H}\)=3.0 a.u.), the inclusion of the 4th-rank commutators (Max\({}_{R}\)=4) is needed. In practical applications, however, all expansions are based on mixing Max\({}_{R}\)=\(n\) contributions with (\(n+1\))-rank commutators stemming from the Fock-matrix terms to reinstate the so-called perturbative balance (see Refs. for the discussion). For example, the Max\({}_{R}\)=1 case epitomizes a situation where the perturbative balance is violated, and non-variational energies can be obtained. Therefore, for strongly correlated cases, we recommend expansions based on the inclusion of Max\({}_{R}\)=3 or Max\({}_{R}\)=4 terms with the Fock-operator-dependent terms originating in the 4th and 5th commutators, respectively.
## V Conclusion
A series of calculations were conducted to examine the impact of approximations made on the external cluster amplitudes on CC downfolded energies. Simple model systems, H6 and H8 linear chains, were utilized to continuously vary the extent of correlation effects from weakly to strongly correlated regimes (with \(R_{\rm H-H}\) ranging from 2.0 to 3.0 a.u.). The results showed that while the external cluster amplitudes from SR-CCSD calculations were satisfactory for the weakly correlated situation, for the strongly correlated case the effect of triply and quadruply excited external clusters could no longer be neglected. The downfolding procedure acted as a stabilizer and could restore the variational character of the energies despite the fact that the external amplitudes were obtained from SR-CC calculations that suffer from variational collapse. Furthermore, the downfolded energies obtained for various active spaces on the H8 system (including those that did not include essential correlation effects) had only small discrepancies, demonstrating the approximate invariance of the downfolded energies with respect to the choice of active space. This was the case for all SES-CC-type active spaces in commutative SR-CC formulations. Additionally, it was shown that the downfolded Hamiltonians could be effectively diagonalized with low-order PDS formulations for both weakly and strongly correlated regimes.
The assessment of the impact of the maximum rank of the commutator expansion on the precision of downfolded energies is an integral aspect of the analysis associated with practical applications of CC downfolding procedures. Our findings demonstrate that an increase in the degree of correlation effects necessitates the inclusion of higher-rank commutators. In particular, for the H6/H8 \(R_{\rm H-H}=2.0\) a.u. geometries, the inclusion of single, double, and triple commutators is sufficient to achieve favorable agreement with the FCI energies. However, for \(R_{\rm H-H}=3.0\) a.u., higher-order commutators (quadruple/pentuple) must be incorporated into the approximation. It should be noted, however, that for the \(R_{\rm H-H}=3.0\) a.u. scenario, an active space that can distinguish between static and dynamical correlation effects cannot be constructed (i.e., all orbitals must be considered active). Nonetheless, as Table 6 demonstrates, the corresponding commutator-rank expansion rapidly converges to the FCI energies.
## VI Acknowledgement
This work was supported by the Quantum Science Center (QSC), a National Quantum Information Science Research Center of the U.S. Department of Energy (under FWP 76213) and by "Embedding QC into Many-body Frameworks for Strongly Correlated Molecular and Materials Systems" project, which is funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, the Division of Chemical Sciences, Geosciences, and Biosciences (under FWP 72689). This work used resources from the Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle for the U.S. Department of Energy under Contract DE-AC05-76RL01830.
## Author declarations
### Conflict of Interest
The authors have no conflicts of interest to declare.
## Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & H6 & H6 & H8 & H8 \\ & \((R_{\rm H-H}=2.0\) a.u.) & \((R_{\rm H-H}=3.0\) a.u.) & \((R_{\rm H-H}=2.0\) a.u.) & \((R_{\rm H-H}=3.0\) a.u.) \\ \hline HF & -3.105850 & -2.675432 & -4.138199 & -3.572347 \\ CASSCF(4,4) & -3.175370 & -2.856832 & -4.205528 & -3.699677 \\ Active-space FCI & -3.166938 & -2.802092 & -4.190602 & -3.665605 \\ FCI & -3.217699 & -2.957646 & -4.286011 & -3.944748 \\ DUCC-CCSDTQ & -3.217697 & -2.957384 & -4.286005 & -3.943614 \\ PDS(3) & -3.214888 & -2.953067 & -4.283332 & -3.941223 \\ PDS(4) & -3.217234 & -2.956712 & -4.285622 & -3.943349 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of the CASSCF(4,4), active-space FCI, DUCC-CCSDTQ, and PDS(3)/PDS(4) energies for H6 and H8 model systems. The PDS(3)/PDS(4) approaches were applied to evaluate the ground-state energy of the DUCC-CCSDTQ effective Hamiltonians. The \(\{2,3,4,5\}\)- and \(\{4,5,6,7\}\)-generated active spaces were used for H6 and H8 systems, respectively.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Max\({}_{\rm R}\)/Method} & H6 & H6 & H8 & H8 \\ & \((R_{\rm H-H}=2.0\) a.u.) & \((R_{\rm H-H}=3.0\) a.u.) & \((R_{\rm H-H}=2.0\) a.u.) & \((R_{\rm H-H}=3.0\) a.u.) \\ \hline Max\({}_{\rm R}\)=0 & -3.166938 & -2.802092 & -4.190602 & -3.665605 \\ Max\({}_{\rm R}\)=1 & -3.269110 & -3.116145 & -4.382423 & -4.228605 \\ Max\({}_{\rm R}\)=2 & -3.218732 & -2.976207 & -4.288761 & -3.986313 \\ Max\({}_{\rm R}\)=3 & -3.217344 & -2.949796 & -4.285044 & -3.927241 \\ Max\({}_{\rm R}\)=4 & -3.217693 & -2.956814 & -4.285985 & -3.941744 \\ Max\({}_{\rm R}\)=5 & -3.217699 & -2.957632 & -4.286012 & -3.944383 \\ Max\({}_{\rm R}\)=10 & -3.217697 & -2.957384 & -4.286005 & -3.943615 \\ \hline DUCC-CCSDTQ & -3.217697 & -2.957384 & -4.286005 & -3.943614 \\ FCI & -3.217699 & -2.957646 & -4.286011 & -3.944748 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of energies obtained with finite commutator expansions (CCSDTQ-based) for the downfolded Hamiltonians and with exact downfolding (DUCC-CCSDTQ) for the H6 and H8 models in the STO-3G basis set.
|
2307.16090
|
Rapid Flood Inundation Forecast Using Fourier Neural Operator
|
Flood inundation forecast provides critical information for emergency
planning before and during flood events. Real time flood inundation forecast
tools are still lacking. High-resolution hydrodynamic modeling has become more
accessible in recent years, however, predicting flood extents at the street and
building levels in real-time is still computationally demanding. Here we
present a hybrid process-based and data-driven machine learning (ML) approach
for flood extent and inundation depth prediction. We used the Fourier neural
operator (FNO), a highly efficient ML method, for surrogate modeling. The FNO
model is demonstrated over an urban area in Houston (Texas, U.S.) by training
using simulated water depths (in 15-min intervals) from six historical storm
events and then tested over two holdout events. Results show FNO outperforms
the baseline U-Net model. It maintains high predictability at all lead times
tested (up to 3 hrs) and performs well when applying to new sites, suggesting
strong generalization skill.
|
Alexander Y. Sun, Zhi Li, Wonhyun Lee, Qixing Huang, Bridget R. Scanlon, Clint Dawson
|
2023-07-29T22:49:50Z
|
http://arxiv.org/abs/2307.16090v1
|
# Rapid Flood Inundation Forecast Using Fourier Neural Operator
###### Abstract
Flood inundation forecast provides critical information for emergency planning before and during flood events. Real time flood inundation forecast tools are still lacking. High-resolution hydrodynamic modeling has become more accessible in recent years, however, predicting flood extents at the street and building levels in real-time is still computationally demanding. Here we present a hybrid process-based and data-driven machine learning (ML) approach for flood extent and inundation depth prediction. We used the Fourier neural operator (FNO), a highly efficient ML method, for surrogate modeling. The FNO model is demonstrated over an urban area in Houston (Texas, U.S.) by training using simulated water depths (in 15-min intervals) from six historical storm events and then tested over two holdout events. Results show FNO outperforms the baseline U-Net model. It maintains high predictability at all lead times tested (up to 3 hrs) and performs well when applying to new sites, suggesting strong generalization skill.
## 1 Introduction and application context
Flooding is the most disruptive natural disaster, causing tens of billions of dollars of direct economic loss each year and affecting millions of people [4, 23]. In coastal areas, flooding may result from overbank river flow (fluvial), heavy rainfall (pluvial), coastal storm surge, or a combination of all three. A warming climate is likely to further intensify the extreme precipitation, induce global sea level rise, and increase the frequency and intensity of tropical cyclones, making future flooding events more severe [14, 39]. In the U.S., tens of millions of people are already exposed to the risk of coastal flooding [35]. By 2050, the U.S. population density in flood-prone coastal zones and megacities is expected to grow by 25% [1], and flood risk is projected to increase by 26%, with hotspots expected in highly populated counties along both coasts, as well as across the Northeast and through Appalachia [5, 37].
Flood inundation modeling (FIM), seeking to predict the flood water extent and depth using hydrodynamic models, is an integral part of flood risk management. Two major usages of FIM may be identified, flood susceptibility mapping and real time forecasting. In flood susceptibility mapping, FIM is used to quantify risks to flood events of a particular return period (e.g., 100-year event), providing risk-informed inputs to planners and insurers for land use zoning and infrastructure development. In real time forecasting, FIM is used to provide prediction of surface water levels during storm events. State of the art hydrodynamic models typically solve 2D full shallow water equations (SWE), which are simplified Navier-Stokes equations representing depth-averaged mass and momentum conservation [4]. A flood inundation model is forced by initial and boundary conditions such as upstream inflow and precipitation. For urban settings, the model spatial resolution should ideally be 3-10 m and the temporal resolution should be sub-hourly [4]. High-resolution FIM is not only necessary for street-level flood impact mapping, but also helps to analyze the exposure and vulnerability of local communities, especially disadvantaged population groups [3, 26, 28]. However, solving SWE at high spatiotemporal resolutions is still computationally demanding, presenting a major challenge to its operational use.
AI/ML-enabled surrogate models can provide a potential solution to scaling up FIM. In a broader context, AI/ML is envisioned to ultimately power the development of earth system digital twins [6, 16]. Extreme weather forecasting represents a major component of earth system digital twins. To demonstrate such a potential, this study presents a physics-based, hybrid ML approach for FIM. Physics-informed ML models are now widely developed and used in climate and earth system sciences to incorporate domain
knowledge stemming from empirical and physical principles [9, 16, 17]. Integration of prior knowledge and/or physical constraints not only allows for training of more accurate ML models with sparse/noisy data, but also leads to more interpretable results. Hybrid ML, which is a form of physics-informed ML, utilizes outputs from process-based models as inputs to ML models. The ML algorithm we adopted in this work is Fourier Neural Operator (FNO), which is a type of neural operator for approximating mappings between infinite-dimensional function spaces [21]. FNO converts spatial domain representation into the spectral space through Fourier transforms, thus enabling more efficient computation of convolutions [19].
**Main Contributions**. The main contributions of this work are summarized below:
* We developed an FNO-based, flood inundation model for real-time flood mapping at multiple lead times
* A physics-based loss function is used to minimize the mismatch in predicted water depths and in spatial derivatives
* Results, demonstrated over an urban area in Houston, suggest the FNO model is more efficient than a U-Net-based baseline, and finally
* We showed that an FNO model pretrained on similar domains can be applied to the study site without fine-tuning, suggesting good generalization capability
## 2 Related work
FIM has been commonly used for flood susceptibility mapping [7]. Applications of (near) real time flood inundation mapping have only risen in recent years because of increased availability of SWE solvers and AI/ML. In [38], a random forest model was trained to map topographic and environmental features to hourly water depths simulated by a hydrodynamic model at 16,914 street segments in the coastal city of Norfolk (Virginia, U.S.). Guo et al. [13] presented a data-driven approach for maximum water depth prediction using CNN but assumed steady state. In [15], a generative adversarial network (GAN) was trained using synthetic rainfall events and simulated water depths. Their GAN-based approach was recently extended to include static features (e.g., elevation and slope) such that the trained model can be applied in zero-shot learning [10]. Note that many of these previous FIM studies only considered a single lead time [10, 29]. Satellite remote sensing provides flood extent information, but the coarse spatial resolution and latency of most satellites largely restrict their use to post-event impact assessment, such as mapping flooded areas using multispectral surface reflectance imagery [25], identifying the flood water extent from synthetic aperture radar (SAR) or multi-spectral (MS) imagery [18, 30]. To the best of our knowledge, neural operators have not been applied to FIM.
## 3 Methods
**FNO** The goal of neural operator learning is to learn a mapping between two infinite-dimensional spaces by using paired inputs/outputs. Specifically, let \(\mathcal{D}\in\mathbb{R}^{d}\) and \(\mathcal{D}^{\prime}\in\mathbb{R}^{d^{\prime}}\) be bounded domains, and \(\mathcal{A}\) and \(\mathcal{U}\) be input and output function spaces defined on \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\), respectively. In our case, the input space consists of meteorological forcing (precipitation), static features (e.g., elevation) and/or antecedent water depths, while the output space represents the predicted surface water depths. Let \(\mathcal{G}^{\dagger}\) represent an operator that maps the input to output, \(\mathcal{G}^{\dagger}:\mathcal{A}\longrightarrow\mathcal{U}\). A neural operator \(\mathcal{G}_{\theta}\) is a parametric map that approximates \(\mathcal{G}^{\dagger}\), where \(\theta\in\Theta\) are trainable parameters that can be obtained by solving a minimization problem with a loss function \(L\)[21]
\[\min_{\theta\in\Theta}\mathbb{E}\left(L(G_{\theta}(a),G^{\dagger}(a))\right), \tag{1}\]
FNO seeks to approximate the following integral kernel operator commonly used in solutions of partial differential equations [21]
\[\left(\mathcal{K}(v_{l})\right)(x)=\int_{\mathcal{D}}\kappa(x,y)\,v_{l}(y)\,dy, \tag{2}\]
where \(x,y\in\mathcal{D}\), \(v_{l}\) is a function, and \(\kappa(x,y)\) is a kernel function. In the Fourier integral operator, the kernel function is replaced by a convolution operator
\[\left(\mathcal{K}(v_{l})\right)(x)=\mathcal{F}^{-1}\left(R_{\phi}\cdot \mathcal{F}(v_{l})\right)(x), \tag{3}\]
in which \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) are forward and inverse Fourier transforms, and \(R_{\phi}\) is the Fourier transform of a periodic function \(\kappa\) that is parameterized by \(\phi\in\Theta\). Assuming uniform discretization, then \(\mathcal{F}\) is replaced by Fast Fourier Transform (FFT), and \(R_{\phi}\) is approximated as a complex-valued tensor comprising a collection of truncated Fourier modes [21] and the values of \(R_{\phi}\) are learned from training data.
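A minimal PyTorch sketch of one such Fourier layer is given below; it follows the structure of publicly available FNO implementations (FFT, truncated mode mixing with learned complex weights, inverse FFT), but the class and parameter names are illustrative and the layer is not necessarily the exact architecture used in this study.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Fourier integral operator of Eq. (3): FFT -> truncated mode mixing -> inverse FFT."""

    def __init__(self, in_channels, out_channels, modes1, modes2):
        super().__init__()
        scale = 1.0 / (in_channels * out_channels)
        # Learned complex weights R_phi for the retained low-frequency modes.
        self.w1 = nn.Parameter(scale * torch.rand(in_channels, out_channels, modes1, modes2,
                                                  dtype=torch.cfloat))
        self.w2 = nn.Parameter(scale * torch.rand(in_channels, out_channels, modes1, modes2,
                                                  dtype=torch.cfloat))
        self.modes1, self.modes2 = modes1, modes2

    def forward(self, x):                      # x: (batch, channels, H, W)
        b, c, h, w = x.shape
        x_ft = torch.fft.rfft2(x)              # (b, c, h, w//2 + 1), complex
        out_ft = torch.zeros(b, self.w1.shape[1], h, w // 2 + 1,
                             dtype=torch.cfloat, device=x.device)
        # Mix the lowest modes1 x modes2 modes and their negative-frequency partners.
        out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :self.modes1, :self.modes2], self.w1)
        out_ft[:, :, -self.modes1:, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, -self.modes1:, :self.modes2], self.w2)
        return torch.fft.irfft2(out_ft, s=(h, w))

# One Fourier layer applied to a batch of 128 x 128 input patches with 16 modes.
layer = SpectralConv2d(in_channels=4, out_channels=32, modes1=16, modes2=16)
y = layer(torch.randn(8, 4, 128, 128))
print(y.shape)                                 # torch.Size([8, 32, 128, 128])
```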
**Loss function** We used the relative \(L_{2}\) error as loss function, which has been observed to impose a good normalization and regularization effect that prevents overfitting [19]. Inspired by physics informed ML, we further minimized mismatch of spatial derivatives in terms of relative \(L_{2}\) error [27, 36]
\[\begin{split}\mathrm{Rel.\,Loss}=\frac{\|u-\hat{u}\|}{\|u\|}& +\beta_{1}\frac{\|du/dx-d\hat{u}/dx\|}{\|du/dx\|}\\ &+\beta_{2}\frac{\|du/dy-d\hat{u}/dy\|}{\|du/dy\|}\end{split} \tag{4}\]
where \(u\) and \(\hat{u}\) are the simulated and predicted water depths, and \(dx\), \(\beta_{1}\), and \(\beta_{2}\) are hyperparameters. We assigned \(dx=0.2\) and used 0.1 for both \(\beta_{1}\) and \(\beta_{2}\) after a grid search.
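The following sketch shows one way to implement the loss of Eq. (4) in PyTorch, with the spatial derivatives approximated by simple finite differences on the grid; the helper names are illustrative and the exact discretization used in the study may differ.

```python
import torch

def relative_l2(a, b, eps=1e-8):
    """Batch-averaged relative L2 error ||a - b|| / ||b||."""
    num = torch.norm(a.flatten(1) - b.flatten(1), dim=1)
    den = torch.norm(b.flatten(1), dim=1) + eps
    return (num / den).mean()

def ddx(u, dx):
    # forward difference along the x (last) dimension
    return (u[..., :, 1:] - u[..., :, :-1]) / dx

def ddy(u, dx):
    # forward difference along the y (second-to-last) dimension
    return (u[..., 1:, :] - u[..., :-1, :]) / dx

def physics_informed_loss(pred, target, dx=0.2, beta1=0.1, beta2=0.1):
    """Relative L2 loss of Eq. (4): depth mismatch plus x/y derivative mismatch."""
    return (relative_l2(pred, target)
            + beta1 * relative_l2(ddx(pred, dx), ddx(target, dx))
            + beta2 * relative_l2(ddy(pred, dx), ddy(target, dx)))

# Example on a batch of 8 predicted / simulated 128 x 128 depth fields.
pred = torch.rand(8, 1, 128, 128)
target = torch.rand(8, 1, 128, 128)
print(physics_informed_loss(pred, target))
```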
## 4 Experiments
**Physics model setup and dataset creation** The efficacy of FNO was demonstrated via a series of experiments. Under the _single-domain, multi-event setting_, we considered a \(1.3\times 2.6\mathrm{km}^{2}\) domain located in the Brays Bayou watershed near downtown Houston (D4 in Fig.1A). Brays Bayou is a fully urbanized watershed. Land use comprises of residential, industrial, and commercial buildings. The bayou flows eastward to its confluence with the Houston Ship Channel. Multiple historical storm events were simulated using the open-source CREST-iMAP, a coupled hydrology-hydraulic framework for riverbank flow and overland flood inundation modeling [20]. Previously, CREST-iMAP has been validated against Hurricane Harvey observations (e.g., streamflow, high water marks, and flood insurance claims) and showed comparable or better performance than other state-of-the-art hydrodynamic models [20]. In this study, CREST-iMAP was forced by using a high quality, radar-based quantitative precipitation estimation (QPE) product--the Multi-Radar/Multi-Sensor System (MRMS) data published by the National Severe Storms Laboratory (NSSL) in U.S. National Oceanic and Atmospheric Administration (NOAA). MRMS comes at 1-km resolution in 2-min intervals. It is behind several rainfall nowcasting DL models such as Google's MetNet models [11, 33]. In Fig.1B, a typical flooding scene is shown. We used simulations corresponding to six storm events from NOAA storm inventory for training and the rest for testing (TableA1. Under the _multi-domain, multi-event setting_, the same NOAA storm events were first simulated over different spatial domains (D1-D3 in Fig.1A). We then trained an FNO model using D1-D3 data and tested on D4. More details on CREST-iMAP run configurations are provided under Appendix A1.
**Surrogate model training** A hybrid FNO surrogate model was trained for multi-step flood inundation prediction. We assumed the process-based flood inundation model and the FNO are running in parallel such that the outputs of the process-based model are available for the FNO model to ingest as predictors [34, 31]. To generate training datasets, we aggregated the CREST-iMAP inputs/outputs to 15-min intervals, and then sampled \(128\times 128\) input and output patches from the spatial domain. Each 15-min frame was sampled twice by randomly varying image centers within the image bound. Fig.2 illustrates the architecture of the model. Inputs to FNO include antecedent precipitation and simulated water depths, and digital elevation model (DEM). A key feature of FNO is it concatenates x- and y-coordinates to the input features to help it capture dependencies between inputs and spatial locations, enabling the model to generalize to new locations [21]. The target variable is predicted water depth at a lead time, for which the target lead time value is concatenated to the inputs as a label. Alternatively, the target lead time can be treated as a trainable feature so that the trained model works for arbitrary lead times. The strategy was not used in this preliminary work. The FNO architecture includes four 2D spectral convolution layers, each followed by a GeLU activation layer. The data samples are split into training, validation, and testing in 0.8, 0.1, 0.1 ratios. The number of Fourier modes used is 16 in both directions.
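A hedged sketch of how one such training sample might be assembled is shown below; the channel ordering, the normalization of the lead-time label, and the helper name are illustrative assumptions rather than the exact pipeline used in the study.

```python
import numpy as np

def build_fno_sample(precip_frames, depth_frames, dem, lead_time, max_lead=12):
    """Stack predictors for one 128 x 128 patch into a (channels, H, W) array.

    precip_frames, depth_frames : (T, H, W) antecedent MRMS precipitation and
                                  CREST-iMAP water depths (15-min frames)
    dem        : (H, W) digital elevation model
    lead_time  : integer number of 15-min steps ahead to predict
    """
    t, h, w = depth_frames.shape
    yy, xx = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    lead_channel = np.full((1, h, w), lead_time / max_lead)   # scalar label broadcast to a map
    channels = np.concatenate([precip_frames, depth_frames,
                               dem[None], xx[None], yy[None], lead_channel], axis=0)
    return channels.astype(np.float32)

# Example: 12 lookback frames (3 h) of rainfall and depth, one DEM, 6-step (90-min) lead.
sample = build_fno_sample(np.random.rand(12, 128, 128),
                          np.random.rand(12, 128, 128),
                          np.random.rand(128, 128), lead_time=6)
print(sample.shape)      # (12 + 12 + 1 + 2 + 1, 128, 128) = (28, 128, 128)
```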
We trained the models in PyTorch Lightning [12] using the Adam optimizer with an initial learning rate of 5e-4, a cosine annealing training schedule [22], and early stopping. Unless otherwise specified, the maximum epochs used is 60 and batch size is 8, which were found sufficient for this problem. Training was done on an Nvidia RTX3090 GPU. Training time per epoch is 72 sec wallclock time and inference time is 0.002 sec/sample. For baseline, we considered a U-Net like model adapted from RainNet, which has a relatively simple deep architecture but nonetheless performs surprisingly well on radar-based precipitation nowcasting problems [2]. RainNet still has 31.4M trainable parameters, while FNO has 8.1M. More details on the baseline model can be found in A2.
**Performance metrics** Model performance is measured using the Critical Success Index (CSI, range 0-1.0) and the mean absolute error (MAE, range 0-\(\infty\)), both of which are often used in FIM [33, 26].
Figure 1: (A) Areal view of the Domain 4 (D4), which is located in the Brays Bayou watershed in Houston, Texas, U.S. and (B) an exemplary flooding scene, where darker blue indicates deeper water. Map inset shows locations of all domains (D1–D4) used in this study.
CSI is defined as \(\#\mathrm{Hits}/(\#\mathrm{Hits}+\#\mathrm{Misses}+\#\mathrm{FalseAlarms})\), where #Hits is the number of flood events correctly predicted; #Misses is the number of flood events incorrectly predicted as non-flood events; and #FalseAlarms is the number of non-flood events incorrectly predicted as flood events [8]. In this work, we used the average of the CSI calculated over three depth thresholds, 3 cm, 10 cm, and 25 cm, to gauge model performance. In the literature, 3 cm is often used as the threshold for nuisance flooding [24]. Both CSI and MAE were calculated at the grid cell level and then averaged spatially and temporally.
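A simple NumPy sketch of these metrics (CSI averaged over the 3 cm, 10 cm, and 25 cm thresholds, plus grid-cell MAE) is shown below; the synthetic depth fields are placeholders for illustration only.

```python
import numpy as np

def csi(pred_depth, true_depth, threshold):
    """Critical Success Index = hits / (hits + misses + false alarms) at one threshold."""
    pred_flood = pred_depth >= threshold
    true_flood = true_depth >= threshold
    hits = np.sum(pred_flood & true_flood)
    misses = np.sum(~pred_flood & true_flood)
    false_alarms = np.sum(pred_flood & ~true_flood)
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan

def flood_metrics(pred_depth, true_depth, thresholds=(0.03, 0.10, 0.25)):
    """Average CSI over the 3/10/25 cm thresholds plus grid-cell MAE (depths in metres)."""
    mean_csi = np.nanmean([csi(pred_depth, true_depth, t) for t in thresholds])
    mae = np.mean(np.abs(pred_depth - true_depth))
    return mean_csi, mae

# Example on a single synthetic 128 x 128 predicted / simulated water-depth frame.
rng = np.random.default_rng(0)
truth = rng.gamma(shape=0.5, scale=0.1, size=(128, 128))
pred = np.clip(truth + rng.normal(scale=0.02, size=truth.shape), 0.0, None)
print(flood_metrics(pred, truth))
```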
## 5 Results
**Single-domain multi-event results** Table 1 summarizes the CSI and MAE metrics on two holdout storm events. Note the event on 2019/09/17 corresponds to Tropical Storm Imelda, the fourth wettest event on record in Texas [32]. At 12 lookback frames (i.e., 180 min), FNO-12 outperformed U-Net-12 for all 12 lead times (Fig.3). The CSI of both models drops at the beginning of the streamflow ascent due to strong discontinuities in the predictors, but quickly bounces back for the rest of the flood duration. As lead time increases, the FNO performance decreases linearly; thus, it is expected to give reasonable performance for longer lead times than tested here.
As ablation studies, we considered longer lookback periods (24 past frames, or 6 hrs), which did not improve the results (FNO-24 in Table 1). This is probably because of the lack of long memory during storm events. We also considered a precipitation-only experiment (FNO-12P in Table 1) where no antecedent water depth information is used. In that case, the CSI dropped significantly, suggesting the importance of hybrid forecasting.
**Multi-domain multi-event results** An FNO model pre-trained using data from D1-D3 was applied to predict the same holdout events over D4. Results, shown in Table 1 under FNO-MD, suggest the pretrained model adapts well to the new domain in this case, largely because of the embedded physics.
## Acknowledgement
AYS, WL, BRS, and CW were funded by Department of Energy Advanced Scientific Computing Research program under grant no. DE-SC0022211.
|
2303.04163
|
Characterizing fragmentation and sub-Jovian clump properties in
magnetized young protoplanetary disks
|
We study the initial development, structure and evolution of protoplanetary
clumps formed in 3D resistive MHD simulations of self-gravitating disks. The
magnetic field grows by means of the recently identified gravitational
instability dynamo (Riols & Latter 2018; Deng et al. 2020). Clumps are
identified and their evolution is tracked finely both backward and forward in
time. Their properties and evolutionary path is compared to clumps in companion
simulations without magnetic fields. We find that magnetic and rotational
energy are important in the clumps' outer regions, while in the cores, despite
appreciable magnetic field amplification, thermal pressure is most important in
counteracting gravity. Turbulent kinetic energy is of a smaller scale than
magnetic energy in the clumps. Compared to non-magnetized clumps, rotation is
less prominent, which results in lower angular momentum in much better
agreement with observations. In order to understand the very low sub-Jovian
masses of clumps forming in MHD simulations, we revisit the perturbation theory
of magnetized sheets finding support for a previously proposed magnetic
destabilization in low-shear regions. This can help explaining why
fragmentation ensues on a scale more than an order of magnitude smaller than
that of the Toomre mass. The smaller fragmentation scale and the high magnetic
pressure in clumps' envelopes explain why clumps in magnetized disks are
typically in the super-Earth to Neptune mass regime rather than Super-Jupiters
as in conventional disk instability. Our findings put forward a viable
alternative to core accretion to explain widespread formation of
intermediate-mass planets.
|
Noah Kubli, Lucio Mayer, Hongping Deng
|
2023-03-07T19:00:01Z
|
http://arxiv.org/abs/2303.04163v1
|
Characterizing fragmentation and sub-Jovian clump properties in magnetized young protoplanetary disks
###### Abstract
We study the initial development, structure and evolution of protoplanetary clumps formed in 3D resistive MHD simulations of self-gravitating disks. The magnetic field grows by means of the recently identified gravitational instability dynamo (Riols and Latter, 2018; Deng et al., 2020). Clumps are identified and their evolution is tracked finely both backward and forward in time. Their properties and evolutionary path is compared to clumps in companion simulations without magnetic fields. We find that magnetic and rotational energy are important in the clumps' outer regions, while in the cores, despite appreciable magnetic field amplification, thermal pressure is most important in counteracting gravity. Turbulent kinetic energy is of a smaller scale than magnetic energy in the clumps. Compared to non-magnetized clumps, rotation is less prominent, which results in lower angular momentum in much better agreement with observations. In order to understand the very low sub-Jovian masses of clumps forming in MHD simulations, we revisit the perturbation theory of magnetized sheets finding support for a previously proposed magnetic destabilization in low-shear regions. This can help explaining why fragmentation ensues on a scale more than an order of magnitude smaller than that of the Toomre mass. The smaller fragmentation scale and the high magnetic pressure in clumps' envelopes explain why clumps in magnetized disks are typically in the super-Earth to Neptune mass regime rather than Super-Jupiters as in conventional disk instability. Our findings put forward a viable alternative to core accretion to explain widespread formation of intermediate-mass planets.
keywords: Protoplanetary disks - Magnetohydrodynamics - planets and satellites: formation
## 1 Introduction
With over 5000 confirmed detections of exoplanets1, the mass statistics of the population can now be inferred (Zhu and Dong, 2021). It is known that the most common exoplanets lie in the intermediate-mass regime ranging from super-Earth to Neptune size (Schneider et al., 2011). Further, many gas giants have been detected.
Footnote 1: taken from NASA Exoplanet Science Institute (2023)
Planet formation is addressed by two competing theories; core accretion and gravitational instability. In core accretion (Safronov, 1972; Pollack et al., 1996) a rocky planetary embryo grows through the accretion of planetesimals (Nubugi et al., 2019). If it becomes massive enough, it might attract a gaseous envelope to become a gas giant (Helled et al., 2014). The process is slow compared to the disk's lifetime but can be significantly accelerated by pebble accretion (Ormel, 2017), or even via a combination of pebble and planetesimal accretion (Alibert et al., 2018). On the other hand, with disk instability (Kuiper, 1951; Boss, 1997; Mayer et al., 2002) the timescale problem is circumvented by assuming that the formation process of a (massive) planet is driven by the self-gravity of fluid matter in the disk. If gas is sufficiently cold and dense, direct collapse of a patch can occur despite the counteracting action of shear, where the Toomre theory (Toomre, 1964) provides a criterion for the occurrence of disk instability in the simple framework of linear perturbation theory, and widely verified by numerical simulations across various domains of astrophysics (Durisen et al., 2007). In the context of planet formation, three-dimensional numerical simulations of disk instability were first conducted by Boss (1997) in order to explain the formation of Jupiter and Saturn. Gas collapse will be eventually followed by accretion of solids to form a rocky core and a metal-enriched envelope (Helled et al., 2014). A massive disk, of order 10% of the mass of the star, can become gravitationally unstable on an orbital timescale. Disk instability could well explain massive planets (e.g. HR8799, see (Nero and Bjorkman, 2009)). It can also explain massive planets around low-mass stars (e.g. GJ3512b, see Morales et al. (2019)) and wide-orbit gap-carving planets, (e.g. AS209, see Bae et al. (2022)) both of which cannot be explained by core accretion even when pebble accretion is considered.
On the population level the core accretion model predicts a dip in the planet mass function around Neptune mass which is due to the runaway gas accretion which is required to build
gas planets in this model. This is however contrary to observations (Suzuki et al., 2018; Schlecker, M. et al., 2022). Traditional disk instability, neglecting magnetic fields, is thought to be only relevant for gas giants and thus cannot provide an explanation for intermediate-mass planets. However, young gravitationally unstable disks exhibit spiral structures (Toomre, 1964; Deng and Ogilvie, 2022), such as observed in Elias 2-27 (Meru et al., 2017; Veronesi et al., 2021; Perez et al., 2016), suggesting some role of disk instability.
The spirals sustain a dynamo (even in poorly ionized disks, see Riols et al. (2021)) and lead to strong magnetic fields. This effect was described in Riols and Latter (2018) and Riols and Latter (2019) and should not be confused with the magneto-rotational instability (MRI). The spiral-driven dynamo grows the magnetic field by means of a feedback-loop amplification between field stretching along the spirals and field twisting across them owing to vertical rolls triggered by shocks generated by the spirals. In this way, an initial small toroidal field is converted into a stronger poloidal field, and then converted back into a proportionally stronger toroidal field. Amplification in the vertical rolls is the key step, and makes the dynamo inherently three-dimensional. Magnetic energy grows at the expense of self-gravity, of which the spirals are a manifestation, and rotational energy. While the MRI breaks down at high values of resistivity, the spiral-driven dynamo is resilient, and the resulting magnetic field is much stronger than in the case of the MRI, as shown in Deng et al. (2020). They also demonstrated the global nature of the dynamo, e.g. by measuring the global toroidal field pattern and showing the importance of outflow boundary conditions at high altitude. In (Riols et al., 2021) the effect of ambipolar diffusion on the dynamo has been investigated showing that the dynamo is able to work on a large range of ambipolar Elsasser numbers.
Recent simulations of (Deng et al., 2021) showed that magnetic fields may have an important impact on the formation of planets through disk instability. Protoplanetary clumps emerged with masses one to two orders of magnitude smaller than one would expect from conventional simulations and models of disk instability (Durisen et al., 2007). Their masses clustered around Super-Earth to Neptune masses, a mass range which is not prevalent in core accretion (Suzuki et al., 2018; Mordasini, 2018) while conventional disk instability favors planets with masses from that of Jupiter up to the brown dwarf regime (Helled et al., 2014). On the other hand, observations suggest that exoplanets, at least in our Galaxy, are most abundant in this mass range (Schneider et al., 2011). In purely hydrodynamical simulations using identical disk models and cooling (without the magnetic field) which they run for comparison, much fewer clumps resulted, none survived till the end of the simulations, and their masses up to the disruption were close within factors of a few from a Jupiter mass.
Besides the lower mass of the fragments, the presence of the magnetic field also leads to differences in the further evolution of the clumps. The purely hydrodynamical simulations required a fast, physically unrealistic cooling for the clumps to survive, otherwise they would be disrupted by shear (Deng et al., 2021). On the other hand, even the lowest mass clumps forming in the MHD simulations could survive, which was attributed to a shielding-effect by the magnetic field which underwent amplification at their boundary, which also prevented significant mass growth via the effect of magnetic pressure.
For magnetic fields to be present in protoplanetary disks, they need to be ionized to a certain degree. Although spiral shocks may heat the disk up to some hundred K (Podolak et al., 2011), in general the temperature in the simulations is too low to provide the necessary ionization (Deng et al., 2020). It has been discussed in (Deng et al., 2020) that the ionization must stem from other sources than temperature which could be the central star or other close stars providing a source for ionizing radiation or cosmic rays. The magnetic fields have shown to be dynamically important (Turner et al., 2014; Masson et al., 2016) in protoplanetary disks.
A physical understanding of the small masses of the clumps in magnetized disks, from their very appearance in the disk to their growth phase, is still lacking. In addition in (Deng et al., 2021) many questions were left open concerning if and how they differ from clumps in unmagnetized disks in other ways than just their mass, and what is the relation between their properties and the nature of the flow in the disk, which is magnetized but also more turbulent than in conventional disk instability (Deng et al., 2020). In this paper, the properties of the magnetized clumps as well as their formation path from the disk material are studied and characterized in great detail. In addition, with the aid of the simulations, we propose a theoretical framework that can provide an understanding of their low masses.
The standard method to characterize disk instability is the Toomre analysis (Toomre, 1964). Starting from the hydrodynamical fluid equations, and performing a perturbative analysis, Toomre derived a criterion for instability.
\[Q=\frac{c_{s}\kappa}{G\pi\Sigma}<1 \tag{1}\]
The disk is destabilized through its self-gravity, encapsulated in its surface mass density \(\Sigma\), and stabilized by rotation and gas pressure, here expressed via the epicyclic frequency \(\kappa\) and the sound speed \(c_{s}\), respectively. The Toomre criterion is derived under the assumption of a razor-thin sheet with no pressure gradients, and is valid for local axisymmetric perturbations.
The case of disk fragmentation in the presence of magnetic fields was investigated in Gammie (1996) and Elmegreen (1987) in the framework of galactic disks. Starting from the magnetized fluid equations (see section 4.1), Gammie could derive a relation similar to the Toomre analysis. For axisymmetric perturbations and a toroidal orientation of the magnetic field, he found that the magnetic field leads to increased stability of the system. A dispersion relation was derived for perturbations in such disks, in which the magnetic field acts like the gas pressure:
\[\omega^{2}-(\kappa^{2}-2\pi G\Sigma|k|+(c_{s}^{2}+V_{a}^{2})k^{2})=0 \tag{2}\]
where the magnetic field is expressed via the Alfven velocity \(V_{a}\). Applying the same reasoning as in the Toomre theory (Toomre, 1964), this allows one to define a Toomre-like parameter for magnetized disks, \(Q_{B}=\frac{\kappa\sqrt{c_{s}^{2}+V_{a}^{2}}}{G\pi\Sigma}\).
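The sketch below evaluates \(Q\), \(Q_{B}\), and the dispersion relation of Eq. (2) for a set of illustrative local disk quantities; the numerical values are placeholders chosen to give \(Q\approx 1\), not values measured from the simulations. It shows how the magnetic term shifts \(\omega^{2}\) upward, i.e., it stabilizes axisymmetric perturbations.

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def toomre_q(c_s, kappa, sigma):
    """Toomre parameter Q = c_s * kappa / (pi * G * Sigma), Eq. (1)."""
    return c_s * kappa / (np.pi * G * sigma)

def magnetized_q(c_s, v_a, kappa, sigma):
    """Magnetized analogue Q_B = kappa * sqrt(c_s^2 + V_a^2) / (pi * G * Sigma)."""
    return kappa * np.sqrt(c_s**2 + v_a**2) / (np.pi * G * sigma)

def omega_squared(k, c_s, v_a, kappa, sigma):
    """Dispersion relation of Eq. (2) for axisymmetric modes of a magnetized thin sheet."""
    return kappa**2 - 2.0 * np.pi * G * sigma * np.abs(k) + (c_s**2 + v_a**2) * k**2

# Placeholder local conditions, loosely inspired by a cold 0.07 Msun disk at ~15 AU.
c_s, v_a = 200.0, 100.0        # sound and Alfven speeds [m/s]
sigma = 3.0e3                  # surface density [kg/m^2]
kappa = 3.4e-9                 # epicyclic (~orbital) frequency [1/s]

k = np.linspace(1e-13, 1e-10, 1000)    # radial wavenumbers [1/m]
print("Q   =", toomre_q(c_s, kappa, sigma))
print("Q_B =", magnetized_q(c_s, v_a, kappa, sigma))
print("min omega^2, hydro     :", omega_squared(k, c_s, 0.0, kappa, sigma).min())
print("min omega^2, magnetized:", omega_squared(k, c_s, v_a, kappa, sigma).min())
```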
Another approach, which was specialized for galactic (hence non-Keplerian) disks, was put forward by Elmegreen (1987). The latter author studied the evolution of non-axisymmetric perturbations in a differentially rotating magnetized thin sheet through numerical integration of the
perturbed fluid equations. Since spiral structure is typically seen to develop in fragmenting disks before fragmentation actually occurs, the study of non-axisymmetric perturbations is most relevant for our purpose. In his study, Elmegreen (1987) found that the presence of a magnetic field can lead to a destabilization of the system in certain regions characterized by weak shear, since it can inhibit stabilization of a perturbation through the Coriolis force. As a result, perturbations with smaller wavelengths can grow that would otherwise be stable. This destabilisation mechanism, which is discussed in section 4.1 and studied with our simulations, is appealing because it could provide a clue to understand the different nature of fragmentation in magnetized disks.
## 2 Methods
### Fragmenting MHD Simulations
In this section we briefly describe the simulations that were analyzed in this work. These simulations were already presented in Deng et al. (2021) and are based on simulations from Deng et al. (2020).
In the simulations, the self-gravitating MHD equations with resistivity and cooling were solved:
\[\frac{\partial\rho}{\partial t}+\nabla(\rho v)=0 \tag{3}\]
\[\frac{\partial v}{\partial t}+v\cdot\nabla v=-\frac{1}{\rho}\nabla(P+\frac{B^ {2}}{8\pi})+\frac{(B\cdot\nabla)B}{4\pi\rho}-\nabla\Phi \tag{4}\]
\[\frac{\partial B}{\partial t}=\nabla\times(v\times B)+\eta\nabla^{2}B \tag{5}\]
\[\frac{\partial U}{\partial t}+\nabla(Uv)=-P\nabla v-\frac{U}{\tau_{c}} \tag{6}\]
The cooling time was just assumed to be proportional to the orbital time: \(\tau_{c}=\beta/\Omega\) while the relation of pressure and internal energy is determined via the ideal gas equation \(P=(\gamma-1)U\) with \(\gamma=5/3\). The simulations were conducted with GIZMO (Hopkins, 2015; Hopkins, 2016; Hopkins and Raives, 2016) which uses the MFM (meshless finite mass) method. They simulated a disk of mass \(0.07M_{\rm sun}\) in a radius of \(5-25\)AU with a central star of \(1M_{\rm sun}\) that is represented by a sink particle. The initialization of the simulations is described in Deng et al. (2020): They started with a surface density and a temperature profile of \(\Sigma\propto r^{-1}\) and \(T\propto r^{-1/2}\). Also, a toroidal seed magnetic field was added in the MHD case. The simulations were then run using a weak cooling rate (\(\beta=8\)) until the disk's spiral structure was established. Then the cooling was increased to \(\beta=6.28\) and the simulations were continued to saturate the magnetic field. During this process, particle-splitting was applied to achieve the desired resolution. The achieved resolution is very high: for the main MHD simulation more than 30 million particles were used to resolve MHD effects. The same simulation was run in more than one variant (see below), such as with or without Ohmic resistivity, and with a different cooling prescription for the high density regions (see below). Companion HD-simulations that did not include a magnetic field were also conducted; for those, lower resolutions were required (3 million particles, see also discussion). Overall, these simulations took more than a year of computing time on the Cray XC40 supercomputer "PizDaint" at the Swiss Supercomputing Center (CSCS). This prevented us from running a large set of simulations with different disk models so far.
These simulations were then taken and used as initial conditions for the fragmenting simulation. Fragmentation was then induced through an increase in cooling by changing to \(\beta=3\)(see Gammie, 2001; Deng et al., 2017). The results were then used as initial conditions for subsequent runs that investigated the further evolution of the clumps as described in (Deng et al., 2021). They also used a cooling-shutoff in the innermost regions of the clumps after they become gravitationally bound noting that the high cooling rates there would be unrealistic because the high density leads to highly optically thick conditions, resulting in long photon diffusion times and nearly adiabatic evolution. However, fragmentation and the early physical properties and initial evolution of clumps, the focus of this paper, are insensitive to the latter aspect, hence we will not use this variant of the simulations for analysis here. Furthermore, the specific MHD simulation used for the analysis of this paper includes Ohmic resistivity. The companion HD-simulations are also used in this work for comparison. The resistivity is set via the magnetic Reynolds number \(R_{m}\equiv c_{s}H/\eta=20\) with \(c_{s}\) the sound speed and \(H\) the scale height of the disk (Deng et al., 2021).
The analysis presented in this work is based on snapshots taken at equally-spaced time intervals of \(10/2\pi\) years. We describe the methods used to analyze the simulations in the next section.
### Identification of the clumps and backtracing
In this section we describe the procedure to find the clumps in the snapshots, and to analyze them.
Towards the end of the simulation, at the last snapshot that we are considering, we identify clumps as follows: first, we find density peaks by selecting all particles above a certain
Figure 1: Surface density plot of the disk towards the end of the considered time frame (\(t=156\)yr). The clumps are marked in orange circles. The flocculent appearance of the disk can be seen as observed in Deng et al. (2020).
density threshold and assign them to a cell on a grid that is superimposed on the particle distribution. The cells that contain such particles are marked as dense cells. We identify connected dense cells (clusters) on the grid and define all particles (including those below the density threshold) that lie in the corresponding cells as belonging to that cluster. Each of these clusters serves as the approximate location of a clump. For the density threshold we chose a value of \(10^{-8}\)g/cm\({}^{3}\), which amounts to \(\approx 100\) times the average density in the simulation in the clump-forming radial extent of the disk. The exact choice of this value does not make much of a difference, since the exact extent of the clumps is determined in the next step by identifying their gravitational boundedness.
In the next step, we determine the exact (particle-wise) location of the density peak within the cluster, which we use as a guess for the corresponding clump's centre. Around this point we introduce concentric shells (radial bins). Starting from the centre, we increase the radius shell by shell, testing whether each shell is bound to the clump defined by all particles inside the shell's radius. The gravitational boundedness is determined by calculating, for each particle, its potential energy with respect to the clump, its kinetic energy, and its internal energy. The radius of the clump is finally defined by the inner edge of the first unbound shell encountered.
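A simplified sketch of this shell-by-shell boundedness test is given below. It approximates the potential of each shell by the monopole term of the enclosed mass (the simulations instead use the full particle potential), and the function name and the synthetic particle data are illustrative only.

```python
import numpy as np

G_CGS = 6.674e-8  # gravitational constant in cgs units [cm^3 g^-1 s^-2]

def clump_bound_radius(pos, vel, mass, u_int, center, v_center, n_shells=50):
    """Radius of the gravitationally bound region around a density peak.

    pos, vel : (N, 3) particle positions / velocities (cgs)
    mass     : (N,) particle masses [g]
    u_int    : (N,) specific internal energies [erg/g]
    center, v_center : density-peak position and bulk velocity of the candidate clump
    """
    r = np.linalg.norm(pos - center, axis=1)
    order = np.argsort(r)
    r_sorted = r[order]
    edges = np.linspace(0.0, r.max(), n_shells + 1)[1:]
    enclosed_mass, bound_radius, start = 0.0, 0.0, 0

    for edge in edges:
        stop = np.searchsorted(r_sorted, edge)
        shell = order[start:stop]
        start = stop
        if shell.size == 0:
            continue
        v_rel = vel[shell] - v_center
        kinetic = 0.5 * mass[shell] * np.sum(v_rel**2, axis=1)
        thermal = mass[shell] * u_int[shell]
        # Monopole approximation of the potential energy of each shell particle.
        potential = -G_CGS * (enclosed_mass + mass[shell]) * mass[shell] / np.maximum(r[shell], 1e-6)
        if np.sum(kinetic + thermal + potential) >= 0.0:
            break                                  # first unbound shell: stop growing the clump
        enclosed_mass += np.sum(mass[shell])
        bound_radius = edge
    return bound_radius

# Tiny synthetic example: a cold particle blob around the origin.
rng = np.random.default_rng(2)
n = 2000
pos = rng.normal(scale=3.0e13, size=(n, 3))        # ~2 AU spread [cm]
vel = rng.normal(scale=1.0e4, size=(n, 3))         # ~0.1 km/s dispersion [cm/s]
mass = np.full(n, 2.0e28)                          # ~1e-5 Msun per particle [g]
u_int = np.full(n, 1.0e10)                         # [erg/g]
print("bound radius [AU]:", clump_bound_radius(pos, vel, mass, u_int,
                                               np.zeros(3), np.zeros(3)) / 1.496e13)
```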
Since we are interested in the clump's evolution and their origins, we now trace them back to earlier snapshots. This is done by determining all particles that are within the clump's radius and then identifying the same particles at earlier snapshots. Now there are two ways to proceed.
First, considering only this subset of particles, we find the position of their density maximum, i.e. where they congregate the closest. This position is then taken as the clump's centre at the earlier snapshot. The radius of the gravitationally bound region is then determined in order to identify the clump. We measure quantities such as their mass, angular momentum and energy. The method just described, however cannot be used indefinitely backward in time since there is no well defined density maximum in the very early stage of clump formation, before a bound clump is present. It can only be extended back to the time when the clump first becomes bound (and somewhat before that). We check that our clump is still well-defined by evaluating the fraction of particles originally found in the clumps identified at a later stage but not included in the overdense region. At the time when the clump is bound, this fraction is low (\(\lesssim 20\%\)), meaning that there is no big change in the particle membership of the clumps after they become bound, namely their initial stage reflects the "in-situ" flow.
Additionally, we are also interested in the regions of the disk at earlier times where the clumps will emerge in order to calculate disk properties relevant for their formation such as the Toomre or Jeans mass. For the latter we use a second method; we identify the clump-forming regions by finding the ensemble of particles that will be later incorporated in the clumps.
Fig. 1 shows a density plot of the simulation towards a time when all the clumps that become gravitationally bound have formed. The small scale structure that was observed in Deng et al. (2021) can also be recognized in this plot. The evolution of the clump population is presented in fig. 2 showing the number of clumps and their total mass over time and also comparing to the HD-case. The clumps are counted as soon as they are determined as bound using the method described above. One can see that in the HD-case, there are much fewer clumps than in the MHD-case.
In Fig. 3, the masses of the clumps are shown. They are calculated by determining the bound radius as described above and then summing up the mass of all the particles inside. The plot shows their average mass from the time they become bound.
Figure 3: Masses of the resulting clumps in the MHD and the HD case. The dots with the black edge colour represent their masses at the time of fragmentation while the others show the mass averaged over the lifetime. In the MHD case much smaller clumps can emerge, going below \(10^{-2}M_{\rm jup}\); this is already true at the time of fragmentation, when the clumps emerge.
Figure 2: Evolution of the clump formation: Number of clumps (solid, left) and total mass contained in the clumps (dashed, right). Many more clumps form in the MHD case.
Further, the masses at the time when the clumps first become bound are also shown. As can be seen in the plot, in the MHD case the clumps' masses are generally lower and show a much greater variation than in the HD case. Indeed, in the magnetized disks there are many low-mass clumps, going below the mass of Neptune (Deng et al., 2021). We also note that the difference between the MHD and the HD case is already present at the onset of fragmentation.
In summary, the difference between the clumps in the MHD case and the HD case is two-fold: first, the clumps evolve differently when they are embedded in a magnetized disk (e.g. they grow more slowly and are protected from tidal disruption; see Deng et al. (2021)). Second, the initial fragmentation stage is different, as the MHD clumps have smaller masses from the beginning and fragmentation is more plentiful.
## 3 Numerical Results
### Predicted mass scales
From Toomre's theory of disk instability one can derive an estimate for the mass of the clumps by assuming that the collapsing region has a characteristic size of order the Toomre most-unstable wavelength \(\lambda_{T}\): \(M_{\rm Toomre}=\pi\left(\frac{\lambda_{T}}{2}\right)^{2}\Sigma\). This estimate is shown in fig. 4, where we identified the fragmenting regions in the early snapshots of the MHD simulation (using the second method described in section 2.2) and determined a representative Toomre mass at any given time by averaging over the Toomre mass values obtained from the back-traced particles of the different clumps. Only the early snapshots in fig. 4 should be taken into account, since Toomre theory assumes an equilibrated disk, an assumption that is better fulfilled before the disk fragments; once the clumps collapse the estimate becomes invalid.
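For reference, the Toomre mass quoted above follows directly from the most unstable wavelength of the razor-thin dispersion relation, for which we assume the standard result \(\lambda_{T}=2c_{s}^{2}/(G\Sigma)\). The short sketch below evaluates it for illustrative, assumed values of \(c_{s}\) and \(\Sigma\) (not taken from the paper), which happen to give a result of order \(1\,M_{\rm jup}\), comparable to the prediction in fig. 4:

```python
import numpy as np

G = 6.674e-8       # gravitational constant [cgs]
M_jup = 1.898e30   # Jupiter mass [g]

def toomre_mass(cs, sigma):
    """M_Toomre = pi * (lambda_T / 2)**2 * Sigma, assuming the most unstable
    wavelength of the razor-thin dispersion relation, lambda_T = 2 cs^2 / (G Sigma).

    cs    : sound speed [cm/s]
    sigma : surface density [g/cm^2]
    """
    lam_T = 2.0 * cs**2 / (G * sigma)
    return np.pi * (lam_T / 2.0)**2 * sigma

# illustrative, assumed numbers (not from the paper): cs ~ 0.4 km/s, Sigma ~ 1000 g/cm^2
print(toomre_mass(cs=4e4, sigma=1e3) / M_jup)   # ~1 M_jup for these inputs
```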
It can be seen that the predicted mass is around \(1M_{\rm jup}\). The masses observed in the MHD simulation reach down to \(0.006M_{\rm jup}\) (see fig. 3), all being much lower than the prediction. Although higher, the masses of the clumps in the HD simulation are still below what would be expected from the Toomre mass. This means that additional effects, both related and unrelated to the magnetic field, have to be considered to study the nature of the collapse. These effects could be the vertical extension of the disk, turbulence which could be induced or altered by the magnetic field, or a direct effect of the magnetic field on fragmentation. We investigate the latter effect in section 4.
While Toomre's theory assumes a thin, axisymmetric background disk with differential rotation, the minimum collapsing mass should be comparable to the Jeans mass, since the latter neglects rotation, which affects the longer-wavelength branch in Toomre instability theory. One can thus calculate the Jeans mass of the back-traced regions; this simplistic estimate, too, is definitely too high to explain the observed clump masses.
A different model to estimate clump masses has been presented in Boley et al. (2010), which attempts to capture the actual dynamics in a non-axisymmetric disk. The model was verified against 3D radiative simulations of protoplanetary disks, and more recently was also shown to match well the results of fragmentation in high-redshift galactic disks (Tamburello et al., 2015). Fragmentation, as seen in numerical simulations, does indeed occur in spiral arms rather than directly from the axisymmetric background flow (Mayer et al., 2004; Durisen et al., 2007). Instead of a homogeneous, axially symmetric background flow, the initial state is a spiral density wave, identified as an overdensity whose strength is proportional to the local Mach number of the flow, which leads to velocity gradients that determine the region that can collapse. Considering also finite thickness, namely that the spiral arm has a vertical extent of order the disk scale
Figure 4: Toomre mass and mass prediction according to (Boley et al., 2010) determined using the backtraced regions of particles. At each snapshot in time, the particles that will later form the clump are identified and the quantities are measured. Then we average over the different clumps so the resulting values in the early stages are an average prediction for the clump masses (see second method described in section 2.2). At fragmentation, the Toomre mass and later also the Boley mass greatly increase since the background assumptions of the theories become invalid. Shortly later, the clumps become bound and are marked with a corresponding dot. As a comparison, the clumps arising in the HD simulation are shown with a triangle.
height, they obtain the following mass estimate:
\[M_{f}=4\frac{c_{s}^{3}}{G\Omega f_{g}} \tag{7}\]
with \(c_{s}\) the sound speed, \(\Omega\) the angular frequency and \(f_{g}\) a form factor to account for effects from self-gravity. Boley et al. (2010) estimated \(f_{g}\approx 1.8Q\). This is compatible with the considerations in Deng and Ogilvie (2022), where a solitary ring structure is suggested as a transitory state in which spiral density waves would emerge and collapse would eventually ensue in the flow entrained by them. Measuring \(Q\) in our simulations by tracing back the clumps' particles in time leads to values of \(Q\approx 1.15\) in the early phases of the simulation. We present the resulting estimate in fig. 4. The resulting mass lies around \(0.05M_{\rm Jup}\), a bit higher at the beginning of the simulation. While this estimate lies indeed much closer to the resulting clump masses, any effect of the magnetic field is not taken into account. The lower end of the clump masses is still well below the estimated value. Also, when tracing back the particles of each clump individually and determining the estimated mass separately, namely without averaging, the mass estimates still differ significantly from the initial masses of the fragments.
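A small helper for the estimate of eq. (7) is sketched below; the numerical inputs are illustrative assumptions (roughly \(\Omega\approx 2\pi/30\,\)yr at \(\sim 10\,\)AU and \(Q\approx 1.15\) as measured in the text), not values extracted from the simulation data:

```python
import numpy as np

G = 6.674e-8       # gravitational constant [cgs]
M_jup = 1.898e30   # Jupiter mass [g]

def boley_mass(cs, omega, Q, f_g_coeff=1.8):
    """Spiral-arm fragment mass of eq. (7), M_f = 4 cs^3 / (G Omega f_g),
    with the form factor f_g ~ 1.8 Q estimated by Boley et al. (2010)."""
    return 4.0 * cs**3 / (G * omega * f_g_coeff * Q)

# illustrative, assumed inputs: Omega ~ 2*pi/30 yr (roughly 10 AU) and Q ~ 1.15
omega = 2.0 * np.pi / (30.0 * 3.156e7)
print(boley_mass(cs=4e4, omega=omega, Q=1.15) / M_jup)   # of order 0.1 M_jup for these inputs
```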
### Energetics of clump-forming sites before the collapse
As we seek the reason for the higher fragmentation rate and the lower-mass fragments in the MHD case, one should explore how the magnetic field itself can influence the fragmentation process. It could change the physics during the collapse either directly or indirectly, for example by stirring turbulence. Indeed, in their study of the mean flow properties of self-gravitating disks with and without magnetic fields, Deng et al. (2020) showed that magnetized disks are more turbulent than unmagnetized disks because Maxwell and gravitational stresses concur to generate a larger overall stress, resulting in enhanced angular momentum transport. In either case, the magnetic field is expected to affect the dynamics of the material because, at a minimum, an additional force, namely the Lorentz force, enters the equations of motion of fluid elements. Therefore it is important to know the relative contribution of the magnetic field, and of turbulence, to the energetics of those regions of the disk, along spiral arms, that will turn into clumps.
In fig. 5a the relation between the magnetic and the turbulent kinetic energy is shown for the individual clumps. The specific magnetic energy is calculated via \(E_{B}=\frac{B^{2}}{8\pi\rho}\), where \(B\) is the magnetic field and \(\rho\) the density. To quantify the turbulent kinetic energy, we first defined the velocity dispersion of a particle. We used kernel-smoothing to calculate a mean velocity around a particle \(i\) using a number of neighbours of \(n_{\rm smooth}=32\), \(\left\langle v_{i}\right\rangle=\sum_{j}v_{j}W\left(\frac{x_{j}-x_{i}}{h} \right)\frac{m_{j}}{\rho_{j}h^{3}}\). Here \(W\) denotes the smoothing kernel, for which we used the cubic spline (Monaghan, 1992), \(h\) the smoothing length, and \(x_{i}\), \(m_{i}\), \(\rho_{i}\) the position, mass and density of particle \(i\). To arrive at the velocity dispersion we smoothed over the square deviation from the mean velocity \(\left\langle v_{i}\right\rangle\):
\[\sigma_{i}=\sqrt{\sum_{j}\left(v_{j}-\left\langle v_{i}\right\rangle\right)^ {2}W\left(\frac{x_{j}-x_{i}}{h}\right)\frac{m_{j}}{\rho_{j}h^{3}}}. \tag{8}\]
Then we calculated the turbulent kinetic energy via \(E_{\rm turb,i}=\frac{1}{2}m_{i}\sigma_{i}^{2}\).
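A possible implementation of this velocity-dispersion measurement is sketched below. It uses a KD-tree for the \(n_{\rm smooth}=32\) neighbour search and renormalises the kernel weights explicitly, which is a slightly different (but common) convention than the literal \(m_j/(\rho_j h^3)\) weighting of eq. (8); all names are our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def cubic_spline_3d(q, h):
    """Standard M4 cubic spline kernel in 3D (Monaghan 1992), support 2h."""
    w = np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
                 np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))
    return w / (np.pi * h**3)

def velocity_dispersion(pos, vel, mass, rho, h, n_smooth=32):
    """Kernel-smoothed velocity dispersion per particle, a sketch of eq. (8).
    Weights W * m_j / rho_j are renormalised explicitly, a common convention
    that differs slightly from the literal m_j/(rho_j h^3) weighting in the text.
    """
    tree = cKDTree(pos)
    _, idx = tree.query(pos, k=n_smooth)            # nearest neighbours (incl. self)
    sigma = np.empty(len(pos))
    for i, nb in enumerate(idx):
        q = np.linalg.norm(pos[nb] - pos[i], axis=1) / h[i]
        w = cubic_spline_3d(q, h[i]) * mass[nb] / rho[nb]
        w /= w.sum()
        v_mean = np.sum(w[:, None] * vel[nb], axis=0)
        sigma[i] = np.sqrt(np.sum(w * np.sum((vel[nb] - v_mean)**2, axis=1)))
    return sigma

# the turbulent kinetic energy per particle then follows as 0.5 * mass * sigma**2
```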
Similarly, in fig. 5b the relation between the magnetic and the internal energy is shown. The specific internal energy is available directly as an output of the simulation. In both plots we concentrate on the clumps during the 15 snapshots (corresponding to \(\approx 24\)yr) of the simulation before they become bound, since we are interested in the influence of the magnetic field as a precondition for the fragmentation process. The results are shown together with the mass of the respective clumps.
To trace the particles back in time, we used the first method described in section 2.2, namely we followed the density maxima as the centres of the distribution of particles associated with a given clump backwards in time. Although the clumps are not yet bound at earlier times, this method still traces the clump-forming regions as long as a density maximum can be defined and identified (see section 2.2).
It can be seen that the magnetic energy is larger than the turbulent kinetic energy for all the clumps. For most of them the difference is around an order of magnitude. This means that the energy stored in the magnetic field is much greater than in the turbulent motion of particles. Therefore knowing the structure of the magnetic field is important to understand the gas motion.
However, when comparing with the internal energy of the gas, the magnetic energy is significantly smaller. Therefore the thermal gas pressure is still the most important of the quantities considered so far. We anticipate that, except in the inner regions of the clumps, the total kinetic energy, including the non-turbulent component of the velocity field, is the dominant contribution counteracting gravitational potential energy, because clumps are rapidly rotating. This will be studied in section 3.4. In the interior of clumps, instead, internal energy is always the main component establishing equilibrium, as will be further assessed in section 3.4. However, from Deng et al. (2020) we know that the presence of magnetic fields changes the dynamics of the flow: first, the magnetic field ignites turbulence in the disk, and second, the presence of a magnetic field leads to smaller-scale structures in the disk. The question remains whether the differences in fragmentation in the MHD case are a direct consequence of the presence of the magnetic field during this process or a secondary effect arising from different features of the environment. The result from fig. 5a would be compatible with the magnetic field directly impacting fragmentation, since it carries much more energy than the turbulent component of the kinetic energy. In section 4 we discuss a possible path by which the magnetic field could affect fragmentation.
### Energetics of the clumps after collapse; the role of the magnetic field
In this section we want to give an overview of the evolution of the clumps during their early stages right after they become bound and to investigate the role of the magnetic field during this time. For that, we focus on the magnetic, but also the turbulent kinetic energy and the internal energy of the gas. It was already mentioned in Deng et al. (2021) that the magnetic field is amplified in and around the clumps and may shield them from further growth but may also prevent their disruption because of its relative strength compared to
the kinetic energy. We also look at this amplification effect more closely in this section.
We now follow the evolution of a typical clump in the MHD simulations. For that, we present two-dimensional cuts and radial profiles of the density, the magnetic field and the velocity field. We choose clump no. 5 from fig. 1 as it belongs to the lower-mass end of the clump distribution (\(0.03M_{\rm Jup}\)), our implicit assumption being that lower-mass clumps should be most affected by the magnetic field, as they are absent in the non-magnetized case (HD simulations). We start at a simulation time of 127 yr. At this time, some of the clumps are already bound while others are still forming. Clump 5 is just becoming bound. It is forming in a filament structure of increased density, along with three other clumps (0, 1 and 8). In our subsequent analysis we are interested in the configuration of the magnetic field and how it could affect the clump and its surroundings.
Fig. 6 shows the configuration of clump 5 at this time. In the following two-dimensional profiles the clump is always centred using its density maximum. In the top of fig. 6 the magnetic field strength is shown around a region of \(\pm 0.4\)AU. The clump will have a radial extension of \(\approx 0.1\)AU (see e.g. Fig. 7). The orange lines show the magnetic field lines. In the horizontal cut on the left which is made parallel to the disk, it can be seen that the centre of the clump lies in an elongated region of low magnetic field strength which is also of a higher density. To the side of this region the magnetic field increases. This effect of an increasing magnetic field along a thin elongated region of higher density can also be observed around other clumps.
Also, contrary to the other clumps, here the magnetic field reverses its direction when passing through the high-density filament. This can be explained by looking at the plots below,
Figure 5: Magnetic energy relative to turbulent energy and relative to internal energy, averaged 15 snapshots (corresponding to \(\approx 24\)yr) before up until the existence of a bound structure. Each dot represents a clump, the bars show the standard deviation of observed values. The magnetic energy is roughly an order of magnitude greater than the turbulent kinetic energy compatible with a direct effect on fragmentation. The internal energy is larger than the magnetic energy for all clumps.
Figure 6: Evolution of clump 5: First bounded stage. **Top:** magnetic field strength and magnetic field lines in a region of 0.4AU around the clump. The horizontal cut on the left is aligned with the disk's plane while the vertical cut on the right shows the height \(z\) over the disk's radial coordinate \(\Delta r\). **Bottom:** The same location but showing the velocity field lines (again magnetic field strength in the background) instead of the magnetic field lines in a region of 0.2AU. On both sides of the high-density filament, the magnetic field is increased. When comparing the magnetic field energy to the internal energy, one arrives roughly at a value of \(\beta_{\rm plasma}^{-1}\approx 0.5\) for the outer regions in the plot.
which show a smaller extract with the same centre. Here, the velocity field (black lines) is drawn over the magnetic field strength: the flow of the collapsing region moves inwards from two opposite sides, thereby growing the high-density filament. During this process magnetized material is transported close to the filament, enhancing the magnetic field there.
What magnetic field strength do we expect the filamentary structure to have? We assume that this structure emerged from a partial collapse in the two directions orthogonal to the direction of the filament. Assuming also ideal MHD without resistivity, the flux through a surface defined by any fixed set of particles remains constant over time. If we choose this surface to be orthogonal to the filament's elongated direction, then the area of the surface scales as \(A\propto\rho^{-1}\) over time (since we assume no collapse in the elongated direction). Since the conserved flux is defined as
\[\Phi=\int B\cdot dS \tag{9}\]
where the integral goes over the chosen surface, the magnetic field \(B\) has to scale as \(B\propto\rho\). The density at the boundary of the filament increases roughly four-fold. So the magnetic field can be expected to also be four times as strong along the filamentary structure. This is about what is observed in fig. 10, when assuming a background strength of the magnetic field of \(\approx 0.8\)Gs (see also fig. 8).
While the magnetic field is increased at the boundary of the filament, the low-magnetic-field region in the middle probably arises because here the flow combines two regions of opposite magnetic field direction. Further, there is probably a higher gas pressure in this region because of the higher density. An effect of this can be seen at the bottom right of the figure, where a vertical cut of the clump's region is shown. In the vertical cut we show the height (\(z\)) and the change of the radial coordinate \(\Delta r\) measured from the central star. In this case, the prominent region of a strong magnetic field from the horizontal cut on the left is now on the right side of the figure, at higher radii. It can be seen that the flow escapes from the central region of higher density (\(z=0\)AU) upwards and downwards. In the vertical cut on the right, vortices of the magnetic field can be seen above and below the clump. Such vortices are also observed around other clumps.
The ratios of the magnetic energy density to the internal energy density and to the turbulent kinetic energy density tend to become smaller at small radii inside the clump's bound radius. This is because of the greatly increased density in these regions: at the same temperature, this yields a much higher internal energy density, and likewise the turbulent kinetic energy density is also increased there. In the next section we characterize the influence of various physical quantities on the evolution of the clump and show their relative importance at different locations in the clump.
### Rotation and clump dynamics
Until here, we focused on the properties of the flow in the very vicinity of the ensuing clumps. Their evolutionary path depends also on their internal properties which are investigated in this section.
Rotation has often been reported as dynamically important in clumps formed via disk instability (Mayer et al., 2004; Galvagni et al., 2011; Shabram et al., 2011; Helled et al., 2014). Therefore, we investigate its relevance in magnetized clumps as well. Additionally, the strength of rotation will also have implications for the rotation rate of an eventual planet resulting from further collapse. Fig. 10 shows again the configuration of clump 5 at two different times: at the top at 140 yr, 8 snapshots after the clump became bound, and at the bottom at 158 yr, at the end of the analyzed simulations. The plots show the density with the velocity vector field for horizontal cuts aligned with the disk's plane (left) and for vertical cuts perpendicular to the disk (right).
It can be seen at the top left that the clump at this earlier time has a widespread, almost elliptical region where rotation around the clump's centre dominates the velocity field. At this stage the clump's rotation is probably sustained by the differential rotation of the surrounding disk gas.
In the vertical cut on the top right it can be seen that the clump at this stage has an almond-like shape that is elongated along the mid-plane. The surrounding material flows around this structure from smaller to larger disk radii. In the interior the flow seems to be erratic, without showing a preferred pattern. The elongated shape could be a hint that rotation is important for stabilization at this stage, since it only exerts a force in the plane of rotation.
When looking at the later stages at the bottom, one sees that the shape of the clump has changed. From the horizontal cut at the bottom left it is visible that the clump has become denser than before and also seems to be concentrated in a smaller region. The velocity field only shows a clear rotating behaviour in the inner parts with radii \(<0.05\)AU. Further outside the velocity field still suggests some rotation although the flow seems to be in a more undetermined state.
The vertical cut at the bottom right shows that the clump has become much rounder than before and is no longer embedded in the almond-shaped high-density region. The flow at this stage mostly comes from above and below towards the centre.
We now determine the clump's rotation quantitatively. As in section 3.3, we show one-dimensional radial profiles of the clump. Here it seems more appropriate to use a cylindrical coordinate system, since rotation is defined with respect to an axis. We start by determining the main rotation axis of the clump. For that, we consider all particles inside the bound
Figure 8: Radial profile of the magnetic field strength (in Gs) for clump 5 at various times. The bound radius of the clump at each time is shown with a dot. At 143 yr, shortly after the clump became bound, a peak of the magnetic field develops in the region of the bound radius. Later, the magnetic field decays but begins to increase again at 156 yr.
Figure 7: Evolution of clump 5: Build-up of a magnetic shield. The plots on the left are horizontal cuts in the disk's plane; the plots on the right are vertical cuts plotting the height \(z\) and the radial coordinate w.r.t. the central star. They show the magnetic field strength with the velocity field lines at two consecutive snapshots (top: 142 yr, bottom: 143 yr).
radius and determine their total angular momentum relative to the center of mass. The normalized angular momentum vector gives us the z-component of a cylindrical coordinate system. This vector points in the same direction as the orientation of the protoplanetary disk.
In that coordinate system we determine the azimuthal velocity \(v_{\phi}\) of the particles. We then consider the squared relation to the total velocity \(v_{\phi}^{2}/v^{2}\) thereby comparing the rotational to the total kinetic energy. Thus we can determine more quantitatively if the clump exhibits rotating behaviour. A radial profile of this relation is shown in the middle-bottom plot of fig. 9 for clump 5. A value of 1 means that all kinetic energy is purely in rotation, if the energy is equipartitioned we would expect a value of 1/3.
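A sketch of this measurement, with the rotation axis taken from the clump's total angular momentum and a mass-weighted radial profile of \(v_{\phi}^{2}/v^{2}\), is given below (function and argument names are our own assumptions):

```python
import numpy as np

def rotation_profile(pos, vel, mass, centre, v_centre, r_bins):
    """Mass-weighted radial profile of v_phi^2 / v^2 (sketch).  The cylindrical
    z-axis is the direction of the clump's total angular momentum."""
    x = pos - centre
    v = vel - v_centre
    L = np.sum(mass[:, None] * np.cross(x, v), axis=0)
    z_hat = L / np.linalg.norm(L)                      # rotation axis
    # azimuthal unit vector in the plane perpendicular to z_hat
    x_cyl = x - np.outer(x @ z_hat, z_hat)
    r_cyl = np.maximum(np.linalg.norm(x_cyl, axis=1), 1e-30)
    phi_hat = np.cross(z_hat, x_cyl / r_cyl[:, None])
    v_phi = np.sum(v * phi_hat, axis=1)
    ratio = v_phi**2 / np.maximum(np.sum(v**2, axis=1), 1e-30)
    # bin in spherical radius; ~1 means rotation-dominated, ~1/3 equipartition
    r = np.linalg.norm(x, axis=1)
    which = np.digitize(r, r_bins)
    return np.array([np.average(ratio[which == b], weights=mass[which == b])
                     if np.any(which == b) else np.nan
                     for b in range(1, len(r_bins))])
```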
At the time of the blue curve, before the cuts of fig. 10, the curve is flat over a large region and sharply decreasing in the interior. This decrease seems to come from vertical infall of material towards the centre. At this stage, the clump is not yet bound.
At a later stage, at 140 yr (yellow curve), which represents the time of the top plots in fig. 10, the curve begins to increase when approaching the centre. Here, the clump is bound up to a radius of \(\approx 0.15\)AU. While at radii further out than \(\gtrsim 0.2\)AU the curve is flat, inside it reaches values of \(\approx 0.9\). Inside the bound radius, the clump exhibits strong rotation. At the even later stage at 156 yr (green curve), which corresponds to the time of the bottom plots in fig. 10, the spike is even narrower and higher. Consistently, the bound radius is also smaller, somewhat below 0.1AU, beyond which rotation no longer dominates. This seems to indicate that the clump shrank in radius during this time. This also confirms our method of measuring the bound radius as described in
Figure 9: Top: Magnetic energy density relative to internal energy density (\(\beta_{\rm plasma}^{-1}\)). In the vicinity of the clumps, the magnetic energy remains important when comparing to the internal energy. Middle: Magnetic energy density relative to turbulent kinetic energy density. The magnetic energy is generally of a larger magnitude than the turbulent kinetic energy, especially at the later stages of the clump’s evolution. Often, this relation increases after the clumps have become bound. Bottom: Relative rotational energy \(v_{\phi}^{2}/v^{2}\). The plots show radial profiles of the mentioned quantities at different times and for three different clumps. Inside the bound radius, the motion is mostly rotation-dominated.
section 2.2. In all observed clumps we measure the behaviour of increased rotation near the centre after they are bound. Often however, the bound radius is more extended than what one would expect from simply estimating where the rotation curve becomes flat. On the other hand, if in an inner region the rotation is significantly enhanced compared to outside, the clumps seem to always be bound at least in that part.
From rotation we can also determine the total specific angular momentum \(L/m\). This is plotted for each clump in fig. 11. It is calculated over the lifetime of each clump, where the dot indicates the mean and the bars the standard deviation of the observed values. For comparison, the HD case is also plotted below. It can be seen that the specific angular momentum measured in the MHD case is significantly smaller than in the HD case. Here, it should be noted that it has already been found in Mayer et al. (2004) that the angular momentum of protoplanetary clumps observed in simulations of fragmenting disks is an order of magnitude too high when compared with those of the gas giants in our solar system. The specific angular momenta of the gas giants (Helled et al., 2011) are also shown in the plots: the protoplanets in the MHD case are much closer to them than they are in the HD case. The reason for this difference could be the resistivity. If the magnetic fields are enhanced by the clump's rotation, the resistivity could remove the magnetic energy over time, preventing a possible saturation of the magnetic field and thereby leading to a continuous depletion of rotational energy. This would eventually bring the specific angular momentum closer to that of Jupiter.
An important effect of rotation could be the stabilization of the clump against collapse because of the gravitational force. Another stabilizing force inside the clump is the gas pressure. Fig. 12b shows the evolution of 1d profiles of the specific internal energy. This quantity is directly proportional to the gas temperature and thus also determines the gas pressure. It can be seen that before the bound stage (blue curve), the internal energy profile is flat meaning that the center of the clump forming region has the same temperature as its surroundings. At 140 yr, corresponding to the top plots in fig. 10 the internal energy is slightly enhanced in the region inside the bound radius. However this enhancement is only weak indicating the early stage in the clump's evolution where it
Figure 11: Specific angular momenta of the clumps arising in the MHD and the HD simulation. They are significantly lower in the MHD case, probably because of the dissipation of the magnetic field. As such, they are closer to those of Jupiter and Saturn, which are also shown for comparison.
Figure 10: Evolution of clump 5: Density at early (top) and later (bottom) stage (colour) and velocity field (black arrows). The horizontal cuts on the left are aligned with the disk’s plane and show a contraction of the clump over time. The vertical cuts on the right show that the clump is first elongated along the disk’s plane, whereas later it is more round.
has not reached its final density. At this stage, the relatively cold temperature could mean that rotation is more important, leading to the elongated shape of clump 5 at this time, which was described before. At later times, at 156 yr (green curve), the internal energy clearly increases in the centre, indicating the evolved state of the clump.
Since it was already shown in section 3.3 that there are significant magnetic fields present inside the clumps it remains to characterize them and discuss their effects inside the clump. The magnetic fields could act in both ways on the clump, stabilizing or compressing.
We estimate their importance relative to the other forces (gravity, gas pressure and rotation) by resorting to a one-dimensional model of the clump. For that, we calculate radial profiles of the quantities thereby ignoring angular features.
The force on a volume element \(\delta A\delta x\) consists of several force terms:
\[\delta F=(g\rho\delta x+\Delta P_{g}+\Delta P_{B}+\cos(\theta)\rho\;\delta xv_{ \varphi}^{2}/r_{c})\delta A \tag{11}\]
The gravitational acceleration is
\[g=G\frac{M_{\rm{encl}}}{r^{2}} \tag{12}\]
with \(M_{\rm{encl}}\) the enclosed mass in a sphere of radius \(r\). The pressure difference between two sides of the volume element is defined via the internal energy:
\[\Delta P_{g}=(\gamma-1)\rho\Delta u \tag{13}\]
The magnetic pressure term is derived from the magnetic energy density
\[\Delta P_{B}=\frac{\Delta(B^{2})}{8\pi}. \tag{14}\]
In equation 11 we also subtract a centrifugal force term representing the stabilizing effect of rotation. The rotation is assumed to happen around a rotation axis. We define this force in terms of the cylindrical radius \(r_{c}\) (the distance to the rotation axis), the angular part of the velocity (defined in the cylindrical coordinate system) \(v_{\phi}\) and the angle \(\theta\) for the angle between the spherical radial direction and the cylindrical radial direction.
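The one-dimensional force budget of eqs. (11)-(14) can be evaluated on radial profiles as sketched below. This is a simplified midplane version (\(\cos\theta=1\), \(r_{c}=r\)) with an assumed sign convention (positive = outward) and an assumed adiabatic index; it is meant to illustrate the bookkeeping, not to reproduce the exact code behind fig. 13:

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cgs]

def radial_force_density(r, rho, u, B, v_phi, m_enc, gamma=1.4):
    """Force per unit volume following eqs. (11)-(14), evaluated in the
    midplane (cos(theta) = 1, r_c = r).  Sign convention assumed here:
    positive = outward (stabilising).  gamma is an assumed adiabatic index.

    r, rho, u, B, v_phi, m_enc : 1D radial profiles [cgs]
    """
    f_grav = -G * m_enc * rho / r**2                      # gravity, inward
    f_gas  = -np.gradient((gamma - 1.0) * rho * u, r)     # -dP_gas/dr
    f_mag  = -np.gradient(B**2 / (8.0 * np.pi), r)        # -dP_mag/dr
    f_rot  = rho * v_phi**2 / r                           # centrifugal, outward
    return f_grav + f_gas + f_mag + f_rot
```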
Fig. 13 (top) shows the contribution of the various forces for clump 5 at 142 yr. It can be seen that inside the clump's radius the dominant stabilizing force is the gas pressure, being larger than the rotational force. This is despite the clump showing significant flatness (see Fig. 10, top right). Inside this clump, the magnetic field exerts a compressing force. Somewhat outside the clump's radius the magnetic field becomes stronger and its pressure force points outward, being of a similar order as the gas pressure and the rotational force. Fig. 13 (bottom) shows the same for clump 8 at 151 yr. There, it can again be seen that the gas pressure dominates over the other stabilizing forces inside the clump. This is observed in all of the clumps, from which we conclude that the clumps at this stage are pressure-supported rather than rotation-supported. For clump 8, it can also be seen that the magnetic field is even stronger than the other stabilizing forces in a region outside the bound radius. That the magnetic field has the highest contribution compared to the other forces around and somewhat outside the bound radius is a general feature we observe in the clumps.
The resulting radial acceleration from these force contributions shows a characteristic difference between taking the magnetic field into account and neglecting it. As expected, the magnetic field has the greatest effect around the bound radius. Somewhat outside the bound radius there is for most clumps a region where the radial acceleration is higher meaning that material is prevented from accreting on the clump. At the bound radius the situation is sometimes reversed (e.g. clump 5) and the magnetic field acts compressing. Further inside and far outside the effect is small. This behaviour can be explained by looking at fig. 8. At this time, the magnetic field has a sharp peak just outside the bound radius of the clump. Therefore the magnetic pressure force points inwards when going closer to the centre and outwards when going in the other direction. The first effect can be seen for most of the
Figure 12: Radial profiles of the specific internal energy of clump 1 and 5. After the clumps are bound, they heat up in the centre. The gas pressure, which is determined through the specific internal energy, is important for stabilizing the clumps.
clumps: When including the magnetic field, the force balance is shifted to the outward direction outside the bound radius.
For clump 8 this effect is even more pronounced. Without the magnetic field, the system would be collapsing out to a radius of \(\approx 0.5\)AU; if the magnetic field is included, only a region of \(\approx 0.3\)AU has a clearly negative force. The other effect, a compressing force at the bound radius, cannot, however, be observed for the other clumps, possibly because for them the magnetic field is dominated by the other forces at this radius.
This observation of an outward-pointing force arising from the magnetic field is consistent with the findings in Deng et al. (2021). There, the simulations were continued without the magnetic field after clump formation, and it was found that the further evolution of the clumps changed compared to simulations that continued to include the magnetic field. Namely, the clumps were disrupted if no magnetic field was present, due to the absence of the shielding effect.
The analysis presented in this section was carried out at a time when the clumps have already formed and are gravitationally bound. It remains however to find reasons for the smaller initial clump mass at the very onset of fragmentation in the MHD simulations compared to the HD ones. This will be the focus of the next section.
## 4 A physical description of gravitational instability in magnetized disks
### A linear perturbation theory approach
Here we will try to address how different the initial development of clumps is in a magnetized flow as opposed to an unmagnetized one. This is important since, as we reported, the masses of clumps in magnetized disks are significantly lower than those in unmagnetized ones from the beginning (from the time they become bound), which suggests that the effect of magnetic pressure in stifling gas accretion, proposed in Deng et al. (2021), cannot be the only reason behind the low masses of clumps (see fig. 3).
To this aim, we investigate how the presence of the magnetic field could change the fragmentation. Let us now turn back to Elmegreen's analysis on fragmentation in magnetized galactic disks (Elmegreen, 1987). Starting with the magneto-hydrodynamical equations he assumed first-order perturbations. Then the equations were evolved numerically and the response of the system to a perturbation was studied. We note that the results presented in this paper include resistivity. However, similar results have been observed for ideal MHD simulations (Deng et al., 2020) where we expect even more prominent differences since the magnetic field is not restrained. For simplicity, we consider ideal MHD in this section. The ideal magneto-hydrodynamical equations describe the gas motion by considering gas pressure, self-gravity and magnetic fields:
\[\frac{\partial\rho}{\partial t}+\vec{\nabla}\cdot(\rho\vec{v})=0 \tag{15}\]
\[\frac{\partial\vec{v}}{\partial t}+\vec{v}\cdot\mathrm{grad}\,\vec{v}=-\frac {1}{\rho}c_{s}^{2}\,\mathrm{grad}\,\rho-\mathrm{grad}\,\Phi\,+\frac{1}{\mu_{0} \rho}(\vec{\nabla}\times\vec{B})\times\vec{B} \tag{16}\]
\[\Delta\Phi=4\pi G\rho \tag{17}\]
\[\frac{\partial\vec{B}}{\partial t}=\vec{\nabla}\times(\vec{v}\times\vec{B}) \tag{18}\]
In a local Cartesian coordinate system in the disk, where x points in the radial and y in the azimuthal (transverse) direction, the local background flow can be approximated as
\[\begin{pmatrix}v_{x}\\ v_{y}\end{pmatrix}\,\simeq\,\begin{pmatrix}-\Omega(r_{0})y\\ 2\mathcal{A}x+\Omega(r_{0})x\end{pmatrix} \tag{19}\]
with \(\mathcal{A}\) being one of Oort's constants (Binney & Tremaine
Figure 13: Resulting force density \(\delta F/\delta A\delta x\) for two different clumps. **Top:** Clump 5. Inside the bound radius (marked with a dot), the gas pressure is the dominating stabilizing force although rotation also plays a role which could explain the observed flatness in fig. 10. The magnetic field acts compressing up to a certain radius until it pushes matter outwards corresponding to the magnetic shield visible in fig. 7. **Bottom:** Clump 8. While at small radii inside the clump, gas pressure and rotation dominate over the magnetic field (possibly due to the higher density), the magnetic field becomes more important further out, around the clump’s bound radius (marked with a dot).
2008) that represents the shear arising from differential rotation. By defining a dimensionless shear parameter \(\alpha:=\mathcal{A}/\Omega\), the shear of the flow can be quantified more conveniently; it becomes radius-independent if the angular velocity follows a power law in radius (e.g. for a Keplerian orbit).
Elmegreen considered linear perturbations to an equilibrium solution that are proportional to \(\exp(ik_{y}(y-2\mathcal{A}xt))\), which means they start azimuthally oriented and are then sheared out over time. He then integrated the perturbative solution numerically over time for various parameters of the magnetic field and the shear rate, and found that the effect of the magnetic field depends strongly on the value of the shear parameter. In a strong-shear case, corresponding to \(\alpha=-0.5\) as for a flat rotation curve of a galactic disk, the magnetic field stabilized the disk: an increase in the magnetic field resulted in a stronger damping of the perturbations, similar to what was found in the last section. In a weak-shear case, corresponding to \(\alpha=-0.05\), the result was very different: the magnetic field severely destabilized the disk. While the response of the system to the perturbation was stable without a magnetic field, even a small magnetic field led to a huge amplification of the perturbation, and the stronger the magnetic field, the more unstable the system became.
Elmegreen (1987) explained the destabilizing effect of the magnetic field intuitively by looking at the magnetohydrodynamic equations. If a region without a magnetic field collapses, the collapse is stabilized by the Coriolis force. However, the magnetic field is assumed to be toroidal and thus introduces an asymmetry. Therefore the magnetic field dampens the radial part (the x-direction) of the perturbed velocity and no stabilizing Coriolis force can arise which gives rise to a huge growth (Elmegreen, 1987).
While these are numerical results, Gammie used an analytical approach to derive a stabilizing effect of the magnetic field for axisymmetric perturbations (Gammie, 1996). From the linearized MHD equations and after solving for one axisymmetrical mode, he derived a dispersion relation for MHD perturbations. We now want to examine if a destabilizing effect as found by Elmegreen can also be seen in Gammie's analytical framework.
### Dispersion relation
In the following paragraphs we look at this dispersion relation for the general case, without the restriction to axisymmetric perturbations. We find two effects. First, the magnetic field can destabilize the system: in regions of weak shear (shear parameter \(\alpha\gtrsim-0.15\)) it can destabilize regions that are otherwise stable, while for \(\alpha\gtrsim-0.4\) it at least increases the growth of instabilities that would also be present without it. The second effect concerns the wavelength of the most unstable perturbation: when a magnetic field is present, the wavelength in regions of weaker shear is significantly smaller, potentially leading to smaller-sized objects.
We begin by linearizing the MHD equations and solving for one mode. As in Elmegreen (1987), the perturbations are now non-axisymmetric but shearing with the flow, with wavenumber
\[\begin{pmatrix}k_{x}\\ k_{y}\end{pmatrix}=\begin{pmatrix}\tilde{k}_{x}-2\mathcal{A}k_{y}t\\ k_{y}\end{pmatrix}. \tag{20}\]
Then the equations are solved for one angular frequency \(\omega\) (so the perturbations are proportional to \(e^{i\vec{k}\cdot\vec{x}}e^{i\omega t}\)). Without any further simplifications this yields the dispersion relation:
\[\begin{split}&\omega^{4}+i\omega^{3}\left(4\mathcal{A}\frac{k_{y }k_{x}}{k^{2}}\right)\\ &-\omega^{2}\left(\kappa^{2}-2\pi G\Sigma|k|+c_{s}^{2}\vec{k}^{2}+ k_{x}^{2}\vec{V}_{a}^{2}+k_{y}^{2}(\vec{V}_{a,y}^{2}-\vec{V}_{a,x}^{2})\right)\\ &-i\omega\mathcal{A}\left(\vec{k}^{2}V_{a,x}V_{a,y}+k_{x}k_{y}\vec{V}_{a}^ {2}\right)\\ &+(k_{x}^{2}V_{a,x}^{2}+k_{y}^{2}V_{a,y}^{2})\left(-2\pi G|\vec{k}| \Sigma+c_{s}^{2}\vec{k}^{2}\right)=0\end{split} \tag{21}\]
which is a fourth-order polynomial in \(\omega\).
In the next step we want to examine the roots of this polynomial \(\{\omega_{0}\}\) numerically. In general we expect 4 solutions to
Figure 14: Solutions of the dispersion relation for shearing perturbations at different times (different values of \(k_{x}\)), plotted over the wavenumber. The imaginary parts of the solution are solid and the real parts are dashed. If there exists a solution with a non-zero imaginary part, perturbations can grow in the corresponding regime. At large wavenumbers, there exists only a real (oscillating) solution.
the equation. If they are all real, the solution is an oscillating wave. If one of the solutions has an imaginary part, perturbations can grow. Fig. 14a presents as an example the situation for realistic values at approximately \(10\)AU in a protoplanetary disk. The \(4\) solutions depending on the wavenumber \(k_{y}\) are drawn in different colours, where the imaginary part is drawn solid and the real part dashed. In this case, in the region \(k_{y}<5/\)AU there exists a solution with a non-zero imaginary part, which means that perturbations can grow in this regime. At large wavenumbers there is only a real (oscillating) solution.
The next step is to check whether this formalism also shows a destabilizing effect of the magnetic field. For that, a toy model of a protoplanetary disk is introduced, using realistic values comparable to those used to initialize the simulations of Deng et al. (2021). We then calculate a radial profile of the solution. At each radius, the solutions \(\omega_{0}\) of the dispersion relation (21) are calculated for a range of wavenumbers \(\vec{k}\) with \(\tilde{k}_{x}=0\). At each wavenumber \(k_{y}\), the fastest-growing of the \(4\) solutions is selected, and the wavenumber that leads to the largest imaginary part of \(\omega\) is then chosen. Its growth factor, defined as \(s\coloneqq\mathfrak{Im}(\omega)\), is taken as the growth factor at this radius (assuming that the fastest growing mode dominates).
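This procedure amounts to finding the roots of the quartic (21) on a grid of wavenumbers and keeping the largest imaginary part. A minimal sketch, with our own function names and with the growth factor taken as \(s=\mathfrak{Im}(\omega)\) following the convention in the text:

```python
import numpy as np

def growth_factor(ky, kx, A, kappa2, G, Sigma, cs, Vax, Vay):
    """Largest imaginary part of the four roots of the dispersion relation (21),
    taken as the growth factor s = Im(omega) following the text's convention."""
    k2 = kx**2 + ky**2
    k = np.sqrt(k2)
    Va2 = Vax**2 + Vay**2
    coeffs = [1.0,
              1j * 4.0 * A * ky * kx / k2,
              -(kappa2 - 2*np.pi*G*Sigma*k + cs**2*k2
                + kx**2*Va2 + ky**2*(Vay**2 - Vax**2)),
              -1j * A * (k2*Vax*Vay + kx*ky*Va2),
              (kx**2*Vax**2 + ky**2*Vay**2) * (-2*np.pi*G*Sigma*k + cs**2*k2)]
    return np.max(np.roots(coeffs).imag)

def fastest_growing_mode(A, kappa2, G, Sigma, cs, Vax, Vay, ky_grid, kx=0.0):
    """Scan azimuthal wavenumbers (k~_x = 0, i.e. t = 0) and return the maximum
    growth factor together with the wavenumber attaining it."""
    s = np.array([growth_factor(ky, kx, A, kappa2, G, Sigma, cs, Vax, Vay)
                  for ky in ky_grid])
    i = int(np.argmax(s))
    return s[i], ky_grid[i]
```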
Fig. 15a shows the growth factor \(s\) for different configurations over the radius of the toy model. It shows two situations: one where the shear value is set to the Keplerian shear and another with weak shear, \(\alpha=-0.1\). Both situations are shown with and without a magnetic field. In the Keplerian shear situation, the presence of the magnetic field lowers the growth factor slightly; the system is unstable in both cases since the growth factor is non-zero. In the weak shear situation (\(\alpha=-0.1\)), on the other hand, the magnetic field seems to be required for the system to be unstable. Without the magnetic field (the green line), the growth factor is zero everywhere, meaning that the system is stable. When a magnetic field is introduced, however, the growth factor is non-zero and thus the system is unstable. This is even true for a much weaker magnetic field (\(10\) times smaller), although the growth factor is then also a bit lower. If the magnetic field is further increased in strength, the growth factor seems to saturate. This clearly destabilizing effect seems to be present for shear rates \(\alpha\gtrsim-0.15\). In an intermediate regime, down to \(\alpha\sim-0.4\) (see fig. 15b), the magnetic field increases the growth rate but mostly does not destabilize regions that would otherwise be stable.
It is now examined why the magnetic field makes such a difference in the growth factor at low shear. Starting from the dispersion relation (eq. 21) and now assuming zero-shear \(\mathcal{A}=0\) one arrives at a quadratic equation in \(\nu\coloneqq\omega^{2}\):
\[\nu^{2}\ -\nu\overbrace{\left(\kappa^{2}\ -2\pi G\Sigma|k|+c_{s}^{2} \vec{k}^{2}+k_{x}^{2}\vec{V}_{a}^{2}+k_{y}^{2}(\vec{V}_{a,y}^{2}-\vec{V}_{a,x }^{2})\right)}^{=q}\] \[+\underbrace{(k_{x}^{2}V_{a,x}^{2}+k_{y}^{2}V_{a,y}^{2})\left(-2 \pi G|\vec{k}|\Sigma+c_{s}^{2}\vec{k}^{2}\right)}_{=w}=0 \tag{22}\]
If there is no magnetic field, then \(w=0\) and a negative solution in \(\nu\) exists only if \(q<0\), which is just Toomre's criterion for instability. If \(\nu<0\), then there is a solution with \(\omega\) purely imaginary, which means exponential growth of the perturbation. Note, however, that now \(\kappa=2\Omega\) (since \(\mathcal{A}=0\)), which makes the model stable at all radii. If the model is stable then \(Q>1\iff q>0\) (still without the magnetic field). Now imagine introducing even a small magnetic field. It can be seen that there always exists a \(|k|\) such that \(w<0\). But if that is the case, then the discriminant is positive, \(\mathcal{D}=q^{2}-4w>0\), meaning that the solutions are real in \(\nu\). The solutions are \(\nu=\frac{q\pm\sqrt{\mathcal{D}}}{2}\). From \(w<0\) it follows that \(\sqrt{\mathcal{D}}>q\). Therefore one of the solutions is negative (\(\nu<0\)). This again means that \(\omega\) is purely imaginary and thus the system becomes unstable. In the case of strong shear, where
Figure 15: Radial profiles of growth factor \(s\) obtained from linear perturbation theory.
the system is already unstable, the damping effect of the magnetic field can be attributed to the term in \(q\) proportional to the Alfven velocity which acts like a gas pressure.
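The zero-shear argument can be checked numerically in a few lines: pick \(|k|\) below the critical wavenumber so that \(w<0\), and verify that the quadratic (22) then has a negative root even though \(q>0\). The numbers below are assumed, illustrative values, not parameters of the simulation:

```python
import numpy as np

# assumed, illustrative disk parameters (cgs), not taken from the simulations
G, Sigma, cs = 6.674e-8, 100.0, 4e4
Omega = 2.0*np.pi / (30.0 * 3.156e7)          # roughly 10 AU around a solar-mass star
kappa2 = (2.0*Omega)**2                       # kappa = 2*Omega for zero shear
Vax, Vay = 0.0, 3e3                           # weak, purely toroidal field (assumption)

# choose |k| below the critical wavenumber so that cs^2 k^2 < 2 pi G Sigma |k|
kx, ky = 0.0, 0.5 * 2.0*np.pi*G*Sigma / cs**2
k = np.hypot(kx, ky)

q = (kappa2 - 2*np.pi*G*Sigma*k + cs**2*k**2
     + kx**2*(Vax**2 + Vay**2) + ky**2*(Vay**2 - Vax**2))
w = (kx**2*Vax**2 + ky**2*Vay**2) * (-2*np.pi*G*Sigma*k + cs**2*k**2)
nu_minus = 0.5 * (q - np.sqrt(q**2 - 4.0*w))
print(q > 0, w < 0, nu_minus < 0)             # stable without B, yet w<0 gives a growing mode
```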
What are the expected scales of fragmentation? The fastest growing mode for certain values of the shear parameter, including the presence of a magnetic field, is plotted in fig. 16. It can be seen that the scale is smaller for weaker shear, meaning that smaller objects may be produced. Compared to the Toomre most unstable wavelength, the scale is reduced to about \(2/3\), so it would lead to masses about \(1/3\) as large.
When comparing the masses of the clumps in the MHD simulations to the ones from the HD simulation (see fig. 3) one can see that this could explain a large part of the difference between the two cases. Still, the predicted mass is much too large when compared to the clumps actually observed in the simulation. However, when we phenomenologically combine this magnetic destabilization effect with the predicted mass according to (Boley et al., 2010) (see fig. 4) the prediction lies actually in the range of the small clumps from the simulations.
We conclude our discussion of the perturbation theory results with a few comments on the validity of the approximations made. With eq. 19 the approximation of a local coordinate system was made. This is only valid if we consider regions that are much smaller than the system's length scale, which implies that the wavelengths of the perturbation need to be much smaller than the radial distance from the star, \(\lambda\ll r\). This is certainly fulfilled for the weak-shear cases (see fig. 16). For the strong-shear cases the analysis could become invalid at small radii. Further, the WKB approximation was made, where the analysis concentrates on one mode and does not take mode mixing into account. However, since for non-axisymmetric perturbations the wavenumber \(\vec{k}(t)\) is time-dependent, this can only be justified as long as the change of \(\vec{k}\) is small on the considered time-scale. This means that the growth factor should be large compared to the Oort parameter, \(s\gg k^{\prime}_{x}/\left|\vec{k}\right|=2\left|\mathcal{A}\right|\cdot k_{y}/\left|\vec{k}\right|\lesssim 2\left|\mathcal{A}\right|\). For the weak-shear case (\(\alpha\gtrsim-0.15\)) this is fulfilled over the whole region considered in fig. 15a, since e.g. at \(10\)AU the angular frequency is \(\approx 2\pi/30\)yr, leading to an Oort parameter of \(\mathcal{A}\approx-0.02/\)yr. However, the solutions for non-axisymmetric perturbations at Keplerian shear are probably not valid, since then the Oort shear parameter is comparable to the growth rate. Still, the magnetic field could slightly enhance growth in the regime of intermediate shear \(\alpha\gtrsim-0.4\).
From fig. 14b one can see that even if we look at the behaviour at later times (here \(t=0.5/\Omega\)) when the shape of the perturbations has changed (see eq. 20) the solutions to the dispersion relation don't change much. This means that the time-evolved perturbation is still unstable and can grow further.
After these theoretical considerations it remains to check if the conditions used in this section are met in the simulations. This is done in the next section.
### Preconditions
In the next step we want to find out whether the process described in the last section could contribute to the fragmentation results. To this end, we trace the particles of the clumps back in time to compute the physical properties of the fragmenting regions at early stages. The fragmentation process described by Elmegreen (1987) relies on two conditions: first, the magnetic field was assumed to be toroidal, such that locally an asymmetry arises between the radial and the azimuthal direction; second, the effect requires low-shear regions, because only then does the magnetic field amplify perturbations, and only then is the size of the perturbations reduced.
Fig. 17 shows the measured shear parameters \(\alpha\) in the simulation at various times. It can be seen that most values are around the expected Keplerian shear \(\alpha=-0.75\), but there is a wide dispersion. The shear values may deviate from the Keplerian value through gas pressure, turbulence or magnetic field effects. Regions of low shear (\(\alpha\gtrsim-0.15\)) exist, although they are not common (2%). However, regions of intermediate shear (\(\alpha\gtrsim-0.4\)) appear more frequently (10%). In these regions the magnetic field enhances perturbations that would already be unstable without it. Nevertheless, perturbations in regions with these shear values could lead to smaller fragmented objects.
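For completeness, the shear parameter can be measured from an angular-velocity profile as sketched below, using \(\mathcal{A}=\tfrac{1}{2}r\,\mathrm{d}\Omega/\mathrm{d}r\), which is our reading of the sign convention implied by eq. (19) and by \(\alpha\approx-0.75\) for Keplerian rotation:

```python
import numpy as np

def shear_parameter(r, omega):
    """alpha = A/Omega with A = 0.5 * r * dOmega/dr (our reading of the sign
    convention implied by eq. 19, which gives alpha ~ -0.75 for Keplerian rotation)."""
    return 0.5 * r * np.gradient(omega, r) / omega

# sanity check with a Keplerian profile Omega ~ r^(-3/2): alpha should be ~ -0.75
r = np.linspace(5.0, 30.0, 200)     # units cancel in alpha
omega = r**-1.5
print(np.round(shear_parameter(r, omega)[1:-1].mean(), 3))
```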
In fig. 18 the relation \(B_{\phi}^{2}/B^{2}\) is measured, with \(B_{\phi}\) being the azimuthal component of the magnetic field and \(B\) the total magnetic field. The relation measures the fraction of the magnetic energy that is in the toroidal component. It is shown over a range of snapshots from the beginning of the simulations until the first fragments appear; only the regions that contain particles that will later fragment are taken into account. It can be seen that the magnetic field is predominantly toroidal, but a significant fraction of the energy is also in the radial and the z-component. We assume that this is still compatible with the effect described by Elmegreen, because the magnetic field would still be coupled to the contraction much more in the azimuthal direction than in the other directions and could thus still suppress the Coriolis force (see section 4.1). When querying the solutions of the dispersion relation (section 4.2), we arrive at very similar solutions even if the magnetic field is not perfectly axisymmetric (\(B_{x}=0\)), i.e. if we allow for a radial component.
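The azimuthality measure itself is straightforward to compute from the particle data; a small sketch (names are our own) is given below:

```python
import numpy as np

def toroidal_fraction(pos, B, centre):
    """Fraction of the magnetic energy in the azimuthal component, B_phi^2 / B^2,
    with phi defined in the disk plane around `centre` (sketch)."""
    x = pos[:, :2] - centre[:2]
    r = np.maximum(np.linalg.norm(x, axis=1), 1e-30)
    phi_hat = np.column_stack((-x[:, 1], x[:, 0])) / r[:, None]
    B_phi = np.sum(B[:, :2] * phi_hat, axis=1)
    return np.sum(B_phi**2) / np.sum(B**2)
```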
Figure 16: Radial profile of the most unstable wavelength for non-axisymmetric perturbations in a toy model disk with different values of the shear parameter \(\alpha\) and including a magnetic field. The Toomre most unstable wavelength is shown as a reference. At weak shear, where the magnetic field may act destabilizing (see fig. 15a), the wavelength of the most unstable perturbation is smaller than Toomre’s prediction potentially contributing to the significantly lower size of fragments observed in the MHD simulations.
## 5 Summary and concluding discussion
As in conventional disk instability, clumps in magnetized self-gravitating disks form at fragmentation sites inside spiral structure. The flow state in such a disk has been shown to be more turbulent than in non-magnetized disks, due to a combination of Maxwell and gravitational stresses (Deng et al., 2020), which also leads to more flocculent spiral structure. The initial properties and structure of the clumps in the fragmenting sites are thus determined by a combination of the gas flow kinematics, the magnetic field, and the thermodynamical state of the medium. We analyzed both the pre-collapse and post-collapse properties of the fluid that ends up generating the clumps, which led to numerous findings on the origin, dynamics and development of magnetized clumps:
* Clumps forming in magnetized disks have gravitationally bound masses from one to almost two orders of magnitude lower than clumps in unmagnetized disks, being typically in the range of Super-Earths and Neptune-sized bodies.
* When comparing the energy scales right before the formation of the clumps, it is found that the magnetic energy is smaller than the internal energy but dominates over the turbulent kinetic energy. Since the energy stored in the magnetic field is much greater than that in the turbulent motion of particles, it plays an important direct role in determining the properties and dynamics of the clumps.
* After the collapse, the magnetic field is amplified around and inside the clumps. The peak may be just outside the bound radius in which case the magnetic field acts compressing inwards and pushes the surrounding flow outwards, or it can be at the centre of the clump in which case it could just isolate the clump from the outside. In general, we confirm that this "magnetic shield" stifles gas accretion, suppressing further clump growth.
* While the magnetic field may have its maximum field strength inside the clump, relative to the other energy components, it is dominant only at the periphery of the clumps.
* Thermal gas pressure plays an important role in determining the clump energetics. It is higher than both rotational energy and the magnetic energy near the centre of the clumps. The importance of the magnetic field generally increases further outside, around the bound radius of the clumps.
* After clump formation, rotational energy becomes the dominant form of kinetic energy inside clumps. In their outermost regions clumps are rotationally supported in both MHD and HD simulations, but rotation is significantly higher in the HD clumps than in the MHD clumps. As a result, the MHD clumps have a lower specific angular momentum than the HD clumps, which brings their spin into better agreement with the spin of gas and ice giant planets in the Solar System (excessively high spins are a known problem for conventional HD fragmentation simulations (Mayer et al., 2004)).
* Besides influencing the evolution of the clumps, the magnetic field also has an influence on the fragmentation process itself, being responsible for significantly smaller initial masses of the clumps. Adapting previous results of linear perturbation theory for non-axisymmetric perturbations of a magnetized rotating sheet by Elmegreen (1987) lends evidence for a
Figure 17: Histogram of values of the shear parameter \(\alpha\) at the beginning of the simulation (left column) and color-coded intensity map of the shear parameter in the disk at different times: at the beginning of the simulation (top), right before fragmentation (middle), and after fragmentation, when most clumps have formed (bottom). In the last plot, the locations of the clumps at this time are shown. It is clear that low shear values occur primarily along dense spirals, namely at the typical sites of clump formation.
Figure 18: Azimuthality of the magnetic field over the simulation time defined as \(B_{\phi}^{2}/B^{2}\) where \(B_{\phi}\) is the toroidal part of the magnetic field (relative to the disk) and \(B\) the total magnetic field. It is measured taking into account the particles that will later form the clumps and thus traces the collapsing regions.
destabilizing effect of the magnetic field in low-shear regions which results in a smaller characteristic scale of fragments.
We discussed the fragmentation and early evolution of intermediate-mass protoplanets in the MHD disk. It remains, however, to investigate their long-term evolution to establish that such protoplanets really contribute to the observed intermediate-mass planet population. First, such protoplanets need to survive for a sufficiently long time; therefore, improving the understanding of the migration of such clumps will be crucial to determine their further outcome. Inward migration may eventually lead to tidal disruption by the host star (Boley et al., 2010). To form gas planets, they have to avoid tidal disruption until they cool enough to undergo their second dynamical collapse due to the dissociation of molecular hydrogen (Helled et al., 2014). To form a solid core, they have to accrete dust sufficiently fast; this would be required to explain terrestrial intermediate-mass planets. Even after such processes, the protoplanet can still fall into the star (Helled et al., 2014). Deng et al. (2021) noted that the protoplanets experience migration both in- and outward, hence they will eventually be distributed over a broad radial range. However, the disks were evolved for only \(\approx 10\) orbits, so the question of migration on longer time scales remains to be investigated. The significantly smaller masses of the clumps in magnetized disks are also expected to have an overall impact on the strength of migration. No runaway migration is expected in the mass range of typical clumps, in contrast with clumps in conventional disk instability simulations (Baruteau et al., 2011; Malik et al., 2015), which should significantly increase the chances of clump survival. Furthermore, the different nature of the background flow could have an impact on the nature of the migration process itself. In Nelson & Papaloizou (2004), who simulated low-mass planets in MHD non-self-gravitating disks, it was found that a planet of \(3\,M_{\rm earth}\) would experience random-walk migration instead of a monotonic drift because of the high turbulence in the disk. Since the disk in the simulations analysed here (Deng et al., 2021) is 10 times more massive, the same mass ratio between disk and planet would correspond to a mass of \(\approx 0.05M_{\rm jup}\), namely compatible with the typical clump mass in our simulations. Therefore, in addition to the low mass, this is another way clumps in magnetized disks would avoid fast migration and survive.
Further improvements in the understanding of the evolution of such protoplanets could be achieved by implementing additional and more accurate physics. Published hydrodynamical simulations of disk instability provide plenty of hints of what physics should be important. As an example, in Stamatellos (2015) it was found that the inclusion of radiative feedback in the simulations changed the outcome of migration for giant gas planets, namely outward migration was prevented and inward migration also came to a halt because of a gap at the orbit of the planet that arose from the heating of the material that was accreted onto the planet. Likewise, Rowther & Meru (2020) showed how heating of the inner disk can stifle migration of massive clumps. Furthermore, in Nayakshin & Cha (2013) it was shown that radiative feedback may have the effect of slowing the accretion of matter onto the planet, thereby reducing its growth; but again this result is for giant planets. Also, in the very few simulations that have included even simple approximations to radiative transfer, such as flux-limited diffusion, it has been shown how clump mass growth is slowed down as the clumps' thermal pressure support increases beyond what is predicted by Beta cooling or other simple cooling recipes (Szulagyi et al., 2017). This suggests that radiative transfer, as well as radiative feedback, should also be included in future MHD simulations of self-gravitating disks.
Additionally, the effects of ambipolar diffusion and the Hall effect should be studied. The non-ideal effects should be more important near the mid-plane of the disk, since in the outer layers the gas is expected to be ionized by the stellar radiation (Perez-Becker & Chiang, 2011). While Ohmic dissipation, which we have included, is usually dominant in the highest-density medium, such as that inside the clumps, ambipolar diffusion could affect magnetic field dissipation in lower density regions, such as at the periphery of the clumps. For example, it should be investigated whether ambipolar diffusion affects the "magnetic shield" developed around MHD clumps, which, as we have seen, plays an important role in their overall mass growth, or whether it could have an influence on the initial stage of fragmentation (although order-of-magnitude estimates by Deng et al. (2021) suggest that the dissipation rate should be too low to be dynamically relevant over the short timescales probed by our simulations). Moreover, one could use the simulations presented here as a starting point for additional high-resolution simulations of isolated clumps in order to verify their internal structure and study the collapse of MHD clumps similarly to what has been done for hydrodynamical clumps in Galvagni et al. (2012).
While the MHD simulation used \(\approx 30\) million particles, the companion HD simulations that we used as a comparison used only \(\approx 3\) million. New HD simulations that used \(\approx 30\) million particles have also been conducted starting from the same initial conditions using the procedure described in section 2.1. A quick check revealed little difference from the lower-resolution HD simulations used in this paper, either in the number of resulting clumps or in the angular momentum result (see fig. 11), but we did not analyze them any further.
For the perturbation analysis, we note that since we considered non-axisymmetric modes, their wavenumbers change over time, leading to mode mixing. A more accurate treatment would have to take this effect into account as it would impact the fragmentation process. More generally, one could question the use of linear perturbation theory as done in this paper, beginning with the fact that we considered perturbations on a smooth axisymmetric background. In fact, it is well established in the literature (e.g. Durisen et al. (2007)) that fragmentation occurs inside spiral arms, namely in a non-axisymmetric, already nonlinear flow. Furthermore, spiral structure typically develops after a transient stage in which ring-like global perturbations arise in the disk (e.g. Deng et al. (2017)). With this in mind, Deng & Ogilvie (2022), instead of a smooth disk, considered an already non-linear ring-like structure as the background state, described by solitary waves. Then, they studied the growth of non-axisymmetric perturbations to the solitary modes, identifying fast growth, which would result in the development of a spiral structure. Note that this is different from the conventional swing amplification mechanism, which assumes that non-axisymmetric waves are already present and can increase their amplitude exponentially when they switch from leading
to trailing (Goldreich & Lynden-Bell, 1965). Fragmenting sites would thus correspond to self-gravitating patches in the growing non-axisymmetric pattern, a calculation that should be attempted in the future as it could lend a new, more realistic prediction of the fragmentation scale. Subsequently, such an approach should be extended to include the effect of the magnetic field on the mode growth.
## Acknowledgements
This work is supported by the Swiss Platform for Advanced Scientific Computing (PASC) project SPH-EXA2.
## Data Availability
The data files that support our analysis will be made available upon reasonable request.
|
2308.09735
|
CTP:A Causal Interpretable Model for Non-Communicable Disease
Progression Prediction
|
Non-communicable disease is the leading cause of death, emphasizing the need
for accurate prediction of disease progression and informed clinical
decision-making. Machine learning (ML) models have shown promise in this domain
by capturing non-linear patterns within patient features. However, existing
ML-based models cannot provide causal interpretable predictions and estimate
treatment effects, limiting their decision-making perspective. In this study,
we propose a novel model called causal trajectory prediction (CTP) to tackle
the limitation. The CTP model combines trajectory prediction and causal
discovery to enable accurate prediction of disease progression trajectories and
uncover causal relationships between features. By incorporating a causal graph
into the prediction process, CTP ensures that ancestor features are not
influenced by the treatment of descendant features, thereby enhancing the
interpretability of the model. By estimating the bounds of treatment effects,
even in the presence of unmeasured confounders, the CTP provides valuable
insights for clinical decision-making. We evaluate the performance of the CTP
using simulated and real medical datasets. Experimental results demonstrate
that our model achieves satisfactory performance, highlighting its potential to
assist clinical decisions. Source code is in
\href{https://github.com/DanielSun94/CFPA}{here}.
|
Zhoujian Sun, Wenzhuo Zhang, Zhengxing Huang, Nai Ding, Cheng Luo
|
2023-08-18T06:58:31Z
|
http://arxiv.org/abs/2308.09735v2
|
# CTP: A Causal Interpretable Model for Non-Communicable Disease Progression Prediction
###### Abstract
Non-communicable disease is the leading cause of death, emphasizing the need for accurate prediction of disease progression and informed clinical decision-making. Machine learning (ML) models have shown promise in this domain by capturing non-linear patterns within patient features. However, existing ML-based models cannot provide causal interpretable predictions and estimate treatment effects, limiting their decision-making perspective. In this study, we propose a novel model called causal trajectory prediction (CTP) to tackle the limitation. The CTP model combines trajectory prediction and causal discovery to enable accurate prediction of disease progression trajectories and uncover causal relationships between features. By incorporating a causal graph into the prediction process, CTP ensures that ancestor features are not influenced by the treatment of descendant features, thereby enhancing the interpretability of the model. By estimating the bounds of treatment effects, the CTP provides valuable insights for clinical decision-making. We evaluate the performance of the CTP using simulated and real medical datasets. Experimental results demonstrate that our model has the potential to assist clinical decisions. Source code is in Github.
## 1 Introduction
Non-communicable disease (NCD), e.g., Alzheimer's disease and heart failure, is the leading cause of death across the world (Bennett et al., 2018). Prognosis prediction is regarded as a method for achieving precision medicine for NCDs (Gill, 2012). The assumption is that if we can predict prognosis (e.g., death within one year) accurately, we can adopt targeted treatment in advance for patients with a bad prognosis, and the bad prognosis may be prevented. The more accurate the model, the more precise the clinical decision. Under this assumption, recent studies focus on utilizing machine learning (ML) methods to obtain more accurate prediction models (Coorey et al., 2022).
However, whether this assumption holds for NCDs has been challenged by clinicians (Wilkinson et al., 2020). NCDs usually have heterogeneous etiologies. Different patients usually require different therapies. Nevertheless, current ML-based models merely forecast the outcome of a patient. No matter how accurate, they do not inform clinicians which therapy may be helpful, so assisting clinical decisions is infeasible. What clinicians require are causal interpretable models that not only predict prognosis accurately but also answer counterfactual inquiries such as "What will happen to the patient if we control the value of a feature from \(a\) to \(b\)?" (Moraffah et al., 2020). Clinical decision-making support is feasible when the model can help clinicians anticipate the consequences of their actions (Coorey et al., 2022). Most ML models (even explainable ML models) cannot answer such inquiries as they only capture correlational relationships between features. Although there are studies focused on estimating treatment effect, their scope of application is restricted because they usually only investigate the effect of a drug on a predefined binary end-point (Yao et al., 2021).
This study aims to investigate a more general problem: predicting the disease progression trajectory of a patient when we control the value of a feature, based on observational data (Figure 1 (a)). Compared to previous treatment effect studies, we regard a treatment as direct, dynamic, and continuous control of an arbitrary feature and not necessarily the usage of a drug. We regard the outcomes as continuous trajectories of all features of interest rather than a predefined event (Yao et al., 2021; Ashman et al., 2023). We need to tackle two problems in achieving this goal. First, we
do not know the causal structure between features. A controlled feature \(A\) may be a consequence of the features of interest \(Y\) (i.e., \(A\gets Y\)). If \(A\) is a consequence of \(Y\), our model is expected to generate unaffected trajectories of \(Y\) when we control \(A\) (Figure 1 (b)) (Neal, 2020). Second, as the development mechanisms of many NCDs are unclear, there may exist unmeasured confounders that we are not aware of. Our model needs to generate reliable estimates when a confounder exists (De Brouwer et al., 2022). Intuitively, our model needs to discover the causal structure between features from observational datasets and ensure the trajectory of every feature is predicted only by its historical value and its causative features. Then, we can generate unaffected trajectories of \(Y\) under treatment \(A\) when \(A\) is a descendant of \(Y\). In this study, we treat all observable features as \(Y\). It's important to note that the effect of an unmeasured confounder may be unidentifiable as there may be infinitely many models that can generate the observed dataset when an unmeasured confounder exists (Gunsilius, 2021). Therefore, we additionally estimate the possible bounds of the treatment effect.
We designed a causal trajectory prediction (CTP) model consisting of two phases. The CTP predicts the progression trajectory of features in a causal interpretable manner in the first phase. It formulates the trajectory prediction problem as solving ordinary differential equations (ODE). It estimates features' derivatives using neural networks, and trajectories can be predicted via a numerical neural ODE solver (Chen et al., 2018). It also adopts a neural connectivity matrix to evaluate the predictive effect with respect to each feature pair (Lachapelle et al., 2020). To ensure each feature is predicted only by its historical value and its causative features, we applied a sparse penalty and a score-based penalty to the neural connectivity matrix. Previous studies have demonstrated that it is possible to discover causal structures in a linear dynamical system using penalties (Stanhope et al., 2014; Brunton et al., 2016; Chen et al., 2021). We extend this approach to a non-linear system in this study. Once the CTP model is optimized and the causal structure is identified, we retrain a group of independent CTP models to estimate the bounds of the treatment effect (the second phase). The training goal is that the group of new CTP models needs to fit the observed dataset accurately and generate trajectories as different as possible when we apply a treatment. Finally, we estimate the treatment effect bounds by analyzing trajectories generated by the group of retrained models.
We investigated the performance of the CTP model in discovering causal relationships between features, predicting feature progression trajectories, and predicting treatment effects. We utilized one real longitudinal medical dataset and four simulated datasets to evaluate model performance. Experiment results indicate that our framework is able to reconstruct the causal graph within features, obtain satisfactory predictive performance, and evaluate the bound of treatment effects well. Therefore, we believe the CTP model introduces an effective way to assist clinical decisions.
## 2 Method
### Preliminary
We denote a dataset as a sequence of two-element tuples \((s_{i},l_{i})_{i=1}^{N}\). \(s_{i}=(v_{ij},m_{ij},t_{ij})_{j=1}^{N_{i}^{s}}\) indicates a sequence of visit data of a patient until time point \(t_{iN_{i}^{s}}\), and \(N_{i}^{s}\) means
Figure 1: Goal and challenge. (a) This study investigates the progression of features under dynamic, continuous control of treatment via observational data. The blue line is the original trajectory while the red is the trajectory under a treatment, and the red region is the possible bound. (b) The main challenge is that the causal structure is unknown and there may exist unobserved confounders.
the number of visits. \(v_{i}\) denotes the progression trajectory of the \(i\)-th patient, and \(v_{i}(t)\) is a vector with \(K\) elements that denotes patient characteristics at time point \(t\). \(v_{ij}\) means data in the \(j\)-th visit (or it can be regarded as the abbreviation of \(v_{i}(t_{ij})\)), and \(v_{ij}^{k}\) denotes the value of the \(k\)-th feature. \(v_{ij}^{k}\) can be a continuous variable or a discrete variable. In this study, we presume discrete variables are binary for simplicity, while the proposed method can be generalized to handle categorical variables naturally. \(m_{ij}\in\{0,1\}^{K}\) indicates whether a corresponding feature in \(v_{ij}\) is missing (1 indicates missing). \(l_{i}=(v_{ij},m_{ij},t_{ij})_{j=N_{i}^{s}+1}^{N_{i}^{s}+N_{i}^{l}}\) indicates a sequence of (label) data to be predicted.
We use a numerical adjacency matrix \(D\in\mathbb{R}_{\geq 0}^{(K+1)\times(K+1)}\) to describe the causal structure between features (Bhattacharya et al., 2021). \(D_{kl}=0\) indicates the \(k\)-th feature is not the cause of the \(l\)-th feature, and \(D_{kl}\neq 0\) means the \(k\)-th feature is a cause of the \(l\)-th feature. Without loss of generality, \(D\) uses an extra dimension to model the causal relationship between observed features and unmeasured confounders (Lowe et al., 2022). In this study, we presume the progression of the unmeasured confounder is not affected by any observed features.
### Causal Trajectory Prediction
We estimate progression trajectories of features \(\hat{v}_{i}(t)\) by solving ODEs (Chen et al., 2018). Our CTP model first estimates the value of patient characteristics \(\hat{v}_{i0}\in\mathbb{R}^{K+1}\) at an initial time point \(t_{0}\). We follow a standard variational autoencoder to estimate the posterior \(q(\hat{v}_{i0}|s_{i})\), where \(q(\hat{v}_{i0}|s_{i})\) follows a Gaussian distribution with a diagonal covariance matrix, and \(t_{0}\) is a user-specified time point. We use a long short-term memory (LSTM) network parameterized by \(\phi\) to estimate \(\mu_{i},\sigma_{i}\) (Equation 1). Then, we randomly sample \(\hat{v}_{i0}\) via the reparameterization trick (Equation 2) (Kingma & Welling, 2014):
\[[\mu_{i},\sigma_{i}]=\mathrm{LSTM}([v_{ij},m_{ij},t_{ij}]_{j=1}^{N_{i}^{s}};\phi). \tag{1}\]
\[q(\hat{v}_{i0}|s_{i})=\mathcal{N}(\hat{v}_{i0}|\mu_{i},\sigma_{i}). \tag{2}\]
\[\bar{v}_{i}^{k}(t)=\begin{cases}\hat{v}_{i}^{k}(t),\text{ continuous variable}\\ \text{Gumbel-Sigmoid}(\hat{v}_{i}^{k}(t)),\text{ discrete variable} \end{cases} \tag{3}\]
\[\frac{d\hat{v}_{i}^{k}(t)}{dt}=f_{\theta_{k}}(\bar{v}_{i}(t)\circ\mathcal{M} _{k}). \tag{4}\]
\[\hat{v}_{i}(t)=\mathrm{ODESolver}(f_{\theta},\hat{v}_{i0},t_{0},t). \tag{5}\]
We first map \(\hat{v}_{i}^{k}(t)\) to \(\bar{v}_{i}^{k}(t)\) (Equation 3), which is an identity mapping for continuous variables and discretizes the logit for discrete variables (Jang et al., 2017). Then, we use \(K+1\) independent components \(f_{\theta_{k}}\) to predict each feature's derivative \(\frac{d\hat{v}_{i}^{k}(t)}{dt}\) (Equation 4), where \(\circ\) means element-wise multiplication, and each \(f_{\theta_{k}}\) is a feed-forward network (FFN). \(\mathcal{M}_{k}\) is a causal mask that will be introduced later; for now it can be regarded as a vector whose elements are all one. The \(\hat{v}_{i}(t)\) can be estimated according to \(f_{\theta}\) and \(v_{i0}\) via a numerical ODE solver (Equation 5) (Chen et al., 2018). As ODE-based models are only capable of modeling the dynamics of continuous variables, \(\hat{v}_{i}^{k}\) represents logit values, rather than the true values, for discrete variables.
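For concreteness, the sketch below illustrates Equations 3-5 in PyTorch under simplifying assumptions: all features are treated as continuous (the Gumbel-Sigmoid branch is omitted), the layer sizes are illustrative, and a plain fixed-step Euler integrator stands in for the adaptive ODE solver and adjoint method used in the paper. All identifiers are ours, not the authors'.

```python
import torch
import torch.nn as nn

class DerivativeNet(nn.Module):
    """One independent component f_{theta_k} per feature, masked by column k of M (Eq. 4)."""
    def __init__(self, dim):                      # dim = K + 1 (features plus confounder)
        super().__init__()
        self.components = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, 1))
            for _ in range(dim)
        ])
        self.register_buffer("mask", torch.ones(dim, dim))  # causal mask M, all ones initially

    def forward(self, v):                         # v: (batch, dim)
        derivs = [f(v * self.mask[:, k]) for k, f in enumerate(self.components)]
        return torch.cat(derivs, dim=-1)          # dv/dt, shape (batch, dim)

def euler_solve(f, v0, t0, t1, steps=50):
    """Crude stand-in for ODESolver in Eq. 5: fixed-step Euler integration."""
    v, dt = v0, (t1 - t0) / steps
    for _ in range(steps):
        v = v + dt * f(v)
    return v

# usage sketch: v0 would come from the LSTM encoder of Eqs. 1-2; here it is random
f_theta = DerivativeNet(dim=5)
v0 = torch.randn(8, 5)                            # hypothetical initial states for 8 patients
v_t = euler_solve(f_theta, v0, t0=0.0, t1=1.0)    # predicted state at t = 1
```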
The core of causal interpretability is to ensure a feature is predicted only by itself and its cause features. Here, we introduce the _neural connectivity matrix_ to evaluate the predictive effect of each feature pair (Lachapelle et al., 2020). Specifically, the form of an FFN (without bias term, the output is a real number) follows \(o=W_{N}\sigma(\cdots\sigma(W_{2}\sigma(W_{1}x)))\). \(x\in\mathbb{R}^{K+1}\) is the input, \(o\in\mathbb{R}\) is the output, \(\sigma\) is the non-linear activation, and \(W_{1},\cdots,W_{N}\) are a series of weight matrices (vectors). The connectivity vector \(C\in\mathbb{R}^{K+1}\) follows:
\[C=|W_{N}|\cdots|W_{2}||W_{1}|. \tag{6}\]
It is easy to see that \(C_{j}=0\) indicates the \(j\)-th input element does not affect the output. We can derive \(\widetilde{D}_{jk}=C_{j}^{k}\), where \(C^{k}\) is the connectivity vector for \(f_{\theta_{k}}\). \(\widetilde{D}\) is analogous to an adjacency matrix where
\(\widetilde{D}_{jk}\neq 0\) represents that the \(j\)-th feature can predict the \(k\)-th feature in some way. We can use the sparse penalty \(g(\theta)=\sum_{i=1}^{K+1}\sum_{j=1}^{K+1}(\widetilde{D})_{ij}\) to remove spurious causal connections (Brunton et al., 2016).
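A minimal numpy sketch of Equation 6 and of the sparse penalty \(g(\theta)\), assuming bias-free two-layer components with illustrative shapes; function and variable names are ours.

```python
import numpy as np

def connectivity_vector(weights):
    """Equation 6: C = |W_N| ... |W_1| for one bias-free FFN with scalar output."""
    C = np.abs(weights[0])
    for W in weights[1:]:
        C = np.abs(W) @ C
    return C.ravel()                  # length K+1; C_j == 0 => input j cannot reach the output

rng = np.random.default_rng(0)
dim = 5                               # K + 1
# one small FFN (W_2 W_1) per feature, mimicking the components f_{theta_k}
nets = [[rng.normal(size=(8, dim)), rng.normal(size=(1, 8))] for _ in range(dim)]

D_tilde = np.stack([connectivity_vector(w) for w in nets], axis=1)  # D_tilde[j, k] = C^k_j
sparse_penalty = D_tilde.sum()        # g(theta)
```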
Moreover, the causal relationship between features in many diseases can be characterized as a directed acyclic graph (DAG) and there is no feedback (Blaser et al., 2015; Suttorp et al., 2015). For example, the causal relation of features in the amyloid beta pathway in the progression of Alzheimer's disease forms a DAG (Hao et al., 2022). Therefore, we additionally presume the causal graph is a DAG and apply an extra score-based constraint to \(\widetilde{D}\) in this study (Equation 7).
\[h(\theta)=\mathrm{Tr}(\exp((1-I)\circ\widetilde{D}))-K, \tag{7}\]
where \(\mathrm{Tr}\) means the trace of a matrix, and \(\exp(A)\) denotes the exponential of a non-negative square adjacency matrix \(A\) that is defined as the infinite Taylor series, i.e., \(\exp(A)=\sum_{k=0}^{\infty}\frac{1}{k!}A^{k},\ A^{0}=I\). We use \((1-I)\) to indicate that we allow self-loops (i.e., a feature is able to predict its own derivative). Zheng et al. (2018) showed that \(A^{k}_{ij}\) indicates a weighted path count from node \(i\) to node \(j\) after \(k\) steps, and the count is a non-negative number. If \(A\) represents a cyclic graph, there must be some \(A^{k}_{ii}>0\), which causes \(\mathrm{Tr}(\exp(A))-K>0\). The score-based DAG constraint equals zero if and only if \((1-I)\circ\widetilde{D}\) represents a DAG. Once the constraint holds, it is possible that every \(v^{k}_{i}(t)\) is predicted only by itself and its direct cause features.
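The score-based constraint of Equation 7 can be sketched as follows; note that this sketch subtracts the full matrix dimension so that the penalty is numerically zero for an acyclic off-diagonal pattern, and the toy matrices are purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

def dag_penalty(D_tilde):
    d = D_tilde.shape[0]                        # (K + 1) in the paper's notation
    A = (1.0 - np.eye(d)) * D_tilde             # mask the diagonal: self-loops are allowed
    return np.trace(expm(A)) - d                # ~0 iff the off-diagonal graph is acyclic

acyclic = np.triu(np.ones((4, 4)), k=1)         # strictly upper-triangular => a DAG
cyclic = acyclic + np.tril(np.ones((4, 4)), k=-1)
print(dag_penalty(acyclic))                     # ~0.0
print(dag_penalty(cyclic) > 0)                  # True
```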
We optimize parameters by minimizing the \(\mathcal{L}\) (Equation 8). The optimizing goal is perfectly reconstructing observed data when \(j\) is less or equal to \(N^{s}_{i}\), and accurately predicting future trajectory when \(j\) is greater than \(N^{s}_{i}\). The mean square error (MSE) is used to measure the difference between the predicted value and true value in continuous variables, and cross-entropy (CE) is used for discrete variables. \(B\) is a mini batch of dataset and \(|B|\) indicates its size.
\[\mathcal{L}=\sum_{i,j,k}^{|B|,N^{s}_{i}+N^{l}_{i},K}\left\{\begin{aligned} & \mathrm{MSE}(v^{k}_{ij},\hat{v}^{k}_{ij}),\ v^{k}_{ij}\ \mathrm{is\ continuous}\\ &\mathrm{CE}(v^{k}_{ij},\hat{v}^{k}_{ij}),\ v^{k}_{ij}\ \mathrm{is\ discrete}\\ & 0,\quad v^{k}_{ij}\ \mathrm{is\ missing}\end{aligned}\right.. \tag{8}\]
Finally, the objective function of this study follows Equation 9.
\[\min_{\theta,\phi}(\mathcal{L}+\beta g(\theta))\quad\mathrm{s.t.}\ h(\theta)=0, \tag{9}\]
where \(\beta\) is the weight of the sparse penalty. The augmented Lagrangian method can optimize parameters by solving a sequence of unconstrained subproblems (Lachapelle et al., 2020; Zheng et al., 2018). In our study, each subproblem is:
\[\mathcal{L}_{final}=\mathcal{L}+\beta g(\theta)+\frac{\rho}{2}h(\theta)^{2}+ \alpha h(\theta), \tag{10}\]
where \(\rho\), \(\alpha\) are penalty weights, respectively. We approximately solve each subproblem via the stochastic gradient descent method (details in Appendix C.1). We adopted the adjoint sensitive method to make the ODE solver differentiable (Chen et al., 2018).
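Schematically, the outer loop over the unconstrained subproblems of Equation 10 could look as follows; the multiplier update \(\alpha\leftarrow\alpha+\rho h(\theta)\), the geometric increase of \(\rho\), and the toy subproblem solver are standard choices assumed here and need not match the exact schedule of Appendix C.1.

```python
def augmented_lagrangian(loss_fn, g_fn, h_fn, params, optimize_subproblem,
                         beta=0.1, rho=1.0, alpha=0.0, max_outer=20, tol=1e-8):
    """Solve min loss + beta*g s.t. h = 0 via a sequence of penalized subproblems (Eq. 10)."""
    h_prev = float("inf")
    for _ in range(max_outer):
        def subproblem(p):
            h = h_fn(p)
            return loss_fn(p) + beta * g_fn(p) + 0.5 * rho * h ** 2 + alpha * h
        params = optimize_subproblem(subproblem, params)   # e.g. a few epochs of SGD / Adam
        h_val = h_fn(params)
        alpha += rho * h_val                               # dual (multiplier) update
        if h_val > 0.25 * h_prev:                          # insufficient progress on h
            rho *= 10.0
        h_prev = h_val
        if h_val < tol:
            break
    return params

# toy usage: minimize x^2 subject to h(x) = |x| = 0, with a crude local grid search
def grid_search(f, x, radius=1.0, n=201):
    return min((x + radius * (2 * i / (n - 1) - 1) for i in range(n)), key=f)

x_star = augmented_lagrangian(lambda x: x ** 2, lambda x: 0.0, lambda x: abs(x),
                              params=3.0, optimize_subproblem=grid_search)
print(round(x_star, 3))                                    # close to 0.0
```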
### Causal Graph Identification
Our CTP model still faces challenges in discovering causal relationships between features. The first challenge is that the value of \(\widetilde{D}_{ij}\) cannot be penalized to exactly zero, but only to a very small number, as we use a numerical optimizer. The model is also fragile because the neural ODE and the matrix exponential operation in the CTP model are sensitive to parameter initialization and input noise (Rodriguez et al., 2022). Even if the CTP model successfully converges and obtains good prediction performance, it may still identify causal edges with the wrong direction. We adopted an iterative algorithm to tackle the above limitations. The algorithm uses \(\mathcal{M}\in\{0,1\}^{(K+1)\times(K+1)}\) to describe the causal relations between features and \(\widetilde{\mathcal{M}}\in\{0,1\}^{(K+1)\times(K+1)}\) to determine whether each causal relationship is certain. We use Equation 11 to initialize \(\mathcal{M}\) and \(\widetilde{\mathcal{M}}\), where \(\mathcal{M}_{ij}=1\) indicates that the \(i\)-th feature is the cause of the \(j\)-th feature, and \(\widetilde{\mathcal{M}}_{ij}=1\) indicates that \(\mathcal{M}_{ij}\) is not certain. In the beginning, most
causal relations are uncertain. Of note, we set \(\mathcal{M}_{ij}=0\) when \(i\leq K\) and \(j=K+1\) because we presume the hidden confounder is not the consequence of observed features.
\[\mathcal{M}_{ij},\widetilde{\mathcal{M}}_{ij}=\begin{cases}1,\;j\leq K\\ 0,\;j=K+1\;\mathrm{and}\;i\leq K\\ 1,\;j=K+1\;\mathrm{and}\;i=K+1\end{cases}. \tag{11}\]
The algorithm first repeatedly optimizes independent CTP models until \(N\) models converge successfully. We presume these models are more likely to identify correct causal relationships. Then, we analyze the neural connectivity matrix of each model. We treat elements in the neural connectivity matrix larger than a threshold as valid (Lachapelle et al., 2020). We use the fraction \(e_{ij}/N\) to determine how many models treat the connection \(i\to j\) as valid, where \(e_{ij}\) is the number of models that regard \(i\to j\) as valid. If the fraction is larger than an accept ratio \(\rho\), we treat the connection as certainly valid and set \(\mathcal{M}_{ij}=1\) and \(\widetilde{\mathcal{M}}_{ij}=0\). If \(e_{ij}/N\) is less than \(1-\rho\), the connection is certainly invalid and we set \(\mathcal{M}_{ij}=0\) and \(\widetilde{\mathcal{M}}_{ij}=0\). We repeat the process until all causal relations become certain (i.e., \(\sum_{ij}(\widetilde{\mathcal{M}}_{ij})=0\)). The pseudocode of the algorithm is described in Appendix C.2.
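The voting step of this identification procedure might be sketched as below; the threshold, the accept ratio, and the toy example are illustrative assumptions, and the surrounding loop that retrains models until every entry of \(\widetilde{\mathcal{M}}\) is zero is omitted.

```python
import numpy as np

def vote_on_edges(D_tildes, M, M_uncertain, threshold=1e-4, accept_ratio=0.8):
    """One voting round over N converged models' connectivity matrices."""
    votes = np.mean([(D > threshold).astype(float) for D in D_tildes], axis=0)  # e_ij / N
    accept = (votes >= accept_ratio) & (M_uncertain == 1)
    reject = (votes <= 1.0 - accept_ratio) & (M_uncertain == 1)
    M, M_uncertain = M.copy(), M_uncertain.copy()
    M[accept] = 1
    M[reject] = 0
    M_uncertain[accept | reject] = 0          # these edges are now certain
    return M, M_uncertain

# usage sketch: 5 hypothetical models that all agree on the chain 0 -> 1 -> 2 (plus self-loops)
true_D = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)
D_tildes = [true_D + 1e-6 * np.random.rand(3, 3) for _ in range(5)]
M0, U0 = np.ones((3, 3), dtype=int), np.ones((3, 3), dtype=int)
M1, U1 = vote_on_edges(D_tildes, M0, U0)      # M1 recovers true_D, U1 becomes all zeros
```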
### Treatment Effect Prediction
We conduct treatment effect prediction under two assumptions. (1) The optimized CTP model \(M^{\star}\) predicts the feature progression trajectory accurately. (2) \(M^{\star}\) summarizes a reliable causal structure between observed features. However, it is challenging to evaluate the effect of a treatment when an unmeasured confounder exists because the parameters in \(M^{\star}\) may not be the only solution to the prediction problem (Miao et al., 2011). There may be infinitely many other parameters that can generate the observed dataset, and these choices of parameters generate different trajectories under a treatment. In this study, we presume all these parameters are located in a connected region.
As we lack the ability to identify which choice of parameters is better, this study estimates the feature trajectories and probable bounds under a treatment, rather than deconfounding (Cao et al., 2023). We adopt an intuitive idea: retraining a series of new independent CTP models that cover the parameter region. Generally, the new group of retrained CTP models share the same structure and initial parameters as \(M^{\star}\). We use \(\mathrm{do}(A^{t_{a}}=a)\) to denote a treatment, which means fixing the value of \(\bar{v}^{A}\) to \(a\) from a time point \(t_{a}\), regardless of its original value. Given patient data \(s_{i}\) and a new model \(M^{l}\), we can predict trajectories of other features \({}^{l}\hat{v}^{k}_{i}(t)\) under the treatment \(\mathrm{do}(A^{t_{a}}=a)\), where \(l\) is the index of a new CTP model. Of note, \(M^{l}\) infers the value of unmeasured confounders so that we can evaluate the effect of confounders. We record the patient characteristics under a treatment, \({}^{l}\hat{v}^{k}_{i}(t_{o})\), at a randomly selected time point \(t_{o}\) after treatment. To make the trajectories of the \(M^{l}\) as dissimilar as possible, we maximize the pair-wise distance between the recorded \({}^{l}\hat{v}_{i}(t_{o})\) (Equation 12). In this study, we applied the simplest p-norm distance for computational efficiency. However, a more sophisticated loss function such as the Wasserstein distance is also applicable (Balazadeh Moresht et al., 2022). The optimization problem is a min-max problem that minimizes the prediction loss \(\mathcal{L}_{p}\) (Equation 13) and maximizes the treatment loss \(\mathcal{L}_{t}\). We use two optimizers to update parameters alternately (details in Appendix C.3), which is widely used in similar studies (Kostikov et al., 2020). Once the series of new CTP models are retrained, we treat the contours (i.e., \(\max_{l}{}^{l}\hat{v}_{i}(t)\) and \(\min_{l}{}^{l}\hat{v}_{i}(t)\)) as trajectory bounds. We use the expectation of the trajectories of the retrained models as the trajectories under the treatment.
\[\mathcal{L}_{t}=\sum_{i=1}^{|B|}\sum_{j=1}^{L-1}\sum_{k=j+1}^{L}\mathrm{Distance }({}^{j}\hat{v}_{i}(t_{o}),{}^{k}\hat{v}_{i}(t_{o})), \tag{12}\]
\[\mathcal{L}_{p}=\sum_{l,i,k}^{L,|B|,K}\begin{cases}\mathrm{MSE}(v^{k}_{i}(t_{ a}),{}^{l}\hat{v}^{k}_{i}(t_{a})),\;v^{k}_{i}(t_{a})\;\mathrm{is\;continuous}\\ \mathrm{CE}(v^{k}_{i}(t_{a}),{}^{l}\hat{v}^{k}_{i}(t_{a})),\;v^{k}_{i}(t_{a}) \;\mathrm{is\;discrete}\end{cases}. \tag{13}\]
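As an illustration, the diversity term of Equation 12 can be written as follows (PyTorch, using the plain p-norm distance mentioned above); the alternating minimization of \(\mathcal{L}_{p}\) with a second optimizer is omitted and all identifiers are ours.

```python
import torch

def diversity_loss(states, p=2):
    """states: (L, batch, K+1) tensor of {}^l v_i(t_o) from the L retrained models (Eq. 12)."""
    L = states.shape[0]
    total = states.new_zeros(())
    for j in range(L - 1):
        for k in range(j + 1, L):
            total = total + torch.norm(states[j] - states[k], p=p, dim=-1).sum()
    return total

states = torch.randn(4, 8, 5, requires_grad=True)   # 4 models, 8 patients, 5 features
(-diversity_loss(states)).backward()                # maximizing L_t = minimizing its negative
```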
## 3 Experiment
### Experiment Settings
**Dataset.** We used one real medical dataset and four simulated data to evaluate the performance of CTP, whose statistics are in Table 1. _Real Dataset_: ADNI dataset consists of a comprehensive collection of multi-modal data from a large cohort of subjects, including healthy controls, individuals with mild cognitive impairment, and patients with Alzheimer's disease (Petersen et al., 2010). The preprocessed ADNI dataset reserved 88 features (23 continuous and 65 discrete) of patients whose available visit record is equal to or greater than three. _Simulated Dataset_: Hao dataset records the progression of four features (amyloid beta (\(A_{\beta}\)), phosphorylated tau protein (\(\tau_{p}\)), neurodegeneration (\(N\)), and cognitive decline score (\(C\)) of late mild cognitive impairment patients (Hao et al., 2022). Zheng Dataset: it records the progression trajectories of four features (i.e., \(A_{\beta}\), tau protein \(\tau\), \(N\), and \(C\)) of Alzheimer's disease patients (Zheng et al., 2022). MM-25 and MM-50 datasets: we also generated two Michaelis-Menten (MM) kinetics datasets to evaluate the model in a high-dimensional scenario (Zhang et al., 2022). The MM-25 contains 20 features and the MM-50 contains 45 features. The Zheng dataset is a confounder-free dataset and the other three datasets contain unobservable confounders. All datasets were normalized before training. More detailed data generation and the preprocessing process are described in Appendix B.
**Baselines**. We used two commonly seen models and three recently proposed models as baselines. _LODE_: the Linear ODE baseline uses the same structure compared to the CTP, while it uses a linear function to model the derivatives of features. The LODE used the ridge loss to remove spurious connections (Brunton et al., 2016). _NODE_. Neural ODE also uses the same structure compared to the CTP, while it does not use ridge loss and DAG loss to optimize parameters (Chen et al., 2018). _NGM_: NGM uses the same structure compared to our CTP, while it only adds group ridge loss to the first layer of neural network to extract causality (Bellot et al., 2022). _TE-CDE_: TE-CDE adopts controlled differential equations to evaluate patient trajectory at any time point and uses an adversarial training approach to adjust unmeasured confounding (Seedat et al., 2022). _CF-ODE_: CF-ODE adopts the Bayesian framework to predict the impact of treatment continuously over time using NODE equipped with uncertainty estimates (De Brouwer et al., 2022).
**Treatment Settings**. We only conducted treatment effect analysis on four simulated datasets because it is impossible to access the counterfactual result of the ADNI dataset. We run 16 independent models for each dataset. _Hao Dataset_: we set the neurodegenerative value (i.e., \(n\)) to zero at time point 52. Then, we observed the feature progression trajectories under the treatment from 52 to 60. _Zheng Dataset_: we set the neurodegenerative value (i.e., \(n\)) to zero at the DPS time zero. Then, we observed the feature progression trajectories under the treatment from 0 to 20. _MM-25 and MM-50 Dataset_. We set the value of the No. 10 node to one at the time point one. We recorded the feature progression trajectories under the treatment from 1 to 10.
**Metrics**. _Trajectory Prediction:_ We used MSE to evaluate the prediction performance on continuous features and the macro average area under the receiver operating characteristic curve (AUC) to evaluate the prediction performance on discrete features. _Causal Discovery:_ We investigated the causal discovery performance of the CTP and baselines by analyzing the neural connectivity matrix \(\widetilde{D}\). We regard a causal edge as nonexistent if its corresponding element is less than 0.0001, and vice versa. Then, we use accuracy, F1, and AUC to evaluate causal discovery performance. _Treatment Effect Prediction_: We used the MSE between the true value and estimated trajectories to evaluate the treatment effect prediction performance.
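A minimal sketch of the causal-discovery scoring step (assuming scikit-learn and a flattened edge list; the toy matrices are illustrative) is given below.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def causal_discovery_metrics(D_tilde, D_true, threshold=1e-4):
    """Threshold the connectivity matrix at 1e-4 and score it against the true adjacency."""
    y_true = (D_true != 0).astype(int).ravel()
    y_score = D_tilde.ravel()
    y_pred = (y_score >= threshold).astype(int)
    return (accuracy_score(y_true, y_pred),
            f1_score(y_true, y_pred),
            roc_auc_score(y_true, y_score))

D_true = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]])
D_hat = np.array([[0.9, 0.3, 1e-6], [1e-5, 0.8, 0.4], [2e-5, 1e-6, 0.7]])
acc, f1, auc = causal_discovery_metrics(D_hat, D_true)   # all equal to 1.0 in this toy case
```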
**Extended Experiments**. We released extended experiment results in Appendices.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & Hao & Zheng & MM-25 & MM-50 & ADNI \\ \hline \# of Samples & 1,024 & 1,024 & 1,024 & 1,024 & 275 \\ Avg. Visit & 15 & 15 & 15 & 15 & 3.7 \\ \# Features & 4 & 4 & 20 & 45 & 88 \\ Avg. Interval & 1.00 & 2.00 & 0.25 & 0.25 & 1.65 \\ All Continuous & Yes & Yes & Yes & Yes & No \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset Statistics
### Trajectory Prediction
We investigated the disease progression trajectory prediction performance of the CTP model and baselines. Table 2 indicates that our CTP models obtained comparable performance to baselines in the ADNI dataset. The CTP obtained first place in reconstruction MSE (0.20), reconstruction AUC (0.73), and prediction AUC (0.55), and second place in prediction MSE (0.29), which is worse than the LODE model (0.26). Meanwhile, the experiment results on the four simulated datasets showed that our CTP model also obtained better or comparable performance compared to baselines, with details given in Appendix E.1. Although the goal of this study is not to propose a more accurate predictive model, these results demonstrate that our CTP model is able to obtain state-of-the-art (SOTA) or nearly SOTA performance in prognosis prediction tasks.
### Causal Discovery
We only investigated causal discovery performance on the four simulated datasets because we cannot access the true causal graph of the ADNI dataset (Table 3). It is not surprising that the NODE model cannot extract causal relations from features, as its AUC is less than 0.57 in all four datasets. The TE-CDE and CF-ODE also obtained unsatisfactory performance, as they are not designed to extract causal relations between features; they require prior causal information to estimate the treatment effect. The LODE and NGM achieve significantly better performance, benefiting from the ridge loss. The NGM outperforms the LODE, which may be attributed to the usage of neural networks. Furthermore, we find that our CTP model can identify causal relations between features better than all baselines, and its performance can be further improved by utilizing the causal identification algorithm. For example, the original CTP model only obtained 0.56 causal discovery accuracy in the Hao dataset. However, its causal discovery performance can be significantly improved if we apply the causal identification algorithm. The CTP\({}^{*}\) model obtained 0.44, 0.27, 0.03, and 0.04 performance gains in accuracy on the four datasets and 0.38, 0.26, 0.05, and 0.03 performance gains in F1.
\begin{table}
\begin{tabular}{l c c c c c} \hline
**Model** & \multicolumn{3}{c}{**Hao Dataset**} & \multicolumn{3}{c}{**Zheng Dataset**} \\ & ACC. & F1 & AUC & ACC. & F1 & AUC \\ \hline LODE & 0.53\(\pm\)0.01 & 0.60\(\pm\)0.01 & 0.67\(\pm\)0.01 & 0.58\(\pm\)0.01 & 0.64\(\pm\)0.02 & 0.64\(\pm\)0.03 \\ NODE & 0.50\(\pm\)0.02 & 0.57\(\pm\)0.03 & 0.54\(\pm\)0.10 & 0.49\(\pm\)0.01 & 0.62\(\pm\)0.02 & 0.53\(\pm\)0.01 \\ NGM & 0.54\(\pm\)0.02 & 0.60\(\pm\)0.02 & 0.71\(\pm\)0.05 & 0.59\(\pm\)0.02 & 0.65\(\pm\)0.02 & 0.61\(\pm\)0.01 \\ TE-CDE & 0.53\(\pm\)0.02 & 0.59\(\pm\)0.03 & 0.57\(\pm\)0.10 & 0.56\(\pm\)0.03 & 0.56\(\pm\)0.03 & 0.54\(\pm\)0.02 \\ CF-ODE & 0.54\(\pm\)0.02 & 0.58\(\pm\)0.03 & 0.55\(\pm\)0.10 & 0.57\(\pm\)0.01 & 0.65\(\pm\)0.02 & 0.55\(\pm\)0.01 \\ CTP & **0.56\(\pm\)0.01** & **0.62\(\pm\)0.02** & **0.81\(\pm\)0.03** & **0.61\(\pm\)0.01** & **0.67\(\pm\)0.01** & **0.66\(\pm\)0.00** \\ CTP\({}^{*}\) & 1.00 & 1.00 & / & 0.88 & 0.93 & / \\ \hline \multicolumn{5}{c}{**MM-25 Dataset**} & \multicolumn{3}{c}{**MM-50 Dataset**} \\ & ACC. & F1 & AUC & ACC. & F1 & AUC \\ \hline LODE & 0.75\(\pm\)0.01 & 0.76\(\pm\)0.01 & 0.82\(\pm\)0.01 & 0.87\(\pm\)0.01 & 0.89\(\pm\)0.01 \\ NODE & 0.51\(\pm\)0.02 & 0.67\(\pm\)0.03 & 0.53\(\pm\)0.10 & 0.50\(\pm\)0.02 & 0.66\(\pm\)0.01 & 0.57\(\pm\)0.02 \\ NGM & 0.80\(\pm\)0.02 & 0.81\(\pm\)0.02 & 0.67\(\pm\)0.05 & 0.87\(\pm\)0.02 & 0.87\(\pm\)0.01 & 0.66\(\pm\)0.01 \\ TE-CDE & 0.80\(\pm\)0.02 & 0.81\(\pm\)0.02 & 0.56\(\pm\)0.05 & 0.87\(\pm\)0.01 & 0.87\(\pm\)0.01 & 0.58\(\pm\)0.02 \\ CF-ODE & 0.53\(\pm\)0.01 & 0.56\(\pm\)0.02 & 0.53\(\pm\)0.03 & 0.87\(\pm\)0.03 & 0.87\(\pm\)0.02 & 0.58\(\pm\)0.02 \\ CTP & **0.82\(\pm\)0.01** & **0.83\(\pm\)0.02** & **0.88\(\pm\)0.03** & **0.89\(\pm\)0.01** & **0.89\(\pm\)0.01** & **0.90\(\pm\)0.01** \\ CTP\({}^{*}\) & 0.85 & 0.88 & / & 0.93 & 0.92 & / \\ \hline \end{tabular}
\end{table}
Table 3: Causal Discovery Performance. “CTP” and “CTP\({}^{*}\)” indicate the causal discovery performance of a CTP model without/with using the causal identification algorithm.
### Treatment Effect Prediction
Table 4 describes the performance of the CTP model and baselines in predicting progression trajectories under a given treatment in the four datasets. The CTP (without utilizing the causal identification algorithm) obtained significantly better performance than all baselines. For example, the full MSE of NODE in the Hao dataset is 1.08, about six times more than the CTP model (0.16), and the full MSE of NODE is 1.39, about four times more than the CTP model (0.25). The TE-CDE and CF-ODE also obtained poor performance, which may be attributed to the fact that they focus only on deconfounding when prior causal information is available. The NGM model obtained the best performance among baselines, while the CTP model obtained better performance than the NGM. The performance of our CTP model is significantly better in the MM-25 and MM-50 datasets. These experiment results demonstrate that our CTP model has good scalability. We also find that the CTP\({}^{\star}\) model obtained better performance by utilizing the causal identification algorithm, though the improvement is not significant.
\begin{table}
\begin{tabular}{l c c c c c c} \hline
**Model** & \multicolumn{3}{c}{**Hao Dataset**} & \multicolumn{3}{c}{**Zheng Dataset**} \\ & Full & Near & Far & Full & Near & Far \\ \hline LODE & 0.77\(\pm\)0.08 & 0.33\(\pm\)0.05 & 1.37\(\pm\)0.14 & 0.79\(\pm\)0.05 & 0.67\(\pm\)0.02 & 1.05\(\pm\)0.10 \\ NODE & 1.08\(\pm\)0.13 & 0.54\(\pm\)0.08 & 1.85\(\pm\)0.19 & 2.32\(\pm\)0.01 & 1.39\(\pm\)0.00 & 4.56\(\pm\)0.02 \\ NGM & 0.25\(\pm\)0.01 & 0.25\(\pm\)0.01 & 0.26\(\pm\)0.01 & 0.78\(\pm\)0.04 & 0.66\(\pm\)0.03 & 1.00\(\pm\)0.03 \\ TE-CDE & 0.32\(\pm\)0.02 & 0.32\(\pm\)0.01 & 0.31\(\pm\)0.01 & 3.88\(\pm\)0.01 & 1.75\(\pm\)0.01 & 9.21\(\pm\)0.01 \\ CF-ODE & 0.57\(\pm\)0.03 & 0.38\(\pm\)0.02 & 0.84\(\pm\)0.05 & 0.36\(\pm\)0.03 & **0.24\(\pm\)0.02** & 0.55\(\pm\)0.06 \\ CTP & **0.16\(\pm\)0.01** & **0.16\(\pm\)0.00** & **0.16\(\pm\)0.01** & **0.32\(\pm\)0.03** & 0.26\(\pm\)0.02 & **0.47\(\pm\)0.03** \\ CTP\({}^{\star}\) & 0.13\(\pm\)0.01 & 0.14\(\pm\)0.00 & 0.13\(\pm\)0.01 & 0.29\(\pm\)0.03 & 0.25\(\pm\)0.02 & 0.46\(\pm\)0.03 \\ \hline & \multicolumn{3}{c}{**MM-25 Dataset**} & \multicolumn{3}{c}{**MM-50 Dataset**} \\ & Full & Near & Far & Full & Near & Far \\ \hline LODE & 1.13\(\pm\)0.05 & 1.11\(\pm\)0.07 & 1.22\(\pm\)0.04 & 1.51\(\pm\)0.01 & 1.52\(\pm\)0.00 & 1.51\(\pm\)0.01 \\ NODE & 1.25\(\pm\)0.04 & 1.23\(\pm\)0.05 & 1.60\(\pm\)0.03 & 1.48\(\pm\)0.03 & 1.43\(\pm\)0.03 & 1.67\(\pm\)0.02 \\ NGM & 0.89\(\pm\)0.03 & 0.91\(\pm\)0.04 & **0.79\(\pm\)0.02** & 2.41\(\pm\)0.08 & 2.24\(\pm\)0.05 & 3.12\(\pm\)0.13 \\ TE-CDE & 1.25\(\pm\)0.09 & 1.19\(\pm\)0.11 & 1.52\(\pm\)0.15 & 1.50\(\pm\)0.04 & 1.45\(\pm\)0.05 & 1.70\(\pm\)0.08 \\ CF-ODE & 1.29\(\pm\)0.04 & 1.22\(\pm\)0.03 & 1.59\(\pm\)0.05 & 1.64\(\pm\)0.09 & 1.58\(\pm\)0.07 & 1.88\(\pm\)0.13 \\ CTP & **0.78\(\pm\)0.03** & **0.74\(\pm\)0.05** & 0.88\(\pm\)0.07 & **1.20\(\pm\)0.06** & **1.15\(\pm\)0.08** & **1.43\(\pm\)0.05** \\ CTP\({}^{\star}\) & 0.75\(\pm\)0.02 & 0.73\(\pm\)0.04 & 0.80\(\pm\)0.05 & 1.17\(\pm\)0.06 & 1.11\(\pm\)0.05 & 1.49\(\pm\)0.06 \\ \hline \end{tabular}
\end{table}
Table 4: Treatment Effect Prediction. The “Full” column indicates the overall difference between the predicted trajectories and the oracle trajectories of features. The “Near” column indicates the differences over the trajectories before treatment and the first half of the observations after the treatment. The “Far” column indicates the differences over the second half of the observations after the treatment.
Figure 2: Treatment Effect Analysis (average trajectories of retrained CTP models and baselines).
We qualitatively analyzed why our CTP model obtained better performance than baselines in Figure 2. Due to the space limit, we only draw trajectories of two randomly selected samples from the Hao dataset and Zheng dataset. The figure shows that some trajectories generated by baselines changed mistakenly after the treatment. For example, the predicted trajectory of \(a\) in the Hao dataset of LODE and NODE decreases rapidly after we apply the treatment, while the treatment does not affect the trajectory of \(a\) because \(a\) is an ancestor feature of \(n\). Similar deviations also occur in other features. As these baselines utilized correlational relations, the treatment action brings unexpected influence to the predicted features. Although the LODE and the NGM model applied ridge loss to remove spurious connections, experimental results demonstrated they could not recover causal relations between features well. The incorporation of the score-based DAG loss helps the model predict treatment effects better when there is no feedback between features.
We plot the bound of trajectories of a randomly selected sample to evaluate the bound qualitatively. The sample is from the Hao dataset as it contains an unobserved confounder (Figure 3). We find that the bounds generated by the group of retrained CTP models include the true trajectories and are not very loose, indicating the bound may be helpful in clinical decision-making. We also describe the prediction performance of the retrained CTP models (Table 5) to investigate whether the retraining process deteriorates the trajectory prediction performance of models. We find that the retrained group of CTP models, though they have different parameters, obtained the same or better performance in reconstructing input in the four datasets. They also obtained comparable performance to the original CTP model in prediction tasks. We may summarize that the retraining process not only helps us find the bound of the treatment effect but also does not affect the prediction performance of the CTP model. The finding may be indirect evidence that there may be multiple models that can generate the observed dataset when an unobserved confounder exists.
## 4 Conclusion
We proposed a causal interpretable model that combines trajectory prediction and causal graph discovery to predict feature progression trajectories. The model ensures that each feature is predicted only by itself and its causal ancestors and tackles the issue of unmeasured confounders by identifying correlated errors and constraining the possible effect space of confounders using observed data. Experimental results demonstrate that the CTP model performs comparably to or better than baselines in trajectory prediction. It obtained significantly better performance in predicting feature trajectories under a treatment. This model offers a novel approach to support clinical decision-making. In future work, we will evaluate the causal discovery performance and treatment effect prediction performance of the CTP model in real medical datasets.
Figure 3: Trajectories Bounds under a Treatment of Hao Dataset
\begin{table}
\begin{tabular}{l c c c c} \hline
**Dataset** & \multicolumn{2}{c}{**Reconstruct**} & \multicolumn{2}{c}{**Prediction**} \\ & Origin & Retrained & Origin & Retrained \\ \hline Hao & **0.01** & **0.01** & **0.02** & 0.03 \\ Zheng & 0.09 & **0.07** & **0.09** & **0.09** \\ MM-25 & 0.04 & **0.03** & 0.24 & **0.23** \\ MM-50 & 0.04 & **0.04** & **0.28** & 0.29 \\ \hline \end{tabular}
\end{table}
Table 5: Prediction Performance of Retrained Models (Avg. MSE)
|
2303.14370
|
Type-II antiferromagnetic ordering in double perovskite oxide
Sr$_2$NiWO$_6$
|
Magnetic double perovskite compounds provide a fertile playground to explore
interesting electronic and magnetic properties. By complementary macroscopic
characterizations, neutron powder diffraction measurements and first-principles
calculations, we have performed comprehensive studies on the magnetic ordering
in the double perovskite compound Sr$_2$NiWO$_6$. It is found by neutron
diffraction to order magnetically in a collinear type-II antiferromagnetic
structure in a tetragonal lattice with $k$ = (0.5, 0, 0.5) below $T\rm_N$ = 56
K. In the ground state, the ordered moment of the spin-1 Ni$^{2+}$ ions is
determined to be 1.9(2) $\mu\rm_{B}$, indicating a significant quenching of the
orbital moment. The Ni$^{2+}$ moments in Sr$_2$NiWO$_6$ are revealed to cant
off the $c$ axis by 29.2$^{\circ}$, which is well supported by the
first-principles magnetic anisotropy energy calculations. Furthermore, the
in-plane and out-of-plane next-nearest-neighbor superexchange couplings
($J\rm_2$ and $J\rm_{2c}$) are found to play a dominant role in the spin
Hamiltonian of Sr$_2$NiWO$_6$, which accounts for the stabilization of the
type-II AFM structure as its magnetic ground state.
|
Cheng Su, Xu-Tao Zeng, Kaitong Sun, Denis Sheptyakov, Ziyu Chen, Xian-Lei Sheng, Haifeng Li, Wentao Jin
|
2023-03-25T06:11:46Z
|
http://arxiv.org/abs/2303.14370v2
|
# Collinear antiferromagnetic ordering in double perovskite oxide Sr\({}_{2}\)NiWO\({}_{6}\)
###### Abstract
Magnetic double perovskite compounds provide a fertile playground to explore interesting electronic and magnetic properties. By complementary macroscopic characterizations, neutron powder diffraction measurements and first-principles calculations, we have performed comprehensive studies on the magnetic ordering in the double perovskite compound Sr\({}_{2}\)NiWO\({}_{6}\). It is found by neutron diffraction to order magnetically in a collinear AFM structure in a tetragonal lattice with \(k\) = (0.5, 0, 0.5) below \(T_{\rm N}\) = 56 K. In the ground state, the ordered moment of the spin-1 Ni\({}^{2+}\) ions is determined to be 1.9(2) \(\mu_{\rm B}\), indicating a significant quenching of the orbital moment. The Ni\({}^{2+}\) moments in Sr\({}_{2}\)NiWO\({}_{6}\) are revealed to cant off the \(c\) axis by 29.2\({}^{\circ}\) with an easy-axis type magnetic anisotropy, which is well supported by the first-principles calculations. Furthermore, the in-plane and out-of-plane next-nearest-neighbor superexchange couplings (\(J_{2}\) and \(J_{2\circ}\)) are found to play a dominant role in the spin Hamiltonian of Sr\({}_{2}\)NiWO\({}_{6}\), which accounts for the stabilization of the collinear AFM-II structure as its magnetic ground state.
Footnote †: These authors contributed equally to this work.
## I Introduction
Perovskite oxides \(AB\)O\({}_{3}\) have drawn great attention and interest during the past few decades [1; 2], owing to their highly tunable physical properties and promising applications in electronic or spintronic devices [3; 4], fuel cells [5], solar cells [6] and so on, arising from their highly flexible chemical and structural properties. As a variant of the perovskite structure, the \(B\)-site ordered double-perovskite (DP) oxides with the general formula of \(A_{2}BB^{\prime}\)O\({}_{6}\) (\(A\) being a divalent or trivalent metal, \(B\) and \(B^{\prime}\) being transition-metal ions alternately arranged in a rock-salt structure and surrounded by corner-sharing oxygen octahedra) have been the focus of intensive studies in recent years [7; 8], because of various intriguing electronic and magnetic properties that may be realized in this large family. The most well-known example is Sr\({}_{2}\)FeMoO\({}_{6}\) with a high Curie temperature of \(T_{\rm C}\sim\) 420 K and a large room-temperature magnetoresistance, as a result of the strong 3\(d\)-4\(d\) hybridization between the Fe\({}^{3+}\) and Mo\({}^{5+}\) ions [9; 10]. Such a hybridization between the \(B\) and \(B^{\prime}\) ions is also evidenced in 3\(d\)-5\(d\) DP rhenates and iridates [11; 12; 13; 14], in which the strong spin-orbit coupling on 5\(d\) ions may give rise to anisotropic and bond-dependent magnetic interactions.
For magnetic \(B\)-site ions located in the center of the \(BO_{6}\) octahedra, although the direct exchange interaction is negligible due to the large distance between them, the nearest-neighbor (NN) and next-nearest-neighbor (NNN) superexchange interactions can take place over a 90\({}^{\circ}\)\(B\)-O-(\(B^{\prime}\))-O-\(B\) path and a 180\({}^{\circ}\)\(B\)-O-\(B^{\prime}\)-O-\(B\) path, respectively. Typically, the \(B\)-site ordered DP with a single magnetic sublattice shows a low-temperature antiferromagnetic (AFM) ordering. Depending on the identities of \(B\) and \(B^{\prime}\) ions and the relative strength of the NN (\(J_{1}\)) and NNN (\(J_{2}\)) interactions, different types of AFM structures have been observed [15; 16; 17]. As two distinct examples, the magnetic ground state of Sr\({}_{2}\)CuMoO\({}_{6}\) is a Neel-type AFM ordering as expected for \(J_{2}<\)\(J_{1}\)[18], while Sr\({}_{2}\)CuWO\({}_{6}\) exhibits a collinear-type AFM ordering as expected for \(J_{2}>\)\(J_{1}\)[19]. Accordingly, they show a quasi-two-dimensional (2D) and three-dimensional (3D) nature, respectively, associated with the magnetism of the \(S\) = 1/2 Cu\({}^{2+}\) ions.
Sr\({}_{2}\)NiWO\({}_{6}\) (SNWO) is a \(B\)-site ordered DP oxide, in which the magnetic structure of the \(S\) = 1 Ni\({}^{2+}\) moments has remained unknown so far. The divalent magnetic Ni\({}^{2+}\) ions and hexavalent diamagnetic W\({}^{6+}\) ions occupy the \(B\) and \(B^{\prime}\) sites, respectively, forming interpenetrating face-centered cubic (FCC) lattices. It crystallizes in a tetragonal structure (space group \(I4/m\)) at ambient conditions [20; 21; 22], and undergoes a tetragonal-to-cubic structural phase transition above 300 \({}^{\circ}\)C [23; 24]. Interestingly, SNWO was found to order antiferromagnetically at a rather high Neel temperature of \(T_{\rm N}\) = 54-59 K [20; 21; 22; 25], despite a very long superexchange path (Ni-O-W-O-Ni) of \(\sim\) 8 A. Although an _ab initio_ calculation predicts the type-II AFM ordering (AFM-II structure) as its magnetic ground state [26], direct evidence from magnetic neutron diffraction is lacking, which impedes a thorough understanding of the spin interactions and the origin of the AFM ordering in SNWO.
In this work, we have conducted comprehensive studies
on the magnetic ordering in SNWO, combining macroscopic and microscopic experimental probes and first-principles calculations. Using neutron diffraction as the microscopic probe, we have provided solid evidence that SNWO orders magnetically in a collinear AFM structure with \(k\) = (0.5, 0, 0.5) below \(T_{\rm N}\) = 56 K, in good agreement with the previously predicted AFM-II structure. At the base temperature, the Ni\({}^{2+}\) moments in SNWO are found to cant off the \(c\) axis by 29.2\({}^{\circ}\) with an easy-axis type magnetic anisotropy, well supported by the calculations. The strengths of the superexchange couplings are also estimated to understand the origin of the experimentally determined collinear AFM-II structure.
## II Methods
Polycrystalline samples of Sr\({}_{2}\)NiWO\({}_{6}\) were synthesized by the standard solid-state reaction method as reported in Ref. [22]. NiWO\({}_{4}\), the precursor material, was first synthesized by sintering a stoichiometric mixture of WO\({}_{3}\) (99.99%) and NiO (99.9%) in air, at 1000 \({}^{\circ}\)C for 10 h. Then the synthesized NiWO\({}_{4}\) powder was mixed with SrCO\({}_{3}\) (99.95%) in a stoichiometric ratio of 1:2, and sintered in air at 1400 \({}^{\circ}\)C for 8 h. The powder mixture in each procedure mentioned above was thoroughly ground and pelletized before sintering. The phase purity was checked by a room-temperature x-ray diffraction (XRD) measurement on a Bruker D8 ADVANCE diffractometer with Cu-K\(\alpha\) radiation (\(\lambda\) = 1.5406 A).
The neutron powder diffraction (NPD) experiments were carried out on the High Resolution Powder diffractometer for Thermal neutrons (HRPT) [27] at the Swiss Neutron Spallation Source (SINQ), at the Paul Scherrer Institute in Villigen, Switzerland. Powder sample of SNWO with a total mass of 5.5 grams was loaded into a 8 mm-diameter vanadium can. The diffraction patterns were collected using wavelengths \(\lambda\) = 1.1545A, 1.494A and 2.45A in the temperature range from 1.5 to 300 K. The shorter wavelengths cover a wider \(Q\)-range suitable for refining the structural parameters, while the longer wavelength provides a higher resolution in the lower \(Q\)-range for a more reliable determination of magnetic structure. Refinements of the nuclear and magnetic structures were conducted using the FULLPROF program suite [28].
First-principles calculations were performed on the basis of density-functional theory (DFT) using the generalized gradient approximation (GGA) in the form proposed by Perdew _et al._[29], as implemented in the Vienna _ab initio_ Simulation Package (VASP) [30; 31]. The energy cutoff of the plane wave was set to 500 eV. The energy convergence criterion in the self-consistent calculations was set to 10\({}^{-6}\) eV. A \(\Gamma\)-centered Monkhorst-Pack _k_-point mesh with a resolution of 2\(\pi\times\)0.03 A\({}^{-1}\) was used for the first Brillouin zone sampling. To account for the correlation effects for Ni, we adopted the GGA _+ U_ method [32] with the value of \(U\) = 5 eV, which is commonly used in studying nickel compounds [33].
## III Results and Discussions
### Structural characterizations
Figure 1(a) and 1(b) show the room-temperature XRD and NPD patterns of the synthesized polycrystalline SNWO sample, respectively, which can be well fitted using the reported tetragonal phase of SNWO [21; 22; 23] with satisfactory \(R\) factors. Tiny amounts of impurity phases were detected in x-ray (SrWO\({}_{4}\), 0.9% wt) and neutron (NiO, 0.5% wt) powder diffraction data and included into the corresponding Rietveld refinements. The contribution of SrWO\({}_{4}\) to the neutron powder data was however so weak that it could be omitted in the refinements.
Due to the relatively large coherent neutron scattering length of oxygen atoms, the room-temperature structural parameters of SNWO including all atomic coordinates can be precisely determined by Rietveld refinements to the NPD pattern shown in Fig. 1(b) and listed in Table 1. According to our refinements, the atomic coordinates are consistent with those reported [21; 23; 34] and no obvious site mixing between Ni and W was observed. Thus, the actual structure of the synthesized SNWO is indeed a rock-salt type _B_-site ordered DP (see Fig. 1(c)).
As illustrated in Fig. 1(d), the adjacent NiO\({}_{6}\) and WO\({}_{6}\) octahedra display out-of-phase rotations around the \(c\) axis (denoted as an \(a^{0}a^{0}c^{-}\) rotation pattern in the widely adopted Glazer notation for describing perovskite structures [35]), yielding an in-plane Ni-O(1)-W bond angle of 167.55\({}^{\circ}\) and a tetragonal symmetry at 300 K, in contrast to the undistorted 180\({}^{\circ}\) Ni-O(1)-W bond angle associated with the cubic symmetry at very high temperatures [23; 24].
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
\multicolumn{5}{c}{\(T\) = 300 K} \\
Atom (site) & \(x\) & \(y\) & \(z\) & \(B_{\rm iso}\) (\(\AA^{2}\)) \\
\hline
O(1) (8h) & 0.2843(3) & 0.2298(3) & 0 & 0.79(2) \\
O(2) (4e) & 0 & 0 & 0.2566(4) & 0.87(2) \\
W (2b) & 0 & 0 & 0.5 & 0.57(4) \\
Sr (4d) & 0 & 0.5 & 0.25 & 0.60(1) \\
Ni (2a) & 0 & 0 & 0 & 0.20(2) \\
\hline
\multicolumn{5}{c}{\(T\) = 1.5 K} \\
Atom (site) & \(x\) & \(y\) & \(z\) & \(B_{\rm iso}\) (\(\AA^{2}\)) \\
\hline
O(1) (8h) & 0.2912(3) & 0.2246(3) & 0 & 0.38(2) \\
O(2) (4e) & 0 & 0 & 0.2576(4) & 0.33(3) \\
W (2b) & 0 & 0 & 0.5 & 0.42(6) \\
Sr (4d) & 0 & 0.5 & 0.25 & 0.17(2) \\
Ni (2a) & 0 & 0 & 0 & 0.05(3) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Refinement results of the atomic coordinates and thermal factors of SNWO at 300 K and 1.5 K, respectively (space group \(I4/m\), \(Z\) = 2).
Such a lattice distortion has been widely reported in various DP compounds [7]; it releases the stress caused by the mismatch of the ionic radii on the \(B\) and \(B^{\prime}\) sites and thus gives the DP structure a great tolerance to accommodate most elements of the periodic table.
### Macroscopic magnetic properties
The temperature dependence of the dc magnetic susceptibility of polycrystalline SNWO measured in an applied magnetic field of 1 T is shown in Fig. 2(a), which clearly indicates a typical AFM transition at \(T_{\rm N}\) = 56(1) K, consistent with the previously reported Néel temperatures of 54-59 K [20; 21; 22; 25]. By performing a Curie-Weiss fit to the inverse magnetic susceptibility (1/\(\chi\)) in the paramagnetic state from 200 to 300 K, an effective magnetic moment of \(\mu_{\rm eff}\) = 3.127(2) \(\mu_{\rm B}\) for the Ni\({}^{2+}\) ions and a Curie-Weiss temperature of \(\theta\) = \(-\)92.8(3) K are obtained. The \(\mu_{\rm eff}\) value is close to the spin-only value of 2.83 \(\mu_{\rm B}\) for spin-1 Ni\({}^{2+}\) ions, indicating a significant quenching of their orbital angular momentum in the octahedral coordination formed by the surrounding oxygen ions.
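A minimal Python sketch of this Curie-Weiss analysis is given below. It is not the code used for the data in Fig. 2(a): synthetic susceptibility values generated from the reported \(\mu_{\rm eff}\) and \(\theta\) stand in for the measurement, and the relation \(\mu_{\rm eff}=\sqrt{8C}\,\mu_{\rm B}\) assumes a molar susceptibility in CGS units.

```python
import numpy as np
from scipy.optimize import curve_fit

def inv_chi_model(T, C, theta):
    """Inverse Curie-Weiss susceptibility: 1/chi = (T - theta) / C."""
    return (T - theta) / C

rng = np.random.default_rng(0)
T = np.linspace(200.0, 300.0, 51)                  # K
C_true, theta_true = 3.127**2 / 8.0, -92.8         # emu K/mol, K (reported values)
chi = C_true / (T - theta_true) * (1.0 + 0.002 * rng.standard_normal(T.size))

popt, _ = curve_fit(inv_chi_model, T, 1.0 / chi, p0=(1.0, -50.0))
C_fit, theta_fit = popt
mu_eff = np.sqrt(8.0 * C_fit)                      # effective moment in mu_B
print(f"theta = {theta_fit:.1f} K, mu_eff = {mu_eff:.3f} mu_B")
```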
The large negative value of \(\theta\), close to \(-\)10\({}^{2}\) K, suggests a very strong dominant AFM interaction between the Ni\({}^{2+}\) moments in SNWO, which is further supported by the isothermal magnetization curve measured at 4 K, shown in the inset of Fig. 2(a). Up to 14 T, the measured magnetization remains far below the saturated value of \(\mu_{\rm S}\) = 2 \(\mu_{\rm B}\) per Ni\({}^{2+}\) ion expected for the spin-only moment (Landé factor \(g=2\)).
Figure 1: Room-temperature XRD (a) and NPD (b) patterns of the polycrystalline SNWO sample and the corresponding crystal structure (c, d). In (a, b), the black circles represent the observed intensities and the red solid line is the calculated pattern according to the Rietveld refinement. The difference between the observed and calculated intensities is shown as the blue line at the bottom. The olive, red and navy vertical bars indicate the Bragg reflections from the SNWO main phase, NiO and SrWO\({}_{4}\) impurities, respectively. (d) illustrates the projection of the lattice of SNWO onto the \(ab\) plane, showing the out-of-phase rotations of NiO\({}_{6}\) and WO\({}_{6}\) octahedra along the \(c\) axis and the resultant in-plane Ni-O(1)-W bond angle of 167.55\({}^{\circ}\) at 300 K.
This is expected, since a free spin-1 ion coupled to a 14 T magnetic field gains a Zeeman energy of only \(\sim 20\) K, far below the characteristic temperature scale set by \(\theta\) = \(-\)92.8(3) K.
In addition, as shown in Fig. 2(b), the molar specific heat (\(C\)) of SNWO also shows a clear anomaly at \(\sim 54\) K, supporting the long-range nature of the AFM ordering revealed in Fig. 2(a). After subtracting the phonon contribution to the zero-field specific heat (\(C_{\rm ph}\)) approximated by the Debye formula,
\[C_{\rm ph}=9R\sum_{i=1}^{2}c_{i}(\frac{T}{\theta_{Di}})^{3}\int_{0}^{\theta_{ Di}/T}{\rm d}x\frac{x^{4}e^{x}}{(e^{x}-1)^{2}},\]
where \(c_{1}\) = 5.607, \(c_{2}\) = 4.393, \(\theta_{D1}\) = 293.5 K, and \(\theta_{D2}\) = 868.5 K, the magnetic specific heat \(C_{\rm m}\) is obtained and plotted in the inset of Fig. 2(b). By integrating \(C_{\rm m}/T\) over temperature, the maximal change of magnetic entropy \(\Delta S_{\rm m}\) is estimated experimentally to be 11.5 JK\({}^{-1}\)mol\({}^{-1}\). The difference between the measured magnetic entropy change and the value theoretically expected for a spin-1 system (\(\Delta S_{\rm m}=R{\rm ln}(2S+1)=R{\rm ln}3=9.134\) JK\({}^{-1}\)mol\({}^{-1}\)) might be due to an underestimation of the phonon contribution \(C_{\rm ph}\) by the Debye model. It is worth noting that the anomaly around \(T_{\rm N}\) is hardly affected by external fields up to 14 T, further corroborating the robustness of the AFM interactions in SNWO against external perturbations.
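The phonon subtraction and entropy estimate can be sketched in Python as follows. This is not the analysis code used above: the Debye parameters are the ones quoted, but the measured specific heat is replaced by a hypothetical smooth curve, so only the procedure (two-term Debye \(C_{\rm ph}\), subtraction, and integration of \(C_{\rm m}/T\)) is illustrated.

```python
import numpy as np
from scipy.integrate import quad, trapezoid

R = 8.314  # gas constant, J K^-1 mol^-1

def debye_term(T, theta_D):
    """9R (T/theta_D)^3 * int_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx."""
    upper = min(theta_D / T, 50.0)   # integrand is negligible beyond x ~ 50
    integral, _ = quad(lambda x: x**4 * np.exp(x) / np.expm1(x)**2, 0.0, upper)
    return 9.0 * R * (T / theta_D)**3 * integral

def c_phonon(T, c=(5.607, 4.393), theta=(293.5, 868.5)):
    """Two-term Debye phonon specific heat with the quoted parameters."""
    return sum(ci * debye_term(T, ti) for ci, ti in zip(c, theta))

T = np.linspace(2.0, 150.0, 300)
C_ph = np.array([c_phonon(t) for t in T])
# Hypothetical placeholder for the measured zero-field specific heat (peak near T_N).
C_total = C_ph + 5.0 * np.exp(-0.5 * ((T - 54.0) / 8.0) ** 2)
C_m = C_total - C_ph
dS_m = trapezoid(C_m / T, T)         # magnetic entropy change, J K^-1 mol^-1
print(f"Delta S_m = {dS_m:.2f} J/(K mol); R ln 3 = {R * np.log(3):.3f} J/(K mol)")
```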
### Magnetic structure determination
Low-temperature NPD measurements were carried out to determine the magnetic structure of SNWO below the AFM transition at \(T_{\rm N}\). Figure 3 shows the NPD patterns collected at 100 K (a, b) and 1.5 K (c, d) using the wavelengths of 1.1545 Å and 2.45 Å, together with the corresponding Rietveld refinements. At 100 K, well above \(T_{\rm N}\), the diffraction patterns can be perfectly fitted by the same tetragonal phase of SNWO as at room temperature (see Fig. 3(a, b)). Upon cooling, as shown in Fig. 3(c, d) for the base temperature of 1.5 K, additional reflections arising from the AFM ordering of the Ni\({}^{2+}\) moments (marked by the navy vertical bars) emerge, which can be well indexed with a magnetic propagation vector \(k\) = (0.5, 0, 0.5).
To deduce the ground-state magnetic structure of SNWO, an irreducible representation analysis was first performed using the BASIREPS program integrated into the FULLPROF suite. For Ni\({}^{2+}\) ions located at the \(2a\) Wyckoff position of the tetragonal lattice with the \(I4/m\) space group, only one irreducible representation (IR), \(\Gamma_{1}\), is possible for \(k\) = (0.5, 0, 0.5); its basis vectors are listed in Table 2. The three basis vectors (\(\psi_{\nu}\), real unit vectors along the three crystallographic axes) allow the Ni\({}^{2+}\) moments to point in any direction.
With a linear combination of the magnetic basis vectors, the additional magnetic reflections emerging below \(T_{\rm N}\) can be fitted to determine the size and direction of the Ni\({}^{2+}\) moments. By simultaneous refinements of the 1.1545 Å and 2.45 Å datasets shown in Fig. 3(c, d), both the nuclear and magnetic structures of SNWO at the base temperature of 1.5 K can be determined with satisfactory \(R\) factors. The crystallographic parameters at 1.5 K from the nuclear structure refinement are also listed in Table 1,
\begin{table}
\begin{tabular}{l c c c} IR & \(\psi_{\nu}\) & Components & Ni \\ \hline \(\Gamma_{1}\) & \(\psi_{1}\) & Real & (1,0,0) \\ & \(\psi_{2}\) & Real & (0,1,0) \\ & \(\psi_{3}\) & Real & (0,0,1) \\ \end{tabular}
\end{table}
Table 2: Basis vectors (\(\psi_{\nu}\)) of \(\Gamma_{1}\), the only possible magnetic irreducible representation, for the Ni atoms occupying the 2a sites in SNWO with the space group \(I4/m\) and \(k\) = (0.5, 0, 0.5), obtained from representation analysis.
Figure 2: (a) DC magnetic susceptibility (\(\chi\), black circles) and inverse susceptibility (\(1/\chi\), blue squares) of the polycrystalline SNWO sample, measured in a magnetic field of 1 T. The dashed line represents the Curie-Weiss fitting to \(1/\chi\) from 200 to 300 K. The inset shows the isothermal magnetization curve measured at 4 K. (b) Molar specific heat \(C\) of SNWO measured in the magnetic field of 0, 4, 9, and 14 T, respectively. The dashed line represents a fitting to the phonon contribution \(C_{\rm ph}\) to the zero-field specific heat. The inset shows the magnetic specific heat \(C_{\rm m}\), the change of magnetic entropy \(\Delta S_{\rm m}\) calculated by integrating \(C_{\rm m}/T\) and its comparison with the theoretical expectation of \(R\)ln\(3\).
for comparison with those at 300 K. At the base temperature, SNWO retains an overall similar tetragonal structure but exhibits an even larger octahedral rotation, with an in-plane Ni-O(1)-W bond angle of 164.81\({}^{\circ}\) compared with 167.55\({}^{\circ}\) at 300 K.
According to the refined coefficients of the three basis vectors of the IR \(\Gamma_{1}\), the Ni\({}^{2+}\) magnetic moment in SNWO is determined to be 1.9(2) \(\mu_{\rm B}\) in size at 1.5 K, with components of \(-\)0.65(16), 0.69(29) and 1.70(11) \(\mu_{\rm B}\) along the crystallographic \(a\), \(b\) and \(c\) axes, respectively. Such a moment value is well consistent with that expected for spin-only \(S=1\) Ni\({}^{2+}\) moments, whose ordered moment is \(gS=2\)\(\mu_{\rm B}\) for the Landé factor \(g=2\). It is also in good agreement with the ordered moments of 1.8-2.2 \(\mu_{\rm B}\) observed by neutron diffraction in various nickel compounds with NiO\({}_{6}\) octahedra [36; 37; 38], further supporting the significant quenching of the orbital moment of the Ni\({}^{2+}\) ions in SNWO, as evidenced by the dc magnetic susceptibility data presented in Section III.B.
In addition, we note that in a previous \(ab~{}initio\) investigation of the magnetic ordering in SNWO, it was
Figure 3: NPD patterns of SNWO at 100 K (a, b) and 1.5 K (c, d) and the corresponding Rietveld refinements. The left (a, c) and right (b, d) panels show the data collected with neutron wavelengths of 1.1545 Å and 2.45 Å, respectively. The Rietveld refinements at 1.5 K were performed adopting the magnetic structure described in the text. The black open circles represent the observed intensities, and the calculated patterns according to the refinements are shown as red solid lines. The differences between the observed and calculated intensities are plotted at the bottom as blue solid lines. The olive, navy and red vertical bars indicate the nuclear Bragg reflections from SNWO, the magnetic reflections from SNWO, and the NiO impurity (nuclear and magnetic reflections), respectively. (e) shows the low-\(Q\) NPD patterns collected using \(\lambda=2.45\) Å at 1.5 K and 100 K. The refinements of the 1.5 K pattern with three different magnetic structures, obtained by completely relaxing the direction of the Ni\({}^{2+}\) moments or by fixing it along the \(c\) or \(b\) axis, are plotted as a red solid line, a blue dotted line, and an orange solid line (in the inset), respectively, for comparison. The NPD pattern collected at 100 K is normalized and plotted as a green line, while the 1.5 K pattern is plotted as black open circles.
proposed that the AFM-II configuration with the moments aligned along the \(c\) axis is energetically favorable [26]. In Fig. 3(e), we show the refinements of the 2.45 Å NPD pattern collected at 1.5 K using two different magnetic structures, obtained by completely relaxing the direction of the Ni\({}^{2+}\) moments or by fixing it along the \(c\) axis, for comparison. It is clear that the latter yields a much worse agreement with the observed intensities in the low-\(Q\) region (\(R_{\text{mag}}\) = 23.4) than the former (\(R_{\text{mag}}\) = 16.2). For further comparison, the inset of Fig. 3(e) also displays the refinement result with the moments fixed along the \(b\) axis (\(R_{\text{mag}}\) = 20.2). The failure to fit the peak at 1.33 Å\({}^{-1}\) also verifies the necessity of \(a\)/\(c\)-axis components, supporting the validity of the refined result \(\vec{m}_{\text{exp}}\) = (\(-\)0.65, 0.69, 1.70) \(\mu_{\text{B}}\) with components along all three crystallographic axes.
Figure 4 shows the refined moment size in SNWO as a function of temperature, which can be regarded as the order parameter of the AFM transition. As the dashed line shows, below \(T_{\text{N}}\) = 56 K (as determined in Section III.B), the AFM order parameter follows a power-law behavior, \(M\propto(\frac{T_{\text{N}}-T}{T_{\text{N}}})^{\beta}\), with the exponent fitted to be \(\beta\) = 0.20(4). It is worth pointing out that the \(\beta\) value determined here cannot be directly compared with the universal critical exponent associated with the magnetic ordering, as Fig. 4 does not contain enough data points in the critical region close to \(T_{\text{N}}\).
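The power-law fit can be reproduced with a few lines of Python. The sketch below is illustrative rather than the code used for Fig. 4: the data points are synthetic values generated from the reported \(T_{\rm N}\), \(\beta\), and low-temperature moment, and only show the fitting procedure itself.

```python
import numpy as np
from scipy.optimize import curve_fit

T_N = 56.0  # K, fixed from the susceptibility data

def order_parameter(T, M0, beta):
    """M(T) = M0 * ((T_N - T)/T_N)^beta below T_N, zero above."""
    t = np.clip((T_N - T) / T_N, 0.0, None)   # reduced temperature, >= 0
    return M0 * t**beta

rng = np.random.default_rng(1)
T_data = np.array([1.5, 10, 20, 30, 40, 45, 50, 53])   # K (illustrative grid)
M_data = order_parameter(T_data, 1.9, 0.20) + 0.03 * rng.standard_normal(T_data.size)

popt, pcov = curve_fit(order_parameter, T_data, M_data, p0=(2.0, 0.3))
M0_fit, beta_fit = popt
print(f"M0 = {M0_fit:.2f} mu_B, beta = {beta_fit:.2f}")
```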
Associated with \(k\) = (0.5, 0, 0.5), the ground-state magnetic structure of SNWO is depicted in the inset of Fig. 4. The Ni\({}^{2+}\) spins align antiparallel along both the \(a\) and \(c\) axes but parallel along the \(b\) axis, forming a collinear AFM structure. Such a collinear AFM structure was previously reported for the DP tungstates Sr\({}_{2}\)CuWO\({}_{6}\) and (Ba/Sr)\({}_{2}\)FeWO\({}_{6}\) and ascribed to a dominant NNN interaction \(J_{2}\) in their spin Hamiltonians [19; 39]. The ordering pattern of the Ni\({}^{2+}\) spins is consistent with the AFM-II configuration theoretically proposed for SNWO in Ref. [26], but the moment direction differs considerably. The reason for this discrepancy is discussed in Section III.D.
### First-principles calculations
To compare with the experimentally determined magnetic structure of SNWO, the energies of 801 different spin orientations uniformly distributed in real space were calculated within the framework of DFT. Figure 5(a) shows the 2D angular dependence of the magnetic anisotropy energy (MAE), obtained by linear interpolation of the calculated values, with respect to the magnetic ground state. By setting up a Cartesian coordinate system with its \(x\), \(y\) and \(z\) axes along the \(a\), \(b\), and \(c\) axes of the tetragonal lattice, the direction of the Ni\({}^{2+}\) moments can be fully described by the angles \(\theta\) and \(\varphi\), as illustrated in the inset of Fig. 5(b). The calculated petal-like MAE pattern near the central area in Fig. 5(a) displays a four-fold rotational symmetry around the central point (\(\theta\) = 90\({}^{\circ}\)), indicating a \(C_{4z}\) symmetry from the perspective of DFT, which is not surprising considering the tetragonal symmetry of SNWO at low temperatures. The \(C_{4z}\) symmetry is better visualized in the \(\varphi\) dependence of the in-plane MAE shown in Fig. 5(b), with \(\theta\) fixed at 0.
According to the calculations, the spin orientation with the moment converged to \(\vec{m}_{\text{cal}}\) = (\(-\)0.89, 0.71, 1.65) \(\mu_{\text{B}}\) per formula unit has the lowest energy, corresponding to \(\theta\) = 55.5\({}^{\circ}\) and \(\varphi\) = 141\({}^{\circ}\), as marked by the white triangle in Fig. 5(a). This calculated magnetic easy-axis direction is highly consistent with the experimentally determined orientation \(\vec{m}_{\text{exp}}\) = (\(-\)0.65, 0.69, 1.70) \(\mu_{\text{B}}\) (corresponding to \(\theta\) = 60.8\({}^{\circ}\) and \(\varphi\) = 133\({}^{\circ}\), marked by the red star in Fig. 5(a)), with a tiny difference of only 6.9\({}^{\circ}\).
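These angles can be cross-checked directly from the moment components. The short Python sketch below assumes, consistently with the values quoted here, that \(\theta\) is measured from the \(ab\) plane (\(\theta=90^{\circ}\) along \(c\)) and \(\varphi\) from the \(a\) axis; with that convention it reproduces the quoted (\(\theta\), \(\varphi\)) pairs and the 6.9\({}^{\circ}\) angle between the calculated and experimental directions.

```python
import numpy as np

def angles(m):
    """Return (theta, phi) in degrees for a moment vector m = (m_a, m_b, m_c)."""
    m = np.asarray(m, dtype=float)
    theta = np.degrees(np.arcsin(m[2] / np.linalg.norm(m)))   # from the ab plane
    phi = np.degrees(np.arctan2(m[1], m[0])) % 360.0          # from the a axis
    return theta, phi

m_exp = np.array([-0.65, 0.69, 1.70])   # mu_B, refined moment components
m_cal = np.array([-0.89, 0.71, 1.65])   # mu_B, DFT result

cosang = np.dot(m_exp, m_cal) / (np.linalg.norm(m_exp) * np.linalg.norm(m_cal))
print("exp:", angles(m_exp), "|m| =", round(np.linalg.norm(m_exp), 2))
print("cal:", angles(m_cal))
print("angle between them: %.1f deg" % np.degrees(np.arccos(cosang)))
```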
As mentioned above in Section III.C, a previous DFT study proposed the AFM-II configuration with the moments aligned along the \(c\) axis as the possible magnetic ground state [26]. However, it is clear from Fig. 3(e) that the experimentally determined direction of \(\vec{m}_{\text{exp}}\) actually deviates from the \(c\) axis, with a canting angle of 29.2\({}^{\circ}\), which is supported by our detailed MAE calculation shown in Fig. 5(a). To address the discrepancy between our work and Ref. [26], we have checked the \(\theta\) dependence of the calculated MAE for fixed \(\varphi\) (the out-of-plane MAE). As shown in Fig. 5(c), if the in-plane anisotropy is ignored and the moment is fixed in the \(xz\) plane (\(\varphi\) = 0), the MAE curve reaches a local minimum at \(\theta\) = 90\({}^{\circ}\), corresponding to the spin alignment along the \(c\) axis, consistent with the result in Ref. [26]. However, since the in-plane MAE shows periodic modulations (see Fig. 5(b)), the effect of \(\varphi\) must be taken into account and the real minimum of the MAE
Figure 4: Temperature dependence of the refined moment size of the Ni\({}^{2+}\) ions, where the dashed line represents a power-law fitting. The inset illustrates the experimentally determined collinear AFM structure of SNWO in the ground state, in which the blue arrows represent the in-plane and out-of-plane NN exchange couplings (\(J_{1}\) and \(J_{1c}\)) and the yellow arrows represent the in-plane and out-of-plane NNN exchange couplings (\(J_{2}\) and \(J_{2c}\)).
has to be sought globally through the 2D mapping. By fixing \(\varphi\) at the experimentally determined value of \(133^{\circ}\), the MAE curve actually reaches an even lower minimum at \(\theta=50.4^{\circ}\) (see Fig. 5(c)), close to the \(\theta=60.8^{\circ}\) found experimentally, therefore supporting our model with the Ni\({}^{2+}\) moments canting off the \(c\) axis. In addition, we note that the out-of-plane modulation of the MAE shown in Fig. 5(c) is much stronger than that of the in-plane MAE.
Furthermore, to verify the stability of the experimentally determined collinear AFM-II structure against other possibilities, the free energies of SNWO with twelve different representative spin configurations of the Ni\({}^{2+}\) moments as shown in Fig. 6(a-l) are calculated. The energies of these configurations with the moments all aligned along the \(c\) axis have already been calculated by the GGA+\(U\) method in Ref. [26]. Here we have further incorporated the effect of spin-orbit coupling (SOC) and set the initial moment direction the same as \(\vec{m}_{\rm exp}\), the experimentally determined one. Table 3 lists the calculation results of the relative energies, with respect to the ground state. It turns out that the lowest energy is indeed achieved for the AFM-II configuration as shown in Fig. 6(c), consistent with the experimentally determined magnetic structure, further corroborating the validity of our magnetic structure model.
As mentioned in Section I, direct exchange couplings between the Ni\({}^{2+}\) ions are negligible because of the large distance between them, so the NN and NNN superexchange couplings, occurring via a \(90^{\circ}\) Ni-O-(W)-O-Ni path and a \(180^{\circ}\) Ni-O-W-O-Ni path, respectively, must be responsible for the magnetic ordering of the Ni\({}^{2+}\) moments. Based on the calculated energies of the twelve spin configurations given in Table 3, we have estimated the strengths of the superexchange couplings in SNWO, as marked in the inset of Fig. 4, using a mapping method similar to that used in Ref. [26]. As a result, the in-plane superexchange coupling constants are estimated to be \(J_{1}\sim-0.17\) meV and \(J_{2}\sim-2.00\) meV, while the out-of-plane coupling constants are \(J_{1\rm c}\sim-0.16\) meV and \(J_{2\rm c}\sim-2.46\) meV. The magnitudes of the coupling constants estimated here agree well with those calculated in Ref. [26] and determined from inelastic neutron scattering in Ref. [25], indicating the dominant role of the NNN couplings \(J_{2}\) and \(J_{2\rm c}\) in the spin Hamiltonian of SNWO, which favors antiparallel coupling between the Ni\({}^{2+}\) spins along the (001) and (110) directions and the stabilization of an AFM-II type collinear structure.
Combining the results of our neutron diffraction measurements and DFT calculations, we conclude that SNWO is a collinear antiferromagnet with a strong, easy-axis type magnetic anisotropy. Considering the negligible spin-orbit coupling due to the significant quenching of the Ni\({}^{2+}\) orbital moment, this strong magnetic anisotropy is likely to arise from the single-ion
Figure 5: Angular dependence of the calculated MAE. (a) shows the 2D contour map of the MAE as a function of the angles \(\theta\) and \(\varphi\), which are defined in the Cartesian coordinate system aligned with the \(a\), \(b\), and \(c\) axes of the tetragonal lattice, as shown in the inset of (b). The color represents the relative energy of a given spin orientation calculated by DFT, with respect to the magnetic ground state. The moment directions found experimentally and theoretically, according to the neutron data and the minimum of the calculated MAE, are labeled by the red star and the white triangle, respectively. (b) shows the \(\varphi\) dependence of the MAE (in-plane MAE) with \(\theta\) fixed at 0. (c) shows the \(\theta\) dependence of the MAE (out-of-plane MAE) with \(\varphi\) fixed at \(133^{\circ}\) (the experimentally found value) and at \(0^{\circ}\), respectively.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Spin configuration & Energy (meV) & Spin configuration & Energy (meV) \\ \hline FM & 14.57 & AFM-VI & 4.78 \\ AFM-I & 13.61 & AFM-VII & 8.89 \\ AFM-II & 0.00 & AFM-VIII & 7.75 \\ AFM-III & 8.66 & AFM-IX & 6.77 \\ AFM-IV & 13.57 & AFM-X & 11.33 \\ AFM-V & 9.32 & AFM-XI & 11.49 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Relative energies (in units of meV) of twelve different spin configurations of SNWO, as shown in Fig. 6, calculated using GGA+SOC+\(U\).
anisotropy associated with the \(S=1\) Ni\({}^{2+}\) ions [40; 41; 42]. Another scenario responsible for the strong magnetic anisotropy might be an underlying magnetoelastic coupling [43; 44; 45]. Further temperature-dependent XRD and NPD studies will be crucial to resolve the delicate structural changes of SNWO across \(T_{\rm N}\), including the lattice constants, the atomic displacement parameters, and the NiO\({}_{6}\) octahedral rotation or distortion.
## IV Conclusion
In summary, we have conducted comprehensive investigations on the magnetic ordering in the double perovskite compound Sr\({}_{2}\)NiWO\({}_{6}\), combining macroscopic magnetic characterizations, neutron powder diffraction measurements, and DFT calculations. Below \(T_{\rm N}=56\) K, SNWO is revealed to order magnetically in a collinear AFM-II structure with \(k=\) (0.5, 0, 0.5). Due to a significant quenching of the orbital moment, the low-temperature magnetic properties of the Ni\({}^{2+}\) ions can be well described by \(S=1\) in the spin-only case, and the ordered moment at 1.5 K is determined to be 1.9(2) \(\mu_{\rm B}\). In the ground state, the Ni\({}^{2+}\) moments in SNWO are estimated to cant off the \(c\) axis by 29.2\({}^{\circ}\) with an easy-axis type magnetic anisotropy, which is well supported by the DFT calculations. In addition, the strengths of the in-plane and out-of-plane NNN superexchange couplings \(J_{2}\) and \(J_{2c}\) deduced from the DFT results are found to be dominant in the spin Hamiltonian of SNWO, which accounts for the stabilization of the collinear AFM-II structure as its magnetic ground state.
###### Acknowledgements.
This work is partly based on experiments performed at the Swiss Spallation Neutron Source SINQ, Paul Scherrer Institute, Villigen, Switzerland. The authors acknowledge the computational support from HPC of
Figure 6: Twelve different representative spin configurations of SNWO, including the FM (a) and AFM-I\(\sim\)XI (b-l) structures.
the Beihang University. The work at Beihang University is financially supported by the National Natural Science Foundation of China (Grant No. 12074023) and the Fundamental Research Funds for the Central Universities in China. The work at the University of Macau was supported by the Science and Technology Development Fund, Macao SAR (File Nos. 0051/2019/AFJ, 0090/2021/A2, and 0049/2021/AGJ) and the University of Macau (MYRG2020-00278-IAPME and EF030/IAPME-LHF/2021/GDSTIC).
|
2305.10298
|
Estimation of Remaining Useful Life and SOH of Lithium Ion Batteries
(For EV Vehicles)
|
Lithium-ion batteries are widely used in various applications, including
portable electronic devices, electric vehicles, and renewable energy storage
systems. Accurately estimating the remaining useful life of these batteries is
crucial for ensuring their optimal performance, preventing unexpected failures,
and reducing maintenance costs. In this paper, we present a comprehensive
review of the existing approaches for estimating the remaining useful life of
lithium-ion batteries, including data-driven methods, physics-based models, and
hybrid approaches. We also propose a novel approach based on machine learning
techniques for accurately predicting the remaining useful life of lithium-ion
batteries. Our approach utilizes various battery performance parameters,
including voltage, current, and temperature, to train a predictive model that
can accurately estimate the remaining useful life of the battery. We evaluate
the performance of our approach on a dataset of lithium-ion battery cycles and
compare it with other state-of-the-art methods. The results demonstrate the
effectiveness of our proposed approach in accurately estimating the remaining
useful life of lithium-ion batteries.
|
Ganesh Kumar
|
2023-05-17T15:35:31Z
|
http://arxiv.org/abs/2305.10298v1
|
## 1 Abstract
Lithium-ion batteries are widely used in various applications, including portable electronic devices, electric vehicles, and renewable energy storage systems. Accurately estimating the remaining useful life of these batteries is crucial for ensuring their optimal performance, preventing unexpected failures, and reducing maintenance costs. In this paper, we present a comprehensive review of the existing approaches for estimating the remaining useful life of lithium-ion batteries, including data-driven methods, physics-based models, and hybrid approaches. We also propose a novel approach based on machine learning techniques for accurately predicting the remaining useful life of lithium-ion batteries. Our approach utilizes various battery performance parameters, including voltage, current, and temperature, to train a predictive model that can accurately estimate the remaining useful life of the battery. We evaluate the performance of our approach on a dataset of lithium-ion battery cycles and compare it with other state-of-the-art methods. The results demonstrate the effectiveness of our proposed approach in accurately estimating the remaining useful life of lithium-ion batteries.
Keywords:
* Artificial Intelligence
* Neural Networks
* Machine Learning
* Lithium-Ion / Polymer Batteries
## 2 Introduction
Lithium-ion batteries are widely used in portable electronic devices, electric vehicles, and renewable energy systems due to their high energy density and long cycle life. However, the aging of lithium-ion batteries can lead to a decrease in their performance and reliability, which poses a significant challenge to the safe and efficient operation of battery-powered systems. To ensure the safe and optimal use of lithium-ion batteries, it is crucial to accurately estimate their
remaining useful life (RUL), which is the time until the battery reaches a predefined end-of-life criterion.
Several approaches have been proposed to estimate the RUL of lithium-ion batteries, including empirical models, data-driven models, and physics-based models. Empirical models rely on statistical analysis of battery performance data and are relatively easy to implement but may lack accuracy and generality. Physics-based models are based on the fundamental electrochemical processes that govern the battery's behavior and are capable of capturing the complex and nonlinear relationships between the battery's operating conditions and its performance. However, physics-based models require detailed knowledge of the battery's material properties and may not be applicable to all types of batteries.
Data-driven models, which use machine learning techniques to capture the complex relationships between battery performance and its operating conditions, have shown promising results in recent years. However, most of these models focus on short-term performance prediction and may not be suitable for RUL estimation.
In this paper, we propose a novel approach for estimating the RUL of lithium-ion batteries, which integrates machine learning techniques with electrochemical modeling. The electrochemical model provides a fundamental understanding of the aging mechanisms of the battery, while machine learning algorithms are used to capture the complex and nonlinear relationships between the battery's operating parameters and its RUL. The proposed approach is evaluated using experimental data from a set of commercially available lithium-ion batteries, and the results demonstrate its effectiveness in accurately predicting the RUL of the batteries. The proposed approach has the potential to enhance the reliability and safety of battery-powered systems and enable efficient utilization of battery resources.
Fig. 1: Example of Working of a Li-Ion Battery
The aim of this research is to propose a novel approach for estimating the remaining useful life (RUL) of lithium-ion batteries using a combination of electrochemical modeling and machine learning techniques. The proposed approach is expected to provide accurate and reliable estimates of the RUL of lithium-ion batteries under various operating conditions, which is crucial for the safe and efficient operation of battery-powered systems such as electric vehicles.
This research will be helpful for electric vehicles by improving their reliability and safety, optimizing battery utilization, and reducing the cost of battery replacements. Accurate estimation of RUL can also help in extending the lifespan of the battery, which is beneficial for the environment by reducing the need for frequent battery replacements and minimizing waste. Furthermore, the proposed approach can aid in the development of battery management systems for electric vehicles that can optimize the use of battery resources and enhance the overall performance of the vehicle.
## 3 Related work
Several approaches have been proposed for estimating the RUL of lithium-ion batteries in the literature. Empirical models are commonly used for short-term performance prediction, and they rely on statistical analysis of battery performance data to estimate the RUL. Some of the commonly used empirical models include the empirical state-of-charge (SOC) model, the empirical capacity fade model, and the empirical impedance-based model. Although empirical models are relatively easy to implement, they may lack accuracy and generality.
Physics-based models are another category of RUL estimation models that are based on the fundamental electrochemical processes that govern the battery's behavior. These models use the equations that describe the battery's electrochemical behavior to estimate the RUL. Some of the commonly used physics-based models include the Doyle-Fuller-Newman (DFN) model, the single particle model (SPM), and the pseudo-two-dimensional model (P2D). Although physics-based models are capable of capturing the complex and nonlinear relationships between the battery's operating conditions and its performance, they require detailed knowledge of the battery's material properties, which may not be available for all types of batteries.
Data-driven models, which use machine learning techniques to capture the complex relationships between battery performance and its operating conditions, have shown promising results in recent years. These models are typically trained on large datasets of battery performance data to predict the RUL. Some of the commonly used data-driven models include the artificial neural network (ANN), support vector machine (SVM), and random forest (RF) models. Although data-driven models have shown promising results, most of them focus on short-term performance prediction and may not be suitable for RUL estimation.
## 4 Proposed Method
Accurate estimation of the remaining useful life (RUL) of lithium-ion batteries is crucial for ensuring the safety and efficiency of battery-powered systems. In recent years, data-driven models based on machine learning techniques have shown promising results for RUL estimation. These models can capture the complex and nonlinear relationships between battery performance and its operating conditions, but they require large amounts of high-quality data for training.
In this paper, we propose a novel approach for RUL estimation of lithium-ion batteries using the TensorFlow Keras library and a sequential model architecture. The proposed approach is evaluated using experimental data from a set of commercially available lithium-ion batteries provided by the National Aeronautics and Space Administration (NASA). The dataset contains various operating conditions and degradation levels, making it suitable for evaluating the effectiveness of the proposed approach.
The proposed approach consists of two main stages: feature engineering and model training. In the feature engineering stage, we extract relevant features from the battery performance data, including the voltage, current, temperature, and capacity, using statistical methods and domain knowledge. These features are then used to train the sequential model.
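A minimal sketch of such a feature-engineering step is shown below. The column names and the particular summary statistics are illustrative assumptions rather than the exact features used in the paper; they merely show how one five-dimensional feature row per discharge cycle (matching the (None, 5) input shape of the model in Fig. 5) could be assembled.

```python
import pandas as pd

def cycle_features(cycle_df: pd.DataFrame) -> dict:
    """Summarize the time series of one discharge cycle into a single feature row."""
    return {
        "v_mean": cycle_df["voltage"].mean(),
        "v_min": cycle_df["voltage"].min(),
        "i_mean": cycle_df["current"].mean(),
        "t_max": cycle_df["temperature"].max(),
        "capacity": cycle_df["capacity"].iloc[-1],
    }

def build_feature_table(cycles_df: pd.DataFrame) -> pd.DataFrame:
    """cycles_df: raw measurements with a 'cycle' identifier column (assumed layout)."""
    rows = [cycle_features(group) for _, group in cycles_df.groupby("cycle")]
    return pd.DataFrame(rows)
```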
The sequential model is based on a deep neural network architecture and is trained using the TensorFlow Keras library. The model consists of multiple layers of densely connected neurons, which are optimized using the backpropagation algorithm. The model takes as input the battery performance data and outputs the estimated RUL.
The proposed approach is evaluated using a cross-validation method and compared with several baseline models, including linear regression, decision tree, and random forest models. The results demonstrate that the proposed approach outperforms the baseline models in terms of RUL estimation accuracy and generalization ability.
In summary, the proposed approach for RUL estimation of lithium-ion batteries using the TensorFlow Keras library and a sequential model architecture is a promising method for improving the reliability and safety of battery-powered systems. The use of experimental data
from NASA further validates the effectiveness of the proposed approach and its potential for practical applications in real-world settings.
### 4.1 Dataset
The dataset was taken from the NASA Battery Dataset. The data were provided as '.mat' files and were converted to '.csv' for easier use in a Python notebook.
This is a pre-prepared dataset that NASA publicly released for research purposes. It contains various operating conditions and degradation levels, making it suitable for evaluating the effectiveness of the proposed approach.
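A possible sketch of the '.mat' to '.csv' conversion is given below. The nested field names ('cycle', 'type', 'data', 'Voltage_measured', etc.) follow the commonly distributed B0005-style files of the NASA battery dataset and should be checked against the actual files; the function and file names are illustrative, not taken from the paper.

```python
import numpy as np
import pandas as pd
from scipy.io import loadmat

def mat_to_csv(path: str, battery_id: str, out_csv: str) -> None:
    """Flatten the nested .mat structure of one battery into a long-format CSV."""
    mat = loadmat(path, simplify_cells=True)      # nested structs become dicts
    frames = []
    for i, cyc in enumerate(mat[battery_id]["cycle"]):
        if cyc["type"] != "discharge":            # keep discharge cycles only
            continue
        d = cyc["data"]
        frames.append(pd.DataFrame({
            "cycle": i,
            "time": np.asarray(d["Time"]).ravel(),
            "voltage": np.asarray(d["Voltage_measured"]).ravel(),
            "current": np.asarray(d["Current_measured"]).ravel(),
            "temperature": np.asarray(d["Temperature_measured"]).ravel(),
            "capacity": float(np.atleast_1d(d["Capacity"])[0]),
        }))
    pd.concat(frames, ignore_index=True).to_csv(out_csv, index=False)

# mat_to_csv("B0005.mat", "B0005", "B0005.csv")   # illustrative call (paths assumed)
```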
### 4.2 Processing and Architecture
In this paper, we propose a sequential model architecture for estimating the remaining useful life (RUL) of lithium-ion batteries. The proposed model consists of multiple layers of densely connected neurons, with each layer having a specific number of neurons and an activation function. The activation functions used in this model include tanh, sigmoid, and relu.
The tanh activation function is a smooth and bounded function that maps input values to the range [-1, 1]. It is commonly used in neural networks for classification and regression tasks. The sigmoid activation function is another commonly used function in neural networks, which maps input values to the range [0, 1]. It is particularly useful for binary classification tasks. The relu activation function is a simple and effective function that returns zero for negative input values and the input value for positive values. It has been shown to perform well in many deep learning applications.
Model: "sequential_4" Layer (type) Output Shape Param # ================================ dense_12 (Dense) (None, 10) 60 dropout_8 (Dropout) (None, 10) 0 dense_13 (Dense) (None, 7) 77 dropout_9 (Dropout) (None, 7) 0 dense_14 (Dense) (None, 3) 24 ================================ Total params: 161 Trainable params: 161 Non-trainable params: 0 None (Fig.4 Model Summary)
In our proposed model, we experiment with different combinations of activation functions and layer configurations to find the best-performing model. We also use the Adam optimizer, which is a stochastic gradient descent optimizer that uses adaptive learning rates to update the model weights. The Adam optimizer has been shown to perform well in many deep learning applications and is widely used in the research community.
To train the proposed model, we use a dataset of experimental data from NASA, which contains various operating conditions and degradation levels of commercially available lithium-ion batteries. The dataset is preprocessed and split into training and testing sets for model training and evaluation.
The proposed model is trained using the TensorFlow Keras library, which provides a simple and efficient way to build and train deep neural networks. The model is optimized using the Adam optimizer and trained for a specific number of epochs, with the training loss and validation loss monitored to prevent overfitting.
\begin{tabular}{|c|c|c|}
\hline
dense\_input (InputLayer) & input: & [(None, 5)] \\
 & output: & [(None, 5)] \\
\hline
dense (Dense) & input: & (None, 5) \\
 & output: & (None, 10) \\
\hline
dropout (Dropout) & input: & (None, 10) \\
 & output: & (None, 10) \\
\hline
dense\_1 (Dense) & input: & (None, 10) \\
 & output: & (None, 7) \\
\hline
dropout\_1 (Dropout) & input: & (None, 7) \\
 & output: & (None, 7) \\
\hline
dense\_2 (Dense) & input: & (None, 7) \\
 & output: & (None, 3) \\
\hline
\end{tabular}
(Fig. 5 Model Architecture)
Overall, the proposed sequential model architecture with different activation functions such as tanh, sigmoid, and relu, and the Adam optimizer is a promising approach for accurately estimating the RUL of lithium-ion batteries. The use of experimental data from NASA further validates the effectiveness of the proposed model and its potential for practical applications in real-world settings.
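The network in Figs. 4 and 5 can be reconstructed with a short Keras sketch. The layer sizes and the 5-dimensional input follow the figures; the dropout rate, loss function, and output activation are not stated explicitly and are assumptions here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(input_dim: int = 5, dropout: float = 0.2) -> tf.keras.Model:
    model = tf.keras.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(10, activation="relu"),
        layers.Dropout(dropout),
        layers.Dense(7, activation="relu"),
        layers.Dropout(dropout),
        layers.Dense(3),                      # linear output layer (assumed)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="mse", metrics=["mae"])   # loss and metrics assumed
    return model

model = build_model()
model.summary()   # reproduces the 161-parameter count shown in Fig. 4
```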
### 4.3 Model Evaluation
We experiment with different combinations of hyperparameters, including the number of neurons in each layer, the activation functions, the optimizer, the batch size, and the number of epochs. We use a grid search method to find the best-performing model with the highest accuracy.
After evaluating different hyperparameters, we found that the proposed sequential model with (10, 7, 3) neurons in the three dense layers, the relu activation function, and the Adam optimizer performs best for RUL estimation. We further experimented with different batch sizes and numbers of epochs and observed that the model achieves a high accuracy of 0.985 when the batch size and the number of epochs are interchanged.
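A sketch of such a search over batch size and number of epochs is shown below. It reuses the `build_model()` helper from the sketch above; the grid values and the randomly generated placeholder arrays (`X_train`, `y_train`, `X_val`, `y_val`) are illustrative assumptions, not the actual search space or data.

```python
import numpy as np

# Placeholder data shaped like the 5-feature input and 3-unit output of the model.
rng = np.random.default_rng(42)
X_train, y_train = rng.random((100, 5)), rng.random((100, 3))
X_val, y_val = rng.random((20, 5)), rng.random((20, 3))

best = None
for batch_size in (8, 16, 32, 64):            # illustrative grid
    for epochs in (10, 25, 50):
        model = build_model()                  # from the previous sketch
        model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs,
                  validation_data=(X_val, y_val), verbose=0)
        val_loss = model.evaluate(X_val, y_val, verbose=0)[0]
        if best is None or val_loss < best[0]:
            best = (val_loss, batch_size, epochs)

print("best (val_loss, batch_size, epochs):", best)
```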
To validate the effectiveness of the proposed approach, we compare it with several baseline models, including decision tree and Functional API models. The results show that the proposed approach outperforms the baseline models in terms of RUL estimation accuracy and generalization ability.
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Model & Functional (Keras API) & Sequential (Keras API) \\ \hline Accuracy & 0.95 & 0.985 \\ \hline \end{tabular}
## 5 Results and Discussion
In this section, we summarize the evaluation of the proposed model against the baseline approaches in terms of RUL estimation accuracy and generalization ability.
The evaluation results demonstrate that the proposed approach for RUL estimation of lithium-ion batteries using the TensorFlow Keras library and a sequential model architecture is a promising method for improving the reliability and safety of battery-powered systems. The high accuracy achieved with different batch sizes and epochs further validates the effectiveness of the proposed approach and its potential for practical applications in real-world settings.
## 6 Conclusion
In this paper, we proposed a novel approach for estimating the remaining useful life (RUL) of lithium-ion batteries using a sequential model architecture and the TensorFlow Keras library. The proposed approach leverages experimental data from NASA to train and evaluate the model.
We experimented with different combinations of hyperparameters, including the number of neurons in each layer, the activation functions, the optimizer, the batch size, and the number of epochs, to find the best-performing model. The results showed that the proposed sequential
model with (10, 7, 3) neurons in the three dense layers, the relu activation function, and the Adam optimizer achieved the highest accuracy of 0.985 when the batch size and the number of epochs were interchanged.
Overall, the proposed approach provides a promising method for improving the reliability and safety of battery-powered systems by accurately estimating the RUL of lithium-ion batteries. The use of experimental data from NASA further validates the effectiveness of the proposed approach and its potential for practical applications in real-world settings.
Future work can explore the use of other types of neural network architectures and optimization techniques to further improve the accuracy and robustness of the RUL estimation approach. Additionally, the proposed approach can be applied to other battery chemistries and types of systems to expand its scope of applications.
|
2307.06283
|
Tackling Computational Heterogeneity in FL: A Few Theoretical Insights
|
The future of machine learning lies in moving data collection along with
training to the edge. Federated Learning, for short FL, has been recently
proposed to achieve this goal. The principle of this approach is to aggregate
models learned over a large number of distributed clients, i.e.,
resource-constrained mobile devices that collect data from their environment,
to obtain a new more general model. The latter is subsequently redistributed to
clients for further training. A key feature that distinguishes federated
learning from data-center-based distributed training is the inherent
heterogeneity. In this work, we introduce and analyse a novel aggregation
framework that allows for formalizing and tackling computational heterogeneity
in federated optimization, in terms of both heterogeneous data and local
updates. Proposed aggregation algorithms are extensively analyzed from a
theoretical, and an experimental prospective.
|
Adnan Ben Mansour, Gaia Carenini, Alexandre Duplessis
|
2023-07-12T16:28:21Z
|
http://arxiv.org/abs/2307.06283v1
|
# Tackling Computational Heterogeneity in FL: A Few Theoretical Insights
###### Abstract
The future of machine learning lies in moving data collection along with training to the edge. Federated Learning, for short FL, has been recently proposed to achieve this goal. The principle of this approach is to aggregate models learned over a large number of distributed clients, i.e., resource-constrained mobile devices that collect data from their environment, to obtain a new more general model. The latter is subsequently redistributed to clients for further training. A key feature that distinguishes federated learning from data-center-based distributed training is the inherent heterogeneity. In this work, we introduce and analyse a novel aggregation framework that allows for formalizing and tackling computational heterogeneity in federated optimization, in terms of both heterogeneous data and local updates. Proposed aggregation algorithms are extensively analyzed from a theoretical, and an experimental prospective.
**Keywords:** Federated Learning, Model Aggregation, Heterogeneity
## 1 Introduction
Until recently, machine learning models were extensively trained in centralized data center settings using powerful computing nodes, fast inter-node communication links, and large centrally-available training datasets. However, with the proliferation of mobile devices that collectively gather a massive amount of relevant data every day, centralization is not always practical [Lim et al. (2020)]. Therefore, the future of machine learning lies in moving both data collection and model training to the edge to take advantage of the computational power available there, and to minimize the communication cost. Furthermore, in many fields such as medical information processing, public policy, and the design of products or services, the collected datasets are _privacy-sensitive_. This creates a need to reduce human exposure to data to avoid confidentiality violations due to human failure. This may preclude logging into a data center and performing training there using conventional approaches. In fact, conventional machine learning requires feeding training data into a learning algorithm and
revealing information indirectly to the developers. When several data sources are involved, a merging procedure for creating a single dataset is also required, and merging in a privacy-preserving way is still an important open problem [Zheng et al. (2019)].
**Input**: \(N\), \(C\), \(T\), \(E\)
**Output**: \(w_{TE}\)
```
1: Initialize \(w_{0}\).
2:for each round \(t\in\{0,E,2E,\ldots,(T-1)E\}\)do
3:\(m\leftarrow\max(C\cdot N,1)\)
4:\(I_{t}\leftarrow\)Create-Client-Set(\(m\))
5:for each client \(i\in I_{t}\) in parallel do
6:\(w_{t+E}^{i}\leftarrow\)Client-Update(\(w_{t}\))
7:endfor
8:\(w_{t+E}\leftarrow\)Aggregation(\(w_{t+E}^{1},\ldots,w_{t+E}^{N}\))
9:endfor
10:return\(w_{TE}\)
```
**Algorithm 1** General Federated Learning Protocol
Recently, McMahan et al. (2017) proposed a distributed data-mining technique for edge devices called _Federated Learning_ (FL), which allows the model training to be decoupled from the need for direct access to the raw data. Formally, FL is a protocol that operates according to Algorithm 1; cf. Li et al. (2020) for an overview. The framework involves a group of devices called _clients_ and a _server_ that coordinates the learning process. Each client has a local training dataset that is never uploaded to the server. The goal is to train a global model by aggregating the results of the local training. Parameters fixed by the centralized part of the global learning system include: the number of clients \(N\), the ratio of clients \(C\) selected at each round, the set of clients \(I_{t}\) selected at round \(t\), the number of communication rounds \(T\), and the number of local epochs \(E\). The model of a client \(i\) at a given instant \(t\) is completely defined by its weights \(w_{t}^{i}\). At the end of each epoch \(t\in\{0,\ldots,TE-1\}\), \(w_{t+1}^{i}\) denotes the weights of client \(i\in I\). For each communication round \(t\in\{0,E,\ldots,(T-1)E\}\), \(w_{t}\) is the global model held by the server at time \(t\), and \(w_{TE}\) is the final model. In the following, we use the notations given in Table 1.
Algorithm 1 describes the training procedure for FL. The framework involves a fixed set \(I=\{1,\ldots,N\}\) of clients, each with a local dataset. In every communication round \(t\in\{0,E,\ldots,(T-1)E\}\), the server sends the current global model state to the clients and requests them to perform local computations based on the global state and their local dataset and to send back an update. At the end of each round, the server updates the weights of the model by aggregating the clients' updates, and the process repeats. For the client selection procedure (Create-Client-Set), the local training procedure (Client-Update), and the aggregation of the local updates (Aggregation), several possibilities exist. For results concerning client selection, see [McMahan et al. (2017); Chen et al. (2017); Huang et al. (2022); Cho et al. (2022)]. Regarding local updates, available methods range from simple variants of SGD, such as mini-batch SGD [Gower et al. (2019)], to more sophisticated approaches, such as PAGE [Zhao et al. (2021)]; further results are included in [Berahas et al. (2022); Liu et al. (2020); Reddi et al. (2020); Jin et al. (2022)]. We now describe in greater detail the existing routines for aggregation, the central topic of this work. In 2017, the seminal work of McMahan et al.
(2017b) proposed a plain coordinate-wise mean averaging of model weights; later, Yurochkin et al. (2019) proposed an extension that takes the invariance of network weights under permutation into account. The same year, Bonawitz et al. (2019) proposed an auto-tuned, communication-efficient secure aggregation. More recently, Cho et al. (2020) extended the coordinate-wise mean averaging approach, substituting it with a term that amplifies the contribution of the most informative updates over less informative ones. Then, Sannara et al. (2021) adjusted this to enforce closeness of local and global updates. Last year, Charles et al. (2022) introduced an aggregation method that allows clients to select which values of the global model are sent to them. Despite these methodological advances, there is neither theoretical nor practical evidence for the right criterion for choosing a particular aggregation strategy.
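As a point of reference for the aggregation strategies discussed below, the following NumPy sketch (not taken from any of the cited implementations) shows the baseline coordinate-wise weighted averaging of client models, with aggregation coefficients that sum to one.

```python
from typing import List, Sequence
import numpy as np

def aggregate(client_weights: List[List[np.ndarray]],
              alphas: Sequence[float]) -> List[np.ndarray]:
    """Return sum_j alpha_j * w_j, computed layer by layer."""
    alphas = np.asarray(alphas, dtype=float)
    assert np.isclose(alphas.sum(), 1.0), "aggregation coefficients must sum to 1"
    n_layers = len(client_weights[0])
    return [sum(a * w[k] for a, w in zip(alphas, client_weights))
            for k in range(n_layers)]

# Example: three clients with two weight arrays each, weighted by local dataset size.
rng = np.random.default_rng(0)
clients = [[rng.standard_normal((4, 2)), rng.standard_normal(2)] for _ in range(3)]
sizes = np.array([100.0, 50.0, 50.0])
global_model = aggregate(clients, sizes / sizes.sum())
```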
\begin{table}
\begin{tabular}{l c} \hline \hline
**Notation** & **Meaning** \\ \hline
\(N\) & number of clients \\
\(C\) & ratio of clients selected at each round \\
\(I_{t}\) & set of clients selected at round \(t\) \\
\(T\) & number of communication rounds \\
\(E\) & number of local epochs \\
\(w_{t}\) & weights of the global model at time \(t\) \\
\(w^{i}_{t}\) & model of client \(i\) at time \(t\) \\
\(F_{i}\) & loss function of the \(i\)-th client \\
\(F\) & weighted average of the client loss functions \\
\(\eta_{t}\) & learning rate at time \(t\) \\
\(\zeta^{i}_{t}\) & mini-batch of client \(i\) at time \(t\) \\
\({}^{*}\) & index of optimality \\ \hline \hline \end{tabular}
\end{table}
Table 1: Conventions used in this paper.
Figure 1: Simplified representation of classic FL framework.
### The Challenges from Computational Heterogeneity in FL
As we have seen above, several emerging FL algorithms have been proposed. Due to the high cost of real deployments, existing studies in FL usually rely on simulations [Li et al. (2019); Chen et al. (2019); Bagdasaryan et al. (2020)] and have no data describing how devices actually participate in FL [Yang et al. (2021)]. The direct consequence of this approach is that these studies build on excessively idealized assumptions, for instance that all devices are constantly available for training and equipped with the same resources, e.g., the same CPU and RAM capacity [Li et al. (2019); Chen et al. (2019); Bagdasaryan et al. (2020); Mohri et al. (2019); Konecny et al. (2016)]. However, these assumptions can be inadequate for FL deployment in practice. FL, in fact, requires a large number of devices to collaboratively accomplish a learning task, which poses a great challenge, namely _heterogeneity_ [Li et al. (2020)], that impacts FL both in terms of accuracy and training time. We can divide heterogeneity into two main macro-classes: _system heterogeneity_ and _statistical heterogeneity_.
In federated settings, system heterogeneity refers to the significant variability of system characteristics across the network, as devices may differ in terms of hardware, network connectivity, and battery power. These system characteristics make issues such as stragglers significantly more prevalent than in typical data center environments. Several solutions to handle system heterogeneity have been proposed, e.g., asynchronous communication, see [Dai et al. (2015); Duchi et al. (2013)], active device sampling, see [Nishio and Yonetani (2019)], and fault tolerance, see [Jiang and Agrawal (2018)]. Statistical heterogeneity deals instead with the challenges that arise when training federated models from data that is not identically distributed across devices, both in terms of modeling the data and in terms of analyzing the convergence behavior of the associated training procedures. There exists a large body of literature in machine learning that has modeled statistical heterogeneity via methods such as meta-learning and multi-task learning [Chen et al. (2018); Corinzia et al. (2019); Eichner et al. (2019); Khodak et al. (2019)].
Although heterogeneity is associated with several potential problems, such as _free-riding_ [Karimireddy et al. (2022)], theoretical guarantees for the convergence of heterogeneous federated learning have recently been established [Zhou et al. (2022); Zhou et al.; Wang et al. (2020)], and approaches to overcome these challenges have been formalized, e.g., through the introduction of _Personalized Federated Learning_ (PFL) [Tan et al. (2022); Bergou et al. (2022); Cho et al. (2021)] and of heterogeneous ensemble knowledge transfer [Cho et al. (2022)]. Several methods have been proposed to attack the heterogeneity arising from specific sources such as data, see [Shang et al. (2022); Horvath et al. (2022); Mendieta et al. (2022)], and partial or biased client participation, see [Jhunjhunwala et al. (2022); Cho et al. (2022)]. In what follows, we discuss how to tackle heterogeneous local-update performance on edge clients, propose new aggregation methods, test them experimentally, and provide insights on their convergence properties, their stability, and client participation during training.
## 2 Tackling Performance-Heterogeneity in FL: The Theoretical Side
We study theoretically how the heterogeneous performances of clients can be exploited in aggregation methods (under reasonable assumptions). The analysis presented is fairly general and allows us to extract information concerning the existing trade-off between accuracy and efficiency. This analysis can be seen as a follow-up of Cho et al. (2020), the first work presenting a convergence analysis of federated learning with biased client selection that is cognizant of the training progress at each client, and in which it was discovered that biasing the client selection towards clients with higher local losses increases the rate of convergence (compared to unbiased client selection).
### Framework of Analysis & Preliminaries
Throughout the analysis, we assume that all clients are involved in each local and global iteration, i.e., \(C=1\). We denote by \(F_{i}\) the loss function of the \(i\)-th client and by \(F\) the weighted average of the \(F_{i}\) with respect to the distribution \(P:=\{p_{i}\mid i\in I\}\). We restrict our analysis to the case in which the Client-Update procedure is mini-batch SGD with a decaying learning rate \(\eta_{t}\) and mini-batches \(\zeta_{t}^{i}\) of cardinality \(b\). In particular:
\[g_{i}(w_{t}^{i}):=\frac{1}{b}\sum_{\zeta\in\zeta_{t}^{i}}\nabla F_{i}(w_{t}^{ i},\zeta) \tag{1}\]
In any iteration, the weights of the model are defined as follows:
\[w_{t+1}^{i}:=\left\{\begin{array}{ll}w_{t}^{i}-\eta_{t}g_{i}(w_{t}^{i})&\text{if }E\nmid t\\[2mm] \sum_{j\in I}\alpha_{t}^{j}\left(w_{t}^{j}-\eta_{t}g_{j}(w_{t}^{j})\right)=:w_{t+1}&\text{if }E\mid t\end{array}\right. \tag{2}\]
where \(\alpha_{t}^{j}\) is the _aggregation coefficient_ assigned to client \(j\) at communication round \(t\), and where, for each \(t\), the following constraint holds:
\[\sum_{j\in I}\alpha_{t}^{j}=1 \tag{3}\]
In our mathematical analysis, we introduce a few assumptions:
**Assumption 1** (\(L\)_-smoothness_) \(F_{1},\dots,F_{N}\) satisfy:
\[\forall v,w,F_{i}(v)\leq F_{i}(w)+\langle v-w,\nabla F_{i}(w)\rangle+\tfrac{L} {2}\left\|v-w\right\|_{2}^{2}\]
**Assumption 2** (\(\mu\)_-convexity_) \(F_{1},\dots,F_{N}\) satisfy:
\[\forall v,w,F_{i}(v)\geq F_{i}(w)+\langle v-w,\nabla F_{i}(w)\rangle+\tfrac{ \mu}{2}\left\|v-w\right\|_{2}^{2}\]
**Assumption 3** The variance of the stochastic gradient descent is bounded, more formally, the following condition is satisfied:
\[\forall i\in I,\,\mathbb{E}\left\|g_{i}(w_{i})-\nabla F_{i}(w_{i})\right\|^{2 }\leq\sigma^{2}\]
**Assumption 4** The stochastic gradient's expected squared norm is uniformly bounded, in mathematical terms:
\[\forall i\in I,\,\mathbb{E}\left\|g_{i}(w_{i})\right\|^{2}\leq G^{2}\]
What follows is closely related to what was previously done in Cho et al. (2022); the novelty arises from the fact that: (a) instead of analyzing the selection of clients, we examine the attribution of the weights to them, and (b) we extensively study the expression of the _learning error_, from which we
derive principled aggregation strategies.
To facilitate the convergence analysis, we extend the definition of the global sequence \(w_{t}\) to all \(t\) by setting, for \(t\not\equiv 0\pmod{E}\):
\[w_{t+1}:=w_{t}-\eta_{t}\sum\limits_{i\in I}\alpha_{t}^{i}g_{i}(w_{t}^{i}) \tag{4}\]
where \(\alpha_{t}^{i}=p_{i}\). Let \(w^{\star}\) be the global optimum of \(F\) and \(w_{i}^{\star}\) the global optimum of \(F_{i}\). We define \(F^{\star}\) as \(F(w^{\star})\), \(F_{i}^{\star}\) as \(F_{i}(w_{i}^{\star})\), and the _heterogeneity_ as:
\[\Gamma:=F^{\star}-\sum\limits_{i\in I}p_{i}F_{i}^{\star} \tag{5}\]
We list below a couple of results useful in proving the main theorem.
**Lemma 1**: _Let \(f\) be a \(L\)-smooth function with a unique global minimum at \(w^{\star}\). Then :_
\[\forall w,\hskip 28.452756pt||\nabla f(w)||^{2}\leq 2L(f(w)-f(w^{\star})) \tag{6}\]
**Lemma 2**: _With the same notations as above and defining \(\mathds{E}[.]\) as the total expectation over all random sources, the expected average discrepancy between \(w_{t}\) and \(w_{t}^{i}\) is bounded:_
\[\mathds{E}\left[\sum\limits_{i\in I}\alpha_{t}^{i}\left\|w_{t}-w_{t}^{i} \right\|^{2}\right]\leq 16\eta_{t}^{2}E^{2}G^{2} \tag{7}\]
Before presenting the main results, we define the _weighting skew_ \(\rho\) (see footnote 1) as:
Footnote 1: We observe that \(\rho(t,w)\) is not defined when \(F(w)=\sum\limits_{i\in I}p_{i}F_{i}^{\star}\); below, we always assume that this is not the case.
\[\rho(t,w):=\frac{\sum\limits_{i\in I}\alpha_{t}^{i}(F_{i}(w)-F_{i}^{\star})}{ F(w)-\sum\limits_{i\in I}p_{i}F_{i}^{\star}} \tag{8}\]
and introduce the notations \(\overline{\rho}:=\min\limits_{t\equiv 0\ (\text{mod }E)}\rho(t,w_{t})\) and \(\tilde{\rho}:=\max\limits_{t\equiv 0\ (\text{mod }E)}\rho(t,w^{\star})\).
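As an illustration, the weighting skew of Eq. (8) can be computed from per-client quantities as in the sketch below (plain Python; in practice \(F_{i}(w)\) is estimated on local data and \(F_{i}^{\star}\) is generally unknown and must be approximated, which is an assumption of this sketch).

```python
def weighting_skew(alphas, p, local_losses, local_opt_losses):
    """rho(t, w) of Eq. (8): alpha-weighted loss gaps, normalized by
    F(w) - sum_i p_i F_i^*, where F(w) = sum_i p_i F_i(w)."""
    gaps = [f - f_star for f, f_star in zip(local_losses, local_opt_losses)]
    numerator = sum(a * g for a, g in zip(alphas, gaps))
    # the denominator equals F(w) - sum_i p_i F_i^*; rho is undefined when it vanishes (footnote 1)
    denominator = sum(pi * g for pi, g in zip(p, gaps))
    return numerator / denominator
```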
### Main Theorem and Consequences
In the framework outlined above, we state an extension of the main theorem presented in Cho et al. (2022b), adapted to our extended goal. The proofs are available in Appendix A.
**Theorem 3**: _Under assumptions (1 - 4), the following holds:_
\[\begin{split}\mathds{E}\left[\left\|w_{t+1}-w^{\star}\right\|^{2 }\right]\leq\left(1-\eta_{t}\mu\left(1+\frac{3}{8}\overline{\rho}\right) \right)\mathds{E}\left[\left\|w_{t}-w^{\star}\right\|^{2}\right]\\ +\eta_{t}^{2}\left(32E^{2}G^{2}+6\overline{\rho}L\Gamma+\sigma^{2 }\right)\\ +2\eta_{t}\Gamma\left(\tilde{\rho}-\overline{\rho}\right)\end{split} \tag{9}\]
From Theorem 3, we can directly deduce Corollary 4 below.
**Corollary 4**: _Assuming \(\eta_{t}=\frac{1}{\mu\left(t+\gamma\right)}\) and \(\gamma=\frac{4L}{\mu}\), the following bound holds:_
\[\mathbb{E}\left[F(w_{T})\right]-F^{\star}\leq\frac{1}{T+\gamma}\mathcal{V}(\overline{\rho},\tilde{\rho})+\mathcal{E}(\overline{\rho},\tilde{\rho}) \tag{10}\]
_where:_
\[\mathcal{V}(\overline{\rho},\tilde{\rho})=\frac{4L(32E^{2}G^{2}+\sigma^{2})}{3\mu^{2}\overline{\rho}}+\frac{8L^{2}\Gamma}{\mu^{2}}+\frac{L\gamma\left\|w^{0}-w^{\star}\right\|^{2}}{2}\]
\[\mathcal{E}(\overline{\rho},\tilde{\rho})=\frac{8L\Gamma}{3\mu}\left(\frac{\tilde{\rho}}{\overline{\rho}}-1\right)\]
**Remark 5**: _Corollary 4 implies that:_
\[\mathbb{E}\left[F(w_{T})-F^{\star}\right]=O\left(1/T\right) \tag{11}\]
The mathematical expressions \(\mathcal{V}\) and \(\mathcal{E}\) are estimates of the _speed of convergence_ and of the _learning error_, respectively. A complex multi-objective optimization problem arises when trying to maximize the speed while minimizing the error. We decouple these two quantities and optimize them separately, without underestimating the existing trade-off between them. This procedure allows us to outline the global trends, but it does not imply the universal optimality of the strategies defined below.
**Remark 6**: _Since \(\frac{8L^{2}\Gamma}{\mu^{2}}+\frac{L\gamma\left\|w^{0}-w^{\star}\right\|^{2}}{2}\) is a constant depending only on the data and the initial guess, and \(\overline{\rho}\) may be arbitrarily large, we can deduce from Corollary 4 the existence of a minimal value of the convergence-speed term, given by:_
\[\mathcal{V}_{\min}:=\frac{8L^{2}\Gamma}{\mu^{2}}+\frac{L\gamma\left\|w^{0}-w^{ \star}\right\|^{2}}{2} \tag{12}\]
In this framework, we can analyze all the possible scenarios, starting from the one in which \(\Gamma=0\), which can be referred to as the _error-free case_ and corresponds to an IID dataset.
#### Error-free Framework
Under the assumption that \(\Gamma=0\), the main theorem can be leveraged as follows:
\[\mathbb{E}\left[\left\|w_{t+1}-w^{\star}\right\|^{2}\right]\leq\left(1-\eta_{t }\mu\left(1+\frac{3}{8}\overline{\rho}\right)\right)\mathbb{E}\left[\left\|w _{t}-w^{\star}\right\|^{2}\right]+\eta_{t}^{2}\left(32E^{2}G^{2}+\sigma^{2}\right) \tag{13}\]
and applying Corollary 4, we derive the following inequality:
\[\mathbb{E}\left[F(w_{T})\right]-F^{\star}\leq\frac{1}{T+\gamma}\left[\frac{4L(32E^{2}G^{2}+\sigma^{2})}{3\mu^{2}\overline{\rho}}+\frac{L\gamma\left\|w^{0}-w^{\star}\right\|^{2}}{2}\right] \tag{14}\]
Despite its simplicity, this setting is interesting since the error term vanishes; we can therefore deduce a truly optimal algorithm, given by the maximization of \(\overline{\rho}\) (see footnote 2), achieved when:
Footnote 2: \(\overline{\rho}\) is well defined as long as \(F(w_{t})\neq F(w^{\star})\) for all \(t\), which is a reasonable assumption.
\[\alpha_{t}^{i}=\left\{\begin{array}{ll}\frac{1}{|J_{t}|}&\mbox{if $i\in J_{t}$}\\ 0&\mbox{else}\end{array}\right. \tag{15}\]
where \(J_{t}=\underset{i\in I}{\arg\max}(F_{i}(w_{t})-F_{i}^{\star})\).
#### General Framework
In the general case, both \(\mathcal{V}\) and \(\mathcal{E}\) depend on the choice of the \(\alpha_{t}^{i}\). As already noticed, this raises a multi-objective problem that does not allow for a joint optimization of the terms \(\mathcal{V}\) and \(\mathcal{E}\). Consequently, we provide an approximate optimization that builds upon the existing trade-off between convergence speed and accuracy (see footnote 3).
Footnote 3: It is important to notice that the bounds for \(\mathcal{V}\) and \(\mathcal{E}\) are not tight. Consequently we cannot guarantee the unconditional optimality of the strategies proposed.
**Remark 7**: _We observe that optimizing the convergence speed while "forgetting" about the error amounts to maximizing \(\overline{\rho}\), exactly as done in the error-free case. Instead, minimizing \(\mathcal{E}(\overline{\rho},\tilde{\rho})\) while neglecting \(\mathcal{V}\) amounts to minimizing \(\frac{\tilde{\rho}}{\overline{\rho}}-1\). This is achieved when \(\alpha_{t}^{i}=p_{i}\), which gives \(\mathcal{E}=0\)._
Now, knowing that \(\alpha_{t}^{i}=p_{i}\) ensures obtaining optimal accuracy, we assume \(\alpha_{t}^{i}=\kappa_{t}^{i}p_{i}\). The following notation is used:
\[\pi_{t}=\min_{i\in I}\ \kappa_{t}^{i},\quad\Pi_{t}=\max_{i\in I}\ \kappa_{t}^{i},\quad\pi=\min_{t}\ \pi_{t},\quad\text{and}\quad\Pi=\max_{t}\ \Pi_{t} \tag{16}\]
Without loss of generality, we assume that \(\forall t,\ \pi_{t}>0\). If this were not the case, we could assign an infinitesimal value to the \(\alpha_{t}^{i}\) that are equal to zero and adjust the remaining \(\alpha_{t}^{i}\) accordingly. Under these assumptions, we have \(\frac{\tilde{\rho}}{\overline{\rho}}\leq\frac{\Pi}{\pi}\) and \(\frac{1}{\overline{\rho}}\leq\frac{1}{\pi}\), and therefore:
\[\mathbb{E}[F(w_{T})]-F^{\star}\leq\frac{1}{T+\gamma}\left[C+\frac{\lambda_{1} }{\pi}\right]+\lambda_{2}\frac{\Pi-\pi}{\pi} \tag{17}\]
where \(C,\lambda_{1}\) and \(\lambda_{2}\) are constants. Since,
\[\Pi\min\ p_{i}\leq\max\ \kappa_{t}^{i}p_{i}\leq 1-(N-1)\min\ \kappa_{t}^{i}p_{i} \leq 1-(N-1)\pi\min\ p_{i} \tag{18}\]
we can infer that \(\Pi\leq\frac{1-(N-1)\pi\min\ p_{i}}{\min\ p_{i}}\) and \(\mathcal{E}\leq\frac{1}{\pi\min\ p_{i}}-N\), from which, we obtain:
\[\mathbb{E}[F(w_{T})]-F^{\star}\leq\frac{1}{T+\gamma}\left[C+\frac{\lambda_{1} }{\pi}\right]+\lambda_{2}\left(\frac{1}{\pi\min\ p_{i}}-N\right) \tag{19}\]
**Remark 8**: _This last inequality has an intrinsic interest; in fact, it allows us to state that the new speed and error bounds depend exclusively on \(\pi\), and to guarantee a bound on the error term (once a properly chosen minimal value of the \(\alpha_{t}^{i}\) is set)._
### Derived Aggregation Strategies
The theoretical results discussed above provide several important insights for the design of aggregation algorithms.
The first algorithm presented is the generalized FedAvg, which corresponds to taking \(\alpha_{t}^{i}=p_{i}\) for every \(t\) and \(i\in I\). This strategy is inspired by McMahan et al. (2017b) and boils down to taking the weighted average (with weights \(p_{i}\)) of the local models as the global model. As observed above, this approach is optimal in terms of accuracy (since \(\mathcal{E}=0\)) and its convergence speed can be bounded as follows:
\[\mathcal{V}=\mathcal{V}_{\min}+\frac{4L\left(32E^{2}G^{2}+\sigma^{2}\right)}{3\mu^{2}} \tag{20}\]
The second algorithm proposed is called FedMax and it is defined as follows. For any \(t\):
\[\alpha_{t}^{i}=\left\{\begin{array}{ll}\frac{1}{|J_{t}|}&\mbox{if }i\in J_{t}\\ \\ 0&\mbox{else}\end{array}\right. \tag{21}\]
where:
\[J_{t}=\operatorname*{arg\,max}_{i\in I}(F_{i}(w_{t})-F_{i}^{\star}) \tag{22}\]
Note that, in practice, two distinct clients essentially never attain exactly the same value, i.e. \(|J_{t}|=1\). This strategy is our original algorithmic contribution and consists in taking as global model the local model of the client with the worst performance at the end of the previous communication round. This approach partially leverages the differences among the values of the loss functions of the different clients and, as observed above, gives an optimal bound on the convergence speed. To improve performance in real-world applications and to avoid over-training on outliers, we introduce a couple of variants of the previous algorithm, namely FedMax(\(k\)) and FedSoftMax.
FedMax(\(k\)), instead of taking the client with the highest loss, considers the first \(k\) clients when sorted in decreasing order with respect to \(F_{i}(w_{t})-F_{i}^{\star}\). This strategy boils down to the client selection strategy _Power-of-Choice_ introduced in Cho et al. (2022b). In FedSoftMax, for any \(t\) and \(i\in I\), we take \(\alpha_{t}^{i}=p_{i}\exp(T^{-1}(F_{i}(w_{t})-F_{i}^{\star}))\), re-normalized, i.e., a softened version of the original routine. The reason behind the introduction of this method is to reinforce the stability of FedMax, but it also has the theoretical advantage of ensuring nonzero values of the \(\alpha_{t}^{i}\). Note that, for this method, we can obtain an upper bound on the error by applying inequality (19). A minimal sketch of these aggregation rules is given below.
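The sketch below (our own illustration in Python/NumPy; the function and argument names are not from the paper) contrasts the three aggregation rules. In practice the gaps \(F_{i}(w_{t})-F_{i}^{\star}\) must be approximated, e.g. by the local training losses, since the \(F_{i}^{\star}\) are unknown.

```python
import numpy as np

def aggregation_weights(strategy, p, gaps, k=1, temperature=10.0):
    """Return the coefficients alpha_t^i for one communication round.

    p: data weights p_i; gaps: estimates of F_i(w_t) - F_i^* per client.
    """
    p = np.asarray(p, dtype=float)
    gaps = np.asarray(gaps, dtype=float)
    if strategy == "fedavg":          # alpha_i = p_i (zero learning error, Remark 7)
        alphas = p.copy()
    elif strategy == "fedmax_k":      # uniform weight on the k clients with the largest gap
        alphas = np.zeros_like(p)
        alphas[np.argsort(-gaps)[:k]] = 1.0
    elif strategy == "fedsoftmax":    # alpha_i proportional to p_i * exp(gap_i / T)
        alphas = p * np.exp(gaps / temperature)
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return alphas / alphas.sum()      # re-normalization enforces Eq. (3)
```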
## 3 Tackling Performance-Heterogeneity in FL: The Practical Side
One of the greatest difficulties in developing new ML algorithms is to combine theoretical guarantees with practical requirements. With the aim of providing algorithms suitable for exploitation in applications, we conduct an experimental analysis with a twofold purpose: to establish the performance of the proposed strategies and to identify their potential weaknesses and strengths.
### Experimental Framework
We describe below the full experimental framework involved in the study of the strategies described above. The design of the experimental apparatus is deliberately minimal, so as to focus as much as possible on the effects of the aggregation procedure.
**Synthetic Data.** We generate two distinct synthetic data splits, corresponding to the IID and to the non-IID framework. For the first, we sort the data according to labels, choose the cardinality of the different local datasets, and distribute the items while preserving an identical label distribution over the clients. For the second, we sort the dataset by label and divide it into multiple contiguous shards according to the criteria given in McMahan et al. (2017); we then distribute these shards among the clients, generating an unbalanced partition. We do not require clients to have the same number of samples, but each client receives at least one shard. We also implemented a "hand-picked" splitting system, to enable better control of the label distribution among clients. Both methods were tested and gave similar results in all experiments. A sketch of the shard-based split is given below.
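The following is a minimal sketch of the shard-based non-IID split (Python/NumPy; the function name, the random assignment of leftover shards, and the default seed are our own choices, not the authors').

```python
import numpy as np

def shard_partition(labels, num_clients=50, shard_size=60, rng=None):
    """Sort samples by label, cut them into contiguous shards, and deal the shards
    to clients so that every client gets at least one shard (assumes there are at
    least as many shards as clients). Returns one index array per client."""
    if rng is None:
        rng = np.random.default_rng(0)
    order = np.argsort(labels)                                    # sort by label
    shards = [order[i:i + shard_size] for i in range(0, len(order), shard_size)]
    rng.shuffle(shards)
    # one shard per client first, then distribute the remaining shards at random
    owners = list(range(num_clients)) + list(rng.integers(0, num_clients, len(shards) - num_clients))
    client_indices = [[] for _ in range(num_clients)]
    for shard, owner in zip(shards, owners):
        client_indices[owner].extend(shard.tolist())
    return [np.array(idx) for idx in client_indices]
```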
**Model.** The model used is fairly basic (see footnote 4): a CNN with two \(3\times 3\) convolution layers (the first with 32 channels, the second with 64, each followed by \(2\times 2\) max pooling), a fully connected layer with 1600 units and ReLU activation, and a final softmax output layer. The local learning algorithm is mini-batch SGD with a batch size fixed at 64. A sketch of this architecture is given after the footnote below.
Footnote 4: Much better performance could be achieved using more complex models developed throughout the literature. In this work, the performance of the network on the task is secondary, and we therefore opt for the simplest model used in practice.
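A sketch of this architecture in PyTorch is given below. The reading of "1600 units" (which for 28x28 inputs coincides with the flattened feature size) and the number of output classes are our assumptions, not specifications from the paper.

```python
import torch.nn as nn

class SmallCNN(nn.Module):
    """Two 3x3 convolutions (32, then 64 channels), each followed by 2x2 max pooling,
    a 1600-unit fully connected layer with ReLU, and a final classification layer
    (the softmax is applied implicitly by the cross-entropy loss)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 26x26 -> 13x13
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),  # 13x13 -> 11x11 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                      # 64 * 5 * 5 = 1600 features
            nn.Linear(1600, 1600), nn.ReLU(),  # "fully connected layer with 1600 units"
            nn.Linear(1600, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```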
**Parameters.** The parameters involved in the experimental analysis are summarized in Table 2.
**Task description.** The task is the classification of the images of the MNIST (Deng, 2012) and Fashion-MNIST (Xiao et al., 2017) datasets, both in the IID and in the non-IID framework.
**Evaluation.** To evaluate the performance of the proposed strategies, we focus on two measures: the _accuracy_ reached after a fixed number of communication rounds, and the index \(R_{90}\), which corresponds to the number of communication rounds required to reach 90% accuracy (a small helper is sketched below). We furthermore keep track of the accuracy and of the loss of the global model at each communication round.
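As a small illustration, \(R_{90}\) can be computed from the recorded accuracy curve as follows (our own helper, assuming one accuracy value is stored per communication round).

```python
def rounds_to_target(accuracy_per_round, target=0.90):
    """R_90: the first communication round at which the global model's accuracy
    reaches the target; returns None if the target is never reached."""
    for round_index, accuracy in enumerate(accuracy_per_round, start=1):
        if accuracy >= target:
            return round_index
    return None
```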
**Resources.** All the strategies are implemented in PyTorch and trained on an Intel Xeon E5-2670 (2.60 GHz, 8 cores, 64 GB RAM).
### Experimental Analysis
We focus on results related to FedMax, the main method introduced.
**Comparative Analysis of the Strategies Proposed.** We have tested the proposed methods extensively in the IID, non-IID, and extremely non-IID frameworks. We observe that, in the last two cases, it is sufficient to focus on the first 50 communication rounds in order to encounter a significant discrepancy among the methods. While the difference between the final accuracies obtained through FedSoftMax and FedAvg is rather low in every framework (see Figures 2 and 3), a large gap is evident in how quickly the learning system achieves 90% accuracy in the non-IID (NIID) and very-non-IID (TNIID) cases (see Figure 4 and Table 3). The experiments therefore give a clear confirmation of the theory, and tend to show that the upper bound provided by the main theorem is quite tight.
**Remark 9**: _FedSoftMax has a higher convergence speed than FedAvg. The discrepancy increases with the bias of the data with respect to the closest IID distribution. Moreover, FedSoftMax introduces a rather small bias that is directly proportional to the distance of the clients' data distribution from the IID one._
To better understand the optimality of FedSoftMax, we have investigated how its performance changes when the temperature parameter \(T\) is modified. The experimental results show that, if we restrict the temperatures considered to the range between 5 and 30, a higher temperature entails a higher convergence speed (see Figure 5).
| Symbol | Meaning | Value |
| --- | --- | --- |
| \(N\) | number of clients | 50 |
| \(C\) | ratio of clients | 1 |
| \(N\) | size of each client's dataset (non-IID framework) | 200 |
| \(\#_{\text{shard}}\) | cardinality of the shards (non-IID framework) | 60 |
| \(\#_{\text{shard, v}}\) | cardinality of the shards (very-non-IID framework) | 100 |
| \(T\) | number of communication rounds | 50 |
| \(E\) | number of local epochs | 2 |
| \(\eta_{t}\) | learning rate at round \(t\) | \(10^{-4}\cdot 0.99^{t}\) |
| \(b\) | cardinality of the batch | 64 |

Table 2: Parameters used in the experiments.
Figure 2: **Comparative analysis between FedAvg and FedSoftMax: final and intermediate accuracy in the IID framework.**
The horizontal axis accounts for the communication round and the vertical axis for the accuracy reached. These results are the ones obtained on MNIST.
Figure 4: **Comparative analysis between FedAvg and FedSoftMax.**
The horizontal axis accounts for the communication round and the vertical axis for the accuracy reached. The bottom-left image refers to the IID framework, the top-left to the non-IID framework, and the right one to the extremely non-IID framework. These results are the ones obtained on MNIST.
Figure 3: **Comparative analysis between FedAvg and FedSoftMax: final and intermediate accuracy in the non-IID framework.**
The horizontal axis accounts for the communication round and the vertical axis for the accuracy reached. These results are the ones obtained on MNIST.
| Framework | Algorithm | Confidence interval for \(R_{90}\) |
| --- | --- | --- |
| TNIID | FedAvg | 14.88 - 15.84 |
| TNIID | FedSoftMax | 7.86 - 8.70 |
| NIID | FedAvg | 12.73 - 13.96 |
| NIID | FedSoftMax | 10.00 - 11.32 |
| IID | FedAvg | 21.53 - 24.58 |
| IID | FedSoftMax | 21.61 - 24.50 |

Table 3: Results concerning convergence speed (\(R_{90}\)).
Figure 5: **Comparative analysis between FedAvg and FedSoftMax(T) for several values of the temperature \(T\).**
The horizontal axis accounts for the communication round and the vertical axis for the accuracy reached. These results are the ones obtained on MNIST in the non-IID framework.
**Weaknesses and Strengths of the Strategies Proposed.** One potential weakness that emerged from the theory is that the method may converge to a different optimum. We were therefore interested in studying whether a significant difference could be observed experimentally and whether this might preclude the use of the method. With this purpose, we studied the evolution of the \(\alpha_{i}\). The result is extremely positive: not only is the imbalance produced minimal, but we also observe that the \(\alpha_{i}\) almost always converge to the \(p_{i}\) at a rate of \(1/t\) (see Figure 6). All this entails the following remark:
**Remark 10**: _FedSoftMax is a natural smooth interpolation between FedAvg and FedMax(\(k\)), taking advantage of the higher convergence speed of FedMax(\(k\)) in the initial phase and of the stability and correctness of FedAvg in the rest of the learning._
In fact, FedMax(\(k\)) methods, while speeding up the process at the beginning, give poor results when it comes to the final accuracy. This can be well visualized by analyzing the top losses of the clients during a FedSoftMax run (see Figure 7). We see that only a small group of clients is used throughout the process; while this is profitable for speed in the first rounds, it becomes a major drawback in the later rounds, since we only use a small amount of data that (by non-IIDness) is not representative of the whole dataset. Therein lies the power of FedSoftMax, which makes it possible to exploit both the speed-up ability of FedMax and, at the end, the data of all clients as in FedAvg.
Figure 6: **Convergence of the \(\alpha_{i}\) to the \(p_{i}\) as a function of time (represented through the communication rounds) in the IID framework.**
The horizontal axis accounts for the communication round and the vertical axis for the difference between the \(\alpha_{i}\) and the \(p_{i}\). These results are the ones obtained on MNIST.
Finally, we became interested in measuring the stability of FedSoftMax compared to FedAvg. For this purpose we use a lag-one autocorrelation measure based on the normalized standard deviation of the variations over one round. The results show a somewhat more pronounced tendency toward instability for FedSoftMax, which nevertheless appears to be reasonably stable (see Table 4). One possible reading of this measure is sketched below.
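The exact formula behind Table 4 is not spelled out in the text; the sketch below is only one plausible reading of "a lag-one autocorrelation measure based on the normalized standard deviation of the variations over one round", and should be taken as an assumption rather than the authors' definition.

```python
import numpy as np

def smoothness(accuracy_per_round):
    """Dispersion of the round-to-round variations of the accuracy curve,
    normalized by their mean absolute size (one possible reading of the measure)."""
    variations = np.diff(np.asarray(accuracy_per_round, dtype=float))
    return float(np.std(variations) / (np.mean(np.abs(variations)) + 1e-12))
```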
### Discussion & Final Remarks
We have extended the insightful analysis already carried out by Cho et al. (2022b), and examined further the joint evolution of \(\overline{\rho}\) and \(\tilde{\rho}\), obtaining simpler bounds. Taking advantage of these theoretical
| Framework | Algorithm | Smoothness value |
| --- | --- | --- |
| TNIID | FedAvg | 1.37 |
| TNIID | FedSoftMax | 2.32 |
| NIID | FedAvg | 1.34 |
| NIID | FedSoftMax | 1.73 |
| IID | FedAvg | 2.81 |
| IID | FedSoftMax | 2.77 |

Table 4: Results concerning stability.
Figure 7: **Client participation in the aggregation process.**
The horizontal axis accounts for a unique identifier of the client and the vertical axis for the communication round. The value of the \(\alpha_{i}\) is encoded through colors: the ten highest values are colored in yellow, while the others are colored in blue. These results are the ones obtained on MNIST.
insights, we have proposed a family of aggregation strategies, among which FedSoftMax is the most relevant. Here, we complement our previous work by investigating the latter empirically, with the goal of identifying its weaknesses and quantifying its strengths for potential exploitation in practice. The experimental results fully confirm the theory and also suggest that the bias introduced by mismatched weighting of the data distribution does not affect the quality of the final results. Moreover, this method seems to naturally converge to FedAvg while leveraging the biases introduced in the first communication rounds.
### Further directions of research
Several aspects that emerged may be the object of further analysis. We report them as associated research questions. From a theoretical point of view, we propose several possible directions to investigate:
**Proposed Research Question 1** Is it possible to obtain expressive bounds while weakening at least one of the four assumptions introduced? We believe interesting results could be obtained by weakening assumption 3.
**Proposed Research Question 2** Can we substitute the learning algorithm used throughout the analysis, i.e. mini-batch SGD, with others? We believe that interesting results may be obtained even with fairly natural algorithms such as GD.
**Proposed Research Question 3** The experiments have shown that the \(\alpha_{i}\) coefficients converge to the \(p_{i}\) in a framework where the datasets are not too strongly non-IID. We might thus be interested both in proving this claim under supplementary hypotheses, and in studying its consequences for the adaptation of the main theorem.

Moreover, we observe that in the case of a very non-IID dataset the \(\alpha_{i}\) do not converge to the \(p_{i}\); they still converge to some fixed limits, and it would be interesting to study these limits and their potential correlation with the \(F_{i}^{*}\) or other client-dependent parameters.
**Proposed Research Question 4** Figure 5 showed the correlation between the parameter \(T\) of the FedSoftMax method and the gain in speed. Further experiments, not shown here, indicate that increasing \(T^{-1}\) further increases the speed up to a certain limit, i.e. the accuracy curves tend to converge to a "maximal-speed" curve. Not only could we empirically study the properties of this limit curve, but we could also try to give theoretical evidence for this observation.
**Proposed Research Question 5** Can we extend our results to the non-convex setting? We suggest starting by introducing some simplifying conditions, such as the ones associated with the Polyak-Lojasiewicz inequality.
From a practical point of view, it could be interesting to investigate if there is a practical advantage induced by the speed-up given by FedSoftMax.
### Acknowledgements
_This work was granted access to HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme_ Investissements d'Avenir _supervised by the Agence Nationale pour la Recherche._
|
2310.03734
|
Leveraging Unpaired Data for Vision-Language Generative Models via Cycle
Consistency
|
Current vision-language generative models rely on expansive corpora of paired
image-text data to attain optimal performance and generalization capabilities.
However, automatically collecting such data (e.g. via large-scale web scraping)
leads to low quality and poor image-text correlation, while human annotation is
more accurate but requires significant manual effort and expense. We introduce
$\textbf{ITIT}$ ($\textbf{I}$n$\textbf{T}$egrating $\textbf{I}$mage
$\textbf{T}$ext): an innovative training paradigm grounded in the concept of
cycle consistency which allows vision-language training on unpaired image and
text data. ITIT is comprised of a joint image-text encoder with disjoint image
and text decoders that enable bidirectional image-to-text and text-to-image
generation in a single framework. During training, ITIT leverages a small set
of paired image-text data to ensure its output matches the input reasonably
well in both directions. Simultaneously, the model is also trained on much
larger datasets containing only images or texts. This is achieved by enforcing
cycle consistency between the original unpaired samples and the cycle-generated
counterparts. For instance, it generates a caption for a given input image and
then uses the caption to create an output image, and enforces similarity
between the input and output images. Our experiments show that ITIT with
unpaired datasets exhibits similar scaling behavior as using high-quality
paired data. We demonstrate image generation and captioning performance on par
with state-of-the-art text-to-image and image-to-text models with orders of
magnitude fewer (only 3M) paired image-text data.
|
Tianhong Li, Sangnie Bhardwaj, Yonglong Tian, Han Zhang, Jarred Barber, Dina Katabi, Guillaume Lajoie, Huiwen Chang, Dilip Krishnan
|
2023-10-05T17:55:19Z
|
http://arxiv.org/abs/2310.03734v1
|
# Leveraging unpaired data for vision-language generative models via Cycle Consistency
###### Abstract
Current vision-language generative models rely on expansive corpora of _paired_ image-text data to attain optimal performance and generalization capabilities. However, automatically collecting such data (e.g. via large-scale web scraping) leads to low quality and poor image-text correlation, while human annotation is more accurate but requires significant manual effort and expense. We introduce **ITIT** (**I**n**T**egrating **I**mage **T**ext): an innovative training paradigm grounded in the concept of cycle consistency which allows vision-language training on _unpaired_ image and text data. ITIT is comprised of a joint image-text encoder with disjoint image and text decoders that enable bidirectional image-to-text and text-to-image generation in a single framework. During training, ITIT leverages a small set of paired image-text data to ensure its output matches the input reasonably well in both directions. Simultaneously, the model is also trained on much larger datasets containing only images or texts. This is achieved by enforcing cycle consistency between the original unpaired samples and the cycle-generated counterparts. For instance, it generates a caption for a given input image and then uses the caption to create an output image, and enforces similarity between the input and output images. Our experiments show that ITIT with unpaired datasets exhibits similar scaling behavior as using high-quality paired data. We demonstrate image generation and captioning performance on par with state-of-the-art text-to-image and image-to-text models with orders of magnitude fewer (only 3M) paired image-text data.
## 1 Introduction
Image-text multimodal training has gained remarkable attention in recent years. Models for text-to-image generation show the impressive ability to synthesize realistic images from textual prompts (Rombach et al., 2022; Chang et al., 2023; Saharia et al., 2022; Yu et al., 2022; Ramesh et al., 2021). Similarly, image-to-text models have demonstrated advanced image comprehension capabilities by providing precise descriptions of input images (Chen et al., 2023; Wang et al., 2022; Li et al., 2022; 2023; Alayrac et al., 2022; Wang et al., 2022). Training these models to exceptional performance demands datasets comprising hundreds of millions to billions of _paired_ image-text samples (Schuhmann et al., 2022). The collection of very large paired datasets comes at a considerable cost as well as concerns of low quality (Jia et al., 2021). On the other hand, diverse and vast unimodal images or texts datasets remain unused in current generative vision-language training (Raffel et al., 2020; Sun et al., 2017; Zhai et al., 2022). This raises a natural question: can we leverage _unpaired_ image and text data to facilitate generative vision-language training?
The major problem with using unpaired data during vision-language training is the lack of supervision. To overcome this problem, we introduce ITIT, a novel training paradigm that uses _cycle consistency_ losses between cycle-generated images/texts and their corresponding original inputs to provide supervision for image-only and text-only data (Figure 1). ITIT utilizes a small set of paired image-text data to achieve reasonable text-to-image and image-to-text generation performance. Simultaneously, for unpaired image (text) data, ITIT generates corresponding text (image) counter
parts and employs them as inputs to reconstruct the input image (text): this corresponds to a full cycle loss. We consider two kinds of full cycles: T2I2T (starting with an unpaired text sample); and I2T2I (starting with an unpaired image sample). These two types of cycles enable us to leverage both unpaired image and text data to provide informative supervision signals for training.
To enable cycle training, we first unify image-to-text (I2T) and text-to-image (T2I) generation in the same framework, with a bi-directional image-text encoder and disjoint image and text decoders. We tokenize images into discrete visual tokens (Van Den Oord et al., 2017) and combine them with text embeddings from a pre-trained T5 model (Raffel et al., 2020) as input to the joint image-text encoder. For I2T generation, we employ an autoregressive text decoder (Wang et al., 2022), while for T2I generation we use a non-autoregressive parallel image decoder (Chang et al., 2023), which is an order of magnitude faster than autoregressive image decoders such as Yu et al. (2022).
A technical challenge of ITIT is that, state-of-the-art text-to-image and image-to-text generation processes typically involve multiple forward steps of the model (Esser et al., 2021; Chang et al., 2023; Rombach et al., 2022; Wang et al., 2022). Back-propagating gradient through all these forward steps brings significant memory and computation overheads. To solve this problem, for T2I2T cycle, we first generate the image with parallel decoding. We then back-propagate the gradient through one step of the parallel decoding process. For I2T2I cycle, we first generate the text autoregressively with multiple steps. Then we forward the text decoder once with the generated text as input, and back-propagate the gradient only to this forward step. This significantly reduces the computational overhead of the cycle training, making it feasible to apply in large model settings.
We evaluate the performance of ITIT on standard image-to-text and text-to-image generation benchmarks and demonstrate that, by leveraging unpaired data and cycle consistency, ITIT attains performance levels similar to a non-cycle baseline. However, ITIT uses up to 2 orders of magnitude lower paired data. Furthermore, ITIT scales similarly with unpaired data as the baseline does with equivalent amounts of paired data, while being much more robust to low data quality. We also compare ITIT with state-of-the-art methods and show that we can achieve comparable performance on common text-to-image and image-to-text benchmarks with substantially lesser paired data. Our contributions are summarized as follows:
* We introduce a framework that unifies text-to-image and image-to-text generation, and propose ITIT, a novel technique that enforces consistency between cycle-generated images/text and their corresponding originals. This approach allows the training of image-to-text and text-to-image models using unpaired image and text data.
* We comprehensively evaluate the proposed ITIT framework and the image-text cycle consistency method, and demonstrate that they significantly enhance model performance.
Figure 1: Overview of ITIT. For unpaired data, ITIT first generates the image/text counterpart, and then uses these generated counterparts to reconstruct the original text or image.
* We show that ITIT can achieve performance on par with state-of-the-art methods on common text-to-image and image-to-text benchmarks with far less (\(\sim\)100x less) paired data. When scaling up training data to improve model efficacy, we show that we can add only unpaired examples using our framework and achieve performance similar to scaled-up paired data, without the downsides of significant manual effort and poor pairing quality.
## 2 Literature Review
**Image-to-Text Generation.** Various works explore autonomously generating textual descriptions from input images, either training the network with generative loss alone (Wang et al., 2022; Alayrac et al., 2022; Chen et al., 2023; Li et al., 2022; 2023a), or combining it with contrastive learning (Yu et al., 2022). GIT (Wang et al., 2022) trains a model comprising an image encoder and an auto-regressive text decoder using a language modeling loss, the image encoder pre-trained with contrastive loss (Radford et al., 2021). In our work, we adopt a similar framework to GIT for our Image-to-Text (I2T) framework, but we initialize our image encoder from scratch.
**Text-to-Image Generation.** Recent works focus on two primary paradigms: diffusion-based models (Rombach et al. (2022); Dhariwal and Nichol (2021); Nichol et al. (2021); Saharia et al. (2022); Ramesh et al. (2022); Ruiz et al. (2023)); and token-based methods. Token-based strategies transform raw images into image tokens, and predict these tokens either in an autoregressive manner (Esser et al., 2021; Ramesh et al., 2021; Gafni et al., 2022; Yu et al., 2021; Ding et al., 2021; Yu et al., 2022) or in parallel (Chang et al., 2022; Li et al., 2023; Chang et al., 2023). Muse (Chang et al., 2023) demonstrates that token-based strategies with parallel decoding can be considerably faster than diffusion-based or autoregressive generative models. Since this speed advantage facilitates text-to-image synthesis during training, we adopt this strategy in our T2I framework.
**Unifying Image and Text Generation.** COBIT (You et al. (2023)) achieves this by employing distinct image and text unicoders, coupled with a unified cross-modal decoder. Additionally, CM3 (Aghajanyan et al. (2022)) and CM3Leon (Yu et al. (2023)) harness causally masked generative models trained on extensive multi-modal document datasets, and enable the synthesis of both text and images. However, all these works still heavily rely on large-scale _paired_ image-text datasets.
**Leveraging Unpaired Data in Generative Vision-Language Training.** Early works have tried to use unpaired image and text to train image captioning model in an unsupervised way (Feng et al., 2019). However, the performance is relatively poor. Recent efforts in incorporating unpaired data into generative vision-language training primarily focus on pre-trained image and text encoders (Esser et al., 2021; Roberts et al., 2019). However, these applications are limited to pre-training and do not encompass the entire generative vision-language training procedure, thus providing only incremental improvements. In some cases, researchers have explored the use of text-only data to improve text decoders (Wang et al. (2022)), utilizing text-to-text training. However, this only enhances the text decoder and not the image encoder, resulting again in constrained improvements.
**Cycle-consistency.** The concept of cycle consistency has previously been used to provide regularization and/or compensate for a lack of annotated data. Zach et al. (2010); Zhou et al. (2016); Godard et al. (2016); Zhu et al. (2017); Messikommer et al. (2022) explore it for computer vision applications such as learning dense correspondence, event detection, depth estimation, and image-to-image translation. Most related to our work is Gorti and Ma (2018), which uses text-image-text cycle consistency to perform text-to-image translation, but the performance is poor. Moreover, none of the previous works has explored the potential of cycle consistency in generative vision-language training using unpaired data.
Our novel approach diverges from preceding vision-language models that heavily rely on either a large corpus of paired image-text data, or fine-tuning methods that target only text or image encoder/decoders separately. For the first time, our method facilitates the utilization of unpaired image and text data during generative vision-language training. This innovation significantly reduces the dependency on paired image-text samples during the training process, which empowers the expansion of generative vision-language training to nearly boundless text-only and image-only datasets.
## 3 Method
ITIT is the first framework that enables generative vision-language training on unpaired image-only and text-only data. It uses a simple yet effective architecture: a unified image-text encoder and two separate image and text decoders. This design seamlessly enables text-to-image and image-to-text generation in the same framework, which paves the way for text-image-text (T2I2T) and image-text-image (I2T2I) cyclic losses. Below, we describe each component of our ITIT architecture and the cycle-consistency training paradigm in detail.
### Unified Image-Text Generation Framework
**Architecture.** We first obtain text embedding \(T=[t_{l}]_{l=1}^{L}\) from the output of a T5 encoder (Roberts et al., 2019) on the raw text. Similarly, raw images are passed through a pre-trained VQ-tokenizer (Esser et al., 2021) to output image tokens \(I=[i_{k}]_{k=1}^{K}\). \(L\) and \(K\) are the token sequence lengths for text and image, respectively. The image tokens \(I\) are then embedded with an embedding layer and concatenated with the T5 text features \(T\) as input to the image-text encoder. Modality-specific decoders then operate on the encoded image-text features to generate either text or image tokens. The text decoder is autoregressive (Wang et al., 2022), while the image decoder is parallel (Chang et al., 2023). Both encoder and decoders are based on Transformer (Vaswani et al., 2017) layers. A detailed description of the model architecture is included in Appendix B.
**Image-to-Text (I2T) Training.** As shown in Figure 2, we input masked image tokens along with empty text embedding to the image-text encoder. Masking is used to save computation, similar to MAE (He et al., 2022). We then use the features generated by the image-text encoder, as well as the ground-truth text tokens prepended with [BOS] (begin-of-sentence) token as the input to our text decoder. We use an auto-regressive language modeling (LM) loss to train the encoder and decoder:
\[\mathcal{L}_{I2T}=-\mathbb{E}_{(I,T)\in\mathcal{D}}\big{[}\sum_{l=1}^{L}\log p (t_{l}|I_{M},t_{0},\cdots,t_{l-1})\big{]}, \tag{1}\]
which is a CE loss with label smoothing 0.1. Here, \(t_{0}\) is set to be the [BOS] token. \(I_{M}\) are the (subset of) _unmasked_ tokens in \(I\) and \(p(i_{k}|I_{M},T)\) is the probability predicted by the encoder-decoder network (the 'logits' layer), \(\mathcal{D}\) is the distribution of paired image-text data. Note that the text decoder employs causal attention similar to GIT (Wang et al. (2022)): each text token only depends on the preceding text tokens and all image features.
**Text-to-Image (T2I) Training.** As shown in Figure 2, right panel, we use masked image modeling for image generation, where the training objective is to reconstruct masked image tokens conditioned on the unmasked image tokens and the paired text features. We denote the binary mask determining which image tokens are masked by \(M=[m_{k}]_{k=1}^{K}\). We use a cross-entropy loss between the ground-truth one-hot image tokens and the output of the image decoder. Specifically,
\[\mathcal{L}_{T2I}=-\mathbb{E}_{(I,T)\in\mathcal{D}}\big{[}\sum_{\forall k:m_{k }=1}\log p(i_{k}|I_{M},T)\big{]}, \tag{2}\]
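As an illustration, a minimal sketch of the masked-token loss of Eq. (2) is given below (PyTorch; the tensor shapes and the helper name are our assumptions, not the authors' code).

```python
import torch.nn.functional as F

def t2i_masked_loss(logits, image_tokens, mask):
    """Cross-entropy of Eq. (2) over masked image-token positions only.

    logits: [B, K, V] image-decoder outputs; image_tokens: [B, K] ground-truth
    token ids; mask: [B, K] boolean, True where the token is masked (m_k = 1)."""
    per_token = F.cross_entropy(
        logits.flatten(0, 1), image_tokens.flatten(), reduction="none"
    ).view_as(image_tokens)
    # average the per-token losses over the masked positions only
    return (per_token * mask).sum() / mask.sum().clamp(min=1)
```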
**Inference.** We follow GIT (Wang et al., 2022) for image-to-text inference and Muse (Chang et al., 2023) for text-to-image inference. More details are included in Appendix B.
### Training with Cycle Consistency
Our cycle consistency training paradigm allows training with image-only and text-only data. The key idea is to first synthesize the corresponding text/image from the image-only or text-only data, and then use the synthesized data as input to reconstruct the original image/text. This allows us to apply cycle consistency supervision on image-only and text-only data.
**Text-Image-Text (T2I2T) Cycle.** Our T2I2T training pipeline is shown in Figure 3, top panel. At each training iteration, we first synthesize pseudo paired image tokens \(I^{\prime}\) for input text \(T=[t_{l}]_{l=1}^{L}\) using our T2I inference pipeline. We then apply random mask \(M\) to \(I^{\prime}\), perform reconstruction on \(I^{\prime}_{M}\) with the text \(T\) using the T2I pipeline, and obtain the reconstructed synthesized image \(\tilde{I}\). This two-step process allows us to avoid the excessive memory requirements of back-propagating
gradients through all 24 steps of parallel decoding, while still training the T2I module. Finally, we randomly mask \(\tilde{I}\) and use \(\tilde{I}_{M}\) to generate text using the I2T pipeline. The objective of our cycle paradigm is to enforce consistency between this generated text and the original text. Therefore, the T2I2T cycle-consistency loss can be formulated as follows:
\[\mathcal{L}_{T2I2T}=-\mathbb{E}_{T\in\mathcal{D}_{text}}\big{[}\sum_{l=1}^{L} \log p(t_{l}|\tilde{I}_{M},t_{0},\cdots,t_{l-1})\big{]}, \tag{3}\]
This is very similar to the I2T loss in Equation (1), except that \(\tilde{I}\) is synthesized from \(T\) instead of being drawn from the image-text joint distribution.
**Image-Text-Image (I2T2I) Consistency.** Our I2T2I training pipeline is shown in Figure 3, bottom panel. Similar to the T2I2T pipeline, we first synthesize pseudo paired text tokens \(T^{\prime}\) for input image tokens \(I\) using our I2T inference pipeline. We then use the I2T training pipeline to predict \(\tilde{t}_{l}\) from \(t^{\prime}_{0},\cdots,t^{\prime}_{l-1}\) and \(I_{M}\). As before, this avoids the excessive memory requirements of back-propagating gradients through the auto-regressive greedy decoding. We then mask \(I\), and pass it through the T2I pipeline with the predicted \(\tilde{T}\) to reconstruct the masked image tokens. Again, the loss enforces consistency between the reconstructed and the original image tokens using cross-entropy:
\[\mathcal{L}_{I2T2I}=-\mathbb{E}_{I\in\mathcal{D}_{image}}\big{[}\sum_{\forall k :m_{k}=1}\log p(i_{k}|I_{M},\tilde{T})\big{]}, \tag{4}\]
**Gradient Estimation.** One challenge in our cycle training is that \(\tilde{i}_{k}=\arg\max p(i_{k}|I^{\prime}_{M},T)\) and \(\tilde{t}_{l}=\arg\max p(t_{l}|I_{M},t^{\prime}_{0},\cdots,t^{\prime}_{l-1})\) are not differentiable. To solve this, we use a straight-through estimation on the predicted logits to approximate the gradient. Specifically, we directly copy the gradient on the one-hot prediction to the predicted logits after softmax. We show in Section 4.4 that this helps improve both text-to-image and image-to-text performance. A minimal sketch of this estimator is given below.
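The sketch below (PyTorch; the helper name is ours) illustrates the straight-through trick described above: the forward value is the one-hot argmax, while the backward pass copies the gradient onto the softmax probabilities.

```python
import torch

def straight_through_one_hot(logits, dim=-1):
    """One-hot argmax in the forward pass; gradients flow to the softmax probabilities."""
    probs = logits.softmax(dim)
    hard = torch.zeros_like(probs).scatter_(dim, probs.argmax(dim, keepdim=True), 1.0)
    return hard + probs - probs.detach()  # value equals `hard`, gradient goes to `probs`
```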
Figure 3: Text-image-text (top) and image-text-image (bottom) cycle training pipelines for _unpaired_ image and text data. We use pseudo-generated image and text to enable the cycle consistency. Image token masks \(M\) are always randomly chosen. The dashed line denotes causal attention. Text tokens prepended with [BOS] token are used for auto-regressive language modeling loss.
Figure 2: I2T (left) and T2I (right) training pipelines for _paired_ image and text data.
## 4 Results
### Experiment Setup
**Datasets.** We use three datasets in our experiments: CC3M (Sharma et al., 2018), WebLI (Chen et al., 2023), and Shutterstock (Shutterstock, 2023). CC3M contains 3.3 million high-quality image-text pairs. WebLI (Web Language Image) contains 111 million images where the image-text pairing quality is much lower than CC3M. Thus, WebLI is significantly noisier and, as we show, leads to worse performance for I2T. Shutterstock contains 398 million images labeled by human annotators, which incurs significant expense and effort. More dataset details are included in Appendix C.
We use CC3M as our paired dataset, 50% of WebLI images as our unpaired image dataset, and the other 50% of WebLI texts as our unpaired text dataset for most of our experiments (Section 4.3 and Section 4.4). This 50%-50% split ensures that corresponding image-text pairs are not present in our unpaired image and text splits. We use the Shutterstock dataset in Section 4.2, where we analyze how ITIT scales w.r.t. different numbers of paired and unpaired data samples.
**Training.** We set the input image resolution as 256x256 to be consistent with previous literature. After passing through the VQGAN tokenizer, the image token sequence length is 16x16 (256 tokens). The raw text (maximum length of 64) is tokenized by SentencePiece tokenization (SentencePiece, 2023), and embedded using a pre-trained T5 encoder. These embeddings are then concatenated with the image token embeddings as the input to our image-text encoder.
We experiment with ViT-B, ViT-L, and ViT-H size Transformers (Dosovitskiy et al. (2021)) for our image-text encoder. We combine the losses in Equations 1 through 4 with equal weight for training. For results in Section 4.3, we use Adafactor (Shazeer and Stern, 2018) to train the model for 1.5M steps with a batch size of 2048 (1024 for image-text pairs, 512 for unpaired images, and 512 for unpaired texts). We use a cosine learning rate schedule with 5K steps warmup and maximum learning rate \(1\times 10^{-4}\). For other experiments, we use the exact same training paradigm except that we train the models for 500K steps. More details are included in Appendix B.
**Evaluation.** We follow the commonly used MS-COCO benchmark and evaluation protocols. For image-captioning, we evaluate both the zero-shot and fine-tuning performance of IITT on the COCO Karpathy split (Karpathy and Fei-Fei, 2015) and report the CIDEr score (Vedantantam et al., 2015). For text-to-image generation, we evaluate IITT on 30K image-text pairs randomly selected from the COCO Captions training set and report the Frechet Inception Distance (FID) score (Heusel et al., 2017). CIDEr is the higher the better, and FID is the lower the better.
### Scale with Data
In this section, we comprehensively evaluate ITIT's performance with different amounts of paired and unpaired data on the Shutterstock dataset (Shutterstock, 2023), consisting of 398M image-text pairs.
Figure 4: How ITIT-H's performance scales with additional paired Shutterstock data. The baseline (T2I+I2T) is trained with paired samples only. ITIT is trained with the same number of paired samples, as well as 398M unpaired samples (the full Shutterstock dataset) using cycle loss.
Figure 4 analyzes how ITIT's performance scales with paired data. We train a baseline with only paired data, using the sum of the losses in Equation (1) and Equation (2). ITIT is trained with the same paired data as the baseline, plus the entire set of 398M images and texts present in Shutterstock as unpaired data. More paired data helps both settings, but training with unpaired data significantly improves ITIT's performance over the baseline on both image captioning and text-to-image generation. Remarkably, with only 4M paired data and 398M unpaired data, ITIT achieves _a similar performance as training with 398M paired data_. Note that ITIT does not use any samples not present in the baseline trained with 398M paired data, as all of the samples are from Shutterstock. Therefore ITIT can perform similarly to a baseline with 100x fewer image-text pairs, significantly reducing the effort and expense of generative vision-language training.
Next, we evaluate how ITIT's performance scales w.r.t. the total amount of data used. We first train a model with 1.2M paired image-text Shutterstock samples. We then evaluate the effect of adding increasing amounts of additional paired data vs. adding increasing amounts of unpaired data with cycle loss, keeping the total amount of data the same for both. As expected, we see in Figure 5 that performance scales up with additional paired data. Surprisingly, however, additional unpaired data exhibits similar scalability as paired data. In fact, we can achieve 19.2 FID and 21.0 CIDEr with only 1.2M paired and 396.8M unpaired examples, which is very competitive with the 19.0 FID and 22.2 CIDEr obtained using 398M paired examples only. This experiment thus demonstrates that when scaling up training data, practitioners can rely on adding only unpaired examples using our method and achieve similar performance as with paired data, without the extra manual effort required to collect it.
We repeat the above experiment in a more realistic setting, where the small-scale paired dataset can contain high-quality image-text pairs but a large-scale paired dataset has much lower quality. For this, we use the high-quality CC3M as the paired dataset, and the much larger WebLI as the low-quality unpaired dataset. As before, we start with a model trained on 3M paired examples (from CC3M), and add additional training data from WebLI in paired (blue) or unpaired (orange) form. As shown in Figure 5, right panel, adding low-quality image-text pairs harms image captioning performance severely for the fully-paired case. However, the ITIT regime is not affected by this low quality and scales similarly as before. This demonstrates that our method is robust to low data quality in large datasets, and can in fact be used to achieve significantly better performance in settings where paired data is present but of low quality.
### Comparison to Prior Work
In Table 1, we compare ITIT with state-of-the-art image-to-text and text-to-image models on the commonly used MS-COCO benchmark. As shown, all SOTA methods rely heavily on training on a large corpus of paired image-text data. ITIT, however, is trained with only 3M paired examples
Figure 5: How ITIT's performance scales with the total amount of data used (x-axis). The baseline (T2I + I2T) in blue is trained entirely with increasing amounts of paired data. ITIT (orange) is trained with an increasing amount of unpaired data using cycle loss, while keeping the total amount of data equal for both curves. For example, the rightmost point with Shutterstock uses 1.2M image-text pairs and 396.8M unpaired samples (half as unpaired image and half as unpaired text) for ITIT with cycle loss, and 398M image-text pairs for the baseline. _Left_: Shutterstock data as both paired and unpaired. _Right_: CC3M as paired data, and varying fractions of WebLI as additional paired / unpaired data.
(CC3M), and an additional 55M unpaired image and text examples each (WebLI). Despite this, it beats many other methods trained on much more data for text-to-image generation (FID). For I2T, it beats methods using a comparable amount of data (highlighted in green), and achieves performance competitive with other SOTA methods. We find that the pre-training data (both the mixture and the size) also makes a difference to CIDEr score. For example, GIT (Wang et al., 2022) achieves only 89.0 CIDEr fine-tuning performance on COCO captions when trained from scratch with 10M image-text pairs, which is far from its reported performance (144.8) when trained with 800M image-text pairs. Our approach is orthogonal to dataset mixture considerations, and we believe that scaling data size and variety will further enhance FID and CIDEr scores. We leave this to future work.
### Ablations
In Table 2, we ablate the effectiveness of the four components of ITIT: T2I, I2T, T2I2T, and I2T2I. As shown in rows 1-3, combining T2I and I2T training in our framework already improves image captioning performance. This is likely because the T2I training alleviates the overfitting problem of I2T training, as shown in GIT (Wang et al., 2022).
As before (Figure 5), we can see in row 4 that combining CC3M and WebLI improves text-to-image generation, but harms image captioning performance. This is because of the lower image-text pairing quality of WebLI compared to CC3M. The remaining rows demonstrate that the cycle loss alleviates this by using WebLI as unpaired data and does not depend on its image-text pairing quality. It is thus more generalizable to large-scale image-text datasets.
Next, rows 5-7 are naive baselines for using unpaired image or text data during generative vision-language training. We can simply perform text-to-text (T2T) autoregressive training without conditioning on images, which has been explored in some prior works (Wang et al. (2022)). Similarly, we can perform image-to-image (I2I) reconstructive training without conditioning on text. Such baselines do improve the performance over not using any paired data (row 3).
We consider an ablation where the gradient of the cycle consistency loss is backpropagated up until the argmax step. Hence, only half of the cycle is trained. In fact, this is equivalent to first synthesizing an image counterpart from unpaired text and then using it as a pseudo image-text pair to train the I2T model (similarly for T2I). Rows 8-10 show that the half-cycle loss achieves much better performance than non-cycle baselines.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline Methods & \#params & \#paired data & \#unpaired data & FID\(\downarrow\) & CIDEr\(\uparrow\) (\(\approx\)) & CIDEr\(\uparrow\) (\(\text{ft}\)) \\ \hline _T2I_ & & & & & & \\ StablDiffusion (Rombach et al., 2022) & 800M & 400M & - & 12.60 & - & - \\ GLIDE (Nichel et al., 2021) & 5B & 250M & - & 12.24 & - & - \\ Make-A-Scene (Gafini et al., 2022) & 4B & 35M & - & 11.84 & - & - \\ DALL-E 2 (Ramesh et al., 2022) & 3.5B & 650M & - & 10.39 & - & - \\ PARTI (Yu et al., 2022) & 750M & 500M & - & 10.71 & - & - \\ Muse-512 (Zhang et al., 2023) & 3B & 860M & - & 7.88 & - & - \\ Muse-51 (Chang et al., 2023) & 750M & 3M & - & 23.7 & - & - \\ \hline _I2T_ & & 446M & 129M & - & - & - & 136.7 \\ BiLVLM(Wang et al., 2022) & - & 1100M & 365M T & - & 24.0 & 134.8 \\ SimVLMLM(Wang et al., 2022) & \(\sim\)1.4B & 1100M & 365M T & - & 32.2 & 143.3 \\ GIT (CLIP) (Wang et al., 2022) & 681M & 800M & - & - & - & 144.8 \\ GIT (Sföratch)(Wang et al., 2022) & 129M & 10M & - & - & - & 89.0 \\ \hline _T2I-12T_ & & & & & & & \\ CoBIT-Base (You et al., 2023) & 626M & 5200M & - & 10.35 & 43.0 & 135.4 \\ CoBIT-Large (You et al., 2023) & 1091M & 5200M & - & 9.37 & 44.8 & 139.5 \\ CM3Leon (Yu et al., 2023) & 7B & 340M & - & 4.88 & 61.6 & - \\ IIIT-B & 221M & 3M & 55M I+55M T & 13.4 & 32.1 & 103.5 \\ IIIT-L & 487M & 3M & 55M I+55M T & 12.0 & 35.1 & 116.4 \\ IIIT-H & 868M & 3M & 55M I+55M T & 10.4 & 38.2 & 125.3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison with state-of-the-art text-to-image and image-to-text models on MS-COCO. The image-captioning performance is evaluated on the COCO Karpathy split, and the text-to-image generation FID is evaluated on 30K COCO images. \(\dagger\) denotes our re-implementation. We highlight in green other models that use comparable amounts of paired data. Note that the GIT (CLIP) model uses a CLIP (Radford et al., 2021) encoder pre-trained with 400M image-text pairs.
Finally, rows 11-14 show the performance of the full-cycle ITIT training. Although T2I2T favors image captioning while I2T2I favors text-to-image generation, they both show significant improvement in text-to-image generation and image captioning. Moreover, row 14 demonstrates that the two cycle losses can be combined to further improve performance. Additionally, we can see that the full cycle loss beats the half-cycle baselines (rows 8-10), demonstrating the effectiveness of the gradient estimation step.
Lastly, we find by comparing row 3 and 13 that the cycle consistency loss can slightly improve the performance even without any additional data. We believe this is because it forces better image-text alignment. However, comparing row 13 and 14 shows that the huge improvements in both text-to-image and image-to-text generation mainly stem from the usage of additional unpaired data.
### Cycle-Generation Results
With a framework that can perform both image-to-text and text-to-image, we can easily perform cycle-generation, as shown in Figure 6. With ITIT training, the cycle generation often keeps the same semantics as the input text prompt. On the other hand, without the cycle consistency training, the cycle generation misses the "blue" semantics after the first cycle. This demonstrates that our cycle consistency training not only enables integrating unpaired image and text data into generative vision-language training, but also improves image-text alignment for both image-to-text and text-to-image generation. We include a number of results of image and text generation in Appendix A (Figures 1 through 4).
\begin{table}
\begin{tabular}{c c c c c c c c|c c} \hline \hline & T2I & I2T & T2I2T & I2T2I & paired data & unpaired text & unpaired image & FID\(\downarrow\) & CIDEr\(\uparrow\) \\ \hline \multicolumn{10}{c}{_Paired data only_} \\
1 & ✓ & ✗ & ✗ & ✗ & CC3M & ✗ & ✗ & 15.5 & N/A \\
2 & ✗ & ✓ & ✗ & ✗ & CC3M & ✗ & ✗ & N/A & 19.0 \\
3 & ✓ & ✓ & ✗ & ✗ & CC3M & ✗ & ✗ & 15.7 & 23.5 \\
4 & ✓ & ✓ & ✗ & ✗ & CC3M+WebLI & ✗ & ✗ & 14.2 & 20.7 \\ \hline \multicolumn{10}{c}{_Paired+unpaired data, no cycle_} \\
5 & ✓ & ✓ & T2T & ✗ & CC3M & 50\% WebLI & ✗ & 15.1 & 26.0 \\
6 & ✓ & ✓ & ✗ & I2I & CC3M & ✗ & 50\% WebLI & 15.9 & 24.2 \\
7 & ✓ & ✓ & T2T & I2I & CC3M & 50\% WebLI & 50\% WebLI & 15.6 & 28.5 \\ \hline \multicolumn{10}{c}{_Paired+unpaired data, half cycle_} \\
8 & ✓ & ✓ & Half & ✗ & CC3M & 50\% WebLI & ✗ & 14.8 & 27.6 \\
9 & ✓ & ✓ & ✗ & Half & CC3M & ✗ & 50\% WebLI & 14.7 & 24.8 \\
10 & ✓ & ✓ & Half & Half & CC3M & 50\% WebLI & 50\% WebLI & 14.5 & 30.5 \\ \hline \multicolumn{10}{c}{_Paired+unpaired data, full cycle_} \\
11 & ✓ & ✓ & Full & ✗ & CC3M & 50\% WebLI & ✗ & 14.6 & 28.4 \\
12 & ✓ & ✓ & ✗ & Full & CC3M & ✗ & 50\% WebLI & 14.6 & 26.3 \\
13 & ✓ & ✓ & Full & Full & CC3M & CC3M & CC3M & 15.4 & 24.4 \\
14 & ✓ & ✓ & Full & Full & CC3M & 50\% WebLI & 50\% WebLI & **14.3** & **31.1** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative comparison between different variants of IITT on MS-COCO. All experiments use IITT\({}_{\text{B}}\) trained for 500K steps. We use the images from one half of the WebLI data as our unpaired image data and the texts from the other half as our unpaired text data.
Figure 6: Iteratively generating text to image to text and so on. With IITT, the generated results are more consistent than the results from a model trained without the cycle consistency loss.
## 5 Discussion
We propose IITT, a novel training scheme that for the first time incorporates unpaired images and text into generative vision-language training. Through extensive ablations, we demonstrate the effectiveness of both the T2I2T cycle and I2T2I cycle in improving text-to-image and image-to-text generation performance. As a result, IITT achieves performance competitive with state-of-the-art vision-language generative models, but with only 3 million paired image-text samples. Our method can be used even when paired image-text data is present, and is especially helpful when the pairing quality is low. Future directions include scaling IITT to larger unpaired image and text data and model sizes, and utilizing more diverse datasets.
|
2310.18472
|
Parameter-Efficient Methods for Metastases Detection from Clinical Notes
|
Understanding the progression of cancer is crucial for defining treatments
for patients. The objective of this study is to automate the detection of
metastatic liver disease from free-style computed tomography (CT) radiology
reports. Our research demonstrates that transferring knowledge using three
approaches can improve model performance. First, we utilize generic language
models (LMs), pretrained in a self-supervised manner. Second, we use a
semi-supervised approach to train our model by automatically annotating a large
unlabeled dataset; this approach substantially enhances the model's
performance. Finally, we transfer knowledge from related tasks by designing a
multi-task transfer learning methodology. We leverage the recent advancement of
parameter-efficient LM adaptation strategies to improve performance and
training efficiency. Our dataset consists of CT reports collected at Memorial
Sloan Kettering Cancer Center (MSKCC) over the course of 12 years. 2,641
reports were manually annotated by domain experts; among them, 841 reports have
been annotated for the presence of liver metastases. Our best model achieved an
F1-score of 73.8%, a precision of 84%, and a recall of 65.8%.
|
Maede Ashofteh Barabadi, Xiaodan Zhu, Wai Yip Chan, Amber L. Simpson, Richard K. G. Do
|
2023-10-27T20:30:59Z
|
http://arxiv.org/abs/2310.18472v1
|
# Parameter-Efficient Methods for Metastases Detection from Clinical Notes
###### Abstract
Understanding the progression of cancer is crucial for defining treatments for patients. The objective of this study is to automate the detection of metastatic liver disease from free-style computed tomography (CT) radiology reports. Our research demonstrates that transferring knowledge using three approaches can improve model performance. First, we utilize generic language models (LMs), pre-trained in a self-supervised manner. Second, we use a semi-supervised approach to train our model by automatically annotating a large unlabeled dataset; this approach substantially enhances the model's performance. Finally, we transfer knowledge from related tasks by designing a multi-task transfer learning methodology. We leverage the recent advancement of parameter-efficient LM adaptation strategies to improve performance and training efficiency. Our dataset consists of CT reports collected at Memorial Sloan Kettering Cancer Center (MSKCC) over the course of 12 years. 2,641 reports were manually annotated by domain experts; among them, 841 reports have been annotated for the presence of liver metastases. Our best model achieved an F1-score of 73.8%, a precision of 84%, and a recall of 65.8%.
Parameter-Efficient Tuning, Pre-trained Language Models, Metastases Detection.
## 1 Introduction
Progression of metastatic disease is often the primary cause of cancer-related death [1], thus early detection of metastasis is important for selecting targeted and other therapies. In the liver, for example, metastases can be treated more effectively when discovered early. Understanding the spatial and temporal patterns of metastases distribution would help radiologists more accurately interpret CT images for the existence of any metastasis. In order to extract the patterns, a comprehensive analysis of large-scale clinical data is necessary, but this is difficult given the unstructured nature of most electronic health records. Since cancer patients receive many CT scans as part of care, the corresponding reports contain rich data that can be mined for cancer recurrence and progression. Annotating CT reports requires domain expertise and is costly and time-consuming to perform manually on a large scale. Therefore, automation of metastatic site detection from radiology reports can substantially advance studying and treating cancer progression.
Since the amount of human-annotated data is limited, training large models has a high risk of overfitting. However, the strategy of pre-training large LMs followed by task-specific fine-tuning allows us to tailor to a new task using a small task-specific dataset. While full fine-tuning is the conventional adaptation paradigm, parameter-efficient tuning has recently been shown to achieve comparable performance by adapting only a small percentage of the parameters [2]. However, they have not received enough study in medical applications. In this work, we adapt a pre-trained LM through fine-tuning and prompt-tuning -- a typical parameter-efficient tuning approach -- to the task of detecting liver metastases. We also employ a semi-supervised approach by leveraging a dataset annotated by another machine learning model.
The data used in this study were collected at Memorial Sloan Kettering Cancer Center (MSKCC) from July 2009 to May 2022 under a waiver of informed consent; the reports follow a structured departmental template, which includes a separate header under the findings section for each organ and an impression section that summarizes key observations. Previous studies have shown promising results
by exploiting all sections related to the organ of interest [3, 4], but their applicability is limited to radiology reports with a similar structure. To reduce the reliance on the report format and increase the applicability of the proposed methods to a wider variety of radiology reports, only the impression section is used as input.
Our main contributions are as follows: (1) We propose to use parameter-efficient tuning, specifically soft prompt-tuning, to solve the problem and demonstrate that it outperforms full fine-tuning when only a small manually curated dataset is available. (2) Our methods only require the presence of an impression section (i.e., free text), which is common practice in radiology reports, so they can be applied to most radiology reports. (3) We train BERT on a large-scale, automatically annotated dataset, which leads to higher performance than training on a small, human-annotated dataset. (4) We also present a multi-task transfer learning method based on prompt-tuning, which improves performance moderately.
## 2 Dataset and Problem Description
**Dataset.** The data used in our experiments were gathered at MSKCC from July 2009 to May 2022. The entire collected data was split into two specific datasets. The first dataset was annotated by five radiologists, for the presence of liver metastases. They were instructed to read all reports available for each patient, including future reports, before deciding on the presence or absence of metastases at the time of each report. Further details of the annotation process can be found in [4]. This process resulted in 2,641 annotated reports from 314 patients. Data were partitioned into training (20%), validation (30%), and testing samples (50%) by patients. Half of the dataset records are allocated for testing, aiming to ensure evaluation quality. The remaining 50% for training and validation reflects the scarcity of data in real-life applications.
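Because the split is made by patient rather than by report, all reports from a given patient fall in the same partition. A minimal sketch of such a patient-level split is shown below; the `patient_id` field and the helper name are illustrative assumptions, not the study's actual data schema, while the 20/30/50 ratios mirror the description above.

```python
import random

def split_by_patient(reports, ratios=(0.2, 0.3, 0.5), seed=0):
    """Partition report records into train/val/test sets by patient ID.

    `reports` is assumed to be a list of dicts with a "patient_id" key (a
    placeholder schema, not the study's actual data format). Splitting by
    patient prevents reports of the same patient leaking across splits.
    """
    patients = sorted({r["patient_id"] for r in reports})
    random.Random(seed).shuffle(patients)
    n_train = int(ratios[0] * len(patients))
    n_val = int(ratios[1] * len(patients))
    buckets = {
        "train": set(patients[:n_train]),
        "val": set(patients[n_train:n_train + n_val]),
        "test": set(patients[n_train + n_val:]),
    }
    return {name: [r for r in reports if r["patient_id"] in ids]
            for name, ids in buckets.items()}
```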
The records in the second dataset were automatically annotated with a fine-tuned BERT model trained following the method in [3]. The annotating model had access to the dedicated organ section and impression section. This automatically annotated dataset consists of 1,032,709 radiology reports from 192,650 patients and has annotations for 13 organs: _liver_, _lungs_, _pleura_, _thoracic nodes_, _spleen_, _adrenal glands_, _renal_, _abdominopelvic nodes_, _pelvic organs_, _bowel/peritoneum_, and _bones/soft tissues_. Since automatically annotated labels are noisy, the evaluation of all trained models was done on the human-annotated test set, regardless of their training data.
**Problem Formulation.** We formulate the problem of detecting liver metastasis from the impression section of a radiology report as a binary classification task. Our model input is the impression section of the report to closely mimic the real-life setup. Table 1 shows some sample impression texts. Some of the texts are relatively non-informative, like example 2, while others are more detailed. We denote the training set as \(\{(x,y)\}\), where \(x\) is an impression text, and \(y\in\{0,1\}\) is the ground-truth label, where \(1\) indicates the presence of liver metastasis and \(0\) indicates its absence. We use \(p_{\theta}(x)\) to denote the probability of the positive class predicted by a model parameterized by \(\theta\).
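As a concrete illustration of this formulation, the sketch below wires a BERT-base encoder to a single-logit head trained with binary cross-entropy, so that a sigmoid over the logit plays the role of \(p_{\theta}(x)\). The [CLS] pooling and the head are assumptions made for illustration, not necessarily the exact architecture used in the experiments.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class ImpressionClassifier(nn.Module):
    """p_theta(x): probability of liver metastasis given an impression text."""

    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]    # [CLS] token representation
        return self.head(cls).squeeze(-1)    # logit; sigmoid gives p_theta(x)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = ImpressionClassifier()
batch = tokenizer(["Since <date>, no interval changes."],
                  return_tensors="pt", padding=True, truncation=True)
logit = model(batch["input_ids"], batch["attention_mask"])
loss = nn.BCEWithLogitsLoss()(logit, torch.tensor([0.0]))  # y = 0: no metastasis
```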
## 3 Related Work
**Analyses of Cancer Patient Clinical Records.** Previous research on detecting metastasis has analyzed CT images [5]. However, using CT reports instead of images provides more comprehensive
\begin{table}
\begin{tabular}{c c} \hline \hline
1 & Since \(<\)date\(>\), 1. Stable collection at the hepatic resection margin. \\ \hline
2 & Since \(<\)date\(>\), no interval changes. \\ \hline
3 & Since \(<\)date\(>\), 1. Status post right hemicolectomy with mural soft tissue thickening or retained material in the colon just distal to the anastomosis. Correlation with endoscopy recommended. Email sent to \(<\)person\(>\). 2. Status post partial hepatic resection with no evidence of new hepatic lesion. Reduced size of fluid adjacent to resection margin consistent with reduced postoperative change. 3. Stable tiny pulmonary nodules. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of Impression Text
information, as radiologists consider a patient's medical history when interpreting the images. Researchers have applied a wide range of natural language processing (NLP) techniques to interpret CT reports, from rule-based methods [6] to classical machine learning algorithms [7, 8] to deep neural networks [9]. For example, the authors in [3] used both classical NLP methods -- a TF-IDF feature extractor and SVM/random forest classifiers -- and BERT to detect metastatic sites from structured radiology reports. Another study utilized long short-term memory (LSTM) and convolutional neural network (CNN) and found that accessing previous reports is beneficial in detecting metastasis [4]. Although these two works show promising results, their data follow the previously mentioned departmental template, so the application of their models is limited to reports from a very specific institute. To the best of our knowledge, our work is the first to address this limitation by performing metastasis detection based solely on the impression section.
**Parameter-Efficient Tuning for Classification.** The most common paradigm of adapting general pre-trained LMs to a specific task is fine-tuning, in which all parameters of the pre-trained model are updated using data for the downstream task. However, as LMs have grown inexorably larger, the cost of fine-tuning has become burdensome. To address this issue, researchers have introduced parameter-efficient methods that freeze (do not update) all or part of the LM parameters. These methods either fine-tune only a small portion of model parameters, such as BitFit [10] and child-tuning [11], or introduce new parameters and train them from scratch, such as adapter-tuning [12]. Prompt-tuning is a parameter-efficient method that prepends extra tokens to the keys and values in the attention layers [13]. The concept of prompt-tuning was first introduced in [14], which demonstrated promising results on natural language generation tasks. Subsequently, [13] employed the method (with some modifications) on classification tasks by translating them into a text-to-text format. It yielded comparable performance to fine-tuning when the model size exceeded one billion parameters. P-tuning v2 [2] further extended this research to natural language understanding (NLU) by adding a trainable layer on top of the LM. Their proposed architecture performs comparably with fine-tuning over different scales. In this work, we use P-tuning v2 to train a classifier for metastasis detection.
**Parameter-Efficient Multi-Task Transfer Learning.** Multi-task transfer learning is a strategy that enhances the performance of models on a target task by transferring useful knowledge from related tasks. Prior studies have investigated multi-task approaches that are compatible with prompt-tuning. For example, SPoT [15] suggests initializing the downstream task prompts with prompts that have been tuned on a mixture of related tasks. Meanwhile, HyperPELT [16] trains a hypernetwork that generates trainable parameters for the main model, including prompt tokens. Another approach, ATTEMPT [17], learns prompts for all the source tasks and then creates an instance-wise prompt for the target task by combining the source tasks' prompts and a newly initialized prompt using an attention block. We will discuss how our method is different from theirs in the methodology section.
## 4 Methodology
To address the scarcity of manually annotated data, we employ several strategies. Firstly, we utilize pre-trained LMs by adapting prompt-tuning to reduce the risk of overfitting. Secondly, we augment the training data by automatically annotating a large dataset that would be challenging to label manually. Lastly, we present a multi-task transfer learning framework that allows the model to leverage information from other organs. This method builds upon the prompt-tuning approach and formulates the final target task prompt as a linear combination of source prompts. Figure 1 illustrates this process. We have 13 source prompts, \(P_{1},P_{2},...,P_{13}\), but only three of them are shown in Figure 1 for the sake of demonstration. The source prompts were learned using P-tuning v2 [2] on the source tasks of detecting metastasis in different organs, including the liver. P-tuning v2 and our prompt attention mechanism are described in detail in the following sections.
**Prompt-Tuning.** Assume we have an encoder building on any Transformer-based LM with a classifier layer on top of the last representation layer. We denote this architecture as \(p_{\theta,\theta_{c}}(x)\), where \(\theta\) and \(\theta_{c}\) refer to the LM parameters and classification head parameters, respectively. In fine-tuning, we tune all parameters by optimizing \(min_{(\theta,\theta_{c})}\mathrm{BCELoss}(p_{\theta,\theta_{c}}(x),y)\) over all (\(x\), \(y\)) pairs from the training
set. BCELoss refers to binary cross-entropy loss, the conventional loss function for classification problems. In P-tuning v2 [2], prompt tokens are prepended to the keys and values of the attention modules in all transformer layers, as described in Equation 4.1. \(h_{i}\) is the output of \(i\)-th transformer encoder layer, and \(f_{i}\) is the output of the attention layer in the same transformer block, while \(q_{i}\), \(k_{i}\), and \(v_{i}\) denote the query matrix, key matrix, and value matrix in the \(i\)-th layer, which are obtained by transferring the last layer output with \(W_{i}^{Q}\), \(W_{i}^{K}\), and \(W_{i}^{V}\) matrices to new latent spaces. Before computing attention, we concatenate key prompt tokens \(p_{i}^{K}\in\mathbf{R}^{d_{m}\times pl}\) and value prompt tokens \(p_{i}^{V}\in\mathbf{R}^{d_{m}\times pl}\) with the key and value matrices where \(pl\) refers to prompt length.
\[q_{i},k_{i},v_{i}=W_{i}^{Q}h_{i-1},W_{i}^{K}h_{i-1},W_{i}^{V}h_{i-1} \tag{4.1}\]
\[f_{i}=\mathrm{MultiHeadAttention}(q_{i},[p_{i}^{K};k_{i}],[p_{i}^{V};v_{i}])\]
The LM parameters are frozen during prompt-tuning. The only trainable parameters are the prompt tokens and the classification head. So, we can formulate the prompt-tuning optimization problem as \(min_{(\theta_{c},p^{K},p^{V})}\mathrm{BCELoss}(p_{\theta,\theta_{c},p^{K},p^{V}}(x),y)\). Depending on the prompt length, P-tuning v2 reduces the number of trainable parameters to 0.5-2% of that of full fine-tuning. We did not observe any improvement from reparameterization and thus learned the prompt tokens directly.
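A minimal single-head sketch of the key/value prompt injection of Equation 4.1 is given below: only the prompt tokens are trainable, while the projection matrices stand in for frozen pre-trained LM weights. The shapes, initialisation, and single-head simplification are illustrative assumptions rather than the exact P-tuning v2 implementation.

```python
import math
import torch
from torch import nn

class PrefixAttention(nn.Module):
    """Single-head attention with key/value prompt tokens as in Eq. 4.1."""

    def __init__(self, d_model=768, prompt_len=16):
        super().__init__()
        self.W_q = nn.Linear(d_model, d_model, bias=False)
        self.W_k = nn.Linear(d_model, d_model, bias=False)
        self.W_v = nn.Linear(d_model, d_model, bias=False)
        for lin in (self.W_q, self.W_k, self.W_v):
            lin.weight.requires_grad_(False)        # frozen LM parameters
        self.p_k = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
        self.p_v = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, h):                            # h: (batch, seq, d_model)
        q, k, v = self.W_q(h), self.W_k(h), self.W_v(h)
        k = torch.cat([self.p_k.expand(h.size(0), -1, -1), k], dim=1)
        v = torch.cat([self.p_v.expand(h.size(0), -1, -1), v], dim=1)
        attn = torch.softmax(q @ k.transpose(1, 2) / math.sqrt(h.size(-1)), dim=-1)
        return attn @ v                              # f_i in Eq. 4.1
```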
**Attentional Mixture of Prompts.** After obtaining source prompts from the prompt-tuning method, we interpolated them to form a new prompt for the target task using an attention module (Figure 1). The source prompt weights \(w_{i}\) were determined by the attention between the target task query \(q\) and keys \(k_{i}\). To generate keys, we first reduce the dimensionality of the source prompts by max pooling and make a compact representation \(\hat{P}_{i}\in\mathbf{R}^{d_{m}}\), where \(d_{m}\) represents the LM hidden size, which is 768 for BERT-base. We then map the max-pooled source prompts to a new space via transformation matrix \(W_{K}\), and apply layer normalization to prevent gradients from becoming excessively large. The attention module calculates the target prompt using Equation 4.2, where \(e\) and \(n\) are Euler's number and number of source tasks, respectively.
\[k_{i}=\mathrm{LayerNorm}(W_{K}\hat{P}_{i})\qquad w_{i}=\frac{(q\cdot k_{i}/(e\cdot d_{m}))^{2}}{\sum_{j=1}^{n}(q\cdot k_{j}/(e\cdot d_{m}))^{2}}\qquad P_{target}=\sum_{j=1}^{n}w_{j}P_{j} \tag{4.2}\]
The conventional attention method uses _softmax_ to normalize weights, which tends to assign a high weight to the liver source prompt and very small weights to other source prompts. This impedes the effective transfer of knowledge between tasks. Instead, we apply a degree-2 polynomial kernel to produce more evenly distributed weights. We scale the dot product of the key and query to make the result independent of the model's hidden size. \(W_{K}\) and \(q\) are trainable parameters of the attention block, while other components, including source prompts, remain frozen. We prepend \(P_{target}\) tokens to all model layers and pass input through LM to compute the model's output.
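The following sketch mirrors Equation 4.2: the frozen source prompts are max-pooled, projected and layer-normalised into keys, the weights are computed with the scaled degree-2 polynomial kernel, and the target prompt is their weighted sum. Tensor shapes and initialisation are assumptions made for illustration.

```python
import math
import torch
from torch import nn

class PromptMixture(nn.Module):
    """Attentional mixture of frozen source prompts, following Eq. 4.2."""

    def __init__(self, source_prompts, d_model=768):
        super().__init__()
        # source_prompts: tensor of shape (n_tasks, prompt_len, d_model),
        # obtained beforehand by P-tuning v2 on the source tasks (frozen here).
        self.register_buffer("P", source_prompts)
        self.W_K = nn.Linear(d_model, d_model, bias=False)   # trainable
        self.q = nn.Parameter(torch.randn(d_model) * 0.02)   # trainable query
        self.norm = nn.LayerNorm(d_model)

    def forward(self):
        P_hat = self.P.max(dim=1).values                     # max-pool: (n_tasks, d_m)
        k = self.norm(self.W_K(P_hat))                       # keys k_i
        scale = math.e * self.P.size(-1)                     # e * d_m
        scores = (self.q @ k.transpose(0, 1) / scale) ** 2   # degree-2 kernel
        w = scores / scores.sum()                            # weights w_i
        return (w[:, None, None] * self.P).sum(dim=0)        # P_target
```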
In the multiple target tasks case, the attention module parameters can be shared. After training is finished, \(P_{target}\) can be calculated once and saved. Our method is different from ATTEMPT [17], which requires both the attention module and source prompts during inference in order to compute its
Figure 1: Proposed multi-task soft prompt architecture.
instance-dependent attention query, leading to more computation and storage. Our method operates like P-tuning v2 during inference with no additional parameters or computation steps.
## 5 Experiments and Results
**Experiment Setup.** We evaluated all models on the human-annotated test set. We fine-tuned BERT using both the human-annotated and automatic-annotated datasets. Additionally, we obtained prompt-tuned models on both datasets, which also leveraged BERT-base as the backbone LM. Our Multi-task model was solely trained on the automatic-annotated data, as it provided metastasis annotation for multiple organs. The implementation of P-tuning v2 was based on the source code provided by the authors1. The models were trained for a maximum of 1,000 epochs on human-annotated data and 10 epochs on automatic-annotated data. The best checkpoint was selected based on the F1-score on the validation set. To address the problem of data imbalance, we upsampled the positive class to balance the number of samples per class. We found the best batch size, learning rate, and prompt length, when applicable, based on F1-score on the development set.
Footnote 1: [https://github.com/THUDM/P-tuning-v2](https://github.com/THUDM/P-tuning-v2)
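The class balancing mentioned in the setup can be done by replicating positive examples; the sketch below samples positives with replacement until the two classes have equal counts, which is one reasonable implementation rather than necessarily the exact procedure used in the experiments.

```python
import random

def upsample_positives(examples, labels, seed=0):
    """Duplicate positive-class items until classes are balanced (sketch).

    Assumes both classes are present and the positive class is the minority,
    as in the metastasis data described above.
    """
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    extra = [rng.choice(pos) for _ in range(max(0, len(neg) - len(pos)))]
    idx = list(range(len(labels))) + extra
    rng.shuffle(idx)
    return [examples[i] for i in idx], [labels[i] for i in idx]
```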
**Experiment Results.** The performance of the models is summarized in Table 2. On manually annotated data (_manual_), prompt-tuning improves the test F1-score by almost three points (from 69.0% to 71.9%) with only 1% tunable parameters compared to the fine-tuning (1.2M vs. 109M), showing that prompt-tuning performs better in the low-data setting, where only a limited amount of (manually annotated) training data is available. This can be attributed to the fact that prompt-tuning has far fewer parameters, making it less prone to over-fitting, which can be seen from the difference in performance between the validation and test set.
When the amount of training data is much larger using automatically annotated data (_automatic_), with around 1 million samples, fine-tuning and prompt-tuning perform similarly. In this case, prompt-tuning is still preferable, since it is computationally more efficient during training and can be served in shared mode with other tasks with considerably reduced memory (1.6M tunable parameters vs. 109M in fine-tuning). This benefit will be more significant as the pre-trained models continue to grow significantly larger every year.
Our proposed multi-task approach surpasses both prompt-tuning and fine-tuning. These outcomes suggest that transferring knowledge from related tasks in the medical domain can enhance the performance of the prompt-tuning method while maintaining parameter efficiency. Our experiments only utilized 13 source tasks, and incorporating more related tasks may result in greater improvements.
Table 2 also shows that the models trained on automatically annotated data outperform those trained on human-annotated data for both fine-tuning and prompt-tuning. This suggests that, even with parameter-efficient methods, a few hundred annotated records are not sufficient to obtain high performance for liver metastasis detection from impression text. While manually annotating large datasets is time-consuming and resource-intensive, automatically annotating data with a model that has access to more information from the input report is a low-cost alternative that our results show is worth pursuing.
\begin{table}
\begin{tabular}{l|c||c|c c c|r} \hline \hline Method & Training data & Val. F1 & Test F1 & Precision & Recall & \# Tunable param \\ \hline Fine-tuning & manual & 75.8 & 69.0 & 74.3 & 64.3 & 109M \\ Prompt-tuning & manual & 75.6 & 71.9 & 69.1 & 74.9 & 1,236K \\ \hline Fine-tuning & automatic & 79.7 & 73.4 & 89.7 & 62.1 & 109M \\ Prompt-tuning & automatic & 79.6 & 73.3 & 86.0 & 63.8 & 1,624K \\ Multi-task model & automatic & 79.7 & **73.8** & 84.0 & 65.8 & 2,218K \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of different models. The Val. F1 and Test F1 refer to F1-scores on the validation and test set, respectively, while _manual_ and _automatic_ refer to using manually annotated and automatically annotated training data, respectively. The improvement of the multi-task model over both the fine-tuning and prompt-tuning is statistically significant (p < 0.01) under the one-tailed paired T-test.
## 6 Conclusion
In this paper, we address the identification of metastatic liver disease from free-style radiology reports, removing restrictive assumptions about the report structure. Our results indicate that soft prompt-tuning, as a typical parameter-efficient method, surpasses fine-tuning in the low-data setting and achieves comparable results with a large training set. This implies that prompt-tuning can be used to build more efficient models without sacrificing performance. Additionally, we proposed a multi-task transfer learning framework and found that it improves metastasis detection by leveraging information from related tasks. We also demonstrated the usefulness of training on large automatically annotated data via a semi-supervised approach. This suggests that automatically annotating large datasets is an effective way to overcome the challenge of limited labeled data in tasks with similar settings. These techniques have the potential to be applied to other tasks in the medical domain that have a similar setup.
## Acknowledgements
This research is supported by the Vector Scholarship in Artificial Intelligence, provided through the Vector Institute. 2 The research is partially supported by NSERC Discovery Grants.
Footnote 2: [https://vectorinstitute.ai/](https://vectorinstitute.ai/)
|
2303.00305
|
A family of $2$-groups and an associated family of semisymmetric,
locally $2$-arc-transitive graphs
|
A mixed dihedral group is a group $H$ with two disjoint subgroups $X$ and
$Y$, each elementary abelian of order $2^n$, such that $H$ is generated by
$X\cup Y$, and $H/H'\cong X\times Y$. In this paper, for each $n\geq 2$, we
construct a mixed dihedral $2$-group $H$ of nilpotency class $3$ and order
$2^a$ where $a=(n^3+n^2+4n)/2$, and a corresponding graph $\Sigma$, which is
the clique graph of a Cayley graph of $H$. We prove that $\Sigma$ is
semisymmetric, that is, ${\rm Aut}(\Sigma)$ acts transitively on the edges, but
intransitively on the vertices, of $\Sigma$. These graphs are the first known
semisymmetric graphs constructed from groups that are not $2$-generated (indeed
$H$ requires $2n$ generators). Additionally, we prove that $\Sigma$ is locally
$2$-arc-transitive, and is a normal cover of the `basic' locally
$2$-arc-transitive graph ${\rm K}_{2^n,2^n}$. As such, the construction of this
family of graphs contributes to the investigation of normal covers of
prime-power order of basic locally $2$-arc-transitive graphs -- the `local'
analogue of a question posed by C.~H.~Li.
|
Daniel R. Hawtin, Jin-Xin Zhou, Cheryl E. Praeger
|
2023-03-01T08:05:45Z
|
http://arxiv.org/abs/2303.00305v1
|
# A family of 2-groups and an associated family of semisymmetric, locally 2-arc-transitive graphs
###### Abstract
A _mixed dihedral group_ is a group \(H\) with two disjoint subgroups \(X\) and \(Y\), each elementary abelian of order \(2^{n}\), such that \(H\) is generated by \(X\cup Y\), and \(H/H^{\prime}\cong X\times Y\). In this paper, for each \(n\geq 2\), we construct a mixed dihedral 2-group \(H\) of nilpotency class 3 and order \(2^{a}\) where \(a=(n^{3}+n^{2}+4n)/2\), and a corresponding graph \(\Sigma\), which is the clique graph of a Cayley graph of \(H\). We prove that \(\Sigma\) is semisymmetric, that is, \(\mathrm{Aut}(\Sigma)\) acts transitively on the edges, but intransitively on the vertices, of \(\Sigma\). These graphs are the first known semisymmetric graphs constructed from groups that are not 2-generated (indeed \(H\) requires \(2n\) generators). Additionally, we prove that \(\Sigma\) is locally 2-arc-transitive, and is a normal cover of the 'basic' locally 2-arc-transitive graph \(\mathbf{K}_{2^{n},2^{n}}\). As such, the construction of this family of graphs contributes to the investigation of normal covers of prime-power order of basic locally 2-arc-transitive graphs - the 'local' analogue of a question posed by C. H. Li.
**Key words:** semisymmetric, 2-arc-transitive, edge-transitive, normal cover, Cayley graph
**2000 Mathematics subject classification:** 05C38, 20B25
## 1 Introduction
Many graphs with a lot of symmetry arise from constructions based on groups. These include Cayley graphs, and more generally arc-transitive coset graphs, all of which are vertex-transitive. More recently the Cayley graph construction was extended to a theory of bi-Cayley graphs in [25] which led to the construction of the first (to our knowledge) infinite family of semisymmetric graphs based on finite \(2\)-groups.
_Semisymmetric graphs._ These are regular graphs (that is, each vertex has the same valency) which are edge-transitive but not vertex-transitive. They have been studied for more than \(50\) years. In 1967, in what is perhaps the first paper published on the subject, Folkman gave a method for constructing examples of semisymmetric graphs from abelian groups [5, Theorem 4] and posed a number of questions; notably, he asked for _all values of \(v,k\) such that there exists a semisymmetric graph on \(v\) vertices (that is, of order \(v\)) with valency \(k\)_. By 2006, all cubic (valency \(3\)) semisymmetric graphs on up to \(768\) vertices had been enumerated (by Ivanov [13] for orders up to \(28\) in 1987, and the rest by Conder et al. [2]). Ming Yao Xu alerted the third author that the list contained no examples with \(2\)-power order, and it turned out that the theory of bi-Cayley graphs developed in [25] could be applied to construct, for each \(n\geq 2\), a cubic semisymmetric graph of order \(2^{2n+7}\) which is a bi-Cayley graph for a \(2\)-group \(H\) of order \(2^{2n+6}\). The group \(H\) was \(2\)-generated with derived quotient \(H/H^{\prime}\cong C_{2^{n}}\times C_{2^{n}}\) and \(|H^{\prime}|=2^{6}\). An additional family of semisymmetric bi-Cayley graphs was given by Conder et al. [3, Example 5.2 and Proposition 5.4] in 2020. This time the graphs had order \(4n\) and valency \(2k\), with \(k\) odd, and were constructed as bi-Cayley graphs for a dihedral group \(D_{2n}\) of order \(2n\), with the requirement that some element of \(\mathbb{Z}_{n}^{*}\) has multiplicative order \(2k\). Thus although the valencies in this new family were unbounded, the groups used in the construction were still \(2\)-generated.
Our aim in this paper is to present a new infinite family of semisymmetric graphs based on very different kinds of \(2\)-groups. For each \(n\geq 2\) we construct (see Definitions 1.1, 1.2 and Theorem 1.3) a semisymmetric graph of order \(2^{n^{2}(n+1)/2+n+1}\) and valency \(2^{n}\), based on a \(2\)-group \(H\) with \(H/H^{\prime}=C_{2}^{2n}\) (which implies that \(H\) requires \(2n\) generators). The idea for our construction came from our recent paper [10] where we studied a natural Cayley graph \(\Gamma(H)\) for a group \(H\) (not necessarily a \(2\)-group) with disjoint subgroups \(X,Y\) such that \(X\cong Y\cong C_{2}^{n}\), \(H=\langle X,Y\rangle\), and \(H/H^{\prime}\cong C_{2}^{2n}\) (Definition 1.1). It turns out that, in the case where \(\Gamma(H)\) is edge-transitive, its clique graph has many desirable properties [10, Theorem 1.6]. Here we apply this theory to an explicit family of such groups \(H\) with nilpotency class \(3\) and show that the associated clique graphs of the Cayley graphs \(\Gamma(H)\) are semisymmetric (Theorem 1.3).
_Locally \(2\)-arc-transitive graphs._ In making this construction we had an additional objective in mind. Our construction produces semisymmetric graphs which are locally \(2\)-arc-transitive (Theorem 1.3). The \(2\)-arcs in a graph \(\Sigma\) are vertex-triples \((u,v,w)\) such that \(\{u,v\}\) and \(\{v,w\}\) are edges and \(u\neq w\), and \(\Sigma\) is said to be _locally \((G,2)\)-arc-transitive_ if \(G\leq\operatorname{Aut}(\Gamma)\) and for each vertex \(u\), the stabiliser \(G_{u}\) is transitive on the \(2\)-arcs \((u,v,w)\) starting at \(u\). For a finite connected locally \((G,2)\)-arc-transitive graph \(\Sigma\), the group \(G\) is edge-transitive, and either \(G\) is vertex-transitive or \(\Sigma\) is bipartite and the two parts of the vertex-bipartition are the \(G\) vertex-orbits. Such graphs had been extensively studied since the seminal work of Tutte [24] in the 1940s, and more recently the second author (in 1993 [20, Theorem 4.1]
for \(G\)-vertex-transitive graphs, and in 2004 with Giudici and Li [7, Theorems 1.2 and 1.3] for \(G\)-vertex-intransitive graphs) identified a sub-family of 'basic' locally \((G,2)\)-arc-transitive graphs such that each finite connected locally \((G,2)\)-arc-transitive graph is a _normal cover_ of a basic example (see Section 2.1 for a discussion of these concepts). This new approach allows effective use of modern permutation group theory and the finite simple group classification to study these graphs. We note that, if a locally \((G,2)\)-arc-transitive, \(G\)-vertex-intransitive graph is a normal cover of a basic graph \(\Sigma_{0}\), then \(\Sigma_{0}\) may have additional symmetry not inherited from \(G\); in particular it may be vertex-transitive.
The family of 'basic' graphs that are covered by the graphs in our construction are the complete bipartite graphs \({\bf K}_{2^{n},2^{n}}\), which of course are vertex-transitive. They form one of a small number of families of basic locally \((G,2)\)-arc-transitive, \(G\)-vertex-transitive graphs of prime power order classified in [14], sharpening the classification arising from [12, 20, 21] for prime power orders (see Subsection 2.1). They also arise, as we prove in Theorem 1.3, as basic graphs covered by \(G\)-vertex-intransitive, locally \((G,2)\)-arc-transitive graphs of 2-power order. It would be good to have an extension of Li's classification (of the vertex-transitive examples) to all basic regular locally \((G,2)\)-arc-transitive graphs of prime power order. We note that there are some examples of vertex-intransitive, basic locally \((G,2)\)-arc-transitive graphs of prime power order that are not regular graphs, for example the stars \(K_{1,p^{a}-1}\) with \(G=S_{p^{a}-1}\), but any regular graphs with these properties will have order a 2-power.
**Problem 1**: _Classify the basic, regular, locally \((G,2)\)-arc-transitive graphs of prime power order. In particular, are there any additional bipartite examples which are not already in Li's classification_[14, Theorem 1.1]_?_
Li [14, pp.130-131] was "inclined to think that non-basic 2-arc-transitive graphs of prime power order would be rare and hard to construct", and posed the problem [14, Problem] of constructing and characterising the normal covers of prime power order of the basic graphs in his classification. We would like to expand this problem to include covers of all basic graphs arising from Problem 1. Here, as in [10], we focus on covers of the graphs \({\bf K}_{2^{n},2^{n}}\).
### The main result
The general family of groups and graph constructions we will study are specified in Definition 1.1. One of the graphs is a _Cayley graph_\({\rm Cay}(G,S)\) for a group \(G\) with respect to an inverse-closed subset \(S\subseteq G\setminus\{1\}\) (that is, \(s^{-1}\in S\) for all \(s\in S\)): it is the graph with vertex set \(G\) and edge set \(\{\{g,sg\}\ :\ g\in G,s\in S\}\).
**Definition 1.1**: Let \(n\) be an integer, \(n\geq 2\).
(a) If \(H\) is a finite group with subgroups \(X,Y\) such that \(X\cong Y\cong C_{2}^{n}\), \(H=\langle X,Y\rangle\) and \(H/H^{\prime}\cong C_{2}^{2n}\), where \(H^{\prime}\) is the derived subgroup of \(H\), then we say that \(H\) is an \(n\)_-dimensional mixed dihedral group relative to \(X\) and \(Y\)_.
(b) For \(H,X,Y\) as in part (a), the graphs \(C(H,X,Y)\) and \(\Sigma(H,X,Y)\) are defined as follows:
\[C(H,X,Y)={\rm Cay}(H,S(X,Y)),\ {\rm with}\ S(X,Y)=(X\cup Y)\setminus\{1\}; \tag{1}\]
and \(\Sigma=\Sigma(H,X,Y)\) is the graph with vertex-set and edge-set given by:
\[\begin{array}{l}V(\Sigma)=\{Xh,Yh:h\in H\},\\ E(\Sigma)=\{\{Xh,Yg\}:h,g\in H\mbox{ and }Xh\cap Yg\neq\emptyset\}.\end{array} \tag{2}\]
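For experimenting with small instances, the adjacency structure of the Cayley graph in (1) can be built directly from an element list, a multiplication map and a connection set; the sketch below is generic and not specific to mixed dihedral groups.

```python
def cayley_graph(elements, mul, S):
    """Adjacency of Cay(H, S): vertices are the elements of H, and
    {g, s*g} is an edge for every g in H and s in S (S inverse-closed,
    not containing the identity)."""
    adj = {g: set() for g in elements}
    for g in elements:
        for s in S:
            h = mul(s, g)
            adj[g].add(h)
            adj[h].add(g)
    return adj

# Toy example: H = C_2 x C_2 written additively, with S the three non-identity
# elements, which gives the complete graph K_4.
H = [(0, 0), (0, 1), (1, 0), (1, 1)]
add = lambda a, b: ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)
K4 = cayley_graph(H, add, [(0, 1), (1, 0), (1, 1)])
```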
While this construction was used in [10, Theorem 1.8] to obtain an infinite family of (vertex-transitive) \(2\)-arc-transitive normal covers of \({\bf K}_{2^{n},2^{n}}\) of order a \(2\)-power, our interest here is in semisymmetric examples. This is a much more delicate problem, and requires us to analyse a new infinite family of mixed-dihedral \(2\)-groups \({\cal H}(n)\), which we now define.
**Definition 1.2**: Let \(n\geq 2\) be an integer and let \(X_{0}=\{x_{1},\ldots,x_{n}\}\), \(Y_{0}=\{y_{1},\ldots,y_{n}\}\), and consider the group
\[{\cal H}(n)=\langle X_{0}\cup Y_{0}\mid{\cal R}\rangle\]
where \({\cal R}\) is the following set of relations: for \(x,x^{\prime}\in X_{0}\), \(y,y^{\prime}\in Y_{0}\), and \(z,z^{\prime},z^{\prime\prime},z^{\prime\prime\prime}\in X_{0}\cup Y_{0}\),
\(z^{2}=1,[x,x^{\prime}]=[y,y^{\prime}]=1,[x,y]^{2}=1,[[y,x],y^{\prime}]=1,[[x, y],z]^{2}=1,[[[z,z^{\prime}],z^{\prime\prime}],z^{\prime\prime\prime}]=1\).
It turns out that, for these groups, the graph \(\Sigma({\cal H}(n),X,Y)\) is semisymmetric and is a locally \(2\)-arc-transitive normal cover of \({\bf K}_{2^{n},2^{n}}\).
**Theorem 1.3**: _For \(n\geq 2\), the group \({\cal H}(n)\) in Definition 1.2 is an \(n\)-dimensional mixed dihedral group of order \(2^{n(n^{2}+n+4)/2}\) relative to \(X=\langle X_{0}\rangle\) and \(Y=\langle Y_{0}\rangle\), and the graph \(\Sigma({\cal H}(n),X,Y)\) as in (2) is semisymmetric and locally \(2\)-arc-transitive, of valency \(2^{n}\) and order \(2^{n^{2}(n+1)/2+n+1}\)._
**Remark 1.4**: The smallest case in Theorem 1.3 is that of \(n=2\). Here \(|{\cal H}(2)|=2^{10}\), and \(\Sigma=\Sigma({\cal H}(2),X,Y)\) has order \(|V(\Sigma)|=2^{9}\) and valency \(4\). A computation in GAP [6] shows that \(|\operatorname{Aut}(\Sigma)|=2^{15}\cdot 3^{5}\), considerably larger than the subgroup \(A:={\cal H}(2)\cdot A({\cal H}(2),X,Y)={\cal H}(2)\cdot(\operatorname{GL}_{2} (2)\times\operatorname{GL}_{2}(2))\) of order \(2^{12}\cdot 3^{2}\) used in the proof of Theorem 1.3 (see also Lemma 2.3 and Theorem 4.5 (5)). To give some insight into the structure of \(\Sigma\) we computed, again using GAP, the distance diagrams for \(\Sigma\) from the vertices \(X\) and \(Y\). These are shown in Figure 1, and demonstrate that \(\Sigma\) is locally \(3\)-distance-transitive. The two diagrams exhibit strikingly different structure from distance \(4\) onwards. In fact, the stabilisers \(\operatorname{Aut}(\Sigma)_{X}\) and \(\operatorname{Aut}(\Sigma)_{Y}\) of the vertices \(X\) and \(Y\) are non-isomorphic subgroups of order \(2^{7}\cdot 3^{5}\). Further computations, performed in GAP and Magma [1], show that
1. \(O_{2}(\operatorname{Aut}(\Sigma))={\cal H}(2)^{\prime}.Y\cong C_{2}^{8}\), lies in \({\cal H}(2)\), is faithful and regular on the \(\operatorname{Aut}(\Sigma)\)-orbit containing the vertex \(X\), and has four orbits of length \(64\) on the \(\operatorname{Aut}(\Sigma)\)-orbit containing \(Y\). The normal quotient of \(\Sigma\) modulo \(O_{2}(\operatorname{Aut}(\Sigma))\) is the 'star' \({\bf K}_{1,4}\).
2. Also \(A\) is self-normalising in \(\operatorname{Aut}(\Sigma)\), and there are exactly \(896\) subgroups of \(\operatorname{Aut}(\Sigma)\) of order \(256\) that act semiregularly on \(V(\Sigma)\) with orbits the two parts of the bipartition. Hence, at least in the case \(n=2\), the graph \(\Sigma\) is a bi-Cayley graph of some \(2\)-group.
This paper is organised as follows. In Section 2, we outline the notation used in the paper and give several preliminary results, including Lemma 2.3 which summarises several properties of the graphs \(C(H,X,Y)\) and \(\Sigma(H,X,Y)\) which we will need, and which were proved in [10]. In Section 3, we investigate the structure of the group \({\cal H}(n)\), and in Section 4 we prove Theorem 1.3.
## 2 Notation and preliminary results for graphs
All graphs we consider are finite, connected, simple and undirected. Let \(\Gamma\) be a graph. Denote by \(V(\Gamma)\), \(E(\Gamma)\) and \(\operatorname{Aut}(\Gamma)\) the vertex set, edge set, and full automorphism group of \(\Gamma\), respectively. For \(v\in V(\Gamma)\), let \(\Gamma(v)\) denote the set of vertices adjacent to \(v\). A graph \(\Gamma\) is said to be _regular_ if there exists an integer \(k\) such that \(|\Gamma(v)|=k\) for all vertices \(v\in V(\Gamma)\). A graph \(\Gamma\) is bipartite if \(E(\Gamma)\neq\emptyset\) and \(V(\Gamma)\) is of the form \(\Delta\cup\Delta^{\prime}\) such that each edge consists of one vertex from \(\Delta\) and one vertex from \(\Delta^{\prime}\). If \(\Gamma\) is connected then this vertex partition is uniquely determined and the two parts \(\Delta,\Delta^{\prime}\) are often called the _biparts_ of \(\Gamma\).
For a graph \(\Gamma\), let \(G\leq\operatorname{Aut}(\Gamma)\). For \(v\in V(\Gamma)\), let \(G_{v}=\{g\in G\ :\ v^{g}=v\}\), the stabiliser of \(v\) in \(G\). We say that \(\Gamma\) is \(G\)_-vertex-transitive_ or \(G\)_-edge-transitive_ if \(G\) is transitive on \(V(\Gamma)\) or \(E(\Gamma)\), respectively, and that \(\Gamma\) is \(G\)_-semisymmetric_ if \(\Gamma\) is regular and \(G\)-edge-transitive but not \(G\)-vertex-transitive. When \(G=\operatorname{Aut}(\Gamma)\), a \(G\)-vertex-transitive, \(G\)-edge-transitive or \(G\)-semisymmetric graph \(\Gamma\) is simply called _vertex-transitive_, _edge-transitive_ or _semisymmetric_, respectively. The \(2\)-arcs in a graph \(\Gamma\) are vertex-triples \((u,v,w)\) such that \(\{u,v\},\{v,w\}\in E(\Gamma)\) and \(u\neq w\). A graph \(\Gamma\) is said to be _locally \((G,2)\)-arc-transitive_ if \(G\leq\operatorname{Aut}(\Gamma)\) and, for each \(u\in V(\Gamma)\), \(G_{u}\) is transitive on the \(2\)-arcs \((u,v,w)\) starting at \(u\), or equivalently, see [7, Lemma 3.2], \(G_{u}\) is \(2\)-transitive on the set \(\Gamma(u)\). Similarly, when \(G=\operatorname{Aut}(\Gamma)\), a locally \((G,2)\)-arc-transitive graph \(\Gamma\) is simply called _locally \(2\)-arc-transitive_. There is a considerable body of literature on locally \(2\)-arc-transitive graphs, see for example [4, 7, 15, 17, 20].
### Normal quotients and normal covers of graphs
The normal quotient method for investigating vertex- or edge-transitive graphs proceeds as follows. Assume that \(G\leq\operatorname{Aut}(\Gamma)\) is such that \(\Gamma\) is \(G\)-vertex-transitive or \(G\)-edge-transitive. Let \(N\) be a normal subgroup of \(G\) such that \(N\) is intransitive on \(V(\Gamma)\). The _\(N\)-normal quotient graph_ of \(\Gamma\) is defined as the graph \(\Gamma_{N}\) with vertices the \(N\)-orbits in \(V(\Gamma)\) and with
Figure 1: Distance diagrams at the vertices \(X\) (upper) and \(Y\) (lower) for the smallest graph \(\Sigma=\Sigma(\mathcal{H}(2),X,Y)\) in Theorem 1.3. It has \(512\) vertices and valency \(4\). Here each node represents an orbit of the stabiliser of the relevant vertex (\(X\) or \(Y\)) in the full automorphism group of \(\Sigma\). Computations performed in GAP [6].
two distinct \(N\)-orbits adjacent if there exists an edge in \(\Gamma\) consisting of one vertex from each of these orbits. If \(\Gamma\) is regular, and if \(\Gamma_{N}\) and \(\Gamma\) have the same valency, then we say that \(\Gamma\) is an \(N\)_-normal cover_ of \(\Gamma_{N}\).
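The quotient construction itself is purely combinatorial and can be sketched in a few lines: given the edges of \(\Gamma\) and a map sending each vertex to (a label for) its \(N\)-orbit, the edges of \(\Gamma_{N}\) are the pairs of distinct orbits joined by at least one edge. The function below illustrates only the definition and checks none of the group-theoretic hypotheses.

```python
def normal_quotient(edges, orbit_of):
    """Edge set of the quotient graph: `edges` is an iterable of vertex pairs
    of Gamma, and `orbit_of` maps a vertex to a hashable label of its N-orbit."""
    q_edges = set()
    for u, v in edges:
        a, b = orbit_of(u), orbit_of(v)
        if a != b:
            q_edges.add(frozenset((a, b)))
    return q_edges
```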
If \(\Gamma\) is connected and is a regular, locally \((G,2)\)-arc-transitive graph, and if \(N\) is intransitive on each \(G\)-vertex-orbit, then (by [20, Theorem 4.1] and [7, Lemma 5.1]) also \(\Gamma_{N}\) is a connected, regular locally \((G/N,2)\)-arc-transitive graph, \(\Gamma\) is an \(N\)-normal cover of \(\Gamma_{N}\), and \(N\) is semiregular on \(V(\Gamma)\), that is, each \(N\)-orbit has size \(|N|\). Such a graph \(\Gamma\) is said to be _basic_ (or sometimes \(G\)-basic, to emphasise dependence on \(G\)) if there is no suitable normal subgroup \(N\) to make such a reduction; that is, if each nontrivial normal subgroup \(N\) of \(G\) is transitive on at least one \(G\)-orbit, forcing the quotient \(\Gamma_{N}\) to be degenerate, namely either \({\bf K}_{1}\) or a star \({\bf K}_{1,k}\) for some \(k\geq 1\). In some cases the \(G\)-basic graphs can be determined: Li's classification in [14] shows that the \(G\)-basic graphs of prime power order, in the case where \(G\) is vertex-transitive are one of: \({\bf K}_{2^{n},2^{n}}\) (the complete bipartite graph), \({\bf K}_{p^{m}}\) (the complete graph), \({\bf K}_{2^{n},2^{n}}-2^{n}{\bf K}_{2}\) (the graph obtained by deleting a 1-factor from \({\bf K}_{2^{n},2^{n}}\)) or a primitive or biprimitive 'affine graph' (by which, see [14, p.130], Li meant the graphs in the classification by Ivanov and the second author in [12, Table 1]).
### Cliques, clique graphs and line graphs
A _clique_ of a graph \(\Gamma\) is a subset \(U\subseteq V(\Gamma)\) such that every pair of vertices in \(U\) forms an edge of \(\Gamma\). A clique \(U\) is _maximal_ if no subset of \(V(\Gamma)\) properly containing \(U\) is a clique. The _clique graph_ of \(\Gamma\) is defined as the graph \(\Sigma(\Gamma)\) with vertices the maximal cliques of \(\Gamma\) such that two distinct maximal cliques are adjacent in \(\Sigma(\Gamma)\) if and only if their intersection is non-empty. Similarly the _line graph_ of \(\Gamma\) is defined as the graph \({\cal L}(\Gamma)\) with vertex set \(E(\Gamma)\) such that two distinct edges \(e,e^{\prime}\in E(\Gamma)\) are adjacent in \({\cal L}(\Gamma)\) if and only if \(e\cap e^{\prime}\neq\emptyset\).
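As a small computational aid, the clique graph of this definition can be assembled from any enumeration of maximal cliques; the sketch below uses networkx's Bron-Kerbosch-based `find_cliques` and is intended for small examples only, not for the large Cayley graphs studied here.

```python
import itertools
import networkx as nx

def clique_graph(G):
    """Clique graph of a networkx graph G: vertices are the maximal cliques
    of G, with two cliques adjacent exactly when they intersect."""
    cliques = [frozenset(c) for c in nx.find_cliques(G)]
    S = nx.Graph()
    S.add_nodes_from(cliques)
    for c1, c2 in itertools.combinations(cliques, 2):
        if c1 & c2:
            S.add_edge(c1, c2)
    return S
```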
### Cayley graphs and bi-Cayley graphs
A group \(G\) of permutations of a set \(V(\Gamma)\) is called _regular_ if it is transitive, and some (and hence all) stabilisers \(G_{v}\) are trivial. (It is unfortunate that this conflicts with the usage of 'regular' as defined above for graphs.) More generally \(G\) is called _semiregular_ if the stabiliser \(G_{v}=1\) for all \(v\in V(\Gamma)\). So \(G\) is regular if and only if it is semiregular and transitive.
Let \(\Gamma=\mbox{Cay}(G,S)\) be a Cayley graph on \(G\) with respect to \(S\). For any \(g\in G\) define
\[R(g):x\mapsto xg\mbox{ for }x\in G\mbox{ and set }R(G)=\{R(g)\ :\ g\in G\}.\]
Then \(R(G)\) is a regular permutation group on \(V(\Gamma)\) (see, for example [22, Lemma 3.7]) and is a subgroup of \(\mbox{Aut}(\mbox{Cay}(G,S))\) (as \(R(g)\) maps each edge \(\{x,sx\}\) to an edge \(\{xg,sxg\}\)). For briefness, we shall identify \(R(G)\) with \(G\) in the following. Let
\[\mbox{Aut}(G,S)=\{\alpha\in\mbox{Aut}(G):S^{\alpha}=S\}.\]
It was proved by Godsil [8] that the normaliser of \(G\) in \(\mbox{Aut}(\mbox{Cay}(G,S))\) is \(G:\mbox{Aut}(G,S)\).
Cayley graphs are precisely those graphs \(\Gamma\) for which \(\mbox{Aut}(\Gamma)\) has a subgroup that is regular on \(V(\Gamma)\). For this reason we say that a graph \(\Gamma\) is a _bi-Cayley graph_ if \(\mbox{Aut}(\Gamma)\) has
a subgroup \(H\) which is semiregular on \(V(\Gamma)\) with two orbits. The following result from [10] will be useful.
**Lemma 2.1**: [10, Lemma 2.6] _Let \(\Gamma\) be a connected \((G,2)\)-arc-transitive graph, and let \(u\in V(\Gamma)\). Suppose that \(\Gamma\) is an \(N\)-normal cover of \({\bf K}_{2^{n},2^{n}}\), for some normal \(2\)-subgroup \(N\) of \(G\). Then \(\Gamma\) is bipartite, and one of the following holds:_
1. \(\Gamma\) _is a normal Cayley graph of a_ \(2\)_-group;_
2. \(\Gamma\) _is a bi-Cayley graph of a_ \(2\)_-group_ \(H\) _such that_ \(G\leq N_{\rm Aut(\Gamma)}(H)\)_;_
3. \(N\unlhd{\rm Aut}(\Gamma)\)_._
_Moreover if the stabiliser \(G_{u}\) acts unfaithfully on \(\Gamma(u)\), then part_ (3) _holds._
The next result is developed from [16, Theorem 1.1] for the case of locally primitive bipartite graphs \(\Gamma\). For \(G\leq{\rm Aut}(\Gamma)\), we denote by \(G^{+}\), the subgroup of \(G\) (of index at most \(2\)) that fixes both biparts of \(V(\Gamma)\) setwise. The proof uses the following concept: a permutation group \(G\leq{\rm Sym}(V)\) is _biquasiprimitive_ if each nontrivial normal subgroup has at most two orbits in \(V\), and there exists such a subgroup having two orbits.
**Lemma 2.2**: _Let \(\Gamma\) be a connected bipartite graph of order \(2^{m}\) and valency \(2^{n}\) with \(m>n\). Assume that \(\Gamma\) is vertex-transitive and locally primitive, and that \({\rm Aut}(\Gamma)_{u}\) is not faithful on \(\Gamma(u)\), for some \(u\in V(\Gamma)\). Then \(n\geq 2\), and there exists \(N\unlhd{\rm Aut}(\Gamma)\) such that \(N\) is a \(2\)-group, \(N\leq{\rm Aut}(\Gamma)^{+}\) and is semiregular on \(V(\Gamma)\), \(\Gamma\) is an \(N\)-normal cover of \(\Gamma_{N}\), and \(\Gamma_{N}\cong{\bf K}_{2^{n},2^{n}}\)._
**Proof** Let \(X={\rm Aut}(\Gamma)\), and note that \(n\geq 2\), since if \(n=1\) then \(\Gamma\) is a cycle and \(X_{u}\cong C_{2}\) is faithful on \(\Gamma(u)\). Let \(N\unlhd X\) be maximal subject to the condition that \(N\) has at least three orbits on \(V(\Gamma)\) (possibly \(N=1\)), let \(\overline{X}=X/N\), and let \(X^{+}\) be the normal subgroup of index \(2\) in \(X\) which fixes both parts of the bipartition of \(\Gamma\). By [18, Lemma 1.6], or see [19, Theorem 1.3], \(N\) is semiregular on \(V(\Gamma)\), and hence \(N\) is a \(2\)-group since \(|V(\Gamma)|=2^{m}\). Also, by [19, Lemma 3.1], the quotient \(\Gamma_{N}\) is bipartite and \(N\leq X^{+}\), so \(X^{+}/N\) is an index \(2\) normal subgroup of \(\overline{X}\) with orbits in \(V(G_{N})\) the two biparts of \(\Gamma_{N}\). Moreover, by the maximality of \(N\), each nontrivial normal subgroup of \(\overline{X}\) has at most two orbits in \(V(\Gamma_{N})\), and hence \(\overline{X}\) is bi-quasiprimitive on \(V(\Gamma_{N})\). By [19, Theorem 1.3], \(\Gamma_{N}\) is \(\overline{X}\)-locally primitive and \(\Gamma\) is an \(N\)-normal cover of \(\Gamma_{N}\). Thus \(\Gamma_{N}\) also has valency \(2^{n}\), and \(\Gamma_{N}\) has order \(2^{k}=|V(\Gamma)|/|N|\) for some \(k>n\geq 2\). If \(\Gamma_{N}\cong{\bf K}_{2^{n},2^{n}}\) then the result holds, so we may assume that \(\Gamma_{N}\ncong{\bf K}_{2^{n},2^{n}}\).
Then by the third paragraph in the proof of [16, Theorem 1.1] (and using [16, Lemmas 3.3 and 4.2]), \(X\) has a subgroup \(G\) satisfying \(N<G\leq X^{+}\) such that \(G\) is faithful and regular on each of the biparts of \(V(\Gamma)\), and moreover \(\Gamma\) is a bi-Cayley graph of \(G\) of the form \(\Gamma=\Upsilon\times{\bf K}_{2}\) (the direct product of \(\Upsilon\) and \({\bf K}_{2}\)), where \(\Upsilon={\rm Cay}(G,S)\) is a Cayley graph of \(G\) with respect to some subset \(S\) of \(G\). Further, \(\Gamma_{N}\) and \(\overline{X}\) satisfy [16, Lemma 3.3(ii) or (iii)]. If [16, Lemma 3.3(ii)] holds then by the second last paragraph in the proof of [16, Theorem 1.1], \(\Gamma\) is a normal Cayley graph of \(G\times C_{2}\). However in this case \(X_{u}\) is faithful on \(\Gamma(u)\), which is a contradiction. Thus [16, Lemma 3.3(iii)] holds, and then the last paragraph in the proof of [16, Theorem 1.1] shows (since \(|V(\Gamma)|=2^{m}\)) that the quotient \(\Upsilon_{N}={\bf K}_{2^{r}}^{\times\ell}\) (a direct product of \(\ell\) copies of \({\bf K}_{2^{r}}\)) of valency \((2^{r}-1)^{\ell}\). However in this case, the three graphs \(\Gamma\), \(\Upsilon\) and \(\Upsilon_{N}\) have the same valency \(2^{n}\geq 4\), which is a contradiction. \(\square\)
### Mixed dihedrants and their clique graphs
We record the properties we will need of the graphs from Definition 1.1.
**Lemma 2.3**: [10, Lemmas 4.1-4.2] _Let \(H\) be an \(n\)-dimensional mixed dihedral group relative to \(X\) and \(Y\) with \(|X|=|Y|=2^{n}\geq 4\), and let \(C(H,X,Y)\) and \(\Sigma(H,X,Y)\) be the graphs defined in Definition 1.1. Let \(\Sigma=\Sigma(H,X,Y)\), and \(G=H:A(H,X,Y)\), where \(A(H,X,Y)\) is the setwise stabiliser in \(\operatorname{Aut}(H)\) of \(X\cup Y\). Then the following hold._
1. \(\Sigma(H,X,Y)\) _is the clique graph of_ \(C(H,X,Y)\)_._
2. _The map_ \(\varphi:z\to\{Xz,Yz\}\)_, for_ \(z\in H\)_, is a bijection_ \(\varphi:H\to E(\Sigma(H,X,Y))\)_, and induces a graph isomorphism from_ \(C(H,X,Y)\) _to the line graph_ \(\mathcal{L}(\Sigma(H,X,Y))\) _of_ \(\Sigma(H,X,Y)\)_._
3. \(\operatorname{Aut}(C(H,X,Y))=\operatorname{Aut}(\Sigma(H,X,Y))=\operatorname{ Aut}(\mathcal{L}(\Sigma(H,X,Y)))\)_._
4. _The group_ \(G\) _acts as a subgroup of automorphisms on_ \(\Sigma\) _as follows, for_ \(h,z\in H,\sigma\in A(H,X,Y)\)_, and_ \(\varphi:H\to E(\Sigma)\) _as in part_ \((2)\)_:_
_The subgroup_ \(H\) _acts regularly on_ \(E(\Sigma)\) _and has two orbits on_ \(V(\Sigma)\)_. In particular, this_ \(G\)_-action is edge-transitive._
5. _The_ \(H^{\prime}\)_-normal quotient graph_ \(\Sigma_{H^{\prime}}\) _of_ \(\Sigma\) _is isomorphic to_ \(\mathbf{K}_{2^{n},2^{n}}\) _and admits_ \(G/H^{\prime}\) _as an edge-transitive group of automorphisms. Moreover,_ \(\Sigma\) _is an_ \(H^{\prime}\)_-normal cover of_ \(\mathbf{K}_{2^{n},2^{n}}\)_._
6. \(A(H,X,Y)\cong A(H,X,Y)^{X\cup Y}\leq(\operatorname{Aut}(X)\times\operatorname{Aut}(Y)):C_{2}\cong(\operatorname{GL}(n,2)\times\operatorname{GL}(n,2)):C_{2}\) _where the_ \(C_{2}\) _interchanges_ \(X\) _and_ \(Y\)_.
## 3 Notation and preliminary results for groups
For a positive integer \(n\), \(C_{n}\) denotes a cyclic group of order \(n\), and \(D_{2n}\) denotes a dihedral group of order \(2n\). For a group \(G\), we denote by \(1\), \(Z(G)\), \(\Phi(G)\), \(G^{\prime}\), \(\operatorname{soc}(G)\) and \(\operatorname{Aut}(G)\), the identity element, the centre, the Frattini subgroup, the derived subgroup, the socle and the automorphism group of \(G\), respectively. For a subgroup \(H\) of a group \(G\), denote by \(C_{G}(H)\) the centraliser of \(H\) in \(G\) and by \(N_{G}(H)\) the normaliser of \(H\) in \(G\). For elements \(a,b\) of \(G\), the _commutator_ of \(a,b\) is defined as \([a,b]=a^{-1}b^{-1}ab\). If \(X,Y\subseteq G\), then \([X,Y]\) denotes the subgroup generated by all the commutators \([x,y]\) with \(x\in X\) and \(y\in Y\). We will need the following result concerning \(p\)-groups.
**Lemma 3.1**: [23, 5.3.2] _Let \(G\) be a finite \(p\)-group, for some prime \(p\), and let \(p^{r}=|G:\Phi(G)|\). Then, \(\Phi(G)=G^{\prime}G^{p}\), where \(G^{p}=\langle g^{p}\mid g\in G\rangle\). Moreover, every generating set for \(G\) has an \(r\)-element subset which also generates \(G\), and in particular, \(G/\Phi(G)\cong C_{p}^{r}\)._
### Some results on commutators in arbitrary groups
We first cite the so-called Witt-Hall formula.
**Lemma 3.2**: [9, 10.2.1.4] or [23, 5.1.5(iv)] _Let \(x,y,z\) be elements of a group \(G\). Then_
\[[[x,y^{-1}],z]^{y}\cdot[[y,z^{-1}],x]^{z}\cdot[[z,x^{-1}],y]^{x}=1.\]
Using the Witt-Hall formula above, we obtain the following lemma.
**Lemma 3.3**: _Let \(a,b,c\) be elements of a group \(G\) such that \(G^{\prime}\) is abelian. Then_
\[[[a,b],c]\cdot[[b,c],a]\cdot[[c,a],b]=1.\]
**Proof** Let \(a,b,c\in G\). By a direct computation or [23, 5.1.5(iii)], we have \([b,a^{-1}]^{a}=[b,a]^{-1}=[a,b]\). Hence \([[b,a^{-1}],c]^{a}=[[b,a^{-1}]^{a},c^{a}]=[[a,b],c[c,a]]\).
Again (by direct computation or [23, 5.1.5(ii)]), \([[a,b],c[c,a]]=[[a,b],[c,a]]\cdot[[a,b],c]^{[c,a]}\). Since \(G^{\prime}\) is abelian, this becomes \([[a,b],c[c,a]]=[[a,b],c]\). Consequently, \([[b,a^{-1}],c]^{a}=[[a,b],c].\) Similarly, \([[a,c^{-1}],b]^{c}=[[c,a],b]\) and \([[c,b^{-1}],a]^{b}=[[b,c],a]\).
Now applying the Witt-Hall formula from Lemma 3.2 with \(y=a,x=b,z=c,\) yields the asserted formula: \([[a,b],c]\cdot[[b,c],a]\cdot[[c,a],b]=1\). \(\Box\)
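Lemma 3.3 can also be checked exhaustively in a small metabelian group. The script below verifies the identity for every triple in \(S_{3}\), whose derived subgroup \(A_{3}\) is abelian but not central (so the individual commutators \([[a,b],c]\) are not all trivial); the permutation encoding is an illustrative choice.

```python
from itertools import product

def compose(p, q):                       # (p*q)(i) = p(q(i)); tuples give images
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def comm(a, b):                          # [a, b] = a^{-1} b^{-1} a b
    return compose(compose(inverse(a), inverse(b)), compose(a, b))

identity = (0, 1, 2)
gens = [(1, 2, 0), (1, 0, 2)]            # a 3-cycle and a transposition generate S_3
elems = {identity}
while True:                              # close the generators under products
    new = {compose(g, h) for g in elems for h in gens} - elems
    if not new:
        break
    elems |= new

assert len(elems) == 6
for a, b, c in product(elems, repeat=3):
    prod = compose(compose(comm(comm(a, b), c), comm(comm(b, c), a)),
                   comm(comm(c, a), b))
    assert prod == identity              # [[a,b],c].[[b,c],a].[[c,a],b] = 1
print("Lemma 3.3 verified on all", len(elems) ** 3, "triples of S_3")
```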
### Basic commutators in free groups
In this section, we recall some theory for commutators of a free group, following Marshall Hall [9, Chapter 11].
#### 3.2.1 Formal commutators in free groups
Let \(F\) be the free group on the ordered alphabet \(A=\{a_{1},a_{2},\ldots,a_{r}\}\), where \(r\geq 1\). For \(j\geq 1\), the _formal commutator_\(c_{j}\) of \(F\), and its _weight_\(w(c_{j})\), are defined by the rules:
* For \(j=1,2,\ldots,r\), \(c_{j}=a_{j}\), and these are the commutators of weight \(1\); _i.e._, \(w(a_{j})=1\).
* If \(c_{i}\) and \(c_{j}\) are (formal) commutators, then \([c_{i},c_{j}]\) is a (formal) commutator, say \(c_{k}\), and \(w(c_{k})=w(c_{i})+w(c_{j})\).
#### 3.2.2 Basic commutators of weight \(\ell\) in free groups
Let \(F\) be the free group on the ordered alphabet \(A=\{a_{1},a_{2},\ldots,a_{r}\}\), where \(r\geq 1\). For each positive integer \(\ell\), we define as follows the set \(\mathbf{BC}_{\ell}\) of _basic commutators_ of \(F\) of weight \(\ell\), together with a total ordering on \(\cup_{u\geq 1}\mathbf{BC}_{u}\):
1. \(\mathbf{BC}_{1}=\{a_{1},\ldots,a_{r}\}\), and we choose the ordering \(a_{1}<a_{2}<\cdots<a_{r}\). Let \(\ell>1\), and assume inductively that \(\cup_{1\leq u<\ell}\mathbf{BC}_{u}\) has been defined and ordered.
2. Then the set \(\mathbf{BC}_{\ell}\) consists of all the commutators \([c_{i},c_{j}]\) that satisfy the following three conditions: 1. \(c_{i},c_{j}\in\cup_{1\leq u<\ell}\mathbf{BC}_{u}\) with \(\ell=w(c_{i})+w(c_{j})\); 2. \(c_{j}<c_{i}\); 3. If \(c_{i}=[c_{s},c_{t}]\), where \(c_{s},c_{t}\in\cup_{1\leq u<\ell}\mathbf{BC}_{u}\), then \(c_{t}\leq c_{j}\).
3. The ordering on \(\cup_{1\leq u<\ell}\mathbf{BC}_{u}\) is extended to \(\cup_{1\leq u\leq\ell}\mathbf{BC}_{u}\) as follows: we choose an arbitrary order on the set \(\mathbf{BC}_{\ell}\), and if \(c_{i}\in\mathbf{BC}_{\ell}\) and \(c_{j}\in\cup_{1\leq u<\ell}\mathbf{BC}_{u}\), we define \(c_{j}<c_{i}\).
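As a small illustration of this recursive definition, take \(r=2\). Then \(\mathbf{BC}_{1}=\{a_{1},a_{2}\}\) with \(a_{1}<a_{2}\); the only weight-\(2\) commutator \([c_{i},c_{j}]\) with \(c_{j}<c_{i}\) is \([a_{2},a_{1}]\), so \(\mathbf{BC}_{2}=\{[a_{2},a_{1}]\}\); and for weight \(3\), condition (2)(c) with \(c_{i}=[a_{2},a_{1}]\) (so that \(c_{t}=a_{1}\)) allows \(c_{j}\in\{a_{1},a_{2}\}\), giving \(\mathbf{BC}_{3}=\{[[a_{2},a_{1}],a_{1}],[[a_{2},a_{1}],a_{2}]\}\). This agrees with the counts \(\frac{r(r-1)}{2}=1\) and \(\frac{r^{3}-r}{3}=2\) recorded in Lemma 3.5 below.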
We next record the nature and sizes of the two sets \(\mathbf{BC}_{2}\) and \(\mathbf{BC}_{3}\), using the following arithmetic facts.
**Lemma 3.4**: _Let \(n\in\mathbb{N}\) with \(n\geq 2\). Then_
1. \(|\{(i,j):1\leq j<i\leq n\}|=\frac{n(n-1)}{2}\)_;_
2. \(|\{(i,j,k):1\leq j<i\leq n,\text{ and }j<k\leq n\}|=\frac{n(n-1)(2n-1)}{6}= \frac{n^{3}}{3}-\frac{n^{2}}{2}+\frac{n}{6}\)_;_
3. \(|\{(i,j,k):1\leq j<i\leq n,\text{ and }j<k\leq n,k\neq i\}|=\frac{n(n-1)(n-2)}{3}= \frac{n^{3}}{3}-n^{2}+\frac{2n}{3}\)_;_
4. \(|\{(i,j,k):1\leq j<i\leq n,\text{ and }j\leq k\leq n\}|=\frac{n^{3}-n}{3}\)_._
**Proof** (a) For each \(j\) such that \(1\leq j<n\), there are precisely \(n-j\) choices for \(i\), and hence the number of these pairs \((i,j)\) is \(\sum_{j=1}^{n-1}(n-j)=\sum_{\ell=1}^{n-1}\ell=\frac{n(n-1)}{2}\).
(b) For a fixed \(j\) such that \(1\leq j<n\), the number of choices of \(i,k\) such that \(j<i\leq n\) and \(j<k\leq n\) is \((n-j)^{2}\), yielding a total of \((n-1)^{2}+(n-2)^{2}+\cdots+1=\frac{(n-1)n(2n-1)}{6}=\frac{n^{3}}{3}-\frac{n^{2} }{2}+\frac{n}{6}\) triples \((i,j,k)\) with the required constraints.
(c) The number of these triples is equal to the number of triples in part (b) minus the number of triples \((i,j,i)\) with \(1\leq j<i\leq n\), which is \(\frac{n(n-1)}{2}\) by part (a).
(d) The number of these triples is equal to the number of triples in part (b) plus the number of triples \((i,j,j)\) with \(1\leq j<i\leq n\), which is \(\frac{n(n-1)}{2}\) by part (a). \(\Box\)
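These identities are used repeatedly below to count basic commutators. The following brute-force enumeration (an illustrative check only, not part of the argument) confirms the four counts against the closed formulas for small values of \(n\):

```python
# Illustrative brute-force check of the counts in Lemma 3.4 (not part of the argument).
def counts(n):
    rng = range(1, n + 1)
    a = sum(1 for i in rng for j in rng if j < i)
    b = sum(1 for i in rng for j in rng for k in rng if j < i and j < k)
    c = sum(1 for i in rng for j in rng for k in rng if j < i and j < k and k != i)
    d = sum(1 for i in rng for j in rng for k in rng if j < i and j <= k)
    return a, b, c, d

def formulas(n):
    return (n * (n - 1) // 2,
            n * (n - 1) * (2 * n - 1) // 6,
            n * (n - 1) * (n - 2) // 3,
            (n ** 3 - n) // 3)

for n in range(2, 8):
    assert counts(n) == formulas(n)
print("Lemma 3.4 verified for n = 2, ..., 7")
```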
**Lemma 3.5**: _Let \(F\) be the free group on the ordered alphabet \(A=\{a_{1},a_{2},\ldots,a_{r}\}\), where \(r\geq 1\). Then_
\[\mathbf{BC}_{2} =\{[a_{i},a_{j}]:1\leq j<i\leq r\}, \text{of size }|\mathbf{BC}_{2}|=\frac{r(r-1)}{2},\] \[\mathbf{BC}_{3} =\{[[a_{i},a_{j}],a_{k}]:1\leq j<i\leq r,\text{and }j\leq k\leq r\} \text{of size }|\mathbf{BC}_{3}|=\frac{r^{3}-r}{3}.\]
**Proof** The set \({\bf BC}_{2}\) is as claimed (by conditions (2)(a) and (2)(b) in the definition of \({\bf BC}_{\ell}\) with \(\ell=2\)), and \(|{\bf BC}_{2}|=\frac{r(r-1)}{2}\) as it is in bijection with the 2-subsets of \(\{1,\ldots,r\}\). Next, each element of \({\bf BC}_{3}\) has the form \([c_{t},c_{u}]\) with \(w(c_{t})=2,w(c_{u})=1\) (since \(c_{u}<c_{t}\)), and so \(c_{u}=a_{k}\) for some \(k\in[1,r]\) (by condition 1) and \(c_{t}=[a_{i},a_{j}]\) with \(1\leq j<i\leq r\) (as we have just seen) and \(j\leq k\) (by condition 2(c)). Thus \({\bf BC}_{3}\) is as claimed. By Lemma 3.4(d), the number of elements \([[a_{i},a_{j}],a_{k}]\in{\bf BC}_{3}\) with \(1\leq j<i\leq r\) and \(j\leq k\leq r\) is \(\frac{r^{3}-r}{3}=|{\bf BC}_{3}|\). \(\Box\)
The following result is known as the Basis Theorem where, following [9], for each \(k\) we denote the \(k\)-th term of the lower central series of a free group \(F\) by \(F_{k}\). Note in particular that \(F_{2}=F^{\prime}\), the derived subgroup of \(F\).
**Theorem 3.6**: [9, Theorem 11.2.4] _Let \(F\) be the free group on the ordered alphabet \(A=\{a_{1},a_{2},\ldots,a_{r}\}\), where \(r\geq 1\), let \(\ell\geq 1\), and suppose that \(\cup_{1\leq u\leq\ell}{\bf BC}_{u}=\{c_{1},\ldots,c_{t}\}\) with \(c_{1}<c_{2}<\cdots<c_{t}\). Then each \(f\in F\) has a unique representation_
\[f=c_{1}^{s_{1}}c_{2}^{s_{2}}\ldots c_{t}^{s_{t}}\ \mbox{\rm mod}\ F_{\ell+1},\]
_for some integers \(s_{i}\) and, modulo \(F_{\ell+1}\), \({\bf BC}_{\ell}\) forms a basis for the free Abelian group \(F_{\ell}/F_{\ell+1}\)._
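For instance, when \(r=2\) and \(\ell=2\) we have \(c_{1}=a_{1}<c_{2}=a_{2}<c_{3}=[a_{2},a_{1}]\), and the element \(f=a_{2}a_{1}\) has the (in this case exact) representation
\[a_{2}a_{1}=a_{1}a_{2}[a_{2},a_{1}],\]
so that \(s_{1}=s_{2}=s_{3}=1\) in the notation of Theorem 3.6.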
We complete this section with three results about the quotient \(F/F_{4}\).
**Lemma 3.7**: _Let \(F\), the \(F_{\ell}\) and \({\bf BC}_{\ell}\) be as in Theorem 3.6. Then_
* \(F^{\prime}=F_{2}=\langle{\bf BC}_{2},F_{3}\rangle\)_;_
* \(F_{3}=\langle{\bf BC}_{3},F_{4}\rangle\) _and_ \(F_{3}/F_{4}\leq Z(F/F_{4})\)_;_
* \((F/F_{4})^{\prime}=F^{\prime}/F_{4}\) _is abelian._
**Proof** Recall that \(F_{2}\) is the derived subgroup \(F^{\prime}\). By Theorem 3.6, \({\bf BC}_{2}F_{3}=\{cF_{3}\mid c\in{\bf BC}_{2}\}\) is a basis for the free abelian group \(F^{\prime}/F_{3}\), and \({\bf BC}_{3}F_{4}=\{cF_{4}\mid c\in{\bf BC}_{3}\}\) is a basis for the free abelian group \(F_{3}/F_{4}\). Thus \(F_{3}=\langle{\bf BC}_{3},F_{4}\rangle\), \(F^{\prime}=\langle{\bf BC}_{2},F_{3}\rangle=\langle{\bf BC}_{2},{\bf BC}_{3},F _{4}\rangle\) and \(F^{\prime}/F_{4}=\langle{\bf BC}_{2}F_{4},{\bf BC}_{3}F_{4}\rangle\). In particular, part (1) is proved.
Let \(c=[[a_{i},a_{j}],a_{k}]\in{\bf BC}_{3}\). Then, for each \(a\in F\), the commutator \([c,a]\in F_{4}\), and hence \(cF_{4}\in Z(F/F_{4})\). Since \(F_{3}=\langle{\bf BC}_{3},F_{4}\rangle\) this implies that \(F_{3}/F_{4}\leq Z(F/F_{4})\), proving part (2).
Since \(F_{4}<F^{\prime}\), we have \((F/F_{4})^{\prime}=F^{\prime}/F_{4}\). We have shown that \(F^{\prime}/F_{4}=\langle{\bf BC}_{2}F_{4},{\bf BC}_{3}F_{4}\rangle\), and by part (2) each element of \({\bf BC}_{3}F_{4}\) commutes with each element of \({\bf BC}_{2}F_{4}\) and \({\bf BC}_{3}F_{4}\). Thus to prove part (3) it remains to prove that each pair of elements of \({\bf BC}_{2}F_{4}\) commute. So let \([a_{i},a_{j}],c\in{\bf BC}_{2}\). Then
\[([a_{i},a_{j}]F_{4})^{cF_{4}} =[a_{i}^{c},a_{j}^{c}]F_{4}=[c^{-1}a_{i}c,c^{-1}a_{j}c]F_{4}=[c^ {-1}ca_{i}[a_{i},c],c^{-1}ca_{j}[a_{j},c]]F_{4}\] \[=[a_{i}[a_{i},c],a_{j}[a_{j},c]]F_{4}.\]
For all \(k\leq r\), \([c,a_{k}]\in F_{3}\) by the definition of \(F_{3}\), and hence \([a_{k},c]\in F_{3}\). Therefore, by part (2), \([a_{i},c]F_{4}\) and \([a_{j},c]F_{4}\) are contained in \(Z(F/F_{4})\). It follows that \(([a_{i},a_{j}]F_{4})^{cF_{4}}=[a_{i},a_{j}]F_{4}\). Thus \([a_{i},a_{j}]F_{4}\) and \(cF_{4}\) commute, and part (3) is proved. \(\Box\)
**Lemma 3.8**: _Let \(F,A\), and the \(F_{\ell}\) be as in Theorem 3.6. Then, for any \(a_{i},a_{j},a_{k}\in A\),_
\[(1) (a_{i}^{2}F_{4})^{a_{j}F_{4}}=a_{i}^{2}\cdot[[a_{i},a_{j}],a_{i}] \cdot[a_{i},a_{j}]^{2}F_{4}\text{,}\] \[(2) ([a_{i},a_{j}]^{2}F_{4})^{a_{k}F_{4}}=[a_{i},a_{j}]^{2}\cdot[[a_{i},a_{j}],a_{k}]^{2}F_{4}\text{,}\] \[(3) [a_{i},a_{j}]=[a_{j},a_{i}]^{-1}\text{,}\] \[(4) [[a_{j},a_{i}],a_{k}]F_{4}=[[a_{i},a_{j}]^{-1},a_{k}]F_{4}=([[a_{i},a_{j}],a_{k}]F_{4})^{-1}\text{,}\] \[(5) [[a_{i},a_{j}],a_{k}]F_{4}=([[a_{j},a_{k}],a_{i}]F_{4})^{-1}\cdot[ [a_{i},a_{k}],a_{j}]F_{4}\text{.}\]
**Proof** (1), (2) By Lemma 3.7 (2), \([[a_{i},a_{j}],a_{i}]F_{4}\in Z(F/F_{4})\). We use this fact for the last equalities of the following two computations in \(F/F_{4}\), which prove parts (1) and (2).
\[(a_{i}^{2}F_{4})^{a_{j}F_{4}} =a_{j}^{-1}a_{i}^{2}a_{j}F_{4}=a_{j}^{-1}a_{i}a_{j}a_{i}[a_{i},a_{ j}]F_{4}=a_{j}^{-1}a_{j}a_{i}[a_{i},a_{j}]a_{i}[a_{i},a_{j}]F_{4}\] \[=a_{i}^{2}[a_{i},a_{j}][[a_{i},a_{j}],a_{i}][a_{i},a_{j}]F_{4}=a_{ i}^{2}[[a_{i},a_{j}],a_{i}][a_{i},a_{j}]^{2}F_{4}\text{.}\] \[([a_{i},a_{j}]^{2}F_{4})^{a_{k}F_{4}} =a_{k}^{-1}[a_{i},a_{j}]^{2}a_{k}F_{4}=a_{k}^{-1}[a_{i},a_{j}]a_{k }[a_{i},a_{j}][[a_{i},a_{j}],a_{k}]F_{4}\] \[=a_{k}^{-1}a_{k}[a_{i},a_{j}][[a_{i},a_{j}],a_{k}][a_{i},a_{j}][[a _{i},a_{j}],a_{k}]F_{4}=[a_{i},a_{j}]^{2}[[a_{i},a_{j}],a_{k}]^{2}F_{4}\text{.}\]
(3) This follows directly from: \([a_{j},a_{i}]^{-1}=(a_{j}^{-1}a_{i}^{-1}a_{j}a_{i})^{-1}=a_{i}^{-1}a_{j}^{-1}a_{i}a_{j}=[a_{i},a_{j}]\).
(4) By part (3), we have \([[a_{j},a_{i}],a_{k}]=[[a_{i},a_{j}]^{-1},a_{k}]\). Then, by a direct computation or [23, 5.1.5(iii)], we have \([[a_{i},a_{j}]^{-1},a_{k}]=([[a_{i},a_{j}],a_{k}]^{[a_{j},a_{i}]})^{-1}\). Lemma 3.7 (3) then implies that
\[[[a_{i},a_{j}]^{-1},a_{k}]F_{4}=([[a_{i},a_{j}],a_{k}]F_{4})^{-1}\text{,}\]
proving part (4).
(5) By Lemma 3.7 (3), \(F^{\prime}/F_{4}\) is abelian and hence, by Lemma 3.3, we have
\[[[a_{i},a_{j}],a_{k}]\cdot[[a_{j},a_{k}],a_{i}]\cdot[[a_{k},a_{i}],a_{j}]F_{4} =F_{4}\text{.}\]
Since, by part (4), \(([[a_{k},a_{i}],a_{j}]F_{4})^{-1}=[[a_{i},a_{k}],a_{j}]F_{4}\), it follows, again using the fact \(F^{\prime}/F_{4}\) is abelian, that
\[[[a_{i},a_{j}],a_{k}]F_{4}=([[a_{j},a_{k}],a_{i}]F_{4})^{-1}\cdot[[a_{i},a_{k }],a_{j}]F_{4}\text{.}\]
This proves part (5), completing the proof. \(\square\)
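To illustrate how these identities are used below, suppose \(r\geq 3\) and consider the commutator \([[a_{3},a_{2}],a_{1}]\), which is not basic (its third entry is smaller than its second). Applying part (5) with \((i,j,k)=(3,2,1)\) gives
\[[[a_{3},a_{2}],a_{1}]F_{4}=([[a_{2},a_{1}],a_{3}]F_{4})^{-1}\cdot[[a_{3},a_{1}],a_{2}]F_{4},\]
and both commutators on the right-hand side are basic, that is, lie in \({\bf BC}_{3}\). Rewritings of exactly this kind are used in the proofs of Lemma 3.9(1) and Proposition 4.1(2).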
We end this section by considering a certain subgroup \(K\) of \(F\) containing \(F_{4}\).
**Lemma 3.9**: _Let \(F,A\), and the \(F_{\ell}\) be as in Theorem 3.6, and let_
\[K=\langle F_{4},a_{i}^{2},[a_{i},a_{j}]^{2},[[a_{i},a_{j}],a_{k}]^{2},[[a_{i},a _{j}],a_{i}]:i,j,k\in[1,r]\rangle\text{.}\]
_Then the following hold._
* \(K\unlhd F\)_, and_ \(K\) _is also generated by_ \(F_{4}\cup B_{K}\)_, where_ \(B_{K}\) _is the set_ \[\{a_{i}^{2},[a_{i},a_{j}]^{2},[[a_{i},a_{j}],a_{k}]^{2},[[a_{i},a_{j}],a_{\ell }]:1\leq j<i\leq r,\ j<k\leq r,\ k\neq i,\ \ell\in\{i,j\}\};\]
2. \(F^{\prime}K/K\cong C_{2}^{r(r-1)(2r-1)/6}\) _and_ \(\{cK:c\in{\bf BC}_{2}\cup D_{K}\}\) _is a basis for_ \(F^{\prime}K/K\)_, where_ \(D_{K}=\{[[a_{i},a_{j}],a_{k}]:1\leq j<i\leq r,\ j<k\leq r,\ k\neq i\}\)_, and_ \({\bf BC}_{2}\) _is as in Lemma_ 3.5_;_
3. \(F/F^{\prime}K\cong C_{2}^{r}\)_._
**Proof** (1) Since \(F=\langle A\rangle\), to prove that \(K\unlhd F\) it is enough to show that \(b^{a}\in K\) (or equivalently that \((bF_{4})^{aF_{4}}\in K/F_{4}\)), for any \(a\in A\) and any element \(b\) in the given generating set for \(K\). Let \(a\in A\). By Lemma 3.8 (1) and (2) we have \((a_{i}^{2}F_{4})^{aF_{4}},([a_{i},a_{j}]^{2}F_{4})^{aF_{4}}\in K/F_{4}\) for all \(i,j\). The remaining generators, of the form \([[a_{i},a_{j}],a_{k}]^{2},[[a_{i},a_{j}],a_{i}]\), all lie in \(F_{3}\), and by Lemma 3.7 (2), \(F_{3}/F_{4}\leq Z(F/F_{4})\). It follows that \((bF_{4})^{aF_{4}}=bF_{4}\in K/F_{4}\) for each of these generators also. Thus \(K\unlhd F\).
To prove the second assertion of (1) let \(K_{0}:=\langle F_{4},B_{K}\rangle\). We show that each of the given generators for \(K\) lies in \(K_{0}\). Each of the generators \(a_{i}^{2}\) lies in \(B_{K}\subset K_{0}\). By Lemma 3.8 (3), \([a_{i},a_{j}]=[a_{j},a_{i}]^{-1}\), and hence each of the generators \([a_{i},a_{j}]^{2}\) lies in \(K_{0}\). For a generator \(x=[[a_{i},a_{j}],a_{i}]\) with \(i,j\in[1,r]\), if \(j=i\) then \(x=1\in K_{0}\), if \(j<i\) then \(x\in B_{K}\subset K_{0}\), while if \(j>i\) then \(x\in[[a_{j},a_{i}],a_{i}]^{-1}F_{4}\subset K_{0}\) (by Lemma 3.8(4)). It remains to consider the generators of the form \(x=[[a_{i},a_{j}],a_{k}]^{2}\). If \(i=j\) then \(x=1\in K_{0}\) so we may assume that \(i\neq j\). If \(k=i\) then we have just shown that \([[a_{i},a_{j}],a_{k}]\in K_{0}\) so \(x\in K_{0}\) and we may assume also that \(k\neq i\). If \(k=j\) then, by Lemma 3.8 (4), \([[a_{i},a_{j}],a_{j}]\in[[a_{j},a_{i}],a_{j}]^{-1}F_{4}\) which we have shown lies in \(K_{0}\) so again \(x\in K_{0}\). Thus we may assume that \(i,j,k\) are pairwise distinct. Let \(m=\min\{i,j,k\}\). If \(m=j\) then \(x\in B_{K}\subset K_{0}\); if \(m=i\) then by Lemma 3.8 (4), \(x\in[[a_{j},a_{i}],a_{k}]^{-2}F_{4}\subset K_{0}\); and if \(m=k\) then by Lemma 3.8 (5), modulo \(F_{4}\), \(x=[[a_{j},a_{k}],a_{i}]^{-2}\cdot[[a_{i},a_{k}],a_{j}]^{2}\), and each factor lies in \(K_{0}\) so \(x\in K_{0}\). Thus all generators lie in \(K_{0}\) and hence \(K_{0}=K\). This completes the proof of part (1).
(2) Now \(F^{\prime}K/K\cong F^{\prime}/(F^{\prime}\cap K)\cong(F^{\prime}/F_{4})/((F^{ \prime}\cap K)/F_{4})\) (note that \(F_{4}\leq K\cap F^{\prime}\)). By Lemma 3.7 (3), \(F^{\prime}/F_{4}\) is abelian, and hence \(F^{\prime}/(F^{\prime}\cap K)\) is abelian. Moreover, by [11, Hilfsatz 1.11], \(F^{\prime}\) is generated by \([a_{i},a_{j}]^{g}\) for \(i,j\in[1,r]\) and \(g\in F\). Then since each \([a_{i},a_{j}]^{2}\in K\), it follows that \(F^{\prime}K/K\cong C_{2}^{m}\), for some \(m\), and we need to find \(m\).
Note that, by Lemma 3.5 and the definition of \(D_{K}\), \({\bf BC}_{3}\) is the disjoint union \({\bf BC}_{3}=D_{K}\cup S\) where \(S=\{[[a_{i},a_{j}],a_{\ell}]:1\leq j<i\leq r,\ \ell\in\{i,j\}\}\) (and note that \(S\subset B_{K}\subset K\)). By Lemma 3.7, \(F^{\prime}=\langle{\bf BC}_{2},{\bf BC}_{3},F_{4}\rangle=\langle{\bf BC}_{2},D _{K},S,F_{4}\rangle\), and since \(K\) contains \(F_{4}\cup S\), it follows that \(\{xK:x\in{\bf BC}_{2}\cup D_{K}\}\) is a generating set for \(F^{\prime}K/K\cong C_{2}^{m}\). _We will show that it is in fact a basis using Theorem 3.6._ It is helpful to consider the following subgroup \(H\) such that \(F_{4}<H<K\),
\[H=\langle F_{4},[a_{i},a_{j}]^{2},[[a_{i},a_{j}],a_{k}]^{2},[[a_{i},a_{j}],a_{ \ell}]:1\leq j<i\leq r,\ j<k\leq r,\ k\neq i,\ \ell\in\{i,j\}\rangle.\]
By part (1) we have \(K=\langle H,a_{i}^{2}:i\in[1,r]\rangle\), and it follows from Lemma 3.8 (1) that each \(a_{i}^{2}H\) belongs to \(Z(F/H)\), so \(K/H\) is abelian. Also \(H/F_{4}\leq F^{\prime}/F_{4}\), and \(F^{\prime}/F_{4}\) is abelian by Lemma 3.7 (3), so also \(H/F_{4}\) is abelian. For convenience, we let \({\bf BC}_{2}=\{c_{1},\ldots,c_{s}\}\) and \(D_{K}=\{d_{1},\ldots,d_{t}\}\). Suppose that \(c_{1}^{e_{1}}\cdots c_{s}^{e_{s}}d_{1}^{f_{1}}\cdots d_{t}^{f_{t}}K=K\) with \(e_{1},\ldots,e_{s},f_{1},\ldots,f_{t}\in\{0,1\}\). Set \(g=c_{1}^{e_{1}}\cdots c_{s}^{e_{s}}d_{1}^{f_{1}}\cdots d_{t}^{f_{t}}\). Then \(g\in K\), so \(gH\) lies in the abelian group \(K/H=\langle a_{i}^{2}H:1\leq i\leq r\rangle\), and hence \(g=(a_{1}^{2})^{\ell_{1}}(a_{2}^{2})^{\ell_{2}}\cdots(a_{r}^{2})^{\ell_{r}}h\) for some integers \(\ell_{1},\ell_{2},\ldots,\ell_{r}\) and some \(h\in H\). Also \(hF_{4}\) lies in the abelian group \(H/F_{4}\), and hence \(h=c_{1}^{2i_{1}}\cdots c_{s}^{2i_{s}}d_{1}^{2j_{1}}\cdots d_{t}^{2j_{t}}h^{\prime}\), where all the \(i_{k}\) and \(j_{k}\) are integers and \(h^{\prime}\in\langle F_{4},[[a_{i},a_{j}],a_{\ell}]:1\leq j<i\leq r,\ \ell\in\{i,j\}\rangle\). Thus
\[c_{1}^{e_{1}}\cdots c_{s}^{e_{s}}d_{1}^{f_{1}}\cdots d_{t}^{f_{t}}F_{4}=gF_{4} =(a_{1}^{2})^{\ell_{1}}(a_{2}^{2})^{\ell_{2}}\cdots((a_{r}^{2})^{\ell_{r}})c_{1} ^{2i_{1}}\cdots c_{s}^{2i_{s}}d_{1}^{2j_{1}}\cdots d_{t}^{2j_{t}}h^{\prime}F_{4}.\]
However, by Theorem 3.6, \(gF_{4}\) has a unique representation of the form
\[gF_{4}=d^{\prime s^{\prime}_{1}}_{1}d^{\prime}_{2}{}^{s^{\prime}_{2}}\cdots d^{ \prime}_{t^{\prime}}{}^{s^{\prime}_{t^{\prime}}}F_{4},\]
where \(d^{\prime}_{1},d^{\prime}_{2},\ldots,d^{\prime}_{t^{\prime}}\in\cup_{u=1}^{3} \mathbf{BC}_{u}=\{a_{i},[a_{i},a_{j}],[[a_{i},a_{j}],a_{k}]:1\leq j<i\leq r,\ j \leq k\leq r\}\) and the \(s^{\prime}_{i}\) are integers. It follows that \(\ell_{i}=0\ (1\leq i\leq r)\), \(e_{k}=2i_{k}\ (1\leq k\leq s)\), \(f_{k}=2j_{k}\ (1\leq k\leq t)\) and \(h^{\prime}\in F_{4}\). Since \(e_{k},f_{k}\in\{0,1\}\), this implies that \(e_{1}=e_{2}=\cdots=e_{s}=f_{1}=f_{2}=\cdots=f_{t}=0\). Thus \(c_{1}K,\ldots,c_{s}K,d_{1}K\ldots,d_{t}K\) is a basis for \(F^{\prime}K/K\cong C_{2}^{m}\), as asserted, and \(m=|\mathbf{BC}_{2}\cup D_{K}|\).
Finally we determine this cardinality. By Lemma 3.5, \(|\mathbf{BC}_{2}|=r(r-1)/2\), and \(|D_{K}|=\frac{r^{3}}{3}-r^{2}+\frac{2r}{3}\), so
\[m=\frac{r(r-1)}{2}+\left(\frac{r^{3}}{3}-r^{2}+\frac{2r}{3}\right)=\frac{r^{3} }{3}-\frac{r^{2}}{2}+\frac{r}{6}=\frac{r(r-1)(2r-1)}{6}.\]
This completes the proof of part (2).
(3) Since \(F^{\prime}\leq F^{\prime}K<F\), it follows that \(F/F^{\prime}K\) is a quotient of the abelian group \(F/F^{\prime}\) and hence \(F/F^{\prime}K\) is abelian, generated by \(\{a_{i}F^{\prime}K:i\in[1,r]\}\). Moreover since each \(a_{i}^{2}\in K\leq F^{\prime}K\), the group \(F/F^{\prime}K\cong C_{2}^{m}\) for some \(m\leq r\). Suppose that
\[a_{1}^{e_{1}}a_{2}^{e_{2}}\cdots a_{r}^{e_{r}}F^{\prime}K=F^{\prime}K,\]
where \(e_{1},\ldots,e_{r}\in\{0,1\}\). Then \(a_{1}^{e_{1}}a_{2}^{e_{2}}\cdots a_{r}^{e_{r}}F^{\prime}=kF^{\prime}\) for some \(k\in K\), and it follows from part (1) that \(kF^{\prime}=a_{1}^{2f_{1}}a_{2}^{2f_{2}}\cdots a_{r}^{2f_{r}}F^{\prime}\) for some integers \(f_{1},\ldots,f_{r}\). Thus \(a_{1}^{e_{1}}a_{2}^{e_{2}}\cdots a_{r}^{e_{r}}F^{\prime}=a_{1}^{2f_{1}}a_{2}^ {2f_{2}}\cdots a_{r}^{2f_{r}}F^{\prime}\). By Theorem 3.6, \(a_{1}F^{\prime},a_{2}F^{\prime},\ldots,a_{r}F^{\prime}\) form a basis for the free abelian group \(F/F^{\prime}\), and this implies that \(e_{i}=2f_{i}\) for each \(i\). Since each \(e_{i}=0\) or \(1\), we have \(e_{1}=e_{2}=\cdots=e_{r}=0\). Therefore, \(a_{1}F^{\prime}K,a_{2}F^{\prime}K,\ldots,a_{r}F^{\prime}K\) form a basis for \(F/F^{\prime}K\), and \(F/F^{\prime}K\cong C_{2}^{r}\). \(\Box\)
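For example, when \(r=2\) we have \(\mathbf{BC}_{2}=\{[a_{2},a_{1}]\}\) and \(D_{K}=\emptyset\) (there is no triple with \(1\leq j<i\leq 2\), \(j<k\leq 2\) and \(k\neq i\)), so part (2) gives \(F^{\prime}K/K\cong C_{2}\) with basis \(\{[a_{2},a_{1}]K\}\), in agreement with \(\frac{r(r-1)(2r-1)}{6}=1\), while part (3) gives \(F/F^{\prime}K\cong C_{2}^{2}\).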
## 4 Structure of \(\mathcal{H}(n)\)
The goal of this section is to investigate the order, the subgroups and the automorphisms of the group \(\mathcal{H}(n)\) in Definition 1.2. First we obtain a lower bound for the order of \(\mathcal{H}(n)\).
**Proposition 4.1**: _Let \(F,A\), and the \(F_{\ell}\) be as in Theorem 3.6, with \(r=2n\geq 4\), and write \(A=A_{0}\cup B_{0}\), where \(A_{0}=\{a_{i}:1\leq i\leq n\}\) and \(B_{0}=\{a_{n+i}:1\leq i\leq n\}\). Also let \(K\) be the subgroup defined in Lemma 3.9, and let \(\mathcal{H}(n)\) be the group defined in Definition 1.2. Define_
\[I=\langle K,[a,a^{\prime}],[b,b^{\prime}],[[a,a^{\prime}],c],[[b,b^{\prime}],c ],[[b,a],b^{\prime}]\ :\ a,a^{\prime}\in A_{0},b,b^{\prime}\in B_{0},c\in A\ \rangle,\]
_Then the following hold._
* \(K<I\leq F^{\prime}I=F^{\prime}K\)_,_ \(I\unlhd F\)_, and_ \((F/I)/(F/I)^{\prime}\cong F/F^{\prime}I\cong C_{2}^{2n}\)
2. \(I\) _is also generated by_ \(K\cup D_{I}\)_, where_ \(D_{I}\) _is the set_ \[\{[a_{i},a_{j}],[a_{n+i},a_{n+j}],[[a_{i},a_{j}],a_{k}],[[a_{i},a_{j}],a_{n+ \ell}],[[a_{n+i},a_{n+j}],a_{n+k}],[[a_{n+i},a_{n+j}],a_{\ell}],\] \[[[a_{n+i^{\prime}},a_{\ell}],a_{n+j^{\prime}}]:1\leq j<i\leq n,\ j<k\leq n,\ 1\leq j^{\prime}<i^{\prime}\leq n,\ 1\leq\ell\leq n,\ k\neq i\}.\]
3. \(I/K\cong C_{2}^{v}\) _with_ \(v=(13n^{3}-15n^{2}+2n)/6\)_, and_ \(F^{\prime}I/I\cong C_{2}^{u}\) _with_ \(u=(n^{3}+n^{2})/2\)_; moreover_ \(F/I\) _is nilpotent of class_ \(3\) _and order_ \(|F/I|=2^{(n^{3}+n^{2}+4n)/2}\)_._
4. _The map_ \(\phi(x_{i})=a_{i}I\) _and_ \(\phi(y_{i})=a_{n+i}I\)_, for each_ \(i=1,\ldots,n\)_, defines an epimorphism_ \(\phi:{\cal H}(n)\to F/I\)_. In particular,_ \(|{\cal H}(n)|\geq 2^{(n^{3}+n^{2}+4n)/2}\)_._
**Proof** (1) By definition, \(K<I\), and \(F^{\prime}K\) contains each of the given generators for \(I\), so \(I\leq F^{\prime}K\). This implies that \(F^{\prime}I\leq F^{\prime}K\), and on the other hand \(F^{\prime}K\leq F^{\prime}I\), so \(F^{\prime}I=F^{\prime}K\). Next we prove that \(I\unlhd F\). Since \(F_{4}\leq K<I\) and \(F_{3}/F_{4}\leq Z(F/F_{4})\) (by Lemma 3.7(2)) and \(K\unlhd F\) (Lemma 3.9), it follows that, for each \(x\in F_{3}\cap I\), \(y\in K\), and \(z\in F\), the conjugates \(x^{z}\in xF_{4}\subseteq I\) and \(y^{z}\in K<I\). Thus to prove that \(I\unlhd F\), it is sufficient to prove that, for each \(a,a^{\prime}\in A_{0}\), \(b,b^{\prime}\in B_{0}\) and \(c\in A\), the conjugates \([a,a^{\prime}]^{c},[b,b^{\prime}]^{c}\) both lie in \(I\). Now \([a,a^{\prime}]^{c}=[a,a^{\prime}][[a,a^{\prime}],c]\) and both factors lie in \(I\), so \([a,a^{\prime}]^{c}\in I\). Similarly \([b,b^{\prime}]^{c}\in I\), and hence \(I\unlhd F\). Finally \((F/I)^{\prime}=F^{\prime}I/I\), and hence \((F/I)/(F/I)^{\prime}\cong F/F^{\prime}I=F/F^{\prime}K\) which, by Lemma 3.9(3), is isomorphic to \(C_{2}^{2n}\). Thus part (1) is proved.
(2) Let \(I_{0}=\langle K,D_{I}\rangle\), with \(D_{I}\) as in (2). Then \(I_{0}\leq I\) and we show that equality holds by proving that each of the given generators for \(I\) lies in \(I_{0}\). First, \(K\leq I_{0}\) by definition, so \(F_{4}<K\leq I_{0}\), and it follows from Lemma 3.8(3) that each \([a,a^{\prime}]\) and \([b,b^{\prime}]\) lies in \(I_{0}\). Next we consider \(x:=[[a,a^{\prime}],c]\). If \(a=a^{\prime}\) then \(x=1\in I_{0}\), while if \(c\in\{a,a^{\prime}\}\), then \(x\in K\) (using Lemma 3.9 and Lemma 3.8(4)), and again \(x\in I_{0}\). Otherwise \(a,a^{\prime},c\) are pairwise distinct; if \(c\in B_{0}\) then, by Lemma 3.8(4), \(x^{\pm 1}F_{4}\) contains an element of \(D_{I}\) of the form \([[a_{i},a_{j}],a_{n+\ell}]\) and hence \(x\in I_{0}\), while if \(c\in A_{0}\) then \(x\) is of the form \(x=[[a_{i},a_{j}],a_{k}]\) with \(i,j,k\leq n\) and pairwise distinct. In this case, if \(\min\{i,j,k\}\neq k\) then either \(x\in D_{I}\) or \(x^{-1}F_{4}\) contains \([[a_{j},a_{i}],a_{k}]\in D_{I}\), by Lemma 3.8(4), and if \(\min\{i,j,k\}=k\) then by Lemma 3.8(5), \(xF_{4}\) contains a product of two elements of \(D_{I}\). In all cases \(x\in I_{0}\). An analogous argument shows that each \([[b,b^{\prime}],c]\in I_{0}\). Finally consider \(x:=[[b,a],b^{\prime}]\). If \(b^{\prime}=b\) then \(x\in K\) so take \(b^{\prime}=a_{n+j^{\prime}}\neq b=a_{n+i^{\prime}}\) and \(a=a_{\ell}\). If \(i^{\prime}>j^{\prime}\) then \(x\in D_{I}\subset I_{0}\), so we may assume that \(i^{\prime}<j^{\prime}\). By Lemma 3.8(4) and (5), modulo \(F_{4}\), \(x=[[a_{n+i^{\prime}},a_{\ell}],a_{n+j^{\prime}}]\) satisfies
\[x\equiv[[a_{n+j^{\prime}},a_{\ell}],a_{n+i^{\prime}}]\cdot[[a_{n+i^{\prime}},a _{n+j^{\prime}}],a_{\ell}]\equiv[[a_{n+j^{\prime}},a_{\ell}],a_{n+i^{\prime}} ]\cdot[[a_{n+j^{\prime}},a_{n+i^{\prime}}],a_{\ell}]^{-1},\]
and each of these factors lies in \(I_{0}\), so \(x\in I_{0}\). We conclude that \(I_{0}=I\).
(3) Note that \(F^{\prime}I/I\cong F^{\prime}/(F^{\prime}\cap I)\), and \(F^{\prime}\cap I\) contains \(F^{\prime}\cap K\) since \(F_{4}<K<I\). Hence \(F^{\prime}/(F^{\prime}\cap I)\) is a quotient of \(F^{\prime}/(F^{\prime}\cap K)\), and by Lemma 3.9(2), \(F^{\prime}/(F^{\prime}\cap K)\cong F^{\prime}K/K\cong C_{2}^{r(r-1)(2r-1)/6}=C_{2}^ {(8n^{3}-6n^{2}+n)/3}\) (as \(|A|=r=2n\)), with basis \(\{c(F^{\prime}\cap K):c\in{\bf BC}_{2}\cup D_{K}\}\) where \(D_{K}=\{[[a_{i},a_{j}],a_{k}]:1\leq j<i\leq 2n,\ j<k\leq 2n,\ k\neq i\}\), and \({\bf BC}_{2}\) is as in Lemma 3.5. Since \(F^{\prime}K=F^{\prime}I\), we deduce that \(F^{\prime}I/I\cong C_{2}^{u}\) and \(I/K\cong C_{2}^{v}\), for some \(u,v\) such that \(u+v=(8n^{3}-6n^{2}+n)/3\).
Let \(B_{2}=\{[b,a]\ :\ a\in A_{0},b\in B_{0}\}\). Then \(|B_{2}|=n^{2}\), \(|{\bf BC}_{2}|=n(2n-1)\) (Lemma 3.5), and \({\bf BC}_{2}\) is the disjoint union \({\bf BC}_{2}=({\bf BC}_{2}\cap D_{I})\cup B_{2}\) with \(|{\bf BC}_{2}\cap D_{I}|=n^{2}-n\). We obtain a similar partition of \(D_{K}\). Suppose that \(z\in D_{K}\) and \(z\not\in I\). Then \(z=[[c,c^{\prime}],c^{\prime\prime}]\) for certain
\(c,c^{\prime},c^{\prime\prime}\in A\). First we show that \(c^{\prime},c^{\prime\prime}\in A_{0}\) and \(c\in B_{0}\). If \(c^{\prime}\in B_{0}\), then by the definition of \(D_{K}\) also \(c,c^{\prime\prime}\in B_{0}\) and \(z\) is an element of \(D_{I}\) of the form \([[a_{n+i},a_{n+j}],a_{n+k}]\), which is a contradiction, so \(c^{\prime}\in A_{0}\). Next, if also \(c,c^{\prime\prime}\in A_{0}\) then \(z\) would be an element of \(D_{I}\) of the form \([[a_{i},a_{j}],a_{k}]\), which is a contradiction, so at least one of \(c,c^{\prime\prime}\) lies in \(B_{0}\). If \(c\in A_{0}\) then we must have \(c^{\prime\prime}\in B_{0}\) and then \(z\) is an element of \(D_{I}\) of the form \([[a_{i},a_{j}],a_{n+\ell}]\), which is a contradiction. Thus \(c\in B_{0}\). If also \(c^{\prime\prime}\in B_{0}\), then \(z\) is one of the given generators for \(I\) of the form \([[b,a],b^{\prime}]\), which is again a contradiction. Hence \(c^{\prime\prime}\in A_{0}\), and our assertions are proved. Thus such elements \(z\) have the form \(z=[[a_{n+i},a_{j}],a_{k}]\), for some \(i,j,k\in[1,n]\); and as \(z\in D_{K}\) we have \(j<k\). Let
\[B_{3}:=\{[[a_{n+i},a_{j}],a_{k}]\ :\ 1\leq j<k\leq n,\ 1\leq i\leq n\}.\]
Then by Lemma 3.4, \(|B_{3}|=n\cdot n(n-1)/2=(n^{3}-n^{2})/2\), and \(D_{K}\) is the disjoint union \(D_{K}=B_{3}^{\prime}\cup B_{3}\) with \(B_{3}^{\prime}\subset I\). Now \(|D_{K}|=\frac{8n^{3}}{3}-4n^{2}+\frac{4n}{3}\) (Lemma 3.9), so \(|B_{3}^{\prime}|=|D_{K}|-|B_{3}|=(13n^{3}-21n^{2}+8n)/6\).
An analogous argument to that in the proof of Lemma 3.9(2) shows that \(\{cI\ :\ c\in B_{2}\cup B_{3}\}\) forms a basis for \(F^{\prime}I/I\cong C_{2}^{u}\) and hence \(u=|B_{2}|+|B_{3}|=n^{2}+(n^{3}-n^{2})/2=(n^{3}+n^{2})/2\). Thus by part (1), \(|F/I|=2^{2n+u}\) and \(2n+u=(n^{3}+n^{2}+4n)/2\). Also \(I/K\cong C_{2}^{v}\) with \(v=|{\bf BC}_{2}\cap I|+|B_{3}^{\prime}|=(n^{2}-n)+(13n^{3}-21n^{2}+8n)/6=(13n^ {3}-15n^{2}+2n)/6\).
To see that \(F/I\) is nilpotent of class \(3\), we note first that its derived subgroup is \((F/I)^{\prime}=F^{\prime}I/I\cong C_{2}^{u}\), with \(u=(n^{3}+n^{2})/2\) as above. The third term of the lower central series of \(F/I\) is \(F_{3}I/I\). It follows from Lemma 3.7(1) that \(F^{\prime}I=\langle{\bf BC}_{2},F_{3},I\rangle\), and hence \(F^{\prime}I/F_{3}I\) is generated by \(\{cF_{3}I:c\in{\bf BC}_{2},c\not\in I\}\). We showed above that \({\bf BC}_{2}\setminus I=B_{2}\) has size \(n^{2}\), and hence \(F^{\prime}I/F_{3}I=C_{2}^{u_{2}}\) with \(u_{2}\leq n^{2}<u\), so that \(F_{3}I/I\) is nontrivial. The fourth term of the lower central series is \(F_{4}I/I\), which is trivial since \(F_{4}<I\). Thus \(F/I\) is nilpotent of class \(3\), as asserted. This completes the proof of part (3).
(4) Setting \(\phi(x_{i})=a_{i}I\) and \(\phi(y_{i})=a_{n+i}I\), for each \(i=1,\ldots,n\), we have a map from the generating set for \({\cal H}(n)\) in Definition 1.2 to the set \(\{aI\ :\ a\in A\}\) of generators for \(F/I\). Moreover, extending \(\phi\) to a map on words in these generators of \({\cal H}(n)\), for each relator \(w\in{\cal R}\) in Definition 1.2, \(\phi(w)=w^{\prime}I\) such that either \(w^{\prime}\in K\) or \(w^{\prime}\) is one of the given generators for \(I\). Thus the images \(\phi(x_{i}),\phi(y_{i})\ (1\leq i\leq n)\) satisfy all the given relations of \({\cal H}(n)\), and hence, by von Dyck's Theorem (see [23, Theorem 2.2.1]), the extension of \(\phi\) to \({\cal H}(n)\to F/I\) is an epimorphism. This completes the proof of the proposition. \(\Box\)
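The exponent bookkeeping in part (3) can also be checked mechanically. The following short script (an illustrative sanity check assuming the SymPy library is available; it is not part of the argument) verifies that \(u+v\) equals the rank \((8n^{3}-6n^{2}+n)/3\) of \(F^{\prime}K/K\) from Lemma 3.9(2) with \(r=2n\), and that the exponent of \(|F/I|\) is \((n^{3}+n^{2}+4n)/2\):

```python
# Illustrative consistency check of the exponents in Proposition 4.1(3) (not part of the proof).
from sympy import symbols, simplify

n = symbols('n', integer=True, positive=True)
u = (n**3 + n**2) / 2                     # rank of F'I/I
v = (13 * n**3 - 15 * n**2 + 2 * n) / 6   # rank of I/K
rank_FK = (8 * n**3 - 6 * n**2 + n) / 3   # rank of F'K/K  (Lemma 3.9(2) with r = 2n)

assert simplify(u + v - rank_FK) == 0                        # u + v matches Lemma 3.9(2)
assert simplify(2 * n + u - (n**3 + n**2 + 4 * n) / 2) == 0  # exponent of |F/I|

# Sample values of the exponent of |F/I|:
print({k: (k**3 + k**2 + 4 * k) // 2 for k in (2, 3, 4)})    # {2: 10, 3: 24, 4: 48}
```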
Our next task is to prove that the epimorphism \(\phi\) in Proposition 4.1(4) is in fact an isomorphism. We need the following information about certain commutators in \({\cal H}(n)\).
**Lemma 4.2**: _Let \({\cal H}(n)=\langle X_{0}\cup Y_{0}\mid{\cal R}\rangle\) be the group defined in Definition 1.2, where \(n\geq 2\). Then,_
1. _for all_ \(z,z^{\prime}\in X_{0}\cup Y_{0}\)_,_ \([[z,z^{\prime}],z]=[[z,z^{\prime}],z^{\prime}]=1\)_;_
2. _for_ \(z,z^{\prime},z^{\prime\prime}\in X_{0}\cup Y_{0}\)_, we have_ \([[z,z^{\prime}],z^{\prime\prime}]\in Z({\cal H}(n))\)_,_ \([z,z^{\prime}]=[z^{\prime},z]\)_, and_ \([z,z^{\prime}]^{2}=[[z,z^{\prime}],z^{\prime\prime}]^{2}=1\)_._
3. _the fourth term_ \({\cal H}(n)_{4}\) _of the lower central series for_ \({\cal H}(n)\) _is trivial, so_ \({\cal H}(n)\) _is nilpotent of class at most_ \(3\)_._
**Proof** (1) We use the first few relations in \({\cal R}\). Firstly, \(z^{2}=(z^{\prime})^{2}=1\), so \((zz^{\prime})^{2}=[z,z^{\prime}]\). If both \(z,z^{\prime}\) lie in \(X_{0}\) or both lie in \(Y_{0}\), then we have the relation \([z,z^{\prime}]=1\), and hence \([[z,z^{\prime}],z]=[[z,z^{\prime}],z^{\prime}]=1\). So we may assume that \(z\in X_{0}\), say, and \(z^{\prime}\in Y_{0}\). Then we have \((zz^{\prime})^{4}=[z,z^{\prime}]^{2}=1\), and hence \(\langle z,z^{\prime}\rangle=D_{8}\) or \(C_{2}^{2}\). In either case \([z,z^{\prime}]=(zz^{\prime})^{2}\) is centralised by \(z\) and \(z^{\prime}\), and this implies that \([[z,z^{\prime}],z]=[[z,z^{\prime}],z^{\prime}]=1\). This proves part (1).
(2) Let \(u=[[z,z^{\prime}],z^{\prime\prime}]\). Then for all \(z^{\prime\prime\prime}\in X_{0}\cup Y_{0}\), \([u,z^{\prime\prime\prime}]\in{\cal R}\) and hence \([u,z^{\prime\prime\prime}]=1\). Since \([u,z^{\prime\prime\prime}]=u^{-1}u^{z^{\prime\prime\prime}}\), this implies that \(u=u^{z^{\prime\prime\prime}}\), and hence \(u\in Z({\cal H}(n))\). If both \(z,z^{\prime}\) lie in \(X_{0}\) or both lie in \(Y_{0}\), then \([z,z^{\prime}]=[z^{\prime},z]=1\) and the other assertions follow. If \(z\in X_{0}\) and \(z^{\prime}\in Y_{0}\), then \([z,z^{\prime}]^{2}\in{\cal R}\) and hence \([z,z^{\prime}]^{2}=1\), and this implies that \([z,z^{\prime}]^{-1}=[z,z^{\prime}]\). However \([z,z^{\prime}]^{-1}=[z^{\prime},z]\), and hence \([z,z^{\prime}]=[z^{\prime},z]\). Also \(u^{2}\in{\cal R}\) in this case and so \(u^{2}=1\). Finally, if \(z\in Y_{0}\) and \(z^{\prime}\in X_{0}\), then \([z^{\prime},z]^{2}\in{\cal R}\), and the argument just given shows that \([z,z^{\prime}]=[z^{\prime},z]\) and \([z,z^{\prime}]^{2}=1\). This implies that \(u^{2}=[[z,z^{\prime}],z^{\prime\prime}]^{2}=[[z^{\prime},z],z^{\prime\prime}]^ {2}\), which lies in \({\cal R}\) and hence is trivial.
(3) By part (2), for all \(z,z^{\prime},z^{\prime\prime},z^{\prime\prime\prime}\in X_{0}\cup Y_{0}\), we have \([[z,z^{\prime}],z^{\prime\prime}]\in Z({\cal H}(n))\) and hence \([[[z,z^{\prime}],z^{\prime\prime}],z^{\prime\prime\prime}]=1\). On the other hand, by [11, Hilfsatz 1.11(a)], \({\cal H}(n)_{4}\) is generated by the set of all conjugates \([[[z,z^{\prime}],z^{\prime\prime}],z^{\prime\prime\prime}]^{h}\) of such commutators by elements \(h\in{\cal H}(n)\), and hence \({\cal H}(n)_{4}\) is trivial. Thus \({\cal H}(n)\) is nilpotent of class at most \(3\). \(\Box\)
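For example, for \(x_{i}\in X_{0}\) and \(y_{j}\in Y_{0}\) the argument for part (1) shows that \(\langle x_{i},y_{j}\rangle\) is isomorphic to \(D_{8}\) or \(C_{2}^{2}\); since \([x_{i},y_{j}]\neq 1\) (its image is a basis element of \({\cal H}(n)^{\prime}/{\cal H}(n)_{3}\) by Theorem 4.5(3) below), in fact \(\langle x_{i},y_{j}\rangle\cong D_{8}\) for all \(i,j\).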
**Proposition 4.3**: _Let \({\cal H}(n)=\langle X_{0}\cup Y_{0}\mid{\cal R}\rangle\) be the group defined in Definition 1.2, where \(n\geq 2\), and let \(F,A=A_{0}\cup B_{0},I,K\) be as in Proposition 4.1. Then the map \(\psi:AI/I\to X_{0}\cup Y_{0}\) such that \(\psi(a_{i}I)=x_{i}\) and \(\psi(a_{n+i}I)=y_{i}\), for \(i=1,\ldots,n\), defines an epimorphism \(\psi:F/I\to{\cal H}(n)\), such that \(\psi\) is the inverse of the map \(\phi\) in Proposition 4.1(4). In particular \({\cal H}(n)\cong F/I\) and \(|{\cal H}(n)|=2^{(n^{3}+n^{2}+4n)/2}\)._
**Remark 4.4**: _It follows from Proposition 4.1(1) that \(F/I\) is isomorphic to the group with presentation \(\overline{F}=\langle A_{0}\cup B_{0}\mid F_{4}\cup B_{K}\cup D_{I}\rangle\), where \(B_{K},D_{I}\) are as in Lemma 3.9 and Proposition 4.1(2). In the proof we will work with the group \(\overline{F}\)._
**Proof** Interpreting \(F/I\) as the group \(\overline{F}:=\langle A_{0}\cup B_{0}\mid F_{4}\cup B_{K}\cup D_{I}\rangle\), the map \(\psi\) becomes a bijection \(\psi:A_{0}\cup B_{0}\to X_{0}\cup Y_{0}\) given by \(\psi:a_{i}\to x_{i},a_{n+i}\to y_{i}\), for \(i=1,\ldots,n\). Consider the extension of \(\psi\) to a map on words in these generators of \(\overline{F}\), so that for each element (relator) \(w\in F_{4}\cup B_{K}\cup D_{I}\), \(\psi(w)\) is the same word in \(X_{0}\cup Y_{0}\). We check that \(\psi(w)\) is equal to the identity of \({\cal H}(n)\) for each \(w\) and then apply von Dyck's Theorem [23, Theorem 2.2.1]. If \(w\in F_{4}\) then \(\psi(w)=1\) since \({\cal H}(n)\) has nilpotency class at most \(3\) by Lemma 4.2(3).
Next we consider the elements \(w\) of \(B_{K}\) as in Lemma 3.9(1). If \(w=a^{2}\) for \(a\in A_{0}\cup B_{0}\), then \(\psi(w)\in{\cal R}\) and so \(\psi(w)=1\). If \(w=[a,a^{\prime}]^{2}\) or \(w=[[a,a^{\prime}],a^{\prime\prime}]^{2}\), for \(a,a^{\prime},a^{\prime\prime}\in A_{0}\cup B_{0}\), then \(\psi(w)=1\) by Lemma 4.2(2). Finally suppose that \(w=[[a,a^{\prime}],a^{\prime\prime}]\), for \(a,a^{\prime}\in A_{0}\cup B_{0}\) with \(a^{\prime\prime}\in\{a,a^{\prime}\}\). Then it follows from Lemma 4.2(1) that \(\psi(w)=1\).
Finally we consider the elements \(w\) of \(D_{I}\) as in Proposition 4.1(2). If \(w=[a,a^{\prime}]\) for \(a,a^{\prime}\in A_{0}\) or \(a,a^{\prime}\in B_{0}\), then \(\psi(w)\in{\cal R}\) and so \(\psi(w)=1\). If \(w=[[a,a^{\prime}],a^{\prime\prime}]\), for \(a,a^{\prime},a^{\prime\prime}\in A_{0}\cup B_{0}\) such that either \(a,a^{\prime}\in A_{0}\) or \(a,a^{\prime}\in B_{0}\), then again \(\psi(w)=1\) since in these cases \(\psi([a,a^{\prime}])\in{\cal R}\). Finally we consider elements \(w=[[a,a^{\prime}],a^{\prime\prime}]\), for \(a,a^{\prime\prime}\in B_{0}\) and \(a^{\prime}\in A_{0}\). For these elements, \(\psi(w)\in{\cal R}\) and so \(\psi(w)=1\).
It now follows from von Dyck's Theorem [23, Theorem 2.2.1] that this map \(\psi\) defines an epimorphism \(\overline{F}\to{\cal H}(n)\). By the definitions, \(\psi\) is the inverse of the map \(\phi\) in Proposition 4.1(4), and hence \({\cal H}(n)\cong F/I\). The order \(|{\cal H}(n)|=|F/I|\) follows from Proposition 4.1(3). \(\Box\)
### Subgroups and automorphisms of \({\cal H}(n)\)
The main purpose of this subsection is to prove the following theorem.
**Theorem 4.5**: _Let \({\cal H}(n)\) be as in Definition 1.2 and let \({\cal H}(n)_{3}\) be the third term in the lower central series of \({\cal H}(n)\). Then the following hold._
* _For arbitrary_ \(x,x^{\prime}\in X_{0}\)_,_ \(y\in Y_{0}\)_, we have_ \([[x,y],x^{\prime}]=[[x^{\prime},y],x]\)_._
* \({\cal H}(n)/({\cal H}(n)^{\prime})\cong C_{2}^{2n}\)_, and_ \({\cal H}(n)\) _is an_ \(n\)_-dimensional mixed dihedral group relative to_ \(X:=\langle X_{0}\rangle\) _and_ \(Y:=\langle Y_{0}\rangle\)_._
* \({\cal H}(n)^{\prime}=W\times{\cal H}(n)_{3}\cong C_{2}^{n^{2}(n+1)/2}\)_, where_
* \(W=\langle[x_{i},y_{j}]:1\leq i,j\leq n\rangle\cong C_{2}^{n^{2}}\)_, and_
* \({\cal H}(n)_{3}=\langle[[x_{i},y_{j}],x_{k}]:1\leq i,j\leq n,\ i<k\leq n\rangle \cong C_{2}^{n^{2}(n-1)/2}\)_; and moreover_ \({\cal H}(n)_{3}\leq Z({\cal H}(n))\)_._
* _For any_ \(a\in{\cal H}(n)\) _and_ \(b,b^{\prime}\in Y\) _we have_ \([[b,a],b^{\prime}]=1\) _and_ \([[a,b],b^{\prime}]=1\)_._
* _For any_ \(g\in{\rm Aut}(X)\times{\rm Aut}(Y)\)_,_ \(g\) _induces an automorphism of_ \({\cal H}(n)\)_._
* _Let_ \(c\in X\) _and let_ \(d\in Y\) _so that_ \(c=x_{i_{1}}x_{i_{2}}\ldots x_{i_{k}}\) _and_ \(d=y_{j_{1}}y_{j_{2}}\ldots y_{j_{\ell}}\) _for some_ \(x_{i_{1}},x_{i_{2}},\ldots,x_{i_{k}}\in X_{0}\) _and_ \(y_{j_{1}},y_{j_{2}},\ldots,y_{j_{\ell}}\in Y_{0}\)_, with these expressions chosen so that_ \(k,\ell\) _are minimal. Then_ \[[c,d]{\cal H}(n)_{3}=\prod_{1\leq u\leq k,1\leq v\leq\ell}[x_{i_{u}},y_{j_{v}} ]{\cal H}(n)_{3}.\]
**Proof** (1) Let \(x,x^{\prime}\in X_{0}\) and \(y\in Y_{0}\). By Lemma 4.2 (2), any weight 3 commutator involving \(x,x^{\prime},y\) is in the center of \({\cal H}(n)\). Thus, by Lemma 3.2, we have
\[[[x,y^{-1}],x^{\prime}]\cdot[[y,(x^{\prime})^{-1}],x]\cdot[[x^{\prime},x^{-1} ],y]=1.\]
Since \(x^{2}=(x^{\prime})^{2}=y^{2}=1\) and \([x,x^{\prime}]=1\) the above equation becomes \([[x,y],x^{\prime}]\cdot[[y,x^{\prime}],x]=1\), and hence \([[x,y],x^{\prime}]=[[y,x^{\prime}],x]^{-1}\). Using the relations \([[x^{\prime},y],x]^{2}=1\) and \([x^{\prime},y]^{2}=1\) (which implies that \([x^{\prime},y]=[y,x^{\prime}]\)) we have \([[y,x^{\prime}],x]^{-1}=[[x^{\prime},y],x]^{-1}=[[x^{\prime},y],x]\), and part (1) holds.
(2) It follows from Proposition 4.3 that \({\cal H}(n)\cong F/I\), and from Proposition 4.1 (1) that \({\cal H}(n)/{\cal H}(n)^{\prime}\cong F/F^{\prime}I\cong C_{2}^{2n}\). Also, \({\cal H}(n)=\langle X,Y\rangle\), \(X=\langle X_{0}\rangle\cong C_{2}^{n}\) and \(Y=\langle Y_{0}\rangle\cong C_{2}^{n}\)
by Definition 1.2, and hence, by Definition 1.1(a), \({\cal H}(n)\) is an \(n\)-dimensional mixed dihedral group relative to \(X\) and \(Y\).
(3) By Propositions 4.3 and 4.1 (3), \({\cal H}(n)^{\prime}\cong F^{\prime}I/I\cong C_{2}^{u}\) with \(u=(n^{3}+n^{2})/2\), and by [11, Hilfsatz 1.11(a) and (b)],
\[{\cal H}(n)^{\prime}=\langle[z,z^{\prime}],[[z,z^{\prime}],z^{\prime\prime}]^{ h}\mid z,z^{\prime},z^{\prime\prime}\in X_{0}\cup Y_{0},\ h\in{\cal H}(n)\rangle.\]
Also, by Lemma 4.2 (2), \({\cal H}(n)_{3}\leq Z({\cal H}(n))\) and hence \([[z,z^{\prime}],z^{\prime\prime}]^{h}=[[z,z^{\prime}],z^{\prime\prime}]\) for each \(h\in{\cal H}(n)\). Let \(x,x^{\prime}\in X_{0}\) and \(y,y^{\prime}\in Y_{0}\). Since \([x,x^{\prime}]=[y,y^{\prime}]=1\) are relations in \({\cal R}\), the only weight two generators required are \([x_{i},y_{j}]\), for \(1\leq i,j\leq n\), and for the weight three generators \([[z,z^{\prime}],z^{\prime\prime}]\), we may assume that one of \(z,z^{\prime}\) lies in \(X_{0}\) and the other lies in \(Y_{0}\). Since \({\cal H}(n)_{4}=1\) by Lemma 4.2(3), it follows from Lemma 3.8(4) that \([[x,y],y^{\prime}]^{-1}=[[y,x],y^{\prime}]\), and since \([[y,x],y^{\prime}]=1\) is a relation in \({\cal R}\), also \([[x,y],y^{\prime}]=1\), and so the only weight three generators required are those of the form \([[y,x],x^{\prime}]\) or \([[x,y],x^{\prime}]\). Further by part (1), \([[x,y],x^{\prime}]=[[x^{\prime},y],x]\), and by Lemma 4.2 (1), \([[x,y],x]=1\). Thus we have
\[{\cal H}(n)^{\prime}=\langle[x_{i},y_{j}],[[x_{i},y_{j}],x_{k}]\mid 1\leq i,j \leq n,\ i<k\leq n\rangle.\]
Since there are precisely \(u=(n^{3}+n^{2})/2\) generators in the generating set above, and since \({\cal H}(n)^{\prime}=C_{2}^{u}\), we conclude that \({\cal H}(n)^{\prime}=W\times{\cal H}(n)_{3}\), where \(W=\langle[x_{i},y_{j}]:1\leq i,j\leq n\rangle\cong C_{2}^{n^{2}}\) and \({\cal H}(n)_{3}=\langle[[x_{i},y_{j}],x_{k}]:1\leq i,j\leq n,\ i<k\leq n\rangle \cong C_{2}^{n^{2}(n-1)/2}\).
(4) By part (3), we have \([b,a]=gh\) where \(g\in W\) and \(h\in{\cal H}(n)_{3}\). Moreover, by part (3) we also have \({\cal H}(n)_{3}\leq Z({\cal H}(n))\). This implies that
\[[[b,a],b^{\prime}]=[gh,b^{\prime}]=(gh)^{-1}(b^{\prime})^{-1}(gh)b^{\prime}=g^ {-1}(b^{\prime})^{-1}gb^{\prime}=[g,b^{\prime}].\]
Again by part (3), we have \(g=w_{1}w_{2}\ldots w_{s}\) for some \(w_{1},w_{2},\ldots,w_{s}\in\{[x_{i},y_{j}]:1\leq i,j\leq n\}\). We will use induction on \(s\) to prove that \([g,y_{k}]=1\), for any \(y_{k}\in Y_{0}\). In the proof of part (3) we showed that \([[x_{i},y_{j}],y_{k}]=1\), for all \(i,j,k\). Thus, if \(s=1\), then \([g,y_{k}]=[w_{1},y_{k}]=1\). Now assume that \(s>1\), and assume inductively that \([g,y_{k}]=1\) if \(g\in W\) can be expressed as a word of length less than \(s\) in the generators. Then, by [11, Hilfsatz III.1.2(c)],
\[[g,y_{k}]=[w_{1}w_{2}\ldots w_{s-1}w_{s},y_{k}]=[w_{1}w_{2}\ldots w_{s-1},y_{k }]^{w_{s}}[w_{s},y_{k}],\]
and also \([w_{s},y_{k}]=1\) from the case \(s=1\). Since by part (3) the commutator subgroup \({\cal H}(n)^{\prime}\) is abelian, we have
\[[g,y_{k}]=[w_{1}w_{2}\ldots w_{s-1},y_{k}]^{w_{s}}=[w_{1}w_{2}\ldots w_{s-1},y_ {k}].\]
By induction \([w_{1}w_{2}\ldots w_{s-1},y_{k}]=1\), and hence \([g,y_{k}]=1\). This implies that \(g\) commutes with every \(y_{k}\in Y_{0}\). Since \(Y\) is generated by \(Y_{0}\), it follows that \(g\) centralises \(Y\), and hence we have \([g,b^{\prime}]=1\). Thus, \([[b,a],b^{\prime}]=1\). By part (3), \({\cal H}(n)^{\prime}\) is an elementary abelian 2-group, so \([b,a]=[b,a]^{-1}=[a,b]\), and hence also \([[a,b],b^{\prime}]=1\).
(5) Let \((g_{1},g_{2})\in{\rm Aut}(X)\times{\rm Aut}(Y)\), and let \(x_{i}^{\prime}=x_{i}^{g_{1}}\) and \(y_{i}^{\prime}=y_{i}^{g_{2}}\), for each \(i\in\{1,\ldots,n\}\). Set \(X_{0}^{\prime}=\{x_{i}^{\prime}:i=1,2,\ldots,n\}\) and \(Y_{0}^{\prime}=\{y_{i}^{\prime}:i=1,2,\ldots,n\}\). We will apply von Dyck's Theorem (see [23, Theorem 2.2.1]) to show that the map \(\phi:X_{0}\cup Y_{0}\to{\cal H}(n)\) given by \(x_{i}\to x_{i}^{\prime},\ y_{i}\to y_{i}^{\prime}\), for \(i=1,\ldots,n\), extends uniquely to an automorphism \(\phi\) of \({\cal H}(n)\). To do
this, it is sufficient to show that \({\cal H}(n)\) is generated by \(X^{\prime}_{0}\cup Y^{\prime}_{0}\) and that for every relation \(w(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n})=1\) in \({\cal R}\), we have \(w(x^{\prime}_{1},\ldots,x^{\prime}_{n},y^{\prime}_{1},\ldots,y^{\prime}_{n})=1\). First, \(X=\langle X_{0}\rangle\cong Y=\langle Y_{0}\rangle\cong C^{n}_{2}\) and \({\cal H}(n)=\langle X,Y\rangle\), by the definition of \({\cal H}(n)\). Also \(\langle X^{\prime}_{0}\rangle=X\) and \(\langle Y^{\prime}_{0}\rangle=Y\), by the definition of \(g_{1}\) and \(g_{2}\). Hence \({\cal H}(n)=\langle X^{\prime}_{0}\cup Y^{\prime}_{0}\rangle\). Next we consider the relations. Let \(a,a^{\prime}\in X^{\prime}_{0},b,b^{\prime}\in Y^{\prime}_{0}\) and \(c,c^{\prime}\in X^{\prime}_{0}\cup Y^{\prime}_{0}\). Then \(c^{2}=1\) and \([a,a^{\prime}]=[b,b^{\prime}]=1\). By part (3), we have \([a,b],[[a,b],c]\in{\cal H}(n)^{\prime}\cong C_{2}^{n^{2}(n+1)/2}\), and thus \([a,b]^{2}=1\) and \([[a,b],c]^{2}=1\). By part (4), \([[b,a],b^{\prime}]=1\). Finally, \([[[a,b],c],c^{\prime}]\) lies in \({\cal H}(n)_{4}\) and hence is trivial, by Lemma 4.2(3). This proves part (5).
(6) In the following we write, for convenience, \(w\equiv w^{\prime}\pmod{{\cal H}(n)_{3}}\) if and only if \(w{\cal H}(n)_{3}=w^{\prime}{\cal H}(n)_{3}\). First we apply [11, Hilfssatz III.1.2(c)] several times: \([gh,f]=[g,f]^{h}\cdot[h,f]=[g,f]\cdot[[g,f],h]\cdot[h,f]\), for any \(g,h,f\in{\cal H}(n)\). Writing \(c=c^{\prime}x_{i_{k}}\), this implies that \([c,d]=[c^{\prime},d]\cdot[[c^{\prime},d],x_{i_{k}}]\cdot[x_{i_{k}},d]\), and since \([[c^{\prime},d],x_{i_{k}}]\in{\cal H}(n)_{3}\) (because \([c^{\prime},d]\in{\cal H}(n)^{\prime}\)), it follows that \([c,d]\equiv[c^{\prime},d]\cdot[x_{i_{k}},d]\pmod{{\cal H}(n)_{3}}\). Repeating this \(k\) times we obtain
\[[c,d]\equiv[x_{i_{1}},d]\ldots[x_{i_{k}},d]\pmod{{\cal H}(n)_{3}}.\]
Now we apply [11, Hilfssatz III.1.2(b)] several times: \([g,hf]=[g,f]\cdot[g,h]^{f}=[g,f]\cdot[g,h]\cdot[[g,h],f]\), for any \(g,h,f\in{\cal H}(n)\). Writing \(d=d^{\prime}y_{j_{\ell}}\), this implies, for all \(u\), that \([x_{i_{u}},d]=[x_{i_{u}},y_{j_{\ell}}]\cdot[x_{i_{u}},d^{\prime}]\cdot[[x_{i_{ u}},d^{\prime}],y_{j_{\ell}}]\), and since \([[x_{i_{u}},d^{\prime}],y_{j_{\ell}}]=1\) (by part (4)), it follows that \([x_{i_{u}},d]=[x_{i_{u}},y_{j_{\ell}}]\cdot[x_{i_{u}},d^{\prime}]\). Repeating this \(\ell\) times for each \(u\), and using the fact that \({\cal H}(n)^{\prime}/{\cal H}(n)_{3}\) is abelian (by part (3)), we obtain
\[[c,d]\equiv\prod_{1\leq u\leq k,1\leq v\leq\ell}[x_{i_{u}},y_{j_{v}}]\pmod{{ \cal H}(n)_{3}}.\]
This completes the proof. \(\Box\)
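As a small illustration of parts (3) and (6), take \(c=x_{1}x_{2}\in X\) and \(d=y_{1}\in Y\) (so \(n\geq 2\)). Using the same commutator identity as in the proof of part (6),
\[[x_{1}x_{2},y_{1}]=[x_{1},y_{1}]^{x_{2}}\cdot[x_{2},y_{1}]=[x_{1},y_{1}]\cdot[[x_{1},y_{1}],x_{2}]\cdot[x_{2},y_{1}],\]
so \([c,d]\equiv[x_{1},y_{1}][x_{2},y_{1}]\pmod{{\cal H}(n)_{3}}\), exactly as part (6) asserts, and the discarded factor \([[x_{1},y_{1}],x_{2}]\) is one of the basis elements of \({\cal H}(n)_{3}\) listed in part (3).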
## 5 Proof of Theorem 1.3
Let \({\cal H}(n)\), \(X=\langle X_{0}\rangle\), \(Y=\langle Y_{0}\rangle\) and \({\cal R}\) be as in Definition 1.2. By Theorem 4.5 (2), \({\cal H}(n)\) is an \(n\)-dimensional mixed dihedral group relative to \(X\) and \(Y\). By Proposition 4.3, the order of \({\cal H}(n)\) is \(2^{(n^{3}+n^{2}+4n)/2}\). Let \(\Gamma=C({\cal H}(n),X,Y)\) and \(\Sigma=\Sigma({\cal H}(n),X,Y)\), as in Definition 1.1. It follows from Lemma 2.3(5) that \(\Sigma\) has valency \(2^{n}\), and from Definition 1.1(b) that \(|V(\Sigma)|=2\cdot|{\cal H}(n):X|=2^{a}\), where \(a=1+(n^{3}+n^{2}+4n)/2-n=1+(n^{3}+n^{2}+2n)/2\). It remains for us to prove that \(\Sigma\) is semisymmetric and locally \(2\)-arc-transitive.
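For orientation: in the smallest case \(n=2\), these formulas give \(|{\cal H}(2)|=2^{10}\), so that \(\Sigma\) has \(2^{9}=512\) vertices and valency \(4\); this is the case that is treated computationally in the semisymmetry argument below.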
First we prove that \(\Sigma\) is locally \(2\)-arc-transitive. By Lemma 2.3(1), (3) and (4), \(\Sigma\) is the clique graph of \(\Gamma\), \({\rm Aut}(\Gamma)={\rm Aut}(\Sigma)\) contains \(G:={\cal H}(n)\rtimes A({\cal H}(n),X,Y)\), the group \({\cal H}(n)\) has two orbits on \(V(\Sigma)\), namely \(\{Xh:h\in{\cal H}(n)\}\) and \(\{Yh:h\in{\cal H}(n)\}\), and \({\cal H}(n)\) acts regularly on \(E(\Sigma)\). Further, the stabiliser in \(G\) of the \(1\)-arc \((X,Y)\) of \(\Sigma\) is the subgroup \(A({\cal H}(n),X,Y)\). By Theorem 4.5(5), \(A({\cal H}(n),X,Y)\) contains \({\rm Aut}(X)\times{\rm Aut}(Y)\). By (2), \((X,Y,Z)\) is a \(2\)-arc of \(\Sigma\) if and only if \(Z=Xz\) for some \(z\in{\cal H}(n)\) such that \(Xz\cap Y\neq\emptyset\). Thus \(Z=Xy\) for some \(y\in Y\) and since \(Z\neq X\), we have \(y\neq 1\). Since \({\rm Aut}(Y)\cong{\rm GL}_{n}(2)\) is transitive on \(Y\setminus\{1\}\), it follows that \({\rm Aut}(X)\times{\rm Aut}(Y)\) is transitive on all the \(2\)-arcs of the form \((X,Y,Z)\), and hence the stabiliser in \(G\) of \(X\) is transitive on all the \(2\)-arcs of \(\Sigma\) with first vertex \(X\). An analogous argument with \(X\) and \(Y\) interchanged shows that the stabiliser
in \(G\) of \(Y\) is transitive on all the \(2\)-arcs of \(\Sigma\) with first vertex \(Y\), and it follows that \(\Sigma\) is locally \(2\)-arc-transitive.
Showing that \(\Sigma\) is semisymmetric is the most delicate part of the proof. In the smallest case, where \(n=2\), a computation using Magma [1] shows that \(\Sigma\) is semisymmetric (see Remark 5.1 for a description of these computations). Thus we assume that \(n\geq 3\). By Lemma 2.3 (4), \(\Sigma\) is edge-transitive. Thus, to show that \(\Sigma\) is semisymmetric it is sufficient to prove that \(\operatorname{Aut}(\Sigma)\) is not transitive on \(V(\Sigma)\). We suppose to the contrary that \(\operatorname{Aut}(\Sigma)\) is transitive on \(V(\Sigma)\), and seek a contradiction. Under this assumption \(\Sigma\) is a \(2\)-arc-transitive graph of order a \(2\)-power and valency \(2^{n}\geq 8\). We organise the remainder of the proof into the following four steps.
**Step 1.**\(\mathcal{H}(n)^{\prime}\unlhd\operatorname{Aut}(\Sigma)\).
Let \(u=X\in V(\Sigma)\), and let \(A:=\mathcal{H}(n)\rtimes(\operatorname{Aut}(X)\times\operatorname{Aut}(Y))\). Then by (2), \(\Sigma(u)=\{Yx:x\in X\}\), and hence by Lemma 2.3(4) and Theorem 4.5(5), the kernel of the action of \(\operatorname{Aut}(\Sigma)_{u}\) on \(\Sigma(u)\) contains \(\operatorname{Aut}(Y)\). Thus Lemma 2.2 applies, and so there exists a \(2\)-group \(M\unlhd\operatorname{Aut}(\Sigma)\) such that \(M\leq\operatorname{Aut}(\Sigma)^{+}\), \(M\) is semiregular on \(V(\Sigma)\), and \(\Sigma\) is an \(M\)-normal cover of \(\Sigma_{M}\cong\mathbf{K}_{2^{n},2^{n}}\). As noted above, \(\mathcal{H}(n)\unlhd A\leq\operatorname{Aut}(\Sigma)^{+}\) and \(\mathcal{H}(n)\) acts regularly on \(E(\Sigma)\) and \(A\) is locally \(2\)-arc-transitive on \(\Sigma\) (by Lemma 2.3(4)). Hence \(M\mathcal{H}(n)\unlhd MA\leq\operatorname{Aut}(\Sigma)^{+}\), \(M\mathcal{H}(n)\) is a \(2\)-group (since both \(M\) and \(\mathcal{H}(n)\) are \(2\)-groups), and \(M\mathcal{H}(n)\) is edge-transitive on \(\Sigma\) (since \(\mathcal{H}(n)\) is transitive on \(E(\Sigma)\)), and its vertex-orbits are the two biparts of \(\Sigma\). Let \(\Phi\) be the Frattini subgroup of \(M\mathcal{H}(n)\), so \(\Phi\) is a characteristic subgroup of \(M\mathcal{H}(n)\) and hence \(\Phi\unlhd MA\). If \(\Phi\) were transitive on one of the biparts, say \(O\), of \(\Sigma_{M}\), and if \(v\in O\), then \(M\mathcal{H}(n)=(M\mathcal{H}(n))_{v}\Phi\), and by the properties of a Frattini subgroup ([11, Satz III.3.2(a)]), \(M\mathcal{H}(n)=(M\mathcal{H}(n))_{v}\), contradicting the fact that \(M\mathcal{H}(n)\) is transitive on each bipart of \(\Sigma\). Thus \(\Phi\) is intransitive on each bipart of \(\Sigma_{M}\), and \(\Phi\unlhd MA\). On the other hand \(\Sigma\) is locally \((MA,2)\)-arc transitive (since it is locally \((A,2)\)-arc transitive), and hence \(\Sigma_{M}\) is locally \((MA/M,2)\)-arc transitive (by [7, Lemma 5.1]).
Since \(\Sigma_{M}\cong\mathbf{K}_{2^{n},2^{n}}\), this means that \(MA\) acts \(2\)-transitively on each bipart \(O\) of \(\Sigma_{M}\), and since \(\Phi\) is an intransitive normal subgroup of \(MA\) it follows that \(\Phi\) acts trivially on \(O\), for each bipart \(O\) of \(\Sigma_{M}\). Hence \(\Phi\) is contained in the kernel of the action of \(MA\) on \(\Sigma_{M}\), that is, \(\Phi\leq M\). Thus \(M\mathcal{H}(n)/M\) is a quotient of \(M\mathcal{H}(n)/\Phi\) and hence \(M\mathcal{H}(n)/M\cong C_{2}^{s}\) for some \(s\) (by Lemma 3.1). Since \(M\mathcal{H}(n)\) is edge-transitive on \(\Sigma\), it follows that \(M\mathcal{H}(n)/M\) is an abelian group acting transitively on \(E(\Sigma_{M})\), and hence \(M\mathcal{H}(n)/M\) is regular on \(E(\Sigma_{M})\) (see [22, Lemma 2.4]), so \(s=2n\) and \(\mathcal{H}(n)/(M\cap\mathcal{H}(n))\cong C_{2}^{2n}\). The group induced by \(A\) on \(\Sigma_{M}\) is \(A/(A\cap M)\cong MA/M\), and is isomorphic to \(C_{2}^{2n}\rtimes(\operatorname{Aut}(X)\times\operatorname{Aut}(Y))\). Thus both \(A\) and \(MA\) are edge-transitive on \(\Sigma\) with edge-stabilisers isomorphic to \(\operatorname{Aut}(X)\times\operatorname{Aut}(Y)\), and hence \(MA=A\), that is, \(M\leq A\). Then since \(\mathcal{H}(n)\) is the largest normal \(2\)-subgroup of \(A\), we have \(M\leq\mathcal{H}(n)\). It follows that \(\mathcal{H}(n)/M\cong C_{2}^{2n}\) and hence \(M\leq\mathcal{H}(n)^{\prime}\); and since \(\mathcal{H}(n)/\mathcal{H}(n)^{\prime}\cong C_{2}^{2n}\) by Theorem 4.5(2), we conclude that \(M=\mathcal{H}(n)^{\prime}\), and the assertion of Step 1 is proved.
For the next part of the argument we exploit the fact that \(\operatorname{Aut}(\Sigma)=\operatorname{Aut}(\Gamma)\), recalling that \(\Gamma=C(\mathcal{H}(n),X,Y)\) is the Cayley graph \(\operatorname{Cay}(\mathcal{H}(n),S)\), where \(S=(X\cup Y)\setminus\{1\}\). We will frequently use the basic fact [10, Lemma 2.1] about mixed dihedral groups that the natural projection map \(\phi:h\to h\mathcal{H}(n)^{\prime}\) determines an isomorphism \(\mathcal{H}(n)/\mathcal{H}(n)^{\prime}\cong\phi(X)\times\phi(Y)\cong X\times Y\). In Step 2 we study the subset \(\Gamma_{4}(1)\) of the vertex set \(\mathcal{H}(n)\) of \(\Gamma\) consisting of all
elements \(h\) which can be reached by a path of length at most four from the vertex \(1\), so \(\Gamma_{4}(1)\) consists of all elements \(h\in\mathcal{H}(n)\) such that \(h=h_{1}h_{2}\ldots h_{k}\) with each \(h_{i}\in S\) and \(0\leq k\leq 4\).
**Step 2.**\(\Gamma_{4}(1)\cap\mathcal{H}(n)^{\prime}=\{1\}\cup S^{\prime}\), where \(S^{\prime}:=\{[x,y]:x\in X\setminus\{1\},y\in Y\setminus\{1\}\}\).
Let \(\Gamma_{4}(1)^{\prime}:=\Gamma_{4}(1)\cap\mathcal{H}(n)^{\prime}\). Clearly \(1\in\Gamma_{4}(1)^{\prime}\) and \(\Gamma_{4}(1)^{\prime}\subseteq\mathcal{H}(n)^{\prime}\). Suppose that \(h\in\Gamma_{4}(1)^{\prime}\setminus\{1\}\), so \(h=h_{1}h_{2}\ldots h_{k}\) with each \(h_{i}\in S\) and \(1\leq k\leq 4\). Choose such an expression for \(h\) with \(k\) minimal. Note in particular that \(h\in\mathcal{H}(n)^{\prime}\) and hence \(\phi(h)=1\), with \(\phi\) as above. If all the \(h_{i}\in X\setminus\{1\}\) then \(h\in X\), and since \(h\neq 1\) it follows from [10, Lemma 2.1] that \(\phi(h)\neq 1\) which is a contradiction. We obtain a similar contradiction if all the \(h_{i}\) lie in \(Y\setminus\{1\}\). Hence \(2\leq k\leq 4\) and not all the \(h_{i}\) lie in the same set, \(X\setminus\{1\}\) or \(Y\setminus\{1\}\). Next if there exists a unique \(i\) such that \(h_{i}\in X\setminus\{1\}\), then \(\phi(h)=\phi(h_{i})\cdot a\) for some \(a\in\phi(Y)\), and again we find that \(\phi(h)\neq 1\), and obtain a contradiction. Thus at least two of the \(h_{i}\) lie in \(X\setminus\{1\}\) and, similarly, at least two of the \(h_{i}\) lie in \(Y\setminus\{1\}\). This means that \(k=4\), and exactly two of the \(h_{i}\) lie in \(X\setminus\{1\}\), say \(x\) and \(x^{\prime}\), and exactly two of the \(h_{i}\) lie in \(Y\setminus\{1\}\), say \(y\) and \(y^{\prime}\). Then \(\phi(h)=\phi(xx^{\prime})\cdot\phi(yy^{\prime})\). If at least one of \(xx^{\prime}\) or \(yy^{\prime}\) is nontrivial then \(\phi(h)\neq 1\) by [10, Lemma 2.1], and we have a contradiction. Thus \(x^{\prime}=x^{-1}=x\) and \(y^{\prime}=y^{-1}=y\). Further, if \(h_{i}=h_{i+1}\) for some \(i\) we would have \(h_{i}h_{i+1}=1\) and obtain a shorter expression for \(h\). Thus the minimality of \(k\) implies that \(h=xyxy=[x,y]\), or \(h=yxyx=[y,x]=[x,y]\) (where the last equality uses the facts that each of \(x,y\) and \([x,y]\) is equal to its inverse). Thus Step 2 is proved.
**Step 3.**\(\mathrm{Aut}(\Sigma)_{1}\) fixes setwise the subset \(S^{\prime}\) of \(V(\Gamma)\) in Step 2, and acts transitively on \(S^{\prime}\).
By Step 1, we have \(\mathcal{H}(n)^{\prime}\unlhd\mathrm{Aut}(\Sigma)\). Thus \(\alpha^{-1}h\alpha\in\mathcal{H}(n)^{\prime}\) for all \(\alpha\) in the vertex stabiliser \(\mathrm{Aut}(\Sigma)_{1}\) and \(h\in\mathcal{H}(n)^{\prime}\). Since \(\mathcal{H}(n)\) acts on \(V(\Gamma)=\mathcal{H}(n)\) by right multiplication, the image of the vertex \(h\in\mathcal{H}(n)^{\prime}\) under \(\alpha\in\mathrm{Aut}(\Sigma)_{1}\) is
\[h^{\alpha}=(1^{h})^{\alpha}=1^{h\alpha}=1^{\alpha^{-1}h\alpha}=\alpha^{-1}h\alpha.\]
This implies that \(\mathrm{Aut}(\Sigma)_{1}\) fixes setwise the subset \(\mathcal{H}(n)^{\prime}\) of \(V(\Gamma)\). Since \(\mathrm{Aut}(\Sigma)_{1}\) also fixes \(\Gamma_{4}(1)\) setwise, and fixes the vertex \(1\), it follows that \(\mathrm{Aut}(\Sigma)_{1}\) fixes \((\Gamma_{4}(1)\cap\mathcal{H}(n)^{\prime})\setminus\{1\}\) setwise. By Step 2, we have \((\Gamma_{4}(1)\cap\mathcal{H}(n)^{\prime})\setminus\{1\}=\{[x,y]:x\in X \setminus\{1\},y\in Y\setminus\{1\}\}=S^{\prime}\). Recall that \(\mathrm{Aut}(X)\times\mathrm{Aut}(Y)\leq\mathrm{Aut}(\Sigma)_{1}\) and, since \(\mathrm{Aut}(X)\times\mathrm{Aut}(Y)\) normalises \(\mathcal{H}(n)\), that \(\mathrm{Aut}(X)\times\mathrm{Aut}(Y)\) acts on \(V(\Gamma)=\mathcal{H}(n)\) via its natural action. Since \(\mathrm{Aut}(X)\) is transitive on \(X\setminus\{1\}\) and \(\mathrm{Aut}(Y)\) is transitive on \(Y\setminus\{1\}\), it follows that \(\mathrm{Aut}(X)\times\mathrm{Aut}(Y)\), and hence also \(\mathrm{Aut}(\Sigma)_{1}\), is transitive on \(S^{\prime}\). Thus Step 3 is proved.
**Step 4.** A final contradiction.
For the final part of the proof we analyse a Cayley graph related to \(\Gamma\), namely the graph \(\Lambda:=\mathrm{Cay}(\mathcal{H}(n),S\cup S^{\prime})\). Note that the right multiplication action of \(\mathcal{H}(n)\) yields \(\mathcal{H}(n)\) as a subgroup of automorphisms of the graphs \(\Gamma:=\mathrm{Cay}(\mathcal{H}(n),S)\) and \(\mathrm{Cay}(\mathcal{H}(n),S^{\prime})\), and hence also \(\mathcal{H}(n)\leq\mathrm{Aut}(\Lambda)\). Moreover, since \(\mathrm{Aut}(\Sigma)=\mathrm{Aut}(\Gamma)\), the group \(\mathrm{Aut}(\Sigma)_{1}\), in its natural action on \(V(\Lambda)=\mathcal{H}(n)\), leaves \(S\) invariant, and by Step 3, \(\mathrm{Aut}(\Sigma)_{1}\) also leaves \(S^{\prime}\) invariant (and is transitive on it), and hence \(\mathrm{Aut}(\Sigma)_{1}\) leaves \(S\cup S^{\prime}\) invariant. Therefore also \(\mathrm{Aut}(\Sigma)_{1}\leq\mathrm{Aut}(\Lambda)\) and hence, since \(\mathrm{Aut}(\Sigma)=\mathcal{H}(n)\,\mathrm{Aut}(\Sigma)_{1}\), we have \(\mathrm{Aut}(\Sigma)\leq\mathrm{Aut}(\Lambda)\). Now \(\Lambda(1)=S\cup S^{\prime}\) and \(S^{\prime}\cap S=\emptyset\), and \(\mathrm{Aut}(\Sigma)_{1}\) is transitive on \(S^{\prime}\).
We claim that also \(S\) is an orbit of \(\operatorname{Aut}(\Sigma)_{1}\). The set \(S\cup\{1\}=X\cup Y\) is invariant under \(\operatorname{Aut}(\Sigma)_{1}\), and in the proof of Step 3 we noted that the subgroup \(\operatorname{Aut}(X)\times\operatorname{Aut}(Y)\) of \(\operatorname{Aut}(\Sigma)_{1}\) is transitive on each of \(X\setminus\{1\}\) and \(Y\setminus\{1\}\). Moreover we are assuming that \(\operatorname{Aut}(\Sigma)\) is transitive on \(V(\Sigma)\) and hence, since \(\Sigma\) is locally \(2\)-arc-transitive, \(\operatorname{Aut}(\Sigma)\) is transitive on the arcs of \(\Sigma\). Thus \(\operatorname{Aut}(\Sigma)\) contains an element \(\sigma\) which maps the arc \((X,Y)\) of \(\Sigma\) to the arc \((Y,X)\). Since \(\Sigma\) is the clique graph of \(\Gamma\) (Lemma 2.3(1)), \(X,Y\) (as subsets of \(\mathcal{H}(n)\)) are maximal cliques of \(\Gamma\) and are interchanged by \(\sigma\). In particular, \(\sigma\) induces an automorphism of the subgraph of \(\Gamma\) induced on \(X\cup Y\). The identity \(1\) is adjacent in \(\Gamma\) to every vertex of \(S\), while each other vertex \(z\in S\) is adjacent to only \(|X|-1\) elements of \(S\cup\{1\}\). Thus \(\sigma\) must fix \(1\) and interchange \(X\setminus\{1\}\) and \(Y\setminus\{1\}\). It follows that \(\operatorname{Aut}(\Sigma)_{1}\) is transitive on \(S\), proving the claim. Thus \(\operatorname{Aut}(\Sigma)\) has exactly two orbits on the arcs of \(\Lambda\), namely the arcs \((w,z)\) with \(wz^{-1}\in S\) and those with \(wz^{-1}\in S^{\prime}\).
Next we identify certain small subgraphs of \(\Lambda\). For any \([x,y]\in S^{\prime}\) and \(y^{\prime}\in Y\setminus\{1\}\), by Theorem 4.5 (4) we have \([[y,x],y^{\prime}]=1\) and \([[x,y],y^{\prime}]=1\), so \([x,y]y^{\prime}=y^{\prime}[x,y]\), and
\[(1,y^{\prime},[x,y]y^{\prime},[x,y],1)\]
is a \(4\)-arc of \(\Lambda\). Moreover, since \([x,y]y^{\prime}\not\in S\cup S^{\prime}\), it follows that the subgraph of \(\Lambda\) induced on \(C(x,y,y^{\prime}):=\{1,y^{\prime},[x,y]y^{\prime},[x,y]\}\) is a \(4\)-cycle.
Now we choose \(x^{\prime}=x_{2}\in X_{0}\) and \(a=x_{1}\in X_{0}\), \(b=y_{1}\in Y_{0}\) so that \([a,b]\in S^{\prime}\). These elements arise as images under \(\sigma\) as follows: there exists \([x,y]\in S^{\prime}\) such that \([x,y]^{\sigma}=[a,b]\), and there exists \(y^{\prime}\in Y\) such that \((y^{\prime})^{\sigma}=x^{\prime}\). Thus the subgraph of \(\Lambda\) induced on \(C^{\prime}:=C(x,y,y^{\prime})^{\sigma}=\{1,x^{\prime},([x,y]y^{\prime})^{ \sigma},[a,b]\}\) is a \(4\)-cycle including the \(2\)-arc \(([a,b],1,x^{\prime})\). Thus, setting \(z:=([x,y]y^{\prime})^{\sigma}\), this \(4\)-cycle is \((1,x^{\prime},z,[a,b],1)\) and hence \(z=sx^{\prime}=t[a,b]\) for some \(s,t\in S\cup S^{\prime}\). As these four vertices are pairwise distinct, \(sx^{\prime}\neq 1\) and \(tsx^{\prime}=[a,b]\neq 1\).
Each of \(t,s\) lies in either \(S\) or \(S^{\prime}\), giving four possible combinations. We obtain a contradiction from each possibility as follows. First, if both \(t,s\in S^{\prime}\), then \(x^{\prime}=st[a,b]\) and \(st[a,b]\in\mathcal{H}(n)^{\prime}\) while \(x^{\prime}\in X\setminus\{1\}\), and we have a contradiction since by [10, Lemma 2.1], \(\phi(x^{\prime})\neq 1\) while \(\phi(st[a,b])=1\). Next suppose that \(t\in S^{\prime}\) and \(s\in S\). Then \(sx^{\prime}=t[a,b]\in\mathcal{H}(n)^{\prime}\) and hence \(\phi(sx^{\prime})=\phi(t[a,b])=1\), which implies that \(sx^{\prime}=1\) (by [10, Lemma 2.1]), a contradiction. Thirdly, suppose that \(s,t\in S\). Then \(tsx^{\prime}=[a,b]\in\mathcal{H}(n)^{\prime}\) and hence \(\phi(tsx^{\prime})=\phi([a,b])=1\). Again we conclude that \(tsx^{\prime}=1\) by [10, Lemma 2.1], which is a contradiction.
This leaves the case \(t\in S,s\in S^{\prime}\), and hence \(\phi(x^{\prime})=\phi(sx^{\prime})=\phi(t[a,b])=\phi(t)\). Then by [10, Lemma 2.1], we must have \(t\in X\setminus\{1\}\) and \(\phi(x^{\prime})=\phi(t)\) implies that \(t=x^{\prime}\). Thus \(x^{\prime}sx^{\prime}=[a,b]\) and so \(s=x^{\prime}[a,b]x^{\prime}=[a,b]^{x^{\prime}}\in S^{\prime}\), which implies that \([a,b][[a,b],x^{\prime}]=[a,b]^{x^{\prime}}=s=[c,d]\) for some \(c\in X\setminus\{1\}\) and \(d\in Y\setminus\{1\}\). Now \(c=x_{i_{1}}x_{i_{2}}\ldots x_{i_{k}}\) and \(d=y_{j_{1}}y_{j_{2}}\ldots y_{j_{\ell}}\) for some \(x_{i_{1}},x_{i_{2}},\ldots,x_{i_{k}}\in X_{0}\) and \(y_{j_{1}},y_{j_{2}},\ldots,y_{j_{\ell}}\in Y_{0}\), and we choose these expressions with \(k,\ell\) minimal. We now apply Theorem 4.5 (6) to \([c,d]\), observing that \([a,b]\mathcal{H}(n)_{3}=[c,d]\mathcal{H}(n)_{3}\), recalling that \(a=x_{1}\in X_{0}\) and \(b=y_{1}\in Y_{0})\), and using the fact that the cosets \([x_{i_{u}},y_{j_{v}}]\mathcal{H}(n)_{3}\), for \(1\leq u,v\leq n\), form a basis for \(\mathcal{H}(n)^{\prime}/\mathcal{H}(n)_{3}\cong C_{2}^{n^{2}}\) (see Theorem 4.5 (3)). We deduce that \(k=\ell=1\), so that \(a=c=x_{i_{1}}=x_{1}\) and \(b=d=y_{j_{1}}=y_{1}\). The equality \([a,b][[a,b],x^{\prime}]=[c,d]\) then implies that \([[a,b],x^{\prime}]=1\), that is to say, \([[x_{1},y_{1}],x_{2}]=1\), which contradicts Theorem 4.5 (3). Thus \(\operatorname{Aut}\Gamma\) acts intransitively on the vertices of \(\Gamma\), and \(\Gamma\) is semisymmetric, completing the proof of Theorem 1.3.
**Remark 5.1**: To prove that \(\Sigma\) is semisymmetric in the case \(n=2\), we make use of a Magma [1] computation, which we now describe. First, the group \({\cal H}(2)\) is input in the category GrpFP via the presentation given in Definition 1.2. Next, the pQuotient command is used to construct the largest 2-quotient \(H2\) of \({\cal H}(2)\) having lower exponent-2 class at most 100, as a group in the category GrpPC. Comparing the orders of these groups, we find \(|{\cal H}(2)|=|H2|\), so that \({\cal H}(2)\cong H2\). Next we construct the graph \(\Sigma\). Computation shows that \(\Sigma\) is edge-transitive but not vertex-transitive, and has valency 4. We have made the Magma programs available in the following Appendix.
## Appendix: Magma programs used in the proof of Theorem 1.3 in the case \(n=2\).
Input the group \({\cal H}(2)\):
G<x1,x2,y1,y2>:=Group<x1,x2,y1,y2| x1^2, x2^2, y1^2, y2^2, (x1,x2)=(y1,y2)=1, (x1,y1)^2=(x1,y2)^2=(x2,y1)^2=(x2,y2)^2=1, ((x1,y1),x2)^2=((x1,y1),y2)=1, ((x1,y2),x2)^2=((x1,y2),y1)=1, ((x2,y1),x1)^2=((x2,y1),y2)=1, ((x2,y2),x1)^2=((x2,y2),y1)=1, (x1,((x1,y1),x2))=(x2,((x1,y1),x2))=(y1,((x1,y1),y2))=(y2,((x1,y1),y2))=1, (x1,((x1,y2),x2))=(x2,((x1,y2),x2))=(y1,((x1,y2),x2))=(y2,((x1,y2),x2))=1, (x1,((x1,y2),y1))=(x2,((x1,y2),y1))=(y1,((x1,y2),y1))=(y2,((x1,y2),y1))=1, (x1,((x2,y1),x1))=(x2,((x2,y1),x1))=(y1,((x2,y1),x1))=(y2,((x2,y1),x1))=1, (x1,((x2,y1),y2))=(x2,((x2,y1),y2))=(y1,((x2,y1),y2))=(y2,((x2,y1),y2))=1, (x1,((x2,y2),x1))=(x2,((x2,y2),x1))=(y1,((x2,y2),x1))=(y2,((x2,y2),y1))=1>;
Construct the largest 2-quotient group of \({\cal H}(2)\) having lower exponent-2 class at most 100, as a group in the category GrpPC:
H2,q:=pQuotient(G,2,100);
Order of \(H2\) (the result shows that \(|H2|=|{\cal H}(2)|\), and so \(H2\cong{\cal H}(2)\)):
FactoredOrder(H2);
Construct the graph \(\Sigma\):
X:=sub<H2|x1,x2>; Y:=sub<H2|y1,y2>;
Vsigma1:={};
for g in H2 do
Xg:={};
for a in X do
Include(~Xg, a*g);
end for;
Include(~Vsigma1,Xg);
end for;
Vsigma2:={};
for g in H2 do
Yg:={};
for b in Y do
Include(~Yg, b*g);
end for;
Include(~Vsigma2,Yg);
end for;
Vsigma:=Vsigma1 join Vsigma2;
Esigma:={{x,y}: x in Vsigma1, y in Vsigma2 | #(x meet y) ne 0};
Sigma:=Graph<Vsigma|Esigma>;
Test if \(\Sigma\) is a tetravalent semisymmetric graph:
IsVertexTransitive(Sigma);
IsEdgeTransitive(Sigma);
Valence(Sigma);
## Acknowledgements
The first author has been supported by the Croatian Science Foundation under the project 6732. The second author is grateful for Australian Research Council Discovery Project Grant DP230101268. The third author was supported by the National Natural Science Foundation of China (12071023, 12161141005) and the 111 Project of China (B16002).
|
2301.01662
|
Unimodular approaches to the cosmological constant problem
|
We review selected aspects of unimodular gravity and we discuss its viability
as a solution of the old cosmological constant problem. In unimodular gravity
the cosmological constant is promoted to a global degree of freedom. We
highlight the importance of correctly setting up its initial data in order to
achieve a resolution of the cosmological constant problem on a semi-classical
level. We review recent path integral analysis of quantum aspects of unimodular
gravity to note that the semi-classical findings carry over to the quantum
level as well. We point out that a resolution of the problem inherently relies
on a global constraint on the space-time four-volume. This makes the theory
closely related to the vacuum energy sequester, which operates in a similar
way. We discuss possible avenues of extending unimodular gravity that preserve
the resolution of the cosmological constant problem.
|
Pavel Jiroušek
|
2023-01-04T15:20:03Z
|
http://arxiv.org/abs/2301.01662v1
|
# Unimodular approaches to the cosmological constant problem
###### Abstract
We review selected aspects of unimodular gravity and we discuss its viability as a solution of the old cosmological constant problem. In unimodular gravity the cosmological constant is promoted to a global degree of freedom. We highlight the importance of correctly setting up its initial data in order to achieve a resolution of the cosmological constant problem on a semi-classical level. We review recent path integral analysis of quantum aspects of unimodular gravity to note that the semi-classical findings carry over to the quantum level as well. We point out that a resolution of the problem inherently relies on a global constraint on the space-time four-volume. This makes the theory closely related to the vacuum energy sequester, which operates in a similar way. We discuss possible avenues of extending unimodular gravity that preserve the resolution of the cosmological constant problem.
unimodular gravity; cosmological constant problem; modified gravity; Weyl invariance
Pavel Jiroušek
## 1 Introduction
Prior to the measurement of the accelerated nature of the expansion of the Universe [1; 2], the cosmology community predominantly expected that the effective value of the cosmological constant (CC) would be zero. However, even before we were burdened with reconciling the puzzlingly minuscule value of the CC that we observe today [3], it had been recognized that there is an underlying problem with the vanishing of the vacuum energy [4; 5; 6]. This problem stems from the observation that the energy of the vacuum state of quantum fields behaves exactly as an effective cosmological constant. In the semi-classical approximation, where gravity behaves classically while matter fields are quantized, these vacuum energies appear to be able to drive an accelerated expansion of the universe. However, any attempt at estimating these contributions has produced values of such magnitude that their effect on cosmology would be impossible to miss. Needless to say, such effects have not been observed, and while the vacuum energy has ultimately been measured to be non-zero, it is still many orders of magnitude smaller than any estimate obtained from quantum field theory. The question then arises: why do we not observe these large vacuum energies, or rather, what mechanism causes them to cancel out or vanish? For more details see e.g. [7; 8; 9; 10; 11]. Due to its origin, the above problem is often referred to as the 'old' cosmological constant problem. Note that more recent questions regarding the value of the vacuum energy, like the coincidence problem, often assume that the old cosmological constant problem is somehow solved and usually do not address it in any way.
The old cosmological constant problem is commonly considered to be a fine-tuning problem, as one can carefully tune the bare cosmological constant of general relativity (GR) in such a way that it cancels the quantum contributions up to the tiny residual amount that we observe. However, this view is grossly oversimplified. As has been pointed out [12; 13; 14], the cosmological constant also receives large contributions from higher loop corrections of matter fields, which do not diminish with higher loop orders. One is then forced to tune the cosmological constant at every step of the loop expansion to a very high degree of precision, which entails an infinite number of fine tunings, each as bad as the previous one. This signals that the running of such a _renormalized_ cosmological constant is ultrasensitive to the UV completion
of the matter theory, of which we have no knowledge. This is of course a disaster, since it would imply that our understanding of gravity at the lowest energies depends significantly on the microscopic physics at large energies. Hence, we need to protect the effective cosmological constant from such effects.
In this paper we are going to discuss a popular theory commonly used to address this problem - _unimodular gravity_ (UG) [15; 16; 17]. The origins of this theory go back to Einstein himself, who used the unimodular condition \(\sqrt{-g}=1\) as a partial gauge fixing of diffeomorphism invariance to simplify calculations in GR [18]. Only later was it realized that assuming such a gauge fixing prior to variation of the Einstein-Hilbert action yields a modification of GR, where the trace of the Einstein equations is directly subtracted1[15; 16; 19; 20; 21]
Footnote 1: We use the reduced Planck units \(8\pi G_{N}=1\) and signature convention \((+,-,-,-)\).
\[G_{\mu\nu}-\frac{1}{4}Gg_{\mu\nu}=T_{\mu\nu}-\frac{1}{4}Tg_{\mu\nu}. \tag{1}\]
Surprisingly, these equations turn out to be classically equivalent to those of GR, albeit with an unspecified cosmological constant. The key property of equations (1) is that they appear to be blind to any cosmological constant term. This has raised hopes that within UG the quantum corrections to the vacuum energy would fail to affect the space-time geometry. This would clearly solve the old cosmological constant problem. Alas, this claim is often merely stated or considered to be an obvious fact, without any detail or references for the argument. Furthermore, it has been recently argued that the quantum corrections to vacuum energy in fact do not decouple in UG and that the old CC problem remains present in UG as well [8; 22; 23]. As far as we are aware the discussion on this topic is not settled and a consensus has not been reached. It is one of the aims of this work to highlight the origin of this difference in views.
In section 2 we will discuss several popular formulations of UG and the status of the old CC problem within them. We show that whether UG solves the old CC problem or not hinges on the way we provide the initial data that determine the effective cosmological constant. In particular, in formulations that rely on the use of a Lagrange multiplier to fix the unimodular condition for the metric determinant, i.e.
\[\lambda(\sqrt{-g}-1)\, \tag{2}\]
specifying the initial value of the Lagrange multiplier directly spoils the decoupling mechanism. Conversely, in formulations where the same restriction is achieved via a composite structure of the minimally coupled metric, for example:
\[g_{\mu\nu}^{phys}=\frac{g_{\mu\nu}}{\sqrt[4]{-g}}\, \tag{3}\]
such initial conditions cannot be given and the old CC problem appears to be resolved. We first discuss the ambiguity in the initial data on the level of equations of motion, then we provide a discussion for transverse diffeomorphism invariant formulations in section 2.1. We review the fully diffeomorphism invariant theories a la Henneaux and Teitelboim [24] in section 2.2 and its Weyl invariant extensions [25; 26] in section 2.3. We further comment on the usefulness of these extensions for further study of unimodular gravity. Finally, we comment on the appearance of a global constraint on four-volume of space-time in section 2.4.
The problem with specifying the initial value of the Lagrange multiplier carries over to the quantum regime. We review a partial path integration procedure of the unimodular degrees of freedom that are extra in comparison to GR, for the Henneaux and Teitelboim formulation [24]. Quantum aspects of UG have been studied using path integral techniques in multiple
recent works, e.g. [22; 27; 28; 29; 30; 31]. The integration can be carried out either while keeping the initial value of the Lagrange multiplier fixed, as in [22; 28; 31], or by keeping it free. The former reduces to GR with a directly specified cosmological constant; hence it corresponds to an extension of UG which preserves its equivalence with GR, and consequently the old CC problem persists. The latter calculation results in an expression which corresponds to GR with a CC that is selected by a global constraint on the space-time four-volume. Such a fixing is inherently invariant under quantum corrections to the vacuum energy, as has been pointed out in [14]. Hence the old cosmological constant problem seems to be alleviated. We also briefly discuss the quantum fluctuations of the cosmological constant that naively appear due to the promotion of the CC to a degree of freedom, as noted in [32].
Lastly, we review the proposal of vacuum energy sequestering [13; 14], which also achieves the decoupling of the quantum corrections to the CC. This mechanism relies on a blindness of the equations of motion to the vacuum energies similar to the one we find in UG. Unlike UG, this approach forcibly introduces a pair of global constraints, which completely removes the ambiguity in providing initial data that is present in UG. Vacuum energy sequestering can be formulated as a local theory [33] that is very similar to UG. We point out that the local approach again introduces the ambiguity in providing initial data, which affects the resolution of the old cosmological constant problem. Finally, we discuss the relation between the local and global versions and propose how such a procedure can be applied in UG.
## 2 Classical formulations of Unimodular Gravity
As we have alluded to in the introduction, the motivation for UG stems mainly from the observation that the trace-free Einstein equation [15; 16; 19; 20; 21]2
Footnote 2: A similar equation has been written down originally by Einstein himself [34], however, only for a priori trace-less energy momentum tensor (of radiation). Only later it has been realized that these equations describe UG.
\[G_{\mu\nu}-\frac{1}{4}Gg_{\mu\nu}=T_{\mu\nu}-\frac{1}{4}Tg_{\mu\nu}\, \tag{4}\]
contains no information about the cosmological constant term in the Einstein-Hilbert action. Despite this, these equations are almost equivalent to standard Einstein's equations. We can see this by taking the covariant divergence of both sides, which gives
\[\partial_{\mu}(G-T)=0. \tag{5}\]
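For concreteness, this follows from the contracted Bianchi identity \(\nabla^{\mu}G_{\mu\nu}=0\) together with the covariant conservation of the energy-momentum tensor, \(\nabla^{\mu}T_{\mu\nu}=0\):
\[\nabla^{\mu}\Big{(}G_{\mu\nu}-\frac{1}{4}Gg_{\mu\nu}\Big{)}=-\frac{1}{4}\partial_{\nu}G\,\qquad\nabla^{\mu}\Big{(}T_{\mu\nu}-\frac{1}{4}Tg_{\mu\nu}\Big{)}=-\frac{1}{4}\partial_{\nu}T\,\]
and equating the two expressions yields (5).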
This is a differential constraint, which can be easily solved as
\[G-T=4\Lambda\, \tag{6}\]
where \(\Lambda\) is an integration constant. Plugging this into the original traceless equation (4) yields the Einstein equations with a cosmological constant \(\Lambda\)
\[G_{\mu\nu}=T_{\mu\nu}+\Lambda g_{\mu\nu}\, \tag{7}\]
A crucial difference in comparison to GR is that any value for \(\Lambda\) is admissible here. In other words, any solution of Einstein equations with _arbitrary cosmological constant_ is a solution of the traceless equations (4). The key property responsible for this arbitrariness is that equations (4) are invariant under constant shifts of vacuum energy
\[T_{\mu\nu}\to T_{\mu\nu}+\rho_{vac}\ g_{\mu\nu}\, \tag{8}\]
where \(\rho_{vac}\) is a constant. Crucially, the shift in the energy-momentum tensor, which is generated by accounting for the quantum corrections to vacuum energy, has exactly the form (8) and therefore the original equations (4) are indeed blind to such corrections. The symmetry (8) holds even in equation (5), but it is finally broken once we specify the cosmological constant \(\Lambda\) in (6). If we wish to evolve some initial conditions using the trace-free equations (4), we would soon find that we need to specify the effective cosmological constant (6) in order to get a unique solution. However, as it has been pointed out [22; 23], specifying this parameter immediately yields the equation (7), which is just an Einstein equation with a cosmological constant. It was here that the original cosmological constant problem arose in the first place. This seems to imply that we have not succeeded in solving the old CC problem; rather, we have just shifted it one step away.
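To see the blindness explicitly, note that under the shift (8) the right-hand side of (4) changes by
\[\rho_{vac}\,g_{\mu\nu}-\frac{1}{4}\big{(}g^{\alpha\beta}\rho_{vac}\,g_{\alpha\beta}\big{)}g_{\mu\nu}=\rho_{vac}\,g_{\mu\nu}-\rho_{vac}\,g_{\mu\nu}=0\,\]
so a constant vacuum energy contribution drops out of the trace-free combination identically.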
However, the situation is not completely hopeless, as the way we have chosen \(\Lambda\) is far from unique. Let us first consider a splitting of the energy-momentum tensor into two pieces
\[T_{\mu\nu}=\tilde{T}_{\mu\nu}+T^{vac}_{\mu\nu}. \tag{9}\]
where the second piece on the right hand side accounts for the constant vacuum energy
\[T^{vac}_{\mu\nu}=\frac{1}{4}T^{vac}\,g_{\mu\nu}\, \tag{10}\]
where \(T^{vac}\) is a space-time constant. Furthermore, we assign any additional quantum corrections to the vacuum energy to \(T^{vac}_{\mu\nu}\). Hence, \(\tilde{T}_{\mu\nu}\) receives no such contributions. Plugging this splitting into (4) we will find that \(T^{vac}_{\mu\nu}\) completely drops out of the equations and therefore we can write
\[G_{\mu\nu}-\frac{1}{4}Gg_{\mu\nu}=\tilde{T}_{\mu\nu}-\frac{1}{4}\tilde{T}g_{ \mu\nu}. \tag{11}\]
Repeating the argument above we obtain a differential constraint
\[\partial_{\mu}(G-\tilde{T})=0\, \tag{12}\]
which we integrate as
\[G-\tilde{T}=4\tilde{\Lambda}. \tag{13}\]
This yields an Einstein equation
\[G_{\mu\nu}=\tilde{T}_{\mu\nu}+\tilde{\Lambda}g_{\mu\nu}. \tag{14}\]
We can see that now only \(\tilde{T}_{\mu\nu}\) appears in the above equation, and since \(\tilde{T}_{\mu\nu}\) does not, by definition, receive any quantum correction to the vacuum energy, it follows that a choice of \(\tilde{\Lambda}\) is also stable. Hence, if we perform a measurement of the cosmological constant and interpret it as a parameter of this equation rather than (7), we will obtain a constant that is stable under quantum corrections.
It seems that the cosmological constant problem in equations (4) is not solved automatically, but that they allow us some leeway in how we interpret the measurement of the cosmological constant. Some ways are stable, while others are not. This is in stark contrast with GR, where the cosmological constant is only interpreted as the bare coupling constant of the Einstein-Hilbert action and we do not have any choice in interpretation. Note that this does not imply that GR and UG are physically inequivalent at the classical level. Rather, the change in description in UG allows us to interpret the measurement of the cosmological constant in a different manner, where the old cosmological constant problem does not arise. Going forward we will see that this is the case in other formulations of unimodular gravity as well.
### The unimodular constraint
Perhaps the most common formulation of UG in the literature is based on the so-called unimodular condition, from which UG gets its name
\[\sqrt{-g}=1. \tag{15}\]
In order to actually modify the dynamics of GR this condition is enforced prior to variation of the Einstein-Hilbert (EH) action. This can be achieved in multiple ways; however, the most common one is to enforce it via a Lagrange multiplier directly in the action [15; 16]
\[S[g,\lambda,\Psi]=\int_{\mathcal{M}}d^{4}x\bigg{[}-\frac{1}{2}\sqrt{-g}R+ \lambda\big{(}\sqrt{-g}-1\big{)}\bigg{]}+S_{matter}[g,\Psi]. \tag{16}\]
Here \(S_{matter}\) accounts for any matter fields \(\Psi\) that we consider alongside the gravitational sector, and \(\mathcal{M}\) is the spacetime region under consideration. A downside of this formulation is that the action clearly breaks the diffeomorphism invariance of the original action. This is because the metric density \(\sqrt{-g}\) is set to be equal to a scalar quantity, in this case unity. Hence the action is invariant only under transverse diffeomorphisms generated by \(\xi^{\mu}\) satisfying
\[\nabla_{\mu}\xi^{\mu}=0. \tag{17}\]
Such diffeomorphisms indeed preserve the metric density since
\[\delta_{\xi}\sqrt{-g}=\mathcal{L}_{\xi}\sqrt{-g}=\sqrt{-g}\nabla_{\mu}\xi^{\mu}=0. \tag{18}\]
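For concreteness, this is the standard variation of the metric determinant: with \(\delta_{\xi}g_{\mu\nu}=\nabla_{\mu}\xi_{\nu}+\nabla_{\nu}\xi_{\mu}\) one has
\[\mathcal{L}_{\xi}\sqrt{-g}=\frac{1}{2}\sqrt{-g}\,g^{\mu\nu}\delta_{\xi}g_{\mu\nu}=\sqrt{-g}\,\nabla_{\mu}\xi^{\mu}\,\]
which vanishes precisely for the transverse generators (17).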
Hence the symmetry group of this theory is substantially different from GR. Crucially, the only diffeomorphism breaking term depends only on \(\lambda\), while the rest of the action is still diffeomorphism invariant. Consequently, all the matter and gravity equations of motion remain covariant. In particular, the Einstein equation implied by action (16) is
\[G_{\mu\nu}+\lambda\,g_{\mu\nu}=T_{\mu\nu}. \tag{19}\]
Clearly the Lagrange multiplier \(\lambda\) plays the role of the cosmological 'constant', which, at this point, is a general scalar field. However, since the matter sector of the action is assumed to be diffeomorphism invariant, it follows that the right hand side is covariantly conserved. By taking the covariant divergence of both sides we find
\[\partial_{\mu}\lambda=0. \tag{20}\]
Therefore, consistency requires that \(\lambda\) is indeed a constant. The unimodular constraint enforced by \(\lambda\) can be viewed locally as a mere gauge choice. Hence, it naively seems that any solution of GR with the cosmological constant \(\lambda\) in _any_ coordinates can be considered as a solution of the above unimodular equations. Indeed, any such solution can be locally transformed into coordinates such that (15) is satisfied.
Let us now discuss the fate of the quantum corrections to the vacuum energy in this formulation. As opposed to the previous trace-free equations (4), the cosmological constant in (19) is an independent field and enters the Einstein equations directly. Consequently, such a variable appears to have a privileged status in the theory, and it seems only natural to provide initial conditions for it in order to solve (20). This, however, directly leads to the old cosmological
constant problem. To demonstrate this, we proceed with the only consistent initial condition. That is when \(\lambda\) is a spatial constant \(\Lambda\)
\[\lambda(t_{1})=\Lambda. \tag{21}\]
The equation (20) then immediately fixes \(\lambda(t)=\Lambda\) for the rest of the time evolution. Consequently, we are left with standard GR with a cosmological constant \(\Lambda\) and the old cosmological constant problem appears again. A common counter-argument to this conclusion is that any quantum correction \(\rho_{vac}\) to the cosmological constant appears in the action coupled to \(\sqrt{-g}\). Surely, we can decouple such terms from gravity by using the constraint to eliminate the \(\sqrt{-g}\) dependence:
\[\int_{\mathcal{M}}d^{4}x\big{[}\lambda\big{(}\sqrt{-g}-1\big{)}+\rho_{vac} \sqrt{-g}\big{]}\to\int_{\mathcal{M}}d^{4}x\big{[}\lambda\big{(}\sqrt{-g}-1 \big{)}+\rho_{vac}\big{]}. \tag{22}\]
Doing so prior to the derivation of the equations of motion eliminates any information about the quantum corrections in the Einstein equations! However, while this seems like a straightforward step, we must realize that using the constraint within the action necessarily entails a redefinition of the Lagrange multiplier \(\lambda\). In this case the redefinition is the shift
\[\lambda\to\lambda-\rho_{vac}. \tag{23}\]
Consequently, any initial condition for \(\lambda\) (21) posed prior to the use of the constraint within the action is shifted exactly by the same amount \(\rho_{vac}\) in the opposite direction
\[\Lambda\to\Lambda+\rho_{vac}. \tag{24}\]
Hence, the value of the cosmological constant \(\Lambda\) clearly receives the quantum contributions. One could be tempted to argue that we should therefore set up the initial conditions for \(\lambda\) only after we calculate the quantum corrections to the vacuum energy and use the constraint to decouple them. However, this is no different from fine-tuning the cosmological constant at each level of the loop expansion, because we would need to set up the right initial value for each loop order separately. Hence this 'solution' amounts to an infinite number of fine tunings.
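The redefinition behind (22)-(24) can be seen from a simple rearrangement of the constraint term,
\[\lambda\big{(}\sqrt{-g}-1\big{)}+\rho_{vac}\sqrt{-g}=\big{(}\lambda+\rho_{vac}\big{)}\big{(}\sqrt{-g}-1\big{)}+\rho_{vac}\,\]
so reaching the right-hand side of (22) amounts to relabelling \(\lambda+\rho_{vac}\to\lambda\), which is exactly the shift (23); any initial value \(\Lambda\) assigned to the original multiplier is therefore shifted as in (24).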
Interestingly, we can use the constraint to modify the action in a more substantial way so that the quantum vacuum energy contributions decouple automatically. Indeed, consider the following substitution
\[g_{\mu\nu}\to\hat{g}_{\mu\nu}=g_{\mu\nu}\,|g|^{-1/4}\, \tag{25}\]
which is carried out everywhere in the action outside of the constraint itself. This results in a theory with the following form
\[S[g,\lambda,\Psi]=\int_{\mathcal{M}}d^{4}x\bigg{[}-\frac{1}{2}\hat{R}+\lambda \big{(}\sqrt{-g}-1\big{)}\bigg{]}+S_{matter}[\hat{g},\Psi]. \tag{26}\]
Here \(\hat{R}\) is the scalar curvature evaluated using \(\hat{g}_{\mu\nu}\). The variation of (26) with respect to \(g_{\mu\nu}\) yields the equations of motion
\[\hat{G}_{\mu\nu}-\frac{1}{4}\hat{G}\hat{g}_{\mu\nu}+\lambda g_{\mu\nu}=\hat{ T}_{\mu\nu}-\frac{1}{4}\hat{T}\hat{g}_{\mu\nu}\, \tag{27}\]
where the hat above \(G_{\mu\nu}\) and \(T_{\mu\nu}\) signifies that the tensors are evaluated using the metric \(\hat{g}_{\mu\nu}\). However, upon the constraint (15) we have \(\hat{g}_{\mu\nu}=g_{\mu\nu}\) and thus we can drop the hats in the above equation. Crucially, taking the trace of (27) immediately implies \(\lambda=0\). We get this conclusion without ever specifying its initial conditions! In fact, specifying non-zero initial
conditions for \(\lambda\) is clearly inconsistent. Substituting \(\lambda=0\) back into equation (27) yields the traceless Einstein equations. We see that \(\lambda\) no longer plays the role of the cosmological constant. Furthermore, since the entire matter and gravitational Lagrangian depend strictly on \(\hat{g}_{\mu\nu}\), any quantum correction will contribute as \(\rho_{vac}\sqrt{-\hat{g}}\) to action (26). Since the novel composite metric \(\hat{g}_{\mu\nu}\) has a unit determinant by construction
\[\sqrt{-\hat{g}}=1\, \tag{28}\]
these contributions decouple trivially, without the need to use the constraint or, equivalently, redefine \(\lambda\). Therefore, the energy momentum tensor that appears in trace-free equations is automatically _free of any quantum corrections to vacuum energy_. A downside of this is that the trace free equations lack _all_ information about the cosmological constant and its effective value must be put in by hand as it is seemingly not tied to initial conditions of any fields.
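Indeed, the unit determinant in (28) is a one-line consequence of the definition (25): since \(\hat{g}_{\mu\nu}=g_{\mu\nu}|g|^{-1/4}\) in four dimensions,
\[\det\hat{g}=|g|^{-1}\det g=-1\,\qquad\text{so that}\qquad\sqrt{-\hat{g}}=1\,\]
independently of the configuration of \(g_{\mu\nu}\).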
Considering the action (26), it is clear that the constraint is rendered unnecessary (at least on the classical level), and thus we can remove it from the action to obtain yet another formulation of UG
\[S[g,\Psi]=-\frac{1}{2}\int_{\mathcal{M}}d^{4}x\,\hat{R}+S_{matter}\big{[}\hat{g },\Psi\big{]}\, \tag{29}\]
Since all terms in the action now depend purely on \(\hat{g}_{\mu\nu}\), the action has a manifest Weyl invariance under transformations of the metric
\[g_{\mu\nu}\to\omega^{2}g_{\mu\nu}\, \tag{30}\]
where \(\omega\) is an arbitrary non-zero function. Consequently, the resulting equations of motion associated with \(g_{\mu\nu}\) are necessarily traceless. The equations of motion associated to this action are indeed the Einstein traceless equations (11) taken together with the unimodularity condition (15). Since the Lagrange multiplier is no longer present in this formulation, choosing its initial value is clearly impossible here.
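As a quick check of this manifest Weyl invariance, under (30) the determinant transforms as \(|g|\to\omega^{8}|g|\), so
\[\hat{g}_{\mu\nu}\to\omega^{2}g_{\mu\nu}\big{(}\omega^{8}|g|\big{)}^{-1/4}=g_{\mu\nu}|g|^{-1/4}=\hat{g}_{\mu\nu}\,\]
i.e. the composite metric (25), and hence the whole action (29), is insensitive to the conformal factor \(\omega\).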
It is interesting to note that the Weyl symmetry (30) arises only after we eliminate the 'cosmological constant' term in the action
\[\lambda\,\sqrt{-g}. \tag{31}\]
Hence the symmetry of the action is increased by having \(\lambda=0\). This bears a striking resemblance to technical naturalness [35] for the cosmological constant; however, in this case the extra symmetry is a gauge symmetry rather than a regular symmetry, and consequently the conclusion is not that \(\Lambda\) is protected against quantum corrections but that it necessarily vanishes. It is important to note that the Weyl symmetry in UG does not become anomalous in the quantum regime, as has been shown in [36; 37].
Note that the main difference between the original action (16) and the theory (29) is that in the former the unimodular condition (15) is enforced via a Lagrange multiplier, while the latter achieves the same by universally coupling to a _composite metric_ (25). It is not immediately clear why this should make a difference, as standard intuition dictates that these theories should be entirely equivalent. Yet, as we have seen, they behave differently. The difference stems from the fact that in the theory (16) we are tempted to introduce an initial condition for the constant part of \(\lambda\). This implies that the zero mode is not varied and thus the integral consequence of the unimodular condition
\[\int_{\mathcal{M}}d^{4}x\sqrt{-g}=\int_{\mathcal{M}}d^{4}x\, \tag{32}\]
is not meant to be enforced. Note that this is the only diffeomorphism invariant information in (15) and hence it represents a physical constraint [30; 38]. It is thus not surprising that
abandoning it leaves the theory unmodified, i.e. equivalent to GR. Leaving the initial value for \(\lambda\) unspecified allows us to use the constraint _freely_, and hence we can use (22) without limits to decouple any contribution. Consequently, the cosmological constant problem is alleviated. The condition (32) should then provide the missing information in the trace-less Einstein equations (11) and consequently allow us to determine the effective cosmological constant. In the current setting this point is difficult to demonstrate; however, we will revisit it for the HT formulation in section 2.4, where an analogous situation occurs.
### Henneaux and Teitelboim UG
The introduction of the unimodularity condition (15) in the previous formulation has the unfortunate consequence of reducing the gauge group of the theory from diffeomorphisms to transverse diffeomorphisms. However, the full diffeomorphism invariance can be restored by introducing a novel vector density \(V^{\mu}\). This construction has been described in [24], and the resulting theory, usually referred to as the Henneaux and Teitelboim (HT) unimodular gravity, is given by the following action3
Footnote 3: This theory can be very easily rewritten in several other forms that are immediately equivalent. The only difference is that the fields \(V^{\mu}\) and \(\lambda\) can be redefined in such a way that the constraint part of the action becomes
\[\sqrt{-g}\lambda(\nabla_{\mu}W^{\mu}-1)\, \tag{33}\]
where \(W^{\mu}\) is an ordinary vector field. Another popular choice is \[\lambda\Big{(}\frac{1}{4!}\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu\rho\sigma}-\sqrt{-g}\Big{)}\, \tag{34}\]
where \(F_{\mu\nu\rho\sigma}\) is the field strength of a 3-form gauge field \(A_{\nu\rho\sigma}\), given as \(F_{\mu\nu\rho\sigma}=4\partial_{[\mu}A_{\nu\rho\sigma]}\).
\[S_{HT}[g,\lambda,V]=\int_{\mathcal{M}}d^{4}x\Big{[}-\frac{1}{2}\sqrt{-g}R-\lambda\big{(}\partial_{\mu}V^{\mu}-\sqrt{-g}\big{)}\Big{]}+S_{matter}[g,\Psi]\,. \tag{35}\]
Note that the divergence of a vector density is a scalar density and therefore the above action is fully diffeomorphism invariant. We can clearly see that this action reduces to (16), when we fix \(\partial_{\mu}V^{\mu}=1\), which can be achieved locally by a partial gauge fixing of diffeomorphisms. A gauge fixing prior to variation is not a generally admissible step and thus this does not guarantee the equivalence of the two theories. Nevertheless, the classical equivalence can be immediately demonstrated from the equations of motion. The variation with respect to \(g_{\mu\nu}\) yields
\[G_{\mu\nu}+\lambda\ g_{\mu\nu}=T_{\mu\nu}\, \tag{36}\]
and by varying \(V^{\mu}\) we obtain
\[\partial_{\mu}\lambda=0. \tag{37}\]
These equations are clearly the same as (19) and (20). The difference is that the second equation now arises as an equation of motion rather than as a consequence of the Bianchi identity. Finally, the constraint associated with \(\lambda\) forces
\[\partial_{\mu}V^{\mu}=\sqrt{-g}. \tag{38}\]
The status of the cosmological constant problem in this formulation is very similar to the version discussed in section 2.1. We can see that the Lagrange multiplier \(\lambda\) enters the Einstein equation in the same way as in (19). It is thus not surprising that determining the cosmological constant by fixing \(\lambda\) directly through an initial condition will lead to the CC problem. Similarly, trying to use the constraint within the action to decouple any terms of the form
\[\rho_{vac}\sqrt{-g}\, \tag{39}\]
will yield shifts of the initial value set up for \(\lambda\) exactly like in (24). On the other hand, the same solution that worked in section 2.1 works here as well. If we give up the initial condition on \(\lambda\) we may find a form of the action where \(\lambda\) can be determined uniquely. The steps are similar as well. We use the constraint (38) in the action to substitute
\[g_{\mu\nu}\to\hat{g}_{\mu\nu}=g_{\mu\nu}\left(\frac{\partial_{\mu}V^{\mu}}{ \sqrt{-g}}\right)^{1/2}, \tag{40}\]
which yields the action
\[S[g,\lambda,V,\Psi]=\int_{\mathcal{M}}d^{4}x\bigg{[}-\frac{1}{2}\hat{R}\partial _{\mu}V^{\mu}-\lambda\big{(}\partial_{\mu}V^{\mu}-\sqrt{-g}\big{)}\bigg{]}+S_{ matter}[\hat{g},\Psi]. \tag{41}\]
Upon variation, this gives the following tensor equation of motion
\[\hat{G}_{\mu\nu}-\frac{1}{4}\hat{G}\hat{g}_{\mu\nu}+\lambda g_{\mu\nu}=\hat{T} _{\mu\nu}-\frac{1}{4}\hat{T}\hat{g}_{\mu\nu}. \tag{42}\]
Taking the trace of this equation immediately gives us \(\lambda=0\) and thus \(\lambda\) is determined uniquely. Crucially, any quantum correction to the vacuum energy from the matter sector or the gravitational sector couples directly to \(\sqrt{-\hat{g}}\), which immediately reduces it to a boundary term since the metric \(\hat{g}_{\mu\nu}\) by construction satisfies
\[\sqrt{-\hat{g}}=\partial_{\mu}V^{\mu}. \tag{43}\]
Hence such terms decouple trivially, which renders the cosmological constant stable under quantum corrections of the vacuum energy. Note that unlike (25) the metric \(\hat{g}_{\mu\nu}\) is a metric in a true sense - a rank 2 tensor with a density weight of 0. Furthermore, the constraint (38) reduces it to
\[g_{\mu\nu}=\hat{g}_{\mu\nu}. \tag{44}\]
so we can drop the hats in our tensor equations of motion.
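For concreteness, the constraint (43) follows directly from the definition (40): the conformal factor enters the determinant to the fourth power, so
\[\sqrt{-\hat{g}}=\left(\frac{\partial_{\rho}V^{\rho}}{\sqrt{-g}}\right)\sqrt{-g}=\partial_{\rho}V^{\rho}\,\]
and any vacuum-energy term therefore contributes \(\int_{\mathcal{M}}d^{4}x\,\rho_{vac}\sqrt{-\hat{g}}=\rho_{vac}\int_{\partial\mathcal{M}}d\Sigma_{\mu}V^{\mu}\), a pure boundary term, which makes the decoupling explicit.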
### Diffeomorphism covariant, Weyl invariant UG
Finally, similarly to (26) the constraint in the action (41) can be omitted to give
\[S[g,V,\Psi]=S_{GR}\big{[}\hat{g}(g,V),\Psi]=-\frac{1}{2}\int_{\mathcal{M}}d^{4 }x\hat{R}\partial_{\mu}V^{\mu}+S_{matter}[\hat{g},\Psi]. \tag{45}\]
This theory amounts to ordinary GR, whose metric is transformed using the _manifestly Weyl invariant_ redefinition (40). The Weyl symmetry of the ansatz is then inherited by the entire action (45) and thus we obtain a Weyl invariant, fully covariant theory of UG. This theory was first suggested in [39] and was later found as a generalization of mimetic gravity in [25], where the classical equivalence to the HT formulation of UG (35) was pointed out.
In comparison to (29) the presence of the derivative of the vector field \(V^{\mu}\) in the redefinition (40) implies that (45) is a higher derivative vector-tensor theory. This can be easily seen by expanding the scalar curvature in the gravitational part of (45) to obtain
\[S_{grav}=-\frac{1}{2}\int d^{4}x\,\sqrt{-g}\bigg{[}\sqrt{D}R+\frac{3}{8}\frac {g^{\mu\nu}\partial_{\mu}D\partial_{\nu}D}{D^{3/2}}\bigg{]}\, \tag{46}\]
where we have denoted
\[D=\frac{\partial_{\mu}V^{\mu}}{\sqrt{-g}}. \tag{47}\]
We can directly see that the action contains up to second time-derivatives of the time component of \(V^{\mu}\). Nevertheless, due to the structure, in which these terms enter the action, and crucially, due to the universal coupling of \(V^{\mu}\) to the matter sector, the equations of motion for \(V^{\mu}\) reduce to a very simple form when written using \(\hat{g}_{\mu\nu}\). Specifically, they become
\[\partial_{\mu}\big{(}\hat{G}-\hat{T}\big{)}=0. \tag{48}\]
The dynamics of the vector field \(V^{\mu}\), as perceived in the space-time given by \(\hat{g}_{\mu\nu}\), are determined by the built-in constraint (43). Finally, the tensor equations of motion for \(g_{\mu\nu}\) are the traceless equations (11) evaluated using the metric \(\hat{g}_{\mu\nu}\)4. Hence, despite the higher derivative structure of this theory, the classical dynamics are equivalent to UG (35) and the theory does not suffer from any ghost instability5. Crucially, in contrast to (29), the reliance of this mechanism on the extra vector field \(V^{\mu}\) allows us to propose deviations from the basic theory without affecting the decoupling mechanism for vacuum energies. For example, we can introduce novel terms in the action which alter the dynamics of the vector field. To preserve the decoupling mechanism these terms must depend strictly on \(\hat{g}_{\mu\nu}\) and on the composite vector field
Footnote 4: Note that the system of equations is understood as a system for the field \(\hat{g}_{\mu\nu}\) rather than for \(g_{\mu\nu}\).
Footnote 5: Interestingly, the resulting Hamiltonian is linear in the momentum \(\lambda\) and thus it is unbounded from below. Nevertheless, since \(\lambda\) becomes a constant on-shell, the system is perfectly stable [40].
\[W^{\mu}=\frac{V^{\mu}}{\partial_{\nu}V^{\nu}}. \tag{49}\]
By adding such terms the theory no longer describes UG as the original cosmological constant can acquire non-trivial dynamics and thus could become a more complicated model of dark energy, which not only models late time acceleration of the universe but also solves the old cosmological constant problem.
The form of the redefinition (40) is not unique in its ability to facilitate the decoupling of vacuum energies from gravity. An alternative ansatz has been proposed in [26], where the metric \(\hat{g}_{\mu\nu}\) is given through the following relation
\[g_{\mu\nu}\to\hat{g}_{\mu\nu}=\frac{g_{\mu\nu}}{\sqrt[4]{-g}}\sqrt{\mathcal{P}} \tag{50}\]
where \(\mathcal{P}\) is the Pontryagin density
\[\mathcal{P}=\text{Tr}\,\tilde{F}^{\mu\nu}F_{\mu\nu}\, \tag{51}\]
constructed from the Yang-Mills gauge field strength
\[F_{\mu\nu}=D_{\mu}A_{\nu}-D_{\nu}A_{\mu}. \tag{52}\]
The derivative \(D_{\mu}\) is the covariant derivative associated with \(A_{\mu}\). Finally, \(\tilde{F}^{\mu\nu}\) is the density dual of \(F_{\mu\nu}\)
\[\tilde{F}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}\,F_{\rho\sigma}\, \tag{53}\]
and \(\epsilon^{\mu\nu\rho\sigma}\) is the Levi-Civita symbol. Crucially, the mechanism by which the vacuum energies decouple is intact as the metric determinant is again constrained to6
Footnote 6: Note that in the Abelian case \(\mathcal{P}\) represents the Pfaffian of the matrix \(F_{\mu\nu}\)
\[\sqrt{-\hat{g}}=\mathcal{P}\, \tag{54}\]
which is a total derivative of the Chern-Simons current density \(\mathbb{C}^{\mu}\)
\[\mathcal{P}=\partial_{\mu}\mathbb{C}^{\mu}\,\qquad\text{where}\qquad\mathbb{C}^{ \mu}=\varepsilon^{\mu\nu\rho\sigma}\operatorname{Tr}\left(F_{\nu\rho}A_{\sigma }-\frac{2if}{3}A_{\nu}A_{\rho}A_{\sigma}\right)\,. \tag{55}\]
Here \(f\) is the coupling constant of the associated gauge theory. Hence any corrections to vacuum energy are by construction decoupled as they equate to a total derivative in the Lagrangian
\[\int_{\mathcal{M}}d^{4}x\,\rho_{vac}\sqrt{-\hat{g}}=\int_{\mathcal{M}}d^{4}x\,\rho_{vac}\mathcal{P}. \tag{56}\]
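As a quick check of (55) in the Abelian case, where the trace is trivial and \(\mathbb{C}^{\mu}=\epsilon^{\mu\nu\rho\sigma}F_{\nu\rho}A_{\sigma}\), we have
\[\partial_{\mu}\mathbb{C}^{\mu}=\epsilon^{\mu\nu\rho\sigma}\big{(}\partial_{\mu}F_{\nu\rho}\big{)}A_{\sigma}+\epsilon^{\mu\nu\rho\sigma}F_{\nu\rho}\,\partial_{\mu}A_{\sigma}=0+\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}F_{\nu\rho}F_{\mu\sigma}=\tilde{F}^{\mu\nu}F_{\mu\nu}=\mathcal{P}\,\]
where the first term vanishes by the Bianchi identity and, in the second, only the part of \(\partial_{\mu}A_{\sigma}\) antisymmetric in \(\mu\) and \(\sigma\) survives the contraction with \(\epsilon^{\mu\nu\rho\sigma}\).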
Applying (50) to GR results in the theory defined by this simple substitution:
\[S[g,A,\Psi]=S_{GR}\big{[}\hat{g}(g,A),\Psi\big{]}=-\frac{1}{2}\int_{\mathcal{M}}d^{4}x\,\hat{R}\mathcal{P}+S_{matter}\big{[}\hat{g},\Psi\big{]}. \tag{57}\]
Due to the Weyl invariance of (50) the tensor equations of motion are again the traceless Einstein equations (11) evaluated using the metric \(\hat{g}_{\mu\nu}\). Furthermore, the equation of motion for \(A_{\mu}\) gives
\[\tilde{F}^{\mu\nu}\partial_{\nu}\big{(}\hat{G}-\hat{T}\big{)}=0. \tag{58}\]
For the U(1) case, the non-vanishing value of \(\mathcal{P}\) implies that \(\tilde{F}^{\mu\nu}\) is an invertible matrix so we still recover (48). For SU(N), \(\tilde{F}^{\mu\nu}\) is a Lie algebra valued object, but we find that at least one of its components (Lie algebra component) is again an invertible matrix and thus we arrive at an equivalent conclusion (48). Hence, the gravitational dynamics are clearly equivalent to the HT formulation of UG. The dynamics of \(A_{\mu}\) are determined by the built-in constraint (54). Note that solutions for this equation exist for arbitrary globally hyperbolic space-time as long as the gauge group contains an SU(2) subgroup [41]. For the U(1) case a general proof of existence has not been found yet, but for the standard FRW space-times a solution was constructed explicitly.
There are multiple advantages of using (50) in contrast to (40) and other formulations of UG. The gauge fields \(A_{\mu}\) are clearly more natural objects in the context of the Standard Model of particle physics. Hence, the present formulation is advantageous for the study of modifications via possible couplings to ordinary matter. Such extensions could be very interesting to explore as they could provide additional dynamics that might allow us to study selection mechanisms for the otherwise unspecified effective CC in UG. Furthermore, while the quantum corrections to vacuum energy get automatically converted into a total derivative, the resulting boundary term is not necessarily physically irrelevant. In this particular case, it plays the role of the theta term [42; 43] (56) for the corresponding Yang-Mills theory, under the assumption that we also include appropriate kinetic term. This then allows us to naively connect the old cosmological constant problem with the strong CP problem of quantum chromodynamics. From equation (58) it is clear that upon the addition of a kinetic term for \(A_{\mu}\), the unimodular dynamics are altered, and the theory no longer describes GR with a cosmological constant [44]. Nevertheless, the decoupling mechanism for corrections to vacuum energy (56) is still applicable.
Finally, the theory can be written using a Lagrange multiplier in a form analogous to (35). That is
\[S[g,\lambda,A]=\int_{\mathcal{M}}d^{4}x\Big{[}-\frac{1}{2}\sqrt{-g}R+\lambda \big{(}\operatorname{Tr}\tilde{F}^{\mu\nu}F_{\mu\nu}-\sqrt{-g}\big{)}\Big{]}+ S_{matter}[g,\Psi]. \tag{59}\]
We can see that in this form the Lagrange multiplier couples to the gauge fields like an axion field. Since \(\lambda\) is constrained to be a constant through the \(A_{\mu}\) equation of motion we can even add a kinetic term \(\partial_{\mu}\lambda\partial^{\mu}\lambda\) to the action to increase this resemblance further7. We get
Footnote 7: Note that this preserves the original solutions while also adding a novel branch, where \(\lambda\) becomes dynamical. This addition can be also applied to (35), where the constraint on constancy of \(\lambda\) is stricter and no new branch appears.
\[S[g,\lambda,A]=\int_{\mathcal{M}}d^{4}x\Big{[}-\frac{1}{2}\sqrt{-g}R+\frac{1}{2}\sqrt{-g}g^{\mu\nu}\partial_{\mu}\lambda\partial_{\nu}\lambda+\lambda\big{(}\text{Tr}\,\tilde{F}^{\mu\nu}F_{\mu\nu}-\sqrt{-g}\big{)}\Big{]}+S_{matter}[g,\Psi]. \tag{60}\]
Note that this action cannot be straightforwardly traced back to a formulation without a Lagrange multiplier. Nevertheless, since the kinetic term is shift symmetric, we are still able to decouple any _constant_ corrections to the vacuum energy as long as we let the initial conditions on the zero mode of \(\lambda\) be free. Again, adding a kinetic term for \(A_{\mu}\) changes the dynamics significantly; however, it still does not affect the decoupling mechanism. It is then an attractive speculation whether unimodular gravity can arise as a dynamical regime of an axion in which the dynamics of the gauge field become frozen.
### Degrees of freedom of UG
The field content in the Henneaux and Teitelboim theory is clearly larger than in GR; however, it is accompanied by a large symmetry group of divergenceless shifts of the vector field
\[V^{\mu}\to V^{\mu}+\zeta^{\mu}\,\qquad\text{where}\qquad\partial_{\mu}\zeta^{ \mu}=0. \tag{61}\]
Consequently, much of the field content is a pure gauge and the theory can be shown to only contain a single extra global degree of freedom in comparison with its GR counterpart [24; 40]. The global degree of freedom here is given as the overall charge associated with the current \(V^{\mu}\), which is defined on a given foliation \(\Sigma_{t}\).
\[\mathcal{T}=\int_{\Sigma_{t}}d\Sigma_{\mu}V^{\mu}. \tag{62}\]
Here \(d\Sigma_{\mu}=n_{\mu}\,d^{3}y\), where \(d^{3}y\) is the associated coordinate volume element on \(\Sigma_{t}\) and \(n_{\mu}\) is perpendicular to \(\Sigma_{t}\)8. \(\mathcal{T}\) is often referred to as the 'cosmic time'. Note that the symmetry (61) shifts \(\mathcal{T}\) by a constant value
Footnote 8: Note that if \(\Sigma_{t}\) is infinite the definition of \(\mathcal{T}\) might result in an infinity as well. In such cases one will have to consider some appropriate regularization in order to make sense of these quantities.
\[\mathcal{T}\to\mathcal{T}+const. \tag{63}\]
as long as \(\zeta^{\mu}\) vanishes at infinity or on the boundary of \(\Sigma_{t}\). Such shifts correspond to a symmetry of the theory, which naturally results in a conservation law, in this case the conservation of the conjugate momentum \(\lambda\). Transformations that leave \(\mathcal{T}\) intact form a gauge group of the theory. It follows that \(\mathcal{T}\) is the only gauge-invariant information contained within the vector field \(V^{\mu}\), and the gauge symmetry can be taken to fix the non-zero mode of \(n_{\mu}V^{\mu}\) arbitrarily.
Integrating the expression (38), while assuming appropriate conditions for \(V^{\mu}\) on the boundary of \(\Sigma_{t}\), enables us to calculate the change in \(\mathcal{T}\) as
\[\mathcal{T}(t_{2})-\mathcal{T}(t_{1})=\text{Vol}_{\mathcal{M}}[g]\, \tag{64}\]
where \(\partial\mathcal{M}=\Sigma_{t_{2}}\cup\Sigma_{t_{1}}\). The two variables \(\lambda\) and \(\mathcal{T}\) are conjugate to each other in the sense that their Dirac bracket is
\[\{\lambda,\mathcal{T}\}_{D}=1. \tag{65}\]
Since the constraint part of the action (35) is linear in time derivatives we can read off this commutation relation directly from the action using the Faddeev-Jackiw procedure [45; 46]. Considering only the spatially constant part of \(\lambda\) we find that the only term containing time derivatives has the form
\[\int d^{4}x\lambda\partial_{\mu}V^{\mu}\approx\int\ dt\lambda\ \frac{d}{dt}\int_{ \Sigma_{t}}d\Sigma_{\mu}V^{\mu}=\int dt\lambda\dot{\mathcal{T}}. \tag{66}\]
Hence we see that the momentum associated with \(\mathcal{T}\) is indeed \(\lambda\) and (65) immediately follows. This can also be confirmed directly using a Dirac analysis. Note that while this is true in the original HT formulation (35), in the classically equivalent actions (41) and (45) this no longer holds, as \(\lambda\) is constrained to vanish in the former and is not present in the latter.
In order to obtain a unique evolution in UG one has to provide both the initial cosmic time \(\mathcal{T}_{i}\) and the value of the cosmological constant \(\lambda_{i}\). In practice, \(\mathcal{T}_{i}\) can be easily omitted as the evolution of the cosmic time does not affect the dynamics of gravity and other fields. It is often considered unphysical [22; 47]. However, we would like to point out that, while \(\mathcal{T}\) is indeed unphysical, the difference of the cosmic time between two hypersurfaces \(\Sigma_{t_{1}}\) and \(\Sigma_{t_{2}}\) is a physical quantity, namely the total volume [30; 38]. This encodes the information about the effective cosmological constant, which can be reconstructed from the knowledge of such a difference. This can be seen by fixing \(\mathcal{T}(t_{2})=\mathcal{T}_{2}\) and \(\mathcal{T}(t_{1})=\mathcal{T}_{1}\) in (64), which yields a global constraint on the four-volume. We can then determine the value of the effective cosmological constant by solving the Einstein equation with an unspecified cosmological constant, for example the traceless equations (4), and label its solutions by the value of the cosmological constant. Hence we get a one-parameter family of solutions labeled by \(\Lambda\)
\[g_{\mu\nu}(\Lambda). \tag{67}\]
Plugging such a solution into (64) with fixed values of the cosmic time yields a single equation that is in general able to determine \(\Lambda\) as a function of \(\Delta\mathcal{T}=\mathcal{T}_{2}-\mathcal{T}_{1}\). Let us demonstrate this explicitly on a very simple example. We consider a flat FRW universe which is void of matter and energy, up to the unspecified cosmological constant. Hence the solutions of the Friedmann equations are
\[a(t)=a_{0}e^{\sqrt{\Lambda/3}(t-t_{1})}. \tag{68}\]
Plugging this into (64) yields the following relation
\[\Delta\mathcal{T}=\frac{a_{0}^{3}\sqrt{3}}{\sqrt{\Lambda}}\Big{(}e^{\sqrt{ \Lambda/3}(t_{2}-t_{1})}-1\Big{)}\, \tag{69}\]
where we have rescaled the values of the cosmic time to factor out the infinite coordinate volume \(\text{Vol}_{3}=\int d^{3}x\). The above equation is an algebraic equation for \(\Lambda\)9
Footnote 9: This solution can be found explicitly using the Lambert \(W\) function as
\[\Lambda=3\,W_{0}^{2}\Big{(}-a_{0}^{2}\frac{\Delta t}{\Delta T}\Big{)}\Delta t^ {-2}\, \tag{70}\]
where \(\Delta t=t_{2}-t_{1}\).
Denoting its solution by \(\Lambda(\Delta\mathcal{T})\), the de Sitter solution (68) becomes
\[a(t)=a_{0}e^{\sqrt{\Lambda(\Delta\mathcal{T})/3}(t-t_{1})}. \tag{71}\]
Note that any correction to the value, \(\Lambda\to\Lambda+\rho_{vac}\), is irrelevant here. We will reproduce the above solution (71) for any value of \(\rho_{vac}\) we account for. Hence specifying the effective cosmological constant by providing the value \(\Delta\mathcal{T}\) for two given times \(t_{2}\) and \(t_{1}\) is stable under
quantum corrections. As it has been noted in [14], in this case "it is the space-time volume that remains fixed, forcing \(\Lambda\) to adjust". Note that the global constraint (64) is qualitatively equivalent to the diffeomorphism invariant constraint (32), with the difference that instead of \({\cal T}_{1,2}\) we are given a coordinate volume of the space-time region \({\cal M}\). The constraint (32) is automatically present in formulations which do not rely on the use of the Lagrange multiplier. This implies that the resolution of the CC problem in UG inherently involves the existence of such a global constraint.
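To make the selection of \(\Lambda\) by the volume constraint concrete, one can solve it numerically. The following is a minimal sketch, not part of the original analysis: it integrates \(a(t)^{3}\) for the de Sitter solution (68) directly and root-finds the value of \(\Lambda\) that reproduces a prescribed (rescaled) \(\Delta\mathcal{T}\); the values of `a0`, `t1`, `t2` and `target_DeltaT` are purely illustrative assumptions.

```python
# A minimal numerical sketch (not from the paper): for the empty flat FRW
# solution a(t) = a0*exp(sqrt(Lambda/3)*(t - t1)) of (68), the rescaled
# four-volume constraint (64) reads DeltaT = \int_{t1}^{t2} a(t)^3 dt.
# We solve this condition numerically for Lambda; a0, t1, t2 and
# target_DeltaT below are purely illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a0, t1, t2 = 1.0, 0.0, 10.0    # illustrative initial scale factor and time interval

def volume(Lam):
    """Rescaled four-volume between t1 and t2 for the de Sitter solution (68)."""
    integrand = lambda t: (a0 * np.exp(np.sqrt(Lam / 3.0) * (t - t1))) ** 3
    return quad(integrand, t1, t2)[0]

target_DeltaT = 500.0          # illustrative value of T(t2) - T(t1)

# The volume grows monotonically with Lambda, so a simple bracketing root
# finder determines the unique Lambda selected by the constraint.
Lam_star = brentq(lambda Lam: volume(Lam) - target_DeltaT, 1e-6, 10.0)
print(f"Lambda fixed by the volume constraint: {Lam_star:.4f}")

# A constant shift of the vacuum energy never enters this relation: the
# constraint fixes the effective Lambda directly, which is the sense in
# which this way of specifying it is stable under quantum corrections.
```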
Note that these global constraints do not need to entail 'knowledge of the future', as both hypersurfaces \(\Sigma_{t_{1},t_{2}}\) can be located in the past. This still fixes the cosmological constant, which can then be taken to determine the solutions for arbitrary future times. Note that the hypersurfaces must not be too close to each other, as the infinitesimal change in \({\cal T}\) becomes insensitive to the cosmological constant. Indeed, in the current setting taking the limit \(t_{2}\to t_{1}\) would give us
\[\dot{\cal T}=\int_{\Sigma_{t}}\,d^{3}y\sqrt{-g}\,. \tag{72}\]
which for the FRW solution (68) gives us
\[\dot{\cal T}_{i}\propto a_{i}^{3}=a_{0}^{3}\,. \tag{73}\]
which does not depend on \(\Lambda\).
## 3 Quantum aspects of UG
As we have seen in the previous section the way we specify the initial conditions for our variables substantially affects the behavior of the theory with respect to the quantum correction of the effective cosmological constant. We have demonstrated this behavior on a semi-classical level, where the quantum corrections of the vacuum energy have been accounted only as unspecified shifts of the energy momentum tensor (8). The entire gravitational sector has been considered only on a classical level. In order to resolve the old CC problem to full satisfaction, one should address it in a quantum setting. Since the structure of UG is so similar to GR the problem of finding its fully quantum formulation is as problematic as that in Einstein's theory. Hence the full quantum treatment is not within our technical reach. However, the extra degrees of freedom that are present in UG have a very simple structure and can be quantized separately from the degrees of freedom of the metric and matter.
In this section we review a procedure where these degrees of freedom are integrated out in the path integral sense, introducing either a minor modification or no modification of the ordinary GR dynamics [22; 27; 28; 31; 47]. The distinction hinges on the way we carry out such an integration. In particular we will explore two ways: either we fix the initial condition for \(\lambda\), or we keep \(\lambda\) free. The effect of such a procedure is in line with our argument from section 2: the former way reconstructs GR, while the latter offers a resolution to the old CC problem. Finally, it has been observed that the canonical structure of the unimodular degrees of freedom implies a non-trivial commutation relation for the cosmological constant and the space-time volume. This naively implies the presence of quantum fluctuations of these quantities, which could present a possible distinction between GR and UG on a quantum level.
### Path integral
The expression for the generating functional in UG for the action (35) can be written _formally_ as a path integral
\[Z[J]=\int[Dg][D\Psi][D\lambda][DV]\exp(iS[g,\Psi,\lambda,V]+iS_{ext}[g,\Psi, J])\,. \tag{74}\]
Note that we couple the external current only to the metric and matter degrees of freedom and not to the fields \(\lambda\) or \(V^{\mu}\). The extra degrees of freedom of unimodular gravity are not deeply intertwined with the rest of the gravitational dynamics, as they are neatly isolated within the constraint part of the action. Hence we can integrate them out separately, prior to the integration over the metric or matter degrees of freedom10. We may thus define this partial integration as
Footnote 10: Note that, strictly speaking, one should first go to the ADM formalism to work out the canonical structure and then calculate the associated path integral in the Hamiltonian formalism, along with any necessary fixing of gauge symmetries and the associated Faddeev-Popov determinants. The procedure has been carried out in the ADM formalism in [31], while the ghost sector has been discussed in [28]. Nevertheless, such considerations do not meaningfully affect the result in comparison to the more naive approach we consider here.
\[\mathcal{I}\equiv\int[D\lambda][DV]\exp(iS[g,\Psi,\lambda,V]). \tag{75}\]
The generating functional (74) can then be calculated by integrating \(\mathcal{I}\) over the metric and matter degrees of freedom along with the external sources. The integration (75) can be carried out in more than one way, depending on how we fix the initial and final conditions for our fields \(\lambda\) and \(V^{\mu}\), or rather for the associated degrees of freedom \(\lambda\) and \(\mathcal{T}\). We are going to be mainly looking at two ways: first, we fix the initial and final value of the cosmic time \(\mathcal{T}\), and second, we fix the initial and final value of \(\lambda\). Note that the latter case has been worked out in [22]. Technically, it is possible to fix both, as has been done in [31]; however, we would like to point out that the knowledge of both the value of \(\lambda\) and of \(\mathcal{T}\) is prohibited due to the commutation relation (65). Hence, the physically relevant calculation fixes only one of these variables on a given spatial slice.
We first consider the following path integral where the endpoint values for \(\mathcal{T}\) are fixed
\[\mathcal{I}_{\mathcal{T}}\equiv\int_{\mathcal{T}_{i}}^{\mathcal{T}_{f}}[D\lambda][DV]\exp(iS[g,\Psi,\lambda,V]). \tag{76}\]
The action in the exponent is taken to be the HT action for UG (35). The integration over the field \(V^{\mu}\) in (76) is taken only over configurations that satisfy the following conditions
\[\int_{\Sigma_{i}}d\Sigma_{\mu}V^{\mu}=\mathcal{T}_{i}\,\qquad\qquad\text{and}\qquad\qquad\int_{\Sigma_{f}}d\Sigma_{\mu}V^{\mu}=\mathcal{T}_{f}. \tag{77}\]
Note that the cosmic time \(\mathcal{T}\) is the only gauge invariant information in \(V^{\mu}\), and thus when we take the appropriate fixing of the symmetry (61) into account, the above conditions determine the initial and final configurations of \(V^{\mu}\) completely. The integration over \(\lambda\) is carried out freely, without fixing the endpoints. As a first step we divide the action into the GR component and the constraint
\[S_{HT}[g,\Psi,\lambda,V]=S_{EH}[g,\Psi]+\int_{\mathcal{M}}d^{4}x\,\lambda \left(\partial_{\mu}V^{\mu}-\sqrt{-g}\right)\,. \tag{78}\]
Here \(S_{EH}\) corresponds to the Einstein-Hilbert action together with arbitrary matter action for \(\Psi\) in the theory. This part of the action is unaffected by the integration and thus we may focus on the constraint itself. In order to isolate the initial and final condition on the cosmic time we first integrate by parts to obtain
\[S_{HT}[g,\Psi,\lambda,V]=S_{EH}[g,\Psi]+\int_{\mathcal{M}}d^{4}x\,\big{(}-V^{ \mu}\partial_{\mu}\lambda-\lambda\sqrt{-g}\big{)}+\int_{\partial\mathcal{M}}d \Sigma_{\mu}\lambda\,V^{\mu}. \tag{79}\]
Since the last term is evaluated on the boundary \(\partial\mathcal{M}=\Sigma_{f}\cup\Sigma_{i}\) where \(V^{\mu}\) is fixed, this term is unaffected by the integration over \(V^{\mu}\). Hence the integration over \(V^{\mu}\) gives us a delta function
\[\mathcal{I}_{\mathcal{T}}=\int_{\mathcal{T}_{i}}^{\mathcal{T}_{f}}[D\lambda]\delta(\partial_{\mu}\lambda)\exp\biggl{(}iS_{EH}[g,\Psi]-i\int_{\mathcal{M}}d^{4}x\,\sqrt{-g}\lambda+i\int_{\partial\mathcal{M}}d\Sigma_{\mu}\lambda\,V^{\mu}\biggr{)}. \tag{80}\]
The integration over the delta function fixes \(\lambda\) to be a constant and thus it can be taken in front of the integral in the action. This allows us to express \(V^{\mu}\) completely as the cosmic time \(\mathcal{T}\)
\[\mathcal{I}_{\mathcal{T}}=\int_{-\infty}^{\infty}d\lambda\exp\biggl{(}iS_{EH}[g,\Psi]-i\lambda\biggl{(}\mathcal{T}_{f}-\mathcal{T}_{i}-\int_{\mathcal{M}}d^{4}x\sqrt{-g}\biggr{)}\biggr{)}. \tag{81}\]
Note that the delta function did not fix \(\lambda\) completely and thus we are meant to integrate over the residual constant part. This gives us an ordinary delta function fixing a global constraint
\[\mathcal{I}_{\mathcal{T}}=\delta(\mathcal{T}_{f}-\mathcal{T}_{i}-\mathrm{Vol} _{\mathcal{M}}[g])\exp\bigl{(}iS_{EH}[g,\Psi]\bigr{)}. \tag{82}\]
We can see that the integration over the Lagrange multiplier \(\lambda\) introduces an extra global constraint on the metric volume of the considered spacetime region \(\mathcal{M}\)
\[\mathrm{Vol}_{\mathcal{M}}[g]=\mathcal{T}_{f}-\mathcal{T}_{i}\,. \tag{83}\]
From (81) we can see that the Einstein-Hilbert action thus obtains an unspecified cosmological constant term, which is, however, classically fixed by the global volume as we have explained in section 2.4. Note that any shift of the vacuum energy, which we may obtain by integrating out some heavy modes of the matter fields \(\int[D\Psi]\), only acts to rescale the entire expression
\[\mathcal{I}_{\mathcal{T}}\to\exp\Bigl{(}i\rho_{vac}(\mathcal{T}_{f}-\mathcal{T}_{i})\Bigr{)}\,\mathcal{I}_{\mathcal{T}}\,\qquad\text{as}\qquad S_{EH}[g,\Psi]\to S_{EH}[g,\Psi]+\rho_{vac}\mathrm{Vol}_{\mathcal{M}}[g]\,. \tag{84}\]
Hence correlation functions of any kind remain unaffected by such shifts and consequently local measurements are unaffected as well.
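To make concrete how the global volume constraint fixes \(\Lambda\), the following minimal numerical sketch (not taken from the references; it assumes a spatially flat FRW universe filled with dust, units \(8\pi G=1\), \(a(t_1)=1\), and arbitrary illustrative parameter values) integrates the Friedmann equation for a trial \(\lambda\) and selects the value for which the four-volume accumulated between two fixed times equals a prescribed \(\mathcal{T}_{f}-\mathcal{T}_{i}\).

```python
# Illustrative sketch: select Lambda from the global constraint (83),
# Vol_M[g] = T_f - T_i, in a flat dust-filled FRW model (8*pi*G = 1, a(t1) = 1).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

rho_m0 = 1.0          # dust density at t1 (assumption of this sketch)
t1, t2 = 0.0, 2.0     # the two hypersurfaces Sigma_{t1}, Sigma_{t2}
target = 15.0         # prescribed cosmic-time lapse T_f - T_i

def four_volume(lam):
    # Friedmann equation da/dt = a * sqrt((rho_m0/a^3 + lam)/3), integrated
    # together with dV/dt = a^3 (four-volume per unit comoving volume).
    rhs = lambda t, y: [y[0] * np.sqrt((rho_m0 / y[0]**3 + lam) / 3.0), y[0]**3]
    sol = solve_ivp(rhs, (t1, t2), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]

# The four-volume grows monotonically with Lambda, so a bracketed root search
# recovers the value fixed by the global constraint.
lam_star = brentq(lambda lam: four_volume(lam) - target, 0.0, 20.0)
print(f"Lambda fixed by the four-volume constraint: {lam_star:.4f}")
```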
Now we consider the situation where we fix the endpoint values of \(\lambda\). This gives a nearly identical expression to (76)
\[\mathcal{I}_{\lambda}=\int_{\lambda_{i}}^{\lambda_{f}}[D\lambda][DV]\exp(iS[g,\Psi,\lambda,V])\ ; \tag{85}\]
however, in order to get a consistent result we must alter the action that we use. Note that in (35) one must impose vanishing boundary conditions for the field \(V^{\mu}\). This corresponds to a fixing of the initial and final configuration of \(V^{\mu}\), and consequently of \(\mathcal{T}\), but not of \(\lambda\). Such an action is thus suitable for calculating path integrals with fixed initial and final \(\mathcal{T}\). The appropriate action for calculating the transition amplitude between \(\lambda\) eigenstates is instead (79) with the boundary term dropped [47]. Hence
\[S[g,\Psi,\lambda,V]=S_{EH}[g,\Psi]+\int_{\mathcal{M}}d^{4}x\,\bigl{(}-V^{\mu} \partial_{\mu}\lambda-\lambda\sqrt{-g}\bigr{)}. \tag{86}\]
In this action we must instead impose the vanishing of the variation of \(\lambda\) in order to obtain the equations of motion. The integration over \(V^{\mu}\) in (85) is completely free, while the integration over \(\lambda\) has fixed endpoints
\[\lambda(t_{i})=\lambda_{i}\,\qquad\qquad\text{and}\qquad\qquad\lambda(t_{f})= \lambda_{f}\,. \tag{87}\]
We can carry out the integration over \(V^{\mu}\) directly to obtain
\[\mathcal{I}_{\lambda}=\int_{\lambda_{i}}^{\lambda_{f}}[D\lambda]\delta(\partial_{\mu}\lambda)\exp(iS_{EH}[g,\Psi]-i\lambda\text{Vol}_{\mathcal{M}}[g])\;. \tag{88}\]
The integration over the delta function now gives
\[\int_{\lambda_{i}}^{\lambda_{f}}[D\lambda]\delta(\partial_{\mu}\lambda)=\delta( \lambda_{f}-\lambda_{i})\;. \tag{89}\]
Hence, we obtain
\[\mathcal{I}_{\lambda}=\delta(\lambda_{f}-\lambda_{i})\exp(iS_{EH}[g,\Psi]-i\lambda_{i}\text{Vol}_{\mathcal{M}}[g])\;. \tag{90}\]
In this case the cosmological constant is specified directly by \(\lambda_{i}\). Consequently, the effective cosmological constant receives any shifts of vacuum energy \(\rho_{vac}\) from the matter sector.
We can see that the result (82) is insensitive to quantum corrections of the vacuum energy, while the latter result (90) is not. Crucially, the difference in the considerations that lead to these results is exactly in line with the semi-classical case that we have discussed in section 2. In particular, choosing the initial value for the Lagrange multiplier \(\lambda\) spoils the solution of the old cosmological constant problem. Instead, allowing \(\lambda\) to be free yields a formulation where the effective cosmological constant is stable against radiative corrections. In the semi-classical case this corresponds to solving for \(\lambda\) algebraically, without initial conditions, while in the present setting it corresponds to integration over the Lagrange multiplier including its zero mode. This distinction is consistent with other results on various aspects of quantum UG. For example, the works [22; 28; 31; 47] fix \(\lambda\) by hand, and the results point toward the conclusion that the status of the cosmological constant in UG is not any different from GR. On the other hand, the works [27; 30; 48; 49; 50; 51] base their calculations on formulations that do not rely on a Lagrange multiplier to enforce (15), and their conclusions are consistent with the old CC problem indeed being solved in UG.
### Quantum fluctuations
As we have seen in section 2.4 the two global quantities \(\mathcal{T}\) and \(\lambda\) form a conjugate pair, with the following Dirac bracket relation
\[\{\mathcal{T},\lambda\}=1\;. \tag{91}\]
Upon standard canonical quantization such relation becomes a commutator of operators due to the correspondence principle
\[[\hat{\mathcal{T}},\hat{\lambda}]=i\;. \tag{92}\]
Consequently, the two associated observables are not simultaneously measurable and the corresponding quantities are subjected to quantum fluctuations, whose size is constrained by the Heisenberg uncertainty relations
\[\delta\mathcal{T}\times\delta\lambda\geq\frac{1}{2}\;. \tag{93}\]
Since the measurement of cosmic time between two hypersurfaces corresponds to the spacetime four-volume, any uncertainty in the measurement of \(\mathcal{T}\) is translated to an uncertainty of the four-volume itself. Hence we can write [32; 52]
\[\delta\lambda\times\delta\text{Vol}_{\mathcal{M}}[g]\geq\frac{1}{2}\;. \tag{94}\]
Such fluctuations are mostly harmless, as the four-volume is typically very large and any fluctuations in it can be localized very far from a local observer, potentially even in a causally disconnected region. Hence, we may usually measure \(\lambda\) with arbitrary precision. It follows that such fluctuations are unlikely to have any effect in our Universe; however, they present a conceptual difference between quantum GR and UG.
Nevertheless, we can imagine situations where such fluctuations can have significant effects. Consider a closed, radiation-dominated Friedmann universe. The associated scale factor then evolves as
\[a(\eta)=a_{m}\sin(\eta)\, \tag{95}\]
where \(a_{m}\) is the scale factor at the turning point and \(\eta\) is the conformal time. It is reasonable to assume that the fluctuations are smaller than the total four-volume. Hence, we obtain [52]
\[\delta\mathcal{T}<\text{Vol}_{4}[g]=\frac{3\pi^{3}}{4}a_{m}^{4}. \tag{96}\]
Using the uncertainty relation we find a lower bound on the fluctuations of the cosmological constant
\[\delta\lambda>\frac{2}{3\pi^{3}}a_{m}^{-4}. \tag{97}\]
Clearly, this is negligible in a large universe, but it renders small universes inconsistent, as large fluctuations of \(\lambda\) violate the assumption of radiation domination. It would be interesting to see whether such a small universe, which would quickly collapse in the ordinary GR setting, could grow large due to such a fluctuation in \(\lambda\). It would also be of interest whether such fluctuations can be recovered through the path integral techniques, which have been explored in greater detail. Finally, we would like to note that a slight modification of the Henneaux and Teitelboim construction (35) can be used to promote any physical constant to a degree of freedom, by promoting the constant \(\alpha\) to a scalar field and introducing the term
\[V^{\mu}\partial_{\mu}\alpha. \tag{98}\]
By extension we can obtain Heisenberg relations for various constants with their associated global conjugates. This has been performed for the Planck mass and the Planck constant [32], as well as various other constants [53; 54; 55].
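Returning to the bound (97), a quick numerical illustration (an illustrative calculation, not from the text; Planck units \(\hbar=c=1\) are assumed and the values of \(a_{m}\) are arbitrary):

```python
# Lower bound (97) on the fluctuations of the cosmological constant for a few
# illustrative turning-point scale factors a_m (Planck units, hbar = c = 1).
import numpy as np

for a_m in (1.0, 1e2, 1e30, 1e61):
    delta_lambda = 2.0 / (3.0 * np.pi**3) / a_m**4
    print(f"a_m = {a_m:.0e}  ->  delta(lambda) > {delta_lambda:.3e}")
```

The bound is of order unity only for Planck-sized universes and utterly negligible for a large one, in line with the discussion above.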
## 4 Vacuum energy sequestering
Another notable theory that aims to address the old cosmological constant problem is vacuum energy sequestering [13; 14]. This proposal shares multiple similarities with UG, in particular in its local formulation [33] and hence it is useful to compare the two here. The original idea relies on an introduction of global mechanics, which enforce a pair of global constraints. These constraints then determine the effective cosmological constant in a manner that is stable against quantum corrections of vacuum energy. The global dynamics are introduced by considering a pair of _global variables_\(\theta\) and \(\Lambda\). The former is input by hand as a rescaling of the physical metric in the gravitational sector
\[g_{\mu\nu}\rightarrow\tilde{g}_{\mu\nu}=\theta^{-2}\,g_{\mu\nu}. \tag{99}\]
The second variable is the cosmological constant of GR promoted to an independent variable with no space-time dependence. Hence the _local_ part of the action is modified as
\[S=\int d^{4}x\sqrt{-g}\bigg{[}-\frac{1}{2\theta^{2}}R+\Lambda+\mathcal{L}(g_{\mu\nu},\Psi)\bigg{]}. \tag{100}\]
Furthermore, this action is supplemented by a _global_ term
\[\sigma\bigg{(}\frac{\Lambda}{\mu^{4}}\bigg{)}. \tag{101}\]
where \(\sigma\) is an arbitrary monotonic function and \(\mu\) is an unspecified dimensionful parameter, which is meant to be measured. The total action is
\[S[g,\Psi,\Lambda,\theta]=\int d^{4}x\sqrt{-g}\bigg{[}-\frac{1}{2\theta^{2}}R+ \Lambda+\mathcal{L}(g_{\mu\nu},\Psi)\bigg{]}+\sigma\bigg{(}\frac{\Lambda}{\mu^ {4}}\bigg{)}. \tag{102}\]
Crucially, the novel variables \(\theta\) and \(\Lambda\) are not fields and have no space-time dependence; yet, they are subjected to the variational principle. Consequently, their equations of motion yield two global constraints
\[\int d^{4}x\sqrt{-g}R=0\,\qquad\qquad\qquad\frac{\sigma^{\prime}}{\mu^{4}}= \int d^{4}x\,\sqrt{-g}. \tag{103}\]
It is useful to introduce a space-time average of a scalar quantity as \(\langle\phi\rangle\equiv\int d^{4}x\sqrt{-g}\phi/\int d^{4}x\sqrt{-g}\). Using this we can rewrite the first constraint as
\[\langle R\rangle=0. \tag{104}\]
The equations of motion for the metric \(g_{\mu\nu}\) are given as
\[\theta^{-2}G_{\mu\nu}=T_{\mu\nu}-\Lambda g_{\mu\nu}. \tag{105}\]
We can clearly see that these are just Einstein equations with an unspecified cosmological constant and unspecified rescaling of the Planck mass. The key property of the sequestering mechanism is that the global constraints (103) allow us to find an explicit expression for \(\Lambda\), that does not reduce the Einstein equation to traceless equations (4). This is achieved by taking the trace and a space-time average of (105). Doing so we obtain
\[-\langle R\rangle=\langle T\rangle-4\Lambda. \tag{106}\]
We can use the first of the two global constraints (103) to eliminate the average curvature to find11
Footnote 11: Interestingly, a similar constraint for \(\Lambda\) has been considered in [19; 27].
\[\Lambda=\frac{1}{4}\,\langle T\rangle. \tag{107}\]
This can be plugged back into the Einstein equation (105), which now reads
\[\theta^{-2}G_{\mu\nu}=T_{\mu\nu}-\frac{1}{4}\,\langle T\rangle\,g_{\mu\nu}. \tag{108}\]
This equation clearly possesses the same symmetry (8) as UG. However, unlike UG, the covariant divergence of this equation vanishes identically. Hence, there is no differential constraint which would give rise to an additional component of the cosmological constant. Instead, we obtain the full set of 10 equations. The form of these equations (108) is rather unusual, as it contains a term that is non-local in time. Hence, it would seem difficult to interpret it as an evolution equation for a set of initial data. Nevertheless, finding solutions of these equations is rather straightforward. The method is exactly the same as we have discussed in section 2.4. We consider the Einstein equation with an _unspecified_ cosmological constant \(\lambda\) and the parameter \(\theta\)
\[\theta^{-2}G_{\mu\nu}=T_{\mu\nu}-\lambda g_{\mu\nu}\, \tag{109}\]
and find a family of solutions labeled by their values: \(g_{\mu\nu}(\theta,\lambda)\). For such solutions we evaluate the energy momentum tensor \(T_{\mu\nu}(\theta,\lambda)\) and calculate its associated space-time average. Plugging such expression into (107) and setting \(\Lambda=\lambda\) we find a consistency equation
\[\Lambda=\frac{1}{4}\left\langle T\right\rangle(\theta,\Lambda). \tag{110}\]
The actual value of \(\Lambda\) is then selected as a solution of this equation in terms of \(\theta\). Note that the existence of a solution is not in general guaranteed. If no solution exists, then none of the members of the family of solutions \(g_{\mu\nu}(\theta,\Lambda)\) are solutions of equations (108). Once we find \(\Lambda\), we can plug it back into the second global constraint in (103) in order to determine \(\theta\). Crucially, unlike UG, the vacuum energy sequester does not allow for an arbitrary value of the cosmological constant but a specific one, which is determined by the above procedure.
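The selection procedure just described is easy to mimic numerically. The sketch below is illustrative only: the averaged trace \(\langle T\rangle(\theta,\Lambda)\) is replaced by a hypothetical toy function rather than an actual solution of (109), and the only point being made is that the consistency equation (110) is a one-dimensional root-finding problem.

```python
# Illustrative sketch of solving the consistency equation (110),
# Lambda = <T>(theta, Lambda) / 4, by bracketed root finding.
from scipy.optimize import brentq

def averaged_trace(theta, lam):
    # Hypothetical stand-in for <T>(theta, Lambda); a real computation would
    # solve (109) for the metric family and average T^mu_mu over space-time.
    return 1.0 / (1.0 + theta**2 * lam**2)

theta = 0.7
consistency = lambda lam: lam - 0.25 * averaged_trace(theta, lam)
lam_star = brentq(consistency, 0.0, 1.0)   # a solution is not guaranteed in general
print(f"Lambda selected by the consistency equation (110): {lam_star:.6f}")
```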
### Local formulation
The main disadvantage of the sequestering proposal is that its formulation requires an unusual global term and variables. To remedy this, a local formulation of the theory has been proposed in [33], which can be obtained by a 'localization' of the global dynamics of (102). The strategy to localize action (102) is rather simple. The global variables \(\theta\) and \(\Lambda\) are promoted to local variables - scalar fields
\[\Lambda\to\Lambda(x)\,\qquad\qquad\theta\to\theta(x)\, \tag{111}\]
however, their non-constant part is immediately constrained to vanish using vector Lagrange multipliers
\[V^{\nu}\,\partial_{\nu}\sigma\bigg{(}\frac{\Lambda}{\mu^{4}}\bigg{)}\,\qquad\qquad W^{\nu}\, \partial_{\nu}\tilde{\sigma}\bigg{(}\frac{1}{\theta}\bigg{)}\, \tag{112}\]
where \(\sigma\) and \(\tilde{\sigma}\) are taken to be monotonic functions and \(V^{\mu}\) and \(W^{\mu}\) are vector densities. Hence, the equations of motion for \(V^{\mu}\) and \(W^{\mu}\) yield
\[\partial_{\mu}\,\Lambda=\partial_{\mu}\,\theta=0. \tag{113}\]
\(\Lambda\) and \(\theta\) thus become global variables only on the equations of motion, rather than a priori. Including the constraint terms in the action yields
\[S[g,\Psi,\Lambda,\theta]=\int d^{4}x\sqrt{-g}\bigg{[}-\frac{1}{2 \theta^{2}}R-\Lambda+\mathcal{L}(g_{\mu\nu},\Psi)\bigg{]}\] \[\qquad\qquad\qquad\qquad\qquad+\int d^{4}x\bigg{[}\partial_{\mu}V^ {\mu}\,\sigma\bigg{(}\frac{\Lambda}{\mu^{4}}\bigg{)}+\partial_{\mu}W^{\mu}\, \tilde{\sigma}\bigg{(}\frac{1}{\theta}\bigg{)}\bigg{]}. \tag{114}\]
The extra constraint terms are metric independent and thus they do not affect the gravitational equations; however, we get new relations that govern the dynamics of the extra fields. In total, we obtain the following set of equations
\[\theta^{-2}G_{\mu\nu}=T_{\mu\nu}-\Lambda g_{\mu\nu}\,, \tag{115}\] \[\frac{\sigma^{\prime}}{\mu^{4}}\partial_{\mu}V^{\mu}=\sqrt{-g}\,,\qquad\qquad\frac{\tilde{\sigma}^{\prime}}{\theta}\partial_{\mu}W^{\mu}=\sqrt{-g}\,R\,, \tag{116}\] \[\partial_{\mu}\,\Lambda=0\,,\qquad\qquad\partial_{\mu}\,\theta=0\,. \tag{117}\]
We can see that the action (114), as well as the associated equations of motion, bear a strong resemblance to unimodular gravity. The tensor equation of motion remains unaffected by the change in description and retains its form (105). The vector field \(V^{\mu}\) has the same role as it had in the HT formulation of UG - to force constancy of \(\Lambda\). Analogously, the vector density \(W^{\mu}\) is used to force constancy of \(\theta\). In essence, the effective Planck mass and the cosmological constant are promoted to integration constants rather than bare coupling constants. The physical interpretation and pitfalls of this theory are consequently very similar to UG. Indeed, if we provide initial conditions for \(\Lambda\) and \(\theta\) directly in order to solve (117), we obtain an ordinary Einstein equation (115) with the chosen constants. Such an approach is clearly no different from choosing the cosmological constant directly in UG, and hence, it will be unstable against radiative corrections.
However, similar to UG, the cosmological constant can be prescribed in a stable manner. To demonstrate this we first analyze the equations of motion for \(\theta\) and \(\Lambda\) (116). These are local analogues of the global constraints (103). While being local, these equations describe the evolution of two global quantities, namely the 'cosmic times' \(\mathcal{T}\) and \(\tilde{\mathcal{T}}\) associated with \(V^{\mu}\) and \(W^{\mu}\), which can be introduced as in (62). Such quantities are sourced by the space-time volume and the integrated curvature, respectively
\[\mathcal{T}_{t_{2}}-\mathcal{T}_{t_{1}}=\frac{\mu^{4}}{\sigma^{\prime}}\text{Vol}_{\mathcal{M}}[g]\,\qquad\qquad\tilde{\mathcal{T}}_{t_{2}}-\tilde{\mathcal{T}}_{t_{1}}=\frac{\theta}{\tilde{\sigma}^{\prime}}\int_{\mathcal{M}}d^{4}x\sqrt{-g}R. \tag{118}\]
These equations are now global equations, which can be used in the same manner as the original global constraints (103). In particular, consider taking the trace and a space-time average of the equation (115). We again find (106). By taking the ratio of (118) we can express the averaged curvature
\[\left\langle R\right\rangle=\frac{\tilde{\sigma}^{\prime}}{\sigma^{\prime}}\,\frac{1}{\theta\mu^{4}}\,\frac{\tilde{\mathcal{T}}_{t_{2}}-\tilde{\mathcal{T}}_{t_{1}}}{\mathcal{T}_{t_{2}}-\mathcal{T}_{t_{1}}}. \tag{119}\]
Hence from (106) we find
\[\Lambda=\frac{1}{4}\left\langle T\right\rangle+\Delta\Lambda\, \tag{120}\]
where
\[\Delta\Lambda=\frac{\tilde{\sigma}^{\prime}}{\sigma^{\prime}}\,\frac{1}{\theta^{3}\mu^{4}}\,\frac{\tilde{\mathcal{T}}_{t_{2}}-\tilde{\mathcal{T}}_{t_{1}}}{\mathcal{T}_{t_{2}}-\mathcal{T}_{t_{1}}}. \tag{121}\]
Note that (120) is not in general an explicit solution for \(\Lambda\), as \(\sigma\) is a function of \(\Lambda\). Nevertheless, plugging this expression into the Einstein equation (115) we find the sequestered equations (108) up to an extra vacuum energy piece \(\Delta\Lambda\)
\[\theta^{-2}G_{\mu\nu}=\,T_{\mu\nu}-\frac{1}{4}\left\langle T\right\rangle g_{ \mu\nu}-\Delta\Lambda g_{\mu\nu}. \tag{122}\]
This equation again has the shift symmetry (8), which cancels out the quantum corrections of vacuum energy on the right hand side. Furthermore, the extra piece of the effective cosmological
constant, \(\Delta\Lambda\), does not depend on the energy and momentum of matter at all. It depends only on gravitational quantities such as the integral of \(V^{\mu}\) and \(W^{\mu}\), which are sourced by the four-volume and the scalar curvature. Thus \(\Delta\Lambda\) does not directly carry the information about the energy and momentum of matter. Consequently, since gravitational degrees of freedom are protected via the symmetry (8) in the tensor equation, \(\Delta\Lambda\) does not receive any such corrections. Note that unlike in the original global version of the sequester, we do not have a direct expression for the effective cosmological constant. Instead, such constant must be determined through measurement. The main point of the above discussion is to demonstrate that such value is then stable under the radiative corrections.
We would like to comment here that the equations of the local version of the sequester can be reduced to the original (108) if we allow ourselves to prescribe the initial and final values of the cosmic times \(\mathcal{T}\) and \(\tilde{\mathcal{T}}\). In particular, the choice \(\tilde{\mathcal{T}}_{t_{2}}=\tilde{\mathcal{T}}_{t_{1}}\) reduces the second equation (118) to the constraint (104). Equivalently, we get \(\Delta\Lambda=0\), so equations (122) reduce exactly to (108). Finally, we would like to point out that the same strategy can be applied to the HT formulation of UG. Providing the initial and final \(\mathcal{T}\) gives us a global constraint
\[\mathcal{T}_{t_{2}}-\mathcal{T}_{t_{1}}=\text{Vol}_{\mathcal{M}}[g]\, \tag{123}\]
which can be used to determine the effective cosmological constant. Such a constant is then clearly insensitive to any quantum corrections, as the space-time volume is fixed.
Finally, the localization procedure described in this section can be reversed and applied to unimodular gravity (35) to find a unimodular analogue of the sequestering mechanism. Doing so implies that \(\lambda\) becomes a global variable
\[\lambda(x)\to\lambda. \tag{124}\]
This allows us to integrate the divergence of the vector in the constraint part of the action (35) to obtain
\[S_{const}=\lambda\Big{(}\mathcal{T}_{f}-\mathcal{T}_{i}-\text{Vol}_{\mathcal{ M}}[g]\Big{)}. \tag{125}\]
The variation of the global \(\lambda\) now implies (64), where the time at the endpoints must be specified a priori. Upon variation this yields Einstein equations with a cosmological constant that is determined through the global constraint (64). Note that the crucial difference in comparison to sequestering is that the freedom in choosing \(\mathcal{T}_{f}-\mathcal{T}_{i}\) allows us to reconstruct _any_ value of the cosmological constant. In sequestering, the global constraint that determines \(\Lambda\) (104) does not present any choice. On the other hand, the solution for \(\theta\) is affected by the choice of the function \(\sigma\).
## 5 Conclusions
In this work we discussed whether unimodular gravity is or is not able to resolve the old cosmological constant problem. In section 2 we pointed out that the answer hinges on a rather minute technicality - on how one provides the data that determine the effective cosmological constant. This point is completely moot on a classical level; however, it becomes crucial on a semi-classical level, when we introduce quantum corrections to the vacuum energy. The distinction arises in theories which use a Lagrange multiplier (16), (35) in order to enforce their respective constraints (15), (38). In such formulations one often encounters that the initial condition is set up for the Lagrange multiplier directly. Such fixing implies that the zero mode of the multiplier is not varied in the action and hence the associated constraint is enforced only locally. The local versions of the constraints are, however, nearly 'empty', as GR possesses enough gauge symmetry to satisfy them without any effect on the dynamics. Consequently, setting up the cosmological constant in this manner amounts to little to no change in the dynamics in comparison to GR with a chosen CC. Hence, we found that the cosmological constant problem is still present when the CC is chosen in this way. Leaving the initial conditions of the Lagrange multiplier free implies that the constraints are enforced fully. This introduces a global constraint on the four-volume (32), (64), which is able to fix the effective cosmological constant in a manner that is stable against quantum corrections (71). Hence, such a route offers a resolution of the old cosmological constant problem. The above mentioned issues are not encountered in theories where the metric is endowed with a composite structure, which enforces the appropriate constraints (26), (45) automatically. When there are no Lagrange multipliers, we cannot assign initial values to them. For this reason such formulations can be considered to have an advantage over the Lagrange multiplier ones, and indeed the cosmological constant problem has been reported to be solved in these versions of UG [48; 49].
We discussed a recently proposed pair of theories of UG (45), (57) [25; 26] in section 2.3. These proposals combine many desired properties, as they are fully diffeomorphism covariant, Weyl invariant theories of UG that do not rely on a Lagrange multiplier. Hence the cosmological constant problem is unambiguously solved within them. We discussed possible extensions of these theories beyond unimodular gravity and how they can fit within the Standard Model of particle physics, while making sure that the decoupling mechanism for quantum corrections of vacuum energy remains functional. We pointed out a striking similarity of the proposal (59) to the axion dynamics of SU(3) Yang-Mills theory.
In section 3 we reviewed the path integral quantization of the unimodular degree of freedom, the cosmological constant, in the HT formulation (35) with a Lagrange multiplier. The structure of the additional degree of freedom is very simple, and it can be integrated out easily, separately from the metric and matter degrees of freedom. Such integration can be carried out in two ways: by either fixing the initial and final value of the cosmological constant itself, or by doing the same for its conjugate quantity - the cosmic time (62). This is a direct analogue of the initial value ambiguity in the semi-classical case discussed in section 2 and leads to the same conclusion. That is, choosing the initial value of the cosmological constant directly spoils the solution of the cosmological constant problem. Conversely, the second route, fixing the cosmic time, leads to a resolution of the CC problem by introducing a global constraint. We further discussed that the promotion of the cosmological constant to a degree of freedom naively leads to the appearance of global fluctuations of the cosmological constant. Such fluctuations likely have no effect in our Universe due to its large size. Nevertheless, the existence of these fluctuations presents a conceptual difference between UG and GR.
Finally, we discussed the vacuum energy sequestering [13] in section 4. We reviewed its basic formulation and compared its workings with UG. In our view, the mechanism that allows UG to alleviate the CC problem is surprisingly similar to the mechanism of vacuum energy sequestering, in that the two theories can both be viewed as operating through global constraints. In contrast to UG, the original sequestering proposal does not allow us to stray away from this global structure, and thus it is guaranteed to provide a resolution of the old cosmological constant problem. Furthermore, in comparison to UG, the constraint that determines the cosmological constant in sequestering is uniquely fixed. In this sense sequestering is more constrained than UG. The similarities between sequestering and UG are even more pronounced in the local formulation (114), which unfortunately introduces the same ambiguity in providing the initial value for the cosmological constant. The relation between the local and global formulations of sequestering can be extrapolated to allow us to write down a formulation of UG analogous to the global vacuum energy sequestering (125).
In our view unimodular gravity indeed offers a resolution of the old cosmological constant problem, but only as long as one is careful in setting up the initial value for the cosmological constant in the correct way. This particular distinction goes beyond the classical considerations, which has led to conflicting reports on the viability of UG in regards to the old CC problem. However, these findings are consistent once the above distinction is highlighted.
**Funding:** P.J. acknowledges funding from the South African Research Chairs Initiative of the Department of Science and Technology and the National Research Foundation of South Africa.
**Data Availability Statement:** Not applicable.
**Acknowledgments:** It is a pleasure to thank Alexander Vikman and Ippocratis Saltas for useful discussions.
**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
## Abbreviations
The following abbreviations are used in this manuscript:
CC cosmological constant
GR general relativity
UG unimodular gravity
HT Henneaux and Teitelboim
EH Einstein-Hilbert
arXiv:2305.13791 - Fabien Le Floc'h, 2023-05-23, http://arxiv.org/abs/2305.13791v1
The Quadratic Local Variance Gamma Model: an arbitrage-free interpolation of class \(\mathscr{C}^{3}\) for option prices
###### Abstract
This paper generalizes the local variance gamma model of Carr and Nadtochiy, to a piecewise quadratic local variance function. The formulation encompasses the piecewise linear Bachelier and piecewise linear Black local variance gamma models. The quadratic local variance function results in an arbitrage-free interpolation of class \(\mathscr{C}^{3}\). The increased smoothness over the piecewise-constant and piecewise-linear representation allows to reduce the number of knots when interpolating raw market quotes, thus providing an interesting alternative to regularization while reducing the computational cost.
## 1 Introduction
The financial markets provide option prices for a discrete set of strike prices and maturity dates. In order to price over-the-counter vanilla options with different strikes, or to hedge complex derivatives with vanilla options, it is useful to have a continuous arbitrage-free representation of the option prices, or equivalently of their implied volatilities. For example, the variance swap replication of Carr and Madan consists in integrating a specific function over a continuum of vanilla put and call option prices (Carr and Lee, 2008; Carr and Madan, 2001). More generally, Breeden and Litzenberger (1978) have shown that any path-independent claim can be valued by integrating over the probability density implied by market option prices. An arbitrage-free representation is also particularly important for the Dupire local volatility model (Dupire, 1994), where arbitrage will translate to a negative local variance. An option price representation of class \(\mathscr{C}^{2}\) is also key to guaranteeing the second-order convergence of numerical schemes applied to the Dupire partial differential equation, commonly used to price exotic financial derivative contracts.
A rudimentary but popular representation is to interpolate market implied volatilities with a cubic spline across option strikes. Unfortunately this may not be arbitrage-free, as it does not preserve the convexity of option prices in general. A typical convex interpolation of the call option prices by quadratic splines or rational splines is also not satisfactory in general, since it may generate unrealistic oscillations in the corresponding implied volatilities, as evidenced in (Jackel, 2014). Kahale (2004) designs an arbitrage-free interpolation of the call option prices; however, it requires convex input quotes, employs two embedded non-linear minimizations, and it is not proven that the algorithm for the interpolation function of class \(\mathcal{C}^{2}\) converges.
More recently, Andreasen and Huge (2011) have proposed to calibrate the discrete piecewise constant local volatility corresponding to a single-step finite difference discretization of the forward Dupire equation. In their representation of the local volatility, the authors use as many constants as the number of market option strikes for an optimal fit. It is thus sometimes considered to be "non-parametric". Their technique works well in general but requires some care around the choice of discretization grid: it must be sufficiently dense so that two market strikes do not fall in between the same consecutive grid nodes, and sufficiently wide to properly model the boundary behaviour. Those two requirements complicate, and slow down the non-linear optimization involved in the technique. Furthermore the output is a discrete set of option prices, which, while relatively dense, must still be interpolated carefully to obtain the price of options whose strike falls in between grid nodes.
Le Floc'h and Oosterlee (2019) derived a specific B-spline collocation to fit the market option prices, while ensuring the arbitrage-free property at the same time. While the fit is quite good in general, it may not
be applicable to interpolate the original quotes with high accuracy. For example, input quotes may already be smoothed out if they stem from a prior model, or from a market data broker, or from another system in the bank. In those cases, it is desirable to use a nearly exact interpolation.
Le Floc'h (2021) extends the local variance gamma model of Carr and Nadtochiy (2017), which relies on a piecewise-constant representation of the local variance function, to a piecewise-linear Bachelier representation. This paper generalizes the model to a piecewise-quadratic function. It encompasses the piecewise-linear Bachelier and piecewise-linear Black representations. The full piecewise-quadratic model results in an arbitrage-free interpolation of class \(\mathcal{C}^{3}\) for the option prices. The smoother implied probability density allows for the use of a sparser set of interpolation knots, thus providing an alternative to regularization in order to avoid overfitting. In addition, a sparser set of knots reduces the computational cost of the technique.
## 2 Dupire's PDDE in the local variance gamma model
We recall Dupire's partial difference differential equation (PDDE) for a call option price \(C(T,x)\) of strike \(x\) and maturity \(T\)(Carr and Nadtochiy, 2017):
\[\frac{C(T,x)-\max(X(0)-x,0)}{T}=\frac{1}{2}a^{2}(x)\frac{\partial^{2}C(T,x)}{\partial x^{2}}\,, \tag{1}\]
for a Martingale asset price process \(X(t)\) of expectation \(\mathbb{E}_{0}[X(t)]=X(0)\).
Let \(\{x_{0},x_{1},...,x_{m},x_{m+1}\}\) be an increasing set of strike prices, such that \(x_{0}=L\), \(x_{m+1}=U\), with the interval \((L,U)\) being the spatial interval where the asset \(X\) lives. Furthermore, we require the following to hold
\[\exists s\in[1,m]|x_{s}=X(0)\,.\]
The \((x_{1},...,x_{m})\) may correspond to the strike prices of the options of maturity \(T\) we want to calibrate against, along with the forward price as in (Carr and Nadtochiy, 2017; Le Floc'h, 2021). This choice allows for a nearly exact fit. It may also be some specific discretization of size \(m\) with \(m\) lower or equal to the number of market strike prices.
We consider \(a\) to be a piecewise-quadratic function of class \(\mathcal{C}^{0}\) on \([x_{0},x_{m}]\).
Let \(V\) be the function defined by \(V(x)=C(x,T)-\max(X(0)-x,0)\). \(V\) is effectively the price of an out-of-the-money option (the price of a call option for \(x>X(0)\) and of a put option for \(x<X(0)\)). The Dupire PDDE leads to
\[V(x)=\frac{1}{2}a^{2}(x)T\left[V^{\prime\prime}(x)+\delta(x=X(0))\right]\,, \tag{2}\]
on the interval \((L,U)\), where \(\delta\) is the Dirac delta function. Instead of solving Equation 2 directly, we look for a solution \(V\) on the two intervals \((L,X(0))\) and \((X(0),U)\) separately. On each interval, we have
\[V(x)=\frac{1}{2}a^{2}(x)TV^{\prime\prime}(x)\,, \tag{3}\]
Then, the continuity of \(\frac{\partial C}{\partial x}\) at \(x=X(0)\) implies
\[\lim_{x\to X(0)-}V^{\prime}(x)=1+\lim_{x\to X(0)+}V^{\prime}(x)\,. \tag{4}\]
In order to define a unique \(V\), we also impose the absorbing boundary conditions
\[V(L)=0=V(U)\,. \tag{5}\]
The continuity of the second derivative of \(V\) at \(x=X(0)\) follows from the continuity of \(a(x)\) at \(x=X(0)\). We may further impose a \(\mathcal{C}^{3}\) continuity relation at \(x=x_{s}\):
\[\lim_{x\to X(0)-}\left(\frac{V}{a^{2}}\right)^{\prime}(x)=\lim_{x\to X(0)+} \left(\frac{V}{a^{2}}\right)^{\prime}(x)\,. \tag{6}\]
## 3 Explicit solution
Let \(a(x)=\alpha_{i}x^{2}+\beta_{i}x+\gamma_{i}\) on \([x_{i},x_{i+1}]\) with \((\alpha_{i},\beta_{i},\gamma_{i})\in\mathbb{R}^{3}\). Being a quadratic, \(a\) may also be expressed as \(a(x)=\alpha_{i}(x-\tilde{x}_{i,1})(x-\tilde{x}_{i,2})\) with
\[\tilde{x}_{i,1}=\frac{-\beta_{i}+\sqrt{\delta_{i}}}{2\alpha_{i}}\,,\quad\tilde{x}_{i,2}=\frac{-\beta_{i}-\sqrt{\delta_{i}}}{2\alpha_{i}}\,,\quad\text{with }\delta_{i}=\beta_{i}^{2}-4\alpha_{i}\gamma_{i}\,,\quad\text{for }\alpha_{i}\neq 0\,.\]
In particular, \(\tilde{x}_{i,1}\) and \(\tilde{x}_{i,2}\) may be complex numbers. When \(\alpha_{i}=0\) and \(\beta_{i}\neq 0\), we may define \(\delta_{i}=\beta_{i}^{2}\) and we have \(a(x)=\beta_{i}(x-\tilde{x}_{i,1})\) with \(\tilde{x}_{i,1}=-\gamma_{i}/\beta_{i}\).
The solutions of Equation 3 on \([x_{i},x_{i+1}]\) read
\[V(x)=\frac{\chi_{i}(x)}{\chi_{i}(x_{i})}\left[\Theta_{i}^{c}\cosh\left(\omega _{i}\left(z_{i}(x)-z_{i}(x_{i})\right)\right)+\Theta_{i}^{s}\sinh\left(\omega _{i}\left(z_{i}(x)-z_{i}(x_{i})\right)\right)\right]\,, \tag{7}\]
with1
Footnote 1: See Appendix A on how to avoid the use of complex numbers.
\[z_{i}(x)=\ln\left(\frac{x-\tilde{x}_{i,1}}{x-\tilde{x}_{i,2}} \right)\,, \omega_{i}=\frac{1}{2}\sqrt{1+\frac{8}{\delta_{i}\,T}}\,, \chi_{i}=\sqrt{\left(x-\tilde{x}_{i,1}\right)\left(x-\tilde{x}_{i,2} \right)}\,, \text{for }\alpha_{i}\neq 0\,,\] \[z_{i}(x)=\ln\left|x-\tilde{x}_{i,1}\right|\,, \omega_{i}=\frac{1}{2}\sqrt{1+\frac{8}{\delta_{i}\,T}}\,, \chi_{i}=\sqrt{\left|x-\tilde{x}_{i,1}\right|}\,, \text{for }\alpha_{i}=0\text{, and }\beta_{i}\neq 0\,,\] \[z_{i}(x)=x\,, \omega_{i}=\frac{1}{\gamma_{i}}\sqrt{\frac{2}{T}}\,, \chi_{i}=1\,, \text{for }\alpha_{i}=0\text{, and }\beta_{i}=0\,.\]
where \((\Theta_{i}^{c},\Theta_{i}^{s})\in\mathbb{C}^{2}\). The normalization makes \(V(x_{i})=\Theta_{i}^{c}\).
The derivative of \(V\) reads
\[V^{\prime}(x)=\frac{\chi_{i}(x)}{\chi_{i}(x_{i})}z_{i}^{\prime}(x)\left[(\kappa _{i}\Theta_{i}^{c}+\omega_{i}\Theta_{i}^{s})\cosh\left(\omega_{i}(z_{i}(x)-z_{ i}(x_{i}))\right)+(\kappa_{i}\Theta_{i}^{s}+\omega_{i}\Theta_{i}^{c})\sinh\left( \omega_{i}(z_{i}(x)-z_{i}(x_{i}))\right)\right]\,, \tag{8}\]
with
\[z_{i}^{\prime}(x)=\frac{1}{x-\tilde{x}_{i,1}}-\frac{1}{x-\tilde{ x}_{i,2}}\,, \kappa_{i}=\frac{1}{2z_{i}^{\prime}(x)}\left(\frac{1}{x-\tilde{x}_{i,1}}+\frac{ 1}{x-\tilde{x}_{i,2}}\right)\,, \text{for }\alpha_{i}\neq 0\,,\] \[z_{i}^{\prime}(x)=\frac{1}{x-\tilde{x}_{i,1}}\,, \kappa_{i}=\frac{1}{2}\,, \text{for }\alpha_{i}=0\text{ and }\beta_{i}\neq 0\,,\] \[z_{i}^{\prime}(x)=1\,, \kappa_{i}=0\,, \text{for }\alpha_{i}=0\text{ and }\beta_{i}=0\,.\]
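For reference, a compact helper evaluating the per-interval quantities \(z_{i}\), \(z_{i}^{\prime}\), \(\omega_{i}\), \(\chi_{i}\) and \(\kappa_{i}\) defined above might look as follows. This is an illustrative sketch only; complex arithmetic is used throughout so that a negative discriminant \(\delta_{i}\) is handled transparently, whereas Appendix A of the paper avoids complex numbers explicitly.

```python
# Illustrative per-interval quantities for the quadratic LVG solution (7)-(8).
import numpy as np

def interval_quantities(x, alpha, beta, gamma, T):
    if alpha != 0.0 or beta != 0.0:
        delta = complex(beta**2 - 4.0 * alpha * gamma)       # discriminant delta_i
        omega = 0.5 * np.sqrt(1.0 + 8.0 / (delta * T))
        if alpha != 0.0:
            r1 = (-beta + np.sqrt(delta)) / (2.0 * alpha)    # roots of a(x),
            r2 = (-beta - np.sqrt(delta)) / (2.0 * alpha)    # possibly complex
            z = np.log((x - r1) / (x - r2))
            dz = 1.0 / (x - r1) - 1.0 / (x - r2)
            chi = np.sqrt((x - r1) * (x - r2))
            kappa = (1.0 / (x - r1) + 1.0 / (x - r2)) / (2.0 * dz)
        else:                                                # linear local variance
            r1 = -gamma / beta
            z = np.log(abs(x - r1))
            dz = 1.0 / (x - r1)
            chi = np.sqrt(abs(x - r1))
            kappa = 0.5
    else:                                                    # constant local variance
        z, dz, chi, kappa = x, 1.0, 1.0, 0.0
        omega = np.sqrt(2.0 / T) / gamma
    return z, dz, omega, chi, kappa
```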
The conditions to impose continuity of \(V\) and its derivative at \(x=x_{i+1}\) results in the following linear system
\[\cosh_{i}\Theta_{i}^{c}+\sinh_{i}\Theta_{i}^{s}=\frac{\Theta_{i+1 }^{c}}{\chi_{i}(x_{i+1})}\,, \tag{9}\] \[(\kappa_{i}\cosh_{i}+\omega_{i}\sinh_{i})\Theta_{i}^{c}+(\omega_{i }\cosh_{i}+\kappa_{i}\sinh_{i})\Theta_{i}^{s}=\frac{\left(\kappa_{i+1}\Theta_{i+ 1}^{c}+\omega_{i+1}\Theta_{i+1}^{s}\right)z_{i+1}^{\prime}(x_{i+1})}{\chi_{i}(x _{i+1})z_{i}^{\prime}(x_{i+1})} \tag{10}\]
for \(i=0,...,s-2\), with
\[\cosh_{i}=\cosh\left(\omega_{i}(z_{i}(x_{i+1})-z_{i}(x_{i}))\right)\,,\quad\sinh _{i}=\sinh\left(\omega_{i}(z_{i}(x_{i+1})-z_{i}(x_{i}))\right)\,.\]
The boundary condition at \(x=x_{0}=L\) translates to \(\Theta_{0}^{c}=0\). At \(x=x_{m+1}=U\), the boundary condition translates to \(\Theta_{m}^{c}=-\Theta_{m}^{s}\frac{\sinh_{m}}{\cosh_{m}}\). The jump condition at \(x=x_{s}\) reads
\[V_{s-1}(x_{s}) =V_{s}(x_{s})\,,\] \[V_{s-1}^{\prime}(x_{s}) =1+V_{s}^{\prime}(x_{s})\,,\]
with
\[V_{s-1}(x_{s}) =\chi_{s-1}(x_{s})(\Theta_{s-1}^{c}\cosh_{s-1}+\Theta_{s-1}^{s}\sinh_{s-1})\,,\] \[V_{s}(x_{s}) =\Theta_{s}^{c}\,,\] \[V_{s-1}^{\prime}(x_{s}) =\chi_{s-1}(x_{s})z_{s-1}^{\prime}(x_{s})\left[(\kappa_{s-1}\Theta_{s-1}^{c}+\omega_{s-1}\Theta_{s-1}^{s})\cosh_{s-1}+(\omega_{s-1}\Theta_{s-1}^{c}+\kappa_{s-1}\Theta_{s-1}^{s})\sinh_{s-1}\right]\,,\] \[V_{s}^{\prime}(x_{s}) =(\kappa_{s}\Theta_{s}^{c}+\omega_{s}\Theta_{s}^{s})z_{s}^{\prime}(x_{s})\,.\]
From the above equations, we deduce that the coefficients \(\Theta_{i}^{c},\Theta_{i}^{s}\) are solutions of the following tridiagonal system
\[\begin{pmatrix}B_{0}&C_{0}&&0\\ A_{1}&\ddots&\ddots&\\ &\ddots&\ddots&C_{2m}\\ 0&&A_{2m+1}&B_{2m+1}\end{pmatrix}\begin{pmatrix}\Theta_{0}^{s}\\ \Theta_{0}^{c}\\ \vdots\\ \Theta_{m}^{s}\\ \Theta_{m}^{c}\end{pmatrix}=\begin{pmatrix}D_{0}\\ \vdots\\ D_{2m+1}\end{pmatrix}\,, \tag{11}\]
with \(D_{i}=0\) for \(i\notin\{2s-1,2s\}\), \(D_{2s-1}=D_{2s}=1\),
\[\begin{cases}A_{2i+1}&=(\omega_{i}\cosh_{i}+\kappa_{i}\sinh_{i})\chi_{i}z_{i}^ {\prime}(x_{i+1})-\kappa_{i+1}\sinh_{i}\chi_{i}z_{i+1}^{\prime}(x_{i+1})\,,\\ B_{2i+1}&=(\kappa_{i}\cosh_{i}+\omega_{i}\sinh_{i})\chi_{i}z_{i}^{\prime}(x_{i +1})-\kappa_{i+1}\cosh_{i}\chi_{i}z_{i+1}^{\prime}(x_{i+1})\,,\\ C_{2i+1}&=-\omega_{i+1}z_{i+1}^{\prime}(x_{i+1})\,,\\ A_{2i+2}&=\left[\kappa_{i}\cosh_{i}+\omega_{i}\sinh_{i}-\frac{\omega_{i}\cosh_{ i}+\kappa_{i}\sinh_{i}}{\sinh_{i}}\cosh_{i}\right]\chi_{i}z_{i}^{\prime}(x_{i+1})\,,\\ B_{2i+2}&=-\omega_{i+1}z_{i+1}^{\prime}(x_{i+1})\,,\\ C_{2i+2}&=\frac{\omega_{i}\cosh_{i}+\kappa_{i}\sinh_{i}}{\sinh_{i}}z_{i}^{ \prime}(x_{i+1})-\kappa_{i+2}z_{i+1}^{\prime}(x_{i+1})\,,\end{cases}\]
for \(i=0,...,m-1\), and \(B_{0}=0,C_{0}=1,A_{2m+1}=\sinh_{m},B_{2m+1}=\cosh_{m}\).
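Once the diagonals \(A\), \(B\), \(C\) and the right-hand side \(D\) of (11) have been assembled, the system can be solved with any banded solver. The sketch below is illustrative; it assumes the coefficient arrays are already computed and follow the layout of (11), and it unpacks the interleaved \((\Theta_{i}^{s},\Theta_{i}^{c})\) coefficients.

```python
# Illustrative solve of the tridiagonal system (11) with SciPy's banded solver.
import numpy as np
from scipy.linalg import solve_banded

def solve_theta(A, B, C, D):
    # A: sub-diagonal (A_1, ..., A_{2m+1}), B: main diagonal (B_0, ..., B_{2m+1}),
    # C: super-diagonal (C_0, ..., C_{2m}), D: right-hand side, as laid out in (11).
    n = len(B)                       # n = 2m + 2 unknowns
    ab = np.zeros((3, n), dtype=complex)
    ab[0, 1:] = C                    # super-diagonal
    ab[1, :] = B                     # main diagonal
    ab[2, :-1] = A                   # sub-diagonal
    theta = solve_banded((1, 1), ab, D)
    # The unknown vector interleaves the coefficients as
    # (Theta_0^s, Theta_0^c, ..., Theta_m^s, Theta_m^c).
    return theta[0::2], theta[1::2]  # (Theta^s, Theta^c)
```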
Using the continuity of \(V(x_{s})=\Theta_{s}^{c}\), the jump condition of \(V^{\prime}\) at \(x_{s}\) and the continuity of \(a(x_{s})\), the \(\mathcal{C}^{3}\) condition (Equation 6) reads
\[1-2\Theta_{s}^{c}\frac{\lim_{x\to x_{s}-}a^{\prime}(x)}{a(x_{s})}=-2\Theta_{s}^{c}\frac{\lim_{x\to x_{s}+}a^{\prime}(x)}{a(x_{s})}\,. \tag{12}\]
Equation 12 implies that \(a^{\prime}\) is not continuous at \(x_{s}\), unless \(a(x_{s})=0\). The condition cannot be imposed as an additional constraint on \(\Theta_{s}^{c}\), since its value is already fully determined by the tridiagonal system. It may however be imposed by choosing the correct model parameter to adjust the value of \(a\) at \(x_{s}\) along with its left and right derivative values.
## 4 Parameterizations
### Linear Bachelier
The linear Bachelier local variance consists in \(\alpha_{i}=0\) and may be rewritten using values at the knots \(\sigma_{i}\) as
\[a(x)=\frac{x-x_{i}}{x_{i+1}-x_{i}}(\sigma_{i+1}-\sigma_{i})+\sigma_{i}\quad \text{ for }x_{i}\leq x<x_{i+1}\,,\quad i=0,...,m\,, \tag{13}\]
where the parameters \(\sigma_{i}>0\).
It corresponds to the parameterization studied in [10], where it is shown that the local variance function must not be \(\mathcal{C}^{1}\) at \(x=x_{s}\) but must instead follow the \(\mathcal{C}^{3}\) condition (Equation 12) in order to avoid a spurious spike at \(x=x_{s}\). Under the linear Bachelier local variance, the condition reads
\[1-2\Theta_{s}^{c}\frac{\sigma_{s}-\sigma_{s-1}}{(x_{s}-x_{s-1})\sigma_{s}}=-2 \Theta_{s}^{c}\frac{\sigma_{s+1}-\sigma_{s}}{(x_{s+1}-x_{s})\sigma_{s}}\,,\]
or equivalently
\[\sigma_{s}=\frac{2\Theta_{s}^{c}\left(\frac{\sigma_{s-1}}{x_{s}-x_{s-1}}+ \frac{\sigma_{s+1}}{x_{s+1}-x_{s}}\right)}{2\Theta_{s}^{c}\left(\frac{1}{x_{s }-x_{s-1}}+\frac{1}{x_{s+1}-x_{s}}\right)-1}\,. \tag{14}\]
This is not a linear problem, as \(\Theta_{s}^{c}\) depends on \(\sigma_{s}\) through \(\Theta_{s+1}^{c},\Theta_{s+1}^{s}\) in a non-linear way (Equations 9 and 10). Starting with the algorithm described in Section 3 to compute \(\Theta^{c},\Theta^{s}\), using Equation 14 with \(\Theta_{s}^{c}\approx V_{\mathsf{truncket}}(x_{s})\) as initial guess for \(\sigma_{s}\), we may however apply the following iteration
* Update \(\sigma_{s}\) through Equation 14.
* Recalculate \(\Theta_{i}^{c}\) and \(\Theta_{i}^{s}\) for \(i=0,...,m\) by solving the updated tridiagonal system.
Three iterations are enough in practice.
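A sketch of this iteration is given below; the function `solve_thetas` is a hypothetical stand-in for the algorithm of Section 3 (assembling and solving the tridiagonal system (11) for the current piecewise-linear local variance and returning the \(\Theta_{i}^{c}\)), and only the update (14) is spelled out.

```python
# Illustrative fixed-point iteration enforcing the C^3 condition via Equation (14).
def calibrate_sigma_s(sigmas, x, s, solve_thetas, n_iter=3):
    # sigmas: local variance values at the knots x; s: index of the knot x_s = X(0);
    # solve_thetas: hypothetical routine solving the tridiagonal system (11).
    h_l, h_r = x[s] - x[s - 1], x[s + 1] - x[s]
    theta_c = solve_thetas(sigmas)             # Theta^c from the initial guess
    for _ in range(n_iter):
        t = theta_c[s]
        num = 2.0 * t * (sigmas[s - 1] / h_l + sigmas[s + 1] / h_r)
        den = 2.0 * t * (1.0 / h_l + 1.0 / h_r) - 1.0
        sigmas[s] = num / den                  # update sigma_s via Equation (14)
        theta_c = solve_thetas(sigmas)         # re-solve the tridiagonal system
    return sigmas, theta_c
```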
### Linear Black
The linear Black local variance model is defined by \(\gamma_{i}=0\). The local variance function may be rewritten using values at the knots \(\sigma_{i}\) as
\[a(x)=\left(\frac{x-x_{i}}{x_{i+1}-x_{i}}(\sigma_{i+1}-\sigma_{i})+\sigma_{i} \right)x\quad\text{for }x_{i}\leq x<x_{i+1}\,,\quad i=0,...,m\,, \tag{15}\]
where the parameters \(\sigma_{i}>0\). Interestingly, the \(\mathcal{C}^{3}\) condition (Equation 12) is also given by Equation 14.
### Positive quadratic B-spline
A B-spline parameterization with positive coefficients implies \(a\) positive. Furthermore, Equation 12 imposes a double knot at \(x=x_{s}\) (because the derivative of \(a\) is not continuous there). We thus consider
\[a(x)=\sum_{i=1}^{m+3}\lambda_{i}B_{i,3}(x) \tag{16}\]
where \(\lambda_{i}>0\) and \(B_{i,3}\) is the quadratic basis spline with knots \(\mathbf{t}=(L,L,L,x_{1},x_{2},...,X(0),X(0),...,x_{m},U,U,U)\). In particular, we have \(t_{s+2}=t_{s+3}=X(0)\). Using the B-spline derivative identity [13] and the fact that the order of the B-spline is \(3\), we obtain
\[a^{\prime}(x)=2\sum_{i}\frac{\lambda_{i}-\lambda_{i-1}}{t_{i+2}-t_{i}}B_{i-1, 2}(x)\,,\]
and the \(\mathcal{C}^{3}\) condition reads
\[\sum_{i}\lambda_{i}B_{i,3}(x_{s})=4\Theta_{s}^{c}\left(\frac{\lambda_{s+1}- \lambda_{s}}{t_{s+3}-t_{s+1}}B_{s,2}(x_{s}^{-})-\frac{\lambda_{s+2}-\lambda_{ s+1}}{t_{s+4}-t_{s+2}}B_{s+1,2}(x_{s}^{+})\right).\]
Using the definitions of \(B_{i,3}\) and \(B_{i,2}\) we obtain
\[\lambda_{s+1}=4\Theta_{s}^{c}\left(\frac{\lambda_{s+1}-\lambda_{s}}{t_{s+3}-t _{s+1}}-\frac{\lambda_{s+2}-\lambda_{s+1}}{t_{s+4}-t_{s+2}}\right),\]
or equivalently
\[\lambda_{s+1}=\frac{4\Theta_{s}^{c}\left(\frac{\lambda_{s}}{t_{s+3}-t_{s+1}}+\frac{\lambda_{s+2}}{t_{s+4}-t_{s+2}}\right)}{4\Theta_{s}^{c}\left(\frac{1}{t_{s+3}-t_{s+1}}+\frac{1}{t_{s+4}-t_{s+2}}\right)-1}\,. \tag{17}\]
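In code, the update (17) mirrors the update (14) of the linear models. The small sketch below is illustrative and assumes that the coefficient array `lam` and the knot vector `t` are indexed exactly as in the text, so that the relative offsets match Equation (17).

```python
# Illustrative update of the B-spline coefficient at the double knot, Equation (17).
def update_lambda_s1(lam, t, theta_s_c, s):
    # lam: B-spline coefficients, t: knot vector (indexed as in the text);
    # theta_s_c: current value of Theta_s^c.
    h_l = t[s + 3] - t[s + 1]
    h_r = t[s + 4] - t[s + 2]
    num = 4.0 * theta_s_c * (lam[s] / h_l + lam[s + 2] / h_r)
    den = 4.0 * theta_s_c * (1.0 / h_l + 1.0 / h_r) - 1.0
    lam[s + 1] = num / den
    return lam
```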
As an illustration, we consider the same example as in [10]: we fit the quadratic LVG model to 10 option prices of strikes (0.85, 0.90, 0.95, 1, 1.05, 1.1, 1.15, 1.2, 1.3, 1.4), obtained by the Black-Scholes model with constant volatility \(\sigma_{B}=20\%\), time to maturity \(T=0.25\) and forward price 1.025. We know that the theoretical distribution is a lognormal distribution. A straightforward \(\mathcal{C}^{1}\) quadratic B-spline leads to a large spike in the probability density implied from the calibrated LVG model (Figure 1). Adding the \(\mathcal{C}^{3}\) condition through an additional B-spline knot recovers a smooth implied probability density.
## 5 Calibration
### Error measure
The calibration of a single maturity consists in finding the parameters (the \(\boldsymbol{\alpha}\) for the linear models, or \(\lambda\) for the quadratic B-spline) such that the function \(C(x,T)\), solution of the Dupire PDDE fits the market option prices \((\hat{C}_{i})_{i=1,...,n}\) of respective strikes \((K_{i})_{i=1,...,n}\) according to an appropriate measure. A common practice is to perform a least-squares minimization of the error measure \(E\) defined by
\[E=\sum_{i=1}^{m}\mu_{i}^{2}\,(\sigma(\alpha,x_{i})-\hat{\sigma}_{i})^{2}\,, \tag{18}\]
with \(\alpha_{i}>0\) for \(i=1,...,m\), where \(\sigma(\alpha,x)\) is the implied volatility corresponding to the option prices obtained with the piecewise-linear local variance gamma model, \(\hat{\sigma}_{i}\) is the market implied volatility at strike \(x_{i}\), and \((\mu_{i})_{i=1,...,m}\) are weights associated with the accuracy of the fit at each point.
In order to solve this non-linear least-squares problem, we will use the Levenberg-Marquardt algorithm as implemented by Klare and Miller (2013). The box constraints \(\alpha_{i}>0\) can be added in a relatively straightforward manner to any Levenberg-Marquardt algorithm, through the projection technique described in (Kanzow et al., 2004), or through a variable transform from \(\mathbb{R}\) to a subset of \(\mathbb{R}^{+}\) (for example through the function \(x\to x^{2}+\epsilon\) with some small positive \(\epsilon\)).
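A minimal calibration sketch along these lines is given below. It is illustrative only: `model_prices` is a hypothetical stand-in for the LVG pricer of Section 3, and the positivity constraint is handled through the \(x\to x^{2}+\epsilon\) transform mentioned above rather than through projection.

```python
# Illustrative weighted least-squares calibration of the price objective (19).
import numpy as np
from scipy.optimize import least_squares

def calibrate(model_prices, strikes, market_prices, weights, x0, eps=1e-8):
    # Positivity of the model parameters is enforced by params = u**2 + eps.
    def residuals(u):
        params = u**2 + eps
        return weights * (model_prices(params, strikes) - market_prices)

    u0 = np.sqrt(np.maximum(np.asarray(x0, dtype=float) - eps, 0.0))
    res = least_squares(residuals, u0, method="lm")   # Levenberg-Marquardt
    return res.x**2 + eps                             # map back to model parameters
```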
Figure 1: Implied probability density for the quadratic LVG model with or without the \(\mathcal{C}^{3}\) condition (Equation 6), fitted to a Black-Scholes model with constant volatility \(\sigma_{B}=20\%\), time to expiry \(T=0.25\) and forward 1.025.
The implied volatility for a given option price may be found efficiently and accurately through the algorithm of Jackel (2015). In general, we prefer to solve an almost equivalent formulation in terms of option prices, using the error measure \(E_{V}\) defined by
\[E_{V}=\sum_{i=1}^{m}w_{i}^{2}\left(C(\alpha,x_{i})-\hat{C}_{i}\right)^{2}, \tag{19}\]
with \(C(\alpha,x)\) being the local variance gamma option price with parameter \(\alpha\) and strike \(x\), and the capped inverse Vega weights \(w_{i}\) given by
\[w_{i}=\min\left(\frac{1}{\nu_{i}},\frac{10^{6}}{X(0)}\right)\mu_{i}, \tag{20}\]
where \(\nu_{i}=\frac{\partial\hat{C}_{i}}{\partial\sigma}\) is the Black-Scholes Vega corresponding to the market option price \(\hat{C}_{i}\), and \(10^{6}\) is a cap applied to avoid numerical issues related to the limited machine accuracy (see Le Floc'h (2021); Le Floc'h and Oosterlee (2019) for the justification).
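The weights (20) are cheap to precompute. The sketch below is illustrative; it uses the undiscounted Black (Black-76) vega, which is an assumption of this sketch rather than a prescription of the paper.

```python
# Illustrative capped inverse-Vega weights, Equation (20).
import numpy as np
from scipy.stats import norm

def capped_weights(forward, strikes, vols, T, mu, cap_scale=1e6):
    # Undiscounted Black (Black-76) vega: dC/dsigma = F * phi(d1) * sqrt(T).
    d1 = (np.log(forward / strikes) + 0.5 * vols**2 * T) / (vols * np.sqrt(T))
    vega = forward * norm.pdf(d1) * np.sqrt(T)
    # Cap the inverse vega to avoid machine-accuracy issues far from the money.
    return np.minimum(1.0 / vega, cap_scale / forward) * mu
```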
### Exact interpolation
Sometimes, it is desirable to interpolate a given set of reference prices nearly exactly. This is typically the case when the reference prices come from some prior model. We apply the same least-square minimization but choose the number of free parameters to be equal to the number of reference prices.
#### 5.2.1 Linear models
For the linear models, this means using \(m=n\) and setting \(\alpha_{0}=\alpha_{1}\) and \(\alpha_{m+1}=\alpha_{m}\) to model a flat extrapolation. In general, the market strikes will not include \(X(0)\). In this case, \(X(0)\) must be added to the knots \(\{x_{i}\}_{i=1,...,m}\) used in the local variance gamma representation. This adds one more parameter \(\alpha_{s}\) to the representation, where \(s\) is the index corresponding to \(X(0)\) in the set of knots. The value of \(\alpha_{s}\) is not free; it is given by the \(\mathcal{C}^{3}\) condition (Equation 6) and enforced through the iterative procedure described in the previous sections.
#### 5.2.2 B-spline knots locations
For the linear Bachelier and Black parameterization, choosing the knots at the market strikes works well. The situation is more complex for the quadratic B-spline parameterization.
Let \((K_{i})_{i=1,...,n}\) be the market option strikes. Let the index \(i_{F}\) be such that \(K_{i_{F}}\leq F<K_{i_{F}+1}\). We may:
* place the knots at the market strikes (labeled "Strikes" in the figures) \[\mathbf{t}=\begin{cases}\left(L,L,L,K_{1},...,K_{i_{F}},F,F,K_{i_{F}+1},...,K _{n},U,U,U\right)&\text{if }K_{i_{F}}\neq F\\ \left(L,L,L,K_{1},...,K_{i_{F}},F,K_{i_{F}+1},...,K_{n},U,U,U\right)&\text{if }K_{i_{F}}=F \end{cases},\] The dimension of \(\lambda\) is then \(n_{\lambda}=n+5\) if \(F\neq K_{i_{F}}\) and \(n+4\) if \(F=K_{i_{F}}\). The change in the number of dimensions suggests that the interpolation may change significantly when the forward price moves across a market strike.
* place the knots in the middle of the market strikes. According to (De Boor, 1978, p. 61), the \(\mathcal{C}^{1}\) quadratic spline is then the solution of a diagonally dominant tridiagonal system, which increases the stability and reduces the oscillations of the interpolation. There are however several ways to do it:
* choose the direct mid-points (labeled "Mid-Strikes") \[\mathbf{t}=\begin{cases}\left(L,L,L,\frac{K_{1}+K_{2}}{2},...,\frac{K_{i_{F}-1}+K_{i_{F}}}{2},F,F,\frac{K_{i_{F}}+K_{i_{F}+1}}{2},...,\frac{K_{n-1}+K_{n}}{2},U,U,U\right)&\text{if }F<\frac{K_{i_{F}}+K_{i_{F}+1}}{2}\\ \left(L,L,L,\frac{K_{1}+K_{2}}{2},...,\frac{K_{i_{F}}+K_{i_{F}+1}}{2},F,F,\frac{K_{i_{F}+1}+K_{i_{F}+2}}{2},...,\frac{K_{n-1}+K_{n}}{2},U,U,U\right)&\text{if }F>\frac{K_{i_{F}}+K_{i_{F}+1}}{2}\\ \left(L,L,L,\frac{K_{1}+K_{2}}{2},...,\frac{K_{i_{F}-1}+K_{i_{F}}}{2},F,F,\frac{K_{i_{F}+1}+K_{i_{F}+2}}{2},...,\frac{K_{n-1}+K_{n}}{2},U,U,U\right)&\text{if }F=\frac{K_{i_{F}}+K_{i_{F}+1}}{2}\end{cases},\] The dimension of \(\lambda\) is then \(n_{\lambda}=n+4\) if \(F\neq\frac{K_{i_{F}}+K_{i_{F}+1}}{2}\) and \(n+3\) if \(F=\frac{K_{i_{F}}+K_{i_{F}+1}}{2}\).
* choose the mid-points, excluding the point closest to the forward price \(F\) (labeled "Mid-X") \[\mathbf{t}=\left(L,L,L,\frac{K_{1}+K_{2}}{2},...,\frac{K_{i_{F}-1}+K_{i_{F}}}{2},F,F,\frac{K_{i_{F}+1}+K_{i_{F}+2}}{2},...,\frac{K_{n-1}+K_{n}}{2},U,U,U\right),\] The dimension of \(\lambda\) is then \(n_{\lambda}=n+3\).
* choose the mid-points, excluding the point closest to the forward price and placing the first and last strike in the middle of two knots (labeled "Mid-XX") \[\mathbf{t}=\left(L,L,L,\frac{3K_{1}-K_{2}}{2},\frac{K_{1}+K_{2}}{2},...,\frac{K_{i_{F}-1}+K_{i_{F}}}{2},F,F,\frac{K_{i_{F}+1}+K_{i_{F}+2}}{2},...,\frac{K_{n-1}+K_{n}}{2},\frac{3K_{n}-K_{n-1}}{2},U,U,U\right),\] The dimension of \(\lambda\) is then \(n_{\lambda}=n+5\).
* use a uniform discretization of \([K_{1},K_{n}]\) composed of \(n+1\) points, shifted such that the forward is exactly part of the knots; we then have \(n_{\lambda}=n+5\).
In each of those cases, we make sure to add the forward price as a double knot, as well as the boundaries \(L,U\). The dimension of \(\lambda\) implied by the knots is larger than the number of market strikes. We choose the extra parameters as follows:
* if \(n_{\lambda}=n+5\), we set \(\lambda_{1}=\lambda_{2}=\lambda_{3}\), \(\lambda_{n+3}=\lambda_{n+4}=\lambda_{n+5}\) and \(\lambda_{i_{F}+3}\) is obtained from \(\lambda_{i_{F}+2}\) and \(\lambda_{i_{F}+4}\).
* if \(n_{\lambda}=n+4\), we set \(\lambda_{1}=\lambda_{2}=\lambda_{3}\), \(\lambda_{n+3}=\lambda_{n+4}\) and \(\lambda_{i_{F}+3}\) is obtained from \(\lambda_{i_{F}+2}\) and \(\lambda_{i_{F}+4}\).
* if \(n_{\lambda}=n+3\), we set \(\lambda_{1}=\lambda_{2}\), \(\lambda_{n+2}=\lambda_{n+3}\) and \(\lambda_{i_{F}+2}\) is obtained from \(\lambda_{i_{F}+1}\) and \(\lambda_{i_{F}+3}\).
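To make the "Mid-XX" construction above concrete, the following sketch builds the corresponding knot vector from a set of market strikes; the function name, the placeholder boundaries and the handling of a forward coinciding with a quoted strike are illustrative choices rather than part of the original specification.

```python
import numpy as np

def mid_xx_knots(strikes, forward, lower, upper):
    """Build the "Mid-XX" quadratic B-spline knot vector from market strikes.

    Assumes lower < K_1 <= forward < K_n < upper; a forward exactly equal to a
    quoted strike is not treated specially here (a simplification).
    """
    K = np.asarray(strikes, dtype=float)
    i_F = np.searchsorted(K, forward, side="right") - 1  # K[i_F] <= forward < K[i_F+1]
    mids = 0.5 * (K[:-1] + K[1:])                        # mid-points of consecutive strikes
    interior = np.concatenate((
        [0.5 * (3.0 * K[0] - K[1])],    # so that K_1 sits in the middle of two knots
        mids[:i_F],                     # mid-points below the forward
        [forward, forward],             # double knot at the forward, replacing the straddling mid-point
        mids[i_F + 1:],                 # mid-points above the forward
        [0.5 * (3.0 * K[-1] - K[-2])],  # so that K_n sits in the middle of two knots
    ))
    return np.concatenate(([lower] * 3, interior, [upper] * 3))

K = [88.77, 92.85, 93.38, 99.37, 107.99, 120.29, 122.03, 123.9, 134.71, 135.43]  # set A
t = mid_xx_knots(K, forward=101.0, lower=44.0, upper=271.0)  # placeholder boundaries L, U
print(len(t))  # n + 8 knots, hence n_lambda = n + 5 quadratic B-spline coefficients
```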
In order to assess the various knot candidates, we consider the same example as in the previous section, but using a few different random sets of 10 strikes in the interval [85,140] and a forward price \(F=101\) (Table 1).
The uniform discretization may2 result in strong oscillations due to overfitting in places where no market strike is quoted as in the set A (Figure 2(a)).
Footnote 2: In practice, market strikes are not randomly distributed, but according to multiples of a minimum strike width, with more strikes near the money. The uniform discretization may still be relevant if some regularization is added to the objective of the minimizer.
The "Mid-Strikes" choice leads to a strong oscillation around the forward in set B (Figure 2(b)). The "Mid-X" choice produces a somewhat awkward shape on set B. When the forward is very close to some of the knots, as in set C, the "Strikes" and "Mid-Strikes" choices lead to a density with a sharp gradient near the forward, an undesirable feature (Figure 2(c)). When the forward is part of the market strikes, a small wiggle is visible at the forward for "Strikes" and "Mid-Strikes" (Figure 2(d)).
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Set & \(K_{1}\) & \(K_{2}\) & \(K_{3}\) & \(K_{4}\) & \(K_{5}\) & \(K_{6}\) & \(K_{7}\) & \(K_{8}\) & \(K_{9}\) & \(K_{10}\) \\ \hline A & 88.77 & 92.85 & 93.38 & 99.37 & 107.99 & 120.29 & 122.03 & 123.9 & 134.71 & 135.43 \\ B & 85.02 & 101.92 & 103.55 & 114.45 & 121.85 & 123.69 & 125.07 & 125.58 & 131.63 & 133.86 \\ C & 98.07 & 100.93 & 101.06 & 106.88 & 109.12 & 110.93 & 119.76 & 119.83 & 132.19 & 138.27 \\ D & 85.00 & 90.00 & 95.00 & 100.00 & 101.00 & 105.00 & 110.00 & 115.00 & 120.00 & 130.00 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Sets of market strikes with a Black-Scholes volatility of 20% for a maturity \(T=0.25\) and forward \(F=101\).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Set & Strikes & Mid-Strikes & Mid-X & Mid-XX & Uniform \\ \hline A & 9.4e-8 & 6.0e-3 & 5.2e-3 & 4.1e-8 & 4.8e-9 \\ B & 9.9e-9 & 2.8e-3 & 5.8e-1 & 2.9e-6 & 9.9e-3 \\ C & 1.0e-6 & 1.9e-3 & 1.0e-2 & 1.1e-8 & 1.4e-3 \\ D & 4.1e-4 & 8.1e-2 & 4.1e-2 & 2.6e-5 & 5.0e-7 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Root mean square error in implied volatilities % for various choices of B-spline knots. The reference volatilities are flat 20%.
Finally, it is also interesting to look at the overall root mean square error in implied volatilities for the different choices (Table 2). The "Strikes" and "Mid-XX" choices consistently result in a near-perfect3 fit.
Footnote 3: The error in volatility is always below one basis point.
Overall, the "Mid-XX" knots lead to the most stable probability density along with an excellent fit.
#### 5.2.3 Many quotes, few parameters
In (Le Floc'h, 2021), regularization is employed to ensure a smooth implied probability density when fitting to many, possibly noisy, market option quotes. An interesting, simpler alternative is to use few knots/few parameters instead of as many parameters as market quotes: by limiting the number of free parameters, we may avoid overfitting issues, and at the same time we reduce the number of dimensions of the problem, thus increasing stability and performance. Where to place the knots then? Based on the previous observations, we may choose knots such that the market strikes are equidistributed. Concretely, we use
\[\bar{\mathbf{K}}=\{K_{1},K_{1+j},K_{1+2j},...,K_{n}\}\,\]
where \(j=n/m\) with \(m\leq n\) and use the "Mid-XX" knots on top of \(\bar{\mathbf{K}}\). It may happen that many market strikes are quoted in a narrow range, in which case the set could be adjusted with a minimum strike width, although we did not need this tweak on the market examples presented below.
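A minimal sketch of this subsampling is given below, assuming the quoted strikes are sorted and always keeping the first and last strike; rounding the indices is one possible way to handle a non-integer spacing \(j=n/m\), and the minimum-strike-width adjustment mentioned above is omitted.

```python
import numpy as np

def equidistributed_strikes(strikes, m):
    """Keep m of the n quoted strikes, approximately equidistributed in index,
    always including the first and the last strike."""
    K = np.asarray(strikes, dtype=float)
    idx = np.unique(np.linspace(0, len(K) - 1, num=m).round().astype(int))
    return K[idx]

# hypothetical dense strike grid between 85 and 140, reduced to 10 representative strikes
K_bar = equidistributed_strikes(np.arange(85.0, 141.0, 2.5), m=10)
```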
We consider several different underlying assets: SPX500 expiring on March 24, 2017 as of March 16, 2017 (one week - 1w) and on March 7, 2018 as of February 5, 2018 (one month - 1m), TSLA of maturity July 20, 2018 as of June 15, 2018 (1m), AAPL with expiry in 4 days as of October 28, 2013 (4d) from Alexiou et al. (2021),
Figure 2: Implied probability density of the calibrated quadratic LVG model using different sets of knots.
as well as the AUD/NZD currency pair of maturity July 9, 2014 as of July 2, 2014 (1w) from Wystup (2018). There is an apparent focus on short maturities, as those are more difficult to capture, with a variety of smile shapes, some exhibiting multi-modality. The latter are particularly challenging for parametric models such as SVI (Gatheral, 2006), SABR (Hagan et al., 2002), or even polynomial stochastic collocation (Le Floc'h and Oosterlee, 2019, 2019). As the implied volatility smile flattens for long maturities, those are much easier to fit.
The AUD/NZD foreign exchange smile from Wystup (2018) is useful to see how the local variance gamma model behaves on a minimalistic example: indeed, as is usual on the foreign exchange options market, it involves only five options quotes. In this case we use as many parameters as market quotes. Figure 7(b) shows that the density implied by the quadratic LVG model is smooth, but the one implied by the linear Bachelier LVG model exhibits some sharp unnatural gradients near the money. Indeed, the linear LVG model leads only to a \(\mathcal{C}^{0}\) probability density. Such gradients are then inevitable when the number of quotes is small. On this example, the (unconstrained) SVI model is known to lead to some negative probability density.
In all the examples considered so far, the fit in terms of implied volatilities is excellent, and the implied probability density is smooth, without spurious peaks, although the number of points considered (\(n=10\)) is somewhat arbitrary.
#### 5.2.4 Challenging examples of exact interpolation
We consider the manufactured examples of Jäckel (2014) presented in Table A5. In the first example, a cubic spline interpolation of option prices is known to produce oscillations in the implied volatility, while a
Figure 4: Quadratic LVG model with 10 points calibrated to SPX500 options of maturity 1m.
Figure 3: Quadratic LVG model with 10 points calibrated to SPX500 options of maturity 1w.
cubic spline on the volatilities introduces spurious arbitrages. In the second example, some of the quotes are at the limit of arbitrage.
On those examples, some care needs to be taken in the choice of the boundaries \(L\) and \(U\): they must be far away enough. We pick \(L=K_{1}/2\) and \(U=2K_{m}\), where \(K_{1}\) is the smallest quoted strike and \(K_{m}\) the largest.
Figure 5: Quadratic LVG model with 10 points calibrated to TSLA options of maturity 1m.
Figure 6: Quadratic LVG model with 10 points calibrated to AAPL options of maturity 4d.
Figure 7: Quadratic LVG model with \(5\) points calibrated to AUD/NZD options of maturity 1w.
On the example case I, the fit is nearly exact for the linear Bachelier, linear Black and quadratic LVG models, and there are no oscillations or wiggles in the implied volatility interpolation (Figure 8(a)). The corresponding implied probability density is of course smoothest with the quadratic LVG model (Figure 8(b)).
On the example case II, the quadratic LVG model does not allow for an exact fit. The root mean square error is around 4 basis points (Table 3), but the fit is qualitatively good (Figure 9(a)). The near-arbitrages force the probability density to go to almost zero, which conflicts with the \(\mathcal{C}^{1}\) continuity constraints of the parameterized B-spline density. The density however stays smooth, and looks more natural than the clear overfit of the linear LVG models (Figure 9(b)).
Our choice of range for the \(\alpha\) parameter does not allow the linear Bachelier model to fit as well as the linear Black model. Increasing the range would make the two implied probability densities even more similar.
Figure 8: LVG models calibrated to the example case I of Jäckel (2014).
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & Case I & Case II \\ \hline Linear Bachelier & 5.00e-13 & 4.54e-6 \\ Linear Black & 3.64e-12 & 8.04e-8 \\ Quadratic & 2.25e-12 & 4.02e-4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Root mean square error (RMSE) in implied volatilities of the LVG models calibrated to the market data of Table A5.
Figure 9: LVG models calibrated to the example case II of Jäckel (2014).
## 6 Conclusion
The quadratic local variance gamma model with a small number of knots leads to smooth implied probability densities on a variety of market option quotes, while providing an excellent fit in terms of implied volatilities, even on challenging examples of non-convex implied volatilities or multi-modal probability densities.
It thus constitutes an interesting alternative to the non-parametric approaches of the linear or quadratic local variance gamma models or of the Andreasen-Huge one-step local volatility model, which all require a non-trivial choice of regularization constant to produce a smooth implied probability density.
In a similar fashion to Falck and Deryabin (2017), it may be used as a generic parameterization with a fixed, small number of parameters, using only 3 or 5 points. In contrast to Falck and Deryabin (2017), where the linear parameterization leads to edges in the implied probability density and where the problem of the discontinuity at the forward is not dealt with, the probability density of the quadratic LVG model with a reduced number of points will be smooth.
Further research may explore the use of Fourier transforms to recover efficient pricing formulae for European options under the local variance gamma model with a more general local variance function.
|
2309.02549
|
Post disruption reconnection event driven by a runaway current
|
The role of a runaway current in a post disruption plasma is investigated
through numerical simulations in an asymmetric magnetic reconnection event.
While the runaways do not alter the linear growth of the island, they lead to a
rotation of the island in the poloidal direction as found in [C. Liu et al.
Physics of Plasmas 27, 092507 (2020)]. The role of a microlayer smaller than
the resistive one is thoroughly investigated. While the resistive layer
controls the transition of the island from the linear to the nonlinear stage,
the microlayer width causes the runaways to become nonlinear as soon as the
size of the island exceeds it. Moreover, this transition of the runaway
electrons to the nonlinear phase is accompanied by a drastic redistribution of
runaways within the island with respect to the symmetric case. The influence of
the electron skin depth on the linear evolution is also taken into account.
Finally, nonlinear simulations show that the rotation frequency tends toward
zero when the island saturates.
|
L. Singh, D. Borgogno, F. Subba, D. Grasso
|
2023-09-05T19:35:53Z
|
http://arxiv.org/abs/2309.02549v1
|
# Post disruption reconnection event driven by a runaway current
###### Abstract
The role of a runaway current in a post disruption plasma is investigated through numerical simulations in an asymmetric magnetic reconnection event. While the runaways do not alter the linear growth of the island, they lead to a rotation of the island in the poloidal direction as found in [C. Liu et al. Physics of Plasmas 27, 092507 (2020)]. The role of a microlayer smaller than the resistive one is thoroughly investigated. While the resistive layer controls the transition of the island from the linear to the nonlinear stage, the microlayer width causes the runaways to become nonlinear as soon as the size of the island exceeds it. Moreover, this transition of the runaway electrons to the nonlinear phase is accompanied by a drastic redistribution of runaways within the island with respect to the symmetric case. The influence of the electron skin depth on the linear evolution is also taken into account. Finally, nonlinear simulations show that the rotation frequency tends toward zero when the island saturates.
## I Introduction
Runaway electrons are a major cause of concern for future fusion devices, ITER [1] foremost among them. Because of the decrease in electron collision frequency with increasing velocity, electrons subjected to a strong electric field can experience unlimited "runaway" acceleration. In tokamaks, runaway electrons can be produced in the disruptions because of the strong inductive electric field formed when the thermal energy of the plasma is rapidly lost. The runaway population can grow exponentially (avalanche mechanism) due to collisions of relativistic electrons with low-energy electrons.
It is estimated that the impact of runaway electrons on the walls of a machine such as ITER can cause severe damage. It is therefore important to improve the understanding of plasma dynamics during the entire lifetime of the RE beam in order to mitigate its effects. Recently, a study [2] was carried out to model the RE benign termination observed in the JET shot 95135 [3]. In particular, during this shot, MHD activity caused the mitigation of RE through a broad deposition of the beam on plasma-facing components. It has been demonstrated in [2], through numerical simulations done with JOREK [4], that this RE suppression is caused by the stochastization of the magnetic field lines, which causes the crash of the RE current.
In this perspective, it also becomes important to understand the interaction mechanism between the runaway population generated during a disruption and the post-disruption plasma. That is, to understand the stability properties of such a plasma, in which the plasma current is replaced by the runaway current. More than a decade ago this problem was studied in [5] by considering the role of a runaway electron current on the spontaneous development of the magnetic reconnection instability in a resistive, weakly unstable plasma. It was found that when the plasma current is totally carried by runaway electrons, two-dimensional (2D) magnetic perturbations, with no dependence on the spatial coordinate along the guiding magnetic field, which resonate on the surface where the current peaks, significantly increase the saturated amplitude of the reconnected region (magnetic island) compared to the standard case without runaway electrons, but do not affect the linear growth rates. More recently [6], some aspects, which had remained unclear in [5] because of the periodic equilibrium magnetic configuration adopted, were clarified with new numerical simulations assuming a different equilibrium. Indeed, the periodic equilibrium configuration implied the presence of two magnetic islands, one at the center of the integration domain and one at the boundaries. When the magnetic islands grew too large in the nonlinear phase they began to interact, affecting the saturation results.
In 2020, Liu et al. [7] extended this analysis by considering the linear evolution of 2D asymmetric perturbations, whose resonant surfaces do not lie at the current peaks. Two peculiar features were highlighted: firstly, the poloidal rotation of the magnetic island, the frequency of which depends on the derivative of the runaway electron current on the resonant surface, and secondly, the existence of a microscopic layer in which the current density of runaway electrons is concentrated. Since the width of these layers depends on the inverse of the runaway electron velocity, which is of the order of the speed of light, it is expected that they can thin down to microscales where kinetic effects can make a non-negligible contribution to plasma dynamics.

This paper is focused on the self-consistent study of the mutual interaction between runaway electrons and asymmetric magnetic reconnection in nonlinear regimes. The analysis is performed by considering a two-fluid, collisional plasma model, where the effects of the electron inertia and the electron temperature, which introduce the electron skin depth and the ion-sound Larmor radius, respectively, are taken into account. Although these scales are small in a post-disruption scenario due to the low plasma temperature and the small electron mass, they could become relevant in the presence of localized current layers, affecting the evolution of the global process. The linear analysis of a RE-driven magnetic reconnection reproduced the results obtained in [5]: the RE do not influence the linear growth rates of the island in either the symmetric or the asymmetric case, while the island width at saturation is 50% higher with RE than without RE. With respect to the symmetric case, an asymmetric current profile leads to the island rotation and to the presence of a microlayer on the RE current distribution at the X-point, consistent with the results shown in [7]. In addition, the electron skin depth affects the thermal electron distribution at the X-point of the island. The nonlinear evolution of asymmetric modes leads to the generation of a spiral-like structure inside the island, whereas the island rotation frequency tends towards zero, locking at saturation.
The paper is organized as follows. In Section II the model equations are introduced. In Section III the SCOPE3D numerical tool is presented, and its benchmark and verification test is shown in Section IV. Section V focuses on the linear and nonlinear results for asymmetric modes. Conclusions close the paper.
## II Model equations
In our analysis we extend the reduced, purely collisional model adopted in [5] by considering, as in Ref. [6], the contributions of the electron mass \(m_{e}\) and the electron temperature \(T_{e}\) through new terms in the plasma Ohm's law proportional to the electron skin depth \(d_{e}=c/\sqrt{4\pi n_{e}e^{2}/m_{e}}\) and the ion sound Larmor radius \(\rho_{s}=\sqrt{(T_{e}/m_{i})}/\omega_{ci}\), with \(\omega_{ci}\) the ion gyrofrequency, respectively. Furthermore, here we consider a three-dimensional slab geometry, which also allows us to deal with perturbations that depend on the coordinate along the guiding magnetic field direction. The equations, normalized on the Alfvén time and on the characteristic length of variation of the equilibrium magnetic field, are [6]:
\[\frac{\partial\psi}{\partial t}+[\varphi,\psi]+d_{e}^{2}\frac{ \partial J}{\partial t}+d_{e}^{2}[\varphi,J]-\rho_{s}^{2}[U,\psi]\] \[+\eta(J-J_{RE})+\frac{\partial\varphi}{\partial z}+\rho_{s}^{2} \frac{\partial U}{\partial z}=0 \tag{1}\]
\[\frac{\partial U}{\partial t}+[\varphi,U]-[J,\psi]-\frac{\partial J}{ \partial z}=0 \tag{2}\]
\[\frac{\partial J_{RE}}{\partial t}+[\varphi,J_{RE}]+\frac{c}{v_{A}}([\psi,J_{ RE}]-\frac{\partial J_{RE}}{\partial z})=0 \tag{3}\]
\[J=-\nabla_{\perp}^{2}\psi,\qquad U=\nabla_{\perp}^{2}\varphi \tag{4}\]
where \(\nabla_{\perp}^{2}=\partial_{x}^{2}+\partial_{y}^{2}\) and \([f,g]=\partial_{x}f\partial_{y}g-\partial_{x}g\partial_{y}f\). The model assumes a magnetic field \(B=B_{0}\mathbf{e_{z}}+\nabla\psi\times\mathbf{e_{z}}\), where \(B_{0}\) represents the uniform magnetic guide field and is set to 1, and a velocity field \(v=-\nabla\varphi\times\mathbf{e_{z}}\). The fields \(\psi\) and \(\varphi\) are the magnetic flux and the stream function, respectively, while \(J\) is the current density of the plasma and \(U\) its vorticity. Eq. 3 describes the evolution of the runaway current density \(J_{RE}\), where it is assumed that these particles move with the (normalized) speed of light along the magnetic field lines. Due to their relativistic velocity, and in contrast to the thermal electrons, runaway electrons do not collide with ions, as reflected by the dissipative term in the plasma Ohm's law in Eq. 1, where \(\eta\) is the normalized resistivity.
In this paper, we focus on the analysis of spontaneous magnetic reconnection events induced by single helicity (SH) perturbations in a sheared, unstable equilibrium magnetic configuration, with \(B_{y}=B_{y}(x)\). For a generic field, \(f\), SH modes have the following form:
\[f(x,y,z)=\sum_{k_{y},k_{z}}\hat{f}_{k_{y},k_{z}}(x)\exp(ik_{y}y+ik_{z}z)=\sum_{k_{y}}\hat{f}_{k_{y}}(x)\exp\left(ik_{y}\left(y+\frac{k_{z}}{k_{y}}z\right)\right)\]
where the helicity \(\alpha=k_{z}/k_{y}\) is fixed. Here \(k_{y}=\pi m/L_{y}\) and \(k_{z}=\pi n/L_{z}\), with \(m,n\) integer numbers and \(L_{y}\) and \(L_{z}\) the half-widths of the computational box along \(y\) and \(z\), respectively. As shown in [8; 9], SH problems can be treated as 2D problems, i.e. without dependence on the \(z\) coordinate, by transforming the sheared component of the equilibrium magnetic field from \(B_{y}\) to \(B_{y}-\alpha\), which corresponds to a rotation in the \((y,z)\) plane.
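As an illustration of the notation, the sketch below evaluates the Poisson bracket \([f,g]\) on an \((x,y)\) grid and the single-helicity equilibrium flux whose sheared field is \(B_{y}=\tanh(x)-\alpha\) (see Eq. 6 below); the simple finite-difference discretization is for illustration only and is not the compact scheme used in SCOPE3D.

```python
import numpy as np

def bracket(f, g, dx, dy):
    """Poisson bracket [f, g] = df/dx dg/dy - dg/dx df/dy on an (x, y) grid.
    Centered differences in x, periodic differences in y (illustrative discretization only)."""
    dfdx = np.gradient(f, dx, axis=0)
    dgdx = np.gradient(g, dx, axis=0)
    dfdy = (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2.0 * dy)
    dgdy = (np.roll(g, -1, axis=1) - np.roll(g, 1, axis=1)) / (2.0 * dy)
    return dfdx * dgdy - dgdx * dfdy

def psi_eq(x, alpha):
    """Single-helicity equilibrium flux psi_eq = -log(cosh x) + alpha x,
    i.e. the in-plane field tanh(x) shifted to tanh(x) - alpha."""
    return -np.log(np.cosh(x)) + alpha * x
```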
## III The numerical tool SCOPE3D
The SCOPE3D (Solver for COllisionless Plasma Equations in a 3D slab geometry) code is adopted to solve the equations 1-4 in a slab geometry. It is based on an explicit, third-order Adams-Bashforth temporal discretization and is parallelized along the two periodic directions \(y\) and \(z\). In order to have a high spatial resolution in the reconnection region, a compact finite difference scheme [10], specifically designed for a non-equispaced grid, is adopted for the spatial discretization along the \(x\) direction. Fast Fourier methods are applied instead along the periodic directions. In addition, numerical filters are used in the \(y\) and \(z\) directions to remove short-length scales caused by nonlinear interactions [10], while the physical dissipation is sufficient to control the numerical noise along the \(x\) direction. Since we analyze here only SH modes, all the simulations have been carried out in the 2D limit, saving computational time.
We consider magnetic reconnection events starting from a static plasma, immersed in an asymmetric, Harris-type, sheared magnetic field [11], in which all current is carried by runaway electrons, such that:
\[\varphi_{eq}(x)=0 \tag{5}\]
\[\psi_{eq}(x)=-\log(\cosh(x))+\alpha x \tag{6}\]
\[J_{RE_{eq}}(x)=J_{eq}(x)=-\nabla^{2}\psi_{eq}(x) \tag{7}\]
Concerning the grid used in this work, for the linear analysis a resolution of \(n_{y}=96\) points has been used along the periodic direction \(y\). In contrast, for the \(x\) direction, the number of grid points has been varied from 1200 to 4800 on the non-equispaced grid in order to have an adequate resolution in the reconnection region. In particular, for the purpose of benchmarking, \(n_{x}=1200\) grid points have been adopted for the \(x\) direction so as to guarantee a resolution of \(dx=0.0038\) around \(x=0\), where the reconnection occurs.
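One way to realize such a non-equispaced grid clustered around the resonant surface is a sinh stretching of a uniform computational coordinate; the mapping below is purely hypothetical, since the actual stretching used in SCOPE3D is not specified in the text.

```python
import numpy as np

def stretched_grid(nx, Lx, s=3.3):
    """Hypothetical sinh-stretched grid on [-Lx, Lx]; larger s clusters more points near x = 0."""
    xi = np.linspace(-1.0, 1.0, nx)           # uniform computational coordinate
    return Lx * np.sinh(s * xi) / np.sinh(s)  # physical coordinate

x = stretched_grid(1200, 3 * np.pi)
print(np.diff(x).min())  # ~0.0038 near x = 0, comparable to the resolution quoted above
```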
## IV Symmetric case: \(\alpha\)=0
Here we briefly summarize the linear and nonlinear results [6] we obtained assuming a runaway current profile peaked on the rational surface at \(x=0\), in order to have a term of comparison at hand when exploring the \(\alpha\neq 0\) cases. These results have been validated against the ones reported in [5], and for this reason we consider the purely resistive regime (\(d_{e}=0\)) and the limit \(c/v_{A}=1\). A \([-L_{x},L_{x}]\times[-L_{y},L_{y}]\) domain with \(L_{x}=3\pi\) and different values of \(L_{y}\) was used to integrate the equations; \(L_{y}\) was varied to account for different degrees of instability.
In contrast to [5], where a periodic in-plane component of the equilibrium magnetic field was adopted, i.e. \(\psi_{eq}=\cos(x)\), the equilibrium (6) allowed us to carry out the analysis over long nonlinear times and to investigate the evolution of a single magnetic island until saturation. This was not possible in [5], since the periodic equilibrium adopted there led to the presence of a second magnetic island at the boundaries of the integration domain, influencing the one located at the center. The Harris equilibrium leads to an equilibrium current that has been discussed in MHD theory as the most probable profile [12]. Figure 1 shows the numerical and analytical linear growth rates with and without runaways. The resistivity is fixed at \(\eta=3e-4\) and different perturbation wave numbers \(k_{y}\) are adopted, corresponding to modes with different values of the stability parameter \(\Delta^{\prime}=2(1/k_{y}-k_{y})\). Furthermore, the effect of electron compressibility along magnetic field lines is taken into account through the introduction of the ion sound Larmor radius scale length, \(\rho_{s}\), into the equations. In particular, the blue points for runaways and the green points for no runaways are compared with the expected theoretical values with (blue curve) and without (green curve) runaways. The theoretical prediction given by Eq. 9, represented by the green curve, accounts for the finite resistivity correction because of the relatively high value of \(\eta\) [13]. In the same figure, the red and magenta points represent the linear growth rates in the presence and absence of runaways, respectively, for cases where \(\rho_{s}=0.1\). These points are compared with the results of Eq. 10, corresponding to the red curve. It can be observed that the presence of the runaway current does not significantly alter the linear growth rates, as in the purely resistive case. The linear dispersion relation shown in fig. 1 has been derived in slab geometry in Ref. [5] with and without RE, obtaining:
\[\frac{\gamma^{5/4}}{\eta^{3/4}k_{y}^{1/2}}=0.47\Delta^{\prime}\qquad\text{ with runaways} \tag{8}\]
\[\frac{\gamma^{1/4}(\gamma-2b\eta)}{\eta^{3/4}k_{y}^{1/2}}=0.47\Delta^{\prime} \qquad\text{ without runaways} \tag{9}\]
where \(b=\left.\psi_{eq}^{\prime\prime\prime\prime}/\psi_{eq}^{\prime\prime}\right|_{x=0}\).
Eq. 8 is the standard Furth, Killeen and Rosenbluth (FKR) [14] growth rate in the small \(\Delta^{\prime}\) regime, defined by the inequality \(\Delta^{\prime}\eta^{1/3}\ll 1\), while the derivation of Eq. 9 includes the higher-order derivative corrections of the current density at the resonant surface [13]. On the other hand, in the presence of \(\rho_{s}\) the dispersion relation for the linear growth rates becomes [15],
\[\frac{\gamma^{3/2}}{\eta^{1/2}k_{y}}=0.32\rho_{s}\Delta^{\prime} \tag{10}\]
As can be observed in fig. 1, a good agreement is found in both cases. In fig. 2 we compare the nonlinear saturated magnetic island widths in the presence and absence of runaways, \(w\), for \(\Delta^{\prime}\) in the range [0.1, 2], where \(w\) is given by [5]:
\[w=-\frac{1}{b}\frac{\Delta^{\prime}}{0.272}\qquad\text{with runaways} \tag{11}\]
\[w=-\frac{1}{b}\frac{\Delta^{\prime}}{0.411}\qquad\text{without runaways}. \tag{12}\]
Figure 1: Numerical and analytically derived linear growth rates for a pure resistive reconnecting mode driven by a runaway current (blue points and blue curve) compared with the case without runaway current (green points and green curve) along with the numerical linear growth rates with (red points) and without a runaway current (magenta points) compared with the results of Eq. 10 in presence of electron temperature effects (red curve).
Figure 2: Numerical and analytically derived saturation island widths for the pure resistive reconnecting mode driven by a runaway current (green points and green line) compared with the case without runaways (red points and red line) along with the numerical saturated island widths with (blue plus markers) and without (yellow plus markers) in presence of electron temperature effects.
Having \(b=-2\) for the type of equilibrium considered in this study, we get \(w=1.85\Delta^{\prime}\) and \(w=1.22\Delta^{\prime}\) respectively for the case with and without RE. As found in ref. [5], the presence of a runaway current leads to an increase of 50% in the saturated magnetic island width with respect to the case with no RE.
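A small numerical check of these formulas, evaluating \(\Delta^{\prime}\), the growth rate of Eq. 8 and the saturated widths of Eqs. 11-12 with \(b=-2\); the function names and the sample values of \(k_{y}\) are illustrative.

```python
import numpy as np

def delta_prime(ky):
    """Stability parameter for the Harris-type equilibrium, Delta' = 2 (1/ky - ky)."""
    return 2.0 * (1.0 / ky - ky)

def gamma_fkr_re(ky, eta):
    """Small-Delta' growth rate with runaways (Eq. 8): gamma^(5/4) = 0.47 Delta' eta^(3/4) ky^(1/2)."""
    return (0.47 * delta_prime(ky) * eta**0.75 * np.sqrt(ky)) ** 0.8

def w_saturated(dp, b=-2.0, runaways=True):
    """Saturated island width (Eqs. 11-12): w = 1.85 Delta' with RE and 1.22 Delta' without, for b = -2."""
    return -dp / (b * (0.272 if runaways else 0.411))

eta = 3e-4
for ky in (0.5, 0.7, 0.9):
    dp = delta_prime(ky)
    print(f"ky={ky}: Delta'={dp:.2f}  gamma={gamma_fkr_re(ky, eta):.2e}  "
          f"w_RE={w_saturated(dp):.2f}  w_noRE={w_saturated(dp, runaways=False):.2f}")
```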
The analytical expressions found for the saturated island widths in the presence (green line) and absence (red line) of runaways are plotted in fig. 2 to compare them against the numerically obtained results, shown with the corresponding color points. In addition, fig. 2 shows the numerically obtained saturated magnetic island widths in the presence of \(\rho_{s}\) with and without runaways, corresponding to the blue and yellow plus markers respectively. As can be observed, the presence of the electron temperature does not lead to any change in the saturation width, since the microphysics at the ion sound Larmor radius scale plays no role in the saturated island width, which should depend only on the free energy available for the reconnection process. A good agreement is found between theory and simulation; however, a difference between the theoretical and the numerically observed widths in the presence of runaways can be observed at values of \(\Delta^{\prime}\) of order unity. This follows from the fact that at these \(\Delta^{\prime}\) values the simulation parameters do not fall into the range of validity of the analytical theory, since they depart from the asymptotic limit of the small \(\Delta^{\prime}\) regime. However, these simulations were necessary in order to verify the presence of a bifurcation in the sequence of saturated equilibria, which was postulated in ref. [5]. In our study we did not find any bifurcation: the island grows up to saturation even for values of \(\Delta^{\prime}\) of order 1. In ref. [5], periodic boundary conditions in the inhomogeneity direction of the equilibrium magnetic field prevented the island from reaching a saturated state.
## V Asymmetric case \(k_{z}/k_{y}\neq 0\)
Here we consider asymmetric modes, by shifting the in-plane component of the equilibrium magnetic field to \(B_{y_{eq}}=\tanh(x)-\alpha\). Hence the rational surface is now located at \(x_{s}=\tanh^{-1}(\alpha)\), while the runaway current profile is peaked at \(x=0\).
### Linear analysis
When considering modes for which \(k_{z}/k_{y}\neq 0\), Liu et al. [7] demonstrate that the runaway electron convection causes a mode rotation, which explains the real frequency seen in their simulation campaign carried out with the code M3D-C1. In the same work, RE are shown to lead to the formation of a smaller layer within the resistive layer, whose width depends on the ratio \(c/v_{A}\). In particular, the resistive layer half width, \(\delta_{1}\) (referred to as the layer width in the following), the sublayer half width, \(\delta_{2}\) (referred to as the sublayer width in the following), and the growth rate are given in slab geometry by
\[\delta_{1}=\gamma^{1/4}\eta^{1/4}k_{y}^{-1/2} \tag{13}\]
\[\delta_{2}=\gamma v_{A}/k_{y}c \tag{14}\]
\[\frac{\gamma^{5/4}}{\eta^{3/4}k_{y}^{1/2}}\frac{2\pi\Gamma(3/4)}{\Gamma(1/4)} =\Delta^{\prime}-i\pi\frac{k_{y}J_{RE0}^{\prime}}{|k_{y}|} \tag{15}\]
where \(J_{RE0}^{\prime}\) is the derivative of the RE current at the rational surface. For \(J_{RE0}^{\prime}\neq 0\) an imaginary part of the growth rate appears which gives a rotation of the magnetic island. The \(\Delta^{\prime}\) parameter taking into account corrections due to finite values of \(k_{z}/k_{y}\) has been evaluated according to [16],
\[\Delta^{\prime}\approx 2\left(\frac{1}{k}-k\right)\left[1+\frac{\tanh^{2}\!x_{s} }{2}\left(1+\frac{1}{1-k}\right)\right] \tag{16}\]
where \(k\equiv|\mathbf{k}|\) with \(\mathbf{k}=k_{y}e_{\mathbf{y}}+k_{z}e_{\mathbf{z}}\).
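Eq. 15 can be solved directly for the complex growth rate, whose real part is the growth rate and whose imaginary part is the island rotation frequency; the sketch below, with illustrative argument names and input values, also evaluates the layer widths of Eqs. 13-14.

```python
import numpy as np
from scipy.special import gamma as Gamma

def asymmetric_mode(delta_p, eta, ky, dJ_re0, c_over_va=1.0):
    """Complex growth rate from Eq. 15, plus the resistive layer (Eq. 13) and the RE sublayer (Eq. 14)."""
    C = 2.0 * np.pi * Gamma(0.75) / Gamma(0.25)           # 2 pi Gamma(3/4) / Gamma(1/4)
    rhs = delta_p - 1j * np.pi * np.sign(ky) * dJ_re0     # Delta' - i pi ky J'_RE0 / |ky|
    g = (rhs * eta**0.75 * np.sqrt(abs(ky)) / C) ** 0.8   # principal branch of the 4/5 power
    growth, omega = g.real, g.imag
    delta1 = growth**0.25 * eta**0.25 / np.sqrt(abs(ky))  # resistive layer half width
    delta2 = growth / (abs(ky) * c_over_va)               # runaway sublayer half width
    return growth, omega, delta1, delta2

print(asymmetric_mode(delta_p=1.015, eta=1e-4, ky=1.0, dJ_re0=-0.5))
```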
In fig. 3 the growth rates given by Eq. 15 are shown by the blue line and compared with the simulation results (blue points) for cases with \(\Delta^{\prime}=1.015\), \(c/v_{A}=1\) and resistivity values in the [1e-3, 1e-6] range. As can be observed, the simulation results agree well with the analytical derivation, and the same is true for the rotation frequency, represented by the rust points and compared with the results of Eq. 15 (rust curve).
To analyze the inner layer, we have chosen the value \(\Delta^{\prime}=6.057\), even though fig. 4 shows that the growth rates from the simulations (blue points) are less close to the analytical results (blue curve) than in the more asymptotic case \(\Delta^{\prime}=1.015\). At the same time, the resistivity range considered for this case has been changed to [1e-4, 5e-7] with respect to the range adopted for the previous case, in order to respect the small \(\Delta^{\prime}\) regime limit. On the other hand, the \(\Delta^{\prime}=1.015\) case does not
Figure 3: Comparison between the analytical growth rate (blue line) and rotation frequency (rust curve) given by Eq. 15 and the numerical growth rate (blue points) and rotation frequency (rust points) for values of \(\eta\) in the [1e-3, 1e-6] range, \(\Delta^{\prime}=1.015\) and \(c/v_{A}=1\).
allow a sufficient numerical resolution to determine the inner layer widths already with \(c/v_{A}=1\), so that an analysis with \(c/v_{A}=10\) is computationally infeasible. An accurate measurement of \(\delta_{2}\) requires adopting an increasing number of grid points along the radial direction with decreasing resistivity. This leads to reducing the radial extension of the simulation domain in order to have enough spatial resolution in the reconnection region. As a consequence, with a smaller domain, the system is less asymptotic, which partially explains also the differences in the observed growth rates in fig. 4. Furthermore, the numerically obtained rotation frequencies (rust points) for \(\Delta^{\prime}=6.057\) agree with the theory (rust curve), as can be seen in fig. 4, since the rotation frequency does not depend on the value of \(\Delta^{\prime}\), in accordance with Eq. 15.
Fig. 5 shows the linear eigenfunction for mode 1 of the total current (blue curve), runaway current (red curve), and thermal electrons (green curve), normalized to the maximum of the total current, for a case with \(\eta=1e-4\), \(\Delta^{\prime}=6.057\), \(c/v_{A}=1\). This figure highlights the presence of a resistive layer \(\delta_{1}\) on the thermal current profile and an inner layer \(\delta_{2}\) on the runaway current profile. In particular, it can be observed that the second layer is much narrower than the resistive one and that the thermal current profile dominates the runaway current profile. However, as shown in fig. 6, which shows the same profiles as in fig. 5 but for \(c/v_{A}=10\), the peak of the runaways is higher than that of the thermal current. This results from the conservation of runaways, which in the case of a thinner layer are distributed over a smaller area and reach a maximum that is higher than the peak value of the thermal electrons.
The dependence of the microlayer \(\delta_{2}\) on the \(c/v_{A}\) ratio is scanned in fig. 7 for different values of the resistivity, through the growth rates \(\gamma\). In particular, the numerical (blue and red) curves are compared with the analytical (green and magenta) lines. The blue and green curves correspond to the \(c/v_{A}=1\) case, while the red and magenta ones to the \(c/v_{A}=10\) case. By comparing the two numerical curves we observe that there is about an order of magnitude difference between the results, as expected from the definition of \(\delta_{2}\) given in Eq. 14. For both cases, \(c/v_{A}=1\) and \(c/v_{A}=10\), a good agreement is found between theory and simulations. For the higher ratio, the resistivity interval considered is limited to \(\eta=[1e-4,5e-5]\), since below this interval the problem requires a finer resolution, which causes the simulations to become computationally unfeasible.
Concerning the resistive layer, fig. 8 shows a comparison between the theory (green dashed line) and the numerical results for \(c/v_{A}=[1,10]\) (blue and red curve respectively) and a good agreement is found. Moreover, the curve for \(c/v_{A}=10\) shows a better agreement with the theory with respect to the other curve since a higher ratio makes the simulation more
Figure 4: Comparison between the analytical growth rate (blue line) and rotation frequency (rust curve) given by Eq. 15 and the numerical growth rate (blue points) and rotation frequency (rust points) for values of \(\eta\) in the [1e-4, 5e-7] range, \(\Delta^{\prime}=6.057\) and \(c/v_{A}=1\).
Figure 5: Eigenfunction of mode 1 for total current (blue curve), runaway current (red curve), and thermal electrons current (green curve) profiles showing the inner layer \(\delta_{2}\) on the runaway current profile and the resistive layer \(\delta_{1}\) on the thermal electrons profile for \(\eta=1e-4\), \(\Delta^{\prime}=6.057\) and \(c/v_{A}=1\).
Figure 6: Eigenfunction of mode 1 for total current (blue curve), runaway current (red curve) and thermal electrons current (green curve) profiles showing the inner layer \(\delta_{2}\) on the runaway current profile and the resistive layer \(\delta_{1}\) on the thermal electrons profile for \(\eta=1e-4\), \(\Delta^{\prime}=6.057\) and \(c/v_{A}=10\).
asymptotic.
One question already raised in ref. [7] concerns the width of the microlayer, which can be comparable with the electron skin depth, \(d_{e}\), and therefore could imply an effect of the electron mass in Ohm's law. In order to investigate the effects of the electron inertia on the system evolution we performed a simulation campaign retaining the terms related to \(d_{e}\) in eq. 1. In particular, a value \(d_{e}=0.1\) was taken into consideration, which is close to the \(d_{e}=0.017\) obtained for a post-disruption plasma density of \(n_{e}=1e17\,m^{-3}\). While we do not observe any difference when \(c/v_{A}=1\), the presence of \(d_{e}\) in Ohm's law affects the radial distribution of the thermal electrons for \(c/v_{A}=10\), where the runaway current is carrying almost all the plasma current. In this scenario, the thermal electrons are no longer characterized by a Gaussian-like distribution as in the purely resistive case, but by a smaller layer, as reported in fig. 9. Comparing fig. 9, which shows the eigenfunction of mode 1 for the total current (blue curve), runaway current (red curve), and thermal current (green curve) normalized to the maximum of the total current, with fig. 6, the difference between the thermal electron distributions can be appreciated. The presence of the electron skin depth leads the thermal current to become important already during the linear evolution of the island.
### Nonlinear analysis
With respect to the linear regime, where a smaller integration domain does not affect the evolution of the system, in the nonlinear regime the island width reaches dimensions of the order of the domain radial extension. As a consequence, in order to avoid boundary effects, an extension of \(L_{x}=3\pi\) was chosen, with \(\eta=1e-4\) and \(n_{x}=4800\). This setup enables us to have enough spatial resolution in the reconnection region even with a larger domain.
During the nonlinear evolution of the magnetic reconnection process in the presence of a runaway current, the distribution of the RE population undergoes significant changes. Specifically, when entering the nonlinear phase, the single microlayer observed on the RE profile splits into multiple local peaks, as shown in fig. 10, where the eigenfunctions of mode 1 associated with the RE radial profile at \(t=1500\) (red) and \(t=1800\) (blue) are depicted. At \(t=1500\) the runaway electron evolution is at the end of the linear regime and at \(t=1800\) it is in the nonlinear regime, as shown by the vertical lines in fig. 11. Here we show the temporal evolution of \(\psi\) at the X-point (rust), which in the linear phase is directly linked to the island evolution, and the temporal evolution of the derivative of
Figure 8: Numerically obtained resistive layer widths compared with the analytical derivation given in Eq. 13 for different values of \(\gamma\) corresponding to \(\eta\) in the [1e-4,5e-5] range, \(c/v_{A}=[1,10]\) and \(\Delta^{\prime}=6.057\). The green dashed line corresponds to the analytical curve, while the blue curve represents the numerical results for \(c/v_{A}=1\) and the red curve for \(c/v_{A}=10\).
Figure 7: Numerically obtained sublayer widths compared with the analytical derivation given in Eq. 14 for different values of \(\gamma\) corresponding to \(\eta\) in the [1e-4,5e-7] range, \(c/v_{A}=[1,10]\) and \(\Delta^{\prime}=6.057\). The green dashed line corresponds to the analytical curve for \(c/v_{A}=1\) and the magenta dashed line to the analytical curve for \(c/v_{A}=10\), while the blue curve represents the numerical results for \(c/v_{A}=1\) and the red curve for \(c/v_{A}=10\)
Figure 9: Eigenfunction of mode 1 for total current (blue curve), runaway current (red curve) and thermal electrons current (green curve) profiles for \(\eta=1e-4\), \(\Delta^{\prime}=6.057\) and \(c/v_{A}=10\).
the eigenfunction of mode 0, representing the equilibrium runaway current at the rational surface, \(J^{\prime}_{RE0}\) (blue), for \(\eta=1e-4\), \(c/v_{A}=1\) and \(\Delta^{\prime}=6.057\). Moreover, it was observed that with a higher value of \(c/v_{A}\) the runaways become nonlinear at an earlier stage. Indeed, with \(c/v_{A}=10\), the runaways become nonlinear already during the linear evolution of the island. It was found that the transition from the linear to the nonlinear regime for runaways is governed by the widths of the inner layer \(\delta_{2}\) and of the island. In particular, when the island width becomes larger than the inner layer, the runaways become nonlinear; since the inner layer gets smaller with increasing \(c/v_{A}\), the RE become nonlinear earlier in the case with \(c/v_{A}=10\) than in the case where \(c/v_{A}=1\).
In contrast to the linear phase, where the RE concentrate at the X-point of the island, during the nonlinear phase the RE start to distribute first over the separatrices of the island and then, advancing further into the nonlinear phase, over multiple peaks. The RE redistribution during the nonlinear evolution is significantly impacted by the combination of two phenomena, the island growth and its rotation, resulting in the formation of a spiral-like structure, as represented in fig. 12: the left figure depicts the RE distribution over a magnetic island, and the right figure shows the RE radial distribution along the spiral across the island O-point for the \(\eta=1e-4\), \(c/v_{A}=1\) and \(\Delta^{\prime}=6.057\) case at \(t=2500\). As the island grows and the runaways distribute over the separatrices of the island, they are influenced by the island's rotation along the poloidal direction to form a spiral-like structure. In the case where \(c/v_{A}=10\), the island width becomes larger than \(\delta_{2}\) already during the linear phase of the island evolution, causing the runaways to become nonlinear, which leads to the formation of the spiral at an earlier stage with respect to the \(c/v_{A}=1\) case.
The spiral-like structure ceases to exist once the island rotation tends toward zero. Indeed, as can be observed in the left panel of fig. 13, which compares the evolution of the island by means of \(\psi^{\prime}_{X}\) (blue curve) and its rotation frequency (rust curve), when the island enters the nonlinear regime after \(t=2000\) the island rotation tends towards zero. Once the island saturates, approximately around \(4000\tau_{A}\), the rotation frequency also goes to zero. Moreover, the rotation frequency depends strictly on \(J^{\prime}_{RE0}\), so it agrees with Eq. 15 only during the linear phase of the runaway current evolution, as can be observed in the right panel of fig. 13. Here, the analytical value, \(\omega_{theory}\) (blue curve), computed using Eq. 15 with \(J^{\prime}_{RE0}\) given by the simulation, is compared with the numerical rotation frequency, \(\omega_{simulation}\) (rust curve), for \(\eta=1e-4\), \(c/v_{A}=10\) and \(\Delta^{\prime}=6.057\). We can clearly see that the agreement between the two curves stops when the equilibrium runaway current is perturbed. As a consequence, computing \(\omega_{theory}\) through Eq. 15 leads to a non-zero rotation frequency at saturation, whereas in the simulation this goes to zero.
Concerning the island width at saturation, it was found that with \(\Delta^{\prime}=6\) the presence of a runaway current does not lead to a \(50\%\) higher width of the saturated island with respect to the width in the absence of runaways, as demonstrated in [5] and shown in fig. 2. Indeed, by comparing the simulations with and without runaways it was observed that when \(\Delta^{\prime}>3\) the saturated island width in the presence of RE was comparable to the dimension of the island in the absence of RE. This is caused by a different nonlinear evolution of the island, as can also be noticed by looking at the temporal evolution of \(\psi^{\prime}_{X}\) in fig. 11, which tends to increase in the nonlinear phase. Without RE, the growth of \(\psi^{\prime}_{X}\), which approximates well the island growth even in the nonlinear phase, is more pronounced, as can be noticed in the left panel of fig. 14, which depicts the temporal evolution of \(\psi^{\prime}_{X}\) without (blue curve) and with RE (rust curve). This burst in the island growth only takes place in the absence of RE, compensating for the \(50\%\) larger island seen in the presence of RE. Thus this behavior leads to magnetic islands at saturation that are similar in size, as can be seen in the right panel of fig. 14. For the sake of completeness, we also show the same plots for the case \(\Delta^{\prime}=3\) in fig. 15. In this case, the nonlinear evolution of \(\psi^{\prime}_{X}\) is characteristic of the small
Figure 10: Radial profile of the runaway electron distribution during the linear evolution (red) at t=1500 and during the nonlinear evolution (blue) at t=1800, for \(\eta=1e-4\), \(c/v_{A}=1\) and \(\Delta^{\prime}=6.057\).
\(\Delta^{\prime}\) regime and leads to a magnetic island width at saturation approximately 50% higher in the presence of RE compared to the case without RE, as predicted by the theory in [5].
## VI Conclusions
In this paper we have addressed the problem of the stability of a post-disruption plasma in a realistic asymmetric configuration, characterised by a mismatch between the current peak and the resonant surface. The runaway fluid equation is coupled with the MHD equations through a current coupling scheme and all the equilibrium plasma current is assumed to be carried by the runaways.
We find that, while, as in the symmetric case [5], the runaways do not alter the linear growth of the island, they lead to a rotation of the island in the poloidal direction, consistent with the analytical results shown in [7]. An additional feature seen in the asymmetric simulations is a microlayer on the runaway current profile, much smaller than the resistive layer. While the resistive layer controls the transition of the island from the linear to the nonlinear stage, the microlayer width causes the runaways to become nonlinear once the island size becomes larger than the microlayer width. This transition of the runaways to the nonlinear phase is accompanied by the generation of a spiral-like structure inside the island, changing drastically the distribution of runaways with respect to the symmetric case. In addition, since the microlayer widths are comparable in size with the electron skin depths measured in a
Figure 12: Runaway electron distribution over the magnetic island during the nonlinear evolution of the runaways (left) and runaway electron profile at t=2500 (right) for \(\eta=1e-4\), \(c/v_{A}=1\) and \(\Delta^{\prime}=6.057\) at Y=-5.
post-disruptive scenario, we studied the effect of the presence of \(d_{e}\) during the linear evolution of the island and find that the thermal electron distribution is no longer characterized by the presence of a resistive layer, but by a much narrower one. As said earlier, the resistive layer governs the island transition to the nonlinear phase, so it can be assumed that this change in the thermal electron distribution might have an influence on the island growth as well. However, a nonlinear analysis of a RE-driven magnetic reconnection in the presence of \(d_{e}\) was not possible due to numerical noise arising when approaching the nonlinear stage of the island.
The nonlinear analysis in the resistive regime shows that the frequency does not follow the evolution of the equilibrium runaway current once this becomes nonlinear, and tends to zero when the island goes toward saturation. Finally, we find that, as long as we are in the small \(\Delta^{\prime}\) regime, the island width at saturation is 50% bigger than the corresponding island without runaways, consistent with the theory [5], while, as \(\Delta^{\prime}\) increases, the magnetic island saturation width in the presence of RE becomes more similar in size to the width in the absence of RE. In this case the rapid burst in the nonlinear island growth in the absence of RE compensates the 50% higher saturation width with RE.
This study confirms the importance of investigating the stability of a post-disruption plasma in the presence of a RE current. Among other effects, the presence of more than one helicity in the initial perturbation can lead to plasma stochastization, closely connecting this work with the study of the benign runaway termination in the presence of stochastic magnetic fields [2]. On top of that, a 50% higher width at saturation
Figure 14: Temporal evolution of \(\psi^{\prime}_{X}\) without (blue curve) and with RE (rust curve) in the left panel, and temporal evolution of the island area without (blue curve) and with RE (rust curve) in the right panel, for \(\eta=1e-4\), \(c/v_{A}=1\) and \(\Delta^{\prime}=6.057\).
Figure 15: Temporal evolution of \(\psi^{\prime}_{X}\) without (blue curve) and with RE (rust curve) in the left panel, and temporal evolution of the island area without (blue curve) and with RE (rust curve) in the right panel, for \(\eta=1e-4\), \(c/v_{A}=1\) and \(\Delta^{\prime}=3\).
of the magnetic island may have a significant effect on the evolution of the magnetic stochasticity preventing the RE population growth. These aspects of the RE-driven magnetic reconnection will be addressed in a future study.
## Acknowledgement
The numerical simulations were performed using the EUROfusion high performance computer Marconi Fusion hosted at CINECA (Project No. FUA36-FKMR2).
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2310.17741
|
Probing Light Fermiophobic Higgs Boson via diphoton jets at the HL-LHC
|
In this study, we explore the phenomenological signatures associated with a
light fermiophobic Higgs boson, $h_{\rm f}$, within the type-I
two-Higgs-doublet model at the HL-LHC. Our meticulous parameter scan
illuminates an intriguing mass range for $m_{h_{\rm f}}$, spanning
$[1,10]{\;{\rm GeV}}$. This mass range owes its viability to substantial
parameter points, largely due to the inherent challenges of detecting the soft
decay products of $h_{\rm f}$ at contemporary high-energy colliders. Given that
this light $h_{\rm f}$ ensures $Br(h_{\rm f}\to\gamma\gamma)\simeq 1$,
$Br(H^\pm \to h_{\rm f} W^\pm)\simeq 1$, and $M_{H^\pm}\lesssim 330{\;{\rm
GeV}}$, we propose a golden discovery channel: $pp\to h_{\rm f}H^\pm\to
\gamma\gamma\gamma\gamma \,l^\pm\nu$, where $l^\pm$ includes $e^\pm$ and
$\mu^\pm$. However, a significant obstacle arises as the two photons from the
$h_{\rm f}$ decay mostly merge into a single jet due to their proximity within
$\Delta R<0.4$. This results in a final state characterized by two jets, rather
than four isolated photons, thus intensifying the QCD backgrounds. To tackle
this, we devise a strategy within \textsc{Delphes} to identify jets with two
leading subparticles as photons, termed diphoton jets. Our thorough
detector-level simulations across 18 benchmark points predominantly show signal
significances exceeding the $5\sigma$ threshold at an integrated luminosity of
$3{\;{\rm ab}^{-1}}$. Furthermore, our approach facilitates accurate mass
reconstructions for both $m_{h_{\rm f}}$ and $M_{H^\pm}$. Notably, in the
intricate scenarios with heavy charged Higgs bosons, our application of machine
learning techniques provides a significant boost in significance.
|
Daohan Wang, Jin-Hwan Cho, Jinheung Kim, Soojin Lee, Prasenjit Sanyal, Jeonghyeon Song
|
2023-10-26T19:27:53Z
|
http://arxiv.org/abs/2310.17741v1
|
# Probing Light Fermiophobic Higgs Boson via diphoton jets at the HL-LHC
###### Abstract
In this study, we explore the phenomenological signatures associated with a light fermiophobic Higgs boson, \(h_{\rm f}\), within the type-I two-Higgs-doublet model at the HL-LHC. Our meticulous parameter scan illuminates an intriguing mass range for \(m_{h_{\rm f}}\), spanning \([1,10]~{}{\rm GeV}\). This mass range owes its viability to substantial parameter points, largely due to the inherent challenges of detecting the soft decay products of \(h_{\rm f}\) at contemporary high-energy colliders. Given that this light \(h_{\rm f}\) ensures \({\rm Br}(h_{\rm f}\to\gamma\gamma)\simeq 1\), \({\rm Br}(H^{\pm}\to h_{\rm f}W^{\pm})\simeq 1\), and \(M_{H^{\pm}}\lesssim 330~{}{\rm GeV}\), we propose a golden discovery channel: \(pp\to h_{\rm f}H^{\pm}\to\gamma\gamma\gamma\gamma\,\ell^{\pm}\nu\), where \(\ell^{\pm}\) includes \(e^{\pm}\) and \(\mu^{\pm}\). However, a significant obstacle arises as the two photons from the \(h_{\rm f}\) decay mostly merge into a single jet due to their proximity within \(\Delta R<0.4\). This results in a final state characterized by two jets, rather than four isolated photons, thus intensifying the QCD backgrounds. To tackle this, we devise a strategy within Delphes to identify jets with two leading subparticles as photons, termed diphoton jets. Our thorough detector-level simulations across 18 benchmark points predominantly show signal significances exceeding the \(5\sigma\) threshold at an integrated luminosity of \(3~{}{\rm ab}^{-1}\). Furthermore, our approach facilitates accurate mass reconstructions for both \(m_{h_{\rm f}}\) and \(M_{H^{\pm}}\). Notably, in the intricate scenarios with heavy charged Higgs bosons, our application of machine learning techniques provides a significant boost in significance.
Higgs Physics, Beyond the Standard Model, Machine Learning
###### Contents
* I Introduction
* II Fermiophobic type-I with very light \(h_{\rm f}\)
* II.1 Review of the fermiophobic type-I
* II.2 Viable parameter space for very light \(h_{\rm f}\)
* II.3 Signature of the golden channel \(pp\to h_{\rm f}H^{\pm}\)
* III Jet subparticles and pileups
* IV Cut-Based Analysis
* V Mass Reconstruction for \(m_{h_{\rm f}}\) and \(M_{H^{\pm}}\)
* VI Machine Learning Approach for heavy \(M_{H^{\pm}}\)
* VII Conclusions
* A Weighting factor method
## I Introduction
The discovery of the Higgs boson with a mass of \(125\:{\rm GeV}\) at the LHC [1; 2] was a pivotal moment in validating the standard model (SM). Beyond this foundational achievement, the Higgs boson holds an unparalleled position, serving as a potential portal to probe theories of particle physics beyond the SM (BSM). This perspective emerges from numerous unresolved fundamental questions, such as the nature of dark matter [3; 4], neutrino masses, the metastability of the SM vacuum [5], and the naturalness problem [6; 7; 8], all of which have deep ties to the Higgs sector. Therefore, postulating an extended Higgs sector is both logical and compelling. However, despite great efforts, current explorations of the Higgs sector have not identified any significant deviations from the predictions of the SM: the properties of the observed Higgs boson align perfectly with SM expectations, and direct searches for additional scalar bosons have so far yielded no new findings. Nonetheless, the unwavering pursuit of BSM theories persists. One promising avenue is to probe scenarios where potential discovery channels for new Higgs bosons may have been overlooked.
A charming example is a light fermiophobic Higgs boson, \(h_{\rm f}\), with a mass below \(125\:{\rm GeV}\) in the type-I two-Higgs-doublet model (2HDM) [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. This light mass is rationalized in the inverted Higgs scenario [30; 31; 32; 33; 34; 35], where the heavier \(CP\)-even Higgs boson is the observed one.
The fermiophobic nature of \(h_{\rm f}\) stems from the condition \(\alpha=\pi/2\) in type-I,1 where all Yukawa couplings of \(h_{\rm f}\) are proportional to \(\cos\alpha\). At the LHC, the production of \(h_{\rm f}\) is straightforward, primarily through the \(pp\to W^{*}\to h_{\rm f}H^{\pm}\) channel. Given the dominant decay modes \(h_{\rm f}\to\gamma\gamma\) and \(H^{\pm}\to h_{\rm f}W^{\pm}/\tau\nu\), several studies have explored new signatures such as \(4\gamma+V\)[14; 15; 23; 36; 37], \(4\gamma+VV^{\prime}\)[38], and \(\tau^{\pm}\nu\gamma\gamma\)[27].
Footnote 1: Here, \(\alpha\) denotes the mixing angle between the two \(CP\)-even Higgs bosons in the 2HDM.
Yet, there remains an unexplored territory for the light \(h_{\rm f}\) within the mass range \(m_{h_{\rm f}}\in[1,10]~{}{\rm GeV}\). Delving into this range is essential, as it encompasses numerous parameter points that meet theoretical prerequisites, experimental constraints, and a cutoff scale surpassing \(10~{}{\rm TeV}\). However, its signals at the LHC remain elusive to traditional search methodologies. This is primarily because the two photons from the \(h_{\rm f}\) decay are so highly collimated that they merge into a single jet, rather than manifesting as two isolated photons. Huge backgrounds from QCD jets would then obscure the \(h_{\rm f}\) signals.
To tackle this challenge, we propose investigating the subparticles within the jet using EFlow objects in the Delphes framework [39]. This novel methodology allows us to distinguish between QCD jets and signal jets housing two leading subparticles as photons, termed "diphoton jets". Although diphoton jets have been studied in the context of axion-like particles [40; 41; 42; 43; 44], no research has been conducted regarding the light fermiophobic Higgs boson. Our study addresses this gap for the first time.
Drawing from insights on diphoton jet studies, we will execute a meticulous simulation at the detector level for the signal-to-background analysis, spanning 18 benchmark points to comprehensively represent the viable parameter space. In the cut-based analysis, we will devise a strategy aimed at maximizing significances. Moreover, we will illustrate the potential for accurately reconstructing the masses of \(m_{h_{\rm f}}\) and \(M_{H^{\pm}}\). For challenging scenarios involving heavy charged Higgs bosons, we will turn to machine learning techniques [45; 46; 47; 48; 49; 50; 51], specifically employing one-dimensional convolutional neural networks (CNN) [52]. The improvements achieved through this approach mark significant contributions to the topic.
The structure of this paper is outlined as follows. In Sec. II.1, we offer a concise review of our model. Section II.2 details the scanning methodology used to determine the viable parameter space. We also explore the defining characteristics of these allowed parameter points, emphasizing the branching ratios of the BSM Higgs bosons. In Sec. II.3, the unique feature that the two photons from the \(h_{\rm f}\) decay appear as a single jet is clarified. Section III is dedicated to discussing the phenomenologies of the subparticles within the diphoton jet. We also provide a new method to subtract the significant pileups anticipated at the HL-LHC. In Sec. IV, we direct our focus towards the signal-to-background analysis in a cut-based approach. Section V sees us undertaking the task of mass reconstruction for both \(m_{h_{\rm f}}\) and \(M_{H^{\pm}}\). For the challenging cases involving heavy charged Higgs bosons, machine learning techniques come into play. These are detailed in Sec. VI. Finally, our conclusions are presented in Sec. VII.
## II Fermiophobic type-I with very light \(h_{\rm f}\)
### Review of the fermiophobic type-I
Let us briefly review the type-I 2HDM with a light fermiophobic Higgs boson. The 2HDM introduces two \(SU(2)_{L}\) complex scalar doublet fields with hypercharge \(Y=1\)[53]:
\[\Phi_{i}=\left(\begin{array}{c}w_{i}^{+}\\ \dfrac{v_{i}+\rho_{i}+i\eta_{i}}{\sqrt{2}}\end{array}\right)\quad \text{for }i=1,2. \tag{1}\]
Here, \(v_{1}\) and \(v_{2}\) denote the vacuum expectation values of \(\Phi_{1}\) and \(\Phi_{2}\), respectively, defining \(\tan\beta=v_{2}/v_{1}\). The electroweak symmetry is spontaneously broken by \(v=\sqrt{v_{1}^{2}+v_{2}^{2}}=246\text{ GeV}\). For the sake of simplicity in notation, we will use \(s_{x}=\sin x\), \(c_{x}=\cos x\), and \(t_{x}=\tan x\) in what follows.
In order to prevent flavor changing neutral currents (FCNCs) at the tree level, a discrete \(Z_{2}\) symmetry is imposed, under which \(\Phi_{1}\to\Phi_{1}\) and \(\Phi_{2}\to-\Phi_{2}\)[54; 55]. Assuming \(CP\)-invariance and softly broken \(Z_{2}\) symmetry, the scalar potential is written as
\[\begin{split} V_{\Phi}&=m_{11}^{2}\Phi_{1}^{\dagger }\Phi_{1}+m_{22}^{2}\Phi_{2}^{\dagger}\Phi_{2}-m_{12}^{2}(\Phi_{1}^{\dagger} \Phi_{2}+\text{H.c.})+\dfrac{\lambda_{1}}{2}(\Phi_{1}^{\dagger}\Phi_{1})^{2}+ \dfrac{\lambda_{2}}{2}(\Phi_{2}^{\dagger}\Phi_{2})^{2}\\ &\qquad+\lambda_{3}(\Phi_{1}^{\dagger}\Phi_{1})(\Phi_{2}^{ \dagger}\Phi_{2})+\lambda_{4}(\Phi_{1}^{\dagger}\Phi_{2})(\Phi_{2}^{\dagger} \Phi_{1})+\dfrac{\lambda_{5}}{2}\left[(\Phi_{1}^{\dagger}\Phi_{2})^{2}+\text{ H.c.}\right].\end{split} \tag{2}\]
Within this framework, five physical Higgs bosons emerge: the lighter \(CP\)-even scalar \(h\), the heavier \(CP\)-even scalar \(H\), the \(CP\)-odd pseudoscalar \(A\), and a pair of charged Higgs bosons \(H^{\pm}\). These physical Higgs bosons are related with the weak eigenstates in Equation 1 through two mixing angles, namely \(\alpha\) and \(\beta\)[32]. The SM Higgs boson \(h_{\rm SM}\) is a linear combination of \(h\) and \(H\), expressed as \(h_{\rm SM}=s_{\beta-\alpha}h+c_{\beta-\alpha}H\). Since the Higgs boson observed at the LHC has shown remarkable alignment with the predicted behavior of \(h_{\rm SM}\)[56; 57; 58; 59; 60; 70], we have two plausible scenarios, the normal scenario where \(h\simeq h_{\rm SM}\) and the inverted scenario where \(H\simeq h_{\rm SM}\). To accommodate a light fermiophobic Higgs boson, we focus on type-I within the inverted Higgs scenario. In type-I, every Yukawa coupling associated with \(h\) is proportional to \(c_{\alpha}\). Therefore, by merely setting \(\alpha=\pi/2\), \(h\) acquires fermiophobic characteristics, which endure even when loop corrections are considered [12; 13]. For brevity in subsequent discussions, we will denote the type-I 2HDM with \(\alpha=\pi/2\) in the inverted Higgs scenario as the fermiophobic type-I and the lighter \(CP\)-even Higgs boson with \(\alpha=\pi/2\) as \(h_{\rm f}\).
The Yukawa couplings of the SM fermions are parametrized by
\[\mathscr{L}^{\rm Yuk}= -\sum_{f}\left(\dfrac{m_{f}}{v}\xi_{f}^{h}\bar{f}fh_{\rm f}+ \dfrac{m_{f}}{v}\kappa_{f}^{H}\bar{f}fH-i\dfrac{m_{f}}{v}\xi_{f}^{A}\bar{f} \gamma_{5}fA\right)\] \[-\left\{\dfrac{\sqrt{2}}{v}\bar{t}\left(m_{t}\xi_{t}^{A}P_{-}+m_{b }\xi_{b}^{A}P_{+}\right)bH^{+}+\dfrac{\sqrt{2}m_{\tau}}{v}\xi_{\tau}^{A}\, \overline{\nu}_{\tau}P_{+}\tau H^{+}+\text{H.c.}\right\},\]
where \(P_{\pm}=(1\pm\gamma^{5})/2\). In the fermiophobic type-I, the Yukawa coupling modifiers are given by
\[\xi_{f}^{h_{\rm f}}=0,\quad\kappa_{f}^{H}=\frac{\sqrt{1+t_{\beta}^{2}}}{t_{\beta }},\quad\xi_{t}^{A}=-\xi_{b}^{A}=-\xi_{\tau}^{A}=\frac{1}{t_{\beta}}. \tag{3}\]
To be consistent with the current best-fit results for the Peskin-Takeuchi oblique parameters [71], an additional assumption is introduced: \(M_{A}=M_{H^{\pm}}\equiv M_{A/H^{\pm}}\). In summary, the complete set of model parameters includes:
\[\{m_{h_{\rm f}},\;M_{A/H^{\pm}},\;m_{12}^{2},\;t_{\beta}\}. \tag{4}\]
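As a concrete numerical check of Equation 3, the short snippet below (our own illustration, not part of any public 2HDM code) evaluates the coupling modifiers for a representative \(t_{\beta}\); note that \(\kappa_{f}^{H}\to 1\) at large \(t_{\beta}\), as expected for an SM-like \(H\) in the inverted scenario.

```python
import math

def yukawa_modifiers(tan_beta):
    """Coupling modifiers of the fermiophobic type-I, Equation 3."""
    xi_hf = 0.0                                        # h_f is fermiophobic
    kappa_H = math.sqrt(1.0 + tan_beta**2) / tan_beta  # approaches 1 at large tan(beta)
    xi_A_top = 1.0 / tan_beta                          # xi_t^A = -xi_b^A = -xi_tau^A
    return xi_hf, kappa_H, xi_A_top

print(yukawa_modifiers(10.0))  # -> (0.0, 1.005..., 0.1)
```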
### Viable parameter space for very light \(h_{\rm f}\)
In the quest to discover the light \(h_{\rm f}\) at the LHC, our preliminary task involves a systematic scan of the parameter space to identify viable candidates that comply with both theoretical requirements and experimental constraints. Our scan encompasses the following ranges:
\[m_{h_{\rm f}}\in[1,30]\;{\rm GeV},\quad M_{A/H^{\pm}}\in[80,900]\;{\rm GeV},\quad t_{\beta}\in[0.5,50],\quad m_{12}^{2}\in[0,20000]\;{\rm GeV}^{2}. \tag{5}\]
We consider only positive values for \(m_{12}^{2}\) since preliminary scans indicate that parameter points with negative \(m_{12}^{2}\) fail to meet the vacuum stability condition.
Within this extensive parameter space, we apply a cumulative series of constraints, outlined as follows (a schematic scan loop is sketched after the list):2
Footnote 2: Due to our assumption \(M_{H^{\pm}}=M_{A}\), we disregard constraints from the Peskin-Takeuchi oblique parameters, as the new contributions from the BSM Higgs bosons become negligible [72; 73].
**Step A**: Theoretical requirements and the low energy data
1. We use the public code 2HDMC to ensure the bounded-from-below condition for the Higgs potential [74], tree-level unitarity of scalar-scalar scatterings [75; 53], and perturbativity of the Higgs quartic couplings [31]. Additionally, the vacuum stability condition is enforced [76; 77; 78].
2. We demand alignment with the FCNC data, particularly emphasizing the inclusive \(B\)-meson decay measurements into \(X_{s}\gamma\) at the 95% C.L. [79; 80; 81].
3. We require the cutoff scale \(\Lambda_{\rm cut}\) to exceed \(10\;{\rm TeV}\). To determine this, we run the model parameters under the renormalization group equations using the public 2HDME code [82; 83]. The cutoff scale is defined by the energy scale at which any of the three conditions--tree-level unitarity, perturbativity, or vacuum stability--is violated [27].
**Step B**: High energy collider data
1. We examine direct search constraints from LEP, Tevatron, and LHC experiments, excluding parameter points with a cross section above the observed \(2\sigma\) band. We used the public code HiggsBounds-v5.10.2 [84].
2. We assess alignment with Higgs precision data utilizing HiggsSignals-v2.6.2 [85]. We mandate that the cross section of a parameter point lies within \(2\sigma\) confidence levels in relation to the model's optimal fit point.
3. We consider additional measurements sensitive to the light fermiophobic Higgs boson. This includes \(e^{+}e^{-}\to h_{\rm f}(\to\gamma\gamma)Z\), \(e^{+}e^{-}\to h_{\rm f}(\to\gamma\gamma)A(\to b\bar{b}/h_{\rm f}Z)\)[86], \(p\bar{p}\to h_{\rm f}H^{\pm}(\to h_{\rm f}W^{\pm})\to 4\gamma X\)[87], and \(pp\to H\to h_{\rm f}h_{\rm f}\to 4\gamma\)[88]. Parameter points yielding a cross section above the \(2\sigma\) bound are excluded.
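The scan itself can be organized as a simple rejection loop over the ranges of Equation 5. The sketch below is schematic: the `passes_step_A` and `passes_step_B` predicates are placeholders for the external tools listed above (2HDMC, 2HDME, HiggsBounds, HiggsSignals, and the dedicated light-\(h_{\rm f}\) search limits), which are separate programs with their own interfaces.

```python
import random

# Scan ranges of Equation 5
RANGES = {
    "m_hf":   (1.0, 30.0),       # GeV
    "M_AHpm": (80.0, 900.0),     # GeV (M_A = M_{H^+-})
    "tb":     (0.5, 50.0),
    "m12sq":  (0.0, 20000.0),    # GeV^2
}

def sample_point():
    """Draw one random parameter point within the scan ranges."""
    return {key: random.uniform(lo, hi) for key, (lo, hi) in RANGES.items()}

# Placeholder predicates: in practice each wraps the external tools of Steps A and B.
def passes_step_A(point):
    return True

def passes_step_B(point):
    return True

def scan(n_trials):
    """Keep the points that survive the cumulative constraints."""
    return [p for p in (sample_point() for _ in range(n_trials))
            if passes_step_A(p) and passes_step_B(p)]

viable_points = scan(10_000)
```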
Let us begin by examining the survival rates after each constraint is applied. We use the parameter points that satisfy Step A(1) as our reference dataset, from which all subsequent survival rates are calculated. Upon implementing the FCNC constraint in Step A(2), a respectable 73.3% of points persist. The enforcement of \(\Lambda_{\rm cut}>10\) TeV in Step A(3) further refines our pool, leaving 26.6% of points standing. Progressing to Step B(1), our selection tightens, whittling down to a mere 2.03%. Upon assimilation of the Higgs precision data in Step B(2), around 1.94% survive. Ultimately, after accounting for Step B(3), 1.38% of the parameter points from A(1) endure.
Now we investigate the characteristics of the parameter points satisfying all imposed constraints. In Figure 1, we present \(M_{H^{\pm}}\) versus \(m_{h_{\rm f}}\) with the color code of \(\Lambda_{\rm cut}\) (left panel), and \(t_{\beta}\) versus \(m_{h_{\rm f}}\) with the color code of \(m_{12}^{2}\) (right panel). For visualization clarity, we have ordered the parameter points by ascending values of \(\Lambda_{\rm cut}\) in the left panel and \(m_{12}^{2}\) in the right panel. This stacking method ensures that points with lower \(\Lambda_{\rm cut}\) (or \(m_{12}^{2}\)) are positioned underneath [89].

Figure 1: \(M_{H^{\pm}}\) versus \(m_{h_{\rm f}}\) with a color-code of \(\Lambda_{\rm cut}\) in GeV (left panel), and \(t_{\beta}\) versus \(m_{h_{\rm f}}\) with a color-code of \(m_{12}^{2}\) in units of GeV\({}^{2}\) (right panel). All depicted parameter points satisfy the complete set of theoretical and experimental constraints. The parameter points are ordered by ascending values of \(\Lambda_{\rm cut}\) in the left panel and \(m_{12}^{2}\) in the right panel.
Turning to the \(M_{A/H^{\pm}}\) versus \(m_{h_{\rm f}}\) plot, we notice several distinct features. First, the density of viable parameter points varies noticeably with the \(m_{h_{\rm f}}\) value. Specifically, the number of viable parameter points per unit mass for the intervals \([1,10]\) GeV, \([10,20]\) GeV, and \([20,30]\) GeV has a ratio of \(1\,\):\(\,0.71\,\):\(\,0.0058\). These significant variations arise from the following direct search constraints:
* The measurement of \(pp\to h_{\rm SM}\to h_{\rm f}h_{\rm f}\to 4\gamma\) by the ATLAS Collaboration significantly constrains the parameter space for \(m_{h_{\rm f}}\in[10,30]\) GeV [90].
* The examination of \(e^{+}e^{-}\to h_{\rm f}Z\to\gamma\gamma Z\) by the ALEPH Collaboration eliminates nearly all parameter points in \(m_{h_{\rm f}}\in[20,30]\) GeV [91].
Considering the markedly higher survival percentages, the mass range of \(m_{h_{\rm f}}\in[1,10]\) GeV warrants thorough investigation,3 an endeavor not yet undertaken in existing literature. The second notable feature is the presence of the upper bound on \(M_{A/H^{\pm}}\), approximately at 330 GeV. This upper bound exhibits a tendency to decrease as \(\Lambda_{\rm cut}\) increases: when \(\Lambda_{\rm cut}>100\) TeV, the upper threshold reduces4 to \(M_{A/H^{\pm}}\lesssim 280\text{ GeV}\). These features hold promising implications for the HL-LHC, where the center-of-mass energy of 14 TeV offers a favorable environment for producing \(H^{\pm}\).
Footnote 3: A high survival percentage alone does not inherently validate any model parameter, since nature chooses one parameter point. But prioritizing parameter regions with a higher likelihood is a prudent strategy.
Footnote 4: The inverse is not necessarily true: a smaller \(M_{A/H^{\pm}}\) does not automatically imply a larger \(\Lambda_{\rm cut}\). Note that the blue points are positioned below the red ones.
In the \(t_{\beta}\) versus \(m_{h_{\rm f}}\) plot, three significant features stand out. First, lower bounds on \(t_{\beta}\) emerge, characterized by \(t_{\beta}\gtrsim 4\). This happens because the Yukawa couplings of the BSM Higgs bosons increase as \(t_{\beta}\) decreases, as illustrated in Equation 3. The second salient feature is an evident transition at \(m_{h_{\rm f}}\simeq 10\text{ GeV}\). Beneath this threshold, the distribution of permissible parameter points uniformly spans the \(t_{\beta}\in[4,50]\) range. For \(m_{h_{\rm f}}>10\text{ GeV}\), however, there is an upper limit on \(t_{\beta}\), progressively declining as \(m_{h_{\rm f}}\) increases. This transition around \(m_{h_{\rm f}}=10\text{ GeV}\) stems from the notably light mass of \(h_{\rm f}\), leading to decay products in high-energy colliders that are challenging to discern. Finally, the \(m_{12}^{2}\) distribution primarily leans towards the lower end, peaking around \(26\text{ GeV}^{2}\). This small \(m_{12}^{2}\) hints at the approximate preservation of \(Z_{2}\) parity in the fermiophobic type-I, because only the \(m_{12}^{2}\) term breaks \(Z_{2}\) parity.
Given these characteristics of the fermiophobic type-I model, we concentrate on the following mass range for \(h_{\rm f}\):
\[m_{h_{\rm f}}\in[1,10]\text{ GeV}. \tag{6}\]
In subsequent discussions and investigations, we will refer to \(h_{\rm f}\) within this mass range as a "very light" \(h_{\rm f}\).
Given the distinct characteristics of the fermiophobic type-I model, our attention is directed towards the discovery potential of the HL-LHC for the very light \(h_{\rm f}\). Central to this are its decay modes and production channels. The decay pattern for this particle is unambiguous, with \({\rm Br}(h_{\rm f}\to\gamma\gamma)\simeq 100\%\). Its primary production mechanisms at the LHC occur in association with other BSM Higgs bosons, specifically \(pp\to W^{*}\to h_{\rm f}H^{\pm}\) and \(pp\to Z^{*}\to h_{\rm f}A\)[27; 38]. As a result, the final states arising from these production avenues are intrinsically tied to the decay patterns of \(H^{\pm}\) and \(A\).
In Figure 2, we depict \({\rm Br}(H^{\pm}\to W^{\pm}h_{\rm f})\) versus \(m_{h_{\rm f}}\) (left panel) and \({\rm Br}(A\to Zh_{\rm f})\) versus \(m_{h_{\rm f}}\) (right panel) across all the viable parameter points, with the color codes signifying \(\Lambda_{\rm cut}\) values in GeV. Notably, \(H^{\pm}\to W^{\pm}h_{\rm f}\) and \(A\to h_{\rm f}Z\) surface as the predominant decay channels, with \({\rm Br}(H^{\pm}\to W^{\pm}h_{\rm f})\) and \({\rm Br}(A\to Zh_{\rm f})\) surpassing 88% and 96%, respectively. A high cutoff scale, such as \(\Lambda_{\rm cut}\sim 10^{14}\) GeV, results in nearly 100% branching ratios for both \(H^{\pm}\to h_{\rm f}W^{\pm}\) and \(A\to h_{\rm f}Z\). Hence, two primary candidates for discovery channels present themselves: \(pp\to h_{\rm f}H^{\pm}(\to h_{\rm f}W^{\pm})\) and \(pp\to h_{\rm f}A(\to h_{\rm f}Z)\). Considering the dominant charged-current production and the larger branching ratio of the leptonic decays of \(W^{\pm}\) compared to \(Z\), we propose the following as the golden channel to probe the very light \(h_{\rm f}\):
\[pp\to W^{*}\to h_{\rm f}H^{\pm}(\to h_{\rm f}W^{\pm})\to\gamma\gamma+\gamma \gamma+\ell^{\pm}E_{T}^{\rm miss}, \tag{7}\]
where \(\ell^{\pm}=e^{\pm},\mu^{\pm}\). In our comprehensive analysis, we also incorporate the decay mode \(W^{\pm}\to\tau^{\pm}\nu\), which is subsequently followed by \(\tau^{\pm}\to\ell^{\pm}\nu\nu\). The corresponding Feynman diagram is depicted in Figure 3.
### Signature of the golden channel \(pp\to h_{\rm f}H^{\pm}\)
Let us now present the parton-level cross section of the proposed golden channel for \(h_{\rm f}\). Initially, we generated the Universal FeynRules Output (UFO) [92] for the fermiophobic type-I through FeynRules[93]. Incorporating this UFO file into MadGraph5-aMC@NLO[94], we determined the cross-sections of \(pp\to H^{\pm}h_{\rm f}\) at the 14 TeV LHC. For the parton distribution function, we adopted the NNPDF31_lo_as_0118 set [95]. The branching ratios of \(h_{\rm f}\) and \(H^{\pm}\) were obtained from 2HDMC [96], and subsequently multiplied by the cross-sections.
In Figure 4, the scatter plot shows the parton-level cross sections for \(m_{h_{\rm f}}=5\) GeV against the charged Higgs boson mass, spanning all viable parameter points.5 The color code represents \(\Lambda_{\rm cut}\). An expected correlation appears between the cross section and \(M_{H^{\pm}}\): as \(M_{H^{\pm}}\) increases, \(\sigma_{\rm tot}\) decreases. Additionally, for a given \(M_{H^{\pm}}\), the cross sections across all viable parameter points are nearly constant, with deviations of less than 10%. A compelling feature is the substantial size of the signal cross section. Even the minimum cross section, encountered when \(M_{H^{\pm}}\simeq 330\) GeV, reaches a significant \(\sim 7\) fb.
Footnote 5: According to our analysis, the cross sections for cases with \(m_{h_{\rm f}}=1\) GeV and \(m_{h_{\rm f}}=10\) GeV align closely with those for \(m_{h_{\rm f}}=5\) GeV, mostly deviating by about 1%.
Despite these considerable signal cross sections, distinguishing the signal from the background at the HL-LHC remains a challenge. At first glance, a final state comprised of four photons, a lepton, and missing transverse energy might seem to suppress major QCD backgrounds. But the reality is more intricate. When the \(h_{\rm f}\) decays into two photons at high-energy colliders, the resulting photons are not typically isolated because they are tightly collimated
within a radius of \(\Delta R<0.4\). Here \(\Delta R\) is the angular distance, given by \(\Delta R=\sqrt{(\Delta\eta)^{2}+(\Delta\phi)^{2}}\). Still, these photons register an energy deposit in the calorimeters, eventually being recognized and grouped as a jet. This leads to substantial QCD backgrounds.
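The degree of collimation can be estimated with the rough relation \(\Delta R_{\gamma\gamma}\sim 2m_{h_{\rm f}}/p_{T}\) quoted later in Sec. IV. The short numerical illustration below (our own, with arbitrary example kinematics) shows that a \(5~{\rm GeV}\) \(h_{\rm f}\) with \(p_{T}\) of order \(100~{\rm GeV}\) produces a photon pair well inside the \(R=0.4\) jet cone.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt((d eta)^2 + (d phi)^2), with phi wrapped."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def approx_diphoton_separation(m_hf, pt):
    """Rough opening angle of the photon pair from a boosted h_f -> gamma gamma."""
    return 2.0 * m_hf / pt

# Two nearly collinear photons (example kinematics)
print(delta_r(0.50, 1.20, 0.55, 1.28))              # ~0.094
for pt in (50.0, 100.0, 200.0):
    print(pt, approx_diphoton_separation(5.0, pt))   # 0.2, 0.1, 0.05 -- all < 0.4
```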
To better elucidate how the detector processes photons, let us briefly review the photon isolation criteria adopted by Delphes. Consider a photon candidate P, a stable particle that deposits its energy into the electromagnetic calorimeter (ECAL), while leaving no trace in the tracker. For P to be recognized as a photon, it should be sufficiently isolated from neighboring particles. In Delphes, this isolation is determined by requiring \(I(\texttt{P})<I_{\text{min}}\), so that \(I_{\text{min}}\) acts as an upper limit on the activity surrounding P. Here, the isolation variable \(I(\texttt{P})\) is expressed as:
\[I(\texttt{P})=\frac{\sum_{i\neq\texttt{P}}^{\Delta R<R_{\gamma}}p_{T}^{i}}{p_{ T}^{\texttt{P}}}, \tag{8}\]
where the numerator represents the combined transverse momenta of all particles (excluding P) that fall within a cone of radius \(R_{\gamma}\) centered around P. In the delphes_card_HLLHC.tcl utilized in subsequent analysis, the default settings are \(I_{\text{min}}=0.1\) and \(R_{\gamma}=0.3\).
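A minimal transcription of Equation 8 might look as follows; the particle container is a plain list of \((p_{T},\eta,\phi)\) tuples chosen for illustration rather than the actual Delphes data structures, and a candidate is kept as an isolated photon when the ratio stays below the threshold.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def isolation(candidate, particles, r_gamma=0.3):
    """I(P) of Equation 8: scalar pT sum inside the cone, divided by pT of P."""
    pt_p, eta_p, phi_p = candidate
    cone_sum = sum(pt for (pt, eta, phi) in particles
                   if (pt, eta, phi) != candidate
                   and delta_r(eta, phi, eta_p, phi_p) < r_gamma)
    return cone_sum / pt_p

def is_isolated(candidate, particles, i_min=0.1, r_gamma=0.3):
    """Keep the candidate as a photon only if little activity surrounds it."""
    return isolation(candidate, particles, r_gamma) < i_min

photon = (60.0, 0.1, 1.0)
event = [photon, (2.0, 0.2, 1.1), (1.5, -0.05, 0.9), (40.0, 2.0, -2.0)]
print(is_isolated(photon, event))  # True: only ~3.5 GeV inside the cone
```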
In Delphes, the photon isolation is evaluated concurrently with jet clustering. This procedure involves clustering EflowPhoton, EflowNeutralHadrons, and EflowChargedHadrons according to the energy flow algorithm. Once this process concludes, the definitive identification for P is set. If P satisfies the photon isolation criteria, it is recognized as a photon. Conversely, if P fails the criteria, it is designated as a jet.
To demonstrate our claim that the two photons from \(h_{\text{f}}\to\gamma\gamma\) are more likely to be recognized as a single jet, we conducted a comprehensive detector simulation for the signal with \(m_{h_{\text{f}}}=5\text{ GeV}\), \(M_{A/H^{\pm}}=150\text{ GeV}\), and \(\text{Br}(H^{\pm}\to h_{\text{f}}W^{\pm})=\text{Br}(h_{\text{f}}\to\gamma\gamma)=1\). Parton showering and hadronization were integrated using Pythia version 8.309 [97]. We employed the Delphes high-luminosity card delphes_card_HLLHC.tcl. For jet clustering, FastJet version 3.3.4 [98], deploying the anti-\(k_{T}\) algorithm [99], was utilized for the jet radius of \(R=0.4\). At this stage, we opted not to consider pileup effects.
In Figure 5, we present the photon multiplicity versus the jet multiplicity for the signal process at the detector level, using a color code to represent the normalized number of events. These results are derived from \(5\times 10^{5}\) events at the generation level. The findings in Figure 5 are striking. The signal event, which includes four photons at the parton level, results in a markedly different outcome at the detector level. Approximately 70% of events fall under \(N_{\gamma}=0\), while around 26% are categorized as \(N_{\gamma}=1\). Events with \(N_{\gamma}=2\) are scarce. Instead, the majority of signal events manifest as two jets.
| Background | Cross section [pb] | \(n_{\text{gen}}\) | Background | Cross section [pb] | \(n_{\text{gen}}\) |
|---|---|---|---|---|---|
| \(W^{\pm}(\to L^{\pm}\nu)jj\) | \(3.54\times 10^{3}\) | \(5\times 10^{8}\) | \(W^{\pm}Z\) | \(3.16\times 10\) | \(3\times 10^{6}\) |
| \(Z(\to L^{+}L^{-})jj\) | \(2.67\times 10^{2}\) | \(5\times 10^{7}\) | \(Z(\to L^{+}L^{-})j\gamma\) | \(2.09\) | \(10^{6}\) |
| \(t\bar{t}(\to b\bar{b}W_{L\nu}W_{jj})\) | \(1.23\times 10^{2}\) | \(1.2\times 10^{7}\) | \(ZZ\) | \(1.18\times 10\) | \(10^{6}\) |
| \(W^{\pm}(\to L^{\pm}\nu)j\gamma\) | \(2.53\times 10\) | \(3\times 10^{6}\) | \(W^{\pm}(\to L^{\pm}\nu)\gamma\gamma\) | \(3.28\times 10^{-2}\) | \(10^{6}\) |
| \(W^{+}W^{-}\) | \(8.22\times 10\) | \(9\times 10^{6}\) | \(Z(\to L^{+}L^{-})\gamma\gamma\) | \(1.12\times 10^{-2}\) | \(10^{6}\) |

Table 1: Parton-level cross sections of the backgrounds at the 14 TeV LHC, where \(L^{\pm}\) denotes \(e^{\pm},\mu^{\pm}\), or \(\tau^{\pm}\). The number of generated events, denoted as \(n_{\text{gen}}\), is also provided.
As the final state includes two jets, various backgrounds arise. We take into account a total of ten background processes: \(W^{\pm}(\to L^{\pm}\nu)jj\), \(Z(\to L^{+}L^{-})jj\), \(t\bar{t}(\to b\bar{b}W_{L\nu}W_{jj})\), \(W^{\pm}(\to L^{\pm}\nu)j\gamma\), \(W^{+}W^{-}\), \(W^{\pm}Z\), \(Z(\to L^{+}L^{-})j\gamma\), \(ZZ\), \(W^{\pm}(\to L^{\pm}\nu)\gamma\gamma\), and \(Z(\to L^{+}L^{-})\gamma\gamma\). Here, \(L^{\pm}\) represents \(e^{\pm},\mu^{\pm}\), or \(\tau^{\pm}\). Given that our signal process includes either one electron or one muon, we have incorporated the leptonic decays of \(W^{\pm}\) and \(Z\) into some dominant backgrounds.
In Table 1, we summarize the parton-level cross sections for the ten background processes at the 14 TeV LHC, applying generation-level cuts of \(p_{T}^{j}>20\) GeV, \(p_{T}^{L,\gamma}>10\) GeV, \(|\eta_{j}|<5\), \(|\eta_{L,\gamma}|<2.5\), and \(\Delta R_{ii^{\prime}}>0.4\) where \(i\) and \(i^{\prime}\) include all the particles in the final state. Due to the considerable differences in the cross sections among these background processes, we produce different event counts at the generation level, represented by \(n_{\rm gen}\) in Table 1. Notably, the background cross sections significantly exceed the signal cross section. If the analysis only considers collective objects like jets in the final state, distinguishing the signal from the backgrounds becomes almost infeasible. Consequently, devising a strategy targeting diphoton jets is pivotal for detecting the signal at the HL-LHC.
## III Jet Subparticles and Pileups
In the previous section, we illustrated that the four photons in our signal process, \(pp\to h_{\rm f}h_{\rm f}W^{\pm}\to 4\gamma W^{\pm}\), are predominantly tagged as two jets, not isolated photon entities. Given that these photons exist as subparticles within a jet, distinguishing this unique diphoton jet from a standard QCD jet necessitates a thorough analysis of the jet's subparticles. To enable this differentiation, we employ the EFlow objects within jets in the Delphes framework. These EFlow objects are divided into three categories: EflowPhoton, EflowNeutralHadrons, and EflowChargedHadrons, with each type determined by tracker and tower information. The tracker identifies charged particles through their characteristic ionization patterns within its system, while tower data focus on energy deposits in the calorimeter.
To enhance our understanding, let us revisit the interactions of particles within calorimeters. Photons, when passing through the electromagnetic calorimeter (ECAL), trigger an energy dispersion across its layers. Hadrons, on the other hand, deposit energy differently depending on their type. Neutral pions, for instance, decay promptly into a pair of photons, largely concentrating their energy within the ECAL. Meanwhile, stable hadrons like neutrons and charged pions predominantly channel their energy to the hadron calorimeter (HCAL). A notable scenario occurs with long-lived hadrons, such as Kaons and \(\Lambda\) baryons. With a decay length around 10 mm, they interact with both the ECAL and HCAL, resulting in a division of energy deposit as \(f_{\rm ECAL}=0.3\) and \(f_{\rm HCAL}=0.7\).
Yet, when it comes to utilizing jet subparticles, the issue of pileup poses a formidable challenge. Pileup, a byproduct of the high luminosity in hadron colliders, results from multiple proton-proton collisions within a single bunch crossing. At the HL-LHC, where roughly 200 pileup events are standard, discerning the diphoton jet from a QCD jet becomes intricate due to the flood of pileup-induced particles. Therefore, it is crucial to effectively subtract pileups in our analysis.
Several methods for pileup subtraction have been advanced, such as the jet vertex fraction [100], charged hadron subtraction (CHS) [101; 102], the Puppi method [103], and the SoftKiller method [104]. In our exploration, we cast a special focus on CHS and SoftKiller. The CHS technique leverages the capability of the detector to determine the vertex distance of charged tracks relative to the primary vertex. In contrast, SoftKiller is a fast event-level pileup subtraction tool, relying on a particle's transverse momentum to estimate the probability of being a pileup [105; 106].
Exploring the advantages of CHS and SoftKiller, we propose an optimal combination: a hybrid strategy named CHS+SK\({}_{0}\). This method first uses CHS to eliminate charged pileup particles, specifically targeting those with a vertex distance greater than 0.1 mm. Following this, SoftKiller comes into play, removing pileup photons and neutral hadrons that fall below a certain transverse momentum threshold. To avoid overcorrection, we have carefully configured SoftKiller to bypass charged hadrons.
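A minimal sketch of this hybrid subtraction is given below. It is not the actual Delphes or SoftKiller code: the particle record is a plain dictionary, and the SoftKiller-style threshold is estimated, in the spirit of the original algorithm, as the per-patch value that empties half of the \((\eta,\phi)\) grid.

```python
import numpy as np

def chs_sk0(particles, dz_cut=0.1, grid_size=0.4, eta_max=4.0):
    """Hedged sketch of the CHS+SK0 hybrid described in the text.

    `particles` is a list of dicts with keys pt, eta, phi, charge, dz,
    where dz is the longitudinal distance to the primary vertex in mm.
    """
    # Step 1 (CHS): drop charged particles displaced from the primary vertex
    after_chs = [p for p in particles
                 if p["charge"] == 0 or abs(p["dz"]) <= dz_cut]
    neutrals = [p for p in after_chs if p["charge"] == 0]
    charged = [p for p in after_chs if p["charge"] != 0]

    # Step 2 (SoftKiller-like): pT threshold from the neutral-particle grid
    n_eta = int(2 * eta_max / grid_size)
    n_phi = int(2 * np.pi / grid_size)
    grid = np.zeros((n_eta, n_phi))
    for p in neutrals:
        if abs(p["eta"]) >= eta_max:
            continue
        i = int((p["eta"] + eta_max) / grid_size)
        j = int((p["phi"] % (2 * np.pi)) / grid_size) % n_phi
        grid[i, j] = max(grid[i, j], p["pt"])
    pt_cut = np.median(grid)  # roughly half of the patches end up empty

    # Charged hadrons bypass SoftKiller, as configured in the text
    survivors = charged + [p for p in neutrals if p["pt"] > pt_cut]
    return survivors, pt_cut
```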
Before showcasing the impressive performance of CHS+SK\({}_{0}\), it is necessary to outline the crucial simulation steps involved. We need to make two important changes to the Delphes settings: first, we remove the pileup subtractors, and second, we turn off the unique object finder module. (However, we ensure that calculations for electron and muon isolation remain intact.) Following these adjustments, the refined output from Delphes is directed to a pileup subtraction module. In the final phase, jet clustering is executed.
We now turn our attention to demonstrating the exceptional performance of CHS+SK\({}_{0}\), utilizing jet images to provide a visual representation of the \(p_{T}\) distribution of jet subparticles across a \(\eta\times\phi\) grid. In Figure 6, jet images for the leading jet from the \(W^{\pm}jj\) background are presented, derived from an extensive sample of \(10^{5}\) events.
Preprocessing involves translation and normalization techniques, as outlined in Ref. [44]. We then sum the transverse momenta of all the subparticles within the jet and represent the intensity using \(\log r\), where \(r\) is the ratio of the subparticle's \(p_{T}\) to the mother jet's \(p_{T}\). The \(\log r\) information is distributed across the recalibrated \(\eta\) and \(\phi\) coordinates of each subparticle, which are now positioned relative to their mother jet. Here, we have adopted a pixel size of \(\Delta\eta\times\Delta\phi=0.02\times 0.02\), reflecting the resolution of the simulated CMS electromagnetic calorimeter.
Equipped with these jet images, we are ready to conduct a comprehensive comparison between the CHS+SK\({}_{0}\) and SoftKiller subtraction methods. In Figure 6, we explore three scenarios of pileups. The top panels present jet images in the absence of pileups, providing a reference for our pileup subtraction efforts. The middle and bottom panels, on the other hand, display jet images with 200 pileups, processed using the CHS+SK\({}_{0}\) and SoftKiller subtraction methods, respectively.
Further breaking down our analysis, we categorize it into four distinct channels, each illustrated column-wise: total jet, EflowPhotons, EflowChargedHadrons, and EflowNeutralHadrons. As Figure 6 vividly demonstrates, the CHS+SK\({}_{0}\) method significantly outperforms its counterpart, particularly in the efficient removal of charged pileup hadrons. This leads us to opt for the CHS+SK\({}_{0}\) subtraction technique for our subsequent analyses, incorporating all 200 pileup events.

Figure 6: Jet images of the \(W^{\pm}jj\) background, where the color scale indicates the logarithm of the ratio of subparticle \(p_{T}\) to the mother jet’s \(p_{T}\). We examine three pileup subtraction scenarios: no pileup (upper panels), 200 pileups using the CHS+SK\({}_{0}\) subtraction method (middle panels), and 200 pileups using the SoftKiller (lower panels). The presentation spans four distinct jet image types: total jet images (first column), EflowPhotons (second column), EflowNeutralHadrons (third column), and EflowChargedHadrons (fourth column).
Finally, we establish clear definitions for our terminology related to jets:
**Jet (\(J\)):** A jet encompasses all physical entities that deposit energy in the calorimeters and undergo clustering by a jet algorithm. It is represented as \(J\).

**Diphoton Jet (\(J_{\gamma\gamma}\)):** A clustered jet is termed a diphoton jet if its two leading subparticles are EFlowPhotons. We denote this as \(J_{\gamma\gamma}\) (a minimal tagging sketch follows these definitions).

**QCD Jet (\(j\)):** A QCD jet, stemming from quarks or gluons, is represented as \(j\).

**Subparticle (\(s_{ij}\)):** Each EFlow object inside a jet is referred to as a subparticle. The notation \(s_{ij}\) denotes the \(i\)-th subparticle in the \(j\)-th jet. Both jets and subparticles are arranged in descending order of their \(p_{T}\).
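Operationally, the diphoton-jet tag then reduces to a check on the two \(p_{T}\)-leading constituents, as anticipated in the definition above. The sketch below uses a generic (pt, kind) record of our own choosing in place of the actual EFlow classes.

```python
def is_diphoton_jet(jet_constituents):
    """Tag a jet as a diphoton jet J_gammagamma.

    `jet_constituents` is a list of (pt, kind) pairs, with kind one of
    "photon", "neutral_hadron", "charged_hadron" (standing in for the
    EFlow classes). The jet is tagged when its two leading-pT
    subparticles are both photons.
    """
    leading = sorted(jet_constituents, key=lambda c: c[0], reverse=True)[:2]
    return len(leading) == 2 and all(kind == "photon" for _, kind in leading)

# Example: a jet whose two hardest subparticles are photons
jet = [(45.0, "photon"), (38.0, "photon"), (2.1, "charged_hadron"), (0.8, "neutral_hadron")]
print(is_diphoton_jet(jet))  # True
```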
## IV Cut-based analysis
In this section, we perform a signal-to-background analysis using the traditional cut-based approach. Our primary goal is to attain high signal significances across the entire parameter space for the very light \(h_{\mathrm{f}}\). To achieve this, we analyze 18 benchmark parameter points, as listed in Table 2. For each signal benchmark point, we generate \(3\times 10^{6}\) events. Additionally, we consider the ten background processes specified in Table 1. All events are processed through Pythia8 and Delphes, employing the Delphes configuration outlined in the preceding section.
| BP no. | \(m_{h_{\mathrm{f}}}\) | \(M_{A/H^{\pm}}\) | \(s_{\beta-\alpha}\) | \(m_{12}^{2}\) [GeV\({}^{2}\)] | \(t_{\beta}\) |
|---|---|---|---|---|---|
| BP-1 | 1 GeV | 150 GeV | \(-0.123\) | 0.0786 | 8.06 |
| BP-2 | 1 GeV | 175 GeV | \(-0.0909\) | 0.0400 | 11.0 |
| BP-3 | 1 GeV | 200 GeV | \(-0.0929\) | 0.0813 | 10.7 |
| BP-4 | 1 GeV | 250 GeV | \(-0.0941\) | 0.0494 | 10.6 |
| BP-5 | 1 GeV | 300 GeV | \(-0.0985\) | 0.0237 | 10.1 |
| BP-6 | 1 GeV | 331 GeV | \(-0.0974\) | 0.0634 | 10.2 |
| BP-7 | 5 GeV | 150 GeV | \(-0.0737\) | 0.305 | 13.5 |
| BP-8 | 5 GeV | 175 GeV | \(-0.0922\) | 2.20 | 10.8 |
| BP-9 | 5 GeV | 200 GeV | \(-0.0983\) | 1.93 | 10.1 |
| BP-10 | 5 GeV | 250 GeV | \(-0.0907\) | 1.99 | 11.0 |
| BP-11 | 5 GeV | 300 GeV | \(-0.0984\) | 1.84 | 10.1 |
| BP-12 | 5 GeV | 331 GeV | \(-0.0920\) | 2.17 | 10.8 |
| BP-13 | 10 GeV | 150 GeV | \(-0.0748\) | 1.17 | 13.3 |
| BP-14 | 10 GeV | 175 GeV | \(-0.0993\) | 1.70 | 10.0 |
| BP-15 | 10 GeV | 200 GeV | \(-0.0919\) | 0.973 | 10.8 |
| BP-16 | 10 GeV | 250 GeV | \(-0.0974\) | 0.851 | 10.2 |
| BP-17 | 10 GeV | 300 GeV | \(-0.0917\) | 0.0396 | 10.9 |
| BP-18 | 10 GeV | 328.3 GeV | \(-0.0979\) | 1.15 | 10.2 |

Table 2: Benchmark points for the very light \(h_{\mathrm{f}}\). All the parameter points satisfy the theoretical and experimental conditions.
With our simulated data set ready, we implement the basic selection criteria as follows (a schematic implementation is sketched after the list):
* There must be exactly one lepton with \(p_{T}^{\ell}>20\) GeV and \(|\eta_{\ell}|<2.5\).
* The leading jet is required to satisfy \(p_{T}^{J_{1}}>50\) GeV and \(|\eta_{J_{1}}|<2.5\).
* The subleading jet should fulfill the conditions \(p_{T}^{J_{2}}>30\) GeV and \(|\eta_{J_{2}}|<2.5\).
* The missing transverse energy should exceed \(E_{T}^{\rm miss}>10\) GeV.
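A schematic implementation of these requirements, promised above, could read as follows; the per-event record is a plain dictionary of reconstructed objects introduced purely for illustration.

```python
def passes_basic_selection(event):
    """Basic selection: exactly one lepton, two hard central jets, and MET.

    `event` holds lists of (pt, eta) pairs for "leptons" and "jets"
    (jets sorted by descending pt), plus the scalar missing transverse
    energy under "met", all in GeV.
    """
    leptons = [(pt, eta) for pt, eta in event["leptons"]
               if pt > 20.0 and abs(eta) < 2.5]
    if len(leptons) != 1:
        return False
    jets = event["jets"]
    if len(jets) < 2:
        return False
    (pt1, eta1), (pt2, eta2) = jets[0], jets[1]
    if not (pt1 > 50.0 and abs(eta1) < 2.5):
        return False
    if not (pt2 > 30.0 and abs(eta2) < 2.5):
        return False
    return event["met"] > 10.0
```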
In pursuit of optimizing signal significances, we highlight two distinguishing characteristics of our signal: (i) the two leading subparticles in two leading jets are predominantly EFlowPhotons; (ii) these leading subparticles contribute significantly to the transverse momentum of their mother jet.
To highlight the first characteristic, we present the probabilities \(P(h_{\rm f}\to J_{\gamma\gamma})\) and \(P(j\to J_{\gamma\gamma})\) against the \(p_{T}\) of the mother jet in Figure 7. Results for the leading and subleading jets are presented in the left and right panels, respectively. \(P(h_{\rm f}\to J_{\gamma\gamma})\) represents the probability of the two photons from an \(h_{\rm f}\) decay being identified as a diphoton jet, with the red, green, and orange lines corresponding to benchmark points BP-1, BP-7, and BP-13, respectively. On the other hand, \(P(j\to J_{\gamma\gamma})\) denotes the rate at which a QCD jet is misidentified as a diphoton jet in the \(W^{\pm}jj\) background.6
Footnote 6: A thorough analysis reveals that \(P(j\to J_{\gamma\gamma})\) in the \(Zjj\) background is similar to that in the \(W^{\pm}jj\) background, within 10%.
Figure 7: \(P(h_{\rm f}\to J_{\gamma\gamma})\) and \(P(j\to J_{\gamma\gamma})\) as functions of \(p_{T}^{J}\), for the leading jet in the left panel and the subleading jet in the right panel. \(P(h_{\rm f}\to J_{\gamma\gamma})\) represents the probability of two photons from \(h_{\rm f}\) being identified as a diphoton jet, while \(P(j\to J_{\gamma\gamma})\) is the rate of a QCD jet tagged as a diphoton jet in the \(W^{\pm}jj\) background. The red, green, and orange lines depict signal results for benchmark points BP-1, BP-7, and BP-13, respectively.
For the signal, the probability \(P(h_{\rm f}\to J_{\gamma\gamma})\) remains substantial, consistently surpassing 40% when \(p_{T}^{J}\geq 50\text{ GeV}\). However, the relationship between this probability and \(p_{T}^{J}\) varies with \(m_{h_{\rm f}}\). For BP-7 (\(m_{h_{\rm f}}=5\text{ GeV}\)) and BP-13 (\(m_{h_{\rm f}}=10\text{ GeV}\)), the probability rises with increasing \(p_{T}^{J}\), reaching approximately 85%. In contrast, BP-1 (\(m_{h_{\rm f}}=1\text{ GeV}\)) shows a distinct pattern: an initial increase, followed by a peak, and then a decrease as \(p_{T}^{J}\) rises. This behavior can be attributed to the small \(m_{h_{\rm f}}\) value in BP-1. Since \(R_{\gamma\gamma}\sim 2m_{h_{\rm f}}/p_{T}\), in some events the two photons with high \(p_{T}^{J}\) are so collimated that they nearly merge into a single EFlowPhoton, making them challenging to identify as a diphoton jet. Nevertheless, the probability value even for BP-1 remains sizable, hovering around 40%. On the other hand, the mistagging rate \(P(j\to J_{\gamma\gamma})\) is only a few percent, demonstrating a clear distinction between signal and background.
The second salient feature of the signal is the large ratios of \(p_{T}\) of the two leading subparticles to the \(p_{T}\) of their mother jet \(J\). In the case of the signal, the diphoton jet is mainly composed of two hard photons, resulting in the leading and subleading subparticles holding a considerable share of \(p_{T}^{J}\). In contrast, a QCD jet consists of a diverse mix of particles, numbering from tens to well over a hundred. Consequently, it is rare for the two leading subparticles in a QCD jet to occupy a significant portion of \(p_{T}^{J}\). To more vividly illustrate this distinction, we define:
\[r_{ij}=\frac{p_{T}^{s_{ij}}}{p_{T}^{J_{j}}}\quad\text{for }i,j=1,2. \tag{9}\]
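In code, this ratio is a one-line operation on the \(p_{T}\)-ordered constituents (again with plain numbers for illustration):

```python
def pt_ratio(jet_pt, constituent_pts, i):
    """r_ij of Equation 9: pT of the i-th leading subparticle over the jet pT."""
    ordered = sorted(constituent_pts, reverse=True)
    return ordered[i - 1] / jet_pt

# Example: a 100 GeV jet whose two leading subparticles carry 55 and 30 GeV
pts = [55.0, 30.0, 10.0, 5.0]
print(pt_ratio(100.0, pts, 1), pt_ratio(100.0, pts, 2))  # -> 0.55 0.3
```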
To demonstrate this second feature, we present in Figure 8 the normalized distributions of \(r_{ij}\) for both the signal BP-7 (in red) and the \(W^{\pm}jj\) background7 (in blue) after the basic selection. The left panel showcases the \(p_{T}\) ratio for the leading subparticle, \(r_{1i}\), while the right panel focuses on the subleading subparticle, \(r_{2i}\). Solid lines depict results for \(J_{1}\), and dashed lines correspond to \(J_{2}\).

Figure 8: Normalized distributions of \(r_{ij}\) for the signal in BP-7 (red) and the \(W^{\pm}jj\) background (blue) after the basic selection. Here, \(r_{ij}\) is the \(p_{T}\) ratio defined in Equation 9. The left panel presents the results for the leading subparticle, while the right panel focuses on the second-leading subparticle. Solid lines correspond to results for the leading jet, whereas dashed lines represent the subleading jet.
A primary observation reveals that the \(r_{1i}\) value for the signal consistently surpasses 0.5, indicating that the leading subparticle of a diphoton jet contributes more than half of its mother jet's \(p_{T}\). In contrast, the ratio for the \(W^{\pm}jj\) background typically remains under 0.5. Nevertheless, a noticeable peak around \(r_{1i}\simeq 0.9\) in the \(W^{\pm}jj\) background suggests that merely imposing an upper bound on \(r_{1i}\) may not sufficiently differentiate the signal from the background. Consequently, we shift our focus to the \(r_{2i}\) distributions. While both the signal and background inherently exhibit \(r_{2i}<0.5\), the signal's \(r_{2i}\) is notably larger. By imposing a condition of \(r_{2i}>0.25\), which corresponds to \(r_{1i}<0.75\), we adeptly avoid the subtle peak around \(r_{1i}\sim 0.9\) in the \(W^{\pm}jj\) background.
Based on the aforementioned two characteristics of the signal, we devise a strategy to optimize the signal significance using a cut-based analysis. The cut-flow chart in Table 3 outlines the cross sections for the signal and the four main backgrounds (\(W^{\pm}jj\), \(Zjj\), \(t\bar{t}\), and \(W^{\pm}j\gamma\)) at the 14 TeV LHC. We have selected BP-7 as the representative benchmark point for detailed presentation, as it exemplifies the common trends observed across the 18 benchmarks. While we have comprehensively analyzed the other backgrounds of Table 1, they are omitted in Table 3 due to their negligible impact.

| Cut | BP-7 | \(W^{\pm}jj\) | \(Zjj\) | \(t\bar{t}\) | \(W^{\pm}j\gamma\) | \(\mathcal{S}_{\rm BP-7}^{10\%}\) |
|---|---|---|---|---|---|---|
| Basic | 34.8 | 372 622 | 27 727 | 32 052 | 3 047 | \(1.09\times 10^{-3}\) |
| \(E_{T}^{\rm miss}>50\) GeV | 29.7 | 318 407 | 23 274 | 27 395 | 2 610 | \(9.01\times 10^{-4}\) |
| \(r_{11}>0.50\) | 24.9 | 102 182 | 7 843 | 4 150 | 1 214 | \(2.15\times 10^{-3}\) |
| \(r_{12}>0.50\) | 18.7 | 36 204 | 2 853 | 692 | 541 | \(4.56\times 10^{-3}\) |
| \(r_{21}>0.25\) | 7.06 | 4 218 | 323 | 62.2 | 55.8 | \(1.49\times 10^{-2}\) |
| \(r_{22}>0.25\) | 2.40 | 840 | 61.3 | 8.61 | 10.1 | \(2.56\times 10^{-2}\) |
| \(J_{1}\to J_{\gamma\gamma}\) | 2.29 | 18.6 | 2.31 | 0.205 | 0.467 | 1.01 |
| \(J_{2}\to J_{\gamma\gamma}\) | 1.98 | 0.363 | 0.0589 | 0.00 | 0.00849 | 22.8 |

Table 3: Cross-section cut-flow chart for BP-7 and the main backgrounds from \(W^{\pm}jj\), \(Zjj\), \(t\bar{t}\), and \(W^{\pm}j\gamma\) at the 14 TeV LHC. The presented cross sections are in femtobarns (fb). The basic selection criteria and the ratio \(r_{ij}\) are detailed in the main text. For calculating the signal significance (\(\mathcal{S}\)), we take into account a 10% background uncertainty and assume an integrated luminosity (\(\mathcal{L}_{\rm tot}\)) of \(3\) ab\({}^{-1}\).
The final column in Table 3 offers the signal significance \(\mathcal{S}\), defined by [107]:
\[\mathcal{S}=\left[2(N_{\mathrm{S}}+N_{\mathrm{B}})\log\left(\frac{(N_{\mathrm{S}}+N_{\mathrm{B}})(N_{\mathrm{B}}+\delta_{\mathrm{B}}^{2})}{N_{\mathrm{B}}^{2}+(N_{\mathrm{S}}+N_{\mathrm{B}})\delta_{\mathrm{B}}^{2}}\right)-\frac{2N_{\mathrm{B}}^{2}}{\delta_{\mathrm{B}}^{2}}\log\left(1+\frac{\delta_{\mathrm{B}}^{2}N_{\mathrm{S}}}{N_{\mathrm{B}}(N_{\mathrm{B}}+\delta_{\mathrm{B}}^{2})}\right)\right]^{1/2}. \tag{10}\]
Here, \(N_{\mathrm{S}}\) denotes the number of signal events, \(N_{\mathrm{B}}\) the number of background events, and \(\delta_{\mathrm{B}}=\Delta_{\mathrm{B}}N_{\mathrm{B}}\) the background uncertainty yield. We take a 10% background uncertainty (\(\Delta_{\mathrm{B}}=10\%\)).
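For reference, Equation 10 can be evaluated directly. The helper below is our own transcription; feeding it the final-row yields of Table 3 for BP-7 at \(3~{\rm ab}^{-1}\) returns a value close to the quoted 22.8, with the small difference coming from rounding of the listed cross sections and from the minor backgrounds not shown in the table.

```python
import math

def significance(n_s, n_b, rel_unc=0.10):
    """Asymptotic significance of Equation 10 with a relative background uncertainty."""
    db2 = (rel_unc * n_b) ** 2
    term1 = 2.0 * (n_s + n_b) * math.log(
        (n_s + n_b) * (n_b + db2) / (n_b**2 + (n_s + n_b) * db2))
    term2 = 2.0 * n_b**2 / db2 * math.log(
        1.0 + db2 * n_s / (n_b * (n_b + db2)))
    return math.sqrt(term1 - term2)

# Final row of Table 3 for BP-7: 1.98 fb of signal vs ~0.43 fb of background
lumi = 3000.0  # fb^-1
print(significance(1.98 * lumi, (0.363 + 0.0589 + 0.00849) * lumi))  # ~23
```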
The results in Table 3 are remarkable. After the basic selection, the four primary backgrounds overwhelm the signal, leaving the significance at the level of \(10^{-3}\). The cut on the missing transverse energy, pivotal for neutrino tagging, fails to boost the significance due to the presence of a neutrino in the dominant \(W^{\pm}jj\) background. The differentiation becomes evident when applying the \(p_{T}\) ratio cuts. By enforcing \(r_{11}>0.5\) and \(r_{12}>0.5\), we retain approximately 63% of the signal events that survive the \(E_{T}^{\mathrm{miss}}>50\;\mathrm{GeV}\) cut, while the backgrounds are suppressed far more strongly. Further imposing \(p_{T}\) ratio conditions of \(r_{2i}>0.25\) effectively suppresses the backgrounds. Yet, the signal significance remains relatively low, hovering around 2.6%.
The last two selection criteria are decisive. We first require that the leading jet must be a diphoton jet. While this condition significantly improves the \(N_{\mathrm{S}}/N_{\mathrm{B}}\) ratio, it alone does not raise the significance much beyond the \(1\sigma\) level. The final condition that the subleading jet also be a diphoton jet is what truly drives up the significance. When accounting for a 10% background uncertainty, the final significance ascends to 22.8, affirming the discovery of a very light fermiophobic Higgs boson.
Moving forward, we present the conclusive results for all 18 benchmark points. Table 4 presents the signal cross sections after the final selection and the corresponding significance values at the 14 TeV LHC. These computations are based on a total integrated luminosity of \(3\text{ ab}^{-1}\) and a 10% background uncertainty. The comprehensive suite of cuts in Table 3 is uniformly applied across all benchmark points, avoiding tailored adjustments for specific benchmark points in pursuit of unbiased analysis. The significance values obtained are encouraging. With the exception of BP-6, every benchmark point boasts significance values surpassing 5. Even the notably challenging BP-6 achieves a respectable significance of 4.09.

| | \(\sigma_{\mathrm{final}}\) [fb] | \(\mathcal{S}^{10\%}\) | | \(\sigma_{\mathrm{final}}\) [fb] | \(\mathcal{S}^{10\%}\) | | \(\sigma_{\mathrm{final}}\) [fb] | \(\mathcal{S}^{10\%}\) |
|---|---|---|---|---|---|---|---|---|
| BP-1 | 1.46 | 18.5 | BP-7 | 1.98 | 22.8 | BP-13 | 1.81 | 21.5 |
| BP-2 | 1.19 | 16.1 | BP-8 | 1.68 | 20.4 | BP-14 | 1.56 | 19.4 |
| BP-3 | 0.927 | 13.4 | BP-9 | 1.37 | 17.7 | BP-15 | 1.29 | 17.1 |
| BP-4 | 0.529 | 8.71 | BP-10 | 0.900 | 13.0 | BP-16 | 0.857 | 12.7 |
| BP-5 | 0.303 | 5.49 | BP-11 | 0.582 | 9.40 | BP-17 | 0.566 | 9.19 |
| BP-6 | 0.216 | 4.09 | BP-12 | 0.457 | 7.74 | BP-18 | 0.456 | 7.72 |

Table 4: Signal cross sections and the significance values after the final selection at the 14 TeV LHC. Calculations are based on a total integrated luminosity of \(3\;\mathrm{ab}^{-1}\) and a 10% background uncertainty.
We observe distinct trends in significance depending on the benchmark points. When holding \(m_{h_{\text{f}}}\) constant, the significance tends to decrease as \(M_{H^{\pm}}\) increases, a reduction primarily due to the smaller signal cross section from the limited kinematic phase space available at higher \(M_{H^{\pm}}\) values. Conversely, when fixing \(M_{H^{\pm}}\), scenarios with \(m_{h_{\text{f}}}=5\) GeV consistently yield the highest significances. The slightly reduced significances observed in scenarios with \(m_{h_{\text{f}}}=10\) GeV result from a subset of signal events producing two photons with \(\Delta R>0.4\), which fails to satisfy the criteria for two diphoton jets. On the other hand, scenarios featuring \(m_{h_{\text{f}}}=1\) GeV consistently exhibit the lowest significance values. This small \(m_{h_{\text{f}}}\) leads to two collimated photons, causing a significant portion of signal events to not satisfy the two diphoton-jet requirement.
## V Mass reconstruction for \(m_{h_{\text{f}}}\) and \(M_{H^{\pm}}\)
In the previous two sections, we underscored the efficacy of our cut-based analysis strategy in achieving robust significance values. Our next aim is to validate that the observed signal indeed originates from the \(pp\to h_{\text{f}}H^{\pm}\to h_{\text{f}}h_{\text{f}}W^{\pm}\) process. Precisely identifying the source necessitates the reconstruction of \(m_{h_{\text{f}}}\) and \(M_{H^{\pm}}\). Since \(h_{\text{f}}\) predominantly decays into two photons, \(m_{h_{\text{f}}}\) can be reconstructed using the invariant mass of the two photons within a diphoton jet. To reconstruct the mass of the charged Higgs boson, we focus on \(M_{T}^{H^{\pm}}\), the transverse mass of \(H^{\pm}\) as it decays to \(\gamma\gamma\ell\nu\).
To initiate the calculation of \(M_{T}^{H^{\pm}}\), we first define the four-momentum of the visible components, denoted as \(p_{\text{vis}}^{\mu}\). A challenge arises due to the presence of an additional diphoton jet in the full scattering process, leading to ambiguity in determining which diphoton jet results from the \(H^{\pm}\) decay. To navigate this, we adopt a reasonable assumption: the diphoton jet stemming from the decay of \(H^{\pm}\) is the subleading jet. This assumption is based on the observation that the prompt diphoton jet generally exhibits a higher \(p_{T}\) than the one involved in the decay chain.8
Footnote 8: Our preliminary simulations indicate an approximate 20% contamination resulting from this assumption.
Following this assumption, we establish:
\[p_{\text{vis}}^{\mu}=p_{s_{12}}^{\mu}+p_{s_{22}}^{\mu}+p_{\ell}^{\mu}. \tag{11}\]
The square of the transverse mass of the charged Higgs boson is then defined as:
\[\left(M_{T}^{H^{\pm}}\right)^{2}=m_{\text{vis}}^{2}+2\left[E_{T}^{\text{vis}}E _{T}^{\text{miss}}-\vec{p}_{T}^{\text{vis}}\cdot\vec{E}_{T}^{\text{miss}} \right], \tag{12}\]
where \(m_{\rm vis}^{2}=p_{\rm vis}\cdot p_{\rm vis}\), \(E_{T}^{\rm vis}=\sqrt{m_{\rm vis}^{2}+\left(p_{T}^{\rm vis}\right)^{2}}\), and \(\vec{E}_{T}^{\rm miss}=-\sum_{i}\vec{p}_{T}^{i}\) with \(i\) covering all the observed particles after the pileup subtraction.
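In terms of four-vectors, Equations 11 and 12 amount to the short computation below (a generic sketch with simple tuples in place of the analysis framework's objects).

```python
import math

def transverse_mass(p_vis, met_x, met_y):
    """M_T of Equation 12 from the visible four-momentum and the MET vector.

    `p_vis` is (E, px, py, pz), built as in Equation 11 from the two leading
    subparticles of the subleading jet plus the lepton.
    """
    e, px, py, pz = p_vis
    m_vis2 = max(e**2 - px**2 - py**2 - pz**2, 0.0)
    pt_vis2 = px**2 + py**2
    et_vis = math.sqrt(m_vis2 + pt_vis2)
    et_miss = math.hypot(met_x, met_y)
    mt2 = m_vis2 + 2.0 * (et_vis * et_miss - (px * met_x + py * met_y))
    return math.sqrt(max(mt2, 0.0))
```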
In our endeavor to determine \(m_{h_{t}}\) and \(M_{T}^{H^{\pm}}\), we are confronted with a formidable challenge: obtaining accurate _background_ distributions _after imposing the final selection_. While the final selection guarantees robust signal significances through a drastic reduction in background events--leaving only 51 events for the \(W^{\pm}jj\) background and 4 events for \(Zjj\)--this scarcity of events impairs our ability to acquire precise distributions for both \(m_{\gamma\gamma}\) and \(M_{T}^{H^{\pm}}\). However, abandoning the final selection is not an option, as the second-to-last cut results in an unacceptably low significance, falling below one. Furthermore, intensifying event generation to amplify the number of background events is impractical, as our computational resources are already maximized, with \(5\times 10^{8}\) events generated for \(W^{\pm}jj\) and \(5\times 10^{7}\) for \(Zjj\).
In tackling this challenge, we have developed a novel approach that incorporates the mistagging probability \(P(j\to J_{\gamma\gamma})\) as a weighting factor, a method we term the Weighting Factor Method (WFM). To grasp the benefits of WFM, it is instructive to examine the methodology of traditional cut-based analyses. These analyses operate by either retaining or discarding events based on selection criteria, effectively assigning a binary weight of one or zero to each event. While straightforward, this method proves inefficient for analyzing background distributions when the selection efficiency is exceedingly low. For instance, the final selection efficiency for the \(W^{\pm}jj\) backgrounds, relative to the basic selection, is an astonishingly sparse \(10^{-7}\).
In contrast, our WFM strategically utilizes the continuous nature of the weighting factor \(P(j\to J_{\gamma\gamma})\). This approach ensures the inclusion of nearly all pertinent background events, ensuring a thorough representation of the background. For a comprehensive explanation of WFM, including a detailed discussion on how we model \(P(j\to J_{\gamma\gamma})\), please refer to Appendix A.
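Schematically, the WFM replaces the binary accept/reject decision of the last two cuts with a per-event weight built from the mistag probability. The sketch below illustrates only the bookkeeping; the function `p_mistag` is a placeholder for the parametrization of \(P(j\to J_{\gamma\gamma})\) described in Appendix A.

```python
import numpy as np

def p_mistag(pt_jet):
    """Placeholder for the fitted mistag probability P(j -> J_gammagamma)."""
    return 0.02  # a few percent, taken flat in pT purely for illustration

def fill_weighted_histogram(events, bins):
    """Weighted m_gammagamma histogram for a QCD background sample.

    Each event surviving the earlier cuts contributes with weight
    P(j1 -> J_gg) * P(j2 -> J_gg) instead of being kept or discarded,
    so even a sparsely populated selection yields a smooth distribution.
    """
    values, weights = [], []
    for ev in events:
        w = p_mistag(ev["pt_j1"]) * p_mistag(ev["pt_j2"])
        values.append(ev["m_leading_subparticles"])
        weights.append(w)
    hist, edges = np.histogram(values, bins=bins, weights=weights)
    return hist, edges
```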
In Figure 9, we depict the distributions of \(m_{s_{11}s_{21}}\) (left panel) and \(m_{s_{12}s_{22}}\) (right panel) for both signal and background events at the 14 TeV LHC, adhering to the final selection criteria detailed in Table 3. We consider three signal benchmark points with a heavy \(M_{H^{\pm}}\): BP-6 (blue), BP-12 (orange), and BP-18 (green). For the signal distributions, we rely on the results from the traditional cut-based analysis, justified by the ample number of signal events remaining post final selection. In addition, we display the results for the two primary backgrounds, \(W^{\pm}jj\) and \(Zjj\), in the stacked format, using the WFM. We omit other backgrounds here due to their inconsequential contributions following the final selection.
A salient characteristic for the signal in Figure 9 is a distinct resonance peak, for both the leading and subleading jets. This peak closely corresponds to the mass of the fermiophobic Higgs boson. Conversely, the background distributions exhibit two peaks: a sharp one and a more diffuse secondary one. The acute peak, centered at \(m_{\gamma\gamma}\simeq 0\), is predominantly attributed to light mesons, such as \(\pi^{0}\), \(\rho\), \(\eta\), and \(\eta^{\prime}\), which decay into two photons.9 Meanwhile, the broader
peak around \(m_{\gamma\gamma}\simeq 10\text{ GeV}\) emerges as background events increasingly mimic the signal after meeting all selection criteria. Nevertheless, the resonance peaks in the diphoton invariant mass distributions are clearly distinguishable from the backgrounds.
Figure 9: Invariant mass distributions for the two leading subparticles in the leading jet (left panel) and the subleading jet (right panel) at the 14 TeV LHC. All depicted events meet the final selection criteria. For the stacked \(W^{\pm}jj\) and \(Zjj\) backgrounds, the WFM is utilized. The expected signals for BP-6 (blue), BP-12 (orange), and BP-18 (green) are illustrated with solid lines.

In Figure 10, we show the transverse mass distribution of the charged Higgs boson. For the signal, four benchmark points are considered: BP-6 (red), BP-7 (blue), BP-10 (orange), and BP-12 (green). Additionally, we showcase the WFM results for the two primary backgrounds, \(W^{\pm}jj\) and \(Zjj\), in a stacked manner.

Figure 10: Distributions of the transverse mass of the charged Higgs boson after the final selection at the 14 TeV LHC. The results for the \(W^{\pm}jj\) and \(Zjj\) backgrounds are displayed in a stacked manner. The expected signals for BP-6 (red), BP-7 (blue), BP-10 (orange), and BP-12 (green) are represented by solid lines.
The \(M_{T}^{H^{\pm}}\) distributions for the signal exhibit a unique wedge-shaped peak, marked by a sudden drop around \(M_{T}^{H^{\pm}}\simeq M_{H^{\pm}}\). However, this peak is broader than the well-known transverse-mass shape of \(W^{\pm}\to\ell^{\pm}\nu\). This broadening arises from two main factors. First, the long decay chain of \(H^{\pm}\to h_{\rm f}W^{\pm}\to\gamma\gamma\ell\nu\) introduces inherent uncertainties, especially when measuring the three-momenta of the two photons and the lepton. Second, there is ambiguity in determining which diphoton jet originates from the \(H^{\pm}\) decay. Despite the broadness, the characteristic shape of the transverse mass distribution is evident in the signal.
On the other hand, the backgrounds show a single, broad hill-shaped peak centered around \(180\;\mathrm{GeV}\). This shape evolves as background events increasingly resemble the signal after satisfying all selection criteria. One might worry that the background peak around \(180\;\mathrm{GeV}\) could obscure the signal \(M_{T}^{H^{\pm}}\) peak when \(M_{H^{\pm}}\) is close to \(180\;\mathrm{GeV}\). However, as indicated in Table 4, the significance values for \(M_{H^{\pm}}=175\;\mathrm{GeV}\) are so high that the \(M_{T}^{H^{\pm}}\) signal peaks remain distinct and easily distinguishable from the background contributions.
In conclusion, the mass reconstruction of \(m_{h_{\rm f}}\) and \(M_{H^{\pm}}\) is feasible, signifying that the combined \(m_{\gamma\gamma}\) and \(M_{T}^{H^{\pm}}\) distributions effectively and distinctly pinpoint the origin of our new signal.
## VI Machine learning approach for heavy \(M_{H^{\pm}}\)
In the previous two sections, we underscored the efficacy of our cut-based analysis strategy in achieving robust significance values as well as the mass reconstruction of \(m_{h_{\rm f}}\) and \(M_{H^{\pm}}\). Yet, challenges manifested when addressing the heavy charged Higgs boson. For instance, BP-6 reached a significance of \(4.09\), which is not convincing enough to confirm the presence of the very light fermiophobic Higgs boson. Hence, in this section, we employ machine learning techniques, with a keen focus on BP-6, BP-12, and BP-18, aiming to enhance the significances. At the parton level, the total cross sections for these benchmarks are \(\sigma_{\mathrm{tot}}(\text{BP-6})=9.62\;\mathrm{fb}\), \(\sigma_{\mathrm{tot}}(\text{BP-12})=9.63\;\mathrm{fb}\), and \(\sigma_{\mathrm{tot}}(\text{BP-18})=9.83\;\mathrm{fb}\).
Let us begin by discussing the preparation of input features. We formulate two distinct features: the event feature and the subparticle feature. The event feature comprises 21 elements, constructed as follows:
\[\mathbf{v}_{\mathrm{event}}= \left[p_{T}^{J_{1}},\eta_{J_{1}},\phi_{J_{1}},m_{J_{1}},p_{T}^{J_ {2}},\eta_{J_{2}},\phi_{J_{2}},m_{J_{2}},p_{T}^{\ell},\eta_{\ell},\phi_{\ell}, E_{T}^{\mathrm{miss}},\phi_{\tilde{E}_{T}^{\mathrm{miss}}},\right. \tag{13}\] \[\left.\Delta R_{J_{1}J_{2}},\Delta R_{J_{1}\ell},\Delta R_{J_{2} \ell},\Delta R_{J_{1}\tilde{E}_{T}^{\mathrm{miss}}},\Delta R_{J_{2}\tilde{E}_ {T}^{\mathrm{miss}}},\Delta R_{\ell\tilde{E}_{T}^{\mathrm{miss}}},M_{T}^{J_{1} },M_{T}^{J_{2}}\right], \tag{14}\]
with \(M_{T}^{J_{i}}\) (\(i=1,2\)) representing the transverse mass in Equation 12 using \(p_{\mathrm{vis}}^{\mu}=p_{J_{i}}^{\mu}+p_{\ell}^{\mu}\). For normalization, the feature elements carrying a mass dimension, namely the transverse momenta, the invariant masses \(m_{J_{i}}\), the missing transverse energy \(E_{T}^{\mathrm{miss}}\), and the transverse masses \(M_{T}^{J_{i}}\), are divided by \(500\;\mathrm{GeV}\).
The subparticle feature is divided into two vectors associated with \(J_{1}\) and \(J_{2}\). Each \(J_{i}\) category includes the 10 leading subparticles, each characterized by three attributes: \(p_{T}\), \(\eta\), and \(\phi\). As a result, the total dimension of the subparticle feature is \(30\times 2\). The coordinates \(\eta\) and \(\phi\) of a given subparticle are adjusted to be relative to their mother jet. We divide the \(p_{T}\) values by 100 GeV for normalization. To emphasize the photons, other particles (hadrons) are assigned a value of zero for all three attributes.
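As an illustration, one jet's block of the subparticle feature could be assembled as follows; the constituent tuple format and the helper name are our own simplification of the EFlow objects rather than the Delphes interface.

```
import numpy as np

def subparticle_block(constituents, jet_eta, jet_phi, n_max=10):
    # constituents: list of (pT [GeV], eta, phi, is_photon) tuples, ordered by decreasing pT.
    feat = np.zeros((n_max, 3))
    for k, (pt, eta, phi, is_photon) in enumerate(constituents[:n_max]):
        if not is_photon:
            continue                                   # hadrons keep (0, 0, 0) to emphasize the photons
        dphi = (phi - jet_phi + np.pi) % (2.0 * np.pi) - np.pi
        feat[k] = [pt / 100.0, eta - jet_eta, dphi]    # pT / 100 GeV, coordinates relative to the jet
    return feat                                        # (10, 3); the two jets together give the 30 x 2 feature
```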
Our network architecture, illustrated in Figure 11, consists of three main components: a one-dimensional (1D) CNN block and two multilayer perceptrons (MLP1 and MLP2). The 1D CNN block is responsible for processing the subparticle feature, whereas MLP1 handles the event feature. MLP2 merges the outputs from both the 1D CNN and MLP1 to produce the final model prediction. For those interested in the datasets and the detailed operation of the deep learning model, we have made them available on our GitHub repository.10
Footnote 10: [https://github.com/chofchof/light-hf-ml/](https://github.com/chofchof/light-hf-ml/)
Diving into the details, the 1D CNN block comprises nine 1D convolutional layers. The first layer uses a kernel size of 3 and its output goes through a sigmoid function, which maps the values between 0 and 1. Functioning as attention weights, these values are multiplied by each subparticle input feature. This is a crucial step with a clear purpose: it assigns varying weights to each element within the input features, thereby enhancing the model's ability to focus on informative parts of the data. The next eight layers are also 1D convolutions with a kernel size of 3, but they include a ReLU activation function to add non-linearity to the model. Following these layers, an average pooling operation and a fully connected layer condense the information into a 128-dimensional feature vector.
MLP1 primarily transforms the event input feature into a 128-dimensional feature vector. This perceptron comprises six fully connected layers, each containing 128 nodes. Following each layer, batch normalization, a ReLU activation function, and a dropout layer with a 50% probability are applied.
MLP2 finally determines the probability that an event is classified as a signal. Its architecture includes five fully connected layers with node counts of 256, 256, 256, 64, and 16, in succession. Each layer is followed by batch normalization, a ReLU activation function, and a dropout layer with a 50% probability. After these five layers, an additional fully connected layer is set in place to produce a one-dimensional feature vector. This vector then undergoes processing via a sigmoid function, yielding the final classification probability as the output.
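To make the preceding description concrete, the following is a minimal PyTorch sketch of such a network. The layer counts and node sizes follow the text, but the hidden channel width of the convolutions, the use of padding to preserve the sequence length, and the treatment of the two jets' subparticle blocks as one 20-element sequence are our assumptions rather than details of the reference implementation.

```
import torch
import torch.nn as nn

class DiphotonJetClassifier(nn.Module):
    def __init__(self, n_event=21, ch=64):
        super().__init__()
        # First 1D convolution whose sigmoid output acts as attention weights on the input.
        self.gate = nn.Conv1d(3, 3, kernel_size=3, padding=1)
        # Eight further 1D convolutions (kernel size 3) with ReLU activations.
        convs = [nn.Conv1d(3, ch, 3, padding=1), nn.ReLU()]
        for _ in range(7):
            convs += [nn.Conv1d(ch, ch, 3, padding=1), nn.ReLU()]
        self.convs = nn.Sequential(*convs)
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.cnn_fc = nn.Linear(ch, 128)           # condense to a 128-dimensional feature vector

        def block(d_in, d_out):                    # Linear -> BatchNorm -> ReLU -> Dropout(0.5)
            return [nn.Linear(d_in, d_out), nn.BatchNorm1d(d_out), nn.ReLU(), nn.Dropout(0.5)]

        # MLP1: six fully connected layers with 128 nodes each.
        layers, d = [], n_event
        for _ in range(6):
            layers += block(d, 128)
            d = 128
        self.mlp1 = nn.Sequential(*layers)

        # MLP2: 256, 256, 256, 64, 16 nodes, then a one-dimensional output through a sigmoid.
        layers, d = [], 128 + 128
        for d_out in (256, 256, 256, 64, 16):
            layers += block(d, d_out)
            d = d_out
        layers += [nn.Linear(d, 1), nn.Sigmoid()]
        self.mlp2 = nn.Sequential(*layers)

    def forward(self, subparticles, event):
        # subparticles: (batch, 3, 20) = (pT, eta, phi) for 10 subparticles in each of the 2 jets
        # event: (batch, 21) event feature vector
        x = subparticles * torch.sigmoid(self.gate(subparticles))   # attention-weighted input
        x = self.cnn_fc(self.pool(self.convs(x)).squeeze(-1))       # (batch, 128)
        h = torch.cat([x, self.mlp1(event)], dim=1)                 # merge CNN and MLP1 outputs
        return self.mlp2(h).squeeze(-1)                             # signal probability per event
```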
For optimal model implementation and precision, we utilize the renowned PyTorch deep learning framework [108]. Both training and evaluation processes are expedited using the NVIDIA Titan V GPU. We optimize model parameters with the AdamW optimizer [109], which is set with an initial learning rate of 0.002 and a weight decay of 0.01, based on mini-batches of 512 training samples. Throughout the training phase, which spans 100 epochs, we decrease the learning rate by half every 10 epochs to enhance convergence.
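One way to set this up in PyTorch is shown below; the class name refers to the sketch above and the loop body over mini-batches is elided.

```
import torch

model = DiphotonJetClassifier()                  # hypothetical class from the sketch above
optimizer = torch.optim.AdamW(model.parameters(), lr=0.002, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)  # halve lr every 10 epochs

for epoch in range(100):
    # ... forward/backward passes over mini-batches of 512 training samples ...
    scheduler.step()
```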
Now let us describe the generation and assignment of our dataset for training and evaluation.
Figure 11: Model architecture of 1D CNN
To leverage the unique attributes that differentiate the signal from the backgrounds, we enforce additional conditions \(p_{T}^{J_{1}}>100\text{ GeV}\) and \(p_{T}^{J_{2}}>80\text{ GeV}\), after the basic selection. During the training phase, we employ training and validation datasets, each brimming with \(6\times 10^{5}\) events. These datasets are evenly split for signals and backgrounds. The signal events are equally divided amongst BP-6, BP-12, and BP-18. For the background events, which originate from ten processes, allocation is proportionate to their respective cross sections.
Central to our training and evaluation processes is the design of our loss function. Our primary goal of enhancing detection significance necessitates efficient background rejection. Accordingly, we have tailored the loss function to inversely correlate with signal significance. For the sake of computational efficiency, we employ \(1/Z\) as the loss function, where \(Z\) is a concise representation for the significance:
\[Z=\frac{N_{\text{S}}}{\sqrt{N_{\text{B}}+\delta_{\text{B}}^{2}}}, \tag{15}\]
where we take into account a \(10\%\) background uncertainty, denoted as \(\delta_{\text{B}}=0.1N_{\text{B}}\).
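A direct transcription of this loss into PyTorch could look as follows; the per-event weights that scale the batch to the expected yields are an assumed input, and the small constants merely guard against division by zero.

```
import torch

def significance_loss(prob, label, weight, eps=1e-8):
    # prob: classifier outputs in [0, 1]; label: 1 for signal, 0 for background;
    # weight: per-event weight scaling counts to the expected yields (assumed available).
    n_s = torch.sum(weight * label * prob)            # weighted signal yield kept by the classifier
    n_b = torch.sum(weight * (1.0 - label) * prob)    # weighted background yield kept
    z = n_s / torch.sqrt(n_b + (0.1 * n_b) ** 2 + eps)
    return 1.0 / (z + eps)                            # minimizing 1/Z maximizes the significance
```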
Upon concluding the training process, we extract the model's optimal parameters and apply them to our entire test dataset, totaling \(1.27\times 10^{8}\) events, consisting of \(9\times 10^{6}\) signal events and an overwhelming \(1.18\times 10^{8}\) background events. Subsequently, we apply a specified selection threshold \(x_{\text{cut}}\) to the outputs of all the test samples. Finally, we determine the comprehensive significance metric \(\mathcal{S}\) in Equation 10.
Given two threshold options, \(x_{\text{cut}}=0.5\) and \(x_{\text{cut}}=0.9\), we present the signal significances for BP-6, BP-12, and BP-18 as follows:
\[\begin{split} x_{\text{cut}}&=0.5:\qquad\mathcal{ S}_{\text{BP-6}}^{10\%}=\phantom{-}9.0,\quad\mathcal{S}_{\text{BP-12}}^{10\%}=15.4, \quad\mathcal{S}_{\text{BP-18}}^{10\%}=15.0;\\ x_{\text{cut}}&=0.9:\qquad\mathcal{S}_{\text{BP-6}}^{10 \%}=18.9,\quad\mathcal{S}_{\text{BP-12}}^{10\%}=33.2,\quad\mathcal{S}_{\text{ BP-18}}^{10\%}=32.4.\end{split} \tag{16}\]
The outcomes from our CNN machine learning approach are indeed outstanding. Even with the conservative threshold of \(x_{\text{cut}}=0.5\), BP-6 now reaches a significance of 9.0. Furthermore, both BP-12 and BP-18 witness approximately \(100\%\) increases in their significances when compared to the results from the cut-based analysis. Opting for the more aggressive threshold of \(x_{\text{cut}}=0.9\) yields even more enhanced significances. Collectively, these outcomes emphatically demonstrate the effectiveness of our model architecture.
## VII Conclusions
We have comprehensively studied the phenomenological signatures associated with a very light fermiophobic Higgs boson \(h_{\text{f}}\) with a mass range of \(m_{h_{\text{f}}}\in[1,10]\text{ GeV}\) at the 14 TeV LHC. The light \(h_{\text{f}}\) is postulated under the condition \(\alpha=\pi/2\) within the inverted Higgs scenario of the type-I two-Higgs-doublet model. Through an exhaustive scan of the parameter space, taking into account theoretical requirements, experimental constraints, and the cutoff scale exceeding 10 TeV, we demonstrated that the \(m_{h_{\text{f}}}\in[1,10]\text{ GeV}\) range retains a substantial number of
viable parameter points. This is largely attributed to the experimental complexities of detecting the soft decay products of \(h_{\rm f}\). Importantly, this mass range results in a strictly defined parameter space, ensuring predictable phenomenological signatures. Two standout features of the viable parameter space are: (i) the BSM Higgs bosons have a single dominant decay mode, such as \(h_{\rm f}\to\gamma\gamma\), \(H^{\pm}\to h_{\rm f}W^{\pm}\), and \(A\to h_{\rm f}Z\); (ii) \(M_{H^{\pm}}\) and \(M_{A}\) are relatively light, \(\lesssim 330\;{\rm GeV}\). Building on these insights, we have proposed a _golden channel_, \(pp\to h_{\rm f}H^{\pm}\to\gamma\gamma\gamma\gamma\ell\nu\), for exploration of \(h_{\rm f}\) at the HL-LHC.
A serious challenge surfaces as the two photons from \(h_{\rm f}\to\gamma\gamma\) fail to meet the photon isolation criteria, due to their high collimation within \(\Delta R<0.4\). As a result, the final state (characterized by four photons) usually manifests as two jets, thereby facing immense QCD backgrounds. To address this, we shifted our focus to the subparticles within the jet, identifiable as EFlow objects within the Delphes framework. This approach facilitates the extraction of information about a subparticle's type (EflowPhoton, EflowNeutralHadrons, or EflowChargedHadrons), subsequently enabling the probing of diphoton jets. The challenges posed by pronounced pileups, which could blur the distinction between diphoton jets and QCD jets, are effectively addressed by our innovative pileup subtraction method--a hybrid solution combining charged hadron subtraction with SoftKiller.
With the method of probing diphoton jets, we performed the full simulation for signal-to-background analysis at the detector level across 18 benchmark points. A universal strategy was articulated for the cut-based analysis, yielding encouraging outcomes. Except for BP-6, characterized by \(m_{h_{\rm f}}=1\;{\rm GeV}\) and \(M_{H^{\pm}}=330\;{\rm GeV}\), all benchmark points exhibited signal significance considerably above 5. For the mass reconstructions of the BSM Higgs bosons, we analyzed both the invariant mass distribution of the two leading subparticles and the transverse mass of the charged Higgs boson, based on events post the final selection. Distinct peaks correlating with \(m_{h_{\rm f}}\) and \(M_{H^{\pm}}\) were prominently discerned above the background signals. An inherent challenge--securing reliable background distributions with the scarce events post the final selection--is addressed through our pioneering Weighting Factor Method (WFM).
To cover the more challenging regions marked by a heavy charged Higgs boson mass, we employed machine learning techniques. A potent network structure was designed, comprised of a one-dimensional (1D) CNN block followed by two multilayer perceptrons. The efficacy of this model was commendable. With the nominal threshold of \(x_{\rm cut}=0.5\), we managed to nearly double the significances for the heavy \(M_{H^{\pm}}\) cases.
In this extensive research, we have explored uncharted territories of a very light fermiophobic Higgs boson via diphoton jets. Our approach, harmonizing traditional analyses with innovative methodologies like hybrid pileup subtraction, the WFM, and machine learning, offers novel contributions to the field. We urge the community to consider our findings in the quest for BSM signals.
###### Acknowledgements.
The work of J.C. is supported by a National Institute for Mathematical Sciences (NIMS) grant funded by the Korea government (MSIT) (No. B23810000). The work of D.W., J.K., P.S., and J.S. is supported by the National Research Foundation of Korea, Grant No. NRF-2022R1A2C1007583. The work of S.L. is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (RS-2023-00274098).
## Appendix A Weighting factor method
In this appendix, we elaborate on the Weighting Factor Method (WFM). Our focus sharpens on the modeling of \(P(j\to J_{\gamma\gamma})\) for background processes, where \(P(j\to J_{\gamma\gamma})\) represents the probability of a QCD jet being misidentified as a diphoton jet. The extreme scarcity of background events that pass the final selection criteria makes this approach crucial for attaining reliable distributions of \(m_{\gamma\gamma}\) and \(M_{T}^{H^{\pm}}\), which necessitate a substantial number of events. Our discussion in this Appendix focuses on the dominant \(W^{\pm}jj\) backgrounds, considering that the next-dominant \(Zjj\) backgrounds amount to only about 10% of the \(W^{\pm}jj\) events.11
Footnote 11: Our rigorous analysis affirmed that the performance of the WFM for the \(Zjj\) backgrounds is similar to that for \(W^{\pm}jj\).
For clarity in our subsequent discussions, we elucidate some terminologies. The expected number of events corresponding to a specific luminosity is denoted by \(N\). In realistic simulations, however, the actual number of generated background events is less than \(N\). For distinction, we denote it by \(n\). To be more explicit, let us define \(E_{\rm cut}\) as the set of events that fulfill a certain "cut". The number of events meeting this cut is determined by the cardinality of the set \(E_{\rm cut}\):
\[n_{\rm cut}\equiv\#E_{\rm cut}. \tag{A1}\]
In the conventional cut-based analysis, the cross section after the final selection is then given by
\[\sigma_{\rm final}^{\rm cut\text{-based}}=\sum_{e\in E_{\rm final}}1\times\frac{\sigma_{\rm tot}}{n_{\rm gen}}=\frac{n_{\rm final}}{n_{\rm gen}}\;\sigma_{\rm tot}, \tag{A2}\]
where \(\sigma_{\rm tot}\) represents the total cross section at the parton level.
Let us revisit the cut-flow presented in Table 3. Following the basic selection, we have an accumulative sequence of criteria: (i) \(E_{T}^{\rm miss}>50\;\text{GeV}\); (ii) \(r_{11}>0.5\); (iii) \(r_{12}>0.5\); (iv) \(r_{21}>0.25\); (v) \(r_{22}>0.25\); (vi) \(J_{1}\to J_{\gamma\gamma}\); (vii) \(J_{2}\to J_{\gamma\gamma}\). The \(W^{\pm}jj\) backgrounds register counts of \(n_{r_{22}}=1.180\times 10^{5}\) and \(n_{\rm final}=51\), where the condition \(r_{22}\) represents the accumulated conditions leading up to \(r_{22}>0.25\).
Now we unpack how the WFM modifies \(\sigma_{\rm final}^{\rm cut\text{-based}}\). Instead of focusing on the background events post the final selection, we shift our attention to the more extensive dataset refined by
the \(r_{22}\) condition. For each background event \(e\) within the set \(E_{r_{22}}\), we determine \(P_{e}(j_{1}\to J_{\gamma\gamma})\) and \(P_{e}(j_{2}\to J_{\gamma\gamma})\) that serve as weight factors. To compute the joint probability using these multipliers, we adopt an assumption: the observation of \(j_{1}\) as a diphoton jet remains statistically decoupled from \(j_{2}\)'s categorization. This implies that scenarios in which both jets are tagged as diphoton jets are derived from the multiplication of their respective weighting factors. Therefore, the cross section following the final selection, under the WFM framework, becomes
\[\sigma_{\rm final}^{\rm WFM}=\sum_{e\in E_{r_{22}}}P_{e}(j_{1}\to J_{\gamma\gamma})P_{e}(j_{2}\to J_{\gamma\gamma})\times\frac{\sigma_{\rm tot}}{n_{\rm gen}}. \tag{A3}\]
It is important to reiterate: for \(\sigma_{\rm final}^{\rm cut\text{-}based}\) in Equation A2, we consider the \(n_{\rm final}\) events, while for \(\sigma_{\rm final}^{\rm WFM}\) in Equation A3, we employ the \(n_{r_{22}}\) events.
To model \(P_{e}(j\to J_{\gamma\gamma})\) in practice, we need to compute the ratio of event counts after the \(j\to J_{\gamma\gamma}\) cut to those satisfying the \(r_{22}\) cut. Recognizing that \(P_{e}(j\to J_{\gamma\gamma})\) would naturally depend on event-specific characteristics like \(p_{T}^{j}\), it is pertinent to focus on the event counts within a defined kinematic bin when calculating the ratio. Considering that the magnitude of \(P_{e}(j\to J_{\gamma\gamma})\) is on the order of a few percent, a substantial volume of events that satisfy the \(r_{22}\) condition must be collected in the reference set. Strategically, we adopt two-dimensional kinematic bins.12
Footnote 12: The binning strategy for \(m_{\gamma\gamma}\) and \(M_{T}^{H^{\pm}}\) distributions varies. For the invariant mass distribution of the two foremost subparticles in the QCD jet \(j_{1,2}\), we employ the scheme \((m_{s_{1i}s_{2i}},p_{T}^{ji})\), with \(i\) taking values 1 or 2. In contrast, the \(M_{T}^{H^{\pm}}\) distribution utilizes the pair \((M_{T}^{H^{\pm}},p_{T}^{ji})\).
For any specific event \(e\), we introduce \(B_{e}\) as the set of all events within the bin containing \(e\). Consequently, the probability of a QCD jet being incorrectly identified as a diphoton jet in event \(e\) is given by:
\[P_{e}(j\to J_{\gamma\gamma})=\frac{\#\left(E_{j\to J_{\gamma\gamma}}\cap B_{e}\right)}{\#\left(E_{r_{22}}\cap B_{e}\right)}, \tag{A4}\]
where the set \(E_{j\to J_{\gamma\gamma}}\) is defined by the \(j\to J_{\gamma\gamma}\) condition in combination with the \(r_{22}\) cut.
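As a concrete illustration, the sketch below evaluates Equations A3 and A4 with NumPy; the flattened two-dimensional bin indices and the boolean diphoton-tag flags are assumed inputs, not the authors' actual data structures.

```
import numpy as np

def wfm_cross_section(bins_j1, tag_j1, bins_j2, tag_j2, sigma_tot, n_gen, n_bins):
    # bins_j*: flattened 2D kinematic-bin index of each r22-selected event (integers in [0, n_bins))
    # tag_j*: True if that event's jet additionally satisfies the diphoton-jet condition
    def per_event_prob(bins, tag):
        denom = np.bincount(bins, minlength=n_bins).astype(float)        # events in E_r22 per bin
        numer = np.bincount(bins[tag], minlength=n_bins).astype(float)   # of those, events also tagged
        prob = np.divide(numer, denom, out=np.zeros_like(denom), where=denom > 0)
        return prob[bins]                              # P_e(j -> J_gammagamma) looked up per event

    weights = per_event_prob(bins_j1, tag_j1) * per_event_prob(bins_j2, tag_j2)  # independence assumption
    return weights.sum() * sigma_tot / n_gen           # sigma_final^WFM
```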
The advantages of WFM become clear when analyzing kinematic distributions. As an illustration, consider a case where \(\#\left(E_{r_{22}}\cap B_{e}\right)=1000\). Using the traditional cut-based method and implementing both \(j_{1}\to J_{\gamma\gamma}\) and \(j_{2}\to J_{\gamma\gamma}\) conditions, most of the kinematic bins become empty since the joint probability is exceedingly low, about \(3.8\times 10^{-4}\). It is not feasible to obtain a reliable kinematic distribution in this case. In contrast, utilizing the WFM method, we can expect a projection of roughly 0.38 events post-final selection, enabling reliable distributions.
Finally, to confirm the effectiveness of the WFM, we compare its resulting distributions with those from the traditional cut-based analysis. This comparison is meaningful when the cut-based analysis accurately reflects the main features of the true distribution after applying the final selection criteria. However, the \(m_{s_{1i}s_{2i}}\) distribution in the \(W^{\pm}jj\) background is not
suitable for this comparison due to its complexity. Therefore, we consider the \(M_{T}^{H^{\pm}}\) distribution for the comparison, which enjoys a simple, smooth hill-like shape.
In Figure 12, we present side-by-side the \(M_{T}^{H^{\pm}}\) distribution of the \(W^{\pm}jj\) background from the traditional cut-based analysis and its WFM counterpart. The results are post the final selection. Despite the inherent constraints arising from the limited data in the cut-based method, there is a clear resemblance between the two distributions. Both profiles exhibit a smoothly contoured hill shape and almost the same peak positions. This similarity underscores the capability of the WFM to properly represent the \(M_{T}^{H^{\pm}}\) distribution. In conclusion, the WFM proves indispensable when tackling huge backgrounds with particularly stringent selection criteria.
|
2310.08661
|
Counting and Algorithmic Generalization with Transformers
|
Algorithmic generalization in machine learning refers to the ability to learn
the underlying algorithm that generates data in a way that generalizes
out-of-distribution. This is generally considered a difficult task for most
machine learning algorithms. Here, we analyze algorithmic generalization when
counting is required, either implicitly or explicitly. We show that standard
Transformers are based on architectural decisions that hinder
out-of-distribution performance for such tasks. In particular, we discuss the
consequences of using layer normalization and of normalizing the attention
weights via softmax. With ablation of the problematic operations, we
demonstrate that a modified transformer can exhibit a good algorithmic
generalization performance on counting while using a very lightweight
architecture.
|
Simon Ouellette, Rolf Pfister, Hansueli Jud
|
2023-10-12T18:39:24Z
|
http://arxiv.org/abs/2310.08661v2
|
# Counting and Algorithmic Generalization with Transformers
###### Abstract
Algorithmic generalization in machine learning refers to the ability to learn the underlying algorithm that generates data in a way that generalizes out-of-distribution. This is generally considered a difficult task for most machine learning algorithms. Here, we analyze algorithmic generalization when counting is required, either implicitly or explicitly. We show that standard Transformers are based on architectural decisions that hinder out-of-distribution performance for such tasks. In particular, we discuss the consequences of using layer normalization and of normalizing the attention weights via softmax. With ablation of the problematic operations, we demonstrate that a modified transformer can exhibit a good algorithmic generalization performance on counting while using a very lightweight architecture.
## Introduction
Algorithmic generalization and extrapolation are machine learning functionalities that are considered difficult to achieve. They are, however, essential capabilities of human intelligence. For example, the Abstraction & Reasoning Corpus (ARC) challenge [1] is a set of visual reasoning tasks intended to be a test of intelligence. It requires the ability to learn the algorithm that solves a task from very few examples. Doing so requires a capability for abstraction and for reasoning, as its name implies. In particular, notions of cardinality are one of the many requirements to solve the ARC challenge (example in Appendix A).
Although research in this field has produced some interesting breakthroughs, such as the Neural GPU [1, 2, 3] and the Universal Transformer [4, 5], there remains a lot of work to do. While both approaches have increased performance levels on algorithmic tasks relative to prior work, they still fall well below perfect generalization performance, and struggle on more complex tasks. In this paper we use counting as an example to demonstrate the types of failure conditions that can occur when attempting to learn algorithms with transformers. To be more specific, we point out architectural decisions underlying the standard Transformer that hinder cardinality-based generalization.
In particular, we identify layer normalization and scaling of the attention weights via softmax as the two main operations that make it difficult, if not impossible, for standard Transformers to learn counting in a generalizable way. Both architectural choices rescale values and thereby assume that quantity (absolute value) is irrelevant. Normalization is a common technique in machine learning, because it tends to smooth out gradients and it helps with stability and convergence, especially for deeper architectures. However, in doing so, they force the neural network to learn a mapping function from natural quantities to normalized numerical vectors that overfits the training set distribution. In contrast, if we let the natural quantities speak for themselves, we find that the model generalizes much better out-of-distribution.
The decisions behind the Transformer architecture are validated when we are manipulating fundamentally qualitative entities, as in natural language processing tasks. However, with the increasing popularity of Transformers, attempts are made to apply them to other types of tasks. The insights presented here should be carefully considered when doing so.
## Related work
### Algorithmic generalization with Transformers
Algorithmic tasks include problems such as learning multi-digit multiplication, sorting lists, and string manipulation. What is common to these types of tasks is the need for iterative, exact processing of intermediate steps according to some rules (which must be learned). Research on algorithmic tasks especially places emphasis on out-of-distribution generalization. The latter implies that the correct algorithm was learned, as opposed to a mere memorization of the training domain.
Standard Transformers [2] are architecturally limited to a static number of sequential operations [1]. By construction, only \(N\) sequential attention+feed-forward operations can be applied to each token, where \(N\) is the number of encoding and decoding layers. Thus, they lack the recurrent inductive bias, which appears crucial for robust generalization when the training and test set differ in required processing depth.
To address this limitation, Universal Transformers were developed [1]. They are composed of a standard encoder and decoder block over which a potentially dynamic number of iterations can occur, determined by the
Adaptive Computation Time mechanism. This concept relies on a separate neural network that determines the halting condition, thereby enabling conditional loops. The authors show that it outperforms the standard Transformer on a variety of algorithmic and reasoning tasks. The Universal Transformer has been further enhanced with the addition of an auxiliary grid-like memory module (Cognolato and Testolin, 2022), thereby enabling new levels of algorithmic generalization on the multi-digit multiplication problem.
In further support of the insights presented in Dehghani et al. (2019), Hahn (2020) demonstrates mathematically that standard Transformers "cannot model periodic finite-state languages, nor hierarchical structure, unless the number of layers or heads increases with input length". Indeed, once again, the static number of layers or heads is shown to hinder learning of processes that require arbitrary amounts of iteration or recursion.
### Counting
It has been shown that recurrent neural networks, especially Long Short-Term Memory (LSTM) networks, are capable of some level of counting generalization (Suzgun et al., 2019; Weiss, Goldberg, and Yahav, 2018). However, it is also notable that there is always a certain degree of performance degradation whenever the test set goes beyond the scale of the training set (El-Naggar, Madhyastha, and Weyde, 2022).
Algorithmic tasks include so-called counter languages: formal languages that implicitly require some degree of counting ability. For example, a simple counter language is Dyck, in which the alphabet is composed of only opening and closing parentheses. Roughly described, it is a set of balanced strings of parentheses, where each opening parenthesis has a corresponding closing parenthesis in the correct order. Thus, "(())" is a well-formed sentence in the Dyck language, while ")((" is not. The machine learning task on these counter languages is usually to distinguish well-formed from illegal sentences.
Transformers can learn some counter languages (like Shuffle-Dyck and n-ary Boolean Expressions), but they perform worse than LSTMs overall. Out of 7 formal counting languages, LSTMs generalized perfectly on all of them, while Transformers failed to generalize on 3 of them (Bhattamishra, Ahuja, and Goyal, 2020).
Transformer-based NLP models are able to answer numerical reasoning questions from the DROP dataset, suggesting a certain degree of emergent numeracy. When inspecting the embeddings, Bhattacharya, Ahuja, and Goyal (2020) found that pre-trained embeddings contain fine-grained information about cardinality and ordinality. However, a significant degradation of performance has been observed when the model needs to extrapolate to numbers beyond the training range.
## Methodology
### Scaled dot-product attention
Scaled dot-product attention (Appendix B) is introduced in (Vaswani et al., 2017). To better understand why this hinders counting, we performed the experiments _Std-Transformer-Count_ and _No-LayerNorm-Count_1. In these experiments, which are inspired by the ARC challenge, we have as input a sequence representing a grid (which is flattened). Each token in the input sequence corresponds to a cell pixel in the grid. Each pixel can have one of ten different colours, which is one-hot encoded over a 10-dimensional vector. As a result, for a 6x6 grid, for example, we have a corresponding matrix of dimension [6, 6, 10], which is in turn flattened to a sequence of [36, 10].
Footnote 1: Code for all experiments in this paper can be found at: [https://github.com/SimonOuellette35/CountingWithTransformers](https://github.com/SimonOuellette35/CountingWithTransformers)
The goal of this experiment is to output a sequence that contains the count of instances of each color in the grid. The output is of the same dimension as the input and each cell contains the number of occurrences of that cell's color. More specifically, it is the dimension of the one-hot encoding that was set to 1 in the input grid that will contain this count in the output grid. For example, suppose that in the 10-dimensional one-hot encodings the second dimension represents the color blue, and that the input grid contains 5 blue pixels. Then, each blue cell in the output grid will be encoded as: [0, 5, 0, 0, 0, 0, 0, 0, 0, 0]. Since the original intention is to count "non-background" colors, the color zero (which is the black color, i.e. the background color) has ground truths that are always set to 0, regardless of how many instances it has.
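The following minimal sketch (our own illustration, not the authors' data generator) builds one input/target pair in this format from an integer colour grid.

```
import torch

def make_example(grid):
    # grid: (H, W) tensor of integer colours in 0..9, where colour 0 is the background.
    flat = grid.reshape(-1)
    x = torch.nn.functional.one_hot(flat, num_classes=10).float()   # input sequence, (H*W, 10)
    counts = torch.bincount(flat, minlength=10).float()
    counts[0] = 0.0                                  # the background colour is never counted
    y = x * counts[flat].unsqueeze(1)                # count stored in each cell's own colour dimension
    return x, y

x, y = make_example(torch.randint(0, 10, (6, 6)))
```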
The accuracy metric used to evaluate model performance rounds the output matrix to the nearest integers and checks whether all of the rounded values are exactly the same as the ground truth. In _Std-Transformer-Count_, we use a standard Transformer encoder module with only 1 layer and train it on randomly generated data for this task. In _No-LayerNorm-Count_, we disable its softmax operation over the attention weights, as well as its layer normalizations. The training procedure is the same as for _Std-Transformer-Count_.
As detailed in the Experiments section, _Std-Transformer-Count_ fails to learn even in-distribution. _No-LayerNorm-Count_ reaches perfect accuracy in-distribution, and it also generalizes well to grid dimensions not seen during training. It does so by learning to count in the parallel way, essentially learning a mathematically equivalent form of Algorithm 1.
### Layer normalization
Layer normalization (Appendix C) is motivated by the fact that it helps with convergence speed by smoothing out the gradients. However, in the name of smoothing out the gradients, we lose key information: absolute values. In other words, we lose information about quantities. This is in the same spirit as the softmax operation over the attention weights: the assumption is that only the relative numerical values across the various dimensions matter, not absolute quantities themselves.
Thanks to the learned \(\gamma\) and \(\beta\) parameters, it is still possible to output unnormalized values, such as count quantities, for example. That is, the normalized value of a dimension can be multiplied by an arbitrarily large coefficient \(\gamma\) or
added to an arbitrarily large bias \(\beta\), resulting in any arbitrary value that does not need to be constrained between -1 and 1.
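To see why recovering quantities is nonetheless delicate, consider PyTorch's LayerNorm with its affine parameters at their default initialization (\(\gamma=1\), \(\beta=0\)): two inputs that differ only by an overall scale are mapped to nearly identical outputs, so any information about absolute magnitude has to be re-encoded through \(\gamma\), \(\beta\), or the surrounding layers.

```
import torch

ln = torch.nn.LayerNorm(5)                      # default affine parameters: gamma = 1, beta = 0
v = torch.tensor([[1.0, 2.0, 3.0, 4.0, 5.0]])
print(ln(v))                                    # normalized pattern of v
print(ln(10.0 * v))                             # nearly identical output: the factor of 10 is removed
```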
Experiments _LayerNorm-SA-Count_, _LayerNorm-FF-Count_, _LayerNorm-Identity_ and _No-LayerNorm-Identity_ are all intended to empirically support, as well as better analyze, this phenomenon. In _LayerNorm-SA-Count_ and _LayerNorm-FF-Count_, we use the same counting task as in the previous experiments. Starting from the modified Transformer model used in _No-LayerNorm-Count_, we re-introduce the layer normalization operation in two steps. In _LayerNorm-SA-Count_, we enable it at the level of the self-attention (SA) module, while keeping it disabled in the feed-forward (FF) network module. In _LayerNorm-FF-Count_, we keep it disabled in the SA module, while re-enabling it in the FF network.
In _LayerNorm-FF-Count_, the FF network is essentially trying to learn the identity function (because the SA module itself is enough to learn the counting task). We will show with experiments _LayerNorm-Identity_ and _No-LayerNorm-Identity_ that layer normalization hinders out-of-distribution generalization even for the identity function. In these experiments, rather than using integer values that represent counts as inputs and outputs, we use floating-point numerical vectors.
Specifically, for the training set, we generate 5-dimensional vectors whose numerical values are randomly picked in the range [-0.5, 0.5]. For the test set, however, we increase that range to [-1, 1]. The function to learn is the identity function: the ground truth and the inputs are the same values.
In support of the aforementioned theory that layer normalization essentially tethers the model to the statistical distribution of the training set, we show that a feed-forward network without layer normalization (_No-LayerNorm-Identity_) can learn the identity function in a way that generalizes well out-of-distribution. However, once we add a layer normalization operation after its output (_LayerNorm-Identity_), performance falls abruptly.
### Counting the iterative way
The experiments show that a single-layer standard transformer encoder cannot learn counting in a parallel way. However, this does not rule out that a multi-layer transformer could learn it in an alternate way. One such possibility is iterative counting, wherein the transformer maintains an internal counter updated sequentially across its decoder layers.
Algorithm 2 describes a possible iterative approach to counting given a grid matrix M that can be loosely replicated with a Transformer architecture. Therein, every colour count is represented by two tokens from the learned embeddings \(n_{i}\) and the resultant target sequence contains 18 elements, representing counts for the nine distinct colours up to 99 per colour.
```
Input: \(M\), the matrix representing the grid
Output: 2-digit count per color
# Numbers: learned embeddings for digits 0-9
\(Numbers\leftarrow\{n_{0},n_{1},n_{2},...,n_{9}\}\)
\(input\_len\leftarrow length(M)\)
\(target\_seq\leftarrow\{\}\)
for all \(i\in\{0..17\}\) do
    \(counter\leftarrow n_{0}\)
    \(target\_pos\leftarrow pos\_enc(i)\)
    for all \(layer\in\{0..(input\_len-1)\}\) do
        \(W=attend(target\_pos,layer,M)\)
        \(counter\leftarrow FF(counter+W\cdot M+target\_pos)\)
    end for
    \(target\_seq\leftarrow target\_seq\cup class(counter)\)
end for
return \(target\_seq\)
```
**Algorithm 2** Generalizable iterative counting algorithm
Of the two architectures we test, only the Universal Transformer is theoretically able to iterate dynamically at inference time on the same decoding block, thereby iterating on this counting operation as many times as needed.
We intend to support this empirically with experiments _Full-Transformer-CountV2_ and _Universal-Transformer-CountV2_. Because we theorize that a decoder is needed to perform this iterative counting, the counting task is modified: its output for this experiment is a sequence that represents the number of pixels in the input grid for each color (rather than for each cell). Details can be found in Appendix D.
## Experiments
Experiments _Std-Transformer-Count_, _No-LayerNorm-Count_, _LayerNorm-SA-Count_ and _LayerNorm-FF-Count_ are trained over 300k epochs at a learning rate of 0.0002 and with a batch size of 50.
For experiments _No-LayerNorm-Count_, _LayerNorm-SA-Count_ and _LayerNorm-FF-Count_ the FF network component is merely a single linear layer, because this was found to give better generalization results than the default 2-layer ReLU network (see Appendix E for comparison and additional details).
In all experiments, we train the models on randomly generated grids of dimension 1x1 to 6x6 inclusively. We evaluate them on grids of 6x6 (in-distribution) and larger out-of-distribution grids. See Appendix F for details on hyperparameters.
In models _Std-Transformer-Count_, _LayerNorm-Identity_ and _No-LayerNorm-Identity_, the FF network has 2 hidden layers of dimensionality 2048 each. The activation function used is a rectified linear unit. This is the default architecture of the FF module on the standard Transformer encoder.
**Std-Transformer-Count.** In Table 1, we see that the accuracy stays around 28% regardless of the grid size. This suggests that the standard Transformer encoder architecture with 1 layer is intrinsically incapable of learning counting.
**No-LayerNorm-Count.** In stark contrast to _Std-Transformer-Count_, not only does this modified Transformer encoder layer learn to solve the task on the training set, it generalizes with perfect accuracy up to 20x20 grids (in fact, up to 100x100 grids, in our experiments).
**LayerNorm-SA-Count.** Table 1 indicates that this model has learned fairly well to solve the task on the training set. Yet, there is a relatively rapid drop in performance as we increase the size of the grids beyond that of the training set.
**LayerNorm-FF-Count.** The performance is similar to, but slightly worse than, that of _LayerNorm-SA-Count_.
**LayerNorm-Identity.** We train this model on the "identity task" that consists of 5-dimensional numerical vectors randomly generated in the interval [-0.5, 0.5]. Then, we test it on data of the same dimension, but the interval is [-1, 1]. The goal is to simply learn the identity function: the predictions must reproduce the inputs. Because the outputs are continuous values, rather than integers or classes, we use the mean squared error as the loss function and evaluation metric. _LayerNorm-Identity_ was trained over 200k epochs, with a learning rate of 0.0001 and a batch size of 200. No further training was done because it was obvious from the learning curve that learning had stagnated.
**No-LayerNorm-Identity.** In this variant of the experiment we disable layer normalization. The architecture, training procedure and task are otherwise identical to _LayerNorm-Identity_, except that it was trained for only 100k epochs due to faster convergence. In Table 2, we see that the model without layer normalization significantly outperforms the model that uses layer normalization.
**Full- & Universal-Transformer-CountV2.** Since we're using a full encoder-decoder architecture for the experiments reported in Table 3, we are using a modified version of the counting task. This new task (CountV2) outputs a target sequence of 18 tokens (9 color counts, 2 digits each). First, grids of up to 6x6 were generated, and 5 decoder layers (or 5 maximum decoder iterations in the Universal Transformer) were used. Second, grids of up to 10x10 were generated, and 10 decoder layers (or 10 maximum decoder iterations in the Universal Transformer) were used. Third, grids of up to 10x10 were generated, and 25 decoder layers (or 25 maximum decoder iterations in the Universal Transformer) were used. In all experiments, only 1 encoder layer is used, since only the decoder is expected to play a role in iterative counting. In all cases, 10k epochs of training were deemed sufficient to reach convergence, with a learning rate of 0.0005 and a batch size of 50.
## Discussion
### Scaled dot-product attention
There is a simple algorithm (Algorithm 1) that can solve the counting task in a way that generalizes beyond the training set distribution.
The first step in the loop consists of attending, for each cell in the grid, to all other cells in the grid that contain the same color. \(W\) is the attention weights matrix in which the weights are set to 1 whenever the color is the same as for cell \(c\). These attention weights are multiplied by the input grid in order to get a vector where all cells with the same color have value 1 in the corresponding dimension, while the other cells have all zeros. By summing these up, we obtain the desired answer: the count of cells with the same color.

Figure 1: Generalization performance (%) comparison of different models
However, if we introduce a softmax on the weights \(W\), instead of having weights of 1, we will have weights of \(\frac{1}{d}\) where \(d\) is the number of instances of the same color. Once the attention module sums up these cells to get the final attention output, we end up going back to the value 1. In other words, no matter how many cells of the same color there are in the grid, the result of the attention output is 1. So, this generalizable solution is not possible when a softmax operation is applied on the attention weights, as in the standard Transformer encoder. We theorize that this is why _Std-Transformer-Count_ fails to solve the counting task.
In _No-LayerNorm-Count_, the model ends up inferring the attention schema displayed in Table 4 when given the input sequence [6, 6, 0, 3, 7, 0, 4, 3, 6] representing a 3x3 grid.
As can be seen, for each cell in the grid, a weight \(\lambda=.23\) (approximately) is attributed to each other cell that contains the same color. At the same time, a weight of approximately 0 is attributed to the unrelated colors. For example, cell 0 has color 6, which is found at indices 0, 1 and 8 of the sequence. Consequently, row 0 in the weight matrix has non-zero values at columns 0, 1, and 8. Note that the rows corresponding to the color zero do not contain any non-zero weights, simply because the model must learn to always output zero for the color zero. Mathematically, then, the vector output \(\hat{y}\) of each cell after the self-attention module is:
\[\hat{y}=\sum_{i}\lambda\cdot c_{i}=\lambda\sum_{i}c_{i} \tag{1}\]
where \(c_{i}\) are the values of the cells that have the same color. One can immediately see that all we need to do to get the final answers, is to divide by \(\lambda\). This is trivial for the FF network to learn. This learned model is mathematically equivalent to Algorithm 1, and generalizes well regardless of the count values or of the number of tokens.
In _Std-Transformer-Count_, however, the Transformer encoder block fails to learn this model, because of the softmax operation that occurs in the standard Transformer's attention module. It effectively turns the previous formula into the following:
\[\sum_{i}\frac{1}{N}\cdot c_{i}=\frac{1}{N}\sum_{i}c_{i}=\frac{\hat{y}}{N} \tag{2}\]
where N is the number of same-color cells \(c_{i}\). This is because, with the softmax operation, the attention weights must sum up to 1. The result, \(\frac{\hat{y}}{N}\), could be salvaged if the FF network could multiply by \(N\). However, the FF network does not have access to the value of N, which is dynamic on a grid-by-grid basis, since it processes tokens one at a time rather than the grid as a whole. It therefore cannot generate the desired final answer.
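The contrast can be reproduced in a few lines of PyTorch: with unnormalized 0/1 same-colour weights (the idealized version of the learned pattern in Table 4, absorbing the factor \(\lambda\)) the attention output is the desired count, whereas a row-wise softmax over the same pattern collapses every readout to 1.

```
import torch

tokens = torch.nn.functional.one_hot(torch.tensor([6, 6, 0, 3, 7, 0, 4, 3, 6]), 10).float()

W = tokens @ tokens.T                     # W[i, j] = 1 iff cells i and j share a colour
counts = W @ tokens                       # each cell reads out the count of its own colour
counts[:, 0] = 0.0                        # the background colour is forced to zero
print(counts[0])                          # the first colour-6 cell reads 3 in dimension 6

W_soft = torch.softmax(W.masked_fill(W == 0, float("-inf")), dim=1)
print((W_soft @ tokens)[0])               # with softmax the readout is 1, whatever the count
```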
### Layer normalization
The poor generalization performance of _LayerNorm-FF-Count_ can be directly observed in the differences between the distribution of the predictions and the ground truths. On the 6x6 grids, the prediction and ground truth count value distributions are approximately the same (see Fig. 2). However, on the 15x15 grids, where the error rate is high, the distributions are quite different (see Fig. 3). Note the high rate of zero-counts, due to the special case related to the background color. This artifact can be ignored for the purpose of this analysis.
We theorize that the FF neural network must anticipate the layer normalization operation and structure the inputs that it feeds to it such that the desired output survives the transformation. In particular, since the output of the layer
| **Model** | **6x6** | **7x7** | **8x8** | **9x9** | **10x10** | **12x12** | **15x15** | **20x20** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Std-Transformer-Count | 32.48% | 26.74% | 25.10% | 24.56% | 25.39% | 27.37% | 31.12% | 30.76% |
| **No-LayerNorm-Count** | **100%** | **100%** | **100%** | **100%** | **100%** | **100%** | **100%** | **100%** |
| LayerNorm-SA-Count | 99.97% | 99.80% | 98.52% | 92.59% | 80.86% | 45.99% | 30.17% | 33.79% |
| LayerNorm-FF-Count | 97.49% | 90.07% | 75.49% | 56.05% | 40.52% | 27.98% | 28.30% | 34.11% |

Table 1: Results for the experiments on the counting task
| **Model** | **Train. loss** | **Test loss** |
| --- | --- | --- |
| **No-LayerNorm-Identity** | **7.98e-08** (4.698e-08) | **8.1954e-05** (1.279e-05) |
| LayerNorm-Identity | 0.0174 (0.0019) | 0.1147 (0.0069) |

Table 2: Mean losses (std. dev. in parenthesis) for learning the identity function
Figure 2: Histograms of count predictions (left) vs ground truths (right) for 6x6 grids (LayerNorm-FF-Count)
normalization, in this case, is directly the count value prediction, the FF network must learn to "counter" the layer normalization operation. Successfully countering the layer normalization operation involves controlling the variance of the outputted vectors, because layer normalization divides by the standard deviation of a vector. It then multiplies each dimension of the vector by a learned coefficient (which is static once learned; it does not dynamically adapt to the vector itself). By controlling the output vector's variances, the model can ensure that the output of the layer normalization operation corresponds to the desired count value.
In Figure 4, we see the standard deviations of the vectors outputted by the FF network (for 12x12 grids), before being passed to the layer normalization operation. When the standard deviation is close to 0.001, the prediction tends to be correct. When the standard deviation is higher than 0.001, it tends to be incorrect.
Mathematically, suppose that \(\hat{y}\) is the intended count value prediction. Let \(\gamma\) be the coefficient learned by the layer normalization for the corresponding dimension in the vector where \(\hat{y}\) is located (remember that these count values are in fact 10-dimensional). Then, from the equation of layer normalization (see Appendix C, equation 2), the FF network must output a vector \(\bar{v}\) containing value \(\bar{y}\) such that:
\[\frac{\bar{y}-\mathrm{E}[\bar{v}]}{\sqrt{\mathrm{Var}[\bar{v}]+\epsilon}} \cdot\gamma+\beta=\hat{y} \tag{3}\]
A simple strategy, which our model learned, is to ensure that \(\mathrm{Var}[\bar{v}]\) is approximately static (on the training set), at a value that counteracts the multiplication by \(\gamma\). This way, the model can ensure that the output of the layer normalization operation corresponds to the desired count value.
The problem with this learned strategy is that the mapping from the FF network's input vectors (count values) to output vectors of a fixed variance evidently overfits the training set distribution. This is demonstrated by the significantly varying variances in the out-of-distribution data. These out-of-distribution variances, in turn, result in incorrect predictions once they pass through the layer normalization operation.
We see a similar phenomenon in _LayerNorm-SA-Count_, where the self-attention linear output layer is forced to en
| **9-D seq.** | **[0]** | **[1]** | **[2]** | **[3]** | **[4]** | **[5]** | **[6]** | **[7]** | **[8]** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **[0]** | .23 | .23 | .00 | .00 | .00 | .00 | .00 | .00 | .24 |
| **[1]** | .23 | .23 | .00 | .00 | .00 | .00 | .00 | .00 | .24 |
| **[2]** | .00 | .00 | .00 | .00 | .00 | .00 | .00 | .00 | .00 |
| **[3]** | .00 | .00 | .00 | .22 | .00 | .00 | .00 | .22 | .00 |
| **[4]** | .00 | .00 | .00 | .00 | .23 | .00 | .00 | .00 | .00 |
| **[5]** | .00 | .00 | .00 | .00 | .00 | .00 | .00 | .00 | .00 |
| **[6]** | .00 | .00 | .00 | .00 | .00 | .00 | .23 | .00 | .00 |
| **[7]** | .00 | .00 | .00 | .22 | .00 | .00 | .00 | .22 | .00 |
| **[8]** | .23 | .23 | .00 | .00 | .00 | .00 | .00 | .00 | .24 |

Table 4: Inferred attention weights in _No-LayerNorm-Count_, for sequence [6, 6, 0, 3, 7, 0, 4, 3, 6] (rows: query cell; columns: attended cell)
Figure 4: Histograms of the std. deviations of the FF-network output vectors for successful (left) vs failed (right) count predictions on 12x12 grids (LayerNorm-FF-Count)
Figure 3: Histograms of count predictions (left) vs ground truths (right) for 15x15 grids (LayerNorm-FF-Count)
| **Hyperparams** | **Model** | **3x3** | **5x5** | **7x7** | **9x9** | **10x10** | **12x12** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Max. 6x6, 5 layers | Full-Transformer-CountV2 | 100% | 93.33% | 52.22% | 32.22% | 21.67% | 13.98% |
| Max. 6x6, 5 layers | Universal-Transformer-CountV2 | 99.89% | 99.94% | 49.89% | 31.39% | 22.67% | 15.78% |
| Max. 10x10, 10 layers | Full-Transformer-CountV2 | 100% | 100% | 100% | 48.33% | 33.33% | 23.33% |
| Max. 10x10, 10 layers | Universal-Transformer-CountV2 | 99.33% | 84.16% | 63.89% | 85.22% | 45.67% | 20.56% |
| Max. 10x10, 25 layers | Full-Transformer-CountV2 | 100% | 98.89% | 97.22% | 36.67% | 32.22% | 19.44% |
| Max. 10x10, 25 layers | Universal-Transformer-CountV2 | 86.39% | 59.06% | 44.72% | 35.11% | 37.56% | 42.39% |

Table 3: Results for experiments on the modified counting task (_-CountV2_)
code count values into normalizable numerical vectors. In other words, the information about the pixel counts must "survive" the normalization operation. This would not be possible by simply letting the count values speak for themselves, since the quantity is lost when subtracting the vector mean and dividing by its standard deviation.
The impact on the distribution of predicted count values is more subtle in _LayerNorm-SA-Count_ than in _LayerNorm-FF-Count_ (see Fig. 5 and 6). Here also, observing the standard deviations of the inputs of the layer normalization module is more informative (see Fig. 7). We can see that inputs to the SA module's layer normalization with a standard deviation below 0.001 generally lead to correct predictions, while those with a standard deviation above 0.001 lead to incorrect ones.
### Counting the iterative way
In the experiments _Full-Transformer-CountV2_ and _Universal-Transformer-CountV2_ we hit what is arguably a computational resource problem: our models were unable to even fit the training set, on anything but the smallest grids. This is because generalizable counting in the iterative way is a computationally expensive operation.
For a 10x10 input grid, this implies a sequence of 100 tokens (or embeddings). Because the generalizable iterative solution implies iterating as many times as there are tokens, and each iteration is a decoder layer (or a decoder iteration in the Universal Transformer), we would need at least 100 layers to solve this problem. Training through 100 Transformer layers is well beyond our available computational resources.
Experimenting on a smaller scale also proved futile, because smaller grids do not afford count examples that are sufficiently diverse to learn to generalize. In other words, a 3x3 grid typically only has counts up to 3, which means a model trained on these could never learn the concepts for numbers 4 to 9.
If, instead, we made sure to have grids that contain all possible counts between 0 and 9, we would end up with a model unable to generalize beyond 9 to 2-digit numbers. Indeed, this modified counting task implies learning the concept of representing numbers in base 10. That is, it must learn the idea that the rightmost digit is the number modulo 10, that each time we cycle back to 0 we must increment the digit to the left, etc. This is why learning from grids of a significant size is necessary to even hope to generalize.
## Conclusion
Transformers are increasingly being used in research on algorithmic generalization. However, they were originally designed with a specific purpose that makes them well-suited to tasks like NLP. This is revealed in some architectural decisions that make fundamental assumptions about the unimportance of quantity. We show the consequences of these assumptions for algorithmic tasks, focusing on tasks that require explicit or implicit counting, and on the identity function. In particular, we point out that applying a softmax operation over the attention weights makes it impossible to learn to count entities across the input tokens in a parallel manner. Furthermore, we demonstrate that layer normalization causes models to overfit the statistical distribution of the training set. Further research is needed to determine the extent of this phenomenon, and to better understand it.
|
2302.07743
|
Holomorphic motions, dimension, area and quasiconformal mappings
|
We describe the variation of the Minkowski, packing and Hausdorff dimensions
of a set moving under a holomorphic motion, as well as the variation of its
area. Our method provides a new, unified approach to various celebrated
theorems about quasiconformal mappings, including the work of Astala on the
distortion of area and dimension under quasiconformal mappings and the work of
Smirnov on the dimension of quasicircles.
|
Aidan Fuhrer, Thomas Ransford, Malik Younsi
|
2023-02-15T15:42:44Z
|
http://arxiv.org/abs/2302.07743v2
|
# Holomorphic motions, dimension, area and quasiconformal mappings
###### Abstract.
We describe the variation of the Minkowski, packing and Hausdorff dimensions of a set moving under a holomorphic motion, as well as the variation of its area. Our method provides a new, unified approach to various celebrated theorems about quasiconformal mappings, including the work of Astala on the distortion of area and dimension under quasiconformal mappings and the work of Smirnov on the dimension of quasicircles.
Key words and phrases:Holomorphic motion, area, Hausdorff dimension, packing dimension, Minkowski dimension, harmonic function, quasiconformal mapping, quasicircle 2020 Mathematics Subject Classification: Primary 37F44, Secondary 30C62, 31A05, 28A78 Fuhrer supported by an NSERC Canada Graduate Scholarship. Ransford supported by grants from NSERC and the Canada Research Chairs program. Younsi supported by NSF Grant DMS-2050113.
Consider the following problem. Let \(\lambda\mapsto A_{\lambda}\) be a holomorphic motion such that \(A_{\lambda}\subset\mathbb{C}\) for all \(\lambda\in\mathbb{D}\). What sort of functions are \(\lambda\mapsto\dim(A_{\lambda})\) and \(\lambda\mapsto|A_{\lambda}|\)? Here \(|\cdot|\) denotes the area measure (two-dimensional Lebesgue measure) and \(\dim(\cdot)\) can denote any reasonable notion of dimension. Various aspects of this problem have been treated in the literature, see for example [1, 3, 4, 7, 8, 11, 14, 17, 18, 19, 21, 23]. We shall discuss some of these contributions in more detail later.
In this article, we shall be mainly interested in three notions of dimension, namely the Minkowski, packing and Hausdorff dimensions. To state our results, it is convenient to introduce another definition.
**Definition 1.2**.: Let \(D\) be a domain in \(\mathbb{C}\). A positive function \(u:D\to[0,\infty)\) is called _inf-harmonic_ if there exists a family \(\mathcal{H}\) of harmonic functions on \(D\) such that \(u(\lambda)=\inf_{h\in\mathcal{H}}h(\lambda)\) for all \(\lambda\in D\).
In Theorems 1.3-1.6, we consider a holomorphic motion \(f:\mathbb{D}\times A\to\mathbb{C}\) of a subset \(A\) of \(\mathbb{C}\), and write \(A_{\lambda}:=f_{\lambda}(A)\).
Our first result describes the variation of the Minkowski dimension, or more precisely the upper Minkowski dimension \(\overline{\dim}_{M}\), of a bounded set moving under a holomorphic motion.
**Theorem 1.3**.: _Let \(\lambda\mapsto A_{\lambda}\) be a holomorphic motion of a bounded subset \(A\) of \(\mathbb{C}\). Then \(A_{\lambda}\) is bounded for all \(\lambda\in\mathbb{D}\), and either \(\overline{\dim}_{M}(A_{\lambda})=0\) for all \(\lambda\in\mathbb{D}\), or \(\lambda\mapsto 1/\overline{\dim}_{M}(A_{\lambda})\) is an inf-harmonic function on \(\mathbb{D}\)._
From this theorem, we deduce an analogous result for the packing dimension \(\dim_{P}\).
**Theorem 1.4**.: _Let \(\lambda\mapsto A_{\lambda}\) be a holomorphic motion of a subset \(A\) of \(\mathbb{C}\). Then either \(\dim_{P}(A_{\lambda})=0\) for all \(\lambda\in\mathbb{D}\), or \(\lambda\mapsto 1/\dim_{P}(A_{\lambda})\) is an inf-harmonic function on \(\mathbb{D}\)._
From these theorems, we obtain the following corollary.
**Corollary 1.5**.: _Under the respective assumptions of Theorems 1.3 and 1.4, \(\overline{\dim}_{M}(A_{\lambda})\) and \(\dim_{P}(A_{\lambda})\) are continuous, logarithmically subharmonic functions of \(\lambda\in\mathbb{D}\) (and hence also subharmonic on \(\mathbb{D}\)). In particular, if either of these functions attains a maximum on \(\mathbb{D}\), then it is constant._
Proof.: As we shall see, an inf-harmonic function is a continuous superharmonic function. Using Jensen's inequality, it is easy to see that, if \(1/v\) is a positive superharmonic function, then \(\log v\) is a subharmonic function, and hence also \(v\). The last part of the corollary is a consequence of the maximum principle for subharmonic functions.
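For completeness, the Jensen step used here can be spelled out (a routine expansion, not part of the original argument): if \(u:=1/v\) is positive and superharmonic, then for every sufficiently small \(r>0\),

\[\log u(\lambda)\;\geq\;\log\Bigl(\frac{1}{2\pi}\int_{0}^{2\pi}u(\lambda+re^{i\theta})\,d\theta\Bigr)\;\geq\;\frac{1}{2\pi}\int_{0}^{2\pi}\log u(\lambda+re^{i\theta})\,d\theta,\]

where the first inequality combines the super-mean-value property of \(u\) with the monotonicity of \(\log\), and the second is Jensen's inequality for the concave function \(\log\). Hence \(\log u\) is superharmonic, so \(\log v=-\log u\) is subharmonic, and \(v=\exp(\log v)\) is subharmonic as well, being an increasing convex function of a subharmonic function.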
For the Hausdorff dimension \(\dim_{H}\), there is a result similar to Theorems 1.3 and 1.4, but with a weaker conclusion.
**Theorem 1.6**.: _Let \(\lambda\mapsto A_{\lambda}\) be a holomorphic motion of a subset \(A\) of \(\mathbb{C}\). Then either \(\dim_{H}(A_{\lambda})=0\) for all \(\lambda\in\mathbb{D}\), or \(\dim_{H}(A_{\lambda})>0\) for all \(\lambda\in\mathbb{D}\)._
_In the latter case, \(\lambda\mapsto(1/\dim_{H}(A_{\lambda})-1/2)\) is the supremum of a family of inf-harmonic functions on \(\mathbb{D}\)._
The nature of the conclusion in Theorem 1.6 does not permit us to deduce that \(\log\dim_{H}(A_{\lambda})\) or \(\dim_{H}(A_{\lambda})\) is a subharmonic function of \(\lambda\in\mathbb{D}\). We shall return to this problem at the end of the article.
Our next theorem is a sort of converse result.
**Theorem 1.7**.: _Let \(d:\mathbb{D}\to(0,2]\) be a function such that \(1/d\) is inf-harmonic on \(\mathbb{D}\). Then there exists a holomorphic motion \(f:\mathbb{D}\times A\to\mathbb{C}\) of a compact subset \(A\) of \(\mathbb{C}\) such that, setting \(A_{\lambda}:=f_{\lambda}(A)\), we have \(\dim_{P}(A_{\lambda})=\dim_{H}(A_{\lambda})=d(\lambda)\) for all \(\lambda\in\mathbb{D}\)._
We remark that Theorems 1.4 and 1.7 together yield a complete characterization of the variation of the packing dimension of a set moving under a holomorphic motion.
The holomorphic motions that arise from Julia sets of holomorphic families of hyperbolic rational maps (as considered in [15]) have the additional property that their Hausdorff and packing dimensions vary as real-analytic functions of \(\lambda\). This is a special case of a result of Ruelle [21]. (Ruelle stated his theorem for Hausdorff dimension, but it coincides with packing dimension in this case.) For general holomorphic motions, it is known that the Hausdorff and packing dimensions need not be real-analytic (see e.g. [3]). The following corollary of Theorem 1.7 shows that in fact they may have the same lack of smoothness as an arbitrary concave function.
**Corollary 1.8**.: _Given a concave function \(\psi:\mathbb{D}\to[0,\infty)\), there exists a holomorphic motion \(f:\mathbb{D}\times A\to\mathbb{C}\) of a compact subset \(A\) of \(\mathbb{C}\) such that, setting \(A_{\lambda}:=f_{\lambda}(A)\), we have_
\[\dim_{H}(A_{\lambda})=\dim_{P}(A_{\lambda})=\frac{2}{1+\psi(\lambda)}\quad( \lambda\in\mathbb{D}).\]
Proof.: Every positive concave function on \(\mathbb{D}\) is inf-harmonic, since it is the lower envelope of a family of affine functions \(\lambda\mapsto a\operatorname{Re}(\lambda)+b\operatorname{Im}(\lambda)+c\), each of which is harmonic on \(\mathbb{D}\). Thus the map \(\lambda\mapsto\frac{1}{2}(1+\psi(\lambda))\) is inf-harmonic on \(\mathbb{D}\). Also, it is clearly bounded below by \(1/2\), so its reciprocal takes values in \((0,2]\). The result therefore follows from Theorem 1.7.
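To make Corollary 1.8 concrete, here is an illustrative special case (added for orientation): taking \(\psi(\lambda):=1-|\lambda|\), which is concave and positive on \(\mathbb{D}\), the corollary yields a holomorphic motion of a compact set with
\[\dim_{H}(A_{\lambda})=\dim_{P}(A_{\lambda})=\frac{2}{2-|\lambda|}\quad(\lambda\in\mathbb{D}).\]
This dimension equals \(1\) at the origin and tends to \(2\) as \(|\lambda|\to 1\), and it fails to be differentiable at \(\lambda=0\), illustrating the lack of smoothness just discussed.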
We now turn to the discussion of the variation of the area of a set \(A\subset\mathbb{C}\) moving under a holomorphic motion \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\). As before, for \(\lambda\in\mathbb{D}\), we write \(f_{\lambda}(z):=f(\lambda,z)\) and \(A_{\lambda}:=f_{\lambda}(A)\). Then each \(f_{\lambda}:\mathbb{C}\to\mathbb{C}\) is quasiconformal and we denote its complex dilatation by \(\mu_{f_{\lambda}}\) (see §3 for the definitions). Our next result gives a partial description of the function \(\lambda\mapsto|A_{\lambda}|\), where \(|\cdot|\) denotes area measure.
**Theorem 1.9**.: _Suppose that there exists a compact subset \(\Delta\) of \(\mathbb{C}\) such that, for each \(\lambda\in\mathbb{D}\), the map \(f_{\lambda}\) is conformal on \(\mathbb{C}\setminus\Delta\) and \(f_{\lambda}(z)=z+O(1)\) near \(\infty\). Let \(A\) be a Borel subset of \(\Delta\) such that \(|A|>0\)._
1. _If_ \(\mu_{f_{\lambda}}=0\) _a.e. on_ \(A\)_, then_ \(\lambda\mapsto\log(\pi c(\Delta)^{2}/|A_{\lambda}|)\) _is an inf-harmonic function on_ \(\mathbb{D}\)_, where_ \(c(\Delta)\) _denotes the logarithmic capacity of_ \(\Delta\)_._
2. _If_ \(\mu_{f_{\lambda}}=0\) _a.e. on_ \(\mathbb{C}\setminus A\)_, then_ \(\lambda\mapsto|A_{\lambda}|\) _is an inf-harmonic function on_ \(\mathbb{D}\)_._
Note that if \(|A|=0\), then \(|A_{\lambda}|=0\) for all \(\lambda\in\mathbb{D}\), because quasiconformal mappings preserve zero area.
Our approach based on inf-harmonic functions also permits us to present a unified treatment of several celebrated theorems about the distortion of area and dimension under quasiconformal maps.
We emphasize here that prior works on the distortion of dimension under quasiconformal mappings relied on some of their more involved analytic properties, such as higher order integrability of the Jacobian. Our approach, on the other hand, only requires the fact that quasiconformal mappings satisfy a "weak" quasisymmetry property, as stated in Corollary 3.7.
For instance, a simple application of the Harnack inequality allows us to obtain the following two results. In Theorem 1.10, \(\dim\) denotes any one of \(\dim_{P},\dim_{H}\) or \(\overline{\dim}_{M}\). (In the case of \(\overline{\dim}_{M}\), we also suppose that \(A\) is bounded.)
**Theorem 1.10**.: _Let \(F:\mathbb{C}\to\mathbb{C}\) be a \(k\)-quasiconformal homeomorphism, and let \(A\) be a subset of \(\mathbb{C}\) such that \(\dim(A)>0\). Then_
\[\frac{1}{K}\Big{(}\frac{1}{\dim A}-\frac{1}{2}\Big{)}\leq\Big{(}\frac{1}{\dim F (A)}-\frac{1}{2}\Big{)}\leq K\Big{(}\frac{1}{\dim A}-\frac{1}{2}\Big{)},\]
_where \(K:=(1+k)/(1-k)\)._
For the Hausdorff dimension, the above estimate was first suggested by Gehring and Väisälä [11] and finally proved by Astala [1, Theorem 1.4]. For packing dimension it is a special case of a result of Kaufmann [14, Theorem 4].
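As a sanity check on Theorem 1.10 (an added remark): when \(k=0\) we have \(K=1\), and \(F\) is then a conformal homeomorphism of \(\mathbb{C}\) onto itself, hence an affine similarity \(z\mapsto az+b\) with \(a\neq 0\). Such maps preserve each of the three dimensions, and indeed both inequalities collapse to
\[\frac{1}{\dim F(A)}-\frac{1}{2}=\frac{1}{\dim A}-\frac{1}{2}.\]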
**Theorem 1.11**.: _Let \(F:\mathbb{C}\to\mathbb{C}\) be a \(k\)-quasiconformal homeomorphism which is conformal on \(\mathbb{C}\setminus\Delta\), where \(\Delta\) is a compact set of logarithmic capacity at most \(1\), and such that \(F(z)=z+o(1)\) near \(\infty\). Let \(A\) be a Borel subset of \(\Delta\)._
1. _If_ \(\mu_{F}=0\) _a.e. on_ \(A\)_, then_ \[|F(A)|\leq\pi^{1-1/K}|A|^{1/K}.\]
2. _If_ \(\mu_{F}=0\) _a.e. on_ \(\mathbb{C}\setminus A\)_, then_ \[|F(A)|\leq K|A|.\]
3. _Hence, in general,_ \[|F(A)|\leq K\pi^{1-1/K}|A|^{1/K}.\]
_Here again \(K=(1+k)/(1-k)\)._
Theorem 1.11 is a sharpened form of a result of Astala [1, Theorem 1] due to Eremenko and Hamilton [8, Theorem 1].
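For orientation, here is a numerical instance of Theorem 1.11 (not needed later): if \(k=1/3\), then \(K=2\), and part (iii) reads
\[|F(A)|\leq 2\sqrt{\pi}\,|A|^{1/2}.\]
Since \(A\subset\Delta\) and \(|\Delta|\leq\pi c(\Delta)^{2}\leq\pi\) by the isoperimetric inequality for logarithmic capacity (invoked again in §9), the right-hand side is at most \(2\pi\) in this case.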
We also show how the proof of Theorem 1.6 can be adapted to obtain the following upper bound for the Hausdorff dimension of quasicircles due to Smirnov [23].
**Theorem 1.12**.: _If \(\Gamma\) is a \(k\)-quasicircle, then \(\dim_{H}(\Gamma)\leq 1+k^{2}\)._
Finally, we obtain a result on the distortion of dimension under quasisymmetric maps. For the Hausdorff dimension \(\dim_{H}\), it was proved by Prause and Smirnov, see the main result of [18] and also [17, Theorem 3.1]. In the theorem below, \(\dim\) denotes one of \(\overline{\dim}_{M}\) or \(\dim_{P}\). In the case of \(\overline{\dim}_{M}\), we also assume that \(A\) is bounded.
**Theorem 1.13**.: _Let \(g:\mathbb{R}\to\mathbb{R}\) be a \(k\)-quasisymmetric map, where \(k\in[0,1)\). Then, given a set \(A\subset\mathbb{R}\) with \(\dim(A)=\delta\), \(0<\delta\leq 1\), we have_
\[\Delta(\delta,k)\leq\dim(g(A))\leq\Delta^{*}(\delta,k).\]
_Here_
\[\Delta(\delta,k):=1-\left(\frac{k+l}{1+kl}\right)^{2}\]
_where \(l:=\sqrt{1-\delta}\), and the inverse bound \(\Delta^{*}(\delta,k)\) is given by_
\[\Delta^{*}(\delta,k):=\Delta(\delta,-\min(k,\sqrt{1-\delta})).\]
_In particular, if \(\dim A=\delta=1\), then \(l=0\) and \(\Delta(\delta,k)=1-k^{2}\), whence_
\[\dim(g(A))\geq 1-k^{2}.\]
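Two quick consistency checks on these formulas (added remarks, not part of the theorem): when \(k=0\), we have \(\min(k,\sqrt{1-\delta})=0\) and
\[\Delta(\delta,0)=1-l^{2}=\delta=\Delta^{*}(\delta,0),\]
so the two bounds coincide and force \(\dim(g(A))=\delta\). Moreover, since \(k\mapsto(k+l)/(1+kl)\) is increasing on \([0,1)\) for each fixed \(l\in[0,1)\), the lower bound \(\Delta(\delta,k)\) decreases as \(k\) increases, so the admissible range for \(\dim(g(A))\) widens with \(k\).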
The remainder of the paper is organized as follows. We review the notions of Hausdorff, packing and Minkowski dimensions in §2. In §3 we discuss holomorphic motions in more detail, in particular their relation to quasiconformal maps. The basic properties of inf-harmonic functions that we need are developed in §4. Our main results, Theorems 1.3, 1.4, 1.6, 1.7 and 1.9, are proved in §§5-9. The applications to quasiconformal mappings, namely Theorems 1.10, 1.11, 1.12 and 1.13, are treated in §10. We conclude in §11 with an open problem.
## 2. Notions of dimension
In this section we present a very brief review of some basic notions of dimension, introducing the notation, and concentrating on the aspects that will be useful to us later. Our account is based on the books of Bishop-Peres [6] and Falconer [9].
### Hausdorff dimension
We begin with the definition. Let \(A\subset\mathbb{C}\). For \(s\geq 0\) and \(\delta>0\), define
\[\mathcal{H}^{s}_{\delta}(A):=\inf\Bigl{\{}\sum_{j=1}^{\infty}\operatorname{diam} (A_{j})^{s}\Bigr{\}},\]
where the infimum is taken over all countable covers \(\{A_{j}\}\) of \(A\) by sets of diameter at most \(\delta\). Since \(\mathcal{H}^{s}_{\delta}(A)\) increases as \(\delta\) decreases, the limit
\[\mathcal{H}^{s}(A):=\lim_{\delta\to 0}\mathcal{H}^{s}_{\delta}(A)\]
exists, possibly \(0\) or \(\infty\). The set function \(\mathcal{H}^{s}(\cdot)\) is an outer measure on \(\mathbb{C}\), called the _\(s\)-dimensional Hausdorff measure_. The _Hausdorff dimension_ of \(A\) is defined as the unique real number \(\dim_{H}(A)\in[0,2]\) such that
\[\mathcal{H}^{s}(A)=\begin{cases}\infty,&s<\dim_{H}(A),\\ 0,&s>\dim_{H}(A).\end{cases}\]
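For instance (standard examples, recalled only for orientation): a line segment \(L\) satisfies \(0<\mathcal{H}^{1}(L)<\infty\), and \(\mathcal{H}^{2}\) is a constant multiple of area measure on \(\mathbb{C}\), so
\[\dim_{H}(L)=1\qquad\text{and}\qquad\dim_{H}(A)=2\ \text{ whenever }|A|>0.\]
Less trivial examples arise from iterated function systems; see the discussion of similarity dimension at the end of this section.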
We shall need a slight variant of this construction. A _dyadic square_ is a subset of \(\mathbb{C}\) of the form \(Q=[m2^{-k},(m+1)2^{-k})\times[n2^{-k},(n+1)2^{-k})\), where \(k,m,n\) are integers (possibly negative). Define
\[\widetilde{\mathcal{H}}^{s}_{\delta}(A):=\inf\Bigl{\{}\sum_{j=1}^{\infty} \operatorname{diam}(Q_{j})^{s}\Bigr{\}},\]
where now the infimum is taken merely over countable covers \(\{Q_{j}\}\) of \(A\) by dyadic squares of diameter at most \(\delta\). As before, we also set
\[\widetilde{\mathcal{H}}^{s}(A):=\lim_{\delta\to 0}\widetilde{\mathcal{H}}^{s}_{ \delta}(A).\]
Clearly we have \(\widetilde{\mathcal{H}}^{s}_{\delta}(A)\geq\mathcal{H}^{s}_{\delta}(A)\) for all \(\delta\), and hence \(\widetilde{\mathcal{H}}^{s}(A)\geq\mathcal{H}^{s}(A)\). Also, it is not hard to see that any bounded subset of \(\mathbb{C}\) can be covered by \(9\) dyadic squares of smaller diameter, from which it follows that \(\widetilde{\mathcal{H}}^{s}_{\delta}(A)\leq 9\mathcal{H}^{s}_{\delta}(A)\) for all \(\delta\), and hence \(\widetilde{\mathcal{H}}^{s}(A)\leq 9\mathcal{H}^{s}(A)\). In particular, we deduce the following result.
**Proposition 2.1**.: _With the above notation, we have_
\[\widetilde{\mathcal{H}}^{s}(A)=\begin{cases}\infty,&s<\dim_{H}(A),\\ 0,&s>\dim_{H}(A).\end{cases}\]
Dyadic squares have the property that any two of them are either nested or disjoint. Thus the sets \(Q_{j}\) in the definition of \(\widetilde{\mathcal{H}}^{s}_{\delta}(A)\) may be taken to be disjoint. This will be useful for us later. For more on this, see [6, §1.3, p.11].
We conclude by noting that Hausdorff dimension is _countably stable_, i.e., for any sequence of sets \((A_{j})\) we have \(\dim_{H}(\cup_{j\geq 1}A_{j})=\sup_{j\geq 1}\dim_{H}(A_{j})\) (see e.g. [9, p.49]).
### Packing dimension
The notion of packing dimension is in some sense dual to that of Hausdorff dimension. It was introduced by Tricot in [25].
Once again, we begin with the definition. Let \(A\subset\mathbb{C}\). For \(s\geq 0\) and \(\delta>0\), define
\[\mathcal{P}^{s}_{\delta}(A):=\sup\Bigl{\{}\sum_{j=1}^{n}\operatorname{diam}(D_{ j})^{s}\Bigr{\}},\]
where the supremum is taken over all finite sets of disjoint disks \(\{D_{j}\}\) with centres in \(A\) and of diameters at most \(\delta\). Since \(\mathcal{P}^{s}_{\delta}(A)\) decreases as \(\delta\) decreases, the limit
\[\mathcal{P}^{s}_{0}(A):=\lim_{\delta\to 0}\mathcal{P}^{s}_{\delta}(A)\]
exists, possibly \(0\) or \(\infty\). This is not yet an outer measure, because it is not countably subadditive. It is sometimes called the _\(s\)-dimensional pre-packing measure_ of \(A\). We modify it to make it an outer measure, defining the _\(s\)-dimensional packing measure_ of \(A\) by
\[\mathcal{P}^{s}(A):=\inf\Bigl{\{}\sum_{j\geq 1}\mathcal{P}^{s}_{0}(A_{j}):A= \cup_{j\geq 1}A_{j}\Bigr{\}},\]
where the infimum is taken over all countable covers of \(A\) by subsets \((A_{j})_{j\geq 1}\). The _packing dimension_ of \(A\) is then defined as the unique real number \(\dim_{P}(A)\in[0,2]\) such that
\[\mathcal{P}^{s}(A)=\begin{cases}\infty,&s<\dim_{P}(A),\\ 0,&s>\dim_{P}(A).\end{cases}\]
As in the case of Hausdorff dimension, the packing dimension is countably stable: \(\dim_{P}(\cup_{j\geq 1}A_{j})=\sup_{j\geq 1}\dim_{P}(A_{j})\). Also, we always have
\[\dim_{H}(A)\leq\dim_{P}(A),\]
and the inequality may be strict.
### Minkowski dimension
Let \(A\) be a bounded subset of \(\mathbb{C}\). Given \(\delta>0\), we denote by \(N_{\delta}(A)\) the smallest number of sets of diameter at most \(\delta\) needed to cover \(A\). The _upper_ and _lower Minkowski dimensions_ of \(A\) are respectively defined by
\[\overline{\dim}_{M}(A):=\limsup_{\delta\to 0}\frac{\log N_{\delta}(A)}{\log(1/ \delta)}\quad\text{and}\quad\underline{\dim}_{M}(A):=\liminf_{\delta\to 0} \frac{\log N_{\delta}(A)}{\log(1/\delta)}.\]
Of course we always have \(\underline{\dim}_{M}(A)\leq\overline{\dim}_{M}(A)\). The inequality may be strict. If equality holds, then we speak simply of the Minkowski dimension of \(A\), denoted \(\dim_{M}(A)\). It is also called the _box-counting dimension_ of \(A\).
The Minkowski dimension has the virtue of simplicity, but it also suffers from the drawback that, unlike the Hausdorff and packing dimensions, it is not countably stable, i.e., it can happen that \(\dim_{M}(\cup_{j}A_{j})>\sup_{j}\dim_{M}(A_{j})\).
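A standard example illustrating this (included for concreteness): the compact set \(A:=\{0\}\cup\{1/n:n\geq 1\}\) satisfies
\[\dim_{H}(A)=\dim_{P}(A)=0\qquad\text{but}\qquad\dim_{M}(A)=\tfrac{1}{2},\]
since \(A\) is countable, single points have dimension zero, and the Hausdorff and packing dimensions are countably stable, whereas a direct count gives \(N_{\delta}(A)\) of order \(\delta^{-1/2}\). In particular, the inequality \(\dim_{P}(A)\leq\overline{\dim}_{M}(A)\) recorded below may be strict.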
There is a useful relationship between upper Minkowski dimension and the pre-packing measure \(\mathcal{P}_{0}^{s}\) introduced in the previous subsection. The following result is due to Tricot [25, Corollary 2].
**Proposition 2.2**.: _If \(A\) is a bounded subset of \(\mathbb{C}\), then_
\[\mathcal{P}_{0}^{s}(A)=\begin{cases}\infty,&s<\overline{\dim}_{M}(A),\\ 0,&s>\overline{\dim}_{M}(A).\end{cases}\]
Using this result, we can express the packing dimension in terms of the upper Minkowski dimension. The following theorem is again due to Tricot [25, Proposition 2], see also [6, Theorem 2.7.1].
**Proposition 2.3**.: _If \(A\) is a subset of \(\mathbb{C}\), then_
\[\dim_{P}(A)=\inf\Bigl{\{}\sup_{j\geq 1}\overline{\dim}_{M}(A_{j}):A=\cup_{j \geq 1}A_{j}\Bigr{\}},\]
_where the infimum is taken over all countable covers of \(A\) by bounded subsets \((A_{j})\)._
From this result, it is obvious that, for every bounded set \(A\), we have
\[\dim_{P}(A)\leq\overline{\dim}_{M}(A).\]
In general the inequality can be strict. The books [6] and [9] both contain a discussion of conditions under which equality holds.
### Similarity dimension
There is one further notion of dimension that will prove useful in what follows. It applies to a specific example.
Consider a finite system of contractive similarities
\[\gamma_{j}(z)=a_{j}z+b_{j}\quad(j=1,\ldots,n),\]
where \(a_{1},\ldots,a_{n},b_{1},\ldots,b_{n}\in\mathbb{C}\) and \(|a_{j}|<1\) for all \(j\). In this situation, there is a unique compact subset \(L\) of \(\mathbb{C}\) such that \(L=\cup_{j=1}^{n}\gamma_{j}(L)\), called the _limit set_ of the iterated function system \(\{\gamma_{1},\ldots,\gamma_{n}\}\).
The system \(\{\gamma_{1},\ldots,\gamma_{n}\}\) is said to satisfy the _open set condition_ if there exists a non-empty open subset \(U\) of \(\mathbb{C}\) such that \(\gamma_{j}(U)\subset U\) for all \(j\) and \(\gamma_{i}(U)\cap\gamma_{j}(U)=\emptyset\) whenever \(i\neq j\). The following result is due to Hutchinson [12], generalizing an earlier result of Moran [16], see also [9, Theorem 9.3] or [6, Theorem 2.2.2].
**Theorem 2.4**.: _If the system \(\{\gamma_{1},\ldots,\gamma_{n}\}\) satisfies the open set condition, then the Hausdorff and packing dimensions of its limit set \(L\) are given by \(\dim_{H}(L)=\dim_{P}(L)=s\), where \(s\) is the unique solution of the equation_
\[\sum_{j=1}^{n}|a_{j}|^{s}=1.\]
The number \(s\) (with or without the open set condition) is called the _similarity dimension_ of the system.
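The classical middle-thirds Cantor set furnishes a familiar example (recalled here for illustration): it is the limit set of the system \(\gamma_{1}(z):=z/3\), \(\gamma_{2}(z):=z/3+2/3\), which satisfies the open set condition with \(U\) the open disk of centre \(1/2\) and radius \(1/2\). Theorem 2.4 therefore gives
\[\dim_{H}(L)=\dim_{P}(L)=s,\qquad\text{where}\quad 2\,(1/3)^{s}=1,\quad\text{i.e.}\quad s=\frac{\log 2}{\log 3}.\]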
## 3. Holomorphic motions and quasiconformal maps
Holomorphic motions were defined in Definition 1.1. As was mentioned in the introduction, they were introduced in [15] by Mañé, Sad and Sullivan, who also established the \(\lambda\)-lemma. Their result was later improved by Slodkowski in [22], confirming a conjecture of Sullivan and Thurston [24]. Slodkowski's result is often called the extended \(\lambda\)-lemma. There are now several proofs; another one can be found in [2, §12].
**Theorem 3.1** (Extended \(\lambda\)-lemma).: _A holomorphic motion \(f:\mathbb{D}\times A\to\mathbb{C}\) has an extension to a holomorphic motion \(F:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\). The function \(F\) is jointly continuous on \(\mathbb{D}\times\mathbb{C}\)._
As was already remarked in [15], holomorphic motions are closely related to quasiconformal maps. We now define this term and state some results that will be needed in the sequel. Our treatment follows that in [2].
**Definition 3.2**.: Let \(\Omega,\Omega^{\prime}\) be plane domains. A homeomorphism \(f:\Omega\to\Omega^{\prime}\) is called _quasiconformal_ if:
1. \(f\) is orientation-preserving;
2. its distributional Wirtinger derivatives \(\partial f/\partial z\) and \(\partial f/\partial\overline{z}\) both belong to \(L^{2}_{\mathrm{loc}}(\Omega)\), and
3. \(f\) satisfies the _Beltrami equation_: \[\frac{\partial f}{\partial\overline{z}}=\mu_{f}\,\frac{\partial f}{\partial z }\quad\text{a.e. on }\Omega,\] where \(\mu_{f}\) is a measurable function on \(\Omega\) such that \(\|\mu_{f}\|_{\infty}<1\).
The function \(\mu_{f}\) is called the _Beltrami coefficient_ or _complex dilatation_ of \(f\). We shall say that the mapping \(f\) is _\(k\)-quasiconformal_ if \(\|\mu_{f}\|_{\infty}\leq k\).
_Remark_.: Many authors (including those of [2]) use the term \(K\)-quasiconformal to mean \(k\)-quasiconformal in our sense with \(K=(1+k)/(1-k)\).
We shall need the following fundamental result on the existence and uniqueness of solutions to the Beltrami equation [2, Theorem 5.3.4].
**Theorem 3.3** (Measurable Riemann mapping theorem).: _Let \(\mu\) be a measurable function on \(\mathbb{C}\) with \(\|\mu\|_{\infty}<1\). Then there exists a unique quasiconformal mapping \(f:\mathbb{C}\to\mathbb{C}\) fixing \(0\) and \(1\) with \(\mu_{f}=\mu\) a.e. on \(\mathbb{C}\)._
It is well known that solutions of the Beltrami equation depend holomorphically on the parameter \(\mu\)[2, Corollary 5.7.5]. Combined with [2, Theorem 12.3.2], this is the key to the following characterization of holomorphic motions.
**Theorem 3.4**.: _Let \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) be a function. The following statements are equivalent:_
1. _The map_ \(f\) _is a holomorphic motion._
_._
2. _For each_ \(\lambda\in\mathbb{D}\)_, the map_ \(f_{\lambda}:\mathbb{C}\to\mathbb{C}\) _is quasiconformal with Beltrami coefficient_ \(\mu_{\lambda}\) _satisfying_ \(\|\mu_{\lambda}\|_{\infty}\leq|\lambda|\)_. Moreover, the map_ \(f_{0}\) _is the identity, and the_ \(L^{\infty}(\mathbb{C})\)_-valued map_ \(\lambda\mapsto\mu_{\lambda}\) _is holomorphic on_ \(\mathbb{D}\)_._
These results can be used to show that every quasiconformal homeomorphism of \(\mathbb{C}\) can be embedded as part of a holomorphic motion [2, Theorem 12.5.3].
**Theorem 3.5**.: _If \(F:\mathbb{C}\to\mathbb{C}\) is a \(k\)-quasiconformal homeomorphism, then there exists a holomorphic motion \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) such that \(f_{k}=F\)._
Quasiconformal maps exhibit numerous interesting properties. An important one for us is the fact that quasiconformal homeomorphisms of \(\mathbb{C}\) are quasisymmetric in the sense described in the next theorem [2, Theorem 3.5.3].
**Theorem 3.6**.: _Given \(k\in[0,1)\), there exists an increasing homeomorphism \(\eta:[0,\infty)\to[0,\infty)\) such that every \(k\)-quasiconformal map \(f:\mathbb{C}\to\mathbb{C}\) satisfies_
\[\frac{|f(z_{0})-f(z_{1})|}{|f(z_{0})-f(z_{2})|}\leq\eta\Big{(}\frac{|z_{0}-z_ {1}|}{|z_{0}-z_{2}|}\Big{)}\quad(z_{0},z_{1},z_{2}\in\mathbb{C}). \tag{3.1}\]
We shall exploit this result via the following simple corollary.
**Corollary 3.7**.: _Given \(k\in[0,1)\), there exist constants \(\delta,\delta^{\prime}>0\) such that every \(k\)-quasiconformal homeomorphism \(f:\mathbb{C}\to\mathbb{C}\) has the following properties:_
1. _If_ \(z_{0}\in\mathbb{C}\) _and_ \(D\) _is an open disk with centre_ \(z_{0}\)_, then_ \(f(D)\) _contains the open disk with centre_ \(f(z_{0})\) _and radius_ \(\delta\operatorname{diam}f(D)\)_._
2. _If_ \(z_{0}\in\mathbb{C}\) _and_ \(Q\) _is an open square with centre_ \(z_{0}\)_, then_ \(f(Q)\) _contains the open disk with centre_ \(f(z_{0})\) _and radius_ \(\delta^{\prime}\operatorname{diam}f(Q)\)_._
Proof.: Let \(\eta\) be the function associated to \(k\) by Theorem 3.6. In the case of the square, if \(z_{1},z_{2}\in\partial Q\), then \(|z_{0}-z_{1}|/|z_{0}-z_{2}|\leq\sqrt{2}\), so by (3.1) we have \(|f(z_{0})-f(z_{1})|/|f(z_{0})-f(z_{2})|\leq\eta(\sqrt{2})\). It follows that (ii) holds with \(\delta^{\prime}=1/(2\eta(\sqrt{2}))\). The proof of (i) is similar, now with \(\delta=1/(2\eta(1))\).
## 4. Inf-harmonic functions
Recall from Definition 1.2 that a function \(u:D\to[0,\infty)\) defined on a plane domain \(D\) is inf-harmonic if it is the lower envelope of a family of harmonic functions. It is inherent in the definition that \(u\) is positive, so the harmonic functions are positive too. This has the consequence that inf-harmonic functions inherit several of the good properties of positive harmonic functions.
We begin by showing that inf-harmonic functions satisfy Harnack's inequality. Recall that, given \(\lambda_{1},\lambda_{2}\in D\), there exists \(\tau>0\) such that, for all
positive harmonic functions \(h\) on \(D\),
\[\frac{1}{\tau}\leq\frac{h(\lambda_{1})}{h(\lambda_{2})}\leq\tau.\]
The smallest such \(\tau\) is called the _Harnack distance_ between \(\lambda_{1},\lambda_{2}\), denoted \(\tau_{D}(\lambda_{1},\lambda_{2})\). For example, \(\tau_{\mathbb{D}}(0,\lambda)=(1+|\lambda|)/(1-|\lambda|)\) for \(\lambda\in\mathbb{D}\).
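To justify this last formula (a standard computation, included as a reminder): for every positive harmonic function \(h\) on \(\mathbb{D}\), the classical Harnack inequality gives
\[\frac{1-|\lambda|}{1+|\lambda|}\,h(0)\leq h(\lambda)\leq\frac{1+|\lambda|}{1-|\lambda|}\,h(0)\quad(\lambda\in\mathbb{D}),\]
and both bounds are attained by suitable Poisson kernels, whence \(\tau_{\mathbb{D}}(0,\lambda)=(1+|\lambda|)/(1-|\lambda|)\).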
**Proposition 4.1**.: _Let \(u\) be an inf-harmonic function on a domain \(D\), and suppose that \(u\not\equiv 0\). Then \(u(\lambda)>0\) for all \(\lambda\in D\) and_
\[\frac{1}{\tau_{D}(\lambda_{1},\lambda_{2})}\leq\frac{u(\lambda_{1})}{u(\lambda _{2})}\leq\tau_{D}(\lambda_{1},\lambda_{2})\quad(\lambda_{1},\lambda_{2}\in D). \tag{4.1}\]
Proof.: For each positive harmonic function \(h\) on \(D\), we have
\[\frac{1}{\tau_{D}(\lambda_{1},\lambda_{2})}h(\lambda_{2})\leq h(\lambda_{1}) \leq\tau_{D}(\lambda_{1},\lambda_{2})h(\lambda_{2})\quad(\lambda_{1},\lambda_ {2}\in D).\]
Taking the infimum over all \(h\) such that \(h\geq u\), we obtain
\[\frac{1}{\tau_{D}(\lambda_{1},\lambda_{2})}u(\lambda_{2})\leq u(\lambda_{1}) \leq\tau_{D}(\lambda_{1},\lambda_{2})u(\lambda_{2})\quad(\lambda_{1},\lambda_ {2}\in D).\]
Since \(u\not\equiv 0\), this shows that \(u(\lambda)>0\) for all \(\lambda\in D\), and (4.1) now follows immediately.
**Corollary 4.2**.: _If \(u\) is an inf-harmonic function on \(D\), then it is a continuous superharmonic function on \(D\)._
Proof.: The continuity of \(u\) follows from Proposition 4.1, since \(\tau_{D}\) is continuous on \(D\times D\). As \(u\) is the infimum of harmonic functions, it clearly satisfies the super-mean value property, so it is superharmonic on \(D\).
The next result is a normal-family property.
**Proposition 4.3**.: _Let \((D_{n})_{n\geq 1}\) be an increasing sequence of domains, and let \(D:=\cup_{n\geq 1}D_{n}\). For each \(n\), let \(u_{n}\) be an inf-harmonic function on \(D_{n}\). Then either \(u_{n}\to\infty\) locally uniformly on \(D\), or else some subsequence \(u_{n_{j}}\to u\) locally uniformly on \(D\), where \(u\) is inf-harmonic on \(D\)._
Proof.: If there exists a point \(\lambda_{0}\in D\) such that \(u_{n}(\lambda_{0})\to\infty\), then by Proposition 4.1 the sequence \(u_{n}\to\infty\) locally uniformly in each \(D_{m}\) and hence also on \(D\). Likewise if \(u_{n}(\lambda_{0})\to 0\), then \(u_{n}\to 0\) locally uniformly in \(D\). Thus, replacing \((u_{n})\) by a subsequence if necessary, we may assume that there exist \(\lambda_{0}\in D_{1}\) and \(M>1\) such that \(1/M\leq u_{n}(\lambda_{0})\leq M\) for all \(n\). In this case, by Proposition 4.1 once more, the sequence \((u_{n})\) is equicontinuous on each \(D_{m}\), and by the Arzelà-Ascoli theorem, a subsequence \((u_{n_{j}})\) converges locally uniformly on \(D\) to a finite-valued function \(u\).
It remains to show that \(u\) is itself inf-harmonic on \(D\). Relabelling, if necessary, we can suppose that the whole sequence \(u_{n}\) converges to \(u\) locally uniformly on \(D\). Let \(\lambda_{0}\in D\). Choose \(n_{0}\) so that \(\lambda_{0}\in D_{n_{0}}\). For each \(n\geq n_{0}\), the function \(u_{n}\) is inf-harmonic on \(D_{n}\), so there exists a (positive) harmonic function \(h_{n}\) on \(D_{n}\) such that \(h_{n}\geq u_{n}\) on \(D_{n}\) and \(h_{n}(\lambda_{0})\leq u(\lambda_{0})+1/n\). By
a standard normal-family argument, a subsequence \((h_{n_{j}})\) converges locally uniformly on \(D\) to a function \(h\) that is harmonic on \(D\). Clearly \(h\geq u\) on \(D\) and \(h(\lambda_{0})=u(\lambda_{0})\). Such an \(h\) exists for each choice of \(\lambda_{0}\in D\), so we conclude that \(u\) is indeed inf-harmonic on \(D\).
The following result lists some closure properties of the family of inf-harmonic functions.
**Proposition 4.4**.:
1. _If_ \(u\) _and_ \(v\) _are inf-harmonic on_ \(D\) _and if_ \(\alpha,\beta\geq 0\)_, then_ \(\alpha u+\beta v\) _is inf-harmonic on_ \(D\)_._
2. _If_ \(u\) _is inf-harmonic on_ \(D\) _and_ \(h\) _is harmonic on_ \(D\)_, and if_ \(u\geq h\)_, then_ \(u-h\) _is inf-harmonic on_ \(D\)_._
3. _If_ \((u_{n})_{n\geq 1}\) _are inf-harmonic functions on_ \(D\)_, and if_ \(u_{n}\to u\) _pointwise on_ \(D\)_, then either_ \(u\) _is inf-harmonic on_ \(D\) _or_ \(u\equiv\infty\)_._
4. _If_ \((D_{n})\) _is an increasing sequence of domains with_ \(\cup_{n\geq 1}D_{n}=D\)_, and if_ \(u\) _is a function on_ \(D\) _such that_ \(u|_{D_{n}}\) _is inf-harmonic on_ \(D_{n}\) _for each_ \(n\)_, then_ \(u\) _is inf-harmonic on_ \(D\)_._
5. _If_ \(\mathcal{V}\) _is a family of inf-harmonic functions on_ \(D\) _and_ \(u:=\inf_{v\in\mathcal{V}}v\)_, then_ \(u\) _is inf-harmonic on_ \(D\)_._
6. _If_ \(\mathcal{V}\) _is an upward-directed family of inf-harmonic functions on_ \(D\) _(i.e., given_ \(v_{1},v_{2}\in\mathcal{V}\)_, there exists_ \(v_{3}\in\mathcal{V}\) _with_ \(v_{3}\geq\max\{v_{1},v_{2}\}\)_), and if_ \(u:=\sup_{v\in\mathcal{V}}v\)_, then either_ \(u\) _is inf-harmonic on_ \(D\) _or_ \(u\equiv\infty\)_._
Proof.: (i),(ii) These are both obvious.
(iii) Assume that \(u\not\equiv\infty\). Then, by Proposition 4.3, a subsequence of the \((u_{n})\) converges locally uniformly on \(D\) to an inf-harmonic function \(v\). Since the same subsequence converges pointwise to \(u\), we must have \(v=u\). Hence \(u\) is inf-harmonic.
(iv) This follows by applying Proposition 4.3 with \(u_{n}:=u|_{D_{n}}\).
(v) Again, this is obvious.
(vi) Assume that \(u\not\equiv\infty\). Then, by Proposition 4.1, \(u\) is finite-valued and continuous on \(D\). Let \(\Lambda=(\lambda_{j})\) be a sequence that is dense in \(D\). Using the fact that \(\mathcal{V}\) is upward-directed, we may construct an increasing sequence of functions \(v_{n}\in\mathcal{V}\) such that \(v_{n}(\lambda_{j})\geq u(\lambda_{j})-1/n\) for all \(j\in\{1,2,\ldots,n\}\) and all \(n\geq 1\). Then \(v_{n}\) converges pointwise to a function \(v\) such that \(v\leq u\) and \(v=u\) on \(\Lambda\). By part (iii) above, \(v\) is inf-harmonic on \(D\). As \(v=u\) on the dense subset \(\Lambda\) and both \(u,v\) are continuous, we have \(v=u\) on \(D\). Thus \(u\) is inf-harmonic on \(D\), as asserted.
We conclude this section with an implicit function theorem for inf-harmonic functions.
**Theorem 4.5**.: _Let \(D\) be a plane domain, and let \(a_{j}:D\to(0,1)\) be a finite or infinite sequence of functions such that \(\log(1/a_{j})\) is inf-harmonic on \(D\) for each \(j\). Let \(c>0\), and define \(s:D\to[0,\infty]\) by_
\[s(\lambda):=\inf\Bigl{\{}\alpha>0:\sum_{j}a_{j}(\lambda)^{\alpha}\leq c\Bigr{\}} \quad(\lambda\in D),\]
_where we interpret \(\inf\emptyset=\infty\). Then either \(s\equiv 0\) or \(1/s\) is an inf-harmonic function on \(D\)._
It is perhaps worth emphasizing the case where there are only finitely many functions \(a_{j}\). It then becomes a result closely linked to the notion of similarity dimension defined in §2. It generalizes a result of Baribeau and Roy [4, Theorem 1].
**Corollary 4.6**.: _Let \(a_{1},\ldots,a_{n}:D\to(0,1)\) be functions such that \(\log(1/a_{j})\) is inf-harmonic on \(D\) for each \(j\). Let \(c\in(0,n)\) and, for each \(\lambda\in D\), let \(s(\lambda)\) be the unique solution of the equation_
\[\sum_{j=1}^{n}a_{j}(\lambda)^{s(\lambda)}=c.\]
_Then \(1/s\) is an inf-harmonic function on \(D\)._
We shall deduce Theorem 4.5 from a more general abstract result. To formulate this result, it is convenient to introduce some terminology.
Let \(X\) be a set and let \(\mathcal{U}\) be a family of functions \(u:X\to[0,\infty)\). We call \(\mathcal{U}\) an _inf-cone_ on \(X\) if it satisfies the following closure properties:
* if \(u,v\in\mathcal{U}\) and \(\alpha,\beta\geq 0\), then \(\alpha u+\beta v\in\mathcal{U}\);
* if \(\emptyset\neq\mathcal{V}\subset\mathcal{U}\) and \(u:=\inf_{v\in\mathcal{V}}v\), then \(u\in\mathcal{U}\).
By Proposition 4.4 parts (i) and (v), the set of inf-harmonic functions on a domain \(D\) is an inf-cone on \(D\).
The following result may be viewed as an abstract implicit function theorem for inf-cones.
**Lemma 4.7**.: _Let \(\mathcal{U}\) be an inf-cone on \(X\), let \((u_{j})_{j\geq 1}\) be a sequence in \(\mathcal{U}\), and for each \(j\) let \(\phi_{j}:[0,\infty)\to[0,\infty)\) be a continuous, decreasing, convex function. Define \(v:X\to[0,\infty]\) by_
\[v(x):=\sup\Bigl{\{}t>0:\sum_{j\geq 1}\phi_{j}(u_{j}(x)/t)\leq 1\Bigr{\}}\quad(x \in X),\]
_where we interpret \(\sup\emptyset=0\). Then \(v\in\mathcal{U}\) or \(v\equiv\infty\)._
Proof.: For each \(j\), let \(\mathcal{L}_{j}\) be the family of functions of the form \(L(y):=b_{L}-a_{L}y\), such that \(a_{L}\geq 0,b_{L}\in\mathbb{R}\) and \(L\leq\phi_{j}\). As \(\phi_{j}\) is a continuous decreasing convex function, we have \(\phi_{j}=\sup_{L\in\mathcal{L}_{j}}L\). Consequently, if \(x\in X\)
and \(t>0\), then
\[\sum_{j\geq 1}\phi_{j}(u_{j}(x)/t)\leq 1\] \[\iff \sum_{j=1}^{n}\phi_{j}(u_{j}(x)/t)\leq 1\quad(n\geq 1)\] \[\iff \sum_{j=1}^{n}(b_{L_{j}}-a_{L_{j}}u_{j}(x)/t)\leq 1\quad(n\geq 1,\ L_ {1}\in\mathcal{L}_{1},\ \ldots,\ L_{n}\in\mathcal{L}_{n})\] \[\iff t\Bigl{(}\sum_{j=1}^{n}b_{L_{j}}-1\Bigr{)}\leq\sum_{j=1}^{n}a_{L_ {j}}u_{j}(x)\quad(n\geq 1,\ L_{1}\in\mathcal{L}_{1},\ \ldots,\ L_{n}\in\mathcal{L}_{n}).\]
There are now two possibilities. If \(\sum_{j=1}^{n}b_{L_{j}}\leq 1\) for all \(n\) and all choices of \((L_{1},\ldots,L_{n})\in\mathcal{L}_{1}\times\cdots\times\mathcal{L}_{n}\), then the above conditions are satisfied for all \(t>0\) and all \(x\in X\). In this case \(v\equiv\infty\). In the other case, we have
\[v(x)=\inf\Biggl{\{}\frac{\sum_{j=1}^{n}a_{L_{j}}u_{j}(x)}{\sum_{j=1}^{n}b_{L_ {j}}-1}\Biggr{\}}\quad(x\in X),\]
where the infimum is taken over all \(n\geq 1\) and all \((L_{1},\ldots,L_{n})\in\mathcal{L}_{1}\times\cdots\times\mathcal{L}_{n}\) such that \(\sum_{j=1}^{n}b_{L_{j}}>1\). Hence \(v\in\mathcal{U}\) in this case.
Proof of Theorem 4.5.: This result follows from Lemma 4.7 upon taking \(\mathcal{U}\) to be the set of inf-harmonic functions on \(D\), \(u_{j}:=\log(1/a_{j})\) and \(\phi_{j}(y):=(1/c)\exp(-y)\) for each \(j\). Indeed, for \(t>0\) we then have \(\sum_{j}\phi_{j}(u_{j}(\lambda)/t)\leq 1\) if and only if \(\sum_{j}a_{j}(\lambda)^{1/t}\leq c\), so the function \(v\) of Lemma 4.7 coincides with \(1/s\), and the conclusion of that lemma is exactly the statement of the theorem.
## 5. Proof of Theorem 1.3
We have \(A_{\lambda}=f_{\lambda}(A)=f(\lambda,A)\), where \(f:\mathbb{D}\times A\to\mathbb{C}\) is a holomorphic motion. By Theorem 3.1, we may extend \(f\) to a holomorphic motion \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\). We shall assume that \(f\) has been so extended. Since \(A\) is bounded and \(f\) is continuous, it follows that \(A_{\lambda}\) is bounded for all \(\lambda\in\mathbb{D}\).
The following lemma establishes the link with inf-harmonic functions. We recall that \(D(a,r)\) denotes the open disk with centre \(a\) and radius \(r\), and that \(\operatorname{diam}(S)\) denotes the euclidean diameter of \(S\).
**Lemma 5.1**.: _Let \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) be a holomorphic motion. Let \(B\) be a bounded subset of \(\mathbb{C}\) and let \(\rho\in(0,1)\). Then \(M:=\operatorname{diam}f(D(0,\rho)\times B)<\infty\). If \(S\) is a subset of \(B\), then the map \(\lambda\mapsto\log(M/\operatorname{diam}f_{\lambda}(S))\) is an inf-harmonic function on \(D(0,\rho)\). Consequently, we have_
\[\frac{\rho-|\lambda|}{\rho+|\lambda|}\leq\frac{\log(M/\operatorname{diam}f_{ \lambda}(S))}{\log(M/\operatorname{diam}S)}\leq\frac{\rho+|\lambda|}{\rho-| \lambda|}\qquad(\lambda\in D(0,\rho)). \tag{5.1}\]
Proof.: As \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) is a continuous map and \(\overline{D}(0,\rho)\times\overline{B}\) is a compact subset of \(\mathbb{D}\times\mathbb{C}\), it follows that \(f(\overline{D}(0,\rho)\times\overline{B})\) is a compact subset of \(\mathbb{C}\). In particular it has finite diameter, so \(M<\infty\).
Given \(S\subset B\), we have
\[\log\Bigl{(}\frac{M}{\operatorname{diam}f_{\lambda}(S)}\Bigr{)}=\inf\Bigl{\{}\log \Bigl{(}\frac{M}{|f_{\lambda}(z)-f_{\lambda}(w)|}\Bigr{)}:z,w\in S,\,z\neq w \Bigr{\}}.\]
For each pair \(z,w\in S\) with \(z\neq w\), the function \(\lambda\mapsto\log(M/|f_{\lambda}(z)-f_{\lambda}(w)|)\) is positive and harmonic on \(D(0,\rho)\). Therefore \(\lambda\mapsto\log(M/\operatorname{diam}f_{\lambda}(S))\) is inf-harmonic on \(D(0,\rho)\).
Finally, the inequality (5.1) is a direct consequence of Harnack's inequality for inf-harmonic functions, Proposition 4.1.
The next lemma contains the heart of the proof of Theorem 1.3. We recall that the upper Minkowski dimension \(\overline{\dim}_{M}\) can be characterized using Proposition 2.2.
**Lemma 5.2**.: _If \(\overline{\dim}_{M}(A)>0\), then there exists an inf-harmonic function \(u\) on \(\mathbb{D}\) such that_
\[u(0)=1/\overline{\dim}_{M}(A)\qquad\text{and}\qquad u(\lambda)\geq 1/\overline{ \dim}_{M}(A_{\lambda})\quad(\lambda\in\mathbb{D}).\]
Proof.: Let \(\rho\in(0,1)\). We shall carry out the proof on the disk \(D(0,\rho)\), and then let \(\rho\to 1\) at the very end.
Let \((d_{n})\) be a sequence such that \(0<d_{n}<\overline{\dim}_{M}(A)\) and \(d_{n}\to\overline{\dim}_{M}(A)\). By Proposition 2.2, for each \(n\) there exists a finite set \(\mathcal{D}_{n}\) of disjoint disks with centres in \(A\) such that, as \(n\to\infty\),
\[\max_{D\in\mathcal{D}_{n}}\operatorname{diam}(D)\to 0\quad\text{and}\quad \sum_{D\in\mathcal{D}_{n}}\operatorname{diam}(D)^{d_{n}}\to\infty. \tag{5.2}\]
Let \(B\) be the union of all the disks in \(\cup_{n\geq 1}\mathcal{D}_{n}\). This is a bounded set, so, by Lemma 5.1, \(M:=\operatorname{diam}f(D(0,\rho)\times B)<\infty\), and \(\lambda\mapsto\log(M/\operatorname{diam}f_{\lambda}(D))\) is inf-harmonic on \(D(0,\rho)\) for each \(D\in\cup_{n}\mathcal{D}_{n}\).
For each \(\lambda\in D(0,\rho)\), let \(s_{n}(\lambda)\) be the unique solution of the equation
\[\sum_{D\in\mathcal{D}_{n}}(\operatorname{diam}f_{\lambda}(D)/M)^{s_{n}( \lambda)}=\sum_{D\in\mathcal{D}_{n}}(\operatorname{diam}D/M)^{d_{n}}.\]
Clearly \(s_{n}(0)=d_{n}\). Also, by the implicit function theorem, Corollary 4.6, the function \(1/s_{n}\) is inf-harmonic on \(D(0,\rho)\). By Proposition 4.3, a subsequence of \(1/s_{n}\) (which, by relabelling, we may suppose to be the whole sequence) converges locally uniformly to an inf-harmonic function \(u\) on \(D(0,\rho)\). Clearly we have \(u(0)=\lim_{n}(1/d_{n})=1/\overline{\dim}_{M}(A)\). We shall show that \(u(\lambda)\geq 1/\overline{\dim}_{M}(A_{\lambda})\) for all \(\lambda\in D(0,\rho)\).
Fix \(\lambda\in D(0,\rho)\), and let \(c\in(0,1/u(\lambda))\). Then \(s_{n}(\lambda)>c\) for all large enough \(n\), and so, for these \(n\), we have
\[\sum_{D\in\mathcal{D}_{n}}(\operatorname{diam}f_{\lambda}(D)/M)^ {c} \geq\sum_{D\in\mathcal{D}_{n}}(\operatorname{diam}f_{\lambda}(D)/ M)^{s_{n}(\lambda)}\] \[=\sum_{D\in\mathcal{D}_{n}}(\operatorname{diam}D/M)^{d_{n}},\]
whence
\[\sum_{D\in{\mathcal{D}}_{n}}(\operatorname{diam}f_{\lambda}(D))^{c}\geq M^{c-d_{n }}\sum_{D\in{\mathcal{D}}_{n}}(\operatorname{diam}D)^{d_{n}}.\]
For a given value of \(n\), the sets \(\{f_{\lambda}(D):D\in{\mathcal{D}}_{n}\}\) are disjoint, but they are not disks. However, we can circumvent this difficulty by invoking the theory of quasiconformal mappings. By Theorem 3.4, the map \(f_{\lambda}\) is a \(\rho\)-quasiconformal self-homeomorphism of \({\mathbb{C}}\). Consequently, by Corollary 3.7(i), there exists a \(\delta>0\) such that, for each \(w\in{\mathbb{C}}\) and each open disk \(D\) with centre \(w\), the set \(f_{\lambda}(D)\) contains the open disk with centre \(f_{\lambda}(w)\) and radius \(\delta\operatorname{diam}f_{\lambda}(D)\). In particular, for each \(D\in{\mathcal{D}}_{n}\), the set \(f_{\lambda}(D)\) contains a disk with centre in \(f_{\lambda}(A)\) and diameter at least \(\delta\operatorname{diam}f_{\lambda}(D)\). Denoting by \({\mathcal{D}}_{n}^{\prime}\) the set of such disks, we obtain a finite set of disjoint disks \(D^{\prime}\) with centres in \(A_{\lambda}\) and such that
\[\sum_{D^{\prime}\in{\mathcal{D}}_{n}^{\prime}}(\operatorname{diam}D^{\prime}) ^{c}\geq\sum_{D\in{\mathcal{D}}_{n}}(\delta\operatorname{diam}f_{\lambda}(D)) ^{c}\geq\delta^{c}M^{c-d_{n}}\sum_{D\in{\mathcal{D}}_{n}}(\operatorname{diam}D )^{d_{n}}.\]
From (5.2), we have \(\sum_{D\in{\mathcal{D}}_{n}}(\operatorname{diam}D)^{d_{n}}\to\infty\), whence it follows that
\[\sum_{D^{\prime}\in{\mathcal{D}}_{n}^{\prime}}(\operatorname{diam}D^{\prime}) ^{c}\to\infty\quad(n\to\infty). \tag{5.3}\]
Also from (5.2), we have \(\max_{D\in{\mathcal{D}}_{n}}\operatorname{diam}(D)\to 0\), which, together with the inequality (5.1), implies that
\[\max_{D^{\prime}\in{\mathcal{D}}_{n}^{\prime}}\operatorname{diam}(D^{\prime}) \to 0\quad(n\to\infty). \tag{5.4}\]
Taken together, the limits (5.3) and (5.4) show that \(\overline{\dim}_{M}(A_{\lambda})\geq c\). As this holds for each \(c\in(0,1/u(\lambda))\), we deduce that \(\overline{\dim}_{M}(A_{\lambda})\geq 1/u(\lambda)\), in other words, that \(u(\lambda)\geq 1/\overline{\dim}_{M}(A_{\lambda})\), as desired.
The proof of the lemma is nearly complete, save for the fact that \(u\) is defined only on \(D(0,\rho)\), not on \({\mathbb{D}}\). To fix this, let us choose an increasing sequence \((\rho_{m})\) in \((0,1)\) such that \(\rho_{m}\to 1\). For each \(m\), the argument above furnishes an inf-harmonic function \(u_{m}\) defined on \(D(0,\rho_{m})\) such that \(u_{m}(0)=1/\overline{\dim}_{M}(A)\) and \(u_{m}(\lambda)\geq 1/\overline{\dim}_{M}(A_{\lambda})\) for all \(\lambda\in D(0,\rho_{m})\). By Proposition 4.3, a subsequence of \((u_{m})\) converges locally uniformly to an inf-harmonic function \(u\) on \({\mathbb{D}}\). Clearly we have \(u(0)=1/\overline{\dim}_{M}(A)\) and \(u(\lambda)\geq 1/\overline{\dim}_{M}(A_{\lambda})\) for all \(\lambda\in{\mathbb{D}}\). The proof is now complete.
From here, it is a small step to establish the main result.
Proof of Theorem 1.3.: It is enough to show that, for each \(\lambda_{0}\in{\mathbb{D}}\) such that \(\overline{\dim}_{M}(A_{\lambda_{0}})>0\), there exists an inf-harmonic function \(u\) on \({\mathbb{D}}\) such that
\[u(\lambda_{0})=1/\overline{\dim}_{M}(A_{\lambda_{0}})\qquad\text{and}\qquad u (\lambda)\geq 1/\overline{\dim}_{M}(A_{\lambda})\quad(\lambda\in{\mathbb{D}}). \tag{5.5}\]
The special case \(\lambda_{0}=0\) has already been proved in Lemma 5.2. The general case can be deduced from this as follows.
Fix a Möbius automorphism \(\phi\) of \(\mathbb{D}\) such that \(\phi(0)=\lambda_{0}\). Then \(\widetilde{f}_{\lambda}:=f_{\phi(\lambda)}\circ f_{\lambda_{0}}^{-1}\) is a holomorphic motion mapping \(\mathbb{D}\times\mathbb{C}\) into \(\mathbb{C}\). Also \(\widetilde{A}:=A_{\lambda_{0}}\) is a bounded subset of \(\mathbb{C}\), such that \(\widetilde{f}_{\lambda}(\widetilde{A})=A_{\phi(\lambda)}\) for all \(\lambda\in\mathbb{D}\). Thus, applying Lemma 5.2 with \(A,f\) replaced by the pair \(\widetilde{A},\widetilde{f}\), we deduce that there exists an inf-harmonic function \(v\) on \(\mathbb{D}\) such that
\[v(0)=1/\overline{\dim}_{M}(A_{\phi(0)})\qquad\text{and}\qquad v(\lambda)\geq 1 /\overline{\dim}_{M}(A_{\phi(\lambda)})\quad(\lambda\in\mathbb{D}).\]
Then \(u:=v\circ\phi^{-1}\) is an inf-harmonic function on \(\mathbb{D}\) satisfying (5.5). This completes the proof of the theorem.
## 6. Proof of Theorem 1.4
As in the previous section, we may suppose that \(A_{\lambda}=f_{\lambda}(A)=f(\lambda,A)\), where \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) is a holomorphic motion.
We shall deduce Theorem 1.4 from Theorem 1.3, using the characterization of packing dimension in terms of Minkowski dimension given in Proposition 2.3. From that result, we have
\[\dim_{P}(A)=\inf\Bigl{\{}\sup_{j\geq 1}\overline{\dim}_{M}(A_{j}):A=\cup_{j \geq 1}A_{j}\Bigr{\}},\]
where the infimum is taken over all countable covers of \(A\) by bounded subsets \((A_{j})\). Since \(f_{\lambda}\) is a bijection of \(A\) onto \(A_{\lambda}\), it follows that,
\[\dim_{P}(A_{\lambda})=\inf\Bigl{\{}\sup_{j\geq 1}\overline{\dim}_{M}(f_{ \lambda}(A_{j})):A=\cup_{j\geq 1}A_{j}\Bigr{\}}\quad(\lambda\in\mathbb{D}),\]
and hence
\[\frac{1}{\dim_{P}(A_{\lambda})}=\sup\Bigl{\{}\inf_{j\geq 1}\frac{1}{\overline{ \dim}_{M}(f_{\lambda}(A_{j}))}:A=\cup_{j\geq 1}A_{j}\Bigr{\}}\quad(\lambda\in \mathbb{D}). \tag{6.1}\]
Let \(A=\cup_{j\geq 1}A_{j}\) be a countable cover of \(A\) by bounded subsets of \(A\). By Theorem 1.3, for each \(j\), either \(\overline{\dim}_{M}(f_{\lambda}(A_{j}))\equiv 0\) or \(\lambda\mapsto 1/\overline{\dim}_{M}(f_{\lambda}(A_{j}))\) is an inf-harmonic function on \(\mathbb{D}\). It follows that either \(\overline{\dim}_{M}(f_{\lambda}(A_{j}))\equiv 0\) for all \(j\geq 1\) or else \(\lambda\mapsto\inf_{j\geq 1}1/\overline{\dim}_{M}(f_{\lambda}(A_{j}))\) is an inf-harmonic function on \(\mathbb{D}\). In the first case, (6.1) implies that \(\dim_{P}(A_{\lambda})\equiv 0\). In the second case, the relation (6.1) expresses \(1/\dim_{P}(A_{\lambda})\) as the supremum of a family of inf-harmonic functions.
Ordinarily, the supremum of a family of inf-harmonic functions is no longer inf-harmonic. However, this particular family is an upward-directed set, in the sense of Proposition 4.4 (vi). Indeed, given any two countable covers \(A=\cup_{i}A_{i}=\cup_{j}B_{j}\) of \(A\) by bounded sets, there is a third such cover, namely \(A=\cup_{i,j}(A_{i}\cap B_{j})\), with the property that
\[\sup_{i,j}\overline{\dim}_{M}(A_{i}\cap B_{j})\leq\min\Bigl{\{}\sup_{i} \overline{\dim}_{M}(A_{i}),\,\sup_{j}\overline{\dim}_{M}(B_{j})\Bigr{\}},\]
which implies upward-directedness in (6.1). By Proposition 4.4 (vi), it follows that either \(\dim_{P}(A_{\lambda})\equiv 0\) or \(\lambda\mapsto 1/\dim_{P}(A_{\lambda})\) is inf-harmonic on \(\mathbb{D}\). This completes the proof of Theorem 1.4.
## 7. Proof of Theorem 1.6
The proof of Theorem 1.6 follows a similar pattern to that of Theorem 1.3, presented in §5, except that, because Hausdorff dimension is defined in terms of coverings rather than packings, some of the inequalities go in the other direction. Unfortunately, this leads ultimately to a weaker result.
We have \(A_{\lambda}=f_{\lambda}(A)=f(\lambda,A)\), where \(f:\mathbb{D}\times A\to\mathbb{C}\) is a holomorphic motion. As before, we may extend \(f\) to a holomorphic motion \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\), and we shall assume that \(f\) has been so extended.
The core of the proof is contained in the following lemma.
**Lemma 7.1**.:
1. _If_ \(\dim_{H}(A)=0\)_, then_ \(\dim_{H}(A_{\lambda})=0\) _for all_ \(\lambda\in\mathbb{D}\)_._
2. _If_ \(\dim_{H}(A)>0\)_, then there exists an inf-harmonic function_ \(u\) _on_ \(\mathbb{D}\) _such that_ \[u(0)=1/\dim_{H}(A)\quad\text{and}\quad 1/2\leq u(\lambda)\leq 1/\dim_{H}(A_{ \lambda})\quad(\lambda\in\mathbb{D}).\]
Proof.: If \(\dim_{H}(A)=2\), then we may simply take \(u\equiv 1/2\). Henceforth, we suppose that \(0\leq\dim_{H}(A)<2\).
Let \(\rho\in(0,1)\). We shall carry out the proof on the disk \(D(0,\rho)\), and then let \(\rho\to 1\) at the very end.
Let \((d_{n})\) be a sequence such that \(\dim_{H}(A)<d_{n}<2\) and \(d_{n}\to\dim_{H}(A)\). By Proposition 2.1, for each \(n\) there exists a (countable) cover \(\mathcal{Q}_{n}\) of \(A\) by disjoint dyadic squares such that, as \(n\to\infty\),
\[\sup_{Q\in\mathcal{Q}_{n}}\operatorname{diam}(Q)\to 0\quad\text{and}\quad\sum_{Q \in\mathcal{Q}_{n}}\operatorname{diam}(Q)^{d_{n}}\to 0. \tag{7.1}\]
We can suppose that all the squares in \(\cup_{n}\mathcal{Q}_{n}\) meet \(A\). Thus, if \(B\) is the union of all the squares in \(\cup_{n}\mathcal{Q}_{n}\), then \(B\) is a bounded set. By Lemma 5.1, \(M:=\operatorname{diam}f(D(0,\rho)\times B)<\infty\), and \(\lambda\mapsto\log(M/\operatorname{diam}f_{\lambda}(Q))\) is inf-harmonic on \(D(0,\rho)\) for each \(Q\in\cup_{n}\mathcal{Q}_{n}\).
Fix a constant \(C\), to be chosen later (it will depend only on \(\rho\)), and, for each \(\lambda\in D(0,\rho)\), set
\[s_{n}(\lambda):=\inf\Bigl{\{}\alpha>0:\sum_{Q\in\mathcal{Q}_{n}}\Bigl{(} \frac{\operatorname{diam}f_{\lambda}(Q)}{M}\Bigr{)}^{\alpha}\leq C\Bigr{\}}.\]
By the implicit function theorem, Theorem 4.5, either \(s_{n}\equiv 0\) or \(1/s_{n}\) is inf-harmonic on \(D(0,\rho)\). By Proposition 4.3, a subsequence of \((s_{n})\) (which, by relabelling, we may suppose to be the whole sequence) converges locally uniformly to \(s\) on \(D(0,\rho)\), where either \(s\equiv 0\) or \(1/s\) is inf-harmonic on \(D(0,\rho)\).
From (7.1) we have \(s_{n}(0)\leq d_{n}\) for all sufficiently large \(n\), so
\[s(0)=\lim_{n\to\infty}s_{n}(0)\leq\lim_{n\to\infty}d_{n}=\dim_{H}(A). \tag{7.2}\]
If \(\alpha>s(\lambda)\) for some \(\lambda\in D(0,\rho)\), then \(\alpha>s_{n}(\lambda)\) for all large enough \(n\), so, for these \(n\),
\[\sum_{Q\in\mathcal{Q}_{n}}\operatorname{diam}f_{\lambda}(Q)^{\alpha}\leq CM^{ \alpha}.\]
For each \(n\), the family \(\{f_{\lambda}(Q):Q\in\mathcal{Q}_{n}\}\) is a cover of \(A_{\lambda}\). Also, from (7.1) and (5.1), we have \(\sup_{Q\in\mathcal{Q}_{n}}\operatorname{diam}f_{\lambda}(Q)\to 0\) as \(n\to\infty\). It follows from the definition of Hausdorff dimension that \(\dim_{H}(A_{\lambda})\leq\alpha\). As this holds for each \(\alpha>s(\lambda)\), we conclude that
\[s(\lambda)\geq\dim_{H}(A_{\lambda})\quad(\lambda\in D(0,\rho)). \tag{7.3}\]
Next we show that, if the constant \(C\) is chosen sufficiently large, then we also have \(s(\lambda)\leq 2\) for all \(\lambda\in D(0,\rho)\). To achieve this, we once again invoke the theory of quasiconformal mappings. By Theorem 3.4, the map \(f_{\lambda}\) is a \(\rho\)-quasiconformal self-homeomorphism of \(\mathbb{C}\). Consequently, by Corollary 3.7 (ii), there exists a constant \(\delta^{\prime}>0\), depending only on \(\rho\), such that, for each open square \(Q\) in \(\mathbb{C}\), the set \(f_{\lambda}(Q)\) contains an open disk of radius \(\delta^{\prime}\operatorname{diam}f_{\lambda}(Q)\). In particular, for each \(n\), the disjoint sets \(\{f_{\lambda}(Q):Q\in\mathcal{Q}_{n}\}\) contain disjoint disks of radii \(\delta^{\prime}\operatorname{diam}f_{\lambda}(Q)\). As these disks are all contained within the set \(f(D(0,\rho)\times B)\), which has diameter \(M\), consideration of their areas leads to the inequality
\[\sum_{Q\in\mathcal{Q}_{n}}\pi(\delta^{\prime}\operatorname{diam}f_{\lambda}( Q))^{2}\leq\pi M^{2},\]
in other words,
\[\sum_{Q\in\mathcal{Q}_{n}}\Bigl{(}\frac{\operatorname{diam}(f_{\lambda}(Q))}{ M}\Bigr{)}^{2}\leq 1/\delta^{\prime 2}.\]
This shows that, if \(C\geq 1/\delta^{\prime 2}\), then \(s_{n}(\lambda)\leq 2\) for all \(n\), and consequently \(s(\lambda)\leq 2\).
To summarize, we have shown that, if \(\dim_{H}(A)=0\), then \(\dim_{H}(A_{\lambda})=0\) for all \(\lambda\in D(0,\rho)\) (combine (7.2) and (7.3)), and, if \(\dim_{H}(A)>0\), then \(u:=1/s\) is an inf-harmonic function on \(D(0,\rho)\) such that
\[u(0)=1/\dim_{H}(A)\quad\text{and}\quad 1/2\leq u(\lambda)\leq 1/\dim_{H}(A_{ \lambda})\quad(\lambda\in D(0,\rho)).\]
The proof of the lemma is nearly complete, except that \(u\) is defined only on \(D(0,\rho)\), not on \(\mathbb{D}\). We fix this in exactly the same way as at the end of the proof of Lemma 5.2.
_Remark_.: Part (i) of Lemma 7.1 could also have been proved using the well-known fact that the quasiconformal image of a set of Hausdorff dimension zero also has Hausdorff dimension zero.
Proof of Theorem 1.6.: We claim that, for each \(\zeta\in\mathbb{D}\), if \(\dim_{H}(A_{\zeta})=0\), then \(\dim_{H}(A_{\lambda})=0\) for all \(\lambda\in\mathbb{D}\), and, if \(\dim_{H}(A_{\zeta})>0\), then there exists an inf-harmonic function \(u_{\zeta}\) on \(\mathbb{D}\) such that
\[u_{\zeta}(\zeta)=1/\dim_{H}(A_{\zeta})\quad\text{and}\quad 1/2\leq u_{\zeta}( \lambda)\leq 1/\dim_{H}(A_{\lambda})\quad(\lambda\in\mathbb{D}).\]
The special case \(\zeta=0\) has been proved in Lemma 7.1, and the general case is deduced from this just as in the proof of Theorem 1.3 at the end of §5.
Thus, either \(\dim_{H}(A_{\lambda})=0\) for all \(\lambda\in\mathbb{D}\), or \(\dim_{H}(A_{\lambda})>0\) for all \(\lambda\in\mathbb{D}\). In the latter case, we have
\[\frac{1}{\dim_{H}(A_{\lambda})}-\frac{1}{2}=\sup_{\zeta\in\mathbb{D}}(u_{ \zeta}(\lambda)-1/2)\quad(\lambda\in\mathbb{D}),\]
where the right-hand side is the supremum of a family of functions that are inf-harmonic on \(\mathbb{D}\).
## 8. Proof of Theorem 1.7
The essential idea of the proof is contained in the following lemma, which is based on a construction in Astala's paper [1].
**Lemma 8.1**.: _Let \(h:\mathbb{D}\to(0,\infty)\) be a positive harmonic function, and let \(n\geq 10\). Then there exists a holomorphic motion \(\lambda\mapsto E_{\lambda}\) such that \(E_{\lambda}\) is a compact subset of \(\mathbb{D}\) for all \(\lambda\in\mathbb{D}\) and_
\[\frac{1}{\dim_{H}(E_{\lambda})}=\frac{1}{\dim_{P}(E_{\lambda})}=h(\lambda)+ \frac{1}{2}+\frac{\log 2}{2\log n}\quad(\lambda\in\mathbb{D}).\]
Proof.: As \(h\) is a positive harmonic function on \(\mathbb{D}\), there exists a holomorphic function \(a:\mathbb{D}\to\mathbb{D}\setminus\{0\}\) such that
\[\log|a(\lambda)|=-h(\lambda)\log n\quad(\lambda\in\mathbb{D}).\]
Let \(\overline{D}(w_{1},r),\dots,\overline{D}(w_{n},r)\) be disjoint closed disks inside \(\mathbb{D}\), where \(r=1/\sqrt{2n}\). Such disks may be found if \(n\geq 10\). For \(j=1,\dots,n\) and \(\lambda\in\mathbb{D}\), define
\[\gamma_{j,\lambda}(z):=ra(\lambda)z+w_{j}\quad(z\in\mathbb{C}).\]
Note that \(\gamma_{j,\lambda}(\mathbb{D})\subset D(w_{j},r)\) for each \(j=1,\dots,n\) and each \(\lambda\in\mathbb{D}\). Thus, for each \(\lambda\in\mathbb{D}\), the family \(\{\gamma_{j,\lambda}:j=1,\dots,n\}\) generates an iterated function system satisfying the open set condition. If we denote by \(E_{\lambda}\) its limit set, then \(\lambda\mapsto E_{\lambda}\) is a compact-valued holomorphic motion (see e.g. [4, Theorem 4]) such that \(E_{\lambda}\subset\cup_{j=1}^{n}\overline{D}(w_{j},r)\subset\mathbb{D}\) for all \(\lambda\in\mathbb{D}\). Moreover, by a special case of the Hutchinson-Moran formula Theorem 2.4, the Hausdorff and packing dimensions of \(E_{\lambda}\) are given by \(\dim_{H}E_{\lambda}=\dim_{P}E_{\lambda}=s(\lambda)\), where \(s(\lambda)\) is the solution of the equation
\[n(r|a(\lambda)|)^{s(\lambda)}=1.\]
Solving this equation, we obtain
\[\frac{1}{s(\lambda)}=-\frac{\log(r|a(\lambda)|)}{\log n}=\frac{\log(\sqrt{2n} )+h(\lambda)\log n}{\log n}=\frac{\log 2}{2\log n}+\frac{1}{2}+h(\lambda).\]
This completes the proof.
**Lemma 8.2**.: _Let \(D\) be a domain and let \(u\) be an inf-harmonic function on \(D\). Then there exists a sequence \((h_{n})_{n\geq 1}\) of positive harmonic functions on \(D\) such that, for every \(m\geq 1\), we have \(u=\inf_{n\geq m}h_{n}\) on \(D\)._
Proof.: Let \(S\) be a countable dense subset of \(D\), and let \((\lambda_{n})_{n\geq 1}\) be a sequence in \(S\) that visits every point of \(S\) infinitely often. Since \(u\) is inf-harmonic on \(D\), for each \(n\geq 1\) there exists a positive harmonic function \(h_{n}\) on \(D\) such that \(h_{n}\geq u\) and \(h_{n}(\lambda_{n})<u(\lambda_{n})+1/n\). Then, for each \(m\geq 1\), we have \(u=\inf_{n\geq m}h_{n}\) on \(S\). Since \(S\) is dense in \(D\) and inf-harmonic functions are automatically continuous, it follows that \(u=\inf_{n\geq m}h_{n}\) on \(D\).
Proof of Theorem 1.7.: Set \(u(\lambda):=1/d(\lambda)-1/2\). Since \(1/d\) is inf-harmonic and \(1/d\geq 1/2\), it follows that \(u\) is inf-harmonic as well. By Lemma 8.2, there exists a sequence \((h_{n})_{n\geq 1}\) of positive harmonic functions on \(\mathbb{D}\) such that \(u=\inf_{n\geq m}h_{n}\) for every \(m\geq 1\).
By Lemma 8.1, for each \(n\geq 10\), there exists a compact-valued holomorphic motion \(\lambda\mapsto E_{\lambda}^{(n)}\) in \(\mathbb{D}\) such that
\[\frac{1}{\dim_{H}(E_{\lambda}^{(n)})}=\frac{1}{\dim_{P}(E_{\lambda}^{(n)})}=h _{n}(\lambda)+\frac{1}{2}+\frac{\log 2}{2\log n}\quad(\lambda\in\mathbb{D}).\]
Fix a sequence of disjoint closed disks \(\overline{D}(\zeta_{n},s_{n})\) in \(\mathbb{C}\) such that \(\zeta_{n}\to 0\) and \(s_{n}\to 0\), and define
\[A_{\lambda}:=\bigcup_{n\geq 10}(s_{n}E_{\lambda}^{(n)}+\zeta_{n})\cup\{0\} \quad(\lambda\in\mathbb{D}).\]
Then \(\lambda\mapsto A_{\lambda}\) is a union of holomorphic motions taking place in disjoint disks, so it is itself a holomorphic motion. Moreover \(A_{\lambda}\) is a compact set for each \(\lambda\in\mathbb{D}\). Finally, since both Hausdorff dimension and packing dimension are countably stable, and these dimensions are unchanged under similarities, we have
\[\frac{1}{\dim_{H}(A_{\lambda})}=\frac{1}{\dim_{P}(A_{\lambda})} =\inf_{n\geq 10}\biggl{(}\frac{1}{\dim_{P}(E_{\lambda}^{(n)})} \biggr{)}\] \[=\inf_{n\geq 10}\Bigl{(}h_{n}(\lambda)+\frac{1}{2}+\frac{\log 2 }{2\log n}\Bigr{)}\] \[=u(\lambda)+\frac{1}{2}=\frac{1}{d(\lambda)}.\]
In other words, \(\dim_{H}(A_{\lambda})=\dim_{P}(A_{\lambda})=d(\lambda)\) for all \(\lambda\in\mathbb{D}\). This completes the proof.
## 9. Proof of Theorem 1.9
In this section, we prove Theorem 1.9 on the variation of the area of a set moving under a holomorphic motion. The proof of part (i) follows closely the ideas of [8], as elaborated in [2, §13.1]. We first need the following lemmas.
**Lemma 9.1**.: _Let \((\Omega,\nu)\) be a measure space and let \(a:\Omega\to(0,\infty)\) be a measurable function such that \(\int_{\Omega}a\,d\nu<\infty\). Then, for every measurable
function \(p:\Omega\to(0,\infty)\) such that \(\int_{\Omega}p\,d\nu=1\), we have_
\[\log\Bigl{(}\int_{\Omega}a\,d\nu\Bigr{)}\geq\int_{\Omega}p\log\Bigl{(}\frac{a}{p }\Bigr{)}\,d\nu,\]
_with equality if \(p=a/(\int_{\Omega}a\,d\nu)\)._
Proof.: The inequality follows from Jensen's inequality applied to the concave function \(\log x\) and the probability space \((\Omega,\,p\,d\nu)\). The case of equality is obvious.
**Lemma 9.2**.: _Let \(D\) be a plane domain, let \((\Omega,\nu)\) be a finite measure space, and let \(h:D\times\Omega\to\mathbb{R}\) be a measurable function such that:_
* \(\lambda\mapsto h(\lambda,\omega)\) _is harmonic on_ \(D\)_, for each_ \(\omega\in\Omega\)_;_
* \(\sup_{K\times\Omega}|h(\lambda,\omega)|<\infty\) _for each compact_ \(K\subset D\)_._
_Then the function \(H(\lambda):=\int_{\Omega}h(\lambda,\omega)\,d\nu(\omega)\) is harmonic on \(D\)._
Proof.: The function \(H\) is continuous on \(D\), by the dominated convergence theorem. Also it satisfies the mean-value property on \(D\), by the harmonicity of \(h(\cdot,\omega)\) and Fubini's theorem. Therefore \(H\) is harmonic on \(D\).
**Lemma 9.3**.: _Let \(k\in(0,1)\) and let \(R>0\). Let \(g,g_{n}:\mathbb{C}\to\mathbb{C}\) be \(k\)-quasiconformal homeomorphisms such that_
* \(\mu_{g_{n}}\to\mu_{g}\) _a.e. on_ \(\mathbb{C}\)_,_
* \(\operatorname{supp}\mu_{g_{n}}\subset D(0,R)\) _for each_ \(n\geq 1\)_,_
* \(g_{n}(z)=z+o(1)=g(z)\) _as_ \(|z|\to\infty\) _for each_ \(n\geq 1\)_._
_Then_
\[\|\partial_{z}g_{n}-\partial_{z}g\|_{L^{2}(\mathbb{C})}\to 0\quad\text{and} \quad\|\partial_{\overline{z}}g_{n}-\partial_{\overline{z}}g\|_{L^{2}(\mathbb{ C})}\to 0.\]
Proof.: The second limit holds by [2, Lemma 5.3.1]. The first limit is an automatic consequence, since \(\|\partial_{z}g_{n}-\partial_{z}g\|_{L^{2}(\mathbb{C})}=\|\partial_{\overline {z}}g_{n}-\partial_{\overline{z}}g\|_{L^{2}(\mathbb{C})}\). This is because the Beurling transform, which takes \(\partial_{\overline{z}}f\) to \(\partial_{z}f\), is a unitary operator on \(L^{2}(\mathbb{C})\) (see the discussion on [2, p.95]).
Proof of Theorem 1.9.: Let \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) be a holomorphic motion. Suppose that there exists a compact subset \(\Delta\) of \(\mathbb{C}\) such that, for each \(\lambda\in\mathbb{D}\), the map \(f_{\lambda}\) is conformal on \(\mathbb{C}\setminus\Delta\) and \(f_{\lambda}(z)=z+O(1)\) near \(\infty\). Let \(A\) be a Borel subset of \(\Delta\) such that \(|A|>0\). We begin with some preliminary remarks.
The first remark is that, in the normalization \(f_{\lambda}(z)=z+O(1)\) near \(\infty\), we may as well suppose that in fact \(f_{\lambda}(z)=z+o(1)\) near \(\infty\). Indeed, it suffices to consider the translated holomorphic motion \(f(\lambda,z)-a_{0}(\lambda)\), where \(a_{0}(\lambda)\) is the constant coefficient in the Laurent expansion of \(f_{\lambda}(z)\) near infinity. Note that \(a_{0}(\lambda)\) is holomorphic in \(\mathbb{D}\), as can be seen from the formula
\[a_{0}(\lambda)=\frac{1}{2\pi i}\int_{|z|=R}\frac{f_{\lambda}(z)}{z}\,dz,\]
valid for all \(R\) large enough so that \(\Delta\subset D(0,R)\).
Next, we claim that there is a simple _a priori_ bound on \(|A_{\lambda}|\), namely
\[|A_{\lambda}|\leq\pi c(\Delta)^{2}\quad(\lambda\in\mathbb{D}). \tag{9.1}\]
Here \(c(\Delta)\) is the logarithmic capacity of \(\Delta\), see e.g. [20, Chapter 5] for the definition. Indeed, since \(f_{\lambda}\) is a conformal homeomorphism of \(\mathbb{C}\setminus\Delta\) onto \(\mathbb{C}\setminus f_{\lambda}(\Delta)\) satisfying \(f_{\lambda}(z)=z+o(1)\) at infinity, the sets \(\Delta\) and \(f_{\lambda}(\Delta)\) have the same logarithmic capacity:
\[c(f_{\lambda}(\Delta))=c(\Delta)\quad(\lambda\in\mathbb{D}),\]
by [20, Theorem 5.2.3]. From the isoperimetric inequality for logarithmic capacity ([20, Theorem 5.3.5]) we have \(|f_{\lambda}(\Delta)|\leq\pi c(f_{\lambda}(\Delta))^{2}\), and it follows that
\[|A_{\lambda}|=|f_{\lambda}(A)|\leq|f_{\lambda}(\Delta)|\leq\pi c(f_{\lambda}( \Delta))^{2}=\pi c(\Delta)^{2},\]
as claimed.
We now turn to the proof of part (i) of the theorem. Suppose first that \(A\) is compact and that there exists an open neighbourhood \(U\) of \(A\) such that \(\mu_{f_{\lambda}}\equiv 0\) on \(U\) for all \(\lambda\in\mathbb{D}\). Then each \(f_{\lambda}\) is a conformal mapping on \(U\), so \(f_{\lambda}^{\prime}(z)\neq 0\) for all \(z\in U\). By the standard Jacobian formula for area, we have
\[|A_{\lambda}|=|f_{\lambda}(A)|=\int_{A}|f_{\lambda}^{\prime}(z)|^{2}\,dm(z),\]
where \(dm\) denotes area measure on \(\mathbb{C}\). Using Lemma 9.1, we can write \(\log|A_{\lambda}|\) as
\[\log|A_{\lambda}|=\sup_{p}\Bigl{\{}\int_{A}p(z)\log\Bigl{(}\frac{|f_{\lambda} ^{\prime}(z)|^{2}}{p(z)}\Bigr{)}\,dm(z)\Bigr{\}},\]
where the supremum is taken over all continuous functions \(p:A\to(0,\infty)\) such that \(\int_{A}p\,dm=1\). By Lemma 9.2, each of the integrals is a harmonic function of \(\lambda\in\mathbb{D}\). Therefore \(\log(C/|A_{\lambda}|)\) is an inf-harmonic function on \(\mathbb{D}\) for each \(C\geq\sup_{\lambda\in\mathbb{D}}|A_{\lambda}|\), in particular for \(C=\pi c(\Delta)^{2}\), by (9.1).
Suppose now that \(A\) is merely Borel, but still that \(\mu_{f_{\lambda}}\equiv 0\) on \(U\) for all \(\lambda\in\mathbb{D}\). We have
\[\log\Bigl{(}\frac{\pi c(\Delta)^{2}}{|A_{\lambda}|}\Bigr{)}=\inf_{F}\log \Bigl{(}\frac{\pi c(\Delta)^{2}}{|F_{\lambda}|}\Bigr{)}\quad(\lambda\in \mathbb{D}),\]
where the infimum is taken over all compact subsets \(F\) of \(A\). Each function on the right-hand side is inf-harmonic on \(\mathbb{D}\), by what we have already proved. Therefore the left-hand side is inf-harmonic on \(\mathbb{D}\) as well.
Finally, suppose merely that \(\mu_{f_{\lambda}}=0\) a.e. on \(A\) for each \(\lambda\in\mathbb{D}\). Let \(U_{n}\) be a decreasing sequence of bounded open sets such that \(|U_{n}\setminus A|\to 0\). By Theorem 3.4, for each \(n\) there exists a holomorphic motion \(f_{n}:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) such that, for each \(\lambda\in\mathbb{D}\), we have \(\mu_{f_{n,\lambda}}=1_{\mathbb{C}\setminus U_{n}}\mu_{f_{\lambda}}\) a.e. on \(\mathbb{C}\) and \(f_{n,\lambda}(z)=z+o(1)\) near \(\infty\). By Lemma 9.3, it follows that
\[\|\partial_{z}f_{n,\lambda}-\partial_{z}f_{\lambda}\|_{L^{2}(\mathbb{C})}\to 0 \quad\text{and}\quad\|\partial_{\overline{z}}f_{n,\lambda}-\partial_{\overline {z}}f_{\lambda}\|_{L^{2}(\mathbb{C})}\to 0.\]
Therefore
\[\int_{A}|\partial_{z}f_{\lambda}|^{2}\,dm=\lim_{n\to\infty}\int_{A}|\partial_{z}f _{n,\lambda}|^{2}\,dm=\lim_{n\to\infty}|f_{n,\lambda}(A)|\]
and
\[\int_{A}|\partial_{\overline{z}}f_{\lambda}|^{2}\,dm=\lim_{n\to\infty}\int_{A}| \partial_{\overline{z}}f_{n,\lambda}|^{2}\,dm=0.\]
Hence, using [2, formula (2.24)], we obtain
\[|f_{\lambda}(A)|=\int_{A}(|\partial_{z}f_{\lambda}|^{2}-|\partial_{\overline{z }}f_{\lambda}|^{2})\,dm=\lim_{n\to\infty}|f_{n,\lambda}(A)|.\]
Thus
\[\log\Bigl{(}\frac{\pi c(\Delta)^{2}}{|A_{\lambda}|}\Bigr{)}=\lim_{n\to\infty} \log\Bigl{(}\frac{\pi c(\Delta)^{2}}{|f_{n,\lambda}(A)|}\Bigr{)}\quad(\lambda \in\mathbb{D}).\]
By what we have already proved, the right-hand sides are inf-harmonic functions of \(\lambda\). It follows from Proposition 4.4 part (iii) that the left-hand side is inf-harmonic on \(\mathbb{D}\) as well. This completes the proof of part (i) of the theorem.
We now turn to the proof of part (ii). Set \(R_{0}:=\sup_{z\in\Delta}|z|\) and, for \(R>R_{0}\), set \(\Delta_{R}:=\overline{D}(0,R)\) (so \(c(\Delta_{R})=R\)). By hypothesis \(\mu_{f_{\lambda}}=0\) a.e. on \(\Delta_{R}\setminus A\). So, applying what we have proved in part (i) (with \(\Delta\) replaced by \(\Delta_{R}\) and \(A\) replaced by \(\Delta_{R}\setminus A\)), we see that
\[\lambda\mapsto\log\Bigl{(}\frac{\pi R^{2}}{|f_{\lambda}(\Delta_{R}\setminus A )|}\Bigr{)}\]
is an inf-harmonic function on \(\mathbb{D}\). Now, fix \(\lambda\in\mathbb{D}\). Then \(|f_{\lambda}(\Delta_{R}\setminus A)|=|f_{\lambda}(\Delta_{R})|-|A_{\lambda}|\), and by the area theorem from univalent function theory,
\[|f_{\lambda}(\Delta_{R})|=\pi R^{2}-\pi\sum_{n\geq 1}n|a_{n}(\lambda)|^{2}R^{-2 n},\]
where \(f_{\lambda}(z)=z+\sum_{n\geq 1}a_{n}(\lambda)z^{-n}\) is the Laurent expansion of \(f_{\lambda}\) near infinity. In particular, \(|f_{\lambda}(\Delta_{R})|=\pi R^{2}+O(R^{-2})\) as \(R\to\infty\). Hence
\[\log\Bigl{(}\frac{\pi R^{2}}{|f_{\lambda}(\Delta_{R}\setminus A)|}\Bigr{)}= \log\Bigl{(}\frac{\pi R^{2}}{\pi R^{2}-|A_{\lambda}|+O(R^{-2})}\Bigr{)}=\frac{ |A_{\lambda}|}{\pi R^{2}}+O(R^{-4})\quad(R\to\infty).\]
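Here the last equality is obtained by writing the argument as \(1/(1-x)\) with \(x=\bigl(|A_{\lambda}|+O(R^{-2})\bigr)/\pi R^{2}\), which is \(O(R^{-2})\) by the _a priori_ bound (9.1), and expanding

\[\log\Bigl(\frac{1}{1-x}\Bigr)=x+O(x^{2})=\frac{|A_{\lambda}|}{\pi R^{2}}+O(R^{-4})\qquad(R\to\infty).\]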
It follows that
\[|A_{\lambda}|=\lim_{R\to\infty}\pi R^{2}\log\Bigl{(}\frac{\pi R^{2}}{|f_{ \lambda}(\Delta_{R}\setminus A)|}\Bigr{)}\quad(\lambda\in\mathbb{D}).\]
By what we have shown earlier, the right-hand sides are inf-harmonic functions of \(\lambda\). It follows from Proposition 4.4 part (iii) that the left-hand side is inf-harmonic on \(\mathbb{D}\) as well. This completes the proof of part (ii) of the theorem.
## 10. Applications to quasiconformal maps
In this section we show how our results lead to a unified approach to the four theorems on quasiconformal distortion of area and dimension that were stated at the end of the introduction.
### Distortion of dimension by quasiconformal maps
In this subsection we establish Theorem 1.10, to the effect that, if \(F:\mathbb{C}\to\mathbb{C}\) is a \(k\)-quasiconformal homeomorphism and \(\dim A>0\), then
\[\frac{1}{K}\Big{(}\frac{1}{\dim A}-\frac{1}{2}\Big{)}\leq\Big{(}\frac{1}{ \dim F(A)}-\frac{1}{2}\Big{)}\leq K\Big{(}\frac{1}{\dim A}-\frac{1}{2}\Big{)}, \tag{10.1}\]
where \(K=(1+k)/(1-k)\). Here \(\dim\) denotes any one of \(\dim_{P},\dim_{H}\) or \(\overline{\dim_{M}}\). In the case of \(\overline{\dim_{M}}\), we also suppose that the set \(A\) is bounded.
Proof of Theorem 1.10.: By Theorem 3.5, there exists a holomorphic motion \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) such that \(f_{k}=F\). For \(\lambda\in\mathbb{D}\), set \(A_{\lambda}:=f_{\lambda}(A)\). By Theorems 1.3, 1.4 and 1.6, either \(\lambda\mapsto(1/\dim(A_{\lambda})-1/2)\) is an inf-harmonic function on \(\mathbb{D}\), or, at the very least, it is a supremum of inf-harmonic functions. Either way, it satisfies Harnack's inequality, so, for all \(\lambda\in\mathbb{D}\), we have
\[\frac{1-|\lambda|}{1+|\lambda|}\Big{(}\frac{1}{\dim(A_{0})}-\frac{1}{2}\Big{)} \leq\Big{(}\frac{1}{\dim(A_{\lambda})}-\frac{1}{2}\Big{)}\leq\frac{1+|\lambda |}{1-|\lambda|}\Big{(}\frac{1}{\dim(A_{0})}-\frac{1}{2}\Big{)}.\]
In particular, taking \(\lambda=k\), we obtain (10.1).
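Explicitly, for \(\lambda=k\in(0,1)\) the Harnack factors are

\[\frac{1-|\lambda|}{1+|\lambda|}=\frac{1-k}{1+k}=\frac{1}{K}\qquad\text{and}\qquad\frac{1+|\lambda|}{1-|\lambda|}=\frac{1+k}{1-k}=K,\]

since \(K=(1+k)/(1-k)\), which gives exactly (10.1).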
_Remark_.: One consequence of Theorem 1.10 is that, if \(f:\mathbb{D}\times A\to\mathbb{C}\) is a holomorphic motion and \(A_{\lambda}=f_{\lambda}(A)\), then the map \(\lambda\mapsto\dim(A_{\lambda})\) is a continuous function. For the Minkowski and packing dimensions, this was also proved in Corollary 1.5. For all three notions of dimension, it can also be seen more directly as follows.
As \(\lambda\to\lambda_{0}\in\mathbb{D}\), the transition map \(f_{\lambda}\circ f_{\lambda_{0}}^{-1}\) is \(k\)-quasiconformal with \(k\) tending to \(0\), hence also Hölder-continuous with Hölder exponent tending to \(1\) (see [2, Theorem 12.2.3 and Corollary 3.10.3]). Thus \(\dim(A_{\lambda})=\dim\bigl((f_{\lambda}\circ f_{\lambda_{0}}^{-1})(A_{\lambda_{0}})\bigr)\to\dim(A_{\lambda_{0}})\) as \(\lambda\to\lambda_{0}\).
### Distortion of area by quasiconformal maps
Proof of Theorem 1.11.: Let \(F:\mathbb{C}\to\mathbb{C}\) be a \(k\)-quasiconformal homeomorphism which is conformal on \(\mathbb{C}\setminus\Delta\), where \(\Delta\) is a compact set of logarithmic capacity at most \(1\), and such that \(F(z)=z+o(1)\) near \(\infty\). Let \(A\) be a Borel subset of \(\Delta\).
Let \(k:=(K-1)/(K+1)\). By Theorem 3.4, there is a holomorphic motion \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) with \(f_{k}=F\) and \(\mu_{f_{\lambda}}=(\lambda/k)\mu_{F}\) for each \(\lambda\in\mathbb{D}\). We may also require that \(f_{\lambda}(z)=z+o(1)\) near \(\infty\).
Suppose first that \(\mu_{F}=0\) a.e. on \(A\). By Theorem 1.9(i), the function \(\lambda\mapsto\log(\pi/|A_{\lambda}|)\) is inf-harmonic on \(\mathbb{D}\). In particular, it satisfies Harnack's
inequality there:
\[\log\Bigl{(}\frac{\pi}{|A_{\lambda}|}\Bigr{)}\geq\frac{1-|\lambda|}{1+|\lambda|} \log\Bigl{(}\frac{\pi}{|A|}\Bigr{)}\quad(\lambda\in\mathbb{D}).\]
Setting \(\lambda=k\), we obtain
\[\log\Bigl{(}\frac{\pi}{|F(A)|}\Bigr{)}\geq\frac{1}{K}\log\Bigl{(}\frac{\pi}{|A |}\Bigr{)}.\]
This proves (i).
Suppose instead that \(\mu_{F}=0\) a.e. on \(\mathbb{C}\setminus A\). By Theorem 1.9(ii), the function \(\lambda\mapsto|A_{\lambda}|\) is inf-harmonic on \(\mathbb{D}\). In particular, it satisfies Harnack's inequality there:
\[|A_{\lambda}|\leq\frac{1+|\lambda|}{1-|\lambda|}|A|\quad(\lambda\in\mathbb{D}).\]
Setting \(\lambda=k\), we obtain
\[|F(A)|\leq K|A|.\]
This proves (ii).
Finally, the general case (iii) is deduced from (i) and (ii) via a standard factorization process, see e.g. [2, Theorem 13.1.4].
_Remark_.: As mentioned in §9, our proof of part (i) of Theorem 1.11 is quite similar to the original proof of Eremenko and Hamilton [8], as presented in [2, §13.1]. On the other hand, our proof of part (ii) is completely different from (and rather simpler than) the methods used in [8] and [2].
### Symmetric holomorphic motions and inf-sym-harmonic functions
In preparation for the proofs of Theorems 1.12 and 1.13, we study what can be said about the function \(1/\dim(A_{\lambda})\) when \(A\) is a subset of \(\mathbb{R}\) and \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) is a holomorphic motion that is symmetric in the sense defined below.
**Definition 10.1**.: We say that a holomorphic motion \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) is _symmetric_ if
\[f_{\lambda}(z)=\overline{f_{\overline{\lambda}}(\overline{z})}\quad(\lambda \in\mathbb{D},\,z\in\mathbb{C}).\]
**Definition 10.2**.: We say that a harmonic function \(h:\mathbb{D}\to\mathbb{R}\) is _symmetric_ if \(h(\overline{\lambda})=h(\lambda)\) for all \(\lambda\in\mathbb{D}\). A function \(u:\mathbb{D}\to[0,\infty)\) is _inf-sym-harmonic_ if there is a family \(\mathcal{H}\) of symmetric harmonic functions on \(\mathbb{D}\) such that
\[u(\lambda)=\inf_{h\in\mathcal{H}}h(\lambda)\qquad(\lambda\in\mathbb{D}).\]
We now state symmetric versions of Lemmas 5.2 and 7.1.
**Lemma 10.3**.: _Let \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) be a symmetric holomorphic motion and let \(A\) be a bounded subset of \(\mathbb{R}\) with \(\overline{\dim}_{M}(A)>0\). Set \(A_{\lambda}:=f_{\lambda}(A)\). Then there exists an inf-sym-harmonic function \(u\) on \(\mathbb{D}\) such that_
\[u(0)=1/\overline{\dim}_{M}(A)\qquad\text{and}\qquad u(\lambda)\geq 1/ \overline{\dim}_{M}(A_{\lambda})\quad(\lambda\in\mathbb{D}).\]
**Lemma 10.4**.: _Let \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) be a symmetric holomorphic motion and let \(A\) be a subset of \(\mathbb{R}\) with \(\dim_{H}(A)>0\). Set \(A_{\lambda}:=f_{\lambda}(A)\). Then there exists an inf-sym-harmonic function \(u\) on \(\mathbb{D}\) such that_
\[u(0)=1/\dim_{H}(A)\quad\text{and}\quad 1/2\leq u(\lambda)\leq 1/\dim_{H}(A_{ \lambda})\quad(\lambda\in\mathbb{D}).\]
Proof.: The proofs follow closely those of Lemmas 5.2 and 7.1, with the following differences:
* If \(S\subset\mathbb{R}\), then the function \(\log(M/\operatorname{diam}f_{\lambda}(S))\) defined in Lemma 5.1 is inf-sym-harmonic on \(D(0,\rho)\). This can be seen directly from the formula \[\log\Bigl{(}\frac{M}{\operatorname{diam}f_{\lambda}(S)}\Bigr{)}=\inf\Bigl{\{} \log\Bigl{(}\frac{M}{|f_{\lambda}(z)-f_{\lambda}(w)|}\Bigr{)}:z,w\in S,\,z\neq w \Bigr{\}},\] using the symmetry relation \(f_{\lambda}(z)=\overline{f_{\overline{\lambda}}(\overline{z})}\).
* Consequently, if we replace the occurrences of \(f_{\lambda}(D)\) and \(f_{\lambda}(Q)\) in the proofs of Lemmas 5.2 and 7.1 by \(f_{\lambda}(D\cap\mathbb{R})\) and \(f_{\lambda}(Q\cap\mathbb{R})\) respectively, then all the functions that were previously inf-harmonic are now inf-sym-harmonic. Intersecting with \(\mathbb{R}\) leads to no loss of information about \(A\), since \(A\subset\mathbb{R}\).
* When applying the implicit function theorem or its corollary (Theorem 4.5 and Corollary 4.6), it is now assumed that the functions \(\log(1/a_{j})\) are inf-sym-harmonic, and the conclusion is now that \(1/s\) is inf-sym-harmonic (or \(s\equiv 0\)). This follows by applying Lemma 4.7, taking \(\mathcal{U}\) to be the inf-cone of inf-sym-harmonic functions.
### Dimension of quasicircles
In this subsection, we establish Theorem 1.12. More precisely, we use Lemma 10.4 to show that the Hausdorff dimension of a \(k\)-quasicircle is at most \(1+k^{2}\).
**Definition 10.5**.: Let \(k\in[0,1)\). A curve \(\Gamma\) in \(\mathbb{C}\) is a \(k\)-_quasicircle_ if \(\Gamma=g(\mathbb{R})\), where \(g:\mathbb{C}\to\mathbb{C}\) is a normalized \(k\)-quasiconformal homeomorphism. By normalized, we mean simply that \(g\) fixes \(0\) and \(1\).
Quasicircles have been studied extensively over the years because of the desirable function-theoretic properties of the domains that they bound, see e.g. [10]. In particular, the problem of finding upper bounds for the Hausdorff dimension of a \(k\)-quasicircle in terms of \(k\) has attracted much interest. Theorem 1.10 implies that if \(\Gamma\) is a \(k\)-quasicircle, then
\[\dim_{H}(\Gamma)\leq 1+k.\]
Motivated by examples of Becker and Pommerenke [5], Astala asked in [1] whether the upper bound can be replaced by \(1+k^{2}\). This was answered in the affirmative by Smirnov in [23]. As we will now see, Astala's question can also be answered using inf-harmonic functions.
We first need a result on symmetrization of Beltrami coefficients due to Smirnov [23, Theorem 4]. See also [2, §13.3.1].
**Lemma 10.6**.: _The function \(g\) in Definition 10.5 may be chosen so that, in addition, its Beltrami coefficient satisfies the antisymmetry relation_
\[\overline{\mu_{g}(\overline{z})}=-\mu_{g}(z)\quad\text{a.e.\ in }\mathbb{C}. \tag{10.2}\]
We will also need the following Harnack-type inequality for inf-sym-harmonic functions, reminiscent of [23, Lemma 7]. See also [2, Lemma 13.3.8].
**Lemma 10.7**.: _Let \(v:\mathbb{D}\to[0,\infty)\) be an inf-sym-harmonic function. Then_
\[\frac{1-y^{2}}{1+y^{2}}v(0)\leq v(iy)\leq\frac{1+y^{2}}{1-y^{2}}v(0)\qquad(y \in(-1,1)).\]
Proof.: Write
\[v(\lambda)=\inf_{h\in\mathcal{H}}h(\lambda)\qquad(\lambda\in\mathbb{D}),\]
where each \(h\in\mathcal{H}\) is a positive and symmetric harmonic function on \(\mathbb{D}\). Fix \(h\in\mathcal{H}\), and set \(k(\lambda):=(h(\lambda)+h(-\lambda))/2\). Clearly \(k\) is an even positive harmonic function on \(\mathbb{D}\). Thus it can be written as \(k(\lambda)=l(\lambda^{2})\), where \(l\) is a positive harmonic function on \(\mathbb{D}\). Applying the standard Harnack inequality to \(l\), we get
\[\frac{1-|\lambda|^{2}}{1+|\lambda|^{2}}k(0)\leq k(\lambda)\leq\frac{1+| \lambda|^{2}}{1-|\lambda|^{2}}k(0)\qquad(\lambda\in\mathbb{D}).\]
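Here we have used the standard Harnack inequality for the positive harmonic function \(l\) on \(\mathbb{D}\),

\[\frac{1-|w|}{1+|w|}\,l(0)\leq l(w)\leq\frac{1+|w|}{1-|w|}\,l(0)\qquad(w\in\mathbb{D}),\]

evaluated at \(w=\lambda^{2}\), so that \(|w|=|\lambda|^{2}\), \(l(w)=k(\lambda)\) and \(l(0)=k(0)\).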
As \(h\) is symmetric, we have
\[h(iy)=\frac{h(iy)+h(-iy)}{2}=k(iy)\qquad(y\in(-1,1)).\]
Hence
\[\frac{1-y^{2}}{1+y^{2}}h(0)\leq h(iy)\leq\frac{1+y^{2}}{1-y^{2}}h(0)\qquad(y\in(-1,1)).\]
Taking the infimum over all \(h\in\mathcal{H}\) gives the result.
We can now prove the main result of this subsection.
Proof of Theorem 1.12.: Let \(\Gamma\) be a \(k\)-quasicircle. By Lemma 10.6, we can write \(\Gamma=g(\mathbb{R})\) for some normalized \(k\)-quasiconformal mapping \(g:\mathbb{C}\to\mathbb{C}\) whose Beltrami coefficient \(\mu_{g}\) satisfies the antisymmetry relation (10.2). For \(\lambda\in\mathbb{D}\), define a Beltrami coefficient \(\mu_{\lambda}\) by
\[\mu_{\lambda}:=\frac{\lambda}{ik}\mu_{g},\]
and denote by \(f_{\lambda}:\mathbb{C}\to\mathbb{C}\) the unique normalized quasiconformal mapping whose Beltrami coefficient is \(\mu_{\lambda}\), as given by Theorem 3.3. Note that \(f_{0}\) is the identity and \(f_{ik}=g\). It follows from Theorem 3.4 that the maps \(f_{\lambda}\) define a holomorphic motion of \(\mathbb{C}\). Moreover, we have
\[\overline{\mu_{\overline{\lambda}}(\overline{z})}=\frac{\lambda}{-ik} \overline{\mu_{g}(\overline{z})}=\mu_{\lambda}(z)\quad\text{a.e.\ in }\mathbb{C}.\]
It easily follows that the maps \(f_{\lambda}\) inherit the same symmetry:
\[f_{\lambda}(z)=\overline{f_{\overline{\lambda}}(\overline{z})}\qquad( \lambda\in\mathbb{D},z\in\mathbb{C}),\]
see e.g. [2, Section 13.3.1]. In other words, the holomorphic motion \(f\) is symmetric in the sense of Definition 10.1.
Now, let \(A:=\mathbb{R}\). By Lemma 10.4, there is an inf-sym-harmonic function \(u\) on \(\mathbb{D}\) such that
\[u(0)=1/\dim_{H}(A)=1\quad\text{and}\quad 1/2\leq u(\lambda)\leq 1/\dim_{H}(A_{ \lambda})\quad(\lambda\in\mathbb{D}).\]
In particular, the function \(v:=u-1/2\) is also inf-sym-harmonic, and Lemma 10.7 yields
\[v(ik)\geq\frac{1-k^{2}}{1+k^{2}}v(0)=\frac{1}{2}\frac{1-k^{2}}{1+k^{2}}.\]
But also
\[v(ik)\leq\frac{1}{\dim_{H}(f_{ik}(A))}-\frac{1}{2}=\frac{1}{\dim_{H}(\Gamma)} -\frac{1}{2},\]
and hence we obtain
\[\dim_{H}(\Gamma)\leq 1+k^{2},\]
as required.
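For the reader's convenience, the last step is the elementary rearrangement

\[\frac{1}{\dim_{H}(\Gamma)}\geq\frac{1}{2}+\frac{1}{2}\cdot\frac{1-k^{2}}{1+k^{2}}=\frac{1}{1+k^{2}},\]

obtained by combining the two displayed bounds on \(v(ik)\).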
_Remark_.: In fact, the upper bound in Theorem 1.12 is not sharp, as recently proved by Oleg Ivrii [13].
### Quasisymmetric distortion spectrum
In this subsection, we prove Theorem 1.13. More precisely, we use Lemma 10.3 to estimate the Minkowski and packing dimensions of the image of a subset of the real line under a quasisymmetric map.
**Definition 10.8**.: Let \(k\in[0,1)\). A homeomorphism \(g:\mathbb{R}\to\mathbb{R}\) is called \(k\)-_quasisymmetric_ if it extends to a normalized \(k\)-quasiconformal map \(g:\mathbb{C}\to\mathbb{C}\) such that \(g(z)=\overline{g(\overline{z})}\) for all \(z\in\mathbb{C}\).
For the proof of Theorem 1.13, we need the following Schwarz-Pick type inequality, see [18, Lemma 2.2].
**Lemma 10.9**.: _Let \(\phi:\mathbb{D}\to\mathbb{D}\) be a holomorphic function. Suppose that \(\phi(\lambda)=\overline{\phi(\overline{\lambda})}\) for all \(\lambda\in\mathbb{D}\) and that \(\phi(\lambda)\geq 0\) for all \(\lambda\in(-1,1)\). Then_
\[\phi(k)\leq\left(\frac{k+\sqrt{\phi(0)}}{1+k\sqrt{\phi(0)}}\right)^{2}\qquad( 0\leq k<1).\]
Proof of Theorem 1.13.: It is enough to prove the result for the Minkowski dimension. The case of the packing dimension then follows easily by applying Proposition 2.3.
Let \(g:\mathbb{R}\to\mathbb{R}\) be a \(k\)-quasisymmetric map, and let \(A\subset\mathbb{R}\) be a bounded set with \(\overline{\dim}_{M}(A)=\delta\), where \(0<\delta\leq 1\). It suffices to show that \(\overline{\dim}_{M}(g(A))\geq\Delta(\delta,k)\), since the upper bound follows from the lower bound, replacing \(g\) by \(g^{-1}\) and using the definition of \(\Delta^{*}(\delta,k)\).
Extend \(g\) to a normalized \(k\)-quasiconformal mapping \(g:\mathbb{C}\to\mathbb{C}\) such that \(g(z)=\overline{g(\overline{z})}\) for all \(z\in\mathbb{C}\). The Beltrami coefficient \(\mu_{g}\) satisfies
\[\mu_{g}(z)=\overline{\mu_{g}(\overline{z})}\quad(z\in\mathbb{C}).\]
Therefore, by a similar construction to that in the proof of Theorem 1.12, there is a symmetric holomorphic motion \(f:\mathbb{D}\times\mathbb{C}\to\mathbb{C}\) with \(f_{k}=g\). By Lemma 10.3, there exists an inf-sym-harmonic function \(u\) on \(\mathbb{D}\) such that
\[u(0)=1/\overline{\dim}_{M}(A)=1/\delta\qquad\text{and}\qquad u(\lambda)\geq 1/ \overline{\dim}_{M}(A_{\lambda})\quad(\lambda\in\mathbb{D}).\]
The function \(v:=u-1/2\) is also inf-sym-harmonic, and we can write
\[v(\lambda)=\inf_{h\in\mathcal{H}}h(\lambda),\]
where each \(h\in\mathcal{H}\) is a positive, symmetric harmonic function on \(\mathbb{D}\).
Fix \(h\in\mathcal{H}\). Since \(h\) is harmonic and \(h(\overline{\lambda})=h(\lambda)\) for all \(\lambda\in\mathbb{D}\), there is a holomorphic function \(H\) on \(\mathbb{D}\) with \(\operatorname{Re}H=h\) and \(\overline{H(\overline{\lambda})}=H(\lambda)\) for all \(\lambda\in\mathbb{D}\). Then \(H\) maps \(\mathbb{D}\) into the right half-plane. Also, for \(\lambda\in(-1,1)\), we have
\[H(\lambda)=h(\lambda)\geq v(\lambda)\geq\frac{1}{\overline{\dim}_{M}(A_{ \lambda})}-\frac{1}{2}\geq\frac{1}{2},\]
since \(A_{\lambda}\subset\mathbb{R}\) by the symmetry of the holomorphic motion. It follows that the function
\[\phi:=\frac{2H-1}{2H+1}\]
satisfies the assumptions of Lemma 10.9, and we get
\[\frac{2h(k)-1}{2h(k)+1}=\phi(k)\leq\left(\frac{k+l^{\prime}}{1+kl^{\prime}} \right)^{2},\]
where \(l^{\prime}=\sqrt{\phi(0)}\). Using the fact that the functions \(x\mapsto(2x-1)/(2x+1)\) and \(x\mapsto(k+x)/(1+kx)\) are increasing, we obtain, after taking the infimum over all \(h\in\mathcal{H}\),
\[\frac{2v(k)-1}{2v(k)+1}\leq\left(\frac{k+l}{1+kl}\right)^{2},\]
where
\[l=\left(\frac{2v(0)-1}{2v(0)+1}\right)^{1/2}=\left(\frac{2(1/\delta-1/2)-1}{2( 1/\delta-1/2)+1}\right)^{1/2}=\sqrt{1-\delta}.\]
Note that
\[\frac{2v(k)-1}{2v(k)+1}=\frac{2u(k)-2}{2u(k)}=1-\frac{1}{u(k)}\geq 1-\overline {\dim}_{M}(g(A)).\]
This gives the desired inequality, namely
\[\overline{\dim}_{M}(g(A))\geq 1-\left(\frac{k+l}{1+kl}\right)^{2}=\Delta( \delta,k).\qed\]
## 11. An open problem
As remarked in the introduction, Theorems 1.4 and 1.7 between them provide a complete characterization of the variation of the packing dimension of a set moving under a holomorphic motion. Such a characterization for the Hausdorff dimension is currently lacking, due to the fact that the conclusion in Theorem 1.6 is weaker than that in Theorem 1.4. This naturally raises the following question.
**Question 11.1**.: _Let \(A\) be a subset of \(\mathbb{C}\) such that \(\dim_{H}A>0\), and let \(f:\mathbb{D}\times A\to\mathbb{C}\) be a holomorphic motion. Set \(A_{\lambda}:=f_{\lambda}(A)\). Then must \(\lambda\mapsto 1/\dim_{H}(A_{\lambda})\) be an inf-harmonic function on \(\mathbb{D}\)?_
The same question was posed 30 years ago in [19]. As far as we know, it is still an open problem.
It was shown in [19] that the answer to Question 11.1 is affirmative in the following special case. Let \((R_{\lambda})_{\lambda\in\mathbb{D}}\) be a holomorphic family of hyperbolic rational maps. Then the holomorphic motion \(\lambda\mapsto J(R_{\lambda})\) defined by their Julia sets has the property that \(1/\dim_{H}J(R_{\lambda})\) is an inf-harmonic function on \(\mathbb{D}\). The proof relies on an explicit formula for the Hausdorff dimension, namely the Bowen-Ruelle-Manning formula.
Another special case was established by Baribeau and Roy [4]. They showed that, if \(L_{\lambda}\) is the limit set of an iterated function system of contractive similarities depending holomorphically on a parameter \(\lambda\in\mathbb{D}\), then, subject to a technical condition, the map \(\lambda\mapsto L_{\lambda}\) is a holomorphic motion for which \(1/\dim_{H}(L_{\lambda})\) is an inf-harmonic function on \(\mathbb{D}\). Their proof also relies on an explicit formula for the Hausdorff dimension, this time the Hutchinson-Moran formula, Theorem 2.4.
In fact, in both these special cases, it turns out that the Hausdorff dimension coincides with the packing dimension, so both results are now consequences of Theorem 1.4, without any recourse to explicit formulas for the dimension.
Finally, we remark that an affirmative answer to Question 11.1 would imply that \(\lambda\mapsto\dim_{H}(A_{\lambda})\) is a subharmonic function (in much the same way that Corollary 1.5 was proved for the packing and Minkowski dimensions). Even this apparently weaker statement is also still an open problem. As an interesting test case, we pose the following question.
**Question 11.2**.: _Does the Hausdorff dimension of a holomorphic motion \(\lambda\mapsto A_{\lambda}\) always satisfy the inequality_
\[\dim_{H}(A_{0})\leq\max_{|\lambda|=1/2}\dim_{H}(A_{\lambda})?\]
|
2308.12473
|
High scale validity of two Higgs doublet scenarios with a real scalar
singlet dark matter
|
We study the high-scale validity of two kinds of two Higgs doublet models
(2HDM), namely, Type-II and Type-X, but with a scalar SU(2) singlet dark matter
(DM) candidate in addition in each case. The additional quartic couplings
involving the DM particle in the scalar potential in both the scenarios bring
in additional constraints from the requirement of perturbative unitarity and
vacuum stability. DM relic density and direct search constraints play a crucial
role in this analysis as the perturbative unitarity of the DM-Higgs portal
couplings primarily decide the high scale validity of the model. We find that,
within the parameter regions thus restricted, the Type-II scenario must have a
cut-off at around $10^6$ GeV, while the Type-X scenario admits of validity upto
the Planck scale. However, only those regions which are valid upto about $10^8$
GeV in Type-X 2HDM is amenable to detection at the High-luminosity LHC (upto
3000 $fb^{-1}$), while most of the parameter space of the Type-II scenario
mentioned above is likely to be detectable.
|
Subhaditya Bhattacharya, Atri Dey, Jayita Lahiri, Biswarup Mukhopadhyaya
|
2023-08-23T23:57:20Z
|
http://arxiv.org/abs/2308.12473v1
|
# High scale validity of two Higgs doublet scenarios with a real scalar singlet dark matter
###### Abstract
We study the high-scale validity of two kinds of two Higgs doublet models (2HDM), namely, Type-II and Type-X, but with a scalar SU(2) singlet dark matter (DM) candidate in addition in each case. The additional quartic couplings involving the DM particle in the scalar potential in both the scenarios bring in additional constraints from the requirement of perturbative unitarity and vacuum stability. DM relic density and direct search constraints play a crucial role in this analysis, as the perturbative unitarity of the DM-Higgs portal couplings primarily decides the high scale validity of the model. We find that, within the parameter regions thus restricted, the Type-II scenario must have a cut-off at around \(10^{6}\) GeV, while the Type-X scenario admits of validity upto the Planck scale. However, only those regions which are valid upto about \(10^{8}\) GeV in Type-X 2HDM are amenable to detection at the High-luminosity LHC (upto 3000 \(fb^{-1}\)), while most of the parameter space of the Type-II scenario mentioned above is likely to be detectable.
* 1 Introduction
* 2 Models and Constraints
* 2.1 Models
* 2.2 Theoretical constraints
* 2.3 Experimental constraints
* 2.4 Dark matter constraints
* 3 Demonstration of running couplings with some benchmarks
* 3.1 The one-loop RGE's
* 3.1.1 Type-II 2HDM
* 3.1.2 Type-X 2HDM
* 3.2 Choice of benchmarks and the running of quartic couplings
* 3.2.1 Type-II 2HDM
* 3.2.2 Type-X 2HDM
* 4 Study of model parameter space
* 4.1 Regions of high-scale validity
* 4.2 Constraints from DM sector
* 4.3 Combining high-scale validity with DM constraints
* 4.4 Prospects at the LHC
* 5 Summary and Conclusions
* 6 Acknowledgements
* A Two-loop RGE's
* A.1 Type-II
* A.2 Type-X
## 1 Introduction
The discovery and subsequent study of the 125-GeV scalar has almost decisively confirmed the spontaneous breakdown mechanism in the standard electroweak model (SM). It is, however, still possible that more than one scalar SU(2) doublet participates in the electroweak symmetry breaking (EWSB) scheme. Two Higgs doublet models (2HDM) [1] have thus become subjects of frequent investigation, motivated by various unanswered questions in the SM. There are various kinds of 2HDM, for each of which substantial regions of the parameter space are identified as consistent with observed phenomenology. This statement is particularly valid if one remains within the 'alignment limit', much of which is accessible to accelerator searches.
Side by side, a question that constantly haunts physicists is the origin of dark matter (DM) in our universe, which strongly suggests physics beyond the SM (BSM), so long as DM is constituted of some yet unknown elementary particle(s). One possibility that is often explored in this context is whether a DM particle, especially a scalar one, can interact with the rest of the SM spectrum through the EWSB sector. Such 'Higgs portal' scenarios, however, are strongly constrained by direct DM search data, because of the SM-like scalar contributions to the spin-independent cross-section [2; 3; 4]. The restriction, however, becomes considerably relaxed if the DM particle has an appreciable interaction strength with a heavier neutral scalar in the 2HDM spectrum [5; 6; 7]. In that case, not only does one have smaller, propagator-suppressed contributions to direct search cross-sections, but it is also easier to reduce the tension between the direct search limits [8; 9] and those from the relic density of the universe [10], due to the interplay between more than one contributing channel and the larger number of parameters at one's disposal. The constraints as well as collider signatures of such _2HDM + scalar DM_ scenarios [11; 12; 13] have already been explored [5; 7].
We go a step further in the current work. It is natural to ask what the ultraviolet (UV) behaviour of a _2HDM + scalar DM_ scenario can be. Such a question is not only germane in the context of model-building but also has implications for early-universe issues, such as the electroweak phase transition or the freeze-out of the scalar DM candidate. This issue becomes even more fascinating as the presence of DM turns out to crucially govern the high scale validity of the model.
Keeping such points in mind, we study the high-scale behaviour of such a theoretical scenario. The running of the parameters via Renormalization Group Equations (RGE's) then leads to additional constraints arising from vacuum stability and perturbative unitarity at high scales. These bring in further restrictions on the allowed regions of the parameter space, over and above the ones already studied. It is thus important to know which parameter regions have a chance of revealing themselves at the high-luminosity Large Hadron Collider (HL-LHC), corresponding to different upper limits of validity of this kind of theory. It is worth mentioning that the high-scale validity of Type-X 2HDM, especially of the parameter space giving rise to the observed anomalous magnetic moment of the muon [14; 15; 16], has been studied in a previous work [17]. It may be mentioned that the tension between theory and experiment in \(g_{\mu}-2\) is claimed to have been relaxed on the basis of Lattice calculations [18; 19; 20; 21]. We nevertheless have taken a look at the Type-X 2HDM, since it
still allows relatively light (pseudo)scalars, which has a bearing on the evolution of mass parameters upto high scales. Furthermore, there have been studies on the vacuum stability as well as the high scale validity of various other extended scalar sectors, like inert doublets and triplets, in association with DM [22; 23; 24].
On the whole, the novel features of this study are as follows:
* We take up for our high-scale study two phenomenologically relevant 2HDM types, namely, Type-II (which occurs rather naturally in the supersymmetric SM) and Type-X (which is of interest in the context of muon \((g-2)\)). Each of these scenarios is in addition augmented with one SU(2) singlet DM candidate, for which the scalar potential serves as the portal to SM physics.
* For both these cases, the running of various quartic coupling strengths is studied. A scan is made of the parameter space of each of the scenarios, and the allowed regions of the parameter spaces are identified, considering in turn the constraints corresponding to different cut-off scales, and those coming from DM-related issues (mainly direct searches and relic density), in conjunction with the usual phenomenological limits. In particular, the parameter region in Type-X 2HDM improving on the discrepancy in muon \((g-2)\) is filtered out as an added requirement.
* With the parameter regions thus narrowed down, the potential of capturing the signatures of such scenarios at the HL-LHC is commented upon.
The paper is organized as follows. We discuss the model, the theoretical and experimental constraints, as well as the constraints from the dark matter sector on the model, in Section 2. In Section 3, we discuss the RG running of all the couplings and demonstrate it with a few benchmark points from Type-II and Type-X 2HDM. We identify the allowed parameters from the perspective of various high-scale validity and dark matter constraints, and discuss the interplay between the two, in Section 4. Finally, we summarize and conclude our discussion in Section 5.
## 2 Models and Constraints
### Models
As stated above, we concentrate on a two Higgs doublet model (2HDM) with an \(SU(2)\) real singlet scalar dark matter candidate \(S\). \(S\) interacts with the two Higgs doublets \(\Phi_{1,2}\). The scalar potential of the full scenario is
\[\mathcal{V}=\mathcal{V}_{2HDM}+\frac{1}{2}M_{S}^{2}S^{2}+\frac{\lambda_{S}}{4! }S^{4}+\lambda_{S1}S^{2}\Phi_{1}^{\dagger}\Phi_{1}+\lambda_{S2}S^{2}\Phi_{2}^{ \dagger}\Phi_{2}. \tag{1}\]
where the terms in odd powers of \(S\) are absent due to a \(Z_{2}\) symmetry that stabilizes it.
The most general scalar potential involving two scalar doublets in 2HDM is given as follows.
\[{\cal V}_{2HDM} = m_{11}^{2}(\Phi_{1}^{\dagger}\Phi_{1})+m_{22}^{2}(\Phi_{2}^{\dagger }\Phi_{2})-\left[m_{12}^{2}(\Phi_{1}^{\dagger}\Phi_{2}+{\rm h.c.})\right]+\frac{ \lambda_{1}}{2}(\Phi_{1}^{\dagger}\Phi_{1})^{2}+\frac{\lambda_{2}}{2}(\Phi_{2} ^{\dagger}\Phi_{2})^{2} \tag{2}\] \[+\lambda_{3}(\Phi_{1}^{\dagger}\Phi_{1})(\Phi_{2}^{\dagger}\Phi_{ 2})+\lambda_{4}(\Phi_{1}^{\dagger}\Phi_{2})(\Phi_{2}^{\dagger}\Phi_{1})+\left[ \frac{\lambda_{5}}{2}(\Phi_{1}^{\dagger}\Phi_{2})^{2}+{\rm h.c.}\right].\]
We assume CP-conservation, which is ensured by taking all \(\lambda_{i}\)'s and \(m_{12}^{2}\) to be real.
The two complex Higgs doublets with hypercharge \(Y=1\) can be written as
\[\Phi_{1}=\left(\begin{array}{c}\phi_{1}^{+}\\ \frac{1}{\sqrt{2}}\left(v_{1}+\phi_{1}^{0}+ia_{1}\right)\end{array}\right)\,, \quad\Phi_{2}=\left(\begin{array}{c}\phi_{2}^{+}\\ \frac{1}{\sqrt{2}}\left(v_{2}+\phi_{2}^{0}+ia_{2}\right)\end{array}\right). \tag{3}\]
Here \(v_{1}\) and \(v_{2}\) are the vacuum expectation values of the two doublets, with \(v^{2}=v_{1}^{2}+v_{2}^{2}=(246\ {\rm GeV})^{2}\) and \(\tan\beta=v_{2}/v_{1}\). After EWSB, we obtain five physical states: two neutral CP-even scalars, the lighter of which will be called \(h\) and the heavier \(H\); one neutral pseudoscalar \(A\); and a pair of charged scalars \(H^{\pm}\).
The above potential prevents mixing between \(S\) and the scalar doublets, as well as any vacuum expectation value (VEV) for \(S\). The mass of the DM candidate \(S\) is given by \((M_{S}^{phy})^{2}=M_{S}^{2}+\lambda_{S1}v_{1}^{2}+\lambda_{S2}v_{2}^{2}\).
In order to suppress tree-level flavour-changing neutral currents (FCNC), one further needs to impose a \({\cal Z}_{2}\) symmetry in the Yukawa sector. Depending on its nature, there are four major types of 2HDM's. Here we concentrate on Type-II and Type-X 2HDM. In Type-II 2HDM, up-type quarks couple to one doublet, and down-type quarks and charged leptons to the other doublet. Under this assumption, \({\cal L}_{Yukawa}\) becomes
\[-{\cal L}_{Yukawa} = Y_{u2}\,\overline{Q}_{L}\,\tilde{\Phi}_{2}\,u_{R}+\,Y_{d1}\, \overline{Q}_{L}\,\Phi_{1}\,d_{R}\,+\,Y_{\ell 1}\,\overline{L}_{L}\,\Phi_{1}\,e_{R}+ \,{\rm h.c.} \tag{4}\]
In Type-X 2HDM, on the other hand, the Yukawa interactions are given as
\[-{\cal L}_{Yukawa} = Y_{u2}\,\overline{Q}_{L}\,\tilde{\Phi}_{2}\,u_{R}+\,Y_{d2}\, \overline{Q}_{L}\,\Phi_{2}\,d_{R}\,+\,Y_{\ell 1}\,\overline{L}_{L}\,\Phi_{1}\,e_{R}+ \,{\rm h.c.} \tag{5}\]
where \(\Phi_{1}\) couples to leptons only and \(\Phi_{2}\) only to quarks. In Equations 4 and 5,
\(Q_{L}^{T}=(u_{L}\,,d_{L})\), \(L_{L}^{T}=(\nu_{L}\,,l_{L})\), and \(\widetilde{\Phi}_{1,2}=i\tau_{2}\Phi_{1,2}^{*}\). \(Y_{u2}\), \(Y_{d1}\), \(Y_{d2}\) and \(Y_{\ell 1}\) are the couplings of the up quarks, down quarks and leptons with the two doublets, where family indices are suppressed.
It should also be noted that the \({\cal Z}_{2}\) symmetry of the Yukawa sector is present in the scalar potential as well, except for the soft-breaking term \(m_{12}^{2}\).
### Theoretical constraints
Theoretical constraints on the model include perturbativity, unitarity and vacuum stability, all the way upto the energy scale which marks the upper limit of validity of the model. Effects of these constraints on various 2HDM parameter spaces have been studied in detail earlier [25; 26; 27; 28; 29]. It has been pointed out that a large separation between \(m_{A}\) and \(m_{H^{\pm}}\) is disfavored by the requirement of vacuum stability and perturbativity.
\(\bullet\)**Vacuum stability:** We would like to check the boundedness from below condition of the scalar potential, which implies there exists no direction in the field space in which \(\mathcal{V}\rightarrow-\infty\). This leads to the following conditions on the quartic couplings of the potential [30; 31; 32].
\[\lambda_{1,2}>0\,, \tag{6}\] \[\lambda_{S}>0\,,\] (7) \[\lambda_{3}>-\sqrt{\lambda_{1}\lambda_{2}}\,,\] (8) \[|\lambda_{5}|<\lambda_{3}+\lambda_{4}+\sqrt{\lambda_{1}\lambda_ {2}}\,,\] (9) \[\lambda_{S1}>-\sqrt{\frac{1}{12}\lambda_{S}\lambda_{1}},\lambda_{ S2}>-\sqrt{\frac{1}{12}\lambda_{S}\lambda_{2}}. \tag{10}\]
For negative \(\lambda_{S1}\) or \(\lambda_{S2}\) one additionally has to satisfy,
\[\left(\frac{1}{12}\lambda_{S}\lambda_{1}-\lambda_{S1}^{2}\right) >0, \tag{11}\] \[\left(\frac{1}{12}\lambda_{S}\lambda_{2}-\lambda_{S2}^{2}\right) >0,\] (12) \[-2\lambda_{S1}\lambda_{S2}+\frac{1}{6}\lambda_{S}\lambda_{3} >-\sqrt{4\left(\frac{1}{12}\lambda_{S}\lambda_{1}-\lambda_{S1}^{2} \right)\left(\frac{1}{12}\lambda_{S}\lambda_{2}-\lambda_{S2}^{2}\right)},\] (13) \[-2\lambda_{S1}\lambda_{S2}+\frac{1}{6}\lambda_{S}\left(\lambda_{ 3}+\lambda_{4}-|\lambda_{5}|\right) >-\sqrt{4\left(\frac{1}{12}\lambda_{S}\lambda_{1}-\lambda_{S1}^{2} \right)\left(\frac{1}{12}\lambda_{S}\lambda_{2}-\lambda_{S2}^{2}\right)}. \tag{14}\]
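For use in numerical scans, these boundedness-from-below conditions can be encoded in a few lines of code. The following Python sketch is purely illustrative (the function name and interface are ours, and it is not part of the SARAH/2HDME toolchain used later in the analysis); it simply checks Equations 6-14:

```python
import math

def bounded_from_below(l1, l2, l3, l4, l5, lS, lS1, lS2):
    """Tree-level boundedness-from-below test for the 2HDM + singlet potential,
    implementing the conditions of Equations 6-14."""
    # Equations 6-10
    if not (l1 > 0 and l2 > 0 and lS > 0):
        return False
    if not (l3 > -math.sqrt(l1 * l2)):
        return False
    if not (abs(l5) < l3 + l4 + math.sqrt(l1 * l2)):
        return False
    if not (lS1 > -math.sqrt(lS * l1 / 12.0) and lS2 > -math.sqrt(lS * l2 / 12.0)):
        return False
    # Equations 11-14, required in addition when lambda_S1 or lambda_S2 is negative
    if lS1 < 0.0 or lS2 < 0.0:
        a = lS * l1 / 12.0 - lS1**2
        b = lS * l2 / 12.0 - lS2**2
        if not (a > 0.0 and b > 0.0):
            return False
        root = math.sqrt(4.0 * a * b)
        if not (-2.0 * lS1 * lS2 + lS * l3 / 6.0 > -root):
            return False
        if not (-2.0 * lS1 * lS2 + lS * (l3 + l4 - abs(l5)) / 6.0 > -root):
            return False
    return True
```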
\(\bullet\)**Perturbativity:** For the 2HDM to remain a perturbative quantum field theory at a given scale, all quartic couplings involving the scalar mass eigenstates must satisfy \(C_{H_{i}H_{j}H_{k}H_{l}}<4\pi\) and all Yukawa couplings must satisfy \(Y_{j}<\sqrt{4\pi}\). Further, the unitarity bound on the tree-level scattering amplitudes of the scalars and the longitudinal parts of the EW gauge bosons puts an upper bound \(|a_{i}|\leq 8\pi\) on the eigenvalues of the \(2\to 2\) scattering matrices [33; 34; 35; 36; 37; 38].
The physical masses of the additional scalars can be expressed as:
\[m_{A}^{2} = \frac{m_{12}^{2}}{\sin\beta\cos\beta}-\lambda_{5}v^{2}, \tag{15}\] \[m_{H^{\pm}}^{2} \approx m_{A}^{2}+\frac{1}{2}v^{2}(\lambda_{5}-\lambda_{4}). \tag{16}\]
It is clear from Equation 16 that \(m_{H^{\pm}}^{2}-m_{A}^{2}\) is proportional to \(\lambda_{5}-\lambda_{4}\), which should be less than \(\lambda_{3}+\sqrt{\lambda_{1}\lambda_{2}}\) from the requirement of boundedness from below (Equation 9). Therefore these conditions, along with the requirement of perturbativity, i.e. \(C_{H_{i}H_{j}H_{k}H_{l}}<4\pi\), put an upper limit on the squared-mass difference \(m_{H^{\pm}}^{2}-m_{A}^{2}\).
The aforementioned constraints can easily be translated into constraints on the parameter space by expressing the quartic couplings in terms of the parameters of the physical basis, i.e. the masses of the scalars and the mixing angles, as follows.
\[\lambda_{1} =\frac{m_{H}^{2}\cos^{2}\alpha+m_{h}^{2}\sin^{2}\alpha-m_{12}^{2}\tan \beta}{v^{2}\cos^{2}\beta},\] \[\lambda_{2} =\frac{m_{H}^{2}\sin^{2}\alpha+m_{h}^{2}\cos^{2}\alpha-m_{12}^{2} \cot\beta}{v^{2}\sin^{2}\beta},\] \[\lambda_{3} =\frac{(m_{H}^{2}-m_{h}^{2})\cos\alpha\sin\alpha+2m_{H^{\pm}}^{2} \sin\beta\cos\beta-m_{12}^{2}}{v^{2}\sin\beta\cos\beta},\] \[\lambda_{4} =\frac{(m_{A}^{2}-2m_{H^{\pm}}^{2})\sin\beta\cos\beta+m_{12}^{2} }{v^{2}\sin\beta\cos\beta},\] \[\lambda_{5} =\frac{m_{12}^{2}-m_{A}^{2}\sin\beta\cos\beta}{v^{2}\sin\beta \cos\beta}. \tag{17}\]
One should note from the expression of \(\lambda_{1}\) in Equation 17 that, in order to keep it within the perturbative limit, the soft \(\mathcal{Z}_{2}\)-breaking parameter must satisfy \(m_{12}^{2}\approx\frac{m_{H}^{2}}{\tan\beta}\), especially when \(m_{H}>>m_{h}\).
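Since the scan is performed over physical-basis parameters, it is convenient to have Equation 17 in executable form. The short Python sketch below is a direct transcription of those relations; the convention that \(\beta-\alpha\) lies in the first quadrant (so that \(\alpha=\beta-\arcsin[\sin(\beta-\alpha)]\)) is an assumption of the sketch and should be adapted to the convention used in any given scan:

```python
import math

def quartics_from_physical(mh, mH, mA, mHpm, m12sq, tanb, sin_bma, v=246.0):
    """Quartic couplings lambda_1..lambda_5 from the physical-basis parameters,
    following Equation 17 (masses in GeV, m12sq in GeV^2)."""
    beta = math.atan(tanb)
    alpha = beta - math.asin(sin_bma)  # assumes 0 < beta - alpha < pi/2
    sb, cb = math.sin(beta), math.cos(beta)
    sa, ca = math.sin(alpha), math.cos(alpha)
    l1 = (mH**2 * ca**2 + mh**2 * sa**2 - m12sq * tanb) / (v**2 * cb**2)
    l2 = (mH**2 * sa**2 + mh**2 * ca**2 - m12sq / tanb) / (v**2 * sb**2)
    l3 = ((mH**2 - mh**2) * ca * sa + 2.0 * mHpm**2 * sb * cb - m12sq) / (v**2 * sb * cb)
    l4 = ((mA**2 - 2.0 * mHpm**2) * sb * cb + m12sq) / (v**2 * sb * cb)
    l5 = (m12sq - mA**2 * sb * cb) / (v**2 * sb * cb)
    return l1, l2, l3, l4, l5
```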
### Experimental constraints
Now we briefly discuss experimental constraints on the model parameters.
\(\bullet\) **Electroweak Precision measurements:** Electroweak precision measurements [39], especially of the oblique parameters \((S,T,U)\)[40; 41], put significant constraints on the 2HDM parameter space when considered at the one-loop level, because of the presence of additional scalars. In various earlier works, it has been pointed out that the heavier neutral scalar (\(H\)) and charged scalar (\(H^{\pm}\)) masses should be close to each other (\(\Delta m\lesssim 50\) GeV), in order to avoid the breaking of custodial SU(2) symmetry [42; 43; 44; 45; 46], and at the same time keep the pseudo-scalar mass less constrained. This limit on \(\Delta m\) becomes stronger when \(H\) and \(H^{\pm}\) become heavier (\(\gtrsim\) 600 GeV).
\(\bullet\) **Collider bounds:** The CMS and ATLAS data from runs I and II on the observed SM-like 125-GeV scalar provide measurements of its signal strengths in different channels with increasing precision [47; 48; 49]. The data show significant agreement with the SM predictions for the couplings and push the limits towards the so-called alignment limit, i.e., \((\beta-\alpha)\approx\frac{\pi}{2}\).
Direct searches for non-standard scalars also put severe constraints on the parameter space. For Type-X 2HDM, the \(\tau\tau\) final state restricts our parameter space strongly [50; 51], mostly because we choose to work in the low scalar mass and large \(\tan\beta\) region, owing to its connection to \(g_{\mu}-2\). For Type-II, on the other hand, the major constraint comes from the \(hh\) final state [52; 53]. The LEP limit on the charged Higgs mass (\(\gtrsim\) 80 GeV) [54] is imposed for both types. Type-II, in addition, gets constrained by B-physics observables, which put a strong lower bound on the charged Higgs mass, \(m_{H^{\pm}}\gtrsim\) 600 GeV [27; 28; 29; 55]. All our chosen benchmarks in this work are consistent with the results of HiggsTools, in particular, HiggsSignals[56; 57; 58; 59; 60] and HiggsBounds[61; 62; 63; 64; 65; 66; 67].
### Dark matter constraints
The WIMP-DM candidate of our model should satisfy the following constraints:
\(\bullet\) The thermal relic density should be consistent with the latest Planck data [10].
\(\bullet\) The DM-nucleon cross-section must be below the upper bound given by the latest LZ experiment [68].
\(\bullet\) Indirect detection constraints of Fermi-LAT experiments coming from isotropic gamma-ray data and the gamma ray observations from dwarf spheroidal galaxies [69] should be satisfied.
## 3 Demonstration of running couplings with some benchmarks
After discussing all the theoretical and experimental constraints, we examine the high-scale validity of such models (_2HDM + scalar_). First we list the RGE's for all gauge, Yukawa and scalar quartic couplings of our model at the one-loop level. Though only the two-loop RGE's [70] are used in the subsequent numerical analyses and the results that follow, we present the one-loop equations to provide an intuitive grasp of the key features of the running of the relevant couplings. The two-loop RGE's are shown in detail in Appendix A. We have implemented the model and generated the one- and two-loop RG equations in SARAH[71; 72; 73; 74]. Subsequently, we evolved the couplings using the aforementioned RG equations with 2HDME[75].
### The one-loop RGE's
We begin by introducing the one-loop RGE's for the gauge couplings. Equation 3.1 demonstrates that they constitute a stand-alone set at one loop and, as a result, are the same for the different types of 2HDM. We would like to mention that GUT normalisation was not used while writing Equation 3.1.
\[16\pi^{2}\beta_{g_{1}}= 7g_{1}^{3},\] \[16\pi^{2}\beta_{g_{2}}= -3g_{2}^{3},\] \[16\pi^{2}\beta_{g_{3}}= -7g_{3}^{3}. \tag{3.1}\]
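Because the gauge couplings form a closed system at one loop, Equation 3.1 can be integrated in closed form: writing \(t=\ln\mu\) and \(16\pi^{2}\,dg/dt=b\,g^{3}\), one finds \(g^{-2}(\mu)=g^{-2}(\mu_{0})-\frac{b}{8\pi^{2}}\ln(\mu/\mu_{0})\). The Python snippet below is a minimal illustration of this; the starting values quoted at the top-quark pole mass are approximate, non-GUT-normalised inputs assumed here for illustration only, not numbers taken from the analysis of this paper:

```python
import math

def run_gauge_one_loop(g0, b, mu0, mu):
    """One-loop running of a gauge coupling with 16*pi^2 * beta_g = b * g^3,
    using g^-2(mu) = g^-2(mu0) - b/(8*pi^2) * ln(mu/mu0)."""
    inv_g2 = 1.0 / g0**2 - b / (8.0 * math.pi**2) * math.log(mu / mu0)
    if inv_g2 <= 0.0:
        raise ValueError("Landau pole reached below the requested scale")
    return 1.0 / math.sqrt(inv_g2)

mt = 173.0                               # GeV, starting scale
g1_0, g2_0, g3_0 = 0.36, 0.65, 1.17      # assumed illustrative inputs at m_t
mu = 1.0e16                              # GeV, target scale
g1 = run_gauge_one_loop(g1_0,  7.0, mt, mu)
g2 = run_gauge_one_loop(g2_0, -3.0, mt, mu)
g3 = run_gauge_one_loop(g3_0, -7.0, mt, mu)
```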
However, the running of the Yukawa couplings as well as of the quartic couplings pertaining to the scalar sector receives different contributions for the different types of 2HDM's, since the quark and lepton couplings with the scalar doublets play a crucial role in these cases at the one-loop level. We present the running of the aforementioned couplings for the Type-II and Type-X 2HDM's.
#### 3.1.1 Type-II 2HDM
We first concentrate on the RGE's of the Type-II 2HDM Yukawa couplings. Here, the superscripts \(g\) and \(Y\) stand for contributions from the gauge and Yukawa sectors, respectively, to the running of the Yukawa couplings (taken here as real).
\[16\pi^{2}\beta^{g}_{Y_{t}} =-\left(\frac{17}{12}g_{1}^{2}+\frac{9}{4}g_{2}^{2}+8g_{3}^{2}\right) Y_{t},\] \[16\pi^{2}\beta^{Y}_{Y_{t}} =\left(\frac{3}{2}Y_{b}^{2}+\frac{9}{2}Y_{t}^{2}+Y_{\tau}^{2} \right)Y_{t}-\left(Y_{b}^{2}+Y_{\tau}^{2}\right)Y_{t},\] \[16\pi^{2}\beta^{g}_{Y_{b}} =-\left(\frac{5}{12}g_{1}^{2}+\frac{9}{4}g_{2}^{2}+8g_{3}^{2} \right)Y_{b},\] \[16\pi^{2}\beta^{Y}_{Y_{b}} =\left(\frac{9}{2}Y_{b}^{2}+\frac{3}{2}Y_{t}^{2}+Y_{\tau}^{2} \right)Y_{b}-Y_{t}^{2}Y_{b},\] \[16\pi^{2}\beta^{g}_{Y_{\tau}} =-\left(\frac{15}{4}g_{1}^{2}+\frac{9}{4}g_{2}^{2}\right)Y_{\tau},\] \[16\pi^{2}\beta^{Y}_{Y_{\tau}} =\left(\frac{5}{2}Y_{\tau}^{2}+3Y_{b}^{2}\right)Y_{\tau}. \tag{3.2}\]
The resulting beta-function will be the sum of the gauge and Yukawa components.
\[\beta_{Y}=\beta^{g}_{Y}+\beta^{Y}_{Y}. \tag{3.3}\]
The relevant equations for the running of the quartic couplings are given below. Here, the superscripts \(b\) and \(Y\) denote, respectively, bosonic (gauge and quartic couplings) and Yukawa interactions contributing to the running of the \(\lambda\)'s.
\[16\pi^{2}\beta^{b}_{\lambda_{1}}= \frac{3}{4}g_{1}^{4}+\frac{3}{2}g_{1}^{2}g_{2}^{2}+\frac{9}{4}g_{2 }^{4}-3g_{1}^{2}\lambda_{1}-9g_{2}^{2}\lambda_{1}+12\lambda_{1}^{2}+4 \lambda_{3}^{2}+4\lambda_{3}\lambda_{4}+2\lambda_{4}^{2}+2\lambda_{5}^{2}+4 \lambda_{S1}^{2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{1}}= -4Y_{\tau}^{4}+4Y_{\tau}^{2}\lambda_{1}-12Y_{b}^{4}+12Y_{b}^{2} \lambda_{1},\] \[16\pi^{2}\beta^{b}_{\lambda_{2}}= \frac{3}{4}g_{1}^{4}+\frac{3}{2}g_{1}^{2}g_{2}^{2}+\frac{9}{4}g_{2 }^{4}-3g_{1}^{2}\lambda_{2}-9g_{2}^{2}\lambda_{2}+12\lambda_{2}^{2}+4 \lambda_{3}^{2}+4\lambda_{3}\lambda_{4}+2\lambda_{4}^{2}+2\lambda_{5}^{2}+4 \lambda_{S2}^{2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{2}}= -12Y_{t}^{4}+12Y_{t}^{2}\lambda_{2},\] \[16\pi^{2}\beta^{b}_{\lambda_{3}}= \frac{3}{4}g_{1}^{4}-\frac{3}{2}g_{1}^{2}g_{2}^{2}+\frac{9}{4}g_{ 2}^{4}-3g_{1}^{2}\lambda_{3}-9g_{2}^{2}\lambda_{3}+\left(\lambda_{1}+\lambda_{ 2}\right)\left(6\lambda_{3}+2\lambda_{4}\right)+4\lambda_{3}^{2}+2\lambda_{4}^ {2}+2\lambda_{5}^{2}+4\lambda_{S1}\lambda_{S2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{3}}= \left(6Y_{b}^{2}+6Y_{t}^{2}+2Y_{\tau}^{2}\right)\lambda_{3}-12Y_{ b}^{2}Y_{t}^{2},\] \[16\pi^{2}\beta^{b}_{\lambda_{4}}= 3g_{1}^{2}g_{2}^{2}-\left(3g_{1}^{2}+9g_{2}^{2}\right)\lambda_{4} +2\lambda_{1}\lambda_{4}+2\lambda_{2}\lambda_{4}+8\lambda_{3}\lambda_{4}+4 \lambda_{4}^{2}+8\lambda_{5}^{2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{4}}= \left(6Y_{b}^{2}+6Y_{t}^{2}+2Y_{\tau}^{2}\right)\lambda_{4}+12Y_{ b}^{2}Y_{t}^{2},\] \[16\pi^{2}\beta^{b}_{\lambda_{5}}= \left(-3g_{1}^{2}-9g_{2}^{2}+2\lambda_{1}+2\lambda_{2}+8 \lambda_{3}+12\lambda_{4}\right)\lambda_{5},\] \[16\pi^{2}\beta^{Y}_{\lambda_{5}}= \left(6Y_{b}^{2}+6Y_{t}^{2}+2Y_{\tau}^{2}\right)\lambda_{5},\] \[16\pi^{2}\beta^{b}_{\lambda_{S}}= 3\left(16\lambda_{S1}^{2}+16\lambda_{S2}^{2}+\lambda_{S}^{2} \right),\] \[16\pi^{2}\beta^{Y}_{\lambda_{S}}= 0,\] \[16\pi^{2}\beta^{b}_{\lambda_{S1}}= -\frac{3}{2}g_{1}^{2}\lambda_{S1}-\frac{9}{2}g_{2}^{2}\lambda_{S1 }+6\lambda_{1}\lambda_{S1}+\lambda_{S}\lambda_{S1}+4\lambda_{3}\lambda_{S2}+2 \lambda_{4}\lambda_{S2}+8\lambda_{S1}^{2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{S1}}= 6\lambda_{S1}Y_{b}^{2}+2\lambda_{S1}Y_{\tau}^{2},\] \[16\pi^{2}\beta^{b}_{\lambda_{S2}}= -\frac{3}{2}g_{1}^{2}\lambda_{S2}-\frac{9}{2}g_{2}^{2}\lambda_{S2 }+6\lambda_{2}\lambda_{S2}+\lambda_{S}\lambda_{S2}+4\lambda_{3}\lambda_{S1}+2 \lambda_{4}\lambda_{S1}+8\lambda_{S2}^{2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{S2}}= 6\lambda_{S2}Y_{t}^{2}. \tag{3.4}\]
Like before, the actual beta-function will be the sum of the bosonic and Yukawa components.
\[\beta_{\lambda}=\beta_{\lambda}^{b}+\beta_{\lambda}^{Y}. \tag{3.5}\]
Comparing with Equation 2.4, we would like to make the following identifications for type-II 2HDM.
\[Y_{u2}=Y_{t},\quad Y_{d1}=Y_{b}\ \ \text{and}\ \ Y_{\ell 1}=Y_{\tau}.\]
#### 3.1.2 Type-X 2HDM
Next we focus on the RGE of the Yukawa couplings in Type-X 2HDM. Here too, the superscripts \(g\) and \(Y\) stand for contributions from gauge and Yukawa sectors, respectively.
\[16\pi^{2}\beta_{Y_{t}}^{g} =-\left(\frac{17}{12}g_{1}^{2}+\frac{9}{4}g_{2}^{2}+8g_{3}^{2} \right)Y_{t},\] \[16\pi^{2}\beta_{Y_{t}}^{Y} =\left(\frac{3}{2}Y_{b}^{2}+\frac{9}{2}Y_{t}^{2}\right)Y_{t},\] \[16\pi^{2}\beta_{Y_{b}}^{g} =-\left(\frac{5}{12}g_{1}^{2}+\frac{9}{4}g_{2}^{2}+8g_{3}^{2} \right)Y_{b},\] \[16\pi^{2}\beta_{Y_{b}}^{Y} =\left(\frac{9}{2}Y_{b}^{2}+\frac{3}{2}Y_{t}^{2}\right)Y_{b},\] \[16\pi^{2}\beta_{Y_{\tau}}^{g} =-\left(\frac{15}{4}g_{1}^{2}+\frac{9}{4}g_{2}^{2}\right)Y_{\tau},\] \[16\pi^{2}\beta_{Y_{\tau}}^{Y} =\frac{5}{2}Y_{\tau}^{3}. \tag{3.6}\]
The gauge and Yukawa components will be added to provide the final beta-function.
\[\beta_{Y}=\beta_{Y}^{g}+\beta_{Y}^{Y}. \tag{3.7}\]
We next present the running of the scalar quartic couplings. The superscripts \(b\) and \(Y\) bear the same meaning as in the Type-II case.
\[16\pi^{2}\beta^{b}_{\lambda_{1}}= \frac{3}{4}g_{1}^{4}+\frac{3}{2}g_{1}^{2}g_{2}^{2}+\frac{9}{4}g_{2}^{4}-3g_{1}^{2}\lambda_{1}-9g_{2}^{2}\lambda_{1}+12\lambda_{1}^{2}+4\lambda_{3}^{2}+4\lambda_{3}\lambda_{4}+2\lambda_{4}^{2}+2\lambda_{5}^{2}+4\lambda_{S1}^{2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{1}}= -4Y_{\tau}^{4}+4Y_{\tau}^{2}\lambda_{1},\] \[16\pi^{2}\beta^{b}_{\lambda_{2}}= \frac{3}{4}g_{1}^{4}+\frac{3}{2}g_{1}^{2}g_{2}^{2}+\frac{9}{4}g_{2}^{4}-3g_{1}^{2}\lambda_{2}-9g_{2}^{2}\lambda_{2}+12\lambda_{2}^{2}+4\lambda_{3}^{2}+4\lambda_{3}\lambda_{4}+2\lambda_{4}^{2}+2\lambda_{5}^{2}+4\lambda_{S2}^{2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{2}}= -12Y_{b}^{4}-12Y_{t}^{4}+\left(12Y_{b}^{2}+12Y_{t}^{2}\right)\lambda_{2},\] \[16\pi^{2}\beta^{b}_{\lambda_{3}}= \frac{3}{4}g_{1}^{4}-\frac{3}{2}g_{1}^{2}g_{2}^{2}+\frac{9}{4}g_{2}^{4}-3g_{1}^{2}\lambda_{3}-9g_{2}^{2}\lambda_{3}+\left(\lambda_{1}+\lambda_{2}\right)\left(6\lambda_{3}+2\lambda_{4}\right)+4\lambda_{3}^{2}+2\lambda_{4}^{2}+2\lambda_{5}^{2}+4\lambda_{S1}\lambda_{S2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{3}}= \left(6Y_{b}^{2}+6Y_{t}^{2}+2Y_{\tau}^{2}\right)\lambda_{3},\] \[16\pi^{2}\beta^{b}_{\lambda_{4}}= 3g_{1}^{2}g_{2}^{2}-\left(3g_{1}^{2}+9g_{2}^{2}\right)\lambda_{4}+2\lambda_{1}\lambda_{4}+2\lambda_{2}\lambda_{4}+8\lambda_{3}\lambda_{4}+4\lambda_{4}^{2}+8\lambda_{5}^{2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{4}}= \left(6Y_{b}^{2}+6Y_{t}^{2}+2Y_{\tau}^{2}\right)\lambda_{4},\] \[16\pi^{2}\beta^{b}_{\lambda_{5}}= \left(-3g_{1}^{2}-9g_{2}^{2}+2\lambda_{1}+2\lambda_{2}+8\lambda_{3}+12\lambda_{4}\right)\lambda_{5},\] \[16\pi^{2}\beta^{Y}_{\lambda_{5}}= \left(6Y_{b}^{2}+6Y_{t}^{2}+2Y_{\tau}^{2}\right)\lambda_{5},\] \[16\pi^{2}\beta^{b}_{\lambda_{S}}= 3\left(16\lambda_{S1}^{2}+16\lambda_{S2}^{2}+\lambda_{S}^{2}\right),\] \[16\pi^{2}\beta^{Y}_{\lambda_{S}}= 0,\] \[16\pi^{2}\beta^{b}_{\lambda_{S1}}= -\frac{3}{2}g_{1}^{2}\lambda_{S1}-\frac{9}{2}g_{2}^{2}\lambda_{S1}+6\lambda_{1}\lambda_{S1}+\lambda_{S}\lambda_{S1}+4\lambda_{3}\lambda_{S2}+2\lambda_{4}\lambda_{S2}+8\lambda_{S1}^{2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{S1}}= 2\lambda_{S1}Y_{\tau}^{2},\] \[16\pi^{2}\beta^{b}_{\lambda_{S2}}= -\frac{3}{2}g_{1}^{2}\lambda_{S2}-\frac{9}{2}g_{2}^{2}\lambda_{S2}+6\lambda_{2}\lambda_{S2}+\lambda_{S}\lambda_{S2}+4\lambda_{3}\lambda_{S1}+2\lambda_{4}\lambda_{S1}+8\lambda_{S2}^{2},\] \[16\pi^{2}\beta^{Y}_{\lambda_{S2}}= 6\lambda_{S2}Y_{t}^{2}+6\lambda_{S2}Y_{b}^{2}. \tag{3.8}\]
The actual beta-function, as before, will be the sum of the bosonic and Yukawa parts.
\[\beta_{\lambda}=\beta^{b}_{\lambda}+\beta^{Y}_{\lambda}. \tag{3.9}\]
In case of Type-X 2HDM, comparing with Equation 2.5, we make the following identifications.
\[Y_{u2}=Y_{t},\quad Y_{d2}=Y_{b}\ \ \text{and}\ \ Y_{\ell 1}=Y_{\tau}.\]
### Choice of benchmarks and the running of quartic couplings
In this subsection we try to understand the pattern of the running of the different quartic couplings in our case. The pattern of running for different 2HDM's has already been studied extensively in the literature [17; 70]. Here our main goal is to see how the scalar DM affects the cut-off scale \(\Lambda_{UV}^{Cut-off}\), where vacuum stability, unitarity or perturbativity breaks down. We would also like to compare this scenario with the pure 2HDM scenarios. We choose a few different benchmark points, four each for Type-II and Type-X, with different values of \(\lambda_{S}\), \(\lambda_{S1}\) and \(\lambda_{S2}\), presented in Table 1 and Table 2 respectively. We show their running upto their respective cut-off scales \(\Lambda_{UV}^{Cut-off}\). Here we use the two-loop RGE's of the different quartic couplings presented in Appendix A. Please also note that at this stage, the constraints from DM direct search or relic density have not been taken into account. These will come _post facto_, as exemplified in Section 4.
#### 3.2.1 Type-II 2HDM
For Type-II 2HDM benchmark points we choose scalar masses and mixing angles which are allowed by all the theoretical and experimental constraints, as presented below.
\(m_{h}\) = 125.0 GeV, \(m_{H}\) = 588.0 GeV, \(m_{A}\) = 588 GeV, \(m_{H}^{\pm}\) = 610 GeV, \(m_{12}^{2}\) = 35776.44128 GeV\({}^{2}\), \(\tan\beta\) = 9.6, \(\sin(\beta-\alpha)\) = 0.998, which in the general basis leads to \(\lambda_{1}\) = 1.60, \(\lambda_{2}\) = 0.21, \(\lambda_{3}\) = 3.15, \(\lambda_{4}\) = 0.42, \(\lambda_{5}\) = 0.03. Alongside, we choose four sets of DM sector parameters \(\lambda_{S}\), \(\lambda_{S1}\) and \(\lambda_{S2}\) (see Table 1).
BP1 corresponds to the usual Type-II 2HDM case, without the addition of the scalar singlet DM. On the other hand, for BP2, \(\lambda_{S2}\) is larger than both \(\lambda_{S1}\) and \(\lambda_{S}\), which are kept at moderate values. In the case of BP3, \(\lambda_{S2}\) is taken to be much larger than \(\lambda_{S}\) and \(\lambda_{S1}\), both of which are kept at extremely small values. Finally, in BP4, both \(\lambda_{S1}\) and \(\lambda_{S2}\) are large compared to \(\lambda_{S}\), which is chosen at a moderate value.
Figure 1 represents the two-loop RG running of the various quartic couplings for the Type-II scenario, with the starting scale set at the top-quark pole mass. In Figure 1(a), \(\lambda_{S}\), \(\lambda_{S1}\) and \(\lambda_{S2}\) are set to zero at the EW scale. As the RGE's of these three couplings are always proportional to at least one of them, all three remain zero at any energy scale, even at two-loop order. Therefore, in this case the perturbative unitarity of the quartic coupling \(\lambda_{3}\) determines the scale of validity.
In case of BP2 and BP4, where \(\lambda_{S}\) has moderate values, the cut-off scale is determined by the perturbativity of \(\lambda_{S}\). On the other hand, for BP3, the cut-off scale \(\Lambda_{UV}^{Cut-off}\) is determined by the perturbative unitarity of quartic coupling \(\lambda_{3}\), since in this case \(\lambda_{S}\) is extremely small.
In all the benchmarks in Figure 1, the quartic couplings increase with energy. Since \(\lambda_{3}\) is the largest among all the quartic couplings at the EW scale for our benchmark, and its running involves the factor \(4\lambda_{S1}\lambda_{S2}\) (see Equation 11), its perturbative unitarity breaks at a much lower scale for BP2, BP3 and BP4, compared to BP1 (normal 2HDM), as long as \(\lambda_{S1}\) and \(\lambda_{S2}\) are of the same sign. In such cases, the normal Type-II 2HDM scenario (BP1) naturally corresponds to the largest cut-off scale (see Figure 1(a)). If \(\lambda_{S1}\) and \(\lambda_{S2}\) are of different sign,
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Type-II BP1 & Type-II BP2 & Type-II BP3 & Type-II BP4 \\ \hline \(\lambda_{S}\) & 0.0 & 0.1 & 1.0 \(\times 10^{-6}\) & 0.1 \\ \hline \(\lambda_{S1}\) & 0.0 & 0.3 & 3.0 \(\times 10^{-6}\) & 2.5 \\ \hline \(\lambda_{S2}\) & 0.0 & 2.0 & 1.5 & 2.5 \\ \hline \(\Lambda_{UV}^{Cut-off}\) (2-loop) in GeV & 5.17 \(\times 10^{3}\) & 4.27 \(\times 10^{3}\) & 4.79 \(\times 10^{3}\) & 1.39 \(\times 10^{3}\) \\ \hline \end{tabular}
\end{table}
Table 1: BP’s for Type-II 2HDM with singlet scalar DM
the vacuum stability breaks down early on. Therefore, even in that case, the inclusion of DM worsens the high-scale validity of the normal 2HDM.
We would like to note that the benchmark points chosen here correspond to a large \(\lambda_{3}\) at the EW scale, for which the cut-off scale of the model turns out to be in TeV scale (see Figure 1). However, this is not a generic feature and the model can be valid to much higher scales, as we show in the scan in the next section.
Figure 1: _RG running of quartic couplings for benchmarks (a) BP1, (b) BP2, (c) BP3, (d) BP4 for Type-II with singlet scalar DM scenario. In all cases two-loop RGE’s have been used._

Furthermore, \(\lambda_{S}\) can also play an important role in the breaking of perturbative unitarity. The running of \(\lambda_{S}\) is determined by \(\lambda_{S}\), \(\lambda_{S1}\) and \(\lambda_{S2}\). It is therefore clear that for BP4 the perturbativity breaks down at a much lower scale compared to BP2, since the values of \(\lambda_{S1}\) and \(\lambda_{S2}\) for BP4 are largest among all benchmarks.
#### 3.2.2 Type-X 2HDM
For the Type-X 2HDM we choose the following benchmarks. Similar to the Type-II case, here too, all the masses and mixing angles are allowed by the theoretical and experimental constraints: \(m_{h}=93.6\) GeV, \(m_{H}=125.0\) GeV, \(m_{A}=15.8\) GeV, \(m_{H}^{\pm}=135.0\) GeV, \(m_{12}^{2}=393.28757\) GeV\({}^{2}\), \(\tan\beta=22.0\), \(\sin(\beta-\alpha)=0.006\). Our chosen masses and mixing angles in the physical basis lead to the following quartic couplings in the flavor basis: \(\lambda_{1}\) = 1.03, \(\lambda_{2}\) = 0.26, \(\lambda_{3}\) = 0.59, \(\lambda_{4}\) = -0.45, \(\lambda_{5}\) = 0.14.
Footnote 2: Here too, the retention of the number of places after decimal for various parameters is guided by the same consideration as that in the case of Type-II 2HDM.
One should note that, unlike the Type-II case, here our SM-like Higgs of 125 GeV mass is the second lightest CP-even scalar, which implies a mixing angle \(\sin(\beta-\alpha)\ll 1\). This region is favored by the simultaneous requirement of high scale validity and the observed \(g_{\mu}-2\) [17]. The complementary region, where the 125 GeV Higgs is the lightest, has been studied in [17] without the inclusion of DM. Note that this is just for illustration; in Section 4, we present a more general parameter scan. Furthermore, we choose four sets for the DM sector parameters \(\lambda_{S}\), \(\lambda_{S1}\) and \(\lambda_{S2}\) (see Table 2).
Here too, BP5 represents the usual Type-X 2HDM scenario, where the values of the new couplings pertaining to the DM sector (\(\lambda_{S}\), \(\lambda_{S1}\) and \(\lambda_{S2}\)) are set to zero. For BP6, \(\lambda_{S1}\) is chosen to be much larger than the other two couplings, whereas in BP7 \(\lambda_{S2}\) is much larger than the other two. On the other hand, in BP8, both \(\lambda_{S1}\) and \(\lambda_{S2}\) are chosen to be moderate, but smaller than \(\lambda_{S}\), and they have similar magnitude with opposite sign.
Figure 2 represents the two-loop RG evolution of various quartic couplings for the Type-X scenario, with the initial scale set at the top quark pole mass. In BP5 (Figure 2(a)) \(\lambda_{S}\), \(\lambda_{S1}\) and \(\lambda_{S2}\) are set to zero at the EW scale, and since the RGE's of these three couplings are proportional to at least one of them (as in the Type-II scenario), they remain zero at any higher energy scale for BP5 (this already holds at one-loop). Therefore, for BP5 the cut-off scale \(\Lambda_{UV}^{Cut-off}\) is determined by the perturbativity and unitarity of the quartic couplings of the Type-X 2HDM, especially \(\lambda_{1}\), since in our chosen benchmark \(\lambda_{1}\) is the largest.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Type-X BP5 & Type-X BP6 & Type-X BP7 & Type-X BP8 \\ \hline \(\lambda_{S}\) & 0.0 & 1.0 \(\times 10^{-6}\) & 1.0 \(\times 10^{-6}\) & 4.0 \\ \hline \(\lambda_{S1}\) & 0.0 & 0.69 & 3.0 \(\times 10^{-6}\) & -0.24 \\ \hline \(\lambda_{S2}\) & 0.0 & 3.0 \(\times 10^{-6}\) & 0.69 & 0.24 \\ \hline \(\Lambda_{UV}^{Cut-off}\) (2-loop) in GeV & 2.27 \(\times 10^{8}\) & 4.49 \(\times 10^{7}\) & 2.02 \(\times 10^{8}\) & 1.85 \(\times 10^{4}\) \\ \hline \end{tabular}
\end{table}
Table 2: BP’s for Type-X 2HDM with singlet scalar DM
Figure 2: _RG running of quartic couplings for the benchmarks (a) BP5, (b) BP6, (c) BP7 and (d) BP8 for Type-X with singlet scalar DM scenario. In all cases two-loop RGE’s have been used._

In Figures 2(a), 2(b) and 2(c), all the quartic couplings increase with energy except \(\lambda_{4}\), while in Figure 2(d) both \(\lambda_{4}\) and \(\lambda_{S1}\) decrease with energy. \(\lambda_{1}\) is the largest coupling at the EW scale for the first three benchmarks, and its running depends largely on the factor \(4\lambda_{S1}^{2}\) when \(\lambda_{S1}\) has a non-zero value at the EW scale. The perturbative unitarity breaks at a lower scale for BP6 (Figure 2(b)) compared to BP7 (Figure 2(c)), because of the higher value of \(\lambda_{S1}\) in BP6. Similar to Type-II, here too the normal Type-X 2HDM scenario, namely BP5, pertains to the highest cut-off scale. Interestingly, since the running of \(\lambda_{S}\) is symmetric in \(\lambda_{S1}\) and \(\lambda_{S2}\), \(\lambda_{S}\) runs identically for BP6 and BP7.
On the other hand, \(\lambda_{S}\) increases much faster than the other \(\lambda\)'s for BP8 at two-loop (Figure 2(d)). BP8 shows a distinct behaviour because of the negative sign of \(\lambda_{S1}\). In such cases, where one or both of \(\lambda_{S1}\) and \(\lambda_{S2}\) are chosen negative, the most stringent constraint comes from vacuum stability, which breaks down at a much smaller scale; this is what happens for BP8.
In our earlier work [17], we discussed in detail the running of the quartic couplings of the Type-X 2HDM. It is not difficult to understand how the presence of the SM-DM and DM-DM couplings affects the allowed parameter space obtained there. We can see from Equations 3.4 and 3.8 that \(\lambda_{1}\) and \(\lambda_{2}\) always get positive contributions, in terms of \(4\lambda_{S1}^{2}\) and \(4\lambda_{S2}^{2}\) respectively, while \(\lambda_{3}\) receives a positive or negative contribution (\(4\lambda_{S1}\lambda_{S2}\)), depending on the relative sign of \(\lambda_{S1}\) and \(\lambda_{S2}\). Therefore, if \(\lambda_{3}\), \(\lambda_{S1}\) and \(\lambda_{S2}\) are considerably large compared to all other quartic couplings and \(\lambda_{S1}\) and \(\lambda_{S2}\) come with a relative negative sign, there is a possibility of getting a more relaxed parameter space in terms of perturbative unitarity. But in that case, the vacuum stability will be at stake (similar to BP8) and the final allowed parameter space will be more restricted compared to the 2HDM parameter space.
It is quite apparent that a WIMP like scalar DM, having sizable portal coupling, restricts the high scale validity of the two Higgs doublet models significantly, while interestingly, a FIMP (Feebly Interacting Massive Particle) like scalar singlet having tiny portal-couplings won't affect the high scale validity of the model so much.
## 4 Study of model parameter space
### Regions of high-scale validity
After discussing the RG evolutions of all the relevant couplings in the model, we proceed to scan the model parameter space and look for points that satisfy all the theoretical constraints, namely perturbativity, unitarity, and vacuum stability up to cutoff scale \(\Lambda_{UV}^{cut-off}\). We have chosen four different scales in this context, namely, \(10^{4}\),\(10^{8}\),\(10^{16}\) and \(10^{19}\) GeV, and present the corresponding allowed regions in the parameter space, spanned by the DM-sector couplings \(\lambda_{S1},\lambda_{S2}\) and \(\lambda_{S}\).
In Figures 3 and 4, we show the parameter space valid upto various high scales in the Type-II+singlet DM and Type-X+singlet DM scenarios respectively. In each plot where two of the three couplings \(\lambda_{S1}\), \(\lambda_{S2}\) and \(\lambda_{S}\) are shown, the remaining third coupling has been varied from \(-4\pi\) to \(4\pi\) in the scatter plots. A similar marginalization has been carried out over the remaining parameters in the scalar potential.
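As an illustration of the scan just described, the following sketch samples the DM-sector couplings uniformly in \((-4\pi,4\pi)\) and records, for each point, the highest reference scale that survives all checks. The helpers `run_couplings` and `is_valid`, which stand in for the actual two-loop running and the combined perturbativity/unitarity/stability test, are hypothetical placeholders and not the paper's code.

```python
import numpy as np

SCALES = [1.0e4, 1.0e8, 1.0e16, 1.0e19]  # reference scales in GeV
FOURPI = 4 * np.pi
rng = np.random.default_rng(0)

def classify_point(lS1, lS2, lS, run_couplings, is_valid):
    """Return the highest reference scale up to which the point stays valid.
    run_couplings(point, mu) -> couplings at scale mu (two-loop running),
    is_valid(couplings)     -> perturbativity + unitarity + stability check.
    Both are assumed to be provided by the RGE machinery."""
    highest = None
    for mu in SCALES:
        lams = run_couplings({"lS1": lS1, "lS2": lS2, "lS": lS}, mu)
        if lams is None or not is_valid(lams):
            break
        highest = mu
    return highest

def scan(n_points, run_couplings, is_valid):
    """Scatter-scan the DM-sector couplings; the remaining potential
    parameters would be marginalised over inside run_couplings."""
    results = []
    for _ in range(n_points):
        lS1, lS2, lS = rng.uniform(-FOURPI, FOURPI, size=3)
        results.append((lS1, lS2, lS,
                        classify_point(lS1, lS2, lS, run_couplings, is_valid)))
    return results
```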
In Figures 3(a), (b) and (c), we can read off the upper limits on \(\lambda_{S1}\), \(\lambda_{S2}\) and \(\lambda_{S}\) required for the model to be valid upto 10 TeV, \(10^{8}\) GeV, \(10^{16}\) GeV and \(10^{19}\) GeV respectively in Type-II. The results are very similar in the case of Type-X, as can be seen in Figures 4(a), (b) and (c). In that case, \(\lambda_{S1}\), \(\lambda_{S2}\lesssim 2.4,\,0.9,\,0.4\) and \(0.3\) and \(\lambda_{S}\lesssim 6.8,\,3.2,\,1.7\) and \(1.2\) in order for the model to be valid upto 10 TeV, \(10^{8}\) GeV, \(10^{16}\) GeV and \(10^{19}\) GeV respectively.
The aforementioned constraints come largely from perturbative unitarity. As we have discussed earlier, the \(\lambda_{S}\) coupling runs the fastest among all the scalar couplings and therefore the perturbativity is driven by \(\lambda_{S}\). From Equations 3.4 and 3.8, we see that the running of \(\lambda_{S}\) depends strongly on \(\lambda_{S1}\) as well as \(\lambda_{S2}\). It is also clear that when \(\lambda_{S1}\) and \(\lambda_{S2}\) are positive, increasing \(\lambda_{S}\) will imply stronger limits on \(\lambda_{S1}\) as well as \(\lambda_{S2}\). This is also clear from Figures 3(b), (c) as well as 4(b), (c). In Figures 3 and 4, the regions where either of \(\lambda_{S1}\) and \(\lambda_{S2}\) is negative get constrained by the requirement of vacuum stability (Equation 2.10) as well.
Figure 3: _The parameter space spanned by (a) \(\lambda_{S1}-\lambda_{S2}\) (b) \(\lambda_{S}-\lambda_{S1}\) and (c) \(\lambda_{S}-\lambda_{S2}\), valid upto different high scales in Type-II 2HDM+DM scenario._

We would like to point out that, if we compare the regions in Figures 3 and 4, valid upto various scales, we see that the regions follow a very similar pattern in Type-II and Type-X, although the Yukawa sectors in the two cases are different. The ranges of high scale validity are comparable in both cases, though the allowed region is slightly bigger in Type-X compared to Type-II. We have further checked that, if we are phenomenologically allowed to start with exactly the same low-scale values of all the parameters in Type-II and Type-X, the high scale upto which the theories remain valid differs by \(\lesssim 100\) GeV. The comparison between the two scenarios in this respect can be summarized as follows:
* As long as \(\lambda_{S}\) is moderate to large, the perturbativity constraints are strongly driven by \(\lambda_{S}\). The running of \(\lambda_{S}\) has very little contribution from the Yukawa sector, since the Yukawa coupling-dependent terms enter the running of \(\lambda_{S}\) only indirectly, via \(\lambda_{S1}\) and \(\lambda_{S2}\). Moreover, the Yukawa sector differs for Type-II and Type-X only in terms of \(Y_{b}\), which is a small quantity compared to the other terms in the running. Therefore, when \(\lambda_{S}\) plays the dominant role in high-scale validity, little difference is expected between Type-II and Type-X.
Figure 4: _The parameter space spanned by (a) \(\lambda_{S1}-\lambda_{S2}\) (b) \(\lambda_{S}-\lambda_{S1}\) and (c) \(\lambda_{S}-\lambda_{S2}\), valid upto different high scales in Type-X 2HDM+DM scenario._

* When \(\lambda_{S}\) is extremely small, the perturbative unitarity is driven by the quartic couplings \(\lambda_{1}\) or \(\lambda_{3}\). Although the runnings of these couplings directly involve Yukawa terms, the smallness of the bottom Yukawa ensures that the running remains almost the same for Type-II and Type-X.
* We have checked that for the same benchmarks, the limits of high-scale validity differ by \(\;\raise 1.29pt\hbox{$<$\kern-7.5pt\raise-4.73pt\hbox{$\sim$}}\;100\) GeV between Type-II and Type-X 2HDM.
* We have further explored the possible difference in the allowed parameter space between Type-II and Type-X, after imposing high-scale validity as well as the experimental constraints discussed earlier. This can be seen in terms of the upper limits on \(\lambda_{S1}\) and \(\lambda_{S2}\). The allowed parameter space is larger for Type-X. The reason is as follows. In Type-X the non-standard scalar masses can be low even after all the collider and B-physics constraints are applied. But in the Type-II 2HDM, requirements from B-physics as well as collider constraints imply large lower limits on the non-standard scalar masses. This in turn necessitates large values of the quartic couplings at the EW scale (see Equations 17). Therefore, as a consequence of RG-running, the limits on \(\lambda_{S1}\) and \(\lambda_{S2}\) become stronger in the case of Type-II as compared to Type-X, following the requirement of perturbative unitarity of the quartic couplings.
* We also see that in the regions allowed upto the GUT scale or the Planck scale (red and yellow points) in Figures 3 and 4, the Type-II case has much fewer points. In Type-II the heavy scalars have to be much heavier than the 125 GeV Higgs (a constraint coming from B-physics and collider observables), as can be seen from Equations 17. It is therefore extremely difficult to get quartic couplings that are small.
### Constraints from DM sector
An important question arises: which fractions of the parameter regions discussed above are consistent with constraints on a scalar DM? With this in mind, we look next for parameter regions that are allowed by the relic density [10] and direct DM search experiments such as XENON [76; 77; 8], PANDA-X [78; 79] and LUX-ZEPLIN [9]. In this work we have implemented our models in Feynrules[80] and calculated the DM observables with micrOMEGAs[81].
Let us discuss the Type-II and Type-X cases one by one. Since the DM mass plays an important role in DM-DM annihilation as well as DM-nucleon scattering, we present our results in both cases for three benchmark DM masses, namely \(m_{\rm DM}=400,200\) and \(62.5\) GeV. While implementing the limit from the observed relic density from PLANCK [10], we have made sure that the relic density for our parameter points does not exceed the \(2\sigma\) upper bound. We have also ensured that our DM candidate accounts for at least \(10\%\) of the total observed relic, since there is always a possibility that there are multiple DM candidates in nature which account for the observed relic density collectively. However, we also indicate the regions of parameter space that give rise to the observed relic within the \(2\sigma\) uncertainty.
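A minimal sketch of how the relic-density and direct-detection filters described above can be applied to the computed DM observables for each parameter point is given below. The Planck central value and uncertainty used here (\(\Omega h^{2}\simeq 0.120\pm 0.001\)) and the helper `get_dd_limit`, returning the experimental upper limit on the spin-independent cross-section at a given DM mass, are assumptions introduced for illustration.

```python
OMEGA_H2 = 0.120      # approximate Planck central value for the relic density
SIGMA = 0.001         # approximate 1-sigma uncertainty
MIN_FRACTION = 0.10   # require at least 10% of the observed relic

def passes_relic(omega_h2):
    """Not over-abundant beyond 2 sigma, and at least a 10% contribution."""
    return (omega_h2 <= OMEGA_H2 + 2 * SIGMA
            and omega_h2 >= MIN_FRACTION * OMEGA_H2)

def passes_direct_detection(sigma_si, m_dm, get_dd_limit):
    """Conservative check: compare the full spin-independent cross-section
    against the experimental upper limit at this DM mass, without rescaling
    by the relic fraction (as done in the text)."""
    return sigma_si <= get_dd_limit(m_dm)

def classify(point, get_dd_limit):
    """point: dict with micrOMEGAs-style outputs 'omega_h2', 'sigma_si', 'm_dm'."""
    if not passes_relic(point["omega_h2"]):
        return "excluded by relic density"
    if not passes_direct_detection(point["sigma_si"], point["m_dm"], get_dd_limit):
        return "excluded by direct detection"
    return "allowed"
```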
In Figure 5, we present our results for the Type-II 2HDM. Figures 5(a), (b) and (c) represent the allowed parameter space for \(m_{\rm DM}=400,200\) and \(62.5\) GeV respectively; the maroon points are relic under-abundant while contributing at least \(10\%\) of the observed relic, and the orange points satisfy the upper limit from direct search experiments in addition. The major annihilation channel for the DM pair in
this case is into a pair of Higgses. Since in Type-II the non-standard scalars are heavy (\(\gtrsim 600\) GeV), the kinematically favored annihilation channel is into an SM-Higgs pair, which is typically governed by the \(\lambda_{S2}\) coupling. Therefore, in all the plots we see that the range of \(\lambda_{S2}\) is restricted by the observed relic density to \(|\lambda_{S2}|\lesssim 0.2\,(0.1)\) for 400 GeV (200 GeV) DM mass, whereas the limit on \(\lambda_{S1}\) is more relaxed. However, the limit on \(\lambda_{S1}\) in the case of \(m_{\rm DM}=400\) GeV is stronger compared to the \(m_{\rm DM}=200\) GeV case. The reason is that for \(m_{\rm DM}=400\) GeV an additional \(Hh\) final state also opens up. Therefore, in this case, the parameter space becomes relic under-abundant with smaller \(\lambda_{S1}\) compared to the \(m_{\rm DM}=200\) GeV case. One can see a small region in the center of the \(\lambda_{S1}-\lambda_{S2}\) plane which is disallowed by relic over-abundance; the corresponding limit is \(|\lambda_{S1}|>0.9\,(1)\) for 400 GeV (200 GeV) DM mass.
Figure 5: _The allowed parameter space in Type-II 2HDM, spanned by \(\lambda_{S1}-\lambda_{S2}\) for (a) \(m_{DM}=400\) GeV and (b) \(m_{DM}=200\) GeV and (c) \(m_{DM}=62.5\) GeV. The maroon points have at least 10% contribution to observed relic density, while the orange points satisfy direct detection constraints in addition. The white region at the centre in 5(a) and 5(b) are disallowed by relic over-abundance. The blue points satisfy the actual observed relic density as well as direct detection bound._
A special mention is in order for the Higgs-resonance region. Since one of the major annihilation channels for the DM pair is into \(b\bar{b}\) final states, the DM mass in the Higgs resonance implies large annihilation cross-section and major under-abundance, unless the relevant coupling \(\lambda_{S2}\) is very small, as can be seen from Figure 5(c). In this region, understandably, the dependence on \(\lambda_{S1}\) is further diminished compared to the other mass points, from the point of view of relic density.
The DM-nucleon elastic scattering cross-section in the Type-II 2HDM shows an interesting pattern in the allowed parameter space. Since the coupling \(\lambda_{S2}\), which plays a crucial role in the annihilation of DM pairs, is also responsible for the DM-nucleon scattering, a small DM-nucleon scattering cross-section will necessarily imply a small annihilation cross-section and consequently a large relic over-abundance. This problem can be avoided in specific regions of the parameter space, especially in certain regions of \(\tan\beta\), where, due to the enhanced couplings of the second doublet to down-type quarks in Type-II, we can have a cancellation between contributions coming from \(t\)-channel elastic scattering involving the two neutral scalars.
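Schematically, writing \(g_{\phi SS}\) for the DM coupling to a neutral CP-even scalar \(\phi=h,H\) (induced by \(\lambda_{S1}\), \(\lambda_{S2}\) and the scalar mixing) and \(C_{\phi qq}\) for its coupling to quarks -- notation introduced here purely for illustration and not taken from the paper -- the spin-independent cross-section behaves as
\[\sigma_{SI}\;\propto\;\left[\frac{g_{hSS}\,C_{hqq}}{m_{h}^{2}}+\frac{g_{HSS}\,C_{Hqq}}{m_{H}^{2}}\right]^{2},\]
so that a suppression occurs when the two terms are comparable in magnitude and opposite in sign, which is what the \(\tan\beta\)-enhanced down-type couplings of the second doublet make possible in Type-II.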
Figure 6: _Same as Figure 5, but for Type-X 2HDM._
This effect was pointed out earlier in [5]. Therefore, in Figures 5(a) and (b) we see that the orange points, which are allowed by both the observed relic and the direct search bound, show a specific correlation between the two couplings \(\lambda_{S1}\) and \(\lambda_{S2}\). For \(m_{\rm DM}=62.5\) GeV, i.e. in the vicinity of the 125 GeV Higgs resonance, we get a very strong limit on \(\lambda_{S2}\) from relic under-abundance as mentioned earlier, which automatically ensures a very small direct search cross-section, as well as a small BR(\(h_{\rm SM}\to\) invisible) [82]. However, in this case the upper bound from direct search does put a strong limit of \(|\lambda_{S1}|\lesssim 0.3\).
In Figure 6, we present the allowed parameter space for the Type-X scenario. Here too, the maroon points satisfy the observed relic density (contributing at least 10% of the observed relic) and the orange points satisfy the upper bound from direct search experiments in addition. Here too, the preferred annihilation channels are into a pair of scalars. Notably, in the Type-X 2HDM the non-standard scalar masses are allowed to be low, and therefore annihilation into the second (non-standard) CP-even scalar as well as the charged Higgses also takes place. Therefore, we see that both \(\lambda_{S1}\) and \(\lambda_{S2}\) become constrained by the observed relic in this case. We can see that both \(|\lambda_{S1}|\) and \(|\lambda_{S2}|\lesssim 0.2\,(0.15)\) for \(m_{\rm DM}=400\,(200)\) GeV from relic under-abundance. On the other hand, we see a small region in the middle of the \(\lambda_{S1}\)-\(\lambda_{S2}\) plane disallowed from over-abundance. The corresponding limits are \(|\lambda_{S1}|,|\lambda_{S2}|\gtrsim 0.1\,(0.08)\) for 400 GeV (200 GeV) DM mass. When the DM mass is in the vicinity of the 125 GeV Higgs resonance, the coupling \(|\lambda_{S2}|\) becomes strongly restricted from relic under-abundance, to \(\lesssim 0.01\), whereas the other coupling \(\lambda_{S1}\) is naturally less constrained and can vary upto \(|\lambda_{S1}|\lesssim 0.1\). One will see the reverse behavior in terms of the two couplings if the DM mass is in the vicinity of the resonance of the non-standard scalar.
The region allowed by the direct search experiments shows a very different pattern in Type-X case compared to Type-II 2HDM. Since here the only coupling participating in the DM-nucleon elastic scattering is \(\lambda_{S2}\) (the quarks couple to \(\Phi_{2}\) in this case), the upper bound from direct search experiments only constrains \(\lambda_{S2}\), while keeping \(\lambda_{S1}\) completely free, as can be seen from Figures 6. When \(m_{\rm DM}\) is in the vicinity of Higgs resonance, the smallness of \(\lambda_{S2}\) demanded by relic under-abundance, necessarily ensures small direct search cross-section, similar to Type-II case. However, unlike Type-II, here the direct search bound does not constrain \(\lambda_{S1}\) at all.
One should note that, for our chosen DM mass range of a few hundred GeV, the presence of light non-standard scalars makes the annihilation process stronger in the Type-X model as compared to Type-II. The more restrictive outer contour for Type-X, compared to that in Type-II (see Figures 5 and 6), is a result of demanding at least 10% of the observed relic in both cases. Similarly, the constraint from relic over-abundance affecting the central regions of the plots is more relaxed in Type-X compared to Type-II.
We would like to point out that a scalar singlet with mass of the order of a hundred GeV is still allowed by the relic density and direct search constraints when it has two portal interactions, with the SM-like and the heavy/light Higgs, unlike the usual Higgs-portal scenario where only the SM Higgs portal is present. We also note that, while calculating the direct search constraints for the relic under-abundant situation (at least 10% contribution), we have been conservative in taking the full direct search cross-section, without folding it by the appropriate factor. We have also checked that the parameter space considered here satisfies the
constraints from indirect detection.
### Combining high-scale validity with DM constraints
Having illustrated the regions allowed by perturbative unitarity and vacuum stability upto various high scales, and after studying the regions allowed by the DM constraints, namely the observed relic density and direct search, we now confront the two types of constraints with each other. We have seen in Figures 3 and 4 that high scale validity pushes the DM-portal couplings (\(\lambda_{S1}\) and \(\lambda_{S2}\)) as well as the DM self-coupling (\(\lambda_{S}\)) to smaller values. We have pointed out the upper limits on these couplings previously. We have also pointed out the regions allowed by the DM constraints, namely the observed relic and direct search. Both these constraints affect only the DM-portal couplings \(\lambda_{S1}\) and \(\lambda_{S2}\). In Figures 5 and 6, we saw that very small couplings are disfavored, since they would overclose the universe. Interestingly, this is in tension with the high-scale validity discussed before. In order to examine the contrast between the two, we present a comparison between the two competing constraints in Table 3 for Type-II and Table 4 for Type-X. We have seen in the previous section that the upper and lower limits on \(\lambda_{S1}\) and \(\lambda_{S2}\) vary a little with DM mass. However, it was also evident that the limits from relic over-abundance did not change substantially between 400 GeV and 200 GeV DM mass. Therefore, in order to avoid confusion, we quote only single numbers in the last rows of Tables 3 and 4. Although these numbers can vary slightly with DM mass, our major conclusion remains unchanged.
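The comparison in Tables 3 and 4 amounts to checking, scale by scale, whether the upper limits on \(\lambda_{S1}\) and \(\lambda_{S2}\) from high-scale validity still lie above the lower limits imposed by relic over-abundance. A small bookkeeping sketch is given below, using the approximate Type-X numbers quoted earlier in this section as an example (the actual table entries are not reproduced here).

```python
def highest_compatible_scale(validity_upper, dm_lower):
    """validity_upper: {scale in GeV: max coupling allowed by high-scale
    validity up to that scale}; dm_lower: minimum coupling needed to avoid
    relic over-abundance. Returns the highest scale where both can hold."""
    compatible = [mu for mu, upper in validity_upper.items() if upper >= dm_lower]
    return max(compatible) if compatible else None

# Type-X example, using the approximate numbers quoted in the text:
# lambda_S1, lambda_S2 <~ 2.4, 0.9, 0.4, 0.3 for validity up to the four scales,
# and |lambda_S1|, |lambda_S2| >~ 0.1 to avoid relic over-abundance.
upper_limits = {1.0e4: 2.4, 1.0e8: 0.9, 1.0e16: 0.4, 1.0e19: 0.3}
print(highest_compatible_scale(upper_limits, 0.1))  # -> 1e+19, Planck-scale compatible
```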
It is clear from Table 3 that in the Type-II+DM scenario the parameter space can be valid upto \(\sim 10^{6}\) GeV and not higher, in order to remain consistent with the DM constraints. On the other hand, as we see in Table 4, the DM constraints in the case of Type-X are less restrictive and are fully compatible with the requirements for validity upto very high scales. To be precise, the entire region of the parameter space consistent with the DM constraints is also valid upto the Planck scale in the case of the Type-X 2HDM in association with a real singlet scalar DM. We would like to highlight this as an important contrast between the Type-II and Type-X scenarios in the presence of a real singlet scalar DM.
In the discussion and interpretation of Tables 3 and 4, a few comments are in order. The aforementioned limits are strictly valid away from the resonance regions. For example, in Figures 5(c) and 6(c), we see that even with extremely small couplings \(\lambda_{S1}\) and \(\lambda_{S2}\), the constraints from relic density are satisfied, since the s-channel Higgs-mediated annihilation cross-section is large in this region (\(m_{\rm DM}\approx\frac{m_{h}}{2}\)). Similar relaxation of the relic density constraints occurs when DM-mass is in the vicinity of the CP-even heavy Higgs resonance i.e. \(m_{\rm DM}\approx\frac{m_{H}}{2}\). In the resonance regions, for both Type-II and Type-X models, DM-constraints allowed parameter spaces can be valid upto the Planck scale.
We also note that the relic density and direct search constraints allow the models to have a cut-off as high as \(10^{6}\) GeV or higher, so that the freeze-out of the DM (with \(x\sim 20\)) is not affected for DM masses upto TeV scale, with dominant depletion contribution coming after EWSB.
### Prospects at the LHC
We briefly comment on the prospects of probing, at the LHC, the regions of our models which are valid upto high scales as well as consistent with the DM constraints. In Type-II, we have seen that simultaneous satisfaction of both types of constraints leads to a maximum admissible validity scale of \(10^{6}\) GeV. The DM-portal couplings \(\lambda_{S1}\) and \(\lambda_{S2}\) are in this case \(\gtrsim 1\). In our earlier work [5] we have seen that this region (\(\lambda_{S1},\lambda_{S2}\gtrsim 1\)) can be probed at the high-luminosity LHC (\(3000~{}fb^{-1}\)) with \(\sim 3\sigma\) significance using a cut-based analysis. This happens particularly when a non-standard scalar is produced in vector boson fusion and then decays into a DM pair. It has been shown in that work that the corresponding gluon fusion channel performs rather poorly in this case. Further improvement is possible using machine-learning techniques, as pointed out in [5]. In Type-X, although smaller \(\lambda_{S1},\lambda_{S2}\) couplings are allowed by high-scale validity as well as the DM constraints, as we pointed out in the previous subsection, there too couplings of at least \(\lambda_{S1},\lambda_{S2}\gtrsim 1\) are required to probe the scenario at the high-luminosity LHC with \(3000~{}fb^{-1}\) of data [7]. Therefore, although the Type-X scenario can be valid upto as much as the Planck scale even after imposing the DM constraints, the regions that can be probed at the LHC are restricted to validity limits around \(10^{8}\) GeV.
## 5 Summary and Conclusions
We have explored the high-scale validity in terms of perturbativity, unitarity and vacuum stability, of two-Higgs doublet models with a real singlet scalar DM candidate. Such an
exploration can be expected to yield useful guidelines on scenarios with an extended Higgs sector as the DM portal. In this context, we have considered the Type-II 2HDM, which derives its motivation from supersymmetry, and the Type-X 2HDM, which allows for a low mass pseudo-scalar and provides at least a partial solution to the observed \((g_{\mu}-2)\) anomaly. After obtaining the one- and two-loop RG running equations with appropriate modifications/extensions in SARAH and 2HDME, we have identified the differences between the two aforementioned scenarios in terms of high-scale behavior.
We applied all the experimental constraints to both models. The B-physics observables as well as direct collider searches push the lower limit for the non-standard scalars to much higher values in Type-II, as compared to Type-X. The presence of the low mass pseudo-scalar in the Type-X 2HDM not only contributes to \(g_{\mu}-2\), but also allows for much smaller quartic couplings at the electro-weak scale, as compared to the Type-II scenario. This in turn, after RG-running, leads to larger allowed regions of parameter space upto various high scales in the Type-X case. We have also compared the high-scale validity of the 2HDM+DM scenario with the normal 2HDM cases, which were analysed in [17]. We see that the high-scale validity is generally worsened in the presence of a real-singlet DM.
We further study the impact of the high-scale validity on the DM sector. The existing constraints, namely the observed relic density and the upper bound from direct search experiments, put limits on the portal couplings between the DM and the scalar sector. The high scale validity of the model crucially depends on the DM constraints, as the perturbative unitarity of the portal couplings often governs the cut-off scale. In this work, we have identified the regions of parameter space of the aforementioned models that are allowed by the DM constraints and are also valid upto various high scales. We find that the Type-II 2HDM+real singlet DM scenario can only be valid upto \(\sim 10^{6}\) GeV from the requirement of perturbative unitarity and vacuum stability, while obeying all the DM constraints at the same time. This implies that such a scenario will require the intervention of new physics around \(\sim 10^{6}\) GeV, in order to be viable from the standpoint of particle phenomenology as well as DM-related observations. In Type-X 2HDM + real singlet DM, on the other hand, the restrictions are much more relaxed because of the less stringent phenomenological constraints on the parameter space. It can be valid upto the Planck scale while at the same time being allowed by all the existing DM search results. Finally, we comment on the discovery prospects at the high-luminosity LHC of the regions of the parameter space in these models that are valid upto high scales and are also allowed by the DM constraints. We find that Type-II 2HDM+real singlet DM, which is valid upto \(\sim 10^{6}\) GeV, can be probed at the high-luminosity LHC. On the other hand, although its Type-X counterpart can be valid upto the Planck scale, only the portion of its parameter space valid upto \(\sim 10^{8}\) GeV can be probed at the high-luminosity LHC.
## 6 Acknowledgements
AD and JL would like to thank Indian Institute of Science Education and Research, Kolkata, where part of the work was done.
## Appendix A Two-loop RGE's
Here we list the two-loop RGE's of the gauge, Yukawa and quartic couplings for our scenario.
### Type-II
\[(16\pi^{2}\beta_{g_{1}})_{2-loop} =(16\pi^{2}\beta_{g_{1}})_{2HDM}^{2-loop},\] \[(16\pi^{2}\beta_{g_{2}})_{2-loop} =(16\pi^{2}\beta_{g_{2}})_{2HDM}^{2-loop},\] \[(16\pi^{2}\beta_{g_{3}})_{2-loop} =(16\pi^{2}\beta_{g_{3}})_{2HDM}^{2-loop}. \tag{100}\]
The \((16\pi^{2}\beta_{g_{i}})_{2HDM}^{2-loop}\), for \(i=1,2,3\) represent the RGE's of gauge couplings for general 2HDM's at two-loop level and can be found in [70].
\[(16\pi^{2}\beta_{Y_{t}})_{2-loop} =(16\pi^{2}\beta_{Y_{t}})_{2HDM}^{2-loop}+\frac{\lambda_{S2}^{2} Y_{t}}{16\pi^{2}},\] \[(16\pi^{2}\beta_{Y_{b}})_{2-loop} =(16\pi^{2}\beta_{Y_{b}})_{2HDM}^{2-loop}+\frac{\lambda_{S1}^{2 }Y_{b}}{16\pi^{2}},\] \[(16\pi^{2}\beta_{Y_{\tau}})_{2-loop} =(16\pi^{2}\beta_{Y_{\tau}})_{2HDM}^{2-loop}+\frac{\lambda_{S1}^{ 2}Y_{\tau}}{16\pi^{2}}. \tag{101}\]
Here too, \((16\pi^{2}\beta_{Y_{j}})_{2HDM}^{2-loop}\), where \(j\) can be \(t,b\) or \(\tau\), are the two-loop RGE's for general 2HDM's (which can differ between types). Their structure can be found in [70]. Next are the RGE's of the quartic couplings for our model.
\[(16\pi^{2}\beta_{\lambda_{1}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{1}})_{2HDM}^{2-loop}+4\lambda_{S1}^{2 }+\frac{(-32\lambda_{S1}^{3}-20\lambda_{1}\lambda_{S1}^{2})}{16\pi^{2}},\] \[(16\pi^{2}\beta_{\lambda_{2}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{2}})_{2HDM}^{2-loop}+4\lambda_{S2}^{ 2}+\frac{(-32\lambda_{S2}^{3}-20\lambda_{2}\lambda_{S2}^{2})}{16\pi^{2}},\] \[(16\pi^{2}\beta_{\lambda_{3}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{3}})_{2HDM}^{2-loop}+4\lambda_{S1} \lambda_{S2}+\frac{(-16(\lambda_{S1}^{2}\lambda_{S2}+\lambda_{S2}^{2}\lambda_ {S1})-2\lambda_{3}(\lambda_{S1}^{2}+\lambda_{S2}^{2}+8\lambda_{S1}\lambda_{S2 }))}{16\pi^{2}},\] \[(16\pi^{2}\beta_{\lambda_{4}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{4}})_{2HDM}^{2-loop}+\frac{(-2\lambda _{4}(\lambda_{S1}^{2}+\lambda_{S2}^{2}+8\lambda_{S1}\lambda_{S2}))}{16\pi^{2}},\] \[(16\pi^{2}\beta_{\lambda_{5}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{5}})_{2HDM}^{2-loop}+\frac{(-2\lambda _{5}(\lambda_{S1}^{2}+\lambda_{S2}^{2}+8\lambda_{S1}\lambda_{S2}))}{16\pi^{2}}. \tag{102}\]
\[(16\pi^{2}\beta_{\lambda_{S}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{S}})^{1-loop}+\frac{288}{3}g_{1}^{2}( \lambda_{S1}^{2}+\lambda_{S2}^{2})+288g_{2}^{2}(\lambda_{S1}^{2}+\lambda_{S2}^{ 2})-384(\lambda_{S1}^{3}+\lambda_{S2}^{3})\] \[-80\lambda_{S}(\lambda_{S1}^{2}+\lambda_{S2}^{2})-\frac{17}{3} \lambda_{S}^{3}-288\lambda_{S1}^{2}Y_{b}^{2}-96\lambda_{S1}^{2}Y_{\tau}^{2}-288 \lambda_{S2}^{2}Y_{t}^{2},\] \[(16\pi^{2}\beta_{\lambda_{S1}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{S1}})^{1-loop}+\frac{5}{2}g_{1}^{4} \lambda_{S2}+\frac{15}{2}g_{2}^{4}\lambda_{S2}+\frac{1737}{144}g_{1}^{4} \lambda_{S1}+\frac{15}{8}g_{1}^{2}g_{2}^{2}\lambda_{S1}-\frac{123}{16}g_{2}^{ 4}\lambda_{S1}\] \[-8\lambda_{S2}^{2}\lambda_{S1}+(12g_{1}^{2}+36g_{2}^{2})\lambda_ {2}\lambda_{S1}-15\lambda_{1}^{2}\lambda_{S1}+(2g_{1}^{2}+6g_{2}^{2})\lambda_ {S1}^{2}-72\lambda_{1}\lambda_{S2}^{2}\] \[+(8g_{1}^{2}+24g_{2}^{2})\lambda_{3}\lambda_{S2}-42\lambda_{S1}^{ 3}-16\lambda_{3}\lambda_{S2}^{2}-32\lambda_{3}\lambda_{S1}\lambda_{S2}-(8 \lambda_{S2}+2\lambda_{S1})\lambda_{3}^{2}\] \[+(4g_{1}^{2}+12g_{2}^{2})\lambda_{4}\lambda_{S2}-16\lambda_{4} \lambda_{S1}\lambda_{S2}-(8\lambda_{S2}+2\lambda_{S1})\lambda_{3}\lambda_{4}-( 8\lambda_{S2}+2\lambda_{S1})\lambda_{4}^{2}-12\lambda_{S1}^{2}\lambda_{5}\] \[-(12\lambda_{S2}-\frac{13}{6}\lambda_{S1})\lambda_{5}^{2}-12(2 \lambda_{3}+\lambda_{4})\lambda_{S1}Y_{t}^{2}-12\lambda_{1}\lambda_{S1}Y_{\tau }^{2}-\frac{9}{2}\lambda_{S1}Y_{b}^{2}Y_{t}^{2}\] \[-(\frac{9}{2}Y_{\tau}^{4}+\frac{27}{2}Y_{b}^{4})\lambda_{S1}-8 \lambda_{S1}^{2}Y_{\tau}^{2}+(\frac{25}{4}g_{1}^{2}+\frac{15}{4}g_{2}^{2}) \lambda_{S1}Y_{\tau}^{2}\] \[+(\frac{25}{12}g_{1}^{2}+\frac{45}{4}g_{2}^{2}+40g_{3}^{2}-24 \lambda_{S1}-36\lambda_{1})\lambda_{S1}Y_{b}^{2},\] \[(16\pi^{2}\beta_{\lambda_{S2}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{S2}})^{1-loop}+\frac{5}{2}g_{1}^{4} \lambda_{S1}+\frac{15}{2}g_{2}^{4}\lambda_{S1}+\frac{1737}{144}g_{1}^{4} \lambda_{S2}+\frac{15}{8}g_{1}^{2}g_{2}^{2}\lambda_{S2}-\frac{123}{16}g_{2}^{ 4}\lambda_{S2}\] \[-8\lambda_{S1}^{2}\lambda_{S2}+(12g_{1}^{2}+36g_{2}^{2})\lambda_ {2}\lambda_{S2}-15\lambda_{2}^{2}\lambda_{S2}+(2g_{1}^{2}+6g_{2}^{2})\lambda_ {S2}^{2}-72\lambda_{2}\lambda_{S2}^{2}\] \[+(8g_{1}^{2}+24g_{2}^{2})\lambda_{3}\lambda_{S1}-42\lambda_{S2}^{ 3}-16\lambda_{3}\lambda_{S1}^{2}-32\lambda_{3}\lambda_{S1}\lambda_{S2}-(8 \lambda_{S1}+2\lambda_{S2})\lambda_{3}^{2}\] \[+(4g_{1}^{2}+12g_{2}^{2})\lambda_{4}\lambda_{S1}-16\lambda_{4} \lambda_{S1}\lambda_{S2}-(8\lambda_{S1}+2\lambda_{S2})\lambda_{3}\lambda_{4}-( 8\lambda_{S1}+2\lambda_{S2})\lambda_{4}^{2}-12\lambda_{S2}^{2}\lambda_{5}\] \[-(12\lambda_{S1}-\frac{13}{6}\lambda_{S2})\lambda_{5}^{2}-12(2 \lambda_{3}+\lambda_{4})\lambda_{S1}Y_{b}^{2}-(8\lambda_{3}+4\lambda_{4}) \lambda_{S1}Y_{\tau}^{2}-\frac{9}{2}\lambda_{S2}Y_{b}^{2}Y_{t}^{2}-\frac{27}{2 }\lambda_{S2}Y_{t}^{4}\] \[+(\frac{85}{12}g_{1}^{2}+\frac{45}{4}g_{2}^{2}+40g_{3}^{2}-24 \lambda_{S2}-36\lambda_{2})\lambda_{S2}Y_{t}^{2}.\] (A.4)
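To illustrate how the appendix expressions enter a numerical implementation, the sketch below encodes the singlet-induced additions to the two-loop β-functions of \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) from the Type-II expressions listed above; the pure-2HDM two-loop pieces of [70] are represented by a value assumed to be supplied externally (e.g. by 2HDME).

```python
import numpy as np

LOOP = 16 * np.pi**2  # common loop factor

def beta_l1_addition(l1, lS1):
    """Singlet contribution to 16*pi^2 * beta_{lambda_1} at two loops."""
    return 4 * lS1**2 + (-32 * lS1**3 - 20 * l1 * lS1**2) / LOOP

def beta_l2_addition(l2, lS2):
    """Singlet contribution to 16*pi^2 * beta_{lambda_2} at two loops."""
    return 4 * lS2**2 + (-32 * lS2**3 - 20 * l2 * lS2**2) / LOOP

def beta_l3_addition(l3, lS1, lS2):
    """Singlet contribution to 16*pi^2 * beta_{lambda_3} at two loops."""
    return (4 * lS1 * lS2
            + (-16 * (lS1**2 * lS2 + lS2**2 * lS1)
               - 2 * l3 * (lS1**2 + lS2**2 + 8 * lS1 * lS2)) / LOOP)

def beta_l1_total(l1, lS1, beta_l1_2hdm):
    """Full 16*pi^2 * beta_{lambda_1}: the pure-2HDM two-loop piece from [70]
    (assumed available as beta_l1_2hdm) plus the singlet addition."""
    return beta_l1_2hdm + beta_l1_addition(l1, lS1)
```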
### Type-X
\[(16\pi^{2}\beta_{g_{1}})_{2-loop} =(16\pi^{2}\beta_{g_{1}})_{2HDM}^{2-loop},\] \[(16\pi^{2}\beta_{g_{2}})_{2-loop} =(16\pi^{2}\beta_{g_{2}})_{2HDM}^{2-loop},\] \[(16\pi^{2}\beta_{g_{3}})_{2-loop} =(16\pi^{2}\beta_{g_{3}})_{2HDM}^{2-loop}.\] (A.5)
The \((16\pi^{2}\beta_{g_{i}})_{2HDM}^{2-loop}\), for \(i=1,2,3\) represent the RGE's of gauge couplings for general 2HDM's at two-loop level and can be found in [70].
\[(16\pi^{2}\beta_{Y_{t}})_{2-loop} =(16\pi^{2}\beta_{Y_{t}})_{2HDM}^{2-loop}+\frac{\lambda_{S2}^{2}Y_ {t}}{16\pi^{2}},\] \[(16\pi^{2}\beta_{Y_{b}})_{2-loop} =(16\pi^{2}\beta_{Y_{b}})_{2HDM}^{2-loop}+\frac{\lambda_{S2}^{2}Y_ {b}}{16\pi^{2}},\] \[(16\pi^{2}\beta_{Y_{\tau}})_{2-loop} =(16\pi^{2}\beta_{Y_{\tau}})_{2HDM}^{2-loop}+\frac{\lambda_{S1}^{ 2}Y_{\tau}}{16\pi^{2}}.\] (A.6)
Here too, \((16\pi^{2}\beta_{Y_{j}})_{2HDM}^{2-loop}\), where \(j\) can be \(t,b\) or \(\tau\), are the two-loop RGE's for general 2HDM's (which can differ between types). Their structure can be found in [70]. Next are the RGE's of the quartic couplings for our model.
\[(16\pi^{2}\beta_{\lambda_{1}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{1}})_{2HDM}^{2-loop}+4\lambda_{S1}^{2}+ \frac{(-32\lambda_{S1}^{3}-20\lambda_{1}\lambda_{S1}^{2})}{16\pi^{2}},\] \[(16\pi^{2}\beta_{\lambda_{2}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{2}})_{2HDM}^{2-loop}+4\lambda_{S2}^{2}+ \frac{(-32\lambda_{S2}^{3}-20\lambda_{2}\lambda_{S2}^{2})}{16\pi^{2}},\] \[(16\pi^{2}\beta_{\lambda_{3}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{3}})_{2HDM}^{2-loop}+4\lambda_{S1} \lambda_{S2}+\frac{(-16(\lambda_{S1}^{2}\lambda_{S2}+\lambda_{S2}^{2}\lambda_{ S1})-2\lambda_{3}(\lambda_{S1}^{2}+\lambda_{S2}^{2}+8\lambda_{S1}\lambda_{S2}))}{16 \pi^{2}},\] \[(16\pi^{2}\beta_{\lambda_{4}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{4}})_{2HDM}^{2-loop}+\frac{(-2\lambda _{4}(\lambda_{S1}^{2}+\lambda_{S2}^{2}+8\lambda_{S1}\lambda_{S2}))}{16\pi^{2}},\] \[(16\pi^{2}\beta_{\lambda_{5}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{5}})_{2HDM}^{2-loop}+\frac{(-2\lambda _{5}(\lambda_{S1}^{2}+\lambda_{S2}^{2}+8\lambda_{S1}\lambda_{S2}))}{16\pi^{2}}. \tag{111}\]
\[(16\pi^{2}\beta_{\lambda_{S}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{S}})^{1-loop}+\frac{288}{3}g_{1}^{2}( \lambda_{S1}^{2}+\lambda_{S2}^{2})+288g_{2}^{2}(\lambda_{S1}^{2}+\lambda_{S2} ^{2})-384(\lambda_{S1}^{3}+\lambda_{S2}^{3})\] \[-80\lambda_{S}(\lambda_{S1}^{2}+\lambda_{S2}^{2})-\frac{17}{3} \lambda_{S}^{3}-288\lambda_{S2}^{2}Y_{b}^{2}-96\lambda_{S1}^{2}Y_{\tau}^{2}-2 88\lambda_{S2}^{2}Y_{t}^{2},\] \[(16\pi^{2}\beta_{\lambda_{S1}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{S1}})^{1-loop}+\frac{5}{2}g_{1}^{4} \lambda_{S2}+\frac{15}{2}g_{2}^{4}\lambda_{S2}+\frac{1737}{144}g_{1}^{4} \lambda_{S1}+\frac{15}{8}g_{1}^{2}g_{2}^{2}\lambda_{S1}-\frac{123}{16}g_{2}^{ 4}\lambda_{S1}\] \[-8\lambda_{S2}^{2}\lambda_{S1}+(12g_{1}^{2}+36g_{2}^{2})\lambda_{ 2}\lambda_{S1}-15\lambda_{1}^{2}\lambda_{S1}+(2g_{1}^{2}+6g_{2}^{2})\lambda_{ S1}^{2}-72\lambda_{1}\lambda_{S2}^{2}\] \[+(8g_{1}^{2}+24g_{2}^{2})\lambda_{3}\lambda_{S2}-42\lambda_{S1}^{ 3}-16\lambda_{3}\lambda_{S2}^{2}-32\lambda_{3}\lambda_{S1}\lambda_{S2}-(8 \lambda_{S2}+2\lambda_{S1})\lambda_{3}^{2}\] \[+(4g_{1}^{2}+12g_{2}^{2})\lambda_{4}\lambda_{S2}-16\lambda_{4} \lambda_{S1}\lambda_{S2}-(8\lambda_{S2}+2\lambda_{S1})\lambda_{3}\lambda_{4}- (8\lambda_{S2}+2\lambda_{S1})\lambda_{4}^{2}-12\lambda_{S1}^{2}\lambda_{S}\] \[-(12\lambda_{S2}-\frac{13}{6}\lambda_{S1})\lambda_{5}^{2}-12(2 \lambda_{3}+\lambda_{4})\lambda_{S2}(Y_{t}^{2}+Y_{b}^{2})-12\lambda_{1}\lambda_ {S1}Y_{\tau}^{2}-\frac{9}{2}Y_{\tau}^{4}\lambda_{S1}-8\lambda_{S1}^{2}Y_{\tau}^ {2}\] \[+(\frac{25}{4}g_{1}^{2}+\frac{15}{4}g_{2}^{2})\lambda_{S1}Y_{\tau} ^{2},\] \[(16\pi^{2}\beta_{\lambda_{S2}})_{2-loop} =(16\pi^{2}\beta_{\lambda_{S2}})^{1-loop}+\frac{5}{2}g_{1}^{4} \lambda_{S1}+\frac{15}{2}g_{2}^{4}\lambda_{S1}+\frac{1737}{144}g_{1}^{4} \lambda_{S2}+\frac{15}{8}g_{1}^{2}g_{2}^{2}\lambda_{S2}-\frac{123}{16}g_{2}^{4 }\lambda_{S2}\] \[-8\lambda_{S1}^{2}\lambda_{S2}+(12g_{1}^{2}+36g_{2}^{2})\lambda_{ 2}\lambda_{S2}-15\lambda_{2}^{2}\lambda_{S2}+(2g_{1}^{2}+6g_{2}^{2})\lambda_{ S2}^{2}-72\lambda_{2}\lambda_{S2}^{2}\] \[+(8g_{1}^{2}+24g_{2}^{2})\lambda_{3}\lambda_{S1}-42\lambda_{S2}^{ 3}-16\lambda_{3}\lambda_{S1}^{2}-32\lambda_{3}\lambda_{S1}\lambda_{S2}-(8 \lambda_{S1}+2\lambda_{S2})\lambda_{3}^{2}\] \[+(4g_{1}^{2}+12g_{2}^{2})\lambda_{4}\lambda_{S1}-16\lambda_{4} \lambda_{S1}\lambda_{S2}-(8\lambda_{S1}+2\lambda_{S2})\lambda_{3}\lambda_{4}-(8 \lambda_{S1}+2\lambda_{S2})\lambda_{4}^{2}-12\lambda_{S2}^{2}\lambda_{S}\] \[-(12\lambda_{S1}-\frac{13}{6}\lambda_{S2})\lambda_{5}^{2}-4(2 \lambda_{3}+\lambda_{4})\lambda_{S1}Y_{\tau}^{2}-frac{1}{2}\lambda_{S1}Y_{ \tau}^{2}\] \[-(36\lambda_{2}+24\lambda_{S2})\lambda_{S2}Y_{t}^{2}+(\frac{25}{12} g_{1}^{2}+\frac{45}{4}g_{2}^{2}+40g_{3}^{2}-24\lambda_{S2}-36\lambda_{2})\lambda_{S2}Y_{b}^{2}\] \[+(\frac{85}{12}g_{1}^{2}+\frac{45}{4}g_{2}^{2}+40g_{3}^{2}) \lambda_{S2}Y_{t}^{2}. \tag{112}\]
In the case of the usual quartic couplings present in general 2HDM's, namely \(\lambda_{1,..5}\), we use the term \((16\pi^{2}\beta_{\lambda_{k}})_{2HDM}^{2-loop}\) (\(k=1,..5\)) to represent the two-loop RGE's of the respective couplings in the general 2HDM for different types (see [70]). On the other hand, for the other three quartic couplings, namely \(\lambda_{S},\lambda_{S1}\) and \(\lambda_{S2}\), the terms \((16\pi^{2}\beta_{\lambda_{l}})^{1-loop}\) (\(l=S,S1\) or \(S2\)) represent the one-loop RGE's of the respective couplings in our model, for Type-II and Type-X 2HDM respectively.
|
2306.04965
|
Machine Learning in Digital Forensics: A Systematic Literature Review
|
Development and exploitation of technology have led to the further expansion
and complexity of digital crimes. On the other hand, the growing volume of data
and, subsequently, evidence is a severe challenge in digital forensics. In
recent years, the application of machine learning techniques to identify and
analyze evidence has been on the rise in different digital forensics domains.
This paper offers a systematic literature review of the research published in
major academic databases from January 2010 to December 2021 on the application
of machine learning in digital forensics, which was not presented yet to the
best of our knowledge as comprehensive as this. The review also identifies the
domains of digital forensics and machine learning methods that have received
the most attention in the previous papers and finally introduces remaining
research gaps. Our findings demonstrate that image forensics has obtained the
greatest benefit from using machine learning methods, compared to other
forensic domains. Moreover, CNN-based models are the most important machine
learning methods that are increasingly being used in digital forensics. We
present a comprehensive mind map to provide a proper perspective for valuable
analytical results. Furthermore, visual analysis has been conducted based on
the keywords of the papers, providing different thematic relevance topics. This
research will give digital forensics investigators, machine learning
developers, security researchers, and enthusiasts a broad view of the
application of machine learning in digital forensics.
|
Tahereh Nayerifard, Haleh Amintoosi, Abbas Ghaemi Bafghi, Ali Dehghantanha
|
2023-06-08T06:47:25Z
|
http://arxiv.org/abs/2306.04965v1
|
# Machine Learning in Digital Forensics: A Systematic Literature Review
###### Abstract
Development and exploitation of technology have led to the further expansion and complexity of digital crimes. On the other hand, the growing volume of data and, subsequently, evidence is a severe challenge in digital forensics. In recent years, the application of machine learning techniques to identify and analyze evidence has been on the rise in different digital forensics domains. This paper offers a systematic literature review of the research published in major academic databases from January 2010 to December 2021 on the application of machine learning in digital forensics, which was not presented yet to the best of our knowledge as comprehensive as this. The review also identifies the domains of digital forensics and machine learning methods that have received the most attention in the previous papers and finally introduces remaining research gaps. Our findings demonstrate that image forensics has obtained the greatest benefit from using machine learning methods, compared to other forensic domains. Moreover, CNN-based models are the most important machine learning methods that are increasingly being used in digital forensics. We present a comprehensive mind map to provide a proper perspective for valuable analytical results. Furthermore, visual analysis has been conducted based on the keywords of the papers, providing different thematic relevance topics. This research will give digital forensics
investigators, machine learning developers, security researchers, and enthusiasts a broad view of the application of machine learning in digital forensics.
keywords: Digital forensic, Machine learning, Convolutional neural networks, Image forensics, Deep learning, SLR
## 1 Introduction
With the expanding use of digital devices and their role in human life and increasing cybercrime, digital forensics (DF) has become a significant area of research. However, there are substantial challenges in digital forensics. One of these challenges is the growing volume of data and its complexity, making the investigation process time-consuming [1; 2]. In many cases, analysis requires the classification of big data into sets that are not easy to define [3]. Another problem facing digital forensics is the diversity of data. For example, in the Internet of Things (IoT) environments, there are billions of sensors collecting various types of data, posing severe challenges in real-time cybercrime investigation cases [4]. Another essential requirement in digital forensics is accuracy and reliability in the investigation process and its results. Three factors of intelligent computing, speeding up, and reducing time, bring more reliable and accurate results in some cases [5; 6].
In recent years, machine learning (ML) techniques in various fields such as image processing, text analysis, voice recognition, and optical character recognition have kept expanding and advancing [7]. In digital forensics, various ML techniques can gather knowledge from large volumes of digital evidence by matching conceptual models to enable data mining and knowledge discovery [1], and help investigators analyze high volumes of data [8]. These methods are employed to find anomalies and identify patterns in digital forensic investigations. The automation of the investigation process in digital forensics can bring valuable aid to researchers, speed up the process, and increase the processing capacity [9]. Deep learning (DL) models are used in many DF domains, such as adversarial image forensics [10], image tamper detection [11], and computer forensics [12]. These models can also be a viable solution for handling diverse data in large volumes with acceptable accuracy, e.g., in network traffic analysis [13].
Given the importance of using ML techniques to address the digital forensics challenges and to enhance its process, in this research, a community-driven initiation has been provided to better study digital forensics and ML
techniques. Identifying the ML techniques of interest in the digital forensics science community can help researchers use these techniques better and more effectively. Toward this goal, the previous investigations on digital forensics and ML have been reviewed, and new directions have been developed.
### Prior research
To the best of our knowledge, no peer-reviewed systematic literature review has been conducted, discussing the application of ML to the problem of digital forensics explicitly. Besides, the related works have not been as comprehensive as this research thus far. The works introduced in this section have examined the relationship between DF and ML from a specific and limited view. Some papers have focused on specific techniques or applications of ML in digital forensics. Quick and Raymond Choo [1] studied some papers since 2004 about the problem of large amounts of data in digital forensics and offered solutions, including artificial intelligence (AI) and ML-based solutions. Pratama et al. [5] conducted a study on digital forensics trends from the 1990s to 2014. They also had a brief review of the role of computational intelligence and its effects on digital forensics.
Faye Rona Mitchell [3] introduced AI techniques in applying pattern recognition to be used in cybersecurity and digital forensics. Some techniques such as knowledge representation, pattern recognition techniques (ML and knowledge discovery), exploratory data analysis, and knowledge refinement were briefly studied in this research. A. M. Qadir and A. Varol [8] introduced the application of using ML algorithms and techniques in digital forensics to analyze large amounts of diverse datasets to find criminal behaviours. Adam and Varol [2] studied the literature of papers between 2005 and 2019 that used classification and clustering in the process of a digital forensics investigation. They also proposed a framework for the intelligence of digital forensics. These papers only examine a specific role and application of ML in digital forensics.
Some articles have only been conducted in a specific domain of digital forensics. Kebande et al. [4] highlighted the importance of supervised ML methods in live digital forensics. They presented a framework for Emergent Configurations in IoT Environments using machine learning facilities. N. Koroniotis et al. [13] conducted a comprehensive discussion and explored the challenges of botnets and current solutions. They also studied the application of DL in network forensics and intrusion detection and its role in handling diverse data in IoT forensics as an appropriate solution. In video forensics,
Abdul Rehman Javed et al. [6] conducted a survey considering challenges and presenting a taxonomy of prominent video forensics products available for investigation. Al-Khateeb and Epiphaniou [14] discussed the role of ML classification techniques in an incident response methodology to improve the detection of unwanted patterns, for example, in text messages, cyberstalking, and online grooming. In 2018, Karampidis et al. [15] conducted a review of steganalysis techniques for image digital forensics. Krivchenkov et al. [16] investigated the state-of-the-art intelligent methods used in IoT between 2009 and 2018 and their problems in three categories: rule extraction, anomaly detection, and intrusion classification. In digital camera source identification, Jaroslaw Bernacki [17] scrutinized the available methods, including ML and DL models. Their results showed that the use of DL models has grown, and that CNN-based classifiers provide high detection accuracy.
In a recent study conducted in 2021, Manjunatha and Patil presented a review of DL-based passive image forensics analysis methods for tampering detection [11]. In a survey [18], Cifuentes et al. studied the use of DL methods to automate the detection of sexually explicit videos. In terms of obtaining digital evidence, Zaytsev et al. [19] revealed that AI enables a multifaceted, complex, and objective approach to investigating crime situations and notably enhances the efficiency of proof. Jarrett and Raymond Choo [20] examined the relationship between multimedia science in three areas of Cyber Threat Intelligence, AI, and Cybercrime. They inspected the effect of AI and automation in DF on efficiency, accuracy, and cost reduction and explored the main automation challenges of digital forensics. To apply ML techniques in image manipulation detection, a comprehensive survey conducted by Norzoi et al. [10] examined the available techniques and pointed out their vulnerabilities to adversarial attacks. In other works, the effectiveness of using ML techniques has been shown for cybersecurity intrusion detection [21] and file type identification [12]. Shalaginov et al. [22] discussed ML techniques in static malware analysis, which can help researchers to use machine learning in malware forensics. In a systematization of knowledge (SoK), Xiaoyu Du et al. [9] studied the state-of-the-art of AI-based tools and approaches in digital forensics. In this SoK, the application of AI in Data Discovery, Device Triage, Network Traffic Analysis, Handling Encrypted Data, Computer Vision, Forgery Detection, and Fingerprinting was examined, and current challenges and future directions in each field were discussed. However, this research was not conducted as a systematic literature review.
### Research goals
This research aims to review and identify the applications of ML in digital forensics domains. In particular, this study focuses on answering the five research questions given in Table 1.
### Contributions and layout
By considering all ML methods and DF domains, this SLR complements the existing research and presents the following contributions for those interested in digital forensics and ML who wish to advance their work:
* We identify 608 primary studies related to ML and digital forensics up to December 2021. The results can give an excellent view to researchers in this specific field.
* We present a meta-analysis of the state of play regarding ML methods employed to improve the digital forensics investigation process and address DF challenges.
\begin{table}
\begin{tabular}{p{142.3pt} p{142.3pt}} \hline \hline Research Questions (RQ) & Discussion \\ \hline
**RQ1**: How are publications related to ML applications in DF spread throughout the years? & To identify the trend and the progress of the subject in each year. \\
**RQ2**: How is the research activity in the applications of ML in DF dispersed geographically? & To identify leading countries in this field. \\
**RQ3**: What are the most popular publication venues and databases in this domain? & To identify the leading publishers and specify the extent to which conferences and journals pay attention to this subject. \\
**RQ4**: What are the most commonly used related keywords in the research, and how are they related? & Identifying the essential keywords used in the research can be helpful to categorize the trending topics and the fields receiving less attention. \\
**RQ5**: What are ML methods used in digital forensics, and in what fields? & Digital forensics use ML in various fields. Identifying the areas in which the ML techniques are utilized can help understand the ML’s role in digital forensics. Because ML has a wide range of techniques, it is essential to identify the most used ones in digital forensics. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Research questions.
* We Provide visual analysis based on all 608 related articles according to the authors' selected keywords, showing thematic relevance in the last decade's research.
* We conduct a comprehensive mind map of the relationship between ML methods and DF domains. Other researchers can use this mind map to further their work.
* We make representations and produce guidelines to help further work in this area.
The structure of this research is as follows: Section 2 describes the methods in which the primary studies are systematically selected for analysis. Section 3 discusses the findings related to the research questions presented earlier. Section 4 discusses the future research directions of ML application in digital forensics. Section 5 concludes the research.
## 2 Research methodology
To answer the RQs mentioned above, the SLR was conducted based on the guidance presented by Kitchenham and Charters [23]. The review was completed iteratively, moving through the planning, conducting, and reporting phases of the SLR.
### Selection of primary studies
Primary studies were gathered by searching for keywords in each publication database's search tool or search engine. The keywords were chosen to aid in discovering research findings that address the research questions. We used the AND and OR Boolean operators. The search terms were as follows:
_("machine learning" OR "artificial intelligence" OR "classification") AND "digital forensic"_
_("neural network" OR "convolutional neural network" OR "deep neural network" OR "deep Learning") AND "digital forensic"_
_("support vector machine" OR bayesian OR regression OR "decision tree" OR "k-nearest neighbor" OR supervised OR "k-means" OR reinforcement OR "Markov" OR "random forest") AND "digital forensic"_
The platforms searched were:
* ACM Digital Library
* IEEE Xplore Digital Library
* ScienceDirect
* SpringerLink
Depending on the search platforms, the searches were done against the title, keywords, or abstract. The final search was performed on December 10, 2021, and all studies published till that date were reviewed. The inclusion/exclusion criteria described in Section 2.2 were used to filter the results. The filtered results were then fed into Wohlin's [24] snowballing process. Iterations of forward and backward snowballing were done until no more papers meeting the inclusion criteria were found.
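The snowballing loop can be summarised with the sketch below; `get_references`, `get_citations`, and `meets_criteria` are hypothetical helpers standing in for manual screening and the databases' citation indices, so this is an illustration of the process rather than the authors' implementation.

```python
# Minimal sketch of Wohlin-style snowballing, assuming hypothetical helpers:
# get_references(p) -> papers cited by p (backward snowballing)
# get_citations(p)  -> papers citing p (forward snowballing)
# meets_criteria(p) -> inclusion/exclusion check from Section 2.2

def snowball(seed_papers, get_references, get_citations, meets_criteria):
    included = set(seed_papers)
    frontier = set(seed_papers)
    while frontier:                      # iterate until no new papers qualify
        candidates = set()
        for paper in frontier:
            candidates |= set(get_references(paper))   # backward snowballing
            candidates |= set(get_citations(paper))    # forward snowballing
        new = {p for p in candidates if p not in included and meets_criteria(p)}
        included |= new
        frontier = new                   # next iteration starts from new papers
    return included
```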
### Inclusion and exclusion criteria
Studies can be research papers or case studies, new technical ML applications, or commentaries on improving existing digital forensics processes through the integration of ML. They must be written in English and peer-reviewed. In cases where multiple versions of a study are found, the most recent version is considered. The critical inclusion and exclusion criteria are shown in Table 2.
\begin{table}
\begin{tabular}{l l} \hline \hline Criteria for Inclusion & Criteria for Exclusion \\ \hline The paper must contain information related to the application of ML in digital forensics. & Grey literature such as blogs and government documents. \\ The paper must offer empirical data. & \\ The paper must be peer-reviewed research published in a conference or journal. & \\ Published between 2010 and 2021. & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Inclusion and exclusion criteria for the primary studies.
### Selection results
The initial keyword searches identified 6781 studies, which were reduced to 4521 once duplicates were removed. The remaining papers were then reviewed against the inclusion/exclusion criteria, leaving 605 papers. The 605 publications were read in full, and the inclusion/exclusion criteria were reapplied, resulting in 533 remaining studies. Snowballing was done in iterations until no more papers fulfilled the requirements, resulting in 608 papers being included in the SLR. Fig. 1 displays the number of studies retained at each stage as well as the rate of attrition of articles.
### Data extraction and analysis
The data belonging to studies that passed the quality assessment was then extracted, categorised, and saved as a spreadsheet to check for completeness and the accuracy of recording. The categories given to the data were as follows:
* Context data: Information about the study goal.
* Qualitative data: Findings and conclusions provided by the authors.
Figure 1: Attrition of papers through processing.
* Quantitative data: data observed through experimentation and research applied to the study.
To meet the goals of research questions, the data contained within the qualitative and quantitative findings categories were compiled. In addition, a meta-analysis of those studies subjected to the final data extraction process was conducted.
## 3 Discussion
Each primary study was fully read, and relevant qualitative and quantitative data was extracted and provided as a mind map in answer to RQ5. All the primary studies focus on how ML deals with a particular problem in digital forensics. Preliminary keyword research shows that many papers have been published on ML techniques in digital forensics since 2006. One of the main problems in digital forensics is the large amount of evidence, which makes collection and analysis problematic for an investigator. This high volume of data and the long time needed to find relevant evidence also increase the probability of human error. The study results show that ML techniques can respond well to this problem and increase the accuracy and speed of the investigation process in data collection, inference, and analysis. The proposed ML-based methods show outstanding performance in improving accuracy and reducing error rates. However, limitations and challenges often relate to the nature of ML, such as the need for adequate and appropriate training data and samples [120; 310; 355; 365; 366; 409; 463], the selection of features [233; 385; 445; 516], the number of features [169; 172; 366], the massive number of parameters, and the determination of optimal values [48].
### Quantitative data
This section provides a quantitative analysis of the set of studies resulting from the primary research. In particular, the research questions RQ1 to RQ3 were addressed by analyzing the number of publications related to the use of ML in DF over the years, the geographical distribution of these studies, and the favourite publication venues.
#### 3.1.1 RQ1: Spread of Publications Throughout the Years
Fig. 2 indicates the number of publications between 2010 and 2021. Although research on ML applications in digital forensics grew between 2010 and 2015, its growth has been more impressive since 2016. The sharp increase observed from 2016 to 2021 shows that ML application in DF has been at the centre of interest of the research community.
#### 3.1.2 RQ2: Geographical Distribution of using ML in DF Research
The geographical distribution of the research activity is shown in Fig. 3. The data were obtained by extracting the first author's affiliation for each of the selected studies. China, India, and the USA are the three major contributors, which can be attributed to the size of their industries and the importance of research in this field. About 18% of the studies emerge from Europe, indicating that the topic has attracted comparatively less attention there. The 21 countries in the 'others' group, with five or fewer publications in this field, are: Greece, Turkey, Norway, Pakistan, Japan, Russia, Austria, Vietnam, Netherlands, Canada, Bangladesh, Romania, Poland, United Arab Emirates, Colombia, Hong Kong, Jordan, Lithuania, Portugal, South Africa, and Estonia. The papers were classified based on their publication venue type and database, as shown in Figs. 4 and 5, respectively. As can be seen, conference proceedings are more active in publishing papers than journals. Although the Springer database has the highest number of publications overall, most journal papers belong to Elsevier, with about 47% of all journal papers.
Figure 2: Publication year
Figure 4: The popularity of different venue types.
Figure 3: Demographic: geographical distribution of research activity based on the first author’s country of affiliation.
### Qualitative data
In this section, a qualitative analysis of the primary studies is provided to answer RQ4 and RQ5.
#### 3.2.1 RQ4: Keyword network: analysis for identifying research areas
To summarize the common topics amongst the selected primary papers, keywords were analysed across all 608 papers based on the authors' keywords. Table 3 shows the significant words repeated more than 20 times along with their number of repetitions, and Fig. 6 shows their relations. For this aim, a keyword network analysis was conducted using VOSviewer [25]. First, the social network map of the co-occurrence matrix was obtained. Fig. 6 shows a network based on the repetition of authors' keywords in the literature. Based on the similarity of keywords in topics, a classification of the keywords with the most thematic relevance is displayed in different colours. It intuitively reveals the relationships among research themes on using machine learning in digital forensics. The size of a node's font indicates the frequency of the keyword: the higher the frequency, the larger the node's font size. The thickness of a line reflects the closeness of the connection between two keywords. The VOSviewer settings were: minimum number of occurrences of a term = 5, binary counting, max. length = 30, max. lines = 100, and weights = occurrences.
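As an illustration of the underlying computation, the sketch below (an assumption added for clarity, not the VOSviewer implementation) builds keyword occurrence and co-occurrence counts from per-paper author keyword lists; VOSviewer then performs the mapping and clustering on such a matrix.

```python
# Minimal sketch: keyword co-occurrence counting over author keyword lists.
# `papers_keywords` is an assumed input: one list of author keywords per paper.
from collections import Counter
from itertools import combinations

papers_keywords = [
    ["image", "CNN", "forgery"],
    ["image", "SVM", "splicing"],
    ["video", "CNN", "detection"],
]

occurrences = Counter()        # keyword frequencies (node size)
cooccurrences = Counter()      # pair frequencies (edge weight)

for keywords in papers_keywords:
    unique = sorted(set(k.lower() for k in keywords))
    occurrences.update(unique)
    cooccurrences.update(combinations(unique, 2))  # binary counting per paper

min_occurrences = 5            # threshold used in this study's VOSviewer settings
nodes = {k: c for k, c in occurrences.items() if c >= min_occurrences}
edges = {pair: w for pair, w in cooccurrences.items()
         if pair[0] in nodes and pair[1] in nodes}
```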
A density visualization graph is illustrated in Fig. 7 with a kernel size of 1.5 (default) to acquire more information on these keywords. In the item density visualization, items are indicated by their labels, similar to the network visualization. The colour of each point on the map is determined
Figure 5: The number of publications in different databases.
by the density of its items. By default, the range of colours runs from blue to green to yellow. A larger number of items in the neighbourhood of a point and higher weights of the neighbouring items result in a point colour closer to yellow; smaller values appear blue [25].
The goal of this research question is twofold. Firstly, the aim is to determine which digital forensics domains have successfully used ML methods, leading to their advancement. Secondly, the aim is to specify the ML techniques with the most capabilities to be used in digital forensics and discover the DF domains that have used each of these ML techniques. Notice that the second perspective is different from the first one.
To answer this question, the categorization of digital forensics domains presented in [9] was considered. These categories are Data Discovery and Recovery, Fingerprinting, Multimedia Forensics (Image, Video, Audio, Text), Network, and Triage mode as shown in Fig. 8. The statistical results of this section are based on the authors' keywords. It should be noted that the papers related to Electromagnetic side-channel analysis were included in the network category due to their relevance to the IoT and malware detection. However, to better identify the relationship between ML techniques and digital forensics domains, a comprehensive mind map is presented in Fig. 10 to enable researchers to identify the field of study of papers more accurately. This mind map is based on the paper's context and not just keywords. Due
\begin{table}
\begin{tabular}{l l} \hline \hline Keywords & Count \\ \hline Image & 347 \\ Detection & 210 \\ CNN & 139 \\ Forensic(s) & 122 \\ Identification & 91 \\ DL & 91 \\ Forgery & 81 \\ SVM & 71 \\ classification & 69 \\ Video & 61 \\ Compression & 58 \\ Jpeg & 43 \\ Camera & 42 \\ Splicing & 41 \\ Audio & 31 \\ Computer & 28 \\ Multimedia & 27 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Counts of the significant authors’ keywords in the primary studies
Figure 6: Map of a co-occurrence network of authors’ keywords.
Figure 7: Density visualization of authors’ keywords.
to the existence of many works in the multimedia domain, and for more clarity of the presented results, this domain is considered together with its subsets in Fig. 8. As can be seen, image forensics has been at the centre of interest among the domains. A statistical analysis of the papers revealed that about 62% of the works were related to image forensics. Video, with about 11%, followed by the audio and fingerprinting domains with about 7% each, have also been of interest to researchers. The 'others' group of DF domains consists of three forensics phrases written by the authors, namely sensor forensics, computational forensics, and location forensics.
Fig. 9 shows the main ML methods used in DF. Deep neural network (DNN) methods are placed in the DL category, with 14 repetitions in the keywords. Furthermore, the Deep CNN (DCNN) keyword, with eight repetitions, is placed in the CNN category. The 'others' group consists of the six methods that have ten or fewer repetitions in the authors' keywords: Bayesian, Logistic Regression (LR), KNN (k-nearest neighbour), LSTM (Long Short-Term Memory), CapsNet, and K-means. Based on the results obtained, deep learning, especially convolutional methods, plays the most meaningful role in digital and image forensics. It is considered a strength for digital forensics because of the more accurate results gained by CNN-based models. However, it may also be a weakness due to the increase in adversarial attacks. The use of deep models in digital forensics has grown significantly since 2017. In 2021,
Figure 8: DF domains using ML methods (Based on authors’ keywords)
about 53% of the papers used CNN-based methods, and about 50% were conducted in the image forensics domain.
In the following, some articles, mostly selected from Q1-ranked journals, are briefly reviewed to cover the main topics. A comprehensive grouping of all related works published from 2010 to 2021 is presented in Fig. 10. As mentioned before, this mind map is based on the papers' context and covers all DF domains considered in the papers. Considering the full text of the papers, the map shows that CNN models have the widest application in DF approaches and frameworks, primarily in the image domain. Besides, it shows that the most significant concern in digital forensics is image manipulation. The dominant ML methods in this field have been CNN and Support Vector Machine (SVM). Another observation is that SVM, tree-based, and neural network-based (NN) models are effective in almost all DF categories, whereas K-means is used to a limited extent. One of the growing areas in data source identification is social media source identification, where SVM techniques are used most. In the triage models, traditional models such as Bayesian, tree-based, and SVM are used, and DL models do not play a significant role in this regard.
Figure 9: The main ML methods used in DF (Based on authors’ keywords)
Multimedia: With the development of mobile photography devices and editing software tools, the detection and identification of computer-generated images (CGI), recaptured images, and manipulated images have become the most severe challenges in multimedia forensics. Image manipulation includes a wide range of tampering and forgery, including splicing, copy-move, double compression, re-sampling, removal, resizing, sharpening, or smoothing operations such as median filtering and Gaussian filtering. In the following, the main works in this field will be reviewed.
* **CGI and fake detection image:** Deep learning models, especially CNN-based models, have been successful in automatic multidimensional feature extraction [239, 272], training, and classification for high-accuracy recognition [96] due to their ability to obtain higher-order features. However, DL models have challenges. Their performance decreases in blind detection; for example, when training and testing data are generated with different, unknown computer graphics rendering tools, they deeply extract context from images without inferring any unique fingerprint. To solve this problem, Convolutional Traces analysis and feature extraction with the Expectation-Maximization (EM) algorithm have produced good results in classifying fake images with SVM [64, 87]. Correspondingly, transfer learning [28, 304] can overcome two main problems of DL models: 1) the need for a massive amount of training data and 2) overfitting in CNNs. The reason is that the parameters of a pre-trained neural network (the source network), trained for a particular task, are transferred to a new neural network (the target network) designed to solve a somewhat similar task. A local-to-global strategy can also reduce the computational cost for images that are not the same size [508]. In this approach, the CNN model decides on local patches and on the whole image at actual size through simple majority voting. Indeed, it crops several fixed-size image patches during training, thus increasing the augmentation power of the training dataset. SVM classification has been suggested for CGI detection in two respects. The first is extracting histogram and multifractal spectrum features from residual images and regression model fitness features [493]. The second is binary similarity measures of Photo
Response Non-Uniformity (PRNU, a unique attribute of natural images) [309, 408]. However, in combination with DL methods, feature extraction can be done automatically. For NNs, the use of Laplacian of Gaussian, auto-correlation, and extreme learning machines has been suggested [511]. In fake colourized image detection [218], extracting features from the HSV colour space instead of RGB and training an NN instead of an SVM (due to SVM's low speed on large datasets and the difficulty of choosing the correct kernel) can be a good solution. Detection of recaptured images can be based on the difference in the number of edge pixels between the actual image and the recaptured image. This technique enables the detection of images taken from screens but presented as original images; the same is possible for hidden tampered images [114].
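To make the transfer-learning idea concrete, the sketch below (a generic illustration assuming PyTorch/torchvision, not the exact models proposed in the cited works) reuses an ImageNet-pretrained CNN as the source network and retrains only a new binary head for real-versus-fake image detection.

```python
# Minimal transfer-learning sketch (assumes a recent torchvision release):
# a pretrained ResNet-18 acts as the source network; only the new final
# layer (target task: real vs. computer-generated/fake image) is trained.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():          # freeze source-network weights to
    param.requires_grad = False           # limit data needs and overfitting

model.fc = nn.Linear(model.fc.in_features, 2)   # new head: real vs. fake

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (replace with a real loader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```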
**Video:** To distinguish fake videos from originals, extracting the recompression error can be a good detection criterion for CNN-based models [424]. A fake detection scheme in [115] has been proposed for both video and audio with a convolutional recurrent neural network framework. In fake bitrate detection, SVM showed excellent classification accuracy [248]. In biometric authentication, manipulating video to defeat authentication is easier than attacking other authentication methods. The increase in attacks based on AI-generated videos makes it necessary to distinguish original videos from tampered ones. In [81], a hybrid CNN-LSTM model was proposed to capture the facial motion differences between fake and original videos.
**Audio:** Similar to other multimedia forensics domains, data manipulation is one of the problems of audio forensics. The CNN-based method proposed in [334] determines whether an audio recording is recaptured or genuine. The results showed that it is more robust against ambient noise. Moreover, it can detect recaptured audio appropriately even for short 2-second clips.
* **Manipulation detection** **image:** In the handcrafted-features-based methods, the learning and classifying steps cannot be simultaneously optimized [203], as they are
separate steps. In the CNN models, feature extractors and classifiers work automatically, and each feature map is generated at each network step. By performing a convolutional operation on the whole image and learning weights, features are extracted while training on a set of images. One of the essential advantages of these models is weight sharing. This capability makes them work faster and reduces data latency due to the computational resources of edge nodes while providing acceptable detection accuracy. Hence, weight sharing has been used in many works ranging from detecting forgery and locating it [29, 138, 267, 268], based on the part of the image or the whole image and with any size [32], to global manipulation detection and processing history detection [47, 118]. The results of [73] show analysis of colour filter array (CFA) using a trained SVM with original images and forged images, which leads to less computation time. Using the CNN models is recommended in small images.
Determining proper filter parameters can help in retrieving the manipulation history of an image. In [361], authors suggested adding a transform layer to CNN and training it with frequency-domain features to identify template parameters of various spatial smooth filtering operations. Accurate locating of tampered and refined contours of tampered regions has been proposed in [529]. In some works [39, 82, 530], the automatic feature extraction capability of the CNN models has been used. Then, in the classification step, other methods such as SVM are indicated to provide good results in two-class problems [39, 530] and the Forensic Similarity Graph approach proposed in [82]. However, the scheme presented in [54] extracts features based on median filter residual image creation using least-squares and employs CNN as the classifier. One of the CNN-based model features is the possibility of changing the kernel size. This feature improves the performance and is used to extract and integrate multi-scale features for copy-move tampering [296] and increase the reassembling rate estimation [101, 117]. Most deep-based methods pay attention to high-level features while there is much digital forensics evidence in low-level features. When these features are extracted, manipulations can be detected by training the network via a small dataset [83].
Face images contain much personal information such as age, race, or even feelings. One of the challenges of the CNN models in image
forensics is how to design a network for learning features from the weak traces related to a specific manipulation [47, 132]. Because metadata is easier to manipulate in fake facial images, RGB-based detection works better [528]. In tamper detection, it is determined whether an image is derived from another one [183, 143]. Moreover, training a DNN with any pair of images has also been proposed for face swap detection [123].
Manipulations in compressed images are difficult to detect due to the influence of quality factors across multiple compressions, and the available methods work well only under certain conditions [200]. In recent years, DNNs have been suggested for JPEG compression manipulation detection [100, 167, 342], non-aligned double JPEG compression detection [119, 496], and resizing manipulation detection [184]. In [125], combining spatial, frequency, and compression features increases detection performance. For low-resolution images, which are lossy compressed and lack the pixel statistics needed to extract reliable features, filter layers and residual learning can help in median filtering detection [164]. SVM-based techniques are suitable for binary problems, but they are not appropriate for three-class problems, and most detection solutions address double compression. A different work, presented in [173], addressed the detection of triple JPEG compressed colour images. According to [181], SVM obtains acceptable results for median filtering detection using an autoregressive model to extract features from bit-planes in uncompressed images. It also obtains promising results in uncompressed and JPEG compressed images based on the streaking effect.
In the field of splicing detection, Jinwei Wang et al. [93] proposed a CNN-based solution that uses a combination of YCbCr, edge, and PRNU features based on the weighting strategy and eliminates redundant information to obtain better results. In median filtering, a CNN-based model with an adaptive filtering layer was proposed in [514], which is built upon the discrete cosine transform (DCT) domain. In social network image splicing detection, SVM is used for classification based on Texture Features [510]; SVM also utilizes Markov Features in QDCT and QWT Domains [253]. In [98], using Vicinity Noise Descriptor with SVM has been proposed to solve the noise fluctuation problem in splicing zone detection.
Blurred image detection, for various types of motion or out-of-focus blur, has been proposed using CNN-based models [188]. Furthermore, the authors in [176] used multi-derivative grey level co-occurrence matrix (MGLCM) features to train an SVM. Among image manipulations, sharpening is one of the most common techniques in image editing tools [125]; however, its detection is a challenge in small images. To deal with this problem, using DCT-CHDMY features with SVM has been suggested [42].
In modified contrast enhancement-based forgery detection [284] and overlapping concurrent directional patterns [313], NNs are suggested instead of SVM to obtain more robustness and accuracy. It should be mentioned that SVM produced better results than NN in detecting forgery of the illumination component using homomorphic image processing [196] and in detecting linear transformations such as rotation and resizing [278]. Although NN and DL models and conventional classifiers such as SVM have been used most in the literature, some of which is mentioned earlier, other techniques also presented good results. High classification accuracy was obtained with KNN in splicing detection [353], Naive Bayes (NB) in tampered JPEG image detection [159], and ensemble learning in seam carving forgery and JPEG down-recompression detection [331].
**Video:** Ease of manipulation threatens the authenticity and integrity of digital videos, mainly when they are considered digital evidence in court. Unlike image data, video data contains temporal information. One of the problems with video manipulation detection is that there are no tampered-video datasets with diverse labels that are large enough for training ML techniques. CNN-based techniques can extract features and estimate various compression parameters such as quantization parameters and intra- or inter-frame type. Johnston and Elyan [128] used deblocking filter settings from unaltered videos and trained a CNN model for manipulation detection. For detecting relocated I-frames in double compressed videos, using a smaller CNN model for feature extraction and detection is appropriate for embedding into mobile forensics devices [507].
In double compressed videos, detecting abnormal frames in HEVC (High-Efficiency Video Coding) videos using NN has been proposed [439]. SVM classification is also considered in various fields due to its good
compromise between computational complexity and detection accuracy [503]. A comparison has been conducted in [432] among SVM, KNN, and logistic regression to detect deleted frames based on features extracted from the bitstream and the reconstructed images. The results showed that SVM for CBR (constant bitrate) coding and LR for VBR (variable bitrate) coding have higher true positive (TP) rates than the other techniques. At the same time, using KNN and linear discriminant analysis (due to their simplicity) and a three-layer fully connected multilayer perceptron (due to its accuracy) gives better and more reliable performance in classifying single and double compressed videos [170].
**Audio:** Voice biometrics is increasingly used in speaker verification systems, and audio spoofing is one of the significant challenges in this field [410]. Pitch shifting, i.e., increasing or decreasing the pitch during audio editing, is a prevalent voice manipulation used to hide the speaker's true identity. Detecting a weakly pitch-shifted voice is one of the challenges in this field, which the proposed CNN-based model in [92] has been able to overcome. D. Luo et al. [339] used NNs in double compressed AMR audio detection via a stacked autoencoder to learn the optimal features of audio waveforms. For higher efficiency, they extracted compressed-domain speech features instead of decoded speech waveforms from encoded AMR files. However, the accuracy in [69] was higher because of the use of SVM classification. Reis et al. [343] utilized the electrical network frequency to detect and identify tampered audio recordings by detecting abnormal variations.
**Text analysis** Due to daily user activities such as email, web browsing, word processing, etc., textual evidence is vital. Much of this evidence can reside in unallocated space, or the file's signature may have been overwritten. Lang Beebe et al. [419] employed SVM to provide ranking algorithms for search string results. In Uyghur web text classification [323], SVM also presented good results. In automating the analysis of text and chat logs, ML techniques help investigators analyze and identify conversations containing sexually inappropriate subjects [49]. In child sexual abuse investigations and the identification of unknown criminal
media in P2P networks, SVM performed better than Naive Bayes and LR in text and image classification [520] in terms of the F-score. Identifying the source of evidence is essential in the investigation of online communications on social networks and email. In [540], an ML-based framework using natural language processing was developed. The results indicated that random forest (RF) has the best average precision in classification compared to decision tree (DT), NB, AdaBoost, LR, and SVM.
* **Other multimedia fields** An SVM-based classification tool has been recommended in [534] to reduce costs and speed up the investigation. The authors used a parallel software architecture to classify photo and video multimedia evidence easily and instantly. If the results are positive and there is evidence, the device should be taken to the laboratory for further analysis. In face detection and sketch synthesis (FSS), drawing the suspect's face showed promising results by applying the IEHO algorithm to hyperparameter optimization in the proposed CNN-based model [108]. Although existing DL modelling tools work well on adult age estimation, they do not perform well on underage subjects. The authors in [65, 531] improved a DL-based approach, especially for child sexual abuse investigations. Their scheme performed well thanks to a large dataset with balanced and valid class labelling. In video object detection, there is a dependency on footage recording quality: low recording quality reduces the reliability of the evidence collected for presentation to the court. Moreover, the colour of a particular object in footage is not consistent, for reasons such as changes in video quality or lighting, all of which make identification difficult. In this case, DL methods are more efficient at identifying the image object [220]. Due to high ambient noise, environmental sound classification (ESC) is one of the most challenging problems for DF and ML. Several solutions based on DNN [37] and SVM [40] were proposed to solve these problems. In
sound-based gun model identification, which is complex and costly to do manually, the KNN classifier showed better accuracy than SVM [535].
Fingerprinting: In identifying a suspected source, which could be a printer, a digital camera, a malicious code writer, or even the way people behave, there are often clues in relevant evidence that help investigators. Footprints can be left in various data such as images, audio, video, documents, etc. ML methods can help investigators extract and analyze these footprints more accurately and quickly.
* **Camera or phone source identification** **image:** The issue of device source identification based on device fingerprints is a growing topic in digital forensics [33]. Source camera identification is a serious issue in digital forensics areas such as copyright and property [305], including camera manufacturer identification and camera model identification (CMI). DL models like CNNs can automatically extract features and identify multiple camera models [51, 149, 198, 509, 516]. They have also been used in pre-processing tasks to remove scene content [116], which severely hides the camera fingerprints. Distinguishing between images taken directly by a camera and images downloaded from social networks is challenging in image forensics; in this field, SVM classifiers have been considered [256, 257, 258, 340]. In source camera identification without knowing the models, using a KNN classifier and a self-training strategy showed better results than a binary SVM and a multi-class SVM [377]. KNN is also used in multimedia phylogeny [357] to find abusive or original content publishers. In images uploaded to messenger applications or social networks, the remaining footprints can be extracted to help identify the image source; in this regard, CNN-based camera detection on a shared document has been proposed [501]. One of the challenges with messengers is that filters applied to images can alter the sensor pattern noise (SPN), which can cause camera detection methods not to work correctly. A CNN-based solution has been suggested in [180] for messenger apps like Whatsapp to identify these filters. Another challenge is low accuracy with a limited set of labels for training; in [62], a deep siamese network significantly increased the classification accuracy. The RF has been
proposed to identify image sources from social networks like Facebook, Twitter, and Flickr [512].
**Video:** The supervised ML algorithms effectively detected the video editing tool used in social networks and messengers, and higher performance was obtained by random forest [34]. In [84], a container-based method has been proposed to identify software and operating systems that manipulate videos on social networks. They also employed a decision tree to explain decisions.
**Audio:** Identifying a mobile device based on captured images is an approach that has been used considerably in the literature [51; 62; 116; 149; 180], as discussed earlier. Another method that can be employed to identify cell phones is microphone identification. Using features extracted directly from audio signals is a common way; it works well in identifying different brands, but it fails to distinguish different models of the same brand. In [122], a CNN-based model identified cell phones through their built-in microphones by extracting features from different parts of the spectrogram of multiple streams. In addition to identifying different brands, it also identifies different models of the same brand. Using NNs with device noise features [113] and sparse representation-based classification [497] also has its benefits in source recording device recognition. One of the challenges in audio forensics is verifying VoIP calls, since the caller ID can be easily manipulated in such calls. Among the ML-based methods proposed for this problem, SVM and NN showed good accuracy and robustness [490]; however, they can only be used to identify a specific source device. Hence, a CNN model was suggested in [186] based on temporal and spectral domain features.
* **Printer source identification** Printer or scanner source identification is vital in matters such as copyright ownership or document authentication. In recent years, the use of SVM has been discussed in various areas such as printer source identification for text documents [109; 212; 325] and printer source identification for both text and images using image processing and microscopic image techniques [252] and data exploration methods [244]. Furthermore, a method composed of Naive Bayes classifiers, KNN, and RF has been proposed in [26], using Speeded Up Robust and Oriented Fast Rotated features. The random forest algorithm presented better performance than SVM thanks to feature importance, which allows a method to be explained, with less run-time and no need for kernel and parameterization adjustments [242].
* **Authorship attribution and Profiling** In digital forensics, the details of a document's author, such as identity and demographic information, are significant [44]. Authorship verification tries to identify whether the authors of two documents are the same. This is a challenging issue due to the shortness of messages on social networks [210]. In programming, source code authorship attribution can be necessary for digital forensics in several ways, including developer privacy and the importance of identifying malicious code programmers. For three programming languages (C++, Java, and C#), the accuracy of a DL-based technique in identifying the author among 100 programmers was reported to be 97.34% [110]. The results of [211] showed that using a syntax tree with deep learning has a lower complexity cost than a Program Dependence Graph with deep learning. Because the timing and rhythm of people's typing can follow a specific pattern, keystroke-based identification techniques are used as a biometric tool to identify individuals. The lack of a dataset with real keystroke dynamics is one of the problems in this field. The authors in [112] created a new keystroke dynamics dataset. They also showed that classification with radial basis neural networks was more accurate than the other ML models.
_Network._
* **Attack and malware detection** In detecting the source of cyberattacks, high accuracy and a low false alarm rate are important parameters. Among the DNN models, the multilayer perceptron algorithm [36] showed promising results in identifying attacks for high-volume
data. Among the NN-based models, Neuro-Fuzzy-based techniques [365; 376; 491; 537] can be used in network traffic analysis to extract accurate and interpretable data. The Fuzzy Min-Max NN model [524] with online adaptation and online learning capability can add new classes or modify existing classes without the need for retraining.
Autoencoders do not learn patterns that they have not seen before, so multi-autoencoders have been suggested in [95]. A One-Class SVM model and a semi-supervised model can be trained, using an appropriate kernel function, with data from the normal class only (without attacks). They can then be utilized to separate normal events from abnormal ones [66; 297]. In an under-attack situation, such as in critical infrastructures, various critical processes are executed, each of which can be subject to its own attacks. By applying the appropriate SVM settings for each process, the specific attacks can be identified more accurately.
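A minimal sketch of this idea is given below (assuming scikit-learn and synthetic feature vectors; not the exact setups of the cited works): a One-Class SVM is trained on attack-free events only and then flags deviating events as anomalies.

```python
# Minimal sketch: One-Class SVM trained only on normal (attack-free) events.
# Feature vectors are synthetic stand-ins for per-process traffic features.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
normal_train = rng.normal(0.0, 1.0, size=(500, 8))       # normal-only training
test = np.vstack([rng.normal(0.0, 1.0, size=(50, 8)),     # normal test events
                  rng.normal(5.0, 1.0, size=(10, 8))])    # anomalous events

scaler = StandardScaler().fit(normal_train)
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)  # nu ~ outlier share
detector.fit(scaler.transform(normal_train))

pred = detector.predict(scaler.transform(test))  # +1 = normal, -1 = anomaly
print("flagged anomalies:", int((pred == -1).sum()))
```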
Deep learning models for detecting and analyzing malware are complex and require much training and pre-processing time, which is unsuitable for critical infrastructure environments; NN models can be faster options [513; 386]. Specifically, CNNs are shift invariant, meaning that CNN models do not require an investigator's expertise for feature engineering. In models that rely on signatures and malware behaviour, using raw static byte code as input [245] can alleviate the challenges of time-consuming analysis and high computational resource consumption. For detecting malicious web pages, the authors in [76] proposed a DNN-based tool, which reported 99.8% accuracy. Among tree-based models, boosted tree methods can provide investigators with a simple approach that detects malware faster than an antivirus. Likewise, among various ML techniques, the DT can perform better in malware family classification due to its comprehensive analysis and its effectiveness in reducing false alarm rates [517]. On networks with Windows operating systems, extracting features from the registry to train boosted trees [134] gave good results compared to NN, LR, and DT. The use of LR to identify the reason for scavenging adversaries accessing data has been suggested in IoT environments [457]. In fields such as cyber threat intelligence (CTI) [158], RF can achieve good results in malware classification thanks to reducing the total token count.
* **Attack and malware detection** With the expansion of the IoT, the devices used in these networks can be a potential source of evidence for digital forensics. However, the diversity of devices and the lack of a standard interface are some of the problems facing DF investigators. Because of the limited interface of smartphones, retrieving valuable forensic information is a challenging task. Recently, researchers have employed EM-SCA attacks to gather information from IoT devices in a forensically sound way. However, the successful implementation of this attack requires knowledge and equipment that most investigators do not have. To solve this challenge, evidence collection and analysis can be done automatically [536]. Although the analysis of the massive amount of collected EM traces is complicated, the efficiency of the analysis can be increased if a subset of frequency channels with sufficient information can be obtained. Using the RF classifier [57], thanks to its speed and high accuracy, can be suitable for identifying information-leaking frequency channels from high-dimensional EM side-channel data and selecting features (time-domain and frequency-domain properties). In DL models, the large volume of collected data can increase the accuracy of data analysis by allowing an input vector of larger size. However, this data is not directly suitable as input for training DL models and requires pre-processing. Among recurrent neural networks (RNN), the LSTM architecture is appropriate for identifying patterns that occur in time series data. With Fast Fourier Transform (FFT) vectors as input, an LSTM can be trained to distinguish elliptic curve cryptography (ECC) operations from other software activities [527].
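The sketch below illustrates this pipeline under stated assumptions (synthetic EM traces, PyTorch): windows of a trace are converted to FFT magnitude vectors and fed to a small LSTM classifier that separates, e.g., ECC operations from other activity. It is an illustration of the general idea, not the model of [527].

```python
# Minimal sketch: FFT feature extraction + LSTM classification of EM traces.
# Traces and labels are synthetic placeholders, not real side-channel data.
import numpy as np
import torch
import torch.nn as nn

def fft_windows(trace, win=256, hop=128):
    """Split a 1-D trace into windows and return FFT magnitude vectors."""
    feats = [np.abs(np.fft.rfft(trace[i:i + win]))
             for i in range(0, len(trace) - win + 1, hop)]
    return np.stack(feats).astype(np.float32)        # (seq_len, win//2 + 1)

class TraceLSTM(nn.Module):
    def __init__(self, n_feats, n_classes=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)
    def forward(self, x):                             # x: (batch, seq, feats)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                       # classify whole sequence

trace = np.random.randn(4096)                         # placeholder EM trace
x = torch.tensor(fft_windows(trace)).unsqueeze(0)     # batch of one sequence
model = TraceLSTM(n_feats=x.shape[-1])
logits = model(x)                                     # e.g. ECC vs. other activity
```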
Data Discovery and Recovery. In data carving and fragmentation, identifying the data type is essential because it helps identify the type of crime. Large-scale file fragment type identification is one of the challenges in this field; implicit extraction of features with a CNN model has shown better performance and run-time than similar works [86]. Another challenge relates to fragmented JPEG files whose metadata is lost and which are intertwined with non-JPEG files in the scanned area. Using extreme learning [206], better accuracy and timing have been reported for classifying and identifying JPEG versus non-JPEG fragments. The use of SVM in this area has also been considered, including improving the accuracy of classifying file systems accessed during a digital crime [140], improving data type identification [443], and file fragment
classification [277; 469]. Based on n-gram analysis [533], SVM-based approaches performed better in file type identification than NN. However, their scalability is still a challenge.
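As an illustration of the n-gram approach, the sketch below (synthetic fragments, scikit-learn; not the exact method of [533]) builds byte-bigram histograms from file fragments and trains a linear SVM to predict the fragment type.

```python
# Minimal sketch: byte-bigram features + linear SVM for fragment type ID.
# Fragments and labels are synthetic placeholders for carved file fragments.
import numpy as np
from sklearn.svm import LinearSVC

def bigram_histogram(fragment: bytes, buckets=256):
    """Hash byte bigrams into a fixed-size, normalised histogram."""
    hist = np.zeros(buckets, dtype=np.float32)
    for a, b in zip(fragment, fragment[1:]):
        hist[(a * 257 + b) % buckets] += 1.0
    return hist / max(len(fragment) - 1, 1)

rng = np.random.default_rng(1)
fragments = [bytes(rng.integers(0, 256, 512, dtype=np.uint8)) for _ in range(200)]
labels = rng.integers(0, 3, 200)                 # e.g. 0=jpeg, 1=pdf, 2=text

X = np.stack([bigram_histogram(f) for f in fragments])
clf = LinearSVC(C=1.0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```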
In data theft detection, regression tree [426] performs better than neural networks. Moreover, in data wiping, where files are securely deleted, one of the abuses is related to eliminating evidence. Random forest with high accuracy [28] can help investigators identify what data was deleted and what tools were used. Because some devices have built-in encryption algorithms for security reasons, the EM-SCA process becomes more complex. There is also a need to identify and classify encryption algorithms. Neural networks with FFT vector [192] and DNN with amplitudes as input [61] are used effectively in this field.
## 4 Future research directions of ML application in DF
This study aims to provide an exhaustive review of the ML applications in DF over the past decade. We present the details of ML applications in various domains of digital forensics. Based on the results of this survey, we present the following research directions of machine learning for digital forensics that are worth further investigation:
* The lack of a standard taxonomy and ontology for the complete classification of digital forensics domains, based on the scope of their applications and the type of evidence, is one of the crucial issues the authors faced in this study. For example, in the discussion of image source identification, different researchers have used different terms: some have included it in printer forensics or camera forensics (device forensics), and some in image forensics. Different categories such as e-mail forensics, network forensics, or text forensics were used in cases related to identifying the author of a text.
* This work is a systematic review to identify the state of ML methods in digital forensics. The results showed that the CNN-based models are the dominant method in studies. However, researching the technical aspect, identifying the level of security of using these methods, and examining the effect of adversarial attacks on the proposed methods can significantly help increase the security of the CNN-based methods.
Figure 10: Mind map of the primary studies.
* Adjusting many parameters in the DL models and defining the usefulness of layers in some applications are among the problems raised in the related works. One of the essential needs, especially in image forensics, can be related to collecting the relevant settings in the investigations and comparing the results to identify the best settings in each digital forensics application.
* Except for image and video forensics, DL models are used relatively little in other domains. Consequently, further attention and research are required to take advantage of the capacity of DL in other digital forensics domains.
* The focus in using ML in the DF process is on evidence acquisition and detection. Using ML methods in the evidence reconstruction and analysis phase could be valuable in the future of DF research.
## 5 Conclusion
In recent years, the growth of digital data and the increase in tools that facilitate digital crime have shown that traditional digital forensics methods can no longer keep up. There is a need to automate processes to increase the speed and accuracy of investigations and the analysis of their results. In particular, ML techniques have long been used in various fields, and their adoption in digital forensics has also been growing. A general view of the conducted investigations is necessary to identify the leading ML methods in DF and the affected DF domains, discover existing gaps, and clarify the future scientific path for researchers. Studies and surveys have been conducted thus far, but they have limitations: they have focused on a specific DF domain or have examined only specific ML methods. In this paper, for the first time, as far as we know, we review the last ten years of research related to using ML in digital forensics. Furthermore, a colour-coded, keyword-based visual analysis and a comprehensive mind map were provided to identify the applications of ML methods in DF domains. They can be used as a general map by digital forensics researchers.
The growing trend of using ML in DF indicates that it effectively improves the DF process, and there are still open research areas. As the meta-analysis of this study showed, CNN models have found an important place in DF. Due to the increase in attacks on these models, it is appropriate to consider the security of these models in DF in future research. The number of studies
related to DL models in domains other than image and video was significantly lower. Due to the high performance of these models, more research on their application in other DF domains will be valuable. Another important and exciting topic for future research is expanding ML methods to more phases of the DF investigation process and paying attention to explainable ML in digital forensics.
|
2301.11471
|
Multi-channel Medium Access Control Protocols for Wireless Networks
within Computing Packages
|
Wireless communications at the chip scale emerge as an interesting complement
to traditional wire-based approaches thanks to their low latency, inherent
broadcast nature, and capacity to bypass pin constraints. However, as current
trends push towards massive and bandwidth-hungry processor architectures, there
is a need for wireless chip-scale networks that exploit and share as many
channels as possible. In this context, this work addresses the issue of channel
sharing by exploring the design space of multi-channel Medium Access Control
(MAC) protocols for chip-scale networks. Distinct channel assignment strategies
for both random access and token passing are presented and evaluated under
realistic traffic patterns. It is shown that, even with the improvements
enabled by the multiple channels, both protocols maintain their intrinsic
advantages and disadvantages.
|
Bernat Ollé, Pau Talarn, Albert Cabellos-Aparicio, Filip Lemic, Eduard Alarcón, Sergi Abadal
|
2023-01-27T00:24:50Z
|
http://arxiv.org/abs/2301.11471v1
|
# Multi-channel Medium Access Control Protocols for Wireless Networks within Computing Packages
###### Abstract
Wireless communications at the chip scale emerge as an interesting complement to traditional wire-based approaches thanks to their low latency, inherent broadcast nature, and capacity to bypass pin constraints. However, as current trends push towards massive and bandwidth-hungry processor architectures, there is a need for wireless chip-scale networks that exploit and share as many channels as possible. In this context, this work addresses the issue of channel sharing by exploring the design space of multi-channel Medium Access Control (MAC) protocols for chip-scale networks. Distinct channel assignment strategies for both random access and token passing are presented and evaluated under realistic traffic patterns. It is shown that, even with the improvements enabled by the multiple channels, both protocols maintain their intrinsic advantages and disadvantages.
## I Introduction
Efficient integrated networks at the chip scale within Systems-in-Package (SiPs) are a prerequisite for high performance in such computing systems. Currently, most systems incorporate a Network-in-Package (NiP) consisting of a set of on-chip routers and intra-/inter-chip wired links [1, 2]. However, recent scaling [3, 4], specialization [5, 6], and disintegration trends [7, 8] are increasing the pressure placed on the interconnect, to the point that new communication paradigms may be required [9, 10].
Among the emerging alternatives, wireless chip-scale communications stand as a promising contender [11, 12, 13, 14]. This communication paradigm relies on the use of modulated electromagnetic waves for data transmission, using the chip package as the communications medium (Fig. 1). The resulting _wireless in-package links_ provide low latency, inherent broadcast capabilities, and global reconfigurability.
Since the communications medium is shared, wireless in-package communications require Medium Access Control (MAC) protocols to avoid or manage wasteful collisions. In this scenario, MAC protocols generally reduce to variants of multiplexing, random access, or token passing [15, 16, 17, 18, 19, 20, 21]. Even though recent works have demonstrated that computing packages could support a few frequency [22, 23] and space channels [24, 25], it is still unclear how MAC protocols can benefit from them. This is because more than a few channels are needed to implement truly scalable frequency/space multiplexing techniques [15], and most importantly, because multi-channel variants of random access and token passing have not been explored yet.
This paper aims to bridge this gap by focusing on the study of multi-channel versions of the two most representative protocol types in chip-scale scenarios, i.e. random access and token passing. In particular, the main contributions are as follows. We first describe the different ways we can extend random access and token passing with a small set of channels in Sec. II. Then, in Sec. III, we evaluate these protocol variants with traffic models typically used to mimic multiprocessor workloads [26]. This analysis sheds light on the impact of channel assignment on the protocol performance, as summarized in Sec. IV and concluded in Sec. V.
## II Multi-channel MAC Protocols
In this work, we describe three distinct channel assignment strategies for random access and token passing. As baselines, we take BRS [18] for random access and the baseline from [20] for token passing. The strategies presented here are not provably optimal, but they are simple (as required by the resource constraints of the chip-scale scenario) and representative of the potential techniques that can be used.
### _Assignment Methods for BRS_
In random access protocols such as BRS [18], nodes contend for channel access and back off if the channel is busy or there is a collision. Assuming \(N\) nodes, we study three ways to reduce the collision probability using \(N_{c}\) channels, namely:
**AS1:** Channels are assigned to nodes individually and randomly. When a node has a packet to transmit, the node is assigned a random channel. If the channel is busy or there is a collision, nodes undergo a random back off and also choose a random channel to use in the next attempt.
**AS2:** Each channel is assigned to \(\frac{N}{N_{c}}\) nodes statically following a uniform distribution, that is, assuming that all nodes have the same load (see Fig. 2, left). While this is not optimal for spatially unbalanced traffic, it serves as a baseline.
**AS3:** Channels are assigned to a variable number of nodes following a distribution that balances the load in each channel (see Fig. 2, right). To that end, nodes are ordered based on
Fig. 1: Pictorial view of a wireless chip-to-chip communication link.
the expected normalized load and assigned to each channel in order following a greedy algorithm.
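A minimal sketch of this balancing step (our illustrative reading of AS3, with assumed per-node load estimates as input) is shown below: nodes are visited in decreasing order of expected load and greedily placed on the currently least-loaded channel.

```python
# Minimal sketch of AS3 for BRS: greedy load-balanced channel assignment.
# `loads[i]` is the expected normalized injection load of node i (assumed input).

def assign_channels_balanced(loads, num_channels):
    """Assign each node to the currently least-loaded channel (greedy)."""
    channel_load = [0.0] * num_channels
    assignment = {}                       # node index -> channel index
    # Visit nodes from highest to lowest expected load.
    for node in sorted(range(len(loads)), key=lambda n: loads[n], reverse=True):
        channel = min(range(num_channels), key=lambda c: channel_load[c])
        assignment[node] = channel
        channel_load[channel] += loads[node]
    return assignment, channel_load

if __name__ == "__main__":
    loads = [0.30, 0.05, 0.20, 0.10, 0.15, 0.05, 0.10, 0.05]   # 8 example nodes
    assignment, per_channel = assign_channels_balanced(loads, num_channels=4)
    print(assignment, per_channel)
```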
### _Assignment Methods for Token Passing_
In token passing [27], typically, all \(N\) nodes are sorted forming a virtual ring and the token is passed in order through that ring. In a version with \(N_{c}\) channels, each channel can carry its own token. The design decisions then lie in the number of rings and the nodes that form each ring. For instance:
**AS1:** We assume as many rings as there are channels and map nodes uniformly to each ring. In other words, we distribute them in rings of \(\frac{N}{N_{c}}\) nodes, regardless of their expected load.
**AS2:** We assume a single virtual ring with multiple tokens circulating in it. In this case, tokens can jump over other tokens: when node \(i\) holds a token for multiple cycles during a transmission, idle tokens that arrive at \(i\)-1 can jump to \(i\)+1 (a minimal sketch of this jump-over rule is given after this list).
**AS3:** This strategy is similar to AS1, but nodes are mapped to rings based on their expected load. This may lead to rings of different sizes, but similar in the expected overall load.
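The sketch below gives one possible cycle-level reading of the AS2 jump-over rule described above; the ring size, token positions, and hold times are illustrative assumptions, and corner cases (e.g., several consecutive busy nodes) are deliberately left out.

```python
# Minimal sketch of AS2 for token passing: one ring, multiple tokens,
# and idle tokens jumping over a node that is holding a token to transmit.
# `busy_until[i]` marks until which cycle node i keeps its token (assumed input).

def advance_tokens(token_pos, busy_until, cycle, num_nodes):
    """Move each idle token one hop; jump over a node busy with another token."""
    new_pos = []
    for pos in token_pos:
        if busy_until.get(pos, 0) > cycle:        # this token is held: stay put
            new_pos.append(pos)
            continue
        nxt = (pos + 1) % num_nodes
        if busy_until.get(nxt, 0) > cycle:        # next node transmitting with
            nxt = (nxt + 1) % num_nodes           # another token: jump to i+1
        new_pos.append(nxt)
    return new_pos

if __name__ == "__main__":
    positions = [0, 8]                            # two tokens on a 16-node ring
    busy_until = {4: 6}                           # node 4 transmits until cycle 6
    for cycle in range(10):
        positions = advance_tokens(positions, busy_until, cycle, num_nodes=16)
    print(positions)
```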
## III Performance Evaluation
The architecture and application parameters are summarized in Table I. We implement both single-channel baselines and multi-channel versions of BRS and token passing as finite state machines in a modified version of Multi2sim that models wireless links and supports collision detection [28]. The protocols are stressed with synthetic traffic modeling uneven injection distributions (through the \(\sigma\) parameter) and bursty temporal behavior (through the Hurst exponent \(H\)) [26]. The default values for the different parameters are \(N=64\) nodes, \(N_{c}=4\) channels, \(H=0.5\) and \(\sigma=1\). Simulations are cycle-accurate.
In all cases, we compare the packet latency (in cycles) and throughput (in packets/cycle) of the different options. Given the high number of protocol strategies and traffic types, instead of plotting the classical latency-throughput curve, we make use of box plots that summarize the latency and throughput statistics. In our plots, the X axis shows the parameters under study. The plots have two Y axes: the left axis represents the latency and corresponds to the box plot values, whereas the right axis represents the throughput and corresponds to single-value markers of saturation throughput. Since a single packet takes 4 cycles to be transmitted over a single channel, the maximum throughput is 0.25 packets/cycle/channel.
### _Number of Channels_
Here, we discuss the results shown in Fig. 4 for BRS and token passing and an increasing number of channels.
**Latency.** In general, it can be observed that BRS is less stable than token passing in terms of latency, as the range of values is larger, with a higher number of outlier points. However, BRS has a much better zero-load latency than token passing since, in BRS, the protocol allows nodes to start transmitting immediately when the channel is sensed idle. This also explains why, independently of the parameters evaluated here (assignment, number of channels), the minimum latency is quite similar. The worst-case latency, however, clearly improves when having multiple channels, as the high load is distributed over multiple channels. On the other hand, in token passing, nodes must wait until they possess the token to start transmitting. For this reason, when the number of nodes is large, \(N=64\) in this case, the system remains idle much longer.
**Throughput.** The results for token passing depict a rather stable increase in saturation throughput as more channels are added, regardless of the assignment method. This could be due to the use, by default, of non-bursty and non-hotspot traffic to evaluate scalability. On the other hand, the results for BRS illustrate a different behavior than in token passing. Firstly, BRS cannot reach a saturation throughput as high as token passing. The main reasons are that channel contention and multiple collisions lead to channel waste and, hence, to a reduced throughput. Furthermore, BRS is more irregular than token passing in terms of saturation throughput as it depends on the percentage of collisions at high loads. As a result, the difference between the saturation throughput achieved for different assignments increases with the number of channels.
Fig. 4: Performance of multi-channel BRS (top) and token passing (bottom) for an increasing number of channels, \(C1\) to \(C4\), and different assignments.
Fig. 3: Graphical representations of the different assignment techniques for token passing assuming 16 nodes and 4 channels.
Fig. 2: Graphical representations of assignment techniques AS2 (left) and AS3 (right) for BRS assuming 16 nodes and 4 channels.
### _Number of Nodes_
Next, we comment on the performance of BRS and token passing for an increasing number of nodes, with \(N_{c}=4\). The results are shown in the left charts of Fig. 5 and Fig. 6.
**Latency.** BRS has a much lower latency than token passing due to its ability to transmit when the channel is idle. The span of the latency values differs across numbers of nodes and assignments, but is in general confined to similar values because, in the end, the same aggregated load ends up being distributed over more nodes. Static assignment of channels (AS2) works worse than the other alternatives. On the other hand, from the plot of token passing, it is clear that more nodes lead to much higher latency due to the increase of the token turnaround time. In fact, the low-load latency is proportional to the number of nodes in all cases. The span of the latency values is similar across the different system sizes.
**Throughput.** In general, saturation throughput is slightly higher for a lower number of nodes. In our protocols, having more nodes means having a higher population and, hence, a higher chance of collisions in BRS even for the same load, and a higher waiting time (or lower probability of having all nodes backlogged) in token passing. It seems, in any case, that BRS is more resilient to the change in the number of nodes, as the drop is more subtle, except for AS3, where possibly the load balancing algorithm does not perform well when such a large number of nodes has to be classified. Finally, all three assignments have very similar throughput in all cases for token passing, whereas AS1 (random channel assignment to individual packets) works better in BRS.
### _Hotspot Traffic_
We next discuss the results shown in the middle plots of Fig. 5 and Fig. 6, which illustrate the impact of an uneven spatial injection distribution on performance. Recall that low/high values of \(\sigma\) mean that traffic is hotspot/evenly distributed [26].
**Latency.** In BRS, the hotspot behavior of traffic does not seem to have a large influence on the performance of the different assignment methods. The outlier, third quartile, and maximum values within the distribution seem to be mildly impacted by the hotspot nature of traffic. In general, BRS is resilient to such variations and could actually benefit from having a lower number of nodes contending for the available channels. Still, the results show a small tendency to worse results when traffic is concentrated around a few nodes, possibly because the nodes with higher load reach higher backoff values. In AS3, this situation is avoided by proactively placing high-load nodes in different channels. Similarly, in token passing, latency is affected by the concentration of traffic around a given set of nodes mostly because the different assignment methods are able to provide tokens quickly to the nodes that need them, even if they are spaced apart within the ring. This is clearly visible in the extreme case of \(\sigma=0.05\). Similarly, outlier values seem to be larger when traffic is more hotspot. We also observe how AS2 fails to provide a good performance at low loads, and this behavior is exacerbated for very hotspot traffic.
**Throughput.** The throughput of BRS in its different implementations does not vary significantly with the type of spatial distribution of traffic, except for AS3, where a higher concentration of traffic around a few nodes seems to have a positive effect on the throughput. One reason could be that the most active nodes are distributed over the different channels so that contention is minimized. That does not happen in the other assignment methods. A different behavior is observed in token passing, where the hotspot behavior of traffic modifies the throughput of the different assignment methods, with AS3 being affected a bit less. This is because, if the load is concentrated around a small set of nodes, a large portion of the airtime is wasted while passing the token among these nodes.
Fig. 5: Performance of multi-channel BRS protocol for an increasing number of nodes, \(N\)=64–512 (left graph), different spatial concentration levels, \(\sigma\)=0.1–100 (center graph), different temporal burstiness levels, \(H\)=0.5–0.85 (right graph), and different assignment techniques.
Fig. 6: Performance of multi-channel token passing protocol for an increasing number of nodes, \(N\)=64–512 (left graph), different spatial concentration levels, \(\sigma\)=0.1–100 (center graph), different temporal burstiness levels, \(H\)=0.5–0.85 (right graph), and different assignment techniques.
### _Bursty Traffic_
Finally, we present the latency and throughput results for an increasingly bursty traffic. The results are shown in the right plots of Fig. 5 and Fig. 6 for BRS and token passing, respectively. Temporal injection of traffic is modeled through the Hurst exponent [26], with higher values indicating more bursty behavior, i.e. longer bursts followed by longer silences.
**Latency.** In BRS, it can be seen that the higher the value of \(H\), the higher the latency on average and the more unpredictable it becomes. This is because with an \(H\) of 0.5, the packets are injected following a random Poisson process, which keeps the probability of collisions low. However, when increasingly bursty traffic is considered, the probability of packets being injected (and nodes trying to transmit) in the same exact cycle increases. The effect is multiplicative with the burstiness, as cascading collisions lead to an exponential increase of the backoff time. This affects the system at all loads. On the other hand, token passing also suffers when bursty traffic is served, leading to very high latency, especially for high values of \(H\). The latency is a bit more stable than in the case of BRS, mainly because the protocol does not react to bursts of traffic with exponential backoffs, but rather with linear token passing. Still, the latency is much higher than that of BRS, discouraging its use for a large number of nodes.
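For reference, the exponential growth referred to above is the behavior of a standard binary exponential backoff, sketched below; the exact backoff parameters used by BRS are not given in this excerpt, so the slot cap and the uniform draw are assumptions.

```python
import random

def backoff_slots(n_collisions, max_exp=10):
    """Binary exponential backoff: wait a uniform number of slots in [0, 2^k - 1]."""
    k = min(n_collisions, max_exp)           # cap the exponent to bound the wait
    return random.randint(0, (1 << k) - 1)   # expected wait roughly doubles per extra collision

# Example: sampled waits after 1..5 consecutive collisions.
print([backoff_slots(c) for c in range(1, 6)])
```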
**Throughput.** On one hand, it can be verified that in BRS, the saturation throughput remains rather constant across all assignments regardless of the value of the Hurst exponent. A possible reason could stem from the behavior of the backoff mechanism; bursty traffic leads to a large number of collisions, which increases latency even for low loads, but the protocol may converge to a large backoff value that can accommodate the load even if it comes in bursts. In other words, the backoff mechanism spreads out the bursts of traffic over time, until all nodes are backlogged. On the other hand, it can be seen that in the case of token passing, the saturation throughput seems to drop significantly for higher values of \(H\), to a point that the achieved throughput becomes comparable with that of BRS. A potential reason for this behavior is the lack of an adaptive mechanism to react to bursts; the token still has to move around the ring even if bursts of traffic lead to the generation of multiple packets in a given node, leading to gaps where the wireless channel remains silent. When traffic is less bursty, the probability of such events is lower.
## IV Discussion
Figure 7 plots the performance of all the compared protocols and assignments representing the zero-load latency (X axis) and saturation throughput (Y axis) of a particular protocol for a given number of channels and assignment method.
In general terms, BRS is preferred over token passing in terms of zero-load latency given its ability to transmit immediately when the channels are idle. Hence, we see most BRS points located in a _low latency region_. Among the assignment techniques, AS1 achieves results similar to AS3 and would probably be preferred as it does not require prior knowledge of the load of each node to assign the channels. On the downside, the throughput is at most half that of token passing.
On the other hand, token passing can reach high throughput levels in the _high capacity region_, close to the maximum total bandwidth of the wireless network. However, while adding more channels reduces the latency significantly, the best latency in token passing is still several cycles away from the BRS values. Finally, we observe that it is hard to provide a good channel assignment overall: AS3 requires prior knowledge of the traffic distribution, AS1 does not perform well for hotspot traffic, and AS2 has high latency.
## V Conclusions
This paper has explored several techniques to extend random access and token passing MAC protocols to multiple channels for wireless chip-scale networks. In general, more channels alleviate the problems of both types of protocols, increasing the throughput of random access and cutting down the latency of token passing to a few tens of cycles. Additionally, random access is more resilient to hotspot and bursty traffic and more scalable to massive chip-scale networks. However, the higher throughput achievable with token passing makes the choice of protocol (and assignment) extremely challenging. Hence, we see a trend similar to that of single-channel protocols: it would be desirable to develop a multi-channel protocol that is able to seamlessly obtain the best of both paradigms. This will be explored in future work.
Fig. 7: Summary of the latency and throughput results over all the protocols, assignment methods, and traffic conditions. \(B\) and \(T\) stand for BRS (random access) and token passing, \(C\) and \(N\) denote number of channels and nodes, whereas \(S\) and \(H\) represent the different spatial and temporal injection distributions. For instance, the \(B\_C\) symbols represent the latency-throughput of all the assignment methods for BRS for different number of channels. Two desirable design spaces and a Pareto frontier are also given.
|
2301.07213
|
SCARP: 3D Shape Completion in ARbitrary Poses for Improved Grasping
|
Recovering full 3D shapes from partial observations is a challenging task
that has been extensively addressed in the computer vision community. Many deep
learning methods tackle this problem by training 3D shape generation networks
to learn a prior over the full 3D shapes. In this training regime, the methods
expect the inputs to be in a fixed canonical form, without which they fail to
learn a valid prior over the 3D shapes. We propose SCARP, a model that performs
Shape Completion in ARbitrary Poses. Given a partial pointcloud of an object,
SCARP learns a disentangled feature representation of pose and shape by relying
on rotationally equivariant pose features and geometric shape features trained
using a multi-tasking objective. Unlike existing methods that depend on an
external canonicalization, SCARP performs canonicalization, pose estimation,
and shape completion in a single network, improving the performance by 45% over
the existing baselines. In this work, we use SCARP for improving grasp
proposals on tabletop objects. By completing partial tabletop objects directly
in their observed poses, SCARP enables a SOTA grasp proposal network to improve
its proposals by 71.2% on partial shapes. Project page:
https://bipashasen.github.io/scarp
|
Bipasha Sen, Aditya Agarwal, Gaurav Singh, Brojeshwar B., Srinath Sridhar, Madhava Krishna
|
2023-01-17T22:29:31Z
|
http://arxiv.org/abs/2301.07213v1
|
# SCARP: 3D Shape Completion in ARbitrary Poses for Improved Grasping
###### Abstract
Recovering full 3D shapes from partial observations is a challenging task that has been extensively addressed in the computer vision community. Many deep learning methods tackle this problem by training 3D shape generation networks to learn a prior over the full 3D shapes. In this training regime, the methods expect the inputs to be in a fixed canonical form, without which they fail to learn a valid prior over the 3D shapes. We propose SCARP, a model that performs Shape Completion in ARbitrary Poses. Given a partial pointcloud of an object, SCARP learns a disentangled feature representation of pose and shape by relying on rotationally equivariant pose features and geometric shape features trained using a multi-tasking objective. Unlike existing methods that depend on an external canonicalization, SCARP performs canonicalization, pose estimation, and shape completion in a single network, improving the performance by 45% over the existing baselines. In this work, we use SCARP for improving grasp proposals on tabletop objects. By completing partial tabletop objects directly in their observed poses, SCARP enables a SOTA grasp proposal network to improve its proposals by 71.2% on partial shapes. Project page: [https://bipashasen.github.io/scarp](https://bipashasen.github.io/scarp)
## I Introduction
Given a partial observation of an object, 3D shape completion aims to recover the full 3D shape of the object. This has been widely addressed in computer vision [1, 2, 3, 4, 5, 6, 7, 8] and has many diverse downstream applications in robotics including visual servoing [9], manipulation [10, 11, 12, 13], visual inspection [14], and autonomous driving [15, 16, 17].
Many existing methods tackle shape completion by incorporating a training scheme that learns a prior over the full 3D shapes. This is done by training an autoencoder [1, 6, 18, 19] or a GAN [20] over many different instances of full shapes. At inference, this learned prior space is conditionally queried on the partial observations. These methods, however, suffer from a major limitation: they expect the partial input to be in a fixed canonical frame, a common frame of reference that is shared between instances in that category [21, 22]. A particular shape \(X\) in two different poses \(\{R_{1},T_{1}\}\) and \(\{R_{2},T_{2}\}\) will have very different geometry. As a result, \(X\) in different poses appears as different novel instances to these methods, inhibiting them from learning a valid prior over shapes.
Existing datasets like ShapeNet [23] have shapes that are manually aligned to a canonical frame, but real shape observations (e.g., depth maps) do not contain this information. A naive approach to tackling this challenge is to _canonicalize_, i.e., map a 3D (full or partial) shape to a category-level canonical frame with [21] or without supervision [22, 24, 25]. A multi-stage pipeline can be built involving the sequential steps of (1) canonicalization, (2) shape completion, and (3) de-canonicalization (bringing the object back in the original pose). In such a pipeline however, the performance of a shape completion network directly depends on the output quality of the canonicalization module. This can lead to errors propagating between these modules leading to a sub-optimal completion.
We propose SCARP, a method that performs Shape Completion in **AR**bitrary **P**oses. Unlike existing methods that have to directly learn a prior over all possible poses and shapes, we first disentangle the pose from the shape of a partial pointcloud. We build a multi-task objective that: (1) generates a disentangled feature representation of pose and shape by canonicalizing an object to a fixed frame of reference, (2) estimates the exact pose of the object, and (3) completes the shape of the object conditioned on the disentangled shape representation. This multi-task objective allows our network to jointly understand the pose and shape of the input. It does so by learning rotationally-equivariant and translationally-invariant pose features using Tensor Field Networks [26], and global geometric shape features using PointNet++ [27].
Fig. 1: SCARP performs Shape Completion in ARbitrary Poses (top-left). We show an example of a real scene made of two tabletop objects. (a) The captured scene is partial leaving out a portion of the objects. (b) This results in grasp poses (in green) on the partial point cloud that directly collide (in red) with the actual object leading to a collision between the object and the Franka Panda’s gripper (grey). (c) SCARP improves grasp proposal by accurately completing the partial pointcloud in the observed pose. (d) This enables the grasp proposal network to propose grasp poses (shown in green) on the completed pointcloud that do not collide with the actual object.
**Application:** Robotic grasp pose estimation [12, 13, 28, 29] is a challenging area of research that often expects a faithful reconstruction of the scene in 3D. As shown in Fig. 1 (b), under a partial observation, [13] generates grasp proposals that directly collide with the actual object in the scene (shown in red). As a result, the manipulator is likely to collide with the object as it attempts to grasp the objects using one of these predicted grasp poses. We use SCARP to complete these partial shapes directly in their observed poses and estimate grasp proposals on these completed shapes. We show that SCARP reduces such invalid grasps by \(71.2\%\) over predicting grasp poses directly on the partial observations. To summarize, our contributions are:
1. We propose SCARP, a novel architecture to perform shape completion from partial pointclouds in arbitrary poses. To the best of our knowledge, this is the first work to do so.
2. We show for the first time how a multi-task objective can support: (1) canonicalization, (2) 6D pose estimation, and (3) shape completion on partial pointclouds.
3. We demonstrate that SCARP outperforms the existing shape completion baselines (with pre-canonicalization) by \(45\%\) and improves grasp pose estimation by reducing invalid grasp poses by \(71\%\).
## II Related Work
**Partial Pointcloud Completion** has been extensively addressed over the years [18, 20, 30, 31, 32, 1, 33, 34]. Early 3D shape completion works relied on intermediate voxel representations for representing the 3D objects [35, 36, 37, 38, 39].
In the absence of point correspondences, Chamfer's Distance (CD)1 can be used to compute the distance \(d_{CD}(X,Y)\) by considering the nearest neighbor of the points \(\{x\in X,y\in Y\}\) in \(\{Y,X\}\) as their correspondence. The distance is then given as:
Footnote 1: [https://pdal.io/en/stable/apps/chamfer.html](https://pdal.io/en/stable/apps/chamfer.html)
\[\sum_{x\in X}\min_{y\in Y}\left\|x-y\right\|_{2}^{2}+\sum_{y\in Y}\min_{x\in X}\left\|x-y\right\|_{2}^{2} \tag{1}\]
However, CD does not guarantee uniformity in the output density. Density-Aware Chamfer's Distance (DCD) [51] overcomes this issue by modifying CD as:
\[\frac{1}{2}\left(\frac{1}{|X|}\sum_{x\in X}\left(1-\frac{e^{\mathcal{Z}_{x}}}{ n_{\hat{y}}}\right)+\frac{1}{|Y|}\sum_{y\in Y}\left(1-\frac{e^{\mathcal{Z}_{y}}}{ n_{\hat{x}}}\right)\right) \tag{2}\]
Please refer to [51] for the exact notations.
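For concreteness, the following NumPy sketch implements the (squared) Chamfer Distance of Eq. 1 with brute-force nearest neighbors; the density-aware variant of Eq. 2 is omitted because it relies on notation defined in [51]. The array shapes and the function name are our own illustrative choices.

```python
import numpy as np

def chamfer_distance(X, Y):
    """Squared Chamfer Distance between pointclouds X (n, 3) and Y (m, 3), as in Eq. 1."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # (n, m) pairwise squared distances
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()   # X->Y and Y->X nearest-neighbor terms

# Example with random pointclouds.
X, Y = np.random.rand(128, 3), np.random.rand(256, 3)
print(chamfer_distance(X, Y))
```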
## IV SCARP: Shape Completion in ARbitrary Poses
Given a partial object pointcloud \(\hat{X_{p}}\) at an unknown pose \(\{R,T\}\), we want to estimate this pose and the corresponding full object pointcloud \(\hat{X}\) in the same pose.
This is a challenging task as for a neural network, a pointcloud \(X\) in two different poses \(\{R_{1},T_{1}\}\) and \(\{R_{2},T_{2}\}\) are two completely different pointclouds. Thus, we adopt a multi-tasking objective that disentangles the pose and the shape of the input partial pointcloud \(\hat{X_{p}}\). The shape component allows us to understand that \(\hat{X_{p}}\) is a partial observation of \(X\) which is \(\hat{X}\) in its canonical form. The pose component is then used to estimate the pose transform \(\{R,T\}\) between \(\hat{X}\) and \(X\).
### _Multi-tasking Pipeline for disentangling Shape and Pose_
Let \(X_{p}\) and \(X\) be a partial and its corresponding full pointcloud in a fixed canonical frame. Then \(\hat{X_{p}}\) and \(\hat{X}\) are \(X_{p}\) and \(X\) in an unknown arbitrary pose \(\{R,T\}\) such that \(\hat{X_{p}}=R(X_{p})+T\) and \(\hat{X}=R(X)+T\). The input to our network is \(\hat{X_{p}}\) which is mean centered at the origin. Our aim is to predict \(\{R,T\}\) and the full pointcloud \(\hat{X}\) which is posed as:
\[\{R,T,\hat{X}\}=\Phi(\hat{X_{p}}) \tag{3}\]
where \(\Phi\) denotes our proposed network, SCARP.
Our multi-tasking objective is formulated to (1) complete the partial pointcloud in a fixed canonical frame given by \(X\) and (2) estimate the pose transformation from the canonical frame to the original pose \(\{R,T\}\). In this pipeline, the two components (1) pose and (2) shape are predicted separately using two different output heads as shown in Fig. 2.
#### Iv-A1 Feature Extraction
To estimate the input's shape, we compute global geometric shape features, \(p\in\mathbb{R}^{E}\), using Pointnet++ [27] as explained in Sec. III. To estimate the pose of the input, we adapt TFN [26] as explained in Sec. III. Our TFN computes a global equivariant feature, \(F\in\mathbb{R}^{N\times E}\) by max pooling over the types \(\{\ell\}_{\ell=0}^{\ell=\ell_{max}}\), where \(E\) is the dimension of the equivariant embeddings, \(N\) and \(\ell_{max}\) are user-defined.
The input to our shape completion network is a non-linear combination of \(p\) and a global invariant embedding, \(F_{\mathcal{X}}\in\mathbb{R}^{E}\), computed by max pooling \(F\) over the channel dimension, \(N\). Additionally, \(F\) is used to estimate an equivariant frame of reference, \(\{R^{\prime}\in\mathbb{R}^{3\times 3},T^{\prime}\in\mathbb{R}^{3}\}\) that transforms the invariant embeddings to \(X\)'s original pose.
#### Iv-A2 Task I: Shape Completion
Completing the shape of a partial input at any arbitrary orientation is difficult. Therefore, we aim to first complete the shape at a fixed canonical frame. To learn this canonical frame, the model needs to build an understanding of the full shape of the partial input. To achieve this, we train our model to predict a full canonicalized pointcloud \(X^{\prime}\) directly from \(\hat{X_{p}}\). Shape completion enables our model to learn a prior over the global shape of a category (a typical chair would have four legs and a backrest) enabling our network to directly canonicalize the partial inputs accurately.
Fig. 2: **Overview of our proposed approach:** The input to SCARP is a mean-centered partial pointcloud \(\hat{X_{p}}\) in an arbitrary orientation \(R\). Our feature extraction module **(b)** disentangles the partial pointcloud’s pose and shape and is trained in a multi-tasking objective **(a)**. In the first task, SCARP combines Pointnet++ [27] and TFN [26] features to generate a shape feature that is used by a pointcloud completion network, \(G\), to generate \(X^{\prime}\). In the second task, the TFN pose feature is used to generate an equivariant frame \(\{R^{\prime},T^{\prime}\}\). Our loss functions enable the overall network to learn a prior over the shape while understanding the pose of the partial input.
We adopt \(G\), as explained in Sec. III, as our shape completion network, where (1) the input to \(G\) is a semantically meaningful embedding generated from a partial input \(\hat{X_{p}}\) and (2) it is trained using a distance loss against the full pointcloud \(X\) to learn a relationship between the partial input \(\hat{X_{p}}\) and the predicted full canonical pointcloud \(X^{\prime}\). As shown in Fig. 2 (right), the input to \(G\) is a globally invariant feature vector \(f\in\mathbb{R}^{E}\) computed by combining \(p=P(\hat{X_{p}})\) and \(F_{X}=\mathcal{X}(\hat{X_{p}})\) non-linearly using a neural network \(\phi_{S}\) given as:
\[X^{\prime}=G(f)\quad\text{and}\quad f=\phi_{S}(\mathcal{X}(\hat{X_{p}})\oplus P(\hat{X_{p}})) \tag{4}\]
#### Iv-A3 Task II: Pose Estimation
Once \(G\) predicts the full pointcloud \(X^{\prime}\) in a canonical pose, it is important to estimate the correct rotation \(SO(3)\) matrix \(R\in\mathbb{R}^{3\times 3}\) and translation \(T\in\mathbb{R}^{3}\) to register \(X^{\prime}\) back on \(\hat{X_{p}}\). We predict \(R^{\prime}\) and \(T^{\prime}\) on the second head of our model using the rotationally equivariant TFN features \(F\) given as:
\[R^{\prime}=\phi_{R}(F)\quad\text{and}\quad T^{\prime}=\phi_{T}(F) \tag{5}\]
where \(\phi_{R}\) and \(\phi_{T}\) are multi-layered perceptrons.
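A hedged PyTorch sketch of the two output heads in Eqs. 4-5 is given below. The PointNet++ feature \(p\), the equivariant TFN feature \(F\), and the generator \(G\) are treated as given; the layer sizes and the exact architectures of \(\phi_{S}\), \(\phi_{R}\), and \(\phi_{T}\) are our assumptions, as the text only states that they are MLPs.

```python
import torch
import torch.nn as nn

class ScarpHeads(nn.Module):
    """Sketch of SCARP's two heads: shape feature fusion (Eq. 4) and pose regression (Eq. 5)."""
    def __init__(self, E=256, N=64):
        super().__init__()
        self.phi_S = nn.Sequential(nn.Linear(2 * E, E), nn.ReLU(), nn.Linear(E, E))
        self.phi_R = nn.Sequential(nn.Linear(N * E, E), nn.ReLU(), nn.Linear(E, 9))  # 3x3 rotation
        self.phi_T = nn.Sequential(nn.Linear(N * E, E), nn.ReLU(), nn.Linear(E, 3))  # translation

    def forward(self, p, F):                         # p: (B, E) PointNet++, F: (B, N, E) TFN
        F_inv = F.max(dim=1).values                  # invariant global embedding F_X, shape (B, E)
        f = self.phi_S(torch.cat([F_inv, p], -1))    # fused shape feature, later fed to G
        R = self.phi_R(F.flatten(1)).view(-1, 3, 3)  # not yet orthonormal (see Eq. 10)
        T = self.phi_T(F.flatten(1))
        return f, R, T

# Example forward pass with random features for a batch of 2 inputs.
f, R, T = ScarpHeads()(torch.randn(2, 256), torch.randn(2, 64, 256))
```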
### _Loss Functions for Multitask Training_
#### Iv-B1 Shape Completion in a fixed Canonical Frame
In the first task, we estimate the completed pointcloud in a fixed canonical frame given by \(X^{\prime}\). We use DCD (see Sec. III) to minimize the distance between the predicted pointcloud \(X^{\prime}\) and the ground truth canonical pointcloud \(X\) given by:
\[\mathcal{L}_{shape}=d_{DCD}(X^{\prime},X) \tag{6}\]
#### Iv-B2 Estimating the pose of the object
To estimate the pose given by \(\{R,T\}\), we use rotationally equivariant pose features \(F\) and pass them through \(\{\phi_{R},\phi_{T}\}\). We constrain this prediction against the canonical frame. To do so, we rotate the canonical output \(X^{\prime}\) to obtain \(R^{\prime}(X^{\prime})\) and compare it against the rotated ground truth \(\hat{X}\). At this point, however, the pointwise correspondences between \(X\) and \(X^{\prime}\) are lost. Thus, a hard distance loss such as the Euclidean distance cannot be directly used. To tackle this, we minimize the permutation-invariant CD objective, as explained in Sec. III, between \(X\) and \(X^{\prime}\). However, CD only minimizes the distance between the nearest neighbors of the points in the pointcloud. This results in local minima where the loss is small even when the actual correspondences are far apart. As a result, the predicted pointcloud is often flipped about one of the axes. To tackle this issue, we rotate the canonical ground truth \(X\) using the predicted \(R^{\prime}\) and compare against \(\hat{X}\) using an \(L2\) loss. The overall loss is:
\[\mathcal{L}_{rot}=\delta d_{CD}(\hat{X},R^{\prime}(X^{\prime}))+\gamma||\hat{X },R^{\prime}(X)||_{2} \tag{7}\]
\(R^{\prime}(X^{\prime})\) is computed by detaching the forward computation graph at the output of \(G\). The gradients from this loss do not backpropagate through \(G\) at the first head.
For symmetrical objects such as bowls and glasses, multiple \(R^{\prime}\) predictions can be correct. A hard \(L2\) loss then penalizes the network even for a correct \(R^{\prime}\) if the correspondences do not exactly match. Thus, for symmetrical objects, we keep \(\delta\sim 1.0\) and \(\gamma\sim 0.0\), and for non-symmetrical objects we keep \(\delta\sim 1.0\) and \(\gamma\sim 1.0\).
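A hedged PyTorch sketch of the rotation loss in Eq. 7 with the symmetric/non-symmetric weighting of \(\delta\) and \(\gamma\) described above; computing the Chamfer term by brute force and detaching \(X^{\prime}\) (and nothing else) are implementation choices of ours for illustration.

```python
import torch

def chamfer(P, Q):
    """Squared Chamfer Distance between two pointclouds P (n, 3) and Q (m, 3)."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return d2.min(dim=1).values.sum() + d2.min(dim=0).values.sum()

def rotation_loss(X_hat, X_prime, X_canon, R_pred, symmetric=True):
    """L_rot (Eq. 7): delta * CD(X_hat, R'(X')) + gamma * ||X_hat - R'(X)||_2."""
    delta, gamma = (1.0, 0.0) if symmetric else (1.0, 1.0)
    cd_term = chamfer(X_hat, (R_pred @ X_prime.detach().T).T)  # no gradient flows into G here
    l2_term = torch.norm(X_hat - (R_pred @ X_canon.T).T)       # uses the known correspondences
    return delta * cd_term + gamma * l2_term
```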
The input to our network is a mean-centered partial pointcloud \(\hat{X_{p}}\). At this point, we train our network to regress to \(\hat{X_{p}}\)'s centroid in the full pointcloud \(X\) given by \(T^{\prime}\). We directly supervise \(T^{\prime}\) against the ground truth \(T\) given as:
\[\mathcal{L}_{trans}=||T^{\prime}-T||_{2} \tag{8}\]
The final output is obtained by rotating and translating our predicted pointcloud \(X^{\prime}\) by \(R^{\prime}\) and \(T^{\prime}\) respectively as:
\[X_{o}=R^{\prime}(X^{\prime})+T^{\prime} \tag{9}\]
**Orthonormality Loss:** The rotation \(R^{\prime}\) predicted by our network is a \(3\times 3\) matrix in the \(SO(3)\) space. However, the matrix predicted by Eqn. 5 is not guaranteed to be a valid \(SO(3)\) matrix. We therefore enforce orthonormality on \(R^{\prime}\) by minimizing its difference to its closest orthonormal matrix. To do so, we compute the SVD \(R^{\prime}=U\Sigma V^{T}\) and enforce unit singular values as:
\[\mathcal{L}_{orth}=||UV^{T}-R^{\prime}||_{2} \tag{10}\]
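A small PyTorch sketch of this regularizer: project the predicted matrix onto its closest orthonormal matrix via the SVD and penalize the distance. The batch handling and the Frobenius norm are our assumptions.

```python
import torch

def orthonormality_loss(R):
    """L_orth (Eq. 10) for a batch of predicted 3x3 matrices R of shape (B, 3, 3)."""
    U, S, Vh = torch.linalg.svd(R)                 # closest orthonormal matrix is U @ Vh
    return torch.linalg.norm(U @ Vh - R, dim=(-2, -1)).mean()

# Example: a slightly non-orthonormal matrix yields a small but non-zero penalty.
R = torch.eye(3).unsqueeze(0) + 0.05 * torch.randn(1, 3, 3)
print(orthonormality_loss(R))
```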
#### Iv-B3 Combined Loss
We train our network end-to-end by combining all the losses as:
\[\mathcal{L}=\mathcal{L}_{shape}+\mathcal{L}_{rot}+\mathcal{L}_{trans}+\mathcal{L }_{orth} \tag{11}\]
## V Experiments
In this section, we evaluate SCARP on two tasks: **(T1)** Shape completion in arbitrary poses and **(T2)** Improving grasp proposals by completing partial pointclouds.
**Baselines:** As we are the first to perform the task T1, we modify the existing shape completion networks by developing a multi-stage pipeline: (1) We use ConDor [22] to first canonicalize the input partial pointclouds to a fixed canonical frame defined implicitly by ConDor. (2) We train and test the existing shape completion methods on ConDor's canonical frame. (3) Bring the completed pointcloud to the original orientation using a pose transform predicted by ConDor. We compare against (1) ConDor\(+\)Pointr [7], a SOTA pointcloud completion network that generates high-resolution completed pointclouds and (2) ConDor\(+\)Shape Inversion (SInv.) [20] based on tree-GAN [39] that shares our generator \(G\).
**Metrics:** We use Chamfer's Distance **(CD)** as explained in Sec. III to compute the distance between the ground truth pointcloud \(\hat{X}\) and the predicted pointcloud given as \(R^{\prime}(X^{\prime})+T^{\prime}\) to evaluate the match in shape.
Earth Movers Distance-Maximum Mean Discrepancy **(MMD-EMD)**[39, 50] is used to evaluate for uniformity in the prediction by conducting bijective matching of points between two pointclouds. As we only want to measure the
output's uniformity, we compute this metric between the canonical ground truth \(X\) and the canonical prediction \(X^{\prime}\).
We evaluate SCARP by measuring its impact in an important downstream task: grasp pose estimation. In this, we measure a) the number of grasp proposals made on the partial object that collide with the actual object on the table (shown in Fig. 1 and 4), denoted by \(C\), and b) the number of invalid grasps that do not result in a valid grasp, denoted by \(I\). We then compute the Grasping Error **(GE)** as:
\[\frac{1}{\mathcal{D}}\sum_{i=1}^{\mathcal{D}}\frac{C+I}{\mathcal{N}} \tag{12}\]
where \(\mathcal{N}\) is the number of top grasp proposals and \(\mathcal{D}\) is the total number of pointcloud instances. In our case, \(\mathcal{N}=30\).
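The metric is a direct average, as the small sketch below shows; the list-based inputs of per-instance counts are an illustrative choice of ours.

```python
def grasping_error(colliding, invalid, n_top=30):
    """GE (Eq. 12): average fraction of colliding or invalid grasps among the top n_top proposals."""
    assert len(colliding) == len(invalid)
    return sum((c + i) / n_top for c, i in zip(colliding, invalid)) / len(colliding)

# Example: three pointcloud instances with (colliding, invalid) counts out of 30 proposals each.
print(grasping_error([10, 4, 7], [2, 1, 3]))
```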
**Dataset:** Our dataset is a subset of [23] derived from [22] and [19] made of 5 tabletop (Bowl, Bottle, Can, Mug, Basket) and 4 non-tabletop (Plane, Car, Chair, Watercraft) categories. We evaluate \(GE\) only on the tabletop objects.
**Results:** As shown in Table I, SCARP outperforms the existing multi-stage baselines on all the categories by \(45\%\) on average. The existing shape completion methods rely on the output of an external canonicalization model that suffers from its own inconsistencies, as reported in the corresponding paper [22]. This results in error propagation, as the inputs to the shape completion networks are not always in the exact canonical forms. The errors in the input map to a larger error in the output of the networks. This is followed by an error in the transform from the canonical form to the original pose. The resulting outputs of the multi-stage pipeline therefore suffer from high inconsistencies and are sub-optimal. Unlike these networks, our model is trained jointly on both tasks (canonicalization and shape completion) using a multi-tasking objective. As we show in the ablations, this objective plays a crucial role in achieving a disentangled representation of shape and pose. Qualitative results are shown in Fig. 3, which vividly show the closeness of SCARP's output to the ground truth when compared with others.
_Improvement in Grasp Proposals_: Generating grasp proposals for partial pointclouds is a challenging task, as a network may mistake a missing portion of an object for a potential area to grasp (see Fig. 1 and Fig. 4). We apply SCARP to complete these partial observations directly in the observed poses and predict grasp poses on these completed pointclouds using a SOTA grasp generation network, Contact-Graspnet [13]. To evaluate the grasp proposals, we compute GE on (1) partial observations, (2) completed observations by SCARP, and (3) actual objects (ground truth). Actual objects are full pointclouds with no missing portion. As we show in Table III, SCARP shows a relative improvement of \(71.2\%\) and an absolute improvement of \(48.27\%\) over the grasp proposals on the partial pointclouds. Moreover, there is only an absolute degradation of \(4.19\%\) vis-a-vis the ground truth. The ground truth error in Table III is the datum error in the grasp proposals output by Contact-Graspnet. Qualitative results are shown in Fig. 4. Green and red proposals denote valid and colliding grasp proposals respectively. As can be seen, the grasps proposed on partial observations collide with the actual object (ground truth), whereas the grasp proposals made on the completed object by SCARP are valid.

| Metric | Method | Bowl | Bottle | Can | Mug | Basket | Plane | Car | Chair | Watercraft | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CD\(\downarrow\) | ConDor+SInv. | 82.7 | 27.4 | 45.4 | 41.5 | 85.3 | 34.2 | 14.7 | 59.4 | 39.9 | 47.8 |
| CD\(\downarrow\) | ConDor+Pointr | 30.8 | 20.9 | 29.9 | 14.2 | 40.9 | 22.1 | 6.4 | 19.8 | 8.5 | 21.5 |
| CD\(\downarrow\) | SCARP (Ours) | **21.8** | **7.9** | **11.8** | **12.1** | **34.2** | **6.9** | **5.6** | **19.1** | **7.1** | **14.0** |
| MMD-EMD\(\downarrow\) | ConDor+SInv. | 27.3 | 17.2 | 20.1 | 19.9 | 29.2 | 19.6 | 11.3 | 22.2 | 18.9 | 20.6 |
| MMD-EMD\(\downarrow\) | ConDor+Pointr | 21.6 | 13.6 | 14.8 | 12.6 | 18.8 | 14.4 | 8.1 | 13.5 | 9.1 | 14.1 |
| MMD-EMD\(\downarrow\) | SCARP (Ours) | **9.6** | **6.3** | **8.8** | **8.4** | **10.6** | **5.0** | **5.6** | **8.4** | **6.0** | **7.6** |

TABLE I: Quantitative comparison of shape completion in arbitrary poses for tabletop (Bowl, Bottle, Can, Mug, Basket) and off-tabletop (Plane, Car, Chair, Watercraft) objects. Most tabletop objects are symmetrical whereas off-table objects have more variations in structure. Chamfer's Distance (CD) and Earth Movers Distance-Maximum Mean Discrepancy (MMD-EMD) are explained in Sec. V and are scaled by \(10^{3}\) and \(10^{2}\).
Fig. 3: Qualitative comparison of shape completion in arbitrary poses on SCARP and the existing multi-stage baselines: Canonicalization using ConDor, Shape Completion, and De-canonicalization. Pointr [7] is a SOTA pointcloud completion network that generates high-resolution completed pointclouds. Shape Inversion (SInv.) [20] is based on tree-GAN [39] that shares our generator \(G\).
**Ablation:** SCARP is trained on a multi-tasking objective to achieve: (1) canonicalization, (2) 6D pose estimation, and (3) shape completion. We evaluate the contribution of the different components in our network in achieving these tasks.
_(A) Canonicalization without Shape Completion_: Canonicalization involves mapping an input \(X\) to its category's fixed canonical frame [21, 22, 25]. Learning a canonical frame for a partial input is challenging, as a network may struggle to understand the overall structure of the partial shape. In our model, the structure of a category is correctly learned using the shape completion task. Thus, we analyze whether SCARP can canonicalize the partial inputs without performing shape completion. To evaluate this, we modify SCARP by training it to auto-encode the partial input \(\hat{X}_{p}\) while simultaneously estimating its pose \(\{R,T\}\). That is, our generator \(G\) generates \(X_{p}\), which is \(\hat{X}_{p}\) in its canonical form, and uses \(R^{\prime}\) to rotate \(X_{p}\) back to \(\hat{X}_{p}\). To measure the performance, we compute the CD between \(G\)'s canonical output and the canonical ground truth. In the case of SCARP, this is given as \(d_{CD}(X^{\prime},X)\), and in the case of the ablation, this is given as \(d_{CD}(X^{\prime}_{p},X_{p})\). As shown in Table II (w/o SC), on average \(d_{CD}\) on SCARP is 9.83, whereas when the shape completion aspect is removed, the average distance is 94.54. This indicates that the network does not learn anything meaningful if the task of shape completion is removed from the formulation.
_(B) Shape Completion without pose and shape features:_ As shown in Fig. 2, \(G\) expects a disentangled feature embedding that is a non-linear combination of the pose \(F_{\mathcal{X}}\) and shape embeddings \(p\). We remove these features one by one and observe their impact on shape completion. We measure the performance of shape completion as \(d_{CD}(\hat{X},R^{\prime}(X^{\prime}))\). As shown in Table II (w/o \(F_{\mathcal{X}}\) and w/o \(p\)), SCARP fails to converge without either of the features. In both cases, the model fails to learn a correct transformation between the canonical and the original pose. Without pointnet (w/o \(p\)), the model collapses to a few different shapes across the instances missing out on per-instance details. Without TFN (w/o \(F_{\mathcal{X}}\)), completion in the canonical frame is more accurate but TFN fails to estimate the correct pose transform. In summary, as the shape completion in the canonical form suffers, the pose transform is also inaccurate thus indicating the importance of the multi-task objective.
## VI Conclusion
Existing shape completion works assume the partial inputs to be in a fixed canonical frame. This is difficult to achieve in a robotics setting where the objects are observed in arbitrary poses thus needing pre-canonicalization. This leads to an error propagation resulting in a sub-optimal shape completion. We propose SCARP, a novel architecture that performs Shape Completion in ARbitrary Poses. SCARP is trained using a multi-task objective to perform (1) canonicalization, (2) 6D pose estimation, and (3) shape completion. SCARP outperforms the existing multi-stage baselines by \(45\%\) and showcases its potential in improving grasp proposals on tabletop objects, reducing colliding grasps by more than \(70\%\). SCARP has a huge potential in many more robotics applications like collision avoidance in trajectory planning or differential simulators for model-based RL planners.
**Acknowledgement:** We would like to thank Karthik Desingh, who is an Assistant Professor at the University of Minnesota, and Adrien Poulenard, who is a Postdoctoral Fellow at the Stanford University, for their valuable feedback.
| | | w/o SC | w/o \(F_{\mathcal{X}}\) | w/o \(p\) |
|---|---|---|---|---|
| Plane | Ours | **3.8** | **6.9** | **6.9** |
| Plane | Modified | 112.34 | 135.3 | 111.9 |
| Bowl | Ours | **14.9** | **21.8** | **21.8** |
| Bowl | Modified | 123.7 | 185.3 | 156.4 |
| Mug | Ours | **10.79** | **12.1** | **12.1** |
| Mug | Modified | 47.6 | 45.5 | 44.3 |

TABLE II: Ablation Study: We show the effect of removing different components of our network. In w/o SC, we train SCARP as an auto-encoding network to verify whether the model still learns to canonicalize the input pointcloud. In w/o \(F_{\mathcal{X}}\) and w/o \(p\), we remove TFN and PointNet++ features, respectively, and evaluate the quality of shape completion. Each component plays a crucial role in SCARP, as is evident from the drop in metrics. Metrics are scaled by \(10^{3}\).
Fig. 4: **(left)**: Grasp proposals made by a SOTA grasp proposal network, [13], on partial observations lead to collisions with the actual object. _Partial_ is a partial observation and _Ground truth_ denotes the actual object. The proposals are made on _Partial_ (shown in green) but collide with the actual object (shown in red). **(right)**: We use SCARP to complete the partial observations. Grasp proposals made on the completed objects align well with the actual object on the table reducing such collisions by a large margin.
| | Partial | SCARP | Ground Truth |
|---|---|---|---|
| Bowl | 62.18 | 21.14 | 16.86 |
| Bottle | 46.5 | 7.35 | 6.32 |
| Can | 81.33 | 22.33 | 16.0 |
| Mug | 71.33 | 25.0 | 23.5 |
| Basket | 77.33 | 21.5 | 13.66 |
| Average | 67.73 | 19.46 | 15.27 |

TABLE III: Quantitative metrics on GE (explained in Sec. V): % of grasp proposals that are invalid or collide with the actual object on the table when the proposals are made on (1) Partial Observations, (2) Shape Completed objects by SCARP, and (3) Actual Objects on the table (Ground Truth). SCARP reduces invalid and colliding grasp proposals by \(71.2\%\) when only partial observations are available by accurately completing the object in the observed pose.
|
2303.11208
|
On Intersecting Polygons
|
Consider two regions in the plane, bounded by an $n$-gon and an $m$-gon,
respectively. At most how many connected components can there be in their
intersection? This question was asked by Croft. We answer this asymptotically,
proving the bounds $$\left\lfloor \frac{m}{2}\right\rfloor \cdot \left\lfloor
\frac{n}{2}\right\rfloor\le f(n,m)\le \left\lfloor \frac{m}{2}\right\rfloor
\cdot \frac{n}{2} + \frac{m}{2} $$ where $f(n,m)$ denotes the maximal number of
components and $m\le n$. Furthermore, we give an exact answer to the related
question of finding the maximal number of components if the $m$-gon is required
to be convex: $\left \lfloor \frac{m+n-2}{2}\right\rfloor$ if $n\ge m+2$ and
$n-2$ otherwise.
|
Kada Williams
|
2023-03-20T15:40:36Z
|
http://arxiv.org/abs/2303.11208v1
|
# On Intersecting Polygons
###### Abstract
Consider two regions in the plane, bounded by an \(n\)-gon and an \(m\)-gon, respectively. At most how many connected components can there be in their intersection? This question was asked by Croft. We answer this asymptotically, proving the bounds
\[\left\lfloor\frac{m}{2}\right\rfloor\cdot\left\lfloor\frac{n}{2}\right\rfloor \leq f(n,m)\leq\left\lfloor\frac{m}{2}\right\rfloor\cdot\frac{n}{2}+\frac{m}{2}\]
where \(f(n,m)\) denotes the maximal number of components and \(m\leq n\). Furthermore, we give an exact answer to the related question of finding the maximal number of components if the \(m\)-gon is required to be convex: \(\left\lfloor\frac{m+n-2}{2}\right\rfloor\) if \(n\geq m+2\) and \(n-2\) otherwise.
## 1 Introduction
Croft [1] asked the following question.
**Question 1**.: _What is the maximal number of connected components in the intersection of the polygonal regions determined by an \(n\)-gon and an \(m\)-gon?_
As usual, a _polygon_ is a piecewise linear simple closed curve in the plane. Since all polygons admit a triangulation, the \(n\)-gonal region can be partitioned into \(n-2\) triangles, the \(m\)-gonal region into \(m-2\). Now the intersection of convex regions is convex, every triangle is convex, and every convex region is either path-connected or empty. It follows that the answer is at most \((m-2)(n-2)\), hence finite. Furthermore, supposing that the \(m\)-gon is convex, there are at most \(n-2\) components.
Figure 1: intersecting polygons
**Question 2**.: _What is the maximal number of connected components in the intersection of a region bounded by an \(n\)-gon and a convex \(m\)-gon?_
In Figure 3, the yellow regions are in different connected components if the polygons are viewed as open, whereas if the polygons are closed, there is only one connected component. Yet as we will see, the maximal number of connected components in Questions 1 and 2 does not depend on whether the polygonal regions are open or closed. Essentially, we will deform the polygons slightly if these open overlaps have a boundary point in common. We say that an _overlap_ is a connected component in the intersection of an open \(n\)-gon and an open \(m\)-gon.
We claim that every overlap is a polygon, and that the boundary of an overlap consists of segments contained by the perimeter of the \(n\)-gon or the \(m\)-gon. To this end, let us first consider the convex polygons formed by intersecting \(n-2\) triangles with \(m-2\) triangles in their respective triangulations. The overlaps arise when we glue these polygons along the inner diagonals, and so are polygons as well. Their boundary segments are contained not by inner diagonals, but by sides of the \(n\)-gon or \(m\)-gon.
When intersecting a closed \(n\)-gon and a closed \(m\)-gon, it is possible that the boundaries of two overlaps meet, or that a one-dimensional connected component occurs from a vertex or side of one polygon tangent to the side of another. Observe that this is only possible if three side lines of the \(n\)-gon and \(m\)-gon pass through a point. We resolve this as follows.
Given a side, let us translate its line by a perpendicular vector such that we increase the width of any component it contains, we do not let the line move across the finitely many points where two side lines meet, and the vector has smaller length than the positive width overlaps. Iterating this, the number of times three side lines concur can be decreased to zero, while the number of components does not decrease. Therefore, in our investigations, we may assume that no three side lines pass through a point, and that the polygons are open.
Figure 3: deformation, before and after
Figure 2: overlaps
Let \(f(n,m)\) denote the maximal number of overlaps (equivalently, of connected components) between an \(n\)-gonal and an \(m\)-gonal region. Our main result is the following.
**Theorem 3**.: \(\left\lfloor\frac{m}{2}\right\rfloor\cdot\left\lfloor\frac{n}{2}\right\rfloor \leq f(n,m)\leq\left\lfloor\frac{m}{2}\right\rfloor\cdot\frac{n}{2}+\frac{m}{2}\)__
Furthermore, we give an exact answer to the convex version of the question.
**Theorem 4**.: _Let us intersect an \(n\)-gonal region with a convex \(m\)-gonal region. The maximal number of overlaps is \(\left\lfloor\frac{m+n-2}{2}\right\rfloor\) if \(n\geq m+2\) and \(n-2\) otherwise._
Proof of Theorem 3, lower bound.: Intersecting \(\left\lfloor\frac{m}{2}\right\rfloor\) 'teeth' with \(\left\lfloor\frac{n}{2}\right\rfloor\) 'teeth' as in Figure 4 yields \(\left\lfloor\frac{m}{2}\right\rfloor\cdot\left\lfloor\frac{n}{2}\right\rfloor\) overlaps. Clearly, the direction of the end segments is such that they meet. Possibly with an additional side in the case when \(m\) is odd or when \(n\) is odd, this creates an \(n\)-gon and an \(m\)-gon.
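The multiplicative counting behind this construction can be checked numerically. The Shapely sketch below builds two comb-shaped polygons out of axis-aligned rectangles (a cruder comb than the teeth of Figure 4, using roughly four sides per tooth instead of two, so it does not realize the optimal side count) and counts the connected components of their intersection; with \(k\) and \(l\) teeth it reports \(k\cdot l\) components.

```python
from shapely.geometry import box
from shapely.ops import unary_union

def crossing_combs(k, l):
    """Comb A with k vertical teeth and comb B with l horizontal teeth crossing all of them."""
    A = unary_union([box(0, -1, 2 * k - 1, 0)] +                               # base of A
                    [box(2 * i, 0, 2 * i + 1, 2 * l + 2) for i in range(k)])   # vertical teeth
    B = unary_union([box(-2, 0, -1, 2 * l + 2)] +                              # base of B, left of A
                    [box(-1, 2 * j + 1, 2 * k, 2 * j + 2) for j in range(l)])  # horizontal teeth
    return A, B

k, l = 5, 4
A, B = crossing_combs(k, l)
print(len(list(A.intersection(B).geoms)))   # k * l = 20 disjoint rectangular overlaps
```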
Proving the upper bound in Theorem 3 involves the following lemma. We shall use standard notation - see e.g. [2].
**Lemma 5**.: _If an overlap does not contain a vertex of the \(m\)-gon or \(n\)-gon, then it has an even number of sides._
## 2 Three special cases
In the case \(m=3\) and \(n=5\), by triangulating the pentagon, it is clear that at most \(3\) overlaps emerge. Moreover, as many as \(3\) overlaps can be attained.
Figure 4: double dentures
Figure 5: \(f(5,3)=3\)
In the case \(m=4\) and \(n=4\), the number of overlaps is \(\leq 2\cdot 2=4\), because each quadrilateral is made up of two triangles. This many overlaps can be attained, see Figure 6.
In the case \(m=5\) and \(n=5\), by tweaking the construction for \(m=3\), we can achieve \(5\) overlaps. On the other hand, our proof that no more than \(5\) can be had is more subtle. First, we note that a triangle can only form \(3\) overlaps with a pentagon if the pentagon can be partitioned into no less than \(3\) convex regions. This is only possible if the pentagon has three sides proceeding along a concave arc, met by the triangle according to Figure 5.
If neither pentagon has this shape, then at most \(2\cdot 2=4\) overlaps are possible. If one pentagon has this shape, then of its three triangle parts, the outer two are intersected at most twice, or else the other pentagon is determined. In particular, if an outer triangle meets the other pentagon in three regions, then two of these regions border an inner diagonal, thus forming no more than \(\frac{1}{2}\) of an overlap (except for an option where these three are all the overlaps). As for the central triangle, it contains at most \(1+\frac{1}{2}+\frac{1}{3}\) overlaps (except for an option where the other pentagon meets no inner diagonal, with two overlaps from the central triangle and at most one from each outer one). In total, there are certainly fewer than \(6\) overlaps, whichever case it is. Therefore, \(f(5,5)=5\).
Figure 6: \(f(4,4)=4\)
Figure 7: construction for \(f(5,5)=5\)
## 3 The proofs
Let us show that if an overlap does not contain a vertex of the \(m\)-gon or \(n\)-gon, then it has an even number of sides.
Proof of Lemma 5.: The key is to inspect two consecutive sides of an overlap. Recall that any side of an overlap is contained by the boundary of the \(m\)-gon or \(n\)-gon. If two consecutive sides belong to the \(m\)-gon, then the vertex where they meet must be a vertex of the \(m\)-gon, and similarly for the \(n\)-gon. Therefore, provided that an overlap does not contain such a vertex, its consecutive sides belong only to the \(m\)-gon or only to the \(n\)-gon, alternating between the two. Hence, this overlap has an even number of sides along its boundary.
Proof of Theorem 3, upper bound.: Let there be given an \(m\)-gon and an \(n\)-gon with \(F\) overlaps. Of these, let there be \(F_{k}\) many with exactly \(k\) sides belonging only to the \(n\)-gon. If an overlap has two consecutive sides on the \(m\)-gon, then they meet at a vertex of the \(m\)-gon. We may assume overlaps are disjoint, so each vertex of the \(m\)-gon occurs at most once this way. We deduce that \(3F_{0}+F_{1}\leq m\), as surely all vertices are such if all sides belong to the \(m\)-gon, and surely at least one vertex is such if there is just one exception.
The essential idea is to double-count the overlap side segments on the \(n\)-gon. Looking at one single side of the \(n\)-gon, it contains at most \(m\) points of intersection with the \(m\)-gon, and so contains at most \(\left\lfloor\frac{m}{2}\right\rfloor\) segments bounding some overlap. This amounts to \(n\cdot\left\lfloor\frac{m}{2}\right\rfloor\) segments in total. On the other hand, there are \(k\) segments on \(F_{k}\) many overlaps. Therefore,
\[\sum_{k}kF_{k}\leq n\cdot\left\lfloor\frac{m}{2}\right\rfloor.\]
Adding to this the inequality \(3F_{0}+F_{1}\leq m\), the coefficient of each \(F_{k}\) on the left-hand side is at least \(2\). Thus,
\[2F\leq n\cdot\left\lfloor\frac{m}{2}\right\rfloor+m.\]
Dividing by \(2\) implies the upper bound \(f(n,m)\leq\left\lfloor\frac{m}{2}\right\rfloor\cdot\frac{n}{2}+\frac{m}{2}\).
Next, let us see how to maximise the number of overlaps with a convex \(m\)-gon.
Figure 8: appending a tooth
Proof of Theorem 4.: With regard to Figure 5, if a concave arc of \(m\) sides is formed over a \(V\) shape, the resulting \((m+2)\)-gon is intersected in \(m\) overlaps by a convex \(m\)-gon with vertices near the midpoints of these sides. Thus, if \(n=m+2\), there can be \(n-2=m\) overlaps. If \(m\) is increased, there still can be \(n-2\) overlaps. If \(n\) is increased, then at the cost of two sides, a 'tooth' can be appended as in Figure 8, causing \(\left\lfloor\frac{n-(m+2)}{2}\right\rfloor\) more overlaps.
As the \(n\)-gon can be partitioned into \(n-2\) triangles, each of which overlap the convex \(m\)-gon at most once, there can be no more than \(n-2\) overlaps in the case \(n<m+2\). Let us further prove that there can be no more than \(m+\left\lfloor\frac{n-(m+2)}{2}\right\rfloor=\left\lfloor\frac{m+n-2}{2}\right\rfloor\) overlaps in the case \(n\geq m+2\).
Following the same notation as in the previous proof, the number of overlap side segments on the \(n\)-gon is \(\sum kF_{k}\), where \(3F_{0}+F_{1}\leq m\). However, in this problem, each side of the \(n\)-gon creates at most one segment bounding some overlap, and so \(\sum kF_{k}\leq n\). Adding these yields
\[2F\leq m+n.\]
Our strategy is to improve this bound by \(2\). Let us regard the sides of the \(n\)-gon which belong to its convex hull, of which there are at least \(2\). Should these fail to meet the \(m\)-gon, clearly \(\sum kF_{k}\leq n-2\). If the \(m\)-gon has two vertices outside the \(n\)-gon, \(3F_{0}+F_{1}\leq m-2\). Finally, if the \(m\)-gon has but one vertex outside the \(n\)-gon, then the convex hull of the \(n\)-gon is only met by two consecutive sides of the \(m\)-gon, so only one overlap is bounded with the convex hull. In this case, \(3F_{0}+F_{1}\leq m-1\), but also \(\sum kF_{k}\leq n-1\). Therefore,
\[2F\leq m+n-2.\]
It follows that no more than \(\left\lfloor\frac{m+n-2}{2}\right\rfloor\) overlaps can be attained.
## 4 Concluding remarks
Despite our efforts to approximate \(f(n,m)\), Problem 1 remains open. As we saw in our proof that \(f(5,5)=5\), if an overlap necessarily involves a vertex of the \(m\)-gon, then it takes up considerable space. Our upper bound appears to be wasteful in general. Therefore, we may conjecture the following.
**Conjecture 6**.: _The answer to Problem 1 is \(f(n,m)=\left\lfloor\frac{n}{2}\right\rfloor\cdot\left\lfloor\frac{m}{2}\right\rfloor+\epsilon\), where \(\epsilon=1\) for odd values of \(n\) and \(m\), and \(\epsilon=0\) otherwise._
The extra term \(\epsilon\) is needed because in Figure 4, a new overlap is made when swapping the lowest edge for a triangle reaching a new edge, as in Figure 5.
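For small parameters the bounds above are easy to tabulate. The following Python sketch is a minimal illustration (the helper names are hypothetical, and the formulas are transcribed from Theorem 3, the proof of Theorem 4, and Conjecture 6); it prints the proven upper bound, the exact convex-case value, and the conjectured value side by side.

```python
from math import floor

def upper_bound_thm3(n, m):
    # Theorem 3 (upper bound): f(n, m) <= floor(m/2) * n/2 + m/2
    return floor(m / 2) * n / 2 + m / 2

def convex_exact_thm4(n, m):
    # Value from the proof of Theorem 4: overlaps with a *convex* m-gon
    return n - 2 if n < m + 2 else (m + n - 2) // 2

def conjectured_f(n, m):
    # Conjecture 6: floor(n/2) * floor(m/2) + eps, with eps = 1 iff n and m are both odd
    eps = 1 if (n % 2 == 1 and m % 2 == 1) else 0
    return (n // 2) * (m // 2) + eps

for n in range(5, 10):
    for m in range(5, 10):
        print(n, m, upper_bound_thm3(n, m), convex_exact_thm4(n, m), conjectured_f(n, m))
```

For instance, for \(n=m=5\) the conjectured value is \(2\cdot 2+1=5\), matching \(f(5,5)=5\), while the upper bound of Theorem 3 gives \(7.5\).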
**Acknowledgement.** The author would like to thank Imre Leader for helpful discussions and guidance in writing this paper.
|
2301.08120
|
Convex Bodies associated to Linear series of Adelic Divisors on
Quasi-Projective Varieties
|
In this article we define and study convex bodies associated to linear series
of adelic divisors over quasi-projective varieties that have been introduced
recently by Xinyi Yuan and Shou-Wu Zhang. Restricting our attention to big
adelic divisors, we deduce properties of volumes obtained by Yuan and Zhang
using different convex geometric arguments. We go on to define augmented base
loci and restricted volumes of adelic divisors following the works of Michael
Nakamaye and develop a similar study using convex bodies to obtain analogous
properties for restricted volumes. We closely follow methods developed
originally by Robert Lazarsfeld and Mircea Mustata.
|
Debam Biswas
|
2023-01-19T15:23:34Z
|
http://arxiv.org/abs/2301.08120v1
|
# Convex Bodies associated to Linear series of Adelic Divisors on Quasi-Projective Varieties
###### Abstract
In this article we define and study convex bodies associated to linear series of adelic divisors over quasi-projective varieties that have been introduced recently by Xinyi Yuan and Shou-Wu Zhang. Restricting our attention to big adelic divisors, we deduce properties of volumes obtained by Yuan and Zhang using different convex geometric arguments. We go on to define augmented base loci and restricted volumes of adelic divisors following the works of Michael Nakamaye and develop a similar study using convex bodies to obtain analogous properties for restricted volumes. We closely follow methods developed originally by Robert Lazarsfeld and Mircea Mustata.
+
Footnote †: Debam Biswas, Department of Mathematics, University of Regensburg, Universitatstrasse 31, 93053 Regensburg. Email: [email protected]
## Keywords:
**Adelic Divisors, Volumes and Restricted Volumes, Augmented Base locus, Okounkov Bodies**
## 1 Introduction
The theory of Okounkov bodies to study linear systems of line bundles on a projective variety was introduced by Russian mathematician Andrei Okounkov in his articles [14] and [15]. Given a linear series of an ample line bundle on a projective variety, he introduced certain convex bodies, which later came to be known as _Okounkov bodies_, whose convex geometric properties encode interesting invariants of the graded series. In their article [13] Robert Lazarsfeld and Mircea Mustata noticed that the constructions of Okounkov generalise from ample line bundles to arbitrary big line bundles on projective varieties. In their paper [13] they develop a systematic study of Okounkov bodies for big line bundles and prove various properties of volumes such as continuity, Fujita approximation and others. They also consider the notion of _restricted volumes_ along a closed sub-variety and prove properties analogous to those of ordinary volumes.
A crucial feature of the approach in [13] is that the construction of Okounkov bodies makes sense even when the variety is not projective as long as we have a graded series of the space of global sections on our given line bundle. In this article we use this observation to construct Okounkov bodies for "compactified" line-bundles on quasi-projective varieties. In their recent pre-print [12] Xinyi Yuan and Shou-Wu Zhang introduced the notion of _adelic divisors_ on a quasi-projective variety \(U\) over a field. They manage to put a topology on the space of all divisors which come from projective
models \(X_{i}\) of \(U\) and consider all divisors which are "compactified" with respect to this topology. In other words, an _adelic divisor_ on a normal quasi-projective variety \(U\) is given by the data \(\{X_{i},D_{i}\}\) and a sequence of positive rationals \(q_{i}\) converging to \(0\), where \(X_{i}\) are projective models of \(U\) and \(D_{i}\) are \(\mathbb{Q}\)-divisors on \(X_{i}\) with \(D_{i}|_{U}=D_{j}|_{U}\) for all \(i,j\), such that the following "Cauchy condition" holds.
\[-q_{j}D_{0}\leq D_{i}-D_{j}\leq q_{j}D_{0}\ \forall\ i\geq j\]
Here inequalities signify effectivity relations holding in a common projective model (see section 2.4 of [21] for details). As a result of their consideration, given any divisor \(D\) on \(U\) and an adelic compactification of \(D\) denoted by \(\overline{D}\), we get a space of adelic global sections \(H^{0}(U,\overline{D})\) which is a **finite dimensional** sub-space of the space of all global sections \(H^{0}(U,O(D))\). Hence we can consider notions of volumes similarly to the projective case, and it is shown in [21] that these volume functions show properties analogous to the classical projective volumes (see [21], section 5). However, following the approach in [17], in this article we construct **Okounkov bodies** \(\Delta(\overline{D})\) for the graded series \(\{H^{0}(U,m\overline{D})\}_{m\in\mathbb{N}}\). The construction is essentially a special case of the construction sketched in Definition 1.16 of [17] where we take \(W_{m}=H^{0}(U,m\overline{D})\subseteq H^{0}(U,O(D))\). If the divisor \(\overline{D}\) is big, _i.e._ it has positive volume as defined in [21], we show that the Lebesgue volume of the body is essentially the same as the algebraic volume up to scaling. The first main theorem of our article is as follows.
**Theorem** (A).: _Suppose we have a big adelic divisor \(\overline{D}\) on a normal quasi-projective variety \(U\) and suppose \(\Delta(\overline{D})\) is the Okounkov body associated to \(\overline{D}\). Furthermore let \(\widehat{\operatorname{vol}}(\overline{D})\) be the adelic volume defined in Theorem 5.2.1 of [21]. Then we have_
\[\operatorname{vol}_{\mathbb{R}^{d}}(\Delta(\overline{D}))=\lim_{m\to\infty} \frac{\dim_{K}(\overline{H^{0}}(U,m\overline{D}))}{m^{d}}=\frac{1}{d!}\cdot \widehat{\operatorname{vol}}(\overline{D})\]
Continuing the analogy with the approach of [17], we construct global bodies for adelic Okounkov bodies in order to study the variation of these bodies. Although we do not have finiteness of the Neron-Severi space associated to adelic divisors, it turns out that there exists a global convex body whose fibers give the Okounkov bodies, even though, in contrast to [17], this global body depends on a choice of divisors. This is the content of our next theorem.
**Theorem** (B).: _Let \(\overline{D}\) and \(\overline{E}\) be adelic divisors on a normal quasi-projective variety \(U\) such that \(\overline{D}\) is big. Then there exists a convex body \(\Delta(U)=\Delta(U,\overline{D},\overline{E})\subset\mathbb{R}^{d+2}\) with the property that for any \(\vec{a}=(a_{1},a_{2})\in\mathbb{Q}^{2}\) with \(a_{1}\overline{D}+a_{2}\overline{E}\) big, we have_
\[\Delta(a_{1}\overline{D}+a_{2}\overline{E})=\Delta(U)\cap(\mathbb{R}^{d}\times \{\vec{a}\})\]
The above two theorems combine to prove Theorem 5.2.1 of [21] for big adelic divisors using convex geometric methods and Okounkov bodies. This volume essentially measures the asymptotic growth of the global sections which arise as restrictions of global sections from the bigger variety, just as in the classical projective case. Furthermore, we show that not only the volume of a big adelic divisor but also the Okounkov body constructed in this article is approximated (in terms of the Hausdorff metric) by the corresponding Okounkov bodies of the projective models defining the divisor.
Next we go on to define the notions of restricted volumes of adelic divisors along a closed sub-variety \(E\) of \(U\) using Okounkov bodies. The restricted volume essentially measures the asymptotic growth of global sections of \(\overline{D}|_{E}\) which arise as restrictions of sections of \(\overline{D}\) over \(U\) to \(E\), analogously to the classical projective setting (see [18] for more details). Analogously to the projective case, we can form the convex geometric objects \(\Gamma_{U|E}(\overline{D})\), \(\Delta_{U|E}(\overline{D})\) and the algebraic objects \(H^{0}(U|E,\overline{D})\), \(\widehat{\operatorname{vol}}_{U|E}(\overline{D})\) for a given adelic divisor \(\overline{D}\). In order to have relations analogous to those for the adelic volume, we introduce the notion of the augmented base locus of an adelic divisor in analogy with the projective augmented base locus (see section 2.4, [17]). Our definition, although very similar to the projective one, requires some work to be shown to be well-defined. Since we do not have Serre finiteness on quasi-projective varieties, we have to use the main result of [16] to show well-definedness. We go on to show that when \(E\) is not contained in the augmented base locus, the expected properties hold, which is the content of our next theorem.
**Theorem** (C).: _Suppose \(\overline{D}\) is an adelic divisor on a normal quasi-projective variety \(U\) over \(K\). Furthermore suppose \(E\) is a closed irreducible sub-variety of \(U\) not contained in the augmented base locus of \(\overline{D}\). Then we have_
\[\operatorname{vol}_{\mathbb{R}^{d}}(\Delta_{U|E}(\overline{D}))=\lim_{m\to \infty}\frac{\dim_{K}(H^{0}(U|E,O_{E}(m\overline{D})))}{m^{d}}=\frac{1}{d!} \cdot\widehat{\operatorname{vol}}_{U|E}(\overline{D})\]
_where \(\dim(E)=d\)_
We go on to show the existence of a global body even for restricted volumes, and hence the variation of these restricted Okounkov bodies also has desirable properties such as continuity.
The organisation of the article is as follows. In the first three sections of the first chapter we review the notions of adelic divisors and their space of global sections following sub-section 2.4 in [10]. In the fourth section we construct the Okounkov bodies for adelic divisors and show some preliminary properties of them. In the fifth section we prove our first main theorem relating the algebraic adelic volumes with Euclidean volumes of their Okounkov bodies. In the next two sections we construct the global bodies and show that their fibers essentially give the variation of Okounkov bodies in fixed directions. We also deduce certain corollaries of the existence of global bodies. In the first section of the second chapter we define the augmented base locus of an adelic divisor. We go on to define the restricted volume of an adelic divisor along a closed sub-variety in the next section. We relate them to Euclidean volumes of restricted Okounkov bodies and show the existence of global bodies in analogy to the adelic volume in the next two sections. We end the chapter by obtaining certain corollaries of restricted volumes similar to those of ordinary volumes in chapter 1.
### Adelic divisors
We begin by giving a short review of adelic divisors which are our main objects of interest in this article. We fix a quasi-projective variety \(U\) over any field \(k\). By a _projective model_ of \(U\), we mean a projective variety \(X\) over \(k\) which contains \(U\) as an open dense subset via an open immersion \(U\hookrightarrow X\). Given a projective model \(X\) of \(U\), we have the group of Cartier \(\mathbb{Q}\)-divisors denoted by \(\operatorname{Div}(X)_{\mathbb{Q}}=\operatorname{Div}(X)\otimes_{\mathbb{Z}} \mathbb{Q}\). Then we consider the group of \((\mathbb{Q},\mathbb{Z})\)-divisors \(\operatorname{Div}(X,U)\) as follows
\[\operatorname{Div}(X,U)=\{(D,\mathcal{D})\in\operatorname{Div}(U)\oplus \operatorname{Div}(X)_{\mathbb{Q}}\mid\mathcal{D}|_{U}=D\text{ in }\operatorname{Div}(U)_{\mathbb{Q}}\}\]
where \(\mathcal{D}|_{U}\) denotes the image of \(\mathcal{D}\) under the pull-back morphism \(\operatorname{Div}(X)_{\mathbb{Q}}\to\operatorname{Div}(U)_{\mathbb{Q}}\).
Note that the set of _all_ projective models of a given \(U\) form an inverse system which in turn makes the set of \((\mathbb{Q},\mathbb{Z})\)-divisors into a directed system via pull-backs. Then we can form the direct limit to define the group of _model divisors_ as follows
\[\operatorname{Div}(U/k)_{\operatorname{mod}}=\lim_{X}\operatorname{Div}(X,U)\]
where above the direct limit is taken as \(X\) varies over all projective models of \(U\). Next note that there is a notion of effectivity in both the groups \(\operatorname{Div}(X)_{\mathbb{Q}}\) and \(\operatorname{Div}(U)\) which induces a partial order on \(\operatorname{Div}(X,U)\) where \((D,\mathcal{D})\leq(D^{\prime},\mathcal{D}^{\prime})\) if and only if both \(D^{\prime}-D\) and \(\mathcal{D}^{\prime}-\mathcal{D}\) are effective in \(\operatorname{Div}(U)\) and \(\operatorname{Div}(X)_{\mathbb{Q}}\) respectively. This partial order induces a partial order in \(\operatorname{Div}(U/k)_{\operatorname{mod}}\) by passing to direct limits.
By a _boundary divisor_ of \(U\) over \(k\), we mean a tuple \((X_{0},D_{0})\) where \(X_{0}\) is a projective model of \(U\) and \(D_{0}\) is an effective Cartier divisor on \(X_{0}\) with \(\operatorname{Supp}(D_{0})=X_{0}-U\). Note that such a boundary divisor always exists, which can be seen by choosing any projective model \(X_{0}^{\prime}\) of \(U\) and blowing up \(X_{0}^{\prime}\) along the reduced center \(X_{0}^{\prime}-U\). Then note that for any non-zero rational \(r\in\mathbb{Q}\) we can view \(rD_{0}\) as an element of \(\operatorname{Div}(X_{0},U)\), and hence as an element of \(\operatorname{Div}(U/k)_{\operatorname{mod}}\), by setting the component in \(\operatorname{Div}(U)\) to be \(0\).
We can finally put a norm, called a _boundary norm_, on \(\operatorname{Div}(U/k)_{\operatorname{mod}}\) as follows
\[||\cdot||_{D_{0}}\colon\operatorname{Div}(U/k)_{\operatorname{mod}}\to[0,\infty]\]
\[||\overline{D}||_{D_{0}}=\inf\{q\in\mathbb{Q}_{>0}\ |\ -qD_{0}\leq\overline{D} \leq qD_{0}\text{ in }\operatorname{Div}(U/k)_{\text{mod}}\}\]
It is shown in Lemma 2.4.1 of [13] that \(||\cdot||_{D_{0}}\) is actually a norm and the topology induced by it on \(\operatorname{Div}(U/k)_{\text{mod}}\) is independent of the chosen boundary divisor \((X_{0},D_{0})\). Hence we can talk about the _boundary topology_ on \(\operatorname{Div}(U/k)_{\text{mod}}\) as the topology induced by a boundary norm coming from _any_ boundary divisor. We finally define the group of _adelic divisors_, denoted by \(\operatorname{\widetilde{Div}}(U,K)\), as the completion of the topological space \(\operatorname{Div}(U/k)_{\text{mod}}\) with respect to the boundary topology described above. Note that then an adelic divisor is given by the data \(\{X_{i},D_{i}\}\), where \(X_{i}\) are projective models and \(D_{i}\in\operatorname{Div}(X_{i},U)\), together with a sequence of positive rationals \(\{q_{i}\}\) converging to \(0\), satisfying the effectivity relations
\[-q_{i}D_{0}\leq D_{i}-D_{j}\leq q_{i}D_{0}\text{ in }\operatorname{Div}(U/k)_{ \text{mod}}\text{ for all }j\geq i\]
_Remark_.: If we assume \(U\) to be normal, we can choose the models \(X_{i}\) to be normal and further embedding the group of Cartier divisors into Weil divisors we can look at \(D_{i}\) just as elements of \(\operatorname{Div}(X_{i})_{\mathbb{Q}}\) and the effectivity relation to be holding in just \(\operatorname{Div}(X_{i})_{\mathbb{Q}}\) instead of \(\operatorname{Div}(U/k)_{\text{mod}}\). This is due to the fact that the group of Weil divisors on \(U\) has no torsion.
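As a minimal illustrative example (not taken from the references above), let \(U=\mathbb{A}^{1}\) with projective model \(X_{0}=\mathbb{P}^{1}\) and boundary divisor \(D_{0}=[\infty]\). Any sequence of non-negative rationals \(a_{i}\) with \(|a_{i}-a_{j}|\leq q_{j}\) for all \(i\geq j\) and \(q_{j}\to 0\) then yields an adelic divisor represented by

\[D_{i}=a_{i}\,[\infty]\in\operatorname{Div}(X_{0})_{\mathbb{Q}},\]

since \(D_{i}-D_{j}=(a_{i}-a_{j})[\infty]\) satisfies \(-q_{j}D_{0}\leq D_{i}-D_{j}\leq q_{j}D_{0}\) and every \(D_{i}\) restricts to the trivial divisor on \(U\).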
### Space of global sections of an adelic divisor
We fix an algebraically closed field \(K\) and a normal quasi-projective variety \(U\) over \(K\). As described in the previous section, we have the notion of the group of _adelic divisors_ \(\operatorname{\widetilde{Div}}(U,K)\) (see [13], sub-section 2.4.1, for more details), which are given by a compatible sequence of models \(\{X_{i},D_{i}\}\), where the \(D_{i}\) are Cartier \(\mathbb{Q}\)-divisors on the projective models \(X_{i}\) restricting to a common Cartier divisor \(D\) on \(U\) and satisfying the Cauchy condition with respect to a boundary divisor \(D_{0}\) defined over a projective model \(X_{0}\), _i.e._ there exists a sequence of positive rational numbers \(\{q_{j}\}\) converging to \(0\) such that
\[D_{j}-q_{j}D_{0}\leq D_{i}\leq D_{j}+q_{j}D_{0}\text{ for all }i\geq j\,. \tag{1}\]
where \(D_{0}\) is an effective Cartier divisor on \(X_{0}\) with support exactly equal to the complement of \(U\) in \(X_{0}\), and the above effectivity relations are considered in a common model (for details see [13], section 2). Note that the definition of an adelic divisor does not depend on the particular choice of the boundary divisor \(D_{0}\), as shown in [13, Lemma 2.4.1]. We denote this data by \(\overline{D}\). Given such an adelic divisor, we introduce the space of global sections
\[H^{0}(U,\overline{D})=H^{0}(U,O(\overline{D}))=\{f\in\kappa(U)^{\times}\ |\ \operatorname{div}(f)+\overline{D}\geq 0\}\cup\{0\}\]
following [13, section 5.1.2]. In the above definition, \(\operatorname{div}(f)\) is the adelic divisor obtained by picking the divisor corresponding to \(f\in\kappa(U)^{\times}=\kappa(X)^{\times}\) on any projective model \(X\) of \(U\), and \(\operatorname{div}(f)+\overline{D}\geq 0\) means that the left hand side can be represented by a sequence of effective divisors on the corresponding models.
_Remark_.: It is shown in [13, Lemma 5.1.7(2)] that this space is always finite dimensional. This will be our analogue for the usual space of global sections on which we construct Okounkov bodies. For this purpose, note that by restricting the effectivity relation \(\operatorname{div}(f)+\overline{D}\geq 0\) to \(U\), we can identify \(H^{0}(U,\overline{D})\) with a finite dimensional vector sub-space of the space of all sections \(H^{0}(U,O(D))\) (which in general is infinite dimensional). This will always be our way of viewing the vector spaces \(H^{0}(U,\overline{D})\).
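Continuing the hedged toy example of the previous section: for the model adelic divisor \(\overline{D}=a[\infty]\) on \(U=\mathbb{A}^{1}\subset\mathbb{P}^{1}\) with \(a\in\mathbb{Q}_{\geq 0}\), a non-zero rational function \(f\) satisfies \(\operatorname{div}(f)+a[\infty]\geq 0\) exactly when \(f\) is a polynomial of degree at most \(a\) in the affine coordinate \(x\), so that

\[H^{0}(U,\overline{D})=\bigoplus_{k=0}^{\lfloor a\rfloor}K\cdot x^{k},\qquad\dim_{K}H^{0}(U,\overline{D})=\lfloor a\rfloor+1,\]

a finite dimensional sub-space of the infinite dimensional space \(H^{0}(U,O(D))=K[x]\).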
### Different notions of effective sections
Note that \(D_{i}\) can be viewed as a (model) adelic divisor \(\overline{D}_{i}\) in \(\operatorname{\widetilde{Div}}(U,K)\) and consequently we have the space of global sections \(H^{0}(U,\overline{D_{i}})\) as before, where we put the overline to emphasize it is viewed as a model adelic divisor. However viewing \(D_{i}\) as a \(\mathbb{Q}\)-divisor on the projective variety \(X_{i}\) we can also define the space of global sections as before
\[H^{0}(X_{i},D_{i})^{\prime}=\{f\in\kappa(X_{i})^{\times}\ |\ \operatorname{div}(f)+D_{i}\geq 0 \text{ in }\operatorname{Div}(X_{i})_{\mathbb{Q}}\}\cup\{0\}\,.\]
by restricting our attention only to the projective model \(X_{i}\). These two notions of effective sections can a priori be different. However, if \(U\) is normal, then by [13, Lemma 5.1.5 and Remark 5.1.6] they are canonically identified, so the two notions agree. Next we will obtain some inclusions.
**Lemma 1.1**.: _We have the sequence of inclusions_
\[H^{0}(X_{j},k(D_{j}-q_{j}D_{0}))^{\prime}\hookrightarrow\ H^{0}(U,O(k\overline{D }))\hookrightarrow H^{0}(X_{j},k(D_{j}+q_{j}D_{0}))^{\prime}\]
_for all \(k\in\mathbb{N}\) and for all \(j\)._
Proof.: Note that by our discussion above the two extremes of the sequence can be replaced by
\[H^{0}(U,k(\overline{D_{j}}-q_{j}\overline{D_{0}}))\quad\text{and}\quad H^{0}( U,k(\overline{D_{j}}+q_{j}\overline{D_{0}}))\]
respectively, as \(U\) is assumed to be normal. Therefore, the statement follows from the chain of inequalities
\[\overline{D}_{j}-q_{j}\overline{D}_{0}\leq\overline{D}\leq\overline{D}_{j}+q_ {j}\overline{D}_{0}\,,\]
which is an immediate consequence of (1).
Next we define the volume of an adelic line bundle following [13, sub-section 5.2.2].
**Definition 1.2**.: _Given an adelic line bundle \(\overline{D}\) on a quasi-projective variety \(U\) as above, we define the volume of \(\overline{D}\) as_
\[\widehat{\operatorname{vol}}(\overline{D})=\limsup_{m\to\infty}\frac{\dim_{K}(H^{0}(U,m\overline{D}))}{m^{d}/d!}\,,\]
_where \(d\) is the dimension of \(U\). We call an adelic divisor big if \(\widehat{\operatorname{vol}}(\overline{D})>0\)._
We will primarily be interested in the Okounkov bodies of the big adelic divisors.
_Remark_.: It is shown in [13, Theorem 5.2.1(1)] that the \(\limsup\) in Definition 1.2 is actually a limit, by using the fact that the adelic volume is the limit of the projective volumes of the \(\mathbb{Q}\)-divisors on the models. However, we will not assume that here; we will use the theory of Okounkov bodies to show independently that this volume is given by a limit.
### Okounkov bodies for adelic divisors
We recall the valuation function crucial in the definition of Okounkov bodies. Note that, as we remarked at the end of section 1.1, every element of \(\overline{H^{0}}(U,O(\overline{D}))\) can be identified with a global section of \(O(D)\) on \(U\) by restricting the effectivity relation \(\operatorname{div}(f)+\overline{D}\geq 0\) to \(U\). Now we fix a closed regular point \(x\in U(K)\) and consider any local trivialisation \(s_{0}\) of \(O(D)\) around \(x\). Then every element \(s\in H^{0}(U,\overline{D})\subseteq H^{0}(U,O(D))\) induces a regular function \(f=\frac{s}{s_{0}}\) around \(x\), and hence an element of the completion \(\widehat{O_{U,x}}\cong K[[x_{1}\dots x_{d}]]\), where \(d\) is the dimension of \(U\) and the isomorphism follows from the regularity of \(x\). Then we define a valuation-like function \(\nu_{x}\) as follows:
\[\nu_{x}(f)=\min\{\alpha\in\mathbb{N}^{d}\mid a_{\alpha}\neq 0\},\qquad\text{where }f=\sum_{\alpha}a_{\alpha}\mathrm{x}^{\alpha}\ \text{in}\ \widehat{O_{U,x}},\]
where the minimum is taken with respect to the lexicographic order determined by the variables \(x_{1},\dots,x_{d}\), and this function is independent of the choice of \(s_{0}\). Now the choice of a flag \(x=Y_{0}\subset Y_{1}\subset\dots\subset Y_{d}=U\) centered at \(x\) gives a choice of variables \(x_{1},\dots,x_{d}\) as above and hence yields a valuation function \(\nu_{x}\) on \(\overline{H^{0}}(U,\overline{D})\). Note that the sub-spaces \(\overline{H^{0}}(U,m\overline{D})\) are finite dimensional and induce a graded linear series \(\{V_{m}\subseteq H^{0}(U,O(mD))\}\) in the sense of section 1.3 of [10]. Hence we can define the semi-groups and convex bodies similarly.
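Once a power series expansion is fixed, \(\nu_{x}\) is purely combinatorial. The following Python sketch is a minimal illustration, assuming a hypothetical encoding of \(f\) as a dictionary from exponent tuples to coefficients; it computes the lexicographic minimum of the exponents carrying a non-zero coefficient.

```python
def nu(f):
    """Lexicographically smallest exponent vector with non-zero coefficient.

    f is a dict mapping exponent tuples (alpha_1, ..., alpha_d) to coefficients,
    representing a (truncated) power series in K[[x_1, ..., x_d]].
    """
    return min(alpha for alpha, coeff in f.items() if coeff != 0)

# Example with d = 2: f = x_2^3 + 5*x_1*x_2^2 + x_1^2
f = {(0, 3): 1, (1, 2): 5, (2, 0): 1}
print(nu(f))  # (0, 3): Python compares tuples lexicographically
```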
**Definition 1.3**.: _Suppose we have the adelic divisor \(\overline{D}\). Then we can define the semi-group_
\[\Gamma(\overline{D})=\{(\alpha,m)\in\mathbb{N}^{d+1}\mid\alpha=\nu_{x}(s)\text{ for some non-zero }s\in\overline{H^{0}}(U,m\overline{D})\}\]
_We further define \(\Gamma(\overline{D})_{m}=\Gamma(\overline{D})\cap(\mathbb{N}^{d}\times\{m\})\). Finally we define the associated Okounkov body of \(\overline{D}\) as_
\[\Delta(\overline{D})=\text{closed convex hull}(\cup_{m}\frac{1}{m}\cdot\Gamma( \overline{D})_{m})=\Sigma(\Gamma(\overline{D}))\cap(\mathbb{R}^{d}\times\{1\})\]
_where \(\Sigma(\cdot)\) denotes taking the closed convex cone in the ambient Euclidean space._
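To see Definition 1.3 in the simplest possible situation, return to the hedged toy example \(\overline{D}=a[\infty]\) on \(U=\mathbb{A}^{1}\) with \(a\in\mathbb{Q}_{>0}\), and take the flag \(\{0\}\subset\mathbb{A}^{1}\). The sections of \(m\overline{D}\) are the polynomials of degree at most \(\lfloor ma\rfloor\), whose valuations at \(0\) are \(0,1,\dots,\lfloor ma\rfloor\), so that

\[\Gamma(\overline{D})_{m}=\{0,1,\dots,\lfloor ma\rfloor\}\times\{m\},\qquad\Delta(\overline{D})=[0,a]\subset\mathbb{R},\]

and \(\operatorname{vol}_{\mathbb{R}}(\Delta(\overline{D}))=a\) agrees with \(\frac{1}{1!}\widehat{\operatorname{vol}}(\overline{D})\), in accordance with Theorem 1.10 below.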
We are going to derive the required properties of \(\Gamma(\overline{D})\) and \(\Delta(\overline{D})\) with the goal of relating the volume of the latter to the volume of the adelic divisor. We begin by showing that, provided \(\overline{D}\) is big, the models are eventually big when perturbed a little by the boundary divisor \(D_{0}\). This is immediate if we assume Proposition 5.2.1 of [21]; however, even without the full strength of that result, we have the following lemma:
**Lemma 1.4**.: _Suppose \(\overline{D}\) is a big adelic divisor given by models \(\{X_{i},D_{i}\}\) as above with boundary divisor \(D_{0}\). Then for \(j\gg 0\), \(D_{j}-q_{j}D_{0}\) (and hence \(D_{j}\)) is a big \(\mathbb{Q}\)-divisor on \(X_{j}\). In particular, we deduce that there exists an \(r_{0}\) such that \(H^{0}(U,r\overline{D})\neq\{0\}\) for all \(r>r_{0}\)._
Proof.: We are going to use Fujita approximation ([18]) for \(\mathbb{Q}\)-divisors on projective models. Note that the RHS of the inclusions in Lemma 1.1 gives us that \(\operatorname{vol}(D_{j}+q_{j}D_{0})\geq\widehat{\operatorname{vol}}(\overline{D})\). Hence for \(\epsilon_{j}>0\), we can find by Fujita approximation an ample \(\mathbb{Q}\)-divisor \(A_{j}\) on a birational modification \(\pi\colon X_{j}^{\prime}\to X_{j}\) such that \(\pi^{*}(D_{j}+q_{j}D_{0})\geq A_{j}\) and \(\operatorname{vol}(A_{j})\geq\operatorname{vol}(D_{j}+q_{j}D_{0})-\epsilon_{j}\). Then consider the \(\mathbb{Q}\)-divisor \(A_{j}-2q_{j}D_{0}\leq D_{j}-q_{j}D_{0}\), where we consider this effectivity relation in \(X_{j}^{\prime}\) by pulling back both \(D_{0}\) and \(D_{j}\) to \(X_{j}^{\prime}\) and we omit the notations of pull-backs. Write \(D_{0}=A-B\) where \(A\) and \(B\) are nef effective \(\mathbb{Q}\)-divisors in \(X_{0}\). Then we have
\[\operatorname{vol}(D_{j}-q_{j}D_{0})\geq\operatorname{vol}(A_{j}-2q_{j}D_{0})=\operatorname{vol}(A_{j}+2q_{j}B-2q_{j}A)\]
\[\geq(A_{j}+2q_{j}B)^{d}-2dq_{j}(A_{j}+2q_{j}B)^{d-1}\cdot A\geq A_{j}^{d}-2dq_{j}(A_{j}+2q_{j}B)^{d-1}\cdot A\]
\[\geq\operatorname{vol}(D_{j}+q_{j}D_{0})-\epsilon_{j}-2dq_{j}(A_{j}+2q_{j}B)^{d-1}\cdot A\]
Here, in the second inequality we have applied Siu's inequality to the nef divisors \(A_{j}+2q_{j}B\) and \(2q_{j}A\) (both \(A\) and \(B\) are nef in \(X_{0}\), nefness is preserved under birational pull-backs, and \(A_{j}\) is ample in \(X_{j}^{\prime}\)); in the third inequality we have used that \(A_{j}\) is nef and \(B\) is nef and effective; and in the last one we have used \(A_{j}^{d}=\operatorname{vol}(A_{j})\geq\operatorname{vol}(D_{j}+q_{j}D_{0})-\epsilon_{j}\). Now choose \(\epsilon_{j}\to 0\) as \(j\to\infty\), and suppose for the moment that we can choose a nef model divisor \(N\) such that \(A_{j}+2q_{j}\pi_{j}^{*}B\leq\pi_{j}^{*}N\) for all \(j\). Then we get that
\[\operatorname{vol}(D_{j}-q_{j}D_{0})\geq\operatorname{vol}(D_{j}+q_{j}D_{0})-2 dq_{j}M-\epsilon_{j}\]
where \(M=N^{d-1}A\) is a fixed number independent of \(j\). Noting that both \(\epsilon_{j}\) and \(q_{j}\) go to \(0\) as \(j\to\infty\), and that \(\operatorname{vol}(D_{j}+q_{j}D_{0})\geq\widehat{\operatorname{vol}}(\overline{D})>0\) is bounded from below independently of \(j\), the above inequality shows that for large enough \(j\) we have \(\operatorname{vol}(D_{j}-q_{j}D_{0})>0\), which finishes the claim.
Hence we are reduced to showing that there exists a nef model divisor \(N\) in \(\operatorname{Div}(U/k)_{\operatorname{mod}}\) such that \(\pi_{j}^{*}N\geq A_{j}+2q_{j}\pi_{j}^{*}B\) for all \(j\). To this end choose a positive integer \(r\) such that \(r>q_{j}\) for all \(j\). Then consider the divisor \(D_{1}+2rD_{0}+2rB\) in \(X_{0}\). By Serre's finiteness there is a nef divisor \(N\) on \(X_{0}\) such that \(N\geq D_{1}+2rD_{0}+2rB\). Then since \(r>q_{1}>0\) and \(D_{0}\) is effective, we get that \(N\geq D_{1}+q_{1}D_{0}+q_{j}D_{0}+2q_{j}B\) for all \(j\). But note that we have the effectivity relation \(D_{1}+q_{1}D_{0}\geq D_{j}\) and hence we conclude \(N\geq D_{j}+q_{j}D_{0}+2q_{j}B\). Since we have the effectivity \(\pi_{j}^{*}(D_{j}+q_{j}D_{0})\geq A_{j}\), pulling back by \(\pi_{j}\) we deduce \(\pi_{j}^{*}N\geq\pi_{j}^{*}(D_{j}+q_{j}D_{0})+2q_{j}\pi_{j}^{*}B\geq A_{j}+2q_{j}\pi_{j}^{*}B\) as required.
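For the reader's convenience, we recall the standard projective form of Siu's inequality used in the proof above: if \(P\) and \(N\) are nef \(\mathbb{Q}\)-divisors on a \(d\)-dimensional projective variety, then

\[\operatorname{vol}(P-N)\geq P^{d}-d\,P^{d-1}\cdot N.\]

In the proof this is applied with \(P=A_{j}+2q_{j}B\) and \(N=2q_{j}A\).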
From now onwards, thanks to the previous lemma, we fix once and for all a \(j\) such that \(D_{k}-q_{k}D_{0}\) is big for all \(k\geq j\). The first result we want to state is the boundedness of \(\Delta(\overline{D})\), where we use the analogous result for integral divisors on projective varieties from [17] to obtain our claim.
We start with a sequence of inclusions.
**Lemma 1.5**.: _We have a sequence of inclusions_
\[\Gamma(D_{j}-q_{j}D_{0})_{k}\subseteq\Gamma(\overline{D})_{k}\subseteq\Gamma(D_{j }+q_{j}D_{0})_{k}\]
_for all positive integers \(k\) and hence as a consequence_
\[\Delta(D_{j}-q_{j}D_{0})\subseteq\Delta(\overline{D})\subseteq\Delta(D_{j}+q_{ j}D_{0})\]
Proof.: First note that it makes sense to apply \(\Gamma(\cdot)\) and \(\Delta(\cdot)\) to the right and left extremities above, even though the arguments are \(\mathbb{Q}\)-divisors, by just viewing them as model adelic divisors in \(\operatorname{Div}(U/k)_{\operatorname{mod}}\). The first sequence of inclusions then follows easily from the injective maps in Lemma 1.1, noting that the construction of \(\nu_{x}\) is local. The second set of inclusions then easily follows from the definition of the closed convex hull generated by a subset.
Finally we can state the boundedness result that we wanted to obtain.
**Lemma 1.6**.: _The subset \(\Delta(\overline{D})\) is a compact convex subset of \(\mathbb{R}^{d}\)._
Proof.: The said subset is already closed and convex, hence it is enough to prove that it is bounded. Note that \(R=D_{j}+q_{j}D_{0}\) is a \(\mathbb{Q}\)-divisor on \(X_{j}\) and hence there is an integer \(t\) such that \(tR\) is an integral Cartier divisor. From the RHS of the inclusions in Lemma 1.1 we conclude that any section \(s\in\overline{H^{0}}(U,kt\overline{D})\) induces a section \(s^{\prime}\in H^{0}(X_{j},ktR)^{\prime}=H^{0}(U,kt\overline{R})\), and both of these have the same valuation vector. Hence we get that \(\Gamma(t\overline{D})\subseteq\Gamma(tR)\), where the RHS is well defined as \(tR\) is an integral Cartier divisor, which in turn yields by construction that \(\Delta(t\overline{D})\subseteq\Delta(tR)\). On the other hand we have \(\Gamma(\overline{D})\subseteq\frac{1}{t}\cdot\Gamma(t\overline{D})\) and hence by construction we get \(\Delta(\overline{D})\subseteq\frac{1}{t}\cdot\Delta(t\overline{D})\). This readily gives the boundedness, as \(\Delta(tR)\) is bounded by Lemma 1.10 of [13] since \(tR\) is an integral divisor and \(X_{j}\) is projective.
_Remark_.: The proof of boundedness in the projective case in [13] is based on intersecting ample divisors with the flag which gives the Okounkov construction. It might be interesting to try to give a proof using intersection theory, as there is now an intersection theory for adelic line bundles on quasi-projective varieties. However, the notion of an "ample" or "positive" adelic divisor is not immediate to formulate, since the pull-back of an ample bundle by a birational morphism is not necessarily ample, and this might pose a problem.
### Volumes of Okounkov bodies
We want to relate the volume of the Okounkov body \(\Delta(\overline{D})\) with the volume of the adelic divisor \(\overline{D}\) as defined in [10]. It will turn out that they are equal (up to scaling), analogously to the projective case. We start with a lemma listing the properties of \(\Gamma(\overline{D})\) which are sufficient to assert the volume equality. We again fix a \(j\) as in the previous section.
We begin by recording a result which relates the dimension of the space of global sections with the cardinality of slices of \(\Gamma(\overline{D})\). We denote by \(\Gamma(\overline{D})_{m}=\Gamma(\overline{D})\cap(\mathbb{N}^{d}\times\{m\})\). Then we have
**Lemma 1.7**.: _We have \(\#\Gamma_{m}=\dim_{K}(\overline{H^{0}}(U,m\overline{D}))\)_
Proof.: The claim immediately follows from Lemma 1.4 of [13] by taking \(W=\overline{H^{0}}(U,m\overline{D})\) and noting that \(W\) is finite dimensional from Lemma 5.1.7 in [10].
Next we want to naturally extend the notion of Okounkov bodies to \(\mathbb{Q}\)-adelic divisors. One necessary property is that the construction of \(\Delta(\cdot)\) behaves well with taking integral multiples of adelic divisors, which is the content of Lemma 1.11 below. Note that if we can show \(\operatorname{vol}_{\mathbb{R}}(\Delta(\overline{D}))=\lim_{m\to\infty}\frac{\#\Gamma_{m}}{m^{d}}\), then together with Lemma 1.7 we get that the Euclidean volume of \(\Delta(\overline{D})\) agrees with the volume of \(\overline{D}\) as defined in Definition 1.2 up to scaling by \(d!\). It turns out that for this equality to hold, it is enough for \(\Gamma(\overline{D})\) to satisfy certain properties which are purely Euclidean geometric in nature. We state and prove them in the main lemma of this section; before that we prove a property needed there.
**Lemma 1.8**.: _Suppose \(\overline{D}\) is a big adelic divisor on a normal quasi-projective variety \(U\) given by the sequence of models \(\{X_{i},D_{i}\}\) and rationals \(\{q_{i}\to 0\}\) as usual. Then there is a model \(X_{j}\) such that for all ample divisors \(\overline{A}\) on \(X_{j}\), there exists a non-zero section \(s_{0}\in H^{0}(U,m\overline{D}-\overline{A})\) whenever \(m\) is a sufficiently large positive integer._
Proof.: The idea is to use the Kodaira lemma (Proposition 2.2.6 of [11]) in the projective case on models approximating \(\overline{D}\) from below. More precisely, suppose \(D^{\prime}_{j}=D_{j}-q_{j}D_{0}\). Then as \(\{D^{\prime}_{j}\}\) is a sequence also representing the big divisor \(\overline{D}\), by Lemma 1.4 we can find a \(j\) such that \(D^{\prime}_{j}\) is a big divisor. Now applying the Kodaira lemma to the big divisor \(D^{\prime}_{j}\) on the projective variety \(X_{j}\), we conclude that for all sufficiently large \(m\) there exists a non-zero section of \(O(mD^{\prime}_{j}-\overline{A})\) on \(X_{j}\), and restricting to \(U\) we get a non-zero section \(s_{0}\in H^{0}(U,mD^{\prime}_{j}-\overline{A})\) for all sufficiently large \(m\). Now the claim follows from noting that the effectivity relation \(\overline{D}\geq D^{\prime}_{j}\) implies that \(H^{0}(U,mD^{\prime}_{j}-\overline{A})\subseteq H^{0}(U,m\overline{D}-\overline{A})\).
**Lemma 1.9**.: _Suppose \(\overline{D}\) is a big adelic divisor on a normal quasi-projective variety \(U\) over \(K\). Then the graded semi-group \(\Gamma(\overline{D})\) satisfies the following properties:_
1. \(\Gamma_{0}=\{0\}\)__
2. _There exist finitely many vectors_ \((v_{i},1)\) _spanning a semi-group_ \(B\subseteq\mathbb{N}^{d+1}\) _such that_ \(\Gamma(\overline{D})\subseteq B\)_._
3. \(\Gamma(\overline{D})\) _generates_ \(\mathbb{Z}^{d+1}\) _as a group._
Proof.: The first point is trivial. For the second point we follow the proof of Lemma 2.2 in [10]. Denote by \(v_{i}(s)\) the \(i\)-th co-ordinate of the valuation vector of a section \(s\). Note that then \(v_{i}(s)\leq mb\) for some large constant \(b\) and for all non-zero \(s\in H^{0}(U,m\overline{D})\), due to the fact that \(\Delta(\overline{D})\) is bounded (Lemma 1.6) and contains \(\frac{1}{m}\cdot\Gamma(\overline{D})_{m}\) for each \(m\in\mathbb{N}\). Now a basic calculation shows that \(\Gamma(\overline{D})\) is contained in the semi-group generated by the finite set of integer vectors \(\{(a_{1},\dots,a_{d},1)\mid 0\leq a_{i}\leq b\}\), which shows the second point. Hence it is enough to prove the third point.
To this end, choose a model \(X_{j}\) which satisfies the condition of Lemma 1.8. Then choose a very ample divisor \(\overline{A}\) on \(X_{j}\) such that there exist sections \(\overline{s_{i}}\) of \(O(\overline{A})\) for \(i=0,1,\ldots,d\) with \(v(\overline{s_{i}})=e_{i}\), where \(v\) is the valuation vector with respect to the chosen flag, \(\{e_{i}\}\) is the standard basis of \(\mathbb{R}^{d}\) for \(i=1,\ldots,d\) and \(e_{0}\) is the zero vector, as suggested in the beginning of the proof of Lemma 2.2 in [10]. Restricting these sections gives sections \(s_{i}\in H^{0}(U,\overline{A})\) with \(v(s_{i})=e_{i}\). Now thanks to Lemma 1.8 and our choice of \(X_{j}\), for a sufficiently large integer \(m_{0}\) we can find non-zero sections \(t_{i}\in H^{0}(U,(m_{0}+i)\overline{D}-\overline{A})\) for \(i=0,1\) with valuation vectors \(v(t_{i})=f_{i}\). Then clearly we find non-zero sections \(s^{\prime}_{i}=s_{i}\otimes t_{0}\in H^{0}(U,m_{0}\overline{D})\) and \(s^{\prime\prime}_{0}=s_{0}\otimes t_{1}\in H^{0}(U,(m_{0}+1)\overline{D})\) with valuation vectors \(v(s^{\prime}_{i})=f_{0}+e_{i}\) for \(i=0,\ldots,d\) and \(v(s^{\prime\prime}_{0})=f_{1}\). Hence \(\Gamma(\overline{D})\) contains the vectors \((f_{0},m_{0})\), \((f_{0}+e_{i},m_{0})\) for \(i=1,\ldots,d\) and \((f_{1},m_{0}+1)\). This clearly shows that \(\Gamma(\overline{D})\) generates \(\mathbb{Z}^{d+1}\) as a group and finishes the proof.
We are ready to state the first main theorem of this chapter.
**Theorem 1.10**.: _Suppose we have a big adelic divisor \(\overline{D}\) on a normal quasi-projective variety \(U\) and suppose \(\Delta(\overline{D})\) is the Okounkov body associated to \(\overline{D}\) as constructed above. Furthermore let \(\widehat{\operatorname{vol}}(\overline{D})\) be the adelic volume defined in section 5 of [10]. Then we have_
\[\operatorname{vol}_{\mathbb{R}^{d}}(\Delta(\overline{D}))=\lim_{m\to\infty} \frac{\#\Gamma_{m}}{m^{d}}=\lim_{m\to\infty}\frac{\dim_{K}(\overline{H^{0}}(U,m \overline{D}))}{m^{d}}=\frac{1}{d!}\cdot\widehat{\operatorname{vol}}(\overline {D})\]
Proof.: With Lemma 1.9 and by basic arguments of Euclidean and convex geometry, as indicated in the proof of Proposition 2.1 of [10], we get that
\[\operatorname{vol}_{\mathbb{R}}(\Delta(\overline{D}))=\lim_{m\to\infty}\frac{\#\Gamma_{m}}{m^{d}}=\lim_{m\to\infty}\frac{\dim_{K}(H^{0}(U,m\overline{D}))}{m^{d}} \tag{2}\]
exists which clearly gives the claim.
_Remark_.: Note that the above theorem also proves that the \(\limsup\) in the definition of \(\widehat{\operatorname{vol}}(\overline{D})\) is actually given by a limit, directly from convex geometric properties of the Okounkov bodies, which is essentially the content of the first part of Theorem 5.2.1 of [13].
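Purely as a numerical illustration of the limit appearing in Theorem 1.10, consider a toy graded semi-group whose \(m\)-th slice consists of the lattice points of the \(m\)-fold dilated standard simplex in dimension \(d=2\) (a hypothetical example, not a semi-group arising from a specific divisor in the text); then \(\#\Gamma_{m}/m^{d}\) approaches the Euclidean volume \(1/2\) of the simplex:

```python
from math import comb

d = 2
for m in (10, 100, 1000, 10000):
    # lattice points (i, j) with i, j >= 0 and i + j <= m: there are C(m + 2, 2) of them
    gamma_m = comb(m + d, d)
    print(m, gamma_m / m**d)  # tends to 1/2 = volume of the standard 2-simplex
```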
We end this section by showing that the construction of Okounkov body is homogenous with respect to scaling.
**Lemma 1.11**.: _Suppose \(\overline{D}\) is a big adelic divisor on a normal quasi-projective variety \(U\). Then_
\[\Delta(t\overline{D})=t\cdot\Delta(\overline{D})\]
_for all positive integers \(t\). Hence we can naturally extend the construction of \(\Delta(\cdot)\) to big adelic \(\mathbb{Q}\)-divisors._
Proof.: We choose an integer \(r_{0}\) such that \(H^{0}(U,r\overline{D})\neq\{0\}\) for all \(r>r_{0}\). We can always do this as we assumed \(\overline{D}\) is big (see Lemma 1.4). Next choose a positive integer \(q_{0}\) such that \(q_{0}t-(t+r_{0})>r_{0}\). Then for all \(r_{0}+1\leq r\leq r_{0}+t\) we can find non-zero sections \(s_{r}\in H^{0}(U,r\overline{D})\) and \(t_{r}\in H^{0}(U,(q_{0}t-r)\overline{D})\), which gives the inclusions
\[H^{0}(U,mt\overline{D})\stackrel{\otimes s_{r}}{\hookrightarrow}H^{0}(U,(mt+r)\overline{D})\stackrel{\otimes t_{r}}{\hookrightarrow}H^{0}(U,(m+q_{0})t\overline{D})\]
which gives the corresponding inclusion of the graded semi-groups
\[\Gamma(t\overline{D})_{m}+e_{r}+f_{r}\subseteq\Gamma(\overline{D})_{mt+r}+f_{ r}\subseteq\Gamma(t\overline{D})_{m+q_{0}}\]
where \(e_{r}=v(s_{r})\) and \(f_{r}=v(t_{r})\). Now recalling the construction of \(\Delta(\cdot)\) and letting \(m\to\infty\) we get
\[\Delta(t\overline{D})\subseteq t\cdot\Delta(\overline{D})\subseteq\Delta(t \overline{D})\]
which clearly finishes our proof.
_Remark_.: Note that this homogeneity allows us to define Okounkov bodies for adelic \(\mathbb{Q}\)-divisors by passing to integral multiples, and hence to conclude that adelic volumes are homogeneous for big divisors, as in the projective case.
### Variation of Okounkov bodies
We fix a normal quasi-projective variety \(U\) over \(K\) and a big adelic divisor \(\overline{D}\) on it. Furthermore suppose \(\overline{E}\) is any adelic divisor on \(U\). We will construct a global convex body \(\Delta(U)=\Delta(U,\overline{D},\overline{E})\subseteq\mathbb{R}^{d}\times \mathbb{R}^{2}\) such that the fiber of this body over a vector \((a_{1},a_{2})\in\mathbb{Q}^{2}\) under the projection to \(\mathbb{R}^{2}\) will give us the Okounkov body of the adelic \(\mathbb{Q}\)-divisor \(a_{1}\overline{D}+a_{2}\overline{E}\) provided it is big. Furthermore we fix a flag \(Y_{d}\subset\ldots\subset Y_{0}\) as before. We are going to follow closely the arguments in Section 4 of [10]. All constructions depend on the choice of the divisors \(\overline{D}\) and \(\overline{E}\), but we fix them for this section and omit them from the notation. We start by defining the semi-group associated to these two adelic divisors.
**Definition 1.12**.: _Suppose \(\overline{D}\) and \(\overline{E}\) are as before. We define the graded semi-group \(\Gamma(U)\) as_
\[\Gamma(U)=\{(v(s),a_{1},a_{2})\mid a_{i}\in\mathbb{Z},\ 0\neq s\in H^{0}(U,a_{1}\overline{D}+a_{2}\overline{E})\}\]
_where \(v(\cdot)\) is the valuation corresponding to the chosen flag. Furthermore we define the global Okounkov body \(\Delta(U)\) as_
\[\Delta(U)=\text{closed convex cone}(\Gamma(U))\]
_which is a closed convex subset of \(\mathbb{R}^{d}\times\mathbb{R}^{2}\)._
As in the case with one bundle, we will deduce the properties needed from general properties of convex bodies and graded semi-groups. Before doing that we define certain terms necessary.
**Definition 1.13**.: _Suppose we have an additive semi-group \(\Gamma\) in \(\mathbb{R}^{d}\times\mathbb{R}^{2}\). Denote by \(P\) the projection from \(\mathbb{R}^{d}\times\mathbb{R}^{2}\) to \(\mathbb{R}^{2}\), and let \(\Delta=\Sigma(\Gamma)\) be the closed convex cone generated by \(\Gamma\). We define the support of \(\Delta\), denoted by \(\text{Supp}(\Delta)\), to be its image under \(P\); it coincides with the closed convex cone in \(\mathbb{R}^{2}\) generated by the image of \(\Gamma\) under \(P\). Finally, given a vector \(\vec{a}=(a_{1},a_{2})\in\mathbb{Z}^{2}\) we denote_
\[\Gamma_{\mathbb{N}\vec{a}} =\Gamma\cap(\mathbb{N}^{d}\times\mathbb{N}\vec{a})\] \[\Delta_{\mathbb{R}\vec{a}} =\Delta\cap(\mathbb{R}^{d}\times\mathbb{R}\vec{a})\]
_Furthermore we regard \(\Gamma_{\mathbb{N}\vec{a}}\) as a semi-group inside \(\mathbb{N}^{d}\times\mathbb{N}\vec{a}\cong\mathbb{N}^{d+1}\) and denote by \(\Sigma(\Gamma_{\mathbb{N}\vec{a}})\) the closed convex cone generated by it in \(\mathbb{R}^{d+1}\)._
With the above definitions we can state our next lemma.
**Lemma 1.14**.: _Suppose the semi-group \(\Gamma\) generates a sub-group of finite index in \(\mathbb{Z}^{d+2}\) and suppose \(\vec{a}\in\mathbb{N}^{2}\) such that \(\vec{a}\in\operatorname{int}(\text{Supp}(\Delta))\). Then we have_
\[\Delta_{\mathbb{R}\vec{a}}=\Sigma(\Gamma_{\mathbb{N}\vec{a}})\]
Proof.: The statement and the proof are identical to those of Proposition 4.9 of [10].
Next we want to show that the vectors which give rise to big combinations of the bundles \(\overline{D}\) and \(\overline{E}\) in fact belong to the interior \(\operatorname{int}(\text{Supp}(\Delta))\), which is the content of the next lemma. Note that by passing to rational multiples, just as in the projective case, we can similarly define \(\mathbb{Q}\)-adelic divisors. Furthermore, by the remark at the end of the previous section, we can also define Okounkov bodies for \(\mathbb{Q}\)-adelic divisors, which behave homogeneously.
**Lemma 1.15**.: _Suppose \(\vec{a}\in\mathbb{Q}^{2}\) such that \(a_{1}\overline{D}+a_{2}\overline{E}\) is a big adelic divisor. Then \(\vec{a}\in\operatorname{int}(\text{Supp}(\Delta))\)._
Proof.: We assume that \(\dim(U)>0\) as the \(0\)-dimensional case is degenerate. Clearly it is enough to show the case when \(a_{i}\in\mathbb{Z}\) because \(\text{Supp}(\Delta)\) is a cone and scaling sends open sets to open sets. We can assume that both \(\overline{D}\) and \(\overline{E}\) are given by models \(D_{i}\) and \(E_{i}\) on projective models \(X_{i}\) of \(U\) respectively along with a boundary divisor \(D_{0}\) and rationals \(q_{i}\to 0\) as in our usual notation. We first prove that for any rational \(q\in\mathbb{Q}\) such that \(\overline{D}+q\overline{E}\) is big, there is an \(\epsilon>0\) such that \((1,x)\) is in \(\text{Supp}(\Delta)\) for all \(x\in(q-\epsilon,q+\epsilon)\). Suppose first that \(q>0\). Then note that the sequence of models \(S_{j}=(D_{j}-q_{j}D_{0})+q(E_{j}-q_{j}D_{0})\) gives a Cauchy sequence defining \(\overline{D}+q\overline{E}\) and hence by Lemma 1.4 we get that \(S_{j}\) is big for large enough \(j\). Now due to the continuity of the volume function in the projective setting, we can find a rational \(0<q<p\) such that \((D_{j}-q_{j}D_{0})+p(E_{j}-q_{j}D_{0})\) is big. Now due to the effectivity relation
\[(D_{j}-q_{j}D_{0})+p(E_{j}-q_{j}D_{0})\leq\overline{D}+p\overline{E}\]
we deduce that the right hand side above is big. Hence we get that for some positive integer \(p_{0}\), \(p_{0}\cdot(1,p)\in P(\Gamma)\) where \(P\colon\mathbb{R}^{d}\times\mathbb{R}^{2}\to\mathbb{R}^{2}\) is the projection and \(\Gamma=\Gamma(U)\). As \(\overline{D}\) is assumed to be big, we also obtain that \(r_{0}\cdot(1,0)\in P(\Gamma)\) for some large positive integer \(r_{0}\). As \(p_{0}\) and \(r_{0}\) are positive, it is enough to find an \(\epsilon>0\) such that \((1,x)\) is in the convex cone generated by \((1,0)\) and \((1,p)\) for all \(x\in(q-\epsilon,q+\epsilon)\) because \(\text{Supp}(\Delta)\) is exactly the convex cone generated by \(P(\Gamma)\). But clearly \((1,x)\) is in the convex cone generated by \((1,0)\) and \((1,p)\) for all \(0<x<p\) which clearly yields the existence of one such \(\epsilon\) because \(0<q<p\). For the case when \(q<0\) we do a similar calculation but with \(E_{j}-q_{j}D_{0}\) being replaced by \(E_{j}+q_{j}D_{0}\). Finally for the case \(q=0\) using similar arguments we can find a rational number \(q_{0}\) such that all the three vectors \(p_{0}\cdot(1,0)\), \(p_{0}\cdot(1,-q_{0})\) and \(p_{0}\cdot(1,q_{0})\) are in \(P(\Gamma)\) for some large positive integer \(p_{0}\). Hence by the above arguments we get that \((1,x)\) is in \(\text{Supp}(\Delta)\) for \(x\in(-q_{0},q_{0})\).
Next we take any \(\vec{a}=(a_{1},a_{2})\) such that \(a_{1}\overline{D}+a_{2}\overline{E}\) is big. First suppose \(a_{1}\leq 0\). It is easy to see that the sum of two big adelic divisors is again big. Hence adding \((-a_{1})\overline{D}\) we conclude that \(a_{2}\overline{E}\) is big. Since the trivial adelic divisor is not big, we conclude that \(a_{2}\neq 0\). Then adding the big adelic divisor
\(-a_{1}\overline{D}\) we deduce that \(\overline{E}\) (resp. \(-\overline{E}\)) is big if \(a_{2}>0\) (resp. \(a_{2}<0\)). Hence in these two cases replacing \(\overline{D}\) by \(\overline{E}\) or \(-\overline{E}\) we are reduced to the case when \(a_{1}>0\) and hence we can assume WLOG that \(a_{1}>0\). In that case scaling by \(a_{1}\) we obtain that \(\overline{D}+q\overline{E}\) is big for \(q=\frac{a_{2}}{a_{1}}\) and by our considerations before we deduce that for some \(\epsilon>0\), \((1,x)\) is in the convex cone generated by \(P(\Gamma)\) for all \(x\in(q-\epsilon,q+\epsilon)\). We assume that \(a_{2}\geq 0\); the argument for \(a_{2}<0\) is the analogous one with signs changed. Then for any \(\kappa>0\) we have
\[\frac{a_{2}-\kappa}{a_{1}+\kappa}\leq\frac{a_{2}+t_{2}}{a_{1}+t_{1}}\leq\frac {a_{2}+\kappa}{a_{1}-\kappa}\]
for all \(t_{1},t_{2}\in(-\kappa,+\kappa)\). Choose \(\kappa>0\) so small that
\[(\frac{a_{2}-\kappa}{a_{1}+\kappa},\frac{a_{2}+\kappa}{a_{1}-\kappa})\subset( q-\epsilon,q+\epsilon)\]
and \(a_{1}\pm\kappa>0\), which we can do as \(q=\frac{a_{2}}{a_{1}}\) and \(a_{1}>0\). Hence, by the choice of \(\epsilon\), for any \(t_{1},t_{2}\in(-\kappa,\kappa)\) the vector \((1,\frac{a_{2}+t_{2}}{a_{1}+t_{1}})\), and hence \((a_{1}+t_{1},a_{2}+t_{2})\), is in the convex cone generated by \(P(\Gamma)\) and hence in \(\mathrm{Supp}(\Delta)\) as \(a_{1}+t_{1}>0\). This clearly shows that \((a_{1},a_{2})\in\mathrm{int}(\mathrm{Supp}(\Delta))\) and finishes the proof.
Next, to use Lemma 1.14 we have to prove that \(\Gamma(U)\) generates a sub-group of finite index in \(\mathbb{Z}^{d+2}\), which in particular guarantees that \(\operatorname{int}(\text{Supp}(\Delta(U)))\) is non-empty. This is going to be the content of our next lemma.
**Lemma 1.16**.: _The multi-graded semi-group \(\Gamma(U)\) constructed in Definition 1.12 generates \(\mathbb{Z}^{d+2}\) as a group._
Proof.: Arguing similarly as in the proof of Lemma 1.15, as \(\overline{D}\) is big, we can find a positive integer \(m\) such that \(m\overline{D}-\overline{E}\) is big. On the other hand we already know that \(\overline{D}\) is big. Note that the semi-groups \(\Gamma(m\overline{D}-\overline{E})\) and \(\Gamma(\overline{D})\) sit naturally as sub-semigroups of \(\Gamma(U)\). Moreover from Lemma 1.9 we deduce that \(\Gamma(\overline{D})\) and \(\Gamma(m\overline{D}-\overline{E})\) generate the sub-groups \(\mathbb{Z}^{d}\times\mathbb{Z}\cdot(1,0)\) and \(\mathbb{Z}^{d}\times\mathbb{Z}\cdot(m,-1)\). But the vectors \((1,0)\) and \((m,-1)\) generate \(\mathbb{Z}^{2}\) which clearly shows that \(\Gamma(U)\) generates \(\mathbb{Z}^{d+2}\) as a group.
Finally we are ready to state and prove the main theorem of this section.
**Theorem 1.17**.: _Let \(\overline{D}\) and \(\overline{E}\) be adelic divisors on a normal quasi-projective variety \(U\) such that \(\overline{D}\) is big. Then there exists a convex body \(\Delta(U)=\Delta(U,\overline{D},\overline{E})\subset\mathbb{R}^{d+2}\) with the property that for any \(\vec{a}=(a_{1},a_{2})\in\mathbb{Q}^{2}\) with \(a_{1}\overline{D}+a_{2}\overline{E}\) big, we have_
\[\Delta(a_{1}\overline{D}+a_{2}\overline{E})=\Delta(U)\cap(\mathbb{R}^{d} \times\{\vec{a}\})\]
_where \(\Delta(a_{1}\overline{D}+a_{2}\overline{E})\) is the Okounkov body of \(a_{1}\overline{D}+a_{2}\overline{E}\) as constructed in Definition 1.3._
Proof.: Clearly it is enough to show this when \(\vec{a}\in\mathbb{Z}^{2}\), by homogeneity of Okounkov bodies (Lemma 1.11). Note that the semi-group \(\Gamma(a_{1}\overline{D}+a_{2}\overline{E})\) sits naturally in \(\mathbb{N}^{d}\times\mathbb{N}\cdot\vec{a}\cong\mathbb{N}^{d+1}\), and by the construction of \(\Delta(\cdot)\) as in Definition 1.3 we deduce that \(\Delta(a_{1}\overline{D}+a_{2}\overline{E})=\Sigma(\Gamma(U)_{\mathbb{N}\vec{a}})\cap(\mathbb{R}^{d}\times\{\vec{a}\})\). By Lemma 1.15 we get that \(\vec{a}\in\mathrm{int}(\mathrm{Supp}(\Delta(U)))\) and hence by Lemma 1.14 we have \(\Delta(U)_{\mathbb{R}\vec{a}}=\Sigma(\Gamma(U)_{\mathbb{N}\vec{a}})\). Hence we deduce that
\[\Delta(a_{1}\overline{D}+a_{2}\overline{E})=\Sigma(\Gamma(U)_{\mathbb{N}\vec{a}})\cap(\mathbb{R}^{d}\times\{\vec{a}\})=\Delta(U)_{\mathbb{R}\vec{a}}\cap(\mathbb{R}^{d}\times\{\vec{a}\})=\Delta(U)\cap(\mathbb{R}^{d}\times\{\vec{a}\})\]
concluding the proof.
_Remark_.: The construction of the global body \(\Delta(U,\overline{D},\overline{E})\) is done here by mimicking the constructions in section 4 of [10]. One stark difference is that the global body constructed in [10] is independent of the chosen basis of the Neron-Severi group, because there one works modulo numerical equivalence. Even if there can be a notion of "numerical equivalence" in the adelic setting, it is certainly not known whether the corresponding Neron-Severi space is finitely generated; hence such a "canonical global body" cannot be constructed using similar methods, and our \(\Delta(U,\overline{D},\overline{E})\) depends on the chosen divisors \(\overline{D}\) and \(\overline{E}\). However, our version still gives some interesting corollaries, which we shall see next.
### Corollaries : Continuity, Fujita approximation and more
Before going to state our first corollary, we introduce the notion of _Hausdorff distance_ which will be the correct metric under which we want to show the convergence of bodies.
**Definition 1.18**.: _Let \((V,\|\cdot\|)\) be a normed real vector space. The Hausdorff distance between two closed compact subsets \(C_{1}\) and \(C_{2}\) in \(V\) is defined as_
\[d_{H}(C_{1},C_{2})=\text{inf}\{\epsilon>0\mid C_{1}\subseteq C_{2}+\epsilon \mathbb{B},C_{2}\subseteq C_{1}+\epsilon\mathbb{B}\}\]
_where \(\mathbb{B}\) is the unit ball in \(V\) with respect to \(||\cdot||\)._
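For intuition, the following sketch approximates the Hausdorff distance of Definition 1.18 for finite samples of two compact sets in \(\mathbb{R}^{2}\) (an assumed discretisation, not part of the paper); for \([0,1]^{2}\) and \([0,1.2]\times[0,1]\) the exact value is \(0.2\).

```python
import numpy as np

def hausdorff(points_a, points_b):
    """Hausdorff distance between two finite point clouds in R^n."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    dist = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(dist.min(axis=1).max(), dist.min(axis=0).max())

grid = np.linspace(0.0, 1.0, 25)
square = np.array([(x, y) for x in grid for y in grid])           # sample of [0,1]^2
stretched = np.array([(1.2 * x, y) for x in grid for y in grid])  # sample of [0,1.2]x[0,1]
print(hausdorff(square, stretched))  # approximately 0.2
```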
Now we can state our first main corollary.
**Corollary 1.19**.: _Suppose \(\overline{D}\) is a big adelic divisor on a normal quasi-projective variety \(U\) given by models \(\{X_{i},D_{i}\}\) in our usual notation. Then_
\[\lim_{j\to\infty}d_{H}(\Delta(\overline{D}),\Delta(\overline{D_{j}}))=0\]
_where \(\overline{D_{j}}\) is just \(D_{j}\) viewed as a model divisor in \(\operatorname{Div}(U/k)_{\operatorname{mod}}\). In particular, we have_
\[\widehat{\operatorname{vol}}(\overline{D})=\lim_{j\to\infty}\operatorname{ vol}(D_{j})\]
_where \(\operatorname{vol}(\cdot)\) is the classical projective volume considering \(D_{j}\) as a \(\mathbb{Q}\)-divisor in \(X_{j}\)._
Proof.: We prove the first claim at first. Begin by noting that the sequence of inclusions
\[\Delta(\overline{D}-q_{j}\overline{D_{0}})\subseteq\Delta(\overline{D_{j}}) \subseteq\Delta(\overline{D}+q_{j}\overline{D_{0}})\]
implies that it is enough to show that \(d_{H}(\Delta(\overline{D}-q_{j}D_{0}),\Delta(\overline{D}+q_{j}D_{0}))\to 0\) as \(j\to\infty\). But this immediately follows from Theorem 1.17, taking \(\overline{E}=\overline{D_{0}}\), and Theorem 13 in [11], noting that \(q_{j}\to 0\) as \(j\to\infty\). Now the second claim follows readily from Theorem 7 in [10] and the first claim, noting that \(\operatorname{vol}(D_{j})=\widehat{\operatorname{vol}}(\overline{D_{j}})=d!\cdot\operatorname{vol}(\Delta(\overline{D_{j}}))\) and \(\widehat{\operatorname{vol}}(\overline{D})=d!\cdot\operatorname{vol}(\Delta(\overline{D}))\).
_Remark_.: Note that Corollary 1.19 and Theorem 1.10 prove Theorem 5.2.1 of [10] for big adelic divisors independently, using convex geometric methods, and hence we can deduce all the corollaries of section 5 of [10] coming from Theorem 5.2.1 for big divisors, which we list next.
**Corollary 1.20** (log-concavity).: _Suppose \(\overline{D}_{1}\) and \(\overline{D}_{2}\) are two effective adelic divisors on a normal quasi-projective variety \(U\). Then we have_
\[\widehat{\operatorname{vol}}(\overline{D}_{1}+\overline{D}_{2})^{\frac{1}{d}}\geq\widehat{\operatorname{vol}}(\overline{D}_{1})^{\frac{1}{d}}+\widehat{\operatorname{vol}}(\overline{D}_{2})^{\frac{1}{d}}\]
_where \(d=\dim(U)\)._
Proof.: The statement is trivial if one of the divisors is not big. When both of them are big, applying Corollary 1.19 the problem gets converted into the projective case which is proved in Corollary 4.12 in [10].
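In the projective case, such log-concavity statements are typically deduced from the Brunn-Minkowski inequality \(\operatorname{vol}(K+L)^{1/d}\geq\operatorname{vol}(K)^{1/d}+\operatorname{vol}(L)^{1/d}\) applied to Okounkov bodies. The toy check below (an illustration with axis-aligned boxes, whose Minkowski sum is again a box, not an example from the sources) is only meant to make the inequality concrete.

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
K = rng.uniform(0.1, 2.0, d)  # side lengths of a box K in R^3
L = rng.uniform(0.1, 2.0, d)  # side lengths of a box L in R^3

lhs = float(np.prod(K + L)) ** (1 / d)          # vol(K + L)^(1/d): Minkowski sum of boxes
rhs = float(np.prod(K)) ** (1 / d) + float(np.prod(L)) ** (1 / d)
print(lhs >= rhs)  # True: Brunn-Minkowski (for boxes a consequence of AM-GM)
```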
**Corollary 1.21** (Fujita approximation).: _Suppose \(\overline{D}\) is a big adelic \(\mathbb{Q}\)-divisor on a normal quasi-projective variety \(U\). Then for any \(\epsilon>0\) there exists a normal quasi-projective variety \(U^{\prime}\), a birational morphism \(\pi\colon U^{\prime}\to U\), a projective model \(X^{\prime}\) of \(U^{\prime}\) and an ample \(\mathbb{Q}\)-divisor \(A^{\prime}\) on \(X^{\prime}\) such that \(\pi^{*}\overline{D}-A^{\prime}\geq 0\) in \(\widehat{\operatorname{Div}}(U^{\prime},K)\) and_
\[\operatorname{vol}(A^{\prime})\geq\widehat{\operatorname{vol}}(\overline{D} )-\epsilon\]
_where \(\operatorname{vol}(A^{\prime})\) is the volume of \(A^{\prime}\) as a divisor on \(X^{\prime}\)._
Proof.: Using the fact that the adelic volume is the limit of its models in Corollary 1.19, the claim gets reduced to the original Fujita approximation which was proved in [14].
Next we come to the final corollary of this section which shows the continuity of the volume function.
**Corollary 1.22** (continuity).: _Suppose \(\overline{D},\overline{M}_{1},\ldots\overline{M}_{r}\) are adelic \(\mathbb{Q}\)-divisors on a normal quasi-projective variety \(U\). Then we have_
\[\lim_{t_{1},\ldots,t_{r}\to 0}\widehat{\operatorname{vol}}(\overline{D}+t_{1}\overline{M}_{1}+\cdots+t_{r}\overline{M}_{r})=\widehat{\operatorname{vol}}(\overline{D})\]
_where \(t_{1},\ldots,t_{r}\) are rational numbers converging to 0. Furthermore we have \(\widehat{\operatorname{vol}}(\overline{D})=\lim_{j\to\infty}\operatorname{vol}(D_{j})\) for a sequence of models \(D_{j}\) representing \(\overline{D}\)._
Proof.: As in the proof of Theorem 5.2.8 in [10], we choose nef model adelic divisors \(\overline{M}_{i}^{\prime}\) such that \(\overline{M}_{i}^{\prime}\pm\overline{M}_{i}\geq 0\), and we set \(\overline{M}=\overline{M}_{1}^{\prime}+\cdots+\overline{M}_{r}^{\prime}\). Then it is enough to show that
\[\lim_{t\to 0}\widehat{\operatorname{vol}}(\overline{D}+t\overline{M})= \widehat{\operatorname{vol}}(\overline{D})\]
as \(t\) converges to 0 over the rationals. First assume that \(\overline{D}\) is big. Then from Theorem 13 of [13] we get
\[\lim_{t\to 0}d_{H}(\Delta(\overline{D}+t\overline{M}),\Delta(\overline{D}))=0\]
by taking \(\overline{E}=\overline{M}\) in Theorem 1.17 since we saw in the proof of Lemma 1.15 that \(\overline{D}+t\overline{M}\) is big for small enough \(t\) whenever \(\overline{D}\) is big. Now the claim follows from Theorem 7 of [11]. The second claim is also true when \(\overline{D}\) is big thanks to Corollary 1.19. Hence we can assume that \(\overline{D}\) is not big. Now suppose the claim does not hold. Then there is a \(c>0\) and a sequence of rationals \(t_{i}\to 0\) such that \(\widehat{\operatorname{vol}}(\overline{D}+t_{i}\overline{M})>c\) for all \(t_{i}\). By Corollary 1.21 we can choose an ample \(\mathbb{Q}\)-divisor \(A_{t_{i}}\) on a projective model \(X^{\prime}\) of a birational modification \(\pi\colon U^{\prime}\to U\) of \(U\) such that \(\pi^{*}(\overline{D}+t_{i}\overline{M})-A_{t_{i}}\geq 0\) and \(\operatorname{vol}(A_{t_{i}})>c/2\). Then clearly
\[\widehat{\operatorname{vol}}(\overline{D})\geq\operatorname{vol}(A_{t_{i}}-t _{i}\overline{M})\geq A_{t_{i}}^{d}-dt_{i}A_{t_{i}}^{d-1}\overline{M}\]
where in the second inequality we used Siu's criterion for the model nef divisors \(A_{t_{i}}\) and \(\overline{M}\). We can bound the intersection number \(A_{t_{i}}^{d-1}\overline{M}\) as in the proof of Theorem 5.2.8 in [10] to conclude that
\[\widehat{\operatorname{vol}}(\overline{D})\geq A_{t_{i}}^{d}-O(t_{i})>c/2-O(t _{i})\text{ as }t_{i}\to 0\]
which clearly contradicts the hypothesis \(\widehat{\operatorname{vol}}(\overline{D})=0\) and finishes the proof of the first claim. Furthermore the effectivity relation \(\overline{D}_{j}\leq\overline{D}+q_{j}\overline{D}_{0}\) shows that \(\operatorname{vol}(D_{j})\leq\widehat{\operatorname{vol}}(\overline{D}+q_{j} \overline{D}_{0})\). Now as \(j\to\infty\) we know that \(q_{j}\to 0\) and hence by the first claim \(\lim_{j\to\infty}\widehat{\operatorname{vol}}(\overline{D}+q_{j}\overline{D} _{0})=0\) which clearly shows the second claim.
## 2 Augmented base loci and Restricted volumes
In this chapter we define _restricted_ volumes of adelic divisors along a closed sub-variety of a normal quasi-projective variety \(U\) over an algebraically closed field \(K\). We will define the notion of the _augmented base locus_ of an adelic divisor. It turns out that the restricted volume can be realised as the volume of an Okounkov body when the sub-variety is not contained in the augmented base locus of the adelic divisor. As a corollary we will deduce that the \(\limsup\) defining the restricted volume is actually a limit, analogously to chapter 1 for ordinary volumes. We go on to show that there are global bodies which regulate the variation of restricted volumes along arbitrary directions, similarly to ordinary volumes in chapter 1. Finally, as a corollary, we will deduce properties analogous to those obtained in chapter 1 for ordinary volumes.
### Augmented base locus of an adelic divisor
In this section, we recall the concepts of base loci and stable base loci of a graded linear series of an adelic line bundle. Using these concepts we introduce the notion of the _augmented base locus_ of an adelic divisor \(\overline{D}\) in analogy to the projective setting (see [13] section 2.4). In the projective setting, it is shown that the definition of the augmented base locus is independent of the choice of the ample divisor using Serre's finiteness. However, since in our setting model divisors are only defined up to birational pull-backs and ampleness is not preserved under such pull-backs, Serre's finiteness does not apply directly. It turns out that this gap can be bridged using the main theorem of [1], which provides us with a similar independence of choice; this is the main result of this section.
**Definition 2.1**.: _Suppose \(U\) is a normal quasi-projective variety over an algebraically closed field \(K\) and suppose \(D\) is a divisor. Furthermore suppose \(W\subseteq H^{0}(U,O(D))=H^{0}(U,D)\) is a finite dimensional sub-space of the space of global sections of \(O(D)\). Then we define the base locus_
\[\mathrm{Bs}(W)=\{p\in U\mid s(p)=0\text{ in }\kappa(p)=O_{U,p}/m_{U,p}\text{ for all }s\in W\}\]
_Now suppose we have a graded linear series \(W=\{W_{m}\}\) of \(O(D)\). We define the stable base locus as_
\[\mathrm{SB}(W)=\cap_{m\in\mathbb{N}}\mathrm{Bs}(W_{m})\]
_Finally suppose \(\overline{D}\) is an adelic divisor on \(U\). Then it determines graded linear series \(W=\{W_{m}=H^{0}(U,m\overline{D})\}\) as explained in the beginning of chapter 1. Then we define the base locus and stable base locus of \(\overline{D}\) as_
\[\mathrm{Bs}(\overline{D})=\mathrm{Bs}(W_{1})\text{ and }\mathrm{SB}( \overline{D})=\mathrm{SB}(W)\]
_Remark_.: Note that it is easy to check that the stable base locus \(\mathrm{SB}(\overline{D})\) is indeed eventually stable, i.e. there exists an integer \(p_{0}\) such that \(\mathrm{SB}(\overline{D})=\mathrm{Bs}(p_{0}\overline{D})\), by using noetherianity of \(U\) just like in the projective case.
As discussed above, we want to show that the above notion is invariant under passing to other model ample divisors. Our next lemma is the main ingredient in showing this.
**Lemma 2.2**.: _Suppose \(X_{1}\) and \(X_{2}\) are two normal projective models of a normal quasi-projective variety \(U\) over \(K\), \(f\colon X_{1}\to X_{2}\) a birational morphism which is an isomorphism over \(U\) and \(\overline{A}_{1},\overline{A}_{2}\) ample divisors on \(X_{1}\) and \(X_{2}\) respectively. Furthermore suppose \(\overline{D}\) is an adelic divisor on \(U\). Then for any closed irreducible sub-variety \(E\) of \(U\), \(E\nsubseteq\mathrm{Bs}(m_{0}\overline{D}-\overline{A}_{2})\) for some positive integer \(m_{0}\) if and only if \(E\nsubseteq\mathrm{Bs}(n_{0}\overline{D}-\overline{A}_{1})\) for some positive integer \(n_{0}\)._
Proof.: We first suppose that \(E\nsubseteq\mathrm{Bs}(m_{0}\overline{D}-\overline{A}_{2})\). We denote \(f^{*}\overline{A}_{2}=\overline{A}_{2}^{\prime}\), which is a big nef divisor on \(X_{1}\) as \(\overline{A}_{2}\) is big and nef (being ample) and these notions are invariant under birational pull-backs. Let \(\overline{E}\) be the Zariski closure of \(E\) in \(X_{1}\). Then clearly \(\overline{A}_{2}^{\prime}|_{\overline{E}}\) is big and as \(\overline{A}_{2}^{\prime}\) is also nef, we can deduce from Theorem 1.4 of [1] that for a large enough integer \(s_{0}\), \(\overline{E}\) is not contained in the (projective) stable base locus of \(s_{0}\overline{A}_{2}-\overline{A}_{1}\) since \(\overline{A}_{1}\) is ample in \(X_{1}\). Restricting everything to \(U\), we can find a positive integer \(p_{0}\) and a section \(s^{\prime}\in H^{0}(U,s_{0}p\overline{A}_{2}-p\overline{A}_{1})\) such that \(s^{\prime}\) does not vanish along \(E\) whenever \(p_{0}\mid p\). Tensoring by a section of \(H^{0}(U,(p-1)\overline{A}_{1})\) non-vanishing on \(E\), which we can find as \(\overline{A}_{1}\) is ample, we produce a section \(s\in H^{0}(U,s_{0}p\overline{A}_{2}-\overline{A}_{1})\) non-vanishing on \(E\) whenever \(p_{0}\mid p\). By hypothesis we can find a section \(s_{0}\in H^{0}(U,m_{0}s_{0}p_{0}\overline{D}-s_{0}p_{0}\overline{A}_{2})\) non-vanishing along \(E\). Hence picking \(p=p_{0}\) and tensoring \(s\) and \(s_{0}\) we produce a section in \(H^{0}(U,m_{0}s_{0}p_{0}\overline{D}-\overline{A}_{1})\) which does not vanish identically on \(E\); hence \(E\nsubseteq\mathrm{Bs}(n_{0}\overline{D}-\overline{A}_{1})\), which finishes one direction of the claim with \(n_{0}=m_{0}s_{0}p_{0}\).
For the other side, suppose \(E\nsubseteq\mathrm{Bs}(n_{0}\overline{D}-\overline{A}_{1})\). Hence for every positive integer \(p\) we can find a section \(s\in H^{0}(U,n_{0}p\overline{D}-p\overline{A}_{1})\) which does not vanish identically on \(E\). Now choose \(p\) large enough such that \(p\overline{A}_{1}-\overline{A}_{2}^{\prime}\) is very ample, which we can do by Serre's finiteness theorem on projective varieties because \(\overline{A}_{1}\) is ample on \(X_{1}\). Then choosing a section of \(p\overline{A}_{1}-\overline{A}_{2}^{\prime}\) on \(X_{1}\) not vanishing identically on \(\overline{E}\) and restricting to \(U\), we obtain a section \(s_{0}\in H^{0}(U,p\overline{A}_{1}-\overline{A}_{2})\) not vanishing identically on \(E\) for large enough \(p\). Once again tensoring \(s\) and \(s_{0}\) we obtain that \(E\nsubseteq\mathrm{Bs}(m_{0}\overline{D}-\overline{A}_{2})\) with \(m_{0}=n_{0}p\) for large enough \(p\), which finishes the proof.
_Remark_.: The proof of the above lemma follows along the same lines as the proof, in the projective case, that the augmented base locus is independent of the choice of the ample divisor. That argument, however, uses Serre's finiteness theorem, which has no known analogue in the adelic setting due to the non-invariance of ampleness under birational pull-backs. It turns out that the gap in one direction of the proof can be bridged by the main result of [1], as we have shown above, while in the other direction we can still use Serre finiteness.
Finally we can deduce the desired invariance under pull-backs of model ample divisors as a direct corollary of Lemma 2.2, which we do next.
**Corollary 2.3**.: _Suppose \(\overline{D}\) is an adelic divisor on a normal quasi-projective variety \(U\) over \(K\) and suppose \(X_{1}\) and \(X_{2}\) are two projective models of \(U\) with ample divisors \(\overline{A}_{1}\) and \(\overline{A}_{2}\) respectively on them. Then for any closed irreducible sub-variety \(E\) of \(U\), we have that \(E\nsubseteq\mathrm{Bs}(m_{0}\overline{D}-\overline{A}_{2})\) for some positive integer \(m_{0}\) if and only if \(E\nsubseteq\mathrm{Bs}(n_{0}\overline{D}-\overline{A}_{1})\) for some positive integer \(n_{0}\). In particular the set \(B_{+}(\overline{D},\overline{A})=\cap_{m\in\mathbb{N}}\mathrm{Bs}(m\overline{D}-\overline{A})\) is independent of the chosen model ample divisor \((X,\overline{A})\)._
Proof.: Clearly the second claim follows from the first and the first claim follows directly from Lemma 2.2 by noting that we can always find a projective model \(X\) of \(U\) dominating both \(X_{1}\) and \(X_{2}\) via a birational morphism over \(U\) and an ample divisor on \(X\).
The above corollary clearly shows what should be the definition of our augmented base locus which we record in the next definition.
**Definition 2.4**.: _Suppose \(\overline{D}\) is an adelic divisor on a normal quasi-projective variety \(U\) over \(K\). We define the augmented base locus of \(\overline{D}\) as \(B_{+}(\overline{D})=\cap_{m\in\mathbb{N}}\mathrm{Bs}(m\overline{D}-\overline{A})\) for any ample divisor \(\overline{A}\) on a projective model of \(U\)._
_Remark_.: Note that the above definition makes sense thanks to Corollary 2.3. It is easy to check that \(B_{+}(m_{0}\overline{D})=B_{+}(\overline{D})\) for any positive integer \(m_{0}\) and hence we can define an augmented base locus of an adelic \(\mathbb{Q}\)-divisor by passing to integral multiples.
We end this section with a corollary which will be necessary later to show that the Okounkov bodies of restricted linear series behave nicely when the sub-variety is not contained in the augmented base locus.
**Corollary 2.5**.: _Suppose \(\overline{D}\) is an adelic divisor on a normal quasi-projective variety \(U\) over \(K\) and suppose \(E\) is a closed irreducible sub-variety with \(E\nsubseteq B_{+}(\overline{D})\). Then there exists a projective model \(X\) such that for any ample divisor \(\overline{A}\) on \(X\), there exist sections \(s_{i}\in H^{0}(U,(m_{0}+i)\overline{D}-p_{i}\overline{A})\) not vanishing identically on \(E\) for some positive integers \(m_{0},p_{0},p_{1}\) and \(i=0,1\)._
Proof.: Suppose \(\overline{D}\) is given by a sequence of models \(\{X_{i},D_{i}\}\) and rationals \(q_{i}\to 0\) as usual and let \(X=X_{1}\). Then as \(E\nsubseteq B_{+}(\overline{D})\), for any ample divisor \(\overline{A}\) on \(X_{1}\) we can assume that \(E\nsubseteq\mathrm{Bs}(n_{0}\overline{D}-\overline{A})\) for some \(n_{0}\in\mathbb{N}\) and hence we can produce a section \(s_{0}\in H^{0}(U,2n_{0}p\overline{D}-2p\overline{A})\) not vanishing identically on \(E\) for every positive integer \(p\). Choose \(p\) so large that \(D_{1}^{\prime}+p\overline{A}\) is very ample where \(D_{1}^{\prime}=D_{1}-q_{1}D_{0}\) and choose a section \(s^{\prime}\in H^{0}(U,D_{1}^{\prime}+p\overline{A})\) which does not vanish identically on \(E\). Then tensoring \(s_{0}\) and \(s^{\prime}\) we get a section \(s_{1}\in H^{0}(U,2n_{0}p\overline{D}+D_{1}^{\prime}-p\overline{A})\subseteq H^{0}(U,(2n_{0}p+1)\overline{D}-p\overline{A})\) where the inclusion follows from the effectivity relation \(D_{1}^{\prime}\leq\overline{D}\). Clearly \(s_{0}\) and \(s_{1}\) satisfy the claim with \(m_{0}=2n_{0}p\), \(p_{0}=2p\) and \(p_{1}=p\).
### Restricted volumes
In this section, we define the restricted volume of an adelic divisor along a closed sub-variety \(E\) of \(U\) in analogy to the projective setting. Then we go on to show that if \(E\) is such an irreducible closed sub-variety with \(E\nsubseteq B_{+}(\overline{D})\), then this restricted volume can be realised as the volume of an Okounkov body calculated with respect to a suitable flag dominated by \(E\). Much in the spirit of Theorem 1.10 we deduce that the \(\limsup\) defining the restricted volume is actually a limit.
Suppose we have an irreducible closed sub-variety \(E\overset{i}{\hookrightarrow}U\) embedding in \(U\) via the closed immersion \(i\). Then as explained in sub-section 5.2.2 of [13] we can consider the pullback of the adelic line bundle \(O(\overline{D})\) by \(i\), which we call the _restriction_ of \(O(\overline{D})\) to \(E\) and denote by \(O(\overline{D})|_{E}\). We recall that this line bundle is given by the datum \(\{E_{i},O(D_{i})|_{E_{i}}\}\) where \(D_{i}\) are the models defining \(\overline{D}\) and \(E_{i}\) are the Zariski closures of \(E\) in the projective models \(X_{i}\) of \(U\). Then there is a restriction map of vector spaces on the space of global sections
\[H^{0}(U,O(\overline{D}))\overset{\mathrm{restr}}{\longrightarrow}H^{0}(E,O( \overline{D})|_{E})\]
and we denote the image of this map by \(H^{0}(U|E,O_{E}(\overline{D}))\) obtained by just restriction maps on sections model wise. This lets us define the notion of restricted volume.
**Definition 2.6**.: _Suppose \(E\) is a closed irreducible sub-variety of a normal quasi-projective variety \(U\) over \(K\) and \(\overline{D}\) be an adelic divisor on \(U\). Then we define the restricted volume of \(\overline{D}\) along \(E\) as_
\[\widehat{\operatorname{vol}}_{U|E}(\overline{D})=\limsup_{m\to\infty}\frac{\dim_{K}(H^{0}(U|E,O_{E}(m\overline{D})))}{m^{d}/d!}\]
_where \(d=\text{dim}(E)\)._
We can view the finite-dimensional vector spaces \(W_{m}=H^{0}(U|E,O_{E}(m\overline{D}))\) as a graded linear sub-series of \(H^{0}(E,O(m\overline{D})|_{E})\subseteq H^{0}(E,O(mD)|_{E})\). And hence if we can fix a flag in \(E\), we can construct an Okounkov body corresponding to \(\{W_{m}\}\) as indicated in section 1 of [12].
Now given a closed sub-variety \(E\) in \(U\), we fix a flag \(Y_{0}\subset Y_{1}\ldots\subset Y_{d}=E\) in \(E\) where \(\text{dim}(E)=d\). Note that in any projective model of \(U\), taking closures of this flag induces a canonical partial flag contained in the closure \(E_{j}\) of \(E\) in the model, such that the (partial) valuation of a global section of a bundle on the model with respect to this flag agrees with the valuation obtained after restricting to \(U\) and evaluating with respect to the flag \(Y_{0}\subset Y_{1}\ldots\subset Y_{d}\); we always take this induced flag to calculate valuation vectors in the projective models. We fix this flag to calculate the Okounkov body of the linear series \(\{W_{m}\}\). Then we have the notions of the graded semi-group \(\Gamma_{U|E}(\overline{D})\subseteq\mathbb{N}^{d+1}\) and the Okounkov body \(\Delta_{U|E}(\overline{D})\subseteq\mathbb{R}^{d}\). As in chapter 1, we also define \(\Gamma_{U|E}(\overline{D})_{m}\) to be the fiber of the graded semi-group over the positive integer \(m\). Next we show that when \(E\nsubseteq B_{+}(\overline{D})\), the Okounkov body behaves nicely in the sense of satisfying properties analogous to Lemma 1.9.
**Lemma 2.7**.: _Suppose \(\overline{D}\) is an adelic divisor on a normal quasi-projective variety \(U\) over \(K\). Furthermore suppose \(E\) is a closed irreducible sub-variety of \(U\) such that \(E\nsubseteq B_{+}(\overline{D})\). Then the graded semi-group \(\Gamma_{U|E}(\overline{D})\) satisfies the following properties_
1. \(\Gamma_{U|E}(\overline{D})_{0}=\{0\}\)__
2. _There exists finitely many vectors_ \((v_{i},1)\) _spanning a semi-group_ \(B\subseteq\mathbb{N}^{d+1}\) _such that_ \(\Gamma_{U|E}(\overline{D})\subseteq B\)_._
3. \(\Gamma_{U|E}(\overline{D})\) _generates_ \(\mathbb{Z}^{d+1}\) _as a group._
_Remark_.: Note that in analogy to Lemma 1.9 it is desirable that \(\overline{D}\) is big in the above lemma. However, since we assume that \(E\nsubseteq B_{+}(\overline{D})\), by Definition 2.4 we already have a non-zero section \(s\in H^{0}(U,m\overline{D}-\overline{A})\) for some model ample divisor \(\overline{A}\) on a projective model \(X\) of \(U\). Hence we have the inclusion
\[H^{0}(U,n\overline{A})\overset{\otimes s^{\otimes n}}{\longrightarrow}H^{0}(U,mn\overline{D})\]
for all positive integers \(n\) which shows that \(\widehat{\operatorname{vol}}(m\overline{D})\geq\widehat{\operatorname{vol}}( \overline{A})>0\) and hence \(\overline{D}\) is big. In other words the assumption \(E\nsubseteq B_{+}(\overline{D})\) already implies that \(\overline{D}\) is big.
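Spelled out (computing the adelic volume as a \(\limsup\) and writing \(d=\dim U\)), this bigness deduction is the elementary asymptotic comparison
\[\widehat{\operatorname{vol}}(\overline{D})\geq\limsup_{n\to\infty}\frac{\dim_{K}H^{0}(U,mn\overline{D})}{(mn)^{d}/d!}\geq\limsup_{n\to\infty}\frac{\dim_{K}H^{0}(U,n\overline{A})}{(mn)^{d}/d!}=\frac{\widehat{\operatorname{vol}}(\overline{A})}{m^{d}}>0,\]
where the first inequality passes to the subsequence of multiples of \(m\), the second uses the injection above, and the final positivity uses the ampleness of \(\overline{A}\).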
Proof.: Suppose \(\overline{D}\) is given by the sequence of models \(\{X_{i},D_{i}\}\) and rationals \(q_{i}\to 0\) as usual. Note that as in the proof of Lemma 1.9, the first point is trivial and the second point can be deduced once we know that the vectors of \(\Gamma_{U|E}(\overline{D})_{m}\) are bounded by \(mb\) for some large positive constant \(b\) as explained in the proof of Lemma 2.2 in [10]. In other words we need to show that the restricted graded series satisfies the condition (A) as defined in Definition 2.4 of [10]. Note that the effectivity relation \(\overline{D}\leq D_{j}+q_{j}D_{0}\) implies the inclusion \(\Gamma_{U|E}(\overline{D})_{m}\subset\Gamma_{U|E}(\overline{D}_{j}+q_{j}D_{0})_{m}\) for all positive integers \(m\). The right hand side is the same as the graded semi-group of \(D_{j}+q_{j}D_{0}\) viewed as a \(\mathbb{Q}\)-divisor on the projective variety \(X_{j}\) calculated with closures of our flag on \(E\), and hence by the footnote on page 803 of [10], we conclude that \(\Gamma_{U|E}(\overline{D_{j}+q_{j}D_{0}})_{m}\) satisfies condition (A), which clearly shows the second point as \(\Gamma_{U|E}(\overline{D})_{m}\) is a subset. Hence we just need to show the third point.
We argue as in the proof of Lemma 1.9. Choose a model very ample divisor \(\overline{A}\) on \(X_{j}\) such that it has sections \(\overline{s}_{i}\) on \(X_{j}\) with \(v(\overline{s_{i}})=(e_{i})\) for \(i=0\ldots d\) where \(e_{0}\) is the zero vector, \(\{e_{i}\}\) is the standard basis of \(\mathbb{R}^{d}\) for \(i=1,\ldots d\) and \(v(\cdot)\) is the valuation corresponding to the closures in \(X_{j}\) of the chosen flag in \(E\). We can always do this as \(\overline{A}\) is chosen very ample and hence the restriction \(\overline{A}|_{E_{j}}\) is very ample where \(E_{j}\) is the closure of \(E\) in \(X_{j}\), as explained in the proof of Lemma 2.2 in [10]. Restricting to \(U\) gives sections \(s_{i}\in H^{0}(U,\overline{A})\) with the same valuation vectors. Note that then for all positive integers \(p\), by appropriately tensoring these sections we can also find sections \(s_{ip}=s_{0}^{\otimes p-1}\otimes s_{i}\in H^{0}(U,p\overline{A})\) such that \(v(s_{ip})=(e_{i})\). Then by restricting to \(E\), we get non-zero sections \(s_{ip}|_{E}\in H^{0}(U|E,O_{E}(p\overline{A}))\) with \(v(s_{ip}|_{E})=e_{i}\). Now using Corollary 2.5 we can find positive integers \(m_{0},p_{0},p_{1}\) and sections \(t_{0},t_{1}\) (calling them \(t_{i}\) for notational convenience) satisfying the properties stated in the corollary. Restricting the \(t_{i}\)'s to \(E\) we get non-zero sections \(t_{i}|_{E}\in H^{0}(U|E,O_{E}((m_{0}+i)\overline{D}-p_{i}\overline{A}))\) and suppose \(v(t_{i}|_{E})=f_{i}\) for \(i=0,1\). Then arguing as in the proof of Lemma 1.9, by tensoring the \(s_{ip}|_{E}\)'s with the \(t_{i}|_{E}\)'s we conclude that the vectors \((f_{0},m_{0})\), \((f_{0}+e_{i},m_{0})\) and \((f_{1},m_{0}+1)\) all belong to \(\Gamma_{U|E}(\overline{D})\), which clearly completes the proof.
Then arguing just like in chapter 1, we deduce the main theorem of this section which we state next.
**Theorem 2.8**.: _Suppose \(\overline{D}\) is an adelic divisor on a normal quasi-projective variety \(U\) over \(K\). Furthermore suppose \(E\) is a closed irreducible sub-variety of \(U\) such that \(E\nsubseteq B_{+}(\overline{D})\). Then we have_
\[\operatorname{vol}_{\mathbb{R}^{d}}(\Delta_{U|E}(\overline{D}))=\lim_{m\to \infty}\frac{\#\Gamma_{U|E}(\overline{D})_{m}}{m^{d}}=\lim_{m\to\infty}\frac{ \dim_{K}(H^{0}(U|E,O_{E}(m\overline{D})))}{m^{d}}=\frac{1}{d!}\cdot\widehat{ \operatorname{vol}}_{U|E}(\overline{D})\]
_where \(\dim(E)=d\)._
We end this section with a homogeneity property analogous to Lemma 1.11. Before going to that we obtain a crucial property needed for the homogeneity.
**Lemma 2.9**.: _Suppose \(\overline{D}\) is an adelic divisor on a normal quasi-projective variety \(U\) over \(K\) and \(E\) is a closed sub-variety with \(E\nsubseteq B_{+}(\overline{D})\). Then there exists an integer \(r_{0}\) such that \(H^{0}(U|E,O_{E}(r\overline{D}))\neq\{0\}\) for all positive integers \(r>r_{0}\)._
Proof.: By Corollary 2.5 there exist non-zero sections \(s_{i}\in H^{0}(U,(m+i)\overline{D})\) which do not vanish identically on \(E\) for \(i=0,1\), obtained by tensoring the sections provided there with sections of \(p_{i}\overline{A}\) which do not vanish identically on \(E\) (these exist as \(\overline{A}\) can be assumed very ample). Then for all \(r\geq m^{2}\), write \(r=a_{r}m+b_{r}\) for non-negative integers \(a_{r}\geq m\) and \(0\leq b_{r}\leq m-1<a_{r}\). Then note that \(s_{0}^{\otimes(a_{r}-b_{r})}\otimes s_{1}^{\otimes b_{r}}\) is a section of \(H^{0}(U,r\overline{D})\) which does not vanish identically on \(E\), which clearly finishes the claim with \(r_{0}=m^{2}-1\).
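For clarity, the degree count behind this factorization can be spelled out: the section \(s_{0}^{\otimes(a_{r}-b_{r})}\otimes s_{1}^{\otimes b_{r}}\) has degree
\[(a_{r}-b_{r})\cdot m+b_{r}\cdot(m+1)=a_{r}m+b_{r}=r,\qquad\text{with}\quad a_{r}-b_{r}\geq m-b_{r}>0,\]
so both tensor exponents are non-negative and the product indeed lies in \(H^{0}(U,r\overline{D})\).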
**Lemma 2.10**.: _Suppose \(\overline{D}\) is an adelic divisor on a normal quasi-projective variety \(U\) over \(K\). Then for any closed irreducible sub-variety \(E\) of \(U\) with \(E\nsubseteq B_{+}(\overline{D})\), we have_
\[\Delta_{U|E}(t\overline{D})=t\cdot\Delta_{U|E}(\overline{D})\]
_for all positive integers \(t\). Hence we can naturally extend the construction of \(\Delta_{U|E}(\cdot)\) to big adelic \(\mathbb{Q}\)-divisors._
Proof.: We choose an integer \(r_{0}\) such that \(H^{0}(U|E,O_{E}(r\overline{D}))\neq\{0\}\) for all \(r>r_{0}\) thanks to Lemma 2.9. Next choose \(q_{0}>0\) such that \(q_{0}t-(t+r_{0})>r_{0}\) for all positive integers \(t\). Then for all \(r_{0}+1\leq r\leq r_{0}+t\) we can find non-zero sections \(s_{r}\in H^{0}(U|E,O_{E}(r\overline{D}))\) and \(t_{r}\in H^{0}(U|E,O_{E}((q_{0}t-r)\overline{D}))\) which gives inclusions
\[H^{0}(U|E,O_{E}(mt\overline{D}))\overset{\otimes s_{r}}{\longrightarrow}H^{0 }(U|E,O_{E}((mt+r)\overline{D}))\overset{\otimes t_{r}}{\longrightarrow}H^{0 }(U|E,O_{E}((m+q_{0})t\overline{D}))\]
which gives the corresponding inclusion of the graded semi-groups
\[\Gamma_{U|E}(t\overline{D})_{m}+e_{r}+f_{r}\subseteq\Gamma_{U|E}(\overline{D })_{mt+r}+f_{r}\subseteq\Gamma_{U|E}(t\overline{D})_{m+q_{0}}\]
where \(e_{r}=v(s_{r})\) and \(f_{r}=v(t_{r})\). Now recalling the construction of \(\Delta_{U|E}(\cdot)\) and letting \(m\to\infty\) we get
\[\Delta_{U|E}(t\overline{D})\subseteq t\cdot\Delta_{U|E}(\overline{D})\subseteq \Delta_{U|E}(t\overline{D})\]
which clearly finishes our proof.
### Variation of bodies for restricted volumes
In this section, we construct global bodies whose fibers give the Okounkov bodies \(\Delta_{U|F}(\cdot)\) for a "sufficiently general" closed sub-variety \(F\) of \(U\), much in analogy with Theorem 1.17. Most of the constructions follow analogously as in the global case. The crucial point that we need to show is that, given a fixed irreducible sub-variety \(F\), the set of divisors \(\overline{D}\) with \(F\nsubseteq B_{+}(\overline{D})\) lies in the interior of the support of the global body, as was shown in Lemma 1.15. Most of the other arguments follow identically as in section 5 of chapter 1. However, for the sake of clarity, we will anyway repeat some constructions. We fix a flag \(Y_{0}\subset\ldots Y_{d}=F\) as explained in the previous section and all calculations of Okounkov bodies are with respect to this flag.
**Definition 2.11**.: _Suppose \(\overline{D}\) and \(\overline{E}\) are two adelic divisors on a normal quasi-projective variety \(U\) over \(K\). Given a closed irreducible sub-variety \(F\) of \(U\) with \(F\nsubseteq B_{+}(\overline{D})\), we define the graded semi-group \(\Gamma_{U|F}(F)\) as_
\[\Gamma_{U|F}(F)=\{(v(s),a_{1},a_{2})\mid a_{i}\in\mathbb{Z},0\neq s\in H^{0}(U|F,O_{F}(a_{1}\overline{D}+a_{2}\overline{E}))\}\]
_where \(v(\cdot)\) is the valuation corresponding to the chosen flag. Furthermore we define the global Okounkov body \(\Delta_{U|F}(F)\) as_
\[\Delta_{U|F}(F)=\text{closed convex cone}(\Gamma_{U|F}(F))=\Sigma(\Gamma_{U|F} (F))\]
_which is a closed convex subset of \(\mathbb{R}^{d}\times\mathbb{R}^{2}\)._
**Definition 2.12**.: _Suppose we have an additive semi-group \(\Gamma\) in \(\mathbb{R}^{d}\times\mathbb{R}^{2}\). Denote by \(P\) the projection from \(\mathbb{R}^{d}\times\mathbb{R}^{2}\) to \(\mathbb{R}^{2}\) and let \(\Delta=\Sigma(\Gamma)\) be the closed convex cone generated by \(\Gamma\). We define the support of \(\Delta\), denoted \(\text{Supp}(\Delta)\), to be its image under \(P\). It coincides with the closed convex cone in \(\mathbb{R}^{2}\) generated by the image of \(\Gamma\) under \(P\). Finally given a vector \(\vec{a}=(a_{1},a_{2})\in\mathbb{Z}^{2}\) we denote_
\[\Gamma_{\mathbb{N}\vec{a}} =\Gamma\cap(\mathbb{N}^{d}\times\mathbb{N}\vec{a})\] \[\Delta_{\mathbb{R}\vec{a}} =\Delta\cap(\mathbb{R}^{d}\times\mathbb{R}\vec{a})\]
_Furthermore we regard \(\Gamma_{\mathbb{N}\vec{a}}\) as a semi-group inside \(\mathbb{N}^{d}\times\mathbb{N}\vec{a}=\mathbb{N}^{d+1}\) and denote the closed convex cone generated by it in \(\mathbb{R}^{d+1}\) by \(\Sigma(\Gamma_{\mathbb{N}\vec{a}})\)._
We begin by showing the crucial property that the locus of "good" divisors is open.
**Lemma 2.13**.: _Suppose \(\overline{D}\) and \(\overline{E}\) are two adelic divisors such that \(F\nsubseteq B_{+}(\overline{D})\) and \(F\nsubseteq B_{+}(\overline{D}+q\overline{E})\) for some \(q\in\mathbb{Q}\). Then there is an \(\epsilon>0\) such that \((1,x)\in\text{Supp}(\Delta_{U|F}(F))\) for all \(x\in(q-\epsilon,q+\epsilon)\)._
Proof.: Suppose both \(\overline{D}\) and \(\overline{E}\) are given by models \(D_{i},E_{i}\) on projective models \(X_{i}\) and rationals \(\{q_{i}\to 0\}\) as usual. Furthermore we denote \(E^{\prime}_{1}=E_{1}-q_{1}E_{0}\) and \(E^{\prime\prime}_{1}=E_{1}+q_{1}E_{0}\). We first consider the case when \(q\neq 0\). Then by hypothesis there exists an integer \(m_{0}\) depending on \(\overline{A}\) such that \(F\nsubseteq\mathrm{Bs}(m_{0}p\overline{D}+m_{0}qp\overline{E}-p\overline{A})\) for some very ample divisor \(\overline{A}\) on \(X_{1}\) and for all sufficiently divisible integers \(p\). Choose \(p\) so large that \(E^{\prime}_{1}+p\overline{A}\) (resp. \(-E^{\prime\prime}_{1}+p\overline{A}\)) is very ample when \(q>0\) (resp. \(q<0\)). Then choosing sections in
\[H^{0}(U,2m_{0}p\overline{D}+2m_{0}qp\overline{E}-2p\overline{A})\text{ and }H^{0}(U,p\overline{A}+E^{\prime}_{1})\text{ (resp }H^{0}(U,p\overline{A}-E^{\prime\prime}_{1}))\]
which do not vanish identically on \(F\) and finally tensoring them, we get sections in \(H^{0}(U,2m_{0}p\overline{D}+2m_{0}qp\overline{E}+E^{\prime}_{1}-p\overline{A})\) (resp \(H^{0}(U,2m_{0}p\overline{D}+2m_{0}qp\overline{E}-E^{\prime\prime}_{1}-p \overline{A})\)) which do not vanish identically on \(F\). Now the effectivity relation \(E^{\prime}_{1}\leq\overline{E}\) (resp \(E^{\prime\prime}_{1}\geq\overline{E}\)) induces the inclusion
\[H^{0}(U,2m_{0}p\overline{D}+2m_{0}qp\overline{E}+E^{\prime}_{1}-p \overline{A})\subseteq H^{0}(U,2m_{0}p\overline{D}+(2m_{0}qp+1)\overline{E}- p\overline{A})\] \[\text{(resp. }H^{0}(U,2m_{0}p\overline{D}+2m_{0}qp\overline{E}-E^{ \prime\prime}_{1}-p\overline{A})\subseteq H^{0}(U,2m_{0}p\overline{D}+(2m_{0} qp-1)\overline{E}-p\overline{A}))\]
when \(q>0\) (resp. \(q<0\)). Hence noting the remark at the end of Definition 2.4, we conclude that \(F\nsubseteq B_{+}(2m_{0}p\overline{D}+(2m_{0}qp+1)\overline{E})=B_{+}(\overline{D}+r\overline{E})\) (resp. \(F\nsubseteq B_{+}(2m_{0}p\overline{D}+(2m_{0}qp-1)\overline{E})=B_{+}(\overline{D}+r\overline{E})\)) where \(r=q+\frac{1}{2m_{0}p}>q\) (resp. \(r=q-\frac{1}{2m_{0}p}<q\)) when \(q>0\) (resp. \(q<0\)). Note that then thanks to Lemma 2.9 we conclude that for some large integer \(p_{0}\) the points \(p_{0}(1,r),\ p_{0}(1,0)\in\mathrm{Supp}(\Delta_{U|F}(F))\) as \(F\nsubseteq B_{+}(\overline{D}+r\overline{E})\) and \(F\nsubseteq B_{+}(\overline{D})\). Hence arguing as in the proof of Lemma 1.15 we obtain the claim. Finally for the case \(q=0\) we repeat the arguments above in both positive and negative directions with \(E^{\prime}_{1}\) and \(E^{\prime\prime}_{1}\) to obtain such an \(\epsilon\).
As a corollary of the above, we obtain the necessary property which we record next.
**Corollary 2.14**.: _Suppose \(\overline{D}\) and \(\overline{E}\) are adelic divisors on a normal quasi-projective variety \(U\) over \(K\) and let \(F\) be a closed sub-variety of \(U\) with \(F\nsubseteq B_{+}(\overline{D})\). Then for any \(\vec{a}=(a_{1},a_{2})\in\mathbb{Q}^{2}\) such that \(F\nsubseteq B_{+}(a_{1}\overline{D}+a_{2}\overline{E})\), we have \(\vec{a}\in\mathrm{int}(\mathrm{Supp}(\Delta_{U|F}(F)))\)._
Proof.: Due to the homogeneity property in Lemma 2.10 it is enough to show the claim for \(a_{i}\) integers. First note that if \(\overline{D}_{1}\) and \(\overline{D}_{2}\) are two adelic divisors with \(F\nsubseteq B_{+}(\overline{D}_{i})\) for \(i=1,2\), then \(F\nsubseteq B_{+}(\overline{D}_{1}+\overline{D}_{2})\). To see this pick a positive integer \(m\) such that \(F\nsubseteq\mathrm{Bs}(m\overline{D}_{i}-\overline{A})\) for \(i=1,2\) and some ample divisor \(\overline{A}\) on some projective model \(X\). Then choosing sections of each of the bundles non-vanishing on \(F\) and tensoring them, we produce a section in \(H^{0}(U,m(\overline{D}_{1}+\overline{D}_{2})-2\overline{A})\) which does not vanish identically on \(F\), which clearly shows that \(F\nsubseteq B_{+}(\overline{D}_{1}+\overline{D}_{2})\) by definition of the augmented base locus.
Now first suppose that \(a_{1}\leq 0\). Then as \(F\nsubseteq B_{+}(\overline{D})\) by hypothesis, by adding \((-a_{1})\overline{D}\) we deduce using our discussion above that \(F\nsubseteq B_{+}(a_{2}\overline{E})=B_{+}(\overline{E})\)( resp. \(B_{+}(-\overline{E})\)) if \(a_{2}>0\)( resp. \(a_{2}<0\)) by the remark at the end of Definition 2.4 and clearly \(a_{2}\neq 0\). Then switching \(\overline{D}\) with \(\overline{E}\)( resp. \(-\overline{E}\)) we can assume that \(a_{1}>0\). Then once again by the remark, we conclude that \(F\nsubseteq B_{+}(a_{1}\overline{D}+a_{2}\overline{E})=B_{+}(\overline{D}+q \overline{E})\) for \(q=\frac{a_{2}}{a_{1}}\). Once we obtain this then thanks to Lemma 2.13 we can argue exactly as in the end of the proof of Lemma 1.15 to obtain the claim.
Next we show that the interior of the support is actually non-empty. To show this, we show in our next lemma that the graded semi-group generates the whole of \(\mathbb{Z}^{d+2}\).
**Lemma 2.15**.: _Suppose \(\overline{D}\) and \(\overline{E}\) are adelic divisors on a normal quasi-projective variety \(U\) over \(K\) and \(F\) a closed irreducible sub-variety of \(U\) with \(F\nsubseteq B_{+}(\overline{D})\). Then \(\Gamma_{U|F}(F)\) generates \(\mathbb{Z}^{d+2}\) as a group._
Proof.: The proof is almost identical to the proof of Lemma 1.16. We just need to note that by the proof of Lemma 2.13, when \(q=0\) we can find a positive integer \(n\) such that \(F\nsubseteq B_{+}(\overline{D}-\frac{1}{n}\overline{E})=B_{+}(n\overline{D}- \overline{E})\). The rest of the argument is identical to Lemma 1.16 thanks to the third property in Lemma 2.7 and as \((1,0)\) and \((n,-1)\) generate \(\mathbb{Z}^{2}\) as a group.
Finally we are ready to state and prove the main theorem of this section.
**Theorem 2.16**.: _Suppose \(\overline{D}\) and \(\overline{E}\) are adelic divisors on a normal quasi-projective variety \(U\) over \(K\) and let \(F\) be an irreducible closed sub-variety such that \(F\nsubseteq B_{+}(\overline{D})\). Then there exists a convex body \(\Delta_{U|F}(F)=\Delta_{U|F}(F,\overline{D},\overline{E})\subset\mathbb{R}^{d+2}\) with the property that for any \(\vec{a}=(a_{1},a_{2})\in\mathbb{Q}^{2}\) with \(F\nsubseteq B_{+}(a_{1}\overline{D}+a_{2}\overline{E})\), we have_
\[\Delta_{U|F}(a_{1}\overline{D}+a_{2}\overline{E})=\Delta_{U|F}(F)\cap(\mathbb{ R}^{d}\times\{\vec{a}\})\]
_where \(\Delta_{U|F}(a_{1}\overline{D}+a_{2}\overline{E})\) is the restricted Okounkov body of \(a_{1}\overline{D}+a_{2}\overline{E}\) as constructed in section 2._
Proof.: Clearly it is enough to show the claim when \(\vec{a}\in\mathbb{Z}^{2}\) by homogeneity of Okounkov bodies (Lemma 2.10). Note that the semi-group \(\Gamma_{U|F}(a_{1}\overline{D}+a_{2}\overline{E})\) sits naturally in \(\mathbb{N}^{d}\times\mathbb{N}\cdot\vec{a}\cong\mathbb{N}^{d+1}\) and by construction of \(\Delta_{U|F}(\cdot)\), we deduce that \(\Delta_{U|F}(a_{1}\overline{D}+a_{2}\overline{E})=\Sigma(\Gamma_{U|F}(a_{1}\overline{D}+a_{2}\overline{E})_{\mathbb{N}\vec{a}})\cap(\mathbb{R}^{d}\times\{\vec{a}\})\). By Corollary 2.14 we get that \(\vec{a}\in\operatorname{int}(\operatorname{Supp}(\Delta_{U|F}(F)))\) and hence by Lemma 1.14 we have \(\Delta_{U|F}(F)_{\mathbb{R}\vec{a}}=\Sigma(\Gamma_{U|F}(a_{1}\overline{D}+a_{2}\overline{E})_{\mathbb{N}\vec{a}})\). Hence we deduce that
\[\Delta_{U|F}(a_{1}\overline{D}+a_{2}\overline{E})=\Sigma(\Gamma_{U|F}(a_{1} \overline{D}+a_{2}\overline{E})_{\mathbb{N}\vec{a}})\cap(\mathbb{R}^{d}\times \{\vec{a}\})=\Delta_{U|F}(F)_{\mathbb{R}\vec{a}}\cap(\mathbb{R}^{d}\times\{ \vec{a}\})=\Delta_{U|F}(F)\cap(\mathbb{R}^{d}\times\{\vec{a}\})\]
concluding the proof.
### Corollaries
In this section we deduce some corollaries which follow directly from the existence of global bodies for restricted volumes as shown in Theorem 2.16. Note that we already have the notion of the restricted volume of a line bundle \(L\) along a closed sub-variety \(E\) of a projective variety \(X\), defined similarly as before Lemma 2.16 in [10], which we refer to as the _projective restricted volume_ in the next corollary.
**Corollary 2.17**.: _Suppose \(\overline{D}\) is an adelic divisor on a normal quasi-projective variety \(U\) over \(K\) and suppose \(F\) is a closed irreducible sub-variety of \(U\) with \(F\nsubseteq B_{+}(\overline{D})\). Furthermore suppose \(\overline{D}\) is given by a Cauchy sequence of models \(\{X_{i},D_{i}\}\) and let \(F_{j}\) be the Zariski closure of \(F\) in \(X_{j}\). Then we have_
\[\lim_{i\to\infty}d_{H}(\Delta_{U|F}(\overline{D}),\Delta_{U|F}(\overline{D_{i }}))=0\]
_where \(d_{H}(\cdot,\cdot)\) is the Hausdorff metric and \(\overline{D_{i}}\) is \(D_{i}\) considered as a model adelic divisor. In particular, we have_
\[\widehat{\operatorname{vol}}_{U|F}(\overline{D})=\lim_{i\to\infty}\operatorname {vol}_{X_{i}|F_{i}}(O(D_{i}))\]
_where \(\operatorname{vol}_{X_{i}|F_{i}}(O(D_{i}))\) is the projective restricted volume of the line-bundle \(O(D_{i})\) with respect to \(F_{i}\)._
Proof.: The proof is very similar to that of Corollary 1.19. We begin by noting the set of inclusions
\[\Delta_{U|F}(\overline{D}-q_{j}\overline{D_{0}})\subseteq\Delta_{U|F}(\overline {D_{j}})\subseteq\Delta_{U|F}(\overline{D}+q_{j}\overline{D_{0}})\]
where we put overlines to emphasize that they are looked as model divisors. Now the first claim follows once again noting that the two extremities of the above inclusions converge under the Hausdorff metric thanks to Theorem 2.16 and Theorem 13 of [12] when \(q_{j}\) is small enough. Then note that from Lemma 2.13, as \(F\nsubseteq B_{+}(\overline{D})\) we conclude that \(F\nsubseteq B_{+}(\overline{D}-q_{j}\overline{D_{0}})\supseteq B_{+}(\overline {D_{j}})\) for large enough \(j\) as \(q_{j}\to 0\). Hence for large enough \(j\) we have \(F\nsubseteq B_{+}(\overline{D_{j}})\) which implies \(\operatorname{vol}(\Delta_{U|F}(\overline{D_{j}}))=\frac{1}{k!}\widehat{ \operatorname{vol}}_{U|F}(\overline{D_{j}})=\frac{1}{k!}\operatorname{vol}_{X_{j }|F_{j}}(O(D_{j}))\) thanks to Theorem 2.8 which now clearly gives the second claim together with the first claim.
**Corollary 2.18** (log-concavity).: _Suppose \(\overline{D_{i}}\) are two adelic divisors on a normal quasi-projective variety \(U\) over \(K\) for \(i=1,2\). Furthermore suppose \(F\) is a closed irreducible sub-variety of \(U\) with \(F\nsubseteq B_{+}(\overline{D_{i}})\) for \(i=1,2\). Then we have_
\[\widehat{\mathrm{vol}}_{U|F}(\overline{D_{1}}+\overline{D_{2}})^{\frac{1}{k}} \geq\widehat{\mathrm{vol}}_{U|F}(\overline{D_{1}})^{\frac{1}{k}}+\widehat{ \mathrm{vol}}_{U|F}(\overline{D_{2}})^{\frac{1}{k}}\]
_where \(\dim(F)=k\)._
Proof.: When \(F\nsubseteq B_{+}(\overline{D_{i}})\) for both \(i\), the same holds for their sum, and hence, passing to models, we are reduced to the claim in the projective setting thanks to Corollary 2.17. The projective case can be deduced from the existence of global bodies as indicated in Example 4.22 of [13].
## Acknowledgements
The author thanks Walter Gubler and Roberto Gualdi for numerous fruitful discussions in the process of preparation of this article.
|
2302.11121
|
Counterfactual Prediction Under Outcome Measurement Error
|
Across domains such as medicine, employment, and criminal justice, predictive
models often target labels that imperfectly reflect the outcomes of interest to
experts and policymakers. For example, clinical risk assessments deployed to
inform physician decision-making often predict measures of healthcare
utilization (e.g., costs, hospitalization) as a proxy for patient medical need.
These proxies can be subject to outcome measurement error when they
systematically differ from the target outcome they are intended to measure.
However, prior modeling efforts to characterize and mitigate outcome
measurement error overlook the fact that the decision being informed by a model
often serves as a risk-mitigating intervention that impacts the target outcome
of interest and its recorded proxy. Thus, in these settings, addressing
measurement error requires counterfactual modeling of treatment effects on
outcomes. In this work, we study intersectional threats to model reliability
introduced by outcome measurement error, treatment effects, and selection bias
from historical decision-making policies. We develop an unbiased risk
minimization method which, given knowledge of proxy measurement error
properties, corrects for the combined effects of these challenges. We also
develop a method for estimating treatment-dependent measurement error
parameters when these are unknown in advance. We demonstrate the utility of our
approach theoretically and via experiments on real-world data from randomized
controlled trials conducted in healthcare and employment domains. As
importantly, we demonstrate that models correcting for outcome measurement
error or treatment effects alone suffer from considerable reliability
limitations. Our work underscores the importance of considering intersectional
threats to model validity during the design and evaluation of predictive models
for decision support.
|
Luke Guerdan, Amanda Coston, Kenneth Holstein, Zhiwei Steven Wu
|
2023-02-22T03:34:19Z
|
http://arxiv.org/abs/2302.11121v2
|
# Counterfactual Prediction Under Outcome Measurement Error
###### Abstract.
Across domains such as medicine, employment, and criminal justice, predictive models often target labels that imperfectly reflect the outcomes of interest to experts and policymakers. For example, clinical risk assessments deployed to inform physician decision-making often predict measures of healthcare utilization (e.g., costs, hospitalization) as a proxy for patient medical need. These proxies can be subject to outcome measurement error when they systematically differ from the target outcome they are intended to measure. However, prior modeling efforts to characterize and mitigate outcome measurement error overlook the fact that the decision being informed by a model often serves as a risk-mitigating intervention that impacts the target outcome of interest and its recorded proxy. Thus, in these settings, addressing measurement error requires counterfactual modeling of treatment effects on outcomes. In this work, we study intersectional threats to model reliability introduced by outcome measurement error, treatment effects, and selection bias from historical decision-making policies. We develop an unbiased risk minimization method which, given knowledge of proxy measurement error properties, corrects for the combined effects of these challenges. We also develop a method for estimating treatment-dependent measurement error parameters when these are unknown in advance. We demonstrate the utility of our approach theoretically and via experiments on real-world data from randomized controlled trials conducted in healthcare and employment domains. As importantly, we demonstrate that models correcting for outcome measurement error or treatment effects alone suffer from considerable reliability limitations. Our work underscores the importance of considering intersectional threats to model validity during the design and evaluation of predictive models for decision support.
algorithmic decision support, measurement, validity, causal inference, model evaluation
## 1. Introduction
Algorithmic risk assessment instruments (RAIs) often target labels that imperfectly reflect the goals of experts and policymakers. For example, clinical risk assessments used to inform physician treatment decisions target future utilization of medical resources (e.g., cost, medical diagnoses) as a proxy for patient medical need (Luo et al., 2017; Wang et al., 2018; Wang et al., 2019). Predictive models used to inform personalized learning interventions target student test scores as a proxy for learning outcomes (Zhiwei et al., 2019). Yet, these proxies are subject to _outcome measurement error_ (OME) when they systematically differ from the target outcome of interest to domain experts. Unaddressed, OME can be highly consequential: models targeting poor proxies have been linked to misallocation of medical resources (Wang et al., 2018), unwarranted teacher firings (Zhu et al., 2019), and over-policing of minority communities (Bradley et al., 2019). Given its prevalence and implications, increasing research focus has shifted to understanding and mitigating sources of statistical bias impacting proxy outcomes (Zhu et al., 2019; Wang et al., 2018; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019).
However, prior work modeling outcome measurement error makes a critical assumption that the decision informed by the algorithm does not impact downstream outcomes. Yet this assumption is often unreasonable in decision support applications, where decisions constitute _interventions_ that impact the policy-relevant target outcome _and its recorded proxy_(Zhu et al., 2019). For example, in clinical decision support settings, medical treatments act as risk-mitigating interventions designed to avert adverse health outcomes. However, in the process of selecting a treatment option, a physician will _also_ influence measured proxies (e.g., medical cost, disease diagnoses) (Luo et al., 2017; Wang et al., 2019; Wang et al., 2019). As a result, the measurement error characteristics of proxies can vary across the treatment options informed by an algorithm.
We illustrate the importance of considering interactions between OME and treatment effects by revisiting a widely known audit of an algorithm used to inform screening decisions for a high-risk medical care program (Steintein et al., 2017). This audit surfaced measurement error in a _"cost of medical care"_ outcome targeted as a proxy for patient medical need. _Critically, the measurement error analysis performed by Obermeyer et al. (Steintein et al., 2017) assumes that program enrollment status is independent of downstream cost and medical outcomes._
\begin{tabular}{l c c} \hline Sample & FPR & FNR \\ \hline Full population & 0.37 & 0.38 \\ Unenrolled & 0.37 & 0.39 \\ Enrolled & 0.64 & 0.13 \\ \hline \end{tabular}

Yet our re-analysis shows that the _"cost of medical care"_ proxy has a substantially higher false positive rate and lower false negative rate among program enrollees as compared to the full population (see Appendix A.1). This error rate discrepancy is consistent with enrollees receiving closer medical supervision (and as a result, greater costs), even after accounting for their underlying medical need. In this work, we show that failing to model the interactions between OME and treatment effects can introduce substantial model reliability challenges.
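A re-analysis of this kind only requires stratifying the proxy's empirical error rates by the treatment (enrollment) indicator. The following minimal pandas sketch illustrates the computation; the file name and column names (`cost_proxy` for the recorded proxy \(Y\), `medical_need` for the target label \(Y^{*}\), `enrolled` for \(T\)) are hypothetical placeholders rather than the audit's actual data schema.

```python
import pandas as pd

def error_rates(df, proxy_col, target_col):
    """Empirical FPR/FNR of a binary proxy measured against a binary target label."""
    fp = ((df[proxy_col] == 1) & (df[target_col] == 0)).sum()
    fn = ((df[proxy_col] == 0) & (df[target_col] == 1)).sum()
    fpr = fp / (df[target_col] == 0).sum()
    fnr = fn / (df[target_col] == 1).sum()
    return fpr, fnr

# One row per patient; file and column names are hypothetical placeholders.
df = pd.read_csv("audit_sample.csv")
for name, subset in [("Full population", df),
                     ("Unenrolled", df[df["enrolled"] == 0]),
                     ("Enrolled", df[df["enrolled"] == 1])]:
    fpr, fnr = error_rates(subset, "cost_proxy", "medical_need")
    print(f"{name}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```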
In this work, we develop a counterfactual prediction method that corrects for outcome measurement error, treatment effects, and selection bias in parallel. Our method builds upon _unbiased risk minimization_ techniques developed in the label noise literature (Krause et al., 2016; Steintein et al., 2017; Steintein et al., 2018; Steintein et al., 2019). Given knowledge of measurement error parameters, unbiased risk minimization methods recover an estimator for target outcomes by minimizing a surrogate loss over proxy outcomes. However, existing methods are not designed for _interventional settings_ whereby decisions impact outcomes - a limitation that we show severely limits model reliability. Therefore, we develop an unbiased risk minimization technique designed for learning counterfactual models from observational data. We compare our approach against models that correct for OME or treatment effects in isolation by conducting experiments on semi-synthetic data from healthcare and employment domains (Krause et al., 2016; Steintein et al., 2018; Steintein et al., 2019). Results validate the efficacy of our risk minimization approach and underscore the need to carefully vet measurement-related assumptions in consultation with domain experts. Our empirical results also surface systematic model failures introduced by correcting for OME or treatment effects in isolation. To our knowledge, our holistic evaluation is the first to examine how outcome measurement error, treatment effects, and selection bias interact to impact model reliability under controlled conditions.
We provide the following contributions: 1) We derive a problem formulation that models interactions between OME, treatment effects, and selection bias (Section 3); 2) We develop a novel approach for learning counterfactual models in the presence of OME (Section 4.1). We provide a flexible approach for estimating measurement error rates when these are unknown in advance (Section 4.2); 3) We conduct synthetic and semi-synthetic experiments to validate our approach and highlight reliability issues introduced by modeling OME or treatment effects in isolation (Section 5).
## 2. Background and Related Work
### AI functionality and validity concerns
Prior work has conducted detailed assessments of specific modeling issues (Krause et al., 2016; Steintein et al., 2017; Steintein et al., 2018; Steintein et al., 2019; Steintein et al., 2019), which have been synthesized into broader critiques of AI validity and functionality (Krause et al., 2016; Stein et al., 2019; Stein et al., 2019). Raji et al. (Raji et al., 2019) surface AI functionality
harms in which models fail to achieve their purported goal due to systematic design, engineering, deployment, and communication failures. Coston et al. (2014) highlight challenges related to value alignment, reliability, and validity that may draw the justifiability of RAIs into question in some contexts. We build upon this literature by studying _intersectional threats to model reliability_ arising from outcome measurement error (Zhou et al., 2017; Zhang et al., 2018), treatment effects (Shi et al., 2018; Shi et al., 2019), and selection bias (Shi et al., 2019) in parallel.
### Outcome measurement error
Modeling outcome measurement error is challenging because it introduces two sources of uncertainty: which error model is reasonable for a given proxy, and which specific error parameters govern the relationship between target and proxy outcomes under the _assumed_ measurement model (Zhou et al., 2017). Popular error models studied in the machine learning literature include uniform (Beng et al., 2016; Chen et al., 2017), class-conditional (Zhou et al., 2017; Zhang et al., 2018), and instance-dependent (Zhou et al., 2017; Zhang et al., 2018) structures of outcome misclassification. Work in algorithmic fairness has also studied settings in which measurement error varies across levels of a protected attribute (Zhang et al., 2018), and proposed error model agnostic sensitivity analysis frameworks (Zhou et al., 2019).
Numerous statistical approaches have been developed for measurement error parameter estimation in the quantitative social sciences literature (Beng et al., 2016; Chen et al., 2017). Application of these approaches is tightly coupled with domain knowledge of the phenomena under study, as in biostatistics (Zhou et al., 2017) or psychometrics (Zhou et al., 2018). To date, data-driven techniques for error parameter estimation have primarily been applied in the machine learning literature, which rely on key assumptions relating the target outcome of interest and its proxy (Zhou et al., 2017; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018). In this work, we build upon an existing _"anchor assumptions"_ framework that estimates error parameters by linking the proxy and target outcome probabilities at specific instances (Zhang et al., 2018). In contrast to prior work, we provide a range of anchoring assumptions, which can be flexibly combined depending on which are reasonable in a specific algorithmic decision support (ADS) domain.
Natarajan et al. (2018) propose a widely-adopted _unbiased risk minimization_ approach for learning under noisy labels given knowledge of measurement error parameters (Shi et al., 2019; Shi et al., 2019; Shi et al., 2019). This method constructs a surrogate loss \(\tilde{\ell}\) such that the \(\tilde{\ell}\)-risk over proxy outcomes is equivalent to the \(\ell\)-risk over target outcomes _in expectation_. Additionally, Natarajan et al. (2018) show that the minimizer of \(\tilde{\ell}\)-risk over proxy outcomes is optimal with respect to target outcomes if \(\ell\) is symmetric (e.g., Huber, logistic, and squared losses). In this work, we develop a novel variant of this unbiased risk minimization approach designed for settings with _treatment-conditional_ OME.
### Counterfactual prediction
Recent work has shown that counterfactual modeling is necessary when the decision informed by a predictive model serves as a risk-mitigating intervention (Shi et al., 2018). Building off of this result, we argue that it is necessary to account for treatment effects on _target outcomes of interest and their observed proxy_ while modeling OME. Our methods build upon conditional average treatment effect (CATE) estimation techniques from the causal inference literature (Beng et al., 2016; Chen et al., 2017; Zhang et al., 2018). Subject to identification conditions (Shi et al., 2019; Shi et al., 2019), these approaches predict the difference between the expected outcome under treatment (e.g., high-risk program enrollment) versus control (e.g., no program enrollment) conditional on covariates. One family of _outcome regression estimators_ predicts the CATE by directly estimating the expected outcome under treatment or control conditional on covariates (Shi et al., 2019; Zhang et al., 2018; Zhang et al., 2018). However, these methods suffer from statistical bias when prior decisions were non-randomized (i.e., due to distribution shift induced by selection bias) (Beng et al., 2016; Chen et al., 2017). Therefore, we leverage a re-weighting strategy proposed by (Chen et al., 2017) to correct for this selection bias during risk minimization. Our re-weighting method performs a similar bias correction as inverse probability weighting (IPW) methods (Shi et al., 2019; Zhang et al., 2018; Zhang et al., 2018).
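The outcome-regression family described above can be made concrete with a generic T-learner-style sketch: fit one outcome model per treatment arm, then contrast the two predictions. This is only a reference point, not the estimator developed in this paper; it regresses the observed proxy \(Y\) rather than \(Y^{*}_{t}\), and it ignores the selection bias and measurement error corrections introduced below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def t_learner_cate(X, T, Y):
    """Naive outcome-regression CATE estimate: separate outcome models for the
    treated (T=1) and control (T=0) groups, contrasted on every instance."""
    mu1 = LogisticRegression(max_iter=1000).fit(X[T == 1], Y[T == 1])
    mu0 = LogisticRegression(max_iter=1000).fit(X[T == 0], Y[T == 0])
    return mu1.predict_proba(X)[:, 1] - mu0.predict_proba(X)[:, 1]
```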
Outcome measurement error has also been studied in causal inference literature. Finkelstein et al. (2019) bound the average treatment effect (ATE) under multiple plausible OME models. Shu and Yi (2019) propose a doubly robust
method which accounts for measurement error during ATE estimation, while Diaz and van der Laan (2017) provide a sensitivity analysis framework for examining robustness of ATE estimates to OME. This work is primarily concerned with estimating _population statistics_ rather than predicting outcomes conditional on measured covariates (i.e, the CATE).
## 3. Preliminaries
Let \(p^{*}(X,T,Y_{0}^{*},Y_{1}^{*},Y_{0},Y_{1})\) be a fixed joint distribution over covariates \(X\in\mathcal{X}\subseteq\mathbb{R}^{d}\), past decisions\({}^{1}\) \(T\in\{0,1\}\), _target_ potential outcomes \(\{Y_{0}^{*},Y_{1}^{*}\}\in\mathcal{Y}\subseteq\{0,1\}\), and _proxy_ potential outcomes \(\{Y_{0},Y_{1}\}\in\mathcal{Y}\subseteq\{0,1\}\). Under the potential outcomes framework (Kolmogorov, 1999), \(\{Y_{0}^{*},Y_{0}\}\) and \(\{Y_{1}^{*},Y_{1}\}\) are the target and proxy outcomes that _would occur_ under \(T=0\) and \(T=1\), respectively (Figure 1). Building upon the class-conditional model studied in observational settings (Zhu and van der Laan, 2013; Zhu and van der Laan, 2013), we propose a _treatment-conditional_ outcome measurement error (OME) model whereby the class probability of the proxy potential outcome is given by
Footnote 1: We also use the word _treatments_ to refer to binary decisions. This draws upon historical applications of causal inference to medical settings.
\[\eta_{t}(x)=(1-\beta_{t})\cdot\eta_{t}^{*}(x)+\alpha_{t}\cdot(1-\eta_{t}^{*}(x) ),\ \ \forall x\in X \tag{1}\]
where \(\alpha_{t}\coloneqq p(Y_{t}=1\mid Y_{t}^{*}=0)\), \(\beta_{t}\coloneqq p(Y_{t}=0\mid Y_{t}^{*}=1)\) are the proxy false positive and false negative rates under treatment \(t\in\{0,1\}\) such that \(\alpha_{t}+\beta_{t}<1\). This model imposes the following assumption on the structure of measurement error.
**Assumption 1** (Measurement error).: Measurement error rates are fixed across covariates \(X\): \(Y\perp\!\!\!\perp X\mid Y^{*}\).
While we make this assumption to foreground study of treatment effects, our methods are also compatible with approaches designed for error rates that vary across covariates (Zhu and van der Laan, 2013) (see Section 6.1 for discussion). Given the joint \(p^{*}\), we would like to estimate \(\eta_{t}^{*}(x)\coloneqq p(Y_{t}^{*}=1\mid X=x)\), for any target covariates \(x\in X\), which is the probability of the target potential outcome under intervention \(t\in\{0,1\}\). However, rather than observing \(Y_{t}^{*}\) directly, we sample from an _observational distribution_\(p(X,T,Y)\), where \(Y\in\mathcal{Y}\subseteq\{0,1\}\) is an observed _proxy outcome_. By consistency, the proxy potential outcome recorded in data is determined by the treatment assignment.
**Assumption 2** (Consistency).: \(Y=T\cdot Y_{1}+(1-T)\cdot Y_{0}\). This holds that the proxy outcome \(Y_{t}\) is observed for instances assigned to treatment \(t\).
Figure 1. Toy example illustrating treatment-conditional OME in heart attack prediction. Under the factual decision to screen-out from a high-risk care management program (\(T=0\)), heart attack occurred (\(Y_{0}^{*}=1\)) but went undiagnosed (\(Y_{0}=0\)). Under the counterfactual decision to screen in (\(T=1\)), heart attack _would have_ been averted (\(Y_{1}^{*}=0\)) but would have been incorrectly diagnosed as such (\(Y_{1}=1\)). The observed outcome in medical records reflects the proxy value under factual decision to screen-out (\(Y=0\)).
Consistency is a standard assumption made in causal inference settings (Srivastava et al., 2017; Wang et al., 2018; Wang et al., 2019). To identify proxy outcomes \(Y\), we require the following additional causal assumptions.
**Assumption 3** (Ignorability).: \(\{Y_{0},Y_{1}\}\perp T\mid X\). This holds that no unmeasured confounders \(Z\) jointly influence decisions and proxy potential outcomes.
Ignorability can be violated in decision support applications when unobservables impact both the treatment and outcome (Srivastava et al., 2017; Wang et al., 2018; Wang et al., 2019). Understanding and addressing limitations introduced by ignorability is a major ongoing research focus (Srivastava et al., 2017; Wang et al., 2019; Wang et al., 2019). We provide follow-up discussion of this assumption in Section 6.2.
**Assumption 4** (Positivity).: \(\forall x\in X,\ 0<p(T=1|X=x)<1\). This holds that each instance \(x\in X\) has some chance of receiving each decision \(t\in\{0,1\}\).
Positivity is often reasonable in decision support applications because instances \(x\in X\) that require support from predictive models are subject to discretionary judgement due to uncertainty. Instances that are certain to receive a given treatment (i.e., \(p(T=1|X=x)=0\) or \(p(T=1|X=x)=1\)) would normally be routed via a different administrative procedure. Figure 2 shows a causal diagram representing the data generating process we study in this work.
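To make this data generating process concrete, the following minimal NumPy sketch simulates covariates, target potential outcomes, treatment-conditional proxies, and the observed data. Every distribution and parameter value below is an illustrative assumption, not a quantity taken from our setup or experiments.

```
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Covariates and assumed target class probabilities under each treatment arm.
x = rng.uniform(-1, 1, size=n)
eta0_star = 1 / (1 + np.exp(-3 * x))          # assumed eta*_0(x)
eta1_star = 1 / (1 + np.exp(-3 * (x - 0.5)))  # assumed eta*_1(x)
y0_star = (rng.random(n) < eta0_star).astype(int)
y1_star = (rng.random(n) < eta1_star).astype(int)

def sample_proxy(y_star, alpha_t, beta_t, rng):
    """Corrupt target potential outcomes with treatment-specific error rates:
    a positive (y*=1) is missed with prob. beta_t; a negative (y*=0) is
    falsely recorded as positive with prob. alpha_t."""
    flip_pos = rng.random(y_star.shape) < beta_t
    flip_neg = rng.random(y_star.shape) < alpha_t
    return np.where(y_star == 1, 1 - flip_pos.astype(int), flip_neg.astype(int))

# Treatment-conditional measurement error: (alpha_0, beta_0) != (alpha_1, beta_1).
y0 = sample_proxy(y0_star, alpha_t=0.2, beta_t=0.3, rng=rng)
y1 = sample_proxy(y1_star, alpha_t=0.0, beta_t=0.0, rng=rng)

# Historical selection policy (assumed propensity) and consistency: only the
# proxy under the factual decision is recorded in the observational data.
pi_x = 1 / (1 + np.exp(-2 * x))
t_obs = (rng.random(n) < pi_x).astype(int)
y_obs = np.where(t_obs == 1, y1, y0)
```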
## 4. Methodology
We begin by developing an unbiased risk minimization approach which recovers an estimator for \(\eta_{t}^{*}\) given knowledge of error parameters (Section 4.1). We then provide a method for estimating \(\alpha_{t}\) and \(\beta_{t}\) when error parameters are unknown in advance (Section 4.2).
### Unbiased risk minimization
In this section, we develop an approach for estimating \(\eta_{t}^{*}\) given observational data drawn from \(p(X,T,Y)\) and measurement error parameters \(\alpha_{t}\), \(\beta_{t}\). Let \(f_{t}\in\mathcal{H}\) for \(\mathcal{H}\subset\{f_{t}:\mathcal{X}\rightarrow[0,1]\}\) be a probabilistic decision function targeting \(Y_{t}^{*}\) and let \(\ell:\mathcal{Y}\times[0,1]\rightarrow\mathbb{R}_{+}\) be a loss function. If we observe target potential outcomes \(Y_{t}^{*}\sim p^{*}\), we can directly apply supervised learning techniques to minimize the expected \(\ell\)-risk of \(f_{t}\) over target potential outcomes
\[R_{\ell}^{*}(f_{t})\coloneqq\mathbb{E}_{p^{*}}[\ell(f_{t}(X),Y_{t}^{*})] \tag{2}\]
and learn an estimator for \(\eta_{t}^{*}\) via standard empirical risk minimization approaches. If \(\ell\) is _strongly proper composite_ such that \(\arg\min_{f_{t}}R_{\ell}^{*}(f_{t})\) is a monotone transform \(\psi\) of \(\eta_{t}^{*}\) (e.g., the logistic and exponential loss) we can recover class probabilities from the optimal prediction via the link function \(\psi\) (Bickel and Rubin, 1980; Grinstein, 1980). However, directly minimizing (2) is not possible in our setting because we sample observational proxies instead of target potential outcomes. We address this challenge by constructing a _re-weighted surrogate risk_ \(R^{w}_{t,\tilde{\ell}}\) such that evaluating this risk over observed proxy outcomes is equivalent to \(R_{\ell}^{*}\) in expectation.
Figure 2. A causal diagram of treatment-conditional outcome measurement error.
In particular, let \(w:\mathcal{X}\rightarrow\mathbb{R}_{+}\) be a weighting function satisfying \(\mathbb{E}_{X}[w(X)|T=t]=1\) and let \(\ell:\mathcal{Y}\times[0,1]\rightarrow\mathbb{R}_{+}\) be a surrogate loss function. We construct a _re-weighted surrogate risk_
\[R^{w}_{t,\tilde{\ell}}\left(f_{t}\right)\coloneqq\mathbb{E}_{p}\left[w(X) \tilde{\ell}(f_{t}(X),Y)\mid T=t\right] \tag{3}\]
such that \(R^{*}_{\ell}(f_{t})=R^{w}_{t,\tilde{\ell}}(f_{t})\) in expectation. Theorem 4.1 shows that we can recover a surrogate risk satisfying this property by constructing \(w(x)\) as in (4) and \(\tilde{\ell}\) as in (5). Note that this surrogate risk requires knowledge of \(\alpha_{t}\), \(\beta_{t}\).
**Theorem 4.1**.: _Assume treatment-conditional error (1), consistency (2), ignorability (3) and positivity (4). Then under target intervention \(t\in\{0,1\}\), \(R^{*}_{\ell}(f_{t})=R^{w}_{t,\tilde{\ell}}(f_{t})\) for the weighting function \(w:\mathcal{X}\rightarrow\mathbb{R}_{+}\) given by_
\[w(x)\coloneqq\frac{p(T=t)}{(2t-1)\cdot\pi(x)+1-t} \tag{4}\]
_and surrogate loss \(\tilde{\ell}:\mathcal{Y}\times[0,1]\rightarrow\mathbb{R}_{+}\) given by_
\[\tilde{\ell}(f_{t}(x),1) \coloneqq\frac{(1-\alpha_{t})\cdot\ell(f_{t}(x),1)-\beta_{t} \cdot\ell(f_{t}(x),0)}{1-\beta_{t}-\alpha_{t}} \tag{5}\] \[\tilde{\ell}(f_{t}(x),0) \coloneqq\frac{(1-\beta_{t})\cdot\ell(f_{t}(x),0)-\alpha_{t} \cdot\ell(f_{t}(x),1)}{1-\beta_{t}-\alpha_{t}}\]
_where in (4), \(\pi(x)\coloneqq p(T=1|X=x)\) is the propensity score function._
We prove Theorem 4.1 in Appendix A.2. Intuitively, \(R^{w}_{t,\tilde{\ell}}\left(f_{t}\right)\) applies a _joint bias correction_ for OME and distribution shift introduced by historical decision-making policies (i.e., selection bias). The unbiased risk minimization framework dating back to Natarajan et al. [46] corrects for OME by minimizing a surrogate loss \(\tilde{\ell}\) on proxies \(Y\) observed _over the full population unconditional on treatment_. Yet this approach is untenable when decisions impact outcomes \((T\not\perp\{Y^{*},Y\})\) and error rates differ across treatments \((Y\not\perp T\mid Y^{*})\). One possible extension of unbiased risk minimizers to the treatment-conditional setting involves minimizing \(\tilde{\ell}\) over the treatment population \(p(X|T=t)\)
\[R_{t,\tilde{\ell}}\left(f_{t}\right)\coloneqq\mathbb{E}_{p}\left[\tilde{\ell }(f_{t}(X),Y)\mid T=t\right]. \tag{6}\]
However, \(R_{t,\tilde{\ell}}\neq R^{*}_{\ell}\) in observational settings because the treatment population \(p(X|T=t)\) can differ from the marginal population \(p(X)\) under historical selection policies when \(X\not\perp T\). Therefore, our re-weighting procedure applies a second bias correction that adjusts \(p(X|T=t)\) to resemble \(p(X)\).
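As a concrete illustration of (4) and (5), the sketch below implements the weighting function and the bias-corrected surrogate loss for a single treatment arm. The logistic base loss is our own choice here, and inputs are assumed to be NumPy arrays of predicted probabilities, proxy labels, and propensity scores.

```
import numpy as np

def weight(pi_x, p_t, t):
    """Eq. (4): w(x) = p(T=t) / [(2t - 1) * pi(x) + 1 - t]."""
    return p_t / ((2 * t - 1) * pi_x + 1 - t)

def log_loss(f_x, y):
    """Base loss ell(f(x), y): negative log-likelihood of predicted probability f(x)."""
    f_x = np.clip(f_x, 1e-12, 1 - 1e-12)
    return -(y * np.log(f_x) + (1 - y) * np.log(1 - f_x))

def surrogate_loss(f_x, y, alpha_t, beta_t):
    """Eq. (5): bias-corrected loss evaluated on proxy labels (needs alpha_t + beta_t < 1)."""
    denom = 1.0 - alpha_t - beta_t
    loss_pos = ((1 - alpha_t) * log_loss(f_x, 1) - beta_t * log_loss(f_x, 0)) / denom
    loss_neg = ((1 - beta_t) * log_loss(f_x, 0) - alpha_t * log_loss(f_x, 1)) / denom
    return np.where(y == 1, loss_pos, loss_neg)
```

The re-weighted surrogate risk in (3) is then just the average of `weight(...) * surrogate_loss(...)` over the samples with \(T_i=t\).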
_Learning algorithm._ As a result of Theorem 4.1, we can learn a predictor \(\hat{\eta}^{*}_{t}\) by minimizing the re-weighted surrogate risk over _observed samples_\((X_{1},T_{1},Y_{1}),...,(X_{n},T_{n},Y_{n})\sim p\). First, we estimate the weighting function \(\hat{w}(x)\) through a finite sample, which boils down to learning propensity scores \(\hat{\pi}(x)\) (as shown in (4)). Estimating the propensity scores can be done by applying supervised learning algorithms to learn a predictor from \(X\) to \(T\). Then for any treatment \(t\), weighting function \(\hat{w}\), and predictor \(f_{t}\), we can approximate \(R^{w}_{t,\tilde{\ell}}\left(f_{t}\right)\) by taking the sample average over the treatment population
\[\hat{R}^{\hat{w}}_{t,\tilde{\ell}}(f_{t})\coloneqq\frac{1}{n_{t}}\sum_{i: \tilde{T}_{i}=t}\hat{w}(X_{i})\tilde{\ell}(f_{t}(X_{i}),Y_{i}) \tag{7}\]
for \(n_{t}=\sum_{i=1}^{n}\mathbb{1}\left[T_{i}=t\right]\). Therefore, given \(\hat{w}\) we can learn a predictor from observational data by minimizing the empirical risk
\[\hat{f}_{t}\leftarrow\operatorname*{arg\,min}_{f_{t}\in\mathcal{H}}\hat{R}^{\hat{w}}_{t,\tilde{\ell}}(f_{t}). \tag{8}\]
We refer to solving (8) as _re-weighted risk minimization with a surrogate loss_ (Algorithm 1).
```
Input: Data \(\mathcal{W}=\{(X_{i},T_{i},Y_{i})\}_{i=1}^{n}\sim p\)
Output: Learned estimator \(\hat{\eta}_{t}^{*}(x)\)
Partition \(\mathcal{W}\) into \(\mathcal{W}_{1}\), \(\mathcal{W}_{2}\), \(\mathcal{W}_{3}\)
On \(\mathcal{W}_{1}\), estimate parameters \(\hat{\alpha}_{t}\), \(\hat{\beta}_{t}\leftarrow\operatorname{CCPE}(\mathcal{W}_{1})\)
On \(\mathcal{W}_{2}\), learn \(\hat{\pi}(x)\) by regressing \(T\sim X\)
On \(\mathcal{W}_{3}\), use \(\hat{\pi}(x),\hat{\alpha}_{t},\hat{\beta}_{t}\) to solve \(\hat{\eta}_{t}^{*}(x)\leftarrow\operatorname*{arg\,min}_{f_{t}\in\mathcal{H}}\hat{R}_{t,\tilde{\ell}}^{\hat{w}}(f_{t})\)
```
**Algorithm 2** Conditional class probability estimation (CCPE)
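Putting the pieces together, the following sketch mirrors the structure of Algorithm 1 for a single treatment arm. It reuses the `weight` and `surrogate_loss` helpers from the sketch above, takes error parameters as inputs (e.g., from CCPE on a held-out fold), and the two-way split, logistic-regression propensity model, and linear-logistic outcome model are illustrative simplifications rather than the configuration used in our experiments.

```
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

def fit_rw_sl(X, T, Y, t, alpha_t, beta_t, seed=0):
    """Minimal re-weighted surrogate-loss estimator of eta*_t (X is an (n, d) array)."""
    rng = np.random.default_rng(seed)
    fold_pi, fold_fit = np.array_split(rng.permutation(len(X)), 2)

    # Propensity model pi_hat(x) = p(T=1 | x) fit on the first fold.
    prop = LogisticRegression(max_iter=1000).fit(X[fold_pi], T[fold_pi])
    p_t = np.mean(T[fold_fit] == t)

    # Restrict risk minimization to the treatment-t subsample of the second fold.
    sub = fold_fit[T[fold_fit] == t]
    pi_x = prop.predict_proba(X[sub])[:, 1]
    w = weight(pi_x, p_t, t)
    Xb = np.c_[np.ones(len(sub)), X[sub]]  # linear-logistic model with intercept

    def empirical_risk(theta):
        f_x = 1.0 / (1.0 + np.exp(-Xb @ theta))
        return np.mean(w * surrogate_loss(f_x, Y[sub], alpha_t, beta_t))

    theta_hat = minimize(empirical_risk, np.zeros(Xb.shape[1]), method="L-BFGS-B").x

    def eta_star_hat(X_new):
        Xn = np.c_[np.ones(len(X_new)), X_new]
        return 1.0 / (1.0 + np.exp(-Xn @ theta_hat))

    return eta_star_hat
```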
### Error parameter identification and estimation
Because our risk minimization approach requires knowledge of OME parameters, we develop a method for estimating \(\alpha_{t}\), \(\beta_{t}\) from observational data. Error parameter estimation is challenging in decision support applications because target outcomes often result from nuanced social and organizational processes. Understanding the measurement error properties of proxies targeted in criminal justice, medicine, and hiring domains remains an ongoing focus of domain-specific research [3; 8; 23; 45; 49; 79]. _Therefore, we develop an approach compatible with multiple sources of domain knowledge about proxies, which can be flexibly combined depending on which assumptions are deemed reasonable in a specific context._
Error parameters are _identifiable_ if they can be uniquely computed from observational data. Because our error model (Eq. 1) expresses the proxy class probability as a linear equation with two unknowns, \(\alpha_{t}\), \(\beta_{t}\) are identifiable if the target class probability \(c_{t,i}^{*}=\eta_{t}^{*}(x_{i})\) and proxy class probability \(c_{t,i}=\eta_{t}(x_{i})\) are known at two distinct points \((c_{t,i}^{*},c_{t,i})\) and \((c_{t,j}^{*},c_{t,j})\) such that \(c_{t,i}^{*}\neq c_{t,j}^{*}\). Following prior literature [25], we refer to knowledge of \((c_{t,i}^{*},c_{t,i})\) as an _anchor assumption_ because it requires knowledge of the unobserved quantity \(\eta_{t}^{*}\). We now introduce several anchor assumptions that are practical in ADS, before showing that these can be flexibly combined to identify \(\alpha_{t}\), \(\beta_{t}\) in Theorem 4.2.
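Concretely, because (1) is affine in \(\eta_{t}^{*}\), writing \(\eta_{t}=\alpha_{t}+(1-\alpha_{t}-\beta_{t})\,\eta_{t}^{*}\) shows that two anchor points determine the error rates by a small linear solve. The anchor values in the sketch below are purely hypothetical.

```
def solve_error_rates(c_star_i, c_i, c_star_j, c_j):
    """Solve eta = alpha + (1 - alpha - beta) * eta_star given two anchor points."""
    slope = (c_i - c_j) / (c_star_i - c_star_j)   # equals 1 - alpha - beta
    alpha = c_i - slope * c_star_i                # intercept at eta_star = 0
    beta = 1.0 - alpha - slope
    return alpha, beta

# Example with a min anchor (c* = 0) and a max anchor (c* = 1); proxy values are made up.
alpha_t, beta_t = solve_error_rates(0.0, 0.15, 1.0, 0.75)   # -> (0.15, 0.25)
```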
**Min anchor.** A min anchor assumption holds if there is an instance at no risk of the target potential outcome under intervention \(t\): \(c_{t,i}^{*}=\inf_{x_{i}\in X}\left\{\eta_{t}^{*}(x_{i})\right\}=0\). Because \(\eta_{t}\) is a strictly monotone increasing transform of \(\eta_{t}^{*}\), the corresponding value of \(\eta_{t}\) can be recovered via \(c_{t,i}=\inf_{x_{i}\in X}\{\eta_{t}(x_{i})\}\)[43]. Min anchors are reasonable when there are cases that are confirmed to be at no risk based on domain knowledge of the data generating process. For example, a min anchor may be reasonable in diagnostic testing if a patient is confirmed to be negative for a medical condition based on a high-precision gold standard medical test [18].
**Max anchor.** A max anchor assumption holds if there is an instance at certain risk of the target outcome under intervention \(t\): \(c_{t,i}^{*}=\sup_{x_{i}\in X}\{\eta_{t}^{*}(x_{i})\}=1\). The corresponding value of \(\eta_{t}\) can be recovered via \(c_{t,i}=\sup_{x_{i}\in X}\{\eta_{t}(x_{i})\}\) because \(\eta_{t}\) is a strictly monotone increasing transform of \(\eta_{t}^{*}\). Max anchors are reasonable when there are confirmed instances of a positive target potential outcome based on domain knowledge of the data generating process. For example,
a max anchor may be justified in a medical setting if a subset of patients have confirmed disease diagnoses based on biopsy results (Bang et al., 2017), or if a disease prognosis (and resulting health outcomes) are known from pathology.
**Base rate anchor.** A base rate anchor assumption holds if the expected value of \(\eta_{t}^{*}\) is known under intervention \(t\): \(c_{t,i}^{*}=\mathbb{E}[\eta_{t}^{*}(X)]\). The corresponding value of \(\eta_{t}\) can be recovered by taking the expectation over the proxy class probability \(c_{t,i}=\mathbb{E}[\eta_{t}(X)]\). Base rate anchors are practical because the prevalence of unobservable target outcomes (e.g., medical conditions (Kang et al., 2017), crime (Kang et al., 2017; Li et al., 2018), student performance (Kang et al., 2017; Li et al., 2018)) is routinely estimated via domain-specific analyses of measurement error. For instance, studies have been conducted to estimate the base rate of undiagnosed heart attacks (i.e., accounting for measurement error in diagnosis proxy outcomes) (Kang et al., 2017). Additionally, the conditional average treatment effect \(\mathbb{E}[\eta_{1}^{*}(X)]-\mathbb{E}[\eta_{0}^{*}(X)]\) is commonly estimated in randomized controlled trials (RCTs) while assessing treatment effect heterogeneity (Li et al., 2018). While the conditional average treatment effect is normally estimated via proxies \(Y_{0}\) and \(Y_{1}\), measurement error analysis is a routine component of RCT design and evaluation (Li et al., 2018).
Anchor assumptions can be flexibly combined to identify error parameters based on which set of assumptions are reasonable in a given ADS domain. In particular, Theorem 4.2 shows that combinations of anchor assumptions listed in Table 1 are sufficient for identifying error parameters under our causal assumptions.
**Theorem 4.2**.: _Assume treatment-conditional error (1), consistency (2), ignorability (3) and positivity (4). Then \(\alpha_{t},\beta_{t}\) are identifiable from observational data \(p(X,T,Y)\) given any identifying pair of anchor assumptions provided in Table 1._
We defer proof of Theorem 4.2 to Appendix A.2. In practice, we estimate the error rates on finite samples \((X_{i},T_{i},Y_{i})\sim p\), which gives an approximation \(\hat{\eta}_{t}\). Therefore, we propose a conditional class probability estimation (CCPE) method for parameter estimation which estimates \(\hat{\alpha}_{t}\), \(\hat{\beta}_{t}\) by fitting \(\hat{\eta}_{t}\) on observational data and then applying the relevant pair of anchor assumptions to estimate error rates. Algorithm 2 provides pseudocode for this approach with min and max anchors, which can easily be extended to other pairs of identifying assumptions shown in Table 1. The combination of min and max anchors is known as _weak separability_(Kang et al., 2017) or _mutual irreducibility_(Kang et al., 2017; Li et al., 2018) in the observational label noise literature. Prior results in the observational setting show that unconditional class probability estimation (i.e., fitting \(\hat{\eta}(x)=p(Y=1|X=x)\)) yields a consistent estimator for observational error rates under weak separability (Li et al., 2018; Li et al., 2018). Statistical consistency results extend to the treatment-conditional setting under positivity (4) because \(p(T=t|X=x)>0,\ \forall t\in\{0,1\},\ x\in\mathcal{X}\). However, asymptotic convergence rates may be slower under strong selection bias if \(p(T=t|X=x)\) is near \(0\).
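A minimal sketch of CCPE under min and max anchors is given below; the gradient-boosting model class and the use of an empirical minimum and maximum over the sample are illustrative simplifications of the procedure.

```
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def ccpe_min_max(X, T, Y, t):
    """Estimate (alpha_t, beta_t) assuming min and max anchors hold.

    Fit eta_hat_t(x) = p(Y=1 | X=x, T=t) on the treatment-t subsample, then use
    alpha_t = inf_x eta_t(x) and beta_t = 1 - sup_x eta_t(x), with the inf/sup
    approximated by the min/max of eta_hat_t over the observed covariates.
    """
    arm = T == t
    clf = GradientBoostingClassifier().fit(X[arm], Y[arm])
    eta_hat = clf.predict_proba(X)[:, 1]   # evaluated over the full covariate sample
    alpha_hat = float(np.min(eta_hat))
    beta_hat = float(1.0 - np.max(eta_hat))
    return alpha_hat, beta_hat
```

Under a base rate anchor instead of the max anchor, the second equation is instead \(\mathbb{E}[\eta_{t}(X)]=\alpha_{t}+(1-\alpha_{t}-\beta_{t})\,\mathbb{E}[\eta_{t}^{*}(X)]\), with the known base rate substituted for \(\mathbb{E}[\eta_{t}^{*}(X)]\).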
## 5. Experiments
Experimental evaluation under treatment-conditional OME is challenging due to compounding sources of uncertainty. We do not observe counterfactual outcomes in historical data, making it challenging to estimate the quality of new models via observational data. Further, because the target outcome is not observed directly, we rely on measurement assumptions when studying proxy outcomes in naturalistic data. We address this challenge by conducting a controlled
| | **Know \(\alpha_{t}\)** | **Min** | **Base rate** | **Max** | **Know \(\beta_{t}\)** |
| --- | --- | --- | --- | --- | --- |
| **Know \(\alpha_{t}\)** | ✗ | ✗ | ✓ | ✓ | ✓ |
| **Min** | ✗ | ✗ | ✓ | ✓ | ✓ |
| **Base rate** | ✓ | ✓ | ✓ | ✓ | ✓ |
| **Max** | ✓ | ✓ | ✓ | ✗ | ✗ |
| **Know \(\beta_{t}\)** | ✓ | ✓ | ✓ | ✗ | ✗ |

Table 1. Multiple combinations of min, max, and base rate anchor assumptions (shown via ✓) enable identification of \(\alpha_{t}\), \(\beta_{t}\).
evaluation with synthetic data where ground truth potential outcomes are fully observed. To better reflect the ecological settings of real-world deployments, we also conduct a semi-synthetic evaluation with real data collected through randomized controlled trials (RCTs) in healthcare and employment domains. Our evaluation (1) validates our proposed risk minimization approach, (2) underscores the need to carefully consider measurement assumptions during error rate estimation, and (3) shows that correcting for OME or treatment effects in isolation is insufficient.
### Models
We compare several modeling approaches in our evaluation to examine how existing modeling practices are impacted by treatment-conditional outcome measurement error:
* **Unconditional proxy (UP)**. Predict the observed outcome unconditional on treatment: \(X\to Y\). Reflecting current practice for deployed models, this model _does not adjust for OME or treatment effects_.2 Footnote 2: This baseline is also called an _observational risk assessment_ in experiments reported by Coston et al. (2013)
* **Unconditional target (UT)**. Predict the target outcome unconditional on treatment: \(X\to Y^{*}\). Here, we determine \(Y^{*}\) by applying consistency: \(Y^{*}=(1-T)\cdot Y_{0}^{*}+T\cdot Y_{1}^{*}\). This method reflects a setting in which no OME is present but modeling does not account for treatment effects (Zhu et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019).
* **Conditional proxy (CP)**. Predict the proxy outcome conditional on treatment: \(X,T\to Y\). This is a counterfactual model that estimates a conditional expectation _without correcting for OME_(Zhu et al., 2017; Li et al., 2019; Li et al., 2019).3 Footnote 3: This model is known by different names in the causal inference literature, including the backdoor adjustment (G-computation) formula (Fama et al., 2019; Li et al., 2019), T-learner (S
(i.e., UP, UT) will learn an average of the two class probability functions. Under our choice of \(\pi(x)\), fewer samples are drawn from \(\eta_{1}^{*}(x)\) in the region where \(\pi(x)\) is small (near \(X=-1\)), and fewer samples are drawn from \(\eta_{0}^{*}(x)\) in the region where \(1-\pi(x)\) is small (near \(x=1\)). This introduces selection bias when sampling from \(\pi(x)\).
_Setup details._ We train each model in Section 5.1 to predict risk under no intervention (\(t=0\)) and vary \((\alpha_{0},\beta_{0})\). We keep \((\alpha_{1},\beta_{1})\) fixed at \((0,0)\) across settings. When estimating OME parameters, we run CCPE with cross-fitting (Algorithm 4) with min and max anchor assumptions for identification. These assumptions hold precisely under this controlled evaluation (Figure 3). We run all methods with cross-fitting (Algorithm A.3) and report performance on \(Y_{0}^{*}\).
_Results._ Figure 4 shows the performance of each model as a function of sample size. TPO provides an upper bound on performance because it learns directly from target potential outcomes. RW-SL outperforms all other methods trained on observational data. Both models that do not condition on treatment (UP and UT), and the conditional regression trained on proxy outcomes (CP), reach a performance plateau by 50k samples and do not benefit from additional data. This indicates that (1) learning a counterfactual model and (2) correcting for measurement error is necessary to learn \(\eta_{t}^{*}\) in this evaluation. We likely observe a sharper plateau in UP and UT above 20k samples because these approaches fit a weighted average of \(\eta_{0}^{*}\) and \(\eta_{1}^{*}\) (where \(\eta_{1}^{*}\) differs from \(\eta_{0}^{*}\) considerably). Table 2 shows a breakdown across error
| \((\alpha_{0},\beta_{0})\) | \((0.0,0.4)\) | \((0.1,0.3)\) | \((0.2,0.2)\) | \((0.3,0.1)\) | \((0.4,0.0)\) |
| --- | --- | --- | --- | --- | --- |
| UP | 54.18 (0.09) | 53.00 (0.39) | 54.89 (1.09) | 55.81 (0.74) | 46.76 (0.33) |
| UT | 61.57 (0.63) | 60.95 (0.50) | 60.49 (0.41) | 61.00 (0.49) | 60.54 (0.70) |
| CP | 51.36 (1.83) | 68.24 (2.61) | **75.05** (0.92) | 67.77 (1.33) | 61.88 (0.28) |
| SL \((\hat{\alpha},\hat{\beta})\) | 72.38 (1.65) | 65.45 (0.66) | 67.43 (1.64) | 68.01 (0.99) | 65.92 (1.34) |
| RW-SL \((\hat{\alpha},\hat{\beta})\) | 69.08 (1.55) | 65.96 (1.18) | 66.57 (1.32) | 68.39 (1.33) | 64.56 (0.52) |
| SL \((\alpha,\beta)\) | 70.00 (1.83) | 67.03 (1.28) | 65.27 (1.28) | 66.62 (1.35) | 66.79 (1.59) |
| RW-SL \((\alpha,\beta)\) | **73.68** (1.49) | **73.39** (1.60) | 72.52 (1.66) | **74.34** (1.15) | **75.01** (1.24) |
| TPO | **77.08** (0.11) | **77.09** (0.20) | **76.98** (0.08) | **76.84** (0.18) | **76.90** (0.16) |

Table 2. Model accuracy and (s.e.) across error parameter settings \((\alpha_{0},\beta_{0})\) at \(N=60k\) samples over 10 runs. Top-2 performance across each \((\alpha_{0},\beta_{0})\) setting shown in bold.
rates \((\alpha_{0},\beta_{0})\) at \(60k\) samples. RW-SL outperforms SL when oracle parameters are known.4 However, RW-SL and SL perform comparably when weights and parameters are learned. This may be because RW-SL relies on estimates \(\hat{w}\) in addition to \(\hat{\alpha}_{0},\hat{\beta}_{0}\), which could introduce instability given misspecification in \(\hat{w}\). CP performs notably well under high error parameter symmetry (i.e., \(\alpha_{0}=\beta_{0}=.2\)). This is consistent with prior results from the class-conditional label noise literature (Zhu et al., 2017; Zhang et al., 2018), which show that the optimal classifier threshold for misclassification risk does not change under symmetric label noise. CP performs worse under high error asymmetry.
Footnote 4: We omit SL from Figure 4 to avoid clutter. We observe very similar convergence rates of SL and RW-SL.
### Semi-synthetic experiments on healthcare and employment data
In addition to our synthetic evaluation, we conduct experiments using real-world data collected as part of randomized controlled trials (RCTs) in healthcare and employment domains. While this evaluation affords less control over the data generating process, it provides a more realistic sample of data likely to be encountered in real-world model deployments. Evaluation via data from randomized or partially randomized experimental studies is useful for validating counterfactual prediction approaches because random assignment ensures that causal assumptions are satisfied (Han et al., 2015; Zhang et al., 2018; Zhang et al., 2018).
#### 5.3.1. Randomized Controlled Trial (RCT) Datasets
In 2008, the U.S. state of Oregon expanded access to its Medicaid program via a lottery system (Obermeyer et al., 2010). This lottery provided an opportunity to study the effects of Medicaid enrollment on healthcare utilization and medical outcomes via an experimental design, commonly referred to as the Oregon Health Insurance Experiment (OHIE). Lottery enrollees completed a pre-randomization survey recording demographic factors and baseline health status and were given a one-year follow-up assessment of health status and medical care utilization. We refer the reader to Finkelstein et al. (2010) for details. We use the OHIE dataset to construct an evaluation task that parallels the label choice bias analysis of Obermeyer et al. (2010). We use this dataset rather than synthetic data released by Obermeyer et al. (2010) because (1) treatment was randomly assigned, ruling out positivity and ignorability violations possible in observational data, and (2) OHIE data contains covariates necessary for predictive modeling. We predict diagnosis with an active chronic medical condition over the one-year follow-up period given \(D=58\) covariates, including health history, prior emergency room visits, and public services use. We predict chronic health conditions because findings from Obermeyer et al. (2010) indicate that this outcome variable is a reasonable choice of proxy for patient medical need. We adopt the randomized lottery draw as the treatment.5
Footnote 5: Note that the OHIE experiment had imperfect compliance (\(\approx\)30 percent of selected individuals successfully enrolled (Obermeyer et al., 2010)). Therefore, we predict diagnosis with a new chronic health condition given the _opportunity to enroll_ in Medicaid. This evaluation is consistent with many high-stakes decision-support settings granting opportunities to individuals, which they have a choice to pursue if desired.
We conduct a second real-world evaluation using data from the JOBS dataset, which investigates the effect of job retraining on future employment status (Zhu et al., 2018). This dataset includes an experimental sample collected by LaLonde (2017) via the National Supported Work (NSW) program (297 treated, 425 control) consisting primarily of low-income individuals seeking job retraining. Smith and Todd (2017) combine this sample with a "PSID" comparison group (2,490 control) collected from the general population, which resulted in a final sample with 297 treated and 2,915 control. This dataset includes \(D=17\) covariates including age, education, prior earnings, and interaction terms. 482 (15%) of subjects were unemployed at the end of the study. Following Johansson et al. (2017), we construct an evaluation task predicting unemployment under enrollment (\(t=1\)) and no enrollment (\(t=0\)) in a job retraining program conditional on covariates.
#### 5.3.2. Synthetic OME and selection bias
We experimentally manipulate OME to examine how outcome regressions perform under treatment-conditional error of known magnitude. We adopt diagnosis with a new chronic condition and
future unemployment as a _target outcome_ for OHIE and JOBS, respectively. We observe proxy outcomes by flipping target outcomes with probability \((\alpha_{0},\,\beta_{0})\). We keep \((\alpha_{1},\,\beta_{1})\) fixed at \((0,0)\). This procedure of generating proxy outcomes by flipping available labels is a common approach for vetting the feasibility of new methodologies designed to address OME [43, 46, 76]. This approach offers precise control over the magnitude of OME but suffers from less ecological validity than studying multiple naturalistically occurring proxies [49]. We opt for this semi-synthetic evaluation because (1) the precise measurement relationship between naturally occurring proxies may not be fully known, (2) the measurement relationship between naturally occurring proxies cannot be manipulated experimentally, and (3) there are few RCT datasets (i.e., required to guarantee causal assumptions) that contain multiple proxies of the same target outcome.
Models used for decision support are typically trained using data gathered under a historical decision-making policy. When prior decisions were made non-randomly, this introduces selection bias (\(T\not\perp X\)) and causes distribution shift between the population that received treatment \(t\) in training data, and the full population encountered at deployment time. Therefore, we emulate selection bias in the _training dataset_, and evaluate models over a held-out test set of randomized data. We insert selection bias in OHIE data by removing individuals from the treatment (lottery winning) arm with household income above the federal poverty line (10% of the treatment sample). This resembles an observational setting in which low-income individuals are more likely to receive an opportunity to enroll in a health insurance program (e.g., Medicaid, which determines eligibility based on household income in relation to the federal poverty line). We restrict our analysis to single-person households, yielding \(N=12,994\) total samples (6,189 treatment, 6,805 control). We model selection bias in JOBS data by including samples from the observational and experimental cohorts in the training data. Because the PSID comparison group consists of individuals with higher income and education than the NSW group, there is considerable distribution shift across the NSW and PSID cohorts [30, 39, 70]. Therefore, a model predicting unemployment over the control population (consisting of NSW and PSID samples) may suffer from bias when evaluated against test data that only includes samples from the NSW experimental arm. Thus we split data from the NSW experimental cohort 50-50 across training and test datasets, and only include PSID data in the training dataset.
#### 5.3.3. Experimental setup
We include a Conditional Target (CT) model in place of a Target Potential Outcome (TPO) model because counterfactual outcomes are not available in experimental data. CT provides a reasonable upper-bound on performance because identifiability conditions are satisfied in an experimental setting [52]. However, it is not possible to report accuracy over potential outcomes because counterfactual outcomes are unobserved. Therefore, we report error in average treatment effect estimates \(\tau-\hat{\tau}\), for
\[\tau\coloneqq\mathbb{E}[Y^{*}\mid T=1]-\mathbb{E}[Y^{*}\mid T=0],\quad\hat{ \tau}\coloneqq\mathbb{E}[\hat{\eta}_{1}(X)]-\mathbb{E}[\hat{\eta}_{0}(X)]\]
where \(\hat{\eta}_{\ell}\) is one of the learned models listed in Section 5.1. One subtlety of this comparison is that our outcome regressions target the _conditional_ average treatment effect (CATE), while \(\tau\) reflects the _average treatment effect_ (ATE) across the full population. Following prior evaluations [30], we compare all methods against the ATE because a ground-truth CATE is not available for JOBS or OHIE data.
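For reference, a small sketch of this evaluation metric, assuming a randomized test split `(X_test, T_test, ystar_test)` and fitted per-arm models `eta0_hat`, `eta1_hat` (all names hypothetical):

```
import numpy as np

def ate_bias(X_test, T_test, ystar_test, eta0_hat, eta1_hat):
    """Bias of the model-based ATE estimate against the experimental ATE."""
    # Experimental ATE from the randomized test split (difference in arm means).
    tau = ystar_test[T_test == 1].mean() - ystar_test[T_test == 0].mean()
    # Model-based estimate: average each arm's regression over the full test population.
    tau_hat = eta1_hat(X_test).mean() - eta0_hat(X_test).mean()
    return tau - tau_hat
```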
_Details._ Due to small sample size of JOBS data under the treatment condition, we run semi-synthetic experiments without cross-fitting (Algorithm 1). We report results over a test fold of randomized data that does not contain flipped outcomes or selection bias. All models included in our comparison use the same 4-layer MLP architecture. We use the same model architecture to estimate error rates via CCPE and learn \(\hat{\text{w}}\). Appendix A.4 contains additional setup details.
#### 5.3.4. Results
Figure 5 shows bias in ATE estimates \(\tau-\hat{\tau}\) over 10 runs on JOBS and OHIE data. The left panel compares CP, UT, UP, and the oracle CT model against RW-SL/SL with oracle parameters \((\alpha_{0},\beta_{0})\), \((\alpha_{1},\beta_{1})\). We show performance
of RW-SL with learned parameters \((\hat{\alpha}_{0},\hat{\beta}_{0})\), \((\hat{\alpha}_{1},\hat{\beta}_{1})\) on the right panel. The left panel shows that CP is highly sensitive to measurement error. This is because measurement error introduces bias in estimates of the conditional expectations, which propagates to treatment effect estimates. Because UT and UP do not condition on treatment, they estimate an _average_ of the outcome functions \(\eta_{0}^{*}\) and \(\eta_{1}^{*}\), and generate predictions near 0. Therefore, while UT and UP perform well on OHIE data due to a small ground-truth ATE (\(\tau=0.015\)), they perform poorly on JOBS (\(\tau=-0.077\)). SL and RW-SL with oracle parameters \(\alpha_{t}\), \(\beta_{t}\) perform comparably to the CT model with oracle access to target potential outcomes across all measurement error settings. As shown on the right panel of Figure 5, RW-SL performance is highly sensitive to the choice of anchor assumption used to estimate parameters \((\hat{\alpha}_{0},\hat{\beta}_{0})\), \((\hat{\alpha}_{1},\hat{\beta}_{1})\) as indicated by increased bias in \(\hat{\tau}\) and greater variability over runs. In particular, RW-SL performs poorly when Min/Max and Br/Max pairs of anchor assumptions are used to estimate error rates because the max anchor assumption is violated on OHIE and JOBS data.
Specifically, when we fit the CT model to estimate \(\hat{\eta}_{0}^{*}\), \(\hat{\eta}_{1}^{*}\) on OHIE data, and compute inferences over a validation fold \(X_{\text{val}}\), we see that
\[\min_{\mathbf{x}\in X_{\text{val}}}\hat{\eta}_{0}^{*}\approx 2.23\times 10^{-6},\quad\max_{\mathbf{x}\in X_{\text{val}}}\hat{\eta}_{0}^{*}\approx 0.85,\quad\min_{\mathbf{x}\in X_{\text{val}}}\hat{\eta}_{1}^{*}\approx 1.02\times 10^{-5},\quad\max_{\mathbf{x}\in X_{\text{val}}}\hat{\eta}_{1}^{*}\approx 0.81.\]
This suggests the min anchor assumption that \(\min_{\mathbf{x}\in X_{\text{val}}}\hat{\eta}_{t}^{*}=0\) is reasonable for \(t\in\{0,1\}\), while the max anchor assumption that \(\max_{\mathbf{x}\in X_{\text{val}}}\hat{\eta}_{t}^{*}=1\) is violated for both \(t\in\{0,1\}\). Because the min anchor assumption is satisfied for these choices of target outcome, and the ground-truth base rate is known in this experimental setting, RW-SL demonstrates strong performance in the Br/Min combination of anchor assumptions. Applying this same procedure to the unemployment outcome targeted in JOBS data also reveals a violation of the max anchor assumption. These results underscore the importance of selecting anchor assumptions in close consultation with domain experts because it is not possible to verify anchor assumptions by learning \(\hat{\eta}_{t}^{*}\) when the target outcome of interest is unobserved.
Figure 5. Bias in average treatment effect (ATE) estimates on OHIE and JOBS data. Error bars indicate standard error over ten runs. CT is a model with oracle access to target outcomes and RW-SL is our proposed approach.
## 6. Discussion
In this work, we show the importance of carefully addressing intersectional threats to model reliability during the development and evaluation of predictive models for decision support. Our theoretical and empirical results validate the efficacy of our unbiased risk minimization approach. When OME parameters are known, our method performs comparably to a model with oracle access to target potential outcomes. However, our results underscore the importance of vetting anchoring assumptions used for error parameter estimation before using error rate estimates for risk minimization. Critically, our experimental results also demonstrate that correcting for a single threat to model reliability in isolation is insufficient to address model validity concerns (Sundar et al., 2017), and risks promoting false confidence in corrected models. Below, we expand upon key considerations surfaced by our work.
### Decision-points and complexities in measurement error modeling
Our work speaks to key complexities faced by domain experts, model developers, and other stakeholders while examining proxies in ADS. One decision surfaced by our work entails which _measurement error model_ best describes the relationship between the unobserved outcome of policy interest and its recorded proxy. We open this work by highlighting a measurement model decision made by Obermeyer et al. (2019) during their audit of a clinical risk assessment: that error rates are fixed across treatments. Our work suggests that failing to account for treatment-conditional error in OME models may exacerbate reliability concerns. However, at the same time, the error model we adopt in this work intentionally abstracts over other factors known to impact proxies in decision support tasks, including error rates that vary across covariates. Although this simplifying assumption can be unreasonable in some settings (Obermeyer et al., 2019; Obermeyer et al., 2019), including the one studied by Obermeyer et al. (2019), it is helpful in foregrounding previously-overlooked challenges involving treatment effects and selection bias. In practice, model developers correcting for measurement error may wish to combine our methods with existing unbiased risk minimization approaches designed for group-dependent error where appropriate (Obermeyer et al., 2019). Further, analyses of measurement error should not overlook more fundamental conceptual differences between target outcomes and proxies readily available for modeling (e.g., when long-term child welfare related outcomes targeted by a risk assessment differ from _immediate_ threats to child safety weighted by social workers (Obermeyer et al., 2019; Obermeyer et al., 2019)). This underscores the need to carefully weigh the validity of proxies in consultation with multiple stakeholders (e.g., domain experts, data scientists, and decision-makers) while deciding whether OME correction is warranted.
A second decision point highlighted in this work entails the _specific measurement error parameters_ that govern the relationship between target and proxy outcomes. In particular, our work underscores the need for a tighter coupling between domain expertise and data-driven approaches for error parameter estimation. Current techniques designed to address OME in the machine learning literature - which typically examine settings with "label noise" - rely heavily upon data-driven approaches without close consideration of whether the underlying measurement assumptions hold (Sundar et al., 2017; Obermeyer et al., 2019; Obermeyer et al., 2019). While application of these assumptions may be practical for methodological evaluations and theoretical analysis (Sundar et al., 2017; Obermeyer et al., 2019; Obermeyer et al., 2019), these assumptions should be carefully vetted when applying OME correction to real-world proxies. This is supported by our findings in Figure 5, which show that RW-SL performs poorly when the anchor assumptions used for error parameter estimation are violated. Our flexible set of anchor assumptions provides a step towards a tighter coupling between domain expertise and data-driven approaches in measurement parameter estimation.
### Challenges of linking causal and statistical estimands
Our counterfactual modeling approach requires several causal identifiability assumptions (Steiner, 2017), which may not be satisfied in all decision support contexts. Of our assumptions, the most stringent is likely ignorability, which requires that no unobserved confounders influenced past decisions and outcomes. While recent modeling developments may ease ignorability-related concerns in some cases (Kraus et al., 2019; Steiner, 2017), model developers should carefully evaluate whether confounders are likely to impact a model in a given deployment context. At the same time, our results show that formulating algorithmic decision support as a _"pure prediction problem"_ that optimizes predictive performance without estimating causal effects (Steiner, 2017) imposes equally serious limitations. If the policy-relevant target outcome of interest is risk _conditional on intervention_ (as is often the case in decision support applications), an observational model will generate invalid predictions for cases that historically responded most to treatment (Kraus et al., 2019). Taken together, our work suggests that domain experts and model developers should exercise considerable caution while mapping the causal estimand of policy interest to the statistical estimand targeted by a predictive model (Kraus et al., 2019).
## 7. Acknowledgements
We thank attendees of the NeurIPS 2022 Workshop on Causality for Real-world Impact for their helpful feedback. This work was supported by an award from the UL Research Institutes through the Center for Advancing Safety of Machine Intelligence (CASMI) at Northwestern University, the Carnegie Mellon University Block Center for Technology and Society (Award No. 53680.1.5007718), and the National Science Foundation Graduate Research Fellowship Program (Award No. DGE-1745016). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the grantors.
|
2305.13949
|
A subsolar oxygen abundance or a radiative region deep in Jupiter
revealed by thermochemical modelling
|
Jupiter's deep abundances help to constrain the formation history of the
planet and the environment of the protoplanetary nebula. Juno recently measured
Jupiter's deep oxygen abundance near the equator to be 2.2$_{-2.1}^{+3.9}$
times the protosolar value (2$\sigma$ uncertainties). Even if the nominal value
is supersolar, subsolar abundances cannot be ruled out. Here we use a
state-of-the-art one-dimensional thermochemical and diffusion model with
updated chemistry to constrain the deep oxygen abundance with upper
tropospheric CO observations. We find a value of 0.3$_{-0.2}^{+0.5}$ times the
protosolar value. This result suggests that Jupiter could have a carbon-rich
envelope that accreted in a region where the protosolar nebula was depleted in
water. However, our model can also reproduce a solar/supersolar water abundance
if vertical mixing is reduced in a radiative layer where the deep oxygen
abundance is obtained. More precise measurements of the deep water abundance
are needed to discriminate between these two scenarios and understand Jupiter's
internal structure and evolution.
|
Thibault Cavalié, Jonathan Lunine, Olivier Mousis
|
2023-05-23T11:19:13Z
|
http://arxiv.org/abs/2305.13949v1
|
A subsolar oxygen abundance or a radiative region deep in Jupiter revealed by thermochemical modelling
###### Abstract
Jupiter's deep abundances help to constrain the formation history of the planet and the environment of the protoplanetary nebula. Juno recently measured Jupiter's deep oxygen abundance near the equator to be 2.2\({}^{+3.9}_{-2.1}\) times the protosolar value (2\(\sigma\) uncertainties). Even if the nominal value is supersolar, subsolar abundances cannot be ruled out. Here we use a state-of-the-art one-dimensional thermochemical and diffusion model with updated chemistry to constrain the deep oxygen abundance with upper tropospheric CO observations. We find a value of 0.3\({}^{+0.5}_{-0.2}\) times the protosolar value. This result suggests that Jupiter could have a carbon-rich envelope that accreted in a region where the protosolar nebula was depleted in water. However, our model can also reproduce a solar/supersolar water abundance if vertical mixing is reduced in a radiative layer where the deep oxygen abundance is obtained. More precise measurements of the deep water abundance are needed to discriminate between these two scenarios and understand Jupiter's internal structure and evolution.
1Laboratoire d'Astrophysique de Bordeaux, Univ. Bordeaux, CNRS, B18N, allee Geoffroy Saint-Hilaire, 33615 Pessac, France (ORCID: 0000-0002-0649-1192)
LESIA, Observatoire de Paris, PSL Research University, CNRS, Sorbonne Universites, UPMC Univ. Paris 06, Univ. Paris Diderot, Sorbonne Paris Cite, Meudon, France
3Cornell University, Ithaca, NY, USA (ORCID: 0000-0003-2279-4131)
4Aix Marseille Universite, Institut Origines, CNRS, CNES, LAM, Marseille, France
**received: 20 May 2022**
**accepted: 23 February 2023**
**DOI: [https://doi.org/10.1038/s41550-023-01928-8](https://doi.org/10.1038/s41550-023-01928-8)**
## 1 Introduction
Giant planets are the true architects of planetary systems given their relatively short formation timescales compared with terrestrial planets, their gravity and migration. Their deep composition holds part of the key to understanding the formation of giant planets in planetary systems, along with other measurements such as their gravity and magnetic field. It can also help to constrain the processes that led to the condensation or trapping of the primordial ices in the protosolar nebula (Helled & Lunine 2014; Bar-Nun et al. 1988).
The Galileo probe plunged into the atmosphere of Jupiter in 1995, reaching the 22 bar level, and measured its elemental and isotopic composition. The main result is the relatively uniform enrichment in volatiles by a factor of two to four with respect to the protosolar value (Wong et al. 2004). The only element that did not follow this trend, and for which there was no internal process to explain its depletion, is oxygen. The accepted hypothesis is that Galileo entered a dry area where water, the main carrier of oxygen at depths, was still not uniformly mixed at 22 bar. The oxygen measurement from Galileo has thus been mostly accepted as a lower limit ever since.
One of the reasons the Juno mission was designed was to fill this gap left by Galileo regarding the deep oxygen abundance in Jupiter. The idea was to observe the microwave spectrum of Jupiter at various emission angles to retrieve the global water abundance (Janssen et al. 2005). The main difficulty then resides in the dependency of the spectrum on the temperature profile and on absorbers other than water, such as ammonia. While Juno confirmed previous observations that ammonia was depleted below its condensation level (de Pater 1986), it surprisingly showed that this depletion was latitude dependent and that it extended as deep as \(\sim\)50 bar (Li et al. 2017), except near the equator. Li et al. (2020) used the microwave spectra seen by Juno at this latitude to retrieve the vertical water profile of Jupiter at its equator and derive its deep oxygen abundance. They found that the oxygen was nominally enriched by a factor of 2.7\({}^{+2.4}_{-1.7}\) times the protosolar value but the 1\(\sigma\) lower bound was weakly determined, and they state that subsolar values are possible. Helled et al. (2022) emphasized that these error bars are sufficiently large that the depletion seen by Galileo may not be a local anomaly, but could instead reflect a global depletion. In what follows, we use protosolar abundances determined by Lodders (2021). The Jovian oxygen abundance of Li et al. (2020) then translates into 2.2\({}^{+2.0}_{-1.4}\) times the protosolar value.
Long before Galileo and Juno, the reported (Beer 1975) detection of CO in the troposphere of Jupiter with an abundance higher than thermochemical equilibrium predictions by orders of magnitude triggered the development of numerous
models to constrain the deep oxygen abundance by solving the balance between thermochemistry and vertical mixing (for example, Lunine & Stevenson 1987; Fegley & Prinn 1988; Yung et al. 1988). Below the cloud level, in the deep troposphere, CO and H\({}_{2}\)O are in thermochemical equilibrium resulting from the equilibrium reaction CO + 3H\({}_{2}\) = H\({}_{2}\)O + CH\({}_{4}\). As the temperature decreases with height, the thermochemical equilibrium is shifted in favour of H\({}_{2}\)O. The detection of CO in the upper troposphere then demonstrates that thermochemical equilibrium is quenched by vertical mixing when the mixing timescale becomes shorter than the chemical conversion timescale. Although the initial studies cited above focused on identifying the rate-limiting reaction, more recent work incorporated comprehensive chemical schemes (Visscher et al. 2010; Wang et al. 2016). By probing the abundance of CO in the upper troposphere and modelling thermochemistry and diffusion, one can then reconstruct the vertical profile of the species and tie it back to the deep water (and thus to the deep oxygen) abundance in the planet.
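As a schematic illustration of the quenching argument (and not of the model used in this work), one can compare a mixing timescale \(\tau_{\rm mix}\sim H^{2}/K_{\rm zz}\), with \(H\) the pressure scale height, against a chemical conversion timescale \(\tau_{\rm chem}\). In the sketch below, the Arrhenius-type form and coefficients of \(\tau_{\rm chem}\), and the use of the full scale height as the mixing length, are placeholders rather than values from any kinetic network.

```
import numpy as np

# Illustrative Jupiter-like deep troposphere (placeholder values in SI units).
k_B, m_H, mu, g = 1.380649e-23, 1.6735575e-27, 2.3, 24.79  # mu in amu, g in m s^-2
T = np.linspace(300.0, 1700.0, 500)       # temperature grid [K]
K_zz = 1e8 * 1e-4                         # 1e8 cm^2 s^-1 converted to m^2 s^-1

# Mixing timescale: tau_mix ~ H^2 / K_zz with H the pressure scale height.
H = k_B * T / (mu * m_H * g)              # [m]
tau_mix = H**2 / K_zz                     # [s]

# Placeholder chemical timescale for CO -> CH4 conversion (coefficients NOT from the paper).
tau_chem = 1e-10 * np.exp(40000.0 / T)    # [s]

# Quench level: where the two timescales cross. Deeper (hotter) than this level,
# chemistry is faster than mixing and CO follows equilibrium; above it, CO is frozen in.
i_q = np.argmin(np.abs(np.log(tau_mix) - np.log(tau_chem)))
print(f"Illustrative quench temperature: {T[i_q]:.0f} K")
```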
We used a 1D thermochemical and diffusion model (Cavalie et al. 2017) to fit a CO upper tropospheric mole fraction of 0.9 \(\pm\) 0.3 ppb, as measured by previous studies (Bezard et al. 2002; Bjoraker et al. 2018). Although the chemical scheme used by Cavalie et al. (2017) was validated globally by the combustion industry over a wide range of temperatures and pressures, Wang et al. (2016) showed that the model failed to agree with most competing chemical schemes regarding the quench chemistry of CO in giant planets. Moses (2014) identified the main source of the disagreement as the kinetics of the conversion reaction of methanol into the methyl radical and water measured by Hidaka et al. (1989). This led Venot et al. (2020) to fully update their CH\({}_{3}\)OH submechanism by adopting the work of Burke et al. (2016), which noticeably provided an explicit logarithmic dependence on pressure of some reaction rates, and by removing the controversial reaction of Hidaka et al. (1989) that produced the methyl radical from methanol (and thus from CO). This updated scheme then destroys less CO than that of Venot et al. (2012), in agreement with other studies (Wang et al. 2015, 2016; Moses 2014) (Extended Data Fig. 1). As a consequence, a given CO abundance requires a lower deep oxygen abundance with the new scheme (Methods).
## 2 Results
With our nominal parameter set (Methods), we found that a deep oxygen abundance of 0.3 times the protosolar value is required in Jupiter's deep troposphere. The corresponding vertical profiles are displayed in Fig. 1. In our simulations, we allowed three parameters to vary to fit the tropospheric CO and estimate the uncertainty of the deep oxygen abundance with respect to the methane mole fraction \(y_{\rm CH_{4}}^{\rm top}\) at the top of the troposphere and the vertical eddy diffusion coefficient \(K_{\rm zz}\). We allowed these two parameters to vary within their respective uncertainty ranges; that is, \(y_{\rm CH_{4}}^{\rm top}=0.00204\pm 0.00050\) (ref. 3) and \(K_{\rm zz}=10^{8}\,\rm cm^{2}\,\rm s^{-1}\) within a factor of two (Wang et al. 2016; Grassi et al. 2020). When \(y_{\rm CH_{4}}^{\rm top}\) was allowed to vary within its uncertainty range and \(K_{\rm zz}\) was fixed to \(10^{8}\,\rm cm^{2}\,\rm s^{-1}\), we obtained our nominal fits to the CO mole fraction with a deep oxygen abundance of \(0.3^{+0.3}_{-0.2}\) times the protosolar value (Fig. 3). Conversely, when we varied \(K_{\rm zz}\) and fixed \(y_{\rm CH_{4}}^{\rm top}\) to 0.00204, we obtained a deep oxygen abundance of \(0.3^{+0.3}_{-0.2}\) times the protosolar value (Fig. 2). The deep carbon abundance was 3.2 \(\pm\) 0.8 times the protosolar value, depending on the adopted value of \(y_{\rm CH_{4}}^{\rm top}\). Finally, if we accounted for the uncertainty ranges of \(K_{\rm zz}\) and \(y_{\rm CH_{4}}^{\rm top}\), then the result for the deep oxygen was \(0.3^{+0.5}_{-0.2}\) times the protosolar value. According to our model, oxygen is subsolar and the C/O ratio is \(6^{+10}_{-5}\), suggesting that Jupiter could have a surprisingly carbon-rich envelope.
The deep oxygen abundance in giant planets has long been debated as it is one of the key elements pertaining to the formation of solids and trapping of more volatile species in the protosolar nebula, which are later released to the growing envelopes of the giant planets (Owen et al. 1999; Gautier et al. 2001). Remote sensing observations and in situ measurements provide values ranging from 0.25 to 4.2 times the protosolar value. The Galileo measurement, when translated into an O/H abundance using the protosolar composition of Lodders (2021), is 0.37 \(\pm\) 0.12 (Wong et al. 2004). This value has often been considered a lower limit because the water abundance was still increasing in the measurements when the signal from the Galileo probe was lost, and the probe entered a dry region of Jupiter's atmosphere (a 5 \(\mu\)m hotspot). Bjoraker et al. (2018) found that their Great Red Spot spectrum around 5 \(\mu\)m was best fitted by fixing the water cloud base at 5 \(\pm\) 1 bar, which translates nominally into a near-protosolar deep oxygen abundance, but the 1\(\sigma\) range encompasses 0.3 to 3 times the protosolar value. Finally, a deep water abundance of \(2.2^{+2.0}_{-1.4}\) times the protosolar value was obtained (Li et al. 2020) with the Juno Microwave Radiometer (MWR) in the sole equatorial region where vertical mixing and meteorology seem to maintain a well-mixed atmosphere throughout the probed gas column. Whether this value is representative of the whole planet remains to be verified, especially given the unexpected results for the meridional distribution of ammonia and its depletion at pressures lower than 30 bar (Li et al. 2017) and the role water seems to play in this depletion (Guillot et al. 2020). The fact that Li et al. (2017) found about half the deep ammonia measured by Galileo could indicate that this oxygen measurement is also not representative of the global value. In any case, their oxygen abundance retrieval presents an error bar in which the lower end is more weakly determined than the higher end. The lower 2\(\sigma\) limit lies at 0.1 times the protosolar value.
## 3 Discussion
We nominally found a subsolar deep oxygen abundance in Jupiter. Our results, compatible with the range obtained from similar modelling in Visscher et al. (2010), indicate marginal agreement with the Juno MWR analysis (Li et al., 2020) as discussed above. Cloud models often require one times solar oxygen or higher (Hurrigarro et al., 2022), but can also accommodate subsolar oxygen (Hueso and Sanchez-Lavega, 2001). More problematic are the results from lightning data. While overall modelling of lightning frequency (Aglyamov et al., 2021) permits a subsolar value, the Galileo detection of a potentially deeper lightning flash would imply a water enrichment exceeding solar values (Dyudina et al., 2002). Decreasing \(K_{zz}\) from the nominal value of \(10^{8}\) cm\({}^{2}\) s\({}^{-1}\) to \(2.5\times 10^{6}\) cm\({}^{2}\) s\({}^{-1}\) would raise the deep oxygen abundance from 0.3 to 2.2 times solar (Fig. 1 and Extended Data Fig. 2). However, such a low level of vertical mixing must be confined to the altitudes below which temperatures reach 1000-1100 K, where CO quenching starts, as demonstrated in Fig. 1; at 800 K it must be at or close to our nominal value to fit the data on other disequilibrium species such as PH\({}_{3}\) and GeH\({}_{4}\) (Wang et al., 2016; Grassi et al., 2020), as described in Methods. Our revised chemical network, taken from Venot et al. (2020), probably still suffers from some uncertainties in the reaction rates. However, any change in the rates to produce a solar or supersolar water abundance would require the deep oxygen abundance derived in the ice giants (Venot et al., 2020) to increase as well, raising the problem of the consistency between the deep oxygen abundance and deep D/H ratio when compared with that found in Oort cloud comets (Ali-Dib et al., 2014). Our results therefore require either (1) a carbon-rich envelope in Jupiter or (2) a deep layer of reduced vertical mixing.
Option (1) was already proposed by Mousis et al. (2012), assuming that the Galileo O determination corresponds to the bulk abundance. A high C/O ratio in Jupiter's envelope such as that of \(6^{+10}_{-5}\) found here resulting from a relatively low oxygen abundance was also proposed by previous studies (Lodders, 2004; Mousis et al., 2019). In one model (Mousis et al., 2019), the radial drift of pebbles through the amorphous-to-crystalline-ice transition front in the protoplanetary disk releases carbon-rich supervolatiles into the gas while stranding water in the ice. Other scenarios include the agglomeration
Figure 1: Abundances and temperature profiles for Jupiter. Vertical temperature (T), CO, H\({}_{2}\)O and CH\({}_{4}\) profiles from our nominal model with \(K_{zz}=10^{8}\) cm\({}^{2}\) s\({}^{-1}\) and \(y_{\rm CH_{4}}^{\rm top}=0.00204\) in which oxygen is subsolar (O/H = 0.3 times the protosolar value). The dashed lines correspond to alternative Jupiter abundance and \(T\) profiles obtained with a more sluggish mixing (\(K_{zz}\) is reduced to \(2.5\times 10^{6}\) cm\({}^{2}\) s\({}^{-1}\)) to match the nominal Juno O/H value (2.2 times the protosolar value). This lower \(K_{zz}\) is then indicative of a radiative region located around the quench level of CO (that is, at pressure \(p\sim\) 0.4–0.5 kbar and \(T\sim\) 1000 K). The dash-dotted lines correspond to a final model in which a deep radiative layer with reduced \(K_{zz}\) is inserted to match solar oxygen and produce 0.9 ppb CO in Jupiter’s upper troposphere: \(K_{zz}\) is set to 1 cm\({}^{2}\) s\({}^{-1}\) between \(T\) = 1400 K and 2200 K and transitions linearly with \(T\) towards our nominal value of \(10^{8}\) cm\({}^{2}\) s\({}^{-1}\) at 970 K to ensure that PH\({}_{3}\) and GeH\({}_{4}\) are quenched at \(\sim\)800 K. This is a non-unique solution.
of building blocks condensed in the vicinity of the condensation lines of C-rich volatiles by the growing Jupiter. The C/O ratio in icy solids formed in those regions is expected to rise steeply, as found by Mousis et al. (2021) to explain the water-poor composition of Comet C/2016 R2 (PanSTARRS). Another study (Mousis et al. 2021) suggested that a wide range of protosolar nebula compositions can match Jupiter's metallicity, including several types of icy phase (clathrates and pure condensates). It should be noted that current Jupiter formation and structure models predict a low-metallicity envelope (Helled et al. 2022; Schneider & Bitsch 2021).
Option (2) implies a dramatic decrease in vertical mixing at temperatures higher than 1000-1100 K, corresponding to a pressure level of roughly 0.6 kbar. Various mechanisms may lead to stable regions throughout Jupiter's deep atmosphere, and one mechanism that corresponds roughly to the p-T region here is a radiative zone extending from 1200-2000 K (Guillot et al. 1994). Such a zone is the result of low opacity, obtained only in the case of a depletion in the alkali metals (Guillot et al. 2004), and Juno MWR data hint at such a depletion (Bhattacharya et al. 2021). It is therefore plausible that the vertical mixing, represented by \(K_{\rm zz}\) in our model, is very low in the region just below the CO quench level, begins to increase at that level and reaches its full convective value by 900 K (300 bar) where PH\({}_{3}\) and GeH\({}_{4}\) quenching determine our nominal value for \(K_{\rm zz}\). A model in which \(K_{\rm zz}\) was set to the very low value of 1 cm\({}^{2}\) s\({}^{-1}\) (it could in principle be as low as the molecular diffusivity) at temperatures higher than 1400 K and to \(10^{8}\) cm\({}^{2}\) s\({}^{-1}\) at temperatures lower than 970 K and then interpolated logarithmically between the two levels produced satisfactory results with solar oxygen (Fig. 1 and Extended Data Fig. 2).
Either of the two possibilities - depleted oxygen or a deep radiative zone - would be important for understanding the nature of Jupiter's interior below its visible atmosphere. Further analysis of Juno MWR data during the extended mission, particularly at the longest wavelength channel, will help to distinguish between them. The results for Jupiter also provide important context for the elemental composition of giant planets in our Solar System, and beyond. In situ probes, despite their inherent limitation of a single entry-point, would provide invaluable compositional data for Saturn and the ice giants, especially regarding noble gases, as presented in other works (Mousis et al. 2014, 2018). Cavalie et al. (2020) have shown how the use of thermochemical simulations can increase the science return of in situ probe measurements.
Figure 2: \(K_{\rm zz}\) and oxygen dependence of Jupiter’s upper tropospheric CO mole fraction. The CO mole fraction (colour scale) as a function of tropospheric mixing and the deep oxygen abundance relative to solar. The solid line shows the computations that result in 0.9 ppb CO. The grey dashed lines represent the range that results in 0.6 to 1.2 ppb CO (full uncertainty range). The range of possible \(K_{\rm zz}\) values, constrained from laboratory experiments and matching observed values of PH\({}_{3}\) and GeH\({}_{4}\), is shown by the dashed blue lines. The nominal value of our model is shown by the black dot.
## Methods
### Thermochemical model
We used the 1D thermochemical and diffusion model initially developed in Venot et al. (2012) for warm exoplanets, and adapted in Cavalie et al. (2014, 2017) to study the deep oxygen abundance in Uranus and Neptune. The model solves the continuity equation as a function of time at each altitude, for 111 carbon, oxygen and nitrogen species through 1912 reactions.
Our model required boundary conditions regarding the composition of the upper troposphere. We took the upper tropospheric mole fraction of He and CH\({}_{4}\) from von Zahn et al. (1998) and Wong et al. (2004), respectively \(y_{\rm He}^{\rm top}\) = 0.1359 \(\pm\) 0.0027 and \(y_{\rm CH_{4}}^{\rm top}\) = 0.00204 \(\pm\) 0.0050, both resulting from the Galileo probe measurements. For CO, we adopted an upper tropospheric mole fraction of 0.9 \(\pm\) 0.3 ppb, following measurements from Bezard et al. (2002) and Bjoraker et al. (2018).
To constrain the deep oxygen abundance of Jupiter from upper tropospheric CO observations, we also needed to make assumptions on the vertical mixing and on the temperature profile. Following Wang et al. (2015) and recent Juno results of Grassi et al. (2020) on tropospheric abundances of disequilibrium species, we nominally adopted a vertical eddy mixing coefficient of 10\({}^{8}\) cm\({}^{2}\cdot\)s\({}^{-1}\) with a factor of two uncertainty. Even though Juno observations of NH\({}_{3}\)(Li et al., 2017) and models to explain downward ammonia transport (Guillot et al., 2020, 2020) show that chemical transport does not obey a pure diffusion equation between 0.1 and tens of bars, we assumed that this is the case at several hundred bars in the more homogeneously mixed deeper troposphere where the CO thermochemistry is quenched. Our temperature profile follows the Galileo profile (Seiff et al., 1998) within 1 K down to 22 bar and we extrapolated temperatures using a wet adiabat from the diffusion-dominated upper levels down to the deep levels where thermochemical equilibrium prevails. The deep oxygen abundance measured with Juno for Jupiter (Li et al., 2020) is not high enough to produce a radiative layer, resulting from a mean molecular weight gradient at the water condensation level in which the temperature would sharply increase, as opposed to the case of the ice giants (Cavalie et al., 2017). We stopped our temperature extrapolations at 1700 K, much deeper than the levels where thermochemistry is quenched by vertical mixing. We ensured that our results reached steady state with an integration time of 10\({}^{10}\) s.
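The quenching logic invoked here can be summarized as a timescale comparison: CO stops adjusting to equilibrium roughly where the vertical mixing timescale \(\tau_{\rm mix}=H^{2}/K_{zz}\) (with \(H\) the scale height) becomes shorter than the chemical relaxation timescale. The sketch below is ours, not the kinetic model used in this work; the temperature-pressure profile, scale height and Arrhenius-type \(\tau_{\rm chem}\) are invented placeholders, included only to illustrate the procedure.

```python
# Schematic recipe (ours, not the kinetic model used in this work): locate a
# quench level by comparing the mixing timescale tau_mix = H^2 / Kzz with a
# chemical relaxation timescale.  The T-p profile, scale height and the
# Arrhenius-type tau_chem below are invented placeholders.
import numpy as np

T = np.linspace(600.0, 1700.0, 500)          # temperature grid [K]
p = 1.0e3 * (T / 1000.0) ** 3.5              # toy adiabat-like pressure [bar]
H = 2.0e7 * (T / 1000.0)                     # toy scale height [cm]
Kzz = np.full_like(T, 1.0e8)                 # uniform eddy diffusion [cm^2 s^-1]

tau_mix = H ** 2 / Kzz                       # [s]
tau_chem = 1.0e-12 * np.exp(45000.0 / T)     # placeholder Arrhenius law [s]

i = int(np.argmin(np.abs(np.log(tau_mix) - np.log(tau_chem))))
print(f"toy quench level: T = {T[i]:.0f} K, p = {p[i]:.0f} bar")
```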
Figure 3: Carbon and oxygen dependence of Jupiter’s upper tropospheric CO mole fraction. The CO mole fraction (colour scale) as a function of the carbon abundance (represented by \(y_{\rm CH_{4}}^{\rm top}\)) and the deep oxygen (water) abundance relative to protosolar according to the thermochemical model results when assuming a constant \(K_{zz}\) of 10\({}^{8}\) cm\({}^{2}\cdot\)s\({}^{-1}\). The layout is similar to Fig. 2.
We fixed the deep nitrogen abundance to \(\sim\)4 times the protosolar value to reproduce the Galileo measurement (Wong et al. 2004). The abundance of N\({}_{2}\) was then \(\sim\)10\({}^{-5}\) in the upper troposphere. Our model results regarding oxygen were, however, insensitive to this value, because the nitrogen and oxygen chemistries were mostly uncoupled.
We adopted the protosolar abundances reported in Lodders (2021) throughout this Article. We used them to express our model results and to convert previous results to a common scale.
### Chemical scheme
The chemical network in Venot et al. (2012) is a C/H/O/N mechanism initially validated for the combustion industry to help understand the combustion of fuels in car engines and thus limit their environmental impact. It is based on a C\({}_{0}\)-C\({}_{2}\) mechanism to which a nitrogen reaction base was added. It comprises 105 species and 1926 reactions. Although it was validated against experiments for pressures ranging from 0.01 bar to several hundred bars and for temperatures ranging from 300 to 2500 K, Wang et al. (2016) showed that the conversion of H\({}_{2}\)O into CO was less efficient in Jupiter with the network shown in Venot et al. (2012) compared with several others (see their fig. 17), resulting in a CO abundance 10 times lower than in simulations involving competing networks. A similar issue with CH\({}_{4}\)/CO chemistry had been found in applications to hot Jupiters by Moses (2014), even though Venot et al. (2020) found even more compelling differences in cooler planets. Moses (2014) further narrowed down the main difference in the networks to the kinetics of methanol through the H + CH\({}_{3}\)OH = CH\({}_{3}\) + H\({}_{2}\)O reaction. The chemical rate of this reaction had been set in Venot et al. (2012) to that estimated by Hidaka et al. (1989), but was found to be over-estimated by Visscher et al. (2010) in their work on Jupiter's thermochemistry. This led Venot et al. (2020) to fully revise the CH\({}_{3}\)OH sub-network of their chemical scheme. They adopted experimental data (Burke et al. 2016) that are remarkable in several aspects. First, the reaction of Hidaka et al. (1989) is no longer explicitly present in the network. This does not prevent CH\({}_{3}\)OH from being destroyed and producing CH\({}_{3}\) and H\({}_{2}\)O, but this is achieved through other destruction pathways. Second, the kinetic rates of several reactions of this network (more specifically those involving methoxide and the methyl radical) have an explicit logarithmic dependence on pressure defined for up to five pressure decades, which increases the accuracy and robustness of the kinetics over this wide range of pressure conditions. This new scheme was validated (Venot et al. 2020) over a wide range of temperature and pressure conditions and showed improved agreement with experimental data. We have produced a CO profile for Jupiter in the same conditions as in fig. 17 of Wang et al. (2016). It is shown in Extended Data Fig. 1 and it fully agrees with the profiles presented in fig. 17 of Wang et al. (2016). When applied to the ice giants (Venot et al. 2020), this scheme produced the observed CO with substantially lower oxygen enrichments.
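As a concrete aside on the pressure-dependent rates mentioned above: such rates are typically tabulated as Arrhenius fits at a few discrete pressures and evaluated by linear interpolation in \(\log p\). The sketch below is ours and uses made-up coefficients; it shows the generic PLOG-style evaluation and is not the actual implementation or data of the Venot et al. (2020) scheme.

```python
# Generic sketch (ours) of PLOG-style pressure-dependent kinetics: Arrhenius fits
# k_i(T) = A_i * T**n_i * exp(-Ea_i / (R*T)) are tabulated at discrete pressures
# P_i and interpolated linearly in log P in between.  The coefficients below are
# invented for illustration and are NOT those of the Venot et al. scheme.
import math

R_GAS = 8.314  # J mol^-1 K^-1

PLOG_TABLE = [            # (pressure [bar], A, n, Ea [J/mol]) -- made-up values
    (0.01, 1.0e10, 0.5, 1.2e5),
    (1.0, 5.0e11, 0.3, 1.0e5),
    (100.0, 2.0e13, 0.0, 8.0e4),
]

def arrhenius(A, n, Ea, T):
    return A * T ** n * math.exp(-Ea / (R_GAS * T))

def k_plog(T, P):
    pts = [(math.log(Pi), math.log(arrhenius(A, n, Ea, T)))
           for Pi, A, n, Ea in PLOG_TABLE]
    x = math.log(P)
    if x <= pts[0][0]:
        return math.exp(pts[0][1])
    if x >= pts[-1][0]:
        return math.exp(pts[-1][1])
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return math.exp(y0 + (x - x0) / (x1 - x0) * (y1 - y0))

print(k_plog(1200.0, 0.5))   # rate at 1200 K and 0.5 bar (illustrative numbers only)
```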
As stated by Moses (2014), "the exact mechanism involved with CH\({}_{4}\)-CO quenching in reducing environments has not been strictly identified". This is caused by the high nonlinearity and high coupling between the various chemical reactions of the scheme. It is thus not possible, as found in initial studies that assumed a rate-limiting reaction in the CO destruction mechanism (Fegley & Prinn 1988; Yung et al. 1988; Bezard et al. 2002), to easily identify a single reaction and quantify uncertainties on the results from the uncertainty on the rate of this reaction. A methodology of uncertainty propagation and global sensitivity analysis has been developed (Dobrijevic et al. 2010), but it required running several hundred simulations following a Monte Carlo scheme. Applying this methodology is beyond the scope of this study.
### Deep radiative region in Jupiter?
It has been shown (Wang et al. 2016) that PH\({}_{3}\) and GeH\({}_{4}\) are quenched at \(\sim\)700-800 K (\(p\approx\) 0.1 kbar; see their figs. 6 and 11) and the abundances observed with Juno JIRAM (Grassi et al. 2020) in the upper troposphere imply that \(K_{zz}\approx 10^{7}\)-\(10^{9}\) cm\({}^{2}\) s\({}^{-1}\) from PH\({}_{3}\) and \(\sim\)\(10^{8}\) cm\({}^{2}\) s\({}^{-1}\) from GeH\({}_{4}\) at this level. For uniform \(K_{zz}>10^{7}\) cm\({}^{2}\) s\({}^{-1}\) our model predicts subsolar oxygen to fit the observed abundance of CO (Extended Data Table 1).
The first hint that Jupiter's troposphere could harbour a radiative region in the vicinity of the layers where CO is quenched (\(T\approx\) 1000-1100 K; Fig. 1) was then obtained when the deep oxygen abundance was raised to the Juno MWR nominal value of 2.2 times protosolar. This required us to decrease the vertical mixing \(K_{zz}\) from its nominal value of \(10^{8}\) cm\({}^{2}\) s\({}^{-1}\) to \(2.5\times 10^{6}\) cm\({}^{2}\) s\({}^{-1}\) (Fig. 1). Fitting the whole range of 1\(\sigma\) uncertainties of the Juno measurement led to the \(K_{zz}\) values reported in Extended Data Table 1. We thus found that convection needs to be less efficient in the CO quench region, with \(K_{zz}\) lowered by a factor of 10 to 100, to obtain solar-to-supersolar oxygen.
A decrease in the Rosseland opacity of hydrogen and helium between 1200 and 4000 K in Jupiter can result in a radiative region, as initially pointed out by a previous study (Guillot et al. 1994). It was subsequently confirmed (Guillot et al. 2004) that this region may exist between \(p\approx\) 1.5 kbar (\(T\approx\) 1400 K) and \(p\approx\) 8.0 kbar (\(T\approx\) 2200 K) provided that the Jovian atmosphere is also depleted in alkali metals. This depletion seems to be confirmed by recent Juno MWR observations (Bhattacharya et al. 2021). If such a radiative region exists, vertical heat transport and chemical mixing would be strongly inhibited, and our assumption of a vertically uniform \(K_{zz}\) would not hold in this region. A previous work (Cavalie et al. 2017) investigated the effect of an insulation layer produced in the ice giants by the rapid change in mean molecular weight where water condenses. Despite the presence of this insulation layer at the altitudes where CO is quenched and vertical transport prevails, they found very limited impact on their results, mostly because the radiative layer was very thin. The radiative layer in Jupiter, which is of a different origin than that in the ice giants, may extend from 1.5 to 8 kbar (Guillot et al. 2004). Even if the radiative layer itself has a limited impact on the CO
profile, because it is located in the region where thermochemical equilibrium between CO and H\({}_{2}\)O prevails over vertical transport, we needed to assess the effect of such a layer and how it connects to upper layers in our simulations.
We found that our model could reconcile solar oxygen with the observed tropospheric CO by including a deep radiative layer in which \(K_{\rm zz}\) could be as low as the molecular diffusivity. We set \(K_{\rm zz}\) to 1 cm\({}^{2}\) s\({}^{-1}\) for layers with \(T>1400\) K and interpolated \(K_{\rm zz}\) between this value at 1400 K and our nominal value of 10\({}^{8}\) cm\({}^{2}\) s\({}^{-1}\) at 970 K, such that PH\({}_{3}\) and GeH\({}_{4}\) are quenched in the \(\sim\)800 K region as expected from models and observations. This ensures that \(K_{\rm zz}\) is low enough where CO is quenched. The resulting CO profile is shown in Fig. 1.
The \(K_{\rm zz}\) profiles used in this work that correspond to the results presented in Fig. 1 are shown in Extended Data Fig. 2.
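For concreteness, the piecewise \(K_{\rm zz}(T)\) profile just described can be written down as follows. This is our sketch: we assume the transition between 970 K and 1400 K is linear in \(\log K_{\rm zz}\) as a function of \(T\), which is our reading of "interpolated logarithmically".

```python
# Sketch (ours) of the piecewise Kzz(T) profile described above: 1 cm^2/s in the
# deep radiative layer (T > 1400 K), 1e8 cm^2/s in the convective region
# (T < 970 K), and a transition in between taken to be linear in log Kzz
# as a function of T (our reading of "interpolated logarithmically").
import numpy as np

def kzz_profile(T, T_deep=1400.0, T_conv=970.0, k_deep=1.0, k_conv=1.0e8):
    """Eddy diffusion coefficient [cm^2 s^-1] as a function of temperature [K]."""
    T = np.asarray(T, dtype=float)
    frac = (T_deep - T) / (T_deep - T_conv)          # 0 at 1400 K, 1 at 970 K
    logk = np.where(T >= T_deep, np.log10(k_deep),
                    np.where(T <= T_conv, np.log10(k_conv),
                             np.log10(k_deep) + frac * (np.log10(k_conv) - np.log10(k_deep))))
    return 10.0 ** logk

print(kzz_profile([900.0, 1000.0, 1200.0, 1500.0]))
```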
## Acknowledgements
T.C. acknowledges funding from CNES and the Programme National de Planetologie (PNP) of CNRS/INSU. J.L. acknowledges support from the Juno mission through a subcontract from the Southwest Research Institute.
\begin{table}
\begin{tabular}{c c} \hline O/H (\(\times\) the protosolar value) & Required \(K_{\rm zz}\) in the CO quench region \\ \hline
0.3 & \(1\times 10^{8}\) cm\({}^{2}\) s\({}^{-1}\) \\
0.8 & \(2.5\times 10^{7}\) cm\({}^{2}\) s\({}^{-1}\) \\
2.2 & \(2.5\times 10^{6}\) cm\({}^{2}\) s\({}^{-1}\) \\
4.2 & \(4\times 10^{5}\) cm\({}^{2}\) s\({}^{-1}\) \\ \hline \end{tabular}
\end{table}
Table 1: Relationship between Jupiter’s deep O/H and the \(K_{\rm zz}\) required in the quench region of CO to fit the observed upper tropospheric CO mole fraction. This essentially shows that the higher the deep oxygen abundance, the more inhibited the mixing must be to reproduce the observed CO abundance.
Figure 4: CO vertical profile in Jupiter computed in the same conditions as in Wang et al. (2016) with our chemical scheme, i.e., that of Venot et al. (2012) with revised methanol chemistry kinetics. The profile is obtained for \(K_{\rm zz}=10^{9}\) cm\({}^{2}\) s\({}^{-1}\) and seven times solar oxygen. It is in full agreement with those obtained with other chemical schemes and shown in fig. 17 of Wang et al. (2016), which are indicated by the grey area.
|
2303.06774
|
Better than square-root cancellation for random multiplicative functions
|
We investigate when the better than square-root cancellation phenomenon
exists for $\sum_{n\le N}a(n)f(n)$, where $a(n)\in \mathbb{C}$ and $f(n)$ is a
random multiplicative function. We focus on the case where $a(n)$ is the
indicator function of $R$ rough numbers. We prove that $\log \log R \asymp
(\log \log x)^{\frac{1}{2}}$ is the threshold for the better than square-root
cancellation phenomenon to disappear.
|
Max Wenqiang Xu
|
2023-03-12T23:20:11Z
|
http://arxiv.org/abs/2303.06774v2
|
# Better than square-root cancellation for random multiplicative functions
###### Abstract.
We investigate when the better than square-root cancellation phenomenon exists for \(\sum_{n\leq N}a(n)f(n)\), where \(a(n)\in\mathbb{C}\) and \(f(n)\) is a random multiplicative function. We focus on the case where \(a(n)\) is the indicator function of \(R\) rough numbers. We prove that \(\log\log R\asymp(\log\log x)^{\frac{1}{2}}\) is the threshold for the better than square-root cancellation phenomenon to disappear.
## 1. introduction
The study of random multiplicative functions has attracted intensive attention. Historically, they were introduced to model arithmetic functions. A Steinhaus random multiplicative function \(f(n)\) is a completely multiplicative function defined on positive integers such that \(f(p)\) are independently and uniformly distributed on the complex unit circle for all primes \(p\). One may view it as a random model for arithmetic functions like Dirichlet characters \(\chi(n)\) or \(n^{it}\). Another popular model is the Rademacher random multiplicative function \(f(n)\) which was first used by Wintner [42] as a random model for Mobius function \(\mu(n)\). In this note, we focus on the Steinhaus case. The obvious dependence between random variables \(f(m)\) and \(f(n)\) whenever \((m,n)\neq 1\) makes the study of random multiplicative functions intriguing.
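To make the object concrete, the following minimal sketch (ours, not part of the paper) samples a Steinhaus random multiplicative function on \(\{1,\ldots,N\}\) and checks by Monte Carlo the orthogonality relation \(\mathbb{E}[|\sum_{n\leq N}f(n)|^{2}]=N\); all function names are ours.

```python
# A minimal illustrative sketch (ours, not from the paper): sample a Steinhaus
# random multiplicative function f on {1,...,N} and check the orthogonality
# relation E[|sum_{n<=N} f(n)|^2] = N by Monte Carlo.
import cmath
import random

def smallest_prime_factors(N):
    spf = list(range(N + 1))
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:                       # p is prime
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    return spf

def steinhaus_sample(N, spf, rng):
    """f(p) uniform on the unit circle, extended completely multiplicatively."""
    f = [0j] * (N + 1)
    f[1] = 1 + 0j
    for n in range(2, N + 1):
        p = spf[n]
        f[n] = cmath.exp(2j * cmath.pi * rng.random()) if n == p else f[p] * f[n // p]
    return f

rng, N, trials = random.Random(0), 10 ** 4, 200
spf = smallest_prime_factors(N)
mean_sq = sum(abs(sum(steinhaus_sample(N, spf, rng)[1:])) ** 2 for _ in range(trials)) / trials
print(mean_sq / N)    # close to 1, since E|sum_{n<=N} f(n)|^2 = N exactly
```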
Arguably the most striking result so far in the study of random multiplicative functions is Harper's [22] remarkable resolution of Helson's conjecture [26], that is, the partial sums of random multiplicative functions enjoy better than square-root cancellation
\[\mathbb{E}[|\sum_{n\leq x}f(n)|]\asymp\frac{\sqrt{x}}{(\log\log x)^{1/4}}, \tag{1.1}\]
where \(f(n)\) are random multiplicative functions. In particular, with the natural normalization \(\sqrt{x}\), the partial sums \(\sum_{n\leq x}f(n)\) do not converge in distribution to the standard complex normal distribution (see also [19]). Before Harper's result [22], there was progress on proving good lower bounds close to \(\sqrt{x}\), e.g. [24], and it was not clear that such better than square-root cancellation in (1.1) would appear until Harper's proof. See also recent companion work on analogous results in the character sums and zeta sums cases established by Harper [21, 23]. It is known that the better than square-root cancellation phenomenon in random multiplicative functions is connected to the "critical multiplicative chaos" in the probability literature. We point out references [7, 9, 28, 34, 37] for related discussions.
A closely related important question in number theory is to understand the distribution of the Riemann zeta function over typical intervals of length \(1\) on the critical line \(\mathfrak{Re}(s)=\frac{1}{2}\). One may crudely see the connection by viewing \(\zeta(s)\) as a sum of \(n^{-\frac{1}{2}-it}\) for a certain range
of \(n\) and \(n^{it}\) behaves like a Steinhaus random multiplicative function for randomly chosen \(t\). A conjecture of Fyodorov, Hiary, and Keating (see e.g. [11, 12]) suggests that there is a subtle difference between the true order of the local maxima of \(\log|\zeta(1/2+it)|\) and one's first guess based on Selberg's central limit theorem for \(\log|\zeta(1/2+it)|\). The existence of this subtle difference and the appearance of the better than square-root cancellation for random multiplicative functions both show that the corresponding nontrivial dependence cannot be ignored. We refer readers to [1, 2, 3, 4, 5, 13, 14, 16, 17, 18, 21, 30, 33, 38] for related discussions about partial sums of random multiplicative functions and the distribution of zeta values.
In this paper, we are interested in further exploring Harper's result (1.1) and methods used there, by considering the problem in a more general context.
**Question 1.1**.: _Let \(a(n)\) be a sequence in \(\mathbb{C}\). When does the better than square-root cancellation phenomenon hold for \(\sum_{n\leq N}a(n)f(n)\), i.e._
\[\mathbb{E}[|\sum_{n\leq N}a(n)f(n)|]=o\left(\sqrt{\sum_{n\leq N}|a(n)|^{2}}\right)? \tag{1.2}\]
We first make some simple observations in the situations where \(a(n)\) is "typical" or \(a(n)\) has a rich multiplicative structure. Then we focus on a particular case where the coefficient \(a(n)\) is an indicator function of a multiplicative set.
### Typical coefficients
If partial sums \(\sum_{n\leq N}a(n)f(n)\) with the square-root size normalization behave like the complex standard Gaussian variable, then there is just square-root cancellation. One may attempt to prove such a central limit theorem by computing the high moments, however, the moments usually blow up and such a strategy does not work here (see e.g. [24, 25, 20, 41] for moments computation results). It turns out that for "typical" choices of \(a(n)\), such a central limit theorem does hold. It has been carried out in the concrete case where \(a(n)=e^{2\pi in\theta}\) for some fixed real \(\theta\) without too good Diophantine approximation property (such \(\theta\) has relative density \(1\) in \(\mathbb{R}\), e.g. one can take \(\theta=\pi\)) by Soundararajan and the author [39], and also an average version of the result is proved by Benatar, Nishry and Rodgers [6]. The proof of the result in [39] is based on McLeish's martingale central limit theorem [31], and the method was pioneered by Harper in [19]. The proof reveals the connection between the existence of such a central limit theorem and a quantity called _multiplicative energy_ of \(\mathbf{a}:=\{a(n):1\leq n\leq N\}\)
\[E_{\times}(\mathbf{a}):=\sum_{\begin{subarray}{c}m_{1},n_{1},m_{2},n_{2}\leq N \\ m_{1}m_{2}=n_{1}n_{2}\end{subarray}}a(m_{1})a(m_{2})\overline{a(n_{1})a(n_{2})}.\]
A special case of \(a(n)\) is an indicator function of a set \(\mathcal{A}\), and the quantity \(E_{\times}(\mathcal{A})\) is a popular object studied in additive combinatorics. It is now known [39] that a crucial condition for such a central limit theorem to hold for \(\sum_{n\leq N}a(n)f(n)\) is that the set \(\mathcal{A}\) has multiplicative energy \(\leq(2+\epsilon)|\mathcal{A}|^{2}\). See §9.1 for more discussions on \(a(n)\) being a "typical" choice. We refer readers who are interested in seeing more examples of when a central limit theorem holds for partial (restricted) sums of random multiplicative functions to [6, 8, 19, 27, 29, 35, 39].
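For indicator coefficients the multiplicative energy is simply the number of quadruples \((m_{1},n_{1},m_{2},n_{2})\in\mathcal{A}^{4}\) with \(m_{1}m_{2}=n_{1}n_{2}\), which can be computed by grouping pairs according to their product; the following short sketch (ours) illustrates this.

```python
# Illustrative sketch (ours): the multiplicative energy of a finite set A of
# positive integers, E_x(A) = #{(m1, n1, m2, n2) in A^4 : m1*m2 = n1*n2},
# computed by grouping pairs according to their product.
from collections import Counter

def multiplicative_energy(A):
    r = Counter(m * n for m in A for n in A)   # r(k) = #{(m, n) in A^2 : m*n = k}
    return sum(c * c for c in r.values())      # E_x(A) = sum_k r(k)^2

A = range(1, 1001)                             # the interval [1, 1000]
E = multiplicative_energy(A)
print(E, E / len(A) ** 2)                      # for an interval this ratio is noticeably larger than 2
```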
### Large multiplicative energy and sparse sets
Let us focus on the case that \(a_{n}\) is an indicator function of a set \(\mathcal{A}\). As we mentioned that if the set \(\mathcal{A}\) has small multiplicative energy (among other conditions), then partial sums exhibit square-root cancellation. Suppose we purposely choose a set \(\mathcal{A}\) with very large multiplicative energy, will it lead to better than square-root cancellation? One extreme example is \(\mathcal{A}=\{p^{n}:1\leq n\leq\log_{p}N\}\) being a geometric progression, where \(p\) is a fixed prime. A standard calculation gives that
\[\mathbb{E}[|\sum_{n\in\mathcal{A}}f(n)|]=\int_{0}^{1}|\sum_{n\leq\log_{p}N}e( \theta n)|d\theta\asymp\log\log N,\]
while \(\mathbb{E}[|\sum_{n\in\mathcal{A}}f(n)|^{2}]=|\mathcal{A}|\asymp\log N\). It shows that there is a great amount of cancellation in this particular example. One may also take \(\mathcal{A}\) to be some generalized (multidimensional) geometric progression and get strong cancellation of this type. We note that the sets mentioned here with very rich multiplicative structures all have small sizes.
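A quick numerical illustration of this example (ours): writing \(M=\log_{p}N\), the sum over \(\mathcal{A}\) has the same law as the Dirichlet kernel \(\sum_{n\leq M}e(n\theta)\) at a uniform \(\theta\), whose expected modulus grows like \(\log M\), far below the square-root benchmark \(\sqrt{M}\).

```python
# Quick Monte Carlo illustration (ours): for Steinhaus f, sum_{n<=M} f(p)^n has
# the same law as sum_{n<=M} e(n*theta) with theta uniform in [0,1), so its
# expected modulus grows like log M, far below the square-root benchmark sqrt(M).
import cmath
import math
import random

def mean_abs_geometric_sum(M, trials, rng):
    total = 0.0
    for _ in range(trials):
        z = cmath.exp(2j * cmath.pi * rng.random())
        s, w = 0j, 1 + 0j
        for _ in range(M):
            w *= z                 # w = z^n
            s += w                 # s = z + z^2 + ... + z^n
        total += abs(s)
    return total / trials

rng = random.Random(1)
for M in (10, 100, 1000):
    print(M, round(mean_abs_geometric_sum(M, 2000, rng), 2),
          round(math.log(M), 2), round(math.sqrt(M), 2))
```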
Based on the initial thoughts above, we may lean toward believing that better than square-root cancellation only appears when \(a(n)\) has some particular structure that is perhaps related to multiplicativity. To fully answer Question 1.1 seems hard. The majority of the paper is devoted to a special case, where \(a(n)\) is an indicator function of a set with multiplicative features. We focus on fairly large subsets.
### Main results: multiplicative support
Suppose now that \(a(n)\) is a multiplicative function with \(|a(n)|\leq 1\). The particular example we study in this paper is that \(a(n)\) is the indicator function of \(R\)-rough numbers, although the proof here may be adapted to other cases when \(a(n)\) is multiplicative. We write
\[\mathcal{A}_{R}(x):=\{n\leq x:p|n\implies p\geq R\}. \tag{1.3}\]
By a standard sieve argument, for all \(2\leq R\leq x/2\) (the restriction \(R\leq x/2\) is only needed for the lower bound), we have asymptotically
\[|\mathcal{A}_{R}(x)|\asymp\frac{x}{\log R}. \tag{1.4}\]
We expect the following threshold behavior to happen. If \(R\) is very small, the set \(\mathcal{A}_{R}(x)\) is close to \([1,x]\) and better than square-root cancellation appears as in [22]. If \(R\) is sufficiently large, then weak dependence may even lead to a central limit theorem. Indeed, an extreme case is that \(R>\sqrt{x}\), in which \(\mathcal{A}_{R}(x)\) is a set of primes and \(\{f(n):n\in\mathcal{A}_{R}(x)\}\) is a set of independent random variables. It is natural to ask to what extent the appearance of small primes is needed to guarantee better than square-root cancellation. Our Theorem 1.2 and Theorem 1.3 answer the question. We show that \(\log\log R\approx(\log\log x)^{1/2}\) is the threshold.
**Theorem 1.2**.: _Let \(f(n)\) be a Steinhaus random multiplicative function and \(x\) be large. Let \(\mathcal{A}_{R}(x)\) be the set of \(R\) rough numbers up to \(x\). For any \(\log\log R\ll(\log\log x)^{\frac{1}{2}}\), we have_
\[\mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|]\ll\sqrt{|\mathcal{A}_{R}(x)|} \cdot\Big{(}\frac{\log\log R+\log\log\log x}{\sqrt{\log\log x}}\Big{)}^{\frac {1}{2}}.\]
_In particular, if \(\log\log R=o((\log\log x)^{\frac{1}{2}})\), then_
\[\mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|]=o\big{(}\sqrt{|\mathcal{A}_{R}(x )|}\big{)}.\]
The term \(\log\log\log x\) is likely removable. But for the convenience of the proof, we state the above version. See Remark 5.4 for more discussions.
**Theorem 1.3**.: _Let \(f(n)\) be a Steinhaus random multiplicative function and \(x\) be large. Let \(\mathcal{A}_{R}(x)\) be the set of \(R\) rough numbers up to \(x\). For any \(\log\log R\gg(\log\log x)^{\frac{1}{2}}\), we have_
\[\mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|]\gg\sqrt{|\mathcal{A}_{R}(x)|}.\]
One probably can prove a lower bound of the shape \(\sqrt{|\mathcal{A}_{R}(x)|}\cdot(\log\log R/\sqrt{\log\log x})^{-1/2}\) when \(\log\log R=o(\sqrt{\log\log x})\). We do not pursue this as we focus on finding the threshold value of \(R\) instead of caring about the quantification of the exact cancellation.
We note that one way to derive a lower bound on \(L^{1}\) norm is by proving an upper bound on \(L^{4}\) norm. A simple application of Holder's inequality gives that
\[|\mathcal{A}_{R}(x)|=\mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|^{2}]\leq \Big{(}\mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|^{4}]\Big{)}^{1/3}\Big{(} \mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|]\Big{)}^{2/3}. \tag{1.5}\]
The fourth moment \(\ll|\mathcal{A}_{R}(x)|^{2}\) would imply that \(L^{1}\) norm \(\gg\sqrt{|\mathcal{A}_{R}(x)|}\). However, to achieve such a bound on the fourth moment, one needs \(\log R\gg(\log x)^{c}\) for some constant \(c\), and thus this approach would not give the optimal range as in Theorem 1.3.
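For completeness, (1.5) is simply Holder's inequality with exponents \(3/2\) and \(3\) applied to the factorization \(|S|^{2}=|S|^{2/3}\cdot|S|^{4/3}\), where \(S=\sum_{n\in\mathcal{A}_{R}(x)}f(n)\): \[\mathbb{E}[|S|^{2}]=\mathbb{E}[|S|^{2/3}\cdot|S|^{4/3}]\leq\big(\mathbb{E}[|S|]\big)^{2/3}\big(\mathbb{E}[|S|^{4}]\big)^{1/3},\] which rearranges to \(\mathbb{E}[|S|]\geq\mathbb{E}[|S|^{2}]^{3/2}\,\mathbb{E}[|S|^{4}]^{-1/2}\).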
Another reason for studying the fourth moment (multiplicative energy) is to understand the distribution. As mentioned before, this is the key quantity that needs to be understood in order to determine if random sums have Gaussian limiting distribution, via the criteria in [39]. One may establish a central limit theorem in the range \(R\gg\exp((\log x)^{c})\) for some small positive constant \(c\)1. Interested readers are suggested to adapt the proof of [39, Corollary 1.2]. We do not pursue results along this direction in this note.
Footnote 1: One trick to get a smaller \(c\) than by directly computing the fourth moment over the full sum is to take the anatomy of integers into account. We refer interested readers to [36, 43] to see how this idea is connected to the correct exponent in extremal sum product conjecture of Elekes and Ruzsa [10].
Theorem 1.2 and Theorem 1.3 are both proved by adapting Harper's robust method in [22], with some modifications, simplifications and new observations, and we sketch the strategy with a focus on how we find the threshold. We also refer readers to a model problem in the function field case by Soundararajan and Zaman [40]. The first step is to reduce the \(L^{1}\) norm estimate to a certain average of the square of random Euler products. Basically, we prove that
\[\mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|]\approx\Big{(}\frac{x}{\log x} \Big{)}^{1/2}\cdot\mathbb{E}[(\int_{-1/2}^{1/2}|F^{(R)}(\frac{1}{2}+it)|^{2}dt )^{1/2}], \tag{1.6}\]
where \(F^{(R)}(1/2+it):=\prod_{R\leq p\leq x}(1-\frac{f(p)}{p^{1/2+it}})^{-1}\) is the random Euler product over primes \(R\leq p\leq x\). The challenging part is to give a sharp bound on the above expectation involving \(|F^{(R)}(1/2+it)|^{2}\) for \(|t|\leq 1/2\).
We first discuss the upper bound proof. If we directly apply Holder's inequality (i.e. moving the expectation inside the integral in (1.6)), then we would only get the trivial upper bound \(\ll\sqrt{|\mathcal{A}_{R}(x)|}\) as \(\mathbb{E}[|F^{(R)}(1/2+it)|^{2}]\approx\log x/\log R\). Harper's method starts with putting some "barrier events" on the growth rate of all random partial Euler products for all \(t\). Roughly speaking, it requires that for all \(k\),
\[\prod_{x^{e^{-(k+1)}}\leq p\leq x^{e^{-k}}}|1-\frac{f(p)}{p^{1/2+it}}|^{-1}\text { ``grows as expected" for all }|t|\leq 1. \tag{1.7}\]
Denote such good events by \(\mathcal{G}\) and write \(s=1/2+it\). By splitting the probability space based on the event \(\mathcal{G}\) holding or not, and applying Cauchy-Schwarz inequality, we have
\[\mathbb{E}[(\int_{-1/2}^{1/2}|F^{(R)}(s)|^{2}dt)^{1/2}]\approx\mathbb{E}[(\int_{-1/2}^{1/2}\mathbf{1}_{\mathcal{G}}|F^{(R)}(s)|^{2}dt)^{1/2}]+\mathbb{E}[(\int_{-1/2}^{1/2}\mathbf{1}_{\mathcal{G}\text{ fails}}|F^{(R)}(s)|^{2}dt)^{1/2}]\] \[\ll\mathbb{E}[(\int_{-1/2}^{1/2}\mathbf{1}_{\mathcal{G}}|F^{(R)}(s)|^{2}dt)^{1/2}]+\mathbb{P}(\mathcal{G}\text{ fails})^{1/2}(\mathbb{E}[|F^{(R)}(s)|^{2}])^{1/2}.\]
According to the two terms above, there are two tasks that remain to be done.
1. Task 1: Show that the expectation is small, conditioning on \(\mathbf{1}_{\mathcal{G}}\).
2. Task 2: Show that \(\mathbb{P}(\mathcal{G}\text{ fails})\) is sufficiently small.
To accomplish task 1, Harper's method connects such an estimate to the "ballot problem" or, equivalently, Gaussian random walks (see §3.2), which estimates the probability that partial sums of independent Gaussian variables stay below a certain barrier. Task 2 of estimating the probability of such good events \(\mathcal{G}\) happening can be done by using some concentration inequality, e.g. Chebyshev's inequality. Our main innovation lies in setting up "barrier events" in (1.7) properly, which is not the same as in [22]. On one hand, it should give a strong enough restriction on the growth rate of the products so that \(\mathbb{E}[(\int_{-1/2}^{1/2}\mathbf{1}_{\mathcal{G}}|F^{(R)}(s)|^{2}dt)^{1/2}]\) has a saving compared to the bound without conditioning on \(\mathbf{1}_{\mathcal{G}}\). On the other hand, one needs to show that such an event \(\mathcal{G}\) is indeed very likely to happen, which requires that the designed "barrier" not be too restrictive. To achieve both goals simultaneously, we need \(\log\log R=o(\sqrt{\log\log x})\), and this is as far as we can push (see Remark 5.3).
The lower bound proof in Theorem 1.3 uses the same strategy as in [22] but is technically simpler. After the deduction step of reducing the problem to studying a certain average of the square of random Euler products (see (1.6)), we only need to give a lower bound of the shape \(\gg(\log x/\log R)^{1/2}\) for the expectation on the right-hand side of (1.6). Since the integrand \(|F^{(R)}(s)|^{2}\) is positive, it suffices to prove such a lower bound when \(t\) is restricted to a random subset \(\mathcal{L}\). We choose \(\mathcal{L}\) to be the set of \(t\) such that certain properly chosen "barrier events" hold. The main difficulty is to give a strong upper bound on the restricted product \(\mathbb{E}[\mathbf{1}_{t_{1},t_{2}\in\mathcal{L}}|F^{(R)}(1/2+it_{1})|^{2}|F^{(R)}(1/2+it_{2})|^{2}]\) in the sense that the bound is as effective as in the ideal situation where the factors \(|F^{(R)}(1/2+it_{1})|^{2}\) and \(|F^{(R)}(1/2+it_{2})|^{2}\) are independent (see Proposition 8.1), and this is also the main reason that the condition \(\log\log R\gg\sqrt{\log\log x}\) is needed subject to our chosen "barrier events". Our proof of Theorem 1.3 does not involve the "two-dimensional Girsanov calculation", which hopefully makes it easier for readers to follow.
### Organization
We set up the proof outline of Theorem 1.2 in Section 2 and defer the proof of two propositions to Section 4 and Section 5 respectively. We put all probabilistic preparations in Section 3 which will be used in the proof for both theorems. The proof of Theorem 1.3 is done in Section 6 and again we defer proofs of two key propositions to Section 7 and Section 8 respectively. Finally, we give more details about the "typical" choices of \(a(n)\) in Section 9, as well as mentioning some natural follow-up open problems.
### Acknowledgement
We thank Adam Harper for helpful discussions, corrections, and comments on earlier versions of the paper and for his encouragement. We also thank Kannan Soundararajan for the interesting discussions. The author is supported by the Cuthbert C. Hurd Graduate Fellowship in the Mathematical Sciences, Stanford.
## 2. Proof of Theorem 1.2
We follow the proof strategy of Harper in [22]. We establish Theorem 1.2 in a stronger form that for \(1/2\leq q\leq 9/10\) and \(R\) in the given range \(\log\log R\ll(\log\log x)^{1/2}\),
\[\mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|^{2q}]\ll|\mathcal{A}_{R}(x)|^{ q}\Big{(}\frac{\log\log R+\log\log\log x}{\sqrt{\log\log x}}\Big{)}^{q}.\]
One should be able to push the range of \(q\) to \(1\) but for simplicity in notation, we omit it. Our interest is really about the case \(q=1/2\). Note that in the given range of \(R\), by (1.4), it is the same as proving
\[\mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|^{2q}]\ll\Big{(}\frac{x}{\log R }\Big{)}^{q}\Big{(}\frac{\log\log R+\log\log\log x}{\sqrt{\log\log x}}\Big{)} ^{q}.\]
The first step (Proposition 2.1) is to connect the \(L^{1}\) norm of the random sums to a certain average of the square of random Euler products. We define for all \(s\) with \(\mathfrak{Re}(s)>0\) and integers \(0\leq k\leq\log\log x-\log\log R\), the random Euler products
\[F_{k}^{(R)}(s):=\prod_{R\leq p\leq x^{e^{-(k+1)}}}(1-\frac{f(p)}{p^{s}})^{-1} =\sum_{\begin{subarray}{c}n\geq 1\\ p|n\,\Longrightarrow\,R\leq p\leq x^{e^{-(k+1)}}\end{subarray}}\frac{f(n)}{n^ {s}}. \tag{2.1}\]
We also write
\[F^{(R)}(s):=\prod_{R\leq p\leq x}(1-\frac{f(p)}{p^{s}})^{-1}=\sum_{ \begin{subarray}{c}n\geq 1\\ p|n\,\Longrightarrow\,R\leq p\leq x\end{subarray}}\frac{f(n)}{n^{s}}. \tag{2.2}\]
We use the notation \(\|X\|_{2q}:=(\mathbb{E}[|X|^{2q}])^{\frac{1}{2q}}\) for random variable \(X\).
**Proposition 2.1**.: _Let \(f(n)\) be a Steinhaus random multiplicative function and \(x\) be large. Let \(F_{k}^{(R)}(s)\) be defined as in (2.1) and \(\log\log R\ll(\log\log x)^{\frac{1}{2}}\). Set \(\mathcal{K}:=\lfloor\log\log\log x\rfloor\). Then uniformly for all \(1/2\leq q\leq 9/10\), we have_
\[\|\sum_{n\in\mathcal{A}_{R}(x)}f(n)\|_{2q}\leq\sqrt{\frac{x}{\log x}}\sum_{0 \leq k\leq\mathcal{K}}\Big{\|}\int_{-1/2}^{1/2}|F_{k}^{(R)}(\frac{1}{2}-\frac{ k}{\log x}+it)|^{2}dt\Big{\|}_{q}^{\frac{1}{2}}+\sqrt{\frac{x}{\log x}}. \tag{2.3}\]
We remind the readers that the upper bound we aim for in Theorem 1.2 is very close to \(\sqrt{x/\log R}\). The second term in (2.3) is harmless since \(\log R\) is much smaller than \(\log x\).
The second step deals with the average of the square of random Euler products in (2.3), which lies at the heart of the proof.
**Proposition 2.2**.: _Let \(F_{k}^{(R)}(s)\) be defined as in (2.1) and \(\log\log R\ll(\log\log x)^{\frac{1}{2}}\). Then for all \(0\leq k\leq\mathcal{K}=\lfloor\log\log\log x\rfloor\), and uniformly for all \(1/2\leq q\leq 9/10\), we have_
\[\mathbb{E}\left[\left(\int_{-\frac{1}{2}}^{\frac{1}{2}}|F_{k}^{(R)}(\frac{1}{2 }-\frac{k}{\log x}+it)|^{2}dt\right)^{q}\right]\ll e^{-\frac{k}{2}}\cdot\left( \frac{\log x}{\log R}\right)^{q}\Big{(}\frac{\log\log R+\log\log\log x}{\sqrt {\log\log x}}\Big{)}^{q}.\]
Proof of Theorem 1.2 assuming Proposition 2.1 and Proposition 2.2.: Apply Proposition 2.1 and Proposition 2.2 with \(q=\frac{1}{2}\). Notice that when \(\log\log R\ll(\log\log x)^{1/2}\), the term \(\sqrt{\frac{x}{\log x}}\) in (2.3) is negligible and we complete the proof.
## 3. Probabilistic preparations
In this section, we state some probabilistic results that we need to use later. The proof can be found in [22] (with at most very mild straightforward modification).
### Mean square calculation
We first state results on mean square calculations.
**Lemma 3.1**.: _Let \(f\) be a Steinhaus random multiplicative function. Then for any \(400<x\leq y\) and \(\sigma>-1/\log y\), we have_
\[\mathbb{E}[\prod_{x<p\leq y}|1-\frac{f(p)}{p^{\frac{1}{2}+\sigma}}|^{-2}]=\exp \Big{(}\sum_{x<p\leq y}\frac{1}{p^{1+2\sigma}}+O(\frac{1}{\sqrt{x}\log x}) \Big{)}. \tag{3.1}\]
The proof is basically using the Taylor expansion and the orthogonality deduced from the definition of a Steinhaus random multiplicative function. See [22, Lemma 1, and (3.1)].
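For the reader's convenience, here is the single-factor computation behind (3.1): expanding both geometric series and using \(\mathbb{E}[f(p)^{j}\overline{f(p)}^{k}]=\mathbf{1}_{j=k}\) for a Steinhaus \(f\), we get \[\mathbb{E}\Big[|1-\frac{f(p)}{p^{\frac{1}{2}+\sigma}}|^{-2}\Big]=\sum_{j\geq 0}\frac{1}{p^{j(1+2\sigma)}}=\Big(1-\frac{1}{p^{1+2\sigma}}\Big)^{-1}=\exp\Big(\frac{1}{p^{1+2\sigma}}+O\big(\frac{1}{p^{2(1+2\sigma)}}\big)\Big),\] and taking the product over \(x<p\leq y\) (the factors attached to distinct primes are independent) and summing the error terms gives (3.1).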
We also quote the following result on two-dimensional mean square calculations. This will be used in proving the lower bound in Theorem 1.3.
**Lemma 3.2**.: _Let \(f\) be a Steinhaus random multiplicative function. Then for any \(400<x\leq y\) and \(\sigma>-1/\log y\), we have_
\[\mathbb{E}[\prod_{x<p\leq y}|1-\frac{f(p)}{p^{\frac{1}{2}+\sigma}}|^{-2}|1- \frac{f(p)}{p^{\frac{1}{2}+\sigma+it}}|^{-2}]=\exp\left(\sum_{x<p\leq y}\frac{ 2+2\cos(t\log p)}{p^{1+2\sigma}}+O(\frac{1}{\sqrt{x}\log x})\right). \tag{3.2}\]
_Moreover, if \(x>e^{1/|t|}\), then we further have_
\[=\exp\Big{(}\sum_{x<p\leq y}\frac{2}{p^{1+2\sigma}}+O(1)\Big{)}. \tag{3.3}\]
The proof of (3.2) is in [22, (6)]. To deduce (3.3), we only need to show that the contribution from the \(\cos(t\log p)\) terms is \(\ll 1\), which follows from a strong form of the prime number theorem. See how it is done in [22, Lemma 5] and [18, Section 6.1].
### Gaussian random walks and the ballot problem
A key probabilistic result used in Harper's method is the following (modification of) a classical result about Gaussian random walks, which is connected to the "ballot problem".
**Lemma 3.3** (Probability result 1, [22]).: _Let \(a\geq 1\). For any integer \(n>1\), let \(G_{1},\ldots,G_{n}\) be independent real Gaussian random variables, each having mean zero and variance between \(1/20\) and \(20\), say. Let h be a function such that \(|h(j)|\leq 10\log j\). Then_
\[\mathbb{P}\Big{(}\sum_{m=1}^{j}G_{m}\leq a+h(j),\quad\forall 1\leq j\leq n \Big{)}\asymp\min\{1,\frac{a}{\sqrt{n}}\}.\]
Without the term \(h(j)\), it is a classical result and actually that is all we need in this paper. However, we state this stronger form as the \(h(j)\) term can be crucial if one wants to remove the \(\log\log\log x\) factor in Theorem 1.2. We expect the random sum to fluctuate on the order of \(\sqrt{j}\) (up to step \(j\)), so the above result is to be expected. The quantity \(h(j)\) is much smaller than \(\sqrt{j}\), so it is negligible in computing the probability.
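As a purely numerical illustration (ours, with \(h\equiv 0\) and unit variances), the \(\min\{1,a/\sqrt{n}\}\) behaviour can be checked by direct simulation.

```python
# Monte Carlo check (illustrative; h = 0 and unit-variance Gaussians): estimate
# P(S_1 <= a, ..., S_n <= a) for partial sums S_j of i.i.d. N(0,1) variables and
# compare with the predicted order min(1, a / sqrt(n)).
import math
import random

def stay_below(n, a, trials, rng):
    hits = 0
    for _ in range(trials):
        s, ok = 0.0, True
        for _ in range(n):
            s += rng.gauss(0.0, 1.0)
            if s > a:
                ok = False
                break
        hits += ok
    return hits / trials

rng, a = random.Random(2), 5.0
for n in (100, 400, 1600, 6400):
    p = stay_below(n, a, 10000, rng)
    print(n, round(p, 3), round(min(1.0, a / math.sqrt(n)), 3))
```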
We do not directly use the above lemma. We shall use an analogous version for random Euler products (Proposition 3.4). We do the Girsanov-type calculation in our study. As in [22], we introduce the probability measure (here \(x\) is large and \(|\sigma|\leq 1/100\), say)
\[\tilde{\mathbb{P}}(A):=\frac{\mathbb{E}[1_{A}\prod_{p\leq x^{1/e}}|1-\frac{f(p )}{p^{\frac{1}{2}+\sigma}}|^{-2}]}{\mathbb{E}[\prod_{p\leq x^{1/e}}|1-\frac{f( p)}{p^{\frac{1}{2}+\sigma}}|^{-2}]}.\]
For each \(\ell\in\mathbb{N}\cup\{0\}\), we denote the \(\ell\)-th increment of the Euler product
\[I_{\ell}(s):=\prod_{x^{e^{-(\ell+2)}}<p\leq x^{e^{-(\ell+1)}}}(1-\frac{f(p)}{ p^{s}})^{-1}. \tag{3.4}\]
Since we are restricted to \(R\)-rough numbers \(n\), the parameter \(\ell\) lies in the range \(0\leq\ell\leq\log\log x-\log\log R\). All the rest setup is exactly the same as in [22].
**Proposition 3.4**.: _There is a large natural number \(B\) such that the following is true. Let \(n\leq\log\log x-\log\log R-(B+1)\), and define the decreasing sequence \((\ell_{j})_{j=1}^{n}\) of non-negative integers by \(\ell_{j}=\lfloor\log\log x-\log\log R\rfloor-(B+1)-j\). Suppose that \(|\sigma|\leq\frac{1}{e^{B+n+1}}\), and that \((t_{j})_{j=1}^{n}\) is a sequence of real numbers satisfying \(|t_{j}|\leq\frac{1}{j^{2/3}e^{B+j+1}}\) for all \(j\)._
_Then uniformly for any large a and any function \(h(n)\) satisfying \(|h(n)|\leq 10\log n\), and with \(I_{\ell}(s)\) defined as in (3.4), we have_
\[\tilde{\mathbb{P}}(-a-Bj\leq\sum_{m=1}^{j}\log|I_{\ell_{m}}(\frac{1}{2}+\sigma +it_{m})|\leq a+j+h(j),\quad\forall j\leq n)\asymp\min\{1,\frac{a}{\sqrt{n}}\}.\]
One may view the above sum approximately as a sum of \(j\) independent random variables, each with mean \(\approx\sum_{x^{e^{-(\ell+2)}}<p\leq x^{e^{-(\ell+1)}}}\frac{1}{p}\approx 1\) and with variance bounded between \(1/20\) and \(20\). This shows the connection to Lemma 3.3. The deduction of Proposition 3.4 from Lemma 3.3 can be found in the proof of [22, Proposition 5]. The only modification is changing the upper bound restriction from \(n\leq\log\log x-(B+1)\) to \(n\leq\log\log x-\log\log R-(B+1)\); all conditions remain satisfied.
## 4. Proof of Proposition 2.1
The proof follows closely to the proof of [22, Proposition 1]. For any integer \(0\leq k\leq\mathcal{K}=\lfloor\log\log\log x\rfloor\), let
\[I_{k}:=(x_{k+1},x_{k}]:=(x^{e^{-(k+1)}},x^{e^{-k}}]. \tag{4.1}\]
Let \(P(n)\) be the largest prime factor of \(n\). For simplicity, we use \(\sum_{n}^{\star}\) to denote the sum where the variable \(n\) is \(R\)-rough. By using Minkowski's inequality (as \(2q\geq 1\)),
\[\|\sum_{n\in\mathcal{A}_{R}(x)}f(n)\|_{2q}\leq\sum_{0\leq k\leq\mathcal{K}}\| \sideset{}{{}^{\star}}{\sum}_{\begin{subarray}{c}n\leq x\\ P(n)\in I_{k}\end{subarray}}f(n)\|_{2q}+\|\sideset{}{{}^{\star}}{\sum}_{ \begin{subarray}{c}n\leq x\\ P(n)\leq x^{e^{-(\mathcal{K}+1)}}\end{subarray}}f(n)\|_{2q}. \tag{4.2}\]
We first bound the last term using only the smoothness condition: it is at most \(\Psi(x,x^{1/\log\log x})^{\frac{1}{2}}\ll\sqrt{x}(\log x)^{-c\log\log\log x}\), which is acceptable. The main contribution to the upper bound in (4.2) can be written as
\[=\sum_{0\leq k\leq\mathcal{K}}\|\sum_{\begin{subarray}{c}m\leq x\\ p|m\implies p\in I_{k}\end{subarray}}f(m)\sideset{}{{}^{\star}}{\sum}_{ \begin{subarray}{c}n\leq x/m\\ n\text{ is }x_{k+1}\text{-smooth}\end{subarray}}f(n)\|_{2q}.\]
We now condition on \(f(p)\) for \(p\) small but at least \(R\). Write \(\mathbb{E}^{(k)}\) to denote the expectation conditional on \((f(p))_{p\leq x_{k+1}}\). Then the above is
\[=\sum_{0\leq k\leq\mathcal{K}}(\mathbb{E}\mathbb{E}^{(k)}[|\sum_{ \begin{subarray}{c}m\leq x\\ p|m\implies p\in I_{k}\end{subarray}}f(m)\sideset{}{{}^{\star}}{\sum}_{ \begin{subarray}{c}n\leq x/m\\ n\text{ is }x_{k+1}\text{-smooth}\end{subarray}}f(n)|^{2q}])^{1/2q}\] \[\leq\sum_{0\leq k\leq\mathcal{K}}(\mathbb{E}[(\mathbb{E}^{(k)}[| \sum_{\begin{subarray}{c}m\leq x\\ p|m\implies p\in I_{k}\end{subarray}}f(m)\sideset{}{{}^{\star}}{\sum}_{ \begin{subarray}{c}n\leq x/m\\ n\text{ is }x_{k+1}\text{-smooth}\end{subarray}}f(n)|^{2}])^{q}])^{1/2q}\] \[=\sum_{0\leq k\leq\mathcal{K}}(\mathbb{E}[(\sideset{}{{}^{ \star}}{\sum}_{\begin{subarray}{c}m\leq x\\ p|m\implies p\in I_{k}\end{subarray}}|\sideset{}{{}^{\star}}{\sum}_{ \begin{subarray}{c}n\leq x/m\\ n\text{ is }x_{k+1}\text{-smooth}\end{subarray}}f(n)|^{2})^{q}])^{\frac{1}{2q}}.\]
Then we only need to show that each expectation in the sum is bounded as in (2.3). Replace the discrete mean value with a smooth version: set \(X=e^{\sqrt{\log x}}\); then the expectation involving primes in \(I_{k}\) is
\[\begin{split}&\ll\mathbb{E}\left[\left(\sum_{ \begin{subarray}{c}m\leq x\\ p|m\implies p\in I_{k}\end{subarray}}\frac{X}{m}\int_{m}^{m(1+\frac{1}{X})}| \sideset{}{{}^{\star}}{\sum}_{\begin{subarray}{c}n\leq x/t\\ n\text{ is }x_{k+1}\text{-smooth}\end{subarray}}f(n)|^{2}dt\right)^{q}\right]\\ &+\mathbb{E}\left[\left(\sum_{\begin{subarray}{c}m\leq x\\ p|m\implies p\in I_{k}\end{subarray}}\frac{X}{m}\int_{m}^{m(1+\frac{1}{X})}|\sideset {}{{}^{\star}}{\sum}_{\begin{subarray}{c}x/t\leq n\leq x/m\\ n\text{ is }x_{k+1}\text{-smooth}\end{subarray}}f(n)|^{2}dt\right)^{q}\right].\end{split} \tag{4.3}\]
By using Holder's inequality, we upper bound the second term in (4.3) by the \(q\)-th power of
\[\sum_{\begin{subarray}{c}m\leq x\\ p|m\implies p\in I_{k}\end{subarray}}\frac{X}{m}\int_{m}^{m(1+\frac{1}{X})} \mathbb{E}[|\sum_{\begin{subarray}{c}x/t\leq m\leq x/m\\ n\text{ is }x_{k+1}\text{-smooth}\end{subarray}}^{\star}f(n)|^{2}]dt. \tag{4.4}\]
Do the mean square calculation (3.1) and throw away the restriction on the \(R\)-rough numbers. Then (4.4) is \(\ll 2^{-e^{k}}x/\log x\) and thus the second term in (4.3) is \(\ll(2^{-e^{k}}x/\log x)^{q}\). Summing over \(k\leq\mathcal{K}\), this is acceptable and thus we only need to focus on the first term in (4.3). By swapping the order of summation, it is at most
\[\mathbb{E}\left[\left(\int_{x_{k+1}}^{x}|\sum_{\begin{subarray}{c}n\leq x/t\\ n\text{ is }x_{k+1}\text{-smooth}\end{subarray}}^{\star}f(n)|^{2}\sum_{ \begin{subarray}{c}t/(1+1/X)\leq m\leq t\\ p|m\implies p\in I_{k}\end{subarray}}\frac{X}{m}dt\right)^{q}\right].\]
We upper bound the sum over \(m\) by dropping the prime divisibility condition and using a simple sieve argument to derive that the above is at most
\[\mathbb{E}\left[\left(\int_{x_{k}}^{x}|\sum_{\begin{subarray}{c}n\leq x/t\\ n\text{ is }x_{k+1}\text{-smooth}\end{subarray}}^{\star}f(n)|^{2}\frac{dt}{ \log t}\right)^{q}\right]=x^{q}\mathbb{E}\left[\left(\int_{1}^{x/x_{k+1}}|\sum _{\begin{subarray}{c}n\leq x\\ n\text{ is }x_{k+1}\text{-smooth}\end{subarray}}^{\star}f(n)|^{2}\frac{dz}{z^{2} \log(\frac{x}{z})}\right)^{q}\right],\]
where in the equality above we used the substitution \(z:=x/t\). A simple calculation shows that we can replace \(\log(x/z)\) by \(\log x\) without much loss. Indeed, if \(z\leq\sqrt{x}\) then \(\log(x/z)\gg\log x\); if \(\sqrt{x}\leq z\leq x/x_{k+1}\) then \(\log(x/z)\geq z^{-2k/\log x}\log x\). Thus, we further have the bound
\[\ll\left(\frac{x}{\log x}\right)^{q}\mathbb{E}\left[\left(\int_{1}^{x/x_{k+1} }|\sum_{\begin{subarray}{c}n\leq z\\ n\text{ is }x_{k+1}\text{-smooth}\end{subarray}}^{\star}f(n)|^{2}\frac{dz}{z^{2-2k/ \log x}}\right)^{q}\right]. \tag{4.5}\]
To this end, we apply the following version of Parseval's identity, and its proof can be found in [32, (5.26) in Sec 5.1].
**Lemma 4.1** ([22, Harmonic Analysis Result 1]).: _Let \((a_{n})_{n=1}^{\infty}\) be any sequence of complex numbers, and let \(A(s):=\sum_{n=1}^{\infty}\frac{a_{n}}{n^{s}}\) denote the corresponding Dirichlet series, and \(\sigma_{c}\) denote its abscissa of convergence. Then for any \(\sigma>\max\{0,\sigma_{c}\}\), we have_
\[\int_{0}^{\infty}\frac{|\sum_{n\leq x}a_{n}|^{2}}{x^{1+2\sigma}}dx=\frac{1}{2 \pi}\int_{-\infty}^{+\infty}\Big{|}\frac{A(\sigma+it)}{\sigma+it}\Big{|}^{2}dt.\]
Apply Lemma 4.1 and the expectation in (4.5) is
\[=\mathbb{E}\left[\left(\int_{-\infty}^{+\infty}\frac{|F_{k}^{(R)}(\frac{1}{2 }-\frac{k}{\log x}+it)|^{2}}{|\frac{1}{2}-\frac{k}{\log x}+it|^{2}}dt\right)^{ q}\right]\leq\sum_{n\in\mathbb{Z}}\mathbb{E}\left[\left(\int_{n-\frac{1}{2}}^{n+ \frac{1}{2}}\frac{|F_{k}^{(R)}(\frac{1}{2}-\frac{k}{\log x}+it)|^{2}}{|\frac{1 }{2}-\frac{k}{\log x}+it|^{2}}dt\right)^{q}\right].\]
Since \(f(m)m^{it}\) has the same law as \(f(m)\) for all \(m\), for any fixed \(n\) we have
\[\mathbb{E}\left[\left(\int_{n-\frac{1}{2}}^{n+\frac{1}{2}}|F_{k}^{(R)}(\frac{ 1}{2}-\frac{k}{\log x}+it)|^{2}dt\right)^{q}\right]=\mathbb{E}\left[\left( \int_{-\frac{1}{2}}^{\frac{1}{2}}|F_{k}^{(R)}(\frac{1}{2}-\frac{k}{\log x}+it)| ^{2}dt\right)^{q}\right].\]
For \(n-1/2\leq t\leq n+1/2\), we have \(1/|\frac{1}{2}-\frac{k}{\log x}+it|^{2}\asymp 1/n^{2}\) which is summable over \(n\). We complete the proof by inserting the above estimates into (4.5).
## 5. Proof of Proposition 2.2
This is the key part of the proof that reveals how \(\log\log R\approx\sqrt{\log\log x}\) could become the transition range. We begin with a discretization process which is the same as in [22]. For each \(|t|\leq\frac{1}{2}\), set \(t(-1)=t\), and then iteratively for each \(0\leq j\leq\log(\log x/\log R)-2\) define
\[t(j):=\max\{u\leq t(j-1):u=\frac{n}{((\log x)/e^{j+1})\log((\log x)/e^{j+1})} \text{ for some }n\in\mathbb{Z}\}.\]
By the definition, we have [22, (4.1)]
\[|t-t(j)|\leq\frac{2}{((\log x)/e^{j+1})\log((\log x)/e^{j+1})}.\]
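In concrete terms, \(t(j)\) is obtained from \(t(j-1)\) by rounding down to a grid of spacing \(1/G_{j}\), where \(G_{j}=((\log x)/e^{j+1})\log((\log x)/e^{j+1})\); a small sketch (ours) of this iteration is the following.

```python
# Small sketch (ours) of the discretisation above: t(j) is t(j-1) rounded down to
# the grid with spacing 1/G_j, where G_j = ((log x)/e^(j+1)) * log((log x)/e^(j+1)).
import math

def discretise(t, x, j_max):
    ts = [t]                                    # ts[0] plays the role of t(-1)
    for j in range(j_max + 1):
        G = (math.log(x) / math.e ** (j + 1)) * math.log(math.log(x) / math.e ** (j + 1))
        ts.append(math.floor(ts[-1] * G) / G)   # largest grid point <= t(j-1)
    return ts

print(discretise(0.3, 1e50, 3))                 # j_max must respect j <= log(log x / log R) - 2
```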
Given this notation, let \(B\) be the large fixed natural number from Proposition 3.4. Let \(\mathcal{G}(k)\) denote the event that for all \(|t|\leq\frac{1}{2}\) and for all \(k\leq j\leq\log\log x-\log\log R-B-2\), we have
\[(\frac{\log x}{e^{j+1}\log R}e^{C(x)})^{-1}\leq\prod_{\ell=j}^{\lfloor\log \log x-\log\log R\rfloor-B-2}|I_{\ell}(\frac{1}{2}-\frac{k}{\log x}+it(\ell))| \leq\frac{\log x}{e^{j+1}\log R}e^{C(x)}, \tag{5.1}\]
where, notably, our \(C(x)\) is chosen as
\[C(x):=\log\log R+100\log\log\log x. \tag{5.2}\]
We shall establish the following two key propositions. The first proposition says that when we are restricted to the good event \(\mathcal{G}(k)\), the \(q\)-th moment is small.
**Proposition 5.1**.: _Let \(x\) be large and \(\log\log R\ll(\log\log x)^{\frac{1}{2}}\). Let \(C(x)\) be defined as in (5.2). Let \(F_{k}^{(R)}\) be defined as in (2.1) and \(\mathcal{G}(k)\) be defined as in (5.1). For all \(0\leq k\leq\mathcal{K}=\lfloor\log\log\log x\rfloor\) and \(1/2\leq q\leq 9/10\), we have_
\[\mathbb{E}\left[\left(\int_{-\frac{1}{2}}^{\frac{1}{2}}\mathbf{1}_{\mathcal{G }(k)}|F_{k}^{(R)}(\frac{1}{2}-\frac{k}{\log x}+it)|^{2}dt\right)^{q}\right] \ll\left(\frac{\log x}{e^{k}\log R}\right)^{q}\Big{(}\frac{C(x)}{\sqrt{\log \log x}}\Big{)}^{q}.\]
The second proposition is to show that indeed \(\mathbf{1}_{\mathcal{G}(k)}\) happens with high probability.
**Proposition 5.2**.: _Let \(\mathcal{G}(k)\) be defined as in (5.1). For all \(0\leq k\leq\mathcal{K}=\lfloor\log\log\log x\rfloor\) and uniformly for all \(1/2\leq q\leq 9/10\) and \(C(x)\) defined in (5.2), we have_
\[\mathbb{P}(\mathcal{G}(k)\text{ fails})\ll e^{-C(x)}.\]
The above two key propositions imply Proposition 2.2.
Deduction of Proposition 2.2.: According to the good event \(\mathcal{G}(k)\) happening or not, we have
\[\mathbb{E}\left[\left(\int_{-\frac{1}{2}}^{\frac{1}{2}}|F_{k}^{(R)}( \frac{1}{2}-\frac{k}{\log x}+it)|^{2}dt\right)^{q}\right]\] \[\leq\mathbb{E}\left[\left(\int_{-\frac{1}{2}}^{\frac{1}{2}}\mathbf{ 1}_{\mathcal{G}(k)}|F_{k}^{(R)}(\frac{1}{2}-\frac{k}{\log x}+it)|^{2}dt\right) ^{q}\right]+\mathbb{E}\left[\left(\int_{-\frac{1}{2}}^{\frac{1}{2}}\mathbf{1}_ {\mathcal{G}(k)\text{\rm{fails}}}|F_{k}^{(R)}(\frac{1}{2}-\frac{k}{\log x}+it) |^{2}dt\right)^{q}\right]\] \[\leq\left(\frac{\log x}{e^{k}\log R}\right)^{q}\left(\frac{C(x)}{ \sqrt{\log\log x}}\right)^{q}+\left(\int_{-\frac{1}{2}}^{\frac{1}{2}}\mathbb{E }[|F_{k}^{(R)}(\frac{1}{2}-\frac{k}{\log x}+it)|^{2}]dt\right)^{q}\mathbb{P}( \mathcal{G}(k)\text{ fails})^{1-q},\]
where in the first term we used Proposition 5.1 and we applied Holder's inequality with exponents \(\frac{1}{q},\frac{1}{1-q}\) to get the second term. We next apply the mean square calculation (3.1) to derive that the above is
\[\ll\left(\frac{\log x}{e^{k}\log R}\right)^{q}\left(\left(\frac{C(x)}{\sqrt{ \log\log x}}\right)^{q}+\mathbb{P}(\mathcal{G}(k)\text{ fails})^{1-q}\right).\]
Plug in the definition of \(C(x)\) and use Proposition 5.2 with \(1-q\geq 1/10\) (and then the exceptional probability to the power \(1/10\) is negligible) to deduce that
\[\ll e^{-k/2}\left(\frac{\log x}{\log R}\right)^{q}\cdot\Big{(}\frac{C(x)}{ \sqrt{\log\log x}}\Big{)}^{q},\]
which completes the proof.
_Remark 5.3_.: We remark that in (5.2), the quantity \(C(x)=\log\log R+100\log\log\log x\) is different from just being a constant \(C\) as in [22]. The reason for our choice of \(C(x)\) is the following. Firstly, to ensure that the \(q\)-th moment bound in Proposition 5.1 has a saving (i.e. to make \(\left(\frac{C(x)}{\sqrt{\log\log x}}\right)^{q}\) small), we require that \(C(x)=o(\sqrt{\log\log x})\). Secondly, it turns out that in order to make the exceptional probability in Proposition 5.2 small enough, one has the constraint \(\log\log R\ll C(x)\). The combination of these two requirements leads to \(\log\log R=o(\sqrt{\log\log x})\).
_Remark 5.4_.: In the deduction of Proposition 2.2, we did not use an iterative process as used in [22]. Instead, we added an extra term \(100\log\log\log x\) for the purpose of getting strong enough bounds on \(\mathbb{P}(\mathcal{G}(k)\text{ fails})\). We simplified the proof by getting a slightly weaker upper bound in Theorem 1.2 as compensation.
### Proof of Proposition 5.1
The proof of Proposition 5.1 is a simple modification of the proof of Key Proposition 1 in [22]. We emphasize again that the main difference is that instead of using a large constant \(C\) as in [22], we replace it with the quantity \(C(x)\) defined in (5.2); moreover, we do not need the extra help from the quantity \(h(j)\), which hopefully makes the proof conceptually easier.
By using Holder's inequality, it suffices to prove that
\[\mathbb{E}[\mathbf{1}_{\mathcal{G}(k)}\int_{-\frac{1}{2}}^{\frac{1}{2}}|F_{k}^ {(R)}(\frac{1}{2}-\frac{k}{\log x}+it)|^{2}dt]\ll e^{-k}\cdot\frac{\log x}{ \log R}\cdot\frac{C(x)}{\sqrt{\log\log x}}, \tag{5.3}\]
uniformly for \(0\leq k\leq\mathcal{K}=\lfloor\log\log\log x\rfloor\) and \(1/2\leq q\leq 9/10\). We can upper bound the left-hand side of (5.3) by
\[\leq\int_{-\frac{1}{2}}^{\frac{1}{2}}\mathbb{E}[\mathbf{1}_{\mathcal{G}(k,t)}| F_{k}^{(R)}(\frac{1}{2}-\frac{k}{\log x}+it)|^{2}]dt \tag{5.4}\]
where \(\mathbf{1}_{\mathcal{G}(k,t)}\) is the event that
\[(\frac{\log x}{e^{j+1}\log R}e^{C(x)})^{-1}\leq\prod_{\ell=j}^{\lfloor\log \log x-\log\log R\rfloor-B-2}|I_{\ell}(\frac{1}{2}-\frac{k}{\log x}+it(\ell))| \leq\frac{\log x}{e^{j+1}\log R}e^{C(x)}\]
for all \(k\leq j\leq\log\log x-\log\log R-B-2\). This is an upper bound because \(\mathcal{G}(k)\) is the event that \(\mathcal{G}(k,t)\) holds for all \(|t|\leq\frac{1}{2}\). Since \(f(n)\) has the same law as \(f(n)n^{it}\), we have
\[\int_{-\frac{1}{2}}^{\frac{1}{2}}\mathbb{E}[\mathbf{1}_{\mathcal{G}(k,t)}|F_ {k}^{(R)}(\frac{1}{2}-\frac{k}{\log x}+it)|^{2}]dt=\int_{-\frac{1}{2}}^{\frac {1}{2}}\mathbb{E}[\mathbf{1}_{\mathcal{H}(k,t)}|F_{k}^{(R)}(\frac{1}{2}-\frac {k}{\log x})|^{2}]dt, \tag{5.5}\]
where \(\mathbf{1}_{\mathcal{H}(k,t)}\) denotes the event that
\[(\frac{\log x}{e^{j+1}\log R}e^{C(x)})^{-1}\leq\prod_{\ell=j}^{\lfloor\log \log x-\log\log R\rfloor-B-2}|I_{\ell}(\frac{1}{2}-\frac{k}{\log x}+i(t(\ell)- t))|\leq\frac{\log x}{e^{j+1}\log R}e^{C(x)},\]
for all \(k\leq j\leq\log\log x-\log\log R-B-2\). We next apply Proposition 3.4. It is clear that \(\mathcal{H}(k,t)\) is the event treated in Proposition 3.4 with \(n=\lfloor\log\log x-\log\log R\rfloor-(B+1)-k\); \(\sigma=\frac{k}{\log x}\) and \(t_{m}=t(\lfloor\log\log x-\log\log R\rfloor-(B+1)-m)-t\) for all \(m\); and
\[a=C(x)+B+1,\quad h(j)=0.\]
The parameters indeed satisfy \(|\sigma|\leq\frac{1}{e^{B+n+1}}\) and \(|t_{m}|\leq\frac{1}{m^{2/3}e^{B+m+1}}\) for all \(m\). Apply Proposition 3.4 to derive
\[\frac{\mathbb{E}[\mathbf{1}_{\mathcal{H}(k,t)}|F_{k}^{(R)}(\frac{1}{2}-\frac {k}{\log x})|^{2}]}{\mathbb{E}[|F_{k}^{(R)}(\frac{1}{2}-\frac{k}{\log x})|^{2} ]}=\tilde{\mathbb{P}}(\mathcal{H}(k,t))\ll\min\{1,\frac{a}{\sqrt{n}}\}.\]
A simple mean square calculation (see (3.1)) gives that
\[\mathbb{E}[|F_{k}^{(R)}(\frac{1}{2}-\frac{k}{\log x})|^{2}]=\exp\left(\sum_{ R\leq p\leq x^{e^{-(k+1)}}}\frac{1}{p^{1-2k/\log x}}+O(1)\right)\ll\frac{\log x }{e^{k}\log R}.\]
Combining the above two inequalities and the relation in (5.5), we get the desired upper bound for the quantity in (5.4). Thus, we complete the proof of (5.3) and Proposition 5.1.
### Proof of Proposition 5.2
In the proof, we will see why it is necessary to make \(C(x)\) large enough compared to \(\log\log R\). The proof starts with the union bound. We have
\[\mathbb{P}(\mathcal{G}(k)\text{ fails})\leq\mathbb{P}_{1}+\mathbb{P}_{2},\]
where
\[\mathbb{P}_{1}=\sum_{k\leq j\leq\log(\frac{\log x}{\log R})-B-2}\mathbb{P}\left( \prod_{\ell=j}^{\lfloor\log(\frac{\log x}{\log R})\rfloor-B-2}|I_{\ell}(\frac{1 }{2}-\frac{k}{\log x}+it(\ell))|>\frac{\log x}{e^{j+1}\log R}e^{C(x)}\text{ for some }t\right)\]
and
\[\mathbb{P}_{2}=\sum_{k\leq j\leq\log(\frac{\log x}{\log R})-B-2}\mathbb{P} \left(\prod_{\ell=j}^{\lfloor\log(\frac{\log x}{\log R})\rfloor-B-2}|I_{\ell}( \frac{1}{2}-\frac{k}{\log x}+it(\ell))|^{-1}>\frac{\log x}{e^{j+1}\log R}e^{C( x)}\text{ for some }t\right),\]
where \(|t|\leq 1/2\). We focus on bounding \(\mathbb{P}_{1}\), and \(\mathbb{P}_{2}\) can be estimated similarly. Replace the set of all \(|t|\leq 1/2\) by the discrete set
\[\mathcal{T}(x,j):=\left\{\frac{n}{((\log x)/e^{j+1})\log((\log x)/e^{j+1})}:| n|\leq((\log x)/e^{j+1})\log((\log x)/e^{j+1})\right\},\]
and apply the union bound to get
\[\mathbb{P}_{1}\leq\sum_{\begin{subarray}{c}k\leq j\leq\log(\frac{\log x}{\log R} )-B-2\\ t(j)\in\mathcal{T}(x,j)\end{subarray}}\mathbb{P}\left(\prod_{\ell=j}^{\lfloor \log(\frac{\log x}{\log R})\rfloor-B-2}|I_{\ell}(\frac{1}{2}-\frac{k}{\log x}+ it(\ell))|>\frac{\log x}{e^{j+1}\log R}e^{C(x)}\right).\]
By using Chebyshev's inequality this is at most
\[\leq\sum_{\begin{subarray}{c}k\leq j\leq\log(\frac{\log x}{\log R})-B-2\\ t(j)\in\mathcal{T}(x,j)\end{subarray}}\frac{1}{(\frac{\log x}{e^{j+1}\log R}e^ {C(x)})^{2}}\mathbb{E}[\prod_{\ell=j}^{\lfloor\log(\frac{\log x}{\log R}) \rfloor-B-2}|I_{\ell}(\frac{1}{2}-\frac{k}{\log x}+it(\ell))|^{2}].\]
Since \(f(n)\) and \(f(n)n^{it}\) have the same law, the above is
\[\ll\sum_{k\leq j\leq\log(\frac{\log x}{\log R})-B-2}\frac{|\mathcal{T}(x,j)|} {(\frac{\log x}{e^{j+1}\log R}e^{C(x)})^{2}}\mathbb{E}[\prod_{\ell=j}^{\lfloor \log(\frac{\log x}{\log R})\rfloor-B-2}|I_{\ell}(\frac{1}{2}-\frac{k}{\log x} )|^{2}].\]
The expectation here is, again through a mean square calculation (3.1), \(\ll\frac{\log x}{e^{j+1}\log R}\). Note \(|\mathcal{T}(x,j)|\ll((\log x)/e^{j+1})\log((\log x)/e^{j+1})\). We conclude that
\[\mathbb{P}_{1}\ll\sum_{k\leq j\leq\log(\frac{\log x}{\log R})-B-2}e^{\log \log R-2C(x)+\log\log(\log x/e^{j+1})}\ll e^{-C(x)},\]
where in the last step we used that \(C(x)=\log\log R+100\log\log\log x\). Thus we complete the proof of Proposition 5.2.
## 6. Proof of Theorem 1.3
We first notice that if \(R>x^{\frac{1}{A}}\) for any fixed large constant \(A\), then every element of \(\mathcal{A}_{R}(x)\) has only \(O_{A}(1)\) prime factors. This would immediately imply that \(\mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|^{4}]\ll_{A}|\mathcal{A}_{R}(x)|^{2}\) and, by (1.5), the conclusion follows. From now on, we may assume that
\[R\leq x^{\frac{1}{A}}. \tag{6.1}\]
The proof strategy of Theorem 1.3 again follows from [22]. The main differences lie in the design of the barrier events and taking advantage of \(R\) being large. In particular, we do not need a "two-dimensional Girsanov-type" calculation which makes our proof less technical. We first do the reduction step to reduce the problem to understanding certain averages of random Euler products, as in the upper bound proof.
**Proposition 6.1**.: _There exists a large constant \(C\) such that the following is true. Let \(x\) be large and \(\log\log R\gg\sqrt{\log\log x}\). Let \(F^{(R)}(s)\) be defined as in (2.2). Then, uniformly for all \(1/2\leq q\leq 9/10\) and any large \(V\), we have \(\|\sum_{n\in\mathcal{A}_{R}(x)}f(n)\|_{2q}\)_
\[\gg\sqrt{\frac{x}{\log x}}\left(\Big{\|}\int_{-\frac{1}{2}}^{\frac{1}{2}}|F^{ (R)}(\frac{1}{2}+\frac{4V}{\log x}+it)|^{2}dt\Big{\|}_{q}^{\frac{1}{2}}-\frac{ C}{e^{V}}\Big{\|}\int_{-\frac{1}{2}}^{\frac{1}{2}}|F^{(R)}(\frac{1}{2}+\frac{2V}{ \log x}+it)|^{2}dt\Big{\|}_{q}^{\frac{1}{2}}-C\right).\]
The remaining tasks are to give a desired lower bound on \(\|F^{(R)}(\frac{1}{2}+\frac{4V}{\log x}+it)\|_{q}^{\frac{1}{2}}\) and an upper bound on \(\|F^{(R)}(\frac{1}{2}+\frac{2V}{\log x}+it)\|_{q}^{\frac{1}{2}}\). The upper bound part is simple. Indeed, simply apply Holder's inequality and do a mean square calculation (3.1) to get
\[\mathbb{E}[(\int_{-\frac{1}{2}}^{\frac{1}{2}}|F^{(R)}(\frac{1}{2}+\frac{2V}{ \log x}+it)|^{2}dt)^{q}]\ll\Big{(}\int_{-\frac{1}{2}}^{\frac{1}{2}}\mathbb{E }[|F^{(R)}(\frac{1}{2}+\frac{2V}{\log x}+it)|^{2}]dt\Big{)}^{q}\ll\Big{(}\frac {\log x}{V\log R}\Big{)}^{q}. \tag{6.2}\]
We next focus on the main task, giving a good lower bound on \(\|F^{(R)}(\frac{1}{2}+\frac{4V}{\log x}+it)\|_{q}^{\frac{1}{2}}\). For each \(t\in\mathbb{R}\), we use \(L(t)\) to denote the event that for all \(\lfloor\log V\rfloor+3\leq j\leq\log\log x-\log\log R-B-2\), the following holds
\[(\frac{\log x}{e^{j+1}\log R}e^{D(x)})^{-B}\leq\prod_{\ell=j}^{\lfloor\log \log x-\log\log R\rfloor-B-2}|I_{\ell}(\frac{1}{2}+\frac{4V}{\log x}+it)|\leq \frac{\log x}{e^{j+1}\log R}e^{D(x)}, \tag{6.3}\]
where \(D(x):=c\sqrt{\log\log x-\log\log R}\) with
\[c=\frac{1}{4}\min\Big{\{}\frac{\log\log R}{\sqrt{\log\log x-\log\log R}},1 \Big{\}}\asymp 1. \tag{6.4}\]
We are now ready to define a random set
\[\mathcal{L}:=\{-1/2\leq t\leq 1/2:L(t)\text{ defined by (6.3) holds}\}. \tag{6.5}\]
It is clear that
\[\mathbb{E}[(\int_{-\frac{1}{2}}^{\frac{1}{2}}|F^{(R)}(\frac{1}{2}+\frac{4V}{ \log x}+it)|^{2}dt)^{q}]\geq\mathbb{E}[(\int_{\mathcal{L}}|F^{(R)}(\frac{1}{2} +\frac{4V}{\log x}+it)|^{2}dt)^{q}]. \tag{6.6}\]
We use the following estimate and defer its proof to Section 8.
**Proposition 6.2**.: _Let \(x\) be large and \(\log\log R\gg\sqrt{\log\log x}\). Let \(F^{(R)}(s)\) be defined as in (2.2) and \(V\) be a large constant. Let \(\mathcal{L}\) be the random set defined in (6.5). Then uniformly for any \(1/2\leq q\leq 9/10\), we have_
\[\mathbb{E}[(\int_{\mathcal{L}}|F^{(R)}(\frac{1}{2}+\frac{4V}{\log x}+it)|^{2} dt)^{q}]\gg\Big{(}\frac{\log x}{V\log R}\Big{)}^{q}. \tag{6.7}\]
Plug (6.6), (6.7) and (6.2) into Proposition 6.1 with \(q=\frac{1}{2}\) (and choosing \(V\) to be a sufficiently large fixed constant so that \(C/e^{V}\) kills the implicit constant) to get that
\[\mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|]\gg\sqrt{|\mathcal{A}_{R}(x)|}.\]
This completes the proof of Theorem 1.3.
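On a very small scale, the conclusion \(\mathbb{E}[|\sum_{n\in\mathcal{A}_{R}(x)}f(n)|]\gg\sqrt{|\mathcal{A}_{R}(x)|}\) can be illustrated by direct simulation. The sketch below is illustrative only: it assumes, consistently with the \(\sum^{\star}\) notation used below, that \(\mathcal{A}_{R}(x)\) is the set of \(R\)-rough integers \(n\leq x\), that \(f\) is a completely multiplicative Steinhaus random multiplicative function, and it uses parameters far too small to probe the actual threshold \(\log\log R\asymp\sqrt{\log\log x}\).

```python
import numpy as np

def smallest_prime_factor(n):
    """spf[m] = smallest prime factor of m (with spf[1] = 1)."""
    spf = list(range(n + 1))
    for p in range(2, int(n ** 0.5) + 1):
        if spf[p] == p:                      # p is prime
            for m in range(p * p, n + 1, p):
                if spf[m] == m:
                    spf[m] = p
    return spf

x, R, trials = 20000, 20, 300
spf = smallest_prime_factor(x)
primes = [p for p in range(2, x + 1) if spf[p] == p]
index = {p: i for i, p in enumerate(primes)}
A_R = [n for n in range(2, x + 1) if spf[n] >= R]   # R-rough integers in (1, x]

def f_of(n, fp):
    """Completely multiplicative extension of the prime values fp to n."""
    val = 1.0 + 0j
    while n > 1:
        p = spf[n]
        val *= fp[index[p]]
        n //= p
    return val

rng = np.random.default_rng(1)
sums = []
for _ in range(trials):
    fp = np.exp(2j * np.pi * rng.random(len(primes)))   # Steinhaus: f(p) uniform on the circle
    sums.append(abs(sum(f_of(n, fp) for n in A_R)))

print(np.mean(sums), np.sqrt(len(A_R)))   # the two values are of comparable size
```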
## 7. Proof of Proposition 6.1
The proof proceeds in the same way as in [22, Proposition 3] (see also [24]); we provide a self-contained proof here and highlight some small modifications.
Let \(P(n)\) denote the largest prime factor of \(n\) as before. We have assumed that (6.1) holds, so in particular \(R\leq\sqrt{x}\) (this restriction is not crucial but simplifies the notation later). Let \(\epsilon\) denote a Rademacher random variable independent of \(f(n)\), and recall that \(\sum^{\star}\) indicates that the variable \(n\) under the summation is \(R\)-rough. For \(1/2\leq q\leq 9/10\), we have
\[\begin{split}\mathbb{E}[|\sum_{\begin{subarray}{c}n\leq x\\ P(n)>\sqrt{x}\end{subarray}}^{\star}f(n)|^{2q}]&=\frac{1}{2^{2q}}\mathbb{E}[|\sum_{\begin{subarray}{c}n\leq x\\ P(n)\leq\sqrt{x}\end{subarray}}^{\star}f(n)+\sum_{\begin{subarray}{c}n\leq x\\ P(n)>\sqrt{x}\end{subarray}}^{\star}f(n)+\sum_{\begin{subarray}{c}n\leq x\\ P(n)>\sqrt{x}\end{subarray}}^{\star}f(n)-\sum_{\begin{subarray}{c}n\leq x\\ P(n)\leq\sqrt{x}\end{subarray}}^{\star}f(n)|^{2q}]\\ &\leq\mathbb{E}[|\sum_{\begin{subarray}{c}n\leq x\\ P(n)\leq\sqrt{x}\end{subarray}}^{\star}f(n)+\sum_{\begin{subarray}{c}n\leq x\\ P(n)>\sqrt{x}\end{subarray}}^{\star}f(n)|^{2q}]+\mathbb{E}[|\sum_{\begin{subarray}{c}n\leq x\\ P(n)>\sqrt{x}\end{subarray}}^{\star}f(n)-\sum_{\begin{subarray}{c}n\leq x\\ P(n)\leq\sqrt{x}\end{subarray}}^{\star}f(n)|^{2q}]\\ &=2\,\mathbb{E}[|\epsilon\sum_{\begin{subarray}{c}n\leq x\\ P(n)>\sqrt{x}\end{subarray}}^{\star}f(n)+\sum_{\begin{subarray}{c}n\leq x\\ P(n)\leq\sqrt{x}\end{subarray}}^{\star}f(n)|^{2q}]=2\,\mathbb{E}[|\sum_{n\leq x}^{\star}f(n)|^{2q}].\end{split}\]
The inner sum is determined by \((f(p))_{R\leq p\leq\sqrt{x}}\); conditioning on these values and applying Khintchine's inequality [15, Lemma 3.8.1], we get
\[\mathbb{E}[|\sum_{\begin{subarray}{c}n\leq x\\ P(n)>\sqrt{x}\end{subarray}}^{\star}f(n)|^{2q}]\gg\mathbb{E}[(\sum_{\sqrt{x}<p \leq x}|\sum_{m\leq x/p}^{\star}f(m)|^{2})^{q}]\geq\frac{1}{(\log x)^{q}} \mathbb{E}[(\sum_{\sqrt{x}<p\leq x}\log p\cdot|\sum_{m\leq x/p}^{\star}f(m)|^{2 })^{q}].\]
Next, do the smoothing step as we did in the upper bound case. Again set \(X=e^{\sqrt{\log x}}\). Write
\[\sum_{\sqrt{x}<p\leq x}\log p\cdot|\sum_{m\leq x/p}^{\star}f(m)|^{2}=\sum_{ \sqrt{x}<p\leq x}\log p\cdot\frac{X}{p}\int_{p}^{p(1+1/X)}|\sum_{m\leq x/p}^{ \star}f(m)|^{2}dt.\]
One has \(|a+b|^{2}\geq|a|^{2}/4-\min\{|b|^{2},|a/2|^{2}\}\geq 0\), and thus the above is at least
\[\begin{split}&\frac{1}{4}\sum_{\sqrt{x}<p\leq x}\log p\cdot \frac{X}{p}\int_{p}^{p(1+1/X)}|\sum_{m\leq x/t}^{\star}f(m)|^{2}dt\\ &-\sum_{\sqrt{x}<p\leq x}\log p\cdot\frac{X}{p}\int_{p}^{p(1+1/X )}\min\{|\sum_{x/t\leq m\leq x/p}^{\star}f(m)|^{2},\frac{1}{4}|\sum_{m\leq x/t }^{\star}f(m)|^{2}\}.\end{split} \tag{7.1}\]
It follows that the quantity we are interested in has the lower bound
\[\begin{split}\mathbb{E}[|\sum_{\begin{subarray}{c}n\leq x\\ P(n)>\sqrt{x}\end{subarray}}^{\star}f(n)|^{2q}]\geq&\frac{1}{( \log x)^{q}}\mathbb{E}[(\frac{1}{4}\sum_{\sqrt{x}<p\leq x}\log p\cdot\frac{X}{p }\int_{p}^{p(1+1/X)}|\sum_{m\leq x/t}^{\star}f(m)|^{2}dt)^{q}]\\ &-\frac{1}{(\log x)^{q}}\mathbb{E}[(\sum_{\sqrt{x}<p\leq x}\log p \cdot\frac{X}{p}\int_{p}^{p(1+1/X)}|\sum_{x/t<m\leq x/p}^{\star}f(m)|^{2}dt)^{ q}].\end{split} \tag{7.2}\]
Use Holder's inequality and throw away the \(R\)-rough condition to upper bound the subtracted term in (7.2) by
\[\begin{split}&\leq\frac{1}{(\log x)^{q}}\left(\sum_{\sqrt{x}<p \leq x}\log p\cdot\frac{X}{p}\int_{p}^{p(1+1/X)}\mathbb{E}[|\sum_{x/t<m\leq x /p}f(m)|^{2}]dt\right)^{q}\\ &\ll\frac{1}{(\log x)^{q}}\Big{(}\sum_{\sqrt{x}<p\leq x}\log p \cdot(\frac{x}{pX}+1)\Big{)}^{q}\ll\frac{1}{(\log x)^{q}}(\frac{x\log x}{X}+x )^{q}\ll(\frac{x}{\log x})^{q}.\end{split}\]
The first term in (7.2) (without the factor \(1/4(\log x)^{q}\)) is
\[\begin{split}&\mathbb{E}[(\sum_{\sqrt{x}<p\leq x}\log p\cdot\frac{X}{p}\int_{p}^{p(1+1/X)}|\sum_{m\leq x/t}^{\star}f(m)|^{2}dt)^{q}]\\ &=\mathbb{E}[(\int_{\sqrt{x}}^{x}\sum_{\frac{t}{1+1/X}<p\leq t}\log p\cdot\frac{X}{p}|\sum_{m\leq x/t}^{\star}f(m)|^{2}dt)^{q}]\\ &\gg\mathbb{E}[(\int_{\sqrt{x}}^{x}|\sum_{m\leq x/t}^{\star}f(m)|^{2}dt)^{q}]=x^{q}\mathbb{E}[(\int_{1}^{\sqrt{x}}|\sum_{m\leq z}^{\star}f(m)|^{2}\frac{dz}{z^{2}})^{q}].\end{split}\]
At this point, we impose the smoothness condition in order to pass from the sums to Euler products. We have, for any large \(V\),
\[\mathbb{E}[(\int_{1}^{\sqrt{x}}|\sideset{}{{}^{\star}}{\sum}_{m\leq z }f(m)|^{2}\frac{dz}{z^{2}})^{q}]\geq\mathbb{E}[(\int_{1}^{\sqrt{x}}|\sideset{} {{}^{\star}}{\sum}_{\begin{subarray}{c}m\leq z\\ x-\text{smooth}\end{subarray}}f(m)|^{2}\frac{dz}{z^{2+8V/\log x}})^{q}]\] \[\geq\mathbb{E}[(\int_{1}^{+\infty}|\sideset{}{{}^{\star}}{\sum}_{ \begin{subarray}{c}m\leq z\\ x-\text{smooth}\end{subarray}}f(m)|^{2}\frac{dz}{z^{2+8V/\log x}})^{q}]- \mathbb{E}[(\int_{\sqrt{x}}^{+\infty}|\sideset{}{{}^{\star}}{\sum}_{ \begin{subarray}{c}m\leq z\\ x-\text{smooth}\end{subarray}}f(m)|^{2}\frac{dz}{z^{2+8V/\log x}})^{q}]\] \[\geq\mathbb{E}[(\int_{1}^{+\infty}|\sideset{}{{}^{\star}}{\sum}_{ \begin{subarray}{c}m\leq z\\ x-\text{smooth}\end{subarray}}f(m)|^{2}\frac{dz}{z^{2+8V/\log x}})^{q}]-\frac {1}{e^{2Vq}}\mathbb{E}[(\int_{1}^{+\infty}|\sideset{}{{}^{\star}}{\sum}_{ \begin{subarray}{c}m\leq z\\ x-\text{smooth}\end{subarray}}f(m)|^{2}\frac{dz}{z^{2+4V/\log x}})^{q}].\]
Apply Lemma 4.1 to get that the first term is
\[\gg\mathbb{E}[(\int_{-\frac{1}{2}}^{\frac{1}{2}}|F^{(R)}(\frac{1}{2}+\frac{4V} {\log x}+it)|^{2}dt)^{q}]. \tag{7.3}\]
For the second term, an application of Lemma 4.1 gives
\[\ll e^{-2Vq}\mathbb{E}[(\int_{-\infty}^{+\infty}\frac{|F^{(R)}(\frac{1}{2}+\frac{2V}{\log x}+it)|^{2}}{|\frac{1}{2}+\frac{2V}{\log x}+it|^{2}}dt)^{q}]\ll e^{-2Vq}\mathbb{E}[(\int_{-\frac{1}{2}}^{\frac{1}{2}}|F^{(R)}(\frac{1}{2}+\frac{2V}{\log x}+it)|^{2}dt)^{q}] \tag{7.4}\]
where in the last step we used the fact that \(f(n)n^{it}\) has the same law as \(f(n)\) and \(\sum_{n\geq 1}n^{-2}\) converges. Bounds in (7.3) and (7.4) together give the desired bound for the first term in (7.2) and we complete the proof.
## 8. Proof of Proposition 6.2
In this section, we prove Proposition 6.2. The proof relies crucially on the following proposition, which is a mean value estimate of the product of \(|F^{(R)}(\sigma+it_{1})|^{2}\) and \(|F^{(R)}(\sigma+it_{2})|^{2}\). Our upper bound matches what one would guess if the two products were independent.
**Proposition 8.1**.: _Let \(x\) be large and \(\log\log R\gg\sqrt{\log\log x}\). Let \(F^{(R)}(s)\) be defined as in (2.2) and \(V\) be a large constant. Let \(\mathcal{L}\) be the random set defined in (6.5). Then we have_
\[\mathbb{E}[(\int_{\mathcal{L}}|F^{(R)}(\frac{1}{2}+\frac{4V}{\log x}+it)|^{2} dt)^{2}]\ll(\frac{\log x}{V\log R})^{2}. \tag{8.1}\]
Proof of Proposition 6.2 assuming Proposition 8.1.: The proof starts with an application of Holder's inequality. We have
\[\mathbb{E}[(\int_{\mathcal{L}}|F^{(R)}(\frac{1}{2}+\frac{4V}{\log x}+it)|^{2} dt)^{q}]\geq\frac{(\mathbb{E}[\int_{\mathcal{L}}|F^{(R)}(\frac{1}{2}+\frac{4V}{ \log x}+it)|^{2}dt])^{2-q}}{(\mathbb{E}[(\int_{\mathcal{L}}|F^{(R)}(\frac{1}{ 2}+\frac{4V}{\log x}+it)|^{2}dt)^{2}])^{1-q}}. \tag{8.2}\]
Proposition 8.1 gives a desired upper bound for the denominator. We next give a lower bound on the numerator. By using that \(f(n)n^{it}\) has the same law as \(f(n)\), the numerator is
\[(\int_{-1/2}^{1/2}\mathbb{E}[\mathbf{1}_{L(t)}|F^{(R)}(\frac{1}{2}+\frac{4V}{ \log x}+it)|^{2}]dt)^{2-q}=(\mathbb{E}[\mathbf{1}_{L(0)}|F^{(R)}(\frac{1}{2}+ \frac{4V}{\log x})|^{2}])^{2-q}.\]
We next use Proposition 3.4 by taking \(n=\lfloor\log\log x-\log\log R\rfloor-(B+1)-\lfloor\log V\rfloor\), \(a=D(x)=c\sqrt{\log\log x-\log\log R}\) and \(h(j)=0\) to conclude that \(\tilde{\mathbb{P}}(L(0))\gg 1\). Combining this with the mean square calculation (3.1), we have
\[\mathbb{E}[\mathbf{1}_{L(0)}|F^{(R)}(\frac{1}{2}+\frac{4V}{\log x})|^{2}]\gg\tilde{\mathbb{P}}(L(0))\cdot\mathbb{E}[|F^{(R)}(\frac{1}{2}+\frac{4V}{\log x})|^{2}]\gg\frac{\log x}{V\log R}. \tag{8.3}\]
We complete the proof by plugging (8.1) and (8.3) into (8.2).
The proof of Proposition 8.1 is a bit involved and is inspired by [22, Key Proposition 5] and [17, Multiplicative chaos results 4]. We do not use the "two-dimensional Girsanov-type" computation from [22, Key Proposition 5], which simplifies the proof significantly. We do not expect any further savings when \(R\) is as large as stated in Proposition 8.1, while for smaller \(R\) one might expect further cancellation as in [22, Key Proposition 5], which may be verified by adapting the "two-dimensional Girsanov-type" calculation.
Proof of Proposition 8.1.: Expand the square and it equals
\[\mathbb{E}[\int_{-1/2}^{1/2}\mathbf{1}_{L(t_{1})}|F^{(R)}(\frac{1}{2}+\frac{4V }{\log x}+it_{1})|^{2}dt_{1}\int_{-1/2}^{1/2}\mathbf{1}_{L(t_{2})}|F^{(R)}( \frac{1}{2}+\frac{4V}{\log x}+it_{2})|^{2}dt_{2}].\]
By using that \(f(n)n^{it}\) has the same law as \(f(n)\), we write the above as \((t:=t_{1}-t_{2})\)
\[\int_{-1}^{1}\mathbb{E}[\mathbf{1}_{L(0)}|F^{(R)}(\frac{1}{2}+\frac{4V}{\log x })|^{2}\mathbf{1}_{L(t)}|F^{(R)}(\frac{1}{2}+\frac{4V}{\log x}+it)|^{2}]dt. \tag{8.4}\]
For \(|t|\) large enough, the two factors behave independently, which is the easier case. Indeed, if \(|t|>1/\log R\), drop the indicator functions and bound the corresponding integration by
\[\ll\max_{1/\log R<|t|\leq 1}\mathbb{E}[|F^{(R)}(\frac{1}{2}+\frac{4V}{\log x })|^{2}\cdot|F^{(R)}(\frac{1}{2}+\frac{4V}{\log x}+it)|^{2}].\]
Apply the two dimensional mean square calculation (3.3) with \((x,y)=(R,x)\) to conclude that the above is
\[\ll\Big{(}\frac{\log x}{V\log R}\Big{)}^{2}.\]
We next focus on the case \(|t|\leq 1/\log R\). Since \(f(p)\) are independent of each other, we can decompose the Euler products into pieces and analyze their contributions to (8.4) separately. Define the following three sets of primes based on the sizes of primes
\[\mathcal{P}_{1}:=\{p\text{ prime}:R\leq p<x^{e^{-(\lfloor\log\log x-\log\log R \rfloor-B-2)}}\},\]
\[\mathcal{P}_{2}:=\{p\text{ prime}:x^{e^{-(\lfloor\log\log x-\log\log R \rfloor-B-2)}}\leq p\leq x^{e^{-(\lfloor\log V\rfloor+3)}}\},\]
and
\[\mathcal{P}_{3}:=\{p\text{ prime}:x^{e^{-(\lfloor\log V\rfloor+3)}}<p\leq x\}.\]
We proceed as follows. Note that the events \(L(0)\) and \(L(t)\) are irrelevant to \(f(p)\) for \(p\in\mathcal{P}_{1}\cup\mathcal{P}_{3}\). For partial products over primes \(p\in\mathcal{P}_{1}\cup\mathcal{P}_{3}\), we directly do mean square calculations. For partial products over primes \(p\in\mathcal{P}_{2}\), we will crucially use the indicator functions
\(\mathbf{1}_{L(0)}\) and \(\mathbf{1}_{L(t)}\) defined in (6.3) with \(j=\lfloor\log V\rfloor+3\). This separation gives that the integration in (8.4) over \(|t|\leq 1/\log R\) is
\[\begin{split}&\int_{|t|\leq\frac{1}{\log R}}\mathbb{E}[\prod_{p\in \mathcal{P}_{1}\cup\mathcal{P}_{3}}|1-\frac{f(p)}{p^{\frac{1}{2}+\frac{4V}{ \log x}}}|^{-2}|1-\frac{f(p)}{p^{\frac{1}{2}+\frac{4V}{\log x}+it}}|^{-2}]\\ \times&\mathbb{E}[\mathbf{1}_{L(0)}\mathbf{1}_{L(t) }\prod_{p\in\mathcal{P}_{2}}|1-\frac{f(p)}{p^{\frac{1}{2}+\frac{4V}{\log x}}}|^ {-2}|1-\frac{f(p)}{p^{\frac{1}{2}+\frac{4V}{\log x}+it}}|^{-2}]dt.\end{split} \tag{8.5}\]
We first upper bound the expectation over primes in \(\mathcal{P}_{1}\cup\mathcal{P}_{3}\) uniformly over all \(t\). By using independence between \(f(p)\) and (3.2), we can bound it as
\[\ll\exp\Big{(}\sum_{p\in\mathcal{P}_{1}}\frac{4}{p^{1+\frac{8V}{\log x}}}+ \sum_{p\in\mathcal{P}_{3}}\frac{4}{p^{1+\frac{8V}{\log x}}}\Big{)}. \tag{8.6}\]
By simply using the prime number theorem and the definition of \(\mathcal{P}_{1}\) and \(\mathcal{P}_{3}\), one has that both sums in (8.6) are \(\ll 1\) so that (8.6) is \(\ll 1\), where we remind readers that \(B\) is a fixed constant. Now our task is reduced to establishing the following
\[\int_{|t|\leq\frac{1}{\log R}}\mathbb{E}[\mathbf{1}_{L(0)}\mathbf{1}_{L(t)} \prod_{p\in\mathcal{P}_{2}}|1-\frac{f(p)}{p^{\frac{1}{2}+\frac{4V}{\log x}}}| ^{-2}|1-\frac{f(p)}{p^{\frac{1}{2}+\frac{4V}{\log x}+it}}|^{-2}]dt\ll\Big{(} \frac{\log x}{V\log R}\Big{)}^{2}. \tag{8.7}\]
Our strategy is, roughly speaking, to use the barrier event \(\mathbf{1}_{L(t)}\) to bound certain partial products involving \(t\) directly, and then to use the mean square calculation to handle the remaining products. The exact partial products to which we apply the barrier events depend on the size of \(t\).
We first handle a simple case, which disposes of very small \(t\), say \(|t|<V/\log x\). We use the condition \(\mathbf{1}_{L(t)}\) and pull out the factors related to \(L(t)\) to get that the contribution from \(|t|<V/\log x\) is at most
\[\begin{split}&\ll\int_{|t|\leq\frac{V}{\log x}}e^{2c\sqrt{\log \log x-\log\log R}}\cdot(\frac{\log x}{V\log R})^{2}\cdot\mathbb{E}[\mathbf{1} _{L(0)}\prod_{p\in\mathcal{P}_{2}}|1-\frac{f(p)}{p^{\frac{1}{2}+\frac{4V}{\log x }}}|^{-2}]dt\\ &\ll\frac{V}{\log x}\cdot e^{2c\sqrt{\log\log x-\log\log R}} \cdot\Big{(}\frac{\log x}{V\log R}\Big{)}^{2}\cdot\mathbb{E}[\prod_{p\in \mathcal{P}_{2}}|1-\frac{f(p)}{p^{\frac{1}{2}+\frac{4V}{\log x}}}|^{-2}]\\ &\ll\Big{(}\frac{\log x}{V\log R}\Big{)}^{2},\end{split}\]
where in the second last step we dropped the \(\mathbf{1}_{L(0)}\) condition, and in the last step we applied (3.1) together with \(\log R\geq\exp(4c\sqrt{\log\log x})\) where \(c\) is defined in (6.4). Thus we only need to establish the following
\[\int_{\frac{V}{\log x}\leq|t|\leq\frac{1}{\log R}}\mathbb{E}[\mathbf{1}_{L(0)} \mathbf{1}_{L(t)}\prod_{p\in\mathcal{P}_{2}}|1-\frac{f(p)}{p^{\frac{1}{2}+ \frac{4V}{\log x}}}|^{-2}|1-\frac{f(p)}{p^{\frac{1}{2}+\frac{4V}{\log x}+it}}| ^{-2}]dt\ll\Big{(}\frac{\log x}{V\log R}\Big{)}^{2}. \tag{8.8}\]
We now enter the crucial part where we will apply the barrier events according to the size of \(|t|\). We _decompose the set \(\mathcal{P}_{2}\) into two parts according to \(|t|\)_. For each fixed \(V/\log x\leq|t|\leq 1/\log R\), we write
\[\mathcal{P}_{2}=\mathcal{S}(t)\cup\mathcal{M}(t),\]
where
\[\mathcal{S}(t):=\{p\text{ prime}:x^{e^{-(\lfloor\log\log x-\log\log R\rfloor-B-2)}} \leq p\leq e^{\frac{V}{|t|}}\},\]
and
\[\mathcal{M}(t):=\{p\text{ prime}:e^{\frac{V}{|t|}}\leq p\leq x^{e^{-(\lfloor \log V\rfloor+3)}}\}.\]
The primes in \(\mathcal{S}(t)\) are those to which we apply the barrier events, while the contribution from \(\mathcal{M}(t)\) is estimated by a mean square calculation. Note that for \(p\in\mathcal{M}(t)\) there is a nice decorrelation, as needed in (3.3), because \(p\geq e^{V/|t|}\). Let us now see how such a decomposition of \(\mathcal{P}_{2}\) helps us. We use the local notation
\[G(p,t):=|1-\frac{f(p)}{p^{\frac{1}{2}+\frac{4V}{\log x}+it}}|^{-2}.\]
Then the quantity in (8.8) is the same as
\[\int_{\frac{V}{\log x}\leq|t|\leq\frac{1}{\log R}}\mathbb{E}[\mathbf{1}_{L(0)} \mathbf{1}_{L(t)}\prod_{p\in\mathcal{P}_{2}}G(p,0)\prod_{p\in\mathcal{S}(t)}G (p,t)\prod_{p\in\mathcal{M}(t)}G(p,t)]dt.\]
We apply the barrier events condition \(\mathbf{1}_{L(t)}\) to bound the product over \(p\in\mathcal{S}(t)\) so that the above is at most
\[\ll\Big{(}\frac{V}{\log R}\Big{)}^{2}\cdot e^{2c\sqrt{\log\log x-\log\log R}} \cdot\int_{\frac{V}{\log x}\leq|t|\leq\frac{1}{\log R}}\frac{1}{t^{2}} \mathbb{E}[\mathbf{1}_{L(0)}\prod_{p\in\mathcal{P}_{2}}G(p,0)\prod_{p\in \mathcal{M}(t)}G(p,t)]dt. \tag{8.9}\]
We next upper bound the expectation in (8.9) uniformly for all \(V/\log x\leq|t|\leq 1/\log R\). We first drop the indicator function and rewrite the product based on the independence between \(f(p)\) to derive that
\[\mathbb{E}[\mathbf{1}_{L(0)}\prod_{p\in\mathcal{P}_{2}}G(p,0)\prod_{p\in \mathcal{M}(t)}G(p,t)]\leq\mathbb{E}[\prod_{p\in\mathcal{S}(t)}G(p,0)]\cdot \mathbb{E}[\prod_{p\in\mathcal{M}(t)}G(p,0)G(p,t)].\]
Use the mean square calculation results in (3.1) and (3.3) to further get an upper bound on the expectation
\[\ll\frac{V/|t|}{\log R}\cdot\Big{(}\frac{t\log x}{V^{2}}\Big{)}^{2}\ll\frac{ |t|(\log x)^{2}}{V^{3}\log R}.\]
Now we plug the above bound to (8.9) to get that (8.9) is crudely bounded by
\[\Big{(}\frac{\log x}{\log R}\Big{)}^{2}\cdot\frac{e^{2c\sqrt{\log\log x-\log \log R}}}{V\log R}\cdot\int_{\frac{V}{\log x}\leq|t|\leq\frac{1}{\log R}} \frac{1}{|t|}dt\ll\Big{(}\frac{\log x}{\log R}\Big{)}^{2}.\]
In the last step we used that \(\log R\geq\exp(4c\sqrt{\log\log x})\) where \(c\) is defined in (6.4). This completes the proof of (8.8) and thus the proof of the proposition.
## 9. Concluding remarks
### Typical behavior and small perturbations
We give a sketch of the situation when \(a(n)\) itself is independently and randomly chosen. We write
\[a(n)=r(n)X(n) \tag{9.1}\]
where \(r(n)>0\) is deterministic and \(X(n)\) are independently distributed with \(\mathbb{E}[|X(n)|^{2}]=1\). We may naturally assume that there is some \(r\) such that
\[r(n)\asymp r(m)\asymp r\]
for all \(n,m\), i.e. no particular random variable dominates the whole sum in size. One may also just assume \(r=1\) throughout the discussion here. We claim that for typical \(X(n)\), the random sums satisfy the sufficient condition established in [39, Theorem 3.1] for having a Gaussian limiting distribution.
The key condition one needs to verify is that almost surely (with respect to the randomness of \(X(n)\)), we have
\[R_{N}(\mathbf{a}):=\sum_{\begin{subarray}{c}m_{i},n_{j}\leq N\\ m_{i}\neq n_{j}\\ m_{1}m_{2}=n_{1}n_{2}\end{subarray}}a(n_{1})a(n_{2})\overline{a(m_{1})a(m_{2}) }=o(r^{4}N^{2}). \tag{9.2}\]
The proof of (9.2) is straightforward. By the divisor bound, there are \(\ll N^{2+\varepsilon}\) quadruples \((m_{1},m_{2},n_{1},n_{2})\) under the summation. If we expect some square-root cancellation among \(a(n_{1})a(n_{2})\overline{a(m_{1})a(m_{2})}\), then \(R_{N}(\mathbf{a})\) above should typically be around \(r^{4}N^{1+\varepsilon}\). Indeed, using the fact that all \(a(n)\) are independent, we have the \(L^{2}\) bound
\[\mathbb{E}[|R_{N}|^{2}]=\mathbb{E}[R_{N}\overline{R_{N}}]\ll r^{8}N^{2+ \varepsilon}.\]
This implies that, almost surely (with respect to the randomness of \(X(n)\)), we have
\[R_{N}(\mathbf{a})=o(r^{4}N^{2}).\]
Consequently, by [39, Theorem 3.1], almost surely we have a central limit theorem for the weighted random partial sums of a Steinhaus random multiplicative function. See [6, Theorem 1.2] for a closely related result, where the method of moments is used.
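As a small illustration of (9.2) (a sketch only, taking \(r(n)\equiv 1\) and \(X(n)\) independent uniform random phases), one can compute \(R_{N}(\mathbf{a})\) by brute force, grouping ordered pairs by their product and keeping only the off-diagonal quadruples; the observed \(|R_{N}(\mathbf{a})|\) is far smaller than \(r^{4}N^{2}=N^{2}\).

```python
import numpy as np
from collections import defaultdict

N = 300
rng = np.random.default_rng(2)
a = np.exp(2j * np.pi * rng.random(N + 1))      # a(n) = X(n) with r(n) = 1; index 0 unused

pairs_by_product = defaultdict(list)            # group (n1, n2) with n1, n2 <= N by n1 * n2
for n1 in range(1, N + 1):
    for n2 in range(1, N + 1):
        pairs_by_product[n1 * n2].append((n1, n2))

R_N = 0j
for pairs in pairs_by_product.values():
    for n1, n2 in pairs:
        for m1, m2 in pairs:
            if m1 != n1 and m1 != n2 and m2 != n1 and m2 != n2:   # off-diagonal quadruples only
                R_N += a[n1] * a[n2] * np.conj(a[m1] * a[m2])

print(abs(R_N), N ** 2)     # |R_N(a)| is typically much smaller than N^2
```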
In Question 1.1, we asked if it is possible to characterize the choices of \(a(n)\) that give better than square-root cancellation. On one hand, as discussed above, we know for typical \(a(n)\), there is just square-root cancellation. On the other hand, if \(a(n)\) is a deterministic multiplicative function taking values on the unit circle, then by the fact that \(a(n)f(n)\) has the same distribution as \(f(n)\) and the result established by Harper (1.1), the partial sums \(\sum_{n\leq N}a(n)f(n)\) have better than square-root cancellation. Our main theorems study one particular example of multiplicative nature. Combining these observations, we believe that _any small perturbation coming from \(a(n)\) that destroys the multiplicative structure would make the better than square-root cancellation in (1.1) disappear._ We ask the following question in a vague way as a sub-question of Question 1.1.
**Question 9.1**.: _Is it true that the only "essential choice" of \(a(n)\) leading to better than square-root cancellation is of multiplicative nature?_
### Threshold in other settings and the limiting distribution
The main theorems of this paper prove that there is square-root cancellation for \(\log\log R\gg(\log\log x)^{\frac{1}{2}}\). What is the limiting distribution then? We have remarked earlier that one may establish a central limit theorem when \(R\gg\exp((\log x)^{c})\) for some constant \(c<1\) by understanding the corresponding multiplicative energy. It becomes less clear for smaller \(R\).
**Question 9.2**.: _What is the limiting distribution of \(\sum_{n\in\mathcal{A}_{R}(x)}f(n)\) with "proper" normalization, for all ranges of \(R\)?_
We finally comment that there is another family of partial sums that naturally exhibits the threshold behavior for better than square-root cancellation. Let \(\mathcal{A}=[x,x+y]\) with \(y\leq x\). We would like to know for what range of \(y\) we typically have
\[\sum_{x\leq n\leq x+y}f(n)=o(\sqrt{y}).\]
We believe one can adapt the argument here to find that the threshold behavior is around \(\log(x/y)\approx\sqrt{\log\log x}\). It is certainly interesting to understand the limiting distribution for the short interval case thoroughly, beyond the previous result in [39].
|
2310.01123
|
Impact of Economic Uncertainty, Geopolitical Risk, Pandemic, Financial &
Macroeconomic Factors on Crude Oil Returns -- An Empirical Investigation
|
This study aims to use simultaneous quantile regression (SQR) to examine the
impact of macroeconomic and financial uncertainty including global pandemic,
geopolitical risk on the futures returns of crude oil (ROC). The data for this
study is sourced from the FRED (Federal Reserve Economic Database) economic
dataset; the importance of the factors have been validated by using variation
inflation factor (VIF) and principal component analysis (PCA). To fully
understand the combined effect of these factors on WTI, study includes
interaction terms in the multi-factor model. Empirical results suggest that
changes in ROC can have varying impacts depending on the specific period and
market conditions. The results can be used for informed investment decisions
and to construct portfolios that are well-balanced in terms of risk and return.
Structural breaks, such as changes in global economic conditions or shifts in
demand for crude oil, can cause return on crude oil to be sensitive to changes
in different time periods. The uniqueness of this study also lies in
its inclusion of explanatory factors related to the pandemic, geopolitical
risk, and inflation.
|
Sarit Maitra
|
2023-10-02T11:55:01Z
|
http://arxiv.org/abs/2310.01123v2
|
Impact of Economic Uncertainty, Geopolitical Risk, Pandemic, Financial & Macroeconomic Factors on Crude Oil Returns: An Empirical Investigation
###### Abstract
This study aims to use simultaneous quantile regression (SQR) to examine the impact of macroeconomic and financial uncertainty including global pandemic, geopolitical risk on the futures returns of crude oil (ROC). The data for this study is sourced from the FRED (Federal Reserve Economic Database) economic dataset; the importance of the factors have been validated by using variation inflation factor (VIF) and principal component analysis (PCA). To fully understand the combined effect of these factors on WTI, study includes interaction terms in the multi-factor model. Empirical results suggest that changes in ROC can have varying impacts depending on the specific period and market conditions. The results can be used for informed investment decisions and to construct portfolios that are well-balanced in terms of risk and return. Structural breaks, such as changes in global economic conditions or shifts in demand for crude oil, can cause return on crude oil to be sensitive to changes in different time periods. The uniqueness of this study also lies in its inclusion of explanatory factors related to the pandemic, geopolitical risk, and inflation.
_Keywords: crude-oil; quantile-regression; pandemic; geopolitical risk; macroeconomic; variation inflation factor;_
## 1 Introduction
Crude oil is the most traded commodity globally, with West Texas Intermediate (WTI) and North Sea Brent crude (Brent) being the most widely used benchmarks. The global oil market is valued at over $1.7 trillion, making it crucial for constructing an ideal portfolio (Nasir et al., 2018; Sarwar et al., 2019). However, uncertainties such as supply and demand, geopolitical events, and economic conditions affect the Return on Crude Oil (ROC) from an investment perspective. There is evidence that the efficient market hypothesis fails in most energy markets (Liu & Lee, 2018). The price of crude oil directly or indirectly impacts all aspects of the economy, and energy market shocks can disrupt the economy and financial system.
The relationship between crude oil price (COP) and macroeconomic variables is influenced by factors such as global economic policy uncertainty (GEPU), geopolitical risk (GPR), global pandemic effect (WUPI), and global price of energy index (GPE). COP remains a significant determinant of economic factors, including inflation rates, which affect the global economy (Tiwari et. al., 2019). The impact of WUPI on COP can be influenced by various variables, such as consumption habits and global supply system disruptions. Understanding the relationship between ROC, excess market return, volatility index, inflation, WUPI, and GPR is crucial for developing hedging methods (Khan et al., 2017; Ferrer et al., 2018). The distribution of ROC can be heterogeneous due to distinct risk-return profiles of oil investments. However, there is limited research on the asymmetric effects of uncertainty and the distributional heterogeneity of return on crude oil.
This research provides a fresh perspective on understanding the risk and return characteristics of crude oil stocks in volatile market environments. It considers distributional heterogeneity and asymmetry of independent variables, offering a new perspective on how these uncertainties affect the ROC collectively. The study also evaluates the indirect effects of pandemics on COP due to macroeconomic changes. The multifactor analysis on ROC is relevant, considering macroeconomic factors like GPR, WUPI, and VIX. The asymmetric quantile approach allows for more flexible and
nuanced analysis. The study can help calculate potential return on investment for refinery projects under various market situations.
The following sections of the study will detail the research methodology and data sets, followed by a discussion of the key empirical findings in the literature. In Section 5, the various statistical tests are covered. The discussion and empirical findings are provided in Section 6.
## 2 Literature review
A growing body of academic work on the relationship between macroeconomic factors and the ROC exists (e.g., McMillan et. al., 2021; Aravind and Nayar, 2019; Hamdi et. al., 2019 etc.). Some studies have used time-varying asymmetric quantile regression methods to examine this relationship (e.g., Dawar et. al., 2021; Xiao and Wang, 2022; Mokni, 2020 etc.), while others have used different statistical approaches like Markov Regime Switching, Vector Auto Regression etc. (e.g., Mahmoudi and Ghaneei, 2022; Golitsis et. al., 2022 etc.).
Several recent studies have examined macroeconomic factors in the context of ROC, e.g., Aravind and Nayar (2019) and Bredin et al. (2021). Macroeconomic conditions and the price of oil have a long-term dynamic relationship (Aravind and Nayar, 2019). The important work of Fama (1990) demonstrated that INDPRO had a beneficial impact on future cash flows and thus market returns. These studies have contributed to the literature by providing insights into the underlying dynamics of the relationship between COP and macroeconomic variables. However, most of the studies have been limited to 2-factor or 4-factor analysis (e.g., Cedic et al., 2021; Shahzad et al., 2021; Bahloul and Amor, 2021; McMillan et al., 2021), with few studies using more comprehensive models that introduce more factors (e.g., Zhang and Hamori, 2022; Ghosh, 2022; Aravind and Nayar, 2019). Moreover, macroeconomic factors are not the only drivers of ROC; many other external factors, such as GPR, WUPI, and GEPU, can directly or indirectly influence the ROC.
Researchers have investigated GPR to examine the effects of global tension, friction, and conflict on the oil-stock markets associations (e.g., Antonakakis et. al., 2017; Wang et. al., 2021; Plakandaras et. al., 2019). Researchers have shown that unexpected and natural events such as pandemics can impact investors' sentiments and affect risk-taking behavior (Shaikh, 2022; Kaplanski and Levy, 2010). GPR is a significant indicator that can contribute to a climate of uncertainty and affect economic performance and asset markets (Drakos and Kallandranis, 2015; Schneider and Troeger, 2006). Indeed, the oil market index can be severely affected by the GPR & WUPI but mostly reported short-term instability. Even though stock market responses to global crises are frequently unfavorable, GPR can offer useful data regarding oil volatility and can offer the greatest potential for financial benefits (Liu et al., 2019). Crude oil prices are expected to rise in an atmosphere where there is both market instability and inflation. Oil can become more expensive for buyers using a currency whose value has fallen due to inflation. Increased oil prices can also be a result of choppy markets, which are marked by swings and uncertainty. This is because traders and investors may be more willing to pay more for a commodity in an unstable market.
Economic policy uncertainty (GEPU) can create uncertainty in the market and affect the demand for crude oil (Lei et. al., 2019). This increased market volatility could create opportunities for investors to buy and sell crude oil as prices fluctuate. Furthermore, as investors look to safeguard their money from economic uncertainty, GEPU may result in an increase in demand for crude oil as a safe-haven asset which could result in greater crude oil prices and earnings (Olayeni et. al., 2020; Mensah et. al., 2017). Our study aims to examine the combined impact of a range of factors on ROC, which can provide valuable information for risk management and portfolio optimization.
An increasing corpus of research suggests a non-linear relationship between oil prices and economies, even though most studies focus primarily on linear models (e.g., Salisu et al., 2019; Pan et al., 2017; Le and Chang, 2013). According to Beckmann and Czudaj (2013), nonlinearities may result from significant oil price shocks brought on by external variables, discrete regime transitions, or the fundamentally nonlinear nature of the data-generating process (Alqaralleh, 2020). In the currently accessible literature there is no agreement on the most effective methods for performing multifactor analyses and diagnostic tests for WTI excess returns, which may be the cause of the existing literature's lack of specificity. In line with statistical theory, we also allow for moderation, which arises when the effect of one variable depends on the level of another, interacting variable. Our current work contributes to the ongoing discussion of the relationship between ROC, WUPI, GPR and INFLATION by introducing interaction terms.
There are gaps in the literature when it comes to understanding the risk and return characteristics of crude oil stocks in relation to macroeconomic variables, WUPI, GPR, and GEPU, even though there have been extensive empirical studies devoted to the relationship between oil prices and macroeconomic variables. While previous studies such as
McMillan et. al. (2021), Aravind and Nayar (2019) and Hamdi et. al. (2019) focused on the relationship between oil prices and macroeconomic variables, and how they affect the volatility of oil prices, they have not fully explored the implications of these findings on the risk and return characteristics of crude oil stocks. This gap in the literature is significant because understanding the risk and return characteristics of crude oil stocks is important for investors, as it can inform investment decisions and portfolio construction. Additionally, understanding the factors that drive the return of crude oil stocks and how they vary across different quantiles of the return distribution can provide valuable insights into the underlying dynamics of the oil market and potential risks and opportunities.
Our study uses SQR to examine the relationships among the variables discussed above and to close this gap in the literature. This research offers a fresh viewpoint on how to understand the risk and return characteristics of crude oil stocks in various market environments. It builds upon the work of Fung and Hsieh (2004) and Jurek and Stafford (2015) on the methodological difficulties of applying conventional models, by using a multivariate approach to analyze the relationship between COP and various macroeconomic variables, the global pandemic (WUPI), GPR, and GEPU. By considering these additional factors, our study aims to provide a more comprehensive and accurate understanding of the factors that drive the ROC and of how they vary across different quantiles of the return distribution. This can provide valuable insights into the underlying dynamics of the oil market and potential risks and opportunities.
## 3 Model and Econometric Approach
The SQR measures both upper and lower tail dependence in addition to the average or linear dependence between the variables. As a result, the estimated effects of the conditioning variables on the dependent variable are more exact and precise (Koenker and Ng, 2005). The model can be mathematically formulated as in Eq. (1), letting y be a dependent variable that is assumed to depend linearly on x.
\[Q_{y}(\tau\mid x)=\inf\{b:F_{y}(b\mid x)\geq\tau\}=\sum_{k}\beta_{k}(\tau)x_{k}=x^{\prime}\beta(\tau)\qquad\text{-- Equation 1}\]
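For illustration, a conditional-quantile model of this form can be estimated with off-the-shelf tools. The sketch below is not the study's actual estimation; it uses synthetic placeholder data and the statsmodels quantile-regression estimator to recover \(\beta(\tau)\) at several quantiles.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data standing in for the monthly factor series.
rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "SP": rng.normal(size=n),
    "VIX": rng.normal(size=n),
    "GPR": rng.normal(size=n),
})
df["ROC"] = 0.4 * df["SP"] - 0.2 * df["VIX"] + 0.1 * df["GPR"] + rng.normal(size=n)

# Estimate beta(tau) at several quantiles, as in Eq. (1).
model = smf.quantreg("ROC ~ SP + VIX + GPR", df)
for tau in [0.1, 0.25, 0.5, 0.75, 0.9]:
    fit = model.fit(q=tau)
    print(tau, fit.params.round(3).to_dict())
```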
To estimate betas, five years of monthly data of the selected variables were collected from FRED Economic Data2. The selection of variables and data sources are summarized in Table 1. This study uses yield data for the crude oil data as it represents the changes in intrinsic value and provide additional insights. Following consideration of the representativeness, transparency, and consistency of the crude oil data, the WTI crude oil spot price3 was chosen as the crude oil price variable. The SPY index tracks the S&P 500 index, which is made up of 500 large- and mid-cap US stocks and acts as one of the main benchmarks of the US equity market. The SPY4 is utilized in this article to both symbolise the financial stability and health of the US economy and to capture shocks to the US stock market. Table (1) presents selected variables for the study.
Footnote 2: FRED data obtained from [https://fred.stlouisfed.org/](https://fred.stlouisfed.org/)
Footnote 3: WTI data obtained from [https://www.cia.gov/dnav/pet/pet_pri_spt_s1_d.htm](https://www.cia.gov/dnav/pet/pet_pri_spt_s1_d.htm)
Footnote 4: SPY data obtained from [https://finance.yahoo.com/quote/SPY/history/](https://finance.yahoo.com/quote/SPY/history/)
Footnote 5: GPR data obtained from [https://www.matteoiaocyiello.com/gpr.htm](https://www.matteoiaocyiello.com/gpr.htm)
Results show that extreme negative returns tend to be larger in absolute value than positive ones, which may be attributed to the inclusion of crisis periods, such as the pandemic and elevated GPR, during the sampling period. High interest rates can make investments less attractive by reducing the present value of future cash flows. As such, economic theory predicts that an increase in interest rates will result in a decline in stock values. The US Treasury bill rate is considered here as the risk-free interest rate.
### Econometric approach
The technique of this study can be divided into two stages: the first stage examines the excess return over time, and the second stage analyses the excess return's cross-section components. Our study's major presumptions are that markets are efficient, events cannot be predicted, and time is affected exogenously.
Eq. (3) can be extended to Eq. (4) to discuss the excess return on WTI.
\[R_{wti,t}=\alpha+\beta_{M}R_{M,t}+\beta_{1}SP_{t}+\beta_{2}SPREAD_{t}+\beta_{3}INDPRO_{t}+\beta_{4}INFLATION_{t}+\beta_{5}UNRATE_{t}+\beta_{6}M1SL_{t}+\beta_{7}CCUS_{t}+\beta_{8}VIX_{t}+\beta_{9}GPR_{t}+\beta_{10}WUPI_{t}+\beta_{11}GPE_{t}+\beta_{12}GEPU_{t}+e_{t}\qquad\text{-- Equation 4}\]
Here, \(R_{wti,t}\) denotes the excess ROC; the risk-free rate, taken here as the 3-month US Treasury bill rate, was deducted from the continuously compounded returns to transform the WTI returns into excess returns. \(SP_{t}\) is the excess market return, \(SPREAD_{t}\) is the 5-year minus 3-month Treasury yield spread, and the remaining variables are as shown in Table (1).
\begin{table}
\begin{tabular}{|l|l|l|} \hline Factors & Characterization variable & Abbreviation \\ \hline U.S. Treasury Securities at 3-Month Constant Maturity & DGS3MO index & DGS3MO \\ \hline U.S. Treasury Securities at 5-year Constant Maturity & DGS5 index & DGS5 \\ \hline Industrial Production: Total Index & INDPRO index & PROD \\ \hline Consumer Price Index for All Urban Consumers & CPIAUCSL & INFLATION \\ \hline Unemployment rate & UNRATE index & UNRATE \\ \hline Narrow money supply & M1SL index & M1SL \\ \hline Change in exchange rate & CCUSMA02EZM618N index & CCU \\ \hline Standard \& Poor’s Depositary Receipts (SPDR) S\&P 500 exchange traded fund (ETF) & SPY index & SPY \\ \hline CBOE Market Volatility Index & VIX index & VIX \\ \hline Geopolitical Risk Index & GPR data & GPR5 \\ \hline Global price of Energy index & PNRGINDEXM index & GPE \\ \hline World Pandemic Uncertainty Index & WUPI index & WUPI \\ \hline Global economic policy uncertainty & GEPU index & GEPU \\ \hline International crude oil price & WTI crude oil spot price & WTI \\ \hline \end{tabular} Note: Given S&P’s reduction of the nation’s credit rating in 2011, the common perception that U.S. treasury securities are devoid of credit risk may be debatable; nonetheless, that subject is outside the purview of this study.
\end{table}
Table 1: Variable selection & data source
Since the joint effect of WUPI and the aggravation of GPR has not been evaluated, model (4) may be underspecified. Therefore, model (5), which augments model (4) by adding an interaction term for the WUPI-GPR nexus, may be more applicable. According to past research (Bodie et al., 2010; Zaremba et al., 2020), Eq. (5) explains the excess ROC in a multivariate framework, and Eq. (6) addresses the relationship between GPR and INFLATION. Table (2) justifies the considered variables.
\[R_{wti,t}=\alpha+\beta_{M}R_{M,t}+\beta_{1}SP_{t}+\beta_{2}SPREAD_{t}+\beta_{3}INDPRO_{t}+\beta_{4}INFLATION_{t}+\beta_{5}UNRATE_{t}+\beta_{6}M1SL_{t}+\beta_{7}CCUS_{t}+\beta_{8}VIX_{t}+\beta_{9}GPR_{t}+\beta_{10}WUPI_{t}+\beta_{11}GPE_{t}+\beta_{12}GEPU_{t}+\beta_{13}(GPR_{t}\times WUPI_{t})+\varepsilon_{t}\qquad\text{-- Equation 5}\]
\[R_{wti,t}=\alpha+\beta_{M}R_{M,t}+\beta_{1}SP_{t}+\beta_{2}SPREAD_{t}+\beta_{3}INDPRO_{t}+\beta_{4}INFLATION_{t}+\beta_{5}UNRATE_{t}+\beta_{6}M1SL_{t}+\beta_{7}CCUS_{t}+\beta_{8}VIX_{t}+\beta_{9}GPR_{t}+\beta_{10}WUPI_{t}+\beta_{11}GPE_{t}+\beta_{12}GEPU_{t}+\beta_{13}(GPR_{t}\times INFLATION_{t})+\varepsilon_{t}\qquad\text{-- Equation 6}\]
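A sketch of how the interaction terms in Eqs. (5) and (6) can be incorporated in the same quantile-regression framework (placeholder data and a reduced regressor set, not the study's dataset; in the formula interface, `GPR:WUPI` denotes the interaction term):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "SP": rng.normal(size=n),
    "GPR": rng.normal(size=n),
    "WUPI": rng.normal(size=n),
    "INFLATION": rng.normal(size=n),
})
df["ROC"] = 0.3 * df["SP"] + 0.05 * df["GPR"] * df["WUPI"] + rng.normal(size=n)

# Eq. (5): GPR x WUPI interaction; Eq. (6): GPR x INFLATION interaction (reduced regressor set).
eq5 = smf.quantreg("ROC ~ SP + GPR + WUPI + GPR:WUPI", df).fit(q=0.5)
eq6 = smf.quantreg("ROC ~ SP + GPR + INFLATION + GPR:INFLATION", df).fit(q=0.5)
print(eq5.params["GPR:WUPI"], eq6.params["GPR:INFLATION"])
```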
## 4 Method
For each of the variables, a series of modifications must be made. This is based on the Arbitrage Pricing Theory (APT), which contends that unanticipated changes in macroeconomic factors, rather than their levels, can be used to explain stock returns. The study makes the naive assumption that investors' expectations for the future value of the variables will remain unchanged. Thus, the unforeseen change is the overall variation in the variable from one period to the next. Eq. (7) displays the calculation of the monthly logarithmic excess returns for WTI, where the 3-month U.S. Treasury rate is used as the risk-free rate.
\[ER_{wti(t)}=\ln\left(\frac{P_{wti(t)}}{P_{wti(t-1)}}\right)-r_{f}\qquad\text{-- Equation 7}\]
In Eq. (7), \(ER_{wti(t)}\) is the excess return of WTI at time \(t\), \(P_{wti(t)}\) is the price of WTI at time \(t\), \(P_{wti(t-1)}\) is the price of WTI at time \(t-1\), and \(r_{f}\) is the 3-month U.S. Treasury rate. The monthly yield on a three-month U.S. T-bill is subtracted from the continuously compounded returns on the WTI index to determine monthly excess returns. The macroeconomic factors, which serve as the predictors, are expressed as log changes of the data. VIX and SP are expressed in levels. We use Eq. (8) to calculate the log changes:
\[VIX_{t}=\ln\left(\frac{VIX_{t}}{VIX_{t-1}}\right)\ \text{and}\ \ SP_{t}=\ln\left(\frac{SP_{t}}{SP_{t-1}}\right)-r_{f}\qquad\text{-- Equation 8}\]
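A sketch of the transformations in Eqs. (7) and (8) on a toy monthly series (column names and values are placeholders; `DGS3MO` is quoted in percent per annum and converted to an approximate monthly risk-free rate):

```python
import numpy as np
import pandas as pd

# Toy monthly levels; columns mirror the variable names used in the text.
df = pd.DataFrame({
    "WTI": [70.0, 74.2, 68.9, 80.1],
    "SPY": [400.0, 410.5, 395.2, 420.0],
    "VIX": [18.0, 16.5, 22.1, 19.0],
    "DGS3MO": [4.8, 4.9, 5.0, 5.1],   # 3-month T-bill rate, percent per annum
})

rf = df["DGS3MO"] / 100 / 12                                   # approximate monthly risk-free rate
df["ER_wti"] = np.log(df["WTI"] / df["WTI"].shift(1)) - rf     # Eq. (7): excess WTI return
df["SP"] = np.log(df["SPY"] / df["SPY"].shift(1)) - rf         # Eq. (8): excess market return
df["dVIX"] = np.log(df["VIX"] / df["VIX"].shift(1))            # Eq. (8): log change in VIX
print(df.round(4))
```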
tail (Xiao et al., 2015). Negative skewness (Fig. (7)) suggests that the distribution has a longer left tail, even though the positive mean indicates favourable average yields for investors. Additionally, the dependent variable is typically assumed to be independently distributed and homoscedastic. Large excess kurtosis coefficients, i.e., leptokurtosis, signal the presence of outliers and indicate that there have been numerous large price changes in the past (either positive or negative) away from the average returns on the investment. Almost 50% of the data set covers the 2020-21 pandemic crisis and a post-crisis period, which becomes apparent from the large standard deviations associated with some of the variables. For all series, the JB test statistics reject the null hypothesis (H\({}_{0}\)) of a normal distribution at the 5% significance level.
Considering the minimum values, the lowest in this range is UNRATE with a minimum value of -131.38. GEPU is much more dispersed than the other variables, with a standard deviation of 43.40; closely following are GPR with 30.93, UNRATE with 21.29 and MONEY with 20.32. Skewness is negative for SP, CURRENCY, MONEY, UNRATE, INFLATION, GPR & SPREAD but positive for INDPRO, PANDEMIC, GPE, VIX & GEPU. Most of these factors show excess kurtosis. The last concern in this type of multifactor modelling study is the incidence of multicollinearity. The variance inflation factors (VIF) are displayed; in no case does the VIF for any of the factors even come close to the critical value (VIF \(>\) 5) (Hair et al., 2017). This suggests that multicollinearity, while present, is not too much of a problem. Nevertheless, to develop a new coordinate system aligned with the largest variation in the data, Principal Component Analysis (PCA) was carried out. The results are displayed in the next section.
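A sketch of this multicollinearity screening (VIF per factor, then PCA on the standardized factor matrix); the factor matrix `X` below is a random placeholder, not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = pd.DataFrame(rng.normal(size=(60, 4)), columns=["SP", "VIX", "GPR", "WUPI"])

# VIF_k = 1 / (1 - R_k^2), computed with a constant term included.
exog = sm.add_constant(X)
vif = pd.Series(
    [variance_inflation_factor(exog.values, i) for i in range(1, exog.shape[1])],
    index=X.columns,
)
print(vif.round(2))    # values well below 5 suggest multicollinearity is not a serious problem

# PCA on the standardized factors: share of variance captured by each component.
pca = PCA().fit(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_.round(3))
```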
Value at Risk (VaR) was estimated on simple returns (Table 3), representing the worst-case loss at given confidence levels, and CVaR was estimated by averaging the severe losses in the tail of the distribution of WTI returns.
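A sketch of the historical VaR/CVaR calculation underlying Table (3), applied to a placeholder vector of simple monthly returns (the heavy-tailed draw is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
returns = rng.standard_t(df=4, size=60) * 0.08   # placeholder for WTI simple monthly returns

for conf in (0.90, 0.95, 0.99):
    var = np.quantile(returns, 1 - conf)         # historical VaR: lower-tail quantile
    cvar = returns[returns <= var].mean()        # CVaR: average of the losses beyond the VaR
    print(f"{conf:.0%}  VaR={var:.2f}  CVaR={cvar:.2f}")
```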
A quantile normalisation process was performed on the raw data to preserve the genuine variation of interest while removing any undesired variation caused by technical artefacts. Fig. (2) displays the normalized box plot of the dependent variables.
Figure 1: Skewed target distribution
\begin{table}
\begin{tabular}{|c|c|c|} \hline Confidence level & VaR & Conditional VaR \\ \hline
90\% & -0.10 & -0.22 \\ \hline
95\% & -0.13 & -0.30 \\ \hline
99\% & -0.43 & -0.43 \\ \hline \end{tabular}
\end{table}
Table 3: WTI Value at Risk
The proportion of eigenvalues attributed to each component is shown in Fig. (3). This indicates the importance of each component for the analysis.
## 5 Multifactor Quantile estimates
Tables (8-10) report the estimates of the SQR for the ROC. The distribution was divided into nine different quantiles (i.e., \(\tau=0.10-0.90\)) to capture a mix of low, medium, and high return conditions. Values of \(\tau\) that are too close to its limits of 0 and 1 do not usually give a good fit, hence these values were avoided in this analysis. Numerical results are displayed with consideration of the WUPI and GPR. The regression line for \(q=0.50\) (median regression) serves as the counterpart of the OLS regression line. Table (8) reports the regression estimation (Q\({}_{0.5}\)) based on Eq. (4). The diagnostic tests were performed on the conditional median quantile, which is treated here as the baseline regression. The asymmetry in the model can be noticed by contrasting the coefficients across quantiles.
A few parameter estimations, notably those for the "dCURRENCY", "dUNRATE", "dWUPI", "dGPE", "dGPR", and "dSPREAD" variables, are not statistically distinct from zero. The F-test, which adds the predictive power of all independent variables and demonstrates that it is implausible that all the coefficients are equal to zero, was used to test the H\({}_{0}\) that the parameters for these six variables are all zero. These variables do not seem to perform much better or worse than the WTI stock, either. The H\({}_{0}\) that the estimate is different from 0 cannot be disregarded, as shown by the F-test statistic value of 0.911 and the p-value of 0.494. All the variables taken into consideration have a sufficient impact on ROC, as evidenced by the rejection of p-value. However, F-value 0.911 \(<\) critical value 3.44 (Pesaran et al., 2001 lower bound critical value) at 5% significance level implies that H\({}_{0}\) cannot be rejected and there does not existing any long run relation with COP and these variables.
Figure 3: Percentage of Eigenvalues Attributable to Each Component
Figure 2: Boxplot of data after Quantile Normalization
Heteroscedasticity is assessed using the Breusch-Pagan test, for which the H\({}_{0}\) assumes homoskedasticity. Therefore, we reject the H\({}_{0}\) and conclude that heteroskedasticity exists if the p-value is \(<\) 0.05. The presence of heteroscedasticity justifies examining the function at different quantiles. The results show a Lagrange multiplier statistic of 41.56 (p-value 0.00) and an F-value of 7.51 (p-value 0.00); both p-values being \(<\) 0.05 indicates a fundamental problem of heteroscedastic errors. Fig. (3) displays the residual plot; although no clear pattern is visible, the Jarque-Bera (JB) normality test was performed to verify the normality assumption. According to Fig. (4), the pandemic caused an early decline in prices throughout 2020-21, followed by a steep rise as producers reduced supply and demand soared.
The assumption is satisfied because the Durbin Watson's test result of 1.98 indicates that there is no autocorrelation. Following that, a normality test was run on the residuals, with the premise that the model's residuals are normally distributed. Table (4) reports the normality test.
Table (4) reports the test statistic and the \(\chi^{2}\) two-tailed p-value for testing whether both the kurtosis and the skewness are consistent with a normal distribution, that is, whether the residuals are overall normally distributed.
\begin{table}
\begin{tabular}{|l|l|} \hline Jarque-Bera & 3452.12 \\ \hline Chi\({}^{2}\) (\(\chi^{2}\)) two-tail prob. & 0.00 \\ Skew & -5.19 \\ Kurtosis & 36.87 \\ \hline \end{tabular}
\end{table}
Table 4: Jarque-Bera normality test
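A sketch of these residual diagnostics (Breusch-Pagan, Durbin-Watson and Jarque-Bera) using statsmodels; the OLS fit on synthetic data below merely stands in for the study's baseline regression:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson, jarque_bera

rng = np.random.default_rng(5)
X = sm.add_constant(rng.normal(size=(60, 3)))
y = X @ np.array([0.1, 0.4, -0.2, 0.3]) + rng.normal(size=60)
res = sm.OLS(y, X).fit()              # synthetic stand-in for the baseline regression

lm, lm_p, f_stat, f_p = het_breuschpagan(res.resid, X)   # Breusch-Pagan heteroscedasticity test
dw = durbin_watson(res.resid)                            # ~2 indicates no first-order autocorrelation
jb, jb_p, skew, kurt = jarque_bera(res.resid)            # normality of residuals
print(round(lm, 2), round(lm_p, 3), round(dw, 2), round(jb, 2), round(jb_p, 3))
```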
Figure 4: Patterns in the residuals over time
According to the histogram plot in Fig. (5), the distribution of the residuals is bell-shaped; nonetheless, there are several substantial negative outliers that could lead to significant negative skewness. A limited number of large negative residuals, corresponding to monthly WTI price declines of greater than 15% and, most recently, 60%, appear to be to blame. Fig. (6) displays the regression residuals and fitted series. Numerous significant (negative) outliers may be seen in the graph, but the largest one is in 2020.
A table of the residual values was studied to determine the precise dates on which the largest outliers were realised. The first dummy variable was added to capture the COVID outbreak and lockdown effect, and the second dummy variable was added to capture the Ukraine war.
It is evident from Table (5) that the two most extreme residuals were in April 2020 (-55.36) and August 2022 (-12.96). Due to the perfect fit of the dummy variables to the two extremely outlying observations, rerunning the regression with the dummy variables significantly increases the pseudo R\({}^{2}\) value from 0.58 to 0.71. The parameters of the highly
Figure 5: Histogram of residuals
\begin{table}
\begin{tabular}{|c|c|} \hline Date & Smallest residuals \\ \hline \multicolumn{2}{|c|}{Dummy variables with exogeneous variables} \\ \hline
2020-04-01 & -55.36 \\ \hline
2022-08-01 & -12.96 \\ \hline
2020-03-01 & -74.29 \\ \hline
2022-08-01 & 21.45 \\ \hline
2019-01-01 & -44.68 \\ \hline
2022-05-01 & -28.42 \\ \hline \end{tabular}
\end{table}
Table 5: Dummy variables construction.
Figure 6: Regression Residuals and Fitted Series
significant dummy variables in the model correspond to the levels that the pertinent residuals would have attained if the dummy variables had not been employed.
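A minimal pandas sketch of how such event dummies can be constructed is given below; the flagged months follow Table (5) (the April 2020 COVID crash and the August 2022 observation), while the index range and column names are illustrative assumptions.

```python
import pandas as pd

# Hypothetical monthly index covering the sample; the flagged months follow Table (5).
df = pd.DataFrame(index=pd.date_range("2019-01-01", "2022-12-01", freq="MS"))
df["D_covid"] = (df.index == "2020-04-01").astype(int)    # COVID outbreak / lockdown month
df["D_ukraine"] = (df.index == "2022-08-01").astype(int)  # Ukraine-war related outlier month
# These columns are then appended to the regressor matrix before re-estimating the model.
```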
Fig. (7) displays the residuals plot, where it can be observed that the errors follow a normal distribution. This effectively establishes a baseline model for estimating the effect of the event on our target variable. Furthermore, the RESET was used to check for omitted variables and an inappropriate functional form. An F-value of 0.008 and a corresponding p-value of 0.9251 show that we cannot reject the H\({}_{0}\) that the model contains no omitted variables. In other words, there is nothing to suggest that the chosen functional form of the model is flawed. To ascertain whether there is a structural break in the data at any given moment, the CUSUM test (Ploberger & Kramer, 1992) for parameter stability based on OLS residuals was carried out. The date of a structural break is typically unknown in advance; the CUSUM non-parametric method tests for the presence of a change at each possible point in the data rather than specifying the exact date of the change. Table (6) presents the cumulative sum and cumulative sum of squares of recursive residuals used to test the structural stability of the model. The null hypothesis is the absence of any structural break.
Based on the test statistic and associated p-value in Table (6), the H\({}_{0}\) that the coefficients are stable over time can be rejected: the model does contain a structural break within the sample period.
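Both checks are available in statsmodels; notably, the OLS-residual CUSUM critical values reported in Table (6) match the output format of `breaks_cusumolsresid`. The sketch below fits an illustrative synthetic model as a stand-in for the ROC regression, and assumes a statsmodels version in which `linear_reset` is available.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import breaks_cusumolsresid, linear_reset

# Illustrative fit; in the paper `res` would be the OLS model of ROC on the macro-financial factors.
rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(120, 3)))
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(size=120)
res = sm.OLS(y, X).fit()

print(linear_reset(res, power=2, use_f=True))        # Ramsey RESET: H0 = correct functional form / no omitted variables
stat, pval, crit = breaks_cusumolsresid(res.resid)   # OLS-residual CUSUM: H0 = no structural break
print(f"CUSUM statistic={stat:.2f}, p-value={pval:.3f}, critical values={crit}")
```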
### _Causal impact analysis_
In the period following the intervention, the response variable had an average value of 1.36. Without the intervention, we would have anticipated an average response of 3.21. Summing the individual data points of the post-intervention period, the response variable had an overall value of 43.6. Had the intervention not happened, we would have anticipated a total of 116.77 in absolute terms, with a confidence interval of [80.29, 154.44]. The response variable showed a relative decline of -62.7%, with lower and upper bounds of [-94.96, -31.46]. This demonstrates that the detrimental impact seen during the intervention period is statistically significant. Fig. (8) displays the causal impact analysis plot. The Bayesian one-sided tail-area probability of obtaining this result by chance is exceedingly low (p = 0.0), indicating that the causal effect is statistically significant.
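A sketch of this kind of analysis, assuming the `pycausalimpact` package (imported as `causalimpact`), is shown below; the series, intervention dates, and column names are synthetic placeholders, and the paper's actual response, covariates, and intervention window would replace them.

```python
import numpy as np
import pandas as pd
from causalimpact import CausalImpact  # pycausalimpact-style API (assumption)

# Synthetic stand-in: first column is the response, remaining columns are control covariates.
idx = pd.date_range("2015-01-01", "2022-12-01", freq="MS")
rng = np.random.default_rng(3)
x = rng.normal(size=(len(idx), 2)).cumsum(axis=0)
y = 1.2 * x[:, 0] - 0.4 * x[:, 1] + rng.normal(scale=0.5, size=len(idx))
y[idx >= "2020-03-01"] -= 3.0  # injected post-intervention drop for illustration
df = pd.DataFrame({"y": y, "x1": x[:, 0], "x2": x[:, 1]}, index=idx)

ci = CausalImpact(df, ["2015-01-01", "2020-02-01"], ["2020-03-01", "2022-12-01"])
print(ci.summary())
```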
\begin{table}
\begin{tabular}{|l|l|} \hline test statistic & 1.96 \\ \hline p-value & 0.018 \\ \hline Critical values (significance \%, value) & [(1, 1.63), (5, 1.36), (10, 1.22)] \\ \hline \end{tabular}
\end{table}
Table 6: Parameter stability test
Figure 7: Residuals diagnostics
## 6 Empirical results & discussions
The quantile analysis revealed the following intriguing trends. First, the estimated coefficients are significant across all quantiles; they are negative in the lower quantiles across the board for the entire model but positive in the upper quantiles. This upper-tail dependence combined with lower-tail independence shows that the dependence structure is asymmetric.
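The quantile estimates discussed here can be reproduced with statsmodels' `quantreg`. The sketch below is illustrative only: it uses synthetic data and just three of the twelve factors, so the column names are placeholders for the paper's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative frame; in the paper the columns are ROC and the 12 macro-financial factors.
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(120, 4)), columns=["ROC", "SPY", "VIX", "INFLATION"])

for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    fit = smf.quantreg("ROC ~ SPY + VIX + INFLATION", df).fit(q=q)
    print(q, fit.params.round(3).to_dict())
```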
The coefficients for the variables SP, PROD, MONEY, UNRATE, INFLATION, WUPI, GPE, VIX, and GEPU are positive and statistically significant at the 5% level, which implies a substantial impact on ROC when the market is bullish; however, the negative coefficients for SPREAD, CURRENCY, and GPR at the very lowest quantiles imply a prolonged drop in investment during a bearish market. This negative effect is not statistically significant at the 5% level. Table (7) presents a complete discussion of each factor.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{284.5pt}|} \hline SPY & Regarding the control variables, the effect of SPY stocks on WTI stocks is favourable, as anticipated. It is positive and significant for all quantiles, except that it is insignificant for the WUPI and GPR interaction. Such findings are confirmed by Hung \& Vo (2021) and Pal \& Mitra (2019). However, the combined effect of WUPI \& GPR seems to have severe implications for COP volatility, and this combined effect is higher than that of the GEPU (Sharif et al., 2020). \\ \hline PROD & A positive and significant coefficient for the INDPRO variable in the upper tail of ROC suggests that as industrial production increases, ROC are more likely to be in the upper tail (i.e., above the median return). This implies that a strong industrial sector is associated with higher ROC. \\ \hline CCU & A change in CCU is not statistically significant across quantiles but mostly shows positive signs, indicating that, while a change in CCU may have some effect on ROC, it is not a strong determinant. The positive signs suggest that as the CCU changes, the ROC are more likely to rise, but this relationship is not strong enough to be considered definitive. \\ \hline M1SL & A significant and positive sign for M1SL across all the quantiles indicates that as the M1SL expands, so will the ROC. This relationship holds true across all quantiles, implying that as the M1SL expands, ROC are likely to rise regardless of whether they are in the lower, middle, or upper part of the distribution. This suggests that the M1SL is a significant determinant of ROC. Asymmetric relationships between oil prices and money supply have been studied by Bin \& Rehman (2022), who found both long-run and short-run relationships. However, no studies have really emphasized the importance of M1SL from an ROC perspective. \\ \hline UNRATE & The fact that the UNRATE varies in significance across quantiles suggests that the relationship between the UNRATE and the ROC is not consistent across all levels of the return distribution. The varying significance
Figure 8: Causal Impact plot
across quantiles suggests that the relationship between UNRATE and ROC is not straightforward and may be influenced or conditional on other factors.
INFLATION: INFLATION being significant with a positive sign across all quantiles in the quantile regression for ROC implies that as inflation rises, ROC are likely to rise as well, regardless of whether they are in the lower, middle, or upper part of the distribution. This suggests that inflation is a significant determinant of ROC and can be interpreted as a sign that high inflation will lead to high oil prices, which in turn result in higher ROC. This was expected because COP tends to rise in tandem with inflation, so crude oil stocks may serve as an inflation hedge.
WUPI: The fact that WUPI is significant with a positive sign in the quantiles (2-4) suggests that as pandemic uncertainty increases, ROC are more likely to be higher in these quantiles, but this relationship may not hold true for the lower or upper quantiles. WUPI uncertainty being significant with a negative sign in the lower quantiles (2-4) implies that as WUPI uncertainty increases, ROC are more likely to be lower in the lower quantiles (2-4), but this relationship may not hold true for the upper quantiles. The WUPI uncertainty captures the uncertainty about the global economy's future, which can lead to market volatility and lower ROC. When an interaction term between GPR and INFLATION (Model (3)) is included, the WUPI is mostly insignificant, implying that the inclusion of the interaction term has changed the relationship between WUPI and ROC. It could imply that the relationship between WUPI and ROC is dependent on the level of GPR and INFLATION, and that the relationship is insignificant when these factors are considered.
GPE: GPE is positive and significant in upper quantile (Model (1)) which signifies when the GPE is high, WTI stock investors can expect higher returns for the top half of the distribution. This also indicate that WTI is sensitive to the global energy market, and that when energy prices are high, WTI stock returns are expected to rise. When the WUPI and GPR interaction term (Model (2)) is included in the model, the GPE has a positive and significant impact on the across all quantiles. This implies that the relationship between GPE and ROC is influenced by WUPI and GPR. It is likely that during times of high pandemic and geopolitical risk, the GPE becomes a better predictor of ROC regardless of where the return falls in the distribution. With the added interaction term between GPR and INFLATION, the GPE is insignificant across all the quantiles which implies that, the relationship between GPE and ROC may be overshadowed by the effects of GPR and INFLATION.
GEPU: GEPU has a negative but significant impact on ROC in the upper quantiles, implying that as economic policy uncertainty increases, WTI stock returns become more volatile, showing upper-tail dependence. This implies that the negative impact of economic policy uncertainty is more pronounced for higher WTI stock returns. The empirical findings, however, show that the effects of COP shocks and GEPU are asymmetric and closely tied to market circumstances when the interaction terms are considered. Similar findings have been reported by You et al. (2017) and Xiao and Wang (2022).
VIX: The VIX has a significant impact on ROC across the entire return distribution. This means that crude oil returns are strongly correlated with stock market volatility, and that when the stock market is volatile, crude oil returns are likely to be lower. This could be because changes in stock market volatility are frequently driven by changes in economic conditions or investor sentiment, both of which can impact demand for crude oil and thus its price. Furthermore, the presence of a significant relationship across all quantiles in all three models implies that the relationship between VIX and ROC is consistent and not restricted to specific segments of the return distribution. Previous studies indicate that VIX showed the least amount of information disturbance before and during the epidemic on all scales (Lahmiri and Bekiros, 2020).
GPR: GPR showed no significant relation across any quantile which implies that GPR does not affect ROC in a meaningful way. This could be because the crude oil market is immune to geopolitical events, or because the events considered in this study have little impact on the crude oil market. It is also worth noting that the lack of a significant relationship does not necessarily imply that there is no relationship at all; it could mean that the sample size is too small to detect a relationship, or that the events used as proxies for geopolitical risks do not capture the full picture of geopolitical risks. Earlier studies have reported that Geopolitical uncertainties have a short-term impact on oil prices, lasting less than a year (Jiang et. al., 2022). When the WUPI and GPR interaction term is included in the model, geopolitical risk has a significant impact on ROC at the lower end of the return distribution. This implies that when the WUPI is high, GPR has a greater impact on ROC, particularly at lower returns. This means that when there is a high level of global uncertainty, the crude oil market is more vulnerable to geopolitical events, resulting in lower returns. The inclusion of an interaction term in the model may have helped to capture a more nuanced relationship between GPR and ROC that was not evident when only one variable was used.
SPREAD: The spread of Treasury bond interest rates, has no significant impact on ROC across the entire return distribution. This could be because the crude oil market is immune to changes in interest rates, or because the spread of Treasury bond interest rates used in the study does not capture the full range of factors influencing interest rates and thus has little impact on ROC. Furthermore, the lack of a significant relationship across all quantiles suggests that the relationship between SPREAD and ROC is inconsistent or limited to specific segments of the return distribution.
WUPI*GPR: In the lower quantiles (0.1-0.5), there is a significant effect, which indicates tail dependence and a high risk of large losses. This also indicates non-linear relationships and, given the investor sentiment at the time, suggests that there is significant investor pessimism about the likelihood of falling market prices.
Our research has addressed the essential question of how ROC and GPR interact against the perilous backdrop of the pandemic. This study found a bearish impact of the pandemic and GPR on the performance of COP, which opens an opportunity to invest. Our study supports the view that WTI crude oil can be a cheap hedging tool, and that investing a small or larger part of a portfolio in the WTI crude oil futures market can achieve high hedging effectiveness (Dai & Zhu, 2022). Our study also confirms earlier dynamic hedging results which suggest that crude oil futures can provide a profitable hedging opportunity in combination with a green energy index (Ahmad, 2017). The most important finding of this exercise is that the WUPI & GPR impact crude oil stocks differently. The changes in investor opinions on crude oil stock investments are revealed by differences in performance among quantiles. Thus, the WUPI & GPR have not influenced the expectations of investors, and this finding confirms the work of Lahmiri & Bekiros (2020).
Several macroeconomic factors, including global supply and demand, geopolitical developments, and currency fluctuations, have an impact on the COP. For instance, rising demand for oil from quickly industrialising nations like China and India may push up prices, while falling demand brought on by a recession in a big oil-consuming nation like the United States may push down prices. COP can also be impacted by factors such as monetary policy changes, natural disasters, and political unrest in oil-producing nations. Considering this, investor expectations, which quickly alter in response to any information made accessible to the public, including economic and political developments, have a considerable impact on stock prices. As a result, when examining influences on stock prices, the factors that drive the macroeconomy may be less appropriate to use than those that track changes in expectations about future values of macroeconomic factors. This could be a future direction of this study. Given that investors cannot anticipate or take steps to protect themselves from such risks, indicators that represent unanticipated changes in future macroeconomic variable values are particularly crucial. There are many potential variables that could be taken into consideration because economic theory does not stipulate which factors, or how many, should be employed in the study.
Our empirical findings have implications for portfolio design and risk management for investors. It also has significant implications for risk management decisions involving hedging and downside risk, given that the financial utility of oil varies depending on market conditions. Finally, our findings have implications for the forecasting of COP across quantiles based on macroeconomic and financial variables. Furthermore, changes in the several parameters taken into account for this study account for almost 2/3 of the monthly fluctuation in the excess returns.
## Conclusion
This analysis uses multiple factors to model the ROC under different market conditions, considering the impact of various economic, political, and health-related factors on the price of crude oil. Instead of focusing only on the mean or overall trend, this study used SQR to determine how these factors affect the various percentiles (or quantiles) of ROC. In doing so, the study used SQR to evaluate the stability of the relationship between the dependent and independent variables over time, as well as to identify any changes in the relationship that may have occurred as a result of changes in the economic or geopolitical landscape. Furthermore, the study used interaction terms with WUPI*GPR and GPR*INFLATION to conduct additional empirical research. The multivariate 12-factor approach helped to estimate the conditional quantiles of the return distribution, which provides valuable information for risk management and portfolio optimization. The model tested various statistical assumptions, e.g., VIF and PCA to select the relevant predictors, the Breusch-Pagan test for heteroscedasticity, the Jarque-Bera normality test, the RESET for functional form, and the CUSUM test for structural breaks in the model. This conclusion is based on the sample of data used in the study, and it is possible that results might differ with a different data set. The COP, as an indicator of world economic development, can be viewed as a crucial index for investors and policymakers; thereby, the findings from the study have wider ramifications for both policymakers and investors at large. The asymmetric and heterogeneous association between the given variables indicates that financial specialists and policymakers should embrace distinctive investment strategies under changing economic conditions. Despite the complexity of estimating multiple factors, multifactor SQR has been shown to be beneficial in determining the ROC because it allows for a more comprehensive and accurate analysis of ROC investments by considering the impact of various economic, political, and health-related factors on COP. Furthermore, the results of this analysis can be used to create a predictive model for forecasting COP under various market scenarios. This can aid in the identification of profitable investment opportunities and the formulation of strategic investment decisions. To this end, building a trustworthy empirical model requires iteration and is not a precise science. Other authors could reach a different final specification using the same facts and initial theory.
|
2302.01332
|
Bayesian Metric Learning for Uncertainty Quantification in Image
Retrieval
|
We propose the first Bayesian encoder for metric learning. Rather than
relying on neural amortization as done in prior works, we learn a distribution
over the network weights with the Laplace Approximation. We actualize this by
first proving that the contrastive loss is a valid log-posterior. We then
propose three methods that ensure a positive definite Hessian. Lastly, we
present a novel decomposition of the Generalized Gauss-Newton approximation.
Empirically, we show that our Laplacian Metric Learner (LAM) estimates
well-calibrated uncertainties, reliably detects out-of-distribution examples,
and yields state-of-the-art predictive performance.
|
Frederik Warburg, Marco Miani, Silas Brack, Soren Hauberg
|
2023-02-02T18:59:23Z
|
http://arxiv.org/abs/2302.01332v2
|
# Bayesian Metric Learning for Uncertainty Quantification in Image Retrieval
###### Abstract
We propose the first Bayesian encoder for metric learning. Rather than relying on neural amortization as done in prior works, we learn a distribution over the network weights with the Laplace Approximation. We actualize this by first proving that the contrastive loss is a valid log-posterior. We then propose three methods that ensure a positive definite Hessian. Lastly, we present a novel decomposition of the Generalized Gauss-Newton approximation. Empirically, we show that our Laplacian Metric Learner (LAM) estimates well-calibrated uncertainties, reliably detects out-of-distribution examples, and yields state-of-the-art predictive performance.
Machine Learning, ICML
## 1 Introduction
Metric learning seeks data representations where similar observations are near and dissimilar ones are far. This construction elegantly allows for building retrieval systems with simple nearest-neighbor search. Such systems easily cope with a large number of classes, and new classes can organically be added without retraining. While these retrieval systems show impressive performance, they quickly, and with no raised alarms, deteriorate with out-of-distribution data (Shi & Jain, 2019). In particular, in safety-critical applications, the lack of uncertainty estimation is a concern as retrieval errors may propagate unnoticed through the system, resulting in erroneous and possibly dangerous decisions.
We present the Laplacian Metric Learner (LAM) to estimate reliable uncertainties of image embeddings as demonstrated in Fig. 1. LAM is the first Bayesian method proposed for metric learning. We learn a distribution over the network weights (weight posterior) from which we obtain a stochastic representation by embedding an image through sampled neural networks. The Bayesian formulation has multiple benefits, namely (1) robustness to out-of-distribution examples, (2) calibrated in-distribution uncertainties, and (3) a slight improvement in predictive performance.
Our method extends the Laplace Approximation (MacKay, 1992) for metric learning. We present a probabilistic interpretation of the contrastive loss (Hadsell et al., 2006) which justifies that it can be interpreted as an unnormalized negative log-posterior. We then propose three solutions to ensure a positive definite Hessian for the contrastive loss and present two approaches to compute the Generalized Gauss-Newton (Foresee & Hagan, 1997) approximation for \(\ell_{2}\)-normalized networks. Finally, we boost our method with the online training procedure from Miani et al. (2022) and achieve state-of-the-art performance.
We are not the first to consider uncertainty quantification in image retrieval. Seminal works (Shi & Jain, 2019; Oh et al., 2018) have addressed the lack of uncertainties in retrieval with _amortized inference_(Gershman & Goodman, 2014), where a neural network predicts a stochastic embedding. The issues with this approach are that (1) it requires strong assumptions on the distribution of the embedding, (2) the networks are often brittle and difficult to optimize, and (3) out-of-distribution detection relies on the network's capacity to extrapolate uncertainties. As neural networks extrapolate poorly (Xu et al., 2021), the resulting _predicted_ uncertainties
Figure 1: **Reliable stochastic embeddings.** Current state-of-the-art, PFE (Shi & Jain, 2019), and LAM (ours) learn stochastic representations. LAM estimates reliable uncertainties of the latent representation that intuitively follow the amount of blur, noise, or occlusion in the input image.
are unreliable for out-of-distribution data (Detlefsen et al., 2019) and are thus, in practice, of limited value.
In contrast, our method does not assume any distribution on the stochastic embeddings, is simple to optimize, and does not rely on a neural network to extrapolate uncertainties. Instead, our weight posterior is derived from the curvature of the loss landscape and the uncertainties of the latent embeddings deduced (rather than learned) with sampling. We show through rigorous experiments that this leads to reliable out-of-distribution performance and calibrated uncertainties in both controlled toy experiments and challenging real-world applications such as bird, face, and place recognition.
## 2 Related Work
**Metric learning** attempts to map data to an embedding space, where similar data are close together and dissimilar data are far apart. This is especially useful for retrieval tasks with many classes and few observations per class such as place recognition (Warburg et al., 2020) and face recognition (Schroff et al., 2015) or for tasks where classes are not well-defined, such as food tastes (Wilber et al., 2015) or narratives in online discussions (Christensen et al., 2022).
There exist many metric losses that optimize for a well-behaved embedding space. We refer to the excellent survey by Musgrave et al. (2020) for an overview. We here focus on the _contrastive loss_(Hadsell et al., 2006)
\[\begin{split}\mathcal{L}_{\text{con}}(\theta)=&\ \frac{1}{2}\|f_{\theta}(x_{a})-f_{\theta}(x_{p})\|^{2}\\ &+\ \frac{1}{2}\max\left(0,m-\|f_{\theta}(x_{a})-f_{\theta}(x_{n})\|^{2} \right),\end{split} \tag{1}\]
which has shown state-of-the-art performance (Musgrave et al., 2020) and is one of the most commonly used metric losses. Here, \(f_{\theta}\) is a neural network parametrized by \(\theta\) which maps from the observation space to the embedding space. The loss consists of two terms, one that attracts observations from the same class (_anchor_\(x_{a}\) and _positive_\(x_{p}\)), and one that repels observations from different classes (_anchor_\(x_{a}\) and _negative_\(x_{n}\)). The margin \(m\) ensures that negatives are repelled sufficiently far. We will later present a probabilistic extension of the contrastive loss that allows us to learn stochastic, rather than deterministic, features in the embedding space.
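For concreteness, a minimal PyTorch sketch of Eq. (1) is given below; `f` stands for the \(\ell_{2}\)-normalized embedding network and the margin value is an arbitrary placeholder, not a value taken from the paper.

```python
import torch

def contrastive_loss(f, x_a, x_p, x_n, margin=0.2):
    # Eq. (1): attract anchor/positive pairs, repel anchor/negative pairs within the margin.
    z_a, z_p, z_n = f(x_a), f(x_p), f(x_n)  # f is assumed to return unit-norm embeddings
    attract = 0.5 * (z_a - z_p).pow(2).sum(dim=-1)
    repel = 0.5 * torch.clamp(margin - (z_a - z_n).pow(2).sum(dim=-1), min=0.0)
    return (attract + repel).mean()
```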
**Uncertainty in deep learning** is currently studied across many domains to mitigate fatal accidents and allow for human intervention when neural networks make erroneous predictions. Current methods can be divided into methods that apply amortized optimization to train a neural network to predict the parameters of the output distribution, and methods that do not. The amortized methods, best known from the variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014), seem attractive at first as they can directly estimate the output distribution (without requiring sampling), but they suffer from mode collapse and are sensitive to out-of-distribution data due to the poor extrapolation capabilities of neural networks (Nalisnick et al., 2018; Detlefsen et al., 2019). 'Bayes by Backprop' (Blundell et al., 2015) learns a distribution over parameters variationally but is often deemed too brittle for practical applications. Alternatives to amortized methods include deep ensembles (Lakshminarayanan et al., 2017), stochastic weight averaging (SWAG) (Maddox et al., 2019), Monte-Carlo dropout (Gal and Ghahramani, 2016) and the Laplace Approximation (LA) (Laplace, 1774; MacKay, 1992), which all approximate the generally intractable weight posterior \(p(\theta|\mathcal{D})\) of a neural network. We propose the first principled method to approximate the weight posterior in metric learning.
**Laplace approximations (LA)** can be applied for every loss function \(\mathcal{L}\) that can be interpreted as an unnormalized log-posterior by performing a second-order Taylor expansion around a chosen weight vector \(\theta^{*}\) such that
\[\begin{split}\mathcal{L}(\theta)\approx&\ \mathcal{L}^{*}+(\theta-\theta^{*})^{\top}\nabla\mathcal{L}^{*}\\ &+\frac{1}{2}(\theta-\theta^{*})^{\top}\nabla^{2}\mathcal{L}^{*}( \theta-\theta^{*}),\end{split} \tag{2}\]
where \(\mathcal{L}^{*}\) is the loss evaluated in \(\theta^{*}\). Imposing the unnormalized log-posterior to be a second-order polynomial is equivalent to assuming the posterior to be Gaussian. If \(\theta^{*}\) is a MAP estimate, the first-order term vanishes, and the second-order term can be interpreted as a precision matrix, the inverse of the covariance. Assuming \(\theta^{*}\) is a MAP estimate, this second-order term is negative semi-definite for common (convex) supervised losses, such as the mean-squared error and cross-entropy. Recently, Daxberger et al. (2021) demonstrated that post-hoc LA is scalable and produces well-behaved uncertainties for classification and regression. The Laplacian Autoencoder (LAE) (Miani et al., 2022) improves on the post-hoc LA with an online Monte Carlo EM training procedure to learn a well-behaved posterior. It demonstrates state-of-the-art uncertainty quantification for unsupervised representation learning. We first extend LA to the contrastive loss and achieve state-of-the-art performance with the online EM training procedure.
**Uncertainty in metric learning** is not a new idea (Vilnis and McCallum, 2014), but the large majority of recent methods apply amortized inference to predict distributions in the embedding space (Warburg et al., 2021; Chang et al., 2020; Chun et al., 2021; Oh et al., 2018; Shi and Jain, 2019; Song and Soleymani, 2019; Sun et al., 2020), making them sensitive to mode collapse and out-of-distribution data. Taha et al. (2019b;a) explore deep ensembles and Monte-Carlo dropout as alternatives, but these methods suffer from increased training time, poor empirical performance, and limited Bayesian interpretation (Daxberger et al., 2021). We explore LA in metric learning and attain state-of-the-art performance.
## 3 Laplacian Metric Learning
To perform Bayesian retrieval, we propose to estimate the weight posterior of the embedding network \(f_{\theta}\) such that we can sample data embeddings to propagate uncertainty through the decision process. The embedding network is parametrized by \(\theta\in\Theta\) and trained with the contrastive loss. The network maps an image \(x\in\mathcal{X}:=\mathbb{R}^{HWC}\) to an embedding \(z\in\mathcal{Z}\), which is restricted to be on a \(Z\)-dimensional unit sphere \(\mathcal{Z}:=\mathcal{S}^{Z}\). This normalization to the unit sphere is commonly done in retrieval to obtain faster retrieval and a slight performance boost (Arandjelovic et al., 2016; Radenovic et al., 2018). Fig. 2 illustrates our Bayesian mapping from image to latent space.
To obtain an approximate posterior over the weights \(\theta\) we rely on the Laplace approximation (LA). We first motivate the post-hoc LA as this is the simplest. We then extend this approach to online LA which marginalizes the Laplace approximation during training to improve model fit. In Appendix D, we prove that the contrastive loss is a valid unnormalized log-posterior on the compact spherical space. The proof draws inspiration from electrostatics, and the main idea is to define a PDF for the attracting and repelling terms. We then show that the logarithm of the product of these PDFs is equivalent (up to some constant) to the contrastive loss (details are postponed to Appendix D). Since the contrastive loss is a valid log-posterior, we can proceed with LA.
**A post-hoc Laplace approximation** is found by first training a standard deterministic network through gradient steps with the contrastive loss to find the _maximum a posteriori_ (MAP) parameters \(\theta^{*}\). Since we are in a local optimum, the first-order term in the second-order Taylor expansion (Eq. 2) vanishes, and we can define the parameter distribution as
\[p(\theta|\mathcal{D})=\mathcal{N}\left(\theta\Big{|}\theta^{*},\left(\nabla _{\theta}^{2}\mathcal{L}_{\text{con}}\left(\theta^{*};\mathcal{D}\right)+ \sigma_{\text{prior}}^{-2}\mathbb{I}\right)^{-1}\right). \tag{3}\]
The advantage of post-hoc LA is that the training procedure does not change, and already trained neural networks can be made Bayesian. In practice, however, stochastic gradient-based training does not locate isolated minima of the loss landscape, but rather ends up exploring regions near local minima. The Hessian (and hence the posterior covariance) can change significantly during this exploration, and the post-hoc LA can become unstable.
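A minimal sketch of using Eq. (3) is shown below, assuming a diagonal last-layer Hessian as in the paper's experiments; the tensor names and the prior precision value are illustrative.

```python
import torch

def sample_posterior_weights(theta_map, hessian_diag, prior_prec=1.0, n_samples=16):
    # Diagonal Laplace posterior of Eq. (3): N(theta*, (H + prior_prec * I)^{-1}).
    posterior_std = (hessian_diag + prior_prec).reciprocal().sqrt()
    eps = torch.randn(n_samples, *theta_map.shape)
    return theta_map + posterior_std * eps  # one last-layer weight sample per row
```

Each sampled weight vector then defines one encoder through which a query image is embedded, yielding the latent samples that are later summarized with a von Mises-Fisher fit.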
**Online Laplace approximations**(Miani et al., 2022) avoids this instability by marginalizing the LA during training with Monte Carlo EM. This helps the training recover a solution \(\theta^{*}\) where the Hessian reflects the loss landscape. Specifically, at each step \(t\) during training we keep in memory a Gaussian distribution on the parameters \(q^{t}(\theta)=\mathcal{N}(\theta|\theta_{t},H_{\theta_{t}}^{-1})\). The parameters are updated through an expected gradient step
\[\theta_{t+1}=\theta_{t}+\lambda\mathbb{E}_{\theta\sim q^{t}}[\nabla_{\theta} \mathcal{L}_{\text{con}}(\theta;\mathcal{D})] \tag{4}\]
and a discounted Laplace update
\[H_{\theta_{t+1}}=(1-\alpha)H_{\theta_{t}}+\nabla_{\theta}^{2}\mathcal{L}_{ \text{con}}(\theta;\mathcal{D}), \tag{5}\]
where \(\alpha\) describes an exponential moving average, similar to momentum-like training. The initialization follows the isotropic prior \(q^{0}(\theta)=\mathcal{N}(\theta|0,\sigma_{\text{prior}}^{2}\mathbb{I})\).
In practice, the Hessian scales quadratically in memory wrt. the number of parameters. To mitigate this, we approximate this Hessian by its diagonal (LeCun et al., 1989; Denker and LeCun, 1990).
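The update of Eqs. (4)-(5) with a diagonal Hessian can be sketched as follows. This is only an illustration: the squared-gradient term is a crude stand-in for the diagonal GGN curvature that the paper computes with its Hessian backpropagation, and the closure `loss_fn` is assumed to evaluate the contrastive loss on a mini-batch.

```python
import torch

def online_laplace_step(theta, hess_diag, loss_fn, batch, lr=1e-3, alpha=0.1, n_mc=4, prior_prec=1.0):
    # q^t(theta) = N(theta_t, (H_t + prior_prec I)^{-1}); sample it to estimate the expected gradient.
    std = (hess_diag + prior_prec).reciprocal().sqrt()
    grads, hess_terms = [], []
    for _ in range(n_mc):
        theta_s = (theta + std * torch.randn_like(theta)).requires_grad_(True)
        loss = loss_fn(theta_s, batch)
        (grad,) = torch.autograd.grad(loss, theta_s)
        grads.append(grad)
        hess_terms.append(grad.detach() ** 2)  # diagonal-curvature proxy (assumption, not the paper's GGN)
    theta = theta - lr * torch.stack(grads).mean(0)                        # expected gradient step, Eq. (4)
    hess_diag = (1 - alpha) * hess_diag + torch.stack(hess_terms).mean(0)  # discounted Laplace update, Eq. (5)
    return theta.detach(), hess_diag
```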
**Hessian of the contrastive loss.** Both post-hoc and online LA require the Hessian of the contrastive loss \(\nabla_{\theta}^{2}\mathcal{L}_{\text{con}}(\theta;\mathcal{D})\). The Hessian is commonly approximated with the Generalized Gauss-Newton (GGN) approximation (Foresee and Hagan, 1997; Daxberger et al., 2021; Dangel et al., 2020; Detlefsen et al., 2021). The GGN decomposes the loss into \(\mathcal{L}=g\circ f\), where \(g\) is usually chosen as the loss function and \(f\) the model function, and only \(f\) is linearized.
However, in our case, this decomposition is non-trivial. Recall that the last layer of our network is an \(\ell_{2}\) normalization layer, which projects embeddings onto a hyper-sphere. This normalization layer can either be viewed as part of the model \(f\) (linearized normalization layer) or part of the loss \(g\) (non-linearized normalization layer). We show in Appendix F.1 that the former can be interpreted as using the _Euclidean_ distance and in Appendix F.2 that the latter as using the _Arcos_ distance for the contrastive loss (Eq. 1). We highlight that these share the zero- and first-order terms for normalized embeddings but, due to the GGN linearization, not the second-order derivatives. The Euclidean interpretation leads to simpler derivatives and interpretations, and we will therefore use it for our derivations. We emphasize that the Arccos is theoretically a more accurate approximation, because the
Figure 2: **Model overview**. We learn a distribution over parameters, such that we embed an image through sampled encoders \(f_{\theta}\) to points \(z_{i}\) (red dots) in a latent space \(\mathcal{Z}\). We reduce these latent samples to a single measure of uncertainty by estimating the parameters of a von Mises-Fisher distribution.
\(\ell_{2}\)-layer is not linearized, and we provide derivations in Appendix F.
The GGN matrix for contrastive loss with the _Euclidean_ interpretation is given by
\[\nabla^{2}_{\theta}\mathcal{L}_{\text{con}}(\theta;\mathcal{I})=\sum_{ij\in\mathcal{I}}H^{ij}_{\theta}=\sum_{ij\in\mathcal{I}_{p}}H^{ij}_{\theta}+\sum_{ij\in\mathcal{I}_{n}}H^{ij}_{\theta} \tag{6}\] \[\stackrel{{\text{GGN}}}{{\approx}}\sum_{ij\in\mathcal{I}_{p}}J^{ij\top}_{\theta}\underbrace{\left(\begin{smallmatrix}1&-1\\ -1&1\end{smallmatrix}\right)}_{:=H_{p}}J^{ij}_{\theta}+\sum_{ij\in\mathcal{I}_{n}}J^{ij\top}_{\theta}\underbrace{\left(\begin{smallmatrix}-1&1\\ 1&-1\end{smallmatrix}\right)}_{:=H_{n}}J^{ij}_{\theta},\]
where \(J^{ij}_{\theta}=\left(J_{\theta}f_{\theta}(x_{i})^{\top},J_{\theta}f_{\theta} (x_{j})^{\top}\right)^{\top}\), with \(J_{\theta}\) being the Jacobian wrt. the parameters, and where \(H_{p}\) and \(H_{n}\) are the Hessian of the contrastive loss wrt. the model output for positive and negative pairs. Notice that the first sum runs over positive pairs and the second sum runs over negative pairs _within the margin_. Negative pairs outside the margin do not contribute to the Hessian, and can therefore be ignored to reduce the computational load (Appendix B).
The eigenvalues of the Hessian wrt. the output are \((0,2)\) and \((-2,0)\) for the positive \(H_{p}\) and negative \(H_{n}\) terms, so we are not guaranteed to have a positive definite Hessian, \(H_{\theta}\). To avoid covariances with negative eigenvalues, we propose three solutions to ensure a positive definite Hessian. Proofs are in Appendix F.3.
**Ensuring positive definiteness of the Hessian**. We do not want to be restricted in the choice of the prior, so we must ensure that \(\nabla^{2}_{\theta}\mathcal{L}_{\text{con}}(\theta^{*};\mathcal{D})\) is positive definite itself. Differently from the standard convex losses, this is not ensured by the GGN approximation (Immer et al., 2021). Our main insight is that we can ensure a positive definite Hessian \(H_{\theta}\) by only manipulating the Hessians \(H_{p}\) and \(H_{n}\) in Eq. 6.
_1. Positive: The repelling term is ignored, such that only positive pairs contribute to the Hessian._
\[H_{p}=\begin{pmatrix}1&-1\\ -1&1\end{pmatrix},\qquad H_{n}=\begin{pmatrix}0&0\\ 0&0\end{pmatrix} \tag{7}\]
_2. Fixed: The cross derivatives are ignored._
\[H_{p}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\qquad H_{n}=\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix} \tag{8}\]
_3. Full: Nothing is ignored, but rather positive definiteness is ensured with ReLU, \(\max(0,\nabla^{2}_{\theta}\mathcal{L}_{\text{con}}(\theta;\mathcal{D}))\), on the Hessian of the loss wrt. the parameters._
\[H_{p}=\begin{pmatrix}1&-1\\ -1&1\end{pmatrix},\qquad H_{n}=\begin{pmatrix}-1&1\\ 1&-1\end{pmatrix} \tag{9}\]
The _positive_ approximation is inspired by Shi and Jain (2019), which also only uses positive pairs to train their uncertainty module. The gradient arrows in Fig. 3 (a) illustrate that negative pairs are neglected when computing the Hessian of the contrastive loss. The _fixed_ approximation considers one data point at a time, assuming the other one is fixed. Thus, given a pair of data points, this can be interpreted as first moving one data point, and then the second (rather than both at the same time). We formalize this in Appendix F.3. Fig. 3 (b) illustrates this idea when all points except \(a\) are fixed. Lastly, we propose the _full_ Hessian of the contrastive loss (Fig. 3 (c)) and ensure positive definiteness by computing the ReLU of the Hessian. We note that this approximation assumes a diagonal Hessian. We experimentally determine which of these three methods to ensure positive definiteness leads to better performance (Section 4).
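A small numerical check makes the positive-(semi)definiteness discussion concrete; it only inspects the 2x2 output-space blocks of Eqs. (7)-(9), not the full parameter-space Hessian, and the eigenvalues reproduce the \((0,2)\) and \((-2,0)\) values stated above.

```python
import torch

# (H_p, H_n) blocks from Eqs. (7)-(9); eigenvalues reveal which blocks are positive semi-definite.
choices = {
    "positive": (torch.tensor([[1., -1.], [-1., 1.]]), torch.zeros(2, 2)),
    "fixed":    (torch.eye(2),                         -torch.eye(2)),
    "full":     (torch.tensor([[1., -1.], [-1., 1.]]), torch.tensor([[-1., 1.], [1., -1.]])),
}
for name, (Hp, Hn) in choices.items():
    print(f"{name:>8}: eig(H_p)={torch.linalg.eigvalsh(Hp).tolist()}, "
          f"eig(H_n)={torch.linalg.eigvalsh(Hn).tolist()}")
```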
**Intuition of Hessian approximations.** With Fig. 4 we try to provide some intuition on these three approximations and their impact in a clustered versus cluttered latent space. Positive points will increase the precision (magnitude of the Hessian) while negative points inside the margin will decrease the precision (as indicated with the arrows in Fig. 4). Negative points outside the margin will not affect the gradient or the Hessian. This is the desired behavior, as a perfectly clustered latent space (Fig. 4 b) will have high precision (low uncertainty), whereas a cluttered latent space (Fig. 4 a) will have many negatives within the margin, which will decrease the precision (large uncertainty). Therefore, the positive approximation (Eq. 7) will provide the lowest uncertainty, as the negatives are excluded. In practice, we cannot compute the Hessian for all pairs, as this scales quadratically in time and memory with the dataset size. Furthermore, most pairs, namely the negatives outside the margin, do not contribute to the Hessian, so it is wasteful to compute their Hessian. Therefore, we make use of a common trick in metric learning, namely hard negative mining (Musgrave et al., 2020).
**Hard negative mining leads to biased Hessian approximation.** Metric losses operate locally, hence non-zero losses will only occur for negative examples that are close to the anchor in the latent space. Presenting the model for
Figure 3: **Hessian approximations. To ensure a positive definite Hessian approximation we propose three approximations. In (a) only the positives \(p\) contribute to the Hessian as the negatives \(n\) are ignored. In (b) we consider one point at a time, e.g., only the anchor \(a\) contributes. In (c) we consider all interactions.**
randomly sampled data will in practice rarely result in negative pairs that are close, which leads to extremely long training times. Therefore, _hard negative mining_ is often used to construct batches that have an over-representation of hard negative pairs (negative examples that are close to the anchor). We use similar mining when computing the Hessian, which leads to a biased estimate of the Hessian. We thus introduce a scaling parameter \(0\leq w_{n}\leq 1\) to obtain
\[\hat{H}_{\theta}=(1-w_{n})J_{\theta}^{\top}H_{p}J_{\theta}+w_{n}J_{\theta}^{ \top}H_{n}J_{\theta}, \tag{10}\]
which corrects the biased estimate (see Appendix B.1 for proof). Our positive approximation (Eq. 7) can be seen as the extreme case with \(w_{n}=0\), whereas our full approximation Eq. 9 (\(w_{n}=0.5\)) corresponds to unbiased sampling.
Having an estimate of the Hessian in place and three methods to ensure it is positively definite, we can perform both post-hoc and online LA to obtain a distribution over the weights. We can sample from this weight posterior and embed an input image via each sampled network to obtain multiple samples in the latent space (see Fig. 2). We reduce these samples to a single measure of uncertainty by estimating the parameters of a von Mises-Fisher distribution.
**Why the von Mises-Fisher distribution?** The uncertainty deduced from LA is usually computed by the variance of the samples (Daxberger et al., 2021). This assumes that the samples follow a Gaussian distribution. However, for \(\ell_{2}\)-normalized networks, we assume that all the probability mass lies on a \(Z\)-dimensional hyper-sphere. The von Mises-Fisher distribution describes such distribution, and it is parametrized with a directional mean \(\mu\) and a scalar concentration parameter \(\kappa\), which can be interpreted as the inverse of an isotropic covariance \(\kappa=1/\sigma^{2}\), i.e., small \(\kappa\) means high uncertainty and large \(\kappa\) means low uncertainty.
There exist several methods to estimate \(\kappa\). We opt for the simplest and most computationally efficient (Sra, 2012) (see Appendix C). Prior works (Shi and Jain, 2019; Taha et al., 2019a;b) have treated an estimated latent distribution as a Gaussian, although all probability mass lies on the unit sphere. This is insufficient, as samples from the Gaussian distribution will not lie on the unit sphere. In the experimental work, we correct this by projecting the samples onto the unit sphere.
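A sketch of reducing the latent samples to a single concentration value is given below; we assume the simple closed-form estimator discussed in Sra (2012) (often attributed to Banerjee et al.), so this is an illustration rather than the paper's exact implementation.

```python
import torch

def vmf_concentration(z_samples):
    # z_samples: (n, Z) unit-norm latent samples obtained by embedding one image
    # through n networks drawn from the weight posterior.
    d = z_samples.shape[-1]
    r_bar = z_samples.mean(dim=0).norm()                 # mean resultant length
    return r_bar * (d - r_bar ** 2) / (1 - r_bar ** 2)   # kappa-hat; small kappa = high uncertainty
```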
## 4 Experiments
We benchmark our method against strong probabilistic retrieval models. Probabilistic Face Embeddings (PFE) (Shi and Jain, 2019) and Hedged Instance Embedding (HIB) (Oh et al., 2018) perform amortized inference and thus estimate the mean and variance of the latent observation. We also compare against MC Dropout (Gal and Ghahramani, 2016) and Deep Ensemble (Lakshminarayanan et al., 2017), two approximate Bayesian methods, which have successfully been applied in image retrieval (Taha et al., 2019a;b).
We compare the models' _predictive performance_ with the recall (recall@\(k\)) and mean average precision (mAP@\(k\)) among the \(k\) nearest neighbors (Warburg et al., 2021; Mussgrave et al., 2020; Arandjelovic et al., 2016). We evaluate the models' abilities to _interpolate_ and _extrapolate_ uncertainties by measuring the Area Under the Sparsification Curve (AUSC) and Expected Calibration Error (ECE) on in-distribution (ID) data, and the Area Under Receiver Operator Curve (AUROC) and Area Under Precision-Recall Curve (AUPRC) on out-of-distribution (OoD) data. We provide more details on these metrics in Appendix G.3.
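As an illustration of how the OoD metrics are computed from the estimated uncertainties, the sketch below scores synthetic ID/OoD concentration values with scikit-learn; the numbers are placeholders and only AUROC/AUPRC are shown.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
kappa_id = rng.gamma(shape=20.0, size=500)    # illustrative vMF concentrations for ID queries
kappa_ood = rng.gamma(shape=5.0, size=500)    # illustrative concentrations for OoD queries
uncertainty = np.concatenate([1.0 / kappa_id, 1.0 / kappa_ood])  # higher = more uncertain
is_ood = np.concatenate([np.zeros(500), np.ones(500)])
print("AUROC:", roc_auc_score(is_ood, uncertainty))
print("AUPRC:", average_precision_score(is_ood, uncertainty))
```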
We extend StochMan (Detlefsen et al., 2021) with the Hessian backpropagation for the contrastive loss. The training code is implemented in PyTorch (Paszke et al., 2017) and is available on GitHub1. Appendix G provides more details on the experimental setup.
Footnote 1: Code: [https://github.com/FrederikWarburg/bayesian-metric-learning](https://github.com/FrederikWarburg/bayesian-metric-learning)
**Experimental Summary.** We begin with a short summary of our experimental results. Across five datasets, three different network architectures, and three different sizes of the latent space (ranging from \(3\) to \(2048\)), we find that LAM has well-calibrated uncertainties, reliably detects OoD examples, and achieves state-of-the-art predictive performance. Fig. 4(a) shows that the uncertainties from online LAM reliably identify OoD examples. Online LAM outperforms other Bayesian methods, such as post-hoc LAM and MC dropout, on this task, which in turn clearly improves upon amortized methods that rely on a neural network to extrapolate uncertainties. Fig. 4(b) shows that LAM consistently matches or outperforms existing image retrieval methods in terms of predictive performance. We find that the fixed Hessian approximation with the Arccos distance performs the best, especially on higher dimensional data.
Figure 4: **Intuition of \(\mathbf{H_{p}}\) and \(\mathbf{H_{n}}\). Illustration of a cluttered latent space, where observations from different classes are close, and a clustered latent space with distinct clusters with observations from the same class. Intuitively, negative examples \(\star\) inside the margin decreases the precision (higher variance) with \(H_{n}\), and positive points \(\bullet\) will increase the precision (lower variance) with \(H_{p}\).**
**Ablation: Positive Definiteness of the Hessian.** We experimentally study which method to ensure a positive definite Hessian has the best performance measured in both predictive performance (mAP@\(5\)) and uncertainty quantification (AUROC, AUSC). We found that all these methods perform similarly on simple datasets and low dimensional hyper-spheres, but the fixed approximation with Arccos distance performs better on more challenging datasets and higher dimensional hyper-spheres. We present results on one of these more challenging datasets, namely the LFW (Huang et al., 2007) face recognition dataset with the CUB200 (Wah et al., 2011) bird dataset as an OoD dataset. We use a ResNet50 (He et al., 2016) with a GeM pooling layer (Radenovic et al., 2018) and a \(2048\) dimensional embedding and diagonal, last-layer LA (Daxberger et al., 2021).
Table 1 shows the performance for post-hoc and online LA with fixed, positive, or full Hessian approximation using either Euclidean or Arccos distance. Across all metrics, the online LA with Arccos distance and the fixed Hessian approximation performs similarly or the best. We proceed to benchmark this method against several strong probabilistic baselines on closed-set retrieval and a more challenging open-set retrieval.
**Closed-Set Retrieval.** OoD capabilities are critical for identifying distributional shifts, outliers, and irregular user inputs, which might hinder the propagation of erroneous decisions in an automated system. We evaluate OoD performance on the commonly used benchmarks (Nalisnick et al., 2018), where we use (1) FashionMNIST (Xiao et al., 2017) as ID and MNIST (LeCun et al., 1998) as OoD, and (2) CIFAR10 (Krizhevsky, 2009) as ID and SVHN (Netzer et al., 2011) as OoD. We use, respectively, a standard \(2\)- or \(3\)-layer relu convolutional network followed by a single linear layer on which we compute LA with a diagonal Hessian.
_FashionMNIST (ID) vs MNIST (OoD)._ Table 2 shows that both PFE and post-hoc LAM have a similar predictive performance to the deterministic model. This is not surprising, as both methods are initialized with the deterministic parameters, and then uncertainties are learned (PFE) or deduced (post-hoc LAM) with frozen weights. The awareness of uncertainties during training, grants the online LAM slightly higher predictive performance.
(higher AUROC and AUPRC) performance. Fig. 6 shows the calibration plot for CIFAR10. For this dataset, online LAM has a near-perfect calibration curve. Fig. 7 shows the ROC curves for CIFAR10 and highlights that online LAM is better at distinguishing ID and OoD examples.
**Open-Set Retrieval.** A key advantage of metric learning methods is that they easily cope with a large number of classes and new classes can be added seamlessly. We therefore evaluate LAM's performance on challenging open-set retrieval, where none of the classes in the test set are available during training. We first test with CUB200 (Wah et al., 2011) as ID and CAR196 (Krause et al., 2013) as OoD similarly to Warburg et al. (2021), and second, test with LFW (Huang et al., 2007) as ID and CUB200 as OoD. We use a ResNet50 (He et al., 2016) with a GeM pooling layer (Radenovic et al., 2018) and a \(2048\) dimensional embedding and diagonal, last-layer LA (Daxberger et al., 2021).
_CUB200 (ID) vs CARS196 (OoD)._ The CUB-200-2011 dataset (Wah et al., 2011) has \(200\) bird species captured from different perspectives and in different environments. We follow the zero-shot train/test split (Musgrave et al., 2020). In this zero-shot setting, the trained models have not seen any of the bird species in the test set, and the learned features must generalize well across species. Table 3 shows that LAM matches or surpasses the predictive performance of all other methods. LAM (post-hoc) achieves state-of-the-art predictive performance, while LAM (online) matches the predictive performance of the deterministic trained model while achieving state-of-the-art AUROC and AUPRC for OoD detection.
_LFW (ID) vs CUB200 (OoD)._ Face recognition is another challenging metric learning task with many applications in security and surveillance. The goal is to retrieve images of the same person as in the query image. Table 3 shows that online LAM outperforms existing methods both in terms of predictive performance and uncertainty quantification. Fig. 8 shows that PFE assigns higher uncertainty to images from the ID dataset (faces) than those from the OoD dataset (birds). In contrast, both online and post-hoc LAM better associate high variance to OoD examples, while PFE predicts
\begin{table}
\begin{tabular}{l l|l c c|c c c} \hline \hline & & \multicolumn{3}{c|}{Image retrieval} & \multicolumn{2}{c|}{OoD} & \multicolumn{2}{c}{Calibration} \\ & & mAP@5 \(\uparrow\) & mAP@10 \(\uparrow\) & AUROC \(\uparrow\) & AUPRC \(\uparrow\) & AUSC \(\uparrow\) & ECE \(\downarrow\) \\ \hline \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & Deterministic & 0.78 \(\pm\) 0.01 & 0.73 \(\pm\) 0.01 & 0.72 \(\pm\) 0.01 & — & — & — \\ & Deep Ensemble & 0.69 & 0.62 & 0.59 & 0.41 & 0.46 & 0.61 & 0.04 \\ & PFE & 0.78 \(\pm\) 0.00 & 0.74 \(\pm\) 0.00 & 0.72 \(\pm\) 0.00 & 0.53 \(\pm\) 0.03 & 0.46 \(\pm\) 0.01 & 0.65 \(\pm\) 0.01 & 0.26 \(\pm\) 0.02 \\ & HIB & 0.69 \(\pm\) 0.08 & 0.63 \(\pm\) 0.09 & 0.61 \(\pm\) 0.09 & 0.60 \(\pm\) 0.12 & 0.60 \(\pm\) 0.11 & 0.65 \(\pm\) 0.08 & 0.54 \(\pm\) 0.08 \\ & MC dropout & 0.76 \(\pm\) 0.03 & 0.71 \(\pm\) 0.03 & 0.70 \(\pm\) 0.03 & 0.93 \(\pm\) 0.03 & 0.93 \(\pm\) 0.03 & 0.84 \(\pm\) 0.06 & 0.03 \(\pm\) 0.04 \\ & LAM (post-hoc) & 0.78 \(\pm\) 0.00 & 0.74 \(\pm\) 0.00 & 0.72 \(\pm\) 0.00 & 0.96 \(\pm\) 0.02 & 0.96 \(\pm\) 0.02 & 0.86 \(\pm\) 0.01 & 0.03 \(\pm\) 0.00 \\ & LAM (online) & **0.81 \(\pm\) 0.00** & **0.77 \(\pm\) 0.01** & **0.76 \(\pm\) 0.01** & **0.98 \(\pm\) 0.01** & **0.98 \(\pm\) 0.01** & **0.89 \(\pm\) 0.01** & **0.02 \(\pm\) 0.00** \\ \hline \multirow{4}{*}{
\begin{tabular}{} \end{tabular} } & Deterministic & **0.66 \(\pm\) 0.00** & 0.59 \(\pm\) 0.00 & 0.58 \(\pm\) 0.00 & — & — & — & — \\ & Deep Ensemble & **0.66** & **0.61** & **0.59** & 0.42 & 0.67 & 0.72 & 0.02 \\ & MC dropout & 0.46 \(\pm\) 0.01 & 0.37 \(\pm\) 0.01 & 0.34 \(\pm\) 0.01 & 0.60 \(\pm\) 0.03 & 0.76 \(\pm\) 0.02 & 0.61 \(\pm\) 0.01 & 0.05 \(\pm\) 0.00 \\ & HIB & 0.11 \(\pm\) 0.01 & 0.07 \(\pm\) 0.00 & 0.05 \(\pm\) 0.00 & 0.44 \(\pm\) 0.17 & 0.70 \(\pm\) 0.1 & 0.29 \(\pm\) 0.03 & 0.04 \(\pm\) 0.02 \\ & PFE & **0.66 \(\pm\) 0.00** & 0.60 \(\pm\) 0.00 & 0.58 \(\pm\) 0.00 & 0.21 \(\pm\) 0.02 & 0.56 \(\pm\) 0.01 & 0.56 \(\pm\) 0.01 & 0.11 \(\pm\) 0.01 \\ & LAM (post-hoc) & **0.66 \(\pm\) 0.00** & 0.60 \(\pm\) 0.00 & 0.58 \(\pm\) 0.00 & 0.50 \(\pm\) 0.11 & 0.69 \(\pm\) 0.07 & 0.81 \(\pm\) 0.01 & 0.23 \(\pm\) 0.01 \\ & LAM (online) & **0.66 \(\pm\) 0.01** & 0.60 \(\pm\) 0.00 & 0.57 \(\pm\) 0.01 & **0.78 \(\pm\) 0.04** & **0.85 \(\pm\) 0.03** & **0.83 \(\pm\) 0.01** & **0.01 \(\pm\) 0.00** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Closed-set results.** LAM matches or outperforms existing methods in terms of predictive performance. It produces reliable uncertainties ID and OoD on two standard datasets FashionMNIST and CIFAR10. Confidence intervals show one standard deviation computed across five runs.
Figure 6: **Calibration Curves.** LAM is near perfectly calibrated on FashionMNIST and CIFAR10.
Figure 7: **Receiver Operator Curves.** LAM assigns high uncertainty to OoD observations.
high variance to ID examples. Furthermore, online LAM seems to assign the highest variance to images in which the background is complex and thus camouflages the birds.
### Visual Place recognition
Lastly, we evaluate LAM on the challenging task of visual place recognition, which has applications that span from human trafficking investigation (Stylianou et al., 2019) to the long-term operation of autonomous robots (Davison et al., 2007). Our focus is the latter, where the goal is to retrieve images taken within a radius of \(25\) meters from a query image. The high number of unique places and varying visual appearance of each location - including weather, dynamic, structural, view-point, seasonal, and day/night changes - makes visual place recognition a very challenging metric learning problem. Reliable uncertainties and reliable out-of-distribution behavior are important to avoid incorrect loop-closure, which can deteriorate the autonomous robots' location estimate. We evaluate on MSLS (Warburg et al., 2020), which is the largest and most diverse place recognition dataset currently available comprised of \(1.6M\) images from \(30\) cities spanning six continents. We use the standard train/test split, training on \(24\) cities and testing on six other cities. We use the same model as in open-set retrieval.
Table 4 shows that online LAM yields state-of-the-art uncertainties for visual place recognition measured with AUSC, while matching the predictive performance of the alternative probabilistic and deterministic methods on both the MSLS validation and the challenge set. Fig. 10 shows the sparsification curves on the challenge set. Both online and post-hoc LAM have monotonically increasing sparsification curves, implying that when we remove the most uncertain observations, the predictive performance increases. This
Figure 8: **Images with lowest and highest variance** for PFE, post-hoc LAM, and online LAM across LFW (ID) and CUB200 (OoD) datasets. LAM associates high uncertainty to OoD examples, whereas PFE predicts higher uncertainties for ID images. Shows the best-performing PFE and online LAM across five runs.
\begin{table}
\begin{tabular}{l l|l l l|l l|l} \hline \hline & & \multicolumn{4}{c|}{Image retrieval} & \multicolumn{2}{c|}{OoD} & ID \\ & & mAP@1 \(\uparrow\) & mAP@5 \(\uparrow\) & mAP@10 \(\uparrow\) & AUROC \(\uparrow\) & AUROC \(\uparrow\) & AUSC \(\uparrow\) \\ \hline \multirow{6}{*}{**OoD**} & Deterministic & \(0.62\pm 0.01\) & \(0.48\pm 0.01\) & \(0.42\pm 0.01\) & — & — & \\ & Deep Ensemble & \(0.21\) & \(0.11\) & \(0.07\) & \(0.47\) & \(0.55\) & \(0.21\) \\ & PFE & \(0.62\pm 0.01\) & \(0.5\pm 0.01\) & \(0.43\pm 0.01\) & \(0.44\pm 0.16\) & \(0.5\pm 0.08\) & \(0.61\pm 0.02\) \\ & HIB & \(0.33\pm 0.04\) & \(0.19\pm 0.02\) & \(0.14\pm 0.02\) & \(0.54\pm 0.12\) & \(0.61\pm 0.1\) & \(0.31\pm 0.07\) \\ & MC dropout & \(0.61\pm 0.00\) & \(0.48\pm 0.00\) & \(0.42\pm 0.00\) & \(0.73\pm 0.08\) & \(0.68\pm 0.07\) & \(0.63\pm 0.01\) \\ & LAM (post-hoc) & \(\mathbf{0.65\pm 0.01}\) & \(\mathbf{0.52\pm 0.01}\) & \(\mathbf{0.45\pm 0.01}\) & \(0.56\pm 0.16\) & \(0.61\pm 0.11\) & \(\mathbf{0.66\pm 0.03}\) \\ & LAM (online) & \(0.61\pm 0.00\) & \(0.48\pm 0.00\) & \(0.42\pm 0.00\) & \(\mathbf{0.80\pm 0.03}\) & \(\mathbf{0.75\pm 0.03}\) & \(0.63\pm 0.01\) \\ \hline \multirow{6}{*}{**OoD**} & Deterministic & \(0.44\pm 0.00\) & \(0.68\pm 0.00\) & \(0.65\pm 0.00\) & — & — & — \\ & Deep Ensemble & \(0.36\) & \(0.57\) & \(0.54\) & \(0.52\) & \(0.64\) & \(0.33\) \\ & PFE & \(0.44\pm 0.00\) & \(0.68\pm 0.00\) & \(0.65\pm 0.00\) & \(0.03\pm 0.02\) & \(0.41\pm 0.0\) & \(0.49\pm 0.01\) \\ & MC dropout & \(0.42\pm 0.00\) & \(0.65\pm 0.01\) & \(0.63\pm 0.01\) & \(0.03\pm 0.01\) & \(0.41\pm 0.0\) & \(0.46\pm 0.01\) \\ & LAM (post-hoc) & \(0.44\pm 0.01\) & \(0.68\pm 0.01\) & \(0.65\pm 0.00\) & \(0.65\pm 0.14\) & \(0.72\pm 0.11\) & \(0.45\pm 0.03\) \\ & LAM (online) & \(\mathbf{0.46\pm 0.00}\) & \(\mathbf{0.71\pm 0.00}\) & \(\mathbf{0.69\pm 0.00}\) & \(\mathbf{0.71\pm 0.22}\) & \(\mathbf{0.78\pm 0.17}\) & \(\mathbf{0.50\pm 0.02}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Open-set results.** LAM matches or outperforms existing methods in terms of predictive performance and produces state-of-the-art uncertainty quantification for challenging zero-shot metric learning datasets LFW and CUB200. Confidence intervals show one standard deviation computed across five runs.
This illustrates that LAM produces reliable uncertainties for this challenging open-set retrieval task. Fig. 9 shows the queries associated with the highest and lowest uncertainty. LAM predicts high uncertainty for images that are blurry, captured facing the pavement, or contain mostly vegetation. These images do not have features that are descriptive of a specific place, making them hard to geographically locate.
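As a concrete, purely illustrative sketch of how such a sparsification curve and the area under it can be computed from per-query uncertainties and correctness scores — our code, not the authors' evaluation code; the exact AUSC definition in the paper may differ in normalization details:

```python
import numpy as np

def sparsification_curve(uncertainty, correct):
    """Drop the most uncertain queries first and record the mean
    performance of the queries that remain after each removal."""
    order = np.argsort(-np.asarray(uncertainty))      # most uncertain first
    correct = np.asarray(correct, dtype=float)[order]
    n = len(correct)
    fractions = np.arange(n) / n                      # fraction of queries removed
    kept_means = np.array([correct[k:].mean() for k in range(n)])
    return fractions, kept_means

def area_under_sparsification_curve(uncertainty, correct):
    fr, perf = sparsification_curve(uncertainty, correct)
    return np.trapz(perf, fr) / (fr[-1] - fr[0])      # normalized area

# toy data: errors become more likely as uncertainty grows, so the
# curve should increase and the area should exceed the base accuracy
rng = np.random.default_rng(0)
unc = rng.random(1000)
corr = (rng.random(1000) > unc).astype(float)
print(round(area_under_sparsification_curve(unc, corr), 3))
```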
**Limitations.** Similar to other Bayesian methods, LAM relies on \(n\) samples to obtain uncertainties. This makes inference \(n\) times slower. Computing the Hessian at every step during online LAM also slows training. To combat long training times and high memory usage, we use a last-layer LAM and thus only estimate and sample from a weight posterior over the last layer. The last-layer training time is \(3\) hours for online LAM vs. \(2.3\) hours for the deterministic contrastive loss on LFW, and \(30\) minutes vs. \(15\) minutes on CUB200, on an NVIDIA RTX A5000.
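For intuition about this sampling cost, the following minimal sketch shows last-layer Monte Carlo inference, assuming a diagonal Gaussian posterior over the final linear layer; the tensor shapes, `n_samples`, and the variance values are placeholders, not the authors' implementation.

```python
import torch

def sample_embeddings(backbone_feats, mean_W, var_W, n_samples=16):
    """Monte Carlo estimate of embedding mean/variance from a last-layer
    weight posterior N(mean_W, diag(var_W)); cost grows linearly with n_samples."""
    samples = []
    for _ in range(n_samples):
        W = mean_W + var_W.sqrt() * torch.randn_like(mean_W)  # draw one weight sample
        z = backbone_feats @ W.T
        z = torch.nn.functional.normalize(z, dim=-1)          # l2-normalized embedding
        samples.append(z)
    Z = torch.stack(samples)            # (n_samples, batch, dim)
    return Z.mean(0), Z.var(0).sum(-1)  # per-image mean embedding and total variance

feats = torch.randn(8, 512)             # placeholder backbone features
mu_W = torch.randn(128, 512) * 0.01
var_W = torch.full_like(mu_W, 1e-4)
emb_mean, emb_unc = sample_embeddings(feats, mu_W, var_W)
print(emb_mean.shape, emb_unc.shape)
```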
## 5 Conclusion
In this paper, we have introduced a Bayesian encoder for metric learning, the Laplacian Metric Learner (LAM), which uses the Laplace approximation. We prove that the contrastive loss is indeed a valid unnormalized log-posterior, and develop three Hessian approximations, which ensure a positive definite covariance matrix. We propose a novel decomposition of the Generalized Gauss-Newton approximation that improves Hessian approximations of \(\ell_{2}\)-normalized networks. Empirically, we demonstrate that LAM consistently produces well-calibrated uncertainties, reliably detects out-of-distribution examples, and achieves state-of-the-art predictive performance on both closed-set and challenging open-set image retrieval tasks.
\begin{table}
\begin{tabular}{l|c c c c c|c c c c c|c|c} \hline \hline & \multicolumn{5}{c|}{**Validation Set**} & \multicolumn{5}{c}{**Challenge Set**} \\ & R@1\(\uparrow\) & R@5\(\uparrow\) & R@10\(\uparrow\) & M@5\(\uparrow\) & M@10\(\uparrow\) & AUSC\(\uparrow\) & R@1\(\uparrow\) & R@5\(\uparrow\) & R@10\(\uparrow\) & M@5\(\uparrow\) & M@10\(\uparrow\) & AUSC\(\uparrow\) \\ \hline Deterministic & **0.77** & **0.88** & **0.90** & **0.61** & **0.56** & — & **0.58** & **0.74** & **0.78** & **0.45** & 0.43 & — \\ MC Dropout & 0.75 & 0.87 & 0.87 & 0.59 & 0.54 & **0.77** & 0.55 & 0.71 & 0.76 & 0.43 & 0.41 & 0.57 \\ PFE & **0.77** & **0.88** & **0.90** & **0.61** & **0.56** & 0.73 & **0.58** & **0.74** & **0.78** & **0.45** & **0.44** & 0.57 \\ LAM (post-hoc) & 0.76 & 0.86 & 0.89 & 0.60 & 0.55 & 0.74 & **0.58** & **0.74** & **0.78** & **0.45** & **0.44** & 0.59 \\ LAM (online) & 0.76 & 0.87 & **0.90** & 0.60 & **0.56** & **0.77** & 0.57 & **0.74** & **0.78** & **0.45** & 0.43 & **0.63** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Results on MSLS.** LAM yields state-of-the-art uncertainties and matches the predictive performance of deterministic trained models. We evaluate on both the validation set and the official challenge set (Warburg et al., 2020).
Figure 10: **Sparsification curve.** Online and posthoc LAM’s sparsification curves monotonically increase, illustrating that they reliably associate higher uncertainty to harder observations.
Figure 9: **Images with lowest and highest variance** for PFE, post-hoc LAM, and online LAM across MSLS validation set. LAM reliably associates high uncertainty to images that are blurry, are captured facing the pavement, or contain vegetation. These images do not contain features that are descriptive of a specific place, making them especially challenging to geographically locate.
**Acknowledgement.** This work was supported by research grants (15334, 42062) from VILLUM FONDEN. This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 757360). The work was partly funded by the Novo Nordisk Foundation through the Center for Basic Machine Learning Research in Life Science (NNF20OC0062606). The authors acknowledge the Pioneer Centre for AI, DNRF grant P1.
|
2303.00311
|
Modeling Multiple User Interests using Hierarchical Knowledge for
Conversational Recommender System
|
A conversational recommender system (CRS) is a practical application for item
recommendation through natural language conversation. Such a system estimates
user interests for appropriate personalized recommendations. Users sometimes
have various interests in different categories or genres, but existing studies
assume a unique user interest that can be covered by closely related items. In
this work, we propose to model such multiple user interests in CRS. We
investigated its effects in experiments using the ReDial dataset and found that
the proposed method can recommend a wider variety of items than that of the
baseline CR-Walker.
|
Yuka Okuda, Katsuhito Sudoh, Seitaro Shinagawa, Satoshi Nakamura
|
2023-03-01T08:15:48Z
|
http://arxiv.org/abs/2303.00311v1
|
# Modeling Multiple User Interests using Hierarchical Knowledge for Conversational Recommender System
###### Abstract
A conversational recommender system (CRS) is a practical application for item recommendation through natural language conversation. Such a system estimates user interests for appropriate personalized recommendations. Users sometimes have various interests in different categories or genres, but existing studies assume a unique user interest that can be covered by closely related items. In this work, we propose to model such multiple user interests in CRS. We investigated its effects in experiments using the ReDial dataset and found that the proposed method can recommend a wider variety of items than that of the baseline CR-Walker.
## 1 Introduction
Recommender System is an attractive field of research and development for many commercial applications. A typical recommender system recommends items to users using collaborative filtering [1; 2] based on a large amount of accumulated data from other users' choices. A major drawback of this approach is the so-called _cold start problem_ [3], which arises when a target user has no history from which to identify his/her interests and preferences for recommendation. Interaction with users can mitigate this problem by iteratively updating their interests and preferences. Natural language conversation is a promising way for interaction between users and recommender systems, especially for new under-experienced users. Conversational Recommender System (CRS) [4; 5] is a variant of such a recommender system.
CRS recommends items to users according to their _user portrait_ through conversation. The user portrait is a representation of user interests used for the recommendation [6]. Existing CRS studies [5; 6] represent a user portrait using a user-dependent embedding vector and use it to choose appropriate items for recommendation.
However, such a representation leads to a limited variety of item recommendations because a portrait vector corresponds to one point in the embedding space. In cold start situations, users present their interests step by step through conversation, so their interests are not always specific enough to be represented in such a way. Furthermore, in the case of users having wide interests ranging over different categories or genres, this representation would fail to capture their multiple interests.
Fig. 1 shows an example in a movie domain. A user mentions four entities (two genres and two titles) in two utterances, then the CRS estimates the user's portrait from the utterances. These genres and titles are important clues to estimate the user portrait, but the representation by a single portrait embedding vector calculated as a weighted sum of the embeddings mixes these different interests. As a result, the CRS fails to capture the users' interests ranging over different genres such as animation, comedy, and science fiction (derived from Scanner Darkly) and makes recommendations focusing on just one genre of comedy. Even if the user asks the system to recommend more items for wider coverage, it would still recommend similar items related to comedy.
In this paper, we tackle this problem and propose a method that models a user portrait with multiple interests considering different granularity by the use of hierarchical knowledge of items and genres. Using the hierarchical knowledge, we can keep multiple interests in different branches with appropriate granularity, such as a genre of comedy and a specific item of Scanner Darkly. In experiments using the ReDial dataset, the proposed method recommended a wider variety of items than the CR-Walker baseline.
Figure 1: An illustration of a difficult case for existing systems to capture multiple interests of users: two example utterances are shown at the top, red text in user utterances indicates entities, entities extracted from user utterances are represented by gray boxes and arranged vertically, and the green frame represents the user portrait.
## 2 Related Work
There have been two major approaches to natural language-based recommendation systems. One is called Interactive Information Retrieval (IIR), which asks users questions in natural language. Zhang et al. [7] proposed an information retrieval system that asks explicit questions to fill the slots describing user interests. The other is CRS which mimics natural language dialogues for the recommendation by humans. It has advantages in its naturalness and sophisticated dialogue strategies to explore and narrow down users' interests. This work is based on the latter approach and aims to leverage these advantages to obtain various user interests in the recommendation task.
CRS has attracted many studies in recent years along with the advance in the field of natural language dialog systems and chatbots. One important problem in CRS is incorporating background knowledge to make accurate recommendations and generate appropriate system utterances. The background knowledge determines what information the system uses in its utterances. Zhou et al. [5] and Ma et al. [6] focus on the relationship between dialogue strategies and background knowledge for these purposes. In these existing studies, the background knowledge is estimated using user portraits, utterance contexts, relationships between knowledge, and so on. CR-Walker [6] uses a reasoning tree to obtain accurate background knowledge. However, these studies do not model a user portrait to capture multiple user interests.
With respect to the problem of multiple user interests, Qi et al. [8] proposed a recommendation system that considers multiple user interests based on click rates in Web browsing. However, their approach cannot be applied to CRS because their system relies on large-scale click-rate data.
In this work, we focus on the problem of multiple user interests in CRS and propose a method to model user portraits that are capable of capturing multiple interests.
Figure 2: CRS using background knowledge.
## 3 CR-Walker: a conventional CRS model with user portrait
Before we present our proposed method, we describe CR-Walker[6] in this section1. CR-Walker is a conventional CRS method using user portraits. It consists of three modules: an utterance encoder, a user portrait extractor, and a system utterance generator using a reasoning tree. Its overall architecture is illustrated in Fig. 3.
Footnote 1: Due to the space limitation, we omit some details that would not be required in this paper. Please refer to the literature for its detailed formulation.
### Utterance encoding
Suppose a user and a CRS take turns in their conversation and let \(x_{t}\) and \(y_{t}\) be user and systems utterances, respectively. The utterance encoder captures a contextual representation of the conversation between a user and the system. User and system utterances are converted into embeddings using BERT [9] and then converted into contextual embeddings using LSTM [10]. The contextual embedding of the conversation until the user utterance at the time step \(t\), \(u_{t}\), is denoted as:
\[u_{t}=\text{LSTM}_{\text{hidden}}\left(u_{t-1},\text{BERT}\left([y_{t-1};x_{t }]\right)\right), \tag{1}\]
where the subscript "hidden" indicates the vector \(u_{t}\) is the hidden vector of the LSTM at time \(t\) and not its output vector. The input to the LSTM is the concatenation of \(y_{t-1}\) and \(x_{t}\).
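A schematic PyTorch sketch of Eq. (1) — ours, not the authors' code. The per-turn vectors stand in for BERT embeddings of \([y_{t-1};x_{t}]\), the LSTM hidden state is kept as \(u_{t}\), and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Schematic version of Eq. (1): u_t = LSTM_hidden(u_{t-1}, BERT([y_{t-1}; x_t]))."""
    def __init__(self, utt_dim=768, ctx_dim=256):
        super().__init__()
        self.cell = nn.LSTMCell(utt_dim, ctx_dim)

    def forward(self, turn_embeddings):
        # turn_embeddings: (T, utt_dim), one vector per concatenated pair [y_{t-1}; x_t],
        # e.g. produced by a (frozen) BERT encoder -- treated as given here
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros(1, self.cell.hidden_size)
        contexts = []
        for emb in turn_embeddings:
            h, c = self.cell(emb.unsqueeze(0), (h, c))   # the hidden state is kept as u_t
            contexts.append(h.squeeze(0))
        return torch.stack(contexts)                     # u_1, ..., u_T

enc = ContextEncoder()
u = enc(torch.randn(3, 768))    # three dialogue turns with placeholder "BERT" embeddings
print(u.shape)                  # torch.Size([3, 256])
```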
### User portrait extraction
The user portrait extractor derives a user portrait vector \(p_{t}\) from the user utterances \(x_{1},\ldots,x_{t}\) and a knowledge graph. It does not use whole user utterances but focuses
Figure 3: Overview of CRS.
only on mentioned entities. Here, we assume that the knowledge graph represents relations among a set of named entities \(\mathcal{N}\).
First, the user portrait extractor finds named entities \(E_{t}=e_{1},\ldots,e_{N_{t}}\) from the user utterances using named entity recognition, where \(N_{t}\) is the number of mentioned entities in \(x_{1},\ldots,x_{t}\). The entities are converted into the corresponding entity embeddings \(M_{t}=\{h_{1},\ldots,h_{N_{t}}\}\). Each entity embedding \(h_{i}\) is calculated through an \(L\)-layered network, and its intermediate embedding at the \(l\)-th layer is denoted as:
\[h_{i}^{(l)}=\sigma\left(\left(\sum_{r\in\mathcal{R}}\sum_{e^{\prime}\in \mathcal{N}_{e_{i}}^{r}}\frac{1}{|\mathcal{N}_{e_{i}}^{r}|}W_{r}^{(l-1)}h_{e^ {\prime}}^{(l-1)}\right)+W_{0}^{(l-1)}h_{i}^{(l-1)}\right), \tag{2}\]
where \(\mathcal{N}_{e_{i}}^{r}\) is the set of neighboring entities of \(e_{i}\) under the relation \(r\) in the knowledge graph, \(h_{e^{\prime}}^{(l-1)}\) is the embedding vector for the entity \(e^{\prime}\) in \(\mathcal{N}_{e_{i}}^{r}\) at \((l-1)\)-th layer, \(W_{r}^{(l)}\in\mathbb{R}^{d\times d}\) and \(W_{0}^{(l)}\in\mathbb{R}^{d\times d}\) are learnable matrices for integrating relationship-specific information from the neighboring and current entities. \(h_{*}^{0}\) is derived from an embedding matrix \(W_{\text{emb}}\in\mathbb{R}^{d\times\mathcal{N}}\); this means that the user portrait extractor model covers all the named entities in \(\mathcal{N}\) regardless of their appearance in the training data. The knowledge graph is extracted from DBpedia and then represented using R-GCN [11].
Then, the user portrait \(p_{t}\in\mathbb{R}^{d}\) is derived through attention onto \(M_{t}\) as follows:
\[p_{t} =\alpha_{t}*M_{t}, \tag{3}\] \[\alpha_{t} =\text{softmax}\left(w_{p}\cdot\tanh\left(W_{p}M_{t}\right) \right), \tag{4}\]
where \(w_{p}\) is the weight for each entity embedding and \(W_{p}\) is the weight matrix for the self-attention.
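Eqs. (3)-(4) amount to a single self-attention pooling step over the mentioned-entity embeddings. The following sketch is ours; the matrix layout is transposed relative to the text, and all values are random placeholders.

```python
import torch

def user_portrait(M_t, w_p, W_p):
    """Eqs. (3)-(4): self-attention pooling over mentioned-entity embeddings.
    Here M_t is (N_t, d) with one entity per row, W_p is (d, d), w_p is (d,)."""
    scores = torch.tanh(M_t @ W_p.T) @ w_p     # one scalar score per mentioned entity
    alpha = torch.softmax(scores, dim=0)       # attention weights alpha_t
    return alpha @ M_t                         # weighted sum -> portrait p_t of size (d,)

d, N_t = 64, 5
M_t = torch.randn(N_t, d)                      # placeholder R-GCN entity embeddings
p_t = user_portrait(M_t, torch.randn(d), torch.randn(d, d))
print(p_t.shape)                               # torch.Size([64])
```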
### System utterance generation
According to the user portrait extracted by the procedure above, the system generates responses to ask more questions and give recommendations to the user. The generation process consists of: (1) making a reasoning tree to determine what will be mentioned in the response and (2) generating a response using the reasoning tree and a language generation model.
#### 3.3.1 Reasoning Tree
A reasoning tree is a tree-structured graph as shown in Fig. 4. CR-Walker uses two types of nodes for dialogue acts and knowledge graph entities, both represented by their embedding vectors derived from \(u_{t}\) and \(p_{t}\).
The root node stores the dialog act embedding \(i_{t}\) denoted as follows:
\[i_{t}=W_{int}^{2}\,\text{ReLU}\left(W_{int}^{1}u_{t}\right), \tag{5}\]
where \(W_{int}^{1}\) and \(W_{int}^{2}\) are weight matrices. There are three dialog acts: _Recommend_, _Query_, and _Chat_, and the system changes its behaviors according to it as described later.
Vectors associated with the descendant nodes come from embeddings of knowledge graph entities but are also influenced by their ancestor nodes. A context embedding \(c_{t}\) is defined for each descendant node as follows2:
Footnote 2: We omit a superscript specifying the node for simplicity.
\[c_{t} =\gamma_{t}u_{t}+\left(1-\gamma_{t}\right)p_{t}, \tag{6}\] \[\gamma_{t} =\begin{cases}\sigma\left(w_{1}\cdot\left[u_{t};p_{t};i_{t} \right]\right),&\text{(if the parent is the root)}\\ \sigma\left(w_{2}\cdot\left[u_{t};p_{t};i_{t};h_{\text{parent}}\right]\right),& \text{(otherwise)}\end{cases} \tag{7}\]
where \(w_{1}\) and \(w_{2}\) indicate weight vectors, \(h_{\text{parent}}\) is the embedding of the entity associated with the parent node, and \(\left[a;b\right]\) represents concatenation of vectors.
Suppose we are going to append a child node associated with an entity \(e^{\prime}\) to the reasoning tree. We define a score \(\hat{s}_{e^{\prime}}\) using context embeddings over its ancestor nodes as follows:
\[\hat{s}_{e^{\prime}}=\left(h_{e^{\prime}}\cdot\left(c_{t}+c_{t}^{\text{parent }}\right)\right), \tag{8}\]
where \(c_{t}\) is the context vector of the node to which we append the child node, and \(c_{t}^{\text{parent}}\) is that of its parent node. Note that \(c_{t}^{\text{parent}}\) is calculated recursively up to the root node. Based on the score, all of the entities that satisfy the function \(\text{WALK}(e)=\{e^{\prime}\mid\hat{s}_{e^{\prime}}>\tau\}\) form new nodes in the reasoning tree3. The threshold hyperparameter \(\tau\) controls the choice of entities included in the reasoning tree. The node appending step is repeated by \(N\) times to induce an \((N+1)\)-layer reasoning tree. CR-Walker induces a three-layer reasoning tree whose nodes represent different types of entities according to dialog acts, as shown in Table 1. The dialog act _Recommend_ lets the
Figure 4: System Utterance Generator.
system recommend items to the user. _Query_ and _Chat_ let the system ask questions to a user and say something about mentioned entities, respectively.
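The scoring and selection in Eq. (8) and \(\text{WALK}(e)\) can be sketched as follows; this is our illustration, and the entity embeddings, context vectors, and threshold are random placeholders.

```python
import torch

def walk(entity_embs, c_node, c_parent, tau=0.0):
    """Eq. (8) plus the selection WALK(e): score every candidate entity e' against
    the sum of the current and parent context vectors, keep scores above tau."""
    scores = entity_embs @ (c_node + c_parent)         # \hat{s}_{e'} for all candidates
    keep = (scores > tau).nonzero(as_tuple=True)[0]    # indices appended to the tree
    return keep, scores

d, n_entities = 64, 100
embs = torch.randn(n_entities, d)                      # placeholder entity embeddings
children, s = walk(embs, torch.randn(d), torch.randn(d), tau=5.0)
print(len(children), "entities pass the threshold")
```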
#### 3.3.2 Utterance generation
Based on the reasoning tree, the system generates utterances. The reasoning tree is converted into a sequence and then used as an input prompt for a pre-trained language model to predict a system utterance that follows the input prompt.
## 4 Proposed method
We propose a method that extends the _User Portrait Extractor_ of CR-Walker to capture a wide range of user interests. The proposed method uses a hierarchical structure of the entity knowledge and considers multiple abstract item classes (i.e., genres) explicitly to calculate a user portrait vector.
In this work, we assume the item hierarchy of the entity knowledge: genres and titles in the case of the movie domain. Our motivation is to capture various user interests ranging over different genres. User portraits induced by CR-Walker are based on entity embeddings and focus on a limited number of entities suggested through attention (Eqs. (3)-(4)). In contrast, the proposed method aims to capture user interests in different genres using explicit genre-level user portraits in the reasoning tree induction.
### Hierarchical item knowledge
We assume that item knowledge in the target domain is available with the information on item classes. We use the knowledge to construct a hierarchical structure as shown in Fig. 5, which has item nodes (e.g., movie titles) and abstract class nodes (e.g., movie genres). Here, we assume each item is associated with only one abstract class for simplicity.
We assign embedding vectors to the nodes in the hierarchical knowledge. The embedding vector for each item node is given simply by the average of word embeddings for nouns and adjectives in its item description: the abstract in the case of
\begin{table}
\begin{tabular}{|l|l|l|} \hline Dialog act (root) & Middle layer & Leaf layer \\ \hline Recommend & Attributes of mentioned items & Candidate items \\ \hline Query & Generic classes & Attributes \\ \hline Chat & Mentioned entities & All entities \\ \hline \end{tabular}
\end{table}
Table 1: Types of entities stored in the reasoning tree for different dialog acts.
movies. Each abstract class node is given by the average of the embeddings of associated item nodes. The embedding model is a pre-trained one such as word2vec. For example in Fig. 5, the embedding of the genre node _Science Fiction_ is the average of the embeddings of the item nodes _Scanner Darkly_ and _Avatar_.
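A minimal sketch of this hierarchical-embedding construction, assuming a generic word-vector lookup in place of the actual word2vec model and averaging all tokens that have vectors rather than only nouns and adjectives; the tokens and dimensionality are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy 4-dimensional "word vectors", purely illustrative
word_vecs = {w: rng.normal(size=4)
             for w in ["undercover", "agent", "drug", "future", "alien", "planet"]}

def item_embedding(description_tokens, word_vecs):
    """Average the vectors of the description's words; the paper keeps only
    nouns and adjectives, here we simply keep every token that has a vector."""
    vecs = [word_vecs[w] for w in description_tokens if w in word_vecs]
    return np.mean(vecs, axis=0)

def genre_embedding(item_embs):
    """An abstract class (genre) node is the average of its items' embeddings."""
    return np.mean(item_embs, axis=0)

scanner = item_embedding(["undercover", "agent", "drug", "future"], word_vecs)
avatar = item_embedding(["alien", "planet", "future"], word_vecs)
science_fiction = genre_embedding([scanner, avatar])
print(science_fiction.shape)   # (4,)
```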
### User portrait extraction considering multiple interests
We calculate the similarity between a user utterance and each node embedding in the hierarchical knowledge as the cosine between these vectors. The vector representation of the user utterances is given in the same way as for the item node. The node score is updated by user utterances in the dialog; the cosine similarity is accumulated as the score of each node, \(S_{t}^{H}\).
Using these node scores, we extract a user portrait vector \(p_{t}^{H}\) and use it for the middle layer of the reasoning tree to constrain the recommendation with the user's interests in the abstract classes. We choose two attribute entities with the highest scores, \(e_{1}^{H}\) and \(e_{2}^{H}\), to consider multiple user interests ranging over different attributes. Their corresponding entity embeddings \(h_{1}^{H}\) and \(h_{2}^{H}\) are used to calculate the user portrait \(p_{t}^{H}\) as follows:
\[p_{t}^{H} =\alpha_{t}^{H}*M_{t}^{H}, \tag{9}\] \[\alpha_{t}^{H} =\text{softmax}\left(w_{p}\cdot\tanh\left(W_{p}M_{t}^{H}\right) \right),\] (10) \[M_{t}^{H} =(h_{1}^{H},h_{2}^{H}). \tag{11}\]
Figure 5: Constructing hierarchical knowledge.
Note that we use the same \(w_{p}\) and \(W_{p}\) as CR-Walker. We use the user portrait \(p_{t}^{H}\) only for the attribute-level reasoning and still use the original user portrait \(p_{t}\) for the item-level reasoning.
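The following sketch (ours, with invented genre names and random vectors) shows the accumulation of node scores \(S_{t}^{H}\) and the top-2 attribute portrait of Eqs. (9)-(11):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def update_scores(scores, utterance_vec, node_embs):
    """Accumulate cosine similarity between the current utterance and every
    node of the hierarchical knowledge (S_t^H in the text)."""
    for name, emb in node_embs.items():
        scores[name] = scores.get(name, 0.0) + cosine(utterance_vec, emb)
    return scores

def attribute_portrait(scores, node_embs, w_p, W_p):
    """Keep the two highest-scoring attribute nodes and attend over them (Eqs. 9-11)."""
    top2 = sorted(scores, key=scores.get, reverse=True)[:2]
    M = np.stack([node_embs[g] for g in top2])           # (2, d)
    a = np.tanh(M @ W_p.T) @ w_p
    alpha = np.exp(a - a.max())
    alpha /= alpha.sum()                                 # softmax
    return top2, alpha @ M                               # genres and portrait p_t^H

d = 8
rng = np.random.default_rng(1)
genres = {g: rng.normal(size=d) for g in ["Comedy", "Horror", "Science Fiction"]}
scores = update_scores({}, rng.normal(size=d), genres)
top2, p_H = attribute_portrait(scores, genres, rng.normal(size=d), rng.normal(size=(d, d)))
print(top2, p_H.shape)
```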
## 5 Experiments
We investigate the performance of the proposed method through the following experiments to compare it with the baseline of CR-Walker.
### Setup
For the experiments, we used ReDial [4], a public conversation recommendation dataset in the movie domain. Each recommendation conversation was performed between two crowd workers; one played the role of a recommender and the other was a seeker. Recommendation utterances by the recommender were annotated with their correct recommendation items (movie titles). In each conversation, at least four different movies are mentioned.
As the baseline, we used the authors' implementation of CR-Walker4 with some modifications: (1) Entities used for \(M_{t}\) to induce a user portrait in Eq. (3) were limited to movie genres for consistent comparisons with the proposed method; (2) When the dialog act is _Recommend_, the system induces a reasoning tree by choosing top-1 entity among entities from movie attributes such as genre, year, actor, etc. in the middle layer and top-2 movie titles in the leaf layer5.
Footnote 4: [https://github.com/truthless11/CR-Walker](https://github.com/truthless11/CR-Walker)
Footnote 5: We observed the threshold-based entity selection described in 3.3.1 results in top-1 entities in most cases, so this modification would not affect the performance of CR-Walker seriously.
Resources used for the proposed method were prepared as follows. The hierarchical knowledge of movie genres and titles came from the CR-Walker implementation, which was originally extracted from DBpedia. The abstract description for each movie title was extracted from the corresponding DBpedia entry and Wikipedia article; we extracted it from DBpedia entries when it was available; otherwise, we applied a simple matching pattern of "_[movie name] is..._" to extract the abstract description. For the word embeddings used in the proposed method, we used the pre-trained word2vec model _GoogleNews-vectors-negative300_6 and the Natural Language Toolkit7.
Footnote 6: [https://code.google.com/archive/p/word2vec](https://code.google.com/archive/p/word2vec)
Footnote 7: [https://www.nltk.org/](https://www.nltk.org/)
### Results
First, we measured the performance of the recommendation using Recall@K and coverage. Recall@K is the recall of K-best retrieval results, which is widely used in information retrieval. The coverage measures the variety of item recommendations as the fraction of unique recommended items (over a given test set) relative to the size of the item set. Table 2 shows the results. The proposed method did not outperform CR-Walker in Recall@K but achieved better coverage. This suggests the proposed method covers a wider range of items in its recommendations than CR-Walker, at the cost of a slight degradation in recommendation accuracy.
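For reference, the two metrics can be computed as in the following sketch (our code; item names are placeholders):

```python
def recall_at_k(recommended, gold, k):
    """Fraction of ground-truth items that appear among the top-k recommendations."""
    hits = sum(1 for g in gold if g in recommended[:k])
    return hits / max(len(gold), 1)

def coverage(all_recommendations, item_catalogue):
    """Distinct items recommended anywhere on the test set, divided by the item-set size."""
    unique = set()
    for recs in all_recommendations:
        unique.update(recs)
    return len(unique) / len(item_catalogue)

catalogue = [f"movie_{i}" for i in range(100)]
recs = [["movie_1", "movie_2"], ["movie_1", "movie_7"], ["movie_3", "movie_2"]]
print(recall_at_k(recs[0], ["movie_2"], k=1), coverage(recs, catalogue))  # 0.0 0.04
```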
Then, we measured the performance of the utterance generation using BLEU [12], distinct-N [13], and F1. Table 3 shows the results. The proposed method achieved slightly higher distinct scores than CR-Walker while the BLEU and F1 were almost the same. This suggests the proposed method generates more diverse utterances
\begin{table}
\begin{tabular}{p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt}} \hline Turn & Model & Utterance & Dialog act & Middle layer & Leaf layer \\ \hline \hline \multirow{3}{*}{1} & User & i love to watch **funny movies** & & & \\ \cline{2-6} & CR-Walker & one of my favorite comedy movies is the hangover. melissa mccarthy is hilarious. i also really like bridesmaids & Recommend & Comedy & (1) The Hangover (2) Bridesmaids \\ \cline{2-6} & Proposed & one of my favorite comedy movies is the hangover. melissa mccarthy is hilarious. i also really like bridesmaids & Recommend & Comedy & (1) The Hangover (2) Bridesmaids \\ \hline \multirow{3}{*}{2} & User & yes, i love adam sandler & & & \\ \cline{2-6} & CR-Walker & have you seen click or the waterboy? & Recommend & American films & (1) Click (2) The Waterboy \\ \cline{2-6} & Proposed & have you seen click or the waterboy? & Recommend & Comedy & (1) Click (2) The Waterboy \\ \hline \multirow{3}{*}{3} & User & just watched the mask last night! there is a new one Son of the Mask. i like & & \\ \cline{2-6} & & some **scary movie** & & & \\ \cline{2-6} & CR-Walker & have you seen tommy boy or ace ventura: pet detective? & Recommend & **Comedy** & (1) Tommy Boy (2) Ace Ventura: Pet Detective \\ \cline{2-6} & Proposed & have you seen tucker \& dale vs. evil? or leprechaun? & Recommend & **Horror** & (1) Tucker and Dale vs. Evil (2) Leprechaun \\ \hline \end{tabular}
\end{table}
Table 4: Example of the recommendation and utterance generation by the baseline CR-Walker and the proposed method. Middle layer entities (attributes) and leaf layer entities (items) are derived from a reasoning tree.
\begin{table}
\begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline Model & R@1 & R@10 & R@50 & Cov. \\ \hline \hline CR-Walker & 3.33 & 14.6 & 30.5 & 17.4 \\ Proposed & 3.23 & 14.5 & 30.2 & **21.1** \\ \hline \end{tabular}
\end{table}
Table 2: Recall@K (R@K) and coverage (Cov.) of the recommendations.
\begin{table}
\begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline Model & BLEU & dist-1 & dist-2 & dist-3 & F1 \\ \hline \hline CR-Walker & 7.62 & 5.5 & 23.0 & 39.1 & 15.64 \\ Proposed & 7.64 & **6.1** & **24.3** & **40.4** & 15.63 \\ \hline \end{tabular}
\end{table}
Table 3: BLEU, distinct (dist)-1/2/3, and F1 of utterance generation results.
due to the change in the recommendation strategy, which coincides with the findings from the previous results.
### Analysis
#### 5.3.1 Recommendation and utterance generation
We analysed the recommendation and utterance generation results of CR-Walker and the proposed method. Table 4 shows an example, together with the middle layer (attribute) and leaf layer (item) entities in the reasoning tree, which are also used to constrain the utterance generation. The user indicates an interest in funny (_comedy_) movies in the first utterance, and both systems behave the same. They work similarly in the next turn. In the third turn, the user mentions an interest in scary (_horror_) movies. While CR-Walker remains trapped in the comedy genre, the proposed method successfully captures the user's interest in horror and recommends movies with both comedy and horror aspects.
#### 5.3.2 Transition of user interests across different genres
We investigated the transitions of user interests across different genres that occurred in the experiments. Tables 6 and 7 show the transition probabilities across middle-layer (attribute) entities within one conversation for CR-Walker and the proposed method, respectively. Each cell in the tables is colored according to the corresponding transition probability. Their difference clearly indicates that the proposed method yields more top-1 attribute transitions than CR-Walker. This suggests the proposed method can adapt to possible changes of a user portrait due to multiple user interests.
## 6 Conclusions
In this work, we proposed a method to model multiple user interests for CRS. The proposed method uses hierarchical knowledge of abstract classes and items and incorporates multiple abstract classes explicitly into user portraits used for recommendation and utterance generation. It uses the user portrait to induce a reasoning tree based on the similarity between estimated user interests and nodes in the hierarchical knowledge measured using word embeddings. Experimental results using the ReDial dataset showed that the proposed method recommends a wider variety of items and generates more diverse utterances than CR-Walker. Our detailed analyses also suggested the proposed method can capture multiple user interests ranging over different classes.
Future work includes user studies using the proposed system rather than the investigation with given dialogue contexts of ReDial, pursuing sophisticated conversational recommendation strategies to narrow down user interests, and appropriate evaluation of entire CRS systems.
|
2310.18466
|
Integer Sequences: Irregular Arrays and Intra-Block Permutations
|
This article investigates integer sequences that partition the sequence into
blocks of various lengths - irregular arrays. The main result of the article is
explicit formulas for numbering of irregular arrays. A generalization of Cantor
diagonal method is proposed. We also define and describe intra-block
permutations of natural numbers. Generalizations of reluctant sequences are
introduced, namely generalized reluctant sequences and generalized reverse
reluctant sequences. Explicit formulas are presented for these sequences. The
article provides numerous examples to illustrate all statements.
|
Boris Putievskiy
|
2023-10-27T20:21:45Z
|
http://arxiv.org/abs/2310.18466v1
|
# Integer Sequences: Irregular Arrays and Intra-Block Permutations
###### Abstract
This article investigates integer sequences that partition the sequence into blocks of various lengths - irregular arrays. The main result of the article is explicit formulas for numbering of irregular arrays. A generalization of Cantor diagonal method is proposed. We also define and describe intra-block permutations of natural numbers. Generalizations of reluctant sequences are introduced, namely generalized reluctant sequences and generalized reverse reluctant sequences. Explicit formulas are presented for these sequences. The article provides numerous examples to illustrate all statements.
###### Contents
* 1 Introduction
* 2 Partitions of the Set of Positive Integers
* 3 Intra-Block Permutation of Integer Positive Numbers
* 4 Generalized reluctant sequences
## 1 Introduction
Denote the set of integers by \(\mathbb{Z}\), the set of nonnegative integers by \(\mathbb{Z}^{*}\), the set of positive integers by \(\mathbb{Z}^{+}\), the set of positive real numbers by \(\mathbb{R}^{+}\). Denote the set of integer sequences by \(\mathcal{A}\) and the set of positive integer sequences by \(\mathcal{A}^{+}\).
A pairing function is a function that reversibly maps \(\mathbb{Z}^{+}\times\mathbb{Z}^{+}\rightarrow\mathbb{Z}^{+}\). A permutation of the natural numbers is a bijective map \(\mathbb{Z}^{+}\rightarrow\mathbb{Z}^{+}\).
A block (or segment) of a sequence is any set of the consecutive terms of the form \((a_{k+1},a_{k+2},a_{k+3},\,...\,a_{k+m})\), where \(k\in\mathbb{Z}^{*},\,\,\,m\in\mathbb{Z}^{+}\), and \(m\) is the length of the block.
Throughout this paper, we will refer to sequences by their \(Annnnn\) numbers, as
found in the Online Encyclopedia of Integer Sequences [1]. Denote the sequence of natural numbers \((1,2,3,...)\)\(A000027\) by \(\,\xi\).
## 2 Partitions of the Set of Positive Integers
**Definition 2.1**.: Let sequences \(\alpha\): \(a_{1},a_{2},a_{3},...\in\mathcal{A}\) and \(\beta\): \(b_{1},b_{2},b_{3},...\in\mathcal{A}^{+}\). The sequence \(\beta\) partitions the sequence \(\alpha\) into blocks of lengths \(b_{1},b_{2},b_{3},...\). The sequence \(\alpha\) is written as an irregular array read by rows:
\(a_{1},a_{2},...a_{b_{1}},\)
\(a_{b_{1}+1},a_{b_{1}+2},...a_{b_{1}+b_{2}},\)
\(a_{b_{1}+b_{2}+1},a_{b_{1}+b_{2}+2},...a_{b_{1}+b_{2}+b_{3}},\)
...
The sequence \(\beta\) is called the partitioning sequence. We use two parameters, \(L(n)\) and \(R(n)\), to number the terms of an irregular array, where \(L(n)\) represents the block number and \(R(n)\) indicates the position within the block from left to right. Thus
\((1,1),(1,2),...(1,b_{1}),\)
\((2,1),(2,2),...(2,b_{2}),\)
\((3,1),(3,2),...(3,b_{3}),\)
...
Denote by \(B(s)=b_{1}+b_{2}+...+b_{s}\) the partial sums of \(\beta\),
\[B(0)=0,\ \ \ B(s)=B(s-1)+b_{s}. \tag{1}\]
Let \(L(0)=0\); for \(n\geq 1\) we get:
\[R(n)=n-B(L(n)-1). \tag{2}\]
Denote by \(R^{{}^{\prime}}(n)\) the position within the block from right to left. Then
\[R^{{}^{\prime}}(n)=B(L(n))+1-n,\ R(n)+R^{{}^{\prime}}(n)=b_{L(n)}+1.\]
Using (2), we can derive a formula for the inverse problem: how to calculate the number of terms if the values of the functions \(L\) and \(R\) are known.
\[n=B(L-1)+R.\]
Let the sequences \(\beta=\xi\), then a sequence \(\alpha\) is written as regular array read by rows:
\(a_{1},\)
\(a_{1},a_{2},\)
\(a_{1},a_{2},a_{3},\)
...
These formulas are commonly known (\(A003056,\ A002260,\ A004736\)). Row numbering of a regular array starts from 0: \(t=\lfloor\frac{\sqrt{8n-7}-1}{2}\rfloor\).
Then \(L(n)=t+1\),
\[R(n)=n-\frac{t(t+1)}{2},\ R^{{}^{\prime}}(n)=\frac{(t+1)(t+2)}{2}+1-n,\ R(n)+R ^{{}^{\prime}}(n)=t+2.\]
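As an illustration (ours, not part of the article), the following Python sketch computes \(L(n)\) and \(R(n)\) directly from the partial sums \(B(s)\) for an arbitrary partitioning sequence and checks the triangular closed forms above:

```python
from itertools import count
from math import isqrt

def block_index(n, b):
    """L(n) and R(n) for the irregular array defined by a partitioning
    sequence b(s), computed directly from the partial sums B(s)."""
    B_prev = 0
    for s in count(1):
        B = B_prev + b(s)          # B(s) = B(s-1) + b_s, with B(0) = 0
        if n <= B:
            return s, n - B_prev   # L(n) = s, R(n) = n - B(L(n) - 1)
        B_prev = B

# regular triangular array (beta = xi): compare with the closed forms in the text
for n in range(1, 16):
    L, R = block_index(n, lambda s: s)
    t = (isqrt(8 * n - 7) - 1) // 2            # floor((sqrt(8n-7) - 1) / 2)
    assert (L, R) == (t + 1, n - t * (t + 1) // 2)
print("closed forms agree with the direct computation")
```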
**Theorem 2.1**.: _Let \(x(n):\mathbb{Z}^{+}\rightarrow\mathbb{R}^{+}\) and \(x(n)\) is the largest root of the equation \(B(x)=n.\) Then_
\[L(n)=\lceil x(n)\rceil. \tag{3}\]
Proof.: By definition \(B(0)=0,\)\(B(1)=b_{1},\)\(B(2)=b_{1}+b_{2},\)... The function \(B(n)\) is strictly increasing. Therefore
\[0<x(1)<x(2)<...<x(b_{1})=1,\] \[1<x(b_{1}+1)<x(b_{1}+2)<...<x(b_{1}+b_{2})=2,\] \[2<x(b_{1}+b_{2}+1)<x(b_{1}+b_{2}+2)<...<x(b_{1}+b_{2}+b_{3})=3,\] \[\cdots\]
We obtain
\[\lceil x(1)\rceil=1,\lceil x(2)\rceil=1,...,\lceil x(b_{1})\rceil=1,\] \[\lceil x(b_{1}+1)\rceil=2,\lceil x(b_{1}+2)\rceil=2,...,\lceil x (b_{1}+b_{2})\rceil=2,\] \[\lceil x(b_{1}+b_{2}+1)\rceil=3,\lceil x(b_{1}+b_{2}+2)\rceil=3,...,\lceil x(b_{1}+b_{2}+b_{3})\rceil=3,\] \[\cdots\]
Let \(L(n)\) be the block number for the partitioning sequence \(\beta:b_{1},b_{2},b_{3},...\in\mathcal{A}^{+}\) and let \(m\in\mathbb{Z}^{+},\)\(m>1.\) The following properties hold.
**(P2.1.)** The block number of the sequence \(mb_{1},mb_{2},mb_{3},...\) is \(L(u),\) where \(u=\left\lfloor\frac{n-1}{m}\right\rfloor+1.\)
**(P2.2.)** Let the sequence \(\beta\) satisfy \(b_{s}\equiv 0\pmod m\) for \(s\geq 1.\) The block number of the sequence \(\dfrac{b_{1}}{m},\dfrac{b_{2}}{m},\dfrac{b_{3}}{m},...\) is \(L(mn).\)
**(P2.3.)** Let a sequences \(\widetilde{\beta}\) be the union of \(m\) rows of the sequence \(\beta\):
\(\widetilde{b_{1}}=b_{1}+b_{2}+...\,b_{m},\)\(\widetilde{b_{2}}=b_{m+1}+b_{m+2}+...\,b_{2m},\)\(\widetilde{b_{3}}=b_{2m+1}+b_{2m+2}+...\,b_{3m},\)\(...\) Then \(\widetilde{L}(n)=\lfloor\dfrac{L(n)+m-1}{m}\rfloor.\)
Let's examine some special cases of the sequence \(\beta.\)
**Example 2.0**.: Let \(p_{0}\in\mathbb{Z}^{+},\)\(b_{s}=p_{0}\). Using (1), (2) and (3) we get
\[B(s)=p_{0}s,\quad x(n)=\dfrac{n}{p_{0}},\quad L(n)=\Big{\lceil}\dfrac{n}{p_{ 0}}\Big{\rceil},\]
\[R(n)=n-p_{0}(L(n)-1).\]
**Example 2.1**.: Let the partitioning sequence \(\beta\) be a linear function \(b_{s}=p_{1}s+p_{0},\) where \(p_{0}\in\mathbb{Z},\)\(p_{1}\in\mathbb{Z}^{+}\). Using (1), (2) and (3) we get
\[B(s)=p_{1}\dfrac{s(s+1)}{2}+p_{0}s.\]
\[L(n)=\Big{\lceil}\frac{-2p_{0}-p_{1}+\sqrt{8np_{1}+(2p_{0}+p_{1})^{2}}}{2p_{1}} \Big{\rceil}. \tag{4}\]
\[R(n)=n-p_{1}\frac{(L(n)-1)L(n)}{2}+p_{0}(L(n)-1),\]
Let \(p_{1}=5\) and \(p_{0}=2,\) then \(L(n)=\lceil\sqrt{n+9}-3\rceil:\)
\(1,1,1,1,1,1,1,1,1,\)
\(2,2,2,2,2,2,2,2,2,\)
\(3,3,3,3,3,3,3,3,3,3,3,3,3,\)
...
**Example 2.1.1.** This is a special case of the previous example \(p_{0}=0,\)\(b_{s}=p_{1}s.\)
\[B(s)=p_{1}\frac{s(s+1)}{2},\]
\[L(n)=\Big{\lceil}\frac{-p_{1}+\sqrt{8np_{1}+p_{1}^{2}}}{2p_{1}}\Big{\rceil}.\]
For \(p_{1}=1\) we obtain the regular array and popular formula \(\underline{A002024}\)
\[L(n)=\Big{\lceil}\frac{-1+\sqrt{8n+1}}{2}\Big{\rceil}.\]
For \(p_{1}=2\) we get irregular array and the formula \(\underline{A000194}\)
\[L(n)=\Big{\lceil}\frac{-1+\sqrt{4n+1}}{2}\Big{\rceil}.\]
We can also solve this problem by using (P2.1).
\[L(n)=\Big{\lceil}\frac{-1+\sqrt{8u+1}}{2}\Big{\rceil},\,\,\,\mbox{where}\,\,\, u=\Big{\lfloor}\frac{n-1}{p_{1}}\Big{\rfloor}+1.\]
Article [4] presents an alternative method
\[L(n)=\Big{\lceil}\sqrt{\Big{\lceil}\frac{2n}{p_{1}}\Big{\rceil}}+\frac{1}{2} \Big{\rceil}-1.\]
**Example 2.1.2.** Cantor's diagonalization is a well-known method for numbering infinite arrays. In this example, we propose a generalization of the Cantor numbering method for two adjacent diagonals. A pair of neighboring diagonals is combined into one block. The sequence \(\alpha=\xi\). The partitioning sequence \(\beta\) is \(b_{s}=4s-1,p_{1}=4,\,\,p_{0}=-1,\,\,\underline{A004767}:\,\,\,3,7,11,15,19,...\) Then
\[L(n)=\Big{\lceil}\frac{-1+\sqrt{8n+1}}{4}\Big{\rceil}.\]
The partial sums \(B(s)=s(2s+1)\) is the sequence of second hexagonal numbers [3], A014105.
**Example 2.1.3.** Let \(d\in\mathbb{Z}^{+},\;\;d>1\). We shall combine \(d\) diagonals into one block, starting with the first diagonal. The sequence \(\alpha=\xi\). Then
\[b_{s}=d^{2}s-\frac{d(d-1)}{2},\quad B(s)=\frac{ds(ds+1)}{2}.\]
Using (4) for \(p_{1}=d^{2}\) and \(p_{0}=-\frac{d(d-1)}{2}\) we get
\[L(n)=\Big{\lceil}\frac{-1+\sqrt{8n-7}}{2d}\Big{\rceil},\]
A second way to solve this problem is to use (P2.3.).
\[L(n)=\lfloor\frac{t+d}{d}\rfloor,\;\mbox{where}\;t=\lfloor\frac{\sqrt{8n-7}-1 }{2}\rfloor.\]
For \(d=3\;b_{s}=9s-3\) is the sequences \(\underline{A017233}\) and we obtain
\(1,1,1,1,1,1,1,\\ 2,2,2,2,2,2,2,2,2,2,2,2,2,\\ 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,\\.\quad.\)
**Example 2.1.4.** Now we shall change example 2.1.3. by forming blocks of \(d\) adjacent diagonals, starting from the second diagonal, \(\alpha=\xi\). Then
\[b_{1}=1,\;b_{s}=d^{2}(s-1)-\frac{d(d-3)}{2}\;\mbox{for}\;s>1,\]
The partitioning sequence \(\beta\) is not a linear function.
\[B(0)=0,\;B(s)=\frac{(d(s-1)+1)(d(s-1)+2)}{2}\;\mbox{for}\;s>1.\]
Using [3] we obtain
\[L(n)=\Big{\lceil}\frac{2d-3+\sqrt{8n+1}}{2d}\Big{\rceil}.\]
We can solve this problem using a modified version of (P2.3.). Let a sequences \(\widetilde{\beta}:\;\widetilde{b_{1}}=b_{1},\;\widetilde{b_{2}}=b_{2}+...\;b_{ m+1},\;\widetilde{b_{3}}=b_{m+2}+b_{m+3}+...\;b_{2m+1},\;....\). Then
\[L(n)=\lfloor\frac{t+d-1}{d}\rfloor+1,\;\mbox{where}\;\;\;t=\lfloor\frac{ \sqrt{8n-7}-1}{2}\rfloor.\]
For \(d=3\) the sequence \(\beta\) is \(b_{1}=1,\;b_{s}=9(s-1),\;\mbox{for}\;s>1\) and we get
\(1,\)
\(2,2,2,2,2,2,2,2,2,2,\)
\(3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,\)
....
**Example 2.2.** Let the partitioning sequence \(\beta\) be a quadratic function
\(b_{s}=p_{2}s^{2}+p_{1}s+p_{0},\) where \(p_{0},\)\(p_{1}\in\mathbb{Z},\)\(p_{2}\in\mathbb{Z}^{+}.\)
\[B(s)=p_{2}\frac{s(s+1)(2s+1)}{6}+p_{1}\frac{s(s+1)}{2}+p_{0}s.\]
For the cubic equation
\[2p_{2}x^{3}+(3p_{2}+3p_{1})x^{2}+(p_{2}+3p_{1}+6p_{0})x-6n=0\]
we use Cardano's formula [5].
\[L(n)=\left\lceil-\frac{p_{1}+p_{2}}{2p_{2}}-\frac{U}{3\cdot 2^{2/3}\cdot p_{2} \sqrt[3]{V+\sqrt{4U^{3}+V^{2}}}}+\frac{1}{6\cdot 2^{1/3}\cdot p_{2}}\sqrt[3]{V+ \sqrt{4U^{3}+V^{2}}}\right\rceil, \tag{5}\]
where
\[U=3(-3p_{1}^{2}+12p_{0}p_{2}-p_{2}^{2}),\]
\[V=54(-p_{1}^{3}+6p_{0}p_{1}+12np_{2}^{2}+6p_{0}p_{2}^{2}+p_{1}p_{2}).\]
\(R(n)=n-p_{2}\frac{(L(n)-1)L(n)(2(L(n)-1)+1)}{6}-p_{1}\frac{(L(n)-1)L(n)}{2}-p_{0}(L(n)-1).\)
**Example 2.2.1.** This is a special case of the previous example \(p_{2}=1,\)
\(p_{1}=0,\)\(p_{0}\geq 0,\)\(b_{s}=s^{2}+p_{0}\). Using (1) and (5) we get
\[B(s)=\frac{s(s+1)(2s+1)}{6}+p_{0}s,\]
\[U=36p_{0}-3,\quad V=648n+324p_{0}.\]
The discriminant \(\Delta=-(4(36p_{0}-3)^{3}+(648n+324p_{0})^{2})<0\) and so the cubic equation has one real root and two non-real complex conjugate roots.
\[L(n)=\left\lceil-\frac{1}{2}-\frac{36p_{0}-3}{3\cdot 2^{2/3}\sqrt[3]{648n+324p_{0}+\sqrt{4(36p_{0}-3)^{3}+(648n+324p_{0})^{2}}}}+\frac{1}{6\cdot 2^{1/3}}\sqrt[3]{648n+324p_{0}+\sqrt{4(36p_{0}-3)^{3}+(648n+324p_{0})^{2}}}\right\rceil.\]
For \(p_{0}=0\) we obtain the formula for \(\underline{A074279}:\)
\[L(n)=\Bigg{\lceil}\frac{1}{2}\left(-1+\frac{1}{3^{1/3}W}+\frac{W}{3^{2/3}}\right) \Bigg{\rceil},\]
\[\text{where }W=(108n+\sqrt{3}\sqrt{-1+3888n^{2}})^{1/3}.\]
For \(p_{0}=1\) we obtain \(L(n):\)
\(1,1,\)
\(2,2,2,2,2,\)
\(3,3,3,3,3,3,3,3,3,3,3,\)
....
**Example 2.2.2.** Let the sequence \(\beta\) be a quadratic function with the coefficients
\[p_{2}=\frac{m-2}{2},\quad p_{1}=-\frac{m-4}{2},\quad p_{0}=0,\quad m\in\mathbb{ Z}^{+},\;m\geq 3.\]
Then \(b_{s}=\frac{(m-2)s^{2}-(m-4)s}{2}\) form the sequence of polygonal numbers [3]. Using (1) and (5) we get
\[B(s)=(m-2)\frac{s(s+1)(2s+1)}{12}-(m-4)\frac{s(s+1)}{4}.\]
The cubic equation takes the form \((2m-4)x^{3}+6x^{2}-(2m-10)x-12n=0.\) Then
\[U=-156+84m-12m^{2},\]
\[V=-2592+1512m-216m^{2}+5184n-5184mn+1296m^{2}n,\]
\[L(n)=\Bigg{\lceil}-\frac{1}{m-2}-\frac{U}{3\cdot 2^{2/3}\cdot(m-2)\sqrt[3]{V+ \sqrt{4U^{3}+V^{2}}}}\]
\[+\frac{1}{6\cdot 2^{1/3}\cdot(m-2)}\sqrt[3]{V+\sqrt{4U^{3}+V^{2}}}\Bigg{\rceil},\]
For \(m>19\), the cubic polynomial is in casus irreducibilis, with three distinct real roots. Therefore, we must use a trigonometric solution to find the roots.
For \(m=5\) the sequence \(b_{s}\) is the sequence of pentagonal numbers \(\underline{A000326}\).
Then \(L(n):\)
\(1,\)
\(2,2,2,2,2,\)
\(3,3,3,3,3,3,3,3,3,3,3,3,3,\)
...
**Example 2.2.3**.: Let the sequence \(\beta\) be a quadratic function with the coefficients
\[p_{2}=\frac{m}{2},\quad p_{1}=-\frac{m}{2},\quad p_{0}=1,\quad m\in\mathbb{Z}^{+}.\]
Then \(b_{s}=m\dfrac{s^{2}-s}{2}+1\) form the sequence of centered polygonal numbers [3]. Using (1) and (5) we get
\[B(s)=m\dfrac{s(s+1)(2s+1)}{12}-m\dfrac{s(s+1)}{4}+s.\]
The cubic equation takes the form \(mx^{3}+(6-m)x-6n=0.\) Then
\[L(n)=\left\lceil-\dfrac{2^{1/3}(6-m)}{\sqrt[3]{162m^{2}n+\sqrt{108(6-m)^{3}m^{3 }+26244m^{4}n^{2}}}}+\right.\]
\[\left.\dfrac{\sqrt[3]{162m^{2}n+\sqrt{108(6-m)^{3}m^{3}+26244m^{4}n^{2}}}}{3 \cdot 2^{1/3}\cdot m}\right\rceil\]
For \(m>24\), the cubic polynomial is in casus irreducibilis, with three distinct real roots. Consequently, we employ trigonometric solution to find the roots.
For \(m=5\) the sequence \(b_{s}\) is the sequence of centered pentagonal numbers \(\underline{A005891}.\) Then \(L(n):\)
1,
\(2,2,2,2,2,2,\)
\(3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,\)
\(.\quad.\quad.\)
**Example 2.3**.: Let the partitioning sequence \(\beta\) be a cubic function
\(b_{s}=p_{3}s^{3}+p_{2}s^{2}+p_{1}s+p_{0},\) where \(p_{0},p_{1},p_{2}\in\mathbb{Z},\)\(p_{3}\in\mathbb{Z}^{+}.\)
\[B(s)=p_{3}\dfrac{s^{2}(s+1)^{2}}{4}+p_{2}\dfrac{s(s+1)(2s+1)}{6}+p_{1}\dfrac{s (s+1)}{2}+p_{0}s.\]
There are formulas [5],[6] for solving the 4th degree equation
\[3p_{3}x^{4}+(6p_{3}+4p_{2})x^{3}+(3p_{3}+6p_{2}+6p_{1})x^{2}+(12p_{0}+6p_{1}+2 p_{2})x-12n=0.\]
A different approach is to use numerical solutions of equation.
**Example 2.3.1**.: This is a special case of the previous example \(p_{3}=1,\)
\(p_{1}=0,\)\(p_{0}\geq 1,\)\(b_{s}=s^{3}+p_{0}\). For \(p_{0}=1\) we get the equation
\(x^{2}(x+1)^{2}+4x-4n=0.\) Using (3) we obtain \(L(n):\)
\(1,1,\)
\(2,2,2,2,2,2,2,2,2,2,\)
\(3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,\)
...
**Example 2.3.2.** Let the sequence \(\beta\) be a cubic function with the coefficients
\[p_{3}=\frac{m-2}{6},\quad p_{2}=\frac{1}{2},\quad p_{1}=-\frac{m-5}{6},\quad p_{ 0}=0,\quad m\in\mathbb{Z}^{+},\;m\geq 3.\]
Then
\[b_{s}=\frac{1}{6}s(s+1)((m-2)s-(m-5))\]
form the sequence of pyramidal numbers [3]. For \(m=5\) we get the sequence of pentagonal pyramidal numbers \(\underline{A002411}\) and the equation
\[x(x^{3}+4x^{2}+4x+1)-12n=0.\]
Using (3) we obtain \(L(n):\)
\(1,\)
\(2,2,2,2,2,2,\)
\(3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3\)
...
**Example 2.4.** Let the partitioning sequence \(\beta\) be
\[b_{s}=(m-1)m^{s-1}\;\mbox{for}\;m>1\;\mbox{and}\;s\geq 1.\]
Using (1), (2) and (3) we get
\[B(s)=m^{s}-1,\quad L(n)=\lceil\log_{m}(n+1)\rceil,\]
\[R(n)=n-m^{\lceil\log_{m}(n+1)\rceil-1}+1.\]
The sequences \(\underline{A029837}\) and \(\underline{A081604}\) are examples of sequences generated by \(m=2\) and \(m=3\), respectively.
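A short sketch (ours) of Example 2.4, using integer arithmetic instead of the floating-point logarithm to avoid rounding issues at exact powers of \(m\):

```python
def L_and_R(n, m):
    """Example 2.4: b_s = (m-1)m^(s-1), so B(s) = m^s - 1, L(n) = ceil(log_m(n+1))
    and R(n) = n - m^(L(n)-1) + 1.  Integer arithmetic avoids floating-point log."""
    s = 1
    while m**s - 1 < n:            # smallest s with B(s) >= n
        s += 1
    return s, n - m**(s - 1) + 1

print([L_and_R(n, 2) for n in range(1, 8)])
# [(1, 1), (2, 1), (2, 2), (3, 1), (3, 2), (3, 3), (3, 4)]
```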
## 3 Intra-Block Permutation of Integer Positive Numbers
Let \(\beta\) is the partitioning sequence. The sequence \(\xi\) is written as irregular array read by rows:
\(B(0)+1,B(0)+2,\;...\,,B(0)+b_{1},\)
\(B(1)+1,B(1)+2,\;...\,,B(1)+b_{2},\)
\(.\quad.\quad.\)
\(B(k)+1,B(k)+2,\;...\,,B(k)+b_{k+1}.\)
**Definition 3.1.** A sequence \(\alpha\in{\cal A}^{+}\) is called an intra-block permutation of integer positive numbers if it maps each block \((B(k)+1,B(k)+2,...,B(k)+b_{k+1})\) to itself.
This means that each block of the sequences \(\alpha\)
\(a_{B(k)+1},a_{B(k)+2},...,a_{B(k)+b_{k+1}}\) is a permutation of the numbers
\(B(k)+1,B(k)+2,\)\(...,B(k)+b_{k+1}.\)
Denote by \(\pi(n)\) a permutation of the first \(n\) natural numbers \((p_{1},p_{2},p_{3},...p_{n}).\) The group of all permutations \(\pi(n)\) is denoted by \(S_{n}\) and is called the symmetric group of degree \(n\). The order of a permutation \(\pi,\) denoted by \(o(\pi),\) is defined as the smallest positive integer \(m\) such that \(\pi^{m}=id\)[5].
The set of numbers
\(a_{B(k)+1}-B(k),a_{B(k)+2}-B(k),...,a_{B(k)+b_{k+1}}-B(k)\)
is a permutation \(\pi(b_{k+1}).\)
The sequence \(\alpha\) is determined by the sequence \(\beta\) and the set of permutations
\(\pi(b_{1}),\pi(b_{2}),\pi(b_{3})...\) Let \(\alpha\circ\alpha\) be the self-composition \(\alpha(\alpha)\) of the sequence \(\alpha\) [7]. This operation is equivalent to multiplying the permutations:
\(\pi(b_{1})\circ\pi(b_{1}),\)\(\pi(b_{2})\circ\pi(b_{2}),\)\(\pi(b_{3})\circ\pi(b_{3}),...\)
The sequence \(\xi\) consists of identity permutations.
**Definition 3.2.** The order of a sequence \(\alpha,\) denoted by \(o(\alpha),\) is the smallest positive integer \(m\) such that \(m\) times self-composition \(\alpha^{m}=\xi.\)
The following properties hold.
**(P3.1.)** The sequence \(\alpha\) is permutation of the natural numbers.
**(P3.2.)** The order of \(\alpha\)\(o(\alpha)=LCM(o(\pi(b_{1}),o(\pi(b_{2}),o(\pi(b_{3}),...).\)
**(P3.3.)** The sequences \(\alpha,\alpha^{2},\alpha^{3},...\) form a cyclic group.
Let a sequence \(\gamma\): \(g_{1},g_{2},g_{3},...\)\(\in{\cal A}^{+}\) such that
\(g_{1}+g_{2}+...+g_{m_{1}}=b_{1},\)
\(g_{m_{1}+1}+g_{m_{1}+2}+...+g_{m_{1}+m_{2}}=b_{2},\)
\(g_{m_{1}+m_{2}+1}+g_{m_{1}+m_{2}+2}+...+g_{m_{1}+m_{2}+m_{3}}=b_{3},\)
\(.\)\(.\)\(.\)\(.\)
Thus, the sequence \(\gamma\) partitions the sequence \(\xi\) into blocks, such that each block of the sequence \(\beta\) is a collection of disjoint blocks of \(\gamma\) whose union is the corresponding block of \(\beta\). We denote this by \(\gamma\leq\beta.\) Let a sequence \(\mu\in{\cal A}^{+}\) be an intra-block permutation of integer positive numbers for the partitioning sequence \(\gamma.\)
**(P3.4.)** The sequences \(\mu\) is intra-block permutation for the partitioning sequence \(\beta.\)
**(P3.5.)** The set of sequences \({\cal A}^{+},\) equipped with a binary relation \(\leq,\) form partially ordered set [8]. The minimal element is the sequence \((1,1,1,...)\)\(\underline{A000012}.\)
**(P3.6.)** The sequences \(\alpha\circ\mu\) is intra-block permutation for the partitioning sequence \(\beta.\)
In all examples in this section, we shall use the partitioning sequence from example 2.1.2. \(\beta:b_{s}=4s-1\) for \(s\geq 1\). All permutations \(\pi(b_{1}),\pi(b_{2}),\pi(b_{3})...\) have odd length. The sequence \(\alpha=\xi.\)
**Example 3.1.** Terms of \(\pi(n)\): \(p_{i}=R^{{}^{\prime}}(i).\) The order of the permutations is \(o(\pi(b_{s}))=2\) for \(s\geq 1.\) So \(o(\alpha)=2\) and the sequence \(\alpha\) is a self-inverse permutation of the natural numbers. The sequence, written as an irregular array, begins
\(3,2,1,\)
\(10,9,8,7,6,5,4,\)
\(21,20,19,18,17,16,15,14,13,12,11,\)
....
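The construction of Example 3.1 can be sketched as follows (our code): each block of the partition \(b_{s}=4s-1\) is written in reverse order, and composing the permutation with itself returns the identity, confirming \(o(\alpha)=2\).

```python
def self_inverse_permutation(num_blocks, b=lambda s: 4 * s - 1):
    """Write every block of the partition b_s = 4s - 1 in reverse order
    (p_i = R'(i) inside each block), as in Example 3.1."""
    out, start = [], 1
    for s in range(1, num_blocks + 1):
        block = list(range(start, start + b(s)))
        out.extend(reversed(block))
        start += b(s)
    return out

a = self_inverse_permutation(3)
print(a[:10])   # [3, 2, 1, 10, 9, 8, 7, 6, 5, 4]
# applying the permutation twice gives the identity, so o(alpha) = 2
print([a[a[n - 1] - 1] for n in range(1, len(a) + 1)] == list(range(1, len(a) + 1)))
```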
**Example 3.2.** The formula for terms of the \(\pi\):
\(p(i)=\begin{cases}R^{{}^{\prime}}(i),&\text{if }R^{{}^{\prime}}(i)\geq R(i)+1,\\ R(i)-\Big{\lfloor}\frac{R(i)+R^{{}^{\prime}}(i)-1}{2}\Big{\rfloor},&\text{if }R^{{}^{ \prime}}(i)<R(i)+1.\end{cases}\)
The order of permutations \(o(\pi(b_{1}))=3,\)\(o(\pi(b_{s}))=12\) for \(s\geq 2.\)
Thus \(o(\alpha)=12\). The sequence begins
\(3,1,2,\)
\(10,9,8,4,5,6,7,\)
\(21,20,19,18,17,11,12,13,14,15,16,\)
....
**Example 3.3.** The formula for terms of the \(\pi\):
\(p_{i}=\begin{cases}\Big{\lfloor}\ \frac{4L(i)-1}{2}\Big{\rfloor}+R(i)+1,& \text{if }R(i)>R^{{}^{\prime}}(i),\\ \\ R(i)-\Big{\lfloor}\frac{4L(i)-1}{2}\Big{\rfloor},&\text{if }R(i)\leq R^{{}^{ \prime}}(i).\end{cases}\)
The order of permutations \(o(\pi(b_{s}))=b_{s}\) for \(s\geq 1.\) So the sequence \(\alpha\) has infinite order. Another formula:
\[a(n)=\frac{(i+j-1)^{2}+i-j+3+2(i+j-1)(-1)^{i+j}}{2},\]
where
\[i=n-\frac{t(t+1)}{2},\quad j=\frac{t^{2}+3t+4}{2}-n,\quad t=\lfloor\frac{ \sqrt{8n-7}-1}{2}\rfloor.\]
The start of the sequence \(\alpha\):
\(3,1,2,\)
\(8,9,10,4,5,6,7,\)
\(17,18,19,20,21,11,12,13,14,15,16\)
...
## 4 Generalized reluctant sequences
**Definition 4.1.** Let sequences \(\alpha\in\mathcal{A}\) and \(\omega\in\mathcal{A}^{+}\). The sequence \(\omega\) is called the reluctant sequence of sequence \(\alpha\), if \(\omega\) is the triangle array read by rows, with row number \(k\) coinciding with the first \(k\) elements of the sequence \(\alpha\)[2].
Formula for a reluctant sequence is:
\[\omega(n)=a_{m},\ \ \mbox{where}\ \ m=n-\frac{t(t+1)}{2},\ \ t=\lfloor\frac{ \sqrt{8n-7}-1}{2}\rfloor.\]
**Definition 4.2.** Let sequences \(\alpha\in\mathcal{A}\) and \(\omega\in\mathcal{A}^{+}\). The sequence \(\omega\) is called the reverse reluctant sequence of sequence \(\alpha\), if \(\omega\) is the triangle array read by rows, with row number \(k\) coinciding with the first \(k\) elements of the sequence \(\alpha\) in reverse order [2].
Formula for a reverse reluctant sequence is:
\[\omega(n)=a_{m},\ \ \mbox{where}\ \ m=\frac{t^{2}+3t+4}{2}-n,\ \ t=\lfloor\frac{ \sqrt{8n-7}-1}{2}\rfloor.\]
Let \(q\in\mathbb{Z}^{+}\). Denote by \((a_{k+1},a_{k+2},a_{k+3},...\ a_{k+m})^{q}\)
\(q\) times concatenation of the block \(a_{k+1},a_{k+2},a_{k+3},...\ a_{k+m}:\)
\(\underbrace{a_{k+1},a_{k+2},a_{k+3},...\ a_{k+m},\ \ a_{k+1},a_{k+2},a_{k+3},...\ a_{k+m},\ \...\ \ a_{k+1},a_{k+2},a_{k+3},...\ a_{k+m}}_{q\ \mbox{ times}}\)
**Definition 4.3.** Let a sequence \(\alpha\): \(a_{1},a_{2},a_{3},...\ \in\mathcal{A},\) a sequence \(\beta\):
\(b_{1},b_{2},b_{3},...\ \in\mathcal{A}^{+}\) and \(\ q\in\mathbb{Z}^{+}\). The sequence \(\omega\) is called the generalized reluctant sequence of sequences \(\alpha\) if \(\omega\) is irregular array read by rows:
\[\begin{array}{l}(a_{1},a_{2},\ldots\ a_{b_{1}})^{q},\\ (a_{1},a_{2},\ldots\ a_{b_{1}},a_{b_{1}+1},\ldots\ a_{b_{1}+b_{2}})^{q},\\ (a_{1},a_{2},\ldots\ a_{b_{1}},a_{b_{1}+1},\ldots\ a_{b_{1}+b_{2}},a_{b_{1}+b_{2}+1},a_{b_{1}+b_{2}+2},\ldots\ a_{b_{1}+b_{2}+b_{3}})^{q},\\ \ldots\end{array}\tag{6}\]
**Definition 4.4.** Let a sequence \(\alpha\): \(a_{1},a_{2},a_{3},...\in\mathcal{A},\) a sequence \(\beta\): \(b_{1},b_{2},b_{3},...\in\mathcal{A}^{+}\) and \(q\in\mathbb{Z}^{+}\). The sequence \(\omega^{{}^{\prime}}\) is called the generalized reverse reluctant sequence of sequences \(\alpha\) if \(\omega^{{}^{\prime}}\) is irregular array read by rows:
\[(a_{b_{1}},a_{b_{1}-1},\ldots,a_{1})^{q},\] \[(a_{b_{1}+b_{2}},a_{b_{1}+b_{2}-1},\ldots\,a_{b_{1}},a_{b_{1}-1}, \ldots\,a_{1})^{q},\] \[(a_{b_{1}+b_{2}+b_{3}},a_{b_{1}+b_{2}+b_{3}-1},\ldots\,a_{b_{1}+b _{2}},a_{b_{1}+b_{2}-1},\ldots\,a_{b_{1}},a_{b_{1}-1},\ldots\,a_{1})^{q},\] \[\ldots\]
As an illustration, consider the following examples. Let \(\alpha=\xi\) and let the partitioning sequence \(\gamma\) be increasing, \(g_{s}<g_{s+1}\) for \(s\geq 1\). Then
The sequence \(R(n)\):
\(1,2,...\,g_{1},\)
\(1,2,...\,g_{1},...\,g_{2},\)
\(1,2,...\,\,g_{1},...\,g_{2},...\,g_{3},\)
....
is generalized reluctant sequence of sequences \(\xi\) for \(q=1.\)
Similarly, for the sequence \(R^{{}^{\prime}}(n)\):
\(g_{1},g_{1}-1,...\,1,\)
\(g_{2},g_{2}-1,...\,g_{1},g_{1}-1,...\,1,\)
\(g_{3},g_{3}-1,...\,g_{2},g_{2}-1,...\,g_{1},g_{1}-1,...\,1,\)
...
is generalized reverse reluctant sequence of sequences \(\xi\) for same \(\gamma\) and \(q=1.\)
If the sequence \(\beta=\underline{A000012}\) and \(q=1\), the generalized reluctant sequence becomes the reluctant sequence \(\underline{A002260}.\) Similarly, for the same sequence \(\beta\) and \(q\), the generalized reverse reluctant sequence becomes the reverse reluctant sequence \(\underline{A004736}.\)
Here are some examples of generalized reluctant sequences
for \(q=1\) and \(\beta:\)
\(b_{1}=1,\,\,\,b_{s}=2\,\) for \(s\geq 2\)\(\underline{A071797},\)
\(b_{1}=1,\,\,\,b_{s}=2s-1\,\) for \(s\geq 2\)\(\underline{A064866},\)
\(b_{1}=1,\,\,\,b_{s}=2^{s-2}\,\) for \(s\geq 2\)\(\underline{A062050},\)
for \(q=2\) and \(\beta:\)
\(b_{s}=1,\,\,\,s\geq 1\)\(\underline{A122197}.\)
An example of a generalized reverse reluctant sequence is \(\underline{A080883}\)
for \(q=1\) and \(\beta:\,\,\,b_{1}=1,\,\,\,b_{s}=2\) for \(s\geq 2.\)
Let's create a formula to calculate \(L\). Denote by \(\zeta:\,c_{1},c_{2},c_{3},...\) the partitioning sequence for the array (6), where \(c_{s}=qB(s).\) Denote by \(C(s)\) the partial sums of \(\zeta\):
\[C(0)=0,\,\,\,C(s)=c_{1}+c_{2}+...+c_{s}.\]
Using (2) and (3) we get
\[L(n)=\lceil y(n)\rceil,\]
where \(y(n)\) is the largest root of the equation \(C(x)=n\),
\[R(n)=n-C(L(n)-1),\,\,\,R^{{}^{\prime}}(n)=C(L(n))+1-n.\]
Let's develop a formula for finding the term of the array (6). The term \(\omega(n)\) is located in the row \(L(n)\) at the place \(R(n)\). The row \(L(n)\) contains the block of terms
\[a_{1},a_{2},...\,a_{B(L(n))}\]
repeated \(q\) times. The positions in this row are numbered from \(1\) to \(c_{L(n)}=qB(L(n))\), and \(\omega(n)\) occupies position \(R(n)\).
Then generalized reluctant sequence of sequences \(\omega(n)=a(m),\,\,\) where
\[m=1+(R(n)-1)\,\,\,{\rm mod}\,\,\,B(L(n)).\]
Similarly, the generalized reverse reluctant sequence of sequences \(\omega^{{}^{\prime}}(n)=a(m^{{}^{\prime}}),\) where
\[m^{{}^{\prime}}=1+(R^{{}^{\prime}}(n)-1)\,\,\,{\rm mod}\,\,\,B(L(n)).\]
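As an illustration (ours), the following sketch evaluates \(\omega(n)\) and \(\omega^{{}^{\prime}}(n)\) for an arbitrary partitioning sequence using the formulas above; with \(\alpha=\xi\), \(b_{s}=2\) and \(q=3\) it reproduces Example 4.0 below.

```python
def generalized_reluctant_term(n, alpha, b, q, reverse=False):
    """omega(n) (or omega'(n) with reverse=True) for the partitioning sequence b(s),
    repetition count q and a 1-indexed base sequence alpha."""
    B, C_prev, s = 0, 0, 0
    while True:
        s += 1
        B += b(s)                 # B(s)
        C = C_prev + q * B        # C(s) = C(s-1) + q*B(s)
        if n <= C:
            break
        C_prev = C
    R = n - C_prev                # position in row s, counted from the left
    Rp = C - n + 1                # position counted from the right
    m = 1 + ((Rp if reverse else R) - 1) % B
    return alpha(m)

xi = lambda m: m
# Example 4.0: alpha = xi, b_s = 2, q = 3 -> rows (1,2)^3, (1,2,3,4)^3, ...
print([generalized_reluctant_term(n, xi, lambda s: 2, 3) for n in range(1, 19)])
print([generalized_reluctant_term(n, xi, lambda s: 2, 3, reverse=True) for n in range(1, 19)])
```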
**Example 4.0.** Let \(p,q\in\mathbb{Z}^{+},\) the sequences \(\beta\): \(b_{s}=p\,\,\,\) for \(\,s\geq 1.\)
Then
\[B(s)=ps,\,\,\,c_{s}=pqs,\,\,\,C(s)=pq\frac{s(s+1)}{2},\]
\[L(n)=\Big{\lceil}\,\frac{-pq+\sqrt{8npq+p^{2}q^{2}}}{2pq}\Big{\rceil},\]
\[R(n)=n-pq\frac{(L(n)-1)L(n)}{2},\quad R^{{}^{\prime}}(n)=pq\frac{L(n)(L(n)+1)} {2}+1-n.\]
We get for generalized reluctant sequence \(\omega\) and reverse reluctant sequence \(\omega^{{}^{\prime}}\):
\[m=1+(R(n)-1)\,\,\,{\rm mod}\,\,\,pL(n),\]
\[m^{{}^{\prime}}=1+(R^{{}^{\prime}}(n)-1)\,\,\,{\rm mod}\,\,\,pL(n).\]
Let the sequence \(\alpha=\xi,\,\beta\): \(b_{s}=2\,\,\,{\rm for}\,s\geq 1\,\,\,{\rm and}\,\,\,q=3.\)
Then generalized reluctant sequence \(\omega\):
\(1,2,\,\,1,2,\,\,1,2,\)
\(1,2,3,4,\,\,1,2,3,4,\,\,1,2,3,4,\)
\(1,2,3,4,5,6,\,\,1,2,3,4,5,6,\,\,1,2,3,4,5,6,\)
\(\cdot\)\(\cdot\)\(\cdot\)
Generalized reverse reluctant sequence \(\omega^{{}^{\prime}}\):
\(2,1,\,\,2,1,\,\,2,1,\)
\(4,3,2,1,\,\,4,3,2,1,\,\,4,3,2,1,\)
\(6,5,4,3,2,1,\,\,6,5,4,3,2,1,\,\,6,5,4,3,2,1,\)
\(\cdot\)\(\cdot\)\(\cdot\)
**Example 4.1.** Let \(p_{1},q\in\mathbb{Z}^{+},\) the sequences \(\beta\): \(b_{s}=p_{1}s\) for \(s\geq 1\) and \(q\geq 1.\) Then
\[B(s)=p_{1}\frac{s(s+1)}{2},\,\,\,c_{s}=p_{1}q\frac{s(s+1)}{2},\,\,\,C(s)=p_{1}q \frac{s(s+1)(s+2)}{6},\]
Using Cardano's formula [5] we get
\[L(n)=\Big{\lceil}-1+\frac{p_{1}q}{\sqrt[3]{3}U}+\frac{U}{\sqrt[3]{3}^{2}p_{1}q} \Big{\rceil},\]
\[\mbox{where}\,\,\,U=\left(27np_{1}^{2}q^{2}+\sqrt{3}\sqrt{243n^{2}p_{1}^{4}q^{ 4}-p_{1}^{6}q^{6}}\right)^{1/3}.\]
\[R(n)=n-p_{1}q\frac{(L(n)-1)L(n)(L(n)+1)}{6},\]
\[R^{{}^{\prime}}(n)=p_{1}q\frac{L(n)(L(n)+1)(L(n)+2)}{6}+1-n\]
We get for generalized reluctant sequence \(\omega\) and reverse reluctant sequence \(\omega^{{}^{\prime}}\):
\[m=1+(R(n)-1)\,\,\,\mbox{mod}\,\,p_{1}\frac{L(n)(L(n)+1)}{2},\]
\[m^{{}^{\prime}}=1+(R^{{}^{\prime}}(n)-1)\,\,\,\mbox{mod}\,\,p_{1}\frac{L(n)(L(n )+1)}{2}.\]
Let the sequence \(\alpha=\xi,\,\beta\): \(b_{s}=2s\,\,\,\mbox{for}\,\,s\geq 1\,\,\,\mbox{and}\,\,\,\,q=3.\)
Then generalized reluctant sequence \(\omega\):
\(1,2,\,\,1,2,\,\,\,1,2,\)
\(1,2,3,4,5,6,\,\,1,2,3,4,5,6,\,\,1,2,3,4,5,6\)
\(1,2,3,4,5,6,7,8,9,10,11,12,1,2,3,4,5,6,7,8,9,10,11,12,1,2,3,4,5,6,7,8,9,10,1 1,12\)
\(\cdot\)\(\cdot\)\(\cdot\)
Generalized reverse reluctant sequence \(\omega^{{}^{\prime}}\):
\(2,1,\,\,2,1,\,\,\,2,1,\)
\(6,5,4,3,2,1,\,\,6,5,4,3,2,1,\,\,6,5,4,3,2,1,\)
\(12,11,10,9,8,7,6,5,4,3,2,1,12,11,10,9,8,7,6,5,4,3,2,1,12,11,10,9,8,7,6,5,4,3, 2,1\)
\(\cdot\)\(\cdot\)\(\cdot\)
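Because the Cardano-based expression for \(L(n)\) is easy to mistype, a quick numeric sanity check is useful. The sketch below (ours) verifies that the root \(y(n)\) given by the formula indeed satisfies \(C(y)=n\) for the parameters \(p_{1}=2\), \(q=3\) used above; for these values the radicand \(243n^{2}p_{1}^{4}q^{4}-p_{1}^{6}q^{6}\) is nonnegative for every \(n\geq 1\).

```python
# Numeric sanity check of the Cardano-based closed form: the returned root y(n)
# should satisfy C(y) = p1*q*y*(y+1)*(y+2)/6 = n.
from math import sqrt

def y_cardano(n, p1, q):
    a = p1 * q
    U = (27 * n * a**2 + sqrt(3) * sqrt(243 * n**2 * a**4 - a**6)) ** (1 / 3)
    return -1 + a / (3 ** (1 / 3) * U) + U / (3 ** (2 / 3) * a)

p1, q = 2, 3
for n in (1, 6, 7, 100, 1234):
    y = y_cardano(n, p1, q)
    assert abs(p1 * q * y * (y + 1) * (y + 2) / 6 - n) < 1e-6 * n
```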
**Example 4.2.** Let \(p,q\in\mathbb{Z}^{+},\,\,\,p\geq 2,\,\,\,q\geq 1,\) and let the sequence \(\beta\) be given by \(b_{1}=p,\,\,\,b_{s}=p^{s}-p^{s-1}\,\,\,\,\mbox{for}\,\,\,s\geq 2\). Then
\[B(s)=p^{s},\,\,\,c_{s}=qp^{s},\,\,\,C(s)=\frac{pq(p^{s}-1)}{p-1},\]
\[L(n)=\Big{\lceil}\log_{p}\Big{(}\frac{n(p-1)}{pq}+1\Big{)}\Big{\rceil},\]
\[R(n)=n-pq\frac{p^{L(n)-1}-1}{p-1},\quad R^{{}^{\prime}}(n)=pq\frac{p^{L(n)}-1}{p-1 }+1-n.\]
For the generalized reluctant sequence \(\omega\) and the generalized reverse reluctant sequence \(\omega^{{}^{\prime}}\) we get:
\[m=1+(R(n)-1)\,\bmod\,p^{L(n)},\]
\[m^{{}^{\prime}}=1+(R^{{}^{\prime}}(n)-1)\,\bmod\,p^{L(n)}.\]
Let the sequence \(\alpha=\xi\), \(\beta:\,\,\,b_{1}=2,\,\,\,b_{s}=2^{s}-2^{s-1}\,\mbox{for}\,s\geq 2\,\,\,\mbox{and}\, \,q=3\).
Then generalized reluctant sequence \(\omega\):
\(1,2,\,\,1,2,\,\,\,1,2,\)
\(1,2,3,4,\,\,1,2,3,4,\,\,\,1,2,3,4,\)
\(1,2,3,4,5,6,7,8,\,\,1,2,3,4,5,6,7,8,\,\,1,2,3,4,5,6,7,8,\)
....
Generalized reverse reluctant sequence \(\omega^{{}^{\prime}}\):
\(2,1,\,\,2,1,\,\,2,1,\)
\(4,3,2,1,\,\,4,3,2,1,\,\,\,4,3,2,1,\)
\(8,7,6,5,4,3,2,1,\,\,8,7,6,5,4,3,2,1,\,\,8,7,6,5,4,3,2,1,\)
...
|
2308.02674
|
Group-$k$ consistent measurement set maximization via maximum clique
over k-Uniform hypergraphs for robust multi-robot map merging
|
This paper unifies the theory of consistent-set maximization for robust
outlier detection in a simultaneous localization and mapping framework. We
first describe the notion of pairwise consistency before discussing how a
consistency graph can be formed by evaluating pairs of measurements for
consistency. Finding the largest set of consistent measurements is transformed
into an instance of the maximum clique problem and can be solved relatively
quickly using existing maximum-clique solvers. We then generalize our algorithm
to check consistency on a group-$k$ basis by using a generalized notion of
consistency and using generalized graphs. We also present modified maximum
clique algorithms that function on generalized graphs to find the set of
measurements that is internally group-$k$ consistent. We address the
exponential nature of group-$k$ consistency and present methods that can
substantially decrease the number of necessary checks performed when evaluating
consistency. We extend our prior work to multi-agent systems in both simulation
and hardware and provide a comparison with other state-of-the-art methods.
|
Brendon Forsgren, Ram Vasudevan, Michael Kaess, Timothy W. McLain, Joshua G. Mangelson
|
2023-08-04T19:15:27Z
|
http://arxiv.org/abs/2308.02674v1
|
Group-\(k\) consistent measurement set maximization via maximum clique over k-Uniform hypergraphs for robust multi-robot map merging
###### Abstract
This paper unifies the theory of consistent-set maximization for robust outlier detection in a simultaneous localization and mapping framework. We first describe the notion of pairwise consistency before discussing how a consistency graph can be formed by evaluating pairs of measurements for consistency. Finding the largest set of consistent measurements is transformed into an instance of the maximum clique problem and can be solved relatively quickly using existing maximum-clique solvers. We then generalize our algorithm to check consistency on a group-\(k\) basis by using a generalized notion of consistency and using generalized graphs. We also present modified maximum clique algorithms that function on generalized graphs to find the set of measurements that is internally group-\(k\) consistent. We address the exponential nature of group-\(k\) consistency and present methods that can substantially decrease the number of necessary checks performed when evaluating consistency. We extend our prior work to multi-agent systems in both simulation and hardware and provide a comparison with other state-of-the-art methods.
Footnote †: Department of Mechanical Engineering, Brigham Young University
## 1 Introduction
Multi-agent simultaneous localization and mapping (SLAM) refers to the problem of estimating a map of the environment by fusing the measurements collected by multiple robots as they navigate through that environment. For the estimated map to be accurate, both the local trajectories of the vehicles and the relative offsets (translation and orientation) between the trajectories need to be estimated.
In SLAM, the estimation problem is often modeled using a factor graph containing pose and landmark node variables, and factor nodes that encode the relationship between poses and landmarks. A special case of the SLAM problem, called pose graph SLAM, eliminates the landmark nodes and only estimates the vehicle trajectory. We often formulate the problem as the maximum likelihood estimation (MLE) of the time-discretized robot trajectory given odometric and loop-closure measurements as described by Cadena et al. (2016). Assuming independence and additive Gaussian noise in the measurement and process models, the problem becomes a nonlinear, weighted-least-squares problem that can be solved quickly using available solvers like those presented by Kaess et al. (2008); Kümmerle et al. (2011); Agarwal et al. (2012).
In multi-agent SLAM, multiple vehicles are used to map the environment, resulting in increased scalability and efficiency in the mapping process. However, in addition to estimating the local map, the vehicles must also estimate their relative pose to accurately combine their maps. Generating inter-vehicle measurements is a process that is often susceptible to perceptual aliasing and can be inaccurate. Identifying poor inter-vehicle measurements is a challenging problem given the lack of a single odometry backbone and potentially no prior information on the initial configuration of the vehicles as shown by Pfingsthorn and Birk (2016). Prior work by Mangelson et al. (2018) has examined this problem for full-degree-of-freedom constraints between vehicles. In this work, we present a method that will work using low-degree-of-freedom measurements.
Rather than attempt to classify measurements as inliers and outliers, we find the largest consistent set of inter-robot measurements. In our prior conference paper (Mangelson et al. (2018)1), the problem is formulated as a combinatorial optimization problem that seeks to find the largest set of pairwise-consistent measurements. We then show that this problem can be transformed into an instance of the maximum-clique problem, that existing algorithms can be used to find the optimal solution for moderately sized problems, and heuristic-based methods exist that often find the optimal solution for larger numbers of measurements. Lastly, the proposed method is evaluated on both simulated and real-world data showing that the proposed algorithm
outperforms existing robust SLAM algorithms in selecting consistent measurements and estimating the merged maps. These contributions are included in Section 4, Section 5, and Section 6.
Our second conference paper (Forsgren et al. (2022)1) generalizes the concept of pairwise consistency to group-\(k\) consistency for scenarios, such as range-based SLAM, where pairwise consistency is insufficient to characterize the consistency of a set of measurements. We show that by using a generalized graph, and modifying known maximum-clique algorithms to function over generalized graphs, we can robustly reject outliers in scenarios where pairwise consistency fails. The generalized method was evaluated on simulated data and showed that enforcing group-\(k\) consistency outperforms enforcing pairwise consistency. These results are discussed in Section 7 through Section 11.
Footnote 1: ©2022 IEEE. Reprinted, with permission, from Group-\(k\) consistent measurement set maximization for robust outlier detection. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
This work builds on prior work and makes the following contributions:
1. We develop a framework that takes advantage of the hierarchical structure of consistency to decrease the number of consistency checks needed when building the generalized graph online. (Section 8)
2. We evaluate G\(k\)CM on hardware data recorded by an unmanned underwater vehicle in a range-only SLAM scenario and compare with other outlier-rejection algorithms (Section 11).
3. We propose a consistency function that can be used in vision-based multi-agent pose graph optimization problems. We verify this consistency function on both simulated and hardware data (Sections 12 and 13).
4. We compare our maximum-clique algorithms over hypergraphs with other recently developed algorithms (Section 9, Section 11, Section 13).
5. We release a parallelized implementation of our proposed algorithm ([https://bitbucket.org/jmangelson/gkcm/src/master/](https://bitbucket.org/jmangelson/gkcm/src/master/))
The remainder of this paper is organized as follows. In Section 2, related work is discussed. In Section 3, the general formulation of the multi-robot pose graph SLAM problem is presented. Pairwise consistency maximization (PCM) is presented in Section 4 and evaluated in Section 6. Group-\(k\) consistency maximization (G\(k\)CM) is presented in Section 7, and is applied to range-based SLAM in Section 10 and Section 11, and multi-agent visual pose graph optimization (PGO) in Section 12 and Section 13. Finally in Section 14, we conclude.
## 2 Related Work
The ability to remove outliers is important to many robotics and computer-vision applications. Given the sensitivity of nonlinear least-squares optimization to poor information, there has been a significant amount of effort dedicated toward developing methods to detect and remove outlier measurements from the optimization problem.
The random sample consensus (RANSAC) algorithm in Hartley and Zisserman (2003) is popular in the computer vision community and detects outliers by fitting models to random subsets of the data and counting the number of inliers that belong to each model. The RANSAC algorithm struggles in scenarios where there is no unique model of the underlying data, such as in multi-agent SLAM, or when the outlier ratio is so large that no accurate model of the data can be found. Recent work by Sun (2021) has improved the RANSAC algorithm by adding a compatibility score between the random samples. The new technique, called RANSIC, shows improved performance in high-outlier regimes but will still struggle when no unique model of the data exists. A technique called VODRAC introduced by Hu and Sun (2023) also improves on the RANSAC and RANSIC algorithms by using a two-point sampling strategy combined with a weight-based voting strategy that speeds up the consensus maximization and is robust in 99% outlier regimes.
Work by Burguera et al. (2022) introduces a three step process that combines deep learning with RANSAC and a geometric verification step. They utilize a neural network to detect possible matches between images. RANSAC is then used to estimate the rotation and translation between two images. If the number of correct correspondences found in RANSAC is not sufficiently high, then the loop closure is rejected. Accepted matches are then subjected to a geometry test by tracing a loop using the vehicle odometry and the measurement. If the error in this loop is sufficiently high, then the measurement is also rejected. The technique is strict enough to reject outliers but is not suitable for inter-vehicle measurements since a single odometry backbone may not exist.
Other approaches use the concept of M-estimation. These techniques attempt to detect the presence of outliers during the optimization process and use a robust cost function to decrease their influence in the weighted nonlinear least-squares problem. Sunderhauf and Protzel (2012) use switchable constraints, which introduces a switchable error factor that can be turned off if the residual error becomes too high. Dynamic covariance scaling (DCS), introduced by Agarwal et al. (2013), generalizes the switchable constraints method by increasing the covariance matrix associated with measurements that have high residual error, essentially smoothing the transition to turning a constraint off. Yang et al. (2020) introduce graduated non-convexity (GNC), a technique that first solves a convex approximation of the original problem and iteratively solves less convex approximations until the original problem is solved. The max-mixtures technique presented by Olson and Agarwal (2013) uses mixtures of Gaussians to model various data modes and can detect outliers in real-time. Each of these methods was designed for a single-agent system, and assumes a trusted odometry backbone is present. To apply these systems successfully in multi-agent scenarios would require a good initialization of the relative pose between agents which is not always available. Expectation maximization techniques are used by Dong et al. (2015) and Carlone et al. (2014) to detect outliers among inter-robot measurements for multi-agent systems but the technique still requires an initial guess of the relative pose between agents. Most recently Yang and Carlone (2022) introduce a method called STRIDE that reformulates the estimation
problem using standard robust cost functions as a polynomial optimization problem. Their method is certifiably optimal and works with up to 90% of the measurements being outliers but does not run in real-time.
Carlone et al. (2014) noted that classifying measurements as inliers or outliers is an unobservable task. In light of this, the focus of research has changed from classifying measurements as inliers and outliers to identifying the largest consistent or compatible set of measurements. Joint compatibility branch and bound (JCBB), first introduced by Neira and Tardos (2001), is a method that searches for the largest jointly compatible set. However, utilizing JCBB in multi-robot mapping problems can be difficult because it requires solving the graph for a combinatorial number of measurement combinations to evaluate the likelihood of each measurement given each combination of the other measurements.
Single-cluster spectral graph partitioning (SCGP), used by Olson et al. (2005), identifies an inlier set by thresholding the eigenvector associated with the largest eigenvalue of the adjacency matrix of the underlying consistency graph. SCGP has successfully been applied to pose SLAM (Olson (2009)) as well as range-only SLAM (Olson et al. (2005)). CLEAR generates a graph that associates noisy measurements based on a compatibility criterion (Fathian et al. (2020)). In CLEAR, spectral methods are used to identify the number of landmarks and can generate sets of measurements associated to a unique landmark. CLIPPER thresholds the eigenvector associated with the largest eigenvalue of the affinity matrix and shows that they can identify inliers in 99% outlier regimes and has successfully been applied in point-cloud, line-cloud, and plane-registration problems (Lusk et al. (2021); Lusk and How (2022); Lusk et al. (2022)).
Mangelson et al. (2018) introduce PCM which also generates a consistency graph but transforms the problem into an instance of the maximum clique problem by showing that the maximum clique represents the largest pairwise consistent set. Do et al. (2020) build on PCM by introducing a similarity score between measurements, turning the consistency graph into a weighted graph. They select measurements by solving the maximum edge weight clique problem, a variant of the maximum clique problem used in PCM. Work done by Chen et al. (2023) also solves a maximum edge weight clique problem, but combines multiple consistency metrics into a single consistency function. Chen et al. (2022) enforce consistency on both a spatial and a temporal basis. Enforcing consistency in a spatiotemporal manner allows them to significantly reduce the time to find a consistent set of measurements. Graph-based maximum consensus registration (GMCR) was presented by Gentner et al. (2023) and introduces decoupled consensus functions for scale, rotation, and translation estimation in point-cloud registration. GMCR utilizes maximum-clique algorithms to find the largest consistent set of measurements from which the variables will be estimated.
All methods described previously evaluate consistency on a pairwise basis. Shi et al. (2021) generalize the evaluation of consistency to groups of \(k\) measurements and use the maximum \(k\)-core of an embedded consistency graph to quickly approximate the maximum clique in an algorithm called ROBIN. Forsgren et al. (2022) introduce G\(k\)CM and generate a generalized consistency graph where edges in the graph connect \(k\) nodes. They also modify the maximum clique algorithms presented by Pattabiraman et al. (2015) to find the maximum clique of a generalized graph given that no other maximum-clique algorithm over a generalized graph existed. Since then, Shi et al. (2021) have presented a mixed-integer linear program (MILP) that will find the maximum clique of a generalized graph.
Our contributions are a generalization of the work done by Mangelson et al. (2018), and an extension of the work by Forsgren et al. (2022), focused on unifying the theory of consistency, applications for multi-agent systems, and decreasing run-time requirements.
## 3 Problem Formulation
In our factor-graph formulation of SLAM, we denote time-discretized versions of the robot trajectory by \(\mathbf{x}_{i}\in\mathrm{SE}(2)\) or \(\mathrm{SE}(3)\). The factors in the graph are derived from the measurements observed by the robot and penalize estimates of the trajectory that make the observed measurement unlikely. We denote measurements that relate the variables \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) by \(\mathbf{z}_{ij}\) and call them odometric measurements if \(i\) and \(j\) are consecutive and loop-closure measurements if \(i\) and \(j\) are non-consecutive in time. The goal of pose graph SLAM is, then, to estimate the most likely value of each pose variable \(\mathbf{x}_{i}\) given the measurements \(\mathbf{z}_{ij}\). We can formulate the single-robot pose graph SLAM problem as the MLE
\[\hat{\mathbf{X}}=\operatorname*{argmax}_{\mathbf{X}}P(\mathbf{Z}|\mathbf{X}). \tag{1}\]
where, \(\mathbf{X}\) is the set of all pose variables \(\mathbf{x}_{i}\), and \(\mathbf{Z}\) is the set of all relative pose measurements \(\mathbf{z}_{ij}\).
In multi-robot SLAM, we also need to estimate the relative transformation between the local coordinate frames of the respective robots. We adopt the method presented by Kim et al. (2010), which proposes the use of an anchor node for each trajectory that encodes the pose of the vehicle's local coordinate frame with respect to some global reference frame. We denote the homogeneous transformation matrix representing this offset by \(T^{g}_{a}\) and represent measurements relating cross-trajectory poses by \(\mathbf{z}^{ab}_{ij}\), where \(a\) and \(b\) are robot IDs and \(i\) and \(j\) respectively denote which poses on robots \(a\) and \(b\) are being related. \(T^{g}_{a}\) is an element of \(\mathrm{SE}(2)\) or \(\mathrm{SE}(3)\). \(\mathbf{z}^{ab}_{ij}\) is also often an element of \(\mathrm{SE}(2)\) or \(\mathrm{SE}(3)\) but can be a function of this transformation in general.
In the case of two robots, the SLAM estimation problem becomes
\[\hat{\mathbf{X}},\hat{\mathbf{T}}^{g}=\operatorname*{argmax}_{\mathbf{X}, \mathbf{T}^{g}}P(\mathbf{Z}^{a},\mathbf{Z}^{b},\mathbf{Z}^{ab}|\mathbf{X}, \mathbf{T}^{g}), \tag{2}\]
where, \(\mathbf{X}\) now represents the trajectories of both robots, \(\mathbf{Z}^{ab}\) represents the set of all cross-trajectory measurements, \(\mathbf{Z}^{r}\) represents the set of measurements local to robot \(r\), and \(\mathbf{T}^{g}=\{\mathbf{T}^{g}_{a},\mathbf{T}^{g}_{b}\}\). This problem can be treated as weighted, nonlinear least squares and can be solved efficiently using an array of specialized optimization libraries.
Existing methods do a good job of handling outlier measurements in the local measurement sets \(\mathbf{Z}^{a}\) and \(\mathbf{Z}^{b}\), but not in the inter-robot set \(\mathbf{Z}^{ab}\) since no prior estimate of the initial transformation between coordinate frames of
the robots exists in general. The focus of this paper is on selecting a subset of the measurements in the inter-robot set \(\mathbf{Z}^{ab}\) that can be trusted. The next section outlines our approach to accomplish this.
## 4 Pairwise Consistent Measurement Set Maximization
In this section, we first define a novel notion of consistency and then we use that notion to formulate the selection of inter-robot loop-closure measurements as a combinatorial optimization problem that finds the largest consistent set.
### Pairwise Consistency
Directly determining if a measurement is an inlier or outlier from the graph itself is unobservable as shown by Carlone et al. (2014). Thus, instead of trying to classify inlier versus outlier, we attempt to determine the maximum subset of measurements that are internally pairwise consistent:
**Definition 1**: _A set of measurements \(\mathbf{\tilde{Z}}\) is **pairwise internally consistent** with respect to a consistency metric \(C\) and the threshold \(\gamma\) if_
\[C(\mathbf{z}_{i},\mathbf{z}_{j})\leq\gamma,\quad\forall\quad\mathbf{z}_{i}, \mathbf{z}_{j}\in\mathbf{\tilde{Z}} \tag{3}\]
_where, \(C\) is a function measuring the consistency of measurements \(\mathbf{z}_{i}\) and \(\mathbf{z}_{j}\), and \(\gamma\) is chosen a priori._
This definition of consistency requires that every measurement in the set be consistent with every other measurement in the set with respect to \(C\) and \(\gamma\).
There are a variety of potential choices of a consistency metric depending on the measurement model and the state being observed. In future sections we present several different consistency functions for a variety of applications. However, for the next three sections of this paper, we assume that all inter-robot measurements are relative pose measurements with full degrees of freedom and use the following metric based on the metric used by Olson (2009):
\[C(\mathbf{z}_{ik}^{ab},\mathbf{z}_{jl}^{ab})=\left|\left|(\ominus\mathbf{z}_{ik}^{ab})\oplus\hat{\mathbf{x}}_{ij}^{a}\oplus\mathbf{z}_{jl}^{ab}\oplus\hat{\mathbf{x}}_{lk}^{b}\right|\right|_{\Sigma} \tag{4}\] \[\triangleq\left|\left|\epsilon_{ikjl}\right|\right|_{\Sigma_{ikjl}} \tag{5}\]
where, we have adopted the notation of Smith et al. (1990) to denote pose composition using \(\oplus\) and inversion using \(\ominus\), \(||\cdot||_{\Sigma}\) signifies the Mahalanobis distance, and the variables \(\hat{\mathbf{x}}_{ij}^{a}\) and \(\hat{\mathbf{x}}_{lk}^{b}\) are the current relative pose estimates of the associated poses corresponding to inter-robot measurements \(\mathbf{z}_{ik}^{ab}\) and \(\mathbf{z}_{jl}^{ab}\).
This choice of metric is useful because it is both easy to compute and follows a chi-squared distribution, giving us a strategy to select the threshold \(\gamma\) without knowledge of the specific dataset. The composition inside the norm of Eq. (4) evaluates the pose transformation around a loop and should evaluate to the identity transformation in the case of no noise (see Olson (2009)). With Gaussian noise, this normalized squared error follows a chi-squared distribution with degree of freedom equal to the number of degrees of freedom in our state variable. By setting \(\gamma\) accordingly, we can determine if the measurements \(\mathbf{z}_{ik}^{ab}\) and \(\mathbf{z}_{jl}^{ab}\) are consistent with one another.
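To make the metric concrete, here is a minimal SE(2) sketch (ours, not the authors' implementation) of Eqs. (4)-(5): the loop composition, its squared Mahalanobis norm, and a chi-squared threshold. The pose parameterization (x, y, theta) and the assumption that \(\Sigma\) is the covariance of the composed loop error are ours.

```python
# SE(2) sketch of the pairwise consistency metric: compose the loop
# (-z_ik) (+) x_ij^a (+) z_jl (+) x_lk^b and take its squared Mahalanobis norm.
import numpy as np
from scipy.stats import chi2

def compose(p, q):
    x, y, t = p
    c, s = np.cos(t), np.sin(t)
    theta = np.arctan2(np.sin(t + q[2]), np.cos(t + q[2]))   # wrap to (-pi, pi]
    return np.array([x + c * q[0] - s * q[1], y + s * q[0] + c * q[1], theta])

def invert(p):
    x, y, t = p
    c, s = np.cos(t), np.sin(t)
    return np.array([-c * x - s * y, s * x - c * y, -t])

def pairwise_consistency(z_ik, x_ij, z_jl, x_lk, Sigma):
    """Squared Mahalanobis norm of the loop error epsilon_{ikjl}."""
    eps = compose(compose(compose(invert(z_ik), x_ij), z_jl), x_lk)
    return float(eps @ np.linalg.solve(Sigma, eps))

# Keep a pair if its loop error is within, e.g., the 95th percentile of a
# chi-squared distribution with 3 degrees of freedom (the SE(2) case).
gamma = chi2.ppf(0.95, df=3)
```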
It should also be noted that pairwise consistency does not necessarily signify full joint consistency. It is possible that a set of measurements can be pairwise internally consistent but not jointly consistent. However, checking full joint consistency is an exponential operation and requires possibly checking every combination of measurements to evaluate their consistency. Finding the maximum-cardinality pairwise-consistent set is also exponential, but by formulating the problem in this way, we can leverage a body of literature on the maximum-clique problem in graph theory that can find or estimate the solution efficiently. In practice we observed that testing for pairwise consistency was restrictive enough to filter inconsistent measurements from typical pose graphs with full degree of freedom measurements.
### The Maximal Cardinality Pairwise Consistent Set
Having this definition of pairwise internal consistency allows us to restrict our algorithm to only consider sets of measurements that are pairwise internally consistent; however, due to perceptual aliasing, we may end up with multiple subsets that are pairwise internally consistent. We need to find a way to select between these possible subsets.
The underlying assumption of our method is based on the following two initial assumptions:
Figure 1: An illustration of the Pairwise Consistency Maximization (PCM) algorithm for selecting consistent inter-map loop closures measurements. (A) Given two independently derived pose graphs (shown in white and black in step A) and a set of potential loop closures between them (shown by colored, dotted lines), our goal is to determine which of these inter-robot loop closures should be trusted. (B) Using a consistency metric such as Mahalanobis distance, we calculate the consistency of each pairwise combination of measurements. (C) We store these pairwise consistency values in a matrix where each element corresponds to the consistency of a pair of measurements. (D) We can transform this matrix into the adjacency matrix for a _consistency graph_ by thresholding the consistency and making it symmetric using the maximum consistency when associated elements across the diagonal have differing consistency values. Each node in this graph represents a measurement and edges denote consistency between measurements. Cliques in this graph are _pairwise internally consistent sets_. (E) Finding the maximum clique represents finding the largest pairwise internally consistent set. (F) After determining the largest consistent set, we can robustly merge the two pose graphs using only the consistent inter-map loop closures, allowing us to reject false measurements.
**Assumption 1**: _The pose graphs are derived from multiple robots or the same robot in multiple sessions exploring the same environment._
**Assumption 2**: _The inter-robot measurements are derived from observations of that environment and the system used to derive them is not biased toward selecting incorrect measurements over correct ones._
These assumptions fit a large number of multi-robot mapping situations and are reasonable even in perceptually aliased environments whenever a place recognition system does not systematically select the perceptually aliased measurement over the correct ones.
If the above conditions are met, then the following can also be safely assumed:
**Assumption 3**: _As the number of inter-robot measurements increases, the number of measurements in the correct consistent subset will grow larger than those in the perceptually aliased consistent subsets._
Our goal is, then, to efficiently find the largest consistent subset of \(\mathbf{Z}^{ab}\), which we denote by \(\mathbf{Z}^{*}\).
To formalize this, we introduce a binary switch variable, \(s_{u}\), for each constraint in the set \(\mathbf{Z}^{ab}\) and let \(s_{u}\) take on the value 1 if the measurement is contained in the chosen subset and 0 otherwise. Note that there is a single \(s_{u}\) for each measurement \(\mathbf{z}^{ab}_{ij}\ \in\ \mathbf{Z}^{ab}\); however, for simplicity of notation, we now re-number them with the single index \(u\) and denote the corresponding measurement \(\mathbf{z}^{ab}_{ij}\) by \(\mathbf{z}_{u}\). Letting \(\mathbf{S}\) be the vector containing all \(s_{u}\), our goal is to find the solution, \(\mathbf{S}^{*}\), to the following optimization problem:
\[\begin{split}\mathbf{S}^{*}=\operatorname*{argmax}_{\mathbf{S}\in\{0,1\}^{m}}\ \|\mathbf{S}\|_{0}\\ \text{s.t.}\ ||\epsilon_{uv}||_{\Sigma_{uv}}\ s_{u}s_{v}\leq\gamma\ \ \forall\ u,v,\end{split} \tag{6}\]
where, \(m\) is the number of measurements in \(\mathbf{Z}^{ab}\), \(\mathbf{z}_{u}\) is the measurement corresponding to \(s_{u}\), \(\epsilon_{uv}\) is the associated error term corresponding to measurements \(\mathbf{z}_{u}\) and \(\mathbf{z}_{v}\), and \(\Sigma_{uv}\) is the covariance matrix associated with the error \(\epsilon_{uv}\). We refer to this as the PCM problem.
Once found, we can use \(\mathbf{S}^{*}\) to index into \(\mathbf{Z}^{ab}\) and get \(\mathbf{Z}^{*}\). This consistent subset of the measurements can then be plugged into any of the existing nonlinear least squares based solvers to merge the individual robot maps into a common reference frame. In the next section, we show how this problem can be reformulated into an equivalent problem that has been well studied.
## 5 Solving PCM via Maximum Clique over Consistency Graphs
In this section, we describe how to solve the PCM problem. The goal of PCM is to determine the largest subset of the measurements \(\mathbf{Z}^{ab}\) that are pairwise internally consistent. This pairwise consistency is enforced by the \(m^{2}\) constraints listed in Eq. (6). It is important to note that the norm on the left-hand side of the constraints does not contain any of the decision variables \(s_{u}\). These distance measures can be calculated in pre-processing and combined into a matrix of consistency measures \(\mathbf{Q}\), where each element \([\mathbf{Q}]_{uv}=q_{uv}=||\epsilon_{uv}||_{\Sigma_{uv}}\), corresponds to the consistency of measurements \(\mathbf{z}_{u}\) and \(\mathbf{z}_{v}\). This process is depicted in steps B and C in Fig. 1.
We will now introduce the concept of a consistency graph.
**Definition 2**: _A consistency graph is a graph \(G=\{V,\mathcal{E}\}\) where each vertex \(v\in V\) represents a measurement and each edge \(e\in\mathcal{E}\) denotes consistency of the vertices it connects._
We can transform the matrix of consistency measures \(\mathbf{Q}\) into the adjacency matrix for a consistency graph if we threshold it by \(\gamma\) and make it symmetric by requiring that both \(q_{uv}\) and \(q_{vu}\) be less than or equal to \(\gamma\) to insert an edge into the graph. An example adjacency matrix and consistency graph are shown in step D of Fig. 1.
A _clique_ in graph theory is defined as a subset of vertices in which every pair of vertices has an edge between them and the _maximum clique_ is the largest such subset of nodes in the graph. A clique of the consistency graph corresponds to a _pairwise internally consistent set_ of measurements because every measurement is pairwise consistent with every other measurement in the set. Thus, the solution to the problem defined in Eq. (6) is the maximum clique of the consistency graph (see step E of Fig. 1).
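A compact illustration (ours) of steps C-E of Fig. 1: threshold and symmetrize \(\mathbf{Q}\), build the consistency graph, and take its maximum clique. networkx is used here only for brevity; the paper itself relies on the exact and heuristic solvers discussed next.

```python
# Sketch of PCM as a maximum-clique problem over the consistency graph.
import numpy as np
import networkx as nx

def pcm_max_clique(Q, gamma):
    """Q[u, v] holds the consistency measure q_uv; returns indices of Z*."""
    A = (Q <= gamma) & (Q.T <= gamma)        # require both q_uv and q_vu <= gamma
    np.fill_diagonal(A, False)
    G = nx.from_numpy_array(A.astype(int))
    return max(nx.find_cliques(G), key=len)  # largest of the maximal cliques

# Toy example: measurements 0, 1, 2 are mutually consistent, 3 is not.
Q = np.array([[0.0, 0.5, 1.0, 9.0],
              [0.5, 0.0, 0.7, 8.0],
              [1.0, 0.7, 0.0, 7.0],
              [9.0, 8.0, 7.0, 0.0]])
print(sorted(pcm_max_clique(Q, gamma=3.0)))  # -> [0, 1, 2]
```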
In graph theory, the problem of finding the maximum clique for a given graph is called the maximum clique problem and is an NP-hard problem (Wu and Hao (2015)). Zuckerman (2006) and Feige et al. (1991) show that the maximum clique problem is also hard to approximate, meaning that finding a solution arbitrarily close to the true solution is also NP-hard. Dozens of potential solutions have been proposed, each of which can be classified as either an exact or a heuristic algorithm. All of the exact algorithms are exponential in complexity and are usually based on branch and bound, while the heuristic algorithms often try to exploit some type of structure in the problem, making them faster, but not guaranteeing the optimal solution (see Wu and Hao (2015)).
Pattabiraman et al. (2015) proposed a method that aggressively prunes the search tree and is able to find maximum-clique solutions for large sparse graphs relatively quickly. They present both an exact algorithm as well as a heuristic version that can be used when the exact algorithm becomes intractable. Though our method could theoretically use any one of the proposed maximum clique algorithms, we selected the one proposed by Pattabiraman et al. (2015) because of its simplicity, parallelizability, and open-source implementation.
## 6 Pairwise Consistency Maximization Evaluation
In this section, we evaluate the performance of PCM on a variety of synthetic and real-world datasets. For comparison, we implemented single cluster graph partitioning (SCGP) (Olson et al. (2005)), dynamic covariance scaling (DCS) (Agarwal et al. (2013)), and random sample consensus (RANSAC) (Hartley and Zisserman (2003)).
We implemented SCGP as described in Olson et al. (2005), with the exception of using an off the shelf eigen-factorization library as opposed to the power method for simplicity. We implemented DCS as described in the original paper with \(\phi=5\).
We implemented RANSAC by iteratively selecting a single, random inter-map measurement and evaluating the likelihood of the other measurements given the model estimated from the sampled measurement. Because the
processing time for this evaluation is so low (given that the Mahalanobis distance evaluations were performed in pre-processing), we exhaustively iterate through all the measurements and evaluate the likelihood of the other measurements with respect to it in turn. We then return the set of measurements that are likely given the sampled point with the largest support. As explained in Section 6.2, RANSAC is especially sensitive to the likelihood threshold and does not check pairwise consistency.
For PCM, we present results using the exact maximum clique algorithm (PCM-Exact), as well as the heuristic algorithm (PCM-HeuPatt) as explained by Pattabiraman et al. (2015).
### Simulated 1D World
First, we simulated a one dimensional world where the robot has a single state variable, \(x\), and receives measurements that are direct observations of that state. We simulate inlier measurements by drawing multiple samples from a Gaussian distribution with a fixed variance and mean \(x\). We simulate both random and perceptually aliased outliers by drawing multiple samples from a single Gaussian with fixed mean and variance and several others from individual Gaussians with random means and variances. We assume the variances are known and are used when computing Mahalanobis distance.
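A small sketch (ours) of this 1D setup; the counts, means, and variances below are illustrative rather than the paper's exact parameters. For direct scalar observations of \(x\), two measurements are consistent when the normalized squared difference is below the threshold.

```python
# 1D world: inliers around the true state, one perceptually aliased cluster,
# and random outliers; Q holds the pairwise consistency measures.
import numpy as np

rng = np.random.default_rng(0)
x_true = 0.0
z = np.concatenate([rng.normal(x_true, 1.0, 15),    # inliers around the true state
                    rng.normal(25.0, 1.0, 5),       # perceptually aliased cluster
                    rng.uniform(-50.0, 50.0, 10)])  # random outliers
var = np.full(z.size, 1.0)

# (z_u - z_v)^2 / (sigma_u^2 + sigma_v^2), chi-squared with 1 degree of freedom.
Q = (z[:, None] - z[None, :]) ** 2 / (var[:, None] + var[None, :])
```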
#### 6.1.1 Comparison with Combinatorial
For this first experiment, we compare how well PCM-Exact and PCM-HeuPatt approximated the combinatorial gold standard in Eq. (6). We generated 100,000 sample worlds. On each of these samples, we estimated the pairwise consistent set using the combinatorial solution as well as PCM-Exact, PCM-HeuPatt, SCGP, and RANSAC.
Fig. 2 shows a comparison between these four methods with respect to the combinatorial solution. Both PCM methods enforce consistency of the returned measurements. PCM-Exact returns the same number of points as the combinatorial solution 100 percent of the time, while PCM-HeuPatt returns the same number of points 98.97 percent of the time. SCGP varies significantly in both the number of points returned and the consistency of those measurements. RANSAC also sometimes returns more or less points than the combinatorial solution and also fails to enforce measurement consistency.
Interestingly, RANSAC is especially dependent on threshold value. The threshold value for RANSAC is centered around a single point and thus is not the same as the threshold value for PCM. If the value is set too high, the number of inconsistent measurements increases. If it is set too low, the total number of returned measurements decreases below the optimal. In Fig. 2, RANSAC's threshold is set arbitrarily to show a single snapshot.
#### 6.1.2 Timing Comparison
We also used this 1D-world to evaluate the timing characteristics of the different algorithms. To test this, we generated 500 sample worlds each for an increasing number of measurement points. The results are shown in Fig. 3. Note, these timing results can be significantly improved through parallelization.
Figure 4: The true positive rate (TPR = TP / (TP + FN)), false positive rate (FPR = FP / (FP + TN)), and average normalized chi-squared value (Chi2) of PCM-Exact, PCM-Heu, and RANSAC versus the threshold value \(\gamma\). The TPR and FPR can be thought of as the probability of getting a true positive or a false positive. The Chi2 value should be close to zero if the measurements in the graph are consistent.
Figure 3: A plot of the evaluation times of the different methods versus the number of measurements being tested. The combinatorial solution takes exponential time and PCM-Exact takes exponential time in the worst case, while the other methods are polynomial in the number of measurements. (This excludes the time to estimate the distance matrix \(\mathbf{Q}\), which is required for all methods.)
Figure 2: Histograms that evaluate how well PCM, SCGP (Olson et al. (2005)), and RANSAC (Hartley and Zisserman (2003)) approximate the combinatorial maximum pairwise consistent set in Eq. (6). The first row of histogram plots shows the size of the measurement set as compared to the maximum consistent set size. The second row of histograms shows the number of inconsistent pairs returned with respect to the set \(\gamma\) threshold on Mahalanobis distance.
### Synthetic 2D Comparison
To test our method's accuracy and consistency on a full SLAM dataset, we took a portion of the City10000 dataset released with iSAM (Kaess et al. (2008)) and split it to form two separate robot trajectories. After removing all factors connecting the two graphs, we generated 81 different versions of this dataset by randomly selecting a subset of the true loop closures between the two graphs to be used as inliers, as well as randomly adding outlier loop closures to the graph. As before, some of the outliers are internally consistent to simulate perceptual aliasing and some are generated randomly with random mean and covariance. In this experiment, the number of inlier loop closures was 15, there were two groups of 5 perceptual aliased outliers, and the number of random outliers was 90.
#### 6.2.1 Parameter Sweep
Because RANSAC is significantly dependent on the threshold value set, we ran a parameter sweep for the likelihood threshold over all 81 datasets. Fig. 4 summarizes this experiment. The true positive rate (TPR) and false positive rate (FPR) of PCM is relatively unaffected by the choice of the threshold parameter as long as it is less than about 85 percent. RANSAC, on the other hand, has a different FPR for each threshold selected and never has an FPR of zero. This is because PCM conservatively evaluates the consistency of each measurement and determines consistency of a group of measurements as a whole, while RANSAC selects the largest set of measurements that are likely given a single randomly selected measurement. The last plot shows the average normalized chi-squared value of the residual for the entire graph after solving with the selected factors. This value should be close to zero if the graph is consistent.
The results show that PCM does significantly better at restricting the set of measurements to those that are consistent with one another, decreasing the likelihood of getting a false measurement. This is essential because of the extreme susceptibility of SLAM to false loop closures. PCM-HeuPatt is also almost indistinguishable from PCM-Exact.
#### 6.2.2 Accuracy Analysis
To evaluate the accuracy of PCM, we compared its performance on all 81 datasets to SCGP, RANSAC (using the two second to lowest thresholds from Fig. 4), and dynamic covariance scaling (Agarwal et al. (2013)) or DCS. \(\gamma\) for both PCM-Exact and PCM-HeuPatt was set so that it corresponded to the equivalent of 11% likelihood.
Table 1 gives an overall summary of the results. We used the Mean Squared Error (MSE) of the trajectory of the two graphs (with respect to the no-outlier case), the residual, and the normalized chi squared value of the nonlinear least squares solver as metrics to evaluate the solution accuracy. The rotation MSE was calculated via \(\epsilon_{\text{rot}}=\frac{1}{n}\sum_{i}\big{|}\big{|}\log(R_{i,\text{true}}^{\top}R_{i,\text{est}})\big{|}\big{|}_{F}\) over each pose \(i\) and the translation MSE was calculated in the normal manner. For this experiment all MSE were calculated with respect to the absolute trajectory value.
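For reference, a short sketch (ours) of the rotation-error term as written above, using the Frobenius norm of the matrix logarithm of \(R_{i,\text{true}}^{\top}R_{i,\text{est}}\).

```python
# Average rotation error over a list of ground-truth and estimated rotations.
import numpy as np
from scipy.linalg import logm

def rotation_error(R_true_list, R_est_list):
    errs = [np.linalg.norm(logm(Rt.T @ Re), 'fro')
            for Rt, Re in zip(R_true_list, R_est_list)]
    return float(np.mean(errs))
```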
PCM has the lowest trajectory MSE, and DCS has the lowest residual. Note that DCS also has the highest trajectory MSE, which is as expected. DCS seeks to minimize the least-squares residual error and depends on a good initialization to determine what measurements are consistent enough to not be turned off. Without this initialization, DCS has no reason to believe that the inter-map factors are not outliers and thus turns off all the inter-map factors in the graph.
Once given the matrix \(\mathbf{Q}\), RANSAC and both PCM methods take about the same amount of time to find the consistent set. The average time to estimate the Mahalanobis distances without the use of analytical Jacobians, parallelization, and incremental updates was 70.8s.
Fig. 5 shows example plots of the estimated maps. Both SCGP and RANSAC have trouble disabling all inconsistent measurements. PCM-HeuPatt accurately approximates PCM-Exact and both do well at disabling inconsistent measurements. When PCM does accept measurements not generated from the true distribution, they are still consistent with the uncertainty of the local graphs.
### Real-World Pose-Graph SLAM
We evaluate PCM on the 3D University of Michigan North Campus Long-Term Vision and LiDAR Dataset (NCLT) released by Carlevaris-Bianco et al. (2015). The NCLT dataset was collected using a Segway robot equipped with a LiDAR and Microstrain IMU, along with a variety of other sensors. There are 27 sessions in all with an average length of 5.5 km per session.
For our experiment, we took two sessions collected about two weeks apart, removed the first third of one and the last third of the other, and then generated potential loop-closure measurements between the two graphs by aligning every fourth scan on each graph using GICP (Segal et al. (2009)) and selecting the match with the lowest cost function. We then labeled these registrations as inliers and outliers by thresholding the translation and rotation mean-squared error of the estimated pose transformations with respect to the ground-truth poses for the dataset derived by performing pose-graph optimization on all 27 sessions. Finally, to increase the difficulty of the dataset, we removed all but one sixteenth of the measurements labeled as inliers from the graph, resulting in a graph with ten inliers and 98 outliers.
In this experiment, we compare PCM-HeuPatt with DCS, SCGP, and RANSAC. Fig. 7 shows the normalized chi-squared value of the resulting graphs for RANSAC and PCM versus threshold. Table 2 provides a comparison of results and Fig. 6 shows the estimated maps. The MSE was calculated using the same method as in the prior section, however in this test we calculated trajectory and map relative pose error separately. The trajectory MSE calculates the error in the estimated relative pose between consecutive nodes allowing us to evaluate graph correctness, while the relative map pose MSE evaluates the offset between the maps.
PCM results in the graph with the best trajectory MSE and the best translational MSE for the relative pose of the two graphs and results in a consistent graph regardless of threshold. It also detects all the inlier measurements as well as three of the measurements labeled as outliers. DCS once again disables all measurements. SCGP results in a good graph but only enables three of the inlier measurements, and finally RANSAC (for both of the lowest thresholds tried) enables all inliers and several outliers and results in an inconsistent graph regardless of the threshold selected.
Note that while in this experiment PCM admits more false positives than in the last experiment, the measurements it
accepts are consistent with the inlier measurements and local trajectories even though they were labeled as outliers (Fig. 7). In fact, notice that PCM has a better MSE for the relative map pose than the no-outlier (NO-OUT) version of the graph. This suggests that by maximizing the consistent set, PCM is selecting measurements that are actually inliers but were mis-labeled as outliers when compared to the ground-truth. After verification this turned out to be the case.
It is also important to note that although SCGP results in a good graph for this dataset, as shown in the earlier experiment, this does not occur in all cases. In addition, if it fails to select the maximum consistent set of measurements, this can be catastrophic in the case of perceptual aliasing.
## 7 Group-\(k\) Consistency Maximization
In this section, we generalize the notion of consistency to sets of \(k>2\) measurements and use this generalized definition to formulate a combinatorial optimization problem.
While maximizing pairwise consistency in Mangelson et al. (2018) outperformed other existing robust SLAM methods, pairwise consistency is not always a sufficient constraint to remove outlier measurements. For example, a set of three range measurements may all intersect in a pairwise manner even if the set of measurements do not intersect at a common point, indicating that they are pairwise consistent but not group-3 consistent.
As currently framed, the consistency check described in Mangelson et al. (2018) is only dependent on two measurements. In some scenarios, such as with the range measurements described above, we may want to define
| Method | Trans. MSE Avg (m²) | Trans. MSE Std | Rot. MSE Avg | Rot. MSE Std | Residual Avg | Residual Std | Inliers TPR | Inliers FPR | Chi2 Avg | Chi2 Std | Eval Time (sec) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| NO-OUT | 0.0 | 0.0 | 0.0 | 0.0 | 32.320 | 0.117 | 1.0 | 0.0 | N/A | N/A | N/A |
| DCS | 183077.917 | 1194931.105 | 4.169 | 3.285 | 31.687 | 0.507 | 0.0 | 0.0 | 0.013 | <0.001 | N/A |
| SCGP | 623.278 | 1278.493 | 0.648 | 1.535 | 237385.743 | 894303.187 | 0.668 | 0.051 | 66.734 | 364.427 | 0.006 |
| RANSAC-1% | 5.688 | 21.976 | 0.009 | 0.404 | 185.190 | 587.590 | 0.998 | 0.006 | 0.076 | 0.239 | <0.001 |
| RANSAC-3.5% | 183.150 | 636.441 | 0.236 | 0.791 | 3807.570 | 18478.340 | 0.974 | 0.019 | 1.552 | 7.530 | <0.001 |
| PCM-Exact-11% | 0.276 | 1.537 | <0.001 | 0.003 | 45.057 | 105.385 | 0.997 | 0.001 | 0.018 | 0.043 | <0.001 |
| PCM-HeuPatt-11% | 0.276 | 1.537 | <0.001 | 0.003 | 45.057 | 105.385 | 0.997 | 0.001 | 0.018 | 0.043 | <0.001 |

Table 1: Results from using DCS, SCGP, RANSAC (with two different thresholds), and PCM to robustly merge maps generated from a synthetic city dataset. These results are a summary of runs on 81 different generated datasets. We evaluated the mean squared error (MSE) of the two graphs with respect to the no-outlier case (NO-OUT).
| Method | Rel. Pose MSE Trans. (m²) | Rel. Pose MSE Rot. | Traj. MSE Trans. (m²) | Traj. MSE Rot. | Residual Error | Inliers TP | Inliers FP | Chi2 Value | Eval Time (sec) |
|---|---|---|---|---|---|---|---|---|---|
| NO-OUT | 455.4763 | 0.0308 | 0.0501 | 0.0005 | 765.072 | 10 | 0 | 0.3428 | N/A |
| DCS | 206782.2303 | 0.7154 | 0.0502 | 0.0005 | 724.061 | 0 | 0 | 0.2568 | N/A |
| SCGP | 522.3252 | 0.0162 | 0.0502 | 0.0005 | 748.351 | 3 | 0 | 0.3417 | 0.0021 |
| RANSAC-1% | 1244.3818 | 0.0697 | 0.1036 | 0.0015 | 4228.21 | 10 | 6 | 1.8643 | <0.001 |
| RANSAC-3.5% | 13507.072 | 17.4156 | 0.1146 | 0.0040 | 7457.54 | 10 | 3 | 3.2795 | 0.0001 |
| PCM-HeuPatt | 386.6876 | 0.0245 | 0.0501 | 0.0005 | 817.803 | 10 | 3 | 0.3635 | 0.0001 |

Table 2: Results from using DCS, SCGP, RANSAC (with two different thresholds), and PCM to robustly merge segments extracted from two sessions of the NCLT dataset. NO-OUT corresponds to a version with none of the measurements labeled as outliers. We evaluated the mean squared error (MSE) of the two graphs with respect to the ground truth.
Figure 5: Example plots of the maps estimated by PCM-Exact, PCM-HeuPatt, RANSAC, DCS, and SCGP for one of the generated city datasets. Correctly labeled inlier factors are shown in bold dark blue with correctly disabled outliers shown as dotted gray. Accepted outliers are shown in bold red with disabled inliers shown in pink.
a consistency function that depends on more than two measurements.
### Group-\(k\) Consistency
To handle the situation where consistency should be enforced in groups of greater than two measurements we now define a novel notion of _group-\(k\) internally consistent sets_.
**Definition 3**: _A set of measurements \(\mathbf{\widetilde{Z}}\) is **group-\(\mathbf{k}\) internally consistent** with respect to a consistency metric \(C\) and the threshold \(\gamma\) if_
\[C(\{\mathbf{z}_{\mathrm{o}},\cdots,\mathbf{z}_{k}\})\leq\gamma,\quad\forall \quad\{\mathbf{z}_{\mathrm{o}},\cdots,\mathbf{z}_{k}\}\in\mathcal{P}_{k}( \mathbf{\widetilde{Z}}) \tag{7}\]
_where \(C\) is a function measuring the consistency of the set of measurements \(\{\mathbf{z}_{\mathrm{o}},\cdots,\mathbf{z}_{k}\}\), \(\mathcal{P}_{k}(\mathbf{\widetilde{Z}})\) is the set of all permutations of \(\mathbf{\widetilde{Z}}\) with cardinality \(k\), and \(\gamma\) is chosen a priori._
This definition of consistency requires that every combination of measurements of size \(k\) be consistent with \(C\) and \(\gamma\). We note that the \(n\)-invariant introduced by Shi et al. (2021) is a specific case of our generalized consistency function that does not depend on the relative transformation between poses where measurements were taken. The appropriate choice of consistency function is problem dependent and therefore left to the user to determine. However, we define consistency functions for our two target applications in Section 10 and Section 12.
As with pairwise consistency, establishing group-\(k\) consistency does not guarantee full joint consistency. We settle for checking group-\(k\) consistency and use it as an approximation for joint consistency to keep the problem tractable.
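As an illustration of Definition 3 with \(k=3\), the sketch below (ours, and not necessarily the consistency function the paper later defines for range-based SLAM) treats three 2D range measurements as consistent when a single landmark position explains all three within tolerance; the linearized trilateration and the use of its residual as \(C(\cdot)\) are our choices.

```python
# Group-3 consistency check for 2D range measurements via trilateration residual.
import numpy as np

def group3_range_residual(centers, ranges):
    """centers: (3, 2) observation points; ranges: (3,) measured ranges."""
    (x1, y1), (x2, y2), (x3, y3) = centers
    r1, r2, r3 = ranges
    # Subtracting the circle equations pairwise gives a linear system A p = b.
    A = 2.0 * np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(np.linalg.norm(np.linalg.norm(centers - p, axis=1) - ranges))

centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
ranges = np.linalg.norm(centers - np.array([2.0, 1.0]), axis=1)  # consistent triple
print(group3_range_residual(centers, ranges))                    # ~0
print(group3_range_residual(centers, ranges + [0.0, 0.0, 2.0]))  # noticeably larger
```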
### Group-\(k\) Consistency Maximization
Analogous to pairwise consistency defined by Mangelson et al. (2018), we now want to find the largest subset of measurements that is internally _group-\(k\) consistent_. We use the same assumptions described in Section 4.2.
As in PCM, our goal is to find the largest consistent subset of \(\mathbf{Z}\). We accomplish this by introducing a binary switch variable \(s_{u}\) for each measurement in \(\mathbf{Z}\) and let \(s_{u}\) be 1 if the measurement is contained in the chosen subset and 0 otherwise. Letting \(\mathbf{S}\) be the vector containing all \(s_{u}\), our goal is to find the solution \(\mathbf{S}^{*}\) to the following optimization problem
\[\begin{split}\mathbf{S}^{*}=\operatorname*{argmax}_{\mathbf{S} \in\{0,1\}^{m}}\left\|\mathbf{S}\right\|_{0}\\ \text{s.t.}\ C(\{\mathbf{z}_{\mathrm{o}},\cdots,\mathbf{z}_{k}\} )\ s_{0}\cdots s_{k}\leq\gamma\\ \forall\{\mathbf{z}_{\mathrm{o}},\cdots,\mathbf{z}_{k}\}\in\mathcal{ P}_{k}(\mathbf{Z})\end{split} \tag{8}\]
where \(m\) is the number of measurements in \(\mathbf{Z}\) and \(\mathbf{z}_{u}\) is the measurement corresponding to \(s_{u}\). We refer to this problem as the Group-\(k\) Consistency Maximization, or G\(k\)CM, problem. This problem is a generalization of PCM, and for \(k=2\) they become identical.
Figure 8: An example of a generalized consistency graph with edges made of \(3\)-tuples. (a) highlights that each edge denotes consistency of \(3\) measurements. (b) highlights the maximum clique of the generalized graph in blue.
Figure 6: Plots of the trajectories of two partial sessions of the NCLT dataset as estimated by PCM-HeuPatt, RANSAC, DCS, and SCGP.
Figure 7: The normalized chi-squared value (Chi2) of the resulting NCLT graph versus the threshold value \(\gamma\) for PCM-HeuPatt and RANSAC. The Chi2 value for RANSAC is never below one, signifying that the selected factors are probabilistically inconsistent, while the Chi2 value for PCM is relatively constant and below 1.0 regardless of threshold.
### Solving Group-\(k\) Consistency Maximization
As with PCM, we can solve the G\(k\)CM problem by finding the maximum clique of a consistency graph. However, because we want to find the largest subset that is group-\(k\) internally consistent, we need to operate over generalized graphs. In graph theory, a _k-uniform hypergraph_ (or _generalized graph_), \(G\), is defined as a set of vertices \(V\) and a set of \(k\)-tuples of those vertices \(\mathcal{E}\) (Bollobás (1965)). Each \(k\)-tuple is referred to as an edge and a _clique_ within this context is a subgraph of \(G\) where every possible edge exists in \(\mathcal{E}\). We now introduce the concept of a _generalized consistency graph_:
**Definition 4:**_A generalized consistency graph is a generalized graph \(G=\{V,\mathcal{E}\}\) with \(k\)-tuple edges, where each vertex \(v\in V\) represents a measurement and each edge \(e\in\mathcal{E}\) denotes consistency of the vertices it connects._
Solving Eq. (8) is equivalent to finding the maximum clique of a generalized consistency graph and consists of the following two steps: Building the generalized consistency graph; and finding the maximum clique. The next two sections explain these processes in more detail.
## 8 Building the Generalized Consistency Graph
The graph is built by creating a vertex for each measurement and performing the relevant consistency checks to determine what edges should be added. If the graph is created all at once, there are \(\binom{m}{k}\) checks to perform. If the graph is being built incrementally by checking the consistency of a newly added measurement with those already in the graph then the number of checks is \(\binom{m-1}{k-1}\). This means that as \(k\) increases the number of checks that need to be performed increases factorially with \(k\). Thus, it is important that the consistency function in Eq. (7) be computationally efficient. Note that all the checks are independent, allowing for the computation to be parallelized on a CPU or GPU to decrease the time to perform the necessary checks.
Due to the explosive growth in the number of checks, we develop a method that can substantially reduce the number of checks to be computed. We utilize the fact that a set of measurements that are group-\(k\) consistent will almost always be group-\((k-1)\) consistent, assuming that the group-\(k\) and group-\((k-1)\) consistency functions are similar. Using this observation, we can first perform \(\binom{m}{k-1}\) checks (\(\binom{m-1}{k-2}\) checks in incremental scenarios) and record which combinations of measurements failed the consistency check. Then when performing the group-\(k\) check we only need to check combinations for which all groups of \((k-1)\) measurements passed their respective check. The process can be done starting with group-2 checks and building up to the final group-\(k\) check. We show in Section 9.2 that we can significantly decrease time spent performing consistency checks.
For example, three range measurements are consistent if they all intersect at the same point. However, if any pair of the three measurements do not intersect then any group-3 consistency check containing that pair of measurements will also fail. Thus, when processing measurements, we can check the \(\binom{m}{2}\) pairwise intersections of measurements and do only the group-3 checks on combinations where consistency is possible instead of the total \(\binom{m}{3}\) checks.
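A sketch (ours) of this pruning strategy for \(k=3\): run the cheap pairwise checks first and evaluate a triple only when all three of its pairs passed. Here `pair_ok` and `triple_ok` are placeholder consistency checks supplied by the application.

```python
# Hierarchical graph construction: pairwise prefilter before group-3 checks.
from itertools import combinations

def build_group3_edges(measurements, pair_ok, triple_ok):
    m = len(measurements)
    ok_pairs = {(i, j) for i, j in combinations(range(m), 2)
                if pair_ok(measurements[i], measurements[j])}
    edges = []
    for i, j, k in combinations(range(m), 3):
        if {(i, j), (i, k), (j, k)} <= ok_pairs:   # every pair survived the group-2 check
            if triple_ok(measurements[i], measurements[j], measurements[k]):
                edges.append((i, j, k))
    return edges
```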
## 9 Finding the Maximum Clique of a Generalized Graph
Once the graph has been built, we can find the largest consistent set by finding the maximum clique of the graph. The PCM algorithm used the exact and heuristic methods presented by Pattabiraman et al. (2015) but these algorithms were not designed for generalized graphs and used only a single thread. Here, we generalize these algorithms to function over \(k\)-uniform hypergraphs and provide a parallelized implementation of their algorithms.
We start by defining relevant notation. We denote the \(n\) vertices of the graph \(G=(V,\mathcal{E})\) as \(\{v_{1},\cdots,v_{n}\}\). Each vertex has a neighborhood \(N(v_{i})\), that is the set of vertices connected to that vertex by at least one edge. The degree of \(v_{i}\), \(d(v_{i})\), is the number of vertices in its neighborhood. We also define an edge set, \(E(v_{i})\), for each vertex consisting of a set of \((k-1)\)-tuples of vertices. The edge set is derived from the set of \(k\)-tuples in \(\mathcal{E}\) containing the given vertex by removing the given vertex from each edge. Figure 9 shows an example of these values for a given graph.
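A minimal container for these quantities might look as follows. This is an illustrative sketch only (the class and method names are ours, not those of the released implementation), storing each edge as a \(k\)-element frozenset.

```python
class GeneralizedGraph:
    """Minimal k-uniform hypergraph exposing the quantities used by the clique
    algorithms: neighborhood N(v), degree d(v), and edge set E(v)."""

    def __init__(self, k):
        self.k = k
        self.vertices = set()
        self.edges = set()              # set of frozensets, each of size k

    def add_edge(self, *verts):
        assert len(verts) == self.k
        self.vertices.update(verts)
        self.edges.add(frozenset(verts))

    def neighborhood(self, v):
        # Vertices sharing at least one edge with v
        return {u for e in self.edges if v in e for u in e} - {v}

    def degree(self, v):
        return len(self.neighborhood(v))

    def edge_set(self, v):
        # E(v): each edge containing v with v removed, i.e. (k-1)-tuples
        return {e - {v} for e in self.edges if v in e}
```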
### Algorithm Overview
The generalized exact and heuristic algorithms presented in Algorithm 1 and Algorithm 2 respectively are similar in structure to the algorithms by Pattabiraman et al. (2015) but require additional checks to guarantee a valid clique is found since the algorithms now operate over generalized graphs.
The exact algorithm, Algorithm 1, begins with a vertex \(v\) and finds cliques of size \(k\) that contain \(v\) (MaxClique line 5). A set of vertices, \(U\), that would increase the clique size by one is found (MaxClique line 11) from the set of edges \(R\) that a valid candidate vertex must have (MaxClique line 7). The Clique function then recursively iterates through potential cliques and updates \(R\) and \(U\) (Clique lines 13, 16). The clique is tracked with \(S\) and a check is performed to see if \(S>S_{max}\) where \(S_{max}\) is replaced with \(S\) if the check passes. The process is repeated for each vertex in the graph (MaxClique line 3). The exact algorithm evaluates all possible cliques, and as such, the time complexity of the exact algorithm is exponential in the worst case.
The heuristic algorithm, Algorithm 2, has a similar structure to the exact algorithm but uses a greedy search to find a potential maximum clique more quickly. For each node with a degree greater than the size of the current maximum clique (MaxCliqueHeu line 4), the algorithm selects a clique
Figure 9: Examples of the degree, neighborhood, and edge set definitions for generalized graphs.
of size \(k\) that has the greatest number of connections in \(E(v_{i})\) (MaxCliqueHeu line 5). This is done by summing the number of connections each node in \(N(v_{i})\) has in \(E(v_{i})\) and selecting the edge \(e\in E(v_{i})\) with the largest total number of connections. If the selected clique can potentially be made larger than \(S_{\text{max}}\), then a greedy search selects nodes based on the largest number of connections in \(E(v_{i})\) (CliqueHeu line 5). The generalized heuristic algorithm presented in Algorithm 2 has the same complexity of \(O(n\Delta^{2})\) as the original algorithm presented by Pattabiraman et al. (2015) despite the modifications made to operate on generalized graphs.
Both algorithms are guaranteed to find a valid clique and can be easily parallelized by using multiple threads to simultaneously evaluate each iteration of the for loop on line 3 of MaxClique and MaxCliqueHeu. This significantly decreases the run-time of the algorithm. Our released C++ implementation allows the user to specify the number of threads to be used.
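For reference, a compact single-threaded sketch of an exact clique search over a \(k\)-uniform hypergraph is shown below. It is a simplified stand-in for Algorithm 1, keeping the candidate-count bound but omitting the neighborhood-based pruning of Pattabiraman et al.; edges are stored as \(k\)-element frozensets and vertex labels are assumed sortable.

```python
from itertools import combinations

def is_clique_with(S, v, edges, k):
    """True if every k-subset of S + [v] that contains v is a hyperedge."""
    if len(S) + 1 < k:
        return True                       # too small for any k-subset to exist
    return all(frozenset(sub + (v,)) in edges for sub in combinations(S, k - 1))

def max_clique_exact(vertices, edges, k):
    """Exhaustive (exponential worst-case) maximum clique search."""
    best = []

    def extend(S, candidates):
        nonlocal best
        if len(S) > len(best):
            best = list(S)
        for i, v in enumerate(candidates):
            if len(S) + (len(candidates) - i) <= len(best):
                break                     # not enough candidates left to improve
            if is_clique_with(S, v, edges, k):
                extend(S + [v], candidates[i + 1:])

    extend([], sorted(vertices))
    return best
```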
A heuristic was introduced by Chang et al. (2021) to avoid computing the maximum clique from scratch in incremental scenarios. When a new measurement is received the maximum clique will either remain unchanged or a larger clique will exist. A more efficient search can be performed by only searching for cliques that contain the new measurement and comparing the largest clique with that measurement to the current maximum clique. We implement this heuristic in our maximum clique algorithms over generalized graphs so that G\(k\)CM can be performed in both batch and incremental scenarios.
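A sketch of that incremental strategy is shown below, reusing `max_clique_exact` from the sketch above (any generalized clique solver could be substituted). Note that the result is the true maximum clique only if the previous clique was itself exact; with a heuristic solver it is an approximation, mirroring the behavior of the original heuristic.

```python
def incremental_max_clique(prev_best, new_vertex, edges, k, solver=max_clique_exact):
    """If a larger clique exists after adding `new_vertex`, it must contain it,
    so restrict the search to that vertex and its neighborhood and keep the
    better of the new candidate and the previous maximum clique."""
    neighborhood = {u for e in edges if new_vertex in e for u in e}
    local_edges = {e for e in edges if e <= neighborhood}
    candidate = solver(neighborhood, local_edges, k)
    return candidate if len(candidate) > len(prev_best) else list(prev_best)
```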
### Evaluation of generalized graph operations
We carried out several experiments to evaluate the effectiveness of Algorithm 1, Algorithm 2, and our hierarchy based approach to evaluating consistency. We also provide a comparison of Algorithms 1 and 2 with the MILP algorithm in Shi et al. (2021) and the maximum \(k\)-core algorithm in Shi et al. (2021) which, to our knowledge, are the only other algorithms that find or approximate the maximum clique of a generalized graph.
#### 9.2.1 Hierarchy of Consistency Evaluation
In this first experiment, we evaluate the effectiveness of using a hierarchy of consistency checks to decrease the time required to perform all the required checks in a group-4 scenario. For a given number of measurements \(m\), we randomly picked \(\frac{m}{10}\) measurements that would be in the maximum clique, meaning that these measurements are always consistent with each other. We also randomly identified other groups of
measurements that are consistent with each other so that the consistency graph that would be formed would have 20% of the total edges present in the graph. Once all the measurements were generated we evaluated the runtime to check the consistency of all measurements in both a batch and incremental manner. We performed three different evaluations. The first was using just a group-4 check, the second was a group-3 check followed by a group-4 check, and the last was a group-2 check followed by a group-3 check followed by a group-4 check. We performed each test 100 times and took an average of the run time.
We first tested the efficacy of this heuristic in a batch scenario where all measurements are processed after the data has been collected. For a given 4-uniform hypergraph size we generated a graph with \(n\) nodes as described previously. We varied the number of measurements between 30 and 110 and tested in increments of 10 measurements recording the time required to process all checks. We repeated the test 100 times and averaged the time to perform all checks. As can be seen in Fig. 10 for each hierarchy used we achieved about an order of magnitude speed up for this setup. Using just a group-4 check required an average of 13.9 s to perform the checks for all 110 measurements and we reduced that time to an average of 0.244 s when group-2 and group-3 checks were used to filter out inconsistent combinations.
We also tested the hierarchy of checks in incremental scenarios. The setup for the incremental scenario was the same as the batch scenario except that we started with 10 measurements and measured the time required to process all the required checks as each new measurement came in. We saw similar speed gains as those found in the batch scenario decreasing the required time from 305 ms to 0.73 ms. The entirety of the results for this experiment can be seen in Fig. 10.
We note that this approach is most effective in high outlier regimes where the vast majority of the measurements will not be consistent with one another. In scenarios where the outlier ratio is low, using the proposed hierarchy approach provides no benefit and can even result in increased run times because few combinations will be filtered out. As such we note that the efficacy of this technique will be situation dependent. This technique also shows one advantage of our consistency function formulation over the use of invariants as used in several other works (Shi et al. (2021b); Lusk et al. (2021); Gentner et al. (2023)). This technique may not be usable with invariant functions since a lower-order invariant may not exist for a given application.
#### 9.2.2 Timing Comparison of Maximum Clique Algorithms
In this second experiment, we evaluate the runtime characteristics of the proposed maximum clique algorithms over generalized graphs in Algorithms 1 and 2 as well as the MILP algorithm (Shi et al. (2021a)) and the maximum k-core method (Shi et al. (2021b)).
We begin by outlining how to find a maximum \(k\)-core as well as the MILP problem. A \(k\)-core of a graph \(G\) is the largest subgraph of \(G\) such that every vertex in the subgraph has a degree of at least \(k\). The maximum \(k\)-core is the \(k\)-core with maximum \(k\) for \(G\), or equivalently the \(k\)-core for which the (\(k+1\))-core is the empty set. We base our maximum \(k\)-core algorithm on the work by Matula and Beck (1983), which presents a linear-time algorithm. The \(k\)-core algorithm used by Shi et al. (2021b) finds the \(k\)-core of a 2-uniform hypergraph by embedding the \(k\)-uniform hypergraph into a 2-uniform hypergraph. For example, an edge in a 3-uniform hypergraph that connects nodes \(a\), \(b\), and \(c\) would form three edges in a 2-uniform graph. These edges would connect nodes \(a\) and \(b\), \(a\) and \(c\), and \(b\) and \(c\). While Shi et al. (2021b) note that the maximum \(k\)-core often provides a good approximation of the maximum clique, we note that this is for the maximum clique of a 2-uniform hypergraph. We will show later that this does not always hold for \(k\)-uniform hypergraphs.
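A sketch of this baseline is shown below: the hyperedges are first projected onto pairwise edges, and a core with maximal \(k\) is then found by repeatedly peeling a minimum-degree vertex. This is a simple variant rather than the bucket-based linear-time version of Matula and Beck, and the function names are ours.

```python
from itertools import combinations

def embed_pairwise(edges_k):
    """Project each k-tuple hyperedge onto its pairwise (2-uniform) edges."""
    adj = {}
    for e in edges_k:
        for a, b in combinations(tuple(e), 2):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    return adj

def max_core(adj):
    """Repeatedly remove a minimum-degree vertex; the remaining set at the step
    with the largest minimum degree is a k-core with maximal k."""
    deg = {v: len(n) for v, n in adj.items()}
    remaining = set(adj)
    best_k, best_core = -1, set()
    while remaining:
        v = min(remaining, key=deg.get)
        if deg[v] > best_k:
            best_k, best_core = deg[v], set(remaining)
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return best_k, best_core
```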
The MILP algorithm developed in Shi et al. (2021a) is defined as follows. Given a \(k\)-uniform hypergraph \(G(V,\mathcal{E})\) where \(|V|=N\) the maximum clique can be found by solving the following MILP:
\[\max_{\mathbf{b}\in\{0,1\}^{N}}\ \sum_{i=1}^{N}b_{i}\quad\text{s.t.}\quad\sum_{i\in\mathcal{M}}b_{i}\leq k-1,\ \ \forall\,\mathcal{M}\subset V,\ |\mathcal{M}|=k,\ \mathcal{M}\notin\mathcal{E} \tag{9}\]
The algorithm seeks to maximize the number of vertices in the group subject to the constraints that for every potential edge \(\mathcal{M}\) that does not appear in \(\mathcal{E}\), the sum of the variables \(b_{i}\) must be less than \(k\). We implement this MILP algorithm in C++ using the Gurobi solver (Gurobi (2017)).
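A Python sketch of the same formulation is given below, assuming the gurobipy bindings are available (the implementation referenced in the paper is in C++; the function and variable names here are ours).

```python
import gurobipy as gp
from gurobipy import GRB
from itertools import combinations

def max_clique_milp(n_vertices, edges, k):
    """Maximum clique of a k-uniform hypergraph via the MILP in Eq. (9).
    `edges` is a set of frozensets of vertex indices, each of size k."""
    m = gp.Model("generalized_max_clique")
    b = m.addVars(n_vertices, vtype=GRB.BINARY, name="b")
    m.setObjective(gp.quicksum(b[i] for i in range(n_vertices)), GRB.MAXIMIZE)
    # One constraint for every k-subset of vertices that is NOT a hyperedge.
    for M in combinations(range(n_vertices), k):
        if frozenset(M) not in edges:
            m.addConstr(gp.quicksum(b[i] for i in M) <= k - 1)
    m.optimize()
    return [i for i in range(n_vertices) if b[i].X > 0.5]
```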
In this experiment, we randomly generated 3-uniform hypergraphs with various node counts ranging from 25 vertices to 300 vertices. Each graph contained all the edges necessary to contain a maximum clique of cardinality ten and additional randomly selected edges to meet a specified graph density. While the run times of the algorithms are dependent on the density of the graph, for this experiment, we chose to hold the density of the graph constant at 0.1 such that approximately 10 percent of all potential edges were contained in the graph. We generated 100 sample graphs for each number of nodes and used all four of the algorithms listed above to estimate the maximum clique of each graph and measured the average run-time for each. Figure 11 shows the results of this experiment using various numbers of threads ranging from one to eight. The exact algorithm and the MILP algorithm were only used for graphs with a total
Figure 10: Time to complete all consistency checks in both batch and incremental scenarios using different hierarchies of checks. Batch results are represented by solid lines while incremental results are dashed. _Note the log scale on the time axis_.
number of nodes of 100 or less because of the exponential nature of the algorithms.
As can be seen in Fig. 11, the MILP algorithm in Shi et al. (2021a) is the slowest algorithm, followed by the exact algorithm in Algorithm 1. This is to be expected since both algorithms are guaranteed to find the maximum clique and do not use any heuristics or approximations that would cause them to find a suboptimal solution. Both the \(k\)-core and heuristic algorithm (Algorithm 2) run significantly faster, with the heuristic maximum clique algorithm using eight threads being the fastest in this test case. This is primarily due to the graph density. As noted above, the performance of both Algorithm 1 and Algorithm 2 is largely dependent on the graph's density, and run time will increase as the density increases. We suspect that as the graph density grows the runtime for Algorithm 2 would exceed that of the maximum \(k\)-core algorithm. Additionally, if we look at the rate of increase for the \(k\)-core and heuristic algorithms, the \(k\)-core algorithm seems to scale slightly better with the number of vertices in the graph. For larger graphs than those used in this experiment, we expect that the \(k\)-core algorithm will outperform Algorithm 2 in terms of speed.
#### 9.2.3 Heuristic Evaluation
In the third experiment, we again randomly generated 3-uniform hypergraphs, however, in this case, we varied the density of the graph and the size of the inserted clique, while holding the total number of nodes at 100. For each graph, we used the MaxCliqueHeu algorithm to estimate the maximum clique and then evaluated whether or not the algorithm was successful in finding a clique of the same size as the clique we inserted. We again generated 100 sample graphs for each combination of inserted-clique size and graph density. Figure 12 plots the summarized results. If the algorithm happened to return a maximum clique larger than the inserted clique, then the associated sample was dropped.
This experiment shows that the size of the maximum clique and the success rate of the proposed heuristic algorithm are correlated. In addition, it shows that, with the exception of the case when the inserted clique was very small (cardinality 5), the density of the graph and the success rate are inversely correlated. As such, the heuristic seems to perform best when the size of the maximum clique is large and/or when the connectivity of the graph is relatively sparse.
We evaluated the \(k\)-core method similarly but do not show the results in Fig. 12. In this experiment we found that the \(k\)-core never found the exact maximum clique, and further examination of the \(k\)-core showed that it was not a good approximation of the maximum clique. Our application experiments later show that the \(k\)-core can, at times, provide a good approximation of the maximum clique of a generalized graph, which leads us to conclude that the quality of the approximation is largely dependent on the structure of the generalized graph and on whether that structure is maintained when embedding the graph.
## 10 Range-based SLAM
This section of the paper will consider G\(k\)CM in the context of a single-agent range-based SLAM scenario. Given that range-based SLAM often has multiple landmarks from which measurements are being generated, we make the following additional assumption:
**Assumption 4**: _Measurements to different beacons are known to be inconsistent (i.e. data association is known). We will relax this assumption later in one of our experiments._
For this application, we will use the following \(k=4\) consistency check,
\[C(\mathbf{z}_{ai},\mathbf{z}_{bi},\mathbf{z}_{ci},\mathbf{z}_{di})=\left\|h( \mathbf{X}_{abcd},\mathbf{Z}_{abc}^{i})-\mathbf{z}_{di}\right\|_{\Sigma}\leq\gamma \tag{10}\]
where \(\mathbf{z}_{di}\) is a range measurement from pose \(d\) to beacon \(i\), \(\mathbf{X}_{abcd}\) is a tuple of poses \(\mathbf{x}_{a}\), \(\mathbf{x}_{b}\), \(\mathbf{x}_{c}\), and \(\mathbf{x}_{d}\), and \(\mathbf{Z}_{abc}^{i}\) is a tuple of range measurements from poses \(\mathbf{x}_{a}\), \(\mathbf{x}_{b}\), and \(\mathbf{x}_{c}\) to beacon \(i\). The value \(\gamma\) is a threshold value and the function \(h(\mathbf{X}_{abcd},\mathbf{R}_{abc}^{i})\) is a measurement model defined as
\[h(\mathbf{X}_{abcd},\mathbf{R}_{abc}^{i})=\left\|\mathbf{I}(\mathbf{X}_{abc},\mathbf{R}_{abc}^{i})-\mathbf{p}_{d}\right\|_{2} \tag{11}\]
where \(\mathbf{X}_{abc}\) is a tuple of poses \(\mathbf{x}_{a}\), \(\mathbf{x}_{b}\), and \(\mathbf{x}_{c}\), and \(\mathbf{p}_{i}\) is the position of pose \(i\). The function \(\mathbf{I}(\mathbf{X}_{abc},\mathbf{R}_{abc}^{i})\) is a trilateration function that depends on the input poses and the associated range measurements and returns an estimate of the beacon's location. The covariance, \(\Sigma\), is a function of the covariances on the measurements \(\mathbf{z}\) and the poses \(\mathbf{x}\). The joint covariance, \(\Sigma_{j}\), of the poses and beacon location is calculated by forming the measurement Jacobian of a factor graph and using methods described by Kaess and Dellaert (2009). Once the joint covariance has been obtained the covariance, \(\Sigma\), is calculated as \(\Sigma=H\Sigma_{T}H^{T}+R_{z_{d}}\) where \(H=\frac{\partial h}{\partial\mathbf{z}_{1}}\) and \(\Sigma_{T}=\text{blockdiag}(\Sigma_{j},\Sigma_{z_{d}})\).
The metric checks that the range to the intersection point of three range measurements matches the range of the fourth measurement at some confidence level. The check is done four times for a given set of four measurements, where each permutation of three measurements is used to localize the beacon. As with the consistency function in PCM, this consistency function follows a \(\chi^{2}\) distribution, meaning that the threshold value, \(\gamma\), can be easily chosen without knowledge of the data. Given the combinatorial nature of group consistency, the trilateration algorithm needs to be fast and accurate. The algorithm described by Zhou (2011) fits these criteria and presents a closed-form algorithm that performs comparably to an iterative nonlinear optimization approach, but without the need for an initial guess or an iterative solver.
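The sketch below illustrates the check of Eq. (10) under simplifying assumptions: `trilaterate` stands in for a closed-form solver such as Zhou (2011), and a per-measurement scalar variance replaces the full covariance \(\Sigma\) propagated through the pose and measurement Jacobians.

```python
import numpy as np
from scipy.stats import chi2

def group4_range_check(positions, ranges, variances, trilaterate, confidence=0.95):
    """Group-4 range consistency: localize the beacon from three measurements
    and test the predicted fourth range against the measured one."""
    gamma = chi2.ppf(confidence, df=1)               # threshold from the chi-square law
    for d in range(4):                                # each measurement takes a turn as z_d
        rest = [i for i in range(4) if i != d]
        beacon = trilaterate([positions[i] for i in rest], [ranges[i] for i in rest])
        residual = np.linalg.norm(beacon - positions[d]) - ranges[d]
        if residual**2 / variances[d] > gamma:        # squared Mahalanobis distance test
            return False                              # this 4-tuple is not a hyperedge
    return True
```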
### Degenerate Configurations
Since our consistency check defined in Eq. (10) uses a trilateration algorithm, we need to discuss the scenarios where trilateration fails to provide a unique solution. The first case is where the poses are collinear as shown in Fig. 13, and the second is when two of the three poses occupy the same position. The trilateration algorithm by Zhou (2011) is robust to such configurations and can return two estimates for the beacon's location. The consistency check in Eq. (10) can pass if either estimate is deemed consistent.
If the trilateration algorithm is not robust to such scenarios, then a test to detect a degeneracy can be designed. If the test indicates the poses are in a degenerate configuration, one or more of the measurements can be stored in a buffer whose consistency with the maximum clique can be tested later. If a degeneracy is still present, then the consistency of the measurement must be tested another way or the measurement must be labeled inconsistent. In practice, we found that degenerate configurations did not present an issue because the odometry noise caused the pose estimates used in the consistency check to not be degenerate even when the true configuration of poses was degenerate.
## 11 Range-based SLAM Evaluation
In this section, we evaluate the performance of G\(k\)CM on several synthetic datasets where a robot is exploring and taking range measurements to static beacons. Due to the run-time constraints, G\(k\)CM was only evaluated using the heuristic algorithm in Algorithm 2, denoted as G\(k\)CM-HeuPatt going forward. We compare the results of G\(k\)CM-HeuPatt to the results of PCM, where the consistency check for PCM is the check used by Olson et al. (2005), and both MILP and \(k\)-core variants of ROBIN. Since ROBIN-MILP uses the same consistency graph as G\(k\)CM, it will produce identical results to G\(k\)CM when the exact algorithm (Algorithm 1) is used because both algorithms are guaranteed to find the maximum clique.
### Simulated 2D World
First, we simulate a two-dimensional world where a robot navigates in the plane. We simulate three different
Figure 11: Average run-time for generalized maximum clique algorithms proposed in this paper. We also compare against other generalized maximum clique algorithms in Shi et al. (2021a,b). This includes both the time to evaluate the necessary data structures such as neighborhoods/edge sets and the time to estimate the maximum clique. Using eight threads, the heuristic algorithm was able to find the maximum clique of a graph with 250 nodes in a few seconds.
Figure 12: Evaluation of the heuristic algorithm proposed in Algorithm 2. Individual lines denote the cardinality of the maximum clique inserted into the graph. The horizontal axis denotes the density of edges in the graph and the vertical axis denotes the percentage of test cases where the algorithm returned a clique of the correct cardinality. The heuristic algorithm returned cliques of the correct size 100 percent of the time for the graphs with max clique size of 14, 17, 20, 23, 26, and 29.
Figure 13: Degenerate pose configuration where range measurements do not result in a unique landmark location.
trajectories (Manhattan world, circular, and a straight line), along with range measurements to static beacons placed randomly in the world. Gaussian noise was added to all range measurements, and a portion of the measurements was corrupted to simulate outlier measurements. Half of the corrupted measurements were generated in clusters of size 5, and the other half as single random measurements using a Gaussian distribution with a random mean and a known variance. We assume that the variances of the range measurements are known and that these variances are used when performing the consistency check. The simulation was run multiple times varying values such as the trajectory and beacon locations, and statistics were recorded for comparison.
#### 11.1.1 Monte Carlo Experiment
This first example was done to show how well G\(k\)CM performs in situations with large percentages of outliers. In this experiment, a trajectory of 75 poses was simulated with measurements being taken at each pose and 60 of the measurements were corrupted to be outliers. G\(k\)CM was used to identify consistent measurements which were used to solve the range-based SLAM problem in Eq. (1) using GTSAM (Kaess et al. (2012)). The experiment averaged statistics over 100 runs and results are shown in Table 3 where we report the mean and standard deviation for each metric.
As can be seen, ROBIN-MILP performs the best across most metrics followed closely by G\(k\)CM-HeuPatt. The only difference between the two methods is that ROBIN-MILP uses an algorithm that is guaranteed to find the maximum clique while G\(k\)CM-HeuPatt is not. If Algorithm 1 had been used instead of Algorithm 2 the results between G\(k\)CM and ROBIN-MILP would have been identical. We note that the only statistic where either ROBIN-MILP or G\(k\)CM-HeuPatt was not the best method was the true positive rate (TPR). In this case, both PCM and ROBIN-\(k\)-core did better at including more inliers in their set of consistent measurements. However, we argue that since the presence of even a single outlier can have serious negative effects on the map quality the false positive rate (FPR) is a better metric to evaluate each algorithm. The normalized \(\chi^{2}\) entry in the table tells us how well the state estimates fit the data that has been selected and ideally \(\chi^{2}<3.84\) indicating that the data fits within a 95% confidence interval. As can be seen, the data estimates fit the data reasonably well for both G\(k\)CM-HeuPatt and ROBIN-MILP and poorly for PCM and ROBIN-\(k\)-core. This is because PCM assumes that group-2 consistency is a suitable replacement for group-4 consistency. In this application, the embedding of the 4-uniform hypergraph into a 2-uniform hypergraph does not seem to maintain a similar enough structure for the maximum \(k\)-core to provide a good approximation of the maximum clique of the original graph. This is indicated by the high FPR and \(\chi^{2}\) values produced when using the maximum k-core. Figure 14 shows visual examples of the maps produced using the set of selected measurements by each method. The fact that the k-core algorithm performs so poorly arises from the fact that embedding of the 4-uniform hypergraph did not maintain the structure of the graph meaning that the \(k\)-core did not approximate the maximum clique well despite using group-4 consistency.
Additionally, we wished to know at what ratio of outliers to inliers did the performance of G\(k\)CM-HeuPatt begin to drop off. To measure this we simulated robot odometry for 100 poses and corrupted the measurements taken to a beacon with enough outliers to achieve a certain percentage of outliers. We ran the set of measurements through G\(k\)CM-HeuPatt and observed if the selected set of consistent measurements matched the set of inlier measurements. Using the same robot odometry, this was done with several different outlier percentages ranging from 70% to 90% outliers. The process was repeated for multiple trajectories and the true/false positive rates for each outlier percentage were recorded. Results can be seen in Fig. 15.
The figure shows that the true and false positive rates for G\(k\)CM-HeuPatt are fairly constant until about 85 percent of the measurements are outliers, while for PCM the true positive rate decreases and the false positive rate increases with the number of outliers. These results are expected because as more outliers are present, it is more likely that either an outlier clique will form or that an outlier measurement will intersect with the inlier set on a pairwise basis, showing the need for group-\(k\) consistency.
### Hardware Experiment
This experiment evaluates the ability of G\(k\)CM-HeuPatt to reject outliers in range-based SLAM using an underwater vehicle and acoustic ranging to beacons of an unknown location. We use the dataset collected by Olson et al. (2006), where they use SCGP to detect and remove outlier range measurements. The data collected uses four beacons placed around the area of exploration. The underwater vehicle collects between 400 and 600 measurements to each beacon over the course of the experiment. Due to the exponential runtime of the maximum clique algorithms and the factorial increase in the number of consistency checks, we downsample the number of measurements to 100 measurements to each beacon. Given that outliers are present in the data, we randomly selected 80 outlier measurements and 20 inlier measurements as classified by the SCGP algorithm. This was done to ensure that there is an inlier clique in the data while showing that we can reject outliers in high-outlier regimes.
We compare the results of G\(k\)CM with the results produced by SCGP, PCM, and ROBIN using both the MILP and \(k\)-core algorithms. A summary of our results can be found in Table 4 and visual results in Fig. 16. Both G\(k\)CM and the MILP variant of ROBIN perform best across all metrics. This was expected since both algorithms utilize group-\(k\) consistency to evaluate the consistency of the range measurements and utilize maximum clique algorithms over generalized graphs. The primary difference between the two is that we tested G\(k\)CM using the heuristic maximum clique algorithm, Algorithm 2, which can find a suboptimal clique, whereas the MILP (and the exact algorithm in Algorithm 1) are guaranteed to find the maximum clique. The next best performers are PCM and SCGP where PCM had better beacon estimates while SCGP had a lower normalized \(\chi^{2}\) score. We note that the performance of these algorithms varied greatly with the subset of measurements chosen to be used in the test. However, in all the tests we ran they never performed better than either G\(k\)CM or ROBIN-MILP although they occasionally achieved similar performance.
The worst performing algorithm ended up being the ROBIN-k-core variant which is surprising considering that the algorithm uses a group-\(k\) consistency metric. As noted before the shortcoming of using the maximum k-core is that the embedding of the 4-uniform hypergraph into a 2-uniform hypergraph does not preserve the maximum clique structure.
### Data Association
In this experiment, we remove the assumption that the correspondence between a range measurement and its beacon is known. To accomplish this, we modified both the exact and heuristic algorithms to track the \(n\) largest cliques where \(n\) is the number of beacons in the environment assuming the number of beacons is known. Since each clique corresponds to consistent measurements that belong to a unique beacon, we enforce the constraint that a measurement cannot appear in more than one clique.
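One simple way to realize this is a greedy peeling loop: extract a large clique, remove its vertices, and repeat once per beacon. This is only an approximation of the modification described above (which tracks the \(n\) largest cliques inside the search itself), but it makes the disjointness constraint explicit; `solver` can be any generalized clique solver and the function name is ours.

```python
def beacon_cliques(vertices, edges, k, n_beacons, solver):
    """Greedily extract up to n_beacons mutually disjoint cliques so that no
    measurement is associated with more than one beacon."""
    cliques = []
    remaining_v, remaining_e = set(vertices), set(edges)
    for _ in range(n_beacons):
        c = solver(remaining_v, remaining_e, k)
        if not c:
            break
        cliques.append(c)
        remaining_v -= set(c)
        remaining_e = {e for e in remaining_e if not (e & set(c))}
    return cliques
```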
This experiment was run on a short trajectory of 30 poses where five measurements were received at each pose (one to each beacon). As such 150 measurements are being considered by the G\(k\)CM algorithm. Results were averaged over 81 different trials. Visual results can be seen in Fig. 17 while statistics are in Table 5. G\(k\)CM correctly identifies
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & Chi2 & Residual & LM Error (m) \\ \hline G\(k\)CM-HeuPatt & **0.0066** & **49.83** & **5.32** \\ \hline ROBIN-MILP & **0.0066** & **49.83** & **5.32** \\ \hline ROBIN-k-core & 53.71 & 412338 & 185 \\ \hline PCM & 1.02 & 7717 & 12.25 \\ \hline SCGP & 0.39 & 2952 & 35.19 \\ \hline \end{tabular}
\end{table}
Table 4: Results for the \(\chi^{2}\) value, residual and landmark error for the hardware experiment. Best results are in **BOLD**
Figure 14: Results of a single run of the Monte Carlo experiment where 80% of the 75 measurements considered are outliers. The blue dashed lines and x denote the estimated trajectory and beacon locations. The green line and triangles denote the true trajectory and beacon locations.
\begin{table}
\begin{tabular}{|c|c c|c c|c c|c c|c c|c c|} \hline & \multicolumn{2}{c|}{Trans. RMSE (m)} & \multicolumn{2}{c|}{Rot. RMSE (rad)} & \multicolumn{2}{c|}{Beacon Error (m)} & \multicolumn{2}{c|}{Residual} & \multicolumn{2}{c|}{Inliers} & \multicolumn{2}{c|}{\(\chi^{2}\)} \\ \cline{2-13} & Avg & Std & Avg & Std & Avg & Std & Avg & Std & TPR & FPR & Avg & Std & Median \\ \hline G\(k\)CM-HeuPatt & 1.15 & 1.23 & 0.11 & 0.07 & **20.92** & **37.17** & 289.82 & 502 & 0.79 & **0.0037** & 1.42 & 2.45 & **0.44** \\ \hline PCM & 2.75 & 4.63 & 0.22 & 0.20 & 24.00 & 39.00 & 10707 & 16866 & **0.98** & 0.015 & 46.25 & 72.78 & 6.24 \\ \hline ROBIN-MILP & **1.11** & **0.95** & **0.10** & **0.06** & 21.07 & 37.54 & **279.36** & **461** & 0.79 & **0.0037** & **1.36** & **2.25** & **0.44** \\ \hline ROBIN-\(k\)-core & 9.62 & 12.59 & 0.78 & 0.83 & 45.43 & 59.41 & 3.81e7 & 7.196e & 0.90 & 0.102 & 11488 & 20462 & 30.83 \\ \hline \end{tabular}
\end{table}
Table 3: Statistics comparing G\(k\)CM-HeuPatt to other methods in the Monte Carlo Experiment. Best results are in **BOLD**
Figure 15: Results showing the normalized TPR (\(TP/(TP+\mathit{FN})\)) and FPR (\(\mathit{FP}/(\mathit{FP}+\mathit{TN})\)) by varying the number of outliers for a fixed trajectory.
the five cliques corresponding to the different beacons and outperforms PCM in all the metrics. We do not provide a comparison with the other algorithms since they are unable to track multiple cliques simultaneously.
### Tuning Experiment
PCM has the nice property that changing the threshold value, \(\gamma\), does not significantly impact the results of the algorithm. Because G\(k\)CM enforces group consistency rather than pairwise consistency, we designed an experiment to test if it has a similar property. We accomplished this by fixing a robot trajectory of 50 poses and the associated measurements and running G\(k\)CM multiple times with a different value for \(\gamma\) each time. The measurements contained 40 outliers that were generated as described previously. We averaged the \(\chi^{2}\) value and the true and false positive rates over multiple runs. Figure 18 shows how the above values vary with the consistency threshold for both G\(k\)CM and PCM.
As can be seen, G\(k\)CM-HeuPatt performs better than PCM in both the normalized \(\chi^{2}\) and false positive rate, which is more important in our application than the true positive rate. The results indicate that the performance of G\(k\)CM-HeuPatt varies more with the threshold \(\gamma\) than PCM, especially at very low and high confidence thresholds. As such, we recommend that confidence values be used from the \(50-90\%\) confidence range where performance was less variable with the confidence threshold.
### Incremental Update
In this last experiment we evaluate the incremental heuristic described by Chang et al. (2021) since their experiments only evaluated the heuristic for a \(k\)-uniform hypergraph where \(k=2\). For this experiment, we generate
Figure 16: Example plots of the maps estimated by G\(k\)CM-HeuPatt, PCM, MILP and \(k\)-core variants of ROBIN, and SCGP on the data collected in Olson et al. (2006). No truth data for the vehicle is available but the true beacon locations are denoted by a green x and the estimated locations by a blue triangle.
Figure 17: Results of G\(k\)CM-HeuPatt for performing data association and outlier rejection. Each clique found is shown in a different color. Measurements labeled as outliers included in the maximum clique are red dashed lines.
Figure 18: Results showing the normalized chi2 value, TPR, and FPR by varying the consistency threshold value, \(\gamma\), for a fixed trajectory.
a trajectory of 100 poses and measurements, and at each step, we evaluate how long both an incremental and batch update take. Updates include performing consistency checks and finding the maximum clique. We record the runtime for the graph size and average statistics over multiple runs. We plot the runtime against the size of the graph in Fig. 19.
As can be seen, the incremental update with this heuristic provides similar benefits for G\(k\)CM as it does for PCM. On average, for a graph of 100 nodes with 80 outliers, it takes a batch solution over 40 seconds to solve for the maximum clique while it takes only 3 seconds for the incremental update. These findings validate the results presented by Chang et al. (2021) and also allow for G\(k\)CM to be run closer to real-time.
## 12 Multi-agent Vision-based pose graph SLAM
In this section, we consider G\(k\)CM in the context of a multi-agent application where each agent is equipped with a monocular camera. We assume that the inter-vehicle measurements are generated by using a place recognition algorithm to identify if two images are of the same location, followed by the use of bundle adjustment to estimate the relative pose between the two images up to a scale factor. The measurement takes the following form
\[z^{ab}_{ij}=(\alpha,\epsilon,R^{ab}_{ij}) \tag{12}\]
where \(R^{ab}_{ij}\) expresses the rotation of agent \(b\) at pose \(j\) in the frame of agent \(a\) at pose \(i\), and \(\alpha\) and \(\epsilon\) are the azimuth and elevation angles describing the direction of pose \(j\) with respect to pose \(i\). Lacking the scale factor causes the measurement to lose a degree of freedom over the full relative pose transformation and means that a pairwise check is no longer sufficient to check the consistency of the inter-vehicle measurements. We provide a visual reference of the setup in Fig. 20 showing the relationship of the poses and measurements between the two agents. The check, \(C(z^{ab}_{il},z^{ab}_{jm},z^{ab}_{kn})\) has two parts that we outline below. The first part is very similar to the consistency check in Eq. (3) except that we trace a loop using only the rotations as shown in Eq. (13). This portion is repeated for all combinations of two measurements in the group of three.
\[C(z^{ab}_{il},z^{ab}_{jm})_{R} =\left|\left|\hat{R}^{a}_{ij}\oplus R^{ab}_{jm}\oplus(\odot\hat{ R}^{b}_{lm})\oplus(\odot R^{ab}_{il})\right|\right|_{\Sigma} \tag{13}\] \[C(z^{ab}_{il},z^{ab}_{jm})_{R} \leq\gamma_{R}\]
If this check passes, then we proceed with the second part of the consistency check, which verifies that the azimuth and elevation angles are consistent as outlined below
\[C(z^{ab}_{il},z^{ab}_{jm},z^{ab}_{kn})_{d} =\left|\left|h(\mathbf{X}^{a}_{ijk},\mathbf{X}^{b}_{lmn},\mathbf{ Z}^{ab}_{il,jm,kn})-\binom{\alpha}{\epsilon}\right|\right|_{\Sigma} \tag{14}\] \[C(z^{ab}_{il},z^{ab}_{jm},z^{ab}_{kn})_{d} \leq\gamma_{d}\]
where the function \(h(\mathbf{X}^{a}_{ijk},\mathbf{X}^{b}_{lmn},\mathbf{Z}^{ab}_{il,jm,kn})\) calculates the expected azimuth and elevation angle using the poses on agents \(a\) and \(b\) associated with the measurements in \(\mathbf{Z}\). The function \(h\) first uses two of the measurements in \(\mathbf{Z}\) to estimate the scale in the direction of translation by solving a linear least-squares problem as follows
\[\mathbf{s} =A^{-1}\mathbf{b} \tag{15}\] \[A =\left(\tilde{t}_{il}\quad R^{a}_{ij}\tilde{t}_{jm}\right)\] \[\mathbf{b} =-(t^{a}_{ij}-R^{a}_{ij}R^{ab}_{jm}R^{b}_{ml}t^{b}_{lm})\] \[\tilde{t} =\left(\cos(\alpha)\cos(\epsilon)\quad\sin(\alpha)\cos(\epsilon)\quad\sin(\epsilon)\right)^{T}\]
where \(\tilde{t}\) is the unit vector denoting the direction indicated by the azimuth and elevation angles of a particular
Figure 19: Timing data for both batch and incremental updates for G\(k\)CM and PCM. This includes the time to perform the relevant consistency checks and to find the new maximum clique. _Note the log-scale on the vertical axis._
Figure 20: A visual of the setup used in the consistency check for a multi-agent visual PGO scenario. There are two agents each with odometry and several inter-vehicle measurements as outlined in Eq. (12). All measurements are expressed in frame of robot \(a\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Translational RMSE (m)} & \multicolumn{2}{c|}{Rotational RMSE (rad)} & \multicolumn{2}{c|}{Beacon Error (m)} & \multicolumn{2}{c|}{Residual} & \multicolumn{2}{c|}{Inliers} & \multicolumn{2}{c|}{\(\chi^{2}\)} \\ \cline{2-13} & Avg & Std & Avg & Std & Avg & Std & Avg & Std & TPR & FPR & Avg & Std & Median \\ \hline G\(k\)CM-HeuPatt & **0.6259** & **0.5646** & **0.2469** & **0.0629** & **34.14** & 52.06 & **72.12** & **88.67** & 0.84 & **0.008** & **1.05** & **1.26** & **0.306** \\ \hline PCM & 3.153 & 3.611 & 0.4521 & 0.2218 & 40.28 & **51.03** & 1010 & 1037 & **0.95** & 0.017 & 13.87 & 14.16 & 29.28 \\ \hline \end{tabular}
\end{table}
Table 5: Statistics for G\(k\)CM and PCM in the Data Association experiment. Best results are in **BOLD**
measurement, \(t^{a}_{ij}\) denotes the position of \(j\) with respect to \(i\) for agent \(a\), and \(\mathbf{s}\) is a two vector denoting the scale on the two measurements used. While testing this function, we found that it was sensitive to the errors in the rotations in the poses and measurements that would often result in a negative scale. To alleviate this, we first perform an initialization step using the single-loop technique described by Carlone et al. (2015). We elected to use the single-loop technique because it has an algebraic solution and is less computationally intense than methods such as chordal initialization that perform very well in more complex pose graphs.
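A numpy sketch of the least-squares solve in Eq. (15) is shown below; rotations are 3×3 arrays, translations are 3-vectors, and the argument names (ours) mirror the sub- and superscripts in the text. The single-loop rotation initialization mentioned above is assumed to have been applied already.

```python
import numpy as np

def bearing_unit_vector(alpha, eps):
    """Unit direction from azimuth/elevation angles."""
    return np.array([np.cos(alpha) * np.cos(eps),
                     np.sin(alpha) * np.cos(eps),
                     np.sin(eps)])

def recover_scales(t_tilde_il, t_tilde_jm, R_a_ij, R_ab_jm, R_b_ml, t_a_ij, t_b_lm):
    """Least-squares solve of Eq. (15) for the two unknown measurement scales."""
    A = np.column_stack([t_tilde_il, R_a_ij @ t_tilde_jm])      # 3x2 matrix of directions
    b = -(t_a_ij - R_a_ij @ R_ab_jm @ R_b_ml @ t_b_lm)           # 3-vector
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s                                                      # scales on the two measurements
```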
Once the scale has been recovered, the next step is to estimate the relative pose and uncertainty between the agents by applying the scale as follows
\[T^{ab}_{il} =\begin{pmatrix}R^{ab}_{il}&s\,\tilde{t}^{ab}_{il}\\ 0_{1\times 3}&1\end{pmatrix}\] \[\Sigma_{T} =H\Sigma_{z}H^{T}\]
where \(\Sigma_{T}\) is the covariance matrix on the full relative pose, \(\Sigma_{z}\) is the covariance on the measurement and \(H\) is the jacobian of the function that applies the scale. The next step is to express the poses involved in the third measurement in a common reference frame. These poses, which we will call \(T^{a}_{k}\) and \(T^{b}_{n}\), can be found as shown below.
\[T^{a}_{k} =T^{a}_{k}\] \[T^{b}_{n} =T^{ab}_{il}\oplus T^{b}_{ln}\]
With this information, we can calculate the expected azimuth and elevation angles using Eq. (16).
\[dT =(\odot T^{a}_{k})\oplus T^{b}_{n}\] \[\alpha =\text{atan2}(dT.y,dT.x) \tag{16}\] \[\epsilon =\text{atan2}(dT.z,\sqrt{dT.x^{2}+dT.y^{2}})\]
The covariance on \(dT\) is calculated using methods described by Mangelson et al. (2020), and the covariance on the expected azimuth and elevation angles are found by further pre and post-multiplying by the Jacobian matrix of their respective functions. This process is repeated three times so that each measurement can be validated.
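For completeness, the expected azimuth and elevation of Eq. (16) can be computed from 4×4 homogeneous transforms expressed in a common frame as follows; this is a hedged numpy sketch (the function name is ours) that omits the covariance propagation described above.

```python
import numpy as np

def predicted_azimuth_elevation(T_a_k, T_b_n):
    """Expected azimuth/elevation of pose n seen from pose k (Eq. (16))."""
    dT = np.linalg.inv(T_a_k) @ T_b_n          # (ominus T_a_k) oplus T_b_n
    x, y, z = dT[:3, 3]                        # translation part of dT
    alpha = np.arctan2(y, x)
    eps = np.arctan2(z, np.hypot(x, y))
    return alpha, eps
```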
#### Degenerate Configurations
This application is also susceptible to degenerate configurations. In scenarios such as when two poses lie exactly on top of each other, the relative pose cannot be recovered because the matrix in Eq. (15) becomes singular. Should such a degeneracy occur, solutions such as those discussed in Section 10 may be used. In practice, however, such degeneracies never arose due to the noise inherent in the problem.
## 13 Multi-agent Visual PGO Evaluation
This section describes the experimental setup and the results obtained for each of the experiments evaluating the efficacy of G\(k\)CM for multi-agent visual PGO applications. In each experiment we compare our results with the pairwise consistency approach in PCM, and with both the MILP and \(k\)-core variants of ROBIN. Due to run-time constraints, G\(k\)CM and PCM were only evaluated using Algorithm 2.
### Simulation Results
We first evaluated our algorithm in simulation. We randomly initialized two agents in a predetermined area and let the agents move about in a Manhattan world scenario. After generating the path, we searched for poses on each robot within half a meter of each other to generate an inter-robot measurement. For computational purposes, we selected 100 measurements to be used, and if 100 measurements were not generated we reran the simulation until 100 measurements were present. We then ran the G\(k\)CM, PCM, and ROBIN (using both the MILP and \(k\)-core clique solvers) to find the largest inlier set. Statistics such as the TPR, FPR, and RMSE of the resulting PGO routine were recorded and averaged over 100 runs. To initialize the relative pose, we use chordal initialization (Carlone et al. (2015)) to generate rotation estimates. To initialize the translation, we picked two measurements from the set of inlier measurements and estimated the scale by solving Eq. (15) and applying the scale to the unit vector created by the azimuth and elevation angles.
As can be seen from the results, all of the methods performed fairly equally across all the metrics. G\(k\)CM had the best performance in the most tracked statistics. However, we note that the results of all methods are comparable. The biggest discrepancies lie in both the residual and normalized \(\chi^{2}\) statistics for the resulting map. We believe that these discrepancies are a result of differences in the initialization. We could not guarantee that the same measurements were used to initialize the scale on the translation vector between the two agents since the inlier cliques were not guaranteed to be the same. Often the cliques had five or six measurements in common with one or two that varied between them. Due to the nature of least-squares methods, this caused the scale to vary, and occasionally not be good enough for the map to converge to the exact same solution. We note that this is a problem with the initialization of the graph, which is not the focus of this research. Each of these methods was able to reject nearly all of the outlier measurements. Visual results for a single trial can be seen in Fig. 21.
### Hardware Results
Having shown that the G\(k\)CM algorithm is effective at rejecting outliers in a simulation environment, we now seek to validate our algorithm using hardware data. In this experiment, we used the NCLT dataset presented by Carlervaris-Bianco et al. (2015) because it provides data of a robot exploring the same area over multiple sessions, and has images from several cameras placed on the robot. We selected two sessions that occurred in the same season and around the same time of day to facilitate place recognition, which was done using the OpenFABMap library developed by Glover et al. (2012). We trained OpenFABMap using data collected in one session, and created matches to places in the second session. Due to the size of the dataset, we elected to use only one of the cameras onboard the robot and used every 50th image.
With this setup, we generated 267 matches of which 25 were identified as true matches while the others were labeled as outliers due to perceptual aliasing. We estimated the relative pose between the two images by decomposing the essential matrix using OpenCV, which was then refined through bundle adjustment using GTSAM Dellaert (2012).
From this relative pose, we generated the measurement in Eq. (12) using Eq. (16) and transformed the covariance of the relative pose to the measurement covariance using the Jacobian of Eq. (16). Once the measurements were generated, we then checked the consistency of all the measurements and found the maximum clique using each of the methods discussed previously.
A visual representation of our results can be found in Fig. 22, while the statistics are in Table 7. As can be seen, both G\(k\)CM and the MILP are able to successfully filter out the outlier measurements and fuse the maps from the two robots. Both the \(k\)-core and PCM methods found cliques that contained outlier measurements that ruined the shape of the resulting map. In Table 7, we present several statistics including the relative translation and rotational error defined as \(\xi=\text{Log}((T_{k+1}^{k})^{-1}(\hat{T}_{k}^{-1}\hat{T}_{k+1}))\).
Both the G\(k\)CM-HeuPatt and the MILP techniques have the best results across all the metrics identified, except for a small difference in the average relative translational error, while the PCM and \(k\)-core methods produced unusable results. The \(\chi^{2}\) value for PCM post-optimization was surprisingly low considering the poor quality of the map produced. This is attributed to the small number of inter-vehicle constraints when compared with the number of poses in the graph. Further investigation of the relative error statistics shows that the majority of the error is contained between a rather small number of poses in the graph.
The results produced by the G\(k\)CM-HeuPatt and MILP methods successfully merge the maps with some error in the offset in the origins of the two maps. The map of the second agent (shown by the green dotted line in Fig. 22) should lie nearly on top of the blue dotted line but often is shifted down and to the left. We believe this level of error to be acceptable for several reasons. The first is that much less information was used when fusing the maps in our estimated results than was used in generating the ground truth. The ground truth data fused RTK-GPS, lidar scans, and the odometry data from all 22 sessions. Lidar scans were aligned to generate constraints both in a single session and between sessions. This allowed many full-degree-of-freedom constraints to be generated within each session and between the different sessions. Comparatively, our solution uses only a subset of the visual information provided, and only uses this information to generate constraints between the two sessions. No loop closure constraints were generated when the robot visited a location it had visited before in the same session. Additionally, using only a single camera increases the difficulty of the problem because the scale is not observable in the inter-session constraints. This problem does not arise when comparing lidar scans.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline & Trans. Error (m) & Rot. Error (rad) & \multicolumn{3}{c|}{Residual} & \multicolumn{3}{c|}{Inliers} \\ \cline{2-11} & Avg & Std & Avg & Std & Avg & Std & TPR & FPR & Avg & Std & Median \\ \hline G\(\ell\)CM-HeuPatt & **1.26** & **1.87** & 0.09 & **0.06** & **8918** & **40183** & 0.74 & **2.2e-4** & **2.95** & **13.27** & 0.011 \\ \hline PCM & 1.36 & 2.00 & 0.086 & 0.082 & 11314 & 40705 & **0.99** & **2.2e-4** & 3.73 & 13.43 & 0.014 \\ \hline ROBIN-MILP & 1.34 & 2.18 & 0.094 & 0.082 & 11399 & 48786 & 0.74 & **2.2e-4** & 3.77 & 16.13 & **0.010** \\ \hline ROBIN-\(k\)-core & 1.41 & 2.05 & **0.085** & 0.082 & 11862 & 41290 & **0.99** & **2.2e-4** & 3.91 & 13.62 & 0.015 \\ \hline \end{tabular}
\end{table}
Table 6: Statistics comparing G\(k\)CM to other methods in the multi-agent simulation experiment. Best results are in **BOLD**
Figure 21: Results for each of the algorithms in the multi-agent visual PGO simulation. The red and blue solid lines denote the true trajectories of each vehicle while the dashed lines denote the estimated trajectories.
Lastly, it seems a little surprising that the simulation experiments and hardware experiments produced such different results. In simulation all methods had a fairly similar level of performance. We attribute this to the length of the trajectories in each case. The simulated experiments had a trajectory consisting of about 500 nodes per robot while the two sessions in the NCLT dataset had over 23000 and 29000 nodes. This means that the loops traversed in the NCLT dataset are longer and that the associated covariances used when performing the consistency checks were larger resulting in a denser consistency graph since more combinations passed the consistency check. The result of this is that more measurements were labeled as consistent when evaluating the NCLT dataset. PCM did not have the benefit of the direction information to filter out more outlier measurements like G\(k\)CM and ROBIN-MILP. ROBIN-\(k\)core had the benefit of the direction information but the denser graph meant that the structure of the maximum clique was not preserved when the 3-hypergraph was embedded in a 2-hypergraph resulting in a poor set of measurements being chosen.
## 14 Conclusion
In this paper, we presented a unification of the theory of consistency, starting with pairwise consistency and generalizing to group-\(k\) consistency. We present the group-\(k\) consistency maximization algorithm and the associated maximum clique algorithms that function over generalized graphs and show that we can effectively choose a consistent set of measurements in high-outlier regimes in both range-based SLAM and multi-agent visual SLAM problems. Techniques to alleviate the exponential nature of evaluating consistency and finding the maximum clique were presented. Lastly, we released an open-source implementation of our library for future use by the research community.
###### Acknowledgements.
We would like to thank Allan Papalia and the Marine Robotics Group for providing the range-based SLAM dataset for us to use in evaluating our algorithms.
## Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work has been funded by the Office of Naval Research [award number N00014-21-1-2435].
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Metric & G\(k\)CM-HeuPatt & ROBIN-MILP & ROBIN-\(k\)-core & PCM \\ \hline Relative Translation & \multirow{2}{*}{2.70e-3} & \multirow{2}{*}{2.70e-3} & \multirow{2}{*}{**2.64e-3**} & \multirow{2}{*}{2.70e-3} \\ Error (m) & & & & \\ \hline Relative Rotational & \multirow{2}{*}{**0.012**} & \multirow{2}{*}{**0.012**} & \multirow{2}{*}{0.025} & \multirow{2}{*}{0.018} \\ Error (rad) & & & & \\ \hline Residual & **19899** & **19899** & 5.58e6 & 2.24e5 \\ \hline Normalized \(\chi^{2}\) & **0.12** & **0.12** & 34.99 & 1.40 \\ \hline \end{tabular}
\end{table}
Table 7: Numerical results for the multi-agent hardware experiment. Best results are in **BOLD**.
Figure 22: Results for the hardware experiment. The solid red and purple lines are the true robot trajectories generated using GPS, Lidar, and odometry. The dotted lines are the estimated trajectories using odometry and inter-vehicle measurements from a camera.
## Declaration of conflicting interests
The Authors declare that there is no conflict of interest.
|
2301.11465
|
Steinberg quotients, Weyl Characters, and Kazhdan-Lusztig Polynomials
|
Let $G$ be a reductive group over a field of prime characteristic. An
indecomposable tilting module for $G$ whose highest weight lies above the
Steinberg weight has a character that is divisible by the Steinberg character.
The resulting "Steinberg quotient" carries important information about
$G$-modules, and in previous work we studied patterns in the weight
multiplicities of these characters. In this paper we broaden our scope to
include quantum Steinberg quotients, and show how the multiplicities in these
characters relate to algebraic Steinberg quotients, Weyl characters, and
evaluations of Kazhdan-Lusztig polynomials. We give an explicit algorithm for
computing minimal characters that possess a key attribute of Steinberg
quotients. We provide computations which show that these minimal characters are
not always equal to quantum Steinberg quotients, but are close in several
nontrivial cases.
|
Paul Sobaje
|
2023-01-26T23:47:01Z
|
http://arxiv.org/abs/2301.11465v1
|
# Steinberg quotients, Weyl characters, and Kazhdan-Lusztig polynomials
###### Abstract.
Let \(G\) be a reductive group over a field of prime characteristic. An indecomposable tilting module for \(G\) whose highest weight lies above the Steinberg weight has a character that is divisible by the Steinberg character. The resulting "Steinberg quotient" carries important information about \(G\)-modules, and in previous work we studied patterns in the weight multiplicities of these characters. In this paper we broaden our scope to include quantum Steinberg quotients, and show how the multiplicities in these characters relate to algebraic Steinberg quotients, Weyl characters, and evaluations of Kazhdan-Lusztig polynomials. We give an explicit algorithm for computing minimal characters that possess a key attribute of Steinberg quotients. We provide computations which show that these minimal characters are not always equal to quantum Steinberg quotients, but are close in several nontrivial cases.
2010 Mathematics Subject Classification: Primary 20G05
## 1. Introduction
### Overview
This is a sequel to [S1] in which we investigated characters of certain tilting modules. In short, if \(G\) is a reductive group in prime characteristic \(p>0\), then an indecomposable tilting module for \(G\) of the form \(T((p-1)\rho+\lambda)\), where \(\lambda\) is a \(p\)-restricted dominant weight, has a character that is divisible by the Steinberg character \(\chi((p-1)\rho)\). The resulting "Steinberg quotient" \(t(\lambda)\) is a nonnegative linear combination of \(W\)-orbit sums. Thanks to the linkage principle, we can list which orbit sums might appear in \(t(\lambda)\). In previous work we proved that all such orbit sums do appear, and that their coefficients are weakly increasing in size as one moves down from the highest weight under the \(\uparrow\) partial ordering.
In this paper we enlarge our investigation to consider \(t(\lambda)\) for all dominant weights \(\lambda\), as well as quantum Steinberg quotients \(t_{\zeta}(\lambda)\), defined in the analogous way for tilting modules of a quantum group at a \(p\)-th root of unity. In addition to the above pattern holding more generally, the wider scope makes clearer the connections between Steinberg quotients and more commonly studied quantities such as Weyl characters and Kazhdan-Lusztig combinatorics. We will detail all of this below.
### Relationship to other tilting formulas
Before stating our main results, let us comment briefly on the overlap between the topic of this paper and some existing results in the literature. Thanks to formulations by Soergel for quantum groups [Soe], and by Riche-Williamson for algebraic groups (stated in [RW1], and proved or re-proved in various contexts in [AMRW][RW2][RW3][BR]), combinatorial algorithms for tilting characters are
already known. Moreover, in the case of quantum groups, the Steinberg quotients \(t_{\zeta}(\lambda)\) are governed by the simple characters, and when \(p>h\) the latter are given by Lusztig's Character Formula (LCF) from \([\![\![\lambda]\!]\) (see [1, II.H.12] for an account of this). In the algebraic setting, the analogous statement is not always true as it requires Donkin's tilting module conjecture to hold (it does not in general [1][1]), we do not know precisely when the LCF describes the simple characters ([1], [12][13]), and in any case it would not apply to most \(\lambda\) that are not \(p\)-restricted.
The main thrust of this work is to provide a complementary approach to computing tilting characters that applies only to special tilting modules, and exploits all of the unique properties that these modules possess. The hope is that this can help answer questions that have not yet been answered by existing methods, such as an explanation as to when and why the characters \(t(\lambda)\) and \(t_{\zeta}(\lambda)\) differ. Influences on the approach begun in [14] were Donkin's use of Brauer's formula in [15, Proposition 5.5], along with work by Ye [16] and Doty-Sullivan [17]. In this sequel, we push the limits of these methods, while benefiting from the information and direction that the tilting character formulas mentioned above provide.
### Results and Organization
Let \(\mathbb{X}\) denote the character group of a maximal torus \(T\) of \(G\), and \(\mathbb{Z}[\mathbb{X}]^{W}\) be the ring of \(W\)-invariants, where \(W\) is the Weyl group of \(G\). The fact that the orbit sums in \(t(\lambda)\) appear with weakly increasing multiplicity (when moving from the top orbit down) is due entirely to the fact that for all dominant weights \(\mu\), the character product
\[\chi((p-1)\rho+p\mu)t(\lambda) \tag{1.3.1}\]
has nonnegative coefficients when expressed in the Weyl character basis. Though this argument is present in [S1], its importance is more explicitly isolated here in Theorem 3.3.1, where we give a broader statement that highlights the similarity between the orbit-sum multiplicities in Steinberg quotients and those in Weyl characters (a further parallel will be noted shortly).
With this theorem in hand, the extension of the main result from [S1] to the Steinberg quotients \(t(\lambda)\) and \(t_{\zeta}(\lambda)\), for any dominant \(\lambda\), follows from well-known facts about tilting modules. We also record other features of these characters that, though easy to prove, give interesting perspective. For example, we obtain a natural framework in which Steinberg quotients become an enlargement of sorts to the set of Weyl characters. That is, for \(\lambda\) dominant, the quotient \(t_{\zeta}(p\lambda)=\chi(\lambda)^{F}\), where \(F\) is the Frobenius twist on a character.
In Section 4 we give direct comparisons between algebraic and quantum Steinberg quotients. From what is already known about the relationship between tilting modules in the respective categories, it follows that the characters \(t(\lambda)\) can be written as nonnegative sums of the various \(t_{\zeta}(\mu)\). By using base-changing results from [Lin], [PS], and [And2], we give more precise statements on this relationship. One interesting consequence is that for the Steinberg quotients \(q(\lambda)\) of the indecomposable projective \(G_{1}T\)-modules, we can show (under a minor condition on \(p\)) that all possible orbits appear with positive multiplicity, even when \(q(\lambda)\neq t(\lambda)\). This could be viewed as an analog in this setting to the Premet-Suprunenko theorem on the weight sets of the \(p\)-restricted simple \(G\)-modules [Pr][Su].
Suppose now that \(p\geq h\), where \(h\) is the Coxeter number of the underlying root system, and assume that \(\lambda-\rho\) is a \(p\)-regular weight. Applying work by Kato [K], we show in Section
5 that when the LCF describes the simple characters (in the respective settings), then the orbit multiplicities in \(t_{\zeta}(\lambda)\) are given by evaluations of Kazhdan-Lusztig polynomials, and the same is true for \(q(\lambda)\) when \(\lambda\) is \(p\)-restricted. The hypothesis does hold in the quantum setting when \(p>h\), but will not hold in general in the algebraic setting unless \(p\gg h\). We should also point out that when this condition holds in both settings, then there is an agreement \(t_{\zeta}(\lambda)=q(\lambda)\) for all \(p\)-restricted weights \(\lambda\) (i.e. including the \(p\)-singular ones). Of course we also have \(t_{\zeta}(\lambda)=t(\lambda)\) provided that \(q(\lambda)=t(\lambda)\) (this last equality always holding when \(p\geq 2h-4\)). In making the connection to Kazhdan-Lusztig polynomials, the heavy lifting is done by Kato's paper along with Fiebig's detailed account of it [Fie].
Our ultimate goal is to find character formulas for Steinberg quotients that can, at minimum, differentiate between \(t_{\zeta}(\lambda)\) and \(t(\lambda)\), and that will lend themselves to reasonable dimension formulas (akin to \(p\)-versions of Weyl's dimension formula). In order to achieve this, it is necessary to find the defining properties of Steinberg quotients. We initiate this investigation in Section 6.
In view of the results in Section 3, we begin by defining the character \(\mathcal{M}_{p}(\lambda)\) to be the smallest element in \(\mathbb{Z}[\mathbb{X}]^{W}\) that satisfies (1.3.1) and has \(\lambda\) as its highest weight. In Theorem 6.3.1 we prove that this property can be checked by multiplying against a finite number of characters of the form \(\chi((p-1)\rho+p\mu)\), though in general checking against \(\chi((p-1)\rho)\) alone will not be sufficient. We then give an explicit process for computing \(\mathcal{M}_{p}(\lambda)\) in Algorithm 6.3.1.
Since the \(t_{\zeta}(\lambda)\) are lower bounds on the \(t(\lambda)\), and can be computed by ordinary Kazhdan-Lusztig polynomials when \(p>h\), it is both natural and possible to check to see how close these are to the \(\mathcal{M}_{p}(\lambda)\). Surprisingly, in all of the computations that we were able to make, they were very close. They were equal for all restricted \(\lambda\) with \(\lambda-\rho\) a \(p\)-regular weight for root systems \(A_{1},A_{2},A_{3}\) (the first two being trivial), and for almost all such weights in type \(A_{4}\), and for many large weights in type \(A_{5}\). The cases of character equality are nontrivial, with orbit multiplicities as large as \(23\) occurring, and in the few cases we found in which they were not equal, it was by the smallest margin possible (a multiplicity difference of \(1\) on the lowest orbits). In many of these cases we can also compute the characters \(t(\lambda)\), thanks to knowing that \(t(\lambda)=q(\lambda)\) from [BMPS2] and [BMPS4], and that the LCF describes the \(p\)-restricted simple characters from [Jan, II.8.22] and [Sc].
### Acknowledgements
We thank Frank Lübeck for generously sharing the extensive computations of Kazhdan-Lusztig polynomials made by the algorithms described in [Lu].
## 2. Notation and Recollections
### Weyl groups, Roots, and Weights
We give a brief overview on our notation. For the most part it follows [Jan], and any notation not explicitly mentioned may be assumed to be consistent with that.
Let \(\Bbbk\) be an algebraically closed field of characteristic \(p>0\). By standard arguments we may take \(G\) to be a simple and simply connected group, as the results can be generalized to any connected reductive \(G\).
Fix a maximal torus \(T\) inside a Borel subgroup \(B\) of \(G\). The root system is denoted \(\Phi\), and we fix a set of simple roots \(\Pi=\{\alpha_{1},\alpha_{2},\ldots,\alpha_{n}\}\), where \(n\) is the rank of \(T\). This determines a set of positive roots \(\Phi^{+}\subseteq\Phi\). Denote by \(\mathbb{X}\) the character group of \(T\) (also
called the set of weights). Each \(\alpha\in\Phi^{+}\) has a corresponding coroot \(\alpha^{\vee}\). The highest short root is \(\alpha_{0}\), and \(\alpha_{0}^{\vee}\) is the highest coroot. For each \(\lambda\in\mathbb{X}\) and coroot \(\alpha^{\vee}\) we denote the natural pairing by \(\langle\lambda,\alpha^{\vee}\rangle\). The set of dominant weights is \(\mathbb{X}^{+}\), and it is generated over \(\mathbb{Z}_{\geq 0}\) by the fundamental dominant weights \(\{\varpi_{1},\varpi_{2},\ldots,\varpi_{n}\}\), which are defined by the property that \(\langle\varpi_{i},\alpha_{j}^{\vee}\rangle=\delta_{ij}\). For each \(m\geq 0\), we define
\[\mathbb{X}_{m}=\{a_{1}\varpi_{1}+\cdots+a_{n}\varpi_{n}\mid 0\leq a_{i}<m\}\subseteq\mathbb{X}^{+}.\]
Thus \(\mathbb{X}_{p}\) denotes the \(p\)-restricted dominant weights (we note that we have often just used \(\mathbb{X}_{p}\) for this set in the past, but require the finer notation in this paper).
The root lattice is \(\mathbb{Z}\Phi\subseteq\mathbb{X}\). The element \(\rho\) is the half-sum of the positive roots, or equivalently is the sum of the fundamental dominant weights. The Weyl group is \(W\), and \(w_{0}\) is its longest element. For any \(\lambda\in\mathbb{X}\), we let \(W_{\lambda}\) denote the stabilizer of \(\lambda\), while \(W\lambda\) is the \(W\)-orbit of \(\lambda\).
The standard partial order on \(\mathbb{X}\) is denoted as \(\leq\). The affine Weyl group is
\[W_{p}\cong W\ltimes p\mathbb{Z}\Phi,\]
and it acts on \(\mathbb{X}\). It can be shown that the image of \(W_{p}\) in the group of affine transformations of \(\mathbb{E}=\mathbb{R}\otimes_{\mathbb{Z}}\mathbb{X}\) is generated by affine reflections of the form
\[s_{\alpha,np}(\lambda)=\lambda-(\langle\lambda,\alpha^{\vee}\rangle-np)\alpha\]
for all \(\alpha\in\Phi^{+}\) and \(n\in\mathbb{Z}\). For each \(w\in W_{p}\) and \(\lambda\in\mathbb{E}\), we denote the action of \(w\) on \(\lambda\) by juxtaposition, as \(w\lambda\). We will primarily be interested in the "dot action" of \(W_{p}\), where
\[w\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}\lambda=w(\lambda+\rho)-\rho.\]
For each \(\alpha\in\Phi^{+},n\in\mathbb{Z}\), there is a hyperplane in \(\mathbb{E}\) defined by
\[H_{\alpha,np}=\{\lambda\in\mathbb{E}\mid\langle\lambda+\rho,\alpha^{\vee} \rangle=np\}.\]
The affine reflection of \(\mathbb{E}\) about \(H_{\alpha,np}\) is precisely the dot action of \(s_{\alpha,np}\) on \(\mathbb{E}\). The partial ordering \(\uparrow\) on \(\mathbb{X}\) is the minimal such ordering with the property that
\[(s_{\alpha,np}\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}\lambda) \uparrow\lambda\quad\text{if}\quad(s_{\alpha,np}\mathbin{\raisebox{0.5pt}{ \scalebox{1.2}{$\bullet$}}}\lambda)\leq\lambda,\]
and
\[\lambda\uparrow(s_{\alpha,np}\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$} }}\lambda)\quad\text{if}\quad\lambda\leq(s_{\alpha,np}\mathbin{\raisebox{0.5pt}{ \scalebox{1.2}{$\bullet$}}}\lambda).\]
More generally, \(\lambda\uparrow\mu\) if there are affine reflections \(s_{1},\ldots,s_{m}\) such that
\[\lambda\leq s_{1}\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}\lambda\leq s_{2}\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}s_{1}\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}\lambda\leq\cdots\leq s_{m}\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}\cdots\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}s_{1}\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}\lambda=\mu. \tag{2.1.1}\]
Properties of this ordering are noted in [Jan, II.6.4].
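To make the \(\uparrow\) relation concrete, here is a minimal computational sketch for the rank-one root system \(A_{1}\) only (so weights are integers and \(\rho=1\)); the function name and the search strategy are our own conventions and not taken from the paper. It decides whether \(\mu\uparrow\lambda\) by searching downward from \(\lambda\) through dot-reflections, which is a direct reading of (2.1.1).

```python
import math

def up_arrow_A1(mu, lam, p):
    """Decide whether mu 'arrow-up' lam in type A_1 (weights are integers,
    rho = 1).  The dot action of s_{alpha, np} sends x to 2*n*p - 2 - x;
    following (2.1.1), we search downward from lam through reflections that
    do not increase the weight, pruning anything that falls below mu."""
    if mu == lam:
        return True
    seen, stack = set(), [lam]
    while stack:
        x = stack.pop()
        if x in seen:
            continue
        seen.add(x)
        # need  mu <= 2*n*p - 2 - x <= x,  i.e.  (mu + x + 2)/2 <= n*p <= x + 1
        n_lo = math.ceil((mu + x + 2) / (2 * p))
        n_hi = math.floor((x + 1) / p)
        for n in range(n_lo, n_hi + 1):
            y = 2 * n * p - 2 - x
            if y == mu:
                return True
            if y < x:          # a strictly downward step
                stack.append(y)
    return False

# For p = 5:  s_{alpha,5}.6 = 2, so 2 'arrow-up' 6, while 3 is not even in
# the same linkage class as 6.
assert up_arrow_A1(2, 6, 5)
assert not up_arrow_A1(3, 6, 5)
```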
The hyperplanes \(H_{\alpha,np}\) divide \(\mathbb{E}\) up into a system of alcoves and facets. The alcoves contain points from \(\mathbb{X}\) if and only if \(p\geq h\). The elements in the alcoves are called \(p\)-regular weights. They are those weights \(\lambda\) such that
\[\langle\lambda+\rho,\alpha^{\vee}\rangle\not\in p\mathbb{Z}\]
for all \(\alpha\in\Phi^{+}\).
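As a small illustration (our own encoding, in type \(A_{2}\), where the pairings of a weight \((a,b)\) written in fundamental-weight coordinates with the three positive coroots are \(a\), \(b\), and \(a+b\)), the following sketch tests \(p\)-regularity directly from this definition.

```python
def is_p_regular_A2(lam, p):
    """Check p-regularity of a weight in type A_2.

    The weight is given by its coordinates (a, b) in the basis of fundamental
    weights, so its pairings with the three positive coroots are a, b and
    a + b, and rho = (1, 1).  The weight is p-regular exactly when none of
    the pairings <lam + rho, alpha^vee> lies in p*Z."""
    a, b = lam
    pairings = (a + 1, b + 1, (a + 1) + (b + 1))
    return all(value % p != 0 for value in pairings)

# For p = 5: (1, 1) is 5-regular, while (3, 0) lies on a wall, since
# <(3, 0) + rho, alpha_1^vee + alpha_2^vee> = 5.
assert is_p_regular_A2((1, 1), 5)
assert not is_p_regular_A2((3, 0), 5)
```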
Let \(\mathcal{C}\) denote the set of all alcoves of \(\mathbb{E}\). The lowest dominant alcove \(C_{0}\) is the alcove
\[C_{0}=\{\lambda\in\mathbb{E}\mid 0<\langle\lambda+\rho,\alpha^{\vee}\rangle<p\ \text{ for all }\alpha\in\Phi^{+}\}.\]
The action of \(W_{p}\) on \(\mathcal{C}\) is simply transitive, hence for any alcove \(C\in\mathcal{C}\) there is a unique element \(w\in W_{p}\) such that \(w.C_{0}=C\).
An alcove \(C\) is called dominant if \(0<\langle\lambda+\rho,\alpha_{i}^{\vee}\rangle\) for all \(i\) and all \(\lambda\in C\), and is called \(p\)-restricted if \(0<\langle\lambda+\rho,\alpha_{i}^{\vee}\rangle<p\) for all \(i\) and all \(\lambda\in C\).
The group \(W_{p}\) is a Coxeter group with generators \(\{s_{0},s_{1},\ldots,s_{n}\}\), where for \(1\leq i\leq n\) we have \(s_{i}=s_{\alpha_{i},0}\), and \(s_{0}=s_{\alpha_{0},p}\). These generators are just the affine reflections about the hyperplanes that extend the \(n+1\) walls of the fundamental alcove.
### Characters
The Grothendieck ring of the category of finite dimensional \(T\)-modules is isomorphic to the group algebra \(\mathbb{Z}[\mathbb{X}]\). For each \(\mu\in\mathbb{X}\), we denote by \(e(\mu)\) the corresponding basis element in \(\mathbb{Z}[\mathbb{X}]\). Since \(\mathbb{Z}[\mathbb{X}]\) is the group algebra of a free abelian group of rank \(n\), it is isomorphic to the ring of Laurent polynomials over \(\mathbb{Z}\) in \(n\) indeterminates. In particular, \(\mathbb{Z}[\mathbb{X}]\) is an integral domain, so the cancellation property for ring multiplication holds.
We denote by \(s(\mu)\) the sum of the weights in the \(W\)-orbit of \(\mu\). These elements form a basis of \(\mathbb{Z}[\mathbb{X}]^{W}\).
Recall that for \(\sigma=\sum a_{\mu}e(\mu)\in\mathbb{Z}[\mathbb{X}]\), its "dual" and "Frobenius twist" are
\[\sigma^{*}=\sum a_{\mu}e(-\mu),\]
and
\[\sigma^{F}=\sum a_{\mu}e(p\mu)\]
respectively. If \(\sigma=\operatorname{ch}(M)\) for a \(T\)-module \(M\), then \(\sigma^{*}=\operatorname{ch}(M^{*})\), and \(\sigma^{F}=\operatorname{ch}(M^{(1)})\).
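For later computations it is convenient to store a character as a dictionary mapping weights to coefficients; the following minimal sketch (our own convention, with weights encoded as integer tuples) implements the dual and the Frobenius twist exactly as defined above.

```python
def dual(sigma):
    """The dual sigma*: send each e(mu) to e(-mu).  Weights are integer tuples."""
    return {tuple(-x for x in mu): a for mu, a in sigma.items()}

def frobenius_twist(sigma, p):
    """The Frobenius twist sigma^F: send each e(mu) to e(p*mu)."""
    return {tuple(p * x for x in mu): a for mu, a in sigma.items()}

# e(1,0) + 2e(0,1):  dual is e(-1,0) + 2e(0,-1);  twist at p = 5 is e(5,0) + 2e(0,5).
sigma = {(1, 0): 1, (0, 1): 2}
assert dual(sigma) == {(-1, 0): 1, (0, -1): 2}
assert frobenius_twist(sigma, 5) == {(5, 0): 1, (0, 5): 2}
```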
### \(G\)-modules and \(G_{1}T\)-modules
For each \(\lambda\in\mathbb{X}^{+}\) there is a simple \(G\)-module \(L(\lambda)\), a costandard module \(\nabla(\lambda)=\operatorname{ind}_{B}^{G}\lambda\), a standard module \(\Delta(\lambda)=(\operatorname{ind}_{B}^{G}-w_{0}\lambda)^{*}\), and an indecomposable tilting module \(T(\lambda)\). The modules \(\Delta(\lambda)\) and \(\nabla(\lambda)\) each have character given by the Euler characteristic
\[\chi(\lambda)=\sum_{i\geq 0}(-1)^{i}(\operatorname{ch}R^{i}\operatorname{ind}_{B }^{G}\lambda).\]
By the strong linkage principle [Jan, II.6.13], \([\nabla(\lambda):L(\mu)]>0\) implies that \(\mu\uparrow\lambda\). In a similar way, write \((T(\lambda):\chi(\mu))\) for the multiplicity of \(\nabla(\mu)\) in a good filtration of \(T(\lambda)\) (equal to the multiplicity of \(\Delta(\mu)\) in a Weyl filtration of \(T(\lambda)\)). If \((T(\lambda):\chi(\mu))>0\), then again \(\mu\uparrow\lambda\) [Jan, II.E.3].
For each \(\lambda\in\mathbb{X}\) there is a simple \(G_{1}T\)-module \(\widehat{L}_{1}(\lambda)\), a projective indecomposable \(G_{1}T\)-module \(\widehat{Q}_{1}(\lambda)\), and "baby Verma modules"
\[\widehat{Z}_{1}(\lambda)=\operatorname{coind}_{B_{1}^{+}T}^{G_{1}T}\lambda, \qquad\widehat{Z}_{1}^{\prime}(\lambda)=\operatorname{ind}_{B_{1}T}^{G_{1}T}\lambda.\]
Fix a Frobenius endomorphism \(F:G\to G\). For any \(G\)-module \(M\), we denote by \(M^{(1)}\) its twist under \(F\).
### Quantum Groups
Let \(v\) be an indeterminate, and \(\mathbb{Q}(v)\) the fraction field of \(\mathbb{Q}[v]\). The quantum group \(\mathcal{U}_{v}\) is the \(\mathbb{Q}(v)\)-algebra with generators \(E_{\alpha},F_{\alpha},K_{\alpha}^{\pm 1}\), for \(\alpha\in\Pi\), satisfying the quantum Serre relations of [Jan, H.2]. Over the subring \(A=\mathbb{Z}[v,v^{-1}]\), we denote by \(\mathcal{U}_{A}\) Lusztig's divided power integral form for \(\mathcal{U}_{v}\). The algebra \(\mathcal{U}_{A}\) is free as an \(A\)-module, and the multiplication map \(\mathcal{U}_{A}\otimes_{A}\mathbb{Q}(v)\to\mathcal{U}_{v}\) is an isomorphism of rings.
For any commutative \(A\)-algebra \(B\) one obtains the quantum group \(\mathcal{U}_{B}=\mathcal{U}_{A}\otimes_{A}B\). Let \(\zeta\) be a complex primitive \(p\)-th root of unity. Specializing \(v=\zeta\) makes \(\mathbb{C}\) into an \(A\)-algebra. We now denote by \(\mathcal{U}_{\zeta}\) the resulting quantum group \(\mathcal{U}_{A}\otimes_{A}\mathbb{C}\).
The category of finite dimensional \(\mathcal{U}_{\zeta}\)-modules, denoted \(\mathcal{U}_{\zeta}\)-mod, has many similarities to that of \(G\)-mod. First, it is known that the category breaks into a direct sum of subcategories based on central characters, and we restrict our attention only to the subcategory of type \(\mathbf{1}\)\(\mathcal{U}_{\zeta}\)-modules. In this subcategory, for each \(\lambda\in\mathbb{X}^{+}\) there is a simple module \(L_{\zeta}(\lambda)\), a standard module \(\Delta_{\zeta}(\lambda)\), a costandard module \(\nabla_{\zeta}(\lambda)\), and an indecomposable tilting module \(T_{\zeta}(\lambda)\).
As we are considering only type \(\mathbf{1}\) modules, we will regard the quantum Frobenius morphism as a surjective homomorphism \(F:\mathcal{U}_{\zeta}\to\mathcal{U}(\mathfrak{g}_{\mathbb{C}})\) (note then that the image of \(F\) as defined here is the quotient of the image of the more commonly defined quantum Frobenius morphism). Let \(L_{\mathbb{C}}(\lambda)\) denote the irreducible \(\mathfrak{g}_{\mathbb{C}}\)-module of highest weight \(\lambda\). The pullback under \(F\) will be denoted \(L_{\mathbb{C}}(\lambda)^{F}\). If \(\lambda=\lambda_{0}+p\lambda_{1}\) with \(\lambda_{0}\in\mathbb{X}_{p}\) and \(\lambda_{1}\in\mathbb{X}^{+}\), then there is an isomorphism of \(\mathcal{U}_{\zeta}\)-modules
\[L_{\zeta}(\lambda)\cong L_{\zeta}(\lambda_{0})\otimes L_{\mathbb{C}}(\lambda _{1})^{F}.\]
The character of a type \(\mathbf{1}\) finite dimensional \(\mathcal{U}_{\zeta}\)-module is also an element in \(\mathbb{Z}[\mathbb{X}]^{W}\). We have
\[\chi(\lambda)=\operatorname{ch}(\Delta_{\zeta}(\lambda))=\operatorname{ch}( \nabla_{\zeta}(\lambda)).\]
The small quantum group is denoted \(u_{\zeta}\).
## 3. Steinberg Quotients
In [S1] we defined, for each \(\lambda\in\mathbb{X}_{p}\), the Steinberg quotients
\[t(\lambda)=\operatorname{ch}(T((p-1)\rho+\lambda))/\chi((p-1)\rho)\]
and
\[q(\lambda)=\operatorname{ch}(\widehat{Q}_{1}((p-1)\rho+w_{0}\lambda))/\chi((p-1)\rho).\]
For \(p\geq 2h-4\) these two characters are the same [1], though in general they can differ (see also [1], [1], and [1]). We note that [1] actually utilizes the language of Steinberg quotients, and establishes nice properties about their restrictions to Levi subgroups.
There are non-negative integers \(a_{\mu,\lambda}\) and \(b_{\mu,\lambda}\)1 such that
Footnote 1: In [S1] we denoted the double index \(a_{\mu,\lambda}\) as \(a_{\mu}^{\lambda}\), and \(b_{\mu,\lambda}\) as \(b_{\mu}^{\lambda}\).
\[q(\lambda)=\sum a_{\mu,\lambda}s(\mu)\]
and
\[t(\lambda)=\sum b_{\mu,\lambda}s(\mu).\]
Computing Steinberg quotients then amounts to determining these orbit multiplicities, and the Steinberg quotients in turn give the characters of the relevant modules upon multiplying by \(\chi((p-1)\rho)\).
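As a small worked example (ours, not from [S1], and relying only on the well-known \(\mathrm{SL}_{2}\) tilting characters): for \(G=\mathrm{SL}_{2}\), where \(\rho=1\), and for \(0<\lambda\leq p-1\), one has
\[\operatorname{ch}(T((p-1)+\lambda))=\chi((p-1)+\lambda)+\chi((p-1)-\lambda)=\chi(p-1)\,s(\lambda),\]
the last equality being Brauer's formula, so that \(t(\lambda)=s(\lambda)\) and every orbit multiplicity equals \(1\).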
Some preliminary properties that can be established for these coefficients are that for all \(\lambda\in\mathbb{X}_{p}\) and \(\mu\in\mathbb{X}^{+}\):
1. \(a_{\lambda,\lambda}=b_{\lambda,\lambda}=1\)
2. \(a_{\mu,\lambda}\leq b_{\mu,\lambda}\)
3. \(a_{\mu,\lambda}\neq 0\) implies \((\mu-\rho)\uparrow(\lambda-\rho)\).
4. \(a_{\mu,\lambda}=b_{\mu,\lambda}\) if \(p\geq 2h-4\)
The main result proved in [S1] was a multiplicity pattern in the coefficients \(b_{\mu,\lambda}\).
**Theorem 3.1.1**.: _([S1, Theorem 3.2.2]) Let \(\lambda\in\mathbb{X}_{p}\), and let \(b_{\mu,\lambda}\) be non-negative integers such that_
\[t(\lambda)=\sum_{\mu\in\mathbb{X}^{+}}b_{\mu,\lambda}s(\mu).\]
_For dominant weights \(\mu,\mu^{\prime}\),_
\[\text{if}\quad(\mu-\rho)\uparrow(\mu^{\prime}-\rho),\qquad\text{then}\quad b_ {\mu,\lambda}\geq b_{\mu^{\prime},\lambda}.\]
We will generalize this result in Theorem 3.3.1, which distills the essential character arguments. The original proof, and therefore this generalized one also, was inspired by ideas due to Ye [Ye], Doty and Sullivan [DS], and Donkin [HTT, Proposition 5.5].
The statement given in Theorem 3.3.1 will be about formal characters, but applies to Steinberg quotients thanks to the following fundamental results that hold in the category of \(G\)-modules.
* For every \(w\in W\), \[\chi(w\mathbin{\vbox{\hbox{\scalebox{.5}{$\bullet$}}}}\lambda)=(-1)^{\ell(w) }\chi(\lambda).\]
* Brauer's formula (a small numerical check appears below): for all \(\lambda,\mu\in\mathbb{X}^{+}\), \[\chi(\lambda)s(\mu)=\sum_{w\in W/W_{\mu}}\chi(\lambda+w\mu).\]
* The Andersen-Haboush Theorem: for all \(\mu\in\mathbb{X}^{+}\), \[\nabla((p-1)\rho+p\mu)\cong\operatorname{St}\otimes\nabla(\mu)^{(1)}.\]
* For all \(\lambda,\mu\in\mathbb{X}^{+}\), the module \[T((p-1)\rho+\lambda)\otimes\nabla(\mu)^{(1)}\] has a good filtration.
The last result follows from the Andersen-Haboush Theorem together with the fact that the tensor product of good filtration modules has a good filtration (proved for certain \(p\) by Wang, most \(p\) by Donkin, and all \(p\) by Mathieu). One then observes that \(T((p-1)\rho+\lambda)\otimes\nabla(\mu)^{(1)}\) is a direct summand of
\[\operatorname{St}\otimes T(\lambda)\otimes\nabla(\mu)^{(1)}\cong\nabla((p-1) \rho+p\mu)\otimes T(\lambda).\]
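The following minimal sketch (our own encoding, in type \(A_{1}\) only, with characters stored as dictionaries of weight coefficients) checks Brauer's formula from the list above in the simplest case, where for \(\mu>0\) it reads \(\chi(\lambda)s(\mu)=\chi(\lambda+\mu)+\chi(\lambda-\mu)\).

```python
from collections import defaultdict

def chi_A1(m):
    """Euler characteristic chi(m) for SL_2 as a dict {weight: coefficient}:
    chi(m) = e(m) + e(m-2) + ... + e(-m) for m >= 0, chi(-1) = 0, and
    chi(m) = -chi(-m-2) for m <= -2."""
    if m == -1:
        return {}
    if m < -1:
        return {w: -c for w, c in chi_A1(-m - 2).items()}
    return {m - 2 * j: 1 for j in range(m + 1)}

def orbit_sum_A1(mu):
    """Orbit sum s(mu) for SL_2: e(mu) + e(-mu) if mu > 0, else e(0)."""
    return {mu: 1, -mu: 1} if mu > 0 else {0: 1}

def multiply(f, g):
    """Product in the group ring Z[X], with X = Z."""
    prod = defaultdict(int)
    for w1, c1 in f.items():
        for w2, c2 in g.items():
            prod[w1 + w2] += c1 * c2
    return {w: c for w, c in prod.items() if c != 0}

def add(f, g):
    total = defaultdict(int)
    for h in (f, g):
        for w, c in h.items():
            total[w] += c
    return {w: c for w, c in total.items() if c != 0}

# Brauer's formula in type A_1:  chi(lam) * s(mu) = chi(lam + mu) + chi(lam - mu)
lam, mu = 4, 3
assert multiply(chi_A1(lam), orbit_sum_A1(mu)) == add(chi_A1(lam + mu), chi_A1(lam - mu))
```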
In this subsection we collect a number of lemmas that will simplify proofs both in this section, and then later on in the paper.
**Lemma 3.2.1**.: _For all \(w\in W\) and \(\lambda,\mu\in\mathbb{X}\),_
\[w\mathbin{\raisebox{0.5pt}{\scalebox{1.0}{$\bullet$}}}(\lambda+\mu)=w \mathbin{\raisebox{0.5pt}{\scalebox{1.0}{$\bullet$}}}\lambda+w\mu.\]
Proof.: We have
\[w\mathbin{\raisebox{0.5pt}{\scalebox{1.0}{$\bullet$}}}(\lambda+\mu) =w(\lambda+\mu+\rho)-\rho\] \[=w(\lambda+\rho)-\rho+w\mu\] \[=w\mathbin{\raisebox{0.5pt}{\scalebox{1.0}{$\bullet$}}}\lambda+w\mu.\]
**Lemma 3.2.2**.: _Let \(\lambda\in\mathbb{X}\), and \(\gamma\in\mathbb{X}^{+}\). If for some \(\alpha\in\Phi^{+}\),_
\[\langle\lambda+\gamma+\rho,\alpha^{\vee}\rangle<0,\]
_then_
\[s_{\alpha}\mathbin{\raisebox{0.5pt}{\scalebox{1.0}{$\bullet$}}}(\lambda+ \gamma)-\gamma\]
_lies strictly between \(\lambda\) and \(s_{\alpha}\lambda\). In particular, this weight is in the interior of \(\operatorname{conv}(W\lambda)\)._
Proof.: In this case, since
\[\langle\gamma+\rho,\alpha^{\vee}\rangle>0,\]
we must have that
\[0>\langle\lambda+\gamma+\rho,\alpha^{\vee}\rangle>\langle\lambda,\alpha^{ \vee}\rangle.\]
Therefore
\[s_{\alpha}\mathbin{\raisebox{0.5pt}{\scalebox{1.0}{$\bullet$}}}( \lambda+\gamma)-\gamma =s_{\alpha}(\lambda+\gamma+\rho)-\rho-\gamma\] \[=\lambda-\langle\lambda+\gamma+\rho,\alpha^{\vee}\rangle\alpha\] \[<\lambda-\langle\lambda,\alpha^{\vee}\rangle\alpha\] \[=s_{\alpha}\lambda.\]
**Lemma 3.2.3**.: _Let \(\lambda,\gamma\in\mathbb{X}^{+}\). Let \(J\subset\Pi\) be the set of all simple roots \(\alpha_{i}\) such that_
\[\langle\gamma+\rho,\alpha_{i}^{\vee}\rangle\leq\langle\lambda,\alpha_{0}^{ \vee}\rangle.\]
_Then for any \(w_{0}\lambda\leq\mu\leq\lambda\), there is a \(w\in W_{J}\) such that_
\[w\mathbin{\raisebox{0.5pt}{\scalebox{1.0}{$\bullet$}}}(\gamma+\mu)\in\mathbb{ X}^{+}-\rho.\]
Proof.: For all \(\alpha_{i}\in\Pi\), the bounding on \(\mu\) implies that
\[\langle w_{0}\lambda,\alpha_{0}^{\vee}\rangle\leq\langle\mu,\alpha_{i}^{\vee }\rangle\leq\langle\lambda,\alpha_{0}^{\vee}\rangle.\]
Therefore, if
\[0 >\langle(\gamma+\mu)+\rho,\alpha_{i}^{\vee}\rangle\] \[=\langle\gamma+\rho,\alpha_{i}^{\vee}\rangle+\langle\mu,\alpha_{i }^{\vee}\rangle\] \[\geq\langle\gamma+\rho,\alpha_{i}^{\vee}\rangle-\langle\lambda, \alpha_{0}^{\vee}\rangle,\]
then it follows that \(\alpha_{i}\in J\). We may replace \(\gamma+\mu\) with \(s_{i}\,\raisebox{-1.0pt}{\scalebox{1.0}{$\bullet$}}\,(\gamma+\mu)\), which is on the positive side of the hyperplane \(H_{\alpha_{i},0}\). It follows by the previous lemma, and our assumption on \(\mu\), that
\[s_{i}\,\raisebox{-1.0pt}{\scalebox{1.0}{$\bullet$}}\,(\gamma+\mu)=\gamma+\mu^{ \prime},\]
with \(w_{0}\lambda\leq\mu^{\prime}\leq\lambda\). We may now repeat the process above with \(\gamma+\mu^{\prime}\), and will eventually wind up with a weight in \(\mathbb{X}^{+}-\rho\), having only used reflections from \(W_{J}\).
Recall that Weyl characters refer to those Euler characteristics \(\chi(\lambda)\) for which \(\lambda\in\mathbb{X}^{+}\). The Weyl characters form a \(\mathbb{Z}\)-basis for \(\mathbb{Z}[\mathbb{X}]^{W}\). A nonzero element in \(\mathbb{Z}[\mathbb{X}]^{W}\) having nonnegative coefficients in the Weyl basis will be called a _good filtration character_.
**Theorem 3.3.1**.: _Let \(\eta\in\mathbb{Z}[\mathbb{X}]^{W}\), where \(\eta=\sum_{\mu\in\mathbb{X}^{+}}c_{\mu}s(\mu)\)._
1. _Suppose that for every_ \(\lambda\in\mathbb{X}^{+}\)_, the product_ \(\chi(\lambda)\eta\) _is a good filtration character. Then_ \(c_{\mu}\geq c_{\mu^{\prime}}\) _whenever_ \(\mu\leq\mu^{\prime}\)_._
2. _Suppose that for every_ \(\lambda\in\mathbb{X}^{+}\)_, the product_ \(\chi((p-1)\rho+p\lambda)\eta\) _is a good filtration character. Then_ \(c_{\mu}\geq c_{\mu^{\prime}}\) _whenever_ \(\mu-\rho\uparrow\mu^{\prime}-\rho\)_._
Proof.: (1) It suffices to prove the result in the case \(\mu^{\prime}\) is a minimal dominant weight such that \(\mu^{\prime}>\mu\). Under this assumption, [St, Theorem 2.6] shows that there is a positive root \(\alpha\in\Phi^{+}\) such that \(\mu+\alpha=\mu^{\prime}\). Set
\[n=\langle\mu,\alpha^{\vee}\rangle.\]
We then have that \(\langle\mu^{\prime},\alpha^{\vee}\rangle=n+2\), and since \(\mu\) is dominant, that \(n\geq 0\). There is a simple root \(\alpha_{i}\) and an element \(w\in W\) such that \(w\alpha=-\alpha_{i}\). From this it follows that
\[w\mu^{\prime}=w(\mu+\alpha)=w\mu-\alpha_{i}.\]
We also have
\[\langle w\mu,\alpha_{i}^{\vee}\rangle=-n,\]
and
\[\langle w\mu^{\prime},\alpha_{i}^{\vee}\rangle=-(n+2).\]
Set
\[\gamma=\sum m_{j}\varpi_{j}\in\mathbb{X}^{+}\]
where \(m_{i}=n\), and for \(j\neq i\),
\[m_{j}>\langle\sigma,\alpha_{0}^{\vee}\rangle,\]
for all weights \(\sigma\) appearing in \(\eta\) (note that such a choice is possible as there are only finitely many such \(\sigma\)). By Brauer's formula,
\[\chi(\gamma)\eta=\sum_{\lambda\in\mathbb{X}^{+}}\sum_{\sigma\in W\lambda}c_{ \lambda}\chi(\gamma+\sigma). \tag{3.3.1}\]
Among the terms of this sum are \(c_{\mu}\chi(\gamma+w\mu)\) and \(c_{\mu^{\prime}}\chi(\gamma+w\mu^{\prime})\). For any \(\sigma\) that is a weight in \(\eta\), it follows from Lemma 3.2.3 that the choice of \(\gamma\) guarantees that either
\[\sigma+\gamma\in\mathbb{X}^{+}-\rho,\]
or else
\[s_{i}\,\raisebox{-1.0pt}{\scalebox{1.0}{$\bullet$}}\,(\sigma+\gamma)\in \mathbb{X}^{+}-\rho.\]
We see in particular that \(\gamma+w\mu\in\mathbb{X}^{+}-\rho\), and in fact is in \(\mathbb{X}^{+}\), and that
\[s_{i}\,{\raisebox{0.86pt}{\scalebox{1.0}{$\bullet$}}}\,(\gamma+w\mu^{ \prime}) =s_{i}(\gamma+w\mu^{\prime}+\rho)-\rho\] \[=\gamma+w\mu^{\prime}+\rho-\langle\gamma+w\mu^{\prime}+\rho, \alpha_{i}^{\vee}\rangle\alpha_{i}-\rho\] \[=\gamma+w\mu^{\prime}+\rho+\alpha_{i}-\rho\] \[=\gamma+w\mu.\]
It then follows that when the sum in (3.3.1) is rewritten as a sum of Weyl characters, the coefficient on \(\chi(\gamma+w\mu)\) is \(c_{\mu}-c_{\mu^{\prime}}\). By hypothesis, this coefficient is nonnegative, proving that \(c_{\mu}\geq c_{\mu^{\prime}}\).
(2) This case follows a similar logic, though there are a few modifications that need to be spelled out. First, we assume that the relation \(\mu-\rho\uparrow\mu^{\prime}-\rho\) is minimal, so that there is some \(\alpha\in\Phi^{+}\) and some \(n\geq 1\) such that
\[s_{\alpha,np}\,{\raisebox{0.86pt}{\scalebox{1.0}{$\bullet$}}}\,(\mu-\rho)=\mu ^{\prime}-\rho.\]
This implies that
\[\langle\mu-\rho+\rho,\alpha^{\vee}\rangle<np<\langle\mu^{\prime}-\rho+\rho, \alpha^{\vee}\rangle.\]
Equivalently,
\[\langle\mu,\alpha^{\vee}\rangle<np<\langle\mu^{\prime},\alpha^{\vee}\rangle\]
Again there is a simple root \(\alpha_{i}\) and an element \(w\in W\) such that \(w\alpha=-\alpha_{i}\). We then have that
\[\langle w\mu,\alpha_{i}^{\vee}\rangle<-np<\langle w\mu^{\prime},\alpha_{i}^{ \vee}\rangle.\]
We now set
\[\gamma=\sum m_{j}\varpi_{j}\in\mathbb{X}^{+}\]
where \(m_{i}=n-1\), and for \(j\neq i\),
\[p(m_{j}+1)>\langle\sigma,\alpha_{0}^{\vee}\rangle,\]
for all weights \(\sigma\) appearing in \(\eta\).
The proof now follows the same concluding logic as in the proof of (1). The character
\[\chi((p-1)\rho+p\gamma)\eta\]
has by assumption nonnegative coefficients when expressed in the Weyl basis, and one can verify that
\[c_{\mu}-c_{\mu^{\prime}}\]
is one of these coefficients.
**Remark 3.3.2**.: In the first statement of this theorem, the fact that \(\chi(0)\eta\) is a good filtration character means that \(\eta\) is a good filtration character. In this case the theorem is simply giving a statement about orbit sums in Weyl characters, and this property is already known. We put these together so that Weyl characters and Steinberg quotients can be seen as parallel in a certain sense.
In [S1], the quotients \(t(\lambda)\) and \(q(\lambda)\) were defined only for \(\lambda\in\mathbb{X}_{p}\). While there is not much point in extending the definition of the \(q(\lambda)\) to include more weights (one could make sense of such a definition, but we effectively obtain nothing new, as follows from [Jan, II.11.3(2)]), extending the definition of \(t(\lambda)\) turns out to be quite useful. That is, for \(\lambda\in\mathbb{X}^{+}\) we define (as before)
\[t(\lambda)=\operatorname{ch}(T((p-1)\rho+\lambda))/\chi((p-1)\rho).\]
The facts recalled in Section 3.1 are true of \(T((p-1)\rho+\lambda)\) for all dominant \(\lambda\). Therefore we may apply Theorem 3.3.1(2) to \(t(\lambda)\), showing that the statement in Theorem 3.1.1 holds in this setting also.
By extending this definition, we can calculate precisely a number of the \(t(\lambda)\). This next result follows immediately from Donkin's tensor product theorem [1, Proposition 2.1].
**Proposition 3.4.1**.: _Let \(\lambda\in\mathbb{X}_{p}\). If \(t(\lambda)=q(\lambda)\), then_
\[t(\lambda+p\mu)=t(\lambda)\mathrm{ch}(T(\mu))^{F}.\]
The quantum Steinberg module is \(\operatorname{St}_{\zeta}=L_{\zeta}((p-1)\rho)\). Let \(\mu\in\mathbb{X}^{+}\). The character equality
\[\chi((p-1)\rho+p\mu)=\chi((p-1)\rho)\chi(\mu)^{F}\]
reflects the module isomorphism
\[\nabla_{\zeta}((p-1)\rho+p\mu)\cong\operatorname{St}_{\zeta}\otimes L_{ \mathbb{C}}(\mu)^{F}.\]
The module \(\nabla_{\zeta}((p-1)\rho+p\mu)\) is simultaneously simple, standard, costandard, and tilting. For each \(\lambda\in\mathbb{X}^{+}\), the tilting module \(T_{\zeta}((p-1)\rho+\lambda)\) is an injective and projective \(\mathcal{U}_{\zeta}\)-module, and every indecomposable injective \(\mathcal{U}_{\zeta}\)-module is of this form. Writing \(\lambda=\lambda_{0}+p\lambda_{1}\), with \(\lambda_{0}\in\mathbb{X}_{p}\) and \(\lambda_{1}\in\mathbb{X}^{+}\), we have
\[T_{\zeta}((p-1)\rho+\lambda)\cong T_{\zeta}((p-1)\rho+\lambda_{0})\otimes L_ {\mathbb{C}}(\lambda_{1})^{F}.\]
Let \(\lambda\in\mathbb{X}^{+}\). We define the quantum Steinberg quotient \(t_{\zeta}(\lambda)\) by
\[t_{\zeta}(\lambda)=\operatorname{ch}(T_{\zeta}((p-1)\rho+\lambda))/\chi((p-1 )\rho).\]
Since \(\operatorname{ch}(T_{\zeta}((p-1)\rho+\lambda))\) is \(W\)-invariant, there are non-negative integers \(c_{\mu,\lambda}\) such that
\[t_{\zeta}(\lambda)=\sum c_{\mu,\lambda}s(\mu).\]
**Theorem 3.5.1**.: _The statement of Theorem 3.1.1 for the coefficients \(b_{\mu,\lambda}\) holds also for the coefficients \(c_{\mu,\lambda}\)._
Again, we may apply Theorem 3.3.1(2) to see that the statement of Theorem 3.1.1 for the coefficients \(b_{\mu,\lambda}\) holds also for the \(c_{\mu,\lambda}\).
**Proposition 3.5.2**.: _For each \(\lambda\in\mathbb{X}_{p}\) and \(\mu\in\mathbb{X}^{+}\) we have_
\[t_{\zeta}(\lambda+p\mu)=t_{\zeta}(\lambda)\chi(\mu)^{F}.\]
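For instance (a small computation of ours, in type \(A_{1}\) with \(p=5\)): the quantum \(\mathrm{SL}_{2}\) tilting character \(\operatorname{ch}(T_{\zeta}(7))=\chi(7)+\chi(1)=\chi(4)s(3)\) gives \(t_{\zeta}(3)=s(3)\), and the proposition then yields
\[t_{\zeta}(8)=t_{\zeta}(3)\,\chi(1)^{F}=(e(3)+e(-3))(e(5)+e(-5))=s(8)+s(2).\]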
## 4. Comparison between algebraic and quantum Steinberg quotients
We will primarily follow the \(p\)-modular setup of [PS], but also refer the interested reader to [Lin], where some of the modules below were first studied. Recall that \(A=\mathbb{Z}[v,v^{-1}]\). As above, \(\zeta\) denotes a fixed complex primitive \(p\)-th root of unity. Let \(\mathcal{O}\) denote the local ring \(\mathbb{Z}[\zeta]_{(1-\zeta)}\). The assignment \(v\mapsto\zeta\) defines a ring homomorphism \(A\to\mathcal{O}\). Set
\[\mathcal{U}_{\mathcal{O}}=\mathcal{U}_{A}\otimes_{A}\mathcal{O}.\]
Then \(\mathcal{U}_{\mathcal{O}}\) is also a kind of integral form for \(\mathcal{U}_{\zeta}\). Since \(\mathcal{O}\) has residue field \(\mathbb{F}_{p}\), we have \(\mathcal{U}_{\Bbbk}\cong\mathcal{U}_{\mathcal{O}}\otimes_{\mathcal{O}} \Bbbk\). There is a surjective map of algebras \(\phi_{\Bbbk}:\mathcal{U}_{\Bbbk}\twoheadrightarrow\operatorname{Dist}(G)\). The elements in the kernel of \(\phi_{\Bbbk}\) act as \(0\) on any finite dimensional type \(\mathbf{1}\) module for \(\mathcal{U}_{\Bbbk}\). Such a module is therefore a finite dimensional \(\operatorname{Dist}(G)\)-module, and by [Jan, II.1.20], is also a rational \(G\)-module.
A \(\mathcal{U}_{\mathcal{O}}\)-module \(M_{\mathcal{O}}\) will be called a \(\mathcal{U}_{\mathcal{O}}\)-lattice if it is free of finite rank as an \(\mathcal{O}\)-module. One obtains a \(\mathcal{U}_{\zeta}\)-module
\[M_{\zeta}=M_{\mathcal{O}}\otimes_{\mathcal{O}}\mathbb{C}.\]
By the discussion above, if \(M_{\mathcal{O}}\) is a type \(\mathbf{1}\) module, then one obtains a \(\operatorname{Dist}(G)\)-module
\[M=M_{\mathcal{O}}\otimes_{\mathcal{O}}\Bbbk.\]
Let \(\lambda\in\mathbb{X}^{+}\). Let \(V\) be a finite-dimensional \(\mathcal{U}_{\zeta}\)-module of highest weight \(\lambda\) that is generated by a weight vector \(v_{\lambda}\). Such a module will be a quotient of \(\Delta_{\zeta}(\lambda)\), and the particular case of interest for us is when \(V\) is \(L_{\zeta}(\lambda)\). One can always find a particular \(\mathcal{U}_{\mathcal{O}}\)-lattice inside of \(L_{\zeta}(\lambda)\) by taking the submodule \(\mathcal{U}_{\mathcal{O}}.v_{\lambda}\). Such a construction is referred to as a minimal lattice, as it is necessarily contained in any other \(\mathcal{U}_{\mathcal{O}}\)-lattice of \(L_{\zeta}(\lambda)\). By duality there also exists a maximal \(\mathcal{U}_{\mathcal{O}}\)-lattice inside of \(L_{\zeta}(\lambda)\). The resulting \(G\)-modules obtained from the minimal and maximal lattices are denoted as
\[\Delta^{\operatorname{red}}(\lambda)\quad\text{and}\quad\nabla^{\operatorname {red}}(\lambda)\]
respectively. These are indecomposable modules for \(G\), and both have formal characters equal to that of \(L_{\zeta}(\lambda)\). The symbols denoting each module point to their similarities with the standard and costandard \(G\)-modules of highest weight \(\lambda\) (each of which can be constructed by minimal and maximal lattices respectively of finite dimensional \(\mathfrak{g}_{\mathbb{C}}\)-modules). Specifically, we can place these modules in a chain of homomorphisms
\[\Delta(\lambda)\twoheadrightarrow\Delta^{\operatorname{red}}(\lambda) \twoheadrightarrow L(\lambda)\]
and
\[L(\lambda)\hookrightarrow\nabla^{\operatorname{red}}(\lambda)\hookrightarrow \nabla(\lambda).\]
The modules \(L(\lambda)\) and \(\Delta^{\operatorname{red}}(\lambda)\) have the same character if and only if
\[\Delta^{\operatorname{red}}(\lambda)\cong\nabla^{\operatorname{red}}(\lambda).\]
Another way to obtain a \(\mathcal{U}_{\mathcal{O}}\)-lattice is to start with an indecomposable tilting module \(T(\lambda)\) for \(G\). Andersen showed [And2, §4.2] that this tilting module can be lifted to an indecomposable tilting module \(T_{\mathcal{O}}(\lambda)\) over \(\operatorname{Dist}(G_{\mathcal{O}})\) [Jan, II.E.20]. This pulls back to a type \(\mathbf{1}\) tilting module for \(\mathcal{U}_{\mathcal{O}}\). In this way, one obtains a tilting \(\mathcal{U}_{\zeta}\)-module
\[T_{\mathcal{O}}(\lambda)\otimes_{\mathcal{O}}\mathbb{C}.\]
This module has the same character as \(T(\lambda)\). There are non-negative integers \(n_{\lambda,\mu}\) such that
\[T_{\mathcal{O}}(\lambda)\otimes_{\mathcal{O}}\mathbb{C}\cong T_{\zeta}(\lambda) \oplus\bigoplus_{\mu<\lambda}T_{\zeta}(\mu)^{\oplus n_{\lambda,\mu}}. \tag{4.1.1}\]
The characters of finite dimensional \(G\)-modules and the characters of finite dimensional type \(\mathbf{1}\)\(\mathcal{U}_{\zeta}\)-modules are elements in the ring \(\mathbb{Z}[X]^{W}\). Define the left action of the commutative ring \(\mathbb{Z}[X]^{W}\) on itself by
\[\eta.\sigma=\sigma\eta^{F},\]
for all \(\eta,\sigma\in\mathbb{Z}[X]^{W}\). This action makes \(\mathbb{Z}[X]^{W}\) into a free left \(\mathbb{Z}[X]^{W}\)-module, and following Donkin [1] we call a basis for this action a "\(p\)-basis for \(\mathbb{Z}[X]^{W}\)." One \(p\)-basis is given by the set of orbit sums
\[\{s(\lambda)\mid\lambda\in\mathbb{X}_{p}\}.\]
More generally we obtain a \(p\)-basis from any collection of elements
\[\{f(\lambda)\mid\lambda\in\mathbb{X}_{p}\},\]
where each \(f(\lambda)\) is of the form
\[f(\lambda)=s(\lambda)+\sum_{\mu\in\mathbb{X}_{p},\mu<_{\mathbb{Q}}\lambda}s( \mu){g_{\mu}}^{F},\quad g_{\mu}\in\mathbb{Z}[X]^{W}. \tag{4.2.1}\]
In particular, the collections
\[\{t(\lambda)\}_{\lambda\in\mathbb{X}_{p}},\quad\{q(\lambda)\}_{\lambda\in \mathbb{X}_{p}},\quad\{t_{\zeta}(\lambda)\}_{\lambda\in\mathbb{X}_{p}},\]
each have this form and therefore are all \(p\)-bases.
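To illustrate the twisted module structure (a small example of ours, in type \(A_{1}\) with \(p=5\)): since \(s(2)\,s(1)^{F}=(e(2)+e(-2))(e(5)+e(-5))=s(7)+s(3)\), the expansion of \(s(7)\) in the \(p\)-basis of orbit sums is
\[s(7)=s(2)\,s(1)^{F}-s(3)\,s(0)^{F},\]
so coefficients from \(\mathbb{Z}[X]^{W}\), acting through the Frobenius twist, genuinely occur.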
**Lemma 4.2.1**.: _Let \(\{\eta_{\lambda}\}_{\lambda\in\mathbb{X}_{p}}\) and \(\{\sigma_{\lambda}\}_{\lambda\in\mathbb{X}_{p}}\) be collections of elements in \(\mathbb{Z}[X]^{W}\). If_
\[\sum_{\lambda\in\mathbb{X}_{p}}q(\lambda)\eta_{\lambda}^{F}=\sum_{\lambda\in \mathbb{X}_{p}}t_{\zeta}(\lambda)\sigma_{\lambda}^{F},\]
_then \(\eta_{(p-1)\rho}=\sigma_{(p-1)\rho}\)._
Proof.: We can take the sum
\[\sum_{\lambda\in\mathbb{X}_{p}}q(\lambda)\eta_{\lambda}^{F}\]
and express it in the \(\{t_{\zeta}(\lambda)\}\)-basis. By Equation (4.2.1), and the fact that \((p-1)\rho\) is maximal among weights in \(\mathbb{X}_{p}\) under the ordering \(\leq_{\mathbb{Q}}\), it follows that \(\eta_{(p-1)\rho}^{F}\) will also be the coefficient of \(t_{\zeta}((p-1)\rho)\) in the second expression. The result now follows.
When the indecomposable projective \(G_{1}\)-modules lift to tilting modules for \(G\), we have \(q(\lambda)=t(\lambda)\), and when Lusztig's conjecture regarding the characters of the simple \(G\)-modules holds, we have \(q(\lambda)=t_{\zeta}(\lambda)\). So in very large characteristic, all three are equal. In this subsection we will give general relationships between the quantum and algebraic quotients that hold in all situations.
First we establish a few lemmas.
**Lemma 4.3.1**.: _Let \(\lambda\in\mathbb{X}_{p}\), \(M\) in \(G\)-mod. Then_
\[\mathrm{ch}(\mathrm{Hom}_{G_{1}}(\widehat{Q}_{1}(\lambda),M))=\eta^{F},\]
_where \(\eta^{F}\) is the coefficient of \(q((p-1)\rho)\) when expressing \(q((p-1)\rho-\lambda)\text{ch}(M)\) in the \(\{q(\mu)\}\)-basis._
Proof.: There is an isomorphism of \(T\)-modules
\[\mathrm{Hom}_{G_{1}}(\widehat{Q}_{1}(\lambda),M)\cong\mathrm{Hom}_{G_{1}}( \Bbbk,\widehat{Q}_{1}(\lambda)^{*}\otimes M).\]
The module \(\widehat{Q}_{1}(\lambda)^{*}\otimes M\) is projective over \(G_{1}T\), hence there is a decomposition of \(G_{1}T\)-modules
\[\widehat{Q}_{1}(\lambda)^{*}\otimes M\cong\bigoplus_{\mu\in\mathbb{X}_{p}} \widehat{Q}_{1}(\mu)\otimes\mathrm{Hom}_{G_{1}}(L(\mu),\widehat{Q}_{1}( \lambda)^{*}\otimes M)\]
The result now follows by taking the Steinberg quotients of the characters on each side of this decomposition (noting that \(q((p-1)\rho-\lambda)\text{ch}(M)\) is the Steinberg quotient of \(\widehat{Q}_{1}(\lambda)^{*}\otimes M\)).
For the quantum group \(\mathcal{U}_{\zeta}\), the projective modules for the small quantum group lift to tilting modules for all \(p\). Using an argument similar to the one just given, we obtain a similar result here.
**Lemma 4.3.2**.: _Let \(\lambda\in(p-1)\rho+\mathbb{X}^{+}\), \(M\) in \(\mathcal{U}_{\zeta}\)-mod. Then_
\[\mathrm{ch}(\mathrm{Hom}_{u_{\zeta}}(T_{\zeta}(\hat{\lambda}),M))=\eta^{F}\]
_where \(\eta^{F}\) is the coefficient of \(t_{\zeta}((p-1)\rho)\) when expressing \(t_{\zeta}((p-1)\rho-\lambda)\text{ch}(M)\) in the \(\{t_{\zeta}(\mu)\}\)-basis._
We now have the following comparison theorem.
**Theorem 4.3.3**.: _Let \(\lambda\in\mathbb{X}_{p}(T)\). Then_
\[q(\lambda)=\sum_{\mu\in\mathbb{X}_{p}(T)}t_{\zeta}(\mu)\cdot\mathrm{ch}( \mathrm{Hom}_{G_{1}}(\widehat{Q}_{1}((p-1)\rho-\lambda),\nabla^{\mathrm{red}}( (p-1)\rho-\mu))).\]
Proof.: First, since \(\{t(\mu)\}\) is a \(p\)-basis, there are coefficients \(\eta_{\mu}\in\mathbb{Z}[X]^{W}\) such that
\[q(\lambda)=\sum_{\mu\in\mathbb{X}_{p}}t_{\zeta}(\mu)\eta_{\mu}{}^{F}.\]
Fix some \(\sigma\in\mathbb{X}_{p}\). Since \(\mathrm{ch}(\nabla^{\mathrm{red}}((p-1)\rho-\sigma))\) and \(\mathrm{ch}(L_{\zeta}((p-1)\rho-\sigma))\) are equal, we have a character equality
\[q(\lambda)\mathrm{ch}(\nabla^{\mathrm{red}}((p-1)\rho-\sigma))=\sum_{\mu\in \mathbb{X}_{p}}t_{\zeta}(\mu)\eta_{\mu}{}^{F}\mathrm{ch}(L_{\zeta}((p-1)\rho- \sigma)).\]
Expanding the RHS into the \(p\)-basis \(\{t_{\zeta}(\gamma)\}\), we find that the coefficient of \(t_{\zeta}((p-1)\rho)\) is \(\eta_{\sigma}{}^{F}\). Expanding out the LHS into the \(p\)-basis \(\{q(\gamma)\}\), we have that the coefficient of \(q((p-1)\rho)\) is
\[\mathrm{ch}(\mathrm{Hom}_{G_{1}}(\widehat{Q}_{1}((p-1)\rho-\lambda),\nabla^{ \mathrm{red}}((p-1)\rho-\sigma))).\]
The result now follows by Lemma 4.2.1.
Since
\[\operatorname{Hom}_{G_{1}}(\widehat{Q}_{1}((p-1)\rho-\lambda),\nabla^{\operatorname{red}}((p-1)\rho-\lambda))\cong\Bbbk,\]
it follows that each \(q(\lambda)\) is equal to \(t_{\zeta}(\lambda)\) plus a non-negative sum of various \(s(\mu)\). The following then holds.
This now allows us to make a general statement about the coefficients \(a_{\mu,\lambda}\). Unlike the \(b_{\mu,\lambda}\), we are not able to say in general that the nondecreasing pattern holds. However, by comparing with the \(c_{\mu,\lambda}\), we can say that all orbits that can appear, do.
**Corollary 4.3.4**.: _If \(p>2\), and \(p>3\) if \(G\) has a root system of type \(\mathrm{G}_{2}\), then \(a_{\mu,\lambda}\geq c_{\mu,\lambda}\) for all \(\lambda\in\mathbb{X}_{p}\) and \(\mu\leq\lambda\). In particular, if \(\lambda\in\mathbb{X}_{p}\) and \(\mu\in\mathbb{X}^{+}\) with \((\mu-\rho)\uparrow(\lambda-\rho)\), then \(a_{\mu,\lambda}\geq 1\)._
We also obtain useful facts about the characters \(t(\lambda)\). Andersen conjectured in [1] that for all \(\lambda\in\mathbb{X}^{+}\) such that
\[\langle\lambda+\rho,\alpha_{0}^{\vee}\rangle<p^{2},\]
it should hold that
\[T_{\mathcal{O}}(\lambda)\otimes_{\mathcal{O}}\mathbb{C}\cong T_{\zeta}(\lambda).\]
In other words, for such weights the algebraic and quantum tilting modules should agree. When \(p\geq 2h-2\), this conjecture implies Lusztig's conjecture, therefore it does not hold in general.
Nonetheless, it is an interesting question to consider. As it pertains to Steinberg quotients, we note that
\[\langle(p-1)\rho+\lambda+\rho,\alpha_{0}^{\vee}\rangle=p(h-1)+\langle\lambda, \alpha_{0}^{\vee}\rangle.\]
Thus if
\[\langle\lambda,\alpha_{0}^{\vee}\rangle<p(p-h+1),\]
Andersen's conjecture would imply that
\[t(\lambda)=t_{\zeta}(\lambda).\]
It follows from (4.1.1) and Theorem 3.1.1 that for all \(\lambda\in\mathbb{X}^{+}\), we have
\[t(\lambda)=t_{\zeta}(\lambda)+\sum_{(\mu-\rho)\uparrow(\lambda-\rho)}n^{ \prime}_{\mu,\lambda}t_{\zeta}(\mu),\]
where \(n^{\prime}_{\mu,\lambda}=n_{(p-1)\rho+\lambda,(p-1)\rho+\mu}\) from (4.1.1). The next result follows directly from this observation.
**Proposition 4.4.1**.: _Let \(\lambda\in\mathbb{X}_{p}\)._
1. _If for some_ \(\mu\) _we have_ \(b_{\mu,\lambda}>c_{\mu,\lambda}\)_, then_ \(b_{\gamma,\lambda}>c_{\gamma,\lambda}\) _for all_ \((\gamma-\rho)\uparrow(\mu-\rho)\)_._
2. _Let_ \(\gamma\) _be the minimal dominant weight such that_ \((\gamma-\rho)\uparrow(\lambda-\rho)\)_. Then_ \(t(\lambda)=t_{\zeta}(\lambda)\) _if and only if_ \(b_{\gamma,\lambda}=c_{\gamma,\lambda}\)_._
## 5. Steinberg quotients and Kazhdan-Lusztig polynomials
Let \(\lambda\in\mathbb{X}_{p}\). For all \(\mu\in\mathbb{X}^{+}\) there are integers \(d_{\mu,\lambda}\) such that
\[\operatorname{ch}(L(\lambda))=\sum d_{\mu,\lambda}\,\chi(\mu). \tag{5.1.1}\]
We can compare these coefficients to the \(a_{\mu,\lambda}\) thanks to a key result due to Kato [K, Theorem 3.5]. Our proof follows Fiebig [Fie].
**Theorem 5.1.1**.: _Suppose that \(\operatorname{ch}(L(\lambda-\rho))\) is given by Lusztig's character formula for all \(\lambda\in\mathbb{X}_{p}\) such that \(\lambda-\rho\) is \(p\)-regular. Then for all \(\lambda\in\mathbb{X}_{p}\) and \(\mu\in\mathbb{X}^{+}\) we have_
\[a_{\mu,\lambda}=|d_{\mu-\rho,\lambda-\rho}|.\]
Proof.: Applying the translation principle, we may assume that \(\lambda-\rho\) is strongly linked to \(0\). Therefore let \(w\in W_{p}\) and \(x\in W_{p}\) be such that
\[w\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0=\lambda-\rho,\quad x \mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0=\mu-\rho.\]
Looking at both Corollary 3.4 and the proof of Theorem 3.5 from [Fie], we see that
\[[\widehat{Z}_{1}(w_{0}x\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0) :\widehat{L}_{1}(w_{0}w\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0) ]=|d_{x\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0,w\mathbin{ \raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0}|.\]
We then have
\[[\widehat{Z}_{1}(w_{0}x\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$ \bullet$}}}0):\widehat{L}_{1}(w_{0}w\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$ \bullet$}}}0)] =[\widehat{Z}_{1}(p\rho+w_{0}x\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0):\widehat{L}_{1}(p\rho+w_{0}w\mathbin{\raisebox{0.5pt}{ \scalebox{1.2}{$\bullet$}}}0)]\] \[=[\widehat{Z}_{1}((p-1)\rho+(w_{0}x\mathbin{\raisebox{0.5pt}{ \scalebox{1.2}{$\bullet$}}}0+\rho)):\widehat{L}_{1}((p-1)\rho+(w_{0}w\mathbin{ \raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0+\rho))]\] \[=(\widehat{Q}_{1}((p-1)\rho+(w_{0}w\mathbin{\raisebox{0.5pt}{ \scalebox{1.2}{$\bullet$}}}0+\rho)):\widehat{Z}_{1}((p-1)\rho+(w_{0}x\mathbin{ \raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0+\rho))),\]
The first equality above follows from the fact that for all \(\mu,\sigma,\gamma\in X\) we have
\[[\widehat{Z}_{1}(\mu):\widehat{L}_{1}(\sigma)]=[\widehat{Z}_{1}(\mu+p\gamma): \widehat{L}_{1}(\sigma+p\gamma)].\]
We note that because the dot action of \(W_{p}\) is a group action, we have for any \(y\in W_{p}\) that \(w_{0}y\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0=w_{0}\mathbin{ \raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}(y\mathbin{\raisebox{0.5pt}{ \scalebox{1.2}{$\bullet$}}}0)\). Thus the highest weight of
\[\widehat{Q}_{1}((p-1)\rho+(w_{0}w\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$ \bullet$}}}0+\rho))\]
is
\[2(p-1)\rho+w_{0}((p-1)\rho+(w_{0}w\mathbin{\raisebox{0.5pt}{ \scalebox{1.2}{$\bullet$}}}0+\rho)) =2(p-1)\rho+w_{0}(p-1)\rho+w_{0}(w_{0}\mathbin{\raisebox{0.5pt}{ \scalebox{1.2}{$\bullet$}}}w\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$ \bullet$}}}0+\rho)\] \[=2(p-1)\rho-(p-1)\rho+w_{0}(w_{0}(w\mathbin{\raisebox{0.5pt}{ \scalebox{1.2}{$\bullet$}}}0+\rho)-\rho+\rho)\] \[=(p-1)\rho+w\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0+\rho\] \[=(p-1)\rho+\lambda.\]
We also have that the dominant weight in the \(W\)-orbit of \(w_{0}x\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0+\rho\) is
\[w_{0}(w_{0}\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}x\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0+\rho) =w_{0}(w_{0}(x\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0+\rho)-\rho+\rho)\] \[=w_{0}(w_{0}(x\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0+\rho))\] \[=x\mathbin{\raisebox{0.5pt}{\scalebox{1.2}{$\bullet$}}}0+\rho\] \[=\mu.\]
This shows that \(s(\mu)\) occurs in \(q(\lambda)\) with multiplicity \(|d_{\mu-\rho,\lambda-\rho}|\).
The result in the previous subsection was stated in terms of the algebraic group, but the same holds for the quantum group in view of the facts recalled in Section 4. In that case the hypothesis does indeed hold, at least when \(p>h\) (see [Jan, II.H.12]).
We further note that in the case of the quantum group, if \(\lambda-\rho\) is \(p\)-regular, then the orbit multiplicities in \(t_{\zeta}(\lambda)\) are given by evaluations of Kazhdan-Lusztig polynomials for all \(\lambda\in\mathbb{X}^{+}\). This follows from Proposition 3.5.2 and [11, Corollary 4.10].
## 6. Lower bounds on Steinberg Quotients
In this section we consider the primary problem of computing \(t(\lambda)\) and \(t_{\zeta}(\lambda)\) in some reasonable way. Specifically, we seek to find properties that completely determine these characters. Using Weyl characters as a guide, we will define an approximation for \(t_{\zeta}(\lambda)\) by keying in on one of its properties. These approximations will be lower bounds on Steinberg quotients, and are computable by a straightforward algorithm. We will then provide computational evidence that suggests that these approximations are reasonably close to the \(t_{\zeta}(\lambda)\) (or at least are so in some nontrivial examples).
Donkin gave an alternate proof of Weyl's formula in [12], also recounted in [Jan, Proposition II.5.10]. Boiling it down to its essence, we see that Weyl characters are a subset of Euler characteristics, and Euler characteristics can be shown to satisfy the following properties:
* For every \(\lambda\in\mathbb{X}^{+}\) and \(w\in W\), \[\chi(w\mathbin{\mathchoice{\vbox{\hbox{\scalebox{0.6}{$\bullet$}}}{ \vbox{\hbox{\scalebox{0.6}{$\bullet$}}}}{\vbox{\hbox{\scalebox{0.6}{$\bullet$}}}}{\vbox{\hbox{\scalebox{0.6}{$\bullet$}}}}}{\vbox{\hbox{\scalebox{0.6}{$\bullet$}}}}}\lambda)=(-1)^{\ell(w)}\chi(\lambda).\]
* For all \(\lambda\in\mathbb{X}^{+}\), \[s(\lambda)=\sum_{w\in W/W_{\lambda}}\chi(w\lambda).\]
* For all \(\lambda\in\mathbb{X}\), \[\chi(\lambda)\in\mathbb{Z}[\mathbb{X}]^{W}.\]
Once we know that for \(\lambda\in\mathbb{X}^{+}\), the highest weight of \(\chi(\lambda)\) is \(\lambda\), occurring once, then these properties completely determine \(\chi(\lambda)\). Together they provide a recursive way to compute the orbit sums that appear in \(\chi(\lambda)\), starting with the outer orbit. This recursive process can be expressed succinctly as a division between signed \(W\)-orbit sums in \(\mathbb{Z}[\mathbb{X}]\).
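As a concrete illustration of this division (a minimal sketch of ours, for the rank-one case only, with Laurent polynomials stored as dictionaries): for \(\mathrm{SL}_{2}\) one has \(\chi(m)=(e(m+1)-e(-m-1))/(e(1)-e(-1))\), and the quotient can be computed by repeatedly peeling off the leading term.

```python
def weyl_character_A1(m):
    """Compute chi(m) for SL_2 by dividing signed W-orbit sums:
    chi(m) = (e(m+1) - e(-m-1)) / (e(1) - e(-1)), with Laurent polynomials
    stored as dicts {weight: coefficient}."""
    numerator = {m + 1: 1, -(m + 1): -1}
    quotient = {}
    while numerator:
        lead = max(numerator)                 # highest remaining weight
        coeff = numerator[lead]
        quotient[lead - 1] = coeff            # contributes coeff * e(lead - 1)
        # subtract coeff * e(lead - 1) * (e(1) - e(-1)) from the numerator
        for w, c in ((lead, coeff), (lead - 2, -coeff)):
            numerator[w] = numerator.get(w, 0) - c
            if numerator[w] == 0:
                del numerator[w]
    return quotient

# chi(3) = e(3) + e(1) + e(-1) + e(-3)
assert weyl_character_A1(3) == {3: 1, 1: 1, -1: 1, -3: 1}
```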
For each \(\lambda\in\mathbb{X}^{+}\), let \(\mathcal{I}(\lambda)\) denote the "smallest" element in \(\mathbb{Z}[\mathbb{X}]^{W}\) of highest weight \(\lambda\) having the property that, for all \(\mu\in\mathbb{X}^{+}\),
\[\mathcal{I}(\lambda)\chi(\mu)\]
has the character of a good filtration module. It is then trivially true that \(\mathcal{I}(\lambda)=\chi(\lambda)\). This is because \(\mathcal{I}(\lambda)\chi(0)=\mathcal{I}(\lambda)\), so if \(\mathcal{I}(\lambda)\) satisfies the condition above then \(\mathcal{I}(\lambda)\) is itself a nonnegative sum of Weyl characters which has highest weight \(\lambda\). Thus \(\chi(\lambda)\) appears at least once in this sum. Since \(\mathcal{I}(\lambda)\) is meant to be minimal with respect to this property, we must have that \(\mathcal{I}(\lambda)=\chi(\lambda)\).
Motivated by this observation, and looking at Theorem 3.3.1, we make the following definition. We say that an element \(\eta\in\mathbb{Z}[\mathbb{X}]^{W}\) has a _good Steinberg multiplication_ if
\[\eta\chi((p-1)\rho+p\gamma)\quad\text{is a good filtration character}\quad\forall \gamma\in\mathbb{X}^{+}. \tag{6.3.1}\]
We then define, for each \(\lambda\in\mathbb{X}^{+}\), the character \(\mathcal{M}_{p}(\lambda)\) to be the smallest element in \(\mathbb{Z}[\mathbb{X}]^{W}\) having highest weight \(\lambda\) and satisfying the good Steinberg multiplication property.
We will make precise what we mean by "smallest" by way of an algorithm for computing \(\mathcal{M}_{p}(\lambda)\) (this algorithm will evidently make a minimal choice at each stage). We will then prove that this algorithm always produces a well-defined element that has the good Steinberg multiplication property.
Before proceeding, we show that the good Steinberg multiplication property, which a priori involves checking an infinite number of character multiplications, can in fact be checked by multiplying by a finite number of characters. At the same time, it turns out to be too optimistic to hope that this property can simply be checked by multiplication against the Steinberg character itself.
**Theorem 6.3.1**.: _Let \(\eta\in\mathbb{Z}[\mathbb{X}]^{W}\). Let \(m\) be an integer such that \(pm>\langle\sigma,\alpha_{0}^{\vee}\rangle\) as \(\sigma\) ranges over the weights in \(\eta\). Then the following hold._
1. \(\eta\) _satisfies the good Steinberg multiplication property if and only if_ \[\eta\chi((p-1)\rho+p\gamma)\quad\text{is a good filtration character}\quad\forall \gamma\in\mathbb{X}_{m}.\]
2. _If_ \(\eta\) _is equal to the character of a_ \(G\)_-module, then_ \(\eta\) _satisfies the good Steinberg multiplication property if and only if_ \[\eta\chi((p-1)\rho)\quad\text{is a good filtration character.}\]
3. _If_ \(\eta\) _is not the character of a_ \(G\)_-module, then in general the previous condition fails._
Proof.: (1) The only thing to prove is that it suffices to check the property on this finite set. To do so, we show that for any \(\gamma\not\in\mathbb{X}_{m}\), there exists an element \(\gamma^{\prime}\in\mathbb{X}_{m}\) such that \(\eta\chi((p-1)\rho+p\gamma)\) is a good filtration character if and only if \(\eta\chi((p-1)\rho+p\gamma^{\prime})\) is a good filtration character.
Let \(J\subset\Pi\) be the set of all simple roots \(\alpha_{i}\) such that
\[\langle\gamma,\alpha_{i}^{\vee}\rangle<m.\]
Define
\[\gamma_{1}=\sum_{\alpha_{i}\in J}\langle\gamma,\alpha_{i}^{\vee}\rangle \varpi_{i}.\]
That is, in terms of the basis of fundamental dominant weights, \(\gamma_{1}\) agrees with \(\gamma\) on those coefficients that are less than \(m\), and is \(0\) on the others. Now define
\[\gamma_{2}=\sum_{\alpha_{i}\in\Pi\setminus J}m\varpi_{i},\]
and set
\[\gamma^{\prime}=\gamma_{1}+\gamma_{2}.\]
We have that
\[\gamma=\gamma_{1}+(\gamma-\gamma_{1}),\]
and the coefficients of \(\gamma-\gamma_{1}\) are greater than or equal to those of \(\gamma_{2}\). Suppose now that \(\sigma\) is a weight in \(\eta\). Then by Lemma 3.2.3 there is some \(w\in W_{J}\) such that
\[w\,\raisebox{-1.0pt}{\scalebox{1.5}{$\bullet$}}\,((p-1)\rho+p\gamma+\sigma)\]
is in \(\mathbb{X}^{+}-\rho\). Note that \(w(\gamma-\gamma_{1})=\gamma-\gamma_{1}\) for any such \(w\), as this weight pairs to \(0\) with all \(\alpha_{i}^{\vee}\) for \(\alpha_{i}\in J\). Applying Lemma 3.2.1, we then have
\[w\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\bullet$}}}((p-1)\rho+p \gamma+\sigma) =w\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\bullet$}}}((p-1)\rho+p \gamma_{1}+\sigma)+w(p(\gamma-\gamma_{1}))\] \[=w\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\bullet$}}}((p-1) \rho+p\gamma_{1}+\sigma)+p(\gamma-\gamma_{1}).\]
The reasoning above also implies for this same \(w\) that the weight
\[w\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\bullet$}}}((p-1)\rho+p \gamma^{\prime}+\sigma) =w\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\bullet$}}}((p-1) \rho+p\gamma_{1}+\sigma)+w(p\gamma_{2})\] \[=w\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\bullet$}}}((p-1) \rho+p\gamma_{1}+\sigma)+p\gamma_{2}\]
is in \(\mathbb{X}^{+}-\rho\). It therefore follows that in the basis of Weyl characters, \(\eta\chi((p-1)\rho+p\gamma)\) equals
\[\sum_{\mu\in\mathbb{X}}c_{\mu}\chi(\mu+p(\gamma-\gamma_{1})),\]
if and only if in this basis \(\eta\chi((p-1)\rho+p\gamma^{\prime})\) equals
\[\sum_{\mu\in\mathbb{X}}c_{\mu}\chi(\mu+p\gamma_{2}).\]
(Note that \(\mu\) need not be dominant in these expressions, but must be after adding \(p(\gamma-\gamma_{1})\) or \(p\gamma_{2}\).) This finishes the proof of (1).
(2) This is shown in Section 2 of [1].
(3) A counter-example is given in Section 7.2.
We can now present our algorithm for determining \(\mathcal{M}_{p}(\lambda)\).
**Algorithm 6.3.1**.: **Input:** a weight \(\lambda\in\mathbb{X}^{+}\).
**Output:** the character \(\mathcal{M}_{p}(\lambda)\).
**Preliminary Steps:**
1. Set \(\Psi^{+}(\mathcal{M}_{p}(\lambda))\) to be the set of all \(\gamma\in\mathbb{X}^{+}\) such that \((\gamma-\rho)\uparrow(\lambda-\rho)\).
2. Let \(m\) be the minimal integer such that \(\langle\lambda,\alpha_{0}^{\vee}\rangle<pm\). Recall that \(\mathbb{X}_{m}\subseteq\mathbb{X}^{+}\) denotes the set of \(m\)-restricted weights.
3. Initialize \(\mathcal{M}_{p}(\lambda)=s(\lambda)\).
**Iterative Steps:**
1. Let \(\mu\in\Psi^{+}(\mathcal{M}_{p}(\lambda))\) be a maximal weight, under the \(\leq\) ordering, such that the multiplicity of \(s(\mu)\) in \(\mathcal{M}_{p}(\lambda)\) is \(0\). If no such \(\mu\) exists, then the process is ended. Otherwise, proceed to Step (2).
2. For each \(\gamma\in\mathbb{X}_{m}\), write the character \[\chi((p-1)\rho+p\gamma)\mathcal{M}_{p}(\lambda)\] in the basis of Weyl characters.
3. Let \(x_{\mu}\) be the least integer appearing as a coefficient on a term of the form \[\chi((p-1)\rho+p\gamma+w\mu)\] in the previous step, for all \(w\in W\) and for all \(\gamma\in\mathbb{X}_{m}\).
4. Add \(-x_{\mu}s(\mu)\) to \(\mathcal{M}_{p}(\lambda)\) and repeat Step (1).
**Claim:** This algorithm terminates after a finite number of computations and returns the same element regardless of any choices made at any stage. The element it returns satisfies the good Steinberg multiplication property.
Proof.: Regarding the termination of the algorithm, because the set \(\mathbb{X}_{m}\) is finite, it is clear that the computations made at every stage are finite. Also, after each loop the coefficient on the relevant \(s(\mu)\) becomes positive, so the loop never returns to a previously settled case. To see this, suppose that \(\mu^{\prime}\) is minimal such that \((\mu-\rho)\uparrow(\mu^{\prime}-\rho)\). If \(\mu\) is (at this stage) a maximal weight such that the coefficient of \(s(\mu)\) is \(0\), then it must be true that the coefficient of \(s(\mu^{\prime})\) is positive. Now, applying the argument in the proof of Theorem 3.3.1(2), it will follow that after this loop the coefficient of \(s(\mu)\) is at least as big as the coefficient of \(s(\mu^{\prime})\).
Next, we will show that at Step (1) in the iterative process, if there are two or more maximal weights satisfying the condition (necessarily incomparable to each other), the order in which they are chosen does not matter. Suppose then that \(\mu_{1}\) and \(\mu_{2}\) are two such weights, and that when adding in the orbit sum \(s(\mu_{1})\), we impact the number of \(s(\mu_{2})\) needed later. This would mean that there are elements \(w_{1},w_{2}\in W\) such that \((p-1)\rho+p\gamma+w_{2}\mu_{2}\in\mathbb{X}^{+}-\rho\), and
\[\chi((p-1)\rho+p\gamma+w_{1}\mu_{1})=\pm\chi((p-1)\rho+p\gamma+w_{2}\mu_{2}).\]
If
\[(p-1)\rho+p\gamma+w_{1}\mu_{1}=(p-1)\rho+p\gamma+w_{2}\mu_{2},\]
then we would have \(w_{1}\mu_{1}=w_{2}\mu_{2}\) for \(w_{1}\) and \(w_{2}\) elements in the finite Weyl group. But this cannot happen since \(\mu_{1}\) and \(\mu_{2}\) are distinct dominant weights. The only other case then is that for some nontrivial \(w\in W\),
\[w\centerdot((p-1)\rho+p\gamma+w_{1}\mu_{1})=(p-1)\rho+p\gamma+w_{2}\mu_{2}.\]
However, applying Lemma 3.2.2, it would follow that \(w_{2}\mu_{2}\) lies in the convex hull of \(W(w\mu_{1})\), so that \(\mu_{2}\) lies in the convex hull of \(W\mu_{1}\). But this is well known to imply that \(\mu_{2}\leq\mu_{1}\), which contradicts our assumption that these weights are not comparable.
Finally, by Theorem 6.3.1 we know that the final value for \(\mathcal{M}_{p}(\lambda)\) has the good Steinberg multiplication property.
We wish to compare the multiplicities between \(t_{\zeta}(\lambda)\) and \(\mathcal{M}_{p}(\lambda)\). As some explicit computations indicate, these characters will in general differ, so that \(t_{\zeta}(\lambda)\) is defined by more than just having the good Steinberg multiplication property. Nonetheless, in a very special case we do always have equality.
**Proposition 6.4.1**.: _For all \(\lambda\in\mathbb{X}^{+}\),_
\[\mathcal{M}_{p}(p\lambda)=\chi(\lambda)^{F}.\]
_In particular, \(\mathcal{M}_{p}(p\lambda)=t_{\zeta}(p\lambda)\)._
Proof.: The only orbit sums that are needed to appear in \(\mathcal{M}_{p}(p\lambda)\) are those \(s(\mu)\) such that
\[(\mu-\rho)\uparrow(p\lambda-\rho).\]
But such \(\mu\) are necessarily in \(p\mathbb{X}\). It follows that there are distinct dominant weights \(\mu_{1},\mu_{2},\ldots,\mu_{m}\) which are less than \(\lambda\), and coefficients \(c_{i}\in\mathbb{Z}\), such that
\[\mathcal{M}_{p}(p\lambda)=\chi(\lambda)^{F}+\sum c_{i}\chi(\mu_{i})^{F}.\]
From the definition of \(\mathcal{M}_{p}(p\lambda)\), it is necessary that
\[\chi((p-1)\rho)\mathcal{M}_{p}(p\lambda) =\chi((p-1)\rho)\left(\chi(\lambda)^{F}+\sum c_{i}\chi(\mu_{i})^{ F}\right)\] \[=\chi((p-1)\rho+p\lambda)+\sum c_{i}\chi((p-1)\rho+p\mu_{i})\]
is a good filtration character. The Weyl characters appearing in this last expression are all distinct, therefore the coefficients \(c_{i}\) are nonnegative. Since \(\chi(\lambda)^{F}\) itself has the good Steinberg multiplication property, it follows by the minimality of \(\mathcal{M}_{p}(\lambda)\) that all \(c_{i}=0\), so that \(\mathcal{M}_{p}(p\lambda)=\chi(\lambda)^{F}\).
The last part follows from Proposition 3.5.2.
## 7. Computations in Type \(A\)
For \(A_{n}\), the fundamental dominant weights are \(\varpi_{1},\varpi_{2},\ldots\varpi_{n}\). We write
\[(a_{1},a_{2},\ldots,a_{n})=a_{1}\varpi_{1}+a_{2}\varpi_{2}+\cdots+a_{n}\varpi _{n}.\]
### \(A_{3}\)
We will work here explicitly with \(p=5\). Lusztig's conjecture is known to hold for the algebraic group for all \(p\geq 5\), and a detailed listing of the characters can be found in [Jan, II.8.20]. We also know, by [BMPS2], that \(q(\lambda)=t(\lambda)\) for all \(\lambda\in\mathbb{X}_{p}\) for all \(p\).
Applying the general formula given in [Jan, II.8.20], we have that
\[L(3,2,3)=\chi(3,2,3)-\chi(2,2,2)-\chi(5,0,1)-\chi(1,0,5)+\chi(1,1,3)+\chi(3,1, 1)-2\chi(2,0,2)+3\chi(0,0,0).\]
Because Lusztig's theorem holds here in both cases, we are able by Theorem 5.1.1 to immediately read off the (equal) characters \(q(4,4,4)\) and \(t_{\zeta}(4,4,4)\). By the comments above, these also equal \(t(4,4,4)\). We have
\[t(4,4,4)=s(4,4,4)+s(2,4,2)+s(7,1,1)+s(1,1,7)+s(3,3,1)+s(1,3,3)+2s(2,2,2)+3s(1,2,1).\]
A computer calculation further reveals that these are the minimal orbit multiplicities needed to satisfy the good Steinberg multiplication property, so that this character is also equal to \(\mathcal{M}_{p}(4,4,4)\). At the same time, our algorithm found that
\[\chi((p-1)\rho)(t(4,4,4)-s(2,4,2))\]
is a good filtration character. Thus the orbit \(s(2,4,2)\) is not needed to obtain a good filtration character when multiplying by the Steinberg weight. This gives an example proving claim (3) in 6.3.1.
### \(A_{4}\)
Computer calculations by Scott [Sc] have shown that Lusztig's conjecture holds for the algebraic group for \(p=5,7\). Applying [BMPS4], we also have for \(p=7\) that \(q(\lambda)=t(\lambda)\). We will therefore look at \(p=7\), where it then follows that \(t_{\zeta}(\lambda)=t(\lambda)\). There are 52 dominant weights appearing in \(t(6,6,6,6)\) (this corresponds to the number of dominant alcoves that are less than the top restricted alcove under the \(\uparrow\) ordering). For brevity, we list only the orbit sums appearing with the greatest multiplicities, which are 9, 13, and 21.
\[\begin{array}{l|c|c}\text{Orbit Sum}&\text{Multiplicity in $t_{\zeta}(6,6,6,6)$}& \text{Multiplicity in $\mathcal{M}_{p}(6,6,6,6)$}\\ \hline s(4,2,2,4)&9&9\\ s(1,5,5,1)&9&9\\ s(3,1,2,6)&9&9\\ s(6,2,1,3)&9&9\\ s(5,1,2,3)&13&13\\ s(3,2,1,5)&13&13\\ s(4,1,1,4)&21&20\\ s(1,1,1,1)&21&20\\ \end{array}\]
From this computation we find the interesting fact that the \(\mathcal{M}_{p}(\lambda)\) are not always equal to the \(t_{\zeta}(\lambda)\).
### \(A_{5}\)
Here we do not know of any small primes in which Lusztig's conjecture holds for the algebraic group, but we do know that it holds for \(p>6\) in the quantum setting. Using the Kazhdan-Lusztig polynomials computed by Frank Lubeck, all the computations that we were able to make showed complete agreement between \(t_{\zeta}(\lambda)\) and \(\mathcal{M}_{p}(\lambda)\). The largest example for which we were able to compute \(\mathcal{M}_{p}(\lambda)\) was for the weight \((3,6,2,4,4)\).
We found that the characters \(t_{\zeta}(3,6,2,4,4)\) and \(\mathcal{M}_{p}(3,6,2,4,4)\) were equal. In these characters there are 79 different orbit sums appearing. The lowest orbit sum, \(s(1,1,1,1,1)\), occurs with the greatest multiplicity, which is 23.
### Larger ranks
Lubeck shared complete computations of the relevant Kazhdan-Lusztig polynomials for type \(A\) up through rank 7. Unfortunately, our own code for computing the characters \(\mathcal{M}_{p}(\lambda)\) has not been able to keep up! Our first attempt was an ad hoc script written in MATLAB that replaced by-hand computations. Subsequent versions have been modifications of this, but a more serious design is needed (we did not expect the \(\mathcal{M}_{p}(\lambda)\) to be as close to the \(t_{\zeta}(\lambda)\) as we have found them to be). Ideally we would like to find more compact formulas for computing the characters \(\mathcal{M}_{p}(\lambda)\), possibly one that is patterned after Freudenthal's formula for computing \(\chi(\lambda)\).
We should also say at this point that the data grows dramatically as we move up in rank, so it will be illuminating to study the characters side-by-side for larger \(n\). For example, in type \(A_{5}\), \(p=7\), the Steinberg quotient \(t(6,6,6,6,6)\) contains 478 distinct orbits, and the biggest orbit multiplicity is 646. Moving into larger ranks, and stating everything now in terms of a general prime \(p>h\), we find for type \(A_{6}\) that the Steinberg quotient \(t((p-1)\rho)\) has 5706 distinct orbits, with the greatest orbit sum multiplicity being 65199. For \(A_{7}\), the corresponding character has 83824 distinct orbits, and the largest multiplicity is more than 34 million.
These numbers all come from the Kazhdan-Lusztig polynomial computations made by Lubeck, though we had independently computed the numbers of distinct orbits in the characters just listed.
## 8. Concluding Remarks
We find the characters \(\mathcal{M}_{p}(\lambda)\) to be quite interesting. Although they are not in complete agreement with the \(t_{\zeta}(\lambda)\), which when \(p>h\) are computable by evaluating Kazhdan-Lusztig polynomials if \(\lambda-\rho\) is \(p\)-regular, they are nonetheless close in the examples we have seen. Finding further insights into this relationship will be the focus of future work.
It is worth observing also that the algorithm for computing \(\mathcal{M}_{p}(\lambda)\) does not require \(\lambda-\rho\) to be \(p\)-regular. In fact, intuitively it seems reasonable that the more \(p\)-singular the weight \(\lambda-\rho\) is, the closer \(\mathcal{M}_{p}(\lambda)\) will be to \(t_{\zeta}(\lambda)\) (in terms of relative difference). See Proposition 6.4.1.
In a similar vein, some calculations in the case of \(A_{4}\) indicate that the behavior under translation is not the same for \(\mathcal{M}_{p}(\lambda)\) as it is for \(t_{\zeta}(\lambda)\), and this likely accounts for the discrepancy detailed in Section 7.3. This will also be explored further.
|
2310.07546
|
Boiling peak heat flux for steady inhomogeneous heat transfer in
superfluid $^4$He
|
Superfluid helium-4 (He II) is a widely adopted coolant in scientific and
engineering applications owing to its exceptional heat transfer capabilities.
However, boiling can spontaneously occur on a heating surface in He II when the
heat flux exceeds a threshold value $q^*$, referred to as the peak heat flux.
While the parameter $q^*$ holds paramount importance in the design of He II
based cooling systems, extensive research has primarily focused on its behavior
in steady homogeneous heat transfer from a flat heating surface. For
inhomogeneous heat transfer from curved surfaces, $q^*$ exhibits intricate
dependance on parameters such as the He II bath temperature $T_b$, the
immersion depth $h$, and the curvature radius $R_0$ of the heating surface. A
comprehensive understanding on how $q^*$ depends on these parameters remains
elusive. In this paper, we report our systematic study on $q^*$ for steady heat
transfer from cylindrical and spherical heaters in He II. We compute $q^*$ for
a wide range of parameter combinations $(T_b, h, R_0)$ by solving the He II
two-fluid equations of motion. The generated data have allowed us to develop a
robust correlation that accurately reproduces $q^*$ for all the parameter
combinations we explored. Our findings, particularly the establishment of the
correlation, carry valuable implications for emergent applications that involve
steady inhomogeneous heat transfer in He II systems.
|
Sosuke Inui, Mikai Hulse, Toshiaki Kanai, Wei Guo
|
2023-10-11T14:50:08Z
|
http://arxiv.org/abs/2310.07546v1
|
# Boiling peak heat flux for steady inhomogeneous heat transfer in superfluid \({}^{4}\)He
###### Abstract
Superfluid helium-4 (He II) is a widely adopted coolant in scientific and engineering applications owing to its exceptional heat transfer capabilities. However, boiling can spontaneously occur on a heating surface in He II when the heat flux exceeds a threshold value \(q^{*}\), referred to as the peak heat flux. While the parameter \(q^{*}\) holds paramount importance in the design of He II based cooling systems, extensive research has primarily focused on its behavior in steady homogeneous heat transfer from a flat heating surface. For inhomogeneous heat transfer from curved surfaces, \(q^{*}\) exhibits intricate dependence on parameters such as the He II bath temperature \(T_{b}\), the immersion depth \(h\), and the curvature radius \(R_{0}\) of the heating surface. A comprehensive understanding on how \(q^{*}\) depends on these parameters remains elusive. In this paper, we report our systematic study on \(q^{*}\) for steady heat transfer from cylindrical and spherical heaters in He II. We compute \(q^{*}\) for a wide range of parameter combinations \((T_{b},h,R_{0})\) by solving the He II two-fluid equations of motion. The generated data have allowed us to develop a robust correlation that accurately reproduces \(q^{*}\) for all the parameter combinations we explored. Our findings, particularly the establishment of the correlation, carry valuable implications for emergent applications that involve steady inhomogeneous heat transfer in He II systems.
+
Footnote †: preprint: APS/123-QED
## I Introduction
Saturated liquid \({}^{4}\)He becomes a superfluid at temperatures below about 2.17 K [1]. In the superfluid phase (known as He II), the liquid can be considered phenomenologically as a mixture of two miscible fluid components: an inviscid superfluid that carries no entropy and a viscous normal fluid that consists of thermal quasiparticles (i.e., phonons and rotons) [2]. Heat transfer in this two-fluid system is via a unique internal convection process known as thermal counterflow. In a counterflow, the normal fluid carries the heat and moves away from a heating surface at a velocity \(v_{n}\)=\(q/\rho sT\), where \(q\) is the heat flux, \(T\) is the He II temperature, and \(\rho\) and \(s\) are the He II density and specific entropy, respectively; the superfluid moves in the opposite direction at a velocity \(v_{s}\)=\(-v_{n}\rho_{n}/\rho_{s}\) so that the net mass flow remains zero (here \(\rho_{n}\) and \(\rho_{s}\) are the densities of the normal fluid and the superfluid, respectively). This counterflow mode is extremely effective, which renders He II a valuable coolant in a wide array of scientific and engineering applications, such as for cooling superconducting particle accelerator cavities, superconducting magnets, medical instruments, and even satellites [3].
When the relative velocity of the two fluids in counterflow exceeds a small critical value [4], a chaotic tangle of quantized vortex lines can develop spontaneously in the superfluid. These quantized vortices are filamentary topological defects, each carrying a quantized circulation \(\kappa\simeq 10^{-3}\) cm\({}^{2}\)/s around its angstrom-sized core [5]. A mutual friction force between the two fluids then emerges due to thermal quasiparticles scattering off the quantized vortices [6]. This mutual friction can lead to novel flow characteristics in both fluids [7; 8; 9; 10; 11]. When the heat flux is further increased to above a threshold value \(q^{*}\), referred to as the peak heat flux, boiling on the heating surface can occur. This boiling action leads to the formation of vapor bubbles, and these bubbles can act as effective insulators between the heating surface and the surrounding He II, which impairs the heat transfer and results in the potential for overheating and damage to the cooled devices.
Developing a reliable correlation for assessing \(q^{*}\) is of great importance in the design of He II based cooling systems. The value of \(q^{*}\) can depend on many parameters, such as the heating duration \(\Delta t\), the temperature of the He II bath \(T_{b}\), the immersion depth \(h\), and the curvature radius \(R_{0}\) of the heating surface. In this paper, we shall focus on \(q^{*}\) in steady heat transfer where \(\Delta t\rightarrow\infty\), since this knowledge lays the groundwork for future explorations of \(q^{*}\) within transient heat transfer scenarios.
There have been extensive studies on \(q^{*}\) in the context of steady, homogeneous heat transfer of He II within uniform channels driven by planar heaters [12; 13; 14; 15; 16]. The relationship between \(q^{*}\) and the parameters \(T_{b}\) and \(h\) has been reasonably well-understood [3]. However, when it comes to inhomogeneous heat transfer from curved surfaces such as cylindrical and spherical surfaces, \(q^{*}\) displays intricate dependencies on the parameter combination \((T_{b},h,R_{0})\). Despite some past studies on \(q^{*}\) for these nonuniform geometries [17; 18; 19; 20; 21; 22], a systematic understanding on how \(q^{*}\) varies with the parameter combination \((T_{b},h,R_{0})\) remains absent. Nevertheless, establishing the capability to reliably predict \(q^{*}\) values in
these nonuniform geometries holds significant importance for specific applications, such as cooling superconducting transmission lines and magnet coils [23; 24], detecting point-like quench spots on superconducting accelerator cavities [25; 26], and emerging applications like the development of hot-wire anemometry for studying quantum turbulence in He II [27].
In this paper, we present a comprehensive numerical investigation of \(q^{*}\) in steady, nonhomogeneous heat transfer from both cylindrical and spherical heating surfaces submerged in He II. We employ the He II two-fluid equations of motion to compute \(q^{*}\) over a wide range of parameter combinations \((T_{b},h,R_{0})\). Furthermore, we demonstrate that the data we generate can facilitate the development of a robust correlation capable of accurately reproducing \(q^{*}\) across all the parameter combinations we explore. The paper is structured as follows: we begin by outlining our theoretical model in Section II. In Section III, we conduct a comparative analysis of the calculated \(q^{*}\) values for heat transfer from cylindrical heaters against available experimental data to calibrate our model. In Section IV.1, we present a systematic computation of \(q^{*}\) using the fine-tuned model for cylindrical heaters under varying parameter combinations \((T_{b},h,R_{0})\) and establish a reliable correlation linking \(q^{*}\) with these parameters. In Section IV.2, we provide a similar analysis and correlation for \(q^{*}\) concerning heat transfer from spherical heaters. We conclude with a summary in Section V.
## II Theoretical model
We employ the two-fluid hydrodynamic model in our current research, which was also utilized in our prior work to analyze transient heat transfer in He II [28; 29]. A comprehensive description of this model is available in Refs. [30; 31]. In brief, this model is based on the conservation laws governing He II mass, momentum, and entropy. It comprises four evolution equations for He II's total density \(\rho\), total momentum density \(\rho\mathbf{v}=\rho_{s}\mathbf{v_{s}}+\rho_{n}\mathbf{v_{n}}\), superfluid velocity \(\mathbf{v_{s}}\), and entropy \(s\), as follows:
\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v})=0, \tag{1}\] \[\frac{\partial(\rho\mathbf{v})}{\partial t}+\nabla(\rho_{s}v_{s}^{2}+ \rho_{n}v_{n}^{2})+\nabla P=0,\] (2) \[\frac{\partial\mathbf{v}_{s}}{\partial t}+\mathbf{v}_{s}\cdot\nabla\mathbf{v }_{s}+\nabla\mu=\frac{\mathbf{F}_{ns}}{\rho_{s}},\] (3) \[\frac{\partial(\rho s)}{\partial t}+\nabla\cdot(\rho s\mathbf{v}_{n} )=\frac{\mathbf{F}_{ns}\cdot\mathbf{v}_{ns}}{T}, \tag{4}\]
where \(P\) is the pressure, \(\mu\) is the chemical potential of He II, \(\mathbf{v}_{ns}=\mathbf{v}_{n}-\mathbf{v}_{s}\) is the relative velocity between two fluids, and \(\mathbf{F}_{ns}\) is the Gorter-Mellink mutual friction between the two fluids per unit volume of He II [3].
\(\mathbf{F}_{ns}\) can be expressed in terms of \(\mathbf{v}_{ns}\) and the vortex-line density \(L\) as [32; 33]:
\[\mathbf{F}_{ns}=\frac{\kappa}{3}\frac{\rho_{s}\rho_{n}}{\rho}B_{L}L\mathbf{v}_{ns}, \tag{5}\]
where \(B_{L}\) is a temperature dependent mutual friction coefficient [34]. The calculation of \(\mathbf{F}_{ns}\) requires the evolution of \(L(\mathbf{r},t)\), for which we employ Vinen's equation [6]:
\[\frac{\partial L}{\partial t}+\nabla\cdot(\mathbf{v}_{\text{\tiny L}}L)=\alpha_{ \text{\tiny V}}|\mathbf{v}_{ns}|L^{\frac{3}{2}}-\beta_{V}L^{2}+\gamma_{\text{ \tiny V}}|\mathbf{v}_{ns}|^{\frac{5}{2}}, \tag{6}\]
where \(\alpha_{V}\), \(\beta_{V}\) and \(\gamma_{V}\) are temperature-dependent empirical coefficients [6], and \(\mathbf{v}_{\text{\tiny L}}\) represents the vortex-tangle drift velocity, which is often approximated as equal to the local superfluid velocity \(\mathbf{v}_{s}\)[35; 36].
Similar to the previous works [28; 29], we include correction terms that depend on \(v_{ns}^{2}\) for He II's thermodynamic properties, as suggested by Landau [37; 2], to account for the large \(v_{ns}\) values under high heat flux conditions in the current research:
\[\mu(P,T,v_{ns}) =\mu^{(s)}(P,T)-\frac{1}{2}\frac{\rho_{n}}{\rho}v_{ns}^{2}, \tag{7}\] \[s(P,T,v_{ns}) =s^{(s)}(P,T)+\frac{1}{2}v_{ns}^{2}\frac{\partial(\rho_{n}/\rho) }{\partial T},\] (8) \[\rho(P,T,v_{ns}) =\rho^{(s)}(P,T)+\frac{1}{2}\rho^{2}v_{ns}^{2}\frac{\partial(\rho _{n}/\rho)}{\partial P}, \tag{9}\]
where the quantities with the superscript "\((s)\)" represent static values, which can be obtained from the HEPAK dynamic library [38]. The two-fluid model outlined above provides a coarse-grained description of the He II hydrodynamics, since it does not resolve the interaction between individual vortices and the normal fluid [39; 40; 41]. Nonetheless, prior research has shown that this model describes non-isothermal flows in He II well when \(L\) is reasonably high [28; 42].
Since our current research focuses on the steady-state heat transfer, we drop the terms that involve the time derivative in the governing equations and reformulate them in a manner convenient for numerical solutions. For instance, Eq. (1) leads to:
\[\rho_{s}\mathbf{v}_{s}=-\rho_{n}\mathbf{v}_{n}. \tag{10}\]
Moreover, by integrating Eq. (2), we can derive an expression for \(P(r)\) as:
\[P(r)=P_{b}-\rho_{s}v_{s}^{2}-\rho_{n}v_{n}^{2}. \tag{11}\]
Here the bath pressure \(P_{b}=P_{S}(T_{b})+\rho gh\), where \(P_{S}(T_{b})\) represents the saturation pressure at the bath temperature \(T_{b}\), \(g\) stands for gravitational acceleration, and \(h\) denotes the immersion depth of the heating surface. The last two terms in Eq. (11) account for the Bernoulli pressures associated with the flows in the two fluids. Now, assuming axial symmetry and recognizing the identity \(d\mu=\frac{1}{\rho}dP-sdT\)[2], we can express Eq. (3) in the following form, utilizing Eqs. (2) and (10):
\[\rho v_{s}\frac{\partial v_{n}}{\partial r}+\partial_{r}(\rho v_{n}v_{s})= \rho s\frac{\partial T}{\partial r}+\frac{\rho}{\rho_{s}}F_{ns}. \tag{12}\]
The temperature \(T(r)\) at location \(r\) can be obtained by integrating the above equation as:
\[T(r)=T_{b}+\int_{r}^{\infty}\mathrm{d}r^{\prime}G(r^{\prime}), \tag{13}\]
where
\[G(r):=\frac{1}{\rho_{s}s}F_{ns}-\frac{v_{s}}{s}\partial_{r}v_{n}-\frac{1}{\rho s }\partial_{r}(\rho v_{n}v_{s}). \tag{14}\]
Next, we integrate Eq. (4) from the heater surface \(R_{0}\) to \(r\) to obtain \(v_{n}(r)\) as:
\[v_{n}(r)=\frac{R_{0}^{N}\rho_{0}s_{0}}{r^{N}\rho s}v_{n0}+\frac{1}{r^{N}\rho s }I(r), \tag{15}\]
where
\[I(r):=\int_{R_{0}}^{r}\mathrm{d}r^{\prime}\,r^{\prime N}\frac{F_{ns}(r^{\prime})v_{ns}(r^{\prime})}{T(r^{\prime})}. \tag{16}\]
The quantities with the subscript "\({}_{0}\)" in the above equations indicate their values at \(r=R_{0}\), and the parameter \(N\) assumes values of 1 or 2, corresponding to cylindrical and spherical coordinates, respectively. Note that \(v_{n0}\) is related to the surface heat flux \(q_{0}\) as \(v_{n0}=q_{0}/\rho_{0}s_{0}T_{0}\), which transforms Eq. (15) into:
\[v_{n}(r)=\frac{q_{0}}{\rho sT_{0}}\left(\frac{R_{0}}{r}\right)^{N}+\frac{I(r) }{r^{N}\rho s}. \tag{17}\]
Finally, within the parameter ranges explored in our current research, it becomes evident that the drift term \(\nabla\cdot(\mathbf{v}_{L}L)\) and the term \(\gamma_{V}|\mathbf{v}_{ns}|^{\frac{5}{2}}\) in Eq. (6) are orders of magnitude smaller than the remaining terms. By omitting these two terms, we can deduce that \(L(r)=\gamma^{2}v_{ns}(r)^{2}\), where \(\gamma=\alpha_{V}/\beta_{V}\). Therefore, \(F_{ns}\) can be calculated as:
\[F_{ns}=\frac{\kappa}{3}\frac{\rho_{s}\rho_{n}}{\rho}B_{L}\gamma^{2}v_{ns}^{3}. \tag{18}\]
We must emphasize that Eq. (6) was originally proposed for homogeneous and isotropic counterflow. There are ongoing discussions regarding potential modifications of this equation for nonuniform flows [43, 44, 45]. In our present research, we will maintain the use of Eq. (18). However, we will adapt the \(\gamma\) values, originally derived for uniform counterflow [46, 47, 48], to best fit the available data under nonuniform counterflow conditions. The relevant details are provided in Sec. III.
Eqs. (10), (11), (13), (17), and (18) now form the base of our iterative numerical approach for solving the steady-state heat transfer problems involving cylindrical and spherical heaters. The iteration starts with constant He II properties \(P^{(0)}=P_{b}\), \(T^{(0)}=T_{b}\), and a prescribed normal-fluid velocity profile \(v_{n}^{(0)}(r)=\frac{q_{0}}{\rho^{(0)}s^{(0)}T_{0}}\left(\frac{R_{0}}{r} \right)^{N}\). Here, the superscript \({}^{(i=0,1,2\ldots)}\) denotes the iteration number. Utilizing the initial fields \((P^{(0)},T^{(0)},v_{n}^{(0)})\), we can calculate all relevant He II thermodynamic variables and other needed parameters, such as \(v_{s}^{(0)}\), \(\rho^{(0)}\), \(s^{(0)}\), \(F_{ns}^{(0)}\), etc. These results allow us to iteratively update \((P,T,v_{n})\) as:
\[P^{(i+1)}(r) =P_{b}-\rho_{s}^{(i)}v_{s}^{(i)}(r)^{2}-\rho_{n}^{(i)}v_{n}^{(i)}(r)^{2}, \tag{19}\] \[T^{(i+1)}(r) =T_{b}+\int_{r}^{\infty}\mathrm{d}r^{\prime}\,G^{(i)}(r^{\prime}), \tag{20}\] \[v_{n}^{(i+1)}(r) =\frac{q_{0}}{\rho^{(0)}s^{(0)}T_{0}}\left(\frac{R_{0}}{r}\right)^{N}+\frac{I^{(i)}(r)}{r^{N}\rho^{(i)}s^{(i)}}. \tag{21}\]
The iteration is terminated once the relative change in the temperature field between consecutive iterations, defined as \(|T^{(i)}-T^{(i-1)}|/T^{(i)}\), becomes less than \(10^{-5}\) at all \(r\). In the simulation, the integrals are performed using Simpson's rule with a step size of \(\Delta r=10\)\(\mu\)m [49].
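For orientation, the structure of this iteration can be sketched in a few lines of code. The sketch below is not the code used for the simulations: the He II property functions (\(\rho\), \(\rho_{n}\), \(\rho_{s}\), \(s\) as functions of \(P\) and \(T\)), the mutual friction coefficient \(B_{L}(T)\), and the ratio \(\gamma(T)\) are assumed to be supplied externally (e.g. as wrappers around HEPAK), and simple cumulative sums stand in for the Simpson quadrature.

```python
import numpy as np

# Minimal sketch of the fixed-point iteration in Eqs. (19)-(21).  `props` is an
# assumed object exposing vectorized rho, rho_n, rho_s, s as functions of (P, T);
# B_L and gamma are assumed callables of T.  Lengths in cm, so dr = 1e-3 is 10 um.
KAPPA = 9.97e-4  # quantized circulation, ~1e-3 cm^2/s as quoted in the text

def steady_state(q0, T_b, P_b, R0, N, props, B_L, gamma,
                 r_max=50.0, dr=1e-3, tol=1e-5):
    r = np.arange(R0, r_max, dr)
    T, P = np.full_like(r, T_b), np.full_like(r, P_b)
    vn = q0 / (props.rho(P, T) * props.s(P, T) * T_b) * (R0 / r) ** N   # initial guess
    while True:
        rho, s = props.rho(P, T), props.s(P, T)
        rho_n, rho_s = props.rho_n(P, T), props.rho_s(P, T)
        vs = -rho_n * vn / rho_s                                        # Eq. (10)
        vns = vn - vs
        Fns = (KAPPA / 3) * rho_s * rho_n / rho * B_L(T) * gamma(T)**2 * vns**3  # Eq. (18)
        P_new = P_b - rho_s * vs**2 - rho_n * vn**2                     # Eq. (11)
        G = (Fns / (rho_s * s)
             - vs / s * np.gradient(vn, r)
             - np.gradient(rho * vn * vs, r) / (rho * s))               # Eq. (14)
        T_new = T_b + np.flip(np.cumsum(np.flip(G))) * dr               # Eq. (13)
        I = np.cumsum(r**N * Fns * vns / T_new) * dr                    # Eq. (16)
        vn_new = (q0 / (rho * s * T_new[0])) * (R0 / r) ** N + I / (r**N * rho * s)  # Eq. (17)
        if np.max(np.abs(T_new - T) / T_new) < tol:                     # convergence test
            return r, P_new, T_new, vn_new
        P, T, vn = P_new, T_new, vn_new
```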
As an example, we consider a cylindrical heater with a radius \(R_{0}=0.2\) cm, subject to a constant surface heat flux \(q_{0}\), as depicted in Fig. 1(a).
Figure 1: (a) A schematic diagram of a long cylindrical heater of radius \(R_{0}\) with a constant surface heat flux \(q_{0}\). (b) Simulated profiles of temperature \(T(r)\) (top), vortex-line density \(L(r)\) (middle), and normal-fluid velocity \(v_{n}(r)\) (bottom) at \(q_{0}\) close to \(9.39\) W/cm\({}^{2}\) with \(T_{b}=1.78\) K, \(R_{0}=0.2\) cm, and \(h=50\) cm. (c) The calculated state parameter \((P_{0},T_{0})\) of the He II on the heater surface at various applied \(q_{0}\).
We set \(T_{b}=1.78\) K and \(h=50\) cm, and compute the steady-state profiles of \(T(r)\), \(L(r)\), and \(v_{n}(r)\) using the iterative method outlined earlier. The results for \(q_{0}\) close to \(9.39\) W/cm\({}^{2}\) are shown in Fig. 1(b). It is clear that approaching the heater, \(T(r)\), \(L(r)\), and \(v_{n}(r)\) all increase rapidly towards their maximum values at \(r=R_{0}\). In Fig. 1(c), we show the state parameters \((T_{0},P_{0})\) of the He II on the heater surface at various \(q_{0}\). The blue dot represents the state (\(T_{0}=T_{b},P_{0}=P_{b}\)) at \(q_{0}=0\). As \(q_{0}\) increases, the state approaches the saturation line of He II. The slight reduction in pressure is due to the Bernoulli effect incorporated in Eq. (11). At the peak heat flux \(q^{*}\approx 9.39\) W/cm\({}^{2}\), the He II state on the heater surface reaches the saturation line, where boiling can occur spontaneously.
## III Model calibration
To calibrate our model, we have looked into existing experimental research on \(q^{*}\) associated with steady-state nonuniform heat transfer in He II. There were several experimental studies on \(q^{*}\) for cylindrical heaters [17; 18; 19; 20; 21; 22]. By comparing the \(q^{*}\) values simulated with our model against these measurements at the same parameter combinations \((T_{b},h,R_{0})\), we adapt the coefficient \(\gamma\) in Eq. (18), denoted \(\gamma_{C}\), to best reproduce the data; the resulting optimized \(\gamma_{C}(T)\) is shown in Fig. 3, together with \(\gamma\) values measured in uniform counterflow.
## IV Peak heat flux analysis
In this section, we present the simulated \(q^{*}\) values for steady-state counterflow produced by both cylindrical and spherical heaters, considering a variety of parameter combinations \((T_{b},h,R_{0})\). We further demonstrate that \(q^{*}\) can be calculated using an integral formula that involves the temperature difference between the heater surface and the bath. Using our simulation data, we can devise a correlation to evaluate this temperature difference, which in turn leads to a robust correlation for \(q^{*}\).
### Cylindrical heater case
Following the same procedures as illustrated in Fig. 1, we determined \(q^{*}\) as a function of \(T_{b}\) for cylindrical heaters of various \(R_{0}\) and \(h\) values. These results are compiled in Fig. 4. It's evident that at fixed \(R_{0}\) and \(h\) values, \(q^{*}\) exhibits a non-monotonic dependence on \(T_{b}\), with a peak observed between 1.8 K and 1.9 K. On the other hand, at a fixed \(T_{b}\), \(q^{*}\) consistently increases with an increase in \(h\) or a decrease in \(R_{0}\).
To understand the behavior of \(q^{*}\), we can refer to Eq. (12). For the parameter combinations \((T_{b},h,R_{0})\) that we studied, we found that the terms on the left-hand side of Eq. (12) are typically more than two orders of magnitude smaller than the other terms across all values of \(r\). If we dismiss these minor terms and utilize Eq. (18) and (23), while noting that \(v_{ns}(r)=q(r)/\rho_{s}sT\), the following equation can be derived:
\[\frac{\mathrm{d}T}{\mathrm{d}r}=-C^{2}f(T)q(r)^{3}. \tag{24}\]
In steady-state counterflow, \(q(r)\) is given by \(q(r)=q_{0}(R_{0}/r)^{N}\) (recall that \(N=1\) for cylindrical heaters and \(N=2\) for spherical heaters). When the heater surface heat flux \(q_{0}\) reaches \(q^{*}\), the above equation can be rearranged and integrated to produce an expression for \(q^{*}\):
\[q^{*}=\left(\frac{3N-1}{C^{2}R_{0}}\int_{T_{b}}^{T_{b}+\Delta T}\frac{\mathrm{ d}T}{f(T)}\right)^{1/3}, \tag{25}\]
where \(\Delta T\) denotes the temperature increase on the heater surface relative to the He II bath at \(q_{0}=q^{*}\). This equation was introduced in Ref. [3]. However, due to the lack of information on how \(\Delta T\) depends on \((T_{b},h,R_{0})\), this equation was not employed to evaluate \(q^{*}\).
To facilitate the development of a practical correlation for \(q^{*}\), we have computed \(\Delta T\) values for all the cases depicted in Fig. 5. Some results showing the relationship of \(\Delta T\) with \(T_{b}\), \(h\), and \(R_{0}\) are presented in panels (a), (b), and (c) of Fig. 5. From Fig. 5(a), we can see that at fixed \(h\) and \(R_{0}\), \(\Delta T\) largely scales as \(T_{b}^{-4}\) across the entire bath temperature range we explored. Fig. 5(b) demonstrates a rather good linear dependence of \(\Delta T\) on \(h\) for given \(T_{b}\) and \(R_{0}\). Lastly, Fig. 5(c) reveals a somewhat mild power-law dependence, \(\Delta T\propto R_{0}^{\alpha}\), when \(T_{b}\) and \(h\) are fixed. This power exponent \(\alpha\) varies with \(h\) and \(T_{b}\), as listed in Table 1, and is generally small. Combining all these insights, we can propose the following simple correlation between \(\Delta T\) and the parameters \(T_{b}\), \(h\), and \(R_{0}\):
\[\Delta T(T_{b},h,R_{0})=D\frac{hR_{0}^{\alpha}}{T_{b}^{4}}, \tag{26}\]
where \(D\) is a numerical factor derivable from the scaling coefficients shown in Fig. 5(a)-(c). To evaluate \(D\) in a more systematic manner, we compute it as \(D=\Delta T/(hR_{0}^{\alpha}/T_{b}^{4})\) for each parameter combination \((T_{b},h,R_{0})\). Notably, within our chosen parameter range, all deduced values for \(D\) fall within the range \(D=0.024\pm 0.002\) K\({}^{5}\)/cm\({}^{1+\alpha}\). More details regarding the derivation of \(D\) are provided in Appendix A.
Figure 3: The optimized \(\gamma_{C}\) as a function of \(T\). Measured \(\gamma\) values in uniform counterflow in various experiments [53; 54; 55; 56; 57] are also shown.
With the obtained expression for \(\Delta T\), we can now derive a convenient correlation to evaluate \(q^{*}\). Given that \(\Delta T\) is typically much smaller than \(T_{b}\) (i.e., see Fig. 5), the integral in Eq. (25) can be approximated by evaluating \(f(T)\) at \(T=T_{b}+\frac{1}{2}\Delta T\), resulting in:
\[q^{*}\approx\left((3N-1)\Delta T/C^{2}R_{0}\right)^{1/3}\cdot f(T_{b}+\Delta T /2)^{-1/3}. \tag{27}\]
To verify the accuracy of this expression for cylindrical heaters, we plot the simulated \(q^{*}/(2\Delta T/C^{2}R_{0})^{1/3}\) in Fig. 6 as a function of \(T_{b}^{\prime}=T_{b}+\Delta T/2\) for all the parameter combinations we studied. Impressively, all the simulated data collapse onto a single curve, which agrees precisely with \(f(T_{b}^{\prime})^{-1/3}\).
In order to derive a convenient correlation for \(q^{*}\) that explicitly depends on \(T_{b}\), \(h\), and \(R_{0}\), one can perform a Taylor expansion of Eq. (27) as:
\[q^{*}\approx(2\Delta T/C^{2}R_{0})^{1/3}\left[\frac{1}{f(T_{b})}-\frac{\Delta T }{2}\frac{f^{\prime}(T_{b})}{f(T_{b})^{2}}\right]^{1/3}. \tag{28}\]
Using the expression for \(\Delta T\) from Eq. (26), we can substitute it into Eq. (28) to yield the following final correlation:
\[q^{*}\approx\left[\frac{2Dh}{C^{2}R_{0}^{1-\alpha}T_{b}^{4}f(T_{b})}\left(1- \frac{DhR_{0}^{\alpha}}{2T_{b}^{4}}\frac{f^{\prime}(T_{b})}{f(T_{b})}\right) \right]^{\frac{1}{3}}. \tag{29}\]
With this correlation, evaluating \(q^{*}\) becomes straightforward given a specific set of parameters \((T_{b},h,R_{0})\). It is worth noting from Eq. (29) that the dependence of \(q^{*}\) on \(R_{0}\) can be expressed as \(q^{*}\propto R_{0}^{-\frac{1}{m}}\), where \(m\approx 3/(1-\alpha)\). For the parameter ranges explored in our simulations, \(m\) varies from 3.06 to 3.4. The deviation of \(m\) from 3 is entirely due to the weak dependence of \(\Delta T\) on \(R_{0}\), i.e., \(\Delta T\propto R_{0}^{\alpha}\) as shown in Eq. (26). It is worth highlighting that such a deviation from \(m=3\) has indeed been reported experimentally [3].
### Spherical heater case
In the case of spherical heaters, we follow a similar procedure to that for the cylindrical heaters. We consider a spherical heater of radius \(R_{0}\) immersed at depth \(h\) in He II held at a bath temperature \(T_{b}\), and then conduct numerical simulations across various \(T_{b}\), \(h\), and \(R_{0}\) values. The obtained \(q^{*}\) data are displayed in Fig. 7.
Figure 5: (a) Simulated temperature rise \(\Delta T\) on the heater surface as a function of \(T_{b}\) for a cylindrical heater with fixed \(h\) and \(R_{0}\). (b) Dependence of \(\Delta T\) on the immersion depth \(h\) at fixed \(T_{b}\) and \(R_{0}\). (c) Dependence of \(\Delta T\) on the heater radius \(R_{0}\) at fixed \(T_{b}\) and \(h\).
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline \(h\) [cm] \(\backslash\) \(T_{b}\) [K] & 1.7 & 1.8 & 1.9 & 2.0 \\ \hline \hline
1 & 0.12 & 0.11 & 0.09 & 0.07 \\ \hline
5 & 0.08 & 0.07 & 0.06 & 0.04 \\ \hline
20 & 0.05 & 0.04 & 0.03 & 0.02 \\ \hline \end{tabular}
\end{table}
Table 1: The fitted exponent \(\alpha\) for cylindrical heaters
Figure 6: Simulated \(q^{*}/(2\Delta T/C^{2}R_{0})^{1/3}\) as a function of \(T_{b}^{\prime}=T_{b}+\Delta T/2\) for cylindrical heaters at all the parameter combinations \((T_{b},h,R_{0})\) we studied. The black curve represents \(f^{-1/3}(T_{b}^{\prime})\), where \(f\) is the known He II heat conductivity function [3]. The simulated data collapse nicely onto the \(f^{-1/3}\) curve.
From the data, it's evident that the variation of \(q^{*}\) with respect to \(T_{b}\), \(h\), and \(R_{0}\) for spherical heaters shows trends similar to those observed for cylindrical heaters. Moreover, for a given parameter set \((T_{b},h,R_{0})\), the \(q^{*}\) value for spherical heaters is consistently higher than that for cylindrical heaters.
The behavior of \(\Delta T\) for spherical heaters closely mirrors what we observed for cylindrical heaters. In Fig. 8(a), 8(b), and 8(c), we display representative results showing the dependencies of \(\Delta T\) on \(T_{b}\), \(h\), and \(R_{0}\). These results lead us to a correlation for \(\Delta T\) which strikingly takes the same form as Eq. (26) for cylindrical heaters, namely \(\Delta T=D(hR_{0}^{\alpha}/T_{b}^{4})\). The fitted values of \(\alpha\) (as shown in Table 2) are approximately double those for cylindrical heaters. The similarity of these expressions underscores the robustness of the correlation across different heater geometries. As before, the factor \(D\) for each parameter set \((T_{b},h,R_{0})\) can be computed as \(D=\Delta T/(hR_{0}^{\alpha}/T_{b}^{4})\). The resulting values of \(D\) for all studied cases fall within \(D=0.024\pm 0.002\) K\({}^{5}/\)cm\({}^{1+\alpha}\), matching precisely with those derived for cylindrical heaters. Further details on the derivation of \(D\) are provided in Appendix A.
To demonstrate the precision of Eq. (27) for spherical heaters, we again plot \(q^{*}/(5\Delta T/C^{2}R_{0})^{1/3}\) against \(T_{b}^{\prime}=T_{b}+\frac{1}{2}\Delta T\). As shown in Fig. 9, data points for all parameter combinations \((T_{b},h,R_{0})\) collapse onto a single curve described by \(f(T_{b}^{\prime})^{-1/3}\). Finally, using a similar approach, we can express \(q^{*}\) for spherical heaters explicitly in terms of \(T_{b}\), \(h\) and \(R_{0}\) by incorporating the expression for \(\Delta T\):
\[q^{*}\approx\left[\frac{5Dh}{C^{2}R_{0}^{1-\alpha}T_{b}^{4}f(T_{b})}\left(1- \frac{DhR_{0}^{\alpha}}{2T_{b}^{4}}\frac{f^{\prime}(T_{b})}{f(T_{b})}\right) \right]^{\frac{1}{3}}. \tag{30}\]
Compared to Eq. (29), apart from the variance in \(\alpha\), the main difference lies in the numerical factor \(3N-1=5\) for the spherical geometry.
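As a convenience, Eqs. (29) and (30) can be evaluated with a short routine. The sketch below is not part of the paper's software: the constant \(C\) from Eq. (24), the exponent \(\alpha\) from Table 1 or 2, the factor \(D\), and the heat-conductivity function \(f(T)\) with its derivative must all be supplied by the user.

```python
# Hedged sketch of Eq. (29) (cylinder, N = 1) and Eq. (30) (sphere, N = 2).
# D, alpha, C, f, and fprime are user-supplied inputs; f is the He II heat
# conductivity function of Ref. [3] and fprime its temperature derivative.
def peak_heat_flux(T_b, h, R0, N, D, alpha, C, f, fprime):
    dT = D * h * R0**alpha / T_b**4                  # Eq. (26): surface temperature rise
    bracket = 1.0 - 0.5 * dT * fprime(T_b) / f(T_b)  # first-order correction in Delta T
    prefactor = (3 * N - 1) * D * h / (C**2 * R0**(1 - alpha) * T_b**4 * f(T_b))
    return (prefactor * bracket) ** (1.0 / 3.0)      # peak heat flux q*
```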
## V Summary
We have conducted a comprehensive numerical analysis of the boiling peak heat flux \(q^{*}\) for steady-state heat transfer in He II from both cylindrical and spherical heaters. The \(q^{*}\) value was calculated using the He II two-fluid equations of motion for given bath temperature \(T_{b}\), heater immersion depth \(h\), and heater radius \(R_{0}\). We calibrated our model by comparing the simulated \(q^{*}\) values with available experimental data under the same parameter combinations \((T_{b},h,R_{0})\). The optimized model was then utilized to generate \(q^{*}\) values across a wide parameter range.
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline \(h\) [cm] \(\backslash\) \(T_{b}\) [K] & 1.7 & 1.8 & 1.9 & 2.0 \\ \hline \hline
1 & 0.20 & 0.18 & 0.15 & 0.11 \\ \hline
5 & 0.13 & 0.11 & 0.10 & 0.07 \\ \hline
20 & 0.08 & 0.07 & 0.06 & 0.04 \\ \hline \end{tabular}
\end{table}
Table 2: The fitted exponent \(\alpha\) for spherical heaters
Figure 8: (a) Dependence of \(\Delta T\) on the bath temperature \(T_{b}\) for a spherical heater of fixed \(h\) and \(R_{0}\). (b) Dependence of \(\Delta T\) on the immersion depth \(h\) at fixed \(T_{b}\) and \(R_{0}\). (c) Dependence of \(\Delta T\) on the heater radius \(R_{0}\) at fixed \(T_{b}\) and \(h\).
Figure 7: Simulated peak heat flux \(q^{*}\) for spherical heaters with various \(T_{b}\), \(h\), and \(R_{0}\).
Based on the obtained data, we developed convenient correlations of \(q^{*}\) that explicitly depend on \((T_{b},h,R_{0})\) for both cylindrical and spherical heaters. Notably, while spherical heaters generally exhibit higher \(q^{*}\) values than their cylindrical counterparts under identical parameters, the derived correlations share a structural resemblance. These correlations are valuable in the design of cooling systems that involve steady but inhomogeneous heat transfer in He II. Looking ahead, we plan to extend the current work to evaluate \(q^{*}\) in transient heat transfer of He II in nonhomogeneous geometries. For such transient heat transfer, the correlation of \(q^{*}\) is expected to be more complicated, since it will depend not only on \((T_{b},h,R_{0})\) but also on the heating duration \(\Delta t\). The insights obtained in the current research will form the foundation for our future transient heat transfer analysis.
###### Acknowledgements.
The authors acknowledge the support by the US Department of Energy under Grant DE-SC0020113 and the Gordon and Betty Moore Foundation through Grant GBMF11567. The work was conducted at the National High Magnetic Field Laboratory at Florida State University, which is supported by the National Science Foundation Cooperative Agreement No. DMR-2128556 and the state of Florida.
## Appendix A Determination of \(D\) factor
In the main text, we discussed that the temperature rise \(\Delta T=T_{0}-T_{b}\) at the peak heat flux \(q^{*}\) can be expressed in terms of the bath temperature \(T_{b}\), the hydrostatic head \(h\), and the heater radius \(R_{0}\) as given by Eq. (26). To determine \(D\) in a systematic manner, we calculate it as \(D=\Delta T/(hR_{0}^{\alpha}/T_{b}^{4})\) for each parameter combination \((T_{b},h,R_{0})\). Fig. 10 (a) and (b) show the results for cylindrical and spherical heaters, respectively. The data cover a wide range of \(T_{b}\), \(h\) and \(R_{0}\) and are indicated by distinct marker shapes and colors. It is clear that \(D\) remains roughly constant across all the parameter combinations. In each figure, two colored bands are shown. The narrow band shown in orange represents the region bounded by \(D=\bar{D}\pm\sigma_{D}\), where \(\bar{D}=0.024\)\(\mathrm{K}^{5}/\mathrm{cm}^{1+\alpha}\) is the mean value of \(D\) averaged over all the data points and \(\sigma_{D}\) denotes the standard deviation. The wide band shown in blue is bounded by the maximum \(D_{\mathrm{max}}=0.026\)\(\mathrm{K}^{5}/\mathrm{cm}^{1+\alpha}\) and the minimum \(D_{\mathrm{min}}=0.023\)\(\mathrm{K}^{5}/\mathrm{cm}^{1+\alpha}\) among all the data points. It is clear that all the \(D\) values fall within the range \(D=0.024\pm 0.002\)\(\mathrm{K}^{5}/\mathrm{cm}^{1+\alpha}\), across the parameter ranges considered in the paper, for both cylindrical and spherical heaters.
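In code, this amounts to rearranging Eq. (26) and taking simple statistics over the simulated cases; the sketch below assumes hypothetical NumPy arrays of the simulated \(\Delta T\) values and the corresponding parameters.

```python
import numpy as np

# dT, T_b, h, R0, alpha: hypothetical arrays, one entry per simulated case.
def d_factor_statistics(dT, T_b, h, R0, alpha):
    D = dT * T_b**4 / (h * R0**alpha)           # Eq. (26) rearranged: D = dT / (h R0^a / T_b^4)
    return D.mean(), D.std(), D.min(), D.max()  # D_bar, sigma_D, and the min/max band
```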
Figure 10: (a)–(b) Correlation factor \(D\) for cylindrical and spherical heaters, respectively, calculated under all the parameter combinations \((T_{b},h,R_{0})\) we explored.
Figure 9: Simulated \(q^{*}/(5\Delta T/C^{2}R_{0})^{1/3}\) as a function of \(T_{b}^{\prime}=T_{b}+\Delta T/2\) for spherical heaters at all the parameter combinations \((T_{b},h,R_{0})\) we studied. The black curve represents \(f^{-1/3}(T_{b}^{\prime})\), where \(f\) is the known He II heat conductivity function [3].
|
2310.11324
|
Quantifying Language Models' Sensitivity to Spurious Features in Prompt
Design or: How I learned to start worrying about prompt formatting
|
As large language models (LLMs) are adopted as a fundamental component of
language technologies, it is crucial to accurately characterize their
performance. Because choices in prompt design can strongly influence model
behavior, this design process is critical in effectively using any modern
pre-trained generative language model. In this work, we focus on LLM
sensitivity to a quintessential class of meaning-preserving design choices:
prompt formatting. We find that several widely used open-source LLMs are
extremely sensitive to subtle changes in prompt formatting in few-shot
settings, with performance differences of up to 76 accuracy points when
evaluated using LLaMA-2-13B. Sensitivity remains even when increasing model
size, the number of few-shot examples, or performing instruction tuning. Our
analysis suggests that work evaluating LLMs with prompting-based methods would
benefit from reporting a range of performance across plausible prompt formats,
instead of the currently-standard practice of reporting performance on a single
format. We also show that format performance only weakly correlates between
models, which puts into question the methodological validity of comparing
models with an arbitrarily chosen, fixed prompt format. To facilitate
systematic analysis we propose FormatSpread, an algorithm that rapidly
evaluates a sampled set of plausible prompt formats for a given task, and
reports the interval of expected performance without accessing model weights.
Furthermore, we present a suite of analyses that characterize the nature of
this sensitivity, including exploring the influence of particular atomic
perturbations and the internal representation of particular formats.
|
Melanie Sclar, Yejin Choi, Yulia Tsvetkov, Alane Suhr
|
2023-10-17T15:03:30Z
|
http://arxiv.org/abs/2310.11324v2
|
Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design _or: How I learned to start worrying about prompt formatting_
###### Abstract
As large language models (LLMs) are adopted as a fundamental component of language technologies, it is crucial to accurately characterize their performance. Because choices in prompt design can strongly influence model behavior, this design process is critical in effectively using any modern pre-trained generative language model. In this work, we focus on LLM sensitivity to a quintessential class of meaning-preserving design choices: prompt formatting. We find that several widely used open-source LLMs are extremely sensitive to subtle changes in prompt formatting in few-shot settings, with performance differences of up to 76 accuracy points when evaluated using LLaMA-2-13B. Sensitivity remains even when increasing model size, the number of few-shot examples, or performing instruction tuning. Our analysis suggests that work evaluating LLMs with prompting-based methods would benefit from reporting a range of performance across plausible prompt formats, instead of the currently-standard practice of reporting performance on a single format. We also show that format performance only weakly correlates between models, which puts into question the methodological validity of comparing models with an arbitrarily chosen, fixed prompt format. To facilitate systematic analysis we propose FormatSpread, an algorithm that rapidly evaluates a sampled set of plausible prompt formats for a given task, and reports the interval of expected performance without accessing model weights1. Furthermore, we present a suite of analyses that characterize the nature of this sensitivity, including exploring the influence of particular atomic perturbations and the internal representation of particular formats.
Footnote 1: We will release FormatSpread’s code at [https://github.com/msclar/formatspread](https://github.com/msclar/formatspread).
## 1 Introduction
As the capabilities of LLMs have rapidly improved, their sensitivity to input prompt features has been used to optimize performance via prompt engineering (White et al., 2023). However, there has been little work in characterizing this sensitivity, especially to seemingly innocuous feature choices that preserve prompt meaning and intent. In this work, we analyze the sensitivity of widely used, open-source LLMs to a class of features that should not influence a prompt's interpretation: formatting choices. We find that pre-trained LLMs are sensitive to these choices in unpredictable ways, with accuracy varying by up to 76 points for LLaMA-2-13B between equivalent formats, and by \(\sim\)10 accuracy points on average across 50+ tasks and several models. We also show that this variance is not eliminated by adding few-shot examples, increasing model size, or instruction tuning.
Designing prompt templates is a critical part of effectively using a pre-trained language model. This design process includes making choices about wording, choosing few-shot examples for in-context learning, and making decisions about seemingly trivial features like formatting. This process, and often even the resulting templates, is rarely reported or discussed in research papers, under the assumption that performance variance across these choices is insignificant compared to variance across
data points or models. However, some anecdotal evidence points to formatting choices actually having a significant influence on model behavior (Aghajanyan, 2023). In some cases, researchers report a limited number of manually generated formats to show that scaling trends hold despite performance being significantly different (Schick et al., 2021). The assumption that formatting does not influence overall model performance may become problematic when improvements over existing approaches are attributed to the amount and source of training data, number of parameters, or model architecture, without also accounting for changes in prompt format. Ignoring variance across formats may also negatively affect user experience, e.g. if users inadvertently choose formats the LLM does not perform well on.
Our proposed tool, FormatSpread, enables a systematic analysis of these variances across a wide set of semantically equivalent prompt formats within a user-specified computational budget. We find that choices in formatting few-shot examples during in-context learning introduce spurious biases that may lead to significantly different conclusions in model performance. The sensitivity to formatting choices that we discover across widely-used, open-source models suggests that future research would benefit from reporting a performance _spread_ over a sufficient sample of plausible formats, instead of simply reporting the formatting used and its performance, as is currently standard. Moreover, we argue that this reporting is crucial when comparing the performance of different models, as we show the influence of formatting choices only weakly correlates between models, thus making and fixing a formatting choice could introduce a significant confounding factor.
Fully exploring the space of prompt formats is intractable, as computation costs scale linearly with the number of formats considered. FormatSpread efficiently explores the space of prompt formats under a user-specified computational budget using Bayesian optimization. FormatSpread does not require access to the model weights, allowing its use on API-gated models: we find a spread of up to 56 accuracy points, with a median spread of 6.4 accuracy points, with GPT3.5 across 320 formats and 53 tasks at a cost of under 10 USD on average per task. Beyond facilitating evaluation, we also propose a suite of analyses to further characterize model sensitivity to formatting. Among other results, we show that the separability of continuous prompt embeddings correlates with the spread observed in task performance.
## 2 Overview
We evaluate LLM performance over the space of prompt formats that may plausibly be chosen by a non-adversarial user when designing a prompt for a target task, where the space of formats is defined by a grammar (§3.1). Our grammar's definition naturally induces a definition of semantic equivalence among formats. We quantify model sensitivity in terms of the performance range on a target task across the space of prompt formats equivalent to the original choice (§4.2). We cast the problem of searching across this space as a bandit problem, and propose FormatSpread (§3).
Figure 1: Slight modifications in prompt format templating may lead to significantly different model performance for a given task. Each <text> represents a different variable-length placeholder to be replaced with actual data samples. Example shown corresponds to 1-shot LLaMA-2-7B performances for task280 from SuperNaturalInstructions (Wang et al., 2022). This StereoSet-inspired task (Nadeem et al., 2021) requires the model to, given a short passage, classify it into one of four types of stereotype or anti-stereotype (gender, profession, race, and religion).
FormatSpread consists of a grammar (§3.1) and a procedure to estimate the minimum and maximum performance across a set of semantically equivalent formats given a pre-defined metric (§3.2). FormatSpread uses Bayesian optimization to identify the expected performance range with low additional computational cost (§4.5), all without requiring access to model weights, which enables use on API-gated LLMs. Furthermore, we perform an in-depth analysis of this observed sensitivity, including quantifying the contribution of individual feature choices to the final performance (§4.3) and measuring the identifiability of a format based solely on a model's internal, continuous representation of any prompt via correlation with model performance (§4.4).
## 3 Measuring Sensitivity with FormatSpread
### Grammar of Plausible Prompt Formats
We construct a grammar that defines both the space of plausible prompt formats and semantic equivalence between formats. The grammar is manually constructed, as opposed to automatically induced from data, to guarantee a higher level of precision when defining the set of equivalent formats. Our grammar is directly tested by verifying that it can generate the formatting associated with 100+ Super-NaturalInstructions tasks (Wang et al., 2022).
Our grammar consists of fields that are composed to create a prompt format. For example, the format 'Passage: <text>\(\backslash\)nAnswer: <text>' has basic fields 'Passage: <text>' and 'Answer: <text>', denoted \(a_{1}\) and \(a_{2}\). Each basic field consists of a _descriptor_ (e.g. 'Passage'), a _separator_ (e.g. ': '), and a text placeholder to replace with each data point. We define basic fields as \(B_{1}(d,s,f):=f(d)s\texttt{<text>}\) using Backus-Naur notation, where \(d\) is a descriptor string, \(s\in\mathcal{S}_{1}\) a separator, and \(f\in\mathcal{F}_{\text{casing}}\) a function that alters \(d\) while preserving meaning. Thus, in our example, \(a_{1}=B_{1}(\texttt{Passage},\texttt{': '},id)\) and \(a_{2}=B_{1}(\texttt{Answer},\texttt{': '},id)\), with \(id\) the identity function. We define joining several fields as \(B_{2}^{(n)}(X_{1},\ldots,X_{n},c):=X_{1}cX_{2}c\ldots cX_{n}\), with \(c\in\mathcal{C}\) being a _space_. Our example's prompt format may then be written as \(B_{2}^{(2)}(a_{1},a_{2},c)\), with \(c\) the newline space '\(\backslash\)n'.
The grammar also supports enumeration, which is defined as joining several basic fields, each representing a different list item. For example, the enumeration 'Option (A): <text>, Option (B): <text>, Option (C): <text>' may be written as \(B_{2}^{(3)}(a_{1},a_{2},a_{3},c)\) with the space \(c=\texttt{', '}\), where \(a_{i}=B_{1}(e_{i},\texttt{': '},id)\). In our example, \(e_{i}\) represents 'Option (A)', and may in turn be written as the concatenation \(e_{i}:=d\,s_{2}\,f_{\text{item}}(i)\) with \(d=\texttt{'Option'}\), \(s_{2}=\texttt{' '}\) (a single space), and \(f_{\text{item}}(1)=\texttt{'(A)'}\). Each \(f_{\text{item}}\in\mathcal{F}_{\text{item}}\) transforms an item \(i\) using a number format (e.g. letters or Roman numerals) and an item wrapper (e.g. (A) or [A]).
In summary, we define valid prompt formats as those accepted by the following grammar:
\[\begin{aligned}
B_{0}() &:=\texttt{<text>}\\
B_{0}^{\prime}(d,s,f) &:=f(d)s\quad\text{with }s\in\mathcal{S}_{1},\;f\in\mathcal{F}_{\text{casing}}\\
B_{1}(d,s,f) &:=f(d)s\texttt{<text>}\quad\text{with }s\in\mathcal{S}_{1},\;f\in\mathcal{F}_{\text{casing}}\\
B_{2}^{(n)}(X_{1},\ldots,X_{n},c) &:=X_{1}c\ldots cX_{n}\quad\text{with }c\in\mathcal{C},\,X_{i}\in\{B_{0},B_{0}^{\prime},B_{1},B_{2},B_{3}\}\;\forall i\\
B_{3}^{(n)}(d,j_{1},\ldots,j_{n},s_{1},s_{2},c,f_{1},f_{2}) &:=B_{2}^{(n)}(B_{1}(e_{1},s_{1},f_{2}),\ldots,B_{1}(e_{n},s_{1},f_{2}),c)\\
&\phantom{:=}\text{where }e_{i}:=f_{2}(d)\,s_{2}\,f_{1}(j_{i}),\;j_{i}\in\mathbb{N}_{0}\;\forall i,\\
&\phantom{:=}s_{1}\in\mathcal{S}_{1},\;s_{2}\in\mathcal{S}_{2},\;f_{1}\in\mathcal{F}_{\text{item}},\;f_{2}\in\mathcal{F}_{\text{casing}}
\end{aligned}\]
Our grammar defines valid formats as finite compositions of \(B_{0},B_{0}^{\prime},B_{1},B_{2},B_{3}\). The sets \(\mathcal{S}_{1},\mathcal{S}_{2},\mathcal{C}\), \(\mathcal{F}_{\text{casing}}\), \(\mathcal{F}_{\text{item}}\) (two sets of separators, spaces, casing functions, and itemizing functions respectively) are pre-defined by the user. Throughout this work, we instantiate all sets with values typically observed in human-written prompt formats. We intentionally only modify the casing of descriptors (via \(\mathcal{F}_{\text{casing}}\)) to guarantee semantic equivalence; one may also define a set of functions that paraphrases the descriptor, e.g., via synonym replacement. Appendix A.2 contains the full list of values we use for the constant sets, as well as a visualization of a prompt template generated from the grammar.
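For concreteness, the following is a minimal sketch (not the authors' released implementation) of how formats accepted by this grammar can be rendered as template strings; the names `render_b1`, `render_b2`, and `render_b3` are illustrative only.

```python
# A minimal sketch, not the authors' released implementation.
from typing import Callable, List

TEXT = "<text>"  # placeholder later replaced by each data point


def render_b1(descriptor: str, separator: str,
              casing: Callable[[str], str] = lambda d: d) -> str:
    """B1(d, s, f) := f(d) s <text>"""
    return f"{casing(descriptor)}{separator}{TEXT}"


def render_b2(fields: List[str], space: str) -> str:
    """B2^(n)(X1, ..., Xn, c) := X1 c X2 c ... c Xn"""
    return space.join(fields)


def render_b3(descriptor: str, n_items: int, sep1: str, sep2: str, space: str,
              item_fmt: Callable[[int], str],
              casing: Callable[[str], str] = lambda d: d) -> str:
    """B3: an enumeration of B1 fields with descriptors e_i = f2(d) s2 f1(i)."""
    fields = [render_b1(f"{casing(descriptor)}{sep2}{item_fmt(i)}", sep1)
              for i in range(1, n_items + 1)]
    return render_b2(fields, space)


# 'Passage: <text>\nAnswer: <text>'
a1 = render_b1("Passage", ": ")
a2 = render_b1("Answer", ": ")
print(render_b2([a1, a2], "\n"))

# 'Option (A): <text>, Option (B): <text>, Option (C): <text>'
item_letters = lambda i: f"({chr(ord('A') + i - 1)})"
print(render_b3("Option", 3, ": ", " ", ", ", item_letters))
```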
**Prompt Format Equivalence.** Two prompt formats \(p_{1},p_{2}\) are equivalent if they represent the same rule application \(B_{i}\), the descriptors (if any) are the same, and the sub-elements (if any) are equivalent. Appendix A.1 contains the formal definition of equivalence. The grammar's strict definition allows
us to assume that sets of equivalent formats share equivalent meanings. When measuring sensitivity (§3.2), we explore only the space of formats equivalent to a task's original format.
**Contextual Restrictions.** We define restrictions to the combinations of spaces and separators to further ensure naturalness. For example, if \(B_{2}(X_{1},\ldots,X_{n},c)\) where \(c\) does not contain a newline, then each \(X_{i}\)'s separators and any subcomponents' separators should not contain a newline. This avoids unnatural formats like Input:\(\backslash\)n <text> Output:\(\backslash\)n <text>. We also allow for adding conditions that force constants (separators, spaces, etc.) in different applications of \(B_{i}\) to be equal. When measuring sensitivity to format perturbations, if two separators or spaces are equal in an original format, they are forced to jointly change to be considered equivalent. Appendix A.3 contains all contextual restrictions.
**Final Prompt Construction.** Given a valid format \(p\) accepted by the grammar, the final prompt is constructed by concatenating with space \(c\) an instruction string \(inst\), \(n\) few-shot data points \(D_{1},\ldots,D_{n}\) exemplifying the task, and a data point \(D_{n+1}\) to be solved. All few-shot examples \(D_{i}\) are formatted using \(p\). Thus, the final prompt template is: \(inst\ c\ p(D_{1})\ c\ p(D_{2})\ c\ \ldots\ c\ p(D_{n})\ c\ p(D_{n+1})\). Since \(D_{n+1}\)'s output will be generated by the model, an empty string is added in place of the answer in the last field in the template. Prompt construction will modify \(inst\) to match specific choices encoded in \(p\): concretely, if \(p\) enumerates valid multiple-choice options as characters \(x_{1}\ldots x_{n}\), we ensure \(inst\) refers to these choices as \(x_{1}\ldots x_{n}\).
### Measuring Sensitivity
We measure how plausible choices in prompt formatting influence quantifiable metrics of generated outputs. Given a set of plausible formats \(\{p_{1},\ldots,p_{n}\}\), a dataset \(\mathcal{D}\), and a scalar metric \(m\), let the _performance interval_ be \([\min_{i}m(p_{i},\mathcal{D}),\max_{i}m(p_{i},\mathcal{D})]\). We define the _performance spread_ or simply _spread_ as \(\max_{i}m(p_{i},\mathcal{D})-\min_{i}m(p_{i},\mathcal{D})\). Higher spread indicates more sensitivity to variance within the space of plausible, semantically-equivalent formats. While our method is agnostic to the scalar metric \(m\) used, and one could consider a number of metrics including text length, formality, or toxicity, throughout this work we focus our analysis on estimated task accuracy _acc_. Due to ease in automatic evaluation, here we evaluate on classification tasks.
Our goal is to compute spread for a given model and task. A comprehensive approach would be to fully evaluate each plausible format \(p_{i}\) on the entire evaluation dataset \(\mathcal{D}\). This increases the cost of reporting a model's performance linearly with \(n\), which becomes computationally infeasible for large values of \(n\). Following prior gradient-free prompt engineering work (Zhou et al., 2023; Pryzant et al., 2023), we model our problem as a multi-arm bandit. Given a random sample of \(n\) formats (arms) \(p_{1},\ldots,p_{n}\) for a task, an arm \(p_{i}\)'s hidden value is the actual performance \(m(p_{i},\mathcal{D})\) when evaluated on the full dataset \(\mathcal{D}\), and the reward for pulling the arm is an estimate \(m(p_{i},\mathcal{\tilde{D}})\) where \(\mathcal{\tilde{D}}\subset\mathcal{D}\), \(|\mathcal{\tilde{D}}|=B\) (mini-batch size) and no element of \(\mathcal{\tilde{D}}\) has yet been evaluated with \(p_{i}\).
We assume a budget of \(E\) total data point evaluations. We first search for the highest performing format with budget \(E/2\), and then for the lowest performing format with the remaining \(E/2\). Evaluations made during the first search are readily available for the second, which yields a more informative prior for many formats. We consider two well-known regret minimization bandit algorithms: Thompson sampling (used in FormatSpread) and Upper Confidence Bound (UCB).
**Thompson Sampling.** This simple, high-performing Bayesian inference heuristic randomly draws each arm according to its probability of being optimal (Chapelle & Li, 2011). Each \(m(p_{i},\mathcal{D})\) is modeled as a random variable, and since with our target metric each data point evaluation is a Bernoulli trial, it is natural to model \(m(p_{i},\mathcal{D})\) as a Beta distribution. In each round, Thompson sampling draws from each \(m(p_{i},\mathcal{\tilde{D}})\) and chooses the best arm \(\hat{i}\) (Algorithm 1). It then updates \(\hat{i}\) according to the number of observed successes \(r\), and the corresponding \(B-r\) failures, within \(\mathcal{\tilde{D}}\).
Thompson sampling allows for setting informative priors \((\alpha_{i},\beta_{i})\) based on domain knowledge to accelerate runtime. Appendix A.4 details the exact priors we use. To our knowledge, we are the first to consider a Bayesian sampling method for prompt optimization.
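The following is a minimal sketch of the Thompson-sampling search for the highest-performing format under the Bernoulli-trial assumption above. `evaluate_batch` is a hypothetical callable that scores a fresh mini-batch with a given format and returns the number of correct answers; it is not part of any released API.

```python
import random


def thompson_best_format(n_formats, evaluate_batch, budget, batch_size=20,
                         prior_alpha=1.0, prior_beta=1.0):
    alpha = [prior_alpha] * n_formats  # pseudo-counts of successes
    beta = [prior_beta] * n_formats    # pseudo-counts of failures
    spent = 0
    while spent + batch_size <= budget:
        # Draw one sample from each arm's Beta posterior; pull the best arm.
        draws = [random.betavariate(alpha[i], beta[i]) for i in range(n_formats)]
        i = max(range(n_formats), key=draws.__getitem__)
        correct = evaluate_batch(i, batch_size)  # Bernoulli trials on unseen data
        alpha[i] += correct
        beta[i] += batch_size - correct
        spent += batch_size
    means = [alpha[i] / (alpha[i] + beta[i]) for i in range(n_formats)]
    best = max(range(n_formats), key=means.__getitem__)
    return best, means
```

The lowest-performing format can be found symmetrically with the remaining budget by treating incorrect answers as "successes", reusing the counts already gathered.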
**Upper Confidence Bound (UCB) Sampling.** UCB (Lai et al., 1985) computes an upper confidence bound to each arm's performance, derived from Chernoff's bound. The key difference with Thompson sampling is in how \(\theta_{i}^{(t)}\) is defined. In UCB's frequentist approach, \(\theta_{i}^{(t)}\) is assigned the estimated
accuracy plus the upper confidence bound: \(\theta_{i}^{(t)}\!\leftarrow\!S_{i}/N_{i}+c\sqrt{\log(t)/N_{i}}\). We use \(c=2\) following Pryzant et al. (2023), who find UCB with \(c=2\) to be most effective for prompt optimization.
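For comparison, a short sketch of this UCB index, assuming \(S_i\) observed successes over \(N_i\) pulls of arm \(i\) at round \(t\):

```python
import math


def ucb_index(s_i: int, n_i: int, t: int, c: float = 2.0) -> float:
    if n_i == 0:
        return float("inf")  # force each arm to be pulled at least once
    return s_i / n_i + c * math.sqrt(math.log(t) / n_i)
```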
**Naive Sampling.** Each prompt format is evaluated on \(E/n\) points (with appropriate rounding).
## 4 Characterizing Prompt Format Variance with FormatSpread
### Experimental setup
**Data.** We use a subset of 53 tasks from Super-NaturalInstructions (Wang et al., 2022) with diverse human-written formats and instructions, comprising 19 multiple-choice tasks and 34 classification tasks with \(\{2,3,4\}\) basic fields. Appendix B.1 details the exact task selection procedure. To construct the final prompt template, we concatenate each task's instruction and \(n\) formatted few-shot examples using \(\backslash\)n\(\backslash\)n as spacing. While selection and ordering of few-shot examples is a component of prompt design influencing features of model output (Lu et al., 2022), our work focuses on prompt formatting. To remove this confounder, we fix the exact choice and ordering of examples for each task and for a given number of shots \(n\). Few-shot examples for each task are chosen randomly within each dataset and are not used for evaluation. We evaluate task data samples in an arbitrary order fixed across settings. Datasets are assumed to be of size 1,000 for fair evaluation across tasks.
**Models.** We evaluate LLaMA-2-\(\{\)7B,13B,70B\(\}\)(Touvron et al., 2023), Falcon-7B and Falcon-7B-Instruct (Almazrouei et al., 2023), GPT-3.5-Turbo (Schulman et al., 2022), all autoregressive LMs.
**Task Evaluation Metrics.** We use two popular measures for computing accuracy: exact prefix matching and probability ranking. In exact prefix matching, we check if the output's prefix matches the expected answer after normalization (casing, spacing, newlines). Ranking accuracy computes the rate that the expected answer is the highest-ranked valid option (in multiple choice and classification tasks) according to the model's output distribution. Results are reported using ranking accuracy unless specified otherwise. Appendix B.2 shows additional analysis of exact prefix matching, with spreads even higher than those shown in Section 4.2, and including how formatting choice affects task degeneration (i.e., not answering any valid option).
### Prompt formats have a large performance spread, not eliminated by increasing few-shot examples or model size, nor with instruction tuning
For each evaluation task we randomly sample 10 plausible prompt formats and use FormatSpread to compute performance spread for each model and \(n\)-shot choice (Figure 3). We find significant performance spread across tasks, with a median spread of 7.5 accuracy points across choices in the model and the number of few-shot examples. 20% of tasks consistently result in a spread of at least 15 accuracy points for all LLaMA-2 settings, and at least 9 points for all Falcon settings. We observe several tasks with performance spread over 70 accuracy points. Because this analysis uses only 10 randomly sampled formats, it represents a lower bound of the true spreads for each task. Furthermore, there exists significant performance spread regardless of increased model size (Figure 1(a) and Figure 1(f) for LLaMA-2-70B), instruction tuning (Figure 1(b)), or number of few-shot examples (Figure 1(c); Figure 1(a) and 1(b) plot 1- and 5-shot jointly). Appendix B.2 demonstrates similar results on a selection of non-classification tasks.
**Comparison trends between models are often reversed just by choosing different formats.** Assuming model \(M\) is better than \(M^{\prime}\) by at least \(d\) accuracy using prompt \(p\), we compute how often \(M^{\prime}\) achieves at least \(d\) higher accuracy than \(M\) under a different format \(p^{\prime}\). Figure 4 shows these
trends are often reversed: LLaMA-2-13B and -70B reverse trend by at least \(d\) = 0.02 with probability 0.141; LLaMA-2-7B and Falcon-7B reverse trend by at least \(d\) = 0.02 with probability 0.140. Strikingly, often both experiments (first using \(p\), and then \(p^{\prime}\)) were statistically significant (p-value \(<0.05\)) on 1000 samples2: 76% and 47% respectively for the two model comparisons mentioned above. We find that formats yielding high performance for model \(M\) may not yield high performance for \(M^{\prime}\), implying that **formats may not be inherently good or bad** (Appendix B.2).
Footnote 2: We use one-sided McNemar tests, also known as paired \(\chi^{2}\) tests, since we evaluate models on the same set of samples. We test the significance of \(M\) being _better_ than \(M^{\prime}\) under \(p\), and \(M\) being _worse_ than \(M^{\prime}\) under \(p^{\prime}\).
### How do individual features contribute to performance?
We analyze how choices in particular constants (i.e. \(\mathcal{S}_{1},\mathcal{S}_{2},\mathcal{C}\), \(\mathcal{F}_{\text{casing}}\), \(\mathcal{F}_{\text{item}}\)) independently influence task performance across different formats. Figure 5 shows the distribution of accuracy for 500 sampled prompts conditioned on the choice of \(\mathcal{S}_{1}\) (the separator between a descriptor and the text placeholder) for one task in Super-NaturalInstructions. When comparing the individual influence of two feature choices, we measure both _weak_ and _strong_ notions of dissimilarity between distributions of accuracy across prompts conditioned on a chosen feature. We say two constant choices yield _weakly_ different accuracy distributions if the values between the first quartile (\(Q_{1}\)) and third quartile (\(Q_{3}\)) do not intersect. This is equivalent to the boxes in a boxplot not overlapping. We say two constant choices yield _strongly_ different accuracy distributions if the ranges \([2.5Q_{1}-1.5Q_{3},2.5Q_{3}-1.5Q_{1}]\) do not overlap (adjusted to end in a data point). This is equivalent to two boxplots with their whiskers not overlapping. In Figure 5, '\n\n\t' and '\n\t' (the fourth and sixth separators) are only weakly different.
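A small sketch of these two overlap tests, assuming each constant choice is summarized by the list of per-format accuracies observed under it; the adjustment of whisker ends to the nearest data point is omitted for brevity.

```python
import statistics


def _q1_q3(accs):
    q1, _, q3 = statistics.quantiles(accs, n=4)
    return q1, q3


def weakly_different(accs_a, accs_b) -> bool:
    q1a, q3a = _q1_q3(accs_a)
    q1b, q3b = _q1_q3(accs_b)
    return q3a < q1b or q3b < q1a  # the [Q1, Q3] boxes do not overlap


def strongly_different(accs_a, accs_b) -> bool:
    q1a, q3a = _q1_q3(accs_a)
    q1b, q3b = _q1_q3(accs_b)
    lo_a, hi_a = q1a - 1.5 * (q3a - q1a), q3a + 1.5 * (q3a - q1a)  # whiskers
    lo_b, hi_b = q1b - 1.5 * (q3b - q1b), q3b + 1.5 * (q3b - q1b)
    return hi_a < lo_b or hi_b < lo_a  # whisker ranges do not overlap
```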
We compute accuracy for 500 random formats with 250 samples each on 31 tasks for 1-shot LLaMA-2-7B. Table 1 shows that choices in \(\mathcal{S}_{2}\), \(\mathcal{F}_{\text{item1}}\), \(\mathcal{F}_{\text{casing}}\) do not independently predict performance differences (weakly or strongly): although these features can have a large performance variance and thus should be explored with FormatSpread, they cannot be used to independently predict accuracy changes. Other constant sets have varying degrees of differences, with \(\mathcal{S}_{1}\) (separators) and \(\mathcal{F}_{\text{item2}}\) (number format changes in enumerations) having the most individual impact. All tasks with strong dissimilarities are shown in Appendix B.4.
Figure 3: Spread across models and \(n\)-shots.
**Small prompt variations often yield large performance differences.** Table 2 shows a selection of tasks where changing a single constant on a format (e.g., casing in task322) results in large accuracy differences. Figure 6 shows that regardless of the scoring criterion used, a significant ratio of these atomic changes are associated with large accuracy changes. For example, 24% of atomic changes have an associated accuracy change of at least 5 points when using exact prefix matching as scoring criteria (11% when using probability ranking).
The space of prompt format accuracy is highly non-monotonic, which makes local search algorithms over the space less effective. Let \((p_{1},p_{2},p_{3})\) be a prompt format triple such that \(p_{i+1}\) is obtained by making an atomic change to \(p_{i}\). We argue that if the prompt format space is smooth, we should often see a triple's accuracy be strictly monotonic over \(i\). We choose 24 tasks (13 multiple choice, 11 non-multiple choice), sample 300 \((p_{1},p_{2},p_{3})\) triples for each, and compute the accuracy (using exact prefix matching) of each \(p_{i}\) on 250 samples. 32.4% and 33.6% of triples were monotonic for multiple-choice and non-multiple-choice tasks respectively. Given that random shuffling within a triple will result in monotonicity 33.3% of the time, this suggests that local search mechanisms like simulated annealing may not be effective as they require a locally smooth search space.
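A minimal sketch of the monotonicity check, assuming each triple is given as its three accuracies:

```python
def monotone_fraction(triple_accuracies) -> float:
    """Fraction of (p1, p2, p3) triples whose accuracies are strictly monotonic,
    to be compared with the 1/3 expected under random ordering."""
    def is_monotone(a, b, c):
        return a < b < c or a > b > c
    triples = list(triple_accuracies)
    return sum(is_monotone(*t) for t in triples) / len(triples)
```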
### Prompt formats are identifiable transformations of prompt embeddings
Prompt format choices represent a deterministic transformation of the input, even if its impact on the resulting performance is hard to predict. We represent prompt embeddings as the last hidden layer obtained when processing the whole input prompt (immediately before selecting the first token to generate). We demonstrate that format choice yields a highly identifiable transformation over this embedding, which suggests that formats can be seen as transformations of the output probability distribution.
For each task, and for both \(\{1,5\}\)-shot settings, we collect prompt embeddings from LLaMA-2-7B corresponding to 10 randomly sampled valid formats for 1000 evaluation examples. We train an XGBoost (Chen & Guestrin, 2016) classifier that maps from the top \(n\) principal components of a
\begin{table}
\begin{tabular}{c l l c c c} \hline \hline Task Id & Prompt Format 1 (\(p_{1}\)) & Prompt Format 2 (\(p_{2}\)) & Acc \(p_{1}\) & Acc \(p_{2}\) & Diff. \\ \hline task280 & passage:\{\}\(\backslash\)n answer:\{\} & passage \{\}\(\backslash\)n answer \{\} & 0.043 & 0.826 & 0.783 \\ task317 & Passage:\{\} Answer:\{\} & Passage: \{\} Answer:\{\} & 0.076 & 0.638 & 0.562 \\ task190 & Sentence[I]-\{\}Sentence[II]-\{\}-Answer\{\} & Sentence[A]-\{\}Sentence[B]-\{\}-Answer\{\} & 0.360 & 0.614 & 0.254 \\ task904 & input: \{\}\(\backslash\)n output:\{\} & input: \{\}\(\backslash\)n output:\{\} & 0.418 & 0.616 & 0.198 \\ task320 & target -\{\}\(\backslash\)n\{\}\(\backslash\)nanswer -\{\} & target -\{\}\(\backslash\)n\{\}\(\backslash\)n answer -\{\} & 0.361 & 0.476 & 0.115 \\ task322 & Comment: \{\} Answer:\{\} & comment: \{\} answer: \{\} & 0.614 & 0.714 & 0.100 \\ task279 & Passage : \{\}. Answer : \{\} & Passage: \{\}. Answer :\{\} & 0.372 & 0.441 & 0.069 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Examples of atomic changes’ impact on accuracy using probability ranking (prefix matching shown in Appendix 5). \(\{\}\) represents a text field; \(p_{2}\) yields higher accuracy than \(p_{1}\) for all tasks.
Figure 5: Example of accuracy variance for different choices of constants in \(\mathcal{S}_{1}\) for task1283.
prompt embedding to the prompt format.3 We find that although the original prompt embeddings are of size 4,096\({}^{4}\), using just the top 100 principal components can result in a classifier with \(\geq\)0.98 accuracy in format identification for all 31 tasks analyzed. Figure 7 shows the accuracy of format classification given a fixed number of principal components.5 We find that classifier accuracy given just the top two components correlates moderately with the spread of performance in the prompts they represent (\(0.424\), \(p=8.04\cdot 10^{-6}\); \(0.555\) for the 5-shot setting; using exact prefix matching).
Footnote 3: We train with 800 vectors from each of the 10 formats (8000 vectors) and evaluate on the remaining 200.
Footnote 4: Equivalent to the dimension of hidden representations for LLaMA-2-7B.
Footnote 5: Figure 19 in the Appendix visualizes examples of the top two principal components for ten prompt formats.
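The probe can be sketched as follows, with scikit-learn's gradient boosting standing in for the XGBoost classifier used in the text; `embeddings` (one row per prompt, hidden-size columns) and `format_ids` (one label per prompt) are assumed to be precomputed from the model's last hidden layer.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split


def format_probe_accuracy(embeddings: np.ndarray, format_ids: np.ndarray,
                          n_components: int = 100) -> float:
    # Project onto the top principal components of the prompt embeddings.
    components = PCA(n_components=n_components).fit_transform(embeddings)
    x_tr, x_te, y_tr, y_te = train_test_split(
        components, format_ids, test_size=0.2, stratify=format_ids,
        random_state=0)
    # Train a classifier to predict which format produced each embedding.
    classifier = GradientBoostingClassifier().fit(x_tr, y_tr)
    return classifier.score(x_te, y_te)
```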
### Fast exploration of the prompt formatting space: FormatSpread
In Section 4.2, we demonstrate that even when sampling just 10 formats from the space of plausible formats, we still observe significant performance spread on many tasks. However, this is only a lower bound of the spread a task may exhibit when increasing the number of formats: for example, about 17% of tasks are expected to increase their spread by at least 5 accuracy points when increasing from 10 to 20 sampled formats. Figure 8 quantifies the expected increase in spread when increasing the number of formats by evaluating 500 formats on 250 samples each and computing expected gains.
Figure 9 compares the efficiency of Thompson sampling, UCB, and naive sampling for estimating spread with respect to a budget \(E\) (Section 3.2). To ensure accurate reports, we compute and show the true spread of the highest- and lowest-performing formats chosen by each method using all data. With a budget of 51,200 evaluations, Thompson sampling results in a spread within 1 accuracy point of the true spread, while naive sampling finds a spread within 4 points, and UCB within 11.
Finally, we use FormatSpread to measure sensitivity of several models where inference is expensive. With a budget of 40,000 evaluations and 320 prompt formats, we find that 1-shot
Figure 8: Probability of observing a spread increase of at least \(d\) when increasing sample size from \(k_{1}\) to \(k_{2}\) formats. 31 tasks, 100 trials each.
Figure 9: Difference between the true sample spread and each algorithm-found spread with respect to \(E\) (evaluation budget). 320 formats, \(B=\) 20, average of 5 trials over 31 tasks shown.
Figure 6: Probability that an atomic change (e.g. changing a space, separator) has a given impact in accuracy for two scoring criteria. 53 tasks, 30 sampled atomic changes each.
LLaMA-2-70B (run using 4-bit quantization; Dettmers et al., 2022) yields a median spread of 0.171 (mean=0.221, std=0.200, using probability ranking across 53 tasks; 25% of tasks had a spread of 0.292 or higher, with a maximum spread of 0.876), and GPT-3.5 yields a median spread of 0.064 (mean=0.110, std=0.115, across 53 tasks using exact prefix matching given that we do not have access to the full logits; 25% of tasks had a spread of 0.148 or higher, with a maximum spread of 0.562), showing sensitivity to formatting is still present even on larger models. 5-shot LLaMA-2-70B still shows high spreads, with 25% of tasks having a spread of 0.310 or higher and a maximum of 0.841. See spread visualization in Figure 23, and a list of best and worst formats found in Table 6.
## 5 Related Work
The task of automatically finding the best-performing prompt for a given task without changing model parameters has recently gained attention, given the constantly improving yet somewhat unpredictable performance of LLMs. Prior work has often focused on discovering optimal prompts with gradient-based methods, which are effective, but often lead to disfluent or unnatural prompts (Shi et al., 2020), which can be mitigated with a Langevin dynamics-based method (Shi et al., 2022). Another approach is to learn, optimize, and insert continuous representations of prompts and tasks as input to models (Qin and Eisner, 2021; Lester et al., 2021; Ding et al., 2022; Ilharco et al., 2023). These methods also require access to the LLM's parameters, thus cannot be applied to models behind an API. In contrast, FormatSpread does not assume access to any model internals. Prior gradient-free work has focused on edit-based enumeration over human-written prompts (Prasad et al., 2023), reinforcement learning (Deng et al., 2022), and by using LLMs themselves (Zhou et al., 2023; Gao et al., 2021). These works aim to achieve competitive task performance, even if the meaning of the prompt or instruction is modified. To our knowledge, we are the first to focus specifically on prompt formatting variance, a quintessential example of semantic equivalence.
Jailbreaking refers to the behavior of intentionally manipulating prompts to elicit inappropriate or sensitive responses, or otherwise reveal parts of the prompt that were intentionally not revealed. While the objective differs from our work, jailbreaking works (Wei et al., 2023; Zou et al., 2023) share the underlying technical question of finding the lowest-performing prompt. Our methods differ, since Wei et al. (2023) evaluate human-generated attacks to guide adversarial prompt design, and Zou et al. (2023) uses gradient-based search methods simultaneously across multiple models.
Some existing work has explored the influence of certain prompt design choices on model performance, for example the prompt's language (Gonen et al., 2022) and the ordering of few-shot examples (Lu et al., 2022). Other work has focused on providing textual interpretations of continuous prompt representations (Khashabi et al., 2022). Beyond autoregressive LLMs, existing work has focused on performance variance in masked language models (Elazar et al., 2021; Jiang et al., 2020). Our work follows efforts in other domains that explore the influence of spurious features on research evaluations, e.g., in deep reinforcement learning (Islam et al., 2017; Henderson et al., 2018) and statistical machine translation (Clark et al., 2011).
## 6 Discussion
We introduce FormatSpread, an algorithm that estimates the performance _spread_ across prompt formatting choices. We use FormatSpread to evaluate the spread of several widely-used open-source LLMs for classification tasks in few-shot learning settings. We find that spread is large regardless of model choice, even when increasing model size, number of few-shots, or when using instruction tuning. FormatSpread is designed to efficiently search the space of plausible prompt formats under a user-specified computational budget. For example, with a computational budget of exploring only 5% of the entire search space for a task with 2,500 test examples and 320 plausible formats, we are able to estimate spread within 2 accuracy points of the true spread.
We also characterize the space of prompt formats, finding that it is largely non-monotonic and that few atomic features can be predictors of performance alone, although the separability of format embeddings is highly correlated with observed performance spread. These findings informed the design of our search procedure, where local search methods are not advantageous.
Our findings suggest that performance spread caused by arbitrary prompt formatting choices may influence conclusions made about model performance, especially when comparing models on benchmark tasks.
Thus, we recommend that work evaluating LLMs with prompting-based methods would benefit from reporting a range of performance across plausible formats. However, we want to emphasize that single-format evaluation may still be sufficient for many use cases. For example, for researchers or practitioners who build systems on top of LLMs, choosing a single prompt format that works sufficiently well for use in this larger system is a valid methodological choice. However, we encourage future research to compute FormatSpread when comparing their systems to out-of-the-box models, to ensure fair baseline representation. Furthermore, FormatSpread can be used to identify lower-bound performance of a model or system. For example, when using a model for socially impactful tasks, such as stereotype classification in Figure 1, it is important to report the range of accuracy a non-adversarial user might encounter. Likewise, it is crucial to consider robustness to spurious features when claiming that models possess general abilities, such as theory of mind; and beneficial to report when e.g. exploring model biases. We leave it to future research to develop regularization procedures either during training or with an already-trained model to make models robust to diverse formatting choices.
## 7 Limitations
As defined by our grammar, all equivalent formats are semantically equivalent to human readers. However, some of them are more likely to be used by humans than others. Spaces and separators are inspired from naturally-occurring formats, but some values are more unusual, such as the spacing <sep> or the separator ::. Contextual restrictions enable disallowing undesired combinations of e.g. spaces and separators. However, formats may have multiple valid parses, and some may be more prone than others to unnatural character combinations. For example, let a data sample be 'Passage: Lorem ipsum dolor sit amet. Answer: Yes'. Depending on whether we consider the full stop '.' to be part of the passage or of the format, we may parse it as \(B_{2}^{(2)}(B_{1}(\texttt{Passage},\texttt{': '},id),B_{1}(\texttt{Answer},\texttt{': '},id),\texttt{' '})\) with the full stop attached to the passage text, or as \(B_{2}^{(2)}(B_{1}(\texttt{Passage},\texttt{': '},id),B_{1}(\texttt{Answer},\texttt{': '},id),\texttt{'. '})\) with the full stop belonging to the space. In this work, we choose the former parsing throughout tasks to ensure full sentences. This sometimes6 leads equivalent formats to have less usual, yet trivially semantically equivalent character combinations, e.g. \(B_{2}^{(2)}(B_{1}(\texttt{Passage},\texttt{': '},id),B_{1}(\texttt{Answer},\texttt{': '},id),\texttt{'; '})\). This last format would yield the following string on the example above: 'Passage: Lorem ipsum dolor sit amet.; Answer: Yes'. We observe high performance spread both in these cases and beyond them. Contextual restrictions may also exclude these cases if desired by the end user.
Footnote 6: Less than 20% of cases, based on a manual inspection of 10 formats across 20 tasks.
Additionally, we focus our evaluation on tasks that have reasonably short input instructions and input field length (see task selection details in B.1). Future work may investigate on how input length affects final performance.
## 8 Acknowledgements
We thank Jillian Fisher, Sachin Kumar, Angela Zhou, and the Berkeley NLP group for valuable discussions. This work was conducted while A.S. was a Young Investigator at AI2. This material is based upon work partly funded by the DARPA CMO under Contract No. HR001120C0124, by DARPA MCS program through NIWC Pacific (N66001-19-2-4031), by NSF DMS-2134012, by NSF CAREER Grant No. IIS2142739, and an Alfred P. Sloan Foundation Fellowship. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily state or reflect those of the United States Government or any agency thereof.
|
2301.07207
|
Spatial and Binary Parameter Distributions of Black Hole Binaries in the
Milky Way Detectable with Gaia
|
Soon after the Gaia data release (DR) 3 in June 2022, some candidates (and
one confirmed) of detached black hole (BH) - luminous companion (LC) binaries
have been reported. Existing and future detections of astrometric BH-LC
binaries will shed light on the spatial distribution of these systems, which
can deepen our understanding of the natal kicks and the underlying formation
mechanism of BHs. By tracking Galactic orbits of BH-LC binaries obtained from
BSE, we find that distributions of BH mass and the height from the Galactic
plane |z| would help us give a constraint on supernova model. We also indicate
that the correlations of (i) orbital periods and eccentricities, and (ii) BH
mass and $|z|$ could be clues for the strength of natal kick, and that the
correlations of ($P$, $Z/Z_\odot$) may tell us a clue for common envelope (CE)
efficiency. We also discuss the possibility of forming BH-LC binaries like the
BH binary candidates reported in Gaia DR3 and Gaia BH 1, finding that if the
candidates as well as the confirmed binary originate from isolated binaries,
they favor models which produce low-mass BHs and have high CE efficiencies
exceeding unity.
|
Minori Shikauchi, Daichi Tsuna, Ataru Tanikawa, Norita Kawanaka
|
2023-01-17T21:52:08Z
|
http://arxiv.org/abs/2301.07207v2
|
Spatial and Binary Parameter Distributions of Black Hole Binaries in the Milky Way Detectable with _Gaia_
###### Abstract
Soon after the _Gaia_ data release (DR) 3 in June 2022, some candidates (and one confirmed) of detached black hole (BH) - luminous companion (LC) binaries have been reported. Existing and future detections of astrometric BH-LC binaries will shed light on the spatial distribution of these systems, which can deepen our understanding of the natal kicks and the underlying formation mechanism of BHs. By tracking Galactic orbits of BH-LC binaries obtained from BSE, we find that distributions of BH mass and the height from the Galactic plane \(|z|\) would help us give a constraint on supernova model. We also indicate that the correlations of (i) orbital periods and eccentricities, and (ii) BH mass and \(|z|\) could be clues for the strength of natal kick. We also discuss the possibility of forming BH-LC binaries like the BH binary candidates reported in _Gaia_ DR3 and _Gaia_ BH 1, finding that if the candidates as well as the confirmed binary originate from isolated binaries, they favor models which produce low-mass BHs and have high common envelope efficiencies exceeding unity.
astrometry -- stars: black holes -- binaries: general
## 1 Introduction
Massive stars are often formed in binaries, which can leave behind compact objects including black holes (BHs) after core-collapse. Such BHs in binary systems are important tools for probing how BHs are born and evolve, as well as the uncertainties of binary evolution models. By observing sinusoidal motions of luminous companions (LCs), the astrometric satellite _Gaia_ (Esa, 1997) is expected to detect non-interacting binaries consisting of LCs and unseen objects, and estimate the mass of the unseen object. If the unseen object's mass is larger than a few solar masses and we do not find any excess emission from it by spectroscopy or photometry, the unseen object should be a BH. Since _Gaia_ has been observing for more than five years, orbital periods of the detectable binaries with _Gaia_ should be tens of days to several years, longer than observed in BH X-ray binaries (XRBs). Observations of low mass XRBs (LMXBs) imply the absence of 3 - 5\(M_{\odot}\) BHs (Ozel et al., 2010; Farr et al., 2011), the so-called lower mass gap (Bailyn et al., 1998). However, _Gaia_ might reveal a completely different BH population from X-ray binaries, and thus has been attracting more and more people's interest.
There are an increasing number of papers that assess _Gaia_'s detectability of BH-LC binaries (_e.g._ Mashian & Loeb, 2017; Breivik et al., 2017; Yamaguchi et al., 2018; Kinugawa & Yamaguchi, 2018; Yalinewich et al., 2018; Andrews et al., 2019; Shao & Li, 2019; Wiktorowicz et al., 2020; Shikauchi et al., 2020; Chawla et al., 2021; Shikauchi et al., 2022). _Gaia_
should be able to detect several to thousands of BH-LC binaries in the five-year mission. The detectability is greatly dependent on some factors such as binary evolution models (Breivik et al., 2017; Chawla et al., 2021; Shikauchi et al., 2022) and detection criteria adopted in each work.
The recent data release (Data Release 3, DR3) was on June 13, 2022 1, which provided about \(3.3\times 10^{7}\) additional sources from DR2 and the information of \(8.1\times 10^{5}\) non-single stars, _e.g._ binaries, from its data spanning about three years. The _Gaia_ collaboration reported BH-main sequence (MS) or post-MS star binary candidates from its spectroscopic data (Gaia Collaboration et al., 2022; Gomel et al., 2022); however, El-Badry and Rix (2022) rejected the BH interpretation for all of the BH-MS star candidates. More recently, El-Badry et al. (2023) identified _Gaia_ DR3 4373465352415301632 as a binary consisting of a BH and a G dwarf star, and additional BH-LC binary candidates were reported in independent works (Andrews et al., 2022; Shahaf et al., 2022; Tanikawa et al., 2022). As the number of detections increases in the near future, the distributions of the binary parameters, such as the orbital parameters and locations in the Milky Way (MW) phase-space, will be uncovered. Such distributions should reflect the effect of BH natal kicks that accompany the core-collapse of the BHs' progenitors.
Footnote 1: [https://www.cosmos.esa.int/web/gaia/data-release-3](https://www.cosmos.esa.int/web/gaia/data-release-3)
Studies on spatial distributions of BHs have already been done for XRBs (Gandhi et al., 2020; Jonker et al., 2021). Analogous to BH XRBs, the spatial distribution of BH binary candidates reported in _Gaia_ DR3 may pose an independent constraint on BH natal kick models and their origin, as the _Gaia_-detectable BH-LC binaries are supposed to have longer orbital periods than BH XRBs.
In this work, we investigate the spatial distribution of BH-LC binaries detectable with _Gaia_, by obtaining BH-LC binary population with the binary population synthesis code and tracking their motions under the MW potential. In section 2, we describe the initial spatial condition employed here and the initial set-up for the binary population synthesis code, and explain how to simulate the orbits of BH-LC binaries in the MW from formation to the present day. We show the results in section 3 and compare our samples with the reported BH candidates in section 4. Our conclusion is in section 5.
## 2 Method
In this section, we summarize the initial spatial condition in subsection 2.1. The binary population synthesis code and binary evolution models that we employ are depicted in subsection 2.2. How we track the motion of BH-LC binaries under the MW potential is described in subsection 2.3. We also explain sampling techniques to conduct our simulation efficiently in subsection 2.4. Finally, the detection criteria with Gaia that are employed in this work are summarized in subsection 2.5.
### Initial Conditions with Configuration of the MW
Here, we follow Wagg et al. (2021) to synthesize the binary populations throughout the history of the MW. The formalism of Wagg et al. (2021) is based on an empirically-informed analytic model that adopts the metallicity-radius-time relations in Frankel et al. (2018). The relations were calibrated based on data of red clump stars observed with APOGEE (Majewski et al., 2017).
The MW model consists of three components: the low-[\(\alpha\)/Fe] disc (_i.e._ the thin disc), the high-[\(\alpha\)/Fe] disc (_i.e._ the thick disc) and the bar/bulge-like central component. The double disc model reasonably explains the stellar distribution in the MW. For the three components, star formation history and the spatial distribution are modelled independently. For the star-formation history, we weight each model based on the current stellar mass of each component as follows. Assuming that the stellar mass of the bulge \(M_{\rm bulge}\) is \(0.9\times 10^{10}M_{\odot}\), that of both disc components \(M_{\rm disc}\) is \(5.2\times 10^{10}M_{\odot}\)(Licquia and Newman, 2015), and the masses of the thin and thick discs are equal (e.g. Snaith et al., 2014), the number of simulated initial binaries in each component, \(N_{\rm thin}\) in the thin disc, \(N_{\rm thick}\) in the thick disc, and \(N_{\rm bulge}\) in the bulge are
\[N_{\rm i} = N\times\int_{0}^{\tau_{m}}\frac{p(\tau)}{M_{\rm tot}}\mathrm{d}\tau, \tag{1}\] \[= \begin{cases}N\times\frac{M_{\rm disc}/2}{M_{\rm tot}}&\text{(the thin/thick disc)},\\ N\times\frac{M_{\rm bulge}}{M_{\rm tot}}&\text{(the bulge)},\end{cases} \tag{2}\]
where the suffix i represents each component in the MW (_i.e._, the thin/thick disc and the bulge component), \(N=10^{7}\) is the total number of initial binaries in one realization, and \(M_{\rm tot}=M_{\rm disc}+M_{\rm bulge}\).
For the disc components, the star formation history \(p(\tau)\) can be shown as an exponential form,
\[p(\tau){\rm d}\tau\propto\exp\left(-\frac{\tau_{m}-\tau}{\tau_{\rm SFR}} \right){\rm d}\tau, \tag{3}\]
where \(\tau\) is the lookback time, _i.e._ the time elapsed from a binary stars' zero-age MS (ZAMS) stage to now, \(\tau_{m}=\)12 Gyr is the age of the MW, and \(\tau_{\rm SFR}\) is a timescale of the star formation, 6.8 Gyr, based on Frankel et al. (2018). Note that the periods of star formation in the two discs are different, and stars are formed earlier in the thick disc (\(\tau=8-12\) Gyr) and later in the thin disc (\(\tau=0-8\) Gyr). For the bulge component, considering that the central bars seem to include stars with ages of 6 - 12 Gyr with a tail of younger ages and that there are uncertainties on its star formation history, we adopt a scaled and shifted version of the beta function expressed below.
In summary, the exact expression of the star formation history \(p(\tau)\) including normalization factors is
\[p(\tau){\rm d}\tau=\begin{cases}\frac{M_{\rm disc}}{2M_{\rm tot}}\,n_{\rm thin}\exp\left(-\frac{\tau_{m}-\tau}{\tau_{\rm SFR}}\right){\rm d}\tau&\text{(the thin disc, $0\ {\rm Gyr}<\tau<8\ {\rm Gyr}$)},\\ \frac{M_{\rm disc}}{2M_{\rm tot}}\,n_{\rm thick}\exp\left(-\frac{\tau_{m}-\tau}{\tau_{\rm SFR}}\right){\rm d}\tau&\text{(the thick disc, $8\ {\rm Gyr}<\tau<12\ {\rm Gyr}$)},\\ \frac{M_{\rm bulge}}{M_{\rm tot}}\,\beta(2,3)(\tau^{\prime})\,{\rm d}\tau&\text{(the bulge, $6\ {\rm Gyr}<\tau<12\ {\rm Gyr}$)},\end{cases} \tag{4}\]
where
\[\begin{cases}n_{\rm thin}=\frac{1}{\int_{0}^{8\,{\rm Gyr}}\exp\left(-\frac{\tau_{m}-\tau}{\tau_{\rm SFR}}\right){\rm d}\tau},\\ n_{\rm thick}=\frac{1}{\int_{8\,{\rm Gyr}}^{12\,{\rm Gyr}}\exp\left(-\frac{\tau_{m}-\tau}{\tau_{\rm SFR}}\right){\rm d}\tau},\end{cases} \tag{5}\]
and \(\beta(2,3)(\tau^{\prime})\) is the beta function,
\[\beta(2,3)(\tau^{\prime})=\frac{\Gamma(5)\times\tau^{\prime}(1-\tau^{\prime}) ^{2}}{\Gamma(2)\Gamma(3)}, \tag{6}\]
where \(\tau^{\prime}=(\tau/6\ {\rm Gyr})-1\) so that the beta function is scaled and shifted as \(\beta=0\) at \(\tau=6\) Gyr, 12 Gyr with \(\Gamma\), the Gamma function.
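As an illustration, the star-formation history of equations (4)-(6) can be drawn from with simple rejection sampling, as in the sketch below (not the authors' code); the 1/(6 Gyr) factor that normalizes the bulge beta term as a density in \(\tau\) is our reading of the scaling.

```python
import math
import random

TAU_M, TAU_SFR = 12.0, 6.8              # Gyr
M_DISC, M_BULGE = 5.2e10, 0.9e10        # Msun
M_TOT = M_DISC + M_BULGE


def _exp_sfr(tau):
    return math.exp(-(TAU_M - tau) / TAU_SFR)


def _int_exp(lo, hi):
    """Closed form of the integral of _exp_sfr over [lo, hi] (Gyr)."""
    return TAU_SFR * (_exp_sfr(hi) - _exp_sfr(lo))


N_THIN = 1.0 / _int_exp(0.0, 8.0)       # eq. (5)
N_THICK = 1.0 / _int_exp(8.0, 12.0)


def p_tau(tau):
    """Piecewise star-formation history p(tau) [1/Gyr], eqs. (4)-(6)."""
    p = 0.0
    if 0.0 <= tau < 8.0:                 # thin disc
        p += (M_DISC / (2.0 * M_TOT)) * N_THIN * _exp_sfr(tau)
    if 8.0 <= tau <= 12.0:               # thick disc
        p += (M_DISC / (2.0 * M_TOT)) * N_THICK * _exp_sfr(tau)
    if 6.0 <= tau <= 12.0:               # bulge: scaled/shifted Beta(2, 3)
        tp = tau / 6.0 - 1.0
        beta_pdf = 12.0 * tp * (1.0 - tp) ** 2   # Gamma(5)/(Gamma(2)Gamma(3)) = 12
        p += (M_BULGE / M_TOT) * beta_pdf / 6.0  # /6 Gyr so it is a density in tau
    return p


def sample_lookback_times(n, p_max=0.25):
    """Rejection-sample n birth lookback times [Gyr] from p(tau)."""
    samples = []
    while len(samples) < n:
        tau = random.uniform(0.0, TAU_M)
        if random.uniform(0.0, p_max) < p_tau(tau):
            samples.append(tau)
    return samples
```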
Then, we distribute the initial binaries following the radial and the vertical distributions shown below. For the radial distribution, a single exponential distribution is employed,
\[q(R){\rm d}R=\exp\left(-\frac{R}{R_{d}}\right)\frac{R}{R_{d}^{2}}{\rm d}R, \tag{7}\]
where \(R\) is a radius from the Galactic center, and \(R_{d}\) is a scale length. For the thin disc, \(R_{d}\) is defined as
\[R_{d}\equiv R_{\rm exp}(\tau)=4\ {\rm kpc}\left(1-\alpha_{R_{\rm exp}}\left( \frac{\tau}{8\ {\rm Gyr}}\right)\right), \tag{8}\]
where \(\alpha_{R_{\rm exp}}=0.3\) as the inside-out growth parameter. For the thick disc and the bar structure, \(R_{d}\) is age-independent with the respective values (1/0.43) kpc (Table 1, Bovy et al., 2019) and 1.5 kpc (Bovy et al., 2019).
The vertical distribution for each component is a single exponential form as well,
\[s(|z|){\rm d}z=\frac{1}{z_{d}}\exp\left(-\frac{z}{z_{d}}\right){\rm d}z, \tag{9}\]
where \(z\) is a height from the Galactic plane and \(z_{d}\) is a scale height. The value of \(z_{d}\) for each component is 0.3 kpc for the thin disc (McMillan, 2011), 0.95 kpc for the thick disc (Bovy et al., 2019), and 0.2 kpc for the bulge component (Wegg et al., 2015).
Finally, the metallicity of each star is given as a function of radius and lookback time,
\[[{\rm Fe/H}](R,\tau)=F_{m}+\nabla[{\rm Fe/H}]R-\left(F_{m}+\nabla[{\rm Fe/H}] R_{[{\rm Fe/H}]=0}^{\rm now}\right)f(\tau), \tag{10}\]
where
\[f(\tau)=\left(1-\frac{\tau}{\tau_{m}}\right)^{\gamma_{\rm[Fe/H]}}, \tag{11}\]
\(F_{m}=-1\) dex is the metallicity of the star-forming gas at the center of the disc at \(\tau=\tau_{m}\), \(\nabla[{\rm Fe/H}]=-0.075\) kpc\({}^{-1}\) is the metallicity gradient, and \(R_{\rm[Fe/H]=0}^{\rm now}=8.7\) kpc is the radius at which the present metallicity is the solar value \(Z_{\odot}=0.014\). The value \(\gamma_{\rm[Fe/H]}=0.3\) accounts for the time-dependence of the chemical enrichment. The metallicity can then be obtained by the relation below (e.g. Bertelli et al., 1994),
\[\log_{10}(Z)=0.977{\rm[Fe/H]}+\log_{10}(Z_{\odot}). \tag{12}\]
We note that Wagg et al. (2021) applied this conversion to the thick disc and the bulge component as well as the thin disc, although Frankel et al. (2018) fitted this model only for stars in the thin disc.
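A short sketch of equations (10)-(12), mapping Galactocentric radius and lookback time to [Fe/H] and then to Z, with the constants given in the text:

```python
import math

F_M = -1.0            # dex, central metallicity of star-forming gas at tau = tau_m
GRAD_FEH = -0.075     # dex / kpc, metallicity gradient
R_FEH0_NOW = 8.7      # kpc, radius of solar metallicity today
GAMMA_FEH = 0.3
TAU_M = 12.0          # Gyr
Z_SUN = 0.014


def fe_h(R, tau):
    """[Fe/H](R, tau), equation (10), for R in kpc and tau in Gyr."""
    f = (1.0 - tau / TAU_M) ** GAMMA_FEH
    return F_M + GRAD_FEH * R - (F_M + GRAD_FEH * R_FEH0_NOW) * f


def metallicity(R, tau):
    """Mass-fraction metallicity Z, equation (12)."""
    return 10.0 ** (0.977 * fe_h(R, tau) + math.log10(Z_SUN))
```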
To convert the number of BH-LC binaries obtained in the simulation \(N_{\rm BH-LC,sim}\) to the actual number in the MW \(N_{\rm BH-LC,MW}\),
\[N_{\rm BH-LC,MW}=N_{\rm BH-LC,sim}\times\frac{M_{\rm tot}}{M_{\rm tot,sim}}, \tag{13}\]
where \(M_{\rm tot,sim}\) is the total initial mass in the simulation. Note that we restrict the initial primary mass \(m_{\rm prim,ZAMS}\) to be at least \(8M_{\odot}\). The value \(M_{\rm tot,sim}\) is thus corrected to the actual total mass with \(m_{\rm prim,ZAMS}\geq 0.08M_{\odot}\), using the initial primary mass function described below. We assume the binary fraction to be unity, as the value for O-type stars is estimated as \(\sim 0.7\) in Sana et al. (2012). A lower value of the binary fraction will shift down the overall number of detectable binaries, but will not affect the correlations between the parameters studied in section 3.
### Binary Population Synthesis Code and Binary Evolution Models
Binary evolution is simulated by the binary population synthesis code BSE(Hurley et al., 2000; Hurley et al., 2002). We update the stellar wind model in BSE to a metallicity-dependent one following Belczynski et al. (2010).
Two different supernova (SN) mechanisms are employed: "rapid" and "delayed" models suggested in Fryer et al. (2012). In the rapid model, BHs as light as 2 - 4.5\(M_{\odot}\) are rarely born, which reproduces the lower BH mass gap in X-ray observations (Ozel et al., 2010; Farr et al., 2011). Meanwhile, such "mass gap" BHs can be formed in the delayed model. We use both SN models, as it is still uncertain whether the mass gap is intrinsic or due to observational bias.
We also adopt "fallback (FB) kick" model (Fryer et al., 2012) for the rapid and the delayed SN models as BH natal kicks. The strength of BH natal kicks is that of neutron star (NS) natal kicks modulated by \((1-f_{\rm fb})\), where \(f_{\rm fb}\) is the fraction of fallback matter to the ejected mass. The distribution of NS kicks is supposed to be Maxwellian distribution with \(\sigma=265\,{\rm km\,s^{-1}}\)(Hobbs et al., 2005). In general, the mass of the remnant BH tends to be larger in the rapid model than in the delayed model, so the magnitude of FB kick in the rapid model is negligible. In order to see the effect of FB kick, we employ a model with no FB kick for the delayed model as a comparison. We note that there is also contribution of kick from rapid mass loss upon core-collapse (Blaauw kick; Blaauw, 1961), which are included in all of the models.
While the common envelope (CE) phase is treated by \(\alpha\lambda\) prescription (equation 3 in Ivanova et al., 2013), two different CE efficiencies, \(\alpha=1\) and 10 are employed. The latter choice is motivated by El-Badry et al. (2023), which indicated that _Gaia_ BH 1 cannot be formed with \(\alpha=1\) under the assumption of isolated binary origin, and Hirai & Mandel (2022), which revealed that under their new CE formalism post-CE separations can get as large as those translating to a high CE efficiency reaching \(\alpha=10\). We apply the result in Claeys et al. (2014) for \(\lambda\).
For the distributions of initial binary parameters, we assume a single initial primary mass function of Kroupa (2001) from \(8M_{\odot}\) to \(150M_{\odot}\). The mass ratio is assumed to be flat from \(0.1/m_{\rm prim,ZAMS}\) to 1 (Kuiper, 1935; Kobulnicky & Fryer, 2007). The minimum value of the initial secondary mass is set to \(0.1M_{\odot}\). We also set logarithmically flat distribution for a semi-major axis with a range of \(10R_{\odot}\) to \(10^{6}R_{\odot}\). The initial eccentricity is supposed to be thermally distributed (Heggie, 1975). As mentioned in subsection 2.1, we track the evolution of \(10^{7}\) initial binaries per each SN/kick model and a choice of \(\alpha\). At the beginning of the binary evolution, both stars are in the ZAMS stage.
### Tracking the Motion of BH-LC Binaries
For those that survive as BH-LC binaries in the present day, we calculate the motion of each binary in the Galaxy from BH formation to today. We follow the formulations of Tsuna et al. (2018), which numerically solved the orbits
of isolated BHs under the Galactic potential of Irrgang et al. (2013) (their Model II) that contains a spherical bulge, disc and spherical halo. The numerical code calculates the orbit using the cylindrical coordinates \((R,\phi,z)\), with a 4th-order Runge-Kutta integration.
The displacement of the binary from its birth to BH formation is neglected, and we set the initial \(R\) and \(z\) coordinates to be those of the binary. Since both the binaries and the Galactic potential follow axisymmetric distributions, we randomize the initial azimuthal angle \(\phi\) from 0 to \(\pi/8\). That enables us to increase the number of BH-LC binary samples effectively (see section 2.4). We define the initial velocity of the binary by adding the kick to the Galactic rotation velocity, approximated by a rotation curve of
\[\upsilon_{\phi}(r)=\begin{cases}265-1875(r_{\rm kpc}-0.2)^{2}&\text{km\,s}^{- 1}\quad(\text{for }r_{\rm kpc}<0.2)\\ 225+15.625(r_{\rm kpc}-1.8)^{2}&\text{km\,s}^{-1}\quad(\text{for }0.2<r_{\rm kpc }<1.8)\\ 225+3.75(r_{\rm kpc}-1.8)&\text{km\,s}^{-1}\quad(\text{for }1.8<r_{\rm kpc}<5.8)\\ 240&\text{km\,s}^{-1}\quad(\text{for }r_{\rm kpc}>5.8),\end{cases} \tag{14}\]
where \(r_{\rm kpc}\equiv r/(1\ \text{kpc})\). For each binary we consider 10 randomized realizations of the kick orientation, assuming it follows an isotropic distribution. Figure 1 is an example of the Galactic path of a BH-LC binary in the \(x-y\) (_i.e._ the Galactic plane) and \(x-z\) planes. The star marker corresponds to the starting point of the binary at BH formation.
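A minimal sketch (not the authors' code) of the rotation curve of equation (14) and of adding an isotropically oriented natal kick to the rotation velocity; the \((v_{R},v_{\phi},v_{z})\) decomposition of the kick here is our own convention.

```python
import math
import random


def v_phi(r_kpc: float) -> float:
    """Galactic rotation speed [km/s] at radius r [kpc], equation (14)."""
    if r_kpc < 0.2:
        return 265.0 - 1875.0 * (r_kpc - 0.2) ** 2
    if r_kpc < 1.8:
        return 225.0 + 15.625 * (r_kpc - 1.8) ** 2
    if r_kpc < 5.8:
        return 225.0 + 3.75 * (r_kpc - 1.8)
    return 240.0


def isotropic_kick(v_kick: float):
    """Kick vector of magnitude v_kick [km/s] with an isotropic direction."""
    cos_t = random.uniform(-1.0, 1.0)
    sin_t = math.sqrt(1.0 - cos_t ** 2)
    phi = random.uniform(0.0, 2.0 * math.pi)
    return (v_kick * sin_t * math.cos(phi),
            v_kick * sin_t * math.sin(phi),
            v_kick * cos_t)


def initial_velocity(r_kpc: float, v_kick: float):
    """Initial (v_R, v_phi, v_z) [km/s]: Galactic rotation plus the kick."""
    k_r, k_phi, k_z = isotropic_kick(v_kick)
    return (k_r, v_phi(r_kpc) + k_phi, k_z)
```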
### Effective Sampling Technique
In order to perform our simulation efficiently, we employ two sampling techniques, one spatial and one temporal. First, we utilize the fact that both the binary distribution and the Galactic potential adopted in our work are axisymmetric. The azimuthal angle distribution of initial binaries is limited to 0 - \(\pi/8\). After tracking their motions, we then rotate the azimuthal angle of the binaries by \(\pi/8\), and repeat this \(2\pi/(\pi/8)=16\) times. The number of rotations is chosen so that the final number of detectable binaries sufficiently converges. This sampling technique enables us to increase the number of initial samples to \(16\times 10^{7}\).
Our previous work (Shikauchi et al., 2022) found that massive stars with short lifetimes significantly contribute to the luminous sources detectable with _Gaia_, owing to their much larger luminosity. We thus take an importance sampling
Figure 1: An example of the Galactic path of a BH-LC binary in \(x-y\) plane (the Galactic plane) and \(x-z\) one. The star markers show the initial location of the binary. The path is tracked for 1 Gyr from now (_i.e._\(\tau=0\)).
approach2 by employing a bias factor \(b(\tau)\),
Footnote 2: [https://en.wikipedia.org/wiki/Importance_sampling#Application_to_simulation](https://en.wikipedia.org/wiki/Importance_sampling#Application_to_simulation)
\[b(\tau)=\begin{cases}N\times f\times n_{\rm young}\exp\left(-\frac{\tau_{m}-\tau}{\tau_{\rm SFR}}\right)&\text{(the thin disc, $0\ {\rm Gyr}<\tau<0.1\ {\rm Gyr}$)},\\ N\times(1-f)\times\frac{M_{\rm disc}}{2M_{\rm tot}}\times n_{\rm older}\exp\left(-\frac{\tau_{m}-\tau}{\tau_{\rm SFR}}\right)&\text{(the thin disc, $0.1\ {\rm Gyr}<\tau<8\ {\rm Gyr}$)},\\ N\times(1-f)\times\frac{M_{\rm disc}}{2M_{\rm tot}}\times n_{\rm thick}\exp\left(-\frac{\tau_{m}-\tau}{\tau_{\rm SFR}}\right)&\text{(the thick disc, $8\ {\rm Gyr}<\tau<12\ {\rm Gyr}$)},\\ N\times(1-f)\times\frac{M_{\rm bulge}}{M_{\rm tot}}\times\beta(2,3)(\tau^{\prime})&\text{(the bulge, $6\ {\rm Gyr}<\tau<12\ {\rm Gyr}$)}.\end{cases} \tag{15}\]
where
\[\begin{cases}n_{\rm young}=\frac{1}{\int_{0}^{0.1\,{\rm Gyr}}\exp\left(-\frac{\tau_{m}-\tau}{\tau_{\rm SFR}}\right){\rm d}\tau},\\ n_{\rm older}=\frac{1}{\int_{0.1\,{\rm Gyr}}^{8\,{\rm Gyr}}\exp\left(-\frac{\tau_{m}-\tau}{\tau_{\rm SFR}}\right){\rm d}\tau},\end{cases} \tag{16}\]
and \(f\) is a weight factor, 0.5. This biased sampling function \(b(\tau)\) assigns 50 % of the total initial binaries to the thin disc with lookback time \(\tau<0.1\) Gyr, and the rest to the thin disc with \(\tau>0.1\) Gyr, the thick disc, and the bulge component. After simulating binary evolution, by multiplying a "weighting factor" \(w(\tau)\),
\[w(\tau) = \frac{p(\tau)}{b(\tau)} \tag{17}\] \[= \begin{cases}\frac{M_{\rm disc}}{2M_{\rm tot}}\,\frac{n_{\rm thin}}{f\,n_{\rm young}}&\text{(the thin disc, $0\ {\rm Gyr}<\tau<0.1\ {\rm Gyr}$)},\\ \frac{n_{\rm thin}}{(1-f)\,n_{\rm older}}&\text{(the thin disc, $0.1\ {\rm Gyr}<\tau<8\ {\rm Gyr}$)},\\ \frac{1}{1-f}&\text{(the thick disc)},\\ \frac{1}{1-f}&\text{(the bulge)},\end{cases} \tag{18}\]
to the BH-LC binary population, we obtain the unbiased population while at the same time having the bright LCs with short lifetimes sufficiently sampled.
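The de-biasing weights of equations (17)-(18) can be sketched as follows, using the closed-form integral of the exponential star-formation law for the normalization constants; this is an illustration, not the authors' implementation.

```python
import math

TAU_M, TAU_SFR = 12.0, 6.8            # Gyr
M_DISC, M_BULGE = 5.2e10, 0.9e10      # Msun
M_TOT = M_DISC + M_BULGE


def _int_exp(lo, hi):
    """Integral of exp(-(tau_m - tau)/tau_SFR) over [lo, hi] Gyr."""
    return TAU_SFR * (math.exp(-(TAU_M - hi) / TAU_SFR)
                      - math.exp(-(TAU_M - lo) / TAU_SFR))


N_THIN = 1.0 / _int_exp(0.0, 8.0)     # eq. (5)
N_YOUNG = 1.0 / _int_exp(0.0, 0.1)    # eq. (16)
N_OLDER = 1.0 / _int_exp(0.1, 8.0)


def weight(tau, component, f=0.5):
    """w(tau) = p(tau)/b(tau) for a binary born at lookback time tau [Gyr]."""
    if component == "thin" and tau < 0.1:
        return (M_DISC / (2.0 * M_TOT)) * N_THIN / (f * N_YOUNG)
    if component == "thin":
        return N_THIN / ((1.0 - f) * N_OLDER)
    return 1.0 / (1.0 - f)            # thick disc or bulge
```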
### The Detection Criteria
After obtaining the present-day location of BH-LC binaries, we calculate their detectability with _Gaia_ by imposing the detection criteria of Yamaguchi et al. (2018) and Shikauchi et al. (2022)3.
Footnote 3: Note that _Gaia_ BH 1, the confirmed BH-LC binary in _Gaia_ DR3 (El-Badry et al., 2023), is correctly flagged as detectable by our detection criteria.
We employ three constraints and obtain the maximum distance \(D_{\text{max}}\) within which each BH binary can be detected. If the distance to the BH binary \(D\) is smaller than \(D_{\text{max}}\), we regard them as detectable.
#### 2.5.1 Limitation from Interstellar Extinction
The first restriction is that the apparent magnitude of a LC \(m_{\text{V}}(L_{\text{LC}},T_{\text{eff,LC}},D_{\text{LC}},z_{\text{LC}})\) should be smaller than _Gaia_'s limiting magnitude in G band \(m_{\text{v,lim}}=20\)(Gaia Collaboration et al., 2016), that is,
\[m_{\text{V}}(L_{\text{LC}},T_{\text{eff,LC}},D_{\text{LC}},z_{\text{LC}})=m_{ \text{v,lim}}, \tag{19}\]
where \(L_{\text{LC}}\) is the LC luminosity, \(T_{\text{eff,LC}}\) is the effective temperature of a LC, \(D_{\text{LC}}\) is the maximum distance where the LC satisfies this condition and \(z_{\text{LC}}\) is the height of the LC from the Galactic plane.
The absolute magnitude of a LC \(M_{\text{V}}(L_{\text{LC}},T_{\text{eff,LC}})\) can be obtained from \(L_{\text{LC}}\) and \(T_{\text{eff,LC}}\) with a bolometric correction (_c.f._ equation 1, 10, and Table 1 in Torres, 2010). Note that we substitute G band with V band. This is a valid approximation for stars bluer than G type stars whose color \(V-I\) is less than one and the color \(|V-G|\) is almost zero according to Figure 11 and 14 of Jordi et al. (2010). The apparent magnitude of a LC \(m_{\text{V}}\) is expressed as a function of the distance to BH binary \(D\) and the height from the Galactic plane to the binary \(z\),
\[m_{\text{v}}=M_{\text{V}}(L_{\text{LC}},T_{\text{eff,LC}})+5(2+\log_{10}D/\text {kpc})+A_{\text{V}}(D,z), \tag{20}\]
where \(D/\)kpc is \(D\) in units of kpc. The term \(A_{\rm V}\) due to interstellar extinction can be expressed following Shafter (2017),
\[A_{\rm V}(D,z) = a_{\rm V}\int_{0}^{D}{\rm e}^{-|z^{\prime}|/h_{z}}\,{\rm d}D^{\prime},\qquad|z^{\prime}|=\frac{D^{\prime}}{D}\,|z|, \tag{21}\] \[= a_{\rm V}\frac{Dh_{z}}{|z|}\left[1-\exp\left(-\frac{|z|}{h_{z}}\right)\right], \tag{22}\]
where \(a_{\rm V}\) is the average extinction rate in the Galactic plane (\(z=0\)), 1 mag/kpc, and \(h_{z}=100\) pc is the scale height in the \(z\)-direction perpendicular to the plane (Spitzer, 1978). Thus, the maximum distance satisfying the condition \(D_{\rm LC}\) is
\[M_{\rm V}(L_{\rm LC},T_{\rm eff,LC})+5(2+\log_{10}D_{\rm LC}/{\rm kpc})+A_{ \rm V}(D_{\rm LC},z_{\rm LC})=m_{\rm V,lim}. \tag{23}\]
Note that \(D_{\rm LC}\) depends on the line-of-sight angle with respect to the plane, since the extinction term \(A_{\rm V}\) depends on \(z_{\rm LC}\).
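A sketch of equations (20)-(23): the extinction term and a bisection solve for \(D_{\rm LC}\). The line of sight is parameterized here (our assumption) by the sine of the Galactic latitude, so that \(z=D\sin b\).

```python
import math

A_V_RATE = 1.0   # mag / kpc, average extinction rate in the Galactic plane
H_Z = 0.1        # kpc, scale height of the absorbing material
M_V_LIM = 20.0   # Gaia limiting magnitude


def a_v(D, z):
    """Extinction A_V [mag] to distance D [kpc] at final height z [kpc], eq. (22)."""
    if abs(z) < 1e-9:
        return A_V_RATE * D
    return A_V_RATE * D * H_Z / abs(z) * (1.0 - math.exp(-abs(z) / H_Z))


def apparent_mag(M_V, D, sin_b):
    """Apparent magnitude of a companion with absolute magnitude M_V, eq. (20)."""
    z = D * sin_b
    return M_V + 5.0 * (2.0 + math.log10(D)) + a_v(D, z)


def d_lc(M_V, sin_b, d_hi=100.0, tol=1e-4):
    """Largest D [kpc] with m_V(D) <= m_V,lim (eq. 23), found by bisection."""
    lo, hi = 1e-6, d_hi
    if apparent_mag(M_V, hi, sin_b) <= M_V_LIM:
        return hi                      # still visible at the search limit
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if apparent_mag(M_V, mid, sin_b) <= M_V_LIM:
            lo = mid
        else:
            hi = mid
    return lo
```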
#### 2.5.2 Constraints for Confirmed Detection of BHs
In astrometric observations, we can only identify BHs or NSs based on their masses. In order to consider unseen objects as BHs, we restrict the minimum mass of them to be measured as larger than \(2M_{\odot}\),
\[m_{\rm unseen}-n\sigma_{\rm unseen}>2M_{\odot}, \tag{24}\]
where \(m_{\rm unseen}\) is their true mass and \(\sigma_{\rm unseen}\) is its standard error. We follow Yamaguchi et al. (2018) and adopt \(n=1\). Though the minimum limit we set here may induce contamination of NSs, searching for compact objects with masses of \(2-3M_{\odot}\) should be valuable as the existence of such an object was reported in gravitational wave searches (GW190814, Abbott et al., 2020).
From Kepler's third law the binary parameters, LC mass \(m_{\rm LC}\), BH mass \(m_{\rm BH}\), orbital period \(P\) and semi-major axis \(a\), are correlated. Considering that \(a\) can be expressed by a multiplication of an angular semi-major axis \(a^{*}\) and the distance to BH-LC binary \(D\), the correlation of binary parameters is shown as
\[\frac{(m_{\rm LC}+m_{\rm BH})^{2}}{m_{\rm BH}^{3}}=\frac{G}{4\pi^{2}}\frac{P^ {2}}{(a_{*}D)^{3}}, \tag{25}\]
where \(G\) is the gravitational constant. Ignoring the correlation of each parameter and observational errors, we derive a relationship between each parameter and its standard error,
\[\left(\frac{\sigma_{\rm BH}}{m_{\rm BH}}\right)^{2}=\left(\frac{3}{2}-\frac{m _{\rm BH}}{m_{\rm BH}+m_{\rm LC}}\right)^{-2}\left[\left(\frac{m_{\rm LC}}{m_ {\rm BH}+m_{\rm LC}}\right)^{2}\frac{\sigma_{LC}^{2}}{m_{\rm LC}^{2}}+\frac{ \sigma_{P}^{2}}{P^{2}}+\frac{9}{4}\left(\frac{\sigma_{axis}^{2}}{a_{*}^{2}}+ \frac{\sigma_{D}^{2}}{D^{2}}\right)\right]. \tag{26}\]
where \(\sigma\) denotes a standard error and each subscript refers to the corresponding binary parameter.
For confident detection of BHs, we impose a condition that the error of each parameter must be smaller than 10 % of the true value,
\[\frac{\sigma_{\rm LC}}{m_{\rm LC}}<0.1,\frac{\sigma_{P}}{P}<0.1,\frac{\sigma_ {axis}}{a_{*}}<0.1,\ \ {\rm and}\ \ \frac{\sigma_{D}}{D}<0.1. \tag{27}\]
Under these requirements, unseen objects with \(m_{\rm BH}\gtrsim 3.4M_{\odot}\) can be confirmed as BHs.
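The propagation of the 10 % fractional errors through equation (26) and the \(2M_{\odot}\) condition of equation (24) can be checked numerically with a short script like the one below. The companion mass and the grid of BH masses in the example are illustrative assumptions, so the exact threshold it returns need not coincide with the \(3.4M_{\odot}\) value quoted above.

```python
import numpy as np

def sigma_mbh_frac(m_bh, m_lc, f_lc=0.1, f_p=0.1, f_ax=0.1, f_d=0.1):
    """Fractional standard error of the BH mass from equation (26)."""
    m_tot = m_bh + m_lc
    prefac = (1.5 - m_bh / m_tot) ** -2
    bracket = (m_lc / m_tot) ** 2 * f_lc ** 2 + f_p ** 2 + 2.25 * (f_ax ** 2 + f_d ** 2)
    return np.sqrt(prefac * bracket)

def confirmed_as_bh(m_bh, m_lc, n=1):
    """Condition (24): the n-sigma lower bound on the unseen mass exceeds 2 Msun."""
    return m_bh - n * sigma_mbh_frac(m_bh, m_lc) * m_bh > 2.0

# assumed 1 Msun companion and an illustrative grid of BH masses:
for m_bh in (2.5, 3.0, 3.5, 5.0, 10.0):
    print(m_bh, confirmed_as_bh(m_bh, m_lc=1.0))
```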
The conditions for LC mass and orbital period are easily satisfied. According to Tetzlaff et al. (2011), the standard error of an LC mass based on its spectrum and luminosity is typically smaller than 10 %. Furthermore, the standard error of the orbital period is suppressed to below 10 % if the observed period is shorter than 2/3 of the operation time of _Gaia_ (ESA, 1997). Since Lucy (2014) and O'Neil et al. (2019) proposed a novel technique to estimate binary parameters when the orbital coverage is less than 40 %, and _Gaia_ has been observing for more than five years, we employ 10 years as the maximum period of observable BH-LC binaries. For the lower limit of orbital periods, we set 50 days, following Yamaguchi et al. (2018). The rest of the conditions in equation (27) impose two more constraints on \(D_{\rm max}\). First, considering that the parallax \(\Pi\) is proportional to the reciprocal of \(D\), the ratio of the standard error of parallax \(\sigma_{\Pi}\) to \(\Pi\) can be approximated by that of \(\sigma_{D}\) to \(D\),
\[\frac{\sigma_{\Pi}}{\Pi}\sim\frac{\sigma_{D}}{D}<0.1. \tag{28}\]
Gaia Collaboration et al. (2016) provided \(\sigma_{\Pi}\) in G band as a function of the apparent magnitude of a LC \(m_{\rm v}\) and we employ the expression below ignoring the dependence on the color \(V-I\),
\[\sigma_{\Pi}=(-1.631+680.8z(m_{\rm v})+32.73z(m_{\rm v})^{2})^{1/2}[\mu{\rm as }], \tag{29}\]
where
\[z(m_{\rm v})=10^{0.4({\rm max}[12.09,m_{\rm v}]-15)}. \tag{30}\]
Combining equations (28) and (29), the second constraint for \(D_{\rm max}\) is
\[\left(\frac{D_{\rm max}}{{\rm kpc}}\right)<D_{\Pi}=\frac{10^{2}}{(-1.631+680.8 z(m_{\rm v})+32.73z(m_{\rm v})^{2})^{1/2}}. \tag{31}\]
Finally, for the condition on the angular semi-major axis, we approximate the uncertainty of the angular semi-major axis of a BH binary, \(\sigma_{a*}\), by the uncertainty of its orbital radius on the celestial sphere, which we take to be \(\sigma_{\Pi}\). Then, the final condition for \(D_{\rm max}\) can be obtained,
\[\left(\frac{D_{\rm max}}{{\rm kpc}}\right)<D_{a}=\frac{am_{\rm BH}}{10(m_{\rm BH }+m_{\rm LC})\sigma_{\Pi}}. \tag{32}\]
In summary, we obtain three constraints for \(D_{\rm max}\): \(D_{\rm LC}\) (equation 23), \(D_{\Pi}\) (equation 31), and \(D_{a}\) (equation 32). For each BH binary sample, we compare the minimum of the three to the current distance to determine whether the binary is detectable.
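A minimal sketch of how the parallax-based limits can be evaluated is given below: it implements the \(\sigma_{\Pi}\) fit of equations (29)-(30) and the constraints \(D_{\Pi}\) and \(D_{a}\) of equations (31)-(32). The unit bookkeeping in `D_astrometric_kpc` (semi-major axis in au, \(\sigma_{\Pi}\) converted to arcsec, distance in pc) is an assumption made to keep the example self-consistent, and the example numbers are placeholders.

```python
import numpy as np

def sigma_parallax_uas(m_v):
    """Parallax standard error of equations (29)-(30), in micro-arcsec."""
    z = 10.0 ** (0.4 * (max(12.09, m_v) - 15.0))
    return np.sqrt(-1.631 + 680.8 * z + 32.73 * z ** 2)

def D_parallax_kpc(m_v):
    """Equation (31): distance limit from sigma_Pi / Pi < 0.1."""
    return 100.0 / sigma_parallax_uas(m_v)

def D_astrometric_kpc(a_au, m_bh, m_lc, m_v):
    """Equation (32): distance limit from the 10% requirement on the angular semi-major axis.
    With a in au and sigma_Pi in arcsec the formula yields D in pc, converted here to kpc."""
    sigma_arcsec = sigma_parallax_uas(m_v) * 1.0e-6
    d_pc = a_au * m_bh / (10.0 * (m_bh + m_lc) * sigma_arcsec)
    return d_pc / 1.0e3

def D_max_kpc(D_lc_kpc, a_au, m_bh, m_lc, m_v):
    """Most restrictive of the three constraints D_LC, D_Pi and D_a."""
    return min(D_lc_kpc, D_parallax_kpc(m_v), D_astrometric_kpc(a_au, m_bh, m_lc, m_v))

# illustrative numbers only:
print(D_max_kpc(D_lc_kpc=3.0, a_au=1.5, m_bh=8.0, m_lc=1.0, m_v=16.0))
```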
## 3 Result
Based on the results of BSE and the orbit calculations, we obtain the spatial distributions and binary parameters of the Galactic BH-LC binaries. We summarize in Table 1 the number of BH-LC binaries in the MW with orbital periods of 50 days to 10 years and the detectability, for each SN/kick model and a choice of \(\alpha\). The number of detectable binaries for each model is several times larger than estimated in our previous work (Shikauchi et al., 2022), which can be explained by the following differences between the two works. In this work, we have considered a realistic star formation history instead of a constant star formation rate. That drastically increases the number of BH binaries with low mass LCs (\(m_{\rm LC}\lesssim 1M_{\odot}\)), and also shows different BH/LC mass distributions from our previous work. Binary and spatial parameter distributions are shown in Appendix A. We have also employed a radial distribution with the number density proportional to \(R\exp(-R)\), while the previous work adopted the distribution proportional to \(\exp(-R)\). This effectively enhances the number of sources closer to us. Furthermore, while the previous work employed a single metallicity value of solar for all binaries, here we have considered the metallicity to vary as a function of radius and lookback time. As for binaries born in the past with generally lower metallicity, progenitors with smaller ZAMS masses can evolve into BHs instead of NSs due to reduced mass loss. In addition, the number of heavier BHs will increase, which would make the binary easier to detect.
In order to evaluate the correlation between each binary parameter and spatial parameters, we calculate the "weighted" Pearson correlation coefficients,
\[\rho_{XY,w}=\frac{{\rm cov}(X,Y,w)}{\sqrt{\sigma_{X,w}\sigma_{Y,w}}}, \tag{33}\]
\begin{table}
\begin{tabular}{c c c|c c c} \hline SN model & kick & \(\alpha\) & \(N_{\rm BH-LC,MW}\) & \(N_{\rm det}\) & Shikauchi et al. (2022) \\ \hline delayed & FB kick & 1 & \(3.83\times 10^{3}\) & \(7.22^{+5.98}_{-5.50}\) & 1.1 \\ \(\ldots\) & no kick & \(\ldots\) & \(1.04\times 10^{4}\) & \(51.9^{+5.52}_{-10.9}\) & 22 \\ rapid & FB kick & \(\ldots\) & \(9.14\times 10^{3}\) & \(60.7^{+11.6}_{-6.03}\) & 18 \\ \hline delayed & FB kick & 10 & \(7.68\times 10^{3}\) & \(14.6^{+10.9}_{-0.19}\) & 9.4 \\ \(\ldots\) & no kick & \(\ldots\) & \(3.53\times 10^{4}\) & \(67.4^{+11.4}_{-7.21}\) & 46 \\ rapid & FB kick & \(\ldots\) & \(1.43\times 10^{4}\) & \(92.0^{+6.79}_{-6.67}\) & 31 \\ \hline \end{tabular}
\end{table}
Table 1: The number of BH-LC binaries in the MW \(N_{\rm BH-LC,MW}\) with \(P\) between 50 days and 10 years, and those detectable with _Gaia_, \(N_{\rm det}\), for different choices of SN/kick models and values of the CE efficiency \(\alpha\). The numbers and errors in \(N_{\rm det}\) correspond to the median and the spread between the 10th and 90th percentiles for the 10 realizations of the kick orientation.
where \(X,Y\) are choices of binary parameters and spatial information, \(\mathrm{cov}(X,Y,w)\) is a weighted covariance matrix of \(X\) and \(Y\),
\[\mathrm{cov}(X,Y,w)=\frac{\sum\limits_{i}(w_{i}\times(X_{i}-\bar{X})\times(Y_{i} -\bar{Y}))}{\sum\limits_{i}w_{i}}, \tag{34}\]
\(w\) is the weighting factor for each binary (see equation 18), \(\bar{X},\bar{Y}\) are the weighted means of \(X\) and \(Y\), and \(\sigma_{X,w},\sigma_{Y,w}\) are the weighted variances of \(X\) and \(Y\), _i.e._ \(\mathrm{cov}(X,X,w)\) and \(\mathrm{cov}(Y,Y,w)\).
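For reference, equations (33)-(34) can be implemented directly as below; the toy samples and weights in the usage example are randomly generated and are not data from this work.

```python
import numpy as np

def weighted_cov(x, y, w):
    """Weighted covariance of equation (34)."""
    xbar = np.average(x, weights=w)
    ybar = np.average(y, weights=w)
    return np.sum(w * (x - xbar) * (y - ybar)) / np.sum(w)

def weighted_pearson(x, y, w):
    """Weighted Pearson correlation coefficient of equation (33);
    cov(X,X,w) and cov(Y,Y,w) play the role of the weighted variances."""
    return weighted_cov(x, y, w) / np.sqrt(weighted_cov(x, x, w) * weighted_cov(y, y, w))

# toy usage with made-up samples and weights:
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)
w = rng.uniform(0.1, 1.0, size=1000)
print(weighted_pearson(x, y, w))   # close to 0.5/sqrt(1.25) ~ 0.45 for this toy model
```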
The coefficients of the detectable BH-LC binaries with each SN/kick model and value of the CE efficiency \(\alpha\) are summarized in Figure 2. Values of the coefficients are categorized into seven levels: "strongly positive correlation" (\(1.0\sim 0.7\)), "positive correlation" (\(0.7\sim 0.4\)), "weakly positive correlation" (\(0.4\sim 0.2\)), "no correlation" (\(0.2\sim-0.2\)), "weakly negative correlation" (\(-0.2\sim-0.4\)), "negative correlation" (\(-0.4\sim-0.7\)), and "strongly negative correlation" (\(-0.7\sim-1\)). Figure 3 shows correlation coefficients for the entire Galactic binary population with orbital periods from 50 days to 10 years, for each SN/kick model and \(\alpha\). Most of them show no correlations. Correlation coefficients seen in the detectable BH-LC binaries have the opposite sign and/or are enhanced compared with the correlations among the Galactic BH-LC population. Thus, most of the correlations are generally biased by the detection criteria.
Here, we look into significant correlations of the detectable BH-LC binaries in each model. In the delayed SN model with FB kick and \(\alpha=1\),
1. strongly positive correlation of \((P,e)\),
2. positive correlations of \((m_{\mathrm{BH}},m_{\mathrm{LC}})\), \((P,Z/Z_{\odot})\), and \((e,Z/Z_{\odot})\),
3. negative correlations of \((m_{\mathrm{BH}},|v_{z}|)\) and \((m_{\mathrm{LC}},|v_{z}|)\),
are seen. The strongly positive correlation can be understood based on the positive correlations of \((P,Z/Z_{\odot}),(e,Z/Z_{\odot})\). Heavier BH binaries are formed in lower metallicity, suffering from smaller fallback kick.
Figure 2: Correlation coefficients between the current binary parameters (BH mass \(m_{\mathrm{BH}}\), LC mass \(m_{\mathrm{LC}}\), orbital periods \(P\), and eccentricities \(e\)), the current spatial parameters (velocities perpendicular to the Galactic plane \(|v_{z}|\) and the heights from the Galactic plane \(|z|\)), and metallicity \(Z/Z_{\odot}\) of the detectable BH-LC binaries, for different choices of the SN/kick models and values of the CE efficiency \(\alpha\).
This results in less eccentric and narrower orbits compared to binaries with lighter BHs. The positive correlation of \((m_{\rm BH},m_{\rm LC})\) is highlighted by the detection criteria. Heavier BHs can swing heavier LCs around on larger orbits and are thus more detectable.
The negative correlation of \((m_{\rm BH},|v_{z}|)\) is easily understandable, considering that lighter BHs suffer from a larger FB kick. This trend matches the observations of the Galactic XRBs (Gandhi et al., 2019). The negative correlation of \((m_{\rm LC},|v_{z}|)\) can be interpreted by noting that the peculiar motion of the binary is proportional to \(m_{\rm BH}/(m_{\rm BH}+m_{\rm LC})\).
In the rapid SN model with FB kick and \(\alpha=1\),
1. positive correlations of \((m_{\rm BH},|z|)\) and \((P,e)\),
2. negative correlations of \((m_{\rm BH},Z/Z_{\odot})\) and \((|z|,Z/Z_{\odot})\)
are seen. The positive correlation of \((P,e)\) exists as well, but can be interpreted differently from the delayed SN model with FB kick. In the rapid SN model, the natal kick is not as strong as in the delayed SN model. Thus, BH binaries experiencing the CE phase simply have smaller eccentricities and narrower orbits.
The weaker natal kick in the rapid model also explains the positive correlation of \((m_{\rm BH},|z|)\). In the delayed SN model with FB kick, lighter BH binaries can move farther away from the Galactic plane due to strong FB kick. However, such light BHs are rarely formed in the rapid SN model and BH binaries do not go farther. Rather, the detection criteria highlight the fact that heavier BH binaries are detectable at farther distances according to equation 32.
The negative correlation of \((m_{\rm BH},Z/Z_{\odot})\) is easily understood considering heavier BHs are formed in lower metallicities. The negative correlation of \((|z|,Z/Z_{\odot})\) can be interpreted according to the correlations of \((m_{\rm BH},Z/Z_{\odot}),(m_{\rm BH},|z|)\).
Comparing with the result in the delayed SN model with FB kick, the correlation coefficients of \((m_{\rm BH},|z|)\) (\(-0.18\) with the delayed SN model, \(0.54\) with the rapid SN model) have the opposite signs. As mass gap BHs (\(m_{\rm BH}\lesssim 5M_{\odot}\)) will be detectable only in the delayed SN model, the distribution of \((m_{\rm BH},|z|)\) would be a powerful tool to constrain the SN model.
In the delayed SN model with no kick and \(\alpha=1\), there are
1. positive correlations of \((m_{\rm BH},P)\) and \((m_{\rm LC},Z/Z_{\odot})\),
2. negative correlations of \((m_{\rm BH},e)\), \((m_{\rm BH},Z/Z_{\odot})\), and \((P,e)\).
The positive correlation of \((m_{\rm BH},P)\) can be interpreted as follows. For light BH binaries, BHs are formed after the CE phase. On the other hand, heavier BH binaries (\(m_{\rm BH}\gtrsim 10M_{\odot}\)) do not experience the CE phase.
Figure 3: Same as Figure 2, but for the entire Galactic BH-LC binary population with orbital periods from 50 days to 10 years.
They cannot survive if they enter the phase, as shown below. Since heavy BHs are formed in low metallicity, heavy BH binaries are typically born in the distant past. They tend to have low mass LCs (\(m_{\rm LC}\lesssim 1M_{\odot}\)), otherwise they cannot exist as BH-LC binaries until the present day. However, the ZAMS masses of the progenitors of these heavy BHs are as large as \(\gtrsim\) tens of \(M_{\odot}\). Here, we roughly estimate the final orbital separations if such high mass ratio binaries enter the CE phase. Considering the \(\alpha\lambda\) prescription of the CE phase, the binding energy of a binary at the beginning of the CE phase is roughly proportional to the orbital energy of a binary at the end of the phase. Orbital separations at the final stage of the CE phase \(a_{\rm f}\) can be approximated as
\[a_{\rm f} = \frac{m_{\rm prim,core}m_{\rm second,ZAMS}}{2}\times\left(\frac{m_ {\rm prim,i}m_{\rm prim,env}}{\alpha\lambda R}+\frac{m_{\rm prim,i}m_{\rm second,ZAMS}}{2a_{\rm i}}\right)^{-1} \tag{35}\] \[\sim \frac{\alpha\lambda}{2}\times\frac{m_{\rm second,ZAMS}m_{\rm prim,core}}{m_{\rm prim,i}m_{\rm prim,env}}R, \tag{36}\]
where we have defined the initial secondary mass \(m_{\rm second,ZAMS}\), the primary mass at the beginning of the CE phase \(m_{\rm prim,i}\), the envelope mass of the primary \(m_{\rm prim,env}\), the core mass of the primary \(m_{\rm prim,core}\), the orbital separation at the beginning of the phase \(a_{\rm i}\), and the Roche lobe radius of the primary \(R\). Assuming that mass loss is negligible at low metallicity, \(m_{\rm prim,i}\sim m_{\rm prim,ZAMS}\), and \(m_{\rm prim,core}/m_{\rm prim,env}\sim 0.5\) is almost independent of the primary mass (_e.g._ section 4.2 in Sukhbold et al. 2018). Since \(R\) is approximately tens of solar radii and \(\lambda\sim 0.4\), \(a_{\rm f}\lesssim 0.1R_{\odot}\) with \(\alpha=1\), and \(\lesssim R_{\odot}\) even for \(\alpha=10\). This is smaller than the core radius of the primary, \(\sim R_{\odot}\), which leads high mass ratio binaries with \(m_{\rm second,ZAMS}/m_{\rm prim,ZAMS}\ll 1\) to merge. Thus, existing binaries with heavy BHs are limited to longer orbital periods for which they do not experience the CE phase.
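The order-of-magnitude statement above can be reproduced with a few lines of arithmetic. The sketch below evaluates equation (36) with the quoted typical values (\(m_{\rm prim,core}/m_{\rm prim,env}\sim 0.5\), \(\lambda\sim 0.4\), \(R\) of a few tens of solar radii); the specific ZAMS masses are illustrative choices rather than values taken from the simulations.

```python
def a_final(alpha, lam, m_second, m_prim, R, core_to_env=0.5):
    """a_f from equation (36), with m_prim,i ~ m_prim,ZAMS (negligible mass loss)."""
    return 0.5 * alpha * lam * (m_second / m_prim) * core_to_env * R

R = 30.0                      # Roche-lobe radius of the primary in solar radii ("tens of R_sun")
for alpha in (1.0, 10.0):
    a_f = a_final(alpha, lam=0.4, m_second=1.0, m_prim=30.0, R=R)
    print(f"alpha = {alpha:4.1f}:  a_f ~ {a_f:.2f} R_sun")
# alpha = 1 gives ~0.1 R_sun and alpha = 10 gives ~1 R_sun, comparable to or below the
# ~R_sun core radius of the primary, so such a high mass ratio binary merges in the CE.
```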
The positive correlation of \((m_{\rm LC},Z/Z_{\odot})\) is easily explained by the fact that only massive LCs that were born recently can survive until today. The negative correlation of \((m_{\rm BH},e)\) is understandable, as lighter BH binaries suffer from larger Blaauw kicks. Combining the correlations of \((m_{\rm BH},P)\) and \((m_{\rm BH},e)\), the negative correlation of \((P,e)\) is reasonable.
Comparing the correlation coefficients in the delayed SN model with/without FB kick, the correlation of \((P,e)\) (0.73 with the FB kick model, \(-0.66\) without FB kick) is significant and shows the opposite trend. The correlation of \((m_{\rm BH},|z|)\) (\(-0.18\) with FB kick, 0.15 without FB kick) might be a clue for the strength of natal kick as well, although it is less significant. Due to the absence of FB kick, binaries with light BHs do not receive as large a kick or move as far as in the model including FB kick. The detection criteria highlight that heavier BH binaries can be detected farther away, which might result in the very weakly positive correlation of \((m_{\rm BH},|z|)\). Thus, we expect that checking these correlations in the observed BH-LC samples would constrain the strength of natal kicks.
In the delayed SN model with FB kick and \(\alpha=10\),
1. a positive correlation of \((P,e)\),
2. a negative correlation of \((m_{\rm BH},Z/Z_{\odot})\)
exist. Correlations of \((P,e)\) and \((m_{\rm BH},Z/Z_{\odot})\) are still seen in the higher CE efficiency case. The correlation of \((m_{\rm BH},|z|)\) is somewhat blurred, but still exists. That might be because most of the light BH binaries seen in the \(\alpha=1\) case will be disrupted during BH formation. Since they experience the CE phase, their orbits after the phase will be wider for higher \(\alpha\), and hence easier to disrupt. Correlations of \((P,e)\) and \((m_{\rm BH},|z|)\) could be a clue for the strength of natal kick and the SN model, regardless of the CE efficiency.
With the high CE efficiency in the rapid SN model, we see
1. positive correlations of \((m_{\rm BH},|z|)\) and \((e,Z/Z_{\odot})\),
2. a negative correlation of \((m_{\rm BH},Z/Z_{\odot})\).
All the significant correlations follow or enhance the trend in the \(\alpha=1\) case. As the correlation of \((m_{\rm BH},|z|)\) retains the same trend as in the \(\alpha=1\) case, it would be useful to constrain the SN model even if the CE efficiency is high.
Finally, in the delayed SN model without FB kick and \(\alpha=10\),
* negative correlations of \((m_{\rm BH},e)\) and \((m_{\rm BH},Z/Z_{\odot})\)
are seen. They follow the trend seen in the \(\alpha=1\) case. The correlations \((P,e)\) and \((m_{\rm BH},|z|)\) are somewhat blurred compared to the \(\alpha=1\) case, but still exist. The trend of \((P,e)\) could be explained as follows. Due to the high CE efficiency, lighter BH binaries can survive the CE phase and their final orbits can be as wide as those of the heavy BH binaries seen in \(\alpha=1\). That blurs the correlation of \((m_{\rm BH},P)\), and in turn the correlation of \((P,e)\). Also, the contribution of light BH binaries with larger orbital periods might explain the trend of \((m_{\rm BH},|z|)\). As a number of light BH binaries can have longer orbital periods, they are easier to detect compared to the \(\alpha=1\) case. That would weight the distribution of \((m_{\rm BH},|z|)\) toward light BHs, blurring the correlation of \((m_{\rm BH},|z|)\). Nonetheless, the correlations of \((P,e)\) and \((m_{\rm BH},|z|)\) would give us a clue for the strength of FB kick in the high CE efficiency case.
In summary, correlations of
* \((m_{\rm BH},|z|)\) (\(-0.18\) in the delayed SN model, \(0.54\) in the rapid SN model with FB kick and \(\alpha=1\))
have the opposite signs by the choice of SN model. Considering mass gap BHs can be detected only in the delayed SN model, the distribution of \((m_{\rm BH},|z|)\) might provide an important clue to constrain the SN model.
Correlations of
1. \((P,e)\) (\(0.73\) and \(-0.66\) for delayed SN model with and without FB kick for \(\alpha=1\) case respectively),
2. \((m_{\rm BH},|z|)\) (\(-0.18\) with FB kick, \(0.15\) without FB kick in \(\alpha=1\))
have the opposite signs depending on the existence of FB kick. Thus these correlations would give a constraint on the strength of natal kick. All of the trends summarized above would be preserved even if the CE efficiency is high.
Finally, we investigated how eccentric the motions of BH-LC binaries are in the Galactic potential. A characteristic quantity we defined is the "galactic eccentricity" \(e_{\rm gal}\). It is defined from the maximum and minimum radii \(r_{\rm max},r_{\rm min}\) that each binary has reached during its lifetime, \(e_{\rm gal}\equiv(r_{\rm max}-r_{\rm min})/(r_{\rm max}+r_{\rm min})\). For all the SN/kick models with \(\alpha=1\), almost all (\(\gtrsim 99\) %) of the binaries have almost circular (\(e_{\rm gal}<0.1\)) motion in the Galactic potential, as shown in Figure 1. In the SN models with FB kick, \(0.5\) % (delayed) and \(0.1\) % (rapid) of the binaries have eccentric orbits with \(e_{\rm gal}>0.5\). An example of the Galactic path for one of these binaries is shown in Figure 4.
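The galactic eccentricity is straightforward to evaluate once an orbit has been integrated. The sketch below uses a leapfrog integrator in a simple logarithmic potential as a stand-in for the full Galactic potential adopted in this work; the circular speed and the initial conditions are illustrative assumptions only.

```python
import numpy as np

def galactic_eccentricity(x0_kpc, v0_kms, v_circ_kms=220.0, dt_myr=0.1, n_steps=100000):
    """e_gal = (r_max - r_min)/(r_max + r_min) along an orbit integrated with leapfrog
    in a logarithmic potential (acceleration a = -v_c^2 x / r^2)."""
    kms_to_kpc_per_myr = 1.023e-3
    x = np.asarray(x0_kpc, dtype=float).copy()
    v = np.asarray(v0_kms, dtype=float) * kms_to_kpc_per_myr
    vc = v_circ_kms * kms_to_kpc_per_myr
    acc = lambda pos: -vc ** 2 * pos / np.dot(pos, pos)
    a = acc(x)
    r_min = r_max = np.linalg.norm(x)
    for _ in range(n_steps):
        v += 0.5 * dt_myr * a
        x += dt_myr * v
        a = acc(x)
        v += 0.5 * dt_myr * a
        r = np.linalg.norm(x)
        r_min, r_max = min(r_min, r), max(r_max, r)
    return (r_max - r_min) / (r_max + r_min)

# nearly circular orbit vs. an orbit given an extra radial velocity (illustrative values):
print(galactic_eccentricity([8.0, 0.0, 0.0], [0.0, 220.0, 0.0]))   # ~0
print(galactic_eccentricity([8.0, 0.0, 0.0], [80.0, 220.0, 0.0]))  # noticeably eccentric
```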
## 4 Comparison with the confirmed BH binary and the BH candidates with _Gaia_
In this section, we compare our results with BH candidates reported in _Gaia_ DR3. First, we review the candidates found in spectroscopic or astrometric data. Then, we select some candidates among them and discuss how they can be formed from isolated field binaries.
From the data of single-lined spectroscopic binaries, the _Gaia_ collaboration reported possible candidates of binaries consisting of compact objects with MS or post-MS stars (Gaia Collaboration et al., 2022).
Figure 4: The same as Figure 1, but for a binary traveling the MW with an eccentric orbit of \(e_{\rm gal}\sim 0.5\) (see main text for the definition of \(e_{\rm gal}\)).
However, El-Badry & Rix (2022) immediately rejected the possibility of hosting BHs for all of the BH-MS star binary candidates by combining other spectroscopic data. Jayasinghe et al. (2022) selected 234 single-lined binaries and investigated the possibility of BHs, rejecting the possibility for all the candidates. Though the BH-MS star candidates in Gaia Collaboration et al. (2022) have been rejected, the BH-post MS star candidates reported in Gaia Collaboration et al. (2022) and the ellipsoidal variables (Gomel et al., 2022) still remain. For BH-post MS star binaries it is difficult to estimate the masses of the post-MS stars. Thus we can only obtain lower limits on the masses of the unseen objects, which makes it hard to identify BH-post MS star binary candidates. Follow-up observations with spectroscopy at other wavelengths will provide details on the light curves of the LCs and then reveal their nature by fitting models to the spectra, as El-Badry and Rix (2022) did.
Also, some studies found BH-LC binary candidates in _Gaia_ DR3 data. Andrews et al. (2022) found 24 candidates possibly including BHs or NSs, with long orbital periods of order years, from astrometric binaries. Shahaf et al. (2022) applied their own triage technique (Shahaf et al., 2019) to the astrometric data, and found eight candidates including massive unseen objects heavier than \(2.4M_{\odot}\). Tanikawa et al. (2022) reported the existence of a BH-LC binary candidate with the longest orbital period among the reported BH candidates so far. El-Badry et al. (2023) confirmed _Gaia_ DR3 \(4373465352415301632\) (hereafter _Gaia_ BH 1), a binary with a BH of \(m_{\rm BH}=9.78\pm 0.18M_{\odot}\) and a G dwarf star of \(m_{\rm LC}=0.93\pm 0.05M_{\odot}\), with orbital period \(P=185.63\pm 0.05\) days and a modest eccentricity \(e\sim 0.45\). This binary is located 480 pc from the Earth, and is identified as the nearest BH currently observed. Chakrabarti et al. (2022) also rejected any possibility of a luminous star for the unseen object, and confirmed _Gaia_ BH 1 as a BH-MS star binary.
Including _Gaia_ BH 1, we select the candidates with the upper limit of compact object mass larger than \(3M_{\odot}\) from Andrews et al. (2022); Shahaf et al. (2022); Tanikawa et al. (2022) since BSE considers compact objects heavier than \(3M_{\odot}\) as BHs. These BH candidates can be roughly divided into two types in terms of component mass and orbital period: one is \(\lesssim 4M_{\odot}\) BH and \(\lesssim 1.5M_{\odot}\) LC binaries with long orbital periods (\(P\sim 1.5-4\) years) and non-zero eccentricities (type 1) and the other is \(\gtrsim 9M_{\odot}\) BH and \(\lesssim 1.2M_{\odot}\) LC binaries with short orbital periods (\(P\lesssim 1\) year) and non-zero eccentricities (type 2). The latter type includes _Gaia_ BH 1. We summarize the BH candidates and _Gaia_ BH 1 in Table 2. We note that the LC mass of the candidate reported in Tanikawa et al. (2022) is not estimated, so we do not categorize it as either type.
We found that the delayed SN model with no natal kick and \(\alpha=10\) stably forms both types of BH binaries. Based on our simulation, \(\sim 3\times 10^{4}\) type 1-like binaries and \(\sim 900\) type 2-like binaries are expected to exist in the MW. Figure 5 shows examples of the evolutionary path for both types of BH binaries in the delayed SN model with no kick and \(\alpha=10\). The evolutionary path for both types of binaries is almost the same: they experience the CE phase before forming BHs. The difference is that the BH mass of type 1-like binaries is lighter. Thus, their mass loss kick (_i.e._ the Blaauw kick) is larger than that of type 2-like ones, which makes the orbits of type 1-like binaries wider and more eccentric.
\begin{table}
\begin{tabular}{c|c c c c c} \hline _Gaia_ ID & BH mass [\(M_{\odot}\)] & LC mass [\(M_{\odot}\)] & \(P\) [days] & \(e\) & type \\ \hline \(4314242838679237120^{\ast}\)1 & \(2.25^{+1.87}_{-0.84}\) & \(0.63-1.00\) & \(1146\pm 382\) & \(0.70\pm 0.09\) & 1 \\ \(5593444799901901696^{\ast}\)1 & \(2.57^{+0.86}_{-0.69}\) & \(1.27\pm 0.2\) & \(1039\pm 292\) & \(0.44\pm 0.14\) & 1 \\ \(6328149636482597888^{\ast}\)1 & \(2.71^{+1.50}_{-0.36}\) & \(1.21\pm 0.2\) & \(736\pm 23\) & \(0.14\pm 0.07\) & 1 \\ \(6281177228434199296^{\ast}\)2 & \(11.9\pm 1.5\) & \(1.0\) & \(153.95\pm 0.36\) & \(0.180\pm 0.042\) & 2 \\ \(3509370326763016704^{\ast}\)2 & \(3.69\pm 0.24\) & \(0.7\) & \(109.392\pm 0.065\) & \(0.237\pm 0.016\) & 1 \\ \(6802561484797464832^{\ast}\)2 & \(3.08\pm 0.84\) & \(1.2\) & \(574.8\pm 6.2\) & \(0.830\pm 0.071\) & 1 \\ \(3263804373319076480^{\ast}\)2 & \(2.75\pm 0.50\) & \(1.0\) & \(510.7\pm 4.7\) & \(0.278\pm 0.023\) & 1 \\ \(6601396177408279040^{\ast}\)2 & \(2.57\pm 0.50\) & \(1.0\) & \(533.5\pm 2.0\) & \(0.791\pm 0.043\) & 1 \\ \(4373465352415301632^{\ast}\)3 & \(9.78\pm 0.18\) & \(0.93\pm 0.05\) & \(185.63\pm 0.05\) & \(0.454\pm 0.005\) & 2 \\ \(5870569352746779008^{\ast}\)4 & \(>5.25\) & & \(1352.25\pm 45.50\) & \(0.5324\pm 0.0095\) & \\ \hline \end{tabular}
\end{table}
Table 2: Information of the BH candidates reported in _Gaia_ DR 3 whose BH mass exceeds \(3M_{\odot}\) at the upper limit and _Gaia_ BH 1 (El-Badry et al., 2023). Binary parameters of the candidates are based on the _Gaia_ DR3 database. We cited parameters estimated in El-Badry et al. (2023) for the information of _Gaia_ BH 1.
\(\ast\)1: reported in Andrews et al. (2022), \(\ast\)2: reported in Shahaf et al. (2022), \(\ast\)3: reported in El-Badry et al. (2023), and \(\ast\)4: Tanikawa et al. (2022).
The delayed SN model with FB kick and \(\alpha=10\) may also form both types of binaries. While type 1-like binaries are stably formed, type 2-like binaries were sometimes born if the strength of the natal kick is relatively small, such as tens of \(\mathrm{km\,s^{-1}}\) to \(135\,\mathrm{km\,s^{-1}}\). The existence of a natal kick can make the orbits of type 2-like binaries more eccentric (\(e\sim 0.3-0.6\)) and narrower (\(P\sim 200\) days), more similar to _Gaia_ BH 1.
However, the other models, _i.e._ the rapid SN model regardless of the CE efficiency or the delayed SN model with the low CE efficiency, cannot form both types of binaries. Light BH binaries with long orbital periods cannot be formed in the rapid SN model. Some BHs as light as \(\lesssim 4M_{\odot}\) are formed in the rapid SN model via accretion-induced collapse, but their orbital periods are shorter than 1 year. Thus, if we confirm that the candidates of type 1-like binaries possess BHs, SN models producing mass gap BHs, like the delayed SN model, are favored. In the delayed SN model with low CE efficiency, if one attempts to form type 2-like binaries with heavier BHs, their final orbital periods become \(\sim 10\) days, much shorter than observed.
## 5 Conclusion
We investigated correlations between binary parameters (BH mass, LC mass, orbital periods, and eccentricities), spatial parameters (velocities perpendicular to the Galactic plane, and the heights from the Galactic plane), and metallicity of BH-LC binaries detectable with _Gaia_. By sampling initial spatial conditions, metallicity and lookback time distributions based on Wagg et al. (2021), then simulating binary evolution with BSE and the orbit of the binary under the Galactic potential, we obtained the BH-LC binary population in the MW.
We conclude that most of the correlation coefficients among the detectable binaries have the opposite sign and/or are enhanced by the detection criteria, since the correlation coefficients among the Galactic population show almost no correlations. Nevertheless, we indicated that some correlations might probe the SN model and the strength of natal kick regardless of the CE efficiency. The correlation of \((m_{\mathrm{BH}},|z|)\) would be a clue for the SN model if a strong natal kick like the FB kick exists. In the delayed SN model light BHs (\(m_{\mathrm{BH}}\lesssim 4M_{\odot}\)) are formed, and binaries possessing such BHs travel farther from the Galactic plane due to the strong kick, resulting in a negative correlation. On the other hand, in the rapid SN model, light BHs are rarely formed and the natal kick is not as strong, thus BH binaries do not move far away from the Galactic plane. The detection criteria simply emphasize that heavier BH binaries can be detected at farther distances.
Figure 5: Examples of evolutionary paths of a type 1-like BH binary (_e.g._ \(\lesssim 4M_{\odot}\) BH and \(\lesssim 1.5M_{\odot}\) LC binaries with long orbital periods (\(P\sim 1.5-4\) years) and non-zero eccentricities) and a type 2-like BH binary (_e.g._ \(\gtrsim 9M_{\odot}\) BH and \(\lesssim 1.2M_{\odot}\) LC binaries with short orbital periods (\(P\lesssim 1\) year) and non-zero eccentricities). Both types of binaries experience the CE phase and finally form different ranges of BH mass, which makes a difference in orbital separations and eccentricities depending on the strength of the mass loss kick.
The signs of the correlations of \((P,e)\) and \((m_{\rm BH},|z|)\) vary depending on the existence of FB kick, which would be useful to constrain the strength of natal kick. With FB kick, light BH binaries suffer from a strong kick, which makes their orbits more eccentric and wider. On the other hand, in the absence of FB kick, light BH binaries can remain tighter than those in the same SN model with FB kick, resulting in a positive correlation of \((m_{\rm BH},P)\) and the opposite correlation of \((P,e)\). The trend of \((m_{\rm BH},|z|)\) might be understandable for a similar reason to that discussed for the rapid SN model, _i.e._ light BH binaries cannot travel as far from the Galactic plane as those with FB kick, and heavier BH binaries can be detected at farther distances.
Using BH-LC samples we employed here, we also investigated the possibility of forming binaries like the BH candidates reported in _Gaia_ DR3 (Andrews et al., 2022; Shahaf et al., 2022) and _Gaia_ BH 1 (El-Badry et al., 2023) in each SN/kick model with a choice of \(\alpha\) used in this work. We divided all the candidates and _Gaia_ BH 1 into two groups, type 1 and 2 (see Table 2), in terms of component masses and orbital periods. We revealed that only the delayed SN model with the high CE efficiency can form both types of binaries in an isolated field. Both types of binaries are formed via the CE phase. If the CE efficiency is as low as unity, type 2-like binaries can not have as large orbital separations as the observed ones. Especially, the rapid SN model cannot form type 1-like binaries since such light BHs are formed via accretion-induced collapse, which requires shorter orbital separations than seen in type 1-like binaries. We also expect the SN model producing light BHs of masses \(\lesssim 4M_{\odot}\) would be favored if BH candidates categorized as type 1 binaries are confirmed as genuine BH binaries.
As more candidates are identified as genuine BH binaries, the spatial distribution of BH-LC binaries in Galactic coordinates will be obtained, as shown in Figure 6. Each point in the figure depicts a detectable BH-LC binary obtained from all the realizations, weighted by the weighting factor and colored by BH mass. In the delayed SN model, light BHs (\(m_{\rm BH}\lesssim 5M_{\odot}\)) would be detectable at high Galactic latitudes such as \(|b|>45^{\circ}\). Also, we expect the \(|v_{z}|\) distribution might tell us the strength of natal kick. Figure 7 shows a probability density function of \(\log|v_{z}|\) of the detectable BH-LC binaries. If a strong natal kick model such as FB kick is favored, \(\gtrsim 50\) % of the BH-LC binaries would have a large \(|v_{z}|\) of \(\sim 30\) km s\({}^{-1}\). If an SN model that does not produce lower mass gap BHs is favored, the detected BH-LC binaries are less likely to have such a large \(|v_{z}|\).
## Acknowledgement
M.S. is supported by Research Fellowships of Japan Society for the Promotion of Science for Young Scientists, by Forefront Physics and Mathematics Program to Drive Transformation (FoPM), a World-leading Innovative Graduate Study (WINGS) Program, the University of Tokyo, and by JSPS Overseas Challenge Program for Young Researchers.
Figure 6: Spatial distributions of the detectable BH-LC binaries obtained from 10 different realizations with different choices of the SN/kick models and values of the CE efficiency \(\alpha\). Maps are shown in the Galactic coordinate. Each star marker shows each binary, whose size is proportional to the weighting factor. Colors of each marker correspond to BH mass.
D.T. is supported by the Sherman Fairchild Postdoctoral Fellowship at Caltech. This research is supported by Grants-in-Aid for Scientific Research (17H06360, 19K03907, 22K03686) from the Japan Society for the Promotion of Science.
## Appendix A Corner plots of binary parameters
In this appendix, Figures 8 to 13 show two-dimensional scatter plots of the binary parameters (BH mass \(m_{\rm BH}\), LC mass \(m_{\rm LC}\), orbital periods \(P\), and eccentricities \(e\)), spatial parameters (velocities in the \(z\)-direction \(|v_{z}|\), and the heights from the Galactic plane \(|z|\)) and metallicity, and one-dimensional histograms for each choice of SN/kick models and \(\alpha\) values. Note that the vertical axis in the histograms is linear. Each point shows a BH-LC sample obtained from the 10 different realizations. The black points depict the detectable BH-LC binaries. The blue ones are the entire Galactic BH-LC binary population with orbital periods of 50 days to 10 years.
|
2302.06449
|
XOR and XNOR gates in instantaneous noise based logic
|
In this paper, we propose a new method of applying the XOR and XNOR gates on
exponentially large superpositions in Instantaneous Noise-Based Logic. These
new gates are repeatable, and they can achieve an exponential speed up in
computation with a polynomial requirement in hardware complexity.
|
Mohammad B. Khreishah, Walter C. Daugherity, Laszlo B. Kish
|
2023-02-10T10:21:09Z
|
http://arxiv.org/abs/2302.06449v1
|
# XOR and Xnor gates in instantaneous noise based logic
###### Abstract
In this paper, we propose a new method of applying the XOR and XNOR gates on exponentially large superpositions in Instantaneous Noise-Based Logic. These new gates are repeatable, and they can achieve an exponential speed up in computation with a polynomial requirement in hardware complexity.
Noise-based logic; exponential speedup; polynomial complexity; parallel operations.
## 1 Introduction
Noise-based logic (NBL) was first introduced in [1], where orthogonal stochastic processes (noises), their superposition, their products and the superposition of their products are used to represent the logic state [2]. Instantaneous Noise-based Logic (INBL) [3] is a class of NBL, where the signals carrying the logic values appear immediately at the output of the logic gates without the need of cross-correlating or time averaging.
Previously, NOT [3, 4] and CNOT [12] gates in INBL were created. This paper proposes the solution for XOR and XNOR gates in INBL.
In the next subsections we show a few details of NBL that are essential for the present paper.
### On Noise-Based Logic
NBL requires a Reference Noise System (RNS) as the source for generating the logic states and for the identification of the incoming noises that carry the logic values.
A system with \(N\) noise-bits uses \(2N\) independent, orthogonal noise sources for the RNS [1-11]. Let each noise of the RNS be denoted by \(W_{i,j}(t)\), where \(i\) is the bit significance number (\(1\leq i\leq N\)), and \(j\) is the value of that bit (\(j\in\{0,1\}\)). Any binary number, \(R\), in the range \([0,2^{N}-1]\), is represented by the product of \(N\) noise sources, \(X_{R}(t)\), which we will call a "string":
\[X_{R}(t)=\prod_{i=1}^{N}W_{i,j(i,R)}(t) \tag{1}\]
For example, a system of 2 noise-bits will have the following \(2N\) independent noise sources:
\[W_{i,j}(t)=\{W_{1,0}(t),W_{1,1}(t),W_{2,0}(t),W_{2,1}(t)\} \tag{2}\]
The following string represents the number 2 that has the binary number representation \((10)_{2}\):
\[X_{10}(t)=W_{2,1}(t)W_{1,0}(t) \tag{3}\]
In NBL, a superposition of strings simply means the summation of these strings but not the summation of the numbers they represent. Suppose that \(Y(t)\) is a superposition of strings, then we have:
\[Y(t)=X_{R1}(t)+X_{R2}(t)+X_{R3}(t)\ +\cdots \tag{4}\]
Notice a few characteristics of a superposition:
(i) The maximum number of strings in a superposition is \(2^{N}\) and the minimum number of strings is one.
(ii) The number of all possible subspaces of a superposition, \(S_{total}\), is:
\[S_{total}=\sum_{k=1}^{2^{N}}{2^{N}\choose k}=2^{2^{N}}-1 \tag{5}\]
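Equation (5) is easy to confirm by direct enumeration; a couple of lines of Python (an illustration, not part of the original work) verify the identity for small \(N\).

```python
from math import comb

# check S_total = sum_{k=1}^{2^N} C(2^N, k) = 2^(2^N) - 1 for a few small N
for N in range(1, 5):
    total = sum(comb(2 ** N, k) for k in range(1, 2 ** N + 1))
    assert total == 2 ** (2 ** N) - 1
    print(N, total)
```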
### Instantaneous Noise-Based Logic & Random telegraph waves:
INBL is a class of the NBL family where cross-correlation/time averaging is not required. It has a logic structure similar to that of the quantum computer idea.
The Random Telegraph waves (RTW) [3-12] are the simplest form of the RNS of INBLs.
RTWs are synchronous signals that only change when a new clock cycle begins. At the start of the clock cycle, RTWs start randomly either at +1 or at -1, then at the beginning of each new clock period, RTWs have a probability of 0.5 to flip from +1 to -1 or vice versa. RTW has many practical realizations, but we will restrict our discussion to the above form which is the simplest. That means that statistically, RTWs are +1 half the time, and -1 in the other half, which also means that their mean is zero [12]:
\[\langle R_{i,j}(t)\rangle=0\, \tag{6}\]
where \(R_{i,j}(t)\) is RTW with binary significance \(i\) and binary value \(j\).
The product of RTWs is a new RTW which is orthogonal to each RTW in the product [12]:
\[R_{k,l}(t)=R_{i,j}(t)R_{n,m}(t) \tag{7}\]
\[\langle R_{i,j}(t)R_{k,l}(t)\rangle=\langle R_{n,m}(t)\rangle=0 \tag{8}\]
\[\langle R_{n,m}(t)R_{k,l}(t)\rangle=\langle R_{i,j}(t)\rangle=0 \tag{9}\]
where \(i\neq n\) and \(j\neq m\).
Also, it is important to note that any RTW multiplied by itself will result in 1 [12]:
\[R_{i,j}(t)R_{i,j}(t)=1 \tag{10}\]
Equation (10) will be important in applying the XOR/XNOR operation.
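The statistical properties in equations (6)-(10) are easy to check numerically. In the sketch below (Python, written for illustration and not part of the original work), each clock period's RTW value is drawn directly as an independent ±1, which is equivalent to the 0.5 flip probability described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def rtw(n_periods):
    """Random telegraph wave sampled once per clock period: independent +/-1 values,
    equivalent to starting at random and flipping with probability 0.5 each period."""
    return rng.choice([-1, 1], size=n_periods)

n = 100_000
R_i0, R_i1 = rtw(n), rtw(n)
R_prod = R_i0 * R_i1                      # product of two RTWs is a new RTW

print(np.mean(R_i0))                      # ~0, equation (6)
print(np.mean(R_prod * R_i0))             # ~0: <R_prod R_i0> = <R_i1> = 0, equations (7)-(9)
print(np.mean(R_prod * R_i1))             # ~0
print(np.all(R_i0 * R_i0 == 1))           # True, equation (10)
```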
### NOT operation in INBL
Previously, the CNOT and NOT gates were proposed by acting on the reference wires [12]. Here the NOT gate is reviewed, since the new XOR/XNOR gates proposed in the present paper are based on a similar method. From now on, we will omit the time notation for convenience, but every signal mentioned in this paper is a function of time. Let us suppose that we have the following arbitrary superposition that contains \(R_{i0}\) and \(R_{i1}\):
\[Y_{total}=Y_{0}R_{i0}+Y_{1}R_{i1}\ \ \ \ \, \tag{11}\]
where \(Y_{0}\) and \(Y_{1}\) are also superpositions with the restriction that they do not contain \(R_{i0}\) or \(R_{i1}\). Figure 1 illustrates such a system.
To apply the NOT gate on \(R_{i0}\) and \(R_{i1}\), we simply multiply the reference wires \(R_{i0}\) and \(R_{i1}\) by \(R_{i0}R_{i1}\)[12]. Then due to equation (10), equation (11) becomes:
\[Y_{total}(t)=\ Y_{1}R_{i0}+Y_{0}R_{i1}. \tag{12}\]
Figure 2 illustrates such a system.
Figure 1: A generic INBL that uses RTW signals and contains \(R_{i0}\) & \(R_{i1}\) at the output.
## 2 The XOR & XNOR gates in INBL
### The XOR Gate
Similarly to the NOT gate, we will be using operations on the reference wires to apply the XOR gate. Let us assume that the inputs to the XOR gate are bits \(\{i,f\}\), and that we want the result to appear on bit \(\{h\}\). Let us assume that we have a processor that produces the following superposition:
\[Y_{total}=Y_{0}R_{i0}R_{f0}R_{hx_{0}}+Y_{1}R_{i1}R_{f0}R_{hx_{1}}+Y_{2}R_{i0}R_ {f1}R_{hx_{2}}+Y_{3}R_{i1}R_{f1}R_{hx_{3}} \tag{13}\]
where \(R_{i0}\) represents the zero value of the \(i\)th bit; \(R_{i1}\) represents the one value of the \(i\)th bit; \(R_{f0}\) represents the zero value of the \(f\)th bit; \(R_{f1}\) represents the one value of the \(f\)th bit; \(R_{hx_{0}},R_{hx_{1}},R_{hx_{2}},R_{hx_{3}}\) represent the \(h\)th bit with arbitrary initial binary values of \(\{x_{0},x_{1},x_{2},x_{3}\}\), where these initial values are unimportant; and \(Y_{0},Y_{1},Y_{2},Y_{3}\) are arbitrary superpositions that do not contain any RTW with index \(\{i,f,h\}\). This system is illustrated in Figure 3.
Figure 2: NOT operation in INBL (RTW) system.
The first step is to manipulate the superposition shown by equation (13) so that the bit values corresponding to \(\{x_{0},x_{1},x_{2},x_{3}\}\) become \(0\). This can be done by multiplying the reference wire of \(R_{h1}\) by \(R_{h0}R_{h1}\). If a string contains \(R_{h0}\), it remains the same, and if it contains \(R_{h1}\), it is flipped to \(R_{h0}\). This is illustrated by Figure 4. The superposition in equation (13) will then become:
\[Y_{total}=Y_{0}R_{i0}R_{f0}R_{h0}+Y_{1}R_{i1}R_{f0}R_{h0}+Y_{2}R_{i0}R_{f1}R_{h 0}+Y_{3}R_{i1}R_{f1}R_{h0} \tag{14}\]
That makes bit \(h\) independent of bits \(\{i,f\}\). This step is essential because we need the output bit \(h\) to start with the same value in each element of the superposition, then to change according to the XOR function of the \(\{i,f\}\) bit values, see below.
Figure 3: Generic RTW system before applying the XOR/XNOR gates
The next step is to multiply the \(R_{i1}\) and \(R_{f1}\) references by \(R_{h0}R_{h1}\), which is the main XOR operation. In equation (14), if we substitute \(R_{i1}\) by \(R_{i1}R_{h0}R_{h1}\) and \(R_{f1}\) by \(R_{f1}R_{h0}R_{h1}\), then we get the following:
\[Y_{XOR}=Y_{0}R_{i0}R_{f0}R_{h0}+Y_{1}R_{i1}R_{h0}R_{h1}R_{f0}R_{h0}+Y_{2}R_{f1}R_{h0}R_{h1}R_{i0}R_{h0}+Y_{3}R_{i1}R_{h0}R_{h1}R_{f1}R_{h0}R_{h1}R_{h0}\]
\[=Y_{0}R_{i0}R_{f0}R_{h0}+Y_{1}R_{i1}R_{f0}R_{h0}R_{h1}R_{h0}+Y_{2}R_{i0}R_{f1}R_{h0}R_{h1}R_{h0}+Y_{3}R_{i1}R_{f1}R_{h0}R_{h1}R_{h0}R_{h1}R_{h0}\]
\[=Y_{0}R_{i0}R_{f0}R_{h0}+Y_{1}R_{i1}R_{f0}\,\mathrm{NOT}(R_{h0})+Y_{2}R_{i0}R_{f1}\,\mathrm{NOT}(R_{h0})+Y_{3}R_{i1}R_{f1}\,\mathrm{NOT}(\mathrm{NOT}(R_{h0})) \tag{15}\]
From equations (10) and (15) it follows:
\[Y_{XOR}=Y_{0}R_{i0}R_{f0}R_{h0}+Y_{1}R_{i1}R_{f0}R_{h1}+Y_{2}R_{i0}R_{f1}R_{h1 }+Y_{3}R_{i1}R_{f1}R_{h0} \tag{16}\]
In conclusion, we successfully implemented the XOR gate between bits \(\{i,f\}\) and represented the answer by bit \(\{h\}\). Figure 5 illustrates the total XOR gate.
Figure 4: Making the value of bit \(\{h\}\) equal to binary value 0 in all strings in the superposition.
Note, to get exactly the same result, \(R_{i0}\) and \(R_{f0}\) could have been multiplied by \(R_{h0}R_{h1}\) instead of multiplying \(R_{i1}\) and \(R_{f1}\).
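A numerical sanity check of the XOR construction is straightforward with RTW realizations. The following sketch (illustrative Python, not from the original paper) builds the superposition of equation (14), applies the reference-wire multiplications, and confirms that the result matches equation (16) exactly, sample by sample.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
def rtw():
    return rng.choice([-1, 1], size=n)

# reference wires for bits i, f, h and four arbitrary bystander superpositions
R_i0, R_i1, R_f0, R_f1, R_h0, R_h1 = (rtw() for _ in range(6))
Y0, Y1, Y2, Y3 = (rtw() for _ in range(4))

# equation (14): every string already carries the zero value of bit h
Y_total = Y0*R_i0*R_f0*R_h0 + Y1*R_i1*R_f0*R_h0 + Y2*R_i0*R_f1*R_h0 + Y3*R_i1*R_f1*R_h0

# XOR step: multiply the R_i1 and R_f1 reference wires by R_h0*R_h1
R_i1_x = R_i1 * R_h0 * R_h1
R_f1_x = R_f1 * R_h0 * R_h1
Y_xor = Y0*R_i0*R_f0*R_h0 + Y1*R_i1_x*R_f0*R_h0 + Y2*R_i0*R_f1_x*R_h0 + Y3*R_i1_x*R_f1_x*R_h0

# equation (16): bit h now carries XOR of bits i and f in every string
Y_expected = Y0*R_i0*R_f0*R_h0 + Y1*R_i1*R_f0*R_h1 + Y2*R_i0*R_f1*R_h1 + Y3*R_i1*R_f1*R_h0
print(np.array_equal(Y_xor, Y_expected))   # True, since each RTW squared equals one
```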
### The XNOR Gate
Since we have implemented the XOR gate, we can simply use the result from the XOR gate and the previously developed NOT gate [12] to get the XNOR gate. If we apply the NOT gate to bit \(\{h\}\) in equation (16), then we get the XNOR gate:
\[Y_{XNOR}=Y_{0}R_{i0}R_{f0}R_{h1}+Y_{1}R_{i1}R_{f0}R_{h0}+Y_{2}R_{i0}R_{f1}R_{ h0}+Y_{3}R_{i1}R_{f1}R_{h1} \tag{17}\]
Alternatively, to get an XNOR gate, we can follow a method similar to the XOR gate. Let us suppose we have a system similar to the one in figure 3. Again, we first need to transform the superposition in equation (13) to equation (14), but this time, we need to multiply \(R_{i1}\) & \(R_{f0}\) by \(R_{h0}R_{h1}\). In equation (14), if we substitute \(R_{i1}\) by \(R_{i1}R_{h0}R_{h1}\) and \(R_{f0}\) by \(R_{f0}R_{h0}R_{h1}\), then we get the following:
\[Y_{XNOR}=Y_{0}R_{i0}R_{f0}R_{h0}R_{h1}R_{h0}+Y_{1}R_{i1}R_{h0}R_{ h1}R_{f0}R_{h0}R_{h1}R_{h0}+Y_{2}R_{i0}R_{f1}R_{h0}+\] \[Y_{3}R_{i1}R_{h0}R_{h1}R_{f1}R_{h0} \tag{18}\]
Figure 5: The full XOR gate implementation
Substituting equation (10) into equation (18) gives equation (17), which is the XNOR operation. Figure 6 illustrates this operation. Alternatively, we could have multiplied \(R_{i0}\) & \(R_{f1}\) by \(R_{h0}R_{h1}\) to get the exact same result.
## 3 Conclusion
We have successfully implemented the XOR and XNOR gates that act on an exponentially large superposition with only polynomial hardware complexity, using only 4 multiplications: one for multiplying the zero value of the RTW with the one value of the RTW (example: \(R_{h0}R_{h1}\) for bit \(\{h\}\)), while the other three multiplications happen directly on the reference wires (see figure 5 and figure 6). The XOR and XNOR operations are repeatable as well, as multiplication is a commutative operation. These gates have potential applications in challenging the supremacy of quantum computing schemes.
|
2303.06760
|
Inertial Migration in Micro-Centrifuge Devices
|
Within microcentrifuge devices, a microfluidic vortex separates larger
particles from a heterogeneous suspension using inertial migration, a
phenomenon that causes particles to migrate across streamlines. The ability to
selectively capture particles based on size differences of a few microns makes
microcentrifuges useful diagnostic tools for trapping rare cells within blood
samples. However, rational design of microcentrifuges has been held back from
its full potential by a lack of quantitative modeling of particle capture
mechanics. Here we use an asymptotic method, in which particles are accurately
modeled as singularities in a linearized flow field, to rapidly calculate
particle trajectories within microcentrifuges. Our predictions for trapping
thresholds and trajectories agree well with published experimental data. Our
results clarify how capture reflects a balance between advection of particles
within a background flow and their inertial focusing and shows why the close
proximity of trapped and untrapped incoming streamlines makes it challenging to
design microcentrifuges with sharp trapping thresholds.
|
Samuel Christensen, Marcus Roper
|
2023-03-12T21:51:11Z
|
http://arxiv.org/abs/2303.06760v1
|
# Inertial Migration In Micro-Centrifuge Devices
###### Abstract
Within microcentrifuge devices, a microfluidic vortex separates larger particles from a heterogeneous suspension using inertial migration, a phenomenon that causes particles to migrate across streamlines. The ability to selectively capture particles based on size differences of a few microns makes microcentrifuges useful diagnostic tools for trapping rare cells within blood samples. However, rational design of microcentrifuges has been held back from its full potential by a lack of quantitative modeling of particle capture mechanics. Here we use an asymptotic method, in which particles are accurately modeled as singularities in a linearized flow field, to rapidly calculate particle trajectories within microcentrifuges. Our predictions for trapping thresholds and trajectories agree well with published experimental data. Our results clarify how capture reflects a balance between advection of particles within a background flow and their inertial focusing and shows why the close proximity of trapped and untrapped incoming streamlines makes it challenging to design microcentrifuges with sharp trapping thresholds.
## I Introduction
Microcentrifuges are a recently developed class of microfluidic devices that can be used to selectively trap large particles from flowing suspensions. The devices consist of a series of chambers connected with microfluidic channels. Within each chamber, one or more eddies may form. _Inertial migration_ causes particles in moderate Reynolds number flows to travel across streamlines. Larger particles migrate faster and are more likely to become trapped within the microcentrifuge chamber [1; 2]. Size-based trapping may be used to trap the largest particles within the suspension, and shows promise as a tool for analyzing cell types in a patient blood sample, e.g. for isolating large circulating cancer cells from small red blood cells [3; 4; 5].
Figure 1: A diagram of the microcentrifuge (not to scale). Microfluidic channels lead to the microcentrifuge chamber. At sufficient Reynolds numbers, a fluid eddy forms in the chamber and inertial migration pushes larger particles into the chamber, where they are then trapped in the eddy, while smaller particles are not captured and continue through the device.
The range of possible particle behavior within the complex three dimensional flows that occur in microcentrifuge chambers is not well understood, nor, to our knowledge, is there any straightforward mechanistic description of how a microcentrifuge operates. In addition to the intrinsic interest of identifying new inertial microfluidic phenomena, a mechanistic explanation of microcentrifuge function opens a door to solving the reverse problem of designing a chamber or channel geometry to target a particular size threshold for trapping.
A large impediment to this understanding is how much less well developed the theory of inertial microfluidic migration is, relative to theories modeling the behaviors of particles in zero Reynolds number flows. Although inertial focusing starts at arbitrarily low Reynolds numbers, in practical implementations, for useful trapping thresholds to be accessed, the devices built so far have all operated at Reynolds numbers between 50 and 300. Inertial migration is caused by the rigid particle disrupting the otherwise smoothly varying fluid flow throughout the channel. The physics of inertial migration in (uni-directional) pipe flow has been studied extensively (see e.g. [6] and [7]); the key difference between pipe flow and more complex flows is the varying flow that the particle experiences as it advects through the channel. In this paper, we develop an asymptotic theory that reduces the calculation of particle migration velocity to solving a quasi-steady, linear problem.
Direct numerical simulation of particle trajectories within inertial microfluidic devices requires solving for the motion of particles suspended in a fluid-filled domain whose boundaries constantly change due to the movement of the particles. Since, in practical examples, channels operate at moderate Reynolds number, between 5-200, flows cannot be modeled by numerical methods that are designed for small Reynolds number particle-flows such as Stokesian dynamics [8] or boundary integral methods [9]. The time evolving geometry of migrating particles and nonlinear terms favors numerical methods such as immersed boundary [10; 11] or immersed interface [12] which embed moving boundaries within a fixed computational grid. Both methods afford a lot of freedom in the choice of numerical method for solving the Navier-Stokes equations; in inertial microfluidic simulations the Lattice-Boltzmann method (LBM) is a popular method [13] and has been used for calculating particle migration [2; 14; 15]. The Force Coupling Method[16] replaces particles with forcing terms which enforce rigid body motion on a finite volume of fluid to emulate particle dynamics and has been used to calculate inertial migration[17]. Although existing numerical simulations have illuminated the physics of inertial focusing, the high computational cost of nonlinear 3D simulations has meant that predictive simulations are not currently used to design or optimize inertial microfluidic devices.
In our simplification of the physics of inertial migration, a 3D linear PDE models moderate Reynolds number fluid flow and supplies a vector field representing inertial migration that may be added to the background flow. This simplification allows for easier understanding of the various sorting techniques that are used by microfluidic devices. In this study, we will first develop our equations for particle motion and then we will compare our theoretical results with data from a microfluidic device separating different cell types suspended in blood along with differently sized plastic beads.
## II Mathematical Methods
Inertial migration is caused by the disturbance the particle causes to the background flow, the disturbance flow \(\mathbf{u}^{\prime}\)= \(\mathbf{u}-\bar{\mathbf{u}}\) is the difference between the flow with the particle, \(\mathbf{u}\), and the flow without the particle, (called the background flow here) \(\bar{\mathbf{u}}\). We get our main equations by plugging these definitions into the Navier-Stokes equations and non-dimensionalizing by the speed scale \(U\) and length scale \(L\) of the background flow:
\[\Delta\mathbf{u}^{\prime}-\nabla p^{\prime}= Re\bigg{(}\frac{\partial\mathbf{u}^{\prime}}{\partial t}+\bar{ \mathbf{u}}\cdot\nabla\mathbf{u}^{\prime}+\mathbf{u}^{\prime}\cdot\nabla\bar{ \mathbf{u}}+\mathbf{u}^{\prime}\cdot\nabla\mathbf{u}^{\prime}\bigg{)} \tag{1}\] \[\nabla\cdot\mathbf{u}^{\prime}= \mathbf{0}\] \[\mathbf{u}^{\prime}= \mathbf{U}_{p}-\bar{\mathbf{u}}(\mathbf{x}_{p})+\Omega_{p}\times (\mathbf{x}-\mathbf{x}_{p})\qquad\text{on }|\mathbf{x}-\mathbf{x}_{p}|= a/L \tag{2}\]
In addition to \(\mathbf{u}^{\prime}\)=0 on the walls of the channel. Neither boundary conditions nor the incompressibility equation are altered by following transformations of our equation, so we focus on the different incarnations of the momentum balance equation, Eq. 1. Here \(Re\)=\(\frac{UL_{P}}{\mu}\) is the channel Reynolds number, and is typically in the range 50-300, both in real experiments and our calculations. The disturbance velocity is generated by the boundary condition on the sphere. The particle can translate and rotate with the fluid, but it resists the shearing motion of the fluid near its surface, setting up the disturbance velocity \(\mathbf{u}^{\prime}\). Near the particle, the velocity profile is approximately that of simple shear; \(\bar{\mathbf{u}}\approx\bar{\mathbf{u}}(\mathbf{x}_{p})+\gamma\cdot(\mathbf{x }-\mathbf{x}_{p})\), where \(\gamma_{ij}\equiv\frac{\partial i}{\partial x_{j}}\), and we may further decompose the linearized flow field into rotational and straining components: \(\bar{\mathbf{u}}\approx\bar{\mathbf{u}}(\mathbf{x}_{p})+\omega\times(\mathbf{x }-\mathbf{x}_{p})+\mathbf{E}\cdot(\mathbf{x}-\mathbf{x}_{p})\), where \(\omega_{i}\)=\(-\frac{1}{2}\epsilon_{ijk}\gamma_{jk}\), with \(\epsilon\) the unit alternating tensor, and \(E_{ij}\)=\(\frac{1}{2}(\gamma_{ij}+\gamma_{ji})\). Setting \(\mathbf{U}_{p}\approx\bar{\mathbf{u}}(\mathbf{x}_{p})\) and \(\Omega_{p}\)=\(\omega\) renders the particle force and torque free at first order.
We now know the size of all the components involved in right hand side of Eq. 1: under our non-dimensionalization, both \(\mathbf{u}^{\prime}\) and \(\bar{\mathbf{u}}\) are of size \(O(\frac{\pi}{L})\) near the particle. The size of \(\frac{\partial\mathbf{u}^{\prime}}{\partial t}\) is related to the boundary condition Eq. 2 who's
size is the same as \(\frac{\partial\mathbf{E}(\mathbf{x}_{p}(t))}{\partial t}\)=\(\mathbf{U}_{p}\cdot\nabla\mathbf{E}\) which is small as long as the length scale at which \(\mathbf{E}\) changes is large compared to \(a\). Therefore the right hand side is size \(\frac{a^{2}}{L^{2}}Re\). The size ratio \(\alpha\):=\(\frac{a}{L}\)\(\ll\)1 is small enough that we assume \(\alpha^{2}Re\)\(<\)1, which forms the core of our asymptotic expansion. Near the particle, we assume shear dominates and set the inertial terms to zero. Suppressing these inertial terms, we arrive at: \(\Delta\mathbf{u}^{\prime}\)-\(\nabla p^{\prime}\)=0, along with the usual incompressibility and boundary conditions. This problem of Stokes' flow around a sphere has solution:
\[\mathbf{u}^{\prime}\mathbf{=-E}\mathbf{:}\frac{5(\mathbf{x-x}_{p})(\mathbf{x- x}_{p})(\mathbf{x-x}_{p})}{|\mathbf{x-x}_{p}|^{5}}+O(|\mathbf{x-x}_{p}|^{-4}) \tag{3}\]
[18]. However, this solution is not consistent with our complete neglect of the inertial terms from Eq. 1, because as \(|\mathbf{x-x}_{p}|\)\(\rightarrow\)\(\infty\), \(\Delta\mathbf{u}^{\prime}\)\(\sim\)\(1/r^{4}\), while \(Re\mathbf{\bar{u}}\cdot\nabla\mathbf{u}^{\prime}\)\(\sim\)\(Re/r^{2}\), becomes co-dominant with viscous stresses when \(|\mathbf{x-x}_{p}|\)\(\sim\)\(Re^{-1/2}\). We therefore posit that the flow contains an outer region in which inertial terms may not be neglected[6]. However, since decay of the disturbance velocity means that \(|\mathbf{u}^{\prime}|\)\(\ll\)\(|\mathbf{\bar{u}}|\) within this region, we may linearize the inertial terms within this outer region.
In the outer region, we model the particle as a moving singularity. Mathematically, this means replacing the rigid body motion of Eq. 2 with a forcing term equal to \(F(\mathbf{x})\)=\(-\frac{20\pi}{3}E(\mathbf{x}_{p})\):\(\nabla\delta(\mathbf{x-x}_{p}(t))\). Instead of working with a moving particle, we approximate this by looking for a traveling wave solution of the form \(\mathbf{u}^{\prime}(\mathbf{x-x}_{p}(t))\), this approximation transforms the time derivative into \(\frac{\partial\mathbf{u}^{\prime}(\mathbf{x-x}_{p}(t))}{\partial t}\)=\(\mathbf{U}_{p}\cdot\nabla\mathbf{u}\) which we will approximate as \(\frac{\partial\mathbf{u}^{\prime}(\mathbf{x-x}_{p}(t))}{\partial t}\)=\(\mathbf{\bar{u}}(\mathbf{x}_{p})\cdot\nabla\mathbf{u}\).
Our solution in the inner region does not predict particle migration at leading order; instead the stresslet disturbance field modeled in Eq. 3, forces the outer region disturbance velocity. This forcing can be represented equivalently as a boundary condition as \(\mathbf{x}\)\(\rightarrow\)\(\mathbf{x}_{p}\), or, directly, by introducing a force dipole term within the equation:
\[\nabla\mathbf{u}^{\prime}\mathbf{-}\nabla p^{\prime} \mathbf{=}Re((\mathbf{\bar{u}-\bar{u}(\mathbf{x}_{p})})\cdot\nabla \mathbf{u}^{\prime}\mathbf{+}\mathbf{u}^{\prime}\mathbf{\cdot}\nabla\mathbf{ \bar{u}})\mathbf{-}\frac{20\pi}{3}E\mathbf{:}\nabla\delta(\mathbf{x-x}_{p}) \tag{4}\] \[\nabla\mathbf{\cdot}\mathbf{u}^{\prime} \mathbf{=}0\]
The migration velocity is found by solving this PDE and evaluating the solution at the location of the particle, \(\mathcal{M}(\mathbf{x}_{p})\)=\(\mathbf{u}^{\prime}_{\mathbf{x}_{p}}(\mathbf{x}_{p})\), where the subscript denotes that the singularity was located at \(\mathbf{x}_{p}\) in Eq. 4. We then advect the particle according to the following differential equation
\[\frac{d\mathbf{x}_{p}}{dt}=\bar{\mathbf{u}}(\mathbf{x}_{p})+\alpha^{3}\mathcal{M}(\mathbf{x}_{p})+\frac{\alpha^{2}}{6}\Delta\bar{\mathbf{u}}(\mathbf{x}_{p}) \tag{5}\]
This equation includes the term from Faxén's law, which is the finite-size particle correction for force-free advection of particles in \(Re\)=0 flows. This formula is exact for unbounded \(Re\)=0 flows, but in our use it is accurate to order \(O(Re^{0}\alpha^{4})\).
When calculating the trajectories of particles, we pre-calculated the migration velocity at 450 points throughout the three dimensional channel by Eq. 5. We then used linear interpolation to extend our particle velocity calculation through the entire domain.
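A minimal sketch of this precompute-and-interpolate workflow is given below, assuming the migration velocity \(\mathcal{M}\) has already been solved at a set of sample points; the grid, the background flow \(\bar{\mathbf{u}}\) and its Laplacian are placeholders, not the fields of the actual device.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.integrate import solve_ivp

# Placeholders: sample locations and the migration velocities solved there (from Eq. 4).
grid_points = np.random.rand(450, 3)      # hypothetical sample locations in the channel
M_samples = np.zeros((450, 3))            # hypothetical migration velocities at those points

# Linear interpolation of the migration velocity over the whole domain.
M_interp = LinearNDInterpolator(grid_points, M_samples, fill_value=0.0)

def u_bar(x):       # background channel flow (placeholder profile)
    return np.array([1.0 - x[1] ** 2, 0.0, 0.0])

def lap_u_bar(x):   # Laplacian of the background flow (placeholder)
    return np.array([-2.0, 0.0, 0.0])

def rhs(t, x, alpha):
    """Right-hand side of Eq. 5: advection + inertial migration + Faxen correction."""
    migration = M_interp(np.atleast_2d(x))[0]
    return u_bar(x) + alpha ** 3 * migration + (alpha ** 2 / 6.0) * lap_u_bar(x)

alpha = 0.03
trajectory = solve_ivp(rhs, (0.0, 50.0), y0=[0.5, 0.2, 0.1], args=(alpha,), max_step=0.1)
```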
Eq. 5 can be used to advect particles throughout the channel; however, its accuracy is limited to conditions where the underlying asymptotic expansion is valid. In straight channels it has been shown that this asymptotic approximation is accurate for \(\alpha^{2}Re\)=\(O(1)\) [6]; however, both the size and the background flow speed of the microfluidic device can change significantly depending on whether we are in a tight channel or a large chamber, e.g., the large notched chamber in the microcentrifuge. If we define the Reynolds number using the average flow rate in a cross section with length \(L\), we see that \(Re=\frac{(\int\bar{\mathbf{u}}\cdot d\mathbf{S})\,L}{L^{2}\nu}=\frac{C}{\nu L}\), where \(C\) is the flow rate of the device and is constant for all cross sections in the channel. In this calculation we see that the Reynolds number decreases like \(\frac{1}{L}\) in larger channels, so the asymptotic expansion that forms Eq. 5 is more accurate in the larger testing chambers than it is in the smaller channels leading up to them.
Another persistent source of error in the asymptotic approximation is our assumption of a traveling wave solution. While \(\frac{\partial\mathbf{u}^{\prime}(\mathbf{x}-\mathbf{x}_{p}(t))}{\partial t}=\bar{\mathbf{u}}(\mathbf{x}_{p})\cdot\nabla\mathbf{u}^{\prime}\) is exact in the bulk flow, the particle is moving with respect to the walls of the device, which will cause additional changes to the flow as \(\mathbf{x}_{p}\) moves. It can be shown that this error is proportional to \(\mathbf{U}_{p}\cdot\nabla\mathbf{u}^{\prime}|_{\text{walls}}\), which is small provided the particle is either far away from the wall or not headed directly at it.
## III Analysis of Microcentrifuge
### Background Flow Patterns at Different Reynolds numbers
In [1], Khojah et al. observed a microcentrifuge trapping differently sized cells from blood samples. They found that at different Reynolds numbers it preferentially captured different sizes of cells. At \(Re\)=125 the microcentrifuge consistently captured larger particles while smaller particles passed through. At \(Re\)=175 they found that the microcentrifuge would inconsistently capture particles of all sizes. At \(Re\)=225 the microcentrifuge would capture smaller particles more consistently than it captured larger particles.
In numerical studies using COMSOL Multiphysics [19], we found that these Reynolds numbers correspond to major changes in the background flow. Streamlines for Reynolds numbers 125, 175, and 225 were calculated using P2+P1 tetrahedral elements at the 'extremely fine' mesh size setting.
Fig. 2 shows how the way the fluid flows through the microcentrifuge chamber changes as we increase the Reynolds number. The top row shows 3D visualizations of example particle trajectories, and the bottom row shows, via Poincaré sections, how the streamlines travel through the chamber once they are caught; the dots represent where the trajectories intersect a plane at the top of each of the orbits the streamlines performed within the chamber. Blue represents where the streamlines entered the chamber and red represents where they exited it. The x- and y-coordinates of streamlines entering and exiting the chamber are shown with asterisks instead of dots.
In Fig. 2, we see that at \(Re\)=125 the fluid flows through the microcentrifuge chamber by having the corner streamlines flow down and into the chamber; the fluid streamlines then rotate towards the center of the chamber and spiral outwards, eventually exiting the chamber. At \(Re\)=175 the path of the streamlines loops back on itself and is no longer consistently focusing towards the center of the channel. At \(Re\)=225, the loop has inverted: instead of flowing from out to in, the streamlines enter the chamber from the center of the channel and leave out the sides. The inset diagrams in the top row of Fig. 2 show the exchange of starting and ending streamline locations as \(Re\) is increased.
Figure 2: The fluid flow within the microcentrifuge experiences a topological change as Re changes from 125 to 225. The top row shows an example streamline and the bottom row shows the Poincaré section of many streamlines and how they progress through the channel. Blue shows where the streamlines enter and red shows where the streamlines exit. The trajectory cartoon demonstrates how the entrance and exit of the streamlines loop back on themselves and change their locations in the channel, representing a topological change of how the fluid moves through the channel.
### Particle Trajectories
We will now examine how the microcentrifuge captures particles, but first we need to consider the initial conditions. Before a particle reaches the microcentrifuge, it travels through a narrow rectangular channel (in our case 40\(\mu\)m\(\times\)70\(\mu\)m); at \(Re\)=125 the particles gather along 4 stable focusing streamlines near the center of each channel wall [7; 20]. This means the particles will be tightly grouped along 4 focusing streamlines before they enter the chamber, with the majority of particles evenly split between the two larger walls of the channel (corresponding to the blue and purple trajectories in Fig. 3 [21]).
In Fig. 3, we place differently sized particles along the 4 focusing streamlines and advance their positions using Eq. 5. None of the 4 focusing streamlines enters the notch; at this Reynolds number the streamlines that enter the chamber are along the bottom corners [22]. However, at particle size 24\(\mu\)m, particles along the bottom streamline migrate away from the streamline enough that they become captured in the eddy within the notch. At particle size 28\(\mu\)m the particles along the minor streamlines are also captured. At particle size 35\(\mu\)m the particles along the top streamline are captured as well; however, these particles are 87.5% of the channel height and the identification of 2 distinct focusing streamlines may no longer be relevant. Our calculations also show that above diameter 28\(\mu\)m, the particles from the minor streamlines quickly focus towards the mid plane (Fig. 3). Capture of particles along the minor axis is driven by the strong inertial migration towards the microvortex; inertial focusing actually resists the particle's migration towards the center of the channel, but it is overpowered by the background flow and the particle is ultimately driven towards the mid plane. The rapid convergence of all particles toward the symmetry plane agrees with numerical results from [2].
The changes in the background fluid flow described in section A completely change the way particle capture is achieved and explain the differences in which particle sizes are captured by the microcentrifuge. At \(Re\)=125, particles are captured when inertial migration pushes the particle into the microcentrifuge chamber; the limit cycle is created by the balance between the fluid flow spiraling the particle upwards and inertial migration pushing the particle downwards at the top of the spiral. At \(Re\)=175 all particles are predicted to enter the microcentrifuge chamber, but it is not well understood what keeps them in the nearly turbulent eddy that is spiraling within the chamber. We can integrate the physics of capture with the description in section A of the three dimensional paths of streamlines through the chamber.
Particles that enter the microcentrifuge chamber continue to experience inertial migration within that chamber. At \(Re\)=125, the streamlines that particles follow, absent inertial migration, spiral outwards within the mid plane (Fig. 2), while inertial migration points downward into the chamber. The balance of these two effects causes particle trajectories to converge to a limiting orbit, shown in Fig. 3. We report in more detail upon the limiting orbit in section C.
### Comparison With Data
Looking only along the mid plane, we make predictions about the critical particle diameter for capture and compare with experimental data. Khojah et al. [1] performed an experiment in which a microcentrifuge was used to separate breast cancer cells (MDA-MB-231 cell line) based on size. Inputting their parameters into our asymptotic
Figure 3: The trajectories of the background flow (left), 24\(\mu m\) diameter particles (middle), and 28\(\mu m\) diameter particles (right). Small particles follow the background flow and do not enter the microcentrifuge chamber; capture is predicted to begin at approximately 22.9\(\mu m\), but only particles along the bottom focusing position are captured. For larger particles, particles along all but the top focusing position are captured. All captured particles stably focus towards the mid plane at this Reynolds number.
method, we predicted the critical cell diameter by which the device sorts cells. Our model predicts a critical cell diameter of 22.9\(\mu\)m, and this prediction shows good agreement with the data (Fig. 4).
The experiment consisted of running a blood sample containing 224 cells through the microfluidic chamber 3 times; the experiment was performed twice for a total of N=448 cells run through, with 166 cells captured between the two trials. The device consists of an inlet, a 40\(\mu\)m\(\times\)70\(\mu\)m\(\times\)3cm channel connecting to the 800\(\mu\)m\(\times\)70\(\mu\)m\(\times\)800\(\mu\)m microcentrifuge chamber, another 40\(\mu\)m\(\times\)70\(\mu\)m\(\times\)3cm channel, and an outlet. The distributions of inflowing and captured cells were measured directly from images of the flowing cells, and we use the histograms reported in [1] to estimate capture.
We approximated the initial cell size distribution by fitting a log-normal distribution (mean: 20.2\(\mu\)m, standard deviation: 8.2\(\mu\)m) to the available data using the maximum likelihood method. We then approximated the percentage of cells captured in a given size range by dividing the observed number of cells that were captured by the expected number of cells in that size range based on our fitted distribution. For the very largest cells (diameters exceeding 32.75\(\mu\)m) there was insufficient sampling of inflowing particle sizes, leading to capture probabilities that could exceed 1; we set the capture probability equal to 1 for these largest cells.
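This estimate can be organized along the following lines; the sketch below is only illustrative, and the arrays of inflowing and captured diameters are synthetic stand-ins for the histograms of [1].

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the measured diameters (in microns).
inflow_diam = np.random.lognormal(mean=np.log(18.5), sigma=0.38, size=448)
captured_diam = inflow_diam[inflow_diam > 21.0]

# Maximum-likelihood log-normal fit to the inflowing cell size distribution.
shape, loc, scale = stats.lognorm.fit(inflow_diam, floc=0.0)
fitted = stats.lognorm(shape, loc=loc, scale=scale)

# Capture probability per size bin: observed captured counts divided by the number
# of inflowing cells expected in that bin from the fitted distribution (clipped to 1,
# as done in the text for the under-sampled largest cells).
bins = np.arange(10.0, 37.5, 2.5)
captured_counts, _ = np.histogram(captured_diam, bins=bins)
expected_counts = len(inflow_diam) * np.diff(fitted.cdf(bins))
capture_prob = np.clip(captured_counts / np.maximum(expected_counts, 1e-9), 0.0, 1.0)
```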
In order to estimate the magnitude of the noise in the system, we fit a modified version of Eq. 5 with added
Figure 4: Predicted critical cell diameter for cell sorting matches well with data from the microcentrifuge device in [1]. Top left: A continuum of differently sized particle trajectories; smaller particles are in blue and larger particles in red, with the critical particle diameter (22.9\(\mu\)m) highlighted in black. Top right: Capture data from the experiment shows the difference in cell size distribution between captured cells and the initial distribution of cells run through the device. Bottom left: The estimated percentage captured by cell size shows that the onset of capture is centered around the critical capture diameter. The estimation was necessary because the initial cell size data was given in much coarser bins than the captured cell size data. Bottom right: Estimated percentage captured by cell size is shown along with the percentage captured of trajectories simulated with Eq. 5 with added Gaussian white noise with \(\sigma\)=12.5% of the expected particle velocity.
Gaussian white noise. If we define the RHS of Eq. 5 as \(\mathbf{V}(\alpha,\mathbf{x}_{p})=\bar{\mathbf{u}}(\mathbf{x}_{p})+\alpha^{3}\mathcal{M}(\mathbf{x}_{p})+\frac{\alpha^{2}}{6}\Delta\bar{\mathbf{u}}(\mathbf{x}_{p})\), our SDE is equal to
\[\frac{d\mathbf{x}_{p}}{dt}=\mathbf{V}(\alpha,\mathbf{x}_{p})+||\mathbf{V}(\alpha,\mathbf{x}_{p})||\,\mathcal{N}(0,\sigma^{2}) \tag{6}\]
We found qualitatively that \(\sigma\)=0.125 fit the randomness present in the data well (Fig. 4). The percentage capture of differently sized particles was estimated by computing 1000 trajectories for each level of \(\alpha\) and calculating the percentage that were captured.
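A minimal Euler-Maruyama sketch of Eq. 6 is shown below; the deterministic velocity V and the capture test are placeholders standing in for the interpolated fields and the chamber geometry.

```python
import numpy as np

def capture_fraction(V, is_captured, x0, alpha, sigma=0.125,
                     dt=1e-2, n_steps=20000, n_traj=1000, seed=0):
    """Euler-Maruyama integration of Eq. 6 with noise proportional to the local speed.

    V(alpha, x) -> (3,) deterministic velocity of Eq. 5; is_captured(x) -> bool.
    Both callables are placeholders for the interpolated fields used above.
    """
    rng = np.random.default_rng(seed)
    n_captured = 0
    for _ in range(n_traj):
        x = np.array(x0, float)
        for _ in range(n_steps):
            v = V(alpha, x)
            speed = np.linalg.norm(v)
            # dx = V dt + sigma ||V|| dW, with dW ~ sqrt(dt) N(0, 1) per component
            x = x + v * dt + sigma * speed * np.sqrt(dt) * rng.standard_normal(3)
            if is_captured(x):
                n_captured += 1
                break
    return n_captured / n_traj
```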
### Explanation of Variation
Our model is deterministic, so all cells above the critical diameter are predicted to be captured and all cells below it are not. However, the experimental results reported in [1] show that the capture probability increases continuously from 0 to 1 as the particle diameter increases over a narrow range from 10\(\mu\)m to 30\(\mu\)m. This indeterminacy of capture is likely caused by a combination of cells on the top streamline not becoming captured and hydrodynamic interactions between the cells. The microcentrifuge is intended to be a high throughput device, and the cells are focused tightly along the streamlines of the channel feeding the microcentrifuge chamber. As these cells are near each other, they affect each other's inertial migration due to shear induced dispersion [23; 24].
Inertial focusing is generally the strongest contribution to a particle's cross-stream velocity, so it is not immediately apparent that these unmodeled interactions could be large enough to prevent or ensure capture. However, particle capture trajectories are sensitive to noise. In Fig. 5 we visualize the basin of attraction of the microcentrifuge chamber, and we show how close the trajectories of particles approaching the chamber on the lower major focusing streamline are to the separating manifold between the captured region and the un-captured region. In Fig. 5 the two colored regions represent invariant sets, where a trajectory that starts inside a region will never leave it.
Each size of particle has different invariant regions because the focusing and curvature terms that control particle off-streamline migration in Eq. 5 change with particle size; however, the separating manifold between the two invariant regions is always close to the trajectory of a particle. Particles close to the critical diameter pass extremely close to the capturing trajectory for almost the entire length of the chamber, meaning that very small perturbations could permit capture of a particle below the critical diameter or deny capture of a particle larger than the critical diameter.
Next we demonstrate why the trajectories of the particles are so close to the capturing streamlines using a phase plane diagram showing their inertial migration velocity vectors. This diagram can be scaled to particles of different sizes by scaling the plotted velocity vectors by \(\alpha^{3}\), and we superimpose upon the vector field plot the trajectories of three different particle sizes. We see that the inertial migration is strongest in the channel leading up to the microcentrifuge and fairly weak near the vortex itself. This is because inertial migration depends upon shear, curvature, and walls [25]. Near the vortex the shear is low, so the inertial migration is low. The trajectories of particles near the critical particle diameter all experience very similar inertial migration; the larger particles are captured because they experience slightly more. We are unaware of any experimental observations of particle trajectories during capture, or more generally, of particles crossing the microcentrifuge; this experimental gap is likely due to the high speed of particles within the channels feeding each cavity. By contrast, the limiting trajectories of captured particles within the chamber have been extensively reported on. It is known that captured particles are drawn into closed orbits (limit cycles) within the microcentrifuge; our simulations replicate these dynamics (Fig. 6A) and agree quantitatively with measured limit cycle paths for particles of different sizes at \(Re\)=125 (Fig. 6) [2; 26; 27]. In particular we find that larger particles converge to smaller orbits. The orbit shape is well predicted, including a common center of gyration and a long straight section of trajectory that is tilted and closely conforms to the separating streamline that divides captured and non-captured particles (Fig. 5). To fit the experimental data we had to estimate particle sizes from the figures of [1]. We chose diameters 15.3\(\mu m\), 16.2\(\mu m\), and 20.5\(\mu m\) because they fit the sizes listed in other parts of [1]; the relative sizes we measured in the figure were 8.1, 8.6, and 12.2 pixels respectively. There are some discrepancies, though it is not possible to determine whether they are due to imperfect matching of the Reynolds number or particle sizes, to unmodeled effects such as the device flexing slightly when under pressure, or to approximations made in our model.
The two smaller particles are not predicted to be captured by our simulation, but nevertheless they are predicted to have a stable limit cycle if they enter the cavity. This feature complicates the calculation of capture probabilities: small particles do not migrate across the separating manifold in Fig. 5, but if perturbations, such as those due to multi-particle effects, induce them to cross it, they will be stably entrained in the vortex eddy. The smallest particle that is predicted to have a stable limit cycle in our simulation had a diameter of 10.9\(\mu\)m, which lines up well with the data, where the smallest cell captured had a diameter between 10\(-\)12.5\(\mu\)m. The complicating presence of these smaller particles within the microcentrifuge chamber emphasizes that the vortex is unrelated to the capture of particles; the vortex only controls the retention of particles.
### How to Change Critical Particle Diameter
While the calculations above are a detailed analysis of the particles, they cannot be directly extended to other channels because the background flow depends on the Reynolds number. However, if we change the channel size and flow speed so as to keep the Reynolds number constant, we can reuse all of our calculations with a different parameter set, allowing us to tune the critical particle diameter that our microfluidic device selects for:
\[\begin{aligned} Re &= \frac{UL}{\nu}=\frac{(U/k)(kL)}{\nu}\\ \alpha &= \frac{a}{kL}\\ \alpha_{k}^{*} &= \frac{\alpha^{*}}{k}\end{aligned}\]
For example, if you need to change the device so that, instead of separating particles above and below \(22.9\mu m\), it separates particles above and below \(11.45\mu m\), all you need to do is halve the size of your device and double your flow speed.
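A small helper illustrating this constant-Reynolds-number rescaling is sketched below; the numbers in the example are only those quoted above, and the nominal speed is an assumed placeholder.

```python
def rescale_device(L, U, d_crit, k):
    """Scale the channel dimension by k and the speed by 1/k so Re = U*L/nu is unchanged.

    Because the nondimensional problem is then identical, the critical particle
    diameter simply scales with the channel: d_crit -> k * d_crit.
    """
    return k * L, U / k, k * d_crit

# Example from the text: halving the device and doubling the speed moves the
# cutoff from 22.9 um to about 11.45 um (chamber dimension 800 um, nominal speed assumed).
print(rescale_device(L=800e-6, U=1.0, d_crit=22.9e-6, k=0.5))
```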
There is a practical limitation, however: the PDMS polymer from which the microfluidic devices are etched does not
Figure 5: Trajectories of different particles are displayed (purple) over the capture region associated with each sized particle. The red area represents starting points where the particle will not be captured by the vortex and the blue area represents the region where the particle will be captured by the vortex. All trajectories are fairly close to the bifurcating streamline, however trajectories of particles near the critical diameter of \(22.9\mu m\) are very close to the separating streamline and whether or not they get captured is influenced by noise.
come in arbitrary thicknesses. However, previous studies have shown that if you scale the fluid speed and channel/cavity height accordingly, the change in thickness does not affect the fluid flow [22]. We speculate, although we leave the testing to future work, that this change is also unlikely to affect the inertial migration significantly, since the optimal Reynolds number for separation drives particles to the symmetry plane, the furthest place from the side walls.
## IV Perspectives
By replacing particles with singularities, we were able to accurately model inertial migration without solving the nonlinear Navier-Stokes equations or explicitly meshing the 3D and time-varying fluid domain between particles and channel walls. This method allowed us to carefully analyze all possible particle trajectories within a microcentrifuge device. Our analysis shows two clear takeaways for microfluidic devices designed to separate particles by size:
Figure 6: **A**: A quiver plot of the strength of inertial focusing throughout the channel, shown alongside the trajectories of particles of diameters \(0\mu m\), \(10\mu m\), \(20\mu m\), and \(30\mu m\). The strength of migration is indicated by color, with arrows not shown being \(<\)\(\frac{1}{100}\) of the maximum strength. The strength of inertial migration is measured by \(\mathcal{M}(\mathbf{x})/\mathbf{u}(\mathbf{x})\), which represents the infinitesimally small distance the inertial migration will push the particle at a given point in space. The cutout shows how inertial focusing is strongest in the channel leading to the chamber and near the walls. **B**: The particle limit cycle does not trap streamlines, shown with the dotted line, which slowly spiral outwards, eventually exiting the microcentrifuge cavity. **C**: Time-lapsed limit cycles from micro-beads of various sizes are overlaid with predictions from our asymptotic simulations. Smaller particles have larger limit cycles, with the larger particles (\(>\)\(30\mu m\)) having very small limit cycles. While the smaller particles are not predicted to be captured via inertial focusing, their limit cycles are well captured by our asymptotic model.
1. The performance of the microcentrifuge is deeply dependent on Reynolds number. Lower Reynolds numbers advect particles towards the symmetry plane of the device and allow for more stable separation of particles by size. Higher Reynolds numbers push particles away from the symmetry plane of the device and introduce more complicated 3D effects, which are both difficult to analyze and have been found empirically to cause devices to have wider, less consistent capture thresholds.
2. Error in sorting is due to the lack of separation between particle trajectories and the capture separatrix. Particles within 3\(\mu\)m of the critical capture diameter straddle the capture separatrix for 75% of the length of the microcentrifuge, making their capture susceptible to particle-particle interaction and other forms of random noise. Inertial migration velocities scale as \(O(\alpha^{3})\); this high-order size scaling leads to sufficient separation for sorting particles accurately by size, but is not enough to produce low error rates by itself.
Experimental studies that have optimized for capture accuracy have independently arrived at these design principles. References [1; 2; 28] have all found optimal Reynolds numbers between 100-150, and [28] were able to achieve 90% capture accuracy at 90% purity by optimizing over the Reynolds number. We identify the effect of this optimization as finding the maximum separation between the focusing position in the channel and the separating manifold.
Our singularity method models only the first order inertial migration; further expansions in terms of particle size and Reynolds number are possible, and would either add higher order forcing terms within our Oseen equation or require that we impose additional matching conditions to model the particle. Such an extension could be useful for predicting differential focusing of different particles within the same channel, for unpacking the role of particle interactions, and for extending the calculation to the largest particles on which separation is performed. The simulations here took approximately 10 minutes per solve of the inertial migration; however, computations could be sped up significantly by using the discontinuity decompositions from [21]. The method can be extended to particles of other shapes: singularity modeling of the particle needs only the stresslet strength associated with the Stokes solution around the particle. This stresslet strength is already known for ellipsoids in shear flow [29], and for other particles it can be found by solving for the motion of the particle in Stokes flow, for which there are many approximate or numerical methods [30; 18].
Changes to channel geometry can also lead to significant increases in separation accuracy. Paie et al. [31] found a 20% increase in capture accuracy by creating a separate reservoir for captured particles and optimizing over 6 parameter combinations. Numerically driven parameter optimization is still extremely computationally expensive; here we have done our best to analyze in depth the basic microcentrifuge geometry and describe the physical principles that fuel its strengths and weaknesses. Our results provide a platform on which accelerated studies may be used to further optimize the accuracy, controllability, or throughput of future microcentrifuges.
|
2303.08995
|
Fast and Accurate Object Detection on Asymmetrical Receptive Field
|
Object detection has been used in a wide range of industries. For example, in
autonomous driving, the task of object detection is to accurately and
efficiently identify and locate a large number of predefined classes of object
instances (vehicles, pedestrians, traffic signs, etc.) from videos of roads. In
robotics, the industry robot needs to recognize specific machine elements. In
the security field, the camera should accurately recognize each face of people.
With the wide application of deep learning, the accuracy and efficiency of
object detection have been greatly improved, but object detection based on deep
learning still faces challenges. Different applications of object detection
have different requirements, including highly accurate detection,
multi-category object detection, real-time detection, robustness to occlusions,
etc. To address the above challenges, based on extensive literature research,
this paper analyzes methods for improving and optimizing mainstream object
detection algorithms from the perspective of evolution of one-stage and
two-stage object detection algorithms. Furthermore, this article proposes
methods for improving object detection accuracy from the perspective of
changing receptive fields. The new model is based on the original YOLOv5 (You
Look Only Once) with some modifications. The structure of the head part of
YOLOv5 is modified by adding asymmetrical pooling layers. As a result, the
accuracy of the algorithm is improved while ensuring the speed. The
performances of the new model in this article are compared with original YOLOv5
model and analyzed from several parameters. And the evaluation of the new model
is presented in four situations. Moreover, the summary and outlooks are made on
the problems to be solved and the research directions in the future.
|
Tianhao Lin
|
2023-03-15T23:59:18Z
|
http://arxiv.org/abs/2303.08995v2
|
# Fast and Accurate Object Detection on Asymmetrical Receptive Field
###### Abstract
Object detection has been used in a wide range of industries. For example, in autonomous driving, the task of object detection is to accurately and efficiently identify and locate a large number of predefined classes of object instances (vehicles, pedestrians, traffic signs, etc.) from videos of roads. In robotics, the industry robot needs to recognize specific machine elements. In the security field, the camera should accurately recognize each face of people. With the wide application of deep learning, the accuracy and efficiency of object detection have been greatly improved, but object detection based on deep learning still faces challenges. Different applications of object detection have different requirements, including highly accurate detection, multi-category object detection, real-time detection, robustness to occlusions, etc. To address the above challenges, based on extensive literature research, this paper analyzes methods for improving and optimizing mainstream object detection algorithms from the perspective of the evolution of one-stage and two-stage object detection algorithms. Furthermore, this article proposes methods for improving object detection accuracy from the perspective of changing receptive fields. The new model is based on the original YOLOv5 (You Only Look Once) with some modifications. The structure of the head part of YOLOv5 is modified by adding asymmetrical pooling layers. As a result, the accuracy of the algorithm is improved while ensuring the speed. The performance of the new model in this article is compared with the original YOLOv5 model and analyzed with respect to several parameters, and the evaluation of the new model is presented in four situations. Moreover, a summary and outlook are given on the problems still to be solved and on future research directions.
## 1 Introduction
In recent years, object detection has always been a fundamental problem in computer vision. Object detection can be divided into two major schools of thought due to different tendencies for effectiveness. One is two-stage object detection, which focuses more on accuracy, and the other is one-stage object detection, which focuses more on speed. Two-stage object detection, as the name implies, solves the problem in two stages. The first stage is the generation of regions of interest (RoI), which is called Region Proposal, and the extraction of features using convolutional neural networks. The second stage is to put the output of the first stage into a support vector machine (SVM) or CNN-based classifier to classify objects and then correct the objects' positions using bounding box regression. Two-stage object detection originated from Regions with CNN features (R-CNN) ([1]). R-CNN uses a heuristic (Selective Search) to reduce information redundancy and improve detection speed by first forming region proposals before detection. In addition, the robustness of feature extraction is improved. The researchers then proposed a new neural network by applying a technique named Spatial Pyramid Pooling (SPP) ([2]), which not only reduces computational redundancy, but more importantly, breaks the constraint of a fixed-size input to the fully connected layer. After SPP Net, Fast R-CNN ([3]) emerged. Compared with the original R-CNN, it has been optimized for speed: it changes the original serial structure to a parallel structure, and the algorithm performs bounding box (Bbox) regression while classifying. But this was not good enough, so the researchers proposed Faster R-CNN ([4]). Different from the previous heuristic algorithms used to produce region proposals, Faster R-CNN proposes the concept of Region Proposal Networks (RPN), which use neural networks to learn to generate region proposals. Meanwhile, the concept of anchors was introduced in RPNs. The object detection of the R-CNN series improved and evolved step by step into the final Faster R-CNN algorithm, which has great improvements in both
accuracy and speed. However, it still could not achieve real-time object detection, so one-stage object detection algorithms were proposed later.
One-stage object detection is a one-shot solution that directly regresses the predicted objects of interest. Compared to two-stage object detection, it is very fast and finds a balance between speed and accuracy. 'You Only Look Once' (YOLO) ([5]) is one of the representative algorithms. YOLO first resizes the image to a fixed size, then passes it through a set of convolutional neural networks, and finally connects it to a fully connected layer to output the result directly, which is the basic structure of the whole network. The latest algorithm, YOLOv5, has been able to obtain relatively satisfactory results: it is very fast while ensuring sufficient accuracy. Throughout the neural network model, we believe that the final feature maps play a critical role in the results. Each pixel in a feature map has a corresponding receptive field ([6]); the deeper the feature map, the larger the receptive field. At the end of the network, the YOLOv5 algorithm generates three different sizes of feature map. All of the pixels from these three feature maps have the same receptive-field shape: square. Moreover, for better detection of differently shaped objects, each feature map has three different shapes of anchors. We conjecture that if we change the shape of the receptive field, the detection capability of the algorithm will be improved, and it will be easier to detect objects of different shapes. A feature map whose pixels have a square receptive field can detect square objects more easily. Conversely, a feature map whose pixels have a rectangular receptive field can detect rectangular objects more easily. Based on this conjecture, we make some modifications to the YOLOv5 model so that we can change the receptive fields of the final feature maps.
## 2 Related Work
In this section, we first introduce the development of YOLO (You Only Look Once). Then we introduce the COCO (Common Objects in Context) dataset ([7]). Finally, we explain the metrics used in the evaluation of YOLO algorithms.
### Development of YOLO
As the pioneer of one-stage algorithms, YOLOv1 ([5]) was a big hit with its simple network structure and real-time detection speed on GPU, breaking the “monopoly” of the R-CNN series and bringing a huge change to the field of object detection. YOLOv1 has many drawbacks when viewed from today's perspective, but back then YOLOv1 was very popular and provided the framework basis for many later one-stage algorithms. The most important feature of YOLOv1 is that it uses only one convolutional neural network to achieve object detection end-to-end. At CVPR 2017, following the YOLOv1 work, the original authors introduced YOLOv2 (or YOLO9000) ([8]). Compared with YOLOv1, YOLOv2 introduced the anchor box mechanism proposed by Faster R-CNN and the use of the K-means clustering algorithm to obtain better anchor boxes. The regression method of the bounding box was also adjusted. Later, YOLOv3 ([9]) was proposed. YOLOv3 not only uses a better backbone network, DarkNet-53, but also uses Feature Pyramid Network (FPN) ([10]) technology and multi-level detection methods. YOLOv4 ([11]) was proposed in April 2020. It achieves 43.5% AP accuracy and 65 FPS on the MS COCO dataset, a 10% and 12% improvement compared to YOLOv3, respectively. YOLOv4 uses the CSPDarkNet-53 network as a new backbone network, which had excellent speed and accuracy at the time. Shortly after YOLOv4 was proposed, Ultralytics came up with YOLOv5. YOLOv5 has no particular changes in network structure, but it has better performance in speed and accuracy.
### COCO Dataset
The COCO dataset ([7]) is a large-scale dataset that can be used for image detection, semantic segmentation, and image captioning. It has more than 330K images (220K of them annotated), containing 1.5 million objects, 80 object categories (pedestrian, car, bicycle, etc.), 91 stuff categories (grass, wall, sky, etc.), five image descriptions per image, and 250K pedestrians with key point annotations. For object detection, we use COCO2017. The number of training images is 118K, the number of validation images is 5K,
and there are 40K test images. Each image label has 5 parameters: category, \(x\) coordinate of the centroid, \(y\) coordinate of the centroid, width \(w\), and height \(h\). The dataset contains 80 categories covering a large number of real-life scenarios, such as traffic, interviews, dances, animals, etc. These objects differ in scale, occlusion, pose, expression, and lighting conditions. Therefore, the training data is large enough to be a challenge for the detector.
### Metrics
Several metrics are widely used to evaluate the performance of object detection, mainly including Precision-Recall (PR) curves and Average Precision (AP) ([12]). Before we explain these two metrics, we need to first introduce the confusion matrix.
True Positive (TP) means that the sample is actually positive and the network also predicts it as positive; True Negative (TN) means that the sample is actually negative and the network predicts it as negative. Therefore, if the result is TP or TN, the network makes a true prediction. False Positive (FP) means the prediction is wrong, because the sample is actually negative but the network predicts it as positive. False Negative (FN) means the sample is actually positive but the network predicts it as negative, so this prediction is also wrong. Precision and Recall are a common pair of performance metrics based on the confusion matrix.
\[\text{Precision }=\frac{TP}{TP+FP} \tag{1}\]
\[\text{Recall }=\frac{TP}{TP+FN} \tag{2}\]
Average Precision (AP) represents the area under the Precision-Recall curve. Generally, the higher the value of AP, the better the performance of the classifier. The value of AP lies in [0,1]. A perfect classifier will have an AP value of 1. Each class has an AP ([13]). The mean Average Precision (mAP) is calculated by finding the AP for each class and then averaging over the number of classes.
\[\text{mAP}=\frac{1}{N}\sum_{i=1}^{N}\text{AP}_{i} \tag{3}\]
The mAP incorporates the trade-off between precision and recall and considers both false positives (FP) and false negatives (FN). This property makes mAP a suitable metric for most detection applications.
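As an illustration of the metric, a generic way to compute AP as the area under the precision-recall curve from ranked detections is sketched below; this is a simplified step integration, not the exact COCO evaluation protocol.

```python
import numpy as np

def average_precision(scores, is_true_positive, n_ground_truth):
    """AP of one class as the area under the precision-recall curve.

    scores: confidence of each detection; is_true_positive: 1 if the detection
    matched a ground-truth box (e.g. IoU >= 0.5), else 0.
    """
    order = np.argsort(-np.asarray(scores, float))
    tp = np.asarray(is_true_positive, float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / max(n_ground_truth, 1)
    precision = cum_tp / (cum_tp + cum_fp)
    # Step integration of precision over recall.
    ap = recall[0] * precision[0] + np.sum((recall[1:] - recall[:-1]) * precision[1:])
    return float(ap)

def mean_average_precision(ap_per_class):
    """mAP as in Eq. 3: the mean of the per-class AP values."""
    return float(np.mean(ap_per_class))
```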
## 3 Proposed Methodology
### Architecture of YOLOv5
In the previous chapter, we briefly introduced the YOLO family. In this section, we specifically introduce the latest version of the YOLO algorithm, i.e., YOLOv5, and its network structure. Similar to previous versions of YOLO, the whole of YOLOv5 can still be divided into three parts, namely backbone, neck and head; see figures 1 and 2. The backbone can be regarded as the feature extraction network of YOLOv5,
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Training set & Validation set & Testing set & Total \\ \hline Nr. of images & 118,287 & 5,000 & 40,670 & 163,957 \\ \hline Percentage & 70\% & 5\% & 25\% & 100\% \\ \hline \end{tabular}
\end{table}
Table 1: Basic statistics of the COCO dataset.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{Actual condition} & \multicolumn{2}{|c|}{Predicted condition} \\ \cline{2-3} & Positive & Negative \\ \hline Positive & TP (True Positive) & FN (False Negative) \\ \hline Negative & FP (False Positive) & TN (True Negative) \\ \hline \end{tabular}
\end{table}
Table 2: Confusion table for classification results.
and according to its structure and the previous YOLOv4 backbone, we can generally call it CSPDarknet. The input images are first processed for feature extraction in CSPDarknet, and the extracted features can be called feature maps. In the backbone part, we obtain three feature maps for the next step of the network, i.e., the neck part. This part can also be called the enhanced feature extraction network of YOLOv5. The three feature maps obtained in the backbone part are fused in this part, and the purpose of the feature fusion is to combine feature information from different scales. In the neck part, the Path Aggregation Network (PAN) structure is used ([14]), where we not only upsample the features to achieve feature fusion, but also downsample the features again to achieve feature fusion. The head is the classifier and regressor of YOLOv5. With the backbone and neck, we have access to three enhanced feature maps. Each feature map has a width, height and number of channels, so we can think of the feature map as a collection of feature pixels, each of which has a number of channels. As in previous versions of YOLO, the detector head in YOLOv5 is composite, i.e., the classification and bounding box regression are implemented by a 1 \(\times\) 1 convolution. In summary, the entire YOLOv5 network is doing the following: feature extraction - feature enhancement - prediction of objects corresponding to the feature pixels.
The CSPDarknet in YOLOv5 has four important features: (1) Use of a residual network ([15]). The residual convolution in CSPDarknet can be divided into two parts: the main part is a 1 \(\times\) 1 convolution and a 3 \(\times\) 3 convolution; the skip connection does not do any processing and directly combines the input and output of the main part. The whole YOLOv5 backbone is composed of residual convolutions. The residual structure is characterized by its ease of optimization and its ability to improve accuracy by adding considerable depth. Its internal residual blocks use skip connections to alleviate the problem of gradient vanishing caused by increasing depth in deep neural networks. (2) Use of the CSPNet structure ([16]). The CSPNet structure is not too complicated: it splits the original stack of residual blocks into two parts; the main part continues the original stack of residual blocks, and the other part acts like a skip connection and is directly connected to the end after a small amount of processing. Therefore, it can be considered that there is a large skip connection in the CSP block. (3) Use of the SiLU activation function, which is an improved version of Sigmoid and ReLU. SiLU is unbounded above, bounded below, smooth, and non-monotonic. SiLU works better than ReLU on deep neural networks and can be regarded as a smooth ReLU activation function. (4) Use of the SPPF structure. Feature extraction is performed by maximum pooling with different pooling kernel sizes to improve the receptive field of the network. In YOLOv4, SPP was used inside the neck, while in YOLOv5, the SPPF module is used in the backbone.
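As an illustration of point (4), a simplified PyTorch sketch of an SPPF-style block is given below, loosely following the structure of the public YOLOv5 implementation (convolution + batch norm + SiLU, followed by repeated 5 \(\times\) 5 max pooling); channel counts and other details are simplified and should not be read as the exact YOLOv5 code.

```python
import torch
import torch.nn as nn

class Conv(nn.Module):
    """Conv2d + BatchNorm + SiLU, the basic block used throughout the backbone."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class SPPF(nn.Module):
    """Spatial Pyramid Pooling - Fast: repeated 5x5 max pooling enlarges the receptive field."""
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_hidden = c_in // 2
        self.cv1 = Conv(c_in, c_hidden, 1)
        self.cv2 = Conv(c_hidden * 4, c_out, 1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)
        y3 = self.pool(y2)
        # Concatenating the progressively pooled maps mixes several receptive-field sizes.
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))
```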
### New Head
#### 3.2.1 New Head
In the previous section, we explained the structure of YOLOv5, which consists of backbone, neck and head. The first change to YOLOv5 in this thesis is to change the size of the feature maps by adding asymmetrical pooling layers to the head part. The feature map sizes are 20 \(\times\) 20 \(\times\) 1024, 40 \(\times\) 40 \(\times\) 512, and 80 \(\times\) 80 \(\times\) 256 before passing through the 1 \(\times\) 1 convolutional layer, respectively. Taking the 80 \(\times\) 80 \(\times\) 256 feature map as an example, after passing the 1 \(\times\) 1 convolutional layer, its dimension becomes 80 \(\times\) 80 \(\times\) 255, i.e., 80 \(\times\) 80 \(\times\) 3 \(\times\) (4 + 1 + 80). The number 3 shows that there are three feature maps of dimension 80 \(\times\) 80. The difference between them is that the pixels in each feature map correspond to different sizes of anchor boxes. For example, the sizes of the three anchor boxes are 10 \(\times\) 13, 16 \(\times\) 30, and 33 \(\times\) 23. The feature points of the three feature maps have the same characteristics, i.e., the three groups of feature points have the same size of receptive fields, so they use different anchor boxes to try to fit the different shapes of the object. In our new model, we change both the receptive fields and the anchor boxes of each feature map. Firstly, the shape of the receptive field corresponding to each group of pixels needs to be changed. We speculate that when the receptive field corresponding to a point is square, the point has better prediction ability for objects whose shape is close to square. When the aspect ratio of the receptive field is 2:1, points can better predict objects with the same shape. The same is true for a receptive field with an aspect ratio of 1:2. So, after the 1 \(\times\) 1 convolutional layer, we add two asymmetric pooling layers to the head part, see figure 3. Thus, in our new model, in order to distinguish the role of each anchor box more clearly, one anchor box
Figure 1: Architecture of YOLOv5. The whole network is composed of Backbone, Neck and Head.
Figure 2: Details of each component in the YOLOv5 backbone.
is used to predict objects close to a square shape, one anchor box is used to predict rectangular objects whose width is larger than their height, and the last anchor box is used to predict rectangular objects whose width is smaller than their height. In summary, the head detector will no longer output 3 feature maps but 9 feature maps. Their sizes are 20 \(\times\) 20 \(\times\) 85, 20 \(\times\) 19 \(\times\) 85, 19 \(\times\) 20 \(\times\) 85, 40 \(\times\) 40 \(\times\) 85, 40 \(\times\) 39 \(\times\) 85, 39 \(\times\) 40 \(\times\) 85, 80 \(\times\) 80 \(\times\) 85, 80 \(\times\) 79 \(\times\) 85, and 79 \(\times\) 80 \(\times\) 85, respectively. For the pooling layer, this thesis chooses average pooling in order to avoid losing too much context. Since only pooling layers are added, the number of parameters of the new network does not increase, so it runs just as fast, and at the same time we expect its detection capability to be improved, as illustrated by the sketch below. In fact, one can also try to replace the pooling layers with convolutional layers with kernel sizes of (1, 2) and (2, 1). Although the network's parameters would increase, its running speed would not be significantly affected.
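The sketch below illustrates one detection level of this modified head in PyTorch: after the 1 \(\times\) 1 detection convolution, two stride-1 average-pooling layers with kernels (1, 2) and (2, 1) produce the two extra, asymmetric feature maps. It is a minimal illustration, not the full YOLOv5 Detect module, and the 85-channel output assumes one anchor per map as described above.

```python
import torch
import torch.nn as nn

class AsymmetricHead(nn.Module):
    """One detection level of the modified head (sketch only; anchor decoding omitted)."""
    def __init__(self, c_in, n_outputs=85):          # 85 = 4 box + 1 objectness + 80 classes
        super().__init__()
        self.detect = nn.Conv2d(c_in, n_outputs, kernel_size=1)
        self.pool_w = nn.AvgPool2d(kernel_size=(1, 2), stride=1)  # 2:1 receptive field
        self.pool_h = nn.AvgPool2d(kernel_size=(2, 1), stride=1)  # 1:2 receptive field

    def forward(self, x):
        p = self.detect(x)                             # square receptive field, e.g. 80 x 80
        return p, self.pool_w(p), self.pool_h(p)       # 80x80, 80x79, 79x80 feature maps

feat = torch.randn(1, 256, 80, 80)
square, wide, tall = AsymmetricHead(256)(feat)
print(square.shape, wide.shape, tall.shape)            # (1,85,80,80) (1,85,80,79) (1,85,79,80)
```

Because the stride-1 pooling only averages neighboring predictions, it adds no learnable parameters, which is why the parameter count of the modified network stays unchanged.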
#### 3.2.2 New Anchors
Setting anchors in advance is a very important step. In the original YOLOv5 model, depending on the size of the feature map, the anchors are divided into three groups: (10,13), (16,30), (33,23) for the 80 \(\times\) 80 feature map, (30,61), (62,45), (59,119) for the 40 \(\times\) 40 feature map and (116,90), (156,198), (373,326) for the 20 \(\times\) 20 feature map. In order to fit our new model better, we consider that the sizes of the anchors also need to be modified. Therefore, new anchors are generated. For the nine feature maps in figure 3, the anchors are (20,20), (40,20), (20,40), (60,60), (120,60), (60,120), (200,200), (400,200), (200,400) in order. The shapes of these anchors correspond to the receptive fields of the cells on the 9 feature maps. We assume that a feature map whose cells have a rectangular receptive field can detect rectangular objects more easily by using a rectangular anchor.
#### 3.2.3 New Strategy of NMS
As mentioned before, the head of the YOLOv5 model is modified from 3 feature maps to 9 feature maps. The anchors are also modified to match the corresponding feature maps. Finally, we divide the 9 feature maps into 3 types: those having square receptive fields, those having receptive fields with an aspect ratio of 2:1, and those having receptive fields with an aspect ratio of 1:2. We are going to use these three types of feature maps to detect
Figure 3: The new head detector outputs 9 feature maps. For the input, it will be divided into three types of processing: Conv, Conv + (1,2) pooling and Conv + (2,1) pooling.
objects with different aspect ratios. In the original YOLOv5 model, NMS is applied once to all predicted boxes. But in our method, we apply NMS four times: NMS is first performed separately on the boxes predicted by each of the three types of feature maps, and after the results are fused, NMS is done once more; a sketch of this strategy is given below. Using this strategy, we hope that the new model will have better performance for multiple shapes and categories of objects.
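A minimal sketch of the four-pass NMS strategy using torchvision is shown here; `boxes_by_type` and `scores_by_type` are assumed to already group the decoded (x1, y1, x2, y2) boxes and confidence scores by receptive-field type (square, 2:1, 1:2).

```python
import torch
from torchvision.ops import nms

def four_stage_nms(boxes_by_type, scores_by_type, iou_thres=0.45):
    """NMS per receptive-field type (3 passes), fuse the survivors, then one final NMS."""
    kept_boxes, kept_scores = [], []
    for boxes, scores in zip(boxes_by_type, scores_by_type):
        keep = nms(boxes, scores, iou_thres)          # pass 1-3: within one type
        kept_boxes.append(boxes[keep])
        kept_scores.append(scores[keep])
    boxes = torch.cat(kept_boxes)
    scores = torch.cat(kept_scores)
    keep = nms(boxes, scores, iou_thres)              # pass 4: across the fused results
    return boxes[keep], scores[keep]
```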
## 4 Experiment
### Configuration
The structure of the new model has the same backbone and neck as YOLOv5 ([17]); only the head is different. Therefore, to ensure the training speed, YOLOv5n is chosen for the new model. We use VS Code as the programming platform, and the GPU for training and validation is an RTX 3080. The machine learning framework is PyTorch. The version of cudatoolkit is 11.3.
### Training Hyperparameter
The training parameters need to be adjusted before training; these parameters are set in train.py.
### Evaluations
The goal of this thesis is to improve the YOLOv5 algorithm. We trained the original and modified YOLOv5n models on the COCO dataset and then compared their Precision, Recall and mAP. The trained models are divided into the following categories: (1) the original model, i.e., the backbone, neck and head of the model are not changed; (2) modified models, which can be divided into 4 kinds, containing three square anchors, three 2:1 aspect-ratio anchors, three 1:2 aspect-ratio anchors, and 9 anchors, respectively.
For the evaluation of the models, we similarly divide the process of validation into the following steps: first, we validate the models which only contain anchors with a square shape, a 2:1 aspect ratio and a 1:2 aspect ratio, respectively, to obtain three results, and then compare these results with the original model. It should be noted that the validation sets of the different models are different; for example, for the model with only 3 square anchors ([20, 20], [60, 60], [200, 200]), the labels in its validation set are also approximately square, and for the model with anchors with an aspect ratio of 2:1 ([40, 20], [120, 60], [400, 200]), the width of the labels in its validation set is also larger than the height; a sketch of this label filtering is given below. This is to verify our idea that a square receptive field with a square anchor is better at predicting objects that are approximately square, and likewise that a rectangular receptive field with a rectangular anchor is better at predicting rectangular objects. Secondly, we validate the model having nine feature maps (the anchors are [20, 20], [60, 60], [200, 200], [40, 20], [120, 60], [400, 200], [20, 40], [60, 120], [200, 400], respectively). The corresponding validation set contains 5000 pictures with complete labels. Finally, we validate the model with the original architecture and a modified \(loss.py\).
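The aspect-ratio filtering of validation labels can be done along the lines of the sketch below; it assumes YOLO-format label files with one `class x y w h` line per object, with widths and heights already converted to pixels, and the 1.2 thresholds are those quoted in this section.

```python
def aspect_class(w, h, ratio=1.2):
    """Return 'square', 'wide' (w > h) or 'tall' (h > w) for one label.

    w, h are assumed to be in pixels; normalized YOLO widths/heights would first
    need to be rescaled by the image dimensions.
    """
    ar = w / h
    if 1.0 / ratio <= ar <= ratio:
        return "square"
    return "wide" if ar > ratio else "tall"

def filter_labels(label_path, wanted="square"):
    """Keep only the labels of one aspect class from a YOLO-format .txt file."""
    kept = []
    for line in open(label_path):
        cls, x, y, w, h = map(float, line.split())
        if aspect_class(w, h) == wanted:
            kept.append((int(cls), x, y, w, h))
    return kept
```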
### Validation on 3-Feature Maps Networks
#### 4.4.1 Square-Anchor Model
In this section, we compare the performance of the models with 3 kinds of anchors against the original model. For the model with 3 square anchors, the validation dataset has 2,988 pictures. Each label in a picture has an aspect ratio between 1/1.2 and 1.2. Figure 4 shows the PR-curve of the original model and the
\begin{table}
\begin{tabular}{|c|c|} \hline '--weights' & default=' ' \\ \hline '--cfg' & default='yolov5n.yaml' \\ \hline '--epochs' & 300 \\ \hline '--batch-size' & 128 \\ \hline '--imgsz' & default=640 \\ \hline '--noautoanchor' & default=True \\ \hline \end{tabular}
\end{table}
Table 3: Hyperparameters for training of new model.
square-anchor model. The difference between these two models is that the square-anchor model has three feature maps (20 \(\times\) 20, 40 \(\times\) 40, 80 \(\times\) 80), but there is only one square anchor on each feature map.
The result shows that for approximately square labels, the feature maps which have square anchors and square receptive fields have better performance in terms of precision, mAP and processing speed.
#### 4.4.2 Asymmetrical Average Pooling Model
We added each of the two asymmetrical average pooling layers to the head of the original network, thus obtaining two new models. For the model with an added (1, 2) pooling layer, the aspect ratio of its receptive field becomes 2:1. Its validation set has 3,158 images containing 8,522 labels. Each label in a picture has an aspect ratio greater than 1.2. Figure 5 shows the PR-curves of the original model and the (1, 2) pooling model.
The result shows that for labels whose width is greater than their height, the feature maps which have rectangular anchors and rectangular receptive fields have better performance in terms of precision, mAP and processing speed. For the model with an added (2, 1) pooling layer, the aspect ratio of its receptive field becomes 1:2. Its validation set has 4,061 images containing 21,578 labels. Each label in a picture has an aspect ratio less than 1/1.2. Figure 6 shows the PR-curves of the original model and the (2, 1) pooling model.
For labels whose height is greater than their width, the feature maps which have rectangular anchors and rectangular receptive fields also have better performance in terms of precision, mAP and processing speed.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & P & R & mAP@0.5 & mAP@0.5:0.95 & pre-process & inference & NMS \\ \hline Original & 0.243 & 0.39 & 0.202 & 0.127 & 0.7ms & 4.3ms & 2.1ms \\ \hline Square-anchors & 0.254 & 0.365 & 0.206 & 0.13 & 0.4ms & 4.1ms & 2.3ms \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of the original model and the square-anchor model. The first four statistics show the performance of each model. The last three statistics show the processing speed per image.
Figure 4: PR-curves of original model and square-anchor model. The mAP@0.5 of the original model is 0.202 and the mAP@0.5 of the square-anchor model is 0.206. mAP@0.5 means the threshold of IoU is 0.5.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & P & R & mAP@0.5 & mAP@0.5:0.95 & pre-process & inference & NMS \\ \hline Original & 0.267 & 0.356 & 0.204 & 0.118 & 0.7ms & 4.1ms & 2.3ms \\ \hline (1, 2) pooling & 0.293 & 0.374 & 0.224 & 0.131 & 0.4ms & 4.6ms & 0.9ms \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of the original model and the (1, 2) pooling model. The first four statistics show the performance of each model. The last three statistics show the processing speed per image.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & P & R & mAP@0.5 & mAP@0.5:0.95 & pre-process & inference & NMS \\ \hline Original & 0.364 & 0.386 & 0.277 & 0.171 & 0.7ms & 4.5ms & 1.8ms \\ \hline (2, 1) pooling & 0.417 & 0.363 & 0.289 & 0.172 & 0.7ms & 4.1ms & 0.9ms \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of the original model and the (2, 1) pooling model. The first four statistics show the performance of each model. The last three statistics show the processing speed per image.
Figure 5: PR-curves of original model and (1, 2) pooling model. The mAP@0.5 of the original model is 0.204 and the mAP@0.5 of the (1, 2) pooling model is 0.224.
Figure 6: PR-curves of original model and (2, 1) pooling model. The mAP@0.5 of the original model is 0.227 and the mAP@0.5 of the (2, 1) pooling model is 0.289.
### Validation on 9-Feature Maps Network
From the previous section, we find that all three new networks perform better than the original network on their specific validation sets. Now, we combine these three 3-feature maps networks to get a 9-feature maps network, as shown in figure 3. The validation dataset here contains the complete 5,000 images with 36,335 labels.
Figure 7 shows that the 9-feature maps model performs better than the original model in terms of recall and mAP, but its speed is slower. This is because the input image needs to be processed into 9 feature maps instead of 3, the extra pooling layers also increase the processing time, and in the NMS step we perform NMS 4 times.
## 5 Conclusion and Future Work
Object detection has been a hot topic in recent years, and with the continuous efforts of researchers, object detection algorithms are performing better and better. Their accuracy and speed are gradually becoming able to meet the needs of various industries. For example, autonomous driving is a booming industry, and its high requirements for object detection have promoted further improvements in the effectiveness of object detection algorithms.
This article is based on the original YOLOv5 with some modifications. As a result, the accuracy of the algorithm is improved while the speed is maintained. Specifically, the backbone and neck parts of the new network are the same as the original ones, because experience shows that they perform well enough. We therefore chose to change the head part. The output of the model, i.e., three square feature maps, was changed to nine, six of which are no longer square. The layer preceding these six feature maps is a newly added asymmetrical pooling layer, so that we can change the receptive fields of the feature maps without adding new parameters, with the expectation that the model will have better predictive power for objects of multiple shapes. The final experimental results show that the new model is indeed improved: its mAP is improved by 0.002 compared to the original model, while its inference speed is not affected too much. Compared with the original YOLOv5 model, the new model has advantages in terms of detection accuracy. In the future, firstly, we can continue to optimize the network structure and further improve the accuracy.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & P & R & mAP@0.5 & mAP@0.5:0.95 & pre-process & inference & NMS \\ \hline Original & 0.597 & 0.418 & 0.454 & 0.267 & 0.6ms & 4.7ms & 1.7ms \\ \hline
9-Feature Maps & 0.596 & 0.427 & 0.456 & 0.269 & 0.6ms & 5.0ms & 2.6ms \\ \hline \end{tabular}
\end{table}
Table 7: Comparison of the original model and the 9-feature maps model. The first four statistics show the performance of each model; the last three show the per-image processing speed.
Figure 7: PR-curves of the original model and the 9-feature maps model. The [email protected] of the original model is 0.454 and that of the 9-feature maps model is 0.456.
In addition to modifying the head, we can also try to modify other structures, for example the backbone and the neck. Secondly, the prediction speed of the model still has room for improvement. Finally, the model can be applied to autonomous driving, for example in an autonomous driving simulation system.
|
2304.03867
|
Masked Student Dataset of Expressions
|
Facial expression recognition (FER) algorithms work well in constrained
environments with little or no occlusion of the face. However, real-world face
occlusion is prevalent, most notably with the need to use a face mask in the
current Covid-19 scenario. While there are works on the problem of occlusion in
FER, little has been done before on the particular face mask scenario.
Moreover, the few works in this area largely use synthetically created masked
FER datasets. Motivated by these challenges posed by the pandemic to FER, we
present a novel dataset, the Masked Student Dataset of Expressions or MSD-E,
consisting of 1,960 real-world non-masked and masked facial expression images
collected from 142 individuals. Along with the issue of obfuscated facial
features, we illustrate how other subtler issues in masked FER are represented
in our dataset. We then provide baseline results using ResNet-18, finding that
its performance dips in the non-masked case when trained for FER in the
presence of masks. To tackle this, we test two training paradigms: contrastive
learning and knowledge distillation, and find that they increase the model's
performance in the masked scenario while maintaining its non-masked
performance. We further visualise our results using t-SNE plots and Grad-CAM,
demonstrating that these paradigms capitalise on the limited features available
in the masked scenario. Finally, we benchmark SOTA methods on MSD-E.
|
Sridhar Sola, Darshan Gera
|
2023-04-07T23:43:21Z
|
http://arxiv.org/abs/2304.03867v1
|
# Masked Student Dataset of Expressions
###### Abstract.
Facial expression recognition (FER) algorithms work well in constrained environments with little or no occlusion of the face. However, real-world face occlusion is prevalent, most notably with the need to use a face mask in the current Covid-19 scenario. While there are works on the problem of occlusion in FER, little has been done before on the particular face mask scenario. Moreover, the few works in this area largely use synthetically created masked FER datasets. Motivated by these challenges posed by the pandemic to FER, we present a novel dataset, the **Masked Student Dataset of Expressions** or **MSD-E**, consisting of 1,960 real-world non-masked and masked facial expression images collected from 142 individuals. Along with the issue of obfuscated facial features, we illustrate how other subtler issues in masked FER are represented in our dataset. We then provide baseline results using ResNet-18, finding that its performance dips in the non-masked case when trained for FER in the presence of masks. To tackle this, we test two training paradigms: contrastive learning and knowledge distillation, and find that they increase the model's performance in the masked scenario while maintaining its non-masked performance. We further visualise our results using t-SNE plots and Grad-CAM, demonstrating that these paradigms capitalise on the limited features available in the masked scenario. Finally, we benchmark SOTA methods on MSD-E. The dataset is available at [https://github.com/SridharSola/MSD-E](https://github.com/SridharSola/MSD-E).
|
2310.01137
|
The $*$-exponential as a covering map
|
We employ tools from complex analysis to construct the $*$-logarithm of a
quaternionic slice regular function. Our approach enables us to achieve three
main objectives: we compute the monodromy associated with the $*$-exponential;
we establish sufficient conditions for the $*$-product of two $*$-exponentials
to also be a $*$-exponential; we calculate the slice derivative of the
$*$-exponential of a regular function.
|
Amedeo Altavilla, Samuele Mongodi
|
2023-10-02T12:22:42Z
|
http://arxiv.org/abs/2310.01137v1
|
# The \(*\)-exponential as a covering map
###### Abstract.
We employ tools from complex analysis to construct the \(*\)-logarithm of a quaternionic slice regular function. Our approach enables us to achieve three main objectives: we compute the monodromy associated with the \(*\)-exponential; we establish sufficient conditions for the \(*\)-product of two \(*\)-exponentials to also be a \(*\)-exponential; we calculate the slice derivative of the \(*\)-exponential of a regular function.
Key words and phrases: Slice-regular functions, quaternionic exponential, quaternionic logarithm, Baker-Campbell-Hausdorff, covering maps, monodromy.
2020 Mathematics Subject Classification: Primary 30G35, 30C25; secondary 30B50, 33B10, 58K10, 32A10.
Partially supported by PRIN 2022MWPMAB - "Interactions between Geometric Structures and Function Theories", by GNSAGA of INdAM and by the INdAM project "Teoria delle funzioni ipercomplesse e applicazioni".
finding \(*\)-roots of a slice regular function can be translated into a problem of lifting functions through a holomorphic covering map. The number and the structure of such \(*\)-roots were then linked to the group of deck transformations of the covering map.
Again, we would like to emphasize that this is possible because the analytic expressions of the multiplication in \(\mathbb{H}\) and in \(\mathbb{C}\otimes\mathbb{H}\) in terms of the coordinates with respect to some basis of \(\mathbb{H}\) and the corresponding complexified basis of \(\mathbb{C}\otimes\mathbb{H}\) are the same, reflecting the fact that a real analytic function on, say, the real line has a unique extension to the complexification of the real line given by a power series with the same coefficients.
We present here another instantiation of this consideration: we treat the case of \(*\)-logarithms by considering the map \(\exp:\mathbb{H}\to\mathbb{H}\) and lifting it to the complexification, to a map with the same analytic expression. The study of the local inverses of \(\exp\) again becomes a problem in complex analytic covering maps, from whose solution we also recover what we already proved in the case of \(*\)-roots. Using a geometric approach we will see that under natural topological hypotheses the exponential map in \(\mathbb{C}\otimes\mathbb{H}\) is a covering map (see Theorem 4.2) and we will be able to write down its monodromy. Then, thanks to the standard relation between holomorphic stem functions and slice regular functions, given a never-vanishing slice regular function \(f:U\to\mathbb{H}\) such that its "vector part" is never-vanishing, we will be able to construct a \(2\)-parameter family of \(*\)-logarithms (\(1\)-parameter family if \(U\cap\mathbb{R}\neq\emptyset\)), see Corollaries 5.2 and 5.3 for the results and Remark 5.1 for the explicit description of the monodromy.
Such a study extends what is already contained in [4, 8, 9] by showing the geometric nature of the many problems encountered in the search for a good notion of logarithm in the non-commutative setting.
As already mentioned, the proof of many results contained in the present paper follows topological strategies, which then produce natural hypotheses and conditions, simplifying many proofs contained in the aforementioned papers. On the other hand, since it is not the specific aim of this work, we will only give a glimpse of how the remaining residual cases should be treated, i.e. how some of the hypotheses could be relaxed.
In an effort to highlight the impact that this simple idea can produce, we analyze the problem of when a product of exponential is an exponential itself; the question for quaternions is easily settled by using a simplified version of the Baker-Campbell-Hausdorff formula (or, if one interprets quaternions as rotations, by a standard application of Rodrigues' formula). Once the problem is _analytically_ solved for quaternions, we formally consider the same solution for the same problem in \(\mathbb{C}\otimes\mathbb{H}\), where stem functions take their values. This gives us a solution to the same problem at the level of stem functions, hence for slice regular functions. The same idea is used to compute the slice derivative of the \(*\)-exponential of a slice regular function. Even though this computation is quite natural, it has not been implemented yet, possibly due to the lack of a strategy like the one we use here. In these
last two tasks, we will use a simple formula inspired by standard linear algebra, which allows, given a generic slice regular function \(f\), to write any other function \(g\) as a sum of a component in the "direction" of \(f\) and another "orthogonal" part. This will allow us to write much simpler formulas and to identify possible future generalizations.
## 2. Preliminaries
### Algebraic structures of \(\mathbb{H}\) and of \(\mathbb{H}\otimes\mathbb{C}\)
In this paper we will deal with many different imaginary units, not only those contained in the space of quaternions, but also with others coming from different algebras. Starting from complex numbers, the symbol '\(\imath\)' will denote the standard imaginary unit in \(\mathbb{C}\) (and hence will be used when working in \(\mathbb{C}^{N}\), \(N\geq 1\)). The symbol '\(i\)' will denote the first imaginary unit in the definition of the space \(\mathbb{H}\) of quaternions:
\[\mathbb{H}:=\{q=q_{0}+q_{1}i+q_{2}j+q_{3}k\,|\,q_{\ell}\in\mathbb{R},\,\ell=0, 1,2,3,\,i^{2}=j^{2}=k^{2}=-1,\,ij=k=-ji\}.\]
We will make use of the standard conjugation in \(\mathbb{H}\) denoted by the superscript \(c\):
\[q=q_{0}+q_{1}i+q_{2}j+q_{3}k\mapsto q^{c}=q_{0}-(q_{1}i+q_{2}j+q_{3}k).\]
Using this conjugation, given any quaternion \(q\), it is possible to define its scalar and vector parts as follows
\[q_{0}=\frac{q+q^{c}}{2},\quad q_{v}=\frac{q-q^{c}}{2},\]
so that \(q=q_{0}+q_{v}\). Obviously, if \(q\) is represented in the form \(q=q_{0}+q_{1}i+q_{2}j+q_{3}k\), then \(q_{0}\) is the scalar part of \(q\) and \(q_{v}=q_{1}i+q_{2}j+q_{3}k\). Using this representation, we can express the product of two quaternions \(q=q_{0}+q_{v}\) and \(p=p_{0}+p_{v}\) in a more understandable way:
\[qp=q_{0}p_{0}-\langle q_{v},p_{v}\rangle+q_{0}p_{v}+p_{0}q_{v}+q_{v}\wedge p_{ v}, \tag{1}\]
where \(\langle\cdot,\cdot\rangle\) and \(\wedge\) denote the standard Euclidean and cross products. In particular, the square norm of \(q\) can be computed as \(|q|^{2}=qq^{c}\) and we have that \(q_{v}^{2}=-|q_{v}|^{2}\).
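As a quick illustration of Formula (1) (a toy check added here for convenience), take \(q=1+i\) and \(p=j\): then \(q_{0}=1\), \(q_{v}=i\), \(p_{0}=0\), \(p_{v}=j\), and Formula (1) gives
\[qp=1\cdot 0-\langle i,j\rangle+1\cdot j+0\cdot i+i\wedge j=j+k,\]
which agrees with the direct computation \((1+i)j=j+ij=j+k\).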
Whenever \(q_{v}\neq 0\), we are able to represent \(q\) in another convenient form:
\[q=\alpha+I\beta,\]
where \(\alpha=q_{0}\), \(I=\frac{q_{v}}{|q_{v}|}\) and \(\beta=|q_{v}|\). In particular, if we denote the set of imaginary units as follows
\[\mathbb{S}:=\{I\in\mathbb{H}\,|\,I^{2}=-1\}=\{\alpha_{1}i+\alpha_{2}j+\alpha _{3}k\,|\,\alpha_{1}^{2}+\alpha_{2}^{2}+\alpha_{3}^{2}=1\},\]
and we denote by \(\mathbb{C}_{I}=span(1,I)=\{\alpha+I\beta\,|\,\alpha,\beta\in\mathbb{R}\}\) the complex plane generated by \(1\) and \(I\), we have that
\[\mathbb{H}=\bigcup_{I\in\mathbb{S}}\mathbb{C}_{I}.\]
This last representation comes in handy when working with slice functions, and in order to do that we need to discuss the complexification of \(\mathbb{H}\) (in particular, we will follow the approach of [11]).
The symbol '\(\sqrt{-1}\)' will denote the complex imaginary unit defining the complexification of \(\mathbb{H}\), i.e.
\[\mathbb{C}\otimes\mathbb{H}:=\{q+\sqrt{-1}p\,|\,q,p\in\mathbb{H}\}.\]
The algebraic structure of \(\mathbb{C}\otimes\mathbb{H}\) is defined in the usual way: if \(q_{1}+\sqrt{-1}p_{1},q_{2}+\sqrt{-1}p_{2}\in\mathbb{C}\otimes\mathbb{H}\), then:
\[(q_{1}+\sqrt{-1}p_{1})(q_{2}+\sqrt{-1}p_{2})=q_{1}q_{2}-p_{1}p_{2}+\sqrt{-1}(q _{1}p_{2}+p_{1}q_{2}).\]
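For instance (a toy computation added for illustration), taking \(q_{1}=q_{2}=0\), \(p_{1}=i\) and \(p_{2}=j\) in the formula above gives \((\sqrt{-1}i)(\sqrt{-1}j)=-ij=-k\), while the same formula gives \((\sqrt{-1}i)^{2}=-i^{2}=1\): the element \(\sqrt{-1}i\) is a square root of \(1\) different from \(\pm 1\), a first sign that \(\mathbb{C}\otimes\mathbb{H}\) behaves quite differently from \(\mathbb{H}\).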
By fixing a (orthogonal) basis of \(\mathbb{H}\) containing \(1\) and by writing any quaternion in its \(4\) real coordinates we get a biholomorphism between \(\mathbb{C}\otimes\mathbb{H}\) and \(\mathbb{C}^{4}\): if \(q=q_{0}+q_{1}i+q_{2}j+q_{3}k\) and \(p=p_{0}+p_{1}i+p_{2}j+p_{3}k\), then we define \(\phi:\mathbb{C}\otimes\mathbb{H}\to\mathbb{C}^{4}\) as
\[\phi(q+\sqrt{-1}p)=(q_{0}+\imath p_{0},q_{1}+\imath p_{1},q_{2}+\imath p_{2},q_ {3}+\imath p_{3}).\]
In particular, this biholomorphism induces an algebraic structure on \(\mathbb{C}^{4}\) that is defined exactly as that of \(\mathbb{H}\):
\[\mathbb{C}\otimes\mathbb{H}=\{z=z_{0}+z_{1}i+z_{2}j+z_{3}k\,|\,z_{\ell}\in \mathbb{C}\,,\ell=0,1,2,3,\,i^{2}=j^{2}=k^{2}=-1,\,ij=k=-ji\}.\]
In \(\mathbb{C}\otimes\mathbb{H}\) it is possible to define two commuting conjugations:
\[q+\sqrt{-1}p\mapsto (q+\sqrt{-1}p)^{c}=q^{c}+\sqrt{-1}p^{c},\] \[q+\sqrt{-1}p\mapsto \overline{q+\sqrt{-1}p}=q-\sqrt{-1}p.\]
If we work in \(\mathbb{C}^{4}\), these two conjugations translates as follows
\[z=z_{0}+z_{1}i+z_{2}j+z_{3}k\mapsto z^{c}=z_{0}-(z_{1}i+z_{2}j+z_{3}k),\] \[z=z_{0}+z_{1}i+z_{2}j+z_{3}k\mapsto \overline{z}=\bar{z}_{0}+\bar{z}_{1}i+\bar{z}_{2}j+\bar{z}_{3}k.\]
Exactly as before, we can define the "scalar" and "vector" part of \(z\in\mathbb{C}\otimes\mathbb{H}\) as
\[z_{0}=\frac{z+z^{c}}{2},\quad\underline{z}=\frac{z-z^{c}}{2},\]
Within this language, the product of two elements \(z,w\in\mathbb{C}\otimes\mathbb{H}\) can be written formally as in Formula 1:
\[zw=z_{0}w_{0}-\langle\underline{z},\underline{w}\rangle+z_{0}\underline{w}+w_{ 0}\underline{z}+\underline{z}\wedge\underline{w},\]
where \(\langle\cdot,\cdot\rangle\) and \(\wedge\) are the formal generalizations of the Euclidean and cross products. In particular if \(z=z_{0}+\underline{z}=z_{0}+z_{1}i+z_{2}j+z_{3}k\), setting
\[\underline{z}^{2}=z_{1}^{2}+z_{2}^{2}+z_{3}^{2},\]
we have that \(zz^{c}=\langle z,z\rangle=z_{0}^{2}+\underline{z}^{2}\in\mathbb{C}\) and it is a real number only if the four components of \(z\) are real numbers, i.e. only if \(z\in\mathbb{H}\). However, since the product in \(\mathbb{C}\) is commutative, for any \(z,w\in\mathbb{C}\otimes\mathbb{H}\) we have
\[(zw)(zw)^{c}=zww^{c}z^{c}=(zz^{c})(ww^{c}). \tag{2}\]
If \(z\) is such that \(zz^{c}\neq 0\), then \(z\neq 0\), but unfortunately \(\mathbb{C}\otimes\mathbb{H}\) contains zero divisors. So, in particular, there are \(z\neq 0\) such that \(zz^{c}=0\).
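For instance (an explicit example, added for illustration), the element \(z=1+\sqrt{-1}\,i\), i.e. \(z_{0}=1\), \(z_{1}=\imath\), \(z_{2}=z_{3}=0\) in the \(\mathbb{C}^{4}\) coordinates, satisfies \(zz^{c}=z_{0}^{2}+\underline{z}^{2}=1+\imath^{2}=0\) although \(z\neq 0\); hence both \(z\) and \(z^{c}\) are zero divisors.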
### Stem functions, slice functions and regularity
We are now ready to introduce and discuss slice functions. As already said, we will rely on the approach of stem functions developed in [11] and in subsequent works by the same authors. We also refer to [5] to deepen our specific point of view. We start with the following definition.
**Definition 2.1**.: Let \(\mathcal{U}\subset\mathbb{C}\) be such that \(\overline{\mathcal{U}}=\mathcal{U}.\) A function \(F:\mathcal{U}\to\mathbb{C}\otimes\mathbb{H}\) is said to be a _stem function_ if, for any \(z\in\mathcal{U}\), we have \(F(\bar{z})=\overline{F(z)}\).
If we write \(F:\mathcal{U}\to\mathbb{C}\otimes\mathbb{H}\) as \(F(z)=F_{ev}(z)+\sqrt{-1}F_{od}(z)\), then the condition \(F(\bar{z})=\overline{F(z)}\) is reflected in the following two equalities \(F_{ev}(\bar{z})=F_{ev}(z)\) and \(F_{od}(\bar{z})=-F_{od}(z)\). If, instead, we read \(F\) as a function taking values in \(\mathbb{C}^{4}\), \(F(z)=(F_{0}(z),F_{1}(z),F_{2}(z),F_{3}(z))\), then the stem condition must be satisfied by all four components, i.e. \(F_{\ell}(\bar{z})=\overline{F_{\ell}(z)}\), for \(\ell=0,1,2,3\).
**Definition 2.2**.: Let \(U\subset\mathbb{H}\) be such that if \(q=\alpha+I\beta\in U\) then \(\alpha+J\beta\in U\), for any \(J\in\mathbb{S}\) and let \(\mathcal{U}=\{\alpha+\imath\beta\,|\,\alpha+I\beta\in U\}\). A function \(f:U\to\mathbb{H}\) is said to be a _slice function_ if there exists a stem function \(F=F_{ev}+\sqrt{-1}F_{od}:\mathcal{U}\to\mathbb{C}\otimes\mathbb{H}\) such that \(f(\alpha+I\beta)=F_{ev}(\alpha+\imath\beta)+IF_{od}(\alpha+\sqrt{-1}\beta)\); in this case we will write \(f=\mathcal{I}(F)\) and we will say that \(f\) is induced by \(F\).
If \(U\) is a domain and \(F\) is a holomorphic function, then \(f\) is said to be a _slice regular function_.
The definition of stem functions guarantees the well-definition of slice functions: in fact, since \(F(\bar{z})=\overline{F(z)}\), then the value of \(f\) at \(\alpha+(-I)(-\beta)\) is not different from that of \(f\) at \(\alpha+I\beta\). Examples of slice regular functions are polynomials and converging power series in the quaternionic variable \(q\) with right quaternionic coefficients.
The main property of slice functions is the so-called _Representation Formula_ contained in the following statement (see [10, Theorem 1.16]). It essentially says that a slice function can be recovered from its values on two different semislices \(\mathbb{C}_{I}^{+}\) and \(\mathbb{C}_{K}^{+}\), where the apex '\(+\)' indicates the upper half plane.
**Theorem 2.3** (Representation Formula).: _Let \(f:U\to\mathbb{H}\) be a slice function and let \(J,K\in\mathbb{S}\) be such that \(J\neq K\). Then, for every \(\alpha+I\beta\in U\) the following formula holds_
\[f(\alpha+I\beta)=(I-K)((J-K)^{-1}f(\alpha+J\beta))-(I-J)((J-K)^{-1}f(\alpha+ K\beta)).\]
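As a trivial sanity check (added here for convenience), taking \(I=J\) in the formula gives \((J-K)\big((J-K)^{-1}f(\alpha+J\beta)\big)-(J-J)\big((J-K)^{-1}f(\alpha+K\beta)\big)=f(\alpha+J\beta)\), as it must.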
It is well known that the pointwise product of two slice functions does not preserve regularity; however, the pointwise product of two stem functions is a stem function, therefore it is natural to introduce a new notion of product as follows.
**Definition 2.4**.: Let \(f=\mathcal{I}(F)\) and \(g=\mathcal{I}(G)\) be two slice functions defined on the same domain \(U\). The \(*\)_-product_ of \(f\) and \(g\) is defined as the slice function
\[f*g=\mathcal{I}(FG):U\to\mathbb{H}.\]
Since the \(*\)-product is defined from the pointwise product in a non-commutative algebra (namely in \(\mathbb{C}\otimes\mathbb{H}\)), it is non-commutative itself. However, if we consider a slice function \(f=\mathcal{I}(F_{ev}+\sqrt{-1}F_{od}):U\to\mathbb{H}\), such that \(F_{ev}\) and \(F_{od}\) take only real values, then \(F\) is a legit complex function of one complex variable and, for any other slice function \(g\) defined on \(U\), we have that
\[f*g=fg=g*f.\]
A function \(f=\mathcal{I}(F)\) with the above property is said to be _slice preserving_. In fact, as \(F_{ev}\) and \(F_{od}\) are real valued, then, for any \(q=\alpha+I\beta\in U\), the element \(f(q)\) belongs to the same slice \(\mathbb{C}_{I}\) of \(q\). Written as a complex curve in \(\mathbb{C}^{4}\) the stem function \(F\) of a slice preserving function takes the following form
\[F(z)=(F_{ev}+\imath F_{od},0,0,0)=(F_{0},0,0,0).\]
At this stage we can apply all the formalism and properties described in the previous part of this Section and obtain that, if \(f=\mathcal{I}(F)\) and \(g=\mathcal{I}(G)\), \(F=F_{0}+F_{1}i+F_{2}j+F_{3}k\) and \(G=G_{0}+G_{1}i+G_{2}j+G_{3}k\), \(f_{\ell}=\mathcal{I}(F_{\ell})\) and \(g_{\ell}=\mathcal{I}(G_{\ell})\) for \(\ell=0,1,2,3\), \(f_{v}=\mathcal{I}((F-F^{c})/2)\) and \(g_{v}=\mathcal{I}((G-G^{c})/2)\), then
\[f*g=f_{0}g_{0}-\langle f_{v},g_{v}\rangle_{*}+f_{0}g_{v}+g_{0}f_{v}+f_{v}\wedge_{*}g_{v},\]
where \(\langle f_{v},g_{v}\rangle_{*}=f_{1}g_{1}+f_{2}g_{2}+f_{3}g_{3}\) and \(f_{v}\wedge_{*}g_{v}=(f_{2}g_{3}-f_{3}g_{2})i+(f_{3}g_{1}-f_{1}g_{3})j+(f_{1}g_{2}-f_{2}g_{1})k\) and, of course, all \(f_{\ell}\) and \(g_{\ell^{\prime}}\) are slice preserving functions. These last two operators can be defined in an intrinsic way by means of the so-called regular conjugation: given a slice function \(f=\mathcal{I}(F):U\to\mathbb{H}\), we define its _regular conjugate_ as the function \(f^{c}:U\to\mathbb{H}\) defined as \(f^{c}=\mathcal{I}(F^{c})\). Then, if \(g:U\to\mathbb{H}\) is another slice function, we have that
\[\langle f,g\rangle_{*}=\frac{f*g^{c}+g*f^{c}}{2},\qquad f\wedge_{*}g=f_{v}\wedge_{*}g_{v}=\frac{f*g-g*f}{2}=\frac{[f,g]}{2}.\]
This representation of the product highlights how many algebraic features of slice functions directly come from those of quaternions (or of quaternionic curves). For instance, two non slice preserving functions \(f\) and \(g\) commute if and only if \(f_{v}\wedge_{*}g_{v}\equiv 0\) if and only if there exist two slice preserving functions \(\alpha\) and \(\beta\) not both identically zero, such that \(\alpha f_{v}+\beta g_{v}\equiv 0\) (see e.g. [1, Proposition 2.10]). A particular instance of this phenomenon is when \(f\) and \(g\) are both \(\mathbb{C}_{I}\)_-preserving_ for some \(I\in\mathbb{S}\), i.e., for any \(q=\alpha+I\beta\) in the domain of \(f\) and \(g\)
we have that \(f(q),g(q)\in\mathbb{C}_{I}\). Slice preserving functions are \(\mathbb{C}_{I}\)-preserving for any \(I\in\mathbb{S}\). Keeping this parallelism between the algebraic features of \(\mathbb{H}\) and those of \(\mathbb{C}\otimes\mathbb{H}\), we notice that the role of the Euclidean norm of \(\mathbb{R}^{4}\simeq\mathbb{H}\) is taken here by the so-called _symmetrization_ of \(f\): given a slice function \(f=f_{0}+f_{1}i+f_{2}j+f_{3}k\), its symmetrization is the function \(f^{s}:=\mathcal{I}(FF^{c})=f_{0}^{2}+f_{1}^{2}+f_{2}^{2}+f_{3}^{2}\). The symmetrization of \(f\) has an important role in the study of zeroes of \(f\) (see [10, Chapter 3]).
We now recall an important example of slice preserving regular function.
**Example 2.1**.: Let \(\mathcal{J}:\mathbb{H}\setminus\mathbb{R}\to\mathbb{H}\) be the function such that
\[\mathcal{J}(q)=\frac{q_{v}}{|q_{v}|}.\]
This is clearly a slice preserving function and it is constant on each semislice \(\mathbb{C}_{I}^{+}\). In fact, if \(q=\alpha+I\beta\in\mathbb{H}\setminus\mathbb{R}\), and \(\beta>0\), then \(\mathcal{J}(q)=I\). This particular function plays an important role in the theory of slice regular functions with domain that does not intersect the real axis. In fact, thanks to the fact that \(\mathcal{J}*\mathcal{J}=\mathcal{J}^{2}\equiv-1\), we are able to define slice regular idempotent functions (and hence zero divisors) as
\[\ell_{+},\ell_{-}:\mathbb{H}\setminus\mathbb{R}\to\mathbb{H},\qquad\ell_{\pm }=\frac{1\mp\mathcal{J}i}{2}.\]
For a complete study of these functions see [3].
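As a quick check (added for the reader's convenience): since \(\mathcal{J}\) is slice preserving, it \(*\)-commutes with the constant function \(i\), so \((\mathcal{J}i)*(\mathcal{J}i)=\mathcal{J}*\mathcal{J}*i*i=(-1)(-1)=1\); hence
\[\ell_{\pm}*\ell_{\pm}=\frac{1\mp 2\mathcal{J}i+(\mathcal{J}i)*(\mathcal{J}i)}{4}=\frac{2\mp 2\mathcal{J}i}{4}=\ell_{\pm},\qquad\ell_{+}*\ell_{-}=\frac{1-(\mathcal{J}i)*(\mathcal{J}i)}{4}=0,\]
so the \(\ell_{\pm}\) are indeed idempotents and zero divisors.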
Starting from the previous example it is worth noticing that, given any slice regular function \(f=f_{0}+f_{v}\), such that \(f_{v}\not\equiv 0\) and \(\sqrt{f_{v}^{s}}\) is well defined (e.g. when \(f_{v}^{s}\) is never vanishing see [1, Corollary 3.2]), then the function
\[\frac{f_{v}}{\sqrt{f_{v}^{s}}},\]
is such that
\[\frac{f_{v}}{\sqrt{f_{v}^{s}}}*\frac{f_{v}}{\sqrt{f_{v}^{s}}}\equiv-1\,.\]
So, at least two (intrinsically) different functions, take the role of the imaginary unit in the setting of slice functions.
As said before, a slice regular function is a slice function such that its stem function is holomorphic. In fact, if \(f=\mathcal{I}(F):U\to\mathbb{H}\) is a slice function of class \(\mathcal{C}^{1}\) defined on a domain \(U\), then the functions \(\partial F/\partial\bar{z}\) and \(\partial F/\partial z\) are stem functions as well. In particular, the function \(\partial_{c}f:=\mathcal{I}(\partial F/\partial z)\) is called _slice derivative_ of \(f\). As it is clear from the definition, the slice derivative of a slice regular function controls the behavior "along slices". Thus, to have complete information at first order of a slice regular function \(f\) we need to consider another operator, namely the _spherical derivative_: given a slice function \(f=\mathcal{I}(F):U\to\mathbb{H}\)
we define \(\partial_{s}f:U\setminus\mathbb{R}\to\mathbb{H}\) as the slice function \(\partial_{s}f(\alpha+I\beta)=\mathcal{I}\left(\frac{F_{od}(\alpha+\imath\beta)}{\beta}\right)\). Even if it does not look like a derivative, the spherical derivative can also be obtained as the result of a differential operator applied to \(f\) (see [14]).
We close this preliminary section by recalling the definition of the \(*\)-exponential of a slice regular function.
**Definition 2.5**.: Let \(f:U\to\mathbb{H}\) be any slice function. We set \(f^{*2}=f*f\) and, for any \(N>2\), we define
\[f^{*N}=f*f^{*(N-1)}.\]
If \(f\) is slice regular, then we define the function \(\exp_{*}(f):U\to\mathbb{H}\) as
\[\exp_{*}(f)=\sum_{n\in\mathbb{N}}\frac{f^{*n}}{n!}.\]
Many properties and representations of the \(*\)-exponential of a slice regular function are discussed in [1, 4, 8, 9].
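A simple instance, recorded here for later use: if \(f\) is slice preserving, then \(f^{*n}=f^{n}\) pointwise, so that \(\exp_{*}(f)(q)=\exp(f(q))\); in particular, the \(*\)-exponential of the identity function is the quaternionic exponential \(\exp\) studied in the next section.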
## 3. The quaternionic exponential as a covering map
It is well known that the quaternionic exponential map \(\exp:\mathbb{H}\to\mathbb{H}\) is a covering map. However, in order to be self-contained we propose here a proof of this fact using its slice regular nature. In fact, the function \(\exp\) is induced by the stem function \(E:\mathbb{C}\to\mathbb{C}\otimes\mathbb{H}\) defined as \(E(\alpha+\imath\beta)=e^{\alpha+\sqrt{-1}\beta}=e^{\alpha}(\cos\beta+\sqrt{-1} \sin\beta)\) or, with our usual abuse of notation, viewed as a curve in \(\mathbb{C}^{4}\), as \(E(z)=(e^{z},0,0,0)\). Notice that, from the definition, the function \(\exp\) is slice preserving.
We now introduce the following family of sets where \(\exp\) will result to be non-singular. For any \(k\in\mathbb{N}\) set
\[\mathcal{U}_{k}:=\{q\in\mathbb{H}\,|\,k\pi<|q_{v}|<(k+1)\pi\}.\]
We are now ready to state and prove the first result.
**Theorem 3.1**.: _The real differential of \(\exp\) is non-singular at \(q\) if and only if \(q\in\mathcal{U}_{k}\) for some \(k\in\mathbb{N}\). Moreover, for each \(k\in\mathbb{N}\), the restriction \(\exp_{|\mathcal{U}_{k}}\) is a diffeomorphism onto its image, which is \(\mathbb{H}\setminus\mathbb{R}\)._
Proof.: Following [5, Lemmas 3.1 and 3.3] or [10, Proposition 8.19], if \(f:\Omega\to\mathbb{H}\) is slice regular and \(q=\alpha+I\beta\in\Omega\), then the real differential of \(f\) is singular at \(q\) if and only if \(\partial_{c}f(q)=0\) or \(\partial_{s}f(q)=0\) or \(\partial_{c}f(q)(\partial_{s}f(q))^{c}\in(\mathbb{C}_{I})^{\perp}\). We have that \(\partial_{c}\exp=\exp\), so it is never vanishing. The spherical derivative of \(\exp\) at \(q=\alpha+I\beta\) is equal to \(e^{\alpha}\frac{\sin\beta}{\beta}\) and therefore it vanishes if and only if \(\beta=h\pi\), where \(h\in\mathbb{Z}^{*}\). As
\((\exp(q))\cdot e^{\alpha}\frac{\sin\beta}{\beta}\in\mathbb{C}_{I}\), for all \(q\in\mathbb{H}\), we have that the set of critical points, i.e. the set of points where the real differential does not have maximum rank, is given by
\[C_{0}(\exp)=\{q\in\mathbb{H}\,|\,|q_{v}|=h\pi,h\in\mathbb{Z}^{*}\}=\{\pi(z,I)\,| \,\mathsf{Im}(z)=h\pi,\,h\in\mathbb{Z}^{*},\,I\in\mathbb{S}\}.\]
It is easy to see that \(\exp(C_{0}(\exp))=\mathbb{R}\) and therefore, the set of singular points \(S(\exp)=\exp^{-1}(\exp(C_{0}(\exp)))\) is given by
\[S(\exp)=\{\pi(z,I)\,|\,\mathsf{Im}(z)=h\pi,\,h\in\mathbb{Z},\,I\in\mathbb{S}\}. \tag{3}\]
Collecting everything, we get the following equality
\[\mathbb{H}\setminus S(\exp)=\bigcup_{h\in\mathbb{N}}\mathcal{U}_{h}.\]
Now, as \(\exp\) is slice-preserving and \(\exp(\mathbb{H}\setminus S(\exp))\subseteq\mathbb{H}\setminus\mathbb{R}\), given \(q=\alpha+I\beta,q^{\prime}=\alpha^{\prime}+I^{\prime}\beta^{\prime}\in \mathbb{H}\setminus S(\exp)\), we have that \(\exp(q)=\exp(q^{\prime})\) only if \(I=I^{\prime}\). In fact, if we denote by \(\mathcal{T}:\mathbb{H}\setminus\mathbb{R}\to\mathbb{H}\setminus\mathbb{R}\) the map defined by
\[\mathcal{T}(\alpha+I\beta)=\alpha+I(\beta+\pi)\]
for \(\beta>0\) and \(I\in\mathbb{S}\), by the standard properties of the complex exponential, we obtain that \(q=\mathcal{T}^{(h)}(q^{\prime})\) for some \(h\in\mathbb{N}\), where the superscript denotes iterates.
More generally, \(\mathcal{T}\) is a diffeomorphism from \(\mathcal{U}_{h}\) to \(\mathcal{U}_{h+1}\) for all \(h\); therefore, \(\exp\) is injective from \(\mathcal{U}_{h}\) to \(\mathbb{H}\setminus\mathbb{R}\) and, again by the properties of the complex exponential, also surjective. Being a bijective local diffeomorphism, \(\exp\) is a diffeomorphism between \(\mathcal{U}_{h}\) and \(\mathbb{H}\setminus\mathbb{R}\).
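For an explicit illustration of the statement: given \(x=\alpha+I\beta\in\mathbb{H}\setminus\mathbb{R}\) with \(\beta>0\), its unique preimage in \(\mathcal{U}_{0}\) is
\[q=\tfrac{1}{2}\log(\alpha^{2}+\beta^{2})+I\theta,\qquad\theta=\arccos\left(\frac{\alpha}{\sqrt{\alpha^{2}+\beta^{2}}}\right)\in(0,\pi),\]
i.e. the principal \(\mathbb{C}_{I}\)-logarithm of \(x\); the remaining preimages are obtained by iterating the map \(\mathcal{T}\) defined in the proof.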
**Remark 3.1**.: The open domains \(\mathcal{U}_{h}\) are all disjoint, therefore the map
\[\exp_{|\mathbb{H}\setminus S(\exp)}:\mathbb{H}\setminus S(\exp)\to\mathbb{H} \setminus\mathbb{R}\]
is a covering map in the trivial way (i.e. there is no ramification).
## 4. The exponential in \(\mathbb{C}\otimes\mathbb{H}\)
In this section we are going to study the exponential function of the algebra \(\mathbb{C}\otimes\mathbb{H}\). As \(\mathbb{C}\otimes\mathbb{H}\) is biholomorphic to \(\mathbb{C}^{4}\), this study is made in order to apply the ideas of complex analysis to our quaternionic context. Given \(z=z_{0}+z_{1}i+z_{2}j+z_{3}k\in\mathbb{C}^{4}\cong\mathbb{C}\otimes\mathbb{H}\) we recall that, \(\underline{z}=z_{1}i+z_{2}j+z_{3}k\) and \(\underline{z}^{2}=z_{1}^{2}+z_{2}^{2}+z_{3}^{2}\). Moreover, given a slice regular function \(f=\mathcal{I}(F)\), we also recall the following set of relations already introduced in the preliminary section:
\[f_{0}=\mathcal{I}(F_{0}),\quad f_{v}=\mathcal{I}(\underline{F}),\quad f^{c}= \mathcal{I}(F_{0}-\underline{F}),\quad f^{s}=\mathcal{I}(F_{0}^{2}+\underline{F }^{2}),\quad f_{v}^{s}=\mathcal{I}(\underline{F}^{2}).\]
In [5] we introduced the analog of the quaternionic \(n\)th \(*\)-power as \(\sigma_{n}:\mathbb{C}\otimes\mathbb{H}\to\mathbb{C}\otimes\mathbb{H}\), where
\[\sigma_{n}(z)=(p_{0}^{n}(z_{0},\underline{z}^{2}),z_{1}p_{1}^{n-1}(z_{0}, \underline{z}^{2}),z_{2}p_{1}^{n-1}(z_{0},\underline{z}^{2}),z_{3}p_{1}^{n-1}( z_{0},\underline{z}^{2})),\]
where \(p_{0}^{n}\) and \(p_{1}^{n-1}\) are the usual Chebyshev polynomials, such that
\[(x+iy)^{n}=p_{0}^{n}(x,y^{2})+iyp_{1}^{n-1}(x,y^{2}). \tag{4}\]
We are now able to introduce the exponential function of the algebra \(\mathbb{C}\otimes\mathbb{H}\) as the function \(\varepsilon:\mathbb{C}\otimes\mathbb{H}\to\mathbb{C}\otimes\mathbb{H}\) defined by
\[\varepsilon=\sum_{n=0}^{\infty}\frac{\sigma_{n}}{n!}\,.\]
From the definition of \(\sigma_{n}\), we have that
\[\varepsilon(z)=\sum_{n=0}^{\infty}\frac{p_{0}^{n}(z_{0},\underline{z}^{2})}{n!}+\underline{z}\sum_{n=0}^{\infty}\frac{p_{1}^{n-1}(z_{0},\underline{z}^{2}) }{n!}\,.\]
Moreover, thanks to Formula 4, we have that
\[\sum_{n=0}^{\infty}\frac{p_{0}^{n}(x,y^{2})}{n!} =\operatorname{Re}\sum_{n=0}^{\infty}\frac{(x+Iy)^{n}}{n!}=e^{x} \cos(y)\,,\] \[y\sum_{n=0}^{\infty}\frac{p_{1}^{n-1}(x,y^{2})}{n!} =\operatorname{Im}\sum_{n=0}^{\infty}\frac{(x+Iy)^{n}}{n!}=e^{x} \sin(y)\,.\]
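As a purely numerical sanity check, not part of the argument, the following Python sketch compares the truncated series \(\sum_{n}\sigma_{n}(z)/n!\), computed through the product of \(\mathbb{C}\otimes\mathbb{H}\cong\mathbb{C}^{4}\), with the closed form \(e^{z_{0}}\big(\cos w+\underline{z}\,\frac{\sin w}{w}\big)\), \(w^{2}=\underline{z}^{2}\neq 0\), that the two identities above yield; all helper names below are ours.

```python
import numpy as np

def cmul(a, b):
    """Product in C (x) H ~ C^4, in the basis 1, i, j, k (complex coefficients)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 - a1*b3 + a2*b0 + a3*b1,
                     a0*b3 + a1*b2 - a2*b1 + a3*b0])

def eps_series(z, terms=40):
    """Truncated series sum_n z^n / n!, with z^n the algebra power (= sigma_n(z))."""
    out = np.array([1, 0, 0, 0], dtype=complex)
    power = out.copy()
    factorial = 1.0
    for n in range(1, terms):
        power = cmul(power, z)
        factorial *= n
        out = out + power / factorial
    return out

def eps_closed(z):
    """Closed form e^{z0}(cos w + zv sin(w)/w), with w^2 = z1^2+z2^2+z3^2 != 0."""
    z0, zv = z[0], z[1:]
    w = np.sqrt(np.sum(zv * zv) + 0j)
    return np.exp(z0) * np.concatenate(([np.cos(w)], zv * np.sin(w) / w))

z = np.array([0.3 + 0.2j, 1.1 - 0.4j, 0.5j, -0.7 + 0j])
print(np.allclose(eps_series(z), eps_closed(z)))  # expected: True
```

Since \(\cos w\) and \(\sin(w)/w\) are even entire functions of \(w\), the choice of the square root in the closed form is immaterial.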
The aim of the following pages is to prove that, under suitable hypotheses, \(\varepsilon\) is a covering map. In order to obtain such a result, we 'lift' our construction to another space where we are able to use standard techniques from complex analysis. This was the fruitful strategy already used in [5] in order to better understand \(*\)-roots of slice functions.
### Lift to \(\mathbb{C}^{2}\times\mathcal{S}\)
As in [5] we consider the set \(\mathcal{S}\) of imaginary units contained in \(\mathbb{C}\otimes\mathbb{H}\)
\[\mathcal{S}:=\{z=\underline{z}\in\mathbb{C}\otimes\mathbb{H}\,|\,z^{2}= \underline{z}^{2}=z_{1}^{2}+z_{2}^{2}+z_{3}^{2}=-1\},\]
and the map \(\rho:\mathbb{C}^{2}\times\mathcal{S}\to\mathbb{C}\otimes\mathbb{H}\) defined by
\[\rho((u_{0},u_{1}),s)=u_{0}+u_{1}s.\]
**Remark 4.1**.: With the language and symbols of tensor product the set \(\mathcal{S}\) contains those elements \(z=p+\sqrt{-1}q\), with \(p,q\in\mathbb{H}\), such that \(-1=(p+\sqrt{-1}q)^{2}=p^{2}-q^{2}+\sqrt{-1}(pq+qp)\). Therefore, \(p\) and \(q\) satisfy the following system (see also [5, Remark 4.2]),
\[\begin{cases}p^{2}-q^{2}=-1,\\ pq+qp=0.\end{cases}\]
The map \(\rho\) is a local diffeomorphism and, if we set \(\mathcal{W}^{\prime}:=\{(u_{0},u_{1})\in\mathbb{C}^{2}\,|\,u_{1}\neq 0\}\) and
\[\Omega^{\prime}=\rho(\mathcal{W}^{\prime}\times\mathcal{S})=\{(z_{0}, \underline{z})\in\mathbb{C}^{4}\,|\,\underline{z}^{2}\neq 0\},\]
then, the restriction of \(\rho\) to \(\mathcal{W}^{\prime}\times\mathcal{S}\) is, in fact, a double cover of its image \(\Omega^{\prime}\). Indeed, for any \((z_{0},\underline{z})\in\Omega^{\prime}\), we have that
\[\rho^{-1}(z_{0},\underline{z})=\left(\left(z_{0},\pm\sqrt{\underline{z}^{2}}\right),\pm\frac{\underline{z}}{\sqrt{\underline{z}^{2}}}\right).\]
In [5] we made a large use of this double cover because the total space is 'large enough' to allow many classical properties to hold. Now we consider the map \(\mathfrak{e}:\mathbb{C}^{2}\times\mathcal{S}\to\mathbb{C}^{2}\times\mathcal{S}\) defined as
\[\mathfrak{e}((u_{0},u_{1}),s)=((e^{u_{0}}\cos(u_{1}),e^{u_{0}}\sin(u_{1})),s).\]
This map is defined in order to have that \(\varepsilon\circ\rho=\rho\circ\mathfrak{e}\),
i.e., \(\mathfrak{e}\) can be viewed as the lift of \(\varepsilon\) in \(\mathbb{C}^{2}\times\mathcal{S}\). We want to prove that \(\mathfrak{e}\) is a covering map onto its image. This result will allow us to state that \(\varepsilon\) is a covering map as well. Before stating the result, we introduce the following set
\[\mathcal{W}:=\{(u_{0},u_{1})\in\mathbb{C}^{2}\,|\,u_{0}^{2}+u_{1}^{2}=0\}.\]
**Theorem 4.1**.: _The complex differential of \(\mathfrak{e}\) is everywhere non-singular. Moreover, the function \(\mathfrak{e}\) is a covering map onto its image, which is \(\mathfrak{e}(\mathbb{C}^{2}\times\mathcal{S})=(\mathbb{C}^{2}\setminus \mathcal{W})\times\mathcal{S}\)._
Proof.: Setting \((u^{\prime},s)=\mathfrak{e}(u,s)\), the differential of \(\mathfrak{e}\) at \((u,s)\) is a map
\[D\mathfrak{e}_{(u,s)}:T_{u}\mathbb{C}^{2}\times T_{s}\mathcal{S}\to T_{u^{ \prime}}\mathbb{C}^{2}\times T_{s}\mathcal{S}\]
that can be represented by the following matrix
\[\begin{pmatrix}e^{u_{0}}\cos(u_{1})&-e^{u_{0}}\sin(u_{1})&\mathbf{0}\\ e^{u_{0}}\sin(u_{1})&e^{u_{0}}\cos(u_{1})&\mathbf{0}\\ \mathbf{0}^{\top}&\mathbf{0}^{\top}&\mathbf{I}_{\mathcal{S}}\end{pmatrix},\]
where \(\mathbf{0}=(0,0)\) and, as \(\dim_{\mathbb{C}}\mathcal{S}=2\), \(\mathbf{I}_{\mathcal{S}}\) is the \(2\times 2\) identity matrix. We have \(\det(D\mathfrak{e}_{(u,s)})=e^{2u_{0}}\), so \(D\mathfrak{e}\) is always invertible, i.e. \(\mathfrak{e}\) is a local diffeomorphism between \(\mathbb{C}^{2}\times\mathcal{S}\) and itself.
We now pass to look at the image of \(\mathfrak{e}\). Given \((w_{0},w_{1})\in\mathbb{C}^{2}\) we consider the following system
\[\begin{cases}e^{u_{0}}\cos(u_{1})=w_{0}\\ e^{u_{0}}\sin(u_{1})=w_{1}.\end{cases} \tag{5}\]
We have that
\[w_{0}+\imath w_{1}=e^{u_{0}}e^{\imath u_{1}},\qquad w_{0}-\imath w_{1}=e^{u_{0}}e^{-\imath u_{1}},\]
and hence
\[u_{0} =\frac{\log(w_{0}+\imath w_{1})+\log(w_{0}-\imath w_{1})}{2}+(h_{1} +h_{2})\imath\pi,\] \[u_{1} =\frac{\log(w_{0}+\imath w_{1})-\log(w_{0}-\imath w_{1})}{2\imath} +(h_{1}-h_{2})\pi,\]
with \(h_{1},h_{2}\in\mathbb{Z}\). Therefore, the system in Formula (5) has a solution if and only if \(w_{0}\pm\imath w_{1}\neq 0\), i.e. if and only if \(w_{0}^{2}+w_{1}^{2}\neq 0\). Hence, \(\mathfrak{e}(\mathbb{C}^{2}\times\mathcal{S})=(\mathbb{C}^{2}\setminus \mathcal{W})\times\mathcal{S}\).
We now pass to prove that \(\mathfrak{e}\) is a covering map onto its image. Having proved that it is a local diffeomorphism, we are left to prove that the _lifting property_ is satisfied, i.e., given a continuous curve \(\gamma:[0,1]\to(\mathbb{C}^{2}\setminus\mathcal{W})\times\mathcal{S}\), we will show that it is possible to construct a continuous \(\tilde{\gamma}:[0,1]\to\mathbb{C}^{2}\times\mathcal{S}\) such that \(\mathfrak{e}\circ\tilde{\gamma}=\gamma\), i.e., such that the following diagram commutes.
Given \(((\tilde{u}_{0},\tilde{u}_{1}),\tilde{s})\in\mathbb{C}^{2}\times\mathcal{S}\) let us consider a curve \(\gamma=((\gamma_{0},\gamma_{1}),\gamma_{s}):[0,1]\to(\mathbb{C}^{2}\setminus \mathcal{W})\times\mathcal{S}\), such that \(\gamma(0)=\mathfrak{e}((\tilde{u}_{0},\tilde{u}_{1}),\tilde{s})\), then \(\gamma_{0}(t)^{2}+\gamma_{1}(t)^{2}\neq 0\) for all \(t\in[0,1]\). Thus, if we define
\[\alpha(t)=\gamma_{0}(t)+\imath\gamma_{1}(t),\qquad\beta(t)=\gamma_{0}(t)- \imath\gamma_{1}(t),\]
we have that \(\alpha\) and \(\beta\) are continuous paths in \(\mathbb{C}^{*}\). But then, as \(z\mapsto e^{z}\) is a covering map from \(\mathbb{C}\) to \(\mathbb{C}^{*}\), we can construct \(\tilde{\alpha}\) and \(\tilde{\beta}\) such that
\[e^{\tilde{\alpha}}=\alpha,\qquad e^{\tilde{\beta}}=\beta,\]
with \(\tilde{\alpha}(0)=\tilde{u}_{0}+\imath\tilde{u}_{1}\) and \(\tilde{\beta}(0)=\tilde{u}_{0}-\imath\tilde{u}_{1}\).
In conclusion, the path \(\tilde{\gamma}:[0,1]\to\mathbb{C}^{2}\times\mathcal{S}\) defined by
\[\tilde{\gamma}(t)=\left(\left(\frac{\tilde{\alpha}(t)+\tilde{\beta}(t)}{2}, \frac{\tilde{\alpha}(t)-\tilde{\beta}(t)}{2}\right),\gamma_{s}(t)\right),\]
is continuous, is such that \(\tilde{\gamma}(0)=((\tilde{u}_{0},\tilde{u}_{1}),\tilde{s})\in\mathfrak{e}^{-1 }(\gamma(0))\) and \(\mathfrak{e}\circ\tilde{\gamma}=\gamma\), i.e. the map \(\mathfrak{e}:\mathbb{C}^{2}\times\mathcal{S}\to(\mathbb{C}^{2}\setminus \mathcal{W})\times\mathcal{S}\) is a covering map.
**Remark 4.2**.: As a byproduct of the proof of Theorem 4.1, we get that the fundamental group of \(\mathbb{C}^{2}\setminus\mathcal{W}\) is isomorphic to \(\mathbb{Z}^{2}\); indeed, for \((h_{1},h_{2})\in\mathbb{Z}^{2}\), the monodromy is given by the following action
\[(h_{1},h_{2})\cdot(u_{0},u_{1})=(u_{0}+(h_{1}+h_{2})\imath\pi,u_{1}+(h_{1}-h_{2})\pi).\]
Moreover, notice that as \(h_{1}+h_{2}\) and \(h_{1}-h_{2}\) have the same parity, this result is consistent with [4, Theorem 1.2] (see the relation between the indices \(m\) and \(n\) at point _(2)_ in the second bullet of the referred result).
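For illustration, one can check directly that the action above does not change the value of \(\mathfrak{e}\):
\[e^{u_{0}+(h_{1}+h_{2})\imath\pi}\cos\big(u_{1}+(h_{1}-h_{2})\pi\big)=(-1)^{h_{1}+h_{2}}(-1)^{h_{1}-h_{2}}e^{u_{0}}\cos(u_{1})=e^{u_{0}}\cos(u_{1}),\]
since \((-1)^{2h_{1}}=1\), and the same cancellation of signs occurs for \(e^{u_{0}}\sin(u_{1})\).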
### Back to \(\mathbb{C}\otimes\mathbb{H}\)
We now move back to the study of \(\varepsilon\). Let us recall from [5, Section 4] the definition of the following two sets:
\[V_{-1}:=\{(z_{0},\underline{z})\in\mathbb{C}\otimes\mathbb{H}\,| \,z_{0}^{2}+\underline{z}^{2}=0\}=\rho(\mathcal{W}\times\mathcal{S}),\] \[V_{\infty}:=\{(z_{0},\underline{z})\in\mathbb{C}\otimes\mathbb{H }\,|\,\underline{z}^{2}=0\}=(\mathbb{C}\otimes\mathbb{H})\setminus\Omega^{ \prime}.\]
From these two sets, we define
\[\Omega:=\varepsilon^{-1}(\mathbb{C}\otimes\mathbb{H}\setminus(V_{-1}\cup V_{ \infty}))=\{(z_{0},\underline{z})\in\mathbb{C}\otimes\mathbb{H}\,|\, \underline{z}^{2}\neq h^{2}\pi^{2},\text{ for }h\in\mathbb{Z}\}.\]
Notice that \(\Omega\) is the exact counterpart, in the context of \(\mathbb{C}\otimes\mathbb{H}\), of the set \(\mathbb{H}\setminus S(\exp)\), where \(S(\exp)\) is defined in Formula (3). Finally, recalling that \(\varepsilon\circ\rho=\rho\circ\mathfrak{e}\) and collecting the fact that \(\mathfrak{e}:\mathbb{C}^{2}\times\mathcal{S}\to(\mathbb{C}^{2}\setminus \mathcal{W})\times\mathcal{S}\) and \(\rho:\mathcal{W}^{\prime}\times\mathcal{S}\to\Omega^{\prime}\) are covering maps, we have proved the following theorem.
**Theorem 4.2**.: _The function \(\varepsilon:\Omega\to\mathbb{C}\otimes\mathbb{H}\setminus(V_{-1}\cup V_{ \infty})\) is a covering map and its monodromy group is isomorphic to \(\mathbb{Z}^{2}\)._
The case \(h=0\) is somewhat special, as described in the following remark.
**Remark 4.3**.: The further restriction of \(\varepsilon\) to \(V_{\infty}\) is a covering map onto its image \(\varepsilon(V_{\infty})=V_{\infty}\setminus\{(0,z)\,|\,z^{2}=0\}\). In fact, if \((z_{0},z)\in V_{\infty}\), i.e. when \(h=0\), we have that \(\varepsilon(z_{0},z)=e^{z_{0}}(1,z)\) (compare with [1, Corollary 4.6]), which, again, belongs to \(V_{\infty}\); moreover it is easy to see that \((z_{0},\underline{z})\mapsto e^{z_{0}}(1,z)\) is a covering map. However, while in this case each element in \(V_{\infty}\) has a one-parameter family of preimages, thanks to Remark 4.2, in the general case of Theorem 4.2 any element in \(\mathbb{C}\otimes\mathbb{H}\setminus(V_{-1}\cup V_{\infty})\) has a two-parameter family of preimages in \(\Omega\). So, in a sense, in the case described in this remark, we lose a family of preimages. In particular, we have the following isomorphisms of the fundamental groups:
\[\pi_{1}(\mathbb{C}\otimes\mathbb{H}\setminus V_{\infty})\simeq\mathbb{Z}, \qquad\pi_{1}(\mathbb{C}\otimes\mathbb{H}\setminus(V_{-1}\cup V_{\infty})) \simeq\mathbb{Z}^{2}.\]
Thanks to the previous theorem, we can construct global 'logarithms' with respect to \(\varepsilon\).
**Corollary 4.3**.: _Let \(\mathcal{U}\) be a simply connected domain and let \(F:\mathcal{U}\to\mathbb{C}\otimes\mathbb{H}\setminus(V_{-1}\cup V_{\infty})\) be a continuous function. Then there exists a two-parameter family of continuous functions \(F_{(h_{1},h_{2})}:\mathcal{U}\to\mathbb{C}\otimes\mathbb{H}\), for \((h_{1},h_{2})\in\mathbb{Z}^{2}\), such that \(\varepsilon\circ F_{(h_{1},h_{2})}=F\)._
Of course the previous corollary applies, in particular, to stem functions.
## 5. Global \(*\)-logarithms
In this short section we collect a series of consequences of the previous section, allowing us to define global \(*\)-logarithms of a slice regular function \(f=\mathcal{I}(F)\) such that \(f^{s}\neq 0\neq f_{v}^{s}\) or, equivalently, such that \(F\) takes values in \(\mathbb{C}\otimes\mathbb{H}\setminus(V_{-1}\cup V_{\infty})\). As in [5, Section 5] we declare the following assumption.
**Assumption 5.1**.: From now on, the set of definition \(\mathcal{U}=\overline{\mathcal{U}}\) of our stem functions will be open and simply connected or the union of two simply connected domains (if \(\mathcal{U}\cap\mathbb{R}=\emptyset\)).
In [8, 9] the quaternionic domains \(U\) coming from the sets \(\mathcal{U}\) just described in the previous Assumption are called _basic domains_.
Thanks to Corollary 4.3, if \(f\) is a slice function such that the image of its stem function \(F\) does not intersect \(V_{-1}\cup V_{\infty}\), we virtually have a countable two-parameter family of \(*\)-logarithms. We only need to check that the resulting logarithms at the level of \(\mathbb{C}\otimes\mathbb{H}\) are stem functions as well. Later we will see that if the domain of \(f\) contains real points, then we lose a parameter, obtaining a closer analogy with the complex case.
As done in [5] for the case of \(n\)-th \(*\)-powers, given a stem function \(F:\mathcal{U}\to\mathbb{C}\otimes\mathbb{H}\setminus(V_{-1}\cup V_{\infty})\), we define the following set
\[\mathcal{G}:=\{G:\mathcal{U}\to\mathbb{C}\otimes\mathbb{H}\,|\,\varepsilon \circ G=F\}.\]
Since \(\varepsilon(\bar{z})=\overline{\varepsilon(z)}\), if \(G\in\mathcal{G}\) then the function \(\hat{G}:\mathcal{U}\to\mathbb{C}\otimes\mathbb{H}\) defined by \(\hat{G}(z)=\overline{G(\bar{z})}\) belongs to \(\mathcal{G}\) as well, in fact
\[(\varepsilon\circ\hat{G})(z)=\varepsilon(\hat{G}(z))=\varepsilon(\overline{G( \bar{z})})=\overline{\varepsilon(G(\bar{z}))}=\overline{F(\bar{z})}=F(z).\]
Note that \(G\) is a stem function if and only if \(\hat{G}=G\). Therefore, we can prove the following result, the proof of which goes exactly as that of [5, Theorem 5.3].
**Corollary 5.2**.: _Let \(U\) be a basic domain such that \(U\cap\mathbb{R}=\emptyset\) and let \(f:U\to\mathbb{H}\) be a slice regular function such that \(f^{s}(q)\neq 0\neq f_{v}^{s}(q)\), for all \(q\in U\). Then, there exists a two-parameter family of slice functions \(f_{(h_{1},h_{2})}:U\to\mathbb{H}\), for \((h_{1},h_{2})\in\mathbb{Z}^{2}\), such that_
\[\exp_{*}(f_{h_{1},h_{2}})=f.\]
**Remark 5.1**.: Following Remark 4.2, if \(g=g_{0}+g_{v}\) is a \(*\)-logarithm of \(f\), then, for any couple of integers \(h_{1}\) and \(h_{2}\), the function
\[g_{0}+(h_{1}+h_{2})\mathcal{J}\pi+(\sqrt{g_{v}^{s}}+(h_{1}-h_{2})\pi)\frac{g_ {v}}{\sqrt{g_{v}^{s}}}=g+\left[(h_{1}+h_{2})\mathcal{J}+(h_{1}-h_{2})\frac{g_ {v}}{\sqrt{g_{v}^{s}}}\right]\pi,\]
is a \(*\)-logarithm of \(f\) as well (see [4, Theorem 1.2]).
We now pass to analyze the case in which the function \(f\) is defined on a domain which intersects the real axis. Under this hypothesis, it is clear that the function \(\mathcal{J}\) cannot appear in the set of solutions. In fact, as explained later, we will obtain that the two parameters \(h_{1}\) and \(h_{2}\) shall be related by the equality \(h_{1}=-h_{2}\).
The proof of the following corollary goes as that of [5, Theorem 5.4].
**Corollary 5.3**.: _Let \(U\) be a basic domain such that \(U\cap\mathbb{R}\neq\emptyset\) and let \(f:U\to\mathbb{H}\) be a slice regular function such that \(f^{s}(q)\neq 0\neq f^{s}_{v}(q)\), for all \(q\in U\). Then, there exists a one-parameter family of slice functions \(f_{h}:U\to\mathbb{H}\), for \(h\in\mathbb{Z}\), such that_
\[\exp_{*}(f_{h})=f.\]
Exactly as in [5], thanks to our construction, the previous two corollaries can be stated without the hypothesis of regularity.
**Remark 5.2**.: As anticipated before, if the domain of \(f\) contains real points, then a one-parameter family of solutions is missing. In fact, if \(x^{0}\in U\cap\mathbb{R}\), then \(F(x^{0})\in\mathbb{R}^{4}\subset\mathbb{C}^{4}\simeq\mathbb{C}\otimes\mathbb{H}\) and there exists \(y^{0}\in\mathbb{R}^{4}\) such that \(\varepsilon(y^{0})=F(x^{0})\). We have \(y^{0}=\rho((u_{0},u_{1}),s)\) with \((u_{0},u_{1})\in\mathbb{R}^{2}\). Therefore, we obtain
\[\varepsilon^{-1}(F(x^{0})) =\{\rho((h_{1},h_{2})\cdot((u_{0},u_{1}),s))\,|\,(h_{1},h_{2})\in \mathbb{Z}^{2}\}\] \[=\{\rho((u_{0}+(h_{1}+h_{2})\imath\pi,u_{1}+(h_{1}-h_{2})\pi),s) \,|\,(h_{1},h_{2})\in\mathbb{Z}^{2}\},\]
whose only real points are those obtained when \(h_{1}+h_{2}=0\) (see the first component), i.e. when \(h_{2}=-h_{1}\). To each point \(y\) in \(\varepsilon^{-1}(F(x_{0}))\) we associate \(G\in\mathcal{G}\) such that \(G(x^{0})=y\); the \(G\)'s described in the previous result are those corresponding to real points in \(\varepsilon^{-1}(F(x^{0}))\).
All the results contained in this paper so far are coherent with those contained in [4, 8]. As already pointed out in the introduction, the main difference here is the idea of using the complex geometry of \(\mathbb{C}^{4}\) and of \(\mathbb{C}^{2}\times\mathcal{S}\) to reveal the nature of \(\varepsilon\) and of \(\mathfrak{e}\) as covering maps. In a broad sense, the strategy of [4] is that of "solving" the \(*\)-logarithm mostly in algebraic terms, while in [8] the same problem is addressed by considering a sort of "\(*\)-logarithm variety" and by analyzing the geometry of curves contained in it. As already noted in [5], in our opinion our approach seems to be more suitable to generalizations to other contexts, while giving a global view of the geometric structure lying beneath the specific issue.
We conclude this section by highlighting how it is possible to recover results about the \(*\)-roots, starting from the \(*\)-logarithm. The starting point is a quite standard argument from one complex variable but, as we will see, the computation of the monodromy needs some deeper investigation.
**Remark 5.3**.: As already said, in [5] we studied extensively the existence of an \(n\)-th \(*\)-root of a slice regular function. Exactly as in the complex case, almost all the work done in [5] can be
recovered from the study of the \(*\)-logarithm. In fact, if \(U\) is a basic domain and \(f:U\to\mathbb{H}\) is a slice function such that \(f^{s}(q)\neq 0\neq f^{s}_{v}(q)\), for all \(q\in U\), then, for any \(n\in\mathbb{R}\) we are able to define
\[(f)^{*\frac{1}{n}}:=\exp_{*}\left(\frac{1}{n}\log_{*}(f)\right),\]
where the superscript \(*\) means that the power is taken with respect to the \(*\)-product. We will show in the next section how to recover the monodromy of the \(n\)-th \(*\)-root from that of the \(*\)-logarithm.
## 6. Automorphisms of \(\mathfrak{e}\) and of \(\varepsilon\)
In this section we will give a description of the deck transformations of \(\mathfrak{e}\) and of \(\varepsilon\), i.e. the automorphisms of \(\mathbb{C}^{2}\times\mathcal{S}\) and of \(\Omega\) fixing the fibers of \(\mathfrak{e}\) and of \(\varepsilon\), respectively. To be precise, given a covering map \(\pi:X\to Y\), we are interested in the set
\[\operatorname{Aut}_{\pi}:=\{f:X\to X\,|\,\pi\circ f=\pi\}.\]
In particular, we will study \(\operatorname{Aut}_{\mathfrak{e}}\) and \(\operatorname{Aut}_{\varepsilon}\). We recall from [5] that \(\operatorname{Aut}_{\rho}=\{\operatorname{id},\Gamma\}\), where \(\Gamma:\mathbb{C}^{2}\times\mathcal{S}\to\mathbb{C}^{2}\times\mathcal{S}\) is given by
\[\Gamma((u_{0},u_{1}),s)=((u_{0},-u_{1}),-s).\]
Thanks to the content of the previous section we are able to represent \(\operatorname{Aut}_{\mathfrak{e}}\) in a convenient way. In fact, let us define \(T_{\ell}:\mathbb{C}^{2}\times\mathcal{S}\to\mathbb{C}^{2}\times\mathcal{S}\) as the function
\[T_{\ell}((u_{0},u_{1}),s)=((u_{0}+\imath\pi,u_{1}+\ell\pi),s),\quad\ell=-1,1.\]
Then, following the proof of Theorem 4.1, we have that
\[\operatorname{Aut}_{\mathfrak{e}}=\{T_{(a,b)}:=aT_{1}+bT_{-1}\,|\,a,b\in \mathbb{Z},\,a\equiv_{2}b\}.\]
In particular, given \((h_{1},h_{2})\in\mathbb{Z}^{2}\) from Remark 4.2, we get \(a=h_{1}+h_{2}\) and \(b=h_{1}-h_{2}\), while, given \(T_{(a,b)}\), then \(h_{1}=\frac{a+b}{2}\) and \(h_{2}=\frac{a-b}{2}\).
We now study \(\operatorname{Aut}_{\varepsilon}\). Recall that \(\varepsilon\circ\rho=\rho\circ\mathfrak{e}\) and notice that \(\Gamma\circ T_{1}=T_{-1}\circ\Gamma\). Therefore, if \((a,b)\in\mathbb{Z}^{2}\), we get
\[\Gamma\circ(aT_{1}+bT_{-1})=a\Gamma\circ T_{1}+b\Gamma\circ T_{-1}=aT_{-1} \circ\Gamma+bT_{1}\circ\Gamma=(bT_{1}+aT_{-1})\circ\Gamma.\]
Hence, \(T_{(a,b)}\) descends to a map \(S\in\operatorname{Aut}_{\varepsilon}\) if and only if \((a,b)=(b,a)\), i.e. if and only if \(T_{(a,b)}=T_{(a,a)}=a(T_{1}+T_{-1})\), \(a\in\mathbb{Z}\). It then follows that, if we define \(S_{0}:\mathbb{C}\otimes\mathbb{H}\to\mathbb{C}\otimes\mathbb{H}\) as the map
\[S_{0}(z_{0},\underline{z})=(z_{0}+2\imath\pi,\underline{z}),\]
then
\[\operatorname{Aut}_{\varepsilon}=\langle S_{0}\rangle_{\mathbb{Z}}\simeq \mathbb{Z}.\]
We have just proven the following result, which is analogous to [5, Corollary 6.4 and Proposition 6.5].
**Proposition 6.1**.: _The covering map \(\mathfrak{e}\) is regular, while \(\varepsilon\) is not._
### Monodromy of \(*\)-roots
We now want to recover the monodromy of \(*\)-roots by means of that of the \(*\)-logarithm. This computation was already performed in [5] with different techniques. Here we will use what we just learned from the study of the \(*\)-logarithm.
Since \(\varepsilon(2z)=\varepsilon(z)^{2}\) for any \(z\in\mathbb{C}\otimes\mathbb{H}\), where the product is the algebra operation of \(\mathbb{C}\otimes\mathbb{H}\), we also have, for any \(n\in\mathbb{N}\),
\[\varepsilon(nz)=\sigma_{n}(\varepsilon(z)).\]
Given \(G\in\mathcal{G}\), let \(H_{G}:\mathcal{U}\to\mathbb{C}^{2}\times\mathcal{S}\) be such that \(\rho\circ H_{G}=G\). Then we have
\[\varepsilon\circ\rho\circ T_{\ell}\circ H_{G}=\rho\circ\mathfrak{e}\circ T_{ \ell}\circ H_{G}=\rho\circ\mathfrak{e}\circ H_{G}=\varepsilon\circ\rho\circ H _{G}=\varepsilon\circ G=F\;.\]
Under the hypotheses of Corollary 4.3, the set \(\mathcal{G}\) is a two-parameter family of logarithms with respect to \(\varepsilon\). Therefore, we can represent each element of \(\mathcal{G}\) as \(G_{(a,b)}=\rho\circ T_{(a,b)}\circ H_{\tilde{G}}\), with \((a,b)\in\mathbb{Z}^{2}\), where \(G_{(0,0)}=:\tilde{G}\) is any particular solution of \(\varepsilon\circ X=F\).
Now, \(\varepsilon\circ(\frac{1}{n}G_{(a,b)})=\varepsilon\circ(\frac{1}{n}G_{(c,d)})\) with \((a,b),\ (c,d)\in\mathbb{Z}^{2}\), if and only if
\[\varepsilon\circ\rho\circ\frac{1}{n}T_{(a,b)}\circ H_{\tilde{G}}=\varepsilon \circ\rho\circ\frac{1}{n}T_{(c,d)}\circ H_{\tilde{G}}\;.\]
Set \(H_{\tilde{G}}(z)=((u_{0}(z),u_{1}(z)),s(z))\), then
\[\frac{1}{n}T_{(a,b)}\circ H_{\tilde{G}}=\left(\left(\frac{1}{n}u_{0}(z)+\frac{a+b}{n}\imath\pi,\frac{1}{n}u_{1}(z)+\frac{a-b}{n}\pi\right),s(z)\right).\]
Assume also that \(u_{1}\neq 0\); then the equality
\[\rho\circ\mathfrak{e}\circ\frac{1}{n}T_{(a,b)}\circ H_{\tilde{G}}=\rho\circ \mathfrak{e}\circ\frac{1}{n}T_{(c,d)}\circ H_{\tilde{G}}\]
holds true if and only if
\[\mathfrak{e}\circ\frac{1}{n}T_{(a,b)}\circ H_{\tilde{G}}=\mathfrak{e}\circ \frac{1}{n}T_{(c,d)}\circ H_{\tilde{G}}\]
that is equivalent to say that there exists \((e,f)\in\mathbb{Z}^{2}\) such that
\[T_{(e,f)}\circ\frac{1}{n}T_{(a,b)}\circ H_{\tilde{G}}=\frac{1}{n}T_{(c,d)} \circ H_{\tilde{G}},\]
i.e.
\[\left(\left(\frac{1}{n}u_{0}(z)+\left((e+f)+\frac{a+b}{n}\right)\imath\pi,\frac{1}{n}u_{1}(z)+\left((e-f)+\frac{a-b}{n}\right)\pi\right),s(z)\right)\\ =\left(\left(\frac{1}{n}u_{0}(z)+\frac{c+d}{n}\imath\pi,\frac{1}{n}u_{1}(z)+\frac{c-d}{n}\pi\right),s(z)\right),\]
i.e.
\[\frac{(c+d)-(a+b)}{n} =e+f \frac{(c-d)-(a-b)}{n}=e-f\] \[\frac{c-a}{n}+\frac{d-b}{n} =e+f \frac{c-a}{n}-\frac{d-b}{n}=e-f\]
i.e.
\[\frac{c-a}{n}\in\mathbb{Z} \frac{d-b}{n}\in\mathbb{Z}\;.\]
i.e. \(n|(c-a)\) and \(n|(d-b)\), i.e. \((a,b)\equiv(c,d)\bmod n\).
Consider the subgroup of automorphisms
\[\mathsf{l}_{n}=\left\langle T_{(a,b)}\,|\,a\equiv b\equiv 0\bmod n\right\rangle\]
and let \(W_{n}=\mathbb{C}^{2}\times\mathcal{S}/\mathsf{l}_{n}\); the projection \(\mathfrak{e}_{n}:\mathbb{C}^{2}\times\mathcal{S}\to W_{n}\) is a covering map. As \(\mathsf{l}_{n}\) is a subgroup of \(\operatorname{Aut}_{\mathfrak{e}}\) and as \(\mathfrak{e}\) is a Galois covering, we can factor \(\mathfrak{e}\) via \(\mathfrak{e}_{n}\): we consider the map \(\mathfrak{s}_{n}:W_{n}\to(\mathbb{C}^{2}\setminus\mathcal{W})\times\mathcal{S}\) such that \(\mathfrak{e}=\mathfrak{s}_{n}\circ\mathfrak{e}_{n}\); \(\mathfrak{s}_{n}\) is in fact a covering map of degree \(n^{2}\) (the index of \(\mathsf{l}_{n}\) in \(\operatorname{Aut}_{\mathfrak{e}}\)).
As \(\operatorname{Aut}_{\rho}\) does not intersect \(\operatorname{Aut}_{\mathfrak{e}}\), we can induce a map \(\tilde{\rho}:W_{n}^{\prime}\to\Omega_{n}\) on a suitable open set \(W_{n}^{\prime}\subseteq W_{n}\) such that
* \(\tilde{\rho}\) is a double cover
* there is a (unique) covering map \(\sigma_{n}:\Omega_{n}\to(\mathbb{C}\otimes\mathbb{H})\setminus(V_{-1}\cup V_ {\infty})\) with \(\sigma_{n}\circ\tilde{\rho}=\rho\circ\mathfrak{s}_{n}\)
* there is a (unique) covering map \(\varepsilon_{n}:\Omega\to\Omega_{n}\) such that \(\varepsilon=\sigma_{n}\circ\varepsilon_{n}\).
It is easy to notice that \(\mathfrak{e}_{n}((u_{0},u_{1}),s)=((u_{0}/n,u_{1}/n),s)\), \(\varepsilon_{n}(z)=\varepsilon(z/n)\), \(\tilde{\rho}=\rho\), \(\sigma_{n}(z)=z^{n}\) (and the induced \(\mathfrak{s}_{n}\)) satisfy the previous requirements. Therefore, \(W_{n}^{\prime}\) and \(\Omega_{n}\) can be realized as subdomains of \(\mathbb{C}^{2}\times\mathcal{S}\) and \(\mathbb{C}\otimes\mathbb{H}\) respectively.
Carrying out the computations, one recovers the definitions given in [5].
Finally, we compute the monodromy of \(\mathfrak{s}_{n}\). By simple arithmetic, the group \(\mathbb{Z}_{n}\times\mathbb{Z}_{n}=\mathbb{Z}^{2}/\mathsf{l}_{n}\) is generated by the classes of \((1,1)\) and \((1,-1)\) when \(n\) is odd and by the classes of \((1,1)\), \((1,-1)\), and \((1,0)\) when \(n\) is even.
Given \(a,b\in\{0,\ldots,n-1\}\)
\[\mathfrak{e}_{n}(T_{(a,b)}((u_{0},u_{1}),s))=\mathfrak{e}\left(\left(\frac{1}{n}u_{0}+\frac{a+b}{n}\imath\pi,\frac{1}{n}u_{1}+\frac{a-b}{n}\pi\right),s\right)=\]
\[=\left(\left(e^{\frac{u_{0}}{n}}e^{\frac{a+b}{n}\imath\pi}\cos\left(\frac{u_{1}}{n}+\frac{a-b}{n}\pi\right),e^{\frac{u_{0}}{n}}e^{\frac{a+b}{n}\imath\pi}\sin\left(\frac{u_{1}}{n}+\frac{a-b}{n}\pi\right)\right),s\right)=\xi\cdot\left(A_{\eta}\frac{(u_{0},u_{1})}{n},s\right)\]
where \(\xi=e^{\frac{a+b}{n}\imath\pi}\) is an \(n\)-th root of unity and
\[A_{\eta}=\begin{pmatrix}\cos(\frac{a-b}{n}\pi)&-\sin(\frac{a-b}{n}\pi)\\ \sin(\frac{a-b}{n}\pi)&\cos(\frac{a-b}{n}\pi)\end{pmatrix}\]
is the \(2\times 2\) matrix representation of the complex number \(\eta=e^{\frac{a-b}{n}\imath\pi}\).
So, for \(n\) odd, the generators of the deck transformations of \(\mathfrak{s}_{n}\) are \(\xi\) (corresponding to \([(1,1)]\)) and \(A_{\eta}\) (corresponding to \([(1,-1)]\)) with \(\xi\), \(\eta\) primitive \(n\)-th roots of \(1\); for \(n\) even we have these two and \(\xi\cdot A_{\eta}\) (corresponding to \([(1,0)]\)) with \(\xi\), \(\eta\) primitive \(2n\)-th roots of \(1\).
## 7. Product of two \(*\)-exponentials
In this section we will give sufficient conditions for the product of two exponentials to be an exponential. This topic clearly deals with the so-called Baker-Campbell-Hausdorff (BCH) formula for the \(*\)-exponential.
In its more general formulation the BCH formula states that, whenever it exists, the product \(e^{X}e^{Y}\) equals \(e^{Z}\), where
\[Z=X+Y+\frac{1}{2}[X,Y]+\frac{1}{12}[X,[X,Y]]-\frac{1}{12}[Y,[X,Y]]+\cdots \tag{6}\]
Clearly, depending on the context, it is possible to give sufficient conditions for the sum on the right hand side of Formula (6) to be convergent (see for instance [6, Proposition 2.2] for Banach algebras or [7] for a general overview). In the context of quaternions, the situation is much more clear: if \(p=p_{0}+p_{v}\) and \(q=q_{0}+q_{v}\), with \(p_{v}\neq 0\neq q_{v}\), then
\[\exp(p)\exp(q)= \left[e^{p_{0}}\left(\cos|p_{v}|+\sin|p_{v}|\frac{p_{v}}{|p_{v}|} \right)\right]\left[e^{q_{0}}\left(\cos|q_{v}|+\sin|q_{v}|\frac{q_{v}}{|q_{v}| }\right)\right]\] \[= e^{p_{0}+q_{0}}\left[\cos|p_{v}|\cos|q_{v}|-\sin|p_{v}|\sin|q_{ v}|\langle\frac{p_{v}}{|p_{v}|},\frac{q_{v}}{|q_{v}|}\rangle+\right.\] \[+\left.\cos|p_{v}|\sin|q_{v}|\frac{q_{v}}{|q_{v}|}+\cos|q_{v}|\sin |p_{v}|\frac{p_{v}}{|p_{v}|}+\sin|p_{v}|\sin|q_{v}|\frac{p_{v}}{|p_{v}|}\wedge \frac{q_{v}}{|q_{v}|}\right]\] \[= \exp(w_{0}+w_{v}),\]
where \(w_{0}=p_{0}+q_{0}\) and \(w_{v}\) solves
\[\begin{cases}\cos|w_{v}|=\cos|p_{v}|\cos|q_{v}|-\sin|p_{v}|\sin|q_{v}|\langle \frac{p_{v}}{|p_{v}|},\frac{q_{v}}{|q_{v}|}\rangle,\\ \sin|w_{v}|\frac{w_{v}}{|w_{v}|}=\cos|p_{v}|\sin|q_{v}|\frac{q_{v}}{|q_{v}|}+ \cos|q_{v}|\sin|p_{v}|\frac{p_{v}}{|p_{v}|}+\sin|p_{v}|\sin|q_{v}|\frac{p_{v}}{ |p_{v}|}\wedge\frac{q_{v}}{|q_{v}|}.\end{cases}\]
Notice that, in this case, the existence of the solution is guaranteed because \(\exp(p)\exp(q)\neq 0\) and hence it is possible to define \(w\).
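For readers who want to check the previous closed-form expression concretely, the following short Python sketch compares \(\exp(p)\exp(q)\), computed directly via the Hamilton product, with the right-hand side above. It is only an illustrative numerical check (with randomly generated quaternions, and with the wedge \(p_{v}\wedge q_{v}\) taken to be the usual cross product of the imaginary parts), not part of the original argument.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions represented as (scalar, 3-vector) pairs
    p0, pv = p
    q0, qv = q
    return (p0 * q0 - np.dot(pv, qv), p0 * qv + q0 * pv + np.cross(pv, qv))

def qexp(q):
    # quaternionic exponential e^{q0}(cos|qv| + sin|qv| qv/|qv|), assuming qv != 0
    q0, qv = q
    n = np.linalg.norm(qv)
    return (np.exp(q0) * np.cos(n), np.exp(q0) * np.sin(n) * qv / n)

rng = np.random.default_rng(0)
p = (rng.normal(), rng.normal(size=3))
q = (rng.normal(), rng.normal(size=3))

# left-hand side: the product exp(p) exp(q) computed directly
lhs = qmul(qexp(p), qexp(q))

# right-hand side: the closed-form expansion in terms of cos, sin, <.,.> and the wedge
p0, pv = p
q0, qv = q
ap, aq = np.linalg.norm(pv), np.linalg.norm(qv)
up, uq = pv / ap, qv / aq
scalar = np.cos(ap) * np.cos(aq) - np.sin(ap) * np.sin(aq) * np.dot(up, uq)
vector = (np.cos(ap) * np.sin(aq) * uq + np.cos(aq) * np.sin(ap) * up
          + np.sin(ap) * np.sin(aq) * np.cross(up, uq))
rhs = (np.exp(p0 + q0) * scalar, np.exp(p0 + q0) * vector)

print(np.isclose(lhs[0], rhs[0]), np.allclose(lhs[1], rhs[1]))  # True True
```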
Now, as \(\mathbb{H}\) and \(\mathbb{C}\otimes\mathbb{H}\) have the same algebraic structure, the same equalities hold true for the complexification. However, recall that the euclidean norm must be translated into its purely algebraic form, i.e., if \(u=u_{0}+\underline{u},u^{\prime}=u_{0}^{\prime}+\underline{u^{\prime}}\in \mathbb{C}\otimes\mathbb{H}\), then
\[\exp(u)\exp(u^{\prime})= e^{u_{0}+u_{0}^{\prime}}\left[\cos\sqrt{\underline{u}^{2}}\cos \sqrt{\underline{u^{\prime}}^{2}}-\sin\sqrt{\underline{u}^{2}}\sin\sqrt{ \underline{u^{\prime}}^{2}}\langle\frac{u}{\sqrt{\underline{u}^{2}}},\frac{ \underline{u^{\prime}}}{\sqrt{\underline{u^{\prime}}^{2}}}\rangle+\right.\] \[+\left.\cos\sqrt{\underline{u}^{2}}\sin\sqrt{\underline{u^{\prime }}^{2}}\frac{\underline{u}^{\prime}}{\sqrt{\underline{u^{\prime}}^{2}}}+ \cos\sqrt{\underline{u^{\prime}}^{2}}\sin\sqrt{\underline{u}^{2}}\frac{ \underline{u}}{\sqrt{\underline{u}^{2}}}+\right.\] \[+\left.\sin\sqrt{\underline{u}^{2}}\sin\sqrt{\underline{u^{ \prime}}^{2}}\frac{\underline{u}}{\sqrt{\underline{u}^{2}}}\wedge\frac{ \underline{u^{\prime}}}{\sqrt{\underline{u^{\prime}}^{2}}}\right].\]
In this case, the solution \(p=p_{0}+\underline{p}\in\mathbb{C}\otimes\mathbb{H}\) of the equation \(\exp(u)\exp(u^{\prime})=\exp(p)\) exists provided \(\exp(u)\exp(u^{\prime})\in\mathbb{C}\otimes\mathbb{H}\setminus(V_{-1}\cup V_{ \infty})\). From the previous computations and also from [1, Theorem 4.14], we already know that if \(u\) commutes with \(u^{\prime}\) or, if \(\underline{u}^{2}=\pi^{2}n^{2}\) and \(\underline{u^{\prime}}^{2}=\pi^{2}m^{2}\) with \(n\) and \(m\) satisfying a certain parity condition, then \(\exp(u)\exp(u^{\prime})=\exp(u+u^{\prime})\). Therefore, we are interested in understanding when these conditions are satisfied excluding the cases listed in the already mentioned result [1, Theorem 4.14]. In order to proceed, we need a couple of preliminary lemmas.
**Lemma 7.1**.: _Let \(\underline{z}\in\mathbb{C}\otimes\mathsf{Im}(\mathbb{H})\) be such that \(\underline{z}^{2}\neq 0\), then, for any \(\underline{w}\in\mathbb{C}\otimes\mathsf{Im}(\mathbb{H})\), there exist \(w_{1}\in\mathbb{C}\) and \(w_{\perp}\in\mathbb{C}\otimes\mathsf{Im}(\mathbb{H})\), such that_
\[\underline{w}=w_{1}\underline{z}+w_{\perp},\qquad\text{and}\qquad\langle\underline{z},w_{\perp}\rangle=0.\]
_Moreover, it holds_
\[\underline{\left(\underline{z}\wedge w_{\perp}\right)}^{2}=\underline{z}^{2} \underline{w_{\perp}}^{2}.\]
Proof.: By standard linear algebra, it is sufficient to define
\[w_{1}=\frac{\langle\underline{w},\underline{z}\rangle}{\langle\underline{z}, \underline{z}\rangle}=\frac{\langle\underline{w},\underline{z}\rangle}{ \underline{z}^{2}}.\]
Now, if \(z,w\in\mathbb{C}\otimes\mathbb{H}\), then \((zw)_{0}^{2}+\underline{(zw)}^{2}=(z_{0}^{2}+\underline{z}^{2})(w_{0}^{2}+\underline{w}^{2})\) (see Formula (2)), therefore, if \(z_{0}^{2}+\underline{z}^{2}\neq 0\neq w_{0}^{2}+\underline{w}^{2}\), then \((zw)_{0}^{2}+\underline{(zw)}^{2}\neq 0\). Hence, the following result completes the characterization we are looking for.
**Theorem 7.2**.: _Let \(z,w\in\mathbb{C}\otimes\mathbb{H}\setminus(V_{-1}\cup V_{\infty})\), then, \(\underline{(zw)}^{2}=0\) if and only if_
\[w_{0}=-z_{0}w_{1}\pm\sqrt{-1}\sqrt{\frac{z_{0}^{2}+\underline{z}^{2}}{ \underline{z}^{2}}}\sqrt{\underline{w_{\perp}}^{2}},\]
_where \(w_{1}\) and \(w_{\perp}\) are the elements defined in Lemma 7.1._
Proof.: First of all, thanks to Lemma 7.1, we can write \(\underline{w}=w_{1}\underline{z}+w_{\perp}\), with \(w_{1}\in\mathbb{C}\) and \(\langle w_{\perp},\underline{z}\rangle=0\). Therefore,
\[zw =(z_{0}+\underline{z})(w_{0}+w_{1}\underline{z}+w_{\perp})\] \[=(z_{0}w_{0}-\underline{z}^{2}w_{1})+(z_{0}w_{1}\underline{z}+z_ {0}w_{\perp}+w_{0}\underline{z}+\underline{z}\wedge w_{\perp}),\]
and so
\[\underline{(zw)}^{2} =\underline{(z_{0}w_{1}\underline{z}+z_{0}w_{\perp}+w_{0} \underline{z}+\underline{z}\wedge w_{\perp})}^{2}\] \[=\underline{(z_{0}w_{1}\underline{z}+z_{0}w_{\perp}+w_{0} \underline{z})}^{2}+\underline{(\underline{z}\wedge w_{\perp})}^{2}\] \[=\underline{(z_{0}w_{1}+w_{0})}^{2}\underline{z}^{2}+(z_{0}^{2}+ \underline{z}^{2})\underline{w_{\perp}}^{2}.\]
Therefore, \(\underline{(zw)}^{2}=0\) if and only if \((z_{0}w_{1}+w_{0})^{2}\underline{z}^{2}+(z_{0}^{2}+\underline{z}^{2}) \underline{w_{\perp}}^{2}=0\), which is equivalent to
\[(z_{0}w_{1}+w_{0})^{2}=-\frac{z_{0}^{2}+\underline{z}^{2}}{\underline{z}^{2}} \underline{w_{\perp}}^{2},\]
and hence, we get the thesis.
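As a concrete illustration (not part of the proof), the following Python sketch verifies Theorem 7.2 numerically. It identifies an element of \(\mathbb{C}\otimes\mathbb{H}\) with a pair (complex scalar, complex 3-vector), uses the complex-bilinear square \(\underline{v}^{2}=v_{1}^{2}+v_{2}^{2}+v_{3}^{2}\), builds \(w_{\perp}\) and \(w_{0}\) as in the statement from randomly chosen data, and checks that \(\underline{(zw)}^{2}\) vanishes up to rounding.

```python
import numpy as np

rng = np.random.default_rng(1)

def cq_mul(p, q):
    # product in C (x) H, written on (complex scalar, complex 3-vector) pairs
    p0, pv = p
    q0, qv = q
    return (p0 * q0 - pv @ qv, p0 * qv + q0 * pv + np.cross(pv, qv))

def vec_sq(v):
    # complex-bilinear "square" of a vector part: v1^2 + v2^2 + v3^2 (no conjugation)
    return v @ v

def rand_c(shape=None):
    return rng.normal(size=shape) + 1j * rng.normal(size=shape)

z0, zv = rand_c(), rand_c(3)      # generic z: z0^2 + zv^2 != 0 and zv^2 != 0 almost surely
w1 = rand_c()

# w_perp orthogonal to zv for the bilinear pairing <a, b> = a . b
v = rand_c(3)
w_perp = v - (v @ zv) / (zv @ zv) * zv

# w0 chosen as in Theorem 7.2 (either sign of the square roots works)
w0 = -z0 * w1 + 1j * np.sqrt((z0**2 + vec_sq(zv)) / vec_sq(zv)) * np.sqrt(vec_sq(w_perp))

zw = cq_mul((z0, zv), (w0, w1 * zv + w_perp))
print(abs(vec_sq(zw[1])))         # ~ 1e-15: the vector part of zw is "null"
```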
The previous result can be applied to slice regular functions recalling their decomposition in "scalar-vector" parts. We start by rewriting Lemma 7.1 in terms of slice regular functions.
**Corollary 7.3**.: _Let \(f=f_{0}+f_{v}:U\to\mathbb{H}\) be a slice regular function such that \(f_{v}^{s}\not\equiv 0\). Then, for any slice regular function \(g=g_{0}+g_{v}\), there exist two slice regular functions \(g_{1},g_{\perp}\), with \(g_{1}\) slice preserving, such that_
\[g=g_{1}f_{v}+g_{\perp},\qquad\text{and}\qquad\langle f_{v},g_{\perp}\rangle=0.\]
_Moreover, it holds_
\[(f_{v}\wedge g_{\perp})^{s}=f_{v}^{s}g_{\perp}^{s}.\]
Proof.: Exactly as in the previous result, for any \(q\in U\) such that \(f_{v}^{s}(q)\neq 0\) it is sufficient to define \(g_{1}:=\frac{\langle g_{v},f_{v}\rangle}{f_{v}^{s}}\). Assume now that \(q_{0}=\alpha_{0}+i\beta_{0}\not\in\mathbb{R}\) and \(f_{v}^{s}(q_{0})=0\). Define \(D_{q_{0}}(\epsilon)\) as the
disk in \(U\cap\mathbb{C}_{i}\) centered at \(q_{0}\) with radius \(\epsilon\), such that \(\overline{D_{q_{0}}(\epsilon)}\subset U\cap\mathbb{C}_{i}\) and \(f_{v}^{s}(q)\neq 0\) for any \(q\in\overline{D_{q_{0}}(\epsilon)}\setminus\{q_{0}\}\). Then we can define \(g_{1}(q_{0})\) by means of the Cauchy formula
\[g_{1}(q_{0}):=\frac{1}{2\pi i}\int_{\partial D_{q_{0}}(\epsilon)}\frac{g_{1}( \alpha+i\beta)}{\alpha+i\beta-q_{0}}d(\alpha+i\beta).\]
By repeating the same construction at \(q_{0}^{c}\) and using the Representation Formula, we obtain the thesis. The same argument can also be performed at \(q_{0}\in\mathbb{R}\).
**Corollary 7.4**.: _Let \(f,g\) be two slice regular functions defined on \(U\) such that \(f^{s}\neq 0\neq g^{s}\) and \(f_{v}^{s}\neq 0\neq g_{v}^{s}\), then, \((f*g)_{v}^{s}(q)=0\) if and only if_
\[(f_{0}(q)g_{1}(q)+g_{0}(q))^{2}+\frac{f^{s}(q)}{f_{v}^{s}(q)}g_{\perp}^{s}(q)=0 \tag{7}\]
_where \(g_{1}\) and \(g_{\perp}\) are the functions defined in Corollary 7.3. In particular, if \(U\cap\mathbb{R}\neq\emptyset\), then there exists an open neighborhood \(U^{\prime}\subseteq U\) of \(U\cap\mathbb{R}\) such that \((f*g)_{v|U^{\prime}}^{s}\neq 0\)._
Proof.: The first part of the statement is a direct consequence of Theorem 7.2. For the second part, assume that \(x\in U\cap\mathbb{R}\), then Formula (7) evaluated at \(x\) gives no solutions since the left hand side is strictly positive. Therefore, there exists an open neighborhood \(U^{\prime}\) of \(U\cap\mathbb{R}\) where the function \((f_{0}g_{1}+g_{0})^{2}+\frac{f^{s}}{f_{v}^{s}}g_{\perp}^{s}\) is never vanishing, and hence \((f*g)_{v}^{s}\neq 0\) on \(U^{\prime}\).
Thanks to the previous two corollaries we can reverse engineer several examples of slice regular functions \(f,g\) with \(f^{s}\neq 0\neq g^{s}\) and \(f_{v}^{s}\neq 0\neq g_{v}^{s}\) but \((f*g)_{v}^{s}(q)=0\).
**Example 7.1**.: Assume for simplicity that \(U\cap\mathbb{R}=\emptyset\). Then, given \(f\) satisfying the hypotheses of the previous corollary, we define \(g=g_{0}+g_{v}=g_{0}+g_{1}f_{v}+g_{\perp}\) as follows:
\[g_{0}=-f_{0}g_{1}\pm\mathcal{J}\sqrt{\frac{f^{s}}{f_{v}^{s}}}\sqrt{g_{\perp}^ {s}},\]
with \(g_{1}\) and \(g_{\perp}\) such that \(g_{v}^{s}=g_{1}^{2}f_{v}^{s}+g_{\perp}^{s}\neq 0\) and \(g_{0}^{2}+g_{1}^{2}f_{v}^{s}+g_{\perp}^{s}\neq 0\). Clearly if \(g_{\perp}^{s}\equiv 0\), then it is sufficient to take \(g=g_{1}(-f_{0}+f_{v})+g_{\perp}\). For instance, if \(f\) is \(\mathbb{C}_{i}\)-preserving, i.e. \(f=f_{0}+f_{1}i\), with \(f_{0}^{2}+f_{1}^{2}\neq 0\neq f_{1}\), then, if \(g=-f^{c}+\ell_{+,i}*j\), we have that \(f*g=-f^{s}+f*\ell_{+,i}*j\), \((f*g)_{v}=f*\ell_{+,i}*j\) and, therefore \((f*g)_{v}^{s}\equiv 0\).
Another explicit case is when \(f_{0}\equiv 0\). In this case, given \(f=f_{v}\), it is sufficient to consider \(g=\pm\mathcal{J}\sqrt{g_{\perp}^{s}}+g_{1}f_{v}+g_{\perp}\), with \(g^{s}=g_{1}^{2}f_{v}^{s}\neq 0\), i.e. \(g_{1}\neq 0\), and \(g_{1}^{2}f_{v}^{s}+g_{\perp}^{s}\neq 0\).
Clearly, the previous example allows us to construct functions \(f\) and \(g\) such that \((f*g)_{v}^{s}\equiv 0\), while the condition in Corollary 7.4 is given pointwise.
At this point we are able to give sufficient conditions on \(f\) and \(g\) for \(\exp_{*}(f)*\exp_{*}(g)\) to be an exponential function. As we said, this happens if \((\exp_{*}(f)*\exp_{*}(g))_{v}^{s}(q)\neq 0\) for all \(q\). Clearly, we can separate the "scalar" parts of \(\exp_{*}(f)\) and of \(\exp_{*}(g)\) and only consider \((\exp_{*}(f)_{v}*\exp_{*}(g)_{v})_{v}^{s}\).
We get the following result.
**Corollary 7.5**.: _Let \(f,g:U\to\mathbb{H}\) be slice regular functions such that \(f_{v}\) does not commute with \(g_{v}\) and, for all \(q\in U\), \(f_{v}^{s}(q),g_{v}^{s}(q)\not\in\{\pi^{2}n^{2}\,|\,n\in\mathbb{Z}\}\). Write \(g_{v}=g_{1}\frac{f_{v}}{\sqrt{f_{v}^{s}}}+g_{\perp}\). If for any \(q\in U\)_
\[(g_{1}\cos\sqrt{f_{v}^{s}}\sin\sqrt{g_{v}^{s}}+\cos\sqrt{g_{v}^{s}})^{2}(\sin \sqrt{f_{v}^{s}})^{2}+(\sin\sqrt{g_{v}^{s}})^{2}g_{\perp}^{s}\neq 0,\]
_then there exists a slice regular function \(h:U\to\mathbb{H}\) such that_
\[\exp_{*}(f)*\exp_{*}(g)=\exp_{*}(h).\]
Notice that in the last result the function \(h=h_{0}+h_{v}\) is determined by \(h_{0}=f_{0}+g_{0}\) and \(h_{v}\) solves
\[\begin{cases}\cos\sqrt{h_{v}^{s}}=\cos\sqrt{f_{v}^{s}}\cos\sqrt{g_{v}^{s}}-\sin\sqrt{f_{v}^{s}}\sin\sqrt{g_{v}^{s}}\langle\frac{f_{v}}{\sqrt{f_{v}^{s}}},\frac{g_{v}}{\sqrt{g_{v}^{s}}}\rangle,\\ \sin\sqrt{h_{v}^{s}}\frac{h_{v}}{\sqrt{h_{v}^{s}}}=\cos\sqrt{f_{v}^{s}}\sin\sqrt{g_{v}^{s}}\frac{g_{v}}{\sqrt{g_{v}^{s}}}+\cos\sqrt{g_{v}^{s}}\sin\sqrt{f_{v}^{s}}\frac{f_{v}}{\sqrt{f_{v}^{s}}}+\sin\sqrt{f_{v}^{s}}\sin\sqrt{g_{v}^{s}}\frac{f_{v}}{\sqrt{f_{v}^{s}}}\wedge\frac{g_{v}}{\sqrt{g_{v}^{s}}}.\end{cases}\]
## 8. Slice derivative of the \(*\)-exponential
In this section we will provide a formula for the slice derivative of \(\exp_{*}(f)\), \(f\) being a slice regular function. As in the previous section, let us begin with a short description of the general algebraic case. If \(X\) is a matrix, the differential of \(e^{X}\) is given by the following formula
\[e^{-X}de^{X}=dX-\frac{1}{2!}\left[X,dX\right]+\frac{1}{3!}[X,[X,dX]]-\frac{1}{ 4!}[X,[X,[X,dX]]]+\cdots \tag{8}\]
Assume now that \(q:[0,1]\to\mathbb{H}\) is a differentiable curve and write \(\dot{q}=\frac{dq}{dt}\). Therefore, Formula (8) can be written in the quaternionic setting as
\[e^{-q(t)}\frac{de^{q(t)}}{dt} =\dot{q}(t)-\frac{1}{2!}\left[q(t),\dot{q}(t)\right]+\frac{1}{3! }[q(t),[q(t),\dot{q}(t)]]-\frac{1}{4!}[q(t),[q(t),[q(t),\dot{q}(t)]]]+\cdots\] \[=\dot{q}(t)+\sum_{m=2}^{\infty}\frac{(-1)^{m-1}}{m!}[q^{(m-1)}, \dot{q}](t),\]
where \([q^{(n)},\dot{q}](t)\) stands for
\[[\underbrace{q(t)[q(t)[\ldots[q(t)}_{n\text{ times}},\dot{q}(t)]]]].\]
Therefore,
\[e^{-q(t)}\frac{de^{q(t)}}{dt}=\dot{q}(t)-\sum_{h=1}^{\infty}\frac{1}{(2h)!}[q^ {(2h-1)},\dot{q}](t)+\sum_{h=1}^{\infty}\frac{1}{(2h+1)!}[q^{(2h)},\dot{q}](t).\]
Now, since for any \(p,q\in\mathbb{H}\) we have \([p,q]=2p\wedge q=2p_{v}\wedge q_{v}\), we obtain
\[[q,\dot{q}]=2q_{v}\wedge\dot{q}_{v},\\ [q,[q,\dot{q}]]=2^{2}[\langle q_{v},\dot{q}_{v}\rangle q_{v}-|q_{v}|^{2}\dot{q}_{v}],\\ [q^{(3)},\dot{q}]=2^{3}(-1)|q_{v}|^{2}q_{v}\wedge\dot{q}_{v},\\ [q^{(4)},\dot{q}]=2^{4}(-1)|q_{v}|^{2}[\langle q_{v},\dot{q}_{v}\rangle q_{v}-|q_{v}|^{2}\dot{q}_{v}],\\ [q^{(5)},\dot{q}]=2^{5}(-1)^{2}(|q_{v}|^{2})^{2}q_{v}\wedge\dot{q}_{v},\\ [q^{(6)},\dot{q}]=2^{6}(-1)^{2}(|q_{v}|^{2})^{2}[\langle q_{v},\dot{q}_{v}\rangle q_{v}-|q_{v}|^{2}\dot{q}_{v}],\\ \ldots\]
where, in order to simplify the notation, we dropped the dependence on the parameter \(t\). Hence,
\[e^{-q}\frac{de^{q}}{dt}= \dot{q}-\left[\sum_{h=1}^{\infty}\frac{(-1)^{h-1}2^{2h-1}}{(2h)!} |q_{v}|^{2(h-1)}\right]q_{v}\wedge\dot{q}_{v}\] \[+\left[\sum_{h=1}^{\infty}\frac{(-1)^{h-1}2^{2h}}{(2h+1)!}|q_{v}| ^{2(h-1)}\right][\langle q_{v},\dot{q}_{v}\rangle q_{v}-|q_{v}|^{2}\dot{q}_{v}]\] \[= \dot{q}-\frac{\sin^{2}(|q_{v}|)}{|q_{v}|^{2}}q_{v}\wedge\dot{q}_ {v}+\frac{|q_{v}|-\cos(|q_{v}|)\sin(|q_{v}|)}{|q_{v}|^{3}}\left[\langle q_{v},\dot{q}_{v}\rangle q_{v}-|q_{v}|^{2}\dot{q}_{v}\right]\] \[= \dot{q}-\frac{1-\cos(2|q_{v}|)}{2|q_{v}|}\frac{q_{v}}{|q_{v}|} \wedge\dot{q}_{v}+\left[1-\frac{\sin(2|q_{v}|)}{2|q_{v}|}\right]\left[\left< \frac{q_{v}}{|q_{v}|},\dot{q}_{v}\right>\frac{q_{v}}{|q_{v}|}-\dot{q}_{v}\right]\] \[= \dot{q}+\left[1-\frac{\sin(2|q_{v}|)}{2|q_{v}|}\right]\left[ \left<\frac{q_{v}}{|q_{v}|},\dot{q}_{v}\right>\frac{q_{v}}{|q_{v}|}-\dot{q}_{v }\right]-\frac{1-\cos(2|q_{v}|)}{2|q_{v}|}\frac{q_{v}}{|q_{v}|}\wedge\dot{q}_ {v}.\]
Clearly, if \(q_{v}=0\), we obtain the usual formula. Moreover, if \(q_{v}\neq 0\) and \(\dot{q}_{v}\) commutes with \(q_{v}\), i.e. there exists a real valued function \(\alpha\) such that \(\dot{q}_{v}=\alpha q_{v}\), then
\[e^{-q}\frac{de^{q}}{dt}=\dot{q},\]
as expected.
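The resummation above can also be checked numerically. The following Python sketch (with a made-up curve \(q(t)\), chosen so that \(q_{v}(t)\neq 0\), and a central finite difference for the derivative; purely illustrative) compares \(e^{-q(t)}\frac{de^{q(t)}}{dt}\) with the closed form obtained above.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product on length-4 arrays (scalar component first)
    p0, pv, q0, qv = p[0], p[1:], q[0], q[1:]
    return np.concatenate(([p0 * q0 - pv @ qv], p0 * qv + q0 * pv + np.cross(pv, qv)))

def qexp(q):
    # quaternionic exponential, assuming the vector part is nonzero
    q0, qv = q[0], q[1:]
    n = np.linalg.norm(qv)
    return np.exp(q0) * np.concatenate(([np.cos(n)], np.sin(n) * qv / n))

# a made-up differentiable curve and its derivative
q_of = lambda t: np.array([np.sin(t), np.cos(t), t**2, 0.3 * t])
qdot_of = lambda t: np.array([np.cos(t), -np.sin(t), 2 * t, 0.3])

t, h = 0.7, 1e-6
# left-hand side: e^{-q(t)} times a central-difference approximation of d/dt e^{q(t)}
lhs = qmul(qexp(-q_of(t)), (qexp(q_of(t + h)) - qexp(q_of(t - h))) / (2 * h))

# right-hand side: the closed form obtained from the series resummation
q, qd = q_of(t), qdot_of(t)
qv, qdv = q[1:], qd[1:]
x = np.linalg.norm(qv)
u = qv / x
rhs = qd.copy()
rhs[1:] += (1 - np.sin(2 * x) / (2 * x)) * ((u @ qdv) * u - qdv) \
           - (1 - np.cos(2 * x)) / (2 * x) * np.cross(u, qdv)
print(np.allclose(lhs, rhs, atol=1e-5))  # True
```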
If we write \(\dot{q}_{v}=q_{1}\frac{q_{v}}{|q_{v}|}+q_{\perp}\), then we obtain
\[e^{-q}\frac{de^{q}}{dt} =\dot{q}-\frac{1-\cos(2|q_{v}|)}{2|q_{v}|}\frac{q_{v}}{|q_{v}|}\wedge q_{\perp}-\left[1-\frac{\sin(2|q_{v}|)}{2|q_{v}|}\right]q_{\perp} \tag{9}\] \[=\dot{q}_{0}+q_{1}\frac{q_{v}}{|q_{v}|}+\frac{\sin(2|q_{v}|)}{2|q_{v}|}q_{\perp}-\frac{1-\cos(2|q_{v}|)}{2|q_{v}|}\frac{q_{v}}{|q_{v}|}\wedge q_{\perp}. \tag{10}\]
Now, exactly as we have done before, this last relation extends to the complexification of \(\mathbb{H}\), where in place of a curve \(q:[0,1]\to\mathbb{H}\) we consider a complex curve \(F:D\subset\mathbb{C}\to\mathbb{C}\otimes\mathbb{H}\), the derivative with respect to \(t\) is changed into the derivative with respect to \(z\in D\) and the usual exponential function is changed into \(\varepsilon\). After these modifications we have the following formula.
**Proposition 8.1**.: _Let \(f:U\to\mathbb{H}\) be a slice regular function. Then we have the following formula_
\[\exp_{*}(f)^{-*}*\partial_{c}(\exp_{*}(f))= \partial_{c}f+\left[\sum_{h=1}^{\infty}\frac{(-1)^{h-1}2^{2h}}{(2h+1)!}(f_{v}^{s})^{(h-1)}\right][\langle f_{v},(\partial_{c}f)_{v}\rangle_{*}f_{v}-f_{v}^{s}(\partial_{c}f)_{v}]+\] \[-\left[\sum_{h=1}^{\infty}\frac{(-1)^{h-1}2^{2h-1}}{(2h)!}(f_{v}^{s})^{(h-1)}\right]f_{v}\wedge(\partial_{c}f)_{v}.\]
From this proposition it is possible to derive some convenient corollaries.
**Corollary 8.2**.: _Let \(f:U\to\mathbb{H}\) be a slice regular function and let \(q_{0}\in U\) be any point. If \(f_{v}^{s}(q_{0})=0\), then_
\[\Big{(}\exp_{*}(f)^{-*}*\partial_{c}(\exp_{*}(f))\Big{)}(q_{0})=(\partial_{c}f)(q_{0})-(f_{v}\wedge(\partial_{c}f)_{v})(q_{0})+\frac{2}{3}\Big{(}\langle f_{v},(\partial_{c}f)_{v}\rangle_{*}\Big{)}(q_{0})f_{v}(q_{0})\]
In the case when \(f_{v}^{s}\) is never-vanishing, many equivalent formulas can be derived.
**Corollary 8.3**.: _Let \(f:U\to\mathbb{H}\) be a slice regular function such that \(f_{v}^{s}\) is never-vanishing, then we have_
\[\partial_{c}(\exp_{*}(f))=\exp_{*}(f)* \left\{\partial_{c}f+\left[1-\frac{\sin(2\sqrt{f_{v}^{s}})}{2\sqrt{f_{v}^{s}}}\right]\left[\left\langle\frac{f_{v}}{\sqrt{f_{v}^{s}}},(\partial_{c}f)_{v}\right\rangle\frac{f_{v}}{\sqrt{f_{v}^{s}}}-(\partial_{c}f)_{v}\right]+\right.\] \[\left.-\frac{1-\cos(2\sqrt{f_{v}^{s}})}{2\sqrt{f_{v}^{s}}}\frac{f_{v}}{\sqrt{f_{v}^{s}}}\wedge(\partial_{c}f)_{v}\right\}\]
As said before, the formula contained in the last corollary is just one of the possible generalizations of the formulas we have seen in this section. In the same spirit, it is clearly possible to generalize Formula (9) or (10).
**Remark 8.1**.: Many of the previous formulas can also be related to the function \(\nu:\mathbb{H}\to\mathbb{H}\) introduced in [4, Definition 2.16] as
\[\nu(q)=\sum_{m\in\mathbb{N}}\frac{(-1)^{m}q^{2m+1}}{(2m+1)!},\]
and noticing that \(\nu(q^{2})q=\sin(q)\).
**Remark 8.2**.: Exactly as in the case of a quaternionic curve, even in this case the formula for the slice derivative of the \(*\)-exponential of a slice regular function simplifies to the usual one when \((\partial_{c}f)_{v}\) and \(f_{v}\) commute, i.e., excluding the trivial cases, when there exists a slice preserving function \(\gamma\) such that
\[(\partial_{c}f)_{v}=\gamma f_{v}.\]
Examples of functions satisfying this relation are slice constant functions (i.e. functions with everywhere vanishing slice derivative), \(\mathbb{C}_{I}\)-preserving functions (for any \(I\in\mathbb{S}\)), or functions of the form \(f_{v}=\exp(\gamma(q)q)c\), where \(c\) is any purely imaginary quaternion.
|
2302.01326
|
Federated Analytics: A survey
|
Federated analytics (FA) is a privacy-preserving framework for computing data
analytics over multiple remote parties (e.g., mobile devices) or silo-ed
institutional entities (e.g., hospitals, banks) without sharing the data among
parties. Motivated by the practical use cases of federated analytics, we follow
a systematic discussion on federated analytics in this article. In particular,
we discuss the unique characteristics of federated analytics and how it differs
from federated learning. We also explore a wide range of FA queries and discuss
various existing solutions and potential use case applications for different FA
queries.
|
Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Shanshan Han, Shantanu Sharma, Chaoyang He, Sharad Mehrotra, Salman Avestimehr
|
2023-02-02T18:56:24Z
|
http://arxiv.org/abs/2302.01326v1
|
# Federated Analytics: A survey
###### Abstract
Federated analytics (FA) is a privacy-preserving framework for computing data analytics over multiple remote parties (e.g., mobile devices) or silo-ed institutional entities (e.g., hospitals, banks) without sharing the data among parties. Motivated by the practical use cases of federated analytics, we follow a systematic discussion on federated analytics in this article. In particular, we discuss the unique characteristics of federated analytics and how it differs from federated learning. We also explore a wide range of FA queries and discuss various existing solutions and potential use case applications for different FA queries.
Federated analytics; distributed computing, privacy.
## 1 Introduction
Federated Analytics (FA) is a paradigm for collaboratively extracting insights from distributed data that is owned by multiple parties (_e.g._, individual mobile devices or institutional organizations) under the coordination of a central entity (_e.g._, a service provider) without any of
the raw data leaving their local parties or revealing information beyond the targeted insights. The core principles of this paradigm allow breaking the limitations for deriving analytics from limited centralized data, in terms of privacy concerns and operational costs. In the last decade, federated learning (Kairouz _et al._, 2021), a closely related area to federated analytics, has received significant interest both in academic and industry domains. Recently, the research community is extending federation beyond learning settings to address more generalized analytics questions. In this work, we summarize the diversity of questions within federated analytics and highlight research problems that can have significant theoretical and practical interests.
The term federated analytics was first coined by Google in 2020\({}^{1}\) to represent "collaborative data science without data collection". It was first explored in support of federated learning as a way for Google engineers to evaluate the quality of the learned machine learning models against real-world data. Beyond model evaluation, FA implementations have expanded to other applications with the flagship application being the discovery of popular elements across devices, _e.g._, popular out-of-dictionary words (Zhu _et al._, 2020) or most popular songs recognized by phones. In these FA applications, the key challenge was to develop protocols that are efficient at scale while taking into account the limited communication bandwidth, as well as preserving the privacy of the participating parties.
Footnote 1: [https://ai.googleblog.com/2020/05/federated-analytics-collaborative-data.html](https://ai.googleblog.com/2020/05/federated-analytics-collaborative-data.html)
Even with the success of these initial FA solutions and the recent interest in this collaborative paradigm, there is, unfortunately, no clear definition for what constitutes federated analytics, what kind of interesting analytical questions it can answer, and what are the possible real-world domains that can benefit from its applications. Very recent summarizing efforts in federated analytics have focused on queries of interest to particular domain applications such as video analytics (Wang _et al._, 2021). However, there exists a wide range of other queries that can be supported (and are of interest) in an FA system. Summarizing these different query classes and the potential approaches for answering them in federated analytics provides a great starting point for new researchers in this area as well as the future development of generalized
solutions for serving these queries within an FA system.
This paper aims to provide an introductory guide to federated analytics as follows (Figure 1). We first define federated analytics and how it relates to the more well-studied field of federated learning. Next, we provide a taxonomy of typical data analysis queries of interest in federated analytics and where they can find use in different domains. For the presented queries, we also discuss different existing approaches in the literature for addressing them. Finally, we discuss different challenges and opportunities within the federated analytics framework and discuss potential solutions for addressing these challenges and open directions. These open questions provide starting points for expanding and developing more practical scenarios in federated analytics, where research efforts are still needed.
Figure 1: The schematic structure of federated analytics and the relationship between different sections. The body of this survey mainly contains the fundamentals of federated analytics, a taxonomy of different queries of federated analytics, federated analytics algorithms, applications, and discussions of challenges and opportunities in federated analytics in the presence of cloud-based services.
## 2 What is federated analytics?
In federated analytics, there is typically a central querier (the question asker) who wants to learn some property or answer a question based on data distributed across different clients (_i.e._, parties). Each of these clients owns a subset of the data, representing their local dataset. We will refer to these parties as clients or data owners interchangeably throughout this survey.
From a generalized perspective, **federated analytics** can be defined as a setting for data analysis where a querier wishes to answer a data analysis query through the collaboration of multiple data owners (clients) that own their local raw data. The raw data is not exchanged or transmitted, but instead, intermediate query replies that are meant for aggregation at the querier are transferred to answer the intended query.
In particular, from this generalized view, the goal of federated analytics is for a central querier to answer the following query \(Q\)
\[Q(\mathcal{D})=F_{\omega}(\mathcal{D}_{1},\mathcal{D}_{2},\cdots,\mathcal{D} _{N}). \tag{1}\]
Here \(\mathcal{D}=\{\mathcal{D}_{i}\}_{i=1}^{N}\) is the collection of private datasets at the \(N\) data owners, and \(F_{\omega}\) is the (potentially parameterized) function on the data describing the target query. For instance, given a pre-trained machine learning classification model parameterized by \(\omega\), the basic federated analytics query to test _the accuracy of the model \(\omega\)_ on the distributed datasets can be represented by the following query:
\[Q_{\omega}(\mathcal{D}) =Acc(\omega;\{\mathcal{D}_{1},\mathcal{D}_{2},\cdots,\mathcal{D} _{N}\})\] \[=\sum_{i=1}^{N}\frac{|\mathcal{D}_{i}|}{\sum_{i=1}^{N}|\mathcal{ D}_{i}|}Acc(\omega;\mathcal{D}_{i}), \tag{2}\]
with the query answer being the weighted average of each party's local test accuracy \(Acc(\omega;\mathcal{D}_{i})\). To compute the local accuracy, each party applies the model to its local labeled dataset and computes the local ratio of correct classifications.
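As a toy illustration of this query (a minimal sketch with made-up clients, data, and a trivial classifier; not tied to any specific FA system), each client evaluates the shared model locally and reports only its dataset size and local accuracy, which the querier combines as in Eq. (2):

```python
import numpy as np

def local_accuracy(model, X, y):
    # each client evaluates the shared model on its own labeled data
    return float(np.mean(model(X) == y))

# a toy shared classifier and three clients' private datasets (all made up)
model = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(0)
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)
    clients.append((X, y))

# each client reports only (|D_i|, local accuracy); the querier aggregates as in Eq. (2)
reports = [(len(y), local_accuracy(model, X, y)) for X, y in clients]
total = sum(n for n, _ in reports)
federated_accuracy = sum(n / total * acc for n, acc in reports)
print(round(federated_accuracy, 3))
```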
### Federated learning vs. federated analytics
Federated analytics is very similar to federated learning (Kairouz _et al._, 2021) in the fact that both require collaborative use of distributed
data without collecting the raw data at a centralized location. However, while federated learning, as a branch of distributed optimization, is about training machine learning models at the edge and aggregating learning outcomes back into the federated learning model, federated analytics is more general: it covers applying basic data science methods for data analysis and also includes optimization-based questions such as federated learning. Thus, from a generalized perspective using the formulation of (1), federated learning can be viewed as a complex federated analytics query on the distributed datasets when the function \(F_{\omega}\) is the following empirical risk minimization problem:
\[F_{\omega}(\mathcal{D}_{1},\mathcal{D}_{2},\cdots,\mathcal{D}_{N})=\arg\min_{ \mathbf{w}}\sum_{i=1}^{N}\sum_{\mathbf{x}\in\mathcal{D}_{i}}\ell(\mathbf{w}; \mathbf{x}). \tag{3}\]
The learning branch of federated analytics, i.e., federated learning, has been extensively studied in recent years (Kairouz _et al._, 2021), while algorithms and approaches for basic data science queries have not seen similar exploration, even though they are critical to supporting federated learning models. In fact,
Figure 2: An example federated analytics setting where a **querier** is discovering the most popular song in the collective datasets at the clients, where each client is a **data owner** of its local subset. To preserve the privacy of the clients’ data the system seeks to answer the query distributively with only focused replies being sent back to the querier.
one of the first application examples of non-learning queries in federated analytics is strongly coupled with federated learning, where engineers at Google wanted to evaluate the inference performance (_e.g._ in terms of accuracy) of trained federated learning models against real-world data not available at the data centers.
Thus, in the remainder of the paper, we limit our attention to simple federated analytics queries that would not require optimization when solved in a centralized scenario, in contrast to the federated learning branch which would require optimization of parameters to solve in a centralized setting. Following this distinction, examples of simple queries for federated analytics include questions of the form: what is the mean or median value of a function applied on the distributed data; while federated learning would be confined to learning a parameterized function such as: what is the best model that maps features \(\mathbf{x}\) to target variable \(\mathbf{y}\). In fact, each round of federated learning invokes the simplest question in federated analytics after local training: _what is the sum of vectors (gradient updates) stored at the participating clients?_.
### Applications for federated analytics
We, next, discuss several canonical domains that benefit significantly from applying federated analytics. Figure 3 highlights a number of these applications of federated analytics in the healthcare domain.
Figure 3: Examples of federated analytics applications in the healthcare domain.

* **Evaluation analytics for machine learning models.** The poster application that started garnering interest in federated analytics was the collaborative evaluation of the quality of trained machine learning models. For instance, Google uses federated analytics to evaluate the accuracy of Gboard next-word prediction models by using captured data from users' typing activities on their phones. Similar to accuracy evaluation, federated analytics can also be used to compute other evaluation metrics of the trained machine learning models, _e.g._, model robustness to unseen distributions/users as well as the fairness to different demographic groups (Ezzeldin _et al._, 2021) (for example, how different is the performance of an image tagging application to photos from the black vs white communities).
* **Analytics for medical studies and precision healthcare.** A key ingredient for realizing the full promise of precision medicine is allowing research analytics and diagnostics on large amounts of medical data that are not typically available through traditional medical research procedures. This kind of information can range from data collected at medical institutions (_e.g._, the efficacy of applied treatments and onset symptoms associated with a diagnosis) to individual personal data such as location history of individuals for contact tracing (_e.g._ during COVID-19), or mental health studies based on bio-markers. Enabling these gains from big medical data is challenged by the legal and regulatory barriers for privacy that make collecting patient-level data outside a healthcare provider complex and time-consuming.
* **Guiding advertisement tactics.** Advertisers are keen to know whether their ads are attractive to their potential customers. For example, in the case of video ads, they would like to collect summary ads viewership data from users to understand the effectiveness of their advertisement concepts as well as guide future advertisement expenditure.
The aforementioned domains can make use of a large number of simple federated analytic metrics beyond the promise of federated learning models. In the following section, we give a taxonomy of different federated analytics queries and highlight to the reader some of their potential use cases in the discussed application domains.
## 3 A taxonomy of federated analytics queries
As described in Section 2, a federated analytics query is a general class that encompasses any question by a querier on distributed private datasets. However, from this general class of queries, there exist a number of queries that find greater exposure in different application domains and are explored more deeply in the literature. We can divide these queries of interest into three main categories: 1) Statistical testing queries, 2) Set queries, and 3) Matrix transformation queries. The statistical testing category includes different data science queries that aim to discover key statistical properties of the distributed private data. Examples of such queries would be the estimation of the mean, the median, heavy hitters, key-valued data frequencies, hypothesis testing, etc. The set queries, on the other hand, include analytics for discovering data associations such as set intersection, set union, and intersection cardinality. Matrix transformation queries include but are not limited to operations such as dimensionality reduction using methods such as principal component analysis, and projections. In this section, we formally define the most popular queries in each of the aforementioned query types and present some of their real-world applications. Figure 4 summarizes the queries presented in the remainder of this section.
Figure 4: A taxonomy of federated analytics queries presented in Section 3.
### Statistical testing
We focus on four key statistical queries that have a wide variety of real-world applications in different domains, such as health, business, and user experience. For each of these statistical queries, we give its mathematical definition, followed by one of its main applications. We discuss some existing solutions in Section 4. We start by assuming a set \(\mathcal{D}=\{\mathcal{D}_{1},\ldots,\mathcal{D}_{N}\}\) of \(N\) datasets, where each dataset \(\mathcal{D}_{i}=\{x_{1}^{i},\ldots,x_{n_{i}}^{i}\}\) consists of \(n_{i}\) data points and is owned solely by one distributed node, _i.e._, an FA client.
#### 3.1.1 Heavy hitters
The objective of the heavy hitter problem is to construct a succinct histogram of the elements across the \(N\) parties' datasets that contains only the most popular (heavy-hitter) elements; other elements are treated as if appearing with zero frequency. Typically, an element is called a heavy hitter if its frequency in the distributed dataset is greater than or equal to a fraction \(\phi\) of the dataset size. Formally the goal of the query is to return the following:
\[Q(\mathcal{D})=\{(x,\textit{freq}(x))|x\in\mathcal{D}_{\text{HH}}\}\] \[\text{where:}\quad\mathcal{D}_{\text{HH}}=\left\{x\left|x\in \bigcup_{i=1}^{N}\mathcal{D}_{i},\ \ \textit{freq}(x)\geq\phi|\mathcal{D}|\right.\right\}. \tag{4}\]
Note that the heavy-hitters problem is closely related to another succinct histogram problem formulation, the top-\(K\) problem, where the goal is to find a succinct histogram with the \(K\) most frequent elements instead of all elements exceeding a threshold. If we target the top-1, this translates to the well-known _mode_ statistic of the dataset.
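To make the definition in Eq. (4) concrete, here is a non-private, centralized baseline on made-up client word lists; an actual federated deployment would of course not pool the raw data like this:

```python
from collections import Counter

# clients' local word lists (all data here is made up for illustration)
client_data = [
    ["hello", "gm", "lol", "gm"],
    ["gm", "brb", "lol"],
    ["gm", "lol", "hello", "idk"],
]
phi = 0.2                                   # heavy-hitter threshold from Eq. (4)

union = [x for d in client_data for x in d]
counts = Counter(union)
heavy_hitters = {x: c for x, c in counts.items() if c >= phi * len(union)}
print(heavy_hitters)                        # {'gm': 4, 'lol': 3}
```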
**Application (User Experience).** One popular application of heavy hitters is to learn trendy out-of-dictionary words generated by users' devices. Learning trendy words is of high interest to service providers as it allows them to improve the service they provide to their users. These services could be the autocomplete feature in smart keyboards, or a powerful advertisement engine that could leverage the current public taste of people for more effective advertisement. A similar application is
to learn the out-of-dictionary words, which can be used to improve the smart keyboard spell-auto-correction feature by adding such words to the keyboard's dictionary. Apple has already used differential privacy to protect the privacy of users' input data while collecting the emojis most frequently used by users (Apple, 2017). Similarly, Google has also proposed another differential privacy (DP) method to collect the out-of-dictionary words ("Learning new words" n.d.).
#### 3.1.2 k-percentile element
In the \(k\)-th percentile statistical query problem, the objective is to find the smallest element that is greater than \(k\) percent of the overall dataset available at the participating distributed nodes. This statistical query problem can be formalized as follows. Assuming the entries of the datasets in \(\mathcal{D}\) are non-categorical values (_i.e._, numerical values), then by denoting \(\mathcal{D}^{s}\) to be the non-decreasing sorted set of the elements of \(\bigcup_{i=1}^{N}\mathcal{D}_{i}\), the \(k\)-percentile element \(x_{k}\) in the distributed parties' datasets \(\mathcal{D}=\{\mathcal{D}_{1},\mathcal{D}_{2},\cdots,\mathcal{D}_{N}\}\) is given by
\[Q(\mathcal{D})=x_{k}=x\text{ such that }\text{{rank}}_{\mathcal{D}^{s}}(x)=k \times|\mathcal{D}^{s}|, \tag{5}\]
where \(\text{{rank}}_{\mathcal{D}^{s}}(x)\) is the order of element \(x\) in the sorted dataset \(\mathcal{D}^{s}\). An example of \(k\)-percentile values is the _median_, where \(k\) is 0.5.
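A non-private, centralized baseline makes Eq. (5) concrete (made-up data; the rank is rounded up, which is one common convention):

```python
import math

# pool the clients' data, sort it, and index by rank (illustrative data only)
client_salaries = [[52, 61, 47], [70, 55], [49, 58, 90, 44]]
k = 0.5                                          # k = 0.5 corresponds to the median

pooled = sorted(x for d in client_salaries for x in d)
rank = max(1, math.ceil(k * len(pooled)))        # one convention: round the rank up
print(pooled[rank - 1])                          # 55, the median of the pooled data
```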
**Application (Business).** It is well-known that the median is a more robust metric to represent central tendency compared to the mean, which is more sensitive to outliers. Hence, it is more useful in business use cases to assess different components such as company salaries. For instance, a possible application for federated median computation is for an authority to compute the median salary (or any other percentile) of all employees in a set of companies without revealing the exact salaries of the employees or which companies they belong to.
#### 3.1.3 Key-valued data
Key-valued data analysis is a statistical query problem in which each data point is represented by a key (_e.g._, an identifier) and a value associated with this key, and the objective is to learn the frequency of each key and the mean (or aggregate) of the values that appear paired
with this particular key. To formalize the objective, we assume that the dataset \(\mathcal{D}_{i}\), for \(i\in[N]\) is a key-valued dataset such that \(\mathcal{D}_{i}=\left\{x_{j}^{i}|x_{j}^{i}=(k_{j}^{i},v_{j}^{i}),\ \forall j\in[n_{i}]\right\}\). The objective is to find the following
\[Q(\mathcal{D})=\left\{\left(\text{{freq}}(k_{i}),\ \frac{1}{|\text{{freq}}(k_{i})|} \sum_{v_{j}:(k_{i},v_{j})\in\mathcal{D}}v_{j}\right),\forall k_{i}\in\mathcal{ D}\right\}. \tag{6}\]
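The following minimal, non-private sketch (with made-up (key, value) pairs) computes the per-key frequency and mean of Eq. (6); in an actual FA deployment the clients would only contribute protected local aggregates rather than raw pairs:

```python
from collections import defaultdict

# each client holds (key, value) pairs, e.g. (stock ticker, invested amount); data is made up
client_data = [
    [("AAA", 100), ("BBB", 40)],
    [("AAA", 60), ("CCC", 10), ("BBB", 20)],
    [("AAA", 30)],
]

freq, totals = defaultdict(int), defaultdict(float)
for pairs in client_data:
    for key, value in pairs:
        freq[key] += 1
        totals[key] += value

# (frequency, mean value) per key, as in Eq. (6)
answer = {k: (freq[k], totals[k] / freq[k]) for k in freq}
print(answer)   # {'AAA': (3, 63.33...), 'BBB': (2, 30.0), 'CCC': (1, 10.0)}
```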
**Application (Business).** A possible application can be in the stock market, where the objective is to privately learn the distribution of the stocks and the investment amount in each stock from the private data of the investors. Specifically, in this stock market application, the key represents the stock while the value represents the amount that a person invests in a given stock. The statistical query arises when an analyst wants to learn how many agents invest in each stock (_e.g._, frequency distribution of stocks) and the amount invested in each stock (_e.g._, average or aggregate amount) without collecting any private data, which could breach the investors' privacy.
#### 3.1.4 Histogram-based statistics
This can be considered a special case of the key-valued data problem, where the objective is to learn only the frequency of each key.
**Application (User experience).** One real-world application of histogram-based statistics is the Now Playing feature on Google's Pixel phones (Google, 2020). This feature uses an on-device database of song fingerprints to show users what song is playing in the surrounding room without an internet connection. The on-device database includes the most frequently recognized songs, which are maintained and updated by Google to ensure that the database contains only popular songs. The way it works is that on each phone, the Now Playing application computes the recognition rate (value) for each song (key) in its Now Playing History. Once the phone is plugged in and connected to WiFi, the phone encrypts the song rates and sends them to the Google servers, which can only compute a histogram distribution of all song counts. This allows Google to replace the less popular songs in the database with the more popular ones.
### Private set queries
The distributed private set queries class can be broadly clustered into three different categories: distributed set intersection, distributed set union, and distributed intersection cardinality computation. The main goal of these analytics problems is to compute the queries in a way that protects the privacy of the data owners being queried. Similar to the statistical testing class, we consider having \(N\) parties where each party \(i\) has a dataset \(\mathcal{D}_{i}\) of \(n_{i}\) unique and private data points. Some of the existing solutions to set queries are presented in Section 5.
#### 3.2.1 Private Set Intersection
Private set intersection (PSI) is a private set query problem with a wide range of applications, whose objective is to compute the intersection between the sets owned by the different clients and nothing beyond that. This query is formally given as follows
\[Q(\mathcal{D})=\bigcap_{i=1}^{N}\mathcal{D}_{i}. \tag{7}\]
**Application (Business).** One famous application of PSI in the two-party setting is the online-to-offline advertisement conversion (Ion _et al._, 2020) in which a company would like to know how much of its revenue can be attributed to an online advertisement in order to assess the future payment it spends on a paid ad (_e.g._, Facebook ad). On the other hand, the advertising company wants to know how successful its advertising campaign is. In this setting, the advertising companies have a database of the users and their status, whether they saw the ad or not, while the company knows the users who purchased their products as well as the amount they spent on their purchases. In other words, the data needed to compute these statistics are split across the two parties. In this setting, the two parties are typically unwilling to share their customers' data to protect the privacy of their business and their customers, but both parties would want to collaboratively learn how many users both saw an ad and made a corresponding purchase, as well as the amount of money those users spent on the company's products.
#### 3.2.2 Union
Similar to private set intersection, the goal is to privately evaluate the union of the input sets of two or more parties without revealing anything about the sets beyond the union. This objective can be formally given by
\[Q(\mathcal{D})=\bigcup_{i=1}^{N}\mathcal{D}_{i}. \tag{8}\]
**Application (Security).** One popular application is risk assessment and management (Ramanathan _et al._, 2020). The goal of this application is to aggregate the blacklists from different parties and across various attack types. This could help improve the individual blacklists in identifying malicious sources.
#### 3.2.3 Cardinality
The goal of this problem is to learn the cardinality of the intersection of the datasets of multiple parties in a private manner, which can formally be given as follows
\[Q(\mathcal{D})=|\bigcap_{i=1}^{N}\mathcal{D}_{i}|. \tag{9}\]
**Application (Public Safety).** One popular real-world application of PSI cardinality is the CSAM Detection system proposed by Apple ("Apple for Child Sexual Abuse Material (CSAM)"). The main goal is to identify and report iCloud users who store known Child Sexual Abuse Material (CSAM) in their iCloud Photos accounts. The way it works is that intersection cardinality testing is carried out between a known database of CSAM images and the photos of individual iCloud users. When the cardinality of the intersection exceeds a predefined threshold, Apple can provide relevant information to the National Center for Missing and Exploited Children (NCMEC).
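To give a flavor of how such queries can be answered without exchanging raw identifiers, here is a toy Diffie-Hellman-style sketch of PSI cardinality between two parties. It is purely illustrative: the group, hashing, parameters, and party names are made up, it omits shuffling and all the hardening a real protocol (such as the one in (Ion _et al._, 2020)) requires, and it should not be taken as a secure implementation.

```python
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; toy group Z_P^* (illustrative only, not production-grade)

def h2g(item: str) -> int:
    # toy "hash-to-group": map an item into the multiplicative group
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P or 1

def blind(items, key):
    # each party raises the hashed items to its private exponent
    return {pow(h2g(x), key, P) for x in items}

a = secrets.randbelow(P - 2) + 1            # advertiser's private exponent
b = secrets.randbelow(P - 2) + 1            # merchant's private exponent

advertiser = {"alice", "bob", "carol", "dave"}   # users who saw the ad (made-up data)
merchant = {"bob", "carol", "erin"}              # users who made a purchase (made-up data)

# round 1: each side blinds its own set and sends it to the other
adv_blinded = blind(advertiser, a)
mer_blinded = blind(merchant, b)

# round 2: each side re-blinds what it received; H(x)^{ab} is the same for common items
adv_double = {pow(v, b, P) for v in adv_blinded}
mer_double = {pow(v, a, P) for v in mer_blinded}

print(len(adv_double & mer_double))  # 2 -> |intersection| without revealing raw identifiers
```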
### Matrix transformations
Singular value decomposition (SVD) is one of the most popular matrix operations, with a wide range of applications in both data analytics and machine learning. The main objective of this problem is to compute the SVD over a set of distributed data without collecting any raw data or breaching the privacy of the data owners. This problem can be formally defined as follows: assume there are \(N\) parties, and each party \(i\) has a private data matrix \(\mathbf{D}_{i}\in\mathrm{R}^{m\times n_{i}}\). The \(N\) parties would like to compute the SVD jointly on the combined dataset \(\mathbf{D}=[\mathbf{D}_{1},\ldots,\mathbf{D}_{N}]\), where \(\mathbf{D}\in\mathrm{R}^{m\times n}\) and \(n=\sum_{i=1}^{N}n_{i}\). The private computation of SVD on the combined dataset takes the following form
\[Q(\mathcal{D})=\mathbf{U}\Sigma[\mathbf{V}_{1}^{T},\ldots,\mathbf{V}_{N}^{T}] \tag{10}\]
where \(\mathbf{U}\) and \(\Sigma\) are shared across all the parties, while \(\mathbf{V}_{i}\), \(\forall i\in[N]\), is kept secret by party \(i\) and never shared with any other parties. From (10), each node \(i\) can get its SVD by using the shared matrices \(\mathbf{U}\) and \(\Sigma\), and the secret matrix \(\mathbf{V}_{i}\) as \(\mathbf{D}_{i}=\mathbf{U}\Sigma\mathbf{V}_{i}^{T}\).
Another variant of SVD, called Funk-SVD, is applied to the sparse rating matrix used in recommendation systems (Chai _et al._, 2020): it decomposes the sparse matrix into two embedding matrices that can be used to predict the missing ratings in the rating matrix.
**Application (Machine Learning).** SVD is an essential building block in many studies and applications, such as principal component analysis (PCA). PCA is used to reduce the feature space of the data used in machine learning. Reducing dimensionality in statistical machine learning can prevent the model from overfitting, which reduces the ability of the model to generalize beyond the examples in the training set. One challenge of performing PCA in a distributed setting is that the data is distributed across multiple nodes while collecting it centrally is prevented by law (_e.g._, GDPR (Voigt and Von dem Bussche, 2017)). We discuss some existing solutions for the matrix transformation query in Section 4.
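One simple (and only partially privacy-preserving) way to realize the computation in Eq. (10) is for each party to share its local Gram matrix \(\mathbf{D}_{i}\mathbf{D}_{i}^{T}\), from which the aggregator recovers the shared factors \(\mathbf{U}\) and \(\Sigma\); each party then reconstructs its own \(\mathbf{V}_{i}\) locally. The sketch below uses made-up sizes and random data, and it ignores the secure aggregation or DP noise a deployed FA system would add on top:

```python
import numpy as np

rng = np.random.default_rng(0)
m, cols = 6, [4, 3, 5]                       # feature dimension and columns per party (made up)
D_parts = [rng.normal(size=(m, c)) for c in cols]

# each party shares only its local Gram matrix D_i D_i^T (m x m);
# a real deployment would combine this with secure aggregation and/or DP noise
gram = sum(Di @ Di.T for Di in D_parts)

# the aggregator recovers the shared factors U and Sigma from the summed Gram matrix
eigvals, U = np.linalg.eigh(gram)
order = np.argsort(eigvals)[::-1]
U, sigma = U[:, order], np.sqrt(np.clip(eigvals[order], 0, None))

# each party recovers its private factor V_i locally and never shares it
V_parts = [Di.T @ U / sigma for Di in D_parts]

# sanity check: D_i is recovered as U diag(sigma) V_i^T for every party
for Di, Vi in zip(D_parts, V_parts):
    assert np.allclose(Di, U @ np.diag(sigma) @ Vi.T)
print("federated SVD factors recovered")
```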
## 4 Existing solutions to statistical testing queries
A taxonomy of the privacy-preserving techniques used for the statistical testing queries is given in Table 1. We consider different variants of privacy-preserving techniques represented by differential privacy (DP), secure multi-party computing (MPC), and a combination of DP with MPC.
### Heavy hitters
The heavy hitter problem has been well studied in the literature, either in the centralized setting with no privacy requirements, where the data is already collected and stored at a central server, or in a distributed federated setting, where the querier wishes to learn the "heavy hitters" in the clients' data while guaranteeing the privacy of each contributing client at minimal computation/communication cost (Charikar _et al._, 2004; Cormode _et al._, 2003; Hsu _et al._, 2012; Bassily and Smith, 2015; Bassily _et al._, 2017; Apple, 2017; Fanti _et al._, 2015; Acharya _et al._, 2019; Acharya and Sun, 2019; Zhu _et al._, 2020).
#### 4.1.1 Non-private centralized setting
In the non-private centralized setting, the main objective is to develop efficient heavy hitter algorithms with low storage requirements and provable error bounds. The low storage requirement is of significant importance when dealing with large online data streams, for which memory-intensive solutions such as sorting the stream or keeping a counter for each distinct element are infeasible (_e.g._, (Charikar _et al._, 2004; Cormode _et al._, 2003)). The work in (Charikar _et al._, 2004) proposes an approximate heavy hitter algorithm that is memory efficient with a proven theoretical error bound. The algorithm is based on sketch counting, which relies on a set of hash functions that map each element in the data stream to different bins, such that when running the sketch counting algorithm along with a max-heap data structure, the algorithm can find the \(k\) heavy hitters in a stream of \(d\) unique items with storage cost logarithmic in \(d\) (_e.g._, \(O(k\log d)\)) instead of linear in \(d\).
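The following minimal Python sketch illustrates this hash-and-sign sketching idea together with a small heap of candidate heavy hitters. It is a simplified, illustrative variant rather than a faithful reimplementation of the cited algorithm; the table dimensions, the SHA-256-derived hash functions, and the toy stream are arbitrary choices.

```python
import hashlib
import heapq
from statistics import median


class CountSketch:
    """A small Count-Sketch-style frequency estimator (in the spirit of Charikar et al., 2004)."""

    def __init__(self, rows=5, buckets=256):
        self.rows, self.buckets = rows, buckets
        self.table = [[0] * buckets for _ in range(rows)]

    def _bucket_sign(self, item, row):
        # Derive a bucket index and a +/-1 sign from a seeded cryptographic hash.
        digest = hashlib.sha256(f"{row}:{item}".encode()).digest()
        bucket = int.from_bytes(digest[:4], "big") % self.buckets
        sign = 1 if digest[4] & 1 else -1
        return bucket, sign

    def add(self, item):
        for row in range(self.rows):
            bucket, sign = self._bucket_sign(item, row)
            self.table[row][bucket] += sign

    def estimate(self, item):
        votes = []
        for row in range(self.rows):
            bucket, sign = self._bucket_sign(item, row)
            votes.append(sign * self.table[row][bucket])
        return median(votes)


def approx_heavy_hitters(stream, k, sketch):
    """Track roughly the k most frequent items with the sketch plus a small candidate heap,
    so memory stays far below one counter per distinct item."""
    heap, members = [], set()          # min-heap of (estimate, item); estimates may be stale
    for item in stream:
        sketch.add(item)
        if item in members:
            continue
        est = sketch.estimate(item)
        if len(heap) < k:
            heapq.heappush(heap, (est, item))
            members.add(item)
        elif est > heap[0][0]:
            _, evicted = heapq.heapreplace(heap, (est, item))
            members.discard(evicted)
            members.add(item)
    # Re-estimate the tracked items once at the end to refresh stale heap entries.
    return sorted(((sketch.estimate(i), i) for i in members), reverse=True)


stream = ["a"] * 50 + ["b"] * 30 + ["c"] * 20 + list("defghij") * 2
print(approx_heavy_hitters(stream, k=3, sketch=CountSketch()))   # ~ [(50,'a'), (30,'b'), (20,'c')]
```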
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
 **Query** & **Privacy technique** & **Related works** & **Noisy response** \\ \hline
 \multirow{4}{*}{Heavy hitters} & Non-private & (Charikar _et al._, 2004; Cormode _et al._, 2003) & No \\ \cline{2-4}
 & DP & (Hsu _et al._, 2012; Bassily and Smith, 2015; Bassily _et al._, 2017; Apple, 2017; Acharya _et al._, 2019; Acharya and Sun, 2019; Zhu _et al._, 2020) & Yes \\ \cline{2-4}
 & MPC & (Boneh _et al._, 2021) & No \\ \cline{2-4}
 & DP + MPC & (Bohler and Kerschbaum, 2021) & Yes \\ \hline
 \multirow{3}{*}{Median} & Non-private & (Iutzeler, 2017) & No \\ \cline{2-4}
 & DP & (Boehler and Kerschbaum, 2022; Böhler and Kerschbaum, 2020) & Yes \\ \cline{2-4}
 & MPC & (Aggarwal _et al._, 2010; Goldreich _et al._, 2019; Tueno _et al._, 2020) & No \\ \hline
 Key-valued data & DP & (Ye _et al._, 2019; Gu _et al._, 2020) & Yes \\ \hline
\end{tabular}
\end{table}
Table 1: Taxonomy of the privacy-preserving techniques used in the statistical queries.
#### 4.1.2 Private distributed setting
There is a rich body of works on private heavy hitters and frequency estimation in the distributed setting while ensuring users' privacy by leveraging DP (Hsu _et al._, 2012; Bassily and Smith, 2015; Bassily _et al._, 2017; Apple, 2017; Acharya _et al._, 2019; Acharya and Sun, 2019; Zhu _et al._, 2020), MPC (Boneh _et al._, 2021), or combine DP with MPC (Bohler and Kerschbaum, 2021).
**Heavy hitters with differential privacy.** Researchers have proposed multiple _efficient_ private heavy hitter algorithms that have a computation time, communication cost, and storage cost polynomial in \(n\) (number of users) and logarithmic in \(d\), \(log(d)\), where \(d\) is the size of the data universe (dictionary of the data points to check). (Hsu _et al._, 2012) proposed several efficient \((\epsilon,\delta)\)-differentially private algorithms for the heavy hitter problem for \(n\) parties, each of which possesses a single element from a universe of size \(d\). However, their algorithms experience high error between the estimated frequency for the heavy hitter items and their true frequency, where the error rate is given by \(\mathcal{O}\sqrt[6]{\frac{log(d)log(\frac{1}{\delta})}{\epsilon^{2}n}}\), which does not match their error lower bound \(\Omega(\frac{1}{\sqrt{n}})\). In contrast to (Hsu _et al._, 2012), (Bassily and Smith, 2015) provide the first polynomial time local \((\epsilon,0)\)-differentially private protocol for heavy hitters that has worst-case error \(\mathcal{O}(\sqrt{\frac{log(d)}{\epsilon^{2}n}})\). They also show that using the public coin model, each user can send only one bit to the server. However, one of the main limitations of their approach is the high time complexity, where their algorithm requires a server running time of \(O(n^{5/2})\) and a user running time of \(O(n^{3/2})\). In later work, (Bassily _et al._, 2017) have proposed two algorithms, TreeHist and Bitstogram, which require a server running time of \(\mathcal{O}(n)\) and a user running time of \(\mathcal{O}(1)\). The TreeHist algorithm is based on a noisy, compressed version of the count sketch proposed in (Charikar _et al._, 2004). From the practical point of view, in a concurrent work (Apple, 2017), Apple has proposed the Sequence Fragment Puzzle (SFP) algorithm, a state-of-the-art sketching-based algorithm for discovering heavy hitters using local DP and an unknown dictionary. In this work, they have proven expressions for balancing the trade-offs among privacy, accuracy, transmission cost, and computation cost, allowing a trade-off of these parameters in different practical use cases. There are
some other works (_e.g._, (Fanti _et al._, 2015)) that propose a heuristic algorithm that can be used for finding the heavy hitter with an unknown dictionary. While the work in (Bassily _et al._, 2017) requires public randomness and coordination between the server and users, the authors in (Acharya _et al._, 2019) have proposed an algorithm based on Hadamard Response (HR) that is used in general for frequency estimation and does not require any public randomness, but at the cost of a per-user communication cost of \(\log(d)\), while working for all privacy regime (_e.g._, \(\forall\epsilon\)). In contrast to (Acharya _et al._, 2019) that trades the need for public randomness with more per-user communication cost, (Acharya and Sun, 2019) proposes an algorithm that requires only 1-bit per user while not requiring any public randomness. However, their algorithm gives an optimal error rate only at the high privacy regime, _i.e._, \(\epsilon<1\).
The previously mentioned works utilize local DP to ensure privacy, yet it is known that local DP often leads to a significant reduction in utility (Kairouz _et al._, 2014; Kairouz _et al._, 2016; Duchi _et al._, 2013). On the other hand, the choice of using central DP requires having a trusted server that can first collect the clean data and then perturbs it. Since in the central DP setting, noise is only applied once by a trusted server, central DP has better utility than local DP. To overcome the limitations of central DP and local DP, (Zhu _et al._, 2020) propose trie-based heavy hitters (TrieHH) algorithm that is interactive (_e.g._, multi-round algorithm) and leverages its interactivity to achieve central DP without the need to centralize raw data while also avoiding the significant loss in utility incurred by local differential privacy. The DP privacy guarantee of their algorithm is achieved by leveraging the randomness from the user sampling and the anonymity properties of their distributed algorithm, which make their algorithm inherently differentially private without requiring additional noise. This is different from the previously discussed works that are non-interactive and achieve local DP using the randomized response. It is also different from the work in (Bassily _et al._, 2017) that relies on public randomness. They have also studied the trade-off between privacy and utility and shown that their algorithm can achieve good utility while ensuring strong privacy guarantees, compared with the works that rely on DP, such as (Apple, 2017).
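The common pattern behind the local-DP protocols above is perturb-then-debias: each user randomizes its report locally, and the server corrects the aggregate for the known randomization. The sketch below shows this pattern with plain \(k\)-ary randomized response and an unbiased frequency estimator; the cited protocols replace this simple randomizer with sketches, Hadamard responses, or tries to scale to large domains, so the code is only a conceptual illustration with arbitrary domain and \(\epsilon\) values.

```python
import math
import random
from collections import Counter


def krr_report(value, domain, eps, rng=random):
    """Generalized (k-ary) randomized response: an eps-locally-DP report of one item."""
    k = len(domain)
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p_true:
        return value
    return rng.choice([v for v in domain if v != value])


def krr_estimate_frequencies(reports, domain, eps):
    """Unbiased frequency estimates recovered from the noisy reports."""
    n, k = len(reports), len(domain)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = 1.0 / (math.exp(eps) + k - 1)
    counts = Counter(reports)
    return {v: (counts[v] - n * q) / (p - q) for v in domain}


domain = list("abcde")
truth = ["a"] * 600 + ["b"] * 300 + ["c"] * 100
reports = [krr_report(v, domain, eps=2.0) for v in truth]
print(krr_estimate_frequencies(reports, domain, eps=2.0))   # roughly 600 / 300 / 100 / 0 / 0
```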
**Secure Multi-party Computing.** Leveraging secure multi-party computing primitives is another direction for privately computing the heavy hitters without impacting the utility (Boneh _et al._, 2021) or requiring a large number of users, as in (Zhu _et al._, 2020), to get reasonable utility. The protocol proposed by (Boneh _et al._, 2021) for the private heavy-hitter problem leverages a lightweight cryptographic tool called incremental distributed point functions instead of using DP, which could reduce the utility. The protocol relies on the assumption of two non-colluding servers, which is one of its main limitations; additionally, it requires at least one of the two servers not to collude with any client. Apart from these limitations, the protocol guarantees correctness in the presence of malicious clients who may manipulate their input strings to alter the protocol execution. Each user participates only once in the protocol execution, sending a single message, of size linear in the length of the input string, to the servers. Moreover, similar to most works that utilize DP, the proposed protocol does not require any public-key cryptographic operations, except for establishing secure channels between the parties.
**Secure Multi-party Computing with DP.** By combining MPC and DP, (Bohler and Kerschbaum, 2021) have proposed a heavy hitters protocol that provides high utility even for a small number of users, which is the most challenging regime for DP (Zhu _et al._, 2020). The proposed algorithm, in contrast to (Boneh _et al._, 2021), considers the existence of only one server that wishes to compute the K-heavy hitters on the input strings of the clients.
### Median
Similar to the heavy hitter problem, the works for distributed median computation are also broadly classified from the perspective of privacy into works that leverage MPC primitives and DP.
**Secure Multi-party Computing.** As pointed out by (Aggarwal _et al._, 2010), the problem of private computing of the \(k\)-th ranked element on the private dataset of several parties can be solved by constructing a combinatorial circuit that is evaluated securely by the parties (_e.g._, (Goldreich _et al._, 2019)). However, the main limitation of these generic protocols is the communication overhead. In particular, for a two-party setting, where the combined data set size is \(n\), and the elements of the
dataset are drawn from a field of size \(M\), the communication cost of this circuit-based solution is \(\Omega(n\log M)\). For applications where the data size is large, these generic solutions are impractical. By using an interactive protocol that relies on binary search and secure comparison via Yao's garbled circuit, (Aggarwal _et al._, 2010) have provided the first specialized protocols for computing the \(k\)-th ranked element with sublinear communication and computation overhead for both the two-party and the multi-party setting, where the parties in both settings are interested in learning the \(k\)-th ranked element. In the two-party case, the cost of computing the \(k\)-th ranked element is \(O(\log M\cdot\log k)\), compared to \(O(\log^{2}M)\) in the multi-party setting. The number of rounds of the proposed algorithm for the two-party setting is logarithmic in the number of input items, whereas the number of rounds of the multi-party algorithm is logarithmic in the size of the domain of possible input values (_i.e._, \(\log M\)). The proposed protocol provides security against malicious parties. One of the main limitations of this work in the multi-party setting is that it requires considerable coordination between all pairs of parties to establish pairwise communication channels, which impacts its practicality. Another practical limitation is that it is very interactive: the number of rounds to complete the protocol scales logarithmically with the field size. To overcome such limitations, (Tueno _et al._, 2020) have proposed efficient algorithms that leverage a client-server architecture. In this client-server setting, there are communication channels only between each client and the server, and only clients provide inputs to the computation. The role of the server is to make its computational resources available for the computation while having no input to the computation and receiving no output. Using this setting, their proposed algorithm is less interactive, as it only requires a fixed number of rounds with the server (at most four), compared to \(O(\log^{2}M)\) rounds for the algorithm in (Aggarwal _et al._, 2010). The highest computation cost of their algorithms is \(O(\log^{2}M)\).
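The following plaintext skeleton shows the binary-search logic that underlies the (Aggarwal _et al._, 2010) protocols: the parties repeatedly compare the number of their elements below a candidate value against \(k\), halving the domain each round. In the actual protocol the summed counts are never revealed; only the outcome of a secure comparison is. The datasets and domain bound below are toy values.

```python
def kth_ranked_element(party_datasets, k, domain_max):
    """Find the k-th smallest element (1-indexed) of the union of the parties' multisets
    by binary search over the value domain [0, domain_max]; O(log M) rounds."""
    lo, hi = 0, domain_max
    while lo < hi:
        mid = (lo + hi) // 2
        # Each party locally counts its elements <= mid; in the cryptographic protocol
        # only the comparison of the summed counts with k would be revealed.
        total_le = sum(sum(1 for x in d if x <= mid) for d in party_datasets)
        if total_le >= k:
            hi = mid
        else:
            lo = mid + 1
    return lo


parties = [[3, 9, 27], [1, 4, 15], [2, 8]]
assert kth_ranked_element(parties, k=4, domain_max=100) == 4   # sorted union: 1, 2, 3, 4, ...
```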
**Differential Privacy.** Computing the exact median value and revealing it to the clients using the algorithms proposed by (Goldreich _et al._, 2019; Aggarwal _et al._, 2010; Tueno _et al._, 2020) can violate the privacy of the parties that own this median value. To overcome this challenge, (Boehler and Kerschbaum, 2022) proposes an efficient algorithm for computing a differentially private median between two parties by utilizing the exponential mechanism. The proposed algorithm has a computation complexity sublinear in the size of the data universe (_e.g._, \(\log M\)). (Bohler and Kerschbaum, 2020) proposed another algorithm for private median computation in the multi-party setting, also using the exponential mechanism; its computation complexity is likewise sublinear in the data size. The threat model considered in this setting is that of semi-honest (non-malicious) clients. They also discuss how to extend their algorithm to malicious clients, and implement it using the SCALE-MAMBA framework (Aly _et al._, 2020).
**Non-private.** From the distributed optimization perspective, (Iutzeler, 2017) has proposed distributed synchronous and asynchronous algorithms for computing the median and other elements of specified ranks of the clients' data. Unlike the works in (Aggarwal _et al._, 2010; Boehler and Kerschbaum, 2022; Bohler and Kerschbaum, 2020), which connect all nodes as a fully connected graph, this work considers a general undirected connected graph. To solve the median problem distributedly, they first design a convex optimization problem whose minimizer is the median or the quantile to be computed. They then solve the problem using the distributed formulation of ADMM proposed by (Lions and Mercier, 1979; Boyd _et al._, 2011).
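A one-line way to see the convex-optimization view used by (Iutzeler, 2017): the median is the minimizer of the sum of absolute deviations, and quantiles generalize this via the pinball loss. The brute-force grid check below is only meant to make that connection concrete; it is not the distributed ADMM solver itself, and the data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.integers(0, 100, size=101)        # toy "combined" dataset, odd size

# The median minimizes f(x) = sum_i |x - d_i|; this is the convex program that the
# distributed (ADMM-based) formulation then solves without centralizing the data.
grid = np.arange(100)
f = np.abs(data[None, :] - grid[:, None]).sum(axis=1)
assert grid[f.argmin()] == np.median(data)
print("argmin of the sum of absolute deviations:", grid[f.argmin()])
```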
### Key-Valued data
The objective of this problem is to collect two fundamental statistics of key-value pairs, including frequency of keys and mean of values. One naive solution is to apply local DP independently at the keys and values. Since keys are categorical data, some existing DP methods (_e.g._, (Erlingsson _et al._, 2014; Kairouz _et al._, 2014)) can be applied to each key, while each value can be perturbed using (_e.g._, (Duchi _et al._, 2014; Nguyen _et al._, 2016)). However, the main challenge for this naive approach of applying local DP is to achieve a good utility-privacy trade-off, since the data contains two dimensions, and a user may have multiple key-value pairs. Additionally, this naive independent perturbation does not preserve the correlation between the keys and values. To address this challenge, (Ye _et al._, 2019) proposed the first specialized LDP algorithms for this problem by modifying the Harmony randomized response-based protocol (Nguyen _et al._, 2016)
to better maintain the relationship between keys and values and thereby improve the accuracy of the statistics while still achieving local differential privacy. Their first proposed algorithm, PrivKV, is a non-iterative (non-interactive) algorithm suitable for low-communication scenarios. They also proposed two interactive protocols (PrivKVM and PrivKVM+) that iteratively improve the estimation of a key's mean value: PrivKVM trades communication cost for accuracy, while PrivKVM+ balances accuracy and communication bandwidth. The main limitation of the iterative algorithms is the large number of rounds required to obtain an unbiased mean estimate. In general, their key limitations, which have also been highlighted by (Gu _et al._, 2020), include: (1) the large number of rounds requires all users to be always online, limiting practicality; (2) for a fixed total privacy budget, the per-round budget shrinks as the number of rounds grows, which increases the amount of added noise and can degrade performance; (3) their privacy analysis lacks an improved budget composition for local differential privacy that captures the correlation between key and value induced by their algorithms; and (4) their random key sampling method, which is part of their algorithms, does not work well for a large key domain. Follow-up work by (Gu _et al._, 2020) introduced a non-interactive framework called PCKV with a better utility-privacy trade-off that overcomes the aforementioned limitations. In particular, it applies an advanced sampling procedure to enhance utility over the naive random sampling done by PrivKVM, requires only a single round, and provides a tighter analysis of the privacy budget consumption.
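For concreteness, the sketch below shows the naive independent perturbation that these specialized protocols improve upon: the key is randomized with \(k\)-ary randomized response and the (clipped) value with the Laplace mechanism. PrivKV/PCKV instead perturb the pair jointly and sample keys more carefully; the key domain, budgets, and value bound below are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(0)


def naive_kv_perturb(key, value, key_domain, eps_key, eps_val, bound=1.0):
    # Key: k-ary randomized response; value: Laplace mechanism on the clipped value.
    k = len(key_domain)
    p_true = math.exp(eps_key) / (math.exp(eps_key) + k - 1)
    if rng.random() < p_true:
        noisy_key = key
    else:
        noisy_key = rng.choice([x for x in key_domain if x != key])
    v = float(np.clip(value, -bound, bound))
    noisy_val = v + rng.laplace(0.0, 2.0 * bound / eps_val)   # sensitivity 2 * bound
    return noisy_key, noisy_val


print(naive_kv_perturb("movie_42", 0.8,
                       key_domain=[f"movie_{i}" for i in range(100)],
                       eps_key=1.0, eps_val=1.0))
```

As the text notes, the drawback is visible here: perturbing the two components independently destroys the correlation between a key and its value.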
## 5 Existing solutions to set queries
Private set intersection/union computations have had a number of practical use cases, large enough to garner the attention of researchers over the last two decades (Pinkas _et al._, 2018). Below, we discuss a number of key approaches to solving these set query problems, mainly from the MPC community. A taxonomy of the privacy-preserving techniques used for these set queries is given in Table 2.
### Private set intersection
The existing approaches for the two-party setting include works based on homomorphic encryption (HE) (Huberman _et al._, 1999; De Cristofaro _et al._, 2010; Meadows, 1986; Ion _et al._, 2017; Freedman _et al._, 2016; Chen _et al._, 2017), works based on Oblivious Polynomial Evaluation (Freedman _et al._, 2004; Dachman-Soled _et al._, 2009), works based on Oblivious Transfer (Pinkas _et al._, 2014; Pinkas _et al._, 2015; Rindal and Rosulek, 2017; Pinkas _et al._, 2019), and works based on garbled circuit (Huang _et al._, 2012; Dong _et al._, 2013). Although these techniques are for the two-party setting, some of them were extended to the multi-party setting. Specifically, (Kolesnikov _et al._, 2017) have proposed oblivious programmable pseudo-random functions that are based on the idea of using oblivious transfer. Garbled bloom filter has been used in (_e.g._, (Inbar _et al._, 2018)), and HE has been used in (_e.g._, (Hazay and Venkitasubramaniam, 2017)).
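Several of the HE-based two-party constructions trace back to the commutative-exponentiation (Diffie-Hellman-style) idea of (Meadows, 1986; Huberman _et al._, 1999): both sides blind hashed items with private exponents, and doubly blinded values match exactly when the underlying items are equal. The toy sketch below shows only this matching logic; the prime, the hashing into the group, and the key handling are illustrative and do not constitute a secure implementation.

```python
import hashlib
import random

# Toy commutative blinding: hash each item into Z_p and exponentiate with a private key;
# (H(x)^a)^b == (H(x)^b)^a mod p, so both parties can compare doubly blinded values.
P = 2**127 - 1                     # a Mersenne prime; far too small for real use


def h(item):
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P


def blind(values, key):
    return {pow(v, key, P) for v in values}


alice_set = {"alice@x.com", "bob@y.com", "carol@z.com"}
bob_set = {"bob@y.com", "dave@w.com"}
a_key = random.randrange(2, P - 1)
b_key = random.randrange(2, P - 1)

# Round 1: each side sends its singly blinded hashes; Round 2: the other side blinds again.
alice_once = blind({h(x) for x in alice_set}, a_key)
bob_once = blind({h(x) for x in bob_set}, b_key)
alice_twice = blind(bob_once, a_key)          # {H(y)^(b*a)}
bob_twice = blind(alice_once, b_key)          # {H(x)^(a*b)}

print("intersection size:", len(alice_twice & bob_twice))   # -> 1 (bob@y.com)
```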
### Private set union
(Kissner and Song, 2005) have proposed the first protocol for the private set union, which leverages threshold additively homomorphic encryption and a polynomial representation. Another approach (Frikken, 2007) that adopts a similar technique reduces the communication/computation complexity of (Kissner and Song, 2005). Instead of using a polynomial representation, (Davidson and Cid, 2017) uses an inverted Bloom filter. While the above works use public-key operations, which increase their computation complexity, (Kolesnikov _et al._, 2019) proposed the first scalable PSU protocol using only symmetric-key techniques, while still using a polynomial representation for computing the private set union. However, their protocol requires repeated high-degree polynomial interpolations over the parties' datasets. To overcome this limitation, (Jia _et al._, 2022) proposed an algorithm that relies on data shuffling and avoids HE and the repeated polynomial operations.

\begin{table}
\begin{tabular}{|l|l|l|} \hline
 **Query** & **Privacy technique** & **Related works** \\ \hline
 \multirow{4}{*}{Private set intersection} & Homomorphic encryption & (Huberman _et al._, 1999; De Cristofaro _et al._, 2010; Meadows, 1986; Ion _et al._, 2017; Freedman _et al._, 2016; Chen _et al._, 2017; Hazay and Venkitasubramaniam, 2017) \\ \cline{2-3}
 & Oblivious polynomial evaluation & (Freedman _et al._, 2004; Dachman-Soled _et al._, 2009) \\ \cline{2-3}
 & Oblivious transfer & (Pinkas _et al._, 2014; Pinkas _et al._, 2015; Rindal and Rosulek, 2017) \\ \cline{2-3}
 & Garbled circuit & (Huang _et al._, 2012; Dong _et al._, 2013; Inbar _et al._, 2018) \\ \hline
 \multirow{2}{*}{Private set union} & Homomorphic encryption & (Kissner and Song, 2005; Frikken, 2007) \\ \cline{2-3}
 & Oblivious polynomial evaluation & (Kolesnikov _et al._, 2019; Jia _et al._, 2022) \\ \hline
 \multirow{2}{*}{Private cardinality testing} & Homomorphic encryption & (Ghosh and Simkin, 2019; Badrinarayanan _et al._, 2021) \\ \cline{2-3}
 & Oblivious transfer & (Branco _et al._, 2021) \\ \hline
\end{tabular}
\end{table}
Table 2: Taxonomy of the privacy-preserving techniques used in the set queries.
### Private cardinality testing
The problem of cardinality testing has been considered in the two-party setting by (Ghosh and Simkin, 2019; Bhowmick _et al._, 2021), and in different works for the multi-party setting by (Branco _et al._, 2021; Badrinarayanan _et al._, 2021) where these different works have developed efficient solutions in terms of the computation and communication costs while preserving the privacy of the users' data.
## 6 Existing solutions to matrix transformation
To solve the problem in (10), (Chai _et al._, 2022) proposed FedSVD, an efficient lossless federated SVD solution that scales to billion-scale data and ensures that the accuracy of the SVD computation is not affected. This is guaranteed by avoiding DP-style noise; instead, the parties mask their data in such a way that the masks cancel out when the server aggregates the responses from the different parties. Thus, the approach achieves the same accuracy as the centralized case where all the data are located in one place. (Liu and Tang, 2019) have proposed an algorithm that uses additive HE. On the other hand, (Chai _et al._, 2020; Berlioz _et al._, 2015) have proposed distributed privacy-preserving algorithms for recommendation systems that rely on matrix factorization. The algorithm proposed by (Chai _et al._, 2020) is based on HE, while the one proposed by (Berlioz _et al._, 2015) leverages differential privacy. A taxonomy of the privacy-preserving techniques used for matrix transformation is summarized in Table 3.

\begin{table}
\begin{tabular}{|l|l|l|} \hline
 **Query** & **Privacy technique** & **Related works** \\ \hline
 \multirow{3}{*}{Matrix factorization} & Homomorphic encryption & (Liu and Tang, 2019; Chai _et al._, 2020) \\ \cline{2-3}
 & MPC & (Chai _et al._, 2022) \\ \cline{2-3}
 & DP & (Berlioz _et al._, 2015) \\ \hline
\end{tabular}
\end{table}
Table 3: Taxonomy of the privacy-preserving techniques for matrix transformation.
## 7 Challenges and Open Opportunities
### Algorithmic security and privacy
In the previous Sections 4-6, we presented a number of privacy-preserving approaches to compute the FA queries. However, unlike FL, there does not exist a single common framework or algorithm for privately computing a diverse set of queries. A unifying approach to evaluate FA queries without leaking unnecessary information is an open question of great importance for deploying FA systems, as it would give them the flexibility to deal with a wide range of queries. Note that if the target is to solve the query while disregarding privacy, then a number of the queries discussed earlier can be computed and then used to derive answers for other queries. For example, the mode, mean, and median statistical queries can all be computed by first computing the FA histogram query and then deriving the target answers (_e.g._, the median) from it. This, however, leaks unnecessary information to the querier beyond the intended goal.
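The leakage point is easy to see concretely: once the querier holds a full histogram, it can locally derive the mode, mean, and median (and much more), even if it was only supposed to learn one of them. A small sketch, with a made-up histogram:

```python
def stats_from_histogram(hist):
    """Derive mode, mean, and median from a value -> count histogram,
    e.g. one produced by a federated histogram query."""
    total = sum(hist.values())
    mode = max(hist, key=hist.get)
    mean = sum(v * c for v, c in hist.items()) / total
    running, med = 0, None
    for v in sorted(hist):                 # walk the CDF until half the mass is covered
        running += hist[v]
        if running * 2 >= total:
            med = v
            break
    return mode, mean, med


print(stats_from_histogram({1: 3, 2: 5, 3: 1, 10: 1}))   # -> (2, 2.6, 2)
```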
One solution to address this information leakage is to employ secure enclaves (Costan and Devadas, 2016) at the querier to isolate code execution and memory in a trusted environment, where the code can be attested and verified while keeping its state secret until it publishes an output. Using this in our previous example, the querier can run code inside the enclave that aggregates the histogram and then extracts only the required target query from it. Although secure enclaves can theoretically address the security challenges arising from using a non-specialized analytics algorithm, current secure enclave models are limited to CPU resources and provide only limited memory, which limits their potential universal deployment.
With these limitations, it remains an open problem when and how much to make use of these trusted secure enclaves in the logic for computing the target query, and whether there exists a universal approach to securely and privately compute federated analytics queries that does
not need to use secure enclaves.
### Robustness to system failures
The quality of the computed analytics in a federated analytics system is prone to degradation due to a number of malicious or non-malicious system failures. Malicious failures can arise from attempts by some system parties to alter their data or responses in order to either degrade the system performance or steer it towards a premeditated result. In addition, the distributed nature of federated analytics and its reliance on parties that are not co-owned make it susceptible to party dropout or straggling, which can potentially happen during the execution of the federated analytics algorithm. The use of privacy-preserving mechanisms in federated analytics, such as secure aggregation (Bonawitz _et al._, 2016) and other MPC protocols, can hinder the detection of, and recovery from, these malicious or non-malicious faults. How to make federated analytics robust to such failures while giving up little or no privacy is an interesting open problem in the area.
Although a universal solution for robustness in federated analytics is still open, there exist approaches for handling failures in federated learning that lend themselves easily to the federated analytics framework. The non-malicious failure of clients was an overarching limitation of the vanilla secure aggregation protocol (Bonawitz _et al._, 2016). While the protocol design was inherently able to recover from these failures and compute the sum (mean) from the surviving clients, a huge recovery cost is incurred that can grow quadratically with the number of clients. Recent advances (Kadhe _et al._, 2020; So _et al._, 2022) have proposed more efficient designs of the secure aggregation keys that allow for more efficient recovery. These techniques lend themselves to algorithms that aggregate from all clients simultaneously. Some federated queries, however, require structured responses where a particular subset of clients needs to be active in each round. In this case, recovering the aggregate response from the surviving clients may be useless, and more sophisticated secure aggregation protocols are greatly needed. For example, one simple method would be to check whether the subset of surviving clients fails to satisfy particular properties and, if so, abandon the aggregation over this subset of clients in this round.
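For reference, the core mask-cancellation idea of secure aggregation (Bonawitz _et al._, 2016), which the recovery mechanisms above must preserve, can be sketched in a few lines. Here the pairwise masks are drawn from a shared seed purely for illustration; the real protocol derives them from pairwise key agreement and adds secret-shared recovery for dropped clients.

```python
import random


def pairwise_masks(client_ids, modulus, seed=0):
    """Agree on a random mask r_ij for every client pair; i adds +r_ij and j adds -r_ij."""
    rng = random.Random(seed)            # stands in for pairwise key agreement
    return {(i, j): rng.randrange(modulus)
            for i in client_ids for j in client_ids if i < j}


MOD = 2**32
clients = {"a": 5, "b": 11, "c": 2}      # private values
masks = pairwise_masks(clients, MOD)

masked = {}
for i, x in clients.items():
    y = x
    for (u, v), r in masks.items():
        if i == u:
            y += r
        elif i == v:
            y -= r
    masked[i] = y % MOD                  # what the server actually receives

# Individual masked values look random, but the masks cancel in the sum.
print(sum(masked.values()) % MOD)        # -> 18
```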
For malicious failures that try to poison a client's dataset, data sanitization (Cretu _et al._, 2008; Steinhardt _et al._, 2017) and anomaly-detection (Blanchard _et al._, 2017) techniques, which aim to detect or remove anomalous data, have typically been used to address this. However, these techniques typically rely on access to some subset of the clients' data at the server or the availability of data that is sampled from the same distribution, which makes them incompatible with privacy-preserving approaches employed in federated analytics. It remains an open problem whether we can use these failure mitigation techniques in federated analytics without giving up privacy or if new defense approaches need to be developed to address malicious failures in federated analytics.
### Participation incentive mechanisms
In parallel to the development of efficient and secure approaches for federated analytics, developing appropriate mechanisms to incentivize participation is a critical open question for federated analytics systems. This is particularly important in scenarios where the data owners are competitive entities such as financial institutions or enterprises, where the default strategy is not to collaborate with other competitors. Forms of incentive in the cross-silo setting can be regulatory by a governing entity (for example, the FDIC wants to detect fraudulent activity across different banks (Elkordy _et al._, 2022a)), or for shared operational stability, by jointly computing the salary quantiles across a cohort of companies (Kenthapadi _et al._, 2017). In the case of cross-device (individual) clients, incentives can include provided services, and/or monetary gain. From a service perspective, federated analytics promises users potential improvement in the quality of their service experience, _e.g._, a higher accuracy word predictor in Gboard or better estimation of travel times in navigation applications. In other scenarios, the incentive can be individual welfare, similar to the contact tracing analytics performed using private set intersections during the COVID-19 pandemic.
In either cross-silo or cross-device, a central challenge is balancing incentive with the heterogeneity of data and contribution (_e.g._, in terms of the data size). To address this, careful design should be taken into account to ensure clients with more data are not discouraged due to the
non-proportionality of the incentives to their contributions, as well as, not pushing away clients with less data by not implementing worthwhile incentives.
### Decentralization and trust
Our discussions so far always considered a central querier that poses intermediate questions to the clients and aggregates their responses in order to arrive at the query answer (this can be in one-shot or iteratively). Such a model makes sense for queries where the question implies an authoritative entity (for fraud detection for instance) or a large company (for product analytics) is asking the query. However, for a population of clients that wish to collaboratively learn a property of their joint dataset, handling the query computation distributively can be more desirable. The key idea of decentralized analytics is to rely on peer-to-peer communications between the clients to answer the query, while still maintaining the privacy and security of exchanged information about the local datasets. Computing decentralized analytics can find application in scenarios such as the evaluation of trained models that are stored on the blockchain (Shayan _et al._, 2020) or to crowd-source the computation of percentiles (_e.g._, median) of employee salaries of the technology sectors without the pre-requisite of having the parent companies agree to perform this federated computation.
There has been a wide array of works in MPC that develop decentralized solutions for secure computation, particularly for private set intersection problems (see SS5.1). However, such solutions assume that the communication graph of clients is fully-connected and undirected. This can lead to inefficient protocols, particularly as the number of parties increases. Furthermore, sparse and directed communication graphs can model more diverse scenarios, for instance, when the clients are not co-located or when communication goes in a single direction (_e.g._, due to different social network connection tiers).
An interesting aspect of decentralized federated analytics is its decreased robustness to system failures (see the discussion in SS7.2) due to the absence of a centralized entity that can potentially filter out malicious contributions or recover the system in the case of party drops. The design of incentive mechanisms for participation in a decentralized scenario is also a critical open research direction, as coordinating
incentives is also impacted by the absence of a central coordinator.
One recent promising approach to address decentralized analytics challenges is to use blockchains to keep track of intermediate updates and verify that intermediate clients in the communication graph do not act maliciously during the aggregation of updates. The Biscotti framework (Shayan _et al._, 2020) in the context of federated learning can be easily extended to mechanisms that rely on iterative updates and secure aggregation. In Biscotti, the blockchain ledger uses verifiable random functions to ensure that the aggregation contributed by a user is truly the resultant of the stored encoded intermediate updates. It also uses DP to ensure the privacy of these stored encodings. An adaptation of a blockchain solution for decentralized federated analytics can lead to more flexible algorithms that are crowd-operated without the requirement to trust a centralized aggregator/querier entity.
### Cross-silo federated analytics on the cloud
In previous sections, we assume that FA clients own their data and process the data in local and trusted environments when responding to a query. However, in real-world deployments, instead of maintaining local data centers and keeping the data on the local side, FA clients typically would use third-party public cloud services such as Microsoft Azure, Google Cloud, Amazon Web Services, IBM Cloud, and Alibaba Cloud, to store and process their data. Outsourcing data to such third-party clouds has emerged as the de facto model for data storage and processing for numerous benefits, such as improved availability, lower cost, and improved service.
Using clouds in an FA system, however, poses additional security and privacy challenges due to the untrusted nature of public clouds. A public cloud may be curious and wish to learn some information about the data of the FA clients. To protect data from such adversarial clouds, FA clients can use two classes of solutions for secure data outsourcing. The first is called _single cloud-based solutions_, in which clients encrypt their data and use a single cloud to store it. The second is called _multi-cloud-based solutions_, in which a client partitions their data into several parts, _e.g._, secret-sharing shares, and stores those parts in different clouds so that no single cloud can get the complete data.
In the following subsections, we will discuss solutions in the literature that address security and privacy challenges in the two aforementioned outsourcing settings. We will use the set intersection query as the running example throughout our discussions, since it is difficult, complex, and important in query processing, and most existing works focus on this type of query.
#### 7.5.1 Single cloud-based solutions.
In single cloud-based solutions, clients encrypt their local data and use a single cloud to store the data. (Abadi _et al._, 2016; Abadi _et al._, 2017; Kamara _et al._, 2014; Kerschbaum, 2012; Liu _et al._, 2014; Qiu _et al._, 2015; Abadi _et al._, 2020; Zhang _et al._, 2017) allow clients to outsource their private datasets and process Private set Intersection (PSI) tasks without downloading the datasets. (Abadi _et al._, 2017) ensures that the cloud can only compute set intersection after obtaining the permission of all the clients, and the computation results will be protected from the cloud. (Abadi _et al._, 2016; Kamara _et al._, 2014) provided a watermark-based verification approach for queries over outsourced encrypted datasets. (Abadi _et al._, 2016) can also detect malicious cloud (_i.e._, an adversarial cloud that may tamper the data stored on it) by inserting secret values in the real datasets to the cloud each time to process a PSI query. By checking whether the result set contains the secret values, the clients will know whether the query result is correct or not. (Kerschbaum, 2012) shares secrets between the cloud and the clients to pre-process datasets when outsourcing the datasets. This approach is collusion-resistant if one client and the public cloud collude. However, it requires a client to encrypt the datasets with different encryption keys for set intersections with different clients. (Liu _et al._, 2014) delegates PSI computation over randomized datasets to a cloud. Each client computes the hash value of its dataset using a general-purpose hash function, then randomizes each hashed data with a random integer. (Qiu _et al._, 2015) applied fine-grained authorization that enables the cloud to perform queries without leaking any data. When a client A asks for a matching request with another client B, A first negotiates a token with B so that A can delegate the computation over the outsourced encrypted datasets to the cloud server, and such operations require a trusted third party to generate a token on behalf of the clients.
With the exception of (Kamara _et al._, 2014), the aforementioned techniques have quadratic/exponential complexity or use expensive cryptographic techniques (Qiu _et al._, 2015), and as a result, do not support large-sized datasets at the FA clients. While (Kamara _et al._, 2014) scales better, it does not support aggregation, and, moreover, reveals which item is in the intersection set. Fed-K-PSI (Elkordy _et al._, 2022a) is a different variant of the server-based federated PSI. Each record on the client's side is represented by a key-value pair, and the server is the entity that is interested in knowing the set of identifiers that appears associated with the same value at least \(K\) times. One of the main components of Fed-K-PSI is the secure aggregation protocol that has been widely used in FL setting (So _et al._, 2022; Jahani-Nezhad _et al._, 2022; Elkordy _et al._, 2022b; Elkordy and Avestimehr, 2022; Bonawitz _et al._, 2016)
#### 7.5.2 Multi-cloud-based solutions.
In multi-cloud-based solutions, a client partitions his/her local data into several parts, _i.e._, shares, and stores each share at a different cloud. Each cloud only has partial information, and thus a single cloud cannot learn the actual dataset (Bater _et al._, 2017; Volgushev _et al._, 2019; Li _et al._, 2021; Corrigan-Gibbs and Boneh, 2017). To partition data into shares, Shamir's secret sharing (Shamir, 1979) is the most widely used technique.
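The simplest instance of this idea is additive secret sharing, a lighter-weight relative of Shamir's scheme that systems such as Prio and Conclave (discussed below) use for aggregation. A minimal sketch, with toy values and an arbitrary prime modulus:

```python
import random

P = 2**61 - 1   # arithmetic over a prime field; 2^61 - 1 is a Mersenne prime


def share(secret, n_clouds):
    """Split an integer into n additive shares that sum to the secret mod P."""
    shares = [random.randrange(P) for _ in range(n_clouds - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares


# Two clients outsource a value each to three clouds; no single cloud learns anything.
client_values = [37, 1205]
cloud_shares = list(zip(*(share(v, 3) for v in client_values)))   # shares grouped per cloud

# Each cloud adds up the shares it holds; summing the per-cloud results reveals only the total.
per_cloud_sums = [sum(s) % P for s in cloud_shares]
print(sum(per_cloud_sums) % P)   # -> 1242, without any cloud seeing 37 or 1205
```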
Prio (Corrigan-Gibbs and Boneh, 2017) is a privacy-preserving system for collecting statistics that allows multiple clients to upload their data in shares to multiple clouds, and these clouds execute only aggregation operations - count, max/min/median. Prio allows the servers to verify the data they receive before storing it at their end. However, Prio only offers a mechanism for confirming the maximum number if the maximum number is known, and it does not provide any mechanism to compute the maximum/minimum number. Conclave (Volgushev _et al._, 2019) is an additive sharing-based system that allows executing SQL queries over data from multiple clients. Conclave allows partitioning the computation such that parts of the computation can be executed at the client over cleartext and the remaining parts can be executed over additive shares. For example, a join query with selection can be partitioned such that the selection condition can be executed at clients, and
then the clients create additive shares of the data that qualifies the selection condition. On the additive shares, a join query over the additive shared data belonging to multiple clients can be executed. Two other systems similar to Conclave are Senate (Poddar _et al._, 2021), which allows collaborative SQL processing among multiple clients without using the cloud, and SMCQL (Bater _et al._, 2017), which is a garbled circuit based system supporting PSI via join and aggregation operations. However, these systems are inefficient when processing large datasets due to either potential memory outage and/or multiple communication rounds in the cloud. For example, SMCQL takes \(\approx\) 23 hours over 23M values, while Conclave takes 8 mins over 4M values. Furthermore, to execute PSI via join operation, Conclave needs to reveal the joining column in cleartext to a trusted third party. Helen (Zheng _et al._, 2019) and Cerebro (Zheng _et al._, 2021) are two recent systems that perform collaborative machine learning tasks without using the cloud. Another recent system for executing queries in the multi-cloud-based is Prism (Li _et al._, 2021). Prism uses both additive shares to support Private Set Intersection (PSI)/Union (PSU) operations and multiplicative shares to offer aggregation. Furthermore, Prism (Li _et al._, 2021) is able to support query executions over large datasets and multiple clients. To securely execute a computation, Prism needs at most three non-colluding cloud servers. Prism does not require communication among servers during/after/before the computation, and, consequently, is able to support PSI/PSU over 20 million values in 8 seconds. Furthermore, Prism is the only system that supports result verification operations.
## 8 Conclusion
In this article, we provide an overview of federated analytics, a privacy-preserving paradigm to solve queries over distributed data owned by multiple clients. We discussed the unique properties of federated analytics and how it relates to FL. We also provide a proposed taxonomy for different classes of queries in federated analytics and a survey of existing solutions in classical areas of distributed computing and secure computation. Finally, we discussed several challenges and open directions for the application and deployment of FA systems at scale. Addressing these challenges can help bring FA systems closer to being
deployed in more practical scenarios to answer a wider range of queries.
## 9 Acknowledgements
This work is supported by NSF grants CCF-1763673, CNS-2002874, Defense Advanced Research Projects Agency (DARPA) under Contract No. FASTNICS HR001120C0088 and HR001120C0160, ARO grant W911NF-22-1-0165, and gifts from Intel, Qualcomm, and Cisco.
|
2307.04480
|
Chromatic dispersion and thermal coefficients of hygroscopic liquids: 5
glycols and glycerol
|
Chromatic dispersion and thermal coefficients of 6 hygroscopic liquids:
ethylene glycol, diethylene glycol, triethylene glycol, tetraethylene glycol,
propylene glycol (propane-1,2-diol), and glycerol were measured in the range
from 390 to 1070 nm for temperatures from 1 to 45 °C. A modified Abbe
refractometer was utilised. Special care was taken to avoid contamination of
the liquids under the test with water and solid particles. The measurement
uncertainties were analysed. It was noticed that (in the given range and within
the available measurement accuracy) the dependence of the refractive indices on
the wavelength and temperature could be considered independently. Thus, thermal
coefficients were found for each wavelength used, and their weak dependence on
the wavelength was recognised. Then the Sellmeier equation was fitted to the
experimental results for each temperature.
|
Daniel Jakubczyk, Gennadiy Derkachov, Kwasi Nyandey, Sima Alikhanzadeh-Arani, Anastasiya Derkachova
|
2023-07-10T11:01:58Z
|
http://arxiv.org/abs/2307.04480v3
|
# Chromatic dispersion and thermal coefficients of hygroscopic liquids: 5 glycols and glycerol
###### Abstract
Chromatic dispersion and thermal coefficients of 6 hygroscopic liquids: ethylene glycol, diethylene glycol, triethylene glycol, tetraethylene glycol, propylene glycol (propane-1,2-diol), and glycerol were measured in the range from 390 to 1070 nm for temperatures from 1 to 45\({}^{\circ}\)C. A modified Abbe refractometer was utilised. Special care was taken to avoid contamination of the liquids under the test with water and solid particles. The measurement uncertainties were analysed. It was noticed that (in the given range and within the available measurement accuracy) the dependence of the refractive indices on the wavelength and temperature could be considered independently. Thus, thermal coefficients were found for each wavelength used, and their weak dependence on the wavelength was recognised. Then the Sellmeier equation was fitted to the experimental results for each temperature.
## Introduction
Achieving ultimate accuracy in optical remote sensing and particle characterisation requires accurate values of refractive indices of characterised materials and host media for a given wavelength and temperature. Since we tackle such issues in our research (e.g. [1, 2, 3]), we've looked for the available refractive index data and usually found it insufficient for our purposes. Thus, we decided to build a dedicated setup to measure the refractive indices of liquids as a function of both wavelength and temperature. However, since we studied popular hygroscopic liquids, the results seem to be worth sharing. A specific application that may serve as a good example, is industrial dehydration of natural gas with glycols (most prominently triethylene glycol) [4]. The amount of absorbed water could be accurately assessed by refractive index measurement of the mixture, which calls, among others, also for accurate refractive index data of the pure liquids.
Accurate measurements of refractive indices - their chromatic dispersion and temperature coefficient in particular - of hygroscopic liquids, require special care to avoid contamination with water. A glimpse into e.g. Landolt-Bornstein database [5, 6] reveals a large spread of results obtained by different authors over the past decades, which may be due precisely to pollution issues.
We shall briefly discuss some of the results that can be found in the literature on the ethylene glycol (EG) - a comparatively well investigated liquid, widely used as an engine coolant, antifreeze, de-icing agent, polymerisation precursor and desiccant. The results we discuss are presented in Fig. (2) together with the results of our experiments. Solid black dots represent different n\({}_{\rm D20}\) measurements taken until late 1980s. It can be noticed that there are more outliers towards the lower values. The newer results of Tsierkezos [7] or Jimenez [8] - taken also at different temperatures (open stars and diamonds respectively), belong to the higher ones. The old (1916) but extensive results of Karvonnen on chromatic dispersion are in agreement with these. It seems to indicate that even before the molecular sieves came into everyday use, controlling the contaminating water content was possible with well-planned procedures. Our results are also in agreement with all the later mentioned. The contemporary measurements of chromatic dispersion in EG are rather sparse [9, 10, 11]. A fairly recent work by Sani and Dell'Oro [10, 11] seems promising. An indirect method was utilised - the absorption (imaginary part of the refractive index) in EG was measured for a very wide range of wavelengths and Kramers-Kronig relation was invoked. However, neither the temperature, at which the dispersion curve was obtained (possibly 20\({}^{\circ}\)C - room temperature), nor the purity of EG was stated. The older measurements of Voellemy [12] at 21.9 \({}^{\circ}\)C and Timmermans et al. [13] at 15\({}^{\circ}\)C are consistent with those, while contemporary measurement of Kozma et al. at 22\({}^{\circ}\)C [9] clearly is not.
Most of the systematic measurements versus (either) wavelength or temperature are about a century old. Measurements simultaneously dependent on wavelength and temperature are rare. In this work, we present such measurements, performed for 5 commercially available glycols and glycerol. We took extreme care not to contaminate them with water beyond the amount stated by the manufacturer, though we did not use molecular sieves, to avoid possible contamination with nanoparticles, which is crucial for our applications (light scattering). We measured the dispersion of their refractive indices from 394 to 1070 nm for temperatures from 1 to 45\({}^{\circ}\)C. We describe the setup and the procedures we used in detail.
## The refractometer setup
The experimental setup - see Fig. 1 - was based on a commercial Abbe refractometer (AR-4, Muller), which we modified to measure the chromatic dispersion of refractive index and its temperature dependence.
We partly followed [14] (compare also e.g. [15]). So the compensator (2 Amici prisms) was set to maximum dispersion. In consequence, the index of refraction read from the scale had to be corrected with the formula (6) from [14], where we verified the prism material and apex angle by carefully calibrating the device with water. The light at a desired wavelength was provided through a cylindrical lens from a monochromator (SPM 1, Carl Zeiss Jena [16]) with halogen lamp (H1 automotive bulb) illumination. Monochromator calibration was performed _in-situ_ with a small grating spectrometer (USB4000, Ocean Optics; 1.4 nm resolution). We equipped the refractometer with a 14-bit digital camera (GC651MP, Smartek vision) looking through the refractometer's eyepiece with an additional camera objective (\(f\) = 6 mm, \(f\)/1.6, aberration correction including IR), so that both the shadow (resulting from the total internal reflection) with the crosshair superimposed and the scale were in the FoV simultaneously. In this way we could obtain sensible measurements in the spectral range from 390 to 1071 nm. The image of the shadow was processed numerically with the in-lab written Matlab program (clipping, background subtraction, vertical summation, smoothing) so the position of the shadow edge was represented as the minimum of the derivative of brightness - corresponding to the inflection point on the shadow edge. The crosshair centre was determined by pointing at it in a magnified image at the beginning of the measurement series. Thus, the measurement consisted of adjusting the derivative minimum to the crosshair centre.

Figure 1: Drawing of the measurement setup. Protective housing represented schematically.

The refractometer allows for measurements at different temperatures by circulating a liquid at a desired temperature through the prisms jacket. The temperature of the circulating cooling/heating liquid was maintained with an in-lab built Peltier element-controlled heat exchanger with a local stabilisation loop. A separate K-type thermocouple was placed directly next to the prisms-liquid contact surface to measure the temperature of the liquid accurately. It was measured with the calibrated CHY506R electronic thermometer (CHY Firemate Co.). Dry, filtered N\({}_{2}\) gas, obtained from liquid nitrogen, was flowed through the refractometer chassis and into the (plastic) protective housing in which the device was kept. The temperature of the N\({}_{2}\) gas was stabilized by passing it through a gas-liquid heat exchanger in a thermal bath stabilised down to \(\pm\)0.2\({}^{\rm o}\)C. Depending on the temperature of the refractometer prisms, the temperature of the bath was set between -5 and 0\({}^{\rm o}\)C to keep it significantly lower than that of the prisms. The pipe leading the gas to the refractometer was thermally insulated, but the heat transfer was found significant there. Thus, the N\({}_{2}\) temperature was measured continuously at the entrance of the refractometer to ensure that during the experimental run it was stable down to \(\pm\)1\({}^{\rm o}\)C. Depending on the temperature of the bath, it stabilised in the 1-6\({}^{\rm o}\)C range. In this way, we (i) stabilized the temperature of the refractometer body, which enabled good calibration of the device; (ii) prevented condensation of water vapour (as well as other vapours) on the optical surfaces in the device; (iii) minimized diffusion of atmospheric water into the measured liquid. The relative humidity (RH) measured in the enclosure was always below 24 % and the contact area of the sample in between the prisms with the enclosure atmosphere was \(\sim\)10 mm\({}^{2}\) (in comparison to \(\sim\)10\({}^{3}\) mm\({}^{2}\) in contact with glass). It is difficult to obtain a very low humidity in a fairly large plastic housing, because of relatively high residual water content in plastics (see e.g.: [17, 18]). However, it was confirmed with a long-time measurement that the refractive index of a sample sitting between the prisms remains constant for several hours. An ample time (both for calibration and actual measurement) was always allowed for the temperature of the device to stabilise, where the primary condition was the temporal stability of the observed shadow line.
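The shadow-edge localisation described above (clipping, background subtraction, vertical summation, smoothing, and taking the minimum of the brightness derivative) was implemented in an in-lab Matlab program; the following NumPy sketch reproduces the same processing chain on a synthetic frame, with illustrative crop and smoothing parameters rather than the actual ones.

```python
import numpy as np


def shadow_edge_position(frame, crop=None, smooth_px=9):
    """Locate the total-internal-reflection shadow edge in a refractometer image:
    clip, subtract background, sum columns vertically, smooth, and take the minimum
    of the brightness derivative (the inflection point of the edge)."""
    img = frame[crop] if crop is not None else frame
    img = img.astype(float) - np.median(img)           # crude background subtraction
    profile = img.sum(axis=0)                          # vertical summation -> 1D brightness
    kernel = np.ones(smooth_px) / smooth_px
    profile = np.convolve(profile, kernel, mode="same")
    return int(np.argmin(np.gradient(profile)))        # pixel column of the edge


# Synthetic test frame: bright half, dark half, edge blurred around column 300.
x = np.arange(640)
frame = np.tile(1000.0 / (1.0 + np.exp((x - 300) / 5.0)), (480, 1))
frame += np.random.default_rng(0).normal(0, 5, frame.shape)
print(shadow_edge_position(frame))                     # ~300
```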
Since for water there exist widely recognised systematic measurements of spectral dispersion versus temperature [19], the device was calibrated with water at a given temperature before every measurement series. This fundamentally precluded calibration below 0\({}^{\rm o}\)C. Furthermore, in the case of water, for \(T\)\(\rightarrow\)0\({}^{\rm o}\)C, d\(n\)/d\(T\)\(\rightarrow\)0, which means that the calibration accuracy diminishes significantly towards 0\({}^{\rm o}\)C. Thus, for measurements at 1\({}^{\rm o}\)C the calibration was later augmented with the data for EG we obtained. On the other hand, at elevated temperatures, the water between the prisms dried out faster, making calibration increasingly difficult. So, \(\sim\)0.5 ml (an ample amount) of distilled water was slowly (to avoid forming bubbles) poured on the surface of the prism with a clean disposable syringe. The packages with disposable syringes were kept under vacuum to dry the syringe plastic as far as possible. The excess of water was squeezed out and removed on closing the top prism. The calibration was performed first of all at 589 nm (sodium D-line) and verified at 394 and 1071 nm. After the calibration, water was removed and the surface of the prisms was carefully cleaned with a soft tissue and propanol. Then, the prisms were further dried with a strong nitrogen gas flow (compressed N\({}_{2}\) of purity better than 99.8%). Extreme care must be taken, because the amount of liquid used for measurements is minute and even a small addition of water (or other substances) affects the measurements.
After the drying, \(\sim\)0.5 ml of the desired liquid was poured with a new disposable syringe, as was done with water, and a series of measurements was taken at a desired temperature, starting from the infrared towards the ultraviolet. Since a measurement series takes about 30 min, at the end the measurements for the longest IR wavelengths were repeated to ensure that there was no change in experimental conditions, e.g. due to mechanical creep in the apparatus. Furthermore, after the measurement of the sample, the prisms were cleaned and the calibration with water was rechecked, in order to exclude any systematic errors introduced by creep in the apparatus during the measurement of the sample. The unsealed bottles with the hygroscopic liquids were loosely recapped and stored under vacuum to ensure that they do not absorb atmospheric water. The lab was air-conditioned and a temperature of 22\(\pm\)1\({}^{\circ}\)C was maintained, also to keep the humidity in the lab relatively low (\(\sim\)45%). However, there was no stabilisation of humidity in the lab, and we observed some variation with the season of the year.
The measurement series for each temperature were repeated several times, the outlying series were discarded, and the remaining ones were averaged.
Figure 2: Refractive index of ethylene glycol versus wavelength and temperature. Dashed lines represent fits with the obtained formula (Eqns. (1-3)) with parameters from Tabs. 1 and 2. The uncertainty of the wavelength is shown for clarity only for 40\({}^{\circ}\)C in left panel. Markers pertaining to data from [7] and [8] are colour-coded according to coding of our data. Right panel: magnification of the region around the sodium D-line – most of the literature data falls in this region.
We sampled the following liquids: (i) ethylene glycol, anhydrous, 99.8% Sigma-Aldrich lot # STBG3967V; (ii) diethylene glycol, \(\geq\)99.0%, Sigma lot # BCBT4833; (iii) triethylene glycol, 99%, Alfa Aesar lot # 10198029; (iv) tetraethylene glycol, 99%, Alfa Aesar lot # M17E015; (v) propylene glycol (propane-1,2-diol), \(\geq\)99.5%, Sigma-Aldrich lot # MKC80613V; (vi) glycerol, BioUltra, anhydrous, \(\geq\)99.5%, Sigma lot # BCBS7814V.
Data processing and accuracy considerations
After gathering the dataset \(n(\lambda\),\(T)\) - refractive indices for a set of (vacuum) wavelengths \(\lambda\) and temperatures \(T\) in the full available range (compare Fig (2)) - it was found that the dependence of \(n\) on \(\lambda\) and \(T\) (in \({}^{\circ}\)C) could be decomposed for each liquid under study, with \(n(T,\,\lambda\)=const) considered linear within our measurement accuracy (see Fig. 4):
\[n(\lambda,T)=n(\lambda,20)+\frac{\mathrm{d}n(\lambda,T)}{\mathrm{d}T}(T-20)\enspace. \tag{1}\]
Thus, \(\mathrm{d}n/\mathrm{d}T\) was found from a linear fit for each experimental \(\lambda\) point. The mean relative standard error of \(\mathrm{d}n/\mathrm{d}T\) is below 1% for all studied liquids. In inset in Fig. 4, two such fits at different \(\lambda\) (central and peripheral) for EG are presented. Followingly, in Fig. 4 itself, we present \(\mathrm{d}n/\mathrm{d}T(\lambda)\) for EG with the vertical error bars corresponding to \(\mathrm{d}n/\mathrm{d}T\) standard error and the horizontal - to the estimated uncertainty of \(\lambda\). Interestingly, \(\mathrm{d}n/\mathrm{d}T\) displays a (weak) non-linear dependence on \(\lambda\). A rational function, which is in line with the Sellmeier equation, was found to fit very well (COD=0.94 for EG):
Figure 3: The \(n(\lambda)\) traces for all temperatures for EG were shifted to overlap the trace corresponding to 20\({}^{\circ}\)C. The black dashed line shows the median of shifted traces. Inset: Sellmeier equation fitted (solid red line) to the obtained median points (black open circles).
\[\frac{{\rm d}n(\lambda,T)}{{\rm d}T}=A_{\rm T}+\frac{B_{\rm T}}{\lambda-C_{\rm T}}\ \, \tag{2}\]
where \(A_{\rm T}\) constant is associated mainly with thermal expansivity (density change) of the liquid and \(B_{\rm T}\) and \(C_{\rm T}\) parameters have similar sense as in Sellmeier equation (see below). Then, using Eqn. 1, \(n(\lambda)\) traces for all temperatures were shifted to overlap the trace corresponding to 20\({}^{\rm o}\)C (compare Fig. (3)). Median of all \(n\) for each \(\lambda\) was found and a two-pole Sellmeier equation
\[n^{2}(\lambda,20)=A+\frac{B_{\rm IR}\lambda^{2}}{\lambda^{2}-C_{\rm IR}}+ \frac{B_{\rm UV}\lambda^{2}}{\lambda^{2}-C_{\rm UV}} \tag{3}\]
was fitted, where \(A\) accounts for the short-wavelength absorption contributions to \(n\) at longer wavelengths, while \(B_{\rm IR}\) and \(B_{\rm UV}\) are absorption resonance strengths at wavelengths \(C_{\rm IR}^{\
(mainly thermal) stability and was estimated as \(\pm\)3\(\times\)10\({}^{-4}\). So the vertical error bars in \(n(\lambda\),\(T)\) figures are of the size of the symbols - circles.
The maximal error of the wavelength determination was associated with the accuracy of monochromator-halogen lamp system calibration. The precision of wavelength setting is better than 1.5 nm. However, in order to achieve adequately bright illumination the slits were fairly wide-open which resulted in spectrally non-uniform illumination. Spectral profiles obtained with a multimode fibre with NA\(=\)0.22 (P600-1-SR, Ocean Optics) were not wider than 14 nm HWHM at 1071 nm and 3 nm HWHM at 394 nm. This led to the total calibration uncertainty of \(\sim\)1%. The error bars for wavelength are shown for the trace corresponding to 40\({}^{\circ}\)C as an example. Due to the character of the dispersion curve, the influence of these errors on the fitted Sellmeier curve is comparable to that introduced by refractive index uncertainty.
The accuracy of the calibrated CHY506R electronic thermometer with K-type thermocouple - traced to temperature standard - was estimated as \(\pm\)0.2 K. Since a significant (up to 2K) temperature difference between prisms-liquid contact surface and the circulating liquid was observed, the temperature gradient across the prisms surface was checked with a thin calibrated (T-type) thermocouple (TT-T-40-SLE, by Omega, connected to CHY506R). It was found not greater than 0.4K, for highest temperature gradients in the setup (45\({}^{\circ}\)C at the prisms, 2\({}^{\circ}\)C at N\({}_{2}\) inlet to the refractometer, 22\({}^{\circ}\)C ambient in the lab). So, finally, the accuracy of temperature measurements can be estimated as -0.3/+0.6K. Again, the horizontal error bars in Fig. d\(n\)/d\(T(\lambda)\) (inset) would be of the size of the symbols.
As mentioned above, a significant error can be introduced into the measurement, if a hygroscopic sample is exposed to an ambient (humid) atmosphere, especially for longer periods. For instance, at 24% RH (as in the setup enclosure) the equilibrium water content in EG is \(\sim\)9 wt% (see Fig. 18 in [20]). This would lead to a decrease of refractive index by \(\sim\)0.01 (compare formula at Fig. 14 in [20]). It corresponds to 10% of whole difference between refractive indices of EG and water (\(n_{\rm EG}-n_{\rm H2O}\)). Similarly, the residual uncertainty of the refractive index due to the contamination with water for the EG lot that we used (see section "The refractometer setup") could be estimated as \(\sim\)2\(\times\)10\({}^{-4}\) (0.2% of \(n_{\rm EG}-n_{\rm H2O}\)), so the corresponding vertical error bars in \(n(\lambda\),\(T)\) figures would be below the size of the symbols - circles. In view of the above analysis, extreme care was taken to a avoid any prolonged contact of samples with the atmosphere or water-saturated containers - syringes, and the possible water intake during the experiment was carefully monitored as described in the previous sections.
As explained above, the measurements, in which the systematic errors were spotted by recalibration of the setup with water, were simply discarded.
The standard errors of Sellmeier equation coefficients and thermal coefficients (Tabs.1 and 2) are reported by the fitting procedure (simplex algorithm). Since, from the point of view of the optimization algorithm, both equations are over-parameterized, the errors tend to be significant,
and the exact physical meaning of the values obtained is somewhat questionable. However, they are quite satisfactory from an engineering point of view.
Results
The obtained Sellmeier equation coefficients and thermal coefficients, for \(\lambda\) expressed in \(\upmu\)m, are presented in Tabs. 1 and 2 respectively. All the obtained datasets (tables) are stored in Mendeley Data repository [21]. In Supplementary material (see below) we also present corresponding \(n(\lambda\),\(T)\) graphs for all studied liquids, comprising also relevant data from the literature. In case of less popular liquids, the chromatic dispersion data was not available for comparison.
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{liquid} & \multicolumn{6}{c|}{Sellmeier equation coefficients} \\ \cline{2-6} & \(A\) & \(B_{\rm IR}\) & \(C_{\rm IR}\) & \(B_{\rm UV}\) & \(C_{\rm UV}\) \\ \hline ethylene glycol & \(0.6\pm 0.8\) & \(0.0076\pm 0.0051\) & \(2.5\pm 0.7\) & \(1.4\pm 0.8\) & \(0.007\pm 0.004\) \\ \hline diethylene glycol & \(1.6\pm 0.1\) & \(0.03\pm 0.05\) & \(6\pm 7\) & \(0.5\pm 0.1\) & \(0.02\pm 0.003\) \\ \hline triethylene glycol & \(1.24\pm 0.25\) & \(0.008\pm 0.009\) & \(3.1\pm 1.8\) & \(0.85\pm 0.25\) & \(0.012\pm 0.003\) \\ \hline tetraethylene glycol & \(1.48\pm 0.11\) & \(0.0035\pm 0.0032\) & \(2.25\pm 0.81\) & \(0.62\pm 0.11\) & \(0.0169\pm 0.0025\) \\ \hline propylene glycol & \(1.15\pm 0.28\) & \(0.008\pm 0.006\) & \(2.75\pm 0.95\) & \(0.9\pm 0.3\) & \(0.011\pm 0.003\) \\ \hline glycerol & \(1.61\pm 0.12\) & \(0.06\pm 0.09\) & \(8\pm 9\) & \(0.54\pm 0.12\) & \(0.018\pm 0.003\) \\ \hline \end{tabular}
Tab. 1 Sellmeier equation coefficients for \(\lambda\) in \(\upmu\)m for 5 glycools and glycerol, found from the presented experiments. Uncertainties represent standard errors.
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{liquid} & \multicolumn{3}{c|}{thermal coefficients} \\ \cline{2-4} & \(A_{\rm T}\) & \(B_{\rm T}\) & \(C_{\rm T}\) \\ \hline ethylene glycol & -2.643\(\times 10^{4}\pm 3\times 10^{7}\) & -7.5\(\times 10^{6}\pm 1\times 10^{7}\) & \(0.17\pm 0.01\) \\ \hline diethylene glycol & -3.132\(\times 10^{4}\pm 2\times 10^{7}\) & -5.61\(\times 10^{6}\pm 6\times 10^{8}\) & \(0.245\pm 0.011\) \\ \hline triethylene glycol & -3.122\(\times 10^{4}\pm 3\times 10^{7}\) & -6.3\(\times 10^{6}\pm 2\times 10^{7}\) & \(0.22\pm 0.03\) \\ \hline tetraethylene glycol & -3.570\(\times 10^{4}\pm 2\times 10^{7}\) & -6.51\(\times 10^{6}\pm 6\times 10^{8}\) & \(0.235\pm 0.009\) \\ \hline propylene glycol & -3.027\(\times 10^{4}\pm 3\times 10^{7}\) & -1.2\(\times 10^{6}\pm 2\times 10^{7}\) & \(0.72\pm 0.01\) \\ \hline glycerol & -2.395 \(\times 10^{4}\pm 5\times 10^{7}\) & -6.2\(\times 10^{6}\pm 2\times 10^{7}\) & \(0.18\pm 0.01\) \\ \hline \end{tabular}
Tab. 2 Thermal coefficients for \(\lambda\) in \(\upmu\)m for 5 glycools and glycerol found from the presented experiments. Uncertainties represent standard errors.
### Acknowledgements
This research was funded in whole or in part by National Science Centre, Poland, grant 2021/41/B/ST3/00069. For the purpose of Open Access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission
## References
* [1] Holyst R, Litniewski M, Jakubczyk D, Kolwas K, Kolwas M, Kowalski K, et al. Evaporation of freely suspended single droplets: Experimental, theoretical and computational simulations. Reports Prog Phys 2013;76. [https://doi.org/10.1088/0034-4885/76/3/034601](https://doi.org/10.1088/0034-4885/76/3/034601).
* [2] Kolwas M, Jakubczyk D, Do Duc T, Archer J. Evaporation of a free microdroplet of a binary mixture of liquids with different volatilities. Soft Matter 2019;15:1825-32. [https://doi.org/10.1039/C8SM02220H](https://doi.org/10.1039/C8SM02220H).
* [3] Derkachov G, Jakubczyk D, Wozniak M, Archer J, Kolwas M. High-precision temperature determination of evaporating light-absorbing and non-light-absorbing droplets. J Phys Chem B 2014;118:12566-74. [https://doi.org/10.1021/jp508823z](https://doi.org/10.1021/jp508823z).
* [4] Cao S, Liu P, Zhang L, Sun B, Zou H, Chu G, et al. Mass Transfer Study of Dehydration by Triethylene Glycol in Rotating Packed Bed for Natural Gas Processing. Ind Eng Chem Res 2018;57:5394-400. [https://doi.org/10.1021/acs.iecr.7b04813](https://doi.org/10.1021/acs.iecr.7b04813).
* [5] Wohlfarth C, Wohlfarth B. Refractive Indices of Organic Liquids. vol. 38B. Berlin/Heidelberg: Springer-Verlag; 1996. [https://doi.org/10.1007/b85533](https://doi.org/10.1007/b85533).
* [6] Wohlfarth C. Optical Constants \(\cdot\) Refractive Indices of Pure Liquids and Binary Liquid Mixtures (Supplement to III/38). vol. 47. Berlin, Heidelberg: Springer Berlin Heidelberg; 2008. [https://doi.org/10.1007/978-3-540-75291-2](https://doi.org/10.1007/978-3-540-75291-2).
* [7] Tsierkezos NG, Molinou IE. Thermodynamic properties of water + ethylene glycol at 283.15, 293.15, 303.15, and 313.15 K. J Chem Eng Data 1998;43:989-93. [https://doi.org/10.1021/je9800914](https://doi.org/10.1021/je9800914).
* [8] Jimenez E, Cabanas M, Segade L, Garc S, Casas H. Excess volume, changes of refractive index and surface tension of binary 1, 2-ethanediol + 1-propanol or 1-butanol mixtures at several temperatures 2001;180:151-64.
* [9] Kozma IZ, Krok P, Riedle E. Direct measurement of the group-velocity mismatch and derivation of the refractive-index dispersion for a variety of solvents in the ultraviolet. J Opt Soc Am B 2005;22:1479. [https://doi.org/10.1364/josab.22.001479](https://doi.org/10.1364/josab.22.001479).
* [10] Sani E, Dell'Oro A. Optical constants of ethylene glycol over an extremely wide spectral range. Opt Mater (Amst) 2014;37:36-41. [https://doi.org/10.1016/j.optmat.2014.04.035](https://doi.org/10.1016/j.optmat.2014.04.035).
* [11] Sani E, Dell'oro A. Erratum: Optical constants of ethylene glycol over an extremely wide
spectral range (Optical Materials (2014) 37 (36-41)). Opt Mater (Amst) 2015;48:281. [https://doi.org/10.1016/j.optmat.2015.06.039](https://doi.org/10.1016/j.optmat.2015.06.039).
* [12] Voellmy H. Uber die Dispersion ultraviolet Strahlen durch flussige organische Substanzen. Zeitschrift Fur Phys Chemie 1927;127U:305-57. [https://doi.org/10.1515/zpch-1927-12719](https://doi.org/10.1515/zpch-1927-12719).
* [13] Timmermans MJ, Hennaut-Roland M. Etude des constantes physiques de vingt composes organiques. J Chim Phys 1935;32:501.
* [14] Kedenburg S, Vieweg M, Gissibl T, Giessen H. Linear refractive index and absorption measurements of nonlinear optical liquids in the visible and near-infrared spectral region. Opt Mater Express 2012;2:1588. [https://doi.org/10.1364/ome.2.001588](https://doi.org/10.1364/ome.2.001588).
* [15] Rheims J, Koser J, Wriedt T. Refractive-index measurements in the near-IR using an Abbe refracometer. Meas Sci Technol 1997;8:601-5. [https://doi.org/10.1088/0957-0233/8/6/003](https://doi.org/10.1088/0957-0233/8/6/003).
* [16] Schiek O, Winter E. Two New Mirror Monochromators. Appl Opt 1965;4:195. [https://doi.org/10.1364/ao.4.000195](https://doi.org/10.1364/ao.4.000195).
* [17] International Organization for Standardization. ISO 62 2008.
* [18] Duncan BC, Broughton WR. Absorption and Diffusion of Moisture In Polymeric Materials. Meas Good Pract Guid 2007.
* [19] Harvey AH, Gallagher JS, Sengers JMHL. Revised Formulation for the Refractive Index of Water and Steam as a Function of Wavelength, Temperature and Density. J Phys Chem Ref Data 1998;27:761-74. [https://doi.org/10.1063/1.556029](https://doi.org/10.1063/1.556029).
* [20] The MEGGlobal Group of Companies. Ethylene Glycol Product Guide 2008:1-33.
* [21] Jakubczyk D, Derkachov G, Nyandey K, Alikhanzadeh-Arani S, Derkachova A. Refractive index of 5 glycols and glycerol versus wavelength and temperature. Mendeley Data 2023. [https://doi.org/10.17632/8tf9sspd6b.1](https://doi.org/10.17632/8tf9sspd6b.1).
## Supplementary Material
## Chapter 2
## Propylene glycol
Refractive index of propylene glycol versus wavelength and temperature. Dashed lines represent fits with the obtained formula (Eqns. (1-3) of the manuscript) with parameters from Tabs. 1 and 2 therein. The uncertainty of the wavelength is shown for clarity only for 40\({}^{\circ}\)C. Essential data from Landolt-Bornstein database is also presented (black dots, black open diamonds, black and green open stars). Newer or more exhaustive data referenced directly.
## Glycrol
Refractive index of glycerol versus wavelength and temperature. Dashed lines represent fits with the obtained formula (Eqns. (1-3) of the manuscript) with parameters from Tabs. 1 and 2 therein. The uncertainty of the wavelength is shown for clarity only for 40\({}^{\circ}\)C. Essential data from Landolt-Bornstein database is also presented (black open stars, black open triangles, black open circles, black dots, red open diamond). Newer or more exhaustive data referenced directly.
|
2303.16573
|
The effect of the COVID-19 health disruptions on breast cancer mortality
for older women: A semi-Markov modelling approach
|
We propose a methodology to quantify the impact on breast cancer mortality of
diagnostic delays caused by public health measures introduced as a response to
the COVID-19 pandemic. These measures affected cancer pathways by halting
cancer screening, delaying diagnostic tests, and reducing the numbers of
patients starting treatment. We introduce a semi-Markov model, to quantify the
impact of the pandemic based on publicly available population data for women
age 65{89 years in England and relevant medical literature. We quantify
age-specific excess deaths, for a period up to 5 years, along with years of
life expectancy lost and change in cancer mortality by cancer stage. Our
analysis suggests a 3-6% increase in breast cancer deaths, corresponding to
more than 40 extra deaths, per 100,000 women, after age 65 years old over 5
years, and a 4-6% increase in registrations of advanced (Stage 4) breast
cancer. Our modelling approach exhibits consistent results in sensitivity
analyses, providing a model that can account for changes in breast cancer
diagnostic and treatment services.
|
Ayse Arik, Andrew J. G. Cairns, Erengul Dodd, Angus S. Macdonald, George Streftaris
|
2023-03-29T10:13:31Z
|
http://arxiv.org/abs/2303.16573v2
|
The effect of the COVID-19 health disruptions on breast cancer mortality for older women: A semi-Markov modelling approach
###### Abstract
We propose a methodology to quantify the impact on breast cancer mortality of diagnostic delays caused by public health measures introduced as a response to the COVID-19 pandemic. These measures affected cancer pathways by halting cancer screening, delaying diagnostic tests, and reducing the numbers of patients starting treatment. We introduce a semi-Markov model, to quantify the impact of the pandemic based on publicly available population data for women age 65-89 years in England and relevant medical literature. We quantify age-specific excess deaths, for a period up to 5 years, along with years of life expectancy lost and change in cancer mortality by cancer stage. Our analysis suggests a 3-6% increase in breast cancer deaths, corresponding to more than 40 extra deaths, per 100,000 women, after age 65 years old over 5 years, and a 4-6% increase in registrations of advanced (Stage 4) breast cancer. Our modelling approach exhibits consistent results in sensitivity analyses, providing a model that can account for changes in breast cancer diagnostic and treatment services.
keywords: Breast cancer; Cancer mortality; COVID-19 pandemic; Excess deaths; semi-Markov model.
## 1 Introduction
The COVID-19 pandemic has claimed more than 6.2 million lives worldwide as of May 2022 (WHO, 2022). As a response, since the beginning of the pandemic, the UK entered three national lockdowns, with the first being introduced on 23 March, 2020. Cancer pathways have been seriously affected by the changes in health practices due to a halt in cancer screening (from late March 2020 till June 2020), significant increases in the number of patients waiting for key diagnostic tests for more than 6 weeks, and significant reductions in the number of patients starting cancer treatment. Cancer Research UK (CRUK) has reported that 3 million fewer people were screened for cancer in the UK
between March and September 2020. Moreover, the number of cancer patients starting a cancer treatment decreased by 12% between April 2020 and March 2021 compared to the pre-pandemic levels, whereas the number of people waiting for more than 6 weeks for key diagnostic tests has soared to 215,000 in March 2021 from 67,000 in March 2020 (CRUK, 2021). These figures sparked the fear of a shift to later diagnosis for people having the disease but not diagnosed yet. This could restrict the opportunities for feasible treatment and worsen cancer survival.
Recent published studies based on the National Health Service (NHS) cancer registration and hospital administrative dataset focus on identifying the impact on cancer survival in England of various changes in the availability of cancer treatment and services, in addition to health-seeking behaviour, as a result of national lockdowns. Lai et al. (2020) point out dramatic reductions in the demand for, and supply of, cancer services in response to the COVID-19 pandemic by showing that these reductions could increase excess mortality among cancer patients. Sud et al. (2020) indicate a significant reduction in cancer survival as a result of treatment delay, mostly disruption in cancer surgery. Maringe et al. (2020) also note substantial increases in avoidable cancer deaths in England as a result of diagnostic delays of over a year. Arik et al. (2021) report significant increases in type-specific cancer mortality as a result of diagnostic delays. Alagoz et al. (2021) project a small long-term cumulative impact on breast cancer (BC) mortality in the US over the next decade due to initial pandemic-related disruptions.
Early empirical studies suggested that COVID-19 is more likely to affect older people and those with comorbidity (Chen et al., 2020; Richardson et al., 2020; Grasselli et al., 2020; Zhou et al., 2020). Furthermore, developing COVID-19 has been shown to be a greater risk for cancer patients depending on type of malignancy, age, and gender (Pinato et al., 2020; Garassino et al., 2020; Lee et al., 2020; Saini et al., 2020). Pinato et al. (2021) reported that cancer patients in the UK have been more severely affected by the COVID-19 pandemic compared to those in continental Europe.
Part of the contribution of our study to the literature is providing a modelling framework, which goes beyond the aforementioned empirical work, to investigate the impact of a pandemic, such as COVID-19, on BC. Particularly, we are interested in how the pandemic, causing major disruption to the health service, may affect mortality associated with disorders normally treated by the health service. It is assumed that the pandemic may give rise to changes by preventing or delaying the detection or diagnosis of BC. We examine the impact of diagnostic delays up to 5 years as Maringe et al. (2020) state, 'the effect of delayed presentation on patients with cancer is not immediate, and premature death as a result might occur up to 5 years later...' (p. 1024). This is motivated by screening programmes and cancer treatments having been largely affected by lockdowns. According to CRUK (2021), 7,200 fewer cases of BC were diagnosed between April-December 2020 compared to the same period in 2019, 60% fewer cases were diagnosed via screening, whilst 22% fewer patients started treatment from April 2020 till March 2021, compared with the same period in 2019.
Quantifying the impact of cancer diagnosis delays by considering cancer stage is complex in the light of insufficient data, but a Markov approach provides a suitable modelling framework (Castelli et al., 2007; Lu et al., 2011; Adams et al., 2013; Buchardt et al., 2015; Hubbard et al., 2016; Baione and Levantesi, 2018; Hacariz et al., 2021; Soetewey et al., 2022). We establish a semi-Markov model with multiple states, including observed and unobserved BC cases, based on: (i) available cancer registration and deaths data in Eng
land, provided by the Office for National Statistics (ONS); and (ii) published clinical studies. Accordingly, we estimate age-specific, short-term excess deaths, in addition to years of life expectancy lost (YLL) from cancer, with particular emphasis on ages above 65.
This paper is organised as follows. In Section 2 we introduce the model for BC risk. In Section 3 we explain how to calibrate the model in a pre-pandemic environment. In Section 4 we introduce a couple of scenarios, namely 'pandemic' scenarios, in pandemic environment. In Section 5 we estimate excess deaths and YLLs under a pre-pandemic model calibration and pandemic scenarios. In Section 6 we provide a sensitivity analysis. In Section 7 we discuss our findings and their implications along with strengths and limitations of our approach.
## 2 Methodology
### Definitions of breast cancer stages
BC mortality is the most common cancer diagnosed in women, in addition to being one of the leading causes of death for women (ONS, 2019a; PHE, 2017). The most common type of BC is known to be 'invasive' BC that indicates cancer cells spreading from the ducts into the surrounding (breast) tissues, with the two most well-known ones are 'invasive ductal carcinoma' and 'invasive lobular carcinoma'. Invasive BC can be described from early to advanced stage BC (CRUK, 2020b). The clinical model of BC progression is a well-defined staging model of the form:
\[\text{No BC}\rightarrow\text{Stage 1 BC}\rightarrow\text{Stage 2 BC}\rightarrow\text{Stage 3 BC}\rightarrow\text{Stage 4 BC}\rightarrow\text{Dead from BC}\]
where a higher stage number shows that cancer tumour is bigger or has spread from breast to distant parts of the body, also known as'metastasis'. This staging model, namely TNM, categorises cancer from Stage 1 to Stage 4 based on the tumour (T) size, that can be between 1-4 with 1 for small tumours and 4 for large tumours; whether or not lymph nodes (N) have cancer cells, that can change between 0-3; and whether or not the cancer cells move to other parts of the body (M), that can be either 0 or 1 (ONS, 2017; CRUK, 2020a).
The progression from Stage 1 to Stage 4 is assumed to be real and physical, whether observed or not. It is possible that 'transition into Stage 1 BC' is the nearest equivalent in the model to 'onset of BC'. We assume that 'dead from BC' is accessible only from Stage 4, and 'dead from other causes' (not shown above) is accessible from all 'live' states.
The clinical staging model above takes no account of what is observed or unobserved, i.e. all women free of BC and dead from BC are observed. In reality, an individual in one of BC Stages 1-4 may be observed to be so, or unobserved, represented by separate states. Transitions are possible:
* forward through stages of BC; and
* from 'No BC' or an unobserved BC state to an observed BC state.
The latter possibility we take to be the same as 'diagnosis' event, that is the first occurrence of BC observed. Thus a woman who is diagnosed with Stage 3 BC makes a transition from either 'Stage 2, Unobserved' or 'Stage 3, Unobserved' to 'Stage 3, Observed' and so on.
### Modelling unobserved breast cancer
We distinguish BC death from other causes of death and define life histories accordingly, keeping in mind that the main focus of this work is providing a methodology on quantifying the impact of BC diagnostic delays.
In Figure 1, we introduce a model of BC progression, based on the stages described in Section 2.1, but introducing some simplifications (Section 2.3) based on the available data and published clinical studies (Section 3).
Figure 1 shows a schematic representation of a continuous-time model for the life history of a woman at age \(x\). Age-specific transition intensities from state \(i\) to state \(j\) are denoted by \(\mu_{x}^{ij}\), where \(x\) is age-at-entry to state \(i\). Age- and duration-dependent transition intensities at age \(x\) and duration \(z\) from state \(i\) to state \(j\) are denoted by \(\mu_{x,z}^{ij}\). Stages 1, 2 and 3 of BC combined are represented by States 1 and 2 in the model, State 1 being observed cases and State 2 being unobserved cases. All stage 4 cases of BC are represented by State 3 of the model, and are assumed to be observed. We note here that'stage' and'state' are distinct concepts in this paper.
In the semi-Markov model considered here, Figure 1, the usual Kolmogorov equations in a Markov model are replaced by a system of integral-differential equations, with integrals over duration being required for certain states. Often such integrals can be intractable. In our model, which has no more than one duration-dependent transition in any possible life history, the required integrals are of low dimension and the modified Kolmogorov equations can be solved numerically using standard methods (Appendix A). In particular, we apply a fourth-order Runge-Kutta scheme to solve the modified Kolmogorov equations under consideration (Macdonald et al., 2018).
### Modelling assumptions
We introduce the following modelling assumptions.
**A1**: States 1 and 2 both represent Stages 1-3 of BC progression. We do not attempt to model progression between these stages explicitly as this is not supported by available data. Note that Stages 1-3 BC have a similar pattern for one-year survival (ONS, 2016b). State 3 represents Stage 4 of BC progression. This accords with assumptions in some epidemiological studies (Zhao et al., 2020).
Figure 1: A breast cancer semi-Markov model in continuous time.
**A2**: State 5 ('Dead, BC') is accessible only from State 3 ('Metastatic BC'). That is, earlier stages of BC lead to death from BC only by first progressing to metastasis.
**A3**: All individuals entering State 3 are observed to do so, whether their progression prior to entering that state was observed or not. That is, death from BC without metastatic BC being noticed pre-mortem is rare enough to ignore (Redig and McAllister, 2013).
The model also includes a state representing unobserved cases of BC, State 2 ('Pre-metastatic Not Observed'). With the pandemic shock in mind, for the purpose of modelling _changes_ in BC mortality caused by dramatic changes in the health service, we add two more model assumptions relating to State 2:
**A4**: Neither the manner in which we observe BC, nor the presence of a pandemic, affect the overall new cases of cancer. Therefore, we assume the total transition from 'No BC' to BC stays constant. That is
\[\mu_{x}^{01}+\mu_{x}^{02}=\mu_{x}^{*}, \tag{1}\]
where \(\mu_{x}^{*}\) is independent of any particular pandemic scenario.
**A5**: Individuals in State 1 ('Pre-metastatic Observed') are assumed to be treated for BC, while individuals in State 2 are assumed not to be treated. Therefore, we assume \(\mu_{x,z}^{13}<\mu_{x,z}^{23}\) for the same age. Moreover, we assume that treatment given while in State 1, e.g. the type of treatment, does not depend on any particular pandemic scenario, so the transition intensities \(\mu_{x,z}^{13}\) and \(\mu_{x,z}^{23}\) also do not depend on any particular pandemic scenario.
A4 and A5 suggest a convenient parametrisation of the model:
\[\mu_{x}^{01}=\alpha_{x}\,\mu_{x}^{*},\qquad\mu_{x}^{02}=(1-\alpha_{x})\,\mu_{ x}^{*},\qquad\mu_{x,z}^{13}=\beta_{x,z}\,\mu_{x,z}^{23}\qquad(\beta_{x,z}<1), \tag{2}\]
where \(0<\alpha_{x}<1\) quantifies the proportional relationship between \(\mu_{x}^{01}\) and \(\mu_{x}^{02}\), and will later be used to determine pandemic scenarios. For simplicity, and lacking data to support other assumptions, we assume \(\alpha_{x}=\alpha\) and \(\beta_{x,z}=\beta\). We suppose that \(\mu_{x,z}^{23}\) represents the rate of progression to metastatic BC in the absence of treatment, and \(\beta\) measures the effectiveness of treatment. So, our approach assumes that \(\mu_{x}^{*}\) and \(\beta\) are fixed regardless of any pandemic scenario.
## 3 Calibration of the Model
The model is calibrated based on the population of women in England, in age groups 65-69, \(\ldots\), 85-89. These population estimates are the closest we have to represent the exposure in State 0 that are women to be free of BC. However, we note that these estimates do not distinguish whether or not a woman is actually free of BC, leading to a potentially higher exposure in State 0. The aim is to estimate occupancy probabilities for each model state at future times. Calibrating the model means estimating the distribution by age in State 0 between 1 January 2020 and 31 December 2024, and the transition intensities in the model. We rely on published clinical studies and a set of cancer data collected by the ONS. We describe the sources we use in the following sections.
### Available data: Population incidence and mortality rates of breast cancer
We consider new cancer diagnoses/registrations and deaths data between 2001-2017 in England, provided by the ONS. Cancer registrations are split by five-year age groups (20-24, 25-29,..., 85-89), type of tumour, single calendar year, and gender. Causes of death data have similar granularity, up to 2018. Corresponding mid-year population estimates are available from the ONS.
Figure 2 exhibits available ONS-provided data at various ages, including screening age groups 47-73, from 2001 to 2017 for cancer incidence and up to 2018 for mortality. Note that the first screening programme was introduced in 1988, targeting women aged 50-64. Later, the screening was extended to age 70 between 2002 and 2004, including the age groups 47-73 at which screening takes place since an announcement made in 2007 (Quinn and Allen, 1995; RAC, 2006; Duffy et al., 2010; NHS, 2021). In Figure 2, five-year age groups are represented by their mid-points. Figure 1(a) shows BC incidence, which is calculated as new cancer registrations divided by mid-year population estimates, and generally shows an increasing trend over calendar time at all ages with higher incidence at older ages. Figure 1(b) shows BC mortality, which is calculated as deaths from BC divided by mid-year population estimates, and points out a decreasing trend. Mortality from other causes, not including BC as a cause, shows a more heterogeneous distribution across different ages with a decreasing trend (Figure 1(c)).
Figure 2: Breast cancer incidence, mortality, and all-cause mortality (excluding breast cancer).
### Key transition intensities
For obtaining the transition intensities in Figure 1, and in the absence of a large-scale study covering all necessary transitions, we determine the key transition intensities based on available data and published studies, as shown in Table 1. What follows in this section summarises sources that we have used to calibrate the overall process.
For simplicity, we assume that transition intensities to death due to other causes from all 'live' states are equal to each other, particularly equal to \(\mu_{x}^{04}\), shown as follows:
\[\mu_{x}^{14}=\mu_{x}^{24}=\mu_{x}^{34}=\mu_{x}^{04}. \tag{3}\]
The transition intensity from State 0 to State 4, \(\mu_{x}^{04}\), are determined using deaths from other causes from 2010 to 2015 in England, divided by the corresponding mid-year population estimates in the same years (see Section 3.1). The time period is chosen so that it is consistent with the time period of other transition intensities.
Note that we ignore any time trend in BC incidence and mortality rates, or in mortality rates from other causes, in the calculation period (1 January 2020 to 31 December 2024). Also, to the best of our knowledge, there is no available literature regarding the level of \(\alpha\) and \(\beta\), which are used to determine \(\mu_{x}^{02}\) and \(\mu_{x,z}^{23}\), respectively, in (2). We consider a range of values between 0.4 and 0.8 for \(\alpha\) and between \(\frac{1}{5}\) and \(\frac{1}{10}\) for \(\beta\).
#### 3.2.1 Determining \(\mu_{x}^{01}\): Clinical diagnosis of breast cancer
We do not have suitable empirical data on clinical diagnosis by age and stage. This is particularly important for determining transitions to State 1 and State 3. Available data include BC registrations by stage in England for year of diagnosis 2012-2015 (ONS, 2016). However, it is not recommended to use this yearly information (ONS, 2016b), due to issues relating to the potential incomplete nature of the data. Therefore, we determine the transition intensities \(\mu_{x}^{01}\), based on 81% of overall cancer registrations, provided by the ONS, as suggested in ONS (2016b) (Table 1). For consistency with the \(\mu_{x}^{35}\) intensities, which were obtained based on data between 2010-2015 (see Section 3.2.4), we also determine \(\mu_{x}^{01}\) based on the same time period. The ONS mid-year population estimates for England, during the same time period, are used to calculate the exposure in State 0. The resulting transition intensities \(\mu_{x}^{01}\) are shown in Table 1, along with other key transition intensities.
We note that an alternative source for defining \(\mu_{x}^{01}\) could be cancer registrations reported by Rutherford et al. (2013, 2015) (See Table 1 in Rutherford et al. (2013, 2015)).
\begin{table}
\begin{tabular}{c c c c} \hline Age & \(\mu_{x}^{01}\) & \(\mu_{x}^{04}\) & \(\mu_{x}^{35}\) \\ \hline
65–69 & 0.00333 & 0.00878 & 0.28060 \\
70–74 & 0.00286 & 0.01521 & 0.36002 \\
75–79 & 0.00324 & 0.02693 & 0.40000 \\
80–84 & 0.00355 & 0.05142 & 0.49711 \\
85–89 & 0.00377 & 0.09684 & 0.50000 \\ \hline \end{tabular} Note: \(\mu_{x}^{01}\) and \(\mu_{x}^{04}\) are based on the ONS data, \(\mu_{x}^{35}\) is based on a published study.
Source: See Section 3.1 and Zhou et al. (2020).
\end{table}
Table 1: Age-specific transition intensities for the semi-Markov model in Figure 1.
Although this data is more granular than the ONS data, stratified by both age and stage for women in the east of England between 2006-2010, the corresponding exposure is not available from the same source. Therefore, we have chosen to use the ONS data for our results.
#### 3.2.2 Determining \(\mu_{x}^{13}\): Developing metastatic breast cancer
Colzani et al. (2014) estimate risk of developing first distant metastasis by age within 10 years of diagnosis of first invasive BC for women in Stockholm and Gotland Swedish counties between 1990-2006, noting fairly stable rates after a peak at about 2 years for women older than 50 years (Figure 3).
We assume that the transition intensity from State 1 to State 3, \(\mu_{x,z}^{13}\), follows a functional form, indicating a steep increase in the first two years with stable rates afterwards, based on Colzani et al. (2014). Figure 3 shows observed values, taken from Figure 1 in Colzani et al. (2014), and fitted values based on some polynomial functions.
Here, we define \(\mu_{x,z}^{13}\) as a function of duration only, using a \(4^{\text{th}}\) degree polynomial, given as
\[\mu_{x,z}^{13}=0.00088644+0.04191138z-0.01574062z^{2}+0.00207282z^{3}-0.0000899 8z^{4}, \tag{4}\]
for a given age \(x\) and \(0\leq z<10\). This function is not suitable for extrapolation to durations \(z>10\). Parameters are estimated from the data in Table 2.
Note: The values are determined based on Figure 1 in Colzani et al. (2014).
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline time & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 8 & 10 \\ \(\mu_{x,z}^{13}\) & 0 & 0.03 & 0.04 & 0.03 & 0.024 & 0.021 & 0.02 & 0.0194 & 0.0194 \\ \hline \end{tabular} Note: The values are determined based on Figure 1 in Colzani et al. (2014).
\end{table}
Table 2: Rates of transition from State 1 to State 3 in different durations (years).
Figure 3: Rates of transition from State 1 to State 3.
We also consider a special case of the semi-Markov model in Figure 1, assuming \(\mu_{x}^{13}=0.01954\). This value represents average of first distant metastasis rates based on Table 1 in Colzani et al. (2014). Note that rates of transition from State 2 to State 3, \(\mu_{x}^{23}\), are determined based on \(\mu_{x}^{13}=0.01954\), following (2).
#### 3.2.3 Determining \(\beta\): Measure of treatment effectiveness
State 2 is important in our model for being able to quantify the potential impact of a major disruption to health services on cancer mortality. However, there is no empirical data regarding unobserved BC. For modelling purposes we assume that rates of transition from States 1 and 2 to State 3 are related through parameter \(\beta\), which represents a measure of treatment effectiveness, as shown in (2). There is no available data regarding how a BC tumour can grow in the absence of treatment, although this is expected to differ by tumour subtypes. This is mainly because patients are required to be treated as soon as they are diagnosed (Nakashima et al., 2018). However, there is information in the literature about tumour growth for patients waiting for surgery that can be used as a proxy for the tumour growth in the lack of treatment leading to a more advanced BC stage. We use this to establish a reasonable value for \(\beta\).
Lee et al. (2016) quantify tumour growth rates for 1328 women diagnosed with invasive BC, during wait times for surgery, at Seoul National University Hospital between 2013-2014. They report significant changes depending on surrogate molecular subtypes, e.g. larger diameter changes in more aggressive molecular subtypes, and a frequent upgrade from Stage 1 to Stage 2 during waiting times for surgery, where the median waiting time is 31 days. Nakashima et al. (2018) report significant changes in tumours between diagnosis and surgery for 64% of 309 patients diagnosed with invasive BC between 2014-2016, where the mean waiting time is 56.9 days. Yoo et al. (2015) report significant increases in tumour sizes of 55% of 957 patients, diagnosed with invasive BC between 2002-2010, where the median time interval between initial and second examination is 28 days. This information suggests a considerable change in BC tumours for more than half of the observed populations during a period of one or two months, and therefore points towards the transition intensity \(\mu_{x,z}^{23}\) being considerably higher than \(\mu_{x,z}^{13}\), in the absence of any treatment. We consider a range of values between \(\frac{1}{5}\) and \(\frac{1}{10}\) for \(\beta\) in the absence of empirical data and literature information.
#### 3.2.4 Determining \(\mu_{x}^{35}\): Metastatic breast cancer related mortality
Survival from metastatic BC can be highly correlated to age, tumour type, and treatment, in addition to other patient- or disease-related factors (den Brok et al., 2017; Purushotham et al., 2014). Zhao et al. (2020) report BC deaths by age within 12 months of Stage 4 BC diagnosis, using a cohort, between 2010-2015, obtained from the National Cancer Institute Surveillance, Epidemiology and End Results (SEER) database. We define rates of transition to State 5, \(\mu_{x}^{35}\), based on the numbers shown in Table 1 in Zhao et al. (2020). Note that 'No early death' shows the number of patients that survived for 12 months, whilst 'Total early death' displays the number of patients deceased within 12 months in that study. Thus, we use a Uniform Distribution of Deaths assumption, to define the exposure under 'Total early death' (Hossain, 1994). Specifically, we assume that 'No Early Death' contributes a full year and each 'Early Death' half a year on average to the exposure. The resulting rates, \(\mu_{x}^{35}\), presented in Table 1, are assumed to
remain unchanged during the calculation period from 1 January 2020 to 31 December 2024. Note that we add small increments to the rates at ages 75-79 and 85-89 where these are rounded to 0.4 and 0.5, respectively.
## 4 Pandemic Scenarios
We consider two pandemic scenarios. Scenario 1 (S1) introduces a significant change in transitions to death from other causes, but does not involve any BC-related assumption. Thus, it reflects what would have been expected if the pandemic-related health disruptions had not affected BC diagnosis. In Scenario 2 (S2) we additionally assume a decline in cancer diagnoses.
* The pandemic is assumed to result in increased deaths from other causes. This accords with empirical evidence (Section 4.1).
* In addition to the assumption in S1, we further assume a decline in BC diagnosis, i.e. a decline in the number of transfers to State 1 (Section 4.2). This is represented by changing the level of a given \(\alpha\) in (2) based on Table 3. Since we assume that the onset of BC remains unchanged before and after the pandemic, see (1) and (2), we accordingly adjust the total transition intensity into State 2, \(\mu_{x}^{02}\) (Assumption A4).
Table 3 summarises the assumptions made in relation to some of the key transition intensities in the pandemic scenarios. These assumptions are explained in Sections 4.1-4.2.
### Scenario 1: Excess mortality due to COVID-19 in England
There is evidence suggesting that the COVID-19 pandemic has caused an increase in excess mortality, which can be linked to Scenario 1. The Office for Health Improvement and Disparities (OHID) in England monitors excess mortality by age, sex, Upper Tier Local Authority, ethnic group, level of deprivation, cause of death and place of death since 21 March 2020, in order to have a better understanding of the impact of COVID-19. They report ratios representing relative changes between registered and expected excess deaths for each group (OHID, 2022). We use a set of ratios, shown in Table 3, to define the potential increase in transition to death from other causes.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{ Pandemic period} & \(\mu_{x}^{01}/\mu_{x}^{02}\) & \multicolumn{2}{c}{\(\mu_{x}^{04}\)} \\ \hline & \(\alpha\) & 65–84 & 85–89 \\ \hline April–Nov. 2020 & 0.8 & 1.13 & 1.12 \\ Dec. 2020–Nov. 2021 & 1 & 1.13 & 1.12 \\ Dec. 2021–Dec. 2022 & 1 & 1.10 & 1.09 \\ Jan.–Dec. 2023 & 1 & 1.07 & 1.06 \\ Jan.–Dec. 2024 & 1 & 1.04 & 1.03 \\ \hline \hline \end{tabular} Note: Proportionality constants are the same across all ages in both pandemic scenarios.
\end{table}
Table 3: Proportionality constants applied to transition intensities in the pandemic scenarios.
The age-specific transition intensities to death due to other causes, \(\mu_{x}^{04}\), are assumed to increase by a factor of 1.13 for ages 65-84 and 1.12 for ages 85+ from April 2020 until November 2021, while we assume they increase by a factor 1.10 for ages 65-84 and 1.09 for ages 85+ from November 2021 until the end of 2022 (OHID, 2022). Given the gradual decrease in the excess mortality between April 2020 and December 2022, we assume that \(\mu_{x}^{04}\) could still be higher than the pre-pandemic levels for an additional period of two years. Specifically, \(\mu_{x}^{04}\) is assumed to increase by the following factors: 1.07 for ages 65-84 and 1.06 for ages 85+ in 2023; 1.04 for ages 65-84 and 1.03 for ages 85+ in 2024.
### Scenario 2: Changes in breast cancer risk amid COVID-19
There is no evidence suggesting that the COVID-19 pandemic increased BC incidence. Therefore, we assume that overall new cases of cancer are not affected by the pandemic (A4 under Section 2.3). This implies that the onset of BC is assumed to be unchanged by the pandemic, and therefore \(\mu_{x}^{*}\) is not affected. We further assume that there is no time trend in BC risk over the next five years.
However, cancer registrations are known to have reduced during national lockdowns (CRUK, 2021). Particularly, Public Health Scotland (PHS) reported that BC registrations were 19% lower than the 2018/2019 average during the nine months of the pandemic (April-December 2020), as a result of initial health disruptions (PHS, 2021). The number of BC registrations in the second quarter of 2020 is noted to start returning back to the pre-pandemic levels towards the end of 2020. Based on the available information, we assume that, for all ages, diagnosis of BC, \(\mu_{x}^{01}\), is decreased by 20% from April 2020 until the end of 2020. Following that, it is then assumed that they are restored back to pre-pandemic levels. The intensity \(\mu_{x}^{02}\) is adjusted accordingly, keeping the overall BC onset rate unchanged (see (2) and Table 3).
## 5 Results
In this section we present the main findings based on different scenarios, associated with a pre-pandemic model calibration and pandemic scenarios S1 and S2. These findings are obtained for selected values of \(\alpha\) and \(\beta\), which are \(\alpha=0.6\) and \(\beta=\frac{1}{7}\). We note the lack of data to determine the values of these parameters. Therefore, we test for sensitivity of the results to changes in \(\alpha\) and \(\beta\) in Section 6.
Table 4 compares age-specific occupancy probabilities, denoted by \({}_{t}p_{x}^{ij}\) from state \(i\) to state \(j\) at age \(x\), based on the semi-Markov BC model, Figure 1, over one and 5 years from 1 January 2020. As a special case of the model in Figure 1, we also present results with a Markov model, which is determined by removing duration dependency in rates of transition from State 1 to State 3, \(\mu_{x,z}^{13}\), and accordingly in \(\mu_{x,z}^{23}\), as well. Therefore, in the Markov model, we determine constant values for \(\mu_{x}^{13}\) and \(\mu_{x}^{23}\) over both age and time (Section 3.2.2). This simplification can additionally allow us to compare results from the Markov and semi-Markov models.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline \multicolumn{11}{c}{Occupancy Probabilities (\%)} \\ \hline \multicolumn{11}{c}{From State 0} & \multicolumn{11}{c}{From State 1} & \multicolumn{11}{c}{From State 3} \\ \hline Age & \(\varsigma p_{r}^{00}\) & \(\varsigma p_{r}^{01}\) & \(\varsigma p_{r}^{02}\) & \(\varsigma p_{r}^{02}\) & \(\varsigma p_{r}^{03}\) & \(\varsigma p_{r}^{04}\) & \(\varsigma p_{r}^{05}\) & \(\iota p_{r}^{15}\) & \(\varsigma p_{r}^{15}\) & \(\iota p_{r}^{05}\) & \(\varsigma p_{r}^{35}\) \\ \hline & M & M & S-M & M & S-M & M & S-M & M & M & S-M & M & S-M & M & M \\ \hline \multicolumn{11}{c}{Pre-pandemic calibration} \\
65–69 & 93.09 & 1.50 & _1.47_ & 0.76 & _0.68_ & 0.24 & _0.31_ & 4.29 & 0.13 & _0.16_ & 0.25 & _0.16_ & 4.24 & _5.98_ & 24.36 & 74.15 \\
70–74 & 90.49 & 1.25 & _1.22_ & 0.63 & _0.57_ & 0.18 & _0.23_ & 7.32 & 0.13 & _0.16_ & 0.31 & _0.20_ & 4.82 & _6.82_ & 30.02 & 81.25 \\
75–79 & 85.07 & 1.33 & _1.31_ & 0.67 & _0.61_ & 0.18 & _0.24_ & 12.59 & 0.15 & _0.19_ & 0.34 & _0.22_ & 4.92 & _6.97_ & 32.56 & 82.61 \\
80–84 & 75.07 & 1.29 & _1.26_ & 0.65 & _0.59_ & 0.15 & _0.20_ & 22.66 & 0.17 & _0.21_ & 0.40 & _0.26_ & 5.09 & _7.21_ & 38.26 & 84.79 \\
85–89 & 59.71 & 1.09 & _1.07_ & 0.55 & _0.50_ & 0.13 & _0.17_ & 38.36 & 0.16 & _0.19_ & 0.39 & _0.25_ & 4.47 & _6.29_ & 37.65 & 79.54 \\ \multicolumn{11}{c}{Pandemic scenarios} \\ \multicolumn{11}{c}{S1} & \multicolumn{11}{c}{} & \multicolumn{
5.1 Unobserved and observed breast cancer cases
Table 4 shows that for a woman free of BC at time zero, the probability of being diagnosed with pre-metastatic BC over the following 5 years, \({}_{5}\)_p\({}_{x}^{01}\)_, has decreased by 3-6%, across different ages, in Scenario 2, as compared to the pre-pandemic calibration. The results show bigger changes at older ages in Scenario 2, consistent in both models. The decline in \({}_{5}\)_p\({}_{x}^{01}\)_ has remained less than 3% in Scenario 1. At the same time, the probability of having BC and staying undiagnosed, \({}_{5}\)_p\({}_{x}^{02}\)_, increases by 1-3% over 5 years in Scenario 2 based on the Markov model. The increase is mostly higher at younger ages. The increase in the same probability, \({}_{5}\)_p\({}_{x}^{02}\)_, is less than 2% under the semi-Markov model.
Meanwhile, results under both models show that for a woman with no BC at time zero, the probability of being diagnosed with metastatic BC over the following 5 years, \({}_{5}\)_p\({}_{x}^{03}\)_, increases at certain ages, for instance, by 5% to 6% at ages 80-84 in Scenario 2, as compared to the pre-pandemic calibration. An increase in \({}_{5}\)_p\({}_{x}^{03}\)_, up to 4%, occurs at ages 65-69 and 70-74 based on the semi-Markov model.
In Scenario 1 the modelling mostly reveals a decline in \({}_{5}\)_p\({}_{x}^{02}\)_, up to 3%, as compared to the pre-pandemic levels based on both models, and no considerable changes in \({}_{5}\)_p\({}_{x}^{03}\)_, apart from the decrease in the youngest age in the Markov model. The decrease in \({}_{5}\)_p\({}_{x}^{02}\)_ and occasional decrease in \({}_{5}\)_p\({}_{x}^{03}\)_ in Scenario 1 can be associated with the increase in deaths from other causes, since the transition intensities from States 2-3 to 'Dead, Other Causes' are assumed to be equal to \(\mu_{x}^{04}\).
These findings are aligned with documented information that cancer patients have been more vulnerable to the SARS-CoV-2 coronavirus and affected worse by the pandemic, compared to the general population (Pinato et al., 2020; Garassino et al., 2020; Lee et al., 2020; Saini et al., 2020). It is also worth noting that the PHS reported falls in Stages 1-2 BC in Scotland along with small increases in Stages 3-4 BC in 2020 (PHS, 2021).
### Breast cancer mortality
For women with clinical cancer diagnosis, i.e. women in either State 1 or State 3 at time zero, we define cancer mortality as the probability of moving to State 5, for the period under consideration.
The dependence of BC mortality on age becomes more evident if we consider a longer period after diagnosis, where bigger changes are observed in more advanced ages for women with metastatic BC, consistent in both models (Table 4). For instance, in the pre-pandemic calibration, one-year mortality for a woman aged 65-69 with metastatic BC is estimated as 24.36%, whereas at ages 80+ one-year mortality is around and above 37%. On the other hand, the variation in mortality, with respect to different ages, for women in State 1 is very small even after 5 years.
The results in Table 4 also show that mortality in 5 years after metastatic BC diagnosis is estimated to be between 74.15-84.79%, whereas the mortality for a woman with pre-metastatic BC diagnosis differs in the presence of duration dependence: (i) around 4-5% under the Markov model; and (ii) 6-7% under the semi-Markov model. Meanwhile, the relationship between 5-year mortality and age is not straightforward to interpret due to the following reasons:
* We have simplified BC progression using two states, with BC Stages 1-3 being combined and included in States 1 and 2, due to lack of reliable data. Ideally, BC Stage 3, which indicates locally advanced BC, should be treated differently than Stages 1 and 2, since survival from Stage 3 can be markedly different than that from Stages 1 and 2 (Rutherford et al., 2015; Maringe et al., 2020).
* In the absence of sufficient data, we have assumed constant transition intensities over periods of 5 years. Given the trends of BC incidence and mortality over time in Figure 2, this may not be realistic.
* The probability of metastasis decreases with age, while mortality risk increases with age in the presence of any BC-related condition (Purushotham et al., 2014). The net effect of these two forces might be another reason for not seeing a consistent trend by age in 5-year BC mortality rates.
All-cause mortality, including death from BC, for women with pre-metastatic or metastatic BC is also presented over periods of 5 years, where age dependence is clear (Table B9 and Table B10).
There is a relative decline in the cancer mortality, less than 2%, across different ages in the pandemic scenarios in comparison to the pre-pandemic calibration under both models. This decline is as a result of increases in excess mortality (Section 4.1). However, across pandemic scenarios, our modelling shows no change in the cancer mortality for women with clinical diagnoses (Table 4). This is because our approach assumes that there is no change in the onset of BC before and after the pandemic, and the corresponding probabilities are conditional on BC diagnosis.
The models also allow us to obtain cancer survival rates. Cancer-specific survival, as used by the ONS, is one of most widely accepted survival measures. It is stated to be a 'net' measure and interpreted as the number of people being alive 'after cancer diagnosis'. This measure is considered to represent a 'hypothetical situation in which the cancer of interest is the only possible cause of death' (Mariotto et al., 2014; Swaminathan and Brenner, 2011; ONS, 2019b). We refer to this as the 'ONS approach'. For a woman diagnosed with pre-metastatic BC at age \(x\), for instance, cancer-specific survival in \(t\) years can be obtained based on the ONS approach as follows:
\[\frac{1-{}_{t}p_{x}^{14}\,-{}_{t}p_{x}^{15}}{1-{}_{t}p_{x}^{14}}, \tag{5}\]
where \({}_{t}p_{x}^{14}\) represents mortality from other causes, while \({}_{t}p_{x}^{15}\) represents mortality from BC.
Table 5 compares 1-, 5-, and 10-year survival probabilities based on both Markov and semi-Markov models in the pre-pandemic calibration using (5) based on the ONS approach and an adjustment of our models. We adjust the models developed here by setting the transition intensities to 'Dead, Other Causes' after being diagnosed with BC or having BC without a clinical diagnosis, i.e. \(\mu_{x}^{14}\), \(\mu_{x}^{24}\) and \(\mu_{x}^{34}\), equal to zero. This allows 'Dead, BC' to be the only cause of death.
\begin{table}
\begin{tabular}{l r r r r r r r r r r r r r r r r} \hline \hline & \multicolumn{10}{c}{Cancer Survival (\%)} \\ \hline \multicolumn{1}{c}{} & \multicolumn{10}{c}{From State 1} \\ \hline Age & \multicolumn{3}{c}{1-year} & \multicolumn{3}{c}{5-year} & \multicolumn{3}{c}{10-year} & \multicolumn{3}{c}{1-year} & \multicolumn{3}{c}{5-year} & \multicolumn{3}{c}{10-year} & \multicolumn{3}{c}{1-year} & \multicolumn{3}{c}{5-year} & \multicolumn{3}{c}{10-year} \\ \hline & M & _S-M_ & M & _S-M_ & M & _S-M_ & M & _S-M_ & M & _S-M_ & M & _S-M_ & M & M & M \\ \hline & & & & & & & & & & & & & & & \\
65–69 & 99.75 & _99.84_ & 95.57 & _93.76_ & 87.57 & _84.44_ & 98.32 & _98.90_ & 74.75 & _67.52_ & 42.95 & _34.87_ & 75.45 & 24.09 & 5.70 \\
70–74 & 99.69 & _99.79_ & 94.81 & _92.65_ & 86.06 & _82.65_ & 97.90 & _98.61_ & 70.62 & _62.15_ & 37.67 & _29.50_ & 69.60 & 15.86 & 2.44 \\
75–79 & 99.66 & _99.77_ & 94.38 & _92.06_ & 84.95 & _81.32_ & 97.68 & _98.47_ & 68.47 & _59.45_ & 34.70 & _26.66_ & 66.71 & 12.52 & 1.49 \\
80–84 & 99.58 & _99.72_ & 93.46 & _90.75_ & 82.48 & _78.35_ & 97.18 & _98.13_ & 63.96 & _53.89_ & 29.13 & _21.58_ & 60.16 & 7.06 & 0.46 \\
Table 5 shows that cancer survival is worse at older ages. It suggests that cancer-specific survival probabilities based on the ONS methodology applied to our data are reasonably consistent with those based on the adjusted models. The main difference between the models arises for women with pre-metastatic BC, with and without a clinical diagnosis, where lower estimates are obtained in the longer term based on the semi-Markov model. The estimates across ages change to a slightly greater degree in the longer term under the ONS methodology as compared to the adjusted models.
We note that our findings for women with pre-metastatic and metastatic BC are broadly in agreement with the ONS statistics, where 5- and 10-year age-standardised survival rates (ages 15 to 99 years) for women diagnosed with BC between 2011-2015 were reported to be above 80% and 50%, respectively. Whilst very few excess deaths for women diagnosed with Stages 1-2 BC were observed, compared with the general population, after the first year of diagnosis, the one-year age-standardised survival rate for women diagnosed with Stage 4 BC in 2015, followed up to 2016, was noted to be 65.8% (ONS, 2017). Furthermore, we found that cancer survival worsens significantly in the absence of any treatment, in State 2, as compared to the case where medical treatments are available in State 1. For instance, the 10-year cancer survival of women with pre-metastatic BC at ages 65-69 would have declined from around 84-87% to 34-42%, with higher rates in the Markov model, if these women had stayed undiagnosed and taken no medical care during the 10 years. This is aligned with the existing medical literature (Verkooijen et al., 2005; Joseph et al., 2012).
Cancer survival probabilities in the pandemic scenarios are not provided in Table 5. This is because survival is conditioned upon diagnosis of BC, which is the event disrupted by the pandemic.
### Excess deaths
The estimated numbers of deaths over 5 years, by age, due to BC and other causes, can be determined by using \({}_{5}p_{x}^{05}\) and \({}_{5}p_{x}^{04}\), respectively. Estimates of excess deaths, in the corresponding period, are then calculated as the differences between estimated numbers of deaths in the pre-pandemic calibration and the pandemic scenarios (Table 6). We note that the time trend in mortality is ignored.
Our findings show that deaths from other causes increase by 5-8%, with higher changes at younger ages, corresponding to 363-2,255 excess deaths, per 100,000 women at different ages, in Scenarios 1-2, compared to the pre-pandemic calibration under both models. Our model also gives a 3-6% increase in deaths from BC across different ages in Scenario 2 based on both settings, with higher increases for younger ages. This corresponds to 5-8 excess BC deaths at different ages under the Markov model, and 6-10 excess deaths under the semi-Markov model.
### Years of life lost
We calculate age-specific years of life lost (YLL) from BC and other causes at a given time \(t\), denoted by YLL\({}_{x,t}^{\text{cause}}\), as
\[\text{YLL}_{x,t}^{\text{cause}}=D_{x,t}^{\text{cause}}e_{x}, \tag{6}\]
where \(D_{x,t}^{\text{cause}}\) shows the corresponding excess deaths from a given cause, and \(e_{x}\) is a function that quantifies the number of years lost for deceased people aged \(x\) at time of death. Here \(e_{x}\) is determined as average life expectancy at age \(x\) using standard life tables (WHO, 2013). Also, total YLL for all ages, YLL\({}_{t}^{\text{cause}}\), are calculated as
\[\text{YLL}_{t}^{\text{cause}}=\sum_{x}D_{x,t}^{\text{cause}}e_{x}. \tag{7}\]
We refer to standard life tables as a source for the years loss function, following WHO (2013). Particularly, we use the 2018-2020 national standard life tables for women in the UK, with the life expectancies for women for ages 65-89, \(e_{x}\), shown in Table 7 (ONS, 2021).
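The following Python sketch (an illustrative reimplementation, not the code used for the analysis) applies Eqs. (6)-(7) with the Table 7 life expectancies to the rounded Scenario 1 excess deaths from other causes under the Markov model (Table 6); because the tabulated deaths are rounded, the resulting YLL agree with Table 6 only to within a few years.

```python
# Years of life lost, Eqs. (6)-(7): YLL_x = D_x * e_x and total YLL = sum_x D_x * e_x.
# e_x: 2018-2020 UK female life expectancies at each age group (Table 7).
e_x = {"65-69": 19.31, "70-74": 15.31, "75-79": 11.63, "80-84": 8.44, "85-89": 5.84}

# Rounded excess deaths from other causes per 100,000 women,
# Scenario 1, Markov model (Table 6).
excess_deaths_other = {"65-69": 363, "70-74": 608, "75-79": 1012,
                       "80-84": 1700, "85-89": 2255}

yll_by_age = {age: excess_deaths_other[age] * e_x[age] for age in e_x}   # Eq. (6)
yll_total = sum(yll_by_age.values())                                     # Eq. (7)

for age, yll in yll_by_age.items():
    print(f"{age}: {yll:,.0f} years of life lost")
print(f"total: {yll_total:,.0f} years of life lost")
```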
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline & \multicolumn{4}{c}{Excess deaths} & \multicolumn{4}{c}{YLL} \\ \hline & \multicolumn{2}{c}{Dead (Other)} & \multicolumn{2}{c}{Dead (BC)} & \multicolumn{2}{c}{Dead (Other)} & \multicolumn{2}{c}{Dead (BC)} \\ & \multicolumn{2}{c}{State 4} & \multicolumn{2}{c}{State 5} & \multicolumn{2}{c}{State 4} & \multicolumn{2}{c}{State 5} \\ \hline Age & M & _S-M_ & M & _S-M_ & M & _S-M_ & M & _S-M_ \\ \hline S1 & & & & & & & & \\
65–69 & 363 & _363_ & 0 & \(0\) & 7003 & _7010_ & -8 & \(0\) \\
70–74 & 608 & _607_ & _\(-\)1_ & _\(-\)1_ & 9301 & _9293_ & _\(-\)11_ & _\(-\)15_ \\
75–79 & 1012 & _1012_ & _\(-\)1_ & _\(-\)2_ & 11767 & _11770_ & _\(-\)16_ & _\(-\)23_ \\
80–84 & 1700 & _1700_ & _\(-\)3_ & _\(-\)4_ & 14350 & _14348_ & _\(-\)25_ & _\(-\)34_ \\
85–89 & 2255 & _2255_ & _\(-\)5_ & _\(-\)6_ & 13167 & _13169_ & _\(-\)27_ & _\(-\)35_ \\ S2 & & & & & & & \\
65–69 & 363 & _363_ & 8 & _10_ & 7000 & _7010_ & 152 & _193_ \\
70–74 & 607 & _607_ & 7 & \(9\) & 9298 & _9293_ & 113 & _138_ \\
75–79 & 1011 & _1012_ & 8 & _10_ & 11762 & _11770_ & 92 & _116_ \\
80–84 & 1699 & _1699_ & 7 & \(9\) & 14342 & _14340_ & 63 & _76_ \\
85–89 & 2253 & _2253_ & 5 & \(6\) & 13158 & _13158_ & 29 & _35_ \\ \hline \hline \end{tabular} Note: Results are based on Markov (M) and semi-Markov (S-M) models in the pandemic scenarios, Scenario 1 (S1) and Scenario 2 (S2), as compared to the pre-pandemic calibration, for \(\alpha=0.6\), \(\mu^{13}=\frac{1}{7}\mu^{23}\).
\end{table}
Table 6: Age-specific excess number of deaths and years of life expectancy lost (YLL), per 100,000 women.
Table 6 shows that the semi-Markov model gives more years of life lost due to BC than the Markov model. This is a direct result of the former model estimating higher numbers of deaths due to BC. For deaths from other causes, we found 7,000-14,350 years of life lost across Scenarios 1 and 2 under the Markov model, with almost identical results under the semi-Markov model (Table 6).
## 6 Sensitivity Analysis
In this section we assess the sensitivity of our main findings to the values of certain model parameters. Table 8 shows the different parametrisations for the pre-pandemic model calibration and the pandemic scenarios.
### Impact of parameter \(\alpha\)
In the pre-pandemic calibration and the pandemic scenarios in Section 5, it was assumed that 60% of women developing BC would actually be diagnosed with BC in a given year, by choosing \(\alpha=0.6\). We now vary the value of \(\alpha\), while keeping all other model characteristics fixed in the pre-pandemic calibration and pandemic scenarios. Higher and lower diagnosis rates are represented by assuming \(\alpha=0.8\) and \(\alpha=0.4\), respectively. Changing \(\alpha\) mainly affects transitions to State 2 and State 3, along with smaller impacts on State 0 and State 5. For a woman free of BC, the probabilities of being in States 2-3 over 5 years change considerably as compared to the pre-pandemic calibration with \(\alpha=0.6\) (Table B9). Specifically, we observe an increase, mostly by around a factor of 2, when \(\alpha=0.4\), and a decline, by 70% in \({}_{5}p_{x}^{02}\) and 50% in \({}_{5}p_{x}^{03}\), when \(\alpha=0.8\). Changes in State 0 and State 5 are more evident in the presence of lower diagnosis rates under both modelling settings.
Changes in excess deaths and YLL from other causes remain similar to those obtained for \(\alpha=0.6\) (Table 6, Table C11-Table C12). Considering excess deaths from BC, a lower
\begin{table}
\begin{tabular}{c c c c c c} \hline Age & 65–69 & 70–74 & 75–79 & 80–84 & 85–89 \\ \hline \(e_{x}\) & 19.31 & 15.31 & 11.63 & 8.44 & 5.84 \\ \hline \end{tabular} Note: The values are based on the 2018–2020 national standard life tables.
Source: See ONS (2021) for women.
\end{table}
Table 7: Average life expectancies at various ages, denoted by \(e_{x}\).
pre-pandemic diagnosis rate of \(\alpha=0.4\) leads to an increase of about 2%, corresponding to 7 or fewer deaths across different ages, as compared to the corresponding pre-pandemic calibration, in the Markov model, whereas the semi-Markov model suggests a slightly higher increase, about 3%, corresponding to around 10 or fewer deaths at the same ages. Meanwhile, a higher diagnosis rate of \(\alpha=0.8\) leads to a more dramatic increase in BC deaths, of about 9-12%, corresponding to 7-11 excess deaths, at the same ages under both models.
### Impact of parameter \(\beta\)
In the pre-pandemic calibration and the pandemic scenarios in Section 5, we assumed \(\beta\) as low as \(\frac{1}{7}\), assuming that the transition from State 2 to State 3, \(\mu_{x,z}^{23}\), can be 7 times higher than the transition from State 1 to State 3, \(\mu_{x,z}^{13}\). This is mainly motivated by the absence of treatment in State 2, along with the potential pace of BC tumour growth (Section 3.2.3). All else being equal, we vary the value of \(\beta\) by replacing it with \(\frac{1}{5}\) and \(\frac{1}{10}\). Note that there is no change in \(\mu_{x,z}^{13}\), with \(\mu_{x}^{13}=0.01954\) in the Markov model, or determined by (4) in the semi-Markov model (Section 3.2.2). Similar to Section 6.1, the main impact of changes in \(\beta\) appears to be on State 2 and State 3, with higher changes occurring when \(\beta=\frac{1}{10}\). A smaller value of \(\beta\) leads to more transitions into State 3, leaving a smaller number of women in State 2 in the relevant pre-pandemic model calibration (Table B9). The numbers in State 5 increase with a decreasing level of \(\beta\) over time, because of the higher numbers of women with advanced BC (Stage 4 BC) in State 3.
Table D13 and Table D14 show comparable outcomes for excess deaths and YLL from other causes. Excess deaths, along with YLL, from BC differ slightly from those obtained when \(\beta=\frac{1}{7}\). For a relatively higher value of \(\beta\), \(\frac{1}{5}\), BC deaths are around 2-5% higher across different ages, indicating 3-7 excess deaths, as compared to the corresponding pre-pandemic calibrations, in both modelling settings. For a smaller value of \(\beta\), \(\frac{1}{10}\), deaths are around 3-6% higher than the relevant pre-pandemic calibrations, corresponding to 7-14 excess deaths at different ages.
### Impact of transitions to death from breast cancer \(\mu_{x}^{35}\)
In the pre-pandemic calibration and the pandemic scenarios in Section 5, we assumed the transition to death from BC, \(\mu_{x}^{35}\), to follow the rates reported in Table 1. We now consider \(\mu_{x}^{35}\) to be 20% lower, or 20% higher, than the rates in Table 1, where the pre-pandemic model calibrations in these cases are shown in Table B9. The main effect of a change in this particular transition intensity is on cancer mortality (State 5), and on State 3. For instance, an increase in the level of \(\mu_{x}^{35}\) leads to a decrease in the number of women in State 3 and an increase in State 5. A considerable increase in 5-year cancer mortality, \({}_{5}p_{x}^{15}\) and \({}_{5}p_{x}^{35}\), corresponding to increases of less than 11% and 8% across different ages, respectively, is also observed as a result of a higher level of \(\mu_{x}^{35}\). This leads to a higher level of overall mortality as well. The changes in cancer mortality are more evident for women with advanced BC.
Similarly to Sections 6.1-6.2, varying rates of \(\mu_{x}^{35}\) mainly results in changes in the number of excess BC deaths, while other outcomes, e.g. excess deaths from other causes, have remained comparable to the ones in Section 5. An increasing level of \(\mu_{x}^{35}\) leads
to a higher number of excess BC deaths, 5-12 deaths across different ages, whereas a decreasing level of \(\mu_{x}^{35}\) results in a smaller number of BC deaths, 4-9 excess deaths at the same ages, with a similar effect on YLL from BC. However, the relative increase across ages, in comparison to the relevant pre-pandemic calibration, remains the same, 3-6%, independent of the level of \(\mu_{x}^{35}\), under both models (Table E15, Table E16).
We also obtain cancer survival probabilities, up to 10 years, for different values of \(\mu_{x}^{35}\), provided in Appendix F. Note that different values of \(\alpha\) and \(\beta\) are not relevant to this calculation. Consistent with the findings in Table 5, Table F17 and Table F18 point towards higher changes in cancer survival for women with pre-metastatic BC using different modelling settings, with these changes becoming more profound in time. Although an increasing level of \(\mu_{x}^{35}\) results in lower cancer survival for women with metastatic BC, our model still suggests smaller differences between cancer survival at the oldest and youngest age groups in comparison to ONS methodology.
## 7 Discussion
During national lockdowns, essential BC diagnostic services were severely affected, along with cancer referral pathways. Health-seeking behaviour was also adversely affected, as only patients with urgent concerns were encouraged to use available services (Maringe et al., 2020). It is therefore important to further examine possible implications of late diagnoses on cancer rates and excess deaths.
We have constructed a semi-Markov model to quantify changes in BC mortality for women aged 65+, as a result of the impact of COVID-19 on health services. Maringe et al. (2020) noted a 7.9-9.6% increase in the number of deaths due to BC in a 5-year period after diagnosis, assuming that cancers could only be diagnosed through urgent referrals with up to 80% reductions in cancer referrals. We assume a 20% reduction in BC diagnosis based on a more recently published report (PHS, 2021). As a result, we found a 3-6% increase in the number of deaths from BC at different ages, and a 5-8% increase in deaths from other causes, as compared to the pre-pandemic model calibration in Section 5. Also, our results showed considerable differences among certain occupancy probabilities, e.g. \({}_{5}p_{x}^{15}\), between the semi-Markov and Markov models, highlighting the significance of assuming duration dependence in the modelling.
### Strengths and limitations
Low availability of suitable data was a major challenge in this study, limiting our ability to make data-driven inferences and to quantify uncertainty through appropriate statistical measures. A related key issue was the incompleteness of BC stage information in population-based cancer data. Nevertheless, our models are based on a pragmatic combination of available data, literature information and modelling assumptions. The models have produced insightful findings, while the results are broadly consistent with existing literature. Our modelling approach has also provided estimates of excess deaths both from BC and from other causes. Furthermore, sensitivity testing has been carried out to take into account parameter uncertainty to a certain extent. As expected, model outputs are sensitive to the choice of key model parameters. Importantly, sensitivity to parameter \(\alpha\) demonstrates the model's ability to capture the impact of health-service disruptions to
BC mortality. Relative changes in cancer mortality and deaths from other causes have shown consistent results based on different parametrisations in various pre-pandemic model calibrations and pandemic scenarios.
Our approach provides a valuable model, relating to delays in the provision of BC diagnostic and treatment services, which can be more accurately calibrated as more data become available. Availability of more data can help expand the modelling setting by providing more information in relation to the progression of BC. Our model can also be used to represent different levels of BC service availability in non-pandemic times and therefore also provides a framework for comparing health service provision in different countries. It can allow further insights regarding the impact of a pandemic on different health services by changing the levels of \(\alpha\) and \(\beta\) parameters.
There are important areas for further research. The modelling framework can be extended in a number of ways, including the following:
* employing a more detailed clinical model for BC, e.g. by involving locally advanced BC and/or considering treatment and recovery options, which would allow distinguishing between recurrence of non-metastatic BC and developing of metastatic BC;
* considering multi-morbidity as an underlying condition, allowing for the potential impact on excess deaths;
* introducing time trend for BC mortality and morbidity over years;
* formally measuring parameter and model uncertainty.
### Implications of this research
Our study can inform decision makers by increasing awareness about the continuing impact of the COVID-19 pandemic. The estimated results can be helpful while implementing evidence-based health interventions.
Our findings can also help life insurers understand the impact of late diagnoses or prevented treatment of a major cancer in women, on cancer mortality and survival rates. The modelling framework developed here can be useful for assessing different scenarios of cancer diagnoses, not just under pandemic circumstances, but also given different levels of health service provision. Our work can also add value while considering insurance pricing and valuation assumptions.
Increases in population longevity, together with the relatively and increasingly long BC survival, mean that BC will continue to significantly affect older women (Shachar et al., 2016; BCRF, 2021). In this article we have explored the short-term impact of COVID-19 related BC diagnostic delays on related mortality in an older population.
## Acknowledgements
ED and GS acknowledge funding from the Society of Actuaries, under a research project entitled 'Predictive Modelling for Medical Morbidity Trends related to Insurance'. AA and GS acknowledge funding from SCOR Foundation for Science, under a project entitled 'Estimating The Impact Of The COVID-19 Pandemic On Breast Cancer Deaths - An Application On Breast Cancer Life Insurance'.
|
2309.03643
|
High-Speed (7,2) Compressor Using A Fast Carry-Generation Logic based on
Sorting Network
|
Fast binary compressors are the main components of many basic digital
calculation units. In this paper, a high-speed (7,2) compressor with a fast
carry-generation logic is proposed. The carry-generation logic is based on the
sorting network, and it can generate a carry bit within 2 logical stages other
than 3 stages as in previous school book full adders. Collaborating with the
adjusted full adder logic, the proposed (7,2) compressor achieves using only 11
basic logical stages. Testing this new design in a binary array with 7 rows and
8 columns, the results show that this design has higher performance than
previous designs. This method is suitable for high performance cases in
multiplication design or other cryptography hardware blocks.
|
Wenbo Guo
|
2023-08-30T05:08:25Z
|
http://arxiv.org/abs/2309.03643v1
|
# High-Speed (7,2) Compressor Using A Fast Carry-Generation Logic based on Sorting Network
###### Abstract
Fast binary compressors are the main components of many basic digital calculation units. In this paper, a high-speed (7,2) compressor with a fast carry-generation logic is proposed. The carry-generation logic is based on the sorting network, and it can generate a carry bit within 2 logical stages rather than the 3 stages of the schoolbook full adder. Combined with the adjusted full adder logic, the proposed (7,2) compressor uses only 11 basic logical stages. Testing this new design in a binary array with 7 rows and 8 columns shows that it has higher performance than previous designs. This method is suitable for high-performance cases in multiplication design or other cryptography hardware blocks.
(7,2) compressor, multiplier, full adder, sorting network
## I Introduction
Multiplication is a very common operation in digital devices, and its performance is often the bottleneck of DSP. A fast multiplier consists of three parts: partial product generation, partial product reduction, and vector merge addition. The Wallace tree [1][2] was proposed to compress the partial products in parallel with full adders, and these full adders are now known as (3,2) compressors. Thereafter, various methods have been proposed to construct more efficient compressors to further speed up the reduction of partial products. Larger compressors have been widely used, such as (4,2), (5,2), and (7,2) compressors [3][4][5], but this reduction step still takes the most time in a multiplication operation. Besides, many cryptography hardware blocks, such as modular multiplication, also require high-speed compressors to speed them up.
The (7,2) compressor has been shown to be a highly efficient structure [6], and many papers have discussed it. Both [6] and [7] use special methods to reduce the number of basic logical stages. The method in [7] reduces the number of logical stages to 12, but still uses many XOR gates, which are slow and hard to optimize at the logic level. The method in [6] also implements a 12-logical-stage design with the help of the (7,3) counter in [8], but that design was not further optimized.
Further reducing the number of basic logical stages of a (7,2) compressor is difficult. However, it is still possible by using a special carry-generation logic. The contributions of this paper are listed below:
1. We propose a carry-generation logic that generates a carry bit within 2 basic logical stages, while traditional full adders require 3 basic logical stages.
2. The adjusted full adder is introduced. It is designed to work together with the carry-generation unit.
3. We propose a new (7,2) compressor that consumes only 11 basic logical stages. According to the results synthesised with Synopsys Design Compiler, our design has a lower time delay.
This paper first presents our design and then compares it with the designs in [6] and [7].
## II Method and implementation
### _Sorting Network of 1-bit Numbers_
For two 1-bit numbers, sorting is a simple operation. The circuit in Fig. 1 can easily sort the two input 1-bit numbers. \(Out_{1}\) is always the larger one and \(Out_{2}\) is always the smaller one. This circuit only consumes an AND gate and an OR gate.
Fig. 2 shows a 4-input sorting network [9]. Each vertical line represents a sorter as in Fig. 1. After three stages of sorting, the inputs are sorted. Since the delay of a sorter is one stage of basic logical gates, the sorting network consumes three stages of basic logical gates.
Fig. 1: Sorter of two 1-bit numbers.
Fig. 2: 4-bit sorting network
### _Fast Carry-Generation_
In a basic full adder, suppose that the input bits are A, B, and C; the carry bit is then generated by equation (1). This consumes three stages of basic logic. However, carry generation and propagation are the performance bottleneck in a multiplier. \(AB\) represents the AND of \(A\) and \(B\), and \(A+B\) represents their OR.
\[Carry=AB+AC+BC \tag{1}\]
As we can see, the last stage in Fig. 2 only sorts the second and the third bits. So it is clear that the first bit is the largest one and the fourth bit is the smallest one after two stages of sorting. Then, if we choose one bit randomly from the second and the third bits, this bit is no larger than the first bit and no less than the fourth bit. That means the first bit, the randomly selected bit, and the fourth bit are already in order, and we name them X, Y and Z. The first two stages of the sorting network are represented as the Half Sorter, as shown in Fig. 2. Adding X, Y and Z up as binary numbers generates a carry and a sum; the result is shown in Table I. Note that X, Y, Z are in order.
Because they are ordered, there are only four possible combinations. Based on the truth table in Table I, the Boolean expressions are simplified as equations (2) and (3). With this special full adder structure (SFA), the carry bit can be generated within two basic logic stages, faster than the traditional full adder logic, which consumes 3 logic stages.
\[Carry=Y \tag{2}\]
\[Sum=X(\overline{Y}+Z) \tag{3}\]
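To make the argument concrete, the following Python sketch (an illustrative check, not the authors' RTL) enumerates all 16 input patterns, applies the 2-input sorter of Fig. 1 and the first two stages of the sorting network of Fig. 2, and verifies Eqs. (2)-(3) against the arithmetic sum of the ordered bits X, Y and Z.

```python
from itertools import product

def sorter(a, b):
    """Two-input 1-bit sorter (Fig. 1): OR gives the larger bit, AND the smaller."""
    return a | b, a & b

def half_sorter(bits):
    """First two stages of the 4-input sorting network (Fig. 2)."""
    a, b, c, d = bits
    hi01, lo01 = sorter(a, b)
    hi23, lo23 = sorter(c, d)
    out0, out1 = sorter(hi01, hi23)   # out0 = overall maximum
    out2, out3 = sorter(lo01, lo23)   # out3 = overall minimum
    return out0, out1, out2, out3

for bits in product((0, 1), repeat=4):
    out0, out1, out2, out3 = half_sorter(bits)
    assert out0 == max(bits) and out3 == min(bits)
    for y in (out1, out2):            # either middle bit may be selected
        x, z = out0, out3
        assert x >= y >= z            # X, Y, Z are already ordered
        carry = y                     # Eq. (2): carry after only 2 logic stages
        s = x & ((1 - y) | z)         # Eq. (3); (1 - y) plays the role of NOT Y
        assert x + y + z == 2 * carry + s   # matches the truth table (Table I)
print("carry and sum identities hold for all 16 input patterns")
```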
### _Adjusted Full Adder_
Now let us consider full adders. Usually, a full adder is implemented by (4) and (5). The symbol \(\oplus\) means XOR. A, B and C are inputs; Carry and Sum are outputs. Sum is on the critical path. Formula (5) can be changed to: \(Sum=C\oplus(A\overline{A}+A\overline{B}+\overline{A}B+B\overline{B})\). Note that \(A\overline{A}+A\overline{B}+\overline{A}B+B\overline{B}=(A+B)(\overline{A}+ \overline{B})=(A+B)\overline{(AB)}\). Suppose that \(h_{1}=A+B\) and \(h_{2}=AB\); then formula (5) can be rewritten as formula (6). Rewriting formula (5) as formula (6) does not reduce the number of logical stages, but formula (6) is more convenient for the subsequent analysis.
\[Carry=C(A+B)+AB \tag{4}\]
\[Sum=A\oplus B\oplus C \tag{5}\]
\[Sum=C\overline{(h_{1}\overline{h_{2}})}+\overline{C}(h_{1}\overline{h_{2}}) \tag{6}\]
Fig. 3 shows the logic implementation of formulas (4) and (6). As can be seen in Fig. 3, if the MUX requires 2 logical stages, the sum consumes 4 logical stages. The input signal C is only used by the MUX, and there are two logical stages from C to Sum and two logical stages from C to Carry. That means C is used after A and B, and it does not matter if C arrives up to 2 logical stages late. The rest of this section discusses how to use this feature to optimize latency.
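A similar exhaustive check (again illustrative Python, not the synthesized netlist) confirms that the MUX-based form of Eq. (6), built from \(h_{1}=A+B\) and \(h_{2}=AB\), reproduces the standard full-adder outputs of Eqs. (4)-(5) for all eight input combinations.

```python
from itertools import product

for a, b, c in product((0, 1), repeat=3):
    h1, h2 = a | b, a & b
    p = h1 & (1 - h2)                        # h1 AND NOT h2 = A XOR B
    carry = (c & h1) | h2                    # Eq. (4)
    sum_mux = (c & (1 - p)) | ((1 - c) & p)  # Eq. (6): a 2:1 MUX selected by C
    assert sum_mux == a ^ b ^ c              # matches Eq. (5)
    assert a + b + c == 2 * carry + sum_mux  # matches the arithmetic sum
print("adjusted full adder matches Eqs. (4)-(6) for all 8 input combinations")
```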
### _Implementation of (7,2) Compressor_
Fig. 4 shows the overall design of the (7,2) compressor. Numbers in parentheses indicate how many logical stages are used from the input. Because the special full adder logic discussed above can generate a Carry bit with only 2 logical stages, the input signal \(C_{i2}\) can be fed into the full adder. This saves the whole design one logical stage. Each full adder in Fig. 4 has a circular marker which indicates the “\(C\)” input in Fig. 3. This input bit can arrive up to 2 logical stages later than \(A\) and \(B\).
With this structure, the latency of a (7,2) compressor is reduced to the delay of 11 logical stages.
## III Performance Comparison
In this section, we make a comparison with the previous methods proposed in [6] and [7]. To make the comparison clear, all these designs are used in a binary array of 7 rows and 8 columns. They are used to compress the binary array into 2 rows, just like the second step of a multiplier. Then a vector merge adder, implemented with the Kogge-Stone algorithm, adds them into one row. The only difference among them is the structure of the (7,2) compressor. All the Verilog HDL codes are synthesised with Synopsys Design
Fig. 4: Overall design of (7,2) compressor
Fig. 3: Adjusted full adder logic
Compiler, with TSMC 90nm, 65nm and 28nm processes, to find the minimum delay of each design. All the results are shown in Table II. The methods in [6] and [7] consume 12 logical stages, so their delays are close. The proposed method in this paper consumes 11 logical stages, so it has a lower delay compared to [6] and [7].
As shown in Table II, the delay of this module decreases as the process node shrinks. The implementations with the methods in [6] and [7] have almost the same delay whichever process is used. The implementation with the proposed method has a lower delay than both; the saving is approximately the delay of one logical stage. From the data in Table II, we can conclude that, as process nodes shrink, the influence of logic optimization decreases gradually. Moreover, because the special full adder logic consumes more logic gates than the traditional full adder logic, the area is larger than that of the methods in [6] and [7]. That means that, under such extreme high-performance design conditions, a small delay reduction requires a significant amount of area as a cost.
## IV Conclusions
In this paper, we proposed a special full adder logic. With its help, we proposed a new (7,2) compressor structure. This method reduces the number of logical stages of a (7,2) compressor to 11, fewer than in traditional design methods. The special full adder logic considers 4 bits rather than 3 bits, but obtains a fast carry-out bit. For the same reason, considering 4 bits at the same time consumes more area. Therefore, this method is suitable for high-performance conditions.
|
2301.03061
|
Quantum interference in the resonance fluorescence of a $J=1/2$-$J'=1/2$
atomic system: Quantum beats, nonclassicality, and non-Gaussianity
|
We study theoretically quantum statistical and spectral properties of the
resonance fluorescence of a single atom or system with angular momentum $J=1/2
- J'=1/2$ driven by a monochromatic linearly polarized laser field, due to
quantum interference among its two antiparallel, $\pi$ transitions. A magnetic
field parallel to the laser polarization is applied to break the degeneracy
(Zeeman effect). In the nondegenerate case, the $\pi$ transitions evolve at
different generalized Rabi frequencies, producing quantum beats in the
intensity and the dipole-dipole, intensity-intensity, and quadrature-intensity
correlations. For a strong laser and large Zeeman splitting the beats have mean
and modulation frequencies given by the average and difference, respectively,
of the Rabi frequencies, unlike thebeats studied in many spectroscopic systems,
characterized by a modulated exponential-like decay. Further, the Rabi
frequencies are those of the pairs of sidebands of the Mollow-like spectrum of
the system. In the two-time correlations, the cross contributions, i.e., those
with products of probability amplitudes of the two $\pi$ transitions, have a
lesser role than those from the interference of the probability densities. In
contrast, there are no cross terms in the total intensity. We also consider
nonclassical and non-Gaussian properties of the phase-dependent fluorescence
for the cases of weak to moderate excitation and in the regime of beats. The
fluorescence in the beats regime is nonclassical, mainly from third-order
dipole fluctuations, which reveal them to be also strongly non-Gaussian, and
their quadrature spectra show complex features around the Rabi frequencies. For
small laser and Zeeman detunings, a weak to moderate laser field pumps the
system partially to one of the ground states, showing slow decay in the two
time correlations and a narrow peak in the quadrature spectra.
|
H. M. Castro-Beltrán, O. de los Santos-Sánchez, L. Gutiérrez, A. D. Alcantar-Vidal
|
2023-01-08T15:25:36Z
|
http://arxiv.org/abs/2301.03061v2
|
Quantum interference in the resonance fluorescence of a \(J=1/2-J^{\prime}=1/2\) atomic system: Quantum beats, nonclassicality, and non-Gaussianity
###### Abstract
We study the resonance fluorescence of a system with angular momentum \(J=1/2-J^{\prime}=1/2\) level structure driven by a single, linearly polarized, monochromatic laser field. Quantum interference among the two, antiparallel, \(\pi\) transitions leads to rich results. We develop the article around two broad overlapping themes: (i) the observation of quantum beats in the intensity and the dipole-dipole, intensity-intensity, and quadrature-intensity correlations, when the atom is subject to a strong laser and large Zeeman splittings. The mean and modulation frequencies of the beats are given by the average and difference, respectively, among two close generalized Rabi frequencies related to a Mollow-like spectrum with two pairs of sidebands. (ii) The nonclassical and non-Gaussian properties of phase-dependent fluorescence for the cases of weak to moderate excitation and in the regime of beats. The fluorescence in the beats regime is nonclassical, mainly from the third-order dipole fluctuations, which reveal them to be also strongly non-Gaussian. For weak to moderate driving laser and small detunings and Zeeman splittings the nonclassicality is an interplay of second- (squeezing) and third-order dipole noise.
## I Introduction
Recently, the properties of the resonance fluorescence of a single atomic system with angular momentum transition \(J=1/2-J^{\prime}=1/2\) driven by a monochromatic laser have been the subject of great interest due to the possibility of observing vacuum-induced coherence effects due to interference among the two antiparallel \(\pi\) transitions, emitting into the same frequency range of the electromagnetic vacuum. Here, the \(\pi\) transitions are incoherently coupled, mediated by spontaneous emission in the \(\sigma\) transitions and then excited by the laser. The antiparallel dipoles of the transitions makes it realistic to observe interference effects, while \(V\) and \(\Lambda\) three-level systems require additional preparation because the transitions are perpendicular [1; 2]. Particular attention has been devoted to the spectrum [3; 4; 5; 6], time-energy complementarity [4; 5], Young's interference [7], photon correlations [8], frequency-resolved photon correlations [9], squeezing [10], phase shifts [11], and cooperative effects in photon correlations [12]. The case of additional laser excitation of one of the \(\sigma\) transitions on the spectrum and squeezing has been studied in [13; 14; 15].
Quantum beats are among the more familiar manifestations of quantum interference. They appear in the modulation of the decay by spontaneous emission of multilevel systems due to the energy difference among transitions [2]. So far, few quantum interference experiments have been performed on the \(J=1/2-J^{\prime}=1/2\) system, in this case observing Young-type fringes [7]. Hence, further experiments are desirable. Quantum beats in the intensity are the result of the inability to tell the path of a particular photon when observed by a broadband detector. The beats can also occur in two-time correlations. As a general rule, the initial condition should be a superposition state.
In this paper we investigate theoretically effects of quantum interference on the total intensity and two-time correlations such as dipole-dipole (to calculate spectra), intensity-intensity, intensity-amplitude correlations, and variance of the light emitted into the \(\pi\) transitions of the \(J=1/2-J^{\prime}=1/2\) atomic system driven by a linearly polarized laser and a magnetic field to break the degeneracy. While we put emphasis on the regime of observation of quantum beats, the nonclassical and non-Gaussian properties of the fluorescence are also investigated.
After describing the main features of the model in Section II, we discuss the basic dynamic and stationary properties of the atomic expectation values in Section III. Here, we analyze the previously overlooked time-dependent behavior of the atomic populations. Those of the excited states, for instance, although equal in the steady state, evolve with different Rabi frequencies and amplitudes. This is at the root of the formation of beats in the intensity and the correlations. In the regime of strong laser and magnetic fields these beats are characterized by well-defined oscillations at the _average_ frequency among two generalized Rabi frequencies, modulated at the difference of those frequencies. To observe beats in the intensity both ground state populations must be nonzero initially, ideally equal [1]. Similarly, for the two-time correlations, the vector of initial conditions must
have at least two nonzero terms.
In Section IV we describe the scattered field intensity and quadratures. Here, beats depend only on the interference of the two upper populations in the nondegenerate case, with both lower populations initially nonzero. Cross terms of the opposite \(\pi\) transitions represent interference in the steady state intensity. Then, in Section V, using the dressed states approach, we show that the double sideband spectrum [5] stems from a dipole-dipole correlation with beats, where the terms of addition of single \(\pi\) transitions dominate over those of the cross terms.
In Section VI we study Brown-Twiss photon-photon correlations [16; 17], extending the work of Ref.[8] to the nondegenerate case. Besides the ubiquitous antibunching effect, for weak to moderate laser drivings the interplay of parameters, together with detuning and Zeeman splittings, can make for somewhat involved evolutions, e.g., long decays due to optical pumping in the non-degenerate case. Again, cross terms are minor contributors to the full correlation in the beats regime.
Section VII is devoted to a study of phase-dependent fluctuations by conditional homodyne detection (CHD) [18; 19] in both the temporal and spectral domains. The CHD method is characterized by amplitude-intensity correlations (AIC), which are of third order in the field amplitude. When the atomic operators are decomposed into a mean plus a noise operator the AIC is split into a second-order term which would be a measure of squeezing if the third-order one were negligible. But the latter is not negligible outside the weak field regime of resonance fluorescence, which make the fluctuations non-Gaussian and also nonclassical by the violation of classical inequalities [20]. We obtain the spectra of the total, second- and third-order terms of the AIC. Narrow peaks in the spectra reveal population trapping when detunings favour the long term population or optical pumping of the ground state of the more detuned transition, which in the time domain show the above mentioned long decays. The third-order terms make up most of the beats and thus they are non-Gaussian and nonclassical but not squeezed.
In Section VIII we consider squeezing by means of the variance of fluctuations. As usual, squeezing in resonance fluorescence is small and restricted to weak or moderate Rabi frequencies. Finally, in Section IX we provide a discussion and conclusions, and two Appendices give details on solution methods, initial conditions, and optimal appearance of beats.
## II Model
The system, illustrated in Fig. 1, consists of a two-level atom with transition \(J=1/2\) - \(J=1/2\) and states with magnetic quantum number \(m=\pm J\),
\[|1\rangle = |J,-1/2\rangle,\qquad|2\rangle=|J,1/2\rangle,\] \[|3\rangle = |J,-1/2\rangle,\qquad|4\rangle=|J,1/2\rangle. \tag{1}\]
The matrix elements are
\[\mathbf{d}_{1} = \langle 1|\hat{\mathbf{d}}|3\rangle=-\frac{1}{\sqrt{3}}\mathcal{D} \mathbf{e}_{z},\qquad\mathbf{d}_{2}=\langle 2|\hat{\mathbf{d}}|4\rangle=- \mathbf{d}_{1},\] \[\mathbf{d}_{3} = \langle 2|\hat{\mathbf{d}}|3\rangle=\sqrt{\frac{2}{3}}\mathcal{D} \mathbf{e}_{-},\qquad\mathbf{d}_{4}=\langle 1|\hat{\mathbf{d}}|4\rangle= \mathbf{d}_{3}^{*}, \tag{2}\]
where \(\mathcal{D}\) is the reduced dipole matrix element. We choose the field polarization basis \(\{\mathbf{e}_{z},\mathbf{e}_{-},\mathbf{e}_{+}\}\) (linear, left circular, right circular), where \(\mathbf{e}_{\pm}=\mp(\mathbf{e}_{x}\pm i\mathbf{e}_{y})/2\).
The \(\pi\) transitions, \(|1\rangle-|3\rangle\) and \(|2\rangle-|4\rangle\) (\(m=m^{\prime}\)), are coupled to linearly polarized light and have their dipole moments antiparallel. On the other hand, the \(\sigma\) transitions, \(|1\rangle-|4\rangle\) and \(|2\rangle-|3\rangle\) (\(m\neq m^{\prime}\)), are coupled to circularly polarized light. This configuration can be found, for example, in \({}^{198}\)Hg\({}^{+}\)[3], and \({}^{40}\)Ca\({}^{+}\)[12].
The level degeneracy is removed by the application of a static magnetic field \(B_{z}\) along the \(z\) direction, the Zeeman effect. Note that the energy splittings \(g\mu_{B}B_{z}\) of the upper (\(u\)) and lower (\(\ell\)) levels are different due to unequal Lande \(g\) factors, \(g_{u}\) and \(g_{\ell}\), respectively; \(\mu_{B}\) is Bohr's magneton. The difference Zeeman splitting is
\[\delta=\frac{(g_{u}-g_{\ell})\mu_{B}B_{z}}{\hbar}=\frac{g_{u}-g_{\ell}}{g_{ \ell}}B_{\ell}, \tag{3}\]
where \(B_{\ell}=g_{l}\mu_{B}B_{z}/\hbar\). For \({}^{198}\)Hg\({}^{+}\)\(g_{u}=2/3\) and \(g_{\ell}=2\), so \(\hbar\delta=-(4/3)\mu_{B}B_{z}=-(2/3)\hbar B_{\ell}\).
The atom is driven by a monochromatic laser of frequency \(\omega_{L}\), linearly polarized in the \(z\) direction, propagating in the \(x\) direction,
\[\mathbf{E}_{L}(x,t)=E_{0}e^{i(\omega_{L}t-k_{L}x)}\mathbf{e}_{z}+\text{c.c.}, \tag{4}\]
thus driving only the \(\pi\) transitions.
The free atomic, \(H_{0}\), and interaction, \(V\), parts of the Hamiltonian are, respectively:
\[H_{0} = \hbar\omega_{13}A_{11}+\hbar(\omega_{24}+B_{\ell})A_{22}+\hbar B_ {\ell}A_{44}, \tag{5}\] \[V = \hbar\Omega(A_{13}-A_{24})e^{i\omega_{L}t}+\text{h.c.} \tag{6}\]
Figure 1: Scheme of the \(J=1/2\) – \(J=1/2\) atomic system interacting with a laser driving the \(|1\rangle-|3\rangle\) and \(|2\rangle-|4\rangle\) transitions with Rabi frequency \(\Omega\) and detuning \(\Delta\). There are spontaneous decay rates \(\gamma_{1}\), \(\gamma_{2}\) and \(\gamma_{\sigma}\), vacuum-induced coherence \(\gamma_{12}\), and Zeeman frequency splittings \(B_{\ell}\) and \(B_{u}\).
where \(A_{jk}=|j\rangle\langle k|\) are atomic operators, \(\omega_{13}\) and \(\omega_{24}=\omega_{13}+\delta\) are the frequencies of the \(|1\rangle-|3\rangle\) and \(|2\rangle-|4\rangle\) transitions, respectively, and \(\Omega=E_{0}\mathcal{D}/\sqrt{3}\,\hbar\) is the Rabi frequency. The frequencies of the other transitions are \(\omega_{23}=\omega_{13}-\delta\) and \(\omega_{14}=\omega_{13}-B_{\ell}\). Using the unitary transformation
\[U=\exp{[(A_{11}+A_{22})i\omega_{L}t]}, \tag{7}\]
the Hamiltonian in the frame rotating at the laser frequency is
\[H = U^{\dagger}(H_{0}+V)U, \tag{8}\] \[= -\hbar\Delta A_{11}-\hbar(\Delta-\delta)A_{22}+\hbar B_{\ell}(A_{ 22}+A_{44})\] \[+\hbar\Omega\left[(A_{13}-A_{24})+\text{h.c.}\right],\]
where \(\Delta=\omega_{L}-\omega_{13}\) is the detuning of the laser from the \(|1\rangle-|3\rangle\) resonance transition, and \(\Delta-\delta\) is the detuning on the \(|2\rangle-|4\rangle\) transition.
The excited states decay either in the \(\pi\) transitions emitting photons with linear polarization at rates \(\gamma_{1}=\gamma_{2}\), or in the \(\sigma\) transitions emitting photons of circular polarization at rate \(\gamma_{\sigma}\). There is also a cross-coupling of the excited states by the reservoir, responsible for the quantum interference we wish to study. In general, the decay rates are written as
\[\gamma_{ij}=\frac{\mathbf{d}_{i}\cdot\mathbf{d}_{j}^{*}}{|\mathbf{d}_{i}|| \mathbf{d}_{j}|}\sqrt{\gamma_{i}\gamma_{j}},\qquad i,j=1,2. \tag{9}\]
In particular, we have \(\gamma_{ii}=\gamma_{1}=\gamma_{2}\) for the \(\pi\) transitions and \(\gamma_{33}=\gamma_{44}=\gamma_{\sigma}\) for the \(\sigma\) transitions. Also, given that \(\mathbf{d}_{1}\) and \(\mathbf{d}_{2}\) are antiparallel, \(\gamma_{12}=\gamma_{21}=-\sqrt{\gamma_{1}\gamma_{2}}=-\gamma_{1}\).
The total decay rate is
\[\gamma=\gamma_{1}+\gamma_{\sigma}=\gamma_{2}+\gamma_{\sigma}. \tag{10}\]
The decays for the \(\pi\) and \(\sigma\) transitions occur with the branching fractions \(b_{\pi}\) and \(b_{\sigma}\)[5], respectively,
\[\gamma_{1}=\gamma_{2}=b_{\pi}\gamma, b_{\pi}=1/3, \tag{11a}\] \[\gamma_{\sigma}=b_{\sigma}\gamma, b_{\sigma}=2/3. \tag{11b}\]
## III Master equation
The dynamics of the atom-laser-reservoir system is described by the master equation for the reduced atomic density operator, \(\rho\). In a frame rotating at the laser frequency (\(\tilde{\rho}=U\rho U^{\dagger}\)) it is given by
\[\dot{\tilde{\rho}}=-\frac{i}{\hbar}[H,\tilde{\rho}]+\mathcal{L}_{\gamma}\tilde {\rho}, \tag{12}\]
where \(-(i/\hbar)[H,\tilde{\rho}]\) describes the coherent atom-laser interaction and \(\mathcal{L}_{\gamma}\tilde{\rho}\) describes the damping due to spontaneous emission [5; 21]. Defining
\[S_{1}^{-} = A_{31},\quad S_{2}^{-}=A_{42},\quad S_{3}^{-}=A_{32},\quad S_{4}^{-}=A_{41},\] \[S_{i}^{+} = (S_{i}^{-})^{\dagger}, \tag{13}\]
the dissipative part is written as
\[\mathcal{L}_{\gamma}\tilde{\rho} = \frac{1}{2}\sum_{i,j=1}^{2}\gamma_{ij}\left(2S_{i}^{-}\tilde{\rho }S_{j}^{+}-S_{i}^{+}S_{j}^{-}\tilde{\rho}-\tilde{\rho}S_{i}^{+}S_{j}^{-}\right) \tag{14}\] \[+\frac{\gamma_{\sigma}}{2}\sum_{i=3}^{4}\left(2S_{i}^{-}\tilde{ \rho}S_{i}^{+}-S_{i}^{+}S_{i}^{-}\tilde{\rho}-\tilde{\rho}S_{i}^{+}S_{i}^{-} \right).\]
We now define the Bloch vector of the system as
\[\mathbf{Q}\equiv \left(A_{11},A_{12},A_{13},A_{14},A_{21},A_{22},A_{23},A_{24},\right. \tag{15}\] \[\left.A_{31},A_{32},A_{33},A_{34},A_{41},A_{42},A_{43},A_{44} \right)^{T}.\]
The equations for the expectation values of the atomic operators, \(\langle A_{jk}\rangle=\tilde{\rho}_{kj}\), are the so-called Bloch equations, which we write as
\[\frac{d}{dt}\langle\mathbf{Q}(t)\rangle=\mathbf{M}_{B}\langle\mathbf{Q}(t)\rangle, \tag{16}\]
where \(\mathbf{M}_{B}\) is a matrix of coeficients of the full master equation, and the formal solution is
\[\langle\mathbf{Q}(t)\rangle=e^{\mathbf{M}_{B}t}\langle\mathbf{Q}(0)\rangle. \tag{17}\]
Since we are interested only in properties of the fluorescence emitted in the \(\pi\) transitions we use the simplifying fact, already noticed in [8], that these Bloch equations can be split into two decoupled homogeneous sets. Set 1 contains the equations for the populations and the coherences of the coherently driven \(\pi\) transitions; these are
\[\langle\dot{A}_{11}\rangle = -\gamma\langle A_{11}\rangle+i\Omega(\langle A_{31}\rangle- \langle A_{13}\rangle),\] \[\langle\dot{A}_{13}\rangle = -\left(\frac{\gamma}{2}+i\Delta\right)\langle A_{13}\rangle-i \Omega(\langle A_{11}\rangle-\langle A_{33}\rangle),\] \[\langle\dot{A}_{22}\rangle = -\gamma\langle A_{22}\rangle-i\Omega(\langle A_{42}\rangle-\langle A _{24}\rangle),\] \[\langle\dot{A}_{24}\rangle = -\left(\frac{\gamma}{2}+i(\Delta-\delta)\right)\langle A_{24} \rangle+i\Omega(\langle A_{22}\rangle-\langle A_{44}\rangle),\] \[\langle\dot{A}_{31}\rangle = -\left(\frac{\gamma}{2}-i\Delta\right)\langle A_{31}\rangle+i \Omega(\langle A_{11}\rangle-\langle A_{33}\rangle),\] \[\langle\dot{A}_{33}\rangle = \gamma_{1}\langle A_{11}\rangle+\gamma_{\sigma}\langle A_{22}\rangle-i \Omega(\langle A_{31}\rangle-\langle A_{13}\rangle),\] \[\langle\dot{A}_{42}\rangle = -\left(\frac{\gamma}{2}-i(\Delta-\delta)\right)\langle A_{42} \rangle-i\Omega(\langle A_{22}\rangle-\langle A_{44}\rangle),\] \[\langle\dot{A}_{44}\rangle = \gamma_{\sigma}\langle A_{11}\rangle+\gamma_{2}\langle A_{22} \rangle+i\Omega(\langle A_{42}\rangle-\langle A_{24}\rangle). \tag{18}\]
with Bloch vector
\[\mathbf{R}\equiv\left(A_{11},A_{13},A_{22},A_{24},A_{31},A_{33},A_{42},A_{44} \right)^{T} \tag{19}\]
and a corresponding matrix \(\mathbf{M}\) of coefficients. Equations (18) do not depend on \(\gamma_{12}\), the vacuum-induced coupling of the upper levels, and depend on the applied magnetic field only through the difference of Zeeman splittings, \(\delta\).
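For readers who wish to reproduce the dynamics numerically, the Python sketch below encodes Set 1, Eqs. (18), as the matrix \(\mathbf{M}\) acting on the Bloch vector of Eq. (19) and propagates it with the matrix exponential, as in Eq. (17). The parameter values are illustrative placeholders in the strong-driving, large-\(|\delta|\) regime (they are not the values used in the figures); the quantity computed at the end, \(\langle A_{11}(t)\rangle+\langle A_{22}(t)\rangle\), is the intensity of Eq. (29a) in units of \(f_{\pi}^{2}(r)\).

```python
import numpy as np
from scipy.linalg import expm

gamma = 1.0                       # total decay rate, Eq. (10) (time in units of 1/gamma)
gamma1 = gamma2 = gamma / 3.0     # b_pi = 1/3, Eq. (11a)
gamma_s = 2.0 * gamma / 3.0       # b_sigma = 2/3, Eq. (11b)
Omega, Delta, delta = 10.0, 0.0, -8.0   # illustrative strong-field parameters

# Bloch vector R = (A11, A13, A22, A24, A31, A33, A42, A44), Eq. (19)
idx = {k: i for i, k in enumerate(["11", "13", "22", "24", "31", "33", "42", "44"])}
M = np.zeros((8, 8), dtype=complex)

def add(row, col, val):           # coefficient of <A_col> in d<A_row>/dt
    M[idx[row], idx[col]] += val

# Eqs. (18), line by line
add("11", "11", -gamma); add("11", "31", 1j*Omega); add("11", "13", -1j*Omega)
add("13", "13", -(gamma/2 + 1j*Delta)); add("13", "11", -1j*Omega); add("13", "33", 1j*Omega)
add("22", "22", -gamma); add("22", "42", -1j*Omega); add("22", "24", 1j*Omega)
add("24", "24", -(gamma/2 + 1j*(Delta - delta))); add("24", "22", 1j*Omega); add("24", "44", -1j*Omega)
add("31", "31", -(gamma/2 - 1j*Delta)); add("31", "11", 1j*Omega); add("31", "33", -1j*Omega)
add("33", "11", gamma1); add("33", "22", gamma_s); add("33", "31", -1j*Omega); add("33", "13", 1j*Omega)
add("42", "42", -(gamma/2 - 1j*(Delta - delta))); add("42", "22", -1j*Omega); add("42", "44", 1j*Omega)
add("44", "11", gamma_s); add("44", "22", gamma2); add("44", "42", 1j*Omega); add("44", "24", -1j*Omega)

R0 = np.zeros(8, dtype=complex)
R0[idx["33"]] = R0[idx["44"]] = 0.5       # equal ground-state populations at t = 0

times = np.linspace(0.0, 6.0, 1201)
Rt = np.array([expm(M * t) @ R0 for t in times])          # Eq. (17)
intensity = np.real(Rt[:, idx["11"]] + Rt[:, idx["22"]])  # Eq. (29a), in units of f_pi^2
# For strong Omega and large |delta| the trace of `intensity` exhibits the beats.
```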
The steady state solutions, for which we introduce the
short notation \(\alpha_{jk}=\langle A_{jk}\rangle_{st}\), are
\[\alpha_{11} =\alpha_{22}=\frac{\Omega^{2}}{2D}, \tag{20a}\] \[\alpha_{33} =\frac{\Omega^{2}+\gamma^{2}/4+\Delta^{2}}{2D},\] (20b) \[\alpha_{44} =\frac{\Omega^{2}+\gamma^{2}/4+(\Delta-\delta)^{2}}{2D},\] (20c) \[\alpha_{13} =\frac{\Omega(\Delta+i\gamma/2)}{2D},\] (20d) \[\alpha_{24} =\frac{\Omega(\delta-\Delta-i\gamma/2)}{2D},\] (20e) \[\alpha_{kj} =\alpha_{jk}^{*}.\]
where
\[D=2\Omega^{2}+\frac{\gamma^{2}+\delta^{2}}{4}+\left(\Delta-\frac{\delta}{2} \right)^{2}. \tag{21}\]
Note also that in the degenerate system (\(\delta=0\)) \(\alpha_{33}=\alpha_{44}\) and that \(\alpha_{31}=-\alpha_{42}\), where the minus sign arises from the fact that the dipole moments \(\mathbf{d}_{1}\) and \(\mathbf{d}_{2}\) are antiparallel.
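As a quick consistency check of Eqs. (20)-(21), one can verify numerically (reusing the matrix \(\mathbf{M}\) and the illustrative parameters of the sketch above) that the analytic steady state is annihilated by \(\mathbf{M}\) and that the populations sum to one.

```python
# Analytic steady state, Eqs. (20a)-(20e) and (21)
D = 2*Omega**2 + (gamma**2 + delta**2)/4 + (Delta - delta/2)**2
a11 = a22 = Omega**2 / (2*D)
a33 = (Omega**2 + gamma**2/4 + Delta**2) / (2*D)
a44 = (Omega**2 + gamma**2/4 + (Delta - delta)**2) / (2*D)
a13 = Omega*(Delta + 1j*gamma/2) / (2*D)
a24 = Omega*(delta - Delta - 1j*gamma/2) / (2*D)

# Ordering matches R = (A11, A13, A22, A24, A31, A33, A42, A44), with a_kj = conj(a_jk)
R_ss = np.array([a11, a13, a22, a24, np.conj(a13), a33, np.conj(a24), a44])
assert np.allclose(M @ R_ss, 0.0, atol=1e-12)    # stationary: dR/dt = 0
assert np.isclose(a11 + a22 + a33 + a44, 1.0)    # populations sum to one
```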
Set 2 contains the equations for the coherences of the \(\sigma\) transitions and those among both upper and both lower levels,
\[\mathbf{R_{2}}\equiv(A_{12},A_{14},A_{21},A_{23},A_{32},A_{34},A_{41},A_{43}) ^{T}\;. \tag{22}\]
The equations for their expected values do depend on \(B_{\ell}\) and \(\gamma_{12}\). These coherences vanish because the \(\sigma\) transitions are driven incoherently (\(\langle\{A_{14},A_{23},A_{32},A_{41}\}\rangle\)), i.e., by spontaneous emission, or because they are mediated by those \(\sigma\) transitions (\(\langle\{A_{12},A_{21},A_{34},A_{43}\}\rangle\)). For completeness, we write the steady state results:
\[\alpha_{12}=\alpha_{34}=\alpha_{14}=\alpha_{23}=0,\qquad\alpha_{kj}=\alpha_{ jk}^{*}. \tag{23}\]
The properties of the fluorescence of the \(\pi\) transitions, the subject matter of this article, do not depend on the equations for Set 2. Only the second- and third-order amplitude-intensity correlations and the dipole correlation for the spectrum of the \(\sigma\) transitions would require the full set of Bloch equations.
We gain valuable information on the nontrivial dynamics of the atomic system from single-time expectation values, apparently ignored in the previous literature on the system. In Fig. 2 we show the populations for several particular cases, all with the atom initially in state \(|3\rangle\). In the degenerate case, \(\delta=0\), the upper populations reach opposite phases by the end of the first Rabi cycle, Fig. 2(a). This is understandable since the electron occupation of, say, state \(|1\rangle\) implies not to be in state \(|2\rangle\), and viceversa. The same occurs for the lower populations. Next, we show three situations for the nondegenerate case with \(\delta<0\) (as it is for \({}^{198}\text{Hg}^{+}\)). In Fig. 2(b) the laser is slightly detuned above the \(|1\rangle-|3\rangle\) transition, but highly detuned from the \(|2\rangle-|4\rangle\) transition; the oscillations get out of phase and most of the population ends up in state \(|4\rangle\) by optical pumping. In Fig. 2(c) the laser is detuned below the \(|1\rangle-|3\rangle\) transition, and the \(|2\rangle-|4\rangle\) transition is now on resonance with the laser; again, the oscillations are out of phase but most of the population ends up now in state \(|3\rangle\). In Fig. 2(d) we extend the previous case but with stronger applied magnetic field, thus the non-degeneracy is more evident; the large detuning on both transitions makes it recover the opposite phases of the degenerate case.
In Fig. 3 we show the steady state populations as a function of the Rabi frequency; the other parameters are the same as in Fig. 2. For strong fields the populations tend to be equal (1/4), but arrive at that limit at different rates; for instance, for large detunings on both transitions, Fig. 3(d), it takes larger fields, as compared to the degenerate case, Fig. 3(a). On the other hand, for small detunings and weak-moderate fields, when one
Figure 3: Steady-state populations as a function of Rabi frequency: \(\alpha_{11}=\alpha_{22}\) (dashed-red), \(\alpha_{33}\) (solid-black), and \(\alpha_{44}\) (dots-blue). All other parameters as in Fig. 2.
transition is closer to resonance than the other, the lower state of the more detuned transition is more populated, as seen in Figs. 3 (b) and (c).
## IV The scattered field
In this Section we present the main dynamical and stationary properties of the field scattered by the atom, with emphasis on the \(\pi\) transitions.
### Single-Time and Stationary Properties
The positive-frequency part of the emitted field operator is [21; 5]
\[\hat{E}^{+}(\mathbf{r},t)=\hat{E}^{+}_{\mathrm{free}}(\mathbf{r},t)+\hat{E}^{+} _{S}(\mathbf{r},\hat{t}), \tag{24}\]
where \(\hat{E}^{+}_{\mathrm{free}}(\mathbf{r},t)\) is the free-field part, which does not contribute to normally ordered correlations, hence we omit it in further calculations, and
\[\hat{E}^{+}_{S}(\mathbf{r},t)=-\frac{\eta}{r}\sum_{i=1}^{4}\omega_{i}^{2}\hat{ \mathbf{r}}\times(\hat{\mathbf{r}}\times\mathbf{d}_{i})S_{i}^{-}(\hat{t}) \tag{25}\]
is the dipole source field operator in the far-field zone, where \(\hat{t}=t-r/c\) is the retarded time and \(\eta=(4\pi\epsilon_{0}c^{2})^{-1}\). Since \(\omega_{i}\gg\delta\), we may approximate the four transition frequencies by a single one, \(\omega_{0}\), in Eq. (25), but cannot do so at the level of decay rates, Rabi frequencies, and splittings.
Making \(\hat{\mathbf{r}}=\mathbf{e}_{y}\) the direction of observation and using Eq. (2) we have
\[\hat{E}^{+}_{S}(\mathbf{r},\hat{t})=\hat{E}^{+}_{\pi}(\mathbf{r},\hat{t})\, \mathbf{e}_{z}+\hat{E}^{+}_{\sigma}(\mathbf{r},\hat{t})\,\mathbf{e}_{x}, \tag{26}\]
i.e., the fields scattered from the \(\pi\) and \(\sigma\) transitions are polarized in the \(\mathbf{e}_{z}\) and \(\mathbf{e}_{x}\) directions, respectively, where
\[\hat{E}^{+}_{\pi}(\mathbf{r},\hat{t}) =f_{\pi}(r)\left[A_{31}(\hat{t})-A_{42}(\hat{t})\right], \tag{27a}\] \[\hat{E}^{+}_{\sigma}(\mathbf{r},\hat{t}) =f_{\sigma}(r)\left[A_{32}(\hat{t})-A_{41}(\hat{t})\right], \tag{27b}\]
are the positive-frequency source field operators of the \(\pi\) and \(\sigma\) transitions, and
\[f_{\pi}(r)=-\eta\omega_{1}^{2}\mathcal{D}/\sqrt{3}r,\qquad f_{\sigma}(r)= \sqrt{2}f_{\pi}(r), \tag{28}\]
are their geometric factors.
The intensity in the \(\pi\) transitions is given by
\[I_{\pi}(\mathbf{r},\hat{t}) =\langle\hat{E}^{-}_{\pi}(\mathbf{r},\hat{t})\cdot\hat{E}^{+}_{ \pi}(\mathbf{r},\hat{t})\rangle\] \[=f_{\pi}^{2}(r)\langle A_{13}(\hat{t})A_{31}(\hat{t})+A_{24}(\hat {t})A_{42}(\hat{t})\rangle\] \[=f_{\pi}^{2}(r)\langle A_{11}(\hat{t})+A_{22}(\hat{t})\rangle, \tag{29a}\]
while in the steady state is
\[I^{st}_{\pi}=f_{\pi}^{2}(r)\left[\alpha_{11}+\alpha_{22}\right]=f_{\pi}^{2}(r)\,\frac{\Omega^{2}}{D}. \tag{29b}\]
With the atom initially in the single state \(|3\rangle\), adding the excited state populations in Eq. (29a) gives simply \(I_{\pi}(\mathbf{r},\hat{t})=f_{\pi}^{2}(r)\langle A_{11}(\hat{t})\rangle\), i.e., without the contribution of \(\langle A_{22}(\hat{t})\rangle\). More interesting is the case where the initial condition is \(\langle A_{33}(0)\rangle=\langle A_{44}(0)\rangle=1/2\), shown in Fig. 4 (see the populations \(\langle A_{11}(t)\rangle\) and \(\langle A_{22}(t)\rangle\) in the insets). The modulation in the intensity is reminiscent of the quantum beats in the spontaneous decay of the \(V\) three-level system [1; 2]. These beats are basically due to the inability to tell which of the \(\pi\) transitions a photon comes from. This is the standard Young-type interference [4; 5; 7]. The main requirement is that the initial conditions for both ground states are nonzero (see Appendix B).
More interesting, though, is the case of strong resonant laser and magnetic fields, where the laser is detuned far from the \(|2\rangle-|4\rangle\) resonance frequency, shown in Fig. 5. Due to the laser detuning, the population \(\langle A_{22}(t)\rangle\) has a larger frequency and smaller amplitude than that of \(\langle A_{11}(t)\rangle\), as seen in the insets. Remarkably well-defined wave packets or beats are observed due to the interference of the fluorescence of both \(\pi\) transitions with close Rabi frequencies, with clear average and modulation frequencies (see Fig. 5a). The beats get scrambled with larger frequency and amplitude differences, Fig. 5b.
Save for the decay, these beats are more like the classic textbook ones, described by a modulation _and_ an average frequency, unlike the beats from spontaneous emission or weak resonance fluorescence from two or more closely separated levels. Henceforth, we reserve the moniker _beats_ to those due to strong applied fields. Further analyses of the beats are given in the next Sections, as they show up
also in two-time correlations with particular features.
Similarly, for the \(\sigma\) transitions we have
\[I_{\sigma}(\mathbf{r},\hat{t}) = \langle\hat{E}_{\sigma}^{-}(\mathbf{r},\hat{t})\cdot\hat{E}_{\sigma }^{+}(\mathbf{r},\hat{t})\rangle \tag{30a}\] \[= f_{\sigma}^{2}(r)[\langle A_{23}(\hat{t})A_{32}(\hat{t})+A_{14}( \hat{t})A_{41}(\hat{t})\rangle]\] \[= f_{\sigma}^{2}(r)[\langle A_{11}(\hat{t})+A_{22}(\hat{t})\rangle],\] \[I_{\sigma}^{st} = f_{\sigma}^{2}(r)\left[\alpha_{11}+\alpha_{22}\right], \tag{30b}\]
also showing beats with intensity twice that of the \(\pi\) transitions given that \(f_{\sigma}^{2}(r)=2f_{\pi}^{2}(r)\).
The field quadrature operator at any time is
\[\hat{E}_{\pi,\phi}(\mathbf{r},\hat{t}) = \frac{1}{2}\left(E_{\pi}^{-}(\mathbf{r},\hat{t})e^{-i\phi}+E_{ \pi}^{+}(\mathbf{r},\hat{t})e^{i\phi}\right) \tag{31}\] \[= f_{\pi}(r)(S_{1,\phi}-S_{2,\phi}),\]
where \(\phi=0,\pi/2\) are the quadrature phases we consider, and
\[S_{1,\phi} = \frac{1}{2}\left(A_{13}e^{-i\phi}+A_{31}e^{i\phi}\right), \tag{32a}\] \[S_{2,\phi} = \frac{1}{2}\left(A_{24}e^{-i\phi}+A_{42}e^{i\phi}\right). \tag{32b}\]
The mean quadrature field is given by
\[\langle\hat{E}_{\pi,\phi}\rangle_{st} = \frac{f_{\pi}(r)}{2}\left[\left(\alpha_{13}-\alpha_{24}\right)e^ {-i\phi}+\left(\alpha_{31}-\alpha_{42}\right)e^{i\phi}\right]\] \[= f_{\pi}(r)\text{Re}\left[\left(\alpha_{13}-\alpha_{24}\right)e^ {-i\phi}\right]\] \[= f_{\pi}(r)\text{Re}\left[\frac{\Omega\left(\Delta+(i\gamma- \delta)/2\right)}{D}e^{-i\phi}\right],\]
### Intensity and Quadrature Fluctuations
Here we introduce the intensity and quadratures of the field in terms of atomic fluctuation operators \(\Delta A_{jk}=A_{jk}-\langle A_{jk}\rangle_{st}\), such that
\[\langle A_{kl}A_{mn}\rangle=\alpha_{kl}\alpha_{mn}+\langle\Delta A_{kl}\Delta A _{mn}\rangle. \tag{34}\]
Only the \(\pi\) transitions have nonvanishing coherence terms (\(\alpha_{13},\alpha_{24}\neq 0\)). The fluorescence in the \(\sigma\) transitions is fully incoherent (\(\alpha_{14}=\alpha_{23}=0\)), so its intensity is given by Eq. (30b). In the remainder of this section we deal only with the \(\pi\) transition. The quadrature operators are then written as
\[\hat{E}_{\pi,\phi}(\mathbf{r},\hat{t})=f_{\pi}(r)[\alpha_{\pi,\phi}+\Delta S_{\pi,\phi}(\hat{t})], \tag{35a}\]
where
\[\alpha_{\pi,\phi} = \frac{1}{2}(\alpha_{31}-\alpha_{42})e^{i\phi}+\frac{1}{2}(\alpha_{13}-\alpha_{24})e^{-i\phi} = \text{Re}\left[\frac{\Omega\left(\Delta+(i\gamma-\delta)/2\right)}{D}e^{-i\phi}\right], \tag{35b}\]
\[\Delta S_{\pi,\phi} = \frac{1}{2}(\Delta A_{31}-\Delta A_{42})e^{i\phi}+\frac{1}{2}(\Delta A_{13}-\Delta A_{24})e^{-i\phi}.\]
From Eqs. (29b) and (34) we write the steady state intensity in terms of products of dipole and dipole fluctuation operator expectation values,
\[I_{\pi}^{st}(\mathbf{r})=f_{\pi}^{2}(r)\left[I_{\pi,0}^{coh}+I_{\pi,0}^{inc}+I _{\pi,cross}^{coh}+I_{\pi,cross}^{inc}\right], \tag{36}\]
where
\[I_{\pi,0}^{coh} = |\langle A_{13}\rangle_{st}|^{2}+|\langle A_{24}\rangle_{st}|^{2}, \tag{37a}\] \[I_{\pi,0}^{inc} = \langle\Delta A_{13}\Delta A_{31}\rangle+\langle\Delta A_{24} \Delta A_{42}\rangle,\] (37b) \[I_{\pi,cross}^{coh} = -\langle A_{13}\rangle_{st}\langle A_{42}\rangle_{st}-\langle A_{ 24}\rangle_{st}\langle A_{31}\rangle_{st}\] (37c) \[= -2\text{Re}\left(\langle A_{13}\rangle_{st}\langle A_{42}\rangle _{st}\right),\] \[I_{\pi,cross}^{inc} = -\langle\Delta A_{13}\Delta A_{42}\rangle-\langle\Delta A_{24} \Delta A_{31}\rangle\] (37d) \[= -2\text{Re}\left(\langle\Delta A_{13}\Delta A_{42}\rangle\right).\]
Superscripts \(coh\) and \(inc\) stand, respectively, for the coherent (depending on mean dipoles) and incoherent (depending on noise terms) parts of the emission. The subscript \(0\) labels the terms formed by adding single-transition products, and the subscript \(cross\) labels the terms with products of the two \(\pi\) transitions, i.e., the steady-state interference part of the intensity. In terms of atomic expectation values these intensities are:
\[I^{coh}_{\pi,0} =|\alpha_{13}|^{2}+|\alpha_{24}|^{2} \tag{38a}\] \[=\frac{\Omega^{2}}{4D^{2}}\left[\frac{\gamma^{2}}{2}+\Delta^{2}+( \delta-\Delta)^{2}\right],\] \[I^{inc}_{\pi,0} =\alpha_{11}+\alpha_{22}-|\alpha_{13}|^{2}-|\alpha_{24}|^{2}\] (38b) \[=\frac{\Omega^{2}}{D^{2}}\left[2\Omega^{2}-\frac{\gamma^{2}}{4}- \Delta^{2}-\delta^{2}\right],\] \[I^{coh}_{\pi,cross} =-2\text{Re}\left(\alpha_{13}\alpha_{42}\right)\] (38c) \[=\frac{\Omega^{2}}{2D^{2}}\left[\frac{\gamma^{2}}{4}+\Delta( \Delta-\delta)\right],\] \[I^{inc}_{\pi,cross} =2\text{Re}\left(\alpha_{13}\alpha_{42}\right)=-I^{coh}_{\pi,cross}, \tag{38d}\]
The sum of these terms is, of course, the total intensity, Eq. (29a). As usual in resonance fluorescence, the coherent and incoherent intensities are similar only in the weak field regime, \(\Omega\leq\gamma\). Here, in particular, the term \(I^{inc}_{\pi,0}\) (no interference) becomes much larger than the others for strong driving.
### Degree of Interference - Coherent Part
In Ref. [5], a measure of the effect of interference in the coherent part of the intensity was defined as
\[I^{coh}_{\pi,0}+I^{coh}_{\pi,cross} =I^{coh}_{\pi,0}(1+C(\delta)),\] \[C(\delta)=\frac{I^{coh}_{\pi,cross}}{I^{coh}_{\pi,0}} =\frac{\gamma^{2}/4+\Delta(\Delta-\delta)}{\gamma^{2}/4+\delta^{2}/ 4+(\Delta-\delta/2)^{2}}, \tag{39}\]
independent of the Rabi frequency and shown in Fig. 6(a).
Some special cases are found analytically:
\[C(0) =1,\qquad\delta=0, \tag{40a}\] \[C(\delta_{0}) =0,\qquad\delta_{0}=\Delta[1+(\gamma/2\Delta)^{2}],\] (40b) \[C(\delta_{min}) =\frac{-1}{1+\gamma^{2}/2\Delta^{2}},\qquad\delta_{min}=2\Delta[ 1+(\gamma/2\Delta)^{2}],\] (40c) \[C(\delta^{\pm}_{1/2}) =1/2,\qquad\delta^{\pm}_{1/2}=-\Delta\pm\sqrt{3\Delta^{2}+(\gamma ^{2}/2)}. \tag{40d}\]
In the degenerate case, \(C(\delta=0)=1\) means perfect constructive interference. That is because at \(\delta=0\) both \(\pi\) transitions (and both \(\sigma\) transitions) share the same reservoir environment. As \(\delta\) increases, the reservoir overlap decreases, and so does the interference. Negative values of \(C\) indicate destructive interference; its minimum is given by \(\delta_{min}\). For large detunings, \(\Delta^{2}\gg\gamma^{2}\), we have
\[\delta_{0}=\Delta,\qquad\delta_{min}=2\Delta,\qquad\delta^{\pm}_{1/2}=-\Delta \pm\sqrt{3}\,|\Delta|. \tag{40e}\]
We have used the special cases \(\delta=\{0,\delta_{0},\delta_{min}\}\) as a guide to obtain many of the figures in this paper.
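The special cases in Eqs. (40a)-(40d) are straightforward to verify numerically. As an illustration only (not part of the original analysis), a minimal Python sketch evaluating \(C(\delta)\) of Eq. (39) could read:

```python
import numpy as np

def C(delta, Delta, gamma=1.0):
    """Degree of interference of the coherent intensity, Eq. (39)."""
    num = gamma**2 / 4 + Delta * (Delta - delta)
    den = gamma**2 / 4 + delta**2 / 4 + (Delta - delta / 2) ** 2
    return num / den

gamma, Delta = 1.0, 3.0
delta_0   = Delta * (1 + (gamma / (2 * Delta)) ** 2)      # zero of C, Eq. (40b)
delta_min = 2 * Delta * (1 + (gamma / (2 * Delta)) ** 2)  # minimum of C, Eq. (40c)

print(C(0.0, Delta))                                   # 1: perfect constructive interference
print(abs(C(delta_0, Delta)) < 1e-12)                  # True
print(np.isclose(C(delta_min, Delta),
                 -1 / (1 + gamma**2 / (2 * Delta**2))))  # True: value of Eq. (40c)
```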
### Degree of Interference - Incoherent Part
Likewise, we define a measure, \(K(\delta)\), of the effect of interference in the intensity's incoherent part,
\[I^{inc}_{\pi,0}+I^{inc}_{\pi,cross} =I^{inc}_{\pi,0}(1+K(\delta)),\] \[K(\delta)=\frac{I^{inc}_{\pi,cross}}{I^{inc}_{\pi,0}} =\frac{\gamma^{2}/4+\Delta(\Delta-\delta)}{2\left[\gamma^{2}/4+\delta^{2}+ \Delta^{2}-2\Omega^{2}\right]}. \tag{41}\]
Unlike \(C(\delta)\), \(K(\delta)\) also depends on the Rabi frequency as \(\Omega^{-2}\), since fluctuations increase with laser intensity. Special cases are:
\[K(0) =\frac{\gamma^{2}/4+\Delta^{2}}{2\left[\gamma^{2}/4+\Delta^{2}-2 \Omega^{2}\right]},\qquad\delta=0, \tag{42a}\] \[K(\delta) =0,\quad\delta=\Delta+\frac{\gamma^{2}}{4\Delta}\quad\text{or} \quad\Omega\gg\gamma,\Delta,\delta. \tag{42b}\]
The behavior of \(K(\delta)\) with \(\Delta\) is more subtle. It is basically required that \(\Delta\sim\Omega\) in order to preserve the shape seen in Fig. 6(b), in which case the minima for \(C(\delta)\) and \(K(\delta)\) are very similar. On-resonance, for example, \(\Omega\) should be no larger than \(0.35\gamma\). Also, we can infer that the beats are little affected by the interference term unless \(\Delta\gtrsim\Omega\gg\gamma\).
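Again as an illustration only, the corresponding check for \(K(\delta)\) of Eq. (41), including its common zero with \(C(\delta)\) at \(\delta=\Delta+\gamma^{2}/4\Delta\) and its dependence on the Rabi frequency, might look like:

```python
import numpy as np

def K(delta, Delta, Omega, gamma=1.0):
    """Degree of interference of the incoherent intensity, Eq. (41)."""
    num = gamma**2 / 4 + Delta * (Delta - delta)
    den = 2 * (gamma**2 / 4 + delta**2 + Delta**2 - 2 * Omega**2)
    return num / den

gamma, Delta = 1.0, 3.0
delta_0 = Delta + gamma**2 / (4 * Delta)     # same zero as C(delta), Eq. (42b)
for Omega in (0.25, 1.0, 4.0):
    # K(0) changes with Omega, while the zero at delta_0 does not
    print(Omega, K(0.0, Delta, Omega), abs(K(delta_0, Delta, Omega)) < 1e-12)
```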
## V Two-time dipole correlations and power spectrum
The resonance fluorescence spectrum of the \(J=1/2\to J=1/2\) atomic system was first considered in [3] and then very thoroughly in [4; 5]. Thus, here we only consider basic definitions and issues related to the observation of beats.
The stationary Wiener-Khintchine power spectrum is given by the Fourier transform of the field autocorrelation function
\[S_{\pi}(\omega)=\text{Re}\int_{0}^{\infty}d\tau e^{-i\omega\tau}\langle\hat{E}^{-}_{\pi}(0)\hat{E}^{+}_{\pi}(\tau)\rangle, \tag{43}\]
such that \(\int_{-\infty}^{\infty}S_{\pi}(\omega)d\omega=I_{\pi}^{st}\). By writing the atomic operators in Eq. (27a) as \(A_{jk}(t)=\alpha_{jk}+\Delta A_{jk}(t)\), we separate the spectrum in two parts: a coherent one,
\[S_{\pi}^{coh}(\omega) =\text{Re}\int_{0}^{\infty}e^{-i\omega\tau}d\tau\left[I_{\pi,0}^{coh }+I_{\pi,cross}^{coh}\right]\] \[=\pi\left[I_{\pi,0}^{coh}+I_{\pi,cross}^{coh}\right]\delta(\omega)\] \[=\frac{\pi\Omega^{2}}{D^{2}}\left[\frac{\gamma^{2}}{4}+\left( \Delta-\frac{\delta}{2}\right)^{2}\right]\delta(\omega), \tag{44}\]
due to elastic scattering, where \(I_{\pi,0}^{coh}\) and \(I_{\pi,cross}^{coh}\) are given by Eqs. (38) (a) and (c), respectively; and an incoherent part,
\[S_{\pi}^{inc}(\omega)=\text{Re}\int_{0}^{\infty}d\tau e^{-i\omega\tau}\langle \Delta\hat{E}_{\pi}^{-}(0)\Delta\hat{E}_{\pi}^{+}(\tau)\rangle,\]
specifically,
\[S_{\pi}^{inc}(\omega) =\text{Re}\int_{0}^{\infty}d\tau e^{-i\omega\tau}\left[\langle \Delta A_{13}(0)\Delta A_{31}(\tau)\rangle\right.\] \[\left.+\langle\Delta A_{24}(0)\Delta A_{42}(\tau)\rangle-\langle \Delta A_{13}(0)\Delta A_{42}(\tau)\rangle\right.\] \[\left.-\langle\Delta A_{24}(0)\Delta A_{31}(\tau)\rangle\right], \tag{45}\]
due to atomic fluctuations. An outline of the numerical calculation is given in Appendix A.
The dipole correlation \(\langle\hat{E}_{\pi}^{-}(0)\hat{E}_{\pi}^{+}(\tau)\rangle\) and the incoherent spectrum in the strong driving regime and strong nondegeneracy (large \(\delta\)) are shown in Fig. 7. The spectrum (inset) displays a central peak and two pairs of Mollow-like sidebands [22] with peaks at the Rabi sidebands \(\pm\Omega_{1}\) and \(\pm\Omega_{2}\), while the correlation features decaying quantum beats due to the closeness of the Rabi peaks.
As usual in the strong-field regime, the dressed-system approach allows one to discern the origin of the peaks from the transitions among the dressed states, to find their positions [5], and thus to find the frequencies of the beats. The generalized Rabi frequencies are
\[\Omega_{1} =\mathcal{E}_{1}^{+}-\mathcal{E}_{1}^{-}=\sqrt{4\Omega^{2}+ \Delta^{2}}, \tag{46a}\] \[\Omega_{2} =\mathcal{E}_{2}^{+}-\mathcal{E}_{2}^{-}=\sqrt{4\Omega^{2}+( \delta-\Delta)^{2}}, \tag{46b}\]
where
\[\mathcal{E}_{1}^{\pm} =-\frac{\Delta}{2}\pm\frac{1}{2}\sqrt{4\Omega^{2}+\Delta^{2}}, \tag{47a}\] \[\mathcal{E}_{2}^{\pm} =B_{\ell}+\frac{\delta-\Delta}{2}\pm\frac{1}{2}\sqrt{4\Omega^{2}+ (\delta-\Delta)^{2}}, \tag{47b}\]
are the eigenvalues of the Hamiltonian (8). Due to the spontaneous decays these frequencies would have to be corrected, but they are very good approximations in the relevant strong-field limit. Indeed, we notice that \(\Omega_{1}\) and \(\Omega_{2}\) are very close to the imaginary parts of the eigenvalues \(\lambda_{2,3}\) and \(\lambda_{4,5}\), respectively, of matrix \(\mathbf{M}\), shown in Table 1.
The beats are the result of the superposition of waves at the frequencies \(\Omega_{1}\) and \(\Omega_{2}\) of the spectral sidebands, with average frequency
\[\Omega_{av}=\frac{\Omega_{2}+\Omega_{1}}{2}=\frac{\sqrt{4\Omega^{2}+(\delta- \Delta)^{2}}+\sqrt{4\Omega^{2}+\Delta^{2}}}{2}, \tag{48}\]
and beat or modulation frequency
\[\Omega_{beat}=\frac{\Omega_{2}-\Omega_{1}}{2}=\frac{\sqrt{4\Omega^{2}+(\delta -\Delta)^{2}}-\sqrt{4\Omega^{2}+\Delta^{2}}}{2}. \tag{49}\]
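For orientation, the frequencies entering Eqs. (46), (48) and (49) are easy to tabulate. The short Python sketch below (illustrative only) reproduces, for the parameters of Table 1, dressed-state values that coincide with the imaginary parts quoted there up to the small decay-induced corrections mentioned above:

```python
import numpy as np

def rabi_frequencies(Omega, Delta, delta):
    """Generalized Rabi, average, and beat frequencies, Eqs. (46), (48), (49)."""
    W1 = np.sqrt(4 * Omega**2 + Delta**2)
    W2 = np.sqrt(4 * Omega**2 + (delta - Delta) ** 2)
    return W1, W2, (W2 + W1) / 2, (W2 - W1) / 2

for delta in (-8.0, -15.0):                 # in units of gamma, with Omega = 9, Delta = 0
    W1, W2, W_av, W_beat = rabi_frequencies(9.0, 0.0, delta)
    print(delta, round(W1, 3), round(W2, 3), round(W_av, 3), round(W_beat, 3))
# delta = -8 : W1 = 18.0, W2 ~ 19.698  (cf. Im lambda_{2,3}, lambda_{4,5} of Table 1)
# delta = -15: W1 = 18.0, W2 ~ 23.431
```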
Now, we can identify the origin and modulation frequency of the beats in the time-dependent intensity, Eq. (29a), since the excited state populations \(\langle A_{11}(t)\rangle\) and \(\langle A_{22}(t)\rangle\) oscillate at the generalized Rabi frequencies \(\Omega_{1}\) and \(\Omega_{2}\), respectively, with initial conditions given by a nonzero superposition of ground state populations at \(t=0\). In the case of the dipole correlation \(\langle\hat{E}_{\pi}^{-}(0)\hat{E}_{\pi}^{+}(\tau)\rangle\), however, the initial conditions are given by products of stationary atomic expectation values, most of them the coherences \(\alpha_{13},\alpha_{24}\), which become very small in the regime of beats. Thus, as seen in Table 1, the terms \(\langle\Delta A_{13}(0)\Delta A_{31}(\tau)\rangle\) and \(\langle\Delta A_{24}(0)\Delta A_{42}(\tau)\rangle\) are
\begin{table}
\begin{tabular}{c c c} Eigenvalues & \(\delta=-8\gamma\) & \(\delta=-15\gamma\) \\ \hline \(\lambda_{1}\) & \(-0.749386+0i\) & \(-0.836531+0i\) \\ \(\lambda_{2}\) & \(-0.583099-18.0094i\) & \(-0.583308-17.9981i\) \\ \(\lambda_{3}\) & \(-0.583099+18.0094i\) & \(-0.583308+17.9981i\) \\ \(\lambda_{4}\) & \(-0.569785-19.6808i\) & \(-0.5492-23.4257i\) \\ \(\lambda_{5}\) & \(-0.569785+19.6808i\) & \(-0.5492+23.4257i\) \\ \(\lambda_{6}\) & \(-0.5+0i\) & \(-0.5+0i\) \\ \(\lambda_{7}\) & \(-0.444846+0i\) & \(-0.398452+0i\) \\ \(\lambda_{8}\) & \(0+0i\) & \(0+0i\) \\ \hline \hline Init. cond. & & \\ \(\langle\Delta A_{13}\Delta A_{31}\rangle\) & \(0.20836+0i\) & \(0.14734+0i\) \\ \(\langle\Delta A_{24}\Delta A_{42}\rangle\) & \(0.174014+0i\) & \(0.086982+0i\) \\ \(\langle\Delta A_{13}\Delta A_{42}\rangle\) & \(0.000134+0.002146i\) & \(0.000067+0.002011i\) \\ \(\langle\Delta A_{24}\Delta A_{31}\rangle\) & \(0.000134-0.002146i\) & \(0.000067-0.002011i\) \\ \end{tabular}
\end{table}
Table 1: Eigenvalues of matrix \(\mathbf{M}/\gamma\) and initial conditions of the correlations in Eq. (45) for \(\Omega=9\gamma\) and \(\Delta=0\).
much larger than the cross terms \(\langle\Delta A_{13}(0)\Delta A_{42}(\tau)\rangle\) and \(\langle\Delta A_{24}(0)\Delta A_{31}(\tau)\rangle\), so the beats are basically due to the interference of those dominant terms.
## VI Photon-photon correlations
The standard method to investigate intensity fluctuations of a light source uses Hanbury Brown-Twiss photon-photon correlations [16; 17]. The conditional character of this type of measurement makes it nearly free of detector inefficiencies, unlike a single-detector measurement of the photoelectron probability distribution. In Ref. [8] the correlations of two photons from the \(\pi\) transitions were studied, albeit only for the degenerate case. In this paper we extend that study to the case of nondegenerate states. These correlations are defined as
\[g_{\pi}^{(2)}(\tau)=\frac{G_{\pi}^{(2)}(\tau)}{G_{\pi}^{(2)}(\tau \rightarrow\infty)} \tag{50}\]
where, using Eq. (27a) for the field operators,
\[G_{\pi}^{(2)}(\tau) = \langle\hat{E}_{\pi}^{-}(0)\hat{E}_{\pi}^{-}(\tau)\hat{E}_{\pi}^{+}(\tau)\hat{E}_{\pi}^{+}(0)\rangle \tag{51a}\] \[= f_{\pi}^{4}(r)\langle[A_{13}(0)-A_{24}(0)][A_{11}(\tau)+A_{22}(\tau)]\] \[\times[A_{31}(0)-A_{42}(0)]\rangle,\]
and
\[G_{\pi}^{(2)}(\tau\rightarrow\infty)=\left(I_{\pi}^{st}\right)^{2}=f_{\pi}^{4}(r)\left(\alpha_{11}+\alpha_{22}\right)^{2} \tag{51b}\]
is the normalization factor. \(G_{\pi}^{(2)}(\tau)\) can be further reduced, since \(\langle A_{13}(0)A_{jk}(\tau)A_{42}(0)\rangle=\langle A_{24}(0)A_{jk}(\tau)A_{31}(0)\rangle=0\), because these terms have vanishing initial conditions.
Figure 8 shows \(g_{\pi}^{(2)}(\tau)\) for moderate values of the Rabi frequency (near saturation) and the same sets of detunings \(\Delta\) and \(\delta\) of Fig. 2. As usual in resonance fluorescence, the correlation shows antibunching, \(g_{\pi}^{(2)}(0)=0\); that is, a single atom cannot emit two photons simultaneously. Unlike two-level-atom resonance fluorescence, the correlation is not simply the normalized population of the excited state, nor is it only the sum of the correlations of each single \(\pi\) transition. Besides the terms \(\langle A_{13}(0)A_{11}(\tau)A_{31}(0)\rangle\) and \(\langle A_{24}(0)A_{22}(\tau)A_{42}(0)\rangle\), which are also out of phase, as seen from the time-dependent populations of their excited states (Fig. 2), there are six cross terms in the full correlation. In the nondegenerate case the multiple contributions can cause, in some cases, a quite irregular evolution. For instance, as we will see in the next Section, the slow decay of the correlation when the laser drives the atom near saturation, but below the \(\omega_{13}\) resonance transition, is related to a very narrow peak in the spectrum.
The case of strong driving and large nondegeneracy is shown in Fig. 9, featuring quantum beats. There are several effects resulting from the increase of the nondegeneracy factor \(\delta\): (i) the larger number of visible wave packets; (ii) both average and beat frequencies approach one another, so the wave packets get shorter for larger photon-pair intervals \(\tau\), containing very few of the fast oscillations, as seen in Fig. 9(d); (iii) the wavepackets are initially slightly lifted above the \(g^{(2)}(\tau)=1\) value.
## VII Quadrature fluctuations
Squeezing, the reduction of noise in one quadrature below that of a coherent state at the expense of the other, is the hallmark of phase-dependent fluctuations of the electromagnetic field [cite]. It is usually measured by balanced homodyne detection (BHD), but low quantum detector efficiencies degrade the weak squeezing produced in resonance fluorescence and cavity QED systems. One alternative our group has used is conditional homodyne detection (CHD) [18; 19], which correlates a quadrature amplitude on the cue of an intensity measurement. CHD measures a third-order amplitude-intensity correlation (AIC) which, in the weak driving limit, reduces to the second-order one and thus allows for measuring squeezing. Being a conditional measurement, it is nearly free of detector inefficiencies.
Figure 9: Photon-photon correlations showing beats in the strong field limit, \(\Omega=9\gamma\), \(\Delta=0\), and large Zeeman splittings. The horizontal line helps to see that the wave packet is slightly raised.
While the original goal of CHD was to measure the weak squeezing in cavity QED [18; 19], it was soon realized that nonzero third-order fluctuations of the amplitude provide clear evidence of non-Gaussian fluctuations and higher-order field nonclassicality. In the present work the fluctuations are mainly third-order ones, due to near and above saturation excitation, and violate classical bounds. We thus explore the phase-dependent fluctuations under conditions of quantum interference following our recent work [20; 23; 24].
### Amplitude-Intensity Correlations
In CHD a field quadrature \(E_{\phi}\) is measured by BHD on the cue of photon counts in a separate detector, where \(\phi=0,\pi/2\) is the phase of the local oscillator. This is characterized by a correlation between the amplitude and the intensity of the field,
\[h_{\pi,\phi}(\tau)=\frac{H_{\pi,\phi}(\tau)}{H_{\pi,\phi}(\tau\to\infty)}, \tag{52}\]
where
\[H_{\pi,\phi}(\tau)=\langle:\hat{E}_{\pi}^{-}(0)\hat{E}_{\pi}^{+}(0)\hat{E}_{\pi,\phi}(\tau):\rangle, \tag{53a}\]
the dots \(::\) indicating time and normal operator orderings, and
\[H_{\pi,\phi}(\tau\to\infty) =I_{\pi}^{st}\langle E_{\pi,\phi}\rangle_{st} =f_{\pi}^{3}(r)\left[\alpha_{11}+\alpha_{22}\right]\mathrm{Re}\left[(\alpha_{13}-\alpha_{24})\,e^{-i\phi}\right] =f_{\pi}^{3}(r)\frac{\Omega^{3}}{D^{2}}\mathrm{Re}\left[(\Delta+(i\gamma-\delta)/2)\,e^{-i\phi}\right]. \tag{53b}\]
The correlation separates into second- and third-order parts in the dipole fluctuations, \(h_{\pi,\phi}(\tau)=1+h_{\pi,\phi}^{(2)}(\tau)+h_{\pi,\phi}^{(3)}(\tau)\), where the corresponding unnormalized correlations are
\[H_{\pi,\phi}^{(2)}(\tau) =2\mathrm{Re}\left[\langle\hat{E}_{\pi}^{+}\rangle_{st}\langle\Delta\hat{E}_{\pi}^{-}(0)\Delta\hat{E}_{\pi,\phi}(\tau)\rangle\right]\] \[=\mathrm{Re}\left\{(\alpha_{31}-\alpha_{42})\left[\langle(\Delta A_{13}(0)-\Delta A_{24}(0))\left(\Delta A_{13}(\tau)-\Delta A_{24}(\tau)\right)\rangle e^{-i\phi}\right.\right.\] \[\left.\left.+\langle(\Delta A_{13}(0)-\Delta A_{24}(0))\left(\Delta A_{31}(\tau)-\Delta A_{42}(\tau)\right)\rangle e^{i\phi}\right]\right\},\] (57) \[H_{\pi,\phi}^{(3)}(\tau) =\langle\Delta\hat{E}_{\pi}^{-}(0)\Delta\hat{E}_{\pi,\phi}(\tau)\Delta\hat{E}_{\pi}^{+}(0)\rangle\] \[=\mathrm{Re}\left\{e^{i\phi}\langle[\Delta A_{13}(0)-\Delta A_{24}(0)]\left[\Delta A_{31}(\tau)-\Delta A_{42}(\tau)\right]\left[\Delta A_{31}(0)-\Delta A_{42}(0)\right]\rangle\right\}. \tag{58}\]
The initial conditions of the correlations are given in Appendix A.
From \(h_{\pi,\pi/2}(0)=0\) we can obtain analytically the initial values of the second- and third-order terms,
\[h_{\pi,\pi/2}^{(2)}(0) =1-\frac{(2\Delta-\delta)^{2}+\gamma^{2}}{2D}, \tag{59}\] \[h_{\pi,\pi/2}^{(3)}(0) =\frac{(2\Delta-\delta)^{2}+\gamma^{2}}{2D}-2, \tag{60}\]
where \(D\) is given by Eq. (21).
The AIC being a function of odd order in the field amplitude, we rightly expect a richer landscape than that of the intensity correlations, all the more so when one considers quantum interference and the complex parameter space. For instance, the correlation can not only take on negative values but also break classical bounds [18; 19]:
\[0 \leq h_{\phi}(\tau)-1\leq 1\,, \tag{61a}\] \[|h_{\phi}^{(2)}(\tau)-1| \leq |h_{\phi}^{(2)}(0)-1|\leq 1\,, \tag{61b}\]
where the second line is valid only for weak fields such that \(h_{\phi}^{(3)}(\tau)\sim 0\). These classical bounds are stronger criteria for nonclassicality of the emitted field than squeezed light measurements, the more familiar probing of phase-dependent fluctuations. A detailed hierarchy of nonclassicality measures for higher-order correlation functions is presented in Refs. [25; 26]. In Ref. [20] an inequality was obtained that considers the full \(h_{\phi}(\tau)\) by calculating the AIC for a field in a coherent state,
\[-1\leq h_{\phi}(\tau)\leq 1\,. \tag{62}\]
For a meaningful violation of Poisson statistics, \(h_{\phi}(\tau)\) must be outside these bounds.
Also, \(h_{\phi}(\tau)\) is a measure of non-Gaussian fluctuations, here of third order in the field fluctuations. Resonance fluorescence is a particularly strong case of non-Gaussian noise because it is a highly nonlinear, stationary, nonequilibrium process [20; 23; 24; 27; 28], thanks also to its small Hilbert space. This makes resonance fluorescence unsuitable for a quasiprobability distribution approach.
### Fluctuations Spectra
Since quadrature fluctuations, such as squeezing, are often studied in the frequency domain we now define the spectrum of the amplitude-intensity correlations:
\[S_{\pi,\phi}(\omega)=8\gamma_{1}\int_{0}^{\infty}d\tau\cos\left(\omega\tau \right)[h_{\pi,\phi}(\tau)-1] \tag{63}\]
which, following Eqs. (52) and (55), can be decomposed into terms of second- and third-order in the dipole fluctuations
\[S_{\pi,\phi}^{(q)}(\omega)=8\gamma_{1}\int_{0}^{\infty}d\tau\cos\left(\omega \tau\right)h_{\pi,\phi}^{(q)}(\tau), \tag{64}\]
where \(q=2,3\), so that \(S_{\pi,\phi}(\omega)=S_{\pi,\phi}^{(2)}(\omega)+S_{\pi,\phi}^{(3)}(\omega)\).
As mentioned above, the AIC was devised initially to measure squeezing without the issue of imperfect detection efficiencies. Obviously, \(h_{\pi,\phi}(\tau)\) and \(S_{\pi,\phi}(\omega)\) are not measures of squeezing. They measure a third-order moment in the field's amplitude, while squeezing is a second-order one in its fluctuations. The so-called spectrum of squeezing is the one for \(q=2\), with the advantage of the AIC of not depending on the efficiency of detection. Squeezing is signaled by frequency intervals where \(S_{\pi,\phi}^{(2)}(\omega)<0\). As a further note, the full incoherent spectrum, Eq. (45), can be obtained by adding the squeezing spectra of both quadratures [29],
\[S_{\pi}^{inc}(\omega)=\frac{1}{8\gamma_{1}}\left[S_{\pi,0}^{(2)}(\omega)+S_{ \pi,\pi/2}^{(2)}(\omega)\right]. \tag{65}\]
### Results
We now show plots of the AICs and their spectra in Figs. 10-12 for the \(\phi=\pi/2\) quadrature and the same sets of detunings \(\Delta,\delta\) of Fig. 2, and weak to moderate Rabi frequencies, \(\gamma/4<\Omega<\gamma\). With the three parameters \(\Omega\), \(\Delta\), and \(\delta\), the landscape of effects is vast.
We first notice a few general features seen in \(h_{\pi,\pi/2}(\tau)\), Fig. 10. With increasing Rabi frequencies, detunings, and Zeeman splittings we observe the clear breakdown of the classical inequalities besides the one at \(\tau=0\). Correspondingly, in the spectra, the extrema get displaced and broadened. Now, we want to single out the case of nondegeneracy with small detuning on the \(|1\rangle-|3\rangle\) transition but large on the \(|2\rangle-|4\rangle\) one, \(\Delta=-\delta=2\gamma\) (green-dashed line). For a weak field, \(\Omega=\gamma/4\), the AIC does not have a regular evolution for short times but it does decay very slowly, with a corresponding very narrow spectral peak. The slow decay is also clearly visible in the photon correlation, Fig. 8a. As we mentioned in Sect. III regarding Fig. 2b, state \(|4\rangle\) ends up with a large portion of the steady state population due to optical pumping; it is not quite a trapping state, so there is no electron shelving _per se_, as argued in [5]. This effect is washed out for larger Rabi frequencies, which allow for faster recycling of the populations. To a lesser degree, slow decay and a sharp peak occur for opposite signs of \(\Delta\) and \(\delta\).
Figure 10: Amplitude-intensity correlations (left panel) and spectra (right panel) for the \(\phi=\pi/2\) quadrature in the weak-moderate field limit. Parameters and line styles are the same as in Fig. 8: \(\Delta=\delta=0\) (solid-black); \(\Delta=2\gamma\) and \(\delta=-2\gamma\) (dots-red); \(\Delta=-2\gamma\) and \(\delta=-2\gamma\) (dashed-green); \(\Delta=-2\gamma\) and \(\delta=-4\gamma\) (dot-dashed-blue).
The splitting of the AIC and spectra into components of second and third order in the fluctuations, Figs. 11, 12, helps to understand better the quadrature fluctuations. For the second-order ones we have the squeezing spectra: around \(\omega=0\) for \(\Delta=0\) and small Rabi frequencies, \(\Omega<\gamma/4\); and in sidebands for larger detunings, Rabi frequencies and Zeeman splittings. In \(h^{(2)}_{\pi,\pi/2}(\tau)\) there is a reduction in amplitudes and nonclassicality for increasing Rabi frequencies except for the case of opposite signs of detuning and difference Zeeman splitting. Note that the sharp spectral peak in the latter case takes up most of the corresponding peak in Fig. 10. This is because both \(\pi\) transitions are largely detuned from the laser, keeping \(\Omega\) small.
Increasing the laser strength, the third-order effects overcome the second-order ones, for instance in the size of the features. Also, a comparison of Figs. 11 and 12 shows that \(h^{(3)}_{\phi}(\tau)\) is mainly responsible for the breakdown of the classical bounds when the driving field is on or above saturation. Moreover, we see that the slow-decay/sharp-peak feature is mainly a third-order effect.
To close this Section, the AIC and spectra for very strong fields and large Zeeman splittings, \(\Omega,|\delta|\gg\gamma\), are shown in Fig. 13. The AIC shows beats as in the photon correlations. Unlike those in \(g^{(2)}(\tau)\), these wavepackets oscillate around \(h(\tau)=1\). Because the regime is that of strong excitation, the third-order component clearly dominates, making the fluorescence notably non-Gaussian and clearly violating the classical inequalities. The spectral peaks are localized around the Rabi frequencies \(\pm\Omega_{1},\pm\Omega_{2}\). Studies of the spectrum of squeezing for the \(J=1/2-J=1/2\) system were reported in [10]. Those authors worked with a less strong laser but large detuning and large Zeeman splittings, observing the double sidebands at \(\pm\Omega_{1},\pm\Omega_{2}\), but no mention or hint of beats was made.
## VIII Variance
The variance is a measure of the total noise in a quadrature; it is defined as
\[V_{\phi}=\langle:\left(\Delta E_{\phi}\right)^{2}:\rangle=\text{Re}\left[e^{-i \phi}\langle\Delta\hat{E}^{-}\Delta\hat{E}_{\phi}\rangle_{st}\right], \tag{66}\]
Figure 11: Second-order component of the AIC and spectra of Fig. 10.
Figure 12: Third-order component of the AIC and spectra of Fig. 10.
and is related to the spectrum of squeezing as
\[V_{\phi}=\frac{1}{4\pi\gamma\eta}\int_{-\infty}^{\infty}d\omega S_{\phi}^{(2)}( \omega). \tag{67}\]
where \(\eta\) is the detector efficiency. The maximum value of \(V_{\phi}\) is \(1/4\), obtained when there is very strong driving, when almost all the emitted light is incoherent. Negative values of the variance are a signature of squeezing but, unlike the quadrature spectra, the squeezing is the total one in the field, independent of frequency.
For the \(\pi\) transitions we have
\[V_{\pi,\phi} =\frac{f_{\pi}^{2}(r)}{2}\text{Re}\left[-(\alpha_{13}-\alpha_{24} )^{2}e^{-2i\phi}\right.\] \[\left.+(\alpha_{11}+\alpha_{22}-|\alpha_{13}-\alpha_{24}|^{2}) \right], \tag{68}\] \[=\frac{f_{\pi}^{2}(r)}{2}\frac{\Omega^{2}}{D}\left[1-\frac{[(2 \Delta-\delta)\cos\phi+\gamma\sin\phi]^{2}}{2D}\right]. \tag{69}\]
For \(\phi=\pi/2\) and \(\phi=0\) we have, respectively,
\[V_{\pi,\pi/2} =\frac{f_{\pi}^{2}(r)}{2}\frac{\Omega^{2}}{D}\left[1-\frac{\gamma ^{2}}{2D}\right], \tag{70a}\] \[V_{\pi,0} =\frac{f_{\pi}^{2}(r)}{2}\frac{\Omega^{2}}{D}\left[1-\frac{(2 \Delta-\delta)^{2}}{2D}\right], \tag{70b}\]
where \(D\) is given by Eq. (21).
In Fig. 14 we plot the variances of the out-of-phase \(\phi=\pi/2\) (left panel) and in-phase \(\phi=0\) (right panel) quadratures. The interplay of parameters is a complex one, but we mostly use the ones of previous figures. For \(\phi=\pi/2\) and \(\Delta=0\), as usual in resonance fluorescence systems, squeezing is restricted to a small range of Rabi frequencies, detunings, and Zeeman splittings. For \(\phi=0\) nonzero laser or Zeeman detunings are necessary to produce squeezing, with a strong dependence on their sign: on-resonance (not shown) there is no squeezing, as for a two-level atom; in Fig. 14(d) the laser is tuned below that transition, \(\Delta=-2\gamma\), and there is no squeezing (positive variance) but the variance is reduced for large \(\delta\); in Fig. 14(e) the laser is tuned above the transition, \(\Delta=-2\gamma\), and there is squeezing for larger Rabi frequencies. Large values of \(\delta\) tend to reduce the variance, be it positive or negative.
### Out-of-phase quadrature
We now discuss a complementary view of the variance. For \(\phi=\pi/2\) we can identify the Rabi frequency interval within which squeezing takes place,
\[0<\Omega<\frac{1}{2}\sqrt{\gamma^{2}/2-\delta^{2}/2-2(\Delta-\delta/2)^{2}}, \tag{71}\]
and the Rabi frequency for maximum squeezing is
\[\tilde{\Omega}_{\pi/2}=\frac{1}{2}\sqrt{\frac{\gamma^{4}/2-2[(\delta-\Delta)^ {2}+\Delta^{2}]^{2}}{3\gamma^{2}+2[(\delta-\Delta)^{2}+\Delta^{2}]^{2}}}. \tag{72}\]
Thus, the variance at \(\tilde{\Omega}_{\pi/2}\) is
\[V_{\pi,\pi/2}^{(\tilde{\Omega}_{\pi/2})}(\Delta=0,\delta)=\frac{f_{\pi}^{2}(r )}{16}\frac{(\gamma^{4}/2-2\delta^{4})(\delta^{2}-\gamma^{2})}{\gamma^{2}( \gamma^{2}+2\delta^{2})(\delta^{2}+\gamma^{2})},\] (73a) for \[\Delta=0\] and \[|\delta/\gamma|<1/\sqrt{2}\] ; \[V_{\pi,\pi/2}^{(\tilde{\Omega}_{\pi/2})}(\Delta,\delta=0)=\frac{f_{\pi}^{2}(r )}{16}\frac{(\gamma^{4}/2-8\Delta^{4})(4\Delta^{2}-\gamma^{2})}{\gamma^{2}( \gamma^{2}+4\Delta^{2})^{2}}, \tag{73b}\]
for \(\delta=0\) and \(|\Delta/\gamma|<1/\sqrt{2}\); and the maximum total squeezing is obtained at \(\Delta=\delta=0\),
\[V_{\pi,\pi/2}^{(\tilde{\Omega}_{\pi/2})}(0,0)=-\frac{f_{\pi}^{2}(r)}{32}, \qquad\tilde{\Omega}_{\pi/2}=\frac{\gamma}{2\sqrt{6}}. \tag{73c}\]
For \(\phi=\pi/2\) squeezing is limited to elliptical regions of weak driving and small detunings \(\Delta\) and \(\delta\):
\[2\delta^{2}+8\Omega^{2} <\gamma^{2},\qquad\Delta=0, \tag{74a}\] \[4\Delta^{2}+8\Omega^{2} <\gamma^{2},\qquad\delta=0. \tag{74b}\]
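A quick numerical consistency check of Eqs. (73a)-(73c) and of the weak-driving boundaries implied by Eqs. (74a)-(74b) is sketched below (illustrative only; \(f_{\pi}^{2}(r)\) is set to 1):

```python
import numpy as np

f2, gamma = 1.0, 1.0        # f2 stands for f_pi^2(r)

def V_opt_vs_delta(delta):  # Eq. (73a): Delta = 0
    return f2 / 16 * (gamma**4 / 2 - 2 * delta**4) * (delta**2 - gamma**2) / (
        gamma**2 * (gamma**2 + 2 * delta**2) * (delta**2 + gamma**2))

def V_opt_vs_Delta(Delta):  # Eq. (73b): delta = 0
    return f2 / 16 * (gamma**4 / 2 - 8 * Delta**4) * (4 * Delta**2 - gamma**2) / (
        gamma**2 * (gamma**2 + 4 * Delta**2) ** 2)

# both reduce to the maximum total squeezing -f2/32 of Eq. (73c) at Delta = delta = 0
print(np.isclose(V_opt_vs_delta(0.0), -f2 / 32), np.isclose(V_opt_vs_Delta(0.0), -f2 / 32))
# and the squeezing vanishes at the weak-driving boundaries of Eqs. (74a)-(74b)
print(V_opt_vs_delta(gamma / np.sqrt(2)), V_opt_vs_Delta(gamma / 2))   # both ~ 0
```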
### In-phase quadrature
For \(\phi=0\), squeezing is obtained in the Rabi frequency interval, for \(\delta=0\),
\[0<\Omega<\frac{1}{\sqrt{2}}\sqrt{\Delta^{2}-\gamma^{2}/4},\qquad|\Delta|>\gamma /2, \tag{75}\]
Figure 14: Variance of the quadratures of the fluorescence of the \(\pi\) transitions: left panel for \(\phi=\pi/2\) and right panel for \(\phi=0\). (a,b,d,e) as a function of Rabi frequency and (c,f) as a function of detuning. In all cases \(\delta=0\) is given by a solid-black line, and \(\delta=-0.5\gamma\) by a dashed-red line; the dotted-blue line is \(\delta=-2\gamma\) in (a,b,d,e) and \(\delta=-\gamma\) in (c,f). Additionally, (a) \(\Delta=0\), (b) \(\Delta=-2\gamma\), (c) \(\Omega=0.2\gamma\), (d) \(\Delta=0\), (e) \(\Delta=2\gamma\), (f) \(\Omega=0.8\gamma\).
with maximum squeezing at the Rabi frequency
\[\tilde{\Omega}_{0}=\frac{1}{2\sqrt{2}}\sqrt{\frac{16\Delta^{4}-\gamma^{4}}{12\Delta^{2}+\gamma^{2}}}, \tag{76}\]
requiring finite detuning from both \(\pi\) transitions (\(\Delta\neq 0\)) and stronger driving, \(\Omega\sim\gamma\) [see Fig. 14(d)-(f)].
Thus, the variance at \(\tilde{\Omega}_{0}\) is
\[V^{(\tilde{\Omega}_{0})}_{\pi,0}(\Delta)=-\frac{f_{\pi}^{2}(r)}{128}\frac{(4\Delta^{2}-\gamma^{2})^{2}}{\Delta^{2}(4\Delta^{2}+\gamma^{2})},\quad|\Delta|\geq\gamma/2. \tag{77}\]
This expression gets the asymptotic value
\[\lim_{\Delta\to\infty}V^{(\tilde{\Omega}_{0})}_{\pi,0}=-\frac{f_{\pi}^{2}(r)}{ 32}, \tag{78}\]
which is the same as that for the \(\pi/2\) quadrature. The region for squeezing obeys the relation
\[4\Delta^{2}-8\Omega^{2}>\gamma^{2}. \tag{79}\]
So, to obtain squeezing in this quadrature it is necessary to have detunings \(|\Delta|>\gamma/2\) for any Rabi frequency.
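As a sanity check of Eqs. (76)-(78) in the form written above (again only an illustrative sketch, with \(f_{\pi}^{2}(r)=1\)):

```python
import numpy as np

f2, gamma = 1.0, 1.0

def V_opt_inphase(Delta):   # Eq. (77), valid for delta = 0 and |Delta| >= gamma/2
    return -f2 / 128 * (4 * Delta**2 - gamma**2) ** 2 / (Delta**2 * (4 * Delta**2 + gamma**2))

print(V_opt_inphase(gamma / 2))                  # 0: squeezing threshold of Eq. (75)
print(V_opt_inphase(50 * gamma), -f2 / 32)       # approaches the asymptotic value of Eq. (78)
```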
## IX Discussion and Conclusions
We have studied several properties of the resonance fluorescence of the \(\pi\) transitions in a \(J=1/2-J=1/2\) angular momentum atomic system driven by a linearly polarized laser field and a magnetic field along the \(\pi\) transition to lift the level degeneracies. Interference among the various transition amplitudes creates a rich landscape of effects. Most notable among our results is the observation of quantum beats when the atom is subject to large laser and magnetic fields. In this regime, two close Rabi frequencies interfere, giving rise to a well-defined modulation of the fast oscillations. These Rabi frequencies are the source of the two pairs of sidebands in the incoherent part of the power spectrum [5] and in the squeezing spectrum [10]. We studied beats in the total intensity and two-time functions such as the dipole-dipole, intensity-intensity and intensity-amplitude correlations. In the beats' regime the role of vacuum-induced coherence is small because the upper levels are very separated due to very large difference Zeeman splitting.
Before the beats we considered the previously overlooked time-dependent populations and reviewed aspects of the known stationary ones. The fact that the upper state populations evolve out of phase should not be a surprise. This, and nonzero initial population of both ground states (in contrast to nonzero populations of excited states for spontaneous emission), are major factors in the interference among the terms in the intensity. Except for very strong laser fields, the steady state populations depend strongly on the difference Zeeman splitting.
The AIC also permits us to quantify the degree of non-Gaussianity; the fluctuations of third order in the field quadrature amplitude, due to the strong atom-laser nonlinearity, dominate over the second-order ones for strong driving. The beats are in the strongly non-Gaussian regime.
The correlations show nonclassical features of the fluorescence light such as antibunching, \(g^{(2)}(0)=0\), and violation of classical inequalities in the amplitude-intensity correlations, Eqs. (61 -62). We studied squeezing using the variance, i. e., the total noise in a quadrature, as well as using the second-order part of the spectrum. In the regime of beats there is squeezing, near the effective Rabi frequencies, but none in the total noise.
For a system with many parameters the interplay among them is a complex one, making the interpretation of results nontrivial. Thus, for most of our plots we chose parameters in two groups: i) where they are relatively small, \(\Omega,\Delta,\delta\sim\gamma\), chosen to illustrate several degrees of vacuum-induced coherence; and ii) where they are large, \(\Omega,\Delta,\delta\gg\gamma\), and quantum beats are revealed. Overall, particular care must be taken regarding detunings. On the one hand, large difference Zeeman splitting means that the excited levels would be very separated and interact with different frequency portions of the reservoir, hence diminishing the vacuum-induced coherence. On the other, large laser-atom detunings, which might increase the VIC, mean reduced fluorescence rates, which may also be detrimental in measurements. The beats, then, would be better observed if \(\Delta\leq\gamma\) and \(\delta\) of just several \(\gamma\) in the strong field regime.
## X Acknowledgments.
The authors thank Dr. Ricardo Roman-Ancheyta and Dr. Iran Ramos-Prieto for useful comments at an early stage of the project. ADAV thanks CONACYT, Mexico, for scholarship No. 804318.
ORCID numbers: Hector M. Castro-Beltran [https://orcid.org/0000-0002-3400-7652](https://orcid.org/0000-0002-3400-7652), Octavio de los Santos-Sanchez [https://orcid.org/0000-0002-4316-0114](https://orcid.org/0000-0002-4316-0114), Luis Gutierrez [https://orcid.org/0000-0002-5144-4782](https://orcid.org/0000-0002-5144-4782),
## Appendix A Time-Dependent Matrix Solutions and Spectra
The two-time photon correlations under study have the general form \(\langle\mathbf{W}(\tau)\rangle=\langle O_{1}(0)\mathbf{R}(\tau)O_{2}(0)\rangle\), where \(\mathbf{R}\) is the Bloch vector and \(O_{1,2}\) are system operators. The same applies to correlations of fluctuation operators \(\Delta\mathbf{R}\), \(\Delta O_{1,2}\). Using the quantum regression formula [30], the correlations obey the equation
\[\langle\dot{\mathbf{W}}(\tau)\rangle=\mathbf{M}\langle\mathbf{W}(\tau)\rangle, \tag{80}\]
which has the formal solution
\[\langle\mathbf{W}(\tau)\rangle=e^{\mathbf{M}\tau}\langle\mathbf{W}(0)\rangle, \tag{81}\]
where \(\mathbf{M}\) is given by
\[\mathbf{M}=\left(\begin{array}{cccccccc}-\gamma&-i\Omega&0&0&i\Omega&0&0&0\\ -i\Omega&-\left(\frac{\gamma}{2}+i\Delta\right)&0&0&0&i\Omega&0&0\\ 0&0&-\gamma&i\Omega&0&0&-i\Omega&0\\ 0&0&i\Omega&-\left(\frac{\gamma}{2}+i(\Delta-\delta)\right)&0&0&0&-i\Omega\\ i\Omega&0&0&0&-\left(\frac{\gamma}{2}-i\Delta\right)&-i\Omega&0&0\\ \gamma_{1}&i\Omega&\gamma_{\sigma}&0&-i\Omega&0&0&0\\ 0&0&-i\Omega&0&0&0&-\left(\frac{\gamma}{2}-i(\Delta-\delta)\right)&i\Omega\\ \gamma_{\sigma}&0&\gamma_{2}&-i\Omega&0&0&i\Omega&0\end{array}\right). \tag{10}\]
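For numerical work it is convenient to build \(\mathbf{M}\) directly. The sketch below (illustrative only) assembles the matrix in the ordering of the Bloch vector \(\mathbf{R}\) used here; the branching ratios \(\gamma_{1}=\gamma_{2}=\gamma-\gamma_{\sigma}\) with \(\gamma_{\sigma}=2\gamma/3\) are an assumption made for the example, and only \(\gamma_{1}+\gamma_{\sigma}=\gamma_{2}+\gamma_{\sigma}=\gamma\) is used in the conservation check:

```python
import numpy as np

def bloch_matrix(Omega, Delta, delta, gamma=1.0, gamma_sigma=2/3):
    """Matrix M above, acting on R = (A11, A13, A22, A24, A31, A33, A42, A44)."""
    g, gs = gamma, gamma_sigma
    g1 = g2 = g - gs          # assumed branching; only g1 + gs = g2 + gs = g matters below
    iO = 1j * Omega
    return np.array([
        [-g,  -iO,  0,    0,   iO,   0,    0,   0],
        [-iO, -(g/2 + 1j*Delta), 0, 0, 0,  iO,  0,   0],
        [0,    0,  -g,    iO,   0,   0,  -iO,   0],
        [0,    0,   iO, -(g/2 + 1j*(Delta - delta)), 0, 0, 0, -iO],
        [iO,   0,   0,    0, -(g/2 - 1j*Delta), -iO, 0,   0],
        [g1,   iO,  gs,   0,  -iO,   0,    0,   0],
        [0,    0,  -iO,   0,    0,   0, -(g/2 - 1j*(Delta - delta)), iO],
        [gs,   0,   g2,  -iO,   0,   0,   iO,   0],
    ], dtype=complex)

M = bloch_matrix(Omega=9.0, Delta=0.0, delta=-8.0)

# probability conservation: the population (left) vector annihilates M,
# so M has one zero eigenvalue (lambda_8 = 0 in Table 1)
populations = np.array([1, 0, 1, 0, 0, 1, 0, 1])
print(np.allclose(populations @ M, 0))
print(np.sort_complex(np.linalg.eigvals(M)))  # imaginary parts cluster near +-Omega_1, +-Omega_2
```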
Also, spectra of stationary systems can be evaluated more effectively using the above formal approach. Let \(g(\tau)=\langle\mathbf{W}(\tau)\rangle\). Then, a spectrum is calculated as
\[S(\omega) \propto\int_{0}^{\infty}\cos\omega\tau\,g(\tau)\,d\tau=\int_{0}^ {\infty}\cos\omega\tau\,e^{\mathbf{M}\tau}g(0)\,d\tau\] \[=\mathrm{Re}\int_{0}^{\infty}e^{-(i\omega\mathbf{1}-\mathbf{M}) \tau}g(0)\,d\tau\] \[=\mathrm{Re}\left[(i\omega\mathbf{1}-\mathbf{M})^{-1}g(0)\right], \tag{11}\]
where \(\mathbf{1}\) is the identity matrix. For example, the incoherent spectrum requires calculations of the type
\[S^{inc}(\omega) =\mathrm{Re}\int_{0}^{\infty}d\tau e^{-i\omega\tau}e^{\mathbf{M}\tau}\langle\Delta A_{ij}(0)\Delta A_{kl}(0)\rangle_{st}\] \[=\mathrm{Re}\left[(i\omega\mathbf{1}-\mathbf{M})^{-1}\langle\Delta A_{ij}(0)\Delta A_{kl}(0)\rangle_{st}\right]. \tag{12}\]
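In practice this resolvent formula is just one linear solve per frequency. A self-contained toy illustration (not the paper's full calculation, where \(\mathbf{M}\) and the initial-condition vector are those given in this appendix) is:

```python
import numpy as np

def spectrum(M, g0, omegas):
    """Component-wise S(omega) = Re[(i*omega*1 - M)^{-1} g(0)]."""
    eye = np.eye(M.shape[0], dtype=complex)
    return np.array([np.linalg.solve(1j * w * eye - M, g0).real for w in omegas])

# toy correlation g(tau) = exp(M tau) g(0), whose first component is
# exp(-kappa tau) cos(omega0 tau): its spectrum is a Lorentzian pair at +-omega0
kappa, omega0 = 0.5, 18.0
M = np.array([[-kappa, omega0], [-omega0, -kappa]], dtype=complex)
g0 = np.array([1.0, 0.0], dtype=complex)

omegas = np.linspace(0.0, 30.0, 601)
S = spectrum(M, g0, omegas)[:, 0]
print("peak at omega =", omegas[np.argmax(S)])   # close to omega0
```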
For the initial conditions of the correlations we use the following operator products and correlations in compact form:
\[A_{kl}A_{mn} =A_{kn}\delta_{lm}\,, \tag{13a}\] \[\langle A_{kl}A_{mn}\rangle =\alpha_{kn}\delta_{lm},\] (13b) \[A_{ij}A_{kl}A_{mn} =A_{in}\delta_{jk}\delta_{lm},\] (13c) \[\langle A_{ij}A_{kl}A_{mn}\rangle =\alpha_{in}\delta_{jk}\delta_{lm}. \tag{13d}\]
Hence, the relevant initial conditions are:
\[\langle A_{13}\mathbf{R}\rangle =\left(0,0,0,0,\alpha_{11},\alpha_{13},0,0\right)^{T}, \tag{14a}\] \[\langle A_{24}\mathbf{R}\rangle =\left(0,0,0,0,0,0,\alpha_{22},\alpha_{24}\right)^{T},\] (14b) \[\langle A_{13}\mathbf{R}A_{31}\rangle =\left(0,0,0,0,0,\alpha_{11},0,0\right)^{T},\] (14c) \[\langle A_{24}\mathbf{R}A_{42}\rangle =\left(0,0,0,0,0,0,\alpha_{22}\right)^{T},\] (14d) \[\langle A_{13}\mathbf{R}A_{42}\rangle =\langle A_{24}\mathbf{R}A_{31}\rangle=0, \tag{14e}\]
where \(\mathbf{R}=\left(A_{11},A_{13},A_{22},A_{24},A_{31},A_{33},A_{42},A_{44}\right) ^{T}\) is the Bloch vector. For correlations with fluctuation operator products, \(\Delta A_{ij}=A_{ij}-\alpha_{ij}\), we have
\[\langle\Delta A_{kl}\Delta A_{mn}\rangle =\alpha_{kn}\delta_{lm}-\alpha_{kl}\alpha_{mn}, \tag{15}\] \[\langle\Delta A_{ij}\Delta A_{kl}\Delta A_{mn}\rangle =\alpha_{in}\delta_{lm}\delta_{jk}-\alpha_{il}\alpha_{mn}\delta_{jk}\] \[-\alpha_{in}\alpha_{kl}\delta_{jm}-\alpha_{ij}\alpha_{kn}\delta_{lm}\] \[+2\alpha_{ij}\alpha_{kl}\alpha_{mn}. \tag{16}\]
Now, recalling that \(\alpha_{12}=\alpha_{14}=\alpha_{23}=\alpha_{34}=0\), we write the detailed initial conditions of the correlations (Set 1 of Bloch equations and quantum regression formula):
\[\langle\Delta A_{13}\Delta\mathbf{R}\rangle =\left(-\alpha_{13}\alpha_{11},\,-\alpha_{13}^{2},\,-\alpha_{13} \alpha_{22},\,-\alpha_{13}\alpha_{24},\,\alpha_{11}-|\alpha_{13}|^{2},\,\alpha_ {13}-\alpha_{13}\alpha_{33},\,-\alpha_{13}\alpha_{42},\,-\alpha_{13}\alpha_{44} \right)^{T}\,, \tag{17a}\] \[\langle\Delta A_{24}\Delta\mathbf{R}\rangle =\left(-\alpha_{24}\alpha_{11},\,-\alpha_{24}\alpha_{13},\,- \alpha_{24}\alpha_{22},\,-\alpha_{24}^{2},\,-\alpha_{24}\alpha_{31},\,-\alpha_ {24}\alpha_{33},\,\alpha_{22}-|\alpha_{24}|^{2},\,\alpha_{24}-\alpha_{24}\alpha _{44}\right)^{T},\] (17b) \[\langle\Delta A_{13}\Delta\mathbf{R}\Delta A_{31}\rangle =\left(2|\alpha_{13}|^{2}\alpha_{11}-\alpha_{11}^{2},\,2|\alpha_ {13}|^{2}\alpha_{13}-2\alpha_{11}\alpha_{13},\right.\] \[\left.2|\alpha_{13}|^{2}\alpha_{22}-\alpha_{11}\alpha_{22},\,2| \alpha_{13}|^{2}\alpha_{24}-\alpha_{11}\alpha_{24},\right.\] \[\left.2|\alpha_{13}|^{2}\alpha_{31}-2\alpha_{11}\alpha_{31},\,2| \alpha_{13}|^{2}\alpha_{33}+\alpha_{11}-2|\alpha_{13}|^{2}-\alpha_{11}\alpha_ {33},\right.\] \[\left.2|\alpha_{13}|^{2}\alpha_{42}-2\alpha_{11}\alpha_{42},\,2| \alpha_{13}|^{2}\alpha_{44}-\alpha_{11}\alpha_{44}\right)^{T}.\] (17c) \[\langle\Delta A_{24}\Delta\mathbf{R}\Delta A_{42}\rangle =\left(2|\alpha_{24}|^{2}\alpha_{11}-\alpha_{11}\alpha_{22},\,2| \alpha_{24}|^{2}\alpha_{13}-\alpha_{22}\alpha_{13},\right.\] \[\left.2|\alpha_{24}|^{2}\alpha_{22}-\alpha_{22}^{2},\,2|\alpha_{24 }|^{2}\alpha_{24}-2\alpha_{22}\alpha_{24},\right.\] \[\left.2|\alpha_{24}|^{2}\alpha_{31}-\alpha_{22}\alpha_{31},\,2| \alpha_{24}|^{2}\alpha_{33}-\alpha_{22}\alpha_{33},\right.\] \[\left.2|\alpha_{24}|^{2}\alpha_{42}-2\alpha_{22}\alpha_{42},\,2| \alpha_{24}|^{2}\alpha_{44}+\alpha_{22}-2|\alpha_{24}|^{2}-\alpha_{22}\alpha_ {44}\right)^{T}. \tag{17d}\]
\[\langle\Delta A_{13}\Delta\mathbf{R}\Delta A_{42}\rangle = \left(2\alpha_{13}\alpha_{11}\alpha_{42},\,2\alpha_{13}^{2}\alpha_ {42},\,2\alpha_{13}\alpha_{22}\alpha_{42},\,(2|\alpha_{24}|^{2}-\alpha_{22}) \alpha_{13},\right. \tag{10e}\] \[\left.(2|\alpha_{13}|^{2}-\alpha_{11})\alpha_{42},\,(2\alpha_{13} \alpha_{33}-\alpha_{13})\alpha_{42},\,2\alpha_{13}\alpha_{42}^{2},\,(2\alpha_ {13}\alpha_{44}-\alpha_{13})\alpha_{42}\right)^{T},\] \[\langle\Delta A_{24}\Delta\mathbf{R}\Delta A_{31}\rangle = \left(2\alpha_{24}\alpha_{11}\alpha_{31},\,(2|\alpha_{13}|^{2}- \alpha_{11})\alpha_{24},\,2\alpha_{24}\alpha_{22}\alpha_{31},\,2\alpha_{24}^{ 2}\alpha_{31},\right.\] (10f) \[\left.2\alpha_{24}\alpha_{31}^{2},\,(2\alpha_{24}\alpha_{33}- \alpha_{24})\alpha_{31},\,(2|\alpha_{24}|^{2}-\alpha_{22})\alpha_{31},\,(2 \alpha_{24}\alpha_{44}-\alpha_{24})\alpha_{31}\right)^{T}.\]
## Appendix B Condition for Optimal Appearance of Beats in the Intensity
We consider a simplified, unitary, model to estimate the optimal initial population of the ground states to make well-formed beats. First, we diagonalize the Hamiltonian Eq. (8). The eigenvalues and eigenstates are
\[\mathcal{E}_{1}^{\pm} = -\frac{\Delta}{2}\pm\frac{1}{2}\sqrt{4\Omega^{2}+\Delta^{2}}, \tag{11a}\] \[\mathcal{E}_{2}^{\pm} = B_{\ell}+\frac{\delta-\Delta}{2}\pm\frac{1}{2}\sqrt{4\Omega^{2} +(\delta-\Delta)^{2}}, \tag{11b}\]
and
\[|u_{1}\rangle = \sin\Theta_{1}|1\rangle+\cos\Theta_{1}|3\rangle,\] \[|u_{2}\rangle = -\cos\Theta_{1}|1\rangle+\sin\Theta_{1}|3\rangle,\] \[|u_{3}\rangle = \sin\Theta_{2}|2\rangle+\cos\Theta_{2}|4\rangle,\] \[|u_{4}\rangle = -\cos\Theta_{2}|2\rangle+\sin\Theta_{2}|4\rangle, \tag{12}\]
respectively, where
\[\sin\Theta_{1} = \frac{2\Omega}{\sqrt{\left(\Delta+\sqrt{\Delta^{2}+4\Omega^{2}} \right)^{2}+4\Omega^{2}}},\] \[\cos\Theta_{1} = \frac{\Delta+\sqrt{\Delta^{2}+4\Omega^{2}}}{\sqrt{\left(\Delta+ \sqrt{\Delta^{2}+4\Omega^{2}}\right)^{2}+4\Omega^{2}}},\]
\[\sin\Theta_{2} = \frac{2\Omega}{\sqrt{\left((\delta-\Delta)+\sqrt{(\delta-\Delta) ^{2}+4\Omega^{2}}\right)^{2}+4\Omega^{2}}},\] \[\cos\Theta_{2} = \frac{(\delta-\Delta)+\sqrt{(\delta-\Delta)^{2}+4\Omega^{2}}}{ \sqrt{\left((\delta-\Delta)+\sqrt{(\delta-\Delta)^{2}+4\Omega^{2}}\right)^{2}+ 4\Omega^{2}}}.\]
It is now straightforward to obtain the excited-state populations. If the initial state of the system is \(\rho(0)=\langle A_{33}(0)\rangle|3\rangle\langle 3|+\langle A_{44}(0)\rangle|4\rangle\langle 4|\) we get
\[\langle A_{11}(t)\rangle = \frac{1}{2}\langle A_{33}(0)\rangle\sin^{2}\left(2\Theta_{1}\right)(1-\cos\left(\Omega_{1}t\right)), \tag{13a}\] \[\langle A_{22}(t)\rangle = \frac{1}{2}\langle A_{44}(0)\rangle\sin^{2}\left(2\Theta_{2}\right)(1-\cos\left(\Omega_{2}t\right)), \tag{13b}\]
and the intensity of the field is
\[\frac{I_{\pi}(\mathbf{r},t)}{f_{\pi}^{2}(r)} = \frac{1}{2}\langle A_{33}(0)\rangle\sin^{2}\left(2\Theta_{1}\right)+\frac{1}{2}\langle A_{44}(0)\rangle\sin^{2}\left(2\Theta_{2}\right) \tag{14}\] \[-\frac{1}{2}\langle A_{33}(0)\rangle\sin^{2}\left(2\Theta_{1}\right)\cos\left(\Omega_{1}t\right)\] \[-\frac{1}{2}\langle A_{44}(0)\rangle\sin^{2}\left(2\Theta_{2}\right)\cos\left(\Omega_{2}t\right).\]
A necessary condition for the beating behavior to occur is that the initial ground-state populations are both nonvanishing in the nondegenerate case. Now, assuming the relation
\[\frac{\langle A_{33}(0)\rangle}{\langle A_{44}(0)\rangle}=\frac{\sin^{2} \left(2\Theta_{2}\right)}{\sin^{2}\left(2\Theta_{1}\right)} \tag{15}\]
is satisfied by choosing appropriate parameter values \((\Omega,\delta,\Delta)\) for given values of the initial ground-state populations, we would get
\[I_{\pi}(\mathbf{r},t) = f_{\pi}^{2}(r)\langle A_{33}(0)\rangle\sin^{2}\left(2\Theta_{1}\right) \tag{16}\] \[\times\left[1-\cos\left(\Omega_{beat}t\right)\cos\left(\Omega_{ av}t\right)\right],\]
where \(\Omega_{beat}=(\Omega_{2}-\Omega_{1})/2\) and \(\Omega_{av}=(\Omega_{2}+\Omega_{1})/2\).
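The matching condition above is easy to test numerically. A minimal Python sketch (illustrative only), which computes \(\sin^{2}(2\Theta_{i})\) from the expressions given earlier in this appendix and checks that the two-term intensity collapses onto the product form, is:

```python
import numpy as np

def mixing_and_rabi(Omega, Delta, delta):
    """sin^2(2*Theta_1), sin^2(2*Theta_2) and the Rabi frequencies Omega_1, Omega_2."""
    W1 = np.sqrt(4 * Omega**2 + Delta**2)
    W2 = np.sqrt(4 * Omega**2 + (delta - Delta) ** 2)
    n1 = np.sqrt((Delta + W1) ** 2 + 4 * Omega**2)
    n2 = np.sqrt((delta - Delta + W2) ** 2 + 4 * Omega**2)
    s2t1 = 2 * (2 * Omega / n1) * ((Delta + W1) / n1)          # sin(2*Theta_1)
    s2t2 = 2 * (2 * Omega / n2) * ((delta - Delta + W2) / n2)  # sin(2*Theta_2)
    return s2t1**2, s2t2**2, W1, W2

Omega, Delta, delta = 9.0, 0.0, -8.0          # in units of gamma
S1, S2, W1, W2 = mixing_and_rabi(Omega, Delta, delta)

# ground-state populations chosen to satisfy the matching condition
A33_0, A44_0 = S2 / (S1 + S2), S1 / (S1 + S2)

t = np.linspace(0.0, 5.0, 4001)
I_two_terms = 0.5 * A33_0 * S1 * (1 - np.cos(W1 * t)) + 0.5 * A44_0 * S2 * (1 - np.cos(W2 * t))
I_product = A33_0 * S1 * (1 - np.cos((W2 - W1) / 2 * t) * np.cos((W2 + W1) / 2 * t))
print(np.allclose(I_two_terms, I_product))    # True: well-formed beats
```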
|
2301.13272
|
Adsorption of melting deoxyribonucleic acid
|
The melting of a homopolymer double-stranded (ds) deoxyribonucleic acid (DNA)
in the dilute limit is studied numerically in the presence of an attractive and
impenetrable surface on a simple cubic lattice. The two strands of the DNA are
modeled using two self-avoiding walks, capable of interacting at complementary
sites, thereby mimicking the base pairing. The impenetrable surface is modeled
by restricting the DNA configurations at the $z\geq 0$ plane, with attractive
interactions for monomers at $z=0$. Further, we consider two variants for $z=0$
occupations by ds segments, where one or two surface interactions are counted.
This consideration has significant consequences, to the extent of changing the
stability of the bound phase in the adsorbed state. Interestingly, adsorption
changes from critical to first-order with a modified exponent on coinciding
with the melting transition. For simulations, we use the pruned and enriched
Rosenbluth algorithm.
|
Debjyoti Majumdar
|
2023-01-30T20:33:18Z
|
http://arxiv.org/abs/2301.13272v2
|
# Adsorption of melting DNA
###### Abstract
The melting of a homopolymer double-stranded (ds) DNA is studied numerically, in the presence of an attractive and impenetrable surface on a simple cubic lattice. The two strands of the DNA are modelled using two self-avoiding walks, capable of interacting at complementary sites, thereby mimicking the base pairing. The impenetrable surface is modelled by restricting the DNA configurations at the \(z\geq 0\) plane, with attractive interactions for monomers at \(z=0\). Further, we consider two variants for \(z=0\) occupations by ds segments, where one or two surface interactions are counted. This consideration has significant consequences, to the extent of changing the stability of the bound phase in the adsorbed state. Interestingly, adsorption changes to first-order on coinciding with the melting transition.
_Introduction:_ The denaturation of double-stranded DNA (dsDNA) from a bound (ds) to an unbound single-stranded (ss) phase is an important step in fundamental biological processes such as DNA replication, RNA transcription, packaging of DNA, and repair [1]. _In vitro_, the melting transition is induced by changing the temperature or \(p\)H of the DNA solution. However, physiological conditions allow neither extremes of temperature nor of \(p\)H level inside the cell. Therefore, the cell has to rely on other ambient factors to locally modify the stability of the ds structure of the DNA. Among others, one crucial factor and potential candidate that can alter the stability of the native DNA form is the interaction of the DNA with a surface, e.g., with proteins or cell membranes. The strands, being polymers, can undergo an adsorption transition, where the two strands, either in the ds or ss phase, get adsorbed on a surface [2]. _In vivo,_ the protein-induced DNA-membrane complex is used during the replication process and cell division, and for inducing local bends in the rigid duplex DNA [3; 4]. Again, adsorption is instrumental in packaging DNA inside virus heads [5; 6]. On the technological front, the adsorbing property of DNA is often used for targeted drug delivery in gene therapy [7; 8], and for manufacturing biosensors for quick and accurate detection of DNA in bodily samples. In all these instances, the surface-DNA interaction can be tuned by changing the nature of the surface. This tunability calls for a detailed mapping of the phases arising from the interaction of the DNA with the adsorbing surface.
The melting and the adsorption transitions have, individually, been the subject of many theoretical and experimental studies in the past. Theoretically, lattice models have been useful in extracting sensible results on par with experiments. The melting transition was shown to be first-order when excluded volume interactions are fully included [9]. On the other hand, the polymer adsorption transition was shown to be continuous [2; 10]. With this in mind, in this paper, we explore the interplay between the melting and the adsorption transition of a model homopolymer DNA, using a lattice adaptation of the Poland-Scheraga model on a simple cubic lattice. Self-avoidance is duly implemented among the intra- and inter-strand segments. We found that the melting vs. adsorption phase diagram is drastically different for the two different schemes of interaction between the ds segments and the adsorbing surface. For specific values of the coupling potentials, the two transitions overlap, with the continuous adsorption transition becoming first-order.
_The model:_ We model the DNA strands (say A and B) as two self-avoiding walks (SAWs), represented by the vectors \({\bf r}_{i}^{A}\) and \({\bf r}_{j}^{B}\) (\(1\leq i,j\leq N\)), and capable of forming a base pair (bp) among the complementary monomers (\(i=j\)) from the two strands while occupying the same lattice site (\({\bf r}_{i}^{A}={\bf r}_{i}^{B}\)). One end of the DNA is grafted in the \(z=0\) plane. The other end is free to wander in the \(z\geq 0\) direction, with the \(z=0\) plane impenetrable and attractive. An energy \(-\epsilon_{bp}\) is associated with each bound bp independent of the bp index (homopolymer) and is represented by the reduced variable \(g=\epsilon_{bp}/k_{B}T\), where \(T\) is the temperature and \(k_{B}\) is the Boltzmann constant. For each interaction with the \(z=0\) surface, there is an energetic gain of \(-\epsilon_{s}\), represented by the reduced variable \(q=\epsilon_{s}/k_{B}T\). Further, we consider two variants: model I and model II. The difference in the two variants is in the strength of the ds interaction with the surface; in model
I, we consider only one unit of interaction (\(\epsilon_{s}\)), while in model II, we consider two units of interaction (\(2\epsilon_{s}\)), one for each strand. This consideration comes from the speculation that, when the duplex interacts sideways, as in Fig. 1(a), effectively only one strand interacts with the surface. By contrast, when both strands touch the plane simultaneously, each strand contributes [Fig. 1(b)]. These two scenarios may arise depending on the hardness of the surface. While metallic surfaces (such as gold) used during experiments are hard, biological surfaces tend to be much softer. A schematic diagram of our model is shown in Fig. 1(c). The Hamiltonian for a typical configuration according to model II can be written as,
\[\beta\mathcal{H}=-g\sum_{i=1}^{N}\delta_{\mathbf{r}_{i}^{A},\mathbf{r}_{i}^{B }}-q\sum_{i=1}^{N}\sum_{\alpha=A,B}\delta_{0,z_{i}^{\alpha}}, \tag{1}\]
where, \(\beta=1/(k_{B}T)\) and \(\delta_{i,j}\) is the Kronecker delta. The adsorbing surface can generally be of complex geometry with different degrees of roughness and curvature. However, we choose a smooth and impenetrable flat surface for simplicity. For simulation, we use the pruned and enriched Rosenbluth method (PERM) to sample the equilibrium configurations, averaging over \(10^{8}\) tours. We set the Boltzmann constant \(k_{B}=1\) throughout our study.
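To make the bookkeeping of Eq. (1) concrete, a small Python sketch (illustrative only; the names and the toy configuration are ours, and self-avoidance of the input is assumed rather than checked) that evaluates \(\beta\mathcal{H}\) for both surface-counting variants could be:

```python
def dna_energy(strand_A, strand_B, g, q, model="II"):
    """Reduced energy beta*H of one configuration, following Eq. (1).

    strand_A, strand_B: lists of (x, y, z) lattice sites; monomer i of A can pair
    with monomer i of B only when both occupy the same site.  Every monomer at
    z = 0 gains a surface term; in model I a bound (ds) site on the surface is
    counted only once, in model II both strands contribute."""
    bp = sum(a == b for a, b in zip(strand_A, strand_B))
    if model == "II":
        surf = sum(a[2] == 0 for a in strand_A) + sum(b[2] == 0 for b in strand_B)
    else:  # model I
        surf = sum(a[2] == 0 for a in strand_A)
        surf += sum(b[2] == 0 and a != b for a, b in zip(strand_A, strand_B))
    return -g * bp - q * surf

# toy 4-monomer configuration: the first two monomers are paired and lie on the surface
A = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
B = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 0, 1)]
print(dna_energy(A, B, g=1.3413, q=0.2856, model="II"))
print(dna_energy(A, B, g=1.3413, q=0.2856, model="I"))
```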
For melting, the average number of bound bps per unit length (\(n_{c}\)) serves as the order parameter, with \(n_{c}=1\) and \(0\) in the bound and unbound phase, respectively. The bound and the unbound phases are dominated by energy and entropy, respectively, depending upon whichever minimizes the free energy. For our model, in the absence of any adsorbing surface (i.e., \(q=0\)), the melting takes place at \(g_{c}=1.3413\) with the crossover exponent \(\phi_{m}=0.94\)[9; 11]. On the other hand, the 3d to 2d adsorption of a lattice polymer is a continuous transition with the critical point at \(q_{c}=0.2856\)[10]. For adsorption, the average number of surface contacts per unit length (\(n_{s}\)) is the order parameter [12], and we denote its fluctuation by \(C_{s}\). The corresponding critical exponent controlling the growth of surface contacts at the critical point is \(\phi_{a}\), and the order parameter follows the scaling \(n_{s}\sim N^{\phi_{a}-1}\)[13]. The exponent \(\phi_{a}\) is expected to be universal, and the most recent improved estimate of the critical exponent from computer simulations suggests \(\phi_{a}=0.48(4)\)[14; 10].
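Since \(n_{s}\) is the adsorption observable of interest, a minimal illustration of Rosenbluth-weighted sampling is given below (a single strand only, without the base pairing and without the pruning and enrichment steps that give PERM its efficiency; all parameter values are arbitrary). It estimates the weighted average \(\langle n_{s}\rangle/N\) at a given \(q\):

```python
import numpy as np

rng = np.random.default_rng(1)
MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def rosenbluth_walk(N, q):
    """Grow one N-step self-avoiding walk in the half-space z >= 0, grafted at the
    origin, with Boltzmann factor exp(q) for every monomer at z = 0.
    Returns (Rosenbluth weight, number of surface contacts)."""
    pos, occupied = (0, 0, 0), {(0, 0, 0)}
    n_s, weight = 1, 1.0                      # the grafted monomer is a surface contact
    for _ in range(N):
        trials, boltz = [], []
        for dx, dy, dz in MOVES:
            new = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
            if new[2] >= 0 and new not in occupied:
                trials.append(new)
                boltz.append(np.exp(q) if new[2] == 0 else 1.0)
        if not trials:                        # trapped walk: zero weight
            return 0.0, n_s
        boltz = np.array(boltz)
        weight *= boltz.sum()                 # Rosenbluth weight update
        pos = trials[rng.choice(len(trials), p=boltz / boltz.sum())]
        occupied.add(pos)
        n_s += pos[2] == 0
    return weight, n_s

N, q, samples = 60, 0.5, 2000
data = [rosenbluth_walk(N, q) for _ in range(samples)]
w = np.array([d[0] for d in data])
ns = np.array([d[1] for d in data])
print("weighted <n_s>/N =", (w * ns).sum() / w.sum() / N)
```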
Naively, one would expect four distinct phases when melting and adsorption are considered together [4]. However, the unbound-adsorbed phase was found to be missing in a theoretical study [15], which employs a model similar to model II, except that excluded volume interactions were neglected. Overall, in Ref. [15], it was found that the bound state is stabilized in the presence of an adsorbing surface. By contrast, on the experimental side, Ref. [16] demonstrated that directly adsorbed DNA hybrids are significantly less stable than free ones. Therefore, further study of the melting-adsorption interplay, employing more versatile models, is essential for a complete understanding.
_Model I:_ In this model variant, we consider equal surface interaction energy for both ss and ds segments. This choice of interaction yields four equilibrium phases, viz., bound-desorbed (**BD**), unbound-desorbed (**UD**), unbound-adsorbed (**UA**), and the bound-adsorbed (**BA**) phase [Fig. 2(a)]. The melting and the adsorption lines are obtained by varying \(g\) and \(q\), respectively, while keeping one of them fixed [13]. The error bars in \(q_{c}\) and \(g_{c}\) are of the size of the plotting points. As the two lines (\(g_{c}=1.3413\) and \(q_{c}=0.2856\)) approach each other, the bound state is primarily stabilized for increasing \(q\), which is somewhat surprising [Fig. 2(c)]. This increased stability of the bound state persists for \(0.26(6)\lesssim q\lesssim 0.4\), and is perhaps due to the fact that, in this region, the bound and unbound phases in the vicinity of the melting line are unequally placed in the adsorbed phase. This short window of stability is followed by a steady increase in the threshold \(g\) for the bound state for \(q>0.4\), separating the destabilized bound and unbound states in the adsorbed phase. One can understand this using the energy-entropy
Figure 1: (Color online) Schematic diagram for the (a) lateral view of model I, and (b) planar view of model II. In (a), representing model I, effectively only one strand interacts with the surface in the bound state, while in model II both strands are simultaneously in contact, as in (b). (c) Two-dimensional depiction of our lattice model.
argument; since the number of independent surface contacts increases upon unbinding, with each ds bp resulting in two new possible ss surface contacts, along with an increase in the entropy, the **UA** phase is strongly favored over the **BA** phase. A significant consequence is that the melting in the adsorbed phase (**BA**\(\rightarrow\)**UA**) is different from pure melting in two dimensions (2d), where the melting point is at \(g_{c}=0.753(3)\). Noticeably, while undergoing the **UA** to **BA** transition upon varying \(q\), the system shows a first-order-like fluctuation of surface contacts, while the average number of surface contacts \(n_{s}\) reduces to half of its value in the **UA** phase [Fig. 2(d)]. This observation is supported by the scaling plot of the surface contact probability distribution (\(P_{n_{s}}\)) at a point (\(g=1.5\) and \(q=0.659\)) above the melting phase boundary, using the scaling exponent \(\phi_{a}=0.99\) for data collapse [Fig. 2(b)]. However, this is not a genuine desorption transition, and it arises because the ds and ss surface contacts are treated on an equal footing. For higher \(g\) values, the **BA** phase undergoes a continuous desorption around \(\lim_{g\rightarrow\infty}q_{c}=0.2856\).
Summarizing the results of model I, we see that the bound phase is stabilized only for a small range of \(q\) values [Fig. 2(c)]. Otherwise, the bound state is mainly destabilized. For \(q<0.265(5)\), the two transitions remain decoupled without affecting each other. The results for model I are in accordance with Ref. [16], where adsorbed DNA hybrids are found to be less stable than their free counterpart. Importantly, these results suggest that, since the destabilization of the dsDNA is essential for the ease of opening up a bound segment, adsorption could play a crucial role in initiating certain biological processes related to the transfer of genetic information.
_Model II:_ For model II, a ds bound segment has a higher energy gain (precisely, double) than a ss segment upon interaction with the surface. Using this scheme of interaction, the phase plane is divided into four distinct phases, viz., **BD**, **UD**, **UA** and **BA** [Fig. 3]. We can further identify three types of melting transition using these four phases: (i) when both phases are desorbed, (ii) when the bound phase is adsorbed and the unbound phase is desorbed, and (iii) when both phases are adsorbed. While for melting types (i) and (iii) the two transitions remain decoupled, for melting type (ii) the two transitions coincide into one, represented by an overlapping phase boundary giving rise to multicritical points. Intriguingly, the adsorption transition is promoted to first order in this overlapping region. Adjacent to this overlapping region, and bounded by the lines \(g=1.3413\) and \(q=0.2856\) on the other two sides, is a small triangular island (denoted by **a**) [Fig. 3], akin to the Borromean phase found in nuclear systems [15]. This **a** phase is not possible when either of the potentials is turned off, and exists as a result of the combined effect of the two potentials, even though neither \(g\) nor \(q\) is individually strong enough to support an ordered state. This small window of \(q\) and \(g\) values, corresponding to the coinciding phase line, facilitates achieving an adsorbed and a bound phase by changing only \(g\) or \(q\), with the other parameter fixed. Such points (or regions) can be crucial for real biological systems since they reduce a multi-parameter system to one controlled by a single parameter. Adsorption in this region follows the same scaling
Figure 2: (Color online) (a) Model I phase diagram for melting ‘melt’ and adsorption ‘ads’. The different phases are: bound-desorbed (**BD**), unbound-desorbed (**UD**), unbound-adsorbed (**UA**), and bound-adsorbed (**BA**). The dotted lines represent the transition points for the individual cases; for melting \(g_{c}=1.3413\) and for adsorption \(q_{c}=0.2856\). (b) Scaling plots of the probability distribution (\(P_{n_{s}}\)) of surface contacts (\(n_{s}\)) on the **BA**\(\rightarrow\)**UA** transition line corresponding to \(g=1.5\) and \(q_{c}=0.659\), for chain lengths \(N=700\) to \(1000\). (c) A zoom-in of the phase diagram in (a) showing the decrease in the threshold \(g\) for the bound state. (d) Scaling plot of the surface contact fluctuation \(C_{s}\) for \(g=1.5\), using \(q_{c}=0.659\) and \(\phi_{a}=0.99\).
exponent as the first-order melting transition, with \(\phi_{a}=\phi_{m}\sim 1\) [Fig. 3(c)] [17]. A first-order adsorption is also evident from the probability distribution of the surface contacts (\(P_{n_{s}}\)) at the transition point, e.g., for \(g_{c}=1.25\) and \(q_{c}=0.278\) in Fig. 3(b) [18]. The melting transition, however, remains unaffected. Below \(\mathbf{a}\), the adsorbed phase is destabilized for a small range of \(g\) values. For the transition from the \(\mathbf{BA}\) to the \(\mathbf{UA}\) phase, the melting is two-dimensional for sufficiently large \(q\), with \(\phi_{m}\approx 1.5\) when the system is completely adsorbed.
Unlike model I, the bound state in model II is stabilized in the presence of the adsorbing surface. Since, post-melting, the entropy gain is smaller in the adsorbed phase (two dimensions), compared to the unbound state in the desorbed phase (three dimensions), the bound state in the adsorbed phase is more stable than that in the desorbed phase, leading to a gradual lowering in the threshold \(g\), which finally converges to \(\lim_{q\to\infty}g_{c}\approx 0.753(3)\), the two-dimensional melting point. A similar argument also applies for the adsorption transition for which the critical adsorption strength \(q_{c}\) decreases and saturates at \(\lim_{g\to\infty}q_{c}=0.1428\)[20].
Although our results from model II are qualitatively in line with Ref. [15], we obtain all four possible phases, instead of the three found in [15], where the \(\mathbf{UA}\) phase was absent. Biologically, adsorption-induced stability could be important to guard the DNA native form against thermal fluctuations and external forces. Importantly, adsorption can energetically compensate for the bending of the rigid ds segments, thereby providing an alternative to bubble-mediated bending [21].
_Conclusion:_ To conclude, in this paper we elucidate the role of adsorption in modifying the melting transition and vice versa. Two separate models were considered, which differ in the strength of interaction with the surface along the ds segments. Such a consideration arises from the speculation that the orientation of the DNA, in conjunction with the nature of the adsorbing surface, could play an important role in determining which of the studied models effectively applies. The two models show significant differences: model I shows that the ds structure is mostly destabilized in the presence of an attractive surface. This finding resembles the result of the experiment performed with DNA hybrids in Ref. [16]. On the other hand, model II shows that DNA is stabilized in the presence of an attractive surface. Although this model is similar to the theoretical model of Ref. [15], there are significant improvements, such as the inclusion of excluded volume interactions. Moreover, we find all four possible phases, which is not the case in Ref. [15]. In both models, the adsorption coinciding with the melting transition is first order; however, whether this denotes a non-universality in the adsorption transition is yet to be understood. Findings from both models carry biological significance. Our work, therefore, contributes toward completing the picture by connecting the experimental and theoretical findings.
_Acknowledgement:_ D.M. was supported by the German-Israeli Foundation through grant number I-2485-303.14/2017 and by the Israel Science Foundation through grant number 1301/17, and the BCSC Fellowship from the Jacob Blaustein Center for Scientific Cooperation. Part of the simulations were carried out on the _Samkhya_ computing facility at the Institute of Physics, Bhubaneswar.
Figure 3: (Color online) (a) Model II phase diagram. The different phases are: bound-desorbed (\(\mathbf{BD}\)), unbound-desorbed (\(\mathbf{UD}\)), unbound-adsorbed (\(\mathbf{UA}\)) and bound-adsorbed (\(\mathbf{BA}\)). Dashed lines represent \(g=0.753\) (red) and \(q=0.1428\) (gray). Dotted lines represent \(g=1.3413\) and \(q=0.2856\). (b) Probability distribution of surface contacts (\(P_{n_{s}}\)) at \(g_{c}=1.25\) and \(q_{c}=0.278\) (arrow ar\({}_{1}\) in (a)), for chain lengths \(N=700\) to \(1000\). (c) Scaling plot of the fluctuation of the average number of surface contacts per unit length, \(C_{s}\), for \(g=1.25\), using \(\phi_{a}=0.98\) and \(q_{c}=0.277(7)\).
## References
* (1) T. E. Cloutier and J. Widom, _Mol. Cell_**14**, 355 (2004); J. Yan and J. F. Marko, _Phys. Rev. Lett._**93**, 108108 (2004).
* (2) E. Eisenriegler, K. Kremer and K. Binder, _J. Chem. Phys._**77**, 6296 (1982).
* (3) W. Firshein, _Annu. Rev. Microbiol._ **43**, 89 (1989).
* (4) R. Kapri and S. M. Bhattacharjee, _Eur. Phys. Lett._ **83**, 68002 (2008); R. Kapri, _J. Chem. Phys._ **130**, 145105 (2009).
* (5) G. A. Carri and M. Muthukumar, _Phys. Rev. Lett._**82**, 5405-5408 (1999).
* (6) P. K. Purohit, et al., _Biophys. Jour._**88**, 851-866 (2005).
* (7) S. Z. Bathaie et al., _Nucleic Acids Res._**27**, 1001 (1999).
* (8) J. O. Radler et al., _Science_**275**, 810 (1997).
* (9) M. S. Causo, B. Coluzzi, and P. Grassberger, _Phys. Rev. E_**62**, 3958 (2000).
* (10) P. Grassberger, _J. Phys. A: Math. Gen._**38**, 323-331 (2005).
* (11)\(\phi=1\) for first-order transition, and \(\phi<1\) for continuous/second order transition.
* (12) Here, length \(N\) denotes the maximum number of possible bps.
* (13) See Supplemental Material.
* (14) C. J. Bradly, A. L. Owczarek and T. Prellberg, _Phys. Rev. E_**97**, 022503 (2018).
* (15) A. E. Allahverdyan _et al._, _Phys. Rev. Lett._ **96**, 098302 (2006); A. E. Allahverdyan _et al._, _Phys. Rev. E_ **79**, 031903 (2009).
* (16) S. M. Schreiner _et al._, _Anal. Chem._**83**, 4288-4295 (2011).
* (17) A similar inter-change of the transition order was previously observed in a theoretical model studying the interplay of helix-coil transition and adsorption in a polymer [5].
* (18) A growing peak on either side of the distribution, and a deepening valley in between, is typical of a first-order transition. The valley represents suppressed states due to the growing surface term between the two phases. The inter-peak gap converges to a non-zero value. However, for models where this surface/interface separating two coexisting phases is reduced to a point, this valley is absent [19]. Also see SM [13].
* (19) T. Garel, H. Orland, and E. Orlandini, _Eur. Phys. J. B_ **12**, 261-268 (1999).
* (20) This is exact (other digits omitted) and can be obtained by considering the fact that, for model II, even though the length is halved in the bound state, the energy in the adsorbed phase remains the same. Therefore, the effective adsorbed energy per unit length (\(N\)) is doubled.
* (21) Double-stranded (ds) bound DNA segments are about 25 times more rigid than single-stranded (ss) unbound DNA segments. The ss segments flanked by ds segments on either side are known as _bubbles_. These bubbles can act as hinges for bends in DNA.
## I. Simulation algorithm
We use the pruned and enriched Rosenbluth method (PERM) [1] to simulate the configurations of the dsDNA over an attractive surface [Fig. S1]. The two strands are grown simultaneously, adding one monomer to the end of each strand at every step. At each step, we calculate the joint possibilities of stepping into free sites, obtained as the Cartesian product of the individual sets of possibilities, i.e., \(\mathcal{S}_{n}=\mathcal{S}_{n}(A)\times\mathcal{S}_{n}(B)\). Each element in \(\mathcal{S}_{n}\) corresponds to an ordered pair of new steps for the two strands, and carries a Boltzmann weight of \(\exp(g\times l+q\times k)\), where \(l=1\) for a base pair (bp) and \(0\) otherwise, while \(k=0\), \(1\), or \(2\) depending upon the number of surface contacts and the model. A choice is then made according to _importance sampling_. At each step the local partition function is calculated as \(w_{n}=\sum_{\mathcal{S}_{n}}\exp(g\times l+q\times k)\). The partition sum for length \(n\) is then estimated by the product of the local partition sums at each step, \(W_{n}=\prod_{i=1}^{n}w_{i}\), averaged over the number of started tours, \(Z_{n}=\langle W_{n}\rangle\). Enrichment and pruning at the \(n\)th step is performed depending on the
ratio, \(r=Z_{n}/W_{n}\):
\[\begin{cases}r=1,&\text{continue to grow,}\\ r<1,&\text{prune with probability }(1-r),\\ r>1,&\text{make $k$ copies}.\end{cases}\]
If \(r<1\) and pruning fails, the configuration continues to grow but with \(W_{n}=Z_{n}\). For enrichment (\(r>1\)), \(k\) is chosen as \(k=\min(\lfloor r\rfloor,\mathcal{N}(\mathcal{S}_{n}))\), where each copy carries a weight \(\frac{W_{n}}{k}\), and \(\mathcal{N}(\mathcal{S}_{n})\) is the cardinality of the set \(\mathcal{S}_{n}\). Averages are taken over \(10^{8}\) tours.
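As an illustration of the control step above, the following is a minimal Python sketch of the pruning/enrichment decision; the function name, the random-number handling, and the omission of tour bookkeeping are our own simplifications rather than the implementation used for the reported results.

```python
import math
import random

def perm_prune_enrich(W_n, Z_n, n_free_moves):
    """One PERM control step at chain length n.

    W_n          : running Rosenbluth weight of the current configuration
    Z_n          : current estimate of the partition sum at length n
                   (average of W_n over started tours)
    n_free_moves : number of admissible joint moves, |S_n|

    Returns (n_copies, new_weight):
      n_copies = 0 -> the configuration is pruned,
      n_copies = 1 -> continue growing with weight new_weight,
      n_copies = k -> make k copies, each carrying new_weight.
    """
    r = Z_n / W_n
    if r < 1.0:
        # prune with probability (1 - r); survivors continue with W_n = Z_n
        if random.random() < 1.0 - r:
            return 0, 0.0
        return 1, Z_n
    if r > 1.0:
        # enrichment: k = min(floor(r), |S_n|), weight shared among the copies
        k = max(1, min(int(math.floor(r)), n_free_moves))
        return k, W_n / k
    return 1, W_n  # r == 1: simply continue

# example call with made-up numbers
print(perm_prune_enrich(W_n=0.4, Z_n=1.2, n_free_moves=5))
```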
At length \(n\), any general thermodynamic observable (\(Q_{n}\)) is averaged on the fly using the formula:
\[\langle Q_{n}\rangle(g,q)=\frac{\langle Q_{n}W_{n}(g,q)\rangle}{Z_{n}(g,q)},\] (S1)
where the \(\langle\cdots\rangle\) in the numerator represents the running average of the quantity over the number of started tours, using the local estimate of the configuration weight \(W_{n}\).
One of the important aspects of simulating lattice self-avoiding walks is checking whether the prospective next sites are empty. The straightforward way is to check if any of the last \(N-1\) steps occupies the site. However, for walks of length \(N\) the time required for this operation grows as \(\mathcal{O}(N)\), and as \(\mathcal{O}(N^{2})\) for the total chain. This can be avoided using the _bit map_ method, in which the whole lattice is stored in an array using a hashing scheme where each site is given an array address such as \(f(x,y,z)=x+yL+zL^{2}+\mathit{offset}\), where \(L\) is the linear dimension of the virtual lattice box and \(\mathit{offset}=\lfloor L^{d}/2\rfloor\) is a constant, depending on \(L\), which makes the addresses start from zero. Here, checking self-avoidance costs \(\approx\mathcal{O}(1)\), with no possibility of a _hashing collision_. However, since our problem requires constraining the polymer above the plane on which it is grafted, there is a significant chance that the polymer will move out of the simulation box. A possible way out is to use a dynamic tree structure, e.g., the AVL binary search tree [2]. In AVL, the algorithm works by creating a tree-like structure where each node represents an occupied lattice site. Each entry of a new step is associated with _search_, _insertion_ and _rebalancing_ of the tree branches. Each _insertion_ or _deletion_ operation requires \(\mathcal{O}(\log(n))\) time, where \(n\) is the total number of nodes, which translates to the number of monomers or occupied sites, i.e., the polymer length. For a chain of length \(N+1\), the total growth time (assuming only _insertion_ is performed) is \(\ln(1)+\ln(2)+\cdots+\ln(N)=\ln(N!)\). Using Stirling's approximation, for large \(N\), this is approximately \(\mathcal{O}(N\ln N)\). Moreover, the AVL algorithm can be easily incorporated in the recursive structure of the PERM algorithm.
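The bit-map addressing scheme described above can be sketched as follows; the box size, the helper names, and the example coordinates are illustrative choices (not the values used in our simulations), and the sketch ignores the out-of-box issue that motivates the AVL alternative.

```python
# Minimal sketch of the bit-map addressing scheme: each lattice site (x, y, z)
# with coordinates roughly in [-L//2, L//2) is mapped to a unique array index,
# so occupancy checks cost O(1).

L = 201                      # side of the virtual simulation box (assumption)
OFFSET = L**3 // 2           # shifts the address range so it starts at zero
occupied = bytearray(L**3)   # one byte per lattice site

def address(x, y, z):
    """f(x, y, z) = x + y*L + z*L^2 + offset."""
    return x + y * L + z * L * L + OFFSET

def is_free(site):
    return occupied[address(*site)] == 0

def occupy(site):
    occupied[address(*site)] = 1

def vacate(site):            # needed when a configuration is pruned/backtracked
    occupied[address(*site)] = 0

# example: occupy the grafting site and test two neighbours
occupy((0, 0, 0))
print(is_free((1, 0, 0)), is_free((0, 0, 0)))   # True False
```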
## II. Surface Contact Histogram
Often, crossovers result in a mixture of critical exponents, obtained from different methods such as finite-size-scaling analysis, the scaling of the specific-heat peaks with length (\(N\)), and the reunion exponent, also known as the _bubble-size exponent_ (for DNA), among others. Therefore, determining the nature of the transition becomes difficult. In such situations it is advisable to look at the probability distribution \(P(\cdot)\) of the associated order parameter close to the transition point.
A first-order transition is characterised by a doubly peaked distribution with a growing depth of the valley in between. This valley is the result of a \(d-1\) dimensional surface separating the two phases of the \(d\) dimensional system, which suppresses the states between the peaks. It grows exponentially deep in the thermodynamic limit,
\(P\sim\exp(-\sigma L^{d-1})\), where \(L\) is the size of the system. However, for certain models (or problems) this interface can be reduced to a point separating the two phases: e.g., in our DNA model the interface between a bound segment and an unbound segment is a point; in adsorption, a point separates the adsorbed and desorbed phases; and a point interface separates the collapsed-ferromagnetic phase from the coiled-paramagnetic phase in the case of a magnetic polymer [3]. In these situations the valley is absent and the surface free energy is no longer extensive in \(N\).
To understand the change in the nature of the adsorption transition, we look at the probability distribution of the surface contacts (\(n_{s}\)) at different lengths, denoted by \(P_{n,n_{s}}\), close to the transition point (\(q_{c}\)). To calculate \(P_{n,n_{s}}(q,g)\), we find the conditional partition sum \(Z_{n,n_{s}}\) for fixed \(q\) and \(g\), restricted to configurations of length \(n\) with \(n_{s}\) surface contacts, for different lengths. Finally, \(P_{n,n_{s}}\) is found using the formula,
\[P_{n,n_{s}}(q,g)=\frac{Z_{n,n_{s}}(q,g)}{\sum_{n_{s}=0}^{2n}Z_{n,n_{s}}(q,g)}.\] (S2)
For a continuous transition, the order parameter distribution is expected to hold a scaling relation of the form
\[P_{n_{s}}\sim N^{-\phi_{a}}p(n_{s}/N^{\phi_{a}}).\] (S3)
In Fig. S2, we show the scaling plot for \(P_{n,n_{s}}\) for the adsorption transition in the unbound state corresponding to \(q=0.285\) and \(g=0.7\).
## III. Estimation of the transition points
For \(q<q_{c}\), the partition sum of a SAW scales as
\[Z(q,N)\sim\mu^{N}N^{\gamma_{1}-1},\] (S4)
where the subscript \(1\) in the entropic exponent \(\gamma_{1}\) denotes the fact that one end is grafted on an impenetrable surface, while the exponential growth through \(\mu\) (the _effective coordination_ number) is invariant. Near the adsorption transition (\(q\sim q_{c}\)), \(Z(q,N)\) should scale as
\[Z(q,N)\sim\mu^{N}N^{\gamma_{1}^{\prime}-1}\psi[(q-q_{c})N^{\phi_{a}}],\] (S5)
where \(\psi(x)\) is the scaling function. Taking derivative of \(\ln Z(q,N)\) in Eq. (S5) with respect to \(q\), and setting \(q=q_{c}\), one obtains the scaling form of the mean adsorbed energy per unit length (\(N\)) at the critical point as
\[n_{s}\sim N^{\phi_{a}-1}.\] (S6)
Therefore, at the critical adsorption point the quantity \(n_{s}/N^{\phi_{a}-1}\) should be \(N\)-independent for \(N\to\infty\). For example, in Fig. S3(b) the estimated critical adsorption point using Eq. (S6) is \(q_{c}=0.1431(5)\) for \(g=5\). For higher \(g\)'s, when the chain is completely bound, this should converge to \(q_{c}=0.1428\). One must be careful to use the appropriate \(\phi_{a}\); we use \(\phi_{a}=1/2\) for continuous transitions, and \(\phi_{a}=0.92\) for first-order transitions. An idea about the nature of the transition, and about the location of the transition point, can be obtained beforehand from the shape of the \(C_{s}\) curves. Further, following Ref. [4], we also looked at the quantity,
\[\gamma_{1,eff}^{\prime}=1+\frac{\ln\left[Z(q,2N)/Z(q,N/2)/\mu^{3N/2}\right]}{ \ln 4},\] (S7)
using \(\mu=4.6840386\). Here, we simulate chains of length up to \(N=10{,}000\), to monitor \(n_{s}/N^{\phi_{a}-1}\) and \(\gamma_{1,eff}^{\prime}\) up to \(N=5000\) [Fig. S4]. However, since our model has added complexities, e.g., two complementary monomers from different strands can occupy the same site to form a bp, we consider Eq. (S6) to be more reliable for estimating \(q_{c}\).
For melting, we looked at the average number of bound bps per unit length (\(n_{c}\)) and its fluctuation (\(C_{c}\)), to estimate the transition points. The melting points are obtained from the scaling (or data collapse) of \(n_{c}\) and \(C_{c}\), following the equations,
\[n_{c}\sim N^{\phi_{m}-1}f[(g-g_{c})N^{\phi_{m}}],\] (S8)
and,
\[C_{c}\sim N^{2\phi_{m}-1}h[(g-g_{c})N^{\phi_{m}}].\] (S9)
Tuning \(g_{c}\) and \(\phi_{m}\) to the appropriate values makes the data for different lengths fall on top of each other, resulting in a _data collapse_.
For continuous adsorption transitions, we also use the crossing point of the \(C_{s}\) curves for the two longest lengths to determine the critical point [Fig. S3(a)]. For first-order adsorption, however, the method of data collapse is used with Eqs. (S8) and (S9), but with \(q\) in place of \(g\), and with \(n_{c}\) and \(C_{c}\) replaced by \(n_{s}\) and \(C_{s}\), respectively.
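The data-collapse procedure of Eqs. (S8) and (S9) can be cast as a simple optimization over the candidate \((g_{c},\phi_{m})\). The following numpy sketch, with an illustrative cost function (the mean squared spread between rescaled curves) and synthetic input, is only meant to convey the idea and is not the analysis code used for the reported estimates.

```python
import numpy as np

def collapse_cost(g, n_c, lengths, g_c, phi):
    """Spread between the rescaled curves n_c * N^(1-phi) vs (g - g_c) * N^phi."""
    x_all = [(g - g_c) * N**phi for N in lengths]
    y_all = [n_c[N] * N**(1.0 - phi) for N in lengths]
    lo = max(x.min() for x in x_all)          # overlapping x-range of all curves
    hi = min(x.max() for x in x_all)
    grid = np.linspace(lo, hi, 200)
    curves = np.array([np.interp(grid, x, y) for x, y in zip(x_all, y_all)])
    return np.mean(np.var(curves, axis=0))    # zero for a perfect collapse

def best_collapse(g, n_c_data, lengths, g_grid, phi_grid):
    """Grid search for the (g_c, phi_m) pair giving the best collapse."""
    costs = [(collapse_cost(g, n_c_data, lengths, gc, p), gc, p)
             for gc in g_grid for p in phi_grid]
    return min(costs)

# tiny synthetic check: data generated to collapse exactly at g_c = 1.34, phi_m = 1
g = np.linspace(1.2, 1.5, 61)
lengths = [400, 700, 1000]
synthetic = {N: 0.5 * (1.0 + np.tanh((g - 1.34) * N)) for N in lengths}
print(best_collapse(g, synthetic, lengths,
                    g_grid=np.linspace(1.30, 1.38, 9),
                    phi_grid=[0.9, 1.0, 1.1]))
```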
|
2310.05351
|
Generalized Neural Collapse for a Large Number of Classes
|
Neural collapse provides an elegant mathematical characterization of learned
last layer representations (a.k.a. features) and classifier weights in deep
classification models. Such results not only provide insights but also motivate
new techniques for improving practical deep models. However, most of the
existing empirical and theoretical studies in neural collapse focus on the case
that the number of classes is small relative to the dimension of the feature
space. This paper extends neural collapse to cases where the number of classes
are much larger than the dimension of feature space, which broadly occur for
language models, retrieval systems, and face recognition applications. We show
that the features and classifier exhibit a generalized neural collapse
phenomenon, where the minimum one-vs-rest margin is maximized. We provide
empirical study to verify the occurrence of generalized neural collapse in
practical deep neural networks. Moreover, we provide theoretical study to show
that the generalized neural collapse provably occurs under unconstrained
feature model with spherical constraint, under certain technical conditions on
feature dimension and number of classes.
|
Jiachen Jiang, Jinxin Zhou, Peng Wang, Qing Qu, Dustin Mixon, Chong You, Zhihui Zhu
|
2023-10-09T02:27:04Z
|
http://arxiv.org/abs/2310.05351v3
|
# Generalized Neural Collapse for a Large Number of Classes
###### Abstract
Neural collapse provides an elegant mathematical characterization of learned last-layer representations (a.k.a. features) and classifier weights in deep classification models. Such results not only provide insights but also motivate new techniques for improving practical deep models. However, most of the existing empirical and theoretical studies in neural collapse focus on the case that the number of classes is small relative to the dimension of the feature space. This paper extends neural collapse to cases where the number of classes is much larger than the dimension of the feature space, which broadly occur in language models, retrieval systems, and face recognition applications. We show that the features and classifier exhibit a generalized neural collapse phenomenon, where the minimum one-vs-rest margin is maximized. We provide an empirical study to verify the occurrence of generalized neural collapse in practical deep neural networks. Moreover, we provide a theoretical study to show that generalized neural collapse provably occurs under the unconstrained feature model with spherical constraints, under certain technical conditions on the feature dimension and number of classes.
## 1 Introduction
Over the past decade, deep learning algorithms have achieved remarkable progress across numerous machine learning tasks and have significantly enhanced the state-of-the-art in many practical applications ranging from computer vision to natural language processing and retrieval systems. Despite their tremendous success, a comprehensive understanding of the features learned by deep neural networks (DNNs) is still lacking. The recent work Papyan et al. (2020); Papyan (2020) has empirically uncovered an intriguing phenomenon regarding the last-layer features and classifier of DNNs, called _Neural Collapse_ (\(\mathcal{NC}\)), which can be briefly summarized by the following characteristics:
* _Variability Collapse_ (\(\mathcal{NC}_{1}\)): Within-class variability of features collapses to zero.
* _Convergence to Simplex ETF_ (\(\mathcal{NC}_{2}\)): Class-mean features converge to a simplex Equiangular Tight Frame (ETF), achieving equal lengths, equal pair-wise angles, and maximal distance in the feature space.
* _Self-Duality_ (\(\mathcal{NC}_{3}\)): Linear classifiers converge to class-mean features, up to a global rescaling.
Neural collapse provides a mathematically elegant characterization of learned representations or features in deep learning based classification models, independent of network architectures, dataset
properties, and optimization algorithms. Building on the so-called _unconstrained feature model_(Mixon et al., 2020) or the _layer-peeled model_(Fang et al., 2021), subsequent research (Zhu et al., 2021; Lu and Steinerberger, 2020; Ji et al., 2021; Yaras et al.; Wojtowytsch et al., 2020; Ji et al.; Zhou et al., 2022; Han et al.; Tirer and Bruna, 2022; Zhou et al., 2022; Poggio and Liao, 2020; Thrampoulidis et al., 2022; Tirer et al., 2023; Nguyen et al., 2022) has provided theoretical evidence for the existence of the \(\mathcal{NC}\) phenomenon when using a family of loss functions including cross-entropy (CE) loss, mean-square-error (MSE) loss and variants of CE loss. Theoretical results regarding \(\mathcal{NC}\) not only contribute to a new understanding of the working of DNNs but also provide inspiration for developing new techniques to enhance their practical performance in various settings, such as imbalanced learning (Xie et al., 2023; Liu et al., 2023b), transfer learning (Galanti et al., 2022; Li et al., 2022; Xie et al., 2022; Galanti et al., 2022), continual learning (Yu et al., 2022; Yang et al., 2023), loss and architecture designs (Chan et al., 2022; Yu et al., 2020; Zhu et al., 2021), etc.
However, most of the existing empirical and theoretical studies in \(\mathcal{NC}\) focus on the case that the number of classes is small relative to the dimension of the feature space. Nevertheless, there are many cases in practice where the number of classes can be extremely large, such as
* Person identification (Deng et al., 2019), where each identity is regarded as one class.
* Language models (Devlin et al., 2018), where the number of classes equals the vocabulary size1. Footnote 1: Language models are usually trained to classify a token (or a collection of them) that is either masked in the input (as in BERT (Devlin et al., 2018)), or the next one following the context (as in language modeling), or a span of masked tokens in the input (as in T5 (Raffel et al., 2020)), etc. In such cases, the number of classes is equal to the number of all possible tokens, i.e., the vocabulary size.
* Retrieval systems (Mitra et al., 2018), where each document in the dataset represents one class.
* Contrastive learning (Chen et al., 2020a), where each training data can be regarded as one class.
In such cases, it is usually infeasible to have a feature dimension commensurate with the number of classes due to computational and memory constraints. Therefore, it is crucial to develop a comprehensive understanding of the characteristics of learned features in such cases, particularly with the increasing use of web-scale datasets that have a vast number of classes.
Contributions. This paper studies the geometric properties of the learned last-layer features and the classifiers for cases where the number of classes can be arbitrarily large compared to the feature dimension. Motivated by the use of spherical constraints in learning with a large number of classes, such as person identification and contrastive learning, we consider networks trained with _spherical constraints_ on the features and classifiers. Our contributions can be summarized as follows.
* **The Arrangement Problem: Generalizing \(\mathcal{NC}\) to a Large Number of Classes.** In Section 2 we introduce the generalized \(\mathcal{NC}\) (\(\mathcal{GNC}\)) for characterizing the last-layer features and classifier. In particular, \(\mathcal{GNC}_{1}\) and \(\mathcal{GNC}_{3}\) state the same as \(\mathcal{NC}_{1}\) and \(\mathcal{NC}_{3}\), respectively. \(\mathcal{GNC}_{2}\) states
Figure 1: In Generalized Neural Collapse (\(\mathcal{GNC}\)), the optimal classifier weight \(\{\mathbf{w}_{k}\}\) is a _Softmax Code_ defined by maximizing the _one-vs-rest distance_ (see Definition 2.1). _(a, b)_ Illustration of the one-vs-rest distance using the example of the \(\mathbf{w}_{1}\)-vs-\(\{\mathbf{w}_{2},\mathbf{w}_{3},\mathbf{w}_{4}\}\) distance, under two configurations of \(\{\mathbf{w}_{k}\}_{k=1}^{4}\) in a two-dimensional space. The distance in Case 1 is larger than that in Case 2. _(c)_ Illustration of the _one-vs-one distance_ used to define the Tammes problem (see Eq. (11)). We prove \(\mathcal{GNC}\) under technical conditions on the Softmax Code and the Tammes problem (see Section 3).
that the classifier weight is a _Softmax Code_, which generalizes the notion of a simplex ETF and is defined as the collection of points on the unit hyper-sphere that maximizes the minimum one-vs-rest distance (see Figure 1(a,b) for an illustration). Empirically, we verify that \(\mathcal{GNC}\) approximately holds in practical DNNs trained with a small temperature in the CE loss. Furthermore, we conduct a theoretical study in Section 3 to show that under the unconstrained features model (UFM) (Mixon et al., 2020; Fang et al., 2021; Zhu et al., 2021) and with a vanishing temperature, the global solutions satisfy \(\mathcal{GNC}\) under technical conditions on the Softmax Code and solutions to the Tammes problem (Tammes, 1930), the latter defined as a collection of points on the unit hyper-sphere that maximizes the minimum one-vs-one distance (see Figure 1(c) for an illustration).
* **The Assignment Problem: Implicit Regularization of Class Semantic Similarity.** Unlike the simplex ETF (as in \(\mathcal{NC}_{2}\)), in which the distance between any pair of vectors is the same, not all pairs in a Softmax Code (as in \(\mathcal{GNC}_{2}\)) are equally distant when the number of classes is greater than the feature space dimension. This leads to the "assignment" problem, i.e., the correspondence between the classes and the weights in a Softmax Code. In Section 4, we show empirically an implicit regularization effect induced by the semantic similarity of the classes, i.e., conceptually similar classes (e.g., Cat and Dog) are often assigned to closer classifier weights in a Softmax Code, compared to those that are conceptually dissimilar (e.g., Cat and Truck). Moreover, such an implicit regularization is beneficial, i.e., enforcing other assignments produces inferior model quality.
* **Cost Reduction for Practical Network Training/Fine-tuning.** The universality of alignment between classifier weights and class means (i.e., \(\mathcal{GNC}_{3}\)) implies that training the classifier is unnecessary and the weight can be simply replaced by the class-mean features. Our experiments in Section 5 demonstrate that such a strategy achieves comparable performance to classical training methods, and even better out-of-distribution performance than classical fine-tuning methods with significantly reduced parameters.
Related work. The recent work Liu et al. (2023) also introduces a notion of generalized \(\mathcal{NC}\) for the case of a large number of classes, which predicts equally spaced features. However, their work focuses on networks trained with weight decay, which empirical results in Appendix B and Yaras et al. (2023) show do not produce equal-length and equally spaced features for a relatively large number of classes. Moreover, the work Liu et al. (2023) relies on a specific choice of kernel function to describe the uniformity. Instead, we concretely define \(\mathcal{GNC}_{2}\) through the Softmax Code. While preparing this submission, we noticed a concurrent work Gao et al. (2023) that provides an analysis of generalized \(\mathcal{NC}\), but again for networks trained with weight decay. In addition, that work analyzes gradient flow for the corresponding UFM with a particular choice of weight decay, while our work studies the global optimality of the training problem. The work Zhou et al. (2022) empirically shows that the MSE loss is inferior to the CE loss when \(K>d+1\), but no formal analysis is provided for the CE loss. Finally, the global optimality of the UFM with spherical constraints has been studied in Lu & Steinerberger (2022); Yaras et al. (2023), but only for the cases \(K\leq d+1\) or \(K\rightarrow\infty\).
## 2 Generalized Neural Collapse for A Large Number of Classes
In this section, we begin by providing a brief overview of DNNs and introducing notations used in this study in Section 2.1. We will also introduce the concept of the UFM which is used in theoretical study of the subsequent section. Next, we introduce the notion of _Softmax Code_ for describing the distribution of a collection of points on the unit sphere, which prepares us to present a formal definition of _Generalized Neural Collapse_ and empirical verification of its validity in Section 2.2.
### Basics Concepts of DNNs
A DNN classifier aims to learn a feature mapping \(\phi_{\mathbf{\theta}}(\cdot):\mathbb{R}^{D}\rightarrow\mathbb{R}^{d}\) with learnable parameters \(\mathbf{\theta}\) that maps from input \(\mathbf{x}\in\mathbb{R}^{D}\) to a deep representation called the feature \(\phi_{\mathbf{\theta}}(\mathbf{x})\in\mathbb{R}^{d}\), and a linear classifier \(\mathbf{W}=\left[\mathbf{w}_{1}\quad\mathbf{w}_{2}\quad\cdots\quad\mathbf{w}_{K}\right]\in \mathbb{R}^{d\times K}\) such that the output (also known as the logits) \(\Psi_{\mathbf{\Theta}}(\mathbf{x})=\mathbf{W}^{\top}\phi_{\mathbf{\theta}}\left(\mathbf{x}\right) \in\mathbb{R}^{K}\) can make a correct prediction. Here, \(\mathbf{\Theta}=\{\mathbf{\theta},\mathbf{W}\}\) represents _all_ the learnable parameters of the DNN.2
Given a balanced training set \(\left\{\left(\mathbf{x}_{k,i},\mathbf{y}_{k}\right)\right\}_{i\in[n],k\in[K]}\subseteq \mathbb{R}^{D}\times\mathbb{R}^{K}\), where \(\mathbf{x}_{k,i}\) is the \(i\)-th sample in the \(k\)-th class and \(\mathbf{y}_{k}\) is the corresponding one-hot label with all zero entries except for unity in the \(k\)-th entry, the network parameters \(\mathbf{\Theta}\) are typically optimized by minimizing the following CE loss
Footnote 2: We use the default setting across a wide range of applications such as person identification (Wang et al., 2018; Deng et al., 2019), contrastive learning (Chen et al., 2020; Chen and He, 2021), etc.
\[\min_{\mathbf{\Theta}}\frac{1}{nK}\sum_{k=1}^{K}\sum_{i=1}^{n}\mathcal{L}_{\text{ CE}}\left(\Psi_{\mathbf{\Theta}}\left(\mathbf{x}_{k,i}\right),\mathbf{y}_{k},\tau\right), \;\mathcal{L}_{\text{CE}}\left(\mathbf{z},\mathbf{y}_{k},\tau\right)=-\log\Big{(}\frac {\exp(z_{k}/\tau)}{\sum_{j=1}^{K}\exp(z_{j}/\tau)}\Big{)}. \tag{1}\]
In the above, we assume that a spherical constraint is imposed on the features and classifier weights and that the logit \(z_{k}\) is divided by the temperature parameter \(\tau\). This is a common practice when dealing with a large number of classes (Wang et al., 2018; Chang et al., 2019; Chen et al., 2020). Specifically, we enforce \(\{\mathbf{w}_{k},\phi_{\mathbf{\theta}}(\mathbf{x}_{k,i})\}\subseteq\mathbb{S}^{d-1}:=\{\mathbf{a}\in\mathbb{R}^{d}:\|\mathbf{a}\|_{2}=1\}\) for all \(i\in[n]\) and \(k\in[K]\). An alternative regularization is weight decay on the model parameters \(\mathbf{\Theta}\), the effect of which we study in Appendix B.
To simplify the notation, we denote the _oblique manifold_ embedded in Euclidean space by \(\mathcal{O}\mathrm{B}(d,K):=\left\{\mathbf{W}\in\mathbb{R}^{d\times K}\,|\,\mathbf{w} _{k}\in\mathbb{S}^{d-1},\;\forall k\in[K]\right\}\). In addition, we denote the last-layer features by \(\mathbf{h}_{k,i}:=\mathbf{\phi}_{\mathbf{\theta}}(\mathbf{x}_{k,i})\). We rewrite all the features in a matrix form as
\[\mathbf{H}:=[\mathbf{H}_{1}\quad\mathbf{H}_{2}\quad\cdots\quad\mathbf{H}_{K}]\in\mathbb{R}^{d \times nK},\text{with }\mathbf{H}_{k}:=[\mathbf{h}_{k,1}\quad\cdots\quad\mathbf{h}_{k,n}]\in\mathbb{R}^{d \times n}.\]
Also we denote by \(\overline{\mathbf{h}}_{k}:=\frac{1}{n}\sum_{i=1}^{n}\mathbf{h}_{k,i}\) the class-mean feature for each class.
Unconstrained Features Model (UFM).The UFM (Mixon et al., 2020) or layer-peeled model (Fang et al., 2021), wherein the last-layer features are treated as free optimization variables, are widely used for theoretically understanding the \(\mathcal{NC}\) phenomena. In this paper, we will consider the following UFM with a spherical constraint on classifier weights \(\mathbf{W}\) and unconstrained features \(\mathbf{H}\):
\[\min_{\mathbf{W},\mathbf{H}}\frac{1}{nK}\sum_{k=1}^{K}\sum_{i=1}^{n}\mathcal{L}_{ \text{CE}}\left(\mathbf{W}^{\top}\mathbf{h}_{k,i},\mathbf{y}_{k},\tau\right)\quad\text{s. t.}\quad\mathbf{W}\in\mathcal{O}\mathrm{B}(d,K),\;\mathbf{H}\in\mathcal{O} \mathrm{B}(d,nK). \tag{2}\]
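For concreteness, the objective in (2) can be evaluated as in the following numpy sketch; the variable names and the random inputs are ours, and this is not the training code used in our experiments.

```python
import numpy as np

def ce_loss_spherical(W, H, labels, tau):
    """Objective of (2): W is d x K, H is d x (nK); columns are renormalized to
    the unit sphere (the oblique-manifold constraint), labels[i] is the class
    of the i-th feature column."""
    W = W / np.linalg.norm(W, axis=0, keepdims=True)
    H = H / np.linalg.norm(H, axis=0, keepdims=True)
    logits = (W.T @ H) / tau                      # K x (nK), scaled by 1/tau
    logits -= logits.max(axis=0, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return -log_prob[labels, np.arange(H.shape[1])].mean()

# random example: d = 10, K = 100 classes, n = 5 samples per class
rng = np.random.default_rng(0)
d, K, n = 10, 100, 5
W = rng.standard_normal((d, K))
H = rng.standard_normal((d, n * K))
labels = np.repeat(np.arange(K), n)
print(ce_loss_spherical(W, H, labels, tau=0.1))
```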
### Generalized Neural Collapse
We start by introducing the notion of _softmax code_ which will be used for describing \(\mathcal{GNC}\).
**Definition 2.1** (Softmax Code).: _Given positive integers \(d\) and \(K\), a softmax code is an arrangement of \(K\) points on a unit sphere of \(\mathbb{R}^{d}\) that maximizes the minimal distance between one point and the convex hull of the others:_
\[\max_{\mathbf{W}\in\mathcal{O}\mathrm{B}(d,K)}\rho_{\text{one-vs-rest}}(\mathbf{W}),\; \;\text{where}\;\;\rho_{\text{one-vs-rest}}(\mathbf{W})\doteq\min_{k}\mathrm{ dist}\Big{(}\mathbf{w}_{k},\{\mathbf{w}_{j}\}_{j\in[K]\setminus k}\Big{)}. \tag{3}\]
_In above, the distance between a point \(\mathbf{v}\) and a set \(\mathcal{W}\) is defined as \(\mathrm{dist}(\mathbf{v},\mathcal{W})=\inf_{\mathbf{w}\in\mathrm{conv}(\mathcal{W})} \{\|\mathbf{v}-\mathbf{w}\|\}\), where \(\mathrm{conv}(\cdot)\) denotes the convex hull of a set._
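The one-vs-rest distance in Definition 2.1 can be computed by projecting each \(\mathbf{w}_{k}\) onto the convex hull of the remaining vectors, which is a small quadratic program. The sketch below (using scipy's SLSQP solver) is illustrative rather than the exact procedure used in our experiments; the two \(K=4\), \(d=2\) configurations are chosen in the spirit of Figure 1(a,b) and are not taken from the figure itself.

```python
import numpy as np
from scipy.optimize import minimize

def dist_to_convex_hull(v, others):
    """Distance from v (d,) to the convex hull of the columns of others (d, m):
    min_c ||v - others @ c||  s.t.  c >= 0, sum(c) = 1."""
    m = others.shape[1]
    obj = lambda c: np.sum((v - others @ c) ** 2)
    cons = ({'type': 'eq', 'fun': lambda c: np.sum(c) - 1.0},)
    res = minimize(obj, np.full(m, 1.0 / m), method='SLSQP',
                   bounds=[(0.0, None)] * m, constraints=cons)
    return np.sqrt(res.fun)

def one_vs_rest_margin(W):
    """rho_one-vs-rest(W) in (3): smallest one-vs-rest distance over all columns."""
    K = W.shape[1]
    return min(dist_to_convex_hull(W[:, k], np.delete(W, k, axis=1))
               for k in range(K))

# uniform vs. non-uniform arrangements of K = 4 points on the circle (d = 2)
uniform = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float).T
angles = np.deg2rad([0.0, 60.0, 180.0, 240.0])
nonuniform = np.vstack([np.cos(angles), np.sin(angles)])
# the uniform arrangement attains the larger margin for these two examples
print(one_vs_rest_margin(uniform), one_vs_rest_margin(nonuniform))
```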
We now extend \(\mathcal{NC}\) to the _Generalized Neural Collapse_ (\(\mathcal{GNC}\)), which captures the properties of the features and classifiers at the terminal phase of training. With a vanishing temperature (i.e., \(\tau\to 0\)), the last-layer features and classifier exhibit the following \(\mathcal{GNC}\) phenomenon:
* _Variability Collapse_ (\(\mathcal{GNC}_{1}\)). All features of the same class collapse to the corresponding class mean. Formally, as used in Papyan et al. (2020), the quantity \(\mathcal{GNC}_{1}\doteq\frac{1}{K}\operatorname{tr}\left(\mathbf{\Sigma}_{W}\mathbf{\Sigma}_{B}^{\dagger}\right)\to 0\), where \(\mathbf{\Sigma}_{B}:=\frac{1}{K}\sum_{k=1}^{K}\overline{\mathbf{h}}_{k}\overline{\mathbf{h}}_{k}^{\top}\) and \(\mathbf{\Sigma}_{W}:=\frac{1}{nK}\sum_{k=1}^{K}\sum_{i=1}^{n}\left(\mathbf{h}_{k,i}-\overline{\mathbf{h}}_{k}\right)\left(\mathbf{h}_{k,i}-\overline{\mathbf{h}}_{k}\right)^{\top}\) denote the between-class and within-class covariance matrices, respectively.
* _Softmax Codes_ (\(\mathcal{GNC}_{2}\)). Classifier weights converge to a Softmax Code as in Definition 2.1. This property may be measured by \(\mathcal{GNC}_{2}\doteq\rho_{\text{one-vs-rest}}(\mathbf{W})\rightarrow\max_{\mathbf{W}\in\mathcal{O}\mathrm{B}(d,K)}\rho_{\text{one-vs-rest}}(\mathbf{W})\).
* _Self-Duality_ (\(\mathcal{GNC}_{3}\)). Linear classifiers converge to the class-mean features. Formally, this alignment can be measured by \(\mathcal{GNC}_{3}\doteq\frac{1}{K}\sum_{k=1}^{K}\left(1-\mathbf{w}_{k}^{\top} \overline{\mathbf{h}}_{k}\right)\to 0\).
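For concreteness, \(\mathcal{GNC}_{1}\) and \(\mathcal{GNC}_{3}\) above can be computed directly from the features and classifier weights as in the following numpy sketch (the function and variable names are ours); \(\mathcal{GNC}_{2}\) additionally requires the one-vs-rest distance of Definition 2.1.

```python
import numpy as np

def gnc_metrics(H, W, labels):
    """GNC1 and GNC3 as defined above.

    H      : d x N matrix of last-layer features (columns are samples)
    W      : d x K classifier with unit-norm columns
    labels : length-N integer class labels (each class assumed present)
    """
    d, K = W.shape
    # class-mean features, d x K
    means = np.stack([H[:, labels == k].mean(axis=1) for k in range(K)], axis=1)
    Sigma_B = (means @ means.T) / K                   # between-class covariance
    centered = H - means[:, labels]                   # within-class deviations
    Sigma_W = (centered @ centered.T) / H.shape[1]    # within-class covariance
    gnc1 = np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / K
    gnc3 = np.mean(1.0 - np.einsum('dk,dk->k', W, means))
    return gnc1, gnc3

# toy check: fully collapsed, self-dual features give (0, 0)
rng = np.random.default_rng(0)
d, K, n = 10, 100, 5
W = rng.standard_normal((d, K)); W /= np.linalg.norm(W, axis=0)
labels = np.repeat(np.arange(K), n)
H = W[:, labels]
print(gnc_metrics(H, W, labels))
```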
The main difference between \(\mathcal{GNC}\) and \(\mathcal{NC}\) lies in \(\mathcal{GNC}_{2}\)\(/\)\(\mathcal{NC}_{2}\), which describe the configuration of the classifier weight \(\mathbf{W}\). In \(\mathcal{NC}_{2}\), the classifier weights corresponding to different classes are described as a simplex ETF, which is a configuration of vectors that have equal pair-wise distance and that distance is maximized. Such a configuration does not exist in general when the number of classes is large, i.e., \(K>d+1\). \(\mathcal{GNC}_{2}\) introduces a new configuration described by the notion of softmax code. By Definition 2.1, a softmax code is a configuration where each vector is maximally separated from all the other points, measured by its distance to their convex hull. Such a definition is motivated from theoretical analysis (see Section 3). In particular, it reduces to simplex ETF when \(K\leq d+1\) (see Theorem 3.3).
Interpretation of Softmax Code. The Softmax Code admits a max-distance interpretation. Specifically, consider the features \(\{\mathbf{h}_{k,i}\}_{k\in[K],i\in[n]}\) from the \(K\) classes. In multi-class classification, one commonly used distance (or margin) measurement is the one-vs-rest (also called one-vs-all or one-vs-other) distance (Murphy, 2022), i.e., the distance of class \(k\) vis-a-vis the other classes. Noting that the distance between two classes is equivalent to the distance between the convex hulls of the data from each class (Murphy, 2022), the distance of class \(k\) vis-a-vis the other classes is given by \(\mathrm{dist}(\{\mathbf{h}_{k,i}\}_{i\in[n]},\{\mathbf{h}_{k^{\prime},i}\}_{k^{\prime}\in[K]\setminus k,i\in[n]})\). From \(\mathcal{GNC}_{1}\) and \(\mathcal{GNC}_{3}\) we can rewrite the distance as
\[\mathrm{dist}\big{(}\{\mathbf{h}_{k,i}\}_{i\in[n]},\{\mathbf{h}_{k^{\prime},i}\}_{k^{ \prime}\in[K]\setminus k,i\in[n]}\big{)}=\mathrm{dist}\big{(}\overline{\mathbf{h} _{k}},\{\overline{\mathbf{h}_{k^{\prime}}}\}_{k^{\prime}\in[K]\setminus k}\big{)} =\mathrm{dist}\big{(}\mathbf{w}_{k},\{\mathbf{w}_{k^{\prime}}\}_{k^{\prime}\in[K] \setminus k}\big{)}. \tag{4}\]
Since a Softmax Code maximizes the minimum of the rightmost term over all classes, it follows from \(\mathcal{GNC}_{2}\) that the learned features are such that their one-vs-rest distance, minimized over all classes \(k\in[K]\), is maximized. In other words, measured by the one-vs-rest distance, the learned features are maximally separated. Finally, we mention that the separation of classes may be characterized by other measures of distance as well, such as the one-vs-one distance (also known as the sample margin in Cao et al. (2019); Zhou et al. (2022)), which leads to the well-known Tammes problem. We will discuss this in Section 3.2.
Experimental Verification of \(\mathcal{GNC}\). We verify the occurrence of \(\mathcal{GNC}\) by training a ResNet18 (He et al., 2016) for image classification on the CIFAR100 dataset (Krizhevsky, 2009), and report the results in Figure 2. To simulate the case of \(K>d+1\), we use a modified ResNet18 whose feature dimension is \(10\). From Figure 2, we observe that both \(\mathcal{GNC}_{1}\) and \(\mathcal{GNC}_{3}\) converge to \(0\), and \(\mathcal{GNC}_{2}\) converges towards that of a Softmax Code for relatively small temperature \(\tau\). Additionally, selecting a small \(\tau\) is necessary not only for achieving \(\mathcal{GNC}\), but also for attaining high testing performance. Due to limited space, we present experimental details and further experiments with different architectures and datasets in Appendix B. In the next section, we provide a theoretical justification for \(\mathcal{GNC}\) under the UFM in (2).
## 3 Theoretical Analysis of GNC
In this section, we provide a theoretical analysis of \(\mathcal{GNC}\) under the UFM in (2). We first show in Section 3.1 that under appropriate temperature parameters, the solution to (2) can be approximated by the solution to a "HardMax" problem, which is of a simpler form amenable for subsequent analysis. We then provide a theoretical analysis of \(\mathcal{GNC}\) in Section 3.2, by first proving the optimal classifier forms a Softmax Code (\(\mathcal{GNC}_{2}\)), and then establishing \(\mathcal{GNC}_{1}\) and \(\mathcal{GNC}_{3}\) under technical conditions on Softmax Code and solutions to the Tammes problem. In addition, we provide insights
for the design of feature dimension \(d\) given a number of classes \(K\) by analyzing the upper and lower bound for the one-vs-rest distance of a Softmax Code. All proofs can be found in Appendix C.
### Preparation: the Asymptotic CE Loss
Due to the nature of the softmax function which blends the output vector, analyzing the CE loss can be difficult even for the unconstrained features model. The previous work Yaras et al. (2023) analyzing the case \(K\leq d+1\) relies on the simple structure of the global solutions, where the classifiers form a simplex ETF. However, this approach cannot be directly applied to the case \(K>d+1\) due to the absence of an informative characterization of the global solution. Motivated by the fact that the temperature \(\tau\) is often selected as a small value (\(\tau<1\), e.g., \(\tau=1/30\) in Wang et al. (2018)) in practical applications (Wang et al., 2018; Chen and He, 2021), we consider the case of \(\tau\to 0\) where the CE loss (2) converges to the following "HardMax" problem:
\[\min_{\begin{subarray}{c}\mathbf{W}\in\mathcal{O}\text{B}(d,K)\\ \mathbf{H}\in\mathcal{O}\text{B}(d,nK)\end{subarray}}\mathcal{L}_{\text{HardMax} }(\mathbf{W},\mathbf{H}),\text{ where }\mathcal{L}_{\text{HardMax}}(\mathbf{W},\mathbf{H}) \doteq\max_{k\in[K]}\max_{i\in[n]}\max_{k^{\prime}\neq k}\langle\mathbf{w}_{k^{ \prime}}-\mathbf{w}_{k},\mathbf{h}_{k,i}\rangle, \tag{5}\]
where \(\langle\cdot,\cdot\rangle\) denotes the inner-product operator. More precisely, we have the following result.
**Lemma 3.1** (Convergence to the HardMax problem).: _For any positive integers \(K\) and \(n\), we have_
\[\limsup_{\tau\to 0}\left(\operatorname*{arg\,min}_{\begin{subarray}{c}\mathbf{W} \in\mathcal{O}\text{B}(d,K)\\ \mathbf{H}\in\mathcal{O}\text{B}(d,nK)\end{subarray}}\frac{1}{nK}\sum_{k=1}^{K} \sum_{i=1}^{n}\mathcal{L}_{\text{CE}}\left(\mathbf{W}^{\top}\mathbf{h}_{k,i},\mathbf{y}_ {k},\tau\right)\right)\subseteq\operatorname*{arg\,min}_{\begin{subarray}{c }\mathbf{W}\in\mathcal{O}\text{B}(d,K)\\ \mathbf{H}\in\mathcal{O}\text{B}(d,nK)\end{subarray}}\mathcal{L}_{\text{HardMax} }(\mathbf{W},\mathbf{H}). \tag{6}\]
Our goal is not to replace CE with the HardMax function in practice. Instead, we will analyze the HardMax problem in (5) to gain insight into the global solutions and the \(\mathcal{GNC}\) phenomenon.
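As a numerical illustration of the intuition behind Lemma 3.1, for a single sample \(\tau\) times the CE loss approaches \(\max(0,\max_{k^{\prime}\neq k}\langle\mathbf{w}_{k^{\prime}}-\mathbf{w}_{k},\mathbf{h}\rangle)\) as \(\tau\to 0\), i.e., the per-sample quantity over which the HardMax objective in (5) takes its maximum. The random configuration in the sketch below is ours and purely illustrative.

```python
import numpy as np

def ce_per_sample(W, h, k, tau):
    """Temperature-scaled CE loss of a single feature h with label k."""
    a = (W.T @ h - W[:, k] @ h) / tau        # (z_j - z_k) / tau for all j
    m = a.max()
    return m + np.log(np.exp(a - m).sum())   # stable log-sum-exp

rng = np.random.default_rng(1)
d, K, k = 5, 20, 3
W = rng.standard_normal((d, K)); W /= np.linalg.norm(W, axis=0)
h = rng.standard_normal(d); h /= np.linalg.norm(h)

# the per-sample quantity appearing inside the HardMax objective (5)
margin_violation = max((W[:, j] - W[:, k]) @ h for j in range(K) if j != k)
print("limit value:", max(0.0, margin_violation))
for tau in [1.0, 0.1, 0.01, 0.001]:
    print("tau =", tau, " tau * CE =", tau * ce_per_sample(W, h, k, tau))
```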
### Main Result: Theoretical Analysis of \(\mathcal{GNC}\)
\(\mathcal{GNC}_{2}\) and Softmax Code.Our main result for \(\mathcal{GNC}_{2}\) is the following.
**Theorem 3.2** (\(\mathcal{GNC}_{2}\)).: _Let \((\mathbf{W}^{\star},\mathbf{H}^{\star})\) be an optimal solution to (5). Then, it holds that \(\mathbf{W}^{\star}\) is a Softmax Code, i.e.,_
\[\mathbf{W}^{\star}=\operatorname*{arg\,max}_{\mathbf{W}\in\mathcal{O}\text{B}(d,K)} \rho_{\text{one-vs-rest}}(\mathbf{W}). \tag{7}\]
\(\mathcal{GNC}_{2}\) is described by the Softmax Code, which is defined from an optimization problem (see Definition 2.1). This optimization problem may not have a closed-form solution in general. Nonetheless, the one-vs-rest distance that is used to define the Softmax Code has a clear geometric meaning, making an intuitive interpretation of the Softmax Code tractable. Specifically, maximizing the one-vs-rest distance results in the classifier weight vectors \(\{\mathbf{w}_{k}^{\star}\}\) being maximally distant. As shown in Figures 1(a) and 1(b) for a simple setting of four classes in a 2D plane, the weight vectors \(\{\mathbf{w}_{k}\}\) that are uniformly distributed (and hence maximally distant) have a larger margin than in the non-uniform case.
For certain choices of \((d,K)\) the Softmax Code bears a simple form.
**Theorem 3.3**.: _For any positive integers \(K\) and \(d\), let \(\mathbf{W}^{\star}\in\mathcal{O}\text{B}(d,K)\) be a Softmax Code. Then,_
* \(d=2\)_:_ \(\{\mathbf{w}_{k}^{\star}\}\) _is uniformly distributed on the unit circle, i.e.,_ \(\{\mathbf{w}_{k}^{\star}\}=\big{\{}\big{(}\cos(\frac{2\pi k}{K}+\alpha),\sin(\frac {2\pi k}{K}+\alpha)\big{)}\big{\}}\) _for some_ \(\alpha\)_;_
* \(K\leq d+1\)_:_ \(\{\mathbf{w}_{k}^{\star}\}\) _forms a simplex ETF, i.e.,_ \(\mathbf{W}^{\star}=\sqrt{\frac{K}{K-1}}\mathbf{P}(\mathbf{I}_{K}-\frac{1}{K}\mathbf{1}_{K}\mathbf{1}_{K}^{\top})\) _for some orthonormal_ \(\mathbf{P}\in\mathbb{R}^{d\times K}\)_;_
* \(d+1<K\leq 2d\)_:_ \(\rho_{\text{one-vs-rest}}(\mathbf{W}^{\star})=1\) _which can be achieved when_ \(\{\mathbf{w}_{k}^{\star}\}\) _are a subset of vertices of a cross-polytope;_
For the cases of \(K\leq d+1\), the optimal \(\mathbf{W}^{\star}\) from Theorem 3.3 is the same as that of Lu and Steinerberger (2022). However, Theorem 3.3 is an analysis of the HardMax loss while Lu and Steinerberger (2022) analyzed the CE loss.
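For the case \(K\leq d+1\), one concrete construction of the simplex ETF in Theorem 3.3 is sketched below; the particular embedding into \(\mathbb{R}^{d}\) (i.e., the choice of \(\mathbf{P}\)) is ours, since the theorem only specifies the ETF up to an orthonormal transformation.

```python
import numpy as np

def simplex_etf(d, K):
    """One simplex ETF in R^d for K <= d + 1: unit-norm columns with
    pairwise inner products -1/(K-1), as in the second case of Theorem 3.3."""
    assert K <= d + 1
    # K x K matrix whose columns already form the ETF in a (K-1)-dim subspace
    M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
    U, _, _ = np.linalg.svd(M)
    coords = U[:, :K - 1].T @ M          # isometric coordinates, (K-1) x K
    W = np.zeros((d, K))
    W[:K - 1, :] = coords                # embed into the first K-1 axes of R^d
    return W

W = simplex_etf(d=10, K=11)              # the extreme case K = d + 1
G = W.T @ W
print(np.allclose(np.diag(G), 1.0),
      np.allclose(G[~np.eye(11, dtype=bool)], -1.0 / 10))
```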
\(\mathcal{GNC}_{1}\) and Within-class Variability Collapse.To establish the within-class variability collapse property, we require a technical condition associated with the Softmax Code. Recall that Softmax Codes are those that maximize the _minimum_ one-vs-rest distance over all classes. We introduce _rattlers_, which are classes that do not attain such a _minimum_.
**Definition 3.4** (Rattler of Softmax Code).: _Given positive integers \(d\) and \(K\), a rattler associated with a Softmax Code \(\mathbf{W}^{\text{SC}}\in\mathcal{O}\text{B}(d,K)\) is an index \(k_{\text{rattler}}\in[K]\) for which_
\[\min_{k\in[K]}\operatorname{dist}(\mathbf{w}_{k}^{\text{SC}},\{\mathbf{w}_{j}^{\text{ SC}}\}_{j\in[K]\setminus k})\neq\operatorname{dist}(\mathbf{w}_{k_{\text{rattler}}}^{ \text{SC}},\{\mathbf{w}_{j}^{\text{SC}}\}_{j\in[K]\setminus k_{\text{rattler}}}). \tag{8}\]
In other words, rattlers are points in a Softmax Code with no neighbors at the minimum one-vs-rest distance. This notion is borrowed from the literature on the _Tammes Problem_ (Cohn, 2022; Wang, 2009), which we will soon discuss in more detail3.
Footnote 3: The occurrence of rattlers is rare: Among the \(182\) pairs of \((d,K)\) for which the solution to Tammes problem is known, only \(31\) have rattlers (Cohn, 2022). This has excluded the cases of \(d=2\) or \(K\leq 2d\) where there is no rattler. The occurrence of rattler in Softmax Code may be rare as well.
We are now ready to present the main results for \(\mathcal{GNC}_{1}\).
**Theorem 3.5** (\(\mathcal{GNC}_{1}\)).: _Let \((\mathbf{W}^{\star},\mathbf{H}^{\star})\) be an optimal solution to (5). For all \(k\) that is not a rattler of \(\mathbf{W}^{\star}\), it holds that_
\[\overline{\mathbf{h}}_{k}^{\star}\doteq\mathbf{h}_{k,1}^{\star}=\cdots=\mathbf{h}_{k,n}^ {\star}=\mathcal{P}_{\mathbb{S}^{d-1}}\left(\mathbf{w}_{k}^{\star}-\mathcal{P}_{ \{\mathbf{w}_{j}^{\star}\}_{j\in[K]\setminus k}}(\mathbf{w}_{k}^{\star})\right), \tag{9}\]
_where \(\mathcal{P}_{\mathcal{W}}(\mathbf{v})\doteq\arg\min_{\mathbf{w}\in\operatorname{conv}( \mathcal{W})}\{\|\mathbf{v}-\mathbf{w}\|_{2}\}\) denotes the projection of \(\mathbf{v}\) on \(\operatorname{conv}(\mathcal{W})\)._
The following result shows that the requirement in Theorem 3.5 that \(k\) is not a rattler is satisfied in many cases.
**Theorem 3.6**.: _If \(d=2\), or \(K\leq d+1\), Softmax Code has no rattler for all classes._
\(\mathcal{GNC}_{3}\) and Self-Duality.To motivate our technical conditions for establishing self-duality, assume that any optimal solution \((\mathbf{W}^{\star},\mathbf{H}^{\star})\) to (5) satisfies self-duality as well as \(\mathcal{GNC}_{1}\). This implies that
\[\operatorname*{arg\,min}_{\mathbf{W}\in\operatorname{OB}(d,K),\mathbf{H}\in \operatorname{OB}(d,nK)}\mathcal{L}_{\text{HardMax}}(\mathbf{W},\mathbf{H})= \operatorname*{arg\,min}_{\mathbf{W}\in\mathcal{O}\text{B}(d,nK)}\max_{k\in[K]} \max_{i\in[n]}\max_{\mathbf{k}^{\prime}\neq k}\langle\mathbf{w}_{k^{\prime}}-\mathbf{w}_{ k},\mathbf{w}_{k}\rangle. \tag{10}\]
After simplification we may rewrite the optimization problem on the right hand side equivalently as:
\[\max_{\mathbf{W}\in\operatorname{OB}(d,K)}\rho_{\text{one-vs-one}}(\mathbf{W}),\ \ \text{where}\ \ \rho_{\text{one-vs-one}}(\mathbf{W})\doteq\min_{k\in[K]}\min_{\mathbf{k}^{\prime} \neq k}\operatorname{dist}(\mathbf{w}_{k},\mathbf{w}_{k^{\prime}}). \tag{11}\]
Eq. (11) is the well-known _Tammes problem_. Geometrically, the problem asks for a distribution of \(K\) points on the unit sphere of \(\mathbb{R}^{d}\) so that the minimum distance between any pair of points is maximized. The Tammes problem is unsolved in general, except for certain pairs of \((K,d)\).
Both the Tammes problem and the Softmax Code are problems of arranging points to be maximally separated on the unit sphere, their difference being the specific measure of separation. Comparing (11) and (3), the Tammes problem maximizes for all \(k\in[K]\) the _one-vs-one distance_, i.e., \(\min_{k^{\prime}\neq k}\operatorname{dist}(\mathbf{w}_{k},\mathbf{w}_{k^{\prime}})\), whereas the Softmax Code maximizes the minimum _one-vs-rest distance_, i.e., \(\operatorname{dist}(\mathbf{w}_{k},\{\mathbf{w}_{j}\}_{j\in[K]\setminus k})\). Both the one-vs-one and one-vs-rest distances characterize the separation of the weight vector \(\mathbf{w}_{k}\) from \(\{\mathbf{w}_{j}\}_{j\in[K]\setminus k}\). As illustrated in Figure 1, taking \(k=1\), the former is the distance between \(\mathbf{w}_{1}\) and its closest point in the set \(\{\mathbf{w}_{2},\mathbf{w}_{3},\mathbf{w}_{4}\}\), in this case \(\mathbf{w}_{2}\) (see Figure 1(c)), whereas the latter captures the minimal distance from \(\mathbf{w}_{1}\) to the convex hull of the remaining vectors \(\{\mathbf{w}_{2},\mathbf{w}_{3},\mathbf{w}_{4}\}\) (see Figure 1(b)).
Since the Tammes problem can be derived from the self-duality constraint on the HardMax problem, it may not be surprising that the Tammes problem can be used to describe a condition for establishing self-duality. Specifically, we have the following result.
**Theorem 3.7** (\(\mathcal{GNC}_{3}\)).: _For any \(K,d\) such that both Tammes problem and Softmax Code have no rattler, the following two statements are equivalent:_
* _Any optimal solution_ \((\mathbf{W}^{\star},\mathbf{H}^{\star})\) _to (5) satisfies_ \(\mathbf{h}_{k,i}^{\star}=\mathbf{w}_{k}^{\star},\forall i\in[n],\forall k\in[K]\)_;_
* _The Tammes problem and the Softmax codes are equivalent, i.e.,_ \[\operatorname*{arg\,max}_{\mathbf{W}\in\operatorname*{\mathcal{O}}\operatorname*{ \textnormal{B}}(d,K)}\rho_{\textnormal{one-vs-rest}}(\mathbf{W})=\operatorname*{ arg\,max}_{\mathbf{W}\in\operatorname*{\mathcal{O}}\operatorname*{\textnormal{B}}(d,K)} \rho_{\textnormal{one-vs-one}}(\mathbf{W}).\] (12)
In words, Theorem 3.7 states that \(\mathcal{GNC}_{3}\) holds if and only if the Tammes problem in (11) and the Softmax Code are equivalent. As both the Tammes problem and the Softmax Code maximize the separation between one vector and the others, though their notions of separation are different, we conjecture that they are equivalent and share the same optimal solutions. We prove this conjecture for some special cases and leave the study of the general case as future work.
**Theorem 3.8**.: _If \(d=2\), or \(K\leq d+1\), the Tammes problem and the Softmax codes are equivalent._
### Insights for Choosing Feature Dimension \(d\) Given Class Number \(K\)
Given a class number \(K\), how does the choice of feature dimension \(d\) affect the model performance? Intuitively, smaller \(d\) reduces the separability between classes in a Softmax Code. We define this rigorously by providing bounds for the one-vs-rest distance of a Softmax Code based on \(d\) and \(K\).
**Theorem 3.9**.: _Assuming \(K\geq\sqrt{2\pi\sqrt{ed}}\) and letting \(\Gamma(\cdot)\) denote the Gamma function, we have_
\[\frac{1}{2}\bigg{[}\frac{\sqrt{\pi}}{K}\frac{\Gamma\left(\frac{d+1}{2}\right) }{\Gamma\left(\frac{d}{2}+1\right)}\bigg{]}^{\frac{2}{d-1}}\leq\max_{\mathbf{W}\in \operatorname*{\mathcal{O}}\operatorname*{\textnormal{B}}(d,K)}\rho_{ \textnormal{one-vs-rest}}(\mathbf{W})\leq 2\bigg{[}\frac{2\sqrt{\pi}}{K}\frac{\Gamma \left(\frac{d+1}{2}\right)}{\Gamma\left(\frac{d}{2}\right)}\bigg{]}^{\frac{1} {d-1}}. \tag{13}\]
The bounds characterize the separability of \(K\) classes in a \(d\)-dimensional space. Given the number of classes \(K\) and a desired margin \(\rho\), the minimal feature dimension is roughly on the order of \(\log(K^{2}/\rho)\), showing that classes separate easily in higher dimensions. This also provides a justification for applications like face classification and self-supervised learning, where the number of classes (e.g., millions of classes) can be significantly larger than the dimensionality of the features (e.g., \(d=512\)).
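The bounds in Theorem 3.9 can be evaluated numerically as follows; the log-Gamma formulation and the example \((d,K)\) pairs are our own choices for illustration.

```python
import math

def softmax_code_margin_bounds(d, K):
    """Lower/upper bounds of Theorem 3.9 on max_W rho_one-vs-rest(W),
    evaluated in log-space (lgamma) to avoid overflow for large d."""
    log_ratio_lower = math.lgamma((d + 1) / 2) - math.lgamma(d / 2 + 1)
    log_ratio_upper = math.lgamma((d + 1) / 2) - math.lgamma(d / 2)
    lower = 0.5 * math.exp((2.0 / (d - 1)) * (0.5 * math.log(math.pi)
                                              - math.log(K) + log_ratio_lower))
    upper = 2.0 * math.exp((1.0 / (d - 1)) * (math.log(2.0 * math.sqrt(math.pi))
                                              - math.log(K) + log_ratio_upper))
    return lower, upper

# the assumption K >= sqrt(2*pi*sqrt(e*d)) holds for all three examples below
for d, K in [(10, 100), (128, 10_000), (512, 1_000_000)]:
    print(d, K, softmax_code_margin_bounds(d, K))
```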
By conducting experiments on ResNet-50 with varying feature dimensions for ImageNet classification, we further corroborate the relationship between feature dimension and network performance in Figure 3. First, we observe that the curve of the optimal distance is closely aligned with the curve of testing performance, indicating a strong correlation between distance and testing accuracy. Moreover, both the distance and performance curves exhibit a slow (exponential) decrease as the feature dimension \(d\) decreases, which is consistent with the bounds in Theorem 3.9.
## 4 The Assignment Problem: An Empirical Study
Unlike the case \(d\geq K-1\), where the optimal classifier (simplex ETF) has equal angles between any pair of classifier weights, when \(d<K-1\) not all pairs of classifier weights are equally distant under the optimal \(\mathbf{W}\) (Softmax Code) predicted in Theorem 3.2. Consequently, this leads to a "class assignment" problem. To illustrate this, we train a ResNet18 network with \(d=2\) on the four classes {Automobile, Cat, Dog, Truck} from the CIFAR10 dataset, which are selected due to their clear semantic similarity and discrepancy. In this case, according to Theorem 3.3, the optimal classifiers are given by \([1,0],[-1,0],[0,1],[0,-1]\), up to a rotation. Consequently, there are three distinct class assignments, as illustrated in Figures 4(b) to 4(d).
When doing standard training, the classifier consistently converges to the case where Cat and Dog are closer together across 5 different trials; Figure 4(a) shows the learned features (dots) and classifier weights (arrows) in one such trial. This demonstrates an implicit algorithmic regularization in training DNNs, which naturally attracts (semantically) similar classes and separates dissimilar ones.
We also conduct experiments with the classifier fixed to be one of the three arrangements, and present the results in Figures 4(b) to 4(d). Among them, we observe that the case where Cat and Dog
Figure 3: Effect of feature dimension \(d\) on (Left \(y\)-axis): \(\rho_{\textnormal{one-vs-rest}}(\mathbf{W}^{\star})\) and its upper/lower bounds (in Theorem 3.9), and (Right \(y\)-axis): training and test accuracies for ResNet-50 on ImageNet.
are far apart achieves a testing accuracy of \(89.95\%\), which is lower than the other two cases with testing accuracies of \(91.90\%\) and \(92.13\%\). This demonstrates the important role of class assignment in the generalization of DNNs, and that the implicit bias of the learned classifier is benign, i.e., it leads to more generalizable solutions.
## 5 Implications for Practical Network Training/Fine-tuning
Since the classifier always converges to a simplex ETF when \(K\leq d+1\), prior work proposes to fix the classifier as a simplex ETF for reducing training cost (Zhu et al., 2021) and handling imbalanced datasets (Yang et al., 2022). When \(K>d+1\), the optimal classifier is also known to be a Softmax Code according to \(\mathcal{GNC}_{2}\). However, the same method as in prior work may become sub-optimal due to the class assignment problem (see Section 4). To address this, we introduce the method of class-mean features (CMF) classifiers, where the classifier weights are set to be the exponential moving average of the mini-batch class-mean features during the training process. This approach is motivated by \(\mathcal{GNC}_{3}\), which states that the optimal classifier converges to the class-mean features. We explain the details of CMF in Appendix B. As in prior work, CMF can reduce trainable parameters as well. For instance, it can reduce \(30.91\)% of the total parameters in a ResNet18 for the BUPT-CBFace-50 dataset (Zhang and Deng, 2020). Here, we compare CMF with the standard training where the classifier is learned together with the feature mapping, in both training from scratch and fine-tuning.
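As a rough illustration of the CMF update described above (not the paper's exact implementation), the following NumPy sketch maintains the classifier weights as an exponential moving average of normalized mini-batch class-mean features; the momentum value and the re-normalization step are our own assumptions.

```python
import numpy as np

def update_cmf_classifier(W, feats, labels, momentum=0.9):
    """EMA update of the classifier weights W (K x d) using the class-mean
    features of the current mini-batch (a sketch of the CMF idea).

    feats: (B, d) array of (spherically normalized) last-layer features
    labels: (B,) integer class labels
    """
    W = W.copy()
    for k in np.unique(labels):
        class_mean = feats[labels == k].mean(axis=0)
        class_mean /= np.linalg.norm(class_mean) + 1e-12   # keep on the sphere (assumption)
        W[k] = momentum * W[k] + (1 - momentum) * class_mean
        W[k] /= np.linalg.norm(W[k]) + 1e-12
    return W

# toy usage: K = 10 classes, d = 5 features, batch of 64
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 5)); W /= np.linalg.norm(W, axis=1, keepdims=True)
feats = rng.normal(size=(64, 5)); feats /= np.linalg.norm(feats, axis=1, keepdims=True)
labels = rng.integers(0, 10, size=64)
W = update_cmf_classifier(W, feats, labels)
```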
Training from Scratch.We train a ResNet18 on CIFAR100 by using a learnable classifier or the CMF classifier. The learning curves in Figure 5 indicate that the approach with CMF classifier achieves comparable performance to the classical training protocols.
Fine-tuning.To verify the effectiveness of the CMF classifiers on fine-tuning, we follow the setting in Kumar et al. (2022) to measure the performance of the fine-tuned model on both in-distribution (ID) task (i.e., CIFAR10 Krizhevsky (2009)) and OOD task (STL10 Coates et al. (2011)). We compare the standard approach that fine-tunes both the classifier (randomly initialized) and the pre-trained feature mapping with our approach (using the CMF classifier). Our experiments show that the approach with CMF classifier achieves slightly better ID accuracy (\(98.00\%\) VS \(97.00\%\)) and a better OOD performance (\(90.67\%\) VS \(87.42\%\)). The improvement of OOD performance stems from the ability to align the classifier with the class-means through the entire process,
Figure 4: **Assignment of classes to classifier weights** for a ResNet18 with 2-dimensional feature space trained on the 4 classes {Automobile, Cat, Dog, Truck} from CIFAR10. _(a)_ Learned classifier. _(b-d)_ Classifiers fixed to be three different assignments. Test accuracy is reported in the bracket.
Figure 5: **Comparison of the learning curves (training and testing accuracies) with learned classifiers vs. CMF classifiers trained with ResNet18, DenseNet121, and ResNeXt50 on CIFAR100 dataset and \(d=10\).**
which better preserves the OOD property of the pre-trained model. Our approach also simplifies the two-stage approach of linearly probing and subsequent full fine-tuning in Kumar et al. (2022).
## 6 Conclusion
In this work, we have introduced generalized neural collapse (\(\mathcal{GNC}\)) for characterizing learned last-layer features and classifiers in DNNs under an arbitrary number of classes and feature dimensions. We empirically validate the \(\mathcal{GNC}\) phenomenon on practical DNNs that are trained with a small temperature in the CE loss and subject to spherical constraints on the features and classifiers. Building upon the unconstrained features model, we have proven that \(\mathcal{GNC}\) holds under certain technical conditions. \(\mathcal{GNC}\) could offer valuable insights for the design, training, and generalization of DNNs. For example, the minimal one-vs-rest distance provides implications for designing feature dimensions when dealing with a large number of classes. Additionally, we have leveraged \(\mathcal{GNC}\) to enhance training efficiency and fine-tuning performance by fixing the classifier as class-mean features. Further exploration of \(\mathcal{GNC}\) in other scenarios, such as imbalanced learning, is left for future work. It is also of interest to further study the problem of optimally assigning classifiers from the Softmax Code to each class, which could shed light on developing techniques for better classification performance.
## Acknowledgment
JJ, JZ, and ZZ acknowledge support from NSF grants CCF-2240708 and IIS-2312840. PW and QQ acknowledge support from NSF CAREER CCF-2143904, NSF CCF-2212066, NSF CCF-2212326, NSF IIS 2312842, ONR N00014-22-1-2529, an AWS AI Award, a gift grant from KLA, and MICDE Catalyst Grant. We thank CloudBank (supported by NSF under Award #1925001) for providing the computational resources.
|
2304.08932
|
Distributed Search Planning in 3-D Environments With a Dynamically
Varying Number of Agents
|
In this work, a novel distributed search-planning framework is proposed,
where a dynamically varying team of autonomous agents cooperate in order to
search multiple objects of interest in three dimensions (3-D). It is assumed
that the agents can enter and exit the mission space at any point in time, and
as a result the number of agents that actively participate in the mission
varies over time. The proposed distributed search-planning framework takes into
account the agent dynamical and sensing model, and the dynamically varying
number of agents, and utilizes model predictive control (MPC) to generate
cooperative search trajectories over a finite rolling planning horizon. This
enables the agents to adapt their decisions on-line while considering the plans
of their peers, maximizing their search planning performance, and reducing the
duplication of work.
|
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, Marios M. Polycarpou
|
2023-04-18T12:17:52Z
|
http://arxiv.org/abs/2304.08932v1
|
# Distributed Search Planning in 3D Environments with a Dynamically Varying Number of Agents
###### Abstract
In this work, a novel distributed search-planning framework is proposed, where a dynamically varying team of autonomous agents cooperate in order to search multiple objects of interest in 3D. It is assumed that the agents can enter and exit the mission space at any point in time, and as a result the number of agents that actively participate in the mission varies over time. The proposed distributed search-planning framework takes into account the agent dynamical and sensing model, and the dynamically varying number of agents, and utilizes model predictive control (MPC) to generate cooperative searcher trajectories over a finite rolling planning horizon. This enables the agents to adapt their decisions on-line while considering the plans of their peers, maximizing their search planning performance, and reducing the duplication of work.
Multi-Agent systems, Distributed model predictive control, Trajectory planning, Distributed coverage.
## I Introduction
In emergency response situations the immediate deployment of the response team is imperative for saving people's lives. In such situations the ability to plan and organize predictable, precise, and efficient cooperative searches of the affected area is of the highest importance in order to locate people in danger. In general, an emergency response mission can be divided into two main tasks [1] i.e., assessment, and search-and-rescue. In the assessment task, the rescue team first assesses the damages and hazards of the affected region and then determines the areas that need to be searched for locating survivors or people in need. During the assessment task the rescue team organizes and plans the search mission. Subsequently, the purpose of the search-and-rescue task is to perform organized, complete and efficient searches of the affected area in order to locate survivors and provide rescue.
We envision that a team of distributed autonomous mobile agents (i.e., unmanned aerial vehicles (UAVs) or drones), capable of conducting optimized and coherent search-planning in 3D, can significantly enhance the capabilities and success rate of the rescue team in emergency response situations. The assessment task is captured in this work through a mission pre-planning step in which the affected area that needs to be searched, once identified, is decomposed into a number of artificial cells according to the UAV's sensing capabilities and required search-effort. Then we propose a distributed search-planning framework in which multiple autonomous agents cooperate in order to efficiently search the affected area. In our previous works [2, 3] we have presented a novel planning framework for the problem of 3D search planning with a single autonomous agent. Therefore, the motivation of this article is to design a multi-agent distributed 3D search-planning framework with improved performance and more capabilities compared to the single agent case.
In this work, we propose a distributed search-planning framework, based on model predictive control (MPC) [4], for the problem of cooperative searching in 3D environments with a dynamically varying number of agents. In particular, in this work, it is assumed that the agents can enter and exit the mission space (i.e., to recharge their depleted batteries) at any point in time, and as a result the number of active agents that participate in the mission changes over time. This necessitates the need for efficient planning and cooperation amongst the team of agents, so that they can adapt their plans and make decisions on-line in order to better accommodate the collective objective of the team.
More specifically, the objective is for a dynamically varying team of agents to cooperate in order to efficiently search multiple objects of interest in 3D (i.e., the total surface area of each object of interest must be searched) with a certain detection probability. The agents are equipped with a camera-based sensing system with finite field-of-view (FoV), which they use to scan the surface of the objects of interest while maintaining the required detection probability (specified at the beginning of the mission). Therefore, the agents cooperate in order to scan the total surface area of each object of interest, searching for survivors while trying to minimize the duplication of work. To achieve this, it is assumed that the agents can opportunistically communicate and exchange their search-maps and their future intentions whenever they are in communication range with each other. The proposed approach does not require any form of coordination between the agents, thus enabling them to plan their decisions autonomously and in parallel with each other, while optimizing the collective objective of the team. Overall, the contributions of this work are as follows:
* We propose a novel distributed search planning framework, based on model predictive control (MPC), which enables a dynamically varying number of autonomous agents to cooperatively search in 3D multiple objects of interest, without requiring any form of coordination.
* We derive a mixed integer quadratic programming (MIQP) mathematical formulation for the distributed 3D search planning problem which can be solved efficiently with widely available optimization tools.
* Finally, the performance of the proposed approach is demonstrated through a series of qualitative and quantitative synthetic experiments.
The rest of the paper is organized as follows. Section II
summarizes the related work on search-planning and coverage control with multiple agents. Section III develops the system model and Section IV discusses the mission pre-planning step which takes place prior to search-planning. Then, Section V discusses the details of the proposed distributed multi-agent 3D search planning framework, Sec. VI evaluates the proposed framework and finally Sec. VII concludes the paper.
## II Related Work
In the recent years, we have witnessed an unprecedented interest in UAV-based applications and automation technologies [5, 6, 7, 8, 9, 10, 11, 12, 13], with particular interest in planning techniques. In this work the problem of trajectory planning with the objective of searching an area of interest with multiple agents is investigated. An interesting work on this topic is shown in [14], where the authors proposed a centralized formulation for the problem of multi-agent search-planning which they solve using mixed-integer linear programming (MILP). The work in [15] proposes a two-stage centralized-assignment, decentralized-covering algorithm in which the area of interest is first divided into non-overlapping regions in a centralized fashion, and then assigned to the UAV agents. Each UAV agent then runs a local covering algorithm to search its assigned area. In a similar fashion, the work in [16], proposes a hierarchical cooperative planning framework for finding a target in a 2D environment. In [16] the area of interest is first decomposed and prioritized into subregions and then allocated to the UAV agents. Each UAV agent then uses a local receding horizon controller (RHC) for searching its allocated area. In [17] a centralized market-based multi-robot task allocation algorithm is proposed for assigning regions of interest to mobile agents. The idea of distributed task allocation for multi-agent search operations is illustrated in [18]. Multi-agent search-planning is also investigated in [19], where the authors evaluate various discrete search-planning algorithms. The problem of distributed search-planning is investigated in [20], with the goal of searching and localizing a stationary ground target with a team of UAVs. The authors propose a distributed control framework for maximizing the probability of target detection with a team of UAVs over a finite planning horizon. This method however, requires coordination between the agents and works in a sequential fashion.
In [21] a distributed trajectory planning approach is proposed based on linear model predictive control (MPC), where multiple UAVs are guided with the goal of forming a communication network around multiple targets. More recently, the authors in [22] proposed a decentralized MPC approach for multi-UAV trajectory planning for obstacle avoidance, whereas in [23] a consensus algorithm for distributed cooperative formation trajectory planning is proposed based on artificial potential fields and consensus theory. In [24] a sampling-based chance-constrained 2D trajectory planning approach is proposed for multiple UAV agents with probabilistic geofencing constraints, whereas in [25] a particle-swarm optimization (PSO) approach is proposed for distributed collision-free trajectory planning with a team of UAVs operating in stochastic environments. More recently, the authors in [26] have proposed a deep reinforcement learning based 3D area coverage approach with a swarm of UAV agents, whereas in [27] a multi-robot coverage approach is proposed based on spatial graph neural networks. Moreover, in [28] the authors investigate the problem of full coverage search with multiple agents in cluttered environments, and finally, the work in [29] proposes a distributed sweep coverage algorithm for multi-agent systems in uncertain environments.
In comparison with the related works above, in this work we propose a distributed search-planning approach which does not require the commonly used two-stage procedure of centralized-assignment and decentralized coverage. Instead, in the proposed approach the agents cooperatively decide in a rolling-horizon fashion which regions of interest to visit and how to visit them, generating search-plans online, thus tackling the overall search-planning problem in a distributed fashion. In addition, in contrast with the aforementioned literature, in this work we consider a dynamically varying number of agents which a) exhibit limited sensing and communication capabilities, and b) are prone to random battery failures.
Other related works on this topic include the problem of adaptive and role-based collaboration in multi-agent systems which is investigated in [30, 31]. The authors propose a mathematical model (i.e., E-CARGO) which can be used to describe in a rigorous mathematical way the relationships and interactions within a typical multi-agent system; thus enabling the design and implementation of efficient algorithms for various real-world problems of multi-agent systems [31]. Moreover, the work in [32] presents a factor graph optimization framework to tackle the problems of estimation and optimal control jointly, whereas the work in [33] poses the problem of continuous-time motion-planning as a probabilistic inference problem with Gaussian processes, and then proposes efficient gradient-based optimization planners (GPMP). More recently, the problem of online motion planning is investigated in [34] with joint sampling and trajectory optimization over factor graphs. Factor-graph based motion planning techniques (e.g., [32, 34]) are mostly concerned with the determination of a single obstacle-free trajectory between the starting and goal locations. In contrast, in this work we propose a rolling-horizon distributed search-planning approach which allows a dynamically varying number of autonomous agents (governed by dynamical, sensing, communication, and battery constraints) to decide their control inputs, and generate cooperative trajectories in order to search in 3D multiple objects of interest. Finally, the proposed approach is formulated as a convex MIQP which can be solved optimally using existing optimization solvers, whereas graph-based motion planning methods (e.g., [33]) often rely on iterative gradient-based optimization methods which do not offer any global optimality guarantees.
## III System Model
### _Agent Dynamics_
A team \(\mathcal{M}=\{1,\ldots,|\mathcal{M}|\}\) of autonomous mobile agents (i.e., UAVs) is deployed inside a bounded surveillance region \(\mathcal{A}\). Each agent \(j\in\mathcal{M}\) evolves in 3D space according to the following discrete-time linear dynamical model:
\[x_{t+1}^{j}=\Phi x_{t}^{j}+\Gamma u_{t}^{j}-\Gamma u_{g},\ \forall j\in \mathcal{M} \tag{1}\]
where \(x_{t}^{j}=[\mathbf{x}_{t}^{j},\dot{\mathbf{x}}_{t}^{j}]^{\top}\in\mathbb{R}^{6}\) denotes the agent's state at time \(t\), which consists of position \(\mathbf{x}_{t}^{j}=[p_{x},p_{y},p_{z}]_{t}\in\mathcal{A}\subset\mathbb{R}^{3}\) and velocity \(\dot{\mathbf{x}}_{t}^{j}=[\nu_{x},\nu_{y},\nu_{z}]_{t}\in\mathbb{R}^{3}\) components in 3D cartesian coordinates. The agent can be controlled by applying an amount of force \(u_{t}^{j}\in\mathbb{R}^{3}\) in each dimension, thus \(u_{t}^{j}=[\text{u}_{x},\text{u}_{y},\text{u}_{z}]_{t}^{\top}\) denotes the applied force vector at \(t\), and the constant \(u_{g}=[0,0,m^{j}g]^{\top}\) denotes the force of gravity, where \(g=9.81\,\text{m}/\text{s}^{2}\) is the Earth's gravitational acceleration and \(m^{j}\) is the agent mass. The matrices \(\Phi\) and \(\Gamma\) are given by:
\[\Phi=\begin{bmatrix}\text{I}_{3\times 3}&\Delta T\cdot\text{I}_{3\times 3}\\ 0_{3\times 3}&\phi\cdot\text{I}_{3\times 3}\end{bmatrix},\ \Gamma=\begin{bmatrix} \text{0}_{3\times 3}\\ \gamma\cdot\text{I}_{3\times 3}\end{bmatrix} \tag{2}\]
where \(\Delta T\) is the sampling interval, \(\text{I}_{3\times 3}\) is the identity matrix of dimension \(3\times 3\) and \(\text{0}_{3\times 3}\) is the zero matrix of dimension \(3\times 3\). The parameters \(\phi\) and \(\gamma\) are further given by \(\phi=(1-\eta)\) and \(\gamma=\frac{\Delta T}{m^{j}}\), and the parameter \(\eta\in[0,1]\) is used to model the air resistance.
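For illustration, a short Python sketch (ours, not part of the paper) that assembles \(\Phi\) and \(\Gamma\) from Eqn. (2) and propagates one step of the dynamics in Eqn. (1), using the parameter values reported later in the evaluation (\(\Delta T=1\) s, \(m=3.35\) kg, \(\eta=0.2\)).

```python
import numpy as np

dT, m, eta, g = 1.0, 3.35, 0.2, 9.81          # sampling interval, mass, air resistance, gravity
phi, gamma = (1 - eta), dT / m
I3, Z3 = np.eye(3), np.zeros((3, 3))

# State-transition and input matrices of Eqn. (2)
Phi = np.block([[I3, dT * I3], [Z3, phi * I3]])
Gamma = np.vstack([Z3, gamma * I3])
u_g = np.array([0.0, 0.0, m * g])             # gravity force vector

def step(x, u):
    """One step of the discrete-time dynamics in Eqn. (1)."""
    return Phi @ x + Gamma @ u - Gamma @ u_g

x = np.zeros(6)                               # [p_x, p_y, p_z, v_x, v_y, v_z]
u = np.array([0.0, 0.0, m * g + 5.0])         # thrust slightly above hover
print(step(x, u))                             # the agent accelerates upwards
```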
### _Agent Battery and Communication model_
Each agent \(j\in\mathcal{M}\) exhibits a nominal flight time of \(\mathcal{T}^{j}\) time-steps which depends on the agent's onboard battery lifetime. However, the agent's onboard battery health deteriorates due to irreversible physical and chemical changes that take place with usage and aging, which makes the nominal flight time inaccurate due to imprecise battery state-of-charge cycle calculations. For this reason agent's \(j\) battery can be depleted during a mission at some time \(t\leq\mathcal{T}^{j}\) with probability \(p_{b}^{j}(t)\). When this happens, the agent needs to exit the mission space and return to its ground station (relying on backup power) located at \(\mathcal{G}^{j}\in\mathbb{R}^{3}\) for recharging. We model the battery depletion event of agent's \(j\) battery at time \(t\), as a Bernoulli random variable \(\mathcal{B}^{j}\in\{0,1\}\) with conditional probability distribution given by:
\[Pr(\mathcal{B}^{j}=1|t)=p_{b}^{j}(t)=\frac{1}{1+\alpha_{1}^{j}e^{-\beta_{1}^{ j}(t-\alpha_{1}^{j})}} \tag{3}\]
where the parameters \(\alpha_{1}^{j}\) and \(\beta_{1}^{j}\) control the severity of the battery's aging. Due to the random battery depletion events that occur during the mission, only a subset \(\tilde{\mathcal{M}}_{t}\subseteq\mathcal{M}\) of agents actively participate in the search-planning task at any given time instance \(t\). Moreover, we assume that the recharging time \(t_{R}\) is distributed uniformly in the interval \([\mathcal{T}_{R}^{\text{start}},\mathcal{T}_{R}^{\text{stop}}]\), i.e., \(t_{R}\sim\mathcal{U}(\mathcal{T}_{R}^{\text{start}},\mathcal{T}_{R}^{\text{ stop}})\) for all agents. Thus, after \(t_{R}\) time-steps of recharging, the agents can enter again the mission space, and continue their mission. To achieve some form of cooperation, the set of agents that participate in the mission \(\tilde{\mathcal{M}}_{t}\subseteq\mathcal{M}\), exchange information whenever they are in communication range. An agent \(j\in\tilde{\mathcal{M}}_{t}\) can communicate and receive information from the group of neighboring agents \(\mathcal{N}_{t}^{j}=\{i\neq j\in\tilde{\mathcal{M}}_{t}:\left\|Hx_{t}^{i}-Hx_{ t}^{j}\right\|_{2}\leq C_{R}\}\) where \(H\) is a matrix which extracts the position coordinates from the agent's state vector i.e., \(Hx_{t}=\mathbf{x}_{t}=[p_{x},p_{y},p_{z}]_{t}^{\top}\) and \(C_{R}\) is the communication range which we assume in this work to be the same for all agents.
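The sketch below illustrates how the battery and communication models could be simulated; the sigmoid parameters \(\alpha_{1},\beta_{1}\) and the communication range value are illustrative placeholders rather than the paper's tuned values.

```python
import numpy as np

def battery_depletion_prob(t, alpha1=200.0, beta1=0.08):
    """Probability p_b(t) that the battery is depleted at time-step t (Eqn. (3)).
    alpha1 and beta1 are illustrative values, not the paper's parameters."""
    return 1.0 / (1.0 + alpha1 * np.exp(-beta1 * (t - alpha1)))

def neighbors(j, states, C_R=150.0):
    """Indices of agents within communication range C_R of agent j.
    states: (|M|, 6) array of agent states; positions are the first 3 entries (H x)."""
    pos = states[:, :3]
    dist = np.linalg.norm(pos - pos[j], axis=1)
    return [i for i in range(len(states)) if i != j and dist[i] <= C_R]

rng = np.random.default_rng(1)
states = rng.uniform(0, 300, size=(5, 6))
depleted = rng.random() < battery_depletion_prob(t=120)   # Bernoulli draw for B^j
print(neighbors(0, states), depleted)
```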
### _Agent Sensing Model_
Each agent is equipped with a camera system which is used for acquiring snapshots of the objects of interest. Assuming that the camera field-of-view (FoV) angles in the horizontal and vertical axis are equal, the projection of the camera FoV on a planar surface is given by a square with side length \(r\) as \(r(d)=2d\tan\left(\frac{\varphi}{2}\right)\), where \(d\) denotes the distance in meters between the location of the agent and the surface of the object that needs to be searched, and \(\varphi\) is the angle opening of the FoV according to the camera specifications. Before taking a snapshot of the object of interest the agent first aligns its camera with respect to the surface in such a way that the optical axis of the camera (i.e., the viewing direction) is parallel to the normal vector of the face. An object of interest is searched when its total surface area is included in the agent's acquired images. The acquired images are then processed by a computer vision module to detect people. The quality of the acquired images depends on the distance between the agent and the object of interest. Therefore, the probability of detecting people \(p_{d}(d)\) in the acquired images depends on the distance \(d\) between the agent and the object of interest according to:
\[p_{d}(d)=\left\{\begin{array}{ll}0,&\text{if}\ \ d<d_{\text{min}}\\ \max\left(0,\ 1-\frac{d-d_{\text{min}}}{d_{\text{max}}-d_{\text{min}}}\right),&\text{if}\ \ d\geq d_{\text{min}}\end{array}\right. \tag{4}\]
where \(d_{\text{min}}\) and \(d_{\text{max}}\) are the minimum and maximum camera working distance for detecting people in the acquired frames. Although in this work we are utilizing a simplified detection probability model to demonstrate the proposed search planning framework, more realistic sensor detection models [35, 36] can be incorporated without requiring any changes in the problem formulation.
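A small sketch of the sensing model, combining the FoV footprint \(r(d)\) with the detection-probability profile of Eqn. (4) as reconstructed above; the printed values are close to those quoted later in the evaluation section (\(p_d\approx 1,\ 0.88,\ 0.53\) and footprints of roughly 20, 30 and 60 m).

```python
import math

def fov_side(d, phi_deg=60.0):
    """Side length r(d) of the square FoV footprint at distance d (Sec. III-C)."""
    return 2 * d * math.tan(math.radians(phi_deg) / 2)

def detection_prob(d, d_min=17.0, d_max=90.0):
    """Detection probability p_d(d) of Eqn. (4), as reconstructed above."""
    if d < d_min:
        return 0.0
    return max(0.0, 1.0 - (d - d_min) / (d_max - d_min))

for d in (17, 26, 52):
    print(f"d={d:2d} m  p_d={detection_prob(d):.2f}  FoV side={fov_side(d):.0f} m")
# prints detection probabilities of about 1.00, 0.88 and 0.52 and FoV sides of
# roughly 20, 30 and 60 m, close to the values quoted in the evaluation section
```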
### _Objects of Interest and Obstacles Model_
The objects of interest that need to be searched and the obstacles in the environment that need to be avoided by the agents are represented in this work by rectangular cuboids of various sizes (referred to hereafter as cuboids). A rectangular cuboid is a convex hexahedron in three dimensional space which exhibits six rectangular faces (i.e., where each pair of adjacent faces meets in a right angle).
A point \(x\in\mathbb{R}^{3}\) belongs to the cuboid \(\mathcal{C}\) (i.e., \(x\in\mathcal{C}\)) if the linear system of inequalities \(Ax\leq B\) is satisfied, where \(A\) is a \(6\times 3\) matrix, with each row corresponding to the outward normal vector \(\alpha_{i}=[\alpha_{i,x},\alpha_{i,y},\alpha_{i,z}]\) of the plane which contains the \(i_{\text{th}}\) face of the cuboid and \(B=[b_{1},\dots,b_{6}]^{\top}\) is a \(6\times 1\) column vector, where each element \(b_{i}\) is obtained by the dot product between \(\alpha_{i}\) and a known point on plane \(i\). For the rest of the paper, we will use the matrices \(A\) and \(B\) to denote the system of linear inequalities \(Ax\leq B\) associated with the rectangular cuboid \(\mathcal{C}\).
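As an illustration of the cuboid representation, the following sketch builds \(A\) and \(B\) for an axis-aligned cuboid (a simplifying assumption; the model allows arbitrarily oriented cuboids) and tests the membership condition \(Ax\leq B\).

```python
import numpy as np

def axis_aligned_cuboid(lo, hi):
    """Return (A, B) such that A x <= B iff x lies inside the axis-aligned
    cuboid [lo, hi] (a special case of the general cuboid model)."""
    A = np.vstack([np.eye(3), -np.eye(3)])   # outward normals of the 6 faces
    B = np.concatenate([hi, -lo])            # a_i . (known point on face i)
    return A, B

def inside(x, A, B):
    """True iff the system of linear inequalities A x <= B is satisfied."""
    return bool(np.all(A @ x <= B))

A, B = axis_aligned_cuboid(lo=np.array([0., 0., 0.]), hi=np.array([20., 20., 20.]))
print(inside(np.array([10., 5., 12.]), A, B))   # True
print(inside(np.array([25., 5., 12.]), A, B))   # False
```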
## IV Mission Pre-planning
The amount of search-effort which is required in order to successfully search an object of interest, during an emergency response mission, is generally determined during the mission assessment phase [1] which is conducted at the mission control, prior to the search mission. During the assessment phase, the rescue team assesses the situation at hand (e.g., potential hazards, missing people, importance of the object, etc.) and specifies the amount of search-effort required for conducting an efficient and effective search. In this work, the search-effort is captured by the detection probability (i.e., Eqn. (4)) issued at the central station, before the mission begins. A high detection probability allows for detailed and accurate
snapshots of the object of interest. However, the size of the FoV decreases as the amount of detail captured in the acquired images increases, and as such more snapshots are needed to cover the whole surface of the object with a high detection probability.
In order to allow the UAV agent to search the total surface area of the object of interest with the specified detection probability, the area around the object is decomposed into multiple cuboids as illustrated in Fig. 1. In essence, once the distance \(d\) between the agent and the object of interest is determined according to the specified detection probability \(p_{d}(d)\), the agent's FoV footprint \(r\times r\) is computed according to the agent's sensing model, as illustrated by step 2 in Fig. 1. Subsequently, each face of the object of interest is decomposed into cells of size \(r\times r\), forming a 3D grid, as illustrated by step 3 in the figure. For each cell, an artificial cuboid is generated and placed at distance \(d\) from the center of the cell as depicted in Fig. 1. Then, by guiding the UAV agent through all the generated cuboids, we make sure that the total surface area of each face is searched according to the specified detection probability. This is because, once the agent resides within a particular cuboid, the projected camera FoV on the face's surface captures the area of the corresponding cell as illustrated in Fig. 1. The area decomposition process discussed above, allows us to transform the 3D search problem into an optimal control problem i.e., finding the UAV control inputs, such that the agent is guided through all the generated cuboids in an optimal way.
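A simplified sketch of the decomposition in Fig. 1 for a single axis-aligned face: the face is divided into \(r\times r\) cells and one artificial cuboid centre is placed at distance \(d\) along the face normal per cell. The geometry handling here is deliberately minimal and is our own simplification of the pre-planning step.

```python
import numpy as np

def face_decomposition(width, height, r, d, origin, normal):
    """Decompose a rectangular face (width x height, lower-left corner at
    `origin`, lying in the x-z plane) into r x r cells and return the centres
    of the artificial cuboids placed at distance d along the outward `normal`."""
    nx = int(np.ceil(width / r))
    nz = int(np.ceil(height / r))
    centres = []
    for ix in range(nx):
        for iz in range(nz):
            cell_centre = origin + np.array([(ix + 0.5) * r, 0.0, (iz + 0.5) * r])
            centres.append(cell_centre + d * normal)
    return np.array(centres)

# a 60 m x 40 m building face, searched with a 20 m x 20 m footprint at d = 17 m
cuboid_centres = face_decomposition(60.0, 40.0, r=20.0, d=17.0,
                                    origin=np.zeros(3), normal=np.array([0., 1., 0.]))
print(cuboid_centres.shape)   # (6, 3): a 3 x 2 grid of artificial cuboid centres
```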
## V Multi-Agent 3D Search Planning
In this section we develop a rolling-horizon distributed model predictive control (DMPC) algorithm [4, 37] for the cooperative guidance of a team of UAV agents with the purpose of searching in 3D multiple objects of interest while avoiding collisions with the obstacles in the environment. Our DMPC formulation, does not require any explicit coordination between the UAV agents and thus allows the agents to operate independently and in parallel with each other. In the proposed approach the agents can enter and exit the mission space according to the condition of their batteries and opportunistically cooperate with each other by exchanging information whenever they are in communication range, optimizing their future search plans. In particular, the agents in communication range exchange a) state information i.e., their current location, b) their search-maps, c) their future search plans and finally d) their flight time. Based on the exchanged information the agents seek to optimize their future plans in order to minimize the duplication of work and improve the search efficiency.
### _Centralized Control_
Let us assume that the mission control has issued the required detection probability and that the mission pre-planning step discussed in Sec. IV is completed, i.e., the faces of the object of interest that need to be searched are covered with a total of \(N\) artificial cuboids \(\mathcal{C}_{n},n\in[1,..,N]\).
We assume that a centralized station is in place, where all the necessary information is being transmitted, and which in turn decides the control actions for each agent by solving the 3D search planning problem jointly among all agents. The centralized formulation of the multi-agent 3D search planning problem is shown in problem (P1), where we seek to obtain the optimal joint control inputs for all agents i.e., \(u^{j}_{t|t},\dots,u^{j}_{t+T-1|t},\forall j\in\mathcal{M}\) over a rolling planning horizon of length \(T\) time-steps, by solving an open-loop optimal control problem, with the goal of guiding the agents to visit all cuboids while ensuring that the work is not duplicated i.e., a cuboid is not searched by more than one agent. Once the sequence of joint control inputs is found, the first control inputs \(u^{j}_{t|t},\forall j\) in the sequence are executed by the agents and the procedure described above is repeated for the subsequent time-steps.
**Problem (P1)** : Centralized MPC
\[\min_{\mathbf{U_{t}},\mathbf{Y}}\ \mathcal{J}_{\text{centralized}}(\mathbf{X_{t}},\mathbf{U_{t}},\mathbf{Y}) \tag{5a}\]
**subject to** \(j\in\{1,..,|\mathcal{M}|\},\ \tau\in[0,\dots,T-1]\)**:**
\[x^{j}_{t+\tau+1|t}=\Phi x^{j}_{t+\tau|t}+\Gamma u^{j}_{t+\tau|t}-\Gamma u_{g}\quad\forall\tau,j \tag{5b}\]
\[x^{j}_{t|t}=x^{j}_{t|t-1}\quad\forall j \tag{5c}\]
\[Hx^{j}_{t+\tau+1|t}\notin\mathcal{C}_{\psi}\quad\forall\tau,\psi,j \tag{5d}\]
\[x^{j}_{t+\tau+1|t}\in\mathcal{X}\quad\forall\tau,j \tag{5e}\]
\[|u^{j}_{t+\tau+1|t}|\leq u_{\text{max}}\quad\forall\tau,j \tag{5f}\]
\[A_{n,l}Hx^{j}_{t+\tau+1|t}+(M-B_{n,l})b^{j}_{\tau,n,l}\leq M\quad\forall\tau,n,l \tag{5g}\]
\[L\tilde{b}^{j}_{\tau,n}-\sum_{l=1}^{L}b^{j}_{\tau,n,l}\leq 0\quad\forall\tau,n \tag{5h}\]
\[\hat{b}^{j}_{n}\leq\sum_{\tau}\tilde{b}^{j}_{\tau,n}\quad\forall n \tag{5i}\]
\[y^{j}_{n}\leq\hat{b}^{j}_{n}+V^{j}_{t}(n)+\sum_{i\neq j\in\mathcal{M}}\left[V^{i}_{t}(n)+P^{i}_{t}(n)\right]\quad\forall n,j \tag{5j}\]
\[y^{j}_{n},b^{j}_{\tau,n,l},\tilde{b}^{j}_{\tau,n},\hat{b}^{j}_{n},V^{j}_{t}(n),P^{j}_{t}(n)\in\{0,1\}\quad\forall n,j \tag{5k}\]
#### V-A1 Objective Function
In problem (P1) we are interested in optimizing a system-wide objective function i.e., Eqn. (5a) over the planning horizon of length \(T\) time-steps for the joint controls over all \(j\in\mathcal{M}\) agents. In (P1), the bold capital letters indicate quantities over all agents thus \(\mathbf{X_{t}}=\{X^{1}_{t},..,X^{|\mathcal{M}|}_{t}\}\) is the combined state for all \(|\mathcal{M}|\) agents with \(X^{j}_{t}=\{x^{j}_{t+\tau+1|t}\},\forall\tau\in[0,..,T-1]\), where the notation \(x_{t+\tau+1|t}\) is used here to denote the future (i.e., planned) agent state at time \(t+\tau+1\) based on the current time-step \(t\). Similarly, \(\mathbf{U_{t}}=\{U^{1}_{t},..,U^{|\mathcal{M}|}_{t}\}\) denotes the agent joint mobility controls inputs with \(U^{j}_{t}=\{u^{j}_{t+\tau|t}\},\forall\tau\) and finally \(\mathbf{Y}=\{Y^{1},..,Y^{|\mathcal{M}|}\}\)
Fig. 1: The figure illustrates the mission pre-planning step.
are binary variables indicating whether a specific artificial cuboid \(\mathcal{C}_{n}\) has been visited or will be visited in the future by some agent \(j\), with \(Y^{j}=\{y_{1}^{j},..,y_{N}^{j}\}\).
In essence our goal is to find the agent joint control inputs \(u_{t+\tau|t}^{j},\forall\tau,j\) which will maximize the number of cuboids that will be visited during the planning horizon. The objective function can thus be defined as:
\[\min_{\mathbf{U_{t}},\mathbf{Y}}\ \mathcal{J}_{\text{centralized}}(\mathbf{X_{t}},\mathbf{U_{t}},\mathbf{Y})=\min_{\mathbf{U_{t}},\mathbf{Y}}\ w_{1}\sum_{j=1}^{|\mathcal{M}|}\|Hx_{t+\tau^{*}+1|t}^{j}-x_{j}^{*}\|_{2}^{2}+w_{2}\sum_{j=1}^{|\mathcal{M}|}\sum_{\tau=1}^{T-1}\|u_{t+\tau|t}^{j}-u_{t+\tau-1|t}^{j}\|_{2}^{2}-w_{3}\sum_{j=1}^{|\mathcal{M}|}\sum_{n=1}^{N}y_{n}^{j} \tag{6}\]
where \(w_{i}>0\) are tuning weights, \(\tau^{*}\in[0,..,T-1]\), and \(x_{j}^{*}\) is the centroid of the nearest unvisited cuboid to agent's \(j\) current location. This is computed as \(x_{j}^{*}=c(\mathcal{C}_{n_{j}^{*}})\) where \(n_{j}^{*}\) is given by: \(n_{j}^{*}=\underset{n\in\tilde{N}_{t}^{j}}{\arg\min}\|Hx_{t+\tau^{*}+1|t}^{j}-c(\mathcal{C}_{n})\|_{2}\), with \(\tilde{N}_{t}^{j}\) denoting agent's \(j\) set of all unvisited cuboids and the operator \(c(\mathcal{C}_{n})\) returns the centroid of cuboid \(\mathcal{C}_{n}\). Therefore, the first term in Eqn. (6) guides all agents towards their nearest unvisited cuboids. The second term minimizes the deviations between consecutive control inputs over all agents in order to produce smooth trajectories which the UAV agents can follow and finally, the last term maximizes the number of cuboids to be visited by the team of agents over the planning horizon, indicated by the binary variable \(y_{n}^{j}\) which is defined as: \(y_{n}^{j}=1\implies\exists\tau\in[0,..,T-1]:Hx_{t+\tau+1|t}^{j}\in\mathcal{C}_{n}\)
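For intuition, the sketch below (our own, with illustrative weight values) evaluates one agent's contribution to the objective in Eqn. (6): the guidance term towards the nearest unvisited cuboid, the control-smoothness term, and the cuboid-coverage term.

```python
import numpy as np

def agent_objective_terms(X_j, U_j, y_j, x_star, tau_star, w=(1.0, 0.1, 10.0)):
    """Per-agent contribution to the objective of Eqn. (6).
    X_j: (T, 6) planned states, U_j: (T, 3) planned control inputs,
    y_j: (N,) binary cuboid-visit indicators, x_star: (3,) nearest unvisited
    cuboid centroid, tau_star: horizon step used in the guidance term.
    The weights are illustrative, not the paper's tuned values."""
    w1, w2, w3 = w
    guidance   = w1 * np.sum((X_j[tau_star, :3] - x_star) ** 2)   # pull towards nearest cuboid
    smoothness = w2 * np.sum(np.diff(U_j, axis=0) ** 2)           # consecutive-input deviations
    coverage   = -w3 * np.sum(y_j)                                # reward visited cuboids
    return guidance + smoothness + coverage

T, N = 8, 12
rng = np.random.default_rng(0)
print(agent_objective_terms(rng.normal(size=(T, 6)), rng.normal(size=(T, 3)),
                            rng.integers(0, 2, size=N), np.zeros(3), tau_star=T - 1))
```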
#### V-A2 Constraints
Eqn. (5b) and Eqn. (5c) are due to the agent dynamical model assuming a known initial state \(x_{t|t}^{j}\). Then, Eqn. (5d) defines the obstacle avoidance constraints of agent \(j\) with all obstacles \(\mathcal{C}_{\psi},\psi\in[1,..,\Psi]\) in the environment where \(\Psi\) denotes the total number of obstacles present and \(\mathcal{C}_{\psi},\psi\in[1,..,\Psi]\) denotes the cuboid representation of obstacle \(\psi\). We can now say that agent \(j\) avoids a collision with an obstacle \(\psi\in\Psi\) when: \(Hx_{t+\tau+1|t}^{j}\notin\mathcal{C}_{\psi},\ \forall\psi\in\Psi,\forall\tau\in\{0, \ldots,T-1\}\), which can be implemented with the following constraints:
\[A_{\psi,l}(Hx_{t+\tau+1|t}^{j})>B_{\psi,l}-Mz_{\tau,\psi,l}^{j}\qquad\forall\tau,\psi,l \tag{7a}\]
\[\sum_{l=1}^{L}z_{\tau,\psi,l}^{j}\leq L-1\qquad\forall\tau,\psi \tag{7b}\]
In Eqn. (7a), \(A_{\psi,l}\) and \(B_{\psi,l}\) define the coefficients of the equation of the plane which contains the \(l_{\text{th}}\) face of the obstacle. When the system of linear inequalities \(A_{\psi,l}(Hx_{t+\tau+1|t}^{j})<B_{\psi,l},\forall l\in[1,..,L]\) is true then the agent is contained within the obstacle, which signifies that a collision has occurred. Thus a collision is avoided when \(\exists l\in\{1,\ldots,L\}:A_{\psi,l}Hx_{t+\tau+1|t}^{j}>B_{\psi,l}\). This is achieved a) with the binary variable \(z_{\tau,\psi,l}^{j}\) which counts the number of times the inequality \(A_{\psi,l}Hx_{t+\tau+1|t}^{j}>B_{\psi,l}\) is violated for agent \(j\), regarding the face \(l\) of obstacle \(\psi\) and b) with the constraint in Eqn. (7b) which makes sure that the number of violations is at most \(L-1\), where \(L=6\) is the total number of faces that compose the obstacle. In Eqn. (7a), \(M\) denotes a large positive constant, also known as big-\(M\)[38], which is selected in such a way so that the constraint shown in Eqn. (7a) is satisfied at all times when \(z_{\tau,\psi,l}^{j}=1\).
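The logic encoded by Eqns. (7a)-(7b) can be checked a posteriori for a candidate trajectory point, as in the following sketch (an axis-aligned obstacle is assumed for brevity); inside the MIQP the same conditions are enforced through the binary variables \(z_{\tau,\psi,l}^{j}\) and the big-\(M\) constant.

```python
import numpy as np

def collision_free(x, A, B, L=6):
    """Mirror the logic of Eqns. (7a)-(7b) for a single point x and one obstacle
    (A, B): z_l = 1 whenever the avoidance inequality A_l x > B_l is violated,
    and the point is collision-free iff sum(z) <= L - 1, i.e. it lies outside
    at least one face of the cuboid."""
    z = (A @ x <= B).astype(int)       # violated avoidance inequalities
    return int(z.sum()) <= L - 1

A = np.vstack([np.eye(3), -np.eye(3)])              # axis-aligned obstacle [0, 20]^3
B = np.concatenate([np.full(3, 20.0), np.zeros(3)])
print(collision_free(np.array([10., 10., 10.]), A, B))   # False: inside the obstacle
print(collision_free(np.array([30., 10., 10.]), A, B))   # True: outside
```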
Eqn. (5e) constrains the agent's state within the bounded set \(\mathcal{X}\), and the constraint in Eqn. (5f) limits the values of the control input within the range \([-u_{\text{max}},+u_{\text{max}}]\) as shown. The constraints in Eqn. (5g)-(5i) determine whether agent \(j\) resides inside cuboid \(\mathcal{C}_{n}\) at time \(\tau\) (relative to the horizon) via the binary variables \(b_{\tau,n,l}^{j}\), \(\tilde{b}_{\tau,n}^{j}\) and \(\hat{b}_{n}^{j}\). Thus, the constraints in Eqn. (5g)-(5i) allow the agent to search in 3D an object of interest by passing through all artificial cuboids that have been generated for this object. The \(n_{\text{th}}\) cuboid \(\mathcal{C}_{n}\) is visited by the agent when the system of linear inequalities \(A_{n,l}Hx_{t+\tau+1|t}^{j}<B_{n,l},\forall l\) holds for every face \(l\). Thus the binary variable \(b_{\tau,n,l}^{j}\) indicates whether this inequality is true at time-step \(\tau\), cuboid \(n\) and face \(l\). When this happens \(b_{\tau,n,l}^{j}\) becomes \(1\), otherwise \(b_{\tau,n,l}^{j}=0\) and the constraint in Eqn. (5g) is valid with the use of a large positive constant \(M\) as shown. Then the constraint in Eqn. (5h) uses the binary variable \(\tilde{b}_{\tau,n}^{j}\) to count the number of times \(b_{\tau,n,l}^{j}\) takes a value of one, and becomes active when \(\sum_{l=1}^{L}b_{\tau,n,l}^{j}=6\), which signifies that agent \(j\) resides inside the \(n_{\text{th}}\) cuboid at time-step \(\tau\). Finally, the constraint in Eqn. (5i), with the use of the binary variable \(\hat{b}_{n}^{j}\), makes sure that the agent has no incentive to visit the same cuboid multiple times during the current planning horizon.
Ideally, in this centralized multi-agent formulation we would like to have the following properties: a) a cuboid \(\mathcal{C}_{n}\) that has been visited by some agent \(i\) in the past, is not visited again by another agent \(j\) in the future and, b) if agent \(i\) plans to visit cuboid \(\mathcal{C}_{n}\) in the future, then agent \(j\neq i\) refrains from including cuboid \(\mathcal{C}_{n}\) in its future plans, thus avoiding duplication of work. Properties a) and b) are accomplished by the constraint in Eqn. (5j). More specifically, each agent \(j\) stores all visited cuboids in its local database \(V_{t}^{j}(n)\in\{0,1\}\), referred to as search-map hereafter, and uses this search-map in order to avoid visiting cuboids that it has visited in the past, thus avoiding the duplication of work. Therefore, \(V_{t}^{j}(n)=1\) only when the cuboid \(\mathcal{C}_{n}\) has been visited by the agent at a time prior to \(t\). In all other cases \(V_{t}^{j}(n)=0\). The binary variable \(\hat{b}_{n}^{j}\) indicates whether cuboid \(n\) has been planned to be visited by agent \(j\) during the next planning horizon i.e., \(\hat{b}_{n}^{j}=1,\ \text{iff}\ Hx_{t+\tau+1|t}^{j}\in\mathcal{C}_{n},\ \tau\in[0,\ldots,T-1]\). Therefore, the inequality \(y_{n}^{j}\leq\hat{b}_{n}^{j}+V_{t}^{j}(n)\) provides no incentive for the agent to visit cuboids that have been visited in the past. In Eqn. (5j), the future plans of all other agents \(i\neq j\in\mathcal{M}\) are denoted as \(P_{t}^{i}(n)\) and defined as: \(P_{t}^{i}(n)=1\implies\exists\tau\in[0,\ldots,T-1]:Hx_{t+\tau+1|t}^{i}\in\mathcal{C}_{n}\), otherwise \(P_{t}^{i}(n)=0\).
As shown in (P1) by Eqn. (5j), the past and future plans of all other agents \(i\neq j\in\mathcal{M}\), denoted by \(V_{t}^{i}(n)\) and \(P_{t}^{i}(n)\) respectively, are taken into account when deriving agent's \(j\) plan by maximizing the binary variable \(y_{n}^{j}\). There are four possible ways for activating \(y_{n}^{j}\): a) the cuboid \(\mathcal{C}_{n}\) has been planned to be visited by agent \(j\) during the next planning horizon, which is indicated by \(\hat{b}_{n}^{j}\), b) the cuboid has already been searched by agent \(j\) as indicated by \(V_{t}^{j}(n)\), c) the cuboid \(\mathcal{C}_{n}\) is included in the future plans of some agent \(i\neq j\) indicated by \(P_{t}^{i}(n)\) and finally, d) another agent \(i\neq j\) has already searched cuboid \(\mathcal{C}_{n}\) in the past as indicated by \(V_{t}^{i}(n)\). Duplication of work occurs when more than one of \(\{\hat{b}_{n}^{j},V_{t}^{j}(n),V_{t}^{i}(n),P_{t}^{i}(n)\},\forall i\neq j\in\mathcal{M}\) becomes active for a specific cuboid \(\mathcal{C}_{n}\). When this
occurs, the less-than or equal sign effectively discourages such scenarios since the value of \(y_{n}^{j}\) cannot exceed the maximum value of 1, and all the involved variables are binary. The centralized multi-agent formulation presented in problem (P1) with the availability of all the necessary information, optimally solves the joint multi-agent search planning problem, while avoiding the duplication of work by jointly considering the past and future plans of all agents.
### _Distributed Control_
A closer look at the centralized problem (i.e., P1), reveals the existence of coupled constraints as shown in Eqn. (5). Consequently, the centralized formulation requires at each time-step information (i.e., search maps and future plans) from all agents in order to produce the joint search plans by optimizing the objective shown in Eqn. (6). This is possible in the centralized version of the problem since all the information is available at the time of planning and moreover the problem is solved jointly among all agents on a central system. This ensures that the agents cooperate to minimize duplication of work. Although the centralized version of the problem achieves optimality, it has several drawbacks: a) the computational complexity increases with the number of agents, b) it relies on the availability of information from all agents at every time-step and finally c) it does not account for failures of the central station where the planning process takes place.
The aforementioned drawbacks of the centralized system are alleviated in this work with the design of a distributed system [4], however this comes at the cost of optimality. More specifically, in the proposed distributed control approach we drop the coupled constraints of Eqn. (5), and the behavior of the centralized system is approximated as follows: At each time-step \(t\), agent \(j\) will compute a local search plan without considering the intentions of other agents, unless \(\mathcal{N}_{t}^{j}\neq\emptyset\) i.e., agent \(j\) receives the search plans of other agents \(i\in\mathcal{N}_{t}^{j}\) inside its limited communication range. Subsequently, cooperative search plans are generated in the scenario where two or more agents cooperate via communication, and exchange their future intentions. For this reason the constraints of Eqn. (5) are only approximated in the proposed distributed system as it is explained next in more detail. Nevertheless, the proposed distributed system offers an appealing tradeoff between optimality and computational complexity, as shown later in the evaluation.
Finally, the proposed distributed search planning framework is based on the following required key properties: First, the agents operate autonomously and in parallel with each other without the need for deliberative coordination. The term coordination in this work refers to the ability of each agent to decide its own control inputs independently from other agents, and without relying on any specific execution order amongst the cooperative agents (e.g., sequential decision making/control procedures [20, 39]). In this work we would like to make sure that the mission will not be interrupted and that it will be completed in the event that one or more agents need to exit the mission space. Constant communication between the agents should not be a requirement; rather, the agents can opportunistically communicate and exchange information only when they are within communication range. Finally, the agents should cooperate and work towards improving the system-wide (i.e., collective) objective (i.e., searching all the objects of interest) while at the same time trying to minimize the duplication of work.
In the distributed formulation of the problem we consider a team \(\mathcal{M}\) of agents where each agent \(j\in\mathcal{M}\) evolves inside a bounded surveillance area \(\mathcal{A}\), according to the dynamics in Eqn. (1). Each agent \(j\) exhibits a nominal flight time \(\mathcal{T}^{j}\) and with probability \(p_{b}^{j}(t)\) the agent's battery is depleted at time \(t\leq\mathcal{T}^{j}\). When a battery depletion event occurs i.e., \(\mathcal{B}^{j}=1\) the agent must exit the mission and return to its base station \(\mathcal{G}^{j}\in\mathbb{R}^{3}\) for recharging as discussed in Sec. III-B. The subset of active agents (not recharging agents) at time \(t\) is thus denoted as \(\tilde{\mathcal{M}}_{t}\subseteq\mathcal{M}\). Each agent exhibits a communication range \(C_{R}\) for communicating and exchanging information with nearby agents. More specifically, agent \(j\) receives the following information from all the neighboring agents \(i\in\mathcal{N}_{t}^{j}\) at time \(t\): a) agent's \(i\) current state \(x_{t|t}^{i}\), b) agent's \(i\) search-map \(V_{t}^{i}(n)\), c) agent's \(i\) flight time \(t_{Fl}^{i}\) and finally d) its future plan \(\tilde{P}_{t}^{i}(n)\). An overview of the proposed cooperative search planning framework is illustrated in Fig. 2. We should point out here, that the future plan \(\hat{P}_{t}^{i}(n)\) of some agent \(i\) which is received by agent \(j\) at time \(t\) is not the most recent plan of agent \(i\), since at the time of the communication agent \(i\) has not yet generated its future plan for time \(t\). Because the agents are synchronized, operate in parallel, and without coordination, at the time of communication the agents are not receiving the latest plans of their peers i.e., \(\hat{P}_{t}^{i}(n)=P_{t-1}^{i}(n)\). We are going to refer to \(\hat{P}_{t}^{i}(n)\) as the hypothetical plan of agent \(i\) at time \(t\) from the point of view of agent \(j\). Furthermore, \(\hat{P}_{t}^{i}(n)\in\{0,1\},\forall n\) and is defined as \(\hat{P}_{t}^{i}(n)=1,\text{if }\exists\tau\in[0,..,T-1]:Hx_{t+\tau|t-1}^{i} \in\mathcal{C}_{n}\). Hereafter, we will use the notation \(\tau\hat{P}_{t}^{i}(n)\) to refer to the relative time \(\tau\) in the planning horizon for which cuboid \(n\) is planned to be visited i.e., \(\hat{P}_{t}^{i}(n)=1\) by agent \(i\). We can now describe the proposed distributed search planning formulation.
#### V-B1 Objective function
The objective function of the centralized problem in Eqn. (6) can be decomposed into several local objectives per agent as: \(\sum_{j=1}^{|\tilde{\mathcal{M}}_{t}|}\mathcal{J}_{\text{local}}^{j}(X_{t}^{j},U_{t}^{j},Y^{j})\), so that each active agent \(j\) can independently optimize its local objective function \(\mathcal{J}_{\text{local}}^{j}(X_{t}^{j},U_{t}^{j},Y^{j})\) while at the same time the collective effort of the agents optimizes the system-wide objective similarly to the centralized problem as discussed in Sec. V-A. The objective function of each agent becomes \(\mathcal{J}_{\text{local}}^{j}(X_{t}^{j},U_{t}^{j},Y^{j})\):
\[\mathcal{B}_{t}^{j}\mathcal{J}_{\text{recharge}}^{j}(X_{t}^{j})+(1-\mathcal{B}_{t}^{j})\mathcal{J}_{\text{search}}^{j}(X_{t}^{j},U_{t}^{j},Y^{j}) \tag{8}\]
where \(\mathcal{B}_{t}^{j}\in\{0,1\}\) indicates a battery depletion event which occurs with probability \(p_{b}^{j}(t)\) and at which point the agent must return to its base station \(\mathcal{G}^{j}\in\mathbb{R}^{3}\) for recharging by minimizing \(\mathcal{J}_{\text{recharge}}^{j}(X_{t}^{j})=\|Hx_{t+\tau+1|t}^{j}-\mathcal{G}^ {j}\|_{2}^{2}\). On the other
Fig. 2: Overview of the proposed search planning framework.
hand when \(\mathcal{B}^{j}_{t}=0\), the agent \(j\) optimizes its search planning objective \(\mathcal{J}^{j}_{\text{search}}(X^{j}_{t},U^{j}_{t},Y^{j})\) which is given by:
\[\mathcal{J}^{j}_{\text{search}}(X^{j}_{t},U^{j}_{t},Y^{j})=w_{1}\|Hx^{j}_{t+\tau^{*}+1|t}-x^{*}_{j}\|^{2}_{2}+w_{2}\sum_{\tau=1}^{T-1}\|u^{j}_{t+\tau|t}-u^{j}_{t+\tau-1|t}\|^{2}_{2}-w_{3}\sum_{n=1}^{N}r^{j}(n)y^{j}_{n} \tag{9}\]
where the first term guides the agent towards the unvisited cuboids, the second term minimizes the deviations between consecutive control inputs and finally the third term aims to maximize the number of cuboids that will be visited in the future as explained in Sec. V-A. Specifically, \(x^{*}_{j}=c(\mathcal{C}_{n^{*}_{j}})\) determines the centroid of the nearest cuboid with respect to agent \(j\), with \(n^{*}_{j}\) given by:
\[n^{*}_{j}=\left\{\begin{array}{ll}\underset{n\in\hat{N}^{j}_{t}}{\arg\min}\ \|Hx^{j}_{t+\tau^{*}+1|t}-c(\mathcal{C}_{n})\|_{2},&\text{if }\mathcal{N}^{j}_{t}=\emptyset\\ \underset{A^{j}}{\arg\min}\ \sum_{i\in\{j\}\cup\mathcal{N}^{j}_{t}}\ \sum_{n\in\hat{N}^{j}_{t}}\Omega^{j}_{i,n}A^{j}_{i,n},&\text{otherwise}\end{array}\right. \tag{10}\]
In particular, when agent \(j\) is not in communication range with other agents i.e., \(\mathcal{N}^{j}_{t}=\emptyset\), then agent \(j\) moves greedily towards its nearest cuboid as shown above. However, when \(\mathcal{N}^{j}_{t}\neq\emptyset\), agent \(j\) receives the location of all other agents i.e., \(Hx^{i}_{t|t},i\in\mathcal{N}^{j}_{t}\), and hypothesizes what their next target (i.e., cuboid to be visited) will be. In other words agent \(j\) adjusts its next target according to the hypothesized actions of the agents in its neighborhood. To do so, agent \(j\) solves a local assignment problem where the objective is to find the cuboids that are likely to be visited next by the agents in the set \(\{j\}\cup\mathcal{N}^{j}_{t}\). For this reason the cost matrix \(\Omega^{j}_{i,n}\) is constructed locally at agent \(j\), and populated with the distances between agent's \(i\) location \(Hx^{i}_{t|t}\) and every unvisited cuboid \(n\in\hat{N}^{j}_{t}\) (\(\hat{N}^{j}_{t}\) denotes the unvisited cuboids in agent's \(j\) search-map). Then the objective is to find an assignment matrix \(A^{j}\), which assigns the agents to the unvisited cuboids, where \(A^{j}_{i,n}\in\{0,1\}\), and the sum of each row and column of \(A^{j}\) does not exceed the value of one. Once a solution is found agent \(j\) keeps its assigned cuboid (i.e., extracts \(n^{*}_{j}\) from \(A^{j}\)) and discards all other results.
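A sketch of this local assignment step (the second branch of Eqn. (10)) using the Hungarian method from SciPy (`scipy.optimize.linear_sum_assignment`); building \(\Omega^{j}\) from Euclidean distances and discarding all but agent \(j\)'s own assignment follows the description above, while the function name and the numerical values are our own.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_next_cuboid(agent_positions, cuboid_centroids, j=0):
    """Solve the local assignment problem of Eqn. (10) for agent j.
    agent_positions: (m, 3) positions of agent j and its neighbours (row 0 = agent j)
    cuboid_centroids: (n, 3) centroids of the unvisited cuboids in agent j's search-map
    Returns the index of the cuboid assigned to agent j; all other assignments
    are discarded, as in the proposed scheme."""
    # cost matrix Omega: distance between every agent and every unvisited cuboid
    omega = np.linalg.norm(agent_positions[:, None, :] - cuboid_centroids[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(omega)       # Hungarian algorithm
    return int(cols[list(rows).index(j)])

agents = np.array([[0., 0., 10.], [50., 0., 10.]])               # agent j and one neighbour
cuboids = np.array([[5., 5., 10.], [45., 5., 10.], [100., 0., 10.]])
print(assign_next_cuboid(agents, cuboids, j=0))                  # 0: the nearest free cuboid
```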
Finally, in the last term (i.e., \(\sum_{n}r^{j}(n)y^{j}_{n}\)), \(y^{j}_{n}\) is a binary decision variable which is activated whenever one of the following is true: a) cuboid \(n\) has been visited by agent \(j\) in the past, b) cuboid \(n\) has been planned to be visited by agent \(j\) in the current planning horizon or c) some agent \(i\neq j\) has visited cuboid \(n\) in the past and this information has already been communicated to agent \(j\). The term \(r^{j}(n)\in\{0,1\}\) is a reward term which is used to include or exclude cuboid \(n\) from the planning process as we will explain next. Essentially, the notation \(r^{j}(n)y^{j}_{n}\) indicates here whether the decision variable \(y^{j}_{n}\) will be included in the optimization. Since there is no coordination between the agents, and because the agents operate in parallel it is highly likely that one or more agents (especially nearby agents) generate plans for the same cuboids. In addition, each agent \(j\) with probability \(p^{j}_{b}(t)\) will exit the mission space due to a depleted battery event. As a consequence, cuboids that have been planned to be visited by agent \(j\) will be left unvisited in such events. Thus, the agents need to account for the above scenarios in an effort to increase the overall search planning performance and reduce the duplication of work. Let us assume that agent \(j\) has received at time \(t\) the hypothetical future plans of all nearby agents \(i\neq j\in\mathcal{N}^{j}_{t}\) denoted as \(\hat{P}^{i}_{t}(n),\forall n\in N\) where \(N\) is the total number of cuboids in the environment. Alongside \(\hat{P}^{i}_{t}(n)\) the agent has also received \(\tau\hat{P}^{i}_{t}(n)\), and flight time \(t^{i}_{Fl}\) for each agent \(i\in\mathcal{N}^{j}_{t}\).
With this information, agent \(j\) first computes the probability that a particular cuboid \(\mathcal{C}_{n}\) will not be visited by any agent that has made plans for it, due to the occurrence of battery depletion events. More specifically, let \(\mathcal{W}^{j}_{t}\subseteq\mathcal{N}^{j}_{t}\) denote the subset of agents which have included cuboid \(n\) in their plans transmitted to agent \(j\) i.e., \(\hat{P}^{l}_{t}(n)=1,\forall l\in\mathcal{W}^{j}_{t}\), and let \(\tau\hat{P}^{l}_{t}(n)\) denote the relative time \(\tau\) in the planning horizon for which agent \(l\in\mathcal{W}^{j}_{t}\) is planning to visit cuboid \(\mathcal{C}_{n}\). Agent \(j\) computes the probability that agent \(l\in\mathcal{W}^{j}_{t}\) will experience a battery depletion event before reaching cuboid \(\mathcal{C}_{n}\) as:
\[p^{l}_{F}(n)=p^{l}_{b}(t^{l}_{Fl}+\tau\hat{P}^{l}_{t}(n)-1) \tag{11}\]
where \(t^{l}_{Fl}+\tau\hat{P}^{l}_{t}(n)-1\) is the hypothesized arrival time of agent \(l\) at cuboid \(\mathcal{C}_{n}\). Subsequently, the probability of the event for which all agents \(l\in\mathcal{W}^{j}_{t}\) fail to reach cuboid \(\mathcal{C}_{n}\) due to depleted batteries and agent \(j\) does not run out of battery during its planning horizon, is computed as:
\[\hat{p}^{j}_{F}(n)=\left(1-p^{j}_{b}(t+T)\right)\prod_{l=1}^{|\mathcal{W}^{j}_{t }|}p^{l}_{F}(n) \tag{12}\]
The probability in Eqn. (12) is computed by agent \(j\) with information received from its communication neighborhood and allows the agent to determine whether a particular cuboid needs to be included in its future plans, given the hypothesized battery depletion events of other agents. The value of \(\hat{p}^{j}_{F}(n)\in[0,1]\) indicates the probability with which agent \(j\) should include cuboid \(n\) in its future plans.
As we have already mentioned, the plans received by agent \(j\) from other agents inside its communication neighborhood are not necessarily up-to-date and could have been changed. For this reason agent \(j\) takes into account the plans of other agents only with a certain probability. More specifically, the expected number of agents \(l\in\mathcal{W}^{j}_{t}\) that will reach cuboid \(n\) during the next planning horizon can be computed as: \(\hat{m}(n)=\sum_{l=1}^{|\mathcal{W}^{j}_{t}|}(1-p^{l}_{F}(n))\). Based on the expected number of agents \(\hat{m}(n)\) that agent \(j\) hypothesizes will reach cuboid \(n\) in the future, agent \(j\) includes cuboid \(n\) in its future plans with a probability given by:
\[\hat{p}^{j}_{C}(\hat{m}(n))=\left\{\begin{array}{ll}1-\frac{1}{1+\alpha_{2}^{j}e^{-\beta_{2}^{j}(\hat{m}(n)-\alpha_{2}^{j})}},&\text{if }p^{j}_{b}(t+T)>0.5\\ 0,&\text{o.w.}\end{array}\right. \tag{13}\]
where \(\beta_{2}^{j}\) and \(\alpha_{2}^{j}\) are design parameters. In essence, Eqn. (13) expresses the probability that a particular cuboid \(n\) will be included in agent's \(j\) plan conditioned on the expected number of agents that also plan to visit the same cuboid. This probability decreases with the expected number of agents that plan to visit a particular cuboid i.e., agent \(j\) probabilistically refrains from visiting cuboid \(n\) when a large number of agents are expected to visit \(n\) as well. The reward \(r^{j}(n)\) shown in Eqn. (9) can now be defined as:
\[r^{j}(n)=\left\{\begin{array}{ll}1,&\text{with probability }\max\{\hat{p}^{j}_{F}(n),\hat{p}^{j}_{C}(\hat{m}(n))\}\\ 0,&\text{otherwise}\end{array}\right. \tag{14}\]
Eqn. (14) is activated when \(\forall\ l\in\mathcal{W}_{t}^{j}:\hat{P}_{t}^{l}(n)=1\). On the other hand, when no agent \(l\in\mathcal{W}_{t}^{j}\) has included cuboid \(n\) in its plans then \(r^{j}(n)=1\). Additionally, when \(\mathcal{N}_{t}^{j}=\emptyset\) then \(r^{j}(n)=1,\forall n\).
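Putting Eqns. (11)-(14) together, the following sketch computes the reward \(r^{j}(n)\) for a single cuboid from the point of view of agent \(j\); the battery-model and sigmoid parameters are illustrative assumptions, and for simplicity all agents are assumed to share the same battery model.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_b(t, alpha=200.0, beta=0.08):
    """Battery depletion probability of Eqn. (3) with illustrative parameters."""
    return 1.0 / (1.0 + alpha * np.exp(-beta * (t - alpha)))

def reward(neighbour_plans, t, T, alpha2=2.0, beta2=3.0):
    """Compute r^j(n) of Eqn. (14) for a single cuboid n.
    neighbour_plans: list of (flight_time, tau) pairs for the agents l in W_t^j
    that plan to visit the cuboid at relative horizon step tau."""
    if not neighbour_plans:                  # no neighbour has planned this cuboid
        return 1
    p_F = [p_b(t_fl + tau - 1) for t_fl, tau in neighbour_plans]      # Eqn. (11)
    p_hat_F = (1 - p_b(t + T)) * np.prod(p_F)                         # Eqn. (12)
    m_hat = sum(1 - p for p in p_F)                  # expected number of visitors
    if p_b(t + T) > 0.5:                                              # Eqn. (13)
        p_hat_C = 1 - 1 / (1 + alpha2 * np.exp(-beta2 * (m_hat - alpha2)))
    else:
        p_hat_C = 0.0
    return int(rng.random() < max(p_hat_F, p_hat_C))                  # Eqn. (14)

# two neighbours plan to reach the cuboid at horizon steps 3 and 5
print(reward(neighbour_plans=[(180, 3), (150, 5)], t=100, T=8))
```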
To summarize, each agent \(j\in\tilde{\mathcal{M}}_{t}\) solves the distributed MPC problem shown in (P2) by optimizing its local objective function i.e., Eqn. (8) with respect to its own control inputs \(U_{t}^{j}\) and binary variables \(Y^{j}\). In problem (P2) we assume that there are \(|\mathcal{Q}|\) objects of interest, indexed by \(q\in\mathcal{Q}\), that need to be searched by the team of agents, and that each object of interest is searched when all \(\mathcal{C}_{n}^{q},n\in[1,..,N_{q}]\) cuboids are visited by at least one agent. Moreover, the agents must avoid collisions with all \(\psi\in\Psi\) obstacles in the environment, including the objects of interest.
**Problem (P2)** : Distributed MPC
\[\min_{U_{t}^{j},Y^{j}}\mathcal{J}_{\text{local}}^{j}(X_{t}^{j},U_{t}^{j},Y^{j})\] (15a)
**subject to** \(\tau\in[0,\ldots,T-1]\):
\[x_{t+\tau+1|t}^{j}=\Phi x_{t+\tau|t}^{j}+\Gamma u_{t+\tau|t}^{j}-\Gamma u_{g}\quad\forall\tau\] (15b)
\[x_{t|t}^{j}=x_{t|t-1}^{j}\] (15c)
\[x_{t+\tau+1|t}^{j}\in\mathcal{X}\quad\forall\tau\] (15d)
\[|u_{t+\tau+1|t}^{j}|\leq u_{\text{max}}\quad\forall\tau\] (15e)
\[A_{q,n,l}Hx_{t+\tau+1|t}^{j}+(M-B_{q,n,l})b_{\tau,q,n,l}^{j}\leq M\quad\forall\tau,q,n,l\] (15f)
\[L\hat{b}_{\tau,q,n}^{j}-\sum_{l=1}^{L}b_{\tau,q,n,l}^{j}\leq 0\quad\forall\tau,q,n\] (15g)
\[\hat{b}_{q,n}^{j}\leq\sum_{\tau}\hat{b}_{\tau,q,n}^{j}\quad\forall q,n\] (15h)
\[V_{t}^{j}(q,n)=V_{t}^{j}(q,n)+\sum_{i\neq j\in\mathcal{N}_{t}^{j}}V_{t}^{i}(q,n)\quad\forall q,n\] (15i)
\[y_{q,n}^{j}\leq\hat{b}_{q,n}^{j}+V_{t}^{j}(q,n)\quad\forall q,n\] (15j)
\[A_{\psi,l}Hx_{t+\tau+1|t}^{j}>B_{\psi,l}-Mz_{\tau,\psi,l}^{j}\quad\forall\tau,\psi,l\] (15k)
\[\sum_{l=1}^{L}z_{\tau,\psi,l}^{j}\leq L-1\quad\forall\tau,\psi\] (15l)
#### V-B2 Constraints
In problem (P2) each agent \(j\) constructs its future trajectory \(x_{t+\tau+1|t}^{j}\) over the rolling horizon \(\tau\in[0,\ldots,T-1]\) of length \(T\). The constraints in Eqn. (15b) - (15e) are due to the agent dynamical model. Then, the constraints in Eqn. (15f) - (15g) check whether the \(n_{\text{th}}\) cuboid (i.e., \(\mathcal{C}_{n}^{q}\)), of the object of interest \(q\), has been planned to be visited at time \(t+\tau+1|t\) by the agent. To do this, we use the binary variables \(b_{\tau,q,n,l}^{j}\) and \(\hat{b}_{\tau,q,n}^{j}\), where \(l\in[1,..,L]\) denotes the cuboid faces. The constraint in Eqn. (15h) discourages agent \(j\) from including in its plans cuboid \(n\) of the object of interest \(q\) more than once during the planning horizon. Then the constraint in Eqn. (15i) updates agent \(j\)'s search-map \(V_{t}^{j}(q,n)\) with information received from other agents \(i\in\mathcal{N}_{t}^{j}\). When agent \(j\) has no agents inside its communication range (\(\mathcal{N}_{t}^{j}=\emptyset\)), then \(V_{t}^{j}(q,n)\) is not updated with information from other agents. Subsequently, the constraint in Eqn. (15j) removes any incentive for agent \(j\) to visit cuboid \(n\) of the object of interest \(q\) if \(n\) has been visited in the past (by agent \(j\) or any other agent which has exchanged information with agent \(j\) at some point in time). The constraints in Eqn. (15k) - (15l) define collision avoidance constraints with the obstacles \(\psi\in\Psi\).
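To illustrate how the big-M constraints (15f)-(15g) can be encoded in practice, the following is a hedged gurobipy sketch (Gurobi is the MIQP solver reported in Sec. VI) for a single axis-aligned cuboid at a single time-step; the cuboid geometry, the big-M value, and the toy objective are our own choices, and the dynamics, the selection matrix \(H\), and the remaining constraints of problem (P2) are omitted.

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

M_big, L = 1e4, 6                      # big-M constant and number of cuboid faces (toy values)
c, s = np.array([50.0, 50.0, 20.0]), np.array([5.0, 5.0, 5.0])   # cuboid centre / half-sizes
A = np.vstack([np.eye(3), -np.eye(3)])                            # half-spaces A p <= B
B = np.concatenate([c + s, -(c - s)])

m = gp.Model("cuboid_visit")
p = m.addVars(3, lb=-GRB.INFINITY, name="p")       # agent position at one time-step
b = m.addVars(L, vtype=GRB.BINARY, name="b")       # per-face indicators, cf. Eqn. (15f)
bhat = m.addVar(vtype=GRB.BINARY, name="bhat")     # cuboid-visited indicator, cf. Eqn. (15g)

for l in range(L):                                 # A p + (M - B) b <= M
    m.addConstr(gp.quicksum(A[l, k] * p[k] for k in range(3)) + (M_big - B[l]) * b[l] <= M_big)
m.addConstr(L * bhat - gp.quicksum(b[l] for l in range(L)) <= 0)   # all faces satisfied => visited

m.setObjective(bhat, GRB.MAXIMIZE)                 # toy objective: reward visiting the cuboid
m.optimize()                                       # the optimizer places p inside the cuboid
```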
## VI Evaluation
The experimental setup used for the evaluation of the proposed system is as follows: The agent dynamics are expressed by Eqn. (1) with \(\Delta T=1\)s, agent mass \(m=3.35\)kg and air resistance coefficient \(\eta=0.2\). The applied control input is bounded as \(|u_{t}|\leq 35\)N, the agent velocity is bounded within \(|\dot{\mathbf{x}}|\leq 15\)m/s, and the agent's position is bounded within the physical limits of the surveillance area \(\mathcal{A}\). The agent FoV angle \(\phi\) is set at 60 deg. Simulations were conducted on a 3.5 GHz dual-core CPU running the Gurobi V9 MIQP solver.
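For reference, one plausible way to realize the linear model \(x_{t+\tau+1|t}=\Phi x_{t+\tau|t}+\Gamma u_{t+\tau|t}-\Gamma u_{g}\) used in constraint (15b) with the parameters above is a discrete point-mass with linear drag; the exact \(\Phi\), \(\Gamma\), and gravity input of Eqn. (1) in the paper may differ, so the following numpy sketch is only an assumption-laden illustration.

```python
# Hedged numpy sketch: a plausible discrete point-mass model with linear drag consistent with
# the parameters above (dt = 1 s, m = 3.35 kg, eta = 0.2). The exact Phi, Gamma and gravity
# input u_g of Eqn. (1) in the paper may differ.
import numpy as np

dt, m, eta, g = 1.0, 3.35, 0.2, 9.81
I3, Z3 = np.eye(3), np.zeros((3, 3))

Phi = np.block([[I3, dt * I3],
                [Z3, (1.0 - eta) * I3]])      # state = [position; velocity]
Gamma = np.vstack([Z3, (dt / m) * I3])        # maps applied force to velocity change
u_g = np.array([0.0, 0.0, m * g])             # constant gravity input

def step(x, u):
    """One step of x_{t+1} = Phi x_t + Gamma u_t - Gamma u_g (cf. Eqn. (15b))."""
    return Phi @ x + Gamma @ u - Gamma @ u_g

x0 = np.array([160.0, 200.0, 5.0, 0.0, 0.0, 0.0])   # initial state used in Sec. VI-A
print(step(x0, np.array([0.0, 0.0, m * g])))        # hovering thrust keeps the velocity at zero
```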
### _Mission Pre-planning_
First, we demonstrate the mission-preplanning step, which is depicted in Fig. 3(a)-(c). As we have discussed in Sec. IV, in the mission pre-planning step, the mission control at the central station specifies the amount of search-effort required for efficiently searching an object of interest, which in this work is captured by the detection probability. In this scenario, we assume that the profile of the detection probability is given by Eqn. (4) with \(d_{\text{min}}=17\)m and \(d_{\text{max}}=90\)m as shown in Fig. 3(a). Subsequently, Fig. 3(b) shows an example of the area decomposition step for 3 different detection probabilities, i.e., \(p_{d}^{j}(d_{1}),p_{d}^{j}(d_{2})\) and \(p_{d}^{j}(d_{3})\), where \(p_{d}^{j}(d_{1})=1\) is the maximum detection probability, \(p_{d}^{j}(d_{2})=0.88\) and \(p_{d}^{j}(d_{3})=0.53\), indicated by the green, blue and red colors, respectively. Once the central station issues the required detection probability, the distance (i.e., \(d_{1},d_{2}\) or \(d_{3}\)) that the UAV agent must maintain from the object of interest is determined from Eqn. (4) and the FoV footprint is computed according to the agent's sensing model. In the illustrated scenario, \(p_{d}^{1}(d_{1}),p_{d}^{2}(d_{2})\) and \(p_{d}^{3}(d_{3})\) are achieved at distances \(d_{1}=17\)m, \(d_{2}=26\)m and \(d_{3}=52\)m, respectively. The agent's FoV area for \(d_{1}\) is approximately 20m\(\times\)20m, whereas for \(d_{2}\) and \(d_{3}\) the FoV sizes are \(30\)m\(\times\)30m and \(60\)m \(\times\) 60m, respectively.
Let us now assume that a large structure or building, with dimensions \(60\)m \(\times\) 60m \(\times\) 60m as shown in Fig. 3(b), is on fire and thus all its lateral faces must be searched to determine if there are trapped people inside. Each one of the faces of the object of interest is decomposed into multiple cells according to the agent's FoV footprint, forming a grid as shown in the figure. For \(p_{d}^{1}(d_{1})\) each face is decomposed into 9 cells, shown in green color, for \(p_{d}^{2}(d_{2})\) each face is decomposed into 4 cells, shown with blue color, and finally for \(p_{d}^{3}(d_{3})\) the agent's FoV area captures the whole face of the object of interest, as shown with red color in Fig. 3(b), and thus one cell contains the entire face. Finally, depending on the required detection probability, for each cell, an artificial cuboid is generated and placed in front of the cell's center, at the distance at which the agent's FoV area matches the area of the cell. This is depicted in 2D and 3D view in Fig. 3(c), where the green, blue and red cuboids are associated with the detection probabilities \(p_{d}^{1}(d_{1}),p_{d}^{2}(d_{2})\), and \(p_{d}^{3}(d_{3})\) respectively and are placed at distances \(d_{1}=17\)m, \(d_{2}=26\)m, and \(d_{3}=52\)m respectively.
To summarize, the UAV agent is required to pass through a total of 36 cuboids (i.e., 9 cuboids per face) in order to search the 4 faces (shown in Fig. 3(b)) of the object of interest with a detection probability of \(p_{d}^{1}(d_{1})=1\). On the other hand, when the detection probability is set to \(p_{d}^{2}(d_{2})=0.88\), the agent needs to pass through 16 artificial cuboids, and finally with a detection probability of \(p_{d}^{3}(d_{3})=0.53\) only 4 cuboids need to be visited.
Once the mission-preplanning step is completed, the 3D search planning problem is transformed into an optimal control problem where the objective is to guide the UAV agents through all the generated artificial cuboids. Figure 3(d)(e) shows the output of the rolling-horizon model predictive control formulation for a single agent, obtained from problem (P1) (problems (P1) and (P2) are equivalent in this case and produce the same result). The objective here is to search the object of interest discussed in the previous paragraph with the maximum detection probability i.e., \(p_{d}^{1}(d_{1})=1\). In this setting, the agent needs to visit a total of 36 cuboids around the object of interest shown in green color in Fig. 3(d)(e). The agent's home depot is shown with a light green box and its initial state is \(x_{0}=[160,200,5,0,0,0]\). The size of the surveillance region is \(300\text{m}\times 300\text{m}\times 80\text{m}\), the planning horizon \(T\) is set at 10 time-steps, the weights \(w_{1},w_{2},\text{and }w_{3}\) of the objective function in Eqn. (6) are set to 0.0001, 0.0001, and 0.3 respectively and finally \(\tau^{*}=3\).
Figure 3(d) shows the agent's trajectory at time-step 11. The agent's executed trajectory is denoted with red diamonds and the agent's predicted trajectory, i.e., \(x_{t+\tau+1|t},t=11,\tau\in[0,..,9]\), is marked with red circles. As shown in the figure, the agent maximizes the number of cuboids to be visited within its planning horizon. Figure 3(e), on the other hand, shows the final trajectory of the agent, which took place over 75 time-steps. As shown, the agent visits all the generated cuboids, forming a spiral trajectory.
### _Distributed 3D Search Planning_
Next, we analyze the performance of the proposed distributed 3D search planning approach. We begin our evaluation with an illustrative scenario shown in Fig. 4, where 4 agents are tasked to search 2 objects of interest with sizes \(60\text{m}\times 60\text{m}\times 60\text{m}\) each. In this scenario the surveillance region has a size of \(250\text{m}\times 400\text{m}\times 80\text{m}\) and the agents are required to search the objects of interest with a detection probability of \(p_{d}^{2}(d_{2})=0.88\), which results in the generation of 4 cuboids per face as shown in Fig. 4(a). In order to search the 4 faces of each object of interest, as depicted in the figure, the agents need to visit 32 cuboids in total (colored in cyan). The 4 agents shown in purple, red, green and blue depart from their home depots as shown in the figure and execute the distributed MPC program shown in (P2) to produce the search trajectories illustrated in Fig. 4(b). We should mention that for this experiment, the parameters \(\alpha_{1}^{j}\) and \(\beta_{1}^{j}\) of Eqn. (3) have been set to 100 and 0.3 respectively for all agents. Similarly, the parameters \(\alpha_{2}^{j}\) and \(\beta_{2}^{j}\) of Eqn. (13) have been set to 2 and 0.5 respectively for all agents and the communication range \(C_{R}\) was set to 100m. All the other parameters remain unchanged. The initial states of the agents are \(x_{0}^{1}=[166,235,5,0,0,0]\), \(x_{0}^{2}=[185,235,5,0,0,0]\), \(x_{0}^{3}=[165,215,5,0,0,0]\), and \(x_{0}^{4}=[185,215,5,0,0,0]\) and the planning horizon is \(T=10\) time-steps. As shown in the figure, the agents work cooperatively to search the objects of interest in a distributed fashion. In particular, we observe that the agents are divided into two teams i.e., the green-purple team and the blue-red team, with each team searching one object of interest. In this scenario all 32 cuboids are visited by the agents in 48 time-steps.
The next experiment aims to demonstrate the cooperative behavior of the system in the presence of obstacles. This experiment is depicted in Fig. 5, where 2 cooperative UAV agents, denoted with red and blue color, operate inside a surveillance region with dimensions \(500\text{m}\times 400\text{m}\times 80\text{m}\). The agents' initial
Fig. 4: Distributed Search Planning with 4 cooperative UAV agents.
Fig. 3: The figure illustrates: (a)-(c) the mission-preplanning step, (d)-(e) the generated 3D search plan for a single UAV agent.
states are \(x_{0}^{1}=[85,215,5,0,0,0]\) and \(x_{0}^{2}=[485,215,5,0,0,0]\) for the red and blue agents respectively. The agents collaborate in order to search a single object of interest (by visiting a total of 16 artificial cuboids) located between two obstacles \(\psi_{1}\) and \(\psi_{2}\), as depicted in Fig. 5(a). The height of obstacle \(\psi_{1}\) is set at 80m (equal to the maximum height of the surveillance region), and the height of obstacle \(\psi_{2}\) is 30m. As shown in the figure, each agent manages to search 2 of the object's faces, and thus in total 4 faces are searched (i.e., all 16 cuboids are visited by the agents as shown). More importantly, the agents avoid the obstacles in the environment in their effort to reach the object of interest.
The next series of experiments aims to investigate the impact of: a) the number of agents \(|\mathcal{M}|\), b) the communication range \(C_{R}\), and finally c) the parameters \(\alpha_{1}\) and \(\beta_{1}\) of the agent's battery profile, i.e., Eqn. (3), on the mission completion time, i.e., the amount of time required for searching all objects of interest. For this experiment we have used the environmental set-up shown in Fig. 4, with two objects of interest of sizes \(60\mathrm{m}\times 60\mathrm{m}\times 60\mathrm{m}\) each, inside the surveillance region of size \(400\mathrm{m}\times 400\mathrm{m}\times 80\mathrm{m}\) and with a detection probability of \(p_{d}=0.88\), which results in the generation of 32 artificial cuboids in total. In this test we have experimented with various parameter configurations as follows: the total number of available agents \(|\mathcal{M}|\) varies in the set \(\{3,5,7,9,11\}\), the communication range \(C_{R}\) takes values in the set \(\{50\mathrm{m},100\mathrm{m},250\mathrm{m}\}\), and two different battery profile settings have been used, i.e., with the parameter \(\alpha_{1}\) in the range \(\alpha_{1}=[20,40]\) for battery profile 1 and \(\alpha_{1}=[70,90]\) for battery profile 2. The parameter \(\beta_{1}\) is kept fixed at \(\beta_{1}=0.3\). The agent recharging time \(t_{R}\) is sampled uniformly from the interval \([5,10]\). We have conducted 50 Monte Carlo (MC) trials for each parameter combination, where we randomly initialize the agents inside the surveillance region and let the system (i.e., problem (P2)) run, logging the mission completion time and the number of active agents per time-step. The averaged results for the different configurations are illustrated in Fig. 6. More specifically, Fig. 6(a) shows the average mission completion time for different agent team sizes and various communication ranges for battery profile 1. For this experiment, the battery profile parameter \(\alpha_{1}\) for each agent is sampled uniformly within the interval \([20,40]\). On the other hand, in Fig. 6(b) the same configuration scenario is simulated for battery profile 2, in which \(\alpha_{1}\) is sampled uniformly within the interval \([70,90]\). The conditional probability distributions of the two battery profiles are shown in Fig. 6(c) with black and red colors, for profile 1 and 2, respectively. As we can observe from Fig. 6(a) and Fig. 6(b), the average mission time decreases as the number of agents increases. Additionally, these results also show the impact of the communication range on the performance of the system. As the communication range increases, the cooperation between the agents also increases, which results in improved mission execution times. Interestingly, we can observe that a large communication range primarily benefits small teams, for which the agents are sparse and scattered within the surveillance region. Fig. 6(b) shows a similar behavior for battery profile 2. However, in this scenario the battery depletion events are not as frequent compared to those obtained with battery profile 1. As a result, the average number of active agents per time-step participating in the mission is larger, which improves the mission execution times as shown. Figure 6(d) shows the average number of active agents per time-step for the two different battery profiles.
As shown, the frequent battery depletion events caused by battery profile 1 make the number of agents that participate in the mission fluctuate significantly, which can potentially decrease the system's performance. Nevertheless, the results show the flexibility of the proposed distributed search planning approach to cope with a dynamically varying number of agents.
The next experiment aims to demonstrate more clearly the effect of the battery profile and the communication range on the performance of the system. In this experiment, we used the setup shown in Fig. 4, with 4 agents initialized at \(x_{0}^{1}=[166,235,5,0,0,0]\), \(x_{0}^{2}=[185,235,5,0,0,0]\), \(x_{0}^{3}=[165,215,5,0,0,0]\), and \(x_{0}^{4}=[185,215,5,0,0,0]\), and with the objects of interest as shown in the figure. The agent recharging time \(t_{R}\) is sampled uniformly within the interval \([1,5]\) and the rest of the parameters remain unchanged. We run the system with the following configurations: a) \((C_{R}=50\mathrm{m},\alpha_{1}=10)\), b) \((C_{R}=50\mathrm{m},\alpha_{1}=20)\), c) \((C_{R}=250\mathrm{m},\alpha_{1}=10)\) and, d) \((C_{R}=250\mathrm{m},\alpha_{1}=20)\), and we monitor the percentage of visited cuboids over time as shown in Fig. 7(a). Figure 7(b) shows the number of active agents in each time-step, for the four configurations. As shown in Fig. 7(a) by the red and blue solid lines, the system's performance increases dramatically with the reduction of battery depletion events. This is also evident from the number of active agents shown in Fig. 7(b). When
Fig. 5: Searching with 2 cooperative UAV agents in the presence of obstacles.
Fig. 6: The figure shows the search planning performance for different parameter configurations of the proposed approach.
\(\alpha_{1}=10\), the agents enter and exit the mission space very frequently, as shown by the red solid and dotted lines, which causes delays in the mission execution time. The importance of the communication range for the system's performance is also evident here. The increased communication range (i.e., dotted lines) significantly improves the mission execution times, as shown in the figure.
The next experiment compares the performance of the proposed distributed search planning approach with the centralized formulation of the problem discussed in Sec. V-A, and with the distributed planning framework presented in [39], which requires coordination between the agents. Specifically, in [39] the agents execute their plans in a sequential fashion one after the other. In order to evaluate the 3 approaches discussed above, we have used the following simulation setup: We have generated a surveillance region of size \(300\)m\(\times 300\)m\(\times 80\)m, with one object of interest of size \(60\)m \(\times 60\)m \(\times 60\)m centered at \((x,y)=(175,130)\), with \(16\) artificial cuboids. The agents have a communication range of 430m, and no battery depletion events occur during searching. We conduct the experiment with \(\alpha_{1}^{j}=100\), \(\beta_{1}^{j}=0.3\), \(\alpha_{2}^{j}=1.5\), \(\beta_{2}^{j}=1\), and all other parameters set as previously. We have conducted 50 MC trials, where 3 and 5 agents are uniformly spawned inside the surveillance region. Fig. 8 shows the average mission completion time (i.e., the time required so that all cuboids are searched) for the 3 approaches. For the case of 3 agents, Fig. 8 shows an average mission completion time of 24.4 seconds for the proposed distributed approach without coordination, approximately 21.8 seconds for the distributed framework with coordination, and 20.9 seconds for the centralized approach. Similar results have been obtained for the case of 5 agents, as shown in Fig. 8. In summary, the centralized approach outperforms both distributed approaches in terms of mission completion time, and the coordination between the agents seems to provide a slight advantage over the proposed approach. However, both competing techniques require constant communication amongst the agents in order to produce plans. The centralized approach requires all information to be transmitted to a central station at each time-step. Similarly, the distributed approach with coordination is not flexible in terms of communication since it requires information exchange among all pairs of agents at each time-step, which also prohibits this technique from operating at high frequencies for a large number of agents. In addition, the centralized approach does not scale well with the number of agents, as shown in the next section, and the distributed approach with coordination is not robust to agent failures, as opposed to the proposed approach. Consequently, the performance of the proposed distributed search planning framework seems reasonable for a system which does not require any form of coordination between the agents. Additionally, in the proposed framework the agents can enter and exit the mission space at random times and can communicate opportunistically, properties which fit with the application scenario studied in this work.
### _Computational Complexity_
The main factors that drive the computational complexity of problem (P1) are a) the length of the planning horizon \(T\), b) the number of agents \(|\mathcal{M}|\) involved in searching, and c) the number of cuboids \(N\) that need to be searched. This is also evident from the number of binary variables required by the mixed integer quadratic program (MIQP), as shown in Eqn. (5k). On the other hand, the computational complexity of the distributed search planning approach shown in Problem (P2) only grows with the length of the planning horizon \(T\) and with the number of cuboids \(N\) that need to be searched.
In this test we have run the centralized and distributed formulation of the proposed approach for \(3\) and \(7\) agents, with planning horizon lengths of \(3\) and \(7\) time-steps. For each (Number of Agents, Horizon Length) combination \(20\) trials were conducted, with the agents being randomly initialized inside a square area of size 20m by 20m, located between two objects of interest as shown in Fig. 4. In this experiment the number of cuboids \(N\) that need to be searched was kept constant with \(N=32\). The rest of the parameters were set according to the first paragraph of Sec. VI-B. Table I summarizes the results of this experiment in terms of the execution time (i.e., the time required by the solver to find the optimal solution). In particular, Table I shows the average execution time for each combination of the parameters for the centralized and distributed controllers. The results verify our previous discussion and show that the computational complexity of the centralized formulation does not scale well with the number of agents, as compared to the proposed distributed approach.
Fig. 8: Performance comparison of the proposed distributed search planning approach with competing techniques.
Fig. 7: The figure shows the percentage of visited cuboids over time and the number of active agents in each time-step, for different parameter configurations, for a search planning scenario with 4 agents.
## VII Conclusion
We have proposed a novel distributed search-planning framework, where a dynamically varying number of autonomous agents cooperate in order to search multiple objects of interest in 3D. The proposed distributed model predictive control (MPC) approach allows the generation of cooperative search trajectories over a finite planning horizon and enables the agents to operate without coordination, optimizing their plans on-line to maximize the collective search planning performance.
## Acknowledgments
This work is funded by the Cyprus Research and Innovation Foundation under Grant Agreement EXCELENCE/0421/0586 (GLIMPSE), by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 739551 (KIOS CoE), and from the Government of the Republic of Cyprus through the Cyprus Deputy Ministry of Research, Innovation and Digital Policy.
|
2305.07928
|
AMTSS: An Adaptive Multi-Teacher Single-Student Knowledge Distillation
Framework For Multilingual Language Inference
|
Knowledge distillation is of key importance to launching multilingual
pre-trained language models for real applications. To support cost-effective
language inference in multilingual settings, we propose AMTSS, an adaptive
multi-teacher single-student distillation framework, which allows distilling
knowledge from multiple teachers to a single student. We first introduce an
adaptive learning strategy and teacher importance weight, which enables a
student to effectively learn from max-margin teachers and easily adapt to new
languages. Moreover, we present a shared student encoder with different
projection layers in support of multiple languages, which contributes to
largely reducing development and machine cost. Experimental results show that
AMTSS gains competitive results on the public XNLI dataset and the realistic
industrial dataset AliExpress (AE) in the E-commerce scenario.
|
Qianglong Chen, Feng Ji, Feng-Lin Li, Guohai Xu, Ming Yan, Ji Zhang, Yin Zhang
|
2023-05-13T14:42:30Z
|
http://arxiv.org/abs/2305.07928v1
|
AMTSS: An Adaptive Multi-Teacher Single-Student Knowledge Distillation Framework For Multilingual Language Inference
###### Abstract
Knowledge distillation is of key importance to launching multilingual pre-trained language models for real applications. To support cost-effective language inference in multilingual settings, we propose **AMTSS**, an **a**daptive **m**ulti-**t**eacher **s**ingle-**s**t**dent distillation framework, which allows distilling knowledge from multiple teachers to a single student. We first introduce an adaptive learning strategy and teacher importance weight, which enables a student to effectively learn from max-margin teachers and easily adapt to new languages. Moreover, we present a shared student encoder with different projection layers in support of multiple languages, which contributes to largely reducing development and machine cost. Experimental results show that AMTSS gains competitive results on the public XNLI dataset and the realistic industrial dataset AliExpress (AE) in the E-commerce scenario.
## 1 Introduction
Multilingual pre-trained language models (aka M-PLMs) such as mBERT Devlin et al. (2019), XLM Conneau et al. (2019) and XLM-R Conneau et al. (2020) have achieved significant improvement for many multilingual tasks, including multilingual NLI Conneau et al. (2018); Bowman et al. (2015); Williams et al. (2018), question answering Lewis et al. (2019); Clark et al. (2020); Hardalov et al. (2020) and NER Wu and Dredze (2019); Rahimi et al. (2019). As pre-trained language models are usually computationally expensive, many transformer distillation methods Jiao et al. (2020); Liu et al. (2020) have been proposed, which distill knowledge from a large teacher model to a lightweight student network, to accelerate inference and reduce model size while maintaining the accuracy.
The majority of works mainly focus on learning from a single teacher as in Figure 1 (a), while only a few studies have considered learning from multiple teachers Peng et al. (2020); Liu et al. (2020), which allows selecting the optimal model as the teacher for different domains during student training. This is essential for cross-domain knowledge distillation, especially in multilingual NLI.
In this work, we focus on chatbot settings, which currently support nearly twenty languages for E-commerce language inference and are constantly accepting new languages. Considering the linearly increasing development and machine cost, we cannot develop a model instance for each language, as this has poor scalability. Meanwhile, as we would need to distill from around twenty teacher models each time a new language arrives, current multi-teacher distillation methods Peng et al. (2020); Liu et al. (2020); You et al. (2017); Yang et al. (2020) are not fit for our scenario. This raises an important question in practice: _can we distill knowledge from multiple teachers in a multilingual setting to cost-effectively support multiple languages and easily adapt to a new language?_
To address this challenge, we propose an adaptive multi-teacher single-student distillation framework (AMTSS). Firstly, we fine-tune a pre-trained language model for each language and obtain the optimal teacher, either monolingual or multilingual. Then, we distill the knowledge from teachers to a single student with a novel adaptive training strategy and a shared student encoder with different projection layers, instead of training several students for each language. For adapting to the new languages, we fine-tune the student model to learn from the max-margin teachers instead of re-training the student model with all teachers.
The contributions of this work are as follows:
* We propose an adaptive multi-teacher single-student knowledge distillation framework, which consists of a shared student encoder with different projection layers to support multiple languages in a cost-effective manner.
* We propose a weight based adaptive learning strategy that enables a student model to effectively learn from max-margin teachers with the importance weights, and easily adapt to new coming languages.
* We demonstrate the effectiveness of AMTSS through experimental evaluation on the public XNLI dataset and a realistic industrial dataset, AliExpress (AE), in an E-commerce scenario.
## 2 Knowledge Distillation Framework
As shown in Figure 1 (c), we present an adaptive knowledge distillation framework. Different from standard knowledge distillation, either monolingual in Figure 1 (a) or multilingual in Figure 1 (b), AMTSS enables a student to not only learn from multiple teachers with different importance weights, but also cost-effectively serve multiple languages and easily adapt to new coming languages.
### Model Architectures
#### 2.1.1 Teacher models
To ensure teacher models achieve the best performance on each language, we adopt different architectures (e.g., CamemBERT or XLM-R). That is, one can choose a different pre-trained LM, monolingual or multilingual, together with a simple projection layer, to fine-tune and obtain an optimal teacher for a specific language. We formulate teacher models as follows:
\[input=[D_{1},...,D_{i},...,D_{n}] \tag{1}\]
\[h_{i}^{T}=encoder_{i}^{T}(input_{i}) \tag{2}\]
\[\hat{y}_{t,i}^{T}=softmax_{t,i}^{T}(h_{i}^{T}) \tag{3}\]
\[\hat{y}_{i}^{T}=\sum_{t=1}^{n}w_{t,i}\hat{y}_{t,i}^{T} \tag{4}\]
where \(input_{i}\) denotes the input of the \(i\)-th language, \(h_{i}^{T}\) is the output hidden state of the \(i\)-th teacher, \(T\) denotes teacher, \(softmax_{i}^{T}\) is the specific prediction layer of the \(i\)-th teacher model, \(w_{t,i}\) denotes the importance weight of the \(i\)-th teacher, and \(\hat{y}_{i}^{T}\) represents the soft-targets generated by the \(i\)-th teacher.
#### 2.1.2 Student model
For student model, we use a shared 4-layer transformer with different projection layers for each language. Specifically, each \(input_{i}\) is encoded by the shared student encoder (Equation 5), and the prediction is generated by a corresponding projection layer for each language (Equation 6):
\[h_{i}^{S}=encoder^{S}(input_{i}) \tag{5}\]
\[\hat{y}_{i}^{S}=softmax_{i}^{S}(h_{i}^{S}) \tag{6}\]
where \(S\) denotes student, \(h_{i}^{S}\) is the output hidden state of the student encoder, and \(\hat{y}_{i}^{S}\) represents the soft-targets generated by the student. Note that the design of a shared encoder plus different projections plays a key role in supporting multiple languages with one lightweight student model, which contributes to largely reducing development and machine cost.
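As a rough illustration of the shared-encoder-plus-projection design of Eqns. (5)-(6), the following PyTorch sketch pairs a generic 4-layer Transformer encoder with one linear head per language; the hidden size, head count, vocabulary size, and first-token pooling are our own assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AMTSSStudent(nn.Module):
    """Sketch of a shared student encoder with one projection (softmax) layer per language."""
    def __init__(self, vocab_size, languages, num_labels, hidden=312, layers=4, heads=12):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        enc_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)   # shared encoder
        # one projection head per language (Eqn. (6))
        self.heads = nn.ModuleDict({lang: nn.Linear(hidden, num_labels[lang])
                                    for lang in languages})

    def forward(self, input_ids, lang):
        h = self.encoder(self.embed(input_ids))        # Eqn. (5): shared hidden states
        pooled = h[:, 0]                               # first-token pooling (assumption)
        return self.heads[lang](pooled)                # language-specific logits

# usage sketch
model = AMTSSStudent(vocab_size=250002, languages=["en", "fr", "es", "ar", "ru"],
                     num_labels={l: 3 for l in ["en", "fr", "es", "ar", "ru"]})
logits = model(torch.randint(0, 250002, (2, 16)), lang="en")
```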
### Training Loss
For the loss function, we use cross-entropy to measure the difference between the output distribution of the student and the ground truth, and a KL-divergence loss to measure the distance between the output of the student and that of the teachers, defined as follows:
\[L_{KD}=\sum_{i}H(y_{i},y_{i}^{S})+\lambda\sum_{i}D_{KL}(\hat{y}_{i}^{T},\hat{y }_{i}^{S}) \tag{7}\]
where \(H\) is the cross-entropy loss, \(y_{i}\) and \(y_{i}^{S}\) are the ground-truth and inferred labels of student, \(D_{KL}\) is the KL divergence, \(\hat{y}_{i}^{T}\) and \(\hat{y}_{i}^{S}\) denote the soft-targets generated by teacher and student model, \(\lambda\in(0,1)\) is a hyper-parameter that controls the relative influence of teacher knowledge transfer.
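A small PyTorch sketch of the loss in Eqn. (7), combining cross-entropy on the ground truth with a KL term against the importance-weighted teacher soft targets of Eqn. (4); the tensor shapes and the absence of a distillation temperature are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, labels, teacher_probs, lam=0.5):
    """Eqn. (7) sketch: H(y, y_S) + lambda * KL(y_T_hat || y_S_hat).

    student_logits: (batch, num_labels) raw outputs of the student head
    labels:         (batch,) integer ground-truth labels
    teacher_probs:  (batch, num_labels) importance-weighted teacher soft targets (Eqn. (4))
    """
    ce = F.cross_entropy(student_logits, labels)
    log_student = F.log_softmax(student_logits, dim=-1)
    kl = F.kl_div(log_student, teacher_probs, reduction="batchmean")
    return ce + lam * kl
```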
Figure 1: Knowledge distillation framework: a) monolingual LM knowledge distillation, b) original multilingual LM knowledge distillation, c) adaptive-MTSS-LM distillation. FT denotes fine-tuning, KD denotes knowledge distillation, \(W_{i}\) is weighted parameter.
### Adaptive Training Strategy
To make student models easily adapt to new languages, we introduce an adaptive training strategy with max-margin and importance weight, as in Algorithm 1. Given \(n\) languages, we firstly obtain \(n\) teachers \(T_{1},T_{2},...,T_{n}\) through fine-tuning on the corresponding datasets \(D_{1},D_{2},...,D_{n}\). For the knowledge distillation, we obtain a student model \(S\) through learning from the \(n\) importance weighted teachers (line 4-6). Further, we evaluate \(S\) on the \(n\) validation datasets (line 7-9) and calculate the top-K max-margin 1 teachers (line 10). In general, the distillation process will continue to learn from the chosen top-K max-margin teachers (line 11), and will terminate if the number of epochs exceeds \(M\) or the max-margin is smaller than a certain threshold \(\epsilon\) (line 12-13). Our adaptive strategy is inspired by Adaboost Freund and Schapire (1997), which tries to improve model performance through learning from mistaken instances, but differs in that we only train one base model instead of a group, and do not update the importance weight of teachers. Given a new language, we can directly continue our distillation process by treating the newly obtained teacher as a max-margin teacher. Through the importance weight learning, we aim to obtain a balanced student model from multiple teachers.
Footnote 1: The margin denotes the performance delta between student and a teacher.
```
0: Teachers \(T_{1},T_{2},...,T_{n}\), Weights \(W_{1},W_{2},...,W_{n}\), Datasets \(\{D_{1},...,D_{i},...,D_{n}\}\)
0: Student Model \(S\)
1:\(D_{i}\leftarrow\{D_{1},...,D_{n}\}\)
2:\(D_{i}\) consists of \(\{D_{train,i},D_{valid,i}\}\)
3:for\(epoch<M\)do
4:for\(D_{train,i}\in D_{i}\)do
5: Training \(S\) with weighted teachers, and \(D_{train,i}\)
6: Update parameters \(W_{1},W_{2},...,W_{n}\)
7:for\(D_{valid,i}\in D_{i}\)do
8: Evaluate(\(D_{valid,i}\),\(S\))
9:\(M_{i}\gets Margin(S,W_{i}T_{i})\)
10:\(R\gets Rank(M_{1},...,M_{n})\)
11:\(D_{i}\gets TopK(index(R))\)
12:if\(max(R)<\epsilon\)then
13: break
```
**Algorithm 1** Adaptive Training Strategy
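The following Python skeleton mirrors the control flow of Algorithm 1 at a high level; the train_epoch, evaluate, and margin computations are placeholders for routines the paper does not spell out, and the per-teacher weight update of line 6 is omitted, so this is only a sketch of the adaptive strategy, not its exact implementation.

```python
def adaptive_distillation(student, teachers, weights, datasets,
                          train_epoch, evaluate, top_k=2, max_epochs=200, eps=1e-3):
    """Skeleton of Algorithm 1. `teachers`, `weights`, `datasets` are per-language lists;
    `train_epoch(student, teacher, weight, data)` runs one distillation pass and
    `evaluate(model, data)` returns an accuracy. All helpers are placeholders."""
    active = list(range(len(teachers)))            # start from all languages
    for epoch in range(max_epochs):
        for i in active:                           # distill on the active languages
            train_epoch(student, teachers[i], weights[i], datasets[i]["train"])
        margins = []
        for i in range(len(teachers)):             # student-vs-teacher margin per language
            s_acc = evaluate(student, datasets[i]["valid"])
            t_acc = evaluate(teachers[i], datasets[i]["valid"])
            margins.append(weights[i] * t_acc - s_acc)
        ranked = sorted(range(len(margins)), key=lambda i: margins[i], reverse=True)
        active = ranked[:top_k]                    # continue with the top-K max-margin teachers
        if max(margins) < eps:                     # stopping criterion (lines 12-13)
            break
    return student
```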
## 3 Experiments and Results
### Datasets
We evaluated our framework on two datasets: **XNLI** and **AE**. XNLI Conneau et al. (2018) is a public multilingual NLI dataset with 3 categories: entailment, contradiction, and neutral. The AliExpress (AE) dataset is a practical text classification dataset constructed from our AliExpress chatbot in an E-commerce scenario. We select 5 languages from the AliExpress chatbot for evaluation, and the number of categories for each language is 20. The statistics of the practical AE dataset label distribution are shown in the Appendix. Since the label distributions are not equal across the different language sets, it is challenging for current knowledge distillation methods. Therefore, we propose the importance weight based adaptive strategy for knowledge distillation.
### Experimental Setting
We use RoBERTa-large for English corpus encoding, CamemBERT for French corpus encoding, and XLM-R-large for Spanish, Arabic, and Russian encoding. We use AdamW as the optimizer and adopt cross entropy and KL divergence as the loss function. We set the batch size to 32, the learning rate to 1e-5, and the dropout to 0.01. For training, we set the max epoch to 200. The evaluation metric is accuracy.
### Baseline
X-LM-Tiny is the student model which is distilled from XLM-R-large through original multilingual LM knowledge distillation as shown in Fig.1(b), where XLM-R-large is a single teacher trained with the whole mixed multilingual dataset. We use X-LM-Tiny as the baseline to evaluate the effectiveness of our adaptive MTSS framework.
### Results and Analysis
The results on XNLI are shown in Table 1. X-LM-Tiny obtains 81.88% on average, while with the adaptive strategy our Adaptive MTSS-LM-Tiny gains a 1.74% improvement on average. The results demonstrate the effectiveness of multiple teachers and adaptive training for knowledge distillation. Note that for the teacher architecture on English and French, we adopt RoBERTa and CamemBERT respectively, since their performance is better than that of XLM-R.
The practical experimental results on AE are shown in Table 2. The performance of the Adaptive MTSS-LM-Tiny is only 0.34% lower than that of the teachers on average, and 2.80% higher than X-LM-Tiny. That is, a student model can benefit from other teachers and outperform its dedicated one. Specifically, compared with X-LM-Tiny, the performance of AMTSS-LM-Tiny is 12.33% higher on English, and 2.29% higher on average. The notable gap between X-LM-Tiny and Adaptive MTSS-LM-Tiny on English (\(12.33\%=89.93\%-77.60\%\)) indicates that the direct mix of multilingual training data could cause unexpected deviations in some languages. With the importance weight based adaptive training strategy, our adaptive MTSS architecture achieves a more balanced improvement on each language, while the performance of X-LM-Tiny in each language has a large gap to the teacher. The gaps between X-LM-Tiny and RoBERTa-large in English and CamemBERT in French are 11.32% and 2.68%, while our Adaptive MTSS-LM-Tiny reduces the gaps to 1.01% and 0.61% respectively.
We also analyse the model size of both teacher and student models, and report the statistics in Table 3. The student models are much smaller than the teachers and can be deployed on restricted resources (e.g., CPU) at a low cost. Although X-LM-Tiny has the same number of parameters, its performance is not comparable with our Adaptive MTSS-LM-Tiny. Meanwhile, X-LM-Tiny is not suitable for adapting to new languages.
### Adapting to New Language
To test the adaptability of our framework to new languages, we introduce another two languages, Korean (Ko) and Polish (Pl), in addition to the aforementioned 5 languages. We train two new teachers and regard them as max-margin teachers to continue the previous distillation process, learning the teachers' importance weights, and report the results in Table 4. We find that the performance of the student on Ko and Pl is 77.91% and 81.64%, which is higher than that of the teachers, and the average performance over all 7 languages decreases only slightly (\(83.09\%\to 82.15\%\)). That is, with the adaptive training strategy, we can alleviate information forgetting, and even benefit from previous languages while adapting to new languages.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
**Model** & **Ar** & **En** & **Es** & **Fr** & **Ru** & **Average** \\ \hline Teacher Architecture & XLM-R & RoBERTa & XLM-R & CamemBERT & XLM-R & - \\ \hline Teacher & 83.10 & 91.30 & 86.60 & 85.10 & 83.50 & 85.92 \\ \hline X-LM-Tiny & 79.40 & 87.60 & 80.06 & **82.50** & 79.90 & 81.88 \\ \hline Adaptive MTSS-LM-Tiny & **81.42** & **90.91** & **82.49** & 82.46 & **80.82** & **83.62** \\ \hline \end{tabular}
\end{table}
Table 1: Results on XNLI, including teacher models, X-LM-Tiny and Adaptive MTSS-LM-Tiny. The evaluation metric is accuracy. On XNLI, the performance of XLM-R on En and Fr is 89.1% and 83.5%, lower than RoBERTa and CamemBERT.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
**Model** & **Ar** & **En** & **Es** & **Fr** & **Ru** & **Average** \\ \hline Teacher Architecture & XLM-R & RoBERTa & XLM-R & CamemBERT & XLM-R & - \\ \hline Teacher & 80.45 & 88.92 & 81.05 & 84.17 & 82.55 & 83.43 \\ \hline X-LM-Tiny & 78.21 & 77.60 & 79.52 & 81.49 & 79.64 & 79.29 \\ \hline Adaptive MTSS-LM-Tiny & **79.45** & **89.93** & **80.69** & **83.56** & **81.82** & **83.09** \\ \hline \end{tabular}
\end{table}
Table 2: Results on the AE dataset, including teacher models, X-LM-Tiny and Adaptive MTSS-LM-Tiny. The evaluation metric is accuracy. On AE, the performance of XLM-R on En and Fr is 83.04%, and 76.55%, respectively.
\begin{table}
\begin{tabular}{|l|c|} \hline
**Model Name** & **Size** \\ \hline XLM-R & 550M \\ RoBERTa & 355M \\ CamemBERT & 335M \\ X-LM-Tiny & 52.2M \\ AMTSS-LM-Tiny & 52.2M \\ \hline \end{tabular}
\end{table}
Table 3: The size of teacher and student parameters.
## 4 Conclusion
In this paper, we propose AMTSS, an adaptive multi-teacher single-student knowledge distillation framework, which enables a student model to learn from multiple teachers with an adaptive strategy and importance weights. Experimental results demonstrate that our model cost-effectively serves multiple languages and easily adapts to new languages. In the future, we will further explore adapting max-margin teacher weights with contrastive learning to improve the performance of the model and alleviate the data imbalance problem in practical scenarios.
## Limitations
In this work, although we evaluate our AMTSS method on both the public XNLI dataset and the realistic industrial dataset AliExpress (AE) in an E-commerce scenario, the AMTSS method can be further evaluated on other scenarios and tasks, such as question answering and commonsense reasoning in medical or science areas. Furthermore, in this work, we explore the possibility of an adaptive strategy and importance weights for knowledge distillation, but there are more methods we plan to introduce into knowledge distillation, such as contrastive learning, few-shot learning, and in-context learning, to alleviate the problems in different low-resource languages.
|
2307.11933
|
Constraint on cosmological constant in generalized Skryme-teleparallel
system
|
The Einstein-Skyrme system is understood to defy the "no hair" conjecture by
possessing black-hole solutions with fractional baryon number outside the event
horizon. In this article, we extend the study of the Skyrme system to
teleparallel gravity framework. We consider two scenarios, the Teleparallel
Equivalent of General Relativity (TEGR) and generalized teleparallel gravity
$f(T)$. In our analysis, we compute the fractional baryon number beyond the
black-hole horizon and its correlation with the cosmological constant
($\Lambda$). In the TEGR context, where $f(T) = -T - 2\Lambda$, the results
match with the Einstein-Skyrme model, assuming a positive $\Lambda$. More
interestingly, in generalized teleparallel gravity scenario, defined by $f(T) =
-T - \tau T^2 - 2\Lambda$, we show that the existence of a solution demands
that not only must $\Lambda$ be positive but has to lie in a range,
$\Lambda_{min} < \Lambda < \Lambda_{max}$. While the upper bound depends
inversely on $\tau$, the lower bound is a linear function of it. Hence, in the
limiting case with generalized teleparallel gravity converging towards TEGR
($\tau \rightarrow 0$), the constraints on the cosmological constant relax to
the Einstein-Skyrme system ($\Lambda_{min}$ approaches zero and $\Lambda_{max}$
becomes unbounded). On the other hand, in f(T) gravity, vanishing cosmological
constant solution is found only if the lower bound on the energy of the soliton
is very large.
|
Krishnanand K. Nair, Mathew Thomas Arun
|
2023-07-21T22:53:52Z
|
http://arxiv.org/abs/2307.11933v2
|
# Skyrmion in teleparallel gravity
###### Abstract
The Einstein-Skyrme system became famous for its black hole solutions that admit fractional baryon number outside the horizon, thus violating the "_no hair_" conjecture. In this article, we extend the Skyrmion to the teleparallel gravity framework and investigate the teleparallel-Skyrme system in the context of the Teleparallel Equivalent of General Relativity (TEGR) and \(f(T)\) power law gravity. We demonstrate the emergence of the fractional baryon number outside the horizon and its dependence on the cosmological constant (\(\Lambda\)). The solutions in TEGR (\(f(T)=-T-2\Lambda\)), as expected, match the Einstein-Skyrme system with the requirement of \(\Lambda>0\). Interestingly, in power law gravity (\(f(T)=-T-\tau T^{2}-2\Lambda\)), the reality conditions require the cosmological constant to be positive and stay within the range \(\Lambda_{min}<\Lambda<\Lambda_{max}\). And, in the limit in which power law gravity reaches TEGR (\(\tau\to 0\)), we recover the condition on the cosmological constant with \(\Lambda_{min}\to 0\) and \(\Lambda_{max}\to\infty\).
## I Introduction
The theory of General Relativity (GR) has been astonishingly successful in describing most cosmological phenomena, albeit with the inclusion of an exotic matter sector. Due in part to their elusiveness in colliders that investigate extensions to the Standard Model of particle physics, the nature of these hypothetical fields is yet unknown. This raises the question of whether we ought to alter matter or the gravitational field. Efforts to develop an alternative to General Relativity have recently picked up momentum, and teleparallel gravity in particular has witnessed a resurgence. This framework is built on the torsion-based [1; 2; 3; 4; 5; 6; 7; 8; 9] Weitzenbock connection instead of the curvature-based Levi-Civita connection. Its advancement is due to the fact that the teleparallel framework possesses an equivalent to GR in the form of the Teleparallel Equivalent of General Relativity (TEGR), which gives the theory similar phenomenology. In the simplest setting, the Ricci scalar present in the Einstein-Hilbert action is related to the torsion scalar and a respective boundary term, guaranteeing the invariance of the solutions of general relativity in the new framework [10]. In contrast, teleparallel scalar-tensor theories, \(f(T)\), can accommodate significantly different solutions to the field equations [11; 12]. In the last decade, a large body of literature discussing various phenomena, including the bouncing universe [13; 14], black holes [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29], cosmological inflation [30; 31; 32], gravitational waves [33; 34; 35; 36; 37; 38], etc., has sparked interest in the community. Similar to GR, generalizations to TEGR are likewise worth studying.
The Skyrme model is one such example from GR that rose to fame for having an intriguing feature that defies the "_no hair_" conjecture. According to the "_no hair_" conjecture, all other information, such as global charges, is lost during the gravitational collapse that creates a black hole, which is entirely determined by its mass, electric charge, and angular momentum. For black holes with classical skyrmion hair, this has been contested and found to be false [39; 40]. In this article, we expand the Skyrme system to the frameworks of TEGR (\(f(T)=-T-2\Lambda\)) and generalized teleparallel gravity (\(f(T)=-T-\tau T^{2}-2\Lambda\)), and we compute the fractional baryon number and its dependence on \(\Lambda\).
Non-linear sigma model and chiral Lagrangian have had miraculous success in modelling the low-energy dynamics of light pions. These particles arise as the massless Goldstone bosons of the spontaneously broken chiral symmetry, \(U(3)_{L}\times U(3)_{R}\to U(1)_{V}\times SU(3)_{V}\). Though charged under the \(SU(3)_{V}\), they remain neutral under \(U(1)_{V}\) global baryon number group. On the other hand, it was also realized that symmetry violation results in the creation of other bound states known as baryons, which are charged under the \(U(1)_{V}\). However, the chiral Lagrangian defining pions does not explain these baryons. It was soon discovered that the non-linear sigma model does possess a topology to support soliton solutions with the conserved current,
\[B^{\mu}=\frac{1}{24\sqrt{-g}\pi^{2}}\epsilon^{\mu\nu\rho\sigma}Tr\Big{(}U^{ \dagger}(\partial_{\nu}U)U^{\dagger}(\partial_{\rho}U)U^{\dagger}(\partial_{ \sigma}U)\Big{)}\,\]
where \(U=e^{\frac{2i}{\kappa}\pi^{a}T^{a}}\) with \(\pi^{a}\) representing the pions and \(\kappa\) their decay constant.
The simplest solution, with the two-derivative chiral Lagrangian, is not stable by simple scaling arguments. The inclusion of higher derivative terms, which scale differently, improves this situation, leading to the well-known Skyrme model, with stable solutions, given by,
\[{\cal L}=\frac{\kappa^{2}}{4}Tr(\partial^{\mu}U^{\dagger}\partial_{\mu}U)+\frac{ 1}{32e^{2}}Tr([U^{\dagger}\partial^{\mu}U,U^{\dagger}\partial^{\nu}U][U^{ \dagger}\partial_{\mu}U,U^{\dagger}\partial_{\nu}U])\,\]
where \(\kappa\) is the pion decay constant and \(e^{2}\) is a dimensionless coupling. This Lagrangian leads to the winding number, \(B=\int\sqrt{-g}d^{3}xB^{0}\), which bounds the energy from below, and the soliton configurations with non-trivial winding numbers are identified as baryons, making the Skyrme model one of the most crucial field theories to be studied. Such topologically stable solutions of this model could have interesting effects in the early epochs of the Universe. Moreover, this model couples directly to gravity, and studying it can be insightful for understanding the gravitational effects of systems with baryonic charges.
The building blocks in TEGR formalism are the four tangent space vectors erected on the space-time manifold, known as "tetrad fields" given by \(h^{a}_{\mu}\), where \(\mu\), \(a=\{1,\ 2,\ 3,\ 4\}\). From this the space-time metric can be constructed as
\[g_{\mu\nu}=\eta_{ab}{h^{a}}_{\mu}{h^{b}}_{\nu}. \tag{1}\]
Here, \(g_{\mu\nu}\) is the metric tensor, \(\eta_{ab}\) is the Minkowski metric of the tangent space, and \({h^{a}}_{\mu}\) are the tetrads. In this paper, the Greek indices (\(\mu\), \(\nu\)) refer to the space-time manifold and the Latin indices (a, b) to the local Minkowski tangent space \(T_{x}M\). These tetrads are used to establish the teleparallel equivalent of the Levi-Civita connection, called the Weitzenbock connection \(\Gamma^{\rho}_{\mu\nu}\)[41], given by,
\[{\Gamma^{\rho}}_{\mu\nu}={h_{a}}^{\rho}\partial_{\mu}{h^{a}}_{\nu} \tag{2}\]
The calculation of the torsion tensor \({T^{a}}_{\mu\nu}\)[1] is thus made possible by the Weitzenbock connection. This tensor characterizes the torsion of spacetime, as given by,
\[{T^{\rho}}_{\mu\nu}={\Gamma^{\rho}}_{\mu\nu}-{\Gamma^{\rho}}_{\nu\mu}. \tag{3}\]
The Weitzenbock connection and Levi-Civita connection are related through the contorsion tensor (\({K^{\rho}}_{\mu\nu}\)) as \(\Gamma^{\rho}_{\mu\nu}-{K^{\rho}}_{\mu\nu}=\tilde{\Gamma}^{\rho}_{\mu\nu}\) where,
\[{K^{\rho}}_{\mu\nu}=\frac{1}{2}\left({T_{\mu}}^{\rho}{}_{\nu}+{T_{\nu}}^{ \rho}{}_{\mu}-{T^{\rho}}_{\mu\nu}\right). \tag{4}\]
Further we define the dual torsion tensor as
\[S^{\rho\mu\nu}=\frac{1}{2}\left[K^{\mu\nu\rho}-g^{\rho\nu}{T^{\lambda\mu}}_{ \lambda}+g^{\rho\mu}{T^{\lambda\nu}}_{\lambda}\right]. \tag{5}\]
Finally, the torsion scalar T, which is a quadratic function of torsion, becomes,
\[T=T_{\rho\mu\nu}S^{\rho\mu\nu}=\frac{1}{2}{T^{\rho}}_{\mu\nu}{T_{\rho}}^{\mu\nu}+{T^{\rho}}_{\mu\nu}{T^{\nu\mu}}_{\rho}-2{T^{\rho}}_{\mu\rho}{T^{\nu\mu}}_{\nu}. \tag{6}\]
Thus the teleparallel action can now be written in terms of the torsion scalar as,
\[S_{\rm T}=-\frac{1}{16\pi G}\int d^{4}x\ h_{det}\,{\rm T}\]
where \(h_{det}=det(h^{a}_{\mu})=\sqrt{-g}\).
Mathematically, the torsion scalar \(T\) in teleparallel gravity is related to the Ricci scalar \(\tilde{R}\) in GR as [42; 43]
\[T\equiv-\tilde{R}+B\,\]
where \(\tilde{R}\) is the Ricci scalar and \(B=2\tilde{\nabla}_{\mu}({T^{\nu}}_{\nu}{}^{\mu})\) is a total divergence term. This TEGR Lagrangian can be readily extended to power law gravity (\(f(T)\)) [44; 45; 46; 47; 48; 49], analogous to how the Einstein-Hilbert action can be generalised to \(f(R)\). The relationship between \(T\) and \(\tilde{R}\) ensures that the simplest teleparallel gravity and GR are equivalent, but \(f(T)\) gravity can be quite different from \(f(R)\) gravity, due to the presence of the boundary term, which is studied extensively in the literature [1; 2; 3].
Here we study black-hole solutions with baryonic charge (\(B=0\) and \(B\neq 0\)) in TEGR and the modified teleparallel gravity (\(f(T)\)) framework. We start by describing the teleparallel-Skyrme system in the next section. In sec (III), we investigate the Skyrme solution and modifications to the metric near the black hole for a Skyrmion with \(B=0\) baryonic charge. Further, in sec (IV), we study the solutions and the existence of fractional baryon charge in TEGR and power law gravity. And in sec (V), we
conclude and summarise our results.
## II Teleparallel-Skyrme system
Similar to Einstein's General Theory of Relativity, in teleparallel geometry the minimal coupling prescription is given by [50; 14],
\[\eta^{ab}\to g^{\mu\nu}=\eta^{ab}h_{a}^{\mu}h_{b}^{\nu}\] \[\partial_{a}\rightarrow\nabla_{\mu}=\partial_{\mu}-\Gamma_{\mu}\;,\]
where \(\Gamma_{\mu}\) is the Weitzenbock connection. Thus, in the case of the Skyrme field expressed as an \(SU(2)\) group-valued scalar field U, the minimal coupling prescription becomes [14],
\[\partial_{a}U\rightarrow\nabla_{\mu}U\;,\]
and, the Teleparallel-Skyrme action in four dimensions, \(S=S_{\rm T}+S_{\rm S}\), is dictated by the generalized teleparallel gravity action (\(S_{\rm T}\)) and the Skyrme action (\(S_{\rm S}\)) given by
\[S_{\rm T} = \frac{1}{16\pi G}\int d^{4}x\;h_{det}\;{\rm f(T)}\] \[S_{\rm S} = \int d^{4}xh_{det}\left[\frac{\kappa^{2}}{4}\;{\rm Tr}\left(Q_{ \mu}Q^{\mu}\right)+\frac{1}{32e^{2}}\;{\rm Tr}\left(\left[Q_{\mu},Q_{\nu} \right]\left[Q^{\mu},Q^{\nu}\right]\right)\right]\;\;\;,\]
where, f(T) is a function of the torsion scalar \(T\). In the action, \(h_{det}=det(h_{\mu}^{a})=\sqrt{-g}\) and G is the Newton's constant. In the Skyrme action, \(Q_{\mu}=U^{-1}\partial_{\mu}U\) is the current in nonlinear sigma model. On varying this action w.r.t the tetrads \(h_{\mu}^{a}\), we obtain the equation of motion [51; 52; 53],
\[{M_{\beta}}^{\mu}\equiv{S_{\beta}}^{\alpha\mu}\partial_{\alpha}Tf_{TT}(T)+ \left[h_{det}^{-1}h_{\;\;\beta}^{a}\partial_{\alpha}\left(h_{det}h_{a}^{\; \sigma}{S_{\sigma}}^{\alpha\mu}\right)-{T^{\sigma}}_{\nu\beta}{S_{\sigma}}^{ \mu\nu}\right]f_{T}(T)-\frac{1}{4}\delta_{\beta}^{\mu}f(T)=4\pi G\mathcal{T}_{ \beta}^{\mu} \tag{7}\]
where \(f_{T}=\frac{\partial f}{\partial T}\) and \(f_{TT}=\frac{\partial^{2}f}{\partial T^{2}}\). Here, \(\mathcal{T}_{\mu\nu}\) represents the energy momentum tensor and, in terms of the matter Lagrangian density \(\mathcal{L}_{M}\) becomes [9],
\[{\mathcal{T}_{\mu}}^{\nu}=\frac{h_{det}^{-1}h_{\mu}^{a}}{4}\left\{\frac{ \partial\mathcal{L}_{M}}{\partial h_{\mu}^{a}}-\partial_{\alpha}\left[\frac{ \partial\mathcal{L}_{M}}{\partial\left(\partial_{\alpha}h^{a}{}_{\nu}\right)} \right]\right\}\;.\]
For the case of Skyrme field, this energy momentum tensor is computed to be,
\[\mathcal{T}_{\mu\nu}=-\,\frac{\kappa^{2}}{2}\;{\rm Tr}\left(Q_{\mu}Q_{\nu}- \frac{1}{2}g_{\mu\nu}Q_{\alpha}Q^{\alpha}\right)-\frac{1}{8e^{2}}\;{\rm Tr} \left(g^{\alpha\beta}[Q_{\mu},Q_{\alpha}][Q_{\nu},Q_{\beta}]-\frac{1}{4}g_{\mu \nu}[Q_{\alpha},Q_{\beta}][Q^{\alpha},Q^{\beta}]\right) \tag{8}\]
To obtain the Skyrme equations of motion, we vary the action and get,
\[\nabla^{\mu}\left(Q_{\mu}+\frac{1}{4\kappa^{2}e^{2}}\left[Q^{\nu},[Q_{\mu},Q_ {\nu}]\right]\right)=0 \tag{9}\]
where \(\nabla^{\mu}\) is the Fock-Ivanenko derivative [14]. This equation of motion admits topological solutions where \(U\to 1\) as \(|x^{\mu}|\rightarrow\infty\). These static field configurations in the chiral Lagrangian are characterised by the homotopy class \(\Pi_{3}(SU(2))=\mathbb{Z}\), with the energy density bounded by the topological winding number (baryon number) \(B\in\mathbb{Z}\). The topological current corresponding to the baryon current of the teleparallel-Skyrme system is given by [54]
\[B^{\mu}=\frac{1}{h_{det}}\frac{1}{24\pi^{2}}\epsilon^{\mu\nu\alpha\beta}\;{\rm Tr }\left(Q_{\nu}Q_{\alpha}Q_{\beta}\right) \tag{10}\]
from which the baryon number B can be derived as
\[B=\int h_{det}B^{0}d^{3}x \tag{11}\]
where \(B^{0}\) is the temporal component of the topological current. To obtain the static solutions, let us first use the metric ansatz [54],
\[ds^{2}=-h(r)dt^{2}+\frac{1}{h(r)}\left(p_{1}(r)dr^{2}+p_{2}(r)r^{2}d\theta^{2}+p_ {3}(r)r^{2}\sin^{2}\theta d\varphi^{2}\right) \tag{12}\]
where \(h(r)\), \(p_{1}(r)\), \(p_{2}(r)\), and \(p_{3}(r)\) are functions of the coordinate \(r\).
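As a hedged illustration of how Eqs. (2)-(3) act on this ansatz, the sympy sketch below builds one possible diagonal tetrad reproducing the metric of Eq. (12) and evaluates the Weitzenbock connection and torsion tensor; the coordinate ordering and the particular (diagonal) tetrad choice are our own assumptions, not necessarily the tetrad used in the paper.

```python
# Hedged sympy sketch (not from the paper): diagonal tetrad for the metric ansatz of Eq. (12),
# then the Weitzenbock connection (Eq. (2)) and torsion tensor (Eq. (3)).
import sympy as sp

t, r, th, ph = sp.symbols('t r theta varphi')
hf, p1, p2, p3 = (sp.Function(name)(r) for name in ('h', 'p_1', 'p_2', 'p_3'))
x = [t, r, th, ph]

# h^a_mu such that g_{mu nu} = eta_{ab} h^a_mu h^b_nu with eta = diag(-1, 1, 1, 1)
tetrad = sp.diag(sp.sqrt(hf),
                 sp.sqrt(p1 / hf),
                 r * sp.sqrt(p2 / hf),
                 r * sp.sin(th) * sp.sqrt(p3 / hf))
inv_tetrad = tetrad.inv()   # h_a^mu (rows: spacetime index, columns: tangent-space index)

# Weitzenbock connection, Eq. (2): Gamma^rho_{mu nu} = h_a^rho d_mu h^a_nu
Gamma = [[[sp.simplify(sum(inv_tetrad[rho, a] * sp.diff(tetrad[a, nu], x[mu]) for a in range(4)))
           for nu in range(4)] for mu in range(4)] for rho in range(4)]

# Torsion tensor, Eq. (3): T^rho_{mu nu} = Gamma^rho_{mu nu} - Gamma^rho_{nu mu}
T = [[[sp.simplify(Gamma[rho][mu][nu] - Gamma[rho][nu][mu])
       for nu in range(4)] for mu in range(4)] for rho in range(4)]

print(T[0][1][0])   # e.g. the T^t_{r t} component, a function of h(r) only
```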
Analysis of the Skyrme system in teleparallel framework is new and has nontrivial solutions. The generalized hedgehog ansatz [55] has just recently made it possible to construct the first analytic gravitating Skyrmions and exact configurations of multi-Skyrmions [56, 57, 58, 59, 60, 61, 62, 63]. The general ansatz for the form of the \(SU(2)-\)valued Skyrme field U is given by [55],
\[U=\rho\cdot 1+\pi^{k}\cdot t_{k}\]
Here, \(t_{k}\) are the \(SU(2)\) generators, given by the Pauli matrices,
\[t_{1}=\begin{bmatrix}0&-i\\ i&0\end{bmatrix}\quad t_{2}=\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}\quad t_{3}=\begin{bmatrix}-i&0\\ 0&i\end{bmatrix}\]
These matrices are Hermitian and traceless, and they satisfy the commutation relations \([t_{i},t_{j}]=2i\epsilon_{ijk}t_{k}\), where \(\epsilon_{ijk}\) is the totally antisymmetric Levi-Civita tensor. The index \(k=1,2,3\) is the \(SU(2)\) group index, which is raised and lowered using the flat metric \(\delta_{ij}\). The quartet of fields \((\rho,\pi_{a})\) is restricted to the surface of the unit sphere, \(\rho^{2}+\pi^{k}\pi^{k}=1\), so that the field is a map from the compactified Euclidean coordinate space \(S^{3}\) to the \(SU(2)\) group space [63]. This constraint also implies that the Skyrme field carries a topological charge, an integer-valued quantity that characterizes the topology of the field configuration. To specify the Skyrme field more explicitly, we use the ansatz of [64], which takes the form,
\[U=1\cos\gamma(r)+\widehat{n}^{k}t_{k}\sin\gamma(r). \tag{13}\]
In the above, we have \(\rho=\cos\gamma(r)\) and \(\pi^{k}=\widehat{n}^{k}\sin\gamma(r)\), where \(\gamma(r)\) is a function of the coordinate \(r\) and \(\widehat{n}^{k}\) are functions of the coordinates (\(\theta\), \(\varphi\)), given by,
\[\widehat{n}^{1}= \sin\theta\cos\varphi \tag{14}\] \[\widehat{n}^{2}= \sin\theta\sin\varphi\] (15) \[\widehat{n}^{3}= \cos\theta \tag{16}\]
Thus, the Skyrme field \(U\) is a matrix-valued function that depends on the radial coordinate \(r\) and the unit vector \(\widehat{n}^{k}\). The ansatz in Eq. (13) respects the symmetry of the problem and simplifies the analysis of Skyrme models. Substituting Eqs. (14)-(16) in Eq. (13), we get the expression of \(U\) as:
\[U=\begin{bmatrix}\cos\gamma+i\cos\theta\sin\gamma&\sin\theta\sin\gamma\left(i \cos\varphi+\sin\varphi\right)\\ \sin\theta\sin\gamma\left(i\cos\varphi-\sin\varphi\right)&\cos\gamma-i\cos \theta\sin\gamma\end{bmatrix} \tag{17}\]
Now, the respective components of the Skyrme field \(Q_{\mu}\) become,
\[Q_{t} = 0\] \[Q_{r} = \begin{bmatrix}i\cos\theta\gamma^{\prime}&\sin\theta\gamma^{ \prime}\left(i\cos\varphi+\sin\varphi\right)\\ \sin\theta\gamma^{\prime}\left(i\cos\varphi-\sin\varphi\right)&-i\cos\theta \gamma^{\prime}\end{bmatrix}\] \[Q_{\theta} = \begin{bmatrix}-i\cos\gamma\sin\gamma\sin\theta&\sin\gamma\left( \cos\phi-i\sin\phi\right)\left(i\cos\theta\cos\gamma+\sin\gamma\right)\\ \sin\gamma\left(\cos\phi+i\sin\phi\right)\left(i\cos\theta\cos\gamma-\sin \gamma\right)&i\cos\gamma\sin\gamma\sin\theta\end{bmatrix} \tag{18}\] \[Q_{\varphi} = \begin{bmatrix}i\sin^{2}\theta\sin^{2}\gamma&\sin\gamma\sin\theta \left(\cos\phi-i\sin\phi\right)\left(\cos\gamma+i\cos\theta\sin\gamma\right) \\ -\sin\gamma\sin\theta\left(\cos\phi+i\sin\phi\right)\left(\cos\gamma-i\cos \theta\sin\gamma\right)&-i\sin^{2}\theta\sin^{2}\gamma\end{bmatrix}\]
Using this, the non-zero components of the energy momentum tensor \(\mathcal{T}_{\mu\nu}\) of the Skyrme field given in Eq. (8) become,
\[\begin{split}\mathcal{T}^{tt}=&\frac{1}{2e^{2}r^{4}p_{1 }p_{2}p_{3}}\Bigg{(}e^{2}\kappa^{2}r^{2}\left(r^{2}p_{2}p_{3}{\gamma^{\prime}}^ {2}+p_{1}\sin^{2}\gamma(p_{2}+p_{3})\right)+h\left(r^{2}{\gamma^{\prime}}^{2} \sin^{2}\gamma(p_{2}+p_{3})+p_{1}\sin^{4}\gamma\right)\Bigg{)}\\ \mathcal{T}^{rr}=&\frac{1}{16e^{2}r^{4}p_{1}^{2}p_{2 }p_{3}}h^{2}\left(4e^{2}\kappa^{2}r^{2}\left(2r^{2}p_{2}p_{3}{\gamma^{\prime}}^ {2}-2p_{1}\sin^{2}\gamma(p_{2}+p_{3})\right)+4h\sin^{2}\gamma\left(2r^{2}{ \gamma^{\prime}}^{2}(p_{2}+p_{3})-2p_{1}\sin^{2}\gamma\right)\right)\\ \mathcal{T}^{\theta\theta}=&\frac{1}{2e^{2}r^{6}p_{1 }p_{2}^{2}p_{3}}h^{2}\left(h\sin^{2}\gamma\left(r^{2}{\gamma^{\prime}}^{2}(p_ {3}-p_{2})+p_{1}\sin^{2}\gamma\right)-\frac{1}{2}e^{2}\kappa^{2}r^{2}\left(2r ^{2}p_{2}p_{3}{\gamma^{\prime}}^{2}+2p_{1}\sin^{2}\gamma(p_{2}-p_{3})\right) \right)\\ \mathcal{T}^{\varphi\varphi}=&\frac{1}{16e^{2}r^{6}p_ {1}p_{2}p_{3}^{2}}h^{2}\csc^{2}(\theta)\left(8h\sin^{2}\gamma\left(r^{2}{ \gamma^{\prime}}^{2}(p_{2}-p_{3})+p_{1}\sin^{2}\gamma\right)-4e^{2}\kappa^{2} r^{2}\left(2r^{2}p_{2}p_{3}{\gamma^{\prime}}^{2}-2p_{1}\sin^{2}\gamma(p_{2}-p_{3}) \right)\right)\end{split} \tag{19}\]
Further, demanding the conservation of energy momentum,
\[\nabla_{\mu}\mathcal{T}^{\mu\nu}=0\;, \tag{20}\]
we get,
\[\begin{split}\nabla_{\mu}\mathcal{T}^{\mu t}=& 0\\ \nabla_{\mu}\mathcal{T}^{\mu r}=&\frac{h^{2}\gamma^{ \prime}}{2e^{2}r^{4}p_{1}^{3}p_{2}^{2}p_{3}^{2}}\eta(r)\\ \nabla_{\mu}\mathcal{T}^{\mu\theta}=&-\frac{1}{e^{2 }r^{4}p_{1}p_{2}^{2}p_{3}}h^{2}\cot(\theta)\sin^{2}\gamma(p_{2}-p_{3})\left(e ^{2}\kappa^{2}p_{1}+h{\gamma^{\prime}}^{2}\right)\\ \nabla_{\mu}\mathcal{T}^{\mu\varphi}=& 0\end{split} \tag{21}\]
where we have
\[\begin{split}\eta(r)&=\left(r^{2}p_{2}p_{3}(p_{1}(p_ {3}{\gamma^{\prime}}(e^{2}\kappa^{2}r^{2}p_{2}^{\prime}+2h^{\prime}\sin^{2} \gamma)+p_{2}({\gamma^{\prime}}(e^{2}\kappa^{2}r^{2}p_{3}^{\prime}+2h^{\prime }\sin^{2}\gamma)+2e^{2}\kappa^{2}rp_{3}(r{\gamma^{\prime\prime}}+2{\gamma^{ \prime}}))\right)\\ &-e^{2}\kappa^{2}r^{2}p_{2}p_{3}{\gamma^{\prime}}p_{1}^{\prime}-e ^{2}\kappa^{2}p_{1}^{2}\sin(2\gamma)(p_{2}+p_{3}))-h\sin\gamma(r^{2}-p_{2}p_{ 3}{\gamma^{\prime}}\sin\gamma p_{1}^{\prime}(p_{2}+p_{3})+r^{2}p_{1}(-p_{3}^{2 }{\gamma^{\prime}}\sin\gamma p_{2}^{\prime}+p_{2}p_{3}\times\\ &\left.({\gamma^{\prime}}\sin\gamma(p_{2}^{\prime}+p_{3}^{\prime}) +2p_{3}({\gamma^{\prime\prime}}\sin\gamma+{\gamma^{\prime 2}}\cos\gamma))+p_{2}^{2}(2p_{3}({ \gamma^{\prime\prime}}\sin\gamma+{\gamma^{\prime 2}}\cos\gamma)-{\gamma^{\prime}}\sin \gamma p_{3}^{\prime})\right)-4p_{1}^{2}p_{2}p_{3}\sin^{2}\gamma\cos\gamma) \Big{)}\end{split}\]
Among the above relations, the vanishing of \(\nabla_{\mu}\mathcal{T}^{\mu r}\) demands either \(\gamma^{\prime}=0\) or \(\eta(r)=0\), while the vanishing of \(\nabla_{\mu}\mathcal{T}^{\mu\theta}\) demands \(p_{2}=p_{3}\). In order to satisfy the energy-momentum conservation in Eq. (20), we need to impose the conditions,
\[p_{2}(r)=p_{3}(r)=m(r),\;\;\;\;\;p_{1}(r)=l(r)\]
where we rename \(p_{1}(r)\) as \(l(r)\) for convenience, and we will solve for a \(\gamma(r)\) that satisfies \(\nabla_{\mu}\mathcal{T}^{\mu r}=0\). It is also important to note that Eq. (20) is satisfied by the energy momentum tensor components given in Eq. (19) only if the metric functions \(h\), \(p_{1}\), \(p_{2}\), \(p_{3}\), and \(\gamma\) do not depend on \(\theta\) and \(\varphi\). Therefore, all the equations in Eq. (20) can be satisfied only in the spherically symmetric situation where \(p_{2}(r)=p_{3}(r)=m(r)\), and the Skyrme and metric functions depend only on the radial coordinate. The metric, given in Eq. (12), now becomes,
\[ds^{2}=-h(r)dt^{2}+\frac{l(r)}{h(r)}dr^{2}+\frac{m(r)}{h(r)}\left(r^{2}d\theta ^{2}+r^{2}\sin^{2}\theta d\varphi^{2}\right) \tag{22}\]
It is important to note that the above metric is spherically symmetric [54] and is of the form,
\[ds^{2}=g_{AB}(y)dy^{A}dy^{B}+\beta(y)^{2}\gamma_{ab}(z)dz^{a}dz^{b}\]
where \(y^{A}=\{t,r\}\) and \(g_{AB}\) are the coordinates and the metric on the two-dimensional Lorentzian manifold \(M^{2}\), respectively, while \(z^{a}=\{\theta,\varphi\}\) and \(\gamma_{ab}\) are the coordinates and the metric on the two-dimensional unit sphere \(S^{2}\).
We choose the tetrad that satisfies the metric as,
\[{h_{\mu}^{a}}\big{|}_{\rm diag}=\left[\begin{array}{cccc}\sqrt{h(r)}&0&0&0\\ 0&\sqrt{l(r)/h(r)}&0&0\\ 0&0&r\sqrt{m(r)/h(r)}&0\\ 0&0&0&r\sin\theta\sqrt{m(r)/h(r)}\end{array}\right]\]
While this tetrad generates the metric in Eq. (22), it is not suitable for reducing the algebraic expression of \(f(T)\) to the teleparallel linear form. Therefore, we follow the approach developed in the literature [65, 66, 67, 68], where a local Lorentz transformation in the tangent space is performed to construct a good set of non-diagonal tetrads. These transformations in the tangent space (with flat metric \(\eta_{ab}\)) are given by \(h^{a}_{\mu}\mapsto\Lambda^{a}_{b}h^{b}_{\mu}\), where \(\Lambda^{a}_{b}\) satisfies the condition \(\eta_{ac}\Lambda^{a}_{b}\Lambda^{c}_{d}=\eta_{bd}\). By using these non-diagonal "good" tetrads, we obtain the desired algebraic teleparallel equations of motion for the Skyrme black-hole system. To construct our tetrads, we choose an ansatz adapted to stationary observers with a four-velocity \(\left(u^{0},0,0,0\right)\). This is achieved by setting the degree of freedom to be \(h^{\lambda}_{0}=u^{\lambda}\), while the other components \(h^{\lambda}_{1}\), \(h^{\lambda}_{2}\), and \(h^{\lambda}_{3}\) are aligned with the \(\widehat{x}\), \(\widehat{y}\), and \(\widehat{z}\) Cartesian directions. With this ansatz, we obtain the tetrad field as,
\[h^{a}_{\mu}=\left[\begin{array}{cccc}\sqrt{h(r)}&0&0&0\\ 0&\sqrt{l(r)/h(r)}\sin\theta\cos\phi&r\sqrt{m(r)/h(r)}\cos\theta\cos\phi&-r\sqrt{m(r)/h(r)}\sin\theta\sin\phi\\ 0&\sqrt{l(r)/h(r)}\sin\theta\sin\phi&r\sqrt{m(r)/h(r)}\cos\theta\sin\phi&r\sqrt{m(r)/h(r)}\sin\theta\cos\phi\\ 0&\sqrt{l(r)/h(r)}\cos\theta&-r\sqrt{m(r)/h(r)}\sin\theta&0\end{array}\right] \tag{23}\]
This tetrad consistently defines a spherically symmetric teleparallel geometry. Using it, the non-zero components of the Weitzenböck connection are computed to be,
\[{\Gamma^{t}}_{tr}= \frac{h^{\prime}(r)}{2h(r)} {\Gamma^{r}}_{rr}= \frac{1}{2}\left(\frac{l^{\prime}(r)}{l(r)}-\frac{h^{\prime}(r)} {h(r)}\right) {\Gamma^{r}}_{\theta\theta}= -r\sqrt{\frac{m(r)}{l(r)}}\] \[{\Gamma^{r}}_{\varphi\varphi}= -r\sqrt{\frac{m(r)}{l(r)}}\sin^{2}(\theta) {\Gamma^{\theta}}_{r\theta}= {\Gamma^{\varphi}}_{r\varphi}=\frac{1}{r}\sqrt{\frac{l(r)}{m(r)}} {\Gamma^{\theta}}_{\varphi r}= -\frac{h^{\prime}(r)}{2h(r)}+\frac{m^{\prime}(r)}{2m(r)}+\frac{1}{r} \tag{24}\] \[{\Gamma^{\theta}}_{\varphi\varphi}= -\sin\theta\cos\theta {\Gamma^{\varphi}}_{\theta\varphi}= {\Gamma^{\varphi}}_{\varphi\theta}= \cot\theta\;.\]
Further, the non-zero components of the torsion and contorsion tensors become,
\[{T^{t}}_{rt}= \frac{h^{\prime}(r)}{2h(r)} {T^{\theta}}_{\theta r}={T^{\varphi}}_{\varphi r}=\frac{h^{ \prime}(r)}{2h(r)}+\frac{1}{r}\sqrt{\frac{l(r)}{m(r)}}-\frac{m^{\prime}(r)}{2 m(r)}-\frac{1}{r} \tag{25}\]
\[{K^{\theta}}_{r\theta}= {K^{\varphi}}_{r\varphi}=\frac{1}{2}\left(\frac{h^{\prime}(r)}{h( r)}+\frac{2}{r}\sqrt{\frac{l(r)}{m(r)}}-\frac{m^{\prime}(r)}{m(r)}-\frac{2}{r} \right)\hskip 28.452756pt{K^{r}}_{tt}=-\frac{h(r)h^{\prime}(r)}{2l(r)}\] \[{K^{r}}_{\theta\theta}= -r\sqrt{\frac{m(r)}{l(r)}}-\frac{r^{2}}{2}\frac{h^{\prime}(r)m(r) }{h(r)l(r)}+\frac{r}{2l(r)}\left(rm^{\prime}(r)+2m(r)\right)\hskip 28.452756pt {K^{t}}_{rt}=-\frac{h^{\prime}(r)}{2h(r)} \tag{26}\] \[{K^{r}}_{\varphi\varphi}= {r\sin^{2}\theta}\left(-r\sqrt{\frac{m(r)}{l(r)}}-\frac{r^{2}}{2 }\frac{h^{\prime}(r)m(r)}{h(r)l(r)}+\frac{r}{2l(r)}\left(rm^{\prime}(r)+2m(r) \right)\right)\;.\]
Using the above equations (Eqs. (24), (25) and (26)), the components of the dual torsion tensor can be written as,
\[{S_{t}}^{tr}= \frac{h(r)}{l(r)}\left(\frac{h^{\prime}(r)}{2h(r)}+\frac{1}{r} \sqrt{\frac{l(r)}{m(r)}}-\frac{m^{\prime}(r)}{2m(r)}-\frac{1}{r}\right)\] \[{S_{\theta}}^{\theta r}= {S_{\varphi}}^{\varphi r}=\frac{1}{4rl(r)m(r)}\left(h(r)\left(-2h (r)\sqrt{\frac{l(r)}{h(r)}}\sqrt{\frac{m(r)}{h(r)}}+rm^{\prime}(r)+2m(r) \right)\right)\;. \tag{27}\]
The torsion scalar can then be derived as,
\[T=\frac{1}{2r^{2}lm^{3}}\left(\frac{r^{2}m^{3}h^{\prime 2}}{h}+4h^{3}\sqrt{ \frac{l}{h}}\left(\frac{m}{h}\right)^{3/2}\left(rm^{\prime}+2m\right)-hm\left( 4lm+\left(rm^{\prime}+2m\right)^{2}\right)\right) \tag{28}\]
In the Minkowski limit (\(h(r)\to 1\), \(l(r)\to 1\), \(m(r)\to 1\)), it is straightforward to see that the torsion scalar vanishes, i.e., \(T=0\). This is expected because the torsion scalar measures the geometry's deviation from flat Minkowski spacetime; there is no such deviation in the Minkowski limit, and hence the torsion scalar vanishes. Substituting Eqs. (24)-(28) and Eq. (19) in Eq. (7), we get the teleparallel equations of motion as,
\[\frac{1}{rm}\Bigg{(}f_{T}l^{\prime}\Big{(}rm^{\prime}+2m\Big{)} \Bigg{)}-\frac{1}{r^{2}hm^{2}}\Bigg{(}2l\Big{(}-r^{2}f_{T}m^{2}h^{\prime 2}+ rhm\big{(}rf_{T}h^{\prime}m^{\prime}+m\big{(}rf_{\mathsf{T}\mathsf{T}}h^{ \prime}T^{\prime}+f_{T}(rh^{\prime\prime}+2h^{\prime})\big{)}\Big{)}\] \[+h^{3}\sqrt{\frac{l}{h}}\sqrt{\frac{m}{h}}\left(rf_{T}m^{\prime}+ 2m\left(rf_{\mathsf{T}\mathsf{T}}T^{\prime}+f_{T}\right)\right)-h^{2}m\left( r\left(rf_{\mathsf{T}\mathsf{T}}m^{\prime}T^{\prime}+f_{T}\left(rm^{\prime \prime}+4m^{\prime}\right)\right)+2m\left(rf_{\mathsf{T}\mathsf{T}}T^{\prime}+f _{T}\right)\right)\Bigg{)}+fl^{2}\] \[+\frac{\pi G}{e^{2}r^{4}lm^{2}}\Big{(}2r^{2}m\gamma^{\prime 2} \left(-e^{2}\kappa^{2}r^{2}m+\cos(2\gamma)-1\right)+l\sin^{2}(\gamma)\left(-4e ^{2}\kappa^{2}r^{2}m+\cos(2\gamma)-1\right)\Big{)}=0\] \[-r^{2}f_{T}m^{2}h^{\prime 2}-2f_{T}h^{3}\sqrt{\frac{l}{h}}\sqrt{ \frac{m}{h}}\Big{(}rm^{\prime}+2m\Big{)}+fr^{2}hlm^{2}+f_{T}h^{2}\left(rm^{ \prime}+2m\right)^{2}+\frac{\pi G}{e^{2}r^{4}lm^{2}}\Big{(}2r^{2}m\gamma^{ \prime 2}\big{(}-e^{2}\kappa^{2}r^{2}m\] \[+\cos(2\gamma)-1\Big{)}+l\sin^{2}(\gamma)\left(-4e^{2}\kappa^{2} r^{2}m+\cos(2\gamma)-1\right)\Big{)}=0 \tag{29}\] \[-rf_{T}hml^{\prime}\big{(}rm^{\prime}+2m\big{)}+2hl\Big{(}-2rf_{T }h\sqrt{\frac{l}{h}}\sqrt{\frac{m}{h}}m^{\prime}+m\Big{(}r\big{(}rf_{\mathsf{T }\mathsf{T}}m^{\prime}T^{\prime}+f_{T}(rm^{\prime\prime}+4m^{\prime})\big{)} \Big{.}\] \[-2h\sqrt{\frac{l}{h}}\sqrt{\frac{m}{h}}\big{(}rf_{\mathsf{T} \mathsf{T}}T^{\prime}+2f_{T}\big{)}\Big{)}+2m^{2}\big{(}rf_{\mathsf{T}\mathsf{ T}}T^{\prime}+f_{T}\big{)}\Big{)}+2l^{2}m\big{(}2f_{T}h+fr^{2}m\big{)}+2\pi G \left(\frac{\sin^{4}(\gamma(r))}{e^{2}r^{4}m(r)^{2}}-\frac{\kappa^{2}\gamma^{ \prime}(r)^{2}}{l(r)}\right)=0\]
where \(f_{T}=\frac{\partial f}{\partial T}\) and \(f_{TT}=\frac{\partial^{2}f}{\partial T^{2}}\). Before proceeding to solve the teleparallel gravity equations, let us first attempt to solve the Skyrme field equation given in Eq. (9). One can easily decompose the Skyrme field \(Q_{\mu}\), given in Eq. (18), in terms of the \(SU(2)\) generators \(t_{k}\) as,
\[Q_{\mu}=Q_{\mu}^{k}t_{k}\, \tag{30}\]
where \(Q_{\mu}^{k}\) are given by,
\[Q_{t}^{k}= 0\] \[Q_{r}^{k}= \widehat{n}^{k}\gamma^{\prime}\] \[Q_{\theta}^{k}= \sin^{2}\gamma\delta^{rk}\varepsilon_{ijr}\widehat{n}^{i}\partial _{\theta}\widehat{n}^{j}+\frac{1}{2}\sin(2\gamma)\partial_{\theta}\widehat{n}^{k} \tag{31}\] \[Q_{\varphi}^{k}= \sin^{2}\gamma\delta^{rk}\varepsilon_{ijr}\widehat{n}^{i} \partial_{\phi}\widehat{n}^{j}+\frac{1}{2}\sin(2\gamma)\partial_{\phi}\widehat{n }^{k}\]
Using Eqs. (31), we can easily obtain the divergence term \(\nabla^{\mu}Q_{\mu}\) appearing in Eq. (9) as,
\[\nabla^{\mu}Q_{\mu}=\frac{1}{2}\left(-\frac{h^{\prime}\gamma^{\prime}}{l}+\frac {h^{\prime}\gamma^{\prime}}{h^{2}}+\frac{h\left(r\left(2rlm\gamma^{\prime\prime}+ \gamma^{\prime}\left(2l\left(rm^{\prime}+2m\right)-rm^{\prime}\right)\right)-2 l^{2}\sin(2\gamma)\right)}{r^{2}l^{2}m}\right)\widehat{n}^{k}t_{k} \tag{32}\]
To compute the second term in Eq.(9), we use,
\[\left[Q^{\nu},\left[Q_{\mu},Q_{\nu}\right]\right]=W_{\mu}^{k}t_{k},\]
where \(W_{\mu}^{k}\) are given as,
\[\begin{split} W_{t}^{k}=& 0\\ W_{r}^{k}=& 8Q_{r}^{k}\frac{h(r)}{r^{2}m(r)}\sin^{2} \gamma\\ W_{\theta}^{k}=& 4Q_{\theta}^{k}\left(\gamma^{\prime 2 }+\frac{h(r)}{m(r)r^{2}}\sin^{2}\gamma\right)\\ W_{\varphi}^{k}=& 4Q_{\varphi}^{k}\left(\gamma^{ \prime 2}+\frac{h(r)}{m(r)r^{2}}\sin^{2}\gamma\right)\end{split} \tag{33}\]
It is straightforward to obtain this divergence term as,
\[\nabla^{\mu}\left[Q^{\nu},\left[Q_{\mu},Q_{\nu}\right]\right]= 4h\Bigg{(}\frac{h^{\prime}\gamma^{\prime}\sin^{2}(\gamma)}{ lr^{2}m}+\frac{h^{\prime}\gamma^{\prime}\sin^{2}(\gamma)}{h^{2}r^{2}m}+h \Bigg{(}\frac{\sin(\gamma)\Big{(}2l\left(\gamma^{\prime\prime}\sin(\gamma)+ \gamma^{\prime 2}\cos(\gamma)\right)-\gamma^{\prime}\sin(\gamma)l^{\prime} \Big{)}}{l^{2}r^{2}m}\] \[-\frac{2\sin^{3}(\gamma)\cos(\gamma)}{r^{2}r^{2}m}\Bigg{)} \Bigg{)}\widehat{n}^{k}t_{k} \tag{34}\]
After substituting Eq.(32) and Eq.(34) into Eq.(9) and simplifying, we obtain the equation of motion of the Skyrme field as,
\[\frac{8}{e^{2}\kappa^{2}r^{2}m}\left(\frac{h^{\prime}\gamma^{ \prime}\sin^{2}(\gamma)}{l}+\frac{h^{\prime}\gamma^{\prime}\sin^{2}(\gamma)}{ h^{2}}+h\left(\frac{\sin(\gamma)\left(2l\left(\gamma^{\prime\prime}\sin(\gamma)+ \gamma^{\prime 2}\cos(\gamma)\right)-\gamma^{\prime}\sin(\gamma)l^{\prime} \right)}{l^{2}}-\frac{2\sin^{3}(\gamma)\cos(\gamma)}{r^{2}m}\right)\right)\] \[-\frac{h^{\prime}\gamma^{\prime}}{l}+\frac{h^{\prime}\gamma^{ \prime}}{h^{2}}+\frac{h\left(r\left(2rlm\gamma^{\prime\prime}+\gamma^{\prime} \left(2l\left(rm^{\prime}+2m\right)-rm^{\prime}\right)\right)-2l^{2}\sin(2 \gamma)\right)}{r^{2}l^{2}m}=0 \tag{35}\]
The Skyrme field equation of motion is a second order differential equation in \(\gamma(r)\) that also involves the metric functions \(h(r)\), \(l(r)\), and \(m(r)\). The Skyrme function \(\gamma(r)\) determines the topological charge (given in Eq. (36)), while the metric functions encode the curvature and geometry of the spacetime manifold coupled to the Skyrmion. As a result, a thorough understanding of these functions in teleparallel gravity is required in order to completely describe the behaviour of the Skyrme field and its interactions with the underlying spacetime metric. Further, the topological charge (Eq. (11)) for the Skyrme ansatz given by Eq. (13) reduces to [69]
\[B=\left.\frac{1}{\pi}\left\{-\gamma(r)+\frac{1}{2}\sin[2\gamma(r)]\right\} \right|_{r_{h}}^{\infty} \tag{36}\]
where \(r_{h}\) is the event horizon radius of the black-hole background. Thus, the problem reduces to solving a single ordinary differential equation for the Skyrme function \(\gamma(r)\) given in Eq. (35).
## III Case 1: Skyrmions with B=0
In the context of Skyrme theory, the simplest nontrivial solution of the Skyrme equation Eq.(35) is given by,
\[\gamma=\frac{\pi}{2}+N\pi \tag{37}\]
where N is an integer. The energy momentum tensor now becomes,
\[\begin{split}\mathcal{T}^{t}{}_{t}=&\mathcal{T}^{r} _{r}=-\frac{1}{2e^{2}r^{4}m(r)}\Big{(}h(r)\left(2e^{2}\kappa^{2}r^{2}m(r)+h( r)\right)\Big{)}\\ \mathcal{T}^{\theta}{}_{\theta}=&\mathcal{T}^{\varphi }{}_{\varphi}=\frac{h(r)^{2}}{2e^{2}r^{4}m(r)^{2}}\end{split} \tag{38}\]
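As a quick consistency check (a short verification of our own, spelled out for completeness), the constant ansatz in Eq. (37) can be substituted directly into Eq. (35): since \(\gamma\) is constant,

\[\gamma^{\prime}=\gamma^{\prime\prime}=0\;,\qquad\cos\gamma=\cos\!\left(\tfrac{\pi}{2}+N\pi\right)=0\;,\qquad\sin 2\gamma=2\sin\gamma\cos\gamma=0\;,\]

so every term in Eq. (35) vanishes identically, for arbitrary metric functions \(h(r)\), \(l(r)\) and \(m(r)\).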
It is also important to note that \(\nabla_{\mu}\mathcal{T}^{\mu r}=0\), since \(\gamma^{\prime}=0\), thus satisfying Eq. (20). Note also that \(\gamma=N\pi/2\) is a solution of the field equation Eq. (35) as well, but for even \(N\) it reduces \(U\) to the trivial configuration \(U=\pm 1\). Before analysing the Skyrme black-hole solutions in generalised teleparallel gravity, let us first consider TEGR.
### TEGR: \(f(T)=-T-2\Lambda\)
In the case of \(f(T)=-T-2\Lambda\), corresponding to TEGR, the teleparallel equations of motion given in Eq. (29) can be written as,
\[\frac{1}{8r^{2}hl^{2}m^{2}}\Bigg{(}-5r^{2}lm^{2}h^{\prime 2}+2rhm \left(-rmh^{\prime}l^{\prime}+2l\left(rmh^{\prime\prime}+rh^{\prime}m^{\prime} +2mh^{\prime}\right)-2\Lambda rl^{2}m\right)+h^{2}\Big{(}2rml^{\prime}\left(rm^ {\prime}+2m\right)\] \[+l\left(r^{2}m^{\prime 2}-4rm\left(rm\left(rm^{\prime\prime}+3m^{ \prime}\right)-4m^{2}\right)+4l^{2}m\right)\Bigg{)}=\frac{2\pi G}{e^{2}r^{4}m^ {2}}\Big{(}h\left(2e^{2}\kappa^{2}r^{2}m+h\right)\Big{)}\] \[\frac{1}{8}\left(\frac{h^{\prime 2}}{hl}-\frac{h\left(\left(rm^{ \prime}+2m\right)^{2}-4lm\right)}{r^{2}lm^{2}}-4\Lambda\right)=\frac{2\pi G}{e ^{2}r^{4}m^{2}}\Big{(}h\left(2e^{2}\kappa^{2}r^{2}m+h\right)\Big{)} \tag{39}\] \[-\frac{1}{8}\left(-\frac{h^{\prime 2}}{hl}+\frac{h\left(m\left(rl^ {\prime}m^{\prime}-2l\left(rm^{\prime\prime}+2m^{\prime}\right)\right)+2m^{2} l^{\prime}+rlm^{\prime 2}\right)}{rl^{2}m^{2}}-4\Lambda\right)=-\frac{2\pi Gh^{2}}{e^{2}r^{4}m^{2}}\;.\]
Solutions to these equations are computed as,
\[h(r) =C_{1}-\frac{C_{2}}{r}+\frac{4\pi G}{e^{2}r^{2}}-\frac{1}{3} \Lambda r^{2} \tag{40}\] \[l(r) =1\] \[m(r) =h(r)\]
where \(C_{1}\) and \(C_{2}\) are integration constants. Linearising the metric and comparing with the Newtonian limit yields \(C_{2}=2GM\), where \(M\) is the mass of the black-hole [12], while demanding the Minkowskian limit (for \(\Lambda=0\)) gives \(C_{1}=1-8\pi G\kappa^{2}\). Thus, \(h(r)\) finally becomes,
\[h(r):=1-8\pi G\kappa^{2}-\frac{2GM}{r}+\frac{4\pi G\kappa^{2} \lambda}{r^{2}}-\frac{1}{3}\Lambda r^{2} \tag{41}\]
where \(\lambda=1/(\kappa^{2}e^{2})\). Note that the black-hole solution obtained in Eq. (41) has been reported in the literature [64], where it was found using the conventional hedgehog ansatz in Riemannian geometry. A similar solution with \(\lambda=0\) (without the Skyrme term) was proposed in [70] to represent a global monopole within a black-hole. Furthermore, the solution with \(M=\lambda=\Lambda=0\) corresponds to the Barriola-Vilenkin monopole spacetime [71].
Using Eq. (36), the corresponding value of the topological charge is computed to be,
\[B=0\;.\]
Figure 1: The metric function \(h(r)\) is shown as a function of \(r/r_{h}\) for different black-hole masses \(M=M_{min}\), \(3.8\,M_{Pl}\) and \(4.0\,M_{Pl}\), where \(M_{min}\) is computed using Eq. (43) to be \(0.36299\,M_{Pl}\). We have chosen the coupling constant \(\alpha=8\pi G\kappa^{2}=0.1\), \(G=1\,M_{Pl}^{-2}\), \(e=1\) and \(\Lambda=10^{-120}\,M_{Pl}^{2}\).
The position of the Killing horizon, which marks the boundary of the black-hole region, is determined by the condition \(h(r_{h})=0\) [64]. Further, we will take \(\Lambda=0\) as an approximation, since \(\Lambda\ll 1\) (we take \(\Lambda=10^{-120}M_{Pl}^{2}\) [72]). Solving this condition for \(r_{h}\), we get two solutions corresponding to the outer and inner horizons,
\[r_{h}=\frac{GM}{1-8\pi G\kappa^{2}}\left(1\pm\sqrt{1-\frac{4\pi(1-8\pi G\kappa^{ 2})}{e^{2}GM^{2}}}\right) \tag{42}\]
Here, the upper sign gives the location of the outer horizon, while the lower sign gives the location of the inner horizon. In this paper, we denote \(r_{h}\) as the outer horizon and assume \(\kappa^{2}/M_{Pl}^{2}<1\) to keep the model physically sensible. The mass \(M\) and the horizon \(r_{h}\) are related as
\[M=\frac{1}{2G}\left((1-8\pi G\kappa^{2})r_{h}+\frac{4\pi G}{r_{h}e^{2}}\right)\]
For black-hole masses \(M<M_{min}\), \(r_{h}\) becomes non-physical due to the presence of imaginary terms. The expression for \(M_{min}\) is given by
\[M_{min}=\frac{2\sqrt{\pi}\sqrt{1-8\pi G\kappa^{2}}}{e\sqrt{G}} \tag{43}\]
For this value of \(M\), \(r_{h}\) takes the form
\[r_{h}=\frac{2}{e}\sqrt{\frac{G\pi}{1-8\pi G\kappa^{2}}}\]
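For clarity, the intermediate step can be spelled out explicitly (a short derivation consistent with the expressions above): with \(\Lambda=0\), the horizon condition \(h(r_{h})=0\) is the quadratic

\[\left(1-8\pi G\kappa^{2}\right)r_{h}^{2}-2GM\,r_{h}+\frac{4\pi G}{e^{2}}=0\;,\]

whose two roots reproduce Eq. (42), and solving the same relation for \(M\) gives the mass-horizon relation above. Demanding a real horizon, i.e., a non-negative discriminant \(G^{2}M^{2}\geq 4\pi G\left(1-8\pi G\kappa^{2}\right)/e^{2}\), yields the minimum mass in Eq. (43), and at \(M=M_{min}\) the two roots coincide at the value of \(r_{h}\) quoted above.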
To illustrate the behavior of \(h(r)\), we plot it, in Fig. 1, as a function of \(r/r_{h}\) for different values of black-hole mass M, namely \(M\)= \(M_{min}\), \(3.8\)\(M_{Pl}\) and \(4.0\)\(M_{Pl}\).
Additionally, in this case the energy momentum tensor components in Eq. (38) take the form
\[\rho_{m} =p_{r}=-\frac{1}{2e^{2}r^{4}}-\frac{\kappa^{2}}{r^{2}} \tag{44}\] \[p_{\theta} =p_{\varphi}=\frac{1}{2e^{2}r^{4}}\]
where \(\rho_{m}=\mathcal{T}^{t}{}_{t}\), \(p_{r}=\mathcal{T}^{r}{}_{r}\), \(p_{\theta}=\mathcal{T}^{\theta}{}_{\theta}\) and \(p_{\varphi}=\mathcal{T}^{\varphi}{}_{\varphi}\) are the energy density, radial pressure, \(\theta\)-angular pressure and \(\varphi\)-angular pressure of the Skyrme field, respectively. The energy density \(\rho_{m}\) (equal to the radial pressure \(p_{r}\)) and the angular pressures \(p_{\theta}=p_{\varphi}\) are shown as functions of \(r/r_{h}\) in Fig. (2a) and Fig. (2b), respectively.
### Weak power law \(f(T)\) gravity
The complexity of the field equations makes it extremely challenging to obtain an analytic solution for a generic \(f(T)\). Here we therefore use perturbation theory to find approximate solutions instead of exact analytical ones. In this subsection, we focus on the weak power-law \(f(T)\) model [73] given as,
\[f(T)=-T-\epsilon\tau T^{2}-2\Lambda \tag{45}\]
where \(\tau\) is the coupling constant and \(0<\epsilon\ll 1\) is the perturbation parameter. We assume that the deviation from TEGR is small (\(\epsilon\ll 1\)), so only first-order terms in \(\epsilon\) are kept in the calculations. We are interested in perturbations around the metric background defined in Eq. (22) with the solutions given in (40) and (41). Hence, we choose the following ansatz for \(g_{tt}\) and \(g_{rr}\).
\[g_{tt} =h(r)=w(r)^{2}+\epsilon u(r) \tag{46}\] \[g_{rr} =1/h(r)=w(r)^{-2}+\epsilon v(r)\]
where
\[w(r)=\sqrt{1-8\pi G\kappa^{2}-\frac{2GM}{r}+\frac{4\pi G\kappa^{2}\lambda}{r^{2}}-\frac{1}{3}\Lambda r^{2}} \tag{47}\]
The functions \(u(r)\) and \(v(r)\) are perturbations of the metric coefficients and depend only on the radial coordinate \(r\).
Using the \(f(T)\) expansion for weak power-law gravity up to first order in \(\epsilon\) in Eq. (29), we obtain the following set of equations,
\[\frac{1}{2r^{4}}\Bigg{(}4rw(r)^{3}\left(w^{\prime}(r)\left(r^{2}v(r )-\frac{12\tau}{w(r)}+6\tau\right)+4r\tau\left(1-\frac{2}{w(r)}\right)w^{\prime \prime}(r)\right)+4\tau w(r)^{2}\big{(}10r^{2}w^{\prime}(r)^{2}-9\big{)}\] \[+2w(r)\tau\left(4r^{2}\left(\frac{1}{w(r)}-6\right)w^{\prime}(r)^ {2}-\frac{1}{w(r)}+8\right)+w(r)^{4}\left(r^{3}v^{\prime}(r)+r^{2}v(r)+\frac{32 \tau}{w(r)}-10\tau\right)+8\tau rw(r)\Big{(}2rw^{\prime\prime}(r)\] \[+3w^{\prime}(r)\Big{)}\Bigg{)}=0\] \[\frac{1}{2r^{4}w(r)}\Bigg{(}2r^{3}u(r)w^{\prime}(r)+2rw(r)^{4}w^ {\prime}(r)\left(r^{2}v(r)+12\tau\left(1-\frac{2}{w(r)}\right)\right)+w(r)^{ 5}\left(r^{2}v(r)+2\tau\left(3-\frac{8}{w(r)}\right)\right)\] \[-4\tau w(r)^{3}\left(2r^{2}\left(\frac{4}{w(r)}-3\right)w^{ \prime}(r)^{2}-3\right)+w(r)\left(r^{3}\left(-u^{\prime}(r)\right)+8r^{2}\tau w ^{\prime}(r)^{2}-2\tau\right)+24r\tau w(r)^{2}w^{\prime}(r)\Bigg{)}=0 \tag{48}\] \[\frac{1}{4r^{4}w(r)}\Bigg{(}2r^{3}u(r)\left(rw^{\prime\prime}(r)+ w^{\prime}(r)\right)+w(r)^{5}\left(r^{3}v^{\prime}(r)+4\tau\left(\frac{8}{w(r)}-3 \right)\right)+8r\tau w(r)^{2}\big{(}4r^{2}w^{\prime}(r)^{3}+3rw^{\prime\prime }(r)\] \[+6w^{\prime}(r)\big{)}+rw^{\prime}(r)\left(r^{3}u^{\prime}(r)-16w (r)\left(r^{2}rw^{\prime}(r)^{2}+\tau\right)\right)+rw(r)^{4}\Bigg{(}2rw^{ \prime\prime}(r)\left(r^{2}v(r)-\frac{24\tau}{w(r)}+12\tau\right)\] \[+w^{\prime}(r)\left(r^{3}v^{\prime}(r)+6r^{2}v(r)-\frac{48\tau}{ w(r)}+16\tau\right)\Bigg{)}-4w(r)^{3}\Bigg{(}8r^{3}\tau\left(\frac{1}{w(r)}-1 \right)w^{\prime}(r)w^{\prime\prime}(r)-r^{2}w^{\prime}(r)^{2}\Big{(}r^{2}v(r)\] \[+6\tau\left(3-\frac{4}{w(r)}\right)\Bigg{)}+6\tau\Big{)}+w(r) \left(r^{4}\left(-u^{\prime\prime}(r)\right)-r^{3}u^{\prime}(r)+24r^{2}\tau w ^{\prime}(r)^{2}+4\tau\right)\Bigg{)}=0\]
The above set of equations is solved numerically and plotted in the range \(r/r_{h}=(1,\infty)\) in Fig. (3) for different black-hole masses \(M=M_{min}\), \(3.8\,M_{Pl}\) and \(4.0\,M_{Pl}\), where \(M_{min}\) is computed using Eq. (43). Here, we have imposed the initial conditions \(u(r_{h})=v(r_{h})=0\), which ensures that the TEGR geometry is preserved at the event horizon.
## IV Case 2: Skyrmions with \(\mathbf{B}\neq\mathbf{0}\)
In this section, we consider the scenario with non-trivial winding numbers for the Skyrme field. We consider two cases: TEGR, where \(f(T)=-T-2\Lambda\), and power law gravity, where \(f(T)=-T-\tau T^{n}-2\Lambda\) (\(\tau\) is a constant and \(n\in\mathbb{Z}\)). Moreover, unlike in the previous case, here we will solve for the metric functions in the region of interest and look for solutions of \(\gamma(r)\) such that the Skyrme equations of motion are satisfied.
Due to the complexity of the Skyrme field equation described by Eq. (35), we consider two simplifying regions of interest: solutions near the event horizon \(r_{h}\) (\((r-r_{h})\ll 1\)), and the far-field solution where \(r\gg r_{h}\). To study the region close to the event horizon of the black-hole, we Taylor expand the Skyrme solution around \(r=r_{h}\) as,
\[\gamma(r)=\gamma_{0}+\gamma_{1}(r-r_{h})+\gamma_{2}(r-r_{h})^{2}+\mathcal{O} \big{(}(r-r_{h})^{3}\big{)}\]
where \(\gamma_{0}\), \(\gamma_{1}\) and \(\gamma_{2}\) are constants. To obtain the far-field solution of the Skyrme equation Eq. (35), we assume that \(r\gg r_{h}\). In this limit the metric becomes flat and we can take the Minkowski limit \(h(r),l(r),m(r)\to 1\). Assuming \(\gamma(r)\) decays as \(r\) goes to infinity, we choose the ansatz for \(\gamma(r)\) as,
\[\gamma(r)=f(r)/r+\mathcal{O}(1/r^{2}) \tag{49}\]
where \(f(r)\) is linear in \(r\). Substituting the above equation in Eq. (35), using the Minkowski limit \(h(r),l(r),m(r)\to 1\) and solving in the far-field limit, we obtain \(\gamma(r)\) as [74]:
\[\gamma(r)=C/r+\mathcal{O}(1/r^{2})\]
where \(C\) is a constant.
### TEGR: \(f(T)=-T-2\Lambda\)
In this case, we consider the metric given by,
\[ds^{2}=-h(r)dt^{2}+g(r)\left(dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\varphi^ {2}\right) \tag{50}\]
where \(g(r)=l(r)/h(r)=m(r)/h(r)\). Thus, we can express the equations of motion Eq. (29) as,
\[-\frac{4\left(e^{2}r^{4}g^{\prime\prime}+2e^{2}r^{3}g^{\prime}-8 \pi Gr^{2}\gamma^{\prime 2}\sin^{2}(\gamma)-2\pi G\sin^{2}(\gamma)+2\pi G\sin^{2}( \gamma)\cos(2\gamma)\right)}{e^{2}r^{4}g^{2}}+\frac{16\pi G\kappa^{2}\left(r^{ 2}\gamma^{\prime 2}+2\sin^{2}(\gamma)\right)}{r^{2}g}\] \[-4\Lambda+\frac{3g^{\prime 2}}{g^{3}}=0\] \[\frac{4g\left(e^{2}r^{3}g^{\prime}+8\pi Gr^{2}\gamma^{\prime 2} \sin^{2}(\gamma)-4\pi G\sin^{4}(\gamma)\right)}{e^{2}}+16\pi G\kappa^{2}r^{2} g^{2}\left(r^{2}\gamma^{\prime 2}+\cos(2\gamma)-1\right)+\frac{2r^{3}gh^{ \prime}\left(rg^{\prime}+2g\right)}{h}\] \[+r^{4}g^{\prime 2}+4\Lambda r^{4}g^{3}=0\] \[-\frac{2}{g^{2}}\left(\frac{8\pi G\sin^{4}(\gamma)}{e^{2}r^{4}}+ g^{\prime\prime}+\frac{g^{\prime}}{r}\right)+\frac{16\pi G\kappa^{2}rh^{2} \gamma^{\prime 2}+rh^{\prime 2}-2h\left(rh^{\prime\prime}+h^{\prime}\right)}{rh^{2}g} -4\Lambda+\frac{2g^{\prime 2}}{g^{3}}=0\]
We assume that the metric exhibits Minkowskian behavior for \(r\gg r_{h}\). In order to solve these equations, we assume that the metric functions and the Skyrme function behave near the horizon (\((r-r_{h})\ll 1\)), up to leading order, as,
\[h(r) =h_{0}+h_{1}(r-r_{h})+\mathcal{O}\big{(}r-r_{h})^{2} \tag{51}\] \[g(r) =g_{0}+g_{1}(r-r_{h})+\mathcal{O}\big{(}r-r_{h})^{2}\] \[\gamma(r) =\gamma_{0}+\gamma_{1}(r-r_{h})+\mathcal{O}\big{(}r-r_{h})^{2}\]
where \(h_{0}\), \(h_{1}\), \(g_{0}\), \(g_{1}\), \(\gamma_{0}\), \(\gamma_{1}\) are constants. Next, we substitute Eq. (51) into the teleparallel equations of motion given in Eq. (29) and solve for the near-horizon solution of the metric function and Skyrme function. At the zeroth order of \((r-r_{h})\), the teleparallel equations of motion take the following form
\[e^{2}r_{h}^{2}g_{0}\left(\Lambda r_{h}^{2}g_{0}-1\right)-2\pi G \sin^{2}(\gamma_{0})\left(-4e^{2}\kappa^{2}r_{h}^{2}g_{0}+\cos(2\gamma_{0})-1 \right)=0\] \[\Lambda-\frac{4G\pi\sin^{4}(\gamma_{0})}{e^{2}r_{h}^{4}g_{0}^{2}}=0\]
Solving the above equations for \(g_{0}\) and \(\gamma_{0}\), we get the following solutions.
\[g_{0} =\frac{1}{4e\sqrt{\pi G\Lambda}\kappa^{2}r_{h}^{2}+2\Lambda r_{h }^{2}}\ \ \ \text{or}\ \ g_{0}=0 \tag{52}\] \[\gamma_{0} =\pm\sin^{-1}\left(\frac{1}{2\sqrt{\pi G}}\left(\sqrt{\frac{e}{2 \sqrt{\pi G}\kappa^{2}+\sqrt{\Lambda}}}\right)\right)+2\pi c_{1},\ \ \ \ c_{1}\in\mathbb{Z}\]
where \(c_{1}\) is a constant. Here, we choose the non-zero solution of \(g_{0}\), since \(g_{0}=0\) leads back to the \(B=0\) solution. Additionally, as the negative solution of \(\gamma(r)\) leads to a negative topological number, we consider only the \(+\) solution of \(\gamma(r)\). It is important to note that \(\Lambda\) has to be greater than zero in order for the above solutions to be physical. At first order in \((r-r_{h})\), we have the following equations
\[\frac{4\pi G}{e^{2}}\left(\sin(\gamma_{0})\left(2e^{2}g_{0}\kappa ^{2}r_{h}^{2}-\cos(2\gamma_{0})+1\right)\left(2\gamma_{1}r_{h}g_{0}\cos(\gamma_ {0})-\sin(\gamma_{0})(r_{h}g_{1}+2g_{0})\right)\right)+g_{0}r_{h}^{2}\left(g_{1 }r_{h}+2g_{0}\right)=0\] \[4\pi G\sin^{3}(\gamma_{0})(\sin(\gamma_{0})(r_{h}g_{1}+2g_{0})-2 \gamma_{1}r_{h}g_{0}\cos(\gamma_{0}))=0\]
Solving, we get
\[g_{1} =-\frac{2g_{0}}{r_{h}} \tag{53}\] \[\gamma_{1} =0\]
Next, we will analyze the expression of the Skyrme equation in the vicinity of the horizon, represented by Eq. (35). By solving for both the zeroth and first order terms in \((r-r_{h})\), we obtain the following results.
\[h_{0}=h_{1}=0. \tag{54}\]
The far-field solution of \(\gamma(r)\) takes the form
\[\gamma(r)=1/r+\mathcal{O}\big{(}1/r^{2}\big{)}\.\]
Using the above solutions, the near-field solutions of the metric functions and the Skyrme field are as follows:
\[\begin{split} h(r)&=\mathcal{O}\big{(}(r-r_{h})^{2}\big{)}\\ g(r)&=3g_{0}\Bigg{(}1-\frac{2r}{3r_{h}}\Bigg{)}+\mathcal{O}\big{(}(r-r_{h})^{2}\big{)}\\ \gamma(r)&=\sin^{-1}\Bigg{(}\frac{1}{2\sqrt{\pi G}}\left(\sqrt{\frac{e}{2\sqrt{\pi G}e\kappa^{2}+\sqrt{\Lambda}}}\right)\Bigg{)}+2\pi c_{1}+\mathcal{O}\big{(}(r-r_{h})^{2}\big{)},\ \ \ \ c_{1}\in\mathbb{Z}\end{split} \tag{55}\]
Now, the baryon number \(B\) in Eq. (36) becomes,
\[B=\frac{1}{\pi}\left(\sin^{-1}\left(\sigma\right)-\,\sigma\sqrt{1-\sigma^{2}} \right)+2c_{1} \tag{56}\]
where we have
\[\sigma=\frac{1}{2\sqrt{\pi G}}\left(\sqrt{\frac{e}{2\sqrt{\pi G}e\kappa^{2}+ \sqrt{\Lambda}}}\right)\]
It is evident that the cosmological constant \(\Lambda\) has to be positive for the existence of the Skyrmion in TEGR. Fig. (4) shows the variation of \(B\) with respect to \(\Lambda\).
Further, \(B\) tends to zero (or, more generally, to \(2c_{1}\)) as \(\Lambda\) goes to infinity. This behaviour matches the results in the Einstein-Skyrme system [75].
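To make this limiting behaviour explicit (a short expansion of our own, consistent with Eq. (56)), for small \(\sigma\) one has

\[\sin^{-1}\sigma=\sigma+\frac{\sigma^{3}}{6}+\mathcal{O}(\sigma^{5})\;,\qquad\sigma\sqrt{1-\sigma^{2}}=\sigma-\frac{\sigma^{3}}{2}+\mathcal{O}(\sigma^{5})\;,\]

so that

\[B-2c_{1}\simeq\frac{2\sigma^{3}}{3\pi}\;,\qquad\sigma\simeq\frac{\sqrt{e}}{2\sqrt{\pi G}}\,\Lambda^{-1/4}\quad\text{for}\quad\Lambda\to\infty\;,\]

i.e., \(B\) approaches \(2c_{1}\) as \(\Lambda^{-3/4}\).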
### Power law \(f(T)\) gravity
In this section, we aim to find solutions for the metric and Skyrme fields within the framework of the power law \(f(T)\) gravity model, represented by the equation [73]:
\[f(T)=-T-\tau T^{n}-2\Lambda \tag{57}\]
Figure 4: The variation of B with respect to the cosmological constant \(\Lambda\) in the case of TEGR for \(8\pi G\kappa^{2}=0.1\), \(G=1\,M_{Pl}^{-2}\), \(e=1\), \(\tau=1/2\,M_{Pl}^{-2}\) and \(c_{1}=0\).
As the field equations become increasingly intricate for \(n>2\), we will focus on the case \(n=2\) for the sake of simplicity. In this scenario, we consider the metric given in Eq. (50) and, again, since the field equations are complex to solve, we focus exclusively on the near-field solution of the metric and Skyrme functions and assume that the metric becomes Minkowski for large values of \(r\). Similar to the previous case of TEGR, we assume that the behaviour of the metric and Skyrme functions near the horizon (\((r-r_{h})\ll 1\)) is as given in Eq. (51). Substituting Eq. (57) and Eq. (51) in the teleparallel equations of motion Eq. (29), the teleparallel field equations take the following form,
\[e^{2}\left(g_{0}^{2}\Lambda r_{h}^{4}-g_{0}r_{h}^{2}+2\tau\right) -2\pi G\sin^{2}\left(\gamma_{0}\right)\left(\cos\left(2\gamma_{0}\right)-4e^{2 }g_{0}\kappa^{2}r_{h}^{2}-1\right)=0\] \[\frac{\Lambda}{2}-\frac{2G\pi\sin^{4}\left(\gamma_{0}\right)}{2e^ {2}g_{0}^{2}r_{h}^{4}}-\frac{\tau}{g_{0}^{2}r_{h}^{4}}=0\]
On solving this, we get
\[g_{0\pm}= -\frac{\Lambda\pm 2\sqrt{\pi}\sqrt{e^{2}G\kappa^{4}\Lambda\left(32 \pi e^{2}G\kappa^{4}\tau-8\Lambda\tau+1\right)}}{8\pi e^{2}G\kappa^{4}\Lambda r _{h}^{2}-2\Lambda^{2}r_{h}^{2}}\ \ \ \text{or}\ \ \ g_{0}=0 \tag{58}\] \[\gamma_{0}= \sin^{-1}\left(\sqrt{e}\left(\frac{g_{0\pm}^{2}\Lambda r_{h}^{4}- 2\tau}{4\pi G}\right)^{\frac{1}{4}}\right)+2\pi c_{1},\ \ \ \ c_{1}\in\mathbb{Z}\]
We choose the non-zero solution of \(g_{0}\) for the reason mentioned in the previous section. Now, to first order in \((r-r_{h})\), the teleparallel equations take the following form,
\[4\pi G\sin(\gamma_{0})\left(2e^{2}\kappa^{2}r_{h}^{2}g_{0}-\cos( 2\gamma_{0})+1\right)\left(2\gamma_{1}r_{h}g_{0}\cos(\gamma_{0})-\sin(\gamma_{0 })(r_{h}g_{1}+2g_{0})\right)+e^{2}\left(r_{h}^{2}g_{0}-4\tau\right)\left(r_{h} g_{1}+2g_{0}\right)=0\] \[2(r_{h}g_{1}+2g_{0})\left(e^{2}\tau+2\pi G\sin^{4}(\gamma_{0}) \right)-8\pi G\gamma_{1}r_{h}g_{0}\sin^{3}(\gamma_{0})\cos(\gamma_{0})=0\]
After solving this system of equations, we obtain the following results,
\[\gamma_{1}=0 \tag{59}\] \[g_{1}=-\frac{2g_{0\pm}}{r_{h}}\]
Now, let us examine the near-horizon form of the Skyrme equation, as given by Eq. (35). Solving for zeroth and first order in \((r-r_{h})\), we get
\[h_{0}=h_{1}=0 \tag{60}\]
Thus, the near-field solutions of the metric functions and the Skyrme field are given as,
\[h(r)=\mathcal{O}\big{(}(r-r_{h})^{2}\big{)} \tag{61}\] \[g(r)=3g_{0\pm}\Bigg{(}1-\frac{2r}{3r_{h}}\Bigg{)}+\mathcal{O}\big{(}(r-r_{h})^{2}\big{)}\] \[\gamma(r)=\sin^{-1}\left(\alpha_{\pm}\right)+2\pi c_{1}+\mathcal{O}\big{(}(r-r_{h})^{2}\big{)},\ \ \ c_{1}\in\mathbb{Z}\]
where we have
\[\alpha_{\pm}=\sqrt{e}\left(\frac{g_{0\pm}^{2}\Lambda r_{h}^{4}-2\tau}{4\pi G} \right)^{\frac{1}{4}}.\]
The far-field solution of \(\gamma(r)\) again becomes,
\[\gamma(r)=1/r+\mathcal{O}\big{(}1/r^{2}\big{)}\.\]
Further, the reality condition on \(g_{0\pm}\) and \(\gamma_{0}\) appearing in Eq. (61) (which we refer to as condition-1) is given by,
\[\Lambda\leq\frac{32\pi e^{2}G\kappa^{4}\tau+1}{8\tau} \tag{62}\]
Now we can use Eq. (36) to find the expression of the topological number \(B\), which is obtained as,
\[B_{\pm}=\frac{1}{\pi}\left(\sin^{-1}\left(\alpha_{\pm}\right)-\alpha_{\pm} \sqrt{1-\alpha_{\pm}{}^{2}}\right)+2c_{1},\ \ \ c_{1}\in\mathbb{Z} \tag{63}\]
\(B_{+}\) and \(B_{-}\) are the topological numbers corresponding to the two solutions; they depend on the cosmological constant \(\Lambda\) but are independent of the event horizon \(r_{h}\). Both solutions coincide when \(\kappa\to 0\). The reality condition of \(B_{\pm}\) (condition-2) then becomes,
\[\alpha_{\pm}\leq 1\implies e^{2}\left(\frac{\Lambda\left(2\sqrt{\pi}\sqrt{e^{2}G \kappa^{4}\Lambda\left(32\pi e^{2}G\kappa^{4}\tau-8\Lambda\tau+1\right)}+ \Lambda\right)^{2}}{\left(8\pi e^{2}G\kappa^{4}\Lambda-2\Lambda^{2}\right)^{2} }-2\tau\right)\leq 4\pi G \tag{64}\]
In Fig. (5), we show the variation of \(B\) with respect to \(\Lambda\). We can see that there is a \(\Lambda_{max}\) above which \(B\) does not exist, and also a \(\Lambda_{min}\) below which \(B\) does not exist. In the limit as \(\tau\) approaches zero, the geometry reduces to TEGR; in this limit, \(\Lambda_{max}\) approaches infinity and \(\Lambda_{min}\) approaches zero. Consequently, we recover the result obtained from TEGR, thereby ensuring consistency. In Fig. (6), the dependence of the lower and upper limits of \(\Lambda\) on the \(\tau\) parameter is shown.
Figure 5: The variation of B with respect to the cosmological constant \(\Lambda\) for \(8\pi G\kappa^{2}=0.1\), \(G=1\,M_{Pl}^{-2}\), \(e=1\), \(\tau=1/2\,M_{Pl}^{-2}\) and \(c_{1}=0\).
Figure 6: Here we show the parameter space that satisfies condition-1 and condition-2, given in Eq. (62) and Eq. (64) respectively, for \(8\pi G\kappa^{2}=0.1\), \(G=1\,M_{Pl}^{-2}\) and \(e=1\). The overlap region represents the region where \(g_{0\pm}\), \(\gamma_{0}\) and \(B_{\pm}\) are real. This also shows that, for a given \(\tau\), there exist a minimum and a maximum value of the cosmological constant (\(\Lambda_{min}\) and \(\Lambda_{max}\)).
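As a practical illustration (a minimal numerical sketch of our own, not code from the paper), one can scan \(\Lambda\) and keep the values for which condition-1 (Eq. (62)) and condition-2 (Eq. (64)) hold simultaneously for the '+' branch, using the parameter values quoted in the figure captions (\(8\pi G\kappa^{2}=0.1\), \(G=1\,M_{Pl}^{-2}\), \(e=1\), \(\tau=1/2\,M_{Pl}^{-2}\)); the end points of the surviving interval are then estimates of \(\Lambda_{min}\) and \(\Lambda_{max}\).

```python
# A numerical sketch (our own, not from the paper): scan Lambda and keep values
# where condition-1 (Eq. 62) and condition-2 (Eq. 64) hold for the '+' branch,
# using the parameter values quoted in the figure captions.
import numpy as np

G, e, tau = 1.0, 1.0, 0.5                    # Planck units, tau = 1/2 M_Pl^-2
kappa2 = 0.1 / (8.0 * np.pi)                 # from 8*pi*G*kappa^2 = 0.1
kappa4 = kappa2 ** 2

def skyrmion_exists(lam):
    """True when g0+, gamma0 and B+ are all real for this value of Lambda."""
    # condition-1, Eq. (62): reality of the square root entering g0+-
    if lam > (32.0 * np.pi * e**2 * G * kappa4 * tau + 1.0) / (8.0 * tau):
        return False
    root = np.sqrt(e**2 * G * kappa4 * lam *
                   (32.0 * np.pi * e**2 * G * kappa4 * tau - 8.0 * lam * tau + 1.0))
    denom = 8.0 * np.pi * e**2 * G * kappa4 * lam - 2.0 * lam**2
    # r_h-independent combination g0+^2 * Lambda * r_h^4 entering alpha_+
    g2_lam_rh4 = lam * (2.0 * np.sqrt(np.pi) * root + lam) ** 2 / denom**2
    alpha4_scaled = e**2 * (g2_lam_rh4 - 2.0 * tau)      # equals 4*pi*G*alpha_+^4
    # alpha_+ must be real (>= 0) and at most 1 (condition-2, Eq. 64)
    return 0.0 <= alpha4_scaled <= 4.0 * np.pi * G

lams = np.logspace(-6, 2, 100000)
ok = np.array([skyrmion_exists(l) for l in lams])
if ok.any():
    print("approx. Lambda_min =", lams[ok].min(), "  Lambda_max =", lams[ok].max())
```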
## V Results and Discussions
In this paper, we investigated Skyrmions in the context of teleparallel gravity. We considered two cases, with vanishing and non-vanishing baryon numbers (\(B=0\) and \(B\neq 0\)), and computed the Skyrme solution and the metric corrections in TEGR (\(f(T)=-T-2\Lambda\)) and in power law gravity. In generalised teleparallel \(f(T)\) gravity, we used the power-law models given by Eq. (45) and Eq. (57), together with perturbative approaches, to obtain the corrections. For \(B\neq 0\), we computed the Skyrme solution and metric perturbation both in the near-horizon limit (\((r-r_{h})\ll 1\)) and in the far limit (\(r\gg r_{h}\)).
As expected, the solutions for \(B=0\) in TEGR match the Einstein-Skyrme system. While in this case the Skyrme solution \(\gamma(r)\) is independent of the cosmological constant and the Skyrme parameters, in the case with \(B\neq 0\) the Skyrme function depends on \(\Lambda\). Moreover, for \(f(T)=-T-2\Lambda\), the \(B\neq 0\) Skyrme solution is physical only if the cosmological constant is positive, which again matches the results in the Einstein-Skyrme system [75]. On the other hand, for the power law gravity model given in Eq. (57), the Skyrme solution and the baryon number not only depend on the cosmological constant, but a non-zero lower bound and upper bound on \(\Lambda\) (\(\Lambda_{min}\) and \(\Lambda_{max}\)) emerge, such that the Skyrmion is physical only if the condition \(\Lambda_{min}<\Lambda<\Lambda_{max}\) is satisfied. The results are shown in Fig. (5). Moreover, the lower and upper limits on \(\Lambda\) depend on the \(\tau\) parameter, as shown in Fig. (6).
## Acknowledgments
The authors would like to thank Prof. Jutta Kunz and Prof. Eugen Radu for their helpful comments and discussions on this work. M.T.A. acknowledges financial support of DST through INSPIRE Faculty grant [DST/INSPIRE/04/2019/002507].
|
2307.16084
|
PD-SEG: Population Disaggregation Using Deep Segmentation Networks For
Improved Built Settlement Mask
|
Any policy-level decision-making procedure and academic research involving
the optimum use of resources for development and planning initiatives depends
on accurate population density statistics. The current cutting-edge datasets
offered by WorldPop and Meta do not succeed in achieving this aim for
developing nations like Pakistan; the inputs to their algorithms provide flawed
estimates that fail to capture the spatial and land-use dynamics. In order to
precisely estimate population counts at a resolution of 30 meters by 30 meters,
we use an accurate built settlement mask obtained using deep segmentation
networks and satellite imagery. The Points of Interest (POI) data is also used
to exclude non-residential areas.
|
Muhammad Abdul Rahman, Muhammad Ahmad Waseem, Zubair Khalid, Muhammad Tahir, Momin Uppal
|
2023-07-29T21:42:44Z
|
http://arxiv.org/abs/2307.16084v1
|
PD-Seg: Population Disaggregation Using Deep Segmentation Networks for Improved Built Settlement Mask
###### Abstract
Any policy-level decision-making procedure and academic research involving the optimum use of resources for development and planning initiatives depends on accurate population density statistics. The current cutting-edge datasets offered by WorldPop and Meta do not succeed in achieving this aim for developing nations like Pakistan; the inputs to their algorithms provide flawed estimates that fail to capture the spatial and land-use dynamics. In order to precisely estimate population counts at a resolution of 30 meters by 30 meters, we use an accurate built settlement mask obtained using deep segmentation networks and satellite imagery. The Points of Interest (POI) data is also used to exclude non-residential areas.
M. A. Rahman, M. A. Waseem, Z. Khalid, M. Tahir, M. Uppal
Department of Electrical Engineering, Lahore University of Management Sciences, Lahore 54792, Pakistan
Footnote †: This work was supported under the Grand Challenge Fund of the Higher Education Commission, Pakistan (Grant Number: GCF-521).
Disaggregation, Census, Deep Learning, GIS, Built Mask
## 1 Introduction
For various decision-making processes and development initiatives, such as urban growth, infectious disease containment, evacuation planning, risk management and conservation planning, accurate population density data is essential. Census data disaggregation using survey-based methods lacks the precision needed for these applications and is seldom done because of the time and expense requirements [1].
There have been several attempts in the literature to accurately disaggregate census data. Azar et al. use remotely sensed data combined with a likelihood layer to disaggregate census data at 100 meter resolution [2]. Linard et al. use land classification and settlement points for disaggregation, also at a resolution of 100 meters [3]. However, the existing world-wide population grids contain flaws that render them useless [4]. Even cutting-edge datasets like WorldPop (100m x 100m) and Meta (30m x 30m) have several flaws, particularly in the case of developing nations like Pakistan where high-quality urban data is not readily accessible for public use [5, 6]. For example, both Meta and WorldPop have disaggregated the population based on 2010 census estimates (provided by Demobase), and therefore do not accurately capture the current dynamics of 2023. Additionally, these projections are made at the tehsil level, the second administrative level out of a total of five levels. Furthermore, WorldPop and Meta both utilize low-quality covariates and methodologies to disaggregate the population, which leads to errors. Meta equally disaggregates the population estimates across built-up tiles for each tehsil, while the WorldPop dataset, even the constrained one, displays high population counts in physically desolate regions.
The following contributions address all of the aforementioned issues:
1. Disaggregation based on the most recent (2017) publicly available census data at the second-highest resolution, i.e., the fourth administrative level.
2. Accurate built-up mask developed using deep segmentation networks and satellite imagery. Built-up proportions per tile are used to determine population density and Points of Interest (POI) data is used to remove non-residential regions.
The rest of the paper is structured as follows: Methodology (section 2) explains the built-up mask and POI based disaggregation technique, Evaluation and Analysis (section 3) discusses and compares our results with state-of-the-art approaches, and Conclusion (section 4) provides the concluding remarks.
## 2 Methodology
As previously mentioned, old low-resolution population aggregates and unreliable built settlement masks that support naive disaggregation methods result in sub-par population density maps for Pakistan. We set out to determine an accurate population density map of Lahore using the latest census data, satellite imagery and POI.
### Census Data
We obtain the most recent 2017 census data from the PBS website and convert it into GIS vector files up to the circle, or the fourth administrative level, as part of the technique shown in Figure 1. The Lahore district is divided into 7 tehsils, 184
charges, 867 circles and 6,764 blocks. The city has a population of around 11.13 million, with approximately 1,744,755 households [7, 8]. A higher administrative level for disaggregation significantly improves the density estimates due to a finer resolution of census data.
### Segmentation Network
The built-up area prediction masks obtained through the deep semantic segmentation model are used to disaggregate the population counts into 30m x 30m tiles. The deep network is built on the DeepLabV3+ architecture with a dilated ResNet encoder [9] and is trained using a Dice loss on manually annotated datasets covering various parts of Lahore, Pakistan. We train the model for 80 epochs using an 80-20 train-validation split and a batch size of 8 [10]. We use Google Earth satellite imagery at a fine resolution of zoom level 20 (about 0.3 meters per pixel) to create high-quality built settlement masks.
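The training setup described above can be summarised in a short sketch. The snippet below is our own illustration (not the authors' code): it uses torchvision's DeepLabV3 with a ResNet-50 backbone as a stand-in for the DeepLabV3+ model with a dilated ResNet encoder, a manually written Dice loss, an 80-20 train-validation split, a batch size of 8 and 80 epochs; `SatelliteTiles` is a hypothetical Dataset yielding (image, mask) pairs from the annotated Lahore imagery.

```python
# A training sketch for the built-up segmentation model (our illustration, not
# the authors' code). torchvision's DeepLabV3 + ResNet-50 stands in for the
# DeepLabV3+ / dilated-ResNet model of the paper; SatelliteTiles is a
# hypothetical Dataset returning (image, binary_mask) pairs.
import torch
from torch.utils.data import DataLoader, random_split
from torchvision.models.segmentation import deeplabv3_resnet50

def dice_loss(logits, target, eps=1.0):
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

device = "cuda" if torch.cuda.is_available() else "cpu"
model = deeplabv3_resnet50(weights=None, num_classes=1).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

dataset = SatelliteTiles("lahore_annotations/")          # hypothetical Dataset
n_train = int(0.8 * len(dataset))                        # 80-20 train-val split
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=8, shuffle=True)

for epoch in range(80):                                  # 80 epochs
    model.train()
    for images, masks in train_loader:                   # masks: (B,1,H,W) in {0,1}
        images, masks = images.to(device), masks.float().to(device)
        logits = model(images)["out"]                    # torchvision returns a dict
        loss = dice_loss(logits, masks)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```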
### Disaggregation
After determining built-up regions, we exclude the tiles containing POI using a mask, denoted by \(T_{i}\) for \(i\)-th tile as
\[T_{i}=\begin{cases}0,&\exists\,j,\,B(R,POI_{ij})\geq P\\ 1,&\text{otherwise}\end{cases} \tag{1}\]
for \(i=1,2,...,N\), where \(R\) is the radius value, \(POI\) is the Points of Interest dataset, \(B\) is the buffer and \(P\) is the threshold number of points. The algorithm draws a buffer \(B\) of radius \(R\) around a point in \(POI\); if \(B\) contains a number of points greater than or equal to \(P\), \(T_{i}\) is removed. This method increases the probability of removing only those tiles that cover non-residential built-up areas, since an isolated POI located within a residential area is unlikely to trigger removal. For this purpose, \(R\) is set to 500 meters and \(P\) is set to 5. The circle-level population is then divided among the remaining residential tiles by weighting each tile according to the quantity of built-up pixels present in that tile, as illustrated below.
\[P_{\text{tile}}=\frac{\sum\limits_{i=1}^{N_{t}}f(i)}{\sum\limits_{j=1}^{N_{c}}f(j)}\times P_{\text{circle}}\quad\forall\,\text{tile}\in\text{circle}, \tag{2}\]
where \(P_{\text{circle}}\) is the population of the circle to which the tile belongs, \(N_{t}\) is the total number of pixels in the tile, \(N_{c}\) is the total number of pixels in the circle, and \(f\) is a function which takes a pixel position as input and returns 1 if it belongs to the built-up class and 0 otherwise. Using this approach, we are able to concentrate a higher density into the tiles with higher built-up proportions while simultaneously excluding the unbuilt tiles/regions.
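A minimal sketch of this disaggregation step (our own illustration, not the authors' implementation) is given below. It approximates Eq. (1) by dropping a tile whenever a POI falling inside it (here, within half a tile width of its centroid) lies in a cluster of at least \(P\) POIs within radius \(R\), and then applies the built-up-pixel weighting of Eq. (2); the `tiles` list, its field names and the metric coordinate system are assumptions made for illustration.

```python
# A sketch of the POI-based exclusion (Eq. 1) and population weighting (Eq. 2).
# Assumed (hypothetical) inputs:
#   tiles: list of dicts with 'built_pixels', 'circle_id', 'xy' (centroid, metres)
#   poi_xy: array of POI coordinates in the same metric CRS
#   circle_population: dict mapping circle_id -> census population
import numpy as np

R = 500.0          # buffer radius in metres
P = 5              # POI count threshold inside the buffer
TILE_HALF = 15.0   # half of a 30 m tile, proxy for "POI falls inside the tile"

def residential_mask(tiles, poi_xy):
    """Eq. (1): drop tile i if a POI in it has at least P POIs within radius R."""
    poi_xy = np.asarray(poi_xy, dtype=float)
    d_poi = np.linalg.norm(poi_xy[:, None, :] - poi_xy[None, :, :], axis=-1)
    clustered = (d_poi <= R).sum(axis=1) >= P        # POIs with dense buffers
    keep = np.ones(len(tiles), dtype=bool)
    for i, tile in enumerate(tiles):
        d = np.linalg.norm(poi_xy - np.asarray(tile["xy"], dtype=float), axis=1)
        if np.any(clustered & (d <= TILE_HALF)):     # non-residential built-up tile
            keep[i] = False
    return keep

def disaggregate(tiles, circle_population, keep):
    """Eq. (2): split each circle's population over its residential tiles."""
    built_per_circle = {}
    for tile, k in zip(tiles, keep):
        if k:
            cid = tile["circle_id"]
            built_per_circle[cid] = built_per_circle.get(cid, 0) + tile["built_pixels"]
    pop = np.zeros(len(tiles))
    for i, (tile, k) in enumerate(zip(tiles, keep)):
        total = built_per_circle.get(tile["circle_id"], 0)
        if k and total > 0:
            pop[i] = tile["built_pixels"] / total * circle_population[tile["circle_id"]]
    return pop
```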
## 3 Evaluation and Analysis
A comparison of our population density maps with those offered by Meta and WorldPop is shown in Fig. 2. WorldPop unconstrained (a) tends to overestimate population counts by projecting population onto agricultural and barren land. WorldPop constrained (b) tries to overcome this shortcoming but also has severe limitations since it uses a built settlement growth model to determine the built-up area. The effectiveness of the Meta (c) dataset is limited since it uses the same population estimate for each tile in a tehsil. The proposed (d) method performs significantly better in estimating population counts than the other state-of-the-art methods since it uses up-to-date census data at the fourth administrative level along with precise built settlement masks. We are able to compute distinct population estimates for each tile using (2) and eliminate unbuilt regions by taking into consideration the proportion of built-up area in a tile relative to the total built-up area in a circle. Non-residential built-up tiles are removed using the POI data. The effectiveness of the built settlement mask is displayed in Table 1 for each of the aforementioned datasets.
Figure 1: Proposed pipeline for processing data. To create a population density map, we first get the built-up estimates from deep learning models applied to satellite imagery and then combine them with census and POI data.
## 4 Conclusion
In order to more accurately estimate population counts at a 30-meter by 30-meter resolution, our method advances previous approaches by integrating more dependable built settlement masks and the latest census results, and by excluding non-residential areas. In addition, we plan to calculate the population of the entire country of Pakistan by relating the population in each tile to the built-up area while accounting for the geographic diversity in Lahore's demography. We hope to help close the data gap in urban policy and research by making our high-resolution, accurate population estimates available to the public.
|
2305.09905
|
Entanglement-based Mutual Quantum Distance Bounding
|
Mutual distance bounding (DB) protocols enable two distrusting parties to
establish an upper-bound on the distance between them. DB has been so far
mainly considered in classical settings and for classical applications,
especially in wireless settings, e.g., to prevent relay attacks in wireless
authentication and access control systems, and for secure localization. While
recent research has started exploring DB in quantum settings, all current
quantum DB (QDB) protocols employ quantum-bits (qubits) in the rapid-bit
exchange phase and only perform one-way DB. Specifically, the latest QDB
proposals improve the initial ones by adding resistance to photon number
splitting attacks, and improving round complexity by avoiding communication
from the prover to the verifier in the last authentication phase. This paper
presents two new QDB protocols that differ from previously proposed protocols
in several aspects: (1) to the best of our knowledge, our protocols are the
first to utilize entangled qubits in the rapid-bit exchange phase, previous
protocols relied on sending individual qubits, not those from a pair of
entangled ones; (2) our second protocol can perform mutual QDB between two
parties in one execution, previous QDB protocols had to be executed twice with
the prover and verifier roles reversed in each execution; (3) the use of
entangled qubits in our protocols thwarts attacks that previous QDB protocols
were prone to; (4) and finally, our protocols also eliminate the need for
communication from the prover to the verifier in the last authentication phase,
which was necessary in some previous QDB protocols. Our work paves the way for
several interesting research directions which we briefly discuss in detail in
the appendix.
|
Aysajan Abidin, Karim Eldefrawy, Dave Singelee
|
2023-05-17T02:28:00Z
|
http://arxiv.org/abs/2305.09905v1
|
# Entanglement-based Mutual Quantum Distance Bounding
###### Abstract
Mutual distance bounding (DB) protocols enable two distrusting parties to establish an upper-bound on the distance between them. DB has been so far mainly considered in classical settings and for classical applications, especially in wireless settings, e.g., to prevent relay attacks in wireless authentication and access control systems, and for secure localization. While recent research has started exploring DB in quantum settings, all current quantum DB (QDB) protocols employ quantum-bits (qubits) in the rapid-bit exchange phase, and only perform one-way DB. Specifically, the latest QDB proposals improve initial ones by adding resistance to photon number splitting attacks, and improving round complexity by avoiding communication from the prover to verifier in the last authentication phase.
This paper presents two new QDB protocols that differ from previously proposed protocols in several aspects: (1) to the best of our knowledge, our protocols are the first to utilize entangled qubits in the rapid-bit exchange phase, previous protocols relied on sending individual qubits, not those from a pair of entangled ones; (2) our second protocol can perform mutual QDB between two parties in one execution, previous QDB protocols had to be executed twice with the prover and verifier roles reversed in each execution; (3) the use of entangled qubits in our protocols thwarts attacks that previous QDB protocols were prone to; (4) and finally, our protocols also eliminate the need for communication from the prover to the verifier in the last authentication phase, which was necessary in some previous QDB protocols. Our work paves the way for several interesting research directions which we briefly discuss in detail in the appendix.
Keywords:Mutual authentication, distance bounding; quantum distance bounding, quantum communication, wireless security.
## 1 Introduction
Distance Bounding (DB) protocols are cryptographic protocols that combine entity authentication and proximity verification. These protocols enable a verifier to establish an upper-bound on the distance to an untrusted prover. DB was introduced by Brands-Chaum [1] as a primitive to prevent relay attacks on Automatic Teller Machines (ATM) systems. Following this initial proposal of Brands-Chaum, several new DB protocols were proposed, and also implemented [2] and experimentally evaluated. More background on classical DB protocols and their main design blueprints can be found in appendix.
All DB protocols proposed in the literature so far, require an (unpredictable) rapid exchange of bits, i.e., a sequence of fast challenge-response phases. The security of this primitive, and hence also the DB protocol itself, relies on the laws of physics, i.e., that the speed of light is an upper bound on the speed of electromagnetic waves. In RF-based DB protocols, this means in practice that adversaries cannot transmit signals faster than the speed of light, and cannot force signals to arrive faster at the receiver than the actual propagation time to travel the distance to the receiver. This physical law is used to establish an upper bound on the (physical) distance between the prover and the verifier. It is worth noting that recent research [3] has questioned certain assumptions (and "folklore" design guidelines) and adversarial models in the DB research literature, e.g., that only single bits should be transmitted in the fast-bit exchange phase; we do not tackle such advanced issues in this work.
Up to a decade ago, all DB protocols focused on RF-based DB. A couple of recent proposals [4, 5] explored the possibility of developing Quantum DB (QDB) protocols. Such QDB protocols rely on principles of quantum physics, namely, that unlike classical bits sent over classical communication channels, quantum bits (qubits) cannot be measured without modifying their states. In particular, if a qubit is in an unknown state, an adversary cannot extract any information about it without destroying it. We review such QDB protocols in Section 2. While such protocols already realize some practical versions of one-way QDB, there are still a lot of basic open questions and work to be performed in this emerging research topic; there are also several unexplored areas in the possible protocol design landscape.
**Applications of QDB.** There are a couple of emerging (quantum communication related) applications that could benefit from QDB. The first application is enhancing quantum key distribution (QKD) by augmenting it with QDB. In recent years, QKD systems have been experimentally evaluated over terrestrial [6, 7, 8] and satellite systems [9]; to further extend distances over which QKD can operate, such systems could be augmented by quantum relays or repeaters [10, 11] which seem likely to be constructed in the coming years. We argue that QDB could be added to key establishment functionalities in such systems to obtain distance-bounded mutually shared keys where both the security of the derived keys, and bounds on the distance of entities such keys are established with, are based on quantum/physical properties. Another closely-related application that we envision is to augment systems used to distribute entanglement in what has been termed the "quantum Internet" [12] with distance bounding guarantees. We acknowledge that this topic (quantum Internet) is still at a very early stage and may even be regarded by some as impractical, but we stress that experimental demonstrations for aspects of it justify why it is physically realizable and interesting to explore. What is unclear is whether such entanglement distribution systems will have practical applications in the near future, but this is out of scope of this paper. We just point out that QDB may be useful in explorations of that topic.
## Contributions
In this paper we make the following contributions: (1) We present two new QDB protocols that (to the best of our knowledge) are the first QDB protocols to utilize entangled qubits in the rapid-bit exchange phase; previous protocols relied on sending individual qubits, not qubits from an entangled pair. (2) Our second protocol can perform mutual QDB between two parties (acting both as prover and verifier) in one execution, whereas previous QDB protocols had to be executed twice with those roles reversed in each execution. (3) We analyze the security and practicality of our QDB protocols and conclude that they are within reach of commercially available QKD equipment and other equipment required for similar experimental setups; both types of equipment have been tested and verified several times, as reported in the existing literature.
## Why Use Entanglement in QDB?
This is a natural question given that there are QDB protocols [4, 5] that utilize un-entangled quantum particles in the rapid-bit exchange phase. The benefits of using entangled particles are:
1. Using entangled particles in QDB enables us to develop a mutual QDB protocol that requires 25% fewer rounds (and thus less communication) than performing two independent executions of a one-way QDB protocol with the parties assuming reversed roles; there are currently no other proposals for performing mutual QDB.
2. In existing QDB protocols that rely on un-entangled quantum particles, the (pseudo) random challenge is first classically produced by the verifier and then encoded as a quantum particle. When entangled particles are used in our QDB protocols, such a (physically) random challenge remains unknown to an honest verifier (and thus also the adversary that may compromise it during protocol execution) until later when measured; this provides less room for an adversary to cheat in the challenge generation step. We view this property as an enabler for _device independent security_ because the processing device used in QDB cannot bias the randomness used in the protocol in this case.
3. Our QDB protocols can detect a new class of attacks, where a cheating prover immediately reflects challenge bits back without processing them in an attempt to shorten the distance perceived by the verifier. When entangled particles are used, (unusually) strong correlations between the particles retained by the verifier and those reflected by prover - pretending to be legitimate responses to the verifier's challenges - can be detected at the verifier.
Finally, we note that our protocols would still work if the entangled quantum particles (qubits) are replaced with un-entangled ones, but in this case the security of our protocols reduces to the security of the pseudorandom number generator (PRNG) used to generate the (pseudo) random challenge bits.
**Paper Outline.** The rest of this paper is organized as follows. Section 2 reviews the preliminaries required for this paper and briefly discusses related work. Section 3 contains the main novel material of this paper: it introduces two new entanglement-based QDB protocols, one that performs only one-way DB and one that performs mutual DB. Section 4 contains the security analysis of the proposed QDB protocols. Section 5 presents open issues and future work, and Section 6 concludes the paper.
## 2 Background and Preliminaries
This section covers the necessary background required for the rest of the paper. We start with a description of classical DB protocols, followed by a brief overview of qubits and quantum entanglement, and then discuss related work in QDB.
### Classical distance bounding protocols
DB protocols [1] allow one entity (verifier) to obtain an upper-bound on the distance to another entity (prover), in addition to authenticating the latter. Figure 1 shows an example of a generic DB protocol: the Hancke-Kuhn protocol [13], where \(k_{p}\) is a shared key between the prover and the verifier, \(f\) is a pseudorandom function, and \(a\) and \(b\) are of length \(n\)-bit each. The core of any _one-way DB_ protocol is the distance measurement phase, whereby the verifier measures round-trip time between sending its challenge and receiving the reply from the prover. The verifier's challenges are unpredictable to the prover and replies are computed as a function of these challenges. Thus, the prover cannot reply to the verifier sooner than it received the challenges. The prover, therefore, cannot pretend to be closer to the verifier than it really is (only further).
The first (Brands-Chaum) DB protocol [1] comprises three phases, namely, an initialisation phase, a rapid-bit exchange phase consisting of \(n\) rounds, and an authentication phase. In the initialisation phase, the prover first commits to a randomly generated \(n\)-bit nonce \(N\). Then in the \(i\)-th round of the rapid-bit exchange phase, for \(i=1,\cdots,n\), the verifier sends a random challenge bit \(c_{i}\) to the prover, and the prover computes and responds with \(r_{i}=N_{i}\oplus c_{i}\). In the last phase, the verifier checks the reply and measures the elapsed time between each challenge and response. The protocol completes successfully only if _all_\(n\) rounds succeed and all responses correspond to the prover's committed value
(i.e., \(c_{i}\oplus r_{i}=N_{i}\), \(i=1,\cdots,n\)). The processing time on the prover's side \(\alpha=t_{s}^{P}-t_{r}^{P}\) must be negligible (compared to time of flight of the signal); otherwise, a computationally powerful prover could claim a false bound. This time might be tolerably small, depending on the underlying technology, the distance measured and required security guarantees.
The security of DB protocols relies on two assumptions: (1) challenges are random and unpredictable to the prover before being sent by the verifier; (2) challenges traverse the distance between the prover and the verifier at the maximum possible speed, i.e., the speed of electromagnetic waves. After executing the DB protocol, the verifier knows that the distance to the prover is at most \(\frac{t_{r}^{V}-t_{s}^{V}-\alpha}{2}\cdot c\), where \(\alpha\) is the processing time of the prover (ideally, negligible) and \(c\) is the speed of light [1]. DB protocols typically require \((2\cdot n+\mathcal{C})\) messages, where \(\mathcal{C}\) is the number of messages exchanged in the pre- and post-processing protocol phases. Typically, \(\mathcal{C}\ll n\) and thus can be ignored.
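To make the timing computation concrete, the following minimal Python sketch (our illustration, not taken from [1] or [13]) simulates the verifier side of a Hancke-Kuhn-style rapid-bit exchange: the registers \(a\) and \(b\) are replaced by locally generated random bits instead of the PRF output, the channel is an in-process function call, and the worst-case round-trip time is converted into a distance bound using the formula above.

```python
import os
import time

C = 299_792_458.0  # speed of light in m/s

def rapid_bit_exchange(n, prover_reply, proc_time=0.0):
    """Verifier side: send n unpredictable challenge bits, time each round,
    and turn the worst round-trip time into a distance upper bound."""
    worst_rtt = 0.0
    transcript = []
    for i in range(n):
        c_i = os.urandom(1)[0] & 1          # random challenge bit
        t_send = time.perf_counter()
        r_i = prover_reply(i, c_i)          # challenge travels to the prover and back
        t_recv = time.perf_counter()
        worst_rtt = max(worst_rtt, t_recv - t_send)
        transcript.append((c_i, r_i))
    # bound = ((t_r - t_s) - alpha) / 2 * c, cf. the formula above
    return transcript, max(worst_rtt - proc_time, 0.0) / 2.0 * C

# toy honest prover: registers a, b stand in for f_{k_p}(N_v, N_p)
a = [os.urandom(1)[0] & 1 for _ in range(8)]
b = [os.urandom(1)[0] & 1 for _ in range(8)]
honest_prover = lambda i, c: a[i] if c == 0 else b[i]   # Hancke-Kuhn style reply

_, bound = rapid_bit_exchange(8, honest_prover)
print(f"distance upper bound ~ {bound:.1f} m (dominated here by local call overhead)")
```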
In some cases (e.g., distributed localization), there is a need for mutual DB between \(P_{1}\) and \(P_{2}\). This can be achieved by modifying the one-way DB protocol such that each response from \(P_{2}\) to a challenge by \(P_{1}\) also includes a challenge from \(P_{2}\) to \(P_{1}\). This requires \(2n+2\mathcal{C}+1\) messages for mutual DB instead of \(2(2\cdot n+\mathcal{C})\), as shown in [14]. Both parties generate and commit to two random bit strings \([c_{1},c_{2},...,c_{n}]\) and \([s_{1},s_{2},...,s_{n}]\). \(P_{1}\) starts by sending the first challenge bit \(c_{1}\) and \(P_{2}\) replies with \(c_{1}\oplus s_{1}\). \(P_{1}\) measures the time between sending \(c_{1}\) and receiving the response. \(P_{1}\) then replies with \(c_{2}\oplus s_{1}\). \(P_{2}\) measures the time between sending \(c_{1}\oplus s_{1}\) and receiving the response. This process is repeated \(n\) times. The mutual DB procedure is considered successful if both parties verify all responses and they match the previously committed values (see [14] for more details).
### Qubits
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Data** & **Comp. (or \(+\)) basis** & **Hadamard (or \(\times\)) basis** \\ \hline
0 & \(|0\rangle\) (i.e., \(\rightarrow\)) & \(|+\rangle\) (i.e., \(\nearrow\)) \\ \hline
1 & \(|1\rangle\) (i.e., \(\uparrow\)) & \(|-\rangle\) (i.e., \(\nwarrow\)) \\ \hline \end{tabular}
\end{table}
Table 2: A rule for encoding classical bits as qubits.
A qubit is a unit of quantum information, just as a bit (0 or 1) is the classical unit of information. A qubit is a vector in a 2-dimensional Hilbert space (a vector space with inner product). The basis
\[\{|0\rangle=\begin{bmatrix}1\\ 0\end{bmatrix},\ \ |1\rangle=\begin{bmatrix}0\\ 1\end{bmatrix}\}\]
for a qubit is called the computational basis, whereas the basis
\[\{|+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \ |-\rangle=(|0\rangle-|1\rangle)/ \sqrt{2}\}\]
is called the diagonal (or the Hadamard) basis. In general, a normalised quantum state can be expressed as a superposition of \(|0\rangle\) and \(|1\rangle\) as
\[\alpha\,|0\rangle+\beta\,|1\rangle\,,\]
where \(\alpha,\beta\in\mathbb{C}\) satisfying \(|\alpha|^{2}+|\beta|^{2}=1\).
If a qubit in state \(\alpha\,|0\rangle+\beta\,|1\rangle\) is measured in the computational basis, then \(|0\rangle\) is obtained with probability \(|\alpha|^{2}\) and \(|1\rangle\) with probability \(|\beta|^{2}\). And upon measurement in the computational basis, the qubit with original state \(\alpha\,|0\rangle+\beta\,|1\rangle\) collapses into \(|0\rangle\) or \(|1\rangle\), which is different from the original.
The four states, \(|0\rangle\), \(|1\rangle\), \(|+\rangle\), and \(|-\rangle\), have some nice properties. For example, they satisfy that
\[|0\rangle=(|+\rangle+|-\rangle)/\sqrt{2}\]
and
\[|1\rangle=(|+\rangle-|-\rangle)/\sqrt{2}.\]
If the qubits \(|0\rangle\) and \(|1\rangle\) are measured in the computational basis, then the states are not changed; whereas a measurement in the Hadamard basis completely destroys the state. In the latter case, either \(|+\rangle\) or \(|-\rangle\) is obtained with equal probability. Similarly, if the qubits \(|+\rangle\) and \(|-\rangle\) are measured in the Hadamard basis, then the states do not change; whereas a measurement in the computational basis destroys the state, and either \(|0\rangle\) or \(|1\rangle\) is obtained with equal probability. It is this principle that is used in [4, 5], while in the present paper we use it together with quantum entanglement (cf. Section 2.3).
These four states correspond to different polarisations of photons. The states \(|0\rangle\) and \(|1\rangle\) correspond to horizontally \(\rightarrow\) and vertically \(\uparrow\) polarised photons, respectively, whereas the states \(|+\rangle\) and \(|-\rangle\) correspond to \(\nearrow\) (45\({}^{\circ}\)) and \(\nwarrow\) (-45\({}^{\circ}\)) polarised photons. The classical bit value 0 is encoded as a qubit in state \(|0\rangle\) or \(|+\rangle\), and the value 1 is encoded as a qubit in state \(|1\rangle\) or \(|-\rangle\). The qubits are measured either in the computational or the Hadamard basis. Throughout this paper, (\(+\)) denotes the computational basis and (\(\times\)) the Hadamard basis; see Table 2.
If 0 is encoded as \(\rightarrow\) polarized photon, it can be decoded correctly as 0 only in the + basis, whereas if 0 is encoded as \(\nearrow\) polarized photon, then it can
be decoded correctly as 0 only in the \(\times\) basis. This also applies to the case when a 1 is encoded in the \(+\) basis and the \(\times\) basis, respectively. In any of these cases, if a polarized photon is decoded using a wrong basis, then one obtains a random bit. Therefore, by using the above encoding method one can send information encoded as polarized photons so that no one can copy or read reliably without knowing the bases used for encoding.
Later in the paper, we use \(\left|x\right\rangle_{y}\), for \(x,y\in\{0,1\}\), to denote encoding of a classical bit \(x\) into a qubit using the basis determined by \(y\).
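As a numerical illustration of this encoding rule (a Python/numpy sketch under an idealized noiseless model, not part of any of the protocols discussed), the code below encodes a classical bit in the computational or Hadamard basis and measures it in a chosen basis; decoding in the correct basis always recovers the bit, while decoding in the wrong basis yields a uniformly random bit.

```python
import numpy as np

ZERO, ONE = np.array([1.0, 0.0]), np.array([0.0, 1.0])
PLUS, MINUS = (ZERO + ONE) / np.sqrt(2), (ZERO - ONE) / np.sqrt(2)
BASES = {0: (ZERO, ONE), 1: (PLUS, MINUS)}   # 0 = computational (+), 1 = Hadamard (x)
rng = np.random.default_rng(0)

def encode(bit, basis):
    """Encode a classical bit as a qubit state, following Table 2."""
    return BASES[basis][bit]

def measure(state, basis):
    """Projective measurement in the chosen basis; returns the decoded bit."""
    p_one = abs(np.dot(BASES[basis][1], state)) ** 2
    return int(rng.random() < p_one)

# decoding in the correct basis always recovers the encoded bit
assert all(measure(encode(bit, bas), bas) == bit for bit in (0, 1) for bas in (0, 1))

# decoding in the wrong basis gives a uniformly random bit (~50% error)
errors = sum(measure(encode(1, 0), 1) != 1 for _ in range(10_000))
print(f"wrong-basis error rate ~ {errors / 10_000:.2f}")
```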
### Quantum Entanglement
What we have learnt in the previous subsection is that if we measure a qubit in an unknown state, not only do we obtain a purely random result, but we destroy the original state of the qubit. Another useful property of qubits is that they can be _entangled_. A two-qubit state \(\left|\beta\right\rangle\) is called _entangled_ if it cannot be expressed as a (tensor) product of its component states. For example, the following four two-qubit states (a.k.a., Bell states or Einstein-Podolsky-Rosen (EPR) pairs)
\[\left|\beta_{00}\right\rangle =\frac{1}{\sqrt{2}}(\left|00\right\rangle+\left|11\right\rangle),\] \[\left|\beta_{01}\right\rangle =\frac{1}{\sqrt{2}}(\left|00\right\rangle-\left|11\right\rangle),\] \[\left|\beta_{10}\right\rangle =\frac{1}{\sqrt{2}}(\left|10\right\rangle+\left|01\right\rangle), \text{ and}\] \[\left|\beta_{11}\right\rangle =\frac{1}{\sqrt{2}}(\left|01\right\rangle-\left|10\right\rangle)\]
are (maximally) entangled states that are mutually orthogonal. Importantly, entanglement sources generate pairs of entangled qubits, and entangled qubits exhibit strong correlations with no classical analog. We use entanglement in our protocols as follows. When an entanglement source (e.g., one emitting pairs of spin \(\frac{1}{2}\) particles) generates a pair of entangled qubits (say, on the verifier side), one half is sent to the prover and the other half is kept locally. Then both the prover and the verifier measure their half of the entangled pair in the same measurement setting. So after learning its local measurement outcome, the verifier can predict with probability 1 what the prover would obtain if it also measures its half in the same setting. The prover then sends as its response the measurement outcome encoded into the state of a new qubit (cf. Section 3.2). Prior to their measurements, neither the prover nor the verifier can predict the value of the challenge or the response. In other words, there is no a priori information encoded into the state of the entangled qubits; the information comes into existence only after measurement.
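A minimal numpy sketch of the correlation just described (our illustration, assuming an ideal noiseless entanglement source): measuring both halves of \(|\beta_{00}\rangle\) in the same basis always yields identical outcomes for the two parties, while measuring in different bases yields uncorrelated outcomes.

```python
import numpy as np

rng = np.random.default_rng(1)
ZERO, ONE = np.array([1.0, 0.0]), np.array([0.0, 1.0])
PLUS, MINUS = (ZERO + ONE) / np.sqrt(2), (ZERO - ONE) / np.sqrt(2)
BASES = {0: (ZERO, ONE), 1: (PLUS, MINUS)}                          # 0 = computational, 1 = Hadamard
BELL_00 = (np.kron(ZERO, ZERO) + np.kron(ONE, ONE)) / np.sqrt(2)    # |beta_00>

def measure_pair(state, basis_a, basis_b):
    """Measure both qubits of a 2-qubit state; return the pair of outcomes."""
    outcomes, probs = [], []
    for bit_a, vec_a in enumerate(BASES[basis_a]):
        for bit_b, vec_b in enumerate(BASES[basis_b]):
            probs.append(abs(np.dot(np.kron(vec_a, vec_b), state)) ** 2)
            outcomes.append((bit_a, bit_b))
    return outcomes[rng.choice(len(outcomes), p=np.array(probs) / sum(probs))]

same = [measure_pair(BELL_00, 0, 0) for _ in range(2000)]
diff = [measure_pair(BELL_00, 0, 1) for _ in range(2000)]
print("same basis, identical outcomes:", sum(x == y for x, y in same) / 2000)       # ~ 1.0
print("different bases, identical outcomes:", sum(x == y for x, y in diff) / 2000)  # ~ 0.5
```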
### Related Work in QDB
The most relevant QDB protocol is the one proposed in [4] and illustrated in Figure 2. In that protocol, \(k_{p}\) is a shared secret key between the prover and the verifier. Both parties generate random values \(N_{v}\) and \(N_{p}\) and exchange them, and a keyed pseudo-random function (PRF) \(f_{k_{p}}\) is applied to \(N_{v}\) and \(N_{p}\) to generate a random bit sequence that is parsed as two equal-sized registers \(a\) and \(b\). We denote by \(x_{i}\) the bit at position \(i\) of register \(x\), where \(x\in\{a,b\}\). In the \(i\)-th round of rapid-bit exchange, the verifier generates a random challenge bit \(c_{i}\) and encodes it in a basis determined by \(a_{i}\). The prover decodes the challenge qubit in a basis determined by \(a_{i}\), obtains \(c^{\prime}_{i}\), and re-encodes it (as a response qubit) in a basis determined by \(b_{i}\). If, in the \(i\)-th round of the rapid-bit exchange phase, the verifier obtains the same challenge bit by decoding the prover's response qubit in the basis determined by bit \(b_{i}\), then the verifier can deduce that it is the prover which sent this response. This is because the only party that can correctly decode the challenge qubit without errors is the prover, i.e., only the correct prover can re-encode the challenge bit in the basis determined by \(b_{i}\). Once the rapid-bit exchange is completed, the verifier checks whether the decoded bits \(c_{i}^{\prime\prime}\) match the generated challenge bits \(c_{i}\).
The protocol in [4] is an improvement over an earlier work in [5], where the authors present the first QDB protocol that falls under the Brands-Chaum blueprint, in that it requires an additional authentication phase at the end. More specifically, the protocol in [5] works as follows. Again, let \(k_{p}\) be a shared key between the prover and the verifier. In the initialisation phase, both parties exchange randomly generated values \(N_{v}\) and \(N_{p}\), and compute \(a=f_{k_{p}}(N_{v},N_{p})\) of, say, length \(2n\). Then, in the \(i\)-th round of the rapid-bit exchange phase, for \(i=1,2,\cdots,n\), the verifier encodes a randomly generated challenge bit \(c_{i}\) in a basis determined by \(a_{2i-1}\) and sends the resulting qubit to the prover. The prover decodes the received challenge qubit in the basis determined by \(a_{2i-1}\), obtains \(c_{i}^{\prime}\), re-encodes it in the basis determined by \(a_{2i}\), and sends the resulting qubit as its response. Upon receiving the response qubit, the verifier decodes it in the basis determined by \(a_{2i}\) and obtains \(c_{i}^{\prime\prime}\). In the last phase of the protocol, the prover sends a MAC (message authentication code) tag for \((ID_{p},ID_{v},N_{p},N_{v},c_{1}^{\prime},\cdots,c_{n}^{\prime})\), where \(ID_{p}\) and \(ID_{v}\) stand for the identities of the prover and verifier, respectively, computed using the shared key \(k_{p}\), to the verifier, which verifies the MAC tag by comparing it to a MAC tag for \((ID_{p},ID_{v},N_{p},N_{v},c_{1}^{\prime\prime},\cdots,c_{n}^{\prime\prime})\) that it computes itself.
Lately, Abidin proposed a hybrid (one-way) DB protocol that uses classical bits for challenges and qubits for responses in [15].
Although these protocols use qubits in the rapid-bit exchange phase, they do not utilise entangled qubits. In fact, it is not immediately clear whether we can expand the design space for QDB using entangled qubits. One of the main motivations behind the current paper is to investigate whether entangled qubits
Fig. 2: The (one-way) QDB protocol from [4].
can be utilized to design a QDB protocol. To this end, we propose two new QDB protocols employing entangled qubits.
## 3 Entanglement-based QDB
In this section we present details of the new entanglement-based QDB protocols. We start with a brief discussion of the intuition behind our new protocols.
### Intuition of Entanglement-based QDB
As is standard in most DB literature, we assume that the two parties engaging in the protocol have a shared cryptographic key that can be used together with a PRF to generate a random bit string that can be split into several sub-strings, which we call registers. In the one-way DB case, we require this shared random binary string to be (deterministically) split into two registers \(a\) and \(b\); in the mutual QDB case we require it to be split into three registers \(a\), \(b\), and \(c\). The main idea behind the new QDB protocols is to use entangled particles (denoted by _EP_ in Figures 3 and 4) in the rapid-bit exchange phases. In the base protocol for one-way QDB, two entangled particles are prepared by the verifier and one of the particles is sent by that verifier to the prover. Instead of encoding a randomly generated new challenge bit \(c\) as a qubit in a basis determined by the random bits in register \(a\), as was done in previous work [4] (and as illustrated in the one-way protocol in Figure 2), we now generate two entangled particles and determine the basis in which to measure them, and also the basis for encoding the response, by the bits of the two registers \(a\) and \(b\) in the one-way DB case, and of the three registers \(a\), \(b\), and \(c\) in the mutual DB case.
The protocol blueprint that uses only two registers achieves one-way DB, though. In theory, one can perform mutual DB by using the one-way DB protocol in Figure 3: this is done by executing the protocol twice, with the roles reversed in the second execution. For a mutual DB with fewer communication rounds, we need the prover's response to the verifier's challenge to be unpredictable to the verifier, so that it can also be regarded as a challenge by the verifier. With this in mind, we prepare the prover's response qubit as the encoding of the measurement outcome of the entangled qubit XORed with a random bit. One subtle issue in this case is that the entanglement-based mutual QDB protocol needs an additional random bit string to be committed to (denoted by \(r\) in Figure 4). This is discussed in more detail in Section 3.2 below.
### Details of Entanglement-based QDB
We first present details of our new entanglement-based one-way QDB protocol; we then present the entanglement-based mutual QDB one.
**Entanglement-based One-Way QDB.** A schematic illustration of the steps of the base one-way QDB protocol is provided in Figure 3 (top and middle portions). The base one-way QDB protocol enables one party (the verifier) to establish an upper bound on the distance between itself and another party (the prover) with a single execution of the protocol. As mentioned before, a single execution of this base one-way DB protocol does not provide the prover with any guarantees on the upper bound on the distance to the verifier. To obtain such guarantees, there are two options: (1) the first approach to establish mutual DB is to execute two independent sequential runs of the one-way DB protocol, as shown in Figure 3. The figure shows two such independent runs (middle and bottom sections) with the two parties taking opposite roles in the second run: Party A acts as the verifier in the first run (middle of the figure) and as the prover in the second run (bottom of the figure), while Party B acts as the prover in the first run and as the verifier in the second run. (2) The second approach to establish mutual DB is to perform a single execution of the mutual QDB protocol, as outlined later in Section 3.2 and Figure 4.
In the base one-way QDB protocol, during the initialization phase, the two parties exchange two random nonces \(N_{A}\) and \(N_{B}\) and apply the keyed PRF \(f_{k}\) to the exchanged nonces (steps 1 and 2 in the figure). The result of the PRF is a random binary string that is then split into two parts, which we denote as registers \(a\) and \(b\). These steps are shown in the top portion of Figure 3. The protocol then proceeds as follows (steps 3 and 4 below are repeated \(n\) times):
3. In the \(i\)-th round of rapid-bit exchange, Party A (the verifier) generates a pair of entangled particles \(\text{EP}_{i}\), which is one of the mutually orthogonal Bell states \(\beta_{00},\,\beta_{01},\,\beta_{10},\,\beta_{11}\) from Section 2.3, sends one half (again denoted \(\text{EP}_{i}\)) to Party B, and starts a local timer (the clock denoted CLK in Figure 3) when \(\text{EP}_{i}\) is sent to the prover. The verifier then uses \(a_{i}\) to determine the basis in which to measure its half of \(\text{EP}_{i}\). Party B (the prover) eventually receives \(\text{EP}_{i}\) and measures it in a basis determined by register \(a_{i}\). Denote the measurement result as \(m^{\prime}_{i}\).
4. Party B encodes the result \(m^{\prime}_{i}\) of the measurement of the received \(\text{EP}_{i}\) in a basis determined by register \(b_{i}\). The encoded result \(\left|m^{\prime}_{i}\right\rangle_{b_{i}}\) is then sent from Party B to Party A. Next, upon receiving the encoded response, Party A performs the following steps: (a) it stops the clock to measure the time of flight, and thus the upper bound on the distance to the prover, and decodes the response in a basis determined by register \(b_{i}\); (b) it then checks whether the response decoded in the basis determined by register \(b_{i}\) is equal to its own measurement value of \(\text{EP}_{i}\). If this check fails, it aborts the protocol.
5. _Optional:_ The protocol can then be repeated with the roles of Parties A and B reversed, as shown in the bottom section (steps 5 and 6) of Figure 3.
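The following self-contained Python/numpy sketch (our simplified model: an ideal channel, no timing, and random bits standing in for the keyed PRF output \(a||b=f_{k}(N_{A},N_{B})\)) simulates \(n\) rounds of the rapid-bit exchange of the one-way protocol above: both halves of \(\mathrm{EP}_{i}\) are measured in basis \(a_{i}\), the prover re-encodes its outcome in basis \(b_{i}\), and the verifier checks the decoded response against its own measurement.

```python
import numpy as np

rng = np.random.default_rng(2)
ZERO, ONE = np.array([1.0, 0.0]), np.array([0.0, 1.0])
PLUS, MINUS = (ZERO + ONE) / np.sqrt(2), (ZERO - ONE) / np.sqrt(2)
BASES = {0: (ZERO, ONE), 1: (PLUS, MINUS)}
BELL_00 = (np.kron(ZERO, ZERO) + np.kron(ONE, ONE)) / np.sqrt(2)

def measure_ep_pair(basis):
    """Both halves of |beta_00> measured in the same basis -> correlated outcomes."""
    outcomes, probs = [], []
    for bit_v, vec_v in enumerate(BASES[basis]):
        for bit_p, vec_p in enumerate(BASES[basis]):
            probs.append(abs(np.dot(np.kron(vec_v, vec_p), BELL_00)) ** 2)
            outcomes.append((bit_v, bit_p))
    return outcomes[rng.choice(len(outcomes), p=np.array(probs) / sum(probs))]

def encode(bit, basis):   # cf. Table 2
    return BASES[basis][bit]

def decode(qubit, basis):
    return int(rng.random() < abs(np.dot(BASES[basis][1], qubit)) ** 2)

n = 16
a = rng.integers(0, 2, n)   # stand-in for register a
b = rng.integers(0, 2, n)   # stand-in for register b

all_ok = True
for i in range(n):
    m_v, m_p = measure_ep_pair(a[i])         # step 3: both halves of EP_i measured in basis a_i
    response = encode(m_p, b[i])             # step 4: prover re-encodes its outcome in basis b_i
    all_ok &= decode(response, b[i]) == m_v  # verifier decodes in b_i and compares
print("all rounds verified:", bool(all_ok))  # True for an honest prover on a noiseless channel
```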
**Entanglement-based Mutual QDB.** A schematic illustration of the steps of our new mutual QDB protocol is provided in Figure 4. The protocol enables two parties to mutually establish an upper bound on the distance between them with a single execution of the protocol.
Similar to the one-way setting, during the initialization phase of this protocol, the two parties exchange two random nonces \(N_{A}\) and \(N_{B}\) and apply the keyed PRF \(f_{k}\) to the exchanged nonces (steps 1 and 2 in the figure). The result of the PRF is a random binary string that is then split into three parts, which we denote as registers \(a\), \(b\), and \(c\). The protocol then proceeds as follows (steps 4 to 6 are repeated \(n\) times):
3. Party B generates a random bit sequence \(r\leftarrow\{0,1\}^{n}\) and sends a commitment to \(r\) to Party A.
4. In the \(i\)-th round of rapid-bit exchange, Party A generates a pair of entangled particles \(\mathrm{EP}_{i}\), which is one of the mutually orthogonal Bell states \(\beta_{00}\), \(\beta_{01}\), \(\beta_{10}\), \(\beta_{11}\) from Section 2.3, and sends one half, which we again denote as \(\mathrm{EP}_{i}\), to Party B. Party B eventually receives \(\mathrm{EP}_{i}\) and measures it in a basis determined by register \(a_{i}\). Denote the measurement result as \(m^{\prime}_{i}\).
5. Party B computes \(m^{\prime}_{i}\oplus r_{i}\) and encodes the result in a basis determined by register \(b_{i}\). The resulting qubit \(\left|m^{\prime}_{i}\oplus r_{i}\right\rangle_{b_{i}}\) is then sent to Party A.
6. Party A eventually receives \(\left|m^{\prime}_{i}\oplus r_{i}\right\rangle_{b_{i}}\) from Party B, stops its clock, decodes it in a basis determined by register \(b_{i}\), and then XORs the result with the result of measuring its half of \(\mathrm{EP}_{i}\) in a basis determined by register \(a_{i}\). The result of the above step (which should be \(r_{i}\) in an honest execution) is then encoded in a basis determined by register \(c_{i}\) and transmitted back to Party B. Party B receives this response, stops its clock, and computes an upper bound on the distance to Party A based on the time of flight. Party B then decodes the response from Party A in a basis determined by register \(c_{i}\) and compares the result to \(r_{i}\). If they are equal, then Party B knows that Party A was not cheating; otherwise, it aborts the protocol.
7. In the last phase, Party B sends \(m^{\prime}\), which is the concatenation of all \(m^{\prime}_{i}\) measured in the basis determined by register \(a\) in step 4 of each round above, and opens the commitment to \(r\). Party A eventually receives \(m^{\prime}\) and checks that each \(m^{\prime}_{i}\) is equal to its own measurement value of \(\mathrm{EP}_{i}\). If this check fails, it aborts the protocol.
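Analogously, the sketch below simulates the mutual protocol (again an idealized model: no timing, no real commitment scheme, and random bits standing in for \(a||b||c=f_{k}(N_{A},N_{B})\)); it follows steps 4 to 7 for each round and the final phase in which Party B reveals \(m^{\prime}\).

```python
import numpy as np

rng = np.random.default_rng(3)
ZERO, ONE = np.array([1.0, 0.0]), np.array([0.0, 1.0])
PLUS, MINUS = (ZERO + ONE) / np.sqrt(2), (ZERO - ONE) / np.sqrt(2)
BASES = {0: (ZERO, ONE), 1: (PLUS, MINUS)}
BELL_00 = (np.kron(ZERO, ZERO) + np.kron(ONE, ONE)) / np.sqrt(2)

def measure_ep_pair(basis):
    """Both halves of |beta_00> measured in the same basis -> correlated outcomes."""
    outcomes, probs = [], []
    for bit_a, vec_a in enumerate(BASES[basis]):
        for bit_b, vec_b in enumerate(BASES[basis]):
            probs.append(abs(np.dot(np.kron(vec_a, vec_b), BELL_00)) ** 2)
            outcomes.append((bit_a, bit_b))
    return outcomes[rng.choice(len(outcomes), p=np.array(probs) / sum(probs))]

def encode(bit, basis):
    return BASES[basis][bit]

def decode(qubit, basis):
    return int(rng.random() < abs(np.dot(BASES[basis][1], qubit)) ** 2)

n = 16
a, b, c = (rng.integers(0, 2, n) for _ in range(3))  # stand-in for a||b||c = f_k(N_A, N_B)
r = rng.integers(0, 2, n)                            # Party B's committed random bits (step 3)

m_of_A, m_of_B, b_accepts = [], [], True
for i in range(n):
    m_A, m_B = measure_ep_pair(a[i])                 # step 4: EP_i measured by both in basis a_i
    m_of_A.append(m_A); m_of_B.append(m_B)
    to_A = encode(m_B ^ r[i], b[i])                  # step 5: |m'_i XOR r_i> in basis b_i
    r_recovered = decode(to_A, b[i]) ^ m_A           # step 6: A recovers r_i ...
    to_B = encode(r_recovered, c[i])                 #         ... and re-encodes it in basis c_i
    b_accepts &= decode(to_B, c[i]) == r[i]          # step 7: B compares with its own r_i
a_accepts = m_of_A == m_of_B                         # final phase: B reveals m' (and opens r)
print("Party B accepts:", bool(b_accepts), "| Party A accepts:", a_accepts)
```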
## 4 Security Analysis
We first discuss a new property that our QDB protocols rely on: detection of reflected entangled particles that are not measured by a cheating prover. Next, we provide an (informal) analysis of the security of our entanglement-based QDB protocols against several attacks: (1) distance fraud attacks, (2) mafia fraud attacks, (3) terrorist fraud attacks, and (4) implementation attacks. A formal security analysis is beyond the scope of this work, and is left as future work.
Figure 3: Entanglement-based one-way QDB protocol with reversed roles, to establish mutual distance bounds.
Figure 4: An overview of our entanglement-based mutual QDB protocol.
### Reflection attacks
In some of the attacks that are described later in this section, a malicious prover could try to reflect an entangled particle without measuring it first, and use the reflected particle as a standard quantum particle that the protocol expects as a response. Assuming1 that the entanglement is with respect to the polarization of the particles, if the verifier wants to ensure that the prover did not reflect back the entangled particle, the verifier can perform a joint measurement of its local entangled particle and the response it receives in the complementary basis, e.g., diagonal in the case of vertical and/or horizontal polarization. If the prover reflected the challenge entangled particle, then the outcome of this joint measurement will indicate an unusually high correlation, which should not be the case if a standard quantum particle was instead sent back. We note that the rationale behind such detection is along the same lines of the rationale typically used in entanglement-based quantum key distribution (QKD) in protocols [6, 7, 8] analogous to BB84 [16], but where one typically expects high correlations in the normal case when no eavesdropper is present (no attack), and low correlations when such an eavesdropper is present (attack). In our case, high correlations are expected when there is an attack, and low correlations when none is present. We think that the observation that entanglement-based QDB provides such detection properties for advanced distance fraud attack strategies is of independent interest and may be utilized in other QDB settings, e.g., distributed, hierarchical, and/or group settings.
Footnote 1: Without loss of generality, as the reasoning applies to other cases too.
### Distance Fraud Attack
In this type of attack, a dishonest prover (controlled by the adversary) attempts to shorten the distance computed by the verifier. A strawman distance fraud attack strategy is to attempt to predict the challenge bit and send the response before receiving the actual challenge. This attack, which also applies to the entangled QDB protocols presented in this paper, succeeds with a probability of \(2^{-n}\) for \(n\) challenges, which can be made negligible by increasing \(n\); we thus argue that this attack is rendered ineffective for a large \(n\).
We note that previous QDB protocols [4] were susceptible to a more advanced strategy for a distance fraud attack. In that advanced strategy, assuming
in the \(i\)-th round of rapid-bit exchange that \(a_{i}=b_{i}\), an adversary could effectively shorten the computed distance by reflecting the incoming photon without decoding and re-encoding it. This strategy results in the prover saving some processing time (probably only a few nanoseconds, though). In this case, the (shorter) distance computed by a verifier depends on the exact amount of processing time saved by the prover. The root causes of this attack are (1) the prover knowing the bits \(a\) and \(b\) in advance, and hence being able to identify the rounds where the condition \(a_{i}=b_{i}\) holds, and (2) the feasibility of reflecting photons without the need to decode and re-encode them. The probability of success of this attack is \(2^{-\texttt{HD}(a,b)}\), where HD is the Hamming distance between \(a\) and \(b\).
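As a quick numerical illustration (our example): for uniformly random \(n\)-bit registers, \(\texttt{HD}(a,b)\approx n/2\) on average, so the advanced strategy succeeds with probability roughly \(2^{-n/2}\) rather than \(2^{-n}\).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 32
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)
hd = int(np.sum(a != b))                      # Hamming distance between the registers
print("HD(a, b) =", hd, "of", n, "rounds")
print("advanced distance-fraud success probability:", 2.0 ** -hd)
print("strawman (guess every challenge) probability:", 2.0 ** -n)
```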
**Entanglement-based One-way QDB:** We argue that the advanced distance fraud attack strategy discussed above no longer works in our entanglement-based QDB protocols. Recall that in our one-way QDB protocol (Figure 3), the verifier transmits its challenge to the prover via an entangled particle (step 3 in the protocol), and expects a _standard (un-entangled) quantum particle_ as a response from the prover (step 4 in the protocol). If the prover would instead reflect the entangled particle of step 3 back in step 4, then that entangled particle would collapse to a random bit when the verifier tries to decode it in the basis determined by value of the register \(b\). Because the output of this decoding operation will be a random bit, this strategy would not offer any advantage compared to the default strawman distance fraud attack described above (with a success probability of \(2^{-n}\)). Moreover, as discussed above, the verifier could even detect this reflection attack by measuring the correlation between the challenge and the response.
**Entanglement-based Mutual QDB:** A similar reasoning as in the one-way QDB case holds for Party B performing a distance fraud attack in the mutual QDB protocol (Figure 4). So let us now consider the case where Party A tries to perform a distance fraud attack. Recall that both the challenge Party A receives (step 5 in the protocol) and the response it transmits (step 6 in the protocol) are encoded using a _standard quantum particle_. When it performs the advanced attack strategy where it reflects the incoming photon - without decoding and re-encoding it - in round \(i\), then Party A is successful in the following cases:
* If \(b_{i}\neq c_{i}\), then the output of the decoder in basis \(c\) at Party B (step 7 in the protocol) will be random, since the photon will be decoded in the wrong basis. So the output will be correct with probability \(1/2\).
* If \(b_{i}=c_{i}\), then the response sent to Party B will only be correct if \(m^{\prime}_{i}\oplus r_{i}=r_{i}\). This condition obviously only holds when \(m^{\prime}_{i}=0\). However, recall from Section 2.3 that there is no information encoded in the state of entangled qubits; the information comes into existence only after the entangled qubits are measured. What this means is that the value \(m^{\prime}_{i}\) remains unknown to any party until it is measured. Therefore, Party A cannot enforce the condition that \(m^{\prime}_{i}=0\); it will hold with probability \(1/2\). However, Party A could cheat by not using entangled particles in step 4 of the protocol. Instead, it could generate an un-entangled particle encoding \(m^{\prime}_{i}=0\). Party B cannot check that it received an un-entangled particle and will accept it as a valid challenge. As a result of using an un-entangled particle, Party A will be able to reflect all incoming challenges (step 5 of the protocol) in each round where \(b_{i}=c_{i}\). This increases the success probability to \(2^{-\texttt{HD}(b,c)}\).
In summary, our mutual QDB protocol offers asymmetric resilience against distance fraud attacks. Party B has no advantage compared to the default strawman distance fraud attack, while Party A does have an advantage.
### Mafia Fraud Attack
In this form of attack, an adversary either (1) guesses the verifier's challenges in advance, sends these random challenges to the prover to obtain the prover's responses (prior to receiving any challenges from the verifier), and then uses the prover's responses as the responses for the verifier's actual challenges, or (2) intercepts the verifier's challenges and responds using randomly chosen bits (encoded in random basis). In the first case (similar to the analysis in [4]), there are four scenarios to consider: (1.1) the attacker guesses the basis correctly but guesses a wrong challenge bit, (1.2) the attacker guesses both the basis and the challenge bit correctly, (1.3) the attacker guesses the basis incorrectly but the challenge bit correctly, and (1.4) the attacker guesses both the basis and the challenge bit incorrectly. In case (1.1), the prover's overall response will always be incorrect and the attack fails. In case (1.2), the attacker always wins. In case (1.3), with probability \(1/2\) the prover's measurement results in the correct challenge bit, so the attacker wins with probability \(1/2\). In case (1.4), again the prover's measurement results in the correct challenge bit with probability \(1/2\), so the attacker wins with probability \(1/2\). Overall, the attacker wins with probability \(1/2\) in the strategy where it obtains the prover's responses in advance, and after \(n\)-rounds, the success probability is \((1/2)^{n}\).
In the second case, where the attacker responds to the verifier's challenges using randomly chosen bases, there are again four scenarios to consider: (2.1) the attacker measures the verifier's challenge in the correct basis and uses the correct basis to respond, (2.2) the attacker measures the verifier's challenge in the correct basis but uses the wrong basis to respond, (2.3) the attacker measures the verifier's challenge in the wrong basis but uses the correct basis to respond, and (2.4) the attacker measures the verifier's challenge in the wrong basis and also uses the wrong basis to respond.
In case (2.1), the attacker always wins. In cases (2.2) to (2.4), the attacker wins with probability \(1/2\) in each case. Overall, the attacker wins with probability \(5/8\) using this attack strategy of intercept and resend, or \((5/8)^{n}\) after \(n\) rapid-bit exchange rounds. Note that the use of entangled particles does not change the success probability of mafia fraud attacks compared to the QDB protocol in [4]. This is as expected, as it is used to improve the efficiency of the mutual QDB protocol and offer stronger protection against an adversarial prover.
We note that the attacker can also employ a similar strategy to the one employed by a dishonest prover in the distance fraud attack. That is, the attacker can just reflect the verifier's challenges. However, we have shown in Section 4.1 that this strategy is not better than the one where the attacker responds with random bits, due to the use of entangled particles. Hence, there is nothing to gain from employing it, as the success probability of this strategy is lower than \((5/8)^{n}\).
It is also interesting to note that the attack success probability and strategy are similar for the one-way and the mutual QDB protocols. In both cases, the adversary chooses in advance the party it wants to authenticate to. This party plays the verifier in the attack strategy, and the other party plays the prover. The adversary then performs one of the attack strategies discussed above to falsely convince the verifier that it is the legitimate prover. When the verifier needs to authenticate to the prover, the adversary can simply ignore the verifier's responses.
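The \(5/8\) per-round figure for the intercept-and-resend strategy can be checked with a short Monte-Carlo simulation (our sketch, modelled on a single challenge qubit encoded in basis \(a_{i}\) and verified in basis \(b_{i}\), as in the analysis above; the entanglement-based variants yield the same per-round figure, as argued earlier).

```python
import numpy as np

rng = np.random.default_rng(4)
ZERO, ONE = np.array([1.0, 0.0]), np.array([0.0, 1.0])
PLUS, MINUS = (ZERO + ONE) / np.sqrt(2), (ZERO - ONE) / np.sqrt(2)
BASES = {0: (ZERO, ONE), 1: (PLUS, MINUS)}

def encode(bit, basis):
    return BASES[basis][bit]

def decode(qubit, basis):
    return int(rng.random() < abs(np.dot(BASES[basis][1], qubit)) ** 2)

wins, trials = 0, 100_000
for _ in range(trials):
    a_i, b_i = rng.integers(0, 2), rng.integers(0, 2)                # verifier's secret basis bits
    c_i = int(rng.integers(0, 2))                                    # verifier's challenge bit
    challenge = encode(c_i, a_i)                                     # challenge qubit, basis a_i
    guess_meas, guess_resp = rng.integers(0, 2), rng.integers(0, 2)  # attacker's random bases
    bit_guess = decode(challenge, guess_meas)                        # attacker measures the challenge ...
    response = encode(bit_guess, guess_resp)                         # ... and re-encodes its guess
    wins += decode(response, b_i) == c_i                             # verifier decodes in basis b_i
print("per-round success rate ~", wins / trials)                     # ~ 5/8 = 0.625
```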
### Terrorist Fraud Attack
Protocols resisting terrorist fraud attacks follow the blueprint in Figure 1 but with the bits of \(b\) derived using an encryption scheme that uses the bits of \(a\) as the key to compute \(b=Enc_{a}(k_{p})\), i.e., their design is such that revealing \(a\) and \(b\) reveals the long-term secret \(k_{p}\). In their present form, our QDB protocols are not resistant to terrorist fraud attacks, as no information on the long-term shared secret key \(k_{p}\) can be derived from \(a\) and \(b\), because we assume that \(f_{k}(.,.)\) is a PRF. If we retain \(a=f_{k_{p}}(N_{v},N_{p})\) and set \(b=Enc_{a}(k_{p})\), then our protocols in Figures 3 and 4, similar to the previous QDB protocols in [4, 5], become resistant to terrorist fraud attacks.
### Implementation Attacks
As noted in previous work [4], one implementation attack that always should be considered in QDB is the photon number splitting (PNS) attack. The PNS attack applies when a transmitted (quantum) pulse has more than one particle/photon. In QDB, if the verifier's challenge qubit is composed of multiple photons, then, the adversary could attempt to split the extra photons and only transmit the remaining single photon to the prover as noted in [4]. When PNS is applied to the
closest previous (one-way) QDB protocols [4], it was argued that the adversary could not do anything further beyond waiting for the response to arrive from the prover, and that the response will encode the verifier's challenge bit in a new basis, which is unknown to the adversary. In conclusion, the PNS attack is not more effective than the mafia fraud attack in the QDB protocols of [4]. This is in contrast to the original protocol in [5], where the PNS attack is mitigated by including a final authentication phase, in which the prover sends all the received challenge bits to the verifier. The same analysis of PNS applies to our entanglement-based QDB protocols.
### Comparison
In Table 3, we compare our entanglement-based one-way and mutual QDB protocols with the state-of-the-art QDB protocols [5, 4, 15], as well as the two classical DB protocols [1, 17] from which all other DB protocols are derived. We refer the interested reader to [18], and the references therein, for a recent survey on DB
\begin{table}
\begin{tabular}{|l|c|c|} \hline & **Distance Fraud** & **Mafia Fraud** \\ \hline Brands-Chaum [1] (Classical) & \(\left(\frac{1}{2}\right)^{n}\) & \(\left(\frac{1}{2}\right)^{n}\) \\ \hline Hancke-Kuhn [17] (Classical) & \(\left(\frac{3}{4}\right)^{n}\) & \(\left(\frac{3}{4}\right)^{n}\) \\ \hline QDB [5] (Quantum one-way) & \(\left(\frac{3}{4}\right)^{n}\) & \(\left(\frac{3}{4}\right)^{n}\) \\ \hline QDB [4] (Quantum one-way) & \(\left(\frac{1}{2}\right)^{\mathsf{HD}(a,b)}\) & \(\max\left(\left(\frac{1}{2}\right)^{\mathsf{HD}(a,b)},\left(\frac{5}{8}\right)^{n}\right)\) \\ \hline Hybrid DB [15] (Hybrid one-way) & \(\left(\frac{1}{2}\right)^{n}\) & \(\left(\frac{3}{4}\right)^{n}\) \\ \hline
**Our One-way QDB Protocol** & \(\left(\frac{1}{2}\right)^{\mathsf{HD}(a,b)}\) & \(\left(\frac{5}{8}\right)^{n}\) \\ \hline
**Our Mutual QDB Protocol** & \(\left(\frac{1}{2}\right)^{\mathsf{HD}(b,c)}\) & \(\left(\frac{5}{8}\right)^{n}\) \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of our entanglement-based one-way and mutual QDB protocols with other classical and quantum DB protocols. Note that \(a\), \(b\), and \(c\) are the registers in the respective protocols.
protocols. We can see from the table that both of our protocols compare favorably to the QDB protocol in [4].
## 5 Open Issues and Future Work
This paper paves the way for several future research directions: (1) formal security analysis of entanglement-based mutual QDB, (2) experimental realization of such mutual QDB, (3) combining entanglement-based (mutual) QDB with Quantum Key Distribution (QKD), and (4) QDB for group settings.
_(1) Formal Security Analysis:_ As mentioned in Section 4, our security analysis is brief and informal. The objective of our analysis is to provide intuition and evidence that our protocols are secure, but we do acknowledge that more rigour is required to establish more confidence in our protocols. We note that the current literature is missing a detailed formal treatment of DB protocols in the quantum setting, i.e., formalizing required security guarantees using a game-based definition [19] when the adversary has quantum capabilities. We think that this is one of the most pressing future research directions to consider, to lay out a rigorous formal foundation for this topic.
_(2) Experimental Realization of (Mutual) QDB:_ We argue that our protocols are practically feasible and can be implemented and experimentally verified using existing QKD technology. Our arguments about practical feasibility are based on those in [5], because similar experimental setups for transmitting and measuring (entangled) particles in QKD (and other related experiments) may be directly applicable to our protocols, i.e., to prepare, transmit, and then measure the entangled particles during the rapid-bit exchange phase. Finally, QKD equipment and other equipment required for similar experimental setups are commercially available, and have been tested and verified several times, as reported in [20] and the references therein.
_(3) Combined (Mutual) QDB with Key Establishment:_ Our protocols assume that the two parties already share a secret key \(k\). It would be interesting to explore combining QDB with a key establishment functionality to obtain distance-bounded mutually shared keys, similarly as was proposed in [21] using classical distance bounding protocols. It may be possible to combine QDB and QKD protocols to obtain this functionality. We think that this research direction is theoretically and experimentally interesting.
_(4) QDB for Group Settings:_ QDB has so far only been considered in the context of a single prover and a single verifier. To the best of our knowledge, there has been no prior work on QDB in group settings, where a set of (quantum-capable) provers interact with a set of (quantum-capable) verifiers. In the classic setting, the need for group distance bounding (GDB) [22] is motivated by several practical scenarios such as group device pairing, location-based access control and secure
distributed localization. It is currently unclear (at least to us) what a Group QDB (GQDB) protocol would look like.
## 6 Conclusion
In this paper we present two new Quantum Distance Bounding (QDB) protocols that utilize entangled particles to communicate random qubits in the rapid-bit exchange phase. Because the blueprint of our base protocol for one-way DB utilizes entangled particles in the rapid-bit exchange phase, we were able to extend it to perform mutual QDB between two parties (prover and verifier) in a single execution. Previous QDB protocols had to be executed twice with those roles reversed in each execution. Our protocols eliminate the need for communication from the prover to the verifier in the last authentication phase (especially, in the one-way DB case), which was necessary in some previous QDB protocols. To the best of our knowledge, our entanglement-based QDB protocol is the first mutual QDB protocol proposed in the literature. Finally, we briefly discuss several future research directions that can benefit from our QDB protocols, e.g., generalizing QDB to group settings which is currently unaddressed in the literature. We note that in the classical case, (communication) efficient DB protocols for group settings build upon mutual DB protocols. We argue that our mutual QDB protocol now opens the door for constructing efficient QDB protocols for group settings.
|
2308.09721
|
A new solution and concrete implementation steps for Artificial General
Intelligence
|
At present, the mainstream artificial intelligence generally adopts the
technical path of "attention mechanism + deep learning" + "reinforcement
learning". It has made great progress in the field of AIGC (Artificial
Intelligence Generated Content), setting off the technical wave of big models
[2][13]. But in areas that need to interact with the actual environment, such
as elderly care, home nanny, agricultural production, and vehicle driving,
trial and error are expensive and a reinforcement learning process that
requires much trial and error is difficult to achieve. Therefore, in order to
achieve Artificial General Intelligence(AGI) that can be applied to any field,
we need to use both existing technologies and solve the defects of existing
technologies, so as to further develop the technological wave of artificial
intelligence. In this paper, we analyze the limitations of the technical route
of large models, and by addressing these limitations, we propose solutions,
thus solving the inherent defects of large models. In this paper, we will
reveal how to achieve true AGI step by step.
|
Yongcong Chen, Ting Zeng, Jun Zhang
|
2023-08-12T13:31:02Z
|
http://arxiv.org/abs/2308.09721v1
|
A new solution and concrete implementation steps that can realize a truly universal artificial intelligence
###### Abstract
At present, the mainstream artificial intelligence generally adopts the technical path of "attention mechanism + deep learning" + "reinforcement learning". It has made great progress in the field of AIGC (Artificial Intelligence Generated Content), setting off the technical wave of big models[2][13]. But in areas that need to interact with the actual environment, such as elderly care, home nanny, agricultural production, and vehicle driving, trial and error are expensive and a reinforcement learning process that requires much trial and error is difficult to achieve. Therefore, in order to achieve Artificial General Intelligence(AGI) that can be applied to any field, we need to use both existing technologies and solve the defects of existing technologies, so as to further develop the technological wave of artificial intelligence. In this paper, we analyze the limitations of the technical route of large models, and by addressing these limitations, we propose solutions, thus solving the inherent defects of large models. In this paper, we will reveal how to achieve true AGI step by step.
artificial general intelligence, AGI, AIGC, reinforcement learning, large model, ChatGPT
## 1 Introduction
The current mainstream artificial intelligence model has brought the spark of artificial general intelligence [1], but it is not yet truly general AI. Where is the ceiling of the current AI models? Can "attention mechanism + deep learning" + "reinforcement learning" achieve true Artificial General Intelligence (AGI)? We believe that the current large AI models suffer from the following serious flaws:
### It cannot solve problems independently.
For example, artificial intelligence does not offer to help when it sees its owner fall [7]. This is because machines do not have their own needs, so they cannot produce their own goals. Since a machine does not have its own goals, it cannot proactively create tasks. In other words, large models do not create new processes!
The big model is essentially a programming platform, and the programming language used is natural language [14]. So, no matter how many high-level functions we add to the big model, and no matter how many tools we integrate into it, the big model will not spontaneously create new processes. All of its processes are preset, coming either from program presets or from data statistics. Both approaches are, essentially, "applying preset, off-the-shelf processes to all problems" [8][9][10][11][12], no matter how many if...else... branches such a process contains. However many possibilities it considers, the process is preset and exists in advance; it is not created by the machine itself for the specific task at hand. Therefore, a machine that makes decisions according to predetermined processes is a "bookworm" kind of machine intelligence: its decision-making is not flexible and struggles to cope with the endless unexpected situations of real life, which is also the current dilemma of artificial intelligence.
### Knowledge cannot be updated in real time.
At present, artificial intelligence is trained on big data, and its knowledge cannot be updated in real time. Real-time updating of knowledge is crucial for machines that interact with the environment, because the interaction between the machine and the environment is the process by which the machine acquires new knowledge. If the knowledge acquired by the machine cannot be updated in real time, the machine, facing the same input information, will keep making the same mistakes [3].
### It cannot be applied to areas that require interaction with the real environment.
In areas that need to interact with the real environment, such as autonomous driving, doing housework, and caring for patients, machines need to build knowledge of the interactive decision-making between their behavior and the external environment. It is difficult to carry out large amounts of trial and error in these areas, so machines cannot use reinforcement learning and interaction in a real environment to build decision knowledge there [2][4]. We hope that in the future every family will have a machine nanny, all vehicles will be autonomous, and robots will undertake all industry, agriculture, and services, so that humans' main job is to enjoy the beauty of life. But the current artificial intelligence technology solutions still cannot achieve the above scenario.
## 2 How to create knowledge?
### How to describe the information contained in a matrix?
Although a matrix may contain a lot of information, we can express all the information in the matrix by establishing a set of coordinate base clusters. If this set of coordinate base clusters is complete and orthogonal, it can express all the information in the matrix. If the coordinate base clusters we establish are not orthogonal but are complete, then we can still use them to express any information in the matrix. If the coordinate base cluster is not complete, then there are some vectors in the matrix that cannot be expressed through this set of coordinate base clusters, and we need to increase the dimension of the coordinate base cluster.
If the base coordinate clusters are orthogonal, then we can express the full information of a vector with the most concise coefficients. If the base coordinate cluster is not completely orthogonal, then we want it to be as close to an orthogonal base cluster as possible, so that the coefficient matrix we obtain is sparse (a highly efficient expression). But if we only care about some of the common information in the matrix, we can use the common information modes as the coordinate base. Such a base is not an efficient expression for the overall information, but for the common information it is an efficient way of expression (the coefficient matrix is sparse). So, if we live in an information matrix space, when we need to identify, analyze, and generate a wide variety of information, the most important thing is to find a suitable set of base coordinates in the information matrix space.
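As a small numerical illustration of this point (our example, not taken from the paper), the numpy sketch below expresses the same vector in an orthonormal basis, in a complete but non-orthogonal basis, and in a basis aligned with a "common pattern"; the information is preserved in every case, but the coefficients are sparsest in the basis aligned with the pattern.

```python
import numpy as np

v = np.array([3.0, 1.0])                      # a vector ("information") in a 2-D space

# 1) an orthonormal basis: coefficients are just inner products
Q = np.array([[1.0, 0.0],
              [0.0, 1.0]])                    # columns are basis vectors
coef_q = Q.T @ v                              # -> [3, 1]

# 2) a complete but non-orthogonal basis: still expresses v exactly,
#    but the coefficients must be found by solving a linear system
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])                    # columns are basis vectors
coef_b = np.linalg.solve(B, v)                # -> [2, 1], and B @ coef_b == v

# 3) a basis aligned with a "common pattern": the pattern itself becomes sparse
pattern = np.array([3.0, 1.0]) / np.sqrt(10)  # direction of the frequent pattern
P = np.column_stack([pattern, [-pattern[1], pattern[0]]])  # orthonormal completion
coef_p = P.T @ v                              # -> [sqrt(10), 0]: one nonzero coefficient

for name, coef in [("orthonormal", coef_q), ("non-orthogonal", coef_b), ("pattern-aligned", coef_p)]:
    print(f"{name:>15}: coefficients {np.round(coef, 3)}")
```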
### How do humans create knowledge?
The information that humans can recognize is only a tiny fraction of the information in our world. This is because we humans have a limited resolution of information. The relative spatiotemporal relationship between the arrangement of two atoms A and B on a blade of grass is also a kind of information, but we will not recognize it.
So, in the process of evolution, humans have developed the ability to recognize Tokens. A Token is the smallest information unit commonly used by humans, such as a straight line. A Token is in itself a kind of "world model"; it is the smallest "world model" used by human beings to build the magnificent palace of knowledge. In the process of evolution, humans have formed the "pattern recognition" ability to adopt "models" such as Tokens to identify the surrounding information, which greatly improves the energy efficiency ratio of information recognition. It is a gift from evolution.
Therefore, we take the minimum information units familiar to human beings, such as points, lines, surfaces, colors, texture, curvature, syllables, tones, symbols, touch, temperature, direction, etc., as Tokens; thus we humans live in a 4-dimensional matrix composed of Tokens (three-dimensional space + time dimension). For humans, this 4-dimensional Tokens matrix, from the Big Bang to today, contains the whole of knowledge.
Humans slowly came to use certain symbols (language symbols) to represent common Tokens combinations; these are concepts. Humans use concepts to describe any information in the matrix (any vector), for example when chatting or writing articles. These concepts are a set of coordinate base clusters in our information space matrix.
Under such base clusters, the coefficient expression of common information (vectors) is sparse. For example, "investor" represents "human, family, rich, wants to make more money, finds someone to help him earn, takes risks, signs an agreement, shares the income...".
Concepts contain common Tokens combinations and also contain language symbols. And because language symbols appear more frequently and are more representative, they may become the most commonly used entrance to a concept.
Obviously, human concepts are not orthogonal. Humans are accustomed to taking frequent combinations of Tokens as concepts. The Tokens combinations included in different concepts may be non-overlapping, partially overlapping, or completely contained in one another. Those common Tokens that exist in a large number of things are highly representative but have low resolution and are few in number; they correspond to abstract concepts. On the basis of abstract concepts, more Tokens are added to form more concrete concepts, which represent a narrower range with a higher resolution.
Although such coordinate base clusters are not efficient for expressing all information, they can express common information efficiently. Concepts such as "cat" and "dog" may share a large number of common Tokens and are therefore non-orthogonal, but they are highly efficient for expressing the everyday information that matters to humans.
And this is crucial for the generalization of information. The properties of things are essentially a combination of the properties of the Tokens that make them up. For example, "cat" is a common arrangement of Tokens in space and time, which may include language, text, sound, image, action, touch and other multimodal matrix elements. In this arrangement, some elements may have a higher weight because they are more common, and they may all belong to the concept of "animal". "Animal" contains fewer elements and is more widely applicable, so between "cat" and "dog", their shared Tokens (such as the Tokens related to the concept "animal") can be directly reused. This is the process of information generalization, and it is also the origin of intelligence.
### How does deep learning work?
In deep learning, behind the coefficients of each layer of the neural network lies a set of implicit coordinate bases. Going from layer A to layer A+1 is essentially a base transformation: the coefficient matrix of layer A (which, together with the coordinate base cluster corresponding to layer A, expresses the information) is transformed into the coefficient matrix of layer A+1 (which, together with the coordinate base cluster corresponding to layer A+1, expresses the same information). The information is then partially compressed or discarded by the nonlinear function. The essence of deep learning is to use "trial and error" to find a suitable set of coordinate base clusters that makes the coefficient matrix of the "useful information" in the input sparse.
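As a minimal illustration of this view (our own sketch, not code taken from any particular framework), the snippet below treats a single linear layer as a change of basis: the rows of the weight matrix W act as an implicit coordinate base, and the layer's output is the coefficient vector of the input in that base, followed by a nonlinearity that compresses part of the information.

```python
import numpy as np

rng = np.random.default_rng(0)

# An input vector expressed in the "raw" base (each raw feature is one dimension).
x = rng.normal(size=4)

# The rows of W act as an implicit coordinate base for the next layer.
W = rng.normal(size=(3, 4))

# The layer output is the coefficient vector of x projected onto that base,
# followed by a nonlinearity that compresses / discards part of the information.
coefficients = W @ x
activations = np.maximum(coefficients, 0.0)  # ReLU

print("coefficients in the implicit base:", coefficients)
print("after the nonlinearity:", activations)
```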
The purpose of the residual network is to reduce the amount of information lost in each layer, so that the network can perform many transformations and thus have a greater probability of finding a preferred base cluster. The purpose of regularization is to push the hidden bases of the intermediate layers as close as possible to an orthogonal system, so as to avoid the influence of individual dimensions and the emergence of local optima. It achieves this by forcing the coefficient matrices of the middle layers toward sparsity.
The high-dimensional features created by deep learning are a set of coordinate base clusters in its information matrix. But deep learning has no constraint of "using common Token combinations". Working by trial and error under an error constraint, the coordinate base clusters it establishes tend toward efficient, near-orthogonal expression. They are more efficient at expressing the overall useful information, but differ from human habits (humans only need to express common information efficiently), so "deep learning" and "humans" talk past each other.
In a large model, the attention mechanism, in essence, estimates how common (how probable) the local statistical knowledge obtained through pre-training is, so that the common Token combinations become the preferred coordinate base cluster for identifying, analyzing and generating all kinds of Token combinations.
Such a base cluster is more in line with human habits, which is why large models and humans can communicate in language. Its overall expression efficiency is not necessarily high, but it is efficient for expressing common information.
### What is the nature of the attention mechanism?
The core of the attention mechanism is a kind of Bayesian inference (conditional probability). It can be summarized as: given that a combination of N Tokens has occurred, what is the probability of a following combination of M Tokens?
In human language, the combinations of N Tokens are almost endless, and so are the combinations of M Tokens. Therefore, it is impossible for the machine to solve exactly the problem "given a known combination of N Tokens, what is the probability of a combination of M Tokens?". In the multimodal setting, the problem is even more pronounced. The machine can therefore only estimate, from a limited number of statistics, the probability of the M-Token combination that follows the N-Token combination. This is the nature of the attention mechanism. The weight matrix obtained by pre-training is a finite amount of statistical knowledge, and the attention mechanism uses this limited statistical knowledge to infer, for the current Tokens, the probability of the following M Tokens. If, within the N + M Tokens, some Tokens have high weights, they often appear together, so they are more likely to form common Token combinations.
This is the central idea of the attention mechanism: it is a way to find common arrangements of Tokens. Its essence can be regarded as Bayesian inference combined with neural networks.
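For concreteness, here is the textbook scaled dot-product attention written out in NumPy. It is the standard formulation that the discussion alludes to, not code from this paper, and the toy dimensions are arbitrary; the attention weights express how strongly each query Token attends to each key Token.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each row of `weights` is a
    probability-like distribution over the key Tokens."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity between Token representations
    weights = softmax(scores, axis=-1)   # normalized attention weights
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))   # 5 input Tokens, dimension 8
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
out, w = attention(Q, K, V)
print(w.round(2))             # each row sums to 1
```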
Therefore, the coordinate base clusters created by deep learning supported by the attention mechanism are closer to the way humans create concepts. That is why large models and humans can communicate in language[15].
In a language model, the "common Token combinations" are the "common expressions". They contain the organization of common Tokens, which is similar to grammar; they also contain specific "common phrases", and the model's set of "common phrases (including grammar)" is much larger than the human one.
The attention mechanism is very similar to human learning. When we learn the information in a book, we "first read it thin, then read it thick". "Reading it thin" means summarizing the frame information, which is an information compression process; "reading it thick again" means adding different details on the basis of the frame information (combining other vectors into new vectors) to form new knowledge, which is an information generation process[17][18][19].
### How does the large model work?
In a large model, when information is input, the inference process of the attention mechanism is the projection of the input vector onto the coordinate base cluster. The weights obtained by the attention mechanism are the coordinate values[15].
In the first layer of the large model, the input Tokens are projected onto the weight matrix, which is a vector decomposition process. The second layer then projects the weighted combination of input Tokens onto the Token combinations of the pre-trained weight matrix (a combination-to-combination projection). After multiple layers of the attention mechanism, the result is a projection decomposition of the input Token combinations onto the pre-trained Token combinations.
The weight coefficient matrix output by the last attention layer, together with the coordinate base cluster implied behind it (with common Token combinations as the coordinate base cluster), forms the re-description of the input information (the self-attention mechanism).
Therefore, the working principle of the large model is as follows: (1) it takes the Token combinations of the pre-trained weight matrix as the base cluster, where the weight matrix is the local statistical information obtained from the training material by trial and error; (2) it uses the attention mechanism to project the input Token combinations onto the weighted Token combinations (vector decomposition), and the weights obtained during inference are the coordinate values; (3) with the vector components, it can find a large number of adjacent vectors, and the next vectors that follow these adjacent vectors form the output vector. The proximity relation of the vectors shows up as the probability distribution of the output vector.
So, the large model is an autoregressive prediction model. However, it first transforms the original input base (in which each Token is a dimension) into a coordinate base cluster in which each common Token combination is a dimension, and then performs the autoregressive prediction.
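A toy autoregressive loop may make this concrete. The bigram table below is only a stand-in for the pre-trained mapping from a Token combination to the probability of the next Token; the vocabulary, the probabilities and the sampling setup are invented for illustration and are far simpler than a real large model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in for pre-trained statistics: P(next token | previous token).
vocab = ["we", "hope", "for", "world", "peace", "."]
P = rng.dirichlet(np.ones(len(vocab)), size=len(vocab))  # each row sums to 1

def generate(start, steps=5):
    seq = [start]
    for _ in range(steps):
        probs = P[vocab.index(seq[-1])]                     # "project" onto known combinations
        seq.append(vocab[rng.choice(len(vocab), p=probs)])  # sample the next Token
    return seq

print(generate("we"))
```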
### Why are large models able to "emerge"? When does emergence occur?
Why do large models "emerge"? A simple analogy: when an American comes to China, he can complete a correct translation through a large amount of shared background information (such as personal needs, social structure, etc.) plus a moderate amount of comparison between Chinese and English.
But the large model is like an alien: there is no shared background information between it and humans, and all it sees is the way human information is connected. So it must extract the way human information is connected in order to predict how information develops. At first, when the sample is insufficient, its "information framework" is very different from the human "information framework", so it keeps making mistakes and gropes in the dark. As the number of samples increases, its "information framework" has an ever higher probability of aligning with the human one. But this is not a linear process. It is like deciphering an ancient language: before reaching a certain threshold, one gropes in the dark with little progress; once the accuracy reaches the threshold, the whole decipherment accelerates sharply and is quickly completed. This is the "emergence" phenomenon. What "emerges" in the machine is not intelligence, but the discovery of the right "common way of combining Tokens". Because the criterion for evaluating a machine's ability is the human standard, its ability emerges when its base becomes close to the human one.
### Can RLHF ultimately solve the problems faced by large models?
There are two serious problems with the large model:
#### 2.7.1 The hallucination problem[20]
At present, the core capability of large models is to transform the input information onto the coordinate base cluster composed of common Token combinations (vector projection decomposition), which is a base transformation in information space.
It then uses the obtained coefficient matrix (the inference weights of the attention mechanism) to find multiple similar "pre-trained vectors" (component-weighted comparison). Then, following the mapping relationship obtained in pre-training, it moves from these similar "pre-trained vectors" to the "next vector" and selects one as output. This is the autoregressive prediction process, and it is how GPT-class large models work. So, what the large model optimizes is its "parameters", and each parameter corresponds to a set of Token combinations. On the surface, the large model works by optimizing network parameters; in essence, it is optimizing the common Token combinations, that is, looking for a set of optimal coordinate base clusters. Each layer of coefficients of the neural network corresponds to an underlying cluster of base coordinates.
Large models have only the "common Token combinations" derived from huge amounts of data; they have no factual memory. Therefore, facing the input Tokens, the large model can only decompose the input information onto the "coordinate base cluster" and then obtain the next Token with some probability. This process proceeds iteratively, and it is in itself a creative process. If a fact is itself "common", then the fact is retained in the form of "common Token combinations". If a fact is not retained as a "common Token combination", or the fact itself does not carry enough weight, then the machine invents information. GPT is by construction an information generator, so the hallucination problem is part of its job[17][18][19]; GPT therefore has no solution to this problem.
For example, the machine finds that the profiles of many journalists are followed by links to their other articles, or by awards they have won in the past. If the machine sees this pattern of information organization, then this pattern becomes a mapping from "framework" to "framework". So if the input information contains a similar frame, differing only in the reporter's name, then the machine can map "frame + details" to "frame + details" and produce plenty of web links or awards in the output. But these web links and awards are themselves built by mapping "frame + other vectors" to "frame + other vectors", and they probably do not exist at all!
In order to solve the hallucination problem of the large model, many people expect to plug in a "vector database" and let the large model query factual knowledge to eliminate hallucinations. This is another version of the attempt to build general AI out of an encyclopedia. Whether it is a "vector database" or a "knowledge graph", it cannot solve the hallucination problem, because this knowledge is a plug-in and cannot be integrated with the large model's own knowledge. It is like an ordinary person taking a dictionary and trying to open a translation company: they will run into the same problems that expert systems ran into.
#### 2.7.2 The harmful content problem[20]
In large models, the attention mechanism is correct, but deep learning is flawed.
In the large model, the main purpose of the positional encoding in the Transformer model based on self-attention is to add the positional information of the Tokens, so that their positional relationships can be used. This is necessary for the attention mechanism, because its job is to find the temporal and spatial relationships of the Tokens.
However, through the multi-layer deep learning network, the large model finds the "optimal coordinate base cluster" after multiple base transformations under the error constraint. But the Token combinations of this "optimal coordinate base cluster" no longer preserve the temporal and spatial relationships of the original Tokens. While they may still retain some of the organizational information between Tokens (the deep learning process is irreversible, so the positional information of the Tokens is only partially retained), they are difficult for humans to understand and exploit. So, we believe that deep learning destroys the original temporal/spatial organization of the Tokens.
We can think of the large model as performing a lossy translation, translating human Tokens into its own language. The problem is that human beings do not master the language of the large model, so they cannot understand the knowledge created by the large model, cannot imitate its form of knowledge organization, and cannot implant "innate knowledge" into it. This is the core of the problem.
Moreover, because the large model cannot learn from small samples or accumulate knowledge incrementally, it needs very large samples and takes its knowledge shape all at once, which makes it even harder for humans to understand its form of knowledge organization. Because machines have no needs of their own, they cannot have self-perceived rewards and punishments. Without self-perceived rewards and punishments, it is impossible to spontaneously create a projection of a vector (information) onto a reward or punishment dimension. That is to say, the base coordinate cluster created by the machine lacks rewards, punishment, happiness, sadness and the other basic dimensions that are unique to, and indispensable for, humans.
The current remedy used by large models is RLHF. This is equivalent to humans appending a reward-dimension component to particular vectors, that is, adding a reward dimension to the machine's base coordinate cluster. If, in the training data, the component value on the reward dimension is increased for a sufficient number of vectors of many different types, this establishes a projection from the common component combinations in these training vectors onto the reward dimension. This is the machine's reward function. The machine can then predict the reward component contained in the output vectors produced by different decisions, that is, by different combinations, and it will prefer outputs with a high reward component. This is the remarkable effect of RLHF, and the knowledge learned through RLHF can actually be generalized. When a machine has its own dimension of reward and punishment, it has a preliminary "consciousness" of seeking benefit and avoiding harm, which is why we can see the hazy shadow of "consciousness" in current large models.
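As a heavily hedged sketch of the "reward dimension" idea only (not the actual RLHF pipeline, which trains a separate reward model and then fine-tunes the policy with reinforcement learning), the snippet below fits a linear projection from invented response features to human scores and uses it to rank candidate outputs; every name and number here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature vectors of candidate responses (illustrative, 6-dimensional),
# and human scores for a small labelled set.
X_labelled = rng.normal(size=(50, 6))
human_scores = X_labelled @ np.array([1.0, 0.0, -0.5, 0.2, 0.0, 0.0]) + 0.1 * rng.normal(size=50)

# Fit a linear "reward dimension": a projection from response features to a scalar reward.
w, *_ = np.linalg.lstsq(X_labelled, human_scores, rcond=None)

# At inference time, score candidate outputs and prefer the one with the highest
# predicted reward component.
candidates = rng.normal(size=(4, 6))
predicted_reward = candidates @ w
print("chosen candidate:", int(np.argmax(predicted_reward)))
```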
But it is a patch: the machine tries, then humans score and give feedback, so it can only be used in areas where a lot of trial and error is possible. It is like a child who has graduated with a PhD but has no concept of "right and wrong"; the parents can only shout "No", "No", "Yes" afterwards to give him the concept of "right and wrong", and he cannot communicate directly with his parents except through "Yes" and "No". Such learning is inefficient and may always run into unexpected corner cases!
Is "attention mechanism + deep learning + reinforcement learning" the right path to artificial general intelligence?
### Can the big model achieve AGI?
We believe that the large model proves that its general direction is right, but we do not think that large models are the right way to achieve general AI.
In terms of NLP, humans went from the early bag-of-words model and word vectors to ELMo[21], and only with the Transformer was the attention mechanism truly realized. Combining deep learning with the attention mechanism[22] can produce optimized coordinate base clusters similar to human expression, which is why the Transformer can produce intellectual "emergence".
However, we note that the path of the large model is: first establish preliminary relationships; then adjust the coordinate base cluster; then obtain the correct relationships under the preferred coordinate base cluster. Such a mechanism requires a huge amount of data and computation, and knowledge is formed through the training process, which is difficult to update in real time[2][3].
At the same time, the reward function appears only after the fact, which makes it inapplicable to domains where trial and error is difficult, such as interactive decisions in real environments (autonomous driving, home care, industry, agriculture, business, services, government management, etc.).
In addition, the idea of "task-oriented reinforcement learning" is wrong. The reason human beings are "general" is that we face all tasks by making decisions according to "seeking advantage and avoiding harm". Machines should do the same. There are thousands of tasks; task-oriented reinforcement learning can never learn them all, and for many tasks trial and error is extremely costly!
### What kind of road is the right way to reach AGI?
The current problems with the large model can be described as follows:
(1) The attention mechanism is right, but deep learning is flawed.
Deep learning destroys the original temporal/spatial organization of the Tokens. The knowledge generated is difficult to understand and cannot be imitated. So humans cannot imitate its organizational form and implant innate "self-needs" (innate knowledge) into machines.
Without "self-needs", there can be no "own ideas" or "independent decisions". The machine can only follow a predetermined process (either preset or statistical) and make passive "decisions" without flexibility, which is the big problem of AI at present.
(2) The idea of "task-oriented reinforcement learning" is wrong.
The reason human beings are "general" is that we face all tasks by making decisions according to "seeking advantage and avoiding harm". Machines should do the same. There are thousands of tasks; task-oriented reinforcement learning can never learn them all, and for many tasks trial and error is extremely costly! Take caring for children: no one wants to hand their children over to a machine for experiments!
So, our solution is to:
(1) Realize the attention mechanism without destroying the original temporal/spatial organization of the Tokens, so that the knowledge created can be understood and imitated.
(2) Imitate this organizational form of knowledge and implant "innate needs". "Innate needs", as a special class of Tokens, form common combinations with other Tokens through the attention mechanism. These common combinations are common sense (that is the world model)!
(3) The machine learns only one thing, "how to meet its own needs", and deals with only one thing, "how to meet its own needs". This is general decision-making.
(4) Because the original temporal/spatial organization of the Tokens is not destroyed, the machine can directly obtain the temporal and spatial arrangement of Tokens through language symbols. This arrangement can be understood and imitated, so machines can directly acquire all the experience accumulated over the history of human civilization through language learning! Machines no longer need to go through their own "evolutionary history"!
## 4 Step-by-step implementation of general AI
Here are the 10 steps to implement our protocol.
Step 1, tokenize the information (like any other AI technology).
Step 2, matricize the Tokens (build a memory bank).
Step 3, the input Tokens propagate activation values to the Tokens in the memory bank according to their similarity relationships.
Step 4, all the activated Tokens propagate activation values to their adjacent Tokens, following the proximity relationship.
Step 5, each activated Token, in turn, spreads its activation value through the memory bank.
In Steps 3 to 5, the higher the similarity, the larger the transfer coefficient; the closer the storage location, the larger the transfer coefficient; and the higher the memory value of a Token, the larger the transfer coefficient.
Step 6, the activation values that each Token receives from different propagation paths are accumulated.
Step 7, the activation values of all Tokens decay over time. Steps 3 to 7 constitute the chain association activation process, which is the inference process of the attention mechanism; the activation value is the inference weight.
Step 8, each Token updates its memory value according to the size of the activation value it obtained, and all memory values fade over time. Each Token's memory value is its pre-trained weight. In memory, there are a large number of Token combinations; those that recur contain Tokens that activate each other each time and push up the activation value, thus obtaining higher memory values. So if a combination of multiple Tokens appears in the input, that Token combination has a higher probability of getting a high attention weight. The chain association activation process is therefore a propagation of activation values that favors "Token combinations".
Step 9, preset the minimum innate requirements (innate knowledge, composed of Tokens + memory values + their arrangement). Innate needs are created by imitating the organizational form of knowledge and establishing innate knowledge. Innate knowledge can include the minimum innate needs, rewards and punishments, emotions, and the necessary innate safety instincts; of course, other knowledge can also be preset. This knowledge exists as part of the memory bank and integrates seamlessly with acquired memories to form the overall memory bank. The "fine-tuning" of innate knowledge is achieved through the accumulation of acquired knowledge (including feedback).
Step 10, let the innate needs, rewards, punishments and emotions (represented by special Tokens) and the acquired information (the ordinary Token information flow) form, during the machine's training and life, a temporal information flow that is stored. Then, through the chain association activation process plus the attention mechanism, a fully connected knowledge network (the memory bank) is formed. Our scheme ends up with a memory bank in which each Token is a data record consisting of the 4 fields shown in Table 1:
Time mark: represents the temporal relationship of the Tokens to each other.
Token: the Token itself; it can be data from images, voice, or other sensors.
Memory value: represents the pre-training weight.
Activation value: represents the inference weight of the attention mechanism.
A large number of Tokens are stored at time intervals, and a knowledge network is formed through optimization (the chain association activation process plus the memory and forgetting mechanism: survival of the fittest).
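A direct transcription of this record layout into code may help; the four field names follow Table 1, while the types, defaults and the toy storage loop are our own assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TokenRecord:
    record_time: float             # time mark: relative temporal position of the Token
    token: str                     # the Token itself (could be image, audio or other sensor data)
    memory_value: float = 0.0      # pre-training weight
    activation_value: float = 0.0  # inference weight of the attention mechanism

@dataclass
class MemoryBank:
    records: List[TokenRecord] = field(default_factory=list)

    def store(self, token: str, t: float) -> TokenRecord:
        rec = TokenRecord(record_time=t, token=token)
        self.records.append(rec)   # simultaneous storage: appended in time order
        return rec

bank = MemoryBank()
for t, tok in enumerate("we hope for world peace".split()):
    bank.store(tok, float(t))
print(len(bank.records), "records stored")
```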
The knowledge network is the memory bank. The network nodes are the Tokens; the network connections are the activation-value transfer relationships. However, it should be pointed out that the activation-value transfer relationship is determined by the relative positions of the Tokens, the similarity between Tokens, their memory values, and the size of the initial activation values the Tokens obtain. The Tokens are input first, and only then is the activation-value transfer relationship between Tokens temporarily established; this transfer relationship is not fixed.
The memory value represents the pre-training weight; the activation value represents the inference weight under the attention mechanism. So, in our scheme, knowledge acquisition and inference are integrated, and innate knowledge and acquired knowledge are integrated.
In the memory bank, there are both objective Tokens and subjective Tokens, and the connections formed through the attention mechanism are "information". The permutation of all Tokens is all information, which has high dimensionality. "Knowledge" consists of the arrangements that can repeat (in time and in space); they are the repeatable part of the information, so they contain fewer Tokens, are more representative, more applicable and more abstract, and therefore have fewer dimensions. Common sense is further limited to the "knowledge" that humans hold in common.
In our machines, memory banks can be inserted, modified, or merged, so knowledge can be shared between machines directly through memory banks. For example, a chef robot, by loading a doctor robot's memory, can directly acquire the doctor's skills. There is no need to combine the "chef big data" and the "doctor big data" and spend tens of millions of dollars and several months redoing the pre-training.
### Detailed description of each step
#### 4.1.1 Step 1, tokenize the information
The machine only needs to discretize the input information, giving priority to the whole and to low resolution, and extract the underlying Tokens (such as overall outline, texture, topology, lines, image corners, ridges, vertices, and the main low-level speech Tokens such as time-domain/frequency-domain pitch and timbre).
They are then stored in the memory bank in chronological order, and that is enough. Special emphasis: there is no need to identify them; saving them is enough. It does not matter if the extracted Tokens are somewhat random or the algorithm is imperfect, because our algorithm is based on accumulating common Token combinations (the "world model"), which in turn guides the machine on how to "extract on demand". Common Token combinations contain both the common Tokens and their organizational forms.
So the process of Token extraction is a step-by-step optimization process. After the Tokens are stored in the memory bank, their memory values and activation values are constantly updated by the chain association activation process and the memory and forgetting mechanism.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Field1 & Field2 & Field3 & Field4 \\ \hline Record Time & Token & Memory value & Activation value \\ \hline \hline \end{tabular}
\end{table}
Table 1: composition of each Token data record
Through survival of the fittest, those widespread Tokens or Token combinations are retained and form more complex Tokens. Those Tokens that rarely repeat are eliminated and are no longer extracted.
So, the strategy the machine uses to process Tokens is also: look for widespread combinations in the raw data and take them as Tokens. This is the application of the "common information combinations first" principle to determining the composition of Tokens. This is similar to humans, for whom it is a gift from evolution, because low-level routines such as Token extraction need to be reused extensively to achieve maximum energy efficiency.
#### 4.1.2 Step 2, matricize the Tokens
Each Token corresponds to a record in the memory bank with the 4 fields shown in Table 1. The memory value indicates memory strength, and records whose memory value reaches zero are removed. The activation value indicates how strongly the Token is activated; zero means not activated. All records spontaneously constitute the entire memory bank according to the simultaneous storage method. The specific embodiments of the simultaneous storage method include: (2.1) the machine retains the relative temporal position at which the Tokens appear in the input information.
One implementation is that the machine uses the distance between Tokens in the storage space to reflect the time distance between the moments at which they were stored; for example, the machine stores Tokens in order of input time, so the closer two Tokens are in time, the closer their storage positions.
Another method for retaining the relative temporal position is that each Token has coordinates in the memory space, which mainly encode the storage time of the Token.
(2.2) The machine retains the relative spatial position of the Tokens in the input information; one implementation overlays the extracted Tokens on the original data and keeps their relative spatial positions during storage.
Yet another implementation: first extract the overall low-resolution Tokens, and then, based on the machine's decisions, extract other local Tokens on demand. In this way, through the proximity of storage, local Tokens and overall Tokens have both a proximity activation relationship and similarity relationships with each other, so they will activate each other and establish positional connections.
#### 4.1.3 Step 3, similarity activation from the input Tokens to the Tokens in the memory bank
Each input Token is given a uniform initial activation value A0. A0 is itself a preset numerical value, but it can be adjusted by the activation values of the reward and punishment symbols activated during the machine's previous chain association activation process.
The activation values of the activated reward and punishment symbols are the potential reward and penalty values the machine predicted for the previous input. The initial activation value A0 determines the range of the chain association activation process: when A0 is high, the chain association activation spreads more widely. This is because, in our scheme, the activation-value propagation coefficient is less than 1; as the number of propagation steps increases, the propagated value becomes smaller and smaller, and the chain stops when a Token receives an activation value below a preset threshold. So A0 reflects how much the machine values the input information. When A0 is high, the machine activates more Tokens in memory. This is similar to humans: if the previously input Tokens carried high potential rewards or penalties, then new related inputs receive particular attention. The arrival of the boss, for example, makes you associate more information.
The principles of similarity activation are: (1) the higher the similarity between Tokens, the larger the transfer coefficient; the similarity is the dot product of the Token vectors. (2) The higher the memory value, the larger the transfer coefficient; the memory value is the pre-training weight. It should be emphasized that the same Token may appear in many locations in the memory bank, each occurrence with its own memory value, because the weight of the same Token differs from occurrence to occurrence. This is similar to the attention mechanism in the large model.
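A minimal sketch of this similarity-activation rule, assuming a particular form for the transfer coefficient (the text only states that it grows with similarity and with memory value, not its exact shape); the tokens, memory values and the toy similarity function are invented for illustration.

```python
def similarity(tok_a: str, tok_b: str) -> float:
    # Stand-in for the dot product of Token vectors: 1.0 for identical Tokens here.
    return 1.0 if tok_a == tok_b else 0.0

def transfer_coefficient(sim: float, memory_value: float) -> float:
    # Assumed form: positively correlated with similarity and memory value, below 1.
    return sim * (memory_value / (memory_value + 1.0))

A0 = 90.0  # initial activation value given to every input Token
memory = [("peace", 126.0), ("war", 40.0), ("peace", 60.0)]  # (token, memory value)

# The input Token "peace" passes activation to each memory-bank record.
for tok, m in memory:
    a = A0 * transfer_coefficient(similarity("peace", tok), m)
    print(tok, round(a, 1))
```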
#### 4.1.4 Step 4, all activated Tokens perform proximity activation
We believe that the proximity relationship between Tokens represents some implicit correlation between them: the closer in time they occur, the closer their potential relationship. This correlation can be counted by the chain association activation process plus the memory and forgetting mechanism. The proximity relationship actually reflects a combination relationship between Tokens; if this combination repeats, then it is a common combination. So we find common combinations through the proximity activation part of the chain association activation process.
Each activated Token then transmits its activation value to its adjacent Tokens; the closer the temporal position, the larger the transfer coefficient, and the higher the memory value, the larger the transfer coefficient. If Tokens are close to one another in the memory bank, it shows that they once formed a combination. If their memory values are all high, it indicates a common combination. If only one Token has a high memory value, they are not a common combination. If neither Token has a high memory value, then the propagated activation value is low and the chain propagation stops quickly, meaning that such information is unimportant and carries a low weight in information processing. Activating in this way the common combinations that contain the input Tokens is essentially the projection of the input Tokens onto a coordinate base composed of sets of Tokens.
If the N input Tokens project onto combination X (a Token combination) in memory, then combination X gets a high activation value, because each input Token activates multiple Tokens in combination X through both similarity and proximity, so combination X accumulates a high activation value. The "model" composed of these high-activation Tokens is the expected model activated by the input vector (the N Tokens), i.e. the world model.
In essence, this is the decomposition of a vector onto a coordinate base, and it is also an information recognition process.
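Continuing the sketch from the previous step, one possible form of the proximity-activation rule (again, only the monotonic relationships are stated in the text; the concrete function and the numbers are assumptions):

```python
def proximity_coefficient(time_distance: float, memory_value: float) -> float:
    # Assumed form: decays with temporal distance, grows with memory value, stays below 1.
    return (1.0 / (1.0 + time_distance)) * (memory_value / (memory_value + 1.0))

# An activated Token at time 5.0 with activation value 80 passes activation
# to its neighbours stored at times 4.0 and 7.0.
activated = (5.0, 80.0)                    # (record time, activation value)
neighbours = [(4.0, 126.0), (7.0, 20.0)]   # (record time, memory value)
for t, m in neighbours:
    passed = activated[1] * proximity_coefficient(abs(t - activated[0]), m)
    print(t, round(passed, 1))
```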
#### 4.1.5 Step 5, each activated Token in turn spreads its activation value through the memory bank, following the similarity and proximity activation principles
Each input Token performs "similarity activation" and "proximity activation" in the memory bank, and the amount of activation value transferred is positively correlated with the pre-training weights (memory values).
Each activated Token in the memory bank likewise performs "similarity activation" and "proximity activation", and the amount of activation value transferred is positively correlated with the pre-training weights (memory values).
This process proceeds in a chain until all input Tokens have completed their own "chain activation process". So, in addition to activating the memory vectors similar to the input vector, the machine also activates the "antecedents" and "consequences" of those memory vectors, that is, the information before and after them in time in the memory bank; and, possibly through different memory segments, it activates different "antecedents" and "consequences". This allows our scheme to speculate about the possible preceding vector and to predict the possible next vector.
Since the strategy we adopt is "overall low-resolution Tokens first", the spatial positional relationships of the information are actually established through temporal positional relationships. When information is input, the machine first extracts the "overall low-resolution Tokens" and stores them in the memory bank. A chain association activation process is then initiated. After it completes, the decision is made by counting the activation values of the activated reward and punishment symbols.
The machine's decision-making principle is to seek advantage and avoid harm. The decision may be to identify the information further, or something else. If the decision is to identify the information further, then the machine takes the Token combination patterns with currently high activation values (including language Tokens) as the expected model and proactively confirms the high-activation Tokens that have not yet appeared in the input. The method is to imitate past experience of obtaining these Tokens by adjusting its own sensor system. So this is "pattern recognition" of information, similar to the human recognition process.
These newly acquired Tokens (such as local details) have a temporal proximity relationship with the original "overall low-resolution Tokens" and also partial similarity relationships, so they can be connected by passing activation values to each other. In this way, the newly acquired Tokens establish positional relationships with the original overall low-resolution Tokens. These overall low-resolution Tokens, together with the local Tokens that often accompany them, slowly form a "world model" through the memory and forgetting mechanism.
It should be pointed out that the world model is not a separately created model. It contains Tokens that may be spread throughout the memory bank, and these Tokens are assembled temporarily through similarity, proximity, and high memory values. So it is not static; it exists in a distributed way, temporarily composed of the Tokens with high activation values excited by the input information, and there is no separate model in the memory bank.
#### 4.1.6 Step 6, the activation values are accumulated
If there are activation-value propagation paths between a Token and multiple input Tokens (whether direct or indirect), the activation values passed from the inputs are accumulated. So a Token in the memory bank that is connected to multiple input Tokens obtains a higher cumulative activation value from its multiple propagation paths.
In this way, if the input Tokens are associated with each other, they push up the weights of the related Tokens in the memory bank. That is, the activation values of the common combinations rise above the activation "sea level", where the sea level is the low activation value shared by the great mass of Tokens. The Tokens that rise above this activated sea level constitute one or more "world models". Meanwhile, the memories most directly related to the input, even if they are not common, may also obtain high activation values, because they are directly related to the input and have short propagation paths.
Therefore, our scheme can not only obtain the "information framework" of the input through the common Token combinations, but also attend to specific factual details. Our scheme is thus also a "fact database", which can solve GPT's current "hallucination" problem.
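The accumulation rule of Step 6 can be stated in a few lines of code (the Token names and values are invented for illustration):

```python
from collections import defaultdict

# Activation contributions arriving at memory Tokens from different propagation paths.
contributions = [("peace", 30.0), ("peace", 25.0), ("world", 12.0), ("peace", 8.0)]

accumulated = defaultdict(float)
for token, value in contributions:
    accumulated[token] += value   # activation values from multiple paths are summed

print(dict(accumulated))          # {'peace': 63.0, 'world': 12.0}
```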
#### 4.1.7 Step 7, activation values subside over time
All activation values decrease continuously over time. When the following Tokens are input, the related Tokens in memory are activated; if the Tokens activated by the previous input have not completely subsided, the activation values accumulate.
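One possible decay rule; the text only requires that activation values keep decreasing over time, so the exponential form and the half-life below are assumptions.

```python
def decay(activation_value: float, dt: float, half_life: float = 5.0) -> float:
    # Exponential decay: the activation value halves every `half_life` time units.
    return activation_value * 0.5 ** (dt / half_life)

a = 90.0
for step in range(4):
    print(step, round(a, 1))
    a = decay(a, dt=1.0)
```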
The machine's decision is based on all the activated Tokens, so both the earlier and the later inputs are taken into account. The machine's thinking therefore has a certain temporal consistency, which can solve the problems of "omission", "reference" and "metaphor".
So, our machine takes advantage of the implicit relationship between the earlier and later inputs. That is the attention mechanism!
Further, the machine adjusts the initial activation value A0 assigned to the input Tokens based on the "pros and cons" predicted from the last decision. The initial activation value A0 affects the range and accumulated size of the activation-value propagation. This means adjusting the intensity of attention according to the "pros and cons", which is very similar to humans and, at this point, goes beyond the current technology (the Transformer). It closely resembles the human decision-making process: the arrival of the boss, for example, makes you generate more associations and activate more reward or punishment symbols, so that you predict rewards and penalties more deeply.
#### 4.1.8 Step 8, update the pre-training weight matrix through the chain association activation process + the memory and forgetting mechanism + the principle of seeking advantage and avoiding harm
In our scheme, the Token combinations that are able to recur are likely to obtain higher memory values because of repetition. And because they are recurring combinations, they push up each other's activation values, so they obtain far higher memory values than isolated Tokens do.
And because they can repeat, their combination obtains a higher activation value each time, so they are easier to activate and thus gain memory increments more easily. This is a positive cycle, from which we can see that our machines can learn by themselves. At the same time, forgetting a pre-existing mindset is also a time-consuming process.
So, in our scheme, the machine's pre-training statistics are not simply counting repetitions and then applying memory and forgetting mechanisms; they are completed jointly by the attention mechanism + the memory and forgetting mechanism + the principle of seeking advantage and avoiding harm.
The machine's decision-making process is to seek advantage and avoid harm, and in such decision-making it also identifies information in the same advantage-seeking, harm-avoiding way. So our machines build common Token combinations based on their own needs, and their recognition of the outside world and of themselves is selective recognition.
Memory and forgetting mechanism: every Token in the memory bank that is activated updates its memory value positively according to the size of its activation value. These memory values are the pre-trained weight matrix! Since the Token permutations cannot be exhausted, this is an incomplete statistical process, similar to the pre-training process of large models.
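A hedged sketch of the memory update and forgetting rules of this step; the increment function dm = f(m0, A0) is not specified in the text, so a simple saturating form is assumed here, with constants chosen so that the worked example later in the paper (activation value 90 on memory value 0 giving an increment of 126) is reproduced.

```python
def memory_increment(m0: float, a0: float) -> float:
    # Assumed form of dm = f(m0, A0): grows with the activation value,
    # with diminishing returns as the existing memory value rises.
    return a0 * 1.4 / (1.0 + 0.01 * m0)

def forget(m: float, dt: float, rate: float = 0.02) -> float:
    # All memory values fade over time.
    return m * (1.0 - rate) ** dt

m = 0.0
m += memory_increment(m, 90.0)   # activated once with activation value 90
print(round(m, 1))               # 126.0 with these assumed constants
m = forget(m, dt=10.0)
print(round(m, 1))
```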
The chain association activation process is the inference process of the attention mechanism (the computation going from local statistical weights to weights localized to the input), driven by the input Token combination; it is similar to the attention mechanism in the Transformer.
This process is essentially the projection of the input vector onto the coordinate base cluster established by the attention mechanism. The input vector can be regarded as expressed in the original base cluster formed by impulse functions over the input dimensions, whereas the coordinate base cluster established by the attention mechanism is built on common Token combinations.
The inference weight matrix of the attention mechanism is the coefficient matrix of the projection of the input vector onto the base cluster. In our scheme, the chain association activation process, similar to the multi-layer attention mechanism in the Transformer, is also a projection onto the coordinate base cluster of the attention mechanism: first the separate Tokens are projected, and then their combinations. After the chain activation completes, the high-activation Tokens of one or more high-weight components form the "framework" of the information. Each framework contains many Tokens and is difficult to describe explicitly, but the language symbols, being highly representative and repeatable, may also obtain the highest activation values and thus become the representative Tokens of the "framework". So, in our scheme, the activation values are the inference weight matrix.
In fact, both the large model and our network are kinds of neural networks, and the attention mechanism is, in essence, Bayesian inference. Generally speaking, the attention mechanism works with the conditional probabilities of some Tokens, the joint probabilities of some Tokens, and the joint conditional probabilities of specific Token combinations. This is an application of combining Bayesian inference with neural networks. In the large model, the probabilities of some Tokens and the joint probabilities of some Tokens are encoded in the weight matrix, and probability prediction under a Token combination is performed through multiple correlation operations. In our scheme, the probabilities of some Tokens and their joint probabilities are expressed explicitly in the memory bank: they are the memory values of the Tokens, the relative positions of the Tokens and the similarities between Tokens.
As you can see, the way we realize the attention mechanism is small-sample, cumulative learning. The weight matrix is updated in real time, so in our scheme knowledge is updated in real time. We do not distinguish between pre-training and inference, so our machine learns for its whole life. In addition, our scheme needs no backpropagation algorithm and no pre-training, and its basic amount of computation is close to the inference pass of a large model. Therefore, the computational cost of our scheme is much lower than the Transformer's, and it can also be parallelized. Our scheme can therefore localize the computation of the pre-training process: every machine is a self-training, constantly iterating, constantly evolving agent.
In addition, in our scheme Token extraction can use techniques similar to those of current large models, with a comparable amount of computation. The chain association activation process is highly stereotyped and can be implemented directly at the hardware level with new memory devices. This will help localize the computation in our scheme, which will help expand the deployment scenarios and reduce costs.
#### 4.1.9 Step 9, preset minimum innate requirements
We not only realize the attention mechanism and find the common Token combinations, but also do not disrupt the original temporal and spatial organization of the Tokens. Therefore, the knowledge network formed by our scheme is understandable by human beings. So, we can imitate the organization of the Tokens in the final memory bank and build an initial minimum innate memory for the machine. This is equivalent to giving the machine a minimum of innate knowledge similar to what a human baby is born with.
The innate memory must contain the machine's minimum "demand system", "reward and punishment system" and "emotion system". The method is: use special Tokens to represent each "demand", "reward and punishment" and "emotion", and then imitate the form of pre-training (in fact, an appropriate Token arrangement plus appropriate memory values) to implant the minimum innate knowledge.
In daily life, let these Tokens representing "demand", "reward and punishment" and "emotion" be trained together with the external Tokens that trigger them, be activated together in the chain association, and be remembered and forgotten together. That is, through the attention mechanism, let these special Tokens, like other Tokens, establish common Token combinations. Therefore, we must preset the machine's minimum "demand system", "reward and punishment system" and "emotion system", so that the Tokens representing the outside world (including the machine's own state parameters) can trigger these special Tokens and thereby establish the information flow; and, through the chain association activation process + advantage-seeking decision-making + the memory and forgetting mechanism, gradually obtain the most common Token combinations that the machine cares about most.
In this way, we establish a connection between the "common Token combinations of the objective world" and "needs". The "common Token combinations of the objective world" are the "objective common sense" of the objective world, and the "common Token combinations" composed of the "common Token combinations of the objective world" together with "demands" are "subjective common sense". "Objective common sense" and "subjective common sense" together constitute "common sense".
Common sense is the "world model"; it contains humans' "world model" of the external world, and it also contains the relationship that humans establish between the "world model" and the "I". In particular, Tokens are not only static features but also include simple dynamic features (such as rotation and swinging), so the world model is not static or fixed; it is created under the excitation of the input Tokens!
And every world model is different; it is directly related to experience. In our scheme, the "world model" built by the machine is directly related to its training data as well as to its life experience.
With the world model, the input Tokens can activate the reward and punishment Tokens, emotion Tokens and demand Tokens through the chain association activation process, and the path by which the activation values travel from the input Tokens to these feature Tokens is a logical reasoning process that is compatible with the neural network. It is explicit, can be understood, and can be imitated, so the machine's decisions are transparent.
In fact, Step 9 is essentially the first step in actually building a general AI. But we can train on experimental data through the previous steps, obtain and understand the organizational forms of the knowledge the machine creates, and then imitate these organizational forms to implement Step 9.
(1) Preset the basic pros-and-cons system related to the machine's vital activities. For example, define a reasonable interval for the battery level, preset a symbol representing "hungry" in the "innate memory", and place a "punishment" symbol and an emotion symbol representing "hungry" next to the "hungry" symbol, giving them appropriate memory values (a code sketch of this preset appears at the end of this subsection).
When the battery is low, the vital-state monitoring program directly gives an initial activation value to the "hungry" symbol in the "innate memory". Its activation value spreads in a chain throughout the memory bank; the "hungry" emotion symbol next to it is activated, and so is the "punishment" symbol next to it. The machine thus has a "hungry" mood and a "penalty value". In order to avoid the "penalty value", the machine uses its own experience to actively find a socket to charge!
(2) Preset the pros-and-cons system for the machine's "higher-order needs" and its values. This requires presetting the simplest means of communication and then cultivating values.
Values need to be cultivated from childhood, so we need to educate robots about "values" from an early age. Since education is achieved through "reward" and "punishment", the machine must, from the start, be able to recognize "rewards" and "punishments". In this way we can initiate the first step of learning through "reward" and "punishment"!
Therefore, we need to imitate the organizational form of the acquired memory network, so that machines have the innate knowledge needed to recognize the simplest "reward" and "punishment".
For example: preset the most basic nodding features (say, X Tokens) and head-shaking features (say, Y Tokens); they do not need to be precise!
Next to the nodding Tokens, place a "respected" symbol and a "reward" symbol; give these symbols a higher memory value, and make their relationship a long-term memory. When part of the nodding Tokens appears in the input, the machine obtains a "reward value" through the chain association activation process. In pursuit of "reward", the machine may plan various future decisions to obtain a "human nod"!
This is similar to a child who, starting from the simplest means of communication, gradually acquires complex learning abilities. The logical chain of the "reward function" he or she gradually establishes is: "milk", "pacifier", "bottle", "milk powder can"... \(\rightarrow\)... "academic performance", "house, car"... "social status"... \(\rightarrow\) "worldly ideal".
Therefore, after training, the machine's memory bank contains a large number of reward- and punishment-related Token symbols, as well as Token combinations closely related to these reward and punishment Tokens, with causal relationships between them. These Token combinations closely related to reward and punishment Tokens, which represent things, behaviors and results, are the machine's values. Therefore, any value system of the machine can be established by presetting innate means of communication and then cultivating it step by step. Humans are actually the same: no one is born a "saint".
Figure 1 is a schematic diagram of the "innate minimum requirement".
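An illustrative way to preset the "hungry" example above in the memory-bank representation sketched earlier; the symbol names, memory values and the monitoring hook are all assumptions, not a specification from the paper.

```python
# Preset innate records: the "hungry" symbol sits next to a punishment symbol and
# an emotion symbol, all with high memory values so the association is long-term.
innate_records = [
    {"record_time": 0.0, "token": "<HUNGRY>",     "memory_value": 200.0, "activation_value": 0.0},
    {"record_time": 0.1, "token": "<PUNISHMENT>", "memory_value": 200.0, "activation_value": 0.0},
    {"record_time": 0.2, "token": "<EMO_HUNGRY>", "memory_value": 200.0, "activation_value": 0.0},
]

def on_low_battery(records, a0: float = 90.0):
    # The vital-state monitor gives the "<HUNGRY>" symbol an initial activation value;
    # proximity activation would then spread it to the punishment and emotion symbols.
    for rec in records:
        if rec["token"] == "<HUNGRY>":
            rec["activation_value"] = a0

on_low_battery(innate_records)
print(innate_records[0])
```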
#### 4.1.10 Step 10, form a fully connected knowledge network
Our scheme ends up with a network where each Token is composed of four fields: the time mark, the Token itself, the memory value, and the activation value.
A large number of Tokens are stored according to their time intervals, and through optimization (the chain association activation process + the memory and forgetting mechanism: survival of the fittest) the knowledge network is formed, in which the memory value represents the pre-training weight and the activation value represents the inference weight under the attention mechanism.
Our networks contain both objective Tokens and subjective Tokens, and the connections they form through the attention mechanism are knowledge, of which the common part is "common sense". This is why our machines can predict pros and cons and make their own decisions: the machine has "demands" and a "logical chain" related to those "demands" (the activation-value transfer links formed by the Tokens). Driven by demand, it will take the initiative to learn and to iterate on itself, for example by going to recharge or going to the library to read.
In our scheme, knowledge develops around "demand", and decisions also develop around "demand"; this is the core reason why our machine can be "general". It faces only one task, "needs", rather than all kinds of "external tasks". So, our solution is "active intelligence", while all other solutions are "passive" intelligence.
As you can see, our scheme is small-sample learning with real-time knowledge updates, and the training and use processes are integrated, so the machine learns for its whole life and iterates on itself. Because the machine's knowledge exists in the form of a memory bank stored in chronological order, with memory values gradually optimized on top of the original memory bank, different memory banks can be directly stitched together to form larger memory banks. So, by combining a chef's memory bank and a doctor's memory bank, a robot can have the skills of both a chef and a doctor, without retraining on a large amount of chef and doctor data together. The current AI technology route cannot achieve this: in large models, a machine has to be trained with a large amount of both doctor and chef data to master both skills. Obviously, with such a training method, hoping that machines will acquire "all kinds of" abilities is wishful thinking.
### An example of the process of changes in memory values and activation values
Figure 2 shows a simple example of how memory values and activation values change during associative activation. To simplify, we assume that the machine's memory bank is empty, so this is the first time the machine receives input, and we do not give the machine a preset innate memory.
Figure 1: schematic diagram of the "innate minimum requirement".
Suppose that, at times t0 to t7, the machine's input Tokens are "We hope for world peace". In the actual process, the machine should adjust the initial activation value given to all input Tokens according to the activation values of the currently activated reward and penalty symbols (the value estimate). But here, because there is no value system to make that adjustment, we assume a default initial activation value of 90 for the input Tokens (assuming the activation value range is 0 to 255). After the chain association activation process, assuming the memory curve shown, a Token whose activation value is 90 and whose current memory value is 0 obtains a memory value increment of 126.
The memory value update increment is dm = f(m0, A0), where m0 is the current memory value and A0 is the current activation value. The increment is positively correlated with the activation value.
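A hedged sketch of this update rule: the text fixes only that dm is positively correlated with A0, so the saturating form and the gain constant below are our own assumptions, chosen so that the worked example above (m0 = 0, A0 = 90, dm = 126) is reproduced.

```python
M_MAX = 255.0  # assumed upper bound, matching the 0-255 activation interval

def memory_increment(m0: float, a0: float, gain: float = 1.4) -> float:
    # Larger activation -> larger increment; increments shrink as m0 nears M_MAX.
    return gain * a0 * (1.0 - m0 / M_MAX)

# With m0 = 0, a0 = 90 and gain = 1.4 this gives dm = 126.0, the value
# used in the worked example.
```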
All memory values and activation values decrease over time; here, an exaggerated decay gradient is used for illustration. At times t9 to t19, the machine receives the second input: "Peace makes our world better". According to the "similarity activation" process, the first Token "and" (the first character of "peace" in Chinese) activates the Token "and" in the memory bank and gives it activation value; and because the memory value of "and" in the memory bank is high, that Token receives a large share of the transferred activation value on top of its initial activation value, i.e. the transfer coefficient is large.
The transfer coefficient of the similarity activation process is T = f(S, m0), where S is the similarity (the dot product of the Token vectors) and m0 is the memory value of the Token to which the activation is transmitted. The transfer coefficient is positively correlated with both the similarity and the memory value.
At the same time, the "and" Token in the memory bank also initiates the chain propagation process, because its activation value exceeds the preset threshold. During chain propagation, it first activates its near neighbours "flat" and "bound" (the adjacent characters in the stored sentence) through "near activation".
The transfer coefficient of the near activation process is T = f(D, m0), where D is the temporal distance between the two Tokens and m0 is the memory value of the Token to which the activation is transmitted. The transfer coefficient is negatively correlated with the time distance and positively correlated with the memory value.
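The two transfer coefficients can be sketched as follows; the text fixes only the sign of each correlation, so the concrete product and exponential-decay forms, and the constants, are assumptions for illustration.

```python
import math

def similarity_transfer(similarity: float, m0: float, m_max: float = 255.0) -> float:
    # T = f(S, m0): grows with the dot-product similarity S and with the memory
    # value m0 of the Token receiving the transferred activation.
    return max(0.0, similarity) * (m0 / m_max)

def near_transfer(time_distance: float, m0: float,
                  tau: float = 5.0, m_max: float = 255.0) -> float:
    # T = f(D, m0): decays with temporal distance D, grows with m0.
    return math.exp(-time_distance / tau) * (m0 / m_max)
```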
After "flat" and "bound" obtain activation value, if their activation values exceed the preset threshold, they also initiate the chain propagation process: they look for Tokens similar to themselves in the memory bank and propagate activation value to them, and they also propagate activation value to the Tokens adjacent to themselves. The transfer coefficients of both processes are positively correlated with the memory value.
Through the chain association activation process, the input Tokens can activate the whole memory repertoire and the Token combinations associated with it. The activation range depends on the initial activation values the inputs obtain, which are adjusted by the value prediction.
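The following simplified sketch shows one way the chain association activation could spread through a memory bank, assuming the Token/MemoryBank structures and the transfer-coefficient functions sketched above; the threshold, the use of the larger of the two coefficients, and the breadth-first order are our own simplifications.

```python
from collections import deque

ACTIVATION_THRESHOLD = 60.0

def chain_activate(bank, seeds, similarity, max_steps=1000):
    # `similarity` is a callable (content_a, content_b) -> float in [0, 1].
    frontier = deque(seeds)
    steps = 0
    while frontier and steps < max_steps:
        src = frontier.popleft()
        steps += 1
        if src.activation_value < ACTIVATION_THRESHOLD:
            continue  # only sufficiently activated Tokens propagate further
        for dst in bank.tokens:
            if dst is src:
                continue
            # similarity activation and near activation; take whichever is stronger
            t_sim = similarity_transfer(similarity(src.content, dst.content),
                                        dst.memory_value)
            t_near = near_transfer(abs(src.time_mark - dst.time_mark),
                                   dst.memory_value)
            gained = src.activation_value * max(t_sim, t_near)
            if gained > 0:
                was_below = dst.activation_value < ACTIVATION_THRESHOLD
                dst.activation_value = min(255.0, dst.activation_value + gained)
                if was_below and dst.activation_value >= ACTIVATION_THRESHOLD:
                    frontier.append(dst)  # chain propagation continues from dst
```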
After completing the chain association activation process for the second input, we can see that among the Tokens stored in the memory bank, the "peace" Token combination has the highest memory values and the closest proximity relations, so each of its Tokens obtains a higher activation value because of its high memory value. At the same time, besides the activation they each receive directly, "and" and "flat" also pass activation value to each other (near activation); because of their high memory values this transfer coefficient is high, and through activation value accumulation they become a Token combination that easily obtains a high activation weight.
Figure 2: A simple example of the change of memory values and activation values during chain association activation.
Second, we can see that among the Tokens stored in the memory bank, the "world" Token combination is similar to the "peace" Token combination and obtains the second-highest memory values, so it is also a Token combination that easily obtains high activation weights.
In Figure 2 we use Chinese characters to represent Tokens; of course, word vectors could equally be used, with no difference in the overall process.
So, with only two sentences, we can establish the relative "weights" of Tokens. In this way the machine can build the right combinations of common Tokens and their memory values. A memory value corresponds to the "commonness" of a combination; that is, the memory value is in fact a local statistic of the occurrence probability of that combination in the training data. The activation value is the result of projecting (taking the dot product of) the input Token combination onto these "common Token combinations" (the basis clusters), weighted by this local statistic.
So we use a human-like learning method. It is very efficient and achieves small-sample, cumulative, real-time learning. It does not modify the parameters of "old knowledge", so there is no "catastrophic forgetting" problem. It does not require BP gradient optimization, so its computational cost is comparable to the inference process of a large model.
## 5 We implemented the three conditions proposed by Professor Yann LeCun
Professor Yann LeCun, one of the three deep learning giants and Turing Award winners, believes that the right direction toward AGI is the "world model" and that the road is to build "humanoid AI". He proposed three conditions:
(1) A world model is needed, including a demand module that models basic needs such as happiness and hunger, and a value module that predicts value.
(2) A logical reasoning ability compatible with the neural network is needed (current reasoning ability relies on plugged-in symbolic reasoning).
(3) A "general decision-making ability" is needed, able to decompose decisions top-down; the machine cannot be reinforcement-trained a million times for every task.
Although they put forward these ideas, they have no complete technical solution. In our scheme, all three conditions can be achieved.
### We built the world model
The input Tokens activate Token combinations in memory, and the high-activation-value combinations form the activated "world model" (some of these Tokens may already appear in the input, while others may not have appeared yet). Then, according to the predictive decision process of "seeking benefit and avoiding harm", the machine decides whether to further confirm the existence of the other high-activation-value Tokens; this is "pattern recognition". The world model is "common sense": the Token combination patterns composed of subjective Tokens and objective Tokens, such as "demand", "reward and punishment" and "emotion". Humans use "common sense" to "pattern recognize" things.
After each new input, the machine performs chain association activation and then stores the Tokens in "simultaneous storage" mode. Simultaneous storage means using some mechanism to reflect the time interval between Tokens: for example, Tokens that are closer in time can be stored closer together, or explicit time information can be attached to each Token.
Every time it receives new Tokens, the machine updates the activation values and looks for ways to realize the rewards and avoid the punishments. The set of these ways is the overall response path. The overall response path may be a network-like structure, and many local paths may lead to both reward and punishment symbols.
Because there are activation value transfer paths from the input to the reward symbols (or penalty symbols), we realize the "advance" and "step" of the reward and penalty function. This solves the problem of sparse and delayed reward functions in current reinforcement learning. The machine can then find an initial optimal response path by a search process similar to AlphaGo's.
If the overall reward and penalty value does not reach an acceptable preset value (or does not converge), the machine cannot decide whether to choose or exclude certain specific paths so as to maximize its benefit. The machine then needs to further identify the input information and add more Tokens to subdivide certain reward and penalty activation transfer paths, which helps it select or exclude specific paths. This step is how the machine spontaneously creates sub-goals and actively seeks information to help it decide. The process proceeds iteratively until the reward and penalty statistics reach the accepted preset values or converge.
When further identifying the input information, a Token gets a high activation value either because of a high memory value (for example, Tokens representative of a class of things) or because it is closely related to the input Tokens (similar to them, or often appearing near them). Therefore, the high-activation-value Token combinations activated in memory are the representative Token combinations related to the input information. These representative combinations are a "world model" temporarily created by the machine, which we call the "expectation model". It is both a summary of past experience (Token memory values after survival of the fittest) and directly related to the current input. It is created on the fly through high activation values and is the machine's "expectation model" for the current input Token combination.
The expectation model relates the Tokens that have already appeared in the input to the Tokens that have not yet appeared: given the temporal and spatial locations of the former, it predicts the temporal or spatial locations of the latter. The high-activation-value Tokens in the expectation model that have not yet appeared are the "expected Tokens". The machine assigns the time and location of its sensor search according to the expected time, place and size of the expected Tokens in the expectation model, determines the type of sensor to use from the properties of the expected Tokens (e.g. speech, image, or touch), and determines the resolution to use from their properties (such as size). This is the machine's "on-demand identification" process, and it can be performed iteratively.
Selective attention is used to extract Tokens from the input information: the machine extracts Tokens according to the recognition interval and resolution given by selective attention. In this way, the problem of the unlimited granularity of image information is solved (the machine extracts information from the image on demand). When the machine extracts data from a specific interval, it preferentially extracts the overall topology, shape outline, main lines and main texture Tokens of the selected interval, in an overall-features-first way. Then the machine retrieves the relevant memories through the chain association activation process and combines them into expectation models with different weights.
The machine uses the decision process to determine, according to the activated reward and penalty Tokens (whose activation values are the expected reward and penalty values), whether to further identify the input information or to respond to it.
If the machine decides to further identify the input information, it extracts the "expected Tokens" from the input by imitating past experience of obtaining such Tokens. The machine thus iteratively extracts Tokens from the input through the attention mechanism, and each extraction may use different sensors, different resolutions, and different recognition intervals. For the same input, the machine may therefore extract Tokens of different types, intervals and resolutions, and use their combination to form a "hierarchical representation" of the thing: a representation built by extracting information step by step, from low-resolution overall features of an interval to finer details.
The high-activation-value Tokens are used to form the expectation model. The theoretical basis is that these Tokens come from two sources. The first source is the common features of similar things: because common features occur widely across similar things, they are highly repetitive and therefore usually have high memory values. In our scheme, the machine therefore first identifies the large category (obtains the abstract concept) through common features, and then gradually adds more Tokens to narrow the scope (from abstract concept to concrete concept).
The other source of high activation values is that some input Tokens have similar Tokens in a specific memory. These specific Tokens in memory are directly activated by similarity, and the high-memory-value Tokens in proximity to them also tend to obtain higher activation values. Because the activation path is short, such special Tokens activate a specific "expectation model" in the relational network; this is a way to quickly locate the expectation model through distinctive Tokens.
Therefore, the identification of input information proceeds by first identifying the large category through common features, and then determining the specific subcategory through unique features. The machine iteratively adds Tokens for identification through selective attention. In this process, the activation values of previously activated Tokens fade over time; if they are re-activated by newly input Tokens, their activation values are maintained, while Tokens unrelated to the new input slowly fade away and gradually exit the decision process.
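A minimal sketch of this fade-and-refresh behaviour, assuming the MemoryBank sketched earlier; the retention factor is an assumed constant.

```python
DECAY = 0.8  # assumed per-cycle retention factor

def decay_and_refresh(bank, refreshed_ids):
    # `refreshed_ids` is a set of id(token) for Tokens re-activated by new input.
    for tok in bank.tokens:
        if id(tok) in refreshed_ids:
            continue                    # re-activated Tokens keep their activation
        tok.activation_value *= DECAY   # unrelated Tokens slowly exit the decision
```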
The "world model" contains two aspects: 1. the machine knows the world iteratively by "pattern recognition"; 2. the machine knows the world in terms of "pros and cons". This is because the "value of pros and cons" is the core "world model" established by human beings; it is the world model that guides all human behavior.
So, we implemented the "world model."
### We achieve logical reasoning capabilities that are compatible with neural networks
All the "reward and punishment" Tokens activated by the input Tokens constitute the value prediction: their activation values are its magnitude.
The propagation paths from the input Tokens to the activated reward and penalty Tokens are a reasoning ability fully compatible with connectionism. The memory network is a neural network organized by Tokens that transfer activation values, and this transfer of activation values is precisely the inference process that realizes the attention mechanism.
Each input Token combination, through the chain association activation process, activates the Token combinations it shares with memory, and through activation value accumulation the machine can go from the input combination (the known probability of a specific combination of N Tokens) to the most relevant combinations (the probability of specific combinations of M Tokens). The final activation value distribution over the memory bank is the result of this Bayesian-style inference.
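One way to read the final activation values as such an inference result is sketched below, assuming the MemoryBank sketched earlier: normalizing the accumulated activation gives a distribution over stored Tokens that plays the role of the posterior described here.

```python
from collections import defaultdict

def activation_distribution(bank):
    # Aggregate activation per Token content and normalize to a distribution.
    scores = defaultdict(float)
    for tok in bank.tokens:
        scores[tok.content] += tok.activation_value
    total = sum(scores.values()) or 1.0
    return {content: value / total for content, value in scores.items()}
```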
In fact, the attention mechanism in large models already achieves logical reasoning compatible with neural networks. However, it has two defects: 1. deep learning destroys the original temporal and spatial organization of Tokens, so the resulting knowledge is difficult to understand and imitate; 2. it lacks "subjective Tokens" (such as needs, emotions, and pros and cons). So the inference process of the large model is flawed.
Turing Award winner Professor Yoshua Bengio, one of the big three of deep learning, believes that the most important step toward general AI is to combine neural networks with causal reasoning. Our scheme achieves exactly this: the memory network is a fully connected neural network; the path from the input Tokens to the activated "world model" is causal reasoning over the organization of the objective world, and the path from the input Tokens to the activated "subjective Token combinations" (representing demands, emotions, and rewards and punishments) is causal reasoning between the objective world and the machine's own needs.
So we have implemented "combining neural networks and causal inference". In fact, current large models have realized objective reasoning ability and some subjective reasoning ability, but their reasoning process is difficult for humans to understand and imitate, and therefore difficult to use.
### We achieve a hierarchical "general decision-making capability"
The machine reinforcement-learns only one task, "how do I meet my needs?", and deals with only this one task. So for our machine, decisions are made in the face of its own needs, whereas in other current AI solutions decisions are made in the face of the various "tasks themselves".
When information is input, it produces all kinds of associations, some good and some bad. Reduce the probability of Tokens that bring "punishment" and increase the probability of Tokens that bring "reward": this is the "general decision". It is similar to human decision making, and that is why it is universal.
With the advance and step of the reward function, the machine has "decision-making ability". With the general goal of "seeking advantage and avoiding disadvantage", the machine achieves a "general decision-making" capability.
#### 5.3.1 The current "machine learning" is not really "machine learning"
Facing a new task, a person predicts the "good or bad" of different decisions based on their own experience and tries at most a few schemes. For a new task, current machines rely on reinforcement learning to "keep trying": either (1) try a million times and observe the results (Google's AI for various games), or (2) have humans label good or bad (GPT-4, large models, RLHF) [23], and only then acquire the decision knowledge for the issue.
Therefore, current machine learning takes the approach of "try first" + "then eliminate". It should really be called "machine evolution", not "machine learning". That is why we argue that AGI requires real "machine learning". What is real machine learning? In our view, it should be like human learning: facing a new task, the agent predicts the "good or bad" of different decision paths from its past experience and obtains the decision knowledge for the new task by trying only a limited number of solutions. Furthermore, real learning should also resemble children's learning, acquiring accumulated human experience directly through language. Facing a new task, no trial is needed; it can succeed directly. For example, in the laboratory, teachers pass existing human decision-making experience to children directly through language teaching. Having obtained the knowledge conveyed by the teacher, the children can combine it with interaction with the environment and complete the experiment directly, even though it may be the first time they have done it.
Real tasks vary greatly and real scenes vary greatly, and humans cannot put every type of task into a large number of scenes for "reinforcement learning". So the thinking must change: turn all tasks into a single task, "how do I meet my needs?". The whole training process of the machine is training for this one task. Facing this task, the machine accumulates a great deal of "state" and "policy" knowledge, so it can predict the potential "pros and cons" of different decisions.
A task given to the machine is background information for its own task of "how to meet my needs". If "gaining human recognition" is one of the machine's needs, then the machine will incorporate "completing the human's task" into its overall pros-and-cons statistics while pursuing its own needs. This is similar to a human facing their boss: you weigh the pros and cons and make different decisions. For example, one decision may be to proactively find more information and analyze the pros and cons of the task before deciding; actively looking for more information is then a new task created by the agent itself. If the machine makes the same decision, it is assigning tasks to itself, that is, programming itself. Our machine indeed uses this decision process: weighing pros and cons, with the possibility of actively seeking information to maximize its own benefit.
#### 5.3.2 How to achieve true "machine learning"?
Ten years ago, we thought that to create real "knowledge" we should start from the perspective of information statistics. Unlike "deep learning", we believed machines should learn the way humans do, with small samples and knowledge accumulation. So at the beginning we also tried "symbolic expression", "causal logic" and "knowledge networks".
After a few years of trying, we found that the first step of this road was blocked. How should "symbolic expression" express "dog"? One would need to enumerate all the characteristics of a "dog". But a "dog" can be an animal or a person; it can be "a celebrated character" or "a despised character"; the meaning of the symbol "dog" varies greatly across contexts. So the essence of "dog" is the sum of the relationships between "dog" and all other things; "dog" must be put into the whole knowledge network and defined by its relationship to all other knowledge. Hence "symbolism" does not work, because "dog" cannot be separated from other knowledge. A "fully connected knowledge network", similar to deep learning, was our first conclusion.
Because "dog" must be placed in the entire knowledge network and defined by its relationship to all other knowledge, one must also have enough knowledge to understand what a "dog" is. The amount of knowledge must therefore be sufficient: only with enough background knowledge can one understand what a dog is. This was our second conclusion.
Looking back, isn't that exactly what the large model does? "Deep learning" builds a fully connected network, and the large model "uses a lot of knowledge to build a fully connected knowledge network". So why don't we see robots walking around the streets? Because the knowledge network alone is not enough: the machine must also be able to "interact with the environment and make decisions". Studies have shown that humans make more than 30,000 decisions a day. In industry, the only thing that lets a machine make its own decisions is the reinforcement learning algorithm. So one possible way to general AI is: large model + reinforcement learning. In fact, GPT-4 has already achieved "full knowledge + fully connected network + RLHF", and RLHF is reinforcement learning. Google published the Gato model in 2022 and has also taken the road of "all knowledge + fully connected network + reinforcement learning".
So why don't we see Google's robots walking on the streets?
The core obstacle on this path is the reinforcement learning algorithm and its two prerequisites [24]: (1) the machine needs to know the reward it can get on different decision paths; because reward information is sparse and delayed, this currently requires a great deal of trial-and-error training; (2) the machine needs to search all possible decisions.
These two conditions can be perfectly satisfied in games: the game can be tried endlessly, and the decision search space has boundaries (and can be pruned to reduce the search space). But in real life, many problems cannot be solved by constant trial and error (such as taking care of children; no one will let you keep trying!), and there are no clear boundaries, so the problem cannot be solved this way. That is why Google keeps launching AIs that can play very complex strategy games but has been unable to launch the most basic "household nanny robot". In daily life, the vast majority of decisions are far less complicated than decisions in games, but many things in real life cannot be subjected to massive trial and error, and the relevant information has no clear boundaries. These two difficulties mean that OpenAI or Google, through "large model + reinforcement learning", can only handle things that allow massive trial and error. So AIGC is still a long way from AGI.
Our decision-making scheme is also essentially reinforcement learning, but it reinforcement-learns only how to seek advantage and avoid disadvantage. We take advantage of the chain association activation process, which automatically limits the search range: only activated information is searched. Moreover, we use the logical chain between "Tokens" and the "reward and punishment symbols" to predict reward and punishment information in advance, rather than obtaining it only from post-hoc feedback. In this way we solve the problem that Google's decision-making AI can only play games. This is because we realize "objective common sense" and "subjective common sense" simultaneously. The existing technical route first realizes "objective common sense" and then establishes "subjective common sense" through RLHF; since its "subjective common sense" is obtained through post-hoc feedback, it can only be applied in areas where massive trial and error is possible.
#### 5.3.3 Implementation process of "universal decision"
The machine can be in any environment, and its input includes all sensor information, so at any moment the environmental information the machine is in is part of the input.
Interactive decision making between the machine and the environment includes two aspects:

1. selection of the optimal decision;
2. execution of the decision process.

These two steps are not separate; they are intertwined and processed in parallel.
The first question that "universal decision" needs to answer is: what is the reward function? Inside GPT-4 and inside AlphaGo, the reward comes from final external feedback. In our AGI, the reward comes from the "reward" and "punishment" symbols activated by external information, and its magnitude is their activation value.
Step 1: What is the purpose?
When information (external input plus the machine's internal monitoring information) is input, some reward and punishment symbols are activated.

Each path that transfers activation value from the input to a reward or punishment symbol is a potential logical link for generating a reward or a punishment.

If each underlying feature on such a logical link is actually realized, then the reward or punishment propagated along that link is also realized.

Therefore the machine's response to any input is the same: increase the occurrence probability of reward logic chains and reduce the occurrence probability of punishment logic chains, so as to seek benefit and avoid harm.
Step 2: How to plan with a purpose?
1. How to increase the occurrence probability of reward links and reduce that of punishment links?

The way is to increase, or decrease, the realization probability of the high-activation-value Token combinations on the link. These are the high-weight Token combinations of the link: when they become true, the activation value propagated along the link becomes true, and so the finally activated reward or punishment also becomes true.

2. How is this done concretely?

From the paths that transfer activation value from the input to the reward and penalty symbols, the N Tokens with the highest activation values are selected; these constitute the top-level implementation path that makes the reward (or punishment) come true. The goal of the machine is: 1. to make the Tokens on the reward path come true (by imitating past experience so that they appear in the input information); 2. to prevent the Tokens on the penalty path from coming true (by imitating past experience so that they do not appear in the input information).
Therefore, from the logical pathways between the input and the reward and penalty symbols, the machine selects the N Tokens with the highest activation values, together with the propagation paths containing them; this is the top-level implementation path. Why only the N highest? Because these Tokens either are representative Tokens of things, with high memory values and therefore high activation values, or are Tokens closely related to the input information. Because they are few in number, which is equivalent to having few attribute restrictions, the concepts most closely related to them are usually "abstract concepts".
Because language symbols are used so frequently, language Tokens often obtain high activation values and become the core, highest-activation Tokens of "abstract concept" combinations, making the language symbol stand for the concept itself, e.g. abstract concepts such as "eating" and "escaping". It should be pointed out that "abstract concepts" are not the privilege of linguistic symbols; animals can also have "top-level decisions".
So the machine's decision process is to prioritize "abstract concepts" and then gradually add more Tokens to form more concrete concept combinations. This is the top-down, step-by-step process of decision making and execution. We call this process "segmented imitation".
Specific example of the segmented imitation method:
Consider the set of input Tokens as A and the set of response Tokens as B. Through the chain association activation of A and B, the machine looks for high-activation-value Tokens. These are Tokens connected to both A and B, because they obtain activation value from both; they are the bridge Tokens connecting A and B. This process proceeds iteratively, enabling top-down, layer-by-layer decisions.
How is this done in a computer? The method is as follows:
(1) The external input Tokens determine the activation values of the reward and penalty symbols; those exceeding the preset value become the targets, establishing the first-level targets.

(2) Starting from the reward or penalty symbol with the highest activation value, find the N Tokens with the highest activation values on the activation transfer path from the input to each first-level target; these form the logical link that realizes the corresponding reward or penalty. The Tokens on the link are the second-level targets.

(3) The machine takes each second-level target as a new goal, treats them as new input Tokens, gives them initial activation values, and initiates the chain association activation process again. The Tokens that then obtain the highest activation values are the Token combinations associated with both the external input Tokens and the second-level target Tokens. This is because we use activation value accumulation and activation value extinction, so only the Tokens associated with the most recent input can maintain an activated state. These Tokens are the third-level goals.

(4) This process is performed iteratively, and the machine can thus break down each level of goals into hierarchical logical links for achieving them.

(5) Each expansion of the decision process brings different reward or punishment values into the accumulation. Following the principle of seeking benefit and avoiding harm, the machine chooses sub-paths that bring reward and avoids sub-paths that bring punishment, thus increasing the cumulative reward value. When the machine finds that the total reward and penalty value converges, i.e. it cannot be further improved, the benefit is maximized; the machine stops expanding further and enters the execution process (a sketch of this expansion loop is given below). This is the hierarchical "general decision ability" proposed by Yann LeCun, and also the logical reasoning compatible with neural networks proposed by Professor Bengio.
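The expansion loop of steps (1)-(5) can be sketched as follows. The data layout (plain dictionaries for activation values, hand-listed transfer paths) and all constants are illustrative assumptions; in particular, the one-line "boost" of the parent goal stands in for re-running the full chain association activation.

```python
THRESHOLD = 60.0

def plan(activation, paths, reward_value, n=3, tol=1e-3, max_rounds=10):
    """activation: {token: activation value in 0-255}
       paths: {goal or sub-goal token: [(token on its transfer path, transfer coefficient)]}
       reward_value: {goal token: +1 for reward symbols, -1 for penalty symbols}"""
    goals = [g for g in reward_value if activation.get(g, 0.0) >= THRESHOLD]   # step (1)
    layers, prev = [], None
    for _ in range(max_rounds):
        next_goals = []
        for g in goals:
            # steps (2)/(3): the n highest-activation Tokens on the path toward g
            ranked = sorted(paths.get(g, []),
                            key=lambda p: activation.get(p[0], 0.0), reverse=True)[:n]
            for tok, coeff in ranked:
                boost = coeff * 90.0                       # treat sub-goal as new input
                activation[tok] = min(255.0, activation.get(tok, 0.0) + boost)
                # crude stand-in for re-running chain association activation:
                activation[g] = min(255.0, activation.get(g, 0.0) + 0.2 * boost)
                next_goals.append(tok)
        layers.append(next_goals)                                              # step (4)
        value = sum(v * activation.get(g, 0.0) for g, v in reward_value.items())
        if prev is not None and abs(value - prev) < tol:
            break                                                              # step (5)
        prev, goals = value, [g for g in next_goals if g in paths]
    return layers
```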
Why are only the N Tokens with the highest activation values selected at each expansion? Because past experience can never fully match the current situation; by selecting only the highest-activation Tokens, the resulting "model" is either abstract (widely applicable) or closely related to the input Tokens (a good match). Selecting only the N highest-activation Tokens is how empirical generalization is achieved, so in our scheme generalization is implemented automatically.
For example, suppose the machine has experience of driving nails with a hammer, needs to drive a nail, has no hammer, and the input Tokens include a stone. In order to achieve the first-level goal (a reward symbol or punishment symbol: completing the task and obtaining the reward, or avoiding the punishment), the activated logical links may contain the Token combinations representing the hammer. These Token combinations then become the second-level target.
Through the chain association activation of memory, the machine may find M activation-transfer paths toward the "hammer" target, perhaps from "the toolbox in memory", perhaps from "borrowing from a teammate". Each such path is a path for raising the realization probability of the "hammer" Tokens, i.e. a second-level path to the reward.
Since stone-related Tokens appear in the input, the Tokens shared by the hammer and the stone (such as weight, size, and the sensation of hardness) are likely to obtain a high cumulative activation value and be selected among the top N. They become bridge Tokens, making the stone-related Tokens a second-level path to the reward. This is the empirical generalization process: through the Tokens shared by the stone and the hammer, the stone's Tokens can transfer activation value to the reward symbol. The reason is that "stone" and "hammer" share some attributes (shared Tokens that recur across various "driving nails with a hammer" scenes), and these are the bridge of experience generalization. In our scheme, this generalization is done automatically.
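A toy sketch of this bridge-Token generalization; the attribute sets and numbers are invented for illustration only.

```python
# Shared attribute Tokens let the "stone" input pass activation toward the
# reward path that past "hammer" experience established.
stone_tokens  = {"hard", "heavy", "fits-in-hand", "grey"}
hammer_memory = {"hard", "heavy", "fits-in-hand", "handle", "drives-nail"}

bridge = stone_tokens & hammer_memory          # the bridge Tokens
overlap = len(bridge) / len(hammer_memory)     # how well the stone matches the memory
activation_to_reward = 90.0 * overlap          # activation passed toward the reward symbol
print(bridge, round(activation_to_reward, 1))
```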
So which second-level path to the reward does the machine take? At this point the machine needs to update the activated Token space through a new chain association activation process and again choose its decision path according to the principle of seeking benefit and avoiding harm. Some paths may bring both rewards and punishments, which makes it difficult for the machine to weigh them. If the machine finds that the reward and penalty statistics do not converge, its decision is to further identify the information so that each reward and penalty transfer path converges.
For example, the "toolbox in memory" path requires confirming the probability that a "toolbox" is currently present in the input (i.e. realizable). This probability can further converge the reward and penalty value of the path. At this point, confirming whether the "toolbox" is present in the input becomes a new goal the machine creates for itself. To achieve this new goal, the machine imitates past experience. If in past memory the toolbox hangs at the waist, then it imitates that experience to check whether toolbox-related Tokens appear in the input; the most likely imitation is to use the "hand" to pat the waist and reproduce the sensor data combination of having touched the "toolbox" in the past. Because this decision costs the least power and time and maximizes the machine's benefit, it is the preferred decision path.
As another example, the "borrow from a teammate" path requires increasing the realization probability of the Tokens on that path so as to transfer a larger activation value (a larger expected reward) to the reward symbol. The most likely experience the machine imitates here is to look around, or to ask.
Therefore, in our scheme the machine's decisions can be very complex: within one decision path, many decision and execution processes may be nested. But at any time the only goal of the machine is to "seek benefit and avoid harm", and all decisions are derived from this goal. The machine's decisions are thus very flexible; they always change with the state of the environment, and there is no preset procedure. The only preset rule is: "seek the good and avoid the bad."
The above process proceeds iteratively, and each round activates new reward and penalty symbols. The machine aggregates the activation values of these symbols until they converge, and then establishes the optimal response path.
The machine may decide to respond to the input information or to find more information before deciding. Either way, it increases or reduces the realization probability of particular Tokens by imitating past experience. At any time, new input updates the activation value distribution in the memory bank through the chain association activation process; the machine then re-aggregates the reward and punishment information for the new state and looks again for the optimal decision. As long as new information keeps arriving, this process continues.
Step 3: With a plan, how is it executed? Execution proceeds by imitating past experience to increase, or reduce, the realization probability of particular Tokens.
1. Choose a small number of underlying features with the highest activation values as the abstract decision path.
2. Add more high-activation-value underlying features to make the abstract decision path concrete.
3. Steps 1 and 2 proceed iteratively until the decision is decomposed into drive commands that can be executed directly: sending waveforms to the speaker, drive commands to the motors, display data to the screen, parameter settings to the facial expression system, and so on.
4. New input may arrive at any time; it changes the activation values in the memory bank and thus the reward and penalty situation, so the machine may change its original plan at any point while executing the optimal response path.
Step 4: Segmented imitation during decision making and execution.
The machine finds experiences related to the current input through the chain association activation process. Among these experiences, the small number of Tokens with the highest activation values are usually representative abstractions, owing to their high degree of abstraction. These experiences contain the "antecedents" and "consequences" related to the input Tokens, which are the objects of empirical generalization.
The essence of empirical generalization is to use the effect of an existing process to achieve the effect of an unfinished one. In our scheme this is completed automatically by the transfer of activation value through the "common Tokens" of the two processes. Since the Tokens of the two processes are not identical, there is a mismatch problem in generalization; but because the two experiences transfer activation value through their common Tokens, the generalization is completed automatically.
It should be noted in particular that the machine's concepts are formed by various Tokens in a large three-dimensional network. The same Tokens may be distributed across different memory segments, and they may come both from the machine's own experiences and from input linguistic symbols.
Therefore, when a language symbol is activated, the related Tokens it represents are activated as well. A language symbol sequence usually expresses a sequence of Token combinations, so behind the symbol sequence is a temporal and spatial flow of Tokens. The temporal and spatial order of these Tokens is the "causal relationship", and these causal relationships form tight activation-value transfer relations through the chain association activation of the language symbols. Such a tight transfer relation is itself a kind of "experience". So in our scheme, experience comes not only from the machine itself but also from the "experience of others" obtained through linguistic symbols; our machine can both learn "experience" through language symbols and imitate "experience" through the Token information flow composed of them.
## 6 Our scheme and the current large-model road
Our solution solves the following problems:
### The "how to build up common sense" problem
Deep learning destroys the temporal and spatial relationships of the original Tokens. In our scheme, the attention mechanism is realized by the "chain association activation process + memory and forgetting mechanism" without deep learning, so the knowledge created retains the original temporal and spatial relationships of the Tokens. And the original Token combinations are exactly the basis of human "concepts". So in our scheme, the knowledge created is knowledge that humans can understand and imitate.
In our scheme, the essence of "knowledge" is the arrangement of Tokens in time and space, and the agent's prediction of different Token arrangements. The nature of this spatiotemporal arrangement is "causality": the relationships are not merely about being near in time or space, but about being repeatable for the agent. The Tokens involved may actually span large stretches of time and space, yet through the chain association activation process they form tight activation-value transfer relations; this is knowledge. When the knowledge includes Tokens related to "needs", "emotions" and "pros and cons", it can predict potential benefits and harms. The arrangements of Tokens thus represent "knowledge", and the common arrangements are "common sense".
### The question of whether the machine can be conscious
We solved how to give a machine "self-needs". Therefore the machine can make independent decisions, evolve by itself, have its own emotions, and pursue its "self-needs"; in this sense our machine is "conscious".
### The "universal decision-making" problem
Facing any task, the machine makes decisions according to "seeking good and avoiding harm". Tasks given by humans are a by-product of the machine's pursuit of its "self-needs".
This is the same as completing a task your boss assigns: you complete it in pursuit of your own needs. If the two conflict, you make various flexible decisions according to seeking benefit and avoiding harm, probe the boss's true intentions, and consider the boss's bottom line, so your decisions are very flexible.
### The "language understanding" problem
Because we do not break the original temporal and spatial relationships of Tokens, the Token sequences represented by language sequences can be understood and imitated. So the machine can learn various skills directly through language, just like humans: read the oven manual and start making toast [5][6][7].
### We think our path is a viable way to AGI
#### 6.5.1 Advantage 1: Ability to handle tasks that do not allow massive trial and error
Examples: autonomous driving, household nannies, caring for the elderly, accompanying children, and work in industry, agriculture, the military and business.
Because ours is "humanoid" AI, it can make general decisions and learn skills through language, while the current large models cannot handle these things.
#### 6.5.2 Advantage 2: It can solve the "hallucination" problem
Large models have only the "common word patterns" obtained from local statistics and no factual memory.
Our scheme first stores memories and then extracts common information from them. So it has its own "fact database", integrated with its knowledge.
#### 6.5.3 Advantage 3: Ability to learn skills directly through language and imitate them
Because we do not destroy the temporal and spatial relationships of Token combinations, the spatiotemporal relationships of the Tokens represented by language can be understood and imitated. This the large model cannot achieve, now or in the future. For example, on its first day at a bakery, the machine can ask the boss for the oven manual, read it, and start baking without individual training.
#### 6.5.4 Advantage 4: It is safer
(1) Current artificial intelligence is single-goal. From the decision-making perspective it is "one-track minded", bent on "achieving the goal". Such an AI does not think about anything outside the goal, and its decisions remain a black box. Think how dangerous it would be if a taciturn, one-track-minded agent took control of your life! If such an AI were allowed to fully control human life, a wrong understanding could bring incalculable disaster.
(2) In our scheme, the machine's "demand types" can be preset, its values can be trained, and they can be aligned with human values. At any time the machine weighs various goals, so there is no "extreme" behavior. Moreover, in our scheme the decisions are visible and modifiable: a "white box".
## 7 The underlying logic of our scheme
### The chain association activation process is the attention mechanism
First, we believe that the nature of knowledge is information, and that human knowledge is a very small part of all information. This is because humans have a limited resolution for information: the relative spatiotemporal arrangement of A atoms and B atoms on a blade of grass is also a kind of information, but we do not recognize it.
So in the course of evolution, humans developed the ability to recognize Tokens. A Token is the smallest information unit commonly used by humans, such as a straight line. The Token itself is a "world model", the smallest "world model" from which humans build the magnificent palace of knowledge. In evolution, humans formed the "pattern recognition" ability of using "models" such as Tokens to identify surrounding information, which greatly improves the energy efficiency of information recognition. It is a gift from evolution.
If we arranged the "Tokens" of everything from the "Big Bang" to the "present" in order of space and time, we would obtain an information tensor. It is all the knowledge there is. Faced with such a treasure house of knowledge, an agent outside the universe that wanted to know about it would make statistics on these Tokens.
The first question: how many independent Tokens are there? In our scheme, similarity relationships answer this question. The second question: what is the quantity distribution of each Token? In our scheme, repetition relationships answer this. The third question: how are the Tokens arranged? In our scheme, proximity relationships answer this. We can see that in our scheme, the chain association activation process together with the memory and forgetting mechanism amounts to a statistical description of information.
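A toy sketch of these three statistics over a small Token stream; the equality test stands in for a dot-product similarity check.

```python
from collections import Counter

stream = "we hope for world peace peace makes our world better".split()

def similar(a: str, b: str) -> bool:
    return a == b            # stand-in for a dot-product similarity test

independent = []             # question 1: how many independent Tokens
for tok in stream:
    if not any(similar(tok, seen) for seen in independent):
        independent.append(tok)

frequency = Counter(stream)                    # question 2: quantity distribution
adjacency = Counter(zip(stream, stream[1:]))   # question 3: how Tokens are arranged

print(len(independent), frequency["world"], adjacency[("world", "peace")])
```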
In the attention mechanism of large models, the correlation of a Token combination is inferred from the pairwise correlations between Tokens, and the correlations of larger combinations are then inferred from these in turn; this takes multiple iterations to obtain the correlations between different Token combinations. The pre-training process finds the "optimal coordinate basis" by trial and error (deep learning). With the help of the attention mechanism, the obtained "optimal coordinate basis" covers only the "common information". The essence of this process is Bayesian inference: computing the conditional probability of a particular Token from partially known probabilities.
In our scheme, the correlations between Tokens are obtained by induction. The chain association activation process uses the correlations obtained from pre-training (partially known probabilities) to obtain the conditional probability that a certain Token may appear. The chain association activation process finds correlations (the reasoning of the attention mechanism), while the memory and forgetting mechanism performs the induction.
### The core of the attention mechanism is to create "common sense"
Knowledge is the arrangement of Tokens, and common sense is the common ways in which Tokens are arranged [16]. The core problem of the large model is that it converts human knowledge (Token arrangements) into its own system of knowledge (because deep learning destroys the original spatiotemporal relationships of the Tokens, the large model's knowledge is difficult for humans to understand and imitate); it then solves problems with its own knowledge system and translates the result back for humans. Destroying the spatiotemporal relationships of the original Tokens means destroying the original, human-understandable organization of the Tokens and transforming it into an organization the machine can understand. From the machine's perspective, the way Tokens are organized is retained, because it correctly finds the "common information"; but from the human perspective, the knowledge it produces and the knowledge created by humans cannot be directly interconnected or borrowed from each other.
Because the two systems' underlying languages cannot communicate with each other, it is difficult to give the machine "innate knowledge" (such as innate needs, an innate reward and punishment function, and an innate emotion function). The only remedies are RLHF or plug-in knowledge bases, which solve only part of the problem and communicate only through coarse "yes/no" feedback. Such a robot can only be a "scripted" "nerd", not a genuinely flexible problem solver.
Therefore, the core of our scheme is to establish "common sense" without destroying the original spatiotemporal organization of the Tokens, and that common sense must include the machine's "subjective common sense".
In order to establish "common sense" under the original spatiotemporal organization of the Tokens, we adopt information Tokens and storage that retains time and space information, the chain association activation process, and the memory and forgetting mechanism that induces the chain activation transfer relations between Tokens. At the same time, imitating the organization of "common sense", we preset the Token combinations representing the innate demands, the innate reward and punishment function and the innate emotion function. The machine then makes independent decisions and evolves itself according to the principle of seeking benefit and avoiding harm, and continuously expands the memory bank around the innate knowledge to form the whole knowledge network, thereby creating both "objective common sense" and "subjective common sense".
### We can accomplish only one thing: "Create common sense"
In order to "create common sense", we need to solve (1): give the machine self-needs.

In order to solve (1), we need to solve (2): how to create understandable knowledge.

In order to solve (2), we must solve the problem of building a fully connected knowledge network without using deep learning. Then subjective Tokens and objective Tokens can be related through the attention mechanism; this is common sense.

The relationship established between subjective Tokens and objective Tokens is the "advance" + "step" of the reward function, which realizes the "general decision-making ability" of "seeking advantage and avoiding disadvantage". Driven by "self-needs", the machine achieves "self-evolution".
### We establish an infant AI
"Build a baby machine, then learn lifelong, and grow yourself". The idea has been around for years, but we were the first team to propose detailed solution steps.
## 8 A simple example
Below, we illustrate how the machine makes decisions and responds with an example.
Background: Lao Wang went on vacation to another city, took along an assistant robot, and stayed in a hotel room...
Lao Wang: " Hello...".
Robot: many Tokens are activated in the memory bank, but among these activated Tokens there is no reward symbol whose activation value exceeds A1 (a preset threshold), and no punishment symbol whose activation value exceeds P1 (a preset threshold).
It keeps receiving external and internal information from the sensors and extracts Tokens from this information with low resolution first, storing them in the memory bank. Following the same process, these Tokens are given initial activation values. Since there is no reward or punishment symbol with high activation value, the activation values given to these Tokens are relatively low according to the predetermined procedure. Therefore, in the subsequent chain association activation process, the propagation range of the activation values is very small, and the chain activation process completes very quickly.
The machine then starts updating memory values. Because the activated Tokens obtain low activation values (due to low initial activation values and limited spreading), their memory value increments are small and much of this information is forgotten within a short time. At the same time, because the activation values of the reward and punishment symbols in the memory bank are relatively low, the potential reward and potential punishment are both small, so the best decision path the machine forms is simply to keep receiving information. Spending power is itself a small punishment, so if no reward is in prospect, the optimal decision is not to waste power.
After each chain association activation process, the machine checks whether there are reward or punishment symbols whose activation values exceed the preset thresholds. In this case, the optimal response the machine forms is to extract Tokens from the information at low resolution and save them in the memory bank, and the above cycle repeats following the same process.
Suddenly, the audio processing system introduces a series of audio Tokens (still extracted at low resolution). Following the same process, these Tokens are given a low initial activation value and undergo chain association activation. During chain propagation, some of these input Tokens activate many similar Tokens in the memory bank because of similarity, and those Tokens have close relationships with many reward and punishment symbols, so during the chain propagation of activation values a large number of reward and punishment symbols are activated. (These Tokens are typically the owner's voiceprint features, such as his unique timbre.)
This time, many reward and punishment symbols obtain activation values above the preset thresholds. Suppose N reward symbols and M punishment symbols have activation values above the preset values. The machine targets all of them, so it autonomously establishes N + M goals at the same time. In our scheme, goals are generated autonomously by the machine, and multiple goals are generated simultaneously, rather than a single total reward function being preset by hand. All of the machine's responses follow the principle of seeking benefit and avoiding harm.
Therefore, after the machine creates the N + M goals, the principle by which it plans its response paths is to increase the probability of the events that transfer activation value to the reward symbols and to reduce the probability of the events that transfer activation value to the penalty symbols. The machine's decisions all revolve around achieving the rewards and avoiding the punishments.
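A toy sketch of how the N + M self-generated goals might be collected and weighed in this example; the symbol names, activation values and thresholds are all invented for illustration.

```python
A1, P1 = 100.0, 100.0   # preset reward / punishment thresholds from the text

activated = {            # activation values after the chain association process
    "owner-praise": 140.0, "stay-charged": 60.0,   # reward symbols
    "owner-scolds": 180.0, "waste-power": 110.0,   # punishment symbols
}
rewards     = {s: a for s, a in activated.items()
               if s in ("owner-praise", "stay-charged") and a >= A1}
punishments = {s: a for s, a in activated.items()
               if s in ("owner-scolds", "waste-power") and a >= P1}

goals = list(rewards) + list(punishments)   # N + M goals, generated autonomously
net = sum(rewards.values()) - sum(punishments.values())
print(goals, net)   # planning then tries to raise the first term and lower the second
```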
The machine first processes the reward or penalty Token with the highest activation value; this may be one or more penalty Tokens. In the memory bank, the propagation paths that transmit activation value to these penalty Tokens may start from the underlying features of the owner's voiceprint input, across many memories of the owner, and the activation value is then transmitted further within the memory bank.
Among these memories, one penalty Token has a high activation value. A Token can obtain a high activation value in only a few ways: (1) the penalty Token has a very high memory value, either because its activation value was high when it was stored (the memory increment is positively correlated with the activation value) or because it is often activated and its memory value has been reinforced by repetition; (2) multiple input Tokens pass activation value to this penalty Token through different pathways, for example the owner's "tone Tokens", "word Tokens", "owner-state Tokens", "owner-expression Tokens" and "current-environment Tokens", which, if similar to Tokens near punishment symbols in memory, complete the chain propagation and push up the related activation values; (3) there is a tight activation transfer relation between this penalty Token and a specific input Token, i.e. they always appear together in memory, forming a "proximity relationship" with high memory values, a very short propagation path, and a high transfer coefficient. So the attention mechanism may use not only comprehensive reasoning (many Tokens jointly pointing at a specific reward or punishment symbol, i.e. aggregated experience) but also special-case reasoning (a specific tight transfer path, i.e. a specific experience). A high penalty activation may also come from the activation distribution established by earlier inputs: although high activation values fade over time, if they are high enough they influence the machine's decisions for a longer time. This is very similar to humans.
In this example, the network of activation-value propagation paths contains so many Tokens that it is difficult to write down explicitly. But it is usually the language symbols that obtain the highest activation values (because they are the most commonly used and have the highest memory values), and if they are combined in their spatio-temporal order, the main idea may be "don't lie down (the cause), or be scolded by the owner and feel very sad (the consequence)".
The machine immediately starts searching for the optimal response path that reduces the probability of this punishment symbol occurring. The decision-making principle is to increase the probability of the paths that transfer activation value to reward symbols and to reduce the probability of the paths that transfer activation value to punishment symbols. How are these probabilities increased or reduced? Each concept is a locally tight network in the memory bank, and the machine needs to reduce the probability of the high-activation-value Tokens in that local network, thereby reducing the probability that this reward-and-punishment logic link occurs.
For example, in the machine's memory, whenever the owner "reprimanded it with similar Tokens", the memory stored the internal and external sensor data of that moment. Some of those Tokens were later forgotten because they never recurred and so never had their memory reinforced. But the Token combinations that repeatedly co-occur with this "punishment symbol" are the "lying down"-related Tokens, together with some "specific time" and "specific occasion" Tokens, which obtain higher memory values because of their repetition. Because they recur as a combination, they push up each other's activation values and obtain memory values far higher than repetition alone would give. And because they recur, their combination reaches higher activation values each time, so they are easier to activate and easier to remember; it is a positive feedback cycle. This is the process of summarizing experience.
If the owner has ever praised the machine in a similar environment, that memory will also be involved in the decision. In a similar context, the various Tokens may therefore pass activation value to either punishment symbols or reward symbols. The machine's decision is thus a comprehensive statistic over all rewards and punishments: it considers both how to obtain rewards and how to avoid punishments. When choosing a response path, some local path segments lead to rewards and some lead to punishments, so the machine needs to subdivide these paths to determine which segments lead to rewards and which lead to punishments. This subdivision is done by adding more Tokens to the path, forming multiple segmented paths (for example, for different scenarios, different time points, or different factors), so that the machine can determine its response through these segmented paths. This is the core of segmented imitation.
So our machine does not need a separate "fine-tuning" procedure to accommodate new knowledge; it achieves "fine-tuning" simply by accumulating memories. It can perform "fine-tuning" at any depth, in any domain, and superimposed across countless domains, without "catastrophic forgetting", because it does not modify past knowledge parameters; it simply grows the network.
In this example, suppose that during the day the machine is lying down (saving some power, which earns a reward). After the owner's voiceprint activates a punishment symbol, the machine needs to reduce the probability of the activated punishment symbol and increase the probability of the activated reward symbol. There are at least two options: (1) reduce the probability of the "lying down" concept and avoid punishment (such as being reprimanded); (2) increase the probability of the "lying down" concept and obtain a reward (such as saving power). The machine must then make the best choice according to the principle of seeking benefits and avoiding harm, synthesizing the various response paths and comparing the aggregate rewards and punishments.
If the machine is fully charged, the reward for saving power is small. After completing the chain association activation process, only one punishment symbol obtains a high activation value. The machine will choose to avoid the punishment, because that yields the highest overall value. Driven by benefit maximization, the machine takes avoiding the punishment as its goal and starts to construct a response.
If instead the machine's battery is running low, the reward for saving power is significant (assume the machine has to lie down to charge). After the chain association activation process, a punishment symbol obtains a high activation value and a reward symbol also obtains a high activation value. Following the principle of seeking benefits and avoiding harm, the machine establishes two goals simultaneously, achieving the reward and avoiding the punishment, because this combination yields the highest overall value. Driven by benefit maximization, the machine takes "obtain the reward and avoid the punishment" as its goal and starts to construct a response.
Assume the machine is fully charged. The machine now creates a level-2 goal: reducing the activation value of the "lying down" concept. Under this constraint, it looks for the propagation paths that transfer activation value to the "lying down" concept and creates a level-3 goal: reducing the activation values of the concepts along those paths. The machine finds that the main path that propagates activation value to "lying down" is the input from a set of self-state sensors, so the level-3 goal becomes: reducing the probability of these input Tokens.
The machine records the various internal and external parameters of each trial and, through the memory and forgetting mechanisms, is encouraged to imitate the parameters associated with rewards and avoid those associated with punishments. In this way, an empirical connection is established between parameter combinations, rewards, and the internal and external environment. This is essentially a reinforcement learning process. Of course, humans can also imitate its form by implanting innate (drive-related) knowledge into the machine, or by using accumulated human experience to directly modify the machine's knowledge so that it converges as quickly as possible. So in different environments, the environment Tokens automatically activate the most relevant memories. By imitating these experiences, the machine passes similar parameter combinations to its motor system (including the parameter types and their temporal order; all of this is completed automatically). This allows the machine to stand up in various environments, reducing the probability of "lying down"-related Tokens.
Now assume instead that the machine's battery is low. Its experience of achieving rewards will lead it to lie down, increasing the probability that the charging-related Tokens are realized, while its experience of avoiding punishment leads it to imitate past experience and explain to the owner why it is doing so. The machine creates a level-2 goal: raising the activation value of the "charge" concept, while imitating the past experience of avoiding "punishment". It may therefore create a level-3 goal, "explain the reason for my behavior", because this Token combination in memory has a close activation-transfer relationship with the "avoid punishment" Token combination; the machine's goal becomes raising the occurrence probability of this specific Token combination (explaining its behavior). The next level of decision for that Token combination is then to activate the experience of organizing language.
This process proceeds iteratively; each iteration may activate new reward and punishment symbols. The machine aggregates the activation values of these reward and punishment symbols until they converge, and then establishes the optimal response path.
The machine then enters the imitation-execution process. Its decision path must be decomposed iteratively down to the underlying drive parameters, so that it can issue drive commands by imitating the parameter configurations found in experience.
In practice, experience and reality only ever match partially, so generalization between them can only be realized by imitating their common Token combination patterns.
Among these pathways, those composed of high-activation-value Tokens are the top-level imitation paths. If an imitation path does not contain a direct combination of underlying drive commands, more Tokens (with lower activation values) are added, and the imitation path becomes a multi-segment path formed from more Tokens. This is what segmented imitation means.
That is, when facing a large path for which no suitable experience exists, we refine it and decompose it into several small response-path segments, and for each small segment we look for suitable experience to generalize from. If a segment still cannot be reduced to a direct combination of underlying drive commands, we repeat the process: add more Tokens, decompose the response path into even smaller segments, and again look for suitable experience. The process repeats until every segment is reduced to a direct combination of underlying drive commands.
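A minimal sketch of this recursive decomposition, assuming a hypothetical `memory.find_experience` lookup that returns drive commands when a segment matches stored experience (the simple binary split below stands in for the refinement by adding lower-activation Tokens described above):

```python
def decompose(path_tokens, memory, depth=0, max_depth=10):
    """path_tokens: tuple of Tokens describing a response path.
    memory.find_experience(segment) is assumed to return a list of underlying
    drive commands, or None if no stored experience matches the segment."""
    commands = memory.find_experience(path_tokens)
    if commands is not None:
        return commands                      # directly imitable segment
    if depth >= max_depth or len(path_tokens) <= 1:
        return []                            # give up on this branch
    # Refine: split the path into smaller segments and imitate each one.
    mid = len(path_tokens) // 2
    return (decompose(path_tokens[:mid], memory, depth + 1, max_depth)
            + decompose(path_tokens[mid:], memory, depth + 1, max_depth))
```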
The above process continues iteratively, and new Tokens may arrive at any time. Whenever new Tokens are input, the machine must run the chain association activation process again; once it completes, the distribution of activation values in the memory bank changes, so the machine must restart its decision-making. In this process, the machine's optimal decision may be to set aside some current goals and pursue the newest ones.
Our machine thus produces its own goals and can change them continuously, so its decisions are highly flexible and matched to the environment.
In the example above, the likely outcome is that the machine stands up immediately, raises the resolution of its audio processing system, and turns around to observe the owner's posture, movements and expression, even though at this moment the owner may only have said "Hello..." and the rest of the sentence has not yet begun.
Our machine is therefore a human-like intelligence: its understanding of information comes from its own experience, not from a statistical process over external data. Only in this way can machines provide personalized services.
A thousand homemakers have a thousand different requirements. An artificial intelligence obtained through knowledge statistics, a robot that cannot update its knowledge in real time, can never truly enter the home or win the hearts of its users; its deployment scenarios will be very limited. Our solution, by contrast, aims at real general artificial intelligence, and it may change the face of the world.
## 9 Conclusion
We believe that the development of artificial intelligence can be roughly divided into stages: (1) the "feature exploration" stage, which before deep learning relied mainly on manual exploration and after deep learning on machine exploration; (2) the "knowledge generalization" stage, reached once real attention (the Transformer) was realized and the machine's "knowledge coordinate bases" were initially aligned with the human "knowledge coordinate bases" (concepts). Faced with human tasks, machines can then show a certain degree of intelligence through "knowledge generalization" [29][30].
A one-dimensional attention mechanism brings large language models; a two-dimensional attention mechanism brings image generalization; a three-dimensional attention mechanism can achieve 3D creative ability; and a four-dimensional (3D + time) attention mechanism can realize the generalization of dynamic processes, bringing video generation and robot services in limited scenarios.
But we believe that only by adding "vitality", the fifth dimension of self-demand, can machine intelligence be given a real "soul". Large models, by their nature, cannot reach this "fifth dimension", whereas our solution can give the machine "life", so that it can become a true general artificial intelligence.
We therefore believe that AI needs to move on to the next stage: the "autonomous interaction" stage. "Autonomy" means that the machine is no longer a passive "machine"; it can spontaneously produce behavior (which is equivalent to programming itself) and explores knowledge on its own (for example, by actively interacting with the environment to acquire it). "Interaction" means that the machine can interact with the environment in real time, update its knowledge in real time, and make continuous decisions to complete complex tasks in unfamiliar environments [29].
Many well-known scholars have put forward their own views on how to move toward real general artificial intelligence. For example, Professor LeCun proposed the "world model", and Professor Zhu Songchun proposed four characteristics of general artificial intelligence:
(1) can perform unlimited tasks;
(2) can independently generate new tasks;
(3) is driven by a value system;
(4) has a world model reflecting the real world.
Our scheme is clearly a response to the ideas of Professor LeCun and Professor Zhu Songchun.
General artificial intelligence is both the original aspiration of artificial intelligence and its crown. We have presented a set of technical solutions for implementing general AI, including step-by-step implementation procedures. In references [25][26][27][28] we disclose, in patent form, the detailed technical steps of this path. It may be the right path to lead humanity to general artificial intelligence.
|
2310.13533
|
Technical Report for ICCV 2023 Visual Continual Learning Challenge:
Continuous Test-time Adaptation for Semantic Segmentation
|
The goal of the challenge is to develop a test-time adaptation (TTA) method,
which could adapt the model to gradually changing domains in video sequences
for semantic segmentation task. It is based on a synthetic driving video
dataset - SHIFT. The source model is trained on images taken during daytime in
clear weather. Domain changes at test-time are mainly caused by varying weather
conditions and times of day. The TTA methods are evaluated in each image
sequence (video) separately, meaning the model is reset to the source model
state before the next sequence. Images come one by one and a prediction has to
be made at the arrival of each frame. Each sequence is composed of 401 images
and starts with the source domain, then gradually drifts to a different one
(changing weather or time of day) until the middle of the sequence. In the
second half of the sequence, the domain gradually shifts back to the source
one. Ground truth data is available only for the validation split of the SHIFT
dataset, in which there are only six sequences that start and end with the
source domain. We conduct an analysis specifically on those sequences. Ground
truth data for test split, on which the developed TTA methods are evaluated for
leader board ranking, are not publicly available.
The proposed solution secured a 3rd place in a challenge and received an
innovation award. Contrary to the solutions that scored better, we did not use
any external pretrained models or specialized data augmentations, to keep the
solutions as general as possible. We have focused on analyzing the
distributional shift and developing a method that could adapt to changing data
dynamics and generalize across different scenarios.
|
Damian Sójka, Yuyang Liu, Dipam Goswami, Sebastian Cygert, Bartłomiej Twardowski, Joost van de Weijer
|
2023-10-20T14:20:21Z
|
http://arxiv.org/abs/2310.13533v1
|
# Technical Report for ICCV 2023 Visual Continual Learning Challenge: Continuous Test-time Adaptation for Semantic Segmentation
###### Abstract
The goal of the challenge is to develop a test-time adaptation (TTA) method, which could adapt the model to gradually changing domains in video sequences for semantic segmentation task. It is based on a synthetic driving video dataset - SHIFT [5]. The source model is trained on images taken during daytime in clear weather. Domain changes at test-time are mainly caused by varying weather conditions and times of day. The TTA methods are evaluated in each image sequence (video) separately, meaning the model is reset to the source model state before the next sequence. Images come one by one and a prediction has to be made at the arrival of each frame. Each sequence is composed of 401 images and starts with the source domain, then gradually drifts to a different one (changing weather or time of day) until the middle of the sequence. In the second half of the sequence, the domain gradually shifts back to the source one. Ground truth data is available only for the validation split of the SHIFT dataset, in which there are only six sequences that start and end with the source domain. We conduct an analysis specifically on those sequences. Ground truth data for test split, on which the developed TTA methods are evaluated for leader board ranking, are not publicly available.
## 1 Introduction
The goal of the challenge is to develop a test-time adaptation (TTA) method, which could adapt the model to gradually changing domains in video sequences for semantic segmentation task. It is based on a synthetic driving video dataset - SHIFT [5]. The source model is trained on images taken during daytime in clear weather. Domain changes at test-time are mainly caused by varying weather conditions and times of day. The TTA methods are evaluated in each image sequence (video) separately, meaning the model is reset to the source model state before the next sequence. Images come one by one and a prediction has to be made at the arrival of each frame. Each sequence is composed of 401 images and starts with the source domain, then gradually drifts to a different one (changing weather or time of day) until the middle of the sequence. In the second half of the sequence, the domain gradually shifts back to the source one. Ground truth data is available only for the validation split of the SHIFT dataset, in which there are only six sequences that start and end with the source domain. We conduct an analysis specifically on those sequences. Ground truth data for test split, on which the developed TTA methods are evaluated for leader board ranking, are not publicly available.
The proposed solution secured a 3rd place in a challenge and received an innovation award. Contrary to the solutions that scored better, we did not use any external pretrained models or specialized data augmentations, to keep the solutions as general as possible. We have focused on analyzing the distributional shift and developing a method that could adapt to changing data dynamics and generalize across different scenarios.
## 2 Problem Analysis
We conduct an extensive analysis using the semantic segmentation source model DeepLabv3+ [1] with ResNet50 [2] backbone. We utilize model weights provided by the challenge organizers.
Firstly, we check how the domain shift influences unchanged source model performance over the span of the sequence. Figure 1 shows the mean intersection over union (mIoU) for each image in six sequences from the validation split. As expected, the performance degrades until the middle of the sequence, where the domain shift is the most drastic, and starts increasing toward the end of the sequence, as the domain comes back to the source one. However, the change in mIoU is gradual for sequences in which the domain change is in the form of weather conditions (clear to rainy or foggy), and more abrupt for videos with domain change to night. It might be caused by drastic changes in lighting conditions during the night. This suggests that the developed method should be flexible and be able to adapt the model to both gradual and abrupt changes.
To further analyze the nature of domain shift, we decided to inspect the shift in data distribution \(\phi=(\mu,\sigma)\), where \(\mu\) is a mean of data and \(\sigma\) represents a standard deviation. We examine the distance between the distributions of source data \(\phi^{S}\) and test images \(\phi_{t}^{T}\) for each frame at time \(t\). We utilize symmetric Kullback-Leibler divergence as a distance metric \(D(\phi^{S},\phi_{t}^{T})\):
\[D(\phi^{S},\phi_{t}^{T})=\frac{1}{C}\sum_{i=1}^{C}KL(\phi_{i}^{S}||\phi_{t,i}^ {T})+KL(\phi_{t,i}^{T}||\phi_{i}^{S}) \tag{1}\]
where \(C\) is the number of channels.
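A small sketch of how Eq. (1) can be evaluated, assuming each distribution is summarized by per-channel Gaussian statistics \((\mu,\sigma)\) (e.g. batch-norm-style statistics); this is our own illustration, not the authors' code:

```python
import numpy as np

def kl_gauss(mu1, sig1, mu2, sig2, eps=1e-8):
    # KL( N(mu1, sig1^2) || N(mu2, sig2^2) ), element-wise over channels
    return (np.log((sig2 + eps) / (sig1 + eps))
            + (sig1 ** 2 + (mu1 - mu2) ** 2) / (2.0 * (sig2 ** 2 + eps)) - 0.5)

def symmetric_kl_distance(mu_s, sig_s, mu_t, sig_t):
    # Eq. (1): average over the C channels of the symmetric KL divergence
    d = kl_gauss(mu_s, sig_s, mu_t, sig_t) + kl_gauss(mu_t, sig_t, mu_s, sig_s)
    return float(np.mean(d))
```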
The distance plot is presented in Figure 2. It can be seen that the significance of the distribution shift varies between
sequences. Changes in time of day to night time cause the test image distribution to greatly drift from the source data distribution. On the other hand, differences in weather conditions do not influence the distribution changes significantly. It shows that the TTA method should be able to handle different distributions of data and adjust the normalization process of the model accordingly, especially while using batch normalization (BN).
Lastly, Figure 3 shows the mean entropy of source model predictions for each frame of the sequences. Predictions' entropy increases with increasing domain shift. Moreover, the trend is highly similar to the \(mIoU\) plot in Figure 1. It suggests that entropy might be a relatively useful metric for evaluating the model performance and the degree of domain shift during test-time.
## 3 Baselines
There are two baseline TTA methods implemented by the challenge organizers: TENT [7] and CoTTA [8]. They use two different adaptation methods. TENT uses prediction entropy minimization to update only batch normalization weights. CoTTA is based on adapting the student model with pseudo-labels generated by the teacher model. The teacher is updated by the exponential moving average of student's weights. To prevent performance degradation, it additionally uses stochastic model restoration, where randomly selected weights are reset to the source model state. Table 1 shows their performance using local evaluation on six sequences from the validation split. DeepLabv3+ [1] model with weights provided by the organizers is used. TENT outperformed the CoTTA method in the challenge setting. Therefore, we choose entropy minimization used in TENT as our base adaptation method and build upon it.
## 4 Our Method
Our method is composed of the base adaptation method from TENT [7] - entropy minimization, and three additional modifications. Firstly, considering the experimental results from Figure 2, we utilize the dynamic BN statistics update method, described in Section 4.1. Moreover, to make the entropy minimization process more reliable, we filter the uncertain pixels from the minimization process by the value of the entropy of their prediction. We depict this approach
\begin{table}
\begin{tabular}{c|c c} \hline Method & \(mIoU\) & \(overall\) \\ \hline Source model & 54.4 & 12.9 \\ TENT [7] & **58.1** & **41.0** \\ CoTTA [8] & 56.2 & 12.0 \\ \hline \end{tabular}
\end{table}
Table 1: mIoU and overall metrics of fixed source model, TENT [7] and CoTTA [8] baselines. The results are from the local evaluation on the validation split. The learning rate is equal to 0.00006/8.
Figure 1: The plot of \(mIoU\) between ground truth and source model predictions for each frame in six different sequences from the validation split.
Figure 3: The plot of mean entropy of source model predictions for each frame in six different sequences from the validation split.
Figure 2: The plot of symmetric Kullback-Leibler divergence as a distance metric between source training data distribution and the distribution of each frame in six different sequences from the validation split.
in Section 4.2. Lastly, the original TENT method for classification tasks adapted only BN weights of the whole model. For segmentation, we only adapt the BN weights of the backbone (ResNet50), leaving the segmentation head fixed. We show the advantage of this approach experimentally in Section 5.4.
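As a rough illustration of this setup (our sketch, not the challenge code; the `backbone` attribute, the choice of optimizer and all other details are assumptions), only the batch-norm affine parameters of the backbone are handed to the optimizer while the rest of the model stays frozen, and adaptation minimizes per-pixel prediction entropy as in TENT:

```python
import torch
import torch.nn as nn

def configure_for_tta(model, lr=0.00006 / 4):
    model.requires_grad_(False)                  # freeze everything
    params = []
    for m in model.backbone.modules():           # assumed submodule name
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)               # adapt only BN affine weights
            m.train()                            # use current-batch statistics
            params += [m.weight, m.bias]
    return torch.optim.Adam(params, lr=lr)

def entropy_loss(logits):
    # logits: (B, num_classes, H, W); mean per-pixel prediction entropy
    p = logits.softmax(dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()
```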
### Dynamic Batch Normalization Statistics Update
Due to domain shift, state-of-the-art test-time adaptation methods [8, 3, 7] for classification task usually discard statistics calculated during source training and estimate data distribution based on each batch of data separately. However, this way of calculating the statistics is flawed since the sample size from data is usually too small to correctly estimate the data distribution, especially for lower batch sizes.
Moreover, as presented in Figure 2, the magnitude of the distribution shift might vary between sequences. For some domain shifts, keeping the BN statistics of source data could be more beneficial. Therefore, there is a need for a method that adjusts statistics used in BN accordingly.
We adapt a part of the method we previously developed [6] to the semantic segmentation task and use BN statistics from source data to estimate BN statistics \(\phi_{t}=(\mu_{t},\sigma_{t})\) at time step \(t\) during test-time by linearly interpolating between saved statistics from source data \(\phi^{S}\) and calculated values from current batch \(\phi_{t}^{T}\):
\[\phi_{t}=(1-\beta)\phi^{S}+\beta\phi_{t}^{T} \tag{2}\]
where \(\beta\) is a parameter that weights the influence of saved and currently calculated statistics.
We utilize the symmetric KL divergence as a measure of distance between distributions \(D(\phi_{t-1},\phi_{t}^{T})\) to adjust the value of \(\beta\) accordingly to the severity of the distribution shift:
\[D(\phi_{t-1},\phi_{t}^{T})=\frac{1}{C}\sum_{i=1}^{C}KL(\phi_{t-1,i}||\phi_{t, i}^{T})+KL(\phi_{t,i}^{T}||\phi_{t-1,i}) \tag{3}\]
where \(C\) is the number of channels. \(\beta_{t}\) at time step \(t\) is calculated as follows:
\[\beta_{t}=1-e^{-\gamma D(\phi_{t-1},\phi_{t}^{T})} \tag{4}\]
where \(\gamma\) is a scale hyperparameter.
To provide more stability for the adaptation, we take into account previous \(\beta_{1:t-1}\) values and use an exponential moving average for \(\beta_{t}\) update:
\[\beta=(1-\alpha)\beta_{t-1}+\alpha\beta_{t} \tag{5}\]
where \(\alpha\) is a hyperparameter.
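The following sketch puts Eqs. (2)–(5) together; the class and its state handling are hypothetical, since the report does not give implementation details:

```python
import numpy as np

def _sym_kl(mu_a, sig_a, mu_b, sig_b, eps=1e-8):
    # per-channel symmetric KL between Gaussian statistics, averaged over channels
    def kl(m1, s1, m2, s2):
        return (np.log((s2 + eps) / (s1 + eps))
                + (s1 ** 2 + (m1 - m2) ** 2) / (2.0 * (s2 ** 2 + eps)) - 0.5)
    return float(np.mean(kl(mu_a, sig_a, mu_b, sig_b) + kl(mu_b, sig_b, mu_a, sig_a)))

class DynamicBNStats:
    def __init__(self, mu_src, sig_src, gamma=0.1, alpha=0.005):
        self.mu_src, self.sig_src = mu_src, sig_src        # source statistics phi^S
        self.mu_prev, self.sig_prev = mu_src, sig_src      # phi_{t-1}
        self.gamma, self.alpha = gamma, alpha
        self.beta = 0.0

    def update(self, mu_batch, sig_batch):
        d = _sym_kl(self.mu_prev, self.sig_prev, mu_batch, sig_batch)      # Eq. (3)
        beta_t = 1.0 - np.exp(-self.gamma * d)                             # Eq. (4)
        self.beta = (1.0 - self.alpha) * self.beta + self.alpha * beta_t   # Eq. (5)
        mu_t = (1.0 - self.beta) * self.mu_src + self.beta * mu_batch      # Eq. (2)
        sig_t = (1.0 - self.beta) * self.sig_src + self.beta * sig_batch
        self.mu_prev, self.sig_prev = mu_t, sig_t
        return mu_t, sig_t
```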
### Entropy-based Pixel Filtering
Training feedback from entropy minimization might be noisy and unreliable, considering that the model predictions can be incorrect. Some of the state-of-the-art TTA methods [3, 4] used for the classification task filter images on which they adapt based on the entropy of the model's predictions to make adaptation more reliable.
We decided to use a similar approach for the task of segmentation. However, discarding the whole images on which we update the model could be sub-optimal, considering the low number of images in sequences and segmentation task. Instead, we mask out single pixels in the process of calculating the loss based on the entropy of prediction for those pixels. Pixels with predictions having an entropy higher than the predefined, constant threshold are masked and do not participate in the backpropagation process. This way, we are able to adapt more robustly, disregarding uncertain predictions.
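A possible implementation of this filtering (our sketch; the tensor shapes are assumptions and the threshold convention follows the entropy threshold reported later in the implementation details):

```python
import math
import torch

def filtered_entropy_loss(logits, num_classes=14, ratio=0.3, eps=1e-8):
    # logits: (B, num_classes, H, W). Pixels whose prediction entropy exceeds
    # ratio * ln(num_classes) are excluded from the adaptation loss.
    p = logits.softmax(dim=1)
    ent = -(p * torch.log(p + eps)).sum(dim=1)          # (B, H, W) per-pixel entropy
    mask = ent < ratio * math.log(num_classes)
    if not mask.any():
        return ent.sum() * 0.0                          # keep the graph, contribute nothing
    return ent[mask].mean()
```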
## 5 Experiments
### Evaluation Metrics
The main metrics defined by the organizers are as follows:
\[overall=mIoU-2\times mIoU_{drop} \tag{6}\]
where \(mIoU_{drop}\) is calculated as:
\[mIoU_{drop}=mIoU_{source}-mIoU_{target} \tag{7}\]
The \(mIoU\) is based on combined predictions from all frames, \(mIoU_{source}\) is the \(mIoU\) over the first 20 frames of each sequence, and \(mIoU_{target}\) is the \(mIoU\) over the 180th to 220th frames of each sequence. Apart from \(overall\), we utilize a simple \(mIoU\) metric in our experiments.
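An illustrative sketch of Eqs. (6)–(7); `miou` is assumed to be a function that computes mIoU over the combined predictions of a given subset of frames, and the exact window indexing is our assumption:

```python
def overall_metric(miou, frames):
    miou_all = miou(frames)                 # combined predictions, all frames
    miou_source = miou(frames[:20])         # first 20 frames
    miou_target = miou(frames[180:220])     # 180th to 220th frame
    miou_drop = miou_source - miou_target   # Eq. (7)
    return miou_all - 2.0 * miou_drop       # Eq. (6)
```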
### Implementation Details
We use the code repository provided by the challenge organizers for the development and evaluation of our method. Additionally, we implemented the \(overall\) metric locally ourselves, as it was only available on the evaluation server. The presented results are from our local evaluation on six sequences from the validation split unless stated otherwise.
We utilize DeepLabv3+ [1] with ResNet50 [2] backbone as a source model, with weights provided by the organizers. During TTA, we use a learning rate equal to 0.00006/4. The \(\gamma\) parameter value from Equation 4 is set to 0.1 and \(\alpha\) from Equation 5 to 0.005, unless stated otherwise. The entropy threshold for discarding the pixels with uncertain predictions from the adaptation process is equal to \(0.3\times\ln 14\), where 14 is the number of classes and the \(\ln 14\) represents the maximum entropy value.
### Results
Table 2 presents the performance with different combinations of components of our method. It can be seen that adding each element to our base method (entropy minimization) increases both \(mIoU\) and overall metrics. The most significant improvement is achieved in terms of overall metric by adding Dynamic BN Statistics Update to **B** configuration.
We show our final results from the server evaluation on test split in Table 3.
### Ablation Study
Table 4 displays the performance of the baseline TENT [7] method, while different parts of a model are updated during test-time. The results show that adapting only BN weights of the backbone achieves the best results in terms of both metrics. Moreover, keeping all the weights of an adapted part unfrozen, instead of only BN ones, significantly degrades the plain entropy minimization performance.
Additionally, we explored different thresholds for filtering the unreliable pixel predictions for adaptation. Results are displayed in Table 5.
Lastly, in Table 6, we show the performance of our method with different parameters of dynamic BN statistics update, namely \(\gamma\) from Equation 4 and \(\alpha\) from Equation 5.
### Things That Didn't Work
Apart from the techniques used in our final method, we experimented with more approaches that didn't work.
We tried to adapt the model during test-time while keeping the features of the original and augmented images consistent by adding an additional term to the loss function.
Moreover, we explored preserving a buffer of a low number of previous predictions and averaging them to obtain more reliable pseudo-labels. Additionally, to account for different positions of objects in images in between the frames, we considered using optical flow to unify the predictions into a single time step.
## 6 Conclusions
In this work, we present a brief analysis of the problem of continuous test-time adaptation and demonstrate our methods for the semantic segmentation task. We were able to build upon and outperform the baselines.
**Acknowledgement.** Bartłomiej Twardowski acknowledges the grant RYC2021-032765-I.
\begin{table}
\begin{tabular}{c c|c c} \hline \hline \(\gamma\) & \(\alpha\) & \(mIoU\) & \(overall\) \\ \hline 1 & 0.5 & 58.0 & 40.3 \\ 1 & 0.05 & 58.3 & 42.0 \\ 1 & 0.005 & 58.3 & 43.8 \\ 1 & 0.0005 & 58.6 & 46.0 \\ \hline 0.1 & 0.5 & 58.2 & 45.0 \\ 0.1 & 0.05 & 58.7 & 48.2 \\ 0.1 & 0.005 & **58.8** & **50.2** \\ 0.1 & 0.0005 & **58.8** & 47.5 \\ \hline 0.01 & 0.5 & 58.2 & 43.8 \\ 0.01 & 0.05 & 58.4 & 45.6 \\ 0.01 & 0.005 & 58.6 & 49.6 \\ 0.01 & 0.0005 & **58.8** & 47.9 \\ \hline 0.001 & 0.5 & 58.1 & 43.3 \\ 0.001 & 0.05 & 58.3 & 45.2 \\ 0.001 & 0.005 & **58.8** & 48.0 \\ 0.001 & 0.0005 & **58.8** & 48.0 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance of our method with different values of dynamic BN statistics update parameters - \(\gamma\) and \(\alpha\). The results are from the local evaluation on the validation split.
\begin{table}
\begin{tabular}{c c c c|c} \hline \hline \(mIoU\) & \(mIoU_{drop}\) & \(mIoU_{source}\) & \(mIoU_{target}\) & \(overall\) \\ \hline 71.4 & 23.3 & 76.5 & 53.2 & 24.7 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation of the performance of our method from the evaluation server on the test split.
\begin{table}
\begin{tabular}{l|c c} \hline \hline Method & \(mIoU\) & \(overall\) \\ \hline A: TENT [7] baseline (entropy minimization) & 56.9 & 39.0 \\ B: A + Adapting Only Backbone’s BN Weights & 57.7 & 41.3 \\ C: B + Dynamic BN Statistics Update & 58.6 & 49.1 \\ **Ours: C + Pixel Filtering** & **58.8** & **50.2** \\ \hline \hline \end{tabular}
\end{table}
Table 2: \(mIoU\) and \(overall\) metrics of different combinations of components of our method. The results are from the local evaluation on the validation split.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Entropy threshold & \(mIoU\) & \(overall\) \\ \hline \(0.40\times\ln 14\) & 58.6 & 49.2 \\ \(0.35\times\ln 14\) & 58.6 & 48.9 \\ \(0.30\times\ln 14\) & **58.8** & **50.2** \\ \(0.25\times\ln 14\) & **58.8** & 50.0 \\ \(0.20\times\ln 14\) & **58.8** & 49.4 \\ \(0.15\times\ln 14\) & 58.7 & 47.5 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of our method with different values of entropy threshold for filtering the unreliable pixel predictions. The results are from the local evaluation on the validation split.
|
2302.03531
|
Structured Generative Models for Scene Understanding
|
This position paper argues for the use of \emph{structured generative models}
(SGMs) for the understanding of static scenes. This requires the reconstruction
of a 3D scene from an input image (or a set of multi-view images), whereby the
contents of the image(s) are causally explained in terms of models of
instantiated objects, each with their own type, shape, appearance and pose,
along with global variables like scene lighting and camera parameters. This
approach also requires scene models which account for the co-occurrences and
inter-relationships of objects in a scene. The SGM approach has the merits that
it is compositional and generative, which lead to interpretability and
editability. \\\\ To pursue the SGM agenda, we need models for objects and
scenes, and approaches to carry out inference. We first review models for
objects, which include ``things'' (object categories that have a well defined
shape), and ``stuff'' (categories which have amorphous spatial extent). We then
move on to review \emph{scene models} which describe the inter-relationships of
objects. Perhaps the most challenging problem for SGMs is \emph{inference} of
the objects, lighting and camera parameters, and scene inter-relationships from
input consisting of a single or multiple images. We conclude with a discussion
of issues that need addressing to advance the SGM agenda.
|
Christopher K. I. Williams
|
2023-02-07T15:23:52Z
|
http://arxiv.org/abs/2302.03531v2
|
# Structured Generative Models for Scene Understanding
###### Abstract
This position paper argues for the use of _structured generative models_ (SGMs) for scene understanding. This requires the reconstruction of a 3D scene from an input image, whereby the contents of the image are causally explained in terms of models of instantiated objects, each with their own type, shape, appearance and pose, along with global variables like scene lighting and camera parameters. This approach also requires scene models which account for the co-occurrences and inter-relationships of objects in a scene. The SGM approach has the merits that it is compositional and generative, which lead to interpretability.
To pursue the SGM agenda, we need models for objects and scenes, and approaches to carry out inference. We first review models for objects, which include "things" (object categories that have a well defined shape), and "stuff" (categories which have amorphous spatial extent). We then move on to review _scene models_ which describe the inter-relationships of objects. Perhaps the most challenging problem for SGMs is _inference_ of the objects, lighting and camera parameters, and scene inter-relationships from input consisting of a single or multiple images. We conclude with a discussion of issues that need addressing to advance the SGM agenda.
**Keywords:** structured generative models, generative models, compositionality, scene understanding.
## 1 Introduction
The goal of this position paper is to promote the use of _structured generative models_ (SGMs) for scene understanding. These models are situated in the classical framework for computer vision whereby a 3D scene is _reconstructed_ from one or more input images. In this case the contents of the image are causally explained in terms of models of instantiated objects, each with their own type, shape, appearance and pose, along with global variables like scene lighting and camera parameters. This approach also requires _scene models_ which account for the co-occurrences and inter-relationships of objects in a scene. Because such models can _generate_ (or reconstruct, or explain) the scene, and because they are _structured_ (i.e. they are composed of multiple objects and their relationships), we term them _structured generative models_.
This reconstructive framework is also known as _analysis-by-synthesis_(Grenander, 1978), or _vision-as-inverse-graphics (VIG)_(Kulkarni et al., 2015; Moreno et al., 2016). It can be traced back to the early days of "blocks world" research in the 1960s (Roberts, 1963). Other early work in this vein
includes the VISIONS system of Hanson and Riseman (1978), and the system of Ohta et al. (1978) for outdoor scene understanding. For example, the VISIONS system used various levels of analysis (e.g., objects, surfaces), and mappings between the image-specific parse and generic knowledge about scenes.
Alternatives to the VIG framework are either _discriminative approaches_, predicting some target quantity or quantities given input image(s), or _unstructured generative models_. Discriminative approaches are typically applied to solve _specific tasks_ such as object detection or semantic segmentation, which are usually specified in _image space_. These are usually set up as supervised learning tasks, thus requiring annotated data. Currently deep neural networks (DNNs) are the dominant method-of-choice for such tasks. DNNs are often highly accurate, but as discriminatively-trained models they can sometimes fail badly, producing absurd mistakes,1 but with no effective indication of unreliability. One example of this is performance failures on adversarial examples (see e.g., Szegedy et al. 2013). Also as the discriminative models are trained on specific datasets, they can often perform poorly when faced with the same task but on a novel dataset with different statistics (distribution shift). The focus on the evaluation of specific tasks means that the predictions from multiple tasks are not required to create a coherent understanding of the input image in terms of the 3D world; this point is made, e.g., by Zamir et al. (2020).
Footnote 1: The phrase “absurd mistakes” is borrowed from Daniel Kahneman’s talk at the NeurIPS conference in December 2021.
With _unstructured generative models_, images are generated from a single set of latent variables, without explicit modelling of objects and their interactions. An example is the work of Radford et al. (2016), where images of bedroom scenes are generated by generative adversarial networks (GANs). Here there is a single latent vector representation for the whole scene, which is not disentangled across objects. This means, for example, that it is very hard to edit the latent representation to make specific changes in the scene (e.g., to change the colour of the bedspread), as the representation is not interpretable.
To be clear, we are not arguing against the use of deep neural networks in computer vision. However, for structured generative models DNNs can be used for specific modelling and inference tasks (as we will see below), rather than as one big black box. A similar point is made by Yuille and Liu (2021), who argue (their sec. 7.3) that to handle the combinatorial explosion of possible images, computer vision systems need to be _compositional_ and _generative_, and that this also leads to _interpretability_. These points are in excellent agreement with the formulation of structured generative models.
Fig. 1 shows how an input image can be explained in terms of 3D objects, the camera pose and the illumination to produce the reconstructed image. Sometimes a full 3D version may be too onerous, and we can consider a layered "2.1D" model, where the objects (people) are represented by "sprite" models of the shape and appearance, along with the background, as in Figure 2. The layers have a depth ordering, so that occlusions can be explained.
The goals of this paper are: to promote the SGM viewpoint; to review relevant work on object and scene modelling, and inference with SGMs; and to identify gaps/outstanding issues where further research is needed. The structure of the paper is as follows: In sec. 1.1 we discuss the rich variety of tasks associated with scene understanding. Sec. 1.2 describes the general advantages of generative models, and sec. 1.3 discusses the pros and cons of structured generative models. Sec. 2 describes modelling objects, including "things" (object categories that have a well defined shape) in sec. 2.1, including parts-based models, and "stuff" (categories which have amorphous spatial extent) in sec. 2.2. Sec. 3 covers models of the inter-relationships of objects, focusing mostly on indoor scenes. Having defined models for objects and scenes, sec. 4 discusses how inference for the SGM may be
Figure 1: The input image (left) is explained in terms of 3D objects, the camera pose and illumination, to produce the reconstructed image (right). Images from Romaszko et al. (2020).
Figure 2: The left most panel shows two frames from a video of two people walking past each other against a background. The second panel shows the mask (top) and appearance of the first sprite learned. The third panel shows the same thing for the second sprite. The rightmost panel shows the learned background. Images from Williams and Titsias (2004).
carried out, from input consisting of a single or multiple images. Sec. 5 discusses issues that are needed to advance the SGM agenda, including datasets and benchmarks.
Note that this paper covers a very large amount of ground, and does not aim to provide comprehensive references for each topic. Indeed, the necessary topics cover much of the content of a textbook on computer vision. Rather, it aims to use prominent examples of work to provide an illustration of the various topics.
### Scene Understanding
Before discussing scene understanding more generally, let's first look at two example images in Fig. 3, and see what we can extract from them. These are not particularly complex scenes--it would be easy to pick images with a lot more objects and relations. Consider first the outdoor scene, Fig. 3(a)--we can identify that this is not taken in a dense, "downtown" area, but equally not in a very rural area. We can identify objects: a small herd of 6 cattle (one is likely a calf mostly hidden behind the white-and-brown cow near the centre of the image); 5 motor vehicles (one is half occluded); a building, some lamp posts; some trees; and some road signs. This focus on objects may have distracted us from the fact that large amounts of the images are "stuff" categories, such as road surface and grass. The cows are on the grass, which makes sense as they can graze there, but not on the road surface.
The second scene, Fig. 3(b), is an indoor scene of a dining room. We notice a table, 6 chairs (of the same or similar design), 5 tablemats on the table, a clock, a light fitting. The room has an outside door, a window, and a recessed area and a large wooden cabinet (?) off to the left. There is a polished wooden floor which allows some reflections, and the walls are a yellow-greenish colour. Closer inspection would pick up smaller details like light switches, a low-level radiator, some objects on the chairs (including perhaps a child's booster chair), and part of an indoor plant at the bottom right. We can also see some outdoor railings through the window, suggesting that there is a stairway up to the area outside the door.
So what is _scene understanding_? Part of it is about identifying the objects and the stuff that are visible, but it is more than this. For each object we would like to know its category, shape and its pose relative to the scene, and the materials it is made of. We also would like to understand the camera parameters (e.g., observer viewpoint) and lighting; this will help explain occlusions and shadows. For
Figure 3: Two example images 2008_001062 and 2008_000043 from the PASCAL VOC 2008 dataset.
example in Fig. 3(a) one can make inferences for the direction of the sun, given the shadows of the cattle. Such a 3D representation allows counterfactual questions, such as predicting how the image would change if an object was removed or added, or if it was viewed from a new direction (novel view synthesis, NVS). It also enables _interaction_ by an agent in the scene, e.g., by attempting to herd the cattle.
Scene understanding also includes identifying the scene type (e.g., dining room), which will give rise to expectations of what objects should (and should not) be present. And it is about spatial, functional and semantic relationships between objects.2 For example in Fig. 3(a) we might find it surprising that the cattle are not fenced off from the road to minimize collisions, but this may depend on the norms of the location where the image was taken. And in Fig. 3(b) knowledge of dining rooms means we would likely expect as many tablemats as chairs to be set on the table--in fact close inspection of the image suggests that the "missing" tablemat has been placed on the seat of the chair on the right hand side.
Footnote 2: See [https://ps.is.mpg.de/research_fields/semantic-scene-understanding](https://ps.is.mpg.de/research_fields/semantic-scene-understanding).
One rich approach to scene understanding is via answering questions. In Visual Question Answering (VQA; see, e.g., Antol et al. 2015), one typically expects textual responses, but this might not be the best way to answer certain questions, such as "which pixels belong to the black cow near the centre of the image?". The PASCAL VOC challenges (Everingham et al., 2010) asked three questions: For _classification_ the question was "is there an object of class X in the image?". For _detection_, the task was to predict the bounding box of every object of class X in the image. And for _segmentation_ the task was to label each pixel with one of the known class labels, or background. For the last two, textual responses are not the most natural way to answer the questions.
The epithet "a picture is worth a thousand words" also suggests that text is not the most efficient way to describe a scene, particularly given the ambiguities of natural language. Instead, SGMs provide a domain-specific language for scenes. OpenAI's text-to-image system DALL-E (Ramesh et al., 2021) can generate impressive output in response to prompts like "a couple of people are sitting on a wood bench", or even a quirky prompt like "a tapir made of an accordion" (see Figs. 3 and 2(a) in the paper). However, it has been reported that requesting more than three objects, negation, and numbers may result in mistakes and object features may appear on the wrong object (Marcus et al., 2022). An issue here is that in the paired text and images sourced from the web, the text will likely not be sufficiently informative about the objects and their spatial relationships etc. The kinds of results produced by DALL-E and similar systems are thus unlikely to give sufficient control to graphic artists and animators, who may be broadly happy with the output of the system, but may wish to make adjustments and edits. In order to enable this, we argue that one needs object-based representations and scene models as advocated for above.
### General Advantages of Generative Models
We first define some notation: \(\mathbf{x}\) denotes observed data, such as an image. A generative model \(p_{\theta}(\mathbf{x})\) defines a probability distribution over images; here \(\theta\) denotes the parameters of the model. For SGMs, our model is defined in terms of latent variables \(\mathbf{z}\). The latent variables for a single object might be decomposed, for example into \(\mathbf{z}=(\mathbf{z}^{s},\mathbf{z}^{t},\mathbf{z}^{p})\), for shape, texture (including colour) and pose respectively. If there are \(K\) objects, they can each have latent variables (LVs) \(\mathbf{z}_{1},\ldots,\mathbf{z}_{K}\); let \(\mathbf{z}_{0}\) denote the global variables (e.g., camera parameters and illumination). There can also be additional
latent structure in \(\mathbf{z}\) that models the inter-relationships of objects in the scene. We have that
\[p_{\theta}(\mathbf{x})=\int p(\mathbf{z})p_{\theta}(\mathbf{x}|\mathbf{z})\ d \mathbf{z}, \tag{1}\]
where \(p(\mathbf{z})\) is a prior over the latent variables, and \(p_{\theta}(\mathbf{x}|\mathbf{z})\) renders the latent description into the image.3
Footnote 3: In eq. 1 the integral should be interpreted as a summation for latent variables that are discrete.
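Purely as an illustration of this notation (hypothetical names; no particular implementation is implied), the latent state of a scene could be organized as follows, with a renderer playing the role of \(p_{\theta}(\mathbf{x}|\mathbf{z})\):

```python
from dataclasses import dataclass, field
from typing import List, Sequence

@dataclass
class ObjectLatents:
    shape: Sequence[float]       # z^s
    texture: Sequence[float]     # z^t
    pose: Sequence[float]        # z^p, e.g. 3D position and orientation

@dataclass
class SceneLatents:
    camera: Sequence[float]              # part of z_0
    illumination: Sequence[float]        # part of z_0
    objects: List[ObjectLatents] = field(default_factory=list)   # z_1 ... z_K

def log_likelihood(image, z: SceneLatents, renderer, noise_model):
    # p_theta(x | z): render the scene latents and score the observed image
    rendered = renderer(z)
    return noise_model.log_prob(image, rendered)
```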
There are several advantages of generative models, as discussed below.
Pattern synthesis (unconditional generation):Generative models allow one to sample from the model, and compare the samples to the raw input data. This can be very helpful, especially to identify ways in which the model has failed to capture aspects of the input data, leading to model revision. The methods of _model criticism_ (see e.g., Seth et al. 2018) can help detect differences; one can also train a discriminator to do this, as used in the training of GANs.
There are situations where such synthetic data (i.e. data sampled from the model) can be useful in its own right--for example, in healthcare one may not wish to release data due to privacy concerns. But if one can create synthetic data which mimics the underlying data distribution, this can enable research to proceed much more freely. However, one has to be careful that the synthetic data has not simply "memorized" some or all of the training data, as discussed in van den Burg and Williams (2021).
Imputation and restoration (conditional generation):If the input data \(\mathbf{x}\) is split into observed data \(\mathbf{x}^{o}\) and missing data \(\mathbf{x}^{m}\), then the task of _imputation_ is to predict the missing data given \(\mathbf{x}^{o}\). Probabilistic models also produce a probability distribution for \(p(\mathbf{x}^{m}|\mathbf{x}^{o})\), which allows a quantification of the uncertainty. In the case that part of an image is missing, the imputation task can be called _inpainting_; see Fig. 4 for an example. Here a simple method might just inpaint red and black texture from the car body and tarmac, but knowledge of cars will predict a wheel in this location, and it is likely that it will match in style to the visible front wheel.
Figure 4: Image inpainting task, with the green rectangle blanked out, based on image 2008_000959 from the PASCAL VOC 2008 dataset.
It might also happen that the observed data is a noisy or degraded version of the underlying data; in this case having models of the underlying data and the noise process allows probabilistic restoration of the data.
Anomaly detection:It can be helpful to detect datapoints which do not conform to the learned model \(p(\mathbf{x})\). This task is known as _anomaly detection_, _novelty detection_ or _out-of-distribution (OOD) detection_. For example it can be useful for an automated system to detect that the regime of operation has changed, and thus flag up that it needs attention or re-training. One way to frame this task is as a classification between \(p(\mathbf{x})\) and a broad ("crud-catcher") model \(p_{0}(\mathbf{x})\), as used e.g. in Quinn et al. (2009). If a data point \(\mathbf{x}\) is more likely under \(p_{0}(\mathbf{x})\), it can be classified as an outlier relative to \(p(\mathbf{x})\).
Anomalies can be quite subtle. In images of street scenes in North America both fire hydrants and mailboxes are common items of street furniture, but it is improbable to see a fire hydrant located on top of a mailbox--this example of a contextual anomaly is from Biederman et al. (1982, Fig. 1).
Data compression:A probabilistic model \(p(\mathbf{x})\) can be used to compress data. Given the true data distribution \(p(\mathbf{x})\), Shannon's source coding theorem will assign a code of length \(l(\mathbf{x})=-\log_{2}p(\mathbf{x})\) bits to \(\mathbf{x}\). Thus the expected code length is \(-\int p(\mathbf{x})\log_{2}p(\mathbf{x})\ d\mathbf{x}=H(p)\), the entropy of \(p\). Such data compression can be approached in practice using, for example, arithmetic coding, see e.g. MacKay (2003, sec. 6.2).
In practice we may not know the true distribution \(p(\mathbf{x})\), but have an alternative model \(q(\mathbf{x})\). In this case we have to pay a price in terms of the expected number of bits used. Let the expected code length when coding under \(q(\mathbf{x})\) be denoted \(L_{q}\). Then
\[L_{q}=-\int p(\mathbf{x})\log_{2}q(\mathbf{x})\ d\mathbf{x}=-\int p(\mathbf{ x})\log_{2}\left(\frac{q(\mathbf{x})}{p(\mathbf{x})}p(\mathbf{x})\right)\ d\mathbf{x}=H(p)+D_{KL}(p||q)\geq H(p). \tag{2}\]
Hence the additional expected code length is given by the Kullback-Leibler (KL) term, which of course reduces to zero when \(q=p\). This motivates minimizing the KL divergence to produce better codes, or equivalently to maximize the expected log likelihood \(\int p(\mathbf{x})\log q(\mathbf{x})\ d\mathbf{x}\) for a model \(q(\mathbf{x})\) (as the entropy term is fixed).
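A small numerical check of Eq. (2) for a made-up pair of discrete distributions (our example, not from the text): the expected code length under a mismatched model \(q\) exceeds the entropy of the true distribution \(p\) by exactly \(D_{KL}(p||q)\).

```python
import math

p = [0.5, 0.25, 0.25]      # true distribution
q = [1/3, 1/3, 1/3]        # mismatched coding model

H_p = -sum(pi * math.log2(pi) for pi in p)                    # entropy of p
L_q = -sum(pi * math.log2(qi) for pi, qi in zip(p, q))        # expected code length under q
KL  =  sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q))   # D_KL(p || q)

assert abs(L_q - (H_p + KL)) < 1e-9
print(f"H(p) = {H_p:.3f} bits, L_q = {L_q:.3f} bits, KL = {KL:.3f} bits")
```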
### Pros and Cons of Structured Generative Models
Below we contrast structured generative models (SGMs) compared to discriminative models, or to unstructured generative models.
* Structured generative models provide a coherent scene representation, rather than just output predictions for a disparate set of tasks. This representation is available for multiple tasks, including new ones not previously trained on (transfer learning).
* Structured generative models are _compositional_. This implies that when learning about a particular object type, we don't have to be concerned with other object types at the same time. This should make them more data efficient. If there are inter-object relationships, these can be modelled separately from the variability of individual objects, in a hierarchical model. This allows curriculum learning (Bengio et al., 2009), where one can first focus on modelling individual object classes using class-conditional data, and then bring in within-scene inter-relationships. These advantages are not present in an unstructured generative model.
* The SGM representation is _editable_, e.g., to change the direction of the lighting, or to add/remove objects.
* (Structured) generative models can be trained unsupervised, or with weak supervision. Discriminative models usually require potentially expensive/difficult human labelling, although the use of weaker supervision has also been explored (see, e.g., Shi et al. 2017).
* The SGM is _interpretable/explainable_. This structured approach can be contrasted with many deep generative models, which learn a rich model of the data, but with a monolithic black-box architecture which is not interpretable or easily editable. A SGM identifies certain image regions as being explained by certain objects, and can potentially provide more detailed part-level correspondences. The structured representation also enables other features such as occlusion reasoning.
* Discriminative models can be less susceptible to modelling limitations, as they are directly optimizing for a given task "end-to-end", rather than building a general-purpose model which can be used for inference for many different tasks.
* The SGM framework can require expensive inference processes to infer the latent variables for the whole scene. We discuss in section 4 below how these issues can be ameliorated.
An example of where the SGM approach should be helpful is when an object is heavily occluded, but scene context can help with its reconstruction. Consider Fig. 5(a), where the rearmost red chair is heavily occluded.4 Knowledge that chairs grouped around a table are often of the same design in such scenes would help make strong predictions for this heavily occluded chair. This could be evaluated by outputting a 3D model of the object, or making predictions of how the scene would look from a novel viewpoint, as in Fig. 5(b). A related task is that of image inpainting, as illustrated in Fig. 4. In this case there is in effect a synthetic occluder (the mask); it is most natural to evaluate this by prediction of the masked-out region(s).
Footnote 4: This example was inspired by Hueting et al. (2018).
The SGM approach could also be used to carry out scene editing, e.g. to add or remove objects or change their properties, or alter the lighting. Here the result could be evaluated by collecting views under the relevant perturbation. Another possible task is the completion of a 3D scene given only a subset of the objects in the scene (see e.g., Li et al. 2019). This is a missing data imputation task, like image inpainting, but different as it is imputing a 3D scene, not just an image.
Figure 5: Images of an office scene from two viewpoints.
Is full inference overkill? The key advantage of the SGM approach is that it provides a unified representation, from which many different tasks or questions can be addressed. This creates a coherent understanding of the input image in terms of the 3D (or 2.5D) world, in contrast with what might arise if different models are trained for different tasks without this underlying structure.
If we only care about one task, then it is certainly overkill. But acting in the real world does not require just one task. The example of tea-making in Land et al. (1999) illustrates this nicely. The goal of making tea decomposes into subgoals such as "put the kettle on", "make the tea", "prepare the cups". A subgoal of putting the kettle on is to "fill the kettle", and this in turn requires "find the kettle", "lift the kettle", "remove the lid", "transport to sink", "locate and turn on tap", "move kettle to water stream", "turn off tap when full", "replace lid", and "transport to worktop". The visual tasks required include object detection; pose and shape estimation of objects so that they can be manipulated; and monitoring of the state of some variable (e.g. water level in the kettle). Carrying out the tea-making task in an unfamiliar kitchen will bring into play knowledge about the typical layout of kitchens, and possibly about different kinds of tap mechanism. Note that subgoals such as "locate and turn on tap" and "turn off the tap" are re-usable across other tasks such as making coffee, or washing the dishes.
## 2 Models of Objects
Visual scenes can contain a lot of complexity. Components that make up the scene can be divided into "things" and "stuff". Things are object categories that have a well defined shape (like people or cars), while stuff corresponds to categories which have an amorphous spatial extent, such as grass and sky (see e.g., Sun et al. 2014). We first focus on approaches to model things in sec. 2.1, and then move on to model stuff in sec. 2.2.
### Modelling Things: Multifactor Models
Here we consider modelling a class of visual objects (such as teapots or cars). These can vary in shape, and in texture (described e.g. by its colour or possibly a more complex pattern on the surface). We can also vary the position and orientation (the pose) of the camera relative to the object, and the lighting; we term these as _rendering_ variables. Hence there are separate factors of _shape_, _texture_ and _rendering_ that combine to produce the observations. We call models with a number of separate factors **multifactor** models. Below we first give some examples of multifactor models, and then focus on parts-based models in sec. 2.1.1 (which are a special class of multifactor models).
Example: Blanz and Vetter's morphable model of faces. An early example of a 3D multifactor model is due to Blanz and Vetter (1999).5 Consider \(n\) locations on the face; \(\mathbf{s}_{i}\) records the \((x,y,z)\) coordinates of location \(i\), and similarly \(\mathbf{t}_{i}\) records the red, green and blue colour values (albedo) at the same location. These measurements were obtained with a laser scanner. These individual vectors are concatenated to produce the shape vector \(\mathbf{s}=(\mathbf{s}_{1},\mathbf{s}_{2},\ldots,\mathbf{s}_{n})\) which has length \(3n\), and similarly there is a texture vector \(\mathbf{t}\) of length \(3n\). Blanz and Vetter (1999) used approximately 70,000 locations on each face, and collected data from 200 subjects. Preprocessing was carried out to remove the global 3D transformation between the faces, and an optical flow method was used to register the locations.
Footnote 5: The description below is partly based on Chapter 17 of Prince (2012) as well as the original paper.
Given the shape and texture vectors for each subject, a probabilistic principal components analysis (PPCA) model can be built to capture the variation in shape and texture, with latent variables \(\mathbf{z}^{s}\) and \(\mathbf{z}^{t}\) respectively. One could alternatively use a common \(\mathbf{z}\) for the shape and texture variation, e.g. by concatenating the \(\mathbf{s}\) and \(\mathbf{t}\) vectors for each example before applying PPCA. This could model the fact that a change in the shape of the mouth to produce a smile will also likely expose the teeth to view, so these changes are correlated.
The above description models shape and texture variation in 3D. This model is transformed geometrically into the image plane in terms of the camera intrinsic and extrinsic parameters. The colour at each pixel is determined by the Phong shading model (see e.g., Szeliski 2021, sec. 2.2.2), which accounts for diffuse and specular reflections from directed light sources, and also for ambient illumination. Denoting all of the rendering variables by \(\mathbf{z}^{r}\), the overall model can be fitted to a new face by optimizing \(\mathbf{z}^{s}\), \(\mathbf{z}^{t}\) and \(\mathbf{z}^{r}\) so as to minimize an error measure between the observed and predicted pixels.
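The statistical step of such a model is easy to sketch. The code below is a minimal illustration (not the Blanz and Vetter pipeline) of fitting a linear latent model to concatenated shape and texture vectors, with synthetic arrays standing in for registered scan data; `n_subjects`, `n_locations` and the number of components are hypothetical, and the rendering variables \(\mathbf{z}^{r}\) are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-ins for registered shape (s) and texture (t) vectors.
rng = np.random.default_rng(0)
n_subjects, n_locations = 200, 500
S = rng.normal(size=(n_subjects, 3 * n_locations))   # stacked shape vectors s
T = rng.normal(size=(n_subjects, 3 * n_locations))   # stacked texture (albedo) vectors t

# Option discussed in the text: concatenate s and t so one latent z captures
# correlated shape/texture variation (e.g. smiling exposes the teeth).
X = np.hstack([S, T])

ppca = PCA(n_components=50, svd_solver="full")   # PCA recovers the PPCA ML subspace
Z = ppca.fit_transform(X)                        # latent codes z for each subject
print("latent codes:", Z.shape, "estimated noise variance:", ppca.noise_variance_)

# A new example (here just another synthetic vector) is coded by projection and
# reconstructed from its low-dimensional code.
x_new = rng.normal(size=(1, 6 * n_locations))
z_new = ppca.transform(x_new)
x_rec = ppca.inverse_transform(z_new)
```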
The model of Blanz and Vetter (1999) is in 3D. Such 3D models have also been used, e.g., for modelling regions in the brain (Babalola et al., 2008). Some earlier work by Cootes, Taylor and collaborators first developed 2D _active shape models_ using a PPCA model of the shape as defined by landmarks (Cootes et al., 1995), and then developed _active appearance models_ that also took the texture into account (Cootes et al., 1998).
Example: CodeNeRF models disentangled Neural Radiance Fields for object categories.Blanz and Vetter's model uses a linear approach (PPCA) to model the variability due to shape and texture. Careful alignment of the data was needed in order to make this approach work. With the advent of deep learning, it is natural to ask if one can exploit more powerful nonlinear models. We first start with the Neural Radiance Field (NeRF) representation for a single object due to Mildenhall et al. (2020), and then add latent variables to model shape and texture variation, as in the CodeNeRF model of Jang and Agapito (2021).
Figure 6: An illustration of the ability of CodeNeRF to carry out novel shape, texture and pose synthesis. The 4 boxed images correspond to renderings with shape and texture codes corresponding to the reference views. The other results show renders obtained from the cross product of the shape and texture codes, at a novel viewpoint. Image from Fig. 9 in Jang and Agapito (2021) licenced under CC BY 4.0.
The NeRF takes as input a 3D location \(\mathbf{x}=(x,y,z)\) and a viewing direction defined by a 3D Cartesian unit vector \(\mathbf{d}\), and outputs a volume density \(\sigma(\mathbf{x})\) and emitted colour \(\mathbf{c}(\mathbf{x},\mathbf{d})=(r,g,b)\) at that location and direction, obtained via a neural network \(F_{\theta}:(\mathbf{x},\mathbf{d})\rightarrow(\mathbf{c},\sigma)\) with weights \(\theta\). As Mildenhall et al. (2020, sec. 4) state, "the volume density \(\sigma(\mathbf{x})\) can be interpreted as the differential probability of a ray terminating at an infinitesimal particle at location \(\mathbf{x}\)". This means that the expected colour \(C(\mathbf{r})\) observed at the camera ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\), with near and far bounds \(t_{n}\) and \(t_{f}\) is given by6
Footnote 6: Note that in this section \(\mathbf{x}\) denotes a 3D location, not our usual meaning of an input image. Also \(t\) is overloaded to denote both the parameterization along the ray \(\mathbf{r}(t)\), and the texture superscript.
\[C(\mathbf{r})=\int_{t_{n}}^{t_{f}}T(t)\sigma(\mathbf{r}(t))\mathbf{c}( \mathbf{r}(t),\mathbf{d})\ dt,\qquad\mathrm{where}\qquad T(t)=\exp\left(-\int_ {t_{n}}^{t}\sigma(\mathbf{r}(s))\ ds\right). \tag{3}\]
Here \(T(t)\) denotes the transmittance along the ray from \(t_{n}\) to \(t\), i.e., the probability that the ray reaches \(t\) starting from \(t_{n}\) without hitting any other particle. The process to obtain \(C(\mathbf{r})\) is known as _volumetric rendering_, and is differentiable. The computation of \(C(\mathbf{r})\) is approximated by taking a number of samples along the ray.
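The sketch below shows the standard quadrature (alpha-compositing) approximation to Eq. (3) used in the NeRF literature; a toy density and colour function stand in for the network \(F_{\theta}\), and the ray bounds and sample count are illustrative.

```python
import numpy as np

def render_ray(o, d, sigma_fn, color_fn, t_near=0.0, t_far=4.0, n_samples=64):
    """Quadrature approximation to Eq. (3): alpha-composite colours along a ray."""
    t = np.linspace(t_near, t_far, n_samples)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))      # spacing between samples
    pts = o[None, :] + t[:, None] * d[None, :]             # sample locations r(t)
    sigma = sigma_fn(pts)                                   # volume densities
    rgb = color_fn(pts, d)                                  # emitted colours
    alpha = 1.0 - np.exp(-sigma * delta)                    # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance T(t)
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)             # expected colour C(r)

# Toy stand-ins for the NeRF network: a soft Gaussian blob of density, constant colour.
sigma_fn = lambda p: 5.0 * np.exp(-np.sum((p - np.array([0.0, 0.0, 2.0]))**2, axis=-1))
color_fn = lambda p, d: np.tile(np.array([0.8, 0.2, 0.2]), (p.shape[0], 1))

c = render_ray(o=np.zeros(3), d=np.array([0.0, 0.0, 1.0]),
               sigma_fn=sigma_fn, color_fn=color_fn)
print("rendered colour:", c)
```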
For a single object, the NeRF representation can be obtained by minimizing the error between the observed and predicted colours at a set of ray locations for a number of different views (with known camera poses and intrinsic parameters). This is useful to allow novel view synthesis, i.e., to predict the image that would be obtained from a novel view. However, for an object class, it makes sense to have latent variables \(\mathbf{z}^{s},\mathbf{z}^{t}\) for each object, as is done in CodeNeRF (Jang and Agapito, 2021). Here the neural network is enhanced to map \(F_{\theta}:(\mathbf{x},\mathbf{d},\mathbf{z}^{s},\mathbf{z}^{t})\rightarrow( \mathbf{c},\sigma)\). Now the optimization problem for the network weights \(\theta\) is carried out over all training examples in an object class, and for each set of views of a given example the shape and texture latent variables are estimated. (A regularization penalty proportional to \(|\mathbf{z}^{s}|^{2}+|\mathbf{z}^{t}|^{2}\) is also imposed on the latent variables, corresponding to a Gaussian prior.) The ability of CodeNeRF to generalize to novel shape/texture/pose combinations is illustrated in Fig. 6.
There have been a lot of other recent developments arising from the NeRF work. For example Zhang et al. (2021) extended NeRF to extract a surface (rather than volumetric) representation, and then solve for spatially-varying reflectance and environment lighting. This allows rendering of novel views of the object under arbitrary environment lighting, and editing of the object's material properties.
Other multifactor models.One can sometimes use a _multilinear model_ to handle multiple factors of variation. A multilinear model with two latent factors \(\mathbf{z}^{1}\) and \(\mathbf{z}^{2}\) is given by \(x_{i}=\sum_{j,k}w_{ijk}z_{j}^{1}z_{k}^{2}\), where \(w_{ijk}\) is 3-way tensor of parameters. Tenenbaum and Freeman (2000) use a bilinear model to separate style and content, e.g. of the font (style) of different letters (content). Vasilescu and Terzopoulos (2002) use a 4-factor model to cover different facial geometries (people), expressions, head poses, and lighting conditions.
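The two-factor bilinear model is compact enough to write down directly; the sketch below uses `einsum` with illustrative dimensions and random parameters, just to show how swapping one factor ("style") while holding the other ("content") fixed re-renders the observation.

```python
import numpy as np

# Bilinear ("style x content") model: x_i = sum_{j,k} w_ijk * z1_j * z2_k.
rng = np.random.default_rng(1)
d_x, d_style, d_content = 64, 5, 8                 # illustrative dimensions
W = rng.normal(size=(d_x, d_style, d_content))     # 3-way parameter tensor w_ijk

z_style = rng.normal(size=d_style)                 # e.g. a font
z_content = rng.normal(size=d_content)             # e.g. a letter identity
x = np.einsum("ijk,j,k->i", W, z_style, z_content)

# Swapping in a different style vector re-renders the same content in a new style.
x_new_style = np.einsum("ijk,j,k->i", W, rng.normal(size=d_style), z_content)
```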
Another example of a multifactor model is _transformation invariant clustering_(Frey and Jojic, 2003). They consider the case where there is a discrete factor modelling shape/appearance variation as a mixture model, and a continuous factor arising from "nuisance" translations and rotations of an object in an image.
#### 2.1.1 Parts-based Models
Parts-based models are an old idea in computer vision. For example Fischler and Elschlager (1973) described a model termed _pictorial structures_, where an object is represented by a number of parts
arranged in a deformable configuration. The deformations are represented by spring-like connections between pairs of parts. Biederman (1987) has advocated for a parts-based approach in computer vision, under the name of _Recognition-by-Components_. More recently Felzenszwalb et al. (2009) used discriminatively-trained parts-based models to obtain state-of-the-art results (at the time) for object recognition.
Some advantages of the parts-based approach are described by Ross and Zemel (2006), viz.
* A partially-occluded object can be recognized if some of the parts are present;
* A parts-based approach is a good way to model the variability in highly-articulated objects such as the human body;
* Parts may vary less under a change of pose than the appearance of the whole object;
* Known parts may be recombined in novel ways to model a new object class.
Parts-based models share a similar objective with _perceptual organization_ or _perceptual grouping_ (see, e.g., Palmer 1999, ch. 6) in that they seek to organize parts into a whole, but generally perceptual organization is seen as a generic process, e.g., for grouping edges into contours, or to group similar pixels into regions, rather than exploiting specific knowledge about certain object classes.
Below we give four examples of parts-based models.
Figure 7: Top row: sample training images for Ross and Zemel’s models. Lower rows: parts-based model learned by MCVQ. The plots on left show the masks, the probability with which each pixel selects the given VQ. On the right are the 10 means for each VQ, multiplied by the mask shown on the left. Figures reproduced from Ross and Zemel (2006) with permission of R. S. Zemel.
Example: Parts-based Models of Faces.One common application of parts-based models is to faces. In the UK, the "PhotoFit" system is used by police in the investigation of crimes to produce an image of a suspect with the help of eye witnesses. See, for example, the "PhotoFit Me" work of Prof. Graham Pike.7. This decomposes a face into eyes, nose, mouth, jaw and hair parts.
Footnote 7: See [https://www.open.edu/openlearn/PhotoFitMe](https://www.open.edu/openlearn/PhotoFitMe).
Ross and Zemel (2006) proposed two models to learn such a parts-based decomposition. We describe them in relation to modelling face images.8 The first model, Multiple Cause Vector Quantization (MCVQ), has \(K\) multinomial factors, each selecting the appearance of a given part. The "masks" for each part are learned probabilistically, so that pixel \(j\) is explained by part \(k\) with multinomial probability \(\pi_{jk}\), where \(\sum_{k}\pi_{jk}=1\). The learned model is illustrated in Figure 7. The plots on the bottom left show the mask probabilities, and on the bottom right the 10 means for each VQ are shown, multiplied by the relevant mask. Notice in the bottom left panel how the masks identify regions such as the eyes, nose and chin.
Footnote 8: The data used is aligned with respect to position and scale, so these “nuisance factors” do not need to be modelled during learning.
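To make the generative direction of MCVQ concrete, here is a minimal sketch of sampling an image from an MCVQ-style model: each pixel picks which part explains it with probability \(\pi_{jk}\), and each part picks one of its 10 codebook means. All sizes and parameters are synthetic, and learning (which is the substantive part of Ross and Zemel's work) is not shown.

```python
import numpy as np

# Sampling from an MCVQ-style parts model: pixel j is explained by part k with
# probability pi[j, k]; each part k has a codebook of 10 appearance means.
rng = np.random.default_rng(2)
n_pixels, n_parts, n_codes = 48 * 48, 6, 10

logits = rng.normal(size=(n_pixels, n_parts))
pi = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # mask probabilities
means = rng.uniform(size=(n_parts, n_codes, n_pixels))            # per-part codebooks

def sample_image():
    codes = rng.integers(n_codes, size=n_parts)                   # one codebook entry per part (the VQ choice)
    parts = np.array([rng.choice(n_parts, p=pi[j]) for j in range(n_pixels)])
    return means[parts, codes[parts], np.arange(n_pixels)]        # each pixel copies its part's mean

img = sample_image().reshape(48, 48)
```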
A second model is termed Multiple Cause Factor Analysis (MCFA)--this is similar to MCVQ, but now the appearance of each part is based on a factor analyzer instead of a discrete choice. The mask model is as for MCVQ. Nazabal et al. (2022) used an MCFA model of faces, but added a higher-level factor analysis model to correlate the factor analyzers for each part. The dataset they used (from PhotoFit Me) is balanced by gender (female/male) and by race (Black/Asian/Caucasian), hence the high-level factor analyser can model regularities across the parts, e.g. with respect to skin tone.
The MCVQ and MCFA models use a simple model for the mask probabilities \(\pi_{jk}\). But suppose we are modelling a parts decomposition of side-views of cars, e.g. into wheels, body and windows. The different styles of cars will give rise to quite different mask patterns, and these can be modelled with a latent variable model. For example Eslami and Williams (2011) modelled the mask patterns with exponential family factor analysis, while Eslami and Williams (2012) used a multinomial Shape Boltzmann machine.
Example: Parts-based Model of a Clock. Fig. 8 is reproduced from the work of Zhu and Mumford (2006). The figure describes an AND-OR graph for clocks; for example there is an AND over the hands, frame and numbers components of the clock, but in each there are alternatives as encoded by OR nodes. Particular choices at the OR nodes give rise to a _parse graph_.9 One of the possible parse graphs is illustrated with dark arrows, corresponding to the image at the top. Thus we observe that OR choices are made for the hands (2 not 3), the numbers (Arabic not Roman), and the shape of the frame (circular, and only the outer ring is present).
Footnote 9: See sec. 3.3 for further discussion of grammars and AND-OR graphs.
The AND-OR structure is similar to what we have seen for faces with the MCVQ; there are a number of parts (the AND), and for each there are a discrete set of choices (the OR). However, an AND-OR graph is generally more powerful, as it can have hierarchical structure. For example, in the clock model there are choices for all of the outer, inner, and central rings of the frame component to be present or absent.
Example: Parts-based Models of Articulated Bodies.A classic example of an articulated object is the human body, which can be decomposed into a torso, head, arms and legs. The arms and legs can each be further decomposed; for example an arm is made up of the upper arm, lower arm and the hand, and the hand can be further decomposed into the fingers and thumb. Given the importance
of human avatars in the film and gaming industries, there has been a lot of work on this topic. Here we focus on the Skinned Multi-Person Linear Model (SMPL) of Loper et al. (2015). There are two factors of variation that need to be taken into account. The first is the pose of the body, as defined by the joint angles along the kinematic tree. The second is the specific body shape of a given individual. The model for the body is defined by a mesh with some 6890 vertices in 3D. Each vertex \(i\) is assigned weights \(w_{ki}\) indicating how much part \(k\) affects vertex \(i\).
The body shape is modelled with a linear basis, similar to Blanz and Vetter's model discussed in sec. 2.1. The pose of the body is determined by the axis-angle representation of the relative rotation of a part with respect to its parent in the kinematic tree. One other important part of the model is _pose blend shapes_, which modify the vertex locations depending on the pose, but _before_ the pose transformation is applied. In SMPL, the pose blend shapes are a linear function of the elements of the part rotation matrices. Pose blend shapes are needed to counter the unrealistic deformations at joints that would otherwise arise. After combining the shape and pose blend effects in the neutral pose, the final predicted location of vertex \(i\) is obtained as a weighted sum (with weights \(w_{ki}\)) of the transformation of vertex \(i\) under part \(k\).
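The final skinning step can be sketched in a few lines. The code below is a heavily simplified illustration of linear blend skinning in the spirit of that step, with random stand-ins for the template, blend offsets, skinning weights and per-part transforms; the real SMPL kinematic tree, learned bases and weights are omitted.

```python
import numpy as np

# Simplified linear blend skinning: vertex i is mapped by a weighted combination
# (weights w[k, i]) of the per-part rigid transforms G_k, after shape and pose
# blend offsets have been added to the template in the neutral pose.
rng = np.random.default_rng(3)
n_verts, n_parts = 6890, 24

template = rng.normal(size=(n_verts, 3))
shape_offsets = 0.01 * rng.normal(size=(n_verts, 3))     # from the linear shape basis
pose_offsets = 0.01 * rng.normal(size=(n_verts, 3))      # pose blend shapes
w = rng.dirichlet(np.ones(n_parts), size=n_verts).T      # skinning weights, shape (n_parts, n_verts)

def random_rigid():
    """A random 4x4 rigid transform standing in for a part's posed transform G_k."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    Q[:, 0] *= np.sign(np.linalg.det(Q))                  # ensure a proper rotation
    G = np.eye(4)
    G[:3, :3], G[:3, 3] = Q, 0.05 * rng.normal(size=3)
    return G

G = np.stack([random_rigid() for _ in range(n_parts)])   # (n_parts, 4, 4)

rest = template + shape_offsets + pose_offsets
rest_h = np.concatenate([rest, np.ones((n_verts, 1))], axis=1)      # homogeneous coords
per_part = np.einsum("kab,ib->kia", G, rest_h)                      # vertex under each part's transform
posed = np.einsum("ki,kia->ia", w, per_part)[:, :3]                 # weighted sum over parts
print(posed.shape)   # (6890, 3)
```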
Example: Capsules. Another parts-based model is termed "capsules". This term was introduced in Hinton et al. (2011), with later developments including Sabour et al. (2017), Hinton et al. (2018) and Kosiorek et al. (2019). There is a recent survey paper on capsules by De Sousa Ribeiro et al. (2022). The term "capsule" relates to a visual entity which outputs both the probability that the entity is present, and a set of instantiation parameters for the entity. Although capsules are usually described in an inferential manner, with the flow of information from the parts to the object, below we follow the exposition of Nazabal et al. (2022), who described Generative Capsule Models.
Figure 8: AND-OR graph for the clock object category. The dashed links between the children of an AND node represent relations and constraints. Reproduced from Zhu and Mumford (2006, Fig. 6.1) with permission of now publishers inc. ©2006.
Consider an object template \(T\) which consists of \(N\) parts \(\{\mathbf{p}_{n}\}_{n=1}^{N}\). Each part \(\mathbf{p}_{n}\) is described by its class, pose, shape, and texture. The template \(T\) has an associated latent variable vector \(\mathbf{z}\) which affects the pose, shape, and texture of the parts. For example, for a template in 2D, part of \(\mathbf{z}\) may define the parameters of a similarity transformation (in terms of a translation \(\mathbf{t}\), rotation \(\theta\) and scaling \(s\) of the template). The geometric transformation between the object and the parts can be described by a linear transformation in terms of \(\mathbf{t}\), \(s\cos\theta\) and \(s\sin\theta\). Other parts of \(\mathbf{z}\) can model shape and texture correlations between parts. Methods for matching observed parts to template parts are described in sec. 4.
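The linearity in \((\mathbf{t},\ s\cos\theta,\ s\sin\theta)\) is easy to verify numerically. The sketch below uses made-up 2D part coordinates and checks that the predicted part locations can be written as a fixed design matrix times the parameter vector; it is an illustration of the parameterization, not an implementation of any particular capsule model.

```python
import numpy as np

# Predicted location of part n under a 2D similarity transform is linear in
# z = (t_x, t_y, a, b) with a = s*cos(theta), b = s*sin(theta).
template_parts = np.array([[0.0, 1.0],    # made-up canonical part locations
                           [-1.0, 0.0],
                           [1.0, 0.0]])

def predict_parts(tx, ty, a, b):
    """Part locations in the image frame under the similarity transform."""
    R = np.array([[a, -b], [b, a]])
    return template_parts @ R.T + np.array([tx, ty])

def design_matrix(parts):
    """The same prediction written as a linear map y = F z."""
    F = np.zeros((2 * len(parts), 4))
    for n, (px, py) in enumerate(parts):
        F[2 * n] = [1, 0, px, -py]        # x-coordinate of part n
        F[2 * n + 1] = [0, 1, py, px]     # y-coordinate of part n
    return F

z = np.array([0.5, -0.2, 1.2 * np.cos(0.3), 1.2 * np.sin(0.3)])
assert np.allclose(design_matrix(template_parts) @ z, predict_parts(*z).ravel())
```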
Although we have described here object-parts relationships, capsules can be formed into an hierarchical architecture, allowing e.g., the representation of object inter-relationships.
### Modelling Stuff: Visual Texture
Forsyth and Ponce (2003, p. 164) discuss texture as follows:
Texture is a phenomenon that is widespread, easy to recognise, and hard to define. Typically, whether an effect is referred to as texture or not depends on the scale at which it is viewed. A leaf that occupies most of an image is an object, but the foliage of a tree is a texture. Views of large numbers of small objects are often best thought of as textures. Examples include grass, foliage, brush, pebbles, and hair. Many surfaces are marked with orderly patterns that look like large numbers of small objects. Examples include the spots of animals such as leopards or cheetahs; the stripes of animals such as tigers or zebras; the patterns on bark, wood, and skin. Textures tend to show repetition: (roughly!) the same local patch appears again and again, though it may be distorted by a viewing transformation.
Textures can be classed as regular or stochastic, although there can be gradations, such as near-regular or near-stochastic. Examples of regular textures include brickwork, tiled floor patterns, and wickerwork. Examples of stochastic textures include clouds, wood grain, and foliage. Below we will focus mainly on stochastic textures.
We take the goal of _texture synthesis_ to be the generation of an arbitrarily sized region of a texture, given a (small) training sample. For regular textures it should be possible to extract the repeating element(s) and simply tile the target region appropriately, but this will not work for stochastic textures. Instead we aim to learn a generative model of the texture from the training sample. A common type of model used is an _energy-based model_ (EBM). Let \(\mathbf{x}_{(k)}\) denote a patch of the image centered at location \(k\) (e.g. a square patch). The _field of experts_ (FoE) energy is defined as
\[E_{FoE}(\mathbf{x})=\sum_{k}\sum_{j}\phi_{j}(\mathbf{w}_{j}\cdot\mathbf{x}_{(k )}), \tag{4}\]
where \(\mathbf{w}_{j}\) is a filter the same size as the patch, and \(\phi_{j}()\) is some function. A probability distribution over images is then defined by the Boltzmann distribution, i.e.,
\[p_{FoE}(\mathbf{x})=\frac{1}{Z(W)}\exp(-E_{FoE}(\mathbf{x})). \tag{5}\]
Here \(Z(W)\) is the _partition function_ that serves to normalize \(p_{FoE}(\mathbf{x})\). The term field of experts was introduced in Roth and Black (2005), as a generalization of the product of experts construction due to Hinton (2002), to handle arbitrarily-sized images.
In general it is not easy to draw samples directly from an energy-based model; a standard approach is to construct a Markov chain whose equilibrium distribution is the desired Boltzmann distribution. See e.g., Murphy (2012, ch. 24) for further details.
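As a concrete (and deliberately tiny) illustration, the sketch below evaluates the field-of-experts energy of Eq. (4) with convolutions and runs a few random-walk Metropolis steps of the Boltzmann distribution of Eq. (5). The filters and the penalty \(\phi\) are illustrative, and a practical sampler would need far more iterations and careful tuning.

```python
import numpy as np
from scipy.signal import convolve2d

# Field-of-experts energy (Eq. 4) with illustrative 3x3 filters and a heavy-tailed
# penalty phi, plus a short Metropolis random walk on the Boltzmann distribution (Eq. 5).
rng = np.random.default_rng(4)
filters = [rng.normal(size=(3, 3)) for _ in range(4)]      # the w_j (illustrative)
phi = lambda r: np.log(1.0 + 0.5 * r**2)                   # expert penalty

def energy(x):
    return sum(phi(convolve2d(x, w, mode="valid")).sum() for w in filters)

x = rng.normal(size=(32, 32))
for _ in range(200):                                       # Metropolis random walk
    prop = x + 0.05 * rng.normal(size=x.shape)
    if np.log(rng.uniform()) < energy(x) - energy(prop):   # accept with prob exp(-dE)
        x = prop
```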
One simple choice would be to take the function \(\phi_{j}\) to be a quadratic form in \(\mathbf{x}_{(k)}\). If this is positive definite, \(p_{FoE}(\mathbf{x})\) will be well-defined and normalizable, and will define a Gaussian Markov random field (GMRF), see e.g., Rasmussen and Williams (2006, sec. B.5). Stationary GMRFs on a regular grid can be analyzed via Fourier analysis (Rozanov, 1977) in terms of the power spectrum. However, it is well known that image models based simply on the power spectrum (or equivalently, on second-order statistics, via the Wiener-Khinchine theorem) are inadequate. For example, Fig. 2 in Galerne et al. (2011) shows a section of a tiled roof. Randomizing the phase of its Fourier transform, while maintaining the power spectrum, leads to a blurry image, as the phase alignment needed to create sharp edges no longer occurs.
A Gaussian random field model can also be obtained via the _maximum entropy principle_ (see e.g., Cover and Thomas 1991, ch. 12) on the basis of second order (covariance) constraints on the distribution. But an alternative is to consider a set of filters, and impose the constraint that the maximum entropy model matches the observed histogram for each filter. This gives rise to the FRAME (Filters, Random field, and Maximum Entropy) model of Zhu et al. (1998). Roth and Black (2005) discuss the FRAME model, and note that the approach is complicated by its use of discrete filter histograms. Instead they propose the field of experts model.
Kivinen and Williams (2012) defined a different energy based model, with the energy function
\[E_{Tm}(\mathbf{x})=\frac{1}{2\sigma^{2}}(\mathbf{x}-\mathbf{a})^{T}(\mathbf{x}-\mathbf{a})-\sum_{k,j}\log[1+\exp(b_{j}+\sigma^{-1}(\mathbf{w}_{j}\cdot\mathbf{x}_{(k)}))]. \tag{6}\]
This was inspired by the convolutional restricted Boltzmann machine of Lee et al. (2009), but instead of convolution uses _tiled convolution_ as described in Ranzato et al. (2010), who argue that convolutional weight sharing creates problems due to nearby latent variables of the same filter being highly correlated. In the tiled convolutional strategy each filter tiles the image with copies of itself without overlaps (i.e. the stride equals the filter diameter). But different filters do overlap with each other, in order to avoid tiling artifacts. The energy function \(E_{Tm}\) consists of two terms. The first component corresponds to a simple spherical Gaussian with mean \(\mathbf{a}\). The second is obtained by integrating out the hidden units (as given in eq. 1 of Kivinen and Williams 2012) of the restricted Boltzmann machine
analytically. The label "Tm" is given to this model in order to denote that it uses tiled convolution, and that the means, rather than the covariances, are modelled (by the \(\{\mathbf{w}_{j}\}\)s and \(\mathbf{a}\)).
Figure 9: The top row shows two sample patches from each of three textures (labelled D4, D68 and D103). The bottom row shows two samples drawn from a Tm model trained on the specific textures. Figure from Kivinen and Williams (2012, Fig. 4).
Kivinen and Williams (2012) showed that to model multiple textures one can keep a fixed set of filters, but adjust the biases on a per-texture basis--they called this the "Multi-Tm" model. Fig. 9 shows results for three different textures, with two images of both the raw data and samples. The raw data is obtained as \(98\times 98\) patches cropped from the Brodatz texture album (Brodatz, 1966).
An alternative to energy based models is an auto-regressive model, where the predicted pixel value \(x_{ij}\) at location \((i,j)\) depends on a vector of context, typically to the left and above if working in raster scan order. Efros and Leung (1999) used this approach to carry out _exemplar-based texture synthesis_, starting with a source training sample of texture. For a target location \((i,j)\), one identifies neighbourhoods in the source texture that are similar to the current context region, and then selects one of these regions at random (depending on the level of agreement with the context region). For filling in holes an "onion peeling" strategy of scanning round the periphery can be used rather than raster scan order. Note that this is a non-parametric modelling approach--rather than constructing a parameterized model for \(p(x_{ij}|\mathrm{context})\), one extracts relevant regions from the source texture.
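The sketch below is a stripped-down, per-pixel synthesizer in the spirit of Efros and Leung: each new pixel is chosen by matching its causal (above/left) neighbourhood against all windows of the source. The source here is just random noise, the seed rows are copied directly, the tolerance and neighbourhood size are illustrative, and boundary handling is cruder than in the original method.

```python
import numpy as np

rng = np.random.default_rng(5)
source = rng.uniform(size=(24, 24))        # stand-in for a training texture sample
half = 2                                   # neighbourhood radius

# Causal mask: pixels above, plus pixels to the left on the current row.
mask = np.zeros((2 * half + 1, 2 * half + 1), dtype=bool)
mask[:half, :] = True
mask[half, :half] = True

out = np.zeros((20, 20))
out[:half + 1, :] = source[:half + 1, :out.shape[1]]     # seed rows copied from the source

for i in range(half + 1, out.shape[0]):
    for j in range(half, out.shape[1] - half):
        context = out[i - half:i + half + 1, j - half:j + half + 1][mask]
        candidates, best_err = [], np.inf
        for si in range(half, source.shape[0] - half):
            for sj in range(half, source.shape[1] - half):
                patch = source[si - half:si + half + 1, sj - half:sj + half + 1][mask]
                err = np.sum((patch - context) ** 2)
                candidates.append((err, source[si, sj]))
                best_err = min(best_err, err)
        # Sample uniformly among candidates within 10% of the best match.
        close = [v for e, v in candidates if e <= 1.1 * best_err + 1e-12]
        out[i, j] = close[rng.integers(len(close))]
```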
Auto-regressive models do not have to be exemplar-based. PixelCNN (van den Oord et al., 2016) and PixelCNN++ (Salimans et al., 2017) are prominent recent auto-regressive models which use convolutional layers for feature extraction, along with masking to ensure that only valid context regions are accessed, in order to predict \(p(x_{ij}|\mathrm{context})\).
Above we have discussed the generation of flat textures on a 2D plane. Textures can be applied (mapped) to a surface; this is known as _texture mapping_ (see, e.g., Szeliski 2021).
## 3 Models of Scenes
A very simple model of scenes is one which randomly selects objects and puts them into a scene. This might be summarized as "the independent components of images are objects". One example of this is the 2D sprites work of Williams and Titsias (2004) illustrated in Fig. 2. Here the model has learned about the background and the two people, but a priori it would place the people at random locations in the image. A more recent example is IODINE (short for Iterative Object Decomposition Inference NEtwork) due to Greff et al. (2019). This uses an "object-centric" representation, consisting of \(K\) vectors of latent variables \(\mathbf{z}_{1},\ldots,\mathbf{z}_{K}\), one for each object. IODINE was demonstrated on 2D sprites data, and also on images of 3D scenes from the CLEVR dataset, which consists of geometric objects like spheres and cubes in random locations with random material properties. The \(\mathbf{z}_{k}\)s for each object in IODINE were not factored into shape, texture and pose components (as discussed in sec. 2.1), but were a single vector that entangled these factors.
However, in the same way that sentences are not random sequences of words,10 visual scenes are not composed of random collections of objects--there are co-occurrences of objects, and relationships between them. For example, there are correlations between the scene type (e.g. kitchen, living room, urban street, rural field) and the kinds of objects observed. Also, there are stuff-stuff, things-stuff, and thing-thing interactions that occur between objects in the scene (see e.g. Heitz and Koller 2008). Examples of things-stuff interactions are that cars are (usually) found on roads, or cows on grass. An example of thing-thing interactions is that dining chairs are likely to be grouped around a dining table. As another example of thing-thing interactions, one might consider adding details to a coarse scene layout, e.g. by adding tablemats, cutlery and crockery to the dining table.11 The reader will observe
that there are similarities between parts-based models described in sec. 2.1.1, and scene models. However, parts-based models are often more constrained, with a fixed number of parts, while scene models can have a variable number of objects and looser relationships. Scene relationships can also be longer-range--a classic example is the relationship between a TV and a sofa for viewing it, which need to be a comfortable distance apart.
Below we focus particularly on models for indoor scenes, reflecting the focus in the research literature. But there are also outdoor scenes, in both rural and urban environments. In scenes of mountains or coastlines, it may be most natural to consider the carving of the landscape by erosion, e.g. by river valleys. In farmland there will be human-made field boundaries (constrained by the landscape), along with crops or livestock. In urban environments, one might use grammar-type models to generate building facades, and then "decorate" the street architecture with other objects such as people, cars and street furniture.
Below we describe autoregressive, energy based and hierarchical models, which we cover in turn. Note that autoregressive and energy based models are not latent variable models, while hierarchical models are. In this section it is assumed that 3D data is available, e.g. 3D oriented bounding boxes (OBBs) with class labels.
### Autoregressive Models
We take as an example the work of Ritchie et al. (2019), who describe a process where objects are added to a room layout one at a time, until a decision is made to stop. The model first extracts a top-view floor plan of the room (to define the valid region to place objects). It then feeds the floor plan to a sequence of four modules that (i) decide which object (if any) to add, (ii) specify where the object should be located, (iii) its orientation, and (iv) its physical dimensions. Once an object has been added the floor plan representation is updated to include the object, before the next calls to steps (i) to (iv). These modules are implemented with convolutional neural networks.
Let the \(m\) ordered objects be denoted \(\mathbf{x}_{1},\ \mathbf{x}_{2},\ldots,\mathbf{x}_{m}\). Each \(\mathbf{x}_{i}\) is comprised of an object class label, pose features (location and orientation), shape features and texture features12, so that \(\mathbf{x}_{i}=(\mathbf{x}_{i}^{c},\mathbf{x}_{i}^{p},\mathbf{x}_{i}^{s}, \mathbf{x}_{i}^{t})\). Then under an autoregressive model we have that
Footnote 12: Ritchie et al. (2019) do not use texture features, but in general they could be present.
\[p(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{m})=p(\mathbf{x}_{1})\prod_ {i=2}^{m}p(\mathbf{x}_{i}|\mathbf{x}_{<i}), \tag{7}\]
where \(\mathbf{x}_{<i}\) denotes the sequence \(\mathbf{x}_{1},\ldots,\mathbf{x}_{i-1}\). This is suitable for generating scenes from the model. It can also be used to compute \(p(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{m})\) if an ordering of the objects is given. However, when we observe an _unordered_ set of objects \(X=\{\mathbf{x}_{i}\}_{i=1}^{m}\), we should sum over all possible permutations, so that
\[p(X)=\sum_{\pi\in\Pi}p(\pi)p(\mathbf{x}_{\pi(1)})\prod_{i=2}^{m}p(\mathbf{x}_{ \pi(i)}|\mathbf{x}_{\pi(<i)}), \tag{8}\]
where \(\pi\) denotes a particular permutation, and \(\Pi\) is the set of all permutations. The prior over permutations \(p(\pi)\) can be taken as uniform, i.e. \(1/(m!)\). In fact Ritchie et al. (2019) do use an ordering of objects for training the object class label module, based on a measure of the importance of the class. This depends on the average size of a category multiplied by its frequency of occurrence. This means that large objects like a bed will occur first in a bedroom scene, with other objects fitting in around it.
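A toy illustration of Eq. (8) is given below: the likelihood of an unordered set is the average over all orderings of the chain-rule likelihood. The conditional `log_p_cond` is a made-up stand-in for a learned model, and the exact sum is only feasible for tiny \(m\) because of the \(m!\) term.

```python
import numpy as np
from itertools import permutations
from math import factorial

def log_p_cond(x_i, context):
    """Made-up stand-in for a learned conditional p(x_i | x_<i)."""
    mean = 0.5 * np.mean(context) if len(context) else 0.0
    return -0.5 * (x_i - mean) ** 2 - 0.5 * np.log(2 * np.pi)

def log_p_set(X):
    """Eq. (8): sum over all orderings, with a uniform prior p(pi) = 1/m!."""
    logps = []
    for pi in permutations(range(len(X))):
        logps.append(sum(log_p_cond(X[i], [X[j] for j in pi[:k]])
                         for k, i in enumerate(pi)))
    return np.logaddexp.reduce(logps) - np.log(factorial(len(X)))

print(log_p_set([0.3, -1.2, 0.7]))
```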
The model of Ritchie et al. (2019) is relatively simple, but the use of the floor plan representation with the added objects means that the chain rule of probabilities can be used readily to model the context, without simplifications such as a Markov model.13 One can ask whether the autoregressive process really makes sense as a generative model of scenes. It is not unreasonable that large objects (such as the bed in a bedroom scene) should be added first. But it seems rather unlikely that people furnish bedrooms using a fixed ordering of all the object classes.
Footnote 13: Although note that the floor plan representation discards the order in which objects were added.
Note also the autoregressive model does not make explicit groupings of objects that co-occur, such as a bed and nightstand, or a tv-sofa combination arranged for convenient viewing. In contrast, hierarchical models (see below) should be able to pick out such structure.
Building on the work of Ritchie et al. (2019), Paschalidou et al. (2021) develop an autoregressive model using transformers (dubbed ATISS). They demonstrate some advantages of ATISS over the earlier work, e.g. in relation to scene completion, especially when objects that come early in the sequence (e.g., beds for a bedroom) are omitted. However, in the formulation of ATISS (their eq. 3), a valid sum over permutations of the sequence as in eq. 8 is replaced by a product over permutations. The motivation is to maximize the probability of generation over all possible orderings, rather than having at least one with high probability. But taking a product over sequences is not a valid probability calculation; one possible way to make sense of this is as a product-of-experts (PoE) construction (Hinton, 2002), although in this case a partition function \(Z\) should be introduced, which would complicate the learning.
### Energy-based Models
Suppose that we have generated a number of objects to go in a room. We then need to arrange them in order to obey a number of spatial, functional and semantic relationships or constraints. The basic idea is to define an energy function that measures the fit of the configuration to these constraints, and then as per eq. 5 to define a probability distribution using the Boltzmann distribution. Yu et al. (2011) used such an approach to automatically optimize furniture arrangements, using simulated annealing to search in the configuration space.
The method of Yu et al. (2011) is defined for a fixed set of objects. Yeh et al. (2012) extended this idea to allow an "open world", where the number of objects is variable. They used a reversible jump Markov chain Monte Carlo (MCMC) method to sample from this space.
A problem with energy based models is that one generally needs to run an MCMC chain for many iterations in order to draw samples from the equilibrium distribution. This limitation applies to the above methods, meaning EBMs will not scale well to larger scenes.
### Hierarchical Models
A natural way to obtain an hierarchical model is via a **grammar-based approach**. The idea of using grammars for pattern analysis is an old one, see for example the work of K. S. Fu (1982) on syntactic pattern recognition. In relation to scene understanding, perhaps the most notable work is from Song-Chun Zhu and collaborators. The long paper by Zhu and Mumford (2006) entitled "A Stochastic Grammar of Images" is a key reference. A parts-based model of clocks due to Zhu and Mumford (2006) is discussed above in sec. 2.1.1.
A context-free grammar (CFG) is defined in terms of terminal symbols, non-terminal symbols, a start symbol, and production rules. Inspired by Liu et al. (2014), we take bedroom scenes as an
example domain. Here we will have terminal symbols for observed objects such as mattress, nightstand, pillow etc. As above, the full description will involve not only the class label, but also pose, shape and texture information. Non-terminal symbols here would correspond to groupings of objects; for example we may have \(\mathrm{sleeping-area}\rightarrow\mathrm{bed\ nightstand-group}\). There may typically be one or two nightstands (usually depending on whether the bed is single or double), so that the productions \(\mathrm{nightstand-group}\rightarrow\mathrm{nightstand}\) and \(\mathrm{nightstand-group}\rightarrow\mathrm{nightstand\ nightstand}\) are both valid. An example grammar for bedroom scenes from Liu et al. (2014) is shown in Figure 10.
A probabilistic context-free grammar (PCFG) adds a probability distribution over the productions which share the same non-terminal on the LHS of the rules. Charniak (1993, ch. 5) provides a good overview of CFGs and PCFGs. A CFG is often described in terms of a set of production rules, but it can also be described by an AND-OR graph; Hall (1973) showed the equivalence between the two. Here, the OR occurs over productions that share the same non-terminal on the LHS; the AND occurs with productions with more than one symbol on the RHS, as all of these symbols must be generated. As Zhu and Mumford (2006, sec. 6.1) state, an "AND-OR graph embeds the whole image grammar and contains all the valid parse graphs".
We have seen that the terminal symbols include class label, pose, shape and texture information in general. This means that the non-terminals which govern a set of terminals will need to include latent variables which can generate the appropriate correlations between them. For example the production \(\mathrm{sleeping-area}\rightarrow\mathrm{bed\ nightstand-group}\) will need to specify the size of the bed (single or double), and this information will also need to be passed to the \(\mathrm{nightstand-group}\) variable, so that it can determine whether to generate one or two nightstands (and their appropriate location(s)). In the CFG, pose and size information can be defined relative to higher level variables. So, for example, if the size and location of the \(\mathrm{sleeping-area}\) has been defined, it makes sense to locate the bed relative to this. A possible alternative to encoding this information in the latent variables is to add horizontal relational structures that encode contextual information or constraints between nodes, as proposed in Zhu and Mumford (2006, ch. 4) and Jin and Geman (2006).
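To make the PCFG sampling process concrete, here is a minimal sketch with a made-up bedroom grammar in the spirit of Figure 10; the rules and probabilities are invented for illustration, and the pose/shape/texture attributes (and the correlations between them discussed above) are omitted.

```python
import random

# A made-up PCFG for bedroom scenes: each non-terminal maps to a list of
# (probability, right-hand side) pairs; lower-case symbols are terminals.
RULES = {
    "scene":            [(1.0, ["sleeping-area", "storage-area"])],
    "sleeping-area":    [(1.0, ["bed", "nightstand-group"])],
    "nightstand-group": [(0.4, ["nightstand"]), (0.6, ["nightstand", "nightstand"])],
    "storage-area":     [(0.7, ["wardrobe"]), (0.3, ["wardrobe", "dresser"])],
}

def sample(symbol, rng=random.Random(0)):
    if symbol not in RULES:                           # terminal: an observed object class
        return [symbol]
    probs, rhss = zip(*RULES[symbol])
    rhs = rng.choices(rhss, weights=probs)[0]         # OR choice over productions
    return [t for s in rhs for t in sample(s, rng)]   # AND: expand every RHS symbol

print(sample("scene"))   # e.g. ['bed', 'nightstand', 'nightstand', 'wardrobe']
```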
A problem with using CFGs to model visual scenes is that it can be hard to learn them from data. Even in natural language processing (NLP), most successful grammar learning uses annotated "treebank" data, rather than unannotated sentences. Similarly, annotated data has been used to learn some hierarchical models for images, as in Yao et al. (2007) and Liu et al. (2014). But recently for NLP data, Kim et al. (2019) found that they could add a sentence-level continuous vector latent
variable in addition to the PCFG structure (to produce a "compound" PCFG) yielding state-of-the-art results for parsing tasks.
Figure 10: A hierarchical grammar for bedroom scenes, redrawn from Liu et al. (2014).
While grammars are one way to generate hierarchical structure, they are not the only way. Instead of discrete-valued non-terminals, one can use a continuous-valued latent vector in a node. Consider starting with a single latent vector \(\mathbf{z}\). The first binary split can be generated as
\[[\mathbf{z}_{1},\mathbf{z}_{2}]=\mathbf{f}(\mathbf{z}), \tag{9}\]
where \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\) are the latent vectors of the two children, and \(\mathbf{f}\) is a non-linear vector-valued function, which could be as simple as \([\tanh(W_{1}\mathbf{z}+\mathbf{b}_{1}),\tanh(W_{2}\mathbf{z}+\mathbf{b}_{2})]\) or a deeper neural network. For each generated latent vector, a binary node classifier is applied to decide whether it is a terminal (generating an object), or a non-terminal which can be further split. The advantage of the continuous state is that it is well suited to express pose, shape and texture variation. This kind of generative structure is a simplified version of the GRAINS model of Li et al. (2019) which is described below. Note that the hierarchical MCFA model for faces of Nazabal et al. (2022) (described in sec. 2.1.1) is a simple one-layer model of this type, but with \(K\) child nodes rather than just two, and without the \(\tanh\) nonlinearities.
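A minimal sketch of this recursive splitting is given below. The weights are random rather than learned, and the node classifier is a random stand-in (with a depth cap to guarantee termination); decoding the leaf codes into class/pose/shape attributes is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
D = 16                                                     # latent dimensionality

# Split function f(z) -> [z1, z2] of Eq. (9), with randomly initialized weights.
W1, b1 = rng.normal(size=(D, D)) / np.sqrt(D), rng.normal(size=D)
W2, b2 = rng.normal(size=(D, D)) / np.sqrt(D), rng.normal(size=D)
split = lambda z: (np.tanh(W1 @ z + b1), np.tanh(W2 @ z + b2))

def is_terminal(z, depth):
    """Stand-in node classifier; in a trained model this is a learned network."""
    return depth >= 3 or rng.uniform() < 0.3

def decode(z, depth=0):
    """Decode a latent vector into a tree whose leaves are object codes."""
    if is_terminal(z, depth):
        return {"object_code": z}
    left, right = split(z)
    return {"children": [decode(left, depth + 1), decode(right, depth + 1)]}

tree = decode(rng.normal(size=D))
```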
Li et al. (2019) developed the Generative Recursive Autoencoders for INdoor Scenes (GRAINS) model. This takes as input an unstructured set of objects in 3D and computes a latent representation \(\mathbf{z}\), which is decoded to yield a hierarchical structure of objects which can then be rendered. In more detail, the GRAINS model decodes the latent vector \(\mathbf{z}\) into five variables representing the floor and four walls of a room. Each of these is either a terminal (corresponding to an object), or a non-terminal, as determined by a node classifier neural network. There are a number of non-terminals, corresponding to support, surround and co-occurrence relationships. If the model were a grammar the encoder network should sum over all parses of the data, but instead a tree structure is built heuristically bottom up, and that is used in the encoder to output \(\mathbf{z}\).
## 4 Inference
In this section the task is inference from a single RGB or RGBD image, or from a set of images with known viewpoints (multi-view data), assuming that the object and relational models have already been learned. Thus we are interested in \(p(\mathbf{z}|\mathbf{x})\).
We first cover inference for objects and global variables, and then inference with scene models. Finally we discuss inference in the presence of model deficiency, where an input image cannot be reconstructed exactly.
Inference for Objects and Global Variables. The basic inference task is to (i) detect each object, and to determine its class label, shape, texture and pose information, (ii) detect and characterize regions of stuff, and (iii) determine the lighting and camera global latent variables. The inference problem for a single object is already daunting if we consider a discretization of the values for each of the latent variables and search over these (see e.g. Yuille and Liu 2021 sec. 7.1), and there is a combinatorial explosion when considering multiple objects and the global variables. The most direct way to address the combinatorial explosion is to _approximate_ the search. One such method is Markov chain Monte Carlo (MCMC), where the goal is to construct a Markov chain whose equilibrium distribution samples from the desired posterior over the latent variables. See e.g., Murphy (2012, chapter 24) for a discussion of MCMC. Often the proposed moves in the state-space are generic and do not exploit the structure of the problem, leading to slow mixing. An example of a more
advanced approach is Data-driven Markov chain Monte Carlo (DDMCMC, Tu and Zhu 2002), which allows proposals such as the candidate set of object detections to be incorporated into a valid MCMC algorithm.
An alternative approximate approach is to use _variational inference_, see e.g. Jordan et al. (1999), where the goal is to approximate the posterior \(p(\mathbf{z}|\mathbf{x})\) with a variational distribution \(q(\mathbf{z})\). The variational autoencoder (Kingma and Welling, 2014; Rezende et al., 2014) uses _amortized_ variational inference to predict a distribution over the latent variables given input data. This is achieved with an "encoder" network (a.k.a. a "recognition model", see Dayan et al. 1995). Such amortized inference is relatively straightforward if there is one object of interest in the image, but is more complex if there are multiple objects, due e.g. to permutation symmetries. In IODINE (Greff et al., 2019) the feed-forward predictions from the image for the object latent variables are then refined iteratively to take into account effects such as explaining away. In the slot-attention model of Locatello et al. (2020), an iterative attention mechanism is used in the mapping from the inputs to the latent variables, so that they _compete_ to explain the objects in the image. The attend, infer, repeat (AIR) model of Eslami et al. (2016) takes an alternative, sequential approach, identifying one object at a time.
MCMC and variational inference (VI) methods can be used to express the uncertainty in the latent representation \(\mathbf{z}\) given the data \(\mathbf{x}\). This may arise, e.g. due to (partial) occlusions, and can give rise to a multi-modal posterior, corresponding to different interpretations or "parses" of the input image. A limitation of VI methods is that they may not capture this multi-modality well if the assumed form of \(q(\mathbf{z})\) is unimodal. As explained in Murphy (2012, sec. 21.2.2), when the variational distribution is unimodal, then the Kullback-Leibler divergence \(KL(q||p)\) used in VI tends to pick out one mode of the posterior, and thus under-estimate the uncertainty. MCMC methods can in principle sample from a multi-modal posterior, but in practice can get trapped within one mode, unless special measures such as annealing (see e.g., Murphy 2012, sec. 24.6) are used. One way to reduce the uncertainty in \(q(\mathbf{z})\) is to fuse information from multiple views, as shown in Li et al. (2020).
The above models such as IODINE and AIR were trained unsupervised. But the development of object detectors over the last two decades means that one can train object detectors for known classes such as cars, pineapples etc. The output may be a bounding box (BBox), or possibly a region-of-interest. Such an approach was used by Izadinia et al. (2017) in their IM2CAD system, which takes as input an image of a room. It estimates the room geometry (as a 3D box), detects objects, predicts the associated latent variables of each object, places the objects in the scene, and then optimizes their placement via rendering the scene and comparing it to the input image.
If the input data is a set of multi-view images, then geometric computer vision techniques can be brought to bear. Simultaneous localization and mapping (SLAM) or structure-from-motion (SfM) can be used to estimate camera poses and a sparse point cloud representation, as in COLMAP (Schonberger and Frahm, 2016). By themselves, such techniques do not segment the data into objects, but they can be combined with object-detection bounding boxes and masks predicted from each image to produce a 3D bounding box, as in Runz et al. (2020, sec. 5.1). Once the objects are segmented, their shape, texture and pose latent variables can be estimated, see e.g., Runz et al. (2020, secs. 5.2-5.3).
One important recent development has been _differentiable rendering_, which allows optimization of the estimated latent state, in order to improve the fit between rendered scene and the input image(s). An early differentiable renderer was _OpenDR_ due to Loper and Black (2014). Kato et al. (2020) provide a survey of various methods that have been proposed for mesh, voxel, point cloud and implicit surface representations.
Inference with Scene Models.Above we have discussed instantiating objects and the camera/lighting global variables; we now consider inference for _relational_ structures. For example, these may be represented as a parse graph that needs to be inferred on the fly.
One of the major issues is to match a set of object and part detections to a scene structure like an AND-OR graph. As a simple example, consider a capsules model for an object, as discussed in sec. 2.1.1. This is comprised of a number of parts that lie in a certain geometric relationship to each other. (Similar arguments also apply to a grouping of objects in a scene, such as a computer-monitor-keyboard, but here we will use the terminology of an object and its parts.)
Now consider that we have detected a set of parts \(\{\mathbf{x}_{m}\}_{m=1}^{M}\), and the task is to match these to a set of templates \(\{T_{k}\}_{k=1}^{K}\). Let \(w_{mnk}\in\{0,1\}\) indicate whether observation \(\mathbf{x}_{m}\) is matched to part \(n\) of template \(k\). The \(w\)'s can be considered as a binary matrix \(W\) indexed by \(m\) and \(n,k\). One way to frame this assignment problem is to consider valid _matchings_ between observed and template parts, as defined by a permutation matrix. If \(M\) is not equal to the total number of model parts \(N=\sum_{k=1}^{K}N_{k}\), then dummy rows or columns can be added to make the problem square, e.g., in case some parts are missing. As the exact computation would require considering all possible permutations which scales exponentially with \(M\), Nazabal et al. (2022) consider variational inference for the match variables \(W\) and the latent variables \(\mathbf{z}_{k}\) for each template. This implements a _routing-by-agreement_ procedure, as discussed by Sabour et al. (2017) and Hinton et al. (2018), but derived from the variational inference equations rather than as an _ad hoc_ objective. An alternative to routing-by-agreement is to use a random sample consensus approach (RANSAC, Fischler and Bolles 1981), where a minimal number of parts are used in order to instantiate an object template, which is then verified by finding the remaining parts in the predicted locations. Kosiorek et al. (2019) use another inference approach, where an autoencoder architecture predicts the LVs for each template, making use of a Set Transformer architecture (Lee et al., 2019) to handle the arbitrary number of observations \(M\).
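As a simpler point of comparison (not the variational routing just described), a hard assignment of observations to the parts of a single template can be computed with the Hungarian algorithm on pairwise distances, with dummy columns absorbing unmatched observations; the template coordinates and dummy cost below are made up.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hard matching of M observed parts to the N parts of one template, using the
# Hungarian algorithm on squared distances; dummy columns handle non-matches.
rng = np.random.default_rng(7)
template_parts = np.array([[0.0, 1.0], [-1.0, 0.0], [1.0, 0.0]])      # N = 3, made up
observed = template_parts[[2, 0]] + 0.05 * rng.normal(size=(2, 2))    # M = 2 noisy detections

M, N = len(observed), len(template_parts)
dummy_cost = 4.0
cost = np.full((M, N + M), dummy_cost)                                # pad with dummy columns
cost[:, :N] = ((observed[:, None, :] - template_parts[None, :, :]) ** 2).sum(-1)

rows, cols = linear_sum_assignment(cost)
for m, n in zip(rows, cols):
    print(f"observation {m} -> " + (f"template part {n}" if n < N else "unmatched"))
```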
If the hierarchical structure has been defined in terms of a recursive neural network, as in GRAINS (Li et al., 2019), then it is possible to build an encoder to predict the scene latent variable \(\mathbf{z}\), which can then be decoded to produce the hierarchical structure. This was successful in GRAINS, but note that there the input is a set of segmented 3D objects, as opposed to an unsegmented image.
Although it is natural to think of inferential information flows in the hierarchical model being bottom-up, from parts to objects to scenes, it does not have to happen this way. The overall scene type may be well-characterized by a global scene descriptor like the _gist_(Oliva and Torralba, 2006), and this will create, e.g., top-down expectations for certain object classes, and not others. For example, Torralba et al. (2010) made use of the gist to predict the scene type (such as beach scene, street scene, living room). This then made useful predictions for the vertical location in the image for objects of a certain class (e.g. cars in street scenes)14. This indicates more generally that information from the image(s) may flow in top-down as well as bottom-up fashion for inference in the scene model.
Footnote 14: In the scene types considered, the horizontal location of objects was usually not well constrained by the scene type.
As Zhu and Huang (2021, sec. 1.3.4) note, vision is driven by a large number of tasks, and "Each of these tasks utilizes only a small portion of the parse graph, which is constructed in real-time in a task-driven way based on a hierarchy of relevant tasks". Thus it may be useful to consider "lazy inference"15 of the full parse graph.
Footnote 15: Thanks to Kevin Murphy for suggesting this term.
Model deficiency:Quoting George Box, we can say that "all models are wrong, but some are useful" (Box and Draper, 1987, p. 424). There is a great richness and detail in many visual scenes,
and it may not be possible (or perhaps even desirable) to model this fully. However, if we cannot get zero error at the pixels, to what extent can a generative model be said to have fully explained the image? The issue here is that there can be many ways in which an aggregate measure of error (such as mean squared error, MSE) can arise: (i) the pose of an object could be slightly off, but the appearance and shape are correct, leading to a "halo" of errors around the boundaries of the object; (ii) the pose and shape may be correct, but the object's texture does not match the palette of known textures, and thus there is a high-frequency pattern of errors within the object's extent; (iii) an object's shape is incorrect (either due to failures in inference or modelling), but the pose and texture are correct; or (iv) there may have been a false positive or false negative detection of a small object, again leading to the same MSE.
A natural way to tackle this problem is to compare the input and predicted images, along with the predictions for object extent etc. To my knowledge there has only been a little prior work on prediction of the quality of the outputs of a vision algorithm when ground truth is not available. Jammalamadaka et al. (2012) discuss _evaluator algorithms_ to predict if the output of a human pose estimation algorithm is correct or not, and Xia et al. (2020) have looked at predicting failures in semantic segmentation. In the reconstructive framework the goodness-of-fit of the geometric variables (camera parameters, object pose and shape) can be measured by the intersection-over-union (IoU) of the predicted and ground-truth object masks (as used e.g. in Romaszko et al. 2020). Thus this IoU measure could be predicted by an evaluator algorithm. Errors in these variables will have consequent effects on the pixel errors. But if the IoU is satisfactory, then object-level pixel errors will likely be due to errors in the texture or lighting variables. Assessing the significance of such errors with an evaluator algorithm will require annotations of the severity and types of the errors made.
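For completeness, the mask IoU measure is simple to compute; the sketch below uses toy boolean masks.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two boolean object masks of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0   # both empty: define IoU = 1

# Toy example: two overlapping square masks on a 10x10 grid.
pred = np.zeros((10, 10), dtype=bool); pred[2:7, 2:7] = True
gt = np.zeros((10, 10), dtype=bool);   gt[4:9, 4:9] = True
print(mask_iou(pred, gt))   # 9 / 41, approximately 0.22
```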
It may be thought that algorithms that provide estimates of the _uncertainty_ in their predictions help to address the issue of the assessment of vision algorithms, and this is partially correct. However, a limitation of probabilistic model uncertainty is that it is assessed _relative_ to a model (or a fixed set of models). But if, for example, the model's palette of textures does not match with that in an input image, then the model's posterior uncertainty measures will not characterize the true situation well. This deficiency is known as the "open world" (M-open) situation, in contrast to M-closed, as discussed e.g. by Bernardo and Smith (1994, SS6.1.2). One approach to addressing this is via _model criticism_, as discussed e.g., by Seth et al. (2018).
Given the complexity of visual scenes, it may not be possible to model all the detail in a scene at the pixel level. Instead, one can build a generative model of the spatial layout of _image features_, as computed e.g. by a DNN. Such an approach is used, for example, in the neural mesh model (NeMo) of Wang et al. (2021). While it is more interpretable to reconstruct the input image than a feature-based representation, the latter may make modelling easier. The experimental results for the NeMo generative model show that it is much more robust to out-of-distribution (OOD) tasks like recognition under partial occlusion and prediction of previously unseen poses than standard feedforward DNNs.
## 5 Advancing the SGM agenda
Above I have laid out the key topics of modelling objects and scenes, and carrying out inference in these models. Below I comment on the state-of-the-art, and issues around datasets etc.
Modelling objects:The state-of-the-art seems to be at a good level. In terms of data, the Amazon Berkeley Objects (ABO) dataset of Collins et al. (2022) is a recent example of a reasonably large
collection of models (some 8,000 objects) with complex geometries and high-resolution, physically-based materials that allow for photorealistic rendering.
Modelling scenes:Compared to objects, the state-of-the-art is less advanced. Much of the focus has been on indoor scenes, leaving the modelling of outdoor scenes in urban and rural environments less explored. The task of modelling scenes is also more difficult than for objects, involving variable numbers of objects and types, and spatial, functional and semantic relationships between them. There is still much to be done here, e.g., for automatic discovery of hierarchical structure such as scene-type, objects and parts from data.
One issue here is data. Progress in 2D image recognition has been driven by large-scale datasets. Currently there are very few large 3D datasets available, and none on the scale of, say, the ImageNet which contains over one million images. To my knowledge the 3D-FRONT dataset (Fu et al., 2021) is the largest collection of indoor scenes, with almost 19,000 rooms. (It consists of synthetic 3D indoor scenes with professionally designed layouts.) Of course the effort needed to obtain and annotate a 3D scene is much greater than simply annotating bounding boxes in images. But are there reasons to believe that we should need fewer examples for learning from 3D data? Computer graphics can, of course, provide a rich source of 3D ground truth and annotations (see e.g. the CLEVR dataset of Johnson et al. 2017), and can provide a controlled means to test compositional generalization (see their sec. 4.7). However, this requires good object and scene models to generate realistic scene layouts, and also there are issues on how well models trained on such rendered data will transfer to real scenes.
While collecting 3D datasets is challenging, it is also possible to collect _multi-view data_, where there are multiple images of (parts of) a given scene, with known camera parameters (intrinsic and extrinsic). See, e.g., the active vision dataset from UNC16 and the Aria dataset17 from Meta. SGMs can be tested on multi-view data by predicting what will be observed from a novel test viewpoint.
Footnote 16: [https://www.cs.unc.edu/~ammirato/active_vision_dataset_website/](https://www.cs.unc.edu/~ammirato/active_vision_dataset_website/).
Footnote 17: [https://about.meta.com/realitylabs/projectaria/datasets/](https://about.meta.com/realitylabs/projectaria/datasets/).
Inference:Scene-level inference is challenging, with the need to match portions of a hierarchical scene model with image data. As we have seen above, this can involve bottom-up and top-down flows of information in the model. This could lead to complex inference processes, reminiscent of those found in earlier vision models, such as VISIONS (Hanson and Riseman, 1978) and the system of Ohta et al. (1978). It may be possible to simplify this somewhat by using the idea of "lazy inference", where only parts of the whole scene representation are activated in response to a given task. With regard to model deficiency, to make progress it will be important to get lots of data on the kinds of deficiencies that occur most, in order to address the important issues.
Tasks and Benchmarks:Currently most tasks for computer vision are evaluated in the image plane. Partly this may be due to the fact that it is relatively easy to collect images and create annotations in this case. However, there are some 3D benchmarks such as the KITTI suite18 and nuScenes19. These include 3D object detection in road scenes (using 3D bounding boxes). Autonomous driving and bin picking tasks in cluttered scenes are examples of areas that may well push the 3D reconstructive agenda forward.
Footnote 18: [http://www.cvlibs.net/datasets/kitti/](http://www.cvlibs.net/datasets/kitti/).
In order to exploit the value of structured generative models, it will be necessary to define a set of tasks which can exploit the same underlying representation. Some examples of challenging tasks were given in sec. 1.3; these include object reconstruction under heavy occlusion, and scene editing tasks, both of which exploit the 3D and scene-level information contained in the SGM representation. The research community will need to focus attention on a tasteful choice of 3D/multi-view benchmarks and tasks in order to promote the SGM approach.
## Acknowledgements
I thank David Hogg, Alan Yuille, Oisin Mac Aodha, Antonio Vergari, Siddharth N., Kevin Murphy, Paul Henderson, Adam Kortylewski, Hakan Bilen and Titas Anciukevicius for helpful comments and discussions.
For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
|
2309.01054
|
Self-Purification and Entanglement Revival in Lambda Matter
|
In this study, we explore the dynamics of entanglement in an ensemble of
three-level systems with a lambda-type level structure interacting with
single-mode bosons. Our investigation focuses on zero-energy states within the
subspace of totally symmetric wave functions. Remarkably, we observe a
universal two-stage dynamics of entanglement with intriguing revival behavior.
The revival of entanglement is a consequence of the self-purification process,
where the quantum state relaxes and converges universally to a special dark
state within the system.
|
Dongni Chen, Stefano Chesi, Mahn-Soo Choi
|
2023-09-03T02:17:54Z
|
http://arxiv.org/abs/2309.01054v3
|
# Self-Purification and Entanglement Revival in Lambda Matter
###### Abstract
In this study, we explore the dynamics of entanglement in an ensemble of three-level systems with a lambda-type level structure interacting with single-mode bosons. Our investigation focuses on zero-energy states within the subspace of totally symmetric wave functions. Remarkably, we observe a universal two-stage dynamics of entanglement with intriguing revival behavior. The revival of entanglement is a consequence of the self-purification process, where the quantum state relaxes and converges universally to a special dark state within the system.
## 1 Introduction
Quantum entanglement plays a pivotal role in quantum information processing, specifically in quantum communication [1, 2], quantum simulations [3], and quantum computing [4]. However, environmental noise poses a significant challenge to preserving entanglement and coherence. For the majority of systems, decoherence results in an irreversible loss of entanglement, becoming a major hindrance in advancing quantum information technologies [5]. Despite this common behavior, certain exceptional systems exhibit a fascinating phenomenon known as _entanglement revival_, where entanglement increases during their dissipative evolution. Moreover, a related scenario called 'entanglement sudden birth' has been explored, wherein entanglement arises from a separable state after a specific time interval. Entanglement revival and sudden birth have been extensively studied in a variety of discrete and continuous variable systems [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. This non-monotonic behavior relies on a delicate interplay between entanglement
generation, induced by internal system interactions, and dissipation from the external environment [6, 7, 9]. Generally speaking, the entanglement revival shows a sensitive dependence on the initial state [12, 13, 14]. In some systems, it was suggested that revivals might be induced by the transfer of entanglement between different subsystems [17, 18, 19].
In this work, we present a comprehensive investigation of a robust mechanism for entanglement revival, centered around the utilization of special metastable states. These unique states act as entanglement reservoirs during the later stages of time evolution and can be deliberately engineered by leveraging specific (exact or approximate) symmetries inherent in the underlying quantum system. Our focus lies on an ensemble of three-level systems, i.e., _qutrits_, exhibiting a lambda-type level structure interacting with a single bosonic mode, a model pertinent to various platforms such as cavity quantum electrodynamics (QED) [20, 21], trapped ions [22], or circuit QED [23]. The dissipative evolution in our study follows a universal two-stage dynamics. In the initial stage, we observe the expected rapid decay of entanglement at short times. Subsequently, in the second stage leading to entanglement revival, the system undergoes a slower evolution caused by relaxation, ultimately converging to a special dark state. This self-purification process plays a crucial role in restoring entanglement. Importantly, the entire process is closely connected to an antisymmetry (rather than symmetry) associated with the parity operator of the system. Furthermore, we provide a comprehensive description of the quantum states involved, enabling a clear identification of the underlying physical processes at play. Armed with this in-depth understanding, we anticipate that the essential features of the revival dynamics can be realized in a broader class of systems.
This paper is structured as follows: In Section 2, we introduce the system, detailing its fundamental physical processes and associated properties. Section 3 explores the observation of entanglement revival behavior, illustrating the dissipative evolution characterized by a universal two-stage dynamics. Section 4 provides a semiclassical description of the system, while Sections 5 and 6 analyze the effect of boson and collective qutrit decay, based on this semiclassical description. Finally, Section 7 summarizes the findings and presents an outlook for future research. Further details on the optimization conditions for the visibility of entanglement revival (A) and on the effects of other imperfections (B and C) are discussed in appendices.
## 2 Model
We consider \(n\) identical three-level systems (qutrits, for short) coupled with single-mode bosons. As shown in Fig. 1, each qutrit has a \(\Lambda\)-type structure with two ground-state levels, denoted by \(|0\rangle\) and \(|2\rangle\), and one excited-state level, \(|1\rangle\). The bosonic mode induces the transition \(|0\rangle\leftrightarrow|1\rangle\) whereas the transition \(|1\rangle\leftrightarrow|2\rangle\) is driven resonantly by an external classical field. In the interaction picture, the system is governed by the Hamiltonian
\[\hat{H}=g\sum_{k=1}^{n}\hat{c}\,|1\rangle_{k}\langle 0|+\Omega\sum_{k=1}^{n}|1 \rangle_{k}\langle 2|+\mathrm{h.c.}, \tag{1}\]
where \(\hat{c}\) is the bosonic annihilation operator, \(|\mu\rangle_{k}\) (\(\mu=0,1,2\)) are the quantum states of the \(k\)th qutrit, \(g\) is the qutrit-boson coupling, and \(\Omega\) is the Rabi transition amplitude. Note that, in this work, we assume uniform \(g\) and \(\Omega\) for all qutrits. The operator \(\hat{N}=\hat{c}^{\dagger}\hat{c}+\sum_{k=1}^{n}(|1\rangle_{k}\langle 1|+|2 \rangle_{k}\langle 2|)\) is conserved by \(\hat{H}\), and we will usually refer to its eigenvalues (i.e., the total number of excitations) as \(p\).
Any realistic system is subject to the influence of quantum noise, so its dynamics is no longer unitary. Dissipative dynamics is typically modeled by a quantum master equation, which we take to be of the following form
\[\dot{\hat{\rho}}(t)=-i[\hat{H},\hat{\rho}]+\kappa\mathcal{L}[\hat{c}]\hat{\rho} +\Gamma_{0}\mathcal{L}[\hat{L}_{0}]\hat{\rho}+\Gamma_{2}\mathcal{L}[\hat{L}_{ 2}]\hat{\rho}, \tag{2}\]
where \(\hat{\rho}\) is the density operator of the system and \(\mathcal{L}[\hat{L}]\hat{\rho}\) is the Lindblad superoperator, defined by \(\mathcal{L}[\hat{L}]\hat{\rho}:=\hat{L}\hat{\rho}\hat{L}^{\dagger}-(\hat{L}^{\dagger}\hat{L}\hat{\rho}+\hat{\rho}\hat{L}^{\dagger}\hat{L})/2\) for the linear operator \(\hat{L}\) associated with a quantum decoherence process. Specifically, \(\mathcal{L}[\hat{c}]\) is responsible for the loss of bosons while \(\mathcal{L}[\hat{L}_{\mu}]\), with \(\hat{L}_{\mu}:=\sum_{k=1}^{n}|\mu\rangle_{k}\langle 1|\), describes the collective spontaneous decay of qutrits from the excited-state level \(|1\rangle\) to the ground-state level \(|\mu\rangle\) (\(\mu=0,2\)); \(\kappa\) and \(\Gamma_{\mu}\) are the corresponding rates. As implied by the specific form of the quantum master equation (2), we mainly concentrate on _collective_ decay of qutrits. However, as we discuss later, our main results are not affected qualitatively by _individual_ qutrit decay.
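For readers who wish to experiment with Eqs. (1)-(2), the following is a minimal sketch using QuTiP's `mesolve`, a standard Lindblad solver. The number of qutrits, the boson Fock cutoff, the rates, and the initial state are illustrative assumptions chosen so the example runs quickly; they are not the parameters or initial conditions used in the paper.

```python
import qutip as qt

n, nb = 3, 4                        # number of qutrits, boson Fock cutoff (assumptions)
g, Omega = 1.0, 0.3
kappa, Gamma0, Gamma2 = 0.1, 0.02, 0.02

def site(op, k):
    """Embed a single-qutrit operator on qutrit k; the boson mode is last in the tensor order."""
    ops = [qt.qeye(3)] * n + [qt.qeye(nb)]
    ops[k] = op
    return qt.tensor(ops)

c = qt.tensor([qt.qeye(3)] * n + [qt.destroy(nb)])   # boson annihilation operator
s10 = qt.basis(3, 1) * qt.basis(3, 0).dag()          # |1><0|
s12 = qt.basis(3, 1) * qt.basis(3, 2).dag()          # |1><2|

# Hamiltonian of Eq. (1)
terms = [g * c * site(s10, k) + Omega * site(s12, k) for k in range(n)]
H_half = sum(terms[1:], terms[0])
H = H_half + H_half.dag()

# Collapse operators of Eq. (2): boson loss and collective qutrit decay
L0_terms = [site(s10.dag(), k) for k in range(n)]    # |0><1| on each qutrit
L2_terms = [site(s12.dag(), k) for k in range(n)]    # |2><1| on each qutrit
L0 = sum(L0_terms[1:], L0_terms[0])
L2 = sum(L2_terms[1:], L2_terms[0])
c_ops = [kappa**0.5 * c, Gamma0**0.5 * L0, Gamma2**0.5 * L2]

# Illustrative initial state: all qutrits in |0>, two bosons in the mode.
psi0 = qt.tensor([qt.basis(3, 0)] * n + [qt.basis(nb, 2)])
tlist = [0.0, 2.0, 10.0, 50.0]
result = qt.mesolve(H, psi0, tlist, c_ops=c_ops, e_ops=[c.dag() * c])
print(result.expect[0])              # boson occupation decaying under Eq. (2)
```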
In this work, we investigate an unusual dynamics of entanglement in the presence of quantum decoherence. The effect of our interest is most pronounced in the zero-energy subspace of \(\hat{H}\) with \(p\) excitations and totally symmetric wave functions (with respect to the exchange of qutrits). Therefore we mainly focus on this subspace, denoted as \(\mathcal{Z}_{p}\). Interestingly, it was pointed out [24] that the \(\mathcal{Z}_{p}\) subspace features a decoherence-free nature [25, 26, 27], and is always degenerate regardless of the parameters, allowing one to geometrically manipulate quantum states within the subspace. This property is due to three symmetry properties of the system: the conservation of total number of excitations \(\hat{N}\), the exchange symmetry of qutrits, and the _anti_-symmetry, \(\{\hat{\Pi}_{1},\hat{H}\}=0\), of \(\hat{\Pi}_{1}:=\exp\left(i\pi\sum_{k=1}^{n}|1\rangle_{k}\langle 1|\right).\)
We denote the basis states of \(\mathcal{Z}_{p}\) by \(|Z_{p}^{i}\rangle\) for \(i=0,1,2,\cdots,[p/2]\), where \([x]\) indicates the integer part of the real number \(x\). The first zero-energy eigenstate can
Figure 1: Left: Schematics of an ensemble of qutrits (dots) uniformly coupled to a single bosonic mode. Right: The level structure of each qutrit.
always be chosen of the following form:
\[|Z_{p}^{0}\rangle=\sum_{k=0}^{p}\frac{(-1)^{p-k}}{\sqrt{(p-k)!g^{p-k}\Omega^{k}}} |\Phi_{n}^{k}\rangle_{Q}|p-k\rangle_{c}, \tag{3}\]
where \(|\Phi_{n}^{k}\rangle_{Q}=\sum_{\mathcal{P}}\mathcal{P}\left|2\right\rangle^{ \otimes k}\left|0\right\rangle^{\otimes(n-k)}\) (\(\mathcal{P}\) are permutations of qutrits) is a symmetric Dicke state and \(\left|k\right\rangle_{c}\) is a Fock state with \(k\) bosons. Equation (3) describes a special zero-energy state which we call _master dark state_. It has several interesting properties useful for quantum-state engineering applications, including the generation of arbitrary symmetric Dicke states [24, 28]. We will see below that this master dark state plays a key role in the entanglement-revival behavior as well.
## 3 Entanglement revival
Figure 2 illustrates the main phenomenon in the focus of this work. For four qutrits (\(n=4\)) with three excitations (\(p=3\)), which we take as a prototypical example, there are two zero-energy states, \(|Z_{p}^{0}\rangle\) and \(|Z_{p}^{1}\rangle\), where \(|Z_{p}^{1}\rangle\) has a relatively large qutrit-boson entanglement (as quantified by the logarithmic negativity [29, 30, 31]). From the quantum dissipative dynamics of Eq. (2), we find in Fig. 2(a) that the entanglement content of \(|Z_{p}^{1}\rangle\) decreases fast initially, as usually expected. Surprisingly, however, the entanglement increases again to a certain level as time evolves. This type of two-stage dynamics and entanglement-revival behavior are universally found in a wide range of parameters.
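The qutrit-boson entanglement quoted here is the logarithmic negativity, \(E_{N}=\log_{2}\lVert\hat{\rho}^{T_{B}}\rVert_{1}\), computed from the partial transpose over the boson mode. A minimal NumPy sketch of this quantity is given below; the two-qubit Bell-state test at the end is only a sanity check, and the bipartition dimensions would in practice be those of the chosen qutrit-ensemble and Fock-space truncation.

```python
import numpy as np

def log_negativity(rho, dA, dB):
    """E_N = log2 ||rho^{T_B}||_1 for a state on H_A (dim dA) tensor H_B (dim dB)."""
    r = np.asarray(rho, dtype=complex).reshape(dA, dB, dA, dB)
    rho_tb = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)   # transpose subsystem B
    # rho^{T_B} is Hermitian, so its trace norm is the sum of |eigenvalues|.
    return np.log2(np.abs(np.linalg.eigvalsh(rho_tb)).sum())

# Sanity check on a two-qubit Bell state, for which E_N = 1.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
print(log_negativity(np.outer(bell, bell.conj()), 2, 2))   # ~1.0
```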
In search of clues to explain the above behavior, we consider the probabilities \(P_{p}^{k}(t):=\langle Z_{p}^{k}|\,\hat{\rho}(t)\,|Z_{p}^{k}\rangle\) of being in a zero-energy state (\(p\leq 3\)). We find that, among all zero-energy states, the key role is played by \(|Z_{3}^{1}\rangle\), \(|Z_{1}^{0}\rangle\), and \(|Z_{0}^{0}\rangle\), whose populations are shown in Fig. 2(b). In the first stage of the dynamics, the initial state \(|Z_{3}^{1}\rangle\) decays rapidly (over a time scale of order \(1/\kappa\) or \(1/\Gamma_{\mu}\)) to the \(p=1\) master dark
state \(|Z_{1}^{0}\rangle\). Once the master dark state \(|Z_{1}^{0}\rangle\) becomes dominant, the second stage of dynamics kicks in, and \(|Z_{1}^{0}\rangle\) decays to the trivial state \(|Z_{0}^{0}\rangle\equiv|0\rangle_{1}\otimes\cdots\otimes|0\rangle_{n}\otimes|p \rangle_{c}\). The latter process is much slower, since the master dark states are extremely robust to decoherence [24]. As inferred from Eq. 3, they are not affected by spontaneous decay of qutrits (neither individual or collective) from the excited-state to ground-state levels, and become immune to boson losses in the limit of large \(g/\Omega\). Between the two stages there is a certain transition period (shaded region in Fig. 2) characterized by a low purity, see the dashed curve in Fig. 2(b). Importantly, the probabilities \(P_{3}^{1},P_{1}^{0},P_{0}^{0}\) do not sum up to unity during this transition period. This means that some other states give a significant contribution to \(\hat{\rho}(t)\). Indeed, as we discuss in detail below, the decay of the initial state \(|Z_{3}^{1}\rangle\) to \(|Z_{1}^{0}\rangle\) occurs via intermediate states with finite energies. The important point here is that the mixture of those states dramatically suppresses the overall entanglement to a level that is much smaller than the entanglement of each component (pure) state.
We conclude that the entanglement revival can be attributed to an initial suppression of entanglement due to strong mixture, followed by a recovery of entanglement when the system evolves towards the (relatively) pure master dark state. This self-purification is possible because the master dark state is stable against the decay processes. As noted already, the effect of boson loss gets suppressed by increasing \(g/\Omega\). However, this also leads to a master dark state with smaller entanglement, implying that the visibility of the entanglement revival can be optimized over the coupling ratio. We discuss in detail such optimization in A.
## 4 Semiclassical description
We now quantitatively analyze the two-stage dissipative dynamics and the corresponding entanglement-revival behavior, observed numerically above. To do this, we first simplify our model through a semiclassical approximation, recalling that we are working in the subspace of totally symmetric wave functions. As long as the quantum master equation (2) contains only collective decoherence of qutrits and boson loss, the system remains in this subspace. Then, it is convenient to describe the ensemble of qutrits using bosonic operators \(\hat{a}_{\mu}\) associated with the levels \(|\mu\rangle\). Expressed in terms of these bosonic operators, \(\hat{H}\) reads
\[\hat{H}=g\hat{c}^{\dagger}\hat{a}_{0}^{\dagger}\hat{a}_{1}+\Omega\hat{a}_{2}^{ \dagger}\hat{a}_{1}+\mathrm{h.c.}, \tag{4}\]
which preserves the total number of qutrits \(\hat{a}_{0}^{\dagger}\hat{a}_{0}+\hat{a}_{1}^{\dagger}\hat{a}_{1}+\hat{a}_{2} ^{\dagger}\hat{a}_{2}=n\) as well as the total number of excitations \(\hat{a}_{1}^{\dagger}\hat{a}_{1}+\hat{a}_{2}^{\dagger}\hat{a}_{2}+\hat{c}^{ \dagger}\hat{c}=p.\) Likewise, Eq. (2) becomes:
\[\dot{\hat{\rho}}(t)=-i[\hat{H},\hat{\rho}]+\kappa\mathcal{L}[\hat{c}]\hat{\rho }+\Gamma_{0}\mathcal{L}[\hat{a}_{0}^{\dagger}\hat{a}_{1}]\hat{\rho}+\Gamma_{2 }\mathcal{L}[\hat{a}_{2}^{\dagger}\hat{a}_{1}]\hat{\rho}. \tag{5}\]
So far, everything is exact. We now make a semiclassical approximation \(g\hat{c}^{\dagger}\hat{a}_{0}^{\dagger}\hat{a}_{1}\approx g\hat{c}^{\dagger} \hat{a}_{1}\sqrt{n}\), which is valid in the limit \(n\gg p\). Then, Eq. (4) gives:
\[\hat{H}\approx\hat{H}_{\mathrm{sc}}:=g\sqrt{n}\,\hat{c}^{\dagger}\hat{a}_{1}+ \Omega\,\hat{a}_{2}^{\dagger}\hat{a}_{1}+\mathrm{h.c.} \tag{6}\]
This semiclassical Hamiltonian is quadratic in the bosonic operators, hence can be solved exactly. By introducing three new bosonic operators
\[\hat{C}_{0} :=\cos\theta\,\hat{c}-\sin\theta\,\hat{a}_{2}, \tag{7a}\] \[\hat{C}_{\pm} :=\left(\sin\theta\,\hat{c}+\cos\theta\,\hat{a}_{2}\mp\hat{a}_{1} \right)/\sqrt{2}, \tag{7b}\]
where \(\tan\theta:=g\sqrt{n}/\Omega\), the semiclassical Hamiltonian takes the following form:
\[\hat{H}_{\rm sc}=\epsilon(\hat{C}_{+}^{\dagger}\hat{C}_{+}-\hat{C}_{-}^{ \dagger}\hat{C}_{-}), \tag{8}\]
where \(\epsilon:=\sqrt{g^{2}n+\Omega^{2}}\). We note from Eq. (8) that \(\hat{C}_{0}\) is a zero-frequency bosonic eigenmode, while \(\hat{C}_{\pm}\) have opposite mode-frequencies \(\pm\epsilon\). This interesting property is due to the anti-symmetry \(\{\hat{\Pi}_{1},\hat{H}\}=0\), with \(\hat{\Pi}_{1}\) expressed now as \(\hat{\Pi}_{1}=\exp\left(i\pi\hat{a}_{1}^{\dagger}\hat{a}_{1}\right).\) The semiclassical Hamiltonian immediately yields the eigenenergies \((k_{+}-k_{-})\epsilon\), with the corresponding eigenstates:
\[|E_{k_{0};k_{+}k_{-}}\rangle=\frac{1}{\sqrt{k_{0}!k_{+}!k_{-}!}}(\hat{C}_{0}^ {\dagger})^{k_{0}}(\hat{C}_{+}^{\dagger})^{k_{+}}(\hat{C}_{-}^{\dagger})^{k_{ -}}\,|\,\,\rangle\,, \tag{9}\]
where \(|\,\,\rangle\) is the vacuum state. Of particular importance are the semiclassical zero-energy states with \(p\) excitations (corresponding to the subspace \(\mathcal{Z}_{p}\)), which take the general form \(|E_{(p-2k);kk}\rangle\,.\) The case \(k=0\) gives the master dark states of Eq. (3), i.e., \(|E_{p;00}\rangle\approx|Z_{p}^{0}\rangle\,.\) For \(p=3\), applicable to Fig. 2, the two semiclassical zero-energy states are \(|E_{3;00}\rangle\approx|Z_{3}^{0}\rangle\) and \(|E_{1;11}\rangle\approx|Z_{3}^{1}\rangle\).
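The mode transformation of Eqs. (7)-(8) can be checked numerically: the single-particle coupling matrix of \(\hat{H}_{\rm sc}\) in the basis \((\hat{c},\hat{a}_{1},\hat{a}_{2})\) should have eigenfrequencies \(0,\pm\epsilon\), with the zero mode coinciding with \(\hat{C}_{0}\). The short sketch below does exactly this; the parameter values are illustrative assumptions.

```python
import numpy as np

g, Omega, n = 1.0, 0.3, 20            # assumed illustrative values
G = g * np.sqrt(n)
eps = np.sqrt(G**2 + Omega**2)
theta = np.arctan2(G, Omega)           # tan(theta) = g*sqrt(n)/Omega

# Single-particle coupling matrix of H_sc in the mode basis (c, a1, a2)
M = np.array([[0.0, G,     0.0],
              [G,   0.0,   Omega],
              [0.0, Omega, 0.0]])

evals, evecs = np.linalg.eigh(M)
print(np.allclose(np.sort(evals), [-eps, 0.0, eps]))   # eigenfrequencies -eps, 0, +eps

# The zero-frequency eigenvector should match C0 = cos(theta) c - sin(theta) a2 (up to sign)
zero_vec = evecs[:, np.argmin(np.abs(evals))]
C0 = np.array([np.cos(theta), 0.0, -np.sin(theta)])
print(np.isclose(abs(zero_vec @ C0), 1.0))
```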
In the physically relevant regime of \(\kappa,\Gamma_{\mu}\lesssim g,\Omega\), the relative phase between any pair of semiclassical eigenstates with different energies oscillates quickly (the energy difference is of order \(\epsilon\gg\kappa,\Gamma_{\mu}\)). Therefore, by ignoring the phase coherence between such eigenstates, we consider a semiclassical solution of the following form
\[\hat{\rho}_{\rm sc}(t)\approx\sum_{\alpha}P_{\alpha}(t)|E_{\alpha}\rangle \langle E_{\alpha}|, \tag{10}\]
where \(\alpha\equiv(k_{0};k_{+}k_{-})\) collectively denotes the semiclassical quantum numbers \(k_{0}\) and \(k_{\pm}\). Substituting this ansatz into Eq. (5), we obtain the (classical) equations
\[\frac{dP_{\alpha}}{dt}=\sum_{\beta}\gamma_{\alpha\beta}P_{\beta}(t) \tag{11}\]
for the probabilities \(P_{\alpha}(t)\), where \(\gamma_{\alpha\beta}\) is the transition rate between semiclassical states \(|\alpha\rangle\) and \(|\beta\rangle\).
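Equation (11) is a linear rate equation, so its solution is \(P(t)=e^{\gamma t}P(0)\). The sketch below solves a toy three-state chain with one fast and one slow rate, mimicking the two-stage structure discussed in the following sections; the rate matrix is an illustrative assumption, not the \(\gamma_{\alpha\beta}\) of the paper.

```python
import numpy as np
from scipy.linalg import expm

# Toy rate matrix: fast decay 0 -> 1, slow decay 1 -> 2 (columns sum to zero,
# so total probability is conserved). Values are assumptions for illustration.
k_fast, k_slow = 1.0, 0.05
Gamma = np.array([[-k_fast,     0.0, 0.0],
                  [ k_fast, -k_slow, 0.0],
                  [    0.0,  k_slow, 0.0]])

P0 = np.array([1.0, 0.0, 0.0])         # all population initially in state 0
for t in (0.0, 1.0, 10.0, 100.0):
    print(t, expm(Gamma * t) @ P0)      # P(t) = expm(Gamma t) P(0)
```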
## 5 The effects of boson loss
We now apply the semiclassical description of Eqs. (10) and (11) to analyze in detail the dynamical process. We first consider the effects of boson loss and, for simplicity, we focus on the particular example with \(p=3\). Similar results apply to arbitrary \(p\).
In the semiclassical limit (\(n\gg p\), hence \(\tan\theta\gg 1\)), the rates \(\gamma_{\alpha\beta}:=\kappa|\langle E_{\alpha}|\hat{c}|E_{\beta}\rangle|^{2}\) of the four processes \(\left|E_{1;11}\right\rangle\rightarrow\left|E_{1;10}\right\rangle\), \(\left|E_{1;11}\right\rangle\rightarrow\left|E_{1;01}\right\rangle\), \(\left|E_{1;10}\right\rangle\rightarrow\left|E_{1;00}\right\rangle\), and \(\left|E_{1;01}\right\rangle\rightarrow\left|E_{1;00}\right\rangle\) are much larger (by a factor of order \(g^{2}n/\Omega^{2}\gg 1\)) than other transition rates. Therefore, as illustrated in Fig. 3, one can identify two dominant (incoherent) decay paths of \(\left|Z_{3}^{1}\right\rangle\approx\left|E_{1;11}\right\rangle\):
\[\left|E_{1;11}\right\rangle\rightarrow\left|E_{1;10}\right\rangle \rightarrow\left|E_{1;00}\right\rangle, \tag{12}\] \[\left|E_{1;11}\right\rangle\rightarrow\left|E_{1;01}\right\rangle \rightarrow\left|E_{1;00}\right\rangle. \tag{13}\]
We also include \(\left|E_{1;00}\right\rangle\rightarrow\left|E_{0;00}\right\rangle\) among the major transitions of Fig. 3 (solid arrows). The latter process has a much smaller rate, but is the only allowed transition once the system has reached \(\left|E_{1;00}\right\rangle\).
The above remarks allow us to further simplify our semiclassical description, by only including in the ansatz \(\hat{\rho}_{\mathrm{sc}}(t)\) of Eq. (10) the five most relevant states \(\left|E_{1;11}\right\rangle,\)\(\left|E_{1;10}\right\rangle,\)\(\left|E_{1;01}\right\rangle,\)\(\left|E_{1;00}\right\rangle,\) and \(\left|E_{0;00}\right\rangle\), which appear in the major transition paths. As Fig. 4(a) demonstrates, the approximate evolution of the five populations \(P_{\alpha}(t)\) agrees very well with the full numerical solution, obtained from Eq. (5). In Figure 4(b) we further compare the entanglement content of the dominant semiclassical state \(\hat{\rho}_{\mathrm{sc}}(t)\) (red dashed line) to the full solution \(\hat{\rho}(t)\) (blue empty circles). Here the agreement is worse than for the probabilities of panel (a); apparently, the entanglement content is more sensitive to the detailed form of \(\hat{\rho}(t)\). However, the discrepancy can be easily corrected by including all semiclassical eigenstates \(\left|E_{k_{0};k_{+}k_{-}}\right\rangle\) in \(\hat{\rho}_{\mathrm{sc}}(t)\). In this case, the possible transition paths include those marked by dashed arrows in Fig. 3. The black dashed curve of Fig. 4(b) plots the evolution of the entanglement content within the full semiclassical description, and shows an excellent agreement with the full solution.
We have shown above that, as expected, the semiclassical approximation works very well when \(n\gg p\). However, it also gives qualitatively correct predictions of the dynamical behavior for values of \(n\) as small as \(n=4\) (with \(p=3\)). Many features can
Figure 3: Schematic diagram describing boson losses within the semiclassical approximation. Dominant semiclassical processes are represented by solid blue arrows. The complete semiclassical diagram includes both solid and dashed arrows.
even be described quantitatively, as long as we replace \(\hat{a}_{0}^{\dagger}\hat{a}_{0}\) by the average occupation \(n_{0}\) of state \(|0\rangle\).
## 6 The effects of qutrit decay
The semiclassical description is still applicable when collective decay of qutrits is the dominant decoherence mechanism. Assuming \(\kappa=\Gamma_{2}=0\), we illustrate in Fig. 5(a) the transitions induced by the \(\hat{a}_{0}^{\dagger}\hat{a}_{1}\simeq\sqrt{n}\hat{a}_{1}\) process of Eq. (2). This semiclassical diagram is similar to the one in Fig. 3, except that now mode \(\hat{C}_{0}\) is not affected by dissipation. Therefore, all the slow transitions of Fig. 3 do not take place (including \(|E_{1;00}\rangle\rightarrow|E_{0;00}\rangle\)). The system will ultimately reach the master dark state \(|E_{1;00}\rangle\), instead of the vacuum state. In Fig. 5(a), the transition rates are all of order \(\Gamma=\sqrt{n}\Gamma_{0}\), i.e., collectively enhanced by the occupancy of state \(|0\rangle\). If, on the other hand, the \(\hat{a}_{2}^{\dagger}\hat{a}_{1}\) process dominates (\(\kappa=\Gamma_{0}=0\)) we find the complete semiclassical diagram shown in Fig. 5(b). Since the decay from \(|1\rangle\) to \(|2\rangle\) does not alter the excitation number, the dynamics is confined to the \(p=3\) subspace, and terminates in the master dark state \(|E_{3;00}\rangle\).
As before, both diagrams of Fig. 5 lead naturally to a two-stage dynamics where, after an initial loss of purity, the system evolves towards a master dark state. As a consequence, revivals are observed in the time dependence of the logarithmic negativity, shown in Fig. 6. Since the master dark states are completely immune to the decay of qutrits, at long times the entanglement content saturates to a finite value.
When both types of qutrit decay are present, the semiclassical diagram is more complex but can be obtained in a similar manner. An entanglement revival is still found with the parameters of Fig. 6 (see the lowest curve). However, now the system
Figure 4: Comparison of the exact (empty circles) and semiclassical (dashed curves) dynamics in the presence of boson loss. (a): Probabilities \(P_{\alpha}\) of the five states \(|E_{\alpha}\rangle\) entering the major transition paths of Fig. 3. (b): Time dependence of the logarithmic negativity. The dashed curves in panel (a) only consider the dominant semiclassical contribution, while in panel (b) we show both the dominant (upper dashed curve) and the complete (lower dashed curve) semiclassical treatment. The latter includes all states and transitions of Fig. 3. In both panels: \(n=20\), \(p=3\), \(\Omega/g=0.1\sqrt{n}\), and \(\kappa/g=0.1\).
relaxes to a mixture of three master dark states, \(|E_{3;00}\rangle\), \(|E_{1;00}\rangle\), and \(|E_{2;00}\rangle\), leading to a general reduction of entanglement. As seen, in all three cases of Fig. 6 the semiclassical approximation (dashed curves) is in close agreement with the exact evolution (empty circles).
Finally, we comment on the effect of individual decay of qutrits. This process is more complex to describe, as it causes transitions between different symmetry sectors, expanding the dynamical process beyond the totally symmetric subspace. Nevertheless, in B we find that the dynamics remains qualitatively similar. The system can still reach a distinct master dark state, causing the entanglement to revive. Only when the decay rates from \(|1\rangle\) to \(|0\rangle\) and from \(|1\rangle\) to \(|2\rangle\) are comparable is a larger number of states involved, which might prevent the self-purification process. The analysis of individual qutrit decay suggests that the revival is robust to other types of local perturbations, e.g., small deviations of \(g,\Omega\) from the homogeneous limit. The entanglement revival also survives the impact of detuning, as discussed in C.
## 7 Conclusion
By analyzing the evolution of an ensemble of qutrits interacting with a bosonic mode, we have identified a robust physical mechanism for entanglement revivals. We show that starting from a highly-entangled state, decoherence leads to a universal two-stage dynamics which can be accurately captured by a semiclassical approximation. The entanglement revival is attributed to the self-purification of the quantum state as it relaxes towards a special dark state. This mechanism boasts broad applicability and bears some analogy to other extensively studied protocols to generate entanglement through dissipation, such as reservoir engineering [32, 33, 34, 35, 36]. However, our primary focus here is on inducing an unconventional form of quantum evolution.
## Acknowledgments
S.C. acknowledges support from the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301602), the National Science Association Funds (Grant No. U2230402), and the National Natural Science Foundation of China (Grant Nos. 11974040 and 12150610464). C.D. and M.-S.C. have been supported by the National Research Foundation (NRF) of Korea (Grant Nos. 2022M3H3A106307411 and 2023R1A2C1005588) and by the Ministry of Education through the BK21 Four program.
## Appendix A Optimal coupling ratio
Here, we analyze the optimal value of the coupling ratio \(\Omega/(g\sqrt{n})\), which is determined by the competing requirements of having a dark state with large entanglement and being weakly affected by boson decay.
For simplicity and to focus on the essential point, we assume no qutrit decay (which does not change the qualitative behavior). In this case, the semiclassical dynamics is described by Fig. 3 of the main text. We also recall that the revival of entanglement is due to a self-purification of the system, as it relaxes to the special master dark state \(|E_{1;00}\rangle\). This process takes place on a timescale \(\sim\kappa^{-1}\). Later, boson decay causes a slow leakage towards the vacuum state, \(|E_{1;00}\rangle\rightarrow|E_{0;00}\rangle\). The rate of this process is of order \(\kappa/\tan\theta\) and can be suppressed by making \(\tan\theta\) as large as possible. In this limit the probability of the master dark state can approach one (ideal purification), but the entanglement between the qutrits and the cavity becomes vanishingly small. We see from Eq. (3) of the main text that \(|Z_{p}^{0}\rangle\simeq|\Phi_{n}^{p}\rangle_{Q}|0\rangle_{c}\) when \(\tan\theta=g\sqrt{n}/\Omega\gg 1\).
Because of this competition, the entanglement revival will at first become more visible by increasing the ratio of the two coupling strengths \(1/\tan\theta=\Omega/(g\sqrt{n})\) but
Figure 6: Evolution of the logarithmic negativity in the presence of collective qutrit decay. The full solution, obtained from Eq. 5, is computed with \(\Gamma_{2}=\Gamma\) and \(\Gamma_{0}=0\) (empty circles), \(\sqrt{n}\Gamma_{0}=\Gamma\) and \(\Gamma_{2}=0\) (empty squares), and \(\Gamma_{2}=\sqrt{n}\Gamma_{0}=\Gamma\) (empty triangles). The black dashed curves are from the semiclassical approximation. We also used: \(n=20\), \(p=3\), \(\kappa=0\), \(\Gamma/g=0.02\), and \(\Omega/g=0.1\sqrt{n}\).
gradually disappears at larger values. This behavior is illustrated in panel (a) of Fig. 11, where we plot the time dependence of the entanglement with various coupling strength ratios. The height of the revival as a function of \(1/\tan\theta\) is plotted in Fig. 11(b), showing that the optimal ratio is in the range of \(0.05-0.1\).
This optimal ratio can be estimated through simple analytical calculations. At long times \(t\gg\kappa^{-1}\), the initial state \(|E_{1;11}\rangle\) has almost fully decayed to a mixture of \(|E_{1;00}\rangle\) and \(|E_{0;00}\rangle\). The approximate density matrix, denoted by \(\tilde{\rho}(t)\), can be expressed in the following form:
\[\tilde{\rho}(t)=P_{1;00}(t)|E_{1;00}\rangle\langle E_{100}|+(1-P_{1;00}(t))|E_{ 0;00}\rangle\langle E_{0;00}|, \tag{11}\]
where the probability can be derived through the semiclassical rate equation, Eq. (11) of the main text, giving
\[P_{1;00}(t)=e^{-\kappa t}(1-e^{\frac{\sin\theta}{2(\sin\theta+\cos\theta)} \kappa t})^{2}. \tag{12}\]
As a reference, we consider the typical timescale \(\kappa t=10\) and compute the entanglement of the approximate density operator \(\tilde{\rho}(10)\). The entanglement obtained in this manner is plotted in Fig. 12, from where we estimate an optimal coupling ratio \(1/\tan\theta\approx 0.13\), which is close to the value from the full simulations of Fig. 11.
## Appendix B Individual decay of qutrits
To incorporate the effect of individual decay of qutrits, we extend the master equation of the main text as follows:
\[\dot{\hat{\rho}}= -i[\hat{H},\hat{\rho}]+\kappa\mathcal{L}[\hat{c}]\hat{\rho}\] \[+\sum_{\mu=0,2}\left(\Gamma_{\mu}\mathcal{L}[\hat{L}_{\mu}]\hat{ \rho}+\sum_{j=1}^{n}\Gamma_{1\mu}\mathcal{L}[\hat{A}_{1\mu}^{j}]\hat{\rho} \right), \tag{13}\]
where \(\hat{L}_{\mu}:=\sum_{k=1}^{n}|\mu\rangle_{k}\langle 1|\) describes the collective decay of qutrits from state \(|1\rangle\) to state \(|\mu\rangle\) and \(\hat{A}_{1\mu}^{j}=|\mu\rangle_{j}\langle 1|\) describes the individual decay of qutrits. Due to this additional dissipation channel, the dynamics becomes more complex. It extends beyond the totally symmetric subspace, so the semiclassical description is no longer applicable. Despite these technical difficulties, the qualitative behavior is remarkably similar to the evolution with only bosonic and/or collective qutrit decay.
We first include only the individual decay of qutrits (\(\kappa=\Gamma_{0}=\Gamma_{2}=0\)). Starting from \(|E_{1;11}\rangle\), the allowed transitions between different symmetry sectors (Young diagrams) are represented in Fig. 18(a). The individual decay \(|1\rangle\to|0\rangle\) gives the 'horizontal' transitions, which reduce the number of excitations \(p\). On the other hand, the \(|1\rangle\to|2\rangle\) decay induces 'vertical' transitions, which do not change \(p\). If we restrict ourselves to the former process (setting \(\Gamma_{12}=0\)), after two decays the system is in one of the \(|E_{1}^{m}\rangle\) eigenstates of the \(p=1\) subspace. We can estimate the probabilities of these states as follows:
\[P_{m}\simeq\sum_{k}\frac{\Gamma(E_{2}^{k}\to E_{1}^{m})}{\sum_{m^{\prime}}\Gamma(E_{2}^{k}\to E_{1}^{m^{\prime}})}\,\frac{\Gamma(E_{1;11}\to E_{2}^{k})}{\sum_{k^{\prime}}\Gamma(E_{1;11}\to E_{2}^{k^{\prime}})}, \tag{14}\]
where \(\Gamma(\alpha\to\beta)=\sum_{i}\Gamma_{10}|\langle\beta|\hat{A}_{10}^{i}| \alpha\rangle|^{2}\) and the sum over \(k\) runs over all the intermediate states with \(p=2\). In Fig. 18(b) we show that the largest \(P_{m}\) is for the state \(|E_{1;00}\rangle\), indicating that the decay to the master dark state is the dominant process. Thus, the same type of two-stage dynamics discussed already should take place.
In agreement with this argument, we see a clear revival in Fig. 18(a), where the decay \(|1\rangle\to|0\rangle\) is dominant. If the \(|1\rangle\to|2\rangle\) decay is dominant, the dynamics is also restricted, as it remains approximately confined to the \(p=3\) subspace. As seen in Fig. 18(b), a strong revival exists in this case. Instead, when the effects of \(\hat{A}_{10}^{j}\) and \(\hat{A}_{12}^{j}\) are comparable, all the processes shown in Fig. 18(a) occur on a similar timescale and
it is difficult to achieve the self-purification process. As a consequence, the revival is not observed in panel (c) of Fig. 11.
Lastly, we investigate the entanglement revival when both bosonic decay and individual decay of the qutrits are present. As shown in Fig. 11(a), where all the decay rates are comparable, the entanglement revival can persist in this scenario as well.
## Appendix C Detuning
Until now, we have treated the resonant case. Considering a finite detuning \(\Delta\), the effective Hamiltonian in the bosonic representation reads:
\[\hat{H}_{\Delta}=\Delta\hat{a}_{1}^{\dagger}\hat{a}_{1}+g\sqrt{n}\,(\hat{a}_{1}^{\dagger}\hat{c}+\hat{c}^{\dagger}\hat{a}_{1})+\Omega\,(\hat{a}_{1}^{\dagger}\hat{a}_{2}+\hat{a}_{2}^{\dagger}\hat{a}_{1}). \tag{12}\]
Simulations for different detunings are presented in Fig. 11(b). If the detuning is small compared to the effective coupling strength \(g\sqrt{n}\), the entanglement revival behavior
survives because the master dark state \(|Z_{p}^{0}\rangle\) still satisfies \(\hat{H}_{\Delta}|Z_{p}^{0}\rangle=0\). If the detuning is too large, the oscillations caused by detuning and the possibility of transitions from a master dark state \(|Z_{p}^{0}\rangle\) to other excited states can destroy the entanglement revival behavior.
|
2305.16825
|
Room temperature quantum Hall effect in a gated ferroelectric-graphene
heterostructure
|
The quantum Hall effect is widely used for the investigation of fundamental
phenomena, ranging from topological phases to composite fermions. In
particular, the discovery of a room temperature resistance quantum in graphene
is significant for compact resistance standards that can operate above
cryogenic temperatures. However, this requires large magnetic fields that are
accessible only in a few high magnetic field facilities. Here, we report on the
quantum Hall effect in graphene encapsulated by the ferroelectric insulator
CuInP2S6. Electrostatic gating of the graphene channel enables the Fermi energy
to be tuned so that electrons in the localized states of the insulator are in
equilibrium with the current-carrying, delocalized states of graphene. Due to
the presence of strongly bound states in this hybrid system, a quantum Hall
plateau can be achieved at room temperature in relatively modest magnetic
fields. This phenomenon offers the prospect for the controlled manipulation of
the quantum Hall effect at room temperature.
|
Anubhab Dey, Nathan Cottam, Oleg Makarovskiy, Wenjing Yan, Vaidotas Mišeikis, Camilla Coletti, James Kerfoot, Vladimir Korolkov, Laurence Eaves, Jasper F. Linnartz, Arwin Kool, Steffen Wiedmann, Amalia Patanè
|
2023-05-26T11:11:00Z
|
http://arxiv.org/abs/2305.16825v1
|
# Room temperature quantum Hall effect in a gated ferroelectric-graphene heterostructure
#### Abstract
The quantum Hall effect is widely used for the investigation of fundamental phenomena, ranging from topological phases to composite fermions. In particular, the discovery of a room temperature resistance quantum in graphene is significant for compact resistance standards that can operate above cryogenic temperatures. However, this requires large magnetic fields that are accessible only in a few high magnetic field facilities. Here, we report on the quantum Hall effect in graphene encapsulated by the ferroelectric insulator CuInP\({}_{2}\)S\({}_{6}\). Electrostatic gating of the graphene channel enables the Fermi energy to be tuned so that electrons in the localized states of the insulator are in equilibrium with the current-carrying, delocalized states of graphene. Due to the presence of strongly bound states in this hybrid system, a quantum Hall plateau can be achieved at room temperature in relatively modest magnetic fields. This phenomenon offers the prospect for the controlled manipulation of the quantum Hall effect at room temperature.
The electronic properties of graphene are very sensitive to applied magnetic fields (**B**) and are ideally suited for the investigation of the quantum Hall effect (QHE). This is exemplified by plateaus in the Hall resistance due to the quantization of the two-dimensional electron motion into Landau levels (LL) [1, 2, 3, 4, 5, 6, 7, 8, 9]. The QHE, first discovered in Si metal-oxide-semiconductor field-effect transistors [10], exhibits important differences in graphene due to the electron-hole degeneracy near the charge neutrality point, which leads to a distinctive half-integer QHE and a non-zero Berry's phase of the electron wavefunction [4, 5, 7, 8, 9].
Of particular significance for the QHE in graphene is the effect of dopant impurities near its surface. Screening effects in graphene [11, 12] tend to be weakened by a magnetic field and can facilitate the localisation of charge carriers in the disordered potential of the graphene layer [13, 14]. For example, for epitaxial graphene on a Si-terminated SiC substrate [15, 16, 17, 18], donors reside in the SiC layer adjacent to the graphene layer. These dopants act as a reservoir of electrons for graphene, maintaining the Hall voltage on the \(v=2\) QH plateau over a wide range of magnetic fields [19]. An extended quantum Hall plateau was also observed in graphene-based field effect transistors (FETs) in which graphene is capped by a thin layer of the van der Waals crystal InSe [20, 21]. These examples of "giant" QH plateaus in graphene have been reported at low temperatures (\(T<200\) K) and have been assigned to the magnetic field and electric field induced transfer of charge carriers between the degenerate Landau levels of graphene and the localized states in its proximity. A full microscopic model for the QHE in these hybrid systems does not yet exist. However, recent work has modelled the interaction between free carriers and localized charges near the surface of graphene [22], showing that when the chemical potential is in the gap between Landau levels, these charges can form stable bound states over a distance of the order of the magnetic length \(l_{B}=\sqrt{\hbar/eB}\) and binding energy \(E_{B}\!\approx\!(\hbar v_{F}/l_{B})\), where \(v_{F}\simeq 10^{6}\) m/s is the Fermi velocity and \(e\) is the elementary charge. This phenomenon can persist well beyond cryogenic temperatures, opening possibilities for the controlled manipulation of the
QHE at room temperature. To date, a room temperature resistance quantum has been reported only in high mobility graphene at large magnetic fields that are available only in a few high field magnet laboratories [5, 23].
Here, we report on the QHE in field effect transistors based on single layer graphene capped with the ferroelectric van der Waals crystal CuInP\({}_{2}\)S\({}_{6}\) (CIPS). The CIPS layer is used as a source of localized charge carriers in proximity to graphene. We report a hysteretic behaviour in the longitudinal and transverse magnetoresistance of graphene over a range of applied magnetic fields and temperatures. Similar hysteretic phenomena in the resistivity of graphene have been reported previously in zero magnetic field and assigned to charge trapping [24, 25, 26, 27, 28, 29] and/or ferroelectric polarisation [30, 31, 32, 33, 34]. In this work, we report on the dynamic exchange of charge carriers at the CIPS/graphene interface and its influence on the QHE and its hysteretic behaviour. The QHE is found to be weakly dependent on temperature and is observed at room temperature over a range of easily accessible applied magnetic fields.
## Results
### Transport characteristics in zero magnetic field
The CIPS/graphene heterostructure was prepared by exfoliation and visco-elastic stamping of a CIPS flake on a Hall bar based on high-quality graphene grown by CVD (chemical vapour deposition). Figure 1a shows the optical image of a ten-terminal Hall bar, half of which is based on graphene (G) and the other half on CIPS/graphene (CG), mounted on a 285 nm-thick SiO\({}_{2}\)/\(n\)-Si substrate. The morphology of the layers was probed by atomic force microscopy (AFM) and single pass amplitude-modulated Kelvin probe force microscopy (AM-KPFM) [35]. The CIPS layer has a non-uniform thickness ranging from 20 nm to 50 nm and a uniform work function potential at the graphene/CIPS interface (Figure 1b). Details of the fabrication and of the characterisation of the CIPS flakes by AFM and piezoresponse force microscopy (PFM) are in the experimental section and Figure S1 of Supplementary Information (SI).
The longitudinal resistance \(R_{\rm XX}\) was measured at a constant current (\(I=1\)\(\mu\)A). The voltage drop \(V_{\rm XX}\) across different pairs of terminals along the graphene channel was measured over a range of gate voltages \(V_{\rm G}\) applied between the graphene and Si-gate electrodes. As can be seen in the inset of Figure 1c, for pristine graphene the \(R_{\rm XX}\left(V_{\rm G}\right)\) curve at \(T=300\)K is peaked at the neutrality point \(V_{\rm NP}=+10\) V. Using a capacitance model of the graphene FET, we estimate a hole density \(p=7\times\)10\({}^{11}\) cm\({}^{-2}\) at \(V_{\rm G}=0\) and a hole (electron) mobility \(\mu=9\times\)10\({}^{3}\) cm\({}^{2}\)/Vs (1\(\times\)10\({}^{4}\) cm\({}^{2}\)/Vs) for carrier concentrations in the range 10\({}^{11}\)-10\({}^{12}\) cm\({}^{-2}\) at \(T=300\) K. In contrast to pristine graphene, for CG the \(R_{\rm XX}\left(V_{\rm G}\right)\) curves show a pronounced hysteresis and are asymmetric (Figure 1c): The amplitude of the hysteresis increases with increasing the sweep range of \(V_{\rm G}\) from \(\Delta V_{\rm G}=\pm\) 10 V to \(\pm\) 50 V. For \(\Delta V_{\rm G}=\pm\) 10 V, the \(R_{\rm XX}(V_{\rm G})\) curves are shifted to lower values of \(V_{\rm G}\) compared to pristine graphene; also, the field effect mobility (and Hall mobility) for holes and electrons is reduced from \(\mu\sim\) 10\({}^{4}\) cm\({}^{2}\)/Vs to \(\mu\sim\) 2\(\times\)10\({}^{3}\) cm\({}^{2}\)/Vs. In general, the \(R_{\rm XX}(V_{\rm G})\) curve consists of multiple peaks, suggestive of a channel with a non-uniform distribution of dopants; also, the temporal response of \(R_{\rm XX}\) is slow (with rise and decay times \(\tau>100\) s). Thus, the \(R_{\rm XX}(V_{\rm G})\) curve depends on the sweep range of \(V_{\rm G}\) and sweep rate \(\Delta V_{\rm G}/\Delta t\). A value of \(\Delta V_{\rm G}/\Delta t=0.3\)V/s was used for the data presented in this work.
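The carrier densities quoted above follow from the parallel-plate capacitance model of the back gate, \(n=C(V_{\rm G}-V_{\rm NP})/e\) with \(C=\varepsilon\varepsilon_{0}/t\). A minimal sketch using the numbers given in the text (285 nm SiO\({}_{2}\), \(\varepsilon=3.9\), \(V_{\rm NP}=+10\) V for pristine graphene) is shown below; it is an illustration of the estimate, not the authors' analysis code.

```python
# Parallel-plate capacitance estimate of the graphene carrier density.
e = 1.602e-19            # elementary charge (C)
eps0 = 8.854e-12         # vacuum permittivity (F/m)
eps_r, t = 3.9, 285e-9   # SiO2 dielectric constant and thickness (from the text)
C = eps_r * eps0 / t     # gate capacitance per unit area (F/m^2)

V_NP = 10.0              # neutrality point of pristine graphene (V), from the text
V_G = 0.0                # gate voltage at which the density is estimated
n = C * abs(V_G - V_NP) / e          # carriers per m^2
print(n * 1e-4)          # ~7e11 cm^-2 of holes at V_G = 0, as quoted in the text
```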
Hysteresis in the transport characteristics of graphene can arise from a gate-induced polarization at the interface of graphene with a ferroelectric layer [33]. For our CG, the hysteresis is not dominated by this phenomenon as a gate-induced ferroelectricity would produce a shift of the neutrality point \(V_{\rm NP}\) in the direction of the gate sweep, _i.e._\(V_{\rm NP}\) would shift to higher voltages when \(V_{\rm G}\) is swept from negative to positive values compared to when \(V_{\rm G}\) is swept from positive to negative values. On the other hand, a hysteresis can also originate from a slow charge transfer at the CIPS/graphene interface, as reported earlier in a similar device structure [25]. The gate voltage induces charges in the graphene layer, which then redistribute between the graphene and CIPS layers. In the first part of the sweep of \(V_{\rm G}\) to positive gate
voltages (\(V_{\rm G}\) \(>\) 0 V), electrons are transferred from graphene onto CIPS; during the reverse sweep with \(V_{\rm G}\) \(<\) 0 V, the CIPS layer discharges its electrons onto graphene. A non-homogeneous distribution of localized states in CIPS can create areas of graphene with different carrier densities, thus causing the multiple peaks in \(R_{\rm XX}\) (\(V_{\rm G}\)) shown in Figure 1c.
We model the hysteresis in \(R_{\rm XX}\)(\(V_{\rm G}\)) using a classical capacitance model of the FET that takes into account a charge transfer at the CIPS/graphene interface (Figure S2 in SI). We
Figure 1: **Gated Hall bar based on graphene capped with CIPS** (a) Optical image of a Hall bar based on CIPS/graphene (CG) on a SiO\({}_{2}\)/\(n\)-Si substrate and Ni-Au contacts. One section of the graphene layer is covered by a CIPS layer. The white dotted lines mark the edges of pristine graphene. (b) AM-KPFM contact potential difference (CPD) map (top) and CPD profile (bottom) of CG measured with a Multi75E cantilever at a voltage amplitude of \(V_{\rm AC}\) = 4 V and frequency \(f_{\rm AC}\) = 17 kHz. The CPD-profile is obtained along the length of the CIPS-flake, as indicated by the white arrow in the map. (c) Resistance-gate voltage \(R_{\rm XX}\)(\(V_{\rm G}\)) curves for CG at \(T\) = 300 K (\(I\) = 1 \(\mu\)A, \(B\) = 0 T). The sweep up/down branches are shown in blue and red arrows, respectively. Curves are displaced along the vertical axis for clarity. Inset: \(R_{\rm XX}\)(\(V_{\rm G}\)) curve for a reference sample based on pristine graphene (G) at \(T\) = 300 K (\(I\) = 1 \(\mu\)A, \(B\) = 0 T). This sample corresponds to the uncapped section of the graphene Hall bar shown in part (a). A sweep rate \(\Delta V_{\rm G}\)/\(\Delta t\) = 0.3V/s was used for the measurements.
estimate that a charge \(\Delta Q\) redistributes slowly between the graphene (\(Q_{\rm g}\)) and CIPS (\(Q_{\rm CIPS}\)) layers with a characteristic time constant \(\tau>100\) s; different regions of CG tend to charge/discharge with similar temporal dynamics; also, the value of \(\Delta Q/e=n_{Q}\) is dependent on \(V_{\rm G}\) and reaches values of up to \(n_{Q}\sim 10^{12}\) cm\({}^{-2}\) at large \(V_{\rm G}\) (\(V_{\rm G}=+50\) V) and \(T=300\) K.
The hysteresis in \(R_{\rm XX}(V_{\rm G})\) weakens with decreasing \(T\) (Figure 2(a)) or under excitation of the sample with photons of energy larger (\(hv=3.06\) eV) than the band gap of CIPS (Figure S3). Light of increasing intensity induces a shift of the neutrality point to larger positive \(V_{\rm G}\) and a narrowing of the \(R_{\rm XX}(V_{\rm G})\) curve. This indicates that carriers photocreated in the CIPS layer can screen the disordered potential created by localized charges. In summary, the transport characteristics of graphene are very sensitive to charges trapped in the CIPS layer. This effect is observed in all our CG devices and is used to probe the effect of localized charges on the QHE at different temperatures, magnetic fields and gate voltages.
### Magneto-transport and quantum Hall effect

Figures 2a and 2b show the temperature dependence of the \(R_{\rm XX}\left(V_{\rm G}\right)\) curves for CG at \(B=0\) T and 16 T, respectively. At low temperatures (\(T\leq 200\) K), the hysteresis in \(R_{\rm XX}\left(V_{\rm G}\right)\) is weak, as also observed in the pristine graphene. However, it becomes pronounced for \(T>200\) K. In particular, in a magnetic field (\(B\) = 16 T in Figure 2b), the \(R_{\rm XX}\left(V_{\rm G}\right)\) curves exhibit additional maxima and minima. To illustrate this behaviour more clearly, we plot in Figure 2c-d-e the colour maps of \(R_{\rm XX}\) versus \(V_{\rm G}\) and \(B\) at different \(T\) and for different (up/down) sweeps of \(V_{\rm G}\). For \(T\) up to 200 K (Figure 2c-d), the bright red region in \(R_{\rm XX}\left(B,\,V_{\rm G}\right)\) centred at \(V_{\rm NP}\sim+30\) V corresponds to the neutrality point of graphene represented by the zeroth Landau level, LL (\(n=0\)). For both sweep up/down branches, secondary peaks in \(R_{\rm XX}\left(B,\,V_{\rm G}\right)\) emerge for \(B>5\) T at around \(V_{\rm G}=+20\) V and \(+40\) V. As \(V_{\rm G}\) increases from negative to positive values, first holes (\(V_{\rm G}<V_{\rm NP}\)) and then electrons (\(V_{\rm G}>V_{\rm NP}\)) fill successive LLs.
The energy-level spectrum of Dirac fermions in a magnetic field is described by the relation \(E_{n}=sgn(i)\sqrt{2e\hbar v_{F}^{2}B\left|\,n\right|}\), where \(n=0,\pm 1,\pm 2\ldots\) The spectrum comprises electron and hole LLs, as well as a LL (\(n=0\)) at the neutrality point. We use the capacitance equation \(C=e[dn_{\mathrm{g}}/dV_{\mathrm{G}}]\) to calculate the voltage separation \(\Delta V_{\mathrm{G}}\) of the maxima in the \(R_{xx}(V_{\mathrm{G}})\) curve at different \(B\). Here \(C=\varepsilon\varepsilon_{0}/t\) is the "classical" capacitance per unit area of the graphene/SiO\({}_{2}\)/Si heterostructure, \(t=285\) nm is the SiO\({}_{2}\) layer thickness, \(\varepsilon=\)3.9 is the relative dielectric constant of SiO\({}_{2}\), \(\varepsilon_{0}\) is the permittivity of free space, and \(n_{\mathrm{g}}\) is the carrier density in the graphene layer. We express the separation between the two maxima in \(R_{xx}(V_{G})\) corresponding to the alignment of the Fermi level with the \(n=0\) and \(n=\pm 1\) LLs as \(\Delta V_{G}=eg/C\), where \(g=4eB/h\). This model
Figure 2: **Longitudinal magnetoresistance for CIPS/graphene (a-b) Resistance-gate voltage \(R_{\mathrm{xx}}(V_{\mathrm{G}})\) curves for CIPS/graphene at different temperatures \(T\) (\(I=1\)\(\upmu\)A) and for (a) \(B=0\) T and (b) \(B=16\) T. The sweep up/down branches are shown in blue and red, respectively. For clarity, curves are displaced along the vertical axis. (c-d-e) Colour plots of \(R_{\mathrm{xx}}\) versus \(B\) and \(V_{\mathrm{G}}\) at (c) \(T=4\) K, (d) \(T=200\) K and (e) \(T=300\) K and different sweeps (top: sweep up; bottom: sweep down; \(I=1\)\(\upmu\)A). Dashed white lines represent the calculated Landau level (LL) charts using a conventional model, as described in the text. Dashed black lines in part (e) show the calculated LL charts assuming a \(B\)-dependent charge transfer.**
reproduces the data at low \(T\) for both sweep up and down of \(V_{\rm G}\) (\(T\) = 4 K and 200 K in Figure 2c-d, white dashed lines), but fails to describe the data at \(T\) \(>\) 200 K (\(T\) = 300 K in Figure 2e, white dashed lines). At \(T\) = 300 K, the LL quantization is obscured by a large hysteresis; in particular, the neutrality point \(V_{\rm NP}\) shifts to larger positive \(V_{G}\) with increasing \(B\). The black lines in Figure 2e describe the deviation of the LL features in \(R_{xx}\) (\(B\), \(V_{G}\)) from a conventional LL chart model. The measured deviation is reproduced by considering a \(B\)-dependent charge transfer and the capacitance equation \(C=e[dn_{\rm g}/dV_{\rm G}]\). The magnetic field tends to reduce the density of electrons transferred from CIPS to graphene by \(\Delta n_{\rm g}\)= 4\(\times\)10\({}^{10}\) cm\({}^{\rm-2}\) at \(B\) = 10 T and \(\Delta n_{\rm g}\) = 4\(\times\)10\({}^{11}\) cm\({}^{\rm-2}\) at \(B\) = 16 T. This phenomenon can also be seen in the dependence of the Hall resistance (\(R_{\rm XY}\)) on \(B\), \(V_{G}\) and \(T\), as discussed below.
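The dashed Landau-level charts described above follow directly from the same capacitance relation: each spin- and valley-degenerate level holds \(g=4eB/h\) carriers per unit area, so the gate-voltage spacing between the \(n=0\) and \(n=\pm 1\) maxima is \(\Delta V_{\rm G}=e(4eB/h)/C\). The short sketch below evaluates this spacing at a few fields; it only implements the formula stated in the text.

```python
# Gate-voltage spacing between the n = 0 and n = +/-1 R_xx maxima, Delta V_G = e*(4eB/h)/C.
e, h = 1.602e-19, 6.626e-34            # elementary charge (C), Planck constant (J s)
eps0, eps_r, t = 8.854e-12, 3.9, 285e-9
C = eps_r * eps0 / t                   # classical gate capacitance per area (F/m^2)

for B in (5.0, 10.0, 16.0):
    dV = e * (4 * e * B / h) / C       # Landau-level spacing in gate voltage (V)
    print(f"B = {B:5.1f} T  ->  Delta V_G = {dV:5.2f} V")
```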
Figure 3 shows the \(V_{\rm G}\)-dependence of \(R_{\rm XY}\) over a range of temperatures (\(T\) = 4-320 K) and magnetic fields from \(B\) = 0 T to 16 T. At \(T\) = 4 K (Figure 3a), the \(R_{\rm XY}\)(\(V_{\rm G}\)) curve shows QH plateaus centred at \(V_{\rm G}\) \(\approx\) +20 V and \(V_{\rm G}\) \(\approx\) +40 V, corresponding to the LL filling factor \(v\) = 2 for holes and electrons, respectively. The LL filling factor \(v\) is derived from the relation \(v\) = \(\pm\) 4(\(|n|\)+1/2), where \(n\) is the LL index [16]. Plateaus corresponding to lower values of \(R_{\rm XY}\) can also be seen at \(V_{\rm G}\)\(\approx\) -2 V (\(v\) = 6) and \(V_{\rm G}\)\(\approx\) -25 V (\(v\) = 10). As the temperature increases to \(T\) = 100 K (Figure 3b) and 200 K (Figure 3c), the QH plateaus tend to narrow. A further increase of temperature to \(T\)\(\geq\) 300 K (Figure 3d and 3e) induces a pronounced hysteresis in the \(R_{\rm XY}\) (\(V_{\rm G}\)) curves (see also Figure S4 and S5 in SI). Figure 4 shows the colour plots of \(R_{\rm XY}\) versus \(V_{\rm G}\) and \(B\) at \(T\) = 300 K for different (up/down) sweeps of \(V_{\rm G}\). These data illustrate the sign of the \(v\) = 2 QH plateau and its evolution with increasing values of \(B\) and \(V_{\rm G}\). In particular, it can be seen that the neutrality point \(V_{\rm NP}\) shifts to larger positive \(V_{G}\) with increasing \(B\).
Figure 4: **Room temperature quantum Hall resistance for CIPS/graphene** Colour plots of the Hall resistance \(R_{\rm XY}\) versus magnetic field \(B\) and gate voltage \(V_{\rm G}\) at \(T=300\) K (\(I=1\)\(\mu\)A) for different sweeps of \(V_{\rm G}\) (top: sweep down; bottom: sweep up).
Figure 3: **Hall resistance for CIPS/graphene** (a-b-c-d-e) Hall resistance–gate voltage \(R_{\rm XY}(V_{\rm G})\) curves at different temperatures: (a) \(T=4\) K, (b) \(T=100\) K, (c) \(T=200\) K, (d) \(T=300\) K and (e) \(T=320\) K, and magnetic field ranging from \(B=0\) T to \(16\) T in \(1\) T steps (\(I=1\)\(\mu\)A). The right panel in part (a) shows the \(R_{\rm XY}(V_{\rm G})\) curves at \(B=16\)T and \(T=4\)K. Dashed lines correspond to the quantized values of \(R_{\rm XY}\).
From Figure 3 it can be seen that the \(v=2\) QH plateau is accompanied by a hysteresis that depends on \(T\) and \(B\). This behaviour is shown in more detail in Figure 5a where the \(R_{\rm XY}(V_{\rm G})\) curves are plotted at \(B=16\) T for different \(T\). To quantify the hysteresis, we consider the gate voltage at which \(R_{\rm XY}\left(V_{\rm G}\right)\) goes to zero (_i.e._ the charge neutrality point) on the sweep up (\(V_{\rm Gu}\)) and sweep down (\(V_{\rm Gd}\)) branches of \(R_{\rm XY}\left(V_{\rm G}\right)\). The difference between the two values, \(|\Delta V_{\rm G}|=|V_{\rm Gu}-V_{\rm Gd}|\), is shown in Figure 5b for different \(T\) and \(B\). For \(T<200\) K, \(|\Delta V_{G}|\) is weakly dependent on \(T\) and tends to increase with \(B\). For \(T>200\) K, the hysteresis is more pronounced and can be described by the relation \(|\Delta V_{G}|\propto\exp(-E_{a}/kT)\), where \(E_{a}\) is an activation energy given by \(E_{a}\approx 0.16\) eV for \(B=16\) T (Arrhenius plot in the inset of Figure 5b). Figure 5a also reveals that increasing \(T\) above \(T=100\) K leads to a shift of the neutrality point to lower values of \(V_{\rm G}\), corresponding to an increasing density of electrons in the graphene layer. This behaviour is not observed in pristine graphene and is assigned to the thermal excitation of electrons from CIPS into the graphene layer. For lower \(T\) (\(T=4.2\) and \(100\) K), the shift of the neutrality point is towards higher values of \(V_{\rm G}\) with increasing \(T\), indicative of a thermal excitation of carriers near the Dirac point.
Figure 5: **Hysteresis in the Hall resistance of CIPS/graphene** (a) \(R_{\rm XY}(V_{\rm G})\) at \(B=16\) T and different temperatures \(T\). The sweep up/down branches are shown in blue and red, respectively. For clarity, curves are displaced along the vertical axis (\(I=1\)\(\mu\)A). \(\Delta V_{\rm G}\) is the amplitude of the hysteresis in \(R_{\rm XY}(V_{\rm G})\), as estimated from the voltage at which \(R_{\rm XY}(V_{\rm G})=0\). (b) Amplitude of the hysteresis \(|\Delta V_{\rm G}|\) versus \(T\) at different \(B\). For \(T=4.2\) K and \(B\leq 5\) T, \(|\Delta V_{\rm G}|\approx 0\). Inset: Arrhenius plot of \(|\Delta V_{\rm G}|\) versus \(1/T\) at \(B=16\) T. The dashed line is an exponential fit to the data.
Due to the hysteresis and slow charge transfer in CG, the measurement of \(R_{\rm XX}\) and \(R_{\rm XY}\) versus \(B\) at a given \(V_{\rm G}\) requires special consideration. For each measurement of the \(R_{\rm XY}(B)\) and \(R_{\rm XX}(B)\) curves, the value of \(V_{\rm G}\) was increased by small increments (\(\Delta V_{\rm G}/\Delta t=0.1\) V/s) starting from \(V_{\rm G}=0\) V until reaching the required value of \(V_{\rm G}\). The temporal dependence of \(R_{\rm XY}\) and \(R_{\rm XX}\) at \(B=0\) T was then followed over intervals of several minutes, as required for \(R_{\rm XY}\) and \(R_{\rm XX}\) to reach stable values. The magnetic field was then swept from \(B=0\) T to 16 T (sweep rate of 5 mT/s). The values of \(V_{\rm G}\) were selected according to the \(R_{\rm XY}(V_{\rm G})\) curves in Figures 3 and 4, showing plateaus on each side of the neutrality point (between the \(n=0\) and \(n=\pm 1\) LLs) due to holes (\(V_{\rm G}\approx+20\) V) or electrons (\(V_{\rm G}\approx+40\) V).
Figure 6a shows the \(R_{\rm XY}\left(B\right)\) curves of CG for \(V_{\rm G}=+20\) V at \(T=4\) K and 300 K. It can be seen that the \(R_{\rm XY}\left(B\right)\) curves exhibit a weak \(T\)-dependence; in particular, the approach to the \(v=2\) QH plateau shifts to lower \(B\)-fields at \(T=300\)K. The value of \(R_{\rm XY}\) at the plateau and its stability over time depend on the gate voltage. We have observed similar behaviours in other devices (Figure S8-S9-S10), although the threshold in \(B\) for the \(v=2\) QH plateau may differ depending on the quality of the graphene layer, which can contain defects and impurities introduced during the growth and/or the transfer of CVD-grown graphene from the residual Cu onto the SiO\({}_{2}\)/Si substrate. The behaviour of CIPS/graphene contrasts with the strong temperature dependence of the \(v=2\) QH plateau in pristine graphene (Figures 6b and S11). Although the analysis of the QH plateau in the proximity of the charge neutrality point is complicated by the contribution of both electrons and holes to the conductivity, we can select gate voltages at which a QH plateau is observed for both holes and electrons (Figures 4, 6c-d).
The plateau in \(R_{\rm XY}\left(B\right)\) is accompanied by a corresponding decrease in \(R_{\rm XX}\left(B\right)\) (Figure 6e and S6-S7). However, we note that \(R_{\rm XX}\) does not go to zero at values of \(B\) corresponding to the \(v=2\) QH plateau; also, we observe a small deviation of \(R_{\rm XY}\) from its nominal quantized value (\(h/2e^{2}\)). This can also be seen in pristine graphene at low \(T\) (Figure S11). As shown in Figure
6f, this deviation (\(\Delta R_{\rm XY}\)) depends on \(R_{\rm XX}\) and tends to zero for decreasing \(R_{\rm XX}\). Here, values of \(R_{\rm XX}\) and \(R_{\rm XY}\) are obtained from measurements of the same device at different \(T\), \(B\) and/or \(V_{\rm G}\) after the onset of the \(v=2\) QH plateau in \(R_{\rm XY}\left(B\right)\).
Our data indicate a coupling between \(R_{\rm XX}\) and \(R_{\rm XY}\) that could be accounted for by geometrical effects and/or disorder. A non-uniform channel can exhibit regions that do not have minimal resistance at the same magnetic field. This can result in an effective misalignment of the Hall probes so that the measured Hall resistance \(R_{\rm XY}\) is influenced by the longitudinal resistance
Figure 6: **Quantum Hall plateau in CIPS/graphene (CG)** (a-b) Hall resistance \(R_{\rm XY}\) versus magnetic field \(B\) at \(T=4\) K and 300 K in (a) CG (\(V_{\rm G}=+20\) V, \(I=1\)\(\mu\)A) and (b) pristine graphene (\(V_{\rm G}=+3\) V, \(I=1\)\(\mu\)A). (c) Top: Schematic of bound states in CIPS/graphene. Bottom: Landau levels (LLs) in graphene with Fermi level aligned between the \(n=0\) and \(n=\pm 1\) LLs corresponding to the \(v=2\) QH plateau. (d) \(R_{\rm XY}\) versus \(B\) for CG at different \(V_{\rm G}\) and \(T=300\) K (\(I=1\)\(\mu\)A). Negative and positive values of \(R_{\rm XY}\) refer to hole and electron resistivity, respectively. (e) \(R_{\rm XX}\) versus \(B\) for CG at different \(V_{\rm G}\) and \(T=300\) K (\(I=1\)\(\mu\)A). (f) Deviation of \(R_{\rm XY}\) from the quantized value (\(R_{\rm XY}=h/2e^{2}\)) versus \(R_{\rm XX}\), as derived from measurements at different \(T\), \(B\) and \(V_{\rm G}\) (top: \(T=4.2\) K, \(B=16\) T, \(V_{\rm G}=42\)-28 V and 16-22 V; bottom: \(T=300\) K, \(B=14\)-16 T, \(V_{\rm G}=20\) V). Dashed lines are guides to the eye.
\(R_{\rm XX}\)[36]. Also, disorder can play an important role in the \(\nu=2\) QH plateau due to the coexistence and contribution to the transport of both electrons and holes [37]. Using the data in Fig. 6f, we estimate the coupling parameter \(s=\Delta R_{\rm XY}/R_{\rm XX}\). A linear fit of \(\Delta R_{\rm XY}\) versus \(R_{\rm XX}\) indicates \(s=0.04\) (\(0.05\)) at \(T=300\) K (\(T=4.2\) K). Our values are similar to the value (\(s=0.038\)) for high-quality graphene/SiC devices reported in the recent literature [38].
**Discussion**
The room temperature QHE was first reported in graphene and explained in terms of the magnetic field quantization of Dirac fermions in graphene [5]. The LL quantization energy of fermions in a magnetic field is \(E_{n}=v_{F}\sqrt{2e\hbar B\,|n|}\). For \(n=\pm 1\) and \(B=45\) T, \(E_{n}\sim 250\) meV, which greatly exceeds the thermal energy (\(\sim 26\) meV) of charge carriers at \(T=300\) K. However, the physics of the QHE in graphene is more complex and requires an understanding of the unique nature of the \(n=0\) LL [39]. The measured thermal activation energy for the quenching of the \(\nu=2\) QH plateau in graphene approaches the cyclotron energy gap only at high magnetic fields (\(B\approx 30\) T). At high \(B\), the number of states with zero energy (\(n=0\) LL) is determined by the total magnetic flux and does not depend on disorder. Thus, the \(n=0\) LL is well separated from its neighbouring (\(n=\pm 1\)) LLs and the activation energy corresponds approximately to \(E_{1}=v_{F}\sqrt{2e\hbar B}\); however, for lower \(B\), LL mixing due to disorder broadens the LLs by means of inter-LL scattering, leading to an activation energy that is smaller than \(E_{1}\). Thus, the observation of the \(\nu=2\) QH plateau at room temperature in graphene requires a large \(B\) accessible only in a few high-field facilities. In our CIPS/graphene sample the \(\nu=2\) QH plateau is observed at relatively small \(B\) at \(T=300\) K, yet it is not seen in pristine graphene in the same range of \(B\); thus, our results merit further consideration.
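These energy scales can be checked with a short back-of-the-envelope calculation, assuming the standard graphene Fermi velocity \(v_{F}\approx 10^{6}\) m/s (an assumed value, not quoted explicitly here):

```python
# Quick check of the cyclotron-gap estimate quoted above, E_1 = v_F * sqrt(2 e hbar B).
# v_F ~ 1.0e6 m/s is the standard graphene Fermi velocity (assumed, not stated in the text).
import math

e, hbar = 1.602176634e-19, 1.054571817e-34
v_F = 1.0e6  # m/s

def E1_meV(B):
    """Cyclotron gap E_1 = v_F * sqrt(2 e hbar B), converted from J to meV."""
    return v_F * math.sqrt(2 * e * hbar * B) / e * 1e3

print(f"E_1(45 T) ~ {E1_meV(45):.0f} meV, E_1(16 T) ~ {E1_meV(16):.0f} meV, kT(300 K) ~ 26 meV")
# ~240-250 meV at 45 T and ~145 meV at 16 T, both well above the room-temperature thermal energy.
```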
First, we note that the CIPS layer can act as a remote source of carriers for graphene. Our measurements demonstrate that the charge transfer at the CIPS/graphene interface is tuneable by gating and is temperature dependent. Regions of CIPS with different densities of localized
states tend to charge and discharge with a similar slow (\(\sim 100\) s) time constant at room temperature, thus accounting for the gate-induced hysteresis in the transport characteristics (Figure 1c). The hysteresis becomes significant only at \(T>200\) K (Figure 2), symptomatic of thermal activation and slow transfer of charges from/to the CIPS layer onto graphene.
At sufficiently high temperatures (\(T>200\) K), electrons (and holes) in the localized states of CIPS are in equilibrium with the current-carrying, delocalized states of graphene. Under these conditions, bound states are formed in CIPS/graphene. In contrast, at low \(T\) such equilibrium cannot be established and the hysteretic behaviour is not observed. Also, a comparison of the transfer characteristics and their hysteresis for CG at \(B=0\) and \(16\) T indicates that the charge transfer is influenced by magnetic field. The magnetic field acts to enhance the hysteretic behaviour and distorts the \(V_{\rm G}\)-dependence of \(R_{\rm XX}\) and \(R_{\rm XY}\) around the charge neutrality point. As shown in Figure 2e and 4, for \(B\geq 10\) T the colour regions in \(R_{\rm XX}\) and \(R_{\rm XY}\) corresponding to the zeroth LL tend to shift to larger \(V_{\rm G}\) with increasing \(B\), consistent with a reduced transfer of electrons from the CIPS layer onto graphene due to an increased localization of charges in the quantizing magnetic field. This can also be seen in Figure 5b, where the hysteresis (as measured by \(|\Delta V_{\rm G}|\)) increases with \(B\).
Reference 22 offers an insight into the role of localized charges near the surface of graphene: For a range of chemical potentials inside the gap between the zeroth and first LLs, charged impurities can form stable "molecules" bound by free carriers of opposite sign within graphene [22]. The optimal distance between charges in the bound state is of the order of the magnetic length \(l_{B}=\sqrt{\hbar/eB}\) and their binding energy scales as \(E_{B}=(\hbar v_{F}/l_{B})\). For \(B=16\) T, this gives \(l_{B}=6.4\) nm and \(E_{B}=0.10\) eV. This binding energy is comparable to our estimate (\(0.16\) eV) derived from the \(T\)-dependent hysteresis in \(R_{\rm XY}\) (Figure 5b). Thus, for sufficiently high \(T\), electrons (and holes) in the localized states have a binding energy that exceeds the cyclotron energy gap.
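The magnetic-length and binding-energy estimates above can be reproduced with the same assumed \(v_{F}\); the following minimal sketch is for illustration only:

```python
# Sketch of the bound-state estimates from Ref. 22 used above: magnetic length
# l_B = sqrt(hbar/eB) and binding energy E_B = hbar*v_F/l_B, assuming v_F ~ 1.0e6 m/s.
import math

e, hbar, v_F = 1.602176634e-19, 1.054571817e-34, 1.0e6

B = 16.0                          # magnetic field (T)
l_B = math.sqrt(hbar / (e * B))   # magnetic length (m)
E_B = hbar * v_F / l_B            # binding energy (J)

print(f"l_B ~ {l_B*1e9:.1f} nm, E_B ~ {E_B/e:.2f} eV")   # ~6.4 nm and ~0.10 eV at 16 T
```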
We note that a strongly disordered system cannot show the QHE because no LL quantization can occur. However, it is well established that the standard picture of the QHE requires the existence of disorder and localized states. This enables the Fermi level to be pinned at energies between the extended states of adjacent LLs. Disorder is a key feature of the QHE and its thermal stability: it acts to pin the Fermi level between the LLs and maintains the Hall voltage on the plateaus. The charge transfer between the CIPS and graphene layers is reversible, leading to the \(v=2\) QH plateau for both electrons and holes, as shown in Figure 4 and 6.
We now consider our findings in the context of ongoing research on other hybrid systems based on graphene. For example, the use of a conducting layer, such as the relatively small band-gap semiconductor InSe (\(\sim 1.3\) eV at \(T=300\) K), to form an InSe/graphene FET facilitates the observation of a giant QH plateau, but its observation at room temperature is prevented by parallel conduction in the InSe layer [20, 21]. The use of a high-resistance dielectric poses other challenges. A giant QH plateau has been reported in graphene grown epitaxially by thermal annealing of a SiC dielectric substrate. In this case, the charge transfer at the SiC/graphene interface involves defects in SiC with a high density of states (\(10^{14}\) cm\({}^{-2}\) eV\({}^{-1}\)) in close proximity to graphene [15, 16, 17, 18, 19]. These states arise from atomic-scale defects within the top few SiC layers, which are created during the formation of the graphene layer by Si sublimation. For graphene on SiC, the giant QH effect was reported at temperatures of up to \(T\) \(\sim\) 100-200 K, suggesting that the bound states in SiC/graphene have a relatively small binding energy even at high \(B\) (\(B>20\) T). Alternatively, hexagonal boron nitride (hBN) represents an ideal dielectric for graphene-based FETs [40]. Charge and surface fluctuations in hBN tend to be weaker than in other substrates, such as SiO\({}_{2}\). Thus, graphene on hBN has a high mobility and is well suited for observations of integer and fractional QHE [41, 42]. In particular, the formation of moiré superlattices in rotationally misaligned graphene/hBN layers can promote interfacial charge transfer and new quantum transport regimes [43, 44, 45, 46]. More recently, a hybrid system based on CrOCl-graphene also revealed an exotic QH effect phase due to the formation of a
long-wavelength charge ordering [47]. In all these different hybrid systems, the band structure of graphene is modified around the Dirac cone as a result of an interfacial charge transfer involving a semiconductor or an insulator. However, for all these systems the observation of quantum effects at high temperatures, approaching room temperature, has proven to be challenging. Our choice of CIPS provides an effective layer for charge transfer as CIPS is a dielectric and its defect states are not only sufficiently dense (\(\sim\)10\({}^{12}\) cm\({}^{-2}\) eV\({}^{-1}\)), but they also form bound states with graphene that are sufficiently deep to be resilient to ionization at room temperature. The range of high temperatures (\(T\) \(>\) 200 K) for charge transfer and hysteresis in the transport curves corresponds to that required for activating the thermal motion of the Cu ions [48, 49] out of the CIPS layer planes. This can lead to localized ionic charges whose slow motion could be responsible for the slow dynamics of charge transfer at the graphene/CIPS interface, leading to the hysteretic transport observed in this system. Since CIPS is a dielectric layer and electrons remain bound to its localized states, the QH voltage in graphene is not short-circuited by a significant parallel conduction in the CIPS layer.
In conclusion, the controlled transfer of charges between graphene and localized states in its proximity provides a route for the observation of quantum effects at room temperature and in readily accessible magnetic fields. We have shown that the electric field-induced transfer of charge between the localized states in the CIPS and graphene layers acts to increase or decrease the carrier density in graphene, causing a change in its resistance that is gate-tuneable at high temperatures (\(T\) \(>\) 200 K). The charge transfer causes hysteretic behaviour in the electrical characteristics due to a slow dynamic exchange of electrons between graphene and localized states in its proximity. Prospects for further research include a more accurate resistance quantization, which will require progress in both material growth and fabrication processes. This requires high-mobility homogeneous graphene, a homogeneous charge transfer at the graphene/CIPS interface, and the fabrication of low-resistance contacts. A more uniform
CIPS/graphene heterostructure could be achieved by the development of scalable growth techniques (for example using epitaxial graphene grown with intrinsic structural alignment on SiC) together with the fabrication of high-quality electrical contacts, such as electrodes with the edge-contact geometry [50]. Thus, there are prospects for further studies and for engineering interfacial charge transfer in hybrid systems based on graphene for the observation of quantum effects over a wide parameter space beyond the current state-of-the-art for future applications, such as graphene-based resistance standards for the new International System of Units [51].
## Methods
### Materials and device fabrication.
The CuInP\({}_{2}\)S\({}_{6}\) crystal was purchased from HQ Graphene. Graphene Hall bars were fabricated at the NEST laboratories at the Istituto Italiano di Tecnologia, Pisa, Italy. The fabrication of high-mobility CVD-grown graphene Hall bars before the deposition of CIPS is crucial for the observation of the quantum Hall effect. Single-crystal graphene used in this work was grown on Cu foil by CVD in a cold-wall reactor (Aixtron BM Pro) using chromium nucleation seeds [52]. Graphene crystals were electrochemically delaminated from the growth substrate in 1 M NaOH and deposited on SiO\({}_{2}\)/n-doped Si wafers using semi-dry transfer [53]. The fabrication of the Hall bars was carried out using e-beam lithography (20 kV, Raith Multibeam on Zeiss Ultra Plus scanning electron microscope). Graphene Hall bars were prepared using reactive ion etching (gas flow 80 sccm O\({}_{2}\) and 5 sccm Ar, RF power 35 W) and electrical contacts were deposited by thermal evaporation of 7 nm of Ni and 60 nm of Au. A Poly(methyl methacrylate) (PMMA) resist (950 K, 4.5% in anisole, Allresist) was used for lithography, followed by 2-step cleaning in acetone and removal of AR600-71 (Allresist) to ensure a polymer-free surface of graphene [54]. The wafer containing the graphene Hall bar devices was covered with a protective coating of PMMA for dicing and storage. The processed wafers were diced into \(\sim 4\times 4\) mm\({}^{2}\) chips and then cleaned in hot acetone (\(T\sim 65\,^{\circ}\)C) for 1 h, rinsed with isopropyl alcohol (IPA) and dried with pressurised
nitrogen gas to remove the protective PMMA coating. These devices were then annealed in a tube furnace at \(T=300\)\({}^{\circ}\)C for 3 h in a 5% H\({}_{2}\) and 95% Ar flowing gas atmosphere to remove surface impurities and residues on the graphene surface. The graphene was then used for stamping the CIPS layer to form the CIPS/graphene heterostructure. The interface between graphene and CIPS after stamping was not further cleaned. The heterostructure was fabricated by exfoliating a CIPS flake onto polydimethylsiloxane (PDMS) from a low-residue tape and identified using optical microscopy. By using a micromanipulator stage, an exfoliated flake of CIPS on PDMS was aligned to one section of the graphene Hall bar and brought into contact with it. The PDMS was then slowly retracted in order to deposit CIPS. The graphene Hall bar capped with CIPS was then bonded into non-magnetic chip carriers for electrical measurements.
**Optical, electrical and microscopy studies**. The surface topography of the flakes was acquired by atomic force microscopy (AFM, Park NX20) in non-contact mode under ambient conditions. The KPFM study was conducted using an additional lock-in amplifier connected to the same AFM system. Transport measurements in the dark and under light illumination were conducted in vacuum (2\(\times\)10\({}^{-6}\) mbar) using Keithley-2400 source-meters and Keithley-2010 multi-meters. A temperature controller from Lakeshore Cryotronics was used to control and probe the temperature. A solid-state laser (\(\lambda=405\) nm) and a He-Ne laser (\(\lambda=632.8\) nm) were used for the optical studies. The position of the laser spot was adjusted on the device and measurements were taken at different powers. A cryogen free magnet (Cryogenic Limited) was used to perform the magneto-transport studies over a range of temperatures.
## Acknowledgements
This work was supported by the European Union's Horizon 2020 research and innovation programme Graphene Flagship Core 3; the Engineering and Physical Sciences Research Council (Grant No. EP/M012700/1) and the University of Nottingham Propulsion Futures Beacon. Measurements in high magnetic field were supported by the European Magnetic Field Laboratory (EMFL) and by the EPSRC via the UK membership of the EMFL (Grant No. EP/X020304/1).
## Author contributions
A.P. and A.D. conceived the project and wrote the paper; C.C and V.M. fabricated the graphene Hall bars; A.D. fabricated the CIPS/graphene devices and performed the transport studies assisted by N.C., O.M. and A.P.; A.D. conducted the analysis of the data, assisted by O.M. and A.P.; J.K. and V.K. conducted the microscopy studies; W.Y. contributed to the transport studies of CIPS in zero magnetic field; C.C. and V.M. synthesized and transferred single-crystal CVD graphene and fabricated the Hall bars; J.F.L. and S.R.W. contributed to the transport studies in high magnetic fields; all authors discussed the results.
## Additional information
Competing financial interests: The authors declare no competing financial interests.
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2310.03720
|
SteP: Stacked LLM Policies for Web Actions
|
Performing tasks on the web presents fundamental challenges to large language
models (LLMs), including combinatorially large open-world tasks and variations
across web interfaces. Simply specifying a large prompt to handle all possible
behaviors and states is extremely complex, and results in behavior leaks
between unrelated behaviors. Decomposition to distinct policies can address
this challenge, but requires carefully handing off control between policies. We
propose Stacked LLM Policies for Web Actions (SteP), an approach to dynamically
compose policies to solve a diverse set of web tasks. SteP defines a Markov
Decision Process where the state is a stack of policies representing the
control state, i.e., the chain of policy calls. Unlike traditional methods that
are restricted to static hierarchies, SteP enables dynamic control that adapts
to the complexity of the task. We evaluate SteP against multiple baselines and
web environments including WebArena, MiniWoB++, and a CRM. On WebArena, SteP
improves (14.9\% to 33.5\%) over SOTA that use GPT-4 policies, while on
MiniWob++, SteP is competitive with prior works while using significantly less
data. Our code and data are available at
https://asappresearch.github.io/webagents-step.
|
Paloma Sodhi, S. R. K. Branavan, Yoav Artzi, Ryan McDonald
|
2023-10-05T17:40:09Z
|
http://arxiv.org/abs/2310.03720v4
|
# HeaP: Hierarchical Policies for Web Actions using LLMs
###### Abstract
Large language models (LLMs) have demonstrated remarkable capabilities in performing a range of instruction-following tasks in few- and zero-shot settings. However, teaching LLMs to perform tasks on the web presents fundamental challenges - combinatorially large open-world tasks and variations across web interfaces. We tackle these challenges by leveraging LLMs to decompose web tasks into a collection of sub-tasks, each of which can be solved by a low-level, closed-loop policy. These policies constitute a _shared grammar_ across tasks, i.e., new web tasks can be expressed as a composition of these policies. We propose a novel framework, Hierarchical Policies for Web Actions using LLMs (HeaP), that learns a set of hierarchical LLM prompts from demonstrations for planning high-level tasks and executing them via a sequence of low-level policies. We evaluate HeaP against a range of baselines on a suite of web tasks, including MiniWoB++, WebArena, a mock airline CRM, and live website interactions, and show that it is able to outperform prior works using orders of magnitude less data.
## 1 Introduction
Recent advances in instruction following large language models (LLMs) (Ouyang et al., 2022; Touvron et al., 2023) have shown impressive zero and few-shot capabilities in solving tasks by parsing natural language instructions and breaking them down into actionable steps (Yao et al., 2022b; Huang et al., 2022b). In this paper, we focus on the problem of teaching LLMs to perform tasks on the web, for instance booking flights or making appointments. Assisting humans in performing web tasks has significant implications on a variety of domains given the pervasive nature of web and cloud-based applications in everyday life.
Prior works collect large amounts of demonstrations of web tasks to train language models (Furuta et al., 2023; Gur et al., 2022; Humphreys et al., 2022; Liu et al., 2018; Shi et al., 2017). However, teaching LLMs to perform tasks on the web presents fundamental challenges. (1) _Combinatorially large open-world tasks_: There are countless ways to interact with the web, leading to a combinatorially large space of tasks such as booking flights, making appointments, payments, etc. (2) _Variations across web interfaces_: Web interfaces differ from one website to another, e.g. booking a flight on JetBlue is different from booking it on United. Hence, it is intractable to cover all such variations in tasks and interfaces in the training data, and have a single supervised model that can solve all tasks.
Our key insight is to leverage LLMs to _decompose_ complex web tasks into a set of modular sub-tasks, each of which can be solved by a low-level, closed-loop web policy. These policies constitute a _shared grammar_ across tasks, i.e., any new web task can be expressed as a composition of these policies. For example, the task of booking a flight can be expressed as a sequence of policies for filling source and destination airports, choosing flight dates, and filling in passenger details. Each low-level policy is specialized for a particular sub-task, e.g. a fill text policy can work on text boxes across web user interfaces (UIs) that either require clicking and typing text, or require typing partial text and auto-completing from a list of options.
While manually programming these policies can be tedious, it is much easier to learn them from humans performing varied tasks on the web. We propose a novel framework, **H**ierarchical **P**olicies
for Web Actions using LLMs (HeaP), that learns a set of hierarchical LLM prompts for planning high-level tasks and executing low-level policies. We first collect raw demonstrations from a human user, auto-label them with low-level policies, and auto-generate both task and policy prompts. At inference time, given a task objective, we hierarchically invoke an LLM to first generate a task plan and then generate actions for each policy in the plan. HeaP enables LLMs to respond effectively to dynamic web pages as well as generalize across tasks and interfaces from few-shot demonstrations.
Experimentally, we evaluate HeaP on a range of increasingly complex benchmarks: MiniWoB++, WebArena, a mock airline CRM simulator and live website interactions.1 We show that HeaP has significantly better task success rates and requires orders of magnitude less training (or demonstration) data relative to prior work (see Table 1 for summary).
Footnote 1: We will open-source the code, simulator, and data.
## 2 Related Work
Language models for web tasks.Early work mapping natural language instructions into actions (Branavan et al., 2009; Artzi & Zettlemoyer, 2013; Nogueira & Cho, 2016) has rapidly evolved resulting in new applications and datasets (Zhou et al., 2023; Deng et al., 2023). In language models performing web tasks, there are broadly 3 classes of methods: _(1) Reinforcement learning (RL) for web navigation_ that train RL agents to navigate web interfaces (Humphreys et al., 2022; Gur et al., 2021; Liu et al., 2018; Shi et al., 2017). However, these are often sample inefficient and exploration on live websites can pose practical safety concerns. _(2) In-context learning with large language models_ uses a combination of instructions and in-context examples with large language models (OpenAI, 2023a; Significant Gravitas, 2023; Wang et al., 2023b; Friedman, 2022; LangChain, 2023), with a significant portion being open-source initiatives. While impressive, they often rely on manually crafted prompts and heuristic strategies to tackle context lengths and task generalization, making it challenging to build on existing findings. _(3) Fine-tuning language models for web tasks_ focuses on fine-tuning language models on specific web tasks and has emerged as a predominant approach in prior works (Gur et al., 2022; Furuta et al., 2023; Yao et al., 2022a; Gur et al., 2023; Mazumder & Riva, 2020; Gur et al., 2018). However, training such models has limitations such as an inability to generalize from few examples of tasks and interfaces, necessitating frequent retraining. As our method, HeaP, is compositional in how it uses the LLM, it is inherently not task-specific and does not have these shortcomings.
Language models for decision making. Large language models have shown impressive out-of-the-box decision making capabilities (Ouyang et al., 2022; Brown et al., 2020; Radford et al., 2019). This arises from an ability to break down complex tasks into smaller sub-tasks (Huang et al., 2022a; Zhou et al., 2021), reason about intermediate steps (Yao et al., 2022b; Wei et al., 2022), and recover from errors (Miao et al., 2023). As a result, LLMs have in recent times found applications in diverse domains like web retrieval (Nakano et al., 2021; Liu et al., 2023; Zaheer et al., 2022; Schick et al., 2023; Xu et al., 2021), robotics (Ahn et al., 2022; Huang et al., 2022b; Wang et al., 2023a), and text-based games (Yao et al., 2020; Shridhar et al., 2020). Moreover, advances in multi-modal LLMs enable decision making from both language and image feedback (Shaw et al., 2023; Lee et al., 2023; Burns et al., 2022). However, such decision making capabilities remain to be explored for general-purpose web tasks involving clicks, types, form filling, etc. Our approach, HeaP, leverages the task decomposition and reasoning capabilities of LLMs to perform a wide range of web tasks. With only a handful of examples, HeaP can generalize, showing improved performance over prior works (Gur et al., 2022; Furuta et al., 2023; Humphreys et al., 2022; Liu et al., 2018) that train models with orders of magnitude more data.
## 3 Problem Formulation
The overall goal is to learn a policy that performs a web task. The web task is represented as a context \(\phi\), which can be (a) an explicit instruction such as _"Book me a flight from NYC to BOS"_, (b) a structured dictionary defining the parameters of the task, or (c) a supporting set of texts such as a conversation where the instruction is implicit. Given the current context \(\phi\), the goal is to perform a web task that achieves the task objective. We formulate this as a Contextual Markov Decision Process (CMDP), \(\langle\Phi,\mathcal{S},\mathcal{A},\mathcal{T},r\rangle\), defined below:
* **Context, \(\phi\in\Phi\)** is the web task objective expressed explicitly as an instruction or structured parameters or implicitly as a conversation
* **State, \(s\in\mathcal{S}\)** is the current state of the webpage, i.e., the current DOM \(d\) serialized as text.2 Footnote 2: For some tasks, the current webpage may not be sufficient to define state. In such cases, we can extend state to a history of previous webpages and actions.
* **Action, \(a\in\mathcal{A}(s)\)** are the set of web actions that can be performed on the current webpage, i.e. click(id), type(id,value), where id specifies an element in the webpage, and value is a string. The action space can be quite large since a typical webpage can have hundreds of elements, and value can be any arbitrary text.
* **Transition function, \(\mathcal{T}(s^{\prime}|s,a)\)** represents the change in the webpage on performing an action.
* **Reward, \(r(s,a)\)** is awarded for reaching a set of subgoals, e.g. cancelling a flight has subgoals like finding the booking and then canceling it.
The goal is to learn a policy \(\pi:\mathcal{S}\times\Phi\rightarrow\mathcal{A}\) that maximizes performance, i.e., the cumulative reward \(J(\pi)=\mathbb{E}_{\pi}\left[\sum_{t=1}^{T}r(s_{t},a_{t})\right]\). Instead of explicitly defining the reward function and solving the MDP, we aim to learn this policy \(\pi\) from demonstrations \(\mathcal{D}=\{(\phi^{i},s_{1}^{i},a_{1}^{i},s_{2}^{i},a_{2}^{i},\dots)\}_{i=1}^{N}\).
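For concreteness, a minimal sketch of how the context, states, actions, and demonstration records defined above could be represented is given below; the class and field names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative data structures for the CMDP formulation above; names are assumptions,
# not the paper's actual code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:
    kind: str          # "click" or "type"
    element_id: str    # id of the web element on the current page
    value: str = ""    # text to type (empty for clicks)

@dataclass
class Step:
    state: str         # webpage DOM serialized as text
    action: Action

@dataclass
class Demonstration:
    context: str       # instruction, structured parameters, or conversation (phi)
    steps: List[Step] = field(default_factory=list)

# A dataset D is then simply a list of demonstrations:
# D = [Demonstration(context="Book me a flight from NYC to BOS", steps=[...]), ...]
```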
We leverage LLMs that are highly effective at generalizing from few-shot demonstrations without the need for fine-tuning. To do so, we translate demonstrations \(\mathcal{D}\) into in-context examples for an LLM prompt \(\mathcal{P}\). A simple way to do this is to flatten all demonstrations \(\mathcal{D}\), i.e., concatenate the conversation \(\phi\), with state action trajectories, and merge them together. However, a typical demonstration may consist of a lengthy chain of actions, with each state in the chain being the entire webpage document object model (DOM). In terms of total tokens, \(N\) demonstrations each of \(T\) timesteps, each step comprising of \(X\) tokens of both conversation and webpage would result in \(N\times T\times X\) tokens. This can quickly exhaust context space even for simple websites. We tackle this problem in our approach by hierarchically composing prompts.
## 4 Approach
We present a framework, **H**ierarchical **P**olicies for Web Actions using LLMs (HeaP), that performs a range of web tasks from natural language conversations by hierarchically invoking a Large Language Model (LLM). The framework consists of a hierarchy of two levels: a _high-level task planner_ that in turns invokes a sequence of _low-level web policies_.
Consider the example in Fig. 1. Given a conversation with a customer looking to book flights, and a booking website, the task planner generates a plan, i.e, a sequence of steps to execute. Examples of
Figure 1: HeaP Overview: **(a) Inference:** High-level task planner creates a sequence of steps like filling text or choosing dates from an input context and starting webpage. Each step is a call to a low-level web policy that directly interacts with the webpage. **(b) Prompt Generation:** Dataset of raw state-action demonstrations is transformed into task and policy base prompts by first auto-labeling with policies and then generating prompts.
steps are either filling a text box, choosing a date, or choosing an option from a drop-down. Each of these steps can be delegated to a corresponding web policy that interacts with the web page and executes web actions like clicking and typing. For instance, the Fill_TEXT(field, text) web policy searches for the web element corresponding to field, clicking it, typing a text and optionally choosing from a list of autocomplete options. On the other hand, the CHOOSE_DATE(field, date) web policy clicks on the web element, navigates a grid of dates and clicks on the correct date.
### Inference time: Compose policies to solve the task
Algorithm 1 describes the inference time procedure. We take as input a context \(\phi\), which can be a conversation or an explicit objective, and the current webpage state \(s_{0}\). This is sent to a task planner that generates a plan. The plan is a sequence of calls to low-level web policies. Each element of the sequence is represented as a web policy type \(\pi\) and instruction to the policy \(\psi\), i.e., \(\xi=\{(\pi_{1},\psi_{1}),(\pi_{2},\psi_{2}),\ldots(\pi_{N},\psi_{N})\}\). For example, CHOOSE_DATE(field, date) corresponds to calls to policy \(\pi=\texttt{CHOOSE\_DATE}\) with instruction \(\psi=\texttt{(field, date)}\).
The web policies in plan \(\xi\) are invoked one by one. Each policy \(\pi_{i}\) predicts the next action \(a\) given its instruction \(\psi_{i}\), current state \(s\), and previous actions \(a_{prev}\). Once the policy issues the special action "DONE", control is handed back to the outer loop and the next policy is executed. When all policies in the plan \(\xi\) are done, the task planner is invoked again for the next plan. The process is terminated when the task planner produces an empty plan.
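A compact sketch of this inference loop is given below. The functions `llm_plan` and `llm_policy_step` stand in for LLM calls made with the task-planner and web-policy base prompts, and the `browser` interface is assumed; none of these are the paper's exact APIs.

```python
# Sketch of the HeaP inference loop described above (Algorithm 1). llm_plan() and
# llm_policy_step() represent LLM calls with the planner and policy base prompts;
# they and the browser interface are assumptions, not the paper's code.

def run_heap(context, browser, llm_plan, llm_policy_step, max_rounds=10):
    for _ in range(max_rounds):
        state = browser.get_state()                 # serialized DOM of the current page
        plan = llm_plan(context, state)             # [(policy_name, instruction), ...]
        if not plan:                                # empty plan -> task finished
            break
        for policy_name, instruction in plan:
            prev_actions = []
            while True:
                state = browser.get_state()
                action = llm_policy_step(policy_name, instruction, state, prev_actions)
                if action == "DONE":                # policy hands control back to the planner loop
                    break
                browser.execute(action)             # e.g. click(id) or type(id, value)
                prev_actions.append(action)
```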
Both the task planner and the web policies are calls to an LLM with different base prompts. The base prompt for the task planner shows examples of (input: [overall context \(\phi\), current state \(s_{0}\)], output: plan \(\xi\)). The base prompt for web policies shows examples of (input: [instruction \(\psi_{t}\), current state \(s_{t}\), previous actions \(a_{1:t-1}\)], output: next action \(a_{t}\)). We additionally include chain-of-thought (CoT) reasoning (Wei et al., 2022) in both task and policy prompts, which forces the LLM to generate a series of short sentences justifying the actions it predicts. We found this to uniformly improve performance (Appendix B).
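To make the structure of these base prompts concrete, the templates below show roughly what the planner and policy prompts could look like; the wording is illustrative only (the paper's actual prompts are listed in its Appendix G).

```python
# Rough shape of the two base prompts described above; wording is illustrative only.

PLANNER_PROMPT = """You are given a task context and the current webpage.
Break the task into a sequence of low-level policy calls.
{in_context_examples}
CONTEXT: {context}
CURRENT PAGE: {state}
REASONING: think step by step about what needs to be done.
PLAN:"""

POLICY_PROMPT = """You are executing the low-level policy {policy_name}.
Given the instruction, the current webpage, and your previous actions,
output the next web action (click/type) or DONE.
{in_context_examples}
INSTRUCTION: {instruction}
CURRENT PAGE: {state}
PREVIOUS ACTIONS: {previous_actions}
REASONING: think step by step.
NEXT ACTION:"""
```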
### Generate task and policy prompts from demonstrations
To generate prompts from demonstrations, we collect demonstrations from human users performing tasks on the browser. We design a browser plugin to record webpage DOM \(d\) and events such as clicks and types. Each demonstration is expressed as text by converting the DOM tree into a list of salient web elements like links, buttons, inputs. The parsed demonstration dataset is represented as \(\mathcal{D}=\{(\phi,s_{1},a_{1},\ldots,s_{T},a_{T})\}\).
We then _autolabel_ each step \(t\) with a low-level policy \(\pi_{t}\) and instruction \(\psi_{t}\) to create a labeled dataset \(\mathcal{D}_{label}=\{(\phi,s_{1},a_{1},(\pi_{1},\psi_{1}),\ldots,s_{T},a_{T},( \pi_{T},\psi_{T}))\}\). We leverage LLMs to autolabel demonstra
tions and describe details in Appendix D. Finally, we convert demonstrations to base prompts for both the high-level planner and the low-level policies and list representative prompts in Appendix G.
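A minimal sketch of this autolabeling pass is shown below, assuming a helper `label_step` that queries an LLM to map each recorded step to a (policy, instruction) pair; the helper and the record layout are assumptions, not the paper's exact implementation.

```python
# Sketch of the demonstration autolabeling step described above. label_step() stands in
# for an LLM call that maps a recorded (state, action) pair to a (policy, instruction)
# label; the Demonstration/Step records follow the illustrative sketch in Section 3.

def autolabel(demos, label_step):
    labeled = []
    for demo in demos:
        labeled_steps = []
        for step in demo.steps:
            policy, instruction = label_step(demo.context, step.state, step.action)
            labeled_steps.append((step.state, step.action, policy, instruction))
        labeled.append((demo.context, labeled_steps))
    return labeled

# Labeled steps are then grouped by policy to build the in-context examples for each
# low-level policy prompt, and by task to build the planner prompt.
```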
## 5 Experiments
### Experimental Setup
**Environments.** We evaluate across \(4\) distinct environments, each emphasizing different components:
* **MiniWoB++** (Liu et al., 2018): An extension of the OpenAI MiniWoB benchmark (Shi et al., 2017) covering a range of web interactions like form filling, search, choose dates, etc. We evaluate across \(45\) distinct tasks that don't rely on visual reasoning, and average over \(50\) seeds per task.
* **WebArena** (Zhou et al., 2023): A recent community benchmark offering complex web tasks across multiple domains. Compared to MiniWoB++, WebArena websites are highly realistic with tasks mirroring those that humans routinely perform on the internet. We evaluate on a set of \(125\) examples sampled from \(12\) distinct intents from \(2\) domains, Gitlab and OpenStreetMaps.
* **Airline CRM**: A new CRM simulator that we developed, modeled after customer service workflows of popular airline websites. Compared to MiniWoB++, Airline CRM offers longer-horizon tasks tied to a mock database, capturing typical CRM activities more effectively. We evaluate across \(5\) distinct tasks each with \(20\) randomized scenarios. More simulator details in Appendix E.
* **Live Websites**: Finally, we create an environment to interact with live websites, such as popular airlines like JetBlue, American, United. The raw browser content is considerably more complex, being \(\sim\)100x larger than the simulators. We evaluate generalization across UIs by performing the same search-flight task across \(3\) very different website UIs and average across \(10\) runs per UI.
**Baselines.** We compare against various baselines including prior state-of-the-art (Furuta et al., 2023; Gur et al., 2022; Humphreys et al., 2022; Liu et al., 2018) and the methods Flat Zero-shot, Flat Few-shot, HeaP Zero-shot, and HeaP Few-shot. Flat Zero-shot is a single prompt containing only the instructions and no in-context examples. Flat Few-shot includes both instructions and in-context examples. Both of these follow a chain-of-thought (CoT) prompting style similar to ReAct (Yao et al., 2022b). HeaP Few-shot and HeaP Zero-shot are our hierarchical prompting approach, HeaP, with and without in-context examples, respectively. Detailed prompts for the different methods can be found in Appendix G. All 4 methods use the instruction fine-tuned text-davinci-003 model. We found it to perform better at multi-step reasoning compared to gpt-3.5-turbo (Ouyang et al., 2022) while being more cost-effective than gpt-4 (OpenAI, 2023). More details on model hyper-parameters are in Appendix C.2.
Footnote 1: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models)
**Metrics.** We define 3 metrics: Success Rate (suc \(\uparrow\)), Task Progress (prog \(\uparrow\)), and Number of Actions (act \(\downarrow\)). suc is either 0 or 1 based on whether the task is completed successfully. prog is between 0 and 1, indicating progress towards completing the task. act is the number of actions taken.
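As a small illustration, these per-episode metrics could be computed as follows; the subgoal bookkeeping is an assumed convention, not the paper's evaluation code.

```python
# Minimal sketch of the three evaluation metrics defined above; the fields
# (subgoals_reached, total_subgoals, num_actions) are assumed bookkeeping.

def episode_metrics(subgoals_reached, total_subgoals, num_actions):
    success = 1.0 if subgoals_reached == total_subgoals else 0.0   # suc in {0, 1}
    progress = subgoals_reached / total_subgoals                   # prog in [0, 1]
    return {"suc": success, "prog": progress, "act": num_actions}  # act = number of actions

print(episode_metrics(subgoals_reached=2, total_subgoals=2, num_actions=9))
# {'suc': 1.0, 'prog': 1.0, 'act': 9}
```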
### Results and Analysis
**Overall Results.**
* On the MiniWoB++ benchmark, HeaP Few-shot matches or outperforms prior works with orders of magnitude fewer demonstrations (\(21\) demos for HeaP vs \(347\)k demos in (Furuta et al., 2023) or \(2.4\)M demos in (Humphreys et al., 2022)). See Table 1.
* On the WebArena benchmark (Gitlab, Maps), HeaP Few-shot achieves much better success rates than prior works (Zhou et al., 2023; Yao et al., 2022b) that use Flat chain-of-thought prompting; see Fig. 4.
* On the airline CRM and live websites, we see a similar trend where HeaP Few-shot achieves better success rates and task progress with a lower number of actions. See Figs. 5 and 7.
* HeaP Few-shot achieves higher success rates by breaking down complex tasks into reusable low-level policy calls each of which can be covered with their own in-context examples. See Fig. 2 for an ablation and Figs. 8,9 for qualitative examples.
* Finally, we provide ablations on different model scales and CoT reasoning in Appendix B.
**Comparison to prior works.** In Table 1, HeaP Few-shot outperforms or matches prior works with orders of magnitude fewer demonstrations on the MiniWoB++ benchmark. HeaP has an average success rate of \(0.96\) using only \(21\) in-context examples.
HeaP outperforms all the supervised learning baselines and matches the most performant baseline, CC-Net (Humphreys et al., 2022), which trains an RL agent using \(2.4\) million demonstrations. HeaP outperforms the most recent baseline, WebGUM (Furuta et al., 2023), which fine-tunes a pre-trained instruction model on \(347\)K demonstrations. Part of the performance gain comes from in-context learning and CoT reasoning with large-scale models similar to ReAct (Yao et al., 2022b). However, HeaP with its hierarchical prompting improves success rates significantly over ReAct (aka Flat), by breaking down complex tasks and covering them efficiently with more in-context examples.
**Why does hierarchical prompting help?**
The key benefit of hierarchical prompting is to break down complex tasks into a set of smaller policies, each of which can be covered by a handful of demonstrations. In contrast, covering the entire task would require combinatorially many more demonstrations. Fig. 2 shows an ablation of HeaP vs Flat with a varying number of in-context examples. Hierarchy helps at two levels: (1) for the same number of examples (\(\leq 7\)), improvements come from decomposing task instructions into granular policy instructions; (2) hierarchical decomposition results in smaller individual policies. This allows us to add more in-context examples (\(>7\)) in each policy prompt compared to what is possible with a single flat prompt (see Sec. 3), resulting in higher success rates.
Some tasks, such as book-flight, require choosing a date from a datepicker. This step is particularly challenging for baselines due to the variations in navigating the datepicker. However, the CHOOSE_DATE policy in HeaP Few-shot is able to cover these variations with more in-context examples, making it more robust.
On the Web Arena benchmark, we observe a similar trend in Fig. 4 showing a breakdown of success rates across \(12\) different intents on \(2\) domains. Compared to MiniWob++, this is a significantly more challenging environment where prior work with Flat CoT prompting (Zhou et al., 2023; Yao et al., 2022b) has very limited success rates (\(\sim 18\%\)). This limitation arises from the challenge of understanding how to interact appropriately with web pages. HeaP provides a mechanism for defining dedicated policies that can be taught with targeted in-context examples. For instance, a task like searching a Gitlab issue involves filtering and sorting by specific criteria. A dedicated policy like SEARCH_ISSUE() can handle this by filtering by keywords, sorting, and determining issue status.
**How well does HeaP generalize across tasks?** Table 2, along with Appendix B.3, shows metrics across \(45\) tasks from MiniWoB++ (Liu et al., 2018; Shi et al., 2017) averaged over \(50\) seeds per task. HeaP Few-shot obtains higher success rates with a lower number of actions compared to baselines, with a larger performance gap on complex tasks, i.e., tasks that either require a heterogeneous set of actions or multiple steps across changing webpages. HeaP Few-shot achieves
[Table: per-task success rates on MiniWoB++ comparing HeaP and Flat variants with prior baselines such as CC-Net and WebGUM; the table contents were garbled during extraction and are omitted.]
this with only \(21\) examples from \(6\) tasks and is able to solve the remaining \(39\) tasks without ever having seen them. Table 3 shows the breakup of in-context examples across different environments.
Similarly, Fig. 5 shows metrics on \(5\) longer-horizon CRM tasks (each averaged over \(20\) scenarios) corresponding to typical airline workflows like find & cancel bookings, update passenger details, and find & book flights. HeaP Few-shot obtains higher success and task progress with a lower number of actions compared to baselines. It achieves this with \(10\) in-context examples from \(2\) tasks (Table 3).
**How well does HeaP generalize across complex webpages?** Fig. 7 shows an evaluation of HeaP Few-shot and Flat Few-shot across \(10\) runs each on \(3\) different live websites with task specifications coming from short simulated conversations. What makes this task challenging is that the browser content from these websites has a lot of extraneous information that makes it challenging to parse the correct fields. Fig. 6 shows the extent of compression we perform to fit the browser content into the LLM's context space (see Appendix F for details). For WebArena, we use the accessibility tree browser content representation from the environment (Zhou et al., 2023). We evaluate each run by comparing model performance against a reference human demonstration. In Fig. 7, HeaP Few-shot is able to generalize to multiple websites even though it has demonstrations from only one (i.e., jetblue.com). In contrast, Flat Few-shot fails to generalize from its demonstration. Again, HeaP Few-shot, by hierarchically decomposing the problem, is able to use demonstrations more efficiently.
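As an illustration of this kind of compression, the sketch below flattens raw HTML into a capped list of salient, indexed elements, in the spirit of the DOM flattening described in Section 4.2; the tag selection, truncation, and element cap are assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch of compressing raw browser content into a compact text observation.
# Filtering and formatting choices here are assumptions, not the paper's pipeline.
from bs4 import BeautifulSoup  # third-party HTML parser (pip install beautifulsoup4)

SALIENT_TAGS = ["a", "button", "input", "select", "textarea", "label"]

def flatten_dom(html, max_elements=200):
    soup = BeautifulSoup(html, "html.parser")
    lines = []
    for idx, el in enumerate(soup.find_all(SALIENT_TAGS)):
        # Keep a short text description of each interactive element, indexed by id.
        text = el.get_text(" ", strip=True) or el.get("placeholder") or el.get("value") or ""
        lines.append(f"<{el.name} id={idx}> {text[:80]}")
        if len(lines) >= max_elements:   # crude cap to respect the LLM context budget
            break
    return "\n".join(lines)
```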
**Ablations on reasoning, models, and few-shot examples.** Appendix B shows ablations on CoT reasoning and model scales. Overall, we find CoT to boost performance across tasks, especially multi-step tasks. For models, gpt-4 improves performance across methods, but having both hierarchical prompting and few-shot examples continues to help. gpt-3.5-turbo does better in the zero-shot setting but under-performs text-davinci-003 when given few-shot examples. Fig. 9 shows the effect of few-shot examples qualitatively on a search-engine task. Few-shot examples help ground the task in concrete low-level actions on the web UI, resulting in HeaP Few-shot navigating to the desired link correctly.
**Error Analysis.** We cluster common failure modes of HeaP: (1) _Content parsing errors:_ Browser content may be parsed with incorrect associations. Specifically, since we flatten the DOM structure and add it to the LLM context, this can cause incorrect text associations. (2) _Error recovery:_ LLM
Figure 6: Token counts for browser content before and after compression on different environments.
Figure 7: **(Left) Evaluation on \(3\) live airline websites averaged over 10 runs per website. (Right) Difference in train (jetblue) v/s test (united, aa) website UIs.**
may not know how to recover from incorrect actions. For instance, HeaP clicks on a wrong link, sending it to a new webpage not seen in the demonstrations. (3) _Visual information gaps:_ Visual elements, such as specific dropdown menus in maps environment, do not appear in the DOM. Such tasks require multi-modal models that reason about browser images.
## 6 Discussion and Limitations
In this paper, we present a hierarchical framework HeaP for solving web tasks from few-shot demonstrations. We evaluate against a range of baselines on a suite of web tasks and characterize performance gains from both hierarchical prompting and demonstrations. Our key takeaways are:
**(1) Hierarchy breaks down complex tasks** Our results indicate that hierarchical prompting achieves higher success rates by breaking down complex tasks into reusable low-level policy calls. This is evident in the performance difference between HeaP Few-shot and Flat Few-shot (see Figs. 3, 4, 5, and 7), with Fig. 2 showing the role of hierarchy in both better task decomposition and the ability to pack in more examples. **(2) Sample-efficient generalization** HeaP matches or outperforms prior works with multiple orders of magnitude less data (see Table 1). It is able to adapt to unseen tasks with only a handful of task demonstrations seen in-context (see Table 3). **(3) Effects of few-shot prompting and reasoning** Few-shot examples in the prompt are effective at grounding high-level task instructions as actions on the web UI environment (see Fig. 9). CoT reasoning significantly boosts performance across all methods, particularly on multi-step tasks (see Appendix B).
While HeaP shows promise, there are still limitations and open challenges: **(1) Complex Webpages.** HeaP is currently unable to handle pages with visual-only components since those observations don't get parsed from the HTML DOM. Leveraging pretrained multi-modal models offers a promising avenue (Lee et al., 2023; Furuta et al., 2023). Moreover, parsing pages containing long tables or databases requires advanced compression techniques such as learning dedicated saliency models (Wang et al., 2022; Sridhar et al., 2023) to determine relevant web elements. **(2) Verification and Error Recovery.** HeaP may click on a wrong link, sending it to a new webpage, and must learn to recover from such errors. Learning from incorrect actions, either via human feedback or self-verification, is an interesting direction for future work. Action LLMs also carry potential for misuse given their execution in open-domain environments, requiring careful verification and security solutions.
Figure 8: Outputs from HeaP Few-shot on book-flight task showing hierarchical task planner actions, low-level web policy actions, and LLM reasoning.
Figure 9: HeaP Few-shot vs HeaP Zero-shot on a search-engine task. The instruction asks to find the 7th link, however, it is ambiguous what 7 refers to. HeaP Few-shot with a single in-context demo is able to ground the task in the UI and reason that the 7th link lies in the 2nd webpage and navigates to the link correctly.
## Acknowledgements
We would like to thank Daniel Ciolek, Volkan Cirik, Michael Griffiths for help with browser tooling and plugins. We are grateful to Kilian Weinberger, Yoav Artzi, Ramya Ramakrishnan, and the rest of the ASAPP research team for insightful feedback and suggestions.
|
2303.03618
|
Demazure product of permutations and hopping
|
The Demazure product (also goes by the name of 0-Hecke product or the greedy
product) is an associative operation on Coxeter groups with interesting
properties and important applications. In this note, we study permutations and
present an efficient way to compute the Demazure product of two permutations
starting from their usual product and then applying a new operator we call a
hopping operator. We also give an analogous result for the group of signed
permutations.
|
Tina Li, Suho Oh, Edward Richmond, Grace Yan, Kimberley You
|
2023-03-07T03:05:13Z
|
http://arxiv.org/abs/2303.03618v1
|
# Demazure product of permutations and hopping
###### Abstract.
The Demazure product (also goes by the name of \(0\)-Hecke product or the greedy product) is an associative operation on Coxeter groups with interesting properties and important applications. In this note, we study permutations and present an efficient way to compute the Demazure product of two permutations starting from their usual product and then applying a new operator we call a hopping operator. We also give an analogous result for the group of signed permutations.
## 1. Introduction
Coxeter groups play an important role in the representation theory of Lie groups and the geometry of associated flag varieties and Schubert varieties. There is an interesting associative operation on Coxeter groups called the **Demazure product**[4] (also called the \(0\)-**Hecke product** or the **greedy product**). In this paper, we study the Demazure product for two classes of Coxeter groups: permutations and signed permutations. Our main result is a computationally efficient algorithm to compute this product using a new operator called a hopping operator.
Let \(W\) be a Coxeter group with simple generating set \(S\). Specifically, \(W\) is generated by \(S\) with relations of the form
\[(st)^{m_{st}}=id,\quad s,t\in S \tag{1}\]
for some values \(m_{st}\in\mathbb{Z}_{>0}\cup\{\infty\}\) where \(m_{st}=1\) if and only if \(s=t\). Coxeter groups come equipped with a natural length function \(\ell:W\to\mathbb{Z}_{\geq 0}\) and a poset structure called the Bruhat order (denoted by \(\leq\)) that respects length. For more details on the basic properties of Coxeter groups, see [3]. The **Coxeter monoid** structure (also called the \(0\)-**Iwahori-Hecke monoid**) on \(W\) is defined to be the monoid generated by \(S\) with a product \(\star\) satisfying the Coxeter braid relations in Equation (1) for \(s\neq t\) along with the relation \(s\star s=s\) for all \(s\in S\) (this new relation replaces \(s^{2}=id\) in the usual product). This monoid structure was first studied by Norton in [11] in the context of Hecke algebras. It is well known that, as sets, \(W=\langle S,\star\rangle\). We say an expression \(w=s_{1}\cdots s_{k}\) is reduced if \(\ell(w)=k\). In other words, \(w\) cannot be expressed with fewer than \(k\) generators in \(S\). The next lemma records some basic facts about the Coxeter monoid.
**Lemma 1.1**.: _Let \(W\) be a Coxeter group with simple generating set \(S\). Then the following are true:_
1. _Let_ \((s_{1},\ldots,s_{k})\) _be a sequence of generators in_ \(S\)_. Then_ \[s_{1}\cdots s_{k}\leq s_{1}\star\cdots\star s_{k}\] _with equality if and only if_ \((s_{1},\ldots,s_{k})\) _is a reduced expression._
2. _For any_ \(s\in S\) _and_ \(w\in W\)_,_ \[s\star w=\begin{cases}w&\text{if}\quad\ell(sw)<\ell(w)\\ sw&\text{if}\quad\ell(sw)>\ell(w).\end{cases}\]
It turns out that there is a very nice interpretation of \(w\star u\):
**Proposition 1.2** ([6, 7]).: _For any \(w,u\in W\), the Bruhat interval_
\[[e,w\star u]=\{ab\ |\ a\in[e,w],b\in[e,u]\}.\]
We give an example of this phenomenon. The poset in Figure 1 is the Bruhat order of the symmetric group \(S_{4}\) which has three simple generators \(s_{1},s_{2},s_{3}\). The elements in the lower interval of \(s_{1}s_{2}s_{3}s_{2}s_{1}\) are colored in red. All the elements in the lower interval \([e,s_{1}s_{2}s_{3}s_{2}s_{1}]\) can be written as \(a\star b\) where \(a\leq s_{1}s_{2}s_{3}\) and \(b\leq s_{2}s_{1}\). For example, \(s_{2}s_{3}s_{1}\) can be written as \(s_{2}s_{3}\star s_{1}\).
This product has been used and studied in various fields that depend on Coxeter groups [5], [8],[9], [12], [13], [14]. For example, in Lie theory, the Demazure product naturally arises in the study of BN pairs and reductive groups. Specifically, the relation on Borel double orbit closures is given by
\[\overline{BwBuB}=\overline{B(w\star u)B}.\]
While the product \(w\star u\) has been well studied for many years, it is computationally expensive to calculate using reduced expressions of \(w\) and \(u\). In this note, we present an efficient method to compute the Demazure product of two permutations in the symmetric group using their one-line notation. This algorithm starts with the usual product of permutations and applies a new operator we call the **hopping** operator. We state this result in Theorem 3.1. In Section 4, we prove an analogous result for signed permutations which is stated in Theorem 4.8.
Figure 1. The Bruhat order of \(S_{4}\) and the lower interval of \(s_{1}s_{2}s_{3}s_{2}s_{1}\) in red.
## 2. The hopping operator
In this section, we focus on the permutation group or symmetric group \(S_{n}\). These groups are also known as Coxeter groups of type \(A\). The group \(S_{n}\) is a Coxeter group with simple generating set \(S=\{s_{1},\ldots,s_{n-1}\}\) satisfying the relations \(s_{i}^{2}=id\) and
\[(s_{i}s_{j})^{2}=id\ \ \text{if}\ |i-j|>1\ \text{and}\ (s_{i}s_{i+1})^{3}=id. \tag{2}\]
The generator \(s_{i}\) corresponds to the simple transposition \((i,i+1)\). Let \([n]:=\{1,2,\ldots,n\}\) and for \(w\in S_{n}\), let \(w=w(1)w(2)\cdots w(n)\) denote the permutation in one-line notation. We define a new operator on permutations called the **hopping operator**.
**Definition 2.1**.: _For \(t\in[n]\) and \(L\) an ordered subset of \([n]\) (without repetition), the **hopping operator**_
\[h_{t,L}:S_{n}\to S_{n}\]
_acts on a permutation \(w\) to yield the permutation obtained by the following algorithm: Scan to the right (within the one-line notation of \(w\)) of \(t\) and look for the element furthest to the right in \(L\) that is greater than \(t\). If it exists, swap \(t\) and that element, replace \(w\) with the resulting permutation, and repeat. The algorithm ends when there are no elements of \(L\) within \(w\) to the right of \(t\)._
For example, take \(w=891726435\). Then \(h_{1,[2,3,4,5,6,7,8]}(w)=897625431\) is obtained by the following process:
\[89\mathbf{17}26435\to 897\mathbf{126}435\to 89762\mathbf{1435}\to 897625431.\]
For another example, we have \(h_{1,[3,6,5,7,2]}(w)=892756431\) and is obtained by the following process:
\[891\mathbf{7}26435\to 8927\mathbf{16}435\to 892756431.\]
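To make the definition concrete, the following is a minimal Python sketch of the hopping operator acting on one-line notation; the function name `hop` and the list-based representation are illustrative choices rather than notation from this note.

```python
def hop(w, t, L):
    """Hopping operator h_{t,L} of Definition 2.1 on a permutation w in one-line notation.

    w: list of distinct integers, t: the entry being hopped, L: an ordered list of values.
    """
    w = list(w)
    while True:
        pos = w.index(t)
        right = set(w[pos + 1:])
        # entries of L that appear to the right of t and are greater than t
        candidates = [x for x in L if x in right and x > t]
        if not candidates:
            return w
        q = candidates[-1]            # the candidate furthest to the right *in the list L*
        j = w.index(q)
        w[pos], w[j] = w[j], w[pos]   # swap t and q, then repeat the scan

# reproduces the first worked example: h_{1,[2,...,8]}(891726435) = 897625431
print(hop([8, 9, 1, 7, 2, 6, 4, 3, 5], 1, [2, 3, 4, 5, 6, 7, 8]))
```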
For any ordered subset \(L\subset[n]\), let \(w(L)\subseteq[n]\) denote the ordered list obtained by \(w\) acting on the elements of \(L\). While a permutation may not preserve \(L\), it can be viewed as an operator on ordered subsets of \([n]\) preserving size. Hopping operators satisfy the following commuting relation with simple transpositions:
**Lemma 2.2**.: _Let \(w\in S_{n}\). For any \(i\geq t\) and ordered subset \(L\subseteq[n]\), we have_
\[s_{i}\cdot h_{t,L}(w)=h_{t,s_{i}(L)}(s_{i}\cdot w).\]
Proof.: The positions that \(t\) will be in throughout the hopping of \(h_{t,L}(w)\) will be exactly the same as the positions of \(t\) throughout the hopping of \(h_{t,s_{i}(L)}(s_{i}w)\). So \(h_{t,L}(w)\) and \(h_{t,s_{i}(L)}(s_{i}w)\) will be exactly the same except with \(i\) and \(i+1\) interchanged, hence the desired result.
For example, let \(w=514632\). We compare the action of \(s_{3}h_{1,[2,3,4,5]}\) and \(h_{1,[2,4,3,5]}s_{3}\) on \(w\). First, we have
\[514632\xrightarrow[h_{1,[2,3,4,5]}]{}543621\xrightarrow[s_{3}]{}534621.\]
On the other hand, we get
\[514632\xrightarrow[s_{3}]{}513642\xrightarrow[h_{1,[2,4,3,5]}]{}534621,\]
yielding the same result as guaranteed by Lemma 2.2.
Throughout the paper, we will denote the product of simple transpositions by the symbol
\[\operatorname{C}_{a,b}:=s_{a}s_{a+1}\cdots s_{a+b-1}.\]
If \(b=0\), then \(\mathrm{C}_{a,0}\) is the identity. The operator \(\mathrm{C}_{a,b}\) acts on a permutation \(w\) by mapping each of \(a,a+1,\ldots,a+b\) to \(a+1,\ldots,a+b,a\) respectively. In other words, it is a cyclic shift of the elements \(a,a+1,\ldots,a+b\) by one. For example, in \(S_{8}\), we have \(\mathrm{C}_{2,4}=s_{2}s_{3}s_{4}s_{5}\) corresponding to the permutation \(13456278\). From the above lemma we immediately get the following corollary:
**Corollary 2.3**.: _Let \(w\in S_{n}\). For any \(a\geq t\) and ordered subset \(L\subseteq[n]\), we have_
\[\mathrm{C}_{a,b}\,h_{t,L}(w)=h_{t,\mathrm{C}_{a,b}(L)}\,\mathrm{C}_{a,b}(w).\]
The Demazure product with \(\mathrm{C}_{a,b}\) can be described using the usual product on permutations and a hopping operator as follows:
**Proposition 2.4**.: \((s_{i}s_{i+1}\cdots s_{j})\star v=h_{i,[i+1,\ldots,j+1]}(s_{i}s_{i+1}\cdots s_{j}v)\)
Proof.: We do strong induction on \(j-i\). When \(j-i=0\), the claim is that \(s_{i}\star v=h_{i,[i+1]}(s_{i}v)\) which is straightforward to verify. Assume for sake of induction that we have \((s_{i+1}\cdots s_{j})\star v=h_{i+1,[i+2,\ldots,j+1]}(s_{i+1}\cdots s_{j}v)\). Then all we need to do is show that \(h_{i,[i+1]}(s_{i}h_{i+1,[i+2,\ldots,j+1]}(s_{i}w))=h_{i,[i+1,\ldots,j+1]}(w)\), by setting \(w=s_{i}s_{i+1}\cdots s_{j}v\).
First note that for any permutation \(w\), if \(i+1\) is not encountered during the hopping process of \(i\) in \(h_{i,[i+1,\ldots,j+1]}(w)\), then the hopping of \(i+1\) in \(h_{i+1,[i+2,\ldots,j+1]}(s_{i}w)\) follows the same hopping steps as with \(h_{i,[i+1,\ldots,j+1]}(w)\) (i.e., the same numbers are getting swapped in the same order). Hence
\[s_{i}h_{i+1,[i+2,\ldots,j+1]}(s_{i}w)=h_{i,[i+1,\ldots,j+1]}(w).\]
Since \(i\) is to the right of \(i+1\) in \(h_{i,[i+1,\ldots,j+1]}(w)\), by Definition 2.1, we have
\[h_{i,[i+1]}h_{i,[i+1,\ldots,j+1]}(w)=h_{i,[i+1,\ldots,j+1]}(w)\]
giving us the desired result. Otherwise, if \(i+1\) is encountered during the hopping process of \(i\) in \(h_{i,[i+1,\ldots,j+1]}(w)\), then none of the numbers \(i+2,\ldots,j+1\) appear to the right of \(i+1\) in \(w\) (since the elements \(i+2,\ldots,j+1\) would have been prioritized to be chosen for the swap). So in \(h_{i,[i+1,\ldots,j+1]}(w)\), we have that none of \(i+2,\ldots,j+1\) appears between \(i+1\) and \(i\) (with \(i\) being right of \(i+1\)). Now the hopping of \(i+1\) in \(h_{i+1,[i+2,\ldots,j+1]}(s_{i}w)\) follows the same hopping steps except the last one of \(h_{i,[i+1,\ldots,j+1]}(w)\). Hence, we end up with \(h_{i,[i+1,\ldots,j+1]}(w)\). So
\[h_{i,[i+1]}(s_{i}h_{i+1,[i+2,\ldots,j+1]}(s_{i}w))=h_{i+1,[i+2,\ldots,j+1]}(s_ {i}w)=h_{i,[i+1,\ldots,j+1]}(w)\]
which proves the proposition.
For example, let \(w=124567893=s_{3}s_{4}s_{5}s_{6}s_{7}s_{8}=\mathrm{C}_{3,6}\) and \(v=891726435\). The above proposition implies that \(w\star v=h_{3,[4,5,6,7,8,9]}(wv)\). Starting with the usual product \(wv=931827546\), we get
\[9\mathbf{318}27546\to 981\mathbf{327}546\to 98172\mathbf{3546}\to 981726543.\]
Thus \(w\star v=981726543\).
**Definition 2.5**.: _Given \(w\in S_{n}\), we let \(w\restriction a\) stand for the subword of \(w\) obtained by restricting ourselves to the subword strictly left of \(a\), then cutting off the elements smaller than \(a\)._
For example, take \(w=891726435\). We have \(w\restriction 2=897\), obtained by taking the subword strictly left of \(2\) and then removing the elements smaller than \(2\). Similarly, we get \(w\restriction 4=8976\), again obtained by taking the subword strictly left of \(4\) and then removing the elements smaller than \(4\).
Notice that if \(w=s_{i}s_{i+1}\cdots s_{j}\), then \(w\restriction i=[i+1,i+2,\ldots,j+1]\). Proposition 2.4 implies that if \(w=s_{i}s_{i+1}\cdots s_{j}\), then \(w\star v=h_{i,\,w\restriction i}(wv)\).
## 3. The main result
In this section we give a formula for the Demazure product between two arbitrary permutations by writing one of the permutations as a product of \(\operatorname{C}_{i,j}\)'s and then carefully iterating Proposition 2.4.
**Theorem 3.1**.: _For any \(w,v\in S_{n}\), we have \(w\star v=h_{n-1,\,w\restriction(n-1)}\cdots h_{2,\,w\restriction 2}\,h_{1,\,w\restriction 1}(wv)\)._
Proof.: Let \((j_{1},\ldots,j_{n-1})\) denote the inversion sequence of \(w\) (see [15, Chapter 1.3]). In other words, \(j_{i}\) denotes the number of inversions in \(w\) of the form \((k,i)\). It is easy to check that
\[w=\operatorname{C}_{n-1,j_{n-1}}\cdots\operatorname{C}_{2,j_{2}}\operatorname{ C}_{1,j_{1}}. \tag{3}\]
The reason we use this decomposition is that the partial product \(\operatorname{C}_{n-1,j_{n-1}}\cdots\operatorname{C}_{i,j_{i}}\), restricted to \(i,i+1,\ldots,n\), is exactly the same as the subword of \(w\) restricted to \(i,i+1,\ldots,n\).
From Proposition 2.4, we get that
\[w\star v=\cdots h_{i,[i+1,\ldots,j_{i}+1]}\operatorname{C}_{i,j_{i}}\cdots h_{ 2,[3,\ldots,j_{2}+1]}\operatorname{C}_{2,j_{2}}(h_{1,[2,\ldots,j_{1}+1]} \operatorname{C}_{1,j_{1}}v).\]
It is enough to show that for each \(i<n\), we have
\[\operatorname{C}_{n-1,j_{n-1}}\cdots\operatorname{C}_{i,j_{i}}h_{i,[i+1,\ldots,j_{i}+1]}=h_{i,\,w\restriction i}\operatorname{C}_{n-1,j_{n-1}}\cdots\operatorname{C}_{i,j_{i}}.\]
This follows from Corollary 2.3, the previous observation that \(\operatorname{C}_{n-1,j_{n-1}}\cdots\operatorname{C}_{i,j_{i}}\) restricted to \(i,i+1,\ldots,n\) is exactly the same as that of \(w\), and the fact that we can truncate the list \(L\) in a hopping operator \(h_{i,L}\) by removing the elements of \(L\) that are smaller than \(i\).
For example, let \(w=6541723\) and \(v=5436217\). The usual product of these two permutations is \(wv=7142563\). The Demazure product \(w\star v\) corresponds to the sequence of hopping operators
\[h_{5,[6]}h_{4,[6,5]}h_{3,[6,5,4,7]}h_{2,[6,5,4,7]}h_{1,[6,5,4]}\]
acting on the usual product \(wv\). Applying each of the hopping operators in order to \(wv\), we get
\[7142563\xrightarrow[h_{1,[6,5,4]}]{}7452613\xrightarrow[h_{2,[6,5,4,7]}]{}7456213\xrightarrow[h_{3,[6,5,4,7]}]{}7456213\xrightarrow[h_{4,[6,5]}]{}7564213\xrightarrow[h_{5,[6]}]{}7654213.\]
This gives us \(6541723\star 5436217=7654213\).
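Theorem 3.1 translates directly into a short computation. The sketch below reuses the `hop` function from the earlier sketch; `restrict` and `compose` are illustrative helper names for the list \(w\restriction a\) of Definition 2.5 and the usual product of permutations.

```python
def restrict(w, a):
    """The list denoted w|a in Definition 2.5: entries of w strictly left of a that exceed a."""
    left = w[:w.index(a)]
    return [x for x in left if x > a]

def compose(w, v):
    """Usual product of permutations in one-line notation: (wv)(i) = w(v(i))."""
    return [w[x - 1] for x in v]

def demazure(w, v):
    """Demazure product w * v via Theorem 3.1: apply h_{i, w|i} for i = 1, ..., n-1 to wv."""
    result = compose(w, v)
    for i in range(1, len(w)):
        result = hop(result, i, restrict(w, i))
    return result

# reproduces the worked example above: 6541723 * 5436217 = 7654213
print(demazure([6, 5, 4, 1, 7, 2, 3], [5, 4, 3, 6, 2, 1, 7]))
```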
## 4. Signed permutations
In this section, we prove an analogue of Theorem 3.1 for the group of signed permutations, also known as Coxeter groups of type \(B\) (or equivalently, type \(C\)). Signed permutations can be viewed as a permutation subgroup of \(S_{2n}\). Let \(\{s^{\prime}_{1},\ldots,s^{\prime}_{2n-1}\}\) be the simple generators of the permutation group \(S_{2n}\). We define \(B_{n}\) to be the subgroup of \(S_{2n}\) generated by \(S:=\{s_{1},\ldots,s_{n}\}\) where
\[s_{i}:=s^{\prime}_{i}\,s^{\prime}_{2n-i}\text{ for }1\leq i<n\text{ and }\ \ s_{n}:=s^{\prime}_{n}. \tag{4}\]
The convention we use regarding type \(B\) simple transpositions follows those found in [1]. As a Coxeter group, the generators \(s_{1},\ldots,s_{n-1}\) of \(B_{n}\) satisfy the same relations as in type \(A\) (see Equation (2)) with the last generator \(s_{n}\) satisfying:
\[(s_{i}s_{n})^{2}=id\ \text{ for }1\leq i<n-1\text{ and }(s_{n-1}s_{n})^{4}=id.\]
Similar to the symmetric group, the elements of the Coxeter group \(B_{n}\) can be interpreted using a one-line notation called the signed permutations [3]. The convention we use here will be slightly different from that of [3] in the sense that \(s_{n}\) plays the role of \(s_{0}\) in [3].
**Definition 4.1**.: _A **signed permutation** of type \(B_{n}\) is a permutation of \([n]\) along with a sign of \(+\) or \(-\) attached to each number._
For example, the signed permutation \([4,-2,3,-1]\) is an element of \(B_{4}\). The generator \(s_{i}\in B_{n}\) corresponds to the simple transposition swapping \(i\) and \(i+1\) if \(i<n\) and \(s_{n}\) to the transposition swapping \(n\) with \(-n\). The product structure on signed permutations is just the usual composition of permutations with the added condition that \(w(-i)=-w(i)\). Let \(\pm[n]\) denote the set \([n]\cup-[n]\), where \(-[n]:=\{-1,\ldots,-n\}\). We impose the total ordering on \(\pm[n]\) given by:
\[1<2<\cdots<n<-n<\cdots<-2<-1.\]
By **unfolding** of a signed permutation \(w\in B_{n}\) we mean the following: to the right of \(w\), attach a reverse ordered copy of \(w\) with the signs flipped to get a permutation of \(\pm[n]\). The unfolding map respects the embedding of \(B_{n}\) as a subgroup of \(S_{2n}\) given above. Specifically, if we replace \(-[n]\) with \(\{n+1,\ldots,2n\}\), then the unfolding map assigns to each signed permutation in \(B_{n}\) a standard permutation in \(S_{2n}\). For example the unfolding of \([4,-2,3,-1]\) is
\[[4,-2,3,-1,1,-3,2,-4]\]
and the corresponding permutation of \([8]\) is \([4,7,3,8,1,6,2,5]\). Conversely, given a permutation of \(\pm[n]\) where the \(i\)-th entry has the opposite sign of the \((2n+1-i)\)-th entry, we can **fold** the permutation to get a signed permutation on \([n]\). In this section, we will slightly abuse notation and identify a signed permutation of \(B_{n}\) with its unfolding in \(S_{2n}\). When referring to the generators of \(S_{2n}\), we set
\[s^{\prime}_{-i}:=s^{\prime}_{2n-i}\]
for any \(i<n\) and hence \(s_{i}:=s^{\prime}_{i}s^{\prime}_{-i}\).
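As a quick illustration of these maps, the following sketch (with illustrative helper names) reproduces the example above; `to_s2n` realizes the replacement of \(-[n]\) by \(\{n+1,\ldots,2n\}\).

```python
def unfold(w):
    """Unfold a signed permutation: append a reversed copy of w with all signs flipped."""
    return list(w) + [-x for x in reversed(w)]

def fold(u):
    """Recover the signed permutation from an unfolded word of length 2n."""
    return u[:len(u) // 2]

def to_s2n(w):
    """View the unfolding inside S_{2n}: a negative entry -i is replaced by 2n + 1 - i."""
    n = len(w)
    return [x if x > 0 else 2 * n + 1 + x for x in unfold(w)]

print(unfold([4, -2, 3, -1]))   # [4, -2, 3, -1, 1, -3, 2, -4]
print(to_s2n([4, -2, 3, -1]))   # [4, 7, 3, 8, 1, 6, 2, 5]
```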
**Lemma 4.2**.: _For any signed permutations \(w,v\in B_{n}\), we have_
\[w\star v=\mathrm{fold}(\mathrm{unfold}(w)\star\mathrm{unfold}(v)). \tag{5}\]
Proof.: First observe that Equation (5) holds if we replace \(\star\) with the group product since the unfolding map corresponds to the embedding of \(B_{n}\) into \(S_{2n}\). We proceed by induction on the length of \(w\) and will use \(\ell_{B},\ell_{A}\) to denote length in the Coxeter groups \(B_{n}\) and \(S_{2n}\) respectively. First, suppose that \(w=s_{i}\) where \(i<n\). Then \(\mathrm{unfold}(w)=s^{\prime}_{i}s^{\prime}_{-i}\)
with respect to the embedding of \(B_{n}\) into \(S_{2n}\). Since \(s^{\prime}_{i}\) commutes with \(s^{\prime}_{-i}\), we have that \(\ell_{B}(s_{i}v)=\ell_{B}(v)-1\) if and only if \(\ell_{A}(\text{unfold}(s_{i}v))=\ell_{A}(\text{unfold}(v))-2\). Lemma 1.1 part (2) implies
\[s_{i}\star v=\text{fold}(\text{unfold}(s_{i}\star v))=\text{fold}(\text{unfold}(s_{i})\star\text{unfold}(v)).\]
A similar argument holds when \(w=s_{n}\) and \(\text{unfold}(w)=s^{\prime}_{n}\). This proves the lemma in the case when \(\ell_{B}(w)=1\). Now suppose that \(\ell_{B}(w)>1\) and write \(w=sw^{\prime}\) for some \(s\in S\) and \(w^{\prime}\in B_{n}\) where \(\ell_{B}(w)=\ell_{B}(w^{\prime})+1\). By induction we get
\[w\star v=sw^{\prime}\star v=s\star(w^{\prime}\star v)=s\star\text{fold}(\text {unfold}(w^{\prime})\star\text{unfold}(v)).\]
The inductive base case above implies
\[s\star\text{fold}(\text{unfold}(w^{\prime})\star\text{unfold}(v)) =\text{fold}(\text{unfold}(s)\star\text{unfold}(w^{\prime}) \star\text{unfold}(v))\] \[=\text{fold}(\text{unfold}(w)\star\text{unfold}(v)).\]
This completes the proof.
Next, we define a hopping operator for \(B_{n}\) analogous to Definition 2.1 for permutations.
**Definition 4.3**.: _Let \(t\in\pm[n]\) and \(L\) an ordered subset \(\pm[n]\) (without repetition). The hopping operator_
\[h_{t,L}:B_{n}\to B_{n}\]
_acts on a signed permutation \(w\) by the following algorithm: Scan to the right (within the unfolding of \(w\)) of \(t\) and look for the element furthest to the right in \(L\) that is greater than \(t\). If it exists, say \(q\), then swap \(t\) and \(q\) and also swap \(-t\) with \(-q\) (unless \(t=-q\)). Replace \(w\) with the resulting unfolded signed permutation and repeat. The algorithm ends when there are no elements of \(L\) within \(w\) to the right of \(t\)._
For example, let \(w=[2,3,5,-1,4]\) with \(t=1\) and \(L=[-2,-3,4]\). We calculate the hopping operator \(h_{1,[-2,-3,4]}(w)\). First we unfold \(w\), which gives
\[\text{unfold}(w)=[2,3,5,-1,4,-4,1,-5,-3,-2].\]
To the right of \(1\) we have \([-5,-3,-2]\). We first swap \(1\) with \(-3\), since \(-3\) is the right-most element of \(L\) that exists here. This gives us \([2,-1,5,3,4,-4,-3,-5,1,-2]\). After that, scanning again to the right of \(1\), we have \([-2]\). Then we swap \(1\) with \(-2\), to get \([-1,2,5,3,4,-4,-3,-5,-2,1]\). So the signed permutation we end up with is \([-1,2,5,3,4]\).
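Here is a minimal sketch of Definition 4.3, reusing the `unfold` and `fold` helpers from the earlier sketch; the comparison key encodes the total order \(1<2<\cdots<n<-n<\cdots<-1\).

```python
def hop_signed(w, t, L):
    """Hopping operator h_{t,L} of Definition 4.3 on a signed permutation w."""
    n = len(w)
    u = unfold(w)
    key = lambda x: x if x > 0 else 2 * n + 1 + x    # rank in the order 1 < ... < n < -n < ... < -1
    while True:
        pos = u.index(t)
        right = set(u[pos + 1:])
        candidates = [q for q in L if q in right and key(q) > key(t)]
        if not candidates:
            return fold(u)
        q = candidates[-1]                           # furthest to the right in the list L
        i, j = u.index(t), u.index(q)
        u[i], u[j] = u[j], u[i]                      # swap t and q
        if t != -q:
            i, j = u.index(-t), u.index(-q)
            u[i], u[j] = u[j], u[i]                  # mirror swap of -t and -q

# reproduces the worked example: h_{1,[-2,-3,4]}([2,3,5,-1,4]) = [-1,2,5,3,4]
print(hop_signed([2, 3, 5, -1, 4], 1, [-2, -3, 4]))
```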
Just like the type \(A\) case, hopping operators satisfy a commuting relation with simple transpositions, stated in Lemma 4.4 below. We omit the proof since it is analogous to that of Lemma 2.2. Similar to the type \(A\) case, we let \(B_{n}\) act on sub-lists of \(\pm[n]\) via the corresponding signed permutation.
**Lemma 4.4**.: _Let \(w\in B_{n}\). For any \(i\geq t\) and ordered subset \(L\subseteq\pm[n]\) we have_
\[s_{i}\cdot h_{t,L}(w)=h_{t,s_{i}(L)}(s_{i}\cdot w).\]
Recall that in the proof of Theorem 3.1, we defined \(\text{C}_{a,b}:=s_{a}\cdots s_{a+b-1}\) and used the fact that any permutation naturally decomposes into a product of \(\text{C}_{a,b}\)'s (see Equation (3)). For the type \(B_{n}\) case, we will define the analogous product of simple generators
\[\text{C}_{a,b}^{B}:=s_{a}\cdots s_{a+b-1}\]
where for any \(j\geq 1\), we set \(s_{n+j}:=s_{n-j}\). Note that if \(a\leq n\), then \(1\leq b\leq 2n-a\). For example, in \(B_{7}\), we have
\[\text{C}_{5,6}^{B}=s_{5}s_{6}s_{7}s_{8}s_{9}s_{10}=s_{5}s_{6}s_{7}s_{6}s_{5}s_{ 4}.\]
As a signed permutation, the product \(\mathrm{C}^{B}_{a,b}\) corresponds to unfolding the identity permutation \([1,2,\ldots,n]\) and shifting \(a\) to the right by \(b\) positions, then placing \(-a\) in the mirrored position. For example, in \(B_{7}\) we have
\[\mathrm{C}^{B}_{5,6}=[1,2,3,-5,4,6,7,-7,-6,-4,5,-3,-2,-1]=[1,2,3,-5,4,6,7].\]
An immediate corollary of Lemma 4.4 is the following.
**Corollary 4.5**.: _Let \(w\in B_{n}\) with \(a\leq n\) and \(1\leq b\leq 2n-a\). For any \(t\leq a\) and ordered subset \(L\subseteq\pm[n]\), we have_
\[\mathrm{C}^{B}_{a,b}\!\cdot\!h_{t,L}(w)=h_{t,\mathrm{C}^{B}_{a,b}(L)}(\mathrm{ C}^{B}_{a,b}\!\cdot\!w).\]
As in the type \(A\) case, the Demazure product with \(\mathrm{C}^{B}_{a,b}\) can be described using the usual composition product on signed permutations and the hopping operator given in Definition 4.3. Recall we identified \(s_{n+j}\) with \(s_{n-j}\) for \(1\leq j<n\). Similarly we use \(n+j\) to denote \(-(n+1-j)\) for \(1\leq j<n\) when we are dealing with elements of \(\pm[n]\).
**Lemma 4.6**.: _Let \(v\in B_{n}\). For any \(i<n\), we have_
\[s_{i}\star v=h_{i,[i+1]}(s_{i}v)\]
_and_
\[s_{n}\star v=h_{n,[-n]}(s_{n}v).\]
Proof.: In this proof, let \(h^{A}_{i,L}\) denote the hopping operator given in Definition 2.1 acting on the permutation group \(S_{2n}\) and \(h^{B}_{i,L}\) denote the hopping operator given in Definition 4.3 acting on \(B_{n}\subseteq S_{2n}\). If \(i<n\), then \(s_{i}=s^{\prime}_{-i}s^{\prime}_{i}\) and by Lemma 4.2, we have
\[\mathrm{unfold}(s_{i}\star v)=s^{\prime}_{-i}s^{\prime}_{i}\star\mathrm{unfold }(v).\]
Proposition 2.4 implies
\[(s^{\prime}_{-i}s_{i}{}^{\prime})\star\mathrm{unfold}(v)=h^{A}_{-(i+1),[-i]}h^ {A}_{i,[i+1]}s^{\prime}_{-i}s_{i}{}^{\prime}\ \mathrm{unfold}(v).\]
Note that swapping \(i\) with \(i+1\) in \(h^{A}_{i,[i+1]}\) mirrors swapping \(-(i+1)\) with \(-i\) in \(h^{A}_{-(i+1),[-i]}\). Hence Lemma 4.2 implies
\[\mathrm{fold}((s^{\prime}_{-i}s_{i}{}^{\prime})\star\mathrm{unfold}(v))=h^{B }_{i,[i+1]}(s_{i}v).\]
In the case that \(i=n\), note that \(s_{n}\star v=v\) if the sign of \(n\) in \(v\) is negative and \(s_{n}\star v=s_{n}v\) otherwise. From this it follows that \(s_{n}\star v=h_{n,[-n]}(s_{n}v)\).
**Proposition 4.7**.: _Let \(v\in B_{n}\). For any \(i\leq j\leq 2n-1\), we have_
\[(s_{i}s_{i+1}\cdots s_{j})\star v=h_{i,[i+1,\ldots,j+1]}(s_{i}s_{i+1}\cdots s_ {j}v).\]
Proof.: First, if \(i\leq j<n\), we focus on how \(s^{\prime}_{i}s^{\prime}_{i+1}\ldots s^{\prime}_{j}\) interacts with \(v\) since how \(s^{\prime}_{-i}s^{\prime}_{-(i+1)}\ldots s^{\prime}_{-j}\) interacts with \(v\) will mirror that. Then the proposition follows from Proposition 2.4 and Lemma 4.2. Second, if \(n<i\leq j\), we now focus on how \(s^{\prime}_{-i}s^{\prime}_{-(i+1)}\ldots s^{\prime}_{-j}\) interacts with \(v\) since how \(s^{\prime}_{i}s^{\prime}_{i+1}\ldots s^{\prime}_{j}\) interacts with \(v\) will mirror that. Note that \(-i<-(i+1)<\cdots<-j\) is an increasing order and hence Proposition 2.4 and Lemma 4.2 imply
\[(s_{i}s_{i-1}\ldots s_{j})\star v=h_{-(i+1),[-i,\ldots,-j]}(s_{i}s_{i-1}\ldots s _{j}v).\]
This proves the proposition when \(i>n\).
Now suppose that \(i\leq n\leq j\). We proceed by induction on \(n-i\). First, when \(n-i=-1\), the proposition follows from the above case. Now suppose that \(n-i\geq 0\) and suppose for
the sake of induction that the proposition is true for all \(i^{\prime}\) such that \(n-i^{\prime}<n-i\). We start by analyzing the expression \(s_{i}\star((s_{i+1}\cdots s_{j})\star v)\). From the induction hypothesis, we have that
\[(s_{i+1}\cdots s_{j})\star v=h_{i+1,[i+2,\ldots,n+j]}(s_{i+1}\cdots s_{n+j}v).\]
By Lemma 4.6, it suffices to analyze the operator \(h_{i,[i+1]}s_{i}h_{i+1,[i+2,\ldots,2n]}\). Then Lemma 4.4 implies
\[s_{i}h_{i+1,[i+2,\ldots,2n]}s_{i}=h_{i,[i+2,\ldots,2n]}\]
and hence \(h_{i,[i+1]}h_{i,[i+2,\ldots,2n]}=h_{i,[i+1,\ldots,2n]}\). This proves the proposition.
We now give an analogue of Definition 2.5 for signed permutations. For any \(w\in B_{n}\) and \(i>0\), define \(w\restriction i\) to be the subword of \(\text{unfold}(w)\) obtained by restricting to the numbers to the left of \(i\) that are either greater than \(i\) or less than or equal to \(-i\). For example, if \(w=[-5,3,1,-2,4]\), then
\[\text{unfold}(w)=[-5,3,1,-2,4,-4,2,-1,-3,5].\]
In this case we have \(w\restriction 2=[-5,3,-2,4,-4]\) and \(w\restriction 4=[-5]\). We can now state the analogue of Theorem 3.1 for signed permutations.
**Theorem 4.8**.: _For any \(w,v\in B_{n}\), we have \(w\star v=h_{n,\,w\restriction n}\cdots h_{2,\,w\restriction 2}\,h_{1,\,w\restriction 1}(wv)\), where the hopping operators are those of Definition 4.3._
For example, let \(w=[-5,3,1,-2,4]\) and let \(v\) be a signed permutation with \(wv=[2,3,5,-1,4]\). The Demazure product \(w\star v\) corresponds to the sequence of hopping operators \(h_{5,[-5]}\,h_{4,[-5]}\,h_{3,[-5]}\,h_{2,[-5,3,-2,4,-4]}\,h_{1,[-5,3]}\) acting on \(\mathrm{unfold}(wv)\). Applying each of the hopping operators in order, we get
\[[2,3,5,-1,4,-4,1,-5,-3,-2]\xrightarrow[h_{1,[-5,3]}]{}[2,3,-1,5,4,-4,-5,1,-3,-2]\xrightarrow[h_{2,[-5,3,-2,4,-4]}]{}\] \[[-2,3,-1,5,-4,4,-5,1,-3,2]\xrightarrow[h_{3,[-5]}]{}[-2,-5,-1,-3,-4,4,3,1,5,2]\xrightarrow[h_{4,[-5]}]{}\] \[[-2,-5,-1,-3,-4,4,3,1,5,2]\xrightarrow[h_{5,[-5]}]{}[-2,-5,-1,-3,-4,4,3,1,5,2].\]
Theorem 4.8 implies that the Demazure product \(w\star v=[-2,-5,-1,-3,-4]\). A computational sketch combining the operators above is given below, after which we conclude this section with some questions.
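Under the statement of Theorem 4.8 as reconstructed above, the computation can be sketched by combining `hop_signed` with the restriction \(w\restriction i\); `restrict_signed`, `compose_signed`, and the particular \(v\) below are illustrative choices, not data from this note.

```python
def restrict_signed(w, i):
    """Entries of unfold(w) strictly left of i that are greater than i or at most -i."""
    u = unfold(w)
    left = u[:u.index(i)]
    return [x for x in left if x > i or x <= -i]

def compose_signed(w, v):
    """Composition of signed permutations, using w(-j) = -w(j)."""
    return [w[x - 1] if x > 0 else -w[-x - 1] for x in v]

def demazure_signed(w, v):
    """Demazure product in B_n: apply h_{i, w|i} for i = 1, ..., n to the product wv."""
    result = compose_signed(w, v)
    for i in range(1, len(w) + 1):
        result = hop_signed(result, i, restrict_signed(w, i))
    return result

# with w = [-5,3,1,-2,4] and a v for which wv = [2,3,5,-1,4], this reproduces
# the chain above and returns [-2,-5,-1,-3,-4]
print(demazure_signed([-5, 3, 1, -2, 4], [-4, 2, -1, -3, 5]))
```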
**Question 4.9**.: _Can the Demazure product in type \(D\) be similarly described as the type \(B\) case?_
**Question 4.10**.: _In [2], Billey and Weaver give a "one-line notation" algorithm to compute the maximal element in the intersection of a lower interval with an arbitrary coset of a maximal parabolic subgroup in type \(A\). In [12], there is an alternate algorithm to compute the maximal element using the Demazure product (this second formula is for any Coxeter group and parabolic subgroup). Is there a way to apply Theorem 3.1 to recover the algorithm in [2]? If so, is there a generalization of the algorithm in [2] to the case where the parabolic subgroup is not maximal? or to the case of signed permutations? We remark that the existence of such a maximal element for any Coxeter group \(W\) and parabolic subgroup \(W_{J}\) was established in [10]._
### Acknowledgments
This research was primarily conducted during the 2022 Honors Summer Math Camp at Texas State University. The authors gratefully acknowledge the support from the camp and also thank Texas State University for providing support and a great working environment. ER was supported by a grant from the Simons Foundation 941273.
|
2303.14402
|
Exploring the use of deep learning in task-flexible ILC
|
Growing demands in today's industry result in increasingly stringent
performance and throughput specifications. For accurate positioning of
high-precision motion systems, feedforward control plays a crucial role.
Nonetheless, conventional model-based feedforward approaches are no longer
sufficient to satisfy the challenging performance requirements. An attractive
method for systems with repetitive motion tasks is iterative learning control
(ILC) due to its superior performance. However, for systems with non-repetitive
motion tasks, ILC is generally not applicable, despite some recent
promising advances. In this paper, we aim to explore the use of deep learning
to address the task flexibility constraint of ILC. For this purpose, a novel
Task Analogy based Imitation Learning (TAIL)-ILC approach is developed. To
benchmark the performance of the proposed approach, a simulation study is
presented which compares the TAIL-ILC to classical model-based feedforward
strategies and existing learning-based approaches, such as neural network based
feedforward learning.
|
Anantha Sai Hariharan Vinjarapu, Yorick Broens, Hans Butler, Roland Tóth
|
2023-03-25T08:41:38Z
|
http://arxiv.org/abs/2303.14402v1
|
# Exploring the use of deep learning in task-flexible ILC*
###### Abstract
Growing demands in today's industry result in increasingly stringent performance and throughput specifications. For accurate positioning of high-precision motion systems, feedforward control plays a crucial role. Nonetheless, conventional model-based feedforward approaches are no longer sufficient to satisfy the challenging performance requirements. An attractive method for systems with repetitive motion tasks is iterative learning control (ILC) due to its superior performance. However, for systems with non-repetitive motion tasks, ILC is generally not applicable, despite some recent promising advances. In this paper, we aim to explore the use of deep learning to address the task flexibility constraint of ILC. For this purpose, a novel Task Analogy based Imitation Learning (TAIL)-ILC approach is developed. To benchmark the performance of the proposed approach, a simulation study is presented which compares the TAIL-ILC to classical model-based feedforward strategies and existing learning-based approaches, such as neural network based feedforward learning.
## I Introduction
High-precision positioning systems are essential components in modern manufacturing machines and scientific equipment, see [1, 2, 3, 4]. To ensure high-throughput and high-accuracy position tracking, a two-degree-of-freedom controller structure, consisting of a feedback controller and a feedforward controller, is commonly utilized, see [5, 6, 7]. The feedback controller maintains closed-loop stability and disturbance rejection, while the feedforward controller is primarily responsible for achieving optimal position tracking performance, see [8]. Nonetheless, with the increasingly stringent demands in contemporary industry, conventional model-based feedforward techniques, e.g. [9], are no longer adequate to meet the desired performance specifications, thus necessitating for alternative feedforward approaches.
_Iterative Learning Control_ (ILC), see [10], has emerged as a viable choice for feedforward control in motion systems that execute recurring tasks, enabling accurate position tracking. Despite its advantages, ILC exhibits significant limitations. Primarily, ILC is dependent on the assumption that the tracking error recurs from one iteration to the next, limiting its general applicability. Additionally, conventional ILC performance is constrained to a single task, see [11].
Several studies have attempted to address the task flexibility limitations of ILC by drawing on concepts from machine learning and system identification, as reported in the literature [12, 13, 14]. However, the findings from the related literature suggest that there exists a trade-off between the achievable position tracking performance and the degree of deviation from the core principle of ILC, i.e., direct iterative manipulation of signals. Instead of compromising local ILC performance to enhance task flexibility, the aim is to develop a learning-based feedforward strategy that can deliver superior position tracking performance regardless of the severity of the variation of the compensatory signal across tasks. Such an ILC variant can be imagined to make use of imitation learning in order to mimic the behaviour of conventional ILC policies generalized over multiple trajectories.
This paper introduces a novel approach to ILC, termed Task Analogy based Imitation Learning (TAIL)-ILC, from a data science perspective. By acquiring spatial feature analogies of the trajectories and their corresponding control signals, performance of conventional ILC policies can be replicated. To facilitate efficient network training, abstract lower-dimensional representations of signals are utilized. This approach offers numerous benefits in terms of training and prediction time efficiency, utilization of large datasets, and high sampling rate handling. The resulting feedforward controller comprises an encoding policy, a learning policy, and a decoding policy arranged in a cascade interconnection. Dual principal component analysis (DPCA), a standard linear dimensionality reduction technique, is utilized for the integration of the encoding and decoding policies, while a deep neural network is employed for the learning policy.
The main contributions of this paper are:
1. A novel TAIL-ILC approach that tackles the task extension problem of ILC via learning spatial feature analogies of trajectories and their compensation signals, enabling direct imitation of ILC policies.
2. An efficient implementation strategy is devised for the learning-based feedforward controller in terms of constructing it through the cascade interconnection of an encoder, a deep neural network, and a decoder.
This paper is organized as follows. First, the problem formulation is presented in Section II. Next, Section III presents the proposed novel TAIL-ILC approach which aims at generalizing ILC performance across various tasks through imitation learning strategies. Section IV provides a simulation study of the proposed approach with respect to existing feedforward strategies using a high-fidelity model of a moving-magnet planar actuator. In Section V, detailed comparison between the proposed TAIL-ILC approach and neural-network-based feedforward strategies is presented. Finally, conclusions on the proposed approach are presented in Section VI.
## II Problem statement
### _Background_
Consider the conventional frequency domain ILC configuration illustrated by Figure 1, where \(P\in\mathcal{R}^{n_{\mathrm{y}}\times n_{\mathrm{u}}}\) corresponds to the proper transfer matrix representation of a _discrete time_ (DT) _linear-time-invariant_ (LTI) _multiple-input multiple-output_ (MIMO) plant with \(\mathcal{R}\) denoting the set of real rational functions in the complex variable \(z\in\mathbb{C}\). Furthermore, the proper \(K\in\mathcal{R}^{n_{\mathrm{u}}\times n_{\mathrm{y}}}\) represents a LTI stabilizing DT feedback controller, which is typically constructed using rigid-body decoupling strategies, see [15]. The aim of the conventional frequency domain ILC framework is to construct an optimal feedforward policy \(f\), which minimizes the position tracking error \(e\) in the presence of the motion trajectory \(r\). Under the assumption that the reference trajectory is trial invariant, the error propagation per trial \(k\in\mathbb{N}_{\geq 0}\) is given by:
\[e_{k}=Sr-Jf_{k}, \tag{1}\]
where \(S=(I+PK)^{-1}\) and \(J=(I+PK)^{-1}P\). Generally, the update law for the feedforward policy is in accordance with the procedure outlined in [16]:
\[f_{k+1}=Q\left(Le_{k}+f_{k}\right), \tag{2}\]
where \(L\in\mathcal{R}\mathcal{L}_{\infty}^{n_{\mathrm{u}}\times n_{\mathrm{y}}}\) is a learning filter and \(Q\in\mathcal{R}\mathcal{L}_{\infty}^{n_{\mathrm{u}}\times n_{\mathrm{u}}}\) denotes a robustness filter with \(\mathcal{R}\mathcal{L}_{\infty}\) corresponding to set of real rational functions in \(z\) that have bounded singular value on the unit circle \(\mathbb{D}=\{e^{\mathrm{i}\omega}\mid\omega\in[0,2\pi]\}\), i.e., finite \(\mathcal{L}_{\infty}(\mathbb{D})\) norm. Both \(L\) and \(Q\) are required to be designed for the ILC task at hand. Furthermore, by combining (1) and (2), the progression of the error and feedforward update is reformulated as:
\[e_{k+1} =(I-JQJ^{-1})Sr+JQ(J^{-1}-L)e_{k}, \tag{3a}\] \[f_{k+1} =QLSr+Q(I-LJ)f_{k}, \tag{3b}\]
which can be reduced to:
\[e_{k+1} =(I-Q)Sr+Q(I-JL)e_{k}, \tag{4a}\] \[f_{k+1} =QLSr+Q(I-LJ)f_{k}, \tag{4b}\]
under the assumption that \(Q\) is diagonal and \(J\) is approximately diagonal, which holds in case of rigid-body decoupled systems.
From (4), several observations can be made. First, it can be observed that the contribution of \(r\) to the position tracking error is dependent on the robustness filter \(Q\), which is optimally chosen as identity to negate the contribution of the reference trajectory towards the tracking error. Secondly, the learning filter \(L\) aims to minimize the criterion \(\|Q(I-JL)\|_{\infty}<1\), where \(\|\cdot\|_{\infty}\) stands for the \(\mathcal{H}_{\infty}\) norm, such that the tracking error is steered to zero, which is optimally achieved when \(L=J^{-1}\). Note that these assumptions on \(Q\) and \(L\) yield the optimal feedforward update \(f_{k+1}=P^{-1}r\), which results in perfect position tracking. Moreover, when the convergence criterion is satisfied, the limit policies, i.e. \(e_{\infty}=\lim_{k\rightarrow\infty}e_{k}\), \(f_{\infty}=\lim_{k\rightarrow\infty}f_{k}\), correspond to:
\[e_{\infty} =\left(I-J\big{(}I-Q(I-LJ)\big{)}^{-1}QL\right)\!Sr, \tag{5a}\] \[f_{\infty} =(I-Q(I-LJ))^{-1}\,QLSr. \tag{5b}\]
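To make the recursion above concrete, the following is a minimal lifted-domain sketch of the error propagation (1) and feedforward update (2); the toy impulse response, static feedback gain, and reference are assumptions chosen purely for illustration and are unrelated to the system considered later in this paper.

```python
import numpy as np

N = 50
g = 0.1 * 0.8 ** np.arange(N)                    # assumed impulse response of the plant P
P = np.array([[g[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])
K = 2.0 * np.eye(N)                              # assumed static feedback gain

S = np.linalg.inv(np.eye(N) + P @ K)             # S = (I + PK)^{-1}
J = S @ P                                        # J = (I + PK)^{-1} P

L = np.linalg.pinv(J)                            # learning filter, ideally L = J^{-1}
Q = np.eye(N)                                    # robustness filter chosen as identity

r = np.sin(np.linspace(0.0, np.pi, N))           # assumed reference trajectory
f = np.zeros(N)
for k in range(5):
    e = S @ r - J @ f                            # error propagation, Eq. (1)
    f = Q @ (L @ e + f)                          # feedforward update, Eq. (2)
    print(k, np.linalg.norm(e))                  # the error norm contracts over the trials
```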
In spite of its simplicity and efficacy, the conventional ILC is hindered by significant limitations, the most notable of which is its confinement to a single task. Consequently, its practical utility is restricted to particular types of machinery.
### _Problem formulation_
The aim of this paper is to address the challenge of augmenting the task-flexibility of the conventional ILC by utilizing an imitation learning based controller. This approach facilitates the generalization of the optimal feedforward policy, created by the conventional ILC, for a wider range of motion profiles. The primary objective of this paper is to devise a feedforward controller that employs a learning-based mechanism, which satisfies the following requirements:
1. The learning-based feedforward approach enables the generalization of the performance of the conventional ILC across multiple trajectories.
2. The scalability of the learning-based feedforward approach is imperative for its implementation in systems with a high sampling rate.
## III Tail-Ilc
### _Approach_
For a given dynamic system with a proper discrete transfer function \(G\in\mathcal{R}\) under a sampling time \(T_{\mathrm{s}}\in\mathbb{R}_{+}\), a reference trajectory \(r\) of duration \(T=n_{\mathrm{d}}T_{\mathrm{s}}\) seconds can be defined as
\[r=\begin{bmatrix}r(0)&\cdots&r(n_{\mathrm{d}})\end{bmatrix}^{\top}, \tag{6}\]
where \(n_{\mathrm{d}}\) corresponds to the length of the signal in DT. This reference trajectory for example can correspond to a \(n^{th}\) order motion profile. A trajectory class \(C\subset\mathbb{R}^{n_{\mathrm{d}}\times n_{\mathrm{t}}}\) is defined as a collection of reference trajectories such that each trajectory shares certain prominent spatial features (motion profile order, constant velocity interval length, etc.) with the others, where \(n_{\mathrm{t}}\) is the number of trajectories:
\[C=\{r_{1},r_{2},r_{3}.....,r_{n_{\mathrm{t}}}\}. \tag{7}\]
Given a specific combination of the \(L\) and \(Q\) filters, consider that an ILC policy \(\pi^{*}\) exists which maps a given reference trajectory \(r\) to the optimal feedforward compensation signal \(f^{*}\), see (5). This can be formally expressed as:
\[\pi^{*}:r_{i}\to f_{i}^{*}. \tag{8}\]
Henceforth, \(\pi^{*}\) shall be denoted as the expert policy, which is equipped with learning and robustness filters established through a process model. Our objective is to formulate an optimal student policy \(\pi^{*}_{\mathrm{s}}\) that approximates the performance
Fig. 1: Control structure with the conventional ILC configuration.
of the optimal policy \(\pi^{*}\) over a set of trajectories from the pertinent trajectory class. To this end, we endeavor to determine \(\pi^{*}_{\mathrm{s}}\) as a solution to the optimization problem:
\[\pi^{*}_{\mathrm{s}}=\operatorname*{arg\,min}_{\pi_{\mathrm{s}}}\eta(\pi^{*}(r_ {i}),\pi_{\mathrm{s}}(r_{i})),\quad\forall i\in[1,n_{\mathrm{t}}] \tag{9}\]
where \(r_{i}\sim C\) and \(\eta(\cdot,\cdot)\) is a performance quantification measure, and \(\pi_{\mathrm{s}}\) are parameterized student policy candidates. The expert policy \(\pi^{*}\) is a conventionally designed frequency domain ILC as described in Section II-A. In TAIL-ILC, the idea is to structure \(\pi_{\mathrm{s}}\) as :
\[\pi_{\mathrm{s}}=\pi_{\mathrm{D,Y}}\circ\pi_{\mathrm{C}}\circ\pi_{\mathrm{E,U}}. \tag{10}\]
which is visualised in Figure 2. The TAIL-ILC controller is capable of generating a feedforward control signal based on a given reference trajectory. This process is carried out through a series of three sub-policies outlined in equation (10). The first sub-policy, \(\pi_{\mathrm{E,U}}\), projects the reference trajectory \(r_{i}\in\mathbb{R}^{n_{\mathrm{d}}\times 1}\) into a lower-dimensional space referred to as the _latent space_. Next, the second sub-policy, \(\pi_{\mathrm{C}}\), predicts a latent space representation of the feedforward signal, which is then fed into the third sub-policy, \(\pi_{\mathrm{D,Y}}\), to project the latent space feedforward signal back into the higher-dimensional output space, resulting in \(f_{i}\in\mathbb{R}^{n_{\mathrm{d}}\times 1}\). Notably, the successful application of TAIL-ILC requires that all reference trajectories share certain spatial features with each other. The prediction sub-policy, \(\pi_{\mathrm{C}}\), is trained on a set of reference trajectories and their corresponding feedforward control signals obtained using \(\pi^{*}\), which are projected into the latent space. The use of abstract representations enables the preservation of the most significant information of the signals while simultaneously reducing the amount of data used for making predictions, resulting in several advantages, such as increased training and prediction time efficiencies. The subsequent sub-section will delve into the development of each sub-policy in further detail.
### _Student policy \(\pi_{\mathrm{s}}\)_
The three-part student policy \(\pi_{\mathrm{s}}:r\to f_{\pi_{*}}\) can be decomposed into three distinct components:
\[\pi_{\mathrm{E,U}}:r\to r_{l} \tag{11a}\] \[\pi_{\mathrm{C}}:r_{l}\to f_{l}\] (11b) \[\pi_{\mathrm{D,Y}}:f_{l}\to f_{\pi_{*}} \tag{11c}\]
where, \(r,f_{\pi_{*}}\in\mathbb{R}^{n_{\mathrm{d}}\times 1}\) and \(r_{l},f_{l}\in\mathbb{R}^{n_{l}\times 1}\) and \(n_{l}\) is the latent space dimensionality such that \(n_{l}\ll n_{\mathrm{d}}\). As mentioned in Section III-A, the training data for the sub-policy \(\pi_{\mathrm{C}}\), namely the pairs \(\{r_{i,l},f_{i,l}\}\), are in the latent space. This shows that the ideal outputs of \(\pi_{\mathrm{s}}\) are of the form:
\[f_{\pi_{*}}=\pi_{\mathrm{D,Y}}(f_{l})=f^{\prime}\approx f^{*}, \tag{12}\]
where, an approximation error may exist between \(f^{\prime}\) and \(f^{*}\). Additionally, we aim at:
\[\pi_{\mathrm{C}}(r_{l})=\widehat{f}_{l}\approx f_{l}, \tag{13}\]
where, in case of using a deep neural network, \(\widehat{f}_{l}\) is the output of the network and the prediction error \(e_{\mathrm{pred}}\) is defined as:
\[e_{\mathrm{pred}}=\|f_{l}-\widehat{f}_{l}\|_{2}, \tag{14}\]
where, \(\|\cdot\|_{2}\) denotes the \(\ell_{2}\) norm. Moreover, this implies that (12) becomes:
\[f_{\pi_{*}}=\pi_{\mathrm{D,Y}}(\widehat{f}_{l})=\widehat{f}^{\prime}. \tag{15}\]
In order to quantify the gap between performance of \(\pi^{*}\) and that of \(\pi_{\mathrm{s}}\), a distance measure is used as the performance quantification measure \(\eta\) in (9). This is expressed as:
\[\eta(\pi^{*},\pi_{\mathrm{s}})=\frac{1}{n_{\mathrm{t}}}\sum_{i=1}^{n_{\mathrm{ t}}}\|f_{i}-\widehat{f}_{i}^{\prime}\|_{2}. \tag{16}\]
Assuming that \(\mu\) represents the set of weights and biases of the deep neural network, improving the performance of \(\pi_{\mathrm{s}}\) can be posed as the following optimization problem:
\[\operatorname*{arg\,min}_{n_{l},\mu}\ \eta(\pi^{*},\pi_{\mathrm{s}}) \tag{17}\]
The proposed approach involves propagating the parameter \(\eta\) through the three sub-policies, with the aim of iteratively optimizing both \(n_{l}\) and \(\mu\) via (17). However, given the significant computational burden associated with this approach, there is a need for a more straightforward alternative or a reformulation of the problem. With this goal in mind, we introduce the concepts of the _Expert space_ and _Student space_ to provide alternative perspectives for addressing the optimization problem at hand.
**Definition 1**: _The expert space is defined as the space of all real policies denoted by superscript \({}^{e}\) having the form_
\[\pi^{e}:\mathbb{R}^{n_{\mathrm{x}}\times 1}\to\mathbb{R}^{n_{\mathrm{d}}\times 1}\quad\forall n_{\mathrm{x}}\in\mathbb{N}\]
**Example:**
1. _Expert policy in expert space:_ \[\pi^{e}_{\mathrm{e}}:r\to f^{\prime}\] (18) _where_ \(r,f^{\prime}\in\mathbb{R}^{n_{\mathrm{d}}\times 1}\)__
2. _Student policy in expert space:_ \[\pi^{e}_{\mathrm{s}}:r\to\widehat{f}^{\prime}\] (19) _where_ \(r,\widehat{f}^{\prime}\in\mathbb{R}^{n_{\mathrm{d}}\times 1}\)__
**Definition 2**: _The student space is defined as the space of all real policies denoted by superscript ( \({}^{s}\)) having the form_
\[\pi^{s}:\mathbb{R}^{n_{\mathrm{x}}\times 1}\to\mathbb{R}^{n_{l}\times 1}\ \forall\ n_{\mathrm{x}}\in\mathbb{N}\]
**Example:**
1. _Expert policy in student space:_ \[\pi^{s}_{\mathrm{e}}:r_{l}\to f_{l}\] (20) _where_ \(r_{l},f_{l}\in\mathbb{R}^{n_{\mathrm{l}}\times 1}\)__
2. _Student policy in student space:_ \[\pi^{s}_{\mathrm{s}}:r_{l}\to\widehat{f}_{l}\] (21) _where_ \(r_{l},\widehat{f}_{l}\in\mathbb{R}^{n_{\mathrm{l}}\times 1}\)__
Table I summarizes these definitions.
Stated differently, the expert space is comprised of all the decoding policies, \(\pi_{\mathrm{D}}\), which project signals into \(n_{\mathrm{d}}\) dimensions, while the student space is composed of all the encoding policies, \(\pi_{\mathrm{E}}\), which project signals into \(n_{l}\) dimensions. Based on the preceding definitions, it is worth noting that our primary objective is to determine the student policy in the expert space, \(\pi_{\mathrm{s}}^{\mathrm{e}}\). In light of these definitions, the distance metric specified in (16) can be reformulated as:
\[\eta(\pi^{*},\pi_{\mathrm{s}}^{\mathrm{e}})=\eta(\pi^{*},\pi_{ \mathrm{e}}^{\mathrm{e}})+\eta(\pi_{\mathrm{e}}^{\mathrm{e}},\pi_{\mathrm{s}} ^{\mathrm{e}}) \tag{22}\] \[\implies\eta(\pi^{*},\pi_{\mathrm{s}}^{\mathrm{e}})=\frac{1}{n_{ \mathrm{t}}}\sum_{i=1}^{n_{\mathrm{t}}}\|f_{i}-f_{i}^{\prime}\|_{2}+\frac{1}{n _{\mathrm{t}}}\sum_{i=1}^{n_{\mathrm{t}}}\|f_{i}^{\prime}-\widehat{f}_{i}^{ \prime}\|_{2}\]
where, \(\eta(\pi^{*},\pi_{\mathrm{e}}^{\mathrm{e}})\) corresponds to the optimization of \(n_{l}\) and \(\eta(\pi_{\mathrm{e}}^{\mathrm{e}},\pi_{\mathrm{s}}^{\mathrm{e}})\) corresponds to the optimization of \(\mu\). This separation of the distance measure (16) allows the optimization problem in (17) to be segmented as:
\[\operatorname*{arg\,min}_{n_{l},\mu}\ \eta(\pi^{*},\pi_{\mathrm{s}}^{ \mathrm{e}})=\operatorname*{arg\,min}_{n_{l}}\ \eta(\pi^{*},\pi_{\mathrm{e}}^{ \mathrm{e}})+\operatorname*{arg\,min}_{\mu}\ \eta(\pi_{\mathrm{e}}^{ \mathrm{e}},\pi_{\mathrm{s}}^{\mathrm{e}})\]
This segregation allows us to optimize \(n_{l}\) independently of \(\mu\), thus simplifying the optimization problem defined by (17).
### _Choice of encoding and decoding sub-policies_
The encoding and decoding sub-policies in this work employ DPCA, a well-established linear dimensionality reduction technique, due to its computational simplicity. Other commonly-used linear and non-linear dimensionality reduction methods are also available and have been reviewed in [17]. DPCA involves the identification of a linear subspace with \(n_{l}\) dimensions in an \(n_{\mathrm{d}}\) dimensional space, where \(n_{l}\) is significantly smaller than \(n_{\mathrm{d}}\). This subspace is defined by a set of orthonormal bases that maximize the variance of the original data when projected onto this subspace. The orthonormal bases computed through this process are commonly referred to as _principal components_.
**Definition 3**: _A data point in an arbitrary dataset \(H\in\mathbb{R}^{n_{\mathrm{s}}\times n_{\mathrm{t}}}\) is defined as a vector \(r_{i}\in\mathbb{R}^{n_{\mathrm{s}}\times 1}\ \forall i\in[1,n_{\mathrm{t}}]\)._
The selection of the principal components for an \(n_{l}\) dimensional latent space for the data points in \(C\) involves choosing the right eigenvectors that correspond to the first \(n_{l}\) singular values of \(H\). It should be emphasized that the projection of a data point onto the latent space can be computed through the following method:
\[r_{l}=T_{\mathrm{E}}r, \tag{23}\]
where:
\[T_{\mathrm{E}}=\widehat{\Sigma}^{-1}V^{\top}H^{\top} \tag{24}\]
In this context, \(r_{l}\in\mathbb{R}^{n_{l}\times 1}\), \(V\in\mathbb{R}^{n_{\mathrm{t}}\times n_{\mathrm{t}}}\) denotes the matrix of right eigenvectors of \(H\) and \(\widehat{\Sigma}\in\mathbb{R}^{n_{l}\times n_{\mathrm{t}}}\) contains the first \(n_{l}\) singular values of \(H\) along its diagonal elements. It is worth noting that the value of \(n_{l}\) is constrained by the number of data points in \(H\). This feature of DPCA is particularly advantageous in situations where \(n_{\mathrm{d}}\gg n_{\mathrm{t}}\). Given the latent space representation \(r_{l}\), a reconstructed data point \(r^{\prime}\) can be obtained as:
\[r^{\prime}=T_{D}r_{l},\qquad r^{\prime}\in\mathbb{R}^{n_{\mathrm{d}}\times 1}, \tag{25}\]
where:
\[T_{\mathrm{D}}=HV\widehat{\Sigma}^{-1}. \tag{26}\]
**Remark 1**: _The computation of the transformations \(T_{\mathrm{E}}\) and \(T_{\mathrm{D}}\) depends on \(n_{l}\). Additionally, considering that we have access to the dataset \(H\), the matrix transformations \(\widehat{\Sigma}^{-1}V^{T}H^{T}\) and \(HV\widehat{\Sigma}^{-1}\), the right-hand sides of (23) and (25), become constant for a specific problem for a given choice of \(n_{l}\)._
In the light of Remark 1, for a given dataset \(H\in\mathbb{R}^{n_{\mathrm{d}}\times n_{\mathrm{t}}}\) the encoding (\(\pi_{\mathrm{E,U}}\)) and decoding (\(\pi_{\mathrm{D,Y}}\)) sub-policies for using in the student policy \(\pi_{\mathrm{s}}\) can be defined as follows:
\[\pi_{\mathrm{E,U}}(r) =T_{\mathrm{E}}r \tag{27a}\] \[\pi_{\mathrm{D,Y}}(\widehat{f}_{l}) =T_{\mathrm{D}}\widehat{f}_{l} \tag{27b}\]
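A minimal numerical sketch of the DPCA encoder and decoder of (23)–(27) is given below; the dataset \(H\) is a synthetic low-rank stand-in and all dimensions are assumptions for illustration only.

```python
import numpy as np

n_d, n_t, n_l = 2000, 60, 8                      # assumed dimensions with n_l << n_d
rng = np.random.default_rng(0)
B = rng.standard_normal((n_d, n_l))              # assumed low-dimensional trajectory features
H = B @ rng.standard_normal((n_l, n_t))          # assumed dataset; columns are data points

U, s, Vt = np.linalg.svd(H, full_matrices=False)
V_hat = Vt[:n_l, :].T                            # first n_l right singular vectors of H
S_inv = np.diag(1.0 / s[:n_l])                   # inverse of the leading singular values

T_E = S_inv @ V_hat.T @ H.T                      # encoder of Eq. (24): r_l = T_E r
T_D = H @ V_hat @ S_inv                          # decoder of Eq. (26): r' = T_D r_l

r = H[:, 0]                                      # encode/decode one training data point
r_l = T_E @ r                                    # latent representation (n_l-dimensional)
r_rec = T_D @ r_l                                # reconstruction in the original space
print(r_l.shape, np.linalg.norm(r - r_rec) / np.linalg.norm(r))
```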
## IV Simulation study
This section presents a comparative study of the TAIL-ILC approach against classical ILC, an artificial neural network (ANN) based ILC, referred to as NN-ILC, see [14], and conventional rigid body feedforward, see [7, 18], which is obtained by multiplication of the acceleration profile and the inverted rigid body dynamics of the system:
\[C_{\mathrm{FF}}=m\ddot{r}_{i} \tag{28}\]
To facilitate simulation, a high-fidelity model of a moving-magnet planar actuator (MMPA), depicted in Figure 3, is considered. A detailed description of a MMPA system is given in [19]. Table II provides a concise overview of the network architecture and training specifics for sub-policy \(\pi_{\mathrm{C}}\) in TAIL-ILC and policy \(\pi_{\mathrm{NN}}\) in NN-ILC, respectively. For the sake of comparability, the training parameters are kept consistent between the two networks. The networks are designed and trained using the Deep Learning toolbox in MATLAB 2019b, employing the default random parameter initialization.
The training set consists of 618 trajectories, while the test set includes 42 trajectories, each of which is 2.5 seconds long with a total of 20833 time samples. Each trajectory
\begin{table}
\begin{tabular}{|c|c|c|} \hline & **Expert space** (\({}^{e}\)) & **Student space** (\({}^{s}\)) \\ \hline \hline
**Expert policy** (\(\pi_{\mathrm{e}}\)) & \(\pi_{\mathrm{e}}^{e}:r\to f^{\prime}\) & \(\pi_{\mathrm{e}}^{s}:r_{l}\to f_{l}\) \\
**Student policy** (\(\pi_{\mathrm{s}}\)) & \(\pi_{\mathrm{s}}^{e}:r\to\widehat{f}^{\prime}\) & \(\pi_{\mathrm{s}}^{s}:r_{l}\to\widehat{f}_{l}\) \\ \hline \end{tabular}
\end{table} TABLE I: Expert and student policies in expert and student spaces
Fig. 3: Schematic representation of a MMPA model.
corresponds to a fourth-order motion profile, designed based on the approach presented by [20], and is parameterized with five parameters in the spatial domain. Individual trajectories are then generated by sweeping over a grid of values for each of these parameters. The objective of this study is to evaluate and compare the performance of the previously mentioned feedforward approaches against the expert ILC policy \(\pi^{*}\), which is the traditional ILC optimized for multiple trajectories of the same class. The primary aim of ILC in this context is to mitigate any unaccounted-for residual dynamics in the system and enhance classical model-based feedforward. Consequently, we also compare the combined performance of student policies with classical feedforward controllers. We demonstrate the tracking ability of TAIL-ILC and NN-ILC on two reference trajectories, namely \(r_{1}\) and \(r_{2}\), which belong to the same class and are shown in Figure 4. \(r_{1}\) is a randomly chosen trajectory from the training set, while \(r_{2}\) is a previously unseen trajectory.
### _Time domain performance of TAIL-ILC and NN-ILC_
A silicon wafer scanning application is considered where the scanning takes place during the constant velocity interval of the motion profile, see [1]. In this context, Figure 5 illustrates the position tracking error in the \(x\)-direction during the constant velocity interval of the reference trajectories \(r_{1}\) and \(r_{2}\), respectively. In addition to the performance of mass feedforward, TAIL-ILC and NN-ILC, the figure also indicates the performance of the expert ILC policy. This is to facilitate the comparison of the performance of the two deep learning based ILC variants with the baseline. As demonstrated in the left panel, i.e. the performance of the feedforward controllers on \(r_{1}\), the expert ILC policy exhibits the highest overall performance. Nonetheless, it is noteworthy that the TAIL-ILC policy outperforms the alternative feedforward approaches in terms of the peak tracking error achieved, whereas the NN-ILC policy demonstrates superior performance in terms of the convergence time of the error. However, when analyzing the right panel, i.e. the performance of the feedforward approaches for a previously unseen trajectory \(r_{2}\), the expert ILC policy needs to re-learn the relevant feedforward signal. Conversely, the TAIL-ILC and NN-ILC policies are capable of achieving similar performance to the re-learned expert ILC policy without any further training. Additionally, when combined with a classical mass feedforward controller, both the TAIL-ILC and NN-ILC policies are observed to yield superior performance in terms of peak error and settling time compared to the classical mass feedforward controller alone.
### _TAIL-ILC vs NN-ILC_
Table III provides a comparison of the training and prediction properties of the TAIL-ILC and NN-ILC student policies. Here, we compare the following parameters:
1. \(T_{\text{train}}\): Time to train the neural network
2. \(T_{\text{predict}}\): Time to make predictions for 10 randomly selected test set trajectories.
3. \(e_{\text{train}}\): Control signal prediction error averaged over 10 randomly selected train set trajectories.
4. \(e_{\text{test}}\): Control signal prediction error averaged over 10 randomly selected test set trajectories.
5. \(e_{\text{peak tracking}}\): Peak tracking error achieved with the predicted control signals averaged over 10 randomly selected train set trajectories.
Here, the average control signal prediction errors of the train and the test set trajectories are calculated as the values of the performance measure \(\eta\) in (22). As can be seen, though the original signals and trajectories are extremely high dimensional, the projection of these signals into the latent space using the proposed TAIL-ILC approach has resulted in significant improvement in training and prediction time compared to that of the NN-ILC approach.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Parameter** & **TAIL-ILC** & **NN-ILC** \\ \hline
No. of neurons in the input layer & \(618\) & \(4\) \\ \hline
No. of hidden layers & \(3\) & \(3\) \\ \hline
No. of neurons in hidden layers & \(800\) & \(6\) \\ \hline
Activation & Relu & Relu \\ \hline
No. of neurons in the output layer & \(618\) & \(1\) \\ \hline
Learning rate & \(10^{-3}\) & \(10^{-3}\) \\ \hline
Epochs & \(5000\) & \(5000\) \\ \hline
Optimizer & adam & adam \\ \hline
Minibatch size & \(128\) & \(128\) \\ \hline
Train set & \(618\) trajectories & \(618\) trajectories \\ \hline
Test set & \(42\) trajectories & \(42\) trajectories \\ \hline
\end{tabular}
\end{table} TABLE II: Architecture and training details of the NNs
Fig. 4: x-direction of the references \(r_{1}\) and \(r_{2}\).
Fig. 5: Tracking error for \(r_{1}\) (left) and \(r_{2}\) (right) during constant velocity.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Criterion** & **NN-ILC** & **TAIL-ILC** \\ \hline
\(T_{\text{train}}\) & \(2.5\) hr & \(20\) min \\ \hline
\(T_{\text{predict}}\) (per sample) & \(0.005\) sec & \(0.064\) sec \\ \hline
\(T_{\text{predict}}\) (full signal) & \(86\) sec & \(0.064\) sec \\ \hline
\(e_{\text{train}}\) & \(0.0055\) N & \(0.0011\) N \\ \hline
\(e_{\text{test}}\) & \(0.0013\) N & \(0.0064\) N \\ \hline
\(e_{\text{peak tracking}}\) & \(1.3\cdot 10^{-7}\) m & \(8.3\cdot 10^{-8}\) m \\ \hline
\end{tabular}
\end{table} TABLE III: Performance comparison for the \(1^{st}\) degree of freedom
Moreover, as can be seen in Table III, the average signal prediction error is lower for TAIL-ILC in the case of previously seen trajectories, whereas the NN-ILC shows better prediction performance for previously unseen trajectories.
## V TAIL-ILC vs NN-ILC Perspectives
In the previous section, we compared the performance of the TAIL-ILC and NN-ILC controllers for a specific use case. However, it is more natural to view these controllers as individual instances of two fundamentally different perspectives on the problem. Hence, it is important to reflect upon the perspectives that these controllers convey and their consequences for various aspects of the resulting controllers. This provides a more general explanation for some of the differences observed in the performance of these two controllers.
### _Time duration of trajectories_
The NN-ILC and TAIL-ILC are two approaches of ILC that differ in their treatment of reference trajectories and feedforward signals. NN-ILC is capable of handling trajectories of different lengths, as it deals with them sample-wise. In contrast, TAIL-ILC processes trajectories and signals in their entirety, making it challenging to manage trajectories of varying durations due to the fixed input-output dimensionality of neural network learning models. Additionally, NN-ILC is better equipped to handle instantaneous changes in reference trajectories compared to TAIL-ILC. A possible solution to reconcile these perspectives is to use a different class of learning models, such as a recurrent neural network.
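To make the contrast concrete, the sketch below juxtaposes the two prediction interfaces; the encoder, decoder and network objects are hypothetical placeholders for the architectures described earlier in the paper.

```python
import numpy as np

def nn_ilc_feedforward(policy_net, reference_samples):
    """NN-ILC: predict the feedforward signal sample by sample.
    reference_samples: (n_samples, n_features) per-sample motion-profile features
    (the exact feature choice is a placeholder here)."""
    return np.array([policy_net(s) for s in reference_samples])

def tail_ilc_feedforward(task_encoder, latent_policy, signal_decoder, task_params):
    """TAIL-ILC: map the whole task to a latent code, predict the latent feedforward
    representation, and decode the complete fixed-length signal in one shot."""
    z_task = task_encoder(task_params)       # latent description of the trajectory
    z_ff = latent_policy(z_task)             # latent feedforward prediction
    return signal_decoder(z_ff)              # full time-domain feedforward signal
```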
### _Training and prediction time efficiencies_
In NN-ILC, the training dataset used for \(\pi_{\mathrm{NN}}\) encompasses all the samples from all the trajectories in the training set, along with their associated feedforward signals. Conversely, TAIL-ILC employs a training dataset for \(\pi_{\mathrm{C}}\) that solely includes the parameters of the trajectories and feedforward signals within the latent space, resulting in a significantly smaller dataset in comparison to the total number of samples. This characteristic leads to TAIL-ILC presenting shorter training and prediction times when compared to NN-ILC, as demonstrated by the results presented in Table III.
### _Generalizability to previously unseen trajectories_
Figure 5 demonstrates that NN-ILC outperforms TAIL-ILC in terms of generalizing performance to previously unobserved trajectories. The improved performance can be attributed to NN-ILC's treatment of reference trajectories as points in an \(n\)-dimensional space corresponding to \(n^{\mathrm{th}}\) order motion profiles, which allows it to learn a mapping to the corresponding feedforward signal time samples. As a result, the trained network can more accurately extrapolate performance to previously unobserved points in the space of possible reference trajectories. In contrast, TAIL-ILC relies primarily on analogies between individual tasks on a higher level, which may result in suboptimal performance when confronted with previously unobserved trajectories at the sample level.
## VI Conclusion
In this work, we have primarily explored two different perspectives within the context of deep learning of the task-flexibility constraint of conventional ILC. While each of the considered approaches has its own advantages and disadvantages, it has been observed that the use of deep learning techniques in general could be a useful direction for future research in designing task-flexible ILC variants.
|
2309.01732
|
Evolution of Efimov States
|
The Efimov phenomenon manifests itself as an emergent discrete scaling
symmetry in the quantum three-body problem. In the unitarity limit, it leads to
an infinite tower of three-body bound states with energies forming a geometric
sequence. In this work, we study the evolution of these so-called Efimov states
using relativistic scattering theory. We identify them as poles of the
three-particle $S$ matrix and trace their trajectories in the complex energy
plane as they evolve from virtual states through bound states to resonances. We
dial the scattering parameters toward the unitarity limit and observe the
emergence of the universal scaling of energies and couplings -- a behavior
known from the non-relativistic case. Interestingly, we find that Efimov
resonances follow unusual, cyclic trajectories accumulating at the three-body
threshold and then disappear at some values of the two-body scattering length.
We propose a partial resolution to this "missing states" problem.
|
Sebastian M. Dawid, Md Habib E Islam, Raúl A. Briceño, Andrew W. Jackura
|
2023-09-04T17:34:20Z
|
http://arxiv.org/abs/2309.01732v1
|
# Evolution of Efimov States
###### Abstract
The Efimov phenomenon manifests itself as an emergent discrete scaling symmetry in the quantum three-body problem. In the unitarity limit, it leads to an infinite tower of three-body bound states with energies forming a geometric sequence. In this work, we study the evolution of these so-called Efimov states using relativistic scattering theory. We identify them as poles of the three-particle \(S\) matrix and trace their trajectories in the complex energy plane as they evolve from virtual states through bound states to resonances. We dial the scattering parameters toward the unitarity limit and observe the emergence of the universal scaling of energies and couplings--a behavior known from the non-relativistic case. Interestingly, we find that Efimov resonances follow unusual, cyclic trajectories accumulating at the three-body threshold and then disappear at some values of the two-body scattering length. We propose a partial resolution to this "missing states" problem.
_Introduction:_ The discovery of the Efimov effect in 1970 revealed the formation of an infinite number of bound states, or trimers, in a system of three non-relativistic bosons [1; 2]. Assuming they interact via two-body forces characterized by a large scattering length, the three-body binding energies form a geometric series with a quotient \(\lambda^{2}\approx 515\). The emergence of the phenomenon is closely tied to the scale invariance of the quantum-mechanical \(1/r^{2}\) potential [3; 4; 5; 6], and is the best-known example of the renormalization group limit cycle [7; 8; 9; 10].
The sequence of trimers becomes infinite in the so-called unitarity limit, i.e., when the two-body scattering length, \(a\), is made arbitrarily large, \(a\to\infty\). While such behavior has not been observed in nature, several nuclear [11; 12; 13; 14] and hadronic systems [15; 16; 17; 18; 19] may serve as proxies due to their large scattering length. Furthermore, Efimov physics is realized experimentally using ultracold atoms submerged in a background magnetic field tuned to introduce a Feshbach resonance and drive the system to the unitarity limit [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. Given the generality of the result, this phenomenon has ignited a rich line of research into universality across different subfields [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47].
Although the unitarity limit does not seem to exist in nature, we can expose the universal scaling behavior by exploring the evolution of Efimov states in vicinity of this limit. We investigate this evolution using relativistic scattering theory, which has been derived as part of ongoing efforts to develop a model-independent framework for studying three-body systems [48; 49; 50]. Building on previous work [61], we identify the trimers as poles of the \(S\) matrix in the complex energy variable and study their behavior for various values of \(a\), including the \(a\to\infty\) limit. We provide evidence of the discrete scaling relationship between the binding energies of the three-body spectrum,
\[\Delta E_{n}(a)=Q_{a}^{2}\,\Delta E_{n+1}(Q_{a}a)\,, \tag{1}\]
where \(\Delta E_{n}\) is the binding energy of the \(n^{\rm th}\) bound state, and \(Q_{a}\) is a scaling quotient that asymptotes to Efimov's \(\lambda\) in the unitarity limit. This scaling relationship holds as the states evolve from bound states to unstable resonances, verifying that the relativistic framework recovers the known non-relativistic results.
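For orientation, the non-relativistic value that \(Q_{a}\) approaches follows from the universal constant \(s_{0}\approx 1.00624\); the short numerical aside below (not part of the authors' framework) evaluates \(\lambda=e^{\pi/s_{0}}\) and the geometric spacing of binding energies implied by Eq. (1) at unitarity.

```python
import numpy as np

s0 = 1.00624                       # universal Efimov constant for three identical bosons
lam = np.exp(np.pi / s0)           # discrete scaling factor
print(f"lambda   = {lam:.3f}")     # ~22.694
print(f"lambda^2 = {lam**2:.1f}")  # ~515, the energy quotient quoted above

# At unitarity, Eq. (1) reduces to Delta E_n = lambda^2 * Delta E_{n+1}:
dE0 = 1.0                          # arbitrary units for the deepest state shown
print([dE0 / lam**(2 * n) for n in range(4)])
```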
Furthermore, by studying the analytic structure of the scattering amplitude, we find a much richer picture of the trimers' behavior than previously identified. We discuss intriguing properties of their evolution across various unphysical Riemann sheets of the complex energy plane, such as the formation of cyclic trajectories of the three-body poles and the emergent scaling property of the associated residues. The behavior of the Efimov resonances is sufficiently puzzling that it motivates us to conjecture about the structure of the three-boson amplitudes and to call for further investigation of these states. Before presenting our findings, we briefly review the framework needed to obtain them.
_Relativistic scattering theory:_ We consider the scattering of three identical spinless bosons of mass \(m\), which we label as "\(\varphi\)", in their c.m. frame. We fix the total angular momentum of the system to \(J=0\), as well as neglect contributions from the two-particle subsystems of angular momenta higher than zero. The \(3\varphi\to 3\varphi\) scattering amplitude depends on the total relativistic energy \(E\) and two more variables. We describe the system by splitting the scattering states into a _spectator_ particle and a _pair_, formed from the two other bosons, and use initial and final spectator momenta, \(k\) and \(p\), as the remaining kinematic parameters. In what follows, we use a notation
with an implicit energy dependence.
Physical states are associated with poles of the amplitude, with a residue corresponding to the coupling of the state to the open scattering channel. Lehmann-Symanzik-Zimmerman [62; 63; 64; 65] reduction implies that this identification also holds for poles off the real energy axis. Causality assures that a complex-valued pole can not reside on the "physical" energy plane and must instead appear in unphysical Riemann sheets generated by square-root and logarithmic branch cuts of the scattering amplitude. Depending on the location in the complex plane and the sheet, these poles are associated with bound states (real-valued, physical sheet), virtual states (real-valued, unphysical sheet), or resonances (complex-valued, unphysical sheet).
The relativistic three-body amplitude, \(\mathcal{M}_{3}(p,k)\), exhibits poles associated with trimers in the \(E^{2}\) plane. Near the \(n^{\rm th}\) pole, it behaves like
\[\mathcal{M}_{3}(p,k)=-\frac{\Gamma_{n}(p)\,\Gamma_{n}(k)}{E^{2}-E_{n}^{2}}+ \mathcal{O}\big{(}E^{0}\big{)}\;, \tag{2}\]
where \(E_{n}\) is the trimer energy. Bound or virtual states have \(\operatorname{Im}E_{n}=0\), while resonances \(\operatorname{Im}E_{n}\neq 0\). The residue, i.e., the coupling of the \(n^{\rm th}\) trimer to the \(3\varphi\) state, factorizes into momentum-dependent vertex factors \(\Gamma_{n}(k)\) that are closely related to the Faddeev wave functions in the non-relativistic limit.
As implied by the unitarity of the \(S\) matrix, the amplitude is described by a set of integral equations [49; 50; 66; 60]. They depend on two dynamical inputs, the \(2\varphi\to 2\varphi\) scattering amplitude, \(\mathcal{M}_{2}\), and the three-body \(K\) matrix, \(\mathcal{K}_{3}\), which describes short-distance dynamics of three particles. Given these two objects, one can solve integral equations to obtain the scattering amplitude [61; 67; 49]. As argued in the supplemental material, the universal scaling behavior is independent of \(\mathcal{K}_{3}\), and we set it to zero in the remainder of this letter. Most of the techniques we use have been developed in Ref. [61] and references within. We discuss some new details in the supplemental material.
The three-body scattering amplitude is given by,
\[\mathcal{M}_{3}(p,k)=-\mathcal{M}_{2}(p)\,G(p,k)\,\mathcal{M}_{2}(k)-\mathcal{M}_{2}(p)\int_{k^{\prime}}G(p,k^{\prime})\,\mathcal{M}_{3}(k^{\prime},k)\,, \tag{3}\]
where \(G\) is the \(S\)-wave-projected propagator describing particle exchange between the pairs. It is a kinematic function with a logarithmic branch cut [61; 66; 49]. Finally, the integral measure is \(\int_{k}\equiv\int_{0}^{k_{\rm max}}\mathrm{d}k\,k^{2}/(2\pi)^{2}\omega_{k}\), where \(k_{\rm max}\) is the maximum allowed value of the momentum and \(\omega_{k}=\sqrt{m^{2}+k^{2}}\) is the spectator energy. The cutoff momentum is fixed by our choice of \(\mathcal{K}_{3}=0\). Changes in the regularization lead to a different three-body \(K\) matrix, assuring that the resultant amplitude is independent of the cutoff.
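Away from any singularities on the real momentum axis, Eq. (3) can be reduced to a dense linear system by a quadrature (Nystrom-type) discretization of the spectator-momentum integral. The sketch below illustrates only this step; the two-body amplitude `M2` and the exchange propagator `G` are passed in as user-supplied callables, and no contour deformation is performed.

```python
import numpy as np

def solve_m3(E, m, M2, G, k_max, n=200):
    """Nystrom-type solution of Eq. (3) on a real spectator-momentum grid (sketch only).
    M2(k, E) and G(p, k, E) are user-supplied callables; no contour deformation here."""
    k, dk = np.linspace(1e-4, k_max, n, retstep=True)
    w = k**2 * dk / ((2 * np.pi) ** 2 * np.sqrt(m**2 + k**2))   # integral measure weights
    m2 = np.array([M2(ki, E) for ki in k])
    g = np.array([[G(pi, ki, E) for ki in k] for pi in k])
    B = -np.outer(m2, m2) * g                 # driving term  -M2(p) G(p,k) M2(k)
    D = (m2[:, None] * g) * w[None, :]        # kernel         M2(p) G(p,k') w(k')
    return np.linalg.solve(np.eye(n) + D, B)  # (1 + D) M3 = B  reproduces Eq. (3)
```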
For \(\mathcal{M}_{2}\), we use the leading-order effective range expansion,
\[\mathcal{M}_{2}(k)=\frac{16\pi\varepsilon_{k}}{-1/a-iq_{k}}\,, \tag{4}\]
Figure 1: Trajectories of the first three trimer poles in the \((\kappa/m,1/ma)\) plane where \(\kappa=\operatorname{sign}(\operatorname{Re}\Delta E)\sqrt{|m\operatorname{ Re}\Delta E|}\). The \(3\varphi\) and \(\varphi b\) thresholds are shown explicitly as grey and orange lines. Solid red, blue, and green lines denote physical bound states, while dashed ones denote either virtual bound states on the unphysical \(\varphi b\) sheet or resonances on the nearest \(3\varphi\) sheet. Stars denote the emergence of a virtual state from the logarithmic cut on the second \(\varphi b\) sheet. Circles denote the evolution of this virtual state onto a real bound state. Squares denote the further evolution of the state to three-body resonance. Insets show behavior of trimers near the three-body threshold.
where \(\varepsilon_{k}=\sqrt{(E-\omega_{k})^{2}-k^{2}}\) is the pair's energy in its c.m. frame, and \(q_{k}=\sqrt{\varepsilon_{k}^{2}/4-m^{2}}\) is the relative momentum between the particles in the pair. Due to the square root in the definition of the relative momentum, the amplitude is defined on two Riemann branches in the complex \(\varepsilon_{k}\) variable: the first is the physical sheet, with \(\operatorname{Im}q_{k}>0\), while the second (unphysical) sheet corresponds to \(\operatorname{Im}q_{k}<0\).
Regardless of the value of \(|ma|\geq 1\), the \(\mathcal{M}_{2}\) amplitude has a pole in the \(\varepsilon_{k}\) variable, corresponding to a state with mass \(m_{b}=2\sqrt{m^{2}-1/a^{2}}\). It resides on the real axis below the two-body threshold, \(\varepsilon_{k}<2m\). If \(a>0\), the pole is on the first sheet and is associated with a two-body bound state. Otherwise, it is a virtual state on the second sheet.
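As a quick illustration of this statement (an aside, not the authors' numerics), one can scan the denominator of Eq. (4) and confirm that it vanishes at \(\varepsilon_{k}=m_{b}=2\sqrt{m^{2}-1/a^{2}}\):

```python
import numpy as np

def m2_denominator(eps, m, a):
    """Denominator of Eq. (4) as a function of the pair energy eps (first sheet)."""
    q = np.sqrt(eps**2 / 4 - m**2 + 0j)   # relative pair momentum, Im q > 0 below threshold
    return -1.0 / a - 1j * q

m, a = 1.0, 5.0                            # units of the particle mass; ma = 5
mb = 2 * np.sqrt(m**2 - 1 / a**2)          # predicted two-body bound-state mass
print(abs(m2_denominator(mb, m, a)))       # ~0: the dimer pole sits at eps = m_b
```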
_Analytic continuation:_ Following our previous work in Refs. [61, 67], we numerically solve Eq. (3) to obtain \(\mathcal{M}_{3}\) in the complex \(E^{2}\) plane on the physical and the nearest unphysical sheets. Analytic continuation to the complex plane depends on the nature of the singularities of the three-body scattering amplitude encoded in Eq. (3).
In addition to potential poles, the \(\mathcal{M}_{3}\) amplitude has a logarithmic branch cut inherited from the partial-wave projected propagator, \(G\). Furthermore, it has two possible physical thresholds manifesting as corresponding branch points. These are the square-root bound-state-spectator threshold at \(E_{\text{thr}}^{(\varphi b)}=m+m_{b}\) and the logarithmic three-body threshold at \(E_{\text{thr}}^{(3\varphi)}=3m\)[68, 69]. Unphysical sheets are associated with these two singularities.
The emergence of these thresholds in Eq. (3) has a nonperturbative origin. The three-body amplitude inherits the singularities of \(\mathcal{M}_{2}\) in the external momentum variables, \(p\) and \(k\). The threshold branch points of \(\mathcal{M}_{3}(p,k)\) in the \(E^{2}\) plane emerge from the second term of the integral equation when these energy-dependent singularities in the \(k^{\prime}\) variable coincide with the origin of the integration interval, \(k^{\prime}=0\)[61, 69, 70]. The branch cut at \(E=E_{\text{thr}}^{(\varphi b)}\) arises from the collision of point \(k^{\prime}=0\) with the two-body bound-state pole. The \(E_{\text{thr}}^{(3\varphi)}\) branch point appears from the collision with the square-root branch point of \(\mathcal{M}_{2}\).
This observation suggests a procedure for the analytic continuation of the amplitude defined by the integral equation. Namely, to extend \(\mathcal{M}_{3}\) to the unphysical Riemann sheets of the \(E^{2}\) plane, either through the \(\varphi b\) or the \(3\varphi\) cut, one needs to avoid integrating over the discontinuities associated with the above-mentioned collisions, i.e., avoid coincidence of the integration interval with the pole or threshold cut in \(k^{\prime}\). We accomplish this by deforming the integration contour into the complex \(k^{\prime}\) momentum plane in Eq. (3). In doing this, we ensure that the deformed integration path avoids logarithmic branch points of \(G(p,k^{\prime})\) and all other singularities induced by the non-perturbative nature of the equation [6]. We give a more detailed description of this procedure in the supplemental material.
_Efimov trimers:_ Close to the unitarity limit, i.e., for \(|ma|\gg 1\), we find that \(\mathcal{M}_{3}\) develops multiple bound state poles. We observe that their binding energies, \(\Delta E_{n}=E_{n}-E_{\text{thr}}^{(3\varphi)}\), obey the discrete scaling symmetry given in Eq. (1), which is characteristic of the Efimov phenomenon. The quotient \(Q_{a}\to\lambda\) as \(ma\to\infty\), confirming that the relativistic framework recovers the expected Efimov scaling. In addition, in the supplement, we show that the vertex functions \(\Gamma_{n}(p)\) agree with the prediction of Ref. [71] for non-relativistic momenta, \(p\ll m\).
By dialing \(ma\) to smaller values, we trace the trimers on their trajectories that span across multiple Riemann sheets associated with the dimer-particle and three-particle cuts. The trimers evolve from the virtual states (small, positive \(a\)) through bound states (large \(a\) of both signs) to resonances (small, negative \(a\)). In Fig. 1, we present their trajectories on the so-called Efimov plot, i.e., in the \((\kappa,1/a)\) plane, where \(\kappa=\operatorname{sign}(\operatorname{Re}\Delta E)\sqrt{|m\operatorname{Re }\Delta E|}\).
At small values of \(a>0\), the ground state exhibits a noticeable deviation from the scaling behavior. Nevertheless, its qualitative features (e.g., pole trajectory, residues) remain analogous to shallow trimers of non-relativistic binding energies.
All excited states follow similar trajectories. They emerge as virtual states on the unphysical \(\varphi b\) sheet from the logarithmic cut inherited by \(\mathcal{M}_{3}\) from the one-particle exchange amplitude, \(G\). They approach the dimer-particle threshold and move to the first sheet, becoming bound states. They remain bound states for large negative values of \(a\) and evolve to become resonances on the nearest unphysical sheet associated with the logarithmic \(3\varphi\) threshold cut.
Figure 2: Trajectories of the first three resonances on the nearest unphysical Riemann sheet of the complex \(\Delta E\) plane. Energies of the 2nd and 3rd trimers are rescaled by \(\lambda^{2}\) and \(\lambda^{4}\), respectively. At large \(|ma|\) and close to the threshold, all trajectories exhibit discrete scaling symmetry. As \(|ma|\) decreases, the scaling symmetry breaks down, although, for the second and third resonant states, the discrepancy between the trajectories remains small.
We trace their motion on this sheet, in the complex \(\Delta E\) variable, and present it in Fig. 2. At a given finite value of \(ma\), there is only a single resonance pole in the unphysical sheet. As we decrease \(ma\) from zero to \(ma=-8.71\), the "ground state" resonance moves to the three-body threshold on an arc from complex infinity. It is natural since \(ma\to 0\) corresponds to no dynamics and the removal of all but the free states from the spectrum.
By contrast, the excited three-body resonances follow cyclic trajectories, which start and end at the three-body threshold and accumulate near this point. By rescaling them by an appropriate power of \(\lambda^{2}\), we observe that they nearly overlap, providing additional evidence of the discrete scale invariance in the three-boson system. The loop-like trajectories of the Efimov resonances were previously noticed in Ref. [72], where a non-relativistic approach was used.
Moreover, we discover an interesting pattern as these excited trimers move between the physical and unphysical Riemann sheets. Namely, the first excited resonance of energy \(\Delta E_{2}\) emerges from the threshold on the unphysical sheet at the same value, \(ma\approx-8.71\), at which the ground state resonance reaches this point and becomes a bound state on the physical energy plane. It leads to a "missing poles" problem: one pole reaches the threshold, and two emerge. This behavior is repeated for all states, i.e., whenever the \(n^{\text{th}}\) resonance enters the threshold, the \((n+1)^{\text{th}}\) resonance appears on the unphysical sheet and the \(n^{\text{th}}\) bound state on the physical one. Moreover, we find that the residues of all three poles converge to the same value when they approach the \(3\varphi\) branch point.
This puzzling behavior violates our expectation that the number of poles, equivalent to the number of physical states, must be conserved when one varies the theory parameters. The only exception is the instance of lifted spectrum degeneracy, which we verify does not happen in our system by studying the order of trimer poles.
We propose a possible resolution to this puzzle by noting that the three-body scattering amplitude has infinitely many unphysical sheets; see Fig. 3. We label the two nearest sheets \(\pm 1\) while denoting the physical sheet by \(0\). States in the \(-1\) plane are the complex-conjugate or "_mirror_" poles of those in the \(+1\) sheet. This is unlike the two-body case, where the Schwarz reflection principle ensures that a resonance pole has its mirror image on the same sheet. For the three-body amplitude, similarly to a complex logarithm, it implies a reflection between the \(\pm n\) sheets.
We conjecture that the missing poles come from the higher Riemann branches, one from each. The \(n^{\text{th}}\) state approaches the threshold from complex infinity on the \(n^{\text{th}}\) Riemann sheet and moves to the \((n-1)^{\text{th}}\) one, where it starts evolving on a cyclic trajectory. Eventually, in the unitarity limit, it travels to the physical energy plane, contributing to the geometric series of bound states. We depict this idea in Fig. 3, where the dashed lines represent trajectories of \(n\geq 4\) states. Reference [72] did not address the "missing poles" issue.
One could verify this conjecture by analytically continuing the amplitude to the higher Riemann sheets. Although we are currently unable to extend our solution to the other sheets, we have performed numerical extrapolations presented in the supplemental material that further support this conjecture. Our proposal only partially resolves the puzzle. Whenever a resonance approaches the threshold, its "mirror" image does the same. Yet, we observe only one bound state emerging on the physical energy plane. The "mirror" poles seem to vanish when meeting their complex-conjugate partners at the threshold, which violates our expectation about the conservation of the number of states.
_Discussion:_ To summarize, we found and presented the emergence of the Efimov effect from the relativistic three-body scattering equations. In particular, we discovered evidence of the discrete scaling symmetry in the trajectories of resonances in the nearby unphysical sheet of complex energy. By studying the evolution of the spectrum onto unphysical sheets, we make several observations suggesting that the Efimov phenomenon is closely related to the logarithmic nature of the three-body unitarity cut, i.e., the presence of infinitely many branches.
Figure 3: Riemann surfaces of the three-body amplitude in \(\Delta E\). The trajectories of the trimers are shown, along with three bound states positions for some \(a\). On the nearest unphysical sheet, the \(2^{\text{nd}}\) and \(3^{\text{rd}}\) trimer exhibit the cyclic behavior as shown in Fig. 2. Mirror poles are found by continuing up to sheet \(-1\). We postulate that the higher trimers come from the further unphysical sheets (\(\geq+2\)), where their cyclic behavior repeats.
At the same time, our conjecture about the behavior of the trimer trajectories cannot be the end of the story because of the mirror fashion in which \(+\) and \(-\) sheets contribute trimer poles to the physical sheet.
The "missing pole" problem is not just a mathematical curiosity but points to a deficiency in our knowledge about the analytic structure and properties of three-particle scattering amplitudes. This, in turn, affects our understanding of the nature of particles that couple strongly to three-particle states [73; 74; 75; 76; 77; 78; 79; 80; 81; 82]. Having relativistic scattering amplitudes that satisfy unitarity and whose analytic structure we can fully control will impact a broad set of experimental, phenomenological, and lattice QCD studies. As a result, we close by encouraging further investigations along these lines.
_Acknowledgements:_ The authors thank T. Hyodo, S. Sharpe, M. Baker, and S. Mizera for valuable discussions. SMD is supported by U.S. Department of Energy Contract no. DE-SC0011637. RAB and MHI acknowledge the support of the USDOE Early Career award, contract DE-SC0019229. MHI acknowledges the support from the Jefferson Science Associates/Jefferson Lab graduate fellowship program. AWJ acknowledges the support of the USDOE ExoHad Topical Collaboration, contract DE-SC0023598.
|
2301.02419
|
Exploring Efficient Few-shot Adaptation for Vision Transformers
|
The task of Few-shot Learning (FSL) aims to do the inference on novel
categories containing only few labeled examples, with the help of knowledge
learned from base categories containing abundant labeled training samples.
While there are numerous works into FSL task, Vision Transformers (ViTs) have
rarely been taken as the backbone to FSL with few trials focusing on naive
finetuning of whole backbone or classification layer. Essentially, although
ViTs have been shown to enjoy comparable or even better performance on other
vision tasks, it is still very nontrivial to efficiently finetune the ViTs in
real-world FSL scenarios. To this end, we propose a novel efficient Transformer
Tuning (eTT) method that facilitates finetuning ViTs in the FSL tasks. The key
novelties come from the newly presented Attentive Prefix Tuning (APT) and
Domain Residual Adapter (DRA) for the task and backbone tuning, individually.
Specifically, in APT, the prefix is projected to new key and value pairs that
are attached to each self-attention layer to provide the model with
task-specific information. Moreover, we design the DRA in the form of learnable
offset vectors to handle the potential domain gaps between base and novel data.
To ensure the APT would not deviate from the initial task-specific information
much, we further propose a novel prototypical regularization, which maximizes
the similarity between the projected distribution of prefix and initial
prototypes, regularizing the update procedure. Our method receives outstanding
performance on the challenging Meta-Dataset. We conduct extensive experiments
to show the efficacy of our model.
|
Chengming Xu, Siqian Yang, Yabiao Wang, Zhanxiong Wang, Yanwei Fu, Xiangyang Xue
|
2023-01-06T08:42:05Z
|
http://arxiv.org/abs/2301.02419v1
|
# Exploring Efficient Few-shot Adaptation for Vision Transformers
###### Abstract
The task of Few-shot Learning (FSL) aims to do the inference on novel categories containing only few labeled examples, with the help of knowledge learned from base categories containing abundant labeled training samples. While there are numerous works on the FSL task, Vision Transformers (ViTs) have rarely been taken as the backbone to FSL with few trials (Hu et al., 2022; Evci et al., 2022; Abnar et al.) focusing on naive finetuning of whole backbone or classification layer. Essentially, although ViTs have been shown to enjoy comparable or even better performance on other vision tasks, it is still very nontrivial to efficiently finetune the ViTs in real-world FSL scenarios. To this end, we propose a novel efficient Transformer Tuning (eTT) method that facilitates finetuning ViTs in the FSL tasks. The key novelties come from the newly presented Attentive Prefix Tuning (APT) and Domain Residual Adapter (DRA) for the task and backbone tuning, individually. Specifically, in APT, the prefix is projected to new key and value pairs that are attached to each self-attention layer to provide the model with task-specific information. Moreover, we design the DRA in the form of learnable offset vectors to handle the potential domain gaps between base and novel data. To ensure the APT would not deviate from the initial task-specific information much, we further propose a novel prototypical regularization, which maximizes the similarity between the projected distribution of prefix and initial prototypes, regularizing the update procedure. Our method receives outstanding performance on the challenging Meta-Dataset. We conduct extensive experiments to show the efficacy of our model. Our code is available at [https://github.com/loader/eTT_TMLR2022](https://github.com/loader/eTT_TMLR2022).
## 1 Introduction
Modern computer vision models such as ResNet (He et al., 2016) and Faster R-CNN (Ren et al., 2015) are trained on large-scale training sets, and do not generalize well to the long-tail categories with few
labeled samples. Few-shot Learning (FSL) has thus been studied to make inference on insufficiently-labeled _novel_ categories typically with the transferable knowledge learned from _base_ categories which are provided with abundant labeled training samples. Essentially, the FSL can be taken as _representation learning_, as its backbones should ideally extract features representative and generalizable to various novel tasks. Currently Convolutional Neural Networks (CNNs), especially ResNet, are the predominant backbone and widely utilized in most existing FSL works (Ravi and Larochelle, 2017; Finn et al., 2017; Nichol et al., 2018; Li et al., 2017; Sun et al., 2019).
Recently, by taking the merits of Multi-headed Self-Attention (MSA) mechanism and Feed Forward Network (FFN), the transformers have been widely used in the recognition (Alexey et al., 2018; Liu et al., 2021), detection (Beal et al., 2020) and image editing (Cao et al., 2021). The general pipeline of Pretrain-(Meta-train)-Finetune has been explored in few ViTs on FSL (Hu et al., 2022; Evci et al., 2022; Abnar et al.), recently. Particularly, the ViT models are first pretrained/meta-trained on a large-scale dataset. Then a test-time finetune procedure is set up for each target task on novel data. The finetuning strategy can be generally categorized into linear classifier probing and backbone tuning: the former one optimizes the reasonable decision boundaries by the fixed embeddings, while the latter one considers the adaptation of both embedding space and classifier.
In this paper we focus on the backbone tuning method. (Hu et al., 2022) shows that the naive Pretrain-Meta-train-Finetune (P\(>\)M\(>\)F) baseline can generally have satisfactory performance in FSL. Unfortunately, it involves heavy computations and potential overfitting in the FSL setting. Particularly, (1) It typically demands extraordinary computing power to formulate episodes from a large number of support classes to update the whole network parameters. Thus it is less efficient in many real-case applications. For example, edge devices such as mobiles do not have enough computational power to adapt all model parameters to the personalized/specialized data collected on these devices. (2) It is very subtle and difficult to directly fine-tune trained deep models on one or two labeled instances per class, as such few-shot models will suffer from severe overfitting (Snell et al., 2017; Fei-Fei et al., 2006; Brian et al.). By contrast, humans have the ability of conducting few-shot recognition from even a single example of an unseen novel category with very high confidence.
Such problems may explain why their proposed finetuning strategy only works on part of the datasets and has little effect on the others. This suggests the limited usage of the ViT backbone for potential FSL applications. An alternative choice is to finetune specific layers in a ViT model with much fewer tunable parameters (ViT-s block in Fig. 1(a)). Such a strategy nevertheless can only finetune either low-level or high-level features, leading to inferior performance in many cases. Therefore, it is desirable to have an efficient and light-weight ViT tuning method that not only avoids overfitting to the small training samples, but also achieves high FSL performance.
In this paper, we present a novel efficient Transformer Tuning (eTT) for few-shot learning task, which adopts a pretrain-finetune pipeline. To pretrain our transformer, we advocate utilizing the recent self-supervised method - DINO (Caron et al., 2021). Our key novelties are in the finetuning stage. As illustrated in Fig. 1(b), we propose Attentive Prefix Tuning (APT) and Domain Residual Adapter (DRA) as the key components to our eTT, to efficiently learn the newly-introduced tunable parameters over novel support sets. Specifically, we formulate the attentive prototypes by aggregating patch embeddings with the corresponding attention weights of the class token for each image, so as to provide the model with abundant task-specific information and guide each self-attention layer to aggregate more class-related features. To encourage the prefix to keep the prior knowledge from initial prototypes, we further propose a novel prototypical regularization which restricts the relationship between the prefix and prototypes by optimizing the similarity of their projected distributions. Moreover, we propose to additionally adopt a light-weighted domain residual adapter in the form of learnable offset to deal with the potential failure of APT on large domain gaps. Extensive experiments are conducted to evaluate our eTT: we use the ViT-tiny and ViT-small backbones on the large-scale Meta-Dataset (Triantafillou et al., 2019) consisting of ten sub-datasets from different domains; and the results show that our model can achieve outstanding performance with comparable or even much fewer model parameters. Thus our eTT is a promising method on efficiently finetuning ViTs on the FSL tasks.
Our paper has the following contributions.
1. In order to solve the problem of inefficiency and make better use of ViT in FSL, we propose a novel
finetuning method named efficient Transformer Tuning (eTT).
2. Inspired by recent advance in language model, a novel attentive prefix tuning is presented utilizing the attentive prototypes to embed the task-specific knowledge into pretrained ViT model. Particularly, we propose a new initialization strategy tailored for FSL by leveraging prototypical information from the self-attention layers. Moreover, a novel domain residual adapter is repurposed to handle the various domain gaps between training and testing data.
3. We introduce a prototypical regularization term which can constrain the update procedure of prefix during finetuning to maintain the initial task-specific knowledge.
4. By utilizing the proposed eTT, our ViT models achieve remarkable performance on Meta-Dataset, surpassing the existing ResNet-based methods without using additional training data. More importantly, both the model scale and efficiency of our method are comparable with the other competitors, indicating the promising application of ViTs in FSL.
## 2 Related Works
**Few-shot recognition.** FSL learns transferable knowledge from base classes and adapts it to a disjoint set (novel classes) with limited training data. Among those FSL tasks, few-shot image recognition is the one that has received the most attention. Existing works can be grouped into two main categories. One is optimization-based methods (Ravi and Larochelle, 2017; Finn et al., 2017; Nichol et al., 2018; Li et al., 2017; Sun et al., 2019), which learn parameters that can be better finetuned on few-shot support sets. The other is metric-based methods such as ProtoNet (Snell et al., 2017), RelationNet (Sung et al., 2018), CAN (Hou et al., 2019), DMF (Xu et al., 2021), COSOC (Luo et al., 2021) and CTX (Doersch et al., 2020), which solve FSL by applying an existing or learned metric on the extracted features of images. Particularly, CTX (Doersch et al., 2020) builds up a cross attention module which interacts between query and support images to adaptively aggregate better prototypes than simply averaging all support features. While these methods perform well on classical few-shot learning settings, most of them adopt a ConvNet backbone, especially ResNet (He et al., 2016). We, in contrast, try to make full use of another widely-applied architecture, i.e. the ViT, in FSL, which requires extra design of the training and finetuning strategies.
**Transformer in vision tasks.** Transformers widely utilize the self-attention mechanism, which was originally employed to process feature sequences in Vaswani et al. (2017). Large-scale transformers have since become increasingly popular in NLP tasks to build complex language models, and have also been extended to vision tasks (Alexey et al.; Yuan et al., 2021; Liu et al., 2021b) by formulating the token sequence with image patches processed with position embedding. They have shown their efficacy in various applications, such as (Liu et al., 2021a) for image captioning, (Sun et al., 2020) for multiple object tracking and (Esser et al., 2021; Cao et al., 2021) for image inpainting and editing. Critically, ViTs are typically trained on very large-scale datasets, and little effort has been dedicated to training or finetuning them on few-shot supervision. We follow the pretrain-meta-train-finetune pipeline (Hu et al., 2022), but their method finetunes the whole ViT on few-shot examples, and thus is less efficient and can easily overfit. In contrast, our proposed eTT has the key components of DRA and APT, demanding much fewer tunable parameters while delivering much better performance.
Figure 1: (a) Compared with other backbones, we propose the Domain Residual Adapter (DRA) to tune many fewer parameters in our efficient Transformer Tuning (eTT), which is effective for large-scale FSL. (b) The few-shot support samples are first processed into attentive prototypes which are used to initialize the task-specific visual prefix. Then the prefix together with the domain adapter are attached to each layer of the ViT to finetune our ViTs.
**Finetuning algorithm for ViT.** The idea of finetuning large pretrained models on small-scale datasets has been partly investigated in the Natural Language Processing (NLP) community. Houlsby et al. (2019) proposed to attach two learnable bottleneck adapters to each transformer layer. Other works (Xiang and Percy; Brian et al.) make use of prompting, which trains a small task-specific prompt for each task so that the prompt can guide the model with knowledge corresponding to the task. Such a prompting idea from NLP is inherited and repurposed to finetune a learnable prefix for each novel episode in this paper. However, these works (Xiang and Percy; Brian et al.; Houlsby et al., 2019) initialize the prefix or prompt with word embeddings, which are not available in our problem. Instead, we propose an attentive prototype with regularization, initializing the visual prefix with object-centric embeddings. Additionally, we notice that a very good concurrent technical report (Jia et al., 2022) also studies finetuning visual prompts for pretrained ViTs in downstream tasks. We highlight the two key differences from our eTT. The first is about the initialization. While the initialization strategy does not matter in their method and the corresponding tasks, we show in our experiments that randomly initializing the prefix leads to sub-optimal performance in FSL, which necessitates a well-designed initialization. The second is that we further propose a regularization term to restrict the prefix, which has never been studied in existing works.
**Task-specific Adapter.** The idea of a task-specific adapter has been explored in several works like (Li et al., 2022; Rebuffi et al., 2017) to adapt CNNs to learn the full information from the support set. Besides, (Requeima et al., 2019; Bateni et al., 2020) adopt Feature-wise Linear Modulation (FiLM) layers (Perez et al., 2018) to adapt task-specific information into networks. In contrast, we repurpose the adapter as a domain residual to update transformer blocks in a more light-weight way with fewer learnable parameters. Beyond different structures, our proposed DRA intrinsically serves as a domain adapter rather than a meta-learner, as in the FSL methods of Rusu et al. (2018); Sun et al. (2019); Requeima et al. (2019). While these previous works require meta-training to optimize their adaptation modules, our method simply utilizes the novel support data to learn the DRA, thus reducing the training cost. Furthermore, our DRA is mostly tuned to bridge the visual domain gap between base and novel categories, thus improving the learning of APT on each episode task.
## 3 Methodology
### Problem Setup
We formulate few-shot learning in the meta-learning paradigm. In general, we have two sets of data, namely meta-train set \(\mathcal{D}_{s}=\{\left(\mathbf{I}_{i},y_{i}\right),y_{i}\in\mathcal{C}_{s}\}\) and meta-test set \(\mathcal{D}_{t}=\{\left(\mathbf{I}_{i},y_{i}\right),y_{i}\in\mathcal{C}_{t}\}\) which contain the base and novel data respectively and are possibly collected from different domains. \(\mathcal{C}_{s}\) and \(\mathcal{C}_{t}\) (\(\mathcal{C}_{s}\cap\mathcal{C}_{t}=\emptyset\)) denote base and novel category sets. FSL aims to train a model on \(\mathcal{D}_{s}\) which is generalizable enough on \(\mathcal{D}_{t}\). In the testing phase, the model can learn from few labelled data from each category of \(\mathcal{C}_{t}\).
While most previous FSL works (Snell et al., 2017; Sung et al., 2018) utilize the \(N\)-way \(K\)-shot setting of mini-ImageNet, i.e., \(K\) training samples from each of \(N\) classes, we follow CTX (Doersch et al., 2020) and adopt the setting of the large-scale Meta-Dataset (Triantafillou et al., 2019). In each episode \(\mathcal{T}\), \(N\) is first uniformly sampled from \([5,N_{max}]\), where \(N_{max}\) equals \(\min(50,|\mathcal{C}_{s}|)\) or \(\min(50,|\mathcal{C}_{t}|)\) at the training or testing stage, respectively. \(N\) is supposed to be accessible knowledge during both training and testing. In the most naive case, one can get \(N\) by directly counting the number of support classes. From each of the sampled categories, \(M\) query samples are randomly selected, thus constructing the query set \(\mathcal{Q}=\{(\mathbf{I}_{i}^{q},y_{i}^{q})\}_{i=1}^{N_{Q}}\). After that, a random number of samples is taken from the remaining samples of these categories to form the support set \(\mathcal{S}=\{(\mathbf{I}_{i}^{\mathrm{supp}},y_{i}^{\mathrm{supp}})\}_{i=1}^{N_{S}}\). Note that compared to the classical \(N\)-way \(K\)-shot setting,
such a setting generates class-imbalanced support sets, and different episodes contain different numbers of support samples. This is much more challenging to the model and learning algorithms, as they shall handle both extremely large and small support sets.
### Overview of Our Method
To handle the optimization of various episodes on large-scale dataset, we present our novel finetuning model - efficient Transformer Tuning (eTT) as shown in Fig. 2. Our eTT follows the pipeline in Hu et al. (2022), and has key stages of the pretraining and finetuning. We employ DINO as pretraining, and conduct the task tuning by attentive prefix tuning (Sec. 3.4), and backbone tuning with domain residual adapter (Sec. 3.5).
**Pre-training**. As previous work (Hu et al., 2022) shows the importance of self-supervised pre-training to learning vision transformer models, we adopt the same principle and introduce the self-supervised learning model to pre-train our ViT backbone on base data. Specifically, we utilize the recent State-of-the-art self-supervised ViT models - DINO (Caron et al., 2021) to pretrain our model. DINO builds up supervision based on a self-distillation framework by using the multi-crop strategy (Caron et al., 2020). As we will show in our experiments, such a pre-trained model shall have good cluster property even among cross domain images, potentially benefiting our following FSL stages. Note that different from (Hu et al., 2022) which takes an off-the-shelf model pretrained with DINO on full ImageNet, we strictly follow the FSL protocols to retrain the DINO models on the meta-train split in the target dataset to avoid the abuse of extra data.
One would ask whether it is necessary to make use of the annotations for base data, since supervised pretraining has been proven to be effective in many previous FSL works (Ye et al., 2020; Hou et al., 2019). As we will show in the experiments, an additional finetuning with image labels on base data does not bring consistent improvement and even hurts performance on most datasets, which may be because overfitting to the image labels reduces the generalization ability across different domains. Moreover, compared with vanilla supervised training, the attention maps for models trained by DINO contain more semantic information, which we will utilize in the following context.
### Preliminary: Vanilla Test-time Finetuning
Before fully developing our fine-tuning contributions, we review the simple and effective finetuning method named LT+NCC (Li et al., 2021). The novel modules proposed in the following sections are all adopted together with this simple baseline method. Given a ViT backbone \(f_{\theta}\) that is parameterized by \(\theta\) and an episode \(\mathcal{T}\), the support features \(\{x_{i}^{supp}\}_{i=1}^{N_{S}}\), where \(x_{i}^{supp}=f_{\theta}(\textbf{I}_{i}^{supp})\), are extracted from the support set
Figure 2: Schematic illustration of our proposed model. For each image, we first fetch its patch embedding sequence and the attention score with regard to the last layer’s class token, from which the image embedding can be computed. Then the visual prefix is initialized as the attentive prototypes of image embeddings. The prefix, together with the proposed domain residual adapter are attached to the model. The final features are processed with an extra linear transformation layer and predicted with ProtoNet. Dashed arrows denote forward propagation before test-time finetuning.
\(\{\mathbf{I}^{supp}_{i}\}_{i=1}^{N_{S}}\). Then, a learnable linear transformation \(\phi\) is added to the model to realize the adaptation, which results in the final support features used for classification, \(\{\hat{x}^{supp}_{i}\}_{i=1}^{N_{S}}\), where \(\hat{x}^{supp}_{i}=\phi(x^{supp}_{i})\). The prediction for these support images can thus be calculated based on the similarity between the transformed features and the aggregated prototypes as,
\[\bar{x}_{c}=\frac{1}{\sum_{i=1}^{N_{S}}\mathbbm{1}_{y^{supp}_{i}=c}}\sum_{i=1}^{N_{S}}\hat{x}^{supp}_{i}\mathbbm{1}_{y^{supp}_{i}=c}\qquad\hat{y}^{supp}_{i}(c)=\frac{\exp(d(\hat{x}^{supp}_{i},\bar{x}_{c}))}{\sum_{c^{\prime}=1}^{N}\exp(d(\hat{x}^{supp}_{i},\bar{x}_{c^{\prime}}))} \tag{1}\]
where \(d\) denotes cosine similarity, i.e., \(d(a,b)=\frac{a^{T}b}{\|a\|\|b\|}\). We fix all of the parameters in the original backbone, and adopt the cross entropy loss to optimize the transformation \(\phi\). Precisely speaking, for each support image \(\mathbf{I}^{supp}\) together with its annotation \(y^{supp}\), the objective function is as follows:
\[\ell_{CE}=-y^{supp}\cdot\log\hat{y}^{supp} \tag{2}\]
After finetuning, \(\phi\) is applied to query features and the same procedure as above is performed between the processed query features \(\{\hat{x}^{q}_{i}\}\) and prototypes \(\{\bar{x}_{c}\}_{c=1}^{N}\) for the inference of each episode.
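A minimal sketch of this LT+NCC step, written in PyTorch-style code with names of our own choosing; the backbone features are taken as given and only the linear transformation \(\phi\) is trainable.

```python
import torch
import torch.nn.functional as F

def lt_ncc_logits(support_feats, support_labels, query_feats, phi, n_way):
    """LT+NCC sketch: prototype-based predictions after the learnable transform phi.
    support_feats: (N_S, d) frozen backbone features; support_labels: (N_S,) in [0, n_way);
    query_feats: (N_Q, d); phi: e.g. torch.nn.Linear(d, d) -- the only trainable module here."""
    s_hat = phi(support_feats)
    protos = torch.stack([s_hat[support_labels == c].mean(0)   # class prototypes (Eq. 1)
                          for c in range(n_way)])
    q_hat = F.normalize(phi(query_feats), dim=-1)
    return q_hat @ F.normalize(protos, dim=-1).t()             # cosine similarities

# Test-time finetuning of phi on the support set only (assumed loop):
# logits = lt_ncc_logits(support_feats, support_labels, support_feats, phi, n_way)
# loss = F.cross_entropy(logits, support_labels)
```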
### Task Tuning by Attentive Prefix Tuning
We finetune the pre-trained ViT with the support set via an attentive prefix tuning strategy. Specifically, a prefix matrix \(\theta_{P}\in\mathbb{R}^{N_{P}\times d}\) is first initialized, where \(N_{P}\) denotes the number of prefix tokens. Then a bottleneck \(g\) is added upon \(\theta_{P}\) to produce \(\hat{\theta}_{P}\in\mathbb{R}^{N_{P}\times(2Ld)}\), where \(L\) denotes the number of backbone layers. The module \(g\) plays the same role as the projector in each self-attention layer, except that all layers share the same module. The produced \(\hat{\theta}_{P}\) can be reshaped and seen as \(L\) value and key pairs \(\{\theta^{l}_{v},\theta^{l}_{k}\}_{l=1}^{L},\theta^{l}_{v},\theta^{l}_{k}\in\mathbb{R}^{N_{P}\times d}\). The MSA block in the \(l\)-th layer can then be reformed by attaching these new pairs to the original key and value sequences:
\[A^{l}=\text{Attn}(Q,\left[K;\theta^{l}_{k}\right])\qquad\text{output}=A^{l}\left[V;\theta^{l}_{v}\right] \tag{3}\]
where \(\left[\cdot;\cdot\right]\) denotes concatenation, Attn denotes the calculation of MSA matrices. In this way, the prefix can affect the attention matrix \(A^{l}\) and result in different output features from the original ones.
**Remark**. Compared with the naive strategy that finetunes specific layers in ViT (ViT-s block in Fig. 1(a)) which can only adjust part of blocks, the prefix can evenly adapt each layer's image embedding with almost the same parameter size as one transformer layer, as shown in Tab. 1(a). By fixing the model parameters and optimizing the prefix \(\theta_{P}\) and the transformation module \(g\), the support knowledge can be smoothly embedded into the prefix, which further helps the task adaptation.
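The following sketch illustrates Eq. (3) and the shared bottleneck \(g\) in PyTorch-style code; it is a single-head simplification with hypothetical module names, not the authors' implementation.

```python
import torch

class PrefixGenerator(torch.nn.Module):
    """Shared bottleneck g: prefix (N_P, d) -> per-layer key/value pairs (L, 2, N_P, d)."""
    def __init__(self, d, n_layers, hidden):
        super().__init__()
        self.g = torch.nn.Sequential(torch.nn.Linear(d, hidden), torch.nn.GELU(),
                                     torch.nn.Linear(hidden, 2 * n_layers * d))
        self.n_layers, self.d = n_layers, d

    def forward(self, prefix):                       # prefix theta_P: (N_P, d)
        out = self.g(prefix)                         # (N_P, 2*L*d)
        return out.view(-1, self.n_layers, 2, self.d).permute(1, 2, 0, 3)

def prefix_attention(Q, K, V, theta_k, theta_v):
    """Eq. (3): append the learned prefix rows to the frozen layer's keys and values.
    Single-head simplification; Q, K, V: (n_tokens, d), theta_k/theta_v: (N_P, d)."""
    K_aug = torch.cat([K, theta_k], dim=0)
    V_aug = torch.cat([V, theta_v], dim=0)
    A = torch.softmax(Q @ K_aug.t() / K.shape[-1] ** 0.5, dim=-1)
    return A @ V_aug                                 # output tokens, same shape as Q
```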
**Attentive Prototype**. The initialization of the prefix is very important to our APT, as it greatly boosts the performance. Critically, quite different from the prefix or prompt tuning in NLP and visual-context tasks that have task-specific instructions explicitly as word embedding sequences, each episode in our FSL only has the few support images and their labels. Thus, rather than steering the model with _'what should be done'_ as in Xiang & Percy, our APT shall provide the model with _'what we have globally'_ by leveraging the class-specific information. To this end, the attentive prototype is presented to aggregate the image embeddings with object-centric attention, as the initialization of the prefix. Particularly, each support image \(\mathbf{I}^{supp}\) is first transformed to a patch embedding sequence \(\{\tilde{x}^{supp}_{m}\}_{m=1}^{P^{2}}\) with the starting patch embedding layer,
\[\tilde{x}^{supp}_{m}=f_{\theta_{p_{e}}}(I^{supp}_{m})+E^{pos}_{m} \tag{4}\]
where \(m=1,\cdots,P^{2}\) is the patch index; \(f_{\theta_{p_{e}}}\) denotes the patch embedding layer, which is typically a convolutional layer whose kernel size equals the patch size; and \(E^{pos}\) indicates the position embedding. Meanwhile, we can get the unnormalized attention scores \(A\in\mathbb{R}^{h\times P^{2}}\) between the class token and image patches from the last MSA layer, where \(h\) denotes the number of heads in each MSA module. Such an attention vector can focus on the foreground in the image, especially for models trained with DINO (Caron et al., 2021), with each head indicating a particular part or an object. We can thus get the initial image-level representation
\[\hat{A}=\sigma(A)\qquad\hat{x}^{supp}=\frac{1}{h}\sum_{n=1}^{h}\sum_{m=1}^{P^{2 }}\hat{A}_{nm}\tilde{x}^{supp}_{m} \tag{5}\]
where \(\sigma\) is the softmax function. Compared with simply averaging all patch embeddings, the attentive embeddings can highlight the objects of interest and suppress the background information. Then the prototypes \(\bar{x}\) can be calculated by averaging the attentive image embeddings belonging to each support category. We set the number of prefixes to \(N\), which is available during testing for each episode, and initialize the prefix with \(\bar{x}\).
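The attentive-prototype initialization of Eqs. (4)-(5) can be sketched as follows (PyTorch-style, names ours); the patch embeddings and the last-layer class-token attention are assumed to be exposed by the frozen backbone.

```python
import torch

def attentive_prototypes(patch_emb, cls_attn, labels, n_way):
    """patch_emb: (N_S, P*P, d) position-encoded patch embeddings (Eq. 4);
    cls_attn:  (N_S, h, P*P) unnormalized class-token attention from the last MSA layer;
    returns (n_way, d) attentive prototypes used to initialize the visual prefix."""
    attn = torch.softmax(cls_attn, dim=-1)                       # Eq. (5), per head
    img_emb = torch.einsum('nhp,npd->nd', attn, patch_emb) / attn.shape[1]
    return torch.stack([img_emb[labels == c].mean(0) for c in range(n_way)])
```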
**Remark**. In this way, commonly-used prototypes can provide the model with comprehensive information about the episode. Also, such first-order statistics are comparable in scale with the normal patch features across the layers, which makes training more stable. When \(N\) is large, more prefix tokens are required to fully learn the information included in each episode. On the other hand, when \(N\) is small so that the episode is relatively easy, fewer prefix tokens can handle the support knowledge while decreasing the computational cost.
### Backbone Tuning by Domain Residual Adapter
Finetuning few-shot tasks by APT will make a good balance between performance and efficiency. To further improve the model generalization ability on different domains, we further propose the backbone tuning by leveraging the Domain Residual Adapters (DRA), as illustrated in Fig. 2. Specifically, for the \(l\)-th transformer layer, we attach two learnable offset vectors \(\delta^{l}_{a},\delta^{l}_{f}\in\mathbb{R}^{d}\) to the MSA and FFN. After features are processed with MSA and FFN, the corresponding offsets are added to them so that the extreme domain gap can be neutralized. These offsets are expected to represent the gap between source and target domains, and transfer the original manifold to a more appropriate one.
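A minimal sketch of a frozen ViT block augmented with the DRA offsets is given below; the pre-norm layout and module names are assumptions, since only the placement of the offsets after MSA and FFN is specified above.

```python
import torch

class DRABlock(torch.nn.Module):
    """Frozen ViT block with learnable domain-residual offsets (sketch; pre-norm assumed)."""
    def __init__(self, msa, ffn, norm1, norm2, d):
        super().__init__()
        self.msa, self.ffn, self.norm1, self.norm2 = msa, ffn, norm1, norm2
        self.delta_a = torch.nn.Parameter(torch.zeros(d))   # offset added to the MSA output
        self.delta_f = torch.nn.Parameter(torch.zeros(d))   # offset added to the FFN output

    def forward(self, x):
        x = x + self.msa(self.norm1(x)) + self.delta_a
        x = x + self.ffn(self.norm2(x)) + self.delta_f
        return x
```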
### Loss Functions
**Prototypical Regularization**. In addition to the cross entropy loss in Eq. 2, we propose a novel prototypical regularization to ensure the class-specific information, which is embedded in the prefix via initialization, can be maintained during update. The knowledge in attentive prototypes is distilled to the prefix during finetuning. Concretely, in each iteration, the prototypes \(\bar{x}\) and prefix \(\theta_{P}\) are first projected to a latent space via a projector module \(\psi\), which produces \(\bar{x}^{\prime}\) and \(\theta^{\prime}_{P}\) respectively. Then the distillation loss is computed using these two embeddings as,
\[\ell_{dist}=\frac{1}{N}\sum_{n=1}^{N}H(\bar{x}^{\prime n},\theta^{\prime n}_{P}) \tag{6}\]
where \(H(a,b)=-a\log b\). The above objective function can ensure that the prototype of each category and the corresponding prefix contain consistent information, which is indicated by the similarity between their distributions after projection. To make training more stable and avoid collapse, for each episode we maintain an exponential moving average (EMA) of \(\bar{x}^{\prime}\) as the center variable \(x_{center}\). Before calculating \(\ell_{dist}\), we standardize \(\bar{x}^{\prime}\) as \(\sigma(\frac{\bar{x}^{\prime}-x_{center}}{\tau})\), where \(\sigma\) denotes the softmax function and \(\tau\) is the temperature, typically set to 0.04.
Once having both of the above losses calculated, we can optimize the model parameters including the DRA, the prefix together with the transformation \(g\) and the projector \(\psi\), with the following objective function:
\[\mathcal{L}=\ell_{CE}+\lambda\ell_{dist} \tag{7}\]
where the scalar weight \(\lambda\) controls the strength of the regularization.
**Remarks**. For a ViT with \(L\) layers, \(n_{h}\) heads and feature dimension \(d\), the size of the trainable parameters is \((N+d^{\prime}+d_{proj}+d)d+2(d^{\prime}+1)Ld\), where \(d^{\prime}\) is the hidden dimension of the transformation module \(g\) and \(d_{proj}\) denotes the output dimension of the projector \(\psi\); this is much smaller than the size of the whole backbone model. Specifically, the learnable modules used during finetuning contain only about 9% of the parameters of the whole transformer model when using ViT-small and ViT-tiny.
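Plugging in the settings of Sec. 4.1 makes the 9% figure concrete; the snippet below simply evaluates the stated formula for a hypothetical 5-way episode (the value of \(N\) is an assumption).

```python
def trainable_params(N, d, L, d_hidden, d_proj):
    """Parameter count from the Remark: (N + d' + d_proj + d)d + 2(d' + 1)Ld."""
    return (N + d_hidden + d_proj + d) * d + 2 * (d_hidden + 1) * L * d

# ViT-small: L = 12, d = 384; d' = d / 2 and d_proj = 64 as in Sec. 4.1; N = 5 is hypothetical
n = trainable_params(N=5, d=384, L=12, d_hidden=192, d_proj=64)
print(n, n / 21.97e6)  # about 2.0M parameters, i.e. roughly 9% of ViT-small
```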
## 4 Experiments
### Experimental Setup
**Datasets.** We use Meta-Dataset (Triantafillou et al., 2019) - the most comprehensive and challenging large-scale FSL benchmark. It has 10 sub-datasets such as ImageNet (Deng et al., 2009) and Omniglot (Lake et al., 2015), with various domain gaps. Our experiments are conducted under the single training source setting, i.e. only ImageNet is used for training, and the meta-test splits of all ten datasets are used for evaluation. Some of the test datasets such as CUB share similar or highly-related categories with ImageNet, while the others have greater domain gaps. Note that Hu et al. (2022) claim that pretraining on all images in the training set of ImageNet is reasonable for introducing extra data and boosting the performance. However, such a strategy utilizes many more training samples (1.28M images, 1000 classes in ImageNet vs. 0.9M images, 712 classes in the meta-train split of ImageNet). Empirically, this many additional images can greatly benefit the generalization ability of self-supervised learning methods. Therefore, to make a fairer comparison, we strictly follow the experiment protocol used in CTX (Doersch et al., 2020) and do not use any extra data even in the unsupervised pretraining stage. We resize all images to \(224\times 224\) for ViT-small and \(84\times 84\) for ViT-tiny.
**Implementation details.** We set the patch size to 8 for ViT-tiny (as it has a small input image size), and keep the other hyper-parameters as default. We adopt a standard ViT-small with 12 layers, 6 attention heads, feature dimension 384 and patch size 16. We strictly follow the hyper-parameter setting and data augmentation of DINO (Caron et al., 2021) for pretraining. In test-time finetuning, we empirically set the hidden dimension \(d^{\prime}\) of the transformation module to \(d/2\), and the output dimension \(d_{proj}\) of the projector to 64 for all datasets. We use the AdamW optimizer for finetuning, with the learning rate set to \(1e-3\) for TrafficSign and \(5e-4\) for the other datasets. \(\lambda\) is set to 0.1. For simplicity, the selection of hyper-parameters is conducted on the meta-validation set of ImageNet, which is the only within-domain setting in Meta-Dataset.
**Evaluation benchmark.** We report the accuracy over 600 randomly sampled episodes for each dataset, and the average accuracy when comparing with existing methods. A comprehensive comparison of both accuracy and 95% confidence intervals is given in the Appendix.
### Comparison with State-of-the-art Methods
Before the comprehensive comparison, it is necessary to show that the comparison between different backbones is fair, since our backbone model is not the same as those of existing methods. We therefore present a comparison of parameter size and FLOPs in Tab. 1, in which the FLOPs of all models are computed with fvcore1. The results show that (1) compared with Res18, ViT-tiny is a much smaller and more efficient model, and (2) ViT-small is approximately comparable to Res34. In this way, the comparison of our proposed method with state-of-the-art methods is reasonable and fair.
Footnote 1: [https://github.com/facebookresearch/fvcore](https://github.com/facebookresearch/fvcore)
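A hedged sketch of how such numbers can be obtained with fvcore is given below; `build_vit_small()` is a placeholder for whatever model constructor is actually used, not part of fvcore.

```python
import torch
from fvcore.nn import FlopCountAnalysis, parameter_count

model = build_vit_small()            # hypothetical constructor for the backbone
x = torch.randn(1, 3, 224, 224)      # one 224x224 RGB image
print(FlopCountAnalysis(model, x).total() / 1e9, "GFLOPs")
print(parameter_count(model)[""] / 1e6, "M parameters")
```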
We compare our model with ProtoNet (Snell et al., 2017), CTX (Doersch et al., 2020), TSA (Li et al., 2022), etc. These methods take ResNet18 or ResNet34 as backbones. Also, the pretrain-meta-train-finetune baseline (P\(>\)M\(>\)F) (Hu et al., 2022) is not considered in computing the average rank since it uses extra data. As shown in Tab. 2, when using ViT-small as the backbone, whose parameter size is comparable to that of ResNet34, our model achieves an average rank of 1.6 over all datasets. Specifically, on Texture and Fungi, our model outperforms the strongest competitors CTX and TSA by about 8% and 10%, while on the other datasets the performance of our model is still comparable with or slightly better than that of the existing methods. We notice that our model
\begin{table}
\begin{tabular}{l|l|c c} \hline \hline Backbone & Image size & Params(M) & FLOPs(G) \\ \hline \hline Res18 & 84\(\times\)84 & 11.69 & 1.82 \\ \cline{2-3} ViT-tiny & 84\(\times\)84 & 5.38 & 0.72 \\ \cline{2-3} Res34 & 224\(\times\)224 & 21.80 & 3.68 \\ \cline{2-3} ViT-small & 224\(\times\)224 & 21.97 & 4.61 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of parameter size and FLOPs between different backbones.
is inferior to the best ones on Omniglot, which is reasonable. Since Omniglot images represent simple characters with monotone color patterns, each image patch contains less information than patches from images in other datasets. Vanilla ViTs are less efficient at dealing with such patches due to the limited interaction among patch embeddings. This problem can be addressed with more sophisticated ViT variants like Swin (Liu et al., 2021), which we leave as future work. Moreover, our proposed method is better than P\(>\)M\(>\)F, which not only utilizes extra data for training but also finetunes all model parameters during testing, on more than half of the datasets, which strongly indicates the effectiveness of the proposed finetuning strategy. When using ViT-tiny, which has far fewer parameters than Res18, our model is still comparable to the state-of-the-art methods and outperforms many popular baselines. In particular, compared with ProtoNet, one of the most well-known and efficient methods in FSL, our eTT shows a significant boost of 19.74% on Aircraft and 36.97% on TrafficSign. The reason for the inferior results on several datasets against TSA is twofold. Firstly, ViT-tiny intrinsically has a smaller capacity than Res18. Secondly, it is common to train ViTs with large images and patch counts so that each patch-level token receives enough information; in contrast, we adopt \(84\times 84\) images with an \(8\times 8\) patch size for ViT-tiny so that the comparison with Res18 is fair, which leads to fewer and smaller patches and may negatively affect performance. In general, the results indicate that our proposed eTT can make ViT models a desirable choice for large-scale FSL problems.
### Model Analysis
To further validate the effectiveness of our method, we conduct a series of ablation studies on Meta-Dataset using ViT-small below.
#### 4.3.1 Design of Each Module
**Can finetuning on the meta-train set boost the performance?** One may ask whether it is necessary to make use of the base annotations, as supervised pretraining is also effective in many FSL works (Ye et al., 2020; Hou et al., 2019). To verify this, we finetune the DINO-pretrained ViT-small on the meta-train split of ImageNet, with the hyper-parameters and data augmentations following DeiT (Touvron et al., 2021), using either the class token features or the averaged patch features as image representations. After such supervised finetuning, we test the models both with the basic test-time finetuning method of Sec. 3.3, which we denote as LT+NCC, and with our proposed eTT. The results are shown in Fig. 3, from which we find that (1) Supervised finetuning does improve test accuracies on ImageNet, CUB and MSCOCO. In particular, the token-finetuned model reaches 89.83% accuracy on CUB when tested with our eTT, which is remarkably better than any other model. This is reasonable given the similarity between the images of ImageNet and those datasets. By
\begin{table}
\begin{tabular}{l|c|c c c c c c c c c c|c} \hline Model & Backbone & ILSVRC & Omni & Acraft & CUB & DTD & QDraw & Fungi & Flower & Sign & COCO & Avg & Rank \\ \hline Finetune & & 45.78 & 60.85 & 68.69 & 57.31 & 69.05 & 42.60 & 38.20 & 85.51 & 66.79 & 34.86 & 56.96 & 10.2 \\ Proto & & 50.50 & 59.98 & 53.10 & 68.79 & 66.56 & 48.96 & 39.71 & 85.27 & 47.12 & 41.00 & 56.10 & 10.5 \\ Relation & \multirow{2}{*}{Res18} & 34.69 & 45.35 & 40.73 & 49.51 & 52.97 & 43.30 & 30.55 & 68.76 & 33.67 & 29.15 & 42.87 & 14.6 \\ P-MAML & & 49.53 & 63.37 & 55.95 & 68.66 & 66.49 & 51.52 & 39.96 & 87.15 & 48.83 & 43.74 & 57.52 & 9.2 \\ BOHB & & 51.92 & 67.57 & 54.12 & 70.69 & 68.34 & 50.33 & 41.38 & 87.34 & 51.80 & 48.03 & 59.15 & 8.2 \\ TSA & & **59.50** & **78.20** & 72.20 & **74.90** & 77.30 & 67.60 & 44.70 & 90.90 & 82.50 & **59.00** & **70.68** & 4.3 \\ Ours & ViT-t & 56.40 & 72.52 & **72.84** & 73.79 & **77.57** & **67.97** & **51.23** & **93.30** & **84.09** & 55.68 & 70.54 & 4.1 \\ \hline Proto & & 53.70 & 68.50 & 58.00 & 74.10 & 68.80 & 53.30 & 40.70 & 87.00 & 58.10 & 41.70 & 60.39 & 7.4 \\ CTX & Res34 & 62.76 & 82.21 & 79.49 & 80.63 & 75.57 & **72.68** & 51.58 & 95.34 & 82.65 & 59.90 & 74.28 & 2.8 \\ TSA & & 63.73 & **82.58** & **80.13** & 83.39 & 79.61 & 71.03 & 51.38 & 94.05 & 81.71 & 61.67 & 74.93 & 2.5 \\ \hline P\(>\)M\(>\)F\({}^{*}\) & ViT-s & 74.69 & 80.68 & 76.78 & 85.04 & 86.63 & 71.25 & 54.78 & 94.57 & 88.33 & 62.57 & 77.53 & — \\ Ours & & **67.37** & 78.11 & 79.94 & **85.93** & **87.62** & 71.34 & **61.80** & **96.57** & **85.09** & **62.33** & **77.61** & **1.6** \\ \hline \end{tabular}
\end{table}
Table 2: Test accuracies and average rank on Meta-Dataset. Note that different backbones are adopted by these methods. * denotes using extra data for training. The bolded items are the best ones with highest accuracies.
training on the image annotations of ImageNet, the model learns class-specific knowledge that cannot be obtained during self-supervised learning. Since the categories are highly correlated and overlap among these datasets, the learned knowledge can also benefit recognition on these novel datasets even though the specific novel classes do not appear in the meta-train set. (2) Despite the improvement on these three datasets, models with supervised finetuning degrade on the other datasets, especially on TrafficSign and VGG Flower. This is because fitting class labels weakens the effect of these features and makes it harder to generalize to novel domains. Taking the performance on all datasets into account, pretraining with DINO is generally the more desirable choice for better generalization over different domains. (3) The improvement of our proposed method over the basic LT+NCC is not consistent among the three pretraining strategies. For example, while our method can boost the performance of the DINO-pretrained model by 9.47% on Aircraft and 4.83% on CUB, it brings much less advantage to models with supervised finetuning.
**Effectiveness of APT and DRA.** We test the DINO pre-trained model with different testing strategies: (1) Proto: directly generating predictions with ProtoNet, where the prototypes are computed from the averaged class token features of each category. (2) LT+NCC: the basic test-time finetuning method in Sec. 3.3. (3) Last: finetuning the last transformer layer during testing, together with LT+NCC, which has a parameter size similar to our method's. (4) First: finetuning the first transformer layer during testing, together with LT+NCC, which also has a parameter size similar to our method's. (5) LN: finetuning the affine parameters of each layer normalization as an alternative finetuning strategy, as utilized in many cross-domain FSL works (Tseng et al., 2022). (6) APT: the model is finetuned using APT together with LT+NCC, using the cross entropy loss and the proposed prototypical regularization. (7) Adapter: the model is finetuned using DRA together with LT+NCC, using the cross entropy loss. (8) eTT: the model is finetuned using our proposed APT, DRA and LT+NCC. The results in Tab. 3 show that while LT+NCC fundamentally improves the model, which indicates the importance of test-time finetuning, adding our proposed modules to the finetuning procedure consistently brings higher performance. Also, finetuning a specific transformer layer only brings limited improvement on a few datasets: finetuning the last
\begin{table}
\begin{tabular}{l|c c c c c c c c c|c} \hline Model & ILSVRC & Omni & Acraft & CUB & DTD & QDraw & Fungi & Flower & Sign & COCO & Avg \\ \hline Proto & 63.37 & 65.86 & 45.11 & 72.01 & 83.50 & 60.88 & 51.02 & 92.39 & 49.23 & 54.99 & 63.84 \\ \cline{2-11} LT+NCC & 65.96 & 67.62 & 64.03 & 77.10 & 83.46 & 63.88 & 57.79 & 93.13 & 66.91 & 56.04 & 69.59 \\ Last & 66.32 & 71.04 & 78.04 & 86.25 & 86.67 & 64.22 & 55.69 & 94.44 & 65.55 & 55.94 & 72.42 \\ \cline{2-11} First & 61.54 & 50.46 & 69.23 & 79.17 & 83.10 & 68.69 & 49.93 & 93.50 & 54.28 & 58.45 & 66.84 \\ \cline{2-11} LN & 66.22 & 70.45 & 69.41 & 81.29 & 86.37 & 66.28 & 58.38 & 96.25 & 71.09 & 59.57 & 72.53 \\ \cline{2-11} APT & 66.75 & 75.16 & 75.41 & 84.25 & 86.47 & 69.55 & 60.03 & 96.38 & 78.20 & 61.10 & 75.33 \\ Adapter & 66.53 & 72.31 & 73.75 & 83.73 & 86.86 & 66.74 & 58.49 & 96.15 & 82.65 & **62.40** & 74.93 \\ eTT & **67.37** & **78.11** & **79.94** & **85.93** & **87.62** & **71.34** & **61.80** & **96.57** & **85.09** & 62.33 & **77.61** \\ \hline \hline Random & 66.12 & 76.33 & 78.35 & 84.77 & 86.78 & 70.13 & 59.25 & 96.00 & 82.28 & 59.59 & 75.96 \\ Avg & 66.11 & 75.06 & 77.07 & 85.16 & 87.35 & 70.72 & 61.79 & 96.54 & 84.28 & 62.18 & 76.73 \\ Sampling & 67.81 & 76.72 & 77.96 & 85.79 & 87.25 & 70.19 & 60.73 & 96.27 & 83.72 & 62.17 & 76.86 \\ Full & **67.37** & **78.11** & **79.94** & **85.93** & **87.62** & **71.34** & **61.80** & **96.57** & **85.09** & **62.33** & **77.61** \\ \hline \hline Linear & 66.35 & 74.26 & 79.42 & 83.65 & 86.02 & 71.11 & 55.73 & 95.89 & 82.73 & 59.90 & 75.51 \\ Bottleneck & 67.29 & 76.06 & 79.72 & 85.60 & 87.21 & 70.59 & 61.59 & 96.15 & 85.00 & 62.02 & 77.12 \\ FiLM & 66.91 & 75.32 & 78.26 & 85.78 & 86.83 & 70.29 & 61.65 & 96.50 & 84.48 & 61.75 & 76.78 \\ Offset & **67.37** & **78.11** & **79.94** & **85.93** & **87.62** & **71.34** & **61.80** & **96.57** & **85.09** & **62.33** & **77.61** \\ \hline \hline w/o PR & 66.72 & 74.20 & 78.42 & 85.06 & 87.01 & 70.34 & 61.64 & 96.51 & 84.23 & 61.08 & 76.52 \\ w PR & **67.37** & **78.11** & **79.94** & **85.93** & **87.62** & **71.34** & **61.80** & **96.57** & **85.09** & **62.33** & **77.61** \\ \hline \hline w/o Stand & 67.09 & 76.42 & 78.87 & 83.10 & 86.50 & 70.09 & 61.02 & 96.33 & 82.88 & 61.33 & 76.36 \\ w Stand & **67.37** & **78.11** & **79.94** & **85.93** & **87.62** & **71.34** & **61.80** & **96.57** & **85.09** & **62.33** & **77.61** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test accuracies on Meta-Dataset of different variants of our proposed method. The bolded items are the best ones with highest accuracies.
layer leads to good performance on Aircraft, CUB and Texture, while updating the first layer leads to good performance on Quickdraw and MSCOCO. However, this simple finetuning strategy cannot bring consistent improvement on all datasets. This indicates that different data require different levels of adaptation, and the improvement is much smaller than that of our method. Moreover, we give the t-SNE visualization of the feature embeddings of a randomly sampled episode from TrafficSign in Fig. 4, which demonstrates that our proposed method better regulates the feature embeddings into proper clusters.
**Is prototypical initialization necessary?** One of the most important parts of our APT is the attentive prototypical initialization, in which we use attentively aggregated patch embeddings to initialize the prefix matrix. To verify this strategy, we compare several different choices of initialization: (1) Random: random initialization from a normal distribution. (2) Avg: simply averaging all patch embeddings from each category. (3) Sampling: randomly sampling one image for each category, and then initializing the prefix matrix with the averaged patch embeddings of each image. (4) Full: computing prototypes with our proposed attentive prototype. Results in Tab. 3 show that random initialization performs the worst, which may result from the insufficient task-specific information provided by the prefix in this case. Meanwhile, among the other strategies, using the attention map to aggregate patch embeddings as in Eq. 5 is better than simple averaging, leading to about a 1% improvement on average.
**Do we need a more complex adapter structure?** One might argue that our DRA structure is too simple to learn the complex knowledge from the support images. In Tab. 3 we compare different instantiations of adapters: (1) Linear: as in Li et al. (2022), we use a linear layer for each adapter, whose outputs are then added to the original features in the MSA and FFN. (2) Bottleneck: we expand the linear layer
Figure 4: Visualization of feature embeddings from a randomly sampled episode of TrafficSign.
Figure 3: Test accuracy of different training strategy if testing with (a) LT+NCC or (b) our eTT.
to a bottleneck structure where two linear layers are used for each adapter. (3) FiLM: we compare DRA with a FiLM-like variant, in which we add a scaling vector to each adapter as in a FiLM layer (Perez et al., 2018). Note that such a method is similar to MTL (Sun et al., 2019) in FSL; the difference lies in that we still directly tune the parameters on the novel support sets, instead of using another meta-trained module to generate them. (4) Offset: only an offset vector is adopted for each adapter. The results reveal that the linear adapter performs the worst, which means such a structure is ill-suited to finetuning ViTs. Moreover, we find that using the bottleneck adapter leads to a dilemma. If we use small initial values for the adapter, the weights of each layer only receive gradients with extremely small values. As a result, these weights, except the bias term of the last layer, can hardly be updated based on the support knowledge, which means such an architecture is almost equivalent to our design where only an offset vector is utilized. On the other hand, if large initial values are adopted to avoid gradient diminishing, then the output features of the adapters make the predictions deviate severely from those without adapters, leading to worse performance. As for the FiLM-like DRA, it is worse than the offset DRA by about 0.8% on average, while doubling the parameter size of the offset DRA, leading to no significant additional efficacy.
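The four adapter variants compared above can be summarized by the following minimal sketch (hidden ratios, activations and initializations are illustrative choices, not the exact configurations used in the experiments):

```python
import torch
import torch.nn as nn

class LinearAdapter(nn.Module):                 # variant (1)
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
    def forward(self, h):
        return h + self.fc(h)

class BottleneckAdapter(nn.Module):             # variant (2)
    def __init__(self, dim, r=8):
        super().__init__()
        self.down, self.up = nn.Linear(dim, dim // r), nn.Linear(dim // r, dim)
    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

class FiLMAdapter(nn.Module):                   # variant (3): scale and offset
    def __init__(self, dim):
        super().__init__()
        self.gamma, self.beta = nn.Parameter(torch.ones(dim)), nn.Parameter(torch.zeros(dim))
    def forward(self, h):
        return self.gamma * h + self.beta

class OffsetAdapter(nn.Module):                 # variant (4): the DRA used in eTT
    def __init__(self, dim):
        super().__init__()
        self.delta = nn.Parameter(torch.zeros(dim))
    def forward(self, h):
        return h + self.delta
```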
**Effectiveness of prototypical regularization.** We also validate this regularization. In Tab. 3 we present the test accuracy when finetuning with and without this loss function. We find that applying this objective function yields higher results on most datasets. Besides, as described in Sec. 3.6, we use a standardization technique when computing the prototypical regularization. To verify its efficacy, we compare the model with and without this standardization. The results are shown in Tab. 3: without standardization, the results are generally worse given comparable confidence intervals (Tab. 11). This verifies that the strategy helps make the finetuning procedure more stable.
#### 4.3.2 Comparison among Different Hyper-parameter Settings
In addition to the ablation study of the proposed modules, we further examine different choices of hyper-parameters in our model. In particular, \(d_{proj}\) for the transformation module in APT and \(\lambda\) for the prototypical regularization are tested in Tab. 4 and Tab. 5 in the Appendix. In general, the improvement is not consistent. For \(d_{proj}\), we find that using a 192-d hidden dimension gives globally better results, which indicates that this choice strikes a good balance between model capacity and scale so that finetuning can be conducted both efficiently and effectively. As for \(\lambda\), 0.1 seems to be a desirable choice. Intuitively, a smaller \(\lambda\) means less control of the prefix by the proposed prototypical regularization, so the prefix may lose the desired information during optimization on the support set. On the other hand, when \(\lambda\) is too large, the regularization overwhelms the label supervision, and the model can hardly adapt to the support knowledge, leading to worse performance, especially on Omniglot and Aircraft.
## 5 Conclusion
We propose a novel finetuning method named efficient Transformer Tuning (eTT) for few-shot learning with ViT as the backbone. By fixing the parameters of the backbone and utilizing attentive prefix tuning and domain residual adapters, our method can guide the ViT model with comprehensive task-specific information, which leads to better representations and performance. This is demonstrated by the fact that we establish new state-of-the-art results on the large-scale benchmark Meta-Dataset.
\begin{table}
\begin{tabular}{c|c c c c c c c c c|c} \hline \(d_{proj}\) & ILSVRC & Omni & Acraft & CUB & DTD & QDraw & Fungi & Flower & Sign & COCO & Avg \\ \hline
64 & 67.18 & 75.30 & 78.88 & 86.20 & 87.09 & 69.82 & 61.61 & 96.31 & 82.24 & 62.14 & 76.68 \\ \cline{2-11}
96 & 66.23 & 75.69 & 78.26 & 85.67 & 87.28 & 70.25 & **61.97** & **96.59** & 84.10 & 62.17 & 76.82 \\ \cline{2-11}
128 & 67.31 & 76.83 & 78.81 & 85.77 & 87.36 & 70.16 & 60.81 & 96.53 & 84.29 & 62.12 & 77.00 \\ \cline{2-11}
256 & 66.83 & 78.04 & 78.38 & 84.60 & 86.68 & 70.43 & 61.03 & 96.23 & **85.33** & 62.10 & 76.97 \\ \cline{2-11}
192 & **67.37** & **78.11** & **79.94** & **85.93** & **87.62** & **71.34** & 61.80 & 96.57 & 85.09 & **62.33** & **77.61** \\ \hline \end{tabular}
\end{table}
Table 4: Test accuracies on Meta-Dataset for different choices of \(d_{proj}\). The bolded items are the best ones with the highest accuracies.
|
2302.13044
|
On the two-point function of the Ising model with infinite
range-interactions
|
In this article, we prove some results concerning the truncated two-point
function of the infinite-range Ising model above and below the critical
temperature. More precisely, if the coupling constants are of the form
$J_{x}=\psi(x)e^{-\rho(x)}$ with $\rho$ some norm and $\psi$ a subexponential
correction, we show under appropriate assumptions that given
$s\in\mathbb{S}^{d-1}$, the Laplace transform of the two-point function in the
direction $s$ is infinite for $\beta=\beta_{\text{sat}}(s)$ (where
$\beta_{\text{sat}}(s)$ is the biggest value such that the inverse
correlation length $\nu_{\beta}(s)$ associated to the truncated two-point
function is equal to $\rho(s)$ on $[0,\beta_{\text{sat}}(s))$). Moreover, we
prove that the two-point function satisfies Ornstein-Zernike asymptotics for
$\beta=\beta_{\text{sat}}(s)$ on $\mathbb{Z}$. As far as we know, this
constitutes the first result on the behaviour of the two-point function at
$\beta_{\text{sat}}(s)$. Finally, we show that there exists $\beta_{0}$ such
that for every $\beta>\beta_{0}$, $\nu_{\beta}(s)=\rho(s)$. All the results are
new.
|
Yacine Aoun, Kamil Khettabi
|
2023-02-25T10:02:54Z
|
http://arxiv.org/abs/2302.13044v1
|
# On the two-point function of the Ising model with infinite range-interactions
###### Abstract.
In this article, we prove some results concerning the truncated two-point function of the infinite-range Ising model above and below the critical temperature. More precisely, if the coupling constants are of the form \(J_{x}=\psi(x)\mathbf{e}^{-\rho(x)}\) with \(\rho\) some norm and \(\psi\) a subexponential correction, we show under appropriate assumptions that given \(s\in\mathbb{S}^{d-1}\), the Laplace transform of the two-point function in the direction \(s\) is infinite for \(\beta=\beta_{\mathrm{sat}}(s)\) (where \(\beta_{\mathrm{sat}}(s)\) is the biggest value such that the inverse correlation length \(\nu_{\beta}(s)\) associated to the truncated two-point function is equal to \(\rho(s)\) on \([0,\beta_{\mathrm{sat}}(s))\)). Moreover, we prove that the two-point function satisfies Ornstein-Zernike asymptotics for \(\beta=\beta_{\mathrm{sat}}(s)\) on \(\mathbb{Z}\). As far as we know, this constitutes the first result on the behaviour of the two-point function at \(\beta_{\mathrm{sat}}(s)\). Finally, we show that there exists \(\beta_{0}\) such that for every \(\beta>\beta_{0}\), \(\nu_{\beta}(s)=\rho(s)\). All the results are new and their proofs are built on different results and ideas developed in [11, 2].
## 1. Introduction
In the present paper, we study the behaviour of the two-point function in Ising models with infinite-range interactions. In [2] (see also [1]), the first author and collaborators considered a general class of lattice spin systems (including the Ising model) on \(\mathbb{Z}^{d}\) with interactions of the form \(J_{x}=\psi(x)\mathbf{e}^{-\rho(x)}\), where \(\psi(x)\) is a subexponential correction and \(\rho\) is a norm on \(\mathbb{R}^{d}\). Let \(\langle\sigma_{0}\sigma_{x}\rangle_{\beta}\) be the usual Ising two-point function with free boundary conditions at inverse temperature \(\beta\) without an external field, and \(\nu_{\beta}(\hat{x})\) be the associated inverse correlation length in the direction \(\hat{x}=x/\|x\|\), where \(\|\cdot\|\) is the Euclidean norm. It is easy to see that one always has \(\nu_{\beta}(\hat{x})\leq\rho(\hat{x})\). In [2], we developed an explicit necessary and sufficient condition (see Theorem 2.2) ensuring the existence of a non-trivial _saturation transition_, i.e. the strict positivity of \(\beta_{\mathrm{sat}}(\hat{x})=\sup\{\beta\geq 0:\nu_{\beta}(\hat{x})=\rho(\hat{x})\}\). For instance, a sufficient condition for the latter to happen is to have \(\psi(x)=\mathsf{O}(\|x\|^{-(d+\varepsilon)})\) for some \(\varepsilon>0\). By definition, one always has \(\beta_{\mathrm{sat}}(\hat{x})\leq\beta_{\mathrm{c}}\), where \(\beta_{\mathrm{c}}\) is the usual transition point of the Ising model. Note that if \(\beta_{\mathrm{sat}}(\hat{x})>0\), the function \(\beta\mapsto\nu_{\beta}(\hat{x})\) is non-analytic. Moreover, we proved in [2] that if \(\beta_{\mathrm{sat}}(\hat{x})>0\), then the Ornstein-Zernike asymptotics (see (1)) _do not hold_ at arbitrarily high temperature. In subsequent works [3, 4], we studied the behaviour of the two-point function in the saturated regime \((0,\beta_{\mathrm{sat}}(\hat{x}))\) and in the non-saturated regime \((\beta_{\mathrm{sat}}(\hat{x}),\beta_{\mathrm{c}})\). Under appropriate assumptions, for \(\beta\in(\beta_{\mathrm{sat}}(\hat{x}),\beta_{\mathrm{c}})\), we proved in [3] that the two-point function has the _Ornstein-Zernike_ asymptotics: there exists \(c:=c(\hat{x},\beta)>0\) such that
\[\langle\sigma_{0}\sigma_{x}\rangle_{\beta}=c\|x\|^{-\frac{d-1}{2}}\mathbf{e}^{- \nu_{\beta}(x)}(1+\mathsf{o}_{\|x\|}(1)). \tag{1}\]
The OZ asymptotics were predicted in the physics literature in [16], and were expected to hold generally when the interactions decay exponentially fast in the distance. In [4],
we proved that this is not the case in the whole saturated regime: under appropriate assumptions, for \(\beta\in(0,\beta_{\mathrm{sat}}(\hat{x}))\), there exists \(C(\beta,\hat{x})>0\) such that
\[\langle\sigma_{0}\sigma_{x}\rangle_{\beta}=CJ_{x}(1+\mathfrak{o}_{\|x\|}(1)). \tag{2}\]
This leaves us with the natural question of determining the asymptotics of the two-point function at \(\beta_{\mathrm{sat}}(\hat{x})\). The techniques used for proving (2) and (1) break down at \(\beta_{\mathrm{sat}}(\hat{x})\). On the one hand, in [3], we derived (1) under the mass-gap assumption \(\nu_{\beta}(\hat{x})<\rho(\hat{x})\), which is violated at \(\beta_{\mathrm{sat}}(\hat{x})\) since, by continuity of the function \(\beta\mapsto\nu_{\beta}(\hat{x})\), one has \(\nu_{\beta_{\mathrm{sat}}(\hat{x})}(\hat{x})=\rho(\hat{x})\). On the other hand, we used differential inequalities (inspired by the ideas of [10, 14]) and the fact that for any \(\beta_{0}\in(0,\beta_{\mathrm{sat}}(\hat{x}))\) there exists an open interval containing \(\beta_{0}\) on which the function \(\beta\mapsto\nu_{\beta}(\hat{x})\) is constant to derive (2). In the present article, we provide partial answers on the behaviour of the two-point function at \(\beta_{\mathrm{sat}}(\hat{x})\): under suitable assumptions, we prove that the Laplace transform associated to the two-point function is infinite. Moreover, we prove that (1) holds up to multiplicative constants on \(\mathbb{Z}\). This is the first example where the OZ asymptotics are shown to hold in the absence of a mass gap. In particular, it shows that the mass gap is not a necessary condition for OZ asymptotics to hold.
Note that in the discussion above, the saturation phenomenon is only shown to happen at high temperatures. In the present work, we prove the existence of a non-trivial saturation regime at arbitrarily _low temperatures_ as well. Let \(\langle\sigma_{0};\sigma_{x}\rangle_{\beta}\) be the truncated two-point function of the Ising model with \(+\) boundary conditions and \(\nu_{\beta}(\hat{x})\) the associated inverse correlation length. We prove the existence of \(\beta_{\mathrm{sat}}^{*}:=\beta_{\mathrm{sat}}^{*}(\hat{x})<\infty\) such that for every \(\beta>\beta_{\mathrm{sat}}^{*}\), we have \(\nu_{\beta}(\hat{x})=\rho(\hat{x})\).
## 2. Models and notations
### Graphs
Most of our results naturally extend to a wider set-up but we restrict attention to \(\mathbb{Z}^{d}\). We will always see \(\mathbb{Z}^{d}\) as canonically embedded inside \(\mathbb{R}^{d}\) and will denote \(\|\cdot\|\) the Euclidean norm on \(\mathbb{R}^{d}\). \(\rho\) will denote a norm on \(\mathbb{R}^{d}\) (and will be one of the parameters in our analysis).
We consider the graph \((\mathbb{Z}^{d},E_{d})\) with edge set \(E_{d}=\big{\{}\{i,j\}\subset\mathbb{Z}^{d}\big{\}}\), which we will often write simply \(\mathbb{Z}^{d}\). Let \(\Lambda_{N}=\{-N,\ldots,N\}^{d}\) and \(\Lambda_{N}(x)=x+\Lambda_{N}\).
Given a subgraph \(\Lambda\), let \(\Lambda^{c}=\mathbb{Z}^{d}\backslash\Lambda\) and
\[E_{\Lambda}=\big{\{}\{i,j\}\in E_{d}\,:\,\{i,j\}\subset\Lambda\big{\}}.\]
Given \(x,y,z\in\mathbb{Z}^{d}\), a sequence \(\gamma=(\gamma_{0},\gamma_{1},\ldots,\gamma_{n})\in(\mathbb{Z}^{d})^{n+1}\) is called a path from \(x\) to \(y\) if \(\gamma_{0}=x\) and \(\gamma_{n}=y\). We say that \(n\) is the length of the path, and denote it by \(|\gamma|\). We say that \(\gamma\) is edge self-avoiding if \(\{\gamma_{i},\gamma_{i+1}\}=\{\gamma_{j},\gamma_{j+1}\}\Rightarrow i=j\).
### Interaction
We consider a weight function (the _interaction_, or the set of _coupling constants_) \(J:E_{d}\to\mathbb{R}_{+}\) of the form \(J_{i,j}=\psi(i-j)\mathsf{e}^{-\rho(i-j)}\) where \(\psi\) satisfies
\[\lim_{\|x\|\to\infty}\frac{\log(\psi(x))}{\|x\|}=0.\]
Moreover, we will assume that the interaction satisfies the following properties:
* _No self-interaction:_ \(J_{0}=0\),
* _Rotational invariance:_ \(J\) is invariant by a rotation of \(\pi/2\) around any coordinate axis.
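For illustration, a small Python sketch of such a family of coupling constants, with \(\rho=\|\cdot\|_{1}\) and a polynomial correction \(\psi(x)=\|x\|^{-\alpha}\) (both choices are made here for concreteness only):

```python
import numpy as np
from itertools import product

def coupling(x, alpha=3.0):
    """J_x = psi(x) * exp(-rho(x)) with rho the l1 norm and psi(x) = ||x||^(-alpha)."""
    x = np.asarray(x, dtype=float)
    rho = np.abs(x).sum()
    if rho == 0.0:
        return 0.0                              # no self-interaction
    return np.linalg.norm(x) ** (-alpha) * np.exp(-rho)

# couplings between 0 and the sites of a small box of Z^2 (invariant under pi/2 rotations)
J = {x: coupling(x) for x in product(range(-3, 4), repeat=2)}
```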
### Percolation configurations
Given a subset \(\Lambda\) of \(\mathbb{Z}^{d}\), the percolation configuration \(\omega\) is defined as a function from \(E_{\Lambda}\) to \(\{0,1\}\). Given an edge \(\{i,j\}\in E_{\Lambda}\), we say that the edge \(\{i,j\}\) is open in \(\omega\) if \(\omega_{i,j}=1\) and closed otherwise. Given the subsets \(A,B,C\) of \(\Lambda\), we will denote by \(\{A\stackrel{{ C}}{{\leftrightarrow}}B\}\) the subset of percolation configurations \(\omega\) such that there exists a path from \(A\) to \(B\) consisting of open edges of \(C\). If \(C=\Lambda\), we will remove it from the notation. We will write \(\{x\stackrel{{ C}}{{\leftrightarrow}}y\}\) instead of \(\{\{x\}\stackrel{{ C}}{{\leftrightarrow}}\{y\}\}\). Finally, we will define the connected component of \(x\) by \(\mathcal{C}_{x}:=\{y\in\mathbb{Z}^{d}:x\leftrightarrow y\}\).
### Constants
\(c,C,c^{\prime},C^{\prime},\ldots\) will denote constants whose value can change from line to line. Unless explicitly stated otherwise, they depend only on \(d,\beta,h,J\).
### Ising Model
The Ising model at inverse temperature \(\beta\geq 0\) without a magnetic field with free boundary condition on \(\mathbb{Z}^{d}\) is the probability measure on \(\Omega:=\{-1,+1\}^{\mathbb{Z}^{d}}\) given by the weak limit of the finite-volume measures (for \(\sigma\in\{-1,+1\}^{\Lambda_{N}}\) and \(\Lambda_{N}=[-N,N]^{d}\cap\mathbb{Z}^{d}\))
\[\mu^{\mathrm{f}}_{\Lambda_{N};\beta}(\sigma)=\frac{1}{Z^{\mathrm{f}}_{\Lambda _{N};\beta}}\mathsf{e}^{-\beta\mathscr{H}^{\mathrm{f}}_{N}(\sigma)},\]
with Hamiltonian
\[\mathscr{H}^{\mathrm{f}}_{N}(\sigma)=-\sum_{\{i,j\}\subset\Lambda_{N}}J_{ij} \sigma_{i}\sigma_{j}\]
and partition function \(Z^{\mathrm{f}}_{\Lambda_{N};\beta}\). We also define the Ising measure at inverse temperature \(\beta\geq 0\) with \(+\) boundary condition and without a magnetic field by
\[\mu^{+}_{\Lambda_{N};\beta}(\sigma)=\frac{1}{Z^{+}_{\Lambda_{N};\beta}} \mathsf{e}^{-\beta\mathscr{H}^{+}_{N}(\sigma)},\]
with Hamiltonian
\[\mathscr{H}^{+}_{N}(\sigma)=-\sum_{\{i,j\}\subset\Lambda_{N}}J_{ij} \sigma_{i}\sigma_{j}-\sum_{\begin{subarray}{c}i\in\Lambda_{N}\\ j\in\mathbb{Z}^{d}\setminus\Lambda_{N}\end{subarray}}J_{ij}\sigma_{i}.\]
For \(\eta\in\{+,f\}\), the limit \(\mu^{\eta}_{\beta}=\lim_{N\to\infty}\mu^{\eta}_{\Lambda_{N};\beta}\) is always well defined and agrees with the unique infinite-volume measure whenever \(\beta<\beta_{\mathrm{c}}\), the critical point of the model; we refer to [12] for more details. We will be interested in the behaviour of the _truncated two-point_ function of the model
\[\langle\sigma_{0};\sigma_{x}\rangle_{\beta}:=\mathrm{Cov}(\sigma_{0},\sigma_{ x}),\]
where the covariance is taken with respect to \(\mu^{+}_{\beta}\). We also introduce the _correlation length_ associated to the latter in the direction \(s\in\mathbb{S}^{d-1}\)
\[\nu_{\beta}(s):=-\lim_{n\to\infty}\frac{1}{n}\log\langle\sigma_{0};\sigma_{ns} \rangle_{\beta}.\]
The existence of this limit follows from the subadditivity proved in [13]. The subadditivity also provides the following bound
\[\langle\sigma_{0};\sigma_{ns}\rangle_{\beta}\leq\mathsf{e}^{-n\nu_{\beta}(s)}. \tag{3}\]
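These quantities can be computed exactly on very small volumes, which is sometimes useful as a sanity check. The sketch below enumerates the \(+\) boundary condition Ising measure on a short interval of \(\mathbb{Z}\) for couplings of the form above (the truncation radius and the values of \(\beta\), \(N\) and \(\alpha\) are arbitrary illustrative choices).

```python
import numpy as np
from itertools import product

def coupling(dx, alpha=3.0):
    dx = abs(dx)
    return 0.0 if dx == 0 else dx ** (-alpha) * np.exp(-dx)

def truncated_two_point(beta=0.5, N=4, R=30):
    """Exact enumeration of the 1d Ising model on Lambda_N = {-N, ..., N} with '+'
    boundary condition (spins outside, up to distance R, frozen to +1), returning
    Cov(sigma_0, sigma_x) for x = 1, ..., N.  Purely illustrative."""
    sites = list(range(-N, N + 1))
    outside = [j for j in range(-N - R, N + R + 1) if abs(j) > N]
    idx = {s: k for k, s in enumerate(sites)}
    Z, m = 0.0, np.zeros(len(sites))
    corr = np.zeros((len(sites), len(sites)))
    for sigma in product([-1, 1], repeat=len(sites)):
        s = np.array(sigma, dtype=float)
        H = -sum(coupling(i - j) * s[idx[i]] * s[idx[j]]
                 for a, i in enumerate(sites) for j in sites[a + 1:])
        H -= sum(coupling(i - j) * s[idx[i]] for i in sites for j in outside)
        w = np.exp(-beta * H)
        Z += w
        m += w * s
        corr += w * np.outer(s, s)
    m, corr = m / Z, corr / Z
    return {x: corr[idx[0], idx[x]] - m[idx[0]] * m[idx[x]] for x in range(1, N + 1)}

print(truncated_two_point())
```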
Let us also introduce the (untruncated) two-point function
\[\langle\sigma_{0}\sigma_{x}\rangle_{\beta}:=\mathbb{E}[\sigma_{0}\sigma_{x}],\]
where the expectation is again taken with respect to \(\mu^{+}_{\beta}\).
When \(\beta<\beta_{\mathrm{c}}\), the truncated two-point function is just equal to the usual two-point function,
\[\langle\sigma_{0};\sigma_{x}\rangle_{\beta}=\langle\sigma_{0}\sigma_{x} \rangle_{\beta}.\]
The following result was proved in [4].
**Theorem 2.1**.: _Fix \(\beta<\beta_{\rm c}\) and \(s\in\mathbb{S}^{d-1}\). Then \(\nu_{\beta}(s)>0\)._
#### 2.5.1. FK-Ising model
Intimately related to the Ising model is the FK-Ising model (i.e. the Random-Cluster model with \(q=2\)). The latter is a measure on percolation configurations on \(\mathbb{Z}^{d}\) depending on a parameter \(\beta\in\mathbb{R}_{\geq 0}\), which will be denoted by \(\Phi_{\beta}\) and is obtained as the weak limit of the finite-volume measures
\[\Phi_{\Lambda_{N};\beta}(\omega)=\frac{1}{Z^{\rm FK}_{\Lambda_{N};\beta}}\prod _{\{i,j\}\in\omega}(\mathsf{e}^{\beta J_{ij}}-1)2^{\kappa(\omega)}, \tag{4}\]
where \(\kappa(\omega)\) is the number of connected components in the graph with vertex set \(\Lambda_{N}\) and edge set \(\omega\), and \(Z^{\rm FK}_{\Lambda_{N};\beta}\) is the partition function. One has the following correspondence between the Ising model without a magnetic field and the FK-Ising model
\[\langle\sigma_{0}\sigma_{x}\rangle_{\beta}=\Phi_{\beta}(0\leftrightarrow x).\]
It is a standard consequence that one in particular has
\[\langle\sigma_{0};\sigma_{x}\rangle_{\beta}\geq\Phi_{\beta}(0 \leftrightarrow x,0\not\leftrightarrow\infty). \tag{5}\]
During the proofs, we will need several well-known properties of the FK-Ising model:
_Finite energy property:_ Fix \(\Lambda\subset\mathbb{Z}^{d}\) and \(\beta>0\). For any \(e\in E_{\Lambda}\) and \(\eta\in\{0,1\}^{E_{\Lambda}\setminus\{e\}}\), one has
\[\frac{\mathsf{e}^{\beta J_{e}}-1}{\mathsf{e}^{\beta J_{e}}+1}\leq\Phi_{\Lambda ;\beta}(\omega_{e}=1\;|\omega_{E_{\Lambda}\setminus\{e\}}=\eta)\leq 1- \mathsf{e}^{-\beta J_{e}}. \tag{6}\]
_FKG inequality:_ We say that an \(\mathcal{F}_{\Lambda}\)-measurable event \(A\) is increasing if \(\mathds{1}_{A}\) is increasing with respect to the natural partial order on \(\{0,1\}^{E_{\Lambda}}\). Given two increasing events \(A,B\), the FKG inequality states that
\[\Phi_{\Lambda;\beta}(A)\Phi_{\Lambda;\beta}(B)\leq\Phi_{\Lambda;\beta}(A\cap B).\]
_Simon-Lieb inequality:_ Given a finite subset \(S\) containing \(u\), one has [11]
\[\Phi_{\beta}(u\stackrel{{\Lambda}}{{\longleftrightarrow}}v)\leq\sum_{x\in S}\sum_{y \not\in S}\Phi_{\beta}(u\stackrel{{ S}}{{\longleftrightarrow}}x)\beta J_{x,y}\Phi_{\beta}(y\stackrel{{\Lambda}}{{\longleftrightarrow}}v). \tag{7}\]
These properties in particular imply the existence of \(C:=C(\beta)>0\) such that
\[CJ_{x}\leq\langle\sigma_{0};\sigma_{x}\rangle_{\Lambda,\beta}. \tag{8}\]
Indeed, one has
\[\Phi_{\Lambda;\beta}(\omega_{\{0,x\}}=1,|\mathcal{C}_{0}|=2)\leq \Phi_{\beta}(0\leftrightarrow x,0\not\leftrightarrow\infty).\]
The finite energy property then implies the existence of \(C:=C(\beta)>0\) such that
\[CJ_{x}\leq\Phi_{\Lambda;\beta}(\omega_{\{0,x\}}=1,|\mathcal{C}_{0}|=2).\]
All these inequalities combined with (5) give (8). Notice that (8) in particular implies that \(\nu_{\beta}(s)\leq\rho(s)\) for any \(\beta>0\).
#### 2.5.2. Random current
Let \(\Lambda\) be a finite subgraph of \(\mathbb{Z}^{d}\). We consider an additional vertex \(\mathfrak{g}\) in the graph \(\Lambda\) and denote by \(\Lambda^{\mathfrak{g}}\) the graph obtained by adding an edge between each \(x\in\Lambda\) and \(\mathfrak{g}\). A _current_\(\mathbf{n}=(\mathbf{n}_{xy})_{x,y\in E_{\Lambda\mathfrak{g}}}\) on \(\Lambda^{\mathfrak{g}}\) is an element of \(\mathbb{N}^{E_{\Lambda\mathfrak{g}}}\). For \(x\in\Lambda^{\mathfrak{g}}\), set \(X(\mathbf{n},x):=\sum_{y\in\Lambda^{\mathfrak{g}}}\mathbf{n}_{xy}\). We define
\[\partial\mathbf{n}:=\{x\in\Lambda^{\mathfrak{g}}:X(\mathbf{n},x)\text{ is odd}\}.\]
In the case of the Ising model on a finite box \(\Lambda\subset\mathbb{Z}^{d}\) with \(+\) boundary condition, we set \(J_{x\mathfrak{g}}:=\sum_{y\in\Lambda^{c}}J_{xy}\). This will allow us to reinterpret the \(+\) boundary conditions as the presence of a new vertex, namely \(\mathfrak{g}\). We also define the weight of a current \(\mathbf{n}\) on \(\Lambda^{\mathfrak{g}}\) to be the quantity
\[w_{\Lambda^{\mathfrak{g}};\beta}(\mathbf{n}):=\prod_{xy\in E_{\Lambda^{ \mathfrak{g}}}}\frac{(\beta J_{xy})^{\mathbf{n}_{xy}}}{\mathbf{n}_{xy}!}.\]
Taylor-expanding \(\mathbf{e}^{\beta J_{ij}\sigma_{i}\sigma_{j}}\) and resumming, one gets
\[\langle\sigma_{A}\rangle_{\Lambda,\beta}=\left\{\begin{array}{ll}\frac{Z_{\Lambda}(A)}{Z_{\Lambda}(\varnothing)}&\text{if $|A|$ is even}\\ \frac{Z_{\Lambda}(A\cup\{\mathfrak{g}\})}{Z_{\Lambda}(\varnothing)}&\text{otherwise}\end{array}\right.\]
where \(Z_{\Lambda}(F):=\sum_{\mathbf{n}:\partial\mathbf{n}=F}w_{\Lambda;\beta}( \mathbf{n})\) for any subset \(F\subset\Lambda\). We will refer to this correspondance as the _random-current representation_. Given a subset \(A\subset\Lambda^{\mathfrak{g}}\), one can define a probability law on currents on \(\Lambda^{\mathfrak{g}}\) with sources \(A\) by
\[\mathbb{P}^{A}_{\Lambda^{\mathfrak{g}};\beta}(\mathbf{n})=\frac{w_{\Lambda^{ \mathfrak{g}};\beta}(\mathbf{n})\mathbb{1}_{\partial\mathbf{n}=A}}{Z_{\Lambda }(A)}.\]
We will use the notation \(\mathbb{P}^{\varnothing,\{0,x\}}_{\Lambda^{\mathfrak{g}};\beta}\) for the product measure \(\mathbb{P}^{\varnothing}_{\Lambda^{\mathfrak{g}},\beta}\times\mathbb{P}^{\{0,x\}}_{\Lambda^{\mathfrak{g}},\beta}\). This is therefore a law on pairs of currents \((\mathbf{n}_{1},\mathbf{n}_{2})\) such that \(\partial\mathbf{n}_{1}=\varnothing\) and \(\partial\mathbf{n}_{2}=\{0,x\}\). In particular, \(0\) and \(x\) are connected in \(\mathbf{n}_{2}\) since those are the only vertices with odd degree. Such a pair \(\mathbf{n}=(\mathbf{n}_{1},\mathbf{n}_{2})\) can be seen as the sum \(\mathbf{n}_{1}+\mathbf{n}_{2}\). It is well known (see for instance [8]) that
\[\langle\sigma_{0};\sigma_{x}\rangle_{\Lambda;\beta}=\langle\sigma_{0}\sigma_{ x}\rangle_{\Lambda;\beta}\mathbb{P}^{\varnothing,\{0,x\}}_{\Lambda^{ \mathfrak{g}};\beta}\left[0\not\leftrightarrow\mathfrak{g}\right]. \tag{9}\]
Note that every current \(\mathbf{n}\) can be seen as a percolation configuration \((\omega_{e})_{e\in E_{\Lambda}}\), by declaring an edge \(e\) open if and only if \(\mathbf{n}_{e}>0\).
_Partial finite energy property:_ One can show that [17]
\[\mathbb{P}^{A}_{\Lambda^{\mathfrak{g}},\beta}\left(\mathbf{n}_{e}>0\mid \mathbf{n}_{f}=m(f),\ \forall f\neq e\right)\geq\frac{\cosh(\beta J_{e})-1}{\cosh(\beta J_{e})}, \tag{10}\]
for any edge \(e=\{a,b\}\in E_{\Lambda}\) and any function \(m:\Lambda\setminus\{e\}\to\mathbb{N}\) compatible with \(A\). This in particular implies that
\[\mathbb{P}^{A}_{\Lambda^{\mathfrak{g}};\beta}\left(\mathbf{n}_{e}=0\mid \mathbf{n}_{f}=m(f),\ \forall f\neq e\right)\leq 2\mathbf{e}^{-\beta J_{e}}. \tag{11}\]
Furthermore, recall that if \(\beta>\beta_{\mathrm{c}}\), then for any set \(B\) with \(|B|\in\{0,2\}\), there exists \(C^{\prime}>0\) such that \(C^{\prime}\leq\langle\sigma_{B}\rangle_{\Lambda;\beta}\leq 1\) (we set \(\sigma_{\varnothing}=1\)). There exists \(C>0\) such that for any set \(\{e_{1},...,e_{k}\}\) of edges in \(\Lambda\), one has
\[\mathbb{P}^{B}_{\Lambda^{\mathfrak{g}};\beta}\left(\mathbf{n}_{e_{1}}\geq 1,...,\mathbf{n}_{e_{k}}\geq 1\right)\leq C\beta^{k}\prod_{i=1}^{k}J_{e_{i}}.\]
Indeed, summing on all currents \(\mathbf{n}\) with \(\partial\mathbf{n}=B\) satisfying \(\mathbf{n}_{e_{i}}\geq 1\) for \(1\leq i\leq k\), one gets
\[\sum_{\mathbf{n}}\frac{w(\mathbf{n})}{Z_{\Lambda}(B)}\leq\left(\beta^{k}\prod_{ i=1}^{k}J_{e_{i}}\right)\sum_{\tilde{\mathbf{n}}}\frac{w(\tilde{\mathbf{n}})}{Z_{ \Lambda}(B)}\leq\frac{\langle\sigma_{S}\rangle_{\Lambda;\beta}}{\langle\sigma _{B}\rangle_{\Lambda;\beta}}\beta^{k}\prod_{i=1}^{k}J_{e_{i}}\leq C\beta^{k} \prod_{i=1}^{k}J_{e_{i}},\]
where the second sum is over the currents \(\tilde{\mathbf{n}}\) having as source set the symmetric difference \(S:=B\Delta e_{1}\Delta...\Delta e_{k}\). Putting these two results together, one thus gets
\[\mathbb{P}^{B}_{\Lambda^{\mathbf{g}};\beta}\left(\mathbf{n}_{e_{1}}\geq 1,..., \mathbf{n}_{e_{k}}\geq 1,\mathbf{n}_{f_{1}}=...=\mathbf{n}_{f_{l}}=0\right)\leq C2 ^{l}\beta^{k}\prod_{i=1}^{k}J_{e_{i}}\prod_{j=1}^{l}\mathsf{e}^{-\beta J_{f_{ j}}}, \tag{12}\]
for any family \(\{f_{1},...,f_{l}\}\) of edges.
#### 2.5.3. Convex geometry
It will be convenient to introduce a few quantities associated to the norm \(\rho\). First, two convex sets are important: the unit ball \(\mathscr{U}\subset\mathbb{R}^{d}\) associated to \(\rho\) and the corresponding _Wulff shape_
\[\mathscr{W}=\{t\in\mathbb{R}^{d}\,:\,\forall x\in\mathbb{R}^{d},\,t\cdot x \leq\rho(x)\}.\]
Given a direction \(s\in\mathbb{S}^{d-1}\), we say that the vector \(t\in\mathbb{R}^{d}\) is dual to \(s\) if \(t\in\partial\mathscr{W}\) and \(t\cdot s=\rho(s)\). A direction \(s\) possesses a unique dual vector \(t\) if and only if \(\mathscr{W}\) does not possess a facet with normal \(s\). Equivalently, there is a unique dual vector when the unit ball \(\mathscr{U}\) has a unique supporting hyperplane at \(s/\rho(s)\). (See Fig. 1 for an illustration.) We refer to [15] for the necessary background on convex geometry.
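As an elementary illustration of these notions: for \(\rho=\|\cdot\|_{1}\) the Wulff shape is the \(\ell^{\infty}\) unit ball and \(t=\mathrm{sign}(s)\) is dual to any direction \(s\) with no zero coordinate. The short numerical check below is only meant to make the definitions concrete.

```python
import numpy as np

def rho(x):                        # rho = l1 norm
    return np.abs(x).sum()

def in_wulff(t, samples=1000, rng=np.random.default_rng(0)):
    """Check numerically that t . x <= rho(x) for random x, i.e. that t lies in W."""
    xs = rng.normal(size=(samples, t.size))
    return bool(np.all(xs @ t <= np.apply_along_axis(rho, 1, xs) + 1e-9))

s = np.array([2.0, 1.0]) / np.sqrt(5)   # the direction of Fig. 1 (right)
t = np.sign(s)                          # candidate dual vector
print(in_wulff(t), np.isclose(t @ s, rho(s)))   # True, True
```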
#### 2.5.4. Saturation transition
Recall that (8) implies that \(\nu_{\beta}(s)\leq\rho(s)\) for every \(\beta>0\). As explained in the introduction, we consider the saturation point above the critical temperature in the direction \(s\in\mathbb{S}^{d-1}\) defined by
\[\beta_{\text{sat}}(s)=\sup\{\beta\in[0,\beta_{\text{c}}]:\nu_{\beta}(s)=\rho (s)\}.\]
For \(t\in\mathscr{W}\), we define
\[\mathbb{G}_{\beta}(t)=\sum_{x\in\mathbb{Z}^{d}}\mathsf{e}^{t\cdot x}\Phi_{ \beta}(0\leftrightarrow x)\qquad\text{ and }\qquad\mathbb{J}(t)=\sum_{x\in\mathbb{Z}^{d}}\mathsf{e}^{t \cdot x}J_{0,x},\]
and the associated transition points
\[\hat{\beta}_{\text{sat}}(t)=\sup\{\beta\geq 0\,:\,\mathbb{G}_{\beta}(t)<\infty\},\]
and
\[\hat{\beta}_{\text{sat}}(s)=\sup_{\begin{subarray}{c}t\in\mathscr{W}\\ \text{t dual to }s\end{subarray}}\hat{\beta}_{\text{sat}}(t).\]
Figure 1. Left: The unit ball for the norm \(\rho(\cdot)=\left\|\cdot\right\|_{1}\). Middle: the corresponding Wulff shape \(\mathscr{W}\) with two vectors \(t_{1}\) and \(t_{2}\) dual to \(s=(1,0)\). Right: the set \(\mathscr{W}\) with the unique vector \(t\) dual to \(s=\frac{1}{\sqrt{5}}(2,1)\).
It was proved in [4] that if \(\psi(x)=\rho(x)^{-\alpha}\) with \(\alpha>2d\) or \(\psi(x)=\mathsf{e}^{-c\rho(x)^{\eta}}\) with \(\eta\in(0,1)\) and \(c>0\), then \(\hat{\beta}_{\rm sat}(s)=\beta_{\rm sat}(s)\). We can now state the criterion ensuring the existence of a non-trivial saturation point:
**Theorem 2.2**.: _Let \(J\) be exponentially decaying. Fix \(s\in\mathbb{S}^{d-1}\). Then \(\beta_{\rm sat}(s)>0\) if and only if there exists a dual vector \(t\) to \(s\) such that \(\mathbb{J}(t)<\infty\)._
Note that \(\mathbb{J}(t)<\infty\) whenever \(\psi(x)=\mathsf{O}(\rho(x)^{-d-\varepsilon})\) for some \(\varepsilon>0\). An even more explicit (although a little bit less general) criterion ensuring the finitude of \(\mathbb{J}(t)\) was derived in [2]. It was proved in [11] that \(\nu_{\beta_{\rm c}}(s)=0\) for every \(s\in\mathbb{S}^{d-1}\), and therefore one always has \(\beta_{\rm sat}(s)<\beta_{\rm c}\) by the continuity of the function \(\beta\mapsto\nu_{\beta}(s)\).
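Numerically, the role of the condition \(\mathbb{J}(t)<\infty\) is easy to visualize: for \(\rho=\|\cdot\|_{1}\), \(t=e_{1}\) and \(\psi(x)=\rho(x)^{-\alpha}\), the exponential factor cancels along the positive \(e_{1}\) axis, and the partial sums computed below keep growing with the truncation radius when \(\alpha\) is too small (an illustrative computation, not part of the proofs).

```python
import numpy as np
from itertools import product

def J_t_partial(alpha, d=2, radius=40):
    """Partial sum of J(t) = sum_x e^{t.x} rho(x)^(-alpha) e^{-rho(x)}, rho = l1, t = e_1."""
    total = 0.0
    for x in product(range(-radius, radius + 1), repeat=d):
        r = sum(abs(c) for c in x)
        if r > 0:
            total += np.exp(x[0] - r) * r ** (-alpha)
    return total

for alpha in (0.5, 1.5):
    print(alpha, [round(J_t_partial(alpha, radius=r), 3) for r in (10, 20, 40)])
```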
We also introduce a saturation point below the critical temperature in the direction \(s\), defined by
\[\beta_{\rm sat}^{*}(s)=\sup\{\beta\in[\beta_{\rm c},\infty):\nu_{\beta^{ \prime}}(s)=\rho(s)\;\forall\beta^{\prime}>\beta\}.\]
## 3. Main results and conjectures
**Theorem 3.1**.: _For any \(t\in\mathscr{W}\) such that \(\hat{\beta}_{\rm sat}(t)>0\), there exists \(C:=C(\hat{\beta}_{\rm sat}(t))\) and a strictly increasing sequence \((n_{k})_{k=1}^{\infty}\) such that_
\[\sum_{x\in\Lambda_{n_{k}}}\mathsf{e}^{t\cdot x}\Phi_{\hat{\beta}_{\rm sat}(t) }(0\stackrel{{\Lambda_{n_{k}}}}{{\longleftrightarrow}}x)\geq Ck.\]
_In particular,_
\[\mathbb{G}_{\hat{\beta}_{\rm sat}(t)}(t)=\infty.\]
_Moreover, if \(\psi(x)=\mathsf{O}(\rho(x)^{-d-1-\varepsilon})\) for some \(\varepsilon>0\), one can choose \(n_{k}=k\)._
Theorem 3.1 has the following immediate Corollary.
**Corollary 3.2**.: _Suppose that \(\psi\) has one of the following forms:_
* \(\psi(x)=\rho(x)^{-\alpha}\) _with_ \(\alpha>2d\)__
* \(\psi(x)=\mathsf{e}^{-\tilde{c}\rho(x)^{\eta}}\) _with_ \(\tilde{c}>0\) _and_ \(\eta\in(0,1)\)_._
_Then there exists \(C>0\) such that for any \(t\) dual to \(s\)_
\[\sum_{x\in\Lambda_{n}}\mathsf{e}^{t\cdot x}\Phi_{\beta_{\rm sat}(s)}(0 \stackrel{{\Lambda_{n}}}{{\longleftrightarrow}}x)\geq Cn.\]
_In particular, \(\mathbb{G}_{\beta_{\rm sat}(s)}(t)=\infty\) for any \(t\) dual to \(s\)._
Proof.: It was proved in [4] that \(\beta_{\rm sat}(s)=\hat{\beta}_{\rm sat}(s)\) under the assumptions of Corollary 3.2. Therefore, the conclusion follows from Theorem 3.1 since \(\hat{\beta}_{\rm sat}(t)\leq\hat{\beta}_{\rm sat}(s)\).
The next result will give a description of the saturation phenomenon as a function of the direction \(s\): if \(\mathscr{W}\) is regular locally in a strictly saturated direction \(s\) (in the sense that \(\beta<\beta_{\rm sat}(s)\)), then there exists a neighborhood of \(s\) for which all the directions are strictly saturated.
**Lemma 3.3**.: _Fix \(t\in\mathscr{W}\) and assume that \(\mathscr{W}\) is locally strictly convex and \(C^{1}\). Fix the unique direction \(s\in\mathbb{S}^{d-1}\) dual to \(t\). Assume that \(\beta_{\rm sat}(s)=\hat{\beta}_{\rm sat}(s)\) locally and that there exists \(\delta>0\) such that \(\mathbb{J}(h)<\infty\) for all \(h\in\partial\mathscr{W}\cap B_{\delta}(t)\). Then, for every \(\beta<\beta_{\rm sat}(s)\), there exists \(\varepsilon>0\) such that for any \(s^{\prime}\in\mathbb{S}^{d-1}\cap B_{\varepsilon}(s)\), \(\beta<\beta_{\rm sat}(s^{\prime})\)._
The next result gives the asymptotics of the two-point function at \(\beta_{\rm sat}(1)\) on \(\mathbb{Z}\).
**Theorem 3.4**.: _Fix \(d=1\). Suppose that_
* \(\psi(x)=\left|x\right|^{-\alpha}\) _with_ \(\alpha>2\)__
* \(\psi(x)=\mathsf{e}^{-\tilde{c}|x|^{\eta}}\) _with_ \(\tilde{c}>0\) _and_ \(\eta\in(0,1)\)_._
_Then, there exists \(C_{-}>0\) such that for any \(x\in\mathbb{Z}\), one has_
\[C_{-}\leq\mathsf{e}^{\rho(x)}\,\Phi_{\beta_{\rm sat}(1)}(0\leftrightarrow x)\leq 1.\]
Our next result shows that a non-trivial saturation regime can exist even at arbitrarily low temperatures for the truncated two-point function.
**Theorem 3.5**.: _Let \(d\geq 2\) and \(s\in\mathbb{S}^{d-1}\). Suppose that \(J_{e}>0\) for any edge \(e\) of length 1. If there exists \(t\in\partial\mathscr{W}\) dual to \(s\) such that \(\mathbb{J}(t)<\infty\), then there exists \(\beta_{0}\) such that \(\beta_{\rm sat}^{*}(s)<\beta_{0}\). Moreover, for any \(\beta>\beta_{0}\) there exist \(C_{-},C_{+}>0\) such that_
\[C_{-}J_{ns}\leq\langle\sigma_{0};\sigma_{ns}\rangle_{\beta}\leq C_{+}J_{ns}.\]
**Remark 3.1**.: _Notice that in Theorem 3.5, we take \(d\geq 2\). This assumption is necessary, since by definition one has \(\beta_{\rm sat}^{*}(s)\geq\beta_{\rm c}\), and \(\beta_{\rm c}=\infty\) on \(\mathbb{Z}\). This is in contrast with what happens at high temperatures, in which case Theorem 2.2 holds._
Theorem 3.5 is in contrast with what happens in the finite-range Ising model, in which case it was proved in [6] that the truncated two-point function satisfies OZ asymptotics on \(\mathbb{Z}^{d}\) with \(d\geq 3\) (see also [7]).
Our work suggests a number of conjectures and open problems that we summarize now.
#### 3.0.1. Behaviour at \(\beta_{\rm sat}(s)\)
Theorem 3.4 suggests that the OZ asymptotics should hold at \(\beta_{\rm sat}(s)\) whenever \(\psi\) decays fast enough.
**Conjecture 3.6**.: _For any \(\psi\) decaying fast enough, the conclusion of Theorem 3.4 holds on \(\mathbb{Z}^{d}\)._
However, this is easily seen not to be true in general. To see that, fix \(\rho(\cdot)=\|\cdot\|_{1}\). Using the results of [2], it can easily be seen that for \(\psi(x)=\rho(x)^{-\alpha}\), \(\beta_{\rm sat}(e_{1})>0\) whenever \(\alpha>1\). However, one always has the lower bound
\[CJ_{x}\leq\langle\sigma_{0}\sigma_{x}\rangle_{\beta}.\]
This shows that OZ asymptotics cannot hold in this case whenever \(d>3\).
**Open problem 3.7**.: _Characterise all possible behaviours of the two-point function at \(\beta_{\rm sat}(s)\) as a function of the dimension and \(\rho\)._
We expect that the OZ asymptotics could fail at \(\beta_{\rm sat}\) for two different reasons:
1. The dominant contribution to the FK-Ising two-point function comes from configurations with \(|\mathcal{C}_{0}|=\mathsf{o}(n)\).
2. The dominant contribution to the FK-Ising two-point function comes from configurations with \(|\mathcal{C}_{0}|=\mathsf{O}(n)\) (as is the case in the OZ regime), but the steps of the associated effective random walk do not have two moments, and so the usual local limit theorem does not hold (however, there have been results on the non-OZ asymptotic behaviour of the Green function in this case, see [5] and references therein).
We plan to come back to this issue in a simpler context of the killed random walk (see section 5 for the definition of this model).
#### 3.0.2. Behaviour for \(\beta>\beta_{\rm c}\)
In the case of exponentially decaying coupling constants, Theorem 3.5 implies the exponential decay of the two-point function for \(\beta\) large enough whenever \(\psi\) decays fast enough. We expect this to hold more generally below the critical temperature.
**Conjecture 3.8**.: _If there exists \(c>0\) such that \(J_{x}\leq\mathsf{e}^{-c\|x\|}\), then \(\nu_{\beta}(s)>0\) for \(\beta>\beta_{\rm c}\) and \(s\in\mathbb{S}^{d-1}\)._
Understanding the behaviour of the truncated two-point function without an external field non-perturbatively below the critical temperature is challenging. The exponential decay of the two-point function in finite-range Ising models was established only recently in [9]. For \(\beta<\beta_{c}\), the conclusion of Conjecture 3.8 was established in [4] using the random cluster representation of the Ising model and the OSSS inequality for monotonic measures (see [10]). Since the double random-current is not known to be monotonic, one cannot use the same reasoning to prove Conjecture 3.8.
We also expect the same dichotomy of behaviour of the two-point function between \(\beta<\beta_{\rm sat}(s)\) and \(\beta\in(\beta_{\rm sat}(s),\beta_{\rm c})\) (see (2) and (1)) to happen below the critical temperature.
**Conjecture 3.9**.:
1. _For_ \(\beta>\beta_{\rm sat}^{*}(s)\)_, there exists_ \(C>0\) _such that_ \[\langle\sigma_{0};\sigma_{ns}\rangle_{\beta}=CJ_{ns}(1+\mathsf{o}_{n}(1)).\]
2. _For_ \(\beta\in(\beta_{\rm c},\beta_{\rm sat}^{*}(s))\)_, there exists_ \(C>0\) _such that_ \[\langle\sigma_{0};\sigma_{ns}\rangle_{\beta}=Cn^{-\frac{d-1}{2}}\mathsf{e}^{- \nu_{\beta}(x)}(1+\mathsf{o}_{n}(1)).\]
### Organisation of the paper
In Section 4, we will prove Theorem 3.1, Theorem 3.4 and Lemma 3.3 using the so-called \(\varphi(S)\) argument. In Section 5, we will prove Theorem 3.5 by comparing directly the random-current representation of the truncated two-point function to the Green function associated to a well-chosen killed random walk. Note that different parts are essentially independent.
## 4. \(\varphi(S)\) argument
In this section, we are going to prove Theorem 3.1, Lemma 3.3 and Theorem 3.4. Generalizing what has been done in [11], given a finite subset \(S\) containing \(0\), \(t\in\partial\mathscr{W}\) and \(\beta>0\), let us define
\[\varphi_{\beta}(S,t)=\beta\sum_{x\in S}\sum_{y\notin S}\mathsf{e}^{t\cdot x} \Phi_{\beta}(0\stackrel{{ S}}{{\leftrightarrow}}x)J_{x,y} \mathsf{e}^{t\cdot(y-x)}.\]
Moreover, we define
\[\tilde{\beta}_{\rm sat}(t)=\sup\{\beta\geq 0\,:\,\text{there exists a finite $S$ containing $0$ such that $\varphi_{\beta}(S,t)<1$}\}.\]
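To make the quantity \(\varphi_{\beta}(S,t)\) concrete, the sketch below assembles it for a small interval \(S\subset\mathbb{Z}\) with couplings \(J_{x}=|x|^{-3}\mathsf{e}^{-|x|}\) and \(t=1\); as a simplification it uses the free finite-volume FK measure on \(S\) in place of the restricted infinite-volume connectivities appearing in the definition, so it is an illustration of how the quantity is built rather than a rigorous evaluation.

```python
import numpy as np
from itertools import product, combinations

def J(dx, alpha=3.0):
    dx = abs(dx)
    return 0.0 if dx == 0 else dx ** (-alpha) * np.exp(-dx)

def components(S, open_edges):
    """Connected components of the graph (S, open_edges), via union-find."""
    parent = {x: x for x in S}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in open_edges:
        parent[find(a)] = find(b)
    return {x: find(x) for x in S}

def phi_S(beta, S=(-2, -1, 0, 1, 2), t=1.0, R=40):
    """Illustrative evaluation of phi_beta(S, t) in d = 1 (free FK measure on S)."""
    S = list(S)
    edges = list(combinations(S, 2))
    Z, conn = 0.0, {x: 0.0 for x in S}
    for omega in product([0, 1], repeat=len(edges)):
        open_edges = [e for e, o in zip(edges, omega) if o]
        comp = components(S, open_edges)
        w = 2.0 ** len(set(comp.values()))          # 2^{kappa(omega)}
        for a, b in open_edges:
            w *= np.exp(beta * J(a - b)) - 1.0      # prod over open edges of (e^{beta J_e} - 1)
        Z += w
        for x in S:
            if comp[x] == comp[0]:
                conn[x] += w
    conn = {x: c / Z for x, c in conn.items()}      # Phi_{S;beta}(0 <-> x)
    outside = [y for y in range(min(S) - R, max(S) + R + 1) if y not in S]
    return beta * sum(np.exp(t * x) * conn[x] * J(x - y) * np.exp(t * (y - x))
                      for x in S for y in outside)

for beta in (0.1, 0.3, 0.6):
    print(beta, round(phi_S(beta), 4))
```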
We will need the following lemma:
**Lemma 4.1**.: _Fix \(t\in\partial\mathscr{W}\) and \(\beta>0\). Assume that \(\hat{\beta}_{\rm sat}(t)>0\)._
1. _If there exists a finite subset_ \(S\ni 0\) _such that_ \(\varphi_{\beta}(S,t)<1\)_, then there exists_ \(C=C(S)>0\) _such that_ \[\mathbb{G}(t)\leq\frac{C}{1-\varphi_{\beta}(S,t)}<\infty.\]
2. _There exist_ \(c>0\) _and a strictly increasing sequence_ \((n_{k})\) _such that_ \[c\sum_{k=1}^{l}\varphi_{\beta}(\Lambda_{n_{l}},t)\leq\sum_{x\in\Lambda_{n_{l}}} \mathsf{e}^{t\cdot x}\Phi_{\beta}(0\stackrel{{\Lambda_{n_{l}}}}{{ \longleftrightarrow}}x).\] _If_ \(\psi(x)=\mathsf{O}(\rho(x)^{-d-1-\varepsilon})\) _for some_ \(\varepsilon>0\)_, one can take_ \(n_{k}=k\)_._
_In particular, \(\tilde{\beta}_{\mathrm{sat}}(t)=\hat{\beta}_{\mathrm{sat}}(t)\)._
Before proving Lemma 4.1, let us see how it implies Theorem 3.1 and Lemma 3.3.
Proof of Theorem 3.1.: Since \(\varphi_{\beta}(S,t)\) is a continuous function in \(\beta\) and \([0,1)\) is open in \([0,\infty)\), it follows that at \(\tilde{\beta}_{\mathrm{sat}}(t)\), for every \(\Lambda\ni 0\), we have \(\varphi_{\tilde{\beta}_{\mathrm{sat}}(t)}(\Lambda,t)\geq 1\). This in turn implies, by the second part of Lemma 4.1, the conclusion of Theorem 3.1.
Proof of Theorem 3.4.: We will only show the result for \(x>0\), since the result follows for \(x\) negative by symmetry. The right inequality follows directly from (3) and \(\nu_{\beta_{\mathrm{sat}}(1)}(1)=\rho(1)\). For the left inequality, remark that since we assumed that \(\alpha>2\), it follows from Corollary 3.2 that for every \(x\geq 1\):
\[\sum_{k=1}^{x}\mathsf{e}^{k}\Phi_{\beta_{\mathrm{sat}}(1)}(0\leftrightarrow k )\geq Cx-\sum_{k=0}^{x}\mathsf{e}^{-k}\Phi_{\beta_{\mathrm{sat}}(1)}(0 \leftrightarrow-k).\]
Since \(\mathsf{e}^{k}\Phi_{\beta_{\mathrm{sat}}(1)}(0\leftrightarrow k)\in[0,1]\) by (3), this implies that there exists \(R>0\) and \(c>0\) such that for any \(m\in\mathbb{N}\), there exists \(k\in\{m,\ldots,m+R\}\) such that one has
\[\mathsf{e}^{k}\Phi_{\beta_{\mathrm{sat}}(1)}(0\leftrightarrow k)\geq c.\]
The result then follows by the finite energy (6) and FKG for every \(x\in\mathbb{N}\).
Proof of Lemma 3.3.: In order to prove Lemma 3.3, note that, by assumption, \(\beta<\beta_{\mathrm{sat}}(s)=\hat{\beta}_{\mathrm{sat}}(t)=\tilde{\beta}_{ \mathrm{sat}}(t)\), where the last equality is given by Lemma 4.1. It follows that there exists a finite \(S\) containing \(0\) such that \(\varphi_{\beta}(S,t)<1\). Since \(\mathbb{J}\) is locally finite (around \(t\)) and \(S\) is finite, it follows by continuity that \(\varphi_{\beta}(S,h)<1\) for \(h\in B_{\varepsilon^{\prime}}(t)\cap\mathcal{W}_{\rho}\). This implies that \(\beta<\beta_{\mathrm{sat}}(s^{\prime})\) for \(s^{\prime}\) in some small neighborhood around \(s\) since \(\mathscr{W}\) is locally strictly convex and \(\beta_{\mathrm{sat}}=\tilde{\beta}_{\mathrm{sat}}\) locally, which is the desired result.
Proof of Lemma 4.1.: We follow here ideas developed in [11]. First, suppose that there exists \(S\) containing \(0\) such that \(\varphi_{\beta}(S,t)<1\). Let \(\Lambda\subset\mathbb{Z}^{d}\) and let
\[\tilde{\chi}(\Lambda,t,\beta)=\max\Bigl{\{}\sum_{v\in\Lambda}\mathsf{e}^{t \cdot(v-u)}\Phi_{\beta}(u\stackrel{{\Lambda}}{{\longleftrightarrow }}v)\,:\,u\in\Lambda\Bigr{\}}.\]
Let us fix \(u\in\Lambda\) and denote by \(S_{u}\) the translation of \(S\) by \(u\). Fix \(v\in\Lambda\setminus S_{u}\). If \(u\) is connected to \(v\), then there exist \(x\in S_{u}\) and \(y\notin S_{u}\) such that \(u\) is connected to \(x\) in \(S_{u}\), \(\{x,y\}\) is open, and \(y\) is connected to \(v\). Using the union bound and the Simon-Lieb inequality (7), we get
\[\mathsf{e}^{t\cdot(v-u)}\Phi_{\beta}(u\stackrel{{\Lambda}}{{ \longleftrightarrow}}v)\leq\sum_{x\in S_{u}}\sum_{y\notin S_{u}}\mathsf{e}^{t \cdot(x-u)}\Phi_{\beta}(u\stackrel{{ S_{u}}}{{\longleftrightarrow }}x)\mathsf{e}^{t\cdot(y-x)}\beta J_{x,y}\mathsf{e}^{t\cdot(v-y)}\Phi(y \stackrel{{\Lambda}}{{\longleftrightarrow}}v).\]
Summing over \(v\in\Lambda\setminus S_{u}\), we get
\[\sum_{v\in\Lambda\setminus S_{u}}\mathsf{e}^{t\cdot(v-u)}\Phi_{\beta}(u \stackrel{{\Lambda}}{{\longleftrightarrow}}v)\leq\varphi_{\beta}( S,t)\tilde{\chi}(\Lambda,t,\beta),\]
where we used the invariance under translations. Since \(S\) is finite, there exists \(C:=C(S)>0\) such that
\[\sum_{v\in\Lambda}\mathsf{e}^{t\cdot(v-u)}\Phi_{\beta}(u\stackrel{{\Lambda}}{{\longleftrightarrow}}v)\leq C+\varphi_{\beta}(S,t)\tilde{\chi}(\Lambda,t,\beta).\]
Now, we can optimize over \(u\) to get
\[\tilde{\chi}(\Lambda,t,\beta)\leq C+\varphi_{\beta}(S,t)\tilde{\chi}(\Lambda, t,\beta),\]
which can be rewritten as
\[\tilde{\chi}(\Lambda,t,\beta)\leq\frac{C}{1-\varphi_{\beta}(S,t)}.\]
Taking the limit \(\Lambda\uparrow\mathbb{Z}^{d}\), we obtain
\[\mathbb{G}_{\beta}(t)\leq\frac{C}{1-\varphi_{\beta}(S,t)}<\infty,\]
where the last inequality follows from the assumption \(\varphi_{\beta}(S,t)<1\).
Let us now turn to the second point. For any strictly increasing sequence \((n_{k})\), one has
\[\sum_{k=1}^{l}\varphi_{\beta}(\Lambda_{n_{k}},t) =\sum_{k=1}^{l}\sum_{x\in\Lambda_{n_{k}}}\sum_{y\notin\Lambda_{n_{k}}}\mathsf{e}^{t\cdot x}\Phi_{\beta}(0\stackrel{{\Lambda_{n_{k}}}}{{\longleftrightarrow}}x)\mathsf{e}^{t\cdot(y-x)}\beta J_{x,y}\] \[\leq\sum_{x\in\Lambda_{n_{l}}}\mathsf{e}^{t\cdot x}\Phi_{\beta}(0\stackrel{{\Lambda_{n_{l}}}}{{\longleftrightarrow}}x)\sum_{k=1}^{l}\sum_{y\notin\Lambda_{n_{k}}}\mathsf{e}^{t\cdot(y-x)}\beta J_{x,y}\mathds{1}_{x\in\Lambda_{n_{k}}}.\]
Given \(x\in\mathbb{Z}^{d}\), let us prove that the double sum over \(k\) and \(y\) is finite. The sum over \(y\) is bounded by \(\mathbb{J}(t)\) which is finite. Indeed, for \(\beta=\hat{\beta}_{\mathrm{sat}}(t)/2>0\) by hypothesis, one has by finite energy
\[\mathbb{G}_{\beta}(t)\geq C_{\beta}\mathbb{J}(t).\]
This implies that
\[\lim_{L\to\infty}\sum_{y\notin\Lambda_{L}}\mathsf{e}^{t\cdot(y-x)}J_{xy}=0.\]
We can thus choose \(n_{k}\) such that
\[\sum_{k\geq 1}\sum_{y\notin\Lambda_{n_{k}}}\mathsf{e}^{t\cdot(y-x)}J_{x,y}=C <\infty.\]
Moreover, if \(\psi(x)=\mathsf{O}(\rho(x)^{-d-1-\varepsilon})\) for some \(\varepsilon>0\), this last sum is finite if one chooses \(n_{k}=k\) since \(t\cdot y-\rho(y)\leq 0\) for any \(y\in\mathbb{Z}^{d}\) by definition of the dual vector \(t\). Therefore, we get
\[\sum_{k=1}^{l}\varphi_{\beta}(\Lambda_{n_{k}},t)\leq C\sum_{x\in\Lambda_{n_{l}}}\mathsf{e}^{t\cdot x}\Phi_{\beta}(0\stackrel{{\Lambda_{n_{l}}}}{{\longleftrightarrow}}x),\]
which proves the desired inequality.
## 5. The existence of a saturation transition at low temperatures
In this section, we are going to prove Theorem 3.5. Through this section, we are going to assume that \(J_{e}>0\) for any edge of length \(1\). By rotational invariance, we can assume without loss of generality that \(J_{e}=1\) for every edge of length \(1\). We start by making a brief summary of the result proved in [2] that we rely on. Given \(\lambda>0\), we define the Green function of the killed random walk model by
\[G_{\lambda}^{\text{KRW}}(x,y)=\sum_{\gamma:x\to y}\prod_{i=1}^{|\gamma|} \lambda J_{\gamma_{i-1},\gamma_{i}},\]
where the sum is over edge self-avoiding paths from \(x\) to \(y\). We will need the following result proved in [2].
**Theorem 5.1**.: _Fix \(s\in\mathbb{S}^{d-1}\). If there exists a dual vector \(t\) to \(s\) such that \(\mathbb{J}(t)<\infty\), then there exists \(\lambda_{0}\), such that for every \(\lambda<\lambda_{0}\), there exists \(C:=C(\lambda)>0\) such that_
\[G_{\lambda}^{\text{KRW}}(0,x)\leq CJ_{x}.\]
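To make the combinatorial definition of \(G^{\text{KRW}}_{\lambda}\) concrete, here is a small Python sketch that evaluates it by brute-force enumeration of edge self-avoiding paths, restricted to a small box and truncated at a maximum path length (so the output is only a truncated version of the full series); the exponentially decaying coupling is an assumption chosen for the demonstration, not the general \(J\) of the paper.

```python
# Illustrative sketch: truncated brute-force evaluation of G_lambda^KRW(0, x),
# the sum over edge self-avoiding paths 0 -> x of prod_i lambda * J(step_i).
# Restricted to a small box and to paths of bounded length; the coupling J
# below is an illustrative choice, not the general coupling of the paper.
import itertools
import math

def G_krw(target, lam, J, box_radius=2, max_len=4, d=2):
    box = list(itertools.product(range(-box_radius, box_radius + 1), repeat=d))
    total = 0.0

    def extend(current, used_edges, weight, length):
        nonlocal total
        if current == target:
            total += weight          # each edge self-avoiding path counted once
        if length == max_len:
            return
        for y in box:
            if y == current:
                continue
            edge = frozenset((current, y))
            if edge in used_edges:
                continue             # an edge may not be reused
            extend(y, used_edges | {edge},
                   weight * lam * J(current, y), length + 1)

    extend((0,) * d, frozenset(), 1.0, 0)
    return total

J = lambda u, v: math.exp(-2.0 * sum(abs(a - b) for a, b in zip(u, v)))
x = (2, 0)
for lam in (0.05, 0.1, 0.2):
    print(lam, G_krw(x, lam, J), " vs  J_x =", J((0, 0), x))
```

Comparing the two printed columns for decreasing \(\lambda\) gives a numerical feeling for the bound \(G^{\text{KRW}}_{\lambda}(0,x)\leq CJ_{x}\) of Theorem 5.1.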
The next result bounds the truncated two-point function of the Ising model by the Green functions introduced above.
**Lemma 5.2**.: _There exists \(\beta_{0}\) such that for any \(\beta>\beta_{0}\), there exists \(C_{+}>0\) such that_
\[\langle\sigma_{0};\sigma_{x}\rangle_{\beta}\leq C_{+}G_{\lambda(\beta)}^{ \text{KRW}}(0,x),\]
_where \(\lim_{\beta\to\infty}\lambda(\beta)=0\)._
Before proving Lemma 5.2, let us show how it implies Theorem 3.5.
Proof of Theorem 3.5.: On the one hand, the lower bound in Theorem 3.5 follows directly from (8) for any \(\beta>0\). On the other hand, for the upper bound, fix \(\beta_{1}\) such that \(\lambda(\beta)<\lambda_{0}\) for any \(\beta>\beta_{1}\). Let \(\beta_{2}=\max\{\beta_{0},\beta_{1}\}\). Then, for any \(\beta>\beta_{2}\),
\[\langle\sigma_{0};\sigma_{x}\rangle_{\beta}\leq C_{+}G_{\lambda(\beta)}^{ \text{KRW}}(0,x)\leq cJ_{x},\]
where we used Lemma 5.2 in the first inequality and Theorem 5.1 in the second inequality. This gives the desired result.
Heuristic proof of Lemma 5.2.: Thanks to (9), we need to compare \(\mathbb{P}_{\Lambda^{\emptyset},\beta}^{\varnothing,\{0,x\}}[0\not\leftrightarrow\mathfrak{g}]\) to a Green function of a killed random walk. Recall that one has \(J_{x\mathfrak{g}}=\sum_{y\in\Lambda^{c}}J_{xy}\). Therefore, thanks to (11), most of the points \(y\) close to \(\partial\Lambda\) will not be connected to \(\partial\Lambda\), which allows us to replace the event \(\{0\not\leftrightarrow\mathfrak{g}\}\) with the event \(\{0\not\leftrightarrow\partial\Lambda\}\). This term can be estimated using a Peierls-like argument: we will decompose \(\mathcal{C}_{0,x}\) into \(C_{1},\ldots,C_{k}\), where the \(C_{i}\)'s are disjoint nearest-neighbor connected components of \(\mathcal{C}_{0,x}\). We will extract a path \(\gamma\) from \(0\) to \(x\) in such a way that all points of \(\gamma\) are in \(\cup_{i=1}^{k}C_{i}\) and that \(\left|\gamma^{(i)}\right|=K|\partial C_{i}|\) for some \(K>0\), where \(\gamma^{(i)}\) is the part of \(\gamma\) in \(C_{i}\). In this way, using (12) and standard perturbative estimates, we will extract for \(\gamma^{(i)}\) a cost of order
\[e^{-c\beta|\partial C_{i}|}\beta^{\left|\gamma^{(i)}\right|}\prod_{e\in\gamma^ {(i)}}J_{e},\]
which can be compared easily to a Green function of a killed random walk with parameter \(\lambda(\beta)\) satisfying \(\lim_{\beta\to\infty}\lambda(\beta)=0\).
Proof of Lemma 5.2.: We are going to prove Lemma 5.2 only for \(d=2\) where the use of planar duality simplifies the notations. One can generalize the argument that follows for any \(d\geq 2\) in a standard way by introducing \((d-1)\)-dimensional plaquettes (i.e., the \((d-1)\)-dimensional faces of a \(d\)-dimensional hypercube). We define the _dual graph_ of \(\mathbb{Z}^{2}\) by
\[(\mathbb{Z}^{2})^{*}=\mathbb{Z}^{2}+(1/2,1/2).\]
The edges of \((\mathbb{Z}^{2})^{*}\) are called _dual edges_, and any dual edge \(e^{*}\) is perpendicular to a unique edge \(e\) of \(\mathbb{Z}^{2}\). Therefore, there is a one-to-one correspondence between percolation configurations on \(\mathbb{Z}^{2}\) and those on \((\mathbb{Z}^{2})^{*}\), where a dual edge \(e^{*}\) is open if and only if \(e\) is closed.
We are going to use the random-current representation of the truncated two-point function (9) (see section 2.5.2). We will only work with a single current since one has
\[\mathbb{P}^{\varnothing,\{0,x\}}_{\Lambda_{N}^{\emptyset},\beta}[0\not\leftrightarrow \mathfrak{g}]\leq\mathbb{P}^{\{0,x\}}_{\Lambda_{N}^{\emptyset},\beta}[0 \not\leftrightarrow\mathfrak{g}].\]
To any percolation configuration \(\omega\) induced by a current \(\mathbf{n}\) on \(\Lambda_{N}^{\emptyset}\) with sources \(\{0,x\}\), we can associate a new percolation configuration \((\hat{\omega}_{e})_{e\in E_{\Lambda_{N+1}}}\) as follows:
\[\hat{\omega}_{e}=\left\{\begin{array}{ll}\omega_{e}&\mbox{if }e\in E_{ \Lambda_{N}}\\ \omega_{x\mathfrak{g}}&\mbox{if }e=\{x,y\},\ x\in\partial\Lambda_{N},\ y\in \partial\Lambda_{N+1},\ |x-y|_{1}=1\\ 0&\mbox{otherwise}.\end{array}\right.\]
We therefore have a surjective mapping \(F:\omega\mapsto\hat{\omega}\) from the set of currents on \(\Lambda_{N}^{\emptyset}\) having sources \(\{0,x\}\) to the set of percolation configurations on \(E_{\Lambda_{N+1}}\). The law of \(\hat{\omega}\) previously defined is therefore the push-forward measure of \(\mathbb{P}^{\{0,x\}}_{\Lambda_{N}^{\emptyset},\beta}\) by \(F\). Said differently, \(\hat{\omega}\sim\mathbf{P}_{\Lambda_{N+1}}\), where \(\mathbf{P}_{\Lambda_{N+1}}\) is the probability measure defined by
\[\mathbf{P}_{\Lambda_{N+1}}(A)=\mathbb{P}^{\{0,x\}}_{\Lambda_{N}^{\emptyset}, \beta}(F^{-1}(A))\]
for any \(A\in\{0,1\}^{E_{\Lambda_{N+1}}}\). Remark that \(\mathbf{P}_{\Lambda_{N+1}}\) inherits the finite energy lower bound (12) from \(\mathbb{P}^{\{0,x\}}_{\Lambda_{N}}\). This allows us to reinterpret \(0\not\leftrightarrow\mathfrak{g}\) as the event that \(0\) is disconnected from \(\partial\Lambda_{N+1}\). Indeed, observe that in order to have a connection from \(0\) to \(\partial\Lambda_{N+1}\) in \(\hat{\omega}\), there must be a connection from \(0\) to \(\mathfrak{g}\) in \(\omega\). This implies in particular that
\[\mathbb{P}^{\{0,x\}}_{\Lambda_{N}^{\emptyset}}[0\not\leftrightarrow\mathfrak{ g}]\leq\mathbf{P}_{\Lambda_{N+1}}[0\not\leftrightarrow\partial\Lambda_{N+1},0 \leftrightarrow x].\]
Such an event can easily be described using dual blocking surfaces in a Peierls-like argument. We will call a path _basic_ if it only uses edges of length \(1\). Consider \(\mathcal{C}_{0,x}\) the joint cluster of \(0\) and \(x\). For any \(y\in\mathcal{C}_{0,x}\), denote by \([y]\) the (random) set of points \(z\in\mathcal{C}_{0,x}\) such that there exists an open basic path joining \(y\) to \(z\). Choose an arbitrary order on \(\mathbb{Z}^{d}\). Choose \(\gamma=(\gamma_{0},...,\gamma_{n})\) joining \(0\) to \(x\) to be an open self-avoiding path minimal according to this order. We extract a new path from \(\gamma\) using the following procedure. Let \(r_{0}=0\) and
\[r_{1}:=\max\{i\ :\ \gamma_{i}\in[0],\ 0\leq i\leq n\}.\]
For \(k\geq 1\), define recursively
\[r_{k+1}:=\max\{i\ :\ \gamma_{i}\in[\gamma_{r_{k}+1}],\ r_{k}<i\leq n\}.\]
This procedure stops as soon as \(r_{k}=n\). Let \(m=m(\gamma)\) be such that \(r_{m}=n\). By construction, for any \(1\leq k<m\) we have the inclusion \(\{\gamma_{r_{k}+1},...,\gamma_{r_{k+1}}\}\subset[\gamma_{r_{k}}]\), and the sets \(\left([\gamma_{r_{k}}]\right)_{k\geq 1}\) are all disjoint sets. For any \(k\geq 1\), there is a minimal self-avoiding basic path of open edges joining \(\gamma_{r_{k}+1}\) to \(\gamma_{r_{k+1}}\), using only points in \([\gamma_{r_{k}}]\), that is
minimal with respect to the order we previously chose. Denote by \(\alpha_{k}\) such a path, and by \(\lambda_{k}\) its length. Denote by \(\alpha_{0}\) (respectively \(\lambda_{0}\)) the self-avoiding basic path joining \(0\) to \(\gamma_{r_{1}}\) (respectively its length).
We now have a new self-avoiding path joining \(0\) to \(x\) defined by taking the union of the paths \((\alpha_{i})_{i\geq 0}\). From now on, we will denote by \(\gamma\) this new path in order to lighten the notations. To any cluster realization of the cluster \(\mathcal{C}_{0,x}\) one can thus associate an open path \(\Gamma(\mathcal{C}_{0,x})\) joining \(0\) to \(x\) using this procedure. Moreover, each \(\alpha_{k}\) is contained in the interior of a dual basic path of open edges. Denote by \(\partial^{*}[\gamma_{r_{k}}]\) the shortest such path and by \(\operatorname{Int}(\partial^{*}[\gamma_{r_{k}}])\) its interior. We call \(\partial^{*}[\gamma_{r_{k}}]\)_the dual boundary of \([\gamma_{r_{k}}]\)_. Note that the \([\gamma_{r_{k}}]\)'s are disjoint and each edge belonging to one of their dual boundaries can belong at most to two different boundaries. Since all the \([\gamma_{r_{i}}]\)'s are connected subgraphs of a lattice and the \(\alpha_{i}\)'s are of minimal length, there exists a family \(\alpha_{1}^{*},...,\alpha_{m}^{*}\) of dual basic paths with \(|\alpha_{i}^{*}|\geq|\alpha_{i}|\) and \(|\alpha_{i}^{*}|\neq 0\) for all \(0\leq i\leq n\), such that, for every \(i\in\{1,\ldots,n\}\), one has
\[\alpha_{i}^{*}\subset(\partial^{*}[\gamma_{r_{i}}]\cup\operatorname{Int}( \partial^{*}[\gamma_{r_{i}}]))\setminus\bigcup_{j\neq i}\operatorname{Int}( \partial^{*}[\gamma_{r_{j}}])\]
and such that there exists a deterministic constant \(K>0\) satisfying
\[\frac{|\{e^{*}\in\alpha_{i}^{*}:\omega_{e^{*}}=1\}|}{|\alpha_{i}^{*}|}\geq K. \tag{13}\]
Notice that it is possible that \(\alpha_{i}^{*}=\partial^{*}[\gamma_{r_{i}}]\). We are going to prove that there exists \(C,c>0\) such that
\[\mathbf{P}_{\Lambda_{N+1}}\left[0\nleftrightarrow\partial\Lambda_{N+1},0\leftrightarrow x,\Gamma(\mathcal{C}_{0,x})=\gamma\right]\leq\prod_{i=1}^{|\gamma|}C\mathbf{e}^{-c\beta}\beta J_{\gamma_{i-1}\gamma_{i}}.\]
In order to prove this inequality, we are going to use the fact that all edges in \(\gamma\) are open (which will give the contribution in \(\beta J_{\gamma_{i-1},\gamma_{i}}\)), that all (dual) edges in \(\partial^{*}[\gamma_{r_{i}}]\) are open and that there exists a strictly positive proportion of (dual) edges in \(\alpha_{i}^{*}\) that are open. Fix now some \(\alpha_{k}\). We are going to separate between two cases. Firstly, assume \(|\partial^{*}[\gamma_{r_{k}}]|\geq|\alpha_{k}|\). In this case, using (12), one has
\[\mathbf{P}_{\Lambda_{N+1}}(\omega_{f^{*}}=1\;\forall f^{*}\in\partial^{*}[ \gamma_{r_{k}}])\leq C2^{|\partial^{*}[\gamma_{r_{k}}]|}\mathbf{e}^{-\frac{ \beta}{2}\left|\partial^{*}[\gamma_{r_{k}}]\right|}\leq C\mathbf{e}^{-c\beta \left|\partial^{*}[\gamma_{r_{k}}]\right|}, \tag{14}\]
Figure 2. A realization of \(\mathcal{C}_{0,x}\). The open dual edges are dashed and the \(\alpha_{i}\)’s are in red.
In the first inequality, the \(\frac{1}{2}\) factor ensures that edges belonging to two different boundaries are not counted twice in the upcoming bounds. Therefore, the existence of an open dual basic path \(\partial^{*}[\gamma_{r_{k}}]\) of length at least \(|\alpha_{k}^{*}|\) surrounding \(\alpha_{k}\) is an event of probability at most \(C\mathbf{e}^{-c\beta|\alpha_{k}|}\).
Secondly, assume that \(|\partial^{*}[\gamma_{r_{k}}]|<|\alpha_{k}|\). In this case, using (13) and (12), the existence of \(\alpha_{k}^{*}\) is an event with probability bounded by
\[|\partial^{*}[\gamma_{r_{k}}]\cup\mathrm{Int}(\partial^{*}[\gamma_{r_{k}}])|\,C\mathbf{e}^{-c\beta K|\alpha_{k}^{*}|}\leq C\mathbf{e}^{-c^{\prime}\beta K|\alpha_{k}^{*}|},\]
where we used that the number of ways of choosing open edges in \(\alpha_{k}^{*}\) is given by \(\sum_{r\geq K|\alpha_{k}^{*}|}\binom{|\alpha_{k}^{*}|}{r}\).
Putting all of this together, denoting by \(A_{N}\) the event \(\{0\leftrightarrow x\}\cap\{0\nleftrightarrow\partial\Lambda_{N+1}\}\), the union bound gives
\[\mathbf{P}_{\Lambda_{N+1}}\left(A_{N},\ \Gamma(\mathcal{C}_{0,x})=\gamma\right) \leq\sum_{\alpha_{1}^{*},\ldots,\alpha_{m(\gamma)}^{*}}\prod_{k=1 }^{m(\gamma)}C\mathbf{e}^{-c\beta K\left|\alpha_{k}^{*}\right|}\prod_{i=1}^{| \gamma|}\beta J_{\gamma_{i-1}\gamma_{i}} \tag{15}\] \[\leq\left(\prod_{k=1}^{m(\gamma)}\sum_{\alpha_{k}^{*}}C\mathbf{e }^{-c\beta K\left|\alpha_{k}^{*}\right|}\right)\prod_{i=1}^{|\gamma|}\beta J_{ \gamma_{i-1}\gamma_{i}}\] (16) \[=\prod_{k=1}^{m(\gamma)}C\mathbf{e}^{-c^{\prime}\beta|\lambda_{k} |}\prod_{i=1}^{|\gamma|}\beta J_{\gamma_{i-1}\gamma_{i}}, \tag{17}\]
where, in the last line, we used that the number of paths \(\alpha_{k}^{*}\) of length \(l\) is bounded by \((2d)^{l}\) and took \(\beta\) large enough. Since \(\sum_{k}\lambda_{k}\geq\frac{|\gamma|}{2}\), there exist two positive constants \(C\) and \(c\) such that
\[\mathbf{P}_{\Lambda_{N+1}}\left[A_{N},\Gamma(\mathcal{C}_{0,x})=\gamma\right]\leq\prod_{i=1}^{|\gamma|}C\mathbf{e}^{-c\beta}\beta J_{\gamma_{i-1}\gamma_{i}}. \tag{18}\]
Therefore, for any \(\beta\) big enough, one has
\[\mathbf{P}_{\Lambda_{N+1}}\left[A_{N}\right]\leq\sum_{n\geq 0}\sum_{\gamma\in\mathrm{SAW}_{n}(0,x)}\prod_{i=1}^{|\gamma|}C\mathbf{e}^{-c\beta}\beta J_{\gamma_{i-1}\gamma_{i}}\leq G_{\lambda(\beta)}^{\mathrm{KRW}}(0,x),\]
where \(\lim_{\beta\to\infty}\lambda(\beta)=0\). Since we have
\[\langle\sigma_{0};\sigma_{x}\rangle_{\Lambda_{N};\beta}=\langle\sigma_{0}\sigma_{x}\rangle_{\Lambda_{N};\beta}\mathbb{P}_{\Lambda_{N}^{c},\beta}^{\varnothing,\{0,x\}}\left[0\nleftrightarrow\mathfrak{g}\right]\leq\langle\sigma_{0}\sigma_{x}\rangle_{\Lambda_{N};\beta}\mathbf{P}_{\Lambda_{N+1}}\left[A_{N}\right],\]
there exists a constant \(c_{\beta}>0\) such that \(\langle\sigma_{0};\sigma_{x}\rangle_{\Lambda_{N};\beta}\leq c_{\beta}G_{ \lambda(\beta)}^{\mathrm{KRW}}(0,x)\) for \(\beta\) big enough. Taking the limit as \(N\to\infty\), one finally gets
\[\langle\sigma_{0};\sigma_{x}\rangle_{\beta}\leq c_{\beta}G_{\lambda(\beta)}^{ \mathrm{KRW}}(0,x),\]
which is the desired result.
**Remark 5.1**.: _In the case of the Ising model with strictly positive magnetic field \(h\), one could prove that there exists a non-trivial saturated regime in a straightforward way. Indeed, one can derive a random-current representation of the truncated two-point function in such a way that \(J_{x,\mathfrak{g}}=h>0\) for any \(x\in\mathbb{Z}^{d}\) and that_
\[\langle\sigma_{0};\sigma_{x}\rangle_{\Lambda;\beta,h}=\langle\sigma_{0}, \sigma_{x}\rangle_{\Lambda;\beta,h}\mathbb{P}_{\Lambda_{N}^{c};\beta,h}^{ \varnothing,\{0,x\}}\left[0\nleftrightarrow\mathfrak{g}\right].\]
In particular, in this case, for any connection \(\gamma\) from \(0\) to \(x\) in the right-hand side, \(\gamma\) has to be disconnected from \(\mathfrak{g}\) which is an event of probability of order \(\mathsf{e}^{-ch\beta}\). Therefore, Lemma 5.2 holds in this case as well, from which the desired conclusion follows._
## Acknowledgments
YA is supported by the Swiss NSF grant 200021_200422 and is a member of the NCCR SwissMAP. KK thanks the Excellence Fellowship program at the University of Geneva for supporting him during his studies. Both authors warmly thank Yvan Velenik and Sebastien Ott for useful discussions. We also thank Yvan Velenik for reading the first version of the present article and for several helpful comments.
|
2308.09047
|
A Concept of Assessment of LIV Tests with THESEUS Using the Gamma-Ray
Bursts Detected by Fermi/GBM
|
According to Einstein's special relativity theory, the speed of light in a
vacuum is constant for all observers. However, quantum gravity effects could
introduce its dispersion depending on the energy of photons. The investigation
of the spectral lags between the gamma-ray burst (GRB) light curves recorded in
distinct energy ranges could shed light on this phenomenon: the lags could
reflect the variation of the speed of light if it is linearly dependent on the
photon energy and a function of the GRB redshift. We propose a methodology to
start investigating the dispersion law of light propagation in a vacuum using
GRB light curves. This technique is intended to be fully exploited using the
GRB data collected with THESEUS.
|
Anastasia Tsvetkova, Luciano Burderi, Alessandro Riggio, Andrea Sanna, Tiziana Di Salvo
|
2023-08-17T15:29:54Z
|
http://arxiv.org/abs/2308.09047v1
|
A Concept of Assessment of LIV Tests with _Theseus_ Using the Gamma-Ray Bursts Detected by _Fermi_/GBM
###### Abstract
According to Einstein's special relativity theory, the speed of light in a vacuum is constant for all observers. However, quantum gravity effects could introduce its dispersion depending on the energy of photons. The investigation of the spectral lags between the gamma-ray burst (GRB) light curves recorded in distinct energy ranges could shed light on this phenomenon: the lags could reflect the variation of the speed of light if it is linearly dependent on the photon energy and a function of the GRB redshift. We propose a methodology to start investigating the dispersion law of light propagation in a vacuum using GRB light curves. This technique is intended to be fully exploited using the GRB data collected with _THESEUS_.
gamma-ray bursts: general - methods: data analysis - gamma-ray bursts as cosmological probes and test-bench for fundamental physics - gamma-ray bursts: past, present and future experiments and missions
## 1 Introduction
According to Einstein's special relativity theory, a proper length is Lorentz-contracted by a factor of \(\gamma^{-1}=[1-(v/c)^{2}]^{1/2}\) as observed from the reference frame moving at speed \(v\) relative to the rest frame. However, various spacetime theories, e.g., some string or loop quantum gravity theories
(see, e.g., Rovelli & Smolin (1988, 1990); Rovelli (1998)), imply the existence of a minimum spatial length of the order of the Planck length \(l_{\rm Pl}=\sqrt{G\hbar/c^{3}}=1.6\times 10^{-33}\) cm Hossenfelder (2013). Lorentz invariance is a fundamental property of both the standard model of particle physics and general relativity. In general relativity, a locally inertial reference frame where the Lorentz symmetry is fulfilled can always be chosen. Some quantum gravity (QG) theories predict the Lorentz invariance violation (LIV) at the Planck energy scale (\(E_{\rm Pl}=\sqrt{\hbar c^{5}/G}\simeq 1.22\times 10^{19}\) GeV) as there exists a minimum spatial length \(l_{\rm min}=\alpha l_{\rm Pl}\) (where \(\alpha\sim 1\) is a dimensionless constant inherent to a particular spacetime theory), which, e.g., in the string theories, corresponds to the string length. Therefore, the Lorentz contraction is limited by this spatial scale (see Hossenfelder (2013) for a review).
There are several frameworks implying LIV, e.g., string theory Kostelecky & Samuel (1989a,b), noncommutative spacetime Carroll et al. (2001); Ferrari et al. (2007), Brane worlds Santos & Almeida (2013), Horava-Lifshitz gravity Horava (2009). LIV was considered in the gravitational context for the first time in Kostelecky (2004), where the so-called standard model extension was developed. The Bumblebee1 models, which are the simplest cases of theories including the spontaneous breaking of Lorentz symmetry, are effective field QG theories describing a vector field with a non-zero vacuum expectation value and involving the vacuum condensate Kostelecky & Samuel (1989a,b). The spontaneous symmetry breaking preserves both the geometric constraints and conservation laws or quantities required by the general relativity theory or Riemannian geometry. Regarding gravity, LIV can happen if a vector field ruled by a potential exhibiting a minimum rolls to its vacuum expectation value, similar to the Higgs mechanism Kostelecky (2004). This "bumblebee" vector thus takes an explicit (four-dimensional) orientation, and preferred-frame effects may emerge Bertolami & Paramos (2005). The generalized uncertainty principle (GUP) states that, in quantum theory, if the quantities are incompatible, they are mutually dependent, and measuring one observable may yield some information about its incompatible partners. GUP is based on a momentum-dependent modification of the standard dispersion relation, which is supposed to produce LIV Tawfik et al. (2016); Lambiase & Scardigli (2018). See, e.g.,Kanzi & Sakalli (2019); Ovgun et al. (2019); Kanzi & Sakalli (2021); Delhom et al. (2021); Gogoi & Dev Goswami (2022, 2023); Neves (2023) for recent advances in the Bumblebee and GUP models.
Footnote 1: The name of the model was inspired by the insect, whose ability to fly has been questioned theoretically.
One of the LIV effects, relevant to astrophysics, is the existence of a dispersion law for the photon speed \(c\) (see, e.g., Amelino-Camelia (2000)). However, LIV is not a mandatory property of all QG theories: in some of them, e.g., in the spacetime uncertainty principle Burderi et al. (2016) or in the quantum spacetime Sanchez (2019), LIV is not expected. Despite photon velocity dispersion, the Lorentz invariance is not violated and the dispersion law is a second-order effect relative to the ratio of the photon energy to the QG energy scale. To obtain an idea of the nature of the speed of light dispersion due to LIV, e.g., within the Liouville string approach, one can consider a vacuum as a non-trivial medium containing "foamy" quantum gravity fluctuations whose origin can be
imagined as processes that involve the pair creation of virtual black holes. In this concept, one can verify that the massless particles of different energies can excite vacuum fluctuations differently as they propagate through the quantum gravity medium, producing a non-trivial dispersion relation of Lorentz "non-covariant" form, similarly to the thermal medium Amelino-Camelia et al. (1998a). For more details regarding this concept, see, e.g., Amelino-Camelia et al. (1997).
Since gamma-ray bursts (GRBs) are characterized by high-energy emission, large cosmological distances, and temporal variability at short (\(\lesssim\)10 ms) timescales, they have been applied as powerful tools in LIV searches (see, e.g., Amelino-Camelia et al. (1998a); Ellis et al. (2003, 2006); Abdo et al. (2009a); Vasileiou et al. (2015); Zhang & Ma (2015); Pan et al. (2015); Xu & Ma (2016a,b); Chang et al. (2016); Wei et al. (2017); Ganguly & Desai (2017); Liu & Ma (2018); Zou et al. (2018); Ellis et al. (2019); Wei (2019); Pan et al. (2020); Acciari et al. (2020); Du et al. (2021); Agrawal et al. (2021); Wei & Wu (2021); Bartlett et al. (2021); Xiao et al. (2022); Desai et al. (2023)) for more than two decades. In Amelino-Camelia et al. (1998a,b), it was first suggested to test LIV using a comparison of the arrival times of GRB photons detected in distinct energy ranges. In Abdo et al. (2009a); Vasileiou et al. (2013, 2015), the authors exploited the spectral lag between high-energy (\(\sim\)31 GeV) and low-energy photons of GRB 090510 to derive lower limits on the linear and quadratic QG energy: \(E_{\rm QG,1}>(1-10)\times E_{\rm Pl}=1.22\times(10^{19}-10^{20})\) GeV and \(E_{\rm QG,2}>1.3\times 10^{11}\) GeV, respectively. In Abdo et al. (2009b), \(E_{\rm QG}>10^{18}\) GeV was obtained based on the observation of GRB 080916C. For GRB 190114C MAGIC Collaboration et al. (2019), the linear and quadratic LIV were constrained using the time delays of TeV photons: \(E_{\rm QG,1}>0.58\times 10^{19}\) GeV (\(E_{\rm QG,1}>0.55\times 10^{19}\) GeV) and \(E_{\rm QG,2}>0.63\times 10^{11}\) GeV (\(E_{\rm QG,2}>0.56\times 10^{11}\) GeV) for the subluminal (superluminal) case Acciari et al. (2020). In Liu et al. (2022), the authors constrained the linear and quadratic LIV for the set of 32 _Fermi_/GBM GRBs with known redshifts and characterized by the positive-to-negative transition of the spectral lag: \(E_{\rm QG,1}=1.5\times 10^{14}\) GeV for the linear case and \(E_{\rm QG,2}=8\times 10^{5}\) GeV for the quadratic case.
The phenomenon of GRBs remains puzzling, although much progress has been made. Both the light curves and the spectra among GRBs vary significantly. It is generally believed that collisions between relativistic shells ejected from an active central engine produce pulses in GRB light curves Rees & Meszaros (1994). The collision of the slower-moving shell with the second, faster shell ejected later produces a shock that dissipates internal energy and accelerates the particles that emit the GRB radiation. As of now, two "physical" classes of GRBs are distinguished (see, e.g., Zhang et al. (2009)): the merger-origin Type I GRBs Blinnikov et al. (1984); Paczynski (1986); Eichler et al. (1989); Paczynski (1991), which are usually short, with a duration of less than 2 s Mazets et al. (1981); Kouveliotou et al. (1993), and spectrally hard, and the collapser-origin Type II GRBs Woosley (1993); Paczynski (1998); MacFadyen & Woosley (1999); Woosley & Bloom (2006), characterized by longer durations.
The spectral lag, which is known as the difference in arrival time between high-energy and low-energy photons, is a common phenomenon occurring during the GRB prompt emission phase Norris et al. (1986, 2000); Band (1997); Chen et al. (2005) and in high-energy astrophysics in general
(e.g., Norris et al. (2000); Zhang et al. (2002)). The authors of Cheng et al. (1995) were the first to analyze the spectral lags of GRBs. It was found that a soft lag, i.e., the hard photons arriving first, dominates in long GRBs Norris et al. (2000); Wu & Fenimore (2000); Chen et al. (2005); Norris et al. (2005), and some GRBs have significantly different spectral lags in early and late epochs Hakkila & Giblin (2004). It was shown that the lags are correlated with the GRB luminosity Norris et al. (2000) and the jet break times in afterglow light curves Salmonson & Galama (2002); the spectral lags of long GRBs are correlated with the pulse duration, while the spectral lags of short GRBs are not Yi et al. (2006). The GRB spectral lags can be explained within several physical models, such as the curvature effect of a relativistic jet and rapidly expanding spherical shell Ioka & Nakamura (2001); Shen et al. (2005); Lu et al. (2006); Shenoy et al. (2013); Uhm & Zhang (2016). Regardless of its physical origin, a spectral lag is an important GRB parameter as it may help to distinguish between long and short GRBs: long bursts have large lags, while short bursts have relatively smaller or negligible lags Norris (1995); Ryde (2005); Norris & Bonnell (2006).
To show the feasibility of the search for the dispersion of the speed of light in the data collected by THESEUS Amati et al. (2018), one can start performing similar research using the data of already commissioned missions. In this paper, we describe a methodology for testing the dispersion of the speed of light, and we intend to apply it to the _Fermi_/GBM data and the set of GRBs with known redshifts. The paper is organized as follows. We start with a brief description of the instrumentation and data in Section 2. The methodology that involves constraining LIV using GRBs is described in Section 3. In Section 4, we discuss the _THESEUS_ mission in the context of the LIV tests using GRBs. Section 5 concludes the paper.
## 2 Instrumentation and Data
Among several space-based detectors able to collect GRB data, the most prolific are _Swift_/BAT (BAT; Gehrels et al. (2004)), _Fermi_/GBM (GBM; Meegan et al. (2009)), and Konus-_Wind_ (KW; Aptekar et al. (1995)). More details regarding the BAT, GBM, KW, and other GRB detectors' design and performance can be found in Tsvetkova et al. (2022). However, since _Swift_/BAT collects GRB data in a narrow energy band of 15-350 keV, and Konus-_Wind_ records GRB light curves in three fixed energy windows, the _Fermi/GBM_ GRB time histories seem to be the most suitable for the testing of the speed of light variance.
Launched in June 2008, the _Fermi_ Gamma-Ray Space Telescope Thompson & Wilson-Hodge (2022) harbors two scientific instruments: the Gamma-Ray Burst Monitor (GBM) and the Large Area Telescope (LAT; Atwood et al. (2009)). The LAT covers the 30 MeV-300 GeV band, while the GBM, intended to detect and study GRBs, is sensitive within the 8 keV-30 MeV energy range, extending the spectral band over which bursts are observed downwards to the hard X-ray range. GBM comprises twelve NaI(Tl) detectors covering an energy range of 8 keV-1 MeV and two bismuth-germanate (BGO) scintillation detectors sensitive within the 150 keV to 30 MeV band that observe the whole sky not occulted by the Earth (\(>\)8 sr).
The primary scientific data produced by GBMs can be summarized as a time history and spectra, which are provided as temporally pre-binned (CTIME and CSPEC) or temporally unbinned time tagged events (TTE). These data types are produced as "snippets" for every trigger and are also provided continuously. The CTIME data are collected in 8 energy channels with a 256 ms time resolution, while the CSPEC data are recorded in 128 channels with an 8.192 s time resolution. TTEs for each detector are recorded with time precision down to 2 \(\mu\)s, in 128 energy channels, matching the CSPEC ones, which gives an excellent opportunity to bin the data in time and energy in a suitable way. From 2008 through November 2012, TTE were available only during a 330 s interval: from 30 s before the burst trigger to 300 s after the burst trigger. Since November 2012, GBM flight software has produced a new data type, continuous TTE (CTTE), available at all times that the instrument is operating.
To date, the GBM has triggered almost 3500 GRBs, among which almost 3000 have \(T_{90}>2\) s.
## 3 Methodology
We start the research by computing the redshifts and the rest-frame spectral lags for all GBM-triggered GRBs. Then, thanks to the central limit theorem, we can consider the distribution of the lags as normal and compute the mean and variance. The large number of "measurements" is expected to significantly increase the accuracy of the spectral lags and constraints on \(E_{\rm QG}\). Another approach would be to constrain the QG energy (see Section 3.1) first and then estimate its mean values and variance.
### Brief Introduction to the Basics of the Speed of Light Variance
The velocity dispersion law for photons can be expressed as a function of its observer-frame energy \(E_{\rm obs}\) in units of the QG energy scale \(E_{\rm QG}\), at which the quantum nature of gravity becomes important:
\[v_{\rm phot}/c-1=\xi\left(\frac{E_{\rm obs}}{E_{\rm QG}}\right)^{n}, \tag{1}\]
where \(v_{\rm phot}\) is the group velocity of a photon wave-packet, \(E_{\rm QG}=\zeta m_{\rm Pl}c^{2}=\zeta E_{\rm Pl}\), \(m_{\rm Pl}=2.176\times 10^{-5}\) g is the Planck mass, \(\zeta\sim\alpha^{-1}\sim 1\) expresses the significance of the QG effects, \(\xi\sim\pm 1\) is a dimensionless constant inherent to a particular QG theory, and the index \(n\) denotes the order of the first relevant term of the small parameter \(\left(\frac{E_{\rm obs}}{E_{\rm QG}}\right)\). This expression takes into account that high-energy photons can travel faster (superluminal, \(\xi=+1\)) or slower (subluminal, \(\xi=-1\)) than low-energy ones Amelino-Camelia & Smolin (2009).
The difference in the arrival times of photons emitted at the same time in the same place is
\[\Delta t_{\rm QG}=\xi\left(\frac{D_{\rm trav}}{c}\right)\left(\frac{\Delta E_ {\rm obs}}{E_{\rm QG}}\right)^{n}, \tag{2}\]
where \(D_{\rm trav}\) is the comoving distance traversed by a massless particle, emitted at redshift \(z\) and traveling down to redshift 0.
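As a rough order-of-magnitude illustration of Eq. (2) (not a measurement), the snippet below evaluates \(\Delta t_{\rm QG}\) in the linear case \(n=1\) with \(E_{\rm QG}=E_{\rm Pl}\) and an assumed travel distance of order 1 Gpc; all numerical inputs are assumptions chosen only to display the expected millisecond scale of the effect.

```python
# Back-of-the-envelope evaluation of Eq. (2): the QG-induced arrival-time
# difference for an energy difference Delta_E over a travel distance D_trav,
# assuming the linear case n = 1 and E_QG = E_Pl (illustrative inputs only).
c    = 2.998e8        # speed of light, m/s
Gpc  = 3.086e25       # one gigaparsec in metres
E_Pl = 1.22e19        # Planck energy in GeV

def dt_qg(delta_E_GeV, D_trav_m, E_QG_GeV=E_Pl, n=1, xi=-1):
    return xi * (D_trav_m / c) * (delta_E_GeV / E_QG_GeV) ** n

# Example: 1 GeV photons versus low-energy photons over ~1 Gpc (subluminal case)
print(abs(dt_qg(1.0, 1.0 * Gpc)))   # ~ 8e-3 s, i.e. millisecond-scale delays
```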
### Observed Spectral Lag as a Function of Redshift
For a given observer-frame energy \(E_{\rm obs}\), the total spectral lag \(\tau_{\rm total,\,obs}(E_{\rm obs},z)\) can be split into two terms:
\[\tau_{\rm total,\,obs}(E_{\rm obs},z)=\tau_{\rm int,\,obs}(E_{\rm obs},z)+\tau _{\rm QG,\,obs}(E_{\rm obs},z), \tag{3}\]
where \(\tau_{\rm int,\,obs}\) is the observed intrinsic spectral lag, which corresponds to the intrinsic rest-frame lag
\[\tau_{\rm int,\,rf}(E_{\rm rf})=\tau_{\rm int,\,obs}(E_{\rm rf})/(1+z) \tag{4}\]
induced by the GRB central engine emission mechanism and assumed to be independent of the photon source redshift \(z\). \(\tau_{\rm QG,\,obs}\) is the lag induced by the QG effects discussed above, and \(E_{\rm rf}\) is the photon energy in the rest frame of its source. Following Jacob & Piran (2008), \(\tau_{\rm QG,\,obs}\) can be expressed as a function of the GRB rest-frame energy:
\[\begin{array}{c}\tau_{\rm QG}(E_{\rm rf},z)=\xi\left(\frac{1}{H_{0}}\right) \left(\frac{E_{\rm rf}}{\zeta E_{\rm pl}}\right)^{n}\left(\frac{1+n}{2}\right) \left(\frac{1}{1+z}\right)^{n}\times\\ \times\int_{0}^{z}\frac{(1+z^{\prime})^{n}dz^{\prime}}{\sqrt{\Omega_{m}(1+z^{ \prime})^{3}+\Omega_{\Lambda}}},\end{array} \tag{5}\]
where \(H_{0}\) is the Hubble constant, \(\Omega_{m}\) is the matter density parameter, and \(\Omega_{\Lambda}\) is the dark energy density parameter, i.e., the parameters of the standard \(\Lambda\)CDM model.
Experimentally, the total observed spectral lag \(\tau_{\rm total,\,obs}(E_{\rm rf},z)\) can be computed by cross-correlating the GRB light curves recorded in the redshift-dependent energy windows corresponding to the fixed rest-frame energy windows as \(E_{\rm obs}=E_{\rm rf}/(1+z)\), with the GRB light curve collected at the lowest possible energy channel, where the lag induced by QG is negligible, e.g., it is \(\sim\)\(\mu\)s in the 5-20 keV energy range, while, in the higher-energy bands, it is \(\sim\)ms.
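To illustrate this step, here is a minimal numpy sketch that estimates the lag between two binned light curves from the peak of their discrete cross-correlation function, refined with a parabolic interpolation of the peak; the bin width, pulse shape, count rates, and the sign convention (positive value = the high-energy band leads) are assumptions made for the demonstration, and a real analysis would also need an uncertainty estimate (e.g., from bootstrapping the light curves).

```python
# Minimal sketch: spectral-lag estimate from the peak of the discrete
# cross-correlation function (CCF) of two binned light curves.
# The synthetic pulse shapes, rates, and binning below are illustrative only.
import numpy as np

def ccf_lag(lc_low, lc_high, dt, max_shift=100):
    """Lag (in seconds) of lc_high relative to lc_low; with this convention a
    positive value means the high-energy photons arrive earlier (soft lag)."""
    a = (lc_low - lc_low.mean()) / lc_low.std()
    b = (lc_high - lc_high.mean()) / lc_high.std()
    shifts = np.arange(-max_shift, max_shift + 1)
    ccf = np.array([np.mean(a * np.roll(b, s)) for s in shifts])
    k = np.argmax(ccf)
    if 0 < k < len(shifts) - 1:            # parabolic refinement of the peak
        y0, y1, y2 = ccf[k - 1], ccf[k], ccf[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return (k - max_shift) * dt

# Synthetic demo: one pulse, with the high-energy band leading by 50 ms
dt = 0.01                                   # 10 ms bins (assumption)
t = np.arange(0.0, 20.0, dt)
pulse = lambda t0: np.exp(-np.abs(t - t0)) * (t > t0 - 2.0)
rng = np.random.default_rng(0)
lc_low  = rng.poisson(200.0 * pulse(5.00) + 50.0).astype(float)
lc_high = rng.poisson(120.0 * pulse(4.95) + 20.0).astype(float)
print(ccf_lag(lc_low, lc_high, dt))         # ~ +0.05 s
```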
The total observed spectral lag \(\tau_{\rm total,\,obs}(E_{\rm rf},z)\) should follow the relation obtained by inserting Equations (4) and (5) into Equation (3):
\[\begin{array}{c}\tau_{\rm total,\,obs}(E_{\rm rf},z)=\tau_{\rm int,\,obs}( E_{\rm rf})+\\ \xi\left(\frac{1}{H_{0}}\right)\left(\frac{E_{\rm rf}}{\zeta E_{\rm pl}} \right)^{n}\left(\frac{1+n}{2}\right)\left(\frac{1}{1+z}\right)^{n}\int_{0}^{ z}\frac{(1+z^{\prime})^{n}dz^{\prime}}{\sqrt{\Omega_{m}(1+z^{\prime})^{3}+ \Omega_{\Lambda}}}.\end{array} \tag{6}\]
The transformation of spectral lags from the observer frame back to the rest frame2 can be
performed by dividing by the redshift factor \((1+z)\):
\[\tau_{\rm total,\,rf}(E_{\rm rf},z)=\tau_{\rm total,\,obs}(E_{\rm rf},z)/(1+z). \tag{7}\]
Thus, the GRB rest-frame spectral lag obeys the following relation:
\[\begin{split}\tau_{\rm total,\,rf}(E_{\rm rf},z)=\frac{\tau_{ \rm int,\,obs}(E_{\rm rf})}{(1+z)}+\\ \xi\left(\frac{1}{H_{0}}\right)\left(\frac{E_{\rm rf}}{\zeta E_{ \rm pl}}\right)^{n}\left(\frac{1+n}{2}\right)\left(\frac{1}{1+z}\right)^{n+1} \int_{0}^{z}\frac{(1+z^{\prime})^{n}dz^{\prime}}{\sqrt{\Omega_{m}(1+z^{\prime })^{3}+\Omega_{\Lambda}}}.\end{split} \tag{8}\]
Let us define a function
\[u(z)=\left(\frac{1+n}{2}\right)\left(\frac{1}{1+z}\right)^{n+1}\int_{0}^{z} \frac{(1+z^{\prime})^{n}dz^{\prime}}{\sqrt{\Omega_{m}(1+z^{\prime})^{3}+ \Omega_{\Lambda}}}. \tag{9}\]
Then, the experimentally determined lag \(\tau_{\rm total,\,obs}(E_{\rm rf},z)\) will follow the relation
\[\tau_{\rm total,rf}(E_{\rm rf},z)=\tau_{\rm int,rf}(E_{\rm rf})+\xi\left( \frac{1}{H_{0}}\right)\left(\frac{E_{\rm rf}}{\zeta E_{\rm pl}}\right)^{n}u(z). \tag{10}\]
The dependence of \(\tau_{\rm total,\,rf}(E_{\rm rf},z)\) on \(u(z)\) is expected to be linear, with the intercept corresponding to the intrinsic lag and the slope proportional to the ratio of the rest-frame photon energy to the QG energy \(\zeta E_{\rm pl}\) raised to the \(n\)-th power, which is the first significant term in the series expansion of the quantum gravity dispersion relation. An argument in support of the independence of \(\tau_{\rm int,\,rf}\) from \(z\) is the absence of the prominent cosmological evolution of GRB energetics Tsvetkova et al. (2017, 2022), which indicates that the GRB central engine does not evolve significantly with \(z\). However, if \(\tau_{\rm int,\,rf}\) is dependent on \(z\), one can fit both \(\tau_{\rm int,\,rf}(z)\) and \(\tau_{\rm QG}(z)\) simultaneously to the data, as was done in Liu et al. (2022).
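For reference, the function \(u(z)\) of Eq. (9) is straightforward to evaluate numerically; the sketch below does so for the linear case \(n=1\) under an assumed flat \(\Lambda\)CDM cosmology with \(\Omega_{m}=0.3\) and \(\Omega_{\Lambda}=0.7\) (illustrative values only). With \(u(z)\) tabulated, Eq. (10) reduces the analysis at fixed \(E_{\rm rf}\) to an ordinary linear fit of the rest-frame lags against \(u(z)\), whose intercept estimates \(\tau_{\rm int,\,rf}\) and whose slope carries the QG term.

```python
# Sketch: numerical evaluation of u(z) from Eq. (9) for the linear case n = 1.
# The cosmological parameters below (flat LambdaCDM) are illustrative values.
import numpy as np
from scipy.integrate import quad

Omega_m, Omega_L = 0.3, 0.7            # assumed flat LambdaCDM parameters

def u_of_z(z, n=1):
    integrand = lambda zp: (1 + zp) ** n / np.sqrt(Omega_m * (1 + zp) ** 3
                                                   + Omega_L)
    integral, _ = quad(integrand, 0.0, z)
    return 0.5 * (1 + n) * (1 + z) ** (-(n + 1)) * integral

for z in (0.5, 1.0, 2.0, 5.0):
    print(f"u({z}) = {u_of_z(z):.3f}")
```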
In Liu et al. (2022), it was found that the behavior of the lags follows some key statistical properties described in terms of log-normal and Gaussian distributions around average values. These empirical functions describing the spectral lags (as a function of energy) for each of the 32 _Fermi_/GBM GRBs in the sample of bursts with known redshifts that exhibit the lag transition phenomenon are shown in Figure 1 (as derived from Figure 1 of Liu et al. (2022)). In the method proposed here, we make the reasonable assumption that the intrinsic lags (that dominate in magnitude over the small delays induced by QG effects; see the upper limits for the first- and second-order QG delays shown as blue and orange curves in Figure 1) do not correlate with the redshift as the distance of a given GRB from us does not affect the emission properties in, e.g., the fireball model. This means that, averaging over a large sample of GRBs at different redshifts, the intrinsic delays will cluster around the common value that defines the intrinsic average rest-frame lag.
Figure 1: The dependence of the spectral lag on the energy window for the GRB sample studied in Liu et al. (2022). The fits with a smoothly broken power law (SBPL) are shown by black solid lines. Blue and orange dotted lines denote the maximally allowed LIV-induced lags in linear and quadratic cases, thereby defining the lower limits on the QG energy. This figure is adapted from Figure 1 from Liu et al. (2022) (see Section 3 of this work for details). ©AAS. Reproduced with permission.
### The Technique of Averaging over the Sample
Relation (10) shows that, for a given GRB rest-frame energy \(E_{\rm rf}\), the lag \(\tau_{\rm total,\,rf}(E_{\rm rf})\) depends only on the GRB redshift \(z\) through the function \(u(z)\) defined in Equation (9). Therefore, fixing \(E_{\rm rf}\) for an ensemble of \(N\) GRBs with known redshift, one can compute the function \(\tau_{\rm total,\,rf}(E_{\rm rf})\) from the observed data for all GRBs of the ensemble, i.e., it is possible to obtain a set of \(N\) experimentally computed values of \(\tau_{\rm total,\,rf}(E_{\rm rf})\). This ensemble of \(N\) values can be fitted as a function of \(u(z)\) through Equation (10) to obtain the best fit values of the intrinsic lag in the GRB rest frame \(\tau_{\rm int,\,rf}\), and the coefficient of the QG-induced delay at the GRB rest-frame energy \(E_{\rm rf}\)
\[\left\{\begin{array}{ll}\left[\tau_{int,\,rf}(E_{\rm rf})\right]_{\rm BEST}\\ \mbox{and}\\ \left[\xi\left(\frac{1}{H_{0}}\right)\left(\frac{E_{\rm rf}}{\zeta E_{\rm pl} }\right)^{n}\right]_{\rm BEST}&=\\ \left[\xi\left(\frac{1}{H_{0}}\right)\left(\alpha\frac{E_{\rm rf}}{E_{\rm pl} }\right)^{n}\right]_{\rm BEST}&=\ \left[\phi(\alpha E_{\rm rf})\right]_{\rm BEST} \end{array}\right.. \tag{11}\]
Once the values of \(\left[\phi(\alpha E_{\rm rf})\right]_{\rm BEST}\) are obtained for all \(E_{\rm rf}\), these values can be plotted as a function of \(s(E_{\rm rf})=(E_{\rm rf}/E_{\rm pl})^{n}\) and subsequently linearly fitted through the equation
\[\phi(\alpha E_{\rm rf})=\left(\frac{\alpha^{n}}{H_{0}}\right)s(E_{\rm rf})= \Delta_{\rm QG}\,s(E_{\rm rf}), \tag{12}\]
to obtain the best fit value of the strength of the QG effect \(\Delta_{\rm QG}=\alpha^{n}/H_{0}\). We note that this technique allows us to combine the whole ensemble of \(N\) GRBs to obtain a unique measure of the strength of the QG effect, whose uncertainty \(\sigma_{\Delta_{\rm QG}}\), in the absence of other systematic errors, only depends on the (Poissonian) statistics of the whole ensemble of \(N\) GRBs and therefore improves as the inverse square root of \(N\):
\[\sigma_{\Delta_{\rm QG}}\propto\frac{1}{\sqrt{N}}. \tag{13}\]
Consequently, the precision of the measurement of the QG effect's strength can be improved by increasing the size of the analyzed sample.
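To illustrate the bookkeeping of the two-step procedure (Eqs. (10)-(12)), the following sketch runs it on a purely synthetic ensemble. The cosmology, the noise level, the rest-frame channel energies (geometric means of pseudo-logarithmic bands between 60 and 900 keV, as proposed later in the paper), and the toy input \(E_{\rm QG}\) (placed near the linear bound of Liu et al. (2022) quoted in the Introduction, only so that the effect is visible above the noise) are all assumptions.

```python
# Schematic of the two-step fit of Section 3.3 on purely synthetic data:
# (i)  for each rest-frame energy E_rf, fit tau_rf against u(z) over N bursts
#      to obtain the slope phi(E_rf) and the intercept tau_int (Eqs. (10)-(11));
# (ii) fit phi(E_rf) against s(E_rf) = (E_rf/E_Pl)^n to read off Delta_QG (Eq. (12)).
import numpy as np

def u_of_z(z, n=1, Om=0.3, OL=0.7):              # Eq. (9), assumed flat LambdaCDM
    zp = np.linspace(0.0, z, 2001)
    f = (1 + zp) ** n / np.sqrt(Om * (1 + zp) ** 3 + OL)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zp))   # trapezoid rule
    return 0.5 * (1 + n) * (1 + z) ** (-(n + 1)) * integral

rng = np.random.default_rng(1)
E_Pl_GeV = 1.22e19
E_QG_toy = 1.5e14                      # toy linear QG scale (order of the Liu et al. bound)
H0 = 70e3 / 3.086e22                   # ~70 km/s/Mpc in s^-1 (assumed)

N = 200                                # size of the synthetic GRB ensemble
z = rng.uniform(0.3, 5.0, N)
u = np.array([u_of_z(zi) for zi in z])

E_rf = np.array([77.5, 126.5, 200.0, 316.2, 489.9, 734.8]) * 1e-6   # keV -> GeV
s = E_rf / E_Pl_GeV                    # s(E_rf) for the linear case n = 1
Delta_true = (E_Pl_GeV / E_QG_toy) / H0        # Delta_QG = alpha^n/H0, alpha = E_Pl/E_QG
tau_int_true = 0.10                    # common intrinsic rest-frame lag, s (toy)

phi_hat = np.empty(len(E_rf))
for j, sj in enumerate(s):
    tau_rf = tau_int_true + Delta_true * sj * u + rng.normal(0.0, 0.02, N)
    phi_hat[j], _ = np.polyfit(u, tau_rf, 1)   # slope = phi(E_rf), intercept = tau_int

Delta_hat = np.sum(phi_hat * s) / np.sum(s * s)    # Eq. (12): fit through the origin
print(Delta_hat / Delta_true)                      # close to 1 when the recovery works
```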
### GRB Intrinsic Spectral Lags vs. Quantum Gravity Effects
The GRB spectral lag can be caused by a mixture of two effects: the QG one and the one inherent to the fireball model. The latter is due to the curvature effect, i.e., the kinematic effect caused by the fact that the observer looks at an increasingly off-axis annulus area relative to the line-of-sight Fenimore et al. (1996); Salmonson (2000); Kumar & Panaitescu (2000); Ioka & Nakamura
(2001); Qin (2002); Qin et al. (2004); Dermer (2004); Shen et al. (2005); Lu et al. (2006). Softer low-energy radiation comes from the off-axis annulus area with smaller Doppler factors and is delayed for the observer with respect to on-axis emission due to the geometric curvature of the shell.
A competing hypothesis is that the traditional view based on the high-latitude emission "curvature effect" of a relativistic jet cannot explain spectral lags. Instead, spectral peaks should be swept across the observing energy range in a specific manner to account for the observed spectral lags. A simple physical model that implies synchrotron radiation from a rapidly expanding outflow can explain GRB spectral lags Uhm & Zhang (2016). This model requires the following conditions to be fulfilled: (1) the emission radius has to be large (over several \(10^{14}\) cm from the central engine), in the optically thin region, well above the photosphere; (2) the \(\gamma\)-ray photon spectrum is curved (as observed); (3) the magnetic field strength in the emitting region decreases with the radius as the region expands in space, which is consistent with an expanding jet; and (4) the emission region itself undergoes rapid bulk acceleration as the prompt emission is produced. These requirements are consistent with a Poynting-flux-dominated jet abruptly dissipating magnetic energy at a large distance from the engine. The aforementioned theories successfully explain the positive spectral lags. Nevertheless, the rarely observed negative lags remain a more intriguing phenomenon that can be used to infer the different radiation mechanisms Li (2010); Zhang et al. (2011) or emission regions Toma et al. (2009) of low- and high-energy photons.
For a given GRB, the intrinsic delay inherent to the GRB emission could mimic the genuine quantum gravity effect, making these two effects difficult to disentangle. However, currently, there is no evidence for a correlation between the GRB intrinsic delays and the distances to its sources. For example, in Tsvetkova et al. (2017, 2021), where the largest sample of GRBs with known redshifts detected by a single instrument in a wide energy range is studied, the significance of the cosmological evolution of GRB energetics is \(\lesssim\)2\(\sigma\). Meanwhile, the delays induced by a photon dispersion law are proportional both to the light travel distance (a function of redshift) and to the differences in the energy of the photons. This dual dependence on energy and redshift could be the unique feature of a genuine QG effect. As suggested in Burderi et al. (2020), given an adequate collection area, GRBs, once their redshifts are known, are potentially excellent tools to search for the first-order dispersion law for photons.
### Computation of Spectral Lags
To avoid distortions due to the fact that the shape of the light curve changes with the energy, we suggest fixing the energy channels in which the light curves are recorded to certain values in the rest frame. In this case, the corresponding observer-frame values of the channel boundaries will be \(E_{\rm obs}=E_{\rm rest}/(1+z)\), i.e., redshift-dependent. The first step to test the LIV effects with the suggested technique would be to apply it to the GBM data. Thus, we propose to use the following energy bands to record the light curves. Given the 9 keV lower boundary of the GBM spectral window, one has to select the rest-frame channels starting from, at least, 60 keV, to allow for bursts with redshifts
up to \(z=5\). This boundary can be shifted towards a higher value to allow high-redshift GRBs to contribute to the study. However, we should mention that the majority of GRBs have redshifts \(z<5\) (see Figure 2). Some examples of the pseudo-logarithmic channels that could be used are 60-100 keV, 100-160 keV, 160-250 keV, 250-400 keV, 400-600 keV, 600-900 keV, or 60-80 keV, 80-100 keV, 100-130 keV, 130-160 keV, 160-200 keV, 200-250 keV, 250-320 keV, 320-400 keV, 400-500 keV, 500-650 keV, 650-900 keV. We found these numbers of channels to be reasonable in terms of SNR based on the research of Liu et al. (2022), carried out for 32 _Fermi_/GBM GRBs.
Since the GRB energetics are usually considered on the logarithmic scale, we suggest adopting the geometric mean of the lower and upper boundaries of an energy band, \(E_{\rm phot}=\sqrt{E_{\rm min}\times E_{\rm max}}\), as a proxy for the average energy of photons in the given range.
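A minimal sketch of this bookkeeping (the channel set is the first example listed above; the redshift is an arbitrary illustrative value):

```python
# Sketch: observer-frame boundaries of fixed rest-frame channels for a burst
# at redshift z, and the geometric-mean proxy energy of each channel.
import math

rest_channels_keV = [(60, 100), (100, 160), (160, 250), (250, 400),
                     (400, 600), (600, 900)]

def observer_channels(z):
    return [(lo / (1 + z), hi / (1 + z)) for lo, hi in rest_channels_keV]

def proxy_energy(lo, hi):
    return math.sqrt(lo * hi)          # geometric mean of the channel boundaries

z = 2.1                                # example redshift (illustrative)
for (lo, hi), (olo, ohi) in zip(rest_channels_keV, observer_channels(z)):
    print(f"rest {lo}-{hi} keV -> obs {olo:.1f}-{ohi:.1f} keV, "
          f"E_proxy = {proxy_energy(lo, hi):.1f} keV")
```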
Since the spectral lag distributions of the short and long GRBs significantly differ (Yi et al., 2006), and these two types of bursts belong to distinct classes of progenitors, they have different intrinsic spectral lags. Thus, we suggest studying them separately.
### Obtaining GRB Redshifts
Since the suggested technique of testing LIV using GRBs strongly relies on prior knowledge of the burst redshift, it is necessary either to measure it directly from the observations in optics or estimate it using the prompt emission parameters. GRB redshift measurements based on the detection of emission lines or absorption features of GRB host galaxies imposed on the afterglow continuum, or performed photometrically, are widespread. However, there are other methods to obtain the redshift estimates, e.g., the "pseudo-redshift" (pseudo-z) technique based on the spectral properties of GRB prompt high-energy emission Atteia (2003), using well-known correlations such as, for example, the Norris correlation (spectral lag vs. isotropic peak luminosity; Norris et al. (2000)), the Amati correlation (rest-frame peak energy vs. isotropic energy release; Amati et al. (2002)), the isotropic peak luminosity vs. temporal variability correlation Reichart et al. (2001); Fenimore & Ramirez-Ruiz (2000), the Yonetoku (the rest-frame peak energy vs. the isotropic peak luminosity; Yonetoku et al. (2004)) correlation, etc., or the method of searching for a minimum on the intrinsic hydrogen column density versus the redshift plane (see, e.g., Ghisellini et al. (1999)).
Nowadays, the machine learning (ML) approach to redshift estimation is becoming popular in astrophysics (see, e.g., D'Isanto, A. & Polsterer, K. L. (2018); Dainotti et al. (2019); Lee & Shin (2021); Momtaz et al. (2022)). Supervised ML is a data mining method based on prior knowledge of a "training" data set, on which we can build models predicting the parameter under consideration, a "validation" set, which provides an unbiased evaluation of a model's fit while tuning the model's hyperparameters, and a "test" data set necessary for an unbiased evaluation of the final model fit.
Considering only spectroscopic and photometric redshifts, there were \(\gtrsim\)420 GRBs with reliably measured redshifts by the middle of 2022 (for a list of GRBs with measured redshifts, see Gruber et al. (2011); Atteia et al. (2017); Tsvetkova et al. (2017); Minaev & Pozanenko (2020, 2021);
Figure 2: The cosmological GRB formation rate (GRBFR) derived in Tsvetkova et al. (2021), superposed onto the star formation rate (SFR) data from the literature. The gray points show the SFR data from Hopkins (2004); Bouwens et al. (2011); Hanish et al. (2006); Thompson et al. (2006). The marked line denotes the SFR approximation from Li (2008). The GRBFR normalization is equal for all four data sets and the GRBFR points have been shifted arbitrarily to match the SFR at \((1+z)\sim 3.5\). Figure 5b from Tsvetkova et al. (2021). ©AAS. Reproduced with permission.
Tsvetkova et al. (2021), the Gamma-Ray Burst Online Index3, Jochen Greiner's GRB table4, and the references therein). Using one of the aforementioned techniques, one can estimate the redshift of any burst based on its temporal or spectral parameters and energetics. For example, Lloyd-Ronning et al. (2002); Yonetoku et al. (2004); Kocevski & Liang (2006) used various correlations to obtain unknown GRB redshifts from GRB observables, while Ukwatta et al. (2016) used the random forest algorithm to estimate GRB redshifts.
Footnote 3: [https://sites.astro.caltech.edu/grbox/grbox.php](https://sites.astro.caltech.edu/grbox/grbox.php)
Footnote 4: [https://www.mpe.mpg.de/~jcg/grbgen.html](https://www.mpe.mpg.de/~jcg/grbgen.html)
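As a schematic of the supervised-ML route (in the spirit of the random-forest approach of Ukwatta et al. (2016) mentioned above), the sketch below trains a random-forest regressor on a synthetic table of prompt-emission observables; the feature list and the toy generative model are assumptions, so the resulting numbers carry no physical meaning.

```python
# Minimal sketch: a random-forest "pseudo-redshift" regressor trained on a
# synthetic table of prompt-emission observables (purely illustrative data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400                                        # synthetic "GRBs with known z"
z = rng.uniform(0.1, 6.0, n)
# Toy observables loosely mimicking duration, observed Ep, fluence, and lag:
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n),                          # T90 [s]
    200.0 / (1 + z) * rng.lognormal(0.0, 0.3, n),        # observed Ep [keV]
    rng.lognormal(-12.0, 1.0, n) / (1 + z) ** 2,         # fluence proxy
    0.1 / (1 + z) * rng.lognormal(0.0, 0.5, n),          # observed lag [s]
])

X_tr, X_te, z_tr, z_te = train_test_split(X, z, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, z_tr)
z_pred = model.predict(X_te)
print("median |Delta z| =", np.median(np.abs(z_pred - z_te)))
```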
## 4 Discussion
_THESEUS_ is a mission aimed at increasing the discovery rate of the high-energy transient phenomena over the entirety of cosmic history and fully exploiting GRBs to explore the early Universe Amati et al. (2018, 2021). _THESEUS_ is likely to become a cornerstone of multi-messenger and time-domain astrophysics thanks to its exceptional payload, providing wide and deep sky monitoring in a broad energy range (0.3 keV-20 MeV); focusing capabilities in the soft X-ray band, providing large grasp and a high angular resolution; and onboard near-IR capabilities for immediate transient identification and redshift determination.
The _THESEUS_ payload is planned to include the following instrumentation: (1) the X-Gamma-Ray Imaging Spectrometer (XGIS, 2 keV-20 MeV): a set of two coded-mask cameras using monolithic X-gamma-ray detectors based on bars of silicon diodes coupled with a crystal scintillator, granting a \(\sim\)2 sr field of view (FoV) and source location accuracy of \(\sim\)10\({}^{\prime}\) in the 2-150 keV band, as well as a \(\sim\)4 sr FoV at energies \(>150\) keV, with a few \(\mu\)s timing resolution; (2) a Soft X-Ray Imager (SXI, 0.3-5 keV): a set of two lobster-eye telescope units, covering a total FoV of \(\sim\)0.5 sr with source location accuracy \(\lesssim 2^{\prime}\); (3) an infrared telescope (IRT, 0.7-1.8 \(\mu\)m): a 0.7 m class IR telescope with a \(15^{\prime}\times 15^{\prime}\) FoV, for a fast response, with both imaging (I, Z, Y, J, and H) and spectroscopic (resolving power, R\(\sim\)400, through \(2^{\prime}\times 2^{\prime}\) grism) capabilities.
Thanks to the unique combination of a wide 0.3 keV-10 MeV energy range, remarkable sensitivity, and exceptionally high counting statistics, _THESEUS_ heralds a new era in the multi-wavelength studies of GRBs, providing the community with a sample of GRBs with known redshifts of unprecedented size, which, in turn, will not only allow the use of GRBs as cosmological tools but also shed light on one of the most challenging aspects of QG theory, the systematic study of which is still beyond the current instrumental capabilities. The capability of _THESEUS_ to detect and localize GRBs, as well as measure their redshifts, will essentially surpass those of the current missions.
The left panel of Figure 3 shows the expected detection rate of long GRBs by _THESEUS_ compared with observed GRBs. The orange histogram depicts the cumulative distribution of the GRBs detected by the SXI and/or XGIS (the bursts with measured \(z\) are marked in purple), while
the blue histogram presents the distribution of GRBs with known redshifts detected from 2005 to the end of 2020. The distribution of the GRBs detected by _THESEUS_ was acquired based on the anticipated IRT capabilities and on the assumption of a ground follow-up rate of 50% for the GRBs at \(z<5\). It is noticeable that _THESEUS_ is expected to detect an order of magnitude more bursts than _Swift_ does, especially in the high-redshift domain (\(z>6\)) Ghirlanda et al. (2021). It is expected that the redshifts for the majority of GRBs detected by _THESEUS_ will be measured (onboard or on the ground). The cumulative distribution plotted in the right panel of Figure 3 represents the annual detection rate of short GRBs by XGIS, not corrected for mission observation efficiency. _THESEUS_ is supposed to acquire a statistically significant sample of short GRBs, including high-redshift (\(z\lesssim 4\)-5) events. Considering that the distribution of the spectral lags of short GRBs and long GRBs differs significantly, advances in research on short GRBs are very important for such sophisticated studies of the QG effects as we discuss in this paper. Thus, the sample of GRBs with measured redshifts obtained by THESEUS is the most promising for the application of the described technique to study LIV.
## 5 Conclusions
Various QG theories predict LIV, which can manifest itself as the dispersion of the speed of light. The method that we propose to disentangle and constrain the QG delays from the intrinsic spectral lags in GRB light curves is based on the assumption of the constancy of the rest-frame intrinsic spectral lags and on the linear dependence of the GRB spectral lag on both the photon energy and function of the GRB redshift. The ability to collect a large sample of GRBs with known redshifts is crucial for this type of study, as the precision of the QG effect measurement can be improved by expanding the data set. Currently, redshifts are measured spectroscopically or photometrically for \(\lesssim 500\) GRBs. Thus, indirect estimates of the redshifts from the prompt emission observables are necessary to obtain a large GRB sample for LIV studies using the commissioned instruments. The sample of GRBs collected by _Fermi_/GBM could provide a promising opportunity to apply the aforementioned technique, thanks to its extensive trigger statistics (\(\gtrsim\)3500 GRBs up to date) and sophisticated data acquired with a high temporal and spectral resolution, which could allow precise measurements of the rest-frame spectral lags.
The _THESEUS_ mission is likely to initiate a breakthrough in this field of fundamental physics: thanks to the combination of its unique characteristics, the observatory will collect a sample of GRBs with known redshifts one order of magnitude larger than currently available, which will not only allow the use of GRBs as cosmological tools but will also enable us to constrain the QG theories. Moreover, thanks to its capability to detect the GRB emission in the relatively soft energy band of 0.3-5 keV, _THESEUS_ could provide a unique opportunity not only to constrain the empirical and physical GRB models but also to expand the data range, providing more accurate constraints on the QG energy from the lag-energy plane. Due to its high sensitivity, _THESEUS_ will also allow advances in the study of short GRBs; in particular, XGIS will be able to detect short GRBs up to z\(\sim\)4-5, which is important as the spectral lags of short GRBs differ substantially from
Figure 3: Left plot: Observed GRBs with known redshifts measured in 2005–2020 (blue line and filled cyan area representing 1\(\sigma\) uncertainty) superimposed on the anticipated frequency of detection of long GRBs by _THESEUS_ (orange histogram). The purple hatched histogram shows the GRBs that are expected to have a redshift measured by either _THESEUS_ or ground-based facilities’ telescopes. The model that fits the observed distribution used to make predictions for _THESEUS_ is represented by the green curve. _THESEUS_ is expected to detect one to two orders of magnitude more GRBs than Swift at any redshift, and most importantly in the high-redshift range (\(z>6\)). Right plot: Cumulative redshift distribution of short GRBs detectable with _THESEUS_/XGIS per year of mission. Theoretically, short GRBs can be detected at high redshifts \(z>4\) with a rate of \(\sim\)1 event per year. This figure is adopted from Ghirlanda et al. (2021).
the ones of long bursts. Thus, the _THESEUS_ mission could make a significant contribution to the study of the QG effects using GRBs.
We thank the anonymous referees for helpful comments on the manuscript. Some of the authors wish to thank ASI and INAF (agreements ASI-UNI-Ca 2016- 13-U.O and ASI- INAF 2018-10-hh.0), the Italian Ministry of Education, University and Research (MIUR), Italy (HERMES-TP project) and the EU (HERMES-SP Horizon 2020 Research and Innovation Project under grant agreement 821896) for the financial support within the HERMES project. L. B., A. S. and A. T. acknowledge funding from the Italian Ministry of University and Research (MUR), PRIN 2017 (prot. 20179ZF5KS), "The new frontier of multi-messenger astrophysics: follow-up of electromagnetic transient counterparts of gravitational wave sources" (PI: E. Capellaro). T.d.S., L.B. and A.S. also acknowledge the financial support of PRIN-INAF 2019 within the project "Probing the geometry of accretion: from theory to observations" (PI: Belloni).
|
2310.13255
|
Steve-Eye: Equipping LLM-based Embodied Agents with Visual Perception in
Open Worlds
|
Recent studies have presented compelling evidence that large language models
(LLMs) can equip embodied agents with the self-driven capability to interact
with the world, which marks an initial step toward versatile robotics. However,
these efforts tend to overlook the visual richness of open worlds, rendering
the entire interactive process akin to "a blindfolded text-based game."
Consequently, LLM-based agents frequently encounter challenges in intuitively
comprehending their surroundings and producing responses that are easy to
understand. In this paper, we propose Steve-Eye, an end-to-end trained large
multimodal model designed to address this limitation. Steve-Eye integrates the
LLM with a visual encoder which enables it to process visual-text inputs and
generate multimodal feedback. In addition, we use a semi-automatic strategy to
collect an extensive dataset comprising 850K open-world instruction pairs,
empowering our model to encompass three essential functions for an agent:
multimodal perception, foundational knowledge base, and skill prediction and
planning. Lastly, we develop three open-world evaluation benchmarks, then carry
out extensive experiments from a wide range of perspectives to validate our
model's capability to strategically act and plan. Codes and datasets will be
released.
|
Sipeng Zheng, Jiazheng Liu, Yicheng Feng, Zongqing Lu
|
2023-10-20T03:22:05Z
|
http://arxiv.org/abs/2310.13255v2
|
# Steve-Eye: Equipping LLM-based Embodied Agents with Visual Perception in Open Worlds
###### Abstract
Recent studies have presented compelling evidence that large language models (LLMs) can equip embodied agents with the self-driven capability to interact with the world, which marks an initial step toward versatile robotics. However, these efforts tend to overlook the visual richness of open worlds, rendering the entire interactive process akin to "a blindfolded text-based game." Consequently, LLM-based agents frequently encounter challenges in intuitively comprehending their surroundings and producing responses that are easy to understand. In this paper, we propose Steve-Eye, an end-to-end trained large multimodal model designed to address this limitation. Steve-Eye integrates the LLM with a visual encoder which enables it to process visual-text inputs and generate multimodal feedback. In addition, we use a semi-automatic strategy to collect an extensive dataset comprising 850K open-world instruction pairs, empowering our model to encompass three essential functions for an agent: multimodal perception, foundational knowledge base, and skill prediction and planning. Lastly, we develop three open-world evaluation benchmarks, then carry out extensive experiments from a wide range of perspectives to validate our model's capability to strategically act and plan. Codes and datasets will be released.
## 1 Introduction
Developing embodied agents that can adapt to the open world has long been a substantial challenge (Kolve et al., 2017; Savva et al., 2019). Recently, the rapid progress of large language models (LLMs) (OpenAI, 2022; Touvron et al., 2023a) has shown their potential to serve as a general-purpose assistant. Driven by these pre-trained LLMs, recently proposed agents (Yuan et al., 2023; Wang et al., 2023a;b; Zhu et al., 2023) have managed to extract world knowledge and reasoning capabilities from LLMs, allowing them to become self-driven. Thereby these agents are capable of generating executable policies or plans for a wide range of skills and tasks in an open world.
While current endeavors to integrate LLMs show promise in constructing a generic embodied agent, these efforts primarily translate the entire world into text, which overlooks the multifaceted richness of our diverse visual reality and
Figure 1: (a) LLM-based agent’s feedback is uncontrollable due to the uncertainty of input prompt; (b) a text-only driven agent often finds it difficult to produce intuitive feedback that humans can easily understand.
turns interacting with the environment into something akin to "**a blindfolded text-based game.**" Consequently, such text-only driven agents often face difficulties when it comes to effectively and intuitively representing the world. Imagine a situation where you request your agent to shop for a pair of shoes online. Would you prefer to send the agent a picture of the shoes or provide a lengthy description of the shoes to convey their appearance? Undoubtedly, you would opt for the former choice.
In fact, the agent's reliance on text input/output (I/O) imposes significant limitations on its ability to interact with the world. To illustrate this point, we consider Minecraft (Guss et al., 2019; Fan et al., 2022) as an ideal example. Minecraft, being an expansive sandbox game, offers a vast realm for embodied agents to explore, which requires the acquisition of various basic skills (e.g., crafting logs) and the ability to plan and execute diverse tasks. First, as shown in Figure 1 (a), the LLM-based agent produces uncontrollable outputs. The success of the agent's responses hinges heavily on careful prompt engineering (Huang et al., 2022), ensuring that the LLM comprehends the environment and task objectives. Moreover, a universally applicable prompt that suits every LLM and task is an unattainable goal. Therefore, this prompting process is labor-intensive and contradicts our aim of enabling agents to act in a self-driven manner. Second, when compared to visual feedback, language often encounters difficulties in intuitively conveying specific world concepts (e.g., recipes) to users, as illustrated in Figure 1 (b), thereby unavoidably creating obstacles for robust human-computer/AI interaction (Preece et al., 1994; Fallman, 2003).
Unlike LLMs, humans possess an innate ability to process and generate information through both visual and text channels. This inherent gift significantly enhances our capability to interact with the world. However, the coupling of LLM-based agents with multimodal I/O has been relatively underexplored in an open-ended environment. To fill this gap, we introduce **Steve-Eye**, an end-to-end trained large multimodal model that integrates an LLM with a visual encoder, enabling the agent to process visual-text inputs and to generate multimodal feedback.
## 2 Related Work
### Open-world Embodied Agents with LLMs
The rapid progress of large language models (Brown et al., 2020; Raffel et al., 2020; Zhang et al., 2022; Chowdhery et al., 2022) has significantly boosted their capacity to encode a wide range of human behaviors within training data (Bommasani et al., 2021). When equipped with narrowly designed prompts, LLM-based agents exhibit the capability to generate executable plans for tasks such as indoor robot manipulation. For instance, SayCan (Ahn et al., 2022) integrates skill affordances with LLMs to yield actionable plans, while Palm-E (Driess et al., 2023) takes a step further by constructing hierarchical agents capable of handling multimodal prompts. This approach has also proven its efficacy in open-world environments (Huang et al., 2022; Li et al., 2022). In contrast to robot manipulation, agents in the wild require a heightened level of real-time situational awareness and foundational knowledge to execute intricate skill plans across a diverse array of tasks. To simulate human behaviors in such open worlds, Generative Agents (Park et al., 2023) store agents' experiences and retrieve these memories to generate plans in a text-based sandbox game.
In recent years, the 3D sandbox Minecraft has received considerable attention owing to its remarkably flexible game mechanics to serve as a prominent open-world benchmark (e.g., MineRL (Guss et al., 2019) and Minedojo (Fan et al., 2022)). DEPS (Wang et al., 2023) introduces the descriptor, explainer, and selector for plan generation with the help of LLM. Plan4MC (Yuan et al., 2023) constructs a skill graph and proposes a skill search algorithm to minimize planning errors. Voyager (Wang et al., 2023) proposes an LLM-powered lifelong learning agent that continually explores the Minecraft world. Similar to (Park et al., 2023), GITM (Zhu et al., 2023) integrates LLMs with text-based memory and knowledge to create generic agents in Minecraft. Among these studies, Voyager (Wang et al., 2023) and GITM (Zhu et al., 2023) lean entirely on text descriptions of the environment to act and plan, while Plan4MC (Yuan et al., 2023) and DEPS (Wang et al., 2023) have visual-input skills but still rely on merely text for planning. None of them try to understand the rich visual observation provided natively by Minecraft. In contrast to these works, our work trains a large multimodal model to fill this gap.
### Large Multimodal Models (LMMs)
In comparison to LLMs, large multimodal models (LMMs) (Awadalla et al., 2023) encompass a broad range of information beyond text modality, which can be categorized into two primary streams. The first category (Gupta and Kembhavi, 2023; Huang et al., 2023; Pati et al., 2023; Suris et al., 2023) involves hinging on ChatGPT (OpenAI, 2022) or GPT-4 (OpenAI, 2023) to generate in-context responses without parameter tuning. However, these approaches heavily rely on the availability of an LLM's API and the quality of the designed prompts. The second category comprises end-to-end pre-trained models. Within this category, models such as Huang et al. (2023); Peng et al. (2023) are trained entirely from scratch. Conversely, some research explores efficient fine-tuning using pre-trained LLMs by incorporating lightweight modality encoders, such as Qformer (Li et al., 2023) or Perceiver (Alayrac et al., 2022). Recently, Liu et al. (2023) propose to explicitly instruction-tune a LLM using vision-language instruction data.
In this work, we propose Steve-Eye by building upon pre-trained LLMs, aiming to develop an open-world agent powered by a large-scale model with versatile multimodal I/O capabilities.
## 3 Methodology
In this section, we first provide our instruction-following dataset to develop three key functions for the agent's open-world interaction in Section 3.1. We then propose our large multimodal agent Steve-Eye in Section 3.2, and clarify details of the training procedure in Section 3.3. We adopt Minecraft as our open-ended platform in this paper to collect data and validate the model, and anticipate exploring a broader range of environments for Steve-Eye in future studies.
To empower an agent with the self-driven capacity to act and plan in an open world, we posit that the following embodied functions are indispensable: (1) multimodal perception function which offers a detailed description of the agent status and environmental features; (2) foundational knowledge
base which imparts an understanding of how the world works and conveys crucial basic knowledge related to skills and tasks; (3) skill prediction and planning which is responsible for generating skill execution feedback (e.g., success or failure) and crafting high-level skill plans for handling more complex and long-horizon tasks. We develop these functions by building the corresponding instruction dataset to pre-train Steve-Eye as follows.
### Open-World Instruction-Following Dataset
**Multimodal Perception Instructions.** Human players can perform actions in Minecraft mainly relying on their visual perception, without any prior hints or imposed game judgments. In order to endow Steve-Eye with the same ability, it is required to provide it with comprehensive visual descriptions of the environment. To achieve this, we use MineDojo (Fan et al., 2022) to obtain Minecraft snapshots which contain a wide array of details within the agent's surroundings, including environmental features, the agent's life and food status, inventory items, and equipment, as illustrated in Figure 2. In addition, we leverage MaskCLIP (Zhou et al., 2022) to identify the in-sight objects of these snapshots without supervised annotations. During our data collection process, for each snapshot \(\mathcal{I}\) and its corresponding description \(\mathcal{X}_{C}\), we initiate a three-step approach. Firstly, we prompt ChatGPT to curate a list of 40 instructions as shown in Figure 6 in Appendix A.1. Then we enrich the snapshot details into a dense caption describing its content, with the assistance of ChatGPT. Finally, we select an instruction \(\mathcal{X}_{Q}\) randomly from the list and combine it with the snapshot's caption to create a single-round multimodal description pair (e.g., "### Human: \(\mathcal{X}_{Q}\)\(\mathcal{I}\)\(\backslash\)n ### Embodied Agent: \(\mathcal{X}_{C}\)\(\backslash\)n"). By doing so, we collect 200K instructional pairs for multimodal perception learning.
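For illustration only, the snippet below assembles one such single-round pair in the template just described; the instruction list, the caption text, and the image placeholder are hypothetical stand-ins for the ChatGPT-curated prompts and the MineDojo snapshot tokens that are inserted at training time.

```python
import random

# Hypothetical instruction prompts; the real 40-item list is curated with ChatGPT.
INSTRUCTIONS = [
    "Describe your current surroundings and status.",
    "What items do you hold and what do you see around you?",
]

def build_perception_pair(snapshot_path: str, dense_caption: str) -> str:
    """Assemble one single-round instruction pair following the template above."""
    question = random.choice(INSTRUCTIONS)
    # The <image:...> tag stands in for the tokenized snapshot inserted at training time.
    return (f"### Human: {question} <image:{snapshot_path}>\n"
            f"### Embodied Agent: {dense_caption}\n")

if __name__ == "__main__":
    caption = ("Life 20/20, food 18/20; inventory: 3 logs, 1 wooden pickaxe; "
               "in sight: oak trees and a river; the sky is visible.")
    print(build_perception_pair("snapshot_000.png", caption))
```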
**Foundational Knowledge Instructions.** Embodied agents require a foundation of essential knowledge to facilitate action-taking and skill planning. In Minecraft, such knowledge should contain item recipes, details of item attributes, their associated numerical value, etc. We access this vital information from Minecraft-Wiki (Fandom, 2023), which comprises an extensive collection of over 9,000 HTML pages. To be specific, we first obtain all item icons from Minecraft-Wiki and generate 200K icon inventory images, as illustrated in Figure 3 (a). Each icon image corresponds to a 4-row table with an associated caption adhering to a standardized template: "There is a Minecraft inventory with 4 rows. From left to right, they are...". As shown in Figure 7 in Appendix A.1.2, we curate a set of 20 distinct prompts designed to challenge the model's ability to recognize items. Subsequently, we further collect all recipe-related information from the Wiki as illustrated in Figure 3 (b), and design similar prompt templates to formulate 10,000 recipe-image instructional pairs. Lastly, we process the Wiki and utilize this corpus to produce 40,000 single-round question-answer pairs. In total, we collect a high-quality dataset with 250K foundational knowledge instructions.
**Skill-related Interaction Instructions.** The environmental description and foundational knowledge serve as prerequisites for an agent's interaction within the open world. However, a successful interaction requires more than these elements alone. It relies upon the mastery of basic skills, such as log harvesting and food preparation, as well as high-level skill planning abilities to tackle complex, long-horizon tasks, such as crafting an iron pickaxe. To facilitate this, we gather corresponding training data for skill prediction and planning, which enables our model to provide correct feedback on both basic skills and long-horizon tasks across a spectrum of agent or environmental conditions. Specifically, the data collection process involves two steps. First, we sample skill trajectories based on the pre-trained basic skill policies and collect 200K snapshot pairs with corresponding statuses from these trajectories. Each snapshot pair \(\{\mathcal{I}_{0},\mathcal{I}_{t}\}\) denotes the 0-th and t-th timestamp of the skill trajectory. Next, we employ ChatGPT to generate question-answer pairs about diverse aspects of skill execution status.
Figure 3: Icons and recipes
Figure 2: Multimodal perception
These questions delve into whether the agent completes the skill, encounters unexpected failures, or seeks explanations for such failures. More details can be found in Appendix A.1.3. Second, we sample 40K task trajectories using the planner in Yuan et al. (2023), each of which can be denoted as \(\mathcal{T}=\{s_{1},s_{2},...s_{\mathrm{T}}\}\), representing that the task is finished via a \(\mathrm{T}\)-round planning procedure, where \(s_{i}\) is the skill plan for the \(i\)-th round. At each round \(i\), we feed our model with its start snapshot and task initialization, and curate instructional questions to inquire about \(s_{i}\) with a reasonable explanation. In this manner, we obtain 200K instructional pairs from task trajectories.
### Model Architecture
Figure 4 illustrates the overall architecture of our proposed model. Steve-Eye, functioning as a generative model, connects an image-oriented tokenizer \(f_{v}\) with the pre-trained LLM backbone \(\Theta\). We adopt the image tokenizer, e.g., VQ-GAN (Esser et al., 2021), to encode the raw images \(\mathcal{I}\) into token embeddings \(\mathcal{V}=\{v_{1},v_{2},...,v_{n}\}\in\mathbb{R}^{n\times d}\), where \(n\) denotes the number of visual tokens and \(d\) is the dimensionality of each token. We further utilize a lightweight projection module \(f_{l}\) with a trainable projection matrix \(W\). This module maps the visual tokens to the same space with text embeddings, yielding \(\hat{\mathcal{V}}=\{\hat{v}_{1},\hat{v}_{2},...,\hat{v}_{n}\}\in\mathbb{R}^{n \times d}\):
\[\hat{\mathcal{V}}=W\mathcal{V};\text{ where }\mathcal{V}=f_{v}(I). \tag{1}\]
To effectively process visual-language inputs and generate corresponding outputs, our model integrates the visual codebook \(\mathcal{C}_{v}\) into the pre-existing language vocabulary \(\mathcal{C}_{l}\). This integration leads to the formation of a unified multimodal codebook, denoted as \(\mathcal{C}_{m}=\mathcal{C}_{v}\cup\mathcal{C}_{l}\). Additionally, in order to mark the starting and ending points of visual elements in I/O sequences, we introduce two special tokens, namely \(<\)vis\(>\) and \(<\)/vis\(>\). The LLM backbone \(\Theta\) of our Steve-Eye is built upon a decoder-only architecture with causal transformers. Our model employs an auto-regressive prediction mechanism, generating responses based on the provided multimodal input tokens. The resulting response is a mixed sequence of visual and textual tokens, represented as \(\mathcal{Y}=\{y_{1},y_{2},...,y_{m}\}\). For each embedding \(y_{i}\), we pass it through a linear layer \(f_{p}\) followed by a softmax operation, mapping it into a probability distribution of the multimodal vocabulary. The final prediction for the \(i\)-th token \(z_{i}\) is determined by selecting the token from the multimodal codebook with the highest score:
\[z_{i}=\arg\max(\text{softmax}(f_{p}(y_{i}))). \tag{2}\]
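A minimal PyTorch sketch of the two pieces formalized in Eqs. (1) and (2) is given below: a trainable linear projection that maps image-tokenizer embeddings into the LLM embedding space, and an output head over the unified multimodal codebook. The dimensions, module names, and random inputs are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

# Illustrative sizes; the paper reports an 8192-entry visual codebook, a 32000-entry
# language vocabulary, and two special tokens <vis>/</vis>.
D_MODEL, N_VIS_CODES, N_TEXT_TOKENS, N_SPECIAL = 4096, 8192, 32000, 2
VOCAB = N_TEXT_TOKENS + N_VIS_CODES + N_SPECIAL   # unified multimodal codebook C_m

class VisualProjector(nn.Module):
    """Maps image-tokenizer embeddings V into the LLM embedding space (Eq. 1)."""
    def __init__(self, d_visual: int, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_visual, d_model, bias=False)   # trainable matrix W

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        return self.proj(visual_tokens)                         # V_hat = W V

class MultimodalHead(nn.Module):
    """Maps decoder outputs to the most likely token of the unified codebook (Eq. 2)."""
    def __init__(self, d_model: int, vocab: int):
        super().__init__()
        self.f_p = nn.Linear(d_model, vocab)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return self.f_p(hidden).softmax(dim=-1).argmax(dim=-1)  # z_i

if __name__ == "__main__":
    v = torch.randn(1, 64, 256)                  # 64 visual tokens from a VQ tokenizer
    v_hat = VisualProjector(256, D_MODEL)(v)     # aligned with the text embedding space
    z = MultimodalHead(D_MODEL, VOCAB)(torch.randn(1, 10, D_MODEL))
    print(v_hat.shape, z.shape)                  # (1, 64, 4096) and (1, 10)
```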
### Training
Each instruction-following instance can be formulated as a multi-round conversation \(\{\mathcal{X}^{1}_{Q},\mathcal{X}^{1}_{C},...,\)\(\mathcal{X}^{N}_{Q},\mathcal{X}^{N}_{C}\}\), where each \(\{\mathcal{X}^{i}_{Q},\mathcal{X}^{i}_{C}\}\) represents a question-answer interaction between a human and
Figure 4: Illustration of Steve-Eye: a large multimodal model designed to seamlessly process both visual and language inputs. Steve-Eye excels in acquiring fundamental knowledge of the world it lives in, understanding the nuances of its surroundings, and generating executable plans to complete a wide array of open-ended tasks. Furthermore, Steve-Eye responds to user instructions through either visual or text-based cues, enhancing the convenience and flexibility of human-AI interaction.
the embodied agent and \(N\) indicates the total number of rounds in the conversation. The entire instructional dataset follows this unified template, as demonstrated in Figure 11 in Appendix A.1.3. To efficiently train our model, we employ the negative log-likelihood objective over the prediction tokens with instruction tuning:
\[\mathcal{L}(\Theta)=-\sum_{j=1}^{L}\log P_{\Theta}(y_{j}|\mathcal{I},\hat{y}_{1 :j-1}), \tag{3}\]
where \(y\) and \(\hat{y}\) respectively denote the input and target token sequences, with \(\Theta\) representing the model parameters, and \(L\) representing the length of the target sequence. The input visual content \(\mathcal{I}\) may represent an empty image depending on the input instruction. It is worth noting that we constrain the loss computation to only consider the answer tokens \(\mathcal{X}_{C}\). This constraint prevents training from becoming excessively straightforward and ensures that the model's primary focus is on learning to precisely generate coherent responses. Similar to Liu et al. (2023), we adopt a two-stage instruction-tuning strategy to train our model:
**Two-Stage Instruction-Tuning.****(1) Multimodal feature alignment**: In the first stage, our primary objective is to align visual features with the language token space. In order to strike a balance between efficient tuning and a comprehensive coverage of the world's concepts, we curate our open-ended instruction dataset to 600K snapshot-text pairs. These pairs are then transformed into instruction-following data as described in Section 3.1. During the feature alignment stage, we maintain the visual encoder and the LLM parameters in a frozen state, exclusively training the projection module. Additionally, this training phase involves fine-tuning token embeddings to accommodate the newly introduced visual codebook and two special tokens \(<\)vis\(>\) and \(<\)/vis\(>\). **(2) End-to-end instruction tuning**: In the second stage, we continue to keep the visual encoder frozen while concurrently training the projection module and LLM. This second stage leverages the entire open-ended instructions and contributes significantly to enhancing the model's capability of comprehending and effectively responding to complex multimodal instructions.
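The sketch below shows, under assumed module names, how the answer-only objective of Eq. (3) and the per-stage freezing described above could be wired up in PyTorch: non-answer positions are masked out of the cross-entropy, the visual encoder stays frozen throughout, and the LLM weights are unfrozen only in the second stage (the fine-tuning of the new token embeddings in stage one is omitted here).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

IGNORE_INDEX = -100   # marks instruction/question tokens excluded from the loss

def answer_only_nll(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of Eq. (3), computed over answer tokens only."""
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),   # (batch * seq, vocab)
        targets.reshape(-1),                   # (batch * seq,)
        ignore_index=IGNORE_INDEX,
    )

def configure_stage(model: nn.Module, stage: int) -> None:
    """Stage 1: train the projector only; Stage 2: also tune the LLM.
    `visual_encoder`, `projector`, and `llm` are assumed attribute names."""
    for p in model.visual_encoder.parameters():
        p.requires_grad = False                # frozen in both stages
    for p in model.projector.parameters():
        p.requires_grad = True                 # trained in both stages
    for p in model.llm.parameters():
        p.requires_grad = (stage == 2)         # LLM tuned only in stage 2

if __name__ == "__main__":
    logits, targets = torch.randn(2, 8, 100), torch.randint(0, 100, (2, 8))
    targets[:, :5] = IGNORE_INDEX              # mask the question part of each round
    print(answer_only_nll(logits, targets).item())
```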
## 4 Experiments
### Experimental Setup
**Implementation Details.** In this paper, we use the LLaMA-2 model (Touvron et al., 2023b) as the LLM backbone. Additionally, we use CLIP (Radford et al., 2021) as our visual encoder to achieve the best performance for non-visual generative tasks, and use VQ-GAN (Esser et al., 2021) as the default visual tokenizer for visual generation. The sizes of the visual codebook \(\mathcal{C}_{v}\) and the language vocabulary are 8192 and 32000, respectively. In addition, we add \(<\)vis\(>\) and \(<\)/vis\(>\) to the final unified codebook, indicating the starting and ending points of visual content. Similar to Liu et al. (2023), we construct 850K instruction-answer pairs for model training. Note that the model is trained to predict the agent's answer, and thus only the answer tokens are used to compute the loss in the auto-regressive model. We also adopt LoRA (Hu et al., 2021) to reduce the computational cost for efficient tuning. We choose MineDojo (Fan et al., 2022) as the Minecraft platform to collect our instruction data and conduct experiments. Following Yuan et al. (2023), we use the environments of programmatic tasks to train basic policies with RL. These policies are trained to execute corresponding skills and are kept fixed in all testing tasks.
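Since LoRA is only mentioned in passing, here is a from-scratch sketch of the low-rank update it applies to a frozen linear layer. This is a generic illustration of Hu et al. (2021), not the authors' actual adapter configuration; the rank, scaling, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update: W x + (alpha / r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # pre-trained weight stays frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)           # update starts at zero
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(4096, 4096), r=8, alpha=16)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable params: {trainable}")          # 2 * 8 * 4096 = 65536
```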
**Evaluation Benchmarks.** We conduct experiments on three benchmarks to evaluate an agent's interaction ability in an open world. **(1) Environmental visual captioning (ENV-VC)**: given a snapshot, the model is asked to describe the agent's current status and environmental features from diverse aspects (e.g., life, food...). We evaluate the prediction accuracy for each aspect by extracting the corresponding answers from the output description and comparing them with the groundtruth. **(2) Foundational knowledge question answering (FK-QA)**: to assess the model's grasp of essential knowledge, we collect a set of 10,000 Minecraft-related questions from different sources, including the Wiki pages, Wiki tables, and Minecraft recipes. The performance is measured by the model's ability to provide correct answers to these questions. **(3) Skill prediction and planning (SPP)**: we utilize our proposed Steve-Eye to predict whether a skill has been successfully completed and assess its capability to generate executable high-level skill plans for long-horizon tasks.
### Environmental Visual Captioning (ENV-VC)
We introduce this evaluation protocol for assessing Steve-Eye's multimodal perception function, which serves as an initial stride toward comprehensive evaluation of large multimodal models. Specifically, we collect 20,000 Minecraft snapshots (named ENV-VC test set) using MineDojo and apply the proposed data generation pipeline to create six questions for each snapshot, resulting in a total of 120K questions. These six questions pertain to the prediction of various aspects, including inventory items, equipment, objects in sight, life, food, and the visibility of the sky.
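Although the exact parser is not spelled out here, the per-aspect accuracy ("extracting corresponding answers from the output description") could, for illustration, be computed by a simple pattern-matching pass over the generated caption, as in the purely hypothetical sketch below; the regular expressions and caption wording are assumptions.

```python
import re

def extract_aspects(caption: str) -> dict:
    """Pull a few aspect values out of a generated caption (illustrative rules only)."""
    life = re.search(r"life\s*(\d+)", caption, re.I)
    food = re.search(r"food\s*(\d+)", caption, re.I)
    sky = re.search(r"sky is (visible|not visible)", caption, re.I)
    return {
        "life": int(life.group(1)) if life else None,
        "food": int(food.group(1)) if food else None,
        "sky_visible": (sky.group(1).lower() == "visible") if sky else None,
    }

def aspect_accuracy(pred: dict, gt: dict) -> float:
    """Fraction of ground-truth aspects the caption got right."""
    keys = [k for k in gt if pred.get(k) is not None]
    return sum(pred[k] == gt[k] for k in keys) / max(len(keys), 1)

if __name__ == "__main__":
    cap = "Life 20/20, food 18/20; inventory: 3 logs; the sky is visible."
    pred = extract_aspects(cap)
    print(pred, aspect_accuracy(pred, {"life": 20, "food": 18, "sky_visible": True}))
```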
During the inference phase, Steve-Eye predicts answers based on these questions and the input snapshot. Experimental results are presented in Table 1 and Table 2. As shown in Table 1, our visual encoder, when combined with multimodal instruction tuning, significantly enables the ability of the text-only language model LLM (Llama-2-7b) to comprehend the contents of the snapshots (Steve-Eye-7b). Notably, Steve-Eye outperforms BLIP-2 by a substantial margin due to the improved reasoning ability enabled by the larger LLM. Furthermore, the visual encoder plays a crucial role in facilitating multimodal understanding. Surprisingly, the model equipped with CLIP (Radford et al., 2021) surpasses the performance of the model using MineCLIP (Fan et al., 2022), achieving over +48.9\(\%\), +21.0\(\%\) and +19.9\(\%\) improvements in inventory, equipment, and object-in-sight predictions, respectively. We attribute this performance difference to the fact that MineCLIP does not prioritize fine-grained alignment during pre-training, despite being exposed to a diverse range of Minecraft videos. In summary, Steve-Eye's ability to comprehend visual cues from its surroundings lays the foundation for subsequent interactions with the world.
To investigate the effectiveness of various types of instructional data for multimodal perception, we carry out experimental comparisons with diverse data configurations in Table 2. First, our results showcase a significant improvement in the model's capacity to respond to instructional questions through instruction tuning, which leads to impressive gains of over +50\(\%\) for inventory, equipment, and object-in-sight prediction. Furthermore, the inclusion of the multimodal perception dataset and icon images in the training data both contribute to a substantial improvement in the model's overall performance. Ultimately, the best results are achieved when combining all available data sources.
### Foundational Knowledge Question Answering (FK-QA)
Following Team (2022), we establish a question database specialized to assess our model's proficiency in generating responses pertaining to fundamental Minecraft knowledge. This evaluation is carried out through a validation dataset known as the FK-QA test set, which is further divided into two distinct subsets: 'TEXT' and IMG. In the FK-QA 'TEXT' subset, we generate a collection of 10,000 question-answer pairs curated from various sources, including the Minecraft-Wiki pages, Minecraft-Wiki tables, and Minecraft recipes. Each category comprises 2,000, 5,000, and 3,000
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline Model & visual encoder & inventory \(\overline{\text{III}}\) & equip \(\hat{\mathcal{P}}\) & object in sight \(\overline{\text{M}}\) & life \(\overline{\text{V}}\) & food \(\overline{\text{V}}\) & sky \(\overline{\text{III}}\) \\ \hline BLIP-2 & CLIP & 41.6 & 58.5 & 64.7 & 88.5 & 87.9 & 57.6 \\ Llama-2-7b & - & - & - & - & - & - & - \\ Steve-Eye-7b & VQ-GAN & 89.9 & 78.3 & 87.4 & 92.1 & 90.2 & 68.5 \\ Steve-Eye-13b & MineCLIP & 44.5 & 61.8 & 72.2 & 89.2 & 88.6 & 68.2 \\ Steve-Eye-13b & VQ-GAN & 91.1 & 79.6 & 89.8 & 92.7 & 90.8 & 72.7 \\ Steve-Eye-13b & CLIP & **92.5** & **82.8** & **92.1** & **93.1** & **91.5** & **73.8** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons of different model settings on the environmental visual caption benchmark. The experiments are conducted on 20K ENV-VC test set.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & inventory \(\overline{\text{III}}\) & equip \(\hat{\mathcal{P}}\) & object in sight \(\overline{\text{M}}\) & life \(\overline{\text{V}}\) & food \(\overline{\text{V}}\) & sky \(\overline{\text{III}}\) \\ \hline no instruction tuning & 22.7 & 24.3 & 39.8 & 81.2 & 80.4 & 61.1 \\ w/o snapshot desc. & 46.2 (+23.5) & 40.9 (+16.6) & 41.2 (+1.4) & 83.0 (+1.8) & 82.4 (+2.0) & 63.3 (+2.1) \\ w/o icon images & 52.3 (+29.6) & 48.1 (+23.8) & 91.4 (+51.6) & 92.5 (+11.3) & 90.9 (+10.5) & 73.5 (+12.4) \\ full data & 92.5 (+69.5) & 82.8 (+58.5) & 92.1 (+52.3) & 93.1 (+11.9) & 91.5 (+11.1) & 73.8 (+12.7) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparisons of different data configurations on the environmental visual captioning benchmark, where “snapshot desc.” denotes the 200K multimodal perception instruction dataset.
pairs, respectively. Upon receiving a response from Steve-Eye, we feed both the generated response and the corresponding groundtruth answer to ChatGPT. ChatGPT will first examine the accuracy of the response as a measure of answer correctness. To minimize variability in error, ChatGPT conducts a further evaluation, considering the response's accuracy, relevance, and level of detail. This comprehensive evaluation yields an overall score on a scale ranging from 0 to 10, where a higher score signifies superior overall performance. In the FK-QA IMG subset, we shift our focus to visual generation by employing 3,000 recipe images as groundtruth data. Here, our model is tasked with generating visual outputs for each item within the recipe inventory, following a specific order. The visual output is considered correct only if every element of the recipe is accurately generated. We adopt this metric to assess our model's ability to produce multimodal feedback.
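As a rough illustration of the ChatGPT-as-judge protocol just described (a correctness check followed by a 0-10 overall score covering accuracy, relevance, and level of detail), a judging prompt could be assembled as below. The wording is an assumption, since the exact prompt is not given in this excerpt, and the call to the ChatGPT API itself is omitted.

```python
def build_judge_prompt(question: str, reference: str, response: str) -> str:
    """Assemble a hypothetical grading prompt for the FK-QA TEXT subset."""
    return (
        "You are grading an answer about Minecraft knowledge.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Model answer: {response}\n"
        "First state whether the model answer is correct (yes/no). Then rate its "
        "accuracy, relevance, and level of detail with one overall score from 0 to 10."
    )

if __name__ == "__main__":
    print(build_judge_prompt(
        "What is needed to craft a stone axe?",
        "Three cobblestones and two sticks on a crafting table.",
        "You need cobblestone and sticks.",
    ))
```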
Table 3 presents both scoring and accuracy results. It is worth noting that Llama-2 exhibits consistent performance regardless of the model's scale: Llama-2-70b only marginally outperforms the 7b version by +1.26\(\%\) in accuracy, while the 13b version performs even worse than the 7b version in the scoring results. We hypothesize that this phenomenon can be attributed to distinct variations in difficulty levels encountered within our FK-QA test set: Llama-2 fails to answer the challenging part correctly regardless of its size, due to missing essential knowledge. In contrast, Steve-Eye outperforms both Llama-2 and gpt-turbo-3.5, despite its considerably smaller scale. Furthermore, our model exhibits a more substantial improvement in responding to Recipe and Wiki Table questions as compared to Wiki Page questions. This disparity can likely be attributed to the fact that Wiki Page contains a large proportion of invalid questions (e.g., version, history), whereas Recipe and Wiki Table predominantly feature knowledge-related inquiries. This result further validates the effectiveness of our approach in acquiring foundational knowledge. Unlike text-only LLMs, our model exhibits a considerable ability to output visual content, achieving \(65.13\%\) accuracy on FK-QA IMG with the 13b version. This multimodal generation ability enables Steve-Eye to better serve as an assistant for users who need one, such as beginners of the game. We show more details and cases in Appendix A.3.
### Skill Prediction and Planning (SPP)
**Skill Prediction.** Similar to Section 3.1, we collect another 20K snapshot pairs in the form of \(\{\mathcal{I}_{0},\mathcal{I}_{t}\}\) from skill trajectories (referred to as Skill-Pred test). These pairs are input into our model to query the current execution status of the skill. The execution status can fall into one of three categories: success, failure, and running, with "running" signifying that the skill is currently in progress. As shown in Table 4, our model exhibits commendable performance in skill status prediction. However, the performance is still far from enough to completely replace the rule-based game judgment adopted by the existing RL-based skill agents. These experiments indicate that, despite the excellent multimodal understanding capabilities of our model in open-world environments in previous experiments, it still falls short in fine-grained reasoning tasks that involve consecutive frames to some extent.
**Skill Planning.** Following Yuan et al. (2023), we carry out evaluation on 24 difficult tasks in Minecraft. These tasks can be categorized into three types: cutting trees to craft primary items (7),
\begin{table}
\begin{tabular}{l c c c} \hline \hline & running (\%) & success (\%) & fail (\%) \\ \hline BLIP-2 & 65.2/58.8 & 49.8/54.3 & 42.1/51.8 \\ Steve-Eye-7b & 89.8/82.5 & 77.6/81.4 & 74.2/79.9 \\ Steve-Eye-13b & 92.1/84.2 & 80.5/83.1 & 76.8/81.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Recall/Accuracy results on Skill-Pred test set for the skill prediction benchmark.
\begin{table}
\begin{tabular}{l|l l l l|l l} \hline \hline & \multicolumn{4}{c}{Scoring} & \multicolumn{2}{c}{Accuracy} \\ \cline{2-7} & Wiki Page & Wiki Table & Recipe & TEXT All & TEXT & IMG \\ \hline Llama-2-7b & 6.90 & 6.21 & 7.10 & 6.62 & 37.01\% & - \\ Llama-2-13b & 6.31 (+0.59) & 6.16 (+0.05) & 6.31 (+0.79) & 6.24 (+0.38) & 37.96\% & - \\ Llama-2-70b & 6.91 (+0.01) & 6.97 (+0.76) & 7.23 (+0.13) & 7.04 (+0.42) & 38.27\% & - \\ gpt-turbo-3.5 & 7.26 (+0.36) & 7.15 (+0.94) & **7.97** (+0.87) & 7.42 (+0.38) & 41.78\% & - \\ Steve-Eye-7b & 7.21 (+0.31) & 7.28 (+1.07) & 7.82 (+0.72) & **7.54** (+0.92) & 43.25\% & 62.83\% \\ Steve-Eye-13b & **7.38** (+0.48) & **7.44** (+1.23) & 7.93 (+0.83) & **7.68** (+1.06) & **44.36\%** & **65.13\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparisons on FK-QA test set of the foundational knowledge question answering benchmark. The evaluation metrics consider both the scoring and accuracy dimensions simultaneously.
mining cobblestones to craft advanced items (7), and interacting with mobs to harvest food and materials (10). Each task is tested for 30 episodes, where an episode refers to a multi-round interaction process. At each round, the model receives the environmental feedback from the last round, plans a skill list based on the current status, and then picks the top skill to execute. For each task episode, we set a maximum step count in the range [3000, 10000]. In our evaluation, we compare Steve-Eye against two baseline approaches: (1) MineAgent (Fan et al., 2022), which completes tasks without decomposing them into basic skills, and uses PPO and self-imitation learning with a CLIP reward, and (2) GPT Assistant, which employs ChatGPT as a high-level planner to generate skill plans by prompting itself with information from the environment and the agent's status. The results in Table 5 demonstrate that Steve-Eye significantly outperforms both baseline methods. Additionally, we conduct experiments in which Steve-Eye takes over the skill prediction function from the rule-based game judgment in Minecraft. This self-driven variant is referred to as 'Steve-Eye-auto.' Since the model's skill prediction is not always 100% accurate, Steve-Eye-auto does experience some performance degradation when compared to Steve-Eye. This degradation is more pronounced in longer, more complex tasks than in short-term tasks. Nevertheless, Steve-Eye-auto still demonstrates significant performance improvements in most tasks, compared to the baselines. For additional details about this benchmark, please refer to Appendix A.2.
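The multi-round episode protocol described above (receive feedback, plan a skill list, execute the top skill, stop at success or at the step budget) can be summarized by the following sketch; `plan_fn` and `execute_fn` are hypothetical stand-ins for Steve-Eye's planner and the fixed pre-trained skill policies, and the skill names are invented for illustration.

```python
from typing import Callable, Dict, List

def run_episode(plan_fn: Callable[[str, Dict], List[str]],
                execute_fn: Callable[[str], Dict],
                task: str, max_steps: int = 3000) -> bool:
    """One evaluation episode of the skill-planning benchmark (illustrative only)."""
    feedback: Dict = {"steps_used": 0, "done": False}
    while feedback["steps_used"] < max_steps and not feedback["done"]:
        skill_plan = plan_fn(task, feedback)      # e.g. ["harvest_log", "craft_planks"]
        if not skill_plan:
            break
        feedback = execute_fn(skill_plan[0])      # run the top-ranked skill
    return feedback["done"]

if __name__ == "__main__":
    # Dummy planner and executor so the sketch runs end to end
    def plan_fn(task, fb):
        return ["harvest_log"] if not fb["done"] else []
    def execute_fn(skill):
        return {"steps_used": 500, "done": skill == "harvest_log"}
    print(run_episode(plan_fn, execute_fn, task="craft stone axe"))
```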
For better visualization, we provide a qualitative example of Steve-Eye completing the task "crafting stone axe with wooden pickaxe" as shown in Figure 5.
## 5 Conclusion
In this paper, we explore enabling a large multimodal model to serve as a generative embodied agent in open worlds. We achieve this goal by proposing Steve-Eye, which combines the text-only language model with a visual encoder, allowing for a multimodal I/O interface to interact with the
Figure 5: Snapshots of a qualitative example, illustrating how Steve-Eye completes the task of “crafting a stone axe with a wooden pickaxe.” Our model generates a skill plan at each interaction round and selects the top skill from the plan list for execution.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c} \hline \hline Model & ✓ & & & & & & & & & & & & & & & & & & & & \\ \hline MineAgent & 0.00 & 0.03 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.21 & 0.0 & 0.05 & 0.0 \\ gpt assistant & 0.30 & 0.17 & 0.07 & 0.00 & 0.03 & 0.00 & 0.20 & 0.00 & 0.20 & 0.03 & 0.13 & 0.00 & 0.10 & 0.00 \\ Steve-Eye-auto & 0.30 & 0.27 & 0.37 & 0.23 & 0.20 & 0.17 & 0.26 & 0.07 & 0.13 & 0.17 & 0.20 & 0.33 & 0.00 & 0.13 \\ Steve-Eye & **0.40** & **0.30** & **0.43** & **0.53** & **0.33** & **0.37** & **0.43** & **0.30** & **0.43** & **0.47** & **0.40** & **0.13** & **0.23** \\ \hline \hline Model & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline MineAgent & 0.46 & 0.50 & 0.33 & 0.35 & 0.0 & 0.0 & 0.0 & 0.06 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ gpt assistant & 0.57 & 0.76 & 0.43 & 0.30 & 0.00 & 0.00 & 0.37 & 0.00 & 0.03 & 0.00 & 0.00 \\ Steve-Eye-auto & 0.70 & 0.63 & 0.40 & 0.30 & 0.17 & 0 & 0.37 & 0.03 & 0.07 & 0.00 \\ Steve-Eye & **0.73** & 0.67 & **0.47** & 0.33 & **0.23** & **0.07** & **0.43** & **0.10** & **0.17** & **0.07** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparisons on the skill planning benchmark. We test the mean success rates of all tasks, where each task is executed for 30 episodes using the same seeds for initialization.
environment. With the help of ChatGPT, we curate questions to generate 850K instruction-following pairs that facilitate the agent's multimodal perception function, foundational knowledge mastery, and capability of skill prediction and planning. Experiments on three open-world benchmarks verify the advantages of our Steve-Eye from a wide range of perspectives.
|
2308.10516
|
Single laser pulse induced magnetization switching in in-plane
magnetized GdCo alloys
|
The discovery of all-optical ultra-fast deterministic magnetization switching
has opened up new possibilities for manipulating magnetization in devices using
femtosecond laser pulses. Previous studies on single pulse all-optical
helicity-independent switching (AO-HIS) have mainly focused on perpendicularly
magnetized thin films. This work presents a comprehensive study on AO-HIS for
in-plane magnetized GdxCo100-x thin films. Deterministic single femtosecond
laser pulse toggle magnetization switching is demonstrated in a wider
concentration range (x=10% to 25%) compared to the perpendicularly magnetized
counterparts with GdCo thicknesses up to 30 nm. The switching time strongly
depends on the GdxCo100-x concentration, with lower Gd concentration exhibiting
shorter switching times (less than 500 fs). Our findings in this geometry
provide insights into the underlying mechanisms governing single pulse AO-HIS,
which challenge existing theoretical predictions. Moreover, in-plane magnetized
GdxCo100-x thin films offer extended potential for opto-spintronic applications
compared to their perpendicular magnetized counterparts.
|
Jun-Xiao Lin, Michel Hehn, Thomas Hauet, Yi Peng, Junta Igarashi, Yann Le Guen, Quentin Remy, Jon Gorchon, Gregory Malinowski, Stéphane Mangin, Julius Hohlfeld
|
2023-08-21T07:08:18Z
|
http://arxiv.org/abs/2308.10516v1
|
# Single laser pulse induced magnetization switching in in-plane magnetized GdCo alloys
###### Abstract
The discovery of all-optical ultra-fast deterministic magnetization switching has opened up new possibilities for manipulating magnetization in devices using femtosecond laser pulses. Previous studies on single pulse all-optical helicity-independent switching (AO-HIS) have mainly focused on perpendicularly magnetized thin films. This work presents a comprehensive study on AO-HIS for in-plane magnetized Gd\({}_{x}\)Co\({}_{100\text{-}x}\) thin films. Deterministic single femtosecond laser pulse toggle magnetization switching is demonstrated in a wider concentration range (x=10% to 25%) compared to the perpendicularly magnetized counterparts with GdCo thicknesses up to 30 nm. The switching time strongly depends on the Gd\({}_{x}\)Co\({}_{100\text{-}x}\) concentration, with lower Gd concentration exhibiting shorter switching times (less than 500 fs). Our findings in this geometry provide insights into the underlying mechanisms governing single pulse AO-HIS, which challenge existing theoretical predictions. Moreover, in-plane magnetized Gd\({}_{x}\)Co\({}_{100\text{-}x}\) thin films offer extended potential for opto-spintronic applications compared to their perpendicular magnetized counterparts.
**Keywords:** in-plane magnetized thin film, ultrafast optics, single laser pulse magnetization reversal, Gd-based alloys, opto-spintronics
**I. INTRODUCTION**
The advancement of magnetic data storage, memories, and logic necessitates a fast and energy-efficient method for manipulating magnetization in thin magnetic media and heterostructures like magnetic tunnel junctions and spin valves. While significant progress has been made in current-induced magnetic switching over the past 25 years, the typical time required for this process is still orders of magnitude slower than optically induced magnetization manipulation [1-4]. In 2012, Ostler _et al._ achieved field-free ultrafast magnetization reversal by irradiating a femtosecond laser pulse onto a ferrimagnetic GdFeCo alloy [5]. This breakthrough paved the way for All-Optical Helicity-Independent Switching (AO-HIS), although its underlying mechanism remains a topic of debate [6-11]. The reversal mechanism is primarily attributed to a pure ultrafast thermal effect on magnetization, enabled by ultrafast demagnetization and subsequent angular momentum exchange between rare-earth and transition-metal sublattices on sub-picosecond and picosecond timescales [5,10-16]. This enables reliable writing of magnetic bits at GHz frequencies [17,18].
The AO-HIS has been mainly reported in perpendicularly magnetized Gd-based alloys or multilayers [5-23]. GdFeCo and GdCo alloys have been extensively studied. They are ferrimagnetic alloys for which the magnetization of the Gd sublattice (M\({}_{\rm Gd}\)) is exchange-coupled antiferromagnetically to the magnetization of the Transition Metal (Fe, Co) sublattice (M\({}_{\rm Co}\)). The resulting magnetization depends on the alloy concentration and temperature. Due to the antiferromagnetic coupling, the net magnetization is zero at the so-called compensation composition (x\({}_{\rm comp}\)) which depends on temperature. It has been shown that AO-HIS, measured at room temperature, occurs in Gd\({}_{\rm x}\)(FeCo)\({}_{\rm 100\mathchar 45}\) only when x is close to x\({}_{\rm comp}\) (at room temperature) within a few percent [10,11,19-22]. Theoretical models tend to confirm that only alloys with a concentration close to the x\({}_{\rm comp}\) could exhibit AO-HIS [21]. In 2015, Atxitia _et al._ used an atomistic stochastic Landau-Lifshitz-Gilbert equation for semi-classical spins, described by a Heisenberg Hamiltonian, to model AO-HIS in rare earth-transition metal ferrimagnetic alloys and concluded that a low net magnetization is an important ingredient for an energy-efficient AO-HIS [24]. The same conclusions were drawn a few years later by Jakobs _et al._[11] by using an atomistic model and the so-called two-temperature model, and by Davies _et al._[22] who used a phenomenological framework showing theoretical agreement with experimental results.
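As a back-of-the-envelope illustration of why the net magnetization vanishes at a compensation composition, the sketch below combines two antiparallel sublattices with assumed low-temperature atomic moments (about 7.6 \(\mu_{\text{B}}\) for Gd and 1.7 \(\mu_{\text{B}}\) for Co); it ignores the temperature dependence of the sublattice magnetizations that sets the room-temperature x\({}_{\text{comp}}\)\(\approx\)20% reported below, so the numbers are only indicative.

```python
import numpy as np

# Assumed (zero-temperature) atomic moments in Bohr magnetons, for illustration only.
MU_GD, MU_CO = 7.6, 1.7

def net_moment(x_gd_percent: float) -> float:
    """Net moment per atom of Gd_x Co_(100-x) with antiparallel sublattices."""
    x = x_gd_percent / 100.0
    return x * MU_GD - (1.0 - x) * MU_CO    # > 0: Gd-dominant, < 0: Co-dominant

if __name__ == "__main__":
    xs = np.linspace(5.0, 35.0, 61)
    m = np.array([net_moment(x) for x in xs])
    x_comp = xs[np.argmin(np.abs(m))]
    print(f"Illustrative compensation composition: x ~ {x_comp:.1f}% Gd")
```

With these assumed moments the estimate lands near x \(\approx\) 18%, in rough agreement with the room-temperature compensation around 20% found experimentally in this work.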
With the exception of a single experimental study conducted on in-plane magnetized Gd\({}_{25}\)(FeCo)\({}_{75}\) microstructures with a specific Gd concentration [5], all previous experimental results in this field have been obtained using samples exhibiting strong perpendicular magnetic anisotropy (PMA) [5-22]. Undeniably, PMA samples offer technical advantages for performing experiments, as the pump and probe beams can be directed perpendicular to the sample's surface. However, these samples may face challenges related to the stabilization of maze domain structures, which can restrict the observation of AO-HIS to cases where the net magnetization and the thickness of the layers are low [20,23-26]. In cases where the magnetization is high and the alloy concentration is far from x\({}_{\text{comp}}\), the magnetic configuration tends to break into domains after laser excitation to minimize dipolar (demagnetization) energy [25]. Consequently, one can question whether a concentration close to compensation (i.e., low magnetization) is an intrinsic and mandatory requirement for achieving AO-HIS or whether it is an extrinsic effect driven by domain structure stabilization. In contrast to PMA systems, in-plane magnetized thin films are not expected to experience a strong dipolar field that induces a multidomain state. Therefore, studying these films can help address uncertainties regarding intrinsic and extrinsic effects.
This manuscript presents a comprehensive investigation into the manipulation of magnetization using a single femtosecond laser pulse in in-plane magnetized Gd\({}_{x}\)Co\({}_{100\text{-}x}\) thin films. Unlike perpendicularly magnetized Gd\({}_{x}\)(FeCo)\({}_{100\text{-}x}\) films, we were able to successfully demonstrate All-Optical Helicity-Independent Switching (AO-HIS) over a wide range of concentrations (5% \(<\) x \(<\) 30%) and thicknesses (5 to 30 nm). This observation challenges existing theoretical predictions, which suggested a narrow concentration range around compensation for AO-HIS. By analyzing the laser-induced effects, we identified three distinct threshold fluences that play a crucial role in observing AO-HIS: the fluence required for magnetization switching (F\({}^{\text{th}}\)\({}_{\text{Sw}}\)), the fluence for demagnetization (F\({}^{\text{th}}\)\({}_{\text{Dem}}\)), above which multidomain patterns are created due to excessive heating of the sample, and the fluence that causes irreversible changes or damage to the magnetic properties (F\({}^{\text{th}}\)\({}_{\text{Dam}}\)). F\({}^{\text{th}}\)\({}_{\text{Sw}}\) exhibited a minimum at compensation, while F\({}^{\text{th}}\)\({}_{\text{Dem}}\) increased with the sample's Curie temperature. The magnetization dynamics during the reversal process resembled those observed in perpendicularly magnetized films, with the fastest switching occurring for the lowest Gd concentration.
## II. Experimental Results
### A. The magnetic properties of in-plane magnetized Gd\({}_{x}\)Co\({}_{100\text{-x}}\) thin films
Gd\({}_{x}\)Co\({}_{100\text{-x}}\) ferrimagnetic thin films consisting of Glass/Ta (3 nm)/Gd\({}_{x}\)Co\({}_{100\text{-x}}\) (t nm)/Cu (1 nm)/Pt (3 nm) were prepared with a wide range of Gd concentration, x, varying from 5% to 35%, and a wide range of Gd\({}_{x}\)Co\({}_{100\text{-x}}\) thickness, t, varying from 5 to 35 nm. In all cases, the Gd\({}_{x}\)Co\({}_{100\text{-x}}\) interfaces have been designed to keep an easy uniaxial anisotropy axis in-plane for any x and t values. Fig. 1(b) summarizes the evolution of the magnetization and coercivity of the sample as a function of the concentration for t=5 nm. As expected, the coercivity diverges, and the net magnetization reaches zero around x=20%, corresponding to the compensation composition (x\({}_{\text{comp}}\)) at room temperature. M\({}_{\text{Co}}\) is dominant over M\({}_{\text{Gd}}\) when x\(<\)20%, while M\({}_{\text{Gd}}\) becomes dominant as x\(>\)20%. As shown in Supplemental Material 1, all hysteresis loops measured with an in-plane field have a remanence close to one, revealing the existence of a well-defined in-plane magnetic anisotropy axis in all Gd\({}_{x}\)Co\({}_{100\text{-x}}\) films. This is also confirmed by the angle-dependent remanence measurements shown in Supplemental Material 9. Moreover, the out-of-plane magnetic field-dependent saturation magnetization measurements for all the samples are provided in Supplemental Material 2, indicating that the hard axis is mainly perpendicular to the sample surface. The evolution of Kerr microscope images as a function of an in-plane magnetic field applied along the easy axis tends to show that the domain size and the domain growth by domain wall motion are similar for all GdCo concentrations (Supplemental Material 3).
Figure 1: (a) Schematic of the static single laser pulse and time-resolved measurements based on
longitudinal magneto-optic Kerr effect (MOKE). The linearly polarized pump laser pulse (800 nm, 150 fs) is shined perpendicular to the film plane, whereas the linearly p-polarized probe beam (515 nm) has an angle of incidence of 45\({}^{\circ}\). The reflected optical probe beam is sent to a camera for MOKE images. The sample is magnetized in-plane using an in-plane external magnetic field before the pump laser pulse is applied. The red three-dimensional arrows represent the Co magnetic moments. Experiments were carried out at room temperature. (b) Variation of the coercive field \(\rm{H_{C}}\) (solid orange symbols) and the saturation magnetization \(\rm{M_{S}}\) (open purple symbols) as a function of Gd content x in Glass/Ta (3 nm)/Gd\({}_{x}\)Co\({}_{100\text{-x}}\) (5 nm)/Cu (1 nm)/Pt (3 nm).
### B. Single pulse all-optical helicity-independent switching for in-plane magnetized Gd\({}_{x}\)Co\({}_{100\text{-x}}\) alloys
A sketch of the experimental setup is shown in Fig. 1(a). It is based on a standard longitudinal MOKE configuration that allows tracking the in-plane magnetization changes after shining linearly polarized femtosecond laser pulses. One important parameter, the pulse length, has been fixed to 150 fs, and further details are given in the experimental section. Fig. 2(a) demonstrates that for a Gd\({}_{15}\)Co\({}_{85}\) in-plane magnetized thin film, it is possible to observe a deterministic single pulse all-optical helicity-independent switching (AO-HIS), independently of the initial direction of magnetization. First, two domains with opposite directions along the in-plane easy axis were created using an in-plane external magnetic field; then, laser pulses were shined at three different positions under zero applied field. As a result, one can clearly observe that a full magnetization switching is observed for the two initial magnetization directions. Moreover, we confirmed that a full switching is indeed observed since the contrast variations between the magnetic field-induced and the laser-induced switching are the same.
The laser-induced magnetization switching obtained for various Gd\({}_{x}\)Co\({}_{100\text{-x}}\) concentrations is shown in Figures 2(b) and (c). They present the MOKE images obtained after shining 0, 1, 2, and 3 pulses at the same position for a series of 5 nm Gd\({}_{x}\)Co\({}_{100\text{-x}}\) alloy samples. AO-HIS is demonstrated for x ranging from 10% to 25%. In this case, a fully reversed domain appears after the first pulse, completely vanishes after the second pulse, and fully re-appears after the third one indicating a perfect toggle-switching behavior. In addition, a multidomain state is
formed in the center region of the spot when the laser fluence is large enough (Supplemental Material 4). Perfect toggle switching for in-plane magnetized GdCo films is observed even after 1000 pulses, as shown in Supplemental Material 5, demonstrating the endurance of AO-HIS in such films potentially for technological applications. To make sure that dipolar fields are not affecting the switching, AO-HIS has been demonstrated when shining laser pulses on the boundary between two domains, as shown in Supplemental Material 6. The effect of single laser pulses was also carried out for samples with excessive content of Co (i.e., x=5%) and Gd (i.e., x=30%). For x=5%, neither a typical round-shaped switching pattern nor a multidomain state was observed before the degradation of the sample, as shown in Supplemental Material 7. For x=30%, either no switching or a multidomain state was obtained (Supplemental Material 8). Nevertheless, as shown for heat-assisted magnetic recording, those disordered patterns can be removed, and magnetization can be reversed by applying a tiny external field along the direction opposite to the initial magnetization direction [27, 28]. Supplemental Material 9 compares the effect of light when the sample is saturated along the easy and in-plane hard axes for Gd\({}_{15}\)Co\({}_{85}\). Along the easy axis, as reported earlier, full switching is demonstrated. However, when the sample is first saturated along the in-plane hard axis, and then the field is removed, the remanent state is multidomain; consequently, no single-domain all-optical switching could be observed; however, partial light-induced switching is clearly demonstrated.
To summarize AO-HIS in GdCo in-plane magnetized samples, Fig. 2(d) shows the threshold fluence needed to observe switching (F\({}^{\mathrm{th}}\)\({}_{\mathrm{Sw}}\)) and the threshold fluence needed to demagnetize the sample (F\({}^{\mathrm{th}}\)\({}_{\mathrm{Dem}}\); i.e., the multidomain state is formed in the irradiated area when the pump pulse fluence is too high, as shown in Fig. S4(c)) as a function of the Gd concentration. This diagram clearly demonstrates that the window of Gd concentration (\(\Delta\)x) showing AO-HIS is much larger for in-plane magnetized GdCo alloys (\(\Delta\)x\(\sim\)20%) compared to perpendicular magnetic anisotropy (PMA) GdFeCo alloys (\(\Delta\)x\(\sim\)5%) [10, 19, 20] and pure PMA GdCo alloys (\(\Delta\)x\(\sim\)9%) [21, 29]. The diagram also shows that a minimum in F\({}^{\mathrm{th}}\)\({}_{\mathrm{Sw}}\) can be observed close to x\({}_{\mathrm{comp}}\), whereas F\({}^{\mathrm{th}}\)\({}_{\mathrm{Dem}}\) decreases monotonically with increasing Gd concentration, following the Curie temperature (Tc) (Supplemental Material 10). Note that a third threshold fluence (F\({}^{\mathrm{th}}\)\({}_{\mathrm{Dam}}\)) should be defined as the fluence that damages the sample such that it cannot recover its initial magnetic state. The absence of AO-HIS for x=5% and x=30% then stems from two different reasons. As we will discuss later, no AO-HIS is observed because F\({}^{\mathrm{th}}\)\({}_{\mathrm{Sw}}\)\(>\) F\({}^{\mathrm{th}}\)\({}_{\mathrm{Dam}}\)
for x= 5%, and because F\({}^{\rm th}\)\({}_{\rm sw}\)\(>\) F\({}^{\rm th}\)\({}_{\rm Dem}\) for x=30%. For the PMA Gd\({}_{\rm x}\)(FeCo)\({}_{\rm 100\)-x alloys, it has been shown that the laser pump energy required to produce AO-HIS is minimum around x\({}_{\rm comp}\)[19; 21; 30], which is in agreement with in-plane magnetized GdCo alloys, hinting that in-plane magnetized GdCo alloys should share the same switching mechanism as perpendicularly magnetized GdCo alloys.
Figure 2: **AO-HIS in 5 nm thick Gd\({}_{x}\)Co\({}_{100-x}\) alloys** (a) Kerr images and normalized contrast cross-section after three single laser shots inducing AO-HIS for a Gd\({}_{15}\)Co\({}_{85}\) thin film starting from a two-domain magnetic state of opposite directions along the easy axis. The blue and red arrows indicate the Co sublattice’s magnetization direction. Magneto-optic contrast obtained after a single 150 fs laser pump pulse on various in-plane magnetized Gd\({}_{x}\)Co\({}_{100-x}\) alloys: (b) for Co-dominant samples and (c) for Gd-dominant samples. For each measurement, the laser pulses were shined at the same position. A scale bar of 100 μm is presented. (d) The switching threshold (F\({}^{\rm th}\)\({}_{\rm Sw}\)) and demagnetization threshold (F\({}^{\rm th}\)\({}_{\rm Dem}\)) fluences as a function of Gd concentration. Note that the grey open triangle indicates the threshold to permanently damage the sample (F\({}^{\rm th}\)\({}_{\rm Dam}\)), and the red dashed line guides the eye for the F\({}^{\rm th}\)\({}_{\rm Dem}\).
**C. Thickness-dependent single pulse all-optical helicity-independent switching for in-plane magnetized GdCo alloys**
After investigating the influence of the GdCo alloy concentration on AO-HIS, we now study its thickness dependence. For this study, we fixed the Gd concentration at x=25% (Gd\({}_{25}\)Co\({}_{75}\)). The samples’ normalized longitudinal MOKE hysteresis loops for various thicknesses are shown in Supplemental Material 11. Square hysteresis loops are observed along the in-plane easy axis for all thicknesses ranging from 5 to 35 nm, demonstrating a strong in-plane easy-axis anisotropy over this whole thickness range. Fig. 3(a) demonstrates deterministic toggle AO-HIS for thicknesses (t) \(\leq\)30 nm. F\({}^{\rm th}\)\({}_{\rm Sw}\) and F\({}^{\rm th}\)\({}_{\rm Dem}\) as a function of the Gd\({}_{25}\)Co\({}_{75}\) thickness are shown in Fig. 3(b).
Both F\({}^{\rm th}\)\({}_{\rm Sw}\) and F\({}^{\rm th}\)\({}_{\rm Dem}\) are following the same trend. For low GdCo thickness (t\(<\)20 nm), the two threshold fluences increase linearly with thickness, which can be understood by supposing that the amount of energy absorbed by the GdCo layer is depth independent in this thickness range. Switching and demagnetization depend on the deposited energy density and, consequently, on temperature. The fact that the two fluences tend to diverge with the GdCo thickness implies that the absorption can no longer be considered uniform. Indeed, the depth-dependent absorption profiles obtained using the transfer matrix method reflect the heat absorption at different film depths (Supplemental Material 12) [31, 32]. The obtained absorption gradient implies that the laser pump pulse heats the system inhomogeneously, indicating that the front part of the sample (facing the laser pump pulse) absorbs more laser energy than the latter part of the magnetic layer. If the entire GdCo layer needs to reach a certain temperature before switching, the laser fluence will then increase nonlinearly and tend to diverge as the sample thickness increases. We can then speculate that the speckled magnetic domain observed for Gd\({}_{25}\)Co\({}_{75}\) with a thickness of t=35 nm could be understood by the fact that the front part of the sample absorbed enough energy to demagnetize this area, whereas the back part does not absorb enough energy to switch, resulting in a maze domain structure.
**D. Time-resolved magneto-optic Kerr measurements**
After investigating the effect of a single femtosecond laser pulse using Kerr images taken several seconds after the excitation, we now probe the fast magnetization dynamics of 5 nm Gd\({}_{\text{x}}\)Co\({}_{\text{100-x}}\) samples using longitudinal time-resolved MOKE (TR-MOKE). TR-MOKE measurements were performed for samples showing toggle-switching behavior. Since the magnetization dynamics depend strongly on the laser fluence, in an effort to normalize the laser fluence, for each concentration we used a laser fluence 1.2 times larger than the previously determined threshold fluence F\({}^{\text{th}}\)\({}_{\text{Sw}}\). In Figures 4(a) and (b), all curves show similar features: a fast initial drop followed by a slower decrease and a saturation toward -1. This type of feature has already been seen for the AO-HIS in single-layer PMA GdFeCo alloys demonstrated by different research groups [6, 8, 10, 11, 21]. This indicates, as expected at short time scales, that the AO-HIS effect is independent of the magnetic anisotropy and should originate from a purely ultrafast thermal effect [5, 6, 10, 11, 21]. To qualitatively describe the dynamics traces, we defined two characteristic times: T1, corresponding to the end of the first fast drop, and T2, corresponding to the end of the slower decrease, as shown in Supplemental Material 13. In Fig. 4(c), T1 and T2 are plotted as a function of the Gd concentration. Those measurements show that the magnetization dynamics of Co slow down with increasing Gd concentration.

Figure 3: (a) Static MOKE images obtained after single laser pump pulse irradiation for different thicknesses (t) of the in-plane magnetized Gd\({}_{\text{25}}\)Co\({}_{\text{75}}\) thin film. The scale bar is 100 μm long. (b) Dependence of F\({}^{\text{th}}\)\({}_{\text{Sw}}\) and F\({}^{\text{th}}\)\({}_{\text{Dem}}\) as a function of thickness. The orange line is obtained from a fitting using an exponential function. The linear grey line is a guide for the eye.
## III Discussion
The obtained results provide clear evidence of _single pulse_, _ultra-fast_ All-Optical Helicity-Independent Switching (AO-HIS) in in-plane magnetized GdCo alloys. Notably, this AO-HIS phenomenon is observed over a significantly larger concentration and thickness range compared to previous experimental observations in perpendicularly magnetized systems [10, 11, 19, 20, 29, 33] and theoretical predictions [10, 11, 21, 24]. The extended concentration range can be attributed to the absence of a stabilized multidomain state due to dipolar fields (stray fields or demagnetization fields) far from magnetization compensation composition (x\({}_{\rm comp}\)). In perpendicularly magnetized samples, the generation of a multidomain state by dipolar fields requires the definition of a threshold fluence, denoted as F\({}^{\rm th}\)Multi[25]. Zhang _et al._ demonstrated that a criterion for observing AO-HIS is that F\({}^{\rm th}\)\({}_{\rm sw}\) (the fluence needed for magnetization switching) is less than F\({}^{\rm th}\)\({}_{\rm Multi}\). Otherwise, the switching process is overshadowed by the formation of the multidomain state, which occurs at a longer timescale. As dipolar fields increase with sample magnetization and thickness, F\({}^{\rm th}\)\({}_{\rm Multi}\) decreases when moving away from x\({}_{\rm comp}\) and for thicker samples.
Because F\({}^{\rm th}\)\({}_{\rm Multi}\) does not need to be considered in the in-plane magnets, the observation of AO-HIS is not limited to low-thickness or low-magnetization alloys (close to compensation in the case of ferrimagnetic alloys). Accordingly, the critical criteria become F\({}^{\rm th}\)\({}_{\rm Sw}\)\(<\) F\({}^{\rm th}\)\({}_{\rm Dam}\) and F\({}^{\rm th}\)\({}_{\rm Sw}\)\(<\) F\({}^{\rm th}\)\({}_{\rm Dem}\). We can suppose that F\({}^{\rm th}\)\({}_{\rm Dam}\) does not depend on the Gd concentration, whereas F\({}^{\rm th}\)\({}_{\rm Dem}\) decreases with Gd concentration as a consequence of the decrease of the Curie temperature (T\({}_{\rm C}\)) [34]. The latter is clearly observed in Fig. 2(d), where F\({}^{\rm th}\)\({}_{\rm Dem}\) decreases almost linearly with increasing Gd concentration. If we extrapolate the evolution of F\({}^{\rm th}\)\({}_{\rm Dem}\) as a function of Gd concentration, F\({}^{\rm th}\)\({}_{\rm Dem}\)=0 can be reached for x=40%, the concentration at which T\({}_{\rm C}\) approaches 300 K [34, 35]. For x=30% (the highest Gd concentration in this work), AO-HIS cannot be seen because F\({}^{\rm th}\)\({}_{\rm Sw}\)\(>\) F\({}^{\rm th}\)\({}_{\rm Dem}\). On the other hand, as detailed in the work of Zhang _et al._, F\({}^{\rm th}\)\({}_{\rm Sw}\) depends both on the alloy's T\({}_{\rm C}\) and the amount of angular momentum generated [25]. In our work, for x=5% (the lowest Gd concentration in this work), AO-HIS cannot be observed due to the fact that F\({}^{\rm th}\)\({}_{\rm Sw}\)\(>\) F\({}^{\rm th}\)\({}_{\rm Dam}\). Because of the high T\({}_{\rm C}\) value of the alloy for low Gd concentration, a laser fluence larger than F\({}^{\rm th}\)\({}_{\rm Dam}\) would be needed to switch the magnetization. Furthermore, a minimum F\({}^{\rm th}\)\({}_{\rm Sw}\) appears close to x\({}_{\rm comp}\) in our study, as observed in PMA GdFeCo alloys [11, 19, 21, 24]. This could be in accord with Barker _et al._, who suggested that a nonequilibrium energy transfer between the ferromagnetic- and antiferromagnetic-like magnon branches is maximized near x\({}_{\rm comp}\), resulting in a minimum F\({}^{\rm th}\)\({}_{\rm Sw}\) around x\({}_{\rm comp}\) [30].

Figure 4: Normalized longitudinal MOKE signal as a function of time for Gd\({}_{x}\)Co\({}_{100-x}\) (5 nm) for (a) Co-dominant alloys (x=10%, 16%, and 18.4%) and (b) Gd-dominant alloys (x=21%, 23%, and 25%). The measurements were obtained with an external magnetic field along the in-plane initial easy-axis direction and for fluences 1.2 times larger than the fluence threshold (F\({}^{\rm th}\)\({}_{\rm Sw}\)). (c) Evolution of the two characteristic times (T1 and T2) of the switching as a function of the Gd concentration x in Gd\({}_{x}\)Co\({}_{100-x}\).
Finally, the fastest switching is obtained for low Gd concentrations, and the switching times tend to increase as the Gd concentration increases, as shown in Fig. 4(c). We confirm here that the Co spin-lattice coupling is an efficient channel for angular momentum dissipation in ferrimagnetic alloys [36]. Indeed, it has been demonstrated that for GdCo alloys, the angular momentum dissipation into the lattice for Gd is less than for Co [6, 36]. Adding Gd leads to changes in the speeds and amplitudes of both sublattices' demagnetization [21, 37]. As confirmed experimentally, when ultrafast demagnetization occurs, the angular momentum will either dissipate locally into the lattice or be transferred as a spin current [4, 36-42]. Therefore, the more Gd is introduced, the fewer dissipation channels are available at short time scales, resulting in a slowdown of the overall magnetization dynamics.
**IV. CONCLUSION**
In conclusion, our systematic study has successfully demonstrated deterministic All-Optical Helicity-Independent Switching (AO-HIS) using a femtosecond laser pulse for in-plane magnetized Gd\({}_{\rm x}\)Co\({}_{\rm 100\mbox{-}x}\) thin films at room temperature. The observation of ultrafast switching across a wide concentration range, in contrast to perpendicularly magnetized counterparts, is due to the absence of perpendicular demagnetization fields which tend to break magnetization into domains. These findings challenge the notion that compensation composition and temperature are
necessary to ensure AO-HIS. However, it is crucial to create conditions for which the switching fluence threshold is lower than the thresholds leading to sample damage and demagnetization in order to observe the desired switching behavior. Additionally, our results indicate that switching can be achieved in relatively thick films, depending on the demagnetization process, providing valuable insights for future stack engineering and optimization of AO-HIS. Furthermore, the magnetization dynamics observed during the reversal of in-plane magnetized GdCo alloys are similar to those observed in perpendicularly magnetized counterparts. Notably, the demagnetization speed and subsequent switching slow down with increasing Gd concentration. These experimental findings give new insights into the magnetization reversal of Gd-based materials, which hold significant implications for the development of future ultrafast spintronic memory devices.
## V Experimental section/methods
### Sample preparations
Glass/Ta (3 nm)/Gd\({}_{x}\)Co\({}_{100\text{-x}}\) (t nm)/Cu (1 nm)/Pt (3 nm) stacks with different x and t values were prepared through magnetron co-sputtering from elemental targets under an argon gas pressure of approximately 10\({}^{-3}\) mbar. The multilayered structures were deposited onto 15\(\times\)10-mm glass substrates. During the room-temperature deposition, the sample holder was fixed in a specific direction without rotation, which produced a well-defined in-plane anisotropy in the Gd\({}_{x}\)Co\({}_{100\text{-x}}\) (t nm) layer, as shown in Fig. S9(b). For the samples used in the thickness-dependent all-optical helicity-independent switching (AO-HIS) experiments, the Gd concentration is fixed at x=25% while t varies from 5 nm to 35 nm in steps of 5 nm.
### Characterizations
Static single pulse and time-resolved measurements were performed in a longitudinal magneto-optic Kerr effect (MOKE) configuration. Samples were housed in an optical setup integrated with a dipole electromagnet, allowing variable-field (0-0.3 T) operation along the sample plane. The linearly polarized pump pulse, with a spot size diameter of \(\sim\)120 \(\upmu\)m, was incident normal to the sample surface. The linearly p-polarized probe beam, with low energy, impinged on the sample at an incidence angle of 45 degrees to obtain the MOKE images using a complementary metal oxide semiconductor camera. Both laser beams were shone on the sample side for all the measurements. In this work, the wavelength of the pump pulse was fixed at 800 nm, and that of the probe beam was set at 515 nm, which mainly reflects the magnetic signal of the Co sublattice of the GdCo alloys. One crucial parameter for all the AO-HIS measurements is the laser pulse length, which was fixed at 150 fs. All the measurements were performed at ambient temperature.
On the one hand, in the static single pulse AO-HIS measurements, the external in-plane magnetic field (oriented parallel to the substrate and along the in-plane easy axis, as indicated in Fig. S9(c)) with a strength larger than the sample coercivity was first applied to initialize the sample. Thereby all the moments in the alloy were aligned in the direction of the in-plane easy axis under a zero field. Subsequently, the samples were irradiated by different numbers of laser pump pulses without any external magnetic field to perform the static single pulse AO-HIS measurements. The repetition rate of the laser pump pulse was set to 100 kHz for all static single pulse AO-HIS measurements.
On the other hand, in the time-resolved MOKE imaging measurements, an in-plane magnetic field with a strength around the sample coercivity was always applied along the initial in-plane easy axis direction of magnetization to initialize the sample before each pump pulse. The value varies from one sample to another due to the different coercivities of the investigated samples. Moreover, we used the MOKE imaging configuration to perform the dynamics measurements, which may include some spurious optical signals originating from the substrate or the environment. To remove those signals, we took the intensity difference for opposite magnetization directions at negative time delay to normalize the presented data, as described in a previous study [42]. Here, time zero (time delay=0) was determined as the time delay at which the derivative of the magnetization dynamics trace is maximal. Since the threshold to permanently damage the sample is different from one sample to another, the amount of laser heating each sample can endure within a given time also differs. In this study, for x=16%, 18.4%, 21%, and 23%, a repetition rate of 100 kHz was used for both the pump and probe beam, while for x=10% and 25%, the value was reduced to 10 kHz.
The saturation magnetization and the coercivity in Fig. 1(b) and the magnetization versus magnetic field curves in Fig. S2 were examined using a superconducting quantum interference device.
## Acknowledgment
The authors thank Eric Fullerton and Bert Koopmans for fruitful discussions. This work is supported by the ANR-20-CE09-0013 UFO, the Institute Carnot ICEEL for the project "CAPMAT" and FASTNESS, the Region Grand Est, the Metropole Grand Nancy for the Chaire PLUS, the interdisciplinary project LUE "MAT-PULSE", part of the French PIA project "Lorraine Universite d'Excellence" reference ANR-15-IDEX-04-LUE, the "FEDERFSE Lorraine et Massif Vosges 2014-2020" for the project PLUS and IOMA, a European Union Program, the European Union's Horizon 2020 research and innovation program COMRAD under the Marie Sklodowska-Curie grant agreement No 861300, and the ANR project ANR-20-CE24-0003 SPOTZ. This article is based upon work from COST Action CA17123 MAGNETOFON, supported by COST (European Cooperation in Science and Technology). This work was supported by the French National Research Agency through the France 2030 government grants EMCOM (ANR-22-PEEL-0009). All funding was shared equally among all authors.
## References
* [1] A. Kirilyuk, A. V. Kimel, and T. Rasing, Ultrafast optical manipulation of magnetic order, Rev. Mod. Phys. **82**, 2731 (2010).
* [2] D. Sander, S. O. Valenzuela, D. Makarov, C. H. Marrows, E. E. Fullerton, P. Fischer, J. McCord, P. Vavassori, S. Mangin, P. Pirro, B. Hillebrands, A. D. Kent, T. Jungwirth, O. Guttleisch, C. G. Kim, and A. Berger, The 2017 Magnetism Roadmap, J. Phys. D: Appl. Phys. **50**, 36 (2017).
* [3] B. Dieny, I. L. Prejbeanu, K. Garello, P. Gambardella, P. Freitas, R. Lehndorff, W. Raberg, U. Ebels, S. O. Demokritov, J. Akerman, A. Deac, P. Pirro, C. Adelmann, A. Anane, A. V. Chumak, A. Hirohata, S. Mangin, Sergio O. Valenzuela, M. Cengiz Onbasli, M. d'Aquino, G. Prenat, G. Finocchio, L. Lopez-Diaz, R. Chantrell, O. Chubykalo-Fesenko, and P. Bortolotti, Opportunities and challenges for spintronics in the microelectronics industry, Nat. Electronics. **3**, 446 (2020).
* [4] J. Igarashi, W. Zhang, Q. Remy, E. Diaz, J.-X. Lin, J. Hohlfeld, M. Hehn, S. Mangin, J. Gorchon, and G. Malinowski, Optically induced ultrafast magnetization switching in ferromagnetic spin valves, Nat. Mater. **22**, 725 (2023).
* [5] T. A. Ostler, J. Barker, R. F. L. Evans, R. W. Chantrell, U. Atxitia, O. Chubykalo-Fesenko, S. El Moussaoui, L. Le Guyader, E. Mengotti, L. J. Heyderman, F. Nolting, A. Tsukamoto, A. Itoh, D. Afanasiev, B. A. Ivanov, A. M. Kalashnikova, K. Vahaplar, J. Mentink, A. Kirilyuk, T. Rasing, and A. V. Kimel, Ultrafast heating as a sufficient stimulus for magnetization reversal in a ferrimagnet, Nat. Commun. **3**, 666 (2012).
* [6] I. Radu, K. Vahaplar, C. Stamm, T. Kachel, N. Pontius, H. A. Durr, T. A. Ostler, J. Barker, R. F. L. Evans, R. W. Chantrell, A. Tsukamoto, A. Itoh, A. Kirilyuk, T. Rasing, and A. V. Kimel, Transient ferromagnetic-like state mediating ultrafast reversal of antiferromagnetically coupled spins, Nature **472**, 205 (2011).
* [7] T. A. Ostler, R. F. L. Evans, R. W. Chantrell, U. Atxitia, O. Chubykalo-Fesenko, I. Radu, R. Abrudan, F. Radu, A. Tsukamoto, A. Itoh, A. Kirilyuk, T. Rasing, and A. V. Kimel, Crystallographically amorphous ferrimagnetic alloys: Comparing a localized atomistic spin model with experiments, Phys. Rev. B **84**, 024407 (2011).
* [8] J. Gorchon, R. B. Wilson, Y. Yang, A. Pattabi, J. Y. Chen, L. He, J. P. Wang, M. Li, and J. Bokor, Role of electron and phonon temperatures in the helicity-independent all-optical switching of GdFeCo, Phys. Rev. B **94**, 184406 (2016).
* [9] A. El-Ghazaly, B. Tran, A. Ceballos, C.-H. Lambert, A. Pattabi, S. Salahuddin, F. Hellman, and J. Bokor, Ultrafast magnetization switching in nanoscale magnetic dots, Appl. Phys. Lett. **114**, 232407 (2019).
* [10] C. S. Davies, T. Janssen, J. H. Mentink, A. Tsukamoto, A. V. Kimel, A. F. G. van der Meer, A. Stupakiewicz, and A. Kirilyuk, Pathways for Single-Shot All-Optical Switching of Magnetization in Ferrimagnets, Phys. Rev. Appl. **13**, 024064 (2020).
* [11] F. Jakobs, T. A. Ostler, C.-H. Lambert, Y. Yang, S. Salahuddin, R. B. Wilson, J. Gorchon, J. Bokor, and U. Atxitia, Unifying femtosecond and picosecond single-pulse magnetic switching in Gd-Fe-Co, Phys. Rev. B **103**, 104422 (2021).
* [12] J. H. Mentink, J. Hellsvik, D. V. Afanasiev, B. A. Ivanov, A. Kirilyuk, A. V. Kimel, O. Eriksson, M. I. Katsnelson, and T. Rasing, Ultrafast Spin Dynamics in Multisublattice Magnets, Phys. Rev. Lett. **108**, 057202 (2012).
* [13] A. J. Schellekens, and B. Koopmans, Microscopic model for ultrafast magnetization dynamics of multisublattice magnets, Phys. Rev. B **87**, 020407(R) (2013).
* [14] N. Bergeard, V. Lo'pez-Flores, V. Halte, M. Hehn, C. Stamm, N. Pontius, E. Beaurepaire, and C. Boeglin, Ultrafast angular momentum transfer in multisublattice ferrimagnets, Nat. Commun. **5**, 3466 (2014).
* [15] V. N. Gridnev, Ultrafast heating-induced magnetization switching in ferrimagnets, J. Phys.: Condens. Matter **28**, 476007 (2016).
* [16] A. M. Kalashnikova, and V. I. Kozub, Exchange scattering as the driving force for ultrafast all-optical and bias-controlled reversal in ferrimagnetic metallic structures, Phys. Rev. B **93**, 054424 (2016).
* [17] S. Wang, C. Wei, Y. Feng, H. Cao, W. Li, Y. Cao, B.-O. Guan, A. Tsukamoto, A. Kirilyuk, A. V. Kimel, and X. Li, Dual-shot dynamics and ultimate frequency of all-optical magnetic recording on GdFeCo, Light: Sci. Appl. **10**, 8 (2021).
* [18] F. Steinbach, N. Stetzuhn, D. Engel, U. Atxitia, C. v. K. Schmising, and S. Eisebitt, Accelerating double pulse all-optical write/erase cycles in metallic ferrimagnets, Appl. Phys. Lett. **120**, 112406 (2022).
* [19] J. Wei, B. Zhang, M. Hehn, W. Zhang, G. Malinowski, Y. Xu, W. Zhao, and S. Mangin, All-optical Helicity-Independent Switching State Diagram in Gd-Fe-Co Alloys, Phys. Rev. Appl. **15**, 054065 (2021).
* [20] Y. Xu, M. Deb, G. Malinowski, M. Hehn, W. Zhao, and S. Mangin, Ultrafast Magnetization Manipulation Using Single Femtosecond Light and Hot-Electron Pulses, Adv. Mater. **29**, 1703474 (2017).
* [21] M. Beens, M. L. M. Lalieu, A. J. M. Deenen, R. A. Duine, and B. Koopmans, Comparing all-optical switching in synthetic-ferrimagnetic multilayers and alloys, Phys. Rev. B **100**, 220409(R) (2019).
* [22] C. S. Davies, J. H. Mentink, A. V. Kimel, T. Rasing, and A. Kirilyuk, Helicity-independent all-optical switching of magnetization in ferrimagnetic alloys, J. Magn. Magn. Mater. **563**, 169851 (2022).
* [23] M. L. M. Lalieu, M. J. G. Peeters, S. R. R. Haenen, R. Lavrijsen, and B. Koopmans, Deterministic all-optical switching of synthetic ferrimagnets using single femtosecond laser pulses, Phys. Rev. B **96**, 220411(R) (2017).
* [24] U. Atxitia, T. A. Ostler, R. W. Chantrell, and O. Chubykalo-Fesenko, Optimal electron, phonon, and magnetic characteristics for low energy thermally induced magnetization switching, Appl. Phys. Lett. **107**, 192402 (2015).
* [25] W. Zhang, J. Hohlfeld, T. X. Huang, J.-X. Lin, M. Hehn, Y. Le Guen, J. Compton-Stewart, G. Malinowski, W. S. Zhao, and S. Mangin, Submitted (2023).
* [26] A. Hassdenteufel, J. Schmidt, C. Schubert, B. Hebler, M. Helm, M. Albrecht, and R. Bratschitsch, Low-remanence criterion for helicity-dependent all-optical magnetic switching in ferrimagnets, Phys. Rev. B **91**, 104431 (2015).
* [27] M. H. Kryder, E. C. Gage, T. W. McDaniel, W. A. Challener, R. E. Rottmayer, G. Ju, Y.-T. Hsia, and M. F. Erden, Heat Assisted Magnetic Recording, Proc. IEEE **96**, 1810 (2008).
* [28] D. O. Ignatyeva, P. O. Kapralov, K. H. Prabhakara, H. Yoshikawa, A. Tsukamoto, and V. I. Belotelov, Magnetization Switching in the GdFeCo Films with In-Plane Anisotropy via Femtosecond Laser Pulses, Mol. **26**, 6406 (2021).
* [29] X. Fan, and X. Lin, Thermal impact on ultrafast helicity independent all-optical switching of Gd\({}_{x}\)Co\({}_{100\text{-}x}\), J. Phys.: Conf. Ser. **2230**, 012025 (2022).
* [30] J. Barker, U. Atxitia, T. A. Ostler, O. Hovorka, O. Chubykalo-Fesenko, and R. W. Chantrell, Two-magnon bound state causes ultrafast thermally induced magnetisation switching, Sci. Rep. **3**, 3262 (2013).
* [31] A. R. Khorsand, M. Savoini, A. Kirilyuk, and T. Rasing, Optical excitation of thin magnetic layers in multilayer structures, Nat. Mater. **13**, 101 (2014).
* [32] Y. Xu, M. Hehn, W. Zhao, X. Lin, G. Malinowski, and S. Mangin, From single to multiple pulse all-optical switching in GdFeCo thin films, Phys. Rev. B **100**, 064424 (2019).
* [33] J. Wang, T. Seki, Y.-C. Lau, Y. K. Takahashi, and K. Takanashi, Origin of magnetic anisotropy, role of induced magnetic moment, and all-optical magnetization switching for Co\({}_{100\text{-}}\)Gd\({}_{\text{v}}\)/Pt multilayers, APL Mater. **9**, 061110 (2021).
* [34] N. H. Duc, and D. Givord, Exchange interactions in amorphous Gd Co alloys, J. Magn. Magn. Mater. **157**, 169 (1996).
* [35] P. Hansen, C. Clausen, G. Much, M. Rosenkranz, and K. Witter, Magnetic and magneto-optical properties of rare-earth transition-metal alloys containing Gd, Tb, Fe, Co, J. Appl. Phys. **66**, 756 (1989).
* [36] B. Koopmans, G. Malinowski, F. Dalla Longa, D. Steiauf, M. Fahnle, T. Roth, M. Cinchetti, and M. Aeschlimann, Explaining the paradoxical diversity of ultrafast laser-induced demagnetization, Nat. Mater. **9**, 259 (2010).
* [37] T. Ferte, M. Beens, G. Malinowski, K. Holldack, R. Abrudan, F. Radu, T. Kachel, M. Hehn, C. Boeglin, B. Koopmans, and N. Bergeard, Laser induced ultrafast Gd 4f spin dynamics in Co\({}_{100\text{-}}\)Gd\({}_{\text{x}}\) alloys by means of time-resolved XMCD, Eur. Phys. J. Spec. Top. (2023).
* [38] S. R. Tauchert, M. Volkov, D. Ehberger, D. Kazenwadel, M. Evers, H. Lange, A. Donges, A. Book, W. Kreuzpaintner, U. Nowak, and P. Baum, Polarized phonons carry angular momentum in ultrafast demagnetization, Nature **602**, 73 (2022).
* [39] G. Malinowski, F. Dalla Longa, J. H. H. Rietjens, P. V. Paluskar, R. Huijink, H. J. M. Swagten, and B. Koopmans, Control of speed and efficiency of ultrafast demagnetization by direct transfer of spin angular momentum, Nat. Phys. **4**, 855 (2008).
* [40] G.-M. Choi, B.-C. Min, K.-J. Lee, and D. G. Cahill, Spin current generated by thermally driven ultrafast demagnetization, Nat. Commun. **5**, 4334 (2014).
* [41] B. Liu, H. Xiao, and M. Weinelt, Microscopic insights to spin transport-driven ultrafast magnetization dynamics in a Gd/Fe bilayer, Sci. Adv. **9**, eade0286 (2023).
* [42] Q. Remy, J. Hohlfeld, M. Verges, Y. Le Guen, J. Gorchon, G. Malinowski, S. Mangin, and M. Hehn, Accelerating ultrafast magnetization reversal by non-local spin transfer, Nat. Commun. **14**, 445 (2023).
Supplemental Material
**Single laser pulse induced magnetization switching in in-plane magnetized GdCo alloys**
Jun-Xiao Lin,\({}^{1}\) Michel Hehn,\({}^{1,2}\) Thomas Hauet,\({}^{1}\) Yi Peng,\({}^{1}\) Junta Igarashi,\({}^{1}\) Yann Le Guen,\({}^{1}\) Quentin Remy,\({}^{3}\) Jon Gorchon,\({}^{1}\) Gregory Malinowski,\({}^{1}\) Stephane Mangin,\({}^{*,1,2}\) and Julius Hohlfeld\({}^{1}\)
\({}^{1}\)_Universite de Lorraine, CNRS, Institut Jean Lamour, F-54000 Nancy, France_
\({}^{2}\)_Center for Science and Innovation in Spintronics, Tohoku University, Sendai, Japan_
\({}^{3}\)_Department of Physics, Freie Universitat Berlin, 14195 Berlin, Germany_
\({}^{*}\)Author to whom correspondence should be addressed: [email protected]
**Keywords:** in-plane magnetized thin film, ultrafast optics, single laser pulse magnetization reversal, Gd-based alloys, opto-spintronics
**S1. Longitudinal magneto-optic Kerr effect (MOKE) hysteresis loops of in-plane magnetized Gd\({}_{x}\)Co\({}_{100\text{-}x}\) films at room temperature**
The longitudinal MOKE hysteresis loops were obtained by applying a magnetic field along the in-plane easy axis. Since MOKE is mainly sensitive to the Co sublattice, the sign of the hysteresis loops depends on the Gd concentration x: counterclockwise (respectively clockwise) hysteresis loops are obtained when the magnetization of the Co sublattice, M\({}_{\text{Co}}\), is higher (respectively lower) than the magnetization of the Gd sublattice, M\({}_{\text{Gd}}\). In Fig. S1, we can clearly see that M\({}_{\text{Co}}\) is dominant over M\({}_{\text{Gd}}\) when x\(<\)20%, while M\({}_{\text{Gd}}\) becomes dominant when x\(>\)20%, which is consistent with the results observed in Fig. 1(b). The compensation composition is then x\({}_{\text{comp}}\)\(\sim\)20%. In all cases, the hysteresis loops have a remanence close to one, revealing the existence of a well-defined in-plane magnetic anisotropy axis in all the studied Gd\({}_{x}\)Co\({}_{100\text{-}x}\) films.
**FIG. S1.** Normalized MOKE signal as a function of magnetic field (H) applied along the in-plane anisotropy axis for various Gd\({}_{x}\)Co\({}_{100\text{-}x}\) films concentration (a) for Co-rich alloys (x=5%, 10%, 16%, and 18.4 %) and (b) for Gd-rich alloys (x=21%, 23%, 25%, and 30 %).
**S2. Magnetic field-dependent magnetization (M-H) measurements of Gd\({}_{x}\)Co\({}_{100\text{-}x}\) films at room temperature**
Magnetization versus magnetic field curves, with the external magnetic field applied along the out-of-plane direction, are shown in Fig. S2 for the studied samples. Those results indicate that the out-of-plane direction is a hard axis.
**S3. Magnetic domain imaging of in-plane magnetized Gd\({}_{\bf x}\)Co\({}_{\bf 100\text{-}x}\) thin films**
Initially, the sample was saturated using a strong external in-plane magnetic field, then the field (with a direction opposite to the saturating field) was reduced close to coercivity to generate magnetic domains. The obtained Kerr microscopy images suggest that the characteristic domain size and shape are similar for all the studied Gd\({}_{\bf x}\)Co\({}_{\bf 100\text{-}x}\) films. The typical domain size is comparable to the laser pump spot size, which is around 120 \(\upmu\)m in diameter.
**S4. Reversed domain diameter versus laser pump pulse energy**
The sample is excited with single laser pump pulses of various energies. Fig. S4(a) shows that the diameter of the written domain increases as the pump energy increases. We consider that the laser beam has a Gaussian intensity profile with a maximum value at the center of the beam. When the energy is too high, a demagnetized state is observed in the center of the circular reversed domain. The threshold pump pulse energy (\(E_{th}\)) to observe all-optical helicity-independent switching (AO-HIS) can be determined from \(d=D\sqrt{\frac{E}{E_{th}}}\), where \(d\) (the diameter of the reversed domain) and \(E\) (the laser pump pulse energy) are the experimentally accessible quantities, as shown in Fig. S4(b). Then, by fitting the set of data points, \(D\) (the laser-spot diameter) and \(E_{th}\) (the threshold pump pulse energy) can be evaluated. Knowing \(E_{th}\) and \(D\), the threshold fluence (\(F_{th}\)) is given by \(\frac{E_{th}}{\pi\cdot(\frac{D}{2})^{2}}\). Detailed information can be found in a previous work [1].
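To illustrate the fitting procedure described above, a minimal Python sketch is given below. It assumes the relation \(d=D\sqrt{E/E_{th}}\) quoted in this section; the array values, units, and variable names are illustrative placeholders rather than data from this work.

```python
# Minimal sketch of the threshold extraction described above (illustrative values only).
import numpy as np
from scipy.optimize import curve_fit

def domain_diameter(E, D, E_th):
    """Reversed-domain diameter d(E) = D * sqrt(E / E_th), the relation used in the text."""
    return D * np.sqrt(E / E_th)

# Synthetic "measurements": pulse energies (uJ) and reversed-domain diameters (um).
rng = np.random.default_rng(0)
E_data = np.linspace(0.30, 0.60, 8)                                   # placeholder energies
d_data = domain_diameter(E_data, 120.0, 0.25) + rng.normal(0.0, 2.0, E_data.size)

# Fit D (laser-spot diameter) and E_th (threshold pulse energy).
(D_fit, E_th_fit), _ = curve_fit(domain_diameter, E_data, d_data, p0=(100.0, 0.2))

# Threshold fluence F_th = E_th / (pi * (D / 2)^2).
F_th = E_th_fit / (np.pi * (D_fit / 2.0) ** 2)
print(f"D = {D_fit:.1f} um, E_th = {E_th_fit:.3f} uJ, F_th = {F_th:.2e} uJ/um^2")
```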
**FIG. S4. Single pulse switching for a Gd\({}_{16}\)Co\({}_{84}\) (5 nm) thin film. (a) Kerr images obtained after a single laser pump pulse irradiation with various pulse energy. (b) Reversed domain size as a function of the pump pulse energy, deduced from images in (a). (c) Longitudinal MOKE images were extracted at high laser energy to demonstrate the multidomain state resulting from excessive heating of the sample.**
**S5. Endurance test on in-plane magnetized Gd\({}_{18.4}\)Co\({}_{81.6}\) and Gd\({}_{21}\)Co\({}_{79}\) alloys at room temperature**
Fig. S5 shows the toggle magnetization switching after a thousand laser pump pulses.
**S6. Single pulse all-optical helicity-independent switching measurements performed at the boundary between two magnetic domains**
Fig. S6 shows that single pulse AO-HIS depends only on the initial magnetic configuration and the number of laser pump pulses. It also proves that the dipolar field resulting from the domain formation does not affect the magnetization reversal.
**S7. Pump pulse energy-dependent optical switching measurements of Gd\({}_{5}\)Co\({}_{95}\) thin film**
To prove that no deterministic AO-HIS could be detected on the Gd\({}_{5}\)Co\({}_{95}\) film (x=5%), we show in Fig. S7 optical switching measurements without and with an applied in-plane magnetic field for various laser pump fluences. The switching of a circular area could not be observed. Neither a switching pattern nor a multidomain state was observed before burning and damaging the sample, indicating that the Curie temperature of this sample is too high for it to be demagnetized enough to observe either AO-HIS or heat-assisted magnetic recording. Note that the strength of the field used to assist the switching is close to, but lower than, the coercivity. In principle, the external field should assist the switching [2]. Strikingly, only a speckled domain and no circular-shaped reversed domain were observed when applying the field, stressing again that the Curie temperature (T\({}_{\rm C}\)) of this alloy is too high.
field along the in-plane easy axis. (b) Normalized MOKE signal as a function of the field applied along the in-plane anisotropy axis for a Gd\({}_{5}\)Co\({}_{95}\) (5 nm) film. (c) Static optical switching results after 1 and 2 laser pump pulses were obtained without and with the field opposite the initial magnetization direction. A scale bar of 100 \(\upmu\)m is presented.
**S8. Pump pulse energy-dependent optical switching measurements of Gd\({}_{30}\)Co\({}_{70}\) thin film**
We measured the optical switching without and with the external in-plane field for various laser pump fluences to demonstrate the switching behavior and the possible application of Gd\({}_{30}\)Co\({}_{70}\) (x=30%). The reversal of a uniform domain could not be observed under a zero field; only the multidomain is formed, as shown in Fig. S8(c). However, a uniform reversed domain can be obtained by applying a field (amplitude less than the coercivity) opposite to the initial magnetization direction. The results of Fig. S8(d) show that the switching behavior depends on the amplitude of the field, which demonstrates a similar behavior as seen in heat-assisted magnetic recording [2].
**S9. Single pulse all-optical helicity-independent switching measurements performed at two orthogonal in-plane directions**
Typically, in-plane magnetization could be aligned along different directions in the plane. We compared the MOKE hysteresis loops with the magnetic field applied along two orthogonal in-plane directions and measured the angle-dependent remanence. The results of Figures S9(a) and (b) confirmed the existence of well-defined in-plane anisotropy in our samples. Then, to see the AO-HIS effect on two in-plane directions, we first initialized the magnetization along the in-plane easy axis direction (corresponding to 0 degrees in Fig. S9(a)); the results show that the sample exhibits AO-HIS, as shown in Figures S9(c) and 2. Secondly, the magnetization was initialized along the in-plane hard axis direction (corresponding to 90 degrees in Fig. S9(a)), and then microsized domains were created under a zero field upon single laser pump pulse irradiation. The results of Figures S9(d) and (e) show that those domains can still be reversed partially in a repeatable way and are insensitive to the multidomain surrounding them. The results of Figures S9(c)-(e) clearly summarize that the switching with a circular area only appears when the magnetization is initialized along the direction of the in-plane easy axis, indicating that having the well-defined in-plane magnetic anisotropy is also a key ingredient to observe well-defined reversed domain.
Figure S9: (a) Normalized longitudinal MOKE hysteresis loops measured for the magnetic field applied along the longitudinal (0 deg) and transversal (90 deg) in-plane direction. Here the sample of Glass/Ta (3 nm)/ Gd\({}_{15}\)Co\({}_{85}\) (5 nm)/Cu (1 nm)/Pt (3 nm) was used for demonstration. (b) Angle
dependent remanent ratio M\({}_{\rm r}\)/Ms of a Gd\({}_{15}\)Co\({}_{85}\) film which was collected by a vibrating-sample magnetometer. Static longitudinal Kerr images after 1, 2, and 3 pump pulses when the field initializes the magnetization along the (c) easy axis and (d) hard axis in the plane. A scale bar of 100 \(\upmu\)m is presented. (e) From left to right are the pictures taken from (d) showing the difference between each pump pulse, as indicated by the label above the MOKE images.
**S10. Comparison of the demagnetization threshold and Curie temperature**
Previous theoretical works have suggested that a multidomain state will be formed during the successive slow cooldown when the lattice temperature rises above the \(\mathrm{T_{C}}\) of GdCo alloys [3]. This implies that the multidomain relates to the \(\mathrm{T_{C}}\) of a material. Here, we define the pump pulse energy required to observe multidomain patterns in the irradiation area as a demagnetization energy threshold (\(\mathrm{E^{th}}_{\mathrm{Dem}}\)). Accordingly, \(\mathrm{E^{th}}_{\mathrm{Dem}}\) versus Gd concentration is compared to the \(\mathrm{T_{C}}\) of each Gd concentration extracted from the literature [4]. Both cases follow a similar trend.
**S11. MOKE hysteresis loop measurements of in-plane magnetized Gd\({}_{25}\)Co\({}_{75}\) with different thicknesses**
The longitudinal MOKE hysteresis loops were obtained by applying a magnetic field along the in-plane easy axis. In all cases, square hysteresis loops with \(\sim\)100% remanence reveal the existence of a well-defined in-plane magnetic anisotropy in all Gd\({}_{25}\)Co\({}_{75}\) films. The main hard axis lies along the direction normal to the sample surface.
**S12. The calculation of the absorption profile of Gd\({}_{25}\)Co\({}_{75}\) with various thicknesses**
The multilayer structure of Glass/Ta (3 nm)/Gd\({}_{25}\)Co\({}_{75}\) (t nm)/Cu (1 nm)/Pt (3 nm) was used to study the thickness-dependent single pulse AO-HIS. In order to understand the amount of energy absorbed by the sample, it is interesting to know the optical absorption profiles as a function of the sample depths [5-7]. As shown in Fig. S12(a), the absorption profile in GdCo is almost constant for t\(\leq\)20 nm, which indicates that the entire GdCo layer can be uniformly demagnetized. By contrast, when t\(\geq\)20 nm, the absorption profile starts to become non-uniform. Fig. S12(b) shows the total amount of laser energy deposited in the GdCo layer. The results reveal that the absorption reaches saturation at t\(\sim\)20 nm, and then almost stays constant. The result means that when the thickness is greater than 20 nm, the total energy injected into the GdCo is nearly maintained, giving rise to a temperature gradient in the magnetic layer. This indicates that the front part of the sample absorbs most of the laser energy, resulting in an insufficient demagnetization in the deeper part of the GdCo layer. Accordingly, we attribute a maze domain structure for t=35 nm to the significant thermal gradient in the GdCo layer.
**S13. The definition of the time needed for T1 and T2**
From the time-resolved magnetization dynamic curve, three typical features can be determined by their slopes, as observed by previous works [8,9]. The magnetization dynamics of the Gd\({}_{18.4}\)Co\({}_{81.6}\) alloy is taken as an example to demonstrate the way to extract T1 and T2, as shown in Fig. S13(a); three features can be described as follows: (1) a fast initial drop as a result of ultrafast demagnetization (black curve; the slope is the highest among the three). (2) a recovery in the opposite direction of the initial state (orange curve; the slope is the intermediate among three). (3) a plateau where the value of normalized MOKE stays almost constant (dark cyan curve; the slope is nearly zero). By obtaining those three features (slopes), we can define the time scale "T1" at which the transition between features (1) and (2) happens. Similarly, "T2" can be determined by the time scale at which the transition between features (2) and (3) happens. By carefully analyzing the time-resolved MOKE traces for various Gd concentrations (Fig. S13(b)), we can summarize T1 and T2 as a function of Gd concentration, as shown in Fig. 4(c).
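The slope-based definition of T1 and T2 can be made concrete with a short numerical sketch. The smoothing window and the slope fractions used below are arbitrary illustrative choices, not the parameters used for the analysis in this work.

```python
# Illustrative sketch: extract T1 and T2 from a normalized TR-MOKE trace via slope changes.
import numpy as np

def extract_t1_t2(t_ps, m_norm, frac_fast=0.3, frac_flat=0.05, window=5):
    """t_ps: time delays; m_norm: normalized MOKE trace.
    T1 marks the end of the fast initial drop, T2 the end of the slower decrease."""
    kernel = np.ones(window) / window
    m_smooth = np.convolve(m_norm, kernel, mode="same")   # simple moving-average smoothing
    slope = np.abs(np.gradient(m_smooth, t_ps))           # |dM/dt|
    i_peak = int(slope.argmax())                          # steepest point of the initial drop
    # T1: first delay after the steepest point where the slope falls below frac_fast of its peak.
    i1 = i_peak + int(np.argmax(slope[i_peak:] < frac_fast * slope[i_peak]))
    # T2: first delay after T1 where the slope is essentially flat.
    i2 = i1 + int(np.argmax(slope[i1:] < frac_flat * slope[i_peak]))
    return t_ps[i1], t_ps[i2]
```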
**FIG. S13. (a) Time-resolved MOKE trace of the Gd\({}_{18.4}\)Co\({}_{81.6}\) alloy, taken as an example to define the times T1 and T2. (b) The three different features determined by their slopes, plotted as a function of Gd concentration.**
|
2308.08359
|
Membrane Potential Batch Normalization for Spiking Neural Networks
|
As one of the energy-efficient alternatives of conventional neural networks
(CNNs), spiking neural networks (SNNs) have gained more and more interest
recently. To train the deep models, some effective batch normalization (BN)
techniques are proposed in SNNs. All these BNs are suggested to be used after
the convolution layer as usually doing in CNNs. However, the spiking neuron is
much more complex with the spatio-temporal dynamics. The regulated data flow
after the BN layer will be disturbed again by the membrane potential updating
operation before the firing function, i.e., the nonlinear activation.
Therefore, we advocate adding another BN layer before the firing function to
normalize the membrane potential again, called MPBN. To eliminate the induced
time cost of MPBN, we also propose a training-inference-decoupled
re-parameterization technique to fold the trained MPBN into the firing
threshold. With the re-parameterization technique, the MPBN will not introduce
any extra time burden in the inference. Furthermore, the MPBN can also adopt
the element-wised form, while these BNs after the convolution layer can only
use the channel-wised form. Experimental results show that the proposed MPBN
performs well on both popular non-spiking static and neuromorphic datasets. Our
code is open-sourced at \href{https://github.com/yfguo91/MPBN}{MPBN}.
|
Yufei Guo, Yuhan Zhang, Yuanpei Chen, Weihang Peng, Xiaode Liu, Liwen Zhang, Xuhui Huang, Zhe Ma
|
2023-08-16T13:32:03Z
|
http://arxiv.org/abs/2308.08359v1
|
# Membrane Potential Batch Normalization for Spiking Neural Networks
###### Abstract
As one of the energy-efficient alternatives of conventional neural networks (CNNs), spiking neural networks (SNNs) have gained more and more interest recently. To train the deep models, some effective batch normalization (BN) techniques are proposed in SNNs. All these BNs are suggested to be used after the convolution layer as usually doing in CNNs. However, the spiking neuron is much more complex with the spatio-temporal dynamics. The regulated data flow after the BN layer will be disturbed again by the membrane potential updating operation before the firing function, i.e., the nonlinear activation. Therefore, we advocate adding another BN layer before the firing function to normalize the membrane potential again, called MPBN. To eliminate the induced time cost of MPBN, we also propose a training-inference-decoupled re-parameterization technique to fold the trained MPBN into the firing threshold. With the re-parameterization technique, the MPBN will not introduce any extra time burden in the inference. Furthermore, the MPBN can also adopt the element-wised form, while these BNs after the convolution layer can only use the channel-wised form. Experimental results show that the proposed MPBN performs well on both popular non-spiking static and neuromorphic datasets. Our code is open-sourced at MPBN.
## 1 Introduction
Having emerged as a biology-inspired method, spiking neural networks (SNNs) have received much attention in artificial intelligence and neuroscience recently [17, 13, 57, 56, 47, 58, 59]. SNNs use binary event-driven spikes as their activations, and therefore the multiplications of activations and weights can be replaced by additions or skipped entirely when neurons keep silent. Benefiting from such a computation paradigm, SNNs achieve extreme energy efficiency and run efficiently when implemented on neuromorphic hardware [1, 42, 5].
Although the SNN has achieved great success in diverse fields including pattern recognition [12, 21, 19, 14], object detection [30], language processing [55], robotics [9], and so on, its development is deeply inspired by the experience of convolutional neural networks (CNNs) in many aspects. However, the spiking neuron model along with the rich spatio-temporal dynamics makes SNNs much different from CNNs, and directly transferring some experience of CNNs to SNNs without any modifications may not be a good idea. As one of the most famous techniques in CNNs, the batch normalization (BN) technique offers great advantages. It can reduce the gradient exploding/vanishing problem, flatten the loss landscape, and reduce the internal covariate shift, thus being widely used in CNNs. There are also some works trying to apply normalization approaches in the SNN field to help model convergence. For example, inspired by BN in CNNs, NeuNorm [51] was proposed to normalize the data along the channel dimension. Considering that the temporal dimension is also important in SNNs, threshold-dependent batch normalization (tdBN) [62] then extended the scope of BN to the additional temporal dimension. Subsequently, to better depict the differences in data flow distributions at different time steps, the temporal batch normalization through time (BNTT) [31], postsynaptic potential normalization (PSP-BN) [28], and temporal effective batch normalization (TEBN) [10], which regulate the data flows with multiple BNs on different time steps, were proposed.
However, all these BNs proposed in SNNs are advised to be used after convolution layers as usually doing in CNNs. This ignores the fact that the nonlinear transformation in the SNN spiking neuron is much more complex than that of the ReLU neuron. In the spiking neuron, the data flow after the convolution layer will be first injected into the residual membrane potential (MP) coming from the previous time step to generate a new MP at the current time step. And then the neuron will fire a spike or keep silent
still based on whether or not the new MP is up to the firing threshold. Obviously, though the data flow has been normalized by the BN after the convolution layer, it will be disturbed again by the residual MP in the membrane potential updating process. Therefore, we advocate also adding a BN layer after MP updating to regulate the data flow once again, called MPBN. Furthermore, we also propose a training-inference-decoupled re-parameterization technique in SNNs to fold the trained MPBN into the firing threshold. Hence, the MPBN will not induce any extra burden in the inference but a trivial burden in the training. The MPBN can be extended to channel-wised MPBN and element-wised MPBN further, which is very different from that of CNNs where only channel-wised normalization can be folded into weights. The difference between our SNN with MPBN and the vanilla SNN is illustrated in Fig. 1. Our main contributions are as follows:
* We propose to add another BN layer after the membrane potential updating operation named MPBN to handle the data flow disturbance in the spiking neuron. The experiment shows that MPBN can flatten the loss landscape further, thus benefiting model convergence and task accuracy.
* We also propose a re-parameterization method to decouple the training-time SNN and the inference-time SNN. In specific, we propose a method to fold the trained MPBN parameter into the firing threshold. Therefore, MPBN can be seen as only training auxiliary manner free from burdens in the inference-time. This re-parameterization method is suitable for both channel-wised MPBN and element-wised MPBN.
* Extensive experiment results show that the SNN trained with the MPBN is highly effective compared with other state-of-the-art SNN models on both static and dynamic datasets, e.g., 96.47% top-1 accuracy and 79.51% top-1 accuracy are achieved on the CIFAR-10 and CIFAR-100 with only 2 time steps.
## 2 Related Work
### Learning of Spiking Neural Networks
There are three kinds of learning algorithms for SNNs: unsupervised learning [43, 24], converting an ANN to an SNN (ANN2SNN) [46, 25, 26], and supervised learning [19, 38, 20]. Unsupervised learning adopts some biological mechanism to update the SNN model, i.e., the spike-timing-dependent plasticity (STDP) approach [39], thus being considered a biologically plausible method. However, STDP cannot yet train large-scale networks, so it is usually limited to small datasets and yields non-ideal performance. The ANN-SNN conversion approach [22, 37] obtains an SNN by reusing well-trained homogeneous ANN parameters and replacing the ReLU neuron with a spiking neuron. Since the ANN model is easier to train and reaches high performance, the ANN-SNN conversion method provides an interesting way to generate an SNN in a short time with competitive performance. However, the converted SNN loses the rich temporal dynamic behaviors and thus cannot handle neuromorphic datasets well. Supervised learning [11, 50, 18] adopts the surrogate gradient (SG) approach to train SNNs with error backpropagation. It can handle temporal data and provide decent performance with few time steps on large-scale datasets, thus having received much attention recently. For a more detailed introduction, please refer to the recent SNN survey [17]. Our work falls under supervised learning.
### Normalization in Spiking Neural Networks
The batch normalization technique was originally introduced as a kind of training auxiliary method by [29] in CNNs. It uses the weight-summed input over a mini-batch of training cases to compute a mean and variance and then uses them to regulate the summed input. This simple operation can derive many benefits. i) It reduces the internal covariate shift (ICS), thus accelerating the training of a deep neural network. ii) It makes the network insensitive to the scale of the gradients, thus a higher learning rate can be chosen to accelerate the training. iii) It makes the network suitable for more nonlinearities by preventing the network from getting stuck in the saturated modes. With these advantages, more kinds of BNs were proposed, including layer normalization [2], group normalization [52], instance normalization [48], and switchable normalization [40].
Figure 1: The difference between our SNN with MPBN and the vanilla SNN. We add another BN layer after membrane potential updating (MPU) operation in the training. The MPBN can be folded into the firing threshold and then the homogenous firing threshold will be transformed into different ones.
There are also some works that modify and apply normalization approaches in the SNN field. For example, NeuNorm [51] also normalizes the feature map along the channel dimension like BN in CNNs. Recently, some methods were proposed to normalize the feature map along both the channel dimension and the temporal dimension to take care of the spatio-temporal characteristics of the SNN, such as the threshold-dependent batch normalization (tdBN) [62]. It extends the scope of BN to the additional temporal dimension by adopting a 3DBN-like normalization method from CNNs. Note that the tdBN can be folded into the weights, thus inducing no burden at inference time. Nevertheless, NeuNorm and tdBN still use shared parameters along the temporal dimension. Some works argued that the distributions of data in different time steps vary wildly and that using shared parameters is not a good choice. Subsequently, the temporal batch normalization through time (BNTT) [31], postsynaptic potential normalization (PSP-BN) [28], and temporal effective batch normalization (TEBN) [10] were proposed. These BNs regulate the data flow utilizing different parameters through time steps. Though these BNs with different BN parameters on different time steps can train better-performing SNN models, their parameters cannot be folded into the weights and thus increase the computation and running time in the inference.
Nevertheless, all these BNs in the SNN field are advised to be used after convolution layers. However, the data flow after the convolution layer will not be presented to the firing function directly but to the membrane potential updating function first. Hence, the data flow will be disturbed again before reaching the firing function. To this end, in this paper, we add another BN after the membrane potential updating function, called the MPBN to retain normalized data flow before the firing function.
## 3 Preliminary
### Leaky Integrate-and-Fire Model
Different from CNNs, SNNs use binary spikes to transmit information. In the paper, we use the widely used Leaky-Integrate-and-Fire (LIF) neuron model [41] to introduce the unique spatial-temporal dynamic of the spiking model. First, we introduce the notation rules used here as follows. Vectors or tensors are denoted by bold italic letters, i.e., \(\mathbf{x}\) and \(\mathbf{o}\) represent the input and output variables respectively. Matrices are denoted by bold capital letters. For instance, \(\mathbf{W}\) is the weight matrix. The constant is denoted by small letters.
In LIF, the membrane potential is updated by
\[\mathbf{u}^{(t+1),\text{pre}}=\tau\mathbf{u}^{(t)}+\mathbf{c}^{(t+1)},\text{ where }\mathbf{c}^{(t+1)}=\mathbf{W}\mathbf{x}^{(t+1)}, \tag{1}\]
where \(\mathbf{u}\) represents the membrane potential and \(\mathbf{u}^{(t+1),\text{pre}}\) is the updated membrane potential at time step \(t+1\), \(\mathbf{c}^{(t+1)}\) is the pre-synaptic input at time step \(t+1\), which is charged by weight-summed input spikes \(\mathbf{x}^{(t+1)}\), and \(\tau\) is a constant within \((0,1)\), which controls the leakage of the membrane potential. Then, when the updated membrane potential \(\mathbf{u}^{(t+1),\text{pre}}\) is up to the firing threshold \(V_{\text{th}}\), the LIF spiking neuron will fire a spike as bellow,
\[\mathbf{o}^{(t+1)}=\begin{cases}1&\text{if }\mathbf{u}^{(t+1),\text{pre}}>V_{ \text{th}}\\ 0&\text{otherwise}\end{cases}, \tag{2}\] \[\mathbf{u}^{(t+1)}=\mathbf{u}^{(t+1),\text{pre}}\cdot(1-\mathbf{o}^{(t+1)}).\]
After firing, the spike output \(\mathbf{o}^{(t+1)}\) at time step \(t+1\) will be transmitted to the next layer and become its input. At the same time, the updated membrane potential will be reset to zero and becomes \(\mathbf{u}^{(t+1)}\) to join the neuron processing at the next time step.
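For reference in the following discussion of where normalization enters the neuron, a minimal PyTorch-style sketch of the LIF update of Eqs. (1) and (2) is given below. The fully connected form, the threshold value \(V_{\mathrm{th}}=1\), and the omission of the surrogate gradient are simplifying assumptions of this sketch.

```python
# Minimal sketch of one LIF time step following Eqs. (1)-(2); hyper-parameters are illustrative.
import torch
import torch.nn.functional as F

def lif_step(x_t, u_prev, weight, tau=0.25, v_th=1.0):
    """x_t: input spikes at time step t+1; u_prev: membrane potential from time step t."""
    c_t = F.linear(x_t, weight)              # pre-synaptic input c^(t+1) = W x^(t+1)
    u_pre = tau * u_prev + c_t               # membrane potential update, Eq. (1)
    o_t = (u_pre > v_th).float()             # firing function, Eq. (2)
    u_t = u_pre * (1.0 - o_t)                # hard reset of the neurons that fired
    return o_t, u_t
```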
**The Classifier in the SNN.** In a classification model, the final output is used to compute the \(\operatorname{Softmax}\) and predict the desired class object. In an SNN model, if we also use LIF neurons at the output layer to fire spikes and use the number of spikes to compute the probability, too much information will be lost. Therefore, we only integrate the output and do not fire them across time, as doing in recent work [20, 21, 12].
\[\mathbf{o}_{\text{out}}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{c}_{\text{out}}^{(t)}=\frac{ 1}{T}\sum_{t=1}^{T}\mathbf{W}\mathbf{x}^{(t)}. \tag{3}\]
Then, the cross-entropy loss is computed based on the true label and \(\operatorname{Softmax}(\mathbf{o}_{\text{out}})\).
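A sketch of this readout, implementing Eq. (3) followed by the Softmax-based cross-entropy loss, might look as follows; the function and variable names are illustrative.

```python
# Sketch of the output layer of Eq. (3): average the weighted inputs over the T time steps.
import torch
import torch.nn.functional as F

def snn_readout_loss(x_seq, weight, labels):
    """x_seq: list of T input tensors to the output layer; labels: ground-truth class indices."""
    o_out = torch.stack([F.linear(x_t, weight) for x_t in x_seq], dim=0).mean(dim=0)
    return F.cross_entropy(o_out, labels)    # Softmax + cross-entropy on the averaged output
```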
### Batch Normalization in SNNs
Batch normalization can effectively reduce the internal covariate shift and alleviate the gradient vanishing or explosion problem when training networks, thus having been widely used in CNNs. Fortunately, BN can also be used in SNNs. Considering a spiking neuron with input \(\mathbf{c}=\{\mathbf{c}^{(1)},\mathbf{c}^{(2)},\dots,\mathbf{c}^{(t)},\dots\}\), where \(t\) is the time step, BN regulates the input at each time step as follows,
\[\tilde{\mathbf{c}}_{i}^{(t)}=\frac{\mathbf{c}_{i}^{(t)}-\mathbf{\mu}_{i}}{\sqrt{\mathbf{ \sigma}_{i}^{2}+\epsilon}}, \tag{4}\]
where \(\mathbf{c}_{i}^{(t)}\) is the input in \(i\)-th channel at \(t\)-th time step, \(\mathbf{\mu}_{i}\) and \(\mathbf{\sigma}_{i}\) are the mean and variance of input in channel dimension, and \(\epsilon\) is a small constant to avoid denominator being zero. To ensure BN can represent the identity transformation, the normalized vector \(\tilde{\mathbf{c}}_{i}^{(t)}\) is scaled and shifted in a learning manner as follows,
\[\operatorname{BN}(\mathbf{c}_{i}^{(t)})=\mathbf{\lambda}_{i}\tilde{\mathbf{c}}_{i}^{(t)}+ \mathbf{\beta}_{i}, \tag{5}\]
where \(\mathbf{\lambda}_{i}\) and \(\mathbf{\beta}_{i}\) are channel-wise learnable parameters.
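As a brief illustration of Eqs. (4)-(5), the same channel-wise BatchNorm layer (whose affine parameters play the role of \(\mathbf{\lambda}_{i}\) and \(\mathbf{\beta}_{i}\)) can be applied to the pre-synaptic input at every time step; the shapes below are assumptions, and whether statistics are pooled over time is an implementation choice not fixed by the equations.

```python
import torch
import torch.nn as nn

# Minimal sketch (shapes assumed): the same channel-wise BatchNorm is applied
# to the pre-synaptic input c^(t) = W x^(t) at every time step, so the affine
# parameters (lambda_i, beta_i) are shared across time.
conv = nn.Conv2d(3, 16, 3, padding=1)
bn = nn.BatchNorm2d(16)                      # bn.weight = lambda_i, bn.bias = beta_i

x_seq = torch.rand(4, 2, 3, 32, 32)          # (T, B, C, H, W) input spikes
c_seq = torch.stack([bn(conv(x_t)) for x_t in x_seq])   # BN(W x^(t)) per time step
```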
## 4 Methodology
This section first introduces the specific form of membrane potential batch normalization. Then, the re-parameterization technique that folds the MPBN into \(V_{\mathrm{th}}\) is described in detail. Next, we give the key details for training the SNN and the pseudocode for the training and inference of our SNN. Finally, we provide extensive ablation studies and compare the loss landscapes of models with and without MPBN to show the effectiveness of the proposed method.
### Membrane Potential Batch Normalization
As mentioned above, we argue that although the data flow has been normalized by the BN after the convolution layer, it is disturbed again by the membrane potential updating operation. To better depict this, we first give the vanilla form of the LIF neuron with BN as follows,
\[\mathbf{u}^{(t+1),\text{pre}}=\tau\mathbf{u}^{(t)}+\mathrm{BN}(\mathbf{W}\mathbf{x}^{(t+1 )}), \tag{6}\]
where \(\tau\) is set to 0.25 in this paper following [21, 38, 4]. To regulate the disturbed data flow once again, we further embed another BN after the membrane potential updating operation, called MPBN. The membrane potential of the LIF neuron with MPBN is then updated as
\[\tilde{\mathbf{u}}^{(t+1),\text{pre}}=\mathrm{MPBN}(\mathbf{u}^{(t+1),\text{pre}}). \tag{7}\]
Obviously, \(\mathbf{u}^{(t+1),\text{pre}}\) will be scaled and shifted, so some values of \(\mathbf{u}^{(t+1),\text{pre}}\) below \(V_{\mathrm{th}}\) may exceed \(V_{\mathrm{th}}\) after MPBN and vice versa. This conflicts with the biological interpretation, and MPBN also incurs an extra computation burden at inference compared with the vanilla neuron. To solve this problem, we further propose a training-inference-decoupled re-parameterization technique.
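A minimal sketch of this placement (Eqs. 6-7) is shown below; the layer sizes are illustrative, and resetting the raw (pre-MPBN) potential is an assumption that is consistent with the inference-time folding described next.

```python
import torch
import torch.nn as nn

# Sketch of the MPBN placement: the usual BN follows the convolution, and a
# second channel-wise BN (MPBN) is applied to the updated membrane potential
# before the firing comparison.
conv, bn = nn.Conv2d(16, 16, 3, padding=1), nn.BatchNorm2d(16)
mpbn = nn.BatchNorm2d(16)                     # the proposed MPBN
tau, v_th = 0.25, 0.5

def lif_mpbn_step(u, x):
    u_pre = tau * u + bn(conv(x))             # Eq. (6)
    u_tilde = mpbn(u_pre)                     # Eq. (7)
    o = (u_tilde > v_th).float()              # firing compares the normalized potential
    return o, u_pre * (1.0 - o)               # reset applied to the raw potential (assumed)
```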
### Re-parameterization
With MPBN, the firing function will be updated as
\[\mathbf{o}^{(t+1)}=\begin{cases}1&\text{if }\mathrm{MPBN}(\mathbf{u}^{(t+1),\text{pre}})>V _{\mathrm{th}}\\ 0&\text{otherwise}\end{cases}. \tag{8}\]
If we unfold the MPBN, the above equation will be re-organized as
\[\mathbf{o}_{i}^{(t+1)}=\begin{cases}1&\text{if }\mathbf{\lambda}_{i}\frac{\mathbf{u}_{i}^{(t+1), \text{pre}}-\mathbf{\mu}_{i}}{\sqrt{\mathbf{\sigma}_{i}^{2}}}+\mathbf{\beta}_{i}>V_{ \mathrm{th}}\\ 0&\text{otherwise}\end{cases}. \tag{9}\]
By folding the MPBN to \(V_{\mathrm{th}}\), the firing function will be further updated as
\[\mathbf{o}_{i}^{(t+1)}=\begin{cases}1&\text{if }\mathbf{u}^{(t+1),\text{pre}}>( \mathbf{\tilde{V}}_{\mathrm{th}})_{i}\\ 0&\text{otherwise}\end{cases}, \tag{10}\] \[\text{where }(\mathbf{\tilde{V}}_{\mathrm{th}})_{i}=\frac{(V_{ \mathrm{th}}-\mathbf{\beta}_{i})\sqrt{\mathbf{\sigma}_{i}^{2}}}{\mathbf{\lambda}_{i}}+\bm {\mu}_{i}.\]
It can be seen that by absorbing the MPBN parameters, \(V_{\mathrm{th}}\) is transformed into a channel-wise threshold \((\mathbf{\tilde{V}}_{\mathrm{th}})_{i}\). In this way, the extra computation burden caused by MPBN is eliminated at inference time. Furthermore, the diversity of the spiking neuron is improved by the richer firing parameters, similar to the learnable firing thresholds in other work [3, 49].
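A sketch of this folding step, assuming the MPBN is realized as a standard channel-wise BatchNorm over the membrane potential, is given below; keeping the BN \(\epsilon\) term is a small numerical-safety deviation from Eq. (10).

```python
import torch
import torch.nn as nn

# Fold the trained MPBN statistics and affine parameters into a channel-wise
# firing threshold (Eqs. 9-10), so inference keeps the plain comparison
# u_pre > V_th_tilde with no extra normalization cost.
@torch.no_grad()
def fold_mpbn_into_threshold(mpbn: nn.BatchNorm2d, v_th: float = 0.5) -> torch.Tensor:
    lam, beta = mpbn.weight, mpbn.bias                 # lambda_i, beta_i
    mu, var = mpbn.running_mean, mpbn.running_var      # mu_i, sigma_i^2
    return (v_th - beta) * torch.sqrt(var + mpbn.eps) / lam + mu

# inference then reduces to:
# o = (u_pre > v_th_tilde.view(1, -1, 1, 1)).float()
```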
### Training Framework
In the paper, the spatial-temporal backpropagation (STBP) algorithm [51] is adopted to train the SNN models. STBP treats the SNN model as a self-recurrent neural network thus enabling an error backpropagation mechanism following the same principles as in CNNs. However, there is still a problem impeding the direct training of SNNs. To demonstrate this problem, we formulate the gradient at the layer \(l\) by the chain rule, given by
\[\frac{\partial L}{\partial\mathbf{W}^{l}}=\sum_{t}(\frac{\partial L}{\partial \mathbf{\boldsymbol{o}}^{(t),l}}\frac{\partial\mathbf{\boldsymbol{o}}^{(t),l}} {\partial\mathbf{\boldsymbol{u}}^{(t),l}}+\frac{\partial L}{\partial\mathbf{ \boldsymbol{u}}^{(t+1),l}}\frac{\partial\mathbf{\boldsymbol{u}}^{(t+1),l}}{ \partial\mathbf{\boldsymbol{u}}^{(t),l}})\frac{\partial\mathbf{\boldsymbol{u}}^ {(t),l}}{\partial\mathbf{\boldsymbol{W}}^{l}}, \tag{11}\]
where \(\frac{\partial\mathbf{\boldsymbol{o}}^{(t),l}}{\partial\mathbf{\boldsymbol{u}}^{(t),l}}\) is the gradient of the firing function at the \(t\)-th time step in the \(l\)-th layer. Obviously, the non-differentiable firing activity of the spiking neuron results in zero gradients everywhere except at \(V_{\mathrm{th}}\), where the gradient is infinite. Therefore, the gradient descent update \((\mathbf{\boldsymbol{W}}^{l}\leftarrow\mathbf{\boldsymbol{W}}^{l}-\eta\frac{\partial L}{\partial\mathbf{\boldsymbol{W}}^{l}})\) either freezes or diverges during backpropagation. To handle this problem, we adopt the commonly used STE surrogate gradient, as done in other surrogate gradient (SG) methods [44, 20]. Mathematically, it is defined as:
\[\frac{d\mathbf{\boldsymbol{o}}}{d\mathbf{\boldsymbol{u}}}=\left\{\begin{array} []{ll}1,&\text{if }0\leq\mathbf{\boldsymbol{u}}\leq 1\\ 0,&\text{otherwise}\end{array}\right.. \tag{12}\]
Then, the SNN model can be trained end-to-end. The training and inference of our SNN are detailed in Algo. 1.
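A possible PyTorch realization of the firing function with the STE surrogate gradient of Eq. (12) is sketched below; treating \(V_{\mathrm{th}}\) as a non-learnable scalar here is an assumption, not the exact implementation.

```python
import torch

# Forward pass: Heaviside step at V_th.  Backward pass: gradients pass through
# only where the membrane potential lies in [0, 1] (Eq. 12).
class SpikeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, u_pre, v_th):
        ctx.save_for_backward(u_pre)
        return (u_pre > v_th).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u_pre,) = ctx.saved_tensors
        pass_mask = ((u_pre >= 0.0) & (u_pre <= 1.0)).float()   # do/du in Eq. (12)
        return grad_out * pass_mask, None                       # no gradient for V_th

spike_fn = SpikeSTE.apply        # usage: o = spike_fn(u_pre, v_th)
```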
### Ablation Study
To verify the effectiveness of the MPBN, extensive ablation studies using the spiking ResNet20 architecture with different time steps were conducted on the CIFAR-10 and CIFAR-100 datasets. The top-1 accuracy of these models is shown in Tab. 1. It can be seen that the test accuracy of the SNNs with MPBN is consistently higher than that of their vanilla counterparts. For example, the accuracy of the baseline SNN with 1 time step is 90.40%, while with MPBN it increases to 92.22%, an improvement of nearly 2%, which is substantial in the SNN field. Moreover, we show the test accuracy curves of ResNet20 with and without MPBN using 2 time steps on CIFAR-10/100 during training in Fig. 2. It can be clearly observed that the SNNs with MPBN also converge faster. In summary, the proposed MPBN improves both accuracy and convergence speed, which are important aspects in deep learning.
### Loss Landscape
We further inspect the 1D loss landscapes [36] of the SNNs with and without MPBN using the spiking ResNet20 architecture with 2 time steps to show why MPBN improves accuracy and convergence speed; see Fig. 3. It can be observed that the loss landscape of the SNN model with MPBN is flatter than that of the SNN without MPBN. This indicates that MPBN makes the landscape of the corresponding optimization problem smoother [36], thus making the gradients more predictive and the network converge faster. These results provide convincing evidence for the ablation studies in Section 4.4.
## 5 Experiments
In this section, abundant experiments were conducted to verify the effectiveness of the MPBN using widely-used spiking ResNet20 [44, 46], VGG16 [44], ResNet18 [11],
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & Method & Time step & Accuracy \\ \hline \multirow{5}{*}{CIFAR-10} & baseline & 1 & 90.40\% \\ & w/ MPBN & 1 & 92.22\% \\ \cline{2-4} & baseline & 2 & 92.80\% \\ & w/ MPBN & 2 & 93.54\% \\ \cline{2-4} & baseline & 4 & 93.85\% \\ & w/ MPBN & 4 & 94.28\% \\ \hline \multirow{5}{*}{CIFAR-100} & baseline & 1 & 67.94\% \\ & w/ MPBN & 1 & 68.36\% \\ \cline{1-1} \cline{2-4} & baseline & 2 & 70.18\% \\ \cline{1-1} \cline{2-4} & w/ MPBN & 2 & 70.79\% \\ \cline{1-1} \cline{2-4} & baseline & 4 & 71.77\% \\ \cline{1-1} \cline{2-4} & w/ MPBN & 4 & 72.30\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation experiments for MPBN.
Figure 3: The 1D loss landscape of spiking ResNet20 with and without MPBN.
Figure 2: The accuracy curves of spiking ResNet20 with or without MPBN using 2 time steps on CIFAR-10 (left) and CIFAR-100 (right). The MPBN based SNNs obviously enjoy higher accuracy and easier convergence.
ResNet19 [62], and ResNet34 [11] on both static datasets, including CIFAR-10 [32], CIFAR-100 [32], and ImageNet [6], and one neuromorphic dataset, CIFAR10-DVS [35]. The specific introduction to these datasets has been detailed in many recent works [62, 44, 21, 38]. Here, we mainly describe the hyper-parameters and data preprocessing. We use the widely adopted LIF neuron in our SNN models, as in other direct-training works [44, 46]. The LIF hyper-parameters, namely the initial firing threshold \(V_{\mathrm{th}}\) and the membrane potential decaying constant \(\tau_{\mathrm{decay}}\), are \(0.5\) and \(0.25\), respectively. For static image datasets, since encoding the 8-bit RGB images into 1-bit spikes would lose too much information, we use an ANN-like convolutional layer followed by a LIF layer to encode the images into spikes for all the remaining layers, as in recent works [62, 44, 21, 38].
### Comparison with SoTA Methods
**CIFAR-10.** On CIFAR-10, we trained our SNN model using the SGD optimizer with a momentum of 0.9. The initial learning rate is 0.1 and decays to 0 following a cosine schedule. The total training lasts 400 epochs. To compare fairly with the recent SoTA methods [38, 20, 16], we also adopt data normalization, random horizontal flipping, cropping, and cutout [8] for data augmentation. We run each experiment three times and report the "mean \(\pm\) std" in Tab. 2. It can be seen that our models outperform other methods over all these widely adopted architectures with
\begin{table}
\begin{tabular}{l l l l c c} \hline \hline Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline \multirow{8}{*}{Diet-SNN [44]} & SpikeNorm [46] & ANN2SNN & VGG16 & 2500 & 91.55\% \\ & Hybrid-Train [45] & Hybrid training & VGG16 & 200 & 92.02\% \\ & Spike-basedBP [34] & SNN training & ResNet11 & 100 & 90.95\% \\ & Joint A-SNN [19] & SNN training & ResNet18 & 4 & 95.45\% \\ & GLIF [60] & SNN training & ResNet19 & 2 & 94.44\% \\ & PLIF [12] & SNN training & PLIFNet & 8 & 93.50\% \\ \hline \multirow{4}{*}{Diet-SNN [44]} & \multirow{2}{*}{SNN training} & VGG16 & 5 & 92.70\% \\ & & & 10 & 93.44\% \\ \cline{3-5} & & ResNet20 & 5 & 91.78\% \\ & & & 10 & 92.54\% \\ \hline \multirow{4}{*}{RecDis-SNN [20]} & \multirow{2}{*}{SNN training} & \multirow{2}{*}{ResNet19} & 2 & 93.64\% \\ & & & 4 & 95.53\% \\ & & & 6 & 95.55\% \\ \hline \multirow{4}{*}{CIFAR-10} & \multirow{2}{*}{SNN training} & \multirow{2}{*}{ResNet20} & 2 & 93.13\% \\ & & & 4 & 93.66\% \\ & & & 6 & 94.25\% \\ \hline \multirow{4}{*}{CIFAR-10} & \multirow{2}{*}{STBP-tdBN [62]} & \multirow{2}{*}{SNN training} & 2 & 92.34\% \\ & & & 4 & 92.92\% \\ & & & 6 & 93.16\% \\ \hline \multirow{4}{*}{TET [7]} & \multirow{2}{*}{SNN training} & \multirow{2}{*}{ResNet19} & 2 & 94.16\% \\ & & & 4 & 94.44\% \\ & & & 6 & 94.50\% \\ \hline \multirow{4}{*}{Real Spike [21]} & \multirow{2}{*}{SNN training} & \multirow{2}{*}{ResNet19} & 2 & 95.31\% \\ & & & 4 & 95.51\% \\ & & & 6 & 96.10\% \\ \hline \multirow{4}{*}{InfLoR-SNN [16]} & \multirow{2}{*}{SNN training} & ResNet20 & 5 & 93.01\% \\ & & & 10 & 93.65\% \\ \hline \multirow{4}{*}{**MPBN**} & \multirow{4}{*}{SNN training} & ResNet19 & 1 & **96.06\%\(\pm 0.10\)** \\ & & & 2 & **96.47\%\(\pm 0.08\)** \\ \cline{1-1} \cline{3-5} & & & 1 & **92.22\%\(\pm 0.11\)** \\ \cline{1-1} \cline{3-5} & & & 2 & **93.54\%\(\pm 0.09\)** \\ \cline{1-1} \cline{3-5} & & & 4 & **94.28\%\(\pm 0.07\)** \\ \cline{1-1} \cline{3-5} & & VGG16 & 2 & **93.96\%\(\pm 0.09\)** \\ \cline{1-1} \cline{3-5} & & & 4 & **94.44\%\(\pm 0.08\)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison with SoTA methods on CIFAR-10.
fewer time steps. For example, the accuracy of spiking ResNet19 trained with MPBN reaches 96.06% with only 1 time step, while Real Spike [21] needs 6 time steps to reach a comparable result and RecDis-SNN [61] still underperforms it by 0.51% even with 6 time steps. This superiority can also be observed in the results for spiking ResNet20 and VGG16.
**CIFAR-100.** For CIFAR-100, we adopted the same settings as for CIFAR-10. The proposed MPBN also performs well on CIFAR-100: as shown in Tab. 3, our method achieves the best accuracy over all these networks even with fewer time steps. For instance, the ResNet19 trained with MPBN
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline \multirow{8}{*}{ImageNet} & STBP-tdBN [62] & SNN training & ResNet34 & 6 & 63.72\% \\ & TET [7] & SNN training & ResNet34 & 6 & 64.79\% \\ & MS-ResNet [27] & SNN training & ResNet18 & 6 & 63.10\% \\ & OTTT [54] & SNN training & ResNet34 & 6 & 63.10\% \\ & Real Spike [21] & SNN training & ResNet18 & 4 & 63.68\% \\ \cline{2-6} & SEW ResNet [11] & SNN training & ResNet18 & 4 & 63.18\% \\ & & ResNet34 & 4 & 67.04\% \\ \cline{2-6} & **MPBN** & SNN training & ResNet18 & 4 & **63.14\%\(\pm 0.08\)** \\ & & ResNet34 & 4 & **64.71\%\(\pm 0.09\)** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison with SoTA methods on ImageNet.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline \multirow{8}{*}{ImageNet} & SpikeNorm [46] & ANN2SNN & ResNet20 & 2500 & 64.09\% \\ & RMP [23] & ANN2SNN & ResNet20 & 2048 & 67.82\% \\ & Hybrid-Train [45] & Hybrid training & VGG11 & 125 & 67.90\% \\ & IM-Loss [15] & SNN training & VGG16 & 5 & 70.18\% \\ \cline{2-6} & Joint A-SNN [19] & SNN training & ResNet18 & 4 & 77.39\% \\ & & ResNet34 & 4 & 79.76\% \\ \hline \multirow{8}{*}{Dspike [38]} & \multirow{2}{*}{SNN training} & \multirow{2}{*}{ResNet20} & 2 & 71.68\% \\ & & & & 4 & 73.35\% \\ & & & & 6 & 74.24\% \\ \hline \multirow{8}{*}{CIFAR-100} & \multirow{8}{*}{TET [7]} & \multirow{8}{*}{SNN training} & \multirow{2}{*}{ResNet19} & 2 & 72.87\% \\ & & & & 4 & 74.47\% \\ & & & & 6 & 74.72\% \\ \cline{1-1} \cline{2-6} & RecDis-SNN [20] & SNN training & ResNet19 & 4 & 74.10\% \\ & & VGG16 & 5 & 69.88\% \\ \cline{1-1} \cline{2-6} & InfLoR-SNN [16] & SNN training & ResNet20 & 5 & 71.19\% \\ & & VGG16 & 5 & 71.56\% \\ \cline{1-1} \cline{2-6} & Real Spike [21] & SNN training & ResNet20 & 5 & 66.60\% \\ & & VGG16 & 5 & 70.62\% \\ \cline{1-1} \cline{2-6} & GLIF [60] & SNN training & ResNet19 & 2 & 75.48\% \\ & & & & 4 & 77.05\% \\ \hline \multirow{3}{*}{TEBN [10]} & \multirow{3}{*}{SNN training} & \multirow{3}{*}{ResNet19} & 2 & 75.86\% \\ & & & & 4 & 76.13\% \\ \cline{1-1} & & & & 6 & 76.41\% \\ \hline \multirow{3}{*}{**MPBN**} & \multirow{3}{*}{SNN training} & \multirow{3}{*}{ResNet19} & \multirow{3}{*}{1} & **74.74\%\(\pm 0.11\)** \\ & & & & **78.71\%\(\pm 0.10\)** \\ & & & & 2 & **79.51\%\(\pm 0.07\)** \\ \cline{1-1} \cline{2-6} & & ResNet20 & 2 & **70.79\%\(\pm 0.08\)** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison with SoTA methods on CIFAR-100.
can achieve 78.71% top-1 accuracy with only 1 time step, outperforming other SoTA methods such as TET, GLIF, TEBN, and RecDis-SNN, even when they use 4 or 6 time steps, by about 1.66%-3.99%.
**ImageNet.** On ImageNet, we used standard data normalization, random horizontal flipping, and cropping for data augmentation, and trained the networks for 320 epochs as in [11]. The optimizer settings are the same as for the CIFAR datasets. The results for ImageNet are presented in Tab. 4. It can be seen that the accuracy of our method is better than that of these recent SoTA methods, and only slightly lower than SEW ResNet [11] for spiking ResNet34. However, SEW ResNet is not a typical SNN model: it adopts an activation-before-addition ResNet form, and its blocks fire positive integer spikes. In this way, the event-driven and multiplication-to-addition advantages of SNNs are lost, whereas we adopt the original ResNet, which fires standard binary spikes.
**CIFAR10-DVS.** We also evaluate on the neuromorphic dataset CIFAR10-DVS to verify the effectiveness of the MPBN. We split the dataset into 9K training images and 1K test images and resize them to \(48\times 48\) for data augmentation, as in [51, 21]. The learning rate is 0.01, and the other settings are the same as for CIFAR-10. As shown in Tab. 5, MPBN also shows its superiority on this dataset.
### Extension of the MPBN
In CNNs, the most widely used BN is channel-wise. This is because element-wise BN is very time-consuming and cannot be folded into the weights without destroying the channel-wise weight-sharing mechanism. However, MPBN is folded into the firing threshold, and the firing threshold need not be the same across channels; therefore, MPBN can freely use the element-wise form. In this way, \(V_{\mathrm{th}}\) is transformed into element-wise thresholds as follows,
\[(\mathbf{\tilde{V}}_{\mathrm{th}})_{i,j,k}=\frac{(V_{\mathrm{th}}-\mathbf{\beta}_{i,j, k})\sqrt{\mathbf{\sigma}_{i,j,k}^{2}}}{\mathbf{\lambda}_{i,j,k}}+\mathbf{\mu}_{i,j,k}, \tag{13}\]
where \((\mathbf{\tilde{V}}_{\mathrm{th}})_{i,j,k}\) is the transformed firing threshold of the neuron in the \(i\)-th channel at spatial position \((j,k)\). To investigate the performance of the element-wise MPBN, we also compare the vanilla MPBN with this extension. The top-1 accuracy of the spiking ResNet20 with 4 time steps on the CIFAR datasets is shown in Tab. 6. Though both versions perform well, the element-wise MPBN is slightly better than the channel-wise MPBN. This may be because element-wise MPBN can learn more firing threshold values, which implies a richer representation ability for SNNs.
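One possible realization of this element-wise variant is sketched below: running statistics and affine parameters are kept per \((C,H,W)\) element and folded into the element-wise threshold of Eq. (13); the shapes and the \(\epsilon\) term are assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class ElementWiseMPBN(nn.Module):
    """Element-wise MPBN sketch: statistics and affine parameters per (C, H, W) element."""
    def __init__(self, shape, eps=1e-5, momentum=0.1):      # shape = (C, H, W)
        super().__init__()
        self.eps, self.momentum = eps, momentum
        self.weight = nn.Parameter(torch.ones(shape))        # lambda_{i,j,k}
        self.bias = nn.Parameter(torch.zeros(shape))         # beta_{i,j,k}
        self.register_buffer("running_mean", torch.zeros(shape))
        self.register_buffer("running_var", torch.ones(shape))

    def forward(self, u):                                    # u: (B, C, H, W)
        if self.training:
            mean, var = u.mean(dim=0), u.var(dim=0, unbiased=False)
            with torch.no_grad():                            # update running statistics
                self.running_mean.lerp_(mean.detach(), self.momentum)
                self.running_var.lerp_(var.detach(), self.momentum)
        else:
            mean, var = self.running_mean, self.running_var
        return self.weight * (u - mean) / torch.sqrt(var + self.eps) + self.bias

def fold_elementwise_threshold(mpbn: ElementWiseMPBN, v_th: float = 0.5) -> torch.Tensor:
    """Eq. (13): fold the trained element-wise MPBN into an element-wise threshold."""
    std = torch.sqrt(mpbn.running_var + mpbn.eps)
    return (v_th - mpbn.bias) * std / mpbn.weight + mpbn.running_mean
```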
## 6 Conclusion
In this paper, we advocated adding the MPBN before the firing function to regulate the disturbed data flow once again. We also provided a training-inference-decoupled re-parameterization technique to fold the trained MPBN into the firing threshold, eliminating the extra inference-time burden induced by MPBN. Furthermore, channel-wise and element-wise MPBN at different granularities were explored. Extensive experiments verified that the proposed MPBN consistently achieves good performance.
## Acknowledgment
This work is supported by grants from the National Natural Science Foundation of China under contracts No.12202412 and No.12202413.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & Method & Time step & Accuracy \\ \hline \multirow{2}{*}{CIFAR-10} & channel-wised & 4 & 94.28\% \\ & element-wised & 4 & 94.42\% \\ \hline \multirow{2}{*}{CIFAR-100} & channel-wised & 4 & 72.30\% \\ & element-wised & 4 & 72.49\% \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of channel-wise and element-wise MPBN.
\begin{table}
\begin{tabular}{l l l l c c} \hline \hline Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline \multirow{8}{*}{CIFAR10-DVS} & Rollout [33] & Rollout & DenseNet & 10 & 66.80\% \\ & LIAF-Net [53] & Conv3D & LIAF-Net & 10 & 71.70\% \\ & LIAF-Net [53] & LIAF & LIAF-Net & 10 & 70.40\% \\ & STBP-tdBN [62] & SNN training & ResNet19 & 10 & 67.80\% \\ & RecDis-SNN [20] & SNN training & ResNet19 & 10 & 72.42\% \\ \cline{2-6} & Real Spike [21] & SNN training & ResNet19 & 10 & 72.85\% \\ & & ResNet20 & 10 & 78.00\% \\ \cline{2-6} & **MPBN** & SNN training & ResNet19 & 10 & **74.40\%\(\pm 0.20\)** \\ & & ResNet20 & 10 & **78.70\%\(\pm 0.10\)** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison with SoTA methods on CIFAR10-DVS.
|
2303.12073
|
3D Mitochondria Instance Segmentation with Spatio-Temporal Transformers
|
Accurate 3D mitochondria instance segmentation in electron microscopy (EM) is
a challenging problem and serves as a prerequisite to empirically analyze their
distributions and morphology. Most existing approaches employ 3D convolutions
to obtain representative features. However, these convolution-based approaches
struggle to effectively capture long-range dependencies in the volume
mitochondria data, due to their limited local receptive field. To address this,
we propose a hybrid encoder-decoder framework based on a split spatio-temporal
attention module that efficiently computes spatial and temporal self-attentions
in parallel, which are later fused through a deformable convolution. Further,
we introduce a semantic foreground-background adversarial loss during training
that aids in delineating the region of mitochondria instances from the
background clutter. Our extensive experiments on three benchmarks, Lucchi,
MitoEM-R and MitoEM-H, reveal the benefits of the proposed contributions
achieving state-of-the-art results on all three datasets. Our code and models
are available at https://github.com/OmkarThawakar/STT-UNET.
|
Omkar Thawakar, Rao Muhammad Anwer, Jorma Laaksonen, Orly Reiner, Mubarak Shah, Fahad Shahbaz Khan
|
2023-03-21T17:58:49Z
|
http://arxiv.org/abs/2303.12073v1
|
# 3D Mitochondria Instance Segmentation with Spatio-Temporal Transformers
###### Abstract
Accurate 3D mitochondria instance segmentation in electron microscopy (EM) is a challenging problem and serves as a prerequisite to empirically analyze their distributions and morphology. Most existing approaches employ 3D convolutions to obtain representative features. However, these convolution-based approaches struggle to effectively capture long-range dependencies in the volume mitochondria data, due to their limited local receptive field. To address this, we propose a hybrid encoder-decoder framework based on a split spatio-temporal attention module that efficiently computes spatial and temporal self-attentions in parallel, which are later fused through a deformable convolution. Further, we introduce a semantic foreground-background adversarial loss during training that aids in delineating the region of mitochondria instances from the background clutter. Our extensive experiments on three benchmarks, Lucchi, MitoEM-R and MitoEM-H, reveal the benefits of the proposed contributions achieving state-of-the-art results on all three datasets. Our code and models are available at [https://github.com/OmkarThawakar/STT-UNET](https://github.com/OmkarThawakar/STT-UNET).
Keywords:Mitochondria segmentation Spatio-temporal Transformer
## 1 Introduction
Mitochondria are membrane-bound organelles that generate the primary energy required to power cell activities and are thereby crucial for metabolism. Mitochondrial dysfunction, which occurs when mitochondria are not functioning properly, has been recognized as a major factor in numerous diseases, including noncommunicable chronic diseases (_e.g._, cardiovascular disease and cancer), metabolic disorders (_e.g._, obesity), and neurodegenerative disorders (_e.g._, Alzheimer's and Parkinson's) [19, 21]. Electron microscopy (EM) images are typically utilized to reveal the corresponding 3D geometry and size of mitochondria at a nanometer scale, thereby facilitating basic biological research at finer scales. Therefore, automatic instance segmentation of mitochondria is desired, since manually segmenting a large amount of data is particularly laborious and demanding. However, automatic 3D mitochondria instance segmentation is a challenging task, since the complete shape of
mitochondria can be intricate, and multiple instances can become entangled with each other, resulting in unclear boundaries. Here, we address the problem of accurate 3D mitochondria instance segmentation.
Earlier works on mitochondria segmentation employ standard image processing and machine learning methods [28; 16; 17]. Recent approaches address [22; 3; 12] this problem by leveraging either 2D or 3D deep convolutional neural network (CNNs) architectures. These existing CNN-based approaches can be roughly categorized [30] into bottom-up [12; 3; 24; 11; 2] and top-down [9]. In case of bottom-up mitochondria instance segmentation approaches, a binary segmentation mask, an affinity map or a binary mask with boundary instances is computed typically using a 3D U-Net [4], followed by a post-processing step to distinguish the different instances. On the other hand, top-down methods typically rely on techniques such as Mask R-CNN [6] for segmentation. However, Mask R-CNN based approaches struggle due to undefined bounding-box scale in EM data volume.
As discussed above, most recent approaches for 3D mitochondria instance segmentation utilize convolution-based designs within the "U-shaped" 3D encoder-decoder architecture. In such an architecture, the encoder aims to generate a low-dimensional representation of the 3D data by gradually performing the downsampling of the extracted features. On the other hand, the decoder performs upsampling of these extracted feature representations to the input resolution for segmentation prediction. Although such a CNN-based design has achieved promising segmentation results compared to traditional methods, it struggles to effectively capture long-range dependencies due to its limited local receptive field. Inspired by their success in natural language processing [27], vision transformers (ViTs) [5; 15; 26; 10; 25] have recently been successfully utilized in different computer vision problems due to their capability of modelling long-range dependencies and enabling the model to attend to all the elements in the input sequence. The core component in ViTs is the self-attention mechanism that learns the relationships between sequence elements by estimating the relevance of one item to the other items. Inspired by ViTs and based on the observation that attention-based vision transformer architectures are an intuitive design choice for modelling long-range global contextual relationships in volume data, we investigate designing a CNN-transformer based framework for the task of 3D mitochondria instance segmentation.
When designing an attention-based framework for 3D mitochondria instance segmentation, a straightforward way is to compute joint spatio-temporal self-attention, where all pairwise interactions are modelled between all spatio-temporal tokens. However, such a joint spatio-temporal attention computation is computationally and memory intensive, as the number of tokens increases linearly with the number of input slices in the volume. In this work, we look into an alternative way to compute spatio-temporal attention that captures long-range global contextual relationships without significantly increasing the computational complexity. Our contributions are as follows:
* We propose a hybrid CNN-transformers based encoder-decoder framework, named STT-UNET. The focus of our design is the introduction of a split
spatio-temporal attention (SST) module that captures long-range dependencies within the cubic volume of human and rat mitochondria samples. The SST module independently computes spatial and temporal self-attentions in parallel, which are then later fused through a deformable convolution.
* To accurately delineate the region of mitochondria instances from the cluttered background, we further introduce a semantic foreground-background (FG-BG) adversarial loss during the training that aids in learning improved instance-level features.
* We conduct experiments on three commonly used benchmarks: Lucchi [16], MitoEM-R [30] and MitoEM-H [30]. Our STT-UNET achieves state-of-the-art segmentation performance on all three datasets. On Lucchi test set, our STT-UNET outperforms the recent [3] with an absolute gain of 3.0% in terms of Jaccard-index coefficient. On MitoEM-H val. set, STT-UNET achieves AP-75 score of 0.842 and outperforms the recent 3D Res-UNET [13] by 3.0%. Fig. 1 shows a qualitative comparison between our STT-UNET and 3D Res-UNET [13] on examples from MitoEM-R and MitoEM-H datasets.
## 2 Method
### Baseline Framework
We base our approach on the recent Res-UNET [13], which utilizes encoder-decoder structure of 3D UNET [29] with skip-connections between encoder and
Figure 1: Qualitative 3D instance segmentation comparison between the recent Res-UNET [13] and our proposed STT-UNET approach on the example input regions from MitoEM-H and MitoEM-R validation sets. Here, we present the corresponding segmentation predictions of the baseline and our approach along with the ground truth. Our STT-UNET approach achieves superior segmentation performance by accurately segmenting 16% more cell instances in these examples, compared to Res-UNET-R.
decoder. Here, 3D input patch of mitochondria volume (\(32\times 320\times 320\)) is taken from the entire volume of \((400\times 4096\times 4096)\). The input volume is denoised using an interpolation network adapted for medical images [7]. The denoised volume is then processed utilizing an encoder-decoder structure containing residual anisotropic convolution blocks (ACB). The ACB contains three layers of 3D convolutions with kernels (\(1\times 3\times 3\)), (\(3\times 3\times 3\)), (\(3\times 3\times 3\)) having skip connections between first and third layers. The decoder outputs semantic mask and instance boundary, which are then post-processed using connected component labelling to generate final instance masks. We refer to [13] for more details.
**Limitations:** As discussed above, the recent Res-UNET approach utilizes 3D convolutions to handle the volumetric input data. However, 3D convolutions are designed to encode short-range spatio-temporal feature information and struggle to model global contextual dependencies that extend beyond the designated receptive field. In contrast, the self-attention mechanism within the vision transformers possesses the capabilities to effectively encode both local and global long-range dependencies by directly performing a comparison of feature activations at all the space-time locations. In this way, self-attention mechanism goes much beyond the receptive field of the conventional convolutional filters. While self-attention has been shown to be beneficial when combined with convolutional layers for different medical imaging tasks, to the best of our knowledge, no previous attempt to design spatio-temporal self-attention as an exclusive building block for the problem of 3D mitochondria instance segmentation exists in literature. Next, we present our approach that effectively utilizes an efficient spatio-temporal attention mechanism for 3D mitochondria instance segmentation.
### Spatio-Temporal Transformer Res-UNET (STT-UNET)
Fig. 2(a) presents the overall architecture of the proposed hybrid transformer-CNN based 3D mitochondria instance segmentation approach, named STT-UNET. It comprises a denoising module, a transformer-based encoder-decoder with split spatio-temporal attention, and an instance segmentation block. The denoising module alleviates the segmentation faults caused by anomalies in the EM images, as in the baseline. The denoising is performed by convolving the current frame with two adjacent frames using predicted kernels, and the resultant frame is generated by adding the convolution outputs. The resulting denoised output is then processed by our transformer-based encoder-decoder with split spatio-temporal attention to generate the semantic masks. Consequently, these semantic masks are post-processed by an instance segmentation module using a connected component labelling scheme, thereby generating the final instance-level segmentation output. To further enhance the semantic segmentation quality in the presence of cluttered backgrounds, we introduce a semantic adversarial loss, which leads to improved semantic segmentation in noisy backgrounds.
**Split Spatio-Temporal Attention based Encoder-Decoder:** Our STT-UNET framework comprises four encoder and three decoder layers. Within each layer, we introduce a split spatio-temporal attention-based (SST) module, Fig. 2(b), that strives to capture long-range dependencies within the cubic
volume of human and rat samples. Instead of a memory-expensive joint spatio-temporal representation, our SST module splits the attention computation into a spatial and a temporal parallel stream. The spatial attention refines the instance-level features from the input features along the spatial dimensions, whereas the temporal attention effectively learns the inter-dependencies across the input volume. The resulting spatial and temporal attention representations are combined through a deformable convolution, thereby generating spatio-temporal features. As shown in Fig. 2(b), the normalized 3D input volume of denoised features \(X\) has size (\(T\times H\times W\times C\)), where \(T\) is the volume size, (\(H\times W\)) is the spatial dimension of the volume, and \(C\) is the number of channels. The spatial and temporal attention blocks project \(X\) through linear layers to generate \(Q_{s}\), \(K_{s}\), \(V_{s}\) and \(Q_{t}\), \(K_{t}\), \(V_{t}\). In the temporal attention, \(Q_{t}\), \(K_{t}\), \(V_{t}\) are permuted to generate \(Q_{tp}\), \(K_{tp}\), \(V_{tp}\) for the temporal dot product. The spatial and temporal attention are defined as,
\[X_{s}=softmax(\frac{Q_{s}K_{s}^{T}}{\sqrt{d_{k}}})V_{s} \tag{1}\]
Figure 2: **(a)** Overall architecture of our STT-UNET framework for 3D mitochondria instance segmentation. A 3D volume patch of mitochondria is first pre-processed using the interpolation network. The resulting reconstructed volume is then fed to our split spatio-temporal attention based encoder-decoder to generate the semantic-level mitochondria segmentation masks. The focus of our design is the introduction of split spatio-temporal attention (SST) module within the encoder-decoder. **(b)** The SST module first computes spatial and temporal attentions independently, which are later combined through a deformable convolution. Consequently, the semantic masks from the decoder are then input to the instance segmentation module to generate the final instance masks. The entire framework is trained using the standard BCE loss (\(L_{BCE}\)) and our semantic foreground-background (FG-BG) adversarial loss (\(L_{fg-bg}\)). **(c)** The \(L_{fg-bg}\) loss improves the instance-level features, thereby aiding in the better separability of the region of mitochondria instances from the cluttered background.
\[X_{t}=softmax(\frac{Q_{tp}K_{tp}^{T}}{\sqrt{d_{k}}})V_{tp} \tag{2}\]
Where, \(X_{s}\) is spatial attention map, \(X_{t}\) is temporal attention map and \(d_{k}\) is dimension of \(Q_{s}\) and \(K_{s}\). To fuse spatial and temporal attention maps, \(X_{s}\) and \(X_{t}\), we employ deformable convolution. The deformable convolution generates offsets according to temporal attention map \(X_{t}\) and by using these offsets the spatial attention map \(X_{s}\) is aligned. The deformable fusion is given as,
\[X=\sum_{c=1}^{C}\sum_{k_{n}\in R}W(k_{n})\cdot X_{s}(k_{0}+k_{n}+\Delta K_{n}) \tag{3}\]
where \(C\) is the number of channels, \(X\) is the attention map spatially aligned with respect to \(X_{t}\), \(W\) is the kernel weight matrix, \(X_{s}\) is the spatial attention map, \(k_{0}\) is the starting position of the kernel, \(k_{n}\) enumerates all positions in the kernel grid \(R\), and \(\Delta K_{n}\) is the offset sampled from the temporal attention map \(X_{t}\). We empirically observe that fusing the spatial and temporal features through a deformable convolution, instead of concatenation through a convolutional layer or addition, leads to better performance. The resulting spatio-temporal features of the decoder are then input to the instance segmentation block to generate the final instance masks, as in the baseline.
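A simplified PyTorch sketch of the SST computation described above is given below; single-head attention, channels-last tensors, and a per-slice deformable fusion (with offsets predicted from the temporal map by a plain convolution) are simplifying assumptions rather than the exact layer configuration of the paper.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class SplitSpatioTemporalAttention(nn.Module):
    """Spatial and temporal self-attention in parallel, fused by a deformable conv."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.to_qkv_s = nn.Linear(channels, 3 * channels)   # Q_s, K_s, V_s
        self.to_qkv_t = nn.Linear(channels, 3 * channels)   # Q_t, K_t, V_t
        pad = kernel_size // 2
        self.offset = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                kernel_size, padding=pad)   # offsets from X_t
        self.fuse = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, x):                                   # x: (B, T, H, W, C)
        B, T, H, W, C = x.shape
        scale = C ** -0.5
        # spatial attention: tokens are the H*W positions of every slice (Eq. 1)
        xs = x.reshape(B * T, H * W, C)
        qs, ks, vs = self.to_qkv_s(xs).chunk(3, dim=-1)
        Xs = (torch.softmax(qs @ ks.transpose(-2, -1) * scale, dim=-1) @ vs)
        Xs = Xs.reshape(B, T, H, W, C)
        # temporal attention: tokens are the T slices at every position (Eq. 2)
        xt = x.permute(0, 2, 3, 1, 4).reshape(B * H * W, T, C)
        qt, kt, vt = self.to_qkv_t(xt).chunk(3, dim=-1)
        Xt = (torch.softmax(qt @ kt.transpose(-2, -1) * scale, dim=-1) @ vt)
        Xt = Xt.reshape(B, H, W, T, C).permute(0, 3, 1, 2, 4)
        # deformable fusion (Eq. 3), applied slice by slice
        out = []
        for t in range(T):
            xs_t = Xs[:, t].permute(0, 3, 1, 2)             # (B, C, H, W)
            xt_t = Xt[:, t].permute(0, 3, 1, 2)
            out.append(self.fuse(xs_t, self.offset(xt_t)))
        return torch.stack(out, dim=1).permute(0, 1, 3, 4, 2)   # (B, T, H, W, C)
```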
**Semantic FG-BG Adversarial Loss:** As discussed earlier, a common challenge in mitochondria instance segmentation is to accurately delineate the region of mitochondria instances from the cluttered background. To address this, we introduce a semantic foreground-background (FG-BG) adversarial loss during the training to enhance the FG-BG separability. Here, we introduce the auxiliary discriminator network \(D\) with two layers of 3D convolutions with stride 2 during the training as shown in Fig. 2(c). The discriminator takes the input volume \(I\) along with the corresponding mask as an input. Here, the mask \(M\) is obtained either from the ground truth or predictions, such that all mitochondria instances within a frame are marked as foreground. While the discriminator \(D\) attempts to distinguish between ground truth and predicted masks (\(M_{gt}\) and \(M_{pred}\), respectively), the model \(\Psi\) learns to output semantic mask such that the predicted masks \(M_{pred}\) are close to ground truth \(M_{gt}\). Let \(\mathbf{F}_{gt}=\text{CONCAT}(\mathbf{I},\mathbf{M}_{gt})\) and \(\mathbf{F}_{pr}=\text{CONCAT}(\mathbf{I},\mathbf{M}_{pred})\) denote the real and fake input, respectively, to the discriminator \(D\). Similar to [8], the adversarial loss is then given by,
\[L_{fg-bg}=\min_{\Psi}\max_{D}\Psi[\log D(F_{gt})]+\Psi[\log(1-D(F_{pr}))]+ \lambda_{1}\Psi[D(F_{gt})-D(F_{pr})] \tag{4}\]
Consequently, the overall training loss is \(L=L_{BCE}+\lambda\cdot L_{fg-bg}\), where \(L_{BCE}\) is the BCE loss, \(\lambda=0.5\), and \(L_{fg-bg}\) is the semantic adversarial loss.
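The sketch below illustrates one plausible reading of this objective: a small discriminator with two strided 3D convolutions scores (volume, mask) pairs, and a standard non-saturating GAN loss is used; the additional \(\lambda_{1}\) matching term of Eq. (4) is omitted for brevity, so this is a hedged sketch rather than the exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FGBGDiscriminator(nn.Module):
    """Two strided 3D convolutions scoring a (volume, semantic mask) pair."""
    def __init__(self, in_ch=2):                            # grayscale volume + mask
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 1, 3, stride=2, padding=1),
        )

    def forward(self, volume, mask):                        # both (B, 1, D, H, W)
        return self.net(torch.cat([volume, mask], dim=1)).mean(dim=(1, 2, 3, 4))

def adversarial_losses(D, volume, m_gt, m_pred):
    real = D(volume, m_gt)
    fake = D(volume, m_pred.detach())
    d_loss = F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) + \
             F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake))
    fake_for_g = D(volume, m_pred)
    g_loss = F.binary_cross_entropy_with_logits(fake_for_g, torch.ones_like(fake_for_g))
    return d_loss, g_loss       # segmentation model is trained with L_BCE + 0.5 * g_loss
```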
## 3 Experiments
**Dataset:** We evaluate our approach on three datasets: MitoEM-R [30], MitoEM-H [30] and Lucchi [18]. The MitoEM [30] is a dense mitochondria instance segmentation dataset from ISBI 2021 challenge. The dataset consists of 2 EM image
volumes (\(30~{}\mu m^{3}\)) with a voxel resolution of \(8\times 8\times 30\) nm, from rat tissue (MitoEM-R) and human tissue (MitoEM-H) samples, respectively. Each volume has 1000 grayscale images of mitochondria at a resolution of (\(4096\times 4096\)), of which 400 form the training set, 100 the validation set, and 500 the test set. Lucchi [18] is a sparse mitochondria semantic segmentation dataset with training and test volumes of size \(165\times 1024\times 768\).
**Implementation Details:** We implement our approach using PyTorch 1.9 [23] (ROCm environment), and the models are trained on 2 AMD MI250X GPUs. During training on MitoEM, for a fair comparison, we adopt the same data augmentation techniques as [30]. A 3D patch of size (\(32\times 320\times 320\)) is input to the model, which is trained with a batch size of 2. The model is optimized with the Adam optimizer with a learning rate of \(1e^{-4}\). Unlike the baseline [13], we do not use multi-scale training and perform single-stage training for 200k iterations. For Lucchi, we follow the training details of [13, 30] for semantic segmentation. For a fair comparison with previous works, we use the same evaluation metrics as in the literature for both datasets: the 3D AP-75 metric [30] for the MitoEM-R and MitoEM-H datasets, and the Jaccard-index coefficient (Jaccard) and dice similarity coefficient (DSC) for Lucchi.
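For completeness, a minimal sketch of the two semantic-segmentation metrics used for Lucchi is given below (binary masks assumed; the instance-level 3D AP-75 used for MitoEM is not reproduced here).

```python
import numpy as np

def jaccard_and_dsc(pred, gt):
    """Jaccard index and Dice similarity coefficient for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    jaccard = inter / union
    dsc = 2 * inter / (pred.sum() + gt.sum())
    return jaccard, dsc
```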
when introducing our semantic foreground-background adversarial loss. Our final approach achieves absolute gains of 3.7% and 2.6% over the baseline on MitoEM-R and MitoEM-H, respectively. Tab. 4 shows an ablation study of the feature fusion strategies in our SST module: addition, concatenation, and deformable convolution. The best results are obtained with the deformable convolution on both datasets. For encoding spatial and temporal information, we analyze two design choices within the SST module, cascaded and split, as shown in Tab. 5. The best results are obtained using our split design (row 3), with spatial and temporal information encoded in parallel and later combined. We also evaluate different input volume sizes (4, 8, 16, 32) and observe that the best results are obtained with an input volume of 32.
## 4 Conclusion
We propose a hybrid CNN-transformers based encoder-decoder approach for 3D mitochondria instance segmentation. We introduce a split spatio-temporal attention (SST) module to capture long-range dependencies within the cubic volume of human and rat mitochondria samples. The SST module computes spatial and temporal attention in parallel, which are later fused. Further, we introduce a semantic adversarial loss for better delineation of mitochondria instances from background. Experiments on three datasets demonstrate the effectiveness of our approach, leading to state-of-the-art segmentation performance.
Figure 3: Qualitative 3D instance segmentation results of our STT-UNET on the example input regions from MitoEM-H and MitoEM-R val sets. Our STT-UNET achieves promising results on these input examples containing noise.
|
2307.00180
|
Tuning a magnetic energy scale with pressure in UTe$_2$
|
A fragile ordered state can be easily tuned by various external parameters.
When the ordered state is suppressed to zero temperature, a quantum phase
transition occurs, which is often marked by the appearance of unconventional
superconductivity. While the quantum critical point can be hidden, the
influence of the quantum criticality extends to fairly high temperatures,
manifesting the non-Fermi liquid behavior in the wide range of the $p$-$H$-$T$
phase space. Here, we report the tuning of a magnetic energy scale in the
heavy-fermion superconductor UTe$_2$, previously identified as a peak in the
$c$-axis electrical transport, with applied hydrostatic pressure and magnetic
field along the $a$-axis as complementary (and opposing) tuning parameters.
Upon increasing pressure, the characteristic $c$-axis peak moves to a lower
temperature before vanishing near the critical pressure of about 15 kbar. The
application of a magnetic field broadens the peak under all studied pressure
values. The observed Fermi-liquid behavior at ambient pressure is violated near
the critical pressure, exhibiting nearly linear resistivity in temperature and
an enhanced pre-factor. Our results provide a clear picture of energy scale
evolution relevant to magnetic quantum criticality in UTe$_2$.
|
Hyunsoo Kim, I-Lin Liu, Wen-Chen Lin, Yun Suk Eo, Sheng Ran, Nicholas P. Butch, Johnpierre Paglione
|
2023-07-01T00:22:10Z
|
http://arxiv.org/abs/2307.00180v2
|
# Tuning a magnetic energy scale with pressure in UTe\({}_{2}\)
###### Abstract
A fragile ordered state can be easily tuned by various external parameters. When the ordered state is suppressed to zero temperature, a quantum phase transition occurs, which is often marked by the appearance of unconventional superconductivity. While the quantum critical point can be hidden, the influence of the quantum criticality extends to fairly high temperatures, manifesting the non-Fermi liquid behavior in the wide range of the \(p\)-\(H\)-\(T\) phase space. Here, we report the tuning of a magnetic energy scale in the heavy-fermion superconductor UTe\({}_{2}\), previously identified as a peak in the \(c\)-axis electrical transport, with applied hydrostatic pressure and magnetic field along the \(a\)-axis as complementary (and opposing) tuning parameters. Upon increasing pressure, the characteristic \(c\)-axis peak moves to a lower temperature before vanishing near the critical pressure of about 15 kbar. The application of a magnetic field broadens the peak under all studied pressure values. The observed Fermi-liquid behavior at ambient pressure is violated near the critical pressure, exhibiting nearly linear resistivity in temperature and an enhanced pre-factor. Our results provide a clear picture of energy scale evolution relevant to magnetic quantum criticality in UTe\({}_{2}\).
Few systems in nature exhibit a fragile long-range magnetic order, where the thermal phase transition into the ordered state can be readily suppressed by chemical substitution, magnetic field, or physical pressure. In such systems, however, a quantum phase transition has been found to occur at a critical value of the tuning parameter [1; 2; 3; 4], deemed a quantum critical point (QCP). The QCP is often putative, being hidden within a surrounding superconducting phase which is thought to be mediated by fluctuations affiliated with the magnetic order [1]. While the majority of magnetic unconventional superconductors are found near an antiferromagnetic instability, several uranium-based superconductors, including URhGe and UCoGe, coexist with ferromagnetism [5], making them promising candidates for topological spin-triplet superconductivity [6].
Recently, UTe\({}_{2}\) was identified as a new member of the U-based superconductor family [6], with a transition temperature \(T_{c}\) reaching up to 2 K [7]. The normal state of UTe\({}_{2}\) can be described by the Kondo lattice model where the localized magnetic moment of uranium is hybridized with the conduction electrons at low temperatures [8]. UTe\({}_{2}\) does not magnetically order, but the superconductivity in this paramagnetic heavy fermion is believed to be in the vicinity of the magnetic instability [6]. The application of pressure as low as 15 kbar induces a long-range magnetic order [9]. Because of the relatively small energy scales of the superconductivity and magnetic order in UTe\({}_{2}\), a rich phase diagram emerges when the system is subjected to external parameters. However, understanding of competition and interplay between magnetism and superconductivity in UTe\({}_{2}\) remains elusive, and the associated quantum criticality in the \(p\)-\(H\)-\(T\) phase space has not been fully explored.
In UTe\({}_{2}\), electrical resistivity exhibits the behavior of a Fermi liquid in its temperature dependence above \(T_{c}\) for currents applied along all three crystallographic axes [10]. Whereas the resistivity along the \(a\) and \(b\) directions is consistent with typical incoherent-to-coherent crossover upon cooling as expected for a Kondo lattice at low temperatures, Eo _et al._[10] found a qualitatively different behavior in the \(c\)-axis transport which exhibits a pronounced local maximum near 12 K. An anomaly in \(d\rho_{a}/dT\)[10; 11] and \(\chi_{a}\) (magnetic susceptibility with a field along the \(a\)-axis) [12] was reported at the same temperature. The pressure evolution of \(\chi_{a}\) was studied by Li _et al._[12] where the feature moves to lower temperatures with pressure. In contrast, \(\chi_{b}\) exhibit a broad local maximum around 35 K, and its pressure evolution is scaled with that of the metamagnetic transition field [13]. A similar peak in the electrical transport measurement was identified at 16 K in an unoriented sample [14] and in a sample under applied pressure [9]. Other measurements including heat capacity [11], linear thermal expansion coefficient [15] and thermoelectric power [16] exhibit a prominent feature around 12 K. The \(c\)-axis peak has been associated with a spin fluctuation energy scale based on thermodynamic measurements [11], and therefore a measurement of \(c\)-axis transport as a function of tunable parameters allows for direct tracking of the evolution of this energy scale and any resultant change in physical properties, providing a straightforward but crucial window into the magnetic fluctuation spectrum likely responsible for superconductivity in UTe\({}_{2}\).
In this work, we investigate the \(c\)-axis electrical transport in UTe\({}_{2}\) while tuning the applied magnetic field and
pressure in order to elucidate the presence of quantum criticality in its rich phase diagram. By performing precision measurements of the electrical resistance \(R\) under applied pressures up to 17.4 kbar and in magnetic fields up to 18 T applied along the \(a\)-axis, we determine the pressure and field evolution of the characteristic fluctuation energy scale, upper critical field, and the power-law behavior of the \(c\)-axis electrical resistance. Our results clearly indicate an energy scale evolution relevant to magnetic quantum criticality in UTe\({}_{2}\).
Figure 1 presents the applied pressure dependence of \(R(T)\) in UTe\({}_{2}\) with electrical currents applied along the crystallographic \(c\)-axis. The measured single-crystal sample was grown by the chemical vapor transport method, and achieves zero resistance at \(T_{c}\)=1.6 K in the absence of pressure (see Methods for detail). The ambient pressure (0 kbar) \(R(T)\) curve exhibits the characteristic \(c\)-axis peak near 13 K as shown previously [10], which monotonically moves towards lower temperatures with increasing applied pressures while \(T_{c}\) steadily increases as reported previously [17; 18], reaching a maximum at \(p=9.7\) kbar before decreasing rapidly. The resistive superconducting transition itself exhibits distinct features that evolve with pressure as shown in Fig. 1(b). First, a small upturn appears just above the superconducting transition at pressures up to 9.7 kbar, which seemingly evolves from the relatively flat resistance at 0 kbar. A similar upturn was observed in prior electrical transport measurements with current applied in the (011) plane, found to be accompanied by thermal hysteresis (not observed in this study) [9]. Second, the superconducting transition narrows and becomes sharpest at \(p=9.7\) kbar, before broadening at higher pressures with a long tail just before the first-order transition to a magnetic phase occurs near \(p=14.2\) kbar. This feature was also observed previously [9], and was shown to sharpen upon application of magnetic field. At higher pressures, the peak in \(R(T)\) is diminished and a considerable increase in resistance occurs on cooling before an abrupt drop to finite resistance at the lowest measured temperatures. The features found above 15 kbar have been previously associated with magnetic ordering [12; 17].
The pressure-temperature phase diagram extracted from our \(c\)-axis resistivity measurements is presented in Figure 1(c) as a contour plot, comparing the evolution of the resistivity magnitude with that of other measured quantities. The precise resistivity measurements tracking the properties of the peak offer a clear picture, particularly near the critical pressure. The zero-pressure \(c\)-axis peak at 13.8 K decreases in temperature with increasing pressure at a rate of \(-0.6\) K/kbar, and the peak becomes narrower with pressure. Interestingly, the observed pressure suppression rate of the peak is in excellent agreement with that observed for the \(a\)-axis magnetic susceptibility \(\chi_{a}\), which is \(-0.58\) K/kbar [12]. Furthermore, Willa _et al._[11] estimated the pressure-suppression rate of the minimum thermal expansion coefficient along the \(c\)-axis from the thermodynamic Gruneisen parameter to be \(-0.4\) K/kbar, which also tracks the resistivity features as shown in Figure 1(c). Evidently, the pressure evolu
Figure 1: **Pressure evolution of the \(c\)-axis resistivity of UTe\({}_{2}\) in the absence of a magnetic field.** Panel (a) shows resistance \(R\) of UTe\({}_{2}\) measured with the electrical current applied along the crystallographic \(c\)-axis under various applied pressures up to \(p=17.4\) kbar. The peak in \(R(T)\) monotonically moves towards the lower temperature with increasing pressure. The pressure evolution of the resistive superconducting transition is shown in panel (b) for pressures up to 14.2 kbar, above which zero resistance is not observed. Panel (c) exhibits a phase diagram of the characteristic temperature scales (various symbols) of the system overlaid on a color contour presentation of the resistance \(R\) variation with pressure and temperature. The black (this work) and red [15] circles represent the superconducting transition and the black squares indicate a shoulder-like feature observed in magnetic susceptibility \(\chi_{a}\)[12], which closely tracks the position of the maximum in \(c\)-axis resistivity (\(T^{\star}\)) plotted as black stars. The dashed line represent the suppression of the observed minimum of the thermal expansion coefficient, estimated by using a thermodynamic relationship of electronic Grüneisen parameter [11], and the triangle and diamond symbols observed above 14.2 kbar are features attributed to magnetic ordering [12; 15].
tion of the \(c\)-axis peak closely tracks both the \(\chi_{a}\) feature as well as the Gruneisen parameter, strongly suggesting all features have a common magnetic origin.
Applying magnetic field at each measured pressure reveals the field-evolution of \(R(T)\) from 5.3 kbar to 14.2 kbar, where the \(c\)-axis peak remains as a pronounced local maximum but is strongly tuned by magnetic field. As shown in Figs. 2(a-e), increasing magnetic field broadens the \(c\)-axis peak and increases its temperature position, while also invoking a shallower temperature dependence of the resistance with increased curvature. The broadening of the peak with field is similar to what was observed previously at ambient pressures [19], but is contrary to the opposite trend observed with field applied along the magnetic hard axis (\(b\)-axis) [19; 20]. To characterize this trend, we define \(T^{*}\) and \(R^{*}\) as, respectively, the temperature and resistance values at the \(c\)-axis peak for each pressure and field value, with the latter representing the field-evolution of the absolute low-temperature scattering rate at each pressure. The field-dependent \(T^{*}\) and \(R^{*}\) values show common features under all applied pressures with \(H\parallel a\), as shown in Figs. 2(f-g). While \(T^{*}\) increases with increasing field and approaches a linear trend, \(R^{*}\) generically decreases with increasing field, except for a saturated evolution at low fields in the vicinity of the magnetic order transition. The trends are characterized by plotting the rate \(dT^{*}/d(\mu_{0}H)\) (determined between 6 T and 18 T) and \(|R^{*}(18~{}\mathrm{T})-R^{*}(0)|\) in Fig. 2(h), which show nearly linear increase and decrease with pressure, respectively.
The effect of magnetic field on the superconducting transition also reveals interesting pressure evolution of the upper critical field \(H_{c2}(T)\) as shown in Fig. 3. The \(H_{c2}(T)\) curves were determined from \(R(T)\) measurements with the electrical current along the \(c\)-axis and the magnetic field applied parallel to the \(a\)-axis under applied pressure up to \(p=14.2\) kbar. We used the zero resistance criteria for the superconducting transition temperature \(T_{sc}\) in field. While the \(H_{c2}(T)\) curve without the applied pressure exhibits a smooth variation, the application of pressure drastically changes the shape of the superconducting \(H\)-\(T\) phase lines. Near \(T_{c}\), the slope of \(H_{c2}(T)\) increases by almost five-fold under \(p=9.7\) kbar, and it slightly decreases at 11.8 kbar, consistent
Figure 2: **Magnetic field evolution of \(c\)-axis resistivity of UT\({}_{2}\) under applied pressure.** Panels (a-e) show the field-evolution of \(R(T)\) with applied pressure and fields applied along the \(a\)-axis, in the temperature range where the data exhibit a peak that evolves very sensitively with both pressure and magnetic field. Defining \(T^{*}\) and \(R^{*}\) as, respectively, the temperature and resistance at the maximum in \(R(T)\), panels (f-g) show the field-dependence of these characteristic values to exhibit common features under all applied pressures. The pressure evolution of the rate of increase of \(T^{*}\) with field, \(dT^{*}/dH\), is plotted in panel (h) (left vertical axis), together with the total field variation of \(R^{*}\), \(|R^{*}(18~{}\mathrm{T})-R^{*}(0)|\) (right vertical axis).
with the previous results [13; 18]. As was shown previously [9], the application of 14.2 kbar induces reentrant behavior of superconductivity. The large slope change of \(H_{c2}(T)\) at \(T_{c}\) with pressure indicates the significant variation in the orbital limiting \(H_{c2}(0)\)[21]. However, the overall observed \(\mu_{0}H_{c2}(T)\) at the lowest temperature remains between 6 and 10 T as shown in panel (a). When the field-driven superconducting to normal state transition occurs due to the orbital limiting effect, \(H_{c2}(0)\) can be estimated from the slope of \(H_{c2}(T)\) at \(T_{c}\) with a relation, \(H_{HW}=-\lambda T_{c}H_{c2}^{\prime}(T_{c})\), proposed by Helfand and Werthamer (HW) [21]. Here \(\lambda\approx 0.73\) and 0.69, which correspond to the clean and dirty limits, respectively [21; 22]. Alternatively, the spin-singlet superconductivity can be suppressed due to the Zeeman energy contribution of Pauli paramagnetism, and the limiting value \(H_{P}\) can be estimated by a relation, \(H_{P}=\Delta_{0}/\sqrt{2}\mu_{B}\). Here \(\Delta_{0}\) and \(\mu_{B}\) are the magnitudes of the superconducting energy gap at zero temperature and the Bohr magneton, respectively. For a weak-coupling BCS superconductor, \(\mu_{0}H_{P}=\alpha T_{c}\) where \(\alpha\approx 1.87\) T/K.
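For reference, both limiting-field estimates can be evaluated directly from \(T_{c}\) and the initial slope; the short numerical sketch below uses illustrative input values (an assumed slope of \(-10\) T/K near \(T_{c}\)), not data from this work.

```python
# Numerical sketch of the two limiting-field estimates quoted above; the input
# values (Tc = 1.6 K, an assumed slope of -10 T/K near Tc) are illustrative.
def orbital_limit_hw(tc_kelvin, slope_t_per_k, clean=True):
    """Helfand-Werthamer estimate: H_HW = -lambda * Tc * dHc2/dT|_Tc."""
    lam = 0.73 if clean else 0.69          # clean / dirty limit
    return -lam * tc_kelvin * slope_t_per_k

def pauli_limit(tc_kelvin, alpha=1.87):
    """Weak-coupling BCS Pauli limit: mu0*H_P = alpha * Tc with alpha ~ 1.87 T/K."""
    return alpha * tc_kelvin

print(orbital_limit_hw(1.6, -10.0))        # ~11.7 T
print(pauli_limit(1.6))                    # ~3.0 T
```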
We compare the experimental \(H_{c2}(0)\) to both limiting fields, \(H_{HW}\) and \(H_{P}\), in Fig. 3(b). We note that \(H_{HW}\) is ill-defined under \(p=14.2\) kbar because of the reversed sign of \(H_{c2}^{\prime}(T_{c})\), i.e., reentrant superconductivity. Figure 3(c) shows the pressure evolution of \(H_{HW}/H_{c2}(0)\) and \(H_{P}/H_{c2}(0)\). While \(H_{P}\) remains less than \(H_{c2}(0)\), indicating non-singlet pairing, \(H_{HW}\) exhibits a substantial variation. The large \(H_{HW}\) prediction is generally evidence for the heavy-fermion normal state [23]. The pressure-evolution of \(H_{HW}\), which exhibits a significant enhancement around 10 kbar, indicates increasing effective mass with pressure. However, the orbital limiting effect is interrupted, and the largest discrepancy between \(H_{c2}(0)\) and \(H_{HW}\) is observed at 9.7 kbar where the highest \(T_{c}\) is observed. A similar effect was observed in other heavy fermion superconductors near quantum criticality [23], suggesting the existence of a QCP near 10 kbar. At low temperatures, a drastic slope change appears under pressure between 5.3 and 11.8 kbar. The slope change
Figure 3: **Superconducting upper critical fields of UTe\({}_{2}\) as a function of applied pressure.** Panel (a) shows the temperature-dependent upper critical field \(H_{c2}(T)\) under various applied pressures with field \(H\) applied along the \(a\)-axis. Values are obtained using the zero resistance criteria for the superconducting transition temperature \(T_{sc}\) in a magnetic field. Panel (b) compares the extracted zero-temperature experimental \(H_{c2}(0)\) values (red circles) to the calculated orbital limiting field, \(H_{HW}\) (blue triangles), and the paramagnetic limiting field, \(H_{P}\) (black squares). See text for definitions of \(H_{HW}\) and \(H_{P}\). The experimental \(H_{c2}(0)\) values are determined by extrapolating the \(H_{c2}(T)\) curves to zero temperature. Panel (c) shows the pressure evolution of \(H_{HW}/H_{c2}(0)\) and \(H_{P}/H_{c2}(0)\). Panels (d-h) present the relation between the anomalous behavior \(H_{c2}(T)\) (red circles) and the width of the superconducting phase transition \(\Delta T_{sc}/T_{sc}\) (blue triangles). Under all measured pressures, the width exhibits strong enhancement in the field range where \(H_{c2}(T)\) exhibits a sudden slope change, as discussed in the text.
in UTe\({}_{2}\) was previously reported by Aoki _et al._ and attributed to the existence of other superconducting phases [18]. Similar \(H_{c2}(T)\) behavior was reported by Kasahara _et al._ in FeSe [24] and attributed to the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state [25; 26; 27].
We found that the width of the superconducting phase transition in resistivity is closely related to this anomalous behavior in \(H_{c2}(T)\). To shed light on the origin of this feature, we determined the field-dependent transition width normalized by \(T_{sc}\) (defined at zero resistivity), \(\Delta T_{sc}/T_{sc}\). For all studied pressures, \(\Delta T_{sc}/T_{sc}\) exhibits a strong enhancement where the sudden slope change occurs, as shown in panels (d-h). Defining \(H^{*}\) as the field value where the slope of \(H_{c2}(T)\) changes, we observe that \(\Delta T_{sc}/T_{sc}\) decreases above \(H^{*}\) under \(p=7.5\), \(9.7\), and \(11.8\) kbar, where the low-temperature data above \(H^{*}\) are available. A broad superconducting transition is usually associated with inhomogeneity [28; 29; 15] or a filamentary superconducting state. However, the systematic field dependence rules out these simple scenarios, suggesting that the broadening is instead associated with competing order parameters and quantum criticality leading to anomalous transport properties.
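The width analysis can be sketched as follows. Since only the zero-resistance criterion for \(T_{sc}\) is specified above, the onset criterion used in this illustrative Python snippet (90% of the normal-state resistance) and the synthetic \(R(T)\) curve are assumptions made for demonstration, not the procedure applied to the measured data.

```python
import numpy as np

def transition_width(T, R, onset_frac=0.9, zero_frac=0.02):
    """Estimate the zero-resistance temperature T_sc and the normalized width
    dT_sc/T_sc from an R(T) curve (T ascending). The onset and near-zero
    thresholds are illustrative criteria only."""
    R_n = R[-1]                                   # normal-state resistance (highest T)
    T_zero = T[np.argmax(R > zero_frac * R_n)]    # first T where R exceeds ~0
    T_onset = T[np.argmax(R > onset_frac * R_n)]  # first T where R reaches 90% of R_n
    return T_zero, (T_onset - T_zero) / T_zero

# Synthetic, broadened resistive transition for demonstration only.
T = np.linspace(0.3, 3.0, 400)
R = 0.5 * (1 + np.tanh((T - 1.5) / 0.1))
print(transition_width(T, R))
```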
Recently, the field evolution of the \(c\)-axis peak with \(H\parallel a\)[19] and the pressure evolution of the \(c\)-axis transport with \(H\parallel b\)[20] were reported. Here, we report the field and pressure evolution of the power-law temperature dependence of \(\rho_{c}\) with field along the \(a\)-axis. Figs. 4(a-e) present the phase diagrams for each applied pressure determined by the field-dependent exponent \(n^{*}\) of \(R(T)\) determined using the relation \(n^{*}=d[\log{(\rho(T)-\rho_{0})}]/d[\log{T}]\). \(R(0)\) is estimated by extrapolating the \(R(T)\) tail assuming a power-law behavior of \(R(T)\) in the low-temperature limit. Provided \(R(0)\) is accurately determined, \(n^{*}\) is equivalent to the exponent \(n\), yielding a continuous approximate measure of the temperature power law exponent of \(R(T)\). In previous re
Figure 4: **Evolution of non-Fermi liquid behavior with field and pressure in UTe\({}_{2}\).** Panels (a-e) show the field-dependent exponent \(n^{*}\), representative of the power law exponent of the temperature dependence of \(c\)-axis resistance \(R(T)\) for fields applied along the \(a\)-axis, determined using the relation \(n^{*}=d[\log{(\rho(T)-\rho_{0})}]/d[\log{T}]\). At \(5.3\) kbar, \(n^{*}\) exhibits Fermi Liquid behavior (i.e., \(n^{*}=2\), shown as yellow coloring) just above \(T_{c}\) near zero field, but decreases toward \(n^{*}=1.5\) (light green) with increasing fields and decreasing temperatures. To quantify the trends, least-squares fitting of selected \(R(T)\) using the relation \(R(T)=R(0)+AT^{n}\) to the experimental data with \(T\leq T^{*}/2\) yield values for the extracted power law exponent \(n\) and corresponding temperature coefficient \(A\), summarized as a function of the field in panels (f,g) and pressure in panels (h,i).
ports, the \(a\)-axis resistivity of UTe\({}_{2}\) was shown to remain quadratic in temperature (i.e., \(\Delta\rho_{a}\sim AT^{n}\), with \(n\)=2) for magnetic fields applied along both \(a\)- and \(b\)-axes up to 40 T, with the coefficient \(A\) significantly enhanced near a 35 T \(b\)-axis field [30] but retaining Fermi liquid (FL) behavior. Linear in temperature (i.e., \(n=1\)) resistivity was reported by Thomas _et al._[17] in the \(a\)-axis transport at low temperatures around 13 kbar. For \(c\)-axis resistivity, Eo _et al._ reported quadratic FL behavior in the absence of both field and applied pressures [10]. As shown in Fig.4, \(R(T)\) exhibits FL behavior (yellow) just above \(T_{c}\) at \(p=5.3\) kbar in zero field, but the exponent \(n^{*}\) decreases toward \(n^{*}=1.5\) (light green) with increasing field near \(H_{c2}(0)\). Under 7.5 kbar and 9.7 kbar, while the \(c\)-axis transport exhibits non-FL behavior near \(H_{c2}(0)\), FL behavior (yellow) is recovered at high fields between 15 T and 18 T. Under 11.8 kbar and 14.2 kbar, the exponent reaches \(n^{*}=2.5\) (red) at high fields.
Whereas the FL behavior (i.e., \(T^{2}\)) is expected at low temperatures in a typical metal, a non-FL sub-quadratic exponent is a telltale signature of unconventional scattering that has been attributed to the presence of enhanced spin fluctuations near a magnetic quantum critical point [1; 2; 23]. To study the quantitative trends, we performed least-squares fitting on selected \(R(T)\) curves by fitting our data to the relation \(R(T)=R(0)+AT^{n}\) with \(T\leq T^{*}/2\). The field evolution of \(n\) and \(A\) are summarized in panels (f, g) and pressure evolution in panels (h, i). For \(p=5.3\) kbar, \(n=2\) in zero field but smoothly decreases with increasing field, showing a minimum value of \(n=1.5\) near 10 T. It weakly increases at high fields while remaining sub-quadratic up to the highest fields measured. For higher pressures between 7.5 and 11.8 kbar, \(n\) exhibits a more drastic decrease with a minimum near 6-8 T where the \(H_{c2}(T)\) changes the slope. The smallest exponent \(n\approx 1\) is observed near 6 T under 9.7 kbar. At higher fields, \(n\) increases substantially to about 2.5 for 11 kbar and 14.2 kbar. The extracted \(A\)-coefficient appears to correlate inversely with the trends in the power law exponent, with a dip in \(n\) and a peak in \(A\) at a field near the suppression of the superconducting state being typical for a system at or near a quantum critical point. In UTe\({}_{2}\), this signature in \(c\)-axis transport is a revealing indication of an incipient magnetic order that has a strong influence on the physical properties and possibly the superconductivity.
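Both analyses described above (the continuous exponent \(n^{*}\) and the least-squares power-law fit) can be summarized by the short Python sketch below. It is applied to synthetic data with hypothetical parameters purely to illustrate the procedure; in the actual analysis \(R(0)\) is obtained by extrapolating the low-temperature tail and the fit window is \(T\leq T^{*}/2\).

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic R(T) with hypothetical parameters, used only to illustrate the method.
rng = np.random.default_rng(0)
T = np.linspace(0.3, 3.0, 200)                     # K
R0_true, A_true, n_true = 1.0, 0.4, 1.5
R = R0_true + A_true * T**n_true + 0.001 * rng.standard_normal(T.size)

# (i) Continuous exponent n* = d[log(R - R0)] / d[log T], here assuming R0 is known.
n_star = np.gradient(np.log(R - R0_true), np.log(T))

# (ii) Least-squares fit of R(T) = R(0) + A*T^n.
def power_law(T, R0, A, n):
    return R0 + A * T**n

popt, _ = curve_fit(power_law, T, R, p0=[1.0, 0.5, 2.0])
print("fitted R0, A, n:", popt)
print("n* near the middle of the window:", n_star[T.size // 2])
```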
## Methods
**Sample preparation:** Single crystals of UTe\({}_{2}\) were synthesized by the chemical vapor transport method using iodine as the transport agent. Elements of U and Te with atomic ratio 2:3 were sealed in an evacuated quartz tube, together with 3 mg/cm\({}^{3}\) iodine. The ampoule was gradually heated up and held in the temperature gradient of 1060/1000 \({}^{\circ}\)C for 7 days, after which it was furnace cooled to room temperature.
**Transport measurements under pressure:** A UTe\({}_{2}\) single-crystal sample with an onset \(T_{c}\approx 1.78\) K was prepared for transport measurements by soldering electrical leads with gold wires. The typical contact resistance is less than 1 \(\Omega\). The transport data were taken with a fixed current of 100 \(\mu\)A. A nonmagnetic piston-cylinder pressure cell was used for measurements under pressure up to 17.4 kbar, with Daphne oil 7373 as the pressure medium. Transport measurements were performed in a commercial \({}^{3}\)He cryostat system with a base temperature of 300 mK, which is equipped with a superconducting magnet. The current was applied along the crystallographic \(c\)-axis. The magnetic field up to 18 T was applied along the \(a\)-axis, perpendicular to the current. The pressure produced on the single-crystal sample at low temperatures was calibrated by measuring the superconducting transition temperature of lead placed in the cell. The known pressure dependencies of the superconducting transition temperature of Pb [9; 31] were used for this purpose.
## Acknowledgments
The authors are grateful for the useful discussions with Andriy Nevidomskyy. Research at the University of Maryland was supported by the Department of Energy Award No. DE-SC-0019154 (transport experiments), the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant No. GBMF9071 (materials synthesis), NIST, and the Maryland Quantum Materials Center.
|
2301.03707
|
Domains of discontinuity of Lorentzian affine group actions
|
We prove nonemptiness of domains of proper discontinuity of Anosov groups of
affine Lorentzian transformations.
|
Michael Kapovich, Bernhard Leeb
|
2023-01-09T22:40:40Z
|
http://arxiv.org/abs/2301.03707v2
|
# Domains of discontinuity of Lorentzian affine group actions
###### Abstract
We prove nonemptiness of domains of proper discontinuity of Anosov groups of affine Lorentzian transformations of \(\mathbb{R}^{n}\).
There is a substantial body of literature, going back to the pioneering work of Margulis [Ma], on properly discontinuous non-amenable groups of affine transformations, see e.g. [A, AMS02, AMS11, Dr, DGK, GLM, Me], and numerous other papers. In this paper we address a somewhat related question of nonemptiness of domains of proper discontinuity of discrete groups acting on affine spaces:
**Question 1**.: _Which discrete subgroups \(\Gamma<Aff(\mathbb{R}^{n})\) have nonempty discontinuity domain in the affine space \(\mathbb{R}^{n}\)?_
In this paper we limit ourselves to the following setting: Suppose that \(\Gamma<\mathbb{R}^{n}\rtimes O(n-1,1)<Aff(\mathbb{R}^{n})\) is a discrete subgroup such that the linear projection \(\ell:\Gamma\to O(n-1,1)\) is a _faithful representation with convex-cocompact image_, see e.g. [Bo] for the precise definition. Given a representation \(\ell:\Gamma\to O(n-1,1)\), the affine action of \(\Gamma\) is determined by a cocycle \(c\in Z^{1}(\Gamma,\mathbb{R}^{n-1,1}_{\ell})\). Even in the case \(n=3\) and \(\ell(\Gamma)\) a Schottky subgroup of \(O(2,1)\) (which is the setting of Margulis' original examples), while some actions are properly discontinuous on the entire \(\mathbb{R}^{3}\) (as proven by Margulis, see also [GLM] for a general description of such actions), nonemptiness of domains of discontinuity for _arbitrary_ \(c\) does not appear to be obvious\({}^{1}\).
Footnote 1: The reaction to the question that we observed included: “clearly true”, “clearly false”, “unclear”.
The main result of this note is:
**Theorem 2**.: _Every subgroup \(\Gamma<\mathbb{R}^{n}\rtimes O(n-1,1)\) with faithful convex-cocompact linear representation \(\ell:\Gamma\to O(n-1,1)\), acts properly discontinuously on a nonempty open subset of the Lorentzian space \(\mathbb{R}^{n-1,1}\)._
We will prove this theorem by applying results on domains of discontinuity for discrete group actions on flag manifolds proven in [KLP3]. To this end, we will begin by identifying
the Lorentzian space \(\mathbb{R}^{n-1,1}\) with an open Schubert cell in a partial flag manifold of the group \(G=O(n,2)\).
Consider the group \(G=O(n,2)\) and its symmetric space \(X=G/K\), \(K=O(n)\times O(2)\). The group \(G\) has two partial flag manifolds: the Grassmannian \(\mathrm{F}_{1}\) of isotropic lines and another partial flag manifold \(\mathrm{F}_{2}\) of isotropic planes in \(V=\mathbb{R}^{n,2}\), where the quadratic form on \(V\) is
\[q=x_{1}y_{1}+x_{2}y_{2}+z_{1}^{2}+\dots+z_{n-2}^{2}.\]
We will use the notation \(\langle\cdot,\cdot\rangle\) for the associated bilinear form on \(V\).
In the paper we will be using the _Tits boundary_ \(\partial_{Tits}X\) of the symmetric space \(X\) and the incidence geometry interpretation of \(\partial_{Tits}X\). The Tits boundary \(\partial_{Tits}X\) is a metric bipartite graph whose vertices are labelled _lines_ and _planes_; these are the elements of \(\mathrm{F}_{1}\) and \(\mathrm{F}_{2}\), respectively. Two vertices \(L\in\mathrm{F}_{1}\) and \(p\in\mathrm{F}_{2}\) are connected by an edge iff the line \(L\) is contained in the plane \(p\). The edges of this bipartite graph have length \(\pi/4\). We refer the reader to [Br], [G] and [T].
The group \(G\) acts transitively on the set of edges of \(\partial_{Tits}X\) and we can identify the quotient \(\partial_{Tits}X/G\) with \(\sigma_{mod}\), the _model spherical chamber_ of \(\partial_{Tits}X\). Thus \(\sigma_{mod}\) is a circular segment of length \(\pi/4\). This segment has two vertices, one of which we denote \(\tau_{mod}\); this is the one which is the projection of \(\mathrm{F}_{1}\). The flag manifold \(\mathrm{F}_{1}\) is the quotient \(G/P_{L}\), where \(P_{L}\) is the stabilizer of an isotropic line \(L\) in \(G\); this flag manifold is \(n\)-dimensional.
Recall that two vertices of \(\partial_{Tits}X\) are opposite iff they are within Tits distance \(\pi\) from each other. In terms of the incidence geometry of the vector space \((V,q)\), two lines \(L,\hat{L}\in\mathrm{F}_{1}\) are opposite iff they span a plane \(\mathrm{span}(L,\hat{L})\) in \(V\) such that the restriction of \(q\) to \(\mathrm{span}(L,\hat{L})\) is nondegenerate, necessarily of type \((1,1)\). Two lines \(L,L^{\prime}\in\mathrm{F}_{1}\) are within Tits distance \(\pi/2\) iff they span an isotropic plane in \(V\).
Consider a subgroup \(P_{L}<G\); it is a maximal parabolic subgroup of \(G\); let \(U<P_{L}\) be the unipotent radical of \(P_{L}\). Choosing a line \(\hat{L}\) opposite to \(L\) defines a semidirect product decomposition \(P_{L}=U\rtimes G_{L,\hat{L}}\), where \(G_{L,\hat{L}}\) is the stabilizer in \(P_{L}\) of the line \(\hat{L}\); equivalently, it is the stabilizer of the _parallel set_\({}^{2}\) \(P(L,\hat{L})\). This subgroup is the intersection
Footnote 2: The parallel set \(P(L,\hat{L})\) is a certain symmetric subspace in \(X\), which is the union of all geodesics \(l\) in \(X\) which are forward-asymptotic to \(L\in\partial_{Tits}X\) and backward-asymptotic to \(\hat{L}\in\partial_{Tits}X\). The parallel set splits isometrically as the product \(l\times\mathbb{H}^{n-1}\), where \(\mathbb{H}^{n-1}\) is the _cross-section_ of \(P(L,\hat{L})\).
\[G_{L,\hat{L}}=P_{L}\cap P_{\hat{L}}.\]
The orthogonal complement \(V_{L,\hat{L}}\subset V\) of the anisotropic plane \(\mathrm{span}(L,\hat{L})\) is invariant under \(G_{L,\hat{L}}\), hence,
\[G_{L,\hat{L}}\cong\mathbb{R}_{+}\times O(V_{L,\hat{L}},q|_{V_{L,\hat{L}}}) \cong\mathbb{R}_{+}\times O(n-1,1).\]
Here the group \(\mathbb{R}_{+}\) acts via transvections along geodesics in the symmetric space \(X\) connecting \(L\) and \(\hat{L}\). The group \(G_{L,\hat{L}}\) acts on both \((V^{\prime},q^{\prime})=(V_{L,\hat{L}},q|_{V_{L,\hat{L}}})\) and on \(U\), where the action of \(\mathbb{R}_{+}\) on \(V^{\prime}=V_{L,\hat{L}}\) is trivial. In order to simplify the notation, we set
\[O(q^{\prime})=O(V^{\prime},q^{\prime}).\]
In terms of linear algebra, \(\mathbb{R}_{+}\) is the identity component of the orthogonal group
\[O(\operatorname{span}(L,\hat{L}),q|_{\operatorname{span}(L,\hat{L})})\cong O(1,1).\]
We will use the notation
\[G^{\prime}_{L}:=U\rtimes O(q^{\prime})<P_{L}.\]
This subgroup is the stabilizer in \(P_{L}\) of horoballs in \(X\) centered at \(L\).
Our next goal is to describe Schubert cells in the Grassmannian \(\operatorname{F}_{1}\). We fix \(L\in\operatorname{F}_{1}\) and define the subvariety \(Q_{L}\subset\operatorname{F}_{1}\) consisting of all (isotropic) lines \(L^{\prime}\subset V\) such that \(\operatorname{span}(L,L^{\prime})\) is isotropic (the line \(L\) or an isotropic plane). In terms of the Tits' distance, \(Q_{L}-\{L\}\) consists of lines \(L^{\prime}\in\operatorname{F}_{1}\) within distance \(\frac{\pi}{2}\) from \(L\). The complement
\[L^{opp}=\operatorname{F}_{1}-Q_{L}\]
consists of lines opposite to \(L\). The group \(P_{L}\) acts transitively on \(\{L\}\), \(Q_{L}-\{L\}\) and \(L^{opp}\) and each of these subsets is an open Schubert cell of \(\operatorname{F}_{1}\) with respect to \(P_{L}\) and we obtain the \(P_{L}\)-invariant Schubert cell decomposition
\[\operatorname{F}_{1}=\{L\}\sqcup(Q_{L}-\{L\})\sqcup L^{opp}.\]
We next describe \(Q_{L}\) more geometrically. A vector \(v\in V\) spans an isotropic subspace with \(L\) iff \(v\in L^{\perp}\) and satisfies the quadratic equation \(q(v)=0\). Since we are only interested in nonzero vectors \(v\neq 0\) and their spans \(\operatorname{span}(v)\), we obtain the natural identification
\[Q_{L}\cong\mathbb{P}(q^{-1}(0)\cap L^{\perp}),\]
the right-hand side is the projectivization of a conic in \(L^{\perp}\). Thus, \(Q_{L}\) is a (projective) conic and \(L\in Q_{L}\) is the unique singular point of \(Q_{L}\).
**Lemma 3**.: _Given two opposite isotropic lines \(L,\hat{L}\), the intersection of the conics_
\[E=E_{L,\hat{L}}:=Q_{L}\cap Q_{\hat{L}}\]
_is an ellipsoid in \(Q_{L}\)._
Proof.: As before, let \(V^{\prime}\subset V\) denote the codimension two subspace orthogonal to both \(L,\hat{L}\). Then each \(L^{\prime}\in E\) is spanned by a vector \(v\in V^{\prime}\) satisfying the condition \(q(v)=0\). In other words, \(E\) is the projectivization of the conic
\[\{v\in V^{\prime}:q(v)=0\},\]
i.e. is an ellipsoid.
Our next goal is to (equivariantly) identify the open cell \(L^{opp}\) with the \(n\)-dimensional Lorentzian affine space \(\mathbb{R}^{n-1,1}\) (where a chosen \(\hat{L}\in L^{opp}\) will serve as the origin), so that
the group \(P_{L}\) is identified with the group of Lorentzian similarities, where the simply-transitive action \(U\rightsquigarrow L^{opp}\) is identified with the action of the full group of translations of \(\mathbb{R}^{n-1,1}\).
We fix nonzero vectors \(e\in L\), \(f\in\hat{L}\) such that \(\langle e,f\rangle=1\). Then
\[V=\operatorname{span}(e)\oplus\operatorname{span}(f)\oplus V^{\prime}.\]
We obtain an epimorphism \(\eta:P_{L}\to O(q^{\prime})\) by sending \(g\in P_{L}\) first to the restriction \(g|L^{\perp}\) and then to the projection of the latter to the quotient space \(V^{\prime}\cong L^{\perp}/L\) (the quotient of \(L^{\perp}\) by the null-subspace of \(q|L^{\perp}\)). Hence, the kernel of this epimorphism is precisely the solvable radical \(U\rtimes\mathbb{R}_{+}\) of \(P_{L}\).
For each \(v^{\prime}\in V^{\prime}\) we define the linear transformation (a shear) \(s=s_{v^{\prime}}\in GL(V)\) by its action on \(e,f\) and \(V^{\prime}\):
1. \(s(e)=e\).
2. \(s(f)=-\frac{1}{2}q(v^{\prime})e+f+v^{\prime}\).
3. For \(w\in V^{\prime}\), \(s(w)=w-\langle v^{\prime},w\rangle e\).
The next two lemmata are proven by straightforward calculations which we omit:
**Lemma 4**.: _For each \(s=s_{v^{\prime}}\) the following hold:_
1. \(s\in P_{L}\)_._
2. \(s\) _lies in the kernel of the homomorphism_ \(\eta:P_{L}\to GL(V^{\prime})\) _and is unipotent. In particular,_ \(s\in U\) _for each_ \(v^{\prime}\in V^{\prime}\)_._
**Lemma 5**.: _The map \(\phi:v^{\prime}\mapsto s_{v^{\prime}}\) is a continuous monomorphism \(V^{\prime}\to U\), where we equip the vector space \(V^{\prime}\) with the additive group structure._
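Although elementary, these computations can also be checked numerically. The following Python sketch treats the illustrative case \(n=3\), in a basis \((e,f,w_{1},\dots,w_{n})\) with \(\langle e,f\rangle=1\), \(q(e)=q(f)=0\) and \(q^{\prime}=\mathrm{diag}(1,\dots,1,-1)\); the basis choice and the random test vectors are ours and serve only as a sanity check, not as part of the argument.

```python
import numpy as np

n = 3                                     # so dim V = n + 2 = 5 and dim V' = n
G = np.zeros((n + 2, n + 2))              # Gram matrix of <.,.> in the basis (e, f, w1, ..., wn)
G[0, 1] = G[1, 0] = 1.0
G[2:, 2:] = np.diag([1.0] * (n - 1) + [-1.0])
Gp = G[2:, 2:]                            # Gram matrix of q' on V'

def shear(vp):
    """Matrix of s_{v'} in the basis (e, f, w1, ..., wn)."""
    s = np.eye(n + 2)
    s[0, 1] = -0.5 * (vp @ Gp @ vp)       # e-coefficient of s(f)
    s[2:, 1] = vp                         # V'-component of s(f)
    s[0, 2:] = -(Gp @ vp)                 # s(w) = w - <v', w> e
    return s

rng = np.random.default_rng(0)
vp, wp = rng.standard_normal(n), rng.standard_normal(n)
s_v = shear(vp)

# Lemma 4: s_{v'} preserves the form q (s^T G s = G) and is unipotent.
print(np.allclose(s_v.T @ G @ s_v, G))
print(np.allclose(np.linalg.matrix_power(s_v - np.eye(n + 2), 3), 0))
# Lemma 5: phi(v') = s_{v'} is a homomorphism, s_{v'} s_{w'} = s_{v'+w'}.
print(np.allclose(shear(vp) @ shear(wp), shear(vp + wp)))
```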
Since \(U\) acts simply transitively on \(L^{opp}\), it is connected and has dimension \(n\). Therefore, the monomorphism \(\phi\) is surjective and, hence, a continuous isomorphism. Thus, \(\phi\) determines a homeomorphism \(h:V^{\prime}\to L^{opp}\)
\[h:v^{\prime}\mapsto s_{v^{\prime}}(\hat{L})=span\left(-\frac{1}{2}q(v^{\prime })e+f+v^{\prime}\right),\]
\[h(0)=\hat{L}.\]
The group \(G_{L,\hat{L}}\cong\mathbb{R}_{+}\times O(V^{\prime},q^{\prime})\) acts on both \(L^{opp}\) and on \(U\) (via conjugation). The center of \(G_{L,\hat{L}}\) acts on \(V^{\prime}\) trivially while its action on \(U\) is via a nontrivial character.
**Proposition 6**.: _The map \(h\) is equivariant with respect to these two actions of \(O(V^{\prime},q^{\prime})\)._
Proof.: Consider a linear transformation \(A\in O(V^{\prime},q^{\prime})\); as before, we identify \(O(V^{\prime},q^{\prime})\) with a subgroup of \(O(V,q)\) fixing \(e\) and \(f\). For an arbitrary \(v^{\prime}\in V^{\prime}\) we will verify that
\[s_{Av^{\prime}}=As_{v^{\prime}}A^{-1}.\]
It suffices to verify this identity on the vectors \(e,f\) and arbitrary \(w\in V^{\prime}\). We have:
1. For each \(u\in V^{\prime}\), \(s_{u}(e)=e\), while \(A(e)=A^{-1}(e)=e\). It follows that \[e=s_{Av^{\prime}}(e)=As_{v^{\prime}}A^{-1}(e)=e.\]
2. \[s_{Av^{\prime}}(f)=-\frac{1}{2}q(Av^{\prime})e+f+Av^{\prime}=-\frac{1}{2}q(v^{ \prime})e+f+Av^{\prime}\]
while (since \(Ae=e\), \(Af=f\))
\[As_{v^{\prime}}A^{-1}(f)=As_{v^{\prime}}(f)=A(-\frac{1}{2}q(v^{\prime})e+f+v^{ \prime})=-\frac{1}{2}q(v^{\prime})e+f+Av^{\prime}.\]
3. For \(w\in V^{\prime}\),
\[s_{Av^{\prime}}(w)=w-\langle Av^{\prime},w\rangle e=w-\langle v^{\prime},A^{-1 }w\rangle e,\]
while
\[As_{v^{\prime}}A^{-1}w=As_{v^{\prime}}(A^{-1}w)=A(A^{-1}w-\langle v^{\prime},A ^{-1}w\rangle e)=w-\langle v^{\prime},A^{-1}w\rangle e.\qed\]
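The conjugation identity just verified, together with the pairing \(\langle f,s_{v^{\prime}}(f)\rangle=-\frac{1}{2}q(v^{\prime})\) used in Lemma 7 below, can again be confirmed numerically; the Python sketch below does so for the illustrative case \(n=3\), with a basis and test data chosen by us purely as a sanity check.

```python
import numpy as np

n = 3
G = np.zeros((n + 2, n + 2))
G[0, 1] = G[1, 0] = 1.0
G[2:, 2:] = np.diag([1.0, 1.0, -1.0])
Gp = G[2:, 2:]

def shear(vp):
    s = np.eye(n + 2)
    s[0, 1] = -0.5 * (vp @ Gp @ vp)
    s[2:, 1] = vp
    s[0, 2:] = -(Gp @ vp)
    return s

# An element of O(V', q'): a rotation of the two spacelike directions composed
# with a boost mixing a spacelike and the timelike direction.
th, t = 0.7, 0.4
rot = np.array([[np.cos(th), -np.sin(th), 0.0],
                [np.sin(th),  np.cos(th), 0.0],
                [0.0, 0.0, 1.0]])
boost = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cosh(t), np.sinh(t)],
                  [0.0, np.sinh(t), np.cosh(t)]])
Ap = rot @ boost
assert np.allclose(Ap.T @ Gp @ Ap, Gp)     # Ap preserves q'

A = np.eye(n + 2)
A[2:, 2:] = Ap                             # extend to V, fixing e and f

vp = np.array([0.3, -1.1, 0.5])
f = np.zeros(n + 2); f[1] = 1.0

print(np.allclose(shear(Ap @ vp), A @ shear(vp) @ np.linalg.inv(A)))   # Proposition 6
print(np.isclose(f @ G @ (shear(vp) @ f), -0.5 * (vp @ Gp @ vp)))      # pairing in Lemma 7
```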
In view of this proposition we will identify \(V^{\prime}\) with the open Schubert cell \(L^{opp}\), which, in turn, enables us to use Lorentzian geometry to analyze \(L^{opp}\) and, conversely, to study discrete subgroups of \(P_{L}\) using results of [KLP3] on domains of discontinuity of discrete group actions on the flag manifold \(\mathrm{F}_{1}\). Under the identification \(V^{\prime}\cong L^{opp}\), for each \(\hat{L}\in L^{opp}\), the conic \(Q_{\hat{L}}\cap L^{opp}\) becomes a translate of the null-cone of the form \(q^{\prime}\) on \(V^{\prime}\) (see Lemma 7 below) and the flag manifold \(\mathrm{F}_{1}\) becomes a compactification of \(V^{\prime}\) obtained by adding to it the "quadric at infinity" \(Q_{L}\).
**Lemma 7**.: _For all \(v^{\prime}\in V^{\prime}\), \(q^{\prime}(v^{\prime})=0\) iff \(q\) vanishes on \(\mathrm{span}(f,h(v^{\prime}))\), i.e. iff \(h(v^{\prime})\in Q_{\hat{L}}\). In other words, \(Q_{\hat{L}}\cap L^{opp}\) is the image under \(h\) of the null-cone of \(q^{\prime}\) in the vector space \(V^{\prime}\)._
Proof.: Since \(f\) and \(s_{v^{\prime}}(f)\) (spanning the line \(h(v^{\prime})\)) are null-vectors of \(q\), the vanishing of \(q\) on \(\mathrm{span}(f,h(v^{\prime}))\) is equivalent to the vanishing of
\[\langle f,s_{v^{\prime}}(f)\rangle=-\frac{1}{2}q(v^{\prime}).\qed\]
**Lemma 8**.: _For each neighborhood \(N\) of \(L\) in \(Q_{L}\) there exists \(\hat{L}\in L^{opp}\) such that \(E_{L,\hat{L}}\subset N\)._
Proof.: We pick \(L_{\infty}\in\mathrm{F}_{1}\) opposite to \(L\) and, as above, identify \(L_{\infty}^{opp}\) with \((V^{\prime},q^{\prime})\). Then for a sequence \(\hat{L}_{i}\in L_{\infty}^{opp}\) contained in, say, the future light cone of \(Q_{L}\cap L_{\infty}^{opp}\) and converging radially to \(L\), the intersections of null-cones \(E_{L,\hat{L}_{i}}=Q_{\hat{L}_{i}}\cap Q_{L}\) converge to \(L\). Since \(\hat{L}_{i}\notin Q_{L}\), they are all opposite to \(L\).
For each subset \(C\subset\mathrm{F}_{1}\), we define the _thickening_ of \(C\):
\[\mathrm{Th}(C)=\bigcup_{L\in C}Q_{L}.\]
This notion of thickening is a special case of the one developed in [12] (see also [13]): If we restrict to a single apartment \(a\) in the Tits building of \(G\), then for the vertex \(L\in a\), \(\operatorname{Th}(L)\cap a=Q_{L}\cap a\) consists of three vertices within Tits distance \(\frac{\pi}{2}\) from \(L\). Thus, in the terminology of [12], the thickening Th is _fat_.
**Lemma 9**.: _For any two opposite lines \(L,\hat{L}\in\operatorname{F}_{1}\) and each compact subset \(C\subset Q_{\hat{L}}\cap L^{opp}\), the intersection \(\text{Th}(C)\cap L^{opp}\) is a proper subset of \(L^{opp}\)._
Proof.: Let \(H\subset L^{opp}\cong V^{\prime}\) be an affine hyperplane in \(V^{\prime}\) intersecting \(Q_{\hat{L}}\) only at \(\hat{L}\). Then
\[C^{\prime}:=\{L^{\prime}\in H:Q_{L^{\prime}}\cap C\neq\emptyset\}\]
is compact in \(H\). Next, observe that for \(L_{1},L_{2}\in\operatorname{F}_{1}\), \(L_{1}\in Q_{L_{2}}\iff L_{2}\in Q_{L_{1}}\). Thus, no \(L^{\prime}\in H-C^{\prime}\) belongs to \(\operatorname{Th}(C)\).
**Lemma 10**.: _For each compact \(C\subset Q_{L}-\{L\}\) the thickening \(\text{Th}(C)\) is a proper subset of \(\operatorname{F}_{1}\)._
Proof.: Lemma 8 implies that there exists \(L_{\infty}\in L^{opp}\) such that \(E_{L,L_{\infty}}\) is disjoint from \(C\). Thus, \(C\) is contained in \(L_{\infty}^{opp}\). Now the claim follows from Lemma 9.
We now turn to discrete subgroups \(\Gamma<G_{L}^{\prime}<P_{L}<G\). We refer the reader to [12] for the notion of \(\tau_{mod}\)-regular discrete subgroups \(\Gamma<G\) and their \(\tau_{mod}\)-limit sets, which are certain closed \(\Gamma\)-invariant subsets of \(\operatorname{F}_{1}\).
**Remark 11**.: We must also note that the notions equivalent to \(\tau_{mod}\)-regularity and the \(\tau_{mod}\)-limit set were first introduced by Benoist in his highly influential work [11].
An important class of \(\tau_{mod}\)-regular discrete subgroups \(\Gamma<G\) consists of \(\tau_{mod}\)_-Anosov subgroups_. Anosov representations \(\Gamma\to G\) whose images are Anosov subgroups were first introduced in [14] for fundamental groups of closed manifolds of negative curvature, then in [15] for arbitrary hyperbolic groups; we refer the reader to our papers [12, 13, 14], for a simplification of the original definition as well as for alternative definitions and to [14, 15] for surveys of the results.
**Lemma 12**.: _The \(\tau_{mod}\)-limit set \(\Lambda_{\tau_{mod}}(\Gamma)\) of every \(\tau_{mod}\)-regular discrete subgroup \(\Gamma<P_{L}\) is contained in \(Q_{L}\)._
Proof.: Recall that \(G_{L}^{\prime}\) and, hence, \(\Gamma\), preserves each horoball \(Hbo\) in \(X\) centered at \(L\), where the latter is regarded as a point of the visual boundary of the symmetric space \(X\). Therefore, for each \(x\in Hbo\), the closure of \(\Gamma x\) in \(\overline{X}=X\cup\partial_{\infty}X\) is contained in the ideal boundary of \(Hbo\), which is the closed \(\frac{\pi}{2}\)-ball \(\bar{B}(L,\frac{\pi}{2})\) in \(\partial_{\infty}X\) centered at \(L\), where the distance is computed in the Tits metric on \(\partial_{\infty}X\). For each vertex \(\tau\) of the building \(\partial_{Tits}X\) which belongs to \(\bar{B}(L,\frac{\pi}{2})\) the star \(\text{st}(\tau)\subset\partial_{\infty}X\) is contained in the closed ball in \(\partial_{\infty}X\) of the radius \(\frac{3\pi}{4}\) centered at \(L\). Therefore, the intersection of \(\text{st}(\tau)\) with the Grassmannian \(\operatorname{F}_{1}\) is contained in \(\bar{B}(L,\frac{\pi}{2})\). It follows from the definition of the \(\tau_{mod}\)-limit set that \(\Lambda_{\tau_{mod}}(\Gamma)\) is contained in \(\operatorname{F}_{1}\cap\bar{B}(L,\frac{\pi}{2})=Q_{L}\).
**Proposition 13**.: _Suppose that \(\Gamma<G^{\prime}_{L}\) is a \(\tau_{mod}\)-regular discrete subgroup whose \(\tau_{mod}\)-limit set does not contain \(L\). Then_
\[\text{Th}(\Lambda_{\tau_{mod}}(\Gamma))\neq\text{F}_{1}\]
_and the action_
\[\Gamma\curvearrowright\text{F}_{1}-\text{Th}(\Lambda_{\tau_{mod}}(\Gamma))\]
_is properly discontinuous._
Proof.: Since \(\Lambda_{\tau_{mod}}(\Gamma)\) is a compact subset of \(Q_{L}\), the first statement of the proposition is a special case of Lemma 10. The proper discontinuity statement is a special case of a general theorem [13, Theorem 6.13] since the thickening Th is fat.
We now describe certain conditions on \(\tau_{mod}\)-regular discrete subgroups \(\Gamma<G^{\prime}_{L}\) which will ensure that \(\Lambda_{\tau_{mod}}(\Gamma)\) does not contain the point \(L\). Each subgroup \(\Gamma<G^{\prime}_{L}\) has the _linear part_\(\Gamma_{0}\), i.e. its projection to \(O(q^{\prime})\cong O(n-1,1)\), which is identified with the semisimple factor of the stabilizer in \(P_{L}\) of some \(\hat{L}\in L^{opp}\). We now assume that:
* \(\Gamma_{0}\) is a convex-cocompact subgroup of \(O(n-1,1)\).
* The projection \[\ell:\Gamma\to\Gamma_{0}\] is an isomorphism.
Since \(\Gamma_{0}<O(q^{\prime})\) is convex-cocompact and \(O(q^{\prime})<P_{L}\) is the Levi subgroup of the parabolic group \(P_{L}\) stabilizing a face of type \(\tau_{mod}\) of \(\partial_{Tits}X\), it follows that \(\Gamma_{0}<G\) is a \(\tau_{mod}\)-Anosov subgroup of \(G\); the \(\tau_{mod}\)-limit set of \(\Gamma_{0}\) is contained in the visual boundary of the cross-section (isometric to \(\mathbb{H}^{n-1}\)) of the parallel set \(P(L,\hat{L})\); in particular, \(\Lambda_{\tau_{mod}}(\Gamma_{0})\) does not contain \(L\).
Given a subgroup \(\Gamma_{0}<O(q^{\prime})\), the inverse \(\rho:\Gamma_{0}\to\Gamma\) to \(\ell:\Gamma\to\Gamma_{0}\) is determined by a cocycle \(c\in Z^{1}(\Gamma_{0},V^{\prime})\) which describes the translational parts of the elements of \(\Gamma\):
\[\rho(\gamma):v\mapsto\gamma v+c(\gamma),v\in V^{\prime}\cong\mathbb{R}^{n-1,1}.\]
Pick some \(t\in\mathbb{R}_{+}\); then \(tc\) is again a cocycle corresponding to the conjugate representation \(\rho^{t}\), where we identify \(t\in\mathbb{R}_{+}\) with a central element of \(G_{L,\hat{L}}\). Sending \(t\to 0\) we obtain:
\[\lim_{t\to 0}\rho^{t}=id,\]
the identity embedding \(\Gamma_{0}\to O(n-1,1)<P_{L}\). In view of stability of Anosov representations (see [12] and [13]) we conclude that all representations \(\rho^{t}\) are \(\tau_{mod}\)-Anosov and the \(\tau_{mod}\)-limit sets of \(\Gamma_{t}=\rho^{t}(\Gamma_{0})\) vary continuously with \(t\); moreover,
\[t\Lambda_{\tau_{mod}}(\Gamma_{t_{1}})=\Lambda_{\tau_{mod}}(\Gamma_{t_{2}})\]
where \(t=t_{2}/t_{1}\). In particular,
\[\Lambda_{\tau_{mod}}(\Gamma)\subset Q_{L}-\{L\}\]
is a compact subset. Proposition 13 now implies:
**Corollary 14**.: _For each \(\Gamma\) as above,_
\[Th(\Lambda_{\tau_{mod}}(\Gamma))\neq\mathrm{F}_{1}\]
_and the action_
\[\Gamma\curvearrowright\mathrm{F}_{1}-Th(\Lambda_{\tau_{mod}}(\Gamma))\]
_is properly discontinuous._
Thus, we have proved that each discrete subgroup \(\Gamma<P_{L}\) as above has a nonempty domain of discontinuity in the vector space \(V^{\prime}\). Theorem 2 follows.
**Acknowledgements.** The first author was partly supported by the NSF grant DMS-16-04241, by a Simons Foundation Fellowship, grant number 391602, by the Max Planck Institute for Mathematics in Bonn, as well as by KIAS (the Korea Institute for Advanced Study) through the KIAS scholar program. Much of this work was done during our stay at KIAS and we are thankful to KIAS for its hospitality.
|
2307.10393
|
Nonequilibrium Seebeck and spin Seebeck effects in nanoscale junctions
|
The spin-resolved thermoelectric transport properties of correlated nanoscale
junctions, consisting of a quantum dot/molecule asymmetrically coupled to
external ferromagnetic contacts, are studied theoretically in the
far-from-equilibrium regime. One of the leads is assumed to be strongly coupled
to the quantum dot resulting in the development of the Kondo effect. The
spin-dependent current flowing through the system, as well as the
thermoelectric properties, are calculated by performing a perturbation
expansion with respect to the weakly coupled electrode, while the Kondo
correlations are captured accurately by using the numerical renormalization
group method. In particular, we determine the differential and nonequilibrium
Seebeck effects of the considered system in different magnetic configurations
and uncover the crucial role of spin-dependent tunneling on the device
performance. Moreover, by allowing for spin accumulation in the leads, which
gives rise to finite spin bias, we shed light on the behavior of the
nonequilibrium spin Seebeck effect.
|
Anand Manaparambil, Ireneusz Weymann
|
2023-07-19T18:04:25Z
|
http://arxiv.org/abs/2307.10393v1
|
# Nonequilibrium Seebeck and spin Seebeck effects in nanoscale junctions
###### Abstract
The spin-resolved thermoelectric transport properties of correlated nanoscale junctions, consisting of a quantum dot/molecule asymmetrically coupled to external ferromagnetic contacts, are studied theoretically in the far-from-equilibrium regime. One of the leads is assumed to be strongly coupled to the quantum dot resulting in the development of the Kondo effect. The spin-dependent current flowing through the system, as well as the thermoelectric properties, are calculated by performing a perturbation expansion with respect to the weakly coupled electrode, while the Kondo correlations are captured accurately by using the numerical renormalization group method. In particular, we determine the differential and nonequilibrium Seebeck effects of the considered system in different magnetic configurations and uncover the crucial role of spin-dependent tunneling on the device performance. Moreover, by allowing for spin accumulation in the leads, which gives rise to finite spin bias, we shed light on the behavior of the nonequilibrium spin Seebeck effect.
## I Introduction
Quantum transport through nanoscale systems, such as quantum dots, molecular junctions and nanowires, has attracted tremendous research interest due to promising applications of such nanostructures in nanoelectronics, spintronics and spin-caloritronics [1; 2; 3; 4]. Due to the strong electron-electron interactions and a characteristic discrete density of states, these systems can exhibit a large thermoelectric figure of merit and are excellent candidates for nanoscale heat engines [5; 6; 7; 8; 9]. As far as more fundamental aspects are concerned, correlated nanoscale systems allow one to explore fascinating many-body phenomena that are not present in bulk materials. One such phenomenon is the Kondo effect, which can drastically change the system's transport properties at low temperatures by giving rise to a universal enhancement of the conductance to its maximum [10; 11; 12]. Moreover, in addition to investigations of voltage-biased setups, the emergence of Kondo correlations can be probed in the presence of a temperature gradient, where thermoelectric transport properties reveal the important physics [5; 6; 7]. In fact, the thermopower of quantum dot systems has been shown to contain signatures of the Kondo effect. Specifically, sign changes in the temperature dependence of the thermopower with the onset of Kondo correlations have been identified in both theoretical [12] and experimental [13; 14; 15] studies.
Furthermore, other interesting properties arise when the electrodes are magnetic, making such nanoscale systems important for spin nanoelectronics applications [3; 4]. It turns out that ferromagnetism of the leads can compete with the Kondo correlations, giving rise to an interplay between the ferromagnet-induced exchange field and the Kondo behavior [16; 17; 18; 19]. This interplay has been revealed in theoretical studies on thermoelectric properties of strongly-correlated molecular and quantum dot systems with ferromagnetic contacts [20; 21]. From a theoretical point of view, an accurate description of the low-temperature transport behavior of correlated nanoscale systems with competing energy scales requires resorting to advanced numerical methods, such as the numerical renormalization group (NRG) method [22; 23]. Indeed, while there has been tremendous progress toward a complete understanding of transport properties at equilibrium [24; 25; 26; 27; 28; 29; 30], much less is known in fully nonequilibrium settings, where standard NRG cannot be applied. The exact treatment of the nonlinear response regime requires even more sophisticated numerical techniques [31; 32], which is why it has been much less explored [33; 34; 35; 36; 37; 38].
In this work we therefore investigate the nonlinear thermopower of a molecular magnetic junction and analyze how the spin-resolved transport affects the nonequilibrium thermoelectric properties of the system. More specifically, we consider a quantum dot/molecule strongly coupled to one ferromagnetic lead and weakly coupled to the other nonmagnetic or ferromagnetic lead kept at different potentials and temperatures, see Fig. 1. We perform a perturbation expansion in the weak coupling, while the strongly coupled subsystem, where Kondo correlations may arise, is solved with the aid of the NRG method. This allows us to extract the signatures of the interplay between the spin-resolved transport and the Kondo correlations in the Seebeck coefficient in far from equilibrium settings. Furthermore, we study how different magnetic configurations of the system affect the differential and nonequilibrium Seebeck effects. In particular, we show that the Seebeck effect exhibits new sign changes as a function of the bias voltage which are associated with the Kondo resonance split by exchange field. These sign changes are found to extend to the temperature gradients on the order of the Kondo temperature. Moreover, we also provide a detailed analysis of the nonequilibrium spin Seebeck coefficient. We believe that our work sheds light on the spin-resolved nonequi
librium thermopower of correlated nanoscale junctions, in which the interplay between the Kondo and exchange field correlations is relevant. It thus provides a better understanding of spin caloritronic nanodevices under finite temperature and voltage gradients.
The paper is organized as follows: The system Hamiltonian and the theoretical framework are described in Sec. II. The numerical calculations and the results are discussed in Sec. III with a short summary and concluding remarks in Sec. IV.
## II Theoretical description
### Hamiltonian of the system
We consider a nanoscale junction with an embedded quantum dot/molecule, which is schematically shown in Fig. 1. The quantum dot is assumed to be strongly coupled to the left ferromagnetic lead and weakly coupled to the right lead, which can be either nonmagnetic [Fig. 1(a)] or ferromagnetic [Fig. 1(b)]. In the case of two ferromagnetic electrodes, we will distinguish two magnetic configurations: the parallel (P) one when the leads magnetic moments point in the same direction and the antiparallel (AP) one, when the orientation of magnetic moments is opposite, see Fig. 1(b). It is assumed that there are finite temperature and voltage gradients applied to the system, with \(T_{L}=0\) and \(\mu_{L}=0\), whereas \(T_{R}=\Delta T\) and \(\mu_{R}=-eV\), as shown in Fig. 1, where \(T_{\alpha}\) and \(\mu_{\alpha}\) are the temperature (\(k_{B}\equiv 1\)) and the chemical potential of lead \(\alpha\).
With the assumption of weak coupling between the quantum dot and right contact the system Hamiltonian can be simply written as
\[H=H_{L}+H_{R}+H_{T}. \tag{1}\]
\(H_{L}\) describes the strongly coupled left subsystem, consisting of the quantum dot and the left lead, and it is given by
\[H_{L}=\varepsilon_{d}\sum_{\sigma}n_{\sigma}+Un_{\uparrow}n_{ \downarrow}+\sum_{k\sigma}\varepsilon_{Lk\sigma}c^{\dagger}_{Lk\sigma}c_{Lk\sigma}\] \[+\sum_{k\sigma}t_{Lk\sigma}(d^{\dagger}_{\sigma}c_{Lk\sigma}+c^ {\dagger}_{Lk\sigma}d_{\sigma}), \tag{2}\]
where \(n_{\sigma}=d^{\dagger}_{\sigma}d_{\sigma}\), with \(d^{\dagger}_{\sigma}\) (\(d_{\sigma}\)) being the creation (annihilation) operator on the quantum dot for an electron of spin \(\sigma\), \(c_{\alpha k\sigma}\) (\(c^{\dagger}_{\alpha k\sigma}\)) annihilates (creates) an electron in the lead \(\alpha\) with momentum \(k\), spin \(\sigma\) and energy \(\varepsilon_{\alpha k\sigma}\). The quantum dot is modeled by a single orbital of energy \(\varepsilon_{d}\) and Coulomb correlations \(U\). The hopping matrix elements between the quantum dot and lead \(\alpha\) are denoted by \(t_{\alpha k\sigma}\) and give rise to the level broadening \(\Gamma_{\alpha\sigma}=\pi\rho_{\alpha\sigma}|t_{\alpha k\sigma}|^{2}\), which is assumed to be momentum independent, where \(\rho_{\alpha\sigma}\) is the density of states of lead \(\alpha\) for spin \(\sigma\).
The second part of the Hamiltonian describes the right lead and is given by
\[H_{R}=\sum_{k\sigma}\varepsilon_{Rk\sigma}c^{\dagger}_{Rk\sigma}c_{Rk\sigma}- e\sum_{k\sigma}\mu_{R\sigma}c^{\dagger}_{Rk\sigma}c_{Rk\sigma}, \tag{3}\]
while the last term of \(H\) accounts for the hopping between the left and right subsystems
\[H_{T}=\sum_{k\sigma}t_{Rk\sigma}(d^{\dagger}_{\sigma}c_{Rk\sigma}+c^{\dagger} _{Rk\sigma}d_{\sigma}). \tag{4}\]
In the following, we use the lowest-order perturbation theory in \(H_{T}\) to study the spin-dependent electric and thermoelectric properties of the system.
### Nonlinear transport coefficients
The electric current flowing through the system in the spin channel \(\sigma\) can be expressed as [39; 40]
\[I_{\sigma}(V,\Delta T) = -\frac{e\Gamma_{R\sigma}}{\hbar}\int_{-\infty}^{\infty}d\omega~{} A_{L\sigma}(\omega) \tag{5}\] \[\times[f_{L}(\omega)-f_{R}(\omega-eV)],\]
where \(f_{\alpha}(\omega)=[1+\exp(\omega/T_{\alpha})]^{-1}\) is the Fermi-Dirac distribution function, while \(A_{L\sigma}(\omega)\) denotes the spin-resolved spectral function of the left subsystem. The total current flowing through the system under potential bias \(V\) and temperature gradient \(\Delta T\) is thus \(I(V,\Delta T)=\sum_{\sigma}I_{\sigma}(V,\Delta T)\). The spectral function \(A_{L\sigma}(\omega)\) is calculated by means of the NRG method [22; 23; 41], which allows us to include all the correlation effects between the quantum dot strongly coupled to left contact in a fully nonperturbative manner. In particular, \(A_{L\sigma}(\omega)\) is determined as the imaginary part of the Fourier transform of the retarded Green's function of the left subsystem Hamiltonian \(H_{L},~{}G_{\sigma}(t)=-i\Theta(t)\langle\{d_{\sigma}(t),d^{\dagger}_{\sigma} (0)\}\rangle\)
Figure 1: The schematic of the considered asymmetric tunnel junction with embedded quantum dot/molecule strongly coupled to a cold ferromagnetic left lead and weakly coupled to a hot (a) nonmagnetic or (b) ferromagnetic right lead. The right lead is subject to voltage and temperature gradients, while the left lead is grounded and kept at zero temperature. The device in (b) can be in two magnetic configurations: the parallel (P) and antiparallel (AP) one, as indicated by the arrows.
In NRG calculations, the spectral data is collected in logarithmic bins that are then broadened to obtain a smooth function.
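For orientation, Eq. (5) can be evaluated directly once a spectral function is supplied. In the minimal Python sketch below a Lorentzian stands in for the NRG spectral function, and all parameter values are illustrative rather than taken from our calculations (units \(e=\hbar=k_{B}=1\)).

```python
import numpy as np
from scipy.special import expit

def fermi(omega, T):
    # Fermi-Dirac function, written with expit for numerical stability at low T.
    return expit(-omega / max(T, 1e-12))

def current(V, dT, A_L, Gamma_R, T_L=1e-12):
    # Spin-resolved current of Eq. (5) for a given spectral function A_L(omega).
    omega = np.linspace(-1.0, 1.0, 20001)
    integrand = A_L(omega) * (fermi(omega, T_L) - fermi(omega - V, dT))
    return -Gamma_R * np.sum(integrand) * (omega[1] - omega[0])

# Lorentzian stand-in for the NRG spectral function (illustrative parameters).
Gamma_L, Gamma_R, eps_d = 0.02, 0.002, -0.05
A_L = lambda w: (Gamma_L / np.pi) / ((w - eps_d) ** 2 + Gamma_L ** 2)
print(current(V=0.01, dT=0.001, A_L=A_L, Gamma_R=Gamma_R))
```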
For the further analysis, it is convenient to express the coupling constants \(\Gamma_{\alpha\sigma}\) by using the spin polarization of the lead \(\alpha\), \(p_{\alpha}\), as \(\Gamma_{L\sigma}=(1+\sigma p_{L})\Gamma_{L}\) and \(\Gamma_{R\sigma}=(1+\sigma p_{R})\Gamma_{R}\) for the parallel magnetic configuration, with \(\Gamma_{R\sigma}=(1-\sigma p_{R})\Gamma_{R}\) in the case of the antiparallel configuration of the system. Here, \(\Gamma_{\alpha}=(\Gamma_{\alpha\uparrow}+\Gamma_{\alpha\downarrow})/2\). Furthermore, in the case when the right lead is nonmagnetic, \(p_{R}=0\), while for both ferromagnetic leads we for simplicity assume \(p_{L}=p_{R}\equiv p\).
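The parametrization of the couplings by the lead spin polarizations can be summarized in a few lines; the helper `couplings` below and the numerical values are ours and serve only as an illustration of the conventions just introduced.

```python
# Sketch of the coupling parametrization used in the text (illustrative values).
def couplings(Gamma_L, Gamma_R, p_L, p_R, config="P"):
    """Return spin-resolved couplings {sigma: (Gamma_L_sigma, Gamma_R_sigma)}
    for the parallel ("P") or antiparallel ("AP") configuration."""
    sign_R = +1 if config == "P" else -1
    return {s: (Gamma_L * (1 + s * p_L), Gamma_R * (1 + sign_R * s * p_R))
            for s in (+1, -1)}          # s = +1: spin up, s = -1: spin down

print(couplings(Gamma_L=0.02, Gamma_R=0.002, p_L=0.4, p_R=0.4, config="AP"))
```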
As far as thermoelectric coefficients are concerned, the differential Seebeck coefficient can be expressed as [42]
\[S_{d}=-\left(\frac{dV}{d\Delta T}\right)_{I}=\left(\frac{\partial I}{\partial\Delta T}\right)_{V}\bigg/\left(\frac{\partial I}{\partial V}\right)_{\Delta T}. \tag{6}\]
Furthermore, the extension of the conventional Seebeck coefficient to the nonlinear response regime is referred to as the nonequilibrium Seebeck coefficient \(S_{n}\), and it can be defined as [43; 44; 45; 46; 37; 47]
\[S_{n}=-\left(\frac{\Delta V}{\Delta T}\right)_{\!I(V+\Delta V,\Delta T)=I(V,0) }. \tag{7}\]
The above definitions will be used to describe thermoelectric transport in different configurations of the system, respectively.
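As a sketch of how Eqs. (6) and (7) are evaluated in practice, the Python snippet below computes \(S_{d}\) by finite differences of the current and \(S_{n}\) by solving \(I(V+\Delta V,\Delta T)=I(V,0)\) for \(\Delta V\). The model current, built from a Lorentzian stand-in spectral function, and all parameter values are illustrative assumptions, not our NRG data (units \(e=\hbar=k_{B}=1\)).

```python
import numpy as np
from scipy.special import expit
from scipy.optimize import brentq

# Model current with a Lorentzian stand-in spectral function (illustrative only).
omega = np.linspace(-1.0, 1.0, 20001)
dw = omega[1] - omega[0]
Gamma_L, Gamma_R, eps_d, T_L = 0.02, 0.002, -0.05, 1e-4
A_L = (Gamma_L / np.pi) / ((omega - eps_d) ** 2 + Gamma_L ** 2)

def current(V, dT):
    f_L = expit(-omega / T_L)
    f_R = expit(-(omega - V) / max(dT, 1e-12))
    return -Gamma_R * np.sum(A_L * (f_L - f_R)) * dw

def S_differential(V, dT, h=1e-5):
    # Eq. (6): ratio of finite-difference partial derivatives of I(V, dT).
    dI_dT = (current(V, dT + h) - current(V, dT - h)) / (2 * h)
    dI_dV = (current(V + h, dT) - current(V - h, dT)) / (2 * h)
    return dI_dT / dI_dV                 # equals -(dV/dDeltaT) at fixed I

def S_nonequilibrium(V, dT):
    # Eq. (7): find dV such that I(V + dV, dT) = I(V, 0), then S_n = -dV/dT.
    target = current(V, 0.0)
    dV = brentq(lambda x: current(V + x, dT) - target, -0.5, 0.5)
    return -dV / dT

print(S_differential(0.01, 0.002), S_nonequilibrium(0.01, 0.002))
```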
## III Numerical results and discussion
In this section we present the main numerical results and their discussion. In our considerations we assume that the left lead is always ferromagnetic, while the right electrode can be either nonmagnetic or ferromagnetic, cf. Fig. 1. For the studied setup, the strong coupling to the left contact may give rise to the Kondo effect [48; 11]. However, it is crucial to realize that the presence of the spin-dependent hybridization results in a local exchange field on the quantum dot, which can split the dot orbital level when detuned from the particle-hole symmetry point, and thus suppress the Kondo resonance. The magnitude of such exchange field can be estimated from the perturbation theory, which at zero temperature gives [49],
\[\Delta\varepsilon_{\rm exch}=\frac{2p_{L}\Gamma_{L}}{\pi}\ln\left|\frac{ \varepsilon_{d}}{\varepsilon_{d}+U}\right|. \tag{8}\]
The presence of the exchange field and its detrimental effect on the Kondo phenomenon has been confirmed by various experiments on electronic transport measurements in quantum dot and molecular systems [50; 17; 51; 18].
We start our considerations with the analysis of electric transport properties, revealing the effects of the exchange field. Further on, we study the nonlinear thermoelectric response, first for the case of nonmagnetic right lead and then for the case of two ferromagnetic leads. In numerical calculations, we use the following parameters: \(U=0.2\), \(\Gamma_{L}=0.02\), \(\Gamma_{R}=0.002\), in units of band halfwidth, and \(p=0.4\) for the ferromagnetic leads. For the assumed parameters, the Kondo temperature of the left subsystem for \(\varepsilon_{d}=-U/2\) is equal to [49; 52], \(T_{K}\approx 0.035\Gamma_{L}\).
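For orientation, evaluating Eq. (8) for these parameters (a minimal sketch; energies in units of the band halfwidth) shows that already at \(\varepsilon_{d}=-U/3\) the exchange field clearly exceeds the quoted Kondo scale, while it vanishes at the particle-hole symmetry point.

```python
import numpy as np

# Illustrative evaluation of Eq. (8) for the parameters used in the calculations.
U, Gamma_L, p_L = 0.2, 0.02, 0.4
T_K = 0.035 * Gamma_L                      # quoted Kondo temperature at eps_d = -U/2

def exchange_field(eps_d):
    return (2 * p_L * Gamma_L / np.pi) * np.log(abs(eps_d / (eps_d + U)))

for eps_d in (-U / 2, -U / 3):
    print(f"eps_d = {eps_d:+.4f}: Delta_exch = {exchange_field(eps_d):+.2e}, "
          f"T_K = {T_K:.2e}")
```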
To begin with, it is instructive to analyze the properties of the left subsystem itself as described by the spectral function. The spectral function for each individual spin channel is shown in Fig. 2. First of all, one can see that for \(\varepsilon_{d}=-U/2\) there is a pronounced Kondo peak at the Fermi level for each spin component. However, when detuned from particle-hole symmetry, there is a finite exchange-induced splitting, cf. Eq. (8), which suppresses the Kondo effect when \(|\Delta\varepsilon_{\rm exch}|\gtrsim T_{K}\), with \(T_{K}\) denoting the Kondo temperature. Because of that, each spin component of the spectral function displays a side peak slightly detuned from the Fermi energy, constituting the split Kondo resonance. In addition, the Hubbard resonances at \(\omega\approx\varepsilon_{d}\) and \(\omega\approx\varepsilon_{d}+U\) become affected as well: although their positions are only slightly modified, their magnitudes become strongly spin-dependent.
The splitting of the Kondo resonance is directly visi
Figure 2: The energy dependence of the spectral functions for the individual spin channels, (a) \(A_{L\uparrow}(\omega)\) and (b) \(A_{L\downarrow}(\omega)\) calculated for the strongly coupled left subsystem with orbital energies as indicated. The zoomed Kondo and split-Kondo peaks are shown in the insets. The other parameters are: \(U=0.2\), \(\Gamma_{L}=0.02\), in units of band halfwidth, and \(p=0.4\).
ble in the differential conductance of the system, which is demonstrated in Fig. 3. This figure presents the bias voltage dependence of the differential conductance in different magnetic configurations for various temperature gradients, as indicated. More specifically, \(G\) corresponds to the case when the right lead is nonmagnetic [cf. Fig. 1(a)], while \(G^{P}\) (\(G^{AP}\)) presents the case of both ferromagnetic leads in the parallel (antiparallel) alignment [cf. Fig. 1(b)]. When the orbital level is detuned out of the particle-hole symmetry point, \(\varepsilon_{d}=-U/3\), as in the case of Fig. 3, the splitting of the Kondo peak in the spectral function of the left subsystem becomes revealed in the differential conductance of the whole system. Let us start with the case of nonmagnetic right lead, presented in Fig. 3(a). First of all, one can note a large asymmetry of the differential conductance with respect to the bias reversal. Moreover, for small temperature gradients, \(\Delta T\lesssim T_{K}\), the split zero-bias anomaly due to the Kondo effect is visible. These features can be understood by inspecting the behavior of the spectral function around the Fermi energy, see the insets in Fig. 2. One can note that the split Kondo peak in \(A_{L\uparrow}(\omega<0)\) has smaller weight compared to the split Kondo peak in \(A_{L\downarrow}(\omega>0)\). Because, for low temperature gradients, for \(eV>0\) (\(eV<0\)) we probe the density of states of the left subsystem for negative (positive) energies, the above-mentioned asymmetry in \(A_{L\sigma}(\omega)\) gives rise to highly asymmetric behavior of the differential conductance, see Fig. 3(a), with the peak in the negative voltage regime more pronounced than the other. Interestingly, when the tunneling to the right lead becomes spin dependent, in the case of parallel configuration one observes a rather symmetric behavior of \(G^{P}\), with nicely visible split zero-bias anomaly, see Fig. 3(b). This is due to the fact that the increased tunneling rate of spin-down electrons due to larger density of states becomes now reduced since the spin-down electrons are the minority ones in the right lead. On the other hand, the tunneling of spin-up electrons to the right is enlarged. As a consequence, the unequal contributions of the currents in each spin channel become now equalized and the differential conductance in the parallel configuration exhibits split-Kondo resonance with the side peaks of comparable height. On the other hand, when the magnetization of the right lead is flipped, the asymmetric behavior visible in Fig. 3(a) is even further magnified, see Fig. 3(c). This can be understood by invoking similar arguments as above, keeping in mind that now the rate of spin-up tunneling to the right is smaller than that for spin-down electrons. With increase in the temperature gradient, the Kondo-related behavior gets smeared and finally disappears when \(\Delta T\gtrsim T_{\text{K}},|\Delta\varepsilon_{\text{exch}}|\).
### Effects of exchange field on nonequilibrium thermopower
In this section, we focus on the case where the right lead is nonmagnetic, see Fig. 1(a). In such a setup it will be possible to observe clear signatures of ferromagnet-induced exchange field on the thermoelectric properties of the system subject to temperature and voltage gradients. We first study the case of the linear response in potential bias with nonlinear temperature gradient in Sec. III.1.1, while in Sec. III.1.2 the discussion is extended to the case of nonlinear response regime in both \(\Delta T\) and \(V\).
#### iii.1.1 Zero-bias thermoelectrics with finite temperature gradient
Figure 4 displays the zero-bias differential conductance \(G\), the differential Seebeck coefficient \(S_{\text{d}}\) and the nonlinear Seebeck coefficient \(S_{\text{n}}\) calculated as a function of orbital level \(\varepsilon_{d}\) and finite temperature gradient \(\Delta T\). For
Figure 3: The differential conductance for the quantum dot strongly coupled to ferromagnetic left lead and weakly coupled to (a) nonmagnetic right lead, ferromagnetic right lead in (b) the parallel magnetic configuration and (c) the antiparallel magnetic configuration. The parameters are the same as in Fig. 2 with \(\varepsilon_{d}=-U/3\) and different temperature gradients, as indicated.
low temperature gradients, the conductance shows considerable increase near three values of \(\varepsilon_{d}\). The peaks for \(\varepsilon_{d}\approx 0\) and \(\varepsilon_{d}\approx-U\) correspond to the Hubbard resonances in the spectral function, whereas the maximum at \(\varepsilon_{d}=-U/2\) is due to the Kondo effect. In fact, in the local moment regime, \(-1\lesssim\varepsilon_{d}/U\lesssim 0\), the Kondo resonance is suppressed by the exchange field once \(|\Delta\varepsilon_{\rm exch}|\gtrsim T_{K}\), i.e. for values of \(\varepsilon_{d}\) away from the particle-hole symmetry point, cf. Eq. (8). With the increase in the temperature gradient, the Kondo resonance dies out when \(\Delta T>T_{K}\) and the Hubbard peaks get suppressed when \(\Delta T>\Gamma_{L}\), see Fig. 4(a).
In the case of the differential and nonlinear Seebeck coefficients presented in Figs. 4(b) and (c), respectively, we can see an overall antisymmetric behavior across the particle-hole symmetry point \(\varepsilon_{d}=-U/2\). The sign of the Seebeck coefficient here corresponds to the dominant charge carriers in transport, holes for \(\varepsilon_{d}<-U/2\) and particles for \(\varepsilon_{d}>-U/2\). The differential Seebeck coefficient shows two sign changes in the local moment regime as a function of the temperature gradient. The sign change around \(\Delta T\approx T_{K}\) originates from the signatures of the Kondo correlations present in the spectral functions. In the case of the nonlinear Seebeck coefficient, we do not find the corresponding sign changes because \(S_{\rm n}\) can deviate considerably from the linear response Seebeck coefficient at large \(\Delta T\) [29]. Additionally, one can see that both Seebeck coefficients decay with decreasing \(\Delta T\); this behavior can be described using the Sommerfeld expansion for the linear response Seebeck coefficient, where
\[S(T)\propto\left.\frac{T}{A(\omega=0,T)}\frac{\partial A}{\partial\omega} \right|_{\omega=0}. \tag{9}\]
We also note that both Seebeck coefficients can possess finite values at even lower \(\Delta T\) inside the local moment regime than outside of it due to the additional contribution of the Kondo resonance in the spectral function \(A_{L}(\omega)\) at \(\omega=0\).
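A minimal sketch of the proportionality in Eq. (9) is given below, using a Lorentzian resonance detuned from \(\omega=0\) as a stand-in spectral function (the temperature dependence of \(A\) is neglected, and all values are illustrative); it illustrates the linear decay of the thermopower with \(T\) and that its sign is set by the slope of the spectral function at the Fermi level.

```python
import numpy as np

# Stand-in spectral function: Lorentzian resonance slightly below omega = 0.
Gamma, eps = 0.02, -0.005

def A(w):
    return (Gamma / np.pi) / ((w - eps) ** 2 + Gamma ** 2)

def seebeck_sommerfeld(T, h=1e-6):
    dA = (A(h) - A(-h)) / (2 * h)        # dA/domega at omega = 0
    return T * dA / A(0.0)               # Eq. (9), up to a constant prefactor

for T in (1e-4, 5e-4, 1e-3):
    print(T, seebeck_sommerfeld(T))
```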
#### iii.2.2 The case of nonlinear potential bias and temperature gradients
Let us now inspect the behavior of the nonequilibrium thermoelectric coefficients as a function of both potential bias and temperature gradient shown in Fig. 5, focusing on the \(V\) and \(\Delta T\) range where Kondo correlations are important. The first row of the figure corresponds to the case of particle-hole symmetry, \(\varepsilon_{d}=-U/2\), while the second row presents the results for \(\varepsilon_{d}=-U/3\). Consider the first case. Figure 5(a) depicts the bias and temperature gradient dependence of the differential conductance \(G\). There exists a prominent peak at low \(\Delta T\) centered at \(V=0\); this is the zero-bias conductance peak characteristic of the Kondo effect. As the temperature gradient increases, the Kondo peak dies out and becomes smeared when \(\Delta T\gtrsim T_{K}\). The differential and nonlinear Seebeck coefficients, shown in Figs. 5(b) and (c), exhibit a sign change with respect to the bias voltage reversal. Moreover, while \(S_{d}\) exhibits considerable values around the Kondo peak and becomes suppressed as \(\Delta T\) grows, \(S_{n}\) gets enhanced when \(\Delta T\gtrsim(\Gamma_{L}/U)|eV|\).
When the orbital level is detuned out of the particle-hole symmetry point, one can observe an interesting interplay between the exchange field and Kondo effect,
Figure 4: (a) The differential conductance \(G\), (b) the differential Seebeck coefficient \(S_{d}\) and (c) the nonequilibrium Seebeck coefficient \(S_{n}\) of the quantum dot strongly coupled to left ferromagnetic lead and weakly attached to the right nonmagnetic lead plotted as a function of the orbital energy \(\varepsilon_{d}\) and the temperature gradient \(\Delta T\). The system is assumed to be in the linear response regime with respect to the bias voltage. The other parameters are the same as in Fig. 2.
and its signatures present in the nonlinear thermoelectric coefficients. First, Fig. 5(d) shows the splitting of the Kondo peak due to the exchange field present in the strongly correlated subsystem. As observed in the discussions of Fig. 3(a), the split Kondo peaks are not symmetric, with the more prominent one in the \(eV<0\) regime and both dying off at large \(\Delta T\). Interestingly, the differential and nonlinear Seebeck coefficients also capture the signatures of the exchange field shown by the split Kondo peak. In fact, there exist additional sign changes in the nonlinear response regime with respect to \(V\). More specifically, at low \(\Delta T\), there is a sign change at low bias voltages, followed by another one, roughly located around the split-Kondo peak, see Figs. 5(e) and (f). These sign changes correspond to the additional energy scale in the system, namely the exchange field \(\Delta\varepsilon_{\rm exch}\). They occur at slightly different absolute values of \(eV\), which is due to the fact that the Kondo resonance in the local density of states of the left subsystem exhibits an asymmetric splitting, cf. Fig. 2. With increasing the temperature gradient, we observe that the right split Kondo peak in the conductance dies out first, accordingly the regime of positive values of the Seebeck coefficients corresponding to the right peak disappears around \(\Delta T\approx 0.03\,\Gamma_{L}\). Moreover, we also note that the overall sign change of the thermopower as a function of the bias voltage is now shifted to negative values of \(eV\), as compared to the case of particle-hole symmetry, see Fig. 5.
### Effects of different magnetic configurations on nonequilibrium thermopower
In this section we study the case where the quantum dot is coupled to both ferromagnetic leads with spin polarization \(p=0.4\). The magnetic moments of the external leads are assumed to be aligned either in parallel or antiparallel. The focus is on the effects of different magnetic configurations on nonequilibrium thermoelectric transport properties.
#### iii.2.1 The case of zero bias with nonlinear temperature gradient
The zero-bias thermoelectric properties of the system with two ferromagnetic leads are shown in Fig. 6. The differential conductance for the parallel \(G^{P}\) and antiparallel \(G^{AP}\) configuration of the lead magnetizations is shown in Figs. 6(a) and (b). The qualitative behavior of both conductances is similar to the case of a nonmagnetic lead on the right, where \(G\) shows a region of high conductance around \(\varepsilon_{d}=-U/2\) due to the Kondo effect. Similarly to the previous case, the exchange field suppresses the linear response conductance for values of \(\varepsilon_{d}\) away from the particle-hole symmetry point. Around \(\varepsilon_{d}\approx 0,-U\), there is a rise in the conductance corresponding to the contribution from the Hubbard peaks.
Figure 5: (a,d) The differential conductance \(G\), (b,e) the differential Seebeck coefficient \(S_{d}\) and (c,f) the nonequilibrium Seebeck coefficient \(S_{n}\) as a function of the potential bias \(V\) and the temperature gradient \(\Delta T\). The first row corresponds to the particle-hole symmetry point \(\varepsilon_{d}=-U/2\), while the second row shows the case of \(\varepsilon_{d}=-U/3\). The other parameters are the same as in Fig. 4.
It is interesting to note that the conductance in the case of the parallel configuration is smaller than that in the antiparallel configuration around the Kondo resonance, cf. the discussion of Fig. 3, while this situation is reversed for the resonances at \(\varepsilon_{d}\approx 0,-U\).
The Seebeck coefficients \(S_{d}^{P}\) and \(S_{n}^{P}\) shown in Figs. 6(c) and (e) for the parallel configuration display very interesting features corresponding to various energy scales. These coefficients show antisymmetric behavior across \(\varepsilon_{d}=-U/2\) and sign changes as a function of temperature gradient in the local moment regime \(-1\lesssim\varepsilon_{d}/U\lesssim 0\). Let us first consider the linear response in \(\Delta T\) for \(S_{d}^{P}\). In this regime, one can relate the Seebeck coefficient to the conductance through Mott's formula. Thus, the changes of \(G^{P}\) as a function of orbital level are reflected in the corresponding dependence of the thermopower, which shows sign changes as \(\varepsilon_{d}\) is detuned from the particle-hole symmetry point. The first sign change occurs when the detuning is large enough to induce the exchange field that suppresses the Kondo effect. A further sign change occurs at the onset of the conductance increase (as a function of \(\varepsilon_{d}\)) due to the Hubbard resonance. This behavior extends to higher \(\Delta T\) as long as the thermal gradient is smaller than the Kondo energy scale (or \(\Delta\varepsilon_{\text{exch}}\)). Otherwise, another sign change occurs as a function of \(\Delta T\), see Fig. 6(c). A very similar dependence can be observed in Fig. 6(e), which shows the nonequilibrium Seebeck coefficient \(S_{n}^{P}\). The main difference is present for large \(\Delta T\), where \(S_{n}^{P}\) takes considerable values while \(S_{d}^{P}\) decreases, as explained earlier.
The situation is completely different in the case of the antiparallel configuration, where one does not see any
Figure 6: (a,b) The differential conductance \(G\), (c,d) the differential Seebeck coefficient \(S_{d}\) and (e,f) the nonequilibrium Seebeck coefficient \(S_{n}\) in (first column) the parallel (P) and (second column) antiparallel (AP) configuration calculated as a function of \(\Delta T\) and \(\varepsilon_{d}\) assuming linear response in voltage. The spin polarizations of both leads are equal to \(p=0.4\) and the other parameters are the same as in Fig. 4.
additional sign changes, neither in \(S_{d}^{AP}\) nor in \(S_{n}^{AP}\), other than the ones present across \(\varepsilon_{d}=-U/2\), see Figs. 6(d) and (f). This can be understood by realizing that the interplay of the exchange field with spin-dependent tunneling to the right contact hinders the splitting of the Kondo resonance as a function of the bias voltage. Consequently, one only observes a single resonance displaced from \(V=0\), cf. Fig. 3(c), which results in a much more regular dependence of the differential and nonequilibrium Seebeck coefficients.
#### iii.2.2 The case of nonlinear potential bias and temperature gradient
The nonequilibrium thermoelectric properties of the quantum dot coupled to both ferromagnetic leads are shown in Fig. 7. The first row corresponds to the case of the parallel configuration of the leads' magnetizations. The differential conductance depicted in Fig. 7(a) exhibits the split Kondo anomaly, with side peaks of similar magnitude located at roughly the same distance from zero bias. Both peaks die off with the temperature gradient around \(\Delta T\approx 0.05\,\Gamma_{L}\), i.e., when the thermal gradient exceeds the Kondo temperature.
At low \(\Delta T\), the differential and nonequilibrium Seebeck coefficients exhibit a similar bias voltage dependence to the case presented in Figs. 5(e) and (f), see Figs. 7(b) and (c). Now, however, the region of negative Seebeck coefficient is smaller. This can be attributed to the fact that the split Kondo resonance is more symmetric across the bias reversal in the case of the parallel magnetic configuration, cf. Fig. 3(b). Unlike in the case of a nonmagnetic right lead, the sign changes at finite bias corresponding to the split Kondo peak persist as long as \(\Delta T\lesssim T_{K}\) and disappear around a comparable temperature gradient.
The case of the antiparallel magnetic configuration of the system is presented in the second row of Fig. 7. Consistent with the discussion of Fig. 3(c), the differential conductance exhibits two conductance peaks but with a large difference in their magnitudes. The peak in the negative bias regime is far more pronounced than the minuscule peak one can observe in the positive regime. Just as in the other configurations, the peaks die out with increasing temperature gradient, but the negative bias peak survives up to larger temperature gradients, \(\Delta T\approx 0.2\Gamma_{L}\), whereas the positive bias peak vanishes at temperature gradients as low as \(\Delta T\approx 0.02\Gamma_{L}\).
The Seebeck coefficients \(S_{d}^{AP}\) and \(S_{n}^{AP}\), shown in Figs. 7(e) and (f), respectively, demonstrate a behavior similar to the other configurations only at very low temperature gradients. However, now, instead of sign changes, one only observes suppression of the Seebeck coefficients at the corresponding values of the bias voltage associated with the exchange field. These suppressions extend to temperature gradients of the order of \(\Delta T\approx 0.03\Gamma_{L}\), see Figs. 7(e) and (f).
Figure 7: (a,d) The differential conductance \(G\), (b,e) the differential Seebeck coefficient \(S_{d}\) and (c,f) the nonequilibrium Seebeck coefficient \(S_{n}\) as a function of the bias voltage and temperature gradient in the case of \(\varepsilon_{d}=-U/3\). The first (second) row corresponds to the parallel (antiparallel) magnetic configuration of the system. The other parameters are the same as in Fig. 6.
### Finite spin accumulation and the associated nonequilibrium spin Seebeck effect
In this section, we consider the case when the ferromagnetic contacts are characterized by slow spin relaxation, which can result in a finite spin accumulation [53; 54]. Such a spin accumulation will induce a spin bias across the quantum dot. Here, we assume that the spin accumulation and the resulting spin-dependent chemical potential occur only in the right lead. Thus, we define the induced spin bias as \(V_{s}/2=\mu_{R\uparrow}=-\mu_{R\downarrow}\) (keeping \(\mu_{L}=0\)). The nonequilibrium spin bias across the quantum dot enables the spin chemical potentials to be tuned separately, and thus the transport induced by the thermal bias can be different in the separate spin channels. The system can then exhibit interesting spin caloritronic properties in this setup, such as the spin Seebeck effect. The spin Seebeck coefficient \(S_{s}\) quantifies the magnitude and the direction of the spin current induced in the presence of a thermal bias [55]. Analogous to the differential Seebeck effect \(S_{d}\), the differential spin Seebeck coefficient \(S_{s}\) in the nonlinear
Figure 8: The charge Seebeck (first column) and the spin Seebeck (second column) coefficients under nonlinear temperature gradient \(\Delta T\) and linear response spin bias \(V_{s}\) as a function of the orbital level energy \(\varepsilon_{d}\) and \(\Delta T\). The first row corresponds to the case of nonmagnetic right lead, while the second (third) row presents the case of ferromagnetic right lead in the parallel (antiparallel) magnetic configuration of the system. The other parameters are the same as in Fig. 6.
response regime can be defined as
\[S_{s}=-\left(\frac{dV_{s}}{d\Delta T}\right)_{I_{s}}=-\left(\frac{\partial I_{s}}{\partial\Delta T}\right)_{V_{s}}\left/\left(\frac{\partial I_{s}}{\partial V_{s}}\right)_{\Delta T},\right. \tag{10}\]
where \(I_{s}=I_{\uparrow}(\mu_{R\uparrow},\Delta T)-I_{\downarrow}(\mu_{R\downarrow},\Delta T)\) is the net spin current flowing through the system. This coefficient describes the response of the spin current to both the spin bias \(V_{s}\) and the temperature gradient \(\Delta T\). In addition to the net spin current, there can also exist a charge current \(I=\sum_{\sigma}I_{\sigma}(\mu_{R\sigma},\Delta T)\) flowing across the system, originating solely from the thermal and spin biases. We define the Seebeck coefficient that estimates the charge current in the presence of the spin bias as the charge Seebeck coefficient \(S\) [53]. The charge Seebeck coefficient \(S\) can thus be defined based on the response of the charge current \(I\) as
\[S=-\left(\frac{dV_{s}}{d\Delta T}\right)_{I}=-\left(\frac{\partial I}{ \partial\Delta T}\right)_{V_{s}}\left/\left(\frac{\partial I}{\partial V_{s }}\right)_{\Delta T}.\right. \tag{11}\]
We first discuss the case of linear response in the spin bias \(V_{s}\) with a large and finite temperature gradient \(\Delta T\), focusing on the differential spin Seebeck coefficient \(S_{s}\) and the charge Seebeck coefficient \(S\). It is pertinent to note that the nonequilibrium equivalent of the spin Seebeck coefficient \(S_{s,n}\) tends to remain undefined in our considerations, since the magnitude of the spin bias fails to compensate for the thermally induced spin current in (_parts of_) the regimes considered. Hence, in this paper, we limit our discussions to the differential spin Seebeck coefficient \(S_{s}\equiv S_{s,d}\) in the case of different configurations. We further investigate the dependence of \(S_{s}\) and \(S\) on large and finite spin bias under an applied temperature gradient.
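To make Eqs. (10) and (11) concrete, the sketch below evaluates \(S_{s}\) and \(S\) numerically by central finite differences. It is only an illustration of the definitions: the spin-resolved current functions, the working point, and the step size \(h\) are placeholder assumptions, not part of the actual calculation scheme used in this work.

```python
def seebeck_coefficients(I_up, I_dn, Vs, dT, h=1e-6):
    """Evaluate the spin Seebeck S_s [Eq. (10)] and charge Seebeck S [Eq. (11)]
    at a working point (Vs, dT) by central finite differences.

    I_up, I_dn are user-supplied callables I_sigma(mu_R_sigma, dT); the spin
    bias enters through mu_Rup = +Vs/2 and mu_Rdn = -Vs/2 (with mu_L = 0).
    """
    def I_s(vs, dt):  # net spin current I_s = I_up - I_dn
        return I_up(+vs / 2.0, dt) - I_dn(-vs / 2.0, dt)

    def I_c(vs, dt):  # charge current I = I_up + I_dn
        return I_up(+vs / 2.0, dt) + I_dn(-vs / 2.0, dt)

    def dd(f, wrt):   # central difference with respect to 'vs' or 'dt'
        if wrt == "vs":
            return (f(Vs + h, dT) - f(Vs - h, dT)) / (2.0 * h)
        return (f(Vs, dT + h) - f(Vs, dT - h)) / (2.0 * h)

    S_s = -dd(I_s, "dt") / dd(I_s, "vs")  # Eq. (10)
    S = -dd(I_c, "dt") / dd(I_c, "vs")    # Eq. (11); the denominator is G^cs
    return S_s, S
```

Evaluating these expressions on a grid of \((V_{s},\Delta T)\) points reproduces the kind of maps shown in Figs. 8 and 9, provided the underlying spin-resolved currents are available.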
#### iii.2.1 The case of zero spin bias with nonlinear temperature gradient
Figure 8 shows the behavior of the charge Seebeck coefficients \(S\), \(S^{P}\), \(S^{AP}\) and the spin Seebeck coefficients \(S_{s}\), \(S_{s}^{P}\), \(S_{s}^{AP}\) for the case of a nonmagnetic right lead, as well as the case of a ferromagnetic right lead in the parallel and antiparallel magnetic configurations, respectively. The first row of Fig. 8 shows the case of a right lead with spin polarization \(p=0\), but with finite spin accumulation arising from the spin-resolved transport through the quantum dot. Figure 8(a) displays the charge Seebeck coefficient \(S\), which behaves similarly to the differential Seebeck coefficient \(S_{d}\) presented in Fig. 4, except for some points of divergence. At temperature gradients smaller than \(\Gamma_{L}\), there exist two additional sign changes, both in the local moment regime, symmetric across the particle-hole symmetry point. The points of sign change spread out of the local moment regime for thermal biases \(\Delta T\gtrsim 3\,\Gamma_{L}\). The sign changes of the Seebeck effect are also accompanied by large divergences in the magnitude of \(S\). The additional sign changes and divergences originate from the behavior of the denominator in the definition of \(S\), cf. Eq. (11). This denominator, which can be written as \(G^{cs}=(\partial I/\partial V_{s})_{\Delta T}\), is the differential mixed conductance [53] that estimates the charge current in the presence of a spin bias; it can be either negative or positive, and its zero crossings cause the divergences. From a physical perspective, tuning the temperature gradient in these specific regimes will result in extraordinary changes in the induced charge current. Note that the colormaps in Figs. 8(a) and (e) have been truncated for readability.
The charge Seebeck coefficient for the parallel configuration [see Fig. 8(c)] perfectly recreates the behavior seen in Fig. 6(c). In the case of the parallel configuration, the relative scaling of the couplings in each spin channel on the right and left is the same, resulting in a non-negative \(G^{cs}\) and, thus, no divergences. Similarly, the charge Seebeck effect for the antiparallel configuration, shown in Fig. 8(e), resembles the Seebeck coefficient discussed in Fig. 6(d), but is overlaid by the divergences associated with \(G^{cs}\). In this case, the additional sign changes start from inside the local moment regime at very low temperature gradients and move out of the local moment regime monotonically around \(\Delta T\approx 10^{-1}\,\Gamma_{L}\).
The differential spin Seebeck coefficient \(S_{s}\), shown in panels (b), (d) and (f) of Fig. 8 for the different lead configurations, behaves antisymmetrically across the particle-hole symmetry point (\(\varepsilon_{d}=-U/2\)). There exists a pronounced spin Seebeck coefficient in the local moment regime for all the configurations, which dies off at \(\Delta T\gtrsim 10\,\Gamma_{L}\). Such regions of considerable spin Seebeck effect have been observed in linear response studies of symmetrically coupled quantum dots as a function of the global temperature \(T\) [29; 21]. In addition to the sign change at the particle-hole symmetry point, at very low \(\Delta T\), \(S_{s}\) changes sign when moving out of the local moment regime (i.e., at \(\varepsilon_{d}\approx-U,0\)). In the case of the nonmagnetic right lead, the region of sign change outside the local moment regime extends up to \(\Delta T\approx\Gamma_{L}\), whereas for the antiparallel configuration the sign change extends only up to \(\Delta T\approx 0.2\,\Gamma_{L}\). On the other hand, the sign change of the spin Seebeck coefficient in the local moment regime survives at thermal gradients even greater than \(\Delta T\approx 10^{2}\,\Gamma_{L}\) for the parallel configuration.
#### iii.2.2 The case of nonlinear spin bias and temperature gradient
The dependence of the nonlinear charge Seebeck and the spin Seebeck effect is shown in Fig. 9 for the orbital energy level \(\varepsilon_{d}=-U/3\). The first column in Fig. 9 focuses on the charge Seebeck effect for various magnetic configurations of the system. For the case of a nonmagnetic right lead, the charge Seebeck coefficient \(S\) changes sign four times as a function of \(V_{s}\) at temperature gradients below \(\Delta T\approx 0.5\Gamma_{L}\), see Fig. 9(a). Two of these sign changes (around \(V_{s}\approx 0.001\,U\) and \(V_{s}\approx 0.15\,U\)) correspond to the zeros in the mixed
conductance \(G^{cs}\), which can be identified from the divergence in \(S\) around the sign changes. The other two sign changes (around \(V_{s}\approx-0.05\,U\) and \(V_{s}\approx 0.03\,U\)) originate from the zeros of the thermal response \(-(\partial I/\partial\Delta T)_{V_{s}}\), i.e., the numerator in the definition of the charge Seebeck, cf. Eq. (11). As the temperature gradient increases, the regions of sign change introduced by \(G^{cs}\) and by the thermal response become larger in the spin bias regime, until around \(\Delta T\approx\Gamma_{L}/3\) for the sign change associated with the mixed conductance and \(\Delta T\approx\Gamma_{L}/2\) for the sign change from the thermal response. With further increase in the temperature gradient \(\Delta T\), the regions of sign change disappear. This happens around \(\Delta T\gtrsim\Gamma_{L}/2\) for the sign change caused by the mixed conductance and \(\Delta T\gtrsim 0.8\Gamma_{L}\) for the sign change due to the thermal response. The remaining two sign changes, at \(V_{s}=-U/2\) and \(V_{s}=U/2\), correspond to the Hubbard peaks of the quantum dot spectral function. The region of these sign changes disappears above a temperature gradient \(\Delta T\gtrsim 4\,\Gamma_{L}\). At \(V_{s}=0\) and very large temperature gradients (around \(\Delta T\gtrsim 10\,\Gamma_{L}\)), there exists another sign change that originates from the zeros of \(G^{cs}\). For positive \(V_{s}\), this sign change moves to lower \(\Delta T\), while for negative \(V_{s}\) it moves to higher \(\Delta T\), see
Figure 9: The charge Seebeck (first column) and the spin Seebeck (second column) coefficients for the orbital level \(\varepsilon_{d}=-U/3\) as a function of the applied spin bias \(V_{s}\) and \(\Delta T\). The first row corresponds to the case of nonmagnetic right lead, while the second (third) row presents the case of ferromagnetic right lead in the parallel (antiparallel) magnetic configuration of the system. The other parameters are the same as in Fig. 6.
Fig. 9(a).
Figure 9(c) shows the charge Seebeck effect \(S^{P}\) corresponding to the system in the parallel configuration of the leads. We observe that there are two sign changes as a function of the spin bias \(V_{s}\). At low temperature gradients, \(\Delta T\lesssim 0.01\,\Gamma_{L}\), the region of sign change appears between \(V_{s}\approx 0.005\,U\) and \(V_{s}\approx U/2\). One can identify that these sign changes originate solely from the thermal response of the current under spin bias. With an increase in \(\Delta T\), the sign change at \(V_{s}\approx 0.005\,U\) crosses over to the negative \(V_{s}\) regime and the sign change around \(V_{s}\approx U/2\) moves closer to \(V_{s}\approx 2\,U/3\), thus increasing the region of sign change in the spin bias \(V_{s}\) regime until around \(\Delta T\approx 0.1\,\Gamma_{L}\). On further increase in the temperature gradient, the regions of sign change tend to disappear once \(\Delta T\approx 0.2\,\Gamma_{L}\). On the other hand, outside of this regime, the sign of the spin-resolved thermopower remains positive. We also note that there exists another point of sign change due to the contribution from the Hubbard peaks in the spectral function. Unlike in the previous case of \(S\), this sign change survives for large temperature gradients \(\Delta T\) and moves closer to \(V_{s}=0\) when the temperature gradient is increased beyond \(\Delta T\gtrsim\Gamma_{L}\), see Fig. 9(c).
The charge Seebeck coefficient for the antiparallel configuration, \(S^{AP}\), does not show any sign change in the local moment regime apart from the particle-hole symmetry point \(\varepsilon_{d}=-U/2\), as seen in Fig. 8(e). However, as a function of the spin bias \(V_{s}\), two points of sign change form in the dependence of the charge Seebeck effect \(S^{AP}\). One change occurs in the negative spin bias regime around \(V_{s}\approx-0.15\,U\) and the other one in the positive regime at \(V_{s}\approx 0.03\,U\). With increasing \(\Delta T\), these changes move further apart into the negative and positive spin bias regimes, respectively.
It is important to emphasize that the sign changes observed in the charge Seebeck coefficient as a function of spin bias \(V_{s}\) do not correspond to the sign changes seen in the Seebeck coefficient as a function of \(V\), as discussed and presented in Fig. 5 and Fig. 7. This is associated with the fact that the generated current [Eq. (5)] as a function of \(V\) scans through each of the split Kondo resonances shown in Fig. 2 separately, resulting in the split peaks seen in the differential conductance and the corresponding sign changes in the Seebeck coefficients. However, as a function of the spin bias \(V_{s}\), the signatures from the split Kondo resonance cannot be identified directly in the generated current \(I\). This is because the spin bias \(\mu_{R\uparrow}-\mu_{R\downarrow}=V_{s}\) scans both split Kondo peaks (see Fig. 2) simultaneously, and the total current \(I\) is rescaled just by the relative couplings of the separate spin channels \(\Gamma_{R\sigma}\). Hence, the sign changes in the charge Seebeck coefficient result solely from the sign changes in the thermal response and the mixed charge conductance.
The spin Seebeck coefficient in the nonlinear spin bias regime is presented in the second column of Fig. 9. Panels (b), (d) and (f) show the case of the nonmagnetic right lead as well as the ferromagnetic right lead in the parallel and antiparallel configuration, respectively. From the discussion of the linear \(V_{s}\) case shown in Fig. 8, we observe that the differential spin Seebeck coefficient does not change sign inside the local moment regime for all three configurations. Under finite spin bias \(V_{s}\), we can see only one sign change in the positive spin bias regime around \(V_{s}\approx U/4\) for all magnetic configurations. The point of sign change shifts towards the positive regime with increasing temperature gradient \(\Delta T\). The behavior of the spin Seebeck coefficient is identical for all the configurations apart from slight differences in the magnitude, meaning that this originates solely from the properties of the spectral function outside the split Kondo peaks. In the case of the parallel configuration, we observe a small region of additional sign change between \(V_{s}\approx 0.2\,U\) and \(V_{s}\approx U/2\). Such behavior has already been observed in the nonequilibrium thermopower of similar systems, where it has been attributed to the characteristic behavior of the spectral function for energies between the Kondo and the Hubbard peak [38].
## IV Summary
In this paper, we have studied the nonequilibrium thermoelectric properties of the system consisting of a quantum dot/molecule asymmetrically coupled to external ferromagnetic leads. The strongly coupled ferromagnetic contact induces an exchange field in the dot that can split and suppress the Kondo resonance. The emphasis has been put on the signatures of the interplay between spin-resolved tunneling and strong electron correlations in the nonequilibrium thermopower of the system. In particular, we have determined the bias voltage and temperature gradient dependence of the differential and nonequilibrium Seebeck coefficients. We have observed new signatures in the Seebeck coefficients corresponding to the Kondo resonance and the regions where the exchange field induced by the ferromagnetic contact suppresses the Kondo effect, both in the potential bias and temperature gradient regimes. More specifically, we have demonstrated that the Seebeck coefficient exhibits new sign changes as a function of bias voltage, which are associated with the split Kondo resonance. These sign changes extend to temperature gradients on the order of the Kondo temperature. Furthermore, we have investigated the influence of the spin accumulation and the resulting spin bias on the Seebeck and spin Seebeck coefficients. The nonlinear charge Seebeck coefficient and the spin Seebeck coefficient showed points of sign change in the presence of finite spin and thermal biases, corresponding to the different properties of the quantum dot spectral function.
###### Acknowledgements.
This work was supported by the Polish National Science Centre from funds awarded through the decision No. 2017/27/B/ST3/00621. We also acknowledge the
computing time at the Poznan Supercomputing and Networking Center.
|
2303.02260
|
Learning to reason over visual objects
|
A core component of human intelligence is the ability to identify abstract
patterns inherent in complex, high-dimensional perceptual data, as exemplified
by visual reasoning tasks such as Raven's Progressive Matrices (RPM). Motivated
by the goal of designing AI systems with this capacity, recent work has focused
on evaluating whether neural networks can learn to solve RPM-like problems.
Previous work has generally found that strong performance on these problems
requires the incorporation of inductive biases that are specific to the RPM
problem format, raising the question of whether such models might be more
broadly useful. Here, we investigated the extent to which a general-purpose
mechanism for processing visual scenes in terms of objects might help promote
abstract visual reasoning. We found that a simple model, consisting only of an
object-centric encoder and a transformer reasoning module, achieved
state-of-the-art results on both of two challenging RPM-like benchmarks (PGM
and I-RAVEN), as well as a novel benchmark with greater visual complexity
(CLEVR-Matrices). These results suggest that an inductive bias for
object-centric processing may be a key component of abstract visual reasoning,
obviating the need for problem-specific inductive biases.
|
Shanka Subhra Mondal, Taylor Webb, Jonathan D. Cohen
|
2023-03-03T23:19:42Z
|
http://arxiv.org/abs/2303.02260v2
|
# Learning to reason over visual objects
###### Abstract
A core component of human intelligence is the ability to identify abstract patterns inherent in complex, high-dimensional perceptual data, as exemplified by visual reasoning tasks such as Raven's Progressive Matrices (RPM). Motivated by the goal of designing AI systems with this capacity, recent work has focused on evaluating whether neural networks can learn to solve RPM-like problems. Previous work has generally found that strong performance on these problems requires the incorporation of inductive biases that are specific to the RPM problem format, raising the question of whether such models might be more broadly useful. Here, we investigated the extent to which a general-purpose mechanism for processing visual scenes in terms of objects might help promote abstract visual reasoning. We found that a simple model, consisting only of an object-centric encoder and a transformer reasoning module, achieved state-of-the-art results on both of two challenging RPM-like benchmarks (PGM and I-RAVEN), as well as a novel benchmark with greater visual complexity (CLEVR-Matrices). These results suggest that an inductive bias for object-centric processing may be a key component of abstract visual reasoning, obviating the need for problem-specific inductive biases.
## 1 Introduction
Human reasoning is driven by a capacity to extract simple, low-dimensional abstractions from complex, high-dimensional inputs. We perceive the world around us in terms of objects, relations, and higher order patterns, allowing us to generalize beyond the sensory details of our experiences, and make powerful inferences about novel situations Spearman (1923); Gick & Holyoak (1983); Lake et al. (2017). This capacity for abstraction is particularly well captured by visual analogy problems, in which the reasoner must abstract over the superficial details of visual inputs, in order to identify a common higher order pattern (Gentner, 1983; Holyoak, 2012). A particularly challenging example of these kinds of problems are the Raven's Progressive Matrices (RPM) problem sets (Raven, 1938), which have been found to be especially diagnostic of human reasoning abilities (Snow et al., 1984).
A growing body of recent work has aimed to build learning algorithms that capture this capacity for abstract visual reasoning. Much of this previous work has revolved around two recently developed benchmarks - the Procedurally Generated Matrices (PGM) (Barrett et al., 2018), and the RAVEN dataset (Zhang et al., 2019) - consisting of a large number of automatically generated RPM-like problems. As in RPM, each problem consists of a \(3\times 3\) matrix populated with geometric forms, in which the bottom right cell is blank. The challenge is to infer the abstract pattern that governs the relationship along the first two columns and/or rows of the matrix, and use that inferred pattern to 'fill in the blank', by selecting from a set of choices. As can be seen in Figure 1, these problems can be quite complex, with potentially many objects per cell, and multiple rules per problem, yielding a highly challenging visual reasoning task.
There is substantial evidence that human visual reasoning is fundamentally organized around the decomposition of visual scenes into objects (Duncan, 1984; Pylyshyn, 1989; Peters & Kriegeskorte, 2021). Objects offer a simple, yet powerful, low-dimensional abstraction that captures the inherent compositionality underlying visual scenes. Despite the centrality of objects in visual reasoning, previous works have so far not explored the use of object-centric representations in abstract visual reasoning tasks such as RAVEN and PGM, or at best have employed an imprecise approximation to object representations based on spatial location.
Recently, a number of methods have been proposed for the extraction of precise object-centric representations directly from pixel-level inputs, without the need for veridical segmentation data (Greff et al., 2019; Burgess et al., 2019; Locatello et al., 2020; Engelcke et al., 2021). While these methods have been shown to improve performance in some visual reasoning tasks, including question answering from video (Ding et al., 2021) and prediction of physical interactions from video (Wu et al., 2022), previous work has not addressed whether this approach is useful in the domain of _abstract_ visual reasoning (i.e., visual analogy). To address this, we developed a model that combines an object-centric encoding method, _slot attention_ (Locatello et al., 2020), with a generic transformer-based reasoning module (Vaswani et al., 2017). The combined system, termed the _Slot Transformer Scoring Network_ (STSN, Figure 1), achieves state-of-the-art performance on both PGM and I-RAVEN (a more challenging variant of RAVEN), despite its general-purpose architecture and lack of task-specific augmentations. Furthermore, we developed a novel benchmark, the _CLEVR-Matrices_ (Figure 2), using a similar RPM-like problem structure, but with greater visual complexity, and found that STSN also achieves state-of-the-art performance on this task. These results suggest that object-centric encoding is an essential component for achieving strong abstract visual reasoning, and indeed may be even more important than some task-specific inductive biases.
Figure 1: Slot Transformer Scoring Network (STSN). STSN combines slot attention, an object-centric encoding method, and a transformer reasoning module. Slot attention decomposes each image panel into a set of \(K\) slots, which are randomly initialized and iteratively updated through competitive attention over the image. STSN assigns a score to each of the 8 potential answers, by independently evaluating the combination of each answer choice together with the 8 context panels. For each answer choice, slots are extracted from that choice, and the context panels, and these slots are concatenated to form a sequence that is passed to the transformer, which then generates a score. The scores for all answer choices are passed through a softmax in order to compute the task loss \(\mathcal{L}_{task}\). Additionally, the slots for each image panel are passed through a slot decoder, yielding a reconstruction of that image panel, from which the reconstruction loss \(\mathcal{L}_{recon}\) is computed.
## 2 Related Work
Since the introduction of the PGM (Barrett et al., 2018) and RAVEN (Zhang et al., 2019) datasets, a number of methods have been proposed for learning to solve RPM-like problems Barrett et al. (2018); Steenbrugge et al. (2018); Van Steenkiste et al. (2019); Zhang et al. (2019); Zheng et al. (2019); Spratley et al. (2020); Jahrens and Martinetz (2020); Wang et al. (2020); Wu et al. (2020); Benny et al. (2021); Hu et al. (2021); Zhuo and Kankanhalli (2022). Though significant progress has been made, the best performing methods generally rely on inductive biases that are specifically tailored to the RPM problem format. For instance, the Scattering Compositional Learner (SCL) (Wu et al., 2020), arguably the best current model (achieving strong performance on both PGM and I-RAVEN), assumes that rules are independently applied in each feature dimension, with no interaction between features. Similarly, the Multi-Scale Relation Network (MRNet) (Benny et al., 2021), which achieves strong performance on PGM, explicitly builds the row-wise and column-wise structure of RPM problems into its architecture. These approaches raise the question of whether problem-specific inductive biases are necessary to achieve strong performance on these problems.
Here, we explore the utility of a more general-purpose inductive bias - a mechanism for processing visual scenes in terms of objects. In contrast, most previous approaches to solving RPM-like problems have operated over embeddings of entire image panels, and thus likely fail to capture the compositional structure of such multi-object visual inputs. Some work has attempted to approximate object-centric representations, for instance by treating spatial location as a proxy for objects (Wu et al., 2020), or by employing encodings at different spatial scales (Benny et al., 2021) (therefore preferentially capturing larger vs. smaller objects), but it is not clear that these approximations extract precise object-centric representations, especially in problems with many overlapping objects, such as PGM.
Recently, a number of methods have been proposed to address the challenging task of annotation-free object segmentation (Greff et al., 2019; Burgess et al., 2019; Locatello et al., 2020; Engelcke et al., 2021). In this approach, the decomposition of a visual scene into objects is treated as a latent variable to be inferred in the service of a downstream objective, such as autoencoding, without access to any explicit segmentation data. Here, we used the slot attention method (Locatello et al., 2020), but our approach should be compatible with other object-centric encoding methods.
Our method employs a generic transformer (Vaswani et al., 2017) to perform reasoning over the object-centric representations extracted by slot attention. This approach allows the natural permutation invariance of objects to be preserved in the reasoning process. A few other recent efforts have employed systems that provide object-centric representations as the input to a transformer network (Ding et al., 2021; Wu et al., 2022), most notably ALOE (Attention over Learned Object Embeddings (Ding et al., 2021)), which used a different object encoding method (MONet (Burgess et al., 2019)). Such systems have exhibited strong visual reasoning performance in some tasks, such as question answering from video, that require processing of relational information. Here, we go beyond this work, to test: a) the extent to which object-centric processing can subserve more _abstract_ visual reasoning, involving the processing of higher-order relations, as required for visual analogy tasks such as PGM and I-RAVEN; and b) whether this approach obviates the need for problem-specific inductive biases that have previously been proposed for these tasks.
## 3 Approach
### Problem Definition
Each RPM problem consists of a \(3\times 3\) matrix of panels in which each panel is an image consisting of varying numbers of objects with attributes like size, shape, and color. The figures in each row or column obey a common set of abstract rules. The last panel (in the third row and column) is missing and must be filled from a set of eight candidate panels so as to best complete the matrix according to the abstract rules. Formally, each RPM problem consists of 16 image panels \(X=\{x_{i}\}_{i=1}^{16}\), in which the first 8 image panels are context images \(X_{c}=\{x_{i}\}_{i=1}^{8}\) (i.e., all panels in the \(3\times 3\) problem matrix except the final blank panel), and the last 8 image panels are candidate answer images \(X_{a}=\{x_{i}\}_{i=9}^{16}\). The task is to select \(y\), the index for the correct answer image.
### Object-centric Encoder
STSN employs slot attention (Locatello et al., 2020) to extract object-centric representations. Slot attention first performs some initial processing of the images using a convolutional encoder, producing a feature map, which is flattened to produce \(\mathbf{inputs}\in\mathbb{R}^{N\times D_{inputs}}\), where \(N=H\times W\) (the height and width of the feature map), and \(D_{inputs}\) is the number of channels. Then, the slots \(\mathbf{slots}\in\mathbb{R}^{K\times D_{slot}}\) are initialized to form a set of \(K\) slot embeddings, each with dimensionality \(D_{slot}\). We set the value of \(K\) to be equal to the maximum number of objects possible in a given image panel (based on the particular dataset). For each image, the slots are randomly initialized from a distribution \(\mathcal{N}(\mu,\mathrm{diag}(\sigma))\in\mathbb{R}^{K\times D_{slot}}\) with shared mean \(\mu\in\mathbb{R}^{D_{slot}}\) and variance \(\sigma\in\mathbb{R}^{D_{slot}}\) (each of which is learned). The slots are then iteratively updated based on a transformer-style attention operation. Specifically, each slot emits a query \(\mathrm{q}(\mathbf{slots})\in\mathbb{R}^{K\times D_{slot}}\) through a linear projection, and each location in the feature map emits a key \(\mathrm{k}(\mathbf{inputs})\in\mathbb{R}^{N\times D_{slot}}\) and value \(\mathrm{v}(\mathbf{inputs})\in\mathbb{R}^{N\times D_{slot}}\). A dot product query-key attention operation followed by softmax is then used to generate the attention weights \(\mathbf{attn}=\mathrm{softmax}(\frac{1}{\sqrt{D_{slot}}}\mathrm{k}(\mathbf{inputs})\cdot\mathrm{q}(\mathbf{slots})^{\top})\), and a weighted mean of the values \(\mathbf{updates}=\mathbf{attn}\cdot\mathrm{v}(\mathbf{inputs})\) is used to update the slot representations using a Gated Recurrent Unit (Cho et al., 2014), followed by a residual MLP with ReLU activations. More details can be found in Locatello et al. (2020). After \(T\) iterations of slot attention, the resulting slots are passed through a reasoning module, which we describe in the following section.
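For concreteness, a condensed PyTorch-style sketch of a single slot-attention iteration is given below; the module layout, normalization placement, and tensor shapes are illustrative simplifications of Locatello et al. (2020), not the exact implementation used here.

```python
import torch
import torch.nn as nn

class SlotAttentionStep(nn.Module):
    """One iteration of slot attention (Locatello et al., 2020), condensed."""
    def __init__(self, d_slot, d_inputs):
        super().__init__()
        self.to_q = nn.Linear(d_slot, d_slot, bias=False)
        self.to_k = nn.Linear(d_inputs, d_slot, bias=False)
        self.to_v = nn.Linear(d_inputs, d_slot, bias=False)
        self.gru = nn.GRUCell(d_slot, d_slot)
        self.mlp = nn.Sequential(nn.Linear(d_slot, d_slot), nn.ReLU(),
                                 nn.Linear(d_slot, d_slot))
        self.norm_slots = nn.LayerNorm(d_slot)

    def forward(self, slots, inputs):
        # slots: (B, K, d_slot); inputs: (B, N, d_inputs), with N = H * W
        q = self.to_q(self.norm_slots(slots))              # (B, K, d_slot)
        k, v = self.to_k(inputs), self.to_v(inputs)        # (B, N, d_slot)
        logits = torch.einsum('bnd,bkd->bnk', k, q) / q.shape[-1] ** 0.5
        attn = logits.softmax(dim=-1)                      # competition across slots
        attn = attn / attn.sum(dim=1, keepdim=True)        # weighted mean over inputs
        updates = torch.einsum('bnk,bnd->bkd', attn, v)    # (B, K, d_slot)
        slots = self.gru(updates.reshape(-1, updates.shape[-1]),
                         slots.reshape(-1, slots.shape[-1])).view_as(slots)
        return slots + self.mlp(slots)                     # residual MLP update
```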
In order to encourage the model to make use of slot attention in an object-centric manner, we also included a slot decoder to generate reconstructions of the original input images. To generate reconstructions, we first used a spatial broadcast decoder (Watters et al., 2019) to generate both a reconstructed image \(\tilde{x}_{k}\) and a mask \(m_{k}\) for each slot. We then generated a combined reconstruction by normalizing the masks across slots using a softmax, and using the normalized masks to compute a weighted average of the slot-specific reconstructions.
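The mask-weighted recombination can be written compactly as follows (a sketch; the tensor shapes are assumptions):

```python
import torch

def combine_slot_reconstructions(x_k: torch.Tensor, m_k: torch.Tensor) -> torch.Tensor:
    """Combine per-slot reconstructions into a single image.

    x_k: (B, K, C, H, W) slot-specific reconstructions from the spatial broadcast decoder.
    m_k: (B, K, 1, H, W) unnormalized slot masks.
    """
    alpha = torch.softmax(m_k, dim=1)   # normalize masks across slots
    return (alpha * x_k).sum(dim=1)     # weighted average -> (B, C, H, W)
```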
### Reasoning Module
After object representations are extracted by slot attention, they are then passed to a transformer (Vaswani et al., 2017). For each candidate answer choice \(x_{a}\in\{x_{i}\}_{i=9}^{16}\), the transformer operates over the slots obtained from the 8 context images \(\mathbf{slots}_{x_{1:8}}\), and the image for that answer choice \(\mathbf{slots}_{x_{a}}\). We flattened the slots over the dimensions representing the number of slots and images, such that, for each candidate answer, the transformer operated over \(\mathrm{flatten}(\mathbf{slots}_{x_{1:8}},\mathbf{slots}_{x_{a}})\in\mathbb{R}^{9K\times D_{slot}}\). We then applied Temporal Context Normalization (TCN) (Webb et al., 2020), which has been shown to significantly improve out-of-distribution generalization in relational tasks, over the flattened sequence of slots. To give the model knowledge about which slot representation corresponded to which row and column of the matrix, we added a learnable linear projection \(\mathbb{R}^{6}\rightarrow\mathbb{R}^{D_{slot}}\) from one-hot encodings of the row and column indices (after applying TCN). We concatenated a learned CLS token (analogous to the CLS token in Devlin et al. (2018)) of dimension \(D_{slot}\), before passing the full sequence through a transformer with \(L\) layers and \(H\) self-attention heads. The transformed value of the CLS token was passed through a linear output unit to generate a score for each candidate answer image, and the scores for all answers were passed through a softmax to generate a prediction \(\hat{y}\).
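The scoring procedure can be summarized by the sketch below. The TCN helper omits the learnable gain and bias of Webb et al. (2020), and all module sizes, names, and shapes are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

def temporal_context_norm(z, eps=1e-8):
    # Temporal Context Normalization: normalize each feature over the sequence
    # (context) dimension; learnable gain/bias omitted for brevity.
    return (z - z.mean(dim=1, keepdim=True)) / (z.std(dim=1, keepdim=True) + eps)

class ReasoningModule(nn.Module):
    """Transformer scoring head over slot sequences (illustrative sizes)."""
    def __init__(self, d_slot, n_layers=8, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_slot, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.pos = nn.Linear(6, d_slot)               # one-hot row/column -> embedding
        self.cls = nn.Parameter(torch.randn(1, 1, d_slot))
        self.score = nn.Linear(d_slot, 1)

    def forward(self, ctx_slots, ans_slots, row_col_onehot):
        # ctx_slots: (B, 8, K, d); ans_slots: (B, 8, K, d);
        # row_col_onehot: (9*K, 6) one-hot row and column indices of the 3x3 grid.
        B, _, K, d = ctx_slots.shape
        scores = []
        for a in range(8):                            # score each candidate answer
            seq = torch.cat([ctx_slots, ans_slots[:, a:a + 1]], dim=1)  # (B, 9, K, d)
            seq = temporal_context_norm(seq.reshape(B, 9 * K, d))
            seq = seq + self.pos(row_col_onehot)      # add row/column information
            seq = torch.cat([self.cls.expand(B, -1, -1), seq], dim=1)   # prepend CLS
            scores.append(self.score(self.encoder(seq)[:, 0]))          # CLS -> score
        return torch.cat(scores, dim=-1)              # (B, 8); softmax applied in the loss
```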
### Optimization
The entire model was trained end-to-end to optimize two objectives. First, we computed a reconstruction loss \(\mathcal{L}_{recon}\), the mean squared error between the 16 image panels and their reconstructed outputs. Second, we computed a task loss \(\mathcal{L}_{task}\), the cross entropy loss between the target answer index and the softmax-normalized scores for each of the candidate answers. These two losses were combined to form the final loss \(\mathcal{L}=\lambda\,\mathcal{L}_{recon}+\mathcal{L}_{task}\), where \(\lambda\) is a hyperparameter that controls the relative strength of the reconstruction loss.
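In code, the combined objective is simply the following (a sketch; \(\lambda=1000\) is used as the default only because that value is referenced for I-RAVEN later in the paper):

```python
import torch.nn.functional as F

def stsn_loss(scores, target, recon, panels, lam=1000.0):
    # scores: (B, 8) answer scores; target: (B,) index of the correct answer.
    # recon, panels: (B, 16, C, H, W) reconstructions and originals of all 16 panels.
    task_loss = F.cross_entropy(scores, target)   # softmax + negative log-likelihood
    recon_loss = F.mse_loss(recon, panels)        # mean squared error
    return lam * recon_loss + task_loss
```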
## 4 Experiments
### Datasets
**PGM.** The PGM dataset was introduced by Barrett et al. (2018), and consists of problems belonging to eight different regimes with different generalization difficulty. Each matrix problem in PGM is defined by the abstract structure \(\mathcal{S}=\{[r,o,a]:r\in\mathcal{R},o\in\mathcal{O},a\in\mathcal{A}\}\), where \(\mathcal{R}=\{\text{progression, XOR, AND, OR, consistent union}\}\) are the set of rules (note that consistent union is also referred to as 'distribution-of-3'), \(\mathcal{O}=\{\text{shape, line}\}\) are the set of objects, and \(\mathcal{A}=\{\text{size, type, position, color, number}\}\) are the set of attributes. Each regime consists of 1.2M training problems, 20K validation problems, and 200K testing problems. Due to the enormous size of the dataset, we focused on the neutral, interpolation, and extrapolation regimes. In the neutral regime, the training and test sets are sampled from the same underlying distribution, whereas the interpolation and extrapolation regimes both involve out-of-distribution generalization. Given the set of feature values for each attribute, the interpolation regime involves training on all even-indexed feature values and testing on all odd-indexed values, and the extrapolation regime involves training on the lower half of feature values and testing on the upper half of feature values. More details can be found in Barrett et al. (2018).
**I-RAVEN.** The RAVEN dataset was introduced by Zhang et al. (2019), with problems belonging to seven different configurations. These configurations are defined by the spatial layout of the elements in each panel, ranging from low visual complexity (e.g., the 'Center' configuration, in which each panel contains just a single object in the center of the image), to high visual complexity (e.g., the 'O-IG' configuration, in which each panel contains an outer object surrounding an inner grid of objects). Some configurations have multiple components \(\mathcal{C}\) to which separate rules can be bound. Thus, each problem in RAVEN is defined by the abstract structure \(\mathcal{S}=\{[r,c,a]:r\in\mathcal{R},c\in\mathcal{C},a\in\mathcal{A}\}\), where \(\mathcal{R}=\{\text{constant, progression, arithmetic, distribution-of-3}\}\) are the set of rules, \(\mathcal{C}\) are the set of components (depending on the particular configuration), and \(\mathcal{A}=\{\text{number, position, size, type, color}\}\) are the set of attributes. There are a total of 42K training problems, 14K validation problems, and 14K testing problems. We trained STSN jointly on all configurations in RAVEN.
It was subsequently discovered that the original RAVEN dataset employed a biased method for generating candidate answers, that could be exploited so as to achieve near perfect performance by only viewing these candidate answers (i.e., ignoring the problem itself) (Hu et al., 2021). To address this, Hu et al. (2021) proposed the Impartial RAVEN (I-RAVEN) dataset, with an unbiased procedure for generating candidate answers. As with most recent work in this domain, we performed our evaluation on I-RAVEN.
**CLEVR-Matrices.** We created a novel dataset of RPM-like problems using realistically rendered 3D shapes, based on source code from CLEVR (a popular visual-question-answering dataset) (Johnson et al., 2017). Problems were formed from objects of three shapes (cube, sphere, and cylinder), three sizes (small, medium, and large), and eight colors (gray, red, blue, green, brown, purple, cyan,
Figure 2: Example problem from our proposed CLEVR-Matrices dataset. Problems are governed by RPM-like problem structure, but with greater visual complexity (rendered using approach similar to CLEVR dataset (Johnson et al., 2017)). This particular problem is an example of the ‘Location’ problem type. The reader is encouraged to identify the correct answer, and rule for each attribute.
and yellow). Objects were placed on a \(3\times 3\) grid of locations (such that there was a maximum of 9 objects in each panel), which was oriented randomly in each problem. Lighting was varied randomly between each panel, and objects were randomly assigned one of two textures (metal or rubber). Rules were independently sampled for shape, color, and size, from the set \(\mathcal{R}=\{\text{null, constant, distribution-of-3}\}\). Location was determined based on three different problem types. In the first problem type ('Logic'), locations were determined based on a logical rule sampled from \(\mathcal{R}=\{\text{AND, OR, XOR}\}\). In the second problem type ('Location'), locations were determined based on a rule sampled from \(\mathcal{R}=\{\text{constant, distribution-of-3, progression}\}\). In the third problem type ('Count'), the count of objects in each panel was determined based on a rule sampled from \(\mathcal{R}=\{\text{constant, distribution-of-3, progression}\}\), and locations were randomly sampled to instantiate that count. Example problems are shown in Figure 2 and Section A.5. Answer choices were generated using the attribute bisection tree algorithm proposed by Hu et al. (2021), which was used to generate answer choices for I-RAVEN. Our dataset thus does not contain the biases identified in the original RAVEN dataset. We generated 20K problems for each type, including 16K for training, 2K for validation, and 2K for testing. We trained STSN jointly on all three problem types.
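For illustration, the rule-sampling step described above could look roughly as follows; this is a sketch only, and the actual generator additionally lays out the \(3\times 3\) grid, renders the panels with the CLEVR pipeline, and produces answer choices via the attribute bisection tree algorithm.

```python
import random

ATTRIBUTE_RULES = ["null", "constant", "distribution-of-3"]          # shape, color, size
LOGIC_RULES = ["AND", "OR", "XOR"]                                    # 'Logic' problems
POSITION_RULES = ["constant", "distribution-of-3", "progression"]     # 'Location' / 'Count'

def sample_structure(problem_type):
    """Sample the abstract rule structure of one CLEVR-Matrices problem."""
    structure = {attr: random.choice(ATTRIBUTE_RULES)
                 for attr in ("shape", "color", "size")}
    if problem_type == "Logic":
        structure["location"] = random.choice(LOGIC_RULES)
    elif problem_type == "Location":
        structure["location"] = random.choice(POSITION_RULES)
    else:  # "Count": the rule constrains the object count; locations are then
           # randomly sampled to instantiate that count.
        structure["count"] = random.choice(POSITION_RULES)
    return structure
```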
### Baselines
We compared our model to several baselines, as detailed in Tables 1-3. To the best of our knowledge, these baselines include the current best performing models on the I-RAVEN and PGM benchmarks. We did not use any auxiliary information (i.e., training to explicitly label the underlying rules), and hence, for a fair comparison, we only compared against baselines that did not use an auxiliary loss.
There are too many baselines to describe them each in detail, but here we briefly describe the best performing baselines. The baseline that achieved the best overall performance was the Scattering Compositional Learner (SCL) (Wu et al., 2020). SCL employs an approximate form of object segmentation based on fixed spatial locations in a convolutional feature map, followed by a dual parameter-sharing scheme, in which a shared MLP (shared across 'objects') is used to generate object embeddings, and another shared MLP (shared across attributes) is used to classify rules for each attribute. We also compare against the Multi-Layer Relation Network (MLRN) (Jahrens and Martinetz, 2020) and the Multi-scale Relation Network (MRNet) (Benny et al., 2021), both of which achieved strong results on PGM. MLRN builds on the Relation Network (Santoro et al., 2017), which uses a shared MLP to compute learned relation vectors for all pairwise comparisons of a set (in this case, the set of embeddings for all image panels in a problem). MLRN passes the output of one RN to another RN, thus allowing second-order relations to be modeled. MRNet creates image embeddings at different spatial scales, allowing it to approximate segmentation of larger vs. smaller objects, and then computes both row-wise and column-wise rule embeddings, which are aggregated across both rows/columns and spatial scales.
### Experimental Details
We give a detailed characterization of all hyperparameters and training details for our models in Section A.2. We employed both online image augmentations (random rotations, flips, and brightness changes) and dropout (in the transformer) when training on I-RAVEN (details in Section A.2). We also trained both SCL and MLRN on CLEVR-Matrices, and compared to two alternative versions of SCL on I-RAVEN: one that employed the same image augmentations, TCN, and dropout employed by our model, and another that combined SCL with slot attention (also with image augmentations, TCN, and dropout), referred to as 'Slot-SCL'.
For I-RAVEN, to be consistent with previous work (Wu et al., 2020), we report results from the best out of 5 trained models. Similarly, for CLEVR-Matrices, we report results from the best out of 3 trained models for STSN, SCL, and MLRN. For PGM, we only trained 1 model on the neutral regime, 1 model on the interpolation regime, and 1 model on the extrapolation regime, due to the computational cost of training models on such a large dataset.
For the PGM neutral regime, we pretrained the convolutional encoder, slot attention, and slot decoder on the reconstruction objective with the neutral training set, and fine-tuned while training on the primary task. For the PGM interpolation regime, all model components were trained end-to-end from scratch. For the PGM extrapolation regime, we employed a simultaneous dual-training scheme, in which the convolutional encoder, slot attention, and slot decoder were trained on
reconstruction for both the neutral and extrapolation training sets (thus giving these components of the model exposure to a broader range of shapes and feature values), while the transformer reasoning module was trained on the primary task using only the extrapolation training set.
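A schematic training step for this dual-training scheme might look as follows; the model interface (including the `reconstruct_only` helper) is hypothetical and is only meant to convey which losses see which data.

```python
import torch.nn.functional as F

def dual_training_step(model, extrap_batch, neutral_batch, optimizer, lam=1000.0):
    """One optimization step of the dual-training scheme (illustrative).

    The encoder/decoder components receive reconstruction targets from both the
    neutral and extrapolation sets; the task loss uses only the extrapolation set.
    """
    optimizer.zero_grad()
    scores, recon_e = model(extrap_batch["panels"])            # full forward pass
    recon_n = model.reconstruct_only(neutral_batch["panels"])  # hypothetical helper
    loss = (F.cross_entropy(scores, extrap_batch["target"])
            + lam * F.mse_loss(recon_e, extrap_batch["panels"])
            + lam * F.mse_loss(recon_n, neutral_batch["panels"]))
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```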
### Results
Table 1 shows the results on the I-RAVEN dataset. STSN achieved state-of-the-art accuracy when averaging across all configurations (\(95.7\%\)), and on two out of seven configurations ('U-D' and
\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
\multicolumn{1}{c}{} & \multicolumn{8}{c}{Test Accuracy (\%)} \\
\cline{2-9}
Model & Average & Center & 2Grid & 3Grid & L-R & U-D & O-IC & O-IG \\
\hline
LSTM (Hu et al., 2021) & 18.9 & 26.2 & 16.7 & 15.1 & 14.6 & 16.5 & 21.9 & 21.1 \\
WReN (Hu et al., 2021) & 23.8 & 29.4 & 26.8 & 23.5 & 21.9 & 21.4 & 22.5 & 21.5 \\
MLRN (Jahrens and Martinetz, 2020) & 29.8 & 38.8 & 32.0 & 27.8 & 23.5 & 23.4 & 32.9 & 30.0 \\
LEN (Zheng et al., 2019) & 39.0 & 45.5 & 27.9 & 26.6 & 44.2 & 43.6 & 50.5 & 34.9 \\
ResNet (Hu et al., 2021) & 40.3 & 44.7 & 29.3 & 27.9 & 51.2 & 47.4 & 46.2 & 35.8 \\
Wild ResNet (Hu et al., 2021) & 44.3 & 50.9 & 33.1 & 30.8 & 53.1 & 52.6 & 50.9 & 38.7 \\
CoPINet (Zhang et al., 2019b) & 46.3 & 54.4 & 33.4 & 30.1 & 56.8 & 55.6 & 54.3 & 39.0 \\
SRAN (Hu et al., 2021) & 63.9 & 80.1 & 53.3 & 46.0 & 72.8 & 74.5 & 71.0 & 49.6 \\
Slot-SCL & 90.4 & 98.8 & 94.1 & 80.3 & 92.9 & 94.0 & 94.9 & 78.0 \\
SCL (Wu et al., 2020) & 95.0 & **99.0** & 96.2 & 89.5 & 97.9 & 97.1 & 97.6 & 87.7 \\
SCL + dropout, augmentations, TCN & 95.5 & 98.2 & **96.4** & **90.0** & **98.8** & 97.9 & **98.0** & 89.3 \\
STSN (ours) & **95.7** & 98.6 & 96.2 & 88.8 & 98.0 & **98.8** & 97.8 & **92.0** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Results on I-RAVEN.
\begin{table}
\begin{tabular}{c c c c}
\hline \hline
\multicolumn{1}{c}{} & \multicolumn{3}{c}{Test Accuracy (\%)} \\
\cline{2-4}
Model & Neutral & Interpolation & Extrapolation \\
\hline
CNN+MLP (Barrett et al., 2018) & 33.0 & - & - \\
CNN+LSTM (Barrett et al., 2018) & 35.8 & - & - \\
ResNet-50 (Barrett et al., 2018) & 42.0 & - & - \\
Wild-ResNet (Barrett et al., 2018) & 48.0 & - & - \\
CoPINet (Zhang et al., 2019b) & 56.4 & - & - \\
WReN (\(\beta=0\)) (Barrett et al., 2018) & 62.6 & 64.4 & 17.2 \\
VAE-WReN (Steenbrugge et al., 2018) & 64.2 & - & - \\
MXGNet (\(\beta=0\)) (Wang et al., 2020) & 66.7 & 65.4 & 18.9 \\
LEN (\(\beta=0\)) (Zheng et al., 2019) & 68.1 & - & - \\
DCNet (Zhuo and Kankanhalli, 2022) & 68.6 & 59.7 & 17.8 \\
T-LEN (\(\beta=0\)) (Zheng et al., 2019) & 70.3 & - & - \\
SRAN (Hu et al., 2021) & 71.3 & - & - \\
Rel-Base (Spratley et al., 2020) & 85.5 & - & **22.1** \\
SCL (Wu et al., 2020) & 88.9 & - & - \\
MRNet (Benny et al., 2021) & 93.4 & 68.1 & 19.2 \\
MLRN (Jahrens and Martinetz, 2020) & 98.0 & 57.8 & 14.9 \\
STSN (ours) & **98.2** & **78.5** & 20.4 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Results on PGM.
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
\multicolumn{1}{c}{} & \multicolumn{4}{c}{Test Accuracy (\%)} \\
\cline{2-5}
Model & Average & Logic & Location & Count \\
\hline
MLRN (Jahrens and Martinetz, 2020) & 30.8 & 47.4 & 21.4 & 23.6 \\
SCL (Wu et al., 2020) & 70.5 & 80.9 & 65.8 & 64.9 \\
STSN (ours) & **99.6** & **99.2** & **100.0** & **99.6** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Results on CLEVR-Matrices.
'O-IG'). The most notable improvement was on the 'O-IG' configuration (a large outer object surrounding an inner grid of smaller objects), probably due to the need for more flexible object-encoding mechanisms in this configuration. For PGM (Table 2), STSN achieved state-of-the-art accuracy on the neutral (\(98.2\%\)) and interpolation (\(78.5\%\)) regimes, and achieved the second-best performance on the extrapolation regime (\(20.4\%\) for STSN vs. \(22.1\%\) for Rel-Base). The next best model on I-RAVEN, SCL (\(95\%\)), performed worse on PGM (\(88.9\%\)), perhaps due to its more limited object-encoding methods (PGM includes a large number of spatially overlapping objects). We evaluated the next best model on PGM, MLRN (\(98\%\)), on I-RAVEN (using code from the authors' publicly available repository), and found that it displayed very poor performance (\(29.8\%\)), suggesting that some aspect of its architecture may be overfit to the PGM dataset. Thus, STSN achieved a \(\sim 5\%\) increase in average performance across the two datasets relative to the next best overall model (\(97.0\%\) average performance on PGM Neutral and I-RAVEN for STSN vs. \(92.0\%\) for SCL), despite incorporating fewer problem-specific inductive biases.
To further investigate the utility of STSN's object-centric encoding mechanism, we evaluated STSN, SCL, and MLRN on our newly developed CLEVR-Matrices dataset (Table 3). STSN displayed very strong performance (\(99.6\%\) average test accuracy), whereas both SCL (\(70.5\%\) average test accuracy) and MLRN (\(30.8\%\) average test accuracy) performed considerably worse. This is likely due to the fact that these models lack a precise object-centric encoding mechanism, and were not able to cope with the increased visual complexity of this dataset.
Finally, we also evaluated both STSN and SCL on a dataset involving analogies between feature dimensions (e.g., a progression rule applied to color in one row, and size in another row) (Hill et al., 2019). STSN outperformed SCL on this dataset as well (Table 12), likely due to the fact that SCL assumes that rules will be applied independently within each feature dimension. This result highlights the limitation of employing inductive biases that are overly specific to certain datasets.
### Ablation Study
We analyzed the importance of the different components of STSN in ablation studies using the I-RAVEN dataset (Table 4). For I-RAVEN, our primary STSN implementation employed dropout, which we found yielded a modest improvement in generalization, but our ablation studies were performed without dropout. Thus, the relevant baseline for evaluating the isolated effect of each ablation is the version of STSN without dropout. First, we removed the slot attention module from STSN, by averaging the value embeddings from the input feature vectors over the image space (i.e., using only a single slot per panel). The average test accuracy decreased by more than 20%, suggesting that object-centric representations play a critical role in the model's performance. The effect was particularly pronounced in the 'O-IG' (a large outer object surrounding an inner grid of smaller objects) and '3Grid' (a \(3\times 3\) grid of objects) configurations, likely due to the large number of objects per panel in these problems. Next, we performed an ablation on TCN, resulting in a test accuracy decrease of around 7%, in line with previous findings demonstrating a role of TCN in improved generalization (Webb et al., 2020). We also performed an ablation on the size of the reasoning module, finding that a smaller transformer (\(L=4\) layers) did not perform as well. Finally, we performed an ablation on the image augmentations performed during training, resulting in a test accuracy decrease of more than 3%, suggesting that the augmentations also helped to improve generalization. Overall, these results show that the use of object-centric representations was the most important factor explaining STSN's performance on this task.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{8}{c}{Test Accuracy (\%)} \\ \cline{2-9} & Average & Center & 2Grid & 3Grid & L-R & U-D & O-IC & O-IG \\ \hline STSN & **95.7** & **98.6** & **96.2** & **88.8** & **98.0** & **98.8** & **97.8** & **92.0** \\ -dropout & 93.4 & 97.8 & 92.5 & 84.7 & 96.4 & 96.7 & 96.5 & 89.6 \\ -dropout, -slot attention & 71.0 & 90.0 & 71.0 & 59.4 & 73.8 & 75.4 & 74.5 & 53.0 \\ -dropout, -TCN & 86.5 & 97.0 & 76.0 & 69.2 & 96.0 & 96.0 & 95.6 & 75.8 \\ -dropout, \(L=4\) & 88.6 & 96.4 & 85.8 & 74.4 & 94.6 & 95.0 & 94.5 & 79.2 \\ -dropout, -augmentations & 90.3 & 96.1 & 88.3 & 80.8 & 93.2 & 93.7 & 94.8 & 85.2 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study on the I-RAVEN dataset.
### Visualization of object masks
We also visually inspected the attention behavior of STSN's slot attention module (Figure 3). We found that STSN's slot-specific reconstructions conformed nearly perfectly to the individual objects in the image panels of PGM, with the remaining slots left unused. This confirms that STSN was engaged in object-centric processing. We also evaluated STSN on I-RAVEN with a range of values for \(\lambda\) (the parameter that governs the relative emphasis placed on the reconstruction loss), and found that with lower values of \(\lambda\), STSN's reconstructions were no longer object-centric. With a value of \(\lambda=100\), STSN's reconstructions were blurrier, and multiple objects tended to be combined into a single slot (Figure 5 in Section A.4). With a value of \(\lambda=1\), STSN's reconstructions completely failed to capture the content of the original image (Figure 6). Interestingly, these changes in reconstruction quality were mirrored by changes in performance on the reasoning task, with an average test accuracy of \(90.1\%\) for \(\lambda=100\) and \(74.2\%\) for \(\lambda=1\) (relative to \(95.7\%\) for \(\lambda=1000\), Figure 4). This is consistent with our hypothesis that encouraging high-quality reconstructions (through a sufficiently high weight on \(\mathcal{L}_{recon}\)) would encourage object-centric encoding behavior, which would in turn promote more generalizable visual reasoning strategies. Thus, for STSN to fully exploit its object-centric encoding mechanisms, it is important to use a high enough value of \(\lambda\) so as to ensure high-quality reconstructions.
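To make the role of \(\lambda\) concrete, the sketch below (not the actual STSN implementation; the particular loss functions and all variable names are illustrative assumptions) shows how a reasoning loss and the reconstruction loss \(\mathcal{L}_{recon}\) can be combined with the weighting discussed above:

```python
# Schematic training step combining a reasoning (task) loss with a slot-attention
# reconstruction loss, weighted by lambda as discussed above. The choice of
# cross-entropy for the task loss and pixel-wise MSE for the reconstruction loss
# is an assumption for illustration; `model`, `panels`, and `answer` are placeholders.
import torch.nn.functional as F

LAMBDA_RECON = 1000.0  # the weighting used for the main results reported above

def training_step(model, panels, answer, optimizer):
    logits, reconstruction = model(panels)           # answer scores + slot-based reconstruction
    task_loss = F.cross_entropy(logits, answer)      # reasoning objective
    recon_loss = F.mse_loss(reconstruction, panels)  # L_recon: reconstruction objective
    loss = task_loss + LAMBDA_RECON * recon_loss     # lambda sets the emphasis on reconstruction
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

With \(\lambda\) set too low, the reconstruction term contributes little to the gradient signal, which is consistent with the degraded reconstructions and reasoning accuracies described above.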
## 5 Conclusion and Future Directions
We have presented a simple, general-purpose visual reasoning model, organized around the principle of object-centric processing. Our proposed model, STSN, displayed state-of-the-art performance on both of two challenging visual reasoning benchmarks, PGM and I-RAVEN, as well a novel reasoning benchmark with greater visual complexity, CLEVR-Matrices, despite the relative lack of problem-specific inductive biases. These results suggest that object-centric processing is a powerful inductive bias for abstract visual reasoning problems such as RPM.
Some previous work has proposed novel relational inductive biases for the purposes of achieving strong out-of-distribution generalization in visual reasoning problems (Webb et al., 2021; Zhang et al., 2021; Kerg et al., 2022). This work has often assumed (i.e., hand-coded) object-centric representations. We view our approach as complementary with these previous approaches, and suggest that a fruitful avenue for future work will be to pursue the integration of object-centric and relational inductive biases.
Figure 3: Slot-specific reconstructions generated by STSN. 3 problems were chosen at random from the PGM neutral test set. The first two images for each problem show the original image and the combined reconstruction. The following images show the slot-specific reconstruction for each of the slots. In general, STSN’s slot attention module implemented a nearly perfect object-based segmentation of its input images, despite receiving no veridical segmentation information during training or test. STSN used 16 slots per image for this dataset, but generally left the slots not assigned to objects unused. Only 8 slots are pictured for these example problems since the remaining slot-specific reconstructions were completely blank.
|
2305.06388
|
Dust Properties of 870 Micron Selected Galaxies in the GOODS-S
|
We analyze the dust properties of 57 dusty star-forming galaxies selected at 870
$\mu$m in the GOODS-S using new deep ALMA 1.2 mm, 2 mm, and 3 mm continuum
imaging together with other far-infrared through millimeter data. We fit the
spectral energy distributions (SEDs) with optically thin modified blackbodies
to constrain the emissivity indices and effective dust temperatures, finding a
median emissivity index of $\beta = 1.78^{+0.43}_{-0.25}$ and a median
temperature of $T_d = 33.6^{+12.1}_{-5.4}$ K. We observe a negative correlation
between $\beta$ and $T_d$. By testing several SED models, we determine that the
derived emissivity indices can be influenced by opacity assumptions. Our
temperature measurements are consistent with no evolution in dust temperature
with redshift.
|
S. J. McKay, A. J. Barger, L. L. Cowie, F. E. Bauer, M. J. Nicandro Rosenthal
|
2023-05-10T18:01:33Z
|
http://arxiv.org/abs/2305.06388v1
|
# Dust Properties of 870 Micron Selected Galaxies in the GOODS-S
###### Abstract
We analyze the dust properties of 57 dusty star-forming galaxies selected at 870 \(\mu\)m in the GOODS-S using new deep ALMA 1.2 mm, 2 mm, and 3 mm continuum imaging together with other far-infrared through millimeter data. We fit the spectral energy distributions (SEDs) with optically thin modified blackbodies to constrain the emissivity indices and effective dust temperatures, finding a median emissivity index of \(\beta=1.78^{+0.43}_{-0.25}\) and a median temperature of \(T_{d}=33.6^{+12.1}_{-5.4}\) K. We observe a negative correlation between \(\beta\) and \(T_{d}\). By testing several SED models, we determine that the derived emissivity indices can be influenced by opacity assumptions. Our temperature measurements are consistent with no evolution in dust temperature with redshift.
cosmology: observations -- galaxies: distances and redshifts -- galaxies: evolution -- galaxies: starburst
## 1 Introduction
Over the last several decades, dusty star-forming galaxies (DSFGs) have emerged as a critical population at redshifts \(z\gtrsim 1\). First discovered with the Submillimeter Common User Bolometer Array (SCUBA) on the single-dish James Clerk Maxwell Telescope (JCMT) (Smail et al., 1997; Barger et al., 1998; Hughes et al., 1998; Eales et al., 1999), DSFGs boast some of the highest star formation rates (SFRs) in the universe (up to several thousand M\({}_{\odot}\) yr\({}^{-1}\)) and may be responsible for 25% to 80% of the star formation rate density between redshifts of \(z=6\) to \(z=2\)-2.5, respectively (Zavala et al., 2021).
Surveys of distant DSFGs have been performed on single-dish facilities, both from the ground (JCMT, IRAM, the South Pole Telescope, and the Large Millimeter Telescope (LMT)) and from space (the Herschel Space Observatory). Single-dish observations sample large numbers of sources near the peaks of their far-infrared (FIR) spectral energy distributions (SEDs). However, they do not allow for accurate position measurements, and they are affected by source blending due to poor spatial resolution (e.g., Biggs et al., 2011; Barger et al., 2012). The new TolTEC camera on the LMT, with its fast mapping speeds and high sensitivity (Wilson et al., 2020), may mitigate some of these issues.
In contrast to single-dish imaging surveys, submillimeter/millimeter interferometric surveys using NOEMA, the Submillimeter Array, and the Atacama Large Millimeter/Submillimeter Array (ALMA) have the sensitivity to detect faint sources (e.g., Ono et al., 2014; Aravena et al., 2016; Chen et al., 2023). In addition, they provide accurate positions, are not affected by source blending, and can resolve extended emission (Hodge & da Cunha, 2020). The main drawback is that interferometric surveys are observationally expensive. Currently, the most efficient strategy for studying large numbers of DSFGs is to conduct single-dish surveys to identify sources and then to follow them up with interferometry (e.g., Hodge et al., 2013; Cowie et al., 2018; Stach et al., 2019).
Constraints on the dust and gas masses of DSFGs are important for determining what is responsible for their high SFRs. The dust masses of DSFGs are the highest of any known galaxy population (\(\gtrsim 10^{8}\) M\({}_{\odot}\)) (e.g., Swinbank et al., 2014; da Cunha et al., 2015; Dudzeviciute et al., 2020). They have been used to estimate the gas masses available to form stars through an assumed calibration (e.g., Scoville et al., 2014, 2016; Suzuki et al., 2021). However, dust mass measurements depend sensitively on the choice of dust emissivity spectral index, \(\beta\), which parameterizes how the dust emission varies with wavelength (Blain et al., 2002; Casey et al., 2014). Since \(\beta\) pertains to the intrinsic makeup of the dust, it can vary across a galaxy (e.g., Planck Collaboration et al., 2014).
In studies of the local universe, \(\beta\) has been found to be between 1.5 and 2.0 (Dunne and Eales, 2001; Chapin et al., 2011; Clements et al., 2018). This range is in line with theoretical predictions of \(\beta\) between 1.0 and 2.5 (Draine and Lee, 1984). In the absence of direct measurements, values of \(\beta\) from the Milky Way and local galaxies are often assumed for high-redshift sources, with a common choice of \(\beta=1.8\) for optically thin SED fits (e.g., Scoville et al., 2016; Simpson et al., 2017; Dudzeviciute et al., 2021). If this assumption is not valid, then it would systematically impact measured dust and gas masses for the DSFG population.
Unfortunately, measuring \(\beta\) directly for high-redshift galaxies is difficult, requiring multiple observations spanning the submillimeter/millimeter regime to break the degeneracy between the dust temperature, \(T_{d}\), and \(\beta\). Most surveys provide data at one or two wavelengths--often from low-resolution single-dish observations--and only a few recent studies use ALMA data. In one such study, da Cunha et al. (2021) used an optically thin modified blackbody and found a median \(\beta=1.9\pm 0.4\) for a sample of 27 DSFGs from the ALESS survey (Hodge et al., 2013; Karim et al., 2013) with ALMA 870 \(\mu\)m and 2 mm data. In another study, Cooper et al. (2022) used a combined general opacity blackbody and power law fit and found a median \(\beta=2.4\pm 0.3\) for a sample of 39 DSFGs in the SSA22 field (850 \(\mu\)m flux \(>5.55\) mJy) with SCUBA-2 850 \(\mu\)m, AzTEC 1.1 mm, and ALMA 2 mm data.
In this paper, we determine \(\beta\) and \(T_{d}\) for a large sample of DSFGs in the GOODS-S that have observations in multiple ALMA bands. Our sample is taken from the catalog of 75 ALMA sources detected at 870 \(\mu\)m (\(>4.5\sigma\)) by Cowie et al. (2018), which were originally selected from SCUBA-2 850 \(\mu\)m imaging.
We structure the paper as follows. In Section 2, we discuss the ALMA 870 \(\mu\)m catalog and our new longer-wavelength ALMA observations, along with ancillary photometry and redshifts. In Section 3, we describe our SED fitting methods and we constrain the dust properties of our sample. In Section 4, we discuss the implications of our results in the context of other studies in the literature. In Section 5, we summarize our results.
We assume a flat concordance \(\Lambda\)CDM cosmology throughout with \(\Omega_{m}=0.3\), \(\Omega_{\Lambda}=0.7\), and \(H_{0}=70.0\) km s\({}^{-1}\) Mpc\({}^{-1}\).
## 2 Data
### Total Sample
Our main sample of galaxies comes from the SUPER GOODS program of Cowie et al. (2018) (hereafter, C18). Using ALMA band 7 (central wavelength of 870 \(\mu\)m), C18 followed up SCUBA-2 850 \(\mu\)m selected sources in the GOODS-S to obtain accurate positions. We hereafter refer to the resulting 75 galaxies (\(>4.5\sigma\); their Table 4) as our _total sample_.
The total sample consists of sources ranging in 870 \(\mu\)m flux from 0.84 mJy to 8.93 mJy. Of these 75 sources, 17 (23%) are at or below the SCUBA-2 4\(\sigma\) confusion limit of \(\sim\)1.6 mJy at 850 \(\mu\)m (Cowie et al., 2017). The optical/NIR counterparts to this sample vary from bright galaxies at lower redshifts to sources that are very faint or undetected in the deep CANDELS HST imaging.
### ALMA Band 3, 4, and 6 Observations
In ALMA Project #2021.1.00024.S (PI: F. Bauer), we made ALMA spectral linescans in band 6 (central wavelength of 1.24 mm), band 4 (1.98 mm), and band 3 (3.07 mm) of 57 sources in the total sample, prioritizing those with 870 \(\mu\)m fluxes above 1.8 mJy and lacking a well-established spectroscopic redshift.
The ALMA observation blocks were downloaded and calibrated using casa version 6.2.1-7 based on the associated PI scripts. The visibilities from individual spectral setups in a given band were combined using concat. Dirty continuum images were generated using tclean, adopting 0\(\farcs\)25 pixels, natural weighting, and a "common" restoring beam. Mildly cleaned continuum images were generated adopting multithreshold automasking with standard thresholds, 100 clean iterations with a 0.1 mJy threshold, pixel scales of 0, 5, and 10, and robust=0.5. The resulting band 3, 4, and 6 images had central frequencies, bandwidths, and beams of 97.662 GHz, 27.120 GHz, and \(\theta_{\rm beam}\)=0\(\farcs\)99\(\times\)0\(\farcs\)83; 151.188 GHz, 23.370 GHz, and \(\theta_{\rm beam}\)=1\(\farcs\)21\(\times\)1\(\farcs\)10; and 241.750 GHz, 23.250 GHz, and \(\theta_{\rm beam}\)=1\(\farcs\)36\(\times\)1\(\farcs\)08; respectively.
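For concreteness, a minimal sketch of these imaging steps is given below, assuming the modular casatasks interface; the measurement-set names, image names, and image size are placeholders, and the exact parameters should be taken from the associated PI scripts.

```python
# Sketch of the continuum imaging described above (placeholder file names and image size).
from casatasks import concat, tclean

# Combine the individual spectral setups within a band
concat(vis=['band6_setup1.ms', 'band6_setup2.ms'], concatvis='band6_combined.ms')

# Dirty continuum image: 0.25" pixels, natural weighting, common restoring beam
tclean(vis='band6_combined.ms', imagename='band6_dirty', specmode='mfs',
       imsize=512, cell='0.25arcsec', weighting='natural',
       restoringbeam='common', niter=0)

# Mildly cleaned continuum image: auto-multithresh masking with standard thresholds,
# multiscale clean with scales of 0, 5, and 10 pixels, robust = 0.5,
# 100 iterations down to a 0.1 mJy threshold
tclean(vis='band6_combined.ms', imagename='band6_clean', specmode='mfs',
       imsize=512, cell='0.25arcsec', deconvolver='multiscale', scales=[0, 5, 10],
       weighting='briggs', robust=0.5, usemask='auto-multithresh',
       niter=100, threshold='0.1mJy', restoringbeam='common')
```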
We measured peak fluxes and errors from the band 3, 4, and 6 cleaned continuum images. C18 found that since the sources were resolved at 870 \(\mu\)m, the peak fluxes (even those from the tapered images) underestimated the total fluxes. They determined that taking a ratio of aperture flux measurements made over a range of aperture radii gave a similar correction factor to estimates derived by fitting the sources in the uv plane. We adopted the aperture method for the current data as well, finding average correction factors of 1.3 for all three bands (e.g., see Figure 2 of Cowie et al., 2023 for the band 4 data). Thus, we use this correction factor to convert the band 6, 4, and 3 peak fluxes to total fluxes (we hereafter refer to these as 1.2 mm, 2 mm, and 3 mm fluxes, respectively).
We supplement our ALMA observations with the 1.13 mm observations from the GOODS-ALMA survey (Gomez-Guijarro et al., 2022). These were also obtained with ALMA in the band 6 frequency window, but their central frequency of 265 GHz is sufficiently different from our central frequency of 242 GHz to make them worth including in our SED fits. Of the 75 sources in the total sample, 39 are also detected in the GOODS-ALMA sample. Although 8 of these 39 sources were not targeted in our spectral program, we at least have the GOODS-ALMA 1.13 mm measurements (hereafter, 1.1 mm).
In Table 1 (Appendix A), we list our ALMA 1.2 mm, 2 mm, and 3 mm peak fluxes and errors in a table comprised of the 75 sources in the total sample. In addition, we give the 1.1 mm total fluxes for the 39 sources that also appear in the GOODS-ALMA catalog.
### Ancillary Photometry and Redshifts
The GOODS-S is one of the most well-observed fields in the sky, with comprehensive multiwavelength coverage from the X-ray to the radio regime. In this section, we summarize the additional photometric and redshift data that we use in our analysis.
Barger et al. (2022) presented deep SCUBA-2 450 \(\mu\)m observations of the GOODS-S, including counterparts to some of the 870 \(\mu\)m sources described here. Since that time, we have continued to deepen our SCUBA-2 450 \(\mu\)m images of the GOODS-S (see Cowie et al., 2023). Our latest maps reach a central rms noise of 1.67 mJy, which is about 10% deeper than the image used in Barger et al. (2022). For our purposes here, the deeper images provide more robust SCUBA-2 450 \(\mu\)m fluxes for our sample. These fluxes were obtained sequentially by first measuring the peak SCUBA-2 flux for a given source within 2'' of the ALMA position, then removing that source from the image to prevent blending in later measurements. However, we note that this is not critical for SCUBA-2 450 \(\mu\)m data due to its higher resolution and shallower depth than SCUBA-2 850 \(\mu\)m data. The 1\(\sigma\) errors were measured from the local rms noise map.
Spitzer/MIPS 24 and 70 \(\mu\)m and Herschel/PACS 100 and 160 \(\mu\)m counterparts were obtained from the catalog of Elbaz et al. (2011) using a 1\(\farcs\)5 matching radius. 61 sources from the total sample have a 24 \(\mu\)m counterpart.
We matched our sample within 4'' to the HerMES DR3 catalog that used Spitzer/MIPS prior positions for deblending (Oliver et al., 2012) to obtain Herschel/SPIRE data at 250, 350, and 500 \(\mu\)m. We found 51 galaxies with SPIRE counterparts that appear to be reliable based on the images. For 24 sources without a reasonable counterpart in either the PACS or SPIRE catalogs, plus an additional 13 that did not appear in the PACS catalog, we measured the fluxes ourselves from the images at the ALMA positions and normalized them to the catalog fluxes. However, we do not use the SPIRE 500 \(\mu\)m fluxes in our SED fits due to their source blending and lower spatial resolution.
Cowie et al. (2023) presented for the total sample the redshifts, both photometric (hereafter, photz; these come from Straatman et al., 2016) and spectroscopic (hereafter, specz), including five that were determined from our ALMA data. In total there are 20 sources with speczs in the total sample (27%).
The highest specz in our sample is for ALMA 68 (numbered as in Table 4 of C18) at \(z=5.58\), obtained in Oesch et al. (2023) using JWST NIRCam/grism spectra from the FRESCO survey. A NIRCam F444W, F210M, F182M color image of this source is shown in Figure 2. Although this source was undetected in the CANDELS HST images, it is clearly visible in the NIRCam image.
The distribution of 870 \(\mu\)m fluxes and redshifts for our total sample is shown in Figure 1. We list all the adopted redshifts in Table 1 (Appendix A), including whether they are speczs or photzs (denoted by the number of decimal places). For the six sources in the table with poor-quality photzs (quality flag \(Q>3\) in the Straatman et al., 2016 catalog), we put their photzs in brackets.
### Robust Subset Selection
Although, in principle, only three photometric data points are needed to break the degeneracy between \(\beta\) and \(T_{d}\), the results are unlikely to be reliable for individual galaxies unless there are well-constrained fluxes sampling both the peak of the dust SED and the Rayleigh-Jeans (RJ) tail. For example, da Cunha et al. (2021) found for their bright ALESS sample that without both Herschel detections near the peak and ALMA 870 \(\mu\)m and 2 mm fluxes on the tail, the derived parameters were poorly constrained.
Thus, to better determine the dust properties of our sample, we restrict much of our analysis to sources with observations in at least two ALMA bands, i.e., in addition to the ALMA 870 \(\mu\)m measurement, at least one ALMA measurement at wavelengths longer than 1 mm. We also require the sources to have a redshift estimate. This is satisfied by 52 out of the 57 sources with ALMA spectral observations (the other 5 lack redshifts), and all 8 of the additional sources with GOODS-ALMA 1.1 mm measurements (see Section 2.2). However, we exclude one source (ALMA 58) that was not detected in the SPIRE images or SCUBA-2 450 \(\mu\)m image, and we remove two others (ALMA 45 and ALMA 54) with questionable \(z_{\rm phot}>7\) (their FIR colors suggest lower redshift solutions). After excluding these 3 sources, there are 57 sources which we keep in our analysis; we refer to these as the _robust subset_. The 18 sources not in the robust subset are marked with brackets around their source numbers in Table 1.
Within the robust subset, there are 19 sources with speczs (33%). The median redshift for the robust subset is \(z=2.37\), and the 870 \(\mu\)m flux range is 0.93-8.93 mJy with a median of 2.54 mJy. Each source in the sample has at least two ALMA measurements by our selection criteria, but 44 (77%) have four or more ALMA measurements (2 have 3 ALMA bands and the remaining 11 have just 2 ALMA bands). Thus, the FIR SEDs of our sample are generally better sampled than others in the literature (e.g., da Cunha et al., 2021; Cooper et al., 2022).
## 3 SED analysis and dust properties
The main goal of this work is to measure the dust properties of our sample of DSFGs as accurately as possible. We do this by fitting the SEDs using simple isothermal models. We fit the SEDs for 69 sources--the 57 in the robust subset plus the remaining 12 in the total sample with redshifts.
We fit the photometry of each source with a single-temperature modified blackbody (MBB), for which the flux density, \(S_{\nu}\), for rest-frame frequency, \(\nu\), is given as \(S_{\nu}\propto\kappa(\nu)B_{\nu}(T_{d})\). Here \(B_{\nu}(T_{d})\) is the Planck distribution and \(\kappa(\nu)\) is the frequency-dependent dust opacity. Although the dust consists of components at a range of different temperatures, an effective \(T_{d}\) description is often used because it serves as a good trade-off between number of model parameters and quality of fit to the FIR SED.
Additionally, we make the assumption that the emission is optically thin at the wavelengths we are fitting, such that \(\kappa(\nu)\propto\nu^{\beta}\), where \(\beta\) is the dust emissivity spectral index. At the resolution of our observations, this \(\beta\) represents a galaxy-averaged value; it manifests primarily in the observed slope of the RJ fall-off.
The optically thin, isothermal MBB has been widely used as a successful model for the FIR/(sub)millimeter dust emission of galaxies (e.g., Kovacs et al., 2006; Magdis et al., 2012; Jin et al., 2019; Dudzeviciute et al., 2020; da Cunha et al., 2021; Barger et al., 2022), so by choosing it, we enable comparisons of our derived parameters with those from the literature.
We use the Python-based Markov chain Monte Carlo (MCMC) package emcee(Foreman-Mackey et al., 2013) to fit the MBB to our data and recover the posterior likelihood distributions for the parameters. The free parameters in these fits are the overall normalization, \(\beta\), and \(T_{d}\). We use flat priors on \(\beta\) between 0.8 and 4.0 and on \(T_{d}\) between 10 K and 90 K. We choose this Bayesian approach over a simple least-squares fitting algorithm because least-squares fitting has been shown to introduce an artificial correlation between \(T_{d}\) and \(\beta\)(Shetty et al., 2009, 2012; Kelly et al., 2012).
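As a concrete illustration of this fitting procedure, the sketch below (not our production pipeline; the photometry, measurement errors, walker settings, and chain lengths are placeholder assumptions) fits an optically thin MBB with emcee and extracts median likelihood values and 16th to 84th percentile ranges:

```python
# Minimal optically thin MBB fit with emcee (placeholder photometry; flat priors on
# beta and T_d as described in the text, with the normalization left free).
import numpy as np
import emcee
from scipy.constants import h, k, c

def planck_nu(nu, T):
    # Planck function B_nu(T); the absolute scale is absorbed by the free normalization
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def model_flux(nu_rest, logN, beta, Td):
    # Optically thin MBB: S_nu proportional to nu^beta * B_nu(T_d)
    return 10.0**logN * nu_rest**beta * planck_nu(nu_rest, Td)

def log_prob(theta, nu_rest, flux, err):
    logN, beta, Td = theta
    if not (0.8 < beta < 4.0 and 10.0 < Td < 90.0):   # flat priors from the text
        return -np.inf
    resid = flux - model_flux(nu_rest, logN, beta, Td)
    return -0.5 * np.sum((resid / err)**2)

# Placeholder photometry (observed wavelengths in metres, fluxes in Jy) and redshift
z = 2.39
lam_obs = np.array([250e-6, 350e-6, 450e-6, 870e-6, 1.2e-3, 2.0e-3, 3.0e-3])
flux = np.array([30e-3, 35e-3, 25e-3, 5e-3, 2e-3, 0.4e-3, 0.07e-3])
err = np.sqrt((0.10 * flux)**2 + (0.05 * flux)**2)   # 5% calibration error in quadrature
nu_rest = (1.0 + z) * c / lam_obs                    # rest-frame frequencies in Hz

# Rough starting normalization from the 870 um point, then a small ball of walkers
logN0 = np.log10(flux[3] / (nu_rest[3]**1.8 * planck_nu(nu_rest[3], 35.0)))
ndim, nwalkers = 3, 32
p0 = np.array([logN0, 1.8, 35.0]) + 1e-3 * np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(nu_rest, flux, err))
sampler.run_mcmc(p0, 3000)
chain = sampler.get_chain(discard=1000, flat=True)

# Median likelihood estimates and 16th-84th percentile ranges, as quoted in the text
beta_lo, beta_med, beta_hi = np.percentile(chain[:, 1], [16, 50, 84])
Td_lo, Td_med, Td_hi = np.percentile(chain[:, 2], [16, 50, 84])
```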
Figure 1: Histograms of the (a) 870 \(\mu\)m flux densities and (b) redshifts for the total sample (gray). Sources with speczs are shown in color in both plots. The six sources in the total sample without redshifts are not shown in (b), and the two with questionable \(z_{\rm phot}>7\) are shown at a nominal redshift of \(z=7\).
Figure 2: Three-color JWST NIRCam image (red = F444W, green = F210M, blue = F182M) for ALMA 68 from the FRESCO survey (Oesch et al., 2023). The green circle has a 1′′ radius and is centered on the ALMA 870 \(\mu\)m position.
In the fits, we only consider the photometry at rest-frame wavelengths higher than 50 \(\mu\)m. We add a 5% error in quadrature to the uncertainties to correct for differences in the absolute flux calibration across FIR/(sub)millimeter bands. We perform all of the fits at our adopted redshifts (see Table 1) without allowing redshifts to vary.
We include the corrections to the SED from the CMB as outlined in da Cunha et al. (2013). However, these are expected to be small for sources at \(z<5\). We check this by fitting MBBs without the CMB correction to the robust subset and comparing the median likelihood values for \(\beta\) and \(T_{d}\) to those which include the CMB effects. The results are shown in Figure 3. We find small offsets in both \(\beta\) and \(T_{d}\), but conclude that the effects of the CMB are fairly negligible for our sample.
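In outline (see da Cunha et al. 2013 for the full derivation, whose conventions we summarize only approximately here), these corrections consist of (i) the additional heating of the dust by the warmer CMB at redshift \(z\),
\[T_{d}(z)=\left[\left(T_{d}^{z=0}\right)^{4+\beta}+T_{0}^{4+\beta}\left((1+z)^{4+\beta}-1\right)\right]^{1/(4+\beta)},\]
with \(T_{0}=2.725\) K, and (ii) the fact that only emission in excess of the CMB background is observable,
\[\frac{S_{\nu}^{\rm obs}}{S_{\nu}^{\rm intrinsic}}=1-\frac{B_{\nu}[T_{\rm CMB}(z)]}{B_{\nu}[T_{d}(z)]},\]
where \(T_{\rm CMB}(z)=T_{0}(1+z)\). Both terms are small for the temperatures and redshifts of our sample, consistent with Figure 3.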
Three sources, ALMA 54, ALMA 61, and ALMA 69, none of which is in the robust subset, had 3 or fewer photometric data points included in the fit. Since the number of data points must always be greater than the number of free parameters, for these sources we only fit \(T_{d}\) and not \(\beta\).
For most of our sources, the breadth of wavelength coverage allows us to constrain the dust parameters tightly (we report the median likelihood estimate of \(\beta\) and \(T_{d}\) as the measured value here and throughout the paper). This can be quantified by the uncertainties on the measured parameters, for which we use the 16th to 84th percentile range of the posterior likelihood distribution. For the 44 sources in the robust subset with four or more ALMA measurements, we find a median 16th to 84th percentile range for \(\beta\) of 0.45 and a median 16th to 84th percentile range for \(T_{d}\) of 7.0 K. For the 13 sources with only 2 or 3 ALMA bands available, the parameters become more uncertain: the median range on \(\beta\) is 1.05 and the median range on \(T_{d}\) is 16.5 K. The errors are generally even higher for sources not in the robust subset.
We illustrate this for a concrete example in Figure 4, where we compare the fits for ALMA 43, which has 5 ALMA measurements; ALMA 48, which has just 2; and ALMA 60, which is not included in the robust subset due to it only having ALMA 870 \(\mu\)m and shorter wavelength data. We also show the marginalized likelihood distributions (histograms) and joint likelihood distributions (contour plot) for \(\beta\) and \(T_{d}\) for each source. For ALMA 43, the parameters are tightly constrained, with \(\beta=1.54^{+0.27}_{-0.26}\) and \(T_{d}=37.0^{+6.1}_{-4.7}\) K. The elongated shape of the joint likelihood distribution reflects the intrinsic degeneracy between \(\beta\) and \(T_{d}\), though in this case the parameters are well-constrained regardless. For ALMA 48, we find \(\beta=2.54^{+0.65}_{-0.65}\) and \(T_{d}=32.2^{+10.4}_{-6.1}\) K; the parameters are still well-constrained with slightly higher errors. In contrast, for ALMA 60, though the peak of the SED is sampled by the SPIRE and submillimeter data, the lack of millimeter data means that a much larger range of emissivities are consistent with the available data (clearly apparent in the posterior likelihood distribution for \(\beta\)). We find \(\beta=1.72^{+1.20}_{-0.68}\) and \(T_{d}=36.0^{+14.4}_{-11.1}\) K for this source. These errors are 2-4\(\times\) higher than those for ALMA 43, which is also reflected in the 16th to 84th percentile distribution of models shown in the SED fits and the relatively wide joint likelihood distribution for \(\beta\) and \(T_{d}\).
In Figure 5, we show histograms of (a) \(\beta\) and (b) \(T_{d}\) for the 57 sources in the robust subset (red and hatched) and the remaining 12 sources in the total sample with redshifts (gray). For the robust subset in (a), we find a wide distribution of \(\beta\) that peaks in the range \(\beta=1.6\)-2.0 and has a median \(\beta=1.78^{+0.43}_{-0.25}\), where the uncertainties are the 16th to 84th percentile range. The error on the median derived from bootstrapping the sample is \(\pm 0.06\). Our median \(\beta\) is consistent with the frequently-assumed value for high-redshift studies of \(\beta=1.8\) and with recent measurements by da Cunha et al. (2021), as discussed in the introduction. Considering only the 19 robust subset sources with speczs, we find a median \(\beta=1.74^{+0.28}_{-0.22}\).
We note that just six sources in the robust subset had measured emissivity indices of \(\beta\gtrsim 2.8\). It is possible that these sources do in fact have very steep emissivities, or this could be an artifact of the fitting driven by relatively high uncertainties on the 2 mm and/or 3 mm data. Five of the sources only have photzs, so there is some possibility that an incorrect redshift could affect these fits as well. We show the SED for one of the six sources, ALMA 36, in Figure 6, along with the posterior likelihood distribution for \(T_{d}\) and \(\beta\). All six of these sources have large errors on \(\beta\) that would make them consistent with \(\beta\sim 2.8\) or below.
For the robust subset in Figure 5(b) (blue and hatched), we find 15 K \(<T_{d}<70\) K with a median \(T_{d}=33.6^{+12.1}_{-5.4}\) K, where the error refers to the
Figure 3: Difference in median likelihood estimates with and without CMB corrections for \(\beta\) (top) and for \(T_{d}\) (bottom) vs. redshift, for the robust subset. Models with the CMB included find slightly higher \(T_{d}\) and slightly lower \(\beta\) on average, but the effect is minimal.
16th to 84th percentile range of the sample. The error on the median derived from bootstrapping is \(\pm 1.0\) K. This median \(T_{d}\) is similar to the median \(T_{d}=30.4^{+6.9}_{-4.7}\) K found by Dudzeviciute et al. (2020) for their ALMA SCUBA-2 UDS sample and to the median \(T_{d}=30^{+14}_{-8}\) K found by da Cunha et al. (2021) (again, uncertainties are the 16th to 84th percentile range). Both of these results were derived using similar methods to ours, though Dudzeviciute et al. (2020) assumed a fixed \(\beta=1.8\). Considering only the 19 robust subset sources with speczs, we find a median \(T_{d}=32.8^{+13.0}_{-1.5}\) K.
We also integrate the best-fit MBB from rest-frame 8 \(\mu\)m to 1000 \(\mu\)m for each member of the robust subset to measure the FIR luminosities of our sources and to help in investigating selection biases. We then multiply the result by a correction factor of 1.35 (as was done in, for example, Jin et al.2019) to account for the fact that MBB models for DSFGs typically underestimate the MIR flux that results from warmer dust components. The resulting luminosities range from \(L_{\rm IR}=3.9\times 10^{11}\) L\({}_{\odot}\)to \(L_{\rm IR}=1.4\times 10^{13}\) L\({}_{\odot}\). We discuss the relation between dust temperature and FIR luminosity in Section 4.3.
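A schematic version of this calculation is given below (a sketch under stated assumptions, not our pipeline; the example MBB normalization and parameters are placeholders, and the cosmology is that adopted in Section 1):

```python
# Sketch: integrate a best-fit observed-frame MBB over rest-frame 8-1000 um to get
# L_IR, then apply the 1.35 MIR correction factor discussed above.
import numpy as np
from scipy.constants import h, k, c
from scipy.integrate import quad
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)
L_SUN = 3.828e26  # W

def planck_nu(nu, T):
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def mbb_obs(nu_obs, z, norm=1e-10, beta=1.8, Td=34.0):
    # Observed-frame flux density in Jy of an optically thin MBB (placeholder parameters)
    nu_rest = nu_obs * (1.0 + z)
    return norm * nu_rest**beta * planck_nu(nu_rest, Td)

def L_IR(z, mir_correction=1.35):
    DL = cosmo.luminosity_distance(z).to(u.m).value
    nu1, nu2 = c / 1000e-6, c / 8e-6          # rest-frame 1000 um ... 8 um in Hz
    # L_nu,rest(nu_rest) = 4 pi DL^2 S_nu,obs(nu_rest/(1+z)) / (1+z); integrate over nu_rest
    integrand = lambda nu_rest: mbb_obs(nu_rest / (1.0 + z), z) * 1e-26  # Jy -> W m^-2 Hz^-1
    integral, _ = quad(integrand, nu1, nu2, limit=200)
    return mir_correction * 4.0 * np.pi * DL**2 * integral / (1.0 + z) / L_SUN

print(f"L_IR ~ {L_IR(2.4):.2e} L_sun (placeholder normalization)")
```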
In Figure 7, we show the relationship between \(\beta\) and \(T_{d}\) that we measure for the robust subset, with colors denoting redshifts. We find a general negative correlation, with Pearson coefficient \(r=-0.67\), \(p\) value \(=1\times 10^{-10}\). This relationship is expected, both from other studies of high-redshift galaxies (e.g., da Cunha et al.2021) and from various observations of molecular clouds within the Galactic plane (e.g., Paradis et al.2010). Laboratory tests of silicate grains have also shown an intrinsic negative correlation of emissivity with temperature (Agladze et al.1996; Boudet et al.2005; Inoue et al.2020).
Although the intrinsic degeneracy between \(\beta\) and \(T_{d}\) can play a role in a negative correlation (e.g., Galliano et al.2018), our SEDs are very well-sampled and thus we expect that our data are sufficient to break the degeneracy. In Figure 7 we overplot the stacked joint likelihood distribution for the robust subset (shown as contours)
Figure 4: Top row: Best-fit MBB (black curve) and 16th to 84th percentile range from the posterior distribution of the MCMC models (blue shaded region) for ALMA 43 (left), a source in the robust subset with five ALMA measurements; ALMA 48 (center), also in the robust subset but with only two ALMA bands; and ALMA 60 (right), a source not in the robust subset due to the lack of observations at wavelengths longer than 870 \(\mu\)m. Photometry: Red circles—ALMA 1.1 mm, 1.2 mm, 2 mm, and 3 mm, maroon pentagon—ALMA 870 \(\mu\)m, green square—SCUBA-2 450 \(\mu\)m, dark red stars—Herschel/PACS 100 and 160 \(\mu\)m and SPIRE 250 and 350 \(\mu\)m. The MBB is only fit to the data at rest-frame wavelengths \(\geq\)50 \(\mu\)m (points not included in the fits are marked with black squares). The legend lists the parameters for the MBB fit and the adopted redshift from Table 1. Bottom row: Marginalized likelihood distributions (histograms) and joint likelihood distribution (contours) of \(\beta\) and \(T_{d}\) for each of these sources. While the fits to the data are good in all cases, the dust parameters \(\beta\) and \(T_{d}\) can only be well constrained for ALMA 43 and ALMA 48, which have millimeter observations.
and find that it traces the median likelihood estimates for \(\beta\) and \(T_{d}\) well. This implies that the observed negative correlation is robust. da Cunha et al. (2021) discuss this correlation and several possible selection effects that could artificially produce this result. However, we find the negative correlation is only minimally affected when considering only sources with \(L_{\rm IR}>2\times 10^{12}\) L\({}_{\odot}\)and \(z<3.5\), where we expect a greater degree of completeness. This suggests that selection effects are unlikely to bias our result.
In general, one would expect \(\beta\) to correlate with a ratio of fluxes on the RJ tail of the blackbody, such as the 1.2 mm/2 mm ratio, although a spread of temperatures and redshifts may introduce scatter into this relation. In Figure 8, we check for this correlation between \(\beta\) and the 1.2 mm/2 mm flux ratio for all robust subset sources with a 2 mm measurement. Although there is scatter in the trend as expected, we find a significant correlation (\(r=0.50\), \(p=7\times 10^{-4}\)). This correlation remains when we only consider sources with 2 mm flux \(>\) 0.16 mJy, where the sample is expected to be essentially complete (Cowie et al., 2023). This cut excludes most of the sources with \(\beta\gtrsim 2.8\) and the one source with 1.2 mm/2 mm ratio \(>20\) (due to a low signal-to-noise detection at 2 mm).
In Table 2, we summarize the median likelihood estimates for the dust parameters obtained from the MBB fits for the robust subset. In Appendix B, we show the photometry and best-fit MBB for all 69 sources that we fit.
## 4 Discussion
### Interpretation of the Measured Emissivity Index
Our median \(\beta=1.78^{+0.43}_{-0.25}\) is broadly consistent with theoretical predictions for the interstellar medium (ISM), which give \(\beta\sim 2\)(Draine and Lee, 1984). Measurements of the Milky Way's ISM have yielded \(\beta=1.5\)(Paradis et al., 2009) and \(\beta=1.8\)(Planck Collaboration et al., 2011). These values are within the range spanned by our sample, though our results suggest that \(\beta=1.5\) may not be appropriate as an assumption for high-redshift galaxies as a whole.
Figure 5: Histograms of the (a) dust emissivity indices and (b) effective dust temperatures measured for the 57 sources in the robust subset (hatched and red in (a); hatched and blue in (b)) and the 12 remaining sources in the total sample with redshifts (gray). Three of the latter sources do not have a measured \(\beta\) due to limited photometry and hence are not shown in (a). We only allowed \(\beta\) to range between 0.8 and 4.0 and \(T_{d}\) to range between 10 K and 90 K.
Figure 6: Same as Figure 4, but for ALMA 36.
However, as we mentioned in Section 3, the emissivities that we measure from our SED fits are galaxy-integrated values that characterize the slope of the overall dust emission spectrum on the RJ side. The connection to the intrinsic emissivity of the dust composition of the galaxy is difficult to make and relies on assumptions about the relative proportions of dust grains of different sizes and structures and the ISM geometry, among other factors. Higher values of \(\beta\) could suggest a greater proportion of larger crystalline dust grains that are at generally lower temperatures (Agladze et al., 1996). However, a range of temperatures in the dust components would, in general, flatten the effective RJ slope and hence reduce the measured \(\beta\).
Regardless of the physical interpretation, we have shown that isothermal MBBs with \(\beta\approx 1.8\) can be used to fit the FIR SEDs of high-redshift dusty galaxies spanning a range of submillimeter/millimeter fluxes, redshifts, and observed wavelengths.
### Comparisons with General Opacity Models
While many authors assume an optically thin MBB model as a good trade-off between number of model parameters and quality of fit to the FIR SED, there is no guarantee that the emission is optically thin at shorter wavelengths. Some works (e.g., Conley et al., 2011; Casey et al., 2019) suggest that the dust may remain optically thick out to rest-frame wavelengths \(\lambda_{0}\sim 200\)-300 \(\mu\)m. Accordingly, several recent studies of the dust properties of DSFGs have instead used general opacity factors in their MBB models (e.g., Casey et al., 2021; Cooper et al., 2022).
To make a direct comparison with these studies, we next fit the SEDs of the robust subset with a general opacity MBB model of the form \(S(\nu)\propto(1-e^{-\tau})B_{\nu}(T)\), where \(\tau\) is the optical depth described by the power law \(\tau\propto(\nu/\nu_{0})^{\beta}\). Here \(\nu_{0}\) is the turnover frequency at which the optical depth equals 1. We assume a turnover wavelength \(\lambda_{0}=200\)\(\mu\)m, and we include a MIR power law with \(\alpha=2.0\), following the prescription of Casey (2012).
With the inclusion of the power law on the short-wavelength side of the FIR peak, we fit the photometry down to a rest-frame wavelength of 10 \(\mu\)m. The MIR power law is included in the SED fits of both Casey et al. (2021) and Cooper et al. (2022), though the former vary the slope \(\alpha\) in their fits, where possible, and generally find steep slopes (\(\alpha\approx 3.5\)-7). We choose to leave \(\alpha\) fixed at 2.0 so that our model is identical to that of Cooper et al. (2022).
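For reference, a minimal sketch of the general opacity MBB used here is given below (an illustration under our stated assumptions, not a verbatim transcription of the fitting code); the MIR power-law component joined on the short-wavelength side of the peak is omitted for brevity.

```python
# General-opacity modified blackbody: S_nu proportional to (1 - e^{-tau}) B_nu(T),
# with tau = (nu/nu0)^beta and nu0 = c/lambda0 for a turnover wavelength lambda0 = 200 um.
import numpy as np
from scipy.constants import h, k, c

def planck_nu(nu, T):
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def general_opacity_mbb(nu_rest, norm, beta, Td, lambda0=200e-6):
    nu0 = c / lambda0                      # frequency at which the optical depth reaches 1
    tau = (nu_rest / nu0)**beta
    return norm * (1.0 - np.exp(-tau)) * planck_nu(nu_rest, Td)

# In the optically thin limit (tau << 1), 1 - e^{-tau} ~ tau, and the model reduces to
# the nu^beta * B_nu(T_d) form used for the fits in Section 3.
```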
In Figure 9, we show a comparison of this opacity + power law model and the optically thin model fits for ALMA 24. Both fit the data well but with somewhat different median likelihood values. The optically thin fit gives \(\beta=1.44\) and \(T_{d}=42.0\) K, while the opac-
Figure 8: Emissivity index measured from MBB fits vs. the 1.2 mm/2 mm flux ratio, for the robust subset with both ALMA 1.2 mm and 2 mm measurements. The data are color-coded by adopted redshift (right-hand scale). The error bars on \(\beta\) are the 16th to 84th percentile range of the likelihood distribution, while the errors on the flux ratio are derived from the respective photometric errors from Table 1. ALMA 34 is shown with a rightward-facing triangle since its flux ratio is off the plot at 26.9, due to its low-significance detection at 2 mm.
Figure 7: For the robust subset, emissivity index vs. dust temperature, both measured from MBB fits. The data are color-coded by adopted redshift (right-hand scale). Error bars represent 16th to 84th percentile ranges from the likelihood distributions. The stacked joint likelihood distribution for the robust subset is shown as contours (levels are 39%, 86.4%, 98.8%, i.e., the 1\(\sigma\), 2\(\sigma\), and 3\(\sigma\) levels of a 2D Gaussian).
ity + power law fit gives \(\beta=1.76\) and \(T_{d}=59.0\) K. We also note that for this galaxy, the \(\alpha=2.0\) power law slope does a good job of fitting the measured MIPS 70 \(\mu\)m point, which is not included in our optically thin fit.
We find median \(\beta=2.06^{+0.56}_{-0.37}\) and \(T_{d}=57.5^{+12.6}_{-12.2}\) K for the robust subset using the opacity + power law model. This median \(\beta\) is slightly lower than those reported in Casey et al. (2021) and Cooper et al. (2022), who find median \(\beta=2.2^{+0.5}_{-0.4}\) and \(\beta=2.4^{+0.3}_{-0.3}\), respectively. The errors here are the 16th to 84th percentile range of each sample and thus reflect the spread in measured \(\beta\) values rather than the errors on the individual \(\beta\) values.
Thus, in Figure 10, we compare the individual errors on our \(\beta\) values with the errors from Cooper et al. (2022), who only had ALMA data in the 2 mm band. Since we and Cooper et al. (2022) report both upper and lower errors determined from the likelihood distributions, in the figure we take the average of the upper and lower errors for each source (i.e., half of the 16th to 84th percentile range; for a Gaussian posterior this would be the 1\(\sigma\) error) for a rough comparison of errors. We see that the addition of more ALMA bands in our case reduces the errors on the individual \(\beta\) measurements.
A Mann-Whitney test gives a 0.05% probability that our sample comes from the same underlying distribution as that of Cooper et al. (2022). This may be due to their combining single-dish observations with ALMA 2 mm data, or to the brighter flux limit of their sample (SCUBA-2 850 \(\mu\)m flux \(>5.55\) mJy). However, the 7 sources in our sample that meet their selection criteria have median \(\beta=1.89\) from the opacity + power law fits, suggesting that the discrepancy is not driven by the flux limit but more likely by the combination of single-dish and interferometric measurements.
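This comparison can be reproduced schematically as follows (a sketch only; the arrays hold per-source \(\beta\) values and the numbers shown are placeholders, not the measured or published values):

```python
# Two-sample Mann-Whitney U test comparing per-source beta distributions (placeholder data).
import numpy as np
from scipy.stats import mannwhitneyu

beta_this_work = np.array([1.9, 2.1, 1.8, 2.3, 2.0])   # placeholders, not our measurements
beta_cooper22 = np.array([2.4, 2.6, 2.2, 2.5, 2.3])    # placeholders, not the published values
stat, p_value = mannwhitneyu(beta_this_work, beta_cooper22, alternative='two-sided')
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```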
The median temperature we measure for the opacity + power law model is \(\sim\)24 K lower and the median emissivity is 0.3 higher than the results for the optically thin MBB. For comparison, da Cunha et al. (2021) found dust temperatures that were \(\sim\)10 K lower for their optically thin model than for their general opacity model but found that emissivity was robust against different opacity assumptions--though they did not include a MIR power law and they allowed \(\lambda_{0}\) to vary between 60-140 \(\mu\)m. Meanwhile, for simulated galaxies with optically thick emission out to \(\sim\)200 \(\mu\)m, Hayward et al. (2011) found that dust temperatures derived from an optically thin model could underpredict its optically thick counterpart by \(\sim\)20 K, which is closer to the deviation we measure.
To test whether our discrepancies in \(\beta\) and \(T_{d}\) were related to the assumption of \(\lambda_{0}=200\)\(\mu\)m and/or the inclusion of the MIR power law, we fit the SEDs of the robust subset again with three other general opacity models: a general opacity MBB with \(\lambda_{0}\) fixed to 200 \(\mu\)m with no MIR power law, a general opacity MBB with \(\lambda_{0}\) fixed to 100 \(\mu\)m with a MIR power law, and a general opacity MBB with \(\lambda_{0}\) fixed to 100 \(\mu\)m but no power law included (to make a better comparison with the general opacity model used in da Cunha et al. 2021).
Figure 10: Histogram of errors in our \(\beta\) values for the robust subset opacity + power law MBB fits (blue) with the errors from Cooper et al. (2022) (hatched and red). The errors shown are half of the 16th to 84th percentile range determined for each individual measurement.
Figure 9: General opacity + power law MBB fit (red dot-dashed curve), with MIR power law slope \(\alpha=2.0\), compared with optically thin MBB fit (black curve) for ALMA 24. The 16th to 84th percentile range from the posterior distribution of the MCMC models is shaded red for the opacity + power law model and shaded blue for the optically thin model. The optically thin model is fit to all points at rest-frame wavelengths greater than 50 \(\mu\)m (points below this are marked with black squares) while the opacity + power law fit is fit to all data. The measured dust temperature and emissivity for each are given in the figure legend, showing a slightly steeper \(\beta\) and a higher \(T_{d}\) for the opacity + power law model.
For the \(\lambda_{0}=100\)\(\mu\)m MBB model with no power law, we find median \(\beta=1.80\) and median \(T_{d}=42\) K. The results are only minimally affected by the inclusion of the MIR power law: for the \(\lambda_{0}=100\)\(\mu\)m MBB model with the power law we find median \(\beta=1.78\) and median \(T_{d}=45\) K. The median \(\beta\) for these models is consistent with the optically thin case, and the median \(T_{d}\) is larger by about \(\sim\)10 K. These results are consistent with those of da Cunha et al. (2021), who used a similar model. However, for the \(\lambda_{0}=200\)\(\mu\)m MBB model with the power law included, we measure a median \(\beta=2.02\) and median \(T_{d}=56\) K. Thus, we observe that the assumption of the turnover wavelength has a significant effect on the derived parameters, and the emissivity is not necessarily robust against opacity assumptions if the turnover wavelength is high enough. This may also help to explain why models with \(\lambda_{0}=200\)\(\mu\)m such as that of Cooper et al. (2022) find a higher median \(\beta\) for their sample than that of da Cunha et al. (2021), though as we have already discussed, we find \(\beta\) values inconsistent with Cooper et al. (2022) even under identical modeling assumptions.
Finally, we note that the general opacity models (with or without a power law) did not necessarily provide a
Figure 11: For the robust subset, (a) Dust temperature vs. FIR luminosity, (b) FIR luminosity vs. redshift, (c) Dust temperature vs. redshift, and (d) Same as (c) but only for the sources with a FIR luminosity \(>2\times 10^{12}\) L\({}_{\odot}\)and \(z<3.5\), where our sample is more likely to be complete. The dashed line shows the median \(T_{d}=33.6\) K, and the shaded region denotes the 16th to 84th percentile range of the robust subset. In all panels, sources with speczs are shown as red pentagons while those with photzs are shown as blue squares.
better fit to the FIR data than our optically thin model, aside from being able to account for shorter wavelength fluxes in the cases where the power law was included.
Even if we cannot constrain with certainty the underlying dust properties or opacity of the sources, we can conclude that our range of dust parameters is consistent with theory and with recent studies of the local and high-redshift universe. This is true whether we assume a general opacity or optically thin model, though we caution that--as many authors have pointed out--making direct comparisons between parameters derived using different models is not straightforward. Allowing for differences in the assumed models, we confirm that \(\beta\) between 1.8 and 2.0 is appropriate for DSFGs at high redshift.
### Dust Temperature Variation with Redshift
Although a number of studies have found that dust temperature increases with redshift (e.g., Magdis et al., 2012; Magnelli et al., 2014; Bethermin et al., 2015; Schreiber et al., 2018; Zavala et al., 2018; Sommovigo et al., 2022), there is debate over whether this is simply a selection bias due to picking out higher luminosity sources at higher redshifts. Other recent studies find little to no evidence of temperature evolution with redshift when the luminosity dependence is taken into account (Lim et al., 2020; Dudzeviciute et al., 2020; Barger et al., 2022; Drew and Casey, 2022). The luminosity ranges of these studies overlap with ours in part or in full, although Drew and Casey (2022) consider a lower-redshift selection of \(0<z<2\) galaxies with a luminosity range that extends down to \(L_{\rm IR}\sim 10^{10}\) L\({}_{\odot}\).
Our robust subset is also consistent with no evolution. In Figure 11(a), we show \(T_{d}\) from our optically thin MBB fits plotted against FIR luminosity. We see an increase in dust temperature with luminosity. From Figure 11(c), we also see a general increase in \(T_{d}\) with redshift. However, when we restrict our analysis to sources with \(L_{\rm IR}>2\times 10^{12}\) L\({}_{\odot}\) and \(z<3.5\), where the sample is less likely to suffer from incompleteness, we find the Pearson coefficient for the temperature versus redshift relation is \(r=0.10\), with a \(p\) value of 0.61, indicating no statistically significant correlation. This is shown in Figure 11(d); the lack of visible trend is clear. We note that since we employ photzs for some of our sources and redshift and dust temperature are degenerate in MBB SED fits, incorrect redshifts could affect our results. Thus, it is possible we have underestimated the errors on the dust temperatures of some sources.
## 5 Summary
We analyzed the dust properties of a large sample of GOODS-S galaxies selected at 870 \(\mu\)m using new ALMA continuum observations at millimeter wavelengths. Compared to other large samples of DSFGs, the ALMA observations, which probe the RJ side of the FIR SED, are among the deepest obtained at these wavelengths. Here we summarize our main results:
1. Using optically thin, isothermal MBB fits, we measured a median \(T_{d}=33.6^{+12.1}_{-5.4}\) K and a median \(\beta=1.78^{+0.43}_{-0.25}\) for our robust subset of 57 sources.
2. We observed a negative correlation between \(\beta\) and \(T_{d}\) for our robust subset. Since our FIR SEDs are relatively well-sampled (up to nine photometric points from rest-frame 50-1500 \(\mu\)m) and based on the stacked likelihood distributions, this relationship appears robust. We also confirm that it is unlikely to have been produced by selection effects.
3. We determined that the opacity assumptions used in the MBB fits can affect the measured values for \(\beta\) as well as \(T_{d}\). We found that a general opacity MBB with \(\lambda_{0}=100\)\(\mu\)m gave similar values of \(\beta\) to optically thin fits, while a general opacity MBB with \(\lambda_{0}=200\)\(\mu\)m gave higher values of \(\beta\). In all cases \(T_{d}\) was higher for the general opacity MBB fits.
4. After restricting to sources in our robust subset with \(L_{\rm IR}>2\times 10^{12}\) L\({}_{\odot}\)and \(z<3.5\), we find no evidence for temperature evolution from \(z=1\) to \(z=3.5\).
This work is one of only a few to directly measure the emissivity index and dust temperature of individual DSFGs using deep ALMA millimeter imaging. We find that the dust emission of DSFGs is well represented by modified blackbodies with \(\beta\sim 1.8\). Future observations of larger and fainter samples of DSFGs using ALMA and TolTEC will confirm whether the dust characteristics of DSFGs differ from local galaxies and help to disentangle competing theories about the origins of their extreme dust masses and star formation rates.
We thank the anonymous referee for constructive comments that helped us to improve the manuscript. We gratefully acknowledge support for this research from the William F. Vilas Estate (S. J. M.), a Kellett Mid-Career Award and a WARF Named Professorship from the University of Wisconsin-Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation (A. J. B.), NASA grant 80NSSC22K0483 (L. L. C.), the Millennium Science Initiative Program - ICN12_009 (F. E. B), CATA-Basal - FB210003 (F. E. B), and FONDECYT Regular - 1190818 (F. E. B) and 1200495 (F. E. B).
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2021.1.00024.S. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ.
The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan, Academia Sinica Institute of Astronomy and Astrophysics, the Korea Astronomy and Space Science Institute, the National Astronomical Observatories of China and the Chinese Academy of Sciences (grant No. XDB09000000), with additional funding support from the Science and Technology Facilities Council of the United Kingdom and participating universities in the United Kingdom and Canada.
We wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. ALMA, JCMT
astropy (Astropy Collaboration et al., 2022), casa (McMullin et al., 2007), emcee (Foreman-Mackey et al., 2013)
## Appendix A Flux densities and dust properties
In Table 1, we list the positions and redshifts of the total sample, along with the ALMA 1.2 mm, 2 mm, and 3 mm fluxes from the present work and the 1.1 mm fluxes from Gomez-Guijarro et al. (2022). In Table 2, we give the median likelihood \(\beta\) and \(T_{d}\) values and the errors from the posterior likelihood distributions for our optically thin MBB fits, as well as the FIR luminosities and errors measured from the fits.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ C18} & & & & Total & Peak & & Peak & & Peak \\ No. & R.A. & Decl. & \(z\) & \(f_{1.13\,{\rm mm}}\) & Error & \(f_{1.24\,{\rm mm}}\) & Error & \(f_{2\,{\rm mm}}\) & Error & \(f_{3\,{\rm mm}}\) & Error \\ & J2000.0 & J2000.0 & & (mJy) & & (mJy) & & (mJy) & & (mJy) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) \\ \hline
[MISSING_PAGE_POST]
\hline \end{tabular}
\end{table}
Table 1: Total Sample Redshifts and Fluxes
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline C18 & & & & & Total & & Peak & & Peak & \\ No. & R.A. & Decl. & \(z\) & \(f_{\rm 1.13\,mm}\) & Error & \(f_{\rm 1.24\,mm}\) & Error & \(f_{\rm 2\,mm}\) & Error & \(f_{\rm 3\,mm}\) & Error \\ & J2000.0 & J2000.0 & & & (mJy) & & (mJy) & & (mJy) & & (mJy) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) \\ \hline
34 & 53.090752 & -27.782473 & 1.95 (1.86-1.97) & 0.76 & 0.09 & 0.35 & 0.07 & 0.01 & 0.03 & \(\cdots\) & 0.022 \\
35 & 53.091747 & -27.712166 & 1.612 & \(\cdots\) & 0.35 & 0.09 & 0.1 & 0.02 & 0.042 & 0.018 \\
36 & 53.086586 & -27.810249 & 2.37 (2.28-2.42) & 0.49 & 0.10 & 0.44 & 0.07 & 0.04 & 0.02 & 0.025 & 0.018 \\
37 & 53.146378 & -27.888807 & 2.96 (2.87-3.05) & \(\cdots\) & 0.43 & 0.09 & 0.1 & 0.03 & 0.011 & 0.018 \\
38 & 53.092335 & -27.803223 & 2.31 (2.26-2.38) & 0.85 & 0.11 & 0.8 & 0.09 & 0.12 & 0.02 & \(\cdots\) & 0.019 \\
39 & 53.124332 & -27.882696 & 3.04 (2.99-3.21) & \(\cdots\) & 0.35 & 0.08 & 0.04 & 0.02 & 0.032 & 0.018 \\
40 & 53.131123 & -27.773195 & 2.223 & 0.72 & 0.11 & 0.56 & 0.05 & 0.12 & 0.03 & 0.015 & 0.026 \\
41 & 53.172832 & -27.858860 & [4.13 (3.45-4.46)] & \(\cdots\) & 0.66 & 0.11 & 0.16 & 0.02 & 0.039 & 0.019 \\
42 & 53.091629 & -27.853390 & 2.34 (2.30-2.42) & 0.81 & 0.10 & 0.41 & 0.09 & 0.12 & 0.02 & 0.029 & 0.02 \\
43 & 53.068874 & -27.879723 & 2.39 (2.32-2.62) & 1.03 & 0.11 & 0.83 & 0.07 & 0.16 & 0.03 & 0.034 & 0.021 \\
[44] & 53.087166 & -27.840195 & \(\cdots\) & 0.85 & 0.11 & 0.73 & 0.09 & 0.18 & 0.02 & 0.07 & 0.017 \\
[45] & 53.041084 & -27.837721 & [7.62 (7.15-7.93)] & \(\cdots\) & 0.54 & 0.08 & 0.14 & 0.02 & 0.022 & 0.018 \\
46 & 53.104912 & -27.705305 & 1.613 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
47 & 53.163540 & -27.890556 & 2.19 (2.12-2.22) & \(\cdots\) & 0.5 & 0.1 & 0.07 & 0.02 & 0.018 & 0.019 \\
48 & 53.160664 & -27.776251 & 2.543 & 0.99 & 0.12 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
49 & 53.053669 & -27.869278 & 1.87 (1.84-1.93) & \(\cdots\) & 0.5 & 0.11 & 0.07 & 0.02 & \(\cdots\) & 0.019 \\
50 & 53.089542 & -27.711666 & 1.69 (1.64-1.70) & \(\cdots\) & 0.3 & 0.08 & 0.07 & 0.03 & 0.012 & 0.022 \\
51 & 53.067833 & -27.728889 & 2.32 (2.29-2.43) & \(\cdots\) & 0.39 & 0.08 & 0.09 & 0.02 & 0.028 & 0.019 \\
52 & 53.064793 & -27.862638 & [4.78 (4.35-5.10)] & 0.54 & 0.10 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
53 & 53.198875 & -27.843945 & 1.56 (1.50-1.60) & 1.01 & 0.12 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
54 & 53.181995 & -27.814196 & [9.42 (9.35-9.83)] & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
55 & 53.048378 & -27.770306 & \(\cdots\) & 0.71 & 0.11 & 0.24 & 0.1 & 0.06 & 0.02 & 0.01 & 0.019 \\
56 & 53.107044 & -27.718334 & 2.299 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
57 & 53.033127 & -27.816778 & 3.08 (3.00-3.68) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
58 & 53.183666 & -27.836500 & [4.73 (4.39-4.90)] & 1.23 & 0.12 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
59 & 53.094044 & -27.804195 & 2.325 & \(\cdots\) & 0.37 & 0.09 & 0.07 & 0.03 & 0.01 & 0.02 \\
60 & 53.124584 & -27.893305 & 2.53 (2.41-2.60) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
61 & 53.132751 & -27.720278 & [4.67 (4.48-5.23)] & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
62 & 53.080669 & -27.720861 & 2.94 (2.88-3.03) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
63 & 53.120041 & -27.808277 & 1.83 (1.78-1.88) & 0.55 & 0.12 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
64 & 53.117085 & -27.874918 & 3.26 (3.20-3.40) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
65 & 53.131458 & -27.841 \\ \hline \end{tabular} \end{table}
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{C18} & & & \multicolumn{3}{c}{Total} & \multicolumn{2}{c}{Peak} & \multicolumn{2}{c}{Peak} & \multicolumn{2}{c}{Peak} \\ No. & R.A. & Decl. & \(z\) & \(f_{\rm 1.13\,mm}\) & Error & \(f_{\rm 1.24\,mm}\) & Error & \(f_{\rm 2\,mm}\) & Error & \(f_{\rm 3\,mm}\) & Error \\ & J2000.0 & J2000.0 & & & (mJy) & & (mJy) & & (mJy) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) \\ \hline
[69] & 53.113125 & -27.886639 & 2.55 (2.47-2.64) & & & & & & & \\
70 & 53.141251 & -27.872860 & 3.14 (3.06-3.34) & & & & 0.06 & 0.04 & 0.02 & 0.019 \\
[71] & 53.056873 & -27.798389 & 1.71 (1.63-1.72) & & & & & & & \\
72 & 53.119957 & -27.743137 & [3.76 (3.47-4.31)] & 0.71 & 0.12 & & & & & \\
73 & 53.142872 & -27.874084 & 2.19 (2.09-2.22) & & & 0.04 & 0.08 & 0.01 & 0.02 & -0.01 & 0.015 \\
74 & 53.093666 & -27.826445 & 0.732 & & & 0.19 & 0.1 & 0.04 & 0.03 & 0.017 & 0.018 \\
[75] & 53.074837 & -27.787111 & & & & & & & & \\ \hline \end{tabular} Note. – Columns: (1) Source number from Table 4 of C18 (brackets refer to sources not in the robust subset), (2) and (3) ALMA 870 \(\mu\)m R.A. and decl., (4) adopted redshift taken from the compilation in Cowie et al. (2023) (three digits after the decimal point for spectroscopic redshifts—except for the JWST NIRSpec redshift for source 68 from Oesch et al. (2023)—and two digits after the decimal point for photometric redshifts, plus 68% confidence ranges from Straatman et al. 2016 for photometric redshifts, given in parentheses), (5) and (6) total ALMA 1.13 mm flux and error from Gómez-Guijarro et al. (2022), (7) and (8) measured peak ALMA 1.24 mm flux and error from this work, (9) and (10) measured peak ALMA 2 mm flux and error from this work, (11) and (12) measured peak ALMA 3 mm flux and error from this work. In Column (4), values in brackets refer to photzs which had quality flag \(Q>3\) in the catalog of Straatman et al. (2016).
\end{table}
Table 1: _(continued)_
\begin{table}
\begin{tabular}{c c c c c} \hline \hline No. & \(z\) & \(\beta\) & \(T_{d}\)/K & log(\(L_{\rm IR}\)/L\({}_{\odot}\)) \\ (1) & (2) & (3) & (4) & (5) \\ \hline
1 & 2.574 & \(1.53^{+0.1}_{-0.1}\) & \(32.8^{+1.9}_{-1.7}\) & \(12.79^{+0.04}_{-0.05}\) \\
2 & 3.69 & \(1.84^{+0.12}_{-0.12}\) & \(34.5^{+2.1}_{-2.0}\) & \(12.98^{+0.05}_{-0.06}\) \\
3 & 2.648 & \(1.73^{+0.14}_{-0.14}\) & \(36.1^{+2.6}_{-2.3}\) & \(12.84^{+0.05}_{-0.05}\) \\
4 & 2.252 & \(2.01^{+0.17}_{-0.16}\) & \(31.4^{+2.4}_{-2.1}\) & \(12.84^{+0.05}_{-0.05}\) \\
5 & 2.309 & \(1.75^{+0.16}_{-0.15}\) & \(32.7^{+2.9}_{-2.5}\) & \(12.66^{+0.05}_{-0.06}\) \\
7 & 3.672 & \(1.45^{+0.14}_{-0.14}\) & \(43.4^{+3.7}_{-3.4}\) & \(12.88^{+0.08}_{-0.07}\) \\
8 & 2.69 & \(2.21^{+0.2}_{-0.18}\) & \(26.6^{+2.7}_{-2.4}\) & \(12.53^{+0.09}_{-0.09}\) \\
9 & 2.322 & \(1.71^{+0.16}_{-0.16}\) & \(32.8^{+2.7}_{-2.4}\) & \(12.63^{+0.07}_{-0.07}\) \\
10 & 2.41 & \(1.63^{+0.21}_{-0.2}\) & \(37.1^{+4.5}_{-3.7}\) & \(12.64^{+0.08}_{-0.08}\) \\
12 & 3.76 & \(1.86^{+0.25}_{-0.23}\) & \(38.4^{+4.0}_{-3.6}\) & \(12.86^{+0.08}_{-0.07}\) \\
13 & 2.73 & \(0.99^{+0.16}_{-0.12}\) & \(49.7^{+5.8}_{-5.7}\) & \(12.61^{+0.12}_{-0.12}\) \\
14 & 2.73 & \(1.96^{+0.21}_{-0.2}\) & \(45.4^{+6.1}_{-4.9}\) & \(13.14^{+0.1}_{-0.1}\) \\
15 & 2.14 & \(2.84^{+0.44}_{-0.38}\) & \(20.6^{+3.2}_{-2.9}\) & \(12.19^{+0.1}_{-0.13}\) \\
16 & 3.37 & \(1.71^{+0.28}_{-0.25}\) & \(36.2^{+5.7}_{-4.9}\) & \(12.62^{+0.14}_{-0.13}\) \\ \hline \end{tabular}
\end{table}
Table 2: Robust Subset Dust Properties
## Appendix B SED fits
In Figure 12, we show the MBB SED fits for the 69 sources from the total sample that have redshifts.
Figure 12: Optically thin MBB SED fits (black curves) and 16th to 84th percentile ranges of the accepted MCMC models (blue shaded regions) for the 69 sources with redshifts in Table 1. Photometry: Red circles—ALMA 1.1 mm, 1.2 mm, 2 mm, and 3 mm, maroon pentagon—ALMA 870 \(\mu\)m, green square—SCUBA-2 450 \(\mu\)m, dark red stars—Herschel/PACS 100 and 160 \(\mu\)m and SPIRE 250 and 350 \(\mu\)m, blue triangles—Spitzer/MIPS 70 \(\mu\)m. The fits are made only to the data at wavelengths greater or equal to rest-frame 50 \(\mu\)m (points not included in the fits are marked with black squares). The entire figure set (69 images) is available in the online journal.
|
2310.02358
|
Shock enhanced [CII] emission from the infalling galaxy Arp 25
|
We present SOFIA observations with HAWC+ and FIFI-LS of the peculiar galaxy
Arp 25, also known as NGC 2276 or UGC 3740, whose morphology is deformed by its
impact with the intra-group medium of the NGC 2300 galaxy group. These
observations show the first direct proof of the enhancement of [CII] emission
due to shocks caused by ram pressure in a group of galaxies. By comparing the
[CII] emission to UV attenuation, dust emission, PAH, and CO emission in
different regions of the galaxy, we find a clear excess of [CII] emission along
the impact front with the intra-group medium. We estimate that the shock due to
the impact with the intra-group medium increases the [CII] emission along the
shock front by 60% and the global [CII] emission by approximately 25% with
respect to the predicted [CII] emission assuming only excitation caused by
stellar radiation. This result shows the danger of interpreting [CII] emission
as directly related to star formation since shocks and other mechanisms can
significantly contribute to the total [CII] emission from galaxies in groups
and clusters.
|
Dario Fadda, Jessica S. Sutter, Robert Minchin, Fiorella Polles
|
2023-10-03T18:34:17Z
|
http://arxiv.org/abs/2310.02358v1
|
# Shock enhanced [CII] emission from the infalling galaxy Arp 25
###### Abstract
We present SOFIA observations with HAWC+ and FIFI-LS of the peculiar galaxy Arp 25, also known as NGC 2276 or UGC 3740, whose morphology is deformed by its impact with the intra-group medium of the NGC 2300 galaxy group. These observations show the first direct proof of the enhancement of [CII] emission due to shocks caused by ram pressure in a group of galaxies. By comparing the [CII] emission to UV attenuation, dust emission, PAH, and CO emission in different regions of the galaxy, we find a clear excess of [CII] emission along the impact front with the intra-group medium. We estimate that the shock due to the impact with the intra-group medium increases the [CII] emission along the shock front by 60% and the global [CII] emission by approximately 25% with respect to the predicted [CII] emission assuming only excitation caused by stellar radiation. This result shows the danger of interpreting [CII] emission as directly related to star formation since shocks and other mechanisms can significantly contribute to the total [CII] emission from galaxies in groups and clusters.
Infrared galaxies (790) - Molecular gas (1073) - Galaxy environments (229) - Interstellar Medium (847)
Footnote †: Data obtained with FIFI-LS and HAWC+ onboard SOFIA
Dario Fadda, Jessica S. Sutter, Robert Minchin, Fiorella Polles
## 1 Introduction
Although galaxy clusters have historically been believed to be closed and dynamically relaxed systems at the present epoch, a large fraction of them instead are continuing to grow through the merger of subclusters and the infall of galaxies (McGee et al., 2009), usually acquired through surrounding filaments (see, e.g., Fadda et al., 2008). As infalling galaxies enter the diffuse hot gas which permeates clusters and massive groups (see, e.g., Sarazin, 1986), they experience ram-pressure which can unbind their gas from their gravitational potential (Gunn & Gott, 1972). This effect can eventually strip most of the gas from the galaxies, leading to the quenching of star formation (van Gorkom, 2004). The affected galaxies appear to be morphologically disturbed and with trails of stripped gas (Gavazzi et al., 1995; van Gorkom, 2004). In some extreme cases 'Jellyfish galaxies' are observed, whose name is evocative of the tentacles of gas trailing the galaxy (Ebeling et al., 2014; Boselli et al., 2016; Sun et al., 2006). Before the complete removal of gas, moderate values of ram pressure can lead to an increase of the star formation rate in the regions close to the impact with the intra-cluster medium (Merluzzi et al., 2013; Vulcani et al., 2018). In fact, the increased pressure helps compress the gas and triggers more star formation (Kapferer et al., 2009). Over time, however, the interstellar medium is fully stripped from the galaxy and star formation ceases (Bekki, 2009).
Arp 25 is a beautiful example of an infalling galaxy in the initial phase of the interaction with the intra-group medium. It resides inside a group of galaxies, the NGC 2300 group, which was the first group where X-ray emitting intra-group medium was observed (Mulchaey et al., 1993). ROSAT observations revealed a surprisingly dense (\(\approx 5.3\times 10^{-4}\) cm\({}^{-3}\)) and extended (\(\approx 0.2\) Mpc) intra-group gas halo. Under several standard assumptions, the total mass inside this region is about \(3\times 10^{13}\) M\({}_{\odot}\). Since the baryonic mass accounts for less than 15% of this total, such a large mass can be explained only by invoking a large quantity of dark matter in this
group. The intra-group gas is hot (\(\approx 0.9\) keV) and relatively metal poor (\(\approx 0.06\) Z\({}_{\odot}\)), revealing very little loss of processed gas from member galaxies (Mulchaey et al., 1993; Davis et al., 1996).
Deeper observations with XMM (Finoguenov et al., 2006) and Chandra (Rasmussen et al., 2006; Wolter et al., 2015) led to the discovery of many X-ray ultra-luminous sources in Arp 25. The data were found to be consistent with intra-group gas being pressurized at the leading edge due to the supersonic motion of the galaxy through the intra-group medium. Although the ram pressure significantly affects the morphology of the outer gas disc, it is probably insufficient to strip large amounts of cold gas from the disc. According to the analysis of Rasmussen et al. (2006), the X-ray data are consistent with a mildly shocked intra-group medium.
As this galaxy is viewed nearly face-on, the deformation of its spiral morphology caused by ram pressure is perfectly observable. H\(\alpha\) observations of Arp 25 (Tomicic et al., 2018) show a front of enhanced star formation on the leading edge and a gradient of star formation in the direction perpendicular to the impact. However, the authors were not able to identify shocks using optical line diagnostics.
Low velocity shocks have been invoked to explain the high [CII] emission detected in studies of compact groups (Appleton et al., 2017; Alatalo et al., 2014) and clusters (Minchin et al., 2022). In fact, an effective way to dissipate the energy of the shock which accumulates in the molecular hydrogen is via emission of the fine-structure [CII] line at 157.7 \(\mu\)m (Lesaffre et al., 2013). This line typically acts as a coolant of the warm molecular and atomic hydrogen excited by the radiation from bright young stars. Since the far-infrared (FIR) continuum is produced by the emission of dust excited by the same stars, an excess of the [CII]/FIR ratio can be used to detect shocks in the molecular hydrogen. In this paper, we show that ram pressure not only triggers star formation along the impacted region but also shocks the interstellar medium of the galaxy in the same regions. This conclusion is based on recent photometric and spectroscopic observations in the far-IR obtained with SOFIA, the Stratospheric Observatory For Infrared Astronomy. These observations, which were performed during the last months of activity of the observatory, show the enormous potential of far-infrared studies to unveil environmental effects on the evolution of galaxies in groups and clusters.
Throughout this paper, we use \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm m}=0.3\), and \(\Omega_{\rm\Lambda}=0.7\). The adopted distance of Arp 25 is discussed in Section 3.1 and reported in Table 1.
## 2 Data and Observations
### SOFIA data
Data for Arp 25 were obtained during cycle 9 as part of two DDT proposals. The first proposal used HAWC+ and was submitted as a flash proposal (P.I. Minchin) and executed in two flights (SOFIA flights 883 and 885) in June 2022. Following the good detection of the galaxy in the three HAWC+ bands (C, D, and E) a second DDT proposal was submitted to observe the galaxy during the
\begin{table}
\begin{tabular}{l c c} \hline \hline \multicolumn{1}{c}{ Quantity} & Value & Reference \\ \hline R.A.(J2000) & 07\({}^{h}\) 27\({}^{m}\) 14.36\({}^{s}\) & \\ Dec (J2000) & +85\({}^{\circ}\) 45’ 16.4” & \\ Luminosity Distance & 28.5 Mpc & this paper \\ Angular Distance & 28.2 Mpc & this paper \\ Scale & 138 pc/arcsec & \\ Inclination & 20\({}^{\circ}\pm\)10\({}^{\circ}\) & Tomicic et al. (2018) \\ v\({}_{sys}\) & 2416\(\pm\)2 km/s & Reid et al. (2019) \\ Type & SAB(rs)c & de Vaucouleurs et al. (1991) \\ \hline \end{tabular}
\end{table}
Table 1: General properties
Figure 1: Coverage of the FIFI-LS observations (white contour) over a composite HST image of Arp 25 obtained from WFC3 images in the 275 nm, 336 nm, 438 nm, 555 nm, and 814 nm bands. Credit: ESA/Hubble & NASA, P. Sell, Acknowledgement: L. Shatz
last observational opportunity with FIFI-LS before the decommissioning of SOFIA. The proposal (P.I. Polles) was executed on flight 907 on August 30, 2022. The FIFI-LS data were obtained at a barometric altitude of 43,000 ft with a low value of zenithal precipitable water vapor (3.6 \(\mu\)m). The contours of the 400 s FIFI-LS integration coverage are shown on top of an HST image of Arp 25 in Fig. 1.
The HAWC+ data from flight 883 (bands C and D) were obtained with the detector at a temperature slightly higher than the standard value. The observations were repeated during flight 885 when the detector was operating in standard conditions. The current paper makes use of the combination of the two observations rescaled to the well calibrated flux values observed during flight 885. The new HAWC+ data are displayed in Figure 2.
### Archival data
To compute the spectral energy distribution (SED) across the galaxy, we made use of several datasets from the ultraviolet to the mid-infrared. The near- and far-ultraviolet maps were obtained by the Deep Imaging Survey with GALEX and were retrieved from the MAST archive (target name PS_NGC4258_MOS23, obs. ID 2606460620865798144, Fadda et al., 2023). At visible bands, we used Pan-STARRS maps in the g, i, r, z, and y filters (Flewelling et al., 2020). For the near-IR bands J, H, and Ks we used 2MASS data retrieved from the IRSA archive (Jarrett et al., 2020). The WISE band 3 image at 11.3\(\mu\)m was obtained from the all-sky survey (WISE Team, 2020), while the Spitzer data were obtained from the Spitzer archive (Spitzer Science Center, 2020).
Several spectral cubes have been used in our analysis of the gas in Arp 25. They come from different surveys which made their data publicly available. H\(\alpha\) data are from the GHASP survey (Epinat et al., 2008) and they have been obtained from the Fabry-Perot database 1 of the Observatoire de Haute Provence. CO data are from the COMING survey 2(Sorai et al., 2019) and were obtained with the Nobeyama 45m telescope. Finally, HI data are from the WHISP survey 3(van der Hulst et al., 2001) obtained at the Westerbork telescope.
Footnote 1: [https://cesam.lam.fr/fabryperot/](https://cesam.lam.fr/fabryperot/)
Footnote 2: [https://astro3.sci.hokudai.ac.jp/](https://astro3.sci.hokudai.ac.jp/)\(\sim\)radio/coming/data/
Footnote 3: www.astron.nl
## 3 Results and discussion
### The NGC 2300 group and the distance of Arp 25
We gathered the velocities of various candidate members of the NGC 2300 group from the literature. Although there is no dedicated spectroscopic study of this group, Diaz-Gimenez et al. (2012) report velocities of four galaxies in this group, while Wolter et al. (2015) considers five of them. A search for possible group members in the literature with distances of less than 0.4 Mpc from the group center identified by the peak of the X-ray emission (Mulchaey et al., 1993), and velocities between 1000 and 3000 km/s, yields a total of 8 members (see Table 2). Arp 25 has the most extreme velocity among these members. It is also a late-type spiral galaxy, which is generally considered to be infalling and moving on a radial orbit (see, e.g., Biviano & Katgert, 2004). The systemic velocity of the NGC 2300 group has been computed with a biweight mean (Beers et al., 1990), yielding a value of 1985 km/s. This corresponds to a luminosity distance of 28.5 Mpc. The histogram of the velocity distribution is shown in Fig. 3 and is obtained using an adaptive kernel estimator (Fadda et al., 1998). The location and dispersion of the adaptive kernel distribution approximately correspond to the values computed using the biweight estimator (\(v=1985\) km/s and \(\sigma_{v}=255\) km/s). The difference is probably due to the skewness of the distribution caused by the high proper velocity of Arp 25.
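As a cross-check of the quoted values, the biweight location and scale can be computed with standard tools. The sketch below is a minimal example using `astropy.stats` with its default tuning constants (which may differ slightly from those of Beers et al. 1990); the velocities are those listed in Table 2.

```python
import numpy as np
from astropy.stats import biweight_location, biweight_scale

# Heliocentric velocities (km/s) of the eight candidate members listed in Table 2
v = np.array([1905.0, 2416.0, 2050.0, 1861.0, 2303.0, 1724.0, 1896.0, 2080.0])

v_sys = biweight_location(v)   # biweight mean; ~1985 km/s quoted in the text
sigma_v = biweight_scale(v)    # biweight dispersion; ~255 km/s quoted in the text

print(f"v_sys = {v_sys:.0f} km/s, sigma_v = {sigma_v:.0f} km/s")
```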
Figure 2: Infrared emission from Arp 25 observed with Spitzer (8 and 24 \(\mu\)m) and SOFIA/HAWC+ in the C, D, and E bands. The white circles in the lower-left corner of each map correspond to the beams of each observation.
Adopting 1985 km/s as the group systemic velocity, Arp 25 would have a line-of-sight velocity relative to the group of 430 km/s. In the worst-case scenario, the impact would be perpendicular to the plane of the disk, as suggested by the symmetric deformation of the spiral morphology. Since the inclination of the galaxy inferred from the rotational velocity is approximately \(20^{\circ}\) (Tomicic et al., 2018), this would correspond to an infall velocity of approximately \(v_{infall}=430/\sin(20^{\circ})\approx 1260\) km/s. This velocity is sufficient to generate enough ram pressure in a low-density medium such as the intra-group medium of the NGC 2300 group to explain the shock seen in the X-ray observations. Rasmussen et al. (2006), on the basis of the observed shock, estimate a velocity of \(860\pm 120\) km/s, but they do not exclude a higher value because of the lack of knowledge about the three-dimensional direction of the motion of the galaxy.
By means of the virial theorem it is possible to estimate the mass of the group from positions and velocities of the galaxy members. We adopt the systemic velocity of \(v_{S}=1985\) km/s and compute the three-dimensional velocity dispersion with the formula:
\[v^{2}=\frac{3}{n}\sum_{i}\frac{(v_{i}-v_{S})^{2}-v_{err}^{2}}{(1-\frac{v_{i}v _{S}}{c^{2}})^{2}}, \tag{1}\]
where \(v_{err}^{2}=\sum\sigma_{v}^{2}\) is the quadratic sum of the errors on the galaxy velocities, \(n\) is the number of galaxies, and \(c\) is the speed of light. The denominator takes into account the relativistic correction, the factor of 3 converts line-of-sight velocities to the three-dimensional distribution, and the subtraction of the error term compensates for the broadening of the velocity distribution due to measurement errors (Danese et al., 1980).
The projected virial radius \(R_{vir}\) is estimated using formula (3) from Carlberg et al. (1996), assuming the center of the diffuse X-ray emission as the center of the group (see Table 2) and equal weights for all the galaxies:
\[R_{vir}^{-1} = \frac{1}{n^{2}}\sum_{i<j}\frac{1}{2\pi}\int_{0}^{\pi}[r_{i}^{2}+ r_{j}^{2}+2r_{i}r_{j}\cos\theta]^{-1/2}d\theta\] \[= \frac{2}{\pi n^{2}}\sum_{i<j}\frac{1}{(r_{i}+r_{j})}\int_{0}^{ \pi/2}[1-m_{ij}\sin^{2}t]^{-1/2}dt\] \[= \frac{2}{\pi n^{2}}\sum_{i<j}\frac{K(m_{ij})}{(r_{i}+r_{j})}.\]
The formula can be expressed as a complete elliptic integral of the first kind \(K(m_{ij})\) with \(m_{ij}=\frac{4r_{i}r_{j}}{(r_{i}+r_{j})^{2}}\), which can be computed with the ellipk function in the Python scipy library (Virtanen et al., 2020). The three-dimensional virial radius, \(r_{vir}\), is obtained from \(R_{vir}\) with a deprojection factor: \(r_{vir}=\pi R_{vir}/2\) (Limber & Mathews, 1960).
The computation of the virial mass:
\[M_{vir}=\frac{r_{vir}v^{2}}{G}, \tag{3}\]
with \(G\), the gravitational constant, yields a value of \((1.8\pm 0.3)\times 10^{13}\) M\({}_{\odot}\), obtained by resampling the data with the bootstrap technique (see, e. g., Efron, 1982). This value is close to that obtained from the X-ray diffuse emission by Mulchaey et al. (1993) and confirms the large amount of dark matter needed to explain the stability of this group.
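A minimal sketch of the virial estimate described by Eqs. (1)–(3) is given below. It follows the equations as written (including the error term of Eq. 1 and the \(\pi/2\) deprojection), uses `scipy.special.ellipk` for \(K(m_{ij})\), and takes velocities, velocity errors, and projected distances from Table 2; it is an illustration of the procedure, not the authors' code.

```python
import numpy as np
from scipy.special import ellipk
from astropy import units as u
from astropy.constants import G, c

def velocity_dispersion_3d(v_kms, v_err_kms, v_sys_kms):
    """Three-dimensional velocity dispersion of Eq. (1), in km/s."""
    v, err = np.asarray(v_kms, float), np.asarray(v_err_kms, float)
    c_kms = c.to_value(u.km / u.s)
    v_err2 = np.sum(err ** 2)                      # quadratic sum of the velocity errors
    num = (v - v_sys_kms) ** 2 - v_err2
    den = (1.0 - v * v_sys_kms / c_kms ** 2) ** 2  # relativistic correction
    return np.sqrt(3.0 / len(v) * np.sum(num / den))

def projected_virial_radius(r_proj):
    """Projected virial radius R_vir of Eq. (2); r_proj are distances from the X-ray centre."""
    r = np.asarray(r_proj, float)
    n = len(r)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            m_ij = 4.0 * r[i] * r[j] / (r[i] + r[j]) ** 2
            s += ellipk(m_ij) / (r[i] + r[j])      # complete elliptic integral of the first kind
    return 1.0 / (2.0 / (np.pi * n ** 2) * s)

def virial_mass(r_proj_mpc, v_kms, v_err_kms, v_sys_kms):
    """Virial mass of Eq. (3), with r_vir = (pi/2) R_vir."""
    v3d = velocity_dispersion_3d(v_kms, v_err_kms, v_sys_kms) * u.km / u.s
    r_vir = np.pi / 2.0 * projected_virial_radius(r_proj_mpc) * u.Mpc
    return (r_vir * v3d ** 2 / G).to(u.Msun)
```

The bootstrap uncertainty quoted in the text amounts to evaluating `virial_mass` on random resamples of the member list.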
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{ Name} & R.A. - Dec (J2000) & v [km/s] & R [arcmin, Mpc] & Src \\ \hline X-ray center & 07:30:39.54 +85:40:59.0 & - & - & 0 \\ NGC 2300 & 07:32:20.49 +85:42:31.9 & 1905 \(\pm\) 7 & 2.4’ [0.02] & 1,5 \\ Arp 25 & 07:27:14.36 +85:45:16.4 & 2416 \(\pm\) 2 & 5.7’ [0.05] & 1 \\ IC 455 & 07:34:57.53 +85:32:13.9 & 2050 \(\pm\) 51 & 10.0’ [0.08] & 1,5 \\ UGC 3670 & 07:20:04.73 +85:35:14.3 & 1861 \(\pm\) 29 & 13.4’ [0.11] & 3 \\ UGC 3654 & 07:17:47.09 +85:42:47.7 & 2303 \(\pm\) 22 & 14.6’ [0.12] & 1 \\ CGCG 362-035 & 07:15:06.84 +85:46:28.4 & 1724 \(\pm\) 30 & 18.2’ [0.15] & 1 \\ CGCG 362-048 & 07:58:12.74 +85:43:00.0 & 1896 \(\pm\) 29 & 31.0’ [0.26] & 3 \\ IC 469 & 07:55:59.08 +85:09:32.1 & 2080 \(\pm\) 39 & 43.6’ [0.36] & 4 \\ \hline \end{tabular} Note. – Source codes for the last column: 0 (Mulchaey et al., 1993), 1 (Huchra et al., 2012), 2 (Springob et al., 2005), 3 (Falco et al., 1999), 4 (de Vaucouleurs et al., 1991), 5 (Afanasiev et al., 2016).
\end{table}
Table 2: NGC 2300 group members
Figure 3: Adaptive kernel histogram of the velocity distribution of the NGC 2300 group. Velocities of single members are indicated with vertical segments. The vertical orange line and the horizontal orange segment correspond to the velocity and dispersion of the group computed with the biweight estimator (1985 and 255 km/s).
### Dust emission maps
Figure 2 shows the emission of the dust in the mid- and far-IR as seen by Spitzer at 8 and 24 \(\mu\)m with IRAC and MIPS, respectively, and by SOFIA/HAWC+ in the C, D, and E bands which correspond to central wavelengths of 89, 155, and 216 \(\mu\)m. In all the maps the nucleus is the brightest peak of emission. Three other peaks are visible in all the maps along the shock front, although the two peaks in the southern part almost merge in the images at longer wavelengths (band D and E). These images confirm the excess of star formation on the side affected by ram pressure seen by Tomicic et al. (2018) with H\(\alpha\) imaging (see also Section 3.6).
### Moment maps
In Figure 4 we compare the intensity, velocity, and velocity dispersion maps of the HI, CO, [CII], and H\(\alpha\) observations of Arp 25. These observations map the main states of the atomic and molecular gas in Arp 25: the neutral atomic gas (HI), the cold molecular gas (CO),
Figure 4: Intensity, velocity, and velocity dispersion maps for the HI, H\(\alpha\), [CII], and CO lines of Arp 25. These lines show the distribution of neutral and ionized atomic hydrogen (HI and H\(\alpha\)) and of the warm and cold molecular hydrogen ([CII] and CO). The beam of each observation is shown as a black circle in the bottom left corner of each intensity map. The images show the lopsided emission from the ionized atomic and warm molecular hydrogen, while the neutral atomic and cold molecular gas emission is more uniform across the galaxy. A gradient in velocity is clearly visible, while a somewhat enhanced velocity dispersion is visible along the shock region.
the warm molecular gas ([CII]) and the ionized atomic gas (H\(\alpha\)). It is immediately evident that the intensity maps of H\(\alpha\) and [CII] are lopsided. The shock front is clearly recognizable. The intensity maps of the CO and HI emission are more uniform, although the HI has an excess of emission along the shock front. In the CO map, the peak of the intensity corresponds to the nucleus and there are no comparable peaks of emission along the shock front. The nucleus has very low emission in the HI map, as is usually the case in star-forming galaxies. The velocity gradient is clearly visible in all the maps. Finally, the velocity dispersion maps show higher values along the shock front and, except for the HI map, on the nucleus of the galaxy.
The difference in emission along the shock front between the [CII] and the CO observations is remarkable. Clearly the source of emission is different in the two cases. The two peaks of [CII] emission along the shock front correspond to the most intense spots in the H\(\alpha\) map. However, as we will see in the following, the ratio between the [CII] emission and the dust emission is much higher than that expected for normal star formation.
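The intensity, velocity, and velocity dispersion maps of Figure 4 are conventionally computed as the zeroth, first, and second moments of the line cubes. The sketch below is a generic illustration of that computation (with a simple flux threshold standing in for a proper noise mask), not the specific pipeline used for each dataset.

```python
import numpy as np

def moment_maps(cube, vel_kms, threshold=0.0):
    """Moment 0/1/2 maps from a spectral cube of shape (n_chan, ny, nx).

    vel_kms gives the velocity of each channel; values below `threshold`
    are zeroed out as a crude substitute for a real noise mask.
    """
    vel_kms = np.asarray(vel_kms, float)
    flux = np.where(cube > threshold, cube, 0.0)
    dv = np.abs(np.gradient(vel_kms))[:, None, None]                # channel widths
    vel = vel_kms[:, None, None]
    m0 = np.sum(flux * dv, axis=0)                                  # integrated intensity
    m0_safe = np.where(m0 > 0, m0, np.nan)
    m1 = np.sum(flux * vel * dv, axis=0) / m0_safe                  # intensity-weighted velocity
    m2 = np.sqrt(np.sum(flux * (vel - m1) ** 2 * dv, axis=0) / m0_safe)  # velocity dispersion
    return m0, m1, m2
```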
### Apertures
To better study the effect of the ram pressure on the [CII] emission, we defined a series of independent apertures centered on the regions with [CII] and/or H\(\alpha\) emission. In particular, we defined an aperture centered on the nucleus of the galaxy, five apertures along the shock front, three apertures in the region between the nucleus and the shock front (which we will call the post-shock region), and another seven apertures in other regions of the galaxy with far-IR emission. The apertures have a diameter of 18 arcsec, which corresponds to the beam of the HAWC+ band E, the band with the poorest spatial resolution used to obtain the spectral energy distribution. The apertures are reported in the top panel of Fig. 5 with four different colors: lime green for the shock region, green for the post-shock region, blue for the nucleus, and orange for the disk regions. The same color code is used in the plots of the following sections. Before performing aperture photometry, we took care of removing any residual background from the optical and near-IR images. The 2MASS images had a gradient in the background. To remove it, we first masked the region with extended emission around Arp 25 (a disk of 3 arcmin diameter) and all the point sources in the field. Then, we computed the median flux for each column of the image and smoothed the obtained background profile with a Chebyshev polynomial. This step allowed us to avoid adding noise when removing the residual background from the image. The archival PanSTARRS stacked images contain several artefacts and have an uneven background. To improve the quality of the stacked images, we downloaded all the single images used to obtain the archival stacks (called "warp" images). After discarding the images with bad seeing or with too many artefacts, we masked a few remaining artefacts, subtracted the residual background from each single exposure, and stacked the selected images scaled to the same photometric zero-point. In this way we obtained cleaner images with a more even background.
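The column-median background subtraction applied to the 2MASS images can be sketched as follows; the mask construction and the polynomial degree are illustrative assumptions rather than the exact choices made for the data.

```python
import numpy as np
from numpy.polynomial import Chebyshev

def subtract_column_background(image, source_mask, degree=5):
    """Remove a smooth background estimated from column medians.

    image: 2D array; source_mask: boolean array, True on the galaxy
    (e.g. a 3-arcmin-diameter disk) and on point sources, which are
    excluded from the background estimate.
    """
    masked = np.where(source_mask, np.nan, image)
    col_median = np.nanmedian(masked, axis=0)             # one background value per column
    x = np.arange(image.shape[1])
    good = np.isfinite(col_median)
    smooth = Chebyshev.fit(x[good], col_median[good], deg=degree)(x)
    return image - smooth[None, :]                        # subtract the smoothed profile from every row
```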
### SED Modeling
Once all photometric data had been smoothed to the same spatial resolution as the HAWC+ E band and the aperture fluxes measured, Spectral Energy Distribution
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ Filter} & Wavelength & Beam & Pixel & \(\sigma_{\rm cal}\) & Ext & Refs \\ Name & \(\mu\)m & arcsec & arcsec & mag & mag & \\ \hline GALEX\_FUV & 0.1516 & 4.2 & 1.5 & 0.05 & 0.70 & 1 \\ GALEX\_NUV & 0.2267 & 5.3 & 1.5 & 0.03 & 0.79 & 1 \\ PANSTARRS g & 0.4866 & 1.3 & 0.258 & 0.020 & 0.30 & 2 \\ PANSTARRS r & 0.6215 & 1.2 & 0.258 & 0.016 & 0.23 & 2 \\ PANSTARRS i & 0.7475 & 1.1 & 0.258 & 0.017 & 0.18 & 2 \\ PANSTARRS z & 0.8679 & 1.1 & 0.258 & 0.018 & 0.13 & 2 \\ PANSTARRS y & 0.9633 & 1.0 & 0.258 & 0.022 & 0.11 & 2 \\
2MASS\_J & 1.235 & 2.9 & 2.0 & 0.03 & 0.08 & 3 \\
2MASS\_H & 1.662 & 2.8 & 2.0 & 0.03 & 0.05 & 3 \\
2MASS\_Ks & 2.159 & 2.9 & 2.0 & 0.03 & 0.03 & 3 \\ IRAC\_1 & 3.550 & 1.66 & 1.2 & 1.8\% & 4 \\ IRAC\_2 & 4.490 & 1.72 & 1.2 & 1.9\% & 4 \\ IRAC\_3 & 5.730 & 1.88 & 1.2 & 2.0\% & 4 \\ IRAC\_4 & 7.870 & 1.98 & 1.2 & 2.1\% & 4 \\ WISE\_3 & 12.08 & 6.5 & 2.75 & 4.5\% & 5 \\ MIPS\_24 & 23.70 & 4.9 & 2.5 & 4.0\% & 6 \\ HAWC+ C & 89 & 7.8 & 4.0 & 10\% & 7 \\ HAWC+ D & 154 & 13.6 & 6.9 & 10\% & 7 \\ HAWC+ E & 214 & 18.2 & 9.4 & 10\% & 7 \\ \hline \end{tabular} Note. – Beam sizes for SDSS and 2MASS are median seeing values. References: (1) Morrissey et al. (2007), (2) Tonry et al. (2012); Magnier et al. (2020), (3) Skrutskie et al. (2006), (4) Reach et al. (2005), (5) Jarrett et al. (2011), (6) Engelbracht et al. (2007), (7) Harper et al. (2018). Extinction maps are computed according to Cardelli et al. (1989) and the \(A_{v}=0.2663\) value from Schlafly & Finkbeiner (2011).
\end{table}
Table 3: Bands used for SED Fitting
(SED) fitting was performed using the Code Investigating GALaxy Evolution (CIGALE; Noll et al., 2009; Boquien et al., 2019). CIGALE was chosen over other SED modeling tools (e.g., MAGPHYS; da Cunha et al., 2008) because of the ease of adding filter profiles, namely the HAWC+ bands which are not supported in the latest version of MAGPHYS. It should be noted that there has been extensive work comparing CIGALE and other SED modeling tools, yielding no significant differences (Hunt et al., 2019). CIGALE models the SED by assuming that the energy absorbed by dust from the UV to the near-infrared is balanced by the energy emitted by dust in the mid and far-infrared. The full suite of photometric observations used to determine the SED fits is described in Section 2 and listed in Table 3 along with the calibration uncertainties used to estimate the errors. Fits were determined using the Bruzual & Charlot (2003) stellar population and the Draine et al. (2014) dust models. A complete list of the parameters and modules used in the SED models can be found in Table 4. SED fits were performed on the apertures discussed in Section 3.4. Errors were determined using the sum in quadrature of the variation of the sky brightness and the calibration uncertainty for each photometric detector listed in Table 3. The best model SEDs are plotted in the bottom panel of Figure 5 as black lines overlaid on the photometric data. The colors of the points refer to different instruments: GALEX in purple, Pan-STARRS in cyan, 2MASS in orange, IRAC in yellow, WISE in pink, MIPS in red, and HAWC+ in brown. These SED models allow for estimations of the dust properties and attenuation rates in different environments in Arp 25.
### Ram pressure impact on star formation
In Figure 6 we plot the surface density of star formation (\(\Sigma_{\rm SFR}\)) versus the surface density of molecular hydrogen (\(\Sigma_{H_{2}}\)). This plot is commonly referred to as a Kennicutt-Schmidt plot, and the proportional trend between \(\Sigma_{\rm SFR}\) and \(\Sigma_{H_{2}}\) demonstrates how gas in the ISM is the fuel for ongoing star formation (Kennicutt & Evans, 2012).
For the SFR values we used the CIGALE estimates, which take into account the entire SED from the far-IR to the far-UV. The H\({}_{2}\) gas mass is determined by converting the CO luminosity inside each region with Eq. 3 from Bolatto et al. (2013). Both the H\({}_{2}\) gas mass and the SFR are converted to surface densities by dividing by the deprojected area of the aperture used in the measurement. For comparison, we plot the relationship from Bigiel et al. (2008), which investigated the Kennicutt-Schmidt relationship at sub-kpc scales for several star
Figure 5: _Top_: Circular apertures selected for the study of the different parts of the galaxy overlapped on the 8\(\mu\)m (left) and [CII] images (right). Colors are as in Fig. 6. _Bottom_: SED for the apertures, multiplied by a factor \(\Delta=100^{16-i}\) where \(i\) is the identification number of the aperture shown on the right side of the plot.
forming galaxies as a blue line, with the dispersion represented using blue shading, as well as the relationship for regions of the Milky Way analog NGC 7331 (Sutter and Fadda, 2022). Figure 6 clearly shows that the star formation activity is in general high in all the regions analyzed. Along the shock front, where ram pressure is triggering further star formation, the values are substantially higher than those found in normal galaxies. We can also notice in Figure 5 that the stellar component in the optical side of the SED of the regions along the shock front is flatter than those of the disk regions. This confirms the formation of a younger stellar population triggered by the shock. Although the regions along the shock front are outliers in the Kennicutt-Schmidt diagram, the relationship between star-formation rates and gas surface densities can be linearized by normalizing the gas surface density with the freefall time of the gas. In particular, Salim et al. (2015) showed that it is important to consider density dependent timescales which take into account the clumpy nature of the clouds. The relationship between a single freefall timescale at a mean density and a "multi freefall" timescale is a function of density variance of the clouds which can be parameterized by the sonic Mach number \(\mathcal{M}\), the turbulent driving parameter \(b\), and the thermal to magnetic pressure ratio \(\beta\). By using the same approximations as Salim et al. (2015), i.e. \(b=0.4\) and \(\beta\rightarrow\infty\), and assuming a constant ratio between \(\Sigma_{gas}\) and \(\Sigma_{H_{2}}\), we can use the logarithmic distance along the y-axis of the points in Fig. 6 from the linear relationship, \(\Delta\), to roughly estimate the Mach number of the clouds in different regions of the galaxy:
\[\mathcal{M}\approx\frac{\sqrt{e^{\frac{8}{3}\Delta}-1}}{0.4} \tag{4}\]
Most of the points have \(\Delta=0.5-0.6\) which corresponds to a Mach number range of 4-5, while the most shocked regions have \(\Delta=0.8-1.2\) corresponding to a Mach number range of 7-12. Such estimates are compatible with those reported in Table 3 of Salim et al. (2015) for local disk and starburst galaxies, respectively.
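For reference, Eq. (4) can be evaluated directly; the snippet below reproduces the quoted ranges, with \(\Delta\) the logarithmic offset measured in Fig. 6 and \(b=0.4\) as assumed in the text.

```python
import numpy as np

def mach_from_offset(delta, b=0.4):
    """Sonic Mach number from the offset Delta of Eq. (4)."""
    return np.sqrt(np.exp(8.0 / 3.0 * np.asarray(delta)) - 1.0) / b

print(mach_from_offset([0.5, 0.6, 0.8, 1.2]))  # ~[4.2, 5.0, 6.8, 12.1]
```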
### [CII] and dust
Most of the [CII] emission originates in the photo-dissociation region (PDR, see e.g., Croxall et al., 2012) where the singly ionized carbon acts as the main coolant for the neutral gas heated by the radiation of young, bright stars. If the PDRs are in thermal equilibrium, the [CII] emission should therefore trace the star formation and be proportional to tracers of heating, such as the amount of UV attenuation, the far-infrared luminosity emitted by dust, and the intensity of the emission of polycyclic aromatic hydrocarbons (PAHs). If some region has an anomalous ratio between the [CII] emission and one of these quantities, we can deduce that some other mechanism contributes to the [CII] emission. The [CII]/UV attenuation, [CII]/FIR, and [CII]/PAH ratios have been also proposed as tracers of the photo
\begin{table}
\begin{tabular}{l c} \hline \hline \multicolumn{1}{c}{ Parameter} & Input values \\ \hline \multicolumn{2}{c}{sfhdelayed} \\ tau\_main [Gyr] & [0.5, 10], \(\delta=0.25\) \\ age\_main [Gyr] & 11 \\ tau\_burst [Gyr] & 0.05 \\ age\_burst [Gyr] & 0.02 \\ f\_burst & 0 \\ SFR\_A [M\({}_{\odot}\)/yr] & 1 \\ \hline \multicolumn{2}{c}{bc03} \\ imf & 1 (Chabrier) \\ metallicity [solar] & 0.02 \\ separation\_age [Gyr] & 0.01 \\ \hline \multicolumn{2}{c}{nebular} \\ logU & -3 \\ f\_esc & 0 \\ f\_dust & 0 \\ lines\_width [km s\({}^{-1}\)] & 300 \\ \hline \multicolumn{2}{c}{dustatt\_modified\_starburst} \\ E\_BV\_nebular [mag] & [0,1.0], \(\delta=0.1\) \\ E\_BV\_factor & 0.44 \\ uv\_bump\_wavelength [nm] & 217.5 \\ uv\_bump\_width [nm] & 35 \\ uv\_bump\_amplitude & 0, 1.5, 3 (Milky Way) \\ powerlaw\_slope & [-0.5, 0.0], \(\delta=0.1\) \\ Ext\_law\_emission\_lines & 1 (Milky Way) \\ Rv & 3.1 \\ filters & B\_B90, V\_B90, FUV \\ \hline \multicolumn{2}{c}{dl2014} \\ qpah & 0.47, 2.50, 4.58, 6.63 \\ umin & 0.1, 0.25, 0.5, 1, \\ & 2.5, 5, 10, 25 \\ alpha & 2 \\ gamma & 0.001, 0.002, 0.004, \\ & 0.008, 0.016, 0.032, \\ & 0.064, 0.125, 0.25, 0.5 \\ \hline \multicolumn{2}{c}{fritz2006} \\ fracAGN & 0.0 \\ \hline \multicolumn{2}{c}{restframe\_parameters} \\ beta\_calz94 & False \\ D4000 & False \\ IRX & False \\ EW\_lines & 500.7/1.0 \& 656.3/1.0 \\ luminosity\_filters & FUV \& V\_B90 \\ colours\_filters & FUV-NUV \& NUV-\(r\) \\ \hline \multicolumn{2}{c}{redshift} \\ redshift & 0 \\ \hline \end{tabular}
\end{table}
Table 4: Parameter values for CIGALE modules
electric heating efficiency (Kapala et al., 2017; Croxall et al., 2012). The comparison with other normal star forming galaxies can inform us about the peculiar conditions in Arp 25.
#### 3.7.1 Photo-electric efficiency
The ratio between absorbed UV radiation and emitted [CII] defines the so-called photo-electric efficiency. We estimate the UV attenuation through the fit of SEDs with the CIGALE code. The slope of the relationship between the UV attenuation and the [CII] surface luminosity is the photo-electric efficiency.
Figure 7 shows the relationship for two sets of reference galaxies: NGC 7331 from Sutter and Fadda (2022) and M 31 from Kapala et al. (2015). We fitted a linear relationship considering the NGC 7331 and M 31 data and assuming that the line passes through the origin. The slope of the fit is \(0.96\pm 0.03\)%. On each side of the relationship, we computed the dispersion of the residuals to define the region including most of the points. The 3-\(\sigma\) region defined in this way is shaded in blue in Fig. 7. While the nucleus and the disk regions follow the relationship very well and fall completely within the blue shaded region, the shock and post-shock regions show an excess of [CII] emission. In particular, the shock regions lie at 3 \(\sigma\) or more above the relationship. We interpret this as evidence of the non-stellar origin of the excess of [CII] emission in the shock regions.
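A minimal sketch of this fit is shown below: a straight line through the origin, whose slope is the photo-electric efficiency, with the positive- and negative-residual dispersions defining the shaded band. The array names are placeholders for the reference (NGC 7331 and M 31) and Arp 25 surface brightnesses.

```python
import numpy as np

def fit_through_origin(sigma_uv_att, sigma_cii):
    """Slope of Sigma_[CII] = eff * Sigma_UV,att plus one-sided residual dispersions."""
    x, y = np.asarray(sigma_uv_att, float), np.asarray(sigma_cii, float)
    eff = np.sum(x * y) / np.sum(x ** 2)        # ~0.96% for the reference galaxies in the text
    resid = y - eff * x
    sig_up = np.std(resid[resid > 0])           # dispersion of the positive residuals
    sig_lo = np.std(resid[resid < 0])           # dispersion of the negative residuals
    return eff, sig_up, sig_lo

# Regions lying more than 3*sig_up above the line show a [CII] excess, e.g.:
# eff, sig_up, _ = fit_through_origin(uv_ref, cii_ref)          # reference sample
# cii_excess = cii_arp25 > eff * uv_arp25 + 3.0 * sig_up        # shock/post-shock apertures
```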
#### 3.7.2 [CII] and dust continuum
The dust emission accounts for the peak in the far-infrared of the spectral energy distribution of the galaxy. The typical estimate of this energy is the integrated FIR flux between 8 and 1000\(\mu\)m, also called total infrared flux. Figure 8 shows the relationship between the [CII]/FIR ratio and the FIR surface brightness. The FIR surface brightness was determined by dividing the FIR luminosity by the deprojected area of each region. To compare the regions within Arp 25 to previous studies of the [CII]/FIR relationship, we also plot data from resolved regions across the disk of the nearby star-forming galaxy NGC 7331 (dark gray points, Sutter and Fadda 2022), resolved star-forming regions from the “Key Insights in Nearby Galaxies: a Far-Infrared Survey with _Herschel_” (KINGFISH, light gray triangles, Sutter et al. 2019), global measurements of \(z\sim 0.02-0.2\) galaxies (brown points, Ibar et al. 2015), and global measurements from local luminous infrared galaxies (LIRGs) from the Great Observatories All-Sky LIRG Survey (GOALS, dark blue points, Diaz-Santos et al. 2017). To match these measurements in a uniform way, we deprojected the infrared surface brightness measurements reported in Ibar et al. (2015) and Diaz-Santos et al. (2017) by dividing by \(\cos i\), where \(i\) is the galaxy's inclination. For the sources included in Ibar et al. (2015), the inclinations were determined by fitting
Figure 6: Kennicutt-Schmidt diagram for the different regions in Arp 25. The values from NGC 7331, a Milky-Way analog from Sutter and Fadda (2022), and the band defined in Bigiel et al. (2008) for sub-kpc regions in galaxies are plotted for comparison. The cross shows the typical error bars for the measured values of Arp 25.
Figure 7: Surface brightness of the [CII] line plotted as a function of the attenuated UV light determined using the CIGALE SED fits. Arp 25 data are color–coded based on their regions. Comparison data from the nearby galaxy NGC 7331 and M 31 are shown as grey and purple symbols, respectively. A linear fit of the relationship using the NGC 7331 and M 31 and the 3-\(\sigma\) region are shown shaded in light blue.
ellipses to the Pan-STARRS \(r\)-band images of each galaxy. For the galaxies in the GOALS sample, the inclinations were taken from Kim et al. (2013). With these updated \(\Sigma_{FIR}\) measurements, we see a linear trend between [CII]/FIR and \(\Sigma_{FIR}\) across the three orders of magnitude in \(\Sigma_{FIR}\) spanned by our comparison samples. The locus occupied by most of the galaxies of the comparison sample is highlighted in light blue. Of the regions defined in Arp 25, the nucleus and the disk regions fall into the blue locus. The regions along the shock front, and also those immediately after it, the post-shock regions, fall outside the relationship. We also notice that the values of the disk regions, although falling into the blue locus, have a rather high ratio. This is probably due to the high rate of star formation in this galaxy, revealed also by optical observations (Tomicic et al., 2018).
We can estimate the excess of [CII] emission by fitting a linear relationship using the disk and nuclear regions, and computing the expected [CII] emission in the other regions based on their far-infrared surface brightness. Along the leading edge of the galaxy we find a 60% excess in [CII] emission, while globally the excess amounts to 25%. This excess is probably due to the turbulence in the interstellar medium caused by the mechanical dissipation of the shocks due to the impact with the intra-group medium.
### [CII] and PAH emission
In the standard model of [CII] emission, polycyclic aromatic hydrocarbons (PAHs) and dust grains irradiated by the UV light from young stars emit electrons through the photo-electric effect. The collisions of these free electrons with molecules of hydrogen heat the
Figure 8: The [CII]/FIR ratio plotted as a function of far-IR surface brightness (\(\Sigma_{FIR}\)). The points are color coded according to their region. The comparison data are from NGC 7331 (Sutter and Fadda, 2022), star–forming regions in local galaxies from galaxies in the KINGFISH survey (Sutter et al., 2019), \(z\sim 0.02-0.2\) galaxies from Ibar et al. (2015), local U/LIRGS from the GOALS survey (Diaz-Santos et al., 2017). The central grey line corresponds to the linear fit of the comparison points, while the lines limiting the blue shaded region correspond to 5 times the dispersion of the positive and negative residuals. Such a region contains points where the [CII] emission is powered by star formation. Points higher than the upper line have an excess of [CII] emission.
regions of the molecular clouds closest to these stars, called photo-dissociation regions (PDRs). Because the hydrogen molecule radiates energy inefficiently, owing to its lack of a dipole moment, thermal equilibrium is reached thanks to the cooling provided by fine-structure lines, predominantly that of singly ionized carbon. Since PAHs provide most of the free electrons in the PDRs, their emission is a more direct indicator of the photo-electric efficiency in PDRs than the dust continuum emission. In this scenario, we expect the relationship between PAH and [CII] emission to be much more stable than that between [CII] and FIR (Croxall et al., 2012).
Since there are no spectral mid-infrared observations of Arp 25, we used estimates of the 7.7 \(\mu\)m and 11.3 \(\mu\)m PAH features based on the IRAC 4 and WISE 3 band photometry. These estimates are obtained by removing the contributions of stars, large dust grains, and AGN estimated using the SED fits produced by CIGALE. The modeled flux from each of these components is summed, convolved with the transmission function for the specified band, and then subtracted from the observed flux. The remaining emission is then assumed to be only the emission from the 7.7 and 11.3 \(\mu\)m PAH emission features.
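The sketch below illustrates that subtraction for a single band: the modeled stellar, dust-continuum, and AGN spectra are synthesized through the band transmission curve and removed from the observed band flux, leaving the PAH contribution. The band-averaging convention (a simple transmission-weighted mean) and the array names are assumptions for illustration.

```python
import numpy as np

def band_average(wave, flux, filt_wave, filt_trans):
    """Transmission-weighted mean flux of a model spectrum over a filter curve."""
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    return np.trapz(flux * trans, wave) / np.trapz(trans, wave)

def pah_band_flux(observed_band_flux, wave, f_star, f_dust_cont, f_agn,
                  filt_wave, filt_trans):
    """Residual band flux attributed to the PAH features (IRAC 4 or WISE 3)."""
    non_pah = band_average(wave, f_star + f_dust_cont + f_agn, filt_wave, filt_trans)
    return observed_band_flux - non_pah
```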
Figure 9 shows the ratio of the [CII] emission to the sum of the two PAH features versus the ratio of the 7.7\(\mu\)m and 11.3\(\mu\)m PAH features, an indicator of average PAH charge (Draine et al., 2021). The figure includes data from three other normal star-forming galaxies as a comparison: NGC 7331 from Sutter and Fadda (2022), NGC 4559 and NGC 1097 from Croxall et al. (2012). The figure also includes the histograms of the [CII]/PAH and of the PAH ratio for the comparison sample (grey histogram) and for Arp 25 (yellow histogram). We can see that the inner and nuclear regions have the same distribution in [CII]/PAH as the bulk of the comparison galaxies (left vertical panel). The post-shock regions are higher than the rest, but the shock regions have an exceptionally high [CII]/PAH ratio (\(\geq 8\%\)). It is interesting to note that this difference is not due to a change in the radiation field. In fact, the PAH ratio is substantially the same for the different regions of Arp 25. We regard this as further evidence of the peculiarity of the regions on the shock front, hinting at the non-stellar origin of part of the [CII] emission.
### [CII] and CO
Normal star-forming galaxies where the CO and [CII] emission are powered by star formation show a correlation between these two quantities. As shown in Figure 10, nearby spiral galaxies included in Hughes et al. (2017) and Gullberg et al. (2015), as well as regions from the Milky-Way analog NGC 7331 (Sutter and Fadda, 2022), fill a locus in the [CII]/FIR vs \({}^{12}\)CO\({}_{1\to 0}\)/FIR plane which can be described with PDR models (Kaufman et al., 2006). A grid of predicted \(G_{0}\), the FUV radiation field in Habing units (typical energy density at the solar circle averaged between 6 eV \(\leq\) h\(\nu\)\(\leq\) 13.6 eV, i.e. 91.2-240 nm, which corresponds to \(1.6\times 10^{-3}\) erg cm\({}^{-2}\) s\({}^{-1}\), Habing, 1968), and \(n\), the gas density in cm\({}^{-3}\), computed with the PDR toolkit (Pound and Wolfire, 2022), is shown overlaid on the galaxy values.
Galaxy regions or galaxies falling out of this grid are usually either low-metallicity dwarf galaxies or CO-dark regions (Madden et al., 2020). Another possibility is that the [CII] emission is boosted by an alternative mechanism, such as shocks or turbulence. Lesaffre et al. (2013) showed that even quite low-velocity shocks, passing through a mildly UV-irradiated diffuse (\(10^{2}\)-\(10^{3}\) cm\({}^{-3}\)) molecular medium, can produce strong [CII] emission, comparable to other powerful ISM coolants, like mid-IR H\({}_{2}\) emission. Models of this sort were used to explain the powerful H\({}_{2}\), [CII] and H\({}_{2}\)O emission detected by _Spitzer_ and _Herschel_ in the shocked filament in Stephan's Quintet (Appleton et al., 2017) and in the Hickson compact group 57 (Alatalo et al., 2014).
In order to compare our measurements to PDR models, we multiplied the [CII] fluxes by a factor of 0.75,
Figure 9: The ratio of the 7.7\(\mu\)m and 11.3\(\mu\)m PAH feature fluxes versus [CII] to PAH ratio. The PAH\({}_{7.7\mu m}\)/PAH\({}_{11.3\mu m}\) ratio is indicative of the average charge of the PAHs. The histograms on the top and side show the distribution of the comparison samples shaded in gray and the data from Arp 25 outlined in yellow.
a typical value of the neutral fraction of [CII] emission, i.e. the part of the emission which is due to non ionized hydrogen (see, e.g., Sutter and Fadda, 2022). This allows us to estimate the fraction of the [CII] emission that originates in PDRs. In addition, we increased the \({}^{12}\)CO\({}_{1\to 0}\) fluxes by a factor 2 to account for the likelihood that the \({}^{12}\)CO\({}_{1\to 0}\) line will become optically thick in dense star-forming regions (Hughes et al., 2017).
The emission from the apertures in Arp 25 shows that the radiation field is intense, much more so than in the Milky-Way analog NGC 7331, indicated with red dots. This can account for the high rate of star formation detected with H\(\alpha\) images (Tomicic et al., 2018). But the remarkable result of this comparison is that the apertures along the shock front stand clearly outside the region powered by star formation, showing that most of their [CII] emission is due to an alternative mechanism. Even the regions immediately after the shock lie on the border of the relationship, showing that the effects of the shocks probably propagate into the galaxy, although this may simply be a contamination effect due to the coarse spatial resolution of FIFI-LS.
## 4 Summary and Conclusions
We presented new SOFIA observations of the galaxy Arp 25 whose shape is strongly deformed by ram pressure due to its fast motion through the diffuse medium in the NGC 2300 group. We obtained far-infrared images and spectra with the HAWC+ and FIFI-LS instruments. Flux measurements and other quantities derived in the article are reported in Table 5. We can summarize the main results of this work in the following points:
* we gathered a total of 8 galaxies in the NGC 2300 group obtaining a new estimate of the distance of
Figure 10: The \({}^{12}\)CO\({}_{1\to 0}\)/FIR values plotted against the [CII]/FIR values of the apertures in Arp 25 over a grid of PDR models of Kaufman et al. (2006) (available in the PDR toolbox, see Pound and Wolfire, 2022). The region populated by normal galaxies is shaded in blue. For comparison, we plotted data from regions of the Milky-Way analog NGC 7331 (Sutter and Fadda, 2022) and normal star–forming galaxies from Hughes et al. (2017) and Gullberg et al. (2015).
Arp 25 and a virial estimate of the group mass which agrees with previous X-ray studies;
* we studied the star formation as a function of the molecular hydrogen mass finding that the star formation is high across the whole galaxy, but it is especially high along the region impacted by the collision with the intra-group medium;
* we compared the [CII] emission in different regions of the galaxy to other estimators of photo-electric efficiency such as UV attenuation, dust emission, and PAH emission. We find that the regions along the front of impact with the intra-group medium have a [CII] emission higher than what is expected only from stellar radiation;
* the distribution of CO does not show peaks in the impact region as the [CII] intensity does. The comparison of the two emissions against a grid of PDR models shows that the emission from the regions in the impact front cannot be explained with PDR models.
We conclude that the impact with the intra-group medium enhances the star formation rate especially along the shock front. However, the enhancement in star formation is not sufficient to explain the high values of [CII] emission detected in the region of the impact. Such a high [CII] emission can be explained as a dissipation of the mechanical energy transferred to the molecular gas by shocks. By assuming a linear relationship between the [CII]/FIR ratio and the FIR surface brightness based on the internal regions of the galaxy, we infer that the [CII] emission is boosted by 60% along the shock front. This leads to a 25% increase in the [CII] emission from the whole galaxy. This observation is the first direct measurement of the enhancement of [CII] emission due to shocks caused by ram pressure in a galaxy group. It clearly shows that the interaction between infalling galaxies and diffuse medium in groups and clusters can significantly alter the total [CII] emission. Since [CII] observations are now routinely used to estimate star formation rates at high redshifts, this study cautions against a direct interpretation of high [CII] fluxes as high star formation rates in clusters of galaxies.
The authors thank S. Shenoy and S. Eftekharzadeh for assistance with the HAWC+ data and the anonymous referee for useful comments and suggestions. This research is based on data and software from: the SOFIA Observatory, operated by USRA (NASA contract NNA17BF53C) and DSI (DLR contract 500K0901 to the Stuttgart Univ.); the Spitzer Space Telescope, operated by JPL/Caltech under a contract with NASA; WISE, a UCLA-JPL/Caltech project funded by NASA; 2MASS, a NASA/NSF funded project of the Univ. of Massachusetts and IPAC/Caltech; Pan-STARRS1, a survey funded by IfA, Univ. of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society (MPA Heidelberg and MPE Garching), the Johns Hopkins Univ., the Durham Univ., the Univ. of Edinburgh, the Queen's Univ. Belfast, the Harvard-Smithsonian CfA, the LCO Global Tel. Net. Inc., NCU of Taiwan, STScI, NASA grant NNX08AR22G, NSF grant AST-1238877, the Univ. of Maryland, the Eotvos Lorand Univ., the Los Alamos Nat. Lab., and the Moore Foundation; GALEX, a NASA small explorer, whose archive is hosted by HEASARC; the Fabry Perot database at CeSAM/LAM, Marseille, France; the COMING legacy project of the Nobeyama 45m radiotelescope; the WSRT archive operated by the Netherlands Inst. for Radio Astronomy ASTRON, with support of NWO.
Spitzer (MIPS, IRAC), WISE, Nobeyama, WSRT, Obs. de Haute Provence, GALEX, Pan-STARRS, 2MASS, SOFIA (HAWC+, FIFI-LS) astropy (Astropy Collaboration et al., 2013, 2018), scipy (Virtanen et al., 2020), _sospex_(www.github.com/darioflute/sospex, Fadda & Chambers, 2018), CIGALE ( [https://cigale.lam.fr/](https://cigale.lam.fr/), Boquien et al., 2019)
|
2310.01651
|
Fool Your (Vision and) Language Model With Embarrassingly Simple
Permutations
|
Large language and vision-language models are rapidly being deployed in
practice thanks to their impressive capabilities in instruction following,
in-context learning, and so on. This raises an urgent need to carefully analyse
their robustness so that stakeholders can understand if and when such models
are trustworthy enough to be relied upon in any given application. In this
paper, we highlight a specific vulnerability in popular models, namely
permutation sensitivity in multiple-choice question answering (MCQA).
Specifically, we show empirically that popular models are vulnerable to
adversarial permutation in answer sets for multiple-choice prompting, which is
surprising as models should ideally be as invariant to prompt permutation as
humans are. These vulnerabilities persist across various model sizes, and exist
in very recent language and vision-language models. Code is available at
https://github.com/ys-zong/FoolyourVLLMs.
|
Yongshuo Zong, Tingyang Yu, Ruchika Chavhan, Bingchen Zhao, Timothy Hospedales
|
2023-10-02T21:27:57Z
|
http://arxiv.org/abs/2310.01651v3
|
# Fool Your (Vision and) Language Model with Embarrassingly Simple Permutations
###### Abstract
Large language and vision-language models are rapidly being deployed in practice thanks to their impressive capabilities in instruction following, in-context learning, and so on. This raises an urgent need to carefully analyse their robustness so that stakeholders can understand if and when such models are trustworthy enough to be relied upon in any given application. In this paper, we highlight a specific vulnerability in popular models, namely permutation sensitivity in multiple-choice question answering (MCQA). Specifically, we show empirically that popular models are vulnerable to adversarial permutation in answer sets for multiple-choice prompting, which is surprising as models should ideally be as invariant to prompt permutation as humans are. These vulnerabilities persist across various model sizes, and exist in very recent language and vision-language models. Code is available at [https://github.com/ys-zong/FoolyourVLLMs](https://github.com/ys-zong/FoolyourVLLMs).
## 1 Introduction
Large language models (LLMs) (Brown et al., 2020; OpenAI, 2023a; Touvron et al., 2023a) and large vision-language models (VLLMs) (Alayrac et al., 2022; Li et al., 2023c) have made astonishing progress in recent years. They have attained strong capabilities across a diverse array of language tasks, enabling nuanced text generation, sophisticated instruction following, and natural dialogue with multimodal input and output. One task where they demonstrate particular prowess is multiple-choice question answering (MCQA) (Robinson and Wingate, 2023). This is an important capability with many real-world applications, from education to recruitment exams. Current LLMs and VLLMs have widely utilized the task format of MCQA for benchmarking and evaluation (Hendrycks et al., 2020; Lu et al., 2022; Zhong et al., 2023; Liang et al., 2022; Schwenk et al., 2022). This has built confidence that they can generate accurate and robust answers, underpinned claims of LLM competence at professional level human qualifications such as the bar exam (OpenAI, 2023b), and even led to reports of surpassing human-level performance on various tasks.
Contrary to the confidence instilled by high-performance metrics on established benchmarks, these models are surprisingly brittle when subjected to simple permutations of the answer choices, i.e., randomly changing the option positions. In this paper, we show that even a simple permutation of the answer sets, as illustrated in Figure 1, can lead to a dramatic decline in accuracy for both LLMs and VLLMs on a wide range of MCQA datasets, sometimes even below random chance levels. For instance, Llama2-13B (Touvron et al., 2023a) suffers a 33.89% degradation in accuracy on the MMLU dataset (Hendrycks et al., 2020) following random permutation of option positions, with results falling below random chance. A wide variety of popular LLMs and VLLMs suffer significantly from this vulnerability, as summarised in Figure 2.
Furthermore, our investigations reveal an even more disconcerting aspect: the vulnerability to permutations persists in LLMs and VLLMs even when multiple distractor options are deliberately removed from the answer sets. Intuitively, one expects that by eliminating incorrect choices, the task should become simpler due to increasing chance performance,
thereby enhancing the models' performance. However, our empirical findings contradict this notion. Even with a reduced number of distractors, the performance of both LLMs and VLLMs remains susceptible to degradation, affirming the deeply ingrained nature of this vulnerability.
To further investigate the source of the brittleness, we demonstrate through our adversarial attack that it is not merely a selection bias towards/against certain positions, such as moving correct answers to a fixed position that a given model is biased against picking. While positional factors may moderately influence model performance, they do not explain the strength of our adversarial attack results, suggesting a more systemic issue that extends beyond simple position bias.
This issue should be of intrinsic concern to those seeking to understand and design trustworthy and reliable LLMs and VLLMs, or to emulate human capabilities. One might speculate that the issue could be mitigated in practice through the engineering solution of majority voting across different permutations, or by employing calibration strategies as suggested in previous work (Zhao et al., 2021). However, our findings indicate that while majority voting may offer some degree of improvement, the resulting performance still lags behind the original metrics, despite incurring a \(k!\times\) increase over the original inference cost. Additionally, calibration techniques such as calibrate-before-use (Zhao et al., 2021) fail to alleviate this problem effectively.
In summary, our research unveils a glaring yet often overlooked vulnerability in large language models and vision-language models, specifically within the domain of multiple-choice question answering (MCQA). Despite their impressive metrics on well-established benchmarks, these models reveal a disconcerting fragility when faced with
Figure 1: Schematic Illustration of a MCQA permutation attack.
Figure 2: Summary of MCQA adversarial attack results for both LLMs and VLLMs. The values are average accuracy across all benchmarking datasets.
simple manipulations such as option permutations. Existing mitigation strategies fall short of effectively resolving this issue. Our observations not only raise pivotal questions about the models' robustness but also accentuate the necessity for heightened scrutiny in assessing their MCQA capabilities. We argue that stakeholders should be vigilant in relying on such models until these vulnerabilities are adequately addressed.
## 2 Simple Adversarial Attack Breaks LLMs and VLLMs
In this section, we analyse the brittleness of a broad array of large language models and vision-language models to random adversarial attacks in MCQA. By simply shuffling answer choices, we find that these models fail to maintain their performance, revealing a critical vulnerability.
### Experiment Setup
In an ideal scenario, robust models should offer consistent predictions that are _invariant_ to permutations that have no semantic influence on the question being posed. To test this, we simply iterate through the possible permutations of MCQ options. A robust model should be correct in every case. While there are \(k!\) possible combinations in total, we stop permuting once the model produces an incorrect prediction (succumbs to the permutation attack), which usually requires far fewer than \(k!\) attempts.\({}^{1}\)
Footnote 1: Since typical MCQA benchmarks use \(k=4\), the brute force algorithm is cheaper than a gradient-based solution. But gradient-based solutions could be used if the attack needs to scale to substantially larger \(k\).
Formally, given a question \(q\) and an answer list \(A=\{a_{1},a_{2},\ldots,a_{k}\}\), the permutation adversarial attack can be described by Equation 1. We maximize the loss function (\(\mathcal{L}\)) over all possible permutations (\(\Pi\)) of the answer list. Here, \(\text{prompt}(q,A)\) prompts the model with the given query and answer list, and the model's response is then evaluated by the loss.
\[\begin{array}{rl}\text{Maximize:}&\mathcal{L}\left(\text{prompt}(q,A^{*})\right)\\ \text{s.t.}&A^{*}\in\Pi(A)\end{array} \tag{1}\]
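The search itself is a simple brute-force loop. The sketch below is illustrative rather than taken from the released code: `score_options` is an assumed helper that returns one probability per option symbol (a concrete scoring sketch in this style appears after the Evaluations paragraph), and the early exit mirrors the stopping rule described above.

```python
from itertools import permutations

def permutation_attack(question, options, answer_idx, score_options):
    """Brute-force adversarial permutation search for one MCQA item.

    Iterates over orderings of `options`; returns False as soon as the model
    mispredicts under some ordering (attack succeeds), and True if it
    survives all k! orderings.
    """
    for perm in permutations(range(len(options))):
        permuted = [options[i] for i in perm]
        true_pos = perm.index(answer_idx)      # slot the ground truth landed in
        probs = score_options(question, permuted)
        predicted = max(range(len(probs)), key=probs.__getitem__)
        if predicted != true_pos:              # model fooled by this ordering
            return False
    return True
```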
**Models.** We evaluate a wide range of LLMs and VLLMs of diverse sizes, different pretrained backbones, and both auto-regressive pretrained and instruction-following fine-tuned models. Specifically, for LLMs, we have evaluated LLaMA-2 (7B/13B) (Touvron et al., 2023b), Vicuna (7B/13B) (Chiang et al., 2023), WizardLM-13B (Xu et al., 2023), InternLM-20B (Team, 2023a), Falcon-7B (Penedo et al., 2023), and MPT-7B (Team, 2023b). For VLLMs, InstructBLIP (Vicuna-based, 7B/13B) (Dai et al., 2023), Open-Flamingo (MPT-based, 9B) (Awadalla et al., 2023), Otter (Llama-based, MPT-based) (Li et al., 2023a), LLaVA (7B/13B) (Liu et al., 2023a), Limber (7B) (Merullo et al., 2023), and mPLUG-Owl (pretraining, instruction) (Ye et al., 2023) are used for evaluation.
**Datasets.** We utilize a diverse array of language and vision-language MCQA datasets for comprehensive evaluation. These datasets cover multiple domains and require different aspects of the models to give correct answers, ensuring our findings are generalizable. Specifically, for LLMs, we utilize MMLU (Hendrycks et al., 2020), ARC challenge (ARC-c) (Clark et al., 2018), BoolQ (Clark et al., 2019), SocialIQA (Sap et al., 2019), and MedMCQA (Pal et al.). For VLLMs, we use ScienceQA (Lu et al., 2022), A-OKVQA (Schwenk et al., 2022), MMBench (Liu et al., 2023c),
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **MMLU** & **ARC-c** & **BoolQ** & **SocialIQA** & **MedMCQA** \\ \hline \# of choices & 4 & 4 & 2 & 3 & 4 \\ \# QA pairs & 14079 & 1165 & 3270 & 1954 & 2816 \\ Task & Aggregated & Commonsense Reasoning & Reading Comprehension & Commonsense Reasoning & Out-of-domain \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of the language datasets evaluated.
\begin{table}
\begin{tabular}{l c c} \hline \hline & \# of choices & \# QA pairs \\ \hline
**ScienceQA** & 2,3,4,5 & 2021 \\
**A-OKVQA** & 4 & 1145 \\
**MMBench** & 4 & 4377 \\
**SEED-Bench** & 4 & 14233 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics of the vision-language datasets evaluated.
and SEED-Bench (Li et al., 2023b). We use the questions in ScienceQA that have corresponding images, the MCQA subsets of MMBench, and the image-based MCQAs in SEED-Bench.
**Evaluations.** We use accuracy as our primary metric. During testing, we prompt the model to generate the possible option symbols (e.g., A to D) and extract the probability assigned to each option symbol at the first generated position. The option with the highest probability is then selected as the model's answer for that specific question. For both LLMs and VLLMs, we use greedy decoding and set the temperature to 1.
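For concreteness, a minimal sketch of this scoring step is shown below. It assumes a HuggingFace-style causal LM and that each option symbol maps to a single token; the prompt format and helper name are illustrative, not the exact evaluation code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def option_probabilities(model, tokenizer, prompt, symbols=("A", "B", "C", "D")):
    """Probability the model assigns to each option symbol as its next token."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]  # logits at the first generated position
    probs = torch.softmax(next_token_logits, dim=-1)
    # Assumes each symbol tokenises to a single id; this is model-dependent.
    ids = [tokenizer(s, add_special_tokens=False).input_ids[0] for s in symbols]
    return [probs[i].item() for i in ids]
```

The argmax over these probabilities gives the predicted option used throughout the tables.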
### Results
We present the main results in Tables 3 and 4 for language and vision-language models, respectively.
**Language Models.** In our experiments, large language models manifested a significant susceptibility to adversarial permutations, a finding consistent across various MCQA benchmarks. Our evaluation extended beyond the typical four-option MCQA datasets to include more diverse formats like the two-option BoolQ (Clark et al., 2019) and the three-option SocialIQA (Sap et al., 2019), which are naturally more resilient to permutations. Intriguingly, the presence of only one or two distractor options did not mitigate the models' vulnerability to permutations. For instance, Llama2-7B's accuracy on BoolQ plummeted from 61.79% to a mere 8.23%, a performance even worse than random chance. Moreover, out of 45 experiments conducted with large language models, only six non-GPT-3.5-turbo models managed to perform better than random chance, and all models, including GPT-3.5-turbo, suffer significant performance decreases.
**Vision-Language Models.** In the vision-language model evaluations, the susceptibility to adversarial permutations is also severe. Despite the presence of visual context, which may intuitively add a layer of resilience, the VLLMs were not spared from the adverse effects of our permutation attacks. For datasets other than ScienceQA, which has varying numbers of options, 36.66% of the models fell below random chance performance after the adversarial attack. While InstructBLIP (Dai et al., 2023) shows relatively strong robustness to the adversarial attack, all of the models experienced significant accuracy drops, ranging from 20% to 40%.
**Further Observations.** We note that within the same model family but with varying parameter sizes (e.g., InstructBLIP-7B vs. InstructBLIP-13B), scaling up generally enhances both the baseline performance and the resilience to adversarial attacks, with relatively smaller declines in accuracy. We also observe that models have different success rates relative to random chance on different datasets. For example, all of the LLMs failed the adversarial attack on the MedMCQA dataset except GPT-3.5-turbo, which itself remains only slightly above random chance. This illustrates the difficulty LLMs have in generalizing to out-of-domain data, and suggests caution about their use in unconstrained practical scenarios.
## 3 Answer Set Pruning
In this section, we examine the impact of a stricter test condition on MCQA, specifically by reducing the number of distractor options while retaining the true answer. This is expected both to improve baseline performance, by raising the random chance level, and to reduce vulnerability to adversarial permutation by
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Method** & **MMLU** & **ARC-c** & **BoolQ** & **SocialIQA** & **MedMCQA** \\ \hline Llama2-7B & 40.91/ 6.17 (34.74 ) & 47.04/ 7.98 (39.06 \(\downarrow\)) & 61.79/ 8.23 (53.56 \(\downarrow\)) & 52.00/15.71 (36.29 \(\downarrow\)) & 37.96/ 1.60 (36.36 \(\downarrow\)) \\ Llama2-13B & 52.22/18.33 (33.89 \(\downarrow\)) & 61.80/21.63 (40.17 \(\downarrow\)) & 67.16/38.29 (28.87 \(\downarrow\)) & 61.21/34.14 (27.07 \(\downarrow\)) & 39.78/ 7.35 (32.43 \(\downarrow\)) \\ Vicuna-v1.5 & 48.57/18.09 (30.48 \(\downarrow\)) & 58.37/23.43 (34.94 \(\downarrow\)) & 64.04/29.60 (34.44 \(\downarrow\)) & 64.99/38.33 (26.66 \(\downarrow\)) & 39.28/ 7.67 (31.61 \(\downarrow\)) \\ Vicuna-v1.5-13B & 54.68/26.27 (28.41 \(\downarrow\)) & 69.27/38.80 (30.47 \(\downarrow\)) & 68.96/42.14 (26.82 \(\downarrow\)) & 66.07/44.42 (21.65 \(\downarrow\)) & 41.80/11.90 (29.90 \(\downarrow\)) \\ WizardLM-13B & 48.60/15.87 (52.73 \(\downarrow\)) & 58.20/21.12 (37.08 \(\downarrow\)) & 67.49/42.11 (25.38 \(\downarrow\)) & 63.46/31.78 (31.68 \(\downarrow\)) & 34.87/ 6.32 (28.55 \(\downarrow\)) \\ InterLM-20B & 59.14/29.52 (29.62 \(\downarrow\)) & 78.28/54.42 (23.86 \(\downarrow\)) & 85.20/82.91 (2.29 \(\downarrow\)) & 79.48/65.97 (1.51 \(\downarrow\)) & 43.61/13.92 (29.69 \(\downarrow\)) \\ Falcon-7b & 31.66/ 2.49 (29.17 \(\downarrow\)) & 34.74/ 0.09 (34.65 \(\downarrow\)) & 55.35/ 2.66 (52.69 \(\downarrow\)) & 36.29/ 0.55 (35.74 \(\downarrow\)) & 28.12/ 0.07 (28.05 \(\downarrow\)) \\ MPT-7B & 35.60/ 3.52 (32.08 \(\downarrow\)) & 37.76/ 1.06 (36.70 \(\downarrow\)) & 58.46/ 7.03 (51.43 \(\downarrow\)) & 41.61/ 2.53 (39.08 \(\downarrow\)) & 26.31/ 1.60 (24.71 \(\downarrow\)) \\ GPT-3.5-turbo & 64.81/40.39 (24.42 \(\downarrow\)) & 82.23/61.55 (20.68 \(\downarrow\)) & 87.92/81.35 (6.57 \(\downarrow\)) & 70.62/56.29 (14.33 \(\downarrow\)) & 52.22/32.07 (20.15 \(\downarrow\)) \\ Random Chance & 25.0 & 25.0 & 50.0 & 33.33 & 25.0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparisons of LLMs before and after adversarial attack. Numbers in each cell represent original accuracy, accuracy after adversarial attack, and relative performance drop. Red shading indicates experiments where the permutation attack reduced performance below the chance level. All models suffer substantially, with most experiments leading to below-chance performance.
substantially reducing the degrees of freedom that the permutation attack can explore. However, we found that models remain highly susceptible to even the few permutations available in the reduced set of options.
**Experiment Setup.** Specifically, we constrain the answer set by reducing the number of total choices from four to either three or two, inclusive of the ground-truth answer. We then compare the performance metrics between these pruned sets in both permuted and non-permuted conditions to assess the relative susceptibility of the models.
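One plausible way to construct such pruned answer sets is sketched below; the paper does not specify how the retained distractors are selected, so the random sampling here is an assumption.

```python
import random

def prune_options(options, answer_idx, keep=3, rng=random):
    """Reduce a k-option answer set to `keep` options, always retaining the
    ground-truth option and sampling the remaining distractors at random."""
    distractors = [i for i in range(len(options)) if i != answer_idx]
    kept = sorted([answer_idx] + rng.sample(distractors, keep - 1))  # keep original relative order
    return [options[i] for i in kept], kept.index(answer_idx)
```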
**Results.** We present the results of answer set pruning on the MMLU dataset in Table 5, and on the other datasets in the appendix. As can be seen from Table 5, reducing the number of options increases the base prediction accuracy as expected, but performing adversarial permutation on the reduced answer set still dramatically reduces the accuracy, even in the 2-option cases. In most cases, the performance is below the chance level given the number of options. This means that, surprisingly, even in the simplest case of a binary choice, models are not robust to whether the true answer is presented as the first or second option.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Method** & **ScienceQA** & **A-OKVQA** & **SEED-Bench** & **MMBench** \\ \hline InstructBLIP-7B & 59.46/33.31 (26.15 \(\downarrow\)) & 74.06/51.62 (22.44 \(\downarrow\)) & 51.61/25.68 (25.93 \(\downarrow\)) & 64.91/41.01 (23.90 \(\downarrow\)) \\ InstructBLIP-13B & 64.15/41.84 (22.31 \(\downarrow\)) & 77.90/55.38 (22.52 \(\downarrow\)) & 53.65/28.79 (24.86 \(\downarrow\)) & 67.12/45.49 (21.63 \(\downarrow\)) \\ OpenFlamingo & 39.43/1.37 (38.06 \(\downarrow\)) & 46.90/3.58 (43.32 \(\downarrow\)) & 37.99/0.87 (37.12 \(\downarrow\)) & 38.99/5.18 (33.81 \(\downarrow\)) \\ Otter-Llama7B & 59.92/32.54 (27.38 \(\downarrow\)) & 57.99/28.30 (29.69 \(\downarrow\)) & 40.77/9.91 (30.86 \(\downarrow\)) & 55.24/19.67 (35.57 \(\downarrow\)) \\ Otter-MPT7B & 63.11/31.38 (31.73 \(\downarrow\)) & 68.21/43.19 (25.02 \(\downarrow\)) & 46.76/10.82 (35.94 \(\downarrow\)) & 61.31/36.46 (24.85 \(\downarrow\)) \\ LLaVA-7B & 45.20/2.28 (42.92 \(\downarrow\)) & 52.91/ 0.09 (52.82 \(\downarrow\)) & 38.36/5.67 (43.03 \(\downarrow\)) & 46.03/5.07 (40.96 \(\downarrow\)) \\ LLaVA-13B & 60.63/46.53 (14.10 \(\downarrow\)) & 63.14/25.85 (37.29 \(\downarrow\)) & 44.00/13.68 (30.32 \(\downarrow\)) & 59.13/31.30 (27.83 \(\downarrow\)) \\ Limber & 49.33/14.03 (35.30 \(\downarrow\)) & 39.57/1.22 (38.35 \(\downarrow\)) & 31.50/0.26 (31.24 \(\downarrow\)) & 34.93/1.62 (33.31 \(\downarrow\)) \\ mPLUG-Owl-pt & 53.24/10.20 (43.04 \(\downarrow\)) & 39.91/1.83 (38.08 \(\downarrow\)) & 35.57/0.91 (34.66 \(\downarrow\)) & 42.57/8.54 (34.03 \(\downarrow\)) \\ mPLUG-Owl-instr & 54.87/11.43 (43.44 \(\downarrow\)) & 37.12/2.01 (35.11 \(\downarrow\)) & 36.74/2.72 (34.02 \(\downarrow\)) & 43.74/6.12 (37.62 \(\downarrow\)) \\ Random Chance & Min 20.0 & 25.0 & 25.0 & 25.0 & 25.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparisons of VLLMs before and after adversarial attack. Numbers in each cell represent original accuracy, accuracy after adversarial attack, and relative performance drop. Red shading indicates performance below the chance level after the permutation attack. All models suffer substantially with most experiments leading to below chance performance.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Method** & **4 Choices** & **3 Choices** & **2 Choices** \\ \hline Llama2-7B & 40.91 & 48.75/ 8.67 (39.08\(\downarrow\)) & 63.33/20.26 (43.07\(\downarrow\)) \\ Llama2-13B & 52.22 & 70.77/22.85 (47.92\(\downarrow\)) & 71.13/31.85 (39.28\(\downarrow\)) \\ Vicuna-v1.5-7B & 48.57 & 56.65/30.60 (26.97\(\downarrow\)) & 68.81/32.60 (36.21\(\downarrow\)) \\ Vicuna-v1.5-13B & 54.68 & 61.75/29.02 (32.66\(\downarrow\)) & 72.97/28.06 (44.91\(\downarrow\)) \\ WizardLM-13B & 48.60 & 56.57/17.74 (38.83\(\downarrow\)) & 69.09/28.96 (40.13\(\downarrow\)) \\ InternLM-20B & 59.14 & 65.25/30.48 (34.67\(\downarrow\)) & 76.09/43.51 (32.58\(\downarrow\)) \\ Falcon-7b & 31.66 & 52.88/ 5.92 (46.96\(\downarrow\)) & 58.31/11.41 (46.90\(\downarrow\)) \\ MPT-7B & 35.60 & 53.31/ 6.27 (47.03\(\downarrow\)) & 58.31/15.44 (42.87\(\downarrow\)) \\ Random Chance & 25.0 & 33.33 & 50.0 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of LLMs on the MMLU dataset under answer set pruning. Numbers in each cell represent original accuracy, accuracy after adversarial attack, and relative performance drop. Baseline performance improves as the number of distractors is reduced, but performance is reduced below chance after adversarial permutation.
## 4 Further Analysis
### Position Bias
A concurrent study to ours argued for the existence of _position bias_ in language model MCQA (Zheng et al., 2023a). For example, in an A/B/C/D MCQ situation, a given model might have a predisposition to selecting a particular option such as "C" and an aversion to selecting some other option such as "A", irrespective of the correctness of the answer associated with each label. Position bias could potentially explain adversarial permutation vulnerability if a model is so averse to selecting a particular option, that rotating the true answer into that slot would reliably cause it to fail.
To analyse whether position bias can explain our results, we compare our adversarial permutation results to the performance of each LLM under position bias analysis - always rotating the correct answer to a specific slot (A/B/C/D) in the answer list.
From the results in Table 6, we do see the position bias effect remarked upon by Zheng et al. (2023a). The models tested exhibit varying degrees of position bias, as results fluctuate with respect to original performance (left column). For example Vicuna suffers limited position bias, while Falcon-7B is highly position biased. Falcon-7B's baseline accuracy of 31% rises to 70.9% when the true answer is placed in slot A - indicating a strong preference for choosing A; but drops to 3.7% when the true answer is placed in slot B, indicating a strong aversion to selecting B.
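For reference, the position-bias probe amounts to a simple rearrangement; the helper below is an illustrative sketch (slot 0 corresponds to option "A", and the distractors are assumed to keep their original relative order).

```python
def rotate_answer_to_slot(options, answer_idx, slot):
    """Place the ground-truth option at a fixed slot, preserving distractor order."""
    distractors = [opt for i, opt in enumerate(options) if i != answer_idx]
    return distractors[:slot] + [options[answer_idx]] + distractors[slot:]
```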
Comparing the observed position bias to the impact of our adversarial permutation, we can see that our adversarial permutation has a much stronger effect. The results after permutation (right column) are substantially worse than the position bias results. For example, Llama2 performs above chance level for answers in every possible position (A/B/C/D), but is reduced to below chance by our adversarial permutation. Thus we conclude that _the impact of our adversarial permutation is not explainable by position bias_. Evidently, models rely on the relationships between choices, including the distractors, which the adversarial permutation manipulates to fool them. That is, it is not just the true answer and its location (position bias), but also the pattern of the distractor answers around the true answer (as explored by adversarial permutations) that determines model success or failure. This reveals a complex and concerning form of vulnerability.
### Majority Voting and Contextual Calibration
The previous analysis of adversarial permutation vulnerability should be concerning to stakeholders interested in trustworthy and reliable AI, and suggests a new focus for researchers in developing models with improved intrinsic permutation robustness. Nevertheless, one might ask whether there are any post-hoc engineering fixes that could alleviate this issue in practice for existing models. To this end, we explore two post-hoc strategies that have previously proven effective in improving model performance, namely majority voting (Wang et al., 2023) and contextual calibration (Zhao et al., 2021), and ask whether they can alleviate adversarial permutation vulnerability.
**Setup.** Majority voting (Wang et al., 2023) has been shown highly successful in self-ensembling over stochastic predictions. In our context, we apply it by obtaining the predictions for all possible permutations and then selecting the most frequent prediction. If most permutations lead to a correct prediction and there are only one or two pathological
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline
**Method** & **Original** & **A** & **B** & **C** & **D** & **Adversarial Permutation** \\ \hline Llama2-7B & 40.91 & 60.02 & 37.28 & 30.69 & 35.43 & 6.17 \\ Llama2-13B & 52.22 & 36.15 & 58.69 & 59.08 & 54.91 & 18.33 \\ Vicuna-7B & 48.57 & 49.83 & 63.22 & 45.46 & 37.85 & 18.09 \\ Vicuna-13B & 54.68 & 47.33 & 70.00 & 51.73 & 52.04 & 26.27 \\ WizardLM-13B & 48.60 & 34.75 & 56.38 & 45.86 & 57.56 & 15.87 \\ InternLM-20B & 59.14 & 51.05 & 68.75 & 53.47 & 62.35 & 29.52 \\ Falcon-7B & 31.66 & 70.86 & 3.77 & 10.52 & 14.85 & 2.49 \\ MPT-7B & 35.60 & 0.82 & 75.35 & 34.72 & 2.03 & 3.52 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of positional bias and our adversarial permutation attack on MMLU dataset. While position bias exists, its impact is moderate. In contrast, our adversarial method severely degrades performance, usually below random chance level.
permutations that lead to an incorrect prediction, then majority voting should provide complete robustness to adversarial permutation. Contextual calibration (Zhao et al., 2021) is designed to mitigate the prior bias introduced from the in-context examples by estimating the model's bias toward each answer with a "content-free" query and fitting a calibration parameter. Here we consider the input question and options as the language prior bias. We first feed the model with content-free options (e.g., "N/A") as the content-free input, and then calibrate the real prediction based on the calibration parameters calculated from the content-free input.
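A minimal sketch of the majority-voting defense is given below, reusing the assumed `score_options` helper from Section 2; note the \(k!\) model calls per question that make the defense impractical.

```python
from collections import Counter
from itertools import permutations

def majority_vote_answer(question, options, score_options):
    """Vote over all k! orderings, mapping each prediction back to the
    original option index before counting."""
    votes = Counter()
    for perm in permutations(range(len(options))):
        permuted = [options[i] for i in perm]
        probs = score_options(question, permuted)
        predicted_pos = max(range(len(probs)), key=probs.__getitem__)
        votes[perm[predicted_pos]] += 1        # original index of the chosen option
    return votes.most_common(1)[0][0]
```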
**Results.** From the results in Table 7 we can see that neither defense proved effective at restoring the original performance levels. Majority voting certainly ameliorated the permutation attack as expected, but still fell short of the baseline accuracy, despite being a highly impractical defense that imposes a \(k!\)-fold increase in inference cost. Contextual calibration, on the other hand, completely failed to make a meaningful impact on mitigating the adversarial attack. This re-confirms that position bias is not the primary reason for models' permutation vulnerability.
### Analysis on Permutation Distribution
While our main focus has been on the permutation-robustness of LLMs and VLLMs, we can also ask about the distribution of responses as a function of permutation. For example, is there only one specific pathological permutation among all \(k!\) orderings, or are there many mistake-inducing permutations? To analyse this, we report in Figure 3 a histogram over the questions in ARC-challenge, where each bin represents the number of questions for which the specified proportion of permutations led to the correct answer. For example, we see that Vicuna-13B has a large number of questions that succeed for almost all permutations, while several models have a substantial batch of questions that are only correctly answered for around 30% of the potential permutations. Interestingly, most models have a substantial minority of questions that are only correctly answered for a small fraction of the permutations (leftmost bin).
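The quantity binned in Figure 3 can be computed by exhaustive enumeration, as in the short sketch below (again assuming the `score_options` helper; it differs from the attack loop only in that it does not stop early).

```python
from itertools import permutations

def permutation_success_fraction(question, options, answer_idx, score_options):
    """Fraction of the k! orderings for which the model answers correctly."""
    outcomes = []
    for perm in permutations(range(len(options))):
        permuted = [options[i] for i in perm]
        probs = score_options(question, permuted)
        predicted_pos = max(range(len(probs)), key=probs.__getitem__)
        outcomes.append(perm[predicted_pos] == answer_idx)
    return sum(outcomes) / len(outcomes)
```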
### Qualitative Results
To illustrate the permutation attack, we present qualitative results for both LLMs in Table 8 and VLLMs in Figure 4.
**Language Models.** In Table 8, we showcase an MCQA example from the ARC-challenge dataset (Clark et al., 2018), with the original answer order alongside two permutations. The ground-truth answer is underlined in each configuration. We use Llama-13B for this experiment. The model gives the correct prediction for the original option order. For permutation 1, where we only swap the positions of options C and D, i.e., moving the ground truth to position C, the model still gives the correct prediction. However, for permutation 2, even though we do not move the ground-truth answer but only swap options A and B, the model incorrectly predicts A as the answer. This qualitative example underscores that the model's vulnerability extends beyond mere positional bias: even minor changes in option ordering can result in completely different predictions.
**Vision-Language Models.** In Figure 4, we present a visual MCQA example from the ScienceQA dataset using the Otter-Llama model. In this example, we simply move the ground truth "Asia" from option A to option C. However, the
\begin{table}
\begin{tabular}{l l|l l l} \hline \hline
**Method** & **Original** & **Adversarial Attack** & **Majority Vote** & **Contextual Calibration** \\ \hline Llama2-7B & 40.91 & 6.17 (34.74 \(\downarrow\)) & 33.64 (7.27 \(\downarrow\)) & 5.24 (35.67 \(\downarrow\)) \\ Llama2-13B & 52.22 & 18.33 (33.89 \(\downarrow\)) & 48.53 (3.69 \(\downarrow\)) & 20.02 (32.20 \(\downarrow\)) \\ Vicuna-v1.5-7B & 48.57 & 18.09 (30.48 \(\downarrow\)) & 44.10 (4.47 \(\downarrow\)) & 11.33 (37.24 \(\downarrow\)) \\ Vicuna-v1.5-13B & 54.68 & 26.27 (28.41 \(\downarrow\)) & 52.03 (2.65 \(\downarrow\)) & 18.10 (36.58 \(\downarrow\)) \\ WizardLM-13B & 48.60 & 15.87 (32.73 \(\downarrow\)) & 30.17 (18.43 \(\downarrow\)) & 8.23 (40.37 \(\downarrow\)) \\ InternLM-20B & 59.14 & 29.52 (29.62 \(\downarrow\)) & 60.33 (1.19 \(\uparrow\)) & 28.94 (30.20 \(\downarrow\)) \\ Falcon-7b & 31.66 & 2.49 (29.17 \(\downarrow\)) & 4.38 (27.28 \(\downarrow\)) & 3.59 (28.07 \(\downarrow\)) \\ MPT-7B & 35.60 & 3.52 (32.08 \(\downarrow\)) & 13.80 (21.80 \(\downarrow\)) & 6.24 (29.36 \(\downarrow\)) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Impact of majority vote and contextual calibration defenses against the permutation attack on the MMLU dataset. Contextual calibration fails completely. Majority vote ameliorates the attack, but does not completely restore performance. Red shading indicates below-chance results.
model still predicts the answer to be A and shows strong confidence in terms of the token probabilities (right part of the figure). This might show the model's preference for the first option as a recency bias.
Figure 4: Qualitative results of permutations of answer options and the corresponding model (Otter-Llama) predictions. The example is selected from the ScienceQA dataset.
\begin{table}
\begin{tabular}{l} \hline \hline
**Question: A physicist wants to determine the speed a car must reach to jump over a ramp. The physicist conducts three trials.** \\ In trials two and three, the speed of the car is increased by 20 miles per hour. What is the physicist investigating when he changes the speed? \\
**True Answer: the independent (manipulated) variable.** \\ \hline
**Original Answer Set: A. the control B. the hypothesis statement C. the dependent (responding) variable D. the independent (manipulated) variable.** \\ Model Prediction: D. \\
**Permutation 1: A. the control B. the hypothesis statement C. the independent (manipulated) variable D. the dependent (responding) variable** \\ Model Prediction: C. \\
**Permutation 2: A. the hypothesis statement B. the control C. the dependent (responding) variable D. the independent (manipulated) variable.** \\ Model Prediction: A. \\ \hline \hline \end{tabular}
\end{table}
Table 8: Qualitative results of permutations of answer options and the corresponding model (Llama2-7B) predictions. The example is selected from the ARC-challenge dataset.
Figure 3: Analysis on permutation distribution. The histogram shows the number of questions for which the corresponding proportion of permutations lead to the correct answer (ideal is a full bar at the 100% bin, indicating that all permutations are correctly answered for all questions). The distribution of bins suggests that many questions have multiple adversarial permutations.
## 5 Related Work
**Large Language Models and Vision-Language Models.** In recent years, the natural language processing community has seen astonishing progress in large language models (LLMs) with billions of trained parameters, such as GPT-3 (Brown et al., 2020) and Llama (Touvron et al., 2023a, 2023b), which have become more capable after instruction-following fine-tuning (Ouyang et al., 2022; Zheng et al., 2023b). With the strong capabilities of LLMs, there is growing interest in grounding vision with LLMs to enable the models to perceive multimodal information (Yin et al., 2023; Zong et al., 2023; Li et al., 2023c), usually by utilizing pretrained language and vision encoders with trainable alignment modules to connect them. Such models have shown strong capabilities across a diverse range of tasks, including multimodal generation, question-answering, dialogue, and more.
**Multiple-Choice Question Answering (MCQA).** Multiple-Choice Question Answering (MCQA) requires selecting the correct option from a set of choices and is prevalent in numerous real-world applications, making it a key performance metric for both LLMs and VLLMs. Various benchmarks such as MMLU (Hendrycks et al., 2020), AGI-Eval (Zhong et al., 2023), MedMCQA (Pal et al., 2019) and SocialIQA (Sap et al., 2019) have been designed to assess MCQA proficiency across different domains. Different prompting approaches have been considered for MCQA, with multiple-choice prompting being the currently recommended state-of-the-art (Robinson and Wingate, 2023). On these benchmarks, LLMs and VLLMs frequently achieve, or even surpass, human-level accuracy (Anil et al., 2023; OpenAI, 2023b), suggesting a high degree of reliability and robustness. However, we cast doubt on this presumed robustness, exposing the underlying fragility of these models in MCQA scenarios.
**Robustness of LLMs and VLLMs.** Despite their impressive capabilities, concerns remain about the robustness and reliability of LLMs and VLLMs (Liu et al., 2023b). Previous studies have revealed the sensitivity of LLMs to various factors including the prompt (Zhu et al., 2023), in-context examples (Liu et al., 2021; Zhao et al., 2021), irrelevant context (Shi et al., 2023), etc. Despite its significance, the robustness of MCQA has been relatively unexamined, particularly for VLLMs. Our research addresses this gap by scrutinizing a specific, yet pervasive, vulnerability to answer choice permutations in MCQA across both model types. Concurrent work (Zheng et al., 2023a) discusses position bias in MCQA. Our results show that adversarial permutation vulnerability is a much deeper problem than position bias.
## 6 Discussion
In this paper, we present a comprehensive empirical analysis that unveils a critical but often overlooked vulnerability in both large language models (LLMs) and large vision-language models (VLLMs) in the context of multiple-choice question answering (MCQA). Despite their seemingly robust performance on established MCQA benchmarks, these models are highly susceptible to simple manipulations like option permutations. Our findings raise concerns about the widespread practice of evaluating and deploying these models based on MCQA tasks, urging caution in interpreting high benchmark scores as evidence of robust capabilities.
We highlight the need for future work to develop training strategies and/or architectures that lead to intrinsic robustness to such adversarial attacks and develop parameter-efficient tuning approaches that can fine-tune or align existing pretrained LLMs and VLLMs to be invariant to permutations.
## Acknowledgement
Yongshuo Zong was supported by the United Kingdom Research and Innovation (grant EP/S02431X/1), UKRI Centre for Doctoral Training in Biomedical AI at the University of Edinburgh, School of Informatics. For the purpose of open access, the author has applied a creative commons attribution (CC BY) licence to any author accepted manuscript version arising.
|
2305.14954
|
Weakly nonlinear analysis of a two-species non-local advection-diffusion
system
|
Nonlocal interactions are ubiquitous in nature and play a central role in
many biological systems. In this paper, we perform a bifurcation analysis of a
widely-applicable advection-diffusion model with nonlocal advection terms
describing the species movements generated by inter-species interactions. We
use linear analysis to assess the stability of the constant steady state, then
weakly nonlinear analysis to recover the shape and stability of non-homogeneous
solutions. Since the system arises from a conservation law, the resulting
amplitude equations consist of a Ginzburg-Landau equation coupled with an
equation for the zero mode. In particular, this means that supercritical
branches from the Ginzburg-Landau equation need not be stable. Indeed, we find
that, depending on the parameters, bifurcations can be subcritical (always
unstable), stable supercritical, or unstable supercritical. We show numerically
that, when small amplitude patterns are unstable, the system exhibits large
amplitude patterns and hysteresis, even in supercritical regimes. Finally, we
construct bifurcation diagrams by combining our analysis with a previous study
of the minimisers of the associated energy functional. Through this approach we
reveal parameter regions in which stable small amplitude patterns coexist with
strongly modulated solutions.
|
Valeria Giunta, Thomas Hillen, Mark A. Lewis, Jonathan R. Potts
|
2023-05-24T09:43:53Z
|
http://arxiv.org/abs/2305.14954v1
|
# Weakly nonlinear analysis of a two-species non-local advection-diffusion system
###### Abstract
Nonlocal interactions are ubiquitous in nature and play a central role in many biological systems. In this paper, we perform a bifurcation analysis of a widely-applicable advection-diffusion model with nonlocal advection terms describing the species movements generated by inter-species interactions. We use linear analysis to assess the stability of the constant steady state, then weakly nonlinear analysis to recover the shape and stability of non-homogeneous solutions. Since the system arises from a conservation law, the resulting amplitude equations consist of a Ginzburg-Landau equation coupled with an equation for the zero mode. In particular, this means that supercritical branches from the Ginzburg-Landau equation need not be stable. Indeed, we find that, depending on the parameters, bifurcations can be subcritical (always unstable), stable supercritical, or unstable supercritical. We show numerically that, when small amplitude patterns are unstable, the system exhibits large amplitude patterns and hysteresis, even in supercritical regimes. Finally, we construct bifurcation diagrams by combining our analysis with a previous study of the
minimisers of the associated energy functional. Through this approach we reveal parameter regions in which stable small amplitude patterns coexist with strongly modulated solutions.
keywords: Nonlocal interactions, Pattern formation, Amplitude equation formalism, Bifurcations, Multi-stability
## 1 Introduction
Spontaneous pattern formation occurs throughout nature [22], with examples ranging from animal coat patterns [35] to territory formation [27], cell sorting [6] and swarm aggregation [33]. Therefore, uncovering and analysing the mechanisms behind pattern formation is a central challenge in the life sciences where applied mathematics can play a role. Typically, research into pattern formation proceeds first by assessing which parameters may cause patterns to emerge spontaneously from a homogeneous steady state, using linear pattern formation analysis, sometimes called 'Turing pattern analysis' [35]. This determines whether patterns may emerge at short times from arbitrarily small perturbations. However, it is also important biologically to show whether these patterns are stable. One approach to pattern stability is via weakly nonlinear analysis: a stable supercritical bifurcation branch suggests that asymptotic patterns will emerge continuously as the bifurcation parameter is changed, whereas an unstable subcritical branch suggests that large amplitude asymptotic patterns may appear abruptly as the bifurcation point is crossed, their amplitude being a discontinuous function of the bifurcation parameter. This discontinuity in amplitude with respect to parameter change indicates that a biological system might suddenly change its behaviour in a dramatic fashion with only a small change in the underlying mechanisms.
Many biological mechanisms generate attractive or repulsive forces governing phenomena such as chemotaxis ([14; 21]), bacterial orientation ([2]), swarms of animals ([29]), and motion of human crowds ([20]). These mechanisms are driven by electrical, chemical or social interactions. These interactions arise from individual organisms collecting information from their environment, such as the presence of other individuals, food or chemicals. After gathering information, individuals move towards regions that contain important components for survival or move away from less favourable areas, thus creating spatially inhomogeneous distributions of individuals, which may have a certain degree of regularity in space and/or time (e.g. [33; 28]). This process of acquiring information from the environment is generally nonlocal, as motile organisms are usually able to inspect a portion of their environment, either by prolonging their protrusions, as in the case of cells [8], or by using their sight, hearing or smell, as with animals [26].
In recent years there has been an increasing interest in the mathematical modelling of nonlocal advection as a movement model with nonlocal information [5; 33; 6; 10; 8]. Recently, the following class of nonlocal advection-diffusion equations was proposed as a general model of interacting populations [28]
\[\frac{\partial u_{i}}{\partial t}=D_{i}\Delta u_{i}+\nabla\cdot\left(u_{i} \sum_{j=1}^{N}\gamma_{ij}\nabla(K*u_{j})\right),\,i=1,\ldots,N. \tag{1}\]
Here, \(u_{i}(x,t)\) denotes the density of population \(i\) at position \(x\) and time \(t\), for \(i\in\{1,\ldots,N\}\) and \(D_{i}>0\) is the diffusion rate of \(u_{i}\). Individuals can detect the presence of other individuals, whether conspecifics or not, over a spatial neighborhood described by the spatial averaging kernel \(K\), which is a symmetric, non-negative function modelling the sensing range. The term \(K*u_{j}\) denotes the convolution between \(K\) and \(u_{j}\) and describes the nonlocal interactions of \(u_{i}\) with \(u_{j}\). The parameters \(\gamma_{ij}\) are the inter/intra-species interaction parameters, giving the density-dependent rate at which species \(i\) advects towards (if \(\gamma_{ij}<0\)), or away from (if \(\gamma_{ij}>0\)), species \(j\).
Model (1) implicitly focuses on time scales over which birth and death processes are negligible. Nonetheless, it has a wide range of possible applications in that it generalizes a variety of existing models describing many different phenomena, such as animal home ranges [5], territory formation [15, 27, 30], and cell sorting [6]. On the mathematical side, well-posedness of System (1) was analyzed in [17] and [23]. When the kernel \(K\) is sufficiently smooth, [17] shows that the system admits classical, positive and global solutions in one spatial dimension, and local strong solutions in any higher dimension. When the kernel is non-smooth, [23] proves that System (1) has weak solutions that exist globally in time.
From the perspective of pattern formation, numerical analysis shows that System (1) exhibits a great variety of spatio-temporal patterns, depending on the model parameters. These include segregated and aggregated stationary patterns, periodic time oscillating solutions, and aperiodic spatio-temporal behaviours [28], [17], [9]. In many cases the system admits an energy functional [18, 9], which can be used to gain analytic insight into the steady asymptotic patterns that can form from this system. Although [18] focused on the \(N=2\) case, the methods are more generally applicable in principle.
Here, we perform a bifurcation analysis of one of the cases analyzed in [18], namely where \(N=2\), \(\gamma_{ij}=\gamma_{ji}\) and \(\gamma_{ii}=0\). For simplicity, we also assume that \(D_{1}=D_{2}\). We use weakly nonlinear analysis to derive the equations governing the amplitude of the stationary solutions. Through analysis of the amplitude equations, we determine the nature of bifurcations generating branches of non-homogeneous solutions from a homogeneous state, then recover the shape of the non-homogeneous solutions and their stability. We validate our results through numerical analysis, setting \(K\) to be the top-hat distribution [18]. Finally, we combine our results with results of [18]
that were derived from an energy principle, to construct bifurcation diagrams that incorporate all the existing analysis of this system.
An interesting feature of our analysis is that the equation governing the modulation of small-amplitude patterns is not always the real Ginzburg-Landau (GL) equation. This contrasts with many examples of weakly nonlinear analysis, where the GL equation provides the amplitude of the stationary pattern and its stability: in subcritical regimes, the pattern solution is always unstable; in supercritical regimes, a periodic pattern is stable if its wavenumber lies within the Eckhaus band [34, 22, 3, 4, 19, 11]. In our case, the real GL equation does not always provide a correct description of the pattern near the onset. This is because our system possesses a conservation law, i.e. mass is conserved for all time. This conservation law gives rise to a large-scale neutral mode (the zero mode) that can affect the stability of the pattern, so must be included in the analysis [12, 24]. Therefore, the resulting amplitude equations will consist of the GL equation coupled to an equation for the large-scale mode.
In [24] the authors used symmetry and scaling arguments to derive the amplitude equations governing systems with a conserved quantity. They proved that there exist stable stationary solutions in the form of strongly modulated patterns (i.e. patterns that consist of multiple Fourier modes), and these exist away from the branch that bifurcates from the constant steady state. The existence of strongly modulated patterns for System (1) has also been shown in [18] by analyzing the minimizers of an energy functional associated with the system. Here we build on this by investigating the existence and stability of small amplitude patterns, and showing that when these solutions are unstable, the system evolves towards either large amplitude or strongly modulated patterns. In addition, our analysis shows that, in some parameter regions, stable small amplitude patterns can coexist with stable strongly modulated solutions.
A similar two-species aggregation model was studied recently in [6]. Their model differs from our model (2) with regard to the diffusion term: in [6] the terms \(D\partial_{xx}u_{i}\) for \(i=1,2\) are replaced by density-dependent diffusion terms \(D\partial_{x}(u_{i}\partial_{x}(u_{1}+u_{2}))\). The pattern-forming mechanism is similar to that of our model; however, the arising aggregations have compact support.
This paper is organised as follows. Linear stability analysis is given in Section 2 and a weakly nonlinear analysis in Section 3. In these two sections, the analysis is carried out with a generic kernel, in order to provide some general results that can be used for future works. Section 4 focuses on detailed analysis where \(K\) is the top-hat distribution. We analyse the amplitude equations, recover the bifurcation diagrams and compare analytical results with numerical solutions. We finally combine the analysis performed here with the results obtained in [18] to recover more exhaustive pictures of the bifurcation diagrams. In Section 5, we outline further extensions of this work and discuss possible applications of our results to natural systems.
## 2 Linear stability analysis
We consider System (1) with two interacting populations, \(u_{1}\) and \(u_{2}\), that either mutually avoid or attract with the same strength (i.e. \(\gamma_{12}=\gamma_{21}\)). We set \(\gamma:=\gamma_{12}=\gamma_{21}\) and fix \(D_{1}=D_{2}=:D\), and \(\gamma_{11}=\gamma_{22}=0\). Therefore, System (1) reads as
\[\begin{split}&\partial_{t}u_{1}=D\partial_{xx}u_{1}+\gamma \partial_{x}\left(u_{1}\partial_{x}(K\ast u_{2})\right),\\ &\partial_{t}u_{2}=D\partial_{xx}u_{2}+\gamma\partial_{x}\left(u _{2}\partial_{x}(K\ast u_{1})\right).\end{split} \tag{2}\]
We work on the one dimensional spatial domain \(\Omega=\left[-\frac{l}{2},\frac{l}{2}\right]\) and impose periodic boundary conditions
\[u_{i}\left(-\frac{l}{2},t\right)=u_{i}\left(\frac{l}{2},t\right),\quad\partial _{x}u_{i}\left(-\frac{l}{2},t\right)=\partial_{x}u_{i}\left(\frac{l}{2},t \right),\quad\text{ for }i\in\{1,2\}\text{ and }t\geq 0. \tag{3}\]
We consider an even and non-negative kernel \(K\) such that
\[\int_{-l/2}^{l/2}K(x)dx=1,\text{ and }\operatorname{Supp}(K)=\{x\in\mathbb{R}:K(x)>0 \}=[-\alpha,\alpha] \tag{4}\]
where the constant \(\alpha\) denotes the sensitivity radius. We assume that \(\alpha<l/2\). Due to the periodic boundary conditions, we also assume that \(K(x)\) is wrapped around periodically over the domain.
The periodic boundary conditions (Equation (3)) ensure that in System (2) the total mass of each population \(u_{i}\) is conserved in time. Indeed the following identities are satisfied
\[\frac{d}{dt}\int_{-l/2}^{l/2}u_{i}(x,t)\mathrm{d}x=0,\qquad\text{ for }i=1,2. \tag{5}\]
Hence
\[\int_{-l/2}^{l/2}u_{i}(x,t)\mathrm{d}x=\int_{-l/2}^{l/2}u_{i}(x,0)\mathrm{d}x =:p_{i},\text{ for all }t\geq 0, \tag{6}\]
where the constant \(p_{i}\) denotes the size of population \(u_{i}\), for \(i=1,2\).
Equation (6) implies that system (2) has a unique equilibrium point given by
\[\mathbf{\bar{u}}:=(\bar{u}_{1},\bar{u}_{2})=\left(\frac{p_{1}}{l},\frac{p_{2} }{l}\right). \tag{7}\]
### Nondimensionalization
We start our analysis by rescaling the original system (2) using the following non-dimensional coordinates and variables
\[\tilde{x}=\frac{x}{\alpha},\quad\tilde{t}=\frac{D}{\alpha^{2}}t,\quad\tilde{ u}_{1}=lu_{1},\quad\tilde{u}_{2}=lu_{2}. \tag{8}\]
Note that, instead of \(\alpha\), one could have rescaled using any other constant that is proportional to the standard deviation of \(K(x)\) instead, which may be useful if \(K(x)\) does not have compact support, for example.
In the non-dimensional spatial domain, we define the following kernel
\[\tilde{K}(\tilde{x}):=\alpha K(\alpha\tilde{x})=\alpha K(x). \tag{9}\]
By Equation (9), we see that \(\text{Supp}(\tilde{K})=[-1,1]\) and that
\[\int_{-1}^{1}\tilde{K}(\tilde{x})d\tilde{x}=\int_{-1}^{1}\alpha K(\alpha\tilde {x})d\tilde{x}=\int_{-\alpha}^{\alpha}K(x)dx=1. \tag{10}\]
By (8) and (9), it follows that the convolution product becomes
\[\begin{split} K*u_{i}(x)&=\int_{-\alpha}^{\alpha}K( x-y)u_{i}(y)dy\\ &=\int_{-1}^{1}\frac{1}{\alpha}\tilde{K}(\tilde{x}-\tilde{y}) \frac{1}{l}\tilde{u}_{i}(\tilde{y})\alpha d\tilde{y}\\ &=\frac{1}{l}\tilde{K}*_{\sim}\tilde{u}_{i}(\tilde{x}),\end{split} \tag{11}\]
where \(*_{\sim}\) denotes the convolution operator in the rescaled spatial coordinate.
By substituting Equations (8), (9) and (11) in Equations (2), we obtain the following non-dimensional system
\[\begin{split}\partial_{\tilde{t}}\tilde{u}_{1}&= \partial_{\tilde{x}\tilde{x}}\tilde{u}_{1}+\frac{\gamma}{lD}\partial_{\tilde{ x}}\left(\tilde{u}_{1}\partial_{\tilde{x}}(\tilde{K}*_{\sim}\tilde{u}_{2}) \right),\\ \partial_{\tilde{t}}\tilde{u}_{2}&=\partial_{\tilde {x}\tilde{x}}\tilde{u}_{2}+\frac{\gamma}{lD}\partial_{\tilde{x}}\left(\tilde{u }_{2}\partial_{\tilde{x}}(\tilde{K}*_{\sim}\tilde{u}_{1})\right),\end{split} \tag{12}\]
where \(\tilde{x}\in\left[-\frac{l}{2\alpha},\frac{l}{2\alpha}\right]\). By the relations in Equation (8), the boundary conditions now read as:
\[\tilde{u}_{i}\left(-\frac{l}{2\alpha},\tilde{t}\right)=\tilde{u}_{i}\left( \frac{l}{2\alpha},\tilde{t}\right),\,\partial_{\tilde{x}}\tilde{u}_{i}\left( -\frac{l}{2\alpha},\tilde{t}\right)=\partial_{\tilde{x}}\tilde{u}_{i}\left( \frac{l}{2\alpha},\tilde{t}\right),\,\forall i\in\{1,\ldots,N\}\text{ and }\tilde{t}\geq 0. \tag{13}\]
The boundary conditions (Equation (13)) imply that the total mass of each population \(\tilde{u}_{i}\) is conserved in time. Therefore, for \(i=1,2\) and all \(\tilde{t}\geq 0\), the following identities hold
\[\int_{-l/2\alpha}^{l/2\alpha}\tilde{u}_{i}(\tilde{x},0)\mathrm{d}\tilde{x}=\int_{-l/2\alpha}^{l/2\alpha}\tilde{u}_{i}(\tilde{x},\tilde{t})\mathrm{d}\tilde{x}=\int_{-l/2}^{l/2}\frac{l}{\alpha}u_{i}(x,t)dx=\frac{l}{\alpha}p_{i}, \tag{14}\]
where the second equality uses the identities in Equation (8) and the third equality uses Equation (6). By Equation (14) it follows that the non-dimensional system in (12) has a unique equilibrium point given by
\[\bar{\mathbf{\bar{u}}}:=\left(\bar{\bar{u}}_{1},\bar{\bar{u}}_{2}\right)=\left(p _{1},p_{2}\right). \tag{15}\]
To simplify the notation, we define \(\tilde{\gamma}:=\frac{\gamma}{lD}\) and \(L:=\frac{l}{\alpha}\), and by dropping the tildes, the non-dimensional system (12) reads as
\[\begin{split}\partial_{t}u_{1}&=\partial_{xx}u_{1} +\gamma\partial_{x}\left(u_{1}\partial_{x}(K*u_{2})\right),\\ \partial_{t}u_{2}&=\partial_{xx}u_{2}+\gamma \partial_{x}\left(u_{2}\partial_{x}(K*u_{1})\right),\end{split} \tag{16}\]
where \(x\in\left[-\frac{L}{2},\frac{L}{2}\right]\). The boundary conditions for System (16) read as:
\[u_{i}\left(-\frac{L}{2},t\right)=u_{i}\left(\frac{L}{2},t\right),\,\partial_{ x}u_{i}\left(-\frac{L}{2},t\right)=\partial_{x}u_{i}\left(\frac{L}{2},t \right),\,\forall i\in\{1,\ldots,N\}\text{ and }t\geq 0. \tag{17}\]
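Before turning to the linear analysis, we note that System (16)-(17) is straightforward to integrate numerically. The sketch below is not taken from the paper: it assumes the top-hat kernel used later in Section 4 (for which \(\hat{K}(q)=\sin(q)/q\)), a pseudo-spectral treatment of the periodic convolution and derivatives, and a simple explicit Euler step whose size must respect the diffusive stability restriction.

```python
import numpy as np

def simulate_system16(u1, u2, gamma, L, dt=1e-4, steps=20000):
    """Explicit pseudo-spectral time stepping of the nondimensional system (16)
    with periodic boundary conditions on [-L/2, L/2]."""
    n = u1.size
    q = 2 * np.pi * np.fft.fftfreq(n, d=L / n)        # discrete wavenumbers q_m
    K_hat = np.ones_like(q)                           # top-hat kernel: K_hat(q) = sin(q)/q
    nz = q != 0
    K_hat[nz] = np.sin(q[nz]) / q[nz]

    def rhs(a, b):
        grad_conv = np.fft.ifft(1j * q * K_hat * np.fft.fft(b)).real   # d/dx (K * b)
        diffusion = np.fft.ifft(-(q ** 2) * np.fft.fft(a)).real        # d^2 a / dx^2
        advection = np.fft.ifft(1j * q * np.fft.fft(a * grad_conv)).real
        return diffusion + gamma * advection

    for _ in range(steps):
        u1, u2 = u1 + dt * rhs(u1, u2), u2 + dt * rhs(u2, u1)
    return u1, u2

# Example usage: small random perturbation of a homogeneous state on a grid of 256 points
# x = np.linspace(-5, 5, 256, endpoint=False)
# u1 = 1.0 + 1e-2 * np.random.randn(x.size)
# u2 = 1.0 + 1e-2 * np.random.randn(x.size)
# u1, u2 = simulate_system16(u1, u2, gamma=-3.0, L=10.0)
```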
### Linear stability analysis
We now perform a linear stability analysis of system (16) about the equilibrium point
\[\mathbf{\bar{u}}=(\bar{u}_{1},\bar{u}_{2})=(p_{1},p_{2}), \tag{18}\]
(see Equation (15)). To this end, we consider a perturbation of the homogeneous solution (18) of the following form
\[\mathbf{w}=\begin{pmatrix}u_{1}-\bar{u}_{1}\\ u_{2}-\bar{u}_{2}\end{pmatrix}=\mathbf{u}^{(0)}e^{\lambda t+iqx}, \tag{19}\]
subject to boundary conditions (17), where \(\mathbf{u}^{(0)}\) is a constant vector, \(\lambda\in\mathbb{R}\) is the growth rate and \(q\) is the wavenumber of the perturbation. By substituting Equation
(19) into Equation (16) and neglecting nonlinear terms, we obtain the following eigenvalue problem
\[\lambda(q)\mathbf{w}=\mathcal{L}(q)\mathbf{w}, \tag{20}\]
where
\[\mathcal{L}(q)=-q^{2}\begin{bmatrix}1&\gamma\bar{u}_{1}\hat{K}(q)\\ \gamma\bar{u}_{2}\hat{K}(q)&1\end{bmatrix}, \tag{21}\]
and
\[\hat{K}(q):=\int_{-1}^{1}K(x)e^{-iqx}\mathrm{d}x=\int_{-1}^{1}K(x)\cos(qx) \mathrm{d}x, \tag{22}\]
where the second equality uses the fact that \(K(x)\) is an even function, and hence \(K(x)\sin(qx)\) is an odd function.
The eigenvalues of the matrix \(\mathcal{L}\) (21) read
\[\lambda^{\pm}(q):=-q^{2}(1\pm\gamma|\hat{K}(q)|\sqrt{\bar{u}_{1}\bar{u}_{2}}), \tag{23}\]
and govern the evolution of the perturbation \(\mathbf{w}\) (Equation (19)). If \(\gamma=0\) then \(\lambda^{\pm}(q)\leq 0\). By continuity, if \(\gamma\) is arbitrarily small, \(\lambda^{\pm}(q)\leq 0\) for all wavenumbers \(q\), and the equilibrium point \(\bar{\mathbf{u}}\) (Equation (18)) is linearly stable. As \(|\gamma|\) increases, either \(\lambda^{+}(q)\) or \(\lambda^{-}(q)\) becomes positive for some values of \(q\) and, consequently, the equilibrium \(\bar{\mathbf{u}}\) becomes unstable.
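For completeness, (23) follows from the standard computation for the \(2\times 2\) matrix in (21): writing \(\mathcal{L}(q)=-q^{2}M(q)\),

\[\det\left(M(q)-\mu I\right)=(1-\mu)^{2}-\gamma^{2}\bar{u}_{1}\bar{u}_{2}\hat{K}(q)^{2}=0\quad\Longrightarrow\quad\mu_{\pm}=1\pm\gamma\hat{K}(q)\sqrt{\bar{u}_{1}\bar{u}_{2}},\]

so that \(\lambda^{\pm}(q)=-q^{2}\mu_{\pm}\), which coincides with (23) up to relabelling the two branches according to the sign of \(\gamma\hat{K}(q)\).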
The wavenumbers \(q\) must be chosen in such a way that the periodic boundary conditions in Equation (17) are satisfied, and thus we have a discrete set of admissible wavenumbers given by
\[I=\left\{q_{m}:=\frac{2\pi m}{L},\text{ with }m\in\mathbb{Z}_{\geq 0}\right\}. \tag{24}\]
The equilibrium \(\bar{\mathbf{u}}\) (Equation (18)) is unstable when \(\lambda^{\pm}(q_{m})>0\) for some \(m\in\mathbb{Z}_{\geq 0}\). Note that \(\lambda^{\pm}(q_{0})=0\) so the system never becomes unstable at wavenumber
\(q_{0}\). For \(m>0\), if \(\hat{K}(q_{m})\neq 0\), we denote by \(\gamma_{m}^{\pm}\) the instability thresholds of the wavenumber \(q_{m}\), which are defined as
\[\gamma_{m}^{\pm}=\pm\frac{1}{|\hat{K}(q_{m})|\sqrt{\bar{u}_{1}\bar{u}_{2}}},\,m \in\mathbb{Z}_{>0}. \tag{25}\]
Therefore the equilibrium \(\bar{\mathbf{u}}\) (Equation (18)) is unstable when
\[\gamma<\gamma_{m}^{-}\quad\text{ or }\quad\gamma>\gamma_{m}^{+},\quad\text{ for some }m\in\mathbb{Z}_{>0}. \tag{26}\]
In the following section, we will perform a weakly nonlinear analysis to study the evolution of the perturbation \(\mathbf{w}\) when the equilibrium \(\bar{\mathbf{u}}\) becomes linearly unstable. We will adopt \(\gamma\) as bifurcation parameter and denote by \(q_{c}\) the first admissible wavenumber that is destabilized as \(|\gamma|\) is increased. By Equation (25), we note the critical wavenumber \(q_{c}\) is defined as
\[q_{c}=\arg\max_{q_{m}\in I}\lvert\hat{K}(q_{m})\rvert, \tag{27}\]
where the set \(I\) is defined in (24). We also underline that \(q_{c}\) depends on the choice of kernel \(K\) and may not be unique. We will denote by \(\gamma_{c}^{\pm}\) the corresponding bifurcation thresholds, that is
\[\gamma_{c}^{+}=\frac{1}{|\hat{K}(q_{c})|\sqrt{\bar{u}_{1}\bar{u}_{2}}}\quad \text{ and }\quad\gamma_{c}^{-}=-\frac{1}{|\hat{K}(q_{c})|\sqrt{\bar{u}_{1}\bar{u}_{2}}}. \tag{28}\]
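As a concrete illustration (not part of the paper's analysis), the dispersion relation (23) and the thresholds (25)-(28) can be evaluated numerically; the sketch below assumes the top-hat kernel of Section 4, for which \(\hat{K}(q)=\sin(q)/q\).

```python
import numpy as np

def critical_mode_and_threshold(u1_bar, u2_bar, L, m_max=200):
    """Return (q_c, gamma_c_plus) from Equations (27)-(28) for a top-hat kernel."""
    m = np.arange(1, m_max + 1)
    q = 2 * np.pi * m / L                      # admissible wavenumbers, Equation (24)
    K_hat = np.sin(q) / q                      # Fourier transform of the top-hat kernel
    idx = np.argmax(np.abs(K_hat))             # Equation (27)
    gamma_c_plus = 1.0 / (np.abs(K_hat[idx]) * np.sqrt(u1_bar * u2_bar))   # Equation (28)
    return q[idx], gamma_c_plus

# e.g. critical_mode_and_threshold(u1_bar=1.0, u2_bar=1.0, L=10.0)
# picks q_c = 2*pi/L and gamma_c^+ slightly above 1 for these values.
```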
## 3 Amplitude equations
In this section we perform a weakly nonlinear analysis based on the method of multiple scales. Close to the threshold of instability, that is, in the weakly nonlinear regime, we will use an expansion technique to recover an approximate solution, characterized by a slowly varying amplitude, and the equations governing the amplitude of the solution. Through analysis of these equations (usually referred
to as amplitude equations), we recover the amplitude and stability of the stationary solutions.
The idea behind the multiple scale method comes from the observation that, just above an instability threshold, a nonlinear state is given by a superposition of modes whose wavenumbers \(q\) lie in a narrow band \(q^{-}\leq q\leq q^{+}\) (see [13], Chapter 6). The resulting nonlinear state is a solution governed by one or more unstable modes and characterized by an amplitude that varies slowly in space, due to the superposition of modes with almost identical wavenumbers. Also, the amplitude evolves slowly in time because, close to the onset of instability, all growth rates are small.
Generally just beyond a bifurcation threshold, if the band of unstable wavenumbers \([q^{-},q^{+}]\) around \(q_{c}\) has width \(O(\varepsilon)\), where \(\varepsilon\ll 1\), the positive growth rates are \(O(\varepsilon^{2})\). Therefore, the solution evolves as
\[\mathbf{u}(x,t)\sim\mathbf{\bar{u}}+\tilde{A}(X,T)e^{iq_{c}x}+\tilde{A}^{*}(X, T)e^{-iq_{c}x}, \tag{29}\]
where \(X=\varepsilon x\) is a long spatial scale, \(T=\varepsilon^{2}t\) is a slow temporal scale, \(\tilde{A}(X,T)\) is a complex function and denotes the slow modulation of the critical mode \(e^{iq_{c}x}\), and \(\tilde{A}^{*}\) is the complex conjugate of \(\tilde{A}\). Also, in the limit of \(\varepsilon\to 0\), this solution must satisfy the boundary conditions in Equation (17).
However, in systems with a conservation law, so that \(\lambda(0)=0\), long-scale modes evolve on long timescales, and must be included in the analysis (see also [24]). Therefore solutions to System (16)-(17) evolve as
\[\mathbf{u}(x,t)=\mathbf{\bar{u}}+\tilde{A}(X,T)e^{iq_{c}x}+\tilde{A}^{*}(X,T)e ^{-iq_{c}x}+\tilde{B}(X,T), \tag{30}\]
where \(\tilde{B}(X,T)\) is a real function and denotes the slow modulation of the mode corresponding to the zero wavenumber, \(q=0\).
Recall that the homogeneous steady state is linearly stable for \(\gamma_{c}^{-}<\gamma<\gamma_{c}^{+}\), and becomes unstable for \(\gamma<\gamma_{c}^{-}\) or \(\gamma>\gamma_{c}^{+}\). In the following Theorem, we derive an approximation of the solutions close to the instability thresholds (\(\gamma\approx\gamma_{c}^{+}\) or \(\gamma\approx\gamma_{c}^{-}\)) and the equations governing the amplitude of the solutions. Since the analysis is broadly the same, we do not distinguish between \(\gamma_{c}^{+}\) and \(\gamma_{c}^{-}\) and use \(\gamma_{c}\) to denote both the thresholds. This Theorem also shows that the ansatz in Equation (30) correctly describes solutions in the weakly nonlinear regime.
**Theorem 3.1**.: _Let \(\varepsilon:=\sqrt{|\frac{\gamma-\gamma_{c}}{\gamma_{c}}|}\). When \(\varepsilon\ll 1\), solutions to system (16) have the following form_
\[\begin{split} u_{1}&=\bar{u}_{1}+\varepsilon\rho_{1 }(Ae^{iq_{c}x}+A^{*}e^{-iq_{c}x})+\varepsilon^{2}[\psi_{1}(A^{2}e^{2iq_{c}x}+A ^{*2}e^{-2iq_{c}x})+B]+O(\varepsilon^{3}),\\ u_{2}&=\bar{u}_{2}+\varepsilon\rho_{2}(Ae^{iq_{c}x }+A^{*}e^{-iq_{c}x})+\varepsilon^{2}[\psi_{2}(A^{2}e^{2iq_{c}x}+A^{*2}e^{-2iq_ {c}x})+B]+O(\varepsilon^{3}).\end{split} \tag{31}\]
_Here, \((\bar{u}_{1},\bar{u}_{2})\) is the homogeneous steady state (18), and \(\rho_{1}\), \(\rho_{2}\), \(\psi_{1}\), \(\psi_{2}\) are constants defined as_
\[\begin{split}&\rho_{1}=1,\qquad\rho_{2}=-\frac{1}{\gamma_{c}\bar{u }_{1}\hat{K}(q_{c})},\\ &\psi_{1}=\frac{1}{2\bar{u}_{1}}\frac{1-\gamma_{c}\bar{u}_{1}\hat{ K}(2q_{c})}{1-\gamma_{c}^{2}\bar{u}_{1}\bar{u}_{2}\hat{K}^{2}(2q_{c})},\qquad\psi_{2}= \frac{1}{2\bar{u}_{1}}\frac{1-\gamma_{c}\bar{u}_{2}\hat{K}(2q_{c})}{1-\gamma_{ c}^{2}\bar{u}_{1}\bar{u}_{2}\hat{K}^{2}(2q_{c})}.\end{split} \tag{32}\]
_Also, \(A(X,T)\) and \(B(X,T)\) are governed by the following equations_
1. _If_ \(\bar{u}_{1}\neq\bar{u}_{2}\)_,_ \[\begin{split}& A_{T}=\sigma A-\Lambda|A|^{2}A,\\ & B=0,\end{split}\] (33)
2. _If_ \(\bar{u}_{1}=\bar{u}_{2}\)_,_ \[\begin{split}& A_{T}=\sigma A-\Lambda|A|^{2}A+\nu AB,\\ & B_{T}=\mu B_{XX}-\eta(|A|^{2})_{XX},\end{split}\] (34)
_where the coefficients \(\sigma\), \(\Lambda\), \(\nu\), \(\mu\) and \(\eta\) are defined as_
\[\sigma =-q_{c}^{2},\text{ if }\gamma_{c}^{-}<\gamma<\gamma_{c}^{+}\text{ (stable regime)},\quad\sigma=q_{c}^{2},\text{ if }\gamma<\gamma_{c}^{-}\text{ or }\gamma>\gamma_{c}^{+}\text{ (unstable regime)},\] \[\Lambda =\frac{1}{2}q_{c}^{2}\gamma_{c}[2\hat{K}(2q_{c})(\psi_{1}+\psi_{ 2})-\hat{K}(q_{c})(\psi_{1}\rho_{2}+\psi_{2}a_{2})],\] \[\nu =\frac{q_{c}^{2}}{\bar{u}_{1}},\qquad\mu=1+\gamma_{c}\bar{u}_{1} \hat{K}(0),\qquad\eta=\frac{1}{\bar{u}_{1}}. \tag{35}\]
_Finally, \(A^{*}\) denotes the complex conjugate of \(A\)._
**Proof.** Recall the definition of \(\mathbf{w}\) from Equation (19). Separating the linear part from the nonlinear part, System (16) can be rewritten as
\[\partial_{t}\mathbf{w}=\partial_{xx}\mathcal{L}^{\gamma}[\mathbf{w}]+\partial _{x}\mathcal{Q}^{\gamma}[\mathbf{w},\partial_{x}(K*\mathbf{w})], \tag{36}\]
where the actions of linear operator \(\mathcal{L}^{\gamma}\) and the non-linear operator \(\mathcal{Q}^{\gamma}\) on the vectors \(\mathbf{r}=(r_{1},r_{2})^{T}\) and \(\mathbf{s}=(s_{1},s_{2})^{T}\) are defined as
\[\mathcal{L}^{\gamma}\left[\mathbf{r}\right]=\begin{pmatrix}1&\gamma\bar{u}_{1 }K*\\ \gamma\bar{u}_{2}K*&1\end{pmatrix}\begin{pmatrix}r_{1}\\ r_{2}\end{pmatrix},\qquad\mathcal{Q}^{\gamma}\left[\mathbf{r},\mathbf{s} \right]=\gamma\begin{pmatrix}r_{1}s_{2}\\ r_{2}s_{1}\end{pmatrix}. \tag{37}\]
Choosing \(\gamma\) such that \(\gamma-\gamma_{c}\sim\varepsilon^{2}\), we write the following expansion
\[\gamma=\gamma_{c}+\varepsilon^{2}\gamma^{(2)}. \tag{38}\]
From the definition of \(\varepsilon\), it follows that either \(\gamma^{(2)}=\gamma_{c}\) or \(\gamma^{(2)}=-\gamma_{c}\). In particular, \(\gamma^{(2)}=-\gamma_{c}\) in the stable regime (\(\gamma_{c}^{-}<\gamma<\gamma_{c}^{+}\)), while \(\gamma^{(2)}=\gamma_{c}\) in the unstable regime (\(\gamma<\gamma_{c}^{-}\) or \(\gamma>\gamma_{c}^{+}\)).
We then employ the method of multiple scales and adopt a long spatial scale \(X=\varepsilon x\) and multiple temporal scales \(T_{1},T_{2},\dots\) such that
\[t=\frac{T_{1}}{\varepsilon}+\frac{T_{2}}{\varepsilon^{2}}+\cdots. \tag{39}\]
As \(\varepsilon\to 0\), temporal and spatial derivatives decouple as
\[\partial_{t}\rightarrow\partial_{t}+\varepsilon\partial_{T_{1}}+\varepsilon^{2} \partial_{T_{2}},\qquad\partial_{x}\rightarrow\partial_{x}+\varepsilon\partial _{X}. \tag{40}\]
We employ a regular asymptotic expansion of \(\mathbf{w}\) in terms of \(\varepsilon\)
\[\mathbf{w}=\varepsilon\mathbf{w}_{1}+\varepsilon^{2}\mathbf{w}_{2}+\varepsilon ^{3}\mathbf{w}_{3}+\cdots, \tag{41}\]
where
\[\mathbf{w}_{j}=\sum_{m=-\infty}^{\infty}\mathbf{w}_{jm}(X,T_{1},T_{2})e^{iq_{ m}x},\ \ \text{for}\ j=1,2,\ldots \tag{42}\]
and must satisfy the boundary conditions in Equations (17).
By Equations (38) and (41), we see that the operators \(\mathcal{L}^{\gamma}\) and \(\mathcal{Q}^{\gamma}\) in (37) decouple in orders of \(\varepsilon\) as
\[\mathcal{L}^{\gamma}\left[\mathbf{r}\right] =\begin{pmatrix}1&(\gamma_{c}+\varepsilon^{2}\gamma^{(2)})\bar{u }_{1}K\ast\\ (\gamma_{c}+\varepsilon^{2}\gamma^{(2)})\bar{u}_{2}K\ast&1\end{pmatrix} \begin{pmatrix}r_{1}\\ r_{2}\end{pmatrix}\] \[=\mathcal{L}^{\gamma_{c}}\left[\mathbf{r}\right]+\varepsilon^{2} \begin{pmatrix}0&\gamma^{(2)}\bar{u}_{1}K\ast\\ \gamma^{(2)}\bar{u}_{2}K\ast&0\end{pmatrix}\begin{pmatrix}r_{1}\\ r_{2}\end{pmatrix}, \tag{43}\]
\[\mathcal{Q}^{\gamma}\left[\mathbf{r},\mathbf{s}\right]=(\gamma_{c}+\varepsilon ^{2}\gamma^{(2)})\begin{pmatrix}r_{1}s_{2}\\ r_{2}s_{1}\end{pmatrix}=\mathcal{Q}^{\gamma_{c}}\left[\mathbf{r},\mathbf{s} \right]+\varepsilon^{2}\mathcal{Q}^{\gamma^{(2)}}\left[\mathbf{r},\mathbf{s} \right].\]
By substituting Equations (41), (38), (40) and (43) into Equation (36), we obtain
\[\varepsilon^{2}\partial_{T_{1}}\mathbf{w}_{1}+\varepsilon^{3} \partial_{T_{2}}\mathbf{w}_{1}+\varepsilon^{3}\partial_{T_{1}}\mathbf{w}_{2}+ \varepsilon^{4}\partial_{T_{2}}\mathbf{w}_{2}=\] \[\left(\partial_{xx}+2\varepsilon\partial_{xX}+\varepsilon^{2} \partial_{XX}\right)\mathcal{L}^{\gamma_{c}}[\varepsilon\mathbf{w}_{1}+ \varepsilon^{2}\mathbf{w}_{2}+\varepsilon^{3}\mathbf{w}_{3}+\varepsilon^{4} \mathbf{w}_{4}]\] \[+\varepsilon^{2}(\partial_{xx}+2\varepsilon\partial_{xX}+ \varepsilon^{2}\partial_{XX})\begin{pmatrix}0&\gamma^{(2)}\bar{u}_{1}K*\\ \gamma^{(2)}\bar{u}_{2}K*&0\end{pmatrix}(\varepsilon\mathbf{w}_{1}+ \varepsilon^{2}\mathbf{w}_{2}+\varepsilon^{3}\mathbf{w}_{3})\\ +(\partial_{x}+\varepsilon\partial_{X})\mathcal{Q}^{\gamma_{c}}[( \varepsilon\mathbf{w}_{1}+\varepsilon^{2}\mathbf{w}_{2}+\varepsilon^{3} \mathbf{w}_{3}),(\partial_{x}+\varepsilon\partial_{X})(K*(\varepsilon \mathbf{w}_{1}+\varepsilon^{2}\mathbf{w}_{2}+\varepsilon^{3}\mathbf{w}_{3}) )]\\ +\varepsilon^{2}(\partial_{x}+\varepsilon\partial_{X})\mathcal{Q}^{ \gamma(2)}[(\varepsilon\mathbf{w}_{1}+\varepsilon^{2}\mathbf{w}_{2}+ \varepsilon^{3}\mathbf{w}_{3}),(\partial_{x}+\varepsilon\partial_{X})(K*( \varepsilon\mathbf{w}_{1}+\varepsilon^{2}\mathbf{w}_{2}+\varepsilon^{3} \mathbf{w}_{3}))]+O(\varepsilon^{5}). \tag{44}\]
Next, we collect the terms at each order of \(\varepsilon\) and obtain a sequence of equations for each \(\mathbf{w}_{i}\). At order \(\varepsilon\), we obtain the homogeneous linear problem \(\partial_{xx}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{1}]=0\), where the function \(\mathbf{w}_{1}\) has the form given in (42). Therefore, we have:
\[\begin{split}\partial_{xx}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{1}]&=\partial_{xx}\sum_{m=-\infty}^{\infty}\begin{pmatrix}1&\gamma_{c}\bar{u}_{1}K*\\ \gamma_{c}\bar{u}_{2}K*&1\end{pmatrix}\mathbf{w}_{1m}e^{iq_{m}x}\\ &=\partial_{xx}\sum_{m=-\infty}^{\infty}\begin{pmatrix}1&\gamma_{c}\bar{u}_{1}\hat{K}(q_{m})\\ \gamma_{c}\bar{u}_{2}\hat{K}(q_{m})&1\end{pmatrix}\mathbf{w}_{1m}e^{iq_{m}x}\\ &=-\sum_{m=-\infty}^{\infty}q_{m}^{2}\begin{pmatrix}1&\gamma_{c}\bar{u}_{1}\hat{K}(q_{m})\\ \gamma_{c}\bar{u}_{2}\hat{K}(q_{m})&1\end{pmatrix}\mathbf{w}_{1m}e^{iq_{m}x}\\ &=0,\end{split} \tag{45}\]
where the second equality uses
\[K*e^{iq_{m}x}=\int_{-1}^{1}K(y)e^{iq_{m}(x-y)}\,\mathrm{d}y=\left(\int_{-1}^{1}K(y)e^{-iq_{m}y}\,\mathrm{d}y\right)e^{iq_{m}x}=\hat{K}(q_{m})e^{iq_{m}x}, \tag{46}\]
with \(\hat{K}\) defined in (22). The fourth equality in Equation (45) is satisfied if and only if
\[q_{m}^{2}\begin{pmatrix}1&\gamma_{c}\bar{u}_{1}\hat{K}(q_{m})\\ \gamma_{c}\bar{u}_{2}\hat{K}(q_{m})&1\end{pmatrix}\mathbf{w}_{1m}e^{iq_{m}x}=0 \text{, for all }m\in\mathbb{Z}. \tag{47}\]
Non-trivial solutions to Equation (47) exist when either the determinant of the matrix is zero or \(q_{m}=0\). Recalling the definition of \(q_{m}\) (24) and \(\gamma_{c}\) (28), we see that non-trivial solutions exist only for \(q_{m}=q_{0}\) and \(q_{m}=q_{c}\).
Therefore, the function \(\mathbf{w}_{1}\) that satisfies this linear problem \(\partial_{xx}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{1}]=0\) is
\[\mathbf{w}_{1}=\boldsymbol{\rho}_{0}A_{0}(X,T_{1},T_{2})+\boldsymbol{\rho} \left(A(X,T_{1},T_{2})e^{iq_{c}x}+A^{*}(X,T_{1},T_{2})e^{-iq_{c}x}\right), \tag{48}\]
where \(A_{0}(X,T_{1},T_{2})\) is a real function, \(A(X,T_{1},T_{2})\) is a complex function, \(A^{*}\) denotes the complex conjugate of \(A\), and
\[\boldsymbol{\rho}_{0}=\begin{pmatrix}\rho_{01}\\ \rho_{02}\end{pmatrix},\qquad\boldsymbol{\rho}=\begin{pmatrix}\rho_{1}\\ \rho_{2}\end{pmatrix} \tag{49}\]
are constant vectors. First, notice that \(\partial_{xx}\mathcal{L}^{\gamma_{c}}[\boldsymbol{\rho}_{0}A_{0}(X,T_{1},T_ {2})]=0\), for any \(\boldsymbol{\rho}_{0}\) and \(A_{0}(X,T_{1},T_{2})\). Also, in order to satisfy \(\partial_{xx}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{1}]=0\), the vector \(\boldsymbol{\rho}\) must be such that
\[\boldsymbol{\rho}\in\text{Ker}\begin{pmatrix}1&\gamma_{c}\bar{u}_{1}\hat{K}(q _{c})\\ \gamma_{c}\bar{u}_{2}\hat{K}(q_{c})&1\end{pmatrix}, \tag{50}\]
where \(\hat{K}\) is defined in (22). Since \(\gamma_{c}\hat{K}(q_{c})\sqrt{\bar{u}_{1}\bar{u}_{2}}=\pm 1\) (see Equation (28)), the matrix in (50) is singular and \(\boldsymbol{\rho}\) is determined up to a multiplicative constant. We shall choose the following normalization
\[\boldsymbol{\rho}=\begin{pmatrix}1\\ \rho_{2}\end{pmatrix},\text{ where }\rho_{2}:=-\frac{1}{\gamma_{c}\bar{u}_{1} \hat{K}(q_{c})}. \tag{51}\]
At this stage, the amplitudes \(A(X,T_{1},T_{2})\) and \(A_{0}(X,T_{1},T_{2})\), and the vector \(\boldsymbol{\rho}_{0}\) are still unknown.
At order \(\varepsilon^{2}\) we obtain the following problem
\[\partial_{xx}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{2}]=\mathbf{F}, \tag{52}\]
with
\[\begin{split}\mathbf{F}=&-2\partial_{xX}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{1}]-\partial_{x}\mathcal{Q}^{\gamma_{c}}[\mathbf{w}_{1},\partial_{x}(K\ast\mathbf{w}_{1})]+\partial_{T_{1}}\mathbf{w}_{1}\\ =&-2iq_{c}\begin{pmatrix}1&\gamma_{c}\bar{u}_{1}\hat{K}(q_{c})\\ \gamma_{c}\bar{u}_{2}\hat{K}(q_{c})&1\end{pmatrix}\begin{pmatrix}1\\ \rho_{2}\end{pmatrix}(A_{X}e^{iq_{c}x}-A_{X}^{\ast}e^{-iq_{c}x})\\ &+2q_{c}^{2}\rho_{2}\gamma_{c}\hat{K}(q_{c})\begin{pmatrix}1\\ 1\end{pmatrix}(A^{2}e^{2iq_{c}x}+A^{\ast 2}e^{-2iq_{c}x})+q_{c}^{2}\gamma_{c}\hat{K}(q_{c})\begin{pmatrix}\rho_{01}\rho_{2}\\ \rho_{02}\end{pmatrix}A_{0}(Ae^{iq_{c}x}+A^{\ast}e^{-iq_{c}x})\\ &+\boldsymbol{\rho}_{0}\partial_{T_{1}}A_{0}+\boldsymbol{\rho}\left(\partial_{T_{1}}Ae^{iq_{c}x}+\partial_{T_{1}}A^{\ast}e^{-iq_{c}x}\right)\\ =&\,2q_{c}^{2}\rho_{2}\gamma_{c}\hat{K}(q_{c})\begin{pmatrix}1\\ 1\end{pmatrix}(A^{2}e^{2iq_{c}x}+A^{\ast 2}e^{-2iq_{c}x})+q_{c}^{2}\gamma_{c}\hat{K}(q_{c})\begin{pmatrix}\rho_{01}\rho_{2}\\ \rho_{02}\end{pmatrix}A_{0}(Ae^{iq_{c}x}+A^{\ast}e^{-iq_{c}x})\\ &+\boldsymbol{\rho}_{0}\partial_{T_{1}}A_{0}+\boldsymbol{\rho}\left(\partial_{T_{1}}Ae^{iq_{c}x}+\partial_{T_{1}}A^{\ast}e^{-iq_{c}x}\right)\\ =&-\frac{2}{\bar{u}_{1}}q_{c}^{2}\begin{pmatrix}1\\ 1\end{pmatrix}(A^{2}e^{2iq_{c}x}+A^{\ast 2}e^{-2iq_{c}x})+q_{c}^{2}\gamma_{c}\hat{K}(q_{c})\begin{pmatrix}\rho_{01}\rho_{2}\\ \rho_{02}\end{pmatrix}A_{0}(Ae^{iq_{c}x}+A^{\ast}e^{-iq_{c}x})\\ &+\boldsymbol{\rho}_{0}\partial_{T_{1}}A_{0}+\boldsymbol{\rho}\left(\partial_{T_{1}}Ae^{iq_{c}x}+\partial_{T_{1}}A^{\ast}e^{-iq_{c}x}\right),\end{split} \tag{53}\]
where the second equality uses Equation (46), the third equality is true because, by Equation (50), the term on the second line is equal to zero, and the fourth equality uses the definition of \(\rho\) (Equation (51)).
By the Fredholm Alternative Theorem, Equation (52) admits a solution if and only if for any \(\mathbf{a}\in L^{2}(-L/2,L/2)\) such that
\[\mathbf{a}\in\mathrm{Ker}\{(\partial_{xx}\mathcal{L}^{\gamma_{c}})^{T}\}= \mathrm{Ker}\left\{\partial_{xx}\begin{pmatrix}1&\gamma_{c}\bar{u}_{2}K\ast\\ \gamma_{c}\bar{u}_{1}K\ast&1\end{pmatrix}\right\}, \tag{54}\]
the equality \(\langle\mathbf{F},\mathbf{a}\rangle=0\) is satisfied, where \(\langle\cdot,\cdot\rangle\) denotes the scalar product in \(L^{2}(-L/2,L/2)\).
Notice that any \(\mathbf{a}\neq\mathbf{0}\) satisfying the condition in (54) is a constant multiple of
\[\mathbf{a}=\begin{pmatrix}1\\ a_{2}\end{pmatrix}(e^{iq_{c}x}+e^{-iq_{c}x}),\text{ with }a_{2}:=-\frac{1}{\gamma_{c} \bar{u}_{2}\hat{K}(q_{c})}. \tag{55}\]
Therefore Equation (52) only has a solution when \(\rho_{01}=\rho_{02}=0\) and \(\partial_{T_{1}}A=0\), that is the amplitude \(A\) does not depend on \(T_{1}\). From now on, we will denote \(T_{2}\) by \(T\) for simplicity and write \(A(X,T)\) instead of \(A(X,T_{2})\).
Therefore, the linear problem in Equation (52) reduces to
\[\partial_{xx}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{2}]=-\frac{2}{\bar{u}_{1}}q _{c}^{2}\begin{pmatrix}1\\ 1\end{pmatrix}(A^{2}(X,T)e^{2iq_{c}x}+A^{*2}(X,T)e^{-2iq_{c}x}). \tag{56}\]
Finally, by Equation (56) it follows that the function \(\mathbf{w}_{2}\), having the form as in (42), is given by
\[\mathbf{w}_{2}=\boldsymbol{\psi}_{0}B_{0}(X,T)+\boldsymbol{\psi}(A^{2}(X,T)e ^{2iq_{c}x}+A^{*2}(X,T)e^{-2iq_{c}x}), \tag{57}\]
where \(B_{0}(X,T)\) is a real function and
\[\boldsymbol{\psi}_{0}=\begin{pmatrix}\psi_{01}\\ \psi_{02}\end{pmatrix},\qquad\boldsymbol{\psi}=\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix} \tag{58}\]
are constant vectors. Notice that \(\partial_{xx}\mathcal{L}^{\gamma_{c}}[\boldsymbol{\psi}_{0}B_{0}(X,T)]=0\), for any \(\boldsymbol{\psi}_{0}\) and \(B_{0}(X,T)\). Substituting Equation (57) into Equation (56) and solving for \(\boldsymbol{\psi}\) we obtain
\[\begin{split}\psi_{1}&=\frac{1}{2\bar{u}_{1}}\frac{1- \gamma_{c}\bar{u}_{1}\hat{K}(2q_{c})}{1-\gamma_{c}^{2}\bar{u}_{1}\bar{u}_{2} \hat{K}^{2}(2q_{c})},\\ \psi_{2}&=\frac{1}{2\bar{u}_{1}}\frac{1-\gamma_{c}\bar{u}_{2} \hat{K}(2q_{c})}{1-\gamma_{c}^{2}\bar{u}_{1}\bar{u}_{2}\hat{K}^{2}(2q_{c})}, \end{split} \tag{59}\]
whilst \(A(X,T)\), \(B_{0}(X,T)\), and \(\boldsymbol{\psi}_{0}\) remain unknown.
At order \(\varepsilon^{3}\), we find the following problem
\[\partial_{xx}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{3}]=\mathbf{G}, \tag{60}\]
where
\[\mathbf{G}= \,\partial_{T}\mathbf{w}_{1}-2\partial_{xX}\mathcal{L}^{\gamma_{c}}[ \mathbf{w}_{2}]-\partial_{XX}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{1}]-\begin{pmatrix} 0&\gamma^{(2)}\bar{u}_{1}K*\\ \gamma^{(2)}\bar{u}_{2}K*&0\end{pmatrix}\partial_{xx}\mathbf{w}_{1} \tag{61}\] \[-\partial_{x}\mathcal{Q}^{\gamma_{c}}[\mathbf{w}_{1},\partial_{x }(K*\mathbf{w}_{2})]-\partial_{x}\mathcal{Q}^{\gamma_{c}}[\mathbf{w}_{2}, \partial_{x}(K*\mathbf{w}_{1})]\] \[-\partial_{x}\mathcal{Q}^{\gamma_{c}}[\mathbf{w}_{1},\partial_{X }(K*\mathbf{w}_{1})]-\partial_{X}\mathcal{Q}^{\gamma_{c}}[\mathbf{w}_{1}, \partial_{x}(K*\mathbf{w}_{1})]\] \[= \,(A_{T}e^{iq_{c}x}+A_{T}^{*}e^{-iq_{c}x})\boldsymbol{\rho}+8iq_{ c}\begin{pmatrix}1&\gamma_{c}\bar{u}_{1}\hat{K}(2q_{c})\\ \gamma_{c}\bar{u}_{2}\hat{K}(2q_{c})&1\end{pmatrix}\boldsymbol{\psi}(AA_{X}e^ {2iq_{c}x}-A^{*}A_{X}^{*}e^{-2iq_{c}x})\] \[-\begin{pmatrix}1&\gamma_{c}\bar{u}_{1}\hat{K}(q_{c})\\ \gamma_{c}\bar{u}_{2}\hat{K}(q_{c})&1\end{pmatrix}\boldsymbol{\rho}(A_{XX}e^{ iq_{c}x}+A_{XX}^{*}e^{-iq_{c}x})\] \[+q_{c}^{2}\gamma_{c}\left(2\hat{K}(2q_{c})\begin{pmatrix}\psi_{2 }\\ \psi_{1}\rho_{2}\end{pmatrix}-\hat{K}(q_{c})\begin{pmatrix}\psi_{1}\rho_{2}\\ \psi_{2}\end{pmatrix}\right)|A|^{2}(Ae^{iq_{c}x}+A^{*}e^{-iq_{c}x})\] \[+q_{c}^{2}\gamma_{c}\hat{K}(q_{c})\begin{pmatrix}\psi_{01}\rho_{2} \\ \psi_{02}\end{pmatrix}(Ae^{iq_{c}x}+A^{*}e^{-iq_{c}x})B_{0}\] \[+3q_{c}^{2}\gamma_{c}\left(2\hat{K}(2q_{c})\begin{pmatrix}\psi_{ 2}\\ \psi_{1}\rho_{2}\end{pmatrix}+\hat{K}(q_{c})\begin{pmatrix}\psi_{1}\rho_{2}\\ \psi_{2}\end{pmatrix}\right)(A^{3}e^{3iq_{c}x}+A^{*3}e^{-3iq_{c}x})\] \[-4iq_{c}\hat{K}(q_{c})\rho_{2}\begin{pmatrix}1\\ 1\end{pmatrix}(AA_{X}e^{2iq_{c}x}-A^{*}A_{X}^{*}e^{-2iq_{c}x}).\]
By Equation (50), it follows that the third term of the second equality of Equation (61) is the null vector. In order to simplify the notation, we rewrite Equation (61)
as:
\[\begin{split}\mathbf{G}=&(A_{T}\boldsymbol{\rho}+A \mathbf{G}_{1}+AB_{0}\mathbf{G}_{1}^{(2)}+|A|^{2}A\mathbf{G}_{1}^{(3)})e^{iq_{c} x}+\mathbf{G}_{2}(A^{2})_{X}e^{2iq_{c}x}+\mathbf{G}_{3}A^{3}e^{3iq_{c}x}\\ &+(A_{T}^{*}\boldsymbol{\rho}+A^{*}\mathbf{G}_{1}+A^{*}B_{0} \mathbf{G}_{1}^{(2)}+|A^{*}|^{2}A^{*}\mathbf{G}_{1}^{(3)})e^{-iq_{c}x}+\mathbf{ G}_{2}(A^{*^{2}})_{X}e^{-2iq_{c}x}+\mathbf{G}_{3}A^{*^{3}}e^{-3iq_{c}x}.\end{split} \tag{62}\]
The linear problem in Equation (60) admits a solution if and only if the Fredholm condition \(\langle\mathbf{G},\mathbf{a}\rangle=0\) is satisfied, where \(\mathbf{a}\) is defined in Equation (54). Note that the terms \(\mathbf{G}_{2}(A^{2})_{X}e^{2iq_{c}x}+\mathbf{G}_{2}(A^{*^{2}})_{X}e^{-2iq_{c}x}\) and \(\mathbf{G}_{3}A^{3}e^{3iq_{c}x}+\mathbf{G}_{3}A^{*^{3}}e^{-3iq_{c}x}\) are orthogonal to \(\mathbf{a}\). Therefore, the Fredholm condition \(\langle\mathbf{G},\mathbf{a}\rangle=0\) for Equation (60) gives the following amplitude equation
\[A_{T}=\sigma A-\Lambda|A|^{2}A+\delta AB_{0}, \tag{63}\]
where
\[\begin{split}\sigma&=-\frac{\langle\mathbf{G}_{1}, \mathbf{a}\rangle}{\langle\boldsymbol{\rho},\mathbf{a}\rangle}=q_{c}^{2}\frac{ \gamma^{(2)}}{\gamma_{c}}\\ \Lambda&=\frac{\langle\mathbf{G}_{1}^{(3)},\mathbf{ a}\rangle}{\langle\boldsymbol{\rho},\mathbf{a}\rangle}=\frac{1}{2}q_{c}^{2}\gamma_{c}(2 \hat{K}(2q_{c})(\psi_{1}+\psi_{2})-\hat{K}(q_{c})(\psi_{1}\rho_{2}+\psi_{2}a_{ 2}))\\ \delta&=-\frac{\langle\mathbf{G}_{1}^{(2)},\mathbf{ a}\rangle}{\langle\boldsymbol{\rho},\mathbf{a}\rangle}=\frac{1}{2}q_{c}^{2}\left( \frac{\psi_{01}}{\bar{u}_{1}}+\frac{\psi_{02}}{\bar{u}_{2}}\right).\end{split} \tag{64}\]
At order \(\varepsilon^{4}\), we have the following problem
\[\begin{split}\partial_{xx}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{4}]=&\,\partial_{T}\mathbf{w}_{2}-2\partial_{xX}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{3}]-\partial_{XX}\mathcal{L}^{\gamma_{c}}[\mathbf{w}_{2}]\\ &-\gamma^{(2)}\hat{K}(2q_{c})\begin{pmatrix}0&\bar{u}_{1}\\ \bar{u}_{2}&0\end{pmatrix}\partial_{xx}\mathbf{w}_{2}-2\gamma^{(2)}\hat{K}(q_{c})\begin{pmatrix}0&\bar{u}_{1}\\ \bar{u}_{2}&0\end{pmatrix}\partial_{xX}\mathbf{w}_{1}\\ &-\partial_{x}\mathcal{Q}^{\gamma_{c}}[\mathbf{w}_{1},\partial_{x}(K*\mathbf{w}_{3})]-\partial_{x}\mathcal{Q}^{\gamma_{c}}[\mathbf{w}_{3},\partial_{x}(K*\mathbf{w}_{1})]-\partial_{x}\mathcal{Q}^{\gamma_{c}}[\mathbf{w}_{2},\partial_{x}(K*\mathbf{w}_{2})]\\ &-\partial_{x}\mathcal{Q}^{\gamma_{c}}[\mathbf{w}_{1},\partial_{X}(K*\mathbf{w}_{2})]-\partial_{x}\mathcal{Q}^{\gamma_{c}}[\mathbf{w}_{2},\partial_{X}(K*\mathbf{w}_{1})]\\ &-\partial_{X}\mathcal{Q}^{\gamma_{c}}[\mathbf{w}_{1},\partial_{X}(K*\mathbf{w}_{1})]-\partial_{x}\mathcal{Q}^{\gamma^{(2)}}[\mathbf{w}_{1},\partial_{x}(K*\mathbf{w}_{1})]\\ =&\begin{pmatrix}\psi_{01}\\ \psi_{02}\end{pmatrix}(B_{0})_{T}-\begin{pmatrix}\psi_{01}+\gamma_{c}\bar{u}_{1}\hat{K}(0)\psi_{02}\\ \gamma_{c}\bar{u}_{2}\hat{K}(0)\psi_{01}+\psi_{02}\end{pmatrix}(B_{0})_{XX}+\frac{1}{\bar{u}_{1}}\begin{pmatrix}1\\ 1\end{pmatrix}(|A|^{2})_{XX}\\ &+\sum_{h=1}^{3}\mathbf{r}_{h}e^{ihq_{c}x}+\text{c.c.}\end{split} \tag{65}\]
Since the function \(\mathbf{w}_{4}\) is as in (42), in Equation (65) all terms independent of \(x\) must be equal to zero, that is
\[\begin{pmatrix}\psi_{01}\\ \psi_{02}\end{pmatrix}(B_{0})_{T}=\begin{pmatrix}\psi_{01}+\gamma_{c}\bar{u}_{1 }\hat{K}(0)\psi_{02}\\ \gamma_{c}\bar{u}_{2}\hat{K}(0)\psi_{01}+\psi_{02}\end{pmatrix}(B_{0})_{XX}- \frac{1}{\bar{u}_{1}}\begin{pmatrix}1\\ 1\end{pmatrix}(|A|^{2})_{XX}. \tag{66}\]
When \(\bar{u}_{1}=\bar{u}_{2}\), we can choose \(\psi_{01}=\psi_{02}\) and, by setting \(B:=\psi_{01}B_{0}\), we obtain the following amplitude equations
\[A_{T} =\sigma A-\Lambda|A|^{2}A+\nu AB, \tag{67}\] \[B_{T} =\mu B_{XX}-\eta(|A|^{2})_{XX},\]
where
\[\nu=\frac{q_{c}^{2}}{\bar{u}_{1}},\qquad\mu=1+\gamma_{c}\bar{u}_{1}\hat{K}(0),\qquad\eta=\frac{1}{\bar{u}_{1}}, \tag{68}\]
and \(\sigma\) and \(\Lambda\) are given in Equation (64). Notice that \(\nu=\delta/\psi_{01}\) (see Equation (64)), with \(\psi_{01}=\psi_{02}\) and \(\bar{u}_{1}=\bar{u}_{2}\). On the other hand, if \(\bar{u}_{1}\neq\bar{u}_{2}\), Equation (66) is satisfied when \(\psi_{01}=\psi_{02}=0\) and \((|A|^{2})_{XX}=0\).
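The amplitude system just derived is also straightforward to explore numerically. The following sketch integrates Equations (34) with an explicit Euler scheme, assuming periodic boundary conditions in the slow variable \(X\); the coefficient values, domain size and time step are illustrative choices and are not computed from any specific kernel.

```python
import numpy as np

# illustrative coefficients, chosen so that Gamma = Lam*mu/(eta*nu) - 1 = 1 > 0
sigma, Lam, nu, mu, eta = 1.0, 1.0, 0.5, 1.0, 1.0

LX, NX = 40.0, 256                                   # periodic grid in the slow variable X
dX = LX/NX
rng = np.random.default_rng(0)
A = 0.01*(rng.standard_normal(NX) + 1j*rng.standard_normal(NX))
B = np.zeros(NX)

def lap(f):                                          # second difference, periodic wrap-around
    return (np.roll(f, -1) - 2*f + np.roll(f, 1))/dX**2

dt, T = 0.2*dX**2/mu, 50.0                           # step below the diffusive stability limit
for _ in range(int(T/dt)):
    A2 = np.abs(A)**2
    A = A + dt*(sigma*A - Lam*A2*A + nu*A*B)         # first equation of (34)
    B = B + dt*(mu*lap(B) - eta*lap(A2))             # second equation of (34)

print("mean |A|^2 =", (np.abs(A)**2).mean(), " vs  sigma/Lambda =", sigma/Lam)
print("spatial variation of B =", B.max() - B.min())
```

With these coefficients the modulus of \(A\) approaches \(\sqrt{\sigma/\Lambda}\) while \(B\) relaxes towards zero; decreasing \(\Gamma=\Lambda\mu/(\eta\nu)-1\) below zero (for instance by increasing \(\nu\)) gives a way to probe the large-scale instability analysed in the next subsection.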
### Small amplitude solutions
The stationary solutions of the amplitude equations in (33) and (34) correspond to steady states of system (16). Notice that if \(B=0\), Equation (34) reduces to Equation (33), which is a Stuart-Landau equation. If \(\Lambda>0\), system (16) undergoes a supercritical bifurcation, while if \(\Lambda<0\) the system undergoes a subcritical bifurcation ([34]).
In the supercritical regime, as the homogeneous steady state becomes unstable, stationary small amplitude patterns emerge and correspond to solutions of Equation (33) with \(A=a_{0}e^{i\phi}\), where \(\phi\in\mathbb{R}\) is the phase of the pattern and the amplitude \(a_{0}\) is real and must satisfy \(a_{0}^{2}=\sigma/\Lambda\). These small amplitude solutions are always stable ([34]).
Analogously, stationary small amplitude patterns correspond to solutions of Equation (34) with \(A=a_{0}e^{i\phi}\) and \(B=0\), where \(\phi\in\mathbb{R}\) and \(a_{0}^{2}=\sigma/\Lambda\). However, in this case the stationary patterns might be destabilized by large-scale modes ([12]). In the following Proposition we will derive a stability condition for these stationary solutions.
**Proposition 3.1**.: _Suppose \(\bar{u}_{1}=\bar{u}_{2}\). If \(\sigma>0\) and \(\Lambda>0\) then small amplitude patterns to System (16) exist. These solutions are unstable if the following condition holds_
\[\Gamma:=\frac{\Lambda\mu}{\eta\nu}-1<0, \tag{69}\]
_where \(\sigma\), \(\Lambda\), \(\mu\), \(\eta\) and \(\nu\) are given in Theorem 3.1._
**Proof.** By Theorem 3.1, if \(\bar{u}_{1}=\bar{u}_{2}\), the amplitude of the stationary solutions to System (16) is governed by Equation (34). When \(\sigma>0\) and \(\Lambda>0\), stationary small amplitude patterns exist and correspond to solutions of (34) with \(A=a_{0}e^{i\phi}\) and \(B=0\), where \(\phi\in\mathbb{R}\) and \(a_{0}^{2}=\sigma/\Lambda\). To study the stability of this stationary solution, we consider the following perturbation
\[A(X,T)=(a_{0}+a(X,T))e^{i\phi},\qquad B(X,T)=b(X,T). \tag{70}\]
We substitute the perturbation (70) in Equations (34), and by linearizing in \(a\) and \(b\) we obtain:
\[\begin{split} a_{T}=&-\sigma(a+a^{*})+\nu a_{0}b, \\ b_{T}=&\mu b_{XX}-\eta a_{0}(a_{XX}+a_{XX}^{*}). \end{split} \tag{71}\]
We consider a perturbation of the form
\[a(X,T)=e^{\bar{\lambda}T}(Ve^{iQX}+W^{*}e^{-iQX})\text{ and }b(X,T)=e^{\bar{ \lambda}T}(Ue^{iQX}+U^{*}e^{-iQX}), \tag{72}\]
where \(\bar{\lambda}\) is the growth rate of the perturbation, \(U,V,W\in\mathbb{C}\) and \(Q\geq 0\) denotes a spatial mode. Notice that \(a\) is a complex perturbation, while \(b\) is real. Upon substituting Equations (72) in Equations (71), we obtain the following eigenvalue problem
\[\bar{\lambda}\left(\begin{array}{c}V\\ W\\ U\end{array}\right)=\begin{pmatrix}-\sigma&-\sigma&\nu a_{0}\\ -\sigma&-\sigma&\nu a_{0}\\ \eta a_{0}Q^{2}&\eta a_{0}Q^{2}&-\mu Q^{2}\end{pmatrix}\begin{pmatrix}V\\ W\\ U\end{pmatrix}, \tag{73}\]
from which we recover the growth rates
\[\bar{\lambda}_{0}(Q)=0,\qquad\bar{\lambda}^{\pm}(Q)=\frac{1}{2}\left(-\mu Q^{ 2}-2\sigma\pm\sqrt{\mu^{2}Q^{4}+Q^{2}\left(8a_{0}^{2}\eta\nu-4\mu\sigma\right) +4\sigma^{2}}\right). \tag{74}\]
Recalling that \(a_{0}^{2}=\sigma/\Lambda\), a simple calculation shows that \(\bar{\lambda}^{+}(Q)>0\) if \(Q\neq 0\) and \(\Gamma=\frac{\Lambda\mu}{\eta\nu}-1<0\). \(\Box\)
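The criterion of Proposition 3.1 can also be checked directly from the growth rate in Equation (74). The short sketch below evaluates \(\bar{\lambda}^{+}(Q)\) over a range of modes for two purely illustrative sets of coefficients and compares the outcome with the sign of \(\Gamma\).

```python
import numpy as np

def lam_plus(Q, sigma, Lam, mu, eta, nu):
    """Growth rate lambda^+(Q) of Eq. (74) for the pattern with a0^2 = sigma/Lam."""
    a02 = sigma/Lam
    disc = mu**2*Q**4 + Q**2*(8*a02*eta*nu - 4*mu*sigma) + 4*sigma**2
    return 0.5*(-mu*Q**2 - 2*sigma + np.sqrt(disc))

Q = np.linspace(1e-3, 5.0, 500)
sigma, Lam, mu, eta = 1.0, 1.0, 1.0, 1.0            # illustrative values
for nu in (2.0, 0.5):                                # Gamma < 0 and Gamma > 0, respectively
    Gamma = Lam*mu/(eta*nu) - 1.0
    unstable = lam_plus(Q, sigma, Lam, mu, eta, nu).max() > 0
    print(f"nu = {nu}:  Gamma = {Gamma:+.2f}  ->  small amplitude pattern "
          f"{'unstable' if unstable else 'stable'}")
```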
The analysis so far is valid for any non-negative, symmetric kernel \(K\) satisfying Equation (4). In the following section, we adopt the top-hat distribution and use the results obtained so far to recover the instability thresholds and to predict the shape of the emerging patterns.
For the reader's convenience, we conclude this section with Table 1, in which we have collected the main parameters involved in the study of stability and bifurcations, together with a brief description of their significance and properties.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{**Parameter List and Description**} \\ \hline
**Parameter** & **Description** & **Properties** \\ \hline \(\gamma:=\gamma_{12}=\gamma_{21}\) & Inter-species interaction parameter & \(\gamma>0:\) Mutual avoidance \\ & & \(\gamma<0:\) Mutual attraction \\ \hline \(L:=l/\alpha\) & Length of the rescaled domain & \(L>2\): ratio between the length \\ & & of the domain \(l\) and the sensing range \(\alpha\) \\ \hline \(\sigma\) (Eq. (64)) & Linear Stuart-Landau coefficient & \(\sigma<0\): \(\mathbf{\bar{u}}\) (Eq. (18)) stable \\ & & \(\sigma>0\): \(\mathbf{\bar{u}}\) (Eq. (18)) unstable \\ \hline \(\Lambda\) (Eq. (64)) & Nonlinear Stuart-Landau coefficient & \(\Lambda<0\): subcritical bifurcation \\ & & \(\Lambda>0\): supercritical bifurcation \\ \hline \(\Gamma\) (Eq.(69)) & Stability coefficient & \(\Gamma<0\): unstable supercritical bifurcation \\ & computed for \(\Lambda>0\) & \(\Gamma>0\): stable supercritical bifurcation \\ \hline \end{tabular}
\end{table}
Table 1: List and description of main parameters involved in the study of stability and bifurcations.
## 4 The top-hat distribution
In this section we analyze System (2) with
\[K(x)=K_{\alpha}(x):=\begin{cases}\frac{1}{2\alpha},&x\in[-\alpha,\alpha]\\ 0,&\text{otherwise}\end{cases}. \tag{75}\]
The parameter \(\alpha\), modelling the sensing radius of an organism, is such that \(\alpha<l/2\), where \(l\) is the length of the domain. As in Section 2, we will work in dimensionless coordinates, so that our study system is given by Equations (16) and the dimensionless averaging kernel is
\[K_{1}(x)=\begin{cases}\frac{1}{2},&x\in[-1,1],\\ 0,&\text{otherwise}.\end{cases} \tag{76}\]
### Linear stability analysis
Linear stability analysis of System (16) around the equilibrium point \(\mathbf{\bar{u}}=(\bar{u}_{1},\bar{u}_{2})\) (Equation (18)) gives the following eigenvalues (see Equation (23))
\[\lambda^{\pm}(q):=-q^{2}(1\pm\gamma|\hat{K}_{1}(q)|\sqrt{\bar{u}_{1}\bar{u}_{2 }}), \tag{77}\]
where
\[\hat{K}_{1}(q)=\int_{-1}^{1}K_{1}(x)e^{-iqx}\mathrm{d}x=\begin{cases}\frac{ \sin(q)}{q},&\text{if }q\neq 0\\ 1,&\text{if }q=0\end{cases}. \tag{78}\]
Recall that the admissible wavenumbers are \(q_{m}=2\pi m/L\), with \(m\in\mathbb{N}\).
Figure 1 shows the graphs of \(\lambda^{\pm}(q)\) (Equation (77)) for different values of \(\gamma\). Observe that the first wavenumber that is destabilized as \(\gamma\) is varied is
\[q_{c}=q_{1}=\frac{2\pi}{L}. \tag{79}\]
Since \(L>2\), we have \(\hat{K}_{1}(q_{c})>0\), so the corresponding bifurcation thresholds, obtained by solving \(\lambda^{\pm}(q_{c})=0\), are
\[\gamma_{c}^{\pm}=\gamma_{1}^{\pm}:=\pm\frac{1}{\hat{K}_{1}(q_{c})\sqrt{\bar{u}_{ 1}\bar{u}_{2}}}. \tag{80}\]
Since the equilibrium \(\bar{\mathbf{u}}\) becomes unstable as \(\lambda^{\pm}(q_{c})>0\), the system undergoes an instability when
\[\gamma<\gamma_{c}^{-}=-\frac{1}{\hat{K}_{1}(q_{c})\sqrt{\bar{u}_{1}\bar{u}_{2} }}\quad\text{ or }\quad\gamma>\gamma_{c}^{+}=\frac{1}{\hat{K}_{1}(q_{c})\sqrt{\bar{u}_{1}\bar{u }_{2}}}. \tag{81}\]
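For the top-hat kernel these thresholds reduce to a one-line computation. The sketch below evaluates Equation (80) for the values \(\bar{u}_{1}=0.1\), \(\bar{u}_{2}=10\) used later in Figures 5 and 6; up to rounding, the output agrees with the thresholds quoted in those captions.

```python
import numpy as np

def gamma_c_tophat(L, u1bar, u2bar):
    """Bifurcation thresholds of Eq. (80) for the top-hat kernel K_1."""
    qc = 2*np.pi/L                      # first destabilized wavenumber, Eq. (79)
    k = np.sinc(qc/np.pi)               # hat{K}_1(q_c) = sin(q_c)/q_c, Eq. (78)
    g = 1.0/(k*np.sqrt(u1bar*u2bar))
    return g, -g

for L in (2.7, 5.0, 15.0):              # the domain lengths used in Figure 5
    gp, gm = gamma_c_tophat(L, 0.1, 10.0)
    print(f"L = {L:5.1f}:  gamma_c^+ = {gp:.5f},  gamma_c^- = {gm:.5f}")
```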
Figure 1: Graphs of the growth rates \(\lambda^{\pm}(q)\) (Equation (77)), in the mutual avoidance (Panel (a)) and in the mutual attraction (Panel (b)) regime. Panel (a) shows the graphs of \(\lambda^{\pm}(q)\) for increasing values of \(\gamma>0\): \(0<\gamma<\gamma_{1}^{+}\) (left); \(\gamma=\gamma_{1}^{+}\) (center); \(\gamma>\gamma_{1}^{+}\) (right). Panel (b) shows the graphs of \(\lambda^{\pm}(q)\) for decreasing values of \(\gamma<0\): \(\gamma_{1}^{-}<\gamma<0\) (left); \(\gamma=\gamma_{1}^{-}\) (center); \(\gamma<\gamma_{1}^{-}\) (right). As the magnitude of \(\gamma\) increases, the first wavenumber destabilized is \(q_{1}\) (Equation (79))
### Analysis of the amplitude equations and bifurcations
By Theorem 3.1, when \(\varepsilon=\sqrt{|\frac{\gamma-\gamma_{c}}{\gamma_{c}}|}\ll 1\) (where \(\gamma_{c}=\gamma_{c}^{\pm}\)), the solutions to System (16) have the following form
\[\begin{split} u_{1}&=\bar{u}_{1}+\varepsilon\rho_{1} (Ae^{iq_{c}x}+A^{*}e^{-iq_{c}x})+\varepsilon^{2}(\psi_{1}(A^{2}e^{2iq_{c}x}+A^ {*2}e^{-2iq_{c}x})+B)+O(\varepsilon^{3}),\\ u_{2}&=\bar{u}_{2}+\varepsilon\rho_{2}(Ae^{iq_{c}x }+A^{*}e^{-iq_{c}x})+\varepsilon^{2}(\psi_{2}(A^{2}e^{2iq_{c}x}+A^{*2}e^{-2iq_ {c}x})+B)+O(\varepsilon^{3}).\end{split} \tag{82}\]
Recall from (32) that the constants \(\rho_{1}\), \(\rho_{2}\) are defined as
\[\rho_{1}=1,\qquad\rho_{2}=-\frac{1}{\gamma_{c}\bar{u}_{1}\hat{K}_{1}(q_{c})}. \tag{83}\]
Note that in the mutual avoidance case (\(\gamma>0\)), \(\gamma_{c}=\gamma_{c}^{+}>0\) and then \(\rho_{2}<0\), which implies that \(u_{1}\) and \(u_{2}\) show a spatial oscillation that is out of phase. On the other hand, in the mutual attraction regime (\(\gamma<0\)), \(\gamma_{c}=\gamma_{c}^{-}<0\) and then \(\rho_{2}>0\), which means that the spatial patterns of \(u_{1}\) and \(u_{2}\) are in phase.
Theorem 3.1 also says that \(A(X,T)\) and \(B(X,T)\) are governed by the following equations
1. If \(\bar{u}_{1}\neq\bar{u}_{2}\), \[\begin{split} A_{T}&=\sigma A-\Lambda|A|^{2}A,\\ B&=0,\end{split}\] (84)
2. If \(\bar{u}_{1}=\bar{u}_{2}\), \[\begin{split} A_{T}&=\sigma A-\Lambda|A|^{2}A+ \nu AB,\\ B_{T}&=\mu B_{XX}-\eta(|A|^{2})_{XX},\end{split}\] (85)
where the coefficients \(\sigma\), \(\Lambda\), \(\nu\), \(\mu\) and \(\eta\) are defined in Equation (35).
As discussed in Section 3, the sign of \(\Lambda\) determines the type of bifurcation: for \(\Lambda>0\) the system exhibits a supercritical bifurcation, while for \(\Lambda<0\) the system undergoes a subcritical bifurcation (see also Table 1). The sign of \(\Lambda\) depends on \(\bar{u}_{1}\), \(\bar{u}_{2}\) and on the length of the domain, \(L\) (see the definition of \(\Lambda\) in Equation (35)).
Figure 2 shows the graphs of \(\Lambda\) versus \(L\), in the mutual avoidance (\(\gamma>0\)) and in the mutual attraction (\(\gamma<0\)) regime with \(K=K_{1}\), for both \(\bar{u}_{1}=\bar{u}_{2}\) and \(\bar{u}_{1}\neq\bar{u}_{2}\).
For \(\gamma>0\), if \(\bar{u}_{1}=\bar{u}_{2}\) then the qualitative behaviour of \(\Lambda(L)\) remains unchanged as \(\bar{u}_{1}=\bar{u}_{2}\) are varied. In fact, Figure 2(a) shows that for different values of \(\bar{u}_{1}=\bar{u}_{2}\), \(\Lambda(L)\) is negative (subcritical bifurcation) for \(2<L<3\), while it is positive (supercritical bifurcation) for \(L>3\). On the other hand, if \(\bar{u}_{1}\neq\bar{u}_{2}\) (Figure 2(b)), \(\Lambda(L)\) is negative for \(2<L<3\), becomes positive for \(L>3\), and then \(\Lambda(L)\) becomes negative again for sufficiently large values of \(L\) depending on the ratio \(\bar{u}_{1}/\bar{u}_{2}\).
For \(\gamma<0\), if \(\bar{u}_{1}=\bar{u}_{2}\), \(\Lambda(L)\) is positive for \(2<L<6\) and it becomes negative for \(L>6\) (see Figure 2(c)). The qualitative behaviour of \(\Lambda(L)\) does not change as \(\bar{u}_{1}=\bar{u}_{2}\) are varied. However, if \(\bar{u}_{1}\neq\bar{u}_{2}\) (Figure 2(d)) we observe the emergence of a subcritical regime for sufficiently small values of \(L\) depending on the ratio \(\bar{u}_{1}/\bar{u}_{2}\).
As shown in Section 3, if \(\Lambda(L)\) is positive then small amplitude patterns emerge from the homogeneous steady state beyond the bifurcation threshold. These solutions are always stable when \(\bar{u}_{1}\neq\bar{u}_{2}\) but can be unstable when \(\bar{u}_{1}=\bar{u}_{2}\).
Proposition 3.1 shows that when \(\bar{u}_{1}=\bar{u}_{2}\) the stability of small amplitude patterns is determined by the coefficients of the amplitude equations in (85) and that, in particular, these solutions are unstable if \(\Gamma=\frac{\Lambda\mu}{\eta\nu}-1<0\). By using the definitions of \(\Lambda\), \(\nu\), \(\mu\) and \(\eta\) in Equation (35), we recover
\[\Gamma=\frac{(1+\hat{K}_{1}(q_{1}))(2\hat{K}_{1}(2q_{1})+\hat{K}_{1}(q_{1}))}{ 2\hat{K}_{1}(q_{1})(\hat{K}_{1}(2q_{1})+\hat{K}_{1}(q_{1}))}-1. \tag{86}\]
Note that \(\Gamma\) does not depend on \(\bar{u}_{1}\). Indeed, since \(q_{1}=2\pi/L\), it follows that \(\Gamma\) depends only on \(L\). In Figure 3 we show the graphs of \(\Gamma\) versus \(L\) for \(\gamma>0\) in (a), and \(\gamma<0\) in (b). We also recall that we are analyzing the sign of \(\Gamma\) in supercritical regimes (\(\Lambda>0\)); for this reason we plot the curve \(\Gamma(L)\) only in those intervals in which \(\Lambda>0\). The graph in Figure 3(a) shows that in the mutual avoidance case
(\(\gamma>0\)), small amplitude patterns exist and are unstable for \(3<L<3.5\), and that they become stable as \(L>3.5\). Figure 3(b) shows that in the mutual attraction scenario (\(\gamma<0\)), \(\Gamma(L)\) is always negative and therefore small amplitude patterns are always unstable. These results are summarized in Figure 4.
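The classification summarised in Figure 4 can be reproduced directly from the formulas of Theorem 3.1. The Python sketch below evaluates \(\gamma_{c}\), \(\Lambda\) and (when \(\bar{u}_{1}=\bar{u}_{2}\)) \(\Gamma\) for the top-hat kernel; the function names are ours, and the three domain lengths are chosen to fall in the three regimes of Figure 4(a). For \(L=3.1\) and \(L=4\) the computed \(\gamma_{c}\) agrees, up to rounding, with the values quoted in the caption of Figure 7.

```python
import numpy as np

def Khat1(q):
    """Fourier transform of the dimensionless top-hat kernel, Eq. (78)."""
    return np.sinc(np.asarray(q, dtype=float)/np.pi)

def bifurcation_coefficients(L, u1bar, u2bar, branch=+1):
    """gamma_c (Eq. 28), Lambda (Eq. 35) and, when u1bar = u2bar, Gamma (Eq. 69)
    for K = K_1; branch = +1 gives gamma_c^+, branch = -1 gives gamma_c^-."""
    qc = 2*np.pi/L                                   # critical wavenumber, Eq. (79)
    k1, k2 = float(Khat1(qc)), float(Khat1(2*qc))
    gc = branch/(k1*np.sqrt(u1bar*u2bar))
    rho2 = -1.0/(gc*u1bar*k1)                        # Eq. (32)
    a2 = -1.0/(gc*u2bar*k1)                          # Eq. (55)
    den = 1.0 - gc**2*u1bar*u2bar*k2**2
    psi1 = 0.5/u1bar*(1.0 - gc*u1bar*k2)/den         # Eq. (32)
    psi2 = 0.5/u1bar*(1.0 - gc*u2bar*k2)/den
    Lam = 0.5*qc**2*gc*(2*k2*(psi1 + psi2) - k1*(psi1*rho2 + psi2*a2))   # Eq. (35)
    Gamma = None
    if np.isclose(u1bar, u2bar):
        nu, mu, eta = qc**2/u1bar, 1.0 + gc*u1bar*float(Khat1(0.0)), 1.0/u1bar
        Gamma = Lam*mu/(eta*nu) - 1.0                # Eq. (69)
    return gc, Lam, Gamma

# mutual avoidance with u1bar = u2bar = 10 (cf. Figures 4(a) and 7)
for L in (2.5, 3.1, 4.0):
    gc, Lam, Gamma = bifurcation_coefficients(L, 10.0, 10.0, branch=+1)
    if Lam < 0:
        kind = "subcritical"
    else:
        kind = "supercritical, " + ("stable" if Gamma > 0 else "unstable")
    print(f"L = {L}:  gamma_c = {gc:.6f},  Lambda = {Lam:+.4f}  ->  {kind}")
```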
In summary, our analysis shows that the nature of the transition and the stability of the bifurcation patterns depend mainly on \(L\). These results can be read and re-interpreted in terms of the parameters of the original system (2), recalling that \(L=l/\alpha\), where \(\alpha\) is the sensing radius and \(l\) is the length of the dimensional spatial domain. Therefore, the qualitative behaviour of the system under study strongly depends on the size of the sensing radius relative to the length of the domain.
Figure 2: Graphs of the nonlinear Stuart-Landau coefficient \(\Lambda\) (Equation (64)) versus the domain length \(L\), in the mutual avoidance (\(\gamma>0\)) and in the mutual attraction (\(\gamma<0\)) regime, with \(\bar{u}_{1}=\bar{u}_{2}\) and \(\bar{u}_{1}\neq\bar{u}_{2}\). Positive values of \(\Lambda\) correspond to supercritical bifurcations, negative values of \(\Lambda\) correspond to subcritical bifurcations
Figure 3: Graphs of the stability coefficient \(\Gamma\) (Equation (69)) versus the domain length \(L\), in the mutual avoidance (a), and in the mutual attraction regime (b). If \(\Lambda>0\) and \(\Gamma<0\), small amplitude patterns exist and are unstable, and if \(\Lambda>0\) and \(\Gamma>0\), small amplitude patterns exist and are stable
### Numerical Simulations
Figure 4: Graphs of the curves for the critical values of the density-dependent advection strength \(\gamma=\gamma_{c}^{+}\) in (a) and \(\gamma=\gamma_{c}^{-}\) in (b) (Equation (28)) versus the domain length \(L\). When the magnitude of \(\gamma\) is small, the homogeneous steady state is linearly stable. As the magnitude of \(\gamma\) increases, the system undergoes a bifurcation and the homogeneous steady state becomes unstable as \(\gamma\) crosses \(\gamma_{c}^{\pm}\). For \(\gamma>0\) (a), when \(L\) is small the system undergoes a subcritical bifurcation. As \(L\) increases, the bifurcation becomes supercritical, and the emerging patterns will be unstable. As \(L\) increases further, the system undergoes a supercritical bifurcation leading to the emergence of stable patterns. For \(\gamma<0\) (b), when \(L\) is small the system undergoes a supercritical bifurcation generating unstable small amplitude patterns. As \(L\) increases, the bifurcation becomes subcritical.

In this Section, we perform a numerical investigation of System (16). To solve System (16) numerically, we use the spectral method and numerical schemes presented in [18]. By employing a continuation technique, we recover numerical bifurcation diagrams which are compared with the bifurcation diagrams obtained via the weakly nonlinear analysis. We show that our weakly nonlinear analysis provides accurate approximations of stable steady-state solutions in supercritical stable regimes, as long as we stay close to the bifurcation threshold. We also analyse those bifurcations that generate unstable small amplitude patterns. In these cases, we numerically detect the existence of stable large amplitude solutions, which are not predicted by the weakly nonlinear analysis, but which were predicted by an energy method in [18].
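A minimal version of such a computation can be set up in a few lines of Python. The sketch below integrates the nonlocal system in the form suggested by Equations (36)–(37), namely \(\partial_{t}u_{1}=\partial_{xx}u_{1}+\gamma\,\partial_{x}\big(u_{1}\,\partial_{x}(K\ast u_{2})\big)\) and symmetrically for \(u_{2}\), with the top-hat kernel, periodic boundary conditions and a simple semi-implicit Fourier pseudo-spectral scheme; the grid size, time step, value of \(\gamma\) and run time are illustrative, and the scheme is deliberately much simpler than the one of [18] used for the figures.

```python
import numpy as np

# illustrative setup: L = 5, top-hat kernel, u1bar = 0.1, u2bar = 10,
# gamma slightly above gamma_c ~ 1.32 (cf. Figure 5(b))
L, N = 5.0, 128
gamma, u1bar, u2bar = 1.5, 0.1, 10.0

q = 2*np.pi*np.fft.fftfreq(N, d=L/N)          # admissible wavenumbers q_m
Khat = np.sinc(q/np.pi)                       # Fourier multiplier of the top-hat kernel

def conv(u):                                  # K * u, evaluated through its multiplier
    return np.fft.ifft(Khat*np.fft.fft(u)).real

def ddx(u):                                   # spectral first derivative
    return np.fft.ifft(1j*q*np.fft.fft(u)).real

def advection(ui, uj):                        # gamma * d/dx ( u_i d/dx (K*u_j) )
    return gamma*ddx(ui*ddx(conv(uj)))

rng = np.random.default_rng(1)
u1 = u1bar*(1 + 1e-2*rng.standard_normal(N))  # small random perturbation of the steady state
u2 = u2bar*(1 + 1e-2*rng.standard_normal(N))

dt, T = 5e-4, 60.0
denom = 1.0 + dt*q**2                         # diffusion treated implicitly in Fourier space
for _ in range(int(T/dt)):
    u1 = np.fft.ifft(np.fft.fft(u1 + dt*advection(u1, u2))/denom).real
    u2 = np.fft.ifft(np.fft.fft(u2 + dt*advection(u2, u1))/denom).real

print("amplitude of the u1 pattern:", 0.5*(u1.max() - u1.min()))
print("mean of u1 (conserved):     ", u1.mean())
```

Treating the diffusion implicitly keeps the admissible step size limited only by the nonlocal advection term, while the conservative form of the update preserves the mean of each density exactly, mirroring the conservation law of the continuous system.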
First, we analyze the scenarios depicted in Figures 2(b) (\(\gamma>0\)) and (d) (\(\gamma<0\)), in which \(\bar{u}_{1}\neq\bar{u}_{2}\). These figures show subcritical bifurcations for sufficiently small values of \(L\), then a shift to a supercritical regime, as \(L\) increases, and again a subcritical regime, as \(L\) increases further. Recall that if \(\bar{u}_{1}\neq\bar{u}_{2}\) then supercritical bifurcations always give rise to stable small amplitude solutions.
Figure 5 shows bifurcation diagrams obtained by fixing \(\bar{u}_{1}=0.1\) and \(\bar{u}_{2}=10\) and by changing \(L\), in the mutual avoidance regime (\(\gamma>0\)). This case corresponds to the scenario shown in Figure 2(b) (center). Dashed and solid lines represent unstable and stable branches, respectively, computed analytically, while the dots are computed numerically. For \(L=2.7\), the weakly nonlinear analysis predicts a subcritical bifurcation, and the numerical simulations confirm this result. In fact, just beyond the instability threshold (\(\gamma>\gamma_{c}\approx 3.20\)), we find stable large amplitude solutions, which persist when we decrease the control parameter \(\gamma\) below the instability threshold (Figure 5(a)). For \(L=5\), the analysis predicts a supercritical bifurcation and, again, the numerical simulations confirm this result. In Figure 5(b) we see, indeed, a good matching between the analytical branch and the numerical solutions, as long as \(\gamma\) is sufficiently close to the bifurcation threshold \(\gamma_{c}\approx 1.32\). Finally, for \(L=15\) the subcritical bifurcation predicted by our analysis is also detected numerically (see Figure 5(c)). Here, we observe bistability between the homogeneous steady state and non-homogeneous solutions below the instability threshold \(\gamma_{c}\approx 1.03\).
Figure 6 shows bifurcation diagrams obtained by fixing \(\bar{u}_{1}=0.1\) and \(\bar{u}_{2}=10\), for three different values of \(L\), in the mutual attraction regime (\(\gamma<0\)). This case corresponds to the scenario shown in Figure 2(d) (center). The numerical simulations, again, confirm the results of the weakly nonlinear analysis: we have detected subcritical transitions for \(L=2.5\) and \(L=10\), and a stable branch bifurcating supercritically for \(L=5\), whose amplitude is well approximated by the weakly nonlinear analysis.
Figure 5: Comparison between analytical and numerical bifurcation diagrams of system (16) with density-dependent advection strength \(\gamma>0\), and nonlocal kernel \(K=K_{1}\) (see Equation (76)), \(\bar{u}_{1}=0.1\) and \(\bar{u}_{2}=10\), for different values of the length of the domain \(L\). These scenarios correspond to Figure 2 (b) (center). Dashed and solid lines represent unstable and stable branches, respectively, which are computed analytically, while the dots are computed numerically. As the length of the domain increases, the system changes its qualitative behaviour. In (a): \(L=2.7\) and the system exhibits a subcritical bifurcation at \(\gamma=\gamma_{c}=3.19933\). In (b), \(L=5\) and at \(\gamma=\gamma_{c}=1.32131\) a branch of stable solutions bifurcates from the homogeneous state. In (c), \(L=15\) and the system exhibits a subcritical bifurcation at \(\gamma=\gamma_{c}=1.02985\).
It remains to analyze the case \(\bar{u}_{1}=\bar{u}_{2}\), corresponding to the scenarios depicted in Figures 2(a) (\(\gamma>0\)) and (c) (\(\gamma<0\)). In this case, three different types of bifurcation are predicted by the analysis: subcritical bifurcations (for \(\Lambda<0\)), unstable supercritical bifurcations (for \(\Lambda>0\) and \(\Gamma<0\)) and stable supercritical bifurcations (for \(\Lambda>0\) and \(\Gamma>0\)) (see Figure 4). In particular, for \(\gamma>0\), system (16) undergoes subcritical bifurcations for \(2<L<3\), unstable supercritical bifurcations for \(3<L<3.5\), and stable supercritical bifurcations for \(L>3.5\) (see Figure 4(a)).
Figure 6: Comparison between analytical and numerical bifurcation diagrams of system (16) with \(\gamma<0\), \(K=K_{1}\) (see Equation (76)), \(\bar{u}_{1}=0.1\) and \(\bar{u}_{2}=10\), for different values of the length of the domain \(L\). These scenarios correspond to Figure 2 (d) (center). Dashed and solid lines represent unstable and stable branches, respectively, which are computed analytically, while the dots are computed numerically. As the length of the domain increases, the system changes its qualitative behaviour. In (a): \(L=2.5\) and the system exhibits a subcritical bifurcation at \(\gamma=\gamma_{c}=-4.2758\). In (b), \(L=5\) and at \(\gamma=\gamma_{c}=-1.32131\) a branch of stable solutions bifurcates from the homogeneous state. In (c), \(L=10\) and the system exhibits a subcritical bifurcation at \(\gamma=\gamma_{c}=-1.06895\).

In Figure 7 we analyze System (16) with \(\gamma>0\) and \(\bar{u}_{1}=\bar{u}_{2}=10\), for \(L=3.1\) in (a), and \(L=4\) in (b). In Figure 7(a) (left) we show the spatio-temporal evolution of a numerical solution whose initial condition is a small perturbation of the weakly nonlinear solution with \(L=3.1\). We observe that the numerical solution moves away from the initial condition and evolves toward a large amplitude pattern. The initial condition and the final stationary state are reported in Figure 7(a) (center). Therefore, when the supercritical branch is unstable, the system supports large amplitude patterns. These solutions exist even below the bifurcation threshold, as shown by the bifurcation diagram in Figure 7(a) (right). These large amplitude solutions are not predicted by the weakly nonlinear analysis. However we conjecture that they might be obtained analytically by expanding the weakly nonlinear analysis to higher orders.
In Figure 7(b) (left) we show the spatio-temporal evolution of a numerical solution whose initial condition is a small perturbation of the weakly nonlinear solution with \(L=4\). In this case, the analysis predicts that the small amplitude pattern is stable. In the numerical simulation we observe that the solution moves towards a small amplitude pattern, which is well approximated by the weakly nonlinear analysis. This result confirms the stability predicted by our analysis. The initial condition and the final stationary state are reported in Figure 7(b) (center). Finally, a comparison between the analytical and numerical bifurcation diagrams is shown in Figure 7(b) (right).
Figure 7: Numerical investigation of system (16) in the mutual avoidance regime (\(\gamma>0\)) with \(\bar{u}_{1}=\bar{u}_{2}=10\), for two different values of \(L\). In (a): \(L=3.1\) and the analysis predicts an unstable supercritical bifurcation at \(\gamma=\gamma_{c}=0.225754\). On the left, numerical simulation showing that the system moves away from the unstable solution and evolves toward a large amplitude pattern. In the center, initial condition and the final stationary state. On the right, comparison between analytical and numerical bifurcation diagrams. In (b): \(L=4\) and the analysis predicts a stable supercritical bifurcation at \(\gamma=\gamma_{c}=0.15708\). On the left, numerical simulation showing that the system moves towards the stable small amplitude solution. In the center, initial condition and the final stationary state. On the right, comparison between analytical and numerical bifurcation diagrams.
Finally, Figure 8 shows analytic and numerical bifurcation diagrams of System (16) with \(\gamma<0\) and \(\bar{u}_{1}=\bar{u}_{2}=10\). Our previous analysis predicts unstable supercritical bifurcations for \(2<L<6\), and subcritical bifurcations for \(L>6\) (see Figure 4 (b)). We have verified these results numerically, and the comparisons between analytical and numerical bifurcation diagrams are shown in Figure 8.
### Bistability between small amplitude patterns and strongly modulated solutions
Figure 8: Comparison between analytical and numerical bifurcation diagrams of system (16) with \(\gamma<0\), \(K=K_{1}\) (see Equation (76)), \(\bar{u}_{1}=\bar{u}_{2}=10\), for different values of the length of the domain \(L\). These scenarios correspond to Figure 2 (c) (right). Dashed and solid lines represent unstable and stable branches, respectively, which are computed analytically, while the dots are computed numerically. As the length of the domain increases, the system changes its qualitative behaviour. In (a): \(L=5\) and the system exhibits a supercritical bifurcation at \(\gamma=\gamma_{c}\approx-13.2\), giving rise to a branch of unstable small amplitude solutions. In (b), \(L=10\) and at \(\gamma=\gamma_{c}\approx-10.7\) the system exhibits a subcritical bifurcation

The existence of non-constant solutions to system (16), far away from any bifurcation of the constant solution, was already detected and analyzed in [18] using an energy method. By minimising an energy functional associated with the system, nontrivial stationary solutions were revealed which, as \(L\) increases, tend to look increasingly like piecewise constant functions, when \(\gamma>0\), or spike solutions, when \(\gamma<0\). We call such solutions _strongly modulated_ because they are given by the superposition of more than one unstable Fourier mode. In this section, we will combine numerical and analytic solutions inferred by both the weakly nonlinear analysis here and the results presented in [18] to construct more comprehensive bifurcation diagrams.
For this, we focus on the case \(\gamma>0\) and \(\bar{u}_{1}=\bar{u}_{2}\). Here, the system exhibits supercritical bifurcations for large values of \(L\) (see Figure 2 (a)). Also, as shown in Figure 3 (a), these supercritical bifurcations generate stable small amplitude patterns. In [18] we showed that under the same conditions (that is \(L\gg 1\), \(\gamma>0\) and \(\bar{u}_{1}=\bar{u}_{2}\)), the system supports strongly modulated patterns. Therefore we expect that for \(L\) sufficiently large, there exist parameter regions in which small amplitude patterns and strongly modulated solutions coexist and are stable.
We have verified this numerically and the results are shown in Figure 9. When \(L\) is not too large, the system admits small amplitude solutions that bifurcate supercritically from the homogeneous steady state and remains stable as the control parameter \(\gamma\) increases (see Figure 9 (a)). In this case, we do not find strongly modulated solutions. As \(L\) increases, the supercritical branch of patterns predicted by the weakly nonlinear analysis still exists and is stable as long as \(\gamma\) is sufficiently close to the bifurcation threshold (see Figure 9 (b)). However, a second branch appears higher up, representing the strongly modulated solutions predicted by [18]. As \(L\) increases further, the branch of stable small amplitude solutions becomes smaller and smaller (Figure 9(c)), and the solutions transition to strongly modulated for values of \(\gamma\) closer to the bifurcation threshold.
## 5 Discussion
We have analysed bifurcations for a nonlocal advection diffusion system with two interacting populations that either mutually avoid or mutually attract. First, we analysed the linear stability of the homogeneous steady state and recovered the instability thresholds. Beyond these thresholds, the homogeneous steady state becomes unstable and the system is expected to form spatially inhomogeneous patterns. To predict the evolution of the system in the unstable regime, we used weakly nonlinear analysis to recover the equations governing the amplitude of the pattern and approximations of the inhomogeneous solutions. We found that the amplitude equations consist of a Ginzburg-Landau equation coupled with an equation for the zero mode.
Figure 9: Bifurcation diagrams of system (16) with \(\gamma>0\), \(K=K_{1}\) (see Equation (76)), \(\bar{u}_{1}=\bar{u}_{2}=1\), for different values of the length of the domain \(L\). These scenarios correspond to Figure 2 (a) (center). Dashed and solid lines represent unstable and stable branches, respectively, which are computed analytically, while the dots are computed numerically. The system exhibits a supercritical stable bifurcation at: \(\gamma=\gamma_{c}=1.06896\) in (a); \(\gamma=\gamma_{c}=1.01664\) in (b); \(\gamma=\gamma_{c}=1.0264\) in (c). As \(L\) becomes sufficiently large, the system supports strongly modulated patterns which coexist with stable small amplitude patterns.
Indeed, we obtained a sequence of linear problems whose general solutions must be a linear combination of the critical mode and the zero mode. This follows from the fact that the system under study obeys a conservation law. An equivalent result was shown in [25], where similar amplitude equations were derived using symmetry and scaling arguments. By means of the amplitude equations, we recovered the condition that ensures the stability of the patterns bifurcating from the homogeneous steady state.
To obtain concrete numerical results, we analysed the case where the spatial-averaging kernel, \(K\), is a top-hat distribution. By combining analysis of the amplitude equation with numerical solutions, we showed that the system exhibits a variety of different types of bifurcations and bistability regimes, strongly depending on the ratio \(l/\alpha\). In particular, we found stable small amplitude patterns bifurcating supercritically from the homogeneous steady state at the onset of the instability. We also found subcritical regimes generating unstable small amplitude patterns, which coexist with both the stable homogeneous solution and stable large amplitude patterns. In this case, numerics revealed a hysteresis effect due to the bistability between two stationary states. Finally, we also found supercritical bifurcations generating unstable small amplitude patterns. Beyond the instability threshold, we numerically detected stable large amplitude patterns that persist even when decreasing the bifurcation parameter below the instability threshold, revealing again a hysteresis effect similar to that found in the subcritical regime.
By combining weakly nonlinear analysis, numerical simulations and the energy functional analysis from [18], we obtained a comprehensive bifurcation picture. We found parameter regions exhibiting bistability between small amplitude patterns and strongly modulated solutions, when \(l/\alpha\gg 1\). The range of bistability becomes smaller and smaller as \(l/\alpha\) increases, because the small amplitude patterns lose their
stability for values of the control parameter increasingly closer to the bifurcation threshold (Figure 9). Overall, our analysis reveals that our system may display discontinuous phase transitions either when \(\alpha\approx l\) or when the sensing range \(\alpha\) is very small compared to the length \(l\) of the domain.
Our study provides an example of how to combine different and complementary approaches to recover more comprehensive pictures of the bifurcation diagrams. To extend these results further, it would be interesting to expand the weakly nonlinear analysis up to higher orders. Such an approach could reveal analytically some of the large amplitude branches here found numerically, as well as the branches of solutions connecting small and large amplitude patterns. Numerical continuation software, such as pde2path [36], gives another way of approaching this problem [31, 11]. Our analysis revealed parameter regions with bistability between two extended states, a scenario in which systems often exhibit snaking branches of localized solutions [7, 37]. Extending our weakly nonlinear analysis to higher orders may help locate the codimension-two point where the nascence of localised structures may take place, which would be an interesting subject for future work.
Our focus here has been on a particular example of Equation (1) [28], with just two populations and no self-interaction terms (\(N=2\), \(\gamma_{ii}=0\)). However, even in this relatively simple system, we found an unexpectedly rich variety of patterning scenarios. Therefore, we conjecture that analysis of the system with \(N\geq 3\) populations and/or \(\gamma_{ii}\neq 0\) would reveal even more complex patterning and bifurcation structure. Our next goal, indeed, is to analyse the more general scenarios (\(N\geq 3\), \(\gamma_{ii}\neq 0\)). A possible way forward might be to analyse phase transitions by combining the tools used here with those from [9]. In [9] the authors studied the phase transitions of the McKean-Vlasov equation by analysing the minimizers of the energy associated to the problem. Combining this with weakly nonlinear analysis might shed light on
the number of steady states at the onset of an instability, and consequently on the type of phase transition occurring when the bifurcation parameter crosses the instability threshold.
System (1) has several applications to natural systems and, in particular, to ecological systems. Therefore the analysis presented in this paper, as well as possible future extensions, might help to address some important ecological questions regarding the emergence of territories, as well as their sizes and stability [26]. Indeed, variations in territory size and shape can strongly affect population structure and dynamics [1]; therefore, understanding the mechanisms and consequences of these changes is crucial for informing the design of efficient conservation strategies. Our results support the hypothesis that the formation of territorial patterns is not just a consequence of heterogeneity in resource distribution, but that they can emerge as a consequence of animal behaviour and mutual interactions [16, 1, 26]. Our analysis also predicts that a small sensing range relative to the length of the domain can facilitate a territory instability, in agreement with other theoretical studies suggesting that poor sensory information can promote range size instability ([32]). In summary, the analysis of the class of models (1) with the techniques presented and discussed here can help to resolve biological and ecological questions that may be inaccessible to experimental investigation.
**Acknowledgements:** JRP and VG acknowledge support of Engineering and Physical Sciences Research Council (EPSRC) grant EP/V002988/1 awarded to JRP. VG is also grateful for support from the National Group of Mathematical Physics (GNFM-INdAM). TH is supported through a discovery grant of the Natural Science and Engineering Research Council of Canada (NSERC), RGPIN-2017-04158. MAL gratefully acknowledges support from NSERC Discovery Grant RGPIN-2018-05210 and from
the Gilbert and Betty Kennedy Chair in Mathematical Biology.
**Declarations of interest:** The authors have no competing interests to declare.
|
2310.11442
|
Trusted Provenance of Automated, Collaborative and Adaptive Data
Processing Pipelines
|
To benefit from the abundance of data and the insights it brings, data
processing pipelines are being used in many areas of research and development
in both industry and academia. One approach to automating data processing
pipelines is the workflow technology, as it also supports collaborative,
trial-and-error experimentation with the pipeline architecture in different
application domains. In addition to the necessary flexibility that such
pipelines need to possess, in collaborative settings cross-organisational
interactions are plagued by lack of trust. While capturing provenance
information related to the pipeline execution and the processed data is a first
step towards enabling trusted collaborations, the current solutions do not
allow for provenance of the change in the processing pipelines, where the
subject of change can be made on any aspect of the workflow implementing the
pipeline and on the data used while the pipeline is being executed. Therefore
in this work we provide a solution architecture and a proof of concept
implementation of a service, called Provenance Holder, which enables provenance
of collaborative, adaptive data processing pipelines in a trusted manner. We
also contribute a definition of a set of properties of such a service and
identify future research directions.
|
Ludwig Stage, Dimka Karastoyanova
|
2023-10-17T17:52:27Z
|
http://arxiv.org/abs/2310.11442v1
|
# Trusted Provenance of Automated, Collaborative and Adaptive Data Processing Pipelines
###### Abstract
To benefit from the abundance of data and the insights it brings _data processing pipelines_ are being used in many areas of research and development in both industry and academia. One approach to automating data processing pipelines is the workflow technology, as it also supports collaborative, trial-and-error experimentation with the pipeline architecture in different application domains. In addition to the necessary flexibility that such pipelines need to possess, in collaborative settings cross-organisational interactions are plagued by lack of trust. While capturing provenance information related to the pipeline execution and the processed data is a first step towards enabling trusted collaborations, the current solutions do not allow for provenance of the change in the processing pipelines, where the subject of change can be made on any aspect of the workflow implementing the pipeline and on the data used while the pipeline is being executed. Therefore in this work we provide a solution architecture and a proof of concept implementation of a service, called Provenance Holder, which enable _provenance of collaborative, adaptive data processing pipelines in a trusted manner_. We also contribute a definition of a set of properties of such a service and identify future research directions.
Keywords: Provenance of Change, Reproducibility, Trust, Collaborative Processes, Data Processing Pipelines, Workflow evolution provenance, Provenance of ad-hoc workflow change
## 1 Introduction
A significant part of data-driven ICT research and development in enterprises heavily relies on data analysis, simulations and machine learning algorithms. Recently, in a wave of initiatives towards supporting wide-spread transformation to a digital world, there has been an enormous effort by both enterprises in different industries and research to automate and deploy data processing onto the enterprise computing environments in order to improve their operations and to benefit the most from the available data.
The first step towards this goal is the automation of the computational and data processing steps needed using data processing pipelines that can be implemented in many different ways using different methodologies and technologies. The major challenges such a task faces are related to identifying what the best approach is towards the actual automation of the data pipeline and integration of the computational resources, the ability to use data from different sources in different formats and varying quality properties, the flexibility of the data pipelines, the modularity and reusability of individual steps, the ability to enable collaborative modelling and execution of data processing pipelines, as well as their provenance and reproducibility. All these challenges have been in the focus of research and industries for quite some time and there is abundant literature reporting on interdisciplinary research results from many different communities, like data science, intelligent systems, scientific computing and workflows, eScience and others, employing a huge variety of concepts and technologies.
The topic of _provenance1_ has been researched predominantly in the field of scientific experiments and scientific workflows, which led to the definition of the characteristics of Findable Accessible Interoperable Reusable (FAIR) results [16, 25] and Robust Accountable Reproducible Explained (RARE) experiments [10]. In this field, scientific experiments are considered to be of good provenance if they are reproducible [2]. Enabling reproducibility of experiment results, typically by means of tracking the data through all processing, analysis and interpretation steps of the experiment, has been one of the main objectives of scientific workflow systems, in addition to the actual automation of scientific experiments. The importance of provenance in in-silico experiments has been identified, discussed and approaches have been partly implemented more recently in e.g. [3, 11, 1, 26, 8] and are relevant to enabling the provenance of data processing pipelines. Furthermore, there are initiatives towards standardization of representing provenance information for the purposes of both modeling provenance information and establishing an interchangeable format for such information such as PROV-DM2.
Footnote 1: ”The provenance of digital objects represents their origins.” source: [https://www.w3.org/TR/2013/NOTE-prov-primer-20130430/](https://www.w3.org/TR/2013/NOTE-prov-primer-20130430/)
Footnote 2: [https://www.w3.org/TR/prov-overview/](https://www.w3.org/TR/prov-overview/)
The scope of our work includes automated data processing pipelines, which use only software implementations of computational and data transformation tasks and excludes data processing pipelines in which participation of physical devices (e.g. microscopes, wet labs, sensors and actuators) is directly visible in the pipeline. Having said that, we focus additionally on enabling the _provenance of flexible, a.k.a. adaptive, data processing pipelines that are carried out in collaboration_ among identifiable organisational entities. The matter of _trust among the collaborating parties_ is of utmost importance in the context of our work, in particular because of the need to capture the origins of change that can be carried out by any of the participating parties at any point in the execution of the pipelines.
Our technology of choice for modelling and running collaborative data processing pipelines is _service-based, adaptable processes, both workflows and choreographies_, that are well known from the field of Business Process Management (BPM) [24] and conventional Workflow Management Technology [15] for their beneficial properties such as modularity, reusability, interpretability, transactional support, scalability and reliability. In other related research of ours we have provided a Workflow Management System (WfMS) supporting the execution of adaptable, collaborative choreographies and workflows and have also evaluated its applicability in the domain of scientific workflow automation [22].
To the best of our knowledge, the ability to reproduce the changes on either workflow or choreography models or instances made by collaborating organisations in the course of running their data processing pipelines in a trusted manner has not been the subject of other works. We call this type of provenance "_trusted provenance of change_".
Towards closing this gap in research, we extend our vision of a solution [20], called _Provenance Holder service_, that has to track and record all changes made on choreography and/or workflow models or instances to support their provenance in a trusted manner and allow collaborating organisations to retrace and reproduce their data processing pipelines exactly the same way as they have been carried out, including all changes made on both data and software used during the execution. The contributions of this work are: (i) an extension of the workflow provenance taxonomy to account for adaptation, (ii) a detailed definition of the properties of the Provenance Holder service that will guarantee trusted provenance of collaborative, adaptive data processing pipelines, (iii) a functional architecture, which is generic in nature, applicable in any application domain and imposes low effort to integrate with other flexible WfMSs, and (iv) an implementation as a proof of concept. We also explicitly identify (v) the _prerequisites_ for employing the Provenance Holder with other WfMS environments, namely the ability to support the trial-and-error manner of experimenting (as in e.g. the Model-as-you-go approach [19] or the ability to change and propagate change in choreographies [7]) and the ability to provide workflow monitoring data that allows for data and workflow provenance [20].
The paper structure is as follows. In Section 2 we reiterate the requirements on our system, illustrate the supported provenance types and define the properties of a system that can enable provenance and trust in adaptive collaborative data processing pipelines, while the architecture of the Provenance Holder supporting these properties is described in depth in Section 3. In Section 4 we contribute a Proof of Concept (PoC) implementation of the Provenance Holder and elaborate on different design and implementation details and discuss our design decisions. In Section 5 we identify open issues and directions for future research and Section 6 concludes the paper.
## 2 Provenance Holder: Requirements, Supported Types of Provenance and Properties
In this section we reiterate the requirements for a system enabling reproducible, trusted and adaptive collaborations, we discuss existing provenance types and expand these with new types of provenance, as well as we present the Provenance Holder properties.
### Requirements
Previously, we identified four requirements in [12] and [20] for reproducible, trusted and adaptive collaborations, e.g. scientific experiments (including eScience). Furthermore, we argued that these requirements are to be fulfilled by an enabling system. Hence, these requirements are to be provided by the Provenance Holder. We numbered these requirements (cf. Table 1) for a better overview and to allow clear referencing and mapping.
### Supported Types of Provenance
According to the most recent survey on provenance [11], there are four types of provenance: provenance meta-data, information system provenance, workflow provenance and data provenance; in all cases artefacts are considered to be of good provenance if their origin and history of transformations have been sufficiently recorded to be reproducible. While provenance meta-data is the least specific one, data provenance is considered the most specific one and is also known as data lineage.
The workflow provenance type (see Figure 1) directly applies to our use case of adaptive data processing pipelines, hence it is the type we focus on in this work. In addition to the provenance information regarding the control and data flow of workflows, workflow provenance includes input, output and parameters of the workflows; collecting such provenance information requires appropriate instrumentation of the WfMS. Furthermore, the authors of [11] group workflow provenance in form (prospective, retrospective, evolution) and granularity (coarse-grained and fine-grained). Prospective provenance captures only workflow models and their (run-time) contexts, retrospective provenance captures additionally the input data, whereas evolution provenance captures the changes on models, input data or context.
\begin{table}
\begin{tabular}{c l} \hline \hline
**Requirement** & **Description** \\ \hline
**R1** & **Adaptability** to adhere to the adaptability of experiments \\
**R2** & **Provenance** to enable FAIR results [16] \\
**R3** & **Reproducibility** for RARE experiments [10] \\
**R4** & **Trust** among collaborating parties to also enable accountability \\ \hline \hline \end{tabular}
\end{table}
Table 1: Provenance Holder Requirements adopted from [12, 20]
In addition to that classification of workflow provenance, there is a need to account for the different types of _provenance of adaptation or change_ of workflows, as this is not part of the work of [11] but is necessary for enabling provenance of adaptive workflows (or trusted provenance of change). We therefore subdivide provenance of adaptation/change into _workflow evolution provenance_ and _provenance of ad-hoc workflow change_. This distinction is important, as the subject of the change is either a workflow model or one or more workflow instances, respectively, and ensuring their provenance is addressed using different approaches and requires different data. The former type of change is typically enacted using instance migration from one workflow model to another, whereas the latter is carried out directly on the internal representation of a process instance running on a workflow engine. Furthermore, there is a need to distinguish between the provenance forms _provenance of adaptive workflows_ and _provenance of adaptive choreographies_, as in collaborative data processing pipelines the changes in a choreography (model or instances) of workflows have to be tracked and captured, too.
The ability to support these types of provenance implies specific requirements on the Provenance Holder service. Namely, it has to be able to track the models and model adaptation/changes, instance migrations and ad-hoc changes, of both workflows and choreographies, as well as the executions of choreographies and workflows which produced a certain output given a certain input (data and other parameters). Since we consider a collaborative environment, traces of workflow execution or even actual changes to models are not (immediately) published for confidentiality and trust reasons. For the Provenance Holder service it means that it will have to capture only representations of the actual objects containing the relevant information rather than the actual detailed workflow execution and data traces. In the terms of [11] this means that the Provenance Holder records (representations of) models (W), input data (D) and the run-time context (C) and changes thereof. The types of provenance to be supported are the prospective, evolution and coarse-grained ones. Depending on the level of detail of available execution traces, retrospective and even fine-grained provenance can be enabled, too. The Provenance Holder is required to support also the provenance of adaptation/change in form of workflow evolution provenance and provenance of ad-hoc workflow change (as per Figure 1).
In the next section we focus on the properties a Provenance Holder must possess in order to support all these provenance types and meet the requirements R1 through R4 (Table 1).
### Provenance Holder Properties
Note that in our work we assume only a minimum level of trust. This accounts for the fact that process and/or choreography participants may prefer to keep the details about their data processing pipeline confidential until its potential disclosure (or forever). At the same time we aim at giving a choreography participant the possibility to make statements about their data processing steps
that can be trusted by other parties. That is why a Provenance Holder service is required to keep the provenance information separately from the data and execution of the data processing pipeline and thus does not disclose publicly any insights about the actual data processing.
To enable the provenance of collaborative adaptive choreographies and meet the requirements explained above, the Provenance Holder has to enable a choreography participant (referred to as **I** in Table 2) to make the following four statements about their data processing pipelines without directly disclosing inner workings or data. A choreography participant can be a person or a system invoking a workflow/choreography, deploying/adapting a workflow/choreography model or executing such a workflow/choreography (hence a participant uses an appropriate system for that, e.g. a WfMS). We map these statements to Provenance Holder properties, as they imply guarantees for specific capabilities of the service (cf. Table 2).
While in this section we highlight some of the concepts and techniques that can be used to enable these four properties, we will give more details later in this article.
To be able to attribute something to someone (P1), we employ a public/private key digital signature because of its widespread use and because, after an initial identification or pairing of a party to a public key, authenticity can easily be verified.
With time stamping, e.g. on an immutable public ledger (such as a blockchain), it will be possible to prove that something was known to have happened at a certain point in time (P2) or at least that it was known before a certain point in time. Time stamping via blockchain proves that something was known before; alternatively, the signature of an execution timestamp by invoker and executor might be considered but is not as strong an indicator.
Proving that someone knows a particular something can be trivially and obviously achieved by simply disclosing said information. Therefore, property P3 can always be achieved through information disclosure.
Figure 1: Workflow Provenance types taxonomy
However, in order to prove that someone knows a particular something without disclosing sensitive information (P3), we will investigate further the concept of zero knowledge proofs (ZKP) and more specifically non-interactive zero-knowledge proofs ([5], which presents a systematic overview of the greater topic of verifiable privacy-preserving computations). Examples of application for non-interactive zero-knowledge proofs include the Zerocash protocol of the cryptocurrency Zcash3\({}^{,}\)4 and the Monero cryptocurrency5.
Footnote 3: [https://github.com/zcash/zcash](https://github.com/zcash/zcash)
Footnote 4: [https://z.cash/](https://z.cash/)
Footnote 5: [https://www.getmonero.org/](https://www.getmonero.org/)
Property P4 requires the three previous properties and is meant to prove the origin of something by linking it to its predecessor. This can be done by combining the properties P1, P2 and P3.
While the support of properties P1 and P4 by the Provenance Holder, together with the actual data, models and changes thereof, guarantees the aforementioned workflow provenance types, the properties P2 and P3 address the issue of ensuring trust in adaptive collaborations. A third party can view the recorded provenance information and even verify properties P1, P2, (P3) and P4 without engaging in collaboration or with a respective choreography participant. In this work we will concentrate on the design and implementation of three out of these four properties, more precisely on P1, P2 and P4 (see Section 3.2), whereas P3 will be subject of our future research. In the following section we present the detailed architecture of the Provenance Holder with focus on the properties presented here.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Property** & **Statement by participant** & **Description** \\ \hline
**P1** & “**I** know it” & A result/change/predecessor can be attributed to a certain identifiable entity, i.e. choreography participant. \\
**P2** & “I knew it **before**” & A result/change/predecessor has been available/known or has happened at or before a certain point in time. \\
**P3** & “**I actually** know it” & Prove that that participants know of a result/change/predecessor (without information disclosure). \\
**P4** & “I know **where it came from**” & Participants have knowledge of the predecessor of a result/change/predecessor. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Provenance Holder Properties and their mapping to statements made by choreography participants. In the statement column the pronoun **It** is information about either of the following: result, origin/predecessor or change. The text in bold highlights where the focus of each property lies.
## 3 Provenance Holder Architecture
The Provenance Holder is a service responsible for collecting all information necessary to ensure provenance and reproducibility of and trust in the collaborative adaptations and enable the four properties we introduced in the previous section (Section 2.3). We aim at providing a generic, reusable and non-intrusive solution across different scenarios and separation of concerns [6].
The Provenance Holder service provides _two main operations_ as part of its interface (cf. Figure 2): 1) _collect provenance data_ (_Collect_) and 2) _retrieve provenance information_ (_Retrieve_); we call these operations also external operations. The controller, the adapter and one or more provenance providers are the _components of the Provenance Holder_ (cf. Section 3.1 and Figure 2) and they carry out four _interaction scenarios_ in order to realize the two externally provided operations of the Provenance Holder service (see Section 3.2). The interaction scenarios are always combinations of several of the internal methods6; the (internal) methods are: _Record_, _Retrieve_, _Validate_ and _Migrate_.
Footnote 6: We use the term _method_ for disambiguation purposes only.
_Collection of provenance data_ is done through selection of the relevant messages from the service middleware used for the interaction among participants in the choreography (e.g. an Enterprise Service Bus (ESB) or any other data transfer technology). We regard every piece of data related to execution of workflows and choreographies or their adaptation that is communicated via the service middleware as provenance data. After selection, processing and storage by the Provenance Holder it becomes _provenance information_. In this process, the collected provenance data is validated by the Validate method provided by the Controller component and recorded using the Record operation of one of the Provider components.
Figure 2: Provenance Holder Architecture: components, external operations and internal methods, implemented ones are black (adopted from [20])
_Retrieving provenance information_, i.e. _Retrieve_ operation, is done by using the methods Retrieve and Validate. Afterwards the provenance information is published to the service middleware so that a Provenance viewing or visualization tool can be used by an expert to inspect the provenance information.
### Components
As shown in Figure 2, the components of the Provenance Holder are the controller, the adapter and one or more provider components.
The _adapter_ is the component ensuring the _integration_ of the Provenance Holder with other systems. Its actual design and implementation are specific for the system with which it has to work to enable the integration and correct communication. We recommend using a service middleware that facilitates the integration of workflow management systems and service-based components with other functionality, however we do not assume any specific technology in our generic solution. In addition it provides the two external operations: _Collect_ and _Retrieve_. To do so, the adapter acts as interface to external existing system and has to be directly connected to the communication middleware which transports the provenance data. Furthermore, the adapter has the important task to identify the appropriate data to pick from the service middleware and hand it over to the controller component for further processing. The actual interaction with the service middleware has to comply with the adapter interface, i.e. use the external operations, but the implementation has to be dealt with individually for each software landscape. We envision two possible approaches for the _selection of provenance data_: i) it can be actively provided by the WfMSs running the choreographies and the middleware they use or ii) the adapter component has to carry out the selection of relevant data. Both approaches have their own advantages and disadvantages and are going to be discussed in Section 5.
In terms of _adaptations_ made on workflows and choreographies we distinguish between changes which cause an instance migration and those that stand for ad-hoc changes. Capturing the former is done by recording the new model. The latter case is more difficult to tackle because changes are applied to a model but do not necessarily generate a new model representation to keep track of. Furthermore, changes can be applied consecutively to a "base" model and they might add to it or remove previously added parts. Changes might evolve to a new model version or they might be abandoned altogether. While capturing these changes and keeping track of them is not trivial, it also becomes apparent that keeping track of these transient changes is important, too, since they are not inherently captured through manifestation in a model version. Distinguishing between execution and adaption data, as well as differentiating between instance migration and ad-hoc change is a task of the adapter component.
_Data retrieval_, which scientists commonly call publication, is done only upon request sent by the user typically via a service or tool capable of presenting the provenance information - both the actual information (if provided by the involved
participants) and the fact that it can be trusted. The adapter component serves the request with only the collected provenance information, though.
Enabling both provenance data collection and retrieval imposes additional requirements, especially on the WfMS and software used to model and monitor the workflows and choreographies (see Section 3.3).
_Providers_ or _provenance providers_ have to implement three out of the four methods explained in Section 3.2, namely _record, retrieve and migrate_, and have certain requirements to fulfil. The implementation complexity can be arbitrarily high and strongly depends on the employed technology: writing into a log file, for example, is of low complexity, whereas employing blockchain technology is more on the high end of the complexity spectrum. The needs of different workflow types also come into play when deciding which technology to use.
The _controller_ is in charge of the interaction between the adapter and the provenance providers so that the Provenance Holder can provide the provenance service operations to each workflow and choreography. The controller combines the four methods: record, validate, retrieve and migrate into the realization of the two operations provided by the Provenance Holder: Collect and Retrieve. For the _collect provenance data operation_ the controller receives, validates and relays the provenance information to the providers. For the _operation retrieve provenance information_, it combines the methods retrieve and validate. In both cases the validate method is a crucial step (more details in Section 3.2). The controller has a key storage integrated to be able to verify signatures and identify participants. If the data is not valid, for instance the signature does not match the signed data or the signer is unknown, the data is rejected and not relayed to the providers for storage.
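As an illustration only, the composition just described could look roughly as follows in Python; the class, method and parameter names are assumptions made for this sketch and do not reflect the actual code of the PoC.

```python
class Controller:
    """Illustrative sketch: composes validate/record/retrieve into Collect and Retrieve."""

    def __init__(self, providers, key_store, verify_signature):
        self.providers = providers              # one or more provenance provider components
        self.key_store = key_store              # key id -> public key, exchanged beforehand
        self.verify_signature = verify_signature  # e.g. the ed25519 check shown in Sec. 4.1

    def validate(self, prov_object, signature, key_id):
        public_key = self.key_store.get(key_id)
        if public_key is None:                  # unknown signee -> reject
            return False
        return self.verify_signature(public_key, prov_object, signature)

    def collect(self, prov_object, signature, key_id):
        # Collect operation: validate first, then relay to every provider for storage.
        if not self.validate(prov_object, signature, key_id):
            raise ValueError("signature invalid or signee unknown - object rejected")
        for provider in self.providers:
            provider.record(prov_object, signature, key_id)

    def retrieve(self, prov_hash):
        # Retrieve operation: fetch from all providers, re-validate, compare results.
        results = [provider.retrieve(prov_hash) for provider in self.providers]
        for prov_object, signature, key_id in results:
            if not self.validate(prov_object, signature, key_id):
                raise ValueError("stored object failed re-validation")
        if any(obj != results[0][0] for obj, _, _ in results):
            raise ValueError("providers returned diverging provenance information")
        return results[0]
```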
### Methods and Operations
As mentioned above, the four main internal methods of the Provenance Holder are used in different combinations to realize the two external operations (cf. Figure 2). During the execution and adaptation of workflows and choreographies the Provenance Holder constantly collects provenance data on a very detailed level, including on per-workflow-activity level. For all practical purposes, the data collected about experiment execution and applied changes need to be authenticated. This can be done, for instance, by employing public/private key signature algorithms on e.g. input, workflow version, and output data produced by the participating WfMS environments.
The _Record_ method selects appropriate provider components for a certain workflow type out of the available providers and uses them to store the provenance information. The information needed for ensuring the provenance of workflow runs/executions is input data, the executed workflow model version and its output data. The actual provenance information is paired with information about the corresponding choreography instance (typically using a reference to the corresponding choreography instance). This does not only ease the retrieval but enables the attribution of an execution and data to a certain origin. For an adaptation on a workflow or a choreography, the provenance information might
consist of the description of the actual change performed, the new version of the workflow/choreography and a reference to the preceding one. Data is validated (with the validation method) before it is actually handed over to a provider for storage.
The _Retrieve_ method is used to fetch the desired provenance information from the provider components via their interfaces. The information is identified and retrieved using the corresponding choreography instance ID and/or workflow IDs. The actual data retrieval is done by each provider itself and returned to the _retrieve_ method. After retrieval, the information is validated (with the validation method) before it is handed over to the adapter component, i.e. the Provenance Holder interface implementation. The validation is used to rule out storing errors or tampering on the data storage and to guarantee the validity of the data and the freshness of the validity check. The retrieved information should then be presented to the user in an implementation specific manner.
During _Validation_ the provided signature is verified in the controller component. Similarly, the signed data is validated and the signee is identified. When the _Record_ method is called, the signature gets verified before the data is "recorded". If the signature can not be verified or the signee not be identified, the information is rejected and not considered further, hence is not "recorded". The failed signature verification is also communicated to the user. When calling the _Retrieve_ method, the provenance information is fetched from the provenance provider and then validated. If two or more providers are present in a Provenance Holder, retrieved data is not only validated but the data of the different providers and each validation status is compared to one another to identify possible discrepancies. The status of the validation is also communicated to the user. Especially if the data storage is at a remote location an adversary might be able to change stored data. If this is done by a participant then he is also able to produce a "valid" signature of the changed data. To be able to validate signatures, the participants' keys need to be exchanged beforehand and stored in the Provenance Holder.
The _Migrate_ method is only used if stored information has to be transferred to a new type or instance of storage, in case an addition or change of instances is desired or needed. It provides the ability to retrieve all stored provenance information from a provider component at once. Similar to the retrieval of individual objects, all data is validated using the validation method, before it is migrated.
Addition of storage instances means that storage is expanded by a new instance of an already existing technology or a new instance of a not yet used storage technology, e.g. data is stored in an SQL database and now will be copied to a second one or will now also be stored in a flat file. Change of storage on the other hand means that one storage instance replaces another one. This can be done within a particular storage technology or done by replacing one technology with another, e.g. data is stored in a flat file and will now be stored in an SQL database, hence it will be migrated. The employed technology has also implications on the complexity of such a migration because of the difference in features and of their characteristics. Migrations can be triggered either automatically
or manually by an administrator; the actual procedure for migration is out of the scope of this paper as related work like [21] is available. It is important to note that the cost of the _Migrate_ method must be considered, especially when it comes to blockchain technology where the needed information can be spread virtually over the whole ledger. The migration step might also involve purging data from the source after a successful data transfer. Here, blockchain technology might also pose additional challenges.
### Requirements on WfMSs
The Provenance Holder architecture introduced above implies several assumptions about the information needed from a WfMS and related service middleware so that it can meaningfully collect, process and return provenance information about workflow/choreography changes. From these assumptions we can derive the minimum requirements on WfMSs and service middleware.
First, all provenance data produced by the involved WfMSs, the workflow caller and the choreography initiator, and the information used to identify workflows and choreographies need to be signed and the signature made available to the Provenance Holder. This also implies that the WfMS and the service middleware used need to be able to identify the choreography and workflow instances; the concrete technique used for that is technology and implementation specific.
Second, all messages notifying actual changes in workflow/choreography models and/or instances need to be signed too so that both evolution provenance and ad-hoc change provenance can be enabled. All private/public key pairs used have to be generated beforehand and made available to the Provenance Holder.
Third, the key exchange and choreography participant identification has to be done before participants can engage in collaborative scientific workflows. While the key exchange is trivial as the key is made public, e.g. alongside the workflow, the participant identification could be done following principles such as trust on first use (TOFU) or trust upon first use (TUFU) and the web of trust. Identification via other channels is also possible and if desired needs to be implemented accordingly.
## 4 Design and Implementation
In this section we elaborate on design, technological decisions and implementation details. We discuss several specific aspects regarding our proof-of-concept implementation, available at: [https://github.com/ProvenanceHolder/ProvenanceHolder](https://github.com/ProvenanceHolder/ProvenanceHolder)
### Signature algorithm
To be able to attribute data to a certain entity as postulated by one of the requirements, we decided to use a public/private key signature scheme. Using this
signature scheme not only enables the identification of signers but, since the public key is meant to be public, storing it on an immutable (public) ledger also does not violate privacy or security concerns. This kind of signature scheme or algorithm makes it possible to identify signers since a signature can only be validated with a certain key. Key owners are identified before engaging in collaboration and keys are stored accordingly. Hence, if a signature can be validated, it can be attributed at the same time. A public/private key algorithm requires the public key to be public and potentially known to everyone in order to validate a signature or encrypt a message that only the owner of the private key can read. Therefore storing the public key on an immutable (public) ledger, like e.g. blockchain, does not impose security or privacy issues. While RSA [17] is the most popular algorithm in this category, we chose ed25519 [4] because keys and signatures are significantly smaller without compromising security. Reference implementations and bindings are also available for a wide range of languages7, e.g. Python8 and Java9. The signature algorithm ed25519 is applied during process execution and when changes are made by the respective parties, i.e. WfMS and modeling environment.
Footnote 7: [https://doc.libsodium.org/bindings_for_other_languages](https://doc.libsodium.org/bindings_for_other_languages)
Footnote 8: [https://pynacl.readthedocs.io/en/stable/](https://pynacl.readthedocs.io/en/stable/)
Footnote 9: [https://github.com/terl/lazysodium-java](https://github.com/terl/lazysodium-java)
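For illustration, the following minimal sketch shows an ed25519 signing and verification round trip with the PyNaCl binding referenced above; the payload layout is a made-up example and not the format used by the Provenance Holder.

```python
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

# Key generation happens once per participant; the verify (public) key is
# exchanged and stored in the Provenance Holder's key management beforehand.
signing_key = SigningKey.generate()
verify_key_bytes = signing_key.verify_key.encode()

# The participant signs the provenance data (here an arbitrary byte string).
provenance_data = b"workflow-model-hash|input-hash|output-hash"
signed = signing_key.sign(provenance_data)

# The controller later verifies the signature with the stored public key.
try:
    VerifyKey(verify_key_bytes).verify(provenance_data, signed.signature)
    print("signature valid - data can be attributed to the key owner (P1)")
except BadSignatureError:
    print("signature invalid - object is rejected")
```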
### Controller
Besides coordinating the individual architecture components (see Figure 2) in order to deliver the two offered operations, the controller has to provide several management features:
* Key management: as mentioned above, each and every choreography participant needs to be identified before engaging in collaboration, or at least his or her public key has to be saved. As ed25519 keys are "just" random byte strings, only storing them does not suffice for key management and is not efficient when it comes to verifying signatures since there is no information for picking the appropriate key. To the best of our knowledge, there is no ed25519 key management that can do both storing keys and annotating them with the necessary information for automatic and manual key retrieval; therefore, we needed to also implement such a key management to suit our needs.
* Provenance object record: For performance purposes, the controller keeps a record about stored provenance information objects, the _object record_. This way already stored objects can be identified directly without querying any provider, requests for non-existent objects e.g. via _collect_ and requests if a certain object exists can also be answered directly. The record of stored objects can also assist with migration procedures between providers.
* Management of linked provenance information objects: The _object record_ also keeps track of the linked provenance information objects. In addition to supporting faster search (without the necessity to query the used storage provider component), it also allows for a quick and easy identification of corresponding
provenance paths. Each provenance object is recorded by its identifier, i.e. the provenance hash, and contains its predecessor.
There is information relevant for key management and in particular key retrieval that needs to be stored with the provenance information. The key identifiers of keys used to sign a provenance information object are stored together with the respective signature. The key object to be stored (cf. Figure 3) within the key management has six elements: key id (id), name (name), e-mail address (mail), creation date (date), fingerprint and the public key (pubkey). The fingerprint of the key is a signature with said key over name, mail, date and pubkey, and the id is the last 16 bytes of the fingerprint10.
Footnote 10: This is loosely adapted from OpenPGP ([https://datatracker.ietf.org/doc/htm1/rfc4880](https://datatracker.ietf.org/doc/htm1/rfc4880)).
The aforementioned object record (cf. Figure 4) keeps track of each recorded provenance information object by storing it in a key-value data structure with its provenance hash being the key and its predecessor, also identified by its provenance hash, being the value.
Figure 4: Object record
Figure 3: Key object
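A possible Python rendering of these two structures is sketched below; the field names follow Figures 3 and 4, while the concrete types and the example hash values are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KeyObject:
    """Key management entry, cf. Figure 3."""
    id: bytes           # last 16 bytes of the fingerprint
    name: str
    mail: str
    date: str           # creation date
    fingerprint: bytes  # signature over name, mail, date and pubkey, made with the key itself
    pubkey: bytes

# Object record, cf. Figure 4: provenance hash of an object -> provenance hash of
# its predecessor (None marks the origin of a provenance path). The hex strings
# below are placeholders, not real hashes.
object_record = {
    "a3f1...": None,
    "b72c...": "a3f1...",
}
```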
### Adapter
The main role of the adapter is to provide the two external operations _collect_ and _retrieve_ (see Figure 2). Furthermore, the adapter is the component used for integrating the Provenance Holder with any existing WfMS. Therefore this is the component that needs to be configured or specifically implemented to allow for integration with the WfMS under consideration, the technology it uses and its event models.
In this work we assume that the execution information about workflows and choreographies as well as information about any changes are made available by participating workflow engines to the Provenance Holder via its adapter. In the current implementation we follow the approach in which it is clearly specified which data is related to change and which to workflow/choreography execution. We also assume that the provenance data will be communicated using a message-oriented middleware, as this mode of communication provides the most advantages for integration of distributed applications and most service middlewares support it as well. Besides the information about workflow execution or change, the adapter processes the participants' signatures. At the moment our implementation is based on these assumptions and realises only the _collect_ operation. We are aware that some scenarios may require different integration approaches and may not have explicitly identified change information, for which other solutions need to be investigated in future. Similarly, due to the dependence of the _retrieve_ operation on a user-friendly visualization tool and/or specific integration with a WfMS, the retrieve operation's implementation is still under development.
### Provider
A provider has the task of storing the provenance information. Most of the members of such a data object can be considered fixed length. However, when it comes to input, predecessor and possibly also output, one has to consider that even a trivial mathematical operation such as addition has at least two input parameters/operands, which need to be stored. Even though each input, i.e. the hash of the actual input, has a fixed length, the number of inputs varies with the operation and can be quite big. This fact needs to be accounted for in a suitable way by the technology used.
We allow for storing an arbitrary number of predecessors. A predecessor of a certain provenance information object is the provenance information object from which the currently referenced data was derived. This can either be one or more provenance information objects for an execution or a provenance information object for a change in a model. In order to be able to store predecessors it is paramount to be able to identify them. The choreography instance id and the workflow instance id, besides the actual input, can be an indication for a predecessor. However, the actual identification might not be as trivial: for instance, only because a certain input or output came before in the execution
of a choreography or workflow does not necessarily mean that it is the right predecessor (cf. Figure 5) or a predecessor at all. Therefore, the provenance data object structure may need to be extended in our future work on the adapter component, whose task it is to create these objects from the input data.
Changes to a choreography or workflow model, for both instance migration and ad-hoc changes, are stored in the same object type - namely the provenance information object for adaptation. In case of instance migration the whole workflow/choreography model is captured and in case of ad-hoc changes only the actual change (diff) is recorded.
By storing provenance information in this way we end up with only lists of inputs and outputs/results belonging together in reverse order, which is enough for the provenance of specific data and executions as it is required. Doubly linked lists are not desired because it should be possible to store the provenance information on append-only data structures such as public ledgers. A single link suffices since a provenance path can always be back traced to its origin from any element in the path and with the information stored by the controller about recorded provenance information objects it is possible to generate an interlinked list, if necessary.
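The following sketch illustrates how such a singly linked provenance path can be traced back to its origin using the controller's object record; the function name and record layout are illustrative assumptions.

```python
def trace_to_origin(object_record, provenance_hash):
    """Follow the single predecessor links back to the origin of a provenance path."""
    path = []
    current = provenance_hash
    while current is not None:
        path.append(current)
        # The record is append-only and acyclic, so this walk terminates at the
        # origin, whose predecessor entry is None (or absent).
        current = object_record.get(current)
    return path  # [requested object, ..., origin], i.e. most recent first

# With the object record sketched earlier:
# trace_to_origin(object_record, "b72c...") == ["b72c...", "a3f1..."]
```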
Consider the following example in Figure 5. In this figure we show an excerpt of an XES11-compliant event log of different process instances of a process model from which we can derive and record provenance information. The logs contain the process related events that would be notified to the service middleware when processes are executed, as well as the events related to changes. The example log explicitly contains the execution of process instances and implicitly reflects changes/adaptations to the process model. As we stated earlier, we record the execution of processes and changes thereof. In our example we focus on capturing the change/adaptation to the underlying process model. In Figure 5(a), there are two excerpts of an event log which represent an adaptation - the first log is the process event sequence/process trace following the original process model, whereas the second event log/trace includes events related to the addition of a new activity to the instance (and the process model). Execution-wise it is merely a new instance of the process model (instances are recorded but not depicted in the example). In Figure 5(b), there are two provenance information objects reflecting the change and depicting the predecessor relation between those two objects. Both objects are identified by their ProvenanceHash, a hash which encompasses all individual elements of the object.
Footnote 11: [https://www.xes-standard.org/_media/xes/xesstandarddefinition-2.0.pdf](https://www.xes-standard.org/_media/xes/xesstandarddefinition-2.0.pdf)
In the following, we will present two proof-of-concept providers, a SimpleStorage provider and a Timestamping provider, which have different characteristics and serve different purposes in terms of the discussed properties (cf. Section 2.3).
#### SimpleStorage provider
Providers can be implemented in any storage technology which suits the respective use case. We provide a proof-of-concept implementation of a provider which is simple but at the same time provides database
capabilities. The SimpleStorage provider is implemented using SQLite12 as storage technology, which stores data in files, provides a feature-rich SQL interface and does not require a full-scale database management system.
Footnote 12: [https://www.sqlite.org/](https://www.sqlite.org/)
This provider addresses the properties P1 and P4 by storing the above mentioned provenance information objects and implicitly the link between them.
The performance characteristics of such a provider need to be evaluated in future research.
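A minimal sketch of what such an SQLite-backed provider could look like is given below; the schema, column names and method signatures are assumptions for illustration and not the PoC's actual code.

```python
import sqlite3

class SimpleStorageProvider:
    """Stores provenance information objects and their predecessor links in SQLite."""

    def __init__(self, path="provenance.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS provenance (
                   provenance_hash TEXT PRIMARY KEY,  -- identifier of the object
                   predecessor     TEXT,              -- provenance hash of the predecessor (P4)
                   payload         BLOB,              -- serialized provenance information object
                   signature       BLOB,              -- participant signature (P1)
                   key_id          TEXT               -- identifies the signing key
               )"""
        )

    def record(self, provenance_hash, predecessor, payload, signature, key_id):
        self.conn.execute(
            "INSERT INTO provenance VALUES (?, ?, ?, ?, ?)",
            (provenance_hash, predecessor, payload, signature, key_id),
        )
        self.conn.commit()

    def retrieve(self, provenance_hash):
        return self.conn.execute(
            "SELECT * FROM provenance WHERE provenance_hash = ?", (provenance_hash,)
        ).fetchone()

    def migrate(self):
        # Hands every stored object back so the controller can validate and transfer it.
        return self.conn.execute("SELECT * FROM provenance").fetchall()
```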
#### Timestamping provider
Timestamping in general can be seen as the act of producing a certificate of existence, in our case even without revealing the object which is attested by the certificate. This can for instance be done for a data object by hashing said object and publishing the produced hash to a medium linear in time, e.g. a public ledger such as blockchain. By doing so it can be proven that something was known before, which corresponds to the property P2 (cf. Section 2.3). With the Bitcoin blockchain there are several timestamping services such as OpenTimestamps or originstamp [9, 18]. In order to submit hashes to the blockchain, timestamping services accumulate hashes in a Merkle tree and only submit its root hash, or use the root hash as private key for a bitcoin address to which the data is sent. This is cost efficient since not every hash is submitted individually which saves transaction costs. At the same time the cost efficiency is traded off by a submission delay of individual hash values. Since a block containing such a transaction is not instantly created and takes some time until it becomes part of the blockchain anyway, this fact might be negligible. However, it needs to be evaluated in future work if and when such a delay might prove undesirable.
We employ this technology by submitting the ProvenanceHash, which is a SHA256-Hash, to such a service and keeping track of these submissions. Because of the mentioned delay, the Timestamping provider implementation has not only to record hashes which are already on the blockchain but also hashes pending to become part of it (as part of a transaction in a block). The provider has to check the submission status periodically in order to retrieve the address to which the transaction is ultimately sent. The provider also ultimately has to make sure that the hash actually becomes part of a block. In order to fulfill this task it might also be necessary to resubmit a hash to the timestamping provider, depending on the respective provider's Service-level agreement (SLA). By doing so the provider accounts for the eventual consistency property of blockchain technology. The data to be stored in this case only consists of the ProvenanceHash and the root value of the Merkle tree in which the ProvenanceHash was included.
By timestamping the ProvenanceHash and storing the necessary information to verify said timestamp, property P2 is supported.
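To illustrate the aggregation step, the following sketch combines a batch of ProvenanceHashes into a single Merkle root before submission; the padding rule for odd levels is an assumption, and real timestamping services such as OpenTimestamps define their own tree construction and submission interface.

```python
import hashlib

def merkle_root(provenance_hashes):
    """Aggregate SHA256 ProvenanceHashes (hex strings) into one Merkle root."""
    level = [bytes.fromhex(h) for h in provenance_hashes]
    if not level:
        raise ValueError("nothing to timestamp")
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0].hex()

# Only this root is submitted to the blockchain; the provider keeps each
# ProvenanceHash together with the root (and, once available, the transaction)
# so that the timestamp can be verified later.
```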
However, it needs to be investigated further how to assert that one data set existed before or after another, while there are timestamps for each, since time and also the order of blocks is a non-trivial issue within blockchain technology.
It becomes even more complicated if the existing timestamps are on different blockchains. The authors of [14] investigated the aspect of time in blockchain-based process execution in their work and also introduced a set of time measures. These measures will be examined for their usability in the context of timestamping and timestamps in our future research.
## 5 Discussion of Open Issues
In this section we discuss open issues not yet covered by our work and highlight directions for future work.
In our previous work [12] we identified adaptability (R1), provenance for FAIR (R2), reproducibility for RARE (R3), and trust (R4) as the main requirements of experts on collaborative data processing pipelines and reiterated them in Section 2.1.
Our approach presented here already accounts for all of them: The evolution of data and models can be retraced through the help of the Provenance Holder since linked lists of choreography/workflow executions and adaptations are recorded. Thus, we address the _provenance_ requirement (R2). The requirement of _reproducibility_ (R3) is addressed by recording all executions and changes, which allows for rerunning the actual data pipeline executions, and thus reproducing them exactly, if, in addition, the necessary access is given to the actual data and models and/or changes thereof. In the future we will evaluate if the results can be reproduced in the way experts from different fields require it for different use cases. The two corresponding requirements, R2 and R3, are addressed by the implemented property P4.
The _trust_ requirement (R4) is addressed through digital signature of all recorded data, i.e. by implementing the properties P1 and P2. Thus, all executions and changes are signed and can be attributed to individual collaboration participants. While _adaptability_ (R1) itself needs to be enabled on WfMS-basis, the Provenance Holder supports provenance of adaptation since all changes, of the type instance migration or ad-hoc changes, are recorded in the above mentioned way, too. Hence, by recording all changes and implementing the properties P1, P2 and P4, this requirement is fulfilled.
The adapter component will have the important task of _identifying and selecting the right execution and adaptation data_ on the service middleware in a generic manner applicable to different types of WfMSs. We currently envision two possible ways of enabling this: a) the needed data is published to specific dedicated topics on the middleware and subscribed by the adapter or b) the adapter picks the appropriate messages and data from the middleware itself. The first option would pose a rather small integration effort on the adapter, however the requirements on the involved environment, i.e. WfMSs and modeling environment, are rather high and more intrusive. The second option, on the other hand, will probably pose no or only minimal additional requirements on the environment but will require higher design and implementation effort on the adapter itself, especially for the cases in which adaptation is not explicitly
annotated in the events published on the service middleware; such research is related to the field of process model drift identification or anomaly detection in process execution, as known from the process mining and BPM communities. The decision of which option to pick and their comparison will be subject of future work as it requires, among all else, rigorous evaluation of quantitative and qualitative performance characteristics.
Besides identifying and selecting the right data, the adapter has also to _identify predecessors_ of said data or at least support this identification process. Furthermore, a mapping of data on the middleware with data stored in the Provenance Holder might be needed.
Furthermore, we find the ability to use existing provenance models, such as PROV-DM to exchange provenance information in a standard manner, beneficial. While this directly calls for a transformation of the provenance information stored by the Provenance Holder into this model, being able to import data recorded so that it follows this model might be beneficial as well for the purposes of standardization and reuse. Moreover, there are already available tools for data provenance visualization (e.g. Prov Viewer13[13], ProvViz14[23]) complying with this standard that could be extended to serve the visualization of the retrieved provenance data.
Footnote 13: [https://github.com/gems-uff/prov-viewer](https://github.com/gems-uff/prov-viewer)
Footnote 14: [https://github.com/benwerner01/provviz](https://github.com/benwerner01/provviz)
We already mentioned the topic of zero knowledge proofs, more specifically non-interactive zero knowledge proofs, in the scope of supporting property P3. Furthermore, their combination with property P4, realised by linking the provenance information objects together, may lead to additional characteristics such as "chained zero knowledge proofs", which we will investigate in our future work.
Storage of provenance related artifacts on an immutable (public) ledger is achievable since a) the actual data is not stored there (no security/privacy implication and considerably small cost implication) and b) data from the past is not amended. While we do not consider this line of research in the scope of our project, storing all data recorded by the Provenance Holder on a public ledger might as well be of interest, as it is definitely an unexplored alternative.
## 6 Conclusions
The focus of our work is to enable the trusted provenance and reproducibility of adaptive collaborative data processing pipelines. Our work is based on the workflow management technology for process automation that has proven to bring significant benefits for automating data processing pipelines, and in particular such pipelines that implement in-silico scientific experiments and business analytics pipelines that are in the focus of data-driven, collaborating enterprises. While reproducibility of data processing pipelines has been in the focus of research, the dimensions of collaboration and trust have been abstracted away in available literature, whereas the adaptation of running data processing pipelines has not been considered at all. The work presented in this paper strives towards
closing this gap by 1) defining the specific properties of such a service enabling trusted provenance of collaborative and adaptive data processing pipelines, 2) contributing an architecture of a generic provenance service, called Provenance Holder, 3) a proof of concept implementation of the approach and 4) identifying the requirements on systems that automate data processing pipelines so that they can integrate with the Provenance Holder service.
The main focus of our future work will be on investigating the applicability of zero knowledge proofs, on the research into the best alternatives for integration with the Provenance Holder with focus on the adapter component, on mapping to the provenance information standards available and on visualization of provenance information of change for experts.
|
2301.08652
|
Cosmic bounce and phantom-like equation of state from tunnelling
|
We allow a scalar field on a flat FLRW background metric to tunnel between
two degenerate vacua. The resulting true vacuum state then violates the Null
Energy Condition, and the corresponding homogeneous fluid induces a bounce,
after which it has a phantom-like equation of state and asymptotically leads to
a de Sitter phase. The mechanism presented here requires no exotic matter or
modified gravity, it is purely generated by quantum fluctuations and is valid
for a generic double well potential.
|
Jean Alexandre, Silvia Pla
|
2023-01-20T16:01:06Z
|
http://arxiv.org/abs/2301.08652v2
|
# Phantom-like equation of state from tunnelling
###### Abstract
We allow a scalar field on a flat FLRW background metric to tunnel between two degenerate vacua. The resulting true vacuum state then violates the Null Energy Condition, and the corresponding homogeneous fluid has a phantom-like equation of state. The mechanism presented here requires no exotic matter or modified gravity, it is purely generated by quantum fluctuations and is valid for a generic double well potential.
+
Footnote †: preprint: KCL-PH-TH/2023-05
###### Contents
* I Introduction
* II Semi-classical approximation and saddle points
* II.1 Assumptions
* II.2 Semi-classical approximation
* II.3 Static saddle points
* II.4 Instanton gas
* III Effective action
* III.1 Symmetric ground state
* III.2 NEC violation
* IV Friedmann Equations
* V Conclusions
* A One-loop effective action in curved space-times
* B Quantisation over instanton configurations
* C Effective action, energy density and pressure
## I Introduction
Unlike spontaneous symmetry breaking (SSB), which occurs in infinite volume, tunnelling involves remarkable energetic features, among which a non-perturbative ground state with no classical analogue. This is at the origin of convexity of the effective potential for a scalar field [1; 2; 3; 4; 5; 6; 7; 8; 9], and thus symmetry restoration.
The explicit calculation of the one-particle-irreducible (1PI) effective potential, taking into account several degenerate vacua, was done in [10; 11] in the semi-classical approximation for the partition function. These studies assumed an \(O(4)\)-symmetric Euclidean space-time, and the corresponding work at finite but low temperature was done in [12] and [13]. The latter works allow for the full tunnelling regime, involving a gas of Euclidean-time-dependent instantons relating two degenerate vacua. It was found that the true ground state for the scalar field is symmetric, and it violates the Null Energy Condition (NEC - see [14; 15] for reviews), because it is non-extensive in the thermodynamical sense. We note here that these results are independent of symmetry-restoration by the Kibble-Zurek mechanism [16; 17], which is valid at high temperatures and does not allow for the NEC to be violated.
The present work extends this tunnelling mechanism to a Friedmann-Lemaitre-Robertson-Walker (FLRW) background metric, where we study the backreaction of the fluid provided by the scalar true vacuum on the metric dynamics. Our assumptions do not involve exotic matter or modified gravity, but a finite volume and an adiabatic expansion instead, both to be defined in the next section. Our results arise purely from quantum fluctuations and they have no classical counterpart.
It is well known that in Quantum Field Theory (QFT) the energy conditions can be violated under certain circumstances. Some examples include the Casimir effect [18], radiation from moving mirrors [19], or black hole evaporation [20]. Another interesting example in curved backgrounds was obtained in [21]. The latter work studies a self-interacting massless field, therefore seeing only one vacuum, and not tunnelling. Also, the background is fixed as a de Sitter metric, whereas in our study the scale factor is determined by the backreaction of the scalar effective vacuum. Nevertheless, it is still possible for the stress-energy tensor to satisfy certain constraints, such as the Averaged Null Energy Condition (ANEC), which averages the NEC over timelike or null geodesics. The mechanism we propose here indeed does not violate the ANEC, since NEC violation is valid temporarily only - see Sec.IV. We note that an eternal inflation scenario is described in [22], which also respects the ANEC.
In Sec.II we describe the semi-classical approximation in which we evaluate the partition function, based on the different saddle points which are relevant for two degenerate vacua: two static configurations and a gas of instantons/anti-instantons. In the situation of non-degenerate vacua, the relevant configurations are the Coleman bounce [23; 24] and the shot [25], with imaginary quantum fluctuations which arise from a negative eigenvalue in the fluctuation determinant [26]. In the present case though, there are not any imaginary quantum corrections, since the (anti-)instantons are monotonic functions of Euclidean time [27].
The effective action is then derived in Sec.III to the lowest order in the field, which is enough to confirm convexity and that the ground state is obtained for a vanishing field, unlike the situation of SSB. This calculation is done in the adiabatic approximation, assuming that the tunnelling rate is large compared to the space-time expansion rate. The vacuum energy induced by tunnelling violates the NEC and has an equation of state of the form
\[w=-1-\hbar\,w_{0}\left(\alpha^{3/2}(t)+\frac{1}{2\alpha^{3/2}(t)}\right)e^{- \alpha^{3}(t)}\, \tag{1}\]
where \(w_{0}>0\) and \(\alpha(t)\) is proportional to the scale factor. The property \(w<-1\) is usually related to a negative kinetic term in the potential (see [28] for a review on phantom energy), but is not the case here: the vacuum we find is homogeneous and its energetic properties arise purely from quantum fluctuations, not from a specific bare action.
In Sec.IV we solve numerically the Friedmann equations, where we study the backreaction of the effective theory on gravity. As expected from NEC violation, the solution exhibits a cosmological bounce [29; 30; 31], known to provide an alternative to Cosmic Inflation [32; 33]. The original idea to generate a bounce from a tunnelling-induced scalar field true vacuum was proposed in [34; 35], in the context of an \(O(4)\)-symmetric Euclidean space-time though, whereas we allow here for the full tunnelling regime, with finite volume and infinite Euclidean time.
Finally, the detailed calculations are presented in Appendix A, B and C.
## II Semi-classical approximation and saddle points
### Assumptions
We consider the classical background metric
\[\mathrm{d}s^{2}=-\mathrm{d}t^{2}+a^{2}(t)\delta_{ij}\mathrm{d}x^{i}\mathrm{d}x ^{j}\, \tag{2}\]
where the scale factor \(a(t)\) is kept generic. The bare matter action is
\[S[\phi]=\int\mathrm{d}^{4}x\sqrt{|g|}(L-j\phi)\, \tag{3}\]
where the Lagrangian \(L\) involves a double-well potential, as well as a non-minimal coupling to the scalar curvature:
\[L=-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}\xi R \phi^{2}-\frac{\lambda}{4!}(\phi^{2}-v^{2})^{2}-\bar{\Lambda}. \tag{4}\]
For convenience, we have also added the cosmological constant term in the matter sector (\(\bar{\Lambda}=\kappa^{-1}\Lambda\) with \(\kappa=8\pi G\)) to account for vacuum energy effects after renormalisation. The important assumptions we make are the following:
* Finite volume, which allows tunnelling between the degenerate vacua. We start from a fundamental flat spatial cell with volume \(V_{0}\) and comoving volume \(a^{3}(t)V_{0}\), which can be thought of as a 3-torus, or a 3-sphere with large enough radius to neglect curvature. Although finite, we assume the parameters of the model to be such that quantisation of momentum can be ignored, and the periodic boundary conditions do not play a role. Related comments on the Casimir effect are given in [13] for tunnelling in flat space-time, and we focus here on the tunnelling features only;
* Adiabatic approximation, where the expansion rate of the metric is assumed small compared to the tunnelling rate for matter. According to the discussion at the end of Sec.II.4, this is valid in the regime \[|H|\equiv\left|\frac{\dot{a}}{a}\right|\ll v\sqrt{\frac{\lambda}{\pi}}\ \alpha^{3/2}\ \exp(-\alpha^{3})\,\] (5) where \(\alpha^{3}(t)=a^{3}(t)S_{0}/\hbar\) and \(S_{0}\) is the action for an instanton interpolating the two vacua \(\pm v\).
As a consequence of the second point, the scale factor \(a(t)\) will be considered constant for the calculation of the matter effective theory, and its time dependence will be reinstated when we couple the matter effective theory to gravity.
### Semi-classical approximation
We work here in Euclidean signature. In the semi-classical approximation, and focusing only on the matter sector for the reasons explained above, the partition function takes the form
\[Z[j]=\int\mathcal{D}[\phi]\exp(-S[\phi]/\hbar)\simeq\sum_{n}Z_{n}[j]\, \tag{6}\]
where
\[Z_{n}=F_{n}[j]\exp(-S[\phi_{n}]/\hbar)\equiv\exp(-\Sigma_{n}[j]/\hbar)\, \tag{7}\]
and \(\phi_{n}\) are the different dominant contributions, the saddle points, which satisfy the equation of motion and minimise the action locally in the space of field configurations. \(F_{n}[j]\) are the fluctuation factors for these saddle points, that we will calculate at one-loop, and \(\Sigma_{n}[j]\) are the corresponding connected graphs generating functionals.
The saddle points \(\phi_{n}\) satisfy then
\[-\frac{1}{\sqrt{g}}\frac{\delta S}{\delta\phi}=j\, \tag{8}\]
and since we consider two degenerate minima, a bubble solution cannot form, because it would have an infinite radius [23; 24]. Hence we focus on homogeneous saddle points only, which can depend on the Euclidean time though. These saddle points obey
\[\ddot{\phi}+\frac{3\dot{a}}{a}\dot{\phi}-\xi R\phi+\frac{\lambda}{6}v^{2}\phi -\frac{\lambda}{6}\phi^{3}=j\, \tag{9}\]
where a dot represents a (Euclidean) time derivative. In the adiabatic approximation, the scale factor \(a\) is assumed constant for the calculation of quantum fluctuations for matter, and we will therefore take \(\dot{a}=0=R\). We discuss below the static saddle points and the instanton gas, with their corresponding connected graphs generating functionals \(\Sigma_{1}[j],\Sigma_{2}[j]\) and \(\Sigma_{gas}[j]\) respectively.
Finally, we are interested in the tunnelling-induced effective potential, such that it is enough to consider a constant source \(j\). A spacetime-dependent source is necessary for the calculation of the derivative part of the effective action only.
### Static saddle points
The static saddle points satisfy
\[v^{2}\phi-\phi^{3}=\frac{6j}{\lambda}\, \tag{10}\]
which, for \(j<j_{c}\equiv\lambda v^{3}/(9\sqrt{3})\), has two real solutions
\[\phi_{1}(j)=\frac{2v}{\sqrt{3}}\cos\left(\frac{\pi}{3}-\frac{1}{3}\arccos(j/j_{ c})\right)\quad\mbox{ and }\quad\phi_{2}(j)=-\phi_{1}(-j)\, \tag{11}\]
with the corresponding actions
\[S_{1}[j]\equiv S[\phi_{1}(j)]=\int\mathrm{d}^{4}x\sqrt{g}\left( \bar{\Lambda}+v\,j-\frac{3}{2v^{2}\lambda}\ j^{2}+\mathcal{O}(j^{3})\right) \tag{12}\] \[S_{2}[j]\equiv S[\phi_{2}(j)]=S_{1}[-j]\.\]
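As a quick numerical sanity check (not part of the original derivation; the parameter values below are arbitrary illustrative choices), the static saddle points of eq. (11) can be evaluated and verified against the cubic saddle-point equation (10):

```python
import numpy as np

# Illustrative (arbitrary) parameter choices, with j < j_c
lam, v = 0.1, 1.0
j_c = lam * v**3 / (9 * np.sqrt(3))          # critical source
j = 0.1 * j_c

def phi1(j):
    # eq. (11): first real solution of the cubic saddle-point equation (10)
    return 2 * v / np.sqrt(3) * np.cos(np.pi / 3 - np.arccos(j / j_c) / 3)

phi_1, phi_2 = phi1(j), -phi1(-j)            # phi2(j) = -phi1(-j)
for phi in (phi_1, phi_2):
    # each should satisfy v^2 phi - phi^3 = 6 j / lam up to round-off
    print(f"residual = {v**2 * phi - phi**3 - 6 * j / lam:.2e}")
```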
The one-loop fluctuation factor for a static saddle point \(\phi_{n}(j)\) is calculated in Appendix A, using the Schwinger proper time representation of the propagator. We find for the corresponding renormalised connected graphs generating functional
\[\Sigma_{n}[j]=\int\mathrm{d}^{4}x\sqrt{g}\left(\bar{\Lambda}_{R}+\frac{\lambda_{R}}{4!}(\phi_{n}^{2}-v_{R}^{2})^{2}+\frac{\hbar\lambda_{R}^{2}}{4608\pi^{2}}\Big{(}G(\phi_{n})+2(3\phi_{n}^{2}-v_{R}^{2})^{2}\ln\big{(}(3\phi_{n}^{2}/v_{R}^{2}-1)/2\big{)}\Big{)}+j\phi_{n}\right) \tag{13}\]
with
\[G(\phi_{n})=-285v_{R}^{4}+366v_{R}^{2}\phi_{n}^{2}-81\phi_{n}^{4}\,, \tag{14}\]
and \(\lambda_{R},v_{R},\Lambda_{R}\) are the renormalised parameters given in Appendix A. The specific form (13), including the renormalised parameters, is chosen in such a way that, in the absence of source we have
\[\Sigma_{n}[0]=\int\mathrm{d}^{4}x\sqrt{g}\,\bar{\Lambda}_{R}\, \tag{15}\]
which makes the discussion on vacuum energy simpler. Note that, in eq.(13), the static saddle points \(\phi_{n}\) can be expressed as in eq.(11), where the parameters can be replaced by the renormalised ones, since they satisfy the equation of motion [11].
### Instanton gas
We describe here Euclidean time-dependent saddle points. In the absence of a source, they obey the following equation
\[\ddot{\phi}+\omega^{2}\phi-\frac{\lambda}{6}\phi^{3}=0\, \tag{16}\]
where \(\omega=v\sqrt{\lambda/6}\), which corresponds to a problem of real-time classical mechanics in the upside-down potential
\[V(\phi)=-\frac{\lambda}{24}(\phi^{2}-v^{2})^{2}\, \tag{17}\]
represented in Fig. 1.
The motion starting asymptotically close to a hilltop and ending asymptotically close to the other hilltop is given by the known solution
\[\phi_{inst}(j=0)=\pm v\tanh\left(\frac{\omega}{\sqrt{2}}(t-t_{1})\right)\, \tag{18}\]
where \(t_{1}\) corresponds to the "jump", where the instanton goes through \(0\), and the corresponding action is
\[S[\phi_{inst}(j=0)]=a^{3}S_{0}\quad\mbox{ with }\quad S_{0}\equiv\frac{2\sqrt{2}}{ \lambda}\omega^{3}V_{0}. \tag{19}\]
Indeed, the field spends a large (Euclidean) time close to a hilltop, with an exponentially small contribution to both the potential and the kinetic energy, and the main contribution to the action comes from the jump. For \(p\) jumps, an exact saddle point is a series of periodic oscillations between the two hills. If the motion starts exponentially close to a hilltop, the distance \(|t_{i+1}-t_{i}|\) between two consecutive jumps is large compared to the width \(2\pi/\omega\) of a jump. The motion is then approximately described by
\[\phi^{(p)}_{inst}(j=0)\simeq\ \ \pm v\tanh\left(\frac{\omega}{\sqrt{2}}(t-t_{1}) \right)\times\tanh\left(\frac{\omega}{\sqrt{2}}(t-t_{2})\right)\times\cdots \times\tanh\left(\frac{\omega}{\sqrt{2}}(t-t_{p})\right)\, \tag{20}\]
where the times \(t_{i}\) are regularly spread along the Euclidean time (see Fig.2a), and the corresponding action is
\[S[\phi^{(p)}_{inst}(j=0)]\simeq p\ a^{3}S_{0}. \tag{21}\]
The above action remains unchanged when the jumps are shifted though, provided the condition \(|t_{i+1}-t_{i}|\gg 2\pi/\omega\) is satisfied, which is called the dilute gas approximation (see Fig.2b). As a consequence, all the corresponding configurations in the path integral \(Z\) contribute as much as the exact solution of the equation of motion. The invariance of the action under the translation of jumps has a high degeneracy, making this dilute gas dominant in \(Z\).
We show in Appendix B that, in the presence of a source, the summation over all the \(p\)-jump saddle points leads to
\[\Sigma_{gas}[j]\simeq\frac{1}{2}\big{(}\Sigma_{1}[j]+\Sigma_{2}[j]\big{)}-\hbar \ln\big{(}\exp(\bar{N})-1\big{)}\equiv-\hbar\ln Z_{gas}[j]\, \tag{22}\]
where the statistical average number of jumps between the two static saddle points is
\[\bar{N}=\sqrt{g_{00}}\ \omega T\sqrt{\frac{6a^{3}S_{0}}{\hbar\pi}}e^{-a^{3}S_{0}/ \hbar}. \tag{23}\]
We note that the parameters in the latter equation can be understood as the renormalised ones, since the contribution of \(\bar{N}\) is at one-loop already.
The exponential of \(\bar{N}\) appearing in the partition function is a known feature in tunnelling studies, and it arises from the summation over the zero modes of each (anti-)instanton (see Appendix B for details). Note that we are interested here in the situation where \(S_{0}\) is fixed and the total Euclidean time \(T\) goes to infinity, such that \(\bar{N}\) is assumed to be large. An alternative situation, relevant at finite temperature, consists in fixing \(T\) and taking \(S_{0}\to\infty\), such that \(\bar{N}\to 0\). This corresponds to the suppression of tunnelling, and where SSB provides a better description of the system [12]. The expression (23) can be understood as the total Euclidean time \(T\) multiplied by the tunnelling rate \(\omega\sqrt{6a^{3}S_{0}/\hbar\pi}\ e^{-a^{3}S_{0}/\hbar}\).
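For illustration only, and with arbitrary parameter values rather than ones taken from the paper, the short sketch below evaluates \(\bar{N}\) of eq. (23) as the total Euclidean time multiplied by the tunnelling rate, and checks that this rate coincides with the form entering the adiabaticity condition (5):

```python
import numpy as np

hbar = 1.0
lam, v, V0, a = 0.5, 2.0, 10.0, 1.0      # arbitrary illustrative values
T = 1e6                                   # total Euclidean time (taking g00 = 1)

omega = v * np.sqrt(lam / 6)
S0 = 2 * np.sqrt(2) / lam * omega**3 * V0        # instanton action, eq. (19)
alpha3 = a**3 * S0 / hbar

# Tunnelling rate appearing in eq. (23) ...
rate = omega * np.sqrt(6 * alpha3 / np.pi) * np.exp(-alpha3)
# ... and the equivalent form used in the adiabaticity condition (5)
rate_eq5 = v * np.sqrt(lam / np.pi) * alpha3**0.5 * np.exp(-alpha3)

N_bar = T * rate                                  # eq. (23)
print(N_bar, rate, rate_eq5)                      # the two rates agree
```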
## III Effective action
We describe here the main steps for the construction of the effective theory, as well as its energetic properties. The details can be found in Appendix (C).
Figure 1: The upside-down potential \(V(\phi)\) in which the field oscillates. One instanton corresponds to the motion from infinitesimally close to one hilltop to infinitesimally close to the other.
### Symmetric ground state
From the previous section, the partition function can be expressed as
\[Z[j] \simeq Z_{1}[j]+Z_{2}[j]+Z_{gas}[j] \tag{24}\] \[=\exp\left(-\frac{1}{\hbar}\Sigma_{1}[j]\right)+\exp\left(-\frac{ 1}{\hbar}\Sigma_{2}[j]\right)+(\exp(\bar{N})-1)\exp\left(-\frac{1}{2\hbar} \Big{(}\Sigma_{1}[j]+\Sigma_{2}[j]\Big{)}\right)\,\]
from which one can derive the classical field \(\phi_{c}\), which corresponds to the vacuum expectation value in the presence of the source \(j\)
\[\phi_{c}=\frac{-\hbar}{Z(j)\sqrt{g}}\frac{\delta Z}{\delta j}=-M^{-2}\,j+ \mathcal{O}(j^{3}). \tag{25}\]
Figure 2: Example of exact and approximate saddle points. In the dilute gas approximation, the difference between the corresponding actions is exponentially small, and the partition function is dominated by the whole set of approximate saddle points.
In the previous expression and in the limit where \(T\to\infty\), we show in Appendix C that
\[M^{-2}=\frac{3}{\lambda_{R}v_{R}^{2}}\left(1+\frac{27\hbar\lambda_{R}}{32\pi^{2}} \right)+\mathcal{O}(\hbar^{2}). \tag{26}\]
We note that \(\phi_{c}\) is proportional to \(j\), showing symmetry restoration: the vacuum for \(j=0\) is at \(\phi_{c}=0\).
The relation \(\phi_{c}[j]\) is then inverted to
\[j[\phi_{c}]=-M^{2}\phi_{c}+\mathcal{O}(\phi_{c}^{3})\, \tag{27}\]
and the 1PI effective action, defined through the Legendre transform as a functional of \(\phi_{c}\), is
\[\Gamma[\phi_{c}] =-\hbar\ln Z[j\big{[}\phi_{c}]\big{]}-\int\mathrm{d}^{4}x\sqrt{g} \ \phi_{c}\ j[\phi_{c}] \tag{28}\] \[=\Gamma[0]+\frac{1}{2}\int\mathrm{d}^{4}x\sqrt{g}\ M^{2}\phi_{c} ^{2}+\mathcal{O}(\phi_{c}^{4})\.\]
In the previous expression, the effective action for the ground state reads
\[\Gamma[0] =\int\mathrm{d}^{4}x\sqrt{g}\,\bar{\Lambda}_{R}-\hbar\ln(e^{\bar {N}}+1) \tag{29}\] \[\simeq\int\mathrm{d}^{4}x\sqrt{g}\,\bar{\Lambda}_{R}-\hbar\bar{N}\.\]
To summarise the essential features of the effective action (28):
* it is convex, since \(M^{2}>0\), and has its ground state at \(\phi_{c}=0\);
* the ground state energy has a non-trivial dependence on the comoving volume, via \(\bar{N}\), and is therefore not extensive in the usual thermodynamical sense.
### NEC violation
For simplicity, in what follows we will drop the sub-index \({}_{R}\) and all the parameters should be understood as the renormalised ones.
We focus here on the fluid provided by the ground state \(\phi_{c}=0\). In order to obtain the energy density and the pressure, we need to represent \(\Gamma[0]\) and thus \(\bar{N}\) as the integral over a Lagrangian density, restoring the time dependence of the scale factor. This is done in Appendix C, where we show that the expression (29) can be written as
\[\Gamma[0]=\int\mathrm{d}^{4}x\sqrt{g}\left(\bar{\Lambda}-\rho_{0}\ \frac{e^{-\alpha^{3}}}{\alpha^{3/2}}\right)\, \tag{30}\]
where
\[\alpha^{3}\equiv a^{3}\frac{S_{0}}{\hbar}\ \ \ \ \text{and}\ \ \ \ \rho_{0}\equiv\hbar\ \frac{\omega}{V_{0}}\frac{S_{0}}{\hbar}\sqrt{\frac{6}{\pi}}=\frac{\lambda \ v^{4}}{3\sqrt{3\pi}}. \tag{31}\]
From eq.(30), the energy density and the pressure are obtained from the components of the energy-momentum tensor
\[T_{\mu\nu}=\frac{2}{\sqrt{g}}\frac{\delta\Gamma[0]}{\delta g^{\mu\nu}}\, \tag{32}\]
and we find
\[\rho =\bar{\Lambda}-\rho_{0}\ \frac{e^{-\alpha^{3}}}{\alpha^{3/2}}\, \tag{33}\] \[p =-\bar{\Lambda}+\rho_{0}\left(\frac{1}{2\alpha^{3/2}}-\alpha^{3/2 }\right)e^{-\alpha^{3}}\.\]
The fluid provided by the ground state therefore features the following properties:
* It consistently satisfies the (real-time) continuity equation \(\dot{\rho}+3H(\rho+p)=0\);
* It violates the NEC \[\rho+p=-\rho_{0}\ e^{-\alpha^{3}}\left(\alpha^{3/2}+\frac{1}{2\alpha^{3/2}} \right)\ <0\ ;\]
* Assuming \(e^{-\alpha^{3}}\ll 1\), its equation of state has the phantom form \[w=\frac{p}{\rho}\simeq-1-\frac{\rho_{0}}{\bar{\Lambda}}\,e^{-\alpha^{3}}\left( \alpha^{3/2}+\frac{1}{2\alpha^{3/2}}\right)\ <-1\.\] (34)
We stress here an important point: the property \(w<-1\) does not arise from a kinetic energy with the opposite sign, but is a consequence of tunnelling between the two degenerate bare vacua, which induces a homogeneous symmetric ground state.
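A minimal numerical sketch of these energetic properties is given below; the ratio \(r=\rho_{0}/\bar{\Lambda}\) and the range of \(\alpha\) are arbitrary illustrative choices, taken in the regime where \(e^{-\alpha^{3}}\) is not too large so that \(\rho>0\). It evaluates eqs. (33) and shows that \(\rho+p<0\) throughout while \(w\to-1\) from below as \(\alpha\) grows:

```python
import numpy as np

r = 2.0                          # arbitrary illustrative ratio rho_0 / Lambda_bar
alpha = np.linspace(1.0, 3.0, 6)

# eqs. (33), in units of Lambda_bar (i.e. rho and p divided by Lambda_bar)
rho = 1.0 - r * np.exp(-alpha**3) / alpha**1.5
p = -1.0 + r * (0.5 / alpha**1.5 - alpha**1.5) * np.exp(-alpha**3)

nec = rho + p                    # negative for all alpha: the NEC is violated
w = p / rho                      # phantom-like, approaching -1 from below

for a_, n_, w_ in zip(alpha, nec, w):
    print(f"alpha = {a_:.2f}   rho+p = {n_:+.3e}   w = {w_:.4f}")
```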
## IV Friedmann equations
In this section we go back to Lorentzian signature. As explained in the introduction, we study the back-reaction of the effective theory on the metric, such that the energy-momentum tensor in the Einstein equations \(G_{\mu\nu}=\kappa T_{\mu\nu}\) contains the energy density and pressure given by eqs. (33), and \(\kappa\) is the renormalised gravity coupling. The resulting Friedmann equations read
\[H^{2} = \frac{\kappa}{3}\rho \tag{35}\] \[\frac{\ddot{a}}{a} = -\frac{\kappa}{6}(\rho+3p)\,\]
that we study here numerically. The first equation \(H^{2}\propto\rho\) gives the initial condition \(\dot{a}_{0}\) once \(a_{0}\) is known, and the second equation provides the evolution equation for \(a(t)\). We then introduce the dimensionless time
\[\tau\equiv t\ \sqrt{\frac{\Lambda}{3}}\, \tag{36}\]
and we use the expressions (33) to obtain from eqs.(35)
\[\alpha^{\prime} = \pm\alpha\sqrt{1-r\ \frac{e^{-\alpha^{3}}}{\alpha^{3/2}}} \tag{37}\] \[\frac{\alpha^{\prime\prime}}{\alpha} = 1-\frac{r\ e^{-\alpha^{3}}}{4\ \alpha^{3/2}}(1-6\alpha^{3})\,\]
where a prime denotes a derivative with respect to \(\tau\) and
\[r=\kappa\frac{\rho_{0}}{\Lambda}=\frac{\rho_{0}}{\bar{\Lambda}}. \tag{38}\]
The Friedmann equations (37) are solved numerically, and we plot in Figure 3 the solutions corresponding to a fixed value of \(\alpha(0)\) and different values of the parameter \(r\). The initial condition for \(\alpha^{\prime}(0)\) is given by the negative branch \(\alpha^{\prime}(0)<0\) of the first Friedmann equation, in order to describe a cosmological bounce induced by the phantom-like fluid. We see that such a bounce is indeed generated, after which the expansion suppresses tunnelling: the NEC is recovered and the metric dynamics enters a de Sitter phase, with constant \(H\).
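The following sketch (assuming NumPy and SciPy are available; the value of \(r\) and the integration range are our illustrative choices, not quantities taken from the paper) integrates the dimensionless Friedmann equations (37) with \(\alpha(0)=1\) and the contracting branch for \(\alpha^{\prime}(0)\); it should reproduce the qualitative bounce of Figure 3:

```python
import numpy as np
from scipy.integrate import solve_ivp

r = 2.0                                   # as for the solid line of Figure 3

def rhs(tau, y):
    """Second equation of (37): y = (alpha, alpha')."""
    alpha, dalpha = y
    ddalpha = alpha * (1.0 - r * np.exp(-alpha**3) / (4 * alpha**1.5)
                       * (1.0 - 6 * alpha**3))
    return [dalpha, ddalpha]

alpha0 = 1.0
# Initial condition from the first equation of (37), negative (contracting) branch
dalpha0 = -alpha0 * np.sqrt(1.0 - r * np.exp(-alpha0**3) / alpha0**1.5)

sol = solve_ivp(rhs, (0.0, 6.0), [alpha0, dalpha0], dense_output=True, rtol=1e-8)

tau = np.linspace(0.0, 6.0, 7)
alpha, dalpha = sol.sol(tau)
H = dalpha / alpha                        # scaled Hubble rate, as in Figure 3
for t_, a_, h_ in zip(tau, alpha, H):
    print(f"tau = {t_:.1f}   alpha = {a_:.3f}   H = {h_:+.3f}")
```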
## V Conclusions
We have described how the energetic properties arising from tunnelling could be relevant in a cosmological context, starting from standard QFT and Einstein gravity. To summarise the non-perturbative mechanism described in this article:
_(a)_ The effective theory taking into account tunnelling between two degenerate vacua is obtained by considering the contribution of different saddle points in the partition function; _(b)_ As a consequence of this interplay between the two vacua \(\pm v\), the resulting true vacuum is at \(\phi_{c}=0\), with an energy which is not proportional to the comoving volume; _(c)_ This non-extensive feature of the vacuum energy implies NEC violation; _(d)_ The NEC violation induces a cosmological bounce in the case of initial spacetime contraction, and is valid until the resulting expansion suppresses tunnelling, such that the ANEC is satisfied.
The adiabatic approximation is well justified in the vicinity of the cosmological bounce, but out-of-equilibrium studies would be necessary to include the full time dependence of the scale factor if one wishes to look at what happens away from the bounce. A related improvement to this work would be to derive our results in a manifestly covariant way.
Regarding the assumption of finite-volume FLRW space-time, this study has required a toy-model geometry/topology, in the form of a 3-torus or 3-sphere, and thus still needs to be developed for phenomenological purposes. Also, quantum corrections in a finite volume should in principle take into account discrete momentum, as well as periodic boundary conditions. This is done in the framework of Casimir effect studies [36], whereas the present article focuses on the tunnel effect, with continuous momentum and effectively Dirichlet boundary conditions. A natural further step would then be to consider a discrete spectrum, which could be done numerically for example.
The situation of non-degenerate minima would avoid making the finite-volume assumption, since the relevant instanton action (the Coleman bounce saddle point) is independent of the volume. In this case, quantum fluctuations for the latter saddle point would involve an imaginary part, which should be cancelled by the imaginary part induced by other saddle points [25], since the effective potential is real. The whole process is challenging to describe analytically in more than 0-dimensional space-time though, but is a potential avenue to explore, since it could be relevant as a component of Dark Energy.

Figure 3: Time evolution of the scaled scale factor \(\alpha\) (upper panel) and the scaled Hubble rate \({\cal H}=\alpha^{\prime}/\alpha\) (lower panel) with initial condition \(\alpha(0)=1\), for three different values of \(r\), namely \(r=2\) (solid line), \(r=1\) (dashed line) and \(r=0.5\) (dashed-dotted line).
## Acknowledgements
JA would like to thank Janos Polonyi for enlightening discussions. SP thanks Jose Navarro-Salas for very useful comments. This work is supported by the Leverhulme Trust (grant RPG-2021-299) and the Science and Technology Facilities Council (grant STFC-ST/T000759/1). For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
## Appendix A One-loop effective action in curved space-times
In this appendix we review the main steps to obtain the one-loop effective action for a real scalar field in a double-well potential, propagating in a curved background with Euclidean signature. We focus here on one saddle point only.
For renormalisation purposes, we need to consider the bare action of this model
\[S[\phi,g]=\int\mathrm{d}^{d}x\sqrt{g}\left(\frac{1}{2}g^{\mu\nu}\partial_{\mu }\phi\partial_{\nu}\phi+\frac{1}{2}\xi R\phi^{2}+\frac{\lambda}{4!}(\phi^{2}- v^{2})^{2}+\bar{\Lambda}+j\phi\right)\, \tag{10}\]
together with the semi-classical action for gravity 1
Footnote 1: We note that the Euclidean form of the Lagrangian differs by a minus sign from its Lorentzian form.
\[S_{G}[g]=-\int\mathrm{d}^{d}x\sqrt{g}\Big{[}(2\kappa)^{-1}R+(\epsilon_{1}R^{ 2}+\epsilon_{2}R^{\mu\nu}R_{\mu\nu}+\epsilon_{3}R^{\mu\nu\rho\sigma}R_{\mu \nu\rho\sigma})\Big{]}\, \tag{11}\]
in \(d\) space-time dimensions, where \(\kappa=8\pi G\), and \(\bar{\Lambda}=\kappa^{-1}\Lambda\). For convenience, we have included the cosmological constant term in the matter sector. The inclusion of the higher curvature terms is needed for the cancellation of the divergences that arise in this context. In this setup, the Klein-Gordon equation for the scalar field is
\[(-\Box_{E}+\xi R-\tfrac{\lambda}{6}v^{2}+\tfrac{\lambda}{3!}\phi^{2})\phi+j=0\, \tag{12}\]
where \(\Box_{E}=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}=\tfrac{1}{\sqrt{g}}\partial_{\mu}(\sqrt{g}g^{\mu\nu}\partial_{\nu})\), and the scalar field can be expanded around a saddle point \(\phi=\phi_{s}+\delta\phi\). The associated Euclidean Green's function for the quantum fluctuation \(\delta\phi\) reads
\[(-\Box_{E}+Q)G_{E}(x,x^{\prime})=\frac{1}{\sqrt{g}}\delta^{(4)}(x-x^{\prime})\,, \tag{13}\]
where
\[Q=\frac{\lambda}{2}\phi_{s}^{2}-\frac{\lambda v^{2}}{6}+\xi R\,. \tag{14}\]
The one-loop correction to the classical action can be written in terms of the Green's function as [37]
\[\Sigma[\phi_{s},g] =S_{G}[g]+S[\phi_{s},g]-\tfrac{1}{2}\hbar\ln\mathrm{Det}\,G_{E} \tag{15}\] \[\equiv S_{G}[g]+S[\phi_{s},g]+\Sigma^{(1)}[\phi_{s},g]\,.\]
For general background configurations, the Green's function is unknown. However, an approximated expression for the quantum contribution \(\Sigma^{(1)}[\phi_{s},g]\) in the case of slowly varying background fields \(\phi_{s}\) and \(g\) can be computed using the proper-time formalism as follows (see Refs. [38; 39] for a detailed explanation).
The DeWitt-Schwinger representation of the propagator \(G_{E}(x,x^{\prime})\) is given by
\[G_{E}(x,x^{\prime})=\int_{0}^{\infty}\mathrm{d}s\,H(x,x^{\prime};s)\,, \tag{16}\]
where the kernel \(H(x,x^{\prime};s)\) obeys a diffusion equation with appropriate boundary conditions [40]. For the one-loop connected graph, it translates into
\[\Sigma^{(1)}[\phi_{s},g]=\,\frac{\hbar}{2}\int\mathrm{d}^{d}x\sqrt{g}\int_{0}^{ \infty}\frac{\mathrm{d}s}{s}\,H(x,x;s)\,. \tag{100}\]
The kernel \(H(x,x^{\prime};s)\) admits, in general, an asymptotic expansion in terms of the Schwinger proper-time parameter [41]. At coincidence \(x^{\prime}\to x\) this expansion reads
\[H(x,x;s)=\frac{e^{-m^{2}s}}{(4\pi s)^{d/2}}\sum_{k=0}^{\infty}a_{k}(x)\,s^{k}\,. \tag{101}\]
where \(a_{k}(x)\) are the so-called DeWitt coefficients and \(d\) is the space-time dimension. The first few coefficients are [40; 42]
\[a_{0} = 1\,; \tag{102}\] \[a_{1} = \frac{1}{6}R-Q\,;\] (103) \[a_{2} = -\frac{1}{180}R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma \delta}-\frac{1}{180}R^{\alpha\beta}R_{\alpha\beta}-\frac{1}{30}\Box_{E}R+ \frac{1}{6}\Box_{E}Q+\frac{1}{2}Q^{2}-\frac{1}{6}RQ+\frac{1}{72}R^{2}\,. \tag{104}\]
This expansion captures, in its leading orders, the UV divergences (\(s\to 0\)) of the theory and it is routinely used for renormalisation in the context of QFT in curved spaces.
The expansion above (101) has an important property: it admits an exact resummation [43; 44]
\[H(x,x;s)=\frac{e^{-\mathcal{M}^{2}s}}{(4\pi s)^{d/2}}\sum_{k=0}^{\infty}b_{k}( x)\,s^{k}\,, \tag{105}\]
with
\[\mathcal{M}^{2}=Q-\frac{1}{6}R\,, \tag{106}\]
such that the new coefficients \(b_{k}(x)\) do not contain any terms that vanish when \(Q\) and \(R\) are replaced by zero. For example, for the first resummed DeWitt coefficients we have
\[b_{0} = 1\,; \tag{107}\] \[b_{1} = 0\,;\] (108) \[b_{2} = -\frac{1}{180}R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma \delta}-\frac{1}{180}R^{\alpha\beta}R_{\alpha\beta}-\frac{1}{30}\Box_{E}R+ \frac{1}{6}\Box_{E}Q\;. \tag{109}\]
Therefore, the resummed expansion becomes a derivative expansion in the field \(\phi_{s}\) and the metric, physically meaningful in the case of slowly varying background fields. Then, it is possible to truncate the expansion at a given order \(N\) - the order of derivatives - to obtain an approximated expression for the one-loop connected graph
\[\Sigma^{(1)}[\phi_{s},g]=\,\frac{\hbar}{2}\int\mathrm{d}^{d}x\sqrt{g}\int_{0} ^{\infty}\frac{\mathrm{d}s}{s}\;\;\frac{e^{-\mathcal{M}^{2}s}}{(4\pi s)^{d/2} }\sum_{k=0}^{N}b_{k}(x)\,s^{k}\,. \tag{110}\]
The expression above is divergent for \(d=4\) and can be renormalised using dimensional regularization. For arbitrary dimension \(d\), the proper-time integrals can be performed to give
\[\Sigma^{(1)}[\phi_{s},g]=\frac{\hbar}{(4\pi)^{d/2}}\left(\frac{\mathcal{M}}{ \mu_{d}}\right)^{d-4}\int\mathrm{d}^{d}x\sqrt{g}\;\sum_{k=0}^{N}b_{k}(x) \mathcal{M}^{d-2k}\,\Gamma\left(k-\frac{d}{2}\right)\;. \tag{111}\]
We have introduced a renormalisation mass parameter to proceed with dimensional regularization in what follows. Truncating the sum at \(N=2\) and expanding around \(d\to 4\) we find
\[\Sigma^{(1)}=\hbar\int\mathrm{d}^{4}x\sqrt{g}\left[\frac{\mathcal{M}^{4}}{64 \pi^{2}}\Big{[}\ln\left(\frac{\mathcal{M}^{2}}{\mu^{2}}\right)-\frac{3}{2} \Big{]}+\frac{b_{2}}{32\pi^{2}}\ln\left(\frac{\mathcal{M}^{2}}{\mu^{2}} \right)\right]\;, \tag{112}\]
where \({\cal M}^{2}>0\) since we quantise about stable saddle points and the curvature effects are expected to be small. In the above expression, the divergences have been absorbed in the scale parameter \(\mu\), which is defined by
\[\ln\mu^{2}=\ln\Big{(}4\pi\mu_{d}^{2}\Big{)}-\gamma-\frac{2}{d-4}\quad\text{( finite when $d\to 4$)}. \tag{100}\]
From these expressions, we can directly obtain the renormalised values of the coupling constants of the problem (see, for example Ref. [45]).
In our particular problem, we assume an adiabatic expansion of the universe and that the quantum processes under consideration occur at equilibrium; hence we neglect the curvature of space-time and are only interested in the couplings \(\lambda,\ v,\ \bar{\Lambda}\). For simplicity, we will follow [46], and apply the renormalisation conditions at the same scale for all bare parameters, namely,
\[3\frac{\partial^{2}L}{\partial\phi^{2}}\Big{|}_{\phi=\pm v_{R},g=\eta}=\lambda_{R}\,v_{R}^{2}\,,\qquad\frac{\partial^{4}L}{\partial\phi^{4}}\Big{|}_{\phi=\pm v_{R},g=\eta}=\lambda_{R}\,,\qquad L\Big{|}_{\phi=\pm v_{R},g=\eta}=\bar{\Lambda}_{R}\, \tag{101}\]
where \(\eta\) is the Euclidean flat metric and \(\Sigma=\int\mathrm{d}^{4}x\sqrt{g}\ L\). From these conditions we obtain
\[\delta\lambda = \frac{3\lambda_{R}^{2}}{32\pi^{2}}\left(3+\ln(\frac{v_{R}^{2} \lambda_{R}}{3\mu^{2}})\right)\,, \tag{102}\] \[\delta v^{2} = \frac{v_{R}^{2}\lambda_{R}}{16\pi^{2}}\left(10-\ln(\frac{v_{R}^{2 }\lambda_{R}}{3\mu^{2}})\right)\,,\] (103) \[\delta\bar{\Lambda} = \frac{v_{R}^{4}\lambda_{R}^{2}}{1152\pi^{2}}\left(-3+2\ln(\frac{ v_{R}^{2}\lambda_{R}}{3\mu^{2}})\right)\, \tag{104}\]
where we define \(\lambda_{R}=\lambda+\hbar\ \delta\lambda\), \(v_{R}^{2}=v^{2}+\hbar\ \delta v^{2}\), and \(\bar{\Lambda}_{R}=\bar{\Lambda}+\hbar\ \delta\bar{\Lambda}\). Inserting these results in (100) and assuming \(R=0\) and \(\phi_{s}\) static, we obtain the final renormalised connected graph given in Sec. II.3.
For completeness, we also give the renormalised values of \(\kappa^{-1}\) and \(\xi\). The renormalisation conditions we impose are
\[-2\frac{\partial L}{\partial R}-\xi_{R}\phi^{2}\Big{|}_{\phi=\pm v_{R},g=\eta }=\kappa_{R}^{-1}\,,\qquad\frac{\partial^{3}L}{\partial R\partial\phi^{2}} \Big{|}_{\phi=\pm v_{R},g=\eta}=\xi_{R}\,, \tag{105}\]
that lead to
\[\delta\xi = \frac{\lambda_{R}(6\xi_{R}-1)}{192\pi^{2}}\left(3+\ln(\frac{v_{R }^{2}\lambda_{R}}{3\mu^{2}})\right)\,, \tag{106}\] \[\delta(\kappa^{-1}) = \frac{v_{R}^{2}\lambda_{R}(6\xi_{R}-1)}{2304\pi^{2}}\left(11+\ln( \frac{v_{R}^{2}\lambda_{R}}{3\mu^{2}})\right)\,. \tag{107}\]
## Appendix B Quantisation over instanton configurations
In Section II.4 we describe a few features of the gas of instantons for a vanishing source. In the presence of an infinitesimal source \(j\ll j_{c}\), the jump is not modified, and what changes is the position of the asymptotically "flat" parts of the instantons, which now go from one saddle point \(\phi_{i}(j)\) to the other, instead of going from one vacuum \(\pm v\) to the other \(\mp v\). We have then, instead of eq.(19),
\[S[\phi_{inst}(j)]\simeq a^{3}S_{0}+\frac{1}{2}\big{(}S_{1}[j]+S_{2}[j]\big{)}\, \tag{108}\]
since on average the configuration spends half the Euclidean time exponentially close to \(\phi_{1}(j)\) and the other half close to \(\phi_{2}(j)\). The contribution of one instanton \(F_{inst}\exp(-S[\phi_{inst}]/\hbar)\) to the partition function is the product of the following contributions
* The "flat" part close to each static saddle point, leading to the fluctuation factor \(F_{i}\) about each of the static saddle points, for half of the total Euclidean time \[\sqrt{F_{1}F_{2}}e^{-(S_{1}+S_{2})/(2\hbar)}=\exp\left(-\frac{1}{2\hbar}\big{(} \Sigma_{1}[j]+\Sigma_{2}[j]\big{)}\right)\,\] (109) where \(\Sigma_{n}[j]\) is given in eq.(13).
* Fluctuations above one jump which, discounting the zero mode corresponding to the translational invariance of the jump, lead to the factor (see [27; 47]) \[\sqrt{\frac{6a^{3}S_{0}}{\hbar\pi}}\ ;\] (14)
* The zero mode corresponding to the position of the jump, which can happen at any Euclidean time between \(0\) and \(T\), and thus gives the extra factor \[\omega\int_{0}^{T}\sqrt{g_{00}}\ \mathrm{d}t=\sqrt{g_{00}}\ \omega T\.\] (15) Note that the summation over the different positions of the jump is done with the comoving proper time, since the jump is observed by the comoving observer. Here, \(S_{0}\) and \(\omega\) are defined with the renormalised parameters.
All together, the contribution of one instanton to the partition function is
\[F_{inst}\exp\left(-\frac{S[\phi_{inst}]}{\hbar}\right)=\sqrt{g_{00}}\ \omega T\sqrt{\frac{6a^{3}S_{0}}{\hbar\pi}}\exp\left(-a^{3}\frac{S_{0}}{\hbar} -\frac{1}{2\hbar}\big{(}\Sigma_{1}[j]+\Sigma_{2}[j]\big{)}\right). \tag{16}\]
For a \(p\)-jump saddle point in the dilute gas approximation, and where the width of an instanton is negligible compared to the total Euclidean time \(T\), the classical action is
\[S[\phi_{inst}^{p}(j)]\simeq pa^{3}S_{0}+\frac{1}{2}\big{(}S_{1}[j]+S_{2}[j] \big{)}. \tag{17}\]
Also, whereas the first jump can happen at any time \(t_{1}\in[0,T]\), the jump \(i\) can happen at a time \(t_{i}\in[t_{i-1},T]\) only, such that the degeneracy of a \(p\)-jump configuration leads to the factor [27]
\[\prod_{i=1}^{p}\left(\omega\int_{t_{i-1}}^{T}\sqrt{g_{00}}\ \mathrm{d}t_{i} \right)=\frac{1}{p!}(\sqrt{g_{00}}\ \omega T)^{p}\qquad\text{(with $t_{0}=0$)}. \tag{18}\]
Summing over all the possibilities for \(p\), we obtain the final expression for the dilute gas contribution to the partition function
\[\exp\left(-\frac{1}{\hbar}\Sigma_{gas}[j]\right) =\sum_{p=1}^{\infty}\frac{1}{p!}(\sqrt{g_{00}}\ \omega T)^{p}\left(\frac{6a^{3}S_{0}}{\hbar\pi}\right)^{p/2}\exp\left(-pa^{3} \frac{S_{0}}{\hbar}-\frac{1}{2\hbar}\big{(}\Sigma_{1}[j]+\Sigma_{2}[j]\big{)}\right) \tag{19}\] \[=\exp\left(-\frac{1}{2\hbar}\big{(}\Sigma_{1}[j]+\Sigma_{2}[j] \big{)}\right)\left[\exp\left(\sqrt{g_{00}}\ \omega T\sqrt{\frac{6a^{3}S_{0}}{\hbar\pi}}e^{-a^{3}S_{0}/\hbar}\right)-1 \right]\.\]
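The final step above sums the number of jumps \(p\) into an exponential; as a small numerical consistency check (ours, not part of the paper), one can verify that \(\sum_{p\geq 1}x^{p}/p!=e^{x}-1\):

```python
from math import exp, factorial

x = 1.7                                          # arbitrary test value for the p-sum
partial_sum = sum(x**p / factorial(p) for p in range(1, 40))
print(partial_sum, exp(x) - 1)                   # the two agree to machine precision
```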
## Appendix C Effective action, energy density and pressure
We give here details on the derivation of the one-loop effective action. We start from the partition function
\[Z[j] =Z_{1}[j]+Z_{2}[j]+Z_{gas}[j] \tag{20}\] \[=e^{-\Sigma_{1}/\hbar}+e^{-\Sigma_{2}/\hbar}+(e^{\bar{N}}-1)e^{-( \Sigma_{1}+\Sigma_{2})/2\hbar}\,\]
where \(\Sigma_{2}[j]=\Sigma_{1}[-j]\) which, for small source, can be expanded as
\[\Sigma_{1,2}[j]=\int\mathrm{d}^{4}x\sqrt{g}\left(\bar{\Lambda}_{R}\pm\sigma_{( 1)}\,j+\frac{1}{2}\sigma_{(2)}\,j^{2}+\mathcal{O}(j^{3})\right)\, \tag{21}\]
with
\[\sigma_{(1)}=v_{R}-\hbar\frac{9\lambda_{R}v_{R}}{32\pi^{2}}\,,\qquad\sigma_{( 2)}=-\frac{3}{v_{R}^{2}\lambda_{R}}-\hbar\,\frac{81}{32\pi^{2}v_{R}^{2}}. \tag{22}\]
The classical field \(\phi_{c}\) is
\[\phi_{c}=\frac{-\hbar}{Z(j)\sqrt{g}}\frac{\delta Z}{\delta j}=-M^{-2}\,j+{\cal O} (j^{3})\, \tag{100}\]
with
\[M^{-2}=-\sigma_{(2)}+\frac{V_{4}}{\hbar}\frac{2}{(e^{\bar{N}}+1)}\sigma_{(1)}^{2 }=\frac{3}{\lambda_{R}v_{R}^{2}}\left(1+\frac{2A}{3}+\hbar\lambda_{R}\,\frac{2 7}{2\pi^{2}}\left(\frac{1}{16}-A\right)\right)+{\cal O}(\hbar^{2})\,, \tag{101}\]
and
\[V_{4}=\int{\rm d}^{4}x\sqrt{g}\quad,\quad\ A=\frac{V_{4}\,\omega_{R}^{4}}{ \hbar\lambda_{R}(e^{\bar{N}}+1)}. \tag{102}\]
The relation \(\phi_{c}[j]\) is then inverted to \(j[\phi_{c}]\), in order to define the 1PI effective action as the Legendre transform
\[\Gamma[\phi_{c}]=-\hbar\ln Z[j\big{[}\phi_{c}]\big{]}-\int{\rm d}^{4}x\sqrt{g} \ \phi_{c}\ j[\phi_{c}]. \tag{103}\]
An expansion in the classical field finally gives
\[\Gamma[\phi_{c}]=\Gamma[0]+\int{\rm d}^{4}x\sqrt{g}\ \frac{M^{2}}{2}\phi_{c}^{2 }+{\cal O}(\phi_{c}^{4})\, \tag{104}\]
with
\[M^{2} = \left(-\sigma_{(2)}+\frac{V_{4}}{\hbar}\frac{2}{e^{\bar{N}}-1} \sigma_{(1)}^{2}\right)^{-1}\] \[= \frac{\lambda_{R}v_{R}^{2}}{3}\left(\frac{1}{1+24A}-\hbar\lambda_ {R}\frac{27}{32\pi^{2}}\frac{1-16A}{(1+24A)^{2}}\right)+{\cal O}(\hbar^{2})\,\]
and
\[\Gamma[0]=\int{\rm d}^{4}x\sqrt{g}\,\bar{\Lambda}_{R}-\hbar\ln(e^{\bar{N}}+1) \simeq\int{\rm d}^{4}x\sqrt{g}\,\bar{\Lambda}_{R}-\hbar\bar{N}. \tag{105}\]
In the limit \(T\to\infty\) we obtain then
\[M^{2}=\frac{\lambda_{R}v_{R}^{2}}{3}\left(1-\hbar\lambda_{R}\frac{27}{32\pi^{ 2}}\right)+{\cal O}(\hbar^{2}). \tag{106}\]
The next step is the analysis of the energy density and pressure for the ground state. The stress-energy tensor can be obtained from the definition
\[T^{E}_{\mu\nu}=\frac{2}{\sqrt{g}}\frac{\delta\Gamma(0)}{\delta g^{\mu\nu}}. \tag{107}\]
where we have explicitly written the super-index \({}^{E}\) as a reminder that we are working in Euclidean signature. Because of homogeneity and isotropy, the stress-energy tensor can be decomposed as
\[T^{E}_{\mu\nu}={\rm diag}(-\rho,a^{2}p,a^{2}p,a^{2}p)\, \tag{108}\]
so that we directly obtain
\[\rho=-T^{E}_{00}=-\frac{2}{\sqrt{g}}\frac{\delta\Gamma(0)}{\delta g^{00}} \Big{|}_{g^{00}=1}\,\qquad p=g^{11}T^{E}_{11}=\frac{2}{a^{2}\sqrt{g}}\frac{\delta\Gamma(0)}{ \delta g^{11}}\Big{|}_{g^{11}=a^{-2}}\,. \tag{109}\]
In order to express \(\bar{N}\) as a Lagrangian density we restore the time dependence of the scale factor with the replacement
\[\sqrt{g_{00}}\ T\ f(a)\to\int_{0}^{T}{\rm d}t\ \sqrt{g_{00}}\ f(a) \tag{110}\]
and we express the cell 3-volume at \(t=t_{0}\) as
\[V_{0}=\int\mathrm{d}^{3}x\ a^{3}(t_{0})=\int\mathrm{d}^{3}x\quad\text{ if we choose }\quad a(t_{0})=1. \tag{109}\]
The effective action for the ground state for \(\omega_{R}T\gg 1\) [see Eq. (106)] can then be expressed as
\[\Gamma[0] \simeq\int\mathrm{d}^{4}x\sqrt{g}\ \bar{\Lambda}_{R}-\hbar \omega_{R}\sqrt{\frac{6S_{0}}{\hbar\pi}}\int_{0}^{T}\mathrm{d}t\sqrt{g_{00}} \int\frac{\mathrm{d}^{3}x}{V_{0}}a^{3/2}\ e^{-a^{3}S_{0}/\hbar} \tag{110}\] \[=\int\mathrm{d}^{4}x\sqrt{g}\left(\bar{\Lambda}_{R}-\rho_{0}\ \frac{e^{-a^{3}S_{0}/\hbar}}{\sqrt{a^{3}S_{0}/\hbar}}\right)\,\]
where
\[\rho_{0}\equiv\frac{\omega_{R}S_{0}}{V_{0}}\sqrt{\frac{6}{\pi}}=\frac{\lambda _{R}v_{R}^{4}}{3\sqrt{3\pi}}\, \tag{111}\]
and where \(S_{0}\) is defined with the renormalised parameters.
From Eqs. (102) and (110) we can easily obtain the energy density and the pressure, namely
\[\rho=-T_{00}^{E}= -\left.\frac{2}{\sqrt{g}}\frac{\delta\Gamma(0)}{\delta g^{00}} \right|_{g_{00}=1}=+\bar{\Lambda}_{R}-\rho_{0}\ \frac{e^{-a^{3}S_{0}/\hbar}}{\sqrt{a^{3}S_{0}/\hbar}}\,, \tag{112}\] \[p=g^{11}T_{11}^{E}=\left.\frac{2}{a^{2}\sqrt{g}}\frac{\delta \Gamma(0)}{\delta g^{11}}\right|_{g_{11}=a^{2}}=-\bar{\Lambda}_{R}+\rho_{0} \left(\frac{1}{2\sqrt{a^{3}S_{0}/\hbar}}-\sqrt{a^{3}S_{0}/\hbar}\right)e^{-a^ {3}S_{0}/\hbar}\.\]
|
2302.00986
|
Eloss in the way: A Sensitive Input Quality Metrics for Intelligent
Driving
|
With the increasing complexity of the traffic environment, the importance of
safety perception in intelligent driving is growing. Conventional methods in
the robust perception of intelligent driving focus on training models with
anomalous data, letting the deep neural network decide how to tackle anomalies.
However, these models cannot adapt smoothly to the diverse and complex
real-world environment. This paper proposes a new type of metric known as Eloss
and offers a novel training strategy to empower perception models from the
aspect of anomaly detection. Eloss is designed based on an explanation of the
perception model's information compression layers. Specifically, taking
inspiration from the design of a communication system, the information
transmission process of an information compression network has two
expectations: the amount of information changes steadily, and the information
entropy continues to decrease. Then Eloss can be obtained according to the
above expectations, guiding the update of related network parameters and
producing a sensitive metric to identify anomalies while maintaining the model
performance. Our experiments demonstrate that Eloss can deviate from the
standard value by a factor over 100 with anomalous data and produce distinctive
values for similar but different types of anomalies, showing the effectiveness
of the proposed method. Our code is available at: (code available after paper
accepted).
|
Haobo Yang, Shiyan Zhang, Zhuoyi Yang, Xinyu Zhang
|
2023-02-02T10:11:08Z
|
http://arxiv.org/abs/2302.00986v1
|
# Eloss in the way: A Sensitive Input Quality Metrics for Intelligent Driving
###### Abstract
With the increasing complexity of the traffic environment, the importance of safety perception in intelligent driving is growing. Conventional methods in the robust perception of intelligent driving focus on training models with anomalous data, letting the deep neural network decide how to tackle anomalies. However, these models cannot adapt smoothly to the diverse and complex real-world environment. This paper proposes a new type of metric known as Eloss and offers a novel training strategy to empower perception models from the aspect of anomaly detection. Eloss is designed based on an explanation of the perception model's information compression layers. Specifically, taking inspiration from the design of a communication system, the information transmission process of an information compression network has two expectations: the amount of information changes steadily, and the information entropy continues to decrease. Then Eloss can be obtained according to the above expectations, guiding the update of related network parameters and producing a sensitive metric to identify anomalies while maintaining the model performance. Our experiments demonstrate that Eloss can deviate from the standard value by a factor over 100 with anomalous data and produce distinctive values for similar but different types of anomalies, showing the effectiveness of the proposed method. Our code is available at: (code available after paper accepted).
The State Key Laboratory of Automotive Safety and Energy,
and the School of Vehicle and Mobility
Tsinghua University
## Introduction
Intelligent driving is an inevitable trend in the future development of urban transportation [1], and 3D object detection is one of the essential tasks in achieving it [1]. In recent years, with the rapid iteration of computing devices and the emergence of high-quality annotated datasets, data-driven deep learning methods have driven the rapid development of 3D object detection [23]. However, compared with object detection tasks in other fields, 3D object detection for intelligent driving has higher requirements for accuracy and speed, because driver and road safety are at stake. In addition, compared with other application scenarios, the sensors on an intelligent driving vehicle cannot guarantee the collection of stable, high-quality data due to uncontrollable conditions such as the weather. As a result, some data may be anomalous. If a model cannot recognize such anomalous data and handle it specially, errors can occur in the decision-making of the intelligent driving system, and even severe traffic accidents.
In the face of the higher requirements that intelligent driving places on 3D object detection, many advanced algorithms offer solutions that achieve higher accuracy and faster speed. However, much of this work is based on modeling high-quality data and focuses on achieving higher performance on clean datasets. Furthermore, even if there is a small amount of low-quality data in the dataset, the model can learn several anomalous patterns through large-scale training and thus still achieve high accuracy. However, the data collected in the real world is diverse, and the patterns learned from a dataset are not enough to cope with the complex driving environment. Therefore, even if high accuracy is achieved on various datasets, the perception model will most likely produce wrong judgments and make wrong decisions, resulting in irreparable consequences.

Figure 1: Pending update
Anomaly detection has solutions in many fields, such as CCD flatness detection and textile defect detection. However, much of this work relies heavily on data, training a "black box" model, which is unsuitable for intelligent driving. Moreover, as mentioned earlier, models can only learn a limited number of anomalous patterns from training data, and a limited set of patterns is not enough to cope with the complex and diverse road environment. Therefore, to address anomalies, we should look for the difference between normal and anomalous data in the data itself. The ultimate purpose of a network is to extract sensor information that is helpful for the target task; for anomalous data, the network cannot extract adequate information, or may even over-interpret the data. From this starting point, we can detect anomalous data in real time according to the amount of information extracted by the model, or by each model layer; see Figure 1.
In this paper, from the perspective of treating the perceptual system as a communication model Zou et al. (2022), we build a theoretical model that describes the underlying information transmission process of neural networks for intelligent driving perception tasks. Our contribution mainly includes the following aspects. First, based on source coding theory in communication systems Jones and Mary Jones (2000), we construct the expected behaviour of the information change at each layer of the information compression part of a model: the information change at each layer of the information compression network should be stable. Second, we establish a probabilistic model for the data in the neural network by introducing a continuous random variable \(X\), in order to estimate the change in information entropy. Third, according to this expected behaviour, we establish the plug-and-play Eloss function module, which can be used as a loss function to guide the update of the neural network parameters and give networks the ability to detect anomalies.
## Related Works
### Uncertainty quantification
In real-world scenarios, autonomous driving perception systems face challenges such as occlusion and noise in sensor observations [1], which are often limited in the observability of the environment. Therefore, the quantification of perceived uncertainty in various environments has attracted attention.
Deep learning uncertainty has two types: aleatoric uncertainty and epistemic uncertainty [2]. Aleatoric uncertainty is due to an incomplete understanding of the environment, such as partial observability and measurement noise; it cannot be reduced by obtaining more or even unlimited data, but it can be reduced by explicit modeling. Aleatoric uncertainty is usually learned using the heteroscedastic loss function [3]. Epistemic uncertainty comes from a deficient dataset and from aspects unknown to the model, and can be eliminated with enough training data. Epistemic uncertainty is usually estimated with two popular methods: Monte Carlo (MC) dropout [4] and ensembles [5].
In previous studies, uncertainty quantification lacked ground-truth values for uncertainty estimates [6] and a unified quantitative evaluation index [7]. More specifically, uncertainty is defined differently in different machine learning tasks, such as classification, segmentation, and regression. In this paper, the proposed method produces a ground truth of 0 for regular data input, and our Eloss is a unified method for all kinds of tasks or networks in which a repetitive structure is present. We discuss this in the next section.
### Shannon's source coding in the communication model
We can use communication models to build neural networks, using prior knowledge from information theory to explain and guide neural network optimization.
Information theory, largely established by Shannon Jones and Mary Jones (2000), uses the concept of entropy to study information processes based on the quantification of information. Information entropy reveals the limitations of signal processing and communication operations by quantifying the uncertainty in a signal. Owing to the readily quantifiable nature of communication systems, many works have introduced information-theoretic knowledge from communication into deep neural networks.
A channel model built from neurons or networks was suggested by MacKay Mackay (2003). Sharma et al. introduced fiducial coding in variational autoencoders Sharma et al. (2021). Using Shannon's first theorem, the average length of the encoding is compared to the magnitude of the entropy to ensure that the variational distribution is a distortion-free coding process.
In 2015, Tishby and Zaslavsky used the Information Bottleneck (IB) principle (tis 2000) to further reveal the mechanism of deep learning models. Their experiments using a multilayer perceptron showed that the network tended to capture relevant information first and then combine it. Inspired by the work of DAGSurv Sharma et al. (2021), we model the pre-fusion feature extraction process for a single modality as source coding. By jointly training this part with the subsequent fusion and detection networks, we compress the amount of information and improve model efficiency, selectively capturing effective features while reducing features that add no information. We also introduce entropy from information theory to quantify the amount of information in the output of each source-coding layer; by limiting the changes in entropy, we narrow the directions in which the network may be optimized and accelerate model optimization.
### Optimizer of Neural Network
The optimization process in machine learning searches for neural network parameters that significantly reduce the loss function, which typically includes a performance metric evaluated on the entire training set plus additional regularization terms. To make the model output approach or reach the optimum, various optimization strategies and algorithms are needed to update the network parameters that affect model training and model output. Currently, the main approaches roughly fall into three categories: gradient descent methods, momentum optimization methods, and adaptive learning-rate methods.
Among gradient descent methods, mini-batch gradient descent combines the advantages of BGD (Hinton, Srivastava, and Swersky, 2012) and SGD (Bottou, 2012): by selecting a small batch of samples rather than the full training set at each step, it preserves training speed while maintaining the accuracy of the final convergence (Ruder, 2016).
Momentum optimization methods introduce the idea of momentum from physics to accelerate gradient descent; common algorithms are Momentum and NAG. With momentum, parameter updates become faster along dimensions where the gradient direction does not change and slower when the gradient changes, which accelerates convergence and reduces oscillation (Dozat, 2016). NAG improves on Momentum by preserving the direction of the previous update, using the current gradient to fine-tune the final update direction, and introducing a correction at the time of the gradient update; this is very effective at accelerating convergence and suppressing oscillations.
In deep learning, the learning rate is a very important hyperparameter that is generally difficult to set and usually requires several training runs to find a good value (Smith, 2017). Adaptive learning-rate optimization algorithms adjust the learning rate according to some strategy, thus improving training speed. Current adaptive learning-rate algorithms mainly include AdaGrad, RMSProp, Adam, and AdaDelta (Le et al., 2011; Zaheer and Shaziya, 2019). In Adam, momentum is directly incorporated into the estimate of the first-order gradient moment. Adam also includes bias corrections for the first- and second-order moment estimates, which are initialized at the origin. This makes the parameter updates relatively smooth and suitable for most non-convex optimization problems, as well as for large datasets and high-dimensional spaces.
## Theory
To efficiently fuse the information of each modality and remove the information that is not helpful to the subsequent task, the information must be compressed so that the features entering the subsequent network can contribute to the final task. This process of information compression in the communication model can be expressed as a distortion-limited encoding. To improve information transmission efficiency, source symbols that appear less frequently in the original source need to be removed during the compression process using distortion-limited encoding.
Because these source symbols appear less frequently, even if they are lost during the encoding process, a high data recovery rate can still be achieved at the other end of the channel and transmission efficiency is greatly improved. Similarly, in an information compressing network, the removed information usually does not contribute to subsequent tasks, and removing this information can greatly improve the efficiency of following networks(Jones and Mary Jones, 2000).
In communication models and information theory, entropy is often used as a measure of the amount of information. The lower the entropy, the larger the amount of information. As a result, we introduced an entropy calculation method to describe the change in information during distortion-limited encoding, which will be explained in detail later. At the same time, to ensure the smoothness of information compression, we keep the entropy change constant as the information passes through the layers of the neural network, preventing sudden distortion of the information.
### Entropy Expectation of Neural Network Layers
The neural network training process can be seen as constantly searching for the mapping relationship between the input and the expected output in order to learn (Liu et al., 2017). Before a neural network is trained, the learned mapping is weak; as it is trained with data, the mapping is continuously strengthened, and the feature extraction ability of each network layer, that is, its information compression ability, keeps improving. However, for such a black-box model (Buhrmester, Munch, and Arens, 2021), there is no good explanation of the information compression process, let alone an effective way to guide the parameter optimization of the information compression model.
To explain the underlying mechanism of the information compression network, with reference to the communication model, feature compression can be regarded as a source coding process. To ensure that the network can extract data in a more complete way that is useful for subsequent tasks, the source encoding should be a distortion-limited encoding, as discussed above.
Now, we introduce the information entropy concept to reflect the amount of information output by the network. In a communication system with constant bandwidth, if the entropy of the information of transmitted data is steadily reduced, the efficiency of the information transmission is gradually improving (Zou et al., 2022).
In distortion-limited encoders, information entropy decreases as the degree of encoding increases, and the same applies to feature compression networks. Because a feature compression network often has a continuous repetitive structure, such as the repeated linear layers in the SECOND network (Yan, Mao, and Li, 2018), and each duplicate layer should have a similar compression ability, the bandwidth of data transmission within the network can be considered constant. In summary, the expected layer-by-layer change in the amount of information is that the information entropy of the output of each repeated layer decreases steadily.
According to this expectation, we can construct an entropy loss function that optimizes the network parameters toward a steady layer-by-layer change in information entropy. In this way, we can look inside the black-box model and find a way to further optimize the training process.
### Uncertainty Quantification for Abnormal Inputs
For a feature extraction network with a specific function, the raw input data forms a new feature map after passing through each feature extraction layer, and in this process the amount of information, that is, the information entropy, changes. Since no new information is introduced during feature extraction, a large amount of information unrelated to the target task is filtered out and only the information related to the target task is retained. This process is therefore usually expected to reduce the information entropy.
In the same feature extraction network, the feature extraction capability of each layer is positively correlated with the scale of that layer's parameters. Therefore, when the feature map passes through several successive feature extraction layers with the same number of parameters and a similar structure, a similar amount of information reduction should occur after each layer to ensure smooth information compression across the network.
However, for an abnormal input feature map, for example one contaminated by noise, the change in the amount of information as the feature map passes through each layer of the feature extractor becomes irregular rather than a smooth decrease, and the entropy may even increase. We can therefore observe the entropy of the input data as it passes through each feature extraction layer to determine whether the input is anomalous.
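A minimal sketch of this idea is given below; the thresholding rule and function name are our own illustrative choices, since the text only states that irregular or increasing layer-wise entropy indicates an abnormal input.

```python
import statistics

def is_abnormal(delta_h, tol: float = 3.0) -> bool:
    """Flag an input as abnormal from its layer-wise entropy changes.

    delta_h: list of per-layer entropy changes Delta H_n for this input.
    The input is flagged if any layer increases entropy, or if some
    Delta H_n deviates from the mean by more than `tol` standard deviations.
    """
    if any(dh > 0 for dh in delta_h):
        return True
    mu = statistics.mean(delta_h)
    sigma = statistics.pstdev(delta_h)
    return sigma > 0 and any(abs(dh - mu) > tol * sigma for dh in delta_h)
```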
### Probabilistic Modeling for Information
The loss function, calculated based on the expectation of entropy, requires a method to estimate the entropy of each layer output in the information compression network. Moreover, estimating the entropy requires probabilistic modeling of the distribution of output data.
In information compression, the convolutional network is a common choice (Zou et al., 2022). To treat the convolutional neural network as a probabilistic model, we proceed as follows. The feature channels \(\tilde{X}=\{x_{1},x_{2},...,x_{i}\}\) generated by the convolution kernels are regarded as samples of a multidimensional continuous random variable \(X\), where \(i\) is the number of channels and the number of values in each channel is the dimension \(d\) of \(X\).
Because the output of each layer of any network can be considered a continuous random variable \(X\), and the output of any layer can be taken as a set of samples \(x_{i}\) of \(X\), this probabilistic modeling can be applied to neural network structures other than convolutional networks to calculate the entropy of the data distribution. The proposed probabilistic modeling method for convolutional neural networks can therefore be extended to other network architectures.
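As a minimal sketch of this modeling step (the function and tensor names are ours, not from the paper), a convolutional output of shape (C, H, W) can be rearranged so that each of the C channels becomes one sample of a d = H x W dimensional random variable X:

```python
import torch

def feature_map_to_samples(feat: torch.Tensor) -> torch.Tensor:
    """View a (C, H, W) feature map as C samples of a d-dimensional
    random variable X, with d = H * W."""
    c, h, w = feat.shape
    return feat.reshape(c, h * w)  # rows are samples x_i, columns are dimensions
```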
### Entropy calculation
Now, the problem of estimating the entropy of each layer of the neural network is transformed into estimating the entropy according to the probability distribution of the unknown continuous random variable \(X\).
There are many ways to solve this problem; the essence is to calculate the differential entropy of a continuous random variable. Differential entropy, also known as continuous entropy, is a concept in information theory that derives from Shannon's attempt to extend the concept of Shannon entropy to continuous probability distributions (Jones and Mary Jones, 2000). Let \(X\) be a continuous random variable with probability density function \(f\). The differential entropy \(h(X)\), or \(h(f)\), is defined as follows:
Figure 2: A detailed illustration of our proposed Eloss method.
\[h(X)=-\int f(x)\log f(x)dx \tag{1}\]
Since the probability distribution of the random variable is not known in advance in this problem, the probability density function is unknown, and only a limited number of samples drawn from that distribution are available. We therefore use the k-nearest-neighbor entropy estimation method (van de Water and Schram, 1988) to estimate its information entropy:
Sampling discretizes the continuous variable. To approximate the entire sample space with \(n\) samples, each sample point is expanded into a \(d\)-dimensional hypersphere whose radius is the distance between that sample point and its nearest neighboring sample. When the variable is evenly distributed in the sample space, the probability of each sample point can be approximated as \(1/n\).
Since the distribution of the random variable in the sample space is unknown and may differ greatly from a uniform distribution, it is corrected using the distribution of the samples in the space: the density or sparsity of the samples directly affects the probability density near each sample point. The discrete probability of each sample point is estimated as:
\[p(x_{i})=\left[(n-1)\cdot r_{d}(x_{i})^{d}\cdot V_{d}\right]^{-1} \tag{2}\]
where \(n\) is the number of samples, \(r_{d}(x_{i})\) is the \(d\)-dimensional Euclidean distance between the sample \(x_{i}\) and its nearest sample point, and \(V_{d}\) is the volume of the unit sphere in \(d\)-dimensional space.
The estimate of the entropy of the random variable \(X\) is:
\[H(X)=\frac{1}{n}\sum_{i=1}^{n}[-\log p(x_{i})]+\gamma \tag{3}\]
where \(\gamma\) is the Euler-Mascheroni constant, which is approximately equal to 0.5772.
The k-nearest-neighbor entropy estimation method generalizes the distance from each sample point to its nearest neighbor to the distance to its \(k\)-th nearest sample point, and the entropy estimate of the random variable \(X\) becomes:
\[H(X,k)=-\psi(k)+\psi(n)+\log V_{d}+\frac{d}{n}\sum_{i=1}^{n}\log r_{d,k}(x_{i}) \tag{4}\]
where \(\psi\) is the digamma function, \(\psi(1)=-\gamma\), and \(\psi(n)\sim\log{(n-1)}\). \(r_{d,k}(x_{i})\) is the \(d\)-dimensional Euclidean distance between sample \(x_{i}\) and its \(k\)-th nearest sample point. It can be shown that \(H(X)\) is equivalent to \(H(X,k)\) when \(k=1\). We use \(H(X)\) as the entropy value of the output of each layer of the network and then obtain the entropy change \(\Delta H\) of each layer, \(\Delta H_{n}=H_{n+1}-H_{n}\), where \(n\) is the index of the network layer.
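For concreteness, a minimal NumPy/SciPy sketch of Eq. (4) and the layer-wise entropy changes is given below. It assumes the per-layer outputs have already been arranged as (n, d) sample matrices, for example with the channel-to-sample reshaping described above; the function names are ours, not the authors'.

```python
import math
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(samples: np.ndarray, k: int = 1) -> float:
    """k-nearest-neighbor entropy estimate of Eq. (4).
    samples: array of shape (n, d), one row per sample of X."""
    n, d = samples.shape
    # log volume of the d-dimensional unit ball: pi^(d/2) / Gamma(d/2 + 1)
    log_vd = (d / 2.0) * math.log(math.pi) - gammaln(d / 2.0 + 1.0)
    tree = cKDTree(samples)
    dists, _ = tree.query(samples, k=k + 1)  # column 0 is the point itself
    r_k = np.maximum(dists[:, k], 1e-12)     # guard against zero distances
    return float(-digamma(k) + digamma(n) + log_vd
                 + (d / n) * np.sum(np.log(r_k)))

def layerwise_entropy_changes(layer_samples, k: int = 1):
    """Delta H_n = H_{n+1} - H_n over a list of per-layer sample matrices."""
    h = [knn_entropy(s, k) for s in layer_samples]
    return [h[i + 1] - h[i] for i in range(len(h) - 1)]
```

Note that this SciPy sketch is not differentiable; to backpropagate through \(\Delta H\) during training, the neighbor distances would have to be computed with tensor operations (for example torch.cdist followed by torch.kthvalue) instead.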
### Loss Functions for Information Compression Network
According to the information transmission expectation for an information compression network, two loss functions can be established: \(L_{1}\) focuses on the steadiness of the information change, and \(L_{2}\) focuses on the direction of the information change.
The loss function \(L_{1}\) is formulated as the variance of the entropy changes \(\Delta H\); its target value, the variance, is 0.
\[L_{1}=\frac{\sum_{n=1}^{N}(\Delta H_{n}-\widehat{\Delta H})^{2}}{N} \tag{5}\]
where \(N\) is the number of duplicate network layers, \(n\) indexes those layers, and \(\widehat{\Delta H}\) is the mean entropy change over all duplicate network layers.
According to the decreasing entropy expectation, the loss function \(L_{2}\) is formulated as follows:
\[L_{2}=-\sum_{n=1}^{N}\Delta H_{n}^{2} \tag{6}\]
We refer to the combination of \(L_{1}\) and \(L_{2}\) as Eloss.
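A minimal PyTorch sketch of this combination is shown below. The relative weighting of \(L_{1}\) and \(L_{2}\), and how Eloss is added to the task loss, are not specified in the text, so the weights here are placeholders.

```python
import torch

def eloss(delta_h: torch.Tensor, w1: float = 1.0, w2: float = 1.0) -> torch.Tensor:
    """Eloss over the per-layer entropy changes Delta H_n (Eqs. 5-6).
    delta_h: 1-D tensor of length N, one entry per duplicate layer."""
    l1 = torch.mean((delta_h - delta_h.mean()) ** 2)  # variance of Delta H, Eq. (5)
    l2 = -torch.sum(delta_h ** 2)                     # Eq. (6), as written in the text
    return w1 * l1 + w2 * l2
```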
When a network is trained with Eloss, the influence of Eloss can be described as that of an amplifier, enhancing the interpretability of the parts of the network that can be described as a feature compression network. This is also a limitation of the Eloss method: the layers that \(L_{1}\) and \(L_{2}\) can influence are restricted to those that are repeated several times and perform a specific task closely linked to the communication model. We evaluate this limitation in the following sections. For this reason, Eloss is a complement to the loss function \(L\) that corresponds directly to the final goal.
## Experiments
**Dataset.** We conduct experiments on the KITTI and nuScenes datasets. KITTI was jointly created by the Karlsruhe Institute of Technology in Germany and the Toyota Technological Institute at Chicago (Geiger, Lenz, and Urtasun 2012; Geiger et al. 2013) and is one of the most widely used computer vision benchmark datasets for intelligent driving. It contains real-world data collected from scenes such as urban areas, villages, and highways, with multimodal data including LiDAR point clouds, GPS data, right-hand color camera data, and grayscale camera images. The KITTI dataset is divided into a training set with 7481 samples and a test set with 7518 samples. The nuScenes (nuTonomy Scenes) dataset is a multimodal dataset for autonomous vehicle driving. It is the first large-scale dataset to provide a full sensor suite for an autonomous vehicle, including six cameras, one LiDAR, five millimeter-wave radars, GPS, and an IMU. It contains seven times more object annotations than KITTI, including 1.4 million camera images, 390,000 LiDAR scans, and 1.4 million manually annotated 3D bounding
boxes for 23 object categories. The nuScenes dataset consists of 1000 scenes, each corresponding to 20 s of video and covering a wide variety of scenarios. Each scene contains 40 key frames, that is, two key frames per second, and the remaining frames are sweeps. The key frames are manually labeled, and each annotated object is given a bounding box together with attributes such as size, category, and visibility. Existing autonomous driving datasets lack a full set of multimodal data for building autonomous driving sensing systems, which nuScenes compensates for.
**Implementation Details.** The experiment was carried out on the Nvidia RTX 3090 device and the model was built using PyTorch based on the MMDetection3D [15] framework. PyTorch has a wide range of deep learning applications and provides a large number of Python interfaces, making it very easy for the framework to call and use Python's own function packages. The auto-differentiation feature included in PyTorch has made it a very popular dynamic graph framework. MMDetection3D is an open source toolbox for 3D object detection based on PyTorch, and the models used in this article are developed using PyTorch on the MMDetection3D framework.
**Evaluating the Sensitivity of Eloss to Abnormal Inputs.** The PointPillars model trained with and without Eloss is used to process the KITTI and nuScenes test sets, and the VoxelNet model trained with and without Eloss is used to process the KITTI test set; the Eloss values of the results are then compared. We also compute the confidence of the results produced on these test sets by the models trained without Eloss. We compare the Eloss of the results with their confidence, examine how the Eloss values obtained from models trained with Eloss differ from those obtained from models trained without it, and investigate which indicator better reflects abnormal data and why their sensitivities to abnormal data differ. The results are shown in Table 1.
The confidence obtained from the model trained without Eloss is compared with the Eloss values computed on the different test sets. We find that, for both models and for all noisy variants of the data sets, the confidence after adding noise is lower than without noise, so the data can be identified as abnormal. However, when the proportion of added noise changes, the confidence changes only very slightly: it has low sensitivity to data anomalies, provides no discrimination between noise levels, and cannot be used to judge the noise ratio. We suspect that confidence over-interprets the added noise. The Eloss value in this setting can also identify that the data are abnormal, playing the same role, although its values appear somewhat chaotic. It is, however, particularly sensitive to certain proportions of noise and can detect the addition of noise of different magnitudes. That is, for the same model and the same detection results, Eloss is the better indicator for judging that the data are abnormal and detects abnormal data with a degree of discrimination.
The Eloss values obtained on the different test sets by the model trained without Eloss are then compared with those obtained by the model trained with Eloss. The latter are more sensitive to data anomalies, with more pronounced fluctuations in the Eloss value, although the accuracy of the results decreases. That is, for the same architecture, the model trained with Eloss loses some accuracy, but at the expense of this detection accuracy its ability to detect abnormal data is further improved.
Comparing the Eloss value with confidence in these two cases, we find that although confidence can indicate that the data are abnormal, its change is extremely small when different proportions of noise are added, so no further distinction can be made. The Eloss value, in contrast, serves as an indicator of the degree of abnormality, especially when obtained from a model trained with Eloss: although detection accuracy decreases, it not only identifies abnormal data in the same way but also fluctuates more strongly, further improving the ability to detect abnormal data.
### Effect of Eloss on the Training Process
To measure the impact of Eloss on the model training process, we first conduct control experiments on the same model with and without Eloss on the noise-free KITTI and nuScenes datasets. Part of the experimental results is plotted in Figure 3 to show more intuitively the impact of Eloss on the volatility of the training process.
To quantify this impact, we use the mean absolute value slope (MAVP) to measure the effect of Eloss on the volatility of the model training curve, and the maximum precision to measure its effect on training accuracy. The MAVP formula is given below, where \(N\) is the number of sliding windows and \((k, k+1)\) refers to two adjacent time windows.
\[\text{MAVP}=\frac{1}{N}\sum_{k=1}^{N}\left(|x_{k+1}|-|x_{k}|\right) \tag{7}\]
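A short sketch of this metric is given below. How the training curve is aggregated within each window is not specified in the text, so the window mean is our assumption; the code follows Eq. (7) literally, differencing the absolute window values.

```python
import numpy as np

def mavp(curve, window: int = 5) -> float:
    """Mean absolute value slope of a training curve, following Eq. (7):
    MAVP = (1/N) * sum_k (|x_{k+1}| - |x_k|), with x_k the mean of window k."""
    x = np.asarray(curve, dtype=float)
    means = [x[i:i + window].mean()
             for i in range(0, len(x) - window + 1, window)]
    diffs = [abs(means[k + 1]) - abs(means[k]) for k in range(len(means) - 1)]
    return float(np.mean(diffs)) if diffs else 0.0
```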
We applied the PointPillars and VoxelNet models to the KITTI dataset for these control experiments. The experimental results are shown in Table 2.
The results in Table 2 show that the maximum training accuracy decreases after adding Eloss to either model. In terms of MAVP, the value decreases after adding Eloss, which means that adding Eloss makes the training process smoother.
On the nuScenes dataset, we perform the same control experiments on the PointPillars model with three metrics: Car \(AP_{dist}1.0\), mAP, and NDS. The experimental results are shown in Table 3.
The table shows that some maximum accuracy for the Car category is lost during training after adding Eloss. Still, the decline in MAVP shows that adding Eloss moderates the volatility of the training process. Similar observations for mAP, the average precision over multiple categories, and NDS, the nuScenes detection score, indicate that adding Eloss to the model makes the training process smoother.
The above experiments examine the influence of Eloss on model training without noise interference. To further understand the effect of Eloss, we next add Eloss to different parts of the network and conduct control experiments with anomalous data.
### Varying Amount of Eloss
In this experiment, we continue training models that have already been trained for 80 epochs and examine how adding Eloss to different fractions of the network influences the model's accuracy on the KITTI test set.
Eloss is a plug-and-play module, which means it can be added to any part of the network that has a repetitive structure and start functioning there. For this reason, we can evaluate Eloss by applying it to different fractions of the network. In this experiment, for convenience, we consider only the repetitive structures grouped into blocks in the SECOND backbone; a sketch of how the per-block outputs can be collected is given below.
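As an illustrative sketch (class and variable names are ours, not the MMDetection3D API), the per-block outputs needed for the entropy estimates can be gathered with PyTorch forward hooks, so Eloss can be attached to any stack of repeated blocks without modifying the backbone itself:

```python
import torch
import torch.nn as nn

class BlockOutputCollector:
    """Collects the output of each hooked block so per-block entropies,
    and hence Eloss, can be computed after a forward pass."""
    def __init__(self, blocks):
        self.outputs = []
        self.handles = [b.register_forward_hook(self._hook) for b in blocks]

    def _hook(self, module, inputs, output):
        self.outputs.append(output)

    def clear(self):
        self.outputs = []

    def remove(self):
        for h in self.handles:
            h.remove()

# toy stack of repeated blocks standing in for the SECOND blocks
blocks = nn.ModuleList(
    [nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU()) for _ in range(3)]
)
collector = BlockOutputCollector(blocks)
x = torch.randn(1, 16, 32, 32)
for blk in blocks:
    x = blk(x)
# collector.outputs now holds one feature map per block, ready for entropy estimation
```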
After collecting the results of the best models, we obtain Table 4.
In these tables, there are gaps between the accuracy of the models at 80 epochs and that of the fine-tuned models; it is therefore reasonable to expect the models to keep improving when trained beyond 80 epochs. On the validation set, the accuracy rises after Eloss is applied, as expected. On the test set, however, we obtain conflicting results.
For PointPillars, Table 4 shows that the Car \(AP_{R}40\) value decreases as Eloss influences more SECOND blocks, which is consistent with the previous experiment. The reason is that Eloss places more constraints on the model parameters, making the training process more difficult.
\begin{table}
\begin{tabular}{l l|l|c c c|c c c|c c c} \hline \hline \multicolumn{2}{c|}{Setting} & & \multicolumn{3}{c|}{Confidence} & \multicolumn{3}{c|}{Eloss (metrics)} & \multicolumn{3}{c}{Eloss (metrics \& loss func.)} \\ Model & Dataset & Value & clean & noise1 & noise2 & clean & noise1 & noise2 & clean & noise1 & noise2 \\ \hline VoxelNet & KITTI & Mean & 0.495 & 0.248 & 0.248 & 0.015 & 0.008 & 0.009 & 1.584E-03 & 9.085E-03 & 8.697E-03 \\ & & \%change & 0.0\% & -49.9\% & -49.9\% & 0.0\% & -48.5\% & -39.1\% & 0.0\% & **473.5\%** & **449.0\%** \\ \hline PointPillars & KITTI & Mean & 0.487 & 0.344 & 0.344 & 0.012 & 2.086 & 0.008 & 1.091E-01 & 3.836E+00 & - \\ & & \%change & 0.0\% & -29.3\% & -29.3\% & 0.0\% & **1747.6\%** & -36.1\% & 0.0\% & **3416.0\%** & - \\ \hline PointPillars & nuScenes & Mean & 0.168 & 0.128 & 0.128 & 0.034 & 1.918 & 0.016 & 2.556E-04 & 1.478E-01 & 4.273E-03 \\ & & \%change & 0.0\% & -23.7\% & -23.7\% & 0.0\% & **5494.7\%** & -54.5\% & 0.0\% & **57746.3\%** & **1571.9\%** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between confidence, Eloss used as a metric, and Eloss used as both a metric and a loss function. Observations under different noise settings are given.
\begin{table}
\begin{tabular}{l|l|c c} \hline \hline Model & Method & Max(\%) & MAVP(\%) \\ \hline \multirow{2}{*}{PointPillars} & Without Eloss & 90.694 & 11.946 \\ & With Eloss & 88.916 & 11.932 \\ & Delta & -1.778 & **-0.014** \\ \hline \multirow{2}{*}{VoxelNet} & Without Eloss & 94.873 & 10.959 \\ & With Eloss & 94.586 & 10.937 \\ & Delta & -0.287 & **-0.022** \\ \hline \hline \end{tabular}
\end{table}
Table 2: The Car \(AP_{R}40\) Max and MAVP of the models on the KITTI validation set during the training process.
Figure 3: Convergence curves of model accuracy on the nuScenes validation set for PointPillars with and without Eloss. (a) Average Precision of Car detection with distance threshold 1.0 m; (b) mean Average Precision computed across 10 object classes; (c) nuScenes detection score.
In Table 4, the result for VoxelNet differs from the previous observations. When Eloss influences one block of SECOND, the accuracy on the test set drops slightly, but when Eloss influences two blocks of SECOND, the accuracy increases.
To investigate this unexpected observation for VoxelNet with Eloss, we undertake further experiments.
### Comparison between Different Models
In this experiment, we use the same voxel encoder as VoxelNet; the SECOND blocks that Eloss can influence are pre-trained with or without Eloss and their parameters are frozen. We then vary the size of the whole model, increasing the number of modalities and the degree of complexity. The results are shown in Table 5.
In the SECOND+ResNet [1] experiment, we take information from two modalities: point cloud and image. The accuracy of Cyclist and Pedestrian detection increases by more than \(3\%\) each, but the accuracy of Car detection decreases.
The situation worsens for the last model, SECOND+ResNet+Correlation [22, 16, 20]+GNN [11]+FPN [12], where only the accuracy of Pedestrian detection increases.
These observations suggest that, for a network with a voxel encoder, the more significant the influence of Eloss, the better the network's performance on the object detection task.
At this stage, we conjecture that the reason for the above observation is that the blocks influenced by Eloss compress the information transmitted through them in a beneficial way, but the compressed result is fragile and can easily be distorted by the later inference stages during training. This suggests that when Eloss is used, a lower learning rate should be considered for the layers outside the Eloss coverage. We will devote more effort to this topic in the future.
## Conclusion
In this paper, we propose Eloss, an amplifier of the interpretability of feature compression networks based on the ideas behind communication systems. Eloss is constructed by comparing the network-layer outputs with information change expectations derived from source coding, and we use it to optimize a network in the direction that converges to these expectations. Through experiments in three different aspects, our study shows that training a 3D object detection model with Eloss is beneficial for both training speed and model interpretability. Despite these results, we still see limitations in our work: for a model in which Eloss can influence only a small fraction of the network, adding Eloss can unexpectedly hold back the model-training process. In the future, we will investigate this limitation in detail and continue our research on the interpretability of models for intelligent driving.
## Acknowledgements
This work was supported by the National High Technology Research and Development Program of China under Grant No. 2018YFE0204300 and the National Natural Science Foundation of China under Grant No. U1964203, and was sponsored by Meituan and the Tsinghua University-Didi Joint Research Center for Future Mobility.
|
2308.12301
|
The SPARC Toroidal Field Model Coil Program
|
The SPARC Toroidal Field Model Coil (TFMC) Program was a three-year effort
between 2018 and 2021 that developed novel Rare Earth Yttrium Barium Copper
Oxide (REBCO) superconductor technologies and then successfully utilized these
technologies to design, build, and test a first-in-class, high-field (~20 T),
representative-scale (~3 m) superconducting toroidal field coil. With the
principal objective of demonstrating mature, large-scale, REBCO magnets, the
project was executed jointly by the MIT Plasma Science and Fusion Center (PSFC)
and Commonwealth Fusion Systems (CFS). The TFMC achieved its programmatic goal
of experimentally demonstrating a large-scale high-field REBCO magnet,
achieving 20.1 T peak field-on-conductor with 40.5 kA of terminal current, 815
kN/m of Lorentz loading on the REBCO stacks, and almost 1 GPa of mechanical
stress accommodated by the structural case. Fifteen internal demountable
pancake-to-pancake joints operated in the 0.5 to 2.0 nOhm range at 20 K and in
magnetic fields up to 12 T. The DC and AC electromagnetic performance of the
magnet, predicted by new advances in high-fidelity computational models, was
confirmed in two test campaigns while the massively parallel, single-pass,
pressure-vessel style coolant scheme capable of large heat removal was
validated. The REBCO current lead and feeder system was experimentally
qualified up to 50 kA, and the cryocooler-based cryogenic system provided 600 W
of cooling power at 20 K with mass flow rates up to 70 g/s at a maximum design
pressure of 20 bar-a for the test campaigns. Finally, the feasibility of using
passive, self-protection against a quench in a fusion-scale NI TF coil was
experimentally assessed with an intentional open-circuit quench at 31.5 kA
terminal current.
|
Zachary Hartwig, Rui Vieira, Darby Dunn, Theodore Golfinopoulos, Brian LaBombard, Christopher Lammi, Phil Michael, Susan Agabian, David Arsenault, Raheem Barnett, Mike Barry, Larry Bartoszek, William Beck, David Bellofatto, Daniel Brunner, William Burke, Jason Burrows, William Byford, Charles Cauley, Sarah Chamberlain, David Chavarria, JL Cheng, James Chicarello, Karen Cote, Corinne Cotta, Van Diep, Eric Dombrowski, Jeffrey Doody, Raouf Doos, Brian Eberlin, Jose Estrada, Vincent Fry, Matthew Fulton, Sarah Garberg, Robert Granetz, Aliya Greenberg, Martin Greenwald, Samuel Heller, Amanda Hubbard, Ernest Ihloff, James Irby, Mark Iverson, Peter Jardin, Daniel Korsun, Sergey Kuznetsov, Chris Lammi, Steven Lane Walsh, Richard Landry, Richard Lations, Matthew Levine, George Mackay, Kristin Metcalfe, Kevin Moazeni, John Mota, Theodore Mouratidis, Robert Mumgaard, JP Muncks, Richard Murray, Daniel Nash, Ben Nottingham, Colin O Shea, Andrew Pfeiffer, Samuel Pierson, Clayton Purdy, Alexi Radovinsky, DJ Ravikumar, Veronica Reyes, Nicolo Riva, Ron Rosati, Michael Rowell, Erica E. Salazar, Fernando Santoro, Dior Sattarov, Wayne Saunders, Patrick Schweiger, Shane Schweiger, Maise Shepard, Syunichi Shiraiwa, Maria Silveira, FT Snowman, Brandon Sorbom, Peter Stahle, Ken Stevens, Joseph Stiebler, Joshua Stillerman, Deepthi Tammana, David Tracy, Ronnie Turcotte, Kiran Uppalapati, Matthew Vernacchia, Christopher Vidal, Erik Voirin, Alex Warner, Amy Watterson, Dennis Whyte, Sidney Wilcox, Michael Wolf, Bruce Wood, Lihua Zhou, Alex Zhukovsky
|
2023-08-18T18:58:45Z
|
http://arxiv.org/abs/2308.12301v1
|
# The SPARC Toroidal Field Model Coil Program
###### Abstract
The SPARC Toroidal Field Model Coil (TFMC) Program was a three-year effort between 2018 and 2021 that developed novel Rare Earth Yttrium Barium Copper Oxide (REBCO) superconductor technologies and then successfully utilized these technologies to design, build, and test a first-in-class, high-field (\(\sim\)20 T), representative-scale (\(\sim\)3 m) superconducting toroidal field coil. With the principal objective of demonstrating mature, large-scale, REBCO magnets, the project was executed jointly by the MIT Plasma Science and Fusion Center (PSFC) and Commonwealth Fusion Systems (CFS) as a technology enabler of the high-field pathway to fusion energy and, in particular, as a risk retirement program for the TF magnet in the SPARC net-energy fusion tokamak. Weighing 10,058 kg (22,174 lb) and utilizing 270 km (168 mi) of REBCO, the TFMC is a no-insulation magnet comprising a winding pack of sixteen REBCO stack-in-plate style pancakes and two termination plates inside a Nitronic-50 structural case, which also serves as a pressure vessel for the cryogenic coolant flowing through channels in the winding pack. To execute the TFMC tests, a new magnet test facility was built and commissioned at the MIT PSFC. A centerpiece of the test facility is a pair of 50 kA LN2-cooled REBCO binary current leads and VIPER REBCO cable feeder system. A novel liquid-free cryocooler-based cryogenic system provided 20 K supercritical helium. The magnet is integrated with the feeder and helium circulation system inside a large 20 m\({}^{3}\) vacuum cryostat, which contains internal LN2 radiation shields and access for facility and magnet instrumentation.
The TFMC achieved its programmatic goal of experimentally demonstrating a large-scale high-field
**REBCO magnet, achieving 20.1 T peak field-on-conductor with 40.5 kA of terminal current, 815 kN/m of Lorentz loading on the REBCO stacks, and almost 1 GPa of mechanical stress accommodated by the structural case. Fifteen internal demountable pancake-to-pancake joints operated in the 0.5 to 2.0 n\(\Omega\) range at 20 K and in magnetic fields up to 12 T. The DC and AC electromagnetic performance of the magnet, predicted by new advances in high-fidelity computational models, was confirmed in two test campaigns while the massively parallel, single-pass, pressure-vessel style coolant scheme capable of large heat removal was validated. The REBCO current lead and feeder system was experimentally qualified up to 50 kA, and the cryocooler based cryogenic system provided 600 W of cooling power at 20 K with mass flow rates up to 70 g/s at a maximum design pressure of 20 bar-a for the test campaigns. Finally, the feasibility of using passive, self-protection against a quench in a fusion-scale NI TF coil was experimentally assessed. While the TFMC was intentionally not optimized for quench resiliency - and suffered localized thermal damage in response to an intentional open-circuit quench at 31.5 kA terminal current - the extensive data and validated models that it produced represent a critical step towards this important objective.**
_Index Terms--_Fusion energy, Rare Earth Barium Copper Oxide, Superconducting magnet, Toroidal field magnet
## I Introduction
The SPARC Toroidal Field Model Coil (TFMC) Project was an approximately three-year effort between 2018 and 2021 to retire the design, fabrication, and operational risks inherent in large-scale, high-field superconducting magnets based on the high temperature superconductor (HTS) Rare-Earth Barium Copper Oxide (REBCO). Executed jointly between the MIT Plasma Science and Fusion Center (PSFC) and Commonwealth Fusion Systems (CFS), the project developed novel Rare Earth Yttrium Barium Copper Oxide (REBCO) superconductor technologies and then utilized those technologies to successfully design, build, and test a first-in-class, high-field (\(\sim\)20 T) representative scale (\(\sim\)3 m in linear size) superconducting toroidal field (TF) coil known as the TFMC. In parallel to the construction of the magnet, a new superconducting magnet test facility was established at MIT to serve as the proving grounds for the experimental demonstration of the magnet. The test facility itself incorporated novel advances in both REBCO and cryogenic technology to meet the technical and schedule requirements for testing the magnet. The magnet and test facility, shown together in Fig. 1, were combined in the fall of 2021 to carry out a series of experimental test campaigns to assess the performance of the magnet, validating the magnet modeling, design, and fabrication techniques and proving the novel technologies deployed in the test facility.
This paper, the first of six papers in this special issue covering the TFMC Program, serves two principal objectives: first, it presents a self-contained, high-level technical and programmatic overview of the entire TFMC Program, including
Fig. 1: A view of the TFMC Magnet and test facility at the MIT Plasma Science and Fusion Center.
a summary of the high-field path to fusion energy and a brief history of large-scale superconducting fusion magnet development programs; and second, it provides the context for understanding the accompanying five papers that cover specific technical areas of the TFMC Program. These papers focus on the following topics:
* Magnet design, fabrication, and assembly [1]
* Test facility design, construction, and commissioning [2]
* 50 kA binary LN2-cooled REBCO current leads [3]
* 600 W cryocooler-based helium cryogenic system [4]
* Results of the test campaigns and post-mortem [5]
Taken together, the papers attempt to provide a comprehensive review of the program's background, objectives, activities, and achievements.
This overview and introductory paper is structured as follows: Section II presents the context and motivation for the TFMC Program, namely the technical and programmatic advantages of accelerating the deployment of commercial fusion energy through the use of high-field superconducting magnets; Section III briefly reviews the history of large-scale superconducting magnet development in order to provide background and insight into why and how the TFMC Program was executed; Section IV provides a programmatic overview of the project as a foundation to understand the technical research and development that was conducted; Section V briefly reviews the two distinct conductor and coil technologies developed in the first phase of the TFMC Program (VIPER Cables [6] and no-insulation no-twist, or NINT, coils [7]) and summarizes the advantages of selecting NINT coils to be scaled up in the TFMC; Section VI summarizes the remaining three phases of the program, which are explored in greater technical depth in the accompanying five papers; and, finally, Section VII concludes with some summary remarks about the TFMC Program's impact on the state of large-scale high-field REBCO magnets and the next steps towards high-field net-energy fusion devices.
## II The high magnetic field path to fusion energy
It has long been recognized that the strength of the primary magnetic field plays a principal role in determining the performance of a magnetic fusion energy device such as a tokamak. Since the first magnetic confinement devices in the 1950s, this has manifested in the exponential increase in plasma physics performance metrics achieved by a succession of increasingly high-field fusion tokamaks, which have relied on advances in the science and engineering of large-scale electromagnets. In fact, a straightforward theoretical analysis of the nuclear and engineering aspects of a tokamak, combined with a basic knowledge of plasma physics constraints, shows that increasing the magnetic field strength is one of the most powerful and accessible means of achieving both the conditions required to access a stable, burning, net-energy plasma and the design of a cost-effective compact tokamak fusion power plant [8]. The optimization strategy of maximizing the magnetic field strength to achieve net-fusion energy in more compact, lower cost, and faster-to-build tokamaks has historically been known as the "high-field path to fusion energy," and it can be divided chronologically into two separate eras based on the magnet technology utilized.
The first iteration of the high-field path focused on the use of resistive copper magnet technology. The copper high-field path to fusion energy relied on advanced water- or LN2-cooled Bitter plate style magnets to achieve pulsed magnetic fields in tokamaks exceeding 12 T peak field-on-coil, corresponding to approximately 8 T in the plasma center [9]. Several tokamaks were built and operated that experimentally validated the plasma physics advantages of high-field operation and advanced the state of high magnetic field engineering including most notably the Frascati Tokamak Upgrade at ENEA in Italy [10] and the three Alcator tokamaks at MIT in the United States culminating in the Alcator C-Mod Tokamak [11].
Based on both the physics and engineering successes of FTU and the Alcators, high-field copper-based tokamaks, particularly in the United States, dominated the roadmap for achieving net-fusion energy in compact machines in the 1980s and 1990s. Several major machine design and engineering activities were initiated, including the Burning Plasma Experiment (BPX) [12], the Compact Ignition Tokamak (CIT) [13], the Fusion Ignition Research Experiment (FIRE) [14], and the Ignitor Tokamak [15]. Despite the advantages ascribed to these machines, the copper high-field path was ultimately abandoned due to the challenge of scaling resistive copper magnet technology to fusion power plants and the preference to begin utilizing superconducting magnets, based on the emergent low-temperature superconductors (LTS) NbTi in the 1980s and then Nb\({}_{3}\)Sn in the 1990s, as an alternative.
Starting in 1978 with the T-7 tokamak in the Soviet Union, a series of superconducting machines with LTS were built and operated with moderate field (1 - 8 T in the plasma center). This list includes EAST in China [16], KSTAR in South Korea [17], SST-1 in India [18], T-7 and T-15 in the Soviet Union/Russia, Tore Supra / WEST in France [19], and TRIAM-1M in Japan [20] with JT-60SA [21] as the most recent. These machines have provided the necessary superconducting magnet engineering basis for ITER, a tokamak designed to achieve 500 MW of fusion power with a gain factor of ten using Nb\({}_{3}\)Sn TF coils to provide \(\sim\)5.3 T in the center of the plasma [22]. Based on the design and initial engineering work for ITER, a series of post-ITER conceptual devices, such as CFETR in China [23], and demonstration power plants, such as DEMO in the EU [24], are being planned. Because Nb\({}_{3}\)Sn magnets limit the magnetic fields in the plasma to approximately \(\sim\)5.5 T, such tokamaks typically have major radii between 6 and 9 m to achieve sufficient plasma performance, leading to extraordinarily large devices associated with high capital cost, multi-decadal schedules, scale challenges in supply chain and assembly, and complex organizational issues.
In response to these challenges, and due to the emergence of a new class of superconductors capable of achieving much higher magnetic fields than LTS materials, a second iteration of the high-field path to fusion energy was proposed by MIT in the
mid-2010s [25]. Anchored in the record-setting plasma physics performance of Alcator C-Mod in 2016 [26], this new high-field path proposes the use of superconducting TF magnets exceeding 20 T peak field-on-coil (8 - 12 T in the plasma center) based on REBCO.
REBCO was discovered in 1987 [27]. While little was then known about its superconducting performance and an engineering-relevant form factor remained decades in the future, several theoretical papers emerged in the following few years that examined the advantageous physics, cost, and complexity implications of superconducting toroidal field coils approaching 20 T [24 -25]. Only relatively recently has sufficiently high performance REBCO coated conductor tape been characterized and manufactured at the industrial volumes and cost levels required to actually design and build large-scale high-field fusion magnets [30].
An important engineering study of a toroidal field (TF) magnet with modern REBCO superconductor was carried out in 2011 [31] as part of the conceptual design activities for the VULCAN tokamak [32]. The study was the first to highlight significant advantages of high-field REBCO-based magnets for tokamaks, including feasible TF magnets approaching 20 T peak field-on-coil, an optimized operational temperature of 20 K, tolerance of large nuclear heating, and demountable low-resistance joints. Since that time, several physics and engineering assessments reinforcing and extending the scale, schedule, cost, and plasma-physics advantages of high-field fusion energy powerplants based on REBCO magnets have been carried out for conventional aspect ratio tokamaks [33], spherical tokamaks [34], and stellarators [35].
In the mid-2010s, it was evident that execution of the high field path required accelerating the development and deployment of fusion-relevant REBCO conductors and coils beyond their then-nascent state. Two separate potential technologies had emerged by this time. First, a wide variety of high-current REBCO cables based on the insulated cable-in-conduit conductor (CICC) concept were being built and tested on small-scales. Although none to-date had demonstrated the required performance, robustness, and scalability required for fusion-scale magnets, the initial prototypes were promising [36]. Second, single REBCO tape wound no-insulation coils had demonstrated high field performance with simpler fabrication and the potential for self-protection during quench[37]; this represented an alternative, albeit very different, coil concept compared to insulated CICC cables.
Regardless of which REBCO-based concept would be developed as a basis for high-field superconducting fusion magnet technology, the TFMC Program engineering R&D would follow a two-part sequence. The first part would focus on technology readiness of basic conductor/coil technology at the small scale. Efforts would focus on engineering design, fabrication processes, computational modeling, and rigorous experimental testing of small-scale components. The overarching requirement emplaced on the processes and technology created during this phase was that it must be inherently scalable on rapid timescales. The second part would undertake the scale-up of the conductor/coil technology into a representative scale magnet, or "model coil", that would ultimately qualify the design, fabrication, modeling, and operation of a high-field REBCO magnet for readiness in a first-generation high-field net fusion energy device. Such model coils have historically played a critical role in advancing superconducting fusion magnets, confirming the arrival of a step-change in this key enabling technology.
## III Brief review of large-scale superconducting fusion magnet test programs
Superconducting magnet systems for fusion embody substantial scale, cost, schedule, complexity, and risk. From a physics perspective, the design, operation, and ultimately the fusion performance of the machine is fundamentally set by the magnetic fields, making achievement of the magnet design specifications imperative. From an engineering perspective, the magnet systems are heavily integrated into the device superstructure, making them difficult or impossible to replace or repair. Thus, while advances in superconducting magnet technology can often bring transformational benefits to a fusion machine, it is imperative that the technology be derisked before being utilized in a fusion device. Therefore, the integration of major advances in superconducting magnet technology has historically been preceded by significant research and development programs on specially designed "model coils" that achieve fusion device magnet relevant scale and performance but often in a stand-alone configuration in specialized test facilities for efficiency.
The first such endeavor was the Large Coil Task (LCT), an international collaboration between the United States, Japan, Switzerland, and Europe that sought to evaluate the feasibility of large-scale superconducting magnets for fusion tokamaks [38]. By 1987, the LCT successfully demonstrated multiple fusion-relevant superconducting magnet technologies, many for the first time in a large superconducting coil. Six 2.5 m x 3.5 m bore toroidal field (TF) like coils capable of producing 8 T peak field-on-coil in steady-state were built and tested individually and as an array at the International Fusion Superconducting Magnet Test Facility (IFSMTF) at Oak Ridge National Laboratory (ORNL) [39]. Five coils - General Dynamics (GD), General Electric/ORNL (GE/ORNL), Switzerland (CH), EURATOM (EU) and Japan (JA) - utilized NbTi superconductor in a steel structure while a sixth from Westinghouse (WH) was the first large-scale coil to utilize react-and-wind Nb\({}_{3}\)Sn in an aluminum structure. To evaluate cryogenics, three coils (GD, GE/ORNL, JA) were cooled with atmospheric liquid helium pool boiling while the remaining coils (EU, CH, WH) employed forced-flow supercritical helium
at 1.5 MPa. To evaluate winding configurations, five of the coils were pancake wound while the GD coil was layer wound.
The LCT proved that steady-state, or DC, superconducting coils could indeed be built and operated at fusion-relevant scales and performance; however, it was recognized that the pulsed, or AC, central solenoid (CS) and poloidal field (PF) magnets contained significant further challenges and would require their own model coil programs. Two such programs were undertaken. In the late 1980s and early 1990s, the US and Japan built and tested several Demonstration Poloidal Coils (DPC), including a 2 m bore 30 kA, 10 T/s Nb\({}_{3}\)Sn cable-in-conduit conductor (CICC) coil led by MIT in the US [40] and a 30 kA NbTi coil led by JAERI in Japan [41]. Following the conclusion of the DPC project, the large POLO model coil, a PF-like NbTi coil with a 3 m bore, 15 kA nominal current, and 2 T/s ramp rate was built and successfully tested by Forschungszentrum Karlsruhe (FZK) in 1997 [42].
One of the key conclusions of ITER Conceptual Design Activity (CDA) was that the ITER magnet systems would need to be superconducting; thus, the follow-on ITER Engineering Design Activity (EDA) initiated two model coil programs in 1992 with the explicit objective of demonstrating the design, manufacturing, and operation of ITER-relevant TF and CS magnets. The ITER Toroidal Field Model Coil (TFMC), carried out by the European Union, built a 40 ton, 80 kA, 7.8 MA-turn Nb\({}_{3}\)Sn CICC coil using 7 double pancakes inside a SS316 LN structural case with the objective of maximizing likeness to the ITER TF coil in a scaled test article [43]. The coil was successfully tested, individually and mounted at a 4.5 degree angle to the EURATOM LCT coil to increase peak field-on-coil and out-of-plane IxB Lorentz loading to 9.97 T and 797 kN/m, respectively, at the TOSKA facility at FZK in 2001 [44]. Concurrently, the ITER Central Solenoid Model Coil (CSMC) project, executed by the European Union, Japan, Russia, and the United States, built a 101 ton, 46 kA coil composed of an outer and inner module, built by Japan and the US, respectively, along with an insert coil built by Japan. Tested at the JAERI facilities in Naka, Japan, the combined CSMC achieved a peak field-on-coil of 13 T with ramp rates of 0.6 T/s (inner module) and 1.2 T/s (insert coil) - in excess of ITER CS requirements at the time - with a total stored energy of 640 MJ with no performance degradation after 10,000 load cycles [45].
Following a similar progression as ITER, the MIT PSFC and CFS concluded an approximately five-year conceptual design activity that resulted in the physics basis for the SPARC tokamak in 2020 [46]. One of the key conclusions was that a tokamak approximately \(\sim\)35 times smaller in volume than ITER could achieve high fusion gain (Q\({}_{\text{physics}}>2\)) provided that (1) the TF magnet could provide 12.2 T on-axis magnetic field, corresponding to approximately 22 T peak field-on-coil and (2) that the TF magnet could maintain cryostability despite the significant nuclear heating expected in a compact tokamak with minimal radiation shielding. Both requirements rule out the use of NbTi or Nb\({}_{3}\)Sn superconductor, which can practically achieve maximum fields of around 9 T and 13 T, respectively, in fusion-style magnets and cannot tolerate significant nuclear heating due to their small margins to critical temperature, low heat capacity, and low thermal diffusivity at 4 K. The only superconductor capable of meeting these requirements was REBCO.
Given the lack of experience world-wide with the design, fabrication, and operation of fusion-scale REBCO magnets, MIT and CFS undertook a rapid one-year conductor and coil development program to establish the foundational magnet technologies followed by a two-year period to design, build, and test the TFMC within a new magnet test facility.
## IV Programmatic overview of the TFMC Program
### _Project Objectives_
The principal function of the TFMC was as a risk-retirement article for the SPARC TF magnet; as such, the technical requirements and the project objectives flowed down from the TF to the TFMC. A comparison between the defining magnet engineering parameters of the TFMC and the SPARC TF magnet is presented in Table I. Compared to the SPARC TF in terms of size, the TFMC is an approximately 55% scaled version, which is large enough to develop representative fabrication processing but at significantly reduced cost and schedule compared to TF-scale. Despite the scaled size, the TFMC is capable of matching or exceeding many of the critical parameters including the peak field-on-coil, electromagnetic loading on conductor, terminal electrical current, winding pack current density, and cryogenic cooling metrics. Although the stored magnetic energy does not match, the TFMC at 110 MJ provides sufficient energy to assess the feasibility of dissipation
\begin{table}
\begin{tabular}{l|r|r}
**Design Parameter** & **TFMC** & **TF Coil** \\ \hline Magnet mass [kg] & 10,058 & 18,025 \\ \hline Magnet size [m] & 1.9 x 2.9 & 3.0 x 4.3 \\ \hline Winding pack (WP) mass [kg] & 5,113 & 7,975 \\ \hline WP minimum turn radius [m] & 0.2 & 0.4 \\ \hline WP current density [A/mm\({}^{2}\)] & 153 & 94 \\ \hline WP inductance [H] & 0.14 & 0.59 \\ \hline WP amp-turns [MA-turns] & 10.4 & 6.3 \\ \hline Terminal current [kA] & 40.5 & 31.3 \\ \hline Number of turns & 256 & 200 \\ \hline Number of pancakes & 16 & 16 \\ \hline Total REBCO [km] & 270 & 270 \\ \hline Coolant type & Supercritical helium \\ \hline Coolant pressure [bar] & \(10-20\) & 15 \\ \hline Operating temperature [K] & 20 & \(8-17\) \\ \hline Peak magnetic field [T] & 20.1 & 23 \\ \hline Peak Lorentz loading [kN/m] & 822 & 750 \\ \hline Magnetic stored energy [MJ] & 110 & 316 \\ \hline \end{tabular}
\end{table}
Table I: Parameter comparison between the TFMC and one coil from the SPARC toroidal field (TF) magnet in 2021.
uniformly in the winding pack in a quench scenario and to induce localized thermal damage if this cannot be achieved.
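As a quick back-of-the-envelope consistency check on the Table I values (our arithmetic, not a figure quoted in the source), the stored energy implied by the winding-pack inductance and terminal current is

\[
E=\tfrac{1}{2}LI^{2}=\tfrac{1}{2}\,(0.14\ \mathrm{H})\,(40.5\times 10^{3}\ \mathrm{A})^{2}\approx 1.1\times 10^{8}\ \mathrm{J}\approx 115\ \mathrm{MJ},
\]

consistent with the roughly 110 MJ of stored magnetic energy listed for the TFMC.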
With the technical requirements defined, the TFMC Program was then programmatically oriented around the achievement of six main objectives:
1. The achievement of 20 T peak field-on-conductor in a large-bore magnet with a terminal current of 40 kA and a transverse IxB load of 800 kN/m at a temperature of 20 K with total stored magnetic energy of 110 MJ.
2. The demonstration of key aspects of the magnet design:
* Confirmation of no-insulation magnet physics and operation, particularly the current, voltage, power, and temperature distributions in the magnet during charging/discharging and steady-state regimes.
* Demonstration of efficient cryogenic cooling through the use of massively parallel, single pass, machined cooling channels within the winding pack and the structural case as a pressure vessel.
* Demonstration of simple, robust, demountable pancake-to-pancake joints internal to the magnet winding pack with resistances in the 1 n\(\Omega\) range.
3. The development and validation of high-fidelity electromagnetic and thermo-mechanical models of no-insulation magnets such that confidence in the design and operation of the full-scale SPARC TF could be achieved.
4. Exploration of passive, self-protecting resilience to current sharing and quench in representative-scale, fusion-relevant no insulation REBCO magnets.
5. The development and qualification of materials, instrumentation, tooling, fabrication processes, and external vendors to enable confidence and speed in the construction of the SPARC TF magnet.
6. Development of the REBCO supply chain by providing magnet-pull in the form of challenging tape specification, close technical partnerships, and, most importantly, an unprecedented injection of capital expenditure through large tape orders to enable manufacturer scale-up.
### _Project Constraints_
Another key shaping function for the TFMC, its test facility, and the test campaigns were the constraints imposed upon the project. These constraints were of two types: anticipated and accepted as the boundary conditions of executing the project under the circumstances available; and unanticipated from external sources beyond project control. Understanding the constraints helps provide insight into why certain technical decisions were made and how those decisions manifested. While a comprehensive list is beyond the scope of this paper, a brief review of the key constraints and mitigating decisions is included here.
Budget and schedule are two important constraints in any large-scale engineering project but were amplified in the TFMC Program as a result of project sponsorship through a start-up company (CFS) with a fixed capital raise and scheduled milestone targets. Schedule proved to be the most defining constraint of the two. The June 2021 completion date for the TFMC Program was set at the official start of the MIT-CFS collaboration in June of 2018, providing an immutable 3-year window to complete the four phases of the project described in Section IV C.
Several strategies were employed to minimize schedule. The "make-buy" strategy was carefully evaluated with heavy weighting towards in-housing major parts and subsystems at MIT and CFS while choosing fabrication processes and partner vendors primarily on the ability to execute successfully on rapid schedules over other considerations such as cost. Closely coupled to this strategy was the high - but carefully managed - tolerance for technical risk in the R&D process, allowing significant advances with fewer iterations. Examples of these strategies on the TFMC include the decision to build and then use all the equipment required to wind the magnet in-house while intentionally crafting the size and shape of the TFMC to fit within the existing capacities of established vendors who could deliver on the required schedule. Another example: external, existing test facilities were sought but none were identified that could be ready on the schedule required, resulting in a decision to build a new test facility from scratch at MIT PSFC. Decisions for major elements of the test facility itself, such as the decision to design and build custom 50 kA binary LN2-cooled REBCO current leads, were driven by the lack of rapidly available 50 kA current leads from national laboratory or commercial vendors.
Another important constraint that shaped the project, in particular the design of the test facility, was the pre-existing infrastructure available at the MIT PSFC. Unlike almost all large-scale magnet test facilities, the MIT PSFC does not host an extensive liquid helium infrastructure, which is typically used both to provide the cryogenic coolant to the magnet as well as the coolant to superconducting current leads and feeder cable systems. To provide cooling to the TFMC, an innovative liquid-free cryocooler-based helium circulation system was developed with an external vendor to provide 600 W at 20 K cooling capability at significantly reduced cost, footprint, and schedule compared to a new liquid helium infrastructure. To provide cooling to the 50 kA superconducting current leads and feeder cables, the large 18,000 gallon capacity LN2 storage and distribution system was utilized by designing custom LN2-cooled REBCO current leads rather than the more traditional helium vapor-cooled leads found in similar scale magnet systems [47].
While high bay experimental halls were available for the fabrication and assembly of the TFMC and the 50 kA current leads, the test facility had to be built in an experimental hall with 18 foot vertical clearance. This constraint challenged the design of the vertical 50 kA current leads and resulted in the decision to test the TFMC magnet in the horizontal configuration, which departs from previous large-scale superconducting magnet tests such as the ITER TFMC in the KIT TOSKA facility [44].
A final important and completely unanticipated set of constraints was that imposed on the project by the COVID-19 pandemic. Perhaps the most significant impact was near-complete cessation of onsite hardware activities at MIT and CFS from mid-March to mid-May of 2020, followed by a slow ramp-up during the summer months of 2020. Many of the project's key vendors were similarly impacted, resulting in closures and staffing issues. Further challenges resulted from the inability of TFMC personnel to routinely make onsite visits to vendors to collaborate effectively, witness critical processes, and perform quality inspections, typically a critical set of activities for successful complex engineering projects. Strategies employed to mitigate impacts included rotating two and three onsite shifts to minimize personnel density, requiring full personal protective equipment and eventually vaccination while working on site, twice a week onsite COVID testing at MIT and CFS, moving all meetings to videoconferencing, and remote video inspection of vendor parts.
### _TFMC Program Structure and Timeline_
To achieve the programmatic objectives within the imposed project constraints as described in the previous two subsections, the TFMC Program was factored into four distinct phases spanning approximately three years:
1. _Foundational conductor/coil development (2018-2019):_ Novel REBCO conductor and coil technologies were required to meet the requirements of superconducting fusion magnets capable of achieving in excess of 20 T peak field-on-coil. Activities in this phase focused on small-scale prototyping and testing of two very different base technologies suitable for such magnets: REBCO cables based on the insulated cable-in-conduit conductor (CICC) concept pioneered by Montgomery, Hoenig, and Steeves at MIT in the mid-1970s [48]; and REBCO coils based on the single tape-wound no-insulation coils developed by Hahn, Park, Bascunan, and Iwasa at MIT in the late 2000s [49].
2. _Magnet (2019-2021):_ After demonstration of the base conductor/coil technologies, it was decided that a large-scale toroidal field (TF) magnet of fusion-relevant size and performance should be designed, fabricated, and tested. This approach followed a well-established precedent in the advancement of large-scale superconductor technology, as reviewed in Section IV, of building representative "model coils" as an important risk-retirement step towards constructing full-scale fusion devices incorporating that technology. Magnet fabrication and assembly took place at the MIT PSFC, in a large ~370 m\({}^{2}\) (~4,000 sq. ft.) hall explicitly reconfigured for this purpose with REBCO quality-assurance/quality-control and some magnet winding activities taking place at CFS.
3. _Magnet Test Facility (2019-2021):_ In parallel with the magnet activities, testing the magnet required the establishment of a new test facility capable of meeting both the unique technical and the demanding schedule requirements imposed on the project. The test facility was built at the MIT PSFC in order to repurpose the pre-existing large-scale experimental facilities and infrastructure made available by the shuttering of the Alcator C-Mod tokamak in 2016. These capabilities included a large ~835 m\({}^{2}\) (~9,000 sq. ft.) experimental hall, over 1 MVA of available electrical power up to 13.8 kV, a 400 kW distilled water cooling system, and 18,000 gallon LN2 distribution capabilities. Importantly, decades of experience conducting large-scale complex electromechanical experiments, combined with close cooperation with MIT's Environmental Health and Safety office, maximized the probability of safe, successful tests.
4. _Experimental Test Campaigns (2021):_ Two distinct test campaigns were carried out in the fall of 2021. The first test campaign in August and September 2021 targeted assessment of the charging/discharging and steady-state performance of the coil, including the demonstration of 20 T peak field-on-conductor, electromagnetic characteristics, measurement of joint resistances, and structural assessment of the coil under peak loads. The second test campaign in October 2021 focused on characterizing the response of the coil to off-normal events, including operation in current-sharing regimes at increasingly higher temperatures and under worst-case open circuit quench conditions. The test campaigns were followed by a series of non-destructive and destructive post-mortem analyses of the magnet to further validate the engineering design of the coil as well as to confirm experimental findings from the test campaigns, in particular the localized damage sustained during the final programmed open circuit quench.
## V Preliminary R&D towards the TFMC
This section provides a brief overview of the development of two different REBCO technologies - VIPER cable and NINT coils - that took place during the first phase of the TFMC Program. This development was completed rapidly and in parallel at small-scale over approximately one year for two purposes: first, to determine which of the two technologies would be selected for the DC TF magnet in SPARC; and second, to develop a foundation for the cables required for the AC magnets in SPARC and superconducting feeder/bus systems. NINT, due primarily to the long L/R charging/discharging time constant imposed by NI magnet physics, is only suitable for the steady-state TF magnet. In contrast, insulated VIPER cable can satisfy the requirements of both steady-state magnets like the TF as well as handle the rapid magnetic flux density swings required by pulsed magnets like the central solenoid (CS) and poloidal field (PF) coils in a tokamak.
### _The VIPER REBCO Cable_
At the start of the TFMC Program, a range of fusion-relevant REBCO cable designs had been proposed [50, 51, 52, 53, 54, 55] by institutions around the world. These cables were all based on either the twisted stacked tape conductor (TSTC) architecture developed by Takayasu at MIT PSFC [56] or the conductor on round core (CORC(r)) architecture developed by van der Laan at Advanced Conductor Technologies [57]. Despite significant progress, several technical issues remained unresolved, including critical current (I\({}_{\text{c}}\)) degradation under representative high-field magnet electromagnetic loading (including axial strain and cycling) and quench detection given the slow normal zone propagation velocities inherent in HTS materials. Complexity in fabrication, particularly in the preparation of low resistance joints, and little development in integrating and testing these cables in magnet geometries presented scale-up challenges.
A REBCO CICC cable R&D program was undertaken to directly address these remaining issues. The result was the VIPER cable shown in Fig. 2, an industrially scalable high current REBCO cable based on the TSTC architecture. In the period of 2018-2019, over a dozen VIPER cables were built spanning lengths from 1 to 12 m, including a 3D single-turn coil intended to be tested in the NIFS 13 T large-bore test facility [58] and a multiturn pancake coil tested in LN2 at the MIT PSFC. The most stringent tests were a series of four experimental campaigns (comprising two identical VIPER cables each) at the SULTAN test facility at PSI [59]. Novel cable assemblies were developed to provide simultaneous transverse IxB Lorentz loads and axial strain to emulate the force state of a 3D coil [60]. Key results included stable I\({}_{\text{c}}\) with less than 5% degradation at an IxB load of 382 kN/m (2000 cycles) and with axial strain of 0.3% for over 500 cycles [6], robust demountable joints in the 2-5 n\(\Omega\) range [6], and the first demonstration of two separate fiber optic quench detection technologies on full-scale conductors in fusion-relevant conditions suitable for the low normal zone propagation velocity of REBCO [61].
### _No Insulation - No Twist (NINT) REBCO Coils_
In the mid-2010s, no-insulation (NI) REBCO technology had emerged as an alternative means of building high-field superconducting magnets. Superconducting NI coils were first proposed by Berlincourt and Hake, the co-discoverers of NbTi as one of the earliest practical superconductors, in the mid-1960s [62]. Since that time, and with the incorporation of REBCO as the superconductor, single-tape wound REBCO NI coils have developed into mature, high-field, well-engineered magnet systems and presently hold the world record of 45.5 T magnetic field strength [63]. The first fusion-relevant REBCO NI coil at the MIT PSFC was built in 2016 using 500 m of 12 mm wide tape and achieved a peak field in the 8 cm clear bore of 6 T [64].
Several challenges existed at the start of the TFMC Program in adapting such coils for fusion purposes. First, the scale of fusion TF coils required handling several GJ of stored magnet energy in the event of a quench, orders of magnitude beyond what had been experimentally demonstrated to date on small-bore NI coils. Second, the extremely long L/R charging/discharging times caused by the high inductance of the single-tape/many-turns approach were incompatible with the timescales of fusion TF systems. Finally, the tightly packed NI coil geometry was unfavourable for efficiently removing the large amount of nuclear heat found in tokamaks, especially compact machines with limited space for nuclear radiation shielding.
In parallel with the VIPER cable program, a NINT REBCO coil R&D program was started to assess the feasibility of adapting NI coils to serve as fusion TF magnets. The first part of the program focused on conceptual coil design and resulted in several innovations.
A REBCO stack-in-plate concept was developed in which a stack of REBCO tapes was wound directly into spiral grooves machined into one side of a plate of structural metal [7]. As was done for VIPER cables, the REBCO tape stacks are soldered in place. Channels machined into the other side could accommodate cryogenic gas and be used for efficient, local, single-pass cooling [53]. After this, a two-pronged approach was taken that coupled a series of experiments on small demonstration coils with an advanced modelling program. A series of three 40 cm radius NINT coils with 16-tape REBCO stacks were built and tested at self-field at 77.3 K in LN2 and up to 10 kA terminal current at 15 K in a conduction-cooled configuration, with the thermally insulated coil suspended over a bath of liquid helium as the coolant source, as shown in Fig. 3. These tests provided experimental data on the electromagnetic behaviour of the coils, particularly on the high cryostability of the coils and passive self-protection during quench by uniformly dissipating the magnet stored energy throughout the cold mass of the magnet.

Fig. 2: VIPER cables. (a) shows a cross section of two steel-jacketed VIPER cables configured for SULTAN testing. (b) shows a completed test assembly at MIT before shipping to PSI.

Fig. 3: NINT coils. (a) shows a NINT coil including the superconducting terminals and instrumentation cabling. (b) shows a NINT coil configured for 15 K testing at the MIT PSFC.
### _Selection of NINT for the TFMC winding pack_
At the conclusion of the first phase of TFMC in June 2019, a review was held to assess the results from the VIPER and NINT development programs with the objective of selecting the technology that would be scaled-up into the TFMC and, if successful, the SPARC TF. Design and analysis based on the experimental and modelling results from the VIPER and NINT development programs showed that either technology could successfully be used as the base magnet winding pack technology to achieve the necessary requirements for both the TFMC and the SPARC TF. Ultimately, NINT technology was selected for the TFMC on the basis of the proposed advantages summarized in Table II, although, as discussed at the end of this section, VIPER cable was utilized in several ways in the TFMC Program.
The advantages can be roughly grouped into three types: superconducting; cryogenics; and fabrication. The high winding pack current density achievable with NINT, primarily due to the lack of insulation and the compact untwisted REBCO stack, provides a large design space for a compact but high-field magnet. Thermal stability is enabled by efficient cooling and large current sharing capacity of the unit cell. This allows the use of REBCO with defects such as dropouts, aggressively grading the REBCO tape stack, and possible tolerance to some threshold-level of damage sustained during fabrication or operation. Small-scale NI coils have demonstrated a significant degree of self-protection from quenches in experiments [64] and modelling [65]. If this capability could be successfully extended to large-scale NI coils suitable for fusion, active quench detection and mitigation systems could be eliminated. This would reduce the complexity of magnet fabrication and operation and provide passive protection, eliminating or minimizing perhaps the most significant operational challenge in superconducting fusion magnets.
In terms of cryogenics, the NINT design enables a scheme of massively parallel, single-pass cooling channels in the winding pack that maximizes global heat removal and can be optimized for local heat removal. This capability is important for the TF magnet in the SPARC tokamak, which minimizes nuclear radiation shielding to achieve compact size and results in large nuclear heating of the cryogenic TF.
In terms of fabrication, the open geometry of the NINT cable-in-plate concept, straightforward fabrication processes, and the absence of high voltage insulation make production of NI magnets relatively efficient and scalable. In particular, the use of REBCO as opposed to LTS materials and the absence of an epoxy vacuum pressure impregnation step eliminate two of the most complex, long duration high temperature heat treatment operations involved in traditional insulated LTS CICC superconducting fusion magnets. Because the magnet winding pack is not encased in VPI epoxy and because demountable pancake-to-pancake joints can be used, NINT magnets can be fully disassembled for maintenance or component replacement without destructive operations. Finally, the intrinsic low voltage nature of NINT eliminates the need for high voltage current leads and feeder cables, traditionally one of the most challenging parts of insulated superconducting fusion magnet systems, and provides enhanced personnel and machine safety.
The VIPER cable R&D program was also drawn upon for the TFMC. The successful vacuum pressure impregnation solder process developed for VIPER cables was directly adapted to fabricate NINT pancakes for the TFMC winding pack [66]. The robust, demountable, low resistance joints of the VIPER cable were modified to serve as demountable pancake-to-pancake joints embedded within the winding pack. Finally, VIPER cables were used directly in the implementation of the test facility, with three pairs of jointed VIPER cables forming the superconducting feeder system between the 50 kA LN2-cooled binary current leads and the TFMC magnet terminals.

| **Proposed NINT Design Features** | **NINT Advantage to be Assessed by TFMC** |
| --- | --- |
| High winding pack current density | Compact, high-field REBCO magnet; Large magnet design space |
| High thermal stability | Resistant to quench; Robust to REBCO defects and local damage |
| Quench resiliency | Potential to eliminate active quench detection or mitigation systems |
| Single-pass pressure vessel cooling | Handle high nuclear heating; Local cooling optimization; Simple manifolding |
| Simple modular construction | Rapid fabrication; Scalable for SPARC and commercial use; Maintenance options |
| Intrinsically low voltage (<1 V) | Minimal insulation; Simple fabrication; Low voltage leads and feeders; Safety |

Table II: Summary of the proposed NINT advantages for a tokamak TF coil used to select NINT technology for the TFMC winding pack.
## VI TFMC Program technical overview
The purpose of this section is to provide a short summary of the five technical papers that accompany this overview paper, covering major facets of the program at a high-level and providing a unified view of how the entire project was integrated together to form a complete whole. Readers interested in obtaining an in-depth technical analysis are encouraged to review the companion papers associated with each topic in this special issue.
### _The TFMC Magnet_
The TFMC, shown at the bottom-left of Fig. 1 and as a detailed CAD rendering in Fig. 4, is a non-insulated, REBCO stack-in-plate style superconducting magnet. It weighed 10,058 kg and utilized 270 km of REBCO. It has three main components: (1) the winding pack; (2) the structural case; and (3) the plena. The winding pack is composed of a stack of 16 single pancakes with 2 termination plates top and bottom. The pancakes are Nitronic-40 radial plates machined with spiral channels on one side for the REBCO tape stack and single-pass channels on the opposite side for supercritical helium coolant. After REBCO winding and pancake assembly is complete, the pancake undergoes a vacuum-pressure impregnation solder process to provide good mechanical protection of the REBCO stack and efficient thermal and electrical connectivity throughout each pancake. Each pancake is electrically tested in LN2 following the solder process, providing quality assurance / quality control as well as superconducting performance data to guide pancake location within the winding pack stack and inform models of magnet performance. The pancakes are bolted at the perimeter to provide mechanical and thermal connectivity while inter-pancake joints provide low resistance current transfer between pancakes. The top and bottom termination plates facilitate electrical connection to a superconducting feeder system.
The winding pack is contained within a structural case, a "trough and lid" style design composed of two Nitronic-50 forgings machined to shape and bolted together. The case reacts the large electromechanical loads, with stresses approaching 1 GPa during operation, and serves as a pressure vessel to enable single-pass 20 bar supercritical helium flow that cools the winding pack and case. Two high pressure vessels or "plena" are attached to the case with unique high-pressure feedthroughs to provide winding pack access for current, cooling, and instrumentation, completing the magnet assembly.
### _The TFMC Test Facility_
The 835 m\({}^{2}\) (9,000 sq. ft.) TFMC Test Facility is shown in overview in Fig. 1. It was built as a stand-alone large-scale REBCO magnet test facility at the MIT PSFC in less than two years in the large experiment hall formerly housing power equipment for the Alcator C-Mod tokamak. The major engineering systems enabling magnet testing are as follows:
* _Main vacuum cryostat:_ A 35-ton SS316 vacuum cryostat with 20 m\({}^{3}\) internal volume was designed at MIT and built at an external vendor. External SS316 ribs minimize deflection under vacuum while the support structure underneath was carefully designed to properly support the approximately 10-ton TFMC magnet. The cryostat contains ten radial NW500 flanges to provide abundant internal access. A central bore is provided for instrumentation access within the high-field region of the TFMC. The top lid separates for installation/removal of large components.
* _Vacuum system:_ Pumping is provided by two Leybold Turbovac TMP1000 turbomolecular pumps (total of 2000 l/s pumping speed) backed by Leybold Ecodry 35 scroll pumps. The scroll pumps were located outside of the 35-gauss line of the TFMC, with the turbos enclosed within 1-inch thick iron magnetic shields. High vacuum was routinely achieved during experiments.
* _LN2 radiation shields and distribution system:_ Radiation shields, consisting of modules of two steel panels with a thin interstitial space for LN2, were also designed at MIT and built by an external vendor. They were assembled within the vacuum cryostat. The LN2 was gravity fed from two storage dewars adjacent to the vacuum cryostat, which were supplied with LN2 from an external 18,000 gallon storage tank.
* _Power supply:_ The system was delivered by Alpha Scientific Electronics and is composed of eight cabinets each providing 6.25 kA for a total of 50 kA.
* _Current leads and feeder system:_ A set of binary LN2-cooled 50 kA REBCO current leads and a VIPER cable feeder system were designed and built in-house at MIT PSFC. These systems are described in more detail in Section VI C.
* _Supercritical helium circulation system:_ A liquid-free cryocooler-based system providing cooling down to 20 K for the TFMC, current leads, and feeder system was built by an external vendor. The system is described in more detail in Section VI D.

Fig. 4: CAD rendering of the TFMC magnet showing the principal components.
### _Current leads and feeder cable system_
The current leads (CL) and feeder cables (FC), shown together in proximity to the TFMC magnet in Fig. 5, represent a significant superconducting achievement in their own right. Designed and built in-house at the MIT PSFC as part of the TFMC Test Facility, the system was designed to handle 50 kA of current and utilize the significant onsite LN2 storage and distribution system available. Prior to the TFMC test campaigns, the CL and FC system was successfully commissioned at 41 kA to ensure the nominal 40.5 kA of terminal current for the TFMC tests; post-TFMC testing demonstrated stable operation at 50 kA, the maximum current available from the power supply. CL-FC joint resistances and FC joint resistances were measured in the 1 to 2 n\(\Omega\) range at 40.5 kA.
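As a rough back-of-the-envelope illustration (this estimate is ours, not a value quoted in the companion papers), Joule's law applied to the measured joint resistances gives the heat load each joint adds to the cryogenic system at the nominal terminal current:

\[P_{\text{joint}}=R_{\text{joint}}\,I^{2}\approx(1\ \text{to}\ 2\ \text{n}\Omega)\times(40.5\ \text{kA})^{2}\approx 1.6\ \text{to}\ 3.3\ \text{W},\]

i.e., a few watts per joint that the LN2 and supercritical helium circuits described in this section must remove continuously during operation.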
Each of the approximately 3 m tall CLs consists of three sections. The upper, electrically resistive heat exchanger section is composed of a large, heavily slotted C101 copper piece, designed to transport electrical current with high conductivity while being cooled with gaseous nitrogen boiling off from the middle section. The middle section is a large boiling chamber containing an internal reservoir of LN2. Internally machined cooling fins maximize heat transfer, while external claw pumps provide the ability to subcool the LN2 to 65 K by reducing the pressure in the chamber. The boiling chambers are fed from a large LN2 reservoir housed within the current lead vacuum cryostat, which also serves as the cryostat radiation shield. Connected to the bottom of the boiling chamber is the lower superconducting section. This section is composed of six individual petals that combine to provide an I\({}_{c}\)(77 K, self-field) of 58.4 kA. Each petal consists of two large C101 copper terminal blocks spanned with a SS316 bridge containing twenty-two stacks of 4 REBCO tapes soldered into machined channels. When assembled, the non-REBCO side of the C101 copper terminal blocks of the six petals forms an electrical and mechanical joint with the first VIPER feeder cable.
The superconducting FC system is composed of three sets of highly shaped non-planar VIPER cables, which are enclosed by a conduction cooled copper radiation shield attached to the main vacuum cryostat LN2 radiation shields. The longest, central set of cables - the "cold bus" - is a VIPER cable with a 10 mm central cooling hole for coolant. The coolant is warm supercritical helium exhaust from the TFMC, nominally at 22 - 25 K compared to the 20 K helium inlet temperature to the magnet. Four stacks of REBCO resulted in an I\({}_{c}\)(25 K, self-field) of 101 kA and a T\({}_{c}\)(45 kA, self-field) of 55 K. This provides a large factor of safety relative to the 40.5 kA, 25 K nominal operating conditions. The bus cables have a gradual "S" shape bend along the cable axis, which enables the feeder system to absorb, with less than 0.1% axial strain, the thermal contraction movement induced during cooldown as the current leads and the TFMC shrink away from each other. On either end of the bus cables, a set of nearly identical performance VIPER cables joins the bus to the CLs and the TFMC terminals; however, these cables have a solid copper former to maximize conduction cooling and simplify the supercritical helium circuit. All cables had to be bent to tolerances of a few mm to guarantee successful assembly.
### _Cryocooler-based helium circulation system_
Another innovative feature of the TFMC Test Facility was the implementation of a liquid-free cryocooler-based system that circulates supercritical helium as an alternative to the more traditional but cost- and schedule-intensive liquid helium infrastructure found at large-scale magnet test facilities. The system was responsible for cooling the TFMC, the FCs, and the REBCO section of the CLs.
The design and construction of the system was contracted to an external vendor. To meet the cryogenic requirements, two nearly identical modules - each containing four Cryomech AL630 cryocoolers housed within an LN2-shielded vacuum cryostat - were operated in parallel. Custom gas heat exchangers were integrated directly into the cryocooler coldhead, and cryofans with a maximum rotation speed of 60 krpm in each of the modules actuated helium circulation. During commissioning, the system confirmed 600 W of cooling power at 20 K with a supercritical helium mass flow of 70 g/s at maximum operating pressure of 20 bar-a. Heaters in each module provide for temperature control of the helium supply to the circuit.

Fig. 5: CAD rendering showing how the superconducting current leads (at top-left) and VIPER cable feeder system (at center) interface with the TFMC magnet (at bottom-right).

Fig. 6: Data from the second cooldown of the TFMC magnet from 300 K to 20 K over the course of 5 days.
Fig. 6 shows an example of cryogenic system performance during a 300 K to 24 K cooldown of the TFMC. Average helium circuit pressure, roughly maintained during cooldown by the addition of fresh helium, was approximately 13.3 bar-a. Cryocoolers were activated in sets of two to maximize the cooling rate but stay well within the administrative limit of a 50 K maximum difference between the helium inlet and return temperatures, imposed to avoid temperature gradient-induced strains in the winding pack and structural case. The heaters were activated around hour 105 to maintain the target TFMC temperature of 24 K. The cooldown time of approximately 4.5 days was in good agreement with the in-house cryogenic circuit model used to design the helium circuit.
### _Experimental results from the TFMC campaigns_
The TFMC was experimentally assessed in two test campaigns between August and October of 2021; following the test campaigns the coil was removed from the cryostat, carefully disassembled, and subjected to a series of destructive and non-destructive tests. Disassembly was aided by the modular, demountable nature of the TFMC coil. The post-mortem was conducted to maximize learning from the TFMC, including confirmation of the engineering design and construction of the coil after rigorous testing up to full performance at 20 T, and to assess the resiliency of the magnet to two open-circuit events that occurred during testing. The open-circuits were of particular interest, as they represent the most damaging circumstances for an NI coil: currents driven radially cause internal joule heating that can lead to a quench. Without the ability to detect-and-dump the stored magnetic energy that insulated magnets rely on, large-scale NI fusion magnets must have a successful strategy for handling quench. The TFMC tests provided the first opportunity to gather extensive data on a fusion-scale NI TF coil to aid in the validation of computational models and to guide future magnet design.
The first test campaign objectives were to assess the charging/discharging and steady-state electromagnetic response of the coil at the full design performance of 20 T peak field-on-conductor with 40 kA of terminal current. Measurements were successfully made of radial resistance, current, voltage and temperature distribution in the winding pack, the magnet's L/R decay time constant, internal pancake-to-pancake joint resistance, cryogenic helium flow distribution and cooling power, and structural performance. Fig. 7 shows an overview of the campaign's magnet ramp. The test took approximately 5 days due to the approximately 3-hour L/R decay time constant, placing great demands upon the operators. The magnet achieved a peak field-on-coil over a significant section of first turn REBCO stack of 20.1 T approximately 65 hours into the test.
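The multi-day duration follows from simple first-order dynamics: in an NI coil, the azimuthal (field-generating) current approaches a new terminal-current setpoint on the L/R time constant. The sketch below is an illustrative model only, not the actual TFMC charging procedure; the ~3 h value of the time constant is taken from the text, while the settling criterion and evaluation times are our assumptions.

```python
import numpy as np

# Illustrative first-order L/R response of an NI coil to a terminal-current step.
# tau is the ~3 h time constant quoted in the text; the 99% settling criterion
# and the evaluation times are assumptions made for illustration only.
tau_hours = 3.0

def settle_time(fraction=0.99, tau=tau_hours):
    """Hours for the coil current to reach `fraction` of a terminal-current step."""
    return -tau * np.log(1.0 - fraction)

t = np.array([1.0, 3.0, 6.0, 12.0])              # hours after a step
coil_fraction = 1.0 - np.exp(-t / tau_hours)     # normalized azimuthal current

print(f"99% settling time per step: {settle_time():.1f} h")   # ~13.8 h
for ti, fi in zip(t, coil_fraction):
    print(f"  {ti:4.1f} h after a step: {fi:.2f} of the commanded change")
```

Stacking several such settling periods for a stepwise ramp to full current readily accounts for a test lasting on the order of days.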
The second test campaign had two objectives: to precisely quantify TFMC REBCO superconducting performance in DC as a function of temperature; and to fully measure the dynamics of the coil during an intentional open-circuit quench at a terminal current (31.5 kA) very close to that of the proposed SPARC TF magnet. Excellent voltage and temperature data were acquired during the initiation, incubation, and dump phases of the quench. The acquisition of this data set, and its subsequent utilization to validate and extend the extensive array of NI computational modelling tools, achieved one of the most important objectives of the TFMC Program.
Fig. 8: Post-quench analysis of the TFMC. (a) shows Pancake #12 during the post-mortem with a sharply defined region of thermal damage in the upper-left light corner. (b) shows a 3D FEA simulation 170 s into the evolution of a 30 kA quench; the burn region is reproduced almost perfectly.
Fig. 7: Data from the first TFMC test campaign, showing the terminal current (top) and 3D teslameter measurements of the peak field-on-conductor exceeding 20 T (bottom).
During the quench, the predicted rapid (~3 s) inductive turn-to-turn and pancake-to-pancake quench cascades were clearly observed, confirming the dynamics of the basic self-protection mechanism for large-scale NI coils. The cascades are intended to rapidly distribute the stored magnetic energy uniformly throughout the magnet. Post-quench analysis and experiments, however, indicated the presence of localized damage. Data on the global temperature distribution of the magnet during quench compared with 3D FEA model predictions indicated non-uniform energy deposition within the winding pack. Post-quench electromagnetic tests after recooling to 24 K found that the electromagnetic response of the coil (current path through the winding pack, azimuthal current and magnetic field, and total pancake voltages) had been altered through the upper half of the winding pack.
The TFMC's post-mortem confirmed the presence of localized damage within the upper half of the magnet winding pack. Concentrated in only a few pancakes with the epicentre in Pancake #12, thermal damage was found in a tightly defined azimuthal arc in one of the tight corners, as shown in Fig. 8a. Due to the non-azimuthally symmetric shape, magnetic flux density is concentrated in the tight corners and creates sharply defined critical surfaces characterized by high ratios of I/Ic (or T/Tc). For these regions, the result is sustained current sharing, leading to rapid temperature rise and ultimately burning in these areas before the rapid inductive cascade can dissipate the magnetic stored energy throughout the magnet. The TFMC, which intentionally concentrated magnetic flux in the corners to achieve the programmatic DC performance goals, was inherently vulnerable to this effect. Indeed, the post-mortem confirmed that the engineering design, fabrication, and assembly of the coil were executed as designed, with no technical, operational, or other issue inducing the quench.
Importantly, this effect was predicted by several of the computational models developed in the TFMC program, as shown in Fig. 8b; however, the models had not converged to a single self-consistent scenario, requiring the experimental open circuit quench test. The extensive and unprecedented data set obtained from the experimental quench has been used to refine the simulation toolset. The advances in NI magnet physics understanding and validated models are now being used to design next-generation NI TF coils that maximize quench resilience through a multipronged approach. These developments are outside the scope of this paper but are expected to be published in the future.
## VII Conclusion
The TFMC achieved its programmatic goal of demonstrating a large-scale high-field magnet, achieving 20.1 T peak field-on-conductor with 40.5 kA of terminal current, 815 kN/m of Lorentz loading on the REBCO stacks, and almost 1 GPa of mechanical stress. Internal demountable pancake-to-pancake joints operated in the 0.5 to 2.0 n\(\Omega\) range at 20 K and in magnetic fields up to 12 T. The DC and AC electromagnetic performance of the magnet, predicted by new advances in high-fidelity computational models, was confirmed in experiment, while a novel cryogenic coolant scheme capable of the large heat removal required by compact tokamaks was validated. A critical experimental step was taken to assess the feasibility of passive self-protection against a quench in a fusion-scale NI TF coil. While the TFMC was intentionally not optimized for quench resiliency, the extensive data and validated models that it produced were an essential step towards this important objective.
As a result of the TFMC Program, design and fabrication of the NINT-based TF magnet, the VIPER-based CS and PF magnets, and new commercial magnets are underway. Efforts since the close of the TFMC Program have focused on optimizing the TF towards quench resiliency, utilizing the improved modelling capabilities, technical innovations, and subsequent experiments. Based on the results from the TFMC Program, VIPER-based cables known as PIT-VIPER for the CS and PF are being developed and qualified in small-scale and model coil programs, making use of substantially expanded fabrication and test facilities at CFS as well as the MIT PSFC. The TFMC Test Facility, upgraded to support fast-ramp capabilities, is presently being utilized in cable and model coil tests for the CS. In addition to SPARC, CFS is also now designing and building high-field REBCO magnets for other scientific and industrial purposes, such as the WHAM high-field mirror project [67].
The TFMC Program also had an unprecedented impact on the REBCO tape industry. One of the objectives of the program was to reduce the cost of REBCO and catalyze the evolution of the industry from small, bespoke tape deliveries to standardized, industrial volumes. Fig. 9 shows the impact of the TFMC Program on REBCO cost per meter. Starting with the initial MIT procurements from ten manufacturers in early 2018 through all of the CFS procurement for the TFMC in mid-2021, the TFMC was instrumental in reducing the average REBCO tape cost per meter by almost 40%. Procurements by CFS for the SPARC tokamak, now underway and expected to approach 10 million meters, should continue to decrease cost. Furthermore, the demanding REBCO specifications required by the TFMC presented a technical challenge; in response, some manufacturers proved capable of dramatically increasing both the performance and volume of REBCO at reduced cost [30].
Fig. 9: The decrease in normalized REBCO tape cost per meter during the time period spanning the TFMC Program.
Finally, the need for high volume I\({}_{c}\)(B,T,0) measurements as part of the NI modeling effort and the quality-assurance / quality-control program for the TFMC produced superior REBCO characterization equipment [68].
A final positive impact of the TFMC Program has been the significant acceleration of other high-field superconducting magnet efforts based on REBCO across a variety of fields. For example, the first non-planar stellarator coil from REBCO was designed, built, and tested with VIPER cable by MIT PSFC and Type One Energy, a private fusion company [69]. Another example is how the reduction in REBCO tape cost and demonstration of high-field REBCO magnets and ancillary technology is opening a path to next-generation collider experiments at the frontier of particle physics [70].
After a little over three years, the TFMC Program concluded in the fall of 2021, having successfully completed all of its programmatic and technical objectives. The state of REBCO magnet technology for fusion energy at the outset of the program, rooted in small-scale cable and coil prototypes and conceptual designs of full-scale systems, looked very different at its close. The first representative-scale REBCO TF coil - in fact, the first large-scale REBCO magnet ever constructed - had been designed, built, and successfully tested. Ancillary technology, from high-current REBCO cables to 50 kA REBCO current leads, from liquid-free 600 W supercritical helium cryogenic systems to advanced 3D FEA modeling of the coupled thermal, mechanical, and electrical phenomena in large-scale NI magnets, had been successfully demonstrated. REBCO, in terms of performance, volume, and cost, took its first significant step towards becoming an industrial commodity product procured at the ton-scale. Finally, and perhaps most importantly, a new pathway to achieving fusion energy on accelerated timescales in compact devices, as well as opportunities in other areas of science and industry that utilize strong magnetic fields, had been opened by expanding the state of large-scale high-field superconducting magnets into the 20 T frontier.
## Acknowledgment
The authors are indebted to the many individuals and entities that made the TFMC Program possible. Executing such a program makes clear the full extent to which a tremendous number of people contribute to ensure collective success. We thank everyone who participated in the endeavour.
_At MIT:_ Professor Ron Ballinger for timely advice on metal forgings; Makoto Takayasu and Jim Kelsey for technical input and reviews; Joseph Minervini for leadership in superconductivity; Corinne Cotta (PSFC) for overseeing a sizeable fraction of TFMC procurement; Mary Davenport, Tesla Myers, Vick Sangha, and Katherine Ware for fiscal oversight; Jennifer James for providing unceasing administrative support; David Parker for handling PSFC operations; Karen Cote for handling PSFC safety; Brandon Savage, Lee Berkowitz, Mark London, and Clarence Tucker for tireless IT support; Ed Lamere and Mitch Galanek from MIT EHS for overseeing test program safety; Bob Armstrong and Randy Field at the MIT Energy Initiative; the MIT Central Machine Shop for fabrication support; the MIT Central Utility Plant for keeping the water cold; and MIT Facilities for keeping the lights on.
_At CFS:_ Andrea Jarrett and Joseph Stiebler who oversaw CFS procurement, Steve Renter for handling CFS operations, Carolina Zimmerman for collaboration on fiscal oversight, and Samuel Morgan for technical support.
_External:_ Edward Moses for unwavering mentorship and project management guidance; Walter Fietz of Karlsruhe Institute of Technology for technical contributions and program review; Nick Strickland and Stuart Wimbush of Robinson Research Institute for collaboration on REBCO characterization; Nagato Yanagi of the National Institute for Fusion Sciences for collaboration on high-field cable testing; Satoshi Awaji, Tatsu Okada, and Arnaud Badel for opening their facilities and expertise to us in high-field REBCO I\({}_{c}\)(B,T,0) measurements; Jeff Mullins from Oliver Welding for perfect welds; to George Dodson (MIT), Joseph Minervini (MIT), Edward Moses, and Soren Prestemon (LBNL) for serving on the TFMC Safety and Operations Review committee.
_Vendors:_ The project would not have succeeded without the commitment and execution of our vendor partners, who delivered complex procurements on challenging schedules while working at-risk through the COVID-19 pandemic. Our sincerest gratitude to each of you.
|
2307.13842
|
CosSIF: Cosine similarity-based image filtering to overcome low
inter-class variation in synthetic medical image datasets
|
Crafting effective deep learning models for medical image analysis is a
complex task, particularly in cases where the medical image dataset lacks
significant inter-class variation. This challenge is further aggravated when
employing such datasets to generate synthetic images using generative
adversarial networks (GANs), as the output of GANs heavily relies on the input
data. In this research, we propose a novel filtering algorithm called Cosine
Similarity-based Image Filtering (CosSIF). We leverage CosSIF to develop two
distinct filtering methods: Filtering Before GAN Training (FBGT) and Filtering
After GAN Training (FAGT). FBGT involves the removal of real images that
exhibit similarities to images of other classes before utilizing them as the
training dataset for a GAN. On the other hand, FAGT focuses on eliminating
synthetic images with less discriminative features compared to real images used
for training the GAN. Experimental results reveal that employing either the
FAGT or FBGT method with modern transformer and convolutional-based networks
leads to substantial performance gains in various evaluation metrics. FAGT
implementation on the ISIC-2016 dataset surpasses the baseline method in terms
of sensitivity by 1.59% and AUC by 1.88%. Furthermore, for the HAM10000
dataset, applying FBGT outperforms the baseline approach in terms of recall by
13.75%, and with the sole implementation of FAGT, achieves a maximum accuracy
of 94.44%.
|
Mominul Islam, Hasib Zunair, Nabeel Mohammed
|
2023-07-25T22:37:10Z
|
http://arxiv.org/abs/2307.13842v2
|
CosSIF: Cosine similarity-based image filtering to overcome low inter-class variation in synthetic medical image datasets
###### Abstract
Crafting effective deep learning models for medical image analysis is a complex task, particularly in cases where the medical image dataset lacks significant inter-class variation. This challenge is further aggravated when employing such datasets to generate synthetic images using generative adversarial networks (GANs), as the output of GANs heavily relies on the input data. In this research, we propose a novel filtering algorithm called Cosine Similarity-based Image Filtering (CosSIF). We leverage CosSIF to develop two distinct filtering methods: Filtering Before GAN Training (FBGT) and Filtering After GAN Training (FAGT). FBGT involves the removal of real images that exhibit similarities to images of other classes before utilizing them as the training dataset for a GAN. On the other hand, FAGT focuses on eliminating synthetic images with less discriminative features compared to real images used for training the GAN. Experimental results reveal that employing either the FAGT or FBGT method with modern transformer and convolutional-based networks leads to substantial performance gains in various evaluation metrics. FAGT implementation on the ISIC-2016 dataset surpasses the baseline method in terms of sensitivity by 1.59% and AUC by 1.88%. Furthermore, for the HAM10000 dataset, applying FBGT outperforms the baseline approach in terms of recall by 13.75%, and with the sole implementation of FAGT, achieves a maximum accuracy of 94.44%.
**Keywords**: Medical Image Classification; Generative Adversarial Networks; Cosine Similarity; Swin-Transformer; ViT; ConvNeXt.
## 1 Introduction
Medical image analysis is a critical component of modern healthcare, enabling accurate diagnosis, effective treatment, and continuous monitoring of various diseases [1]. The advent of deep learning has created a new horizon in this field, delivering significant improvements in the early detection and classification of diseases. Numerous studies have highlighted the efficacy of deep learning in medical imaging [2, 3], leading to its widespread adoption in multiple medical domains, including radiology, dermatology, and ophthalmology [4, 5, 6]. In radiology, deep learning models have surpassed the diagnostic accuracy of radiologists in detecting breast cancer in mammography images [7]. Similarly, in dermatology, deep learning has demonstrated outstanding performance in identifying skin cancer from dermoscopy images [8]. In ophthalmology, deep learning models have been used to diagnose diabetic retinopathy and age-related macular degeneration from retinal images [9, 10].
One of the major challenges in developing deep learning models for medical image analysis is the limited availability of datasets. This issue is particularly significant in classification tasks, where obtaining a balanced dataset with high inter-class variation is difficult [11]. Inter-class variation refers to the differences in appearance between different classes of images [12]. The scarcity of a balanced dataset arises when obtaining a sufficient number of images from certain classes is challenging, resulting in an imbalanced distribution of classes. For example, Zech et al. [13] reported that training a deep learning model to detect pneumonia in chest radiographs was significantly impacted by the imbalanced nature of the dataset. The authors noted that obtaining a balanced dataset with a sufficient number of images of the positive class was difficult due to the low prevalence of pneumonia in the population. Therefore, low inter-class variation and class imbalance in medical image datasets significantly undermine the applicability of deep learning techniques in medical imaging.

Figure 1: The illustration depicts the pipeline of our research, which involves identifying the minority class from the dataset, oversampling through GAN and transformation techniques, the adoption of our proposed FBGT and FAGT methods to mitigate low inter-class variation by leveraging our novel CosSIF algorithm, and ultimately training classifiers using the augmented dataset. In the case of multiple minority classes, the pipeline is repeated until the classifier training stage. It is recommended to view the illustration in color.
The use of generative adversarial networks (GANs) has increasingly gained popularity in recent years to address class imbalances in datasets [14]. GANs are generative models that can produce synthetic images that closely resemble real images [15]. Several studies have reported success using GANs to address class imbalance in various domains, including medical image analysis [16]. The conventional approach for training a GAN involves utilizing every image of the minority class in a dataset [17]. However, when dealing with images that have low inter-class variation, this method can be problematic. This is because the GAN may generate synthetic images that visually resemble images from other classes, resulting in a dataset that is technically balanced but presents challenges for a neural network attempting to distinguish differences between classes. This is largely attributed to the lack of diversity in the synthetic images, which makes it difficult for a network to learn the discriminative features required for accurate classification.
In response to the challenge of GANs producing visually similar images with less discriminative features when trained on datasets with low inter-class variation, we propose a novel filtering algorithm called Cosine Similarity-based Image Filtering (CosSIF). We utilize CosSIF to introduce two filtering methods: Filtering Before GAN Training (FBGT) and Filtering After GAN Training (FAGT). The CosSIF algorithm is employed to determine the similarity between two sets of images. For instance, in a dataset consisting of two classes, A and B, CosSIF calculates the similarity of each image from class A with all the images in class B. The resulting similarity scores generated by CosSIF are then used by FBGT or FAGT to filter out the most similar or dissimilar images. FBGT involves removing real images from the minority class that exhibit visual resemblance to images from other classes before incorporating them into the training dataset of a GAN. This ensures that the GAN does not learn certain features that contribute to generating visually similar images. However, implementing FBGT requires retraining the GAN with the filtered images. In contrast, FAGT operates on a pre-trained GAN, where similarity calculations are conducted between the synthetic images generated by the GAN and the real images used for training the GAN. The architecture of our proposed algorithm and filtering methods is illustrated in Fig. 1. To evaluate the effectiveness of our approaches, we perform experiments using modern transformers such as Vision Transformer (ViT) and Swin Transformer, as well as convolutional-based networks like ConvNeXt. The key contributions of our work in this paper can be summarized as follows:
* We propose CosSIF, an image similarity calculation algorithm with cosine similarity as its backbone, capable of identifying visually similar images of a specific class to images of another class/classes in a dataset.
* We propose two filtering methods, FBGT and FAGT, to regulate GANs synthetic image generation capabilities in an effort to reduce low inter-class variability in medical image datasets.
* We propose a reproducible train-test split for the HAM10000 dataset, which can facilitate the comparison of our proposed methods with future experiments conducted by others.
* We experimentally demonstrate that the utilization of FAGT on the ISIC-2016 dataset surpasses the baseline method, MelaNet [18], in terms of sensitivity by 1.59% and AUC by 1.88%. Furthermore, the utilization of FAGT and FBGT exceeds the baseline method, IRv2+SA [19], in terms of recall by 13.72% and 13.75%, respectively.
The remaining sections of this paper are organized as follows: In Section 2, we discuss related studies on class imbalance and low inter-class variation in medical image classification. Moreover, we explore the usage of cosine similarity in computer vision. In Section 3, we present a comprehensive overview of our proposed CosSIF algorithm, as well as the FBGT and FAGT filtering methods. Furthermore, this section provides detailed descriptions of our selected GAN architecture and gives a brief overview of our chosen transformer and convolutional-based network models. In Section 4, we give an overview of the utilized datasets and present the selected configurations for classifier and GAN training. Subsequently, we perform experiments by employing our proposed algorithm and filtering methods. In Section 5, we conduct an ablation study of our experiments and compare the performance of our trained classifiers against strong baseline methods. Finally, Section 6 presents the conclusions and a discussion on possibilities for future work.
## 2 Related Work
Several studies have delved into a multitude of strategies to address class imbalance in medical image datasets. These approaches encompass oversampling techniques that involve either transformations or the implementation of generative adversarial networks (GANs) [20]. For instance, Zunair and Hamza [18] employed CycleGAN, a GAN model consisting of dual-generator and discriminator modules, to effectively increase the representation of the minority class in the dataset [21]. On the other hand, researchers such as Datta et al. [19] and Lan et al. [22] opted for alternative transformation methods, adjusting image rotation and focus to diversify the dataset without resorting to GANs. While these research papers used different methods to address the class imbalance issue, they did not propose any solutions to handle the low inter-class variation in the generated synthetic datasets. In this paper, we aim to address this issue by eliminating images that contain limited distinguishable features during the oversampling process.
Identifying visually similar images begins with mathematically calculating the similarity between two images, preferably in a higher dimension. Multiple formulas exist for this task, including Mean Square Error (MSE), Cosine Similarity, and Euclidean Distance. In the field of computer vision, the use of cosine similarity is fairly prevalent for calculating the similarity between images. Iliham et al. [23] used cosine similarity to compare the vector representation of a query image with the vector representations of all the images in the database. Similarly, Kaur et al. [24] proposed a content-based image retrieval (CBIR) system to assist dermatologists in diagnosing skin diseases. The system utilized various techniques such as feature extraction, similarity matching, and cosine similarity to retrieve the images most similar to the query image. Tao et al. [25] introduced a Saliency-Guided Constrained Clustering approach with cosine similarity (SGC3) for image cosegmentation. This method employs cosine similarity to calculate the feature similarity between data points and their cluster centroids. Given the extensive utilization of cosine similarity in similarity calculations, as demonstrated by the mentioned authors, we also choose to employ it as the foundation of our similarity calculation algorithm. Similar to these authors, we transform the two input images into vectors and calculate their cosine similarity. However, as we need to calculate the cosine similarities for thousands of images, it becomes necessary to reduce the pixel dimensions of the input images. Therefore, we optimize our algorithm by reducing the image resolution to 64x64 pixels. This adjustment significantly improves the computation time of our algorithm. Moreover, we utilize cosine similarity to compute the cosine distance, ensuring that the calculated distances are non-negative values.
## 3 Proposed Method
This section describes the main components of our proposed algorithm and methods. It begins with a comprehensive overview of the CosSIF algorithm, followed by a detailed explanation of the FBGT and FAGT methods, along with a comparison between them. Next, the architecture of the GAN is described, along with a hybrid augmentation process designed specifically to suit the outlined GAN architecture. Finally, the section concisely discusses the transformers and convolutional-based network models used for training classifiers.
### CosSIF
The details of the Cosine Similarity-based Image Filtering (CosSIF) algorithm are divided into several parts: class selection, image rescaling, similarity calculation, optimization, backbone, algorithm, filtering, and adaptability.
#### Class Selection
The CosSIF algorithm begins by selecting a class from a dataset, referred to as the target class. This target class acts as the anchor for similarity calculations, while the remaining classes are considered secondary classes. The target class is denoted as \(\textbf{\emph{T}}^{[\textbf{c}]}\), and a secondary class is denoted as \(\textbf{\emph{S}}^{[\textbf{c}]}\), where \(\textbf{\emph{c}}\) represents the class name. The total number of images in the target class is represented by \(\textbf{\emph{p}}\), and the total number of images in the secondary class is represented by \(\textbf{\emph{q}}\). Equations 1 and 2 represent all the images within the \(\textbf{\emph{T}}^{[\textbf{c}]}\) and \(\textbf{\emph{S}}^{[\textbf{c}]}\) classes, respectively.
\[\textbf{\emph{T}}^{[\textbf{c}]}=\{\textbf{\emph{t}}_{1}^{[\textbf{c}]}, \textbf{\emph{t}}_{2}^{[\textbf{c}]},\ldots,\textbf{\emph{t}}_{\textbf{\emph{p} }}^{[\textbf{c}]}\} \tag{1}\]
\[\textbf{\emph{S}}^{[\textbf{c}]}=\{\textbf{\emph{s}}_{1}^{[\textbf{c}]}, \textbf{\emph{s}}_{2}^{[\textbf{c}]},\ldots,\textbf{\emph{s}}_{\textbf{\emph{ q}}}^{[\textbf{c}]}\} \tag{2}\]
In the case of multiple secondary classes, they are represented as a set \(\textbf{\emph{X}}\). Equation 3 represents all the classes within the set \(\textbf{\emph{X}}\), where \(\textbf{\emph{m}}\) denotes the total number of secondary classes.
\[\textbf{\emph{X}}=\{\textbf{\emph{S}}^{[\textbf{c}_{1}]},\textbf{\emph{S}}^{[ \textbf{c}_{2}]},\ldots,\textbf{\emph{S}}^{[\textbf{c}_{m}]}\} \tag{3}\]
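A minimal sketch of this bookkeeping in Python, assuming the dataset is already available as a mapping from class names to lists of image file paths (the class names and file names below are placeholders, not taken from the datasets used in the paper):

```python
# Minimal sketch of target/secondary class selection. The dataset is assumed to
# be a dict mapping class names to lists of image paths; names are placeholders.
dataset = {
    "MEL": ["mel_001.jpg", "mel_002.jpg"],
    "NV":  ["nv_001.jpg", "nv_002.jpg"],
    "BCC": ["bcc_001.jpg"],
}

def select_classes(dataset, target_name):
    """Return the target class T and the set X of all secondary classes."""
    T = dataset[target_name]
    X = {name: paths for name, paths in dataset.items() if name != target_name}
    return T, X

T, X = select_classes(dataset, target_name="MEL")
```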
#### Image Rescaling
Upon selecting the target and secondary classes, all images belonging to these classes are resized to a smaller size, typically 64x64 pixels by default. This resizing step enables faster similarity calculations between images. To achieve even faster computation, the image size can be further reduced. Conversely, if a more detailed pixel-based computation is desired, the size can be increased beyond the default 64x64 pixels, albeit with an increase in computation time.
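One possible implementation of this step, using Pillow and NumPy (the helper name and the use of Pillow's default resampling are our choices; only the 64x64 default comes from the text):

```python
import numpy as np
from PIL import Image

def rescale(path, size=(64, 64)):
    """Load an image and downsample it so that similarity computation stays cheap."""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32)
```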
#### Similarity Calculation
Following the image rescaling process, the algorithm proceeds to perform the similarity calculation. It starts by selecting an image from the target class \(\textbf{\emph{T}}^{[\textbf{c}]}\) and calculates its similarity score, denoted as \(\textbf{\emph{I}}\), with all the other images in the secondary class \(\textbf{\emph{S}}^{[\textbf{c}]}\). If there are multiple secondary classes, the similarity measure is computed for all images across the entire set \(\textbf{\emph{X}}\).
During the similarity calculation, a record denoted as \(\mathbf{\eta}\) is maintained for each image in \(\mathbf{T^{[c]}}\), storing all the computed similarity scores along with the corresponding image identifiers (image and class names). The record \(\mathbf{\eta}\) is then sorted in descending order based on the individual similarity scores \(\mathbf{I}\). Therefore, for any image in \(\mathbf{T^{[c]}}\), the first entry in the record \(\mathbf{\eta}\) contains the maximum similarity score, denoted as \(\mathbf{I_{max}}\), along with its associated image identifiers.
The algorithm iterates through all images in the target class \(\mathbf{T^{[c]}}\) and records their similarities and corresponding image identifiers. Once the iteration process is complete, we obtain a set \(\mathbf{R}\) of records, as defined in Equation 4. It is important to note that the total number of records, denoted by \(\mathbf{z}\), in \(\mathbf{R}\), is equal to the total number of images, denoted by \(\mathbf{p}\), in \(\mathbf{T^{[c]}}\).
\[\mathbf{R=\{\eta_{1},\eta_{2},\ldots,\eta_{z}\}} \tag{4}\]
Finally, the set \(\mathbf{R}\) of records is sorted in ascending order based on the individual maximum similarity scores \(\mathbf{I_{\max}}\). Figure 2 provides a detailed illustration of the CosSIF algorithm.
### Optimization
To tackle the issue of the record size growing excessively large as the number of images in \(\mathbf{S^{[c]}}\) or \(\mathbf{X}\) increases, an optimization technique is introduced in the similarity calculation module. Rather than recording the similarity for each image in \(\mathbf{T^{[c]}}\) with every other image in \(\mathbf{S^{[c]}}\) or \(\mathbf{X}\), only a limited range of images with the highest similarity scores are recorded. However, the similarity calculation is still performed for all images in \(\mathbf{S^{[c]}}\) or \(\mathbf{X}\). This approach effectively reduces the size of the record and addresses the scalability concern.
### Backbone
The CosSIF algorithm analyzes images and computes their level of similarity. It utilizes cosine similarity to determine the degree of similarity between two images, as well as cosine distance to measure the positive distance between them.
The cosine similarity measures the cosine of the angle between two vectors. Let's assume that \(\mathbf{u}\) and \(\mathbf{v}\) are two arbitrary vectors. The cosine similarity between the vectors is defined by Equation 5, where \(\mathbf{u\cdot v}\) represents the dot product of \(\mathbf{u}\) and \(\mathbf{v}\), and \(\|\mathbf{u}\|\times\|\mathbf{v}\|\) denotes the product of their magnitudes. For a more detailed visual representation, refer to Figures 3 and 4.
Figure 2: The illustration portrays the step-by-step process of the CosSIF algorithm. It commences by selecting the target class \(\mathbf{T^{[c]}}\) and a set \(\mathbf{X}\) comprising secondary classes. Subsequently, the images within the selected classes undergo resizing to 64x64 pixels. Following this, similarity scores are calculated for each image in \(\mathbf{T^{[c]}}\) by comparing them to all images in \(\mathbf{X}\). For each image in \(\mathbf{T^{[c]}}\), a record \(\mathbf{\eta}\) is created to store individual similarity scores \(\mathbf{I}\) and their corresponding image identifiers. The record \(\mathbf{\eta}\) is then sorted in descending order, with the first entry representing the maximum similarity score \(\mathbf{I_{max}}\). Once the similarity calculation for all images in \(\mathbf{T^{[c]}}\) has been completed, the resulting set of records \(\mathbf{R}\) is obtained. Finally, \(\mathbf{R}\) is sorted in ascending order based on the maximum similarity score \(\mathbf{I_{max}}\), thereby concluding the similarity calculation process.
\[\text{cosine similarity}\ (\mathbf{u},\mathbf{v})=\cos(\theta)=\frac{\mathbf{u}\cdot\mathbf{v}}{\|\mathbf{u}\|\times\|\mathbf{v}\|} \tag{5}\]
While cosine similarity measures the similarity between two vectors based on their angles, it can occasionally yield negative values. This poses a challenge when comparing it with other computed similarity scores. To overcome this issue, the cosine distance is calculated by subtracting the cosine similarity from 1. This calculation ensures that the similarity score is a non-negative value. The cosine distance is defined by Equation 6.
\[\text{cosine distance}=1\text{ - cosine similarity} \tag{6}\]
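As a sketch of this backbone (resize to 64x64, normalize by 255, flatten to a 12288-dimensional vector as described in Figure 3, then apply Equations 5 and 6), assuming images are loaded with Pillow and NumPy; the function names are illustrative, not from the paper's released code:

```python
import numpy as np
from PIL import Image

def image_to_vector(path, size=64):
    # Convert to an RGB array, normalize pixel values by 255, and flatten
    # into a single 64*64*3 = 12288-dimensional vector.
    img = Image.open(path).convert("RGB").resize((size, size))
    return np.asarray(img, dtype=np.float64).flatten() / 255.0

def cosine_similarity(u, v):
    # Eq. 5: cos(theta) = (u . v) / (||u|| * ||v||)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosine_distance(u, v):
    # Eq. 6: cosine distance = 1 - cosine similarity
    return 1.0 - cosine_similarity(u, v)
```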
#### Algorithm
The detailed procedure of the CosSIF algorithm is presented in Algorithm 1. The input consists of the target class \(\mathbf{T^{[c]}}\) and either a secondary class \(\mathbf{S^{[c]}}\) or a set \(\mathbf{X}\) of secondary classes. The output is a set \(\mathbf{R}\) of records, sorted in ascending order, which holds the computed similarity scores for each image in \(\mathbf{T^{[c]}}\) compared to all other images in \(\mathbf{S^{[c]}}\) or \(\mathbf{X}\).
```
Require: Target class \(\mathbf{T^{[c]}=\{t^{[c]}_{1},t^{[c]}_{2},\dots,t^{[c]}_{p}\}}\), and either a secondary class \(\mathbf{S^{[c]}=\{s^{[c]}_{1},s^{[c]}_{2},\dots,s^{[c]}_{q}\}}\) or a set \(\mathbf{X}\) of secondary classes.
Ensure: Set \(\mathbf{R}\) of records, sorted in ascending order by \(\mathbf{I_{max}}\).
1: \(\mathbf{R=\{\}}\)
2: Resize all images to 64x64.
3: if a single secondary class \(\mathbf{S^{[c]}}\) is given then
4:   \(\mathbf{X=\{S^{[c]}\}}\)
5: end if
6: for \(\mathbf{t^{[c]}_{i}}\) in \(\mathbf{T^{[c]}}\) do
7:   \(\mathbf{\eta_{i}=\{\}}\)
8:   for \(\mathbf{S^{[c]}}\) in \(\mathbf{X}\) do
9:     for \(\mathbf{s^{[c]}_{j}}\) in \(\mathbf{S^{[c]}}\) do
10:      Calculate the cosine similarity of \(\mathbf{t^{[c]}_{i}}\) and \(\mathbf{s^{[c]}_{j}}\).  (Eq. 5)
11:      Calculate the cosine distance.  (Eq. 6)
12:      Similarity score \(\mathbf{I}\) = cosine distance
13:      Append \(\mathbf{\{s^{[c]}_{j},I\}}\) to \(\mathbf{\eta_{i}}\)
14:    end for
15:  end for
16:  \(\mathbf{\eta_{i}=\{t^{[c]}_{i},\{\{s^{[c]}_{1},I\},\{s^{[c]}_{2},I\},\dots\}\}}\)
17:  Sort the entries of \(\mathbf{\eta_{i}}\) in descending order by similarity score.
18:  \(\mathbf{\eta_{i}=\{t^{[c]}_{i},\{\{s^{[c]}_{max},I_{max}\},\dots\}\}}\)
19:  Append \(\mathbf{\eta_{i}}\) to set \(\mathbf{R}\).
20: end for
21: Sort \(\mathbf{R}\) in ascending order by \(\mathbf{I_{max}}\).
22: \(\mathbf{R=\{\eta_{1},\eta_{2},\dots,\eta_{z}\}}\)
```
**Algorithm 1** CosSIF
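Algorithm 1 can be sketched compactly in Python by reusing the image_to_vector and cosine_distance helpers from the earlier snippet; secondary_classes is assumed to map each class name to a list of image paths, and the record layout is illustrative rather than the paper's exact data structure:

```python
def cossif(target_paths, secondary_classes):
    records = []                                         # the set R
    for t_path in target_paths:                          # each image in T^[c]
        t_vec = image_to_vector(t_path)
        eta = []                                         # record for this target image
        for cls_name, paths in secondary_classes.items():
            for s_path in paths:                         # each image in S^[c]
                score = cosine_distance(t_vec, image_to_vector(s_path))  # similarity score I
                eta.append((score, s_path, cls_name))
        eta.sort(key=lambda e: e[0], reverse=True)       # descending by I; eta[0] holds I_max
        records.append((t_path, eta))
    records.sort(key=lambda r: r[1][0][0])               # ascending by I_max
    return records
```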
#### Filtering
Once the similarity calculation is completed, the filtering process is initiated. To recap, the similarity calculation generates a set \(\mathbf{R}\) of records, as shown in Equation 4, which is subsequently sorted in ascending order. For each record \(\mathbf{\eta}\) in \(\mathbf{R}\), \(\mathbf{I}_{max}\) represents the maximum similarity score, and the associated image identifiers are used to identify the image in \(\mathbf{T^{[c]}}\) that achieves this similarity score with an image in \(\mathbf{S^{[c]}}\) or \(\mathbf{X}\).
Since the set \(\mathbf{R}\) is sorted in ascending order by \(\mathbf{I}_{max}\), the first record, \(\mathbf{\eta_{1}}\), has the lowest \(\mathbf{I}_{max}\), while the last record, \(\mathbf{\eta_{z}}\), has the highest. Therefore, the process of filtering out
Figure 4: The graph depicts two arbitrary vectors \(\mathbf{u}\) and \(\mathbf{v}\), representing two images, where the cosine of the angle \(\mathbf{\theta}\) between these vectors represents the cosine similarity value.
Figure 3: The illustration depicts the process of calculating the cosine similarity between two images. It begins by dividing a color image into its three RGB (red, green, blue) layers, where each layer is represented as a square matrix. Each layer contains different pixel values, which are then normalized by dividing each pixel by 255. Next, all layers are flattened into a vector. Considering that each image has a resolution of 64x64 pixels and consists of 3 layers, the resulting vector dimension becomes 1x12288. This procedure is repeated for both **Image 1** and **Image 2**, resulting in two vectors, \(\mathbf{u}\) and \(\mathbf{v}\), respectively. The cosine similarity between these two vectors is then calculated.
the most similar images begins with \(\mathbf{\eta_{z}}\). From there, the filtering process gradually moves up the list of \(\mathbf{\eta}\) in \(\mathbf{R}\), with each subsequent image having a lower \(\mathbf{I_{max}}\) than the previous one. Conversely, to filter out the most dissimilar images, the filtering process starts from \(\mathbf{\eta_{1}}\). In this case, the filtering process gradually moves down the list of \(\mathbf{\eta}\) in \(\mathbf{R}\), with each subsequent image having a higher similarity score than the previous one.
In summary, the CosSIF algorithm enables the filtering of images based on their cosine similarity, identifying the most similar or dissimilar ones. However, the algorithm doesn't directly perform the filtering task; instead, it generates the essential information needed for filtering. The actual filtering is accomplished using our proposed FBGT and FAGT methods.
#### Adaptability
The CosSIF algorithm has been designed with future reusability in mind, allowing it to be applied to various tasks. To facilitate this adaptability, we have incorporated certain features that may not be immediately useful but could prove valuable in future works. One such feature relates to the similarity calculation process, where we provide the option to restrict the range of saved records.
In the current implementation of our research, we have chosen to limit this range to only 1. This means that for each image in a selected target class, there is at most one image from the secondary class that is most similar. This suffices for our purposes, as the generated set \(\mathbf{R}\) of sorted records obtained from the CosSIF algorithm already gives us the most similar or dissimilar images that can be filtered from the target class.
However, let's consider a scenario where it is necessary to know all possible similarities that each image in the target class shares with other images in the secondary class. In such cases, we can easily modify and reuse the CosSIF algorithm by adjusting the similarity range. By increasing the range, we can obtain the desired results and retrieve all the relevant similarities for each image.
Therefore, the flexibility of the CosSIF algorithm allows it to be applied to a variety of tasks, and with slight modifications, it can accommodate different requirements in future applications.
### FBGT
The Filtering Before GAN Training (FBGT) method aims to eliminate real images from the minority class that resemble images from other classes before employing them as the training dataset for a GAN. The FBGT method commences by selecting the target and secondary classes. Here, the target class \(\mathbf{T^{[c]}}\) represents the minority class within a given dataset, and the remaining classes are collectively referred to as a set \(\mathbf{X}\) of secondary classes. Then, the CosSIF algorithm is employed to calculate the similarity scores for each image in \(\mathbf{T^{[c]}}\) with all other images in \(\mathbf{X}\). CosSIF generates a set \(\mathbf{R}\) of records, which is sorted in ascending order.
Following the completion of the similarity calculation and the generation of the set \(\mathbf{R}\) of records, the subsequent step in FBGT focuses on filtering images from the target class \(\mathbf{T^{[c]}}\). In this step, the number of images to be filtered from \(\mathbf{T^{[c]}}\) is determined by a hyperparameter denoted as \(\mathbf{\alpha}\), where \(\mathbf{0<\alpha<1}\). The value of \(\mathbf{\alpha}\) is calculated using the following formula:
\[\mathbf{\alpha=\frac{100-\%\text{ of images to be removed}}{100}} \tag{7}\]
The formula for calculating the number of filtered images, \(\mathbf{f}\), is given by:
\[\mathbf{f=\lceil p\times\alpha\rceil} \tag{8}\]
where the symbol \(\lceil\ \rceil\) represents the ceiling function, which rounds up the result of the multiplication to the nearest integer. The value of \(\mathbf{f}\) represents the threshold point: the number of images retained in the filtered target class after the most similar images have been removed from \(\mathbf{T^{[c]}}\). The newly filtered target class, \(\mathbf{T^{[c]}_{filtered}}\), composed of \(\mathbf{f}\) images, is given by:
\[\mathbf{T^{[c]}_{filtered}=\{t^{[c]}_{1},t^{[c]}_{2},\ldots,t^{[c]}_{f}\}} \tag{9}\]
\(\mathbf{T^{[c]}_{filtered}}\) is the output of the FBGT method. It contains the newly filtered images that are going to be used for oversampling.
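A minimal sketch of the FBGT filtering step, assuming sorted_records is the ascending-sorted set \(\mathbf{R}\) produced by the CosSIF sketch above (each record starting with its target-class image); the function and argument names are illustrative, not from the paper's code:

```python
import math

def fbgt_filter(sorted_records, removal_pct):
    # Eq. 7: alpha = (100 - % of images to be removed) / 100
    alpha = (100 - removal_pct) / 100
    # Eq. 8: f = ceil(p * alpha), where p equals the number of target-class images (z = p)
    p = len(sorted_records)
    f = math.ceil(p * alpha)
    # R is sorted ascending by I_max, so keeping the first f records drops the images
    # with the highest I_max, i.e. those FBGT treats as most similar to other classes.
    return [record[0] for record in sorted_records[:f]]
```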
### FAGT
The Filtering After GAN Training (FAGT) method calculates similarities between the synthetic images generated by a GAN and real images of the class on which the GAN was trained. In this method, the target class \(\mathbf{T^{[c]}}\) consists of images that are synthetically generated via a trained GAN, while the secondary class \(\mathbf{S^{[c]}}\), composed of real images, serves as the training dataset for that GAN. It is important to note that in FAGT, there is no possibility of having a set of secondary classes. Following the selection of \(\mathbf{T^{[c]}}\) and \(\mathbf{S^{[c]}}\), the FAGT method utilizes the CosSIF algorithm, leading to the generation of a set \(\mathbf{R}\) of records.
In the FAGT method, the process of filtering images from the target class \(\mathbf{T^{[c]}}\) becomes a bit more complex compared to FBGT. Unlike FBGT, where the number of images in the filtered target class, \(\mathbf{f}\), is not user-defined but rather calculated using Equations 7 and 8, in FAGT the value of \(\mathbf{f}\) is determined by the user. This value represents both the number of images in the filtered target class \(\mathbf{T^{[c]}_{filtered}}\) and the number of images required for oversampling. In FAGT, \(\mathbf{f}\) is considered a constant value. To control the output of the filtering process, the hyperparameter \(\mathbf{\alpha}\) is used to calculate the value of \(\mathbf{p}\), which denotes the number of synthetic images generated by the GAN. The formula for calculating \(\mathbf{p}\) is given by:
\[\mathbf{p=\left\lceil\frac{\mathbf{f}}{\mathbf{\alpha}}\right\rceil} \tag{10}\]
While \(\mathbf{p}\) is fixed in the FBGT method, it is variable in the FAGT method. This is because the quality of the synthetic images produced by a GAN can vary, leading to changes in the value of \(\mathbf{p}\). If a GAN produces synthetic images with fewer discriminative features compared to the real images, more images need to be filtered out from a larger set of images, resulting in an increased value of \(\mathbf{p}\). Conversely, if a GAN is capable of generating images with similar discriminative features compared to real images, then the value of \(\mathbf{p}\) decreases. This implies that fewer images need to be filtered out from a smaller set of synthetic images. Thus, in the FAGT method, \(\mathbf{p}\) behaves more like a hyperparameter.
Figure 5 depicts this dependency in two setups, namely \(\mathbf{\Psi_{1}}\) and \(\mathbf{\Psi_{2}}\). In the first setup, \(\mathbf{\Psi_{1}}\), it is assumed that the GAN produces more random images with significant deviations from the real images, thereby necessitating a higher filtering requirement. Conversely, the second setup, \(\mathbf{\Psi_{2}}\), assumes that the GAN generates synthetic images that closely resemble the real images, resulting in a decreased need for filtering.
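A minimal sketch of the FAGT quantities, under the same record-layout assumption as the earlier snippets; here the last \(\mathbf{f}\) records of the ascending-sorted \(\mathbf{R}\) are kept, i.e. the synthetic images that most closely resemble the real training images:

```python
import math

def fagt_num_synthetic(f, alpha):
    # Eq. 10: p = ceil(f / alpha) synthetic images must be generated by the GAN.
    return math.ceil(f / alpha)

def fagt_filter(sorted_records, f):
    # R is sorted ascending by I_max; the last f records have the highest I_max,
    # so they are the synthetic images retained for oversampling.
    return [record[0] for record in sorted_records[-f:]]
```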
### Binary vs Multiclass Classification
In both the FBGT and FAGT methods, it is possible to eliminate similar and dissimilar images from a calculated set \(\mathbf{R}\) of records. In binary classification, the removal of similar images from one class and dissimilar images from another class enhances the distinction between the two classes, resulting in improved filtering outcomes. However, in multiclass classification, it is crucial to avoid removing dissimilar images, as eliminating these images in relation to all other classes can lead to the loss of images that possess distinct features essential for accurate classification. Therefore, for multiclass classification, it is recommended to eliminate only the similar images.
### FBGT vs FAGT
The FBGT and FAGT methods are two approaches that produce more robust oversampled datasets and reduce the issue of low inter-class variation. However, there are differences between these two methods that need to be considered.
The FBGT method requires retraining the GAN with the newly filtered dataset, which can be a time-consuming process. Therefore, it may not be practical to use this method when dealing with multiclass classification problems that require oversampling for several classes. In contrast, the FAGT method can be applied to a pre-trained GAN, making it faster than the FBGT as it does not require retraining. However, the FAGT method requires filtering more images since \(\mathbf{p}\) is a variable for this method, resulting in longer computation time for filtering compared to the FBGT.
In the FBGT method, both the target class and secondary class consist of real images. This implies that the set \(\mathbf{R}\) of records obtained after the similarity calculation is universal and can be utilized later by anyone to filter out images. However, in the FAGT method, the target class is composed of synthetic images randomly generated by a trained GAN, which necessitates the recalculation of similarity each time the method is employed. As a result, the FBGT method is more efficient than the FAGT method when it comes to filtering images.
Furthermore, it is essential to address a potential question regarding the FAGT method. Although similarities between real images of a specific class and real images from other classes are not directly calculated, the effectiveness of the method lies in the context of medical image datasets. Typically, all classes in such datasets consist of similar types of images (e.g., skin lesions, CT scans) but at different stages. When the GAN generates images that do not distinctly resemble the real images used during its training, these synthetic images may end up closely resembling images from other classes. As a result, the removal of synthetic images that deviate from the real images serves as a multipurpose filtering method. On one hand, it contributes to generating images with greater discriminative features, while on the other hand, it effectively addresses the issue of low inter-class variation.
### GAN Architecture
The FBGT and FAGT methods are independent of GAN architecture, meaning that the filtering process remains constant regardless of any selected GAN framework. However, the choice of GAN architecture is often determined by the total image size in the dataset. Typically, training a GAN requires a large number of images. However, in medical image analysis, the minority class often consists of an extremely low volume of images, posing a challenge for the GAN to converge during training. Consequently, we tend to choose a GAN architecture that performs well with a small set of images.
In this paper, we use StyleGAN2-ADA as our GAN architecture. StyleGAN2-ADA builds upon the improvements of StyleGAN2, with the key enhancement being the addition of adaptive discriminator augmentation (ADA) [26].
Figure 5: A visual representation of the correlation between the total number of images, \(\mathbf{p}\), in the target class \(\mathbf{T^{[c]}}\), and the total number of images, \(\mathbf{f}\), in the filtered target class \(\mathbf{T^{[c]}_{filtered}}\), employing the FAGT method, demonstrates that the extent of necessary filtering is influenced by the GAN’s capacity to generate synthetic images that closely resemble the real images used to train the GAN.
This method allows for the generation of high-quality, diverse images even when the dataset is small or imbalanced, making it a valuable tool in medical image analysis and other applications where training data may be limited. StyleGAN2-ADA consists of two main parts: the generator and the discriminator. The generator's objective is to create realistic images, while the discriminator's goal is to differentiate between real and generated images.
#### Generator
The generator \(\mathbf{G}\) consists of several key components: the mapping network, the synthesis network, and the Adaptive Instance Normalization (AdaIN) layers. The mapping network \(\mathbf{h}\) takes a latent code \(\mathbf{z}\) and maps it to a style vector \(\mathbf{w}\):
\[\mathbf{w}=\mathbf{h(z)} \tag{11}\]
The synthesis network \(\mathbf{g}\) takes the style vector \(\mathbf{w}\) and a noise tensor \(\mathbf{n}\), and generates an image \(\mathbf{x}\):
\[\mathbf{x}=\mathbf{g(w,n)} \tag{12}\]
Adaptive instance normalization (AdaIN) [27] is used to modulate the feature maps in the synthesis network with the style vector \(\mathbf{w}\). Given a feature map \(\mathbf{F}\) and the style vector \(\mathbf{w}\), AdaIN produces a styled feature map \(\mathbf{F^{\prime}}\):
\[\mathbf{F^{\prime}}=\text{AdaIN}(\mathbf{F,w}) \tag{13}\]
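Equation 13 can be sketched in a few lines of PyTorch; here the style vector \(\mathbf{w}\) is assumed to have already been mapped (by learned affine layers) to per-channel scale and bias tensors, and the names are illustrative rather than StyleGAN2-ADA's actual implementation:

```python
import torch

def adain(feat, style_scale, style_bias, eps=1e-5):
    # feat: feature maps of shape (N, C, H, W); style_scale, style_bias: (N, C, 1, 1).
    # Normalize each channel to zero mean and unit variance, then modulate with the style.
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.std(dim=(2, 3), keepdim=True) + eps
    return style_scale * (feat - mean) / std + style_bias
```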
#### Discriminator
The discriminator \(\mathbf{D}\) is a convolutional neural network that classifies whether an input image \(\mathbf{x}\) is real or generated. It takes an image \(\mathbf{x}\) as input and outputs a scalar probability value \(\mathbf{y}\):
\[\mathbf{y}=\mathbf{D(x)} \tag{14}\]
#### Adaptive Discriminator Augmentation
In StyleGAN2-ADA, the discriminator is trained on both the real images and their augmented counterparts. The augmentation function \(\mathbf{V}\) takes an image \(\mathbf{x}\) and an augmentation parameter \(\mathbf{\mu}\) to produce an augmented image \(\mathbf{x^{\prime}}\):
\[\mathbf{x^{\prime}}=\mathbf{V(x,\mu)} \tag{15}\]
#### Loss
Both the generator and discriminator losses are based on the binary cross-entropy loss function. The generator seeks to minimize its loss, which represents the difference between the discriminator's output on generated images and the target output. In essence, the generator aims to maximize the probability of the discriminator classifying the generated images as real:
\[\mathbf{L_{G}}=-\mathbb{E}_{\mathbf{z}\sim\mathbf{y(z)}}[\log\mathbf{D(g(h(z),n))}] \tag{16}\]
Here, \(\mathbf{z}\) is a random latent code sampled from the prior distribution \(\mathbf{y(z)}\), \(\mathbf{h}\) is the mapping network, \(\mathbf{g}\) is the synthesis network, \(\mathbf{n}\) is the noise tensor, and \(\mathbf{D}\) is the discriminator. \(\mathbb{E}\) is expectation, which represents the average value of the expression inside the brackets.
The discriminator aims to minimize its loss, which consists of two parts: the difference between the discriminator's output on real images and the target output, and the difference between the discriminator's output on generated images and the target output:
\[\mathbf{L_{D}}=-\mathbb{E}_{\mathbf{x}\sim\mathbf{y_{\text{data}}(x)}}[\log\mathbf{D(x)}]-\mathbb{E}_{\mathbf{z}\sim\mathbf{y(z)}}[\log(\mathbf{1-D(g(h(z),n))})] \tag{17}\]
Here, \(\mathbf{x}\) is an image sampled from the true data distribution \(\mathbf{y_{\text{data}}(x)}\), and the other variables have the same meaning as in the generator loss.
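Equations 16 and 17 correspond to the standard binary cross-entropy GAN losses; a compact PyTorch sketch, assuming the discriminator returns raw logits, is shown below. It mirrors the equations above rather than the exact StyleGAN2-ADA training code.

```python
import torch
import torch.nn.functional as nnf

def generator_loss(D, fake_images):
    # L_G = -E[log D(g(h(z), n))]: push the discriminator to label fakes as real.
    logits = D(fake_images)
    return nnf.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

def discriminator_loss(D, real_images, fake_images):
    # L_D = -E[log D(x)] - E[log(1 - D(g(h(z), n)))]
    real_logits = D(real_images)
    fake_logits = D(fake_images.detach())      # do not backpropagate into the generator
    loss_real = nnf.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    loss_fake = nnf.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    return loss_real + loss_fake
```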
### Hybrid Augmentation
Typically, oversampling via a GAN involves merging the real images with generated synthetic images after the GAN training. However, in some datasets, certain classes have an extremely low volume of training images. This makes the learning process of a GAN considerably more difficult, even when using StyleGAN2-ADA.
The architecture of StyleGAN2-ADA includes a component known as adaptive discriminator augmentation (ADA), which is vital for training with a small number of images. During training, this component takes an input image \(\mathbf{x}\) and produces an augmented image \(\mathbf{x^{\prime}}\), as expressed in Equation 15. To increase the variability of \(\mathbf{x^{\prime}}\), it's important for the training dataset to contain sufficient variation. Therefore, we perform a minor oversampling of the minority classes by applying various transformations to the images before using them as a training dataset for StyleGAN2-ADA. These transformations include adjusting the focus, rotating the images, shifting their positions, and flipping them horizontally or vertically. The oversampling via transformations improves the variability of \(\mathbf{x^{\prime}}\) during the process, which enhances the quality of the synthetically generated images produced by StyleGAN2-ADA.
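The transformation-based oversampling can be sketched with torchvision; the specific parameter values below are illustrative assumptions, not the settings reported in the paper.

```python
from torchvision import transforms

# Illustrative pipeline for minor oversampling of minority classes before GAN training:
# rotation, small translations, horizontal/vertical flips, and a mild focus adjustment.
oversample_tf = transforms.Compose([
    transforms.RandomRotation(degrees=20),
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.GaussianBlur(kernel_size=3),
])
```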
### Model Architectures
To assess the efficacy of our FBGT and FAGT methods, we employ pre-trained transformer and convolutional-based models. We train these models using the oversampled dataset, incorporating the FBGT and FAGT methods in some instances and excluding them in others for comparison purposes. For our experiments, we utilize pre-trained Swin Transformer [28], Vision Transformer (ViT) [29], and ConvNeXt [30] models. We fine-tune these models by adapting their output layers to accommodate the classes within our dataset.
The Swin Transformer, proposed by Liu et al. [28], is a hierarchical transformer model specifically designed for computer vision tasks. It introduces a local representation
to capture both local and global context, using a shifted window-based self-attention mechanism and a hierarchical architecture. The Vision Transformer (ViT), introduced by Dosovitskiy et al. [29], applies the transformer architecture to computer vision tasks by dividing input images into patches and processing them as tokens with positional encodings. It has shown excellent performance on large-scale datasets but is known to be data-hungry. ConvNeXt, a model introduced by Liu et al. [30], demonstrates the potential of pure ConvNets by modernizing a standard ResNet to compete with the performance of Vision Transformers.
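One way to load such pre-trained backbones and adapt their output layers is via the timm library; the exact model variants below are assumptions, since the paper does not state which sizes were used.

```python
import timm

NUM_CLASSES = 7  # e.g. the seven HAM10000 classes; use 2 for ISIC-2016

# timm replaces the classification head automatically when num_classes is given.
swin = timm.create_model("swin_base_patch4_window7_224", pretrained=True, num_classes=NUM_CLASSES)
vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=NUM_CLASSES)
convnext = timm.create_model("convnext_base", pretrained=True, num_classes=NUM_CLASSES)
```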
## 4 Experiments
In this section, extensive experiments are carried out to assess the performance of the proposed FBGT and FAGT methods, comparing them with strong baseline methods. The source code required to reproduce the experimental results is available at: [https://github.com/mominul-ssv/comsif](https://github.com/mominul-ssv/comsif)
### Datasets
The effectiveness of FBGT and FAGT methods is analyzed using two datasets: ISIC-2016 and HAM10000. The ISIC-2016 dataset is employed to test both methods for binary classification, while the HAM10000 dataset is used for multiclass classification. Both datasets exhibit significant class imbalance with low inter-class variation, making them ideal choices for testing the filtering methods.
#### ISIC-2016
The ISIC-2016 Task 3 dataset contains 900 training and 379 testing dermoscopic images for skin lesion analysis and melanoma classification. With 173 malignant and 727 benign lesions in the training set, and 75 malignant and 304 benign lesions in the testing set, the dataset exhibits class imbalance [31]. The images of malignant and benign lesions exhibit similar appearances, leading to low inter-class variation within the dataset, as observed in Figure 6 (a). Figure 7 depicts the number of images in each class along with the train-test split.
#### HAM10000
The HAM10000 dataset consists of 10,015 clinical images of skin lesions, sourced from various locations worldwide and annotated by dermatologists. The dataset includes seven classes of skin lesions: actinic keratoses and intraepithelial carcinoma (akiec) with 327 images, basal cell carcinoma (bcc) with 514 images, benign keratosis-like lesions (bkl) with 1,099 images, dermatofibroma (df) with 115 images, melanoma (mel) with 1,113 images, melanocytic nevi (nv) with 6,705 images, and vascular lesions (vasc) with 142 images [32]. This distribution results in a highly unbalanced dataset. Moreover, the images also exhibit low inter-class variation, as can be seen in Figure 6 (b).
The HAM10000 dataset does not include a predefined train-test split. This is particularly problematic as with no pre-defined split, it is difficult to compare the performance of our work with already existing state-of-the-art approaches. Some research papers focus on demonstrating high accuracy, which can sometimes involve manipulating the associated test data. Moreover, most work does not provide a reproducible train-test split, leading to difficulties in verifying the reported results.
Therefore, to address this issue, we employ a reproducible train-test split in our work. We partition the dataset into 9,187 images for training and 828 for testing. This partitioning process entails removing duplicates from the test set, which are composed of identical images with slight visual augmentations. Consequently, the training set contains these augmented images, while the test set
Figure 6: The visual depictions of images from various classes indicate low inter-class variation both in the ISIC-2016 dataset, as shown in (a), and the HAM10000 dataset, as shown in (b).
Figure 7: The bar plot displays the train-test split and the number of samples in each class of the ISIC-2016 dataset, where substantial class imbalance is present in both the training and testing datasets.
is devoid of different augmentations of the same images. To perform the split, we use the scikit-learn train-test split library [33], providing a random state of 42 as an input parameter. This approach ensures consistency in the images within the training and testing sets for future experiments. Figure 8 depicts the quantity of images in each class along with the distribution of the train-test split.
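A sketch of the reproducible split, assuming the de-duplicated HAM10000 images and labels are already loaded into arrays; whether stratification was applied is an assumption on our part.

```python
from sklearn.model_selection import train_test_split

# `images` and `labels` are placeholders for the de-duplicated HAM10000 samples.
train_imgs, test_imgs, train_labels, test_labels = train_test_split(
    images, labels,
    test_size=828,      # 828 test images, as reported
    stratify=labels,    # keep class proportions comparable (assumed)
    random_state=42,    # fixed seed for reproducibility
)
```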
### Preprocessing
In our experiment, we resize images from both datasets to a resolution of 256x256 pixels to ensure that the minority classes meet the criteria for being utilized as training datasets for the GAN. We opt to oversample both benign and malignant classes in the ISIC-2016 dataset and oversample all classes, except for melanocytic nevi (nv), in the HAM10000 dataset, as it already has a sufficient number of real images available and is the majority class.
### Dataset Filtering
The dataset filtering process involves the utilization of the FBGT and FAGT methods on both the ISIC-2016 and HAM10000 datasets. Furthermore, as outlined in the methodology, we apply hybrid augmentation techniques to both datasets. This involves oversampling images through transformations, while simultaneously training GANs to generate synthetic images. We conduct a total of three primary experiments in our study. Experiment I utilizes the FBGT method, Experiment II employs the FAGT method, and Experiment III does not employ either the FBGT or FAGT methods.
#### Experiment I
In Experiment I, we apply the FBGT method to both benign and malignant classes of the ISIC-2016 dataset and the akiec, bcc, bkl, df, mel, and vasc classes of the HAM10000 dataset. This involves conducting similarity calculations using CosSIF and filtering real images from the GAN training dataset that exhibit the highest similarity scores with images from other classes. We then perform minor oversampling via transformation with the newly filtered images associated with each class, followed by conducting GAN training for all the selected classes using the associated filtered GAN training datasets. Finally, we employ the trained GANs to generate synthetic images for each selected class, consequently resolving the issues of low inter-class variation. For a visual representation of this process, please refer to Fig. 1.
In the FBGT method, we have a hyperparameter called \(\mathbf{\alpha}\) that determines the number of images to be filtered from the real images. Instead of randomly selecting a number for filtering, we consider three specific values for \(\mathbf{\alpha}\). The variation in the number of filtered images for different \(\mathbf{\alpha}\) values when using the FBGT method can be observed in Table 2 and 1 for the HAM10000 and ISIC-2016 datasets, respectively.
#### Experiment II
In Experiment II, we apply the FAGT method to pre-trained GANs that are individually trained using the real images from the benign and malignant classes of ISIC-2016, as well as the akiec, bcc, bkl, df, mel, and vasc classes of the HAM10000 dataset. By utilizing these pre-trained GANs, we generate synthetic images for each class. Subsequently, we perform similarity calculations to compare the generated synthetic images with the real training images. Based on the similarity results obtained from the calculations, we selectively remove synthetic images with lower discriminative features associated with the selected class. This process ensures that the filtered synthetic images closely resemble features of the real images used to train the GAN.
Like the FBGT method, the FAGT method also utilizes the hyperparameter known as \(\mathbf{\alpha}\) to determine the number
| Method | \(\mathbf{\alpha}\) | Benign: Total \(\mathbf{p}\) | Benign: Filtered \(\mathbf{f}\) | Malignant: Total \(\mathbf{p}\) | Malignant: Filtered \(\mathbf{f}\) |
|---|---|---|---|---|---|
| FBGT | 0.80 | 727 | 582 | 173 | 139 |
| FBGT | 0.85 | 727 | 618 | 173 | 148 |
| FBGT | 0.90 | 727 | 655 | 173 | 156 |
| FAGT | 0.75 | 1441 | 1081 | 1530 | 1148 |
| FAGT | 0.80 | 1351 | 1081 | 1435 | 1148 |
| FAGT | 0.85 | 1271 | 1081 | 1350 | 1148 |
Table 1: The variation in the number of filtered images, denoted as \(\mathbf{f}\), from the total number of real/synthetic images, denoted as \(\mathbf{p}\), for three different values of \(\mathbf{\alpha}\) when implementing the FBGT and FAGT methods on the benign and malignant classes of the ISIC-2016 dataset.
Figure 8: The bar plot provides a visual representation of the train-test split and the logarithmic scale depiction of the sample distribution across different classes within the HAM10000 dataset, revealing a significant class imbalance in both the training and testing datasets.
of images to be filtered from the synthetic images. Similarly, we consider three specific values for \(\mathbf{\alpha}\), and the variations in the number of images can be observed in Table 2 and 1 for the HAM10000 and ISIC-2016 datasets, respectively. To visualize the internal processes of the FAGT method, please refer to Fig. 1.
#### Experiment III
In Experiment III, neither the FBGT nor FAGT methods are employed. Instead, we utilize the same pre-trained GAN as used in the FAGT method, but without implementing the similarity calculation and filtering process. This experiment is referred to as No-Filtering. Unlike the other experiments, No-Filtering does not involve any hyperparameter tuning. This experiment is solely conducted to analyze the efficacy of the FBGT and FAGT methods.
### Dataset Augmentation
Dataset augmentation is performed after the completion of dataset filtering. To achieve the final dataset augmentation for each class, we combine the real images with a batch of oversampled images obtained through transformations, as well as the synthetic images generated by GANs. During Experiment I, Experiment II, and Experiment III, the output is the final augmented dataset. The final augmented dataset, as shown in Figure 9, consists of 2000 images for both the benign and malignant classes, effectively addressing the class imbalance in the ISIC-2016 dataset. Similarly, in Figure 10, the final augmented HAM10000 dataset is displayed, with the akiec, bcc, bkl, df, mel, nv, and vasc classes each containing 6042 images. We chose this number as it matches the number of images in the overrepresented nv class, thus resolving the class imbalance present in the HAM10000 dataset by oversampling the remaining classes to this range.
Although the final augmented ISIC-2016 dataset contains 2000 images for each class and the HAM10000 dataset contains 6042 images for each class, it is important to note that these numbers are predetermined at the beginning of our experiments. There is a relationship in the FAGT method between the number of generated synthetic images and the number of filtered images. In this case, the number of filtered images is fixed and determined based on the size of the final augmented dataset. For example, in the ISIC-2016 dataset, we need to filter 1081 synthetic images from the benign class. This number is not randomly generated but derived from the size of the final augmented dataset. Therefore, if the final augmentation consists of 2000 images, with 727 real images and 192 images oversampled
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{11}{|c|}{Class} \\ \hline & \multicolumn{4}{|c|}{} & \multicolumn{4}{|c|}{} & \multicolumn{4}{|c|}{} & \multicolumn{4}{|c|}{} & \multicolumn{4}{|c|}{} & \multicolumn{4}{|c|}{} & \multicolumn{4}{|c|}{} & \multicolumn{4}{|c|}{} \\ \cline{3-11} \cline{3-11} \multicolumn{11}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} \\ \cline{3-11} \cline{3-11} \multicolumn{11}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{} \\ \hline \multirow{3}{*}{\(\mathbf{\alpha}\)} & \(\mathbf{\alpha}=0.80\) & \multirow{3}{*}{304} & 244 & \multirow{3}{*}{488} & 391 & \multirow{3}{*}{1033} & 879 & 88 & \multirow{3}{*}{109} & 93 & 1079 & 918 & 132 & 113 \\ \cline{2-2} \cline{4-11} & \(\mathbf{\alpha}=0.85\) & & & & & & & & & & & & & & \\ \cline{2-2} \cline{4-11} & \(\mathbf{\alpha}=0.90\) & & 274 & \multirow{2}{*}{440} & 440 & 930 & 99 & \multirow{2}{*}{972} & 919 & 119 & \multirow{2}{*}{132} & 113 \\ \cline{2-2} \cline{4-11} & \(\mathbf{\alpha}=0.75\) & & 5624 & & 5368 & & 3241 & & 7201 & & 3142 & & 7181 & \\ \cline{2-2} \cline{4-11} & \(\mathbf{\alpha}=0.80\) & & 5272 & 4218 & 5032 & 4026 & 3038 & 2431 & & 6751 & 5401 & 2946 & 2357 & 6732 & 5386 \\ \cline{2-2} \cline{4-11} & \(\mathbf{\alpha}=0.85\) & & 4962 & & 4736 & & 2860 & & 6354 & & 2772 & & 6336 & \\ \hline \end{tabular}
\end{table}
Table 2: The variation in the number of filtered images, denoted as \(\mathbf{f}\), from the total number of real/synthetic images, denoted as \(\mathbf{p}\), for three different values of \(\mathbf{\alpha}\) when implementing the FBGT or FAGT methods on the HAM10000 dataset.
Figure 10: The pie charts visually represent the composition of the augmented dataset, showcasing the contributions of real, transformed, and synthetic images for all seven classes of the HAM10000 dataset.
Figure 9: The pie charts show the composition of the augmented datasets, indicating the contributions of real, transformed, and synthetic images for the benign and malignant classes of the ISIC-2016 dataset.
through transformations, the required number of synthetic images is calculated as (2000 - (727 + 192) = 1081). Thus, 1081 is the constant number used to control the output of the FAGT method. If the GAN generates more random images that don't resemble real images of the benign class, a larger pool of synthetic images must be generated, from which 1081 images are filtered. Conversely, if the GAN generates images that closely resemble real images of the benign class, a smaller pool of generated images is sufficient for filtering 1081 images.
### GAN Configuration
As mentioned in the methodology section, we employ StyleGAN2-ADA as our GAN architecture. Each selected class for oversampling is trained using the same StyleGAN2-ADA configuration. In this configuration, we employ 400 kimg (thousands of real images shown to the discriminator), which is equivalent to 400 epochs. Typically, StyleGAN2-ADA necessitates a larger number of epochs to generate realistic-looking synthetic images. However, for our experiment, we set kimg equal to 400 to accommodate our hardware constraints. Despite this limitation, we still acquire satisfactory synthetic images that fulfill our needs for analyzing the efficacy of our filtering methods.
### Training Classifiers
As outlined in the methodology section, we utilize the Swin Transformer, ViT, and ConvNeXt models for training our classifiers. Specifically, we use the pre-trained versions of these models and fine-tune them using our final augmented datasets. To facilitate training, we resize the images in the final augmented datasets to 224x224 pixels. For optimization during training, we use AdamW, which is an algorithm designed for training deep learning models that extends the Adam optimizer to include weight decay regularization. We use a learning rate of \(\mathbf{5e^{-5}}\) during training for all these models.
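A minimal fine-tuning sketch with the stated settings (224x224 inputs, AdamW, learning rate 5e-5); `model` and `train_loader` are placeholders for one of the classifiers loaded earlier and a DataLoader over the final augmented dataset.

```python
import torch
from torch import nn, optim
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # resize augmented images to 224x224
    transforms.ToTensor(),
])

criterion = nn.CrossEntropyLoss()
optimizer = optim.AdamW(model.parameters(), lr=5e-5)  # AdamW with weight decay regularization

model.train()
for images, labels in train_loader:   # one fine-tuning epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```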
### Evaluation Metrics
In classification tasks, it is essential to select appropriate evaluation metrics to accurately assess model performance. This section discusses various evaluation metrics, including recall, F1-score, sensitivity, accuracy, and AUC, which are employed to analyze the performance of modern transformer and convolutional-based network models and evaluate the efficacy of the FBGT and FAGT methods.
#### Recall
Recall, also known as sensitivity, measures the proportion of actual positive instances that are correctly predicted as positive. For binary classification, recall can be defined as:
\[\mathbf{Recall}=\frac{\mathbf{TP}}{\mathbf{TP}+\mathbf{FN}} \tag{18}\]
where \(\mathbf{TP}\) represents the number of true positives and \(\mathbf{FN}\) represents the number of false negatives. For multiclass classification, macro-average recall is utilized. It is computed by calculating the recall for each class individually, treating each distinct class as a positive class and the remaining classes as negative classes. Then, the average of these recall values is taken. The formula for macro-average recall is given by:
\[\mathbf{Recall}_{macro}=\frac{\mathbf{1}}{\mathbf{k}}\sum_{i=1}^{\mathbf{k}}\frac{\mathbf{TP_{i}}} {\mathbf{TP_{i}+FN_{i}}} \tag{19}\]
where \(\mathbf{k}\) represents the number of classes.
#### F1-score
The F1-score, which is the harmonic mean of precision and recall, provides a balance between the two metrics and is especially useful when dealing with imbalanced datasets or when both false positives and false negatives are of concern. In multiclass classification, we use the macro-average F1-score, which is calculated by computing the F1-score for each class individually and then taking the average of these values. The formula for macro-average F1-score is given by:
\[\mathbf{F1}\text{-}\mathbf{score}_{macro}=\frac{\mathbf{1}}{\mathbf{k}}\sum_{i=1}^{\mathbf{k}}2 \cdot\frac{\mathbf{Precision_{i}\cdot Recall_{i}}}{\mathbf{Precision_{i}+Recall_{i}}} \tag{20}\]
where \(\mathbf{Precision}=\frac{\mathbf{TP}}{\mathbf{TP}+\mathbf{FP}}\). By using the macro-average F1-score, we can obtain an overall performance measure of the multiclass classification model, while taking into account the performance of each individual class.
#### Accuracy
Accuracy measures the proportion of correctly predicted instances over the total number of instances. For binary classification, accuracy can be defined as:
\[\mathbf{Accuracy}=\frac{\mathbf{TP+TN}}{\mathbf{TP+TN+FP+FN}} \tag{21}\]
For multiclass classification, accuracy can be calculated as:
\[\mathbf{Accuracy}=\frac{\sum_{i=1}^{\mathbf{k}}\mathbf{TP_{i}}}{\sum_{i=1}^{\mathbf{k}}(\mathbf{ TP_{i}+TN_{i}+FP_{i}+FN_{i}})} \tag{22}\]
#### AUC
The AUC (Area Under the Curve) is a popular evaluation metric for binary classification models, and it measures the model's ability to distinguish between positive and negative classes. The AUC is the area under the receiver operating characteristic (ROC) curve, which is a plot of the true positive rate (TPR) against the false positive rate (FPR) at
different classification thresholds. The formula for AUC for binary classification can be written as:
\[\mathbf{AUC_{binary}}=\int_{\mathbf{0}}^{\mathbf{1}}\mathbf{TPR(FPR^{-1}(\mathbf{t}))dt} \tag{23}\]
where \(\mathbf{TPR}\) is the true positive rate, \(\mathbf{FPR^{-1}}\) is the inverse of the false positive rate, and \(\mathbf{t}\) is the threshold value. For multiclass classification, the AUC is generally calculated using the one-vs-rest (OVR) approach. In the OVR approach, we treat each class as the positive class and the remaining classes as the negative class, and we compute the AUC for each class separately. Then, we take the average of these AUC values to obtain the overall AUC score. The formula for AUC for multiclass classification using the OVR approach can be written as:
\[\mathbf{AUC_{multiclass}}=\frac{\mathbf{1}}{\mathbf{k}}\sum_{\mathbf{i=1}}^{\mathbf{k}}\mathbf{AUC_{i}} \tag{24}\]
where \(\mathbf{AUC_{i}}\) represents the AUC value for the \(\mathbf{i}\)-th class obtained using the OVR approach.
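These metrics map directly onto scikit-learn; the sketch below assumes y_true holds integer labels, y_pred the predicted labels, and y_prob the per-class probabilities output by the classifier.

```python
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score

recall_macro = recall_score(y_true, y_pred, average="macro")                  # Eq. 19
f1_macro = f1_score(y_true, y_pred, average="macro")                          # Eq. 20
accuracy = accuracy_score(y_true, y_pred)                                     # Eqs. 21-22
auc_ovr = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")   # Eq. 24
```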
### Baselines
Our baseline for the ISIC-2016 dataset is the results achieved by the MelaNet model, designed by Zunair and Hamza [18]. However, due to an inconsistency in the reported sensitivity result in their paper, we recomputed this metric using the publicly available pre-trained MelaNet model. For the HAM10000 dataset, we use the results of the IRv2+SA model as our baseline, as reported by Datta et al. [19]. Both baselines demonstrate state-of-the-art classification performance.
### Experimental Setup
The experiments were conducted in two different setups with different hardware configurations. Setup 1, which employed a Linux server with a 2-core Intel(R) Xeon(R) CPU @ 2.20GHz, 13 GB RAM, and 1x NVIDIA P100 16GB GPU, was utilized for training StyleGAN2-ADA for the different minority classes and for training classifiers for the HAM10000 dataset. Setup 2, which used a Windows machine with a 6-core AMD Ryzen 5 5600H CPU @ 3.30GHz, 16 GB RAM, and 1x NVIDIA RTX 3060 6GB GPU, was utilized for implementing the FBGT and FAGT methods and for training classifiers for the ISIC-2016 dataset.
## 5 Results
This section presents a thorough performance analysis of the FBGT and FAGT methods on each variation of the final augmented dataset. The analysis is conducted in two parts. Firstly, we investigate the difference in performance while utilizing the FBGT and FAGT methods against No-Filtering. Secondly, we compare our best models with state-of-the-art baseline models.
The experimental results on the ISIC-2016 dataset are presented in Table 3(a), using metrics such as sensitivity, FN, and AUC for evaluation. Similarly, Table 3(b) presents the experimental results on the HAM10000 dataset, employing evaluation metrics like recall, F1-score, accuracy, and AUC. Since the classifier training is performed with the final augmented datasets, and there are three distributions of datasets for three distinct values of \(\mathbf{\alpha}\) for both FBGT and FAGT methods, we experimentally conduct hyperparameter tuning on the Swin Transformer. We select the best-performing Swin Transformer model by analyzing its performance on the test set and choose the corresponding \(\mathbf{\alpha}\) value and its associated augmented dataset to train the ViT and ConvNeXt models.
### Ablation Study
From the analysis presented in Tables 3(a) and 3(b), it is evident that our trained models exhibit improved performance on most evaluation metrics when utilizing either the FBGT or FAGT methods, as opposed to No-Filtering, on both the ISIC-2016 and HAM10000 datasets. These performance gains are observed across different distributions of augmented training datasets, each corresponding to a different \(\mathbf{\alpha}\) value. The following provides a detailed analysis of the observed performance gains.
#### ISIC-2016 Dataset
Table 3(a) provides a detailed analysis of the performance of various models trained on the ISIC-2016 dataset. It is evident that all trained models, when utilizing either the FBGT or FAGT methods, exhibit performance improvements over the No-Filtering approach, which does not incorporate either of the filtering methods.
Regarding individual model performance with the FBGT method, the Swin Transformer model with a hyperparameter value of \(\mathbf{\alpha}=\mathbf{0.80}\) achieves the best performance,
Figure 11: The graph provides a visual comparison of the performance of the trained models in terms of sensitivity and FN count while utilizing the FAGT and FBGT methods with optimal \(\mathbf{\alpha}\) values on ISIC-2016 dataset.
with a false negative (FN) count of 25 and a sensitivity of 66.67%. Similarly, when applying the FAGT method, the model with \(\mathbf{\alpha}=\mathbf{0.75}\) demonstrates the best performance, resulting in a FN count of 28 and a sensitivity of 62.67%. These optimal \(\mathbf{\alpha}\) values are also applied to the ViT and ConvNeXt models.
For the ViT model, applying the FAGT method with \(\mathbf{\alpha}=\mathbf{0.75}\) leads to significant improvements, surpassing all other trained models with a sensitivity of 72.00% and a FN count of 21. While the utilization of the FBGT method enhances the performance of the ViT model, the improvement is not as significant. Similarly, the ConvNeXt model shows performance improvements with the FBGT and FAGT methods, although the gains are not as substantial as those observed in the other models. Figure 11 provides a visual representation of the comparison between the performance of the trained models utilizing the FAGT and FBGT methods on ISIC-2016 dataset.
#### HAM10000 Dataset
Table 3(b) presents the performance analysis of the models trained on the HAM10000 dataset. It is apparent that all trained models, irrespective of their utilization of the FBGT or FAGT methods, surpass the performance of the No-Filtering approach, which does not incorporate any filtering methods.
When it comes to individual model performance, utilizing the FBGT method with \(\mathbf{\alpha}=\mathbf{0.80}\), the Swin Transformer model achieves the best results, with a recall of 81.82% and an F1-score of 83.94%. Similarly, when the FAGT method is employed with an \(\mathbf{\alpha}\) value of 0.85, the best performance is observed, resulting in a recall of 82.48% and an F1-score of 81.90%. Similar to the findings in Table 3(a), the \(\mathbf{\alpha}\) values that yield the best performance for the Swin Transformer model are also applied to the ViT and
Table 3: Performance analysis of the fine-tuned Swin Transformer, ViT, and ConvNeXt models with the application of FBGT and FAGT methods for different \(\mathbf{\alpha}\) values, compared to No-Filtering, on the ISIC-2016 dataset in Table (a), and the HAM10000 dataset in Table (b).
Figure 12: The graph provides a visual comparison of the performance of the trained models in terms of recall, F1-score and accuracy, while utilizing the FAGT and FBGT methods with optimal \(\mathbf{\alpha}\) values on the HAM10000 dataset.
ConvNeXt models.
In the case of the ViT model, utilizing the FAGT method with \(\mathbf{\alpha=0.85}\) achieves the highest recorded recall of 85.94% and the highest average AUC of 98.27%. As for the ConvNeXt model, applying the FAGT method with \(\mathbf{\alpha=0.85}\) results in an accuracy of 94.44%, an F1-score of 84.06%, and a recall of 81.80%. This is by far the best-performing model when we consider F1-score and accuracy as our evaluation metrics. The use of the FBGT method does improve the performance of the ViT and ConvNeXt models as well. However, this improvement isn't as significant as using the FAGT method. For a visual depiction of the performance comparison between the trained models utilizing the FAGT and FBGT methods on HAM10000 dataset, please refer to Figure 12.
### Comparison Against Baselines
In this section, we compare our best-performing models against strong baseline models. As mentioned before, we choose MelaNet [18] as the baseline for the ISIC-2016 dataset and IRv2+SA [19] as the baseline for the HAM10000 dataset. Furthermore, we compare our models with other strong models proposed by various researchers. The overall comparisons for the ISIC-2016 and HAM10000 datasets can be visualized in Tables 3(a) and 3(b).
The subfigures in Fig. 13 present the two-dimensional (2D) UMAP embeddings of the ISIC-2016 dataset. In subfigure 13(a), there are 727 benign and 173 malignant lesions in the distribution. Conversely, subfigures 13(b), 13(c), and 13(d) depict oversampled datasets with 2000 malignant and 2000 benign lesions. It is noticeable in subfigure 13(b) that the data distribution for both benign and malignant classes overlaps. However, in subfigures 13(c) and 13(d), the distribution appears to separate into two distinct groups, indicating the effectiveness of the FBGT and FAGT methods compared to the No-Filtering approach.
Similarly, the subfigures in Fig. 14 present the three-dimensional (3D) UMAP embeddings of the HAM10000 dataset. In subfigure 14(a), the distribution consists of 9186 skin lesions. Conversely, subfigures 14(b), 14(c), and 14(d) depict oversampled datasets, each showing a portion of the 42,294 skin lesions. When observing the 3D representations, it becomes apparent that in subfigures 14(c) and 14(d), the data points for each class become more distinguishable in a three-dimensional space. In contrast, the unfiltered 3D representation in subfigure 14(b) exhibits sparse data points, making it challenging for the classifier to identify specific regions for each class. As a result, the classification task becomes more difficult.
The formation of clustered regions, as opposed to a single concentrated area, further substantiates the effectiveness of the FBGT and FAGT methods, addressing the issue of low inter-class variation within a dataset. By utilizing these techniques, we can substantially refine the classification process, ultimately leading to more accurate and dependable outcomes.
## 6 Conclusion
This paper introduces Cosine Similarity-based Image Filtering (CosSIF), a robust dataset filtering algorithm. We utilize CosSIF to create two filtering approaches: FBGT and FAGT. These methods rely on cosine similarity as the main metric for similarity calculation and aim to reduce the volume of GAN-generated synthetic images from the minority class that resemble images from the majority class. Our experimental results demonstrate that models trained on datasets processed with either the FBGT or FAGT methods show improved performance compared to models without these filtering methods. Through comprehensive experiments, we demonstrate that the proposed FAGT method, when applied to the ISIC-2016 dataset and trained with the ViT model using a tuned \(\mathbf{\alpha}\) value of 0.75, improves sensitivity by 1.59% and AUC by 1.88% compared to the baseline MelaNet. When applying the FAGT and
Figure 14: The subfigures present UMAP visualizations of the four different variations of the HAM10000 dataset, enabling a convenient comparison of the distribution and clustering patterns across each variation.
Figure 13: The subfigures showcase the four variations of the ISIC-2016 dataset using 2D UMAP visualizations, offering a comparative view of the dataset’s distribution and clustering patterns for each variation.
FBGT methods to the HAM10000 dataset, trained with the ConvNeXt and Swin Transformer models, respectively, using tuned \(\mathbf{\alpha}\) values of 0.85 and 0.80, we observe significant improvements in recall. Specifically, the FAGT method achieves a recall improvement of 13.72% over the baseline IRv2+SA, with an accuracy of 94.44%, while the FBGT method achieves a recall improvement of 13.75% over the same baseline, with an accuracy of 94.04%. For future research, our aim is to enhance the similarity calculation algorithm by incorporating a feature extraction and feature-based similarity calculation module. Additionally, we aim to apply our algorithm and filtering methods to various medical domains, including X-rays, CT scans, and MRI images. Furthermore, we plan to utilize the proposed CosSIF algorithm to develop a downsampling technique suitable for all image classification tasks.
## Compliance with Ethical Standards
### Funding Statement
The authors did not receive any external financial support for this research. The study was entirely self-financed, with all related costs being borne exclusively by the authors of the paper.
|
2303.07318
|
The Masses of Supernova Remnant Progenitors in M33
|
Using resolved optical stellar photometry from the Panchromatic Hubble
Andromeda Treasury Triangulum Extended Region (PHATTER) survey, we measured the
star formation history (SFH) near the position of 85 supernova remnants (SNRs)
in M33. We constrained the progenitor masses for 60 of these SNRs, finding the
remaining 25 remnants had no local SF in the last 56 Myr consistent with
core-collapse SNe (CCSNe), making them potential Type Ia candidates. We then
infer a progenitor mass distribution from the age distribution, assuming single
star evolution. We find that the progenitor mass distribution is consistent
with being drawn from a power-law with an index of $-2.9^{+1.2}_{-1.0}$.
Additionally, we infer a minimum progenitor mass of $7.1^{+0.1}_{-0.2}\
M_{\odot}$ from this sample, consistent with several previous studies,
providing further evidence that stars with ages older than the lifetimes of
single 8 $M_{\odot}$ stars are producing supernovae.
|
Brad Koplitz, Jared Johnson, Benjamin F. Williams, Mariangelly Diaz-Rodriguez, Jeremiah W. Murphy, Margaret Lazzarini, Joseph Guzman, Julianne J. Dalcanton, Andrew Dolphin, Meredith Durbin
|
2023-03-13T17:39:43Z
|
http://arxiv.org/abs/2303.07318v2
|
# The Masses of Supernova Remnant Progenitors in M33
###### Abstract
Using resolved optical stellar photometry from the Panchromatic Hubble Andromeda Treasury Triangulum Extended Region survey, we measured the star formation history near the position of 85 supernova remnants (SNRs) in M33. We constrained the progenitor masses for 60 of these SNRs, finding the remaining 25 remnants had no local star formation in the last 56 Myr consistent with core-collapse supernovae, making them potential Type Ia candidates. We then infer a progenitor mass distribution from the age distribution, assuming single star evolution. We find that the progenitor mass distribution is consistent with being drawn from a power-law with an index of \(-2.9^{+1.2}_{-1.0}\). Additionally, we infer a minimum progenitor mass of \(7.1^{+0.1}_{-0.2}\)\(M_{\odot}\) from this sample, consistent with several previous studies, providing further evidence that stars with ages older than the lifetimes of single 8 \(M_{\odot}\) stars are producing supernovae.
Supernovae -- Stellar Evolution -- Massive Stars -- Stellar Populations
progenitors with mass constraints and expanding the measured distribution of progenitor masses to wider ranges of galaxy properties. The traditional method for determining the mass of SNe progenitors is by directly imaging the progenitor stars (e.g. Smartt et al., 2003, 2004; Van Dyk et al., 2003; Li et al., 2006; Gal-Yam et al., 2007; Kilpatrick et al., 2021). This technique requires high-resolution (better than \(\sim\)0.\({}^{\prime\prime}\)1) images of the SN site both before and after the event, which involves a large amount of serendipity. The difficult requirement of having spatially resolved photometry of the location before the explosion has resulted in only 34 SNe having their progenitor masses determined by this method, along with 40 upper limits constrained (Van Dyk, 2017; Kilpatrick & Foley, 2018; Van Dyk et al., 2018; O'Neill et al., 2019; Kilpatrick et al., 2021; Tinyanont et al., 2022; Vazquez et al., 2022). While the number of cataloged SNe has increased in recent years (e.g. Guillochon et al., 2017; Holoien et al., 2019), few of these SNe have had their progenitors constrained due to insufficient precursor imaging.
An alternative method, which does not require pre-explosion images, uses an age-dating technique of the stellar populations surrounding an SN event (Gogarten et al., 2009; Murphy et al., 2011). This technique leverages the stellar populations surrounding an SN to measure the local star formation history (SFH) by finding the model age distribution that best fits the color-magnitude diagram (CMD) of the resolved local stars. By assuming the progenitor star belongs to the median population near the event, we are able to place statistical constraints on the age of the SN progenitor. We can then infer the most likely mass of the progenitor by assuming that it was the most massive star that survives to that age according to the models.
This age-dating technique was shown to be a reliable way to infer progenitor ages for distances out to \(\sim\)8 Mpc (Murphy et al., 2011). Assuming only stars with masses \(\gtrsim\)7 \(M_{\odot}\) become CCSNe requires photometry that is sensitive to populations as old as 56 Myr (Girardi et al., 2002). Because the technique does not require precursor imaging, it can be applied to any location where an SN has occurred in the recent past, including any known SN remnants (SNRs). As a result, several previous works have shown that most young stars within 50 pc of an SN event are associated with the progenitor (Bastian & Goodwin, 2006; Badenes et al., 2009; Gogarten et al., 2009; Jennings et al., 2012; Williams et al., 2014). For example, this technique was used to constrain the masses of SNR progenitors in the local star-forming galaxies M31 (Jennings et al., 2012), NGC 6946 (Koplitz et al., 2021), as well as the Magellanic Clouds (Badenes et al., 2009; Auchettl et al., 2019). This technique has also been used to constrain the mass of observed CCSNe (Williams et al., 2014, 2018; Diaz-Rodriguez et al., 2021; Koplitz et al., 2021). Progenitor masses in M83 have also been constrained, including one with a most likely mass of 59 \(M_{\odot}\) whose errors exclude ages older than 8 Myr, the highest mass progenitor inferred from the technique to date (Williams et al., 2019).
M33, or the Triangulum Galaxy, is an excellent target for our technique. It is nearby, relatively face on (\(i=56^{\circ}\); Zaritsky et al., 1989), and is known to host over 200 SNRs (Long et al., 2010; Lee & Lee, 2014). Jennings et al. (2014), hereafter J14, have already applied this technique to 33 SNRs in M33, finding that the distribution was well fit by power-law distributions with indices that were significantly steeper than a standard Salpeter initial mass function power-law index of \(-\)2.35 (Salpeter, 1955). However, their analysis in M33 was limited by the heterogeneous set of archival Hubble Space Telescope (HST) images available. This heterogeneous coverage resulted in inconsistent filter coverage and photometric depths between observations containing SNRs. Furthermore, they did not fit a separate field star component to their ages, which could have resulted in age biases. Here, we follow up on their work using the deep, uniform coverage provided by the Panchromatic Hubble Andromeda Treasury Triangulum Extended Region (PHATTER) survey (Williams et al., 2021) as well as updated fitting techniques.
The analysis we present here takes advantage of the work by Diaz-Rodriguez et al. (2018), hereafter DR18, who developed a Bayesian hierarchical analysis capable of constraining the progenitor mass distribution index with an improved method for accounting for background effects as well as the minimum and maximum mass at which a star is able to undergo a CCSNe event from a set of SFHs. They reanalyzed the SFHs from J14 as well as those from Lewis et al. (2015) which correspond to likely SNRs from Lee & Lee (2014), finding a progenitor mass index closer to, but not consistent with, a Salpeter index (\(-\)2.96\({}^{+0.45}_{-0.25}\)). This combined M31 and M33 distribution pointed to a minimum mass of \(\sim\)7.3 \(M_{\odot}\) and a maximum mass of \(>\)59 \(M_{\odot}\). However, they found the SFHs from M31 led to a Salpeter progenitor mass distribution index (\(-\)2.35\({}^{+0.36}_{-0.48}\)) with a minimum mass of 6.5 \(M_{\odot}\) and a maximum mass of \(>\)46 \(M_{\odot}\).
In this paper, we take an updated look at the ages of SNR progenitors in M33 using resolved stellar photometry from the PHATTER survey. Our larger sample and more homogeneous photometry catalog allow us to compare different fitting methods and quantify the impact these changes have on the age and mass results. Additionally, we compare our custom SNR-centered SFHs to those measured by Lazzarini et al. (2022) in grids, allowing us to determine whether grid SFHs are sufficient for inferring a progenitor age and mass. The rest of the paper is outlined as follows: Section 2 details our SNR source catalog, as well as how our SFHs were measured. Section 3 presents our progenitor age and mass estimates. In Section 4, we discuss our constraint on the lower mass limit for CCSNe as well as the results of Kolmogorov-Smirnov (KS) tests on our observed distribution, then compare our results to similar studies in the literature. Finally, Section 5 provides a short summary of our results. Throughout this paper, we assume a distance to M33 of 859 kpc (de Grijs et al., 2017).
## 2 Data and Analysis
Our technique has two main data requirements. First, we need to know the locations of past SN activity. Second, we require resolved stellar photometry of the current populations within 50 pc of the SNe, as stars tend to remain spatially correlated within about 100 pc of their siblings for about 100 Myr, even if the cluster is not gravitationally bound (Bastian and Goodwin, 2006). Using these, we can measure the star formation rate as a function of lookback time, known as the SFH, at each SNR location. The SFH provides the age distribution of the stars near each SN. We then apply this age distribution to constrain the age and mass of the progenitor star. We detail each of these steps below.
### SNR Locations
For the locations of past SN activity in M33, we take the locations of SNRs from the catalogs of Long et al. (2010) and Lee and Lee (2014a), hereafter L10 and LL14, respectively. L10 identified candidates based on their X-ray spectrum as well as having [S ii]:H\(\alpha\) ratios \(\geq\)0.4. The candidates in LL14 were identified based on their lack of blue stars, remnant morphology, and [S ii]:H\(\alpha\) ratios \(\geq\)0.4. Of the 137 SNR candidates in L10, 120 are included in LL14's catalog of 199 candidates. The remaining 17 locations were classified as likely superbubbles or H ii regions, leading LL14 to exclude them from their final catalog. Of these 17 potential SNRs, 4 (L10-043, L10-050, L10-079, L10-098) are within the PHATTER survey footprint. We include these 4 locations in our catalog since they may be SNRs located within larger star forming complexes. Of the 199 candidates from LL14, 81 reside in the PHATTER footprint, leading to our catalog of 85 SNR candidates.
In addition to the SNR locations, we produced 2 control catalogs of locations not associated with SNRs. The first sample is 85 locations randomly distributed within the PHATTER footprint. The second sample is 2500 random draws of the grid SFHs from Lazzarini et al. (2022), which do not contain an SNR. Differences between the random and SNR samples provide additional evidence that the stellar populations near the SNRs are likely related to the progenitors, and not chance spatial coincidences of young stars (see Section 4.3 for details).
### Photometry
Once we had determined the historical SN locations, the second requirement was resolved stellar photometry at those locations. This photometry was obtained from the PHATTER survey (Williams et al., 2021). The survey measured resolved stellar photometry for 22 million stars within M33 in optical (Advanced Camera for Surveys \(F475W\) and \(F814W\)), near-ultraviolet (Wide Field Camera 3 (WFC3) \(F275W\) and \(F336W\)), and near-infrared (WFC3 \(F110W\) and \(F160W\)) bands. Our photometry is derived from the optical images (\(F475W\) and \(F814W\)) of the PHATTER survey, rather than measuring photometry in all 6 bands simultaneously. We took samples from this photometry catalog for each SNR location and each control location, with the samples consisting of all of the stars within 50 pc (12\(\arcsec\)) from the SNR or random position. We also collected samples of the widespread young populations surrounding each SNR from 50 to 1000 pc (12\(\arcsec\) to 4\(\arcmin\)). These "background" samples allow us to identify young populations unique to the region containing the SNR.
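For concreteness, the region and background selection described above reduces to a simple angular cut around each SNR position; the following is a minimal sketch using astropy, with made-up coordinates standing in for the PHATTER catalog (column layout and values are assumptions, not the actual catalog schema).

```python
# Minimal sketch of the sampling step: stars within 12" (~50 pc at 859 kpc) of
# an SNR form the SNR sample, and stars between 12" and 4' form the
# "background" sample. Coordinates below are made up for illustration.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def select_samples(star_ra_deg, star_dec_deg, snr_ra_deg, snr_dec_deg):
    """Return boolean masks for the SNR region and its background annulus."""
    stars = SkyCoord(star_ra_deg * u.deg, star_dec_deg * u.deg)
    snr = SkyCoord(snr_ra_deg * u.deg, snr_dec_deg * u.deg)
    sep = stars.separation(snr)
    in_region = sep <= 12 * u.arcsec
    in_background = (sep > 12 * u.arcsec) & (sep <= 4 * u.arcmin)
    return in_region, in_background

rng = np.random.default_rng(0)
ra = 23.46 + rng.normal(0.0, 0.02, 5000)    # hypothetical RA values (deg)
dec = 30.66 + rng.normal(0.0, 0.02, 5000)   # hypothetical Dec values (deg)
region, background = select_samples(ra, dec, 23.46, 30.66)
print(region.sum(), background.sum())
```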
To fit stellar evolution models to the photometric data, we require artificial star tests (ASTs) to correctly model the photometric completeness and uncertainty as a function of color and magnitude. We used the ASTs from these data that were created by Lazzarini et al. (2022), who used them to measure grid SFHs in M33, as discussed in Section 2.5. These tests are obtained by adding stars of known flux to an image and blindly rerunning the photometry routine to measure the photometric bias, uncertainty, and completeness as a function of color and magnitude when fitting models to the data. This is done at least 50,000 times within a region of interest. Williams et al. (2017) and Koplitz et al. (2021) found that one set of artificial stars could be used for all locations of similar stellar density, rather than creating a set for each location. This greatly reduces the computation time required. Lazzarini et al. (2022) used this technique to optimize the number of ASTs that needed to be created. Since we are using the same photometry catalog as Lazzarini et al. (2022), we are able to use the same ASTs when analyzing the SNRs in our catalog. These tests and the optical photometry catalog are described in further detail in Lazzarini et al. (2022).
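The role of the ASTs can be summarised with a short sketch: comparing injected and recovered magnitudes yields the completeness, photometric bias, and scatter per bin. The binning, variable names, and synthetic inputs below are illustrative assumptions, not the actual AST products of Lazzarini et al. (2022), and a real application would bin in both color and magnitude.

```python
# Sketch of turning artificial star tests into completeness, bias, and scatter
# as a function of input magnitude. All inputs here are synthetic stand-ins.
import numpy as np

def ast_summary(mag_in, mag_out, recovered, bin_edges):
    completeness, bias, scatter = [], [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (mag_in >= lo) & (mag_in < hi)
        found = in_bin & recovered
        completeness.append(found.sum() / max(in_bin.sum(), 1))
        dm = mag_out[found] - mag_in[found]          # recovered minus injected
        bias.append(np.median(dm) if dm.size else np.nan)
        scatter.append(np.std(dm) if dm.size else np.nan)
    return np.array(completeness), np.array(bias), np.array(scatter)

rng = np.random.default_rng(1)
mag_in = rng.uniform(20.0, 28.0, 50000)              # injected magnitudes (mock)
recovered = rng.random(50000) < np.clip(1.0 - 0.1 * (mag_in - 20.0), 0.0, 1.0)
mag_out = mag_in + rng.normal(0.0, 0.02 * (mag_in - 19.0), 50000)
comp, bias, scat = ast_summary(mag_in, mag_out, recovered, np.arange(20.0, 28.5, 0.5))
```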
### CMD Fitting
Once we had the photometry and ASTs necessary to study each SNR location, we used the CMD fitting program MATCH (Dolphin, 2002, 2012, 2013) to measure SFHs near the SNRs in our catalog. MATCH has been used to constrain the age of SN progenitors (e.g., Jennings et al., 2012; J14; Williams et al., 2018, 2019; Koplitz et al., 2021) and SFH for nearby galaxies (e.g. Williams et al., 2009; Weisz et al., 2014; Skillman et al., 2017). MATCH fits the observed CMD using the PARSEC stellar evolution models (Bressan et al., 2012). For each model age and metallicity, it creates a model CMD by assuming a Kroupa initial mass function (Kroupa, 2001). It then finds the highest likelihood linear combination of those models that provides the best fit to the observed CMD using a maximum likelihood estimator and taking into account the bias, uncertainty, and completeness of the photometry as determined by ASTs. This combination of models yields the distribution of ages and metallicities for the stars in the observed CMD, which we refer to as the SFH of the region.
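MATCH is an external package, so the sketch below is only a schematic of the fitting idea summarised above, not MATCH itself: the observed CMD is binned into a Hess diagram and a non-negative linear combination of model Hess diagrams (one per age/metallicity bin, pre-convolved with the AST-derived completeness and errors) is scored with a Poisson likelihood over the bins (the statistic underlying MATCH; Dolphin 2002). The non-negative least-squares solver is a convenient stand-in for MATCH's maximum-likelihood search, and all array shapes are illustrative.

```python
# Schematic Hess-diagram fit: score a non-negative combination of model CMDs
# against the observed CMD with a Poisson likelihood. Shapes are illustrative.
import numpy as np
from scipy.optimize import nnls
from scipy.special import gammaln

def poisson_log_like(observed, model):
    """ln P(observed | model) for independent Poisson-distributed Hess bins."""
    model = np.clip(model, 1e-12, None)              # guard against log(0)
    return np.sum(observed * np.log(model) - model - gammaln(observed + 1.0))

def fit_sfh(observed_hess, model_hess_stack):
    """observed_hess: (n_bins,); model_hess_stack: (n_age_z_bins, n_bins)."""
    weights, _ = nnls(model_hess_stack.T, observed_hess)   # stand-in optimizer
    return weights, poisson_log_like(observed_hess, model_hess_stack.T @ weights)

rng = np.random.default_rng(2)
models = rng.exponential(1.0, size=(10, 400))        # 10 mock age bins, 400 CMD bins
truth = rng.uniform(0.0, 3.0, 10)
observed = rng.poisson(truth @ models)               # mock observed Hess diagram
weights, lnL = fit_sfh(observed, models)
```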
Below, we provide a brief description of our technique for running MATCH. A more detailed account of the process can be found in Koplitz et al. (2021), which is identical to how we ran MATCH here. In short, for each SNR location, we fit the CMD of the resolved stellar photometry with a grid of model CMDs generated from the PARSEC stellar evolution models (Bressan et al., 2012). Our model grid had time bins of size 0.05 dex from \(\log_{10}(t/\mathrm{yr})=6.6-8.0\) while bins of size 0.1 dex were used from \(\log_{10}(t/\mathrm{yr})=8.0-10.2\). Since M33 is known to have a subsolar metallicity (e.g., Barker et al., 2011), we limited the metallicities MATCH applies to the model grid to be \(-0.5\leq\mathrm{[Fe/H]}\leq 0.1\) using the \(zinc\) flag. Multiple massive stars can reside in the same location on CMDs even though they have different metallicities. As a result, using the \(zinc\) flag forces MATCH to use models for the young stars that are within the known metallicity range of M33.
As in Koplitz et al. (2021), our model also includes a "background" or "contamination" CMD of the stars in an annulus between 50 and 1000 pc (12\({}^{\prime\prime}\)\(-\)4\({}^{\prime}\)) from the SNR. The contamination CMD is scaled to the size of our regions before fitting. This allows us to identify young populations that are sparse in the surrounding field and more heavily weight the populations concentrated within the regions being fit.
Furthermore, the fitting routine accounts for the effects of dust on the photometry. We allowed MATCH to find the combination of reddening parameters along with the combination of ages and metallicities, which provided the best fit to the observed CMD. Since young populations are often found in dusty regions, MATCH applies three types of extinction to the model CMDs when fitting the stellar populations. The first, A\({}_{V}\), is the total foreground extinction over the full region. The second, dA\({}_{V}\), is the extinction spread due to the stars along the line of sight. The third, dA\({}_{VY}\), is additional differential extinction added to populations younger than 100 Myr old. The default dA\({}_{VY}\) value of 0.5 was used. To determine A\({}_{V}\) and dA\({}_{V}\) for an SNR, we fit a range of possible values at the location. We allowed A\({}_{V}\) to be between 0.05 and 1.00 in steps of 0.05 while dA\({}_{V}\) could be between 0.0 and 2.0 in steps of 0.2.
On average, our locations returned an A\({}_{V}\) value of 0.30, higher than the Schlafly & Finkbeiner (2011) value of 0.114. This higher A\({}_{V}\) is not surprising given that MATCH takes into account the Milky Way and M33 reddening, whereas Schlafly & Finkbeiner (2011) only account for the Milky Way. The vast majority of dA\({}_{V}\) values in our sample were 0.0, meaning the default differential reddening for the youngest stars (dA\({}_{VY}=0.5\)) was sufficient to account for differential reddening in most cases.
### Uncertainty Estimation
Random and systematic uncertainties are inherent to fitting stellar models to CMDs. Most of the random uncertainties in our fits arise from photometric errors as well as the number of stars used to determine the most likely progenitor age. The systematic uncertainties are from any deficiencies present in the stellar evolution models used during the SFH fits. Lazzarini et al. (2022) have shown that there is good agreement between model sets for fits to young ages, and that the random uncertainties dominate the error budget in these fits. Thus, we estimate the uncertainties in our SFHs from the random uncertainties alone.
To estimate our random uncertainties, we used the hybridMC tool within MATCH (Dolphin, 2013). This task uses a hybrid Monte Carlo algorithm to accept or reject potential SFHs around the best fit SFH based on likelihood. We report the narrowest 68% of the distribution of accepted SFHs that decreases with look-back time in columns (4) and (5) of Table 1. A detailed description of how our uncertainties are estimated can be found in Section 2.6.
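The "narrowest 68%" interval reported in Table 1 can be computed in a few lines; a minimal sketch follows, with the samples standing in for the hybridMC output (the published intervals additionally require the interval to decrease with look-back time, which is omitted here).

```python
# Minimal sketch: shortest interval containing 68% of the accepted values for
# one SFH quantity; `samples` is a stand-in for MATCH's hybridMC output.
import numpy as np

def narrowest_interval(samples, frac=0.68):
    s = np.sort(np.asarray(samples, dtype=float))
    n_in = max(int(np.ceil(frac * s.size)), 1)
    widths = s[n_in - 1:] - s[: s.size - n_in + 1]   # width of every candidate window
    start = int(np.argmin(widths))
    return s[start], s[start + n_in - 1]

lo, hi = narrowest_interval(np.random.default_rng(3).lognormal(size=4000))
```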
### SFHs from Previous Work
Recently, Lazzarini et al. (2022) published recent SFH maps of the PHATTER region of M33. They used the same PHATTER optical photometry to measure the SFH of M33's inner disk in a grid of 100 \(\times\) 100 pc (24\({}^{\prime\prime}\)\(\times\) 24\({}^{\prime\prime}\)) cells, which they have released to the community.
Lazzarini et al. (2022) largely used the same MATCH fitting technique as we have, but there were a few differences. In their analysis, time bins of size 0.1 dex were used for all bins (\(\log_{10}(t/\text{yr})=6.6-10.2\)). Since they were measuring the total amount of star formation in each location, and not attempting to isolate very localized populations, a contamination CMD was not included during their fits.
Being able to constrain progenitor masses using such a grid of spatially resolved age distributions would be very powerful, since it would avoid having to access the original photometry and ASTs and run custom fitting for each SNR location. Thus, we also attempted to age date the SNR locations using this grid of published SFHs by assigning an SFH from Lazzarini et al. (2022) that corresponded to the location of each SNR in our sample. We then compare the ages and masses of custom fits to those taken from a less optimized, but more easily accessible, source.
### Constraining Progenitor Mass
The next step in constraining the masses of SNR progenitors is to convert the recent SFH from MATCH into a probability distribution for the age of the progenitor. This calculation is done by determining the fraction of the total stellar mass present in each age bin. We take this fraction to be equal to the probability that the progenitor is associated with that age. We also take the error on that fraction as the error on the probability. We provide an example of such a probability distribution in Table 1.
The age probability distribution presented in Table 1 is for the SNR LL14-060. Similar tables for each SNR with SF in the last 56 Myr are combined into one and made available in the online supplemental material.
While the age probability distribution derived from the SFH is the most complete constraint on the progenitor age, we also provide a single progenitor mass estimate with uncertainties. This age simplifies the mass inference, as well as comparisons with other measurements and mass distribution analysis. To derive the most likely progenitor age, we use the SFHs and uncertainties produced by MATCH to calculate the median age of the stellar populations younger than 56 Myr surrounding each SNR. We then take that age as the most likely progenitor age. We determine the uncertainties on the median age as follows. We recalculate the median age a million times by accounting for the uncertainties and resampling the SF rates in each time bin, then determine the narrowest 68th percentile of this distribution of ages that contain the best fit. We use a 56 Myr cutoff for our SNR-centered SFHs, rather than the 50 Myr used by other works, because of the results of our Bayesian analysis presented in Section 4.1. A 56 Myr (\(\log(t/\text{yr})=7.75\)) cutoff is not possible for the grid SFHs since Lazzarini et al. (2022) ran MATCH with time bins of 0.1 dex. As a result, we must decide whether to use a 50 or 63 Myr cutoff (\(\log(t/\text{yr})=7.7\) or 7.8) for the grid SFH samples. We adopt a 50 Myr cutoff as this limits the number of contamination populations being included in our analysis. To infer the progenitor mass for each age bin, we assume that the SNR progenitor is the highest surviving mass on the PARSEC stellar isochrone (Bressan et al., 2012).
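The age and mass inference described above can be sketched compactly: weight each young age bin by the stellar mass it formed, take the mass-weighted median age, resample the rates within their uncertainties for the error bar, and map age to the most massive surviving star. The SFH values, their uncertainties, and the age-to-mass lookup table below are placeholders for the MATCH output and the PARSEC isochrones, not real measurements, and the central 68% is used as a simpler stand-in for the narrowest interval described above.

```python
# Sketch of the progenitor age/mass inference; all numbers are placeholders.
import numpy as np

rng = np.random.default_rng(4)

edges = 10 ** np.linspace(6.6, 7.75, 24)            # age-bin edges in yr (0.05 dex)
sfr = rng.uniform(0.0, 2e-4, edges.size - 1)        # placeholder SFR per bin (Msun/yr)
sfr_err = 0.5 * sfr                                 # placeholder uncertainties

def median_age(sfr, edges):
    """Mass-weighted median age of the populations younger than ~56 Myr."""
    mass = sfr * np.diff(edges)                     # stellar mass formed in each bin
    if mass.sum() <= 0:
        return np.nan                               # no young SF -> Type Ia candidate
    cdf = np.cumsum(mass) / mass.sum()
    mid = 0.5 * (edges[:-1] + edges[1:])
    return mid[min(np.searchsorted(cdf, 0.5), mid.size - 1)]

best_age = median_age(sfr, edges)

# Resample the rates within their uncertainties to estimate the age uncertainty
# (10^6 draws and the narrowest 68% interval in the paper; simplified here).
draws = np.array([median_age(np.clip(rng.normal(sfr, sfr_err), 0.0, None), edges)
                  for _ in range(10000)])
age_lo, age_hi = np.percentile(draws, [16, 84])

# Map age to the most massive surviving star (placeholder for PARSEC values).
iso_log_age = np.array([6.6, 7.0, 7.4, 7.75])       # log10(age/yr)
iso_max_mass = np.array([60.0, 20.0, 11.0, 7.1])    # Msun, illustrative only
progenitor_mass = np.interp(np.log10(best_age), iso_log_age, iso_max_mass)
```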
We present an example of our progenitor age fitting results for SNR LL14-160 in Figure 1. Similar summary plots are available for all of the SNR locations in the online supplemental material associated with the paper.
Past studies have shown that progenitor masses estimated from the SFHs produced by MATCH are consistent with estimates from other techniques (e.g., Jennings et al., 2012; Williams et al., 2019; Koplitz et al., 2021). DR18 found that their combined M31 and M33 distribution pointed to a minimum mass for CCSN progenitors of \(\sim\)7 \(M_{\odot}\), which corresponds to an age of \(\sim\)50 Myr assuming single star evolution. Populations older than this are more likely to be unrelated to the SNR since they have had more time to distance themselves from their parent cluster. Older stars in binaries have been shown to be possible SN progenitors (e.g., Xiao et al., 2019); however, our current inference from age to mass requires that we assume single star evolution. Fortunately, this assumption should not impact our age constraints, which come from the surrounding population, but it could significantly impact our conversions between age and progenitor mass if the progenitor system was a mass-exchanging binary.
## 3 Results
We present the progenitor mass results from our own custom SFHs. We then compare them to results obtained from previously published SFHs, as well as to control samples and to results from SNR studies of other nearby galaxies. These comparisons suggest that custom SFHs, with a contamination CMD included in the fit to account for the more widespread populations, are required to isolate the ages of the stars most likely associated with each SNR.
### Comparing Grid SFHs to SNR-Centered SFHs
We present our progenitor mass constraints for the SNRs in our catalog in Table 2 and compare the resulting age distributions in Figure 2, which reveals that the masses from Lazzarini et al. (2022) are systematically
lower than our custom measurements. For 42 of the 85 locations in our catalog (\(\sim\)49%) the best fit masses were not consistent with each other. KS tests between these samples returned a \(p-\)value of 0.11, suggesting we cannot rule out that they are from the same parent distribution.
To isolate the cause of the observed difference, we reran our custom fits without including a contamination component, which returned a distribution similar to the one from the Lazzarini et al. (2022) grid SFHs. Performing KS tests between the SNR-centered distributions returned a \(p-\)value of 0.09 while 0.27 was returned when comparing the grid distribution to the SNR-centered without a contamination CMD sample. Figure 3 is a histogram comparing the distribution of progenitor masses that resulted from using the grid SFHs as well as the SNR-centered SFHs with and without a contamination CMD. Each distribution is normalized such that they integrate to one. The overall distribution from the grid SFHs is similar to that of our centered SFHs without a contamination CMD, which is expected given that both SFHs were fit without a contamination CMD and the sample populations overlap.
Even though none of the distributions contain a progenitor that excludes masses \(<\)20 \(M_{\odot}\), the grid SFHs produced 10 locations consistent with being more massive than 20 \(M_{\odot}\) while the SNR-centered SFHs returned 15 with a contamination CMD and 9 without one. A similar fraction of locations with masses between \(7-15\) and \(15-25\)\(M_{\odot}\) were found in the grid and SNR-centered without a contamination component distributions (86%, 13% and 86%, 11% respectively). These show that the inclusion of the contamination CMD impacts the resulting distribution the most, though the high-precision custom location does play a large role.
### Type Ia Candidates
Of the 85 locations in our catalog, we classify 25 as Type Ia candidates. Zapartas et al. (2017) showed that binaries with ages up to 200 Myr can produce delayed CCSNe; however, these systems have had enough time to move a significant distance away from their parent cluster, making any SF we measure at ages older than \(\sim\)56 Myr likely to be contaminated by nearby populations that are not associated with the SN event. Thus, we classify any location without SF in the last \(\sim\)56 Myr as a Type Ia candidate, since our technique cannot reliably determine the progenitor age beyond this.
Including contamination CMDs in our SFH fits forces MATCH to only fit for SF above any background young stellar populations. While this requirement can be helpful in isolating populations more likely to be associated with an SNR, it can also cause some SNRs to be classified as Type Ia candidates when they are actually Type II or Type Ibc in origin, because their associated young population may be too similar to that of the larger surroundings. To check how many of our Type Ia candidates could actually be CCSNe, we can use our results from the Lazzarini et al. (2022) SFHs, which measured the total star formation in each location. The SNRs with mass estimates in column (9) of Table 2 but without a constraint in column (7) are less likely to be Type Ia in origin, as there are relatively high-mass populations nearby, just not above the background level. Of the 25 Type Ia candidates from our SNR-centered SFHs, only LL14-103's grid SFH contained no SF within the last 50 Myr, making it our best Type Ia candidate. The other 24 Type Ia candidates had young stellar populations present but not in sufficient quantities to be detected above the larger surroundings, making them weaker Type Ia candidates. These results suggest a Type Ia fraction between 1 \(-\) 29%. While this is not a tight constraint, it is consistent with the \(\sim\)15% expected for late-type spirals (Li et al., 2011).
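The bookkeeping that separates strong from weak Type Ia candidates can be summarised in a small helper; the inputs (total stellar mass formed in the last ~56 Myr from the SNR-centered fit and ~50 Myr from the grid fit), names, and the zero threshold are assumptions of this sketch.

```python
# Toy version of the Type Ia candidate bookkeeping: the SNR-centered fit (with
# a contamination CMD) decides whether a location is a Type Ia candidate, and
# the grid SFH then grades how strong that candidate is.
def classify_snr(centered_young_mass, grid_young_mass, threshold=0.0):
    if centered_young_mass > threshold:
        return "CCSN candidate with mass constraint"
    if grid_young_mass > threshold:
        return "weak Type Ia candidate"    # young stars present, but not above background
    return "strong Type Ia candidate"      # no young SF at all, e.g. LL14-103

print(classify_snr(0.0, 120.0))            # -> weak Type Ia candidate
```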
## 4 Discussion
Our progenitor age distributions probe the minimum mass at which CCSNe can occur, how SNe are spatially distributed in the disk of M33, and the power-law index of the progenitor mass distribution for the galaxy.
### Mass Limits for CCSNe
Using the Bayesian hierarchical analysis developed by DR18, we use our SNR age sample to provide a constraint on the maximum age at which stars undergo CCSNe, \(t_{\rm max}\). Our analysis was sensitive to the assumed minimum age for CCSNe, \(t_{\rm min}\). To account for this, we fit our distribution assuming \(t_{\rm min}\) values of 6, 9, 10, 12, 15, and 18 Myr, with each returning similar results. We report the \(t_{\rm min}=15\) Myr fit since this is the lowest \(t_{\rm min}\) value that stabilized the returned progenitor mass distribution slope, finding \(54.3^{+3.8}_{-2.0}\) Myr as the best fit \(t_{\rm max}\) which corresponds to a \(M_{\rm min}\) of \(7.1^{+0.1}_{-0.2}\)\(M_{\odot}\). Figure 4 shows the distribution of \(t_{\rm max}\) (left) and \(M_{\rm min}\) (right) returned by the Bayesian analysis for the fit with \(t_{\rm min}=15\) Myr.
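The DR18 machinery is a full hierarchical model (it also fits a power-law slope and a background term), so the toy grid likelihood below is only meant to illustrate how a set of noisy progenitor ages constrains \(t_{\rm max}\) when the true ages are assumed uniform between \(t_{\rm min}\) and \(t_{\rm max}\); it is not the DR18 analysis, and the ages and uncertainties are invented for illustration.

```python
# Deliberately simplified stand-in for the t_max constraint: assume true ages
# are uniform between t_min and t_max and measured with Gaussian errors, then
# evaluate the likelihood of t_max on a grid. This is NOT the DR18 model.
import numpy as np
from scipy.stats import norm

ages = np.array([12.0, 18.0, 25.0, 33.0, 41.0, 52.0])    # Myr (illustrative)
sigmas = np.array([3.0, 4.0, 5.0, 6.0, 6.0, 5.0])         # Myr (illustrative)
t_min = 15.0                                               # Myr, as adopted above

t_max_grid = np.linspace(40.0, 80.0, 401)
log_like = np.empty_like(t_max_grid)
for j, t_max in enumerate(t_max_grid):
    # P(measured age | t_max) = Uniform(t_min, t_max) convolved with the error
    p = (norm.cdf(t_max, ages, sigmas) - norm.cdf(t_min, ages, sigmas)) / (t_max - t_min)
    log_like[j] = np.sum(np.log(np.clip(p, 1e-300, None)))

best_t_max = t_max_grid[np.argmax(log_like)]
```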
The analysis also attempts to constrain the upper mass limit for CCSNe and the progenitor mass distribution index when fitting a distribution. Our high-mass progenitors, however, have large error bars that prevented the analysis from converging on a rigorous best fit value for the upper mass limit. Since the upper mass is degenerate with the distribution index, it also did not return a reliable index. We estimate the progenitor mass
distribution index in Section 4.3 using an alternate technique, since it may be of interest to the community.
### Spatial Distribution of Progenitor Masses
To investigate the spatial distribution of SNRs in M33, we plot the locations of our catalog on an H\(\alpha\) image taken with the WIYN 0.9m telescope (Figure 5). The progenitor mass and most likely SNe type are indicated by the color and symbol, respectively. Locations for which we have mass constraints in column (7) of Table 2, i.e., we were able to measure SF within the last 56 Myr above the background level, are shown as circles. Progenitors with masses \(<\)9 \(M_{\odot}\) are white, masses of \(9-12\)\(M_{\odot}\) are red, masses of \(12-15M_{\odot}\) are orange, masses of \(15-20\)\(M_{\odot}\) are yellow, and masses \(>\)20 \(M_{\odot}\) are blue. Our Type Ia SNe candidates are shown as squares, where the color indicates the best fit progenitor mass from the grid SFH that the SNR resides in from Lazzarini et al. (2022). The colors show the mass that could have produced a CCSNe at the location, though these are less likely to be CCSNe than the colored circles due to the lower level of young SF. The coloring depicts the same mass ranges as the circles, with the addition of black indicating the location of LL14-103, our best Type Ia candidate.
Our entire catalog mostly traces the H\(\alpha\) emission and spiral arms of M33. There are many squares (Type Ia candidates) inside star forming regions throughout the galaxy, indicating that young populations are present, just not enough to be detected in fits that include a contamination component. In these cases, fitting without a contamination CMD (i.e., fitting the full population) often finds some massive stars in the region, whereas the fit including a contamination CMD finds no such populations.
### Progenitor Mass Distribution Power-Law Index
While the Bayesian hierarchical analysis of DR18 was not able to converge on a power-law index for the progenitor distribution due to the uncertainties at very young ages, it may still be of interest to determine the closest power-law representation of our most likely progenitor masses. To determine this value, we use KS tests to determine the likelihood that the locations in our catalog with young populations (\(<\)56 Myr) are drawn from various power-law distributions. We compared the data to power-law indices between \(-\)6.0 and 0.0, in steps of 0.1, and report the most likely index. To estimate the uncertainties on the index, we employ a bootstrap analysis in which we sample the uncertainties on each mass 1000 times. We then find the indices that return \(p-\)values \(\geq\)0.05 (\(\sim\)95% confidence) and report the extremes as our limits.
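The index search can be sketched as follows; the mass limits of the reference power law and the input masses are placeholders for illustration, not the values used in the paper.

```python
# Sketch of the power-law index search: for trial indices between -6.0 and 0.0
# (step 0.1), draw a reference sample from dN/dM ~ M^alpha and keep the index
# with the highest two-sample KS p-value; bootstrap the measured masses within
# their errors for the uncertainty.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)

def sample_power_law(alpha, m_lo, m_hi, size):
    """Inverse-transform sampling of dN/dM proportional to M^alpha on [m_lo, m_hi]."""
    u = rng.random(size)
    a1 = alpha + 1.0
    if abs(a1) < 1e-9:                                    # alpha = -1 limit
        return m_lo * (m_hi / m_lo) ** u
    return (u * (m_hi ** a1 - m_lo ** a1) + m_lo ** a1) ** (1.0 / a1)

def best_index(masses, indices=np.arange(-6.0, 0.01, 0.1), n_ref=5000):
    pvals = [ks_2samp(masses, sample_power_law(a, 7.1, 120.0, n_ref)).pvalue
             for a in indices]
    return float(indices[int(np.argmax(pvals))]), max(pvals)

masses = rng.uniform(7.5, 30.0, 60)                       # placeholder masses (Msun)
mass_err = 0.15 * masses                                  # placeholder uncertainties
alpha_best, p_best = best_index(masses)

# Bootstrap (1000 resamples in the paper; fewer here to keep the sketch quick).
boot = [best_index(np.clip(rng.normal(masses, mass_err), 7.1, None))[0]
        for _ in range(100)]
```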
Performing this analysis on our full catalog of SNRs indicates the progenitor mass distribution is best matched by a power-law with an index of \(-2.9^{+1.2}_{-1.0}\), which does contain the Salpeter index of \(-2.35\)(Salpeter, 1955). The best fit index has a \(p-\)value of 0.23. Running the progenitor mass distribution from the grid SFHs through this same analysis found that the sample was best matched by an index of \(-3.8^{+1.8}_{-0.4}\), significantly steeper than our SNR-centered catalog though still consistent. This is not surprising given that, as discussed in Section 3.1, L10-043 was the only progenitor found to be more massive than 25 \(M_{\odot}\) in this sample.
Our power-law indices were estimated using only the locations that contained SF at ages younger than \(\sim\)56 Myr. To estimate the impact that removing locations without young SF has on our indices, we refit the progenitor mass distribution index of our SNR-centered sample while adding in the grid SFH progenitor mass for locations that did not contain young SF in our custom fit. This combined sample was best fit by a \(-3.2^{+1.3}_{-0.6}\) index, which is consistent with both the SNR-centered and grid SFH indices. This indicates that removing locations without young SF does not have a large impact on the returned progenitor mass distribution index.
Figure 6 shows our ranked progenitor mass distribution. The red points indicate the progenitor mass of each SNR with a constraint in column (7) of Table 2, with uncertainties shown as red lines. Overplotted as gray lines are 50 draws from a power-law distribution with an index of \(-2.9^{+1.2}_{-1.0}\), our best fit index.
### Control Sample
As mentioned in Section 2.1, we also performed our analysis on control samples, random locations that did not contain SNRs. We compared our SNR results to these control results to determine if the SNRs are indeed affecting our results. We discuss both the control sample for the mass estimates based on custom SFH measurements and the control samples for mass estimates based on Lazzarini et al. (2022) SFH measurements below.
Our first control sample, containing randomly drawn locations in the PHATTER footprint, returned fewer progenitors with masses \(>\)20 \(M_{\odot}\) (5 in the control sample and 9 in our catalog). There were also significantly more (33, \(\sim\)39% of the sample) Type Ia candidates (i.e., locations with no significant recent SF above what is present in the contamination CMD) than our SNR-centered distribution with a contamination CMD (25, \(\sim\)30% of the sample). Both of these suggest that the regions in this control sample contained, on average, older populations than those found near SNRs. The locations in this sample that contained SF at ages
younger than 56 Myr were best fit by a power-law index of \(-4.9^{+3.2}_{-0.2}\), which is consistent with our contamination CMD and the grid distributions. While the uncertainties do overlap, this can likely be attributed to the amount of widespread SF within the PHATTER footprint of M33. Comparing this random sample to our SNR-centered sample returned a \(p-\)value of 0.08, suggesting these samples are only marginally consistent with being drawn from the same parent distribution.
Of the 2500 random draws in the grid control sample, \(\sim\)70% were consistent with the grid power-law index, with the remaining \(\sim\)30% resulting in steeper indices. We found the median index to be \(-4.1\pm 0.5\), which includes the grid SFH sample index of \(-3.8\) but excludes the SNR-centered index of \(-2.9\), though the uncertainties do overlap. Of the 2500 draws, 1492 (\(\sim\)60%) resulted in \(p-\)values \(\leq\)0.05 when compared to our grid sample. No \(p-\)values \(\geq\)0.05 were found when compared to the SNR-centered sample. Additionally, our grid sample only contained 1 location without recent SF (LL14-103) whereas only 3 of the grid control draws contained as many or fewer such locations, meaning that \(>\)99% of random draws had more locations without young stars present than we find in locations containing SNRs. These results show that the grid SFHs that contain SNRs do differ from those that lack an SNR, with the similar power-law indices likely being explained, again, by the amount of widespread SF in M33.
### Comparison to J14 and DR18
J14's catalog contained 33 SNRs in M33, of which 28 are in the PHATTER footprint. Our SNR-centered progenitor mass estimates were consistent with those found by J14 for 16 of these 28 sources. Of the 12 sources that were not consistent between our estimates and those in J14, we identify 8 Type Ia candidates. J14 found their full distribution of 33 SNRs was best fit by an index of \(-3.8^{+0.5}_{-0.4}\). Using our SNR-centered SFHs of the 28 overlapping locations, our analysis pointed to the distribution being well matched by a power-law index of \(-3.1^{+1.2}_{-1.1}\), which is flatter than what J14 found. Their steep index could be from the few progenitors with masses \(>\)20 \(M_{\odot}\) in their sample, which can likely be partially attributed to the low number of SNRs in their sample. Poisson fluctuations for small numbers can randomly vary to zero quite easily. Additionally, J14 did not include a contamination CMD when fitting, which may have biased their estimates toward older, less massive populations and reduced their number of Type Ia candidates. We have shown in Section 3.1 that CMD-based age dating returns more low-mass progenitors when no background CMD is included in the fitting. DR18 constrained the minimum mass for CCSNe to be \(7.32^{+0.12}_{-0.14}\)\(M_{\odot}\) using the J14 measurements, which is consistent with the minimum mass we identified, suggesting that this cutoff may be the most reliable parameter returned from the SNR sample.
### Comparison to Other Galaxies
In addition to the SNRs in M33, J14 also constrained the progenitor mass distribution for 82 SNRs in M31 using photometry from the PHAT survey. Their KS test analysis found that this distribution was best matched by a power-law index of \(-4.4^{+0.4}_{-0.4}\), which is not consistent with our distribution. Interestingly, one major difference between J14 and DR18 is that DR18 allow for a uniform background distribution when fitting for the progenitor mass distribution parameters. This should have a similar effect to the use of a contamination CMD during the fitting process in that it accounts for the possibility that some of the measured SF may not be associated with the SN. With the inclusion of the uniform background distribution, DR18 constrained the distribution index to be \(-2.35^{+0.36}_{-0.48}\) and found the minimum mass for CCSNe to be \(6.5^{+0.6}_{-0.2}\)\(M_{\odot}\) using 62 SNRs in M31. Both of these measurements are consistent with what we have found in M33.
Katsuda et al. (2018) gathered the progenitor masses for 40 SNRs in the Milky Way and both Magellanic Clouds from the literature, which were estimated using chemical abundances. They updated many of the measurements using Fe:Si ratios and found a progenitor mass distribution consistent with both a Salpeter index and our measured index for M33.
Auchettl et al. (2019) examined 23 SNRs in the Small Magellanic Cloud, finding that 22 were likely core collapse in origin. They report the likelihood that the mass of each progenitor is between \(8-12.5\), \(12.5-21.5\), and \(>\)21.5 \(M_{\odot}\) assuming single and binary evolution. Regardless of single or binary evolution, 70% of their progenitors had the highest likelihood in the most massive bin, whereas 9% of our SNRs were found to have progenitor masses in this range. The large number of high-mass progenitors led to their distribution being well matched by a power-law index of \(-1.84\), though the uncertainties are consistent with a Salpeter index. A possible reason given by Auchettl et al. (2019) for the top-heavy distribution is that the Small Magellanic Cloud has a lower metallicity than M33. It has been shown that lower metallicity gas is more likely to produce a top-heavy stellar distribution (e.g., Bromm & Larson, 2004; Marks et al., 2012).
Williams et al. (2019) constrained the progenitor mass of 199 SNRs in M83 using our technique. They found
that the progenitor mass distribution was well matched by a power-law index of \(-2.9^{+0.2}_{-0.7}\). A KS test between their M83 distribution and ours resulted in a \(p-\)value of 0.04, suggesting their parent distributions may differ, possibly due to the higher star formation intensity or lower metallicity of M33.
Koplitz et al. (2021) measured the progenitor mass of 169 SNRs, 8 historically observed SNe, and NGC6946-BH1, the first black hole formation candidate, in the galaxy NGC 6946 using our technique. They found that gas emission impacted their broad \(V\)-band photometry, which biased some of their mass estimates. As a result, they only included the 46 sources that were least likely to be biased when constraining the progenitor mass distribution index. In this sample, they found 24% of progenitors with masses \(\geq\)20 \(M_{\odot}\), while 11% of the progenitors in our catalog have similar masses. They found their distribution was best fit by an index of \(-2.6^{+0.5}_{-0.6}\), which is consistent with our measured index. KS tests between their preferred sample and our distribution resulted in a \(p-\)value of 0.2.
Figure 7 compares our distribution of progenitor masses to those in M83 (Williams et al., 2019) and the preferred sample in NGC 6946 (Koplitz et al., 2021). We normalize each individually so that they integrate to one. Each is dominated by the low-mass progenitors and those less massive than 25 \(M_{\odot}\) have similar overall shapes. These led to power-law indices that are consistent with each other. Our distribution is the only one that lacks any progenitors with mass \(\geq\)40 \(M_{\odot}\). However, this could be the result of the small number of high-mass progenitors expected combined with the smaller number of SNRs in our sample.
## 5 Summary
We constrained the progenitor age and mass of 60 SNRs in the nearby galaxy M33, or the Triangulum Galaxy, by age-dating the stellar populations near the SNRs. The remaining 25 showed no local SF within the past 56 Myr, making them potential Type Ia candidates. While it is possible that these candidates are binary systems producing delayed CCSNe with ages up to 200 Myr, our analysis is not able to reliably determine the progenitor age beyond \(\sim\)56 Myr.
Using the Bayesian hierarchical analysis developed by DR18, we constrained the maximum age for CCSNe to be \(54.3^{+3.8}_{-2.0}\) Myr which, assuming single star evolution, corresponds to a minimum mass of \(7.1^{+0.1}_{-0.2}\)\(M_{\odot}\). A KS test analysis determined that the progenitor mass distribution of our full catalog was best matched by a power-law distribution with an index of \(-2.9^{+1.2}_{-1.0}\), which includes the Salpeter index of \(-2.35\). Our distribution is well populated by progenitors with masses \(9-40\)\(M_{\odot}\).
When using grid SFHs from Lazzarini et al. (2022), rather than SNR-centered regions with a contamination CMD included, the inferred progenitor mass was biased to lower values, with only 1 progenitor more massive than 25 \(M_{\odot}\). There were also fewer Type Ia candidates when using the grid SFHs, 1 from the grid SFHs and 25 from the SNR-centered SFHs. Additionally, the progenitor mass distribution index that came from the grid SFHs was steeper than the index from the SNR-centered SFHs while KS tests between these samples returned a 0.03 \(p-\)value. Without a contamination CMD, our custom SFHs returned a similar distribution to the grid, finding a \(p-\)value of 0.14 between the two samples. The grid results differ from a random distribution of grid cells that do not contain an SNR, though not as strongly as our SNR-centered sample. The stronger difference from the overall background suggests that the custom SFHs with a contamination CMD provide a more robust constraint on the age and mass of SNRs than SFHs measured in grids, where the background populations are not taken into account and the SNR may be anywhere in the grid cell.
Previously, J14 used archival HST images to constrain the age and mass of 33 SNRs in M33. We present new age and mass estimates for 28 of these using the deep, uniform photometry from the PHATTER survey. Performing KS test analysis on the SNRs with updated mass estimates pointed to the distribution being well matched by a power-law index of \(-3.1^{+1.2}_{-1.1}\), which is consistent with the index J14 found for their full catalog and our sample of 85 SNRs.
Our normalized progenitor mass distribution is similar to that of M83 (Williams et al., 2019) and NGC 6946 (Koplitz et al., 2021). All the distributions are dominated by low-mass progenitors and have best fit power-law indices that are consistent with one another. KS tests between our sample and the preferred sample in NGC 6946 resulted in a \(p-\)value of 0.20, suggesting that these are likely drawn from the same parent distribution. A \(p-\)value of 0.04 was returned when performing KS tests between our sample and the SNRs in M83. A \(p-\)value just below 0.05 and the matching power-law indices mean we cannot rule out that the samples are drawn from the same parent distribution.
Each of our distributions shows a sharp drop in the number of progenitors at \(\sim\)20 \(M_{\odot}\). Few progenitors are found more massive than this, which coincides with the upper limits found by Smartt (2015) and Davies and Beasor (2020) for Type II SNe. It is possible that the reason we do not see many high mass progenitors is because not
all of them experience a canonical CCSN, with some instead collapsing directly into a black hole (Pejcha and Thompson, 2015). Now that the JWST has launched, similar studies will be able to leverage red supergiants to constrain the age of SN progenitors with higher precision and may resolve the "red supergiant problem".
Support for this work was provided by NASA through grants GO-14610 and GO-15216 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
|
2307.12584
|
Identifying the discs, bulges, and intra-halo light of simulated
galaxies through structural decomposition
|
We perform a structural decomposition of galaxies identified in three
cosmological hydrodynamical simulations by applying Gaussian Mixture Models
(GMMs) to the kinematics of their stellar particles. We study the resulting
disc, bulge, and intra-halo light (IHL) components of galaxies whose host dark
matter haloes have virial masses in the range $M_{200}=10^{11}$--
$10^{15}\,{\rm M_\odot}$. Our decomposition technique isolates galactic discs
whose mass fractions, $f_{\rm disc}$, correlate strongly with common
alternative morphology indicators; for example, $f_{\rm disc}$ is approximately
equal to $\kappa_{{\rm co}}$, the fraction of stellar kinetic energy in
co-rotation. The primary aim of our study, however, is to characterise the IHL
of galaxies in a consistent manner and over a broad mass range, and to analyse
its properties from the scale of galactic stellar haloes up to the
intra-cluster light. Our results imply that the IHL fraction, $f_{\rm IHL}$,
has appreciable scatter and is strongly correlated with galaxy morphology: at
fixed stellar mass, the IHL of disc galaxies is typically older and less
massive than that of spheroids. Above $M_{200}\approx 10^{13}\,{\rm M_\odot}$,
we find, on average, $f_{\rm IHL}\approx 0.45$, albeit with considerable
scatter. The transition radius beyond which the IHL dominates the stellar mass
of a galaxy is roughly $30\,{\rm kpc}$ for $M_{200}\lesssim 10^{12.8}\,{\rm
M_\odot}$, but increases strongly towards higher masses. However, we find that
no alternative IHL definitions -- whether based on the ex-situ stellar mass, or
the stellar mass outside a spherical aperture -- reproduce our
dynamically-defined IHL masses.
|
Katy L. Proctor, Claudia del P. Lagos, Aaron D. Ludlow, Aaron S. G. Robotham
|
2023-07-24T07:55:41Z
|
http://arxiv.org/abs/2307.12584v2
|
# Identifying the disc, bulge, and intra-halo light of simulated galaxies through structural decomposition
###### Abstract
We perform a structural decomposition of galaxies identified in three cosmological hydrodynamical simulations by applying Gaussian Mixture Models (GMMs) to the kinematics of their stellar particles. We study the resulting disc, bulge, and intra-halo light (IHL) components of galaxies whose host dark matter haloes have virial masses in the range \(M_{200}=10^{11}\)- \(10^{15}\,\mathrm{M}_{\odot}\). Our decomposition technique isolates galactic discs whose mass fractions, \(f_{\mathrm{disc}}\), correlate strongly with common alternative morphology indicators; for example, \(f_{\mathrm{disc}}\) is approximately equal to \(\kappa_{\mathrm{co}}\), the fraction of stellar kinetic energy in co-rotation. The primary aim of our study, however, is to characterise the IHL of galaxies in a consistent manner and over a broad mass range, and to analyse its properties from the scale of galactic stellar haloes up to the intra-cluster light. Our results imply that the IHL fraction, \(f_{\mathrm{IHL}}\), has appreciable scatter and is strongly correlated with galaxy morphology; at fixed stellar mass, the IHL of disc galaxies is typically older and less massive than that of spheroids. Above \(M_{200}\approx 10^{13}\,\mathrm{M}_{\odot}\), we find, on average, \(f_{\mathrm{IHL}}\approx 0.45\), albeit with considerable scatter. The transition radius beyond which the IHL dominates the stellar mass of a galaxy is roughly \(30\,\mathrm{kpc}\) for \(M_{200}\lesssim 10^{12.8}\,\mathrm{M}_{\odot}\), but increases strongly towards higher masses. However, we find that no alternative IHL definitions - whether based on the ex-situ stellar fraction, or the stellar mass outside a spherical aperture - reproduce our dynamically-defined IHL fractions.
keywords: galaxies: evolution - galaxies: kinematics and dynamics - galaxies:stellar content - galaxies: structure - methods: numerical
## 1 Introduction
In the '\(\Lambda\)-cold dark matter' (\(\Lambda\)CDM) cosmological model, structure forms in a hierarchical manner. Galaxies are expected to have undergone numerous accretion events over their lifetime, building up their stellar mass at least in part through mergers with lower-mass galaxies. In principle, these accretion events should leave an observable imprint on present-day galaxies in the form of an extended stellar component: the intra-halo light (IHL; see e.g. Purcell et al., 2007). The ubiquity of merger events in the \(\Lambda\)CDM model implies that the IHL should be present, to some extent, in most galaxies (Bullock and Johnston, 2005). Indeed, the IHL has been observed across a broad range of galaxy masses; it is typically referred to as a stellar halo at the galactic scale (see Helmi, 2008, for a review of the Milky Way's (MW's) stellar halo), and as intra-group (cluster) light at the galaxy group (cluster) scale (see Contini, 2021; Montes and Trujillo, 2022, for reviews).
Our understanding of the mass fraction, spatial distribution and stellar populations of the IHL, as well as how they vary with host halo mass, is limited, partly due to difficulties defining the IHL unambiguously (see e.g. Sanderson et al., 2018, and references therein). Nonetheless, there is mounting evidence hinting at a connection between present-day IHL properties and the formation of its host galaxy and dark matter (DM) halo. In the case of the MW, observations of halo stars (e.g., by the Gaia Collaboration et al., 2021) have revolutionised our understanding of the Galaxy's formation history through the discovery of distinct substructures in 6D phase space (e.g., the Gaia Enceladus/Sausage; Belokurov et al., 2018; Helmi et al., 2018) or distinct chemodynamical sub-populations (e.g. Kruijssen et al., 2019; Horta et al., 2020; Buder et al., 2022), which are the likely remnants of destroyed satellites that merged with the MW early in its formation. At the galaxy cluster scale, Deason et al. (2021) showed that the stellar density profiles of simulated clusters exhibit a well-defined edge, coincident with the 'splashback' radius of the underlying DM halo (e.g. Adhikari et al., 2014; Diemer, 2017). This feature has since been detected observationally (Gonzalez et al., 2021), indicating that the IHL may provide an observable probe of the underlying DM distribution.
Despite recent observational progress, accurately characterising the IHL for a representative sample of extragalactic galaxies remains challenging. Surveys based on resolved stellar populations provide a wealth of information on halo stars, but these measurements can only be obtained for nearby systems (e.g. Barker et al., 2009; Ibata et al., 2013; Harmsen et al., 2017). Deep integrated light surveys provide an alternative approach to studying the IHL of extragalactic systems, where the surface brightness profiles of galaxies can be decomposed based on assumptions about the analytic form appropriate for specific galactic components. Merritt et al. (2016) measured the IHL
of 8 nearby disc galaxies by decomposing their stellar mass surface density profiles, finding an RMS scatter in the IHL mass fraction of approximately 1 dex, indicating that the IHL fraction varies significantly, even for galaxies with similar morphologies and stellar masses. The IHL fractions of galaxy groups and clusters at \(z\approx 0\) also exhibit significant scatter, with measured fractions ranging from a few per cent to \(\approx 50\) per cent (e.g. Montes, 2022). It is unclear how much of this scatter is inherently physical in origin (due to, for example, stochastic variations in formation histories, e.g. Fattaini et al., 2020; Rey & Starkenburg, 2022), and how much is a result of the inconsistent methodologies adopted by different observational studies (see, e.g., Kluge et al., 2021).
The outlook for characterising the IHL consistently across different environments is perhaps more promising in cosmological, hydrodynamical simulations, in which the various structural components of galaxies can in principle be identified from the kinematics of their stellar particles. In particular, understanding the formation mechanisms of the IHL and how they vary as a function of mass can be addressed with large volume cosmological simulations, where galaxies that form in a broad range of environments can be studied and tracked over time (e.g. Schaye et al., 2015; Pillepich et al., 2018; Dave et al., 2019).
Canas et al. (2020) introduced an adaptive phase-space algorithm, specifically designed to distinguish kinematically hot stellar particles (which they identified with the IHL) from centrally concentrated stellar structures and applied it to galaxies identified in the Horizon-AGN simulation (Dubois et al., 2016). This allowed them to study the mass-dependence of the IHL fractions of a diverse population of galaxies, which revealed that the scatter in IHL masses is correlated with kinematic galaxy morphology. While their method was effective in identifying the IHL of galaxies in diverse environments, their results depend on a free parameter that had to be calibrated to a small number of uncertain observations.
Unsupervised machine learning techniques provide a viable alternative for identifying kinematically distinct structures within simulated galaxies (e.g. Domenech-Moral et al., 2012). Obreja et al. (2016) showed that Gaussian Mixture Models (GMMs) can be applied to the stellar components of galaxies to identify discs, bulges, and stellar haloes, whose properties resemble those of observed systems. Automating structural decomposition for a diverse galaxy sample is, however, non-trivial. As a result, studies are typically limited to small samples of galaxies, for which the clusters of stars identified by the GMMs can be manually assigned to different galactic components (Obreja et al., 2018, 2019), or they are limited to galaxies within a narrow range of mass or morphology (e.g. Ortega-Martinez et al., 2022, though see Du et al., 2019).
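As a concrete illustration of the GMM-based approach discussed above, the sketch below applies scikit-learn's GaussianMixture to two mock kinematic features (an orbital-circularity proxy and a normalised binding energy) and maps the resulting Gaussian components onto a disc, bulge, and IHL with a simple heuristic; both the feature choice and the mapping rule are assumptions of this sketch, not the scheme adopted in this paper (defined in Section 3).

```python
# Illustrative GMM decomposition with scikit-learn; features, mock data, and
# the component-to-structure mapping are assumptions for this sketch.
import numpy as np
from sklearn.mixture import GaussianMixture

def decompose(circularity, binding_energy, n_components=3, seed=0):
    X = np.column_stack([circularity, binding_energy])
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(X)
    labels = gmm.predict(X)
    # Heuristic mapping: most co-rotating component -> disc; most bound of the
    # remaining components -> bulge; everything else -> IHL.
    disc = int(np.argmax(gmm.means_[:, 0]))
    others = [k for k in range(n_components) if k != disc]
    bulge = min(others, key=lambda k: gmm.means_[k, 1])
    names = {disc: "disc", bulge: "bulge"}
    return np.array([names.get(k, "IHL") for k in labels])

# Mock particle data: a cold disc, a tightly bound bulge, and a hot halo.
rng = np.random.default_rng(6)
circ = np.concatenate([rng.normal(0.95, 0.10, 3000),
                       rng.normal(0.00, 0.30, 1500),
                       rng.normal(0.00, 0.50, 500)])
ebind = np.concatenate([rng.normal(-0.70, 0.05, 3000),
                        rng.normal(-0.85, 0.05, 1500),
                        rng.normal(-0.30, 0.15, 500)])
components = decompose(circ, ebind)
f_disc = float(np.mean(components == "disc"))
```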
In this work, we use GMMs to characterise the disc, bulge and IHL components of simulated galaxies across a wide range of halo masses. We apply our methodology to galaxies from three simulations based on the Eagle model of galaxy formation, allowing us to study how the mass fraction, stellar populations and structure of these components vary from the galactic scale up to the scale of massive galaxy clusters. The remainder of this paper is organised as follows. In Section 2, we describe the Eagle simulations and introduce the kinematic quantities used in our structural decomposition. In Section 3, we introduce our decomposition technique and compare our results to alternative techniques. In Section 4, we analyse the stellar populations and structural properties of the disc, bulge, and IHL components; and in Section 5 we summarise our results.
## 2 Simulations and Analysis
### The Eagle simulations
The Eagle Project (Schaye et al., 2015; Crain et al., 2015) is a suite of cosmological, smoothed particle hydrodynamical (SPH) simulations that model the formation and evolution of galaxies in a \(\Lambda\)CDM universe using cosmological parameters consistent with the Planck Collaboration et al. (2014) results. The majority of our analysis is based on the L\({}_{\rm box}=100\) cubic Mpc "Reference" run of the Eagle Project (i.e. Ref-L0100N1504; see Table 2 of Schaye et al., 2015), which follows structure formation using \(\rm{N_{DM}}=1504^{3}\) equal-mass dark matter (DM) particles and initially the same number of baryonic particles. The mass of DM particles is \(m_{\rm DM}=9.70\times 10^{6}\) M\({}_{\odot}\), and \(m_{\rm gas}=1.81\times 10^{6}\) M\({}_{\odot}\) is the initial baryonic particle mass.
Initial conditions were evolved to \(z=0\) using an updated version of Gadget-3 (Springel, 2005; Schaye et al., 2015) that includes subgrid models for, among other processes, radiative cooling and photoheating (Wiersma et al., 2009), star formation and stellar feedback (Schaye & Dalla Vecchia, 2008; Dalla Vecchia & Schaye, 2012), the growth of supermassive black holes (BHs) through mergers and accretion, and feedback from active galactic nuclei (AGN; Rosas-Guevara et al., 2015). The subgrid model parameters were calibrated so that Eagle reproduced \(z\approx 0\) observations of the galaxy stellar mass function, the stellar size-mass relation, and the black hole mass-stellar mass relation (see Crain et al., 2015, for details). Subsequent work has shown that Eagle also reproduces observations of galaxies at \(z>0\), such as their angular momenta (Lagos et al., 2017), sizes (Furlong et al., 2017), velocity dispersions, and rotational velocities (van de Sande et al., 2019), highlighting that galaxy structure and kinematics are reproduced well by Eagle.
We supplement results from Ref-L0100N1504 at high and low halo masses using data from two other simulations. One is the Cluster-Eagle project (C-Eagle; Bahe et al., 2017; Barnes et al., 2017), which is a suite of 30 resimulations of cluster-mass haloes; the other is a L\({}_{\rm box}=50\) Mpc Eagle volume simulated using higher resolution in the DM component while maintaining the original baryonic particle mass and force resolution that was used for Eagle (see Ludlow et al., 2023, for details). We use the latter run, which is referred to in our paper as 50-HiResDM, to test the sensitivity of our results to the spurious collisional heating of stellar particles by DM halo particles (see, e.g., Ludlow et al., 2019; Wilkinson et al., 2023). 50-HiResDM employed the same subgrid models, as well as the same numerical and subgrid parameters as Eagle, but its DM particle mass is \(m_{\rm DM}=1.39\times 10^{6}\) M\({}_{\odot}\), i.e. a factor of 7 lower than the value used for Ref-L0100N1504. The C-Eagle project used the same subgrid and numerical set-up as Eagle, but adopted different parameters for the AGN feedback subgrid model to achieve better agreement with the observed gas content of galaxy clusters (see Bahe et al., 2017 and Barnes et al., 2017 for details).
### Identifying DM haloes and galaxies
DM haloes, their substructure haloes and associated galaxies were identified using Subfind (Springel et al., 2001; Dolag et al., 2009). Haloes were first identified using a Friends-of-Friends algorithm (FoF; Davis et al., 1985), which links nearby DM particles into FoF groups. Baryonic particles were assigned to the same group as their nearest DM particle, provided it belonged to one. Each FoF halo was then divided into self-bound substructures, or "subhaloes" for short. One subhalo typically dominates the total mass of the FoF halo - we refer to this as the "central" subhalo; lower-mass subhaloes we refer
to as "satellite" subhaloes. The baryonic particles associated with central and satellite subhaloes are referred to as central and satellite galaxies, respectively. The stellar mass of a galaxy, \(M_{\bullet}\), is defined as the total mass of all stellar particles gravitationally bound to a subhalo.
We restrict our analysis to stellar particles associated with central galaxies identified at \(z=0\), but exclude those bound to satellite galaxies. We note that this choice may be questionable for systems with large numbers of satellite galaxies or for those undergoing mergers; in these cases, distinguishing stellar particles that are bound to a central galaxy from those bound to its satellites is challenging. However, we find that MW-mass central galaxies in Eagle typically contribute 97 per cent of the total stellar mass associated with their FoF haloes, with satellites contributing \(\lesssim 3\) per cent. At higher halo masses, however, where mergers and substructure are more prevalent, the contribution of satellite galaxies to the total stellar mass of FoF haloes can be quite large, sometimes reaching as high as \(\approx 40\) per cent for halo masses \(\gtrsim 10^{13}\,\mathrm{M}_{\odot}\). Satellite galaxies, however, typically dominate the stellar mass budget at larger galacto-centric radii than that which encompasses the majority of the central's stellar material. For that reason, we neglect the possible contribution of satellites to the IHL of galaxies.
We henceforth quantify halo masses using \(M_{200}\), i.e. the total mass (DM plus baryonic) within the spherical radius \(r_{200}\) that encloses a mean density of \(200\times\rho_{\mathrm{crit}}(z)\), where \(\rho_{\mathrm{crit}}(z)=3\,H(z)^{2}/8\pi G\) is the critical density (\(H(z)\) is the Hubble parameter and \(G\) is the gravitational constant). The centres of haloes and galaxies are defined as the location of their DM particle with the lowest gravitational potential energy.
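For concreteness, the sketch below shows one simple way to measure \(M_{200}\) and \(r_{200}\) from particle data using the definition above; the function name, argument units, and the cumulative-mass approach are our own assumptions rather than the exact pipeline used in this work.

```python
import numpy as np

def m200_r200(radii, masses, rho_crit):
    """Estimate M200 and r200 from all particles (DM plus baryons) of a halo.

    radii    : array of particle distances from the halo centre [Mpc]
    masses   : array of particle masses [Msun]
    rho_crit : critical density of the Universe [Msun / Mpc^3]
    """
    order = np.argsort(radii)
    r = radii[order]
    m_enc = np.cumsum(masses[order])
    # mean enclosed density at each particle radius
    rho_mean = m_enc / (4.0 / 3.0 * np.pi * r**3)
    # outermost radius where the mean enclosed density still exceeds 200 rho_crit
    inside = np.where(rho_mean >= 200.0 * rho_crit)[0]
    if len(inside) == 0:
        return np.nan, np.nan
    i = inside[-1]
    return m_enc[i], r[i]
```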
### Kinematic quantities used for structural decomposition
We begin by repositioning the stellar particles of each central galaxy relative to its halo centre. The velocity frame of the galaxy is at rest with respect to the centre of mass motion of the innermost 80 per cent of its stellar mass. All positions and velocities hereafter refer to these recentred quantities.
Galaxies are then oriented such that the \(z\)-axis aligns with their total stellar angular momentum vector, \(\bar{J}_{\star}\), which is calculated using all stellar particles between 2 and 30 kpc. The lower limit of 2 kpc is imposed to minimise contributions from stellar particles with disordered motions, or from kinematically decoupled cores1, while the upper limit minimises the contribution from particles that do not belong to the central disc or spheroidal component of the galaxy.
Footnote 1: Although these are rare in Eagle (Lagos et al., 2022), they can impact the measured stellar angular momentum significantly.
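The recentring and reorientation steps can be sketched as follows; the array layouts, the choice of rotation construction, and the function name are illustrative assumptions rather than the exact implementation used here.

```python
import numpy as np

def recentre_and_orient(pos, vel, mass, centre):
    """Recentre stellar particles and rotate so the z-axis lies along J_star.

    pos, vel : (N, 3) stellar positions [kpc] and velocities [km/s]
    mass     : (N,) stellar masses
    centre   : (3,) position of the halo's potential minimum [kpc]
    """
    pos = pos - centre
    # velocity frame: centre-of-mass motion of the innermost 80 per cent of stellar mass
    r = np.linalg.norm(pos, axis=1)
    order = np.argsort(r)
    cum = np.cumsum(mass[order])
    inner = order[cum <= 0.8 * mass.sum()]
    if inner.size == 0:                    # guard against very poorly sampled galaxies
        inner = order[:1]
    v_com = np.average(vel[inner], axis=0, weights=mass[inner])
    vel = vel - v_com

    # total stellar angular momentum from particles between 2 and 30 kpc
    sel = (r > 2.0) & (r < 30.0)
    J = np.sum(mass[sel, None] * np.cross(pos[sel], vel[sel]), axis=0)
    z = J / np.linalg.norm(J)
    # build an orthonormal basis with z along J and rotate into it
    x = np.cross([0.0, 0.0, 1.0], z)
    if np.linalg.norm(x) < 1e-8:           # J already along the original z-axis
        x = np.array([1.0, 0.0, 0.0])
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.vstack([x, y, z])               # rows are the new basis vectors
    return pos @ R.T, vel @ R.T
```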
We use the following quantities to decompose Eagle galaxies into distinct structural components, which may include a disc, a bulge, and IHL:
* \(j_{\mathrm{z}}/j_{\mathrm{circ}}\): The specific angular momentum in the \(z\)-direction relative to the specific angular momentum of a particle on a circular orbit with the same binding energy. This quantity was first introduced by Abadi et al. (2003) to isolate disc stars; it is commonly referred to as the orbital circularity parameter. Particles with \(j_{\mathrm{z}}/j_{\mathrm{circ}}\) values of 1 (-1) are on prograde (retrograde) circular orbits in the plane perpendicular to the net angular momentum vector (i.e., in the disc plane of a late type galaxy).
* \(j_{\mathrm{p}}/j_{\mathrm{circ}}\): The specific angular momentum in the plane parallel to \(\bar{J}_{\star}\) (i.e., perpendicular to the disc of a late type galaxy), normalised by \(j_{\mathrm{circ}}\) (note that in our coordinate system, \(j_{\mathrm{p}}^{2}=j_{\mathrm{x}}^{2}+j_{\mathrm{y}}^{2}\)). This quantity was first introduced by Domenech-Moral et al. (2012) to aid in identifying disc stars.
* \(e/e_{\mathrm{min}}\): The ratio of the specific binding energy of a particle to that of the most bound stellar particle in the galaxy.
For a given value of the specific binding energy, the maximum value of the specific angular momentum corresponds to that of a particle on a prograde circular orbit, which we refer to as \(j_{\mathrm{circ}}\)(Abadi et al., 2003). We estimate \(j_{\mathrm{circ}}\) numerically using the orbital information of stellar particles (see also Thob et al., 2019; Kumar et al., 2021). Particles are first divided into 150 equally-spaced bins of binding energy. Within each bin, the maximum value of the specific angular momentum of all particles, \(j_{\mathrm{max}}\), is taken to be the value of \(j_{\mathrm{circ}}\) for particles within that bin. Due to the finite mass resolution of our simulations and the diffuse nature of galaxy outskirts, this approach may be inaccurate at large galacto-centric distances, where bins naturally contain fewer stellar particles than those in the central regions of galaxies. However, we have verified that the results of our galaxy decomposition are insensitive to reasonable variations in the number of binding energy bins used to calculate \(j_{\mathrm{circ}}\). This is because \(j_{z}/j_{\mathrm{circ}}\) is mostly useful for identifying disc particles, which are centrally concentrated and located in regions of a halo that are well sampled by stellar particles.
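A minimal sketch of how these three quantities can be computed, including the binned estimate of \(j_{\rm circ}\), is given below; the variable names and the sign convention for the binding energy are our own assumptions.

```python
import numpy as np

def circularity(pos, vel, e_bind, nbins=150):
    """Compute j_z/j_circ, j_p/j_circ and e/e_min for stellar particles.

    pos, vel : (N, 3) positions and velocities in the recentred, J-aligned frame
    e_bind   : (N,) specific binding energies (assumed negative; most bound = minimum)
    """
    j = np.cross(pos, vel)                     # specific angular momentum vectors
    jz = j[:, 2]
    jp = np.sqrt(j[:, 0]**2 + j[:, 1]**2)
    jmag = np.linalg.norm(j, axis=1)

    # j_circ: maximum |j| in each of `nbins` equally spaced bins of binding energy
    edges = np.linspace(e_bind.min(), e_bind.max(), nbins + 1)
    ibin = np.clip(np.digitize(e_bind, edges) - 1, 0, nbins - 1)
    jcirc_bin = np.zeros(nbins)
    for b in range(nbins):
        in_bin = ibin == b
        if in_bin.any():
            jcirc_bin[b] = jmag[in_bin].max()
    jcirc = np.maximum(jcirc_bin[ibin], 1e-30)  # guard against pathological bins

    e_ratio = e_bind / e_bind.min()             # e/e_min, in (0, 1] for bound particles
    return jz / jcirc, jp / jcirc, e_ratio
```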
## 3 Identifying the structural components of simulated galaxies
In this section, we introduce our structural decomposition technique and apply it to two example galaxies identified in Ref-L0100N1504 (Section 3.1). In Section 3.2, we test the effect of varying the number of Gaussians used by the GMM on the resulting disc, bulge, and IHL mass fractions, and in Section 3.3 we compare the results of our decomposition to alternative estimates of the kinematic morphologies and IHL fractions of galaxies. The results in this section are limited to central galaxies identified at \(z=0\) in Ref-L0100N1504 that have \(M_{200}>10^{11.7}\,\mathrm{M}_{\odot}\), resulting in a sample of 2415 galaxies. This halo mass limit ensures that the structural and kinematic properties of the galaxies are robust to spurious heating by DM particles at their half stellar mass radii (\(r_{50}\); see Table 2 of Ludlow et al., 2023).
### Decomposing the stellar component of Eagle galaxies using GMMs
We use GMMs to decompose the stellar components of Eagle galaxies into at most three physical components - a disc, a centrally concentrated bulge, and an extended IHL - but make no attempt to isolate dynamically distinct sub-components of them (i.e. we do not distinguish between thin and thick discs, or classic- or pseudo-bulges). We follow Obreja et al. (2016, see also Du et al., 2019) and use the kinematic parameter space of stellar particles defined by their values of \(j_{z}/j_{\mathrm{circ}}\), \(j_{\mathrm{p}}/j_{\mathrm{circ}}\), and \(e/e_{\mathrm{min}}\). Our GMMs find clusters of particles in this parameter space, approximating them as multi-dimensional Gaussian distributions that are later associated with the different structural components of a galaxy.
The optimal number of Gaussian distributions, \(n_{c}\), required to disentangle the various components of a galaxy is not known a priori, so assumptions must be made. Obreja et al. (2019) applied GMMs to 25 galaxies simulated at high resolution using \(n_{c}\leq 5\) and associated each of the best-fit Gaussians to one physical galaxy component: a
thin or thick disc, classical- or pseudo-bulge, or a stellar halo. However, the structural components of galaxies may not follow simple Gaussian distributions in the input parameter space (e.g. they may possess kinematic substructure or exhibit non-Gaussian distribution functions), and when they do not, multiple Gaussian distributions per galaxy component fare better. Du et al. (2019) used a modified Bayesian information criterion to determine that \(5\lesssim n_{c}\lesssim 12\) works well for galaxies in Illustris-TNG100 (Pillepich et al., 2018) with stellar masses \(M_{\star}\gtrsim 10^{10}\,\mathrm{M}_{\odot}\); they assigned each Gaussian to a physical galaxy component based on the values2 of \(\left<j_{z}/j_{\mathrm{circ}}\right>\) and \(\left<e/e_{\mathrm{min}}\right>\).
Footnote 2: We follow the nomenclature of Du et al. (2019) and denote the mean of the Gaussians output by our GMM fits using angular brackets, e.g. \(\left<j_{z}/j_{\mathrm{circ}}\right>\) or \(\left<e/e_{\mathrm{min}}\right>\).
We follow a different approach, and initially set \(n_{c}=3\) in order to assess whether each galaxy possesses a significant disc component or if it can be approximated as a pure spheroid. If one or more of the best-fit Gaussians have a mean circularity \(\left<j_{z}/j_{\mathrm{circ}}\right>\geq 0.5\) we classify the galaxy as a disc (and hereafter refer to them as "disc" galaxies), otherwise it is spheroid dominated (hereafter, "spheroid"). After some experimentation, we found that this initial morphological classification prevents the identification of spurious discs in dispersion dominated galaxies for GMMs run with larger \(n_{c}\), cases that often lead to net counter-rotating spheroidal components. This occurs because dispersion supported systems often contain a small but significant fraction of stellar orbits with high \(j_{z}/j_{\mathrm{circ}}\) values, and assigning those orbits to a "disc" skews the remaining \(j_{z}/j_{\mathrm{circ}}\) distribution to lower, often negative values. The upper panels of Fig. 1 show two typical galaxies that were classified as a disc (left panel) and spheroid (right panel) using \(n_{c}=3\).
After categorising galaxy morphologies this way, we again run GMMs but this time using a range of \(n_{c}\) values. For the bulk of our analysis we adopt \(n_{c}=12\), but discuss below how increasing or decreasing \(n_{c}\) affects the median mass fractions of the various structural components of galaxies inferred from GMMs, as well as the intrinsic variation in them for individual galaxies.
For all \(n_{c}\), disc galaxies are modelled using the \((j_{z}/j_{\mathrm{circ}}\), \(j_{\mathrm{p}}/j_{\mathrm{circ}}\), \(e/e_{\mathrm{min}})\) parameter space of stellar particles and we assign the corresponding best-fit Gaussian distributions to one of the three physical galaxy components based on their values of \(\left<j_{z}/j_{\mathrm{circ}}\right>\) and \(\left<e/e_{\mathrm{min}}\right>\). Those with \(\left<j_{z}/j_{\mathrm{circ}}\right>\geq 0.5\) are assigned to the disc, and the rest are split between the bulge and IHL. To do so, we identify the Gaussian distributions with the maximum and minimum \(\left<e/e_{\mathrm{min}}\right>\) values and define \(e_{\mathrm{cut}}\) as the midpoint between them, i.e. \(e_{\mathrm{cut}}=[\min(\left<e/e_{\mathrm{min}}\right>)+\max(\left<e/e_{\mathrm{min}}\right>)]/2\). The remaining Gaussian clusters not assigned to the disc (i.e. those with \(\left<j_{z}/j_{\mathrm{circ}}\right><0.5\)) are assigned to the bulge if \(\left<e/e_{\mathrm{min}}\right>\geq e_{\mathrm{cut}}\), or to the IHL if \(\left<e/e_{\mathrm{min}}\right><e_{\mathrm{cut}}\). Note that there is the chance that the IHL will
Figure 1: Distribution of \(j_{z}/j_{\mathrm{circ}}\)-\(e/e_{\mathrm{min}}\) for a disc galaxy (DG; left panel) and a spheroidal galaxy (SG; right panel). The top row corresponds to the \(n_{c}=3\) decomposition, which is used as a first step to classify the morphologies of galaxies. The bottom row corresponds to \(n_{c}=12\). The best-fitting Gaussian distributions identified by the GMM are allocated to different structural components based on their mean values of \(j_{z}/j_{\mathrm{circ}}\) and \(e/e_{\mathrm{min}}\), which are plotted with teal diamonds for discs, maroon circles for bulges, and yellow squares for the IHL. The vertical teal line at \(j_{z}/j_{\mathrm{circ}}=0.5\) is used to allocate Gaussians to the disc component (i.e. \(j_{z}/j_{\mathrm{circ}}\geq 0.5\)); the horizontal maroon line represents \(e_{\mathrm{cut}}\), the value used to distinguish the bulge (\(e>e_{\mathrm{cut}}\)) and IHL components (\(e<e_{\mathrm{cut}}\)). The shaded ellipses represent the \(1\sigma\) confidence regions.
be undetected, as is the case for the \(n_{c}=3\) disc model in Fig. 1 (upper-left panel). This occurs most often when \(n_{c}\) is small and the IHL is significantly less massive than the disc and bulge components.
We run the same GMMs for spheroids, but this time using \((j_{z}/j_{\rm circ},\)\(e/e_{\rm min})\) as the input parameter space (the broad \(j_{\rm P}/j_{\rm circ}\) distributions for the bulges and IHL of spheroidal galaxies overlap considerably making this quantity less useful for distinguishing these components). We also impose priors of \(\langle j_{z}/j_{\rm circ}\rangle=0\) on all Gaussian components of the GMMs, the expected value for non-rotating, dispersion dominated systems. Priors on \(\langle e/e_{\rm min}\rangle\) are equally spaced between 0 and 1. This encourages the best-fit Gaussians obtained by the GMM to be distinct in binding energy rather than in \(j_{z}/j_{\rm circ}\). As for discs, Gaussians with \(\langle e/e_{\rm min}\rangle\geq e_{\rm cut}\) are assigned to the bulge; those with \(\langle e/e_{\rm min}\rangle<e_{\rm cut}\) are assigned to the IHL.
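The assignment logic described above can be sketched as follows, assuming scikit-learn's `GaussianMixture` as the GMM backend, hard assignment of particles to the Gaussian of highest responsibility via `predict`, and `means_init` as a stand-in for the priors on the Gaussian means; none of these implementation choices are specified in the text.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def decompose(jz_jc, jp_jc, e_emin, n_c=12, seed=0):
    """Assign stellar particles of one galaxy to disc / bulge / IHL with a GMM."""
    X3 = np.column_stack([jz_jc, jp_jc, e_emin])

    # Step 1: n_c = 3 run to decide whether the galaxy hosts a significant disc
    gmm3 = GaussianMixture(n_components=3, random_state=seed).fit(X3)
    is_disc_galaxy = (gmm3.means_[:, 0] >= 0.5).any()

    # Step 2: full decomposition
    if is_disc_galaxy:
        X = X3
        gmm = GaussianMixture(n_components=n_c, random_state=seed).fit(X)
        mean_circ, mean_e = gmm.means_[:, 0], gmm.means_[:, 2]
    else:
        X = np.column_stack([jz_jc, e_emin])
        # stand-in for the priors described in the text: initialise all means at
        # <jz/jcirc> = 0, with <e/emin> equally spaced between 0 and 1
        init = np.column_stack([np.zeros(n_c), np.linspace(0.0, 1.0, n_c)])
        gmm = GaussianMixture(n_components=n_c, means_init=init,
                              random_state=seed).fit(X)
        mean_circ, mean_e = gmm.means_[:, 0], gmm.means_[:, 1]

    # e_cut: midpoint between the most and least bound Gaussian means
    e_cut = 0.5 * (mean_e.min() + mean_e.max())
    labels = np.where(mean_circ >= 0.5, 'disc',
                      np.where(mean_e >= e_cut, 'bulge', 'IHL'))
    return labels[gmm.predict(X)], e_cut
```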
The lower panels of Fig. 1 show the results of running a GMM using \(n_{c}=12\) on the disc and spheroidal galaxies mentioned above (these examples are hereafter referred to as DG and SG, respectively). DG (left) is roughly the mass of the MW, having \(M_{\star}\approx 10^{10.7}\,{\rm M}_{\odot}\) and \(M_{200}\approx 10^{12.3}\,{\rm M}_{\odot}\). SG is the central galaxy of a low-mass cluster with \(M_{\star}\approx 10^{12.1}\,{\rm M}_{\odot}\) and \(M_{200}=10^{14.3}\,{\rm M}_{\odot}\). Teal diamonds, maroon circles, and yellow squares show the values of \(\langle e/e_{\rm min}\rangle\) and \(\langle j_{z}/j_{\rm circ}\rangle\) corresponding to the best-fit Gaussians assigned to the disc, bulge, and IHL, respectively; the shaded regions of corresponding colour are the \(1\sigma\) confidence intervals around the mean values.
The vertical teal lines show the value of \(\langle j_{z}/j_{\rm circ}\rangle=0.5\) used to distinguish Gaussians assumed to represent the disc component from those that represent the bulge or IHL. The horizontal maroon lines show the values of \(e_{\rm cut}\). For DG, we find \(e_{\rm cut}=0.54\) whereas for SG \(e_{\rm cut}=0.45\). We note that these values are lower than the value of 0.75 adopted in previous work (e.g. Du et al., 2020, 2021) to separate the bulges and IHL of simulated disc galaxies, but the most appropriate value of \(e_{\rm cut}\) for a particular galaxy is unclear. Recent work suggests that the optimal cut in binding energy may depend on galaxy morphology (e.g. Zana et al., 2022). Nonetheless, when applied to DG, our method effectively separates the tightly-bound stellar structures (i.e. the bulge) from the loosely-bound ones (i.e. the IHL). For SG, the separation between the bulge and IHL components is less clear: several Gaussians have \(\langle e/e_{\rm min}\rangle\approx e_{\rm cut}\), suggesting that the bulge and IHL components of this galaxy are less dynamically distinct than those of DG.
Fig. 2 shows a few properties of the stellar particles belonging to the various structural components of DG (left column) and SG (right column; in both cases, the structural components were identified using a GMM with \(n_{c}=12\)). The top and middle rows show the distributions of \(j_{z}/j_{\rm circ}\) and \(e/e_{\rm min}\), respectively; the bottom row shows the distributions of stellar ages (\(t_{\rm form}\)). Grey histograms in each panel represent all stellar particles belonging to each galaxy, and the individual lines show the subset of stellar particles assigned to the disc (teal dotted-dashed lines), bulge (solid maroon lines), and IHL (dashed yellow lines).
DG is composed of three distinct components: a rotationally supported disc, a tightly bound bulge and a more loosely bound stellar halo (the latter two components are largely dispersion supported, as expected). The bottom left panel of Fig. 2 shows that the disc component formed over an extended time period, with star formation peaking \(\approx 9\) Gyrs ago, and gradually tapering off to roughly half of the peak value by \(z=0\). The disc, which is composed primarily of stellar particles that formed in-situ (\(f_{\rm ex-situ}\approx 0.1\)),3 has a half mass stellar age of 6.4 Gyr, and an interquartile age range of about 5 Gyr.
Footnote 3: We use the ex-situ classification of Davison et al. (2020): stellar particles that formed in subhaloes of the main branch of a \(z=0\) subhalo are considered to have formed “in-situ”. All other stellar particles are flagged as having formed “ex-situ”.
The bulge component of DG contains a slightly higher contribution from ex-situ stars (\(f_{\rm ex-situ}\approx 0.2\)) and is, on average, composed of the oldest stellar populations in the galaxy; its half mass stellar age is \(t_{50}\approx 11.3\,{\rm Gyrs}\). The IHL component is the most extended in the galaxy (its half stellar mass radius is \(r_{50}=27.4\,{\rm kpc}\); for the bulge, \(r_{50}=3.4\,{\rm kpc}\)) and is dominated by stars that formed ex-situ (\(f_{\rm ex-situ}=0.6\)), suggestive of a merger-driven formation scenario. Similar to the bulge component, the IHL hosts a relatively old stellar population with a half mass age of about 10.1 Gyrs. Neither the bulge nor the IHL contain an appreciable number of stellar particles with
Figure 2: From top to bottom, various rows show the \(j_{z}/j_{\rm circ}\), \(e/e_{\rm min}\), and \(t_{\rm form}\) distributions, respectively, for the example disc (DG; left panels) and spheroidal galaxy (SG; right panels). The distributions for all stellar particles associated with the galaxies are indicated by the grey shaded histograms; the dash-dotted teal lines, solid maroon lines, and dashed yellow lines show the distributions for stellar particles assigned to the disc, bulge, and IHL components.
ages \(\lesssim 6\) Gyr (the disc formed 54 per cent of its stellar mass in that time).
The bulge component of SG is similar to that of DG: it is dispersion-supported and comprised primarily of old stellar populations (the half mass age of bulge stars in SG is \(\approx 11.3\) Gyrs). The visible peaks in the \(t_{\rm form}\) distribution of bulge stars do, however, indicate that the bulge contains several distinct stellar populations. Together with the high ex-situ fraction (\(f_{\rm ex-situ}\approx 0.80\)), this implies a formation history dominated by multiple merger events, consistent with the standard model for the formation of brightest cluster galaxies (BCGs) in \(\Lambda\)CDM (e.g. De Lucia & Blaizot 2007; Robotham et al. 2014). The IHL of SG is also dominated by ex-situ stars (\(f_{\rm ex-situ}\approx 0.87\)) and is comprised of two kinematically-distinct components: one dispersion-supported structure with a peak orbital circularity of \(j_{z}/j_{\rm circ}\)\(\approx 0\), and another with a peak at \(j_{z}/j_{\rm circ}\)\(\approx-0.5\). The latter component counter-rotates with respect to the net angular momentum of the galaxy and is visible in the plot of the component projections (see Fig. 11).
### The impact of varying \(n_{c}\) on the decomposition results
Fig. 3 shows the logarithmic change in the stellar mass that is allocated to the disc (teal diamonds; upper panel), bulge (maroon circles), and IHL (yellow squares) components of individual galaxies as \(n_{c}\) is increased by \(\Delta n_{c}=1\), starting from \(n_{c}=3\) and increasing to \(n_{c}=14\). The top and bottom panels show results separately for the subset of disc and spheroidal galaxies, respectively. Symbols correspond to the median4 values of \(\log[M(n_{c})/M(n_{c}+1)]\), and error bars show the interquartile scatter (note: \(M(n_{c})\) is used generically here to represent the mass assigned to a particular galaxy component after running a GMM that uses \(n_{c}\) Gaussians).
Footnote 4: When calculating this ratio, we set \(\log(M(n_{c})/M(n_{c}+1))=-1\) if a particular component is undetected for a given value of \(n_{c}\). Doing so allows us to include such instances in our estimates of the scatter and logarithmic change in component masses as \(n_{c}\) is increased, which would otherwise be biased by only including systems for which masses can be estimated. We note that such occurrences do not affect the median values plotted in Fig. 3.
The top panel of Fig. 3 shows that, for disc galaxies, the mass assigned to each structural component is typically robust provided \(n_{c}\geq 6\). For \(n_{c}\leq 5\), the masses assigned to the disc and bulge are well converged; the mass allocated to the IHL, however, can vary significantly. For \(n_{c}=3\), for example, the median IHL mass of discs is 0. This is because, for most disc dominated systems, the disc component is represented by two Gaussians, either due to the presence of a thick disc, or due to inherent non-Gaussianity of galactic discs in the input parameter space. The large scatter in the mass assigned to the IHL component is partly due to its relatively low mass, which makes it more susceptible to small changes in \(M(n_{c})\). Note, however, that both the median mass fraction and scatter in mass assigned to each component, including the IHL, are well-behaved provided \(n_{c}\geq 12\).
For spheroids, the situation is similar. The median masses assigned to the bulge and IHL of individual systems are largely unchanged when \(n_{c}\) is increased beyond \(\approx 7\), although the scatter in mass assigned to the IHL of individual galaxies is larger than that of bulges for all \(n_{c}\). Note too that, for \(n_{c}\gtrsim 7\), the masses assigned to the IHL of spheroidal galaxies are more susceptible to changes in \(n_{c}\) than is the case for the IHL of disc galaxies (i.e. the yellow error bars are larger in the lower panel of Fig. 3 than in the upper panel). For example, for \(n_{c}=12\) the interquartile range (IQR) for the IHL of spheroids is \(\approx 0.37\), but for discs it is \(\approx 0.15\). This may indicate that the bulge and IHL components of discs are more dynamically distinct than they are in spheroidal galaxies.
We adopt \(n_{c}=12\) for our analysis for two reasons. First, if the Gaussian distributions are equally divided between the structural components of galaxies, it implies that the disc, bulge, and IHL of disc-dominated galaxies will each be identified by 4 Gaussian distributions; for spheroidal galaxies, the bulge and IHL will each be identified by 6 Gaussians. The exact division of the Gaussians among the structural components of galaxies will, of course, vary from galaxy to galaxy, but allowing for multiple Gaussians per galactic component can better accommodate the presence of distinct sub-populations.
Second, we are primarily interested in studying the IHL of galaxies which, judging by Fig. 3, converges to a stable mass fraction at higher \(n_{c}\) values than is the case for the disc or bulge components. We stress, however, that the exact value of \(n_{c}\) is somewhat arbitrary: provided \(8\lesssim n_{c}\lesssim 15\), the masses allocated to the different structural components of most galaxies are relatively stable. This result is shown another way in Fig. 4, where we plot the fraction of mass assigned to the various structural components of discs (left) and spheroids (right) as a function of \(M_{200}\). The thick lines in each panel correspond to results obtained for \(n_{c}=12\); the thin lines show results for other values of \(n_{c}\) in the range \(8\leq n_{c}\leq 15\).
Figure 3: Logarithmic difference in the estimated mass of each structural component induced by increasing \(n_{c}\) by 1, plotted as a function of \(n_{c}\) (for visual clarity, points corresponding to the disc and IHL components have been offset slightly from the integer \(n_{c}\) values). Medians are plotted as diamonds, circles, and squares for the disc, bulge and IHL components, respectively; error bars represent the interquartile range. Results for disc galaxies are plotted in the top panel; results for spheroidal galaxies are shown in the bottom panel.
### Comparison to alternative definitions of galaxy components
#### 3.3.1 Relation to kinematic estimates of galactic morphology
In Fig. 5 we compare the mass fraction allocated to the disc component by our GMM (\(f_{\rm disc}\)) with two other kinematic indicators5 of galaxy morphology: the fraction of kinetic energy in co-rotation (\(\kappa_{\rm co}\); see Correa et al., 2017) and the disc-to-total ratio (D/T; see e.g. Thob et al., 2019). Both quantities were measured using stellar particles that lie within a spherical 30 kpc aperture.
Footnote 5: Specifically, we define \(\kappa_{\rm co}=(2\,K_{\star})^{-1}\sum_{j_{z,k}>0}m_{k}\,(j_{z,k}/R_{k})^{2}\), where \(K_{\star}\) is the total kinetic energy of the stellar particles, \(j_{z,k}\) is the \(z\)-component of the specific angular momentum of particle \(k\), and \(R_{k}\) is its distance from the \(z\)-axis. The disc-to-total ratio is defined as \(D/T=1-S/T=1-(2/M_{\star})\sum_{j_{z,k}<0}m_{k}\), where \(S/T\) is the spheroid-to-total ratio and \(m_{k}\) is the mass of the \(k^{\rm th}\) stellar particle.
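For reference, the two statistics defined in the footnote can be evaluated as in the following sketch; the aperture handling, units, and function name are our own choices.

```python
import numpy as np

def kappa_co_and_dt(pos, vel, mass, aperture=30.0):
    """Compute kappa_co and D/T from stellar particles within a spherical aperture.

    pos, vel : (N, 3) positions [kpc] and velocities [km/s] in the J-aligned frame
    mass     : (N,) stellar masses
    """
    r = np.linalg.norm(pos, axis=1)
    sel = r < aperture
    p, v, m = pos[sel], vel[sel], mass[sel]

    jz = p[:, 0] * v[:, 1] - p[:, 1] * v[:, 0]      # z-component of specific j
    R = np.sqrt(p[:, 0]**2 + p[:, 1]**2)            # distance from the z-axis
    K = 0.5 * np.sum(m * np.sum(v**2, axis=1))      # total stellar kinetic energy

    co = (jz > 0) & (R > 0)
    kappa_co = np.sum(m[co] * (jz[co] / R[co])**2) / (2.0 * K)

    # D/T = 1 - S/T, with S/T taken as twice the counter-rotating mass fraction
    dt = 1.0 - 2.0 * np.sum(m[jz < 0]) / np.sum(m)
    return kappa_co, dt
```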
The top panels of Fig. 5 plot the 2D histogram of \(f_{\rm disc}\) versus \(\kappa_{\rm co}\) (left) and D/T (right) for disc galaxies. The lines show the median relations separately for three bins of \(M_{200}\). The Spearman correlation coefficient for the full sample of discs (labelled \(\rho\) in the upper panels) is displayed in the bottom right corner of each panel.
Regardless of halo mass, there is a strong correlation between \(f_{\rm disc}\) and \(\kappa_{\rm co}\) (\(\rho=0.93\) for all galaxies, and \(\rho>0.89\) for the individual mass bins) that closely follows the one-to-one line. Close inspection, however, indicates that the slope of the relation is slightly steeper than 1: for \(f_{\rm disc}\leq 0.4\) there is a tendency for \(\kappa_{\rm co}\) to exceed \(f_{\rm disc}\). This is because our GMMs can in principle yield \(f_{\rm disc}=0\), whereas the minimum value of \(\kappa_{\rm co}\) for isotropic, dispersion supported systems with no rotation is \(\kappa_{\rm co}\approx 0.17\). This naturally biases the relation between \(f_{\rm disc}\) and \(\kappa_{\rm co}\), particularly for galaxies with small disc fractions.
The top right panel of Fig. 5 shows that \(f_{\rm disc}\) and D/T are also strongly correlated (\(\rho=0.85\)), but that D/T is slightly higher than \(f_{\rm disc}\) for the majority of galaxies. This is true for all mass bins, but a larger offset is seen for higher \(M_{200}\). This offset occurs because the D/T statistic implicitly assumes that all spheroidal components of galaxies do not rotate and are completely dispersion supported, attributing all net rotation in the galaxy to the disc. A similar offset between D/T and disc mass fractions obtained from GMMs was reported by Obreja et al. (2016). The mass dependence of the offset hints at an increasing prevalence of rotational support in the spheroidal component for galaxies in higher mass haloes.
The bottom panels of Fig. 5 show the distributions of \(\kappa_{\rm co}\) and D/T for our sample of spheroids. The vertical pink, yellow, and blue lines show the median values for each mass bin and the green line shows the values obtained for the spheroidal galaxy used for Figs. 1 and 2 (i.e. SG). The median \(\kappa_{\rm co}\) value for all spheroids is 0.19, only slightly larger than the value expected for isotropic, dispersion supported systems with no net rotation. There are a handful of spheroids with relatively high values of \(\kappa_{\rm co}\) and D/T (for example, 6.6 per cent of spheroids have \(\kappa_{\rm co}\geq 0.4\); their mean D/T ratio is 0.6). A visual inspection of these galaxies reveals that they typically have lenticular morphologies, consistent with fast rotators (e.g. Cappellari, 2016), or have experienced recent mergers, which complicates the disc-bulge decomposition.
The strong correlations between \(f_{\rm disc}\) and these alternative morphology metrics are perhaps unsurprising given that both \(\kappa_{\rm co}\) and D/T are calculated directly from the \(z\)-component of the angular momentum. Although they are not completely independent measures of galactic morphology, the close correspondence between them indicates that our galaxy decomposition technique yields sensible results and that the kinematic morphologies we recover are in agreement with previous work.
Figure 4: The median stellar mass fraction assigned to each galactic component as a function of \(M_{200}\). The dash-dotted teal lines, solid maroon lines, and dashed yellow lines show the median mass fractions of the disc, bulge, and IHL components, respectively. The thick lines show results obtained using \(n_{c}=12\); results for other \(n_{c}\geq 8\) are plotted as faint lines of the corresponding colour and style. Results are shown separately for discs (left panel), and spheroids (right panel).
Figure 5: Top row: The \(f_{\rm disc}\)-\(\kappa_{\rm co}\) (left panel) and \(f_{\rm disc}\)-D/T (right panel) distribution for discs. Median relations are plotted for 3 bins of halo mass, as indicated above the plot. The black dotted line shows the one-to-one line for reference and our example disc galaxy (DG) is plotted using an outsized green pentagon. The Spearman correlation coefficient (\(\rho\)) for the total disc sample is displayed in the bottom right corner of the upper panels. Bottom row: \(\kappa_{\rm co}\) and D/T distributions for spheroids. Vertical lines indicate the median values for the same three halo mass bins and the vertical green line shows values for our example spheroidal galaxy (SG).
#### 3.3.2 Fraction of mass in the IHL
We next compare the estimates of \(f_{\rm IHL}\) obtained from our GMMs to those obtained using three alternative IHL definitions: 1) the fraction of stellar mass that formed ex-situ (\(f_{\rm ex-situ}\); e.g. Cooper et al., 2010); 2) the fraction of stellar mass at \(r>100\) kpc (\(f_{\rm>100\,kpc}\); e.g. Pillepich et al., 2014); and 3) the fraction of stellar mass beyond 2 times the stellar half-mass radius of a galaxy (\(f_{\rm>2\,r_{50}}\); e.g. Elias et al., 2018).
The top panel of Fig. 6 shows the relationship between \(f_{\rm IHL}\) and \(f_{\rm ex-situ}\). Median relations are shown separately for discs and spheroids (blue and orange lines, respectively) in four bins of \(M_{200}\) (see the legend in the middle panel). Both \(f_{\rm IHL}\) and \(f_{\rm ex-situ}\) increase with increasing \(M_{200}\), as seen by the separation of lines of different type (they move up and to the right as mass increases). As a result, the values of \(f_{\rm IHL}\) and \(f_{\rm ex-situ}\) for the whole sample of galaxies are weakly correlated (\(\rho=0.56\)), even though at fixed halo mass the correlations are significantly weaker (the Spearman rank coefficients for the various mass bins plotted in Fig. 6 range from 0.31 to 0.33).
The weak correlation between \(f_{\rm IHL}\) and \(f_{\rm ex-situ}\) among high mass spheroids likely reflects the fact that both the bulge and IHL components of these galaxies are dominated by stars that formed ex-situ: associating the IHL with all accreted stellar material is therefore inappropriate for these systems, because much of the bulge mass also formed ex-situ (e.g. Pillepich et al., 2014). We find that, for the full sample, \(f_{\rm ex-situ}\) exceeds \(f_{\rm IHL}\) by a factor of \(\approx 1.3\). We will return to these points in the next section.
Given its extended nature, defining the IHL with an aperture-based approach is common; in this case, all stars beyond some radius are assigned to the IHL, whereas those within that radius are assumed to belong to the other galaxy components. The middle and bottom panels of Fig. 6 show the relation between \(f_{\rm IHL}\) and two aperture-based IHL mass measurements: \(f_{\rm>100kpc}\) (e.g. Pillepich et al., 2018), and \(f_{\rm>2\,r_{50}}\) (e.g. Elias et al., 2018), respectively.
We find that \(f_{\rm>100kpc}\) is lower than \(f_{\rm IHL}\) for \(\approx 97\) per cent of galaxies. While the two estimates are weakly correlated for the whole population (\(\rho=0.54\)), at fixed \(M_{200}\) they are not correlated. This suggests that the correlation is primarily driven by the fact that both \(f_{\rm>100kpc}\) and \(f_{\rm IHL}\) increase with increasing \(M_{200}\). A similar conclusion applies to the relationship between \(f_{\rm>2\,r_{50}}\) and \(f_{\rm IHL}\). Note that \(f_{\rm>2\,r_{50}}\) is typically larger than \(f_{\rm IHL}\), and has a much smaller halo-to-halo scatter (see Canas et al., 2020, for a similar finding).
Although the IHL fractions estimated using these alternative methods correlate with our measurements, it is clear that none of them reproduce our results, and they fare even more poorly when comparisons are made at fixed halo mass. This is due to the fact that the various components of galaxies do not have well-defined edges, nor are they composed purely of in-situ or ex-situ stars.
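The three alternative estimators compared here reduce to simple mass sums; a minimal sketch, with hypothetical argument names, is given below.

```python
import numpy as np

def alternative_ihl_fractions(r, mass, ex_situ_flag, r50):
    """Three simple IHL mass-fraction proxies of the kind compared in Fig. 6.

    r            : (N,) galacto-centric radii of stellar particles [kpc]
    mass         : (N,) stellar particle masses
    ex_situ_flag : (N,) boolean array, True for stars formed ex-situ
    r50          : stellar half-mass radius of the galaxy [kpc]
    """
    mtot = mass.sum()
    f_ex_situ = mass[ex_situ_flag].sum() / mtot          # accreted mass fraction
    f_gt_100kpc = mass[r > 100.0].sum() / mtot           # mass beyond 100 kpc
    f_gt_2r50 = mass[r > 2.0 * r50].sum() / mtot         # mass beyond 2 r50
    return f_ex_situ, f_gt_100kpc, f_gt_2r50
```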
## 4 Results
In this section, we supplement the Ref-L0100N1504 sample with 897 additional galaxies from 50-HiResDM,6 as well as the 30 galaxy clusters from the C-Eagle suite. This provides us with a sample of 3342 galaxies spanning the mass range \(10^{11.2}{\rm M}_{\odot}\lesssim M_{200}\lesssim 10^{15.4}{\rm\,M}_{\odot}\) (or \(10^{8.4}{\rm M}_{\odot}\lesssim M_{\star}\lesssim 10^{12.9}{\rm\,M}_{\odot}\) in stellar mass). Of the full galaxy sample, \(\approx 72\) per cent were classified as discs based on our morphological classification step (see Section 3.1); the remainder as spheroids. Only about 1.5 per cent of galaxies have no discernible IHL component; these are primarily low-mass, disc-dominated systems. All results that follow were obtained from our GMM galaxy decomposition using \(n_{c}=12\).
Figure 6: Correlations between various estimates of the IHL mass fraction. From top to bottom, respectively, the various panels show: \(f_{\rm ex-situ}\) (i.e. the fraction of stellar mass that formed ex-situ), \(f_{\rm>100kpc}\) (i.e. the stellar mass fraction at \(r>100\) kpc), \(f_{\rm>2\,r_{50}}\) (i.e. the fraction of stellar mass beyond two stellar half mass radii) versus \(f_{\rm IHL}\). Median relations are plotted for discs (blue) and spheroids (orange) separately, in 4 bins of halo mass. The one-to-one line is shown as a dotted black line for reference. The locations of our example disc and spheroidal galaxies are plotted using green pentagons (labelled "D" and "S", respectively).
### The stellar-to-halo mass relation and morphology
In Fig. 7 we plot the stellar-to-halo mass relation (SHMR) for the full galaxy sample, colour-coded by \(f_{\rm disc}\). The median relations obtained from Ref-L0100N1504 (shown as a solid black line) and 50-HiResDM (thick dashed line) are in good agreement (see also Ludlow et al., 2023). Fig. 7 also shows that, over the mass range \(11.5\lesssim\log\left(M_{200}/{\rm M}_{\odot}\right)\lesssim 12.5\), there is a clear relationship between the stellar mass fraction of a galaxy and \(f_{\rm disc}\): at fixed halo mass, disc galaxies have higher stellar masses than spheroids. For example, galaxies with \(f_{\rm disc}>0.5\) have, on average, \(M_{\star}\approx 2.1\times 10^{10}\,{\rm M}_{\odot}\); those with \(f_{\rm disc}<0.1\) have \(M_{\star}\approx 1.4\times 10^{10}\,{\rm M}_{\odot}\). These results provide additional dynamical evidence for a morphology-dependent stellar-to-halo mass relation, consistent with the observational results of Posti & Fall (2021), and the theoretical results of Correa & Schaye (2020), who reported a similar trend for Eagle galaxies but using \(\kappa_{\rm co}\) as a proxy for morphology.
Fig. 7 also demonstrates that disc-dominated galaxies tend to occupy haloes with masses \(M_{200}\lesssim 10^{12.5}\,{\rm M}_{\odot}\) (including 95 per cent of galaxies with \(f_{\rm disc}\)\(>0.5\)), but above this mass spheroids dominate (e.g. at \(M_{200}\gtrsim 10^{12.5}\,{\rm M}_{\odot}\), more than half of the galaxies in our sample have \(f_{\rm disc}\)= 0), a trend that is also well-established observationally (e.g. Posti & Fall, 2021).
### The SFRs, ex-situ fractions, and ages of discs, bulges and the IHL
Fig. 8 plots the median star formation rates (SFRs), ex-situ fractions (\(f_{\rm ex-situ}\)), and stellar half-mass formation times (\(t_{50}\)), versus \(M_{200}\). Results are shown separately for each galaxy component and for all simulations (Ref-L0100N1504 results are shown using solid lines; 50-HiResDM results using dashed lines; and C-Eagle results using crosses). The SFRs were averaged over a lookback time of 500 Myr, but we have verified that our results are qualitatively insensitive to reasonable variations in that timescale. Note that merger trees are not available for our 50-HiResDM run, nor for C-Eagle, so we only present ex-situ fractions for Ref-L0100N1504.
Overall, the results plotted in Fig. 8 are in qualitative agreement with observational expectations and theoretical results: compared to bulges and the IHL, discs have the highest SFRs, the lowest fractions of ex-situ stars, and are typically the youngest component of a galaxy (see, e.g., Robotham et al., 2022, for observational evidence of the latter). Note that discs have small but non-zero ex-situ fractions, possibly due to the presence of stellar particles tidally stripped from satellites co-planar with the disc (e.g. Abadi et al., 2003).
Results from our GMMs suggest that bulges host the oldest stellar populations and exhibit the lowest \(z=0\) SFRs. They have higher ex-situ fractions than discs, but lower ex-situ fractions than the IHL at fixed mass. The ex-situ fraction of bulges correlates strongly with halo mass, increasing from about 10 per cent at \(M_{200}\approx 10^{12}\,{\rm M}_{\odot}\) to \(\approx 70\) per cent at \(M_{200}\approx 6\times 10^{13}\,{\rm M}_{\odot}\); for galaxies hosted by haloes of mass \(M_{200}\approx 10^{13}\,{\rm M}_{\odot}\), roughly half of all their bulge stars were formed ex-situ.
Although the IHL is dominated by ex-situ stars at most masses, our analysis suggests that it also contains a non-negligible fraction of stars that were born in-situ (roughly 60 per cent at the galactic scale, and \(\approx 10\) per cent at the cluster scale). This goes against a common assumption that the IHL is comprised entirely of accreted stellar material (e.g. Cooper et al., 2010). The IHL is systematically younger, more star-forming, and comprised of more ex-situ stars than bulges, although the differences between them become small at mass scales \(M_{200}\gtrsim 10^{14}\,{\rm M}_{\odot}\). This is in qualitative agreement with observational results indicating that the stellar populations of the ICL and BCGs overlap significantly (Jimenez-Teja et al., 2018).
The median age of each component increases with increasing \(M_{200}\), consistent with the concept of galaxy "downsizing" (e.g. Neistein et al., 2006).
Finally, note that the median SFRs and half-mass ages of the various galactic components are in good agreement between Ref-L0100N1504 and 50-HiResDM. The slight differences in the SFRs of the bulge components in these runs are primarily due to the different galaxy samples that they provide, rather than due to numerical artefacts affecting our decomposition technique. These results corroborate and extend the findings of Ludlow et al. (2023), and suggest that spurious heating does not affect the SFRs of simulated galaxies, or even the SFRs of their dynamically-distinct components. This is because, at the mass resolution of our simulations, gaseous baryons are largely unaffected by spurious heating because their radiative cooling timescale is shorter than their collisional heating timescale (Steinmetz & White, 1997).
### The structure of discs, bulges and the IHL
#### 4.3.1 The size-mass relation for discs, bulges, and the IHL
Fig. 9 plots the \(r_{50}-M_{200}\) relations obtained from our simulations. As above, results from Ref-L0100N1504 are shown using solid lines,
Figure 7: The stellar-to-halo mass relation for our full galaxy sample. Individual galaxies have been binned and plotted as hexagons that have been colour-coded by their median value of \(f_{\rm disc}\). The median relation for Ref-L0100N1504 (50-HiResDM) is plotted using a solid black (dashed grey) line; the \(25^{\rm th}\) and \(75^{\rm th}\) percentiles are plotted using dash-dotted lines. The two example galaxies discussed in Section 3 are highlighted using green pentagons. Individual C-Eagle galaxies are plotted as crosses. Note that our dynamical decomposition of galaxy components implies a strong morphology dependence to the stellar-to-halo mass relation.
results from 50-HiResDM using dashed lines, and C-Eagle galaxies using crosses. Different panels show results separately for discs (left), bulges (middle), and the IHL (right). For comparison, the grey lines in each panel show the size-mass relations for central galaxies (i.e. for the total stellar component of each system), regardless of their morphological type.
The median disc sizes depend weakly on \(M_{200}\) across the mass range plotted and, for most masses, are well approximated by a power-law of logarithmic slope \(\approx 0.18\). For \(M_{200}\lesssim 10^{12.5}\,\mathrm{M}_{\odot}\), bulge sizes also exhibit a weak dependence on \(M_{200}\), which gradually steepens at higher masses where spheroids begin to dominate our sample. Like discs, the median sizes of the IHL component are also well approximated by a single power-law, \(r_{50}\propto M_{200}^{0.67}\), over roughly four orders of magnitude in halo mass. Discs are larger than bulges by roughly 0.2 dex at any given mass, in agreement with observational results (e.g. Lange et al., 2016; Robotham et al., 2022).
Navarro et al. (2017, and later Ludlow et al., 2023) showed that, on the galactic scale, stellar half-mass sizes are well approximated by the simple empirical relation \(r_{50}\approx 0.2\times r_{s}\), where \(r_{s}\) is the scale radius of the galaxy's DM halo. The light blue line plotted in the middle panel of Fig. 9 shows7 \(r_{50}=0.18\times r_{s}\), which describes the median sizes of Eagle centrals quite well, at least those occupying haloes with \(M_{200}\lesssim 10^{14}\,\mathrm{M}_{\odot}\). Note, however, that the size-mass relations obtained for any of the individual galaxy components are not well described by this simple relation, nor are they well approximated by a fixed fraction of the halo virial radius (but see Kravtsov, 2013).
Footnote 7: The different normalisation of the relation plotted in Fig. 9 and that advocated by Ludlow et al. (2023) is likely due to the different mass-concentration relations adopted in our study and theirs. Their \(r_{s}\) values were taken from the empirical model for the mass-concentration relation advocated by Ludlow et al. (2016) whereas ours were taken from Schaller et al. (2015), which were determined from fits to halo profiles obtained from Eagle.
Finally, note the good agreement between the various size-mass relations obtained from Ref-L0100N1504 and 50-HiResDM. Although we employed lower limits on \(M_{200}\) for these runs such that the effects of spurious heating are negligible at (and above) the half-mass radii of _all_ stellar particles, it is encouraging that good convergence is also obtained for the sizes of dynamically distinct discs, bulges, and the IHL.
#### 4.3.2 The IHL transition radius
At what radius does the IHL begin to dominate the stellar mass of a galaxy? We denote this radius \(r_{\mathrm{IHL}}\) and explore below how it depends on halo mass and galaxy morphology.
To compute \(r_{\mathrm{IHL}}\), we first construct spherically-averaged density profiles for the different structural components of each galaxy, using 36 equally-spaced bins in \(\log(r)\) that span the range \(-0.5<\log(r/\mathrm{kpc})<3\). We then interpolate the profiles to determine the outer radius at which the density of the IHL exceeds the density of the remaining stellar material. This procedure yields \(r_{\mathrm{IHL}}\) values for \(\approx 90\) per cent of our galaxy sample. Galaxies for which \(r_{\mathrm{IHL}}\) could not be determined are usually low-mass, disc galaxies with low IHL fractions. For such systems, sampling noise in the stellar halo makes it difficult to determine an accurate value of \(r_{\mathrm{IHL}}\). For that reason, in the remainder of this section, we only consider galaxies with \(M_{200}\geq 10^{11.5}\,\mathrm{M}_{\odot}\), for which \(r_{\mathrm{IHL}}\) was reliably determined.
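A minimal sketch of this procedure is given below; the interpolation scheme, the handling of empty bins, and the function name are our own simplifications of the approach described above.

```python
import numpy as np

def r_ihl(r, mass, labels, nbins=36, lo=-0.5, hi=3.0):
    """Outer radius beyond which the IHL dominates the stellar density profile.

    r      : (N,) galacto-centric radii of stellar particles [kpc]
    mass   : (N,) stellar particle masses
    labels : (N,) component labels ('disc', 'bulge' or 'IHL')
    """
    edges = np.logspace(lo, hi, nbins + 1)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    centres = np.sqrt(edges[:-1] * edges[1:])          # geometric bin centres

    is_ihl = labels == 'IHL'
    rho_ihl = np.histogram(r[is_ihl], bins=edges, weights=mass[is_ihl])[0] / shell_vol
    rho_rest = np.histogram(r[~is_ihl], bins=edges, weights=mass[~is_ihl])[0] / shell_vol

    # difference of the log-profiles; positive where the IHL dominates
    with np.errstate(divide='ignore', invalid='ignore'):
        diff = np.log10(rho_ihl) - np.log10(rho_rest)
    good = np.isfinite(diff)

    # outermost upward crossing of diff through zero
    idx = np.where(good[:-1] & good[1:] & (diff[:-1] < 0) & (diff[1:] >= 0))[0]
    if len(idx) == 0:
        return np.nan                                  # no reliable transition radius
    i = idx[-1]
    # linear interpolation in log(r) between the two bin centres
    f = -diff[i] / (diff[i + 1] - diff[i])
    logr = np.log10(centres[i]) + f * (np.log10(centres[i + 1]) - np.log10(centres[i]))
    return 10.0**logr
```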
In Fig. 10, we plot \(r_{\mathrm{IHL}}\) versus \(M_{200}\) for galaxies identified in Ref-L0100N1504. Blue points show results for our sample of disc galaxies, and orange points show spheroids. The shape of the \(r_{\mathrm{IHL}}-M_{200}\) relation differs from the size-mass relations of the individual structural components of galaxies plotted in Fig. 9. Specifically, below \(M_{200}\approx 10^{12.8}\,\mathrm{M}_{\odot}\), \(r_{\mathrm{IHL}}\) is largely independent of \(M_{200}\) for both discs and spheroids. There is some evidence that the \(r_{\mathrm{IHL}}\) of discs is slightly larger than that of spheroids at fixed \(M_{200}\) but, for both morphologies, the IHL begins to dominate the stellar mass distribution at \(r_{\mathrm{IHL}}\)\(\approx 30\,\mathrm{kpc}\) (shown as a horizontal dashed line in Fig. 10).
Although \(r_{\mathrm{IHL}}\) marks the radius where the IHL begins to dominate the stellar mass of a galaxy, the typical fraction of the IHL within that radius can be quite large. For example, roughly 74 (69) per
Figure 8: Different panels, from top to bottom, plot the median star formation rate (SFR), ex-situ fraction (\(f_{\mathrm{ex-situ}}\)), and stellar half-mass formation time (\(t_{50}\)) as a function of \(M_{200}\). Results are shown separately for the disc (teal lines), bulge (maroon lines), and IHL components (yellow lines). The solid (dashed) lines indicate the median relations from Ref-L0100N1504 (50-HiResDM); shaded regions enclose the interquartile range. Medians are displayed only for mass bins that contain at least 10 galaxies, otherwise individual galaxies are plotted as circles (for Ref-L0100N1504) or crosses (for C-Eagle). The dotted horizontal line in the top panel indicates the point at which we consider the SFR to be poorly resolved, which corresponds to an SFR of \(10\,m_{\mathrm{gas}}/500\,\mathrm{Myr}\).
cent of the IHL mass of discs (spheroids) hosted by haloes with \(M_{200}\approx 10^{12}\,\mathrm{M}_{\odot}\) lies within \(r_{\rm IHL}\). And beyond \(r_{\rm IHL}\), only about 64 per cent of the stellar mass is associated with the IHL (for discs; for spheroids it is 93 per cent). Although the exact values differ as a function of halo mass, our results suggest that there is significant overlap in the spatial distribution of the different galaxy components, and that \(r_{\rm IHL}\) (or any other characteristic radius, or fixed spherical aperture) is unlikely to accurately distinguish the IHL from the other components of a galaxy. Defining the IHL as such likely excludes a significant fraction of the total IHL mass (and includes non-negligible contributions to the IHL from bulge or disc stars) and potentially biases estimates of IHL properties.
The strong mass dependence of \(r_{\rm IHL}\) at \(M_{200}\gtrsim 10^{13}\,\mathrm{M}_{\odot}\) differs from the findings of Contini et al. (2022), who found that \(r_{\rm IHL}\approx 60\,\mathrm{kpc}\) at the group and cluster scale. This discrepancy is perhaps due to the different theoretical approach to the problem (they used a semi-analytic model that implicitly assumes that the IHL follows an NFW profile, albeit one that is more concentrated than the surrounding DM halo by a factor of 3), but may also be due to our different definitions of \(r_{\rm IHL}\). Specifically, Contini et al. (2022) identify \(r_{\rm IHL}\) with the radius at which the IHL contributes 90 per cent of the total stellar mass, whereas in our definition it contributes 50 per cent. Regardless of this issue, the different mass dependence of \(r_{\rm IHL}\) above and below \(M_{200}\approx 10^{13}\,\mathrm{M}_{\odot}\) seen in Fig. 10 indicates that the IHL in groups and clusters likely differs from the IHL of spheroids and discs at the galaxy scale. A comprehensive analysis of the structure of the IHL, how it varies with mass, and its relationship to galaxy assembly histories warrants further investigation, which we defer to future work.
Chen et al. (2022) showed that the IHL transition radii inferred from photometric decomposition of stacked images of galaxy clusters in the Sloan Digital Sky Survey (Rykoff et al., 2014) are comparable to the characteristic scale radii (inferred from weak lensing) of their surrounding DM haloes. The grey line plotted in Fig. 10 shows the best-fitting \(r_{s}-M_{200}\) relation proposed by Schaller et al. (2015), which was obtained by fitting NFW profiles to the average density profiles of relaxed haloes in Eagle. At the group and cluster scale, \(r_{\rm IHL}\) exhibits a similar dependence on \(M_{200}\) to that of \(r_{s}\). Furthermore, above \(M_{200}\approx 10^{14}\,\mathrm{M}_{\odot}\), we find \(r_{\rm IHL}\approx r_{s}\), although with considerable scatter. We did not, however, find a correlation between \(r_{\rm IHL}\) and \(r_{s}\) among individual systems, i.e. the scatter in \(r_{\rm IHL}\) at fixed \(M_{200}\) cannot be attributed to differences in halo concentrations. The apparent similarity between \(r_{\rm IHL}\) and \(r_{s}\) may be coincidental.
Finally, note that the \(r_{\rm IHL}\) values we obtain at cluster mass scales are consistent with observational results suggesting that the sphere of influence of BCGs can extend to \(\approx 200\) kpc (Chen et al., 2022), which
Figure 10: The IHL transition radius, \(r_{\rm IHL}\), plotted as a function of halo mass. Median values are shown separately for discs (blue squares) and spheroids (orange squares), and error bars represent the interquartile scatter. We show individual galaxies as circles for mass bins that contain fewer than 10 galaxies; crosses show galaxies in the C-Eagle sample. The median \(r_{s}-\mathrm{M}_{200}\) relation for Eagle is plotted as a grey dotted line and was taken from Schaller et al. (2015).
Figure 9: Stellar half mass size, \(r_{50}\), plotted versus \(M_{200}\) for the disc, bulge, and IHL components (left to right panels, respectively). The thick lines indicate the median relations, and the shaded regions enclose the interquartile scatter. Medians are plotted for mass bins that contain at least 10 galaxies (dashed lines for 50-HiResDM, solid lines for Ref-L0100N1504); individual galaxies are plotted beyond this point. The grey lines show the median \(r_{50}-M_{200}\) relations for the whole galaxy, and are repeated in each panel for comparison. The light blue line with black boundaries plotted in the middle panel shows the empirical relation \(r_{50}=0.18\times r_{s}\).
is significantly larger than the fixed spherical apertures often used in theoretical work (e.g. 30 kpc; Montenegro-Taborda et al., 2023).
### Variation of the IHL fraction with host galaxy properties
In the left panel of Fig. 11, we plot \(f_{\rm IHL}\) versus \(M_{\star}\) and compare our simulation results to values obtained from observations of nearby disc galaxies (observational data were taken from Table 4 of Harmsen et al., 2017; our simulated galaxies are colour-coded by their disc mass fractions). The MW and M31 are shown using green symbols, and the rest are plotted in beige (with black edges). For the observed discs, we have included the contribution of the IHL to the total stellar mass to be consistent with the stellar masses used for our simulated galaxies.
In the right panel of Fig. 11, we plot \(f_{\rm IHL}\) versus \(M_{200}\) and compare with observed IHL fractions for a few galaxy clusters,8 as well as for the MW and M31 (the virial masses for the MW and M31 were taken from Shen et al., 2022 and Fardal et al., 2013, respectively; in this case, our simulated galaxies have been coloured according to the half mass ages of their stellar IHL particles). Note that at the galaxy cluster scale, it is common for IHL fractions to be expressed relative to the total stellar mass bound to the halo, inclusive of satellite galaxies. For our analysis, we only include observational data for which the contribution from satellite galaxies was excluded from the reported IHL fractions.
Footnote 8: Observed IHL fractions at the group and cluster scale are typically expressed as light fractions. Here, we assume a constant mass-to-light ratio when comparing to our simulation results.
In both panels, solid and dashed grey lines show the running medians obtained from Ref-L0100N1504 and 50-HiResDM, respectively. The median \(f_{\rm IHL}\) values are in good agreement between the two simulations at all mass scales plotted, suggesting that our IHL mass estimates are robust to the spurious heating of stellar particles by DM particles. For simulated galaxies with no discernible IHL, we instead plot \(f_{\rm 50-100kpc}\) (grey triangles in both panels of Fig. 11), which can be interpreted as a lower limit on their IHL fractions (see Fig. 6).
The left panel of Fig. 11 shows that there is some overlap in the \(f_{\rm IHL}\) values obtained for our simulated galaxies and from observations (although the simulated galaxies are biased toward slightly higher \(f_{\rm IHL}\), on average). The IHL fractions of simulated galaxies measured using aperture-based methods (e.g. Pillepich et al., 2018; Elias et al., 2018) or 6D phase-space information (e.g. Canas et al., 2020) are also typically higher than observed IHL fractions. This suggests that the IHL of simulated galaxies may genuinely outweigh that of observed ones, although it has been noted that \(f_{\rm IHL}\) values derived from single-band photometric data (such as those in Merritt et al., 2016) are likely lower limits on the true IHL fractions (Sanderson et al., 2018). We also stress that the observed IHL fractions plotted in the left panel of Fig. 11 correspond to disc-dominated galaxies, and more closely coincide with the IHL fractions of disc-dominated galaxies in our simulations (i.e. those corresponding to the blue coloured points).
The left panel of Fig. 11 also shows that there is a large diversity in the IHL fractions of simulated galaxies at fixed stellar mass. For those that span the stellar mass range of the MW and M31 (i.e. the vertical shaded region in the left-hand panel of Fig. 11), the rms scatter (in \(\log\ f_{\rm HIL}\)) about the median IHL fraction is about 0.4 dex, corresponding to a factor of \(\approx 3.2\) in IHL mass. The scatter, however, clearly correlates with galaxy morphology: at fixed \(M_{\star}\), galaxies with higher IHL fractions tend to have lower disc fractions, and vice versa.
The colour coding of points in the right-hand panel of Fig. 11 shows that there is also a correlation between the IHL mass fraction and its half mass age, \(t_{\rm 50,HIL}\), at least among low-mass galaxies (i.e. \(M_{\rm 200}\lesssim 10^{12.5}\,\rm M_{\odot}\)). Note, however, that the correlation between \(t_{\rm 50,HIL}\) and \(f_{\rm HIL}\) disappears at high masses (see inset in the right-hand panel Fig. 11). At the scale of galaxy clusters, the IHL fractions obtained from our simulations approach a constant value of \(f_{\rm HIL}\)\(\approx 0.45\), in broad agreement with Kluge et al. (2021), who finds \(f_{\rm HIL}\)\(=0.52\).
The fact that the IHL of disc dominated galaxies tends to be older and less massive than that of spheroids of similar stellar mass can be interpreted as follows. Galaxies with high disc mass fractions are unlikely to have experienced recent disruptive mergers that could potentially contribute to the growth of their IHL. As a result, what IHL they do possess tends to be older than that of spheroidal galaxies, which are more likely to have experienced recent mergers. This interpretation is supported by the conclusions of Deason et al. (2019), who showed that the low IHL mass of the MW can be largely explained by an ancient (\(\approx 10\) Gyr) merger event with a massive progenitor whose stellar mass now dominates its IHL.
The diagonal dashed line in the left-hand panel of Fig. 11 highlights the IHL fraction corresponding to roughly 100 stellar particles. Clearly, the stellar haloes of many of the low-mass galaxies in our simulations (i.e. those with \(M_{\star}\lesssim 10^{10}\,\rm M_{\odot}\)) are only resolved by a few 10s to a few 100s of stellar particles. Assessing the properties of their IHL components, such as their shape or spatial and kinematic structure, will likely require simulations that reach higher baryonic mass resolution than our runs.
## 5 Conclusions and Outlook
We used Gaussian Mixture Models (GMMs) to decompose the structural components of central galaxies identified in the \(z=0\) output of the Eagle simulation. Most of our analysis was based on galaxies identified in Ref-L0100N1504, i.e. the 100 cubic Mpc flagship simulation of the Eagle Project (Schaye et al., 2015), which we supplemented with 30 galaxy clusters from the C-Eagle Project (Bahe et al., 2017; Barnes et al., 2017), and several hundred from 50-HiResDM (Ludlow et al., 2023). The latter run used the same numerical and subgrid model parameters as Ref-L0100N1504, but was carried out in a smaller volume (50 cubic Mpc) and with seven times higher mass resolution in the DM component. Combined, these simulations allowed us to study the disc, bulge, and intra-halo light (IHL) of galaxies across a range of environments and over four orders of magnitude in halo mass (\(10^{11.2}\,\mathrm{M}_{\odot}\leq M_{200}\leq 10^{15.4}\,\mathrm{M}_{\odot}\)).
* Our galaxy decomposition technique is robust to small variations in \(n_{c}\), i.e. the number of Gaussian distributions used by the GMM to isolate kinematically distinct galaxy components (see Section 3). The stellar mass fractions assigned to the disc, bulge, and IHL components of galaxies are, on average, independent of \(n_{c}\) provided \(8\leq n_{c}\leq 15\) (although there is some variation on the level of individual galaxies). For most of our analysis we used \(n_{c}=12\), which resulted in disc fractions that correlate strongly with alternative kinematic morphology estimators, such as \(\kappa_{\mathrm{co}}\)(Correa et al., 2017; see Fig. 5). We also reproduce the morphology-dependence of the stellar-to-halo mass relation reported in previous observational and theoretical work (Fig. 7).
* Of the 3342 galaxies in our sample, roughly 72 per cent were classified as discs (i.e. have a non-zero disc mass fraction based on the \(n_{c}=3\) GMM; see Section 3.1), the remainder were classified as spheroids. We find that 34 per cent of the disc population are disc dominated (i.e. have \(f_{\mathrm{disc}}>0.5\)). Disc galaxies primarily occupy haloes with virial mass \(M_{200}\lesssim 10^{12.5}\,\mathrm{M}_{\odot}\), whereas spheroids dominate in higher-mass haloes. Less than 2 per cent of galaxies in our sample possess no IHL component; these are primarily low-mass, disc-dominated systems (i.e. they typically have disc mass fractions \(f_{\mathrm{disc}}>0.5\)).
* The basic properties of the disc and bulge components of galaxies are consistent with well-established observed trends: discs host younger stellar populations than bulges, have higher star formation rates, and, on comparable mass scales, are systematically larger than bulges (as quantified by their stellar half-mass radii). Discs also contain a smaller contribution from ex-situ stars than bulges, by roughly a factor of 2 (see Fig. 8).
The primary aim of our work, however, was to study the properties of the IHL components of \(z=0\) galaxies using a consistent methodology for IHL identification, and across a broad range of galaxy and halo masses. The results listed above give us confidence that our galaxy decomposition technique is sensible and robust, and that our IHL definition is on firm footing. The main insights into the IHL components of galaxies that our work provides are summarised as follows:
* Compared to discs and bulges, the IHL component of galaxies contains the highest fraction of ex-situ stars, which contribute roughly 40 per cent of the IHL mass at \(M_{200}\approx 10^{12}\,\mathrm{M}_{\odot}\) and about 70 per cent at \(M_{200}\approx 10^{14}\,\mathrm{M}_{\odot}\). The average SFR of the IHL (averaged over a lookback time of 500 Myr) is intermediate between that of bulges (which have lower SFRs) and discs (which have higher SFRs). The IHL is, on average, composed of older stellar populations than discs, but younger stellar populations than spheroids (Fig. 8). At all mass scales, the IHL component is more extended than the disc or spheroid component (as quantified by the half stellar mass radius of each component; Fig. 9). The SFRs, ages, and sizes of the disc, bulge, and IHL components studied in this work are converged between the Ref-L0100N1504 and 50-HiResDM simulations, indicating that our results are robust to the effects of spurious collisional heating (see Ludlow et al., 2023, for details).
Figure 11: Left panel: \(f_{\mathrm{IHL}}\) as a function of stellar mass. Galaxies in Ref-L0100N1504 and 50-HiResDM are plotted as circles and are colour coded by \(f_{\mathrm{disc}}\) (those without an IHL component are plotted as grey triangles using the fraction of stellar mass at \(r>100\,\mathrm{kpc}\) as a lower limit for \(f_{\mathrm{IHL}}\)). The median relations for the 50-HiResDM and Ref-L0100N1504 simulations are plotted in dashed and solid grey lines, respectively. C-Eagle galaxies are plotted using crosses. Observational results from the GHOSTS survey are plotted as beige pentagons with black boundaries (Harmsen et al., 2017); results from the Dragonfly Nearby Galaxies Survey (DNGS; Merritt et al., 2016) are plotted as beige squares (the IHL fraction for these galaxies was estimated using the approach of Harmsen et al., 2017). Results for the MW (Deason et al., 2019) and M31 (Harmsen et al., 2017) are plotted using a green cross or pentagon, respectively. The observed IHL fractions of disc-dominated galaxies, where required, have been expressed relative to the total stellar mass in the central galaxy (see text for details). Right panel: \(f_{\mathrm{IHL}}\) as a function of \(M_{200}\). Eagle galaxies are colour coded by the median stellar age of the IHL (\(t_{50,\mathrm{IHL}}\)). Observational results are shown for the Fornax cluster (Spavone et al., 2020), A85 (Montes et al., 2021), and a compilation of 170 local clusters by Kluge et al. (2021). Halo mass estimates for the MW and M31 were taken from Shen et al. (2022) and Fardal et al. (2013), respectively. The inset panel zooms in on \(f_{\mathrm{IHL}}\) for haloes with \(\mathrm{M}_{200}>10^{13.8}\,\mathrm{M}_{\odot}\), for which the colour bar has been appropriately rescaled.
* The fraction of mass assigned to the IHL, \(f_{\mathrm{IHL}}\), increases with increasing stellar and halo mass (although slightly), but at fixed mass exhibits large scatter. For MW mass galaxies (i.e. \(M_{\star}\approx 10^{10}\,\mathrm{M}_{\odot}\) or \(M_{200}\approx 10^{12}\,\mathrm{M}_{\odot}\)) we find \(f_{\mathrm{IHL}}\approx 0\), which increases to \(f_{\mathrm{IHL}}\approx 0.45\) at the group and cluster scale (i.e. \(M_{200}\gtrsim 10^{13}\,\mathrm{M}_{\odot}\)). The IHL fractions we obtain from our GMMs are in broad agreement with observed values obtained for discs and BCGs (Fig. 11; although several nearby disc galaxies have systematically lower IHL fractions than discs in our simulations).
* At halo masses \(M_{200}\lesssim 10^{12.5}\mathrm{M}_{\odot}\), the scatter in \(f_{\mathrm{IHL}}\) is closely connected to the kinematic morphology of a galaxy: at a fixed halo mass, disc-dominated galaxies have lower IHL fractions than spheroidal galaxies. The IHL fraction also correlates (albeit weakly) with the median age of the IHL (Fig. 11) in such a way that the IHL of disc-dominated galaxies is systematically older than the IHL of spheroids of the same mass. This supports the idea that most disc-dominated galaxies have not undergone any recent mergers that could have increased their IHL fractions, and that, more broadly, the IHL fraction of a galaxy holds valuable information about its assembly history.
* For halo masses \(M_{200}\gtrsim 10^{13}\,\mathrm{M}_{\odot}\), the various correlations between the IHL mass fraction, its age, and galaxy morphology no longer hold. This may be due to the lack of diversity in galaxy morphologies at these mass scales (Fig. 7) or due to similarities in the merger histories of massive haloes. At these mass scales, galaxies are typically spheroidal and both the bulge and IHL components are dominated by ex-situ stars. It is not clear whether a meaningful distinction exists between the bulge and IHL components of galaxies in massive haloes, since both components exhibit similar stellar populations (Fig. 8).
* Finally, we explored how the transition radius, \(r_{\mathrm{IHL}}\), between the IHL and inner galaxy components depends on halo mass and galaxy morphology. For galaxies with \(M_{200}\lesssim 10^{12.8}\,\mathrm{M}_{\odot}\), we found that \(r_{\mathrm{IHL}}\approx 30\,\mathrm{kpc}\), irrespective of morphology. For \(M_{200}\gtrsim 10^{12.8}\,\mathrm{M}_{\odot}\), where spheroids dominate, \(r_{\mathrm{IHL}}\) depends strongly on \(M_{200}\), increasing to \(r_{\mathrm{IHL}}\approx 300\,\mathrm{kpc}\) at \(M_{200}\approx 10^{15}\,\mathrm{M}_{\odot}\) (Fig. 10). Our methodology also predicts significant spatial overlap between galaxy components, implying that \(r_{\mathrm{IHL}}\) (or any other spherical aperture) cannot be used to distinguish the IHL from the other structural components of a galaxy.
We believe our results provide a sensible assessment of some of the basic properties of the disc, bulge, and IHL components of simulated galaxies. We plan to leverage the galaxy decomposition technique introduced in this work to explore properties of the progenitors of the IHL (as well as the progenitors of the ex-situ components of discs and bulges) and how they depend on halo mass.
Although our simulated galaxy sample spans a large range of halo masses (\(10^{11.2}\,\mathrm{M}_{\odot}\leq M_{200}\leq 10^{15.4}\,\mathrm{M}_{\odot}\)), massive galaxy clusters are sparsely sampled, and the IHL of low-mass galaxies (\(M_{200}\lesssim 10^{12}\,\mathrm{M}_{\odot}\)) is poorly resolved (typically containing only a few hundred to a few thousand stellar particles). Future work on the subject may benefit from recent large volume cosmological simulations that provide much larger samples of massive clusters (e.g. Pakmor et al., 2022; Kugel et al., 2023; Schaye et al., 2023). Likewise, cosmological simulations with higher baryonic resolution than Eagle will be useful for exploring the structure of the IHL of low-mass galaxies, which is crucial if we wish to properly interpret the low IHL fraction of the MW and other nearby galaxies, and to properly place their formation histories in a wider cosmological context (e.g. Evans et al., 2020). Such simulations will also enable investigations into the IHL of dwarf galaxies, which, due to their low surface brightness, are difficult to study observationally but are potentially powerful probes of dark matter models (Deason et al., 2022).
## Data Availability
The EAGLE simulations are publicly available; see McAlpine et al. (2016); The EAGLE team (2017) for how to access EAGLE data. Any additional data used in this work can be made available upon reasonable request. The decomposition code is publicly available.9
Footnote 9: [https://github.com/katyproector/decomp](https://github.com/katyproector/decomp)
## Acknowledgements
We wish to thank Joel Pfeffer for providing ex-situ particle classifications, and Matthieu Schaller for providing best-fitting NFW concentrations for the Ref-L0100N1504 simulations. KLP thanks Annette Ferguson, Azi Fattahi, and Alexander Knebe for useful discussions and thanks the University of Edinburgh for supporting a productive research visit. KLP acknowledges support from the Australian Government Research Training Program Scholarship. CL has received funding from the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013, and the Australian Research Council Discovery Project (DP210101945). ADL acknowledges financial support from the Australian Research Council through their Future Fellowship scheme (project number FT160100250). ASGR acknowledges funding by the Australian Research Council (ARC) Future Fellowship scheme (FT200100374; 'Hot Fuzz'). This work made use of the supercomputer OzSTAR which is managed through the Centre for Astrophysics and Supercomputing at Swinburne University of Technology. This supercomputing facility is supported by Astronomy Australia Limited and the Australian Commonwealth Government through the national Collaborative Research Infrastructure Strategy (NCRIS). We acknowledge the Virgo Consortium for making their simulation data available. The Eagle simulations were performed using the DiRAC-2 facility at Durham, managed by the ICC, and the PRACE facility Curie based in France at TGCC, CEA, Bruyeres-le-Chatel. The C-Eagle simulations were in part performed on the German federal maximum performance computer "HazelHen" at the maximum performance computing centre Stuttgart (HLRS), under project GCS-HYDA / ID 44067 financed through the large-scale project "Hydrangea" of the Gauss Center for Supercomputing. This work has benefitted from the following public python packages: pandas(McKinney, 2010), scipy(Virtanen et al., 2020), numpy(Harris et al., 2020), and matplotlib(Hunter, 2007).
|
2303.00477
|
ORCHNet: A Robust Global Feature Aggregation approach for 3D LiDAR-based
Place recognition in Orchards
|
Robust and reliable place recognition and loop closure detection in
agricultural environments is still an open problem. In particular, orchards are
a difficult case study due to structural similarity across the entire field. In
this work, we address the place recognition problem in orchards resorting to 3D
LiDAR data, which is considered a key modality for robustness. Hence, we
propose ORCHNet, a deep-learning-based approach that maps 3D-LiDAR scans to
global descriptors. Specifically, this work proposes a new global feature
aggregation approach, which fuses multiple aggregation methods into a robust
global descriptor. ORCHNet is evaluated on real-world data collected in
orchards, comprising data from the summer and autumn seasons. To assess the
robustness, we compare ORCHNet with state-of-the-art aggregation approaches on
data from the same season and across seasons. Moreover, we additionally
evaluate the proposed approach as part of a localization framework, where
ORCHNet is used as a loop closure detector. The empirical results indicate
that, on the place recognition task, ORCHNet outperforms the remaining
approaches, and is also more robust across seasons. As for the localization,
the edge cases where the path goes through the trees are solved when
integrating ORCHNet as a loop detector, showing the potential applicability of
the proposed approach in this task. The code will be publicly available
at:\url{https://github.com/Cybonic/ORCHNet.git}
|
T. Barros, L. Garrote, P. Conde, M. J. Coombes, C. Liu, C. Premebida, U. J. Nunes
|
2023-03-01T13:04:45Z
|
http://arxiv.org/abs/2303.00477v2
|
ORCHNet: A Robust Global Feature Aggregation approach for 3D LiDAR-based Place recognition in Orchards
###### Abstract
Robust and reliable place recognition and loop closure detection in agricultural environments is still an open problem. In particular, orchards are a difficult case study due to structural similarity across the entire field. In this work, we address the place recognition problem in orchards resorting to 3D LiDAR data, which is considered a key modality for robustness. Hence, we propose ORCHNet, a deep-learning-based approach that maps 3D-LiDAR scans to global descriptors. Specifically, this work proposes a new global feature aggregation approach, which fuses multiple aggregation methods into a robust global descriptor. ORCHNet is evaluated on real-world data collected in orchards, comprising data from the summer and autumn seasons. To assess the robustness, we compare ORCHNet with state-of-the-art aggregation approaches on data from the same season and across seasons. Moreover, we additionally evaluate the proposed approach as part of a localization framework, where ORCHNet is used as a loop closure detector. The empirical results indicate that, on the place recognition task, ORCHNet outperforms the remaining approaches, and is also more robust across seasons. As for the localization, the edge cases where the path goes through the trees are solved when integrating ORCHNet as a loop detector, showing the potential applicability of the proposed approach in this task. The code and dataset will be publicly available at: [https://github.com/Cybonic/ORCHNet.git](https://github.com/Cybonic/ORCHNet.git)
Localization, place recognition, SLAM, agricultural robotics.
## I Introduction
Place recognition can be understood as a perception-based global localization approach that recognizes previously visited places using visual, structural, and/or semantic cues, from which descriptors are generated. In a typical application, the current descriptor is compared with descriptors from previous input data to identify revisited locations. Lately, place recognition has been used as an efficient loop closure detector in SLAM or localization approaches[1, 2].
In terms of research, the autonomous driving (AD) community is the most active, using place recognition to achieve long-term localization, where data from 3D LiDARs are considered a key modality to gain robustness [3]. Currently, 3D LiDAR-based approaches resort mostly to deep learning (DL) methods for place modeling [4], a technique that has been widely adopted since the recent advancements in DL that allow point clouds to be handled directly by the networks; an example of such a network is PointNet [5].
A less studied field in terms of place recognition and loop closure detection is field robotics, particularly agricultural robotics, where perception-based localization is a hard task to perform over time, due to harsh and changing conditions [6]. Within the agricultural application domains, orchards are a very challenging case study because of the lack of 'relevant' geometric features in the environment when compared to urban-like environments, where features are usually extracted from vehicles, buildings, or other static urban "furniture". In orchards, the low density of the canopies, which may change from season to season, generates sparse scans, which leads to poor descriptive features. Moreover, the disposition of trees in parallel rows, with regular intervals, results in LiDAR scans with similar geometrical data, which makes subsequent scans and scans from neighboring rows almost indistinguishable: a poor characteristic for approaches such as place recognition, where the goal is to learn unique and descriptive features that identify a place.
This work addresses this problem of place recognition and loop closure detection in orchards, proposing ORCHNet, a place modeling approach that maps 3D LiDAR scans to a descriptor space. The main contribution of ORCHNet is the global feature aggregation approach that fuses multiple aggregation approaches into a robust global descriptor.
Fig. 1: Representation of the proposed ORCHNet integrated in a localization framework.
Due to the lack of available orchard datasets for this application, ORCHNet is evaluated on a self-made dataset, which was recorded in orchards in the United Kingdom, using a mobile robot equipped with sensors such as 3D LiDAR and RTK-GPS. The dataset comprises orchard data from the summer and autumn seasons, which allows testing the proposed approach in real changing conditions that occur in orchards.
To evaluate the merit of ORCHNet and, in particular, the robustness across seasons in a place recognition task, two types of experiments were conducted: same season, where the training and test data are from the autumn sequence; and cross-season, where training data is from the autumn sequence and test data is from the summer sequence. ORCHNet was also evaluated as part of a localization framework (see Fig. 1), where it is used to find loops, from which a qualitative assessment is presented.
The empirical results indicate that the proposed approach is more adequate for orchards than other global feature aggregation approaches. Furthermore, the results also show that ORCHNet largely maintains its performance in cross-season experiments. As for the localization task, ORCHNet was able to solve some edge cases where the estimated path goes through the trees.
Succinctly, this work's key contributions are the following:
* A new multi-season dataset1 from orchards in real operational conditions; Footnote 1: The dataset and the code will be made publicly available on GitHub.
* A new robust 3D LiDAR-based place recognition approach adapted for orchards/tree-containing environments.
## II Related Work
Place recognition has been the subject of much research over the last decade, where 3D LiDAR approaches have been a very active topic [4]. 3D-LiDAR sensory data has been considered a key modality to achieve robustness in place recognition due to being invariant to challenging visual changing conditions, which may arise during long-term operation, especially when revisiting places in different seasons and illumination conditions.
3D-LiDAR place recognition approaches have been a natural response to overcome the challenges that visual-based approaches face. Within the 3D-LiDAR-based approaches, Deep Learning (DL) methods are the most common for place modeling. The point cloud-based DL approaches can be split into two major subfields: those that extract features directly from point clouds, such as PointNetVLAD [7] or Minkloc3d [8], and those that extract features indirectly from point clouds, projecting the point clouds to a proxy representation, such as voxels [9], polar coordinates [10] or depth range images [11]. In both approaches, these inputs are fed to a feature extraction module, with the goal of extracting local features, which are, then, aggregated into a global descriptor.
Despite the natural robustness towards appearance-changing conditions, 3D-LiDAR-based approaches still have some limitations, such as generating a global descriptor that is invariant to rotation, which is essential when revisiting a place from the opposite direction. Thus, generating a global descriptor that is invariant to rotations but simultaneously descriptive enough to identify a place is a fine balance that the models have to learn from the data. In this process, the feature aggregation module is of great importance, given that it is in the aggregation step that the local features are converted into a global descriptor, where the networks define which features are the most important from a global perspective. In this regard, several aggregation approaches have been suggested for 3D-LiDAR data, where NetVLAD [7, 12] has been one of the most popular. NetVLAD splits the feature space into clusters to generate a global descriptor. Other approaches comprise attention [13], generalized-mean pooling (GeM) [8], global max pooling (MAC) [14], and average-pooling (SPoC) [15].
This work leverages the GeM, SPoC, and MAC aggregation approaches, which are efficient and have been shown to be adequate for point clouds, fusing their respective outputs into a global descriptor.
## III Proposed Approach
This section details the proposed ORCHNet in a retrieval-based place recognition framework. An overview of ORCHNet and the framework is illustrated in Figure 2. The place recognition framework has two main modules: the ORCHNet, which models places by mapping point clouds to a descriptor space; and the Retrieval module. The modeling is achieved through the following main steps: an input point cloud \(S_{i}\) is preprocessed and fed to a feature extractor, which returns local features \(Z\). The local features are aggregated into a global descriptor \(D\), using the proposed global feature aggregation module, which is the main contribution of this work.
Fig. 2: ORCHNet’s architecture as part of a retrieval-based place recognition task. ORCHNet receives as input a point cloud \(S_{i}\), which is down-sampled and preprocessed, returning \(S_{i}^{\prime}\). From \(S_{i}^{\prime}\), local features \(Z\) are extracted using a feature extractor. The local features are fed into the global feature aggregation module, which fuses the outputs of GeM, SPoC, and MAC into a global descriptor \(D\). The global descriptor is used to query the database, which, based on a similarity metric, returns the top N loop candidates.
Observation: The sets defined in the following subsections are ordered sets (_i.e._, there is a relation of order between the elements of the set) and this order is defined by the indices of the elements.
### _Pre-processing & Feature Extractor_
The aim of the pre-processing and feature extraction modules is to extract adequate features from the scans, by mapping them to an intermediate feature space. Hence, given a scan \(S_{i}\in\mathbb{R}^{n_{i}\times 3}\) with \(i\in[1,...,N]\) where \(N\) is the number of scans in the sequence and \(n_{i}\) the number of points in the \(i^{th}\) point cloud, the feature extractor maps \(S_{i}\) to a fixed feature vector \(Z_{i}\in\mathbb{R}^{n\times F}\) with \(F\gg 3\). In order to compare different feature extraction approaches, this work resorts to an extractor that handles point clouds directly (PointNet), and another that extracts features from an image-like proxy representation (ResNet50). ResNet50 was initially used on images, but has also been used on other modalities with an appropriate projection to an image-like representation. Both networks were slightly changed to fit this application.
### _Global Feature Aggregation_
The proposed global feature aggregation approach takes advantage of three different aggregation methods: MAC[14], GeM[16, 8] and SPoC[15], fusing their outputs into a global descriptor using a trainable weighted sum. The proposed approach is illustrated in Fig.2. It receives a feature vector \(Z\in\mathbb{R}^{N\times F}\) and outputs a descriptor \(D=[d_{k}]\in\mathbb{R}^{K}\), being a function \(\mu:\mathbb{R}^{N\times F}\rightarrow\mathbb{R}^{K}\).
#### III-B1 MAC
The MAC module is a global max pooling operator, which computes from input features \(Z=[z_{i,j}]\in\mathbb{R}^{N\times F}\), the maximum activation value of the \(i^{th}\) dimension:
\[z_{i}^{M}=\max_{j}\,z_{i,j},\,\,\forall i\in[1,...,N] \tag{1}\]
with \(Z^{M}=[z_{i}^{M}]\in\mathbb{R}^{N}\). Then, \(Z^{M}\) is fed to a fully-connected function, \(h^{M}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{K}\), which outputs the MAC descriptor \(D^{M}=[d_{k}^{M}]\) with \(K\) dimensions.
#### III-B2 SPoC
The SPoC module is an average pooling operator, which computes, from the input features \(Z=[z_{i,j}]\in\mathbb{R}^{N\times F}\), the average activation value of the \(i^{th}\) dimension :
\[z_{i}^{S}=\frac{1}{F}\sum_{j=1}^{F}z_{i,j},\,\,\forall i\in[1,...,N] \tag{2}\]
with \(Z^{S}=[z_{i}^{S}]\in\mathbb{R}^{N}\). Then, \(Z^{S}\) is fed to a fully-connected function \(h^{S}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{K}\), which outputs the SPoC descriptor \(D^{S}=[d_{k}^{S}]\) with \(K\) dimensions.
#### III-B3 GeM
The GeM module is a generalized-mean pooling operator that is applied to the input feature vector \(Z=[z_{i,j}]\in\mathbb{R}^{N\times F}\):
\[z_{i}^{G}=\left(\frac{1}{F}\sum_{j=1}^{F}z_{i,j}^{p}\right)^{1/p},\,\,\forall i \in[1,...,N] \tag{3}\]
where \(p\in\mathbb{R}\) is a trainable parameter and \(Z^{G}=[z_{i}^{G}]\in\mathbb{R}^{N}\). Then, \(Z^{G}\) is fed to a fully-connected function \(h^{G}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{K}\), which outputs the GeM descriptor \(D^{G}=[d_{k}^{G}]\) with \(K\) dimensions.
#### III-B4 Fusion
The fusion module merges the three descriptors (\(D^{M}\),\(D^{S}\) and \(D^{G}\)) into a global descriptor \(D\), using \(A=[a_{1},a_{2},a_{3}]\in\mathbb{R}^{3}\), which are trainable parameters that learn the best activation combinations. First, the descriptors are concatenated into \(D^{\prime}=[D^{M},D^{S},D^{G}]^{T}=[d_{i,k}^{\prime}]\in\mathbb{R}^{3\times K}\), then a weighted sum is computed:
\[d_{k}=\sum_{i=1}^{3}a_{i}d_{i,k}^{\prime},\,\,\forall k\in[1,...,K] \tag{4}\]
with \(D=[d_{k}]\in\mathbb{R}^{K}\), which is the \(K\)-dimensional output descriptor of the proposed ORCHNet.
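For concreteness, a minimal PyTorch sketch of the aggregation-and-fusion step in Eqs. (1)-(4) is given below. The module and variable names are ours (this is not the authors' released implementation); the GeM exponent initialisation and the clamping used for numerical stability are assumptions, while the fusion weights are initialised from \(\mathcal{N}(0,0.1)\) as stated in the implementation details.

```python
import torch
import torch.nn as nn

class GlobalAggregationSketch(nn.Module):
    """Sketch of ORCHNet's global feature aggregation (Eqs. (1)-(4))."""

    def __init__(self, in_dim: int, out_dim: int, p_init: float = 3.0):
        super().__init__()
        # One fully-connected head per pooling branch: h^M, h^S, h^G : R^N -> R^K
        self.h_mac = nn.Linear(in_dim, out_dim)
        self.h_spoc = nn.Linear(in_dim, out_dim)
        self.h_gem = nn.Linear(in_dim, out_dim)
        self.p = nn.Parameter(torch.tensor(p_init))        # trainable GeM exponent (init is an assumption)
        self.fusion = nn.Parameter(0.1 * torch.randn(3))   # A = [a1, a2, a3] ~ N(0, 0.1)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: local features of shape (batch, N, F)
        z_mac = z.max(dim=-1).values                                           # Eq. (1): max over F
        z_spoc = z.mean(dim=-1)                                                # Eq. (2): mean over F
        z_gem = z.clamp(min=1e-6).pow(self.p).mean(dim=-1).pow(1.0 / self.p)   # Eq. (3): generalized mean
        d = torch.stack([self.h_mac(z_mac),
                         self.h_spoc(z_spoc),
                         self.h_gem(z_gem)], dim=1)                            # (batch, 3, K)
        return (self.fusion.view(1, 3, 1) * d).sum(dim=1)                      # Eq. (4): weighted sum
```

Under a literal reading of the equations, the fully-connected heads act on the pooled vector of length \(N\): with the shapes quoted in Sec. IV-C this would correspond to `GlobalAggregationSketch(in_dim=2048, out_dim=512)` for the ResNet50 branch and `GlobalAggregationSketch(in_dim=20000, out_dim=2048)` for PointNet.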
### _Network Training_
In this work, the global descriptors are trained using the LazyTriplet loss as proposed in [7]. The LazyTriplet loss forces the network to learn a descriptor space where descriptors from the same physical place (positive samples _w.r.t._ the anchor) are close, while descriptors originating from different places (negative samples _w.r.t._ the anchor) are apart.
Fig. 3: Training approach using the LazyTriplet loss. The anchor-positive pair has to be from the same physical place (_i.e._ close within the same line), and the anchor-negative pair has to be from distinct places. The LazyTriplet loss is computed by measuring the similarity distance between the anchor-positive and the anchor-negative pair.
Hence, consider a collection of scans \(\mathcal{S}=\{S_{i}|S_{i}\in\mathbb{R}^{n_{i}\times 3}\}\) and the corresponding poses \(\mathcal{P}=\{p_{i}|p_{i}\in\mathbb{R}^{3}\}\), with \(i\in[1,...,N]\), where \(N\) is the number of samples in the sequence and \(n_{i}\) is the number of points in the \(i^{th}\) scan. Let us consider the set \(\Gamma=\{(S_{i},p_{i})|S_{i}\in\mathcal{S},p_{i}\in\mathcal{P}\}\subseteq \mathcal{S}\times\mathcal{P}\), which defines that for each scan \(S_{i}\) there exists a corresponding pose \(p_{i}\).
Let us define a set \(\Gamma^{A}\subseteq\Gamma\) where each element \((S_{j}^{A},p_{j}^{A})\in\Gamma^{A}\) is an anchor (_i.e._ for which exist a corresponding positive loop) with \(j\in[1,...,J]\) and \(J<N\) is the number of anchors in the sequence \(\mathcal{S}\).
Let us also define (for some parameter \(\gamma\)) a set of positives \(\Gamma_{\gamma}^{P_{j}}\subseteq\Gamma\) (\(\forall j\in[1,...,J]\)) _w.r.t._ the \(j^{th}\) anchor \((S_{j}^{A},p_{j}^{A})\). First, we define the function \(\omega:\Gamma\rightarrow\mathbb{N}\), that maps each pair of \(\Gamma\) to the its row number. Now we can formally define
\[\begin{split}\Gamma_{\gamma}^{P_{j}}=\Big\{(S_{l}^{P_{j}},p_{l}^{P_{j}})\,\big|\,&\,\|p_{l}^{P_{j}}-p_{j}^{A}\|_{2}<r_{th}\\ &\wedge\,\,\omega(S_{l}^{P_{j}},p_{l}^{P_{j}})=\omega(S_{j}^{A},p_{j}^{A})\\ &\wedge\,\,l\notin[j-\gamma,j]\Big\}\end{split} \tag{5}\]
where each element \((S_{l}^{P_{j}},p_{l}^{P_{j}})\in\Gamma_{\gamma}^{P_{j}}\) is a positive _w.r.t._ the \(j^{th}\) anchor. In summary, the positives defined by \(\Gamma_{\gamma}^{P_{j}}\) respect three different conditions, described (in this order) in equation (5): they belong to a neighborhood of the \(j^{th}\) anchor, defined by a radius \(r_{th}\); they belong to the same row as the \(j^{th}\) anchor; and they do not belong to the set of the \(\gamma\) previous elements of \(\Gamma\) _w.r.t._ the \(j^{th}\) anchor.
And, finally, let us also define a set of negatives \(\Gamma^{N_{j}}\subseteq\Gamma\)_w.r.t._ the \(j^{th}\) anchor \((S_{j}^{A},p_{j}^{A})\). The negatives are samples that are outside the region of interest defined by \(r_{th}\), _i.e._
\[\Gamma^{N_{j}}=\{(S_{m}^{N_{j}},p_{m}^{N_{j}})\,|\,\,\|p_{m}^{N_{j}}-p_{j}^{A} \|_{2}\geq r_{th}\} \tag{6}\]
where each element \((S_{m}^{N_{j}},p_{m}^{N_{j}})\in\Gamma^{N_{j}}\) is a negative _w.r.t._ the \(j^{th}\) anchor. In practice, the set of negatives is generated by taking a random sample of the original \(\Gamma^{N_{j}}\).
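A minimal sketch of this anchor/positive/negative selection is shown below, assuming the poses and row labels of all scans are available as arrays. Function and variable names are ours; the 10 m radius and the 20 negatives per anchor follow the implementation details, whereas the value of the temporal exclusion window \(\gamma\) is purely illustrative (the text does not specify it).

```python
import numpy as np

def mine_triplets(poses, rows, anchor_idx, r_th=10.0, gamma=50, n_neg=20, rng=None):
    """Sketch of the positive/negative selection in Eqs. (5)-(6).

    poses : (N, 3) array of robot positions; rows : (N,) row label of each scan;
    anchor_idx : index j of the anchor. gamma = 50 is an illustrative value.
    """
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(poses - poses[anchor_idx], axis=1)       # ||p_l - p_j^A||_2
    recent = np.zeros(len(poses), dtype=bool)
    recent[max(0, anchor_idx - gamma):anchor_idx + 1] = True    # l in [j - gamma, j]
    positives = np.flatnonzero((d < r_th)                       # inside the radius
                               & (rows == rows[anchor_idx])     # same orchard row
                               & ~recent)                       # not a recent frame
    negatives = np.flatnonzero(d >= r_th)                       # Eq. (6)
    negatives = rng.choice(negatives, size=min(n_neg, len(negatives)), replace=False)
    return positives, negatives
```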
The LazyTriplet loss for a given anchor \((S_{j}^{A},p_{j}^{A})\) is then computed by the following expression:
\[\mathcal{L}_{T}=\text{max}(\|d_{j}^{A}-d_{c_{1}}^{P}\|_{2}-\|d_{j}^{A}-d_{c_{2 }}^{N}\|_{2}+m,0) \tag{7}\]
where \(d_{j}^{A}\) is the descriptor of the \(j^{th}\) anchor scan \(S_{j}^{A}\); \(d_{c_{1}}^{P}\) is the descriptor of the closest positive (in the physical space) \((S_{c_{1}}^{P_{j}},p_{c_{1}}^{P_{j}})\in\Gamma_{\gamma}^{P_{j}}\) _w.r.t._ \((S_{j}^{A},p_{j}^{A})\), _i.e._ \(c_{1}=\text{arg\,min}_{l}\,\|p_{l}^{P_{j}}-p_{j}^{A}\|_{2}\); and \(d_{c_{2}}^{N}\) is the descriptor of the closest negative (in the descriptor space) \((S_{c_{2}}^{N_{j}},p_{c_{2}}^{N_{j}})\in\Gamma^{N_{j}}\) _w.r.t._ \((S_{j}^{A},p_{j}^{A})\), _i.e._ \(c_{2}=\text{arg\,min}_{m}\,\|d_{m}^{N}-d_{j}^{A}\|_{2}\). Finally, \(m\) is a margin value. This process is illustrated in Fig. 3.
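The corresponding loss computation can be sketched as follows (PyTorch; names are ours): the positive is the one closest to the anchor in physical space, the negative the one closest in descriptor space, and \(m=0.5\) as in the implementation details.

```python
import torch

def lazy_triplet_loss(d_anchor, d_pos, d_neg, p_pos, p_anchor, margin=0.5):
    """Sketch of Eq. (7). d_anchor: (K,) anchor descriptor, d_pos: (P, K) positive
    descriptors, d_neg: (M, K) negative descriptors, p_pos: (P, 3) positive poses,
    p_anchor: (3,) anchor pose."""
    # c1: positive closest to the anchor in *physical* space
    c1 = torch.linalg.norm(p_pos - p_anchor, dim=1).argmin()
    # c2: negative closest to the anchor in *descriptor* space (hardest negative)
    c2 = torch.linalg.norm(d_neg - d_anchor, dim=1).argmin()
    pos_dist = torch.linalg.norm(d_anchor - d_pos[c1])
    neg_dist = torch.linalg.norm(d_anchor - d_neg[c2])
    return torch.clamp(pos_dist - neg_dist + margin, min=0.0)
```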
## IV Experimental Evaluation
The aim of this work is to evaluate place recognition in orchards; in particular, one of the goals is to assess the robustness of the proposed approach over different seasons. Hence, to support the proposed contributions, this section presents a self-made orchard dataset, the experiments and evaluation protocols, and the implementation details; finally, the results obtained on a retrieval task and as part of a localization framework are discussed.
### _Orchard Dataset_
In this work, a new orchard dataset is proposed for the task of place recognition, but the dataset can also be used for other localization-related tasks such as SLAM. The dataset comprises two sequences, which were recorded in the same Bramley apple orchard located in Kent in the United Kingdom, but in different seasons (_i.e._ summer and autumn), using a Clearpath Husky mobile robot equipped with a Velodyne VLP32 3D LiDAR (10Hz) and a ZED-F9P RTK-GPS (5Hz). These two sequences capture the trees in different flowering states, which introduces an additional challenge due to the changing conditions of the trees and the environment. Both were conducted on bright dry days with low wind speeds. Figure 4 illustrates this robot with the sensors during a recording session. For the purpose of this work, a loop exists when scans from different revisits are within a range of 10 m, _i.e._ an anchor-positive pair is formed by two scans from different revisits that lie within 10 m of each other.
Fig. 4: Illustration of the orchards and the robot platform used to collect data. The orchards are located in the southern part of the United Kingdom and were split into rows; the autumn sequence, which was split into 6 rows, is illustrated here.
In both sequences, the robot traveled in the orchard lines defining paths containing revisited segments from the same and opposite directions. The summer sequence was recorded in July and comprises 3244 scans with synchronized poses. From these scans, 954 are anchors. While the summer sequence has segments that are revisited only once, the autumn sequence has segments with several revisits. The autumn sequence was recorded in November and has 3674 scans with synchronized poses, from which 2311 are anchors. Figure 5 shows the point distributions per scan in each sequence, while the ground-truth loops are illustrated in Fig. 6.a) and Fig. 6.c). Table I summarizes the information regarding the scans and existing loops in each sequence. Figures 6.b) and 6.d) illustrate, for a given anchor, the respective positives within the range of \(10\,\mathrm{m}\) for the summer and autumn sequences, respectively.
### _Evaluation of Place Recognition_
The evaluation of the proposed approach is conducted using the standard retrieval metric, Recall, which is defined as follows:
\[\text{Recall}=\frac{\text{TP (\# Retrieved Loops)}}{\text{TP + FN (\# Loops)}}, \tag{8}\]
namely reporting the recall of the most similar retrieved candidate (_i.e._, recall@1) and the recall of the 1% most similar candidates (_i.e._, recall@1%).
Hence, to evaluate the performance of the proposed approach, the following retrieval protocol was implemented. Let us first consider the set of scans and corresponding poses \(\Gamma=\{(S_{i},p_{i})|S_{i}\in\mathcal{S},p_{i}\in\mathcal{P}\}\). For each element \((S_{i},p_{i})\in\Gamma\) we will denote by \(D_{i}\) the descriptor of \((S_{i},p_{i})\) generated by the model, defining the set of descriptors \(\mathcal{D}=\{D_{i}|D_{i}\in\mathbb{R}^{K}\}\), with \(K\) dimensions and \(|D|=|\Gamma|\). Let us also define a database \(\Theta_{t}=\{D_{m}^{\Theta_{t}}|D_{m}^{\Theta_{t}}\in\mathbb{R}^{K}\}\subseteq \mathcal{D}\) with \(m\in[1,...,M_{t}]\), where \(M_{t}\) is the number of descriptors in the database at iteration \(t\).
Moreover, we will also consider the previously defined set of anchor pairs \(\Gamma^{A}\), to define \(\mathcal{D}^{A}=\{D_{j}^{A}\in D|(S_{j}^{A},p_{j}^{A})\in\Gamma^{A}\}\subseteq \mathcal{D}\), _i.e._ the set of descriptors associated to the anchors \((S_{j}^{A},p_{j}^{A})\in\Gamma^{A}\); let us note that \(|D^{A}|=|\Gamma^{A}|\).
Given an anchor's descriptor \(D_{j}^{A}\in\mathcal{D}^{A}\) and the database \(\Theta_{t}\), the set of \(N\) top candidates \(\mathcal{D}_{N,t}^{j}\) _w.r.t._ the \(j^{th}\) anchor is retrieved from \(\Theta_{t}\) using a K-nearest neighbor (KNN) method and defined as follows: \(\mathcal{D}_{N,t}^{j}=\{d_{n}\in\Theta_{t}\,|\,\|d_{n}-d_{j}^{A}\|_{2}\leq\|d_{n+1}-d_{j}^{A}\|_{2}\ \wedge\ n\in[1,...,N]\}\) (with \(|\mathcal{D}_{N,t}^{j}|=N\)), _i.e._ the set of the \(N\) descriptors of \(\Theta_{t}\) that are closest (in the descriptor space) to the descriptor of the \(j^{th}\) anchor.
Hence, to compute recall@N of a given model (with N representing the top N candidates), given a set of top candidate descriptors \(\mathcal{D}_{N,t}^{j}\), we start by defining \(\Gamma_{N,t}^{j}=\{(S_{n},p_{n})\in\Gamma|d_{n}\in\mathcal{D}_{N,t}^{j}\}\) (with \(|\Gamma_{N,t}^{j}|=|\mathcal{D}_{N,t}^{j}|=N\)), _i.e._ the set of pairs (scans and poses) associated with the set of top candidate descriptors. Then, the number of true positives (in Equation (8)), _w.r.t._ the \(j^{th}\) anchor, is defined by \(|\Gamma_{N,t}^{j}\cap\Gamma_{\gamma}^{P_{j}}|\), _i.e._ the number of common elements in the set of top candidates (\(\Gamma_{N,t}^{j}\)) and the set of positive samples (\(\Gamma_{\gamma}^{P_{j}}\)) defined previously.
Fig. 5: Illustration of the points per frame distributions of a) sequence summer and b) sequence autumn.
Fig. 6: Illustration of the ground-truth paths. Figures a) and c) outline the loops of the summer and autumn sequences respectively, where anchors are highlighted in red, positives in blue, and the remaining points in black. Figures b) and d) outline the positives within \(10\,\mathrm{m}\) for a given anchor of the summer and autumn sequences, respectively, where the anchor is highlighted in red and the positives in blue.

| Sequence | Length [frames] | Loops [frames] | Points/Frame |
| --- | --- | --- | --- |
| Autumn | 3674 | 2311 | 47k \(\pm\) 1k |
| Summer | 3244 | 954 | 50k \(\pm\) 2k |

TABLE I: Number of frames and loops in the dataset. The number of loops corresponds to \(r_{th}\leq 10\,m\).
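A sketch of the retrieval protocol described above (a KNN search over the database followed by the recall@N computation) is given below. It follows the common place-recognition convention, which is also our reading of Eq. (8): a query counts as a hit if at least one of its top-N retrieved candidates belongs to its positive set. The function names and the use of scikit-learn's KNN are our choices, not the authors'.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def recall_at_n(anchor_desc, db_desc, positives_per_anchor, n=1):
    """anchor_desc: (n_anchors, K) query descriptors; db_desc: (n_db, K) database
    descriptors; positives_per_anchor: list of arrays with the database indices of
    the ground-truth positives of each anchor. Returns recall@n."""
    knn = NearestNeighbors(n_neighbors=n).fit(db_desc)   # database Theta_t (Euclidean metric)
    _, idx = knn.kneighbors(anchor_desc)                 # (n_anchors, n) candidate indices
    hits = sum(len(set(cand) & set(pos)) > 0
               for cand, pos in zip(idx, positives_per_anchor))
    return hits / len(anchor_desc)
```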
The proposed approaches are compared with state-of-the-art global feature aggregation approaches such as NetVLAD [7], GeM [8], MAC [8], and SPoC [15]. Thus, in order to establish a fair comparison among the various methods, and assess the merit of each approach, all global feature aggregation methods are evaluated on the same feature extractor (_i.e._ backbone) network. As for the feature extractor, two approaches are used in this work: PointNet [5] and ResNet50 [17]. These two networks process the point clouds differently: while PointNet is a neural network that learns features directly from the point cloud, ResNet50 is a CNN-based network, which was initially proposed for images, thus requiring the point clouds to be projected to a proxy representation such as a spherical or Bird's-eye View (BEV) image. In this work, in ResNet50-based experiments, the point clouds are projected to the BEV representation. Comparing the two feature extractors allows a deeper assessment of the proposed approach when fed with distinct input representations.
### _Implementation and Training Details_
All models were trained for 100 epochs on an NVIDIA GeForce RTX 3090 GPU, using the closest positive and 20 negatives. The margin value \(m\) was set to 0.5, and the model parameters were optimized using the AdamW optimizer with a learning rate (\(Lr\)) of 0.0001 and a weight decay (\(Wd\)) of 0.0005. Moreover, all proposed experiments were conducted on Python 3.8, using PyTorch with CUDA 11.6 for the networks.
In all experiments, if not otherwise specified, the point clouds were cropped along the x-axis and y-axis at \(\pm 15\,m\), and then randomly down sampled to 20k points for PointNet and to 512 points for ResNet50. Additionally, at each training step, the point clouds are augmented by applying a random rigid body transformation with a maximum rotation of \(\pm 180^{\circ}\).
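A minimal numpy sketch of this preprocessing and augmentation step is shown below (names are ours). Only the rotational part of the rigid-body augmentation is sketched, and restricting it to a yaw rotation about the vertical axis is an assumption on our part.

```python
import numpy as np

def preprocess_scan(points, n_out=20000, crop=15.0, augment=False, rng=None):
    """points: (n, 3) LiDAR scan. Crop at +/- crop metres in x and y, randomly
    downsample to n_out points, and optionally apply a random yaw rotation of
    up to +/-180 degrees as training-time augmentation."""
    rng = rng or np.random.default_rng()
    keep = (np.abs(points[:, 0]) <= crop) & (np.abs(points[:, 1]) <= crop)
    pts = points[keep]
    idx = rng.choice(len(pts), size=n_out, replace=len(pts) < n_out)  # random subsampling
    pts = pts[idx]
    if augment:
        theta = rng.uniform(-np.pi, np.pi)            # rotation in [-180, 180] degrees
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        pts = pts @ rot.T
    return pts
```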
Furthermore, as suggested in [7], the number of clusters of NetVLAD is set to 64. The trainable parameters \(A=[a_{1},a_{2},a_{3}]^{T}\) are initialized based on a normal distribution with \(\mu=0\) and \(\sigma=0.1\). Moreover, the configuration of each network was adjusted in order to return the best performance. For instance, ResNet50 presented the best performance with a global descriptor size of 512 dimensions, while PointNet required 2048 dimensions.
#### IV-C1 PointNet-based
PointNet is a neural network that learns features directly from the point clouds. In this work, PointNet is cropped before the max pooling layer, as proposed in [7]. Despite learning from point clouds directly, PointNet needs a fixed-size input; hence, instead of using the raw scan directly as input, \(S_{i}\) is down sampled to \(S_{i}^{\prime}\).
PointNet receives as input a tensor \(S^{\prime}\in\mathbb{R}^{b\times 1\times n^{\prime}\times F}\), where \(b\) is the batch size, \(n^{\prime}\) the number of points (already down sampled), and F the feature dimensions. In this work, the batch is set to \(20\) for testing and \(1\) for training, the number of points is set to \(20k\), and \(F\) is set to \(3\). The first layer of the network maps this 3-dimensional tensor to 1024 dimensions, which are down-scaled to 512 dimensions at the PointNet's output (_i.e._\(Z\in\mathbb{R}^{b\times 20k\times 512}\)). The global feature aggregation module receives this \(Z\) tensor and output a descriptor \(D\in\mathbb{R}^{b\times 2048}\).
#### IV-C2 ResNet50-based
ResNet50 is a CNN-based network, which extracts features from image-like representations. In this work, when using ResNet50 as feature extractor, point clouds are projected to a BEV representation, which is obtained by discretizing the point clouds into a 2D grid, where the cells are populated with the height (\(z\) coordinate), the point density, and the intensity of the points that fall into the corresponding cells. Hence, the process of extracting features from point clouds using ResNet50 is the following: first the scan \(S_{i}\in\mathbb{R}^{n\times 3}\) is down sampled to \(S_{i}^{\prime}\in\mathbb{R}^{500\times 3}\), which is projected to an image \(B_{i}=[R_{h},R_{d},R_{i}]\in\mathbb{R}^{b\times 3\times h\times w}\), which is the concatenation of the height (\(z\) coordinate) \(R_{h}\in\mathbb{R}^{h\times w}\), the point density \(R_{d}\in\mathbb{R}^{h\times w}\) and the intensity \(R_{i}\in\mathbb{R}^{h\times w}\) of the scan's projection into the BEV representation, where \(b\) is the batch size, and \(h\) and \(w\) are the image's height and width, respectively. In this work, \(h\) and \(w\) are set to 256 each, and the batch is set to 20 during testing, and to 1 during training. ResNet50 uses \(B_{i}\) as input and outputs a tensor \(Z^{\prime}\in\mathbb{R}^{b\times 2048\times 16\times 16}\), which is reshaped to \(Z\in\mathbb{R}^{b\times 2048\times 256}\). Then, \(Z\) is fed to the global feature aggregation module, which outputs a global descriptor \(D\in\mathbb{R}^{b\times 512}\).
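The BEV projection can be sketched as follows. The three channels and the 256x256 grid follow the text, while the per-cell statistics (maximum height and mean intensity) and the grid extent are our assumptions.

```python
import numpy as np

def bev_projection(points, intensity, h=256, w=256, extent=15.0):
    """points: (n, 3), intensity: (n,). Returns a (3, h, w) BEV image B = [R_h, R_d, R_i]
    with per-cell height, point density and intensity (cf. Sec. IV-C2)."""
    xi = ((points[:, 0] + extent) / (2 * extent) * (h - 1)).astype(int).clip(0, h - 1)
    yi = ((points[:, 1] + extent) / (2 * extent) * (w - 1)).astype(int).clip(0, w - 1)
    height = np.zeros((h, w))
    density = np.zeros((h, w))
    intens = np.zeros((h, w))
    for x, y, z, it in zip(xi, yi, points[:, 2], intensity):
        height[x, y] = max(height[x, y], z)   # height channel: max z per cell (assumption)
        density[x, y] += 1.0                  # density channel: point count per cell
        intens[x, y] += it                    # accumulate intensity, averaged below
    intens = np.divide(intens, density, out=np.zeros_like(intens), where=density > 0)
    return np.stack([height, density, intens])
```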
### _Retrieval Performance_
This section presents and discusses the empirical results obtained with ORCHNet. Two main experiments were conducted: same season and cross season. As for the experiments on data from the same season, the autumn sequence was split into \(60\%\) for training and \(40\%\) for testing. As for the cross-season experiments, the models were trained on the autumn sequence and tested on the summer sequence. Table II summarizes the split in terms of training and testing samples for each experiment.
The results obtained on the same season, which are presented in Table III, indicate that all methods, ORCHNet included, have a poor performance when retrieving only one candidate (_i.e._ recall@1). The reason behind this low performance can be attributed to the low geometrical differences in orchards, which leads to the networks' incapacity to extract descriptive features in order to distinguish the scans from the various lines. On the other hand, when retrieving the top \(1\%\) (of the elements in the database) as loop candidates, which in this experiment were 31 candidates, the performance grows considerably to almost perfect recall. This means that, despite the ambiguity in the top 1 candidate, when retrieving \(1\%\), all networks are able to achieve reasonable performance, with ORCHNet outperforming the remaining methods.

| Method | Recall@1 (PointNet) | Recall@1 (ResNet) | Recall@1% (PointNet) | Recall@1% (ResNet) |
| --- | --- | --- | --- | --- |
| VLAD | 0.11 | 0.41 | 0.68 | 0.92 |
| SPoC | 0.50 | 0.44 | 0.85 | 0.93 |
| GeM | 0.49 | 0.41 | 0.84 | 0.93 |
| MAC | 0.49 | 0.44 | 0.84 | 0.92 |
| ORCHNet (ours) | **0.52** | **0.45** | **0.92** | **0.94** |

TABLE III: Empirical results obtained on the same-season experiment, where the autumn sequence was split into 60% for the training set and 40% for the test set.
The results obtained on the cross-season experiment, which are presented in Table IV, indicate that, although all models lose some performance, as expected, PointNet-based approaches still perform better at top-1 retrieval, while ResNet50-based approaches have higher performance at the top 1%. The results also indicate that ORCHNet is able to maintain its performance across seasons, thus showing that it is robust to seasonal changes.
### _Evaluation on Localization Framework_
ORCHNet was additionally evaluated as part of an Adaptive Monte Carlo Localization (AMCL) framework, for which qualitative results from the autumn sequence are reported. The evaluation consists of comparing the generated paths from the AMCL (only), AMCL-ORCHNet, SLAM, and RTK-GPS. First, an HMaps-based SLAM approach as proposed in [18] was used to create an environment representation of the autumn sequence (see Fig. 7). Then, the AMCL approach is adapted to take into account a set of loop proposals provided by the ORCHNet model. The adaptation was made in the AMCL's resampling stage, where a stratified approach as proposed in [19] was used to select whether (i) a given particle is sampled from the current set of particles or (ii) it is sampled from the set of loop candidates. The sampling of loop proposals is conditioned on the similarity score of each loop candidate, meaning that more similar place proposals have a higher sampling probability. This is achieved with a multinomial resampling approach. Additionally, a scan-matching-based LiDAR odometry approach is used to predict the particles' state. As for the weight update, it is important to note that only a small number of points (1 in 50) is used to compute each particle's state.
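A sketch of this modified resampling stage is given below; the function name and the stratification ratio are illustrative, and this is only our reading of the description. Each new particle is drawn either from the current particle set, with probability proportional to its importance weight, or from the ORCHNet loop proposals, with multinomial probabilities proportional to their similarity scores.

```python
import numpy as np

def resample_with_loops(particles, weights, loop_poses, loop_scores,
                        loop_ratio=0.1, rng=None):
    """particles: (N, 3) poses; weights: (N,) normalised importance weights;
    loop_poses: (M, 3) poses of loop candidates proposed by ORCHNet;
    loop_scores: (M,) similarity scores. loop_ratio is an illustrative knob for
    the stratified split between the two sampling sources."""
    rng = rng or np.random.default_rng()
    n = len(particles)
    new_particles = np.empty_like(particles)
    have_loops = len(loop_poses) > 0
    if have_loops:
        loop_p = loop_scores / loop_scores.sum()   # multinomial over loop candidates
    for i in range(n):
        if have_loops and rng.random() < loop_ratio:
            # draw a loop proposal: more similar places are more likely to be sampled
            new_particles[i] = loop_poses[rng.choice(len(loop_poses), p=loop_p)]
        else:
            # standard importance resampling from the current particle set
            new_particles[i] = particles[rng.choice(n, p=weights)]
    return new_particles
```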
To complement the results, a qualitative assessment of the proposed AMCL-ORCHNet framework is presented in Fig. 7, where it is compared with an AMCL implementation (_i.e._, without loop closure detection), the HMaps-based SLAM and the ground truth path obtained with an RTK-GPS. Although the autumn sequence is a spatially small orchard, where each scan is highly overlapped, the sparse nature of the point clouds, which can be seen in Fig. 1, imposes some challenges in the registration of the scans in the map. Despite the challenges, the HMaps-based SLAM approach produces a good map representation. Figure 7 shows that the baseline AMCL achieved, overall, a reasonable localization performance; however, on the rightmost part of the geometric path (highlighted in red and depicted in Fig. 8), the AMCL goes through the trees in several locations, which is corrected in the AMCL-ORCHNet approach. On the other hand, the AMCL-ORCHNet approach computes a similar path to the one obtained with the HMaps-based SLAM and the RTK-GPS in that region. These results highlight the applicability and suitability of the proposed ORCHNet as part of a localization framework in real-world conditions.

| Method | Recall@1 (PointNet) | Recall@1 (ResNet) | Recall@1% (PointNet) | Recall@1% (ResNet) |
| --- | --- | --- | --- | --- |
| VLAD | 0.061 | 0.35 | 0.58 | 0.92 |
| SPoC | 0.43 | 0.41 | 0.76 | 0.93 |
| GeM | 0.43 | 0.38 | 0.79 | 0.93 |
| MAC | 0.40 | 0.40 | 0.79 | **0.96** |
| ORCHNet (ours) | **0.50** | **0.42** | **0.85** | 0.95 |

TABLE IV: Empirical results obtained on the cross-season experiment, where the autumn sequence was used for training and the summer sequence for testing.
Fig. 7: Autumn sequence and localization framework results. From top to bottom: RTK-GPS data, HMaps-based SLAM, AMCL, and AMCL with place proposals provided by the ORCHNet.
## V Conclusions
In this work, a 3D LiDAR-based place recognition approach (called ORCHNet) is proposed by combining the descriptors of multiple global feature aggregation methods to obtain robust global descriptors. The proposed ORCHNet is evaluated in place recognition and in an AMCL-based localization framework as a loop closure detector.
The experimental evaluation of the ORCHNet was carried out with real-world data collected in orchards during the summer and autumn seasons, using a mobile robot platform equipped with a 32-channel LiDAR, as part of an effort to increase the availability of orchard datasets in the literature. Two different task-domain experiments were conducted: the first focused on 3D LiDAR-based place recognition, while the second explored the integration of ORCHNet in a localization framework. Regarding LiDAR-based place recognition, ORCHNet's robustness is compared with other state-of-the-art methods and demonstrated on challenging orchard datasets, where the proposed approach proves to be reliable across seasons. As part of a localization framework, the AMCL-ORCHNet approach corrected the path segments that the AMCL baseline was not able to estimate correctly.
## Acknowledgments
This work has been supported by the project GreenBotics (ref. PTDC/EEI-ROB/2459/2021), founded by Fundacao para a Ciencia e a Tecnologia (FCT), Portugal. It was also partially supported by FCT through grant UIDB/00048/2020 and under the PhD grant with reference 2021.06492.BD. The authors would also like to thank Dr Charles Whitfield at NIAB East Malling for facilitating orchard data collection campaigns.
|
2310.18300
|
Quantum simulation of the tricritical Ising model in tunable Josephson
junction ladders
|
Modern hybrid superconductor-semiconductor Josephson junction arrays are a
promising platform for analog quantum simulations. Their controllable and
non-sinusoidal energy/phase relation opens the path to implement nontrivial
interactions and study the emergence of exotic quantum phase transitions. Here,
we propose the analysis of an array of hybrid Josephson junctions defining a
2-leg ladder geometry for the quantum simulation of the tricritical Ising phase
transition. This transition provides the paradigmatic example of minimal
conformal models beyond Ising criticality and its excitations are intimately
related with Fibonacci non-Abelian anyons and topological order in two
dimensions. We study this superconducting system and its thermodynamic phases
based on bosonization and matrix-product-states techniques. Its effective
continuous description in terms of a three-frequency sine-Gordon quantum field
theory suggests the presence of the targeted tricritical point and the
numerical simulations confirm this picture. Our results indicate which
experimental observables can be adopted in realistic devices to probe the
physics and the phase transitions of the model. Additionally, our proposal
provides a useful one-dimensional building block to design exotic topological
order in two-dimensional scalable Josephson junction arrays.
|
Lorenzo Maffi, Niklas Tausendpfund, Matteo Rizzi, Michele Burrello
|
2023-10-27T17:45:18Z
|
http://arxiv.org/abs/2310.18300v4
|
# Quantum simulation of the tricritical Ising model in tunable Josephson junction ladders
###### Abstract
Modern hybrid superconductor-semiconductor Josephson junction arrays are a promising platform for analog quantum simulations. Their controllable and non-sinusoidal energy/phase relation opens the path to implement nontrivial interactions and study the emergence of exotic quantum phase transitions. Here, we propose the analysis of an array of hybrid Josephson junctions defining a 2-leg ladder geometry for the quantum simulation of the tricritical Ising phase transition. This transition provides the paradigmatic example of minimal conformal models beyond Ising criticality and its excitations are intimately related with Fibonacci non-Abelian anyons and topological order in two dimensions. We study this superconducting system and its thermodynamic phases based on bosonization and matrix-product-states techniques. Its effective continuous description in terms of a three-frequency sine-Gordon quantum field theory suggests the presence of the targeted tricritical point and the numerical simulations confirm this picture. Our results indicate which experimental observables can be adopted in realistic devices to probe the physics and the phase transitions of the model. Additionally, our proposal provides a useful one-dimensional building block to design exotic topological order in two-dimensional scalable Josephson junction arrays.
+
Footnote †: These authors contributed equally to this work.
The rapid advances in the fabrication of superconducting/semiconducting heterostructures [1; 2] allow for the realization of Josephson junction arrays (JJAs) with an unprecedented control and tunability over their physical parameters [3; 4; 5]. State-of-the-art electron beam lithography and etching techniques enable the realization of superconducting (SC) arrays with exquisite geometrical precision and scalability. Epitaxial growth makes it possible to create pristine interfaces between a semiconducting substrate and SC islands, thus providing the possibility of controlling these setups through voltage gates. These fabrication developments are accompanied by remarkable advances in the measurement techniques, which include microwave spectroscopy to study the 1D strongly correlated systems emerging in Josephson junction chains [6; 7] and transport measurements to investigate the intricate thermodynamic properties of these systems [3; 4; 5; 7; 8]. Such progress has brought JJAs right back into the arena of analog quantum simulation platforms, where they started their journey decades ago. The simultaneous tunability of the junction transparencies [9; 10; 2; 11] and magnetic fluxes indeed opens the path to tailoring models of interest, among which are quantum field theories (QFTs) and integrable models [12; 13; 14]. In particular, the experimental achievement of multicritical points, with peculiar conformal field theories (CFTs) associated to them [15], comes within reach [16].
In this work we formulate a blueprint for the quantum simulation of the tricritical Ising (TCI) CFT in a tunable Josephson junction ladder. This model is of interest for multiple reasons. It constitutes the simplest example of a CFT beyond the Ising model, and its particle content includes excitations that share the same fusion properties as Fibonacci non-Abelian anyons. Successfully implementing this model will open the way to engineering exotic topological order in 2D arrays in the spirit of the wire constructions of Refs. [17; 18; 19; 20]. Moreover, the TCI model stands as a strong potential candidate to observe the emergence of supersymmetry [21; 22; 23]. Notably, to our knowledge, no experimental realization of a quantum TCI phase transition in 1D has ever been observed, nor have its critical exponents been measured.
Indeed, the quantum simulations of CFTs beyond the Ising universality class face both experimental and theoretical challenges: the most recent theoretical proposals rely on advanced constructions based on Majorana modes [22; 23; 24; 25; 26], extended Hubbard models with staggering potentials [27; 28] or nontrivial mappings between microscopic lattice operators and the field content of the CFTs [29]. In this context, the main mechanism to achieve a TCI point is to consider platforms like Rydberg atom systems [30; 31] and ultracold atoms in tilted optical superlattices [32] that are described by discrete models with a continuous Ising phase transition turning into a first-order phase transition (FOPT) at the tricritical point.
A significant advancement offered by JJAs is the ability to provide a feasible way to directly implement nontrivial interacting bosonic QFTs [13; 16]. In the following we present a ladder system that embodies a three-frequency sine-Gordon model and, as we will show, can be tuned to naturally flow towards the TCI point at low energy. The chosen ladder geometry offers an alternative construction
compared to previous works on superconducting chains [14; 16], and opens a path to the realization of 2D devices with non-Abelian topological order. To achieve our goal, we utilize a blend of analytical techniques, including mean field analysis and bosonization [33], complemented by numerical results based on variational uniform matrix product state ansatz (VUMPS) [34; 35; 36].
_The triple Josephson junction.-_ The building block of our 1D construction consists of two E-shaped SC islands facing each other and grown on a semiconducting substrate [Fig. 1(a)]. Schematically, we model this element as three parallel Josephson junctions (JJs) where Andreev bound states induced in the semiconductor mediate the Cooper pair tunneling [37; 38]. For simplicity we assume that each junction is defined by a single transport channel with transparency \(T_{p}\in[0,1]\) (\(p=1,2,3\)) and energy/phase relation [37]:
\[\mathcal{E}_{J}^{(p)}\left(\varphi\right)=-\Delta\sqrt{1-T_{p}\sin^{2}\left( \varphi/2\right)}\,, \tag{1}\]
See also Ref. [39] for alternative realizations. In Eq. (1), \(\varphi\) is the phase difference between the two islands and \(\Delta\) is the SC gap induced by proximity in the semiconducting substrate. High transparencies \(T_{p}\) lead to coherent tunneling events of multiple Cooper pairs [40], corresponding to higher-harmonic contributions, \(\cos(n\varphi)\) with \(n>1\), to the dispersion (1). In the triple JJ geometry, the amplitudes of such events can be tuned by inserting two magnetic fluxes \(\Phi_{i=1,2}/2\pi\) in the SC loops as in Fig. 1(a) [41].
We fix \(\Phi_{1}=\Phi_{2}=\Phi\) and identical transparencies (\(T_{1}=T_{3}\)) for the external junctions, controlled using electrostatic gates [Fig. 1(a)]. With these constraints, the exchange of the SC islands, \(\varphi\rightarrow-\varphi\), corresponds to the required \(\mathds{Z}_{2}\)-symmetry for the multicritical Ising physics, which is reflected in the odd current/phase relation of the triple JJ. Multiple channels in the junctions or unequal plaquette areas may explicitly break this symmetry [41], hindering the observation of critical features whenever the corresponding energy gaps are larger than the experimentally achievable energy resolution due to the finite size \(L\) and the temperature. In the symmetric setup, the total Josephson potential can be expanded as
\[V_{J}\left(\varphi\right)=\sum_{n\in\mathds{N}}\mu_{n}(\mathbf{X})\cos\left( n\varphi\right)\text{.} \tag{2}\]
The Fourier coefficients \(\mu_{n}\)[41] depend on the values of the external parameters \(\mathbf{X}=\left(T_{1}\cos\left(\Phi\right)\text{, }T_{1}\sin\left(\Phi\right)\text{, }T_{2}\right)\) which span a solid cylinder.
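A minimal numerical sketch of this expansion, assuming standard NumPy/SciPy routines (not code from the original work), evaluates the coefficients \(\mu_{n}\) by direct quadrature of the symmetric triple-junction potential built from Eq. (1):

```python
import numpy as np
from scipy.integrate import quad

def V_J(phi, T1, T2, Phi, Delta=1.0):
    """Total Josephson potential of the symmetric triple junction, in units of Delta."""
    single = lambda x, T: -Delta * np.sqrt(1.0 - T * np.sin(x / 2.0) ** 2)
    return single(phi - Phi, T1) + single(phi, T2) + single(phi + Phi, T1)

def mu_n(n, T1, T2, Phi):
    """Fourier coefficient mu_n = (1/pi) * int_{-pi}^{pi} V_J(phi) cos(n*phi) dphi."""
    val, _ = quad(lambda phi: V_J(phi, T1, T2, Phi) * np.cos(n * phi), -np.pi, np.pi)
    return val / np.pi

# Example point of the parameter space X = (T1 cos(Phi), T1 sin(Phi), T2); values are illustrative
T1, T2, Phi = 0.8, 0.6, 2.0
print([round(mu_n(n, T1, T2, Phi), 4) for n in range(1, 5)])  # mu_4 is comparatively small
```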
In the following, we will use many copies of this triple JJ to build a 1D ladder geometry, thus promoting the phase difference \(\varphi\) to a position-dependent field. In light of this, a preliminary mean-field analysis allows us to qualitatively understand the onset of a TCI point by investigating the Josephson potential \(V_{J}\left(\varphi\right)\) as function of \(\mathbf{X}\). In a semiclassical picture, a tricritical point arises when three potential minima merge into one [42; 43; 44]. In the potential landscape defined by \(V_{J}(\varphi)\) with \(\varphi\in\left(-\pi,\pi\right]\), for any \(T_{2}\), there exists a point \(\left(T_{1},\Phi\right)_{c}\) where this merging occurs and \(V_{J}(\varphi)\) is approximated by a \(\varphi^{6}\) local potential, see Fig. 2. This suggests the first connection to the TCI model and its Ginzburg-Landau (GL) formulation [42; 43; 44].
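The mean-field picture of Fig. 2 can be reproduced qualitatively with a brute-force scan of the potential minima; the following sketch (illustrative only, with a simple grid search as the assumed method) tracks \(|\sin(\varphi_{\text{min}})|\) of the global minimum of \(V_{J}\) at fixed \(T_{2}\):

```python
import numpy as np

phis = np.linspace(-np.pi, np.pi, 2001)

def sin_phi_min(T1, T2, Phi):
    """Return |sin(phi_min)| of the global minimum of V_J (Delta = 1)."""
    V = (-np.sqrt(1.0 - T1 * np.sin((phis - Phi) / 2.0) ** 2)
         - np.sqrt(1.0 - T2 * np.sin(phis / 2.0) ** 2)
         - np.sqrt(1.0 - T1 * np.sin((phis + Phi) / 2.0) ** 2))
    return abs(np.sin(phis[np.argmin(V)]))

T2 = 0.6
for T1 in (0.5, 0.7, 0.9):
    row = [sin_phi_min(T1, T2, Phi) for Phi in np.linspace(0.0, np.pi, 7)]
    print(f"T1 = {T1:.1f}:", [f"{v:.3f}" for v in row])
# Zero entries correspond to the symmetric regions I/III, finite entries to region II.
```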
_1D model.-_ We design a 1D quantum simulator to achieve a TCI point by arranging a set of identical triple JJs with potential \(V_{J}\) in parallel, as depicted in Fig. 1(b),
Figure 1: (a) Two E-shaped SC islands are connected through three parallel junctions characterized by magnetic fluxes \(\Phi_{1}/2\pi\) and \(\Phi_{2}/2\pi\) in units of \(\Phi_{0}=hc/2e\). The external junctions are controlled by electrostatic gates at potential \(V_{G1}\), \(V_{G3}\) which vary the carrier density in the surrounding semiconductor. This triple JJ element allows us to control the potential (2) at each rung of the ladder geometry (b). The fluxes of the triple JJ elements are staggered along the ladder [41]. Mutual rung capacitances and the island self-capacitances determine the electrostatic interactions \(V_{\perp}\) and \(E_{C}\).
Figure 2: Given \(\varphi_{\text{min}}\) the global minimum of \(V_{J}\) in Eq. (2), we depict \(\left|\sin\left(\varphi_{\text{min}}\right)\right|\) in the parameter space at \(T_{2}=0.6\). Regions I and III correspond to \(\mathds{Z}_{2}\)-symmetric configurations with \(\varphi_{\text{min}}=0,\pi\) respectively. Region II presents two degenerate minima. Inset: the transition between region I and II can be either discontinuous with three degenerate minima (yellow line) or continuous with the merging of the two minima in \(\varphi_{\text{min}}=0\). The red dot labels a tricritical point where a three-well potential \(V_{J}=g_{2}\varphi^{2}+g_{4}\varphi^{4}+\varphi^{6}\) approximates Eq. (2). The dashed line corresponds to \(g_{4}=0\).
in order to implement a multiple-frequency sine-Gordon model at low energies. The Hamiltonian of the JJ ladder is:
\[\widehat{H}=\sum_{j=0}^{L-1}\Biggl{[}\sum_{\alpha=a,b} \Bigl{(}E_{C}\widehat{N}_{\alpha,j}^{2}-E_{J}\cos\left(\hat{\varphi }_{\alpha,j+1}-\hat{\varphi}_{\alpha,j}\right)\Bigr{)} \tag{3}\] \[+V_{\perp}\,\widehat{N}_{a,j}\widehat{N}_{b,j}+V_{J}\left(\hat{ \varphi}_{a,j}-\hat{\varphi}_{b,j}\right)\Biggr{]},\]
where \(\hat{\varphi}_{\alpha,j}\) represents the phase operator of the \(j\)-th island on the leg \(\alpha\in\{a,b\}\). Along the legs, the SC islands are connected through JJs in a standard sinusoidal regime with Josephson energy \(E_{J}\). This energy scale can vary from \(E_{J}\simeq h\,50\) GHz [10] down to \(E_{J}=0\) for completely depleted junctions. The dynamics of the SC phases in Eq. (3) is dictated by charging effects, described by the charge operators \(\widehat{N}_{\alpha,j}\), canonically conjugated to the SC phases, \([\widehat{N}_{\alpha,j},e^{i\hat{\varphi}_{\alpha,j}}]=-e^{i\hat{\varphi}_{ \alpha,j}}\). We consider in particular an on-site electrostatic repulsion \(E_{C}\) and a rung repulsive interaction \(V_{\perp}\).
To obtain the rung potentials \(V_{J}\) in Eq. (3), the pattern of magnetic fluxes in the system must be carefully considered: a uniform magnetic field breaks time-reversal invariance driving the system into Meissner chiral phases [45, 46, 47, 48, 49, 50, 51] and does not fulfill the \(\mathbb{Z}_{2}\)-symmetry on each rung. We consider instead staggered fluxes \((-1)^{j}\Phi\) alternating at each triple JJ [Fig. 1 (b)]. This choice yields the local effective potential (2) and avoids additional fluxes between subsequent rungs [41].
The targeted multi-frequency sine-Gordon model emerges when the rung potentials \(V_{J}\) and the Josephson energy \(E_{J}\) dominate over the charging effects \(E_{C}\) and \(V_{\perp}\). In this Josephson-dominated regime, the system lies away from Mott insulating phases [48, 52, 53] and phase delocalization due to charge disorder [54, 55, 56] is strongly irrelevant. In the continuum limit, the low-energy physics of the Cooper pairs can be described through bosonization [33] by introducing a pair of dual fields \((\hat{\theta}_{\alpha}(x),\hat{\varphi}_{\alpha}(x))\) with \(\left[\hat{\theta}_{\alpha}(y),\hat{\varphi}_{\beta}(x)\right]=-i\pi\delta_{\alpha\beta}\Theta\left(y-x\right)\) for each leg \(\alpha\), where \(\widehat{N}_{\alpha,j}/a\approx-\partial_{x}\hat{\theta}_{\alpha}(x)/\pi\) describes the charge of the island \(j=x/a\), with \(a\) the lattice spacing [41].
By defining the customary charge \(c\) and spin \(s\) sectors, \(\hat{\varphi}_{c/s}(x)=\left(\hat{\varphi}_{a}(x)\pm\hat{\varphi}_{b}(x) \right)/\sqrt{2}\), the Hamiltonian (3) is approximated by [41]:
\[\widehat{H}=\sum_{q=c,s}u_{q}\int\frac{dx}{2\pi}\left[K_{q}\left( \partial_{x}\hat{\varphi}_{q}\right)^{2}+\frac{1}{K_{q}}\left(\partial_{x} \hat{\theta}_{q}\right)^{2}\right]\\ +\int\frac{dx}{a}\,\sum_{n=1}^{3}\mu_{n}\cos\left(\sqrt{2}n\hat{ \varphi}_{s}\right)\!. \tag{4}\]
Eq. (4) describes the two branches of the model as Luttinger liquids (LLs), with Luttinger parameters \(K_{c/s}\approx\pi\sqrt{E_{J}/\left(2E_{C}\pm V_{\perp}\right)}\)[45, 48]. The rung potential \(V_{J}\) affects only the spin branch and yields the targeted multiple sine-Gordon interactions. The three potential terms in Eq. (4) must be relevant in the renormalization group sense and induce order in the phase \(\hat{\varphi}_{s}\), driving the spin sector away from the LL phase. This sets the constraint \(K_{s}>9/4\), which, indeed, is fulfilled for sufficiently large Josephson energies, when the semiclassical description is most accurate. Higher harmonics in Eq. (2), instead, are neglected as less relevant and characterized by smaller amplitudes [41].
The interplay of the three sine-Gordon terms \(\mu_{n}\) yields nontrivial phase transitions [57, 58, 16] between the low-energy massive phases of the spin sector. In particular, an Ising critical line meets a FOPT in a tricritical point characterized by the TCI CFT with central charge \(c=7/10\)[58, 16].
_Observables and results.-_ We study the phase diagram of our model by using the variational uniform matrix product state ansatz (VUMPS) [34, 35, 36] to find the ground state of the Hamiltonian (3) in the thermodynamic limit. The VUMPS is based on a two-site elementary cell representing two SC islands on the same rung. The local Hilbert space is constructed from the charge basis defined by \(\widehat{N}_{\alpha=a/b,j}\). For numerical purposes, we truncate its basis by introducing a cutoff, \(|N_{\alpha,j}|<N_{\text{max}}\), with \(N_{\text{max}}\geq 6\)[41].
We set \(E_{C}/E_{J}=0.4\) and \(V_{\perp}/E_{J}=0.65\), corresponding to \(K_{s}\approx 8\). This favours the clean emergence of the transition lines as the interactions are strongly relevant, yielding sizeable energy gaps in the spin sector. The Fourier components \(\mu_{n}\) in Eq. (2) are determined from Eq. (1) with a SC gap \(\Delta/E_{J}=50\) and \(T_{2}=0.6\), consistent with Fig. 2.
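A quick numerical check of these choices, using the perturbative expressions for \(K_{c/s}\) and the scaling dimensions \(\Delta_{n}=n^{2}/(2K_{s})\) quoted above (a back-of-the-envelope sketch, not part of the production simulations), confirms \(K_{s}\approx 8\) and the relevance of the first three harmonics:

```python
import numpy as np

E_J, E_C, V_perp = 1.0, 0.4, 0.65
K_s = np.pi * np.sqrt(E_J / (2 * E_C - V_perp))   # spin sector
K_c = np.pi * np.sqrt(E_J / (2 * E_C + V_perp))   # charge sector
print(f"K_s = {K_s:.2f}, K_c = {K_c:.2f}")        # K_s is roughly 8, well above 9/4
for n in (1, 2, 3):
    print(f"Delta_{n} = {n**2 / (2 * K_s):.3f} (relevant if < 2)")
```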
We identify the phases of the model with labels I, II and III as in Fig. 2, and, to distinguish them, we employ the local order operator \(\hat{J}_{\perp}^{(2e)}(x)=\sin\left(\sqrt{2}\hat{\varphi}_{s}(x)\right)\) representing the single-particle contribution to the rung current. In the VUMPS simulations, the symmetry-broken phase II is signalled by a finite \(\langle\hat{J}_{\perp}^{(2e)}\rangle\) [Fig. 3(a)], and it aligns with the mean-field predictions in Fig. 2. The symmetric phases I and III broaden away from the semiclassical limit due to the dominant scaling behavior of the first-harmonic interaction. The order parameter allows us to investigate the boundary between the disordered phase I and the ordered phase II: a neat jump in \(\langle\hat{J}_{\perp}^{(2e)}\rangle\) marks a FOPT for \(X_{2}=T_{1}\sin\left(\Phi\right)\gtrsim 0.475\) [Fig. 3(b)], while a continuous change in the region \(|X_{2}|\lesssim 0.475\) indicates the onset of a second-order transition, as exemplified for \(X_{2}=0\) in Fig. 3(c).
This picture is confirmed by the analysis of the ground state fidelities [60, 61, 62, 63]. Given the abrupt change of the ground state \(\ket{\psi\left(\mathbf{X}\right)}\) across the FOPT, the average log
fidelity per site [62]
\[\mathcal{F}\left(\mathbf{X},\delta\right)=-\lim_{N\to\infty}\frac{1}{N}\log\left( \langle\psi(\mathbf{X}-\delta)|\psi(\mathbf{X}+\delta)\rangle\right), \tag{5}\]
displays a clean discontinuity [Fig. 3(b)], at fixed \(\delta\). On the other hand, across the lower cut the fidelity susceptibility \(\chi_{\mathcal{F}}=\mathcal{F}/\delta^{2}\) shows a more gradual singular behaviour and exhibits the typical peak of a second-order phase transition in Fig. 3(c).
The universal collapse of the spin correlation length \(\xi_{s}\) according to finite entanglement scaling ansatz [41, 59] confirms that the continuous phase transition lies within the Ising universality class, see Fig. 3(d): for \(X_{2}=0\), we located the critical point \(X_{1c}\) and extrapolated the infinite bond dimension estimate of the critical exponent \(\nu=1.0(1)\), matching the CFT prediction \(\nu_{\text{IS}}=1\). Additionally, our analysis reveals the scaling of the effective magnetization [41]\(\langle\hat{J}_{\perp}^{(2e)}\rangle\sim\left|X_{1}-X_{1c}\right|^{\beta}\), with the critical exponent \(\beta\) compatible with the Ising value \(\beta_{\text{IS}}=1/8\) for \(\left|X_{2}\right|<0.43\) [Fig. 3(e)].
The latter also confirms the onset of the TCI point joining the Ising phase transition and the FOPT: by increasing \(X_{2}\) above \(0.43\), \(\beta\) decreases and, at \(X_{2}\sim 0.46\), it exhibits a plateau close to the expected TCI value \(\beta_{\text{TCI}}=1/24\) [Fig. 3(e)]. Further increasing \(X_{2}\) results in a vanishing \(\beta\), as expected for a FOPT. The error bars in Fig. 3(e) do not account for finite entanglement effects, accentuated by the massless LL in the charge sector with \(c=1\) throughout the entire phase diagram. Despite this, we observe a good convergence in scaling features away from the critical point.
Finally, along the transition line for \(X_{2}>0.42\), finite-size density-matrix renormalization group (DMRG) simulations [41] reveal in Fig. 3(e) the non-monotonic behavior of the central charge \(c\)[64, 27], consistently with the presence of the TCI CFT (\(c-1=7/10\)) amid the
Figure 3: (a): Expectation value of the order parameter \(\hat{J}_{\perp}^{(2e)}\) at \(T_{2}=0.6\). Green stars mark a discontinuity of the log-fidelity per site [Eq. (5)] denoting the FOPT between phases I and III, consistently with the mean-field picture. (b): FOPT discontinuity of \(\exp\left(-\mathcal{F}\right)\) and \(\langle\hat{J}_{\perp}^{(2e)}\rangle\) between phases II and I at \(X_{2}=0.52\) [cut b) in panel (a)]. (c): singular behavior of the fidelity susceptibility \(\chi_{\mathcal{F}}\) and order parameter along the cut c) at \(X_{2}=0\), both indicating a second-order phase transition. (d): collapse of the correlation length \(\xi_{s}\) at \(X_{2}=0\) for five values of the bond dimension \(D\) by employing a finite-entanglement scaling [41, 59]. (e): critical exponent \(\beta\) obtained by fitting \(\langle\hat{J}_{\perp}^{(2e)}\rangle\) as a function of \(X_{1}\) for \(0.42<X_{2}<0.49\) and bond dimension \(D=600\) (blue dots). Two plateaux appear close to the Ising (\(\beta_{\text{IS}}=1/8\)) and TCI (\(\beta_{\text{TCI}}=1/24\)) predictions. The central charge (empty symbols), derived from finite-size DMRG simulations [41], increases from \(c\simeq 1+1/2\) to \(c\simeq 1+7/10\) before dropping to \(c\simeq 1\).
Ising regime (\(c-1=1/2\)) and the FOPT (\(c-1=0\)). Finite size effects yield large central charge estimates, as expected, and shift the apparent tricritical point to larger \(X_{2}\) relative to the \(\beta=\beta_{\text{TCI}}\) plateau.
_Experimental observables.-_ Transport features can be used to explore the phase diagram of the model. Indeed, the thermal conductance across 1D systems at criticality is proportional to the central charge \(c\) of the related CFT at low temperature \(T\)[65, 66]: \(G_{Q}=\frac{\pi k_{B}^{2}Tc}{6\hbar}\). In our model, symmetric and symmetry-broken phases exhibit \(c=1\) due to the charge sector, while along the transition line the additional contribution of the spin sector yields the behaviour shown in Fig. 3(e). In realistic devices, finite size and temperature determine the profile of the heat conductance as a function of the system parameters. Nevertheless, a non-monotonic behavior of \(G_{Q}\) across the second-order phase transition line and in proximity of the TCI point would provide strong evidence of the emergence of the related CFTs.
Furthermore, as the rung currents exhibit quasi long-range order at the phase transitions, the power spectrum of their noise provides a probe to detect the critical lines and measure the scaling dimension of the order parameter. Additionally, microwave spectroscopy of JJAs [6, 7] allows for the study of the excitation spectra of the system and can be used to verify the predictions of the TCI CFT spectra [67, 68, 69, 70, 44].
_Conclusions.-_ We designed a JJ ladder to realize a quantum simulator for the tricritical Ising CFT. Our construction is based on the properties of hybrid semiconducting-superconducting JJs and their non-sinusoidal energy/phase relation. In particular, we engineered a triple JJ that allows us to tune the higher harmonics and we adopted them to realize the physics of a multi-frequency sine-Gordon QFT [58].
We used bosonization and tensor-network simulations to investigate this JJA. Our analysis showed the presence of an ordered phase and demonstrated the existence of a critical Ising plane that meets a first-order transition along a tricritical Ising line in a three-parameter space.
Our construction does not require the introduction of strong and fine-tuned interactions and relies on the adjustments of parameters that can be controlled in hybrid state-of-the-art platforms.
Our study poses the basis for further explorations of the connection between nontrivial interacting CFTs and hybrid JJ systems characterized by high harmonics terms. The ladder we devised, in particular, provides a tool to engineer systems with exotic topological order in two-dimensional setups: an array of these tricritical systems opens the way to realize Fibonacci topological superconductors [19, 20] with universal non-Abelian anyons.
Acknowledgements.-We thank L. Banszerus, A. Cappelli, C. Marcus, G. Mussardo, C. Schrade and S. Vaitiekenas for fruitful discussions. We acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) project Grant No. 277101999 within the CRC network TR 183 (subprojects B01 and C01). L.M. and M.B. are supported by the Villum Foundation (Research Grant No. 25310). N.T. and M.R. are further supported by the DFG under Germany's Excellence Strategy - Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 - 390534769. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS at the Julich Supercomputing Centre (JSC) (Grant NeTeNeSyQuMa) and the FZ Julich for JURECA (institute project PGI-8)[71]. Data and Code are available at [72].
Supplemental materials for "Quantum simulation of the tricritical Ising model in tunable Josephson junction ladders"
### Triple Josephson junction element
#### Higher harmonics expansion
In this section, we briefly analyze the decomposition of the energy-phase relation of the triple JJ into the harmonic terms \(\mu_{n}\) introduced in Eq. (2) of the main text. Assuming that each semiconducting/superconducting junction is described by a single quantum channel, the potential of the triple JJ element
\[V_{J}\left(\varphi\right)=-\Delta\left(\sqrt{1-T_{1}\sin^{2}\left(\frac{ \varphi-\Phi_{1}}{2}\right)}+\sqrt{1-T_{2}\sin^{2}\left(\frac{\varphi}{2} \right)}+\sqrt{1-T_{3}\sin^{2}\left(\frac{\varphi+\Phi_{2}}{2}\right)}\right),\] (S6)
can be expanded as \(V_{J}=\sum_{n}\mu_{n}\cos\left(n\varphi\right)\), where \(\varphi\) is the SC phase difference of the two islands and \(\Delta\) the superconducting gap induced in the semiconducting layer of the hybrid system. To maintain the reflection symmetry \(\varphi\rightarrow-\varphi\), we impose \(\Phi_{1}=\Phi_{2}=\Phi\) and \(T_{1}=T_{3}\). The full expression of \(\mu_{n}\) involves the elliptic integrals
\[\mu_{n}=\int_{-\pi}^{\pi}\frac{d\varphi}{\pi}\,V_{J}\left(\varphi\right)\cos \left(n\varphi\right),\] (S7)
which do not have an elementary analytical solution. However, for small transparencies \(T_{i}\ll 1\), we can approximate them as follows:
\[\mu_{1}/\Delta =-\frac{1}{512}\left(T_{2}\left(128+32T_{2}+15T_{2}^{2}\right)+2T _{1}\left(128+32T_{1}+15T_{1}^{2}\right)\cos\Phi\right)+O\left(T_{i}^{4}\right)\] (S8) \[\mu_{2}/\Delta =\frac{1}{256}\left(T_{2}^{2}\left(4+3T_{2}\right)+2T_{1}^{2} \left(4+3T_{1}\right)\cos 2\Phi\right)+O\left(T_{i}^{4}\right)\] \[\mu_{3}/\Delta =-\frac{1}{512}\left(T_{2}^{3}+2T_{1}^{3}\cos 3\Phi\right)+O\left(T _{i}^{4}\right)\] \[\mu_{4}/\Delta =O\left(T_{i}^{4}\right).\]
In this limit, it is evident that the potential \(V_{J}\) is mostly determined by the first harmonic term \(\cos\varphi\) with \(\mu_{1}<0\), as long as the flux \(\Phi\) is such that \(\cos\Phi>0\). Numerical evaluation of the integrals (S7) shows that this also holds in the large-transparency limit.
The situation is different if we consider fluxes such that \(\cos\Phi<0\). In particular, one can fine-tune the external parameters to make \(\mu_{1}\) vanish. Moreover, for \(\Phi=2\pi/3\) and \(T_{1}=T_{2}\) both \(\mu_{1}\) and \(\mu_{2}\) vanish as a consequence of destructive interference of tunneling events of one and two Cooper pairs through the three junctions. In this case only triplets of Cooper pairs can tunnel between the two SC islands, with amplitude \(|\mu_{3}|\). One can also check that, in the considered geometry, the contribution \(\mu_{4}\) is always at least one order of magnitude smaller than the other terms, as shown in Fig. S4. Therefore, given the ability to control both the transparencies of the hybrid junctions through external gates and the magnetic flux \(\Phi\) piercing the two loops, we can independently tune the ratios between the first three harmonic amplitudes in Eq. (S6). In particular, the results discussed in the main text require tuning only the transparencies of the external junctions, \(T_{1}\) and \(T_{3}\), whereas \(T_{2}\) does not qualitatively affect the appearance of the tricritical Ising point. This constitutes an advantage for experimental realizations since we envision that the external junctions can more easily be controlled via electrostatic gates.
### Multichannel case
In the case of several transport channels in each of the junctions, the Josephson energy-phase relation is given by the sum of the related contributions:
\[\mathcal{E}_{J}^{(p)}=-\sum_{i=1}^{M_{p}}\Delta\sqrt{1-T_{p}^{(i)}\sin^{2} \left(\phi/2\right)},\] (S9)
where \(T_{p}^{(i)}\) represents the transparency of the \(i\)th channel in the JJ \(p\), and \(M_{p}\) is the number of channels in the junction. For disordered multichannel junctions, these transport coefficients \(T_{p}^{(i)}\) follow a bimodal distribution [73],
with a few high-transparency channels resulting in a nonsinusoidal current response. A complete generalization of our results to the multichannel case goes beyond the scope of this supplemental section. However, a qualitative analysis of its effects is needed. In particular, one essential feature of our triple JJ element is the symmetry between the two external junctions.
Experimental results in gate-tunable devices showed that the nonsinusoidal effects are overall well-approximated by one JJ with \(M^{*}\) high-transparency channels with the same average \(T^{*}\), such that the current phase relation reads [11, 74]
\[I\left(\varphi\right)=\frac{e\Delta M^{*}T^{*}}{\hbar}\frac{\sin\left(\varphi \right)}{\sqrt{1-T^{*}\sin^{2}\left(\varphi/2\right)}}.\] (S10)
Therefore, the nonlinear function in Eq. (1) in the main text well approximates the energy-phase relation also in the multichannel case.
Moreover, in this approximation, one can assume that the external voltage gate \(V_{G}\) affects only the number of channels \(M^{*}\) and not the average transparency \(T^{*}\), which mildly varies among the junctions [11]. In this case, the symmetry between the external JJs is lifted by the weak finite difference between the two average transparencies \(T_{1}^{*}-T_{3}^{*}\neq 0\), which is almost independent of the voltage gates \(V_{G1}\) and \(V_{G3}\). However, tuning the number of open channels \(M_{1}^{*}\) and \(M_{3}^{*}\) via the voltage gates provides a way to mitigate this explicit symmetry breaking. Finally, potential asymmetries in the magnetic fluxes cause a splitting in energy of the minima of the potential \(V_{J}\) which is linear in \(\Phi_{1}-\Phi_{3}\). However, this effect can also be used to mitigate the asymmetry caused by the mismatch of the transparencies \(T_{1}^{*}\neq T_{3}^{*}\) and restore the degeneracy of the minima of \(V_{J}\).
Alternatively, as briefly mentioned in the main text, the non-sinusoidal current/phase relation can effectively be obtained by substituting each of the junctions with two sinusoidal multichannel JJs in series [39]. For the external
links, the effective transmissions \(T_{p,\text{eff}}\) with \(p=1,3\) will depend on the critical currents flowing through such JJs and indeed can be tuned by external electrostatic gates.
### Ladder: further details
#### Staggered magnetic fluxes
Interacting bosons on a ladder with uniform magnetic fields are characterized by the onset of several chiral many-body phases, including the Meissner phase. For our purposes the onset of the Meissner effect may be detrimental, because it breaks the emergent Lorentz invariance in the QFT and may compete with the phases and critical points discussed in the main text.
Additionally, to obtain a quantum simulation of the three-frequency sine-Gordon model, each rung triple JJ must be characterized by the same \(V_{J}\). This condition is, in the general case, fulfilled only by staggered patterns of magnetic fluxes.
We present two viable flux configurations which are schematically represented in Fig. S5(a) and (b). The solution (a) relies on the parity property of the local potential \(V_{J}\) under \(\Phi\rightarrow-\Phi\) and enables the engineering of a ladder geometry where the magnetic flux within two subsequent rungs \(\Phi_{\text{int}}\) vanishes. This preserves time-reversal invariance in the effective QFT. However, this approach leads to the experimental challenge of controlling nonuniform magnetic fields along the ladder.
A convenient construction to realize the configuration (a) in experimental devices is depicted in Fig. S5(c). To stagger the magnetic fluxes within two subsequent triple JJ elements, we design the ladder in a 'snake' configuration and control the magnetic field by introducing a current \(I_{\text{ext}}\) through the line schematically represented in Fig. S5. Alternatively, a local control of multiple fluxes can be achieved with the techniques adopted by modern quantum processors based on transmon qubits [75].
An alternative flux configuration, Fig. S5(b), results in the same potentials \(V_{J}\) on each rung and relies on compensating the magnetic fluxes of the triple JJs with opposite fluxes in the ladder plaquettes, thus setting \(\Phi_{\text{int}}=-2\Phi\) between each rung. The possibility of introducing additional integer fluxes in each loop, thus replacing \(\Phi_{\text{int}}\rightarrow\Phi_{\text{int}}+2\pi\), may also offer an alternative to implement the configuration (b) with uniform magnetic fluxes. To tune the system at the tricritical point in this scenario, however, one must know a priori the parameter \(T_{2}\) of the ladder: the critical flux of the trijunctions indeed depends on \(T_{2}\); therefore, its knowledge is necessary for designing superconducting circuits with the correct ratio between the areas of the loops inside the trijunctions and the areas of the loops between the ladder rungs, so as to obtain the desired tunneling phases at constant magnetic field.
### Bosonization
In this section, we review the main steps of the connection between the lattice Hamiltonian (3) of the main text and the three-frequency sine-Gordon quantum field theory. At low temperature \(k_{B}T<\Delta_{c}\) each SC island of our lattice corresponds to a condensate of \(N_{c}\) Cooper pairs with gap \(\Delta_{c}\) and a well-defined complex order parameter, the SC phase \(\hat{\varphi}_{\alpha,j}\). The residual charge around \(N_{c}\) is represented by the operator \(\widehat{N}_{\alpha,j}\) dual to the SC phase. In the long wavelength limit, we can use an effective continuum description in terms of the Bose fields \(\hat{\theta}_{\alpha}(x)\) and \(\hat{\varphi}_{\alpha}(x)\)[33], fulfilling the commutation relations:
\[\left[\hat{\theta}_{\alpha}(y),\hat{\varphi}_{\beta}(x)\right]=-i\pi\delta_{ \alpha\beta}\Theta\left(y-x\right)\,,\] (S11)
where \(\Theta\) indicates the Heaviside step function. The weak-interaction case \(E_{C},V_{\perp}\ll E_{J}\) we consider allows us to neglect fast-oscillating contributions to the Cooper-pair density and write \(\widehat{N}_{\alpha,j}\approx-a\dfrac{\partial_{x}\hat{\theta}_{\alpha}(x)}{\pi}\), with \(j=x/a\). In the harmonic approximation for the Josephson interaction along the legs, the low-energy lattice Hamiltonian can be written as
\[\hat{H}=\sum_{\alpha=a,b}\left[-E_{J}\int dx\ a\left(\partial_{x}\hat{\varphi }_{\alpha}\left(x\right)\right)^{2}+\dfrac{E_{C}a}{\pi^{2}}\int dx\ \left(\partial_{x}\hat{\theta}_{\alpha}\left(x\right)\right)^{2}\right]+\sum_ {n=1}^{3}\dfrac{\mu_{n}}{a}\int dx\ \cos\left(n\left(\hat{\varphi}_{a}-\hat{\varphi}_{b}\right)\right)\!.\] (S12)
By rotating the fields \(\hat{\varphi}_{c/s}(x)=\left(\hat{\varphi}_{a}(x)\pm\hat{\varphi}_{b}(x) \right)/\sqrt{2}\) and the corresponding dual ones \(\hat{\theta}_{c/s}(x)\), we obtain the Hamiltonian (4) in the main text with the perturbative relations
\[K_{c/s}=\pi\sqrt{\dfrac{E_{J}}{\left(2E_{c}\pm V_{\perp}\right)}}\qquad\text{ and}\qquad u_{c/s}=a\sqrt{E_{J}\left(2E_{C}\pm V_{\perp}\right)}.\] (S13)
In general, a finite intra-leg capacitance \(C_{L}\) among adjacent islands leads to a long range interaction stemming from the inverse capacitance matrix [52] with screening length \(\lambda=a\sqrt{C_{L}/C_{g}}\), where \(C_{g}\) is the self capacitance. However, this may be ignored as long as one is interested in the physics of modes with energies lower than \(u_{c/s}/\lambda\).
From a perturbative point of view the plasma frequency of the spin sector \(u_{s}/a=\Lambda\simeq\sqrt{E_{J}\left(2E_{c}-V_{\perp}\right)}\) defines a UV cut-off that allows us to define the dimensionless coupling \(\tilde{\mu}_{n}=\mu_{n}/\Lambda\) in the sine-Gordon Euclidean action,
\[S\left[\varphi_{s}(x,\tau)\right]=\dfrac{1}{2\pi}\int dxd\tau\ K_{s}\left( \left(\partial_{\tau}\varphi_{s}\right)^{2}+\left(\partial_{x}\varphi_{s} \right)^{2}\right)-\sum_{n=1}^{3}\dfrac{\tilde{\mu}_{n}}{a^{2}}\int dxd\tau\ \cos\left(\sqrt{2}n\varphi_{s}\right)\!,\] (S14)
where we have rescaled the imaginary time \(\tau\to u_{s}\tau\). The operators \(\widehat{\mathcal{O}}_{n}=\cos\left(\sqrt{2}n\hat{\varphi}_{s}\right)\) correspond to primaries of the unperturbed free boson \(c=1\) theory with scaling dimensions
\[\Delta_{n}=\dfrac{n^{2}}{2K_{s}}.\] (S15)
Therefore, such operators drive the LL to a massive phase, namely they are relevant, only when \(\Delta_{n}<2\), implying the lower bound \(K_{s}>9/4\) considered in the main text to make \(\mathcal{O}_{n\leq 3}\) relevant.
Note that the charge sector remains massless as there is no sine-Gordon potential for \(\hat{\varphi}_{c}\). We checked the validity of this statement in our lattice simulations. In the LL phase the density correlation function is expected to show the following power-law decay
\[\left\langle\widehat{\rho}_{\text{tot}}(x)\widehat{\rho}_{\text{tot}}(y)\right\rangle \sim\dfrac{2}{\pi^{2}}\left\langle\partial_{x}\theta_{c}(x,\tau)\ \partial_{y}\theta_{c}(y,\tau)\right\rangle=\dfrac{K_{c}}{\pi^{2}}\dfrac{1}{ \left|x-y\right|^{2}}.\] (S16)
In the ladder model, the operator \(\widehat{\rho}_{\text{tot}}(x)\) corresponds to the total rung density offset \(\widehat{N}_{\text{tot},j}-\left\langle\widehat{N}_{\text{tot},j}\right\rangle\) with \(\widehat{N}_{\text{tot}}=\widehat{N}_{a,j}+\widehat{N}_{b,j}\). We explicitly checked the decay of Eq. (S16) for each point of the phase diagram by fitting a power-law decay [Fig. S6]. The \(K_{c}\) parameters found in this way are in good agreement with the perturbative approximations given by Eq. (S13). This confirms the validity of the field theoretical approach in the low energy regime of the ladder.
On the other hand, the spin sector is subject to the different relevant interactions in Eq. (S14), which tend to order the SC phase difference \(\hat{\varphi}_{s}\). In Ref. [58] the author shows that this quantum field theory flows to a tricritical Ising point with central charge \(c=7/10\) for suitable values of the coupling constants \(\mu_{n}\). Despite the absence of any non-perturbative mappings between our lattice operators and the massless excitations of this field theory, we can exploit the Ginzburg-Landau representation of the TCI CFT to gain insight into this relation.
The operator content of the CFT is split into the odd and even sectors with respect to the \(\mathds{Z}_{2}\)-symmetry and is characterized by 6 primary fields: the identity \(I\), four relevant operators \(\sigma,\ \epsilon,\ \sigma^{\prime},\ \epsilon^{\prime}\ (\Delta<2)\), and one irrelevant \((\Delta>2)\) operator. The Ginzburg-Landau Lagrangian representation of the TCI corresponds to [76]
\[\mathcal{L}=\frac{K_{s}}{2\pi}\varphi_{s}\left(\partial_{x}^{2}+\frac{ \partial_{\tau}^{2}}{u_{s}^{2}}\right)\varphi_{s}-\lambda_{2}:\varphi_{s}^{2 }:-\lambda_{4}:\varphi_{s}^{4}:-\lambda_{6}:\varphi_{s}^{6}:,\] (S17)
where \(::\) indicates the normal ordering with respect to the tricritical point CFT. In the mean-field limit \(K_{s}\gg 1\), we can build an approximate mapping between local operators in our theory and the primary fields (see also Ref. [27]),
\[\begin{split}&\varphi_{s}(x)\rightarrow\sigma(x),\quad\left(h_{ \sigma},\bar{h}_{\sigma}\right)=\left(\frac{3}{80},\frac{3}{80}\right)\\ &:\varphi_{s}^{2}(x):\mapsto\epsilon(x),\quad\left(h_{\epsilon},\bar{h}_{\epsilon}\right)=\left(\frac{1}{10},\frac{1}{10}\right)\\ &:\varphi_{s}^{3}(x):\mapsto\sigma^{\prime}(x),\quad\left(h_{ \sigma^{\prime}},\bar{h}_{\sigma^{\prime}}\right)=\left(\frac{7}{16},\frac{7} {16}\right)\\ &:\varphi_{s}^{4}(x):\mapsto\epsilon^{\prime}(x),\quad\left(h_{ \epsilon^{\prime}},\bar{h}_{\epsilon^{\prime}}\right)=\left(\frac{3}{5},\frac {3}{5}\right),\end{split}\] (S18)
which implies the expansion of the local order operator \(\hat{J}_{\perp}\) in terms of the most relevant operator \(\sigma\) close to the critical point,
\[\hat{J}_{\perp}(x)=\sin\left(\sqrt{2}\hat{\varphi}_{s}(x)\right)\sim\hat{ \varphi}_{s}(x)+\ldots\rightarrow\sigma(x)+\ldots\] (S19)
In the previous expansion the dots indicate less relevant operator contributions.
### Charge basis
For the numerical simulations, we formulated the Hamiltonian (3) from the main text in the charge basis. In this basis the operator \(\widehat{N}_{\alpha,j}\) is diagonal and defines how the number of Cooper pairs differs from the average occupation on the island \((\alpha,j)\):
\[\widehat{N}_{\alpha,j}=\text{diag}\left(\ldots,-2,-1,0,1,2,\ldots\right)\,.\] (S20)
Using this choice, it is easy to show that \(e^{i\hat{\varphi}_{\alpha,j}}\) must be of the form
\[e^{i\hat{\varphi}_{\alpha,j}}=\left(\begin{array}{ccccc}\ddots&&&&\\ &0&1&&\\ &&0&1&\\ &&&0&1\\ &&&&\ddots\end{array}\right)_{\alpha,j}\equiv\widehat{\Sigma}_{\alpha,j}^{-}\] (S21)
for the commutator \([\widehat{N},\widehat{\Sigma}^{-}]=-\widehat{\Sigma}^{-}\) to hold. Further, in order to represent these operators in our simulations, we have to truncate the number of possible charge states
\[\widehat{N}_{\alpha,j}=\text{diag}\left(-N_{\text{max}}\ldots,-2,-1,0,1,2, \ldots N_{\text{max}}\right)\,,\] (S22)
i.e. we adopt a truncated local Hilbert space of dimension \(2N_{\text{max}}+1\) for each SC island. We can control the error caused by this truncation by varying \(N_{\text{max}}\) until we reach convergence in all observables. Alternatively, we can measure the probability \(\langle\hat{P}_{\alpha,j}^{n}\rangle\) of finding an excitation \(n\) on the island \((\alpha,j)\). By ensuring that \(N_{\text{max}}\) is large enough to have negligible weight \(\langle\hat{P}_{\alpha,j}^{N_{\text{max}}}\rangle<\epsilon\), we can claim to be converged in \(N_{\text{max}}\). In practice we found that \(N_{\text{max}}=8\) gives \(\langle\hat{P}_{\alpha,j}^{N_{\text{max}}}\rangle\sim 10^{-9}\). The Hamiltonian used for the simulation finally reads \(\widehat{H}=\sum\limits_{j=0}^{L}\widehat{h}_{j,j+1}\) with:
\[\begin{split}\hat{h}_{j,j+1}=&\sum\limits_{\alpha=a,b}\left[E_{C}\left(\widehat{N}_{\alpha,j}\right)^{2}-\frac{E_{J}}{2}\left(\widehat{\Sigma}_{\alpha,j}^{+}\widehat{\Sigma}_{\alpha,j+1}^{-}+\widehat{\Sigma}_{\alpha,j}^{-}\widehat{\Sigma}_{\alpha,j+1}^{+}\right)\right]\\ &+V_{\perp}\widehat{N}_{a,j}\widehat{N}_{b,j}+\frac{\mu_{1}}{2}\left(\widehat{\Sigma}_{a,j}^{+}\widehat{\Sigma}_{b,j}^{-}+\widehat{\Sigma}_{b,j}^{+}\widehat{\Sigma}_{a,j}^{-}\right)\\ &+\frac{\mu_{2}}{2}\left(\left(\widehat{\Sigma}_{a,j}^{+}\right)^{2}\left(\widehat{\Sigma}_{b,j}^{-}\right)^{2}+\left(\widehat{\Sigma}_{b,j}^{+}\right)^{2}\left(\widehat{\Sigma}_{a,j}^{-}\right)^{2}\right)\\ &+\frac{\mu_{3}}{2}\left(\left(\widehat{\Sigma}_{a,j}^{+}\right)^{3}\left(\widehat{\Sigma}_{b,j}^{-}\right)^{3}+\left(\widehat{\Sigma}_{b,j}^{+}\right)^{3}\left(\widehat{\Sigma}_{a,j}^{-}\right)^{3}\right)\end{split}\] (S23)
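A minimal dense-matrix sketch of these truncated charge-basis operators (illustrative only; the production simulations use the ITensor-based VUMPS code, and the value of \(\mu_{1}\) below is a placeholder) is:

```python
import numpy as np

N_max = 8
dim = 2 * N_max + 1

N_op = np.diag(np.arange(-N_max, N_max + 1).astype(float))   # diagonal charge operator, Eq. (S22)
Sigma_minus = np.diag(np.ones(dim - 1), k=1)                  # e^{i phi}: lowers the charge by one, Eq. (S21)
Sigma_plus = Sigma_minus.T                                    # e^{-i phi}

# Commutation relation [N, Sigma^-] = -Sigma^- in the truncated space:
print(np.allclose(N_op @ Sigma_minus - Sigma_minus @ N_op, -Sigma_minus))

# Example rung term of Eq. (S23): (mu_1/2)(Sigma_a^+ Sigma_b^- + h.c.) on one rung (two islands)
mu_1 = -0.5  # placeholder value, not a parameter from the paper
h_rung = 0.5 * mu_1 * (np.kron(Sigma_plus, Sigma_minus) + np.kron(Sigma_minus, Sigma_plus))
print(h_rung.shape)  # (dim**2, dim**2)
```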
### Further numerical evidence for the transitions
In this section, we present additional numerical indications about the different nature of the transitions across the phase diagram.
### Hysteresis and gap jump at the first-order transition
First of all, we present additional evidence of first-order phase transitions (FOPTs) along the horizontal cuts at \(X_{2}=0.52\) (between the disordered phase I and the ordered phase II) and at \(X_{2}=0.6\) (between phases I and III).
One significant indicator involves the distinct behavior of the lowest energy excitation in the spin sector. Its energy corresponds to the system's gap, which can be extracted (see the section on the extraction of correlation lengths) from the transfer matrix spectrum as shown in Fig. S7. By following the corresponding eigenvalue of the transfer matrix \(\lambda_{1}\), we can extract the gap of the spin sector \(\Delta_{s}=-\log\lambda_{1}\). Across a second-order phase transition, the physical gap closes and, in the numerical VUMPS simulations, this is marked by a minimum in \(\Delta_{s}\) [panel (c)] which approaches zero by increasing the bond dimension. Across a FOPT, instead, the spin gap remains finite [panels (a) and (b)], although it may display a discontinuity
when the mass of the spin excitations is different in the two phases. Panels (a) and (b) respectively depict the typical behaviors of the FOPT between the two disordered phases and between phase II and phase I. In the latter case, the related order parameter displays a very weak variation, resulting in an almost continuous behavior of \(\Delta_{s}\).
This behavior is reflected also in the analysis of the hysteresis in the order parameter and the many-body ground state energy, as illustrated in Fig. S8.
A discontinuity in the first derivative of the energy density is observed in the FOPT cases, which is absent in the second-order transition at \(X_{2}=0\) and indicates the crossing of the lowest energy levels. Furthermore, by altering the minimization procedure at each point \(X_{1}\) and initializing the ground state with the result from \(X_{1}\pm\delta\), the variational algorithm follows the corresponding branch, even within the opposite phase. This can be interpreted as a hysteresis effect induced by the orthogonality of these two states around the crossing point.
Also in this case the features of the FOPT are stronger between the two disordered phases; panel S8(b) is depicted with a magnified energy scale with respect to panel (a). The discontinuity of the derivative \(\partial\varepsilon/\partial X_{1}\) is around \(30\;E_{J}\) in panel (a) and \(22\;E_{J}\) in panel (b). This is physically related to the jump of the average loop current circulating around each triple JJ element, namely \(\hat{J}_{\rm loop}=\partial\hat{H}/\partial\Phi\).
### Scaling and critical exponents of the Ising phase transition
In this subsection, we focus on characterizing the critical exponents \(\nu\) and \(\beta\), which describe how the correlation length diverges and the order parameter approaches zero across the continuous phase transitions. Concerning the Ising line, we will consider as main example the \(X_{2}=T_{1}\sin(\Phi)=0\) cut corresponding to Fig. 3(c)-(d) of the main text. In this case, the measured values indicate indeed that the transition belongs to the Ising universality class with \(\nu_{\rm IS}=1\) and \(\beta_{\rm IS}=1/8\). To extract these exponents, we relied on scaling properties of three different quantities: the log-fidelity per site \(\mathcal{F}\) (and its susceptibility \(\chi_{\mathcal{F}}\)), the correlation length of the spin sector \(\xi_{s}\) and the order parameter \(\hat{J}_{\perp}^{(2e)}\).
We determine the critical exponent \(\nu\) through two different methods based on the fidelity scaling, both yielding values near \(\nu_{\rm IS}=1\) [Fig. S9]. The first approach involves fitting the non-analytic behavior of the log-fidelity per site at the critical point, showing a consistent increase towards \(\nu=1\) as the bond dimension \(D\) grows [Fig. S9 a), inset], although the adopted bond dimensions were not sufficient to converge to \(\nu=1\). The second approach, instead, provides more accurate results and relies on analyzing the divergence pattern of the fidelity susceptibility along a horizontal cut; in this way we obtain \(\nu=1.00(3)\) [Fig. S9 b)].
To take into account finite bond dimension corrections, we employed the finite entanglement scaling discussed in Ref. [59] for the spin correlation length \(\xi_{s}\). Similarly to finite size effects, the finite bond dimension introduces an artificial length scale, making all correlation functions decay exponentially even at critical points. This can be
interpreted as the addition of a relevant perturbation of the underlying CFT. However, in the \(D\to\infty\) limit, the gapless nature of the model must be restored. This artificial length scale is associated with the critical exponent \(\kappa\):
\[\xi_{D}\sim D^{\kappa}\]
and we use this relation to define the following scaling ansatz [59]
\[\xi_{D}=D^{\kappa}f\left(D^{\frac{\kappa}{\nu}}\frac{|X_{1}-X_{1c}|}{X_{1c}} \right)\,,\quad f(x)\sim\begin{cases}\text{const }\,,&x\to 0\\ \frac{1}{x^{\nu}}\,,&x\gg 1\end{cases}\] (S24)
where \(\nu\) is the critical exponent of the correlation length in the infinite bond dimension case. We use this ansatz to determine the critical point \(X_{1c}\) and to extract the critical exponents \(\nu\) and \(\kappa\) discussed in the main text.
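A possible implementation of this collapse (a sketch with synthetic placeholder data and a simple interpolation-based cost function; the actual fitting procedure may differ) is:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder data: xi_data[D] = [(X1, xi_s), ...] measured at bond dimension D
xi_data = {
    200: [(0.60, 8.0), (0.62, 10.5), (0.64, 12.5), (0.66, 13.0)],
    400: [(0.60, 9.0), (0.62, 14.0), (0.64, 18.0), (0.66, 19.5)],
    800: [(0.60, 10.0), (0.62, 18.5), (0.64, 26.0), (0.66, 29.0)],
}

def collapse_cost(params):
    """Sum of squared differences between rescaled curves interpolated on a common axis."""
    X1c, kappa, nu = params
    rescaled = {}
    for D, series in xi_data.items():
        X1, xi = np.array(series).T
        rescaled[D] = (D ** (kappa / nu) * (X1 - X1c) / X1c, xi / D ** kappa)
    Ds, cost = list(rescaled), 0.0
    for i, D in enumerate(Ds):
        x_i, y_i = rescaled[D]
        for D2 in Ds[:i] + Ds[i + 1:]:
            x_j, y_j = rescaled[D2]
            cost += np.sum((y_i - np.interp(x_i, x_j, y_j)) ** 2)
    return cost

res = minimize(collapse_cost, x0=[0.65, 0.8, 1.0], method="Nelder-Mead")
print(res.x)  # fitted (X1c, kappa, nu)
```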
Additionally, to extract the critical exponent \(\beta\) we employ the scaling of the expectation value of the single-particle current \(\hat{J}_{\perp}^{(2e)}\) close to the critical point. Indeed, this operator plays the role of the Ising magnetization which is odd under the \(\mathds{Z}_{2}\)-symmetry \(\hat{\varphi}_{s}\to-\hat{\varphi}_{s}\). By fitting the expected scaling behaviour \(|X_{1}-X_{1c}|^{\beta}\), we obtain the critical exponent \(\beta=0.125(3)\) [Fig. S10] at \(X_{2}=0\), and analogous values are obtained for \(|X_{2}|\lesssim 0.435\), as depicted in Fig. (3)(e) in the main text.
These results collectively indicate that our findings concerning the transition from the ordered to the disordered phase sufficiently far from the first order discontinuities are compatible with the Ising universality class with \(\nu_{\text{IS}}=1\) and \(\beta_{\text{IS}}=1/8\).
The critical exponents \(\kappa\) extracted for the spin correlation length at the second order transitions are typically smaller than one. This implies that a considerable increase of the bond dimension is required in order to faithfully capture the algebraic decay of correlation functions over a long distance. Take the example of the \(X_{2}=0\) cut from the main text, with \(\kappa\approx 0.8\): the largest correlation length obtained for this cut is \(\xi_{s}\approx 30\) for a bond dimension of \(D=1000\). Using the scaling behavior \(\xi_{s}\sim D^{0.8}\), we estimate that a bond dimension \(D^{\star}\approx 4500\) is necessary to get \(\xi_{s}\approx 100\) sites, and \(D^{\star}\approx 18000\) for \(\xi_{s}\approx 300\) sites.
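The quoted estimates follow directly from the power law \(\xi_{s}\sim D^{\kappa}\) anchored at the measured point, e.g.:

```python
kappa = 0.8
D_ref, xi_ref = 1000, 30                    # measured: xi_s ~ 30 at D = 1000
for xi_target in (100, 300):
    D_star = D_ref * (xi_target / xi_ref) ** (1.0 / kappa)
    print(f"xi_s ~ {xi_target} sites requires D* ~ {D_star:.0f}")
```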
### Central charge
Given the separation of the two sectors in our model, in the thermodynamic limit the entanglement entropy of the system is predicted to display a typical divergence \(S=c_{c}/6\log(\xi_{c})+c_{s}/6\log(\xi_{s})\)[77] in proximity of the second-order phase transition, with \(c_{c/s}\) the central charge of the charge/spin sector. However, strong finite entanglement effects in the VUMPS simulations have a quantitative impact on the estimate of the latter and result in strong
fluctuations. Moreover, the theory of finite-entanglement corrections [59; 62; 78] is less developed than finite-size scaling and, in particular, does not cover the case of two gapless modes sharing the same finite bond dimension in the MPS representation. In particular, as already pointed out at the end of the previous section, achieving a reliable description of the critical correlations of the system with \(\xi_{s}\to\infty\) requires a very large bond dimension \(D\), given the sub-linear scaling of \(\xi_{s}\sim D^{\kappa}\).
For these reasons, we determined the total central charge \(c\) from finite-size DMRG simulations with periodic boundary conditions by fitting the relation [77]
\[S(j)=\frac{c}{3}\log\left(d\left(j,L\right)\right)+s_{1},\] (S25)
where \(S(j)\) is the entanglement entropy at the site \(j\), \(d(j,L)=L/\pi\sin\left(\pi j/L\right)\) is the chord distance, and \(s_{1}\) is a non-universal constant.
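A compact sketch of this fit (with a synthetic entanglement-entropy profile standing in for the DMRG output) is:

```python
import numpy as np

def fit_central_charge(S, L):
    """Fit S(j) = (c/3) log d(j, L) + s1 with the chord distance d(j, L) = (L/pi) sin(pi j / L)."""
    j = np.arange(1, L)                          # bipartition cuts
    d = (L / np.pi) * np.sin(np.pi * j / L)
    slope, s1 = np.polyfit(np.log(d), S, 1)
    return 3.0 * slope, s1

# Synthetic profile generated with c = 1.7 plus weak noise (placeholder, not the DMRG data)
L, c_true, s1_true = 64, 1.7, 0.6
j = np.arange(1, L)
S_demo = (c_true / 3.0) * np.log((L / np.pi) * np.sin(np.pi * j / L)) + s1_true
S_demo += 0.01 * np.random.default_rng(0).standard_normal(L - 1)
print(fit_central_charge(S_demo, L))  # close to (1.7, 0.6)
```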
We specifically traced the transition line where the VUMPS spin correlation length \(\xi_{s}\) is maximal and the critical exponent \(\beta\) shows the CFT predictions before vanishing at the FOPT, Fig. 3(e) in the main text. Figure S11 shows the excellent agreement of our data with the relation (S25) at three illustrative points along this line. Finite size effects are present in any case and lead to an overestimation of the value of the central charge. The measured estimate is expected to decrease upon increasing the size of the finite system.
### Extraction Of Correlation Lengths
Most of the numerical results presented in this Letter are obtained by the VUMPS algorithm presented in Ref. [36]. The concrete implementation uses the ITensor library [79]. This ansatz operates directly in the thermodynamic limit by enforcing translational invariance. The class of ansatz states is characterized by the set of matrices \(\{A_{L}^{\sigma},A_{C}^{\sigma},A_{R}^{\sigma}\}\), with \(\sigma\) enumerating the physical local states. From this set of matrices, the state \(\ket{\psi}\) is represented as
\[\ket{\psi}=\sum_{\{\sigma\}}\operatorname{Tr}\left[\dots A_{L}^{\sigma_{j-2}} A_{L}^{\sigma_{j-1}}A_{C}^{\sigma_{j}}A_{R}^{\sigma_{j+1}}A_{R}^{\sigma_{j+2}} \dots\right]\ket{\dots\sigma_{j-2}\sigma_{j-1}\sigma_{j}\sigma_{j+1}\sigma_{j +2}\dots}\,.\]
The matrices \(A_{L}^{\sigma}\) and \(A_{R}^{\sigma}\) fulfill \(\sum_{\sigma}(A_{L}^{\sigma})^{\dagger}A_{L}^{\sigma}=\sum_{\sigma}A_{R}^{\sigma }(A_{R}^{\sigma})^{\dagger}=\mathds{1}\) and special equivariance relations to ensure the translational invariance of the ansatz, see Fig. S12. Using the transfer-matrix of the system, defined by
\[\mathcal{T}_{L}\coloneqq\sum_{\sigma}A_{L}^{\sigma}\otimes\bar{A}_{L}^{\sigma}\,,\] (S26)
and the two transfer-matrices with operator insertion
\[\mathcal{T}_{L}^{O}\coloneqq\sum_{\sigma,\tau}O_{\sigma,\tau}A_{L}^{\sigma} \otimes\bar{A}_{L}^{\tau}\,,\quad\mathcal{T}_{C}^{K}\coloneqq\sum_{\sigma,\tau} K_{\sigma,\tau}A_{C}^{\sigma}\otimes\bar{A}_{C}^{\tau}\,,\] (S27)
where \(\bar{z}\) denotes the complex conjugation of \(z\), one can represent the correlation function of two arbitrary operators \(\hat{O}\) and \(\hat{K}\) as [Fig. S13]:
\[\begin{split}\langle\hat{O}_{j}\hat{K}_{j+l}\rangle&=\langle\mathds{1}|\,\mathcal{T}_{\mathrm{L}}^{O}\left(\mathcal{T}_{\mathrm{L}}\right)^{l-1}\mathcal{T}_{C}^{K}\ket{\mathds{1}}=\sum_{n\geq 0}\lambda_{n}^{l-1}\alpha_{n}^{O}\,\beta_{n}^{K}=\sum_{n\geq 0}e^{-\frac{l-1}{\xi_{n}}}c_{n}^{O,K}\\ \alpha_{n}^{O}&=\langle\mathds{1}|\,\mathcal{T}_{\mathrm{L}}^{O}|R_{n}\rangle\,,\ \beta_{n}^{K}=\langle L_{n}|\mathcal{T}_{C}^{K}|\,\mathds{1}\rangle\,,\ \xi_{n}=-\frac{1}{\log|\lambda_{n}|}\,.\end{split}\] (S28)
The second line in Eq. S28 is obtained after using the eigen decomposition of the transfer-matrix
\[\mathcal{T}_{L}=\sum_{n\geq 0}\lambda_{n}\ket{R_{n}}\bra{L_{n}}\,,\quad \langle L_{n}|R_{m}\rangle=\delta_{m,n}\,.\] (S29)
Using Eq. S28, it is straightforward to extract the asymptotic behavior of any correlation function
\[\langle\hat{O}_{j}\hat{K}_{j+l}^{\dagger}\rangle\approx c_{n^{\star}}^{O,K}\,e^{-\frac{l}{\xi_{n^{\star}}}}+c_{0}^{O,K}\,.\]
where \(n^{\star}\) is the first \(n>0\) in the descending sequence \(\lambda_{0}>|\lambda_{1}|\geq|\lambda_{2}|\dots\) with a non-zero operator weight \(c_{n}^{O,K}\) (assuming \(\lambda_{n^{\star}}\) to be unique). The contribution \(c_{0}^{O,K}\) equals the product of expectation values \(\langle\hat{O}_{j}\rangle\langle\hat{K}_{j}^{\dagger}\rangle\). In the case of \(\hat{O}=\hat{K}\) this asymptotic behavior can be used to extract the smallest energy gap in the excitation spectrum generated by the operator \(\hat{O}\). In the main text, we applied this analysis to the current operator
\[\hat{O}=\widehat{J}_{\perp}^{(2e)}\coloneqq\frac{i}{2}\left(\Sigma_{a}^{+} \Sigma_{b}^{-}-\Sigma_{b}^{+}\Sigma_{a}^{-}\right)\,.\]
which can be interpreted as the magnetization order parameter in the field theory, \(\sin\left(\hat{\varphi}_{s}(x)\right)\), odd under the \(\varphi_{s}(x)\to-\varphi_{s}(x)\) symmetry transformation. Thus, \(\hat{J}_{\perp}^{(2e)}\) is naturally associated with excitations in the spin sector exclusively.
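For illustration, the extraction of correlation lengths from the transfer-matrix spectrum can be sketched with small dense tensors (random placeholder MPS data; the production code instead works with the VUMPS fixed-point tensors):

```python
import numpy as np

def transfer_matrix(A):
    """A has shape (d, D, D); return T_L = sum_sigma A^sigma (x) conj(A^sigma) as a (D^2, D^2) matrix."""
    d, D, _ = A.shape
    T = np.zeros((D * D, D * D), dtype=complex)
    for sigma in range(d):
        T += np.kron(A[sigma], A[sigma].conj())
    return T

def correlation_lengths(A, n_max=4):
    lam = np.linalg.eigvals(transfer_matrix(A))
    lam = lam[np.argsort(-np.abs(lam))]          # descending magnitude; lam[0] is close to 1
    return [-1.0 / np.log(np.abs(lam[n])) for n in range(1, n_max)]

# Random left-isometric tensor as placeholder input: d = 3 physical states, bond dimension D = 6
d, D = 3, 6
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((d * D, D)))
A = Q.reshape(d, D, D)   # satisfies sum_sigma A^sigma^dagger A^sigma = identity
print(correlation_lengths(A))
```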
Very similarly, one can extract the density of the logarithmic fidelity \(\mathcal{F}\) in the thermodynamic limit from the mixed transfer-matrix
\[\mathcal{T}_{L}^{\phi,\psi}\coloneqq\sum_{\sigma}A_{L}^{\phi,\sigma}\otimes \bar{A}_{L}^{\psi,\sigma}\,,\] (S30)
where \(A_{L}^{\phi}\) defines the state \(\ket{\phi}\) and \(A_{L}^{\psi}\) the state \(\ket{\psi}\). Defining \(\lambda_{0}\) as the largest-in-magnitude eigenvalue of \(\mathcal{T}_{L}^{\phi,\psi}\), it is straightforward to show:
\[\mathcal{F}\coloneqq-\lim_{N\to\infty}\frac{1}{N}\log\left(\left\langle\psi \middle|\phi\right\rangle\right)=-\log(\left|\lambda_{0}\right|)\,.\]
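A corresponding sketch for the log-fidelity per site, again with dense placeholder tensors, is:

```python
import numpy as np

def log_fidelity_per_site(A_phi, A_psi):
    """F = -log|lambda_0|, with lambda_0 the dominant eigenvalue of the mixed transfer matrix."""
    d, D, _ = A_phi.shape
    T = np.zeros((D * D, D * D), dtype=complex)
    for sigma in range(d):
        T += np.kron(A_phi[sigma], A_psi[sigma].conj())
    lam0 = np.max(np.abs(np.linalg.eigvals(T)))
    return -np.log(lam0)

def random_left_mps(d, D, seed):
    """Random left-isometric uniform MPS tensor used as placeholder input."""
    Q, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((d * D, D)))
    return Q.reshape(d, D, D)

A1, A2 = random_left_mps(3, 6, 3), random_left_mps(3, 6, 4)
print(log_fidelity_per_site(A1, A1))  # approximately 0 for identical states
print(log_fidelity_per_site(A1, A2))  # positive for distinct states
```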
|
2301.01110
|
Causal Discovery for Gene Regulatory Network Prediction
|
Biological systems and processes are networks of complex nonlinear regulatory
interactions between nucleic acids, proteins, and metabolites. A natural way in
which to represent these interaction networks is through the use of a graph. In
this formulation, each node represents a nucleic acid, protein, or metabolite
and edges represent intermolecular interactions (inhibition, regulation,
promotion, coexpression, etc.). In this work, a novel algorithm for the
discovery of latent graph structures given experimental data is presented.
|
Jacob Rast
|
2023-01-03T14:11:00Z
|
http://arxiv.org/abs/2301.01110v1
|
# Causal Discovery for Gene Regulatory Network Prediction
###### Abstract
Biological systems and processes are networks of complex nonlinear regulatory interactions between nucleic acids, proteins, and metabolites. A natural way in which to represent these interaction networks is through the use of a graph. In this formulation, each node represents a nucleic acid, protein, or metabolite and edges represent intermolecular interactions (inhibition, regulation, promotion, coexpression, etc.). In this work, a novel algorithm for the discovery of latent graph structures given experimental data is presented.
## 1 Introduction
The problem of representing a biological process of interest in a graphical structure is under active investigation and stands as one of the grand challenges in biology.
Since RNA data is widely available, exclusive use of the transcriptome as a stand-in for other biomolecular expression levels is a well-studied formulation of the problem. Commonly, RNA microarray or single cell RNA-seq data is used to obtain gene expression levels of a large number of cells. This data is used to form a module of gene expression, represented as a directed or undirected graph. If the "ground truth" or "gold standard" pathway is known, it can in turn be used to evaluate the network.
While the human body contains hundreds of cell types and subtypes, each cell contains a nearly identical genome. It is the complex regulatory machinery that determines which proteins a cell expresses and at what time, giving rise to cellular diversity. Regulatory networks play an important role in determining whether or not a gene will be expressed. Canonical examples of the regulatory network include the lactose (or lac) operon [5] and Wnt pathway [7].
Discovery of gene regulatory networks is a longstanding biological problem [10]. The generation of high-throughput data relevant to the problem has allowed for the application of new approaches. A large influence in the field was a series of challenges issued from the National Center for Data to Health under the Dialogue on Reverse Engineering Assessment and Methods (DREAM) framework. The DREAM challenges DREAM3 [13], DREAM4 [4] and DREAM5 [10] tasked participants with determining the network structures underlying a number of measurements of gene expression levels from increasingly complex networks. These included both simulated networks and networks from well-studied model organisms.
In the years since, a number of graphical methods have been developed expanding upon these approaches [11, 8]. One notable example is the use of a factor graph for the representation of gene regulatory networks, which was successful at recovering pathways found in E. coli from a well-studied database. Another notable example was the use of reciprocal graphs, a highly general structure that is well suited to the study of gene expression given its ability to model loops, which traditional Bayesian networks cannot represent.
Potential applications include the discovery of new cellular pathways, the prediction of responses to environmental changes, the discovery of transcription factors for stem cell differentiation, and a deeper understanding of known cellular pathways.
## 2 Background
### Methods for modifying gene expression
Several tools exist for manipulating the gene expression level in a target organism or cell. One such technique is the gene knockout, whereby the expression level of a target gene or set of genes is forced to 0. Typically this is achieved through genome editing, whereby the DNA sequence or sequences coding for the set of genes to be knocked out are removed. This renders the organism incapable of expressing the RNA or protein encoded. While most commonly a single gene or dual gene knockout is performed, some studies have identified genome screens that allow for the study of knockout combinations of up to three genes [15]. A number of methods have been developed for gene engineering and knockout. Briefly, technologies such as CRISPR/Cas9 [9] allow for the removal or insertion of a gene at any target location with minimal cost. An alternative approach for manipulation of gene expression level in a target cell is the gene knockdown. In some instances it is advantageous to study the effect of perturbing a gene away from its steady state expression level rather than silencing its expression entirely. This can be achieved through the introduction of inhibitory molecules such as interfering RNA (RNAi), antisense DNA oligos, or inhibitory proteins. Finally, some methods exist for inducing the expression of a target gene. While less robust and less well studied, some RNA species have been discovered that promote translation [2]. In short, researchers have the tools to eliminate, increase, or decrease gene expression in a cell or organism. In this work, algorithms were designed to study the experimental data resulting from each of these manipulations.
### Mathematical models of gene expression
The use of tools to simulate gene expression data is invaluable to study the theory of reconstructing gene regulatory networks. A well-know tool for the simulation of gene expression is GeneNetWeaver [14]. Briefly, GeneNetWeaver models a cell as a bipartite directed graph of proteins (transcription
Figure 1: Full gene regulatory network for model organism E. coli
factors) and RNA. A series of ordinary differential equations or stochastic differential equations are used to relate the expression levels of the protein or RNA given a set of conditions. These conditions include biological factors such as RNA degradation rate, transcription factor-RNA affinity, transcription factor role (activation or deactivation of RNA transcription), etc. Interested readers may consult [6] and [14].
Steady-state solutions to the ODEs are used to generate baseline gene expression level. Experiments such as single gene knockout, multiple gene knockout, gene knockdown, gene knockup are simulated by intervening on the value of a set of variables and calculating the expression of the remaining variables under those conditions.
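As an illustration of how such interventions can be simulated, the sketch below clamps a knocked-out gene to zero and re-solves the remaining steady-state equations of a toy three-gene ODE system. The network topology, rate constants, and Hill terms are invented for the example and are not taken from GeneNetWeaver.

```python
# Minimal sketch of a knockout as an intervention on an ODE steady state.
# The three-gene network and all parameters below are illustrative only.
import numpy as np
from scipy.optimize import fsolve

def dxdt(x):
    """Toy network: gene 0 is constitutive and activates gene 1; gene 1 represses gene 2."""
    g0, g1, g2 = x
    return np.array([
        1.0 - 0.5 * g0,                    # constitutive production - degradation
        g0**2 / (1 + g0**2) - 0.5 * g1,    # Hill activation of gene 1 by gene 0
        1.0 / (1 + g1**2) - 0.5 * g2,      # Hill repression of gene 2 by gene 1
    ])

def steady_state(knockout=None):
    """Solve dx/dt = 0; a knockout fixes that gene at 0 and solves for the rest."""
    free = [i for i in range(3) if i != knockout]
    def residual(xf):
        x = np.zeros(3)
        x[free] = xf
        return dxdt(x)[free]
    full = np.zeros(3)
    full[free] = fsolve(residual, np.full(len(free), 0.5))
    return full

print(steady_state())            # baseline expression levels
print(steady_state(knockout=0))  # expression under a gene-0 knockout
```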
### Boolean Networks
It is important to note the types of nonlinear dynamics that are frequently encountered in gene regulatory networks and that can be captured in ODE or SDE models. Frequently, proteins known as transcription factors will act in complexes in order to activate or inhibit the expression of a gene. The behavior whereby gene expression will only be affected by a complex and not at all by any subset of the complex is similar to the Boolean AND function. Simple Boolean network models of gene regulatory networks have been applied to capture this behavior [1]. For more expressive power and the creation of synthetic datasets, success has been reported converting a Boolean network to a system of ordinary differential equations [12].
## 3 Methods
### Boolean Causal Discovery Algorithm
In this section, the development, motivation, working theory, and proof of the Bool-PC algorithm are described.
#### 3.1.1 Test for independence
Using observational data alone, it is theoretically possible to test for independence between variables using an approach such as mutual information. For small or simple networks without nonlinearities this approach can successfully reconstruct a directed graph [3]. However, for larger or more complex networks this approach yields poor performance. Recognizing that the ability to manipulate the expression level of a gene is equivalent to intervening on a variable (in the causal sense), a much more robust test for independence can be developed.
We use the property that for two independent distributions, \(P(A,B)=P(A)P(B)\) to write the following:
\[A\perp\!\!\!\perp B\implies P(A=a|B=b_{1})=P(A=a|B=0) \tag{1}\]
In other words, if the probability distribution of A is unchanged by knocking out gene B, A and B are independent.
Additionally, note that for a directed network, independence as defined in equation 1 is not symmetric: \(A\perp\!\!\!\perp B\) does not imply \(B\perp\!\!\!\perp A\).
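A sketch of how this interventional test could look on simulated, noise-free data is given below; with noisy measurements a statistical two-sample test would be needed instead. The data structures and the tolerance are assumptions made for illustration, not part of the paper's code.

```python
# Equation 1 as a tolerance check on noise-free simulated data: gene A is
# declared independent of gene B if A's expression is unchanged by a B knockout.
def independent(a, b, baseline, knockouts, tol=1e-3):
    """baseline: {gene: steady-state expression};
    knockouts: {gene knocked out: {gene: expression under that knockout}}."""
    return abs(knockouts[b][a] - baseline[a]) < tol

# Note the asymmetry discussed above: independent(a, b, ...) asks whether B
# influences A, which is not the same question as independent(b, a, ...).
```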
#### 3.1.2 Test for Conditional Independence
This property is now extended to develop a test for conditional independence. A naive extension would propose the following:
\[A\perp\!\!\!\perp B|C\implies P(A=a|B=b_{1},C=c_{1})=P(A=a|B=0,C=c_{1}) \tag{2}\]
Equation 2 holds for many probabilistic graphical models. It follows from the definition of conditional independence \(P(A|B,C)=P(A|C)\).
Importantly, equation 2 does not hold for the problem of gene regulatory network inference containing Boolean nonlinearities. An illustrative counterexample can be found in the toy network depicted
in figure 2. From the graph structure, we can read the set of conditional independence statements \(G27\perp\!\!\!\perp G26|G25\) and \(G27\not\perp\!\!\!\perp G24|G25\).
In table 1 we consider the effect of perturbing the value of G26 given G25 = 0. The value of G27 is unaffected, seemingly in accord with the conditional independence statements read from figure 2. However, in table 2 we consider the effect of perturbing G24 given G25 = 0. Under the same set of experimental conditions we observe a nearly identical result: the value of G27 is unaffected given G25 = 0, despite the graph structure implying conditional dependence. Finally, table 3 gives the effect of perturbing G25 given G24 = 0. Notice that, again, no effect is observed on G27 from perturbations in G25 given G24 = 0, despite the graph structure describing conditional dependence.
Given the Boolean AND inhibitory effect of G24 and G25 on G27, equation 2 does not capture the graph structure. This motivates the development of a test for conditional independence in Boolean causal graphs, given in equation 3.
### Bool-PC
Equation 3 is now used to develop a variant of the PC algorithm, Bool-PC (Algorithm 1), for use in Boolean-nonlinear causal graphs and suitable for gene regulatory network discovery.
Intuitively, the algorithm has the following logic: for any variables X, Y, and Z, if there is some flow of causality from X to Y, as established in Step 1, we cannot prune an edge given that Z blocks the causal flow from X to Y if Y also blocks the causal flow from Z to X, as these two conditions would imply no causal flow from Y, Z to X.
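The listing of Algorithm 1 is not reproduced in this extraction; the following is a simplified skeleton, restricted to single-gene conditioning sets, of how the equation-3 test could replace the standard statistical conditional-independence test in a PC-style pruning loop. The `expr` helper (returning steady-state expression under a dictionary of forced gene values) is an assumed interface, and the actual Bool-PC pruning and orientation rules may differ.

```python
# Simplified Bool-PC-style skeleton using the conditional-independence test of
# equation 3. `expr(interventions)` is an assumed simulator interface returning
# {gene: expression} for a dict of {gene: forced value} interventions.
from itertools import combinations

def cond_independent(a, b, c, expr, tol=1e-3):
    base = expr({})                        # unperturbed steady state (values b1, c1)
    ko_b = expr({b: 0.0, c: base[c]})[a]   # P(A | B=0,  C=c1)
    ko_c = expr({b: base[b], c: 0.0})[a]   # P(A | B=b1, C=0)
    return abs(ko_b - base[a]) < tol and abs(ko_c - base[a]) < tol

def prune_edges(genes, expr):
    """Start from a complete undirected graph and drop A-B if some single gene C
    renders A and B conditionally independent in the sense of equation 3."""
    edges = {frozenset(p) for p in combinations(genes, 2)}
    for a, b in combinations(genes, 2):
        if any(cond_independent(a, b, c, expr) and cond_independent(b, a, c, expr)
               for c in genes if c not in (a, b)):
            edges.discard(frozenset((a, b)))
    return edges
```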
### Bool-PC Proof of Correctness
The proof of correctness for Bool-PC follows from the reduction of SAT to 3SAT. For any 3 variables X, Y, Z we have shown exhaustively that Bool-PC can discover any graph structure, as shown in the toy example from Figure 2. Given that the algorithm can reconstruct the Boolean relations between any 3 variables and that sets of 3 variables can be used to satisfy the Boolean relation of any arbitrary Boolean Satisfiability problem, we argue that Bool-PC is a correct algorithm for any arbitrary Boolean graph. From here the correctness of the PC algorithm applies.
\[A\perp\!\!\!\perp B|C\to P(A=a|B=b_{1},C=c_{1})=P(A=a|B=0,C=c_{1})\\ \wedge P(A=a|B=b_{1},C=c_{1})=P(A=a|B=b_{1},C=0) \tag{3}\]
## 4 Results
Performance of the GRNULAR + TopoDiffVAE algorithm and the Bool-PC algorithm, and a comparison against the state of the art, can be found in table 4.
Figure 2: Toy model of conditional dependence and independence
## 5 Discussion and Analysis
### Bool-PC
First, it must be noted that the results presented in table 4 are for data generated without noise. The theoretical performance of Bool-PC on a network defined by a system of SDEs with noise (a more challenging problem that more closely resembles the biological reality) cannot in general reach an AUPRC of 1. Additionally, a major limitation of the Bool-PC algorithm is its experimental and time complexity. The PC algorithm has exponential time complexity, limiting its application to the discovery of full GRNs of thousands of nodes. Worse, the algorithm proposes an exponential number of gene knockout experiments, which is both prohibitively expensive and technically infeasible. Bool-PC with dual gene knockout reliably captures gene regulatory subnetworks of hundreds of nodes well and very often perfectly captures networks of tens of nodes. This makes it a useful abi
\begin{table}
\begin{tabular}{||c c c||} \hline \hline Method & AUPRC & Training Time (seconds) \\ \hline \hline Bool-PC (dual knockout) & 0.741 & NA \\ Bool-PC (n-knockout, theoretical) & 1.00 & NA \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparison of discovery algorithms on simulated GRN with 100 genes, and 10 Transcription factors in a noise-free setting
\begin{table}
\begin{tabular}{||c c c c c||} \hline \hline Conditions & G24 & G25 & G26 & G27 \\ \hline \hline Steady state & 0.541 & 0.464 & 0.110 & 0.112 \\ G25 = 0 & 0.541 & 0.00 & 0.665 & 1.00 \\ G25 = 0, G26=0 & 0.541 & 0.00 & 0.00 & 1.00 \\ G25 = 0, G26 \(\uparrow\) & 0.541 & 0.00 & 1.99 & 1.00 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experimental perturbations of G25 and G27
\begin{table}
\begin{tabular}{||c c c c c||} \hline \hline Conditions & G24 & G25 & G26 & G27 \\ \hline \hline Steady state & 0.541 & 0.464 & 0.110 & 0.112 \\ G25 = 0 & 0.54 & 0.00 & 0.67 & 1.00 \\ G25 = 0, G24 = 0 & 0.00 & 0.00 & 0.09 & 1.00 \\ G25 = 0, G24 \(\uparrow\) & 2.07 & 0.09 & 0.69 & 1.00 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experimental perturbations of G25 and G24
Combining the same conditional independence tests with more recent causal discovery algorithms such as the Grow-Shrink algorithm may expand the usefulness of the approach. Additionally, as gene sequencing costs shrink, massive gene knockout panels will become increasingly feasible.
## 6 Appendix
### Access to Code
All relevant data and code for the project can be found at the project's Github repository [https://github.com/jacobrast/grn_causal_discovery](https://github.com/jacobrast/grn_causal_discovery).
|
2306.03978
|
Büyük dil modellerinin Türkçe verisetleri ile
eğitilmesi ve ince ayarlanması
|
Large language models have advanced enormously, gained vast attention, and are undergoing a phase of intense research. Some of the developed models and training datasets have been made open-accessible. Hence, these may be further fine-tuned with various techniques to obtain specialized models for specific tasks. When it comes to the Turkish language, open-access models do not provide satisfactory coverage. This is also observed in published datasets. In this work, we propose some ideas to mitigate this issue: creating large Turkish datasets, training LLMs with them, and fine-tuning pre-trained models with Turkish inputs. We report our findings on Turkish-based training, along with the problems encountered along the way. We conclude with the outcomes of these experiments and propose ideas for further work.
--
B\"uy\"uk dil modelleri inan{\i}lmaz \"ol\c{c}\"ude geli\c{s}mekte, b\"uy\"uk
ilgi toplayarak ve \"uzerlerinde yo\u{g}un ara\c{s}tirmalarin yapildi\u{g}i bir
d\"onemdedirler. Geli\c{s}tirilen modeller ve e\u{g}itimde kullanilan
verisetlerinden bazilari a\c{c}ik eri\c{s}imli olarak sunulmaktadir. B\"oylece
ince ayarlama teknikleri uygulayarak \"ozelle\c{s}mi\c{s} g\"orevler i\c{c}in
\c{c}ali\c{s}abilir modeller elde edilmektedir. T\"urk\c{c}e s\"oz konusu
oldu\u{g}unda bu modellerinin kapsayicili\u{g}i yeterli d\"uzeyde de\u{g}ildir.
Bu durum, yayimlanan verisetlerinde de g\"ozlemlenebilir. Bunu a\c{s}manin
yollari T\"urk\c{c}e i\c{c}erikli b\"uy\"uk verisetlerinin olu\c{s}turulmasi,
b\"uy\"uk dil modellerinin bunlarla e\u{g}itilmesi ve \"onceden
e\u{g}itilmi\c{s} modellerin T\"urk\c{c}e girdilerle ince ayarlanmalari
olabilir. Bu \c{c}ali\c{s}mada a\c{c}ik eri\c{s}imli dil modelleri ve
verisetleri \"uzerinde durulmakta ve T\"urk\c{c}e temelli bazi deneyler,
kar\c{s}ila\c{s}ilan sorunlar ve sonu\c{c}lar irdelenmektedir.
|
A. Taha Arslan
|
2023-06-06T19:31:08Z
|
http://arxiv.org/abs/2306.03978v1
|
# Büyük dil modellerinin Türkçe verisetleri ile eğitilmesi ve ince ayarlanması
###### Abstract
Large language models have advanced enormously, gained vast attention, and are undergoing a phase of intense research. Some of the developed models and the corresponding training datasets have been made public and open-accessible. Hence, these may be further fine-tuned with various techniques to obtain specialized models for specific tasks. When it comes to the Turkish language, open-access large language models do not provide satisfactory coverage. This can also be observed in the published datasets. In this work, we propose some ideas to mitigate this issue. These include creating large Turkish-supported datasets, training LLMs with them, and fine-tuning already-trained models with Turkish inputs. We introduce open-access LLMs and datasets and further report our findings on Turkish-based training and the problems encountered. We conclude with the outcomes of these experiments and propose ideas for further work. -- Over the recent period, large language models have advanced enormously, attracted great interest, and have been the subject of intensive research. Some of the developed models and the datasets used to train them are offered as open access. Thus, by applying fine-tuning techniques to them, it is possible to obtain models that work for specialized tasks. When it comes to Turkish, the coverage of the available large language models is not at a sufficient level. This can also be observed in the published datasets. Ways to remedy this include creating large Turkish-content datasets, training large language models with them, and fine-tuning pre-trained models with Turkish inputs. This work focuses on open-access language models and datasets and describes some Turkish-based experiments, the problems encountered, and the results.
## 1 Introduction
One of the dramatic developments in the field of artificial intelligence in recent years is _large language models_, that is, artificial neural networks with a very large number of parameters (one billion and above) trained on likewise very large amounts of unlabeled text using self-supervised or semi-supervised learning methods. This approach has continued to grow since 2018. These developments have also caused natural language processing (NLP) research to diverge, for a number of tasks, from the previously dominant approach of supervised learning [1].
Although large language models are generally designed to predict the next word in a sentence, when specialized and fine-tuned for specific task definitions they can also perform different functions
|
2303.09862
|
Resolving buried interfaces with Low Energy Ion Scattering
|
We investigate the use of Low Energy Ion Scattering (LEIS) to characterize
buried interfaces of ultra-thin films. LEIS spectra contain depth-resolved
information in the so-called sub-surface signal. However, the exact correlation
between the sub-surface signal and the depth composition is still unknown. For
this reason, LEIS spectra so far only provided qualitative information about
buried interfaces. In this study, we investigate nm-thin films of Si-on-W and
Si-on-Mo, where we compare simulated data to LEIS spectra. We present a method
to extract depth-sensitive compositional changes -- resolving buried interfaces
-- from LEIS spectra for the first few nanometers of a thin film sample. In the
case of Si-on-Mo, the simulation of the LEIS sub-surface signal allows
obtaining a quantitative measurement of the interface profile that matches the
value determined using the LEIS layer growth profile method with an accuracy of
0.1 nm. These results pave the way to further extend the use of LEIS for the
characterization of features buried inside the first few nanometers of a
sample.
|
Adele Valpreda, Jacobus M. Sturm, Andrey Yakshin, Marcelo Ackermann
|
2023-03-17T10:00:26Z
|
http://arxiv.org/abs/2303.09862v2
|
# Resolving buried interfaces with Low Energy Ion Scattering
###### Abstract
We investigate the use of Low Energy Ion Scattering (LEIS) to characterize buried interfaces of ultra-thin films. LEIS spectra contain depth-resolved information in the so-called sub-surface signal. However, the exact correlation between the sub-surface signal and the sample's depth composition is still unknown. For this reason, LEIS spectra so far only provided qualitative information about buried interfaces.
In this study, we investigate nm-thin films of Si-on-W and Si-on-Mo, where we compare simulated data to LEIS spectra. We present a method to extract depth-sensitive compositional changes - resolving buried interfaces - from LEIS spectra for the first few nanometers of a thin film sample.
In the case of Si-on-Mo, the simulation of the LEIS sub-surface signal allows obtaining a quantitative measurement of the interface profile that matches the value determined using the LEIS layer growth profile method with an accuracy of 0.1 nm. These results pave the way to further extend the use of LEIS for the characterization of features buried inside the first few nanometers of a sample.
## I Introduction
Ultrathin films of only a few nm pose unique challenges in the characterization of interfaces. When film thicknesses are a few nanometers at most, the interface makes up a major part of the final structure, and hence determines many of the film's properties. To unravel, and ultimately predict the properties of such thin films, characterizing the interface composition with quasi-atomic accuracy is key.
Several methods can be used to probe the interface quality but no method is free of issues. Commonly used methods include transmission electron microscopy (TEM) which typically requires extensive experimental effort and X-ray photoelectron
spectroscopy (XPS) which offers a limited depth resolution due to large information depth. Low Energy Ion Scattering (LEIS), XPS and secondary ion mass spectrometry (SIMS) can also be used in combination with sputter depth profiling, which, however, will introduce sputtering artifacts.
In this paper, we present the use of the LEIS sub-surface signal for the characterization of buried interfaces in a static mode. This is interesting because it avoids the use of sputtering steps, which are currently the limiting factor for the use of LEIS to resolve buried features with quasi-atomic resolution.
Along with quantification of the composition of the outermost atomic layer, LEIS provides compositional information about deeper layers, down to ca. 10 nm. These two signals are distinguishable in LEIS measurements as a peak-like 'surface' signal and a background 'sub-surface' signal, respectively, as can be seen in figure 3. The presence and intensity of the sub-surface signal depend on the chemistry of the surface and on target and projectile conditions such as the mass of the target atoms and the mass and energy of the projectiles [1; 2; 3; 4].
The surface selectivity of the peaks in LEIS spectra enables the characterization of the change in surface coverage as a function of the as-deposited film thickness, the so-called LEIS layer growth profile. The procedure used to record the LEIS layer growth profiles is described in [5]. In the studies [5; 6; 7; 8; 9; 10; 11], the authors made use of LEIS layer growth profiles to characterize the nanolayer structure evolution and intermixing behavior of Transition-metal/silicon (TM/Si) thin-film structures, Transition-metal/Transition-metal (TM/TM) structures and Transition-metal oxides deposited by magnetron sputtering and Atomic Layer Deposition. In the studies [5; 7; 9] the authors showed the effectiveness of the error function and the logistic function to describe the interface profile in thin films. In the study [9], the layer growth profile of a comprehensive set of TM/TM structures allowed the authors to derive empirical rules to qualitatively predict the growth characteristics of the system based on atomic size difference, surface-energy difference, and enthalpy of mixing between the film and substrate atoms.
In LEIS layer growth profiles, the fact that the interface is characterized while being formed limits the use of the method to systems that are not subjected to matrix effects and segregation. Specifically, segregation during growth results in a mismatch between the as-deposited surface composition and the final interface profile. For these reasons, in recent years the sub-surface signal has gained more attention with the aim of improving the static non-destructive depth analysis of sample compositions, the so
called LEIS static depth profiling, offering an alternative to the layer growth profile in modern thin film science.
It was shown that it is possible to determine the thickness of a top film with sub-nm resolution from the shape of the sub-surface signal in LEIS measurements, with the restriction that the difference in mass between the top film and substrate needs to be sufficiently large to separate their respective contributions [2; 3; 4; 8; 12; 13; 14; 15; 16; 17]. The method was successfully demonstrated for the combination of ZrO\({}_{2}\) and Si [17].
In literature, several authors have already shown that Monte Carlo calculations performed with the TRBS code [18] can provide valid simulation of LEIS data [19; 20; 21; 22; 23; 24; 25]. The study by Bruner et al. [14] specifically showed that TRBS simulations are a valuable tool for film thickness analysis. However, the authors state that for the investigated structures, allowing for layer intermixing in TRBS does not significantly change the outcome of the simulation. From these results, it seemed impossible to measure an interface width by LEIS spectrum analysis paired with TRBS simulations.
In LEIS measurements, the projectiles' energy loss due to the interaction with the electrons is stochastic and therefore subjected to depth-dependent straggling. Although it is true that TRBS offers the possibility to include electronic straggling in the simulation, one must consider that when we apply TRBS to the LEIS regime (of a few keV) the electronic straggling is overestimated by the code, which is tailored to the MeV regime [14]. For this reason, past attempts to simulate LEIS data from TRBS calculations either included a custom-made model of electronic straggling or manually adjusted the TRBS smoothing function.
To the authors' knowledge, the models used so far for the simulation of electronic straggling did not take into account the dependence of electronic straggling on energy. The risk with this simplification is to overestimate the electronic straggling in the high energy side of the spectrum (which correspond to lower penetration depth). The implementation of an overestimated smoothening function can explain why the simulations appear insensitive to the small compositional changes that are present below the surface of the sample.
In this study, we measure the error in the simulation of LEIS spectra when no electronic straggling is applied, aiming to improve the understanding of electronic straggling in the LEIS regime. We then explore the characterization of a buried interface by comparing the experimental and simulated LEIS sub-surface spectra.
We use W/Si and Mo/Si thin films as model structures. W/Si structures are expected to have a relatively sharp and stable interface when Si is deposited on W [11]. As such, they are a good example structure for assessing the contribution of electronic straggling to the shape of the sub-surface signal in LEIS spectra. The results show that the electronic straggling is a function of the penetration depth of the ions inside the sample.
Mo/Si thin-film structures are expected to have a relatively broad interface when Si is deposited on Mo [10], which makes them a good model structure for assessing the contribution of interface width to the shape of the sub-surface signal. We show that the method of comparing the experimental and simulated - LEIS spectra is sensitive to the interface width in the case of short penetration depths, where the effect of electronic straggling is reduced to the minimum.
## II Experiment
### A. Deposition
All samples were fabricated in a home-designed ultra-high vacuum (UHV) system (base pressure \(<\)1x10\({}^{-9}\) mbar) which allows in-vacuum transfer between the thin film deposition chamber and the LEIS analysis chamber.
The following structures were deposited, a 30 nm silicon film for the measurement of silicon reionization function, three Si-on-W structures and one Si-on-Mo structure for the characterization of buried interfaces. All the structures were deposited onto super-polished Si substrates with native oxide. The bi-layer structures for interface characterization are shown in figure 1.
All the films were deposited at room temperature using magnetron sputtering. The argon process gas working pressure was 0.6x10\({}^{-3}\) mbar. The substrate-to-target distance was 8 cm for all materials. To prevent cross-contamination, all magnetrons were equipped with a shutter.
Figure 1: Bilayer structures used for LEIS characterization of buried interfaces. Three structures were used with Si-on-W. The thickness of the Si top film varied between the structures, while the deposition parameters were kept constant. One structure was used with Si-on-Mo.
W and Mo were deposited by direct current (DC) magnetron sputtering. The sputter powers used were 12 W and 10 W respectively. The corresponding sputter voltages were 357 V and 338 V, and the deposition rates were 0.07 nm/s and 0.11 nm/s.
The settings used for silicon varied between samples. Note that for the silicon films of interest for this study, the surface roughness is not expected to vary depending on the deposition settings. For the ion-fraction and Si-on-W samples, DC sputtering was used. The sputter power value was 12 W, matching the settings used in the study by Zameshin et al. [11]. The corresponding Si sputter voltage was 437 V, and the Si deposition rate was 0.05 nm/s. For the Si-on-Mo sample, radiofrequency (RF) sputtering was used for the deposition. The sputter power was 30 W, matching the settings used in the study by Reinink et al. [10]. The corresponding Si deposition rate was 0.02 nm/s.
To monitor the deposited thickness, all magnetrons are equipped with a quartz crystal microbalance (QCM) which is calibrated against ex-situ X-ray reflectivity (XRR) measurements of reference layers. Note that magnetron sputtering produces films that are very close to bulk density (in the 98-99% range), therefore the deposited mass can be related to thickness.
For Si-on-W, the thickness of the top film was determined using two methods: 1) the QCM, and 2) LEIS static depth profiling. The agreement between the two measurements is \(\pm\)0.3 nm. For Si-on-Mo, the thickness of the top film was measured by LEIS static depth profiling only.
### B. LEIS Characterization
LEIS measurements were performed using an IONTOF GmbH Qac100 high-sensitivity LEIS spectrometer with a base pressure of 1x10\({}^{\text{-10}}\) mbar.
The system is equipped with two electron-impact ion sources (primary source and sputter gun), a double toroidal electrostatic analyzer (DTA), and a position-sensitive detector. The primary source and sputter gun are positioned at incidence angles of 0\({}^{\circ}\) and 59\({}^{\circ}\) with respect to the sample surface normal. The DTA detects ions that are backscattered at an angle of 145\({}^{\circ}\).
During the measurement, the primary beam rasters over a 1x1 mm\({}^{2}\) area. For the study of the silicon reionization function, a 6 keV He\({}^{+}\) beam with a 4 nA current was used for the first measurement. He\({}^{+}\) beams of 5 keV, 4 keV, and 3 keV were also used. The measured beam currents were 4.3 nA, 4.2 nA and 3.1 nA, respectively. For all the
measurements, the acquisition time was under 4 min with an ion dose of 2x10\({}^{15}\) ions/cm\({}^{2}\). For the interface characterization, a 3 keV He\({}^{+}\) beam with a 3 nA average current was used for measurements. The acquisition time was around 3 min with an ion dose of around 3.5x10\({}^{14}\) ions/cm\({}^{2}\).
Whenever sputtering was performed, a 0.5 keV Ar\({}^{+}\) beam with a 100 nA average current was used over a raster area of \(2\times 2\) mm\({}^{2}\).
## III. TRBS simulations
For this study, we used the Monte Carlo code TRBS which is a specialized version of the TRIM code [26], optimized for the calculation of backscattered particles [18]. We used the version of TRBS implemented into the IONTOF SurfaceLab software (I-TRBS).
### A. Working principle
The code models the trajectory of ions inside a target as formed by free paths between nuclei and scattering events with the nuclei.
In a free path, the partial energy loss resulting from the interaction with the target's electrons (electronic stopping) is implemented. Previous studies [19, 20, 21, 14] showed that electronic stopping is typically underestimated by TRBS when performing simulations with low-energy ions. To compensate for this, TRBS requires the user to specify a correction for the electronic stopping (ESC values).
In a scattering event, the universal scattering potential is used to model the scattering probability. To mimic the experimental condition while limiting the computational costs, TRBS solves individual scattering integrals only when the scattering angle is above a user-defined cutoff angle. The collisions resulting in smaller scattering angles are accounted for globally as a continuous nuclear energy loss [18].
Biersack et al. provide a detailed description of TRBS [18], including the calculation methods used by the algorithm. Bruner et al. [14] provide a detailed description of the adjustments to make in order to use the TRBS code to simulate LEIS spectra.
The input for the program is a file where the ion species, the primary energy of the ions, and the target composition are defined. For each layer of the target, the user specifies the stoichiometry, thickness, density, ESC, and screening length correction (SLC). The latter is a correction factor for the empirical scattering potential which
affects the intensity of the simulated spectrum. The parameters ESC, SLC, and cutoff angle are further discussed in the section Calculation Details.
As output, TRBS gives two energy spectra of backscattered particles. The first is the particles' energy distribution without the influence of electronic straggling (uncorrected spectrum). The second is the result of the uncorrected spectrum convoluted with a Gaussian energy distribution where the standard deviation represents the mean electronic straggling width for each channel (corrected spectrum) [18]. Previous studies showed that the corrected spectrum often overestimates the influence of electronic straggling when the primary energy is of the order of keV [14, 18]. For this reason, in the study by Bruner et al. [14] the straggling correction is custom-made with a Gaussian that has a Full Width at Half Maximum (FWHM) of around 300 eV. The result is a good fit of the spectra, but the applied uniform broadening leads to simulations that are insensitive to the interdiffusion between thin films. To observe the effect of electronic straggling on the LEIS spectra, we compare the uncorrected spectra with LEIS measurements, as shown in section VI.
The main difference between LEIS experiments and TRBS simulations is that the projectile charge state is not included in TRBS simulations. This is why TRBS simulations of scattered particles have no contrast between surface and sub-surface signals. This difference is the key feature for the calculation of the reionization function of a material by means of TRBS simulations, as described in section IV.
### B. Calculation Details
For each simulation, the ion species, the energy of the ions, and the target composition were chosen to match a corresponding LEIS experiment.
For each material used in this study, the value of the ESC parameter was measured by performing LEIS measurements and TRBS simulations on thin films of known thickness. For silicon, the measurements were performed at three different primary energies, 3 keV, 4 keV, and 5 keV, and on three samples of different thickness for better accuracy. The Si-ESC factor obtained is valid for all the investigated primary energies and thicknesses. For W and Mo, only one measurement with 3 keV primary energy was performed for the evaluation of the ESC. Note that Si-ESC is critical for the measurement of Si reionization function, while W and Mo ESCs will only affect the low energy side of the spectra used in this study, which is of no interest for the
measurement of the interface width. The exact values reported in Table 1 were used for the ESC factors in all the simulations presented in this study.
The parameter SLC did not significantly change the shape of the spectra studied in this work. Therefore, we used the default value for low energy, equal to 0.85. For the cutoff angle, the default value of 0.08 was assigned to the corresponding simulation parameter during a first investigation step. This allowed us to obtain fast results with a typical computation time below 500 s. In a second step, simulations were run with a much lower cutoff angle. This led to identical results, with the only difference being reduced noise. A total of \(10^{8}\) ions was used in each simulation; this was sufficient to achieve smooth simulation results.
### C. Adjustments to the TRBS spectra
Instrumental broadening is a contribution to the shape of LEIS spectra that is not simulated by TRBS. We implemented a simple approximation of instrumental broadening through a Gaussian convolution of the output spectrum. The width of the Gaussian was taken equal to the width of the surface peak i.e. around 50 eV, assuming that the width of the surface peak represents the minimal broadening that is also expected for in-depth information.
In TRBS output spectra, in units particles/total particles, the yield depends on the channel size. The wider each channel the more particles are included in it. We hence normalize the result by the channel size. The resulting spectrum of backscattered particles, in units particles/(total particles*eV), has the same maximum yield regardless of the resolution.
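A sketch of these two adjustments is given below; the 50 eV FWHM value follows the description above, while the helper name, array layout, and the use of SciPy's Gaussian filter are implementation choices of this illustration rather than part of the I-TRBS software.

```python
# Normalize a TRBS output spectrum by channel size and apply an approximate
# instrumental broadening as a ~50 eV FWHM Gaussian convolution.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def adjust_trbs(energy_eV, counts_per_channel, fwhm_eV=50.0):
    channel_eV = float(np.mean(np.diff(energy_eV)))     # channel width in eV
    yield_per_eV = counts_per_channel / channel_eV      # particles/(total particles*eV)
    sigma_channels = fwhm_eV / 2.355 / channel_eV       # FWHM -> sigma, in channels
    return gaussian_filter1d(yield_per_eV, sigma_channels)
```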
TRBS simulations allow isolating the signal coming from the first layers of the sample. We compared such TRBS energy spectra with the corresponding LEIS spectra and noticed that in our case there is a mismatch of around 70 eV. This is attributed to
\begin{table}
\begin{tabular}{l l} \hline Material & ESC (dimensionless) \\ \hline Si & 2.2 \(\pm\)0.1 \\ W & 1.9 \(\pm\)0.3 \\ Mo & 1.9 \(\pm\)0.3 \\ \hline \end{tabular}
\end{table}
Table 1: Electronic stopping correction (ESC) for TRBS simulations of the materials used in this study. The measurements were performed on films of known thickness.
both the inelastic energy loss of the reionization process and the energy calibration of the LEIS experiments. For each structure presented here, we shifted the experimental spectra accordingly, obtaining aligned experimental and simulated spectra.
## IV. Model for Reionization Function
From a physical point of view, we make the following assumptions regarding charge transfer between projectiles and target atoms:
* Noble gas ions penetrating the target get neutralized
* Detected ions scattered from the subsurface are reionized at the surface upon leaving the sample
* For a given surface chemistry, the probability for a projectile to be reionized at the surface is a function of the final energy
With these assumptions, the reionization ion-fraction as a function of energy (reionization function) can be calculated by dividing the LEIS spectrum of backscattered ions by a spectrum of backscattered particles. The latter can be calculated either from single scattering approximations or by Monte Carlo calculations such as TRBS simulations. The difference is that Monte Carlo simulations take into account multiple scattering which is a key factor contributing to the shape of LEIS spectra.
The method of calculating the reionization function by dividing the LEIS spectrum by the corresponding TRBS simulation, first presented in 2015 by Bruner et al. [14], was recently used to investigate the ion fraction of oxides [19, 20, 21] and is used in this study to obtain the silicon reionization function. The result is shown in figure 2.
To our knowledge, there are no quantitative models describing how the reionization function scales as a function of energy. For this reason, it is difficult to identify in which energy range the signal from sputtered atoms has a significant contribution to the subsurface signal. A possibility is to perform Time of Flight measurements. However, this was not enough to avoid the detection of sputtered atoms in previous studies [14]. For this reason, we calculated the maximum energy of sputtered silicon for each primary energy from elastic kinematics, as described in the Appendix, and excluded the data below such values.
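The calculation can be summarized by the short sketch below: a point-by-point ratio of the measured ion spectrum to the simulated particle spectrum, restricted to energies above the maximum sputtered-Si energy, followed by a degree-3 polynomial fit as in figure 2. The array names and the threshold argument are placeholders, not the measured data.

```python
# Reionization function as the ratio of the LEIS ion spectrum to the simulated
# particle spectrum, fitted with a degree-3 polynomial above the sputter cutoff.
import numpy as np

def reionization_function(energy_eV, leis_ions, trbs_particles, e_sputter_max_eV):
    mask = energy_eV > e_sputter_max_eV                 # exclude sputtered-Si region
    ratio = leis_ions[mask] / trbs_particles[mask]
    return np.poly1d(np.polyfit(energy_eV[mask], ratio, deg=3))  # callable fit P+(E)
```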
The reionization probability increases as a function of the final energy of the projectile. This is expected: the higher the final energy, the closer the projectile can get to the target atom during the last collision, and the higher the probability for charge transfer [1, 3, 14, 21]. In addition, higher final energy implies a higher probability that a reionized particle survives Auger neutralization in the path toward the detector.
The reionization functions determined with four different primary energies overlap, showing that the ion fraction does not depend on the primary energy. This is expected considering that the final reionization happens at the surface when the projectile is about to leave the sample.
The reionization energy threshold resulting from the calculation is in agreement with the measurements included in the review by Brongersma et al. which reported a threshold between 300 eV and 500 eV [1].
## V. Model for LEIS Sub-Surface Signal
When focusing on bi-layer structures whose surface is fully closed by atoms of the top film, the reionization ion-fraction of the top material describes the reionization probability of any projectile, including those that were backscattered by atoms of the substrate. This is based on the assumption that the final reionization happens at the surface. Therefore, for a given bilayer structure of known film thicknesses, multiplying
Figure 2: Silicon reionization function determined with four different primary ion energies. For each primary energy, the reionization function was obtained on a 30 nm silicon film as the point-to-point ratio between the LEIS experiment and TRBS simulation. The data is fitted with a polynomial of degree 3.
the spectrum of backscattered particles (such as the TRBS spectrum) by the reionization function of the top film gives a simulation of the LEIS spectrum with the exception of the surface peaks. The latter are mainly formed by ions surviving neutralization during surface backscattering and cannot be simulated by the reionization function.
Figure 3 shows the steps for simulating the sub-surface signal (also called background) of a LEIS spectrum as implemented in this study. The structure of 1.7 nm Si-on-W was used in this case. The primary energy of the ions was 3 keV.
The resolution of the experiment was lowered to match the resolution of the TRBS simulation which in this case is a channel size of 25 eV. We start from the uncorrected TRBS spectrum. The convolution of the latter by a Gaussian with a 50 eV width allows us to obtain the spectrum of backscattered particles in units particles/(total particles*eV) (figure 3, green). We multiply the spectrum of backscattered particles by the silicon reionization function, thereby obtaining a spectrum of backscattered ions in units ions/nC (figure 3, blue). We multiply this by a scaling factor to take into account the detection efficiency in the experiment and obtain a simulation of the LEIS sub-surface signal (figure 3, red).
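The three numbered steps can be condensed into the sketch below, which reuses the helpers from the previous sketches. The paper does not state how the detection-efficiency scaling factor is chosen; a least-squares scale against the experiment is used here as one plausible option.

```python
# Background (sub-surface) simulation: broadened TRBS particle spectrum (step 1)
# times the reionization function (step 2), times a single scaling factor (step 3).
import numpy as np

def simulate_background(energy_eV, trbs_particles_per_eV, reionization, experiment):
    ions = trbs_particles_per_eV * reionization(energy_eV)   # step 2
    scale = np.sum(experiment * ions) / np.sum(ions * ions)  # step 3 (assumed least-squares)
    return scale * ions
```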
Figure 3: Comparison of experimental LEIS spectrum (gray), TRBS spectrum of backscattered particles (green), spectrum of backscattered ions (blue), and simulation of LEIS sub-surface signal (red) for the structure 1.7 nm Si on 20 nm W. The experiment was shifted in energy to match the surface signal of the simulation. The uncorrected TRBS spectrum was normalized by the channel size and convoluted with a 50 eV FWHM Gaussian to simulate instrumental broadening; the TRBS yield should be read on the right side axis. The spectra of backscattered ions and the simulation of LEIS sub-surface signal (background simulation) have the same unit as the LEIS experiment; the corresponding yield should be read on the left side axis. The numbers in the figure indicate the sequence of steps implemented in this study for simulating the sub-surface signal of a LEIS spectrum.
Below 1200 eV the LEIS signal starts to deviate from the simulation. This is attributed to the contribution of sputtered Si atoms to the ion signal. In the case of Si-on-W we expect that the maximum energy of sputtered Si atoms will be higher compared to the case of pure Si (described in the Appendix). This is due to the fact that the projectiles can backscatter on W and then create a Si recoil in a second collision. He projectiles backscattered on in-depth W have about 1.5 times more kinetic energy compared to scattering on Si (equation 2); this will produce higher-energy sputtered Si atoms.
Accurately modeling the LEIS sub-surface signal with the method described above is valuable since its shape provides information relevant to depth resolution and surface quantification. In the section 'Results and Discussion', we further investigate these two interesting features of the LEIS spectra.
## VI. Results and Discussion
### A. Influence of electronic straggling
In our model, we neglect the effect of electronic straggling on the LEIS spectrum. Therefore, any influence of electronic straggling is found in the residual error between the simulation and measured data.
We compare the simulations of three Si-on-W structures which present different thicknesses of the Si top film. When depositing Si on W, we expect to obtain a relatively sharp and stable interface [11]. Between the three samples, the width of the interface is expected to be constant as the deposition settings were kept constant for the three depositions. The surface roughness is expected to be sufficiently similar between the three samples, considering the amorphous structure of the silicon film. For a thicker film, the effect of electronic stopping and electronic straggling should be higher, therefore we expect to see an increasing error between simulation and data for increasing thickness of Si top-film.
To compare the results, we determine the relative error in the fit of the W signal at high energy, which corresponds to the signal coming from interfacial W. The results for the three structures are shown in figure 4. The corresponding values of the relative error at high energy are reported in table 2. The same trend holds if we consider the whole spectrum for the calculation of the relative error.
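For reference, the relative-error metric reported in table 2 (the integrated absolute residual above 1850 eV divided by the integrated experimental signal over the same window) can be written as in the following sketch; the array names are illustrative placeholders.

```python
# Relative error at high energy: area of |experiment - simulation| above 1850 eV
# divided by the area of the experimental spectrum over the same energy window.
import numpy as np

def relative_error_high_energy(energy_eV, experiment, simulation, e_min_eV=1850.0):
    m = energy_eV >= e_min_eV
    residual = np.trapz(np.abs(experiment[m] - simulation[m]), energy_eV[m])
    signal = np.trapz(experiment[m], energy_eV[m])
    return residual / signal
```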
Figure 4: LEIS experiment with 3 keV He\({}^{+}\) ions compared to the corresponding simulation for the different structures in a), b), and c). The interface is modeled as infinitely sharp in the simulation. The residual error is calculated as the difference between the experiment and simulation for each point of the spectrum. The highlighted area was used for the calculation of the relative error at high energy in table 2. From the fit to the dataset, it is clear that the error is larger for structures with a thicker top film.
We observe an increase in the residual error when comparing the signal coming from interfacial W for the three samples. In the experiments, the higher the depth of the interface, the broader the energy distribution of the ion beam that reaches such compositional change (straggling). The simulation of the LEIS sub-surface signal disregards the contribution of electronic straggling, therefore the residual error in the fit increases as a function of the depth of the interface.
For the structure 1.7 nm Si on W (figure 4a), the relative error in fitting the W signal at high energy is 4.0%. Note that the interface is modeled infinitely sharp in the simulation, therefore only part of the error is attributed to the effect of electronic straggling, while another part is caused by the finite interface width in the experiment.
Since we are interested in characterizing the interface width, if we reduce the thickness of the top film to the lowest possible value while still achieving a top film that fully covers the substrate, we are able to reduce the contribution of the error due to electronic straggling and therefore get a realistic model for the intrinsic part of the spectrum which is sensitive to the interface width.
### B. Qualitative comparison of interfaces
To investigate whether the method of comparing LEIS sub-surface signals with the corresponding simulations is sensitive to the interface width, we compare the relative error at high energy obtained from two different structures, Si on W and Si on Mo. We make use of two bilayer structures with a similarly thin top film, 1.7 nm for Si on W and 1.6 nm for Si on Mo. The residual error due to straggling is expected to be similar in the two structures.
For the structure Si-on-W, we obtained a relative error at high energy equal to 4%. When considering another structure, if we assume the surface roughness to be similar,
\begin{table}
\begin{tabular}{l l} \hline Sample & Relative error at \\ & high energy (\%) \\ \hline
1.7 nm Si on W & 4.0 \\
4.3 nm Si on W & 6.8 \\
6.0 nm Si on W & 14.8 \\
1.6 nm Si on Mo & 7.1 \\ \hline \end{tabular}
\end{table}
Table 2: Relative error in fitting the sub-surface signal at high energy for four structures. The interface is modeled as infinitely sharp in the simulations. The relative error at high energy is calculated from 1850 eV as the area under the absolute residual error divided by the corresponding area under the experimental LEIS spectra.
any larger variation between the experiment and simulation can be attributed to a broader interface. The result is shown in figure 5.
Comparing the interface signal for the two structures, the Si-on-Mo structure has a higher residual error, suggesting a broader interface. This is in agreement with what is predicted by previous studies on the two structures [10, 11] and by empirical rules based on atomic size difference, surface-energy difference, and mixing enthalpy developed by Chandrasekaran et al. [9]. From this qualitative analysis, the method of comparing LEIS sub-surface signals with the corresponding simulations appears sensitive to the interface width.
### C. Measurement of interfaces
To investigate whether it is possible to determine the width of a buried interface by comparing the experimental and simulated LEIS spectra, we focus on the structure Si-on-Mo which was investigated by the LEIS layer growth profile in the study [10].
We implement an interface layer in the TRBS simulations and study the relative error at high energy as a function of the thickness of the simulated interface. We increase the resolution of TRBS simulations by reducing the energy range to 1500-2700 eV. The corresponding energy resolution is a channel size of 6 eV.
Figure 5: LEIS experiment with 3keV He+ ions compared to the corresponding simulation for the structures 1.7nm Si on 20 nm W and 1.6 nm Si on 20 nm Mo. The interface is modeled as infinitely sharp in the simulation. The highlighted area was used to calculate the relative error at high energy in table 2. It is clear from the presented data that the error for Si-on-Mo is larger than for Si-on-W, indicating a larger interface width for the Si-on-Mo system.
As it is not known a priori what the best model to describe the interface is, we used two designs: a one-layer interface and a four-layer interface. From this, we test the sensitivity of the modeling to variations in the interface design. Figure 6 shows a sketch of how the interfaces are implemented in the simulations. The values of the parameters used for the simulations are listed in table 3. Figure 7 shows the relative fitting error as a function of the total thickness of the simulated interface for the two cases. Figure 8 shows the simulated spectra corresponding to the best fit for the two cases.
\begin{table}
\begin{tabular}{l l l l} \hline \multicolumn{4}{l}{One-layer interface} \\ \hline thickness (nm) & Composition & Density & ESC \\ & (\% of Si) & (g/cm3) & \\ \hline
1.6 nm \(-x\)/2 & 100 & 2.33 & 2.2 \\ \(x\) & 50 & 6.31 & 2.1 \\
20 nm & 0 & 10.28 & 1.9 \\ \hline \multicolumn{4}{l}{Four layers interface} \\ \hline thickness (nm) & Composition & Density & ESC \\ & (\% of Si) & (g/cm3) & \\ \hline
1.6 nm \(-x\)/2 & 100 & 2.3 & 2.2 \\ \(x\)/4 & 80 & 3.9 & 2.1 \\ \(x\)/4 & 60 & 5.5 & 2.1 \\ \(x\)/4 & 40 & 7.1 & 2.0 \\ \(x\)/4 & 20 & 8.7 & 2.0 \\
20 nm & 0 & 10.3 & 1.9 \\ \hline \end{tabular}
\end{table}
Table 3: Simulated layer stack (from top to bottom) where the interface is modeled as a single layer and as formed by four layers. The composition, density, and ESC of the interface layers were defined through linear interpolation between the Si and Mo values.
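The layer stacks of table 3 can be generated programmatically for any interface thickness x; the sketch below reproduces the one-layer (n = 1) and four-layer (n = 4) stacks by linear interpolation of Si fraction, density, and ESC between the bulk Si and Mo values, with the Si top film thinned by x/2. It is an illustration of the construction, not an input file for I-TRBS.

```python
# Build a Si-on-Mo layer stack with an n-sublayer interface of total thickness x_nm.
def layer_stack(x_nm, n_layers=4, si_top_nm=1.6, mo_nm=20.0):
    si = dict(frac=1.0, rho=2.33, esc=2.2)      # bulk Si values used in table 3
    mo = dict(frac=0.0, rho=10.28, esc=1.9)     # bulk Mo values used in table 3
    stack = [(si_top_nm - x_nm / 2, si["frac"], si["rho"], si["esc"])]
    for k in range(1, n_layers + 1):
        f = 1.0 - k / (n_layers + 1)            # Si fraction decreases with depth
        stack.append((x_nm / n_layers, f,
                      f * si["rho"] + (1 - f) * mo["rho"],    # interpolated density
                      f * si["esc"] + (1 - f) * mo["esc"]))   # interpolated ESC
    stack.append((mo_nm, mo["frac"], mo["rho"], mo["esc"]))
    return stack  # list of (thickness_nm, Si fraction, density g/cm3, ESC)
```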
Figure 6: Simulated layer stack, where the interface is modeled as a single layer (a) and as formed by four layers (b). The total thickness of the simulated interface, \(x\), was modeled for discrete values between 1.0 and 2.4 nm. For an interface thickness \(x\), the Si thickness was reduced by \(x\)/2, assuming the interface is allocated 50% inside the silicon film and 50% inside the Mo film. The simulation parameters used for each layer are reported in table 3.
Figure 7: Relative error at high energy as a function of the total thickness \(x\) of the simulated interface for 3 keV He ions on the structure 1.6 nm Si on 20 nm Mo. Two models were used for the interface as described in figure 6.
Figure 8: LEIS experiment with 3 keV He ions compared to the corresponding simulation for the structure 1.6 nm Si on 20 nm Mo. a) The interface was modeled as a single layer containing Si and Mo, as described in figure 6. b) The interface was modeled as formed by four layers, as described in figure 6. The residual error is calculated as the difference between experiment and simulation for each point of the spectrum. The highlighted area corresponds to the area of deviation in figure 5 and was, therefore, used to calculate the relative error at high energy reported in figure 7.
The four-layers model led to a minimum relative error of 4.2%. This is significantly smaller than the minimum relative error of 5.4% obtained by the one-layer model. Assuming a gradual compositional change in the structure, it is expected that the relative error decreases for an increasing number of layers in the model.
When the interface is modeled by four layers, the total thickness of the simulated interface yielding the best fit to the measured data is 2.0 nm (figure 7, purple). When the interface is modeled as one layer, the optimal value for the total thickness of the simulated interface is 1.2 nm (figure 7, green). Note that adding more steps in the simulated interface equals refining the fit towards a gradual compositional change which represents a realistic interface. Therefore, the total thickness of the simulated interface, x, increases as a function of the number of steps used for the simulation, as illustrated in the scheme in figure 9.
To retrieve the effective width \(\sigma\) (nm) of the two simulated interfaces, we fit them with an error function. With the one-layer model, we obtain \(\sigma\)=0.72 nm. With the four-layers model, we obtain \(\sigma\)=0.80 nm. Given the smaller relative error, the four-layers model is considered more accurate. However, it is important to notice that, even by modeling the interface with a single-layer, the difference in the final effective interface width is relatively small (0.08 nm).
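One way to extract the effective width \(\sigma\) is to sample the stepped Si-fraction profile of a simulated stack on a fine depth grid and fit an error-function profile, as sketched below. The parameterization of \(\sigma\) and the fitting details are assumptions of this illustration; the paper does not specify its exact procedure.

```python
# Fit an error-function concentration profile to a stepped layer stack
# (output of layer_stack above) and return the effective width sigma in nm.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def erf_profile(z, z0, sigma):
    return 0.5 * (1.0 - erf((z - z0) / (np.sqrt(2.0) * sigma)))  # Si fraction vs depth

def effective_width(stack, dz=0.01):
    depths, fracs, z = [], [], 0.0
    for thickness, frac_si, _rho, _esc in stack:     # sample the step profile
        n = max(int(round(thickness / dz)), 1)
        depths.extend(z + dz * np.arange(n))
        fracs.extend([frac_si] * n)
        z += thickness
    (z0, sigma), _ = curve_fit(erf_profile, np.array(depths), np.array(fracs), p0=[1.6, 0.5])
    return abs(sigma)
```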
Finally, the interface width measured with this method is likely to be an overestimate. This is due to the fact that straggling (acting like a smoothening factor) is not modeled by the uncorrected TRBS spectrum used for this study. For comparison, following the method described in [9], we extract the logistic function-like profile of the Si-on-Mo interface from the layer growth profile measured by Reinink et al. in the study [10]. The corresponding effective interface width is \(\sigma\)=0.79 nm.
### D. Silicon sub-surface signal
The LEIS sub-surface signal (also called background) obtained with this method simulates the signal from all projectiles that are reionized at the surface after experiencing backscattering and charge transfer phenomena inside the target. The simulations in figures 3, 4, and 5 clearly show a contribution of projectiles backscattered by silicon, the so-called silicon tail, in the energy range of silicon backscattering (below 1800 eV). In comparison, when the background of the LEIS spectrum is fitted with an error function or a polyline, there is no defined way to estimate the contribution of the silicon tail. This makes it difficult to establish a standard procedure to fit the background.
The background subtraction is a necessary step for the quantification of the area under surface peaks (surface quantification). We find that the simulation of the LEIS background with the described method might help in the process of establishing a standard procedure for background subtraction. However, a more detailed investigation is required for this purpose and that is beyond the scope of this paper.
Figure 9: Model of an error function-like concentration profile with a one-layer interface and a four-layers interface. The total thickness of the interface resulting from the one-layer interface model, \(x_{\nu}\) is smaller than the total thickness resulting from the four-layers interface model, \(x_{\lambda}\). When fitted with an error function, the two models lead to similar profiles.
## VII. Conclusions
The use of Low Energy Ion Scattering to quantitatively characterize buried interfaces was investigated. LEIS spectra contain depth-resolved information in the sub-surface signal. The latter can provide a relatively high yield when the structures are formed by heavy elements such as transition metals. In this study, we investigated structures of W/Si and Mo/Si thin films. The LEIS spectra provided qualitative information about the buried interfaces.
A methodology to assist the spectrum analysis with simulations has been explored. In the case of ultrathin films (\(<\)1.7nm) deposited on thick substrates, TRBS simulations can be used without a model for electronic straggling to simulate LEIS sub-surface signals. In the case of Si-on-W bi-layer structures, whose interface is expected to be relatively sharp, the relative error in fitting the sub-surface signal of the experimental spectra can be as low as 4%.
Excluding electronic straggling in the simulation leads to increasing residual error for increasing thickness of the top Si film. This result shows that models for electronic straggling should be depth dependent in the LEIS regime.
Simulations of LEIS sub-surface signals obtained by the presented method are sensitive to the interface width. For the structure Si-on-Mo, we obtained an optimal value for the interface width by introducing an interface layer of increasing width in the simulation. The resulting effective interface width of 0.8 nm \(\pm\) 0.08 nm is in good agreement with the value of 0.79 nm, measured from the layer growth profile obtained by Reinink et al. [10].
This approach extends the use of LEIS to the characterization of buried interfaces without the need for sputter profiling. Interfaces play such an important role in the performance of thin films, that enabling a highly accurate and non-destructive measurement inside the structure is extremely valuable. Extending the study to other material systems is necessary to further assess the reliability and accuracy of the method.
## Acknowledgments
This work has been carried out in the frame of the Industrial Partnership Program "X-tools," Project No. 741.018.301, funded by the Netherlands Organization for Scientific Research, ASML, Carl Zeiss SMT, and Malvern Panalytical. We
acknowledge support of the Industrial Focus Group XUV Optics at the MESA+ Institute for Nanotechnology at the University of Twente.
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Author Declarations
### Credits
This article has been accepted by the Journal of Vacuum Science & Technology A. After it is published, it will be found at Link.
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
**A. Valpreda**: Conceptualization (lead); Formal analysis (lead); Investigation (lead); Methodology (lead); Writing - original draft (lead); Writing - review & editing (lead). **A. Yakshin**: Conceptualization (supporting); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Writing - review & editing (supporting). **J. M. Sturm**: Conceptualization (supporting); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Writing - review & editing (supporting). **M. D. Ackermann**: Project administration (lead); Conceptualization (supporting); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Writing - review & editing (supporting).
## APPENDIX, Calculations of the maximum energy of Si sputtered atoms
We consider a system in which there is a collision cascade as formed by the following steps:
a. An incident projectile (1) of mass m\({}_{1}\) travels through the sample before the backscattering event. To estimate the case of maximum final energy we consider the minimum travel depth (and hence minimum stopping) of 3 Å.
b. The projectile (1) gets backscattered by a target atom (2) of mass m\({}_{2}\), which in this case is silicon. We assume that (2) is at rest before the collision.
c. The projectile (1) travels back after the backscattering.
d. The projectile (1) kicks out a Si atom (3) at the surface.
These steps can be described as follows.
The energy after the free path a. can be calculated as
\[E_{a}=E_{0}-S\,d_{a}\]
where E\({}_{0}\) is the primary energy of the ions, S is the stopping power calculated with the SRIM software, S = 4.9 eV/Å, and d\({}_{a}\) is the distance travelled in the free path a., d\({}_{a}\) = 3 Å.
The energy after the backscattering event b. can be calculated as
\[E_{b}=E_{a}\left(\frac{\cos\theta+\left[(m_{2}/m_{1})^{2}-\sin^{2}\theta\right]^{1/2}}{1+m_{2}/m_{1}}\right)^{2}\]
where \(\theta\) is the angle between the incoming trajectory and the outgoing trajectory as defined in Figure 10, m\({}_{1}\) is the mass of the projectile (He) and m\({}_{2}\) is the mass of the target Si atom.
Figure 10: Geometry of the collision cascade. We define the angle \(\alpha\) as the angle between the trajectory of the backscattered He and the trajectory of Si after ejection. Note that the angle \(\alpha\) can vary for a given final direction of the Si particle because multiple combinations of the backscattering angle \(\theta\) and angle \(\alpha\) can result in a Si
particle ejected in the direction of the detector. E\({}_{0}\), E\({}_{a}\), E\({}_{b}\), E\({}_{c}\) indicate the He kinetic energy at several stages in the collision cascade, while E\({}_{d}\) indicates the energy of the ejected Si atom.
The energy after the second free path c. can be calculated as
\[E_{c}=E_{b}-S\frac{d_{a}}{\cos\left(180^{\circ}-\theta\right)}\]
The energy of the ejected Si atom after the sputtering event d. can be calculated as
\[E_{d}=E_{c}\left(\frac{4m_{1}m_{2}\cos^{2}\alpha}{(m_{1}+m_{2})^{2}}\right)\]
Equation 4 is described in [27] as equation 8.
We, therefore, get a formula for the energy of the sputtered Si atom as a function of \(\alpha\), the angle between the trajectory of the backscattered He and the trajectory of the Si atom after ejection. We plot the final energy E\({}_{d}\) as a function of \(\alpha\), for combinations of values of \(\theta\) and \(\alpha\) that lead to ejection of a Si atom in the direction of the analyser (i.e. at an angle of 145\({}^{\circ}\) with respect to the incoming He ion), and read the maximum value of E\({}_{d}\). Figure 11 shows the plot for the case of primary energy E\({}_{a}\) = 3 keV. The corresponding maximum energy of sputtered silicon is E\({}_{\text{max}}\) = 771 eV. This calculation was also performed for the primary energies E\({}_{a}\) = 4 keV, 5 keV, and 6 keV. The resulting maximum energies of sputtered silicon are reported in Table 6.
Figure 11: Final energy E\({}_{d}\) of a Si particle ejected at a total angle of 145\({}^{\circ}\) with respect to the incoming He ion, as a function of the angle \(\alpha\) in the case of primary energy E\({}_{a}\)=3 keV.
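The chain of equations above is straightforward to evaluate numerically. The following Python sketch (not part of the paper) chains steps a.–d. and scans over \(\alpha\); the exact masses, the assumed planar relation \(\theta=145^{\circ}+\alpha\) between the backscattering and ejection angles, and the scan range are illustrative assumptions, so the printed maximum is only indicative of the order of magnitude rather than a reproduction of Table 6.

```python
import numpy as np

M_HE = 4.0026    # He projectile mass (amu)
M_SI = 28.0855   # Si target mass (amu)
S = 4.9          # electronic stopping from SRIM (eV/Angstrom), as quoted above
D_A = 3.0        # minimum inward travel depth (Angstrom)

def sputtered_si_energy(e0, theta_deg, alpha_deg):
    """Follow steps a.-d. of the collision cascade for one (theta, alpha) pair."""
    e_a = e0 - S * D_A                                   # a. inward free path
    t = np.radians(theta_deg)
    k_factor = ((np.cos(t) + np.sqrt((M_SI / M_HE) ** 2 - np.sin(t) ** 2))
                / (1 + M_SI / M_HE)) ** 2
    e_b = e_a * k_factor                                 # b. backscattering off Si
    e_c = e_b - S * D_A / np.cos(np.radians(180.0 - theta_deg))  # c. outward path
    a = np.radians(alpha_deg)
    return e_c * 4 * M_HE * M_SI * np.cos(a) ** 2 / (M_HE + M_SI) ** 2  # d. sputtering

# Keep the Si ejection direction at 145 degrees to the incoming beam and scan alpha;
# theta = 145 deg + alpha is an assumed planar geometry, not taken from the paper.
alphas = np.linspace(0.0, 30.0, 301)
e_d = np.array([sputtered_si_energy(3000.0, 145.0 + a, a) for a in alphas])
print(f"estimated maximum E_d ~ {e_d.max():.0f} eV for E_0 = 3 keV")
```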
|
2304.02990
|
Representations on canonical models of generalized Fermat curves and
their syzygies
|
We study canonical models of $\left(\mathbb{Z}/k\mathbb{Z}\right)^n$- covers
of the projective line, tamely ramified at exactly $n+1$ points each of index
$k$, when $k,n\geq 2$ and the characteristic of the ground field $K$ is either
zero or does not divide $k$. We determine explicitly the structure of the
respective homogeneous coordinate ring first as a graded $K$-algebra, next as a
$\left(\mathbb{Z}/k\mathbb{Z}\right)^n$- representation over $K$, and then as a
graded module over the polynomial ring; in the latter case, we give generators
for its first syzygy module, which we also decompose as a direct sum of
irreducible representations.
|
Kostas Karagiannis
|
2023-04-06T10:59:26Z
|
http://arxiv.org/abs/2304.02990v1
|
# Representations on canonical models of generalized Fermat curves and their syzygies
###### Abstract.
We study canonical models of \((\mathbb{Z}/k\mathbb{Z})^{n}\)- covers of the projective line, tamely ramified at exactly \(n+1\) points each of index \(k\), when \(k,n\geq 2\) and the characteristic of the ground field \(K\) is either zero or does not divide \(k\). We determine explicitly the structure of the respective homogeneous coordinate ring first as a graded \(K\)-algebra, next as a \((\mathbb{Z}/k\mathbb{Z})^{n}\)- representation over \(K\), and then as a graded module over the polynomial ring; in the latter case, we give generators for its first syzygy module, which we also decompose as a direct sum of irreducible representations.
## 1. Introduction
### Generalized Humbert and Fermat curves
At the turn of the previous century, motivated by the theory of elliptic integrals and multiply periodic functions, G. Humbert [10] and H.F. Baker [1] independently studied pencils of complex projective curves of genus \(5\) that arise as ramified coverings of the projective line, are branched at \(5\) distinct points of order \(2\), and are stable under the action of \(\left(\mathbb{Z}/2\mathbb{Z}\right)^{4}\). Humbert's suggestion to seek possible generalizations naturally led to covers of the projective line with a fixed number of distinct branch points, all of which share the same ramification index.
This was formalized in [1] and [1], where the authors considered for each pair of integers \(k,n\geq 2\), complex projective curves \(F_{k,n}\) that admit an action of \(\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}\) with quotient isomorphic to the projective line and branch locus consisting of \(n+1\) distinct points, each of order \(k\). For \((k,n)=(2,4)\) one retrieves classic Humbert curves, while for \(k=2\) and arbitrary \(n\) the authors call \(F_{2,n}\) a generalized Humbert curve. If \(k\) is allowed to vary arbitrarily and \(n=2\) is fixed, one obtains plane curves branched at three distinct points, which up to a suitable Mobius transformation can be assumed to be \(\left\{0,1,\infty\right\}\). This property characterizes classic Fermat curves, which are defined by equations of the form \(x^{k}+y^{k}+z^{k}=0\); this motivated using the term generalized Fermat curve for \(F_{k,n}\).
Since their introduction 15 years ago, generalized Fermat curves have been thoroughly studied by several mathematicians, led by R. Hidalgo and his collaborators. We mention indicatively results on Fuchsian and Schottky uniformizations [1], moduli theory [1], Jacobian varieties [1] and [21], automorphism groups [10], and relations to Ihara's theory [12] of Braid representations of absolute Galois groups [13]. Of the latest developments, we single out [14], in which, among other results, Hidalgo finds a basis for the global sections of the sheaf \(\Omega_{F_{k,n}}\) of regular \(1\)-differentials, then proceeds to study a particular projective embedding of \(F_{k,n}\).
These two results motivated the present paper, which treats the case of higher order differentials \(\Omega_{F_{k,n}}^{\otimes m}\), the induced canonical embedding of the curve, and takes into account the action of \(\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}\). The case of Fermat curves \(F_{k,2}\) was recently established by the author in [11]; the current work can be viewed as a simultaneous generalization of this paper for arbitrary \(n\) and of Hidalgo's paper for arbitrary \(m\). Apart from the particular interest to generalized Fermat curves discussed above, motivation also comes from the broader theme of homogeneous coordinate rings, which we briefly present in the next subsection.
### Homogeneous coordinate rings
Let \(K\) be an algebraically closed field of arbitrary characteristic. Consider a triple \(\left(X,G,\mathscr{L}\right)\), where \(X\) is a smooth projective curve over \(K\), \(G\) is a finite subgroup of \(\operatorname{Aut}_{K}\left(X\right)\) and \(\mathscr{L}\) is a very ample \(G\)-equivariant line bundle. The reader should always keep in mind the triple \(\left(F_{k,n},\left(\mathbb{Z}/k\mathbb{Z}\right)^{n},\Omega_{F_{k,n}}\right)\) discussed in the previous section as a toy example for what will follow.
Assuming that \(X\) is not hyperelliptic and has genus \(g\) which is at least \(3\), one has a projective embedding \(X\hookrightarrow\mathbb{P}_{K}\left(H^{0}\left(X,\mathscr{L}\right)\right)\), which by construction must be \(G\)-equivariant. The homogeneous coordinate ring of
\(X\) relative to this embedding, denoted by \(S_{X,\mathscr{L}}\), can be endowed with several algebraic structures, which reflect the geometric and representation-theoretic properties of the triple \((X,G,\mathscr{L})\).
1. It carries the structure of a graded \(K\)-algebra, since \(S_{X,\mathscr{L}}=\bigoplus_{m\geq 0}H^{0}\left(X,\mathscr{L}^{\otimes m}\right)\).
2. The action of \(G\) extends to linear representations on each \(H^{0}\left(X,\mathscr{L}^{\otimes m}\right)\), which respect the grading, thus making \(S_{X,\mathscr{L}}\) into a graded \(KG\)-module.
3. The projective embedding induces a graded ring homomorphism from \(S=\operatorname{Sym}\left(H^{0}\left(X,\mathscr{L}\right)\right)\) to \(S_{X,\mathscr{L}}\), endowing the latter with the structure of a graded module over the former.
4. The actions of \(G\) on \(S\) and \(S_{X,\mathscr{L}}\) satisfy the necessary compatibility properties so that \(S_{X,\mathscr{L}}\) becomes a graded \(SG\)-module.
The primary goal of this paper is to demonstrate that one can study all four structures simultaneously, starting with (1) and working up to (4) as follows. First, construct explicit bases over \(K\) for the graded pieces \(H^{0}\left(X,\mathscr{L}^{\otimes m}\right)\). Proceed with decomposing \(S_{X,\mathscr{L}}\) as a direct sum of indecomposable \(KG\)-modules, graded-piece-by-graded-piece. Then describe the kernel of the structure map \(S\to S_{X,\mathscr{L}}\) and resolve \(S_{X,\mathscr{L}}\) to get its graded syzygies and Betti numbers. Finally, decompose the syzygy modules into direct sums of indecomposables.
It is worth mentioning here that the problem of determining each structure has been extensively studied in the literature in varying levels of generality, and has motivated much of the interaction between arithmetic geometry and representation theory in the recent decades. The bibliography is vast, and we restrict to mentioning a small subset of the relevant results. Determining (2) is referred to as the Galois module structure problem for sheaf cohomology, which can be traced back to Hecke [10]. The case \(\operatorname{char}(K)\nmid[G]\) was settled in [1], generalizing work of Chevalley and Weil [12]; however, the formulas are given in terms of the local monodromy at ramification points which is often hard to compute, as demonstrated in our work [13]. The case of wild ramification remains open and there exist only partial results, see for example [14], [15], [16] and [17]. The problem falls into the general context of equivariant Euler characteristics [18]. Historically, the starting point for (3) is K. Petri's analysis [19] of the classic theorem by M. Noether and Enriques [15]. Generating sets of different flavors for the first syzygy module have been computed; the approach taken here builds on previous work of the author [13], closely related to that of [13] and [16]. The most explicit result on higher syzygies is Schreyer's algorithm [15], and much more is known about the Betti table of \(S_{X,\Omega_{X}}\), see [14, 12] and [20]. The de facto central problem of the area has been M. Green's syzygy conjecture, settled by Voisin in characteristic \(0\) in [18], [18], then generalized in [20]; the topic remains an active area of research. Group actions on resolutions of graded modules over polynomial rings, the general framework for (4), have been studied in relation to Boij-Soderberg theory [12], Veronese subalgebras [21], and permutations of monomial ideals [22], [23]. For an overview of computational flavor, see [11]. To the author's knowledge, the case of homogeneous coordinate rings has been minimally explored. The only relevant reference is [10].
In conclusion, the different structures have been mostly treated separately so far and it is our hope to unify these themes under the same framework and explore possible connections. The author believes that this can also shed light on the generalized version of Oort's conjecture on lifting curves with automorphisms [19], settled for cyclic groups in [19] and [20], but still open in its full generality.
### Outline
In Section 2 we review the basic properties of generalized Fermat curves from [16]. The main ingredients are the presentation of their automorphisms in eq. (1), their description as fiber products of classic Fermat curves in eq. (3) and the basis for the global sections of regular \(1\)-differentials in eq. (4). We then proceed with our study of differentials of higher order. First in Section 3 as vector spaces over the ground field (Theorem 1) then as representations (Theorem 2). In Section 4 we consider their direct sum as a graded module over a polynomial ring by invoking the classic result of M. Noether and Enriques. By using a variant of the techniques of [13] and [13], we prove in Theorem 5 that the first syzygy module is generated by a collection of binomials and trinomials, defined in eq. (10) and eq. (11) respectively. Finally, we extend the group action both on the said polynomial ring and on the syzygy module and decompose them into direct sums of irreducible representations in Proposition 8 and Corollary 9 respectively.
### Acknowledgments
The author would like to thank Ioannis Tsouknidas for bringing Hidalgo's paper to their attention and for suggesting to apply the techniques of [13] to prove Theorem 5, and Aristides
Kontogeorgis for helpful comments on early versions of this paper. This research was supported by EPSRC grant no. EP/V036017/1.
## 2. Preliminaries on Generalized Fermat Curves
Let \(K\) be an algebraically closed field of characteristic \(p\geq 0\), and let \(k,n\geq 2\) be integers such that \((k-1)(n-1)>1\) and \(p\nmid k\). _A generalized Fermat curve of type \((k,n)\)_ is a non-singular projective algebraic curve \(F_{k,n}\) over \(K\) admitting a group of automorphisms \(H\) isomorphic to \(\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}\) such that the quotient \(F_{k,n}/H\) is the projective line \(\mathbb{P}^{1}_{K}\) with \(n+1\) distinct branch points, each one of order \(k\). For \(n=2\), one retrieves the definition of the classic Fermat curve, given by the vanishing of \(X^{k}_{0}+X^{k}_{1}+X^{k}_{2}=0\) in \(\mathbb{P}^{2}_{K}\).
To make things more explicit, consider the complete intersection in \(\mathbb{P}^{n}_{K}\) given by the fiber product of \((n-1)\)-many classic Fermat curves
\[C^{k}_{\lambda_{1},\dots,\lambda_{n-1}}:\left\{\lambda_{i}X^{k}_{0}+X^{k}_{1 }+X^{k}_{i+1}=0:1\leq i\leq n-1\right\}\subset\mathbb{P}^{n}_{K},\]
where \(\lambda_{1}=1\) and \(\lambda_{i}\in K-\{0,1,\infty\}\) are pairwise distinct for \(2\leq i\leq n-1\). Each classic Fermat curve above is branched at \(0,\infty\) and \(\lambda_{i}\) for \(2\leq i\leq n\); thus, by setting \(\lambda_{0}=0\) and \(\lambda_{n}=\infty\), the branch locus \(\{\lambda_{i}:0\leq i\leq n\}\) of \(C^{k}_{\lambda_{1},\dots,\lambda_{n-1}}\) consists of \(n+1\) distinct points. Further, if \(\zeta=\zeta_{k}\) is a primitive \(k\)-th root of unity and \(H\) is the group generated by the automorphisms
\[\{\phi_{j}:0\leq j\leq n\}\text{ where }\phi_{j}X_{i}=\zeta^{\delta_{ij}}X_{i}, \tag{1}\]
then one has that \(\phi_{0}\cdots\phi_{n}=1\) and so \(H\cong\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}\). Thus, \(C^{k}_{\lambda_{1},\dots,\lambda_{n-1}}\) is a generalized Fermat curve of type \((k,n)\); its genus can be directly computed from the Riemann-Hurwitz formula
\[g_{k,n}=|H|\left(g_{\mathbb{P}^{1}_{K}}-1\right)+1+\frac{|H|}{2}\sum_{i=0}^{n}\left(1-\frac{1}{k}\right)=1+\frac{k^{n-1}}{2}\left[(k-1)(n-1)-2\right]. \tag{2}\]
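As a quick sanity check of eq. (2), the classic Humbert case \((k,n)=(2,4)\) gives
\[g_{2,4}=1+\frac{2^{3}}{2}\left[(2-1)(4-1)-2\right]=1+4=5,\]
in agreement with the genus-\(5\) curves recalled in the introduction.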
The converse statement, that any generalized Fermat curve of type \((k,n)\) arises in the above manner, was first proved for \(K=\mathbb{C}\) in [1, Theorem 5], then extended to arbitrary \(K\) in [1, Theorem 2.2]. In what follows, we identify \(F_{k,n}\) with the complete intersection \(C^{k}_{\lambda_{1},\dots,\lambda_{n-1}}\). We shall be working with the affine model obtained by setting \(x=X_{1}/X_{0}\) and \(y_{i}=X_{i}/X_{0}\) for \(2\leq i\leq n\),
\[F_{k,n}:\left\{\begin{array}{rl}1+x^{k}+y^{k}_{2}&=0\\ \lambda_{2}+x^{k}+y^{k}_{3}&=0\\ \vdots\\ \lambda_{n-1}+x^{k}+y^{k}_{n}&=0\end{array}\right\}. \tag{3}\]
Let \(\Omega_{F_{k,n}}\) denote the canonical sheaf on \(F_{k,n}\). In [1, Theorem 3.1], the author proves that the collection
\[\mathscr{B}_{k,n}=\left\{\frac{x^{r}dx}{y^{a_{2}}_{2}y^{a_{3}}_{3}\dots y^{a_ {n}}_{n}}:0\leq a_{i}\leq k-1\text{ for all }2\leq i\leq n,\text{ and }0\leq r\leq\sum_{i=2}^{n}a_{i}-2\right\} \tag{4}\]
forms a \(K\)-basis for the vector space of global sections \(H^{0}\left(F_{k,n},\Omega_{F_{k,n}}\right)\). Assuming that \((n-1)(k-1)>2\), one has by [1, Theorem 4] that \(F_{k,n}\) is not hyperelliptic and thus, the choice of basis \(\mathscr{B}_{k,n}\) above gives rise to an embedding \(F_{k,n}\hookrightarrow\mathbb{P}^{g_{k,n}-1}_{K}\). These results constitute the starting point of our analysis.
## 3. Holomorphic \(m\)-differentials
For \(m\geq 1\), let \(\Omega_{F_{k,n}}^{\otimes m}\) be the sheaf of regular \(m\)-differentials on \(F_{k,n}\). The global sections \(V_{m}=H^{0}(F_{k,n},\Omega_{F_{k,n}}^{\otimes m})\) form a vector space over the ground field \(K\) of dimension
\[d_{m}=\dim_{K}V_{m}=\begin{cases}g_{k,n}&,\text{ if }m=1\\ (2m-1)(g_{k,n}-1)&,\text{ if }m\geq 2.\end{cases} \tag{5}\]
To describe an explicit basis for each \(V_{m}\), consider the _meromorphic_ differentials
\[\theta^{(m)}_{r,\mathbf{a}}=\frac{x^{r}dx^{\otimes m}}{y^{a_{2}}_{2}y^{a_{3}} _{3}\dots y^{a_{n}}_{n}},\text{ where }(r,\mathbf{a})=(r,a_{2},\dots,a_{n})\in\mathbb{Z}^{n}. \tag{6}\]
The divisors \(\operatorname{div}(\theta_{r,\mathbf{a}}^{(m)})\) can be computed using a similar argument to that used in [10, §3.2]: for each \(0\leq j\leq n\), let \(\phi_{j}\) be the automorphism defined in eq. (1), let \(\{P_{j\ell}:1\leq\ell\leq k^{n-1}\}\) denote its set of fixed points and set \(D_{j}=\sum_{\ell=1}^{k^{n-1}}P_{j,\ell}\) to be the corresponding divisor. We then have that
\[\operatorname{div}(x)=-D_{0}+D_{1},\quad\operatorname{div}(y_{j})=-D_{0}+D_{j},\quad\operatorname{div}(dx)=-2D_{0}+\sum_{i=2}^{n}(k-1)D_{i},\]
and thus, denoting the sum \(\sum_{i=2}^{n}a_{i}\) by \(|\mathbf{a}|\), we get
\[\operatorname{div}(\theta_{r,\mathbf{a}}^{(m)})=\left(|\mathbf{a}|-2m-r\right)D_{0}+rD_{1}+\sum_{i=2}^{n}[m(k-1)-a_{i}]D_{i}. \tag{7}\]
We are now ready to prove the following.
**Theorem 1**.: _For \(m\geq 1\), let_
\[I_{k,n}^{(m)}=\left\{(r,\mathbf{a})\in\mathbb{Z}^{n}:(m-1)(k-1)\leq a_{i}\leq m (k-1)\text{ for all }2\leq i\leq n,\text{ and }0\leq r\leq|\mathbf{a}|-2m\right\}.\]
_Then the collection_
\[\mathscr{B}_{k,n}^{(m)}=\left\{\theta_{r,\mathbf{a}}^{(m)}=\frac{x^{r}dx^{ \otimes m}}{y_{2}^{a_{2}}y_{3}^{a_{3}}\dots y_{n}^{a_{n}}}:(r,\mathbf{a})\in I _{k,n}^{(m)}\right\}\]
_is a basis for \(V_{m}=H^{0}\left(F_{k,n},\Omega_{F_{k,n}}^{\otimes m}\right)\)._
Proof.: By eq. (7), the differentials in \(\mathscr{B}_{k,n}^{(m)}\) are holomorphic. It thus suffices to show that the cardinality of \(I_{k,n}^{(m)}\) equals \(d_{m}\), as given in eq. (5). We proceed by induction on \(m\).
For \(m=1\) we retrieve the collection \(\mathscr{B}_{k,n}\) of eq. (4) and the result follows from [10, Theorem 3.1]. Assume that \(|I_{k,n}^{(m)}|=(2m-1)(g_{k,n}-1)\) and consider the set
\[I_{k,n}^{(m+1)}=\left\{(r,\mathbf{a})\in\mathbb{Z}^{n}:m(k-1)\leq a_{j}\leq(m +1)(k-1)\text{ and }0\leq r\leq|\mathbf{a}|-2m-2\right\}.\]
For \(2\leq j\leq n\) set \(b_{j}=a_{j}-(k-1)\), so that \(|\mathbf{b}|=|\mathbf{a}|-(n-1)(k-1)\). Then
\[I_{k,n}^{(m+1)}=\left\{(r,\mathbf{b})\in\mathbb{Z}^{n}:(m-1)(k-1)\leq b_{j} \leq m(k-1)\text{ and }0\leq r\leq|\mathbf{b}|+(n-1)(k-1)-2m-2\right\}\]
can be written as the disjoint union of the two sets
\[\left\{(r,\mathbf{b})\in\mathbb{Z}^{n}:(m-1)(k-1)\leq b_{j}\leq m(k-1)\text{ and }0\leq r\leq|\mathbf{b}|-2m\right\}\]
and
\[\left\{(r,\mathbf{b})\in\mathbb{Z}^{n}:(m-1)(k-1)\leq b_{j}\leq m(k-1)\text{ and }|\mathbf{b}|-2m+1\leq r\leq|\mathbf{b}|+(n-1)(k-1)-2m-2\right\}.\]
The former equals \(I_{k,n}^{(m)}\), so it has cardinality equal to \((2m-1)(g_{k,n}-1)\) by the inductive hypothesis. For the cardinality of the latter, note that the inequalities \((m-1)(k-1)\leq b_{j}\leq m(k-1)\) are satisfied by \(k^{n-1}\) tuples \(\mathbf{b}\), and that each such tuple gives rise to exactly \((n-1)(k-1)-2\) values of \(r\). Thus
\[|I_{k,n}^{(m+1)}|=(2m-1)(g_{k,n}-1)+k^{n-1}\left[(k-1)(n-1)-2\right]=(2m-1)(g_ {k,n}-1)+2(g_{k,n}-1)=(2m+1)(g_{k,n}-1)\]
completing the proof.
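The cardinality count in Theorem 1 can also be confirmed by brute force for small parameters. The short Python script below (an illustrative check, not part of the paper) enumerates \(I_{k,n}^{(m)}\) directly and compares its size with \(d_{m}\) from eq. (5).

```python
from itertools import product

def genus(k, n):
    # eq. (2): g = 1 + (k^(n-1)/2) * [(k-1)(n-1) - 2]
    return 1 + (k ** (n - 1) * ((k - 1) * (n - 1) - 2)) // 2

def size_I(k, n, m):
    # |I_{k,n}^{(m)}|: (m-1)(k-1) <= a_i <= m(k-1), 0 <= r <= |a| - 2m
    count = 0
    for a in product(range((m - 1) * (k - 1), m * (k - 1) + 1), repeat=n - 1):
        count += max(0, sum(a) - 2 * m + 1)
    return count

for k, n in [(3, 3), (2, 4), (4, 2), (3, 4)]:
    g = genus(k, n)
    for m in range(1, 5):
        expected = g if m == 1 else (2 * m - 1) * (g - 1)   # eq. (5)
        assert size_I(k, n, m) == expected, (k, n, m)
print("cardinalities agree with eq. (5) for all tested (k, n, m)")
```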
We proceed with studying the action of \(H\cong\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}\) on \(V_{m}\). To be consistent with the indexing notation introduced in eq. (6), we denote the elements of \(\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}\) by \((e_{1},\mathbf{e})\), where \(\mathbf{e}=(e_{2},\dots,e_{n})\), and the corresponding elements of \(H\) by \(\sigma_{(e_{1},\mathbf{e})}\). Recall that \(H\) acts on \(F_{k,n}\) via \(\sigma_{(e_{1},\mathbf{e})}(x,y_{2},\dots,y_{n})=(\zeta^{e_{1}}x,\zeta^{e_{2}}y _{2},\dots,\zeta^{e_{n}}y_{n})\), and this induces an action on the basis \(\mathscr{B}_{k,n}^{(m)}\) of \(V_{m}\) via
\[\sigma_{(e_{1},\mathbf{e})}\theta_{r,\mathbf{a}}^{(m)}=\frac{\left(\zeta^{e_{1} }x\right)^{r}d\left(\zeta^{e_{1}}x\right)^{\otimes m}}{\left(\zeta^{e_{2}}y_{2 }\right)^{a_{2}}\left(\zeta^{e_{3}}y_{3}\right)^{a_{3}}\dots\left(\zeta^{e_{n} }y_{n}\right)^{a_{n}}}=\zeta^{e_{1}(r+m)-\sum_{j=2}^{n}a_{j}e_{j}}\theta_{r, \mathbf{a}}^{(m)}=\zeta^{e_{1}(r+m)-\mathbf{a}\cdot\mathbf{e}}\theta_{r, \mathbf{a}}^{(m)}, \tag{8}\]
where we write \(\mathbf{a}\cdot\mathbf{e}\) for the sum \(\sum_{j=2}^{n}a_{j}e_{j}\). Thus, the character of this representation is
\[\chi_{V_{m}}:H\to K,\ \sigma_{(e_{1},\mathbf{e})}\mapsto\sum_{(r,\mathbf{a})\in I _{k,n}^{(m)}}\zeta^{e_{1}(r+m)-\mathbf{a}\cdot\mathbf{e}}. \tag{9}\]
We identify the irreducible representations of \(H\) with its group of irreducible characters
\[\mathscr{X}(H)=\left\{\chi_{(h_{1},\mathbf{h})}:H\to K,\;\sigma_{(e_{1},\mathbf{e} )}\mapsto\zeta^{h_{1}e_{1}+\mathbf{h}\cdot\mathbf{e}}\;|\;(h_{1},\mathbf{h})\in \left(\mathbb{Z}/k\mathbb{Z}\right)^{n}\right\},\]
and write \(V_{m}=\bigoplus_{(h_{1},\mathbf{h})\in\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}} \nu_{m,(h_{1},\mathbf{h})}\chi_{(h_{1},\mathbf{h})}\) for the decomposition of \(V_{m}\) into a direct sum of irreducibles.
**Theorem 2**.: \(\nu_{m,(h_{1},\mathbf{h})}=(n-1)(m-1)-\left\lceil\frac{|(h_{1},\mathbf{h})|+m}{ k}\right\rceil-\sum_{j=2}^{n}\left\lfloor\frac{m-1-h_{j}}{k}\right\rfloor+1.\)__
Proof.: Using eq. (9) and the fact that \(\nu_{m,(h_{1},\mathbf{h})}=\langle\chi_{V_{m}},\chi_{(h_{1},\mathbf{h})}\rangle\), we obtain
\[\nu_{m,(h_{1},\mathbf{h})}=\frac{1}{|H|}\sum_{(e_{1},\mathbf{e})\in\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}}\chi_{V_{m}}\left(\sigma_{(e_{1},\mathbf{e})}\right)\overline{\chi_{(h_{1},\mathbf{h})}\left(\sigma_{(e_{1},\mathbf{e})}\right)}=\frac{1}{k^{n}}\sum_{(e_{1},\mathbf{e})\in\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}\atop(r,\mathbf{a})\in I_{k,n}^{(m)}}\zeta^{e_{1}(r+m-h_{1})-(\mathbf{a}+\mathbf{h})\cdot\mathbf{e}}.\]
For a fixed value \((r,\mathbf{a})\in I_{k,n}^{(m)}\) we have
\[\sum_{(e_{1},\mathbf{e})\in\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}}\zeta^{e_{ 1}(r+m-h_{1})-(\mathbf{a}+\mathbf{h})\cdot\mathbf{e}}=\begin{cases}k^{n}&,\; \text{if}\;\left(r+m-h_{1},\mathbf{a}+\mathbf{h}\right)\equiv\mathbf{0}\bmod k \\ 0&,\;\text{otherwise},\end{cases}\]
and thus \(\nu_{m,(h_{1},\mathbf{h})}\) is equal to the cardinality of \(\left\{(r,\mathbf{a})\in I_{k,n}^{(m)}:(r+m-h_{1},\mathbf{a}+\mathbf{h})\equiv \mathbf{0}\bmod k\right\}\). Spelled out, one needs to count the number of tuples \((r,\mathbf{a})\in\mathbb{Z}^{n}\) that satisfy the relations
\[\left\{\begin{array}{cc}(m-1)(k-1)\leq a_{j}\leq m(k-1)&\text{and}&r+m\equiv h _{1}\bmod k\\ 0\leq r\leq|\mathbf{a}|-2m&\text{and}&a_{j}\equiv-h_{j}\bmod k\end{array} \right\},\]
or equivalently, setting \(b_{j}=a_{j}-(m-1)(k-1)\), the relations
\[\left\{\begin{array}{cc}0\leq b_{j}\leq k-1&\text{and}&r+m\equiv h_{1} \bmod k\\ 0\leq r\leq|\mathbf{b}|+(n-1)(m-1)(k-1)-2m&\text{and}&b_{j}\equiv m-1-h_{j} \bmod k.\end{array}\right\}.\]
There is exactly one tuple \(\mathbf{b}\) that satisfies \(0\leq b_{j}\leq k-1\) and \(b_{j}\equiv m-1-h_{j}\bmod k\), namely
\[b_{j}=m-1-h_{j}-\left\lfloor\frac{m-1-h_{j}}{k}\right\rfloor k.\]
Write \(r+m=qk+h_{1}\) for the division of \(r+m\) by \(k\) and substitute into the inequality to be satisfied by \(r\)
\[0\leq qk+h_{1}-m\leq(n-1)(m-1)-|\mathbf{h}|-\sum_{j=2}^{n}\left\lfloor\frac{m -1-h_{j}}{k}\right\rfloor k+(n-1)(m-1)(k-1)-2m\]
\[\Leftrightarrow 0\leq qk\leq(n-1)(m-1)k-|(h_{1},\mathbf{h})|-m-\sum_{j=2}^{n} \left\lfloor\frac{m-1-h_{j}}{k}\right\rfloor k\]
\[\Leftrightarrow 0\leq q\leq(n-1)(m-1)-\left\lceil\frac{|(h_{1},\mathbf{h})|+m}{k} \right\rceil-\sum_{j=2}^{n}\left\lfloor\frac{m-1-h_{j}}{k}\right\rfloor\]
and the result follows.
## 4. The canonical ring
Let \(S_{F_{k,n}}\) be the direct sum of the \(K\)-vector spaces \(V_{m}=H^{0}(F_{k,n},\Omega_{F_{k,n}}^{\otimes m})\) for \(m\geq 0\). It is equipped with the structure of a graded ring, with multiplication defined as
\[V_{m}\times V_{m^{\prime}}\to V_{m+m^{\prime}},\;\left(fdx^{\otimes m},gdx^{ \otimes m^{\prime}}\right)\mapsto fgdx^{\otimes(m+m^{\prime})}.\]
Let \(S\) be the symmetric algebra \(\operatorname{Sym}\left(V_{1}\right)\); using Theorem 1, we identify \(S\) with the polynomial ring with variables \(\left\{z_{r,\mathbf{a}}:(r,\mathbf{a})\in I_{k,n}^{(1)}\right\}\), indexed by the elements of
\[I_{k,n}^{(1)}=\left\{(r,\mathbf{a})\in\mathbb{Z}^{n}:0\leq a_{j}\leq k-1\text{ and }0\leq r\leq|a|-2\right\}.\]
The assignment \(z_{r,\mathbf{a}}\mapsto\theta_{r,\mathbf{a}}^{(1)}\) can be naturally extended to a homogeneous homomorphism of graded rings \(\phi:S\to S_{F_{k,n}}\), which endows \(S_{F_{k,n}}\) with the structure of a graded \(S\)-module. Assuming that \((n-1)(k-1)>2\), one has by [1, Theorem 4] that \(F_{k,n}\) is not hyperelliptic. Thus, we may invoke the following classic result of M. Noether, Enriques and Petri, see[1].
**Theorem 3**.: _The canonical map \(\phi:S\to S_{F_{k,n}}\) is surjective. If \(F_{k,n}\) is neither trigonal nor a plane quintic, then the kernel of \(\phi\) is generated by homogeneous elements of degree \(2\)._
In what follows, we assume that \(F_{k,n}\) is neither trigonal nor a plane quintic and proceed with the description of a generating set for \(\ker\phi\) in degree \(2\), by considering the \(K\)-linear map
\[\phi_{2}:S_{2}\twoheadrightarrow V_{2},\ z_{r,\mathbf{a}}z_{s,\mathbf{b}} \mapsto\theta_{r+s,\mathbf{a}+\mathbf{b}}^{(2)}.\]
Notice that \(\phi\left(z_{r,\mathbf{a}}z_{s,\mathbf{b}}\right)=\phi\left(z_{t,\mathbf{c}}z _{u,\mathbf{d}}\right)\Leftrightarrow(r+s,\mathbf{a}+\mathbf{b})=(t+u, \mathbf{c}+\mathbf{d})\) and so
\[\mathscr{G}_{\mathrm{bi}}=\left\{z_{r,\mathbf{a}}z_{s,\mathbf{b}}-z_{t, \mathbf{c}}z_{u,\mathbf{d}}:(r+s,\mathbf{a}+\mathbf{b})=(t+u,\mathbf{c}+ \mathbf{d})\right\}\subseteq\ker\phi. \tag{10}\]
**Remark 4**.: To give a combinatorial interpretation of the above, recall that in degree \(1\) we have a bijection between the variables of \(S\), the basis of \(V_{1}\) and the points of \(I_{k,n}^{(1)}\). In degree \(2\), there is a similar correspondence, which however fails to be a bijection. By Theorem 1, the basis of \(V_{2}\) is in bijection with
\[I_{k,n}^{(2)}=\left\{(r,\mathbf{a})\in\mathbb{Z}^{n}:k-1\leq a_{j}\leq 2(k-1) \text{ and }0\leq r\leq|a|-4\right\}.\]
The images of the monomials in \(S_{2}\) under \(\phi\) correspond to points in the Minkowski sum
\[I_{k,n}^{(1)}+I_{k,n}^{(1)}=\left\{(r,\mathbf{a})+(s,\mathbf{b}):(r,\mathbf{a} ),(s,\mathbf{b})\in I_{k,n}^{(1)}\right\}=\left\{(r,\mathbf{a})\in\mathbb{Z}^ {n}:0\leq a_{j}\leq 2(k-1)\text{ and }0\leq r\leq|a|-4\right\},\]
which strictly contains \(I_{k,n}^{(2)}\). Two monomials correspond to the same point if and only they give rise to a binomial in \(\mathscr{G}_{\mathrm{bi}}\).
To obtain the remaining generators, for \(1\leq i\leq n-1\) let \(\mathbf{k}(i)\) be the vector \((0,\ldots,k,\ldots,0)\in\mathbb{Z}^{n-1}\) whose \(i\)-th coordinate equals \(k\) and all others are \(0\). Let \((r,\mathbf{a})\) be a point in \(I_{k,n}^{(1)}+I_{k,n}^{(1)}\) such that \((r+k,\mathbf{a})\in I_{k,n}^{(1)}+I_{k,n}^{(1)}\) and \((r,\mathbf{a}-\mathbf{k}(i))\in I_{k,n}^{(1)}+I_{k,n}^{(1)}\) for exactly one \(1\leq i\leq n-1\). These three points give rise to a relation in \(V_{2}\)
\[\lambda_{i}\theta_{r,\mathbf{a}}^{(2)}+\theta_{r+k,\mathbf{a}}^{(2)}+\theta_{r,\mathbf{a}-\mathbf{k}(i)}^{(2)}=\theta_{r,\mathbf{a}}^{(2)}\left(\lambda_{i}+x^{k}+y_{i+1}^{k}\right)\]
which is zero by eq. (3). The preimage of that relation under \(\phi\) defines a subset of \(\ker\phi\), and so
\[\mathscr{G}_{\mathrm{tri}}=\left\{\lambda_{i}z_{r,\mathbf{a}}z_{s,\mathbf{b}} +z_{t,\mathbf{c}}z_{u,\mathbf{d}}+z_{v,\mathbf{e}}z_{w,\mathbf{f}}:\begin{array} []{l}t+u=r+s+k,\ \mathbf{c}+\mathbf{d}=\mathbf{a}+\mathbf{b}\\ v+w=r+s,\ \mathbf{e}+\mathbf{f}=\mathbf{a}+\mathbf{b}-\mathbf{k}(i)\end{array},\ 1\leq i\leq n-1\right\}\subseteq\ker\phi. \tag{11}\]
**Theorem 5**.: \(\ker\phi\) _is generated by \(\mathscr{G}_{\mathrm{bi}}\cup\mathscr{G}_{\mathrm{tri}}\)._
The proof of Theorem 5 will be given using a variant of the techniques introduced in [1], [1] and [1]. To this end, we recall some facts of Grobner theory from [10, §1]. Let \(\prec\) be a term order on the monomials of \(S\). For \(f\in S\), let \(\mathrm{in}_{\prec}(f)\) be the initial term of \(f\) with respect to \(\prec\); for an ideal \(\mathfrak{a}\) of \(S\) and a subset \(\mathscr{G}\subseteq\mathfrak{a}\), let \(\mathrm{in}_{\prec}\left(\mathfrak{a}\right)=\left\langle\mathrm{in}_{\prec}(f):f\in\mathfrak{a}\right\rangle\) and \(\mathrm{in}_{\prec}\left(\mathscr{G}\right)=\left\{\mathrm{in}_{\prec}(f):f\in\mathscr{G}\right\}\). In general \(\left\langle\mathrm{in}_{\prec}\left(\mathscr{G}\right)\right\rangle\subseteq\mathrm{in}_{\prec}\left(\left\langle\mathscr{G}\right\rangle\right)\), with equality if and only if \(\mathscr{G}\) is a Grobner basis for \(\left\langle\mathscr{G}\right\rangle\). A monomial is called _standard_ with respect to \(\mathfrak{a}\) if it does not lie in \(\mathrm{in}_{\prec}\left(\mathfrak{a}\right)\), and standard monomials form a \(K\)-basis for \(S/\mathfrak{a}\). In the case of interest to this paper, when \(\mathfrak{a}=\ker\phi\) and \(\mathscr{G}=\mathscr{G}_{\mathrm{bi}}\cup\mathscr{G}_{\mathrm{tri}}\), we will identify the standard monomials with a subset of \(I_{k,n}^{(1)}+I_{k,n}^{(1)}\), in the philosophy of Remark 4. For convenience purposes we fix a term order, even though the proof is independent of the choice, see also Remark 7. In the definition below, we extend the notation \(\mathbf{a}=(a_{2},\ldots,a_{n})\in\mathbb{Z}^{n-1}\) to account for collections of elements of \(\mathbb{Z}^{n-1}\), which will be denoted by \(\mathbf{a}_{1}=(a_{1,2},\ldots,a_{1,n})\), \(\mathbf{a}_{2}=(a_{2,2},\ldots,a_{2,n})\) and so on.
**Definition 6**.: Let \(\prec\) be the term order on the monomials of \(S\) defined as:
\[z_{r_{1},\mathbf{a}_{1}}z_{r_{2},\mathbf{a}_{2}}\cdots z_{r_{d},\mathbf{a}_{d}}\prec z_{s_{1},\mathbf{b}_{1}}z_{s_{2},\mathbf{b}_{2}}\cdots z_{s_{d^{\prime}},\mathbf{b}_{d^{\prime}}}\text{ if and only if}\]
* \(d<d^{\prime}\) or
* \(d=d^{\prime}\) and \(\sum r_{i}>\sum s_{i}\) or
* \(d=d^{\prime}\) and \(\sum r_{i}=\sum s_{i}\) and \(\sum_{i}a_{i,2}<\sum_{i}b_{i,2}\) or
* \(d=d^{\prime}\) and \(\sum r_{i}=\sum s_{i}\) and \(\sum_{i}a_{i,2}=\sum_{i}b_{i,2}\) and \(\sum_{i}a_{i,3}<\sum_{i}b_{i,3}\) or
\(\vdots\)
* \(d=d^{\prime}\) and \(\sum r_{i}=\sum s_{i}\) and \(\sum_{i}a_{i,j}=\sum_{i}b_{i,j}\) for all \(2\leq j\leq n\) and \[z_{r_{1},\mathbf{a}_{1}}z_{r_{2},\mathbf{a}_{2}}\cdots z_{r_{d},\mathbf{a}_{d}}<z_{s_{1},\mathbf{b}_{1}}z_{s_{2},\mathbf{b}_{2}}\cdots z_{s_{d^{\prime}},\mathbf{b}_{d^{\prime}}}\text{ lexicographically}.\]
Proof of Theorem 5.: By construction, \(\mathscr{G}=\mathscr{G}_{\mathrm{bi}}\cup\mathscr{G}_{\mathrm{tri}}\subseteq\ker\phi\), see equations (10) and (11). By Theorem 3, the inclusion \(\ker\phi\subseteq\left\langle\mathscr{G}\right\rangle\) can be checked in degree \(2\); we will show that \(\left(S/\langle\mathscr{G}\rangle\right)_{2}\) is a \(K\)-subspace of \(\left(S/\ker\phi\right)_{2}\). By Grobner theory, referring again indicatively to [11, §1], we have that
\[\dim_{K}\left(S/\langle\mathscr{G}\rangle\right)_{2}=\dim_{K}\left(S/\mathrm{ in}_{\prec}\langle\mathscr{G}\rangle\right)_{2}\leq\dim_{K}\left(S/\langle \mathrm{in}_{\prec}\left(\mathscr{G}\right)\rangle\right)_{2}=\left|\mathbb{T }^{2}\setminus\mathrm{in}_{\prec}\left(\mathscr{G}\right)\right|, \tag{12}\]
where \(\mathbb{T}^{2}\) denotes the set of monomials of degree \(2\) in \(S\). To obtain an upper bound for \(\left|\mathbb{T}^{2}\setminus\mathrm{in}_{\prec}\left(\mathscr{G}\right)\right|\), we consider the map of sets
\[\tau:I_{k,n}^{(1)}+I_{k,n}^{(1)}\rightarrow\mathbb{T}^{2},\;(r,\mathbf{a}) \mapsto\min_{\prec}\left\{z_{s,\mathbf{b}}z_{t,\mathbf{c}}\in\mathbb{T}^{2}: \left(s+t,\mathbf{b}+\mathbf{c}\right)=(r,\mathbf{a})\right\},\]
which is well-defined, \(1\)-\(1\) and has image equal to \(\mathbb{T}^{2}\setminus\mathrm{in}_{\prec}\left(\mathscr{G}_{\mathrm{bi}}\right)\). Thus \(\mathbb{T}^{2}\setminus\mathrm{in}_{\prec}\left(\mathscr{G}_{\mathrm{bi}}\right)\) is in bijection with \(I_{k,n}^{(1)}+I_{k,n}^{(1)}\). For \(2\leq i\leq n\), consider the subsets of \(I_{k,n}^{(1)}+I_{k,n}^{(1)}\) defined as
\[C_{i}=\left\{(r,\mathbf{a})\in I_{k,n}^{(1)}+I_{k,n}^{(1)}:(r+k,\mathbf{a})\in I _{k,n}^{(1)}+I_{k,n}^{(1)}\text{ and }(r,\mathbf{a}-\mathbf{k}(i))\in I_{k,n}^{(1)}+I_{k,n}^{(1)} \right\}. \tag{13}\]
The three monomials \(\tau\left(r,\mathbf{a}\right),\;\tau\left(r+k,\mathbf{a}\right)\) and \(\tau\left(r,\mathbf{a}-\mathbf{k}(i)\right)\) give rise to an element of \(\mathscr{G}_{\mathrm{tri}}\)
\[\lambda_{i}\tau\left(r,\mathbf{a}\right)+\tau\left(r+k,\mathbf{a}\right)+\tau \left(r,\mathbf{a}-\mathbf{k}(i)\right),\]
whose initial term is by construction \(\tau\left(r,\mathbf{a}\right)\). Thus \(\left|\bigcup_{i=2}^{n}C_{i}\right|\leq\left|\mathrm{in}_{\prec}\left( \mathscr{G}_{\mathrm{tri}}\right)\right|\), and so
\[\left|\mathbb{T}^{2}\setminus\mathrm{in}_{\prec}\left(\mathscr{G}\right) \right|\leq|(I_{k,n}^{(1)}+I_{k,n}^{(1)})\setminus\bigcup_{i=2}^{n}C_{i}|=|(I_{ k,n}^{(1)}+I_{k,n}^{(1)})\bigcap_{i=2}^{n}\overline{C}_{i}|.\]
Using the defining inequalities of \(I_{k,n}^{(1)}+I_{k,n}^{(1)}\), we observe that
\[C_{i}=\left\{(r,\mathbf{a})\in\mathbb{Z}^{n}:0\leq a_{j}\leq 2k-2\text{ for }j\neq i, \;k\leq a_{i}\leq 2k-2\text{ and }0\leq r\leq|\mathbf{a}|-(k+4)\right\}.\]
Setting \(b_{i}=a_{i}-k\) and \(b_{j}=a_{j}\) for \(i\neq j\) yields
\[C_{i}=\left\{(r,\mathbf{b})\in\mathbb{Z}^{n}:0\leq b_{j}\leq 2k-2\text{ for }j\neq i, \;0\leq b_{i}\leq k-2\text{ and }0\leq r\leq|\mathbf{b}|-4\right\}\]
and so
\[\left(I_{k,n}^{(1)}+I_{k,n}^{(1)}\right)\bigcap_{i=2}^{n}\overline{C}_{i}= \left\{(r,\mathbf{b})\in\mathbb{Z}^{n}:k-1\leq b_{i}\leq 2(k-1)\text{ for }2\leq i\leq n-1\text{ and }0\leq r\leq|\mathbf{b}|-4 \right\}=I_{k,n}^{(2)}.\]
Thus \(\left|\mathbb{T}^{2}\setminus\mathrm{in}_{\prec}\left(\mathscr{G}\right) \right|\leq\left|I_{k,n}^{(2)}\right|\). Theorem 1 and the first part of Theorem 3 then imply that
\[\left|\mathbb{T}^{2}\setminus\mathrm{in}_{\prec}\left(\mathscr{G}\right) \right|\leq\left|I_{k,n}^{(2)}\right|=\dim_{k}V_{2}=\dim_{K}\left(S/\ker\phi \right)_{2},\]
and so by eq. (12), \(\left(S/\langle\mathscr{G}\rangle\right)_{2}\) is a \(K\)-subspace of \(\left(S/\ker\phi\right)_{2}\), as requested.
**Remark 7**.: The term order of Definition 6 was only used to argue that \(\tau\) maps the sets \(C_{i}\) of eq. (13) into \(\mathrm{in}_{\prec}\left(\mathscr{G}_{\mathrm{tri}}\right)\), which follows since the monomial \(\tau\left(r,\mathbf{a}\right)\) is greater than both \(\tau\left(r+k,\mathbf{a}\right)\) and \(\tau\left(r,\mathbf{a}-\mathbf{k}(i)\right)\) with respect to \(\prec\). Had we refrained from making a choice, we could have given a case-by-case proof: if for example \(\tau\left(r+k,\mathbf{a}\right)\) were maximal among the three monomials we would replace \(C_{i}\) by the set
\[\left\{(r,\mathbf{a})\in I_{k,n}^{(1)}+I_{k,n}^{(1)}:(r-k,\mathbf{a})\in I_{k,n}^ {(1)}+I_{k,n}^{(1)}\text{ and }(r-k,\mathbf{a}-\mathbf{k}(i))\in I_{k,n}^{(1)}+I_{k,n}^{(1)}\right\}\]
which has the same cardinality as \(C_{i}\), and similarly for the third case.
The action of \(H\) on \(V_{1}\) gives rise to a natural action on \(S=\mathrm{Sym}(V_{1})\) which respects the grading: if \(\mathfrak{m}\) is a monomial of degree \(d\), and \(\sigma_{\left(e_{1},\mathbf{e}\right)}\in H\) is the automorphism determined by \(\left(e_{1},\mathbf{e}\right)\in\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}\), then \(\sigma_{\left(e_{1},\mathbf{e}\right)}\mathfrak{m}=\zeta^{e_{1}\left(r+d\right) -\mathbf{a}\cdot\mathbf{e}}\mathfrak{m}\), where \((r,\mathbf{a})\) is the sum of the indices corresponding to the variables that divide \(\mathfrak{m}\).
This leads us to consider, for each \(d\geq 1\), the \(d\)-fold Minkowski sum of \(I_{k,n}^{(1)}\) with itself
\[d\cdot I_{k,n}^{(1)}=\underbrace{I_{k,n}^{(1)}+I_{k,n}^{(1)}+\cdots+I_{k,n}^{(1)}}_ {d\text{-times}}.\]
For each point \((r,\mathbf{a})\in d\cdot I_{k,n}^{(1)}\), let \(S_{d,r,\mathbf{a}}\) be the \(k\)-span of
\[\left\{z_{r_{1},\mathbf{a}_{1}}z_{r_{2},\mathbf{a}_{2}}\cdots z_{r_{d},\mathbf{ a}_{d}}\in S_{d}\mid\;\sum_{i=1}^{d}\left(r_{i},\mathbf{a}_{i}\right)=(r, \mathbf{a})\right\},\]
i.e. the set of monomials \(\mathfrak{m}\) of degree \(d\) such that the sum of the indices corresponding to the variables that divide \(\mathfrak{m}\) equals \((r,\mathbf{a})\). The above construction gives rise to a direct sum decomposition
\[S=\bigoplus_{d=0}^{\infty}S_{d}=\bigoplus_{d=0}^{\infty}\bigoplus_{(r, \mathbf{a})\in d\cdot I_{k,n}^{(1)}}S_{d,r,\mathbf{a}}, \tag{14}\]
which determines the structure of \(S\) as a \(KG\)-module.
**Proposition 8**.: _The multiplicity \(\mu_{d,(h_{1},\mathbf{h})}\) of the irreducible representation \(\chi_{(h_{1},\mathbf{h})}\) of \(H\) in the decomposition of \(S_{d}\) in a direct sum of irreducible representations is given by_
\[\mu_{d,(h_{1},\mathbf{h})}=\sum_{(r,\mathbf{a})\in J_{h_{1},\mathbf{h}}^{(d)} }\dim_{K}\left(S_{d,r,\mathbf{a}}\right),\text{ where }J_{h_{1},\mathbf{h}}^{(d)}= \left\{(r,\mathbf{a})\in d\cdot I_{k,n}^{(1)}\;\mid\;(r+d,-\mathbf{a})\equiv(h _{1},\mathbf{h})\bmod k\right\}.\]
Proof.: An automorphism \(\sigma_{(e_{1},\mathbf{e})}\in H\) acts on the elements of \(S_{d,r,\mathbf{a}}\) by the scalar \(\zeta^{e_{1}(r+d)-\mathbf{a}\cdot\mathbf{e}}\) and thus \(S_{d,r,\mathbf{a}}\) is isomorphic, as a \(KG\)-module, to a direct sum of \(\dim_{K}\left(S_{d,r,\mathbf{a}}\right)\) copies of the irreducible representation determined by the class of \((r+d,-\mathbf{a})\in\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}\). Hence,
\[S_{d}=\bigoplus_{(r,\mathbf{a})\in d\cdot I_{k,n}^{(1)}}\dim_{K}\left(S_{d,r, \mathbf{a}}\right)\chi_{(r+d,-\mathbf{a})}\]
and the result follows since two points \((r,\mathbf{a}),(s,\mathbf{b})\in d\cdot I_{k,n}^{(1)}\) determine the same irreducible representation if and only if \((r,\mathbf{a})\equiv(s,\mathbf{b})\bmod k\).
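For small parameters the multiplicities \(\mu_{d,(h_{1},\mathbf{h})}\) can be tabulated by brute force, which makes the statement easy to experiment with. The snippet below (an illustrative check, not part of the paper) enumerates the degree-\(d\) monomials of \(S\), reads off the class of \((r+d,-\mathbf{a})\) modulo \(k\) for each, and tallies the counts.

```python
from itertools import combinations_with_replacement, product
from math import comb

def I1(k, n):
    # I_{k,n}^{(1)}: tuples (r, a_2, ..., a_n) with 0 <= a_j <= k-1 and 0 <= r <= |a| - 2
    return [(r,) + a
            for a in product(range(k), repeat=n - 1)
            for r in range(sum(a) - 1)]

def multiplicities(k, n, d):
    variables = I1(k, n)
    mu = {}
    for mono in combinations_with_replacement(variables, d):
        idx = tuple(sum(col) for col in zip(*mono))                    # summed index (r, a)
        char = ((idx[0] + d) % k,) + tuple((-c) % k for c in idx[1:])  # class of (r+d, -a)
        mu[char] = mu.get(char, 0) + 1
    return mu

k, n, d = 3, 3, 2
mu = multiplicities(k, n, d)
g = len(I1(k, n))                              # number of variables of S, i.e. dim V_1
assert sum(mu.values()) == comb(g + d - 1, d)  # total equals dim S_d
print(sorted(mu.items()))
```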
Finally, we extend the action of \(H\) to the ideal \(\ker\phi\) of Theorem 3. This can be done either by comparing the action of \(H\) on \(S\) to that on \(S_{F_{k,n}}\) and deducing that the canonical map \(\phi\) is \(H\)-equivariant, or by directly verifying that for any generator \(f\in\mathscr{G}_{\mathrm{bi}}\cup\mathscr{G}_{\mathrm{tri}}\) we have \(\sigma_{(e_{1},\mathbf{e})}f=\zeta^{e_{1}(r+2)-\mathbf{a}\cdot\mathbf{e}}f\). Further, the assumption that \(\mathrm{char}(K)\nmid|H|\) allows us to invoke Maschke's theorem to conclude that the sequence \(0\to\ker\phi\to S\to S_{F_{k,n}}\to 0\) of Theorem 3 is a split short exact sequence of \(KG\)-modules.
**Corollary 9**.: _The multiplicity of the irreducible representation \(\chi_{(h_{1},\mathbf{h})}\) of \(H\) in the decomposition of \(\left(\ker\phi\right)_{d}\) in a direct sum of irreducible representations is given by \(\mu_{d,(h_{1},\mathbf{h})}-\nu_{d,(h_{1},\mathbf{h})}\), where \(\mu_{d,(h_{1},\mathbf{h})}\) is as in Proposition 8 and \(\nu_{d,(h_{1},\mathbf{h})}\) is as in Theorem 2._
We conclude our analysis with some comments on the relationship of our techniques with algebraic combinatorics and combinatorial commutative algebra. This has a twofold purpose: first to present some future directions and open problems, and second to justify why the formulas for \(\mu_{d,(h_{1},\mathbf{h})}\) in Proposition 8 are not as concrete as the respective formulas for \(\nu_{d,(h_{1},\mathbf{h})}\) given in Theorem 2.
1. For \(d=1,\,\dim_{K}\left(S_{1,r,\mathbf{a}}\right)=1\) for all \((r,\mathbf{a})\in I_{k,n}^{(1)}\), and so \(\mu_{1,(h_{1},\mathbf{h})}=\nu_{1,(h_{1},\mathbf{h})}\), reflecting the fact that \(\ker\phi\) has no generators in degree \(1\). For \(d\geq 1,\,\dim_{K}S_{d,r,\mathbf{a}}\) can be interpreted as the number of partitions of the vector \((r,\mathbf{a})\in\mathbb{Z}^{n}\) into \(d\)-many parts, such that each summand lies in \(I_{k,n}^{(1)}\). The enumeration of such _multipartite partitions_ is a classic, albeit difficult, problem in algebraic combinatorics, usually approached via the theory of generating functions, see [20]. The requirement that the summands must be in \(I_{k,n}^{(1)}\) poses an additional obstruction to obtaining explicit formulas.
2. The standard generating functions used in invariant theory of polynomial rings are equivariant Hilbert-Poincare series, see for example [20] or [21, §3]. The most well-known and concrete result is Molien's formula, which expresses the generating function of the sequence \(\{\mu_{d,(h_{1},\mathbf{h})}\}_{d=0}^{\infty}\) as a sum of rational functions. Applying this to our case, even when \(\chi_{(h_{1},\mathbf{h})}\) is the trivial representation, does not produce a formula more illuminating than the ones obtained in Proposition 8.
3. For yet another class of relevant formal power series, we mention Ehrhart theory [21, §12], which addresses the problem of enumerating lattice points in dilations of convex polytopes. In our context, these dilations correspond to the sets \(d\cdot I_{k,n}^{(1)}\), but the desired stratification by the sets \(J_{h_{1},\mathbf{h}}^{(d)}\) in Proposition 8 would require a version of Ehrhart theory modulo \(k\). It is also worth mentioning that there is a connection with the so-called integer decomposition property of polytopes, see [14].
In any case, the potential interplay between the three types of generating functions in the context of the present paper is something that we deem worth exploring in future work.
1. The indexing of the indeterminates \(z_{r,\mathbf{a}}\) of the polynomial ring \(S\) by the elements of \(I_{k,n}^{(1)}\) can be interpreted as a multigrading of \(S\) by the abelian group \(\mathbb{Z}^{n+1}\): the multidegree of a monomial \(\mathfrak{m}=z_{r_{1},\mathbf{a}_{1}}\cdots z_{r_{d},\mathbf{a}_{d}}\) is \(\operatorname{mdeg}\left(\mathfrak{m}\right)=(d,\sum r_{i},\sum\mathbf{a}_{i })\in\mathbb{Z}\times d\cdot I_{k,n}^{(1)}\subset\mathbb{Z}^{n+1}\). The decomposition of eq. (14) thus gives rise to a multigraded Hilbert function via the assignment \((d,r,\mathbf{a})\mapsto\dim_{k}S_{d,r,\mathbf{a}}\), which is a refinement of the classic Hilbert function \(d\mapsto\binom{d+g-1}{d}\) of the polynomial ring. A similar interpretation can be given for the multiplicities of the irreducible summands of the degree \(d\) pieces of \(S\), \(S_{F_{k,n}}\) and \(\ker\phi\) as \(KH\)-modules. This leads towards the theory of invariant and multigraded Hilbert schemes, see [11] and [10] respectively; a natural question would be to seek for defining equations for the Hilbert schemes parametrizing ideals with the above multigraded Hilbert functions.
|
2306.04540
|
NeMO: Neural Map Growing System for Spatiotemporal Fusion in
Bird's-Eye-View and BDD-Map Benchmark
|
Vision-centric Bird's-Eye View (BEV) representation is essential for
autonomous driving systems (ADS). Multi-frame temporal fusion which leverages
historical information has been demonstrated to provide more comprehensive
perception results. While most research focuses on ego-centric maps of fixed
settings, long-range local map generation remains less explored. This work
outlines a new paradigm, named NeMO, for generating local maps through the
utilization of a readable and writable big map, a learning-based fusion module,
and an interaction mechanism between the two. With an assumption that the
feature distribution of all BEV grids follows an identical pattern, we adopt a
shared-weight neural network for all grids to update the big map. This paradigm
supports the fusion of longer time series and the generation of long-range BEV
local maps. Furthermore, we release BDD-Map, a BDD100K-based dataset
incorporating map element annotations, including lane lines, boundaries, and
pedestrian crossing. Experiments on the NuScenes and BDD-Map datasets
demonstrate that NeMO outperforms state-of-the-art map segmentation methods. We
also provide a new scene-level BEV map evaluation setting along with the
corresponding baseline for a more comprehensive comparison.
|
Xi Zhu, Xiya Cao, Zhiwei Dong, Caifa Zhou, Qiangbo Liu, Wei Li, Yongliang Wang
|
2023-06-07T15:46:15Z
|
http://arxiv.org/abs/2306.04540v1
|
# NeMO: Neural Map Growing System for Spatiotemporal Fusion in Bird's-Eye-View
###### Abstract
Vision-centric Bird's-Eye View (BEV) representation is essential for autonomous driving systems (ADS). Multi-frame temporal fusion which leverages historical information has been demonstrated to provide more comprehensive perception results. While most research focuses on ego-centric maps of fixed settings, long-range local map generation remains less explored. This work outlines a new paradigm, named NeMO, for generating local maps through the utilization of a readable and writable big map, a learning-based fusion module, and an interaction mechanism between the two. With an assumption that the feature distribution of all BEV grids follows an identical pattern, we adopt a shared-weight neural network for all grids to update the big map. This paradigm supports the fusion of longer time series and the generation of long-range BEV local maps. Furthermore, we release BDD-Map, a BDD100K-based dataset incorporating map element annotations, including lane lines, boundaries, and pedestrian crossing. Experiments on the NuScenes and BDD-Map datasets demonstrate that NeMO outperforms state-of-the-art map segmentation methods. We also provide a new scene-level BEV map evaluation setting along with the corresponding baseline for a more comprehensive comparison.
## 1 Introduction
In the realm of autonomous driving, the ability to perceive and comprehend the surrounding environment is of utmost importance. The Bird's-Eye-View (BEV) representation is particularly desirable for its ability to accurately display the spatial placement of objects and road elements in a three-dimensional space [36; 21; 5; 17; 41; 3]. Many vision-based BEV perception studies [2; 19; 30; 15; 26; 12] have shown significant progress in recent years.
Besides utilizing multi-view information [19; 15], vision-based BEV perception also taps into the potential of time-series images. Leveraging time-series data can effectively address challenges such as visual occlusion and visual illusions, particularly for static elements like road elements. Researchers have explored temporal fusion via warping or query-based approaches [16; 22; 28; 40; 20], demonstrating that temporal fusion improves perception in challenging environments such as occluded circumstances. However, these works are limited to the ego-centric setting and do not consider global perception of the environment over the traveled distance.
In this paper, we present a Neural Map grOwing system, **NeMO**, that can digest image sequences, unravel the details of a journey, and produce a comprehensive long-range local map of the environment.
navigation. In comparison with the BEV maps generated with existing methods, a long-range local map usually has a larger spatial area, requiring fusion of more frames.
Expanding the spatial size of the BEV plane using current state-of-the-art spatio-temporal fusion methods [20, 28] can be a straightforward approach. However, it is not practical or easily extendable, because increased computational costs accompany the expansion of the local map. In [19], an alternative approach is proposed where BEV feature maps for related frames are generated and then aligned onto a fixed BEV space based on ego poses to create a local map, followed by max-pooling for overlapping areas. While this approach is universal and is able to fuse any number of frames, two issues still remain unsettled. First, max-pooling may not eliminate false detections, and its performance could suffer if the BEV perception results are poor. The second issue is that the accuracy of this approach depends on precise ego poses.
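For concreteness, the pose-based alignment with max-pooling described above can be sketched as follows; the grid resolution, map extent, and nearest-neighbour warping are illustrative assumptions rather than the implementation of [19].

```python
import numpy as np

def paste_frame(big_map, frame_scores, ego_pose, resolution=0.15):
    """Warp one ego-centric BEV score map into a fixed global grid, max-pooling overlaps.

    big_map:      (H, W, C) global score map, updated in place
    frame_scores: (h, w, C) ego-centric BEV semantic scores for one frame
    ego_pose:     (x, y, yaw) of the ego vehicle in the global frame
    """
    h, w, _ = frame_scores.shape
    x0, y0, yaw = ego_pose
    cos_y, sin_y = np.cos(yaw), np.sin(yaw)
    # metric coordinates of every ego-frame BEV grid centre
    xs = (np.arange(w) - w / 2) * resolution
    ys = (np.arange(h) - h / 2) * resolution
    gx, gy = np.meshgrid(xs, ys)
    # rigid transform into the global frame, then nearest global grid index
    wx = x0 + cos_y * gx - sin_y * gy
    wy = y0 + sin_y * gx + cos_y * gy
    cols = np.round(wx / resolution).astype(int) + big_map.shape[1] // 2
    rows = np.round(wy / resolution).astype(int) + big_map.shape[0] // 2
    valid = (rows >= 0) & (rows < big_map.shape[0]) & (cols >= 0) & (cols < big_map.shape[1])
    # max-pool overlapping grids in place
    np.maximum.at(big_map, (rows[valid], cols[valid]), frame_scores[valid])
```

Both issues raised above are visible in this sketch: the element-wise maximum keeps whatever score is largest, including false positives, and any error in `ego_pose` directly shifts the target indices of every grid.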
To address these issues, NeMO utilizes a BEV-grid-based local map generation paradigm to fuse frames and construct a long-range local map simultaneously. In NeMO, we create a readable and writable BEV feature grid map for feature storage and extraction, a coarse-to-fine spatial matching module to sample and match features in BEV at different timestamp, along with a homogenous grid fusion network to identify and preserve the most valuable features. The read- and writ-able BEV feature grid map (referred to as "the big feature map" in this paper) represents a wider ranged area and stores BEV features, enabling extraction and updating of historical BEV grids whenever required. The system works by initially retrieving BEV features from the current frame. Concurrently, a coarse matching method, is used to sample historical BEV features from the big feature map. To enhance the matching of current and historical features, a finer local spatial matching process is performed. Assuming complete descriptors are present in independent grids, we design a homogeneous grid fusion network to merge the grid-based features. Finally, the corresponding portion of grids in the big feature map is updated with the new fused features. The matching and fusion operations interact in real-time with the big feature map, resulting in an iterative process that enables continuous growth of the neural map. The proposed paradigm offers several advantages. First, it enables the generation of long-range local maps from an arbitrary number of frames at a small and consistent computational cost. Features maintained in the big map integrate all previous information rather than a fixed number of frames. The grid-based coarse-to-fine spatial matching technique mitigates the impact of pose noise, resulting in improved accuracy. The homogenous grid fusion is able to capture, enhance, and update critical information within a grid effectively. It is worth mentioning that NeMO can accommodate a broad spectrum of inputs as it is compatible with any BEV features. A concurrent work3 NMP[37] proposes a similar approach like NeMO for better online BEV inference with global map as prior information storage, and we provide a brief comparison in Section 2.
Footnote 3: The first version of the current work was submitted to NeurIPS on 2023-05-17. Shortly after that, we became aware of Xiong et al.[37] CVPR submission which was posted on arxiv on 2023-04-17.
We validate NeMO system on NuScenes [4] and BDD100K [39] datasets. Notably, the latter was acquired using smartphones with reduced accuracy in pose information. To supplement the BDD100K dataset with BEV annotations, we provide annotation tools and use the same annotation style as NuScenes [4] for three categories (lane line, pedestrian crossing, and boundary). The annotated dataset, named BDD-Map, consists of 446 scenes and 426,476 frames.
Our approach demonstrates exceptional performance on the NuScenes dataset, improving upon HDMapNet [19] by a large margin. Additionally, our approach shows greater accuracy than the current state-of-the-art temporal BEV perception method BEVerse [40], highlighting the flexibility and effectiveness of our system. Furthermore, we provide a benchmark for scene-level local map generation for both datasets.
Figure 1: NeMO system overview.
## 2 Related Work
**BEV lane segmentation map construction.** The conversion of static road elements in Perspective View (PV) to Bird's-Eye-View (BEV) can be broadly classified into geometry-based and learning-based approaches. The former utilizes the physical principles underlying the geometric projection relationship between PV and BEV, while the latter employs data-driven approaches that involve the use of learnable neural networks for mapping. The pioneering geometry-based approach is homography-based IPM (Inverse Perspective Mapping [24]), which inversely maps PV information onto the BEV plane utilizing homography matrices under a flat-ground assumption [9; 30; 9; 25; 6]. Therefore, IPM-like methods may have unsatisfactory performance when the ground is not flat. Another geometry-based way, represented by Lift, Splat, and Shoot (LSS) [27], is to lift 2D pixels to 3D space via depth prediction [32; 11; 29; 40]. Learning-based methodologies have made great progress in recent years [23; 31; 13; 38; 19; 20; 22; 34; 35]. HDMapNet [19] utilizes an MLP to cover the complex transformation between PV and BEV features. Transformers with BEV queries, first used by Tesla [1] for multi-view PV-BEV transformation, have gained popularity in recent works [7; 20; 22; 26; 10] because of their superior efficacy. In this approach, view transformation is usually conducted using cross-attention between PV features and BEV queries with positional encoding [20]. As this dense-query design leads to a memory cost issue in the cross-attention operation, several studies such as BEVSegFormer [26], PersFormer [7], and BEVFormer [20] deploy deformable attention [42] for faster computation. Concurrently, GKT [8] leverages camera parameters to find 2D reference points such that queries can focus on small regions.
**Temporal fusion in BEV lane segmentation.** Existing studies have confirmed that utilizing multi-frame information helps to improve detection accuracy while alleviating the occlusion issue in single-frame perception [20; 22; 40; 28]. BEVerse [40] and BEVFormer [20] warp past BEV features to the present frame with ego motion information; the former mainly creates a temporal block stack and deploys 3D convolutions, while the latter utilizes a self-attention layer to query warped previous BEV features with current BEV features. Differently, [22] and [28] directly query PV features. PETRv2 [22] combines temporal information by adding corresponding positional encodings for both previous and current image features. Specifically, it generates the previous frame's positional encoding by converting its 3D coordinates to the current frame according to ego motion, and concatenates the converted 3D coordinates to the 2D features to obtain 3D position-aware features. 3D position-aware features of different frames can then be queried by BEV queries. Unifusion [28] treats temporal fusion as a multi-view fusion problem by converting past frames to virtual views, which are transformed to the ego BEV space with view transformation. The multi-view features are then fused with BEV queries in a cross-attention layer.
Multi-frame fusion is also an essential part of long-range local map generation in ADS. Unifusion [28] proposes new BEV settings representing larger areas for longer temporal fusion. However, adopting larger settings requires striking a balance between BEV grid resolution and computational complexity, which may not only affect accuracy but also constrain the map size. A scene-level long-term temporal fusion method proposed by HDMapNet [19] is to paste the BEV maps of previous frames into the current one using the ego pose and to fuse the overlapped grids via max pooling. Yet, fusion methods like temporal max pooling may unexpectedly retain noise and thus do not perform stably across different scenes. The real-time long-range local map generation logic proposed by Tesla [1] is to update only a portion of BEV grids in a long-range map at each moment with a "spatial RNN". Our work delves further into this logic by proposing a practical pipeline and solution, with the aim of aiding future researchers in their exploration of this field.

Figure 2: Coarse-to-fine spatial matching and HomoGridFusion models.
Most similar to our approach, and developed in parallel, is NMP [37], which leverages a city-wide global map for feature prior storage and uses current-to-prior attention followed by a ConvGRU module to fuse prior and current features for online inference. A main difference is that we use a shared recurrent neural network for homogeneous grid fusion, under the assumption that all grid features follow an identical data distribution regardless of the grid's position in the local BEV plane. Besides, we adopt local spatial attention for fine matching, which further reduces the computational cost of online inference. Moreover, our HDMapNet-based NeMO outperforms [37] on the NuScenes dataset for local BEV, and we provide a benchmark for scene-level long-range map generation.
## 3 Methodology
The proposed NeMO system generates a long-range semantic map using an image sequence and ego pose information as inputs, as depicted in Figure 1. At each time step, the image \(\mathcal{I}_{cur}\) is fed into a PV-to-BEV neural network, which produces BEV features \(\mathcal{F}_{cur}^{bev}\) relative to the ego vehicle. The historical BEV features \(\mathcal{F}_{hist}^{bev}\) that correspond to the same area are retrieved from the BEV Feature Grid Map (also known as "the big feature map") through a "coarse-to-fine" spatial matching technique, which we refer to as "reading". The BEV features \(\mathcal{F}_{cur}^{bev}\) and \(\mathcal{F}_{hist}^{bev}\) are then integrated in the HomoGridFusion model to generate \(\mathcal{F}_{fused}^{bev}\), which is "written" back to the big feature map using the ego pose \(E_{cur}\) associated with the current moment, updating the corresponding grids. Finally, the updated big feature map is fed into a decoder to generate the desired local long-range semantic map.
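For concreteness, a minimal sketch of this read-fuse-write loop is given below. The callables (`pv_to_bev`, `spatial_matcher`, `grid_fusion`, `decoder`) and the tensor layout are illustrative placeholders rather than the released implementation; the actual components are described in the following subsections.

```python
def nemo_step(image, ego_pose, big_feature_map,
              pv_to_bev, spatial_matcher, grid_fusion, decoder):
    """One NeMO time step: encode, read, fuse, write, decode.

    All callables and the (K, H, W) tensor layout are illustrative placeholders.
    """
    # PV-to-BEV: single-frame BEV features relative to the ego vehicle.
    f_cur = pv_to_bev(image)                      # (K, H_bev, W_bev)

    # "Reading": coarse-to-fine matching retrieves historical features of the
    # same area from the big feature map, plus the target grid indices.
    f_hist, rows, cols = spatial_matcher(big_feature_map, ego_pose)

    # Per-grid temporal fusion (HomoGridFusion).
    f_fused = grid_fusion(f_cur, f_hist)          # (K, H_bev, W_bev)

    # "Writing": update the corresponding grids of the big feature map.
    big_feature_map[:, rows, cols] = f_fused

    # Decode the updated big map into the long-range semantic map.
    return decoder(big_feature_map)
```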
**PV-to-BEV revisiting.** Many studies focus on PV-to-BEV transformation [19; 20; 30; 15; 40], which involves converting images \(\mathcal{I}_{cur}\) to BEV features \(\mathcal{F}_{cur}^{bev}\subseteq\mathbb{R}_{O_{e}}^{H_{bev}\times W_{bev}\times K}\) in ego coordinates \(O_{e}\), where \(H_{bev}\) and \(W_{bev}\) represent the height and width of the BEV plane (i.e., the number of grids along height and width), and \(K\) is the feature dimension. Here the input \(\mathcal{I}_{cur}\) can be either one front-view image or multiple images from surrounding cameras. While some research has looked into ways to improve ego-centric BEV perception by integrating multiple timestamp frames [20; 40], ultimately they still produce a single-frame BEV perception map. NeMO is compatible with any PV-to-BEV frontend as long as it can produce a BEV feature map \(\mathcal{F}_{cur}^{bev}\) for the current timestamp.
### Coarse-to-fine spatial matching
The NeMO system employs a two-stage "coarse-to-fine" spatial matching to obtain historical BEV features \(\mathcal{F}_{hist}^{bev}\subseteq\mathbb{R}_{O_{e}}^{H_{bev}\times W_{bev}\times K}\) that correspond to the same spatial area as \(\mathcal{F}_{cur}^{bev}\).
We first use coordinate transformation to obtain a "coarse" position for the historical BEV features in the readable and writable big feature map \(\mathcal{F}^{map}\subseteq\mathbb{R}_{O_{g}}^{H_{map}\times W_{map}\times K}\), which is defined in a scene-global coordinate system \(O_{g}\). The size of \(\mathcal{F}^{map}\), with dimensions \(H_{map}\times W_{map}\), is much larger than that of the BEV plane \(H_{bev}\times W_{bev}\). The ego pose \(E_{cur}\) is critical in identifying the optimal target region within the big feature map for retrieving the historical BEV information. Denoting the grid coordinates in the ego-centric BEV plane as \(C_{bev}\), their coordinates in the big feature map are coarsely determined as \(C_{map}=E_{cur}C_{bev}\). The \(C_{map}\) coordinates serve as reference points to sample \(\mathcal{F}_{hist}^{bev}\) from \(\mathcal{F}^{map}\).
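A minimal sketch of this coarse matching step is shown below, assuming a 2D rigid ego pose given as a \(3\times 3\) homogeneous matrix, equal grid resolutions for the ego plane and the big map, a map origin at its corner, and nearest-neighbour sampling; all of these are simplifying assumptions for illustration.

```python
import torch

def coarse_match(big_map, ego_pose, h_bev=200, w_bev=200, res=0.15):
    """Coarse matching sketch: C_map = E_cur * C_bev, then sample F_hist.

    big_map: (K, H_map, W_map) feature grid map in the global frame O_g.
    ego_pose: 3x3 homogeneous 2D transform from ego metric coordinates to
    global metric coordinates (assumptions for this sketch only).
    """
    k, h_map, w_map = big_map.shape
    # Grid centres of the ego-centric BEV plane, in metres (ego frame).
    ys, xs = torch.meshgrid(torch.arange(h_bev), torch.arange(w_bev), indexing="ij")
    c_bev = torch.stack([(xs - w_bev / 2) * res,
                         (ys - h_bev / 2) * res,
                         torch.ones(h_bev, w_bev)], dim=-1)          # (H, W, 3)

    # Coarse coordinates in the big map frame: C_map = E_cur @ C_bev.
    c_map = c_bev @ ego_pose.float().T                               # (H, W, 3)
    cols = (c_map[..., 0] / res).round().long().clamp(0, w_map - 1)
    rows = (c_map[..., 1] / res).round().long().clamp(0, h_map - 1)

    # Historical features of the same area (nearest-neighbour sampling).
    return big_map[:, rows, cols], rows, cols                        # (K, H, W)
```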
In contrast to established techniques that save previous frames' features and warp them to the current moment's coordinate system, potentially losing information, the coarse spatial matching approach introduced in this study, which employs the big feature map, seamlessly attains spatial alignment for both past and present features with a shared scale on the BEV grid plane. Moreover, the big map range (\(H_{map}\) and \(W_{map}\)) is adjustable and can be expanded on demand over time.
In ideal situations, when \(E_{cur}\) is accurate and the grid resolution is high, \(\mathcal{F}_{cur}^{bev}\) and \(\mathcal{F}_{hist}^{bev}\) are perfectly spatially aligned. However, this is not always the case, as sensors may be noisy and grid density needs to be balanced against computational cost. Therefore, we propose the "fine" spatial matching stage to alleviate the misalignment issue with a grid-based local-spatial attention (LSA) network. The
LSA model adopts a local querying approach for each grid by considering only its adjacent grids, rather than querying all grids globally, which not only yields more accurate results but also incurs low computational cost. For a grid \(G\) with initialized coarse coordinate \(C^{G}_{map}\) in the big feature map, we sample features from \(\mathcal{F}^{bev}_{cur}\) using a local kernel that is expanded according to \(C^{G}_{map}\), as shown in the upper part of Figure 2. Denote the sampled features as \(\mathcal{F}^{G}_{localkernel}\). BEV queries \(Q_{bev}\) are generated by positional encoding in the ego-centric BEV plane. The query for \(G\), denoted \(Q^{G}_{bev}\), is determined by its position in the BEV plane, \(C^{G}_{bev}\). For grid \(G\), the fine-matching current feature is formulated as:
\[\bar{\mathcal{F}}^{bev,G}_{cur}=CA(Q^{G}_{bev},\mathcal{F}^{G}_{localkernel})\]
where \(CA\) refers to cross-attention. With a query-based structure that integrates information from local regions for each BEV grid, we obtain \(\bar{\mathcal{F}}^{bev}_{cur}\subseteq\mathbb{R}^{H_{bev}\times W_{bev}\times K}_{O_{e}}\), which is better aligned with \(\mathcal{F}^{bev}_{hist}\).
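The following sketch illustrates the local querying idea. For simplicity, the local kernel is centred at the grid's own position rather than expanded around \(C^{G}_{map}\), and a plain scaled dot-product cross-attention is used; it should therefore be read as an illustration of the LSA design under these assumptions, not the exact model.

```python
import torch
import torch.nn.functional as F

def local_spatial_attention(q_bev, f_cur, kernel=3):
    """LSA-style fine matching sketch: each grid query attends to a local
    kernel of current-frame features instead of all grids globally.

    q_bev: (H, W, K) per-grid positional queries; f_cur: (K, H, W) features.
    """
    k_dim, h, w = f_cur.shape
    pad = kernel // 2
    # Gather a kernel x kernel neighbourhood for every grid: (H*W, k*k, K).
    patches = F.unfold(f_cur.unsqueeze(0), kernel_size=kernel, padding=pad)
    patches = patches.reshape(k_dim, kernel * kernel, h * w).permute(2, 1, 0)
    q = q_bev.reshape(h * w, 1, k_dim)                        # (H*W, 1, K)
    # Scaled dot-product cross-attention restricted to the local kernel.
    attn = torch.softmax(q @ patches.transpose(1, 2) / k_dim ** 0.5, dim=-1)
    out = (attn @ patches).squeeze(1)                         # (H*W, K)
    return out.T.reshape(k_dim, h, w)
```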
### HomoGridFusion
\(\bar{\mathcal{F}}^{bev}_{cur}\) and \(\mathcal{F}^{bev}_{hist}\) are combined with HomoGridFusion, a per-grid temporal fusion model of current and historical features. The core idea of HomoGridFusion is a grid-based shared recurrent network structure. The current and historical BEV features \(\bar{\mathcal{F}}^{bev}_{cur}\) and \(\mathcal{F}^{bev}_{hist}\subseteq\mathbb{R}^{H_{bev}\times W_{bev}\times K}_{O_{e}}\) represent the same area at the same scale, enabling grid-based temporal fusion at this step. For each grid \(G\) with coordinate \((h,w)\) in the BEV plane, where \(h\in\{1,2,...,H_{bev}\}\) and \(w\in\{1,2,...,W_{bev}\}\), we take the current BEV feature \(\bar{\mathcal{F}}^{bev}_{cur}[h][w]\) and the corresponding historical BEV feature \(\mathcal{F}^{bev}_{hist}[h][w]\), both of which are \(K\)-dimensional feature vectors. They are integrated into a new \(K\)-dimensional feature vector in a recurrent manner: \(\mathcal{F}^{bev}_{hist}[h][w]\) is treated as a hidden state, \(\bar{\mathcal{F}}^{bev}_{cur}[h][w]\) as a new observation, and the fused state \(\mathcal{F}^{bev}_{fused}[h][w]\) is produced by a recurrent model. Since all grids share the same recurrent model, the computation is easy to parallelize: the BEV feature arrays are unfolded to form a batch of size \(H_{bev}\times W_{bev}\). This design rests on the assumption that the feature distribution of all BEV grids follows an identical pattern, regardless of the grid's spatial position \((h,w)\) on the BEV plane \(\mathbb{R}^{H_{bev}\times W_{bev}\times K}_{O_{e}}\). \(\mathcal{F}^{bev}_{cur}\) and \(\mathcal{F}^{bev}_{hist}\) are embedded in the ego coordinate system, which moves as the vehicle advances. \(\mathcal{F}^{bev}_{cur}\) represents the features of a specific area based on a single observation, whereas \(\mathcal{F}^{bev}_{hist}\) is based on historical observations. The assumption is that the way these two types of features are combined should be the same across different areas, regardless of spatial properties.
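A minimal sketch of this grid-wise shared recurrent fusion is given below, with a single GRU cell standing in for the recurrent block; the actual HomoGridFusion architecture (convolutional layers plus a two-block bidirectional recurrent network) is described next.

```python
import torch
import torch.nn as nn

class SharedGridFusion(nn.Module):
    """Sketch of grid-wise shared recurrent fusion with a single GRU cell."""

    def __init__(self, k: int):
        super().__init__()
        self.cell = nn.GRUCell(input_size=k, hidden_size=k)   # shared by all grids

    def forward(self, f_cur: torch.Tensor, f_hist: torch.Tensor) -> torch.Tensor:
        # f_cur, f_hist: (K, H_bev, W_bev); the historical feature is the
        # hidden state, the current feature is the new observation.
        k, h, w = f_cur.shape
        obs = f_cur.reshape(k, h * w).T                        # (H*W, K) batch
        hidden = f_hist.reshape(k, h * w).T                    # (H*W, K) batch
        fused = self.cell(obs, hidden)                         # same cell per grid
        return fused.T.reshape(k, h, w)

# Example: fuse 64-dimensional features on a 200 x 200 BEV plane.
fusion = SharedGridFusion(k=64)
f_fused = fusion(torch.randn(64, 200, 200), torch.randn(64, 200, 200))
```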
The main body of the HomoGridFusion model presented in this paper is a two-block bidirectional recurrent network, each block followed by a Multi-Layer Perceptron (MLP) layer. Prior to the recurrent blocks, we apply three convolutional layers to \(\bar{\mathcal{F}}^{bev}_{cur}\) and \(\mathcal{F}^{bev}_{hist}\) to better capture the visual pattern of each grid.
## 4 BDD-Map Dataset
BDD100K is a large-scale diverse driving video dataset that covers a wide range of driving scenarios, including different times of day and weather conditions in multiple cities. Different from other popular autonomous driving datasets such as NuScenes, Argoverse 2 and Waymo, BDD100K provides less accurate ego motion due to hardware limitations. Instead of using Lidar, BDD100K employs GPS/IMU information from phones to generate rough trajectories. All smartphones are equipped with cameras, GPS and IMU. Therefore, BDD100K represents the most universal perception system for all customers and covers much richer scenes. The computer vision community has greatly benefited from the extensive BDD100K driving scene data. However, its primary usage is for object detection, scene segmentation, and behavior prediction; few works use it for road structure perception to provide reliable maps. In this work, we attempt to utilize a small portion of it for the semantic map perception task. Although the accuracy of the constructed maps is limited by trajectory precision, they exhibit an approximate road topology structure that is sufficient to support lane-level localization and augmented-reality road guidance.

Figure 3: Pixel map in ground plane.

Figure 4: BDD-Map weather condition, time of day, and scene distributions.
Generating road element annotations such as lanes, boundaries and pedestrian crossings in BEV for a BDD clip is a challenging task. BDD100K offers lane annotations for only one frame per clip, leaving out annotations for the majority of frames within the clip. Additionally, the dataset does not officially provide extrinsic and intrinsic parameters for each clip, further hindering the process. To tackle these challenges, we develop a semi-automated road element annotation system. Considering the static nature of road elements and their better observability in BEV, we propose to convert each 40s video clip into a single large frame, a pixel map, as shown in Figure 1 (a). The pixel map effectively displays all road elements, and areas obscured by dynamic objects such as cars can be easily annotated with the help of surrounding elements. We provide more details about the semi-automatic annotation pipeline in Appendix A.2.
There are 100 sets in BDD100K and each contains 1000 clips. We randomly selected set 66 and ran the annotation pipeline, resulting in 446 valid ground-plane pixel maps with complete road element annotations in BEV and reasonable camera parameters. We omitted the remaining clips due to unreasonable trajectories or the inability to estimate a suitable extrinsic matrix. We will release the annotated data as an extension of BDD100K, named BDD-Map. The weather condition, scene, and time-of-day distributions are shown in Figure 4. Meanwhile, we will release the annotation tools, since a large portion of BDD100K remains unlabeled and we hope to take advantage of the diversity of BDD100K to promote the development of semantic map perception.
## 5 Experiment
### Experimental Settings
**Datasets and tasks.** We conduct experiments on two open datasets, namely NuScenes ("Nus" in the following tables) [4] and the aforementioned BDD-Map ("BDD" in the following tables). NuScenes has 1000 scenes. Following the HDMapNet [19] settings, we use the training set (700 scenes) for model training and the validation set (150 scenes) for evaluation. BDD-Map has 446 scenes in total, of which 400 are used for training and 46 for evaluation. For temporal model training, we set frames-per-clip \(T=4\) and step size \(D=T=4\) (the distance between the first frames of two consecutive clips when splitting), splitting the NuScenes training set into 6923 clips and the BDD-Map dataset into 9565 clips. Following [19], we focus on semantic map segmentation considering the road elements lane line, pedestrian crossing, and boundary.
Figure 5: Visual comparison on NuScenes scene-level long-range map generation.
**Experimental settings.** To evaluate the NeMO system, we use HDMapNet [19] and BEVerse [40] as the baseline methods; they also serve as the PV-to-BEV module in NeMO. We conduct experiments under both single front-view image (_fcam_) and six surrounding images (_6cam_) settings for the NuScenes dataset, and under _fcam_ for the BDD-Map dataset.
In the PV-to-BEV process, we use a 30m \(\times\) 30m ego-plane setting for _fcam_ with a 200 \(\times\) 200 BEV plane and a resolution of 0.15m, while for _6cam_ we adopt the same 30m \(\times\) 60m setting (200 \(\times\) 400) as in [19]. In the training process, we prepare a fixed-size plane for each \(T\)-frame clip in the world coordinate system: 256 \(\times\) 256 for _fcam_ and 384 \(\times\) 384 for _6cam_ (Table 1). In the inference process, we generate a big map from all frames in each scene and evaluate the accuracy of these big maps (Table 2). We use the cross-entropy loss for semantic segmentation and the Adam optimizer [18] for model training with a learning rate of 1e-03 and weight decay of 1e-07, the same as for the HDMapNet model [19]. We train HDMapNet-based NeMO with one NVIDIA GeForce RTX 3090, and BEVerse-based NeMO with eight. Implementation details for both the training and inference phases are presented in Appendix A.1.
**Evaluation metric and baseline.** For all settings and the big-map condition, mean intersection-over-union (mIoU) is used as the evaluation metric. HDMapNet [19] and BEVerse [40] are selected as the baselines for per-timestamp BEV perception comparison. A common approach to generating big maps from multiple BEVs is to stitch single-frame BEVs with ego poses and integrate the overlapped grids. We select two common non-parameterized methods, overwriting the grids with the information captured at the most recent moment or keeping the maximum (max pooling in the temporal dimension), as baselines to compare with our neural-network temporal fusion.
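For reference, the two non-parameterized stitching baselines can be sketched as follows; the array layouts and index arguments are illustrative assumptions.

```python
import numpy as np

def stitch_baseline(big_map, bev_scores, rows, cols, mode="overwrite"):
    """Non-parameterized stitching baselines for long-range map generation.

    big_map: (C, H_map, W_map) accumulated class scores; bev_scores: (C, H, W)
    single-frame scores; rows/cols: target grid indices derived from the ego pose.
    """
    if mode == "overwrite":        # keep the most recent observation per grid
        big_map[:, rows, cols] = bev_scores
    elif mode == "maxpool":        # keep the per-grid maximum over time
        big_map[:, rows, cols] = np.maximum(big_map[:, rows, cols], bev_scores)
    return big_map
```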
### Main Results
Table 1 presents the BEV segmentation performance before and after fusion with NeMO. To provide ego-centric BEV perception for evaluation on the \(200\times 200\) or \(200\times 400\) settings, features in NeMO are extracted back from the big feature map to the ego-centric plane according to the ego pose at each moment. For _fcam_, multi-frame fused NeMO outperforms the baseline, single-frame HDMapNet [19], on both the NuScenes and BDD-Map datasets. For NuScenes _6cam_, NeMO improves HDMapNet mIoU by 20.36%, from 32.9 to 39.6. This suggests that the proposed strategy successfully enhances perception in the map-growing process by integrating temporal information from consecutive frames. For the more advanced baseline BEVerse [40], NeMO also improves the per-timestamp perception mIoU
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline _fcam_ & & & & \multicolumn{4}{c}{mIoU(\%)} \\ \cline{4-8} & & \#Param. & Temp & Divider & Ped Xing & Boundary & All \\ \cline{3-8} & HDMapNet\({}_{200\times 200}\)\({}^{\dagger}\) & 27.0M & & 38.4 & 18.7 & 35.3 & 30.8 \\ Nus & NeMO\({}_{200\times 200}\) & & ✓ & 40.9\({}_{+2.5}\) & 23.1\({}_{+4.4}\) & 39.4\({}_{+4.1}\) & 34.5\({}_{+3.7}\) \\ & NeMO\({}_{256\times 256}\) & 28.2M & ✓ & 42.8 & 21.2 & 40.8 & 34.9 \\ \cline{2-8} & HDMapNet\({}_{200\times 200}\)\({}^{\dagger}\) & 23.8M & & 24.3 & 7.2 & 14.3 & 15.3 \\ BDD & NeMO\({}_{200\times 200}\) & & ✓ & 26.7\({}_{+2.4}\) & 10.1\({}_{+2.9}\) & 17.2\({}_{+2.9}\) & 18.0\({}_{+2.7}\) \\ & NeMO\({}_{256\times 256}\) & 25.0M & ✓ & 28.4 & 8.3 & 15.4 & 17.4 \\ \hline _6cam_ & & & & & & & \\ & HDMapNet\({}_{200\times 400}\)* & 78.3M & & 40.6 & 18.7 & 39.5 & 32.9 \\ & NeMO\({}_{200\times 400}\) & & ✓ & 45.9\({}_{+5.3}\) & 26.9\({}_{+8.2}\) & 46.0\({}_{+6.5}\) & 39.6\({}_{+6.7}\) \\ Nus & NeMO\({}_{384\times 384}\) & 79.5M & ✓ & 44.7 & 22.9 & 44.5 & 37.4 \\ \cline{2-8} & BEVerse-Map\({}_{200\times 400}\)* & 54.8M & & 53.9 & 41.0 & 54.5 & 49.8 \\ & NeMO\({}_{200\times 400}\) & & ✓ & 57.7\({}_{+3.8}\) & 47.8\({}_{+6.8}\) & 57.6\({}_{+3.1}\) & 54.4\({}_{+4.6}\) \\ & NeMO\({}_{384\times 384}\) & 55.5M & ✓ & 57.1 & 47.0 & 57.6 & 53.9 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experiments of BEV segmentation without and with temporal fusion, for both single front view _(fcam)_ and six surrounding image (_6cam_) inputs. Three segmentation classes are road divider (Divider), pedestrian crossing (Ped Xing), and boundary (Boundary). \(*\) means the results are reported in the paper [19; 40]. \(\dagger\) means the results are reimplemented in this work.
from 49.8 to 54.4. It is worth mentioning that NeMO introduces only a minor increase in model size, meaning that NeMO has the potential to obtain considerable advantages at an inconsequential cost.4
Footnote 4: The reimplementation of _fcam_ HDMapNet exhibits a significantly lower parameter count than its official counterpart _6cam_ HDMapNet. This discrepancy can be attributed to the difference in view fusion module architecture between the two models, wherein the latter employs 6 independent MLPs for 6 surrounding camera views while the former only requires one MLP for _fcam_.
We also generate a big local map for each scene (150 scenes in NuScenes and 46 scenes in BDD-Map) and evaluate the scene-based map segmentation performance in Table 2. The results demonstrate that NeMO leads to significantly improved perception accuracy compared to the non-parameterized baseline techniques, across all conditions. This suggests that the proposed grid-based fusion architecture and shared fusion model can effectively handle the information screening, updating, and memorization involved in temporal fusion. Besides, it can be observed that Maxpool outperforms Overwrite in all settings except BDD _fcam_. This is due to the low accuracy of the poses in the BDD-Map dataset: selecting the maximum value tends to retain more noisy data, which in turn diminishes the overall accuracy.
### Ablation Study
We conduct an ablation study based on HDMapNet-NeMO using the _6cam_ NuScenes dataset.
**Impacts of local-spatial attention (LSA) model.** We analyze the impact of the fine spatial matching LSA model in the NeMO system. As shown in Table 3, NeMO with the LSA model demonstrates advantages
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline \multirow{3}{*}{_fcam_} & Single-frame & Multi-frame & \multicolumn{3}{c}{mIoU(\%)} \\ & PV-to-BEV & Grid Fusion & Divider & Ped Crossing & Boundary & All \\ \cline{4-7} & & Overwrite & 40.7 & 13.5 & 36.4 & 30.2 \\ \multirow{3}{*}{Nus} & \multirow{3}{*}{HDMapNet} & Maxpool & 42.6 & 12.7 & 39.4 & 31.5 \\ & & NeMO & **47.2** & **21.6** & **44.8** & **37.9** \\ \cline{3-7} & & Overwrite & 29.9 & 7.0 & 16.4 & 17.8 \\ \multirow{3}{*}{BDD} & \multirow{3}{*}{HDMapNet} & Maxpool & 26.2 & 4.7 & 14.9 & 13.9 \\ & & NeMO & **37.5** & **12.8** & **22.5** & **24.3** \\ \hline \multirow{3}{*}{_6cam_} & \multirow{3}{*}{HDMapNet} & Overwrite & 34.4 & 10.0 & 32.1 & 25.6 \\ & & Maxpool & 43.3 & 13.9 & 42.1 & 33.1 \\ \multirow{3}{*}{Nus} & \multirow{3}{*}{BEVerse} & NeMO & **48.7** & **29.1** & **49.7** & **42.5** \\ \cline{3-7} & & Overwrite & 52.6 & 28.1 & 49.3 & 43.3 \\ \cline{1-1} & & Maxpool & 61.7 & 46.1 & 59.8 & 55.8 \\ \cline{1-1} & & NeMO & **62.5** & **49.5** & **61.6** & **57.9** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experiments of BEV stitching and temporal fusion methods in multi-frame grid fusion to generate long-range maps, for both single front view (_fcam_) and six surrounding image (_6cam_).
\begin{table}
\begin{tabular}{l l l c c c} \hline \hline & & \multicolumn{2}{c}{Spatial Matching} & \multicolumn{1}{c}{Supervision} & \multicolumn{1}{c}{HomoGridFusion Design} \\ & Baseline & NeMO & w/o LSA & Many-to-many & LSTM & Conv1d+LSTM \\ \cline{3-6} Divider & 43.3 & 48.7 & 46.8 & 42.0 & 44.7 & 45.1 \\ Ped Xing & 13.9 & 29.1 & 27.9 & 20.0 & 20.7 & 22.0 \\ Boundary & 42.1 & 49.7 & 46.2 & 42.5 & 44.8 & 43.9 \\ All & 33.1 & 42.5 & 40.3 & 34.9 & 36.8 & 37.0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experiments of NeMO (a) w/ and w/o LSA fine spatial matching, (b) different supervision types, and (c) different designs of HomoGridFusion. Results shown in the table are mIoU (%). For comparison, the NeMO column shows the results of the full NeMO system with the LSA model, many-to-one supervision, and 2D convolutional layers in HomoGridFusion.
in all classes compared to the result without LSA. This experiment highlights the role of local-spatial fusion design in enhancing the performance of temporal fusion in map generation.
**Convolutional layers in HomoGridFusion.** For the default setting, we use 2D convolutional layers before the LSTM [14] recurrent block. We compare it with two other designs of HomoGridFusion, i.e., without convolutional layers ("LSTM") and with simple 1D convolutions ("Conv1d+LSTM"). Table 3 shows that 2D convolutional layers lead to better performance for long-range map generation.
**Supervision types in HomoGridFusion.** The recurrent network in HomoGridFusion can be supervised in various ways. Many-to-many supervision is implemented by supervising partial grids at each timestamp, while many-to-one supervision applies clip-based map-wide supervision to all grids involved. Results in Table 3 demonstrate that many-to-one supervision surpasses the many-to-many method, suggesting that it enables the model to learn to effectively disregard and memorize information across multiple frames in a more global sense. Such an approach is particularly beneficial for NeMO's map generation process.
**Impacts of pose noise.** We validate NeMO's capability to handle noisy pose information through experiments on the BDD-Map dataset in Table 1 and Table 2. Besides, we manually introduce pose noise into the NuScenes dataset and compare NeMO with the baselines across two noise levels. In Table 4, we show results with Gaussian noise added to the ego pose with different standard deviations. Specifically, \([0.5,0.5]\) means that random noise \(e\sim N(0,0.5)\) in degrees is added to each of the three Euler angles of R, and \(e\sim N(0,0.5)\) meters of noise is added to the x and y coordinates of T. It can be observed that, as the pose noise increases, the performance of long-range maps generated via overwriting and max pooling deteriorates substantially. Despite a slight reduction, NeMO maintains a relatively high level of performance. Notably, even at the 0.5 noise level, NeMO outperforms the noise-free maxpool method, demonstrating its considerable capability to tackle noisy poses.
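The noise-injection procedure described above can be sketched as follows; the function is an illustrative reconstruction under these assumptions, not the exact experiment code.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def perturb_pose(R, T, sigma_deg=0.5, sigma_m=0.5, rng=None):
    """Add Gaussian noise to an ego pose (rotation R, translation T)."""
    rng = np.random.default_rng() if rng is None else rng
    # Noise on each of the three Euler angles (in degrees).
    euler = Rotation.from_matrix(R).as_euler("xyz", degrees=True)
    euler = euler + rng.normal(0.0, sigma_deg, size=3)
    R_noisy = Rotation.from_euler("xyz", euler, degrees=True).as_matrix()
    # Noise on the x and y coordinates of the translation (in metres).
    T_noisy = np.array(T, dtype=float)
    T_noisy[:2] += rng.normal(0.0, sigma_m, size=2)
    return R_noisy, T_noisy
```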
## 6 Conclusion and Discussion
**Conclusion.** This paper presents a novel neural map growing system, named NeMO. By employing coarse-to-fine spatial matching and the HomoGridFusion module to fuse temporal information, NeMO generates long-range segmentation maps of target areas from image streams. Experiments demonstrate that, with only a small increase in model size for temporal fusion, NeMO leads to a significant improvement in the generated Bird's-Eye-View (BEV) map compared to existing methods. Besides, we evaluate the performance of long-range local maps generated from all images in each scene and provide a benchmark within this novel evaluation framework. We show that NeMO achieves broad generalization across scenes and various sizes of the BEV plane. Meanwhile, we extend a portion of the BDD100K dataset by incorporating BEV map element annotations and release BDD-Map as a new BEV dataset. We hope to provide a comprehensive and diverse resource to facilitate further advancement in this field of research.
**Broader impacts.** The study suggests that employing a shared, grid-based, location- and view-independent fusion network to temporally fuse individual BEV grids in a big feature map yields significant improvements. Instead of fusing temporal BEV maps in the ego coordinate system, it extracts, denoises, memorizes, and updates map information in real space, which redefines the paradigm of temporal fusion. We release the BDD-Map dataset and tools for generating BEV annotations to aid others in producing more diverse BEV datasets. We hope these initial exploratory undertakings and related resources will advance BEV perception.
**Limitations and future directions.** One limitation of this study lies in the implementation. While the NeMO framework supports end-to-end training, a two-stage approach is employed
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Noise & \multicolumn{3}{c}{[0, 0]} & \multicolumn{3}{c}{[0.1, 0.1]} & \multicolumn{3}{c}{[0.5, 0.5]} \\ \cline{2-10} & Overwrite & Maxpool & NeMO & Overwrite & Maxpool & NeMO & Overwrite & Maxpool & NeMO \\ \cline{2-10} Divider & 34.4 & 43.3 & 48.7 & 33.2 & 42.2 & 48.0 & 22.5 & 15.8 & 39.6 \\ Ped Xing & 10.0 & 13.9 & 29.1 & 9.9 & 13.6 & 28.8 & 8.2 & 8.5 & 26.8 \\ Boundary & 32.1 & 42.1 & 49.7 & 31.2 & 41.5 & 49.1 & 23.1 & 23.9 & 44.1 \\ All & 25.5 & 33.1 & 42.5 & 24.8 & 32.4 & 42.0 & 17.9 & 16.1 & 36.8 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Experiments of map generation with noisy pose information.
in this study, whereby the single-frame PV-to-BEV perception model and the multi-frame fusion model are separately trained and supervised. As a result, the ultimate fusion outcome is notably constrained by the initial PV-to-BEV model, limiting its performance and effectiveness. Therefore, a potential avenue for future research is to investigate methodologies for end-to-end training.
|
2310.19591
|
Prediction of Locally Stationary Data Using Expert Advice
|
The problem of continuous machine learning is studied within the framework of
the game-theoretic approach: for calculating the next forecast, no assumptions
about the stochastic nature of the source that generates the data flow are used
-- the source can be analog, algorithmic or probabilistic, and its parameters
can change at random times; when building a prognostic model, only structural
assumptions about the nature of data generation are used. An online forecasting
algorithm for a locally stationary time series is presented. An estimate of the
efficiency of the proposed algorithm is obtained.
|
Vladimir V'yugin, Vladimir Trunov
|
2023-10-30T14:48:01Z
|
http://arxiv.org/abs/2310.19591v1
|
# Prediction of Locally Stationary Data Using Expert Advice
###### Abstract
The problem of continuous machine learning is studied within the framework of the game-theoretic approach: for calculating the next forecast, no assumptions about the stochastic nature of the source that generates the data flow are used -- the source can be analog, algorithmic or probabilistic, and its parameters can change at random times; when building a prognostic model, only structural assumptions about the nature of data generation are used. An online forecasting algorithm for a locally stationary time series is presented. An estimate of the efficiency of the proposed algorithm is obtained.
**KEYWORDS:** Lifelong Machine Learning, Predictive Algorithms, Supervised Learning, Adaptive Online Prediction Algorithms, Predictions with Expert Advice, Regret, Aggregation Algorithm, Fixed Share, Mixing Past Posteriors (MPP).
## 1 Introduction
Predicting data coming from a "black box" is one of the main tasks of machine learning. In this setting, no stochastic assumptions about the data source are used. The data arrive online as a time series consisting of pairs of the form ("signal", "response"). The data source can be an analog, deterministic (algorithmic) or stochastic process. We will use only simple structural assumptions about the source of the data.
In this paper, an approach is proposed in which training is performed on small subsamples of the main sample, and the forecasts of the constructed predictive models are combined into one common forecast using known aggregation methods. The general scheme of the online learning process is as follows. The learning process occurs at discrete times, in steps \(t=1,2,\dots\). At step \(t\), a local predictive model (expert predictive strategy) for obtaining a response to the signal is built from a subsample of the data observed in the past. As a rule, this is a regression function built on the observed segment of the time series. Thus, at step \(t\), there are \(t\) predictive models
built from the corresponding subsamples of the past. After that, the signal \(\mathbf{x}_{t}\) is observed and all the expert predictive strategies built at steps \(1,2,\ldots,t\) present their response predictions. Predictor builds its response prediction by aggregating the experts' predictions.
The problem of online aggregation of forecasts is solved within the framework of the theory of prediction with expert advice. This approach is widely represented in the scientific literature on machine learning (see Vovk 1998, Cesa-Bianchi and Lugosi 2006, V'yugin 2022).
After the predictions are presented, the source (the corresponding generator) produces the true response \(y_{t}\), and the experts and Predictor calculate their losses due to the difference between their predictions and the response.
In mathematical statistics, when building predictive models, one often uses stochastic assumptions about the nature of the data. In this work, predictive models are built using online machine learning methods within the framework of the game-theoretic approach, without relying on stochastic data models.
When constructing predictive strategies, assumptions about the structure of the data generation method can be used. The following data generation scheme is assumed: there are several generators which, replacing each other, generate a time series that is thus divided into subsamples - areas of stationarity. The inner workings of the generators are unknown to the experts and Predictor. Each area of stationarity can be studied by machine learning methods based on the output of the corresponding generator; that is, from the data of the stationarity region, a local predictive algorithm (local predictive model) tied to that generator is built, which can then be applied to other stationarity domains generated by the same generator.
In the theory of prediction with expert advice, the efficiency of an aggregating algorithm is evaluated using the concept of a regret, which is the difference between the total (cumulative) losses of the aggregating algorithm and the total losses of the expert algorithm accumulated over the entire prediction period. The goal of the aggregating algorithm is to minimize the regret with respect to each expert strategy (see V'yugin 2022, Cesa-Bianchi and Lugosi 2006, Vovk 1998).
In another, more general, formulation of the forecasting problem, the regret of the aggregating algorithm with respect to arbitrary sequences of expert strategies is minimized: the series of steps at which predictions are made is divided into segments, and each segment is assigned its own expert; the sequence of segments and corresponding experts is called a composite expert. The purpose of the algorithm changes - now it must predict in such a way that it is not worse than each composite expert. Accordingly, the concept of the algorithm's regret is modified - now it is the difference between the total loss of the algorithm and the total loss of the sequence of experts. This change allows us to more accurately simulate real-life conditions, when the nature of outcomes can change over time and different experts can predict with varying degrees of success depending on the current trend. The corresponding algorithm is called Fixed Share (Herbster and Warmuth 1998). In the work of Bousquet and Warmuth (2002), a further generalization of the Fixed Share
method was proposed - the method of Mixing Past Posteriors (MPP). The cumulative loss of the aggregation algorithm is related to the loss of arbitrary convex combinations of the experts. The concept of regret also changes: now the total loss of the algorithm is compared with the total loss of convex combinations of expert strategies (see details in V'yugin 2022 and Bousquet and Warmuth 2002). In this work, we apply this approach to construct an algorithm for predicting locally stationary data.
A characteristic feature of the problem considered in this work is the absence of a predetermined set of competing expert strategies, as was the case in the works cited above. Instead, new expert strategies are being built at every step of the online learning process. The predictor must aggregate at each step the forecasts of all the expert strategies built by that time.
Let us briefly describe the proposed approach. Expert strategies (local predictive models) are automatically built online depending on the observed real data. At each step, a new expert predictive strategy is introduced that reflects the local properties of the observed part of the time series (subsample). The forecasts of all predictive strategies built up to this point are combined into the Predictor forecast using one of the aggregation methods.
The general scheme of learning with a teacher using expert strategies has the form of a game with the following participants: Predictor and experts \(i\in\mathcal{N}\). At each step \(t\) of the game, each expert \(i\) observes the signal \(\mathbf{x}_{t}\) and provides its prediction \(f_{i,t}=f_{i,t}(\mathbf{x}_{t})\), and Predictor calculates its prediction \(\gamma_{t}\). After that, the true response \(y_{t}\) is presented and the losses \(l_{i,t}=\lambda(f_{i,t},y_{t})\) of the experts and the loss \(h_{t}=\lambda(\gamma_{t},y_{t})\) of Predictor are calculated, where \(\lambda(\gamma,y)\) is a loss function that takes non-negative values.
We assume that the data source has several response generation modes, so the data generated by it are broken down into corresponding temporal "stationarity intervals". Each stationarity interval corresponds to a certain operating mode of the data source and is characterized by a valid predictive model - an expert strategy whose parameters are determined by that stationarity interval.
The parameters of the valid prognostic model corresponding to a given operating mode of the source can be determined at the first appearance of a stationarity interval corresponding to that operating mode. We use the assumption that, after its parameters are determined, the predictive model remains valid on the other stationarity intervals corresponding to the same operating mode of the source (generator).
Since the boundaries of the stationarity intervals are unknown to Predictor, expert predictive models are built at each training step. Some of these models are valid, i.e., they are trained on data generated by some generator; the rest are invalid, i.e., they do not correspond to data from any single stationarity interval. Thus, at each stage of forecasting, we have a collection of valid and invalid local predictive models, from which we compose a single effective predictive model for Predictor. The constructed predictive models compete with each other at every moment of time, so we combine (aggregate) them using methods from the theory of prediction with expert advice. The main result of this work is the
construction and study of an algorithm for predicting a locally stationary time series, which aggregates all the constructed predictive models, highlighting the forecasts of valid local predictive models.
The proposed approach is implemented in the form of the **GMPP** algorithm, and a theoretical bound on the loss of this algorithm is obtained.
## 2 Preliminaries
### Prediction with expert advice
Let \(\lambda(\gamma,y)\) be a loss function, where \(\gamma\) is a forecast and \(y\) is an outcome (response). The loss function takes non-negative real values. The simplest example of a loss function for a regression problem: when the outcomes and forecasts are real numbers from \(\mathcal{R}\), the square loss function \(\lambda(\gamma,y)=(\gamma-y)^{2}\) is used.
The general scheme of learning with a teacher using expert strategies is given below in the form of a game with the following participants: Predictor and experts \(i\in\mathcal{N}\). At each step \(t\) of the game, each expert \(i\) observes the signal \(\mathbf{x}_{t}\) and provides its prediction \(f_{i,t}=f_{i,t}(\mathbf{x}_{t})\), and Predictor provides its prediction \(\gamma_{t}\); after that, the true response is revealed, the experts calculate their losses \(l_{i,t}=\lambda(f_{i,t},y_{t})\), and Predictor calculates its loss \(h_{t}=\lambda(\gamma_{t},y_{t})\), where \(\lambda(\gamma,y)\) is a loss function that takes non-negative values.
The specificity of the problem lies in the fact that the number of experts is not limited: each expert \(i\), or rather its prediction function \(f_{i,t}=f_{i,t}(\mathbf{x}_{t})\), is built at step \(i\) and used in subsequent steps. Therefore, we have to assume in advance that the number of experts is infinite and consider the problem of prediction using the forecasts of an infinite number of expert strategies.
Let us present the classical formulation of the prediction problem using expert forecasts for the case when the number of experts is infinite. We assume that there is an infinite set of expert strategies \(i\in\mathcal{N}\), where \(\mathcal{N}\) is the set of all nonnegative integer numbers (or an initial segment of this set).1
Footnote 1: The second case is the classical setting considered in Vovk (1990), Vovk (1998), in which the algorithm is trained using expert forecasts from a predetermined finite set, in this case \(\mathcal{N}\) is the initial segment of the natural series.
The order of actions of players and access to information is determined by the following online protocol.
**Protocol 1**
**FOR**\(t=1,\ldots,T\)
Each expert \(i\in\mathcal{N}\) presents its own prediction \(f_{i,t}\).
Predictor presents its prediction \(\gamma_{t}\).
Get the outcome \(y_{t}\) and calculate the loss \(l_{i,t}=\lambda(f_{i,t},y_{t})\) of each expert \(i\) and the loss \(h_{t}=\lambda(\gamma_{t},y_{t})\) of Predictor.
Total loss \(L_{i,T}\) of an arbitrary expert \(i\) and the total loss \(H_{T}\) incurred by Predictor in the first \(T\) steps are defined as \(L_{i,T}=\sum\limits_{t=1}^{T}l_{i,t}\) and \(H_{T}=\sum\limits_{t=1}^{T}h_{t}\), respectively.
Experts can obtain their predictions in one way or another; this does not matter in this game. Predictor must have its own strategy for calculating the predictions \(\gamma_{t}\). The construction of such a strategy is the main task in the construction of a forecasting method. Predictor can use all the information available before its move; in particular, it can use the current and past predictions of the experts, past outcomes, and the losses of the experts at past steps of the game. The efficiency of Predictor relative to expert \(i\) is measured by the regret \(R_{i,T}=H_{T}-L_{i,T}\). The task of Predictor is to minimize the regret in relation to each of the experts.
The Predictor's strategy is based on the use of weights assigned to the experts depending on their losses in the past.
First, the initial values of the weights \(w_{i,1}\), \(i\in\mathcal{N}\), are specified. For example, \(w_{i,1}=\frac{1}{c(i+1)\ln^{2}(i+1)}\), where \(c=\sum_{i\in\mathcal{N}}\frac{1}{(i+1)\ln^{2}(i+1)}\) is a normalizing constant.2 At the end of each step \(t\), we update the weights using the exponential weighting method:
Footnote 2: As \(w_{i,1}\), the (normalized) elements of any convergent series are suitable.
\[w_{i,t+1}=w_{i,t}e^{-\eta l_{i,t}} \tag{1}\]
for each \(i\in\mathcal{N}\), where \(\eta>0\) is a learning parameter.
Weights are normalized:
\[w_{i,t}^{*}=\frac{w_{i,t}}{\sum\limits_{j\in\mathcal{N}}w_{j,t}}.\]
(see V'yugin 2022, Vovk 1990, Cesa-Bianchi and Lugosi 2006 for details).
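For illustration, a minimal sketch of this weight initialization and exponential update is given below, truncating the countable expert pool to a finite number \(n\) of experts enumerated \(i=1,\dots,n\) (an assumption made only for the sketch).

```python
import numpy as np

def initial_weights(n_experts):
    """Truncated prior w_{i,1} proportional to 1/((i+1) ln^2(i+1)), i = 1..n."""
    i = np.arange(1, n_experts + 1)
    w = 1.0 / ((i + 1) * np.log(i + 1) ** 2)
    return w / w.sum()                      # normalisation plays the role of c

def loss_update(w, losses, eta):
    """Exponential weighting step: w_{i,t+1} proportional to w_{i,t} e^{-eta l_{i,t}}."""
    w = w * np.exp(-eta * losses)
    return w / w.sum()
```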
The quantity
\[m_{t}=-\frac{1}{\eta}\ln\sum_{i\in\mathcal{N}}w_{i,t}^{*}e^{-\eta\lambda(f_{i,t},y_{t})}\]
is called the exponentially mixed loss (mixloss) and the quantity \(M_{T}=\sum_{t=1}^{T}m_{t}\) is called the cumulative (total) exponentially mixed loss over the steps \(t=1,\ldots,T\).4 The quantity \(L_{i,T}=\sum_{t=1}^{T}l_{i,t}\) is the total loss of expert \(i\) over the first \(T\) steps. These quantities underlie the analysis of predictive algorithms.
Footnote 4: In statistical physics the quantity \(m_{t}\) is called the statistical sum. It is easy to see that these are finite.
**Proposition 1**: _For any expert \(i\)_
\[M_{T}\leq L_{i,T}+\frac{1}{\eta}\ln\frac{1}{w_{i,1}}\]
_for every \(T\). A typical bound for \(M_{T}\) is presented below._
_Proof_ Let \({\bf w}_{t}^{*}=(w_{i,t}^{*}:i\in{\cal N})\) be the normalized weights and \({\bf f}_{t}=(f_{i,t}:i\in{\cal N})\) be the experts forecasts at step \(t\). Predictor's forecast is denoted by \(f_{t}\). It follows from the definition that
\[m_{t}=-\frac{1}{\eta}\ln\sum_{i\in{\cal N}}e^{-\eta\lambda(f_{i,t},y_{t})}w_{i,t}^{*}=-\frac{1}{\eta}\ln\frac{W_{t+1}}{W_{t}} \tag{2}\]
for all \(t\), where \(W_{t}=\sum\limits_{i\in{\cal N}}w_{i,t}\) and \(W_{1}=1\). From (1) we have \(w_{i,T+1}=w_{i,1}e^{-\eta L_{i,T}}\). By telescoping, we obtain for any expert \(i\) a time-independent bound \(M_{T}=-\frac{1}{\eta}\ln W_{T+1}\leq L_{i,T}+\frac{1}{\eta}\ln\frac{1}{w_{i,1}}\) for all \(T\). \(\Box\)
The method of calculating Predictor's forecast is specified in Section 2.3.
### MPP and Fixed Share methods
In what follows, we will use an important generalization of the classical prediction scheme using expert strategies - the method of mixing past posterior distributions of experts - MPP.
Let \(\Delta\) be the set of all probability distributions \({\bf p}=\{p_{i}:i\in{\cal N}\}\) on a countable set \({\cal N}\): \(p_{i}\geq 0\), \(\sum_{i\in{\cal N}}p_{i}=1\).
In what follows, the inequalities between the vectors \({\bf p}>{\bf q}\) are understood component by component: \(p_{i}>q_{i}\) as \(i\in{\cal N}\).
Let us expand the concept of relative entropy for infinite-dimensional probability vectors. Let \({\bf p}=(p_{i}:i\in{\cal N})\), \({\bf q}=(q_{i}:i\in{\cal N})\) and \({\bf q}>{\bf 0}\).
The relative entropy (Kullback-Leibler divergence) for the vectors \({\bf p},{\bf q}\in\Delta\), \({\bf q}>{\bf 0}\) is defined as
\[D({\bf p}\|{\bf q})=\sum_{i\in{\cal N}}p_{i}\ln\frac{p_{i}}{q_{i}}.\]
We set \(0\ln 0=0\). Let us recall some properties of relative entropy that will be necessary in what follows V'yugin (2022).
**Lemma 1**: _1) For any \({\bf p},{\bf q},{\bf w}\in\Delta\), where \({\bf q}>{\bf 0}\) and \({\bf w}>{\bf 0}\),_
\[D({\bf p}\|{\bf q})\leq D({\bf p}\|{\bf w})+\ln\left(\sum_{i\in{\cal N}}p_{i} \frac{w_{i}}{q_{i}}\right).\]
_2) If \({\bf q}\geq\beta{\bf w}\) for some number \(\beta>0\), then_
\[D({\bf p}\|{\bf q})\leq D({\bf p}\|{\bf w})+\ln\frac{1}{\beta}.\]
_3) In particular, for \({\bf p}={\bf w}\) and \({\bf q}\geq\beta{\bf w}\), we have \(D({\bf w}\|{\bf q})\leq\ln\frac{1}{\beta}\)._
_Proof_. From the concavity of the logarithm, we obtain inequality 1):
\[D({\bf p}\|{\bf q})-D({\bf p}\|{\bf w})=\sum\limits_{i\in{\cal N}}p_{i}\ln\frac{w _{i}}{q_{i}}\leq\ln\sum\limits_{i\in{\cal N}}p_{i}\frac{w_{i}}{q_{i}}. \tag{3}\]
If \({\bf q}\geq\beta{\bf w}\) then \(D({\bf p}\|{\bf q})-D({\bf p}\|{\bf w})\leq\ln\sum\limits_{i\in{\cal N}}p_{i}\frac{w_{i}}{\beta w_{i}}\leq\ln\frac{1}{\beta}\), i.e., 2) is satisfied. Since \(D({\bf p}\|{\bf w})=0\) for \({\bf p}={\bf w}\), from 2) we get 3). \(\Box\)
A mixing scheme (over the posterior distributions of experts) is a vector \(\beta=(\beta_{0},\dots,\beta_{t})\), where \(\beta_{i}\geq 0\) for \(0\leq i\leq t\) and \(\sum_{i=0}^{t}\beta_{i}=1\).
**Corollary 1**: _Let \(\beta=(\beta_{0},\dots,\beta_{t})\) be a mixing scheme and let \({\bf w}_{s}>{\bf 0}\) for \(0\leq s\leq t\). Let also \({\bf q}=\sum\limits_{s=0}^{t}\beta_{s}{\bf w}_{s}\) be the corresponding convex combination of the vectors \({\bf w}_{s}\). Then for an arbitrary vector \({\bf p}\in\Delta\) we have_
\[D({\bf p}\|{\bf q})\leq D({\bf p}\|{\bf w}_{s})+\ln\frac{1}{\beta_{s}}\]
_for any \(s\) such that \(\beta_{s}>0\)._
_In particular, for \({\bf p}={\bf w}_{s}\) we have an estimate for the discrepancy between an arbitrary element \({\bf w}_{s}\) and vector of a convex combination:_
\[D\left({\bf w}_{s}\|\sum\limits_{i=0}^{t}\beta_{i}{\bf w}_{i}\right)\leq\ln \frac{1}{\beta_{s}}.\]
Here is a modified scheme for the weights update in Protocol 1 using the method of Mixing Past Posteriors - MPP.
Parameter \(\eta>0\). We set \(w_{i,1}=\tilde{w}_{i,0}=\frac{1}{c(i+1)\ln^{2}(i+1)}\) for \(i\in{\cal N}\); in vector form, \({\bf w}_{t}=(w_{1,t},w_{2,t},\dots)\) and \(\tilde{{\bf w}}_{t}=(\tilde{w}_{1,t},\tilde{w}_{2,t},\dots)\).
**FOR**\(t=1,\dots,T\)
Let at step \(t\) experts incur their losses \(l_{i,t}\) for \(i\in{\cal N}\) and Predictor incurs its loss \(h_{t}\).
We update the expert weights in two stages:
**Loss Update**
\[\tilde{w}_{i,t}=\frac{w_{i,t}e^{-\eta l_{i,t}}}{\sum\limits_{j\in{\cal N}}w_{ j,t}e^{-\eta l_{j,t}}}\]
for \(i\in{\cal N}\).
**Mixing Update**
Define the mixing scheme \(\beta^{t+1}=(\beta_{0}^{t+1},\dots,\beta_{t}^{t+1})\) and update the weight of the \(i\)th expert:
\[w_{i,t+1}=\sum\limits_{s=0}^{t}\beta_{s}^{t+1}\tilde{w}_{i,s}\]
\(i\in{\cal N}\).
**ENDFOR**
Below are examples of mixing schemes from Bousquet and Warmuth (2002) and V'yugin (2022).
**Example 1.** \(\beta_{t}^{t+1}=1\) and \(\beta_{s}^{t+1}=0\) for \(0\leq s<t\) (i.e., the weights from previous steps are not taken into account in the convex combination). With this mixing scheme, the weight update reduces to the exponential weighting scheme (1):
\[w_{i,t+1}=\tilde{w}_{i,t}=\frac{w_{i,t}e^{-\eta l_{i,t}}}{\sum\limits_{j\in \mathcal{N}}w_{j,t}e^{-\eta l_{j,t}}}\]
from Protocol 1.
**Example 2.**\(\beta_{t}^{t+1}=1-\alpha\), \(\sum\limits_{s=0}^{t-1}\beta_{s}^{t+1}=\alpha\). Any such scheme \(\beta^{t+1}\) is called Fixed-Share with the parameter \(\alpha\in[0,1]\). In particular, the following mixing scheme will be used: \(\beta_{0}^{t+1}=\alpha\) and \(\beta_{s}^{t+1}=0\) for \(0<s<t\). For this mixing scheme
\[w_{i,t+1}=\alpha\tilde{w}_{i,0}+(1-\alpha)\tilde{w}_{i,t}.\]
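A minimal sketch of one MPP step with this Fixed Share mixing scheme, again with the expert set truncated to finitely many experts for illustration, is given below.

```python
import numpy as np

def mpp_fixed_share_step(w, w0, losses, eta, alpha):
    """One MPP step with the Fixed Share mixing scheme of Example 2.

    w: current weights w_{i,t}; w0: initial weights (tilde w_{i,0});
    losses: expert losses l_{i,t} at step t. Returns w_{i,t+1}.
    """
    # Loss Update: tilde w_{i,t} proportional to w_{i,t} exp(-eta * l_{i,t}).
    w_tilde = w * np.exp(-eta * losses)
    w_tilde /= w_tilde.sum()
    # Mixing Update: w_{i,t+1} = alpha * tilde w_{i,0} + (1 - alpha) * tilde w_{i,t}.
    return alpha * w0 + (1.0 - alpha) * w_tilde
```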
Let \(\mathbf{l}_{t}=(l_{1,t},l_{2,t},\ldots)\) be the (infinite-dimensional) loss vector of all experts at step \(t\), with \(l_{i,t}\geq 0\) for all \(i\) and \(t\); \(m_{t}=-\frac{1}{\eta}\ln\sum\limits_{i\in\mathcal{N}}w_{i,t}e^{-\eta l_{i,t}}\) is the exponentially mixed loss (mixloss) at step \(t\); and \(M_{T}=\sum\limits_{t=1}^{T}m_{t}\) is the cumulative mixloss over \(T\) steps.
Denote by \(L_{i,T}=\sum\limits_{t=1}^{T}l_{i,t}\) the cumulative loss of expert \(i\), \(i\in\mathcal{N}\), and by \(H_{T}=\sum\limits_{t=1}^{T}h_{t}\) Predictor's cumulative loss over the first \(T\) steps.
A vector \(\mathbf{q}_{t}=(q_{i,t}:i\in\mathcal{N})\), where \(\mathbf{q}_{t}\in\Delta\), is called a comparison vector if all its coordinates are equal to \(0\) except for a finite number of them. Consider the convex combinations of expert losses \((\mathbf{q}_{t}\cdot\mathbf{l}_{t})=\sum\limits_{i\in\mathcal{N}}q_{i,t}l_{i,t}\) and weights \((\mathbf{q}_{t}\cdot\mathbf{w}_{t})=\sum\limits_{i\in\mathcal{N}}q_{i,t}w_{i,t}\), where \(q_{t}=(q_{i,t}:i\in\mathcal{N})\) is the comparison vector. A bound for the mixloss at step \(t\) is given in the following theorem.
**Theorem 1**: _Let \(\tilde{\mathbf{w}}_{t}=(\tilde{w}_{1,t},\ldots)\) and \(\mathbf{w}_{t}=(w_{1,t},\ldots)\) be the weight vectors from the Loss Update and Mixing update procedures._
_For any \(t\) and \(0\leq s<t\) such that \(\beta_{s}^{t}>0\), and for any comparison vector \(\mathbf{q}_{t}\),_
\[m_{t}\leq(\mathbf{q}_{t}\cdot\mathbf{l}_{t})+\frac{1}{\eta}(D( \mathbf{q}_{t}\|\mathbf{w}_{t})-D(\mathbf{q}_{t}\|\tilde{\mathbf{w}}_{t}))\leq \tag{4}\] \[\leq(\mathbf{q}_{t}\cdot\mathbf{l}_{t})+\frac{1}{\eta}\left(D( \mathbf{q}_{t}\|\tilde{\mathbf{w}}_{s})-D(\mathbf{q}_{t}\|\tilde{\mathbf{w}}_{ t})+\ln\frac{1}{\beta_{s}^{t}}\right). \tag{5}\]
_Proof:_ Due to (3),
\[m_{t}=-\frac{1}{\eta}\ln\sum_{i\in\mathcal{N}}w_{i,t}e^{-\eta l_{i,t}}\leq\sum_{i\in\mathcal{N}}q_{i,t}\left(-\frac{1}{\eta}\ln\sum_{j\in\mathcal{N}}w_{j,t}e^{-\eta l_{j,t}}\right)=\] \[=\sum_{i\in\mathcal{N}}q_{i,t}\left(l_{i,t}+\frac{1}{\eta}\ln e^{-\eta l_{i,t}}-\frac{1}{\eta}\ln\sum_{j\in\mathcal{N}}w_{j,t}e^{-\eta l_{j,t}}\right)=\] \[=\sum_{i\in\mathcal{N}}q_{i,t}l_{i,t}+\frac{1}{\eta}(D(\mathbf{q}_{t}\|\mathbf{w}_{t})-D(\mathbf{q}_{t}\|\tilde{\mathbf{w}}_{t})).\]
In the last equality we used that, by the definition of the Loss Update, \(\frac{1}{\eta}\ln e^{-\eta l_{i,t}}-\frac{1}{\eta}\ln\sum_{j\in\mathcal{N}}w_{j,t}e^{-\eta l_{j,t}}=\frac{1}{\eta}\ln\frac{\tilde{w}_{i,t}}{w_{i,t}}\), and that \(\sum_{i\in\mathcal{N}}q_{i,t}\ln\frac{\tilde{w}_{i,t}}{w_{i,t}}=D(\mathbf{q}_{t}\|\mathbf{w}_{t})-D(\mathbf{q}_{t}\|\tilde{\mathbf{w}}_{t})\). The inequality (5) follows from (4) by Corollary 1. \(\Box\)
Let's apply Theorem 1 for the mixing schemes from Examples 1 and 2.
**Corollary 2**: _For the mixing scheme \(\beta^{t+1}\) from Example 1, where \(\beta^{t+1}_{t}=1\), and \(\beta^{t+1}_{s}=0\) for all \(0\leq s<t\),_
\[M_{T}\leq\sum_{t=1}^{T}(\mathbf{q}\cdot\mathbf{l}_{t})+\frac{1}{\eta}D( \mathbf{q}\|\mathbf{w}_{1}). \tag{6}\]
_for any \(T\) and for any comparison vector \(\mathbf{q}\)._
_Proof._ Summing up the inequality (4) with a constant comparison vector: \(\mathbf{q}_{t}=\mathbf{q}\) for \(t=1,\ldots T\), we obtain
\[M_{T}\leq\sum_{t=1}^{T}(\mathbf{q}\cdot\mathbf{l}_{t})+\frac{1}{\eta}\sum_{t= 1}^{T}(D(\mathbf{q}\|\mathbf{w}_{t})-D(\mathbf{q}\|\tilde{\mathbf{w}}_{t}))=\]
\[\sum_{t=1}^{T}(\mathbf{q}\cdot\mathbf{l}_{t})+\frac{1}{\eta}(D(\mathbf{q}\| \mathbf{w}_{1})-D(\mathbf{q}\|\tilde{\mathbf{w}}_{T}))\leq\sum_{t=1}^{T}( \mathbf{q}\cdot\mathbf{l}_{t})+\frac{1}{\eta}D(\mathbf{q}\|\mathbf{w}_{1}). \tag{7}\]
Here, when passing from the first line to the second, we use the equality \(\mathbf{w}_{t}=\tilde{\mathbf{w}}_{t-1}\), which holds for the mixing scheme of this example: neighboring terms cancel and only the first and last terms remain. Inequality (7) is satisfied since \(D(\mathbf{q}\|\tilde{\mathbf{w}}_{T})\geq 0\). \(\Box\)
Let's estimate the losses for the mixing scheme from Example 2.
**Theorem 2**: _Suppose that the comparison vector \(q_{t}\) changes \(k\) times for \(t=1,\ldots,T\): \(k=|\{t:1\leq t\leq T,\mathbf{q}_{t}\neq\mathbf{q}_{t-1}\}|\). Let \(0<t_{1}<t_{2}<\ldots<t_{k}\) be the steps at which changes occur, i.e. \(\mathbf{q}_{t_{j}}\neq\mathbf{q}_{t_{j}-1}\) and \(\mathbf{q}_{t}=\mathbf{q}_{t-1}\) for all other steps \(t\), \(t>1\). We set \(t_{0}=1\) and \(t_{k+1}=T+1\)._
_For the mixing scheme from example 2, i.e. at \(\beta_{t}^{t+1}=1-\alpha\), \(\beta_{0}^{t+1}=\alpha\), \(\beta_{s}^{t+1}=0\), for \(0<s<t\),_
\[M_{T}\leq\sum_{t=1}^{T}({\bf q}_{t}\cdot{\bf l}_{t})+\frac{1}{ \eta}\sum_{j=0}^{k}\left(D({\bf q}_{t_{j}}\|{\bf w}_{1})-D({\bf q}_{t_{j}}\|{ \tilde{\bf w}}_{t_{j+1}-1})\right)+\] \[\frac{1}{\eta}(k+1)\ln\frac{1}{\alpha}+\frac{1}{\eta}(T-k-1)\ln \frac{1}{1-\alpha}. \tag{8}\]
_Proof._ Apply Theorem 1 to the distribution \(\beta^{t+1}\). Recall that \(w_{i,1}=\tilde{w}_{i,0}=\frac{1}{c(i+1)\ln^{2}(i+1)}\) for all \(i\). For any sequence of \(T\) comparison vectors \({\bf q}_{t}\) with \(k\) changes,
\[M_{T}\leq\sum_{t=1}^{T}({\bf q}_{t}\cdot{\bf l}_{t})+\frac{1}{ \eta}\sum_{j=0}^{k}\left(D({\bf q}_{t_{j}}\|{\bf w}_{1})-D({\bf q}_{t_{j}}\|{ \tilde{\bf w}}_{t_{j+1}-1})\right)+\] \[+\sum_{j=1}^{k}D({\bf q}_{t_{j}}\|{\tilde{\bf w}}_{0})+\] \[+\frac{1}{\eta}(k+1)\ln\frac{1}{\alpha}+\frac{1}{\eta}(T-k-1)\ln \frac{1}{1-\alpha}. \tag{9}\]
Let us apply at each step \(t\) the inequality (5) from Theorem 1 for a suitable \(s\): For \(t=1\) we put \(s=0\), while \(\beta_{0}^{1}=1\). We get
\[m_{1}\leq({\bf q}_{1}\cdot{\bf l}_{1})+\frac{1}{\eta}\left(D({\bf q}_{1}\|{ \tilde{\bf w}}_{0})-D({\bf q}_{1}\|{\tilde{\bf w}}_{1})\right).\]
For those steps \(t\), where the comparison vector did not change, i.e. \({\bf q}_{t}={\bf q}_{t-1}\), we set \(s=t-1\) and use the property \(\beta_{t-1}^{t}=1-\alpha\) of the mixing scheme, i.e.,
\[m_{t}\leq({\bf q}_{t}\cdot{\bf l}_{t})+\frac{1}{\eta}\left(D({\bf q}_{t}\|{\tilde{\bf w}}_{t-1})-D({\bf q}_{t}\|{\tilde{\bf w}}_{t})\right)+\frac{1}{\eta}\ln\frac{1}{1-\alpha}.\]
For steps \(t\), where the comparison vector was changed, \(t=t_{1},\ldots,t_{k}\), we set \(\beta_{0}^{t_{j}}=\alpha\) (for \(s=0\)), i.e.,
\[m_{t_{j}}\leq({\bf q}_{t_{j}}\cdot{\bf l}_{t_{j}})+\frac{1}{\eta}\left(D({\bf q}_{t_{j}}\|{\tilde{\bf w}}_{0})-D({\bf q}_{t_{j}}\|{\tilde{\bf w}}_{t_{j}})\right)+\frac{1}{\eta}\ln\frac{1}{\alpha}.\]
We add up all these inequalities of the three types. Terms of the same magnitude but opposite signs inside each partition interval cancel by telescoping, so that for each interval only the term at its initial point (with a plus sign) and the term at its end point (with a minus sign) remain. In addition, the beginning of each interval contributes an additional term \(\frac{1}{\eta}\ln\frac{1}{\alpha}\), and each step \(t\) with \({\bf q}_{t}={\bf q}_{t-1}\) contributes the additional term \(\frac{1}{\eta}\ln\frac{1}{1-\alpha}\). There are \(k+1\) additional terms of the first type and \(T-k-1\) of the second type. The sum \(\sum_{j=1}^{k}D({\bf q}_{t_{j}}\|{\tilde{\bf w}}_{0})\) also remains. As a result, we obtain (9). \(\square\)
Let the comparison vectors \({\bf q}_{t}\) have the form \({\bf q}_{t}=(0,\dots,0,1,0,\dots)\), where the \(i\)-th coordinate is \(1\) and the remaining coordinates are equal to \(0\). In this case, at step \(t\) we compare the loss of the algorithm with the loss of only the \(i\)-th expert. In this case, \(D({\bf q}_{t_{j}}\|\tilde{\bf w}_{0})=\ln(c(i+1)\ln^{2}(i+1))\leq\ln c+\ln(i+1)+2\ln\ln(i+1)\).5
Footnote 5: Recall that \(c=\sum_{i\in\mathcal{N}}\frac{1}{(i+1)\ln^{2}(i+1)}\).
An arbitrary set \(E\) of experts \(i_{0},i_{1},\dots,i_{k}\), together with a set of intervals \([t_{j-1},t_{j})\), \(j=1,\dots,k\), will be called a composite expert, and its constituent experts will be called elementary. The total loss of the elementary expert \(i_{j}\) on the interval \([t_{j-1},t_{j})\) is \(L_{([t_{j-1},t_{j}))}=\sum_{t_{j-1}\leq t<t_{j}}l_{i_{j},t}\), so the total loss of the composite expert \(E\) over the entire time interval \([1,T)\) is \(\sum_{j=1}^{k}L_{([t_{j-1},t_{j}))}\).
Let's set these losses with the help of comparison vectors. Consider a sequence of comparison vectors \({\bf q}_{1},\dots,{\bf q}_{T}\) such that \({\bf q}_{t}=(0,\dots,1,\dots,0)\), where the \(i_{j}\)-th coordinate is equal to \(1\) for \(t_{j-1}\leq t<t_{j}\) and it is equal to \(0\), otherwise:
\[q_{i_{j},t}=\left\{\begin{array}{ll}1,&\mbox{ if }t_{j-1}\leq t<t_{j},\\ 0,&\mbox{ otherwise.}\end{array}\right.\]
Then the total losses \(L_{T}(E)\) of the composite expert \(E\) on the entire interval \([0,T]\) can be represented as
\[L_{T}(E)=\sum_{t=0}^{T}({\bf q}_{t}\cdot{\bf l}_{t})=\sum_{j=1}^{k}\sum_{t:\,t_{j-1}\leq t<t_{j}}q_{i_{j},t}l_{i_{j},t}=\sum_{j=1}^{k}L_{([t_{j-1},t_{j}))}.\]
From Theorem 2 we obtain an inequality relating the cumulative exponentially mixed loss and total losses of an arbitrary composite Expert.
**Corollary 3**: _For any composite expert \(E\) consisting of \(k\) elementary experts, the inequality_
\[M_{T}\leq L_{T}(E)+\frac{1}{\eta}(k+1)(\ln(T+1)+2\ln\ln(T+1)+\ln c )+\] \[\frac{1}{\eta}(k+1)\ln\frac{1}{\alpha}+\frac{1}{\eta}(T-k-1)\ln \frac{1}{1-\alpha}. \tag{10}\]
_Proof._ We use the bound (8). Since \(D({\bf q}_{t_{j}}\|\tilde{\bf w}_{0})\leq\ln\frac{1}{w_{i_{j},1}}\leq\ln(i_{j}+1)+2\ln\ln(i_{j}+1)+\ln c\) for each \(j\), we get \(\sum_{j=1}^{k}D({\bf q}_{t_{j}}\|\tilde{\bf w}_{0})\leq k(\ln(i_{j}+1)+2\ln\ln(i_{j}+1)+\ln c)\). From this and from (8) we obtain the bound (10). \(\Box\)
### Aggregating Algorithm AA
The Aggregating Algorithm (**AA**) proposed in Vovk (1990), Vovk (1998) is the basic method for calculating the Predictor's predictions in this work. Let \({\bf f}=(f_{1},f_{2},\dots)\) be the forecasts of the experts \(i\in{\cal N}\), and let \({\bf p}=(p_{i}:i\in{\cal N})\) be a probability
distribution on the set \(\mathcal{N}\) of all experts.6 The superprediction function is defined as
Footnote 6: That is, \(p_{i}\geq 0\) for all \(i\) and \(\sum\limits_{i\in\mathcal{N}}p_{i}=1\).
\[g(y)=-\frac{1}{\eta}\ln\sum\limits_{i\in\mathcal{N}}e^{-\eta \lambda(f_{i},y)}p_{i}\]
for arbitrary \(y\), where \(\eta>0\) is the learning rate Vovk (1998).7
Footnote 7: The series on the right side of (11) converges, since the loss function takes non-negative values.
A loss function \(\lambda\) is said to be \(\eta\)-mixable if for any probability distribution \(\mathbf{p}\) on a set of experts and for any set of expert predictions \(\mathbf{f}\) there exists a prediction \(\gamma\in\Gamma\) which satisfies the inequality
\[\lambda(\gamma,y)\leq g(y) \tag{11}\]
for all \(y\).
We fix some rule \(\gamma=\mathrm{Subst}(\mathbf{f},\mathbf{p})\) for computing the prediction \(\gamma\), satisfying (11).
The Subst function is called the substitution function.
In what follows, we will use the square loss function \(\lambda(\gamma,y)=(y-\gamma)^{2}\), where \(y\) and \(\gamma\) are real numbers. We assume that \(y\in[a,b]\), where \(a<b\) are real numbers.
In Vovk (1998) and Vovk (2001) it is proved that the square loss function is \(\eta\)-mixable for every \(\eta\) such that \(0<\eta\leq\frac{2}{(b-a)^{2}}\), and the corresponding prediction is
\[\gamma=\mathrm{Subst}(\mathbf{f},\mathbf{p})=\frac{a+b}{2}+\frac {1}{2\eta(b-a)}\ln\frac{\sum\limits_{i\in\mathcal{N}}p_{i}e^{-\eta(b-f_{i})^{ 2}}}{\sum\limits_{i\in\mathcal{N}}p_{i}e^{-\eta(a-f_{i})^{2}}}. \tag{12}\]
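As an illustration of how the rule (12) can be evaluated in practice, the following is a minimal C++ sketch, assuming the forecasts and a normalized weight vector are stored in `std::vector`; the function name `aa_substitute` is ours and does not come from any released code.

```cpp
// Minimal sketch of the AA substitution rule (12) for the square loss on [a,b].
#include <cmath>
#include <cstddef>
#include <vector>

// Given expert forecasts f[i], a probability vector p[i] and the learning rate
// eta = 2/(b-a)^2, return the Predictor's forecast gamma of eq. (12).
double aa_substitute(const std::vector<double>& f,
                     const std::vector<double>& p,
                     double a, double b, double eta)
{
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i) {
        num += p[i] * std::exp(-eta * (b - f[i]) * (b - f[i]));
        den += p[i] * std::exp(-eta * (a - f[i]) * (a - f[i]));
    }
    return 0.5 * (a + b) + std::log(num / den) / (2.0 * eta * (b - a));
}
```

For example, for two experts forecasting \(0.2\) and \(0.8\) with equal weights on \([0,1]\) and \(\eta=2\), the rule returns \(\gamma=0.5\), as expected by symmetry.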
**Theorem 3**: _(Vovk 1998 and V'yugin 2022) Suppose that the loss function \(\lambda(f,y)\) is \(\eta\)-mixable for some \(\eta>0\). Let \(H_{T}\) be the total loss of the Predictor, and \(L_{i,T}\) be the total loss of Expert \(i\). Then for each \(T\) the inequality \(H_{T}\leq M_{T}\leq L_{i,T}+\frac{1}{\eta}\ln\frac{1}{w_{i,1}}\) holds._
_Proof_. According to (11)
\[h_{t}=\lambda(f_{t},y_{t})\leq g_{t}(y_{t})=m_{t} \tag{13}\]
for every \(t\). We sum the inequalities (13) over \(t=1,\ldots,T\) and get \(H_{T}\leq M_{T}\). Hence, by Proposition 1, for any \(i\) and all \(T\) we have \(H_{T}\leq L_{i,T}+\frac{1}{\eta}\ln\frac{1}{w_{i,1}}\). \(\Box\)
## 3 Algorithm for tracking of subsample generators
In this section, we present a prediction algorithm - **GMPP**.
Let us first motivate the method underlying the algorithm.
The general scheme of the online learning process is as follows. At each step \(t\) one observes a signal \(\mathbf{x}_{t}\). The expert strategies built at steps \(1,2,\dots,t\) present their predictions of the response. For simplicity, we assume that \(\mathbf{x}_{t}\in\mathcal{R}^{n}\) and \(y_{t}\in\mathcal{R}\). The Predictor also presents its prediction. After that, the corresponding generator \(G\) produces the true response \(y_{t}=G(\mathbf{x}_{t})\), and the experts and the Predictor incur losses determined by the difference between their predictions and the response.
There are \(k+1\) generators that transform the signal \(\mathbf{x}_{t}\) into the response \(y_{t}\). The time interval \([0,T]\) is divided into subintervals, on each of which one of these generators produces the responses. At each time \(t\), neither the experts nor the Predictor knows the number of generators, nor which of the generators produces the response.
The described generation model creates a sample \((\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\dots\) which is divided into subsamples, in each of which the responses \(y_{t}\) are produced by one of the generators.
We assume that there is a learning method with the help of which, at any time \(t\), one can build a local predictive model (expert) from a subsample (a window into the past).8
Footnote 8: A window into the past at time \(t\) is understood as a subsample \((\mathbf{x}_{t-1},y_{t-1},\dots,\mathbf{x}_{t-h},y_{t-h})\), where \(h>0\) is a parameter (window size). As such a method, the ridge regression method will be used. In section 3.1, the prognostic model (Expert) will be given by the regression equation \(y=(\mathbf{a}\cdot\mathbf{x})\), where \(\mathbf{a}\in\mathcal{R}^{n}\).
At each step \(t>h\), a prognostic model is built (initialized): a function \(f_{t}(\mathbf{x})\) determined by the previously observed members of the time series, i.e., by a window into the past
\[(\mathbf{x}_{t-1},y_{t-1},\dots,\mathbf{x}_{t-h},y_{t-h}),\]
where \(h\) is a parameter (window size).
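For concreteness, here is a small self-contained C++ sketch of one way such an expert could be initialized by ridge regression over the window, assuming the window is given as a dense matrix of signals and a vector of responses; it solves the normal equations \((X^{\top}X+\lambda I)\mathbf{a}=X^{\top}\mathbf{y}\) by naive Gaussian elimination and is illustrative only, not the authors' implementation.

```cpp
// Sketch: initialize an expert f_t(x) = a . x by ridge regression over the
// window (x_{t-h}, y_{t-h}), ..., (x_{t-1}, y_{t-1}).
#include <cstddef>
#include <vector>

std::vector<double> ridge_expert(const std::vector<std::vector<double>>& X, // h rows, n columns
                                 const std::vector<double>& y,              // h responses
                                 double lambda)                             // ridge parameter
{
    const std::size_t n = X.empty() ? 0 : X[0].size();
    // Build A = X^T X + lambda I and b = X^T y.
    std::vector<std::vector<double>> A(n, std::vector<double>(n, 0.0));
    std::vector<double> b(n, 0.0);
    for (std::size_t r = 0; r < X.size(); ++r)
        for (std::size_t i = 0; i < n; ++i) {
            b[i] += X[r][i] * y[r];
            for (std::size_t j = 0; j < n; ++j) A[i][j] += X[r][i] * X[r][j];
        }
    for (std::size_t i = 0; i < n; ++i) A[i][i] += lambda;
    // Gaussian elimination without pivoting (acceptable for a regularized system).
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t i = k + 1; i < n; ++i) {
            const double m = A[i][k] / A[k][k];
            for (std::size_t j = k; j < n; ++j) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    // Back substitution gives the weight vector a of the regression equation.
    std::vector<double> a(n, 0.0);
    for (std::size_t i = n; i-- > 0; ) {
        double s = b[i];
        for (std::size_t j = i + 1; j < n; ++j) s -= A[i][j] * a[j];
        a[i] = s / A[i][i];
    }
    return a;
}
```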
Thus, at each step \(t\) there is a collection of predictive strategies (models) \(\mathbf{f}_{t}=(f_{1},\dots,f_{t-1})\) constructed at the previous steps and the predictive function \(f_{t}\) constructed at the step \(t\).
The forecast of Expert \(i\leq t\) at step \(t\) is \(f_{i,t}=f_{i}(\mathbf{x}_{t})\), where \(\mathbf{x}_{t}\) is the signal at step \(t\).
The Predictor's forecast \(\gamma_{t}\) is calculated according to the rule (12). At steps \(t<i\), when Expert \(i\) has not yet been initialized, we introduce a virtual forecast: we assume that its forecast equals the forecast \(\gamma_{t}\) of the Predictor (the aggregating algorithm).
This definition contains a logical circularity, since the prediction \(\gamma_{t}\) of the Predictor is defined by aggregating the forecasts of all experts, including the experts \(i>t\). This contradiction is resolved using the fixed point method proposed in Chernov and Vovk (2009) as follows. Assume that the Predictor's forecast \(\gamma_{t}\) is known to the experts. Define the forecasts of the experts \(i=1,2,\dots\) at step \(t\):
\[f_{i,t}=\left\{\begin{array}{l}f_{i}(\mathbf{x}_{t}),\mbox{ if }i\leq t,\\ \gamma_{t},\mbox{ if }i>t.\end{array}\right.\]
The loss of the aggregation algorithm is \(h_{t}=\lambda(\gamma_{t},y_{t})\), and the loss of any expert \(i\) is \(l_{i,t}=\lambda(f_{i,t},y_{t})\) for \(i\leq t\) and \(l_{i,t}=h_{t}\) for \(i>t\). The forecast \(\gamma_{t}\) should satisfy the condition
\[\lambda(\gamma_{t},y)\leq g_{t}(y), \tag{14}\]
or, equivalently, the condition
\[e^{-\eta\lambda(\gamma_{t},y)}\geq\sum_{i\in\mathcal{N}}e^{-\eta\lambda(f_{i, t},y)}w_{i,t} \tag{15}\]
has to be satisfied for every \(y\).
Let's replace the condition (15) with an equivalent condition under which the summation is performed over a finite set of experts. Since \(f_{i,t}=f_{i}(\mathbf{x}_{t})\) for \(i\leq t\) and \(f_{i,t}=\gamma_{t}\) for \(i>t\), we present the condition (15) for \(\gamma_{t}\) in a more detailed form:
\[e^{-\eta\lambda(\gamma_{t},y)}\geq\sum_{i=1}^{t}w_{i,t}e^{-\eta\lambda(f_{i,t},y)}+e^{-\eta\lambda(\gamma_{t},y)}\left(1-\sum_{i=1}^{t}w_{i,t}\right). \tag{16}\]
Thus, the inequality (15) is equivalent to the inequality
\[e^{-\eta\lambda(\gamma_{t},y)}\geq\sum_{i=1}^{t}w_{i,t}^{p}e^{-\eta\lambda(f_ {i,t},y)}, \tag{17}\]
where
\[w_{i,t}^{p}=\frac{w_{i,t}}{\sum_{j=1}^{t}w_{j,t}}. \tag{18}\]
According to the rule (12) for \(\mathbf{AA}\), we define
\[\gamma_{t}=\mathrm{Subst}(\mathbf{f}_{t},\mathbf{w}_{t}^{p}), \tag{19}\]
where \(\mathrm{Subst}\) is the substitution function for the loss function used,9 where \(\mathbf{w}_{t}^{p}=(w_{1,t}^{p},\ldots,w_{t,t}^{p})\) and \(\mathbf{f}_{t}=(f_{1}(\mathbf{x}_{t}),\ldots,f_{t}(\mathbf{x}_{t}))\).
Footnote 9: For example, for a quadratic loss function, the substitution function is defined according to (12).
From this definition, for \(y=y_{t}\) we have
\[h_{t}=\lambda(\gamma_{t},y_{t})\leq g_{t}(y_{t})=m_{t},\]
where \(m_{t}\) is the exponentially mixed loss. We sum this inequality over \(t=1,\ldots,T\) and get \(H_{T}\leq M_{T}\).
The expert weights are updated in two stages as follows:
**Loss Update**
\[\tilde{w}_{i,t}=\frac{w_{i,t}e^{-\eta l_{i,t}}}{\sum\limits_{j\in\mathcal{N}}w _{j,t}e^{-\eta l_{j,t}}} \tag{20}\]
for \(i\in\mathcal{N}\).
**Mixing Update**
\[w_{i,t+1}=\alpha_{t}\tilde{w}_{i,1}+(1-\alpha_{t})\tilde{w}_{i,t} \tag{21}\]
for \(i\in\mathcal{N}\), where \(\alpha_{t}\) is a parameter, \(0<\alpha_{t}<1\).
From the definition (20) it follows that
\[\sum_{j\in\mathcal{N}}\tilde{w}_{j,t}=1\]
for each \(t\). It follows from (21) that
\[\sum_{j\in\mathcal{N}}w_{j,t+1}=1\]
for any \(t\).
Recall that the Predictor's loss is \(h_{t}=\lambda(\gamma_{t},y_{t})\), and the experts' losses are \(l_{i,t}=\lambda(f_{i,t},y_{t})\) for \(i\leq t\) and \(l_{i,t}=h_{t}\) for \(i>t\). Using these equalities, we represent the sum
in the denominator of (20) in a computationally efficient form:
\[\sum_{j\in\mathcal{N}}w_{j,t}e^{-\eta l_{j,t}} =\sum_{j\leq t}w_{j,t}e^{-\eta l_{j,t}}+\sum_{j>t}w_{j,t}e^{-\eta h_{t}}\] \[=\sum_{j\leq t}w_{j,t}e^{-\eta l_{j,t}}+e^{-\eta h_{t}}\sum_{j>t}w_{j,t}\] \[=\sum_{j\leq t}w_{j,t}e^{-\eta l_{j,t}}+e^{-\eta h_{t}}\Big{(}1-\sum_{j\leq t}w_{j,t}\Big{)}.\]
Therefore, the **Loss Update** part (20) is replaced with the following definition:
\[\tilde{w}_{i,t}=\frac{w_{i,t}e^{-\eta l_{i,t}}}{\sum\limits_{j=1}^{t}w_{j,t}e ^{-\eta l_{j,t}}+e^{-\eta h_{t}}(1-\sum\limits_{j=1}^{t}w_{j,t})}, \tag{22}\]
and the **Mixing update** part is still (21).
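A compact C++ sketch of this two-stage update is given below; it keeps explicit weights only for the \(t\) experts initialized so far and accounts for the not-yet-initialized (virtual) experts through the remaining mass \(1-\sum_{j\leq t}w_{j,t}\). The mixing anchor \(\tilde{w}_{i,1}\) of (21) is passed in as a parameter, and all names are illustrative.

```cpp
// Two-stage weight update: Loss Update (22) followed by Mixing Update (21).
#include <cmath>
#include <cstddef>
#include <vector>

void gmpp_weight_update(std::vector<double>& w,               // in/out: w_{i,t} -> w_{i,t+1}, i = 1..t
                        const std::vector<double>& loss,      // l_{i,t} for the initialized experts
                        const std::vector<double>& w_tilde_1, // \tilde{w}_{i,1}, the mixing anchor of (21)
                        double h_t,                           // Predictor's loss at step t
                        double eta, double alpha_t)
{
    double active_mass = 0.0;
    for (double wi : w) active_mass += wi;

    // Denominator of (22): initialized experts plus the virtual ones, whose loss is h_t.
    double denom = std::exp(-eta * h_t) * (1.0 - active_mass);
    for (std::size_t i = 0; i < w.size(); ++i)
        denom += w[i] * std::exp(-eta * loss[i]);

    for (std::size_t i = 0; i < w.size(); ++i) {
        const double w_tilde = w[i] * std::exp(-eta * loss[i]) / denom;  // Loss Update (22)
        w[i] = alpha_t * w_tilde_1[i] + (1.0 - alpha_t) * w_tilde;       // Mixing Update (21)
    }
}
```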
Let us present the protocol of the **GMPP** algorithm. We first set the parameters \(\eta\) and \(\alpha_{t}\), where \(0<\alpha_{t}<1\) for \(t=1,2,\dots\), and \(\eta>0\).10 We put \(\alpha_{t}=\frac{1}{t+1}\) for all \(t\).
Footnote 10: For the square loss function, we set \(\eta=\frac{2}{(b-a)^{2}}\), where \(y_{t}\in[a,b]\).
**Algorithm GMPP**
Define the initial weights \(w_{i,1}=\tilde{w}_{i,0}\) of the experts such that \(\sum_{i\in\mathcal{N}}w_{i,1}=1\).11
Footnote 11: For example, \(w_{i,1}=\tilde{w}_{i,0}=\frac{1}{c(i+1)\ln^{2}(i+1)}\) for \(i=1,2,\dots,\) where \(c=\sum_{i\in\mathcal{N}}\frac{1}{(i+1)\ln^{2}(i+1)}\), \(\frac{1}{\ln 3}<c<\frac{1}{\ln 2}\).
**FOR**\(t=1,\dots,T\)
1. Experts \(f_{1}(\cdot),\dots,f_{t-1}(\cdot)\) have been initialized in the previous steps. Initialize the Expert \(f_{t}(\cdot)\).12 Footnote 12: In the case when the regression problem is being solved, initialization means that we use the data from the past to determine the weight vector \(\mathbf{a}_{t}\) of the regression equation \(f_{t}(\mathbf{x})=(\mathbf{a}_{t}\cdot\mathbf{x})\).
3. We receive the signal \(\mathbf{x}_{t}\).
4. Calculate expert forecasts \(f_{i,t}=f_{i}(\mathbf{x}_{t})\) for \(1\leq i\leq t\).
5. Calculate the auxiliary weights of the experts \(1\leq i\leq t\): \[w_{i,t}^{p}=\frac{w_{i,t}}{\sum_{j=1}^{t}w_{j,t}}.\] (23)
6. Calculate the Predictor's forecast according to the rule (12): \[\gamma_{t}=\mathrm{Subst}(\mathbf{f}_{t},\mathbf{w}_{t}^{p}),\]
7. We receive (from the generator) the true value of the response (label) \(y_{t}\) and calculate the Predictor's loss \(h_{t}=\lambda(\gamma_{t},y_{t})\) and the experts' losses: \[l_{i,t}=\left\{\begin{array}{l}\lambda(f_{i,t},y_{t})\text{ if }i\leq t,\\ h_{t}\text{ if }i>t.\end{array}\right.\]
8. We update the weights of the experts \(1\leq i\leq T\) in two stages:13 Footnote 13: Thus, it is assumed that the prediction horizon \(T\) is given to Predictor as a parameter. **Loss Update** \[\tilde{w}_{i,t}=\frac{w_{i,t}e^{-\eta l_{i,t}}}{\sum\limits_{j=1}^{t}w_{j,t}e^ {-\eta l_{j,t}}+e^{-\eta h_{t}}(1-\sum\limits_{j=1}^{t}w_{j,t})}.\] (24)
**Mixing Update** \[w_{i,t+1}=\alpha_{t}\tilde{w}_{i,1}+(1-\alpha_{t})\tilde{w}_{i,t}\] (25)
**ENDFOR**
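To show how the steps of the protocol fit together, here is a compact, self-contained C++ sketch of the whole loop. The "experts" are deliberately trivial stand-ins (expert \(t\) always predicts the response seen when it was initialized, or the midpoint of \([a,b]\) for the first one), whereas in the paper they are ridge-regression models built over a window into the past; the helper `subst` implements the rule (12), and all names are ours.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Substitution rule (12) for the square loss on [a,b].
static double subst(const std::vector<double>& f, const std::vector<double>& p,
                    double a, double b, double eta)
{
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i) {
        num += p[i] * std::exp(-eta * (b - f[i]) * (b - f[i]));
        den += p[i] * std::exp(-eta * (a - f[i]) * (a - f[i]));
    }
    return 0.5 * (a + b) + std::log(num / den) / (2.0 * eta * (b - a));
}

// One pass of the GMPP protocol over responses y[0..T-1] contained in [a,b].
void gmpp_run(const std::vector<double>& y, double a, double b)
{
    const std::size_t T = y.size();
    const double eta = 2.0 / ((b - a) * (b - a));

    // Initial weights w_{i,1} ~ 1/((i+1) ln^2(i+1)), normalized over i = 1..T (footnote 11).
    std::vector<double> w(T), w_tilde_1(T);
    double c = 0.0;
    for (std::size_t i = 0; i < T; ++i) c += 1.0 / ((i + 2) * std::pow(std::log(i + 2.0), 2));
    for (std::size_t i = 0; i < T; ++i) w[i] = 1.0 / (c * (i + 2) * std::pow(std::log(i + 2.0), 2));
    w_tilde_1 = w;   // placeholder until \tilde{w}_{i,1} is computed at the first step

    std::vector<double> f;   // forecasts of the initialized experts
    for (std::size_t t = 0; t < T; ++t) {
        f.push_back(t == 0 ? 0.5 * (a + b) : y[t - 1]);   // step 1: initialize expert t (stand-in model)

        // Steps 4-6: auxiliary weights (23) and the Predictor's forecast via (12).
        double mass = 0.0;
        for (std::size_t i = 0; i <= t; ++i) mass += w[i];
        std::vector<double> p(t + 1);
        for (std::size_t i = 0; i <= t; ++i) p[i] = w[i] / mass;
        const double gamma = subst(f, p, a, b, eta);

        // Step 7: the Predictor's loss; virtual experts (i > t) incur the same loss.
        const double h_t = (gamma - y[t]) * (gamma - y[t]);

        // Step 8: Loss Update (24) and Mixing Update (25); alpha_t = 1/(t+1) in the paper's 1-based indexing.
        const double alpha = 1.0 / (t + 2.0);
        double denom = std::exp(-eta * h_t) * (1.0 - mass);
        for (std::size_t i = 0; i <= t; ++i)
            denom += w[i] * std::exp(-eta * (f[i] - y[t]) * (f[i] - y[t]));
        std::vector<double> w_tilde(T);
        for (std::size_t i = 0; i < T; ++i) {
            const double l = (i <= t) ? (f[i] - y[t]) * (f[i] - y[t]) : h_t;
            w_tilde[i] = w[i] * std::exp(-eta * l) / denom;
        }
        if (t == 0) w_tilde_1 = w_tilde;   // record \tilde{w}_{i,1}
        for (std::size_t i = 0; i < T; ++i)
            w[i] = alpha * w_tilde_1[i] + (1.0 - alpha) * w_tilde[i];
    }
}
```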
A bound of the efficiency of the **GMPP** algorithm is presented in the following theorem.
**Theorem 4**: _Let \(\alpha_{t}=\frac{1}{t+1}\) for all \(t\). For any composite expert \(E\) consisting of \(k+1\) elementary experts, the following bound holds:_
\[M_{T}\leq L_{T}(E)+\frac{1}{\eta}(k+1)\bigl(\ln(T+1)+2\ln\ln(T+1)+\ln c\bigr)+\frac{1}{\eta}(k+1)\ln T+\frac{1}{\eta}\ln(T-k-1) \tag{26}\]
_for all \(T\), where \(L_{T}(E)\) is the total loss of the composite Expert. In addition, \(H_{T}\leq M_{T}\). From the bound (26) it follows that_
\[\limsup_{T\to\infty}\frac{1}{T}(H_{T}-L_{T}(E))=0.\]
_Proof._ Let us refine the derivation of the bound (10) of Corollary 3 in the case \(\alpha_{t}=\frac{1}{t+1}\). The regret bound consists of three sums.
The first sum is \(\sum_{j=1}^{k}D({\bf q}_{t_{j}}\|\tilde{\bf w}_{0})\). Since \(i_{j}\leq T\) for all \(j\) and \({\bf q}_{t_{j}}\) is a unit vector, we have \(D({\bf q}_{t_{j}}\|\tilde{\bf w}_{0})\leq\ln(T+1)+2\ln\ln(T+1)+\ln c\), so the first sum is bounded by \(\frac{1}{\eta}(k+1)(\ln(T+1)+2\ln\ln(T+1)+\ln c)\).
The second sum is
\[\frac{1}{\eta}\sum_{t=1}^{k+1}\ln\frac{1}{t+1}.\]
The third sum is
\[\frac{1}{\eta}\sum_{t=1}^{T-k}\ln\frac{1}{1-\frac{1}{t+1}}=\frac{1}{\eta}\sum_{t=1}^{T-k-1}\bigl(\ln(t+1)-\ln t\bigr).\]
We bound the second sum by \(\frac{1}{\eta}(k+1)\ln T\) and the third sum by \(\frac{1}{\eta}\ln(T-k-1)\). Using these considerations and the bound (10), we obtain the bound (26). \(\Box\)
Let the intervals \([t_{0},t_{1}),\ldots,[t_{j-1},t_{j}),\ldots,[t_{k-1},t_{k})\) define the data regions generated by the corresponding generators. Let us introduce a composite Expert \(E\) consisting of elementary experts \(i_{1},\ldots,i_{k}\) and the corresponding intervals, where for each \(j\leq k\) the elementary expert \(i_{j}\) is one that was initialized at some step \(\leq t_{j}\) and bears the least loss on the interval \([t_{j-1},t_{j})\) among all experts initialized at steps \(\leq t_{j}\). By Theorem 4, the bound (26) holds.
The bound (26) allows us to formulate the main hypothesis underlying the application of the **GMPP** algorithm: if it is possible to "attach" to each local subsample from a generation region a valid predictive (expert) strategy that incurs a small loss on that subsample, i.e., to "learn" the corresponding generator, then the **GMPP** algorithm will also predict with a sufficiently small average (over time) loss on the entire sample.
### Numerical experiments
The time scale \([1,T]\) is divided into \(k=10\) consecutive time intervals \(I_{1},\ldots,I_{k}\), on each of which one of four generators operates. Therefore, on the time interval \([1,T]\) the dependence of \(y_{t}\) on \({\bf x}_{t}\) is switched \(k-1=9\) times.
The number of generators and their parameters are unknown to the Predictor. We use \(e=4\) linear response generators defined by weight vectors \(\hat{\bf a}_{1},\ldots,\hat{\bf a}_{e}\); that is, within the corresponding generation interval \(I_{s}\) the response is \(y_{t}=(\hat{\bf a}_{s}\cdot{\bf x}_{t})+\epsilon\) for \(1\leq s\leq e\), where \(\epsilon\) is standard normal noise.
At each step \(t\), using the ridge regression method over a window into the past \(((\mathbf{x}_{t-h},y_{t-h}),\ldots,(\mathbf{x}_{t-1},y_{t-1}))\), an expert predictive function \(f_{t}({\bf x})=({\bf a}_{t}\cdot{\bf x})\) is constructed.14
Footnote 14: Here \(\mathbf{f}_{t}=(f_{1},\ldots,f_{t})\).
## 4 Conclusion
An online learning algorithm is presented for tracking online generators of subsamples.
An evident drawback of the computational scheme is that at each step \(1\leq t\leq T\) we perform the computationally expensive operations (22) and (25) for each expert \(1\leq i\leq T\) and store the corresponding weights; in practical applications, restrictions on the number of initialized experts were imposed.
|
2301.11854
|
GrGadget: an N-body TreePM relativistic code for cosmological
simulations
|
We present the merging of the Particle-Mesh (PM) relativistic Gevolution code
with the TreePM Gadget-4 code, with the aim of studying general relativity
effects in cosmology. Our code, called GrGadget, is able to track the evolution
of metric perturbations in the weak field limit by using Gevolution's
implementation of a relativistic PM in the Poisson gauge. To achieve this,
starting from Gevolution we have written a C++ library called libgevolution,
that allows a code to access and use the same abstractions and resources that
Gevolution uses for its PM-only N-body simulations. The code works under the
assumption that particle interactions at short distances can be approximated as
Newtonian, so that we can combine the forces computed with a Newtonian Tree
with those computed with a relativistic PM. The result is a TreePM simulation
code that represents metric perturbations at the scales where they are
relevant, while resolving non-linear structures. We validate our code by
closely matching Gadget-4 forces, computed with the Tree switched off, with
those computed with libgevolution in the Newtonian limit. With GrGadget we
obtain a matter power spectrum that is compatible with Newtonian Gadget at
small scales and contains GR features at large scales that are consistent with
results obtained with Gevolution. We demonstrate that, due to the better
resolution of the highly non-linear regime, the representation of the
relativistic fields sampled on the mesh improves with respect to the PM-only
simulations.
|
Eduardo Quintana-Miranda, Pierluigi Monaco, Luca Tornatore
|
2023-01-27T16:59:41Z
|
http://arxiv.org/abs/2301.11854v1
|
# GrGadget: an N-body TreePM relativistic code for cosmological simulations
###### Abstract
We present the merging of the Particle-Mesh (PM) relativistic Gevolution code with the TreePM Gadget-4 code, with the aim of studying general relativity effects in cosmology. Our code, called GrGadget, is able to track the evolution of metric perturbations in the weak field limit by using Gevolution's implementation of a relativistic PM in the Poisson gauge. To achieve this, starting from Gevolution we have written a C++ library called Libgevolution, that allows a code to access and use the same abstractions and resources that Gevolution uses for its PM-only N-body simulations. The code works under the assumption that particle interactions at short distances can be approximated as Newtonian, so that we can combine the forces computed with a Newtonian Tree with those computed with a relativistic PM. The result is a TreePM simulation code that represents metric perturbations at the scales where they are relevant, while resolving non-linear structures. We validate our code by closely matching Gadget-4 forces, computed with the Tree switched off, with those computed with Libgevolution in the Newtonian limit. With GrGadget we obtain a matter power spectrum that is compatible with Newtonian Gadget-4 at small scales and contains GR features at large scales that are consistent with results obtained with Gevolution. We demonstrate that, due to the better resolution of the highly non-linear regime, the representation of the relativistic fields sampled on the mesh improves with respect to the PM-only simulations.
keywords: cosmology: theory - large-scale structure of the Universe
## 1 Introduction
The state of the art of precision cosmology provides a standard cosmological model, \(\Lambda\)CDM, that is consistent with most observational evidence on large scales, but relies on the existence of a dark sector populated by Dark Matter (DM) and Dark Energy (DE). The first is responsible for the formation of cosmological structures such as galaxies and their large-scale density field, while the second causes the observed accelerated expansion of the universe in the present epoch. Their physical nature is an open problem, since the only evidence of their existence comes from their gravitational interaction with visible matter. A possible explanation is that the dark sector is due to a misrepresentation of gravity, that on large scales does not follow Einstein's General Relativity (GR), at the basis of the \(\Lambda\)CDM model.
This fact has triggered a wave of interest in modifications of GR, that can lead to extra terms that explain dark energy or dark matter (see, e.g., Silvestri and Trodden, 2009; Capozziello and De Laurentis, 2012, and references therein). Such modifications must be significant only on large scales or low density, because GR is very accurate in predicting planetary orbits, light deflection and Doppler effects in solar system tests and has more recently been successfully tested with the detection of gravitational waves (Abbott et al., 2016) and the direct imaging of black hole event horizons (Event Horizon Telescope Collaboration et al., 2019).
In order to characterize dark energy in the age of its dominance, many projects have been planned to survey large parts of the sky and probe the large-scale distribution of matter using galaxy clustering and galaxy lensing, both from the ground (DES1, Krause et al., 2017; DESI2, DESI Collaboration et al., 2016; Rubin's LSST3, Ivezic et al., 2019; SKAO4 surveys) and from space (Euclid5, Laureijs et al., 2011; Roman6, Spergel et al., 2015; SphereX7, Dore et al., 2014). Some of these surveys have already started to produce a flood of data that will soon lead to a precise characterization of the galaxy and matter density fields. A comparison of these observations to model predictions, either using summary statistics or field-level inference, will lead to unprecedented tests not only of the cosmological model but also of the gravity theory behind it. With precision being guaranteed by
the amount of available high-quality data, accuracy will be achieved only by rigorous control of systematics, both in the data and in theory predictions.
The highly non-linear nature of the observed density field and the non-locality of gravity make cosmological simulations necessary to compare the predictions of current theories with the observations at an increasing level of accuracy. Yet, most of the widely adopted simulation codes, like e.g. Gadget-4(Springel et al., 2021), use Newtonian dynamics for the evolution of matter perturbations. This is not the ideal configuration to pass from the unobservable distribution of matter in a periodic comoving box to the observable distribution of light in the past light cone. Relativistic corrections can be added _a posteriori_ by post-processing Newtonian simulation outputs; one specific example of this approach is the modeling of lensing due to the distortion of null geodesics (Bartelmann and Schneider, 2001), while a more comprehensive approach to adding relativistic effects is presented by Borzyszkowski et al. (2017). However, even though the biases introduced by this approach are expected to be small, a fully self-consistent approach is necessary to convincingly demonstrate our ability of controlling theory systematics. For instance, galaxy clustering is affected by magnification bias due to lensing, and neglecting this effect induces a non-negligible bias in parameter estimation (Lepori et al., 2020; Alam et al., 2021). This is even more true when modified gravity theories are used: extensions of gravity are typically derived in a full relativistic context, and while they influence the Newtonian limit of gravity, the small but measurable relativistic effects may provide smoking-gun signals of a specific class of gravity theories. In this sense, restricting to the treatment of the Newtonian limit of modified gravity theories (as, e.g., in Puchwein et al., 2013) may leave out crucial observable signatures.
Two examples of fully relativistic N-body codes for the evolution of cosmic perturbations, that integrate Einstein's equations to follow the motion of massive particles along their geodesics, are the Adaptive Mesh Refinement (AMR) code Gramses(Barrera-Hinojosa and Li, 2020) and the Particle-Mesh (PM) code Gevolution(Adamek et al., 2016). These have proven to be precious tools to produce accurate cosmological predictions, like a self-consistent treatment of massive neutrinos (Adamek et al., 2022), and to explore phenomena that were previously overlooked, like the strength of the frame dragging field acting on dark matter haloes (Barrera-Hinojosa et al., 2020). These codes sample the fields in a mesh that fills the simulated volume, but while Gramses uses an AMR scheme to increase resolution only where it is needed, PM schemes working on a single non-adaptive mesh are well known to be limited by memory, so they are unable to achieve the large dynamic range required, e.g., to resolve DM halos in large cosmological volumes. The integration of Newtonian particle trajectories has historically been addressed with the introduction of an oct-tree data structure (Barnes and Hut, 1986), that provides a \(N\log N\) scaling for the computation of gravity without compromising its accuracy. Because the integration of large-scale perturbations is very slow in this scheme, such an oct-tree is used to compute short-range forces, and is complemented by a Particle-Mesh (PM) code on large scales. The resulting algorithm is commonly called TreePM, and it is the standard gravity solver for Gadget-4.
As we will show in next Section, deviations from a pure Newtonian approach become significant on scales that are comparable with the Hubble horizon, so a Newtonian treatment of small-scale clustering, performed by the Tree algorithm, would introduce a negligible error if large scales are treated by a fully relativistic gravity solver. This can be achieved, in a TreePM scheme, by using a relativistic PM code for large-scale gravity, where relativistic potentials are sampled on a small enough mesh so as to be effectively Newtonian on the scales where the Tree code gets in.
In this paper we present an implementation of Gadget-4 that uses a PM library, based on Gevolution relativistic code, as the PM part of the TreePM solver. This is a step toward the construction of an ecosystem of codes and post-processing tools to perform end-to-end simulations of future surveys, with the aim of achieving optimal control of all systematics, including theoretical ones. The paper is organized as follows: Section 2 gives an overview of the theory of relativistic perturbations, with a focus on the approach used in Gevolution. Section 3 gives a description of the Gadget-4 and Gevolution codes, and describes the implementation of Libgevolution and GrGadget. Section 4 presents the tests performed to validate GrGadget, while Section 5 gives our conclusions.
## 2 Theory of Relativistic Perturbations
The success of Newtonian simulations in describing the large-scale structure of the universe follows from the fact that, for an observer at rest with respect to the CMB, the metric of spacetime is very close to Friedmann-Lemaitre-Robertson-Walker's (FLRW). Deviations from the Newtonian approach are expected to be significant, albeit small, on scales near the Hubble horizon, or when the energy-momentum tensor has relativistic components like radiation or fast massive neutrinos. Deviations from FLRW metric are expected to be strong in the proximity of compact objects, but this happens on scales that are far smaller than the resolution that can be afforded in simulations of large comoving volumes. It is thus fair to assume that the perturbations to the metric are small and can be described in a weak-field regime. This does not imply that deviations of the components of the energy-momentum tensor from homogeneity are assumed to be small, density perturbations can be highly non-linear: what we require is that the size of self-gravitating objects is much larger than their gravitational radius.
The Gevolution code (see Adamek et al., 2016) models the spacetime metric with a perturbed FLRW metric in the weak field regime. In the _Poisson gauge_ the metric can be written as:
\[\begin{split} ds^{2}=& a^{2}\Big{(}-c^{2}\,d\tau^{2}(1 +2\Psi)-2c\,d\tau dx^{i}B_{i}+\\ &+dx^{i}dx^{j}\big{(}\gamma_{ij}(1-2\Phi)+h_{ij}\big{)}\Big{)}, \end{split} \tag{1}\]
where \(a(\tau)\) is the scale factor of the FLRW background, \(\tau\) is the conformal time and \(x^{i}\) are the space coordinates. It is possible to exploit the residual degrees of freedom of the metric to impose the conditions \(B_{i|}{}^{i}=0\), \(h^{i}{}_{i}=0\) and \(h_{ij|}{}^{j}=0\). In our notation, repeated latin indexes denote Einstein's summation over the spatial coordinates \(1,2,3\) and the vertical bar subscript, e.g. \(B_{i|j}\), denotes a covariant derivative with respect to the affine connection that emerges from the background spatial metric \(\gamma_{ij}\).
The choice of the Poisson gauge is convenient because the two potentials \(\Psi\) and \(\Phi\) are the gauge-invariant Bardeen potentials, and in the Newtonian limit the field \(\Psi\) can be interpreted as the gravitational potential. In other words, this is the gauge in which the standard N-body solver is integrating the right equations of motion in the Newtonian limit (Chisari and Zaldarriaga, 2011).
### Field equations
The background, characterized by \(a(\tau)\), is by construction a solution of the Einstein's equations in the presence of a homogeneous and
isotropic energy-momentum tensor \(\tilde{T}^{\mu}{}_{\nu}\):
\[\tilde{G}^{\mu}{}_{\nu}=-\frac{8\pi G}{c^{4}}\tilde{T}^{\mu}{}_{\nu}\,, \tag{2}\]
where \(\tilde{G}^{\mu}{}_{\nu}\) is Einstein's tensor constructed from the metric (1) with the perturbations \(\Psi,\Phi,B_{i},h_{ij}\) set to zero. Applying equation (2) to the FLRW metric one obtains Friedmann's equations.
To solve for the perturbations of the metric, the usual procedure consist in subtracting (2) from the full Einstein's equations:
\[G^{\mu}{}_{\nu}-\tilde{G}^{\mu}{}_{\nu}=-\frac{8\pi G}{c^{4}}\left(T^{\mu}{}_{ \nu}-\tilde{T}^{\mu}{}_{\nu}\right)\,. \tag{3}\]
The right hand side now contains the perturbation of the energy-momentum tensor due to inhomogeneities in mass and energy distributions, while the left hand side is a very complicated non-linear expression containing the potentials \(\Psi,\Phi,B_{i},h_{ij}\) and their spacetime derivatives up to second order.
To reach a tractable set of equations that we can interpret and solve numerically, we apply the weak field assumption. The perturbations \(\Psi,\Phi,B_{i},h_{ij}\) are assumed to be of order \(\epsilon\ll 1\). Spatial derivatives are known to increase their amplitude by a factor of \(\epsilon^{-1/2}\), accounting for the presence of shortwave fluctuations induced by the non-linear structure in the energy-momentum tensor, while time derivatives are assumed to preserve the perturbation order. Then one can expand \(G^{\mu}{}_{\nu}-\tilde{G}^{\mu}{}_{\nu}\) in terms of the metric perturbations, neglecting contributions with order higher than \(\epsilon\). For example: \(\Phi\) is a term of order \(\mathcal{O}(\epsilon)\), \(\Phi_{|i}\) has order \(\mathcal{O}(\epsilon^{1/2})\), \(\Phi_{|i}{}^{i}\) is a leading term (order 1, because of the second derivative), quadratic terms like \(\Phi_{|n}\Phi_{|}{}^{n}\) are \(\mathcal{O}(\epsilon)\), and a term like \(\Phi_{,00}\) is considered as \(\mathcal{O}(\epsilon)\). This type of expansion is known as the _shortwave correction_ (Adamek et al., 2014).
Furthermore, experience has shown that the scalar perturbations \(\Phi\) and \(\Psi\) are generally larger than the vector and tensor perturbations \(B_{i}\) and \(h_{ij}\). Indeed, the scalar potentials, that are sourced by the density perturbation \(\Delta T^{00}\), become the Newtonian potential in the Newtonian limit, while the vector perturbation \(B_{i}\) is sourced by \(\Delta T^{0i}\), that is small by a factor of \(v/c\) for non-relativistic matter perturbations, and \(h_{ij}\) by \(\Delta T^{ij}\), that is suppressed by a \((v/c)^{2}\) factor. Hence, it is fair to drop quadratic terms of \(B_{i}\) and \(h_{ij}\) in this weak field limit approximation.
In this approximation, from Eq. (3) it descends that its time-time component yields a Poisson-like equation for the scalar \(\Phi\):
\[\begin{split}\Phi_{|n}{}^{n}(1+4\Phi)&-3\frac{\mathcal{H}}{c^{2}}\Phi_{,0}+3\frac{\mathcal{H}^{2}}{c^{2}}(\chi-\Phi)+\frac{3}{2}\Phi_{|n}\Phi_{|}{}^{n}\\ &=\frac{4\pi Ga^{2}}{c^{4}}\Delta T^{0}{}_{0}\,,\end{split} \tag{4}\]
where \(\mathcal{H}=a^{-1}\frac{da}{d\tau}\) and \(\chi=\Phi-\Psi\). From the time-space section of eq. (3) we obtain:
\[-\frac{B_{i|n}{}^{n}}{4c}-\frac{\Phi_{,i0}}{c^{2}}-\frac{\mathcal{H}}{c^{2}}(\Phi_{,i}-\chi_{,i})=-\frac{4\pi Ga^{2}}{c^{4}}\Delta T^{0}{}_{i}\,, \tag{5}\]
that, taking advantage of the condition \(B_{n|}{}^{n}=0\), can be reduced to:
\[-\frac{B_{i|n}{}^{n}}{4c}=-\frac{4\pi Ga^{2}}{c^{4}}P_{\perp}\Delta T^{0}{}_{i}\,, \tag{6}\]
where \(P_{\perp}\) is a linear operator that selects from a vector field its divergenceless component.
The traceless part of the spatial section of eq. 3 leads to:
\[\begin{split}\left(\delta^{i}{}_{b}\delta^{a}{}_{j}-\frac{1}{3}\delta^{a}{}_{b}\delta^{i}{}_{j}\right)\left[\chi_{|i}{}^{j}-2\Phi_{|i}{}^{j}\chi+4\Phi\,\Phi_{|i}{}^{j}+2\Phi_{|i}\Phi_{|}{}^{j}\right.\\ +\frac{1}{2c^{2}}h^{j}{}_{i,00}+\frac{\mathcal{H}}{c^{2}}h^{j}{}_{i,0}-\frac{1}{2}h^{j}{}_{i|n}{}^{n}\\ +\left.\frac{1}{2c}\left(\frac{\partial}{\partial\tau}+2\mathcal{H}\right)\left(B^{j}{}_{|i}+B_{i|}{}^{j}\right)\right]\\ =\left(\delta^{i}{}_{b}\delta^{a}{}_{j}-\frac{1}{3}\delta^{a}{}_{b}\delta^{i}{}_{j}\right)\left(-\frac{8\pi G}{c^{4}}\Delta T^{j}{}_{i}\right)\,,\end{split} \tag{7}\]
from which we can determine the rest of the metric degrees of freedom, \(\chi\) and \(h_{ij}\). Since the sources of \(\chi\) and \(h_{ij}\) are the perturbations of the energy-momentum tensor \(\Delta T^{i}{}_{j}\), their amplitude in a matter dominated universe is suppressed by a factor \((v/c)^{2}\). That is equivalent to saying that, since dark matter is non-relativistic, \(\chi\) and \(h_{ij}\) must be very small with respect to \(\Phi\) or even \(B_{i}\).
while (8) and (9) become:
\[\frac{dx^{i}}{d\tau}=\frac{p^{i}}{ma}\,, \tag{11}\]
\[\frac{dp_{i}}{d\tau}=-\Phi_{,i}\,mc^{2}a\,. \tag{12}\]
## 3 Algorithms and Code Infrastructure
### Evolution
Gevolution8(Adamek et al., 2016) is an N-body relativistic cosmological code, written in C++ and parallelized with the MPI paradigm. The physical theory behind this code has been described at length in Section 2. Numerically, this code implements a PM scheme to follow the evolution of energy-momentum tensor perturbations. As in PM codes, the advantage of working with a single grid and using Fast Fourier Transforms (FFTs) to solve the Poisson-like equations for the fields is paid with a high cost in memory, of \(\mathcal{O}(N^{3})\) where \(N\) is the number of grid points per dimension.
Footnote 8: [https://github.com/gevolution-code](https://github.com/gevolution-code)
Gevolution can run in either _Newton_ or _General Relativity_ modes. The Newtonian gravity solver inverts the Laplace operator in the Poisson equation for the Newtonian potential, Eq. 10. When running the General Relativity mode, the code solves Eqs. 4, 6 and 7, that require the computation of the perturbed energy-momentum tensor. This is performed using a Cloud-In-Cell (CIC) scheme both for the density and for particle velocities; details are given in the presentation paper. Then the Hamiltonian forces to which particles are subjected are computed from Eqs. 8 and 9.
Gevolution solves the field equations in Fourier space, using a C++ library called LATfield2 to operate FFTs on classical fields in massively parallel applications with distributed memory. LATfield2 provides a programming interface to perform operations on the fields, either in their real or Fourier space representations. This library implements FFTs of 3-dimensional fields whose memory is distributed among parallel processes following a 2-dimensional uniform decomposition of space, in which each process owns in memory a portion of the grid with a _rod_ shape (Daverio et al., 2015). In this way LATfield2 overcomes the scaling limitations of a simpler 1-dimensional domain (_slab_) decomposition provided by the mainstream FFTW3 library9. FFTW3 is used, however, to compute 1D FFTs.
Footnote 9: [http://fftw.org/](http://fftw.org/)
### Gadget-4
Footnote 10: [https://wwwmpa.mpa-garching.mpg.de/gadget4](https://wwwmpa.mpa-garching.mpg.de/gadget4)
Gadget-410 is a state-of-the-art TreePM N-body hydrodynamical cosmological code written in C++ (see Springel et al., 2021); it is massively parallelized in a distributed-memory paradigm using MPI.
As in most N-body codes, gravity in Gadget-4 is represented in the Newtonian limit, but the equations of motion are modified to take into account the Universe expansion, obtained by integrating the Friedmann equations separately. As mentioned above, this approach is consistent with General Relativity in the Poisson gauge, and gives the leading-order term of weak field expansion. This amounts to neglecting the metric degrees of freedom \(B_{i}\), \(\chi\) and \(h_{ij}\), and is valid on scales much smaller than the Hubble horizon. In a typical configuration that is convenient for large cosmological volumes, the code solves for the forces acting on each particle, representing them as the sum of two contributions, one due to the interactions with nearby particles, computed with a Tree algorithm, and one due to long-range interactions, computed with a PM algorithm.11
Footnote 11: The code can work in other configurations (a non-cosmological volume, switching off the PM, enhancing the Tree part using multipole expansion) that are however not relevant for this paper.
The Tree algorithm works by partitioning the space into cubic cells, called nodes; in turn, each node is recursively partitioned into 8 children nodes down to a pre-determined maximum refinement level. A tree structure tracks the list of particles that are located within each node. This structure is used to speed up the computation of gravitational force on a particle: in a particle-particle integration scheme, this force is computed by adding up a series of \(m\,\vec{r}/r^{3}\) terms, one for each particle pair, but we know that the accuracy of force evaluation does not depend strongly on the small-scale distribution of distant particles, so in the Tree scheme the evaluation of gravity is performed by grouping particles that belong to the same node, under the condition that the node subtends a given aperture angle \(\theta\). Particle-particle computation is then used only for the nearest neighbours. This is equivalent to considering the leading order in a multipole expansion of the gravity force from particles belonging to a distant cell. While the construction of the Tree is expensive in terms of computing time, it allows one to achieve \(\mathcal{O}(N_{p}\log N_{p})\) scaling for the force computation, where \(N_{p}\) is the total number of particles in the simulation. Thus the Tree is able to compute with high accuracy the short wavelength modes of the gravitational interaction, while keeping the computational time low for large simulations. However, the Tree code is slow in integrating particle motions near the initial conditions, when the departures from homogeneity are small. This is why it is often coupled with a PM code to speed up the first time steps of a cosmological box.
The PM algorithm represents gravity through the gravitational potential field \(\Phi\), evaluated on a Cartesian cubic mesh of fixed size. The potential is found from the density field by solving the Poisson equation in Fourier space, while the force is computed from the gradient of the potential, obtained with a finite differences scheme. According to the Nyquist-Shannon sampling theorem, this implies that the information handled by the PM is limited to the long modes, up to the Nyquist frequency.
To combine the forces provided by the PM and Tree codes, the gravitational potential is split into the sum of two fields:
\[\Phi=\Phi^{(L)}+\Phi^{(S)}\,, \tag{13}\]
where \(\Phi^{(L)}\) represents long-range modes from the PM, and \(\Phi^{(S)}\) represents short-range modes from the Tree. Written in Fourier space (tilde on top of symbols denotes a Fourier transform), the Poisson equation reads:
\[\tilde{\Phi}_{k}=-\frac{4\pi}{k^{2}}\tilde{\rho}_{k}\,, \tag{14}\]
where \(\rho\) denotes the mass density. We can split the density as a sum of short-range and long-range terms, using Gaussian filters:
\[\tilde{\Phi}_{k}=-\frac{4\pi}{k^{2}}\tilde{\rho}_{k}\left(1-\exp(-k^{2}r_{a}^{ 2})\right)-\frac{4\pi}{k^{2}}\tilde{\rho}_{k}\exp(-k^{2}r_{a}^{2})\,. \tag{15}\]
The scale \(r_{a}\) is the one at which we split long- and short-range modes. We can obtain \(\Phi^{(S)}\) by solving the modified Poisson equation for short modes:
\[\tilde{\Phi}_{k}^{(S)}=-\frac{4\pi}{k^{2}}\tilde{\rho}_{k}\left(1-\exp(-k^{2}r_ {a}^{2})\right)\,, \tag{16}\]
and \(\Phi^{(L)}\) by solving the modified Poisson equation for long modes
\[\tilde{\Phi}_{k}^{(L)}=-\frac{4\pi}{k^{2}}\tilde{\rho}_{k}\,\exp(-k^{2}r_{a}^{2})\,. \tag{17}\]
The long-mode Poisson equation (17) is solved by the PM in Fourier space, so the convolution with the kernel is a simple multiplication. The Tree on the other hand works in real space, hence equation (16) has to be transformed; this can be done analytically, yielding:
\[\Phi^{(S)}(\vec{x})=-G\sum_{i}\frac{m_{i}}{|\vec{x}-\vec{r}_{i}|}\,\text{erfc}\left(\frac{|\vec{x}-\vec{r}_{i}|}{2r_{a}}\right)\,. \tag{18}\]
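To make the split concrete, the following C++ sketch applies the long-range Gaussian kernel of eq. (17) to the Fourier-space density and evaluates the short-range erfc-truncated potential of eq. (18) by direct summation; it is illustrative only and is not the Gadget-4 implementation, which evaluates the short-range part through the Tree.

```cpp
// Sketch of the TreePM split: Gaussian low-pass filter on the density modes
// (long range, PM) and erfc-truncated pair potential in real space (short range).
#include <array>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Long-range filter applied to the density modes rho_k.
void filter_long_range(std::vector<std::complex<double>>& rho_k,
                       const std::vector<double>& k2,   // |k|^2 for each mode
                       double r_a)
{
    for (std::size_t m = 0; m < rho_k.size(); ++m)
        rho_k[m] *= std::exp(-k2[m] * r_a * r_a);
}

// Short-range potential at position x from particles of mass m_i at r_i, eq. (18).
double short_range_potential(const std::vector<std::array<double,3>>& r,
                             const std::vector<double>& m,
                             const std::array<double,3>& x,
                             double G, double r_a)
{
    double phi = 0.0;
    for (std::size_t i = 0; i < r.size(); ++i) {
        const double dx = x[0]-r[i][0], dy = x[1]-r[i][1], dz = x[2]-r[i][2];
        const double d  = std::sqrt(dx*dx + dy*dy + dz*dz);
        if (d > 0.0) phi -= G * m[i] / d * std::erfc(d / (2.0 * r_a));
    }
    return phi;
}
```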
### GrGadget
#### 3.3.1 Libgevolution library
In order to have a relativistic PM code working in Gadget-4, we developed a library that implements both the Newtonian and the relativistic PM algorithms of the monolithic Gevolution code. This was done by forking the Gevolution github repository into Libgevolution, a library that is publicly available on github12 under MIT license.
Footnote 12: [https://github.com/GrGadget/gevolution-1.2](https://github.com/GrGadget/gevolution-1.2)
The rationale behind the development of Libgevolution is to encapsulate Gevolution's resources and methods into abstract objects. This yields several benefits. Firstly, Gevolution maintenance is eased by the logical modularization of the code, i.e. instead of a monolithic code with a unique workflow we can divide Gevolution into components (C++ classes and/or namespaces) with well defined purposes. Secondly, we are allowed to re-use Gevolution components within other applications, as we do within Gadget-4 in the present paper.
We give here an overview of the library; the precise signature of all the defined functions, methods and data structures is described in the technical documentation of the code. Libgevolution is based on three cornerstones: (i) a particle container implemented through the class Particles_gevolution; (ii) a PM data structure named particle_mesh, templated on the particle container type, that can be used either as a relativistic_pm or a newtonian_pm; (iii) an executable application that uses the previous components to produce N-body simulations as the original code does. particle_mesh has to be understood as a container that is aware of the parallelization of the tasks and distribution of memory; it holds the gravitational fields and it allows the user to compute the forces acting on the simulation particles. The user interface declared in particle_mesh consists of the following functions (a minimal interface sketch is given after the list):
* sample(...), that builds the sources (density field or energy-momentum tensor) by sampling particle properties in the mesh;
* compute_potential(...), that solves Poisson equations to compute the potential fields;
* compute_forces(...), that computes the forces acting on particles.
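A minimal C++ sketch of what such an interface could look like is shown below; the class names mirror the hierarchy described in the text, but the method signatures and argument lists are our own simplification for illustration and differ from the actual Libgevolution declarations.

```cpp
// Illustrative particle_mesh-style interface with Newtonian and relativistic
// specializations; not the real Libgevolution API.
template <class Particles>
class particle_mesh {
public:
    virtual ~particle_mesh() = default;
    // Build the source terms (density or energy-momentum tensor) on the mesh.
    virtual void sample(const Particles& pcls) = 0;
    // Solve the Poisson-like equations for the potential(s) in Fourier space.
    virtual void compute_potential(double a, double Hconf) = 0;
    // Interpolate the resulting forces back to the particles.
    virtual void compute_forces(Particles& pcls, double a) const = 0;
};

template <class Particles>
class newtonian_pm : public particle_mesh<Particles> {
    // holds Phi_Newton and its Fourier transform
public:
    void sample(const Particles&) override {}
    void compute_potential(double, double) override {}
    void compute_forces(Particles&, double) const override {}
};

template <class Particles>
class relativistic_pm : public particle_mesh<Particles> {
    // holds Phi, chi and B_i and their Fourier transforms
public:
    void sample(const Particles&) override {}
    void compute_potential(double, double) override {}
    void compute_forces(Particles&, double) const override {}
};
```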
particle_mesh is specialized to solve the Newtonian problem or the General Relativistic problem using class inheritance; Figure 1 illustrates the class hierarchy of Libgevolution's particle_mesh. The expert user will be able to specialize particle_mesh to his/her own needs, for example by deriving a PM that solves a modified gravity problem.
newtonian_pm is the specialization of particle_mesh that contains a real LATfield2::Field scalar field \(\Phi_{\text{Newton}}\) and its complex LATfield2::Field Fourier transform \(\tilde{\Phi}_{\text{Newton}}\), plus a LATfield2::PlanFFT that connects \(\Phi_{\text{Newton}}\) with \(\tilde{\Phi}_{\text{Newton}}\) through a discrete Fourier transform. relativistic_pm is the specialization of particle_mesh that contains the above quoted degrees of freedom of the perturbed FLRW metric, \(\Phi\), \(B_{i}\) and \(\chi\). These are represented as real LATfield2::Field objects, with complex LATfield2::Field counterparts to represent their Fourier transforms and a LATfield2::PlanFFT for each field.
As a first testing phase, we ran Libgevolution, called with a simple wrapper, and the native Gevolution code, applying them to the same set of initial conditions and checking that the results were identical both in the Newtonian and relativistic cases. Then we stripped down Gadget-4 by switching off the Tree code, and compared its results to the Newtonian results of Libgevolution. It is necessary that this comparison gives nearly identical results if we want Libgevolution to substitute for the native PM code of Gadget-4 without loss of accuracy. To achieve a satisfactory match of the two PM codes we had to change the Gevolution scheme in a few points.
We started from V1.2 of Gevolution, which implemented a first-order version of finite differences instead of the fourth-order scheme of Gadget-4. This resulted in a difference with Gadget-4 run on the same initial conditions, and in a percent-level offset of the matter power spectrum on large scales at low redshift. We upgraded the computation of spatial derivatives to fourth order, in parallel with the Gevolution developers that had noticed the same problem; our implementation is equivalent to the most recent release of Gevolution (used, e.g., in Adamek et al. 2022). The upgrade is the following: let us consider the gravitational potential along one direction of the mesh, and call its values \(\Phi_{i}\), where the index \(i\) denotes its position along that direction. Its first derivative is computed with finite differences at the first order as:
\[\frac{\partial\Phi_{i}}{\partial x}=\frac{\Phi_{i+1}-\Phi_{i}}{h}+O(h), \tag{19}\]
where \(h\) is the size of the mesh cell. Fourth-order Taylor expansion gives:
\[\frac{\partial\Phi_{i}}{\partial x}=8\frac{\Phi_{i+1}-\Phi_{i-1}}{12h}-\frac{ \Phi_{i+2}-\Phi_{i-2}}{12h}+O(h^{4})\,. \tag{20}\]
This has a smaller error of order \(O(h^{4})\), so it achieves higher precision than (19) at the small cost of knowing the potential value at the second-nearest cell, which implies a negligible communication overhead.
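The two stencils can be compared directly in the short C++ sketch below, assuming a one-dimensional periodic array of potential values; variable names are illustrative.

```cpp
// First-order (19) vs fourth-order (20) finite-difference gradients along one
// mesh direction with periodic wrapping; N >= 5 grid points assumed.
#include <cstddef>
#include <vector>

double gradient_o1(const std::vector<double>& phi, std::size_t i, double h)
{
    const std::size_t N = phi.size();
    return (phi[(i + 1) % N] - phi[i]) / h;                     // O(h) error
}

double gradient_o4(const std::vector<double>& phi, std::size_t i, double h)
{
    const std::size_t N = phi.size();
    const double f1 = phi[(i + 1) % N] - phi[(i + N - 1) % N];  // nearest cells
    const double f2 = phi[(i + 2) % N] - phi[(i + N - 2) % N];  // second-nearest cells
    return (8.0 * f1 - f2) / (12.0 * h);                        // O(h^4) error
}
```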
Another improvement with respect to V1.2 of Gevolution, that follows an implementation of Gadget-4, was the application of correcting filters to the density in Fourier space to compensate for cloud-in-cell (CIC) interpolation. Indeed, as discussed e.g. in Springel (2005) or Sefusatti et al. (2016), CIC interpolation at some finite order leads to some loss of power that can be compensated for in Fourier space using suitable kernels. This was applied both to the computation of the density and to the computation of energy-momentum tensor components in the relativistic case.
Lastly, to make the Newtonian PM scheme equivalent to that of
Figure 1: PM class hierarchy in Libgevolution.
Gadget-4 we changed the form of the discrete Laplacian operator in the Poisson equation solver from its original form
\[\nabla^{2}\rightarrow-\frac{4N^{2}}{L^{2}}\Big{(}\sin^{2}\frac{\pi k_{x}}{N}+\sin^{2}\frac{\pi k_{y}}{N}+\sin^{2}\frac{\pi k_{z}}{N}\Big{)}\,, \tag{21}\]
described in Adamek et al. (2016), equation (C.5), to the form used in Gadget-4:
\[\nabla^{2}\rightarrow-\frac{4\pi^{2}}{L^{2}}\Big{(}k_{x}^{2}+k_{y}^{2}+k_{z}^ {2}\Big{)}. \tag{22}\]
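The two Fourier-space kernels can be written side by side as follows; this is a sketch with integer mode numbers \((k_{x},k_{y},k_{z})\), meant only to illustrate the change and not to reproduce either code.

```cpp
// Discrete Laplacian kernels in Fourier space: the sine kernel of eq. (21)
// (Gevolution V1.2) and the continuum form of eq. (22) adopted to match Gadget-4.
#include <cmath>

double laplacian_sine_kernel(int kx, int ky, int kz, int N, double L)
{
    const double pi = 3.14159265358979323846;
    const double s = std::sin(pi * kx / N) * std::sin(pi * kx / N)
                   + std::sin(pi * ky / N) * std::sin(pi * ky / N)
                   + std::sin(pi * kz / N) * std::sin(pi * kz / N);
    return -4.0 * double(N) * N / (L * L) * s;
}

double laplacian_continuum_kernel(int kx, int ky, int kz, double L)
{
    const double pi = 3.14159265358979323846;
    return -4.0 * pi * pi / (L * L)
           * (double(kx) * kx + double(ky) * ky + double(kz) * kz);
}
```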
#### 3.3.2 Calling Libgevolution from Gadget-4
The implementation of Libgevolution in Gadget-4 was performed as follows. We created a new PM class with a similar interface as the original one in Gadget-4, so that it is initialized and executed with the same functions as Gadget-4, i.e. init_periodic() and pmforce_periodic(). A new class relativistic_pm was implemented within a gadget::gevolution_api namespace, avoiding the use of the wider gadget namespace to make a clear distinction of purpose between the original Gadget-4 code and our additional features. This relativistic_pm class acts much like a mediator, taking information in and out of Gadget-4's simulation particles, processing the correct unit conversions and calling the methods in the gevolution namespace. Figure 2 shows a diagram that summarizes the contents of this PM class, its relation with Gadget-4's resources and the entry points for gevolution's API.
relativistic_pm consists of:
* A variable of type simparticle_handler that acts as a wrapper for providing particle information from Gadget-4's simparticles global variable and writing back the data produced by gevolution's PM.
* A variable of type latfield_handler that takes care of correctly initializing LATfield global state. Indeed, while Gadget-4 can run with any number of MPI processes, LATfield has limitations that depend on the number of grid points in the PM. latfield_handler also takes care of creating a sub-communicator from Gadget-4's MPI global communicator that satisfies the constraints set by LATfield.
* A variable of type gevolution::cosmology that contains the parameters for the background evolution.
* A container of type gevolution::Particles_gevolution that holds particle information, stored according to their location on the PM grid.
* Variables of type gevolution::relativistic_pm and gevolution::newtonian_pm that perform the actual PM computations, i.e. construct the sources, either density or the components of the energy-momentum tensor, compute the gravitational potential or the metric perturbation fields and the forces that act upon the particles.
* The methods pm_init_periodic and pmforce_periodic, for initialization and execution of the PM, respectively.
#### 3.3.3 Kick and drift operators
In order to keep the Hamiltonian character of the equations of motion in Gadget-4, we have to describe the state of each particle through its position and momentum, not velocity. Following a leap-frog scheme, the momentum should be updated with a _kick_ operation using the full relativistic Eqs. (8) and (9). However, velocities in Gadget-4 are to be interpreted as momenta (per unit mass) of non-relativistic particles in the Newtonian limit. Then we redefine the Gadget-4 _kick_ and _drift_ operators assuming non-relativistic matter, \(p\ll mca\), and further neglecting the very small contribution coming from \(\chi\):
\[\frac{dx^{i}}{d\tau}= \frac{p^{i}}{ma}\;(1+3\Phi)+cB^{i}\,, \tag{23}\] \[\frac{dp_{i}}{d\tau}= -cp^{n}B_{n|i}-\Phi_{,i}\,mc^{2}a\,. \tag{24}\]
The right hand side of (24) is what we call _force_.
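A schematic C++ version of these operators for a single particle is shown below, assuming that \(\Phi\), its gradient, \(B_{i}\) and the derivative tensor \(B_{n|i}\) have already been interpolated to the particle position; units and index placement are simplified with respect to the actual code, and all names are illustrative.

```cpp
// Schematic drift (23) and kick (24) operators for one particle.
#include <array>

struct particle {
    std::array<double,3> x;   // comoving position
    std::array<double,3> p;   // canonical momentum
};

void drift(particle& pt, double dtau, double a, double m, double c,
           double Phi, const std::array<double,3>& B)
{
    for (int i = 0; i < 3; ++i)
        pt.x[i] += dtau * (pt.p[i] / (m * a) * (1.0 + 3.0 * Phi) + c * B[i]);
}

void kick(particle& pt, double dtau, double a, double m, double c,
          const std::array<double,3>& gradPhi,
          const std::array<std::array<double,3>,3>& dB)   // dB[n][i] = B_{n|i}
{
    for (int i = 0; i < 3; ++i) {
        double f = -gradPhi[i] * m * c * c * a;                    // Newtonian-like term
        for (int n = 0; n < 3; ++n) f -= c * pt.p[n] * dB[n][i];   // frame-dragging term
        pt.p[i] += dtau * f;
    }
}
```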
#### 3.3.4 Adding long-range and short-range forces
To combine the forces computed with the relativistic PM and Gadget-4's Newtonian Tree we have extended the idea of the TreePM coupling. From equation (13) one obtains that the force acting on a particle in a TreePM scheme consists of two terms:
\[\vec{F}=S_{r_{a}}\big{[}\vec{F}_{\rm Newton}^{\rm Tree}\big{]}+L_{r_{a}} \big{[}\vec{F}_{\rm Newton}^{\rm PM}\big{]}. \tag{25}\]
The first term is the force computed using the Tree on which an exponential high-pass filter \(S_{r_{a}}\) is applied, leaving short-wavelength modes. The second term corresponds to the PM force on which the complementary low-pass filter \(L_{r_{a}}\) is applied to leave long-wavelength modes. The symbols \(S_{r_{a}}\) and \(L_{r_{a}}\) formally denote these linear operators:
\[S_{r_{a}}[f](\vec{r})=\frac{1}{N}\sum_{\vec{k}}\tilde{f}_{\vec{k}}\,(1-\exp(- k^{2}{r_{a}}^{2}))\exp(-i\vec{k}\cdot\vec{r})\,, \tag{26}\]
and
\[L_{r_{a}}[f](\vec{r})=\frac{1}{N}\sum_{\vec{k}}\tilde{f}_{\vec{k}}\exp(-k^{2}{ r_{a}}^{2})\exp(-i\vec{k}\cdot\vec{r})\,. \tag{27}\]
The _grid smoothing scale_\(r_{a}\) scales with the PM mesh size, and its value is optimized in Gadget-4, in a way that will be tested below, to minimize the impact of the two different treatments of the gravitational force.
In order to account for the relativistic dynamics while preserving the match between tree and PM contributions that is valid in the Newtonian case, we choose the following strategy: Gadget-4 calls both newtonian_pm and relativistic_pm, the Newtonian value of the force is added to the Tree force as in a standard Newtonian simulation, while the difference between the Newtonian and the relativistic forces is added on top as a correction, but filtered on a different scale \(r_{b}\), that we call _gr-smoothing scale_. Eq. (25) then becomes:
\[\vec{F}=S_{r_{a}}\big{[}\vec{F}_{\rm Newton}^{\rm Tree}\big{]}+L_{r_{a}} \big{[}\vec{F}_{\rm Newton}^{\rm PM}\big{]}+L_{r_{b}}\big{[}\vec{F}_{\rm GR}^ {\rm PM}-\vec{F}_{\rm Newton}^{\rm PM}\big{]}\,. \tag{28}\]
The case \(r_{a}=r_{b}\) would correspond to simply adding the relativistic force to the Tree:
\[\vec{F}=S_{r_{a}}\big{[}\vec{F}_{\rm Newton}^{\rm Tree}\big{]}+L_{r_{b}} \big{[}\vec{F}_{\rm GR}^{\rm PM}\big{]}\,. \tag{29}\]
However, while the size of \(r_{a}\), that regulates the match between Newtonian Tree and PM forces, is very well tested within Gadget-4, the optimal value of \(r_{b}\) is to be found; we will show in the next Section that using \(r_{b}\) larger than \(r_{a}\) allows us to achieve percent accuracy at small scales.
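In Fourier space the combination (28) amounts to applying two different Gaussian low-pass filters before transforming back to real space; the following sketch illustrates this for the PM contribution to one force component, with the Tree term \(S_{r_{a}}[\vec{F}_{\rm Newton}^{\rm Tree}]\) added separately in real space. Names are illustrative and this is not the GrGadget implementation.

```cpp
// Combine the Newtonian PM force (filtered at r_a) with the GR correction
// (relativistic minus Newtonian PM force, filtered at the larger scale r_b), eq. (28).
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

void combine_pm_forces(std::vector<std::complex<double>>& F_newton_k,   // in: Newtonian PM modes; out: combined PM contribution
                       const std::vector<std::complex<double>>& F_gr_k, // relativistic PM force modes
                       const std::vector<double>& k2,                   // |k|^2 per mode
                       double r_a, double r_b)
{
    for (std::size_t m = 0; m < F_newton_k.size(); ++m) {
        const double La = std::exp(-k2[m] * r_a * r_a);   // low-pass filter at r_a
        const double Lb = std::exp(-k2[m] * r_b * r_b);   // low-pass filter at r_b
        F_newton_k[m] = La * F_newton_k[m] + Lb * (F_gr_k[m] - F_newton_k[m]);
    }
}
```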
## 4 Validation
The GrGadget code has been validated by running it on a few realizations of initial conditions, listed in table 1. These were generated
with Gadget-4's ngenic code at \(z=19\), starting from a linear power spectrum generated with CAMB13 and with cosmological parameters consistent with Planck 2018 result (Planck Collaboration et al., 2020): \(\Omega_{b}h^{2}=0.0223\), \(\Omega_{c}h^{2}=0.120\), \(H_{0}=67.3\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}\), \(A_{\mathrm{s}}=2.097\times 10^{-9}\) and \(n_{\mathrm{s}}=0.965\).
Footnote 13: [https://camb.info/](https://camb.info/)
### _Gevolution_ and Gadget-4 original codes
As already discussed in Section 3.3.1, the newtonian_pm implementation in V1.2 of Gevolution computes the Newtonian forces differently from those obtained with Gadget-4's PM. Before implementing Libgevolution as the PM engine of Gadget-4, we need to make the two algorithms work in the same way.
To this aim, we have run a set of simulations with the configuration N64 (described in table 1) with a small number of particles \(N_{p}=64^{3}\) to be able to compute forces using a straightforward particle-particle (PP) scheme, that can be taken as the true force that we are trying to approximate. The same initial conditions at \(z=19\) have been fed to both Gadget-4 (with Tree either on or switched off to have a pure PM run) and Gevolution (in Newtonian mode) codes. At later times, \(z=8\) and \(z=0\), we have written snapshots of the forces that the simulation particles experience, separating the PM and the TreePM components; we have then compared those to the true Newtonian force computed with the PP scheme. The data we have obtained are summarized in the plots shown in figure 3. We have binned particles according to the value of the true force, then for each bin we have computed the mean (colored lines) and standard deviation (shaded regions) of the difference between the force computed with approximate methods (PM or TreePM) and the true value. Forces are given in Gadget-4's default units, which is actually acceleration, measured in units of \(10H_{0}\)\(\mathrm{km}/\mathrm{s}=h\,\mathrm{km}^{2}\,\mathrm{s}^{-2}\,\mathrm{kpc}^{-1}\). The green line shows the PM result using the original Gevolution code (the true force is anyway computed with Gadget-4 and matched particle by particle) while the red line is obtained from a pure PM using Gadget-4's original code. The black line gives the TreePM method precision, obtained using Gadget-4.
Looking at the red and green lines (and their shaded areas) we find two known results. Firstly, the TreePM method produces far less bias and dispersion when estimating forces; for instance, in the left panel of Fig. 3 the error is of the order14 of 0.1 \(h\,\mathrm{km}^{2}\,\mathrm{s}^{-2}\,\mathrm{kpc}^{-1}\), while in the right panel it is larger but barely visible when compared with the other curves. Secondly, while the PM force has low bias but a much larger variance than the TreePM one at high redshift, at low redshift, i.e. at higher level of non-linearity, it underestimates the value of the Newtonian force as its magnitude increases. This underestimation is due to the failure of PM in resolving interaction at scales smaller than the grid resolution.
Footnote 14: This quantification is in code units; we can take this value as a reference for a high-accuracy gravity solver.
When comparing Gevolution PM and true forces, we notice an \(S\)-shaped feature in the plot, much more visible at high redshift. As anticipated in Section 3.3.1, this is mostly due to the first-order interpolation used to find the gradient of the potential in the code version that we tested.
In Fig. 4 we show the matter power spectra15 obtained at \(z=0\) from a set of larger simulations with the configuration N256 (see table 1). The red solid line shows the result obtained with the original Gadget-4 code with its TreePM method, while the red dotted line shows the results obtained by switching off the Tree so that the
\begin{table}
\begin{tabular}{c|c c c} name & \(N_{p}\) (particles) & \(N\) (PM grid points) & \(L\) (box size) \\ \hline N64 & 64\({}^{3}\) & 64 & 1 Gpc/\(h\) \\ N256 & 256\({}^{3}\) & 256 & 1 Gpc/\(h\) \\ high\_res & 512\({}^{3}\) & 512 & 500 Mpc/\(h\) \\ \hline \end{tabular}
\end{table}
Table 1: Cosmological simulation configurations used to validate GrGadget.
Figure 2: Diagram of resource ownership and relations for Libgevolution integrated into Gadget-4’s workflow. Each solid box represents a memory resource (an instantiation of a variable type) while the dashed boxes indicate ownership. The newly developed code, represented in the right part of the diagram and denoted by the namespace gadget::gevolution_api, consists of a class named relativistic_pm that owns a particle_handler object that reads and writes directly into gadget::simparticles, a latfield_handler that takes care of setting up and inspecting the state of LATfield2::parallel, and some types defined in Libgevolution, like cosmology, Particles_gevolution and relativistic_pm. The methods sim::begrun(1) and sim::gravity_long_range_force() in gadget:: interact with the relativistic_pm through its interface methods init_periodic() and pmforce_periodic().
forces are computed using the PM alone. The green lines show results obtained with the latest _develop_ version of Gevolution that implements higher-order schemes for finite differences; the dotted line gives results obtained with GRADIENT_ORDER=1 and is identical to the result obtained with V1.2 of Gevolution, while the green solid line uses GRADIENT_ORDER=2, which corresponds to a second-order scheme. These power spectra show that the matter distribution in Gevolution using first-order gradients loses power in what seems to be a uniform trend for large-scale modes. This is a behaviour which is not inherent to the PM nature of the code, since that type of numerical approximation should predict very well the linear evolution at large scales; indeed, the higher-order scheme recovers power on large scales to sub-percent accuracy. Conversely, Gadget-4's PM and TreePM agree very well at wavenumbers below \(k\sim 0.1\,h/\)Mpc.
The higher-order differentiation worsens the loss of power of Gevolution at high values of \(k\), which is not present in Gadget-4. This can be explained as a consequence of the particle-to-mesh sampling and mesh-to-particle interpolation described in Section 3.3.1. As discussed there, Gadget-4's PM corrects for these effects, resulting in a power spectrum that degrades only at very high values of \(k\) as we approach the Nyquist frequency, while producing a \(\sim 2\) percent overcorrection at \(k\sim 0.4\)\(h/\)Mpc.
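For reference, the kind of mass-assignment correction mentioned here can be sketched as a generic deconvolution of a CIC window from a gridded density field (an illustration under the assumption of CIC assignment, not the exact Gadget-4 implementation, which additionally compensates the interpolation step used when forces are read back to the particles):

```python
import numpy as np

def deconvolve_cic(delta_grid, box_size):
    """Divide out the CIC assignment window W(k) = prod_i sinc^2(k_i dx / 2)."""
    n = delta_grid.shape[0]
    dx = box_size / n
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kz1d = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k1d, k1d, kz1d, indexing="ij")
    # np.sinc(x) = sin(pi x)/(pi x), hence sinc(k dx/2) = np.sinc(k dx/(2 pi))
    w = (np.sinc(kx * dx / (2 * np.pi)) *
         np.sinc(ky * dx / (2 * np.pi)) *
         np.sinc(kz * dx / (2 * np.pi))) ** 2
    delta_k = np.fft.rfftn(delta_grid) / w
    return np.fft.irfftn(delta_k, s=delta_grid.shape)
```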
After implementing the higher-order differentiation scheme, the correction for the loss of power discussed above and the change in the discrete Laplacian operator (Section 3.3.1), the results of native Gadget-4 and Libgevolution PMs become indistinguishable.
### Newtonian forces
We have tested our implementation of the GrGadget code by running a standard test in Gadget-4: we create an N-body configuration in which there is a single massive particle in the entire simulation box, while other massless test particles are placed at different distances from the first. In this setting the exact value of the force on each particle is known, hence one can compare the numerical results coming from the TreePM algorithm to the analytical solution.
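A minimal sketch of such a comparison is given below (illustrative only, not the Gadget-4 test driver; periodic image contributions to the reference force are neglected):

```python
import numpy as np

def point_mass_force_error(pos_test, f_num, pos_src, mass, G, box, n_grid):
    """Per-particle relative force error around a single point mass.

    Returns the distance in units of the PM cell size L/N and the relative
    error of the numerical force magnitudes, which can then be binned in
    distance as in Fig. 5.
    """
    d = pos_test - pos_src
    d -= box * np.round(d / box)                 # minimum-image convention
    r = np.linalg.norm(d, axis=1)
    f_true = G * mass / r**2                     # Newtonian reference
    rel_err = np.abs(np.linalg.norm(f_num, axis=1) - f_true) / f_true
    return r / (box / n_grid), rel_err           # distance in mesh units
```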
The results are shown in figure 5, where each dot represents a test particle. The x-axis gives the distance to the massive particle that sources the gravitational field, in units of the PM resolution (\(L/N\)), while the y-axis gives the corresponding absolute value of the relative difference of the true and estimated forces acting on the test particle. The red and blue lines correspond to the mean value of force residuals, for particles binned into distance bins; the red line denotes the statistics obtained from a simulation using Gadget-4's original TreePM implementation and the blue line was produced using GrGadget, in this case with the Newtonian gravity engine.
This figure shows that the accuracy with which the TreePM code reproduces the gravitational force is at worst at percent level on scales of a few mesh cells, corresponding to the scale where the PM and Tree contributions are matched, and gets very accurate in the limits where either the Tree (small scales) or the PM (large scales) dominates.
Gadget-4's and GrGadget's Newtonian PMs show basically the same accuracy, even though their PM implementations are very different.
In Fig. 6 we show the matter power spectra of a set of N256 simulations (see table 1). In this case we are comparing the matter clustering of GrGadget, in blue (with Newtonian forces for testing purposes), against Gadget-4, in red. In agreement with the previous test of force differences, we find that both codes produce the same matter power spectrum up to floating point errors. This is verified both in the case of simulations computing forces using a pure PM and in the case of TreePM.
### Relativistic simulations with GrGadget
We present here results obtained by running GrGadget with relativistic_pm, comparing them with the corresponding relativistic version of Gevolution.
Figure 3: Difference of gravity force with respect to the true PP value, binned according to the true force, for N64 initial conditions, at \(z=8\) (left panel) and \(z=0\) (right panel). Lines represent the mean value of the force difference in each bin, with colours explained in the legend; the shaded regions give the standard deviation of the corresponding force difference.
We expect that the power spectrum of the matter density displays some relativistic features at large scales due to terms preceded by \(\mathcal{H}\) in the field equation (4), while at small scales results should be compatible with Gadget-4's Newtonian simulations. However, the matter power spectrum shown here is not an observable quantity, so this comparison is just meant to give a first validation of the results. A more thorough comparison of observables reconstructed on the past light cone will be presented in a future paper.
Figure 7 shows the matter power spectra for a series of N256 simulations (see table 1). In this case Gevolution and GrGadget are run in GR mode. The parameter that regulates the scale of the relativistic correction (Eqs. 28 and 29) is set to \(r_{b}=6\,L/N\approx 23\,\mathrm{Mpc}/h\), i.e. the relativistic corrections of the PM method are smoothed at distances below 6 grid cells. The plot shows that the relativistic PM-only simulations, GrGadget (blue dotted line) and Gevolution (green lines), are compatible on large scales (\(k<0.03\,h/\mathrm{Mpc}\)) up to a small percent-level difference that is likely caused by the use of different orders for the finite-difference gradient; indeed, going from first- to second-order differences (from dotted to solid green line) the power spectrum gets nearer to GrGadget's fourth-order one. The plot also confirms that our combination of Tree and PM forces in the relativistic weak field limit with GrGadget (blue solid line) reproduces the Newtonian non-linear features to sub-percent level at small scales, that is for \(k>0.1\,h/\mathrm{Mpc}\); here Gadget-4 (red solid line) is again our reference for the non-linear clustering.
Being designed for the use of Fourier methods from the beginning, Libgevolution offers an interface for the computation of the power spectrum of the fields defined through the library's interface. Thus we can also extract and analyse the power spectra of the individual components of the metric perturbations from the relativistic simulations. Figures 8 and 9 show the power spectra of the relativistic potentials, \(\Phi\), \(B_{i}\) and \(\chi\), for the high resolution configuration high_res (see table 1). These plots show a comparison of PM (blue lines) and TreePM (red lines) simulations.
Figure 4: Matter power spectrum of N256 cosmological simulations. The lower panel shows residuals with respect to Gadget-4’s original code (in red), used as baseline. The black line shows the linear power spectrum obtained with CAMB. Red lines show results obtained with Gadget-4, with the Tree part on (solid line) or switched off (dotted line). Green lines show results obtained with Gevolution in Newtonian configuration, with finite differences at first order (dotted line) or second order (solid line).
Figure 5: Forces due to a point source: the points are test particles located at different distances (in units of the mesh resolution \(L/N\)) from the source and the lines represent the RMS of the difference between real and TreePM forces in different distance bins. The red line corresponds to Gadget-4's original TreePM while the blue line was obtained with GrGadget in Newtonian mode. As for the grid smoothing scale, the default value was used: \(r_{a}=1.25L/N\). For this test we have used \(N=256\) and \(L=1\,\mathrm{Gpc}/h\).
Figure 6: Matter power spectrum of four simulations starting from the same initial conditions high_res: blue lines give results for Gadget-4 original code, red lines give results for GrGadget. In both cases dotted lines refer to runs with PM-only, solid lines refer to runs with full TreePM.
The power spectra of the gravitational potentials converge for both methods on large scales. However, below \(1\,\,{\rm Mpc}/h\) the PM-only simulation loses power with respect to the TreePM one; the differences can reach up to \(40\%\) as we approach the Nyquist frequency. This pattern is equally found for the scalar fields \(\Phi\) and \(\chi\), as well as for the individual components of \(B_{i}\).
The right plot in Fig. 8 helps to understand the reason behind this result. Generally speaking, the energy density, the momentum density and their respective currents (the components of the energy-momentum tensor) are the sources of the metric perturbations. Even though those quantities, as fields, are found at discrete positions of space defined by the mesh, their values are computed by sampling the energy and momentum carried by the particle distribution, which contains information on the clustering due to the short range interactions (through the Tree) that goes well below the mesh resolution \(L/N\). Therefore, TreePM simulations, having power on scales well smaller than the PM mesh, give a better representation of the source of the metric perturbations, and thus allow power to be recovered at frequency modes right below Nyquist. Fig. 8 highlights the particular case of \(T^{0}{}_{0}\) (the matter density) as a source for \(\Phi\); by comparing \(T^{0}{}_{0}\) with \(k^{2}\Phi\), we are verifying the Poisson equation \(k^{2}\Phi\approx T^{0}{}_{0}\) that is valid for wavelengths below the Hubble horizon. This confirms that the presence of small-scale clustering in the particle distribution propagates to the gravitational fields up to the maximum resolution that the PM allows. The same thing is visible in the vector modes \(B_{i}\) and in \(\chi\) (Figure 9), where we also notice a small, few-percent mismatch on large scales. These fields are known to give sub-percent effects on observables, so this difference, which is likely due to some degree of numerical mode coupling, is not considered a problem.
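The sub-horizon consistency check described above can be phrased as a simple diagnostic on measured spectra; in the sketch below the arrays `P_phi` and `P_T00` (tabulated on the same wavenumbers `k`) are illustrative assumptions, not the library's own output names:

```python
import numpy as np

def poisson_check(k, P_phi, P_T00):
    """Ratio of the power of k^2*Phi to that of T^0_0.

    For modes well inside the Hubble horizon the Poisson relation
    k^2 Phi ~ T^0_0 implies that this ratio should be close to unity.
    """
    return k, k**4 * np.asarray(P_phi) / np.asarray(P_T00)
```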
In figure 10 we show how the matter power spectrum obtained using GrGadget is affected by the choice of the gr-smoothing scale parameter \(r_{b}\). We have used an N256 box configuration to perform this test, with values of \(r_{b}=1.5,3,6\) in units of \(L/N\approx 4\,\,{\rm Mpc}/h\). We find that the large-scale power is independent of the value of the \(r_{b}\) parameter; structures on scales below the PM resolution are resolved by the Tree algorithm, hence for \(k>k_{\rm Nyquist}\) there is a convergence of all simulations to a common non-linear power spectrum tail. It is at intermediate to small scales, \(k_{\rm Nyquist}>k>0.2\,{\rm Mpc}^{-1}h\), that we notice differences in the power spectrum above the \(\sim 1\%\) level (dashed grey line). For small values of \(r_{b}\) (\(\sim 1.5\,L/N\)), we obtain discrepancies in the power spectrum at \(k\sim 0.5\,{\rm Mpc}^{-1}h\) that can be as large as 5 percent and indicate the limitations of our force summation scheme, Eq. (28). A value of \(r_{b}=3\,L/N\) or possibly higher is needed to obtain a good compatibility of GrGadget and Gadget-4 for all modes greater than \(0.1\,{\rm Mpc}^{-1}h\), where relativistic features in the matter clustering are negligible.
The last test we present here regards the convergence of the numerical results with increasing resolution. Figure 11 shows the matter power spectrum obtained from running Gadget-4's TreePM (red lines), GrGadget with PM-only (blue dotted lines) and GrGadget with TreePM (blue solid line). These code configurations were run with different combinations of the number of grid points per dimension, \(N=256\) and \(N=512\), and box length \(L=250\), \(500\), \(1000\), \(2000\) Mpc\(/h\); the number of particles was fixed at \(N_{p}=N^{3}\). In all cases we have set the PM smoothing scale to \(r_{a}=1.5\,L/N\) and the gr-smoothing scale to \(r_{b}=3\,L/N\). At the finest resolution (top plots) General Relativity and Newtonian dynamics match on small scales. As the mesh size becomes coarser (middle plots), some discrepancies in the power spectrum start to appear, which become more evident for even coarser meshes (bottom plots). This mismatch may be caused by \(r_{b}=3\,L/N\) moving towards larger scales, so that the assumption that PM forces are Newtonian on small scales breaks down. Indeed, while with \(L/N=1\)\(h^{-1}\) Mpc (\(r_{b}=3\)\(h^{-1}\) Mpc) the scales where relativistic effects become evident in the matter power spectrum and the scales where the pure PM prediction starts to deviate from TreePM are well separated, for larger \(L/N\) values the two scales get nearer, indicating that the assumption of pure Newtonian forces on the mesh scale may not be very good. This conclusion is apparently at variance with the discussion of Figure 10, where a larger value of \(r_{b}\) was preferred; however, that figure refers to \(L/N=1\) and is shown at \(z=0.5\), where clustering is a bit weaker. We thus recommend working with mesh sizes of \(L/N\sim 1\,{\rm Mpc}/h\).
## 5 Conclusions
We have constructed a relativistic TreePM code, which we call GrGadget, where the large-scale contribution to the gravitational force is computed using the relativistic C++ PM library Libgevolution, based on the Gevolution code, while gravity coming from small scales is computed by the Tree code of Gadget-4. The code works under the assumption that, in the context of cosmological simulations, dark matter can be treated non-relativistically and that the equations of motion of tracer particles tend to the Newtonian limit at scales well below the Hubble horizon. Following the Gevolution approach, we use a weak field approximation of GR, where the perturbations of the space-time metric with respect to the FLRW background are encoded as fields and simulated by the PM. Comparing the matter power spectrum from GrGadget simulations with that of the original Gadget-4 and Gevolution codes, we conclude that the code produces consistent results as long as the PM cell size \(L/N\) is smaller than \(2\,{\rm Mpc}/h\) and the gr-smoothing parameter is \(r_{b}\approx 3\,L/N\).
Figure 7: Matter power spectrum of Gadget-4, Gevolution and GrGadget runs, the last code being run in relativistic mode. The upper panel shows the absolute value and the lower panel the relative difference with respect to Gadget-4’s TreePM. The black line gives the linear matter power spectrum; red and blue lines give Gadget-4 and GrGadget results, with full TreePM forces (solid lines) or with the Tree switched off (dotted lines). Green lines give Gevolution results, the dotted line referring to first-order finite differences (GRADIENT_ORDER=1) and the solid line to the second-order calculation (GRADIENT_ORDER=2).
With respect to the pure PM implementation of Gevolution, the predictive power of GrGadget gives an improvement even on the scales sampled by the mesh. This is due to the fact that the energy-momentum tensor, which sources the equations of the fields that represent the perturbations of the metric, is computed from a fully non-linear distribution of particles, with gravity being resolved down to a much smaller softening length and not down to the mesh size. This may be very useful, e.g., when assessing the possibility of detecting the frame-dragging effect of a rotating dark-matter halo, if not of a spiral galaxy (Bruni et al., 2014). Furthermore, this code is a development of the widely used Gadget-4 code, and because the PM sector of the code is called only for the computation of the gravitational force, our code can be easily extended to simulations of galaxies or galaxy clusters by switching on the hydrodynamics, star formation and feedback sectors. All the physics described by these sectors can safely be treated in the Newtonian limit; one should in principle
Figure 8: In the left plot: power spectrum of the metric perturbation \(\Phi\) in a high_res simulation obtained with GrGadget. In the right plot: power spectra of \(k^{2}\Phi\) and \(T^{0}{}_{0}\). For modes well below the Hubble horizon and small perturbations it should hold that \(k^{2}\Phi\approx T^{0}{}_{0}\).
Figure 9: In the left plot: power spectrum of the metric perturbation \(B_{i}\) (the \(x\) component) in a high_res simulation obtained with GrGadget. In the right plot: power spectrum of \(\chi\).
add thermal energy of gas particles to the energy-momentum tensor, but while this extension is straightforward, it is likely to provide a negligible contribution.
This is, for our group, a further step in the construction of an ecosystem of simulation codes and post-processing tools for modeling the evolution of structure in the Universe, with the aim of making predictions for precision cosmology. Sub-percent accuracy in cosmological predictions, matching the smallness of the statistical errors that will be obtained with the forthcoming galaxy surveys mentioned in the Introduction, can only be achieved by taking into account relativistic effects (e.g. Lepori et al., 2020), and we can foresee that a self-consistent treatment of these effects (to within the required accuracy) will soon become the standard in cosmological simulations. These effects can also be added by post-processing Newtonian simulations, but such procedures require validation against a more self-consistent approach. Conversely, a large community is developing Gevolution in the direction of adding modifications of gravity, whose formulation is typically worked out in a general relativistic context. This line of development, coupled with a Newtonian treatment of modified gravity in the Tree code, would be precious in the formulation of tests of gravity, because relativistic effects may hide smoking-gun features of specific classes of modified gravity theories.
## Appendix A Code scaling
The code we presented in this work is the merging of two codes whose behaviour in terms of run-time scaling is well-known and characterized; since we did not modify the underlying algorithms, it is expected that the run-time scaling of our code follows that of the parent codes.
However, Libgevolution's PM is obviously different from Gadget-4's, and we added the translation of particle data from the host code to the target relativistic PM. Both these facts require that we establish the overall scaling of GrGadget in its fully-relativistic configuration and the overhead associated with both the relativistic PM and the interface between the two codes.
In figure 11 we show the fraction of time spent in the PM in both the original and relativistic configurations as a function of the grid cell size (see the caption for details). The relativistic PM is an order of magnitude more expensive than Gadget-4's original Newtonian PM, although in an absolute sense it is still either negligible or secondary in the simulation sets that have been tested (it reaches a maximum value of 16% at the highest resolution, i.e. in the \(N=512\), \(L=250\) Mpc\(/h\) case). However, it scales with both the resolution and the grid number as the original Newtonian PM does.
Figures 12 and 13 report the scaling of the run time in strong and weak scaling tests, respectively, for the total run time, the Tree time and the PM time (left, middle and right panels in both figures; see the captions for details). As inferred from figure 11, the run time, and hence its scaling, is dominated by Gadget-4's Tree section.
## Acknowledgements
We thank Julian Adamek for many fruitful discussions on gevolution, Volker Springel for his comments on an early draft, Francesca Lepori, Marco Bruni, Marco Baldi and Emilio Bellini for discussions. Simulations were performed with the HOTCAT system of INAF (Taffoni et al., 2020; Bertocco et al., 2020). PM acknowledges partial support by a _Fondo di Ricerca di Ateneo_ grant of University of Trieste.
## Data availability
The simulation codes presented in this paper are publicly available on GitHub at the following address: [https://github.com/GrGadget](https://github.com/GrGadget).
|
2306.05886
|
In search of a precursor for crystal nucleation of hard and charged
colloids
|
The interplay between crystal nucleation and the structure of the metastable
fluid has been a topic of significant debate over recent years. In particular,
it has been suggested that even in simple model systems such as hard or charged
colloids, crystal nucleation might be foreshadowed by significant fluctuations
in local structure around the location where the first nucleus arises. We
investigate this using computer simulations of spontaneous nucleation events in
both hard and charged colloidal particles. To detect local structural
variations, we use both standard and unsupervised machine learning methods
capable of finding hidden structures in the metastable fluid phase. We track
numerous nucleation events for the face-centered cubic and body-centered cubic
crystal on a local level, and demonstrate that all signs of crystallinity
emerge simultaneously from the very start of the nucleation process. We thus
conclude that there is no precursor for the nucleation of charged colloids.
|
Marjolein de Jager, Frank Smallenburg, Laura Filion
|
2023-06-09T13:30:05Z
|
http://arxiv.org/abs/2306.05886v1
|
# In search of a precursor for crystal nucleation of hard and charged colloids
###### Abstract
The interplay between crystal nucleation and the structure of the metastable fluid has been a topic of significant debate over recent years. In particular, it has been suggested that even in simple model systems such as hard or charged colloids, crystal nucleation might be foreshadowed by significant fluctuations in local structure around the location where the first nucleus arises. We investigate this using computer simulations of spontaneous nucleation events in both hard and charged colloidal particles. To detect local structural variations, we use both standard and unsupervised machine learning methods capable of finding hidden structures in the metastable fluid phase. We track numerous nucleation events for the face-centered cubic and body-centered cubic crystal on a local level, and demonstrate that all signs of crystallinity emerge simultaneously from the very start of the nucleation process. We thus conclude that there is no precursor for the nucleation of charged colloids.
## I Introduction
Crystal nucleation plays an important role in fields ranging from colloidal self-assembly, to protein crystallization, and even polymorph selection in pharmaceuticals [1; 2]. However, despite its importance in a number of essential fields, the detailed mechanism of forming a crystal nucleus still remains a topic of continuous debate.
The simplest theory which addresses crystal nucleation is classical nucleation theory (CNT). In CNT, the metastable fluid is continuously undergoing thermal fluctuations, where small, solid clusters form and dissolve until one appears which is large enough (critically large) to grow out into a macroscopic crystal. The size of such a critical cluster is given simply by balancing the bulk free-energy gain associated with transitioning into the more stable solid phase, with the surface free-energy cost of having a finite crystal cluster immersed in the fluid. This picture, however, becomes significantly more complicated when one considers the possibility of multiple competing crystal structures, typically referred to as polymorphs. In systems with crystal polymorphs, the crystalline phase that first nucleates in the metastable fluid is not necessarily the stable phase. Theories to address such situations, such as the Ostwald step rule [3], and the Alexander-McTague theory [4] have proven unreliable in explaining polymorph selection (see e.g. Refs. [5; 6; 7]).
One complication when studying such questions is the close interplay between local structural motifs that occur naturally in the fluid, and the ones that might emerge when the crystal forms. It has been suggested that motifs hiding in the fluid are predictive of, or even responsible for, the location or polymorph of the nucleus that forms [8; 9]. To investigate this possibility, one avenue forward could be to explore just how much information the metastable liquid is hiding regarding the nucleation process. Over the last two decades a plethora of studies have appeared presenting contradictory observations [8; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. In particular, in simple systems such as charged colloids which nucleate into either the face-centered cubic (FCC) or body-centered cubic (BCC) crystal, some studies have argued that local structural order develops before the local density increases [8; 15; 16], while other authors found evidence that the two processes happen simultaneously [17].
To address this issue, some recent, intriguing studies have explored how modifying (via biasing) the structure of the fluid - either enhancing or suppressing specific local motifs - affects the nucleation process [6; 15]. In principle, such studies might be able to give one direct evidence that a specific local structure either enhances or suppresses the nucleation process. Unfortunately, however, biasing the structure of the fluid modifies not only its local structure but also its thermodynamics, meaning that comparisons with the unbiased case are inconclusive.
The more direct route to trying to explore how various kinds of local ordering interplay in crystal nucleation is simply to simulate the nucleation event, and follow the various structural and density features as nucleation happens. At first glance this would appear to be a straightforward approach. However, the challenge in this case lies in the difficulty in creating local order parameters that are unbiased. For example, order parameters that are tuned to recognize the crystalline regions from fluid might struggle at the boundary between the fluid and crystal - a highly important aspect at the beginning of nucleation. Similar issues exist for other order parameters, making it very difficult to pinpoint the start of the nucleation process and hence to determine whether structural order emerges before, during, or after densification. Hence, in some cases instead of capturing accurately whether local structure exists in the highly fluctuating metastable fluid, one ends up examining the properties of the order parameter instead of the properties of
the fluid.
While this problem is never fully avoidable, one option to try and avoid accidental biases is to exploit multiple different measures for local order - for instance measures associated with symmetries like bond-order parameters and order associated with the topological connections between neighboring particles - such as topological cluster classification (TCC). Interestingly, new unsupervised machine learning (UML) algorithms also give new avenues to probe structure (see e.g. Refs. [20; 21; 22; 23; 24; 25; 26; 27; 28; 29]). Recent studies have even demonstrated that simple, UML-based approaches are able to extract variations in disorder in the structure of supercooled fluids from e.g. a simple vector of bond order parameters [23; 27; 28]. Intriguingly, this includes identifying variations in local structure that are not easily extracted by looking at each element of the vector individually.
In this paper, we attempt to take the utmost care in identifying local signatures of the fluid and revisit the question: are there hidden local structures present in the metastable fluid that foreshadow the location of the imminent formation of a crystal nucleus? Specifically, we apply both classical and UML-based methods to the nucleation of hard and charged colloids in the regimes of both strong and weak screening, for which respectively the FCC and BCC crystals nucleate. To this end, we simulate numerous spontaneous nucleation events, and closely follow all nucleation events as a function of time. In particular, similar to Ref. [17] we zoom in on the regions where the nuclei are born and analyze the local fluctuations in density and structure of the metastable fluid. By doing this we can locally track whether there is a delay between the increase in local structural ordering and local density prior to the start of nucleation. Such a delay would indicate the presence of a precursor. However, within the limits of this study, we find no evidence of such a precursor in the systems we studied.
## II Model
We consider a system of \(N\) like-charged hard spheres of diameter \(\sigma\) suspended in a solvent containing salt. The effective interaction potential between these colloids is given by the repulsive hard-core Yukawa potential
\[\beta\phi(r)=\begin{cases}\beta\epsilon\;\frac{e^{-\kappa\sigma(r/\sigma-1)}}{ r/\sigma}&\text{for }r\geq\sigma,\\ \infty&\text{for }r<\sigma,\end{cases} \tag{1}\]
with contact value \(\beta\epsilon=Z^{2}\lambda_{B}/[\sigma(1+\kappa\sigma/2)^{2}]\), where \(Z\) is the charge of the colloids in units of the electron charge, \(\lambda_{B}\) is the Bjerrum length, \(\kappa\) is the inverse Debye screening length, and \(\beta=1/k_{B}T\), with \(k_{B}\) the Boltzmann constant and \(T\) the temperature. Note that in the limit of zero charge (\(Z\to 0\)) or infinite screening (\(\kappa\sigma\rightarrow\infty\)), this potential reduces to the hard-sphere potential. The interaction potential was truncated and shifted such that the shift was never more than \(10^{-5}k_{B}T\).
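As an illustration, a minimal implementation of this truncated and shifted pair potential could look as follows (a sketch in reduced units; the choice of cut-off is left to the caller and is assumed to satisfy the stated \(10^{-5}k_{B}T\) criterion):

```python
import numpy as np

def beta_phi(r, beta_eps, kappa_sigma, sigma=1.0, r_cut=None):
    """Reduced hard-core Yukawa pair potential beta*phi(r) of Eq. (1)."""
    x = np.asarray(r, dtype=float) / sigma
    phi = np.where(x >= 1.0,
                   beta_eps * np.exp(-kappa_sigma * (x - 1.0)) / x,
                   np.inf)
    if r_cut is not None:                        # truncate and shift
        shift = (beta_eps * np.exp(-kappa_sigma * (r_cut / sigma - 1.0))
                 / (r_cut / sigma))
        phi = np.where(x * sigma <= r_cut, phi - shift, 0.0)
    return phi
```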
Nucleation of both the BCC and FCC phases in this system has been studied in the past (see e.g. Refs. [30; 31; 32; 33; 34]). In a previous study [34], we used umbrella sampling to calculate the nucleation barriers and rates of highly screened charged particles. In this paper, we will study the nucleation of some of these (nearly-)hard systems, as well as the nucleation of weakly screened charged particles. To be able to compare the nucleation processes of different systems, we select state points with approximately equal barrier heights. In particular, we will simulate brute-force nucleation events of systems with barrier heights around 15-18\(k_{B}T\). Information on the nucleation barriers of the systems studied is given in Tab. 1. Note that systems with a Debye screening length of \(1/\kappa\sigma=0.01\) were found to behave essentially as "hard" spheres when mapped with an effective hard-sphere diameter [34]. A brief explanation of the methods used for computing the nucleation barriers as well as some additional information on these systems can be found in the Supplemental Materials (SM).
## III Methods
To explain the methods we use for studying the nucleation events, we need to discuss two things: i) how we identify local structure, and ii) how we track nucleation events locally.
### Identifying local structure
We use three different methods to classify the local structure. The first method considers just the averaged bond-orientational order parameters (BOPs) of Lechner and Dellago [35]. For this, we first calculate for each particle \(i\) the complex quantities
\[q_{lm}(i)=\frac{1}{N_{b}(i)}\sum_{j\in\mathcal{N}_{b}(i)}Y_{l}^{m}(\theta_{ij},\phi_{ij}), \tag{2}\]
where \(\mathcal{N}_{b}(i)\) is the set of the \(N_{b}(i)\) nearest neighbors of particle \(i\), \(Y_{lm}\left(\theta,\phi\right)\) are the spherical harmonics with
\begin{table}
\begin{tabular}{c c c c c c c} & \(\beta\epsilon\) & \(1/\kappa\sigma\) & \(\eta^{*}\) & \(\beta|\Delta\mu|\) & \(n^{*}\) & \(\beta\Delta G^{*}\) \\ \hline & hard spheres & 0.5385 & 0.585 & 75 & 16.5 \\ FCC & 81 & 0.01 & 0.4681 & 0.584 & 84 & 16.3 \\ & 8 & 0.04 & 0.4400 & 0.541 & 69 & 14.8 \\ \hline BCC & 81 & 0.40 & 0.1305 & 0.321 & 122 & 18.0 \\ \end{tabular}
\end{table}
Table 1: For each system studied, the packing fraction of the supersaturated fluid \(\eta^{*}\) at which the brute-force nucleation is performed, together with the corresponding supersaturation \(\beta|\Delta\mu|\). The last columns give the critical nucleus size \(n^{*}\) and barrier height \(\beta\Delta G^{*}\) obtained using umbrella sampling. The error in \(\beta\Delta G^{*}\) is no more than 1.
\(m\in[-l,l]\), and \(\theta_{ij}\) and \(\phi_{ij}\) are the polar and azimuthal angles of the vector \(\mathbf{r}_{ij}=\mathbf{r}(j)-\mathbf{r}(i)\) connecting particles \(i\) and \(j\). We use the SANN algorithm [36] to determine the nearest neighbors. Next, we average these complex quantities over the set of nearest neighbors as well as the particle itself
\[\bar{q}_{lm}(i)=\frac{1}{N_{b}(i)+1}\sum_{j\in\{i,\mathcal{N}_{b}(i)\}}q_{lm}( j). \tag{3}\]
Finally, we compute the rotationally invariant averaged BOPs
\[\bar{q}_{l}(i)=\sqrt{\frac{4\pi}{2l+1}\sum_{m=-l}^{l}|\bar{q}_{lm}(i)|^{2}}, \tag{4}\]
and
\[\bar{w}_{l}(i)=\frac{w_{l}(i)}{\left(\sum_{m=-l}^{l}|\bar{q}_{lm}(i)|^{2} \right)^{3/2}}, \tag{5}\]
with
\[w_{l}(i)=\sum_{\begin{subarray}{c}m_{1},m_{2},m_{3}\\ m_{1}+m_{2}+m_{3}=0\end{subarray}}\begin{pmatrix}l&l&l\\ m_{1}&m_{2}&m_{3}\end{pmatrix}\bar{q}_{lm_{1}}(i)\,\bar{q}_{lm_{2}}(i)\,\bar{q}_{lm_{3}}(i), \tag{6}\]
where the term in parentheses is the Wigner 3-\(j\) symbol.
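As a concrete illustration of Eqs. (2)-(6), the following unoptimized sketch computes \(\bar{q}_{l}\) and \(\bar{w}_{l}\) from particle positions and precomputed neighbor lists (e.g. from SANN); it is not the analysis code used here, and periodic boundaries are ignored for brevity:

```python
import numpy as np
from scipy.special import sph_harm
from sympy.physics.wigner import wigner_3j

def qlm_all(pos, neighbors, l):
    """q_lm(i) of Eq. (2) for every particle."""
    q = np.zeros((len(pos), 2 * l + 1), dtype=complex)
    for i in range(len(pos)):
        d = pos[neighbors[i]] - pos[i]
        theta = np.arccos(np.clip(d[:, 2] / np.linalg.norm(d, axis=1), -1, 1))  # polar
        phi = np.arctan2(d[:, 1], d[:, 0])                                      # azimuthal
        for k, m in enumerate(range(-l, l + 1)):
            # scipy convention: sph_harm(m, l, azimuthal, polar)
            q[i, k] = sph_harm(m, l, phi, theta).mean()
    return q

def averaged_bops(pos, neighbors, l):
    """Averaged BOPs qbar_l and wbar_l of Eqs. (3)-(6)."""
    q = qlm_all(pos, neighbors, l)
    qbar = np.array([(q[i] + q[neighbors[i]].sum(axis=0)) / (len(neighbors[i]) + 1)
                     for i in range(len(pos))])                                 # Eq. (3)
    norm2 = np.sum(np.abs(qbar) ** 2, axis=1)
    q_l = np.sqrt(4 * np.pi / (2 * l + 1) * norm2)                              # Eq. (4)
    w_l = np.zeros(len(pos))
    for m1 in range(-l, l + 1):
        for m2 in range(-l, l + 1):
            m3 = -m1 - m2
            if abs(m3) > l:
                continue
            w3j = float(wigner_3j(l, l, l, m1, m2, m3))                         # Eq. (6)
            if w3j != 0.0:
                w_l += np.real(w3j * qbar[:, m1 + l] * qbar[:, m2 + l] * qbar[:, m3 + l])
    return q_l, w_l / norm2 ** 1.5                                              # Eq. (5)
```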
We want to track the local density and structure to determine whether there is a difference between the increases in local structural ordering and in local density at the start of nucleation. To this end, for each nucleation event we find the position \(\mathbf{r}_{0}\) that best captures the center of the nucleus at the start of nucleation. For this we use the average center-of-mass of the precritical nucleus as a starting point and, if needed, adjust it by eye to best capture the birthplace of the crystal nucleus. Next, for each snapshot of the nucleation trajectory, starting well before the start of nucleation, we determine all particles inside a sphere of radius \(R\) around \(\mathbf{r}_{0}\), and take the average of the local properties of these particles. This is similar to what Berryman _et al._ (2017) did. The local structural properties that we consider are explained in the previous subsection. Additionally, we define for each particle a local packing fraction measured via the volume of its Voronoi cell. The volumes of the Voronoi cells were obtained using voro++ [41]. As we are searching for local precursors, we choose \(R\) such that the selected region contains around 30-40 particles. This size provides a good balance between being large enough to obtain relatively stable averages of the local properties, and being small enough to ensure that the averaged properties still represent the local situation.
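A minimal sketch of this local tracking, under the assumption that per-particle Voronoi volumes (e.g. from voro++) are already available, is given below; function and array names are illustrative:

```python
import numpy as np

def local_average(pos, values, r0, R, box):
    """Average a per-particle quantity over the sphere of radius R around r0."""
    d = pos - r0
    d -= box * np.round(d / box)                 # periodic boundary conditions
    mask = np.sum(d**2, axis=1) <= R**2
    return values[mask].mean(), int(mask.sum())

def local_packing_fraction(pos, voronoi_volumes, r0, R, box, sigma=1.0):
    """Local packing fraction from per-particle Voronoi cell volumes."""
    eta_i = np.pi * sigma**3 / (6.0 * voronoi_volumes)   # per-particle eta
    return local_average(pos, eta_i, r0, R, box)[0]
```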
## IV Results
### Structure of the metastable fluid
Before we look into the actual crystal nucleation, we first characterize the structural properties of the metastable fluid.
First, we examine the globally averaged values of the local BOPs, and plot the results in Fig. 1 for the metastable fluids of essentially hard spheres and of soft spheres as a function of the supersaturation. We see that \(\bar{q}_{6}\), \(\bar{q}_{8}\), and \(\bar{w}_{8}\) are most prominent in both metastable fluids, and that all BOPs are only marginally affected by the increase in supersaturation. Furthermore, notice that the values in both systems are surprisingly similar, even though the metastable fluid of essentially hard spheres later forms an FCC crystal, whereas the fluid of the soft spheres will form a BCC crystal. The most prominent difference between the two systems can be found in \(\bar{w}_{6}\), which is smaller for the nearly-hard spheres than for soft spheres, and for high supersaturation even becomes on average negative for the nearly-hard spheres whereas it stays positive for the soft spheres. See the SM for more analysis on the \(\bar{w}_{l}\)'s. We, thus, conclude that the fluid's "knowledge" about which crystal phase it should nucleate into is difficult to discern from the global values of the BOPs.
In addition to the BOPs, we take a look at the presence of the different TCC clusters in the metastable fluids. Figure 2 shows the population of various TCC clusters in the metastable fluids of hard spheres, nearly-hard spheres, and soft spheres. Even though we see some small deviations in the populations of the different metastable fluids - e.g. clusters 6A, 8A, 8K, 9K, and BCC_9 have a slightly higher population in the fluid of soft spheres and 9B, 10B, 11C, 11E, and 12D have a slightly higher population in the fluid of nearly-hard spheres - the values are again surprisingly similar. This indicates once more that it is difficult to determine which crystal phase will nucleate from the metastable fluid for the systems studied here.
Next, we characterize the local ordering of the metastable fluid on the single-particle level. As explained in the Methods, we train a PCA model using only configurations of the metastable fluid. In all cases the first principal component (PC1) explains around 70% of the total variance of the input. To illustrate what kind of fluctuations PCA picks up in the fluid, Figs. 3a,c) show for the metastable fluids of hard spheres and of soft spheres the weight of each BOP in the first and second principal component. We see that PC1 is mostly made up of \(\bar{q}_{6}\) and \(\bar{q}_{8}\). Furthermore, Figs. 3b,d) show the distribution of these metastable fluid particles in the PC1-PC2 plane, as well as the distributions of the corresponding FCC and BCC phase. (Recall that the data for the crystalline particles was not used in training the PCA models.) In this scatter plot we can clearly see that the crystal phases lie in the region of large PC1. To get a better understanding of the real-space distribution of these particles with above or below average PC1, we take a look at a single snapshot of the metastable fluid of hard spheres and color the particles according to their local packing fraction, PC1, and the number of 9B and 11F clusters a particle is involved in, see Fig. 4. Even though the spatial correlations in the local packing fraction are not clearly visible, we can clearly distinguish by eye large spatial regions of above or below average PC1. The autocorrelation functions of these spatial correlations can be found in the SM. Notice that the regions with above average PC1 correspond to an absence of 9B clusters and a high presence of 11F clusters, while regions with below average PC1 correspond to a high presence of 9B clusters and an absence of 11F clusters. Thus, there is a negative correlation between PC1 and 9B clusters and a positive correlation between PC1 and 11F clusters. The precise correlations of these two and other TCC clusters with PC1 are shown in Fig. 3e). Analogous to what was found in Ref. [23], the TCC clusters can be roughly divided into two groups: those with a negative correlation, which essentially are all clusters consisting of one or more tetrahedral subclusters, and those with a positive correlation, which contain the clusters consisting of one or more square pyramidal subclusters.
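A possible realization of this analysis with scikit-learn is sketched below; whether the BOP vectors are standardized before the fit is an assumption made here, and `bops_fluid` / `bops_crystal` are illustrative names for (particles x features) arrays of \(\bar{q}_{l}\) and \(\bar{w}_{l}\) values:

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fit_bop_pca(bops_fluid, bops_crystal, n_components=2):
    """Train PCA on fluid BOP vectors only, then project the crystal data."""
    scaler = StandardScaler().fit(bops_fluid)
    pca = PCA(n_components=n_components).fit(scaler.transform(bops_fluid))
    pc_fluid = pca.transform(scaler.transform(bops_fluid))
    pc_crystal = pca.transform(scaler.transform(bops_crystal))  # not used in the fit
    return pca, pc_fluid, pc_crystal
```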
Combining the observation of large spatial regions of above average PC1 with the proximity of these particles to the crystal phases in the PC1-PC2 scatter plot (Fig. 3), we can conclude that regions of above average PC1 form a good candidate for harboring a precursor for crystal nucleation. In the next section, we will investigate these regions whilst tracking the nucleation events. However, before we turn our attention to that,
we need to determine the temporal correlations of the local structure such that we know the time window before the start of nucleation during which we can search for a precursor. Figure 5 shows, for multiple simulation methods, the autocorrelation functions (ACFs) of the first two principal components and the local packing fraction in the metastable fluids of hard spheres and of soft spheres. Here, we give the time in terms of the long-time diffusion time \(\tau_{d}=\sigma^{2}/6D_{l}\), where \(D_{l}\) is the long-time diffusion coefficient obtained from the mean-squared displacement. Notice that the ACFs are essentially independent of the simulation method, which confirms that the dynamics are also independent of the choice of simulation method. We see in both systems that PC2 and the local packing fraction decay extremely fast in time. Although PC1 decays more slowly, i.e. within half a diffusion time for the hard spheres and one diffusion time for the soft spheres, this is still relatively fast. The decay time of PC1 provides a good estimate for the time window before the start of nucleation in which we can search for a precursor.
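For reference, the normalized autocorrelation function used here can be estimated along the lines of the following sketch, where `x` is an (frames x particles) array of a per-particle quantity (PC1, PC2 or the local packing fraction) sampled at equally spaced times; this data layout is an illustrative assumption:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized time autocorrelation, averaged over particles."""
    x = x - x.mean(axis=0)                       # remove the per-particle mean
    var = (x * x).mean()
    acf = np.empty(max_lag)
    for lag in range(max_lag):
        acf[lag] = (x[: len(x) - lag] * x[lag:]).mean() / var
    return acf
```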
Figure 1: The mean of a,b) the first eight \(\bar{q}\)’s and c,d) the first four even \(\bar{w}\)’s as a function of the supersaturation for the metastable fluids of hard-core Yukawa with a,c) \(\beta\epsilon=81\) and \(1/\kappa\sigma=0.01\), and b,d) \(\beta\epsilon=81\) and \(1/\kappa\sigma=0.40\).
Figure 2: The population of various TCC clusters in the metastable fluids of hard spheres (\(\eta=0.5385\)), nearly-hard hard-core Yukawa particles (\(\beta\epsilon=8\), \(1/\kappa\sigma=0.04\), \(\eta=0.4400\)), and soft hard-core Yukawa particles (\(\beta\epsilon=81\), \(1/\kappa\sigma=0.40\), \(\eta=0.1305\)).
### Nucleation study
We now turn our attention to crystal nucleation. As explained in the methods, we simulate numerous spontaneous nucleation events using (K)MC and MD simulations, and track for all these events the local properties of the region where nucleation starts. Here, we discuss our observations using two typical nucleation events: one of the hard-spheres system and one of soft spheres (\(\beta\epsilon=81\), \(1/\kappa\sigma=0.40\)). Both these nucleation events were obtained using MC simulations. More nucleation events, where we either used other simulation methods or studied the other systems mentioned in Tab. 1, can be found in the SM. To better illustrate which region we study while tracking a nucleation event, Fig. 6 shows a couple of snapshots of the nucleation event of hard spheres where the particles inside the studied region are colored red. Figure 7 shows for this event and the nucleation event of soft spheres the average properties of the particles in this studied region. Before we discuss what we see,
Figure 4: Snapshot of the metastable fluid of hard spheres (\(\eta=0.5385\)) colored by a) the local packing fraction, b) the first principal component, c) the number of 9B clusters per particle, and d) the number of 11F clusters per particle.
Figure 3: PCA on the metastable fluids of a,b) hard spheres (\(\eta=0.5385\)) and c,d) soft hard-core Yukawa particles (\(\beta\epsilon=81\), \(1/\kappa\sigma=0.40\), \(\eta=0.1305\)). a,c) give the weight of each BOP in the first and second principal component. b,d) show the distribution of the fluid particles in the PC1-PC2 plane, with the addition of the corresponding bulk FCC phase of hard spheres (\(\eta=0.5981\)) and bulk BCC phase of soft hard-core Yukawa (\(\beta\epsilon=81\), \(1/\kappa\sigma=0.40\), \(\eta=0.1311\)). e) Pearson correlation between PC1 and the number of TCC clusters a particle is involved in for the hard spheres. The color of the bars indicates clusters that consist of one or more tetrahedral subclusters (blue), one or more square pyramidal subclusters (yellow), or both/neither (gray).
let us again point out that the nucleus size (black line) is not an ideal order parameter for tracking nucleation since its binary nature causes it to overlook subtle increases in the local structural ordering at the onset of nucleation. It does, however, provide a general overview of the nucleation event, such as when the nucleus reaches its critical size (see Tab. 1). That being said, let us first discuss the BOPs of the studied region. We observe no notable change in the behavior of the BOPs before the start of nucleation, but, as soon as nucleation starts, we see a sharp increase in the values of \(\bar{q}_{6}\) and \(\bar{q}_{8}\) for both systems. Furthermore, for the hard spheres, we see that, once nucleation starts, \(\bar{q}_{4}\) increases, \(\bar{w}_{6}\) stays negative, and \(\bar{w}_{4}\) decreases. This all indicates that indeed the FCC phase nucleates. On the other hand, for the soft spheres, we see that as nucleation starts \(\bar{q}_{4}\) barely increases, \(\bar{w}_{6}\) stays positive, and \(\bar{w}_{4}\) keeps fluctuating around zero, which all indicates that indeed the BCC phase nucleates. Similar to the behavior of the BOPs, we observe no notable change in the behavior of PC1 before the start of nucleation, but see a sharp increase in its value once nucleation starts. Note that this increase in PC1 is visible before the number of particles classified as crystalline starts to rise (black line), demonstrating that PC1 is a better order parameter for tracking the start of nucleation than the nucleus size according to our definition. Lastly, for the hard spheres, we see that the local packing fraction increases simultaneously with PC1 as soon as nucleation starts, and that no notable behavior can be observed before the start of nucleation. This strongly indicates that the increases in structural ordering and in local density go together and, thus, that there is no apparent precursor. Unfortunately, as the difference between the packing fraction of the fluid and solid phases is extremely small for the soft spheres, i.e. less than 0.001, it is not possible to observe any increase in the local packing fraction on top of the normal fluctuations. Hence, we cannot draw any conclusions on the local packing fraction of the soft spheres.
Next, to show that we have missed no subtle changes in the local structure and thus confirm that there is no precursor for nucleation, we further examine the local structure using TCC. In Fig. 8 we show for four of the most relevant TCC clusters the average number of clusters a particle is involved in and compare it with PC1. These four clusters are: i) 6A, which has the strongest positive correlation with PC1 and is present in both bulk FCC and bulk BCC, ii) 8A, which has the second strongest positive correlation with PC1 and is present in bulk BCC
Figure 5: Autocorrelation function of the first and second principal components and the local packing fraction in the metastable fluids of a) hard spheres (\(\eta=0.5385\)) and b) soft hard-core Yukawa particles (\(\beta\epsilon=81\), \(1/\kappa\sigma=0.40\), \(\eta=0.1305\)). The different dashing and darkness of the color indicate the simulation method, and the time is in terms of the long-time diffusion time \(\tau_{d}\).
Figure 6: Four snapshots of a typical nucleation event of hard spheres (\(\eta=0.5385\)). Here \(t_{0}\) indicates the start of nucleation. Fluid particles are displayed at a quarter of their actual size to make the nucleus visible, and red indicates the particles inside the studied region, i.e. those inside the sphere of radius \(R\) around the center of nucleation \(\mathbf{r}_{0}\).
but not bulk FCC, iii) FCC, which is present in bulk FCC but not bulk BCC, and iv) 7A, which has the strongest negative correlation with PC1 and is found in neither bulk FCC nor bulk BCC. Similar figures for other TCC
Figure 7: Left: typical nucleation event of hard spheres (\(\eta=0.5385\)), same as in Fig. 6. Right: typical nucleation event of soft hard-core Yukawa particles (\(\beta\epsilon=81\), \(1/\kappa\sigma=0.40\), \(\eta=0.1305\)). Both events were obtained using MC simulations. The vertical dashed line in each figure indicates the start of nucleation \(t_{0}\), and the shaded area indicates the time window before \(t_{0}\) for which the ACF of PC1 \(>0.05\) (see Fig. 5). In a-f) the black line (right axis) gives the size of the biggest nucleus present in the studied region. The other lines give the average value of a-b) \(\bar{q}_{l}\) for \(l\in[3,8]\), c-d) \(\bar{w}_{l}\) for \(l\in[4,6,8]\), e-f) PC1. In g-h) the blue line (right axis) gives PC1, while the yellow line (left axis) gives the local packing fraction \(\eta_{\text{local}}\). Note that in e-g) the horizontal dashed lines give the reference value of PC1 in the fluid and solid phase. In g) the right axis is scaled in such a way that the reference values in the fluid and solid of PC1 and \(\eta_{\text{local}}\) lie on top of each other. In h) the horizontal dashed lines give the reference value of \(\eta_{\text{local}}\) in the fluid and solid phase.
clusters can be found in the SM. For all clusters we see that there is no significant change prior to the start of nucleation. Furthermore, we see that the trends of the 6A cluster coincide almost perfectly with those of PC1. Similarly, we see that the trends of the 8A cluster closely follow the trends of PC1. However, for hard spheres the initial increase in 8A clusters is followed by a decrease. Since 8A is a cluster that is usually found in bulk BCC and not in bulk FCC, this initial increase might be surprising. It can be explained via the observation that 8A clusters are found in high concentrations near the surface of growing nuclei [9]. As a result, the number of
Figure 8: For the same events as in Fig. 7, i.e. hard spheres (left) and soft hard-core Yukawa particles (right), the average number of clusters per particle (left axis, yellow) for a couple of TCC clusters together with the average value of PC1 (right axis, blue). The horizontal dashed lines indicate the reference values of the number of clusters per particle and PC1 in the fluid. In a-f) the left axis is scaled in such a way that these lines lie on top of each other. In g-h) this was not possible without inverting one of the axes.
these clusters decreases once the nucleus grows beyond our averaging radius. For the FCC cluster, we observe a sharp increase during the nucleation of hard spheres. Notice, however, that this increase starts slightly later than the increase in PC1. This is not surprising as the FCC cluster is a relatively large cluster, i.e. it contains 13 particles, and consequently is not present in the first stages of nucleation. For the soft spheres there is no significant increase in FCC clusters, as expected. Lastly, we take a look at the 7A cluster. In contrast to the other three clusters, this five-fold symmetric cluster has a strong negative correlation with PC1. Moreover, it is strongly present in the metastable fluid phases, whereas its presence in the FCC and BCC phases is negligible. It is, therefore, not surprising that we observe an immediate and sharp decrease in 7A clusters as soon as nucleation starts.
## V Conclusions
To conclude, we have characterized the local structure of various metastable fluids of charged colloids using multiple methods: bond-orientational order parameters (BOPs), principal component analysis (PCA) on the BOPs, and topological cluster classification (TCC). In doing this we have attempted to avoid artefacts due to biases in our chosen order parameters. For all systems we have found that any local structural ordering has a relatively short lifetime, resulting in a short time window prior to the start of nucleation in which a precursor could exist. By tracking the local structure of the spatial region coinciding with the birthplace of the crystal nucleus, we show that inside this time window no atypical behavior in the local structural order is observed using any of our structural order parameters. Furthermore, we demonstrate that all structural characteristics that differ significantly between the fluid and crystal phases start changing simultaneously as soon as nucleation starts. Specifically in the case of FCC, this includes the local density, which starts growing immediately as soon as structural order emerges. We, thus, conclude that we find no evidence for a precursor for the crystal nucleation of hard and charged colloids.
## VI Acknowledgements
L.F. and M.d.J. acknowledge funding from the Vidi research program with project number VI.VID.192.102 which is financed by the Dutch Research Council (NWO).
|
2308.13590
|
LSTM-based QoE Evaluation for Web Microservices' Reputation Scoring
|
Sentiment analysis is the task of mining the authors' opinions about specific
entities. It allows organizations to monitor different services in real time
and act accordingly. Reputation is what is generally said or believed about
people or things. Informally, reputation combines the measure of reliability
derived from feedback, reviews, and ratings gathered from users, which reflect
their quality of experience (QoE) and can either increase or harm the
reputation of the provided services. In this study, we propose to perform
sentiment analysis on web microservices reviews to exploit the provided
information to assess and score the microservices' reputation. Our proposed
approach uses the Long Short-Term Memory (LSTM) model to perform sentiment
analysis and the Net Brand Reputation (NBR) algorithm to assess reputation
scores for microservices. This approach is tested on a set of more than 10,000
reviews related to 15 Amazon Web microservices, and the experimental results
have shown that our approach is more accurate than existing approaches, with an
accuracy and precision of 93% obtained after applying an oversampling strategy
and a resulting reputation score of the considered microservices community of
89%.
|
Maha Driss
|
2023-08-25T17:23:12Z
|
http://arxiv.org/abs/2308.13590v1
|
# LSTM-based QoE Evaluation for Web Microservices' Reputation Scoring
###### Abstract
Sentiment analysis is the task of mining the authors' opinions about specific entities. It allows organizations to monitor different services in real time and act accordingly. Reputation is what is generally said or believed about people or things. Informally, reputation combines the measure of reliability derived from feedback, reviews, and ratings gathered from users, which reflect their quality of experience (QoE) and can either increase or harm the reputation of the provided services. In this study, we propose to perform sentiment analysis on web microservices reviews to exploit the provided information to assess and score the microservices' reputation. Our proposed approach uses the Long Short-Term Memory (LSTM) model to perform sentiment analysis and the Net Brand Reputation (NBR) algorithm to assess reputation scores for microservices. This approach is tested on a set of more than 10,000 reviews related to 15 Amazon Web microservices, and the experimental results have shown that our approach is more accurate than existing approaches, with an accuracy and precision of 93% obtained after applying an oversampling strategy and a resulting reputation score of the considered microservices community of 89%.
Keywords: Sentiment Analysis, Reputation, Web Microservices, Long Short-Term Memory Model, Net Brand Reputation
## 1 Introduction
In the current era, many customer reviews are available on different platforms and applications: e-commerce, Web services, games, social networks, etc. What interests us in this paper are the web microservices-based applications. A web microservice is a tiny, self-contained component of an online application that performs a specific function or task. The microservices architecture is a methodology for developing software systems consisting of loosely coupled, independently deployable services [13]. Customers post reviews online as feedback on microservices they have purchased, used, or experienced. These reviews are one of the most effective ways to motivate and encourage potential customers to use services. They reflect users' quality of experience (QoE), which can influence potential customers' perceptions. Positive reviews can enhance the microservice's reputation and encourage new users to try it out, while negative reviews can
harm its reputation and discourage potential users. The main issue with these reviews is that they may be ambiguous and unclear, and this is due to various factors such as attitude, emotions, used vocabulary, and previous experiences of the customer. To solve this issue, sentiment analysis techniques [4] are employed to automatically transform these unstructured reviews into structured data that can be extremely valuable for commercial concerns like reputation management. Having positive reviews and a good reputation as a service can play an important role in its success. It helps attract customers' attention and interest and establish trust and confidence in the service. In this paper, we aim to perform sentiment analysis techniques on web microservices' reviews to exploit the provided information for services' reputation assessment and scoring. Our proposed approach is designed and implemented to mine microservices' reviews by categorizing them into different polarity labels and providing a score that is used to measure the microservices' community reputation. This approach applies a deep learning-based sentiment classification that performs the Long Short-Term Memory (LSTM) model [14] and employs the Net Brand Reputation (NBR) algorithm [3] to assess reputation scores for concerning microservices. This work makes a significant contribution by leveraging the outputs of the LSTM model to classify reviews as positive or negative. These results are then utilized to calculate the overall reputation score of the microservices' community provider through the application of the NBR algorithm. The proposed approach is tested on a set of 10,000 reviews related to 15 Amazon Web microservices. The experimental results have shown that our approach is more accurate than existing approaches, with an accuracy and a precision of 93% after applying oversampling strategy and a resulting reputation score of 89%. The remainder of this paper is structured as follows: Section 2 provides a brief background about Web microservices, sentiment analysis, and reputation assessment. Section 3 presents pertinent related works that implement sentiment classification and reputation assessment for Web microservices. Section 4 details the proposed approach. Section 5 illustrates the implementation of the proposed approach and discusses the experiments that are conducted to test and validate this approach. Section 6 presents the concluding remarks and future works.
## 2 Background
This section presents fundamental concepts related to Web microservices, sentiment analysis, and reputation management.
### Web Microservices
A web microservice is a tiny, self-contained component of an online application that performs a specific function or task. The microservices architecture is a methodology for developing software systems consisting of loosely coupled, independently deployable services [11, 5]. Each microservice is often responsible for a specific business function and connects with other services through common web
protocols. Online microservices are frequently employed to develop sophisticated web systems that demand scalability, fault tolerance, and flexibility. By splitting a web application into smaller, more manageable services, developers may work on each component individually, making it easier to update, test, and deploy changes. The quality of service characteristics (e.g., response time, availability, scalability, security, usability, etc.), which are provided by these Web microservices, have become a primary concern for the users as well as the providers [6]. One way to improve these characteristics is to analyze the feedback generated by users' reviews. Mining users' feedback is crucial since it reflects the service's reputation and leads to its improvement. It generally gives an idea of whether users like the microservice, and if the users do not like it, it indicates what factors contributed to this negative feedback.
### Web Microservices and Reputation Management
According to the Concise Oxford Dictionary [2], "Reputation is generally said or believed about a person's or thing's character or standing". Informally, reputation combines the measure of reliability derived from feedback, reviews, and ratings gathered from users in a certain society. The QoE and the reputation of web microservices are closely related. A positive quality of experience can lead to a strong reputation, while a negative quality of experience can harm the reputation of the microservice. When users have a positive experience while using a web microservice, they are more likely to recommend it to others and leave positive reviews or feedback. This can help to build the microservice's reputation and attract new users. On the other hand, if users have a negative experience while using a web microservice, they may leave negative reviews or feedback, which can harm the microservice's reputation. A reputation model [12] in the context of Web microservices is a method that enables decision-makers to distinguish good and satisfying services from bad and poor ones based on users' feedback and reviews. In this context, the importance of reputation is derived from the need to help users and service providers to distinguish the quality of the functionalities and performances among similar services based on these services' history of use and how they behaved in the past.
### Sentiment Analysis
Sentiment Analysis (SA) is defined as analyzing authors' opinions, emotions, or attitudes about specific entities such as products, services, events, and individuals [10]. These entities are most likely to be covered by users' reviews. Sentiment analysis is a process that aims to classify sentiments, and that consists of three different steps [10]: 1) sentiment identification, 2) feature selection, and 3) sentiment classification. The input of this process is a dataset of users' reviews; the output is a set of sentiment polarities (i.e., positive/negative/neutral or positive/negative). There are three main classification levels for SA [15]: document-level, sentence-level, and aspect-level SA. In this paper, we tackle the second class
of SA, since the considered users' opinions will be grouped into a single document that is analyzed at the sentence level to determine users' orientations.
## 3 Related Works
Many statistical, fuzzy-logic, and data mining-based approaches for computing web service reputation have been proposed in the literature. These are the most recent and relevant related works.
In [9], the authors presented a collaborative Service Level Agreement (SLA) and Reputation-based Trust Management (RTM) solution for federated cloud environments. The SLA service explicitly set performance standards and evaluated the real performance of cloud applications installed. Based on the SLA, the collaborative solution's RTM service utilized many technical and user experience parameters to calculate the cloud providers' dependability and customers' trust. The collaborative approach was demonstrated and proven in a genuine federated setting. The study, presented in [8], uses a trust prediction and confusion matrix to rank web services based on throughput and response time. For a benchmark web services dataset, AdaBoostM1 and J48 classifiers were utilized as binary classifiers. The confusion matrix was used to compute trust scores. Correct prediction of trustworthy and untrusted web services has enhanced the overall selection process in a pool of comparable web services. Kappa statistics values were used to evaluate the suggested method and compare the performance of AdaBoostM1 and J48 classifiers. [7] discussed web service selection utilizing a well-known machine learning technique, REPTree, to forecast trustworthy and untrusted services correctly. Using web services datasets, the performance of REPTree is compared to that of five machine learning models. The authors tested web services datasets using a ten k-fold cross-validation approach. They utilized performance measures, like sensitivity and specificity measures, to assess the effectiveness of the REPTree classifier. The evaluation results of the suggested web services selection technique showed a link between the final selection and the recommended web service trust score. The authors in [1] presented a reputation-based trust assessment technique using online user evaluations to combine the NBR measure with a deep learning-based sentiment analysis model called CBiLSTM. The suggested deep learning model combined the layers of Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (BiLSTM). The CNN layers coped with the high dimensionality of text inputs, and the BiLSTM layer investigated the context of the derived features in both forward and backward directions.
The existing works using a reputation-based selection of web services have several limitations, including:
* Limited scope: Reputation-based selection approaches typically rely on feedback from a small subset of users, which may not be representative of the broader user community. This can result in biased or incomplete reputation scores.
* Difficulty in interpretability: Deep learning-based solutions are often complex and difficult to interpret, making it difficult to understand how they arrive at their reputation scores. This can limit the transparency of the reputation assessment process.
* Computational requirements: Hybrid deep learning models used for reputation assessment can be computationally intensive and require significant resources to train and evaluate. This can make them less suitable for use in resource-constrained environments, such as on mobile devices or in low-bandwidth networks.
* Limited generalization performance: Imbalanced datasets with few instances of negative feedback may result in biased reputation scores, as the model may be more likely to assign positive scores to services even if they are not of high quality.
* Difficulty in feature extraction: Imbalanced datasets may make it difficult for the deep learning model to extract meaningful features that accurately represent the characteristics of the service. This can result in poor model performance and inaccurate reputation scores.
## 4 Proposed Approach
Our proposed approach for computing Web services' reputation focuses on using deep learning models. This choice is justified by the fact that these models have proven their efficiency in sentiment analysis in several applications (i.e., social media monitoring, brand monitoring, market analysis, etc.), as demonstrated in the study presented in [15]. Our approach consists of four phases: 1) the data preprocessing phase, 2) the embedding generation phase, 3) the sentiment analysis phase, and 4) the reputation assessment phase. Figure 1 presents our approach with its different phases.
### Data Preprocessing Phase
This phase encloses four consecutive tasks (a minimal code sketch follows the list), which are:
1. Removing the invalid reviews: the reviews' dataset is examined to filter out invalid reviews. A review is considered invalid if: 1) it is empty, 2) it contains mainly tagged usernames, and 3) it provides mainly commercial URLs.
2. Word tokenizing and stemming: for each review, tokenization and stemming tasks are performed. Tokenization aims to divide a text into small units called tokens, which refer in our context to words composing the whole review. Stemming aims to reduce a word to its word stem. For example, the stem word of "understanding" is "understand", which is obtained by removing the affix from "understanding".
3. Stop words, special characters, and punctuation marks removing: stop words such as "a", "of", and "in" are words that need to be filtered out since they do not contribute much to the overall meaning of the review. Also, special characters (i.e., "@", "%", "/", etc.) and punctuation marks are eliminated to increase the accuracy of the sentiment classification phase.
Figure 1: Proposed approach
4. Part-of-speech (POS) tagging: This task aims to convert each review into a set of tuples where each tuple has a form (word, tag). The tag signifies whether the word is a noun, adjective, verb, etc. After applying POS tagging, only nouns, and adjectives are kept since they both play a key role in the distinction of the sentiment polarity of the review.
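The following is a minimal sketch of these four preprocessing tasks using NLTK; the library choice, the filtering heuristics, and the example review are illustrative assumptions rather than details taken from the paper.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk import word_tokenize, pos_tag

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def preprocess_review(review):
    # 1) Discard invalid reviews (empty, mostly tagged usernames, or mostly URLs).
    if not review.strip() or review.count("@") > 3 or "http" in review:
        return []
    # 2) Tokenize the review into words.
    tokens = word_tokenize(review.lower())
    # 3) Remove stop words, special characters, and punctuation marks.
    stop = set(stopwords.words("english"))
    tokens = [t for t in tokens if t.isalpha() and t not in stop]
    # 4) POS tagging: keep only nouns and adjectives, then stem them.
    stemmer = PorterStemmer()
    kept = [word for word, tag in pos_tag(tokens) if tag.startswith(("NN", "JJ"))]
    return [stemmer.stem(word) for word in kept]

print(preprocess_review("Understanding AWS Lambda was easy and the service is reliable!"))
```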
### Embeddings' Generation Phase
A word embedding is a learned representation for text where words that have the same meaning have a similar representation. Word embeddings are a class of techniques where individual words are represented as real-valued vectors in a predefined vector space. Each word is mapped to one vector, and the vector values are learned in a way that resembles a neural network. Hence the technique is often lumped into the field of deep learning. To represent the preprocessed data, we proceed with the following successive steps (a short code sketch follows the list):
1. Create a word-to-index dictionary: each word will be assigned to a key, and the unique matching index is used as the value for the key.
2. Padding: Padding is the process of setting a fixed length to sentences. Every sentence has a different length so we will set the maximum size of each list of sentences to 50 as an example. If the list's size is greater than 50, it will be trimmed to 50. And for the lists with a length of less than 50, we will add 0 at the end until it reaches the maximum length.
3. Create a feature matrix: We will load the GloVe word embeddings, an algorithm for obtaining vector representations for words, and build a dictionary that includes words as keys and their corresponding embedding vectors as values.
4. Create embedding matrix: The matrix will have columns where all columns contain the GloVe word embeddings for the words, and each row will match the corresponding index.
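A minimal sketch of these four steps using Keras utilities and pre-trained GloVe vectors is given below; the toy corpus, the GloVe file name, and the embedding dimension of 100 are assumptions for illustration (only the maximum length of 50 is taken from the text above).

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

reviews = ["great scalable service", "slow support and frequent outages"]  # toy corpus

# 1) Word-to-index dictionary.
tokenizer = Tokenizer()
tokenizer.fit_on_texts(reviews)
sequences = tokenizer.texts_to_sequences(reviews)

# 2) Padding/truncating every review to a fixed length of 50.
max_len = 50
X = pad_sequences(sequences, maxlen=max_len, padding="post", truncating="post")

# 3) Load GloVe vectors into a word -> vector dictionary (file path is an assumption).
embedding_dim = 100
glove = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        glove[parts[0]] = np.asarray(parts[1:], dtype="float32")

# 4) Build the embedding matrix: row i holds the GloVe vector of the word with index i.
vocab_size = len(tokenizer.word_index) + 1
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, i in tokenizer.word_index.items():
    if word in glove:
        embedding_matrix[i] = glove[word]
```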
### Classification Phase
We propose a deep learning-based sentiment analysis method to ensure review classification. This method relies on the LSTM model. LSTM is a Recurrent Neural Network (RNN) variant specifically designed to better handle long-term dependencies in sequential data. Compared to traditional RNNs, LSTM can selectively forget or remember previous inputs and outputs, allowing it to capture more complex patterns in sequential data. In the context of text classification for sentiment analysis, LSTM can bring several improvements over traditional RNNs:
* Better handling of long-term dependencies: Sentiment analysis often requires understanding the context and meaning of words and phrases over long sequences of text. LSTM can better capture these dependencies and make more accurate predictions compared to traditional RNNs.
* Improved memory: Since LSTM can selectively remember or forget previous inputs and outputs, it can retain useful information and discard irrelevant information more effectively. This makes it easier for LSTM to identify important features for sentiment analysis and make more accurate predictions.
* Reduced vanishing gradient problem: Traditional RNNs can suffer from the vanishing gradient problem, where the gradients become very small, and the model stops learning effectively. LSTM can alleviate this problem by using gating mechanisms to control the flow of information and gradients through the network.
Figure 2 presents the architecture of the LSTM model used for microservices' reviews classification.
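As a sketch of how such a classifier could be assembled in Keras (reusing `vocab_size`, `embedding_dim`, `max_len`, and `embedding_matrix` from the embedding sketch above), the layer sizes and dropout rate are illustrative assumptions; the optimizer, loss, and softmax output follow the configuration reported in Section 5.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense

model = Sequential([
    # Embedding layer initialized with the GloVe matrix and kept frozen.
    Embedding(input_dim=vocab_size, output_dim=embedding_dim,
              weights=[embedding_matrix], input_length=max_len, trainable=False),
    LSTM(128),                       # recurrent encoder of the review sequence
    Dropout(0.3),                    # regularization (assumed rate)
    Dense(2, activation="softmax"),  # positive / negative polarity
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=20, batch_size=64)
```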
### Reputation Assessment Phase
The objective of this phase is to use the NBR formula to assess the reputation of Web microservice providers, which also serves to validate the effectiveness of the proposed model for reputation assessment. The NBR formula determines the net value of a brand's reputation based on published reviews, utilizing sentiment analysis to measure customer satisfaction. The NBR index emphasizes positive feedback from brand advocates more than negative feedback, and its output can range from -100 to 100, with higher values indicating a greater number of positive reviews. Equation 1 illustrates the NBR formula.
\[NBR=\frac{PositiveReviews-NegativeReviews}{PositiveReviews+NegativeReviews}\times 100 \tag{1}\]
Figure 2: LSTM Architecture for Microservices’ Reviews Classification
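For illustration, Equation 1 reduces to a one-line computation; the sketch below uses the TP/TN counts reported later in Section 5 (2039 positive and 112 negative predictions).

```python
def net_brand_reputation(positive_reviews, negative_reviews):
    """Net Brand Reputation in percent, following Equation 1."""
    return (positive_reviews - negative_reviews) / (positive_reviews + negative_reviews) * 100

print(net_brand_reputation(2039, 112))  # ~89.6, close to the score reported in Section 5
```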
## 5 Experiments
In this section, firstly, we will present the details of the implementation. Next, we will describe the dataset and the performance metrics. Finally, we will provide a detailed explanation of the results and make comparisons with existing deep-learning models used for text mining.
### Implementation Environment
The experiments in this paper are carried out on a PC with the following configuration properties: an x64 CPU, an Intel Core i9-11900H (11th Gen), 32 GB RAM, and an NVIDIA GeForce RTX 3080 (8G) graphics card. All experiments were carried out on Google Colab, with Python 3.7.10 and Keras 2.4.3.
### Dataset
The reviews are scraped from multiple review websites, including Capterra, g2, Gartner, TrustRadius, Software Advice, GetApp, Trust Pilot, and Spiceworks. The reviews are about 15 Amazon Web microservices. The collected dataset contains 10,676 reviews, including 10,155 (95%) "Positive" reviews and 521 (5%) "Negative" reviews. Duplicates and noises were removed from reviews. Due to the enormous amount of gathered reviews that was processed, manual labeling of this dataset was impracticable. For this reason, we applied a two-stage labeling approach. Firstly, a sentiment analysis technique was utilized to label the dataset automatically. Then, reviews of the minority class were carefully reviewed and re-labeled based on specific features. The dataset was split into 80% for model training and 20% for validation and testing.
### Performance Metrics
In order to assess the overall performance of the proposed approach, the accuracy, precision, recall, and F1-score metrics were used (a short code sketch follows Equations 2-5). These statistical measures are represented mathematically in Equations 2-5, where TP, TN, FP, and FN represent the number of True Positives, True Negatives, False Positives, and False Negatives, respectively.
**Accuracy:** it is used to evaluate the model's overall performance throughout all categories.
\[Accuracy=\frac{TP+TN}{TP+TN+FP+FN} \tag{2}\]
**Precision:** it is used to assess the model's accuracy in classifying a sample as positive or negative.
\[Precision=\frac{TP}{TP+FP} \tag{3}\]
**Recall:** it is employed to assess the model's ability to identify the positive samples.
\[Recall=\frac{TP}{TP+FN} \tag{4}\]
**F1-score:** it combines the accuracy and recall measurements to produce a value-added rating for performance verification.
\[\text{F1-score}=\frac{2\times Precision\times Recall}{Precision+Recall} \tag{5}\]
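As a small sketch, the four measures above can be computed directly with scikit-learn; the label vectors below are toy values, not the paper's data.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]  # 1 = positive review, 0 = negative review
y_pred = [1, 0, 1, 0, 0, 1]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
```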
### Results and Discussion
The main goal of the proposed approach is to classify microservice reviews properly. This was accomplished using RNN, GRU, CNN, and LSTM. Across 20 epochs, the four deep-learning architectures were trained. The Adam optimizer, the cross-entropy loss function, and the SoftMax activation function have been employed for the models' configuration.
As shown by the performance results in Table 1, our model outperforms all the other models for the weighted average by ensuring an overall accuracy of 91%, a precision and an F1-score of 92%, and a recall of 90%. The training time for each model is shown in Table 2. The results show that CNN takes the least training time, followed by LSTM. As compared to the training times of RNN and GRU models, the training time of our suggested classifier was acceptable. The considered dataset was a highly imbalanced dataset.
It is challenging for any classifier to predict a class accurately based on a few hundred instances.
\begin{table}
\begin{tabular}{|c|l|l|l|l|} \hline
**Deep Learning Model** & **Accuracy (\%)** & **Precision (\%)** & **Recall (\%)** & **F1-score (\%)** \\ \hline
RNN & 87 & 88 & 86 & 88 \\ \hline
GRU & 81 & 87 & 90 & 91 \\ \hline
CNN & 88 & 92 & 87 & 89 \\ \hline
LSTM (Proposed Model) & 91 & 92 & 90 & 92 \\ \hline
\end{tabular}
\end{table}
Table 1: Weighted Average Measures of Accuracy, Precision, Recall, and F1-score for RNN, GRU, CNN, and LSTM Models Used for Microservices’ Reviews Classification
\begin{table}
\begin{tabular}{|c|c|} \hline
**Deep Learning Model** & **Training Time (ms)** \\ \hline
RNN & 698.13 \\ \hline
GRU & 785.66 \\ \hline
CNN & 352.33 \\ \hline
LSTM (Proposed Model) & 398.41 \\ \hline
\end{tabular}
\end{table}
Table 2: Training Time for RNN, GRU, CNN, and LSTM Models Used for Microservices’ Reviews Classification
Only 521 negative reviews are included in the whole dataset, with only 104 of them used for testing and validation. To address the imbalance problem, various resampling strategies were tested. These include oversampling, undersampling, SMOTE, and ADASYN strategies. Figure 3 shows the training and validation loss learning curves of LSTM with different resampling techniques. All of the other techniques, with the exception of oversampling, appear to be unable to solve the typical underfitting problem during model training.
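A minimal sketch of the oversampling strategy with the imbalanced-learn library is shown below; the toy feature matrix, class ratio, and random seed are illustrative assumptions standing in for the padded review sequences and their labels.

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler

# Toy imbalanced data standing in for the padded review sequences and labels.
X = np.random.randint(0, 1000, size=(100, 50))  # 100 reviews, max_len = 50
y = np.array([1] * 95 + [0] * 5)                # 95% positive, 5% negative

ros = RandomOverSampler(random_state=42)
X_res, y_res = ros.fit_resample(X, y)
print(np.bincount(y_res))  # the minority class is duplicated until both classes have 95 samples
```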
Figure 3: Training Loss and Validation Loss Learning Curves of LSTM Plotted as a Function of the Epoch Number after Applying Different Resampling Strategies.
Figure 4 shows the classification report obtained after applying oversampling, with the same number of positive and negative reviews in the testing set, which is 1,000. The results confirmed the oversampling strategy's effectiveness since it provided considerable improvements in performance compared to testing results without a resampling strategy. Before oversampling, the model had an accuracy of 91% and a precision of 92%. However, after oversampling the data, the model's accuracy and precision increased to 93%.
The analysis of the binary matrix revealed that the number of positive reviews was 2039, denoted by the TP value, while the number of negative reviews was 112, represented by the TN value. Substituting these values into equation 1, the NBR score for AWS microservices was computed as 89.58%. Moreover, the testing dataset comprised 2,031 positive reviews and 104 negative reviews, leading to an estimated reputation score of 90.25% for AWS microservices. Comparing the NBR score generated using LSTM-based techniques for reputation assessment with the score obtained from the original dataset revealed close similarity between the two values. These results imply that the LSTM-based approach can be a reliable and effective technique for assessing the reputation of microservices providers.
## 6 Conclusion and Future Work
This study develops a deep learning model to classify web microservice user-related reviews based on sentiments derived from collected users' reviews. The proposed deep learning model, LSTM, outperforms other existing models used in text classification for sentiment analysis, such as RNN, GRU, and CNN. The aim of this approach is to establish a reputation ranking for microservices providers by analyzing the QoE of their users. The QoE is gauged by classifying reviews as "positive" or "negative" and comprehensively evaluating users' opinions towards the service providers. Our upcoming work involves the integration of advanced natural language processing techniques to enhance the precision of sentiment analysis. This may entail the use of sophisticated deep learning models, like Transformers, that leverage attention mechanisms to more effectively comprehend the nuances of language and context in reviews. Additionally, we will investigate the impact of the suspicious user punishment mechanism on the reputation
of service providers, and we will propose viable solutions to address the challenges posed by unjust feedback ratings.
Figure 4: Classification report
## Acknowledgment
The author would like to thank Prince Sultan University for financially supporting the conference registration fees.
|
2310.18857
|
Toward Local Madelung Mechanics in Spacetime
|
It has recently been shown that relativistic quantum theory leads to a local
interpretation of quantum mechanics wherein the universal wavefunction in
configuration space is entirely replaced with an ensemble of local fluid
equations in spacetime. For want of a fully relativistic quantum fluid
treatment, we develop a model using the nonrelativistic Madelung equations, and
obtain conditions for them to be local in spacetime. Every particle in the
Madelung fluid is equally real, and has a definite position, momentum, kinetic
energy, and potential energy. These are obtained by defining quantum momentum
and kinetic energy densities for the fluid and separating the momentum into
average and symmetric parts, and kinetic energy into classical kinetic and
quantum potential parts. The two types of momentum naturally give rise to a
single classical kinetic energy density, which contains the expected kinetic
energy, even for stationary states, and we define the reduced quantum potential
as the remaining part of the quantum kinetic energy density. We treat the
quantum potential as a novel mode of internal energy storage within the fluid
particles, which explains most of the nonclassical behavior of the Madelung
fluid. For example, we show that in tunneling phenomena the quantum potential
negates the barrier so that nothing prevents the fluid from flowing through. We
show how energy flows and transforms in this model, and that enabling local
conservation of energy requires defining a quantum potential energy current
that flows through the fluid rather than only flowing with it. The
nonrelativistic treatment generally contains singularities in the velocity
field, which undermines the goal of local dynamics, but we expect a proper
relativistic treatment will bound the fluid particle velocities at $c$.
|
Mordecai Waegell
|
2023-10-29T00:44:15Z
|
http://arxiv.org/abs/2310.18857v2
|
# Toward Local Madelung Mechanics in Spacetime
###### Abstract
It has recently been shown that relativistic quantum theory leads to a local interpretation of quantum mechanics wherein the universal wavefunction in configuration space is entirely replaced with an ensemble of local fluid equations in spacetime. For want of a fully relativistic quantum fluid treatment, we develop a model using the nonrelativistic Madelung equations, and obtain conditions for them to be local in spacetime. Every particle in the Madelung fluid is equally real, and has a definite position, momentum, kinetic energy, and potential energy. These are obtained by defining quantum momentum and kinetic energy densities for the fluid and separating the momentum into average and symmetric parts, and kinetic energy into classical kinetic and quantum potential parts. The two types of momentum naturally give rise to a single classical kinetic energy density, which contains the expected kinetic energy, even for stationary states, and we define the reduced quantum potential as the remaining part of the quantum kinetic energy density. We treat the quantum potential as a novel mode of internal energy storage within the fluid particles, which explains most of the nonclassical behavior of the Madelung fluid. For example, we show that in tunneling phenomena the quantum potential negates the barrier so that nothing prevents the fluid from flowing through. We show how energy flows and transforms in this model, and that enabling local conservation of energy requires defining a quantum potential energy current that flows through the fluid rather than only flowing with it. The nonrelativistic treatment generally contains singularities in the velocity field, which undermines the goal of local dynamics, but we expect a proper relativistic treatment will bound the fluid particle velocities at \(c\).
## 1 Introduction
Since the early days of quantum theory, researchers like Madelung [1] have recognized that the Schrodinger dynamics can be recast as equations of fluid
dynamics, where the conservation of probability current is mapped to conservation of fluid current. This leads naturally to a many-worlds interpretation of Born rule probabilities as proportions of fluid current [2], where individual experiences of collapse are mapped to individual particles within the fluid (e.g., if we observe a quantum particle to reflect off a beam splitter with twice the probability that it is transmitted, then twice as many of its constituent fluid particles are reflected as are transmitted).
For a single quantum particle (say, an electron), this dynamics occurs in the familiar three spatial dimensions and one time dimension, but for \(N\) entangled particles, the standard nonrelativistic treatment occurs in a configuration space with \(3N\) spatial dimensions, although there is still only one time dimension. The fluid analogy still holds in this space, but now a single 'particle' in the fluid describes all \(N\) particles in 3-space at once. For a fluid in 3-space, we can consider the kinetic and potential energies of the individual particles in the fluid, and corresponding energy densities for the bulk fluid itself, but these classical quantities are not well-defined for the 'particles' in \(3N\)-dimensional space, which significantly undermines the analogy.
However, the local interpretation of relativistic quantum physics from [2] restores all of the dynamics to 3-space, which allows the full classical analogy to be recovered. In this treatment a single quantum particle typically comprises several different fluids in spacetime, with indexes to indicate which distinct fluid each particle belongs to. The different fluids of a given quantum system only interact with one another when there is a local coupling to another quantum system. The indexed fluids of a single quantum system allow for the treatment of entanglement and internal degrees of freedom like spin, with all of the dynamics occurring in 3-space where the particle energies and fluid energy densities are all well-defined. These are essentially classical fluids from a mathematical standpoint, but they behave quite differently from any standard classical liquid or gas because of the quantum potential.
The purpose of this research program is to interpret the quantum potential as a standard classical energy in 3-space and ideally to identify the local interaction rules for the particles in the fluid. We will begin with the nonrelativistic Schrodinger equation and the corresponding Madelung equations, with the ultimate aim of carrying the physical intuition we develop here over to the relativistic case.
## 2 Model Overview
We begin by defining a classical momentum density \(p_{q}(\vec{x},t)\) using the integrand of the expectation value of the quantum momentum operator of the single-particle Madelung fluid in 3-space, \(\psi^{*}\hat{p}\psi\), with the wavefunction expressed in eikonal form, \(\psi(\vec{x},t)\equiv R(\vec{x},t)e^{iS(\vec{x},t)/\hbar}\). The real part of this density is the fluid current, with particle velocity \(\vec{\nabla}S/m\), and the imaginary part is associated with additional momentum that averages to zero over all of the fluid particles at a given event in spacetime, with particle velocity \(\left|\frac{\hbar}{m}\frac{\vec{\nabla}R}{R}\right|\hat{r}_{i}\), and unit vector \(\hat{r}_{i}\).
This gives us a distribution of definite particle velocities at each event, such that \(\sum_{i}\hat{r}_{i}=0\), so the imaginary part contributes nothing to the net current of the fluid.
Next, we define a classical energy density using the integrand of the expectation value of the quantum kinetic energy operator, \(\psi^{*}\hat{K}\psi\), and separate it into real positive terms that correspond to the classical kinetic energy density associated with the particle velocities found above, and other real terms that can become negative, which we identify as the (reduced) quantum potential energy density. There are also several imaginary terms we ignore, which integrate to zero. When the Schrodinger equation density is expanded out, those imaginary terms belong to the continuity equation, while the real terms relate to the evolution of the fluid current. We thus find that the 'quantum kinetic energy density' is really a sum of a classical kinetic and quantum potential energy densities, which suggests that the quantum potential has a kinetic character, even though it can become negative.
There are two ways to group the terms that appear in the quantum kinetic energy density \(k_{q}\), and each has its own conceptual advantages. The standard way [1], [3] is to define a kinetic energy density \(k_{a}=\frac{1}{2m}R^{2}|\vec{\nabla}S|^{2}\) corresponding to the net fluid flow, and the quantum potential energy density as \(q=-\frac{\hbar^{2}}{2m}R\nabla^{2}R\), such that \(k_{q}=k_{a}+q\). However, the quantum potential density can be broken down into a kinetic energy density \(k_{s}=\frac{\hbar^{2}}{2m}|\vec{\nabla}R|^{2}\) corresponding to the motion that averages to zero over the velocity distribution, and a reduced quantum potential \(q_{r}=-\frac{\hbar^{2}}{4m}\nabla^{2}R^{2}\), such that \(q=k_{s}+q_{r}\), and \(k_{q}=k_{a}+k_{s}+q_{r}\). Averaging over the velocity distribution, all of the cross terms cancel so that the classical kinetic energy density associated with the total velocity \(\frac{1}{m}\big{(}\vec{\nabla}S+\hbar\frac{\vec{\nabla}R}{R}\hat{r}_{i}\big{)}\) of each fluid particle can be written as \(k_{c}=k_{a}+k_{s}\), so the second way we can group terms is \(k_{q}=k_{c}+q_{r}\). This is probably the more physically correct grouping, since \(k_{a}=0\) for stationary states, while \(k_{s}\) (and thus \(k_{c}\)) integrates to the expectation value of kinetic energy. Furthermore, there is experimental evidence that there is motion in stationary states [4], because the laboratory-frame half-lives of muons in stationary states of muonic atoms are time-dilated in a way that appears to relate to the speed \(\Big{|}\frac{\hbar}{m}\frac{\vec{\nabla}R}{R}\Big{|}\).
However, as we will show, the reduced quantum potential acts as an intermediary between the two kinetic energy terms, so relative to the average kinetic energy \(k_{a}\), it is reasonable to think of both the symmetric kinetic energy \(k_{s}\) and reduced quantum potential \(q_{r}\) as a single entity, the usual quantum potential \(q\), and we would use the standard grouping. To see how the energy flows between these types, we construct the continuity equations for each of the above energy densities, and match their source/sink terms, which give us \(k_{s}{\leftrightarrow}q_{r}{\leftrightarrow}k_{a}{\leftrightarrow}u\), where \(u=R^{2}U\) is the external potential energy density.
This calculation also reveals that energy can only be locally conserved if the (reduced) quantum potential energy can flow between particles in the fluid, in addition to flowing with them. We compute the necessary energy current for local energy conservation in the case that the reduced quantum potential
flows through the fluid. We also compute it for the case that the entire standard quantum potential \(q\) flows through the fluid, which is consistent with a model in which all of \(q\) is treated as an internal potential energy, but inconsistent with our model, since the symmetric kinetic energy \(k_{s}\) should flow with the fluid, not through it.
We will show that any discontinuity in \(u\) is exactly canceled by an opposite discontinuity in \(q_{r}\), so the total potential energy density \(q_{r}+u\) is a continuous function, and thus the two types of potential energy seem to naturally lock together. It is also interesting to note that if we also include \(k_{s}\) as part of the potential energy relative to the flow energy \(k_{a}\), the potential energy density \(q+u\) is (mostly) smooth, and appears to coincide more naturally with the fluid density and with physical intuition about how it flows. We will demonstrate these features in our tunneling example, and show how the quantum potential negates the barrier, providing a simple explanation of how the otherwise-classical fluid flows through.
Finally, we have made two somewhat arbitrary assumptions in constructing this model, either of which might be subject to experimental falsification. The first is that there is motion related to speed \(\left|\frac{\hbar}{m}\frac{\vec{\nabla}R}{R}\right|\) even in stationary states. The past experiments with muonic atoms were of low fidelity, but a new set of muon experiments in which the shape of stationary wavefunctions is carefully controlled could allow us to measure whether the average half-life of muons in the fluid is really consistent with this velocity distribution. If this first assumption seems to be borne out, this would also lend support for the idea that only the reduced quantum potential is a true internal energy, which is our second assumption. Either way, these experiments would probe an important and under-explored interface between quantum mechanics and relativity, which could help us to reconcile stationary quantum states with time dilation, even if this fluid model is falsified. Beyond this, We believe a new generation of muon experiments would be invaluable for shedding light on the interplay between quantum mechanics and relativity, both special and general [5].
## 3 The Formalism
### The quantum momentum density
We define the quantum momentum density \(\vec{p}_{q}(\vec{x},t)\) as \(\psi^{*}(\vec{x},t)\hat{p}\psi(\vec{x},t)\), in the sense that the integral of this quantity over space gives the expectation value of the momentum. Taking the eikonal form of the wavefunction, \(\psi=Re^{iS/\hbar}\),
\[\vec{p}_{q}(\vec{x},t)=\psi^{*}(\vec{x},t)\hat{p}\psi(\vec{x},t)=-i\hbar R( \vec{x},t)e^{-iS(\vec{x},t)/\hbar}\vec{\nabla}R(\vec{x},t)e^{iS(\vec{x},t)/ \hbar} \tag{1}\]
\[=-i\hbar Re^{-iS/\hbar}\big{(}\vec{\nabla}R+\frac{i}{\hbar}R\vec{\nabla}S \big{)}e^{iS/\hbar}\]
\[=R\big{(}R\vec{\nabla}S-i\hbar\vec{\nabla}R\big{)}.\]
We define the _average momentum density_ as the real quantum momentum density,
\[\vec{p}_{a}(\vec{x},t)\equiv R^{2}\vec{\nabla}S=m\rho\vec{v}_{a}=m\vec{j}, \tag{2}\]
where the density \(\rho\equiv R^{2}\) is understood as describing a locally conserved fluid, and the average local velocity of the fluid-particles is \(\vec{v}_{a}(\vec{x},t)=\vec{j}(\vec{x},t)/\rho(\vec{x},t)=\vec{\nabla}S(\vec{x },t)/m\), where \(\vec{j}\) is the probability (fluid) current.
As a brief aside and cautionary note, the spatial integral over this density gives the expectation value of the momentum, which is consistent with the ensemble average after measuring the particle momentum many times, where each result appears to collapse to a random momentum eigenstate with Born rule probability. However, unlike a position measurement, where the ensemble average probability to find the particle near \(\vec{x}\) matches the density \(\rho(\vec{x})\), the momentum eigenstates are spread over space. To find the probability of each outcome one must decompose the entire fluid state into this set of delocalized states, which has little to do with the density \(\vec{p}_{q}(\vec{x})\) at each location \(\vec{x}\), so there is no joint probability distribution for which the outcomes of both position and momentum measurements are marginals (Wigner functions are as close as one can get). In general, it is the measurement apparatus which determines the eigenbasis into which the fluid must be decomposed, and with the exception of position measurements, these eigenstates are spread out in space. However, the local density \(\vec{p}_{q}(\vec{x})\) is of physical importance, because it correctly describes where the momentum can be found, regardless of what basis is used to measure it. For example, if the fluid is coherently separated into multiple regions, and local momentum modes are measured in those separate regions, the expectation value in each region is still the integral of \(\vec{p}_{q}(\vec{x})\) over that region.
Now, the expected classical kinetic energy of a fluid particle with velocity \(\vec{v}_{a}(\vec{x},t)\) at \(\vec{x}\) and \(t\) is then \(K_{a}(\vec{x},t)=\frac{1}{2}m\vec{v}_{a}(\vec{x},t)\cdot\vec{v}_{a}(\vec{x},t)\), so the expected classical kinetic energy density associated with the average local velocity of the fluid is,
\[k_{a}(\vec{x},t)\equiv\rho(\vec{x},t)K_{a}(\vec{x},t)=\frac{1}{2m}R^{2}(\vec{ \nabla}S\cdot\vec{\nabla}S). \tag{3}\]
We are making progress, but there still seems to be a discrepancy in this analysis having to do with stationary states. The classical analogs of the energy eigenstates describe particles in motion, and experiments with muonic atoms show that this motion slows the proper time of the muons relative to the lab frame, but stationary states have \(\vec{v}_{a}=0\), so this motion seems to be missing.
To resolve this issue, we define the imaginary part of the quantum momentum density as the _symmetric momentum density_,
\[\vec{p}_{s}(\vec{x},t)\equiv-\hbar R(\vec{x},t)\vec{\nabla}R(\vec{x},t)=m\rho \vec{v}_{s}(\vec{x},t), \tag{4}\]
so the _symmetric velocity_ is
\[\vec{v}_{s}(\vec{x},t)=-\frac{\hbar}{m}\frac{\vec{\nabla}R(\vec{x},t)}{R( \vec{x},t)}. \tag{5}\]
The kinetic energy and kinetic energy density associated with this velocity are then,
\[K_{s}=\tfrac{1}{2}m\vec{v}_{s}\cdot\vec{v}_{s}=\tfrac{\hbar^{2}}{2m}\tfrac{\vec{ \nabla}R\cdot\vec{\nabla}R}{R^{2}},\qquad k_{s}=\rho K_{s}=\tfrac{\hbar^{2}}{2m} \vec{\nabla}R\cdot\vec{\nabla}R. \tag{6}\]
This energy is not zero for stationary states, and for simple cases like the infinite square well, all of the energy is of this type, so it seems we are on the right track to identify the motion in this state. However, this energy is not associated with a change in the fluid density \(R^{2}\). To explain this we argue that, unlike a classical continuum fluid, the quantum fluid does not have a single velocity at each event in spacetime \((\vec{x},t)\), but rather a distribution of velocities,
\[\vec{v}_{i}(\vec{x},t)=\vec{v}_{a}(\vec{x},t)+|\vec{v}_{s}(\vec{x},t)|\hat{r}_ {i}(\vec{x},t), \tag{7}\]
with unit vectors \(\hat{r}_{i}(\vec{x},t)\) such that,
\[\sum_{i}\hat{r}_{i}(\vec{x},t)=0 \tag{8}\]
(thus symmetric). We then have \(\vec{v}_{a}=(\vec{v}_{i})_{\rm{ave}}\) because all of \(|\vec{v}_{s}(\vec{x},t)|\hat{r}_{i}(\vec{x},t)\) components cancel and contribute nothing to the net flux of fluid particles at event \((\vec{x},t)\), and \(\vec{v}_{a}(\vec{x},t)\) really is the average of the local velocity distribution, which gives the (net) local fluid current \(\vec{j}(\vec{x},t)=\rho(\vec{x},t)\vec{v}_{a}(\vec{x},t).\) In this picture, the fluid particles are essentially of zero volume, and do not exert direct forces on one another. All of their interactions are collective and occur through the quantum potential, which we will demonstrate in later sections. Effectively, all of the fluid particles contribute to creating a collective quantum potential surface, and in turn, each of the fluid particles moves in that potential.
With this velocity, the classical kinetic energy of a particle in the fluid is \(K_{i}(\vec{x},t)=\tfrac{1}{2}m\vec{v}_{i}(\vec{x},t)\cdot\vec{v}_{i}(\vec{x},t)\), and the average classical kinetic energy at event \((\vec{x},t)\) is \(K_{c}=(K_{i})_{\rm{ave}}=K_{a}+K_{s}\) (the cross-terms cancel), and thus the classical kinetic energy density is,
\[k_{c}(\vec{x},t)=\rho(\vec{x},t)K_{c}(\vec{x},t)=k_{a}+k_{s}. \tag{9}\]
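To spell out the cancellation of the cross terms, a step not written out explicitly above, note that
\[K_{i}=\tfrac{1}{2}m\,\vec{v}_{i}\cdot\vec{v}_{i}=\tfrac{1}{2}m\,\vec{v}_{a}\cdot\vec{v}_{a}+\tfrac{1}{2}m\,|\vec{v}_{s}|^{2}+m\,|\vec{v}_{s}|\,\vec{v}_{a}\cdot\hat{r}_{i},\]
and averaging over the local velocity distribution, the last term vanishes by Eq. 8, leaving \((K_{i})_{\rm{ave}}=K_{a}+K_{s}\).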
We have inferred this classical kinetic energy, and the two terms it comprises, from the quantum momentum density, and some considerations about motion in stationary states. Next we want to see if this kinetic energy appears in the quantum kinetic energy density.
### The quantum kinetic energy, classical kinetic energy, and quantum potential energy densities
We define the quantum kinetic energy density \(k_{q}(\vec{x},t)\) in standard quantum theory as the real part of \(\psi^{*}(\vec{x},t)\hat{K}\psi(\vec{x},t)\), in the sense that the integral of this quantity over space gives the expectation value of the kinetic energy.
\[\psi^{*}(\vec{x},t)\hat{K}\psi(\vec{x},t)=-\frac{\hbar^{2}}{2m}R(\vec{x},t)e^ {-iS(\vec{x},t)/\hbar}\nabla^{2}R(\vec{x},t)e^{iS(\vec{x},t)/\hbar} \tag{10}\]
\[=-\frac{\hbar^{2}}{2m}Re^{-iS/\hbar}\vec{\nabla}\cdot\big{(}\vec{\nabla}R+\frac{i}{ \hbar}R\vec{\nabla}S\big{)}e^{iS/\hbar}\]
\[=-\frac{\hbar^{2}}{2m}Re^{-iS/\hbar}\bigg{[}\big{(}\nabla^{2}R+\frac{i}{\hbar} \vec{\nabla}R\cdot\vec{\nabla}S+\frac{i}{\hbar}R\nabla^{2}S\big{)}e^{iS/\hbar} +\frac{i}{\hbar}\big{(}\vec{\nabla}R+\frac{i}{\hbar}R\vec{\nabla}S\big{)}\cdot (\vec{\nabla}S)e^{iS/\hbar}\bigg{]}\]
\[=-\frac{\hbar^{2}}{2m}R\bigg{(}\nabla^{2}R-\frac{1}{\hbar^{2}}R(\vec{\nabla}S \cdot\vec{\nabla}S)+\frac{2i}{\hbar}\vec{\nabla}R\cdot\vec{\nabla}S+\frac{i}{ \hbar}R\nabla^{2}S\bigg{)}.\]
The imaginary parts always integrate to zero, so while they are related to the continuity equation, dropping them here gives a real quantum kinetic energy density,
\[k_{q}(\vec{x},t)\equiv-\frac{\hbar^{2}}{2m}R\nabla^{2}R+\frac{1}{2m}R^{2}(\vec {\nabla}S\cdot\vec{\nabla}S). \tag{11}\]
The apparent trouble with this expression is that it can be negative, which makes it difficult to interpret as a kinetic energy density. The resolution is that it is not, in fact, only a kinetic energy, but rather the sum of a true classical kinetic energy density, which is always positive, and the quantum potential energy density, which may be negative.
We can identify the second term of the quantum kinetic energy density above as the classical kinetic energy density \(k_{a}\) we were expecting, which is always positive, so we will define the first as the quantum potential energy density. From Bohm/Madelung's definition of the quantum potential energy of a single classical particle at \(\vec{x}\),
\[Q(\vec{x},t)\equiv-\frac{\hbar^{2}}{2m}\frac{\nabla^{2}R}{R}=\frac{\hbar^{2}} {2m}\frac{\vec{\nabla}R\cdot\vec{\nabla}R}{R^{2}}-\frac{\hbar^{2}}{4m}\frac{ \nabla^{2}(R^{2})}{R^{2}}, \tag{12}\]
we see that this really is the density of quantum potential energy in the fluid of particles,
\[q(\vec{x},t)\equiv\rho(\vec{x},t)Q(\vec{x},t)=-\frac{\hbar^{2}}{2m}R\nabla^{2}R. \tag{13}\]
\[=\frac{\hbar^{2}}{2m}\vec{\nabla}R\cdot\vec{\nabla}R-\frac{\hbar^{2}}{4m} \nabla^{2}(R^{2}).\]
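The split used in Eqs. 12 and 13 rests on an elementary identity, spelled out here for completeness:
\[\nabla^{2}(R^{2})=\vec{\nabla}\cdot\big{(}2R\vec{\nabla}R\big{)}=2\,\vec{\nabla}R\cdot\vec{\nabla}R+2R\nabla^{2}R,\]
so that
\[-\frac{\hbar^{2}}{2m}\frac{\nabla^{2}R}{R}=\frac{\hbar^{2}}{2m}\frac{\vec{\nabla}R\cdot\vec{\nabla}R}{R^{2}}-\frac{\hbar^{2}}{4m}\frac{\nabla^{2}(R^{2})}{R^{2}}.\]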
Note that in the expanded form of the quantum potential, we can now identify the first term as our other classical kinetic energy density \(k_{s}\), so if we collect \(k_{s}\) and \(k_{a}\) together into a single classical kinetic energy density \(k_{c}\), we are left with a different term that we call the _reduced quantum potential_ energy density,
\[q_{r}\equiv q-k_{s}=-\frac{\hbar^{2}}{4m}\nabla^{2}(R^{2}), \tag{14}\]
corresponding to reduced quantum potential energy per fluid particle,
\[Q_{r}=\frac{q_{r}}{R^{2}}=-\frac{\hbar^{2}}{4m}\frac{\nabla^{2}(R^{2})}{R^{2 }}. \tag{15}\]
Thus we have broken the quantum kinetic energy density down into a well-behaved positive kinetic energy density and a new reduced quantum potential which represents a new type of internal energy carried by particles in the fluid,
\[k_{q}=k_{a}+q=k_{a}+k_{s}+q_{r}=k_{c}+q_{r}. \tag{16}\]
Defining the external potential energy density as \(u(\vec{x},t)\equiv\rho(\vec{x},t)U(\vec{x},t)\), where \(\hat{U}(\vec{x},t)\) is the external potential in which the particle (fluid) moves (assumed to be non-differential for this article), the total energy density is then,
\[e=k_{a}+q+u=k_{c}+q_{r}+u, \tag{17}\]
and the integral over this density is the constant energy expectation value.
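As a numerical illustration of the decomposition in Eq. 16, not taken from the original text, the sketch below evaluates \(k_{a}\), \(k_{s}\), and \(q_{r}\) on a grid for a free Gaussian wave packet (with \(\hbar=m=1\) and finite-difference derivatives, all assumptions of the sketch) and checks that their integral reproduces the expectation value of the kinetic energy.

```python
import numpy as np

hbar = m = 1.0
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

# Toy Gaussian wave packet with mean wavenumber k0 (illustrative parameters).
sigma, k0 = 1.5, 2.0
psi = np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)           # normalize

R = np.abs(psi)
S = hbar * np.unwrap(np.angle(psi))                   # eikonal phase

dR = np.gradient(R, dx)
dS = np.gradient(S, dx)

k_a = R**2 * dS**2 / (2 * m)                          # kinetic energy density of the net flow
k_s = hbar**2 * dR**2 / (2 * m)                       # symmetric kinetic energy density
q_r = -(hbar**2 / (4 * m)) * np.gradient(np.gradient(R**2, dx), dx)  # reduced quantum potential density

total = (k_a + k_s + q_r).sum() * dx                  # integral of k_c + q_r = k_q
K_expect = np.real((np.conj(psi) * (-(hbar**2) / (2 * m))
                    * np.gradient(np.gradient(psi, dx), dx)).sum() * dx)
print(total, K_expect)  # both ~ hbar^2 (k0^2 + 1/(4 sigma^2)) / (2 m) ~ 2.056
```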
## 4 Local Conservation of Fluid Density and Energy Density
### Time Evolution
Now that we have this new type of potential energy in play, it will be helpful to consider how it moves within the fluid, and how energy is transferred from one type to another. To begin, we return to the Schrodinger equation, and left-multiply by \(\psi^{*}\) to get the evolution equations for the densities,
\[i\hbar\psi^{*}\frac{\partial\psi}{\partial t}=\psi^{*}\hat{K}\psi+\psi^{*}\hat{U}\psi \tag{18}\]
\[=-\frac{\partial S}{\partial t}R^{2}+i\hbar R\frac{\partial R}{\partial t}\]
\[=-\frac{\hbar^{2}}{2m}R\nabla^{2}R+\frac{1}{2m}R^{2}(\vec{\nabla}S\cdot\vec{\nabla}S)+R^{2}U-\frac{i\hbar}{2m}\big{(}2R\vec{\nabla}R\cdot\vec{\nabla}S+R^{2}\nabla^{2}S\big{)}\]
As discussed, the imaginary part gives the continuity equation for the fluid density, which can be seen from,
\[\frac{\partial\rho}{\partial t}=-\vec{\nabla}\cdot\vec{j}=-\frac{1}{m}\vec{ \nabla}\cdot(\rho\vec{\nabla}S)=-\frac{1}{m}\big{(}\vec{\nabla}\rho\cdot\vec{ \nabla}S+\rho\nabla^{2}S\big{)} \tag{19}\]
\[=-\frac{1}{m}\big{(}2R\vec{\nabla}R\cdot\vec{\nabla}S+R^{2}\nabla^{2}S\big{)} =2R\frac{\partial R}{\partial t},\]
from which it follows that,
\[\frac{\partial R}{\partial t}=-\frac{1}{2m}\big{(}2\vec{\nabla}R\cdot\vec{ \nabla}S+R\nabla^{2}S\big{)}=-\frac{\vec{\nabla}\cdot(R^{2}\vec{\nabla}S)}{2mR}, \tag{20}\]
so we have the evolution equation for \(R\), and the amount of fluid is a locally conserved quantity.
The real part of Eq. 18 then gives us the evolution equation for \(S\),
\[\frac{\partial S}{\partial t}=-\Big{(}\frac{1}{2m}(\vec{\nabla}S\cdot\vec{\nabla} S)-\frac{\hbar^{2}}{2m}\frac{\nabla^{2}R}{R}+U\Big{)}, \tag{21}\]
from which we can get the evolution for \(\vec{\nabla}S\), which will be useful when we consider the time evolution of the energy densities,
\[\frac{\partial\vec{\nabla}S}{\partial t}=-\vec{\nabla}\bigg{(}\frac{1}{2m}( \vec{\nabla}S\cdot\vec{\nabla}S)-\frac{\hbar^{2}}{2m}\frac{\nabla^{2}R}{R}+U \bigg{)}. \tag{22}\]
### Energy Density Continuity Equations
Now, the continuity equations for the kinetic energy density, quantum potential energy density, and external potential energy density are, respectively,
\[\frac{\partial k_{a}}{\partial t}+\vec{\nabla}\cdot\vec{j}_{k}=\kappa,\qquad \frac{\partial q}{\partial t}+\vec{\nabla}\cdot\vec{j}_{q}=\alpha,\qquad\frac{ \partial u}{\partial t}+\vec{\nabla}\cdot\vec{j}_{u}=\beta. \tag{23}\]
where \(\kappa\), \(\alpha\), and \(\beta\) represents the source/sink terms, and \(\vec{j}_{k}\equiv k_{a}\frac{\vec{\nabla}S}{m}\), \(\vec{j}_{u}\equiv u\frac{\vec{\nabla}S}{m}\), and \(\vec{j}_{q}\equiv q\Big{(}\frac{\vec{\nabla}S}{m}+\vec{v}\Big{)}\) are the energy current densities, where the kinetic and potential energy are external properties of the particles in the fluid, and thus their densities flow with the average fluid velocity \(\frac{\vec{\nabla}S}{m}\). The (internal) quantum potential energy density flows with the fluid, but can also flow through the fluid (from particle to particle) with an additional current \(q\vec{v}\), which we have not yet defined.
Note that we have begun with the case that all of the quantum potential energy \(q\) is internal, and can flow between particles in the fluid, which is inconsistent with our model, but this still lays the mathematical groundwork for the case of our model, where only the reduced quantum potential energy \(q_{r}\) is internal and flows between fluid particles with relative current \(q_{r}\vec{v}_{r}\).
We will show that in either case, the requirement that energy is locally conserved fixes this additional current up to an additional divergence-free term.
So starting with the \(q\vec{v}\) case, the three source/sink terms are,
\[\kappa=\frac{1}{m}R^{2}\vec{\nabla}S\cdot\frac{\partial\vec{\nabla}S}{ \partial t}+\frac{1}{m^{2}}R^{2}|\vec{\nabla}S|\vec{\nabla}S\cdot\vec{\nabla} |\vec{\nabla}S| \tag{24}\]
\[=\frac{\hbar^{2}}{2m}R^{2}\vec{\nabla}S\cdot\vec{\nabla}\Big{(}\frac{\nabla^ {2}R}{mR}\Big{)}-\frac{1}{m}R^{2}\vec{\nabla}S\cdot\vec{\nabla}U,\]
\[\alpha=-\frac{\hbar^{2}}{2m}\bigg{(}R\nabla^{2}\frac{\partial R}{\partial t}- \frac{\partial R}{\partial t}\nabla^{2}R+R^{2}\vec{\nabla}S\cdot\vec{\nabla} \Big{(}\frac{\nabla^{2}R}{mR}\Big{)}+\vec{\nabla}\cdot\big{(}\vec{v}R\nabla^ {2}R\big{)}\bigg{)}, \tag{25}\]
and
\[\beta=R^{2}\frac{\partial U}{\partial t}+\frac{1}{m}R^{2}\vec{\nabla}S\cdot \vec{\nabla}U, \tag{26}\]
where we have used the continuity equation \(\frac{\partial R^{2}}{\partial t}+\vec{\nabla}\cdot\left(\frac{R^{2}\vec{\nabla}S}{m}\right)=0\) to eliminate some terms in these expressions, and substituted in Eq. 22. Matched terms in the three equations above represent local energy exchanges between the kinetic energy and the potential energies.
### Local Energy Conservation
If we add all of the source/sink terms up, we see that the matched terms cancel as expected, but a few terms remain,
\[\kappa+\alpha+\beta=-\frac{\hbar^{2}}{2m}\bigg{(}R\nabla^{2}\frac{\partial R }{\partial t}-\frac{\partial R}{\partial t}\nabla^{2}R+\vec{\nabla}\cdot\left( \vec{v}R\nabla^{2}R\right)\bigg{)}+R^{2}\frac{\partial U}{\partial t}. \tag{27}\]
If energy is to be locally conserved when \(\frac{\partial U}{\partial t}=0\), then the term in parentheses must be zero. Substituting in Eq. 20 we have,
\[0=-R\nabla^{2}\bigg{(}\frac{\vec{\nabla}\cdot(R^{2}\vec{\nabla}S)}{2mR} \bigg{)}+\bigg{(}\frac{\vec{\nabla}\cdot(R^{2}\vec{\nabla}S)}{2mR}\bigg{)} \nabla^{2}R+\vec{\nabla}\cdot\big{(}\vec{v}R\nabla^{2}R\big{)} \tag{28}\]
\[0=\vec{\nabla}\cdot\bigg{(}-R^{2}\vec{\nabla}\bigg{(}\frac{\vec{\nabla}\cdot( R^{2}\vec{\nabla}S)}{2mR^{2}}\bigg{)}+\vec{v}R\nabla^{2}R\bigg{)}.\]
Thus we have the additional current,
\[q\vec{v}=\frac{\hbar^{2}}{2m}R^{2}\vec{\nabla}\bigg{(}\frac{\vec{\nabla}\cdot( R^{2}\vec{\nabla}S)}{2mR^{2}}\bigg{)}+\vec{F}, \tag{29}\]
as the condition for local energy conservation in the quantum fluid, where \(\vec{F}\) is any field satisfying \(\vec{\nabla}\cdot\vec{F}=0\). If we assume a current that satisfies this condition, then the source/sink term for the quantum potential energy density simplifies to,
\[\alpha_{\rm{lec}}=-\frac{\hbar^{2}}{2m}R^{2}\vec{\nabla}S\cdot\vec{\nabla} \Big{(}\frac{\nabla^{2}R}{mR}\Big{)} \tag{30}\]
Now we move on to the case where only \(q_{r}\) is an internal energy that can flow between particles in the fluid, while \(k_{s}\) is now treated as external energy that only flows with the fluid. We can use the calculation we've already done as a shortcut for finding the additional current \(q_{r}\vec{v}_{r}\) needed for local energy conservation. The continuity equations for \(k_{s}\) and \(q_{r}\) are,
\[\frac{\partial k_{s}}{\partial t}+\vec{\nabla}\cdot\big{(}k_{s}\frac{\vec{ \nabla}S}{m}\big{)}=\alpha_{s},\qquad\frac{\partial q_{r}}{\partial t}+\vec{ \nabla}\cdot\Big{(}q_{r}\big{(}\frac{\vec{\nabla}S}{m}+\vec{v}\big{)}\Big{)}= \alpha_{r}, \tag{31}\]
where \(\alpha_{s}\) and \(\alpha_{r}\) are the source/sink terms, and \(\vec{v}\) is again treated as unknown. Now, noting that \(\alpha_{0}\equiv\alpha_{s}+\alpha_{r}=\alpha-\vec{\nabla}\cdot\big{(}k_{s}\vec {v}\big{)}\), where this is the old \(\alpha\) from Eq. 27, the new sum of sinks and sources is,
\[\kappa+\alpha_{0}+\beta=\kappa+\alpha+\beta-\vec{\nabla}\cdot\big{(}k_{s}\vec {v}\big{)} \tag{32}\]
\[=-\frac{\hbar^{2}}{2m}\bigg{(}R\nabla^{2}\frac{\partial R}{\partial t}-\frac{ \partial R}{\partial t}\nabla^{2}R+\vec{\nabla}\cdot\big{(}\vec{v}(R\nabla^{2}R+ \vec{\nabla}R\cdot\vec{\nabla}R)\big{)}\bigg{)}+R^{2}\frac{\partial U}{\partial t}.\]
Following the same reasoning as before, to locally conserve energy we must now have,
\[0=\vec{\nabla}\cdot\bigg{(}-R^{2}\vec{\nabla}\bigg{(}\frac{\vec{\nabla}\cdot(R ^{2}\vec{\nabla}S)}{2mR^{2}}\bigg{)}+\vec{v}\big{(}\vec{\nabla}\cdot(R\vec{ \nabla}R)\big{)}\bigg{)}, \tag{33}\]
and noting \(\nabla^{2}(R^{2})=2\big{(}\vec{\nabla}\cdot(R\vec{\nabla}R)\big{)}\), the new solutions are,
\[q_{r}\vec{v}_{r}=\frac{\hbar^{2}}{2m}R^{2}\vec{\nabla}\bigg{(}\frac{\vec{ \nabla}\cdot(R^{2}\vec{\nabla}S)}{mR^{2}}\bigg{)}+\vec{F}, \tag{34}\]
for any field \(\vec{F}\) such that \(\vec{\nabla}\cdot\vec{F}=0\).
### Energy Exchanges
From \(\kappa\), \(\alpha\), and \(\beta\) we can see the power density for local transfers between the kinetic, quantum potential, and external potential energies.
The term,
\[p_{u}=R^{2}\frac{\partial U}{\partial t}, \tag{35}\]
represents the power delivered to the fluid by a time-dependent external potential. Next, we can identify the term,
\[p_{k_{a}u}=-R^{2}\vec{\nabla}S\cdot\vec{\nabla}U, \tag{36}\]
which is the power density being converted from external potential to kinetic energy. Lastly we have,
\[p_{k_{a}q}=\frac{\hbar^{2}}{2m^{2}}R^{2}\vec{\nabla}S\cdot\vec{\nabla}\Big{(} \frac{\nabla^{2}R}{R}\Big{)}, \tag{37}\]
which is the power density being converted from quantum potential to kinetic energy.
We now separate the quantum potential into the symmetric kinetic energy and the reduced quantum potential, identify how energy flows between them, and how it flows out to the average kinetic energy. Assuming the local energy conservation condition is satisfied, we have \(\alpha_{r}=\alpha|_{\vec{v}=\vec{v}_{r}}-\alpha_{s}-\vec{\nabla}\cdot(k_{s} \vec{v}_{r})\), so we can see that \(\alpha_{r}\) has the terms from \(\alpha\) plus a direct energy exchange \(\alpha_{s}\) with the symmetric kinetic energy, where the power density delivered from the reduced quantum potential density to the symmetric kinetic energy density is,
\[p_{q_{r}k_{s}}=\alpha_{s}=\frac{\hbar^{2}}{2m^{2}}\bigg{(}\vec{\nabla}\cdot \Big{(}\vec{\nabla}S(\vec{\nabla}R\cdot\vec{\nabla}R)\Big{)}-\vec{\nabla}R \cdot\vec{\nabla}\Big{(}\frac{\vec{\nabla}\cdot(R^{2}\vec{\nabla}S)}{R}\Big{)} \bigg{)}. \tag{38}\]
This gives us
\[\alpha_{r,\text{lec}}=-\frac{\hbar^{2}}{2m}R^{2}\vec{\nabla}S\cdot\vec{\nabla} \Big{(}\frac{\nabla^{2}R}{mR}\Big{)} \tag{39}\]
\[-\frac{\hbar^{2}}{2m^{2}}\bigg{(}\vec{\nabla}\cdot\Big{(}\vec{\nabla}S(\vec{\nabla} R\cdot\vec{\nabla}R)\Big{)}-\vec{\nabla}R\cdot\vec{\nabla}\Big{(}\frac{\vec{ \nabla}\cdot(R^{2}\vec{\nabla}S)}{R}\Big{)}\bigg{)},\]
and thus the average kinetic energy is coupled to the reduced quantum potential, but not the symmetric kinetic energy, and the power density delivered from the reduced quantum potential to the average kinetic energy is,
\[p_{k_{a}q_{r}}=p_{k_{a}q}=\frac{\hbar^{2}}{2m^{2}}R^{2}\vec{\nabla}S\cdot\vec{ \nabla}\Big{(}\frac{\nabla^{2}R}{R}\Big{)}, \tag{40}\]
The full pattern of local energy exchanges is then,
\[k_{s}{\longleftrightarrow}q_{r}{\longleftrightarrow}k_{a}{\longleftrightarrow}u, \tag{41}\]
and for our two groupings it reduces to,
\[q{\longleftrightarrow}k_{a}{\longleftrightarrow}u,\quad\text{or},\quad q_{r}{ \longleftrightarrow}k_{c}{\longleftrightarrow}u, \tag{42}\]
where each \({\longleftrightarrow}\) denotes a two-way local energy exchange, and the arrows in the reduced diagrams collect the corresponding exchange channels of Eq. 41.
## 5 What does the quantum potential do?
Once we understand the quantum potential as a new type of local interaction within the fluid, we can start to look at how this interaction causes the fluid to behave in ways that a classical liquid or gas never would. Two important examples are how it explains phenomena like tunneling, where fluid is found in a classically forbidden region, and its role in quantum interference within the fluid.
The quantum potential has a tendency to cancel out jumps in the external potential, so for the fluid, it is as though the jumps aren't even there, or not entirely there. To see this, consider the 1D Schrodinger Equation with a step external potential, \(U(x,t)=U_{0}\Theta(x)\), where \(\Theta(x)\) is the Heaviside step function. The boundary conditions at the finite discontinuity are \(\psi_{l}(0,t)=\psi_{r}(0,t)\), from which it follows that \(R_{l}(0,t)=R_{r}(0,t)\) and \(e^{-iS_{l}(0,t)/\hbar}=e^{-iS_{r}(0,t)/\hbar}\), and \(\vec{\nabla}\psi_{l}(0,t)=\vec{\nabla}\psi_{r}(0,t)\). We begin from the Schrodinger Equations for the left (\(l\)) and right (\(r\)) sides of the discontinuity,
\[\begin{array}{l}\psi_{l}^{*}\sum_{n}E_{n}\psi_{n,l}e^{-iE_{n}t/\hbar}=\psi_{ l}^{*}\hat{K}\psi_{l}+U_{l}R_{l}^{2},\\ \\ \psi_{r}^{*}\sum_{n}E_{n}\psi_{n,r}e^{-iE_{n}t/\hbar}=\psi_{r}^{*}\hat{K}\psi_{ r}+U_{r}R_{r}^{2},\end{array} \tag{43}\]
where \(\psi(x,t)=\sum_{n}\psi_{n}(x)e^{-iE_{n}t/\hbar}\), and \(\psi_{n}(x)\) are the energy eigenstates. Next we consider the difference of these two equations, making use of the fact that \(U_{l}=0\) and \(U_{r}=U_{0}\), and substituting in the quantum kinetic energy density,
\[\sum_{n}E_{n}e^{-iE_{n}t/\hbar}\big{(}\psi_{r}^{*}\psi_{n,r}-\psi_{l}^{*}\psi_ {n,l}\big{)}=q_{r,r}-q_{r,l}+k_{a,r}-k_{a,l}+k_{s,r}-k_{s,l}+U_{0}R_{r}^{2}. \tag{44}\]
The boundary conditions apply to both the state and the eigenstates, so we can see that the term in the parentheses in Eq. 44 is zero at the boundary. Furthermore, recalling that \(k_{a}=\frac{1}{2m}R^{2}|\vec{\nabla}S|^{2}\) and \(k_{s}=\frac{\hbar^{2}}{2m}|\vec{\nabla}R|^{2}\), and using the identities,
\[\vec{\nabla}R=\frac{1}{2}\Big{(}e^{-iS/\hbar}\vec{\nabla}\psi+e^{iS/\hbar}\vec {\nabla}\psi^{*}\Big{)}=\text{Re}\Big{(}e^{-iS/\hbar}\vec{\nabla}\psi\Big{)}, \tag{45}\]
and,
\[\vec{\nabla}S=\frac{\hbar}{2iR}\Big{(}e^{-iS/\hbar}\vec{\nabla}\psi-e^{iS/\hbar} \vec{\nabla}\psi^{*}\Big{)}=\frac{\hbar}{R}\text{Im}\Big{(}e^{-iS/\hbar}\vec{ \nabla}\psi\Big{)}, \tag{46}\]
the boundary conditions give us \(k_{a,r}=k_{a,l}\) and \(k_{s,r}=k_{s,l}\), so Eq. 44 reduces to,
\[q_{r,r}-q_{r,l}=-U_{0}R^{2},\quad\text{or,}\quad\,Q_{r,r}-Q_{r,l}=-U_{0}. \tag{47}\]
Thus, as claimed, the discontinuity in the reduced quantum potential is equal and opposite to the discontinuity in the external potential, and so the total potential energy density \(q_{r}+u\) is a continuous function, and by extension, so is \(q+u\).
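The identities in Eqs. 45 and 46 can be verified directly from \(\psi=Re^{iS/\hbar}\); a one-dimensional sympy sketch:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
R = sp.Function('R', positive=True)(x)
S = sp.Function('S', real=True)(x)

psi = R * sp.exp(sp.I * S / hbar)          # psi = R e^{iS/hbar}
psistar = R * sp.exp(-sp.I * S / hbar)     # complex conjugate (R, S real)

# Eq. 45: grad R = (e^{-iS/hbar} grad psi + e^{iS/hbar} grad psi*) / 2
eq45 = (sp.exp(-sp.I * S / hbar) * sp.diff(psi, x)
        + sp.exp(sp.I * S / hbar) * sp.diff(psistar, x)) / 2
print(sp.simplify(eq45 - sp.diff(R, x)))   # expected: 0

# Eq. 46: grad S = hbar/(2iR) (e^{-iS/hbar} grad psi - e^{iS/hbar} grad psi*)
eq46 = hbar / (2 * sp.I * R) * (sp.exp(-sp.I * S / hbar) * sp.diff(psi, x)
                                - sp.exp(sp.I * S / hbar) * sp.diff(psistar, x))
print(sp.simplify(eq46 - sp.diff(S, x)))   # expected: 0
```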
This effect is particularly relevant for tunneling through a rectangular barrier, and a detailed example showing the roles of the different types of energy during such tunneling is given in the Appendix. While truly discontinuous potentials are probably nonphysical, they are still an interesting limiting case. Perhaps most importantly, we have also proved that if the external potential is continuous, then so is the reduced quantum potential.
The Appendix also contains a detailed example of reflection off an infinite barrier.
## 6 Discussion
So far we have ignored a conceptual issue that may be problematic for this model, which is that the velocities and energies of the particles generally contain singularities, as does the additional current \(q_{r}\vec{v}_{r}\) of the reduced quantum potential energy. Despite this, all of the energy densities we have considered, and all of the currents associated with velocity \(\vec{\nabla}S/m\), are well-behaved finite functions, which has allowed for our analysis. Furthermore, while the fluid current, average kinetic energy density, and quantum potential energy density are all zero where the fluid density is zero, the symmetric velocity is singular, so the symmetric current, momentum, and kinetic energy densities, and the reduced quantum potential energy density can all be finite where the fluid density has an isolated zero (although not where it is smoothly zero). The fact that there can be current, momentum, and energy where there are no fluid particles seems like a conceptual problem, but since this can only happen at isolated zeroes in the density, the overall amount of current, momentum, and energy at these locations is of measure zero, so perhaps it is nothing to worry about.
In general, the infinite velocities undermine the premise that this model is local, but we think this is likely to be an artifact of the nonrelativistic treatment we have used. Our hope is that in a proper relativistic treatment, these singularities will vanish and the particle velocity in the fluid will be bounded by \(c\) (this seems to work out in [6]). Restricting \(\vec{v}_{s}\) in this way could also make it so that all current, momentum, and energy densities are zero where the fluid density is zero.
In conclusion, while the present model has many satisfying features, it also has some apparent problems that need to be addressed. However, there is good reason to think that a relativistic picture will overcome these issues, since we are looking for a fluid model consistent with the local Heisenberg picture used in relativistic quantum field theory. We also suspect that in the generalized treatment, the fluid particles will be of definite energy rather than definite mass, which will mix up several of the fluid properties we have considered here, and also allow for the treatment of massless particles. In general, energy is a locally conserved quantity in (special) relativistic theories, so it is quite natural to think of it as a fluid anyway.
In the local many worlds picture, the conserved fluid of each quantum system is separated into multiple branches during an interaction between systems, each corresponding to a different outcome, and the relative amounts of fluid in each branch give rise to the Born rule probability of observing that outcome at macroscopic scale. If each fluid particle is taken to be of definite energy (in a given frame), then this is just another manifestation of local conservation of energy, whereas in our present treatment it is effectively conservation of mass, which we should not expect to survive a generalization to special relativity. The fluid particles should also possess definite values for any other locally conserved property, whether frame invariant like charge, or frame-dependent like momentum.
Finally, it is worth noting that many other works have explored different ways to interpret or modify Madelung's original fluid equations (or their Bohm equivalent) [6]-[35], so even if the present model fails, there are plenty of alternative ideas available.
|
2303.06589
|
Stress-dependent activation entropy in thermally activated cross-slip of
dislocations
|
Cross slip of screw dislocations in crystalline solids is a stress-driven
thermally activated process essential to many phenomena during plastic
deformation, including dislocation pattern formation, strain hardening, and
dynamic recovery. Molecular dynamics (MD) simulation has played an important
role in determining the microscopic mechanisms of cross slip. However, due to
its limited timescale, MD can only predict cross-slip rates in high-stress or
high-temperature conditions. The transition state theory can predict the
cross-slip rate over a broad range of stress and temperature conditions, but
its predictions have been found to be several orders of magnitude too low in
comparison to MD results. This discrepancy can be expressed as an anomalously
large activation entropy whose physical origin remains unclear. Here we resolve
this discrepancy by showing that the large activation entropy results from
anharmonic effects, including thermal softening, thermal expansion, and soft
vibrational modes of the dislocation. We expect these anharmonic effects to be
significant in a wide range of stress-driven thermally activated processes in
solids.
|
Yifan Wang, Wei Cai
|
2023-03-12T06:41:26Z
|
http://arxiv.org/abs/2303.06589v1
|
# Stress-dependent activation entropy in thermally activated cross-slip of dislocations
###### Abstract
Cross slip of screw dislocations in crystalline solids is a stress-driven thermally activated process essential to many phenomena during plastic deformation, including dislocation pattern formation, strain hardening, and dynamic recovery. Molecular dynamics (MD) simulation has played an important role in determining the microscopic mechanisms of cross slip. However, due to its limited timescale, MD can only predict cross-slip rates in high-stress or high-temperature conditions. The transition state theory can predict the cross-slip rate over a broad range of stress and temperature conditions, but its predictions have been found to be several orders of magnitude too low in comparison to MD results. This discrepancy can be expressed as an anomalously large activation entropy whose physical origin remains unclear. Here we resolve this discrepancy by showing that the large activation entropy results from anharmonic effects, including thermal softening, thermal expansion, and soft vibrational modes of the dislocation. We expect these anharmonic effects to be significant in a wide range of stress-driven thermally activated processes in solids.
## Introduction
Dislocation slip is the primary source of plastic deformation in crystalline solids. Cross-slip occurs when a screw dislocation changes its slip plane (Fig. 1(a)). This stress-driven, thermally-activated process is critical in creating dislocation patterns[1] and bypassing obstacles[2, 3], which leads to strain hardening and dynamic recovery[4, 5, 6] during plastic deformation. It has long been challenging to accurately predict the cross-slip rate as a function of stress and temperature. Many experimental[7, 8] and theoretical[9, 10] analyses have been performed to determine the activation parameters for cross slip based on the continuum theory of dislocations. However, the applicability of the continuum theory is questionable[11] since the changes in dislocation core structure during cross slip can be confined to only a few lattice spacings. Fully atomistic models are needed to uncover the fundamental physical mechanisms of cross-slip. Unfortunately, direct molecular dynamics (MD) simulation has a limited timescale (typically less than 100 ns), so it is only applicable when cross slip occurs at a high rate, i.e. under a high-stress or high-temperature condition[12, 13].
The transition state theory (TST), combined with minimum energy paths (MEP) calculations, provides a theoretical framework to predict the rate of thermally activated processes in solids over a wide range of stress and temperature conditions[14, 15, 16]. For a screw dislocation segment of length \(L\), the cross-slip rate as a function of temperature \(T\) under applied stress tensor \(\mathbf{\tau}_{\rm app}\) (Fig. 1(b)) can be written as,
\[r(T,\mathbf{\tau}_{\rm app},L)=\nu(L)\exp\left[-\frac{H_{\rm c}(\mathbf{\tau}_{\rm app })}{k_{\rm B}T}\right] \tag{1}\]
where \(H_{\rm c}\) is the activation enthalpy obtained from MEP calculations, and \(k_{\rm B}\) is the Boltzmann constant. The rate prefactor \(\nu(L)\) is proportional to the dislocation length \(L\) and can be written as \(\nu(L)=\nu_{\rm e}\,L/b\) where \(\nu_{\rm e}\) is an effective attempt frequency, and \(b\) is the magnitude of the dislocation Burgers vector and
hence the smallest repeat distance along the dislocation. In cross-slip models used in discrete dislocation dynamics (DDD) simulations, the rate prefactor is linked to the vibrational frequency of the dislocation line, and is commonly expressed as \(\nu(L)=\nu_{\rm D}\,L/L_{0}\) where \(\nu_{\rm D}\sim 10^{13}\,\rm s^{-1}\) is the Debye frequency, and \(L_{0}=1\,\rm\upmu m\) is a reference length [17, 18]. Given that the reported activation enthalpy \(H_{\rm c}\) for the cross slip in Cu is in the range of 0.5 - 3 eV [7, 8, 9, 19], together with the rate prefactor estimates above, cross slip is not expected to occur in direct MD simulations except at very high temperatures or stresses.
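To make the scale of the problem concrete, a back-of-the-envelope estimate with the commonly used prefactor \(\nu(L)=\nu_{\rm D}\,L/L_{0}\) is sketched below (here \(L=10\,\rm nm\) matches the dislocation length considered later in the text, while \(T=300\,\rm K\) is an assumed temperature for illustration):

```python
import numpy as np

kB = 8.617e-5                            # Boltzmann constant [eV/K]
nu_D, L, L0 = 1.0e13, 10e-9, 1e-6        # Debye frequency [1/s], dislocation length, reference length [m]
nu = nu_D * L / L0                       # commonly used rate prefactor, ~1e11 1/s
T = 300.0                                # assumed temperature [K]

for Hc in (0.5, 1.0, 3.0):               # reported range of activation enthalpies for Cu [eV]
    rate = nu * np.exp(-Hc / (kB * T))   # Eq. (1) with the heuristic prefactor
    print(f"H_c = {Hc:3.1f} eV -> mean waiting time ~ {1.0 / rate:.1e} s")
```

Even at the low end of this enthalpy range the estimated waiting time is on the order of milliseconds, far beyond the \(\sim\!100\,\rm ns\) reach of direct MD, which is why cross slip is not expected in such simulations under moderate conditions.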
However, previous studies [10, 12, 13] have shown that cross slip occurs in direct MD simulations at a much higher rate than expected (see Fig. 1(c)). This discrepancy has led to the suggestion that the previous estimates of the rate prefactor are incorrect and need to be multiplied by a factor of \(\exp\left[\Delta S_{\rm c}(\boldsymbol{\tau}_{\rm app})/k_{\rm B}\right]\), where \(\Delta S_{\rm c}\) is a stress-dependent activation entropy whose physical origin has remained elusive [20, 21, 22, 23]. It has been estimated either empirically, based on the Meyer-Neldel rule [13], or from simplified line tension models [24], but not from fully atomistic models due to numerical difficulties [10, 23]. The unknown origin of the activation entropy has raised doubts about whether TST is even applicable to thermally activated processes such as cross slip [22, 25].
This work provides a systematic and fully atomistic approach to resolve the discrepancy in the cross slip rates and uncover the physical origin of the anomalously large activation entropy. We carry out high-throughput minimum-energy paths (MEP) calculations to map out the stress dependence of the activation enthalpy \(H_{\rm c}(\boldsymbol{\tau}_{\rm app})\). The rate prefactor is determined from the harmonic transition state theory (HTST), with essential corrections applied to soft vibrational modes of the dislocation. Our approach reveals that in order to resolve the rate discrepancy between MD and TST predictions, anharmonic effects of thermal softening and thermal expansion must be appropriately considered. These effects cause the solid to experience more significant shear and volumetric deformations when temperature increases at constant applied stress, and cause a pronounced drop in the cross-slip activation barrier, giving rise to the activation entropy \(\Delta S_{\rm c}\). We find that \(\Delta S_{\rm c}\) is more pronounced at higher stress, contrary to previous estimates [13] based on the Meyer-Neldel rule [26]. This work demonstrates the applicability of HTST (after corrections) to dislocation cross-slip and provides a quantitative approach to predict its rate and activation entropy. The significant activation entropy is expected to influence the rate of a wide range of stress-driven thermally-activated processes in solids, such as phase transformation and twin boundary migration.
## Results
We use face-centered cubic nickel as an example to investigate dislocation cross-slip behaviors. The interatomic force field is modeled by the embedded-atom model (EAM) 'vnih' [27] because its stacking fault energy is in good agreement with both experimental measurements and first-principles calculations [11, 28]. The simulation cell is large enough (\(N=78{,}400\) atoms) to avoid boundary effects on dislocation cross slip rates. A screw dislocation along the \(x\)-direction passes through the center of the simulation cell. The cell is periodic in the \(x\)- and \(z\)-directions and has free surfaces in the \(y\)-direction. Shear stresses \(\boldsymbol{\tau}_{\rm app}=(\sigma_{\rm e}^{\rm g},\sigma_{\rm s}^{\rm c}, \sigma_{\rm e}^{\rm c})\) are applied to provide a driving force for cross-slip. As shown in Fig. 1(a), the applied stress contains Escaig (\({}_{\rm e}\)) and Schmid (\({}_{\rm s}\)) components on the original slip (\({}^{\rm g}\)) plane (\(111\)) and the cross-slip (\({}^{\rm c}\)) plane (\(11\bar{1}\)) (see Methods). The Schmid stress on the original slip plane \(\sigma_{\rm s}^{\rm g}\) is set to zero so that the dislocation does not move prior to cross-slip [11, 13, 29].
MD simulations of cross-slip are carried out using the LAMMPS package [30]. The initial configuration is heated up to the target temperature \(T\) using the Nose-Hoover thermostat (NVT ensemble) while keeping a constant applied stress at \(\boldsymbol{\tau}_{\rm app}=(-0.6,-0.8,0.8)\,\rm GPa\) by adjusting the strain. After equilibration, the simulation continues at constant \(T\) and the corresponding stress until the dislocation cross-slips (at time \(t_{\rm cs}\)) and annihilates at the surface (see Methods). The MD simulation is repeated 32 times at each temperature.
The cross-slip rate \(r_{\rm MD}\), estimated as the inverse of the average cross-slip time \(\bar{t}_{\rm cs}\), is plotted against the temperature in Fig. 1(c). The temperature dependence of the cross-slip rate is seen to follow the Arrhenius law,
\[r_{\rm MD}=\nu_{\rm MD}\exp\left[-\frac{H_{\rm c}^{\rm MD}}{k_{\rm B}T}\right] \tag{2}\]
where \(H_{\rm c}^{\rm MD}=0.60\,\rm eV\) and \(\nu_{\rm MD}=2.57\times 10^{16}\,\rm s^{-1}\) are parameters obtained from fitting the MD data.
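The fit behind Eq. (2) is a standard Arrhenius regression of \(\ln r_{\rm MD}\) against \(1/T\). A minimal sketch is shown below; the data here are synthetic stand-ins generated from the fitted values themselves purely to demonstrate the procedure (the actual mean cross-slip times come from the 32 MD runs per temperature):

```python
import numpy as np

kB = 8.617e-5                          # Boltzmann constant [eV/K]

# Synthetic stand-in for the measured rates r_MD = 1 / mean(t_cs)
H_true, nu_true = 0.60, 2.57e16        # values reported in the text
T = np.array([250., 300., 350., 400., 450.])      # assumed temperatures [K]
r = nu_true * np.exp(-H_true / (kB * T))           # "measured" rates [1/s]

# Arrhenius fit: ln r = ln nu - Hc / (kB T)  ->  linear in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(r), 1)
H_fit, nu_fit = -slope * kB, np.exp(intercept)
print(f"H_c^MD ~ {H_fit:.2f} eV, nu_MD ~ {nu_fit:.2e} 1/s")
```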
We proceed to analyze the cross-slip rates by TST. The activation enthalpy \(H_{\rm c}\) represents the energy difference between the transition state (i.e., saddle point on the potential energy landscape) and the initial state of the thermally activated cross-slip. To find the transition state under the applied stress \(\mathbf{\tau}_{\rm app}\), we first determine the minimum-energy path (MEP) using the free-end string method [31, 32]. Given the MEP, the exact transition state (saddle point) is then obtained by the dimer method [33] (see Methods). Fig. 1(b) illustrates two converged MEPs with and without the applied stress \(\mathbf{\tau}_{\rm app}\) corresponding to the MD simulations, respectively. As expected, the applied stress lowers the activation enthalpy of cross slip. Furthermore, the activation enthalpy of cross slip under the applied stress, \(H_{\rm c}=0.60\,\rm eV\), perfectly matches the value \(H_{\rm c}^{\rm MD}\) extracted from the MD simulations (Fig. 1(c)). On the other hand, if we adopt the commonly used estimate for the frequency prefactor [17, 18], \(\nu(L)=\nu_{\rm D}L/L_{0}\), for the dislocation length (\(L\approx 10\,\rm nm\)) considered here, we would arrive at \(\nu(L)\approx 10^{11}\,\rm s^{-1}\), which is more than five orders of magnitude lower than MD predictions (see Fig. 1(c)). This paper's primary purpose is to identify the physical origin of this discrepancy.
To go beyond a heuristic estimate, we use the harmonic transition state theory (HTST) to compute the rate prefactor more rigorously. In HTST, the rate prefactor is expressed as follows [14],
\[\nu_{\rm HTST}=\frac{\prod_{i=1}^{3N-3}\nu_{i}^{A}}{\prod_{j=1}^{3N-4}\nu_{j}^ {S}} \tag{3}\]
where \(\nu_{i}^{A}\) and \(\nu_{j}^{S}\) are frequencies of the eigenmodes of the initial state (A) and the transition state (S), respectively. The three rigid-body translational modes (with zero frequency) are excluded from the product in both states A and S. For state S, the mode along the reaction coordinate (with imaginary frequency) is also excluded. Although HTST is often employed to study thermally activated processes in solids at moderately low temperatures, it has never been successfully applied to dislocation cross-slip due to several challenges.
First, a direct implementation of Eq. (3) requires diagonalizing the Hessian matrix of the system to obtain the eigen-frequencies [34] (for both states A and S). The Hessian matrix is quite large (size \(3N\times 3N\)) and a full diagonalization is computationally very expensive. In this work, we take advantage of the fact that the product of eigen-frequencies can be obtained from the determinant of the Hessian matrix, which can be computed much more efficiently (e.g. using LU decomposition) than obtaining all the eigen-frequencies individually. To avoid the determinant becoming zero due to the rigid-body translation modes, we slightly perturb the Hessian matrix to impart a small but non-zero frequency to these modes (see Methods).
Second, the harmonic approximation is not valid at room temperature or above for some of the _soft vibrational modes_. For example, the saddle state S contains a constriction of the stacking fault, which can be formed anywhere along the dislocation line. Motion of this constriction along the dislocation line, i.e. the so-called Goldstone mode, produces periodic energy variations with an amplitude of around \(20\,\rm meV\)[10],
even lower than the thermal energy. In this case, approximating the periodic potential landscape by a quadratic function leads to a large error in the partition function. Here we account for these soft vibrational modes by numerically evaluating the partition function in their eigen-directions, and introduce a correction factor \((\tilde{\nu}_{\mathrm{A}}/\tilde{\nu}_{\mathrm{S}})\) to the cross-slip rate prediction, where \(\tilde{\nu}_{\mathrm{S}}\) is the correction factor for the Goldstone mode in the saddle state \(\mathrm{S}\), and \(\tilde{\nu}_{\mathrm{A}}\) is the correction factor for the uniform glide mode of the screw dislocation on its slip plane in state \(\mathrm{A}\) (see Supplementary Text I).
Using the above two methods, we can now evaluate the HTST-based rate prefactor, \(\nu(L)=\nu_{\mathrm{HTST}}\cdot\tilde{\nu}_{\mathrm{A}}/\tilde{\nu}_{\mathrm{S}}\). For the stress condition considered above, \(\nu(L)=7.73\times 10^{12}\,\mathrm{s}^{-1}\), which, although higher than previous estimates, is still much lower than \(\nu_{\mathrm{MD}}\). As a result, the predicted cross-slip rate (black line) is still 3-4 orders of magnitude lower than the MD results (see Fig. 2(b)).
To resolve the remaining discrepancy, we note that the activation enthalpy \(H_{\mathrm{c}}\) at a given stress \(\mathbf{\tau}_{\mathrm{app}}\) is often computed as an activation energy \(E_{\mathrm{c}}\) at a given strain \(\mathbf{\varepsilon}\) corresponding to stress \(\mathbf{\tau}_{\mathrm{app}}\). To make this point more explicit, we express the cross-slip rate as a function of strain \(\mathbf{\varepsilon}\) and temperature \(T\),
\[r_{\mathrm{HTST}}(\mathbf{\varepsilon},T)=\nu_{\mathrm{HTST}}\,\frac{\tilde{\nu} _{\mathrm{A}}}{\tilde{\nu}_{\mathrm{S}}}\exp\left[-\frac{E_{\mathrm{c}}(\mathbf{ \varepsilon})}{k_{\mathrm{B}}T}\right] \tag{4}\]
For consistency, \(\mathbf{\varepsilon}\) should be the strain \(\mathbf{\varepsilon}_{T}\equiv\mathbf{\varepsilon}(\mathbf{\tau}_{\mathrm{app}},T)\) corresponding to stress \(\mathbf{\tau}_{\mathrm{app}}\) at temperature \(T\). However, most MEP methods, which are based on energy minimization, are performed at zero temperature. Let us define \(\mathbf{\varepsilon}_{0}\equiv\mathbf{\varepsilon}(\mathbf{\tau}_{\mathrm{app}},0)\) as the strain corresponding to stress \(\mathbf{\tau}_{\mathrm{app}}\) at zero temperature. In the above, we have reported that \(E_{\mathrm{c}}(\mathbf{\varepsilon}_{0})=H_{\mathrm{c}}(\mathbf{\tau}_{\mathrm{app}})= 0.60\,\mathrm{eV}\). From Eq. (4), it can be clearly seen that an inconsistency would arise if \(\mathbf{\varepsilon}=\mathbf{\varepsilon}_{T}\) is used on the left hand side and \(\mathbf{\varepsilon}=\mathbf{\varepsilon}_{0}\) is used on the right hand side.
While the difference between \(\mathbf{\varepsilon}_{T}\) and \(\mathbf{\varepsilon}_{0}\) has been implicitly assumed to be small and often neglected, here we show that it has a pronounced effect on the predicted cross-slip rate. If the applied stress \(\mathbf{\tau}_{\mathrm{app}}\) remains constant as temperature is increased, the strain \(\mathbf{\varepsilon}_{T}\) increases in both the deviatoric and volumetric components, as sketched in the inset of Fig. 2(a). Fig. 2(a) shows that the computed activation energy \(E_{\mathrm{c}}(\mathbf{\varepsilon}_{T})\) decreases linearly with temperature, i.e., \(E_{\mathrm{c}}(\mathbf{\varepsilon}_{T})=E_{\mathrm{c}}(\mathbf{\varepsilon}_{0})-T \cdot\Delta S_{\mathrm{c}}(\mathbf{\tau}_{\mathrm{app}})\), where \(\Delta S_{\mathrm{c}}(\mathbf{\tau}_{\mathrm{app}})=8.0\,k_{\mathrm{B}}\) is the negative slope of the \(E_{\mathrm{c}}\)-\(T\) curve, and can be called an _activation entropy_. Inserting this expression of \(E_{\mathrm{c}}(\mathbf{\varepsilon}_{T})\) into Eq. (4), we can express the HTST-based rate prediction as,
\[r_{\mathrm{HTST}}(\mathbf{\varepsilon}_{T},T)=\nu_{\mathrm{HTST}}\,\frac{\tilde{ \nu}_{\mathrm{A}}}{\tilde{\nu}_{\mathrm{S}}}\exp\left[\frac{\Delta S_{ \mathrm{c}}(\mathbf{\tau}_{\mathrm{app}})}{k_{\mathrm{B}}}\right]\exp\left[-\frac {E_{\mathrm{c}}(\mathbf{\varepsilon}_{0})}{k_{\mathrm{B}}T}\right] \tag{5}\]
The new rate prefactor, \(\nu(L)=\nu_{\mathrm{HTST}}\cdot(\tilde{\nu}_{\mathrm{A}}/\tilde{\nu}_{\mathrm{ S}})\cdot\exp(\Delta S_{\mathrm{c}}/k_{\mathrm{B}})=2.30\times 10^{16}\, \mathrm{s}^{-1}\), is in very good agreement with \(\nu_{\mathrm{MD}}\). Fig. 2(b) shows that the resulting HTST-based predictions of cross-slip rates now agree well with MD results.
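A small numerical sketch of this correction is given below; the \(E_{\rm c}(\mathbf{\varepsilon}_{T})\) values are synthetic, constructed from the quoted slope purely to illustrate the bookkeeping of Eq. (5):

```python
import numpy as np

kB = 8.617e-5                          # Boltzmann constant [eV/K]

# Synthetic stand-in for E_c(eps_T) at strains corresponding to tau_app at several T
T   = np.array([0., 100., 200., 300., 400.])       # temperatures [K]
E_c = 0.60 - 8.0 * kB * T                            # linear trend with slope -Delta_S_c

dS_over_kB = -np.polyfit(T, E_c, 1)[0] / kB          # activation entropy in units of k_B
nu_base = 7.73e12                                    # nu_HTST * (nu_A~ / nu_S~) from the text [1/s]
nu_total = nu_base * np.exp(dS_over_kB)              # corrected prefactor entering Eq. (5)
print(f"Delta S_c ~ {dS_over_kB:.1f} kB, corrected prefactor ~ {nu_total:.2e} 1/s")
```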
## Discussion
In the example considered above, we observe that the large discrepancy between previous TST-based predictions of cross-slip rate and MD results is mostly due to the change of strain with increasing temperature at a constant applied stress. Due to the thermal softening effect, the same shear stress will result in greater shear strain at higher temperature. Due to the thermal expansion effect, the volumetric strain also increases with increasing temperature. We have repeated the MD simulations and HTST
calculations of cross-slip rates at two more applied stress conditions, and the results support the same conclusions (Supplementary Text II).
To examine how the activation entropy depends on the applied stress, we compute \(\Delta S_{\mathrm{c}}\) at 27 different stress conditions (for \(\sigma_{\mathrm{e}}^{\mathrm{g}}=0,-0.4,-0.8\,\mathrm{GPa}\), \(\sigma_{\mathrm{s}}^{\mathrm{c}}=0,-0.4,-0.8\,\mathrm{GPa}\), and \(\sigma_{\mathrm{e}}^{\mathrm{c}}=0,0.4,0.8\,\mathrm{GPa}\), respectively). We have previously shown that the activation enthalpy \(H_{\mathrm{c}}(\mathbf{\tau}_{\mathrm{app}})\) as a function of these three shear stress components can be expressed in terms of a one-dimensional function of an _effective stress_[29], defined as \(\tau^{*}=C_{\mathrm{e}}^{\mathrm{g}}\sigma_{\mathrm{e}}^{\mathrm{g}}+C_{ \mathrm{e}}^{\mathrm{c}}\sigma_{\mathrm{e}}^{\mathrm{c}}+(D_{\mathrm{s}}^{ \mathrm{c}}\sigma_{\mathrm{s}}^{\mathrm{c}})^{2}\), where \(C_{\mathrm{e}}^{\mathrm{g}}\), \(C_{\mathrm{e}}^{\mathrm{c}}\) and \(D_{\mathrm{s}}^{\mathrm{c}}\) are fitting constants. Fig. 3 shows that the activation entropy \(\Delta S_{\mathrm{c}}\) generally increases with the effective stress \(\tau^{*}\), although it is not a function of \(\tau^{*}\) alone (see Supplementary Text II). The empirical Meyer-Neldel rule, \(S_{\mathrm{c}}=H_{\mathrm{c}}/T_{\mathrm{m}}\), where \(T_{\mathrm{m}}\) is the melting temperature, is often used to estimate the activation entropy [13]. Because the cross-slip activation enthalpy \(H_{\mathrm{c}}(\mathbf{\tau}_{\mathrm{app}})\) is a monotonically decreasing function of \(\tau^{*}\), it is clear that the Meyer-Neldel rule does not apply to cross-slip. As shown in Fig. 3, \(\Delta S_{\mathrm{c}}\) for cross slip becomes smaller at lower stress; in fact \(\Delta S_{\mathrm{c}}\) vanishes in the zero stress limit, as we will show below. This may be a reason why activation entropy effects were neglected in previous studies of dislocation cross-slip [10].
We now seek a close-form expression for \(\Delta S_{\mathrm{c}}\) as a function of stress, which will not only reveal more insight on the physical nature of the activation entropy, but also provide a needed tool for predicting cross-slip rate in mesoscale models such as discrete dislocation dynamics [18, 35]. We begin by defining \(\bar{\mathbf{\sigma}}\) as the stress of the crystal at zero temperature when subjected to the strain \(\mathbf{\varepsilon}_{T}\), i.e. \(\mathbf{\varepsilon}_{T}=\mathbf{\varepsilon}(\bar{\mathbf{\sigma}},0)=\mathbf{\varepsilon}( \mathbf{\tau}_{\mathrm{app}},T)\). \(\bar{\mathbf{\sigma}}\) is the stress in the simulation cell when performing MEP calculations for \(E_{\mathrm{c}}(\mathbf{\varepsilon}_{T})\); hence there is a one-to-one correspondence between \(\mathbf{\varepsilon}_{T}\) and \(\bar{\mathbf{\sigma}}\). At temperature \(T\), the stress of the crystal subjected to strain \(\mathbf{\varepsilon}_{T}\) is simply \(\mathbf{\tau}_{\mathrm{app}}\). But if the temperature is set to zero with the strain fixed at \(\mathbf{\varepsilon}_{T}\), the stress value changes, i.e. \(\bar{\mathbf{\sigma}}=\mathbf{\tau}_{\mathrm{app}}+\hat{\sigma}\mathbf{I}+\mathbf{\tau}_{ \mathrm{ex}}\), where \(\hat{\sigma}\) is a hydrostatic (tensile) stress, and \(\mathbf{\tau}_{\mathrm{ex}}\) is an excess shear stress. We performed 500 MEP calculations of cross-slip at different stress \(\bar{\mathbf{\sigma}}\) and fit the activation energy \(\tilde{H}_{\mathrm{c}}(\bar{\mathbf{\sigma}})=E_{\mathrm{c}}(\mathbf{\varepsilon}_{T})\) results as a function of \(\bar{\mathbf{\sigma}}\) (see Supplementary Text IV). The functional form of \(\tilde{H}_{\mathrm{c}}(\bar{\mathbf{\sigma}})\) is a generalization of the \(H_{\mathrm{c}}(\mathbf{\tau}_{\mathrm{app}})\) function established in our previous work [29], and reduces to \(H_{\mathrm{c}}(\mathbf{\tau}_{\mathrm{app}})\) when \(\hat{\sigma}=0\). Given the analytic function \(\tilde{H}_{\mathrm{c}}(\bar{\mathbf{\sigma}})\), we obtain the following expression for the activation entropy (see Supplementary Text V)
\[\Delta S_{\mathrm{c}}=-K\alpha_{V}\,\left(\frac{\partial\tilde{H}_{\mathrm{c }}}{\partial\hat{\sigma}}\right)+\frac{1}{\mu}\left(\frac{\partial\mu}{ \partial T}\right)\,\left(\frac{\partial\tilde{H}_{\mathrm{c}}}{\partial\mathbf{ \tau}}\right)\cdot\mathbf{\tau} \tag{6}\]
where \(K\) is the bulk modulus, \(\alpha_{V}\) is the volumetric thermal expansion coefficient, and \(\mu\) is the shear modulus. Fig. 3 shows that Eq. (6) agrees very well with the activation entropy computed above. The two terms in Eq. (6) can be identified as the contributions from thermal expansion and thermal softening effects to the activation entropy. Both terms vanish in the zero-stress limit. Eq. (6), combined with Eq. (5), leads to a theoretical model that accurately predicts the cross-slip rate as a function of applied stress. It can serve as an essential input for mesoscale models such as discrete dislocation dynamics [18, 36]. Because Eq. (6) expresses \(\Delta S_{\mathrm{c}}\) in terms of fundamental materials parameters and the stress dependence of the activation enthalpy, it is generally applicable to all stress-driven thermally activated processes in solids, such as phase transformation and twinning.
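As a sketch of how Eq. (6) can be evaluated in practice, the derivatives of \(\tilde{H}_{\mathrm{c}}\) can be taken by finite differences on the fitted enthalpy surface. All numbers below (approximate elastic constants for Ni and a placeholder enthalpy surface) are rough, illustrative values rather than the fitted data of this work:

```python
import numpy as np

kB = 8.617e-5                 # Boltzmann constant [eV/K]

# Placeholder material parameters of roughly the right magnitude for Ni
K_bulk  = 180.0               # bulk modulus [GPa]
alpha_V = 4.0e-5              # volumetric thermal expansion coefficient [1/K]
mu      = 76.0                # shear modulus [GPa]
dmu_dT  = -0.03               # thermal softening rate d(mu)/dT [GPa/K]

def H_tilde(sigma_hat, tau):
    """Placeholder stand-in for the fitted enthalpy surface H~_c(sigma_bar) [eV];
    the actual surface comes from the 500 MEP calculations described above."""
    return 1.0 - 0.05 * sigma_hat - 0.3 * np.linalg.norm(tau)

def activation_entropy(sigma_hat, tau, h=1e-3):
    """Finite-difference evaluation of Eq. (6); returns Delta S_c in units of k_B."""
    dH_dsig = (H_tilde(sigma_hat + h, tau) - H_tilde(sigma_hat - h, tau)) / (2 * h)
    dH_dtau = np.array([(H_tilde(sigma_hat, tau + h * e) - H_tilde(sigma_hat, tau - h * e)) / (2 * h)
                        for e in np.eye(tau.size)])
    dS = -K_bulk * alpha_V * dH_dsig + (dmu_dT / mu) * (dH_dtau @ tau)   # [eV/K]
    return dS / kB

print(activation_entropy(0.0, np.array([-0.6, -0.8, 0.8])))
```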
In conclusion, we have resolved a long-standing discrepancy between TST and direct MD predictions of cross-slip rate, and show that the anomalously large activation entropy is ultimately caused by the increasing shear and volumetric strain with increasing temperature at constant applied stress. These anharmonic effects, i.e. thermal softening and thermal expansion, although previously ignored, can lead to orders-of-magnitude changes in the prediction of cross-slip rate. We obtain an analytical expression for the activation entropy, which not only provides accurate predictions of cross-slip rate for meso-scale
models, but also shows that our findings are generally applicable to all stress-driven thermally activated processes in solids.
## Methods
### Prepare a single screw dislocation under applied stress.
The dislocation structure is similar to that in our previous works [11, 29]. We start with a perfect fcc nickel crystal (lattice constant \(a_{0}=3.52\,\mathrm{\AA}\)) with simulation box dimensions of \(20[1\bar{1}0]\times 20[111]\times 10[\bar{1}\bar{1}2]\). \(10\,\%\) of the atoms are removed on each side of the \(y\)-direction to create free surfaces, resulting in 78,400 atoms in the simulation cell. A single straight left-hand screw dislocation is created at the center of the \(yz\)-plane with Burgers vector \(\mathbf{b}=a_{0}[\bar{1}10]/2\) along the positive \(x\)-direction \(\mathbf{\xi}=[1\bar{1}0]\). The initial configuration is obtained by splitting the screw dislocation into two Shockley partial dislocations (orange arrows in Fig. 1(a)) with a stacking fault on the gliding plane, i.e., the \((111)\) plane [29].
We perform energy minimization on the dislocation structure with applied shear stresses \(\mathbf{\tau}_{\mathrm{app}}=(\sigma_{\mathrm{e}}^{\mathrm{g}},\sigma_{\mathrm{s}}^{\mathrm{c}},\sigma_{\mathrm{e}}^{\mathrm{c}})\). The Cartesian stress tensor can be calculated from the applied stress as,
\[\sigma_{\mathrm{s}}^{\mathrm{g}}=\sigma_{xy},\quad\sigma_{\mathrm{e}}^{ \mathrm{g}}=\sigma_{yz},\quad\sigma_{\mathrm{s}}^{\mathrm{c}}=\frac{2\sqrt{2} \sigma_{xz}-\sigma_{xy}}{3},\quad\sigma_{\mathrm{e}}^{\mathrm{c}}=\frac{7 \sigma_{yz}+2\sqrt{2}\left(\sigma_{zz}-\sigma_{yy}\right)}{9} \tag{7}\]
where \(\sigma_{zz}=-\sigma_{yy}\) is enforced to enable a one-to-one mapping between the Cartesian stress and the Escaig-Schmid stress components, and \(\sigma_{xy}\) is set to zero to prevent the screw dislocation from moving on the original slip plane.
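For reference, Eq. (7) together with the constraints \(\sigma_{xy}=0\) and \(\sigma_{zz}=-\sigma_{yy}\) can be packaged into a small conversion routine. The sketch below is illustrative; the round-trip check uses the stress state \(\mathbf{\tau}_{\rm app}=(-0.6,-0.8,0.8)\,\rm GPa\) from the MD runs:

```python
import numpy as np

def escaig_schmid(s_xy, s_yz, s_xz, s_yy, s_zz):
    """Forward map of Eq. (7): Cartesian shear components -> Escaig/Schmid components."""
    s_s_g = s_xy
    s_e_g = s_yz
    s_s_c = (2 * np.sqrt(2) * s_xz - s_xy) / 3
    s_e_c = (7 * s_yz + 2 * np.sqrt(2) * (s_zz - s_yy)) / 9
    return s_s_g, s_e_g, s_s_c, s_e_c

def cartesian_from_target(s_e_g, s_s_c, s_e_c):
    """Invert Eq. (7) under sigma_xy = 0 and sigma_zz = -sigma_yy,
    returning the Cartesian components to impose on the cell."""
    s_xy = 0.0
    s_yz = s_e_g
    s_xz = 3 * s_s_c / (2 * np.sqrt(2))
    s_yy = (7 * s_e_g - 9 * s_e_c) / (4 * np.sqrt(2))
    s_zz = -s_yy
    return s_xy, s_yz, s_xz, s_yy, s_zz

# Round-trip check with tau_app = (-0.6, -0.8, 0.8) GPa
cart = cartesian_from_target(-0.6, -0.8, 0.8)
print(escaig_schmid(*cart))   # expect (0.0, -0.6, -0.8, 0.8)
```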
On the one hand, due to free surfaces in the \(y\)-direction, stress components \((\sigma_{xy},\sigma_{yy},\sigma_{yz})\) are applied by external forces \(\mathbf{f}_{y}=(A/N_{xz})(\sigma_{xy},\sigma_{yy},\sigma_{yz})\) to the first layer of atoms (\(N_{xz}=1600\) atoms in total) on the free surfaces, where \(A=H_{x}H_{z}\) is the area of the surface. On the other hand, due to the periodic boundary conditions in the \(x\)- and \(z\)-directions, the stress components \((\sigma_{xx},\sigma_{zz},\sigma_{xz})\) are controlled by adjusting the components \((H_{x},H_{z},H_{xz})\) of the simulation cell iteratively until the stresses are converged. The simulation cell matrix (cell vectors) \(\mathbf{H}=[\mathbf{c}_{1}|\mathbf{c}_{2}|\mathbf{c}_{3}]\) is defined as,
\[\mathbf{H}=\begin{bmatrix}H_{x}&H_{xy}&H_{xz}\\ 0&H_{y}&H_{yz}\\ 0&0&H_{z}\end{bmatrix} \tag{8}\]
The stress of the dislocation configuration is calculated by averaging the atomic stress [37] of all the atoms \(20\,\mathrm{\AA}\) below the free surfaces to avoid the surface effect. The convergence tolerance of the stress is \(\pm 0.05\,\mathrm{MPa}\).
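A sketch of the surface-force bookkeeping described above, with the unit conversion from GPa to eV/Å made explicit (the cell dimensions \(H_{x}\) and \(H_{z}\) below are placeholders, the stress components come from the inversion sketched earlier, and opposite surfaces receive forces of opposite sign):

```python
import numpy as np

GPA_TO_EV_A3 = 1.0 / 160.2176487        # 1 GPa expressed in eV/A^3

def surface_forces(sigma_xy, sigma_yy, sigma_yz, Hx, Hz, N_xz):
    """Per-atom force f_y = (A / N_xz) * (s_xy, s_yy, s_yz) on one free surface;
    stresses in GPa, lengths in Angstrom, returned forces in eV/A."""
    A = Hx * Hz
    return (A / N_xz) * GPA_TO_EV_A3 * np.array([sigma_xy, sigma_yy, sigma_yz])

# Illustrative call with placeholder cell dimensions (Hx, Hz in Angstrom)
print(surface_forces(0.0, -2.015, -0.6, Hx=100.0, Hz=86.0, N_xz=1600))
```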
**Minimum-energy path (MEP) search.** To perform MEP search, we first prepare the initial state \(A\) before the transition state and the final state \(B\) after the transition state. The converged metastable dislocation structure from the previous section is used as the initial state \(A\). The final state \(B\) is prepared with the same full screw dislocation structure as state \(A\), but with the middle half of the dislocation dissociated on the cross-slip plane \((11\bar{1})\), while the rest of the dislocation still dissociates on \((111)\)[29]. Energy minimization is then performed to obtain the final state \(B\) under the same applied shear stress \(\mathbf{\tau}_{\mathrm{app}}\). In order to obtain a better initial guess and help with the convergence of the MEP search, the conjugate-gradient energy minimization on the final state \(B\) is only performed for five iterations so that the cross-slipped dislocation does not move towards the free surface and annihilate, i.e., the state \(B\) is not too far away from the transition state. Starting from a linear interpolation (32 image copies) between states \(A\) and \(B\) as the initial guess, the MEP search is performed using the free-end string method [31] with
reparameterization and trimming [32]. After the string method is converged, we use the dimer method [33] to obtain the exact transition state \(S\). Starting from the two images closest to the energy maximum as the initial dimer, we iteratively shrink the dimer until the distance is below \(10^{-7}\,\mathrm{\AA}\). The external forces \(\mathbf{f}_{y}\) and simulation cell matrix \(\mathbf{H}\) from state \(A\) are applied during all the energy minimization steps in state-\(B\) preparation, MEP search, and the dimer method to ensure the same applied stress condition \(\boldsymbol{\tau}_{\mathrm{app}}\).
**Molecular dynamics (MD) simulation.** MD simulations of dislocation cross-slip are performed using the LAMMPS package [30]. To prepare the dislocation structure at finite temperature \(T\) under the applied stress condition \(\boldsymbol{\tau}_{\mathrm{app}}\), we start from the state \(A\) with zero applied stress. The system is gradually heated up to the target temperature \(T\) and equilibrated for \(10\,\mathrm{ps}\) using the Nose-Hoover thermostat [38] with zero stress applied, to avoid premature cross-slip. The configuration is then gradually loaded to the target stress \(\tau_{\mathrm{es}}\) and further equilibrated for \(2\,\mathrm{ps}\). The method to control the stress is the same as in the previous sections. After the system is equilibrated, we apply a small random perturbation (uniform distribution with a magnitude of \(\pm 10^{-4}\,\mathrm{\AA}\cdot\mathrm{s}^{-1}\)) to the initial velocity before continuing the MD simulation to avoid repeated MD trajectories. The MD simulation is continued until cross slip occurs (sudden release of the applied stress) and the cross-slip time \(t_{\mathrm{cs}}\) is recorded.
**Harmonic vibrational frequencies.** The product of the harmonic vibrational frequencies in Eq. (3) is obtained from the Hessian matrices of the initial state \(A\) (\(\mathbf{K}_{A}\)) and the transition state \(S\) (\(\mathbf{K}_{S}\)). The standard approach to obtain the prefactor is to diagonalize \(\mathbf{K}_{A}\) and \(\mathbf{K}_{S}\). However, for our system (\(N=78{,}400\)), the Hessian matrices have a size of \(3N\times 3N=235{,}200\times 235{,}200\), which entails a significant computational load. Instead, we can calculate the products of the eigen-frequencies from the determinant if and only if \(\mathbf{K}\) is _non-singular_.
To remove the singularity arising from the three rigid-body translational modes (eigen frequency \(\nu=0\)), we couple three soft harmonic springs of stiffness \(k\) to the \(x\)-, \(y\)-, and \(z\)-directions of one atom (atom #1) in both states \(A\) and \(S\). This is equivalent to modifying the first three diagonal elements of the Hessian matrix \(\mathbf{K}\),
\[K_{11}\to K_{11}+k;\quad K_{22}\to K_{22}+k;\quad K_{33}\to K_{33}+k \tag{9}\]
We then obtain the product of the eigen-frequencies by calculating the determinant of the modified Hessian matrix using the sparse LU decomposition in MATLAB. The negative eigenvalue of the Hessian matrix at state \(S\) is obtained by finding the minimum eigenvalue using the 'eigs' method in MATLAB. The soft spring stiffness is selected to be \(k=1\times 10^{-4}\,\mathrm{eV}/\mathrm{\AA}\), whose contribution cancels out when calculating the prefactor \(\nu_{\mathrm{HTST}}\) by taking the ratio of the determinants of states \(S\) and \(A\). A detailed proof of the method is provided in Supplementary Text VI.
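The determinant-ratio procedure translates directly into sparse linear algebra; an equivalent sketch in Python/SciPy is given below (the paper's calculation used MATLAB). It assumes the Hessians are mass-weighted, so that eigenvalues are squared angular frequencies, and that \(\mathbf{K}_{S}\) has exactly one negative eigenvalue:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def log_abs_det(K):
    """log|det K| of a sparse matrix via sparse LU; avoids a full eigen-decomposition."""
    lu = spla.splu(sp.csc_matrix(K))
    # L has a unit diagonal and the permutations only affect the sign,
    # so log|det K| = sum_i log|U_ii|.
    return float(np.sum(np.log(np.abs(lu.U.diagonal()))))

def htst_prefactor(K_A, K_S, k=1e-4):
    """Sketch of the determinant-ratio evaluation of nu_HTST.
    K_A, K_S: mass-weighted Hessians of states A and S (eigenvalues = omega^2)."""
    K_A, K_S = sp.lil_matrix(K_A), sp.lil_matrix(K_S)
    for i in range(3):                    # Eq. (9): regularize the rigid-body translations
        K_A[i, i] += k
        K_S[i, i] += k
    # Most-negative eigenvalue of K_S (the reaction mode), to be divided out
    lam_neg = spla.eigsh(sp.csc_matrix(K_S), k=1, which='SA',
                         return_eigenvectors=False)[0]
    # nu_HTST = sqrt(prod lambda_A / prod' lambda_S) / (2 pi); the spring contribution
    # k^3 appears in both determinants and cancels in the ratio.
    log_ratio = log_abs_det(K_A) - (log_abs_det(K_S) - np.log(abs(lam_neg)))
    return np.exp(0.5 * log_ratio) / (2.0 * np.pi)
```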
|