# Wrinkles in the Froggatt-Nielsen Mechanism and Flavorful New Physics

Pouya Asadi, Arindam Bhattacharya, Katherine Fraser, Samuel Homiller, Aditya Parikh

2023-08-02 | [arXiv:2308.01340v1](http://arxiv.org/abs/2308.01340v1)
###### Abstract
When the Froggatt-Nielsen mechanism is used to explain the Standard Model flavor hierarchy, new physics couplings are also determined by the horizontal symmetry. However, additional symmetries or dynamics in the UV can sometimes lead to a departure from this naive scaling for the new physics couplings. We show that an effective way to keep track of these changes is by using the new spurions of the U(3)\({}^{5}\) global flavor symmetry, where we parameterize extra suppression or enhancement factors, referred to as _wrinkles_, using the same power counting parameter as in the original Froggatt-Nielsen model. As a concrete realization, we consider two flavor spurions of the \(S_{1}\) leptoquark, and demonstrate that wrinkles can be used to make an enhanced value of BR(\(B^{+}\to K^{+}\nu\bar{\nu}\)) consistent with other flavor observables. We also present example UV models that realize wrinkles, and comment on choosing consistent charges in ordinary Froggatt-Nielsen models without the typical monotonicity condition.
###### Contents
* 1 Aperitif
* 2 Amuse-bouche: Froggatt-Nielsen and BSM Physics
* 2.1 Review of the Froggatt-Nielsen Mechanism
* 2.2 Froggatt-Nielsen and Flavorful New Physics
* 3 Plat Principal: Wrinkles in Froggatt-Nielsen
* 3.1 Wrinkled Froggatt-Nielsen Chains
* 3.2 Bounds on Wrinkles from Radiative Corrections
* 3.3 UV Completions
* 3.3.1 Missing Heavy Fermions
* 3.3.2 Extra Abelian Symmetries
* 4 Dessert: \(B\to K\bar{\nu}\nu\) in a Wrinkled Setup
* 4.1 \(B\to K\bar{\nu}\nu\) in the SM and Beyond
* 4.2 Constraints with Different Flavor Ansatze
* 4.3 Predictions for Future Measurements
* 5 Digestifs
* A Full Set of Consistency Conditions
* B Calculation of Other Observables
* B.1 Dipole Moments
* B.2 Lepton Flavor Violating Observables
* B.3 Leptonic Meson Decays
* B.3.1 \(P\to\ell\nu\)
* B.3.2 \(P\to\ell\ell^{\prime}\) and \(P\to\nu\nu^{\prime}\)
* B.4 Semi-leptonic Meson Decays
* B.4.1 \(R_{D^{(*)}}\)
* B.4.2 \(K\to\pi\nu\bar{\nu}\)
* B.5 \(Z\to\ell\ell^{\prime}\)
* B.6 Meson Mixing
## 1 Aperitif
Flavor physics has been a harbinger of physics beyond the Standard Model (BSM) at various points in time, from predicting the existence of the charm quark [1, 2] to estimating the mass of the top quark [3, 4, 5, 6] long before its discovery at the Tevatron [7, 8]. Precision experiments, in particular, help establish or find violations of the Standard Model (SM) symmetry structures, and prove to be noteworthy indirect probes of new physics whose mass scale lies beyond the reach of direct collider searches; see Refs. [9, 10] for reviews of many such experiments.
A primary goal of flavor physics is to understand the appearance of large hierarchies in the masses and mixing angles of the SM fermions. The two most popular solutions to this puzzle are (i) the Froggatt-Nielsen (FN) mechanism and its variations [11, 12, 13, 14], and (ii) extra dimensional models where an \({\cal O}(1)\) difference in the bulk masses of fermions gives rise to an exponential hierarchy between the observed masses in the IR [15, 16, 17, 18, 19]. Other notable possibilities include generating the mass hierarchy via running to the IR in extensions of the SM with scale invariant sectors in the UV [20], or radiatively generating the Yukawas with the hierarchy governed by powers of the loop expansion parameter [21, 22, 23, 24]. A review of these and other dynamical solutions to the flavor puzzle can be found in Refs. [25, 26, 27]. In what follows, we focus our attention on the FN mechanism.
In the FN mechanism, the hierarchies in the SM fermion sector arise as different powers of a small expansion parameter. This expansion parameter is given by the ratio of the vacuum expectation value (vev) of a scalar field, known as the flavon, over a heavy mass scale. The SM Yukawa couplings are generated by non-renormalizable operators involving the chiral SM fermions, the Higgs, and the flavon. The dimensionality of these operators--and the resulting power of the expansion parameter that appears--is dictated by the charges of the SM fermions under a new Abelian horizontal symmetry, U(1)\({}_{H}\), which is broken by the flavon. As we will discuss, there is additional freedom in the assignment of these charges that was overlooked in Ref. [11]. In the original FN paper, it was supposed that these irrelevant operators are generated by "chains" including heavy vector-like matter, also charged under U(1)\({}_{H}\). A number of variations to this model have been proposed, including "inverted" models [28], where the flavon vev is larger than the heavy mass scale.
One of the drawbacks of invoking the FN mechanism is that the new dynamics responsible for the SM hierarchies can exist at scales far above the weak scale, beyond the reach of direct experimental probes. Nevertheless, given the other shortcomings of the SM--the electroweak hierarchy problem in particular--there is ample reason to expect new physics at or near the TeV scale. If the new physics is _flavorful_ (i.e., it involves non-universal couplings to SM matter fields), its flavor structure may also be dictated by the FN dynamics. This argument can also be run in reverse: given the stringent constraints from precision measurements of the SM, for new physics to exist at the TeV scale it must either be flavor-blind or incorporate some symmetry arguments to suppress flavor-violation [29, 30]. This reasoning is familiar in
the supersymmetric context, where it is understood that squarks must either be degenerate or flavor-aligned [31].
In this light, it is clearly worthwhile to study the application of the FN mechanism to the couplings of new BSM fields. This is particularly true when flavorful new physics is invoked to explain potential discrepancies between experimental results and the SM expectations: should one of these discrepancies become an unambiguous signal of new physics, we might glean information about the dynamics associated with flavor in the UV. This approach was advocated in Refs. [32, 33, 34], and we will review it extensively in this work. An immediate consequence of this framework is that many different experimental observables become correlated. These correlations challenge some of the simplest solutions to various flavor anomalies, as the couplings and masses required to explain the discrepancy violate bounds set by other observables such as lepton flavor violating (LFV) processes or flavor changing neutral currents.
The goal of this work is to explore how these considerations can change if the FN setup is amended with additional symmetries or structure in the UV. We do this by working in an effective field theory (EFT) framework, including the SM and new BSM fields, with their couplings to fermions treated as spurions under the U(3)\({}^{5}\) flavor symmetry of the SM. In this framework, we can introduce controlled deviations from the size of these spurions dictated by the horizontal charges. We refer to these deviations as _wrinkles_, since they appear in the UV as changes in the length of the chain diagrams responsible for the Yukawas in the IR. Wrinkles can exist in SM or BSM spurions, and allow us to relax the correlations between different observables, permitting sizable new physics contributions to some observables while satisfying other experimental bounds.
Importantly, while wrinkles allow for much greater flexibility in the couplings of BSM fields to SM fermions, this flexibility is not without bound. If the effective theory is to be faithfully embedded in the FN mechanism, radiative corrections must not spoil the relationship between the couplings in the IR and the non-renormalizable operators in the UV. This requirement has been previously formulated as a consistency condition in the context of minimal flavor violation EFTs [32] (see also Ref. [33]). While these conditions are trivially satisfied in ordinary FN models, we show that they put meaningful bounds on wrinkled FN setups.
Since this wrinkled FN setup can be applied to any new physics, we will illustrate its application in an example, where the SM is extended by a single leptoquark, denoted \(S_{1}\) in the nomenclature of Ref. [35]. See Refs. [34, 36] for previous discussions of the \(S_{1}\) leptoquark model with horizontal symmetries. We will use this leptoquark to enhance the branching ratio of \(B^{+}\to K^{+}\bar{\nu}\nu\), which currently shows a small discrepancy with SM predictions [37] and will be precisely measured at the Belle II experiment in the coming years. Without wrinkles, the charges and masses required to generate a large \(B^{+}\to K^{+}\bar{\nu}\nu\) signal also imply the existence of large signals in other correlated observables, such as LFV decays or
leptonic meson decays. We will show a simple example where a wrinkled FN setup evades these bounds while satisfying the consistency conditions alluded to above. As we will see, the bound on the wrinkles implies other correlated signals are generated near detection thresholds in this example, and could potentially be seen in the near future.
In the coming years, troves of new data from colliders and small-scale experiments searching for signs of flavorful new physics will begin stress-testing the delicate flavor structure of the SM. Given the substantial motivation for BSM physics, this structure could break and potentially start showing signs of deviations from the SM expectation. In preparation for such deviations, it is timely to develop new model-building tools which enable embedding their solutions in UV complete frameworks. Wrinkles in an FN Ansatz are a flexible, bottom-up tool that allow for a broader exploration of the complementarity of different flavor probes, while reliably parameterizing more sophisticated UV models of flavor. As such, they present a natural setup to search for a consistent IR picture of new physics with flavor, should any deviations from the SM come to light.
This paper is organized as follows: in §2, we review the FN mechanism, its solution to the flavor hierarchy problem in the SM, and how it furnishes suitable Ansatze for couplings arising from new BSM physics. Next, in §3, we introduce the concept of wrinkles for the FN mechanism, discuss constraints on them, and provide examples of how they can arise from UV complete models. In §4, we provide a concrete example of applying wrinkles to the \(S_{1}\) scalar leptoquark embedded in a FN model. We demonstrate that wrinkles allow one to simultaneously satisfy bounds on BSM physics from current precision flavor observables, while retaining predictive power for potential future measurements. We conclude in §5. Appendix A provides details about bounds on wrinkles arising from consistency conditions. Appendix B provides details on flavor observable computations in the \(S_{1}\) leptoquark model.
## 2 Amuse-bouche: Froggatt-Nielsen and BSM Physics
The lepton and quark Yukawas and mixing angles present a clear generational hierarchy, with the charged fermion masses ranging over five orders of magnitude. This hierarchy calls for an explanation in the UV, and searches for flavorful new physics are carried out in pursuit of one. Hence, if an anomaly emerges in these experiments, it is well motivated to embed its BSM solutions within UV models that also explain the flavor hierarchy.
The FN mechanism [11] provides a four-dimensional, field-theoretic explanation for this hierarchy, replacing the small dimensionless parameters with a power counting in powers of an inverse mass scale, fixed by a symmetry. In this section, we review how this mechanism can explain the parameters in the SM matter sector, with an emphasis on the EFT point of view. We will then discuss how this perspective can naturally be extended to BSM physics.
### 2.1 Review of the Froggatt-Nielsen Mechanism
The basic idea of the FN mechanism is to introduce a horizontal symmetry, U(1)\({}_{H}\), under which different generations of the SM fermions have different charges. The horizontal symmetry is assumed to be spontaneously broken by the vacuum expectation value of a SM singlet scalar field, \(\varphi\)--the _flavon_. Assuming our EFT is valid up to some cutoff scale \(M\), we are led to a natural expansion parameter \(\lambda=\langle\varphi\rangle/M\), which appears in non-renormalizable operators involving the SM fermions. Later on, we will associate this scale \(M\) with the mass of new heavy fermions. Without loss of generality, we take the SM Higgs to be neutral under U(1)\({}_{H}\) and take the flavon charge to be \(-1\).
At scales just below the cutoff, the lowest dimension operators involving the SM fermions and the Higgs take the form
\[\mathcal{L}\,\supset\,r_{ij}^{u}\frac{\varphi^{(\dagger)\,m_{ij}}}{M^{m_{ij}}}Q_{i}H\bar{u}_{j}+r_{ij}^{d}\frac{\varphi^{(\dagger)\,n_{ij}}}{M^{n_{ij}}}Q_{i}H^{c}\bar{d}_{j}+r_{ij}^{e}\frac{\varphi^{(\dagger)\,l_{ij}}}{M^{l_{ij}}}L_{i}H^{c}\bar{e}_{j}+\text{h.c.} \tag{1}\]
where \((Q_{i},\ \bar{u}_{i},\ \bar{d}_{i},\ L_{i},\ \bar{e}_{i})\) are different SM fermions, subscripts on fermion fields refer to different generations, \(r_{ij}\) are \(\mathcal{O}(1)\) couplings,
\[m_{ij}=\big{|}[Q_{i}]+[\bar{u}_{j}]\big{|},\qquad n_{ij}=\big{|}[Q_{i}]+[\bar{ d}_{j}]\big{|},\qquad l_{ij}=\big{|}[L_{i}]+[\bar{e}_{j}]\big{|}, \tag{2}\]
and the square brackets indicate the U(1)\({}_{H}\) charge. The hermitian conjugate on \(\varphi\) appears if the sum of charges inside the absolute value is negative. At energies below \(\langle\varphi\rangle\), these operators appear as the Yukawa couplings of the SM Higgs, with the coupling matrices given by
\[Y_{Q\bar{u}}^{ij}=r_{ij}^{u}\frac{\langle\varphi^{(\dagger)}\rangle^{m_{ij}}}{ M^{m_{ij}}}\sim\lambda^{m_{ij}},\qquad Y_{Q\bar{d}}^{ij}=r_{ij}^{d}\frac{ \langle\varphi^{(\dagger)}\rangle^{n_{ij}}}{M^{n_{ij}}}\sim\lambda^{n_{ij}}, \qquad Y_{L\bar{e}}^{ij}=r_{ij}^{e}\frac{\langle\varphi^{(\dagger)}\rangle^{l_ {ij}}}{M^{l_{ij}}}\sim\lambda^{l_{ij}}. \tag{3}\]
This scaling implies that even modest differences in horizontal charges give rise to exponential hierarchies in Yukawa couplings. To connect with the observed flavor structure of the SM, we identify \(\lambda\) with the Cabibbo angle, \(\sim 0.2\), so that the CKM matrix hierarchies follow naturally from the Wolfenstein parameterization [38]. We refer to this setup as vanilla FN.
At the \(\mathcal{O}(1)\) level, the masses and mixing angles are
\[V_{ij}\sim\lambda^{\big{|}[Q_{i}]\,-\,[Q_{j}]\big{|}},\quad U_{ij}\sim\lambda ^{\big{|}[L_{i}]\,-\,[L_{j}]\big{|}}, \tag{4}\]
\[m_{i}^{u}\sim\lambda^{\big{|}[Q_{i}]\,+\,[\bar{u}_{i}]\big{|}},\quad m_{i}^{d} \sim\lambda^{\big{|}[Q_{i}]\,+\,[\bar{d}_{i}]\big{|}},\quad m_{i}^{l}\sim \lambda^{\big{|}[L_{i}]\,+\,[\bar{e}_{i}]\big{|}}, \tag{5}\]
where \(V\) (\(U\)) is the CKM [39, 40] (PMNS [41, 42]) matrix.
The most general horizontal charge assignment that gives rise to the observed structure of the CKM and PMNS matrices and SM fermion masses is given in Table 1.¹ We have the overall freedom to shift the charges of all quarks (leptons) by the same amounts \(q_{0}\) (\(l_{0}\)), respectively. Once these shifts are chosen, the CKM and PMNS structure constrain the other LH quarks' and leptons' charges. As indicated in Eq. (4), these mixing matrices only fix the absolute value of the difference between charges, hence the freedom in choosing \(X,Y=\pm 1\) in the table. The appearance of \(X,Y\) in multiple entries captures the correlation between those charges. To find the RH fermion charges we use the measured values of masses in the SM. As in the case of mixing, Eq. (5) only fixes the absolute value of the charge difference between LH and RH fermions, leaving the sign undetermined. We choose the signs so that the eigenvector associated with the heaviest (lightest) mass eigenstate has the largest overlap with the third (first) generation for each type of fermion. To check this, we generated 10000 mass matrices for each charge assignment, drawing new random numbers \(r_{ij}^{u,d,e}\in(0.2,1)\) for each test. For every charge assignment, we confirmed that a substantial fraction of trials yield the correct mixing patterns and mass eigenvalues within a factor of two of the experimentally measured values.
Footnote 1: In general, shifts of \(\pm 1\) in most of these charges can be tolerated when random \(\mathcal{O}(1)\) Yukawa couplings in the UV model are taken into account and the fact that the expansion parameter \(\lambda\) is not particularly small is considered. The anarchic structure of the PMNS matrix, in particular, leaves room for such small changes in the charges; see Refs. [43, 44] for further exploration of these shifts.
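The numerical check described above is straightforward to reproduce. The following is a minimal sketch (not the scan used for the paper): it draws one set of \(\mathcal{O}(1)\) coefficients \(r_{ij}\in(0.2,1)\), builds the quark Yukawa textures of Eq. (3) for one illustrative charge assignment of the form in Table 1 (here \(q_{0}=0\), \(X=-1\), and positive signs for the first-generation RH quarks), and extracts the resulting mass hierarchy and CKM-like mixing.

```python
import numpy as np

lam = 0.2  # FN expansion parameter, identified with the Cabibbo angle

# Horizontal charges from Table 1 with q0 = 0, X = -1 and positive first-generation
# signs (one illustrative choice among the 2^5 allowed assignments).
Q    = np.array([3, 2, 0])   # -q0 - 3X, -q0 - 2X, -q0
ubar = np.array([4, 1, 0])   #  q0 + 3X + 7,  q0 - X,   q0
dbar = np.array([3, 3, 2])   #  q0 + 3X + 6,  q0 - 3X,  q0 - 2X

def fn_texture(rng, left, right):
    """Yukawa texture Y_ij ~ r_ij * lam^|[left_i] + [right_j]|, cf. Eq. (3)."""
    r = rng.uniform(0.2, 1.0, size=(3, 3))
    return r * lam ** np.abs(left[:, None] + right[None, :])

rng = np.random.default_rng(1)
Yu, Yd = fn_texture(rng, Q, ubar), fn_texture(rng, Q, dbar)

# Singular values give the Yukawa eigenvalues; the left unitaries give the CKM matrix.
# Reorder so that index 0 corresponds to the lightest generation.
Uu, su, _ = np.linalg.svd(Yu)
Ud, sd, _ = np.linalg.svd(Yd)
Uu, su, Ud, sd = Uu[:, ::-1], su[::-1], Ud[:, ::-1], sd[::-1]
Vckm = Uu.T @ Ud  # real O(1) coefficients here, so no complex conjugation is needed

print("y_u : y_c : y_t =", su / su[2])
print("y_d : y_s : y_b =", sd / sd[2])
print("|V_us|, |V_cb|, |V_ub| =", abs(Vckm[0, 1]), abs(Vckm[1, 2]), abs(Vckm[0, 2]))
print("FN expectation:         ", lam, lam**2, lam**3)
```

Repeating the draw many times, and treating the lepton sector analogously, reproduces the behavior described above, with the hierarchies landing near the FN expectation up to \(\mathcal{O}(1)\) scatter.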
In the original FN proposal, it was assumed that the charges of all five types of fermions (\(Q\), \(\bar{u}\), \(\bar{d}\), \(L\), and \(\bar{e}\)) are ordered monotonically between different generations. Table 1 indicates that, while some correlations between LH and RH fermions of the second and third generation (captured by \(X,Y\)) are needed to generate the correct mass eigenstates, the monotonicity condition can be removed for first generation RH fermions without distorting the model's prediction for SM masses. This manifests itself as a binary choice in the charge of each first generation RH fermion \((\bar{u},\ \bar{d},\ \bar{e})\).

|  | Gen. 1 | Gen. 2 | Gen. 3 |
|---|---|---|---|
| \(Q\) | \(-q_{0}-3X\) | \(-q_{0}-2X\) | \(-q_{0}\) |
| \(\bar{u}\) | \(q_{0}+3X\pm 7\) | \(q_{0}-X\) | \(q_{0}\) |
| \(\bar{d}\) | \(q_{0}+3X\pm 6\) | \(q_{0}-3X\) | \(q_{0}-2X\) |
| \(L\) | \(l_{0}+Y\) | \(l_{0}\) | \(l_{0}\) |
| \(\bar{e}\) | \(-l_{0}-Y\pm 8\) | \(-l_{0}+5Y\) | \(-l_{0}+3Y\) |

Table 1: The most general horizontal charge assignment that explains the SM masses and mixings in FN with \(\lambda\sim 0.2\). \(q_{0}\) and \(l_{0}\) denote general shifts in quark and lepton charges, respectively, that leave the IR masses and mixings unchanged. \(X,Y=\pm 1\) denote the correlations between different charges that are required by the CKM and PMNS matrices. For every value of \((q_{0},l_{0})\), we have \(2^{5}\) choices for the charge assignments. In supersymmetric theories, holomorphy sets \(X=-Y=-1\) and picks the positive sign for first generation RH fermions.
It is also popular to consider supersymmetric variations of FN models. In the supersymmetric case, holomorphy of the superpotential forbids terms with \(\varphi^{\dagger}\) instead of \(\varphi\)[12, 13]. This eliminates a great deal of the freedom in charge assignments tabulated in Table 1. Specifically, it fixes \(X=-Y=-1\) and picks the positive sign for first generation RH fermions, leaving only the separate overall shifts in the quark and lepton charges, \(q_{0}\) and \(l_{0}\). It also enforces the monotonicity of the horizontal charges across different generations. However, since we do not explore the supersymmetric case in detail in the rest of this paper, we do not need to enforce these constraints.
The simplest UV completion of this effective theory (and the one imagined by Froggatt and Nielsen [11]) is to introduce a set of vector-like fermions \(F\) with mass \(M\) that live in an SM representation permitting Yukawa couplings between the Higgs and SM fermions. We assume the existence of heavy fermions with all horizontal charges necessary to complete the SM Yukawas with Yukawa couplings to the flavon \(\sim\varphi F\bar{F}^{\prime}\). The flavon Yukawa couplings are assumed to be \(\mathcal{O}(1)\), leading to the effective theory in Eq. (1) with \(\mathcal{O}(1)\) Wilson coefficients denoted by \(r_{ij}\).
As an example, the up-type Yukawa couplings can be generated by "chain" diagrams such as those shown in Figure 1. The top Yukawa arises at the renormalizable level, but the suppressed couplings arise by introducing the vector-like pair \(U\) and \(\bar{U}\), where \(\bar{U}\) has the same quantum numbers under the SM gauge groups as \(\bar{u}\). The subscripts indicate the U(1)\({}_{H}\) charge of \(U\). For instance, the chain shown on the right side of Figure 1 gives rise to a \(\lambda^{2}\) suppression in the coupling of \(Q_{2}\bar{u}_{3}\). If there exist heavy fermions with SM charges similar to \(Q\) and the correct horizontal charges, chain diagrams with the Higgs and flavon insertions interchanged will contribute as well. Similar chains give rise to the Yukawa couplings for other SM fermions.
Models of FN constructions with additional symmetries, multiple expansion parameters, or expansion parameters that are allowed to freely vary have also been developed in the literature, e.g. see [12, 13, 43, 45, 46, 47]. For simplicity, however, in this work we focus on
Figure 1: Example diagrams leading to the effective operators for the up-type Yukawa couplings with vector-like heavy fermions \(U\) and \(\bar{U}\), where \(\bar{U}\) has the same SM quantum numbers as \(\bar{u}\). The subscripts on the \(U\) fields refer to the horizontal charge of \(U\), and we have taken charges from Table 1 with \(X=-1\) and \(q_{0}=0\).
FN setups with only one expansion parameter, which we identify with the Cabibbo angle, and develop a systematic way to parameterize small deviations from them. We can straightforwardly generalize our discussions below to more baroque FN setups.
As a final note, in a UV complete model, quantum gravity considerations require that the horizontal symmetry be embedded in a gauge symmetry [48], which in turn demands the cancellation of all its anomalies.2 We have checked that the general charge assignment of Table 1 cannot cancel all gauge anomalies in the typical FN UV completion; see also Ref. [52] for a similar conclusion. This conclusion is also corroborated by Refs. [53, 54], which deduce that the general charge assignments that can explain the SM Yukawa hierarchy cannot be anomaly-free by studying general extensions of the SM with a new anomaly-free U(1) gauge group. As a result, in such a construction one should resort to either introducing new heavy chiral fermions (and subsequently extending the scalar sector so as to generate a mass for these fermions) or the Green-Schwarz mechanism to cancel anomalies [55]. We will leave further investigations of anomaly cancellation for future work.
Footnote 2: The lack of evidence for the (pseudo-)Nambu–Goldstone boson associated with the spontaneous breaking of the horizontal symmetry is also often used as motivation for gauging it. However, models with a potentially viable Goldstone exist. See Refs. [49, 50, 51, 52] for examples where the Goldstone is identified with the QCD axion.
### 2.2 Froggatt-Nielsen and Flavorful New Physics
When introducing new physics, some assumptions must be made about the couplings of SM fields to new particles. These couplings are generically non-universal unless governed by additional structure such as new gauge symmetries. Given the hierarchies that exist in the SM fermion couplings, it is a priori unclear what a "natural" size for such non-universal couplings should be. However, if one assumes a UV explanation of the flavor hierarchy such as the FN mechanism, there is a natural Ansatz for the new physics couplings as well.
The phenomenological significance of such an Ansatz lies in the fact that it correlates the predictions of a BSM model for various flavorful observables in the IR. Thus, depending on the Ansatz, a model built to explain a discrepancy in the data will give rise to correlated signals in other constraining observables. For instance, any solution of the \((g-2)_{\mu}\) anomaly with a non-minimal flavor Ansatz gives rise to unacceptably large contributions to various LFV decays, especially \(\tau\to\mu\gamma\).
To better understand such Ansatze, it is useful to organize our thinking in terms of the global flavor symmetry of the SM:
\[G_{\rm flavor}={\rm SU}(3)_{Q}\times{\rm SU}(3)_{u}\times{\rm SU}(3)_{d}\times {\rm SU}(3)_{L}\times{\rm SU}(3)_{e}\times{\rm U}(1)^{5}, \tag{6}\]
where three of the U(1) factors can be identified with hypercharge, baryon number and lepton number. This symmetry acts on the generation indices of the chiral matter in the SM, with the unbarred (barred) fields transforming as triplets (anti-triplets), respectively.
The symmetry is broken explicitly by the Yukawa matrices, but formal invariance under \(G_{\rm flavor}\) can be restored if we promote the Yukawas to transform as spurions:
\[Y_{Q\bar{u}}\sim(\mathbf{\bar{3}}_{Q},\mathbf{3}_{u}),\qquad Y_{Q\bar{d}}\sim( \mathbf{\bar{3}}_{Q},\mathbf{3}_{d}),\qquad Y_{L\bar{e}}\sim(\mathbf{\bar{3}}_{ L},\mathbf{3}_{e}). \tag{7}\]
This formalism can be extended in a straightforward way to new physics with any new spurions of \(G_{\rm flavor}\) [32, 33, 34]. New fields are taken to be singlets of the SU(3)\({}^{5}\) part of the SM flavor group, and their couplings to SM fermions then have definite transformation properties under \(G_{\rm flavor}\).
As an example, consider the scalar leptoquark \(S_{1}\), a color anti-fundamental with hypercharge \(Y=1/3\). This allows for the renormalizable couplings to SM fields,3
Footnote 3: The SM gauge symmetries also permit the couplings \(S_{1}\bar{u}\bar{d}\) and \(S_{1}Q^{\dagger}Q^{\dagger}\), which lead to proton decay. We can forbid these couplings by enforcing conservation of baryon number and endowing the leptoquark with a baryon number of \(-1/3\), or by potentially gauging some discrete subgroup. Therefore, in the rest of this work, we ignore these couplings. We note that the “wrinkles” introduced in §3 cannot entirely alleviate the proton decay constraint, necessitating a symmetry-based explanation.
\[\mathcal{L}\supset-\Delta_{QL}^{ij}\epsilon^{ab}S_{1}Q_{bi}L_{aj}-\Delta_{ \bar{u}\bar{e}}^{ij}S_{1}^{\dagger}\bar{u}_{i}\bar{e}_{j}+{\rm h.c.}, \tag{8}\]
where the spinor indices are implicit, \(a,b\) are SU(2)\({}_{L}\) fundamental indices, \(\epsilon^{12}=+1\), and \((i,j)\) are flavor indices. The \(\Delta_{QL}\) coupling also appears in R-parity violating supersymmetric models, where \(S_{1}\) is identified with a down squark; these models have \(\Delta_{\bar{u}\bar{e}}=0\)[56]. The new Yukawa couplings \(\Delta_{QL}\) and \(\Delta_{\bar{u}\bar{e}}\) transform as
\[\Delta_{QL}\sim(\mathbf{\bar{3}}_{Q},\mathbf{\bar{3}}_{L}),\qquad\Delta_{\bar {u}\bar{e}}\sim(\mathbf{3}_{u},\mathbf{3}_{e}). \tag{9}\]
In the absence of any flavor Ansatz, the matrices \(\Delta_{QL}\) and \(\Delta_{\bar{u}\bar{e}}\) are arbitrary \(3\times 3\) complex matrices. However, when embedded in a vanilla FN setup, and assuming the \(S_{1}\) leptoquark is neutral under U(1)\({}_{H}\), we find an Ansatz for the hierarchies present in the spurions \(\Delta_{QL}\) and \(\Delta_{\bar{u}\bar{e}}\). In analogy with Eq. (3), we find:
\[\Delta_{QL}^{ij}\sim\lambda^{\big|[Q_{i}]+[L_{j}]\big|},\qquad\Delta_{\bar{u}\bar{e}}^{ij}\sim\lambda^{\big|[\bar{u}_{i}]+[\bar{e}_{j}]\big|}. \tag{10}\]
Put differently, the SM charges and flavor symmetries are enough to determine how the new \(S_{1}\) field should be embedded in the effective theory below \(M\). The power counting of the effective theory then dictates that the expected FN scaling above holds, up to the \(\mathcal{O}(1)\) Wilson coefficients of the effective theory (analogous to the \(r_{ij}\) in Eq. (1)). This Ansatz generalizes to arbitrary new spurions of \(G_{\rm flavor}\) that can arise in other leptoquark models. A complete list of these spurions is given in Ref. [33].
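As a concrete illustration of Eq. (10), the short sketch below evaluates the naive powers of \(\lambda\) in \(\Delta_{QL}\) and \(\Delta_{\bar{u}\bar{e}}\) for one of the allowed charge assignments from Table 1 (\(q_{0}=0\), \(l_{0}=-1\), \(X=+1\), \(Y=-1\), positive first-generation signs, a hypothetical illustrative choice), with all \(\mathcal{O}(1)\) coefficients set to one. The point is that once the SM charges are fixed, every entry of the leptoquark spurions is fixed as well, which is what correlates the different observables discussed below.

```python
import numpy as np

lam = 0.2
# Table 1 with q0 = 0, l0 = -1, X = +1, Y = -1 and positive first-generation signs.
Q    = np.array([-3, -2,  0]);  ubar = np.array([10, -1,  0])
L    = np.array([-2, -1, -1]);  ebar = np.array([10, -4, -2])

# Powers of lambda in each entry of the leptoquark spurions, cf. Eq. (10).
pow_QL = np.abs(Q[:, None] + L[None, :])        # [[5 4 4], [4 3 3], [2 1 1]]
pow_ue = np.abs(ubar[:, None] + ebar[None, :])  # [[20 6 8], [9 5 3], [10 4 2]]

# The spurions themselves, up to O(1) coefficients:
Delta_QL, Delta_ue = lam**pow_QL, lam**pow_ue
print(pow_QL, pow_ue, sep="\n")
```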
Once the effective theory is known, we can make predictions for the contributions of new physics to various observables. Because the same spurion contributes to multiple observables,
these predictions are correlated by a FN Ansatz. These correlations can lead to inconsistencies with experimental results. Consequently, it is useful to have a systematic way of deviating from this scaling while still maintaining the predictivity of FN models. We discuss a systematic way of doing this in the next section. Specifically, we show how modifications of the UV spectrum of a FN construction can allow a controlled deviation from correlations between various observables in the IR, alleviating violations of experimental bounds.
## 3 Plat Principal: Wrinkles in Froggatt-Nielsen
As described in the previous section, the FN mechanism provides a natural Ansatz for the hierarchies of new flavor spurions coupled to the SM quarks and leptons. However, given our lack of knowledge about the dynamics underlying the flavor structure of the SM, it is worth exploring how this Ansatz could change within the general framework of horizontal symmetry explanations for the SM flavor pattern.
In this spirit, we introduce the notion of "wrinkles", as a way of parametrically changing the FN Ansatz for the flavor spurions that is described above without introducing additional scales. In §3.1, we will define them precisely, and argue that they allow for more flexibility in correlations between different flavor observables. While this flexibility inherently makes our Ansatz less predictive, the freedom to introduce wrinkles is not absolute: there is a bound on the number of wrinkles imparted by radiative corrections, which we will discuss in §3.2. In §3.3, we give several explicit examples of realizations of wrinkles in UV models.
### 3.1 Wrinkled Froggatt-Nielsen Chains
In §2.2, we described how the FN Ansatz leads to a natural power counting for new flavor spurions in powers of \(\lambda\equiv\langle\varphi\rangle/M\), which we identify with the Cabibbo angle. Here, we generalize this power counting by considering modifications to the power of \(\lambda\) that appears in the spurion.
Consider a flavor spurion \(Y_{\psi\bar{\chi}}\), where \(\psi\), \(\bar{\chi}\) are given SM matter fields. We introduce what we call "wrinkles" to modify the scaling of a given element of \(Y_{\psi\bar{\chi}}\):
\[Y_{\psi\bar{\chi}}^{ij}\sim W_{\psi\bar{\chi}}^{ij}\,\lambda^{\big|[\psi_{i}]+[\bar{\chi}_{j}]\big|}\equiv\lambda^{\omega_{\psi\bar{\chi}}^{ij}+\big|[\psi_{i}]+[\bar{\chi}_{j}]\big|}. \tag{11}\]
Here we denote the power of \(\lambda\) that appears in \(W_{\psi\bar{\chi}}^{ij}\) by \(\omega_{\psi\bar{\chi}}^{ij}\) which, for simplicity, is assumed to be an integer. This additional scaling is motivated by allowing for additional structure in the UV, such as symmetries inducing obstructions in the heavy fermion chains which generate the non-renormalizable operators, and is illustrated schematically in Figure 2. In general, any modification of the UV theory that gives rise to deviations from predictions of the vanilla FN setup without changing the number of power counting parameters can be
considered a wrinkle. Different UV completions can lead to different correlated patterns of matrix entries \(\omega^{ij}_{\psi\bar{\chi}}\), as we will discuss in §3.3, but from the IR perspective, these correlations are not apparent.
To be concrete, consider the example of the spurions \(\Delta_{QL}\) and \(\Delta_{\bar{u}\bar{e}}\) for the \(S_{1}\) leptoquark, as in Eqs. (8) and (9). With additional wrinkles, the couplings in Eq. (10) are modified to
\[\Delta^{ij}_{QL}\sim\lambda^{\omega^{ij}_{QL}+\big|[Q_{i}]+[L_{j}]\big|},\qquad\Delta^{ij}_{\bar{u}\bar{e}}\sim\lambda^{\omega^{ij}_{\bar{u}\bar{e}}+\big|[\bar{u}_{i}]+[\bar{e}_{j}]\big|}, \tag{12}\]
where \(\omega_{QL}\) and \(\omega_{\bar{u}\bar{e}}\) are matrices of integers, whose elements \(\omega^{ij}_{QL}\) and \(\omega^{ij}_{\bar{u}\bar{e}}\) can vary across generations independently for both fermions. The idea of wrinkles can also be extended to models with additional scales by allowing wrinkles for each power counting parameter. Here we will focus on the case with a single expansion parameter and not discuss the case of multiple parameters further.
Note that there are two distinct possibilities allowed by introducing wrinkles. The most straightforward one is that the number of factors of \(\lambda\) in some couplings of a _new_ flavor spurion is modified, suppressing or enhancing their contributions to some flavor observables. For instance, wrinkles could suppress BSM contributions to observables such as electric and magnetic dipole moments (EDMs and MDMs) or light meson decays, which are generally strongly constrained, and allow for spurions with smaller mass scales to contribute to other observables. We will discuss this possibility thoroughly, again in the case of the \(S_{1}\) leptoquark, in §4.
The second possibility is that wrinkles could exist in SM chains--i.e., \(Y_{Q\bar{u}}\), \(Y_{Q\bar{d}}\), or \(Y_{L\bar{e}}\) could have fewer or additional factors of \(\lambda\). In the IR, the SM Yukawa matrices must still match the measured masses and mixing angles of the quarks and leptons. Wrinkles in SM chains therefore necessitate different horizontal charges than the ones shown in Table 1. This changes the expected scaling for BSM spurions, leading to different couplings than expected in a naive FN Ansatz between the SM fermions and new particles. We will not comment in detail on particular phenomenological applications of this scenario, but highlight that this is an interesting direction for further exploration.
Figure 2: A cartoon illustrating a “wrinkle” in the Yukawa coupling \(\Phi\psi_{i}\bar{\chi}_{j}\), which leads to a change in the predicted scaling from the FN Ansatz.
### 3.2 Bounds on Wrinkles from Radiative Corrections
Allowing for wrinkles would appear to entirely eliminate the predictivity of the FN Ansatz. However, there is a natural bound on the size of the wrinkles that arises from demanding that the observed flavor structure in the IR arises predominantly from _tree-level_ contributions to the effective operators below the scale \(M\). Requiring that the tree-level contribution (including wrinkles) to the Yukawa coupling is larger than any subleading corrections from loops leads to a number of _consistency conditions_ on the Yukawas, which in turn set a bound on the wrinkles. Provided these conditions are satisfied, the flavor structure in the IR is still determined by the FN mechanism in a predictive way, with departures from the minimal implementation parameterized by the wrinkles.
To illustrate these constraints, consider the Yukawa coupling matrix between the right-handed up-type quarks and the right-handed charged leptons for the \(S_{1}\) leptoquark model in Eq. (8), \(\Delta^{ij}_{\bar{u}\bar{e}}\). In a FN setup, this coupling arises from a non-renormalizable operator with a minimal number of flavons. It can be UV completed with a tree-level chain of heavy fermions and flavons with a single leptoquark vertex, as illustrated on the left in Figure 3. However, the same operator can also be generated at higher order by including SM fermions in the FN chain \(\bar{u}^{i}\to Q^{k}\to L^{l}\to\bar{e}^{j}\), as shown on the right in Figure 3. The first and last connections include additional Higgs insertions that are tied together to form a loop, and the \(Q^{k}\to L^{l}\) connection involves a leptoquark interaction. Thus, the higher-order contribution to \(\Delta^{ij}_{\bar{u}\bar{e}}\) is:
\[\Delta^{ij}_{\bar{u}\bar{e}}\Big|_{\rm loop}\sim\frac{1}{16\pi^{2}}\left(Y_{Q\bar{u}}^{T}\cdot\Delta_{QL}^{*}\cdot Y_{L\bar{e}}\right)^{ij}. \tag{13}\]
Demanding this contribution to be smaller than the tree-level contribution, and assuming the absence of any artificial cancellations, leads to a lower bound on the Yukawa coupling \(\Delta^{ij}_{\bar{u}\bar{e}}\) and an upper bound on the entries of \(\Delta^{*}_{QL}\). This bound begets a set of consistency
Figure 3: Left: the tree level \(S_{1}^{\dagger}\bar{u}_{i}\bar{e}_{j}\) coupling. Right: A loop contribution to the same spurion, leading to the spurion contribution described by Eq. (13). In both diagrams the dots indicate a chain of flavon and heavy fermion vertices, whose length is determined by the horizontal charges of the particles, which we suppress for clarity.
conditions on the wrinkles:
\[\begin{split}\big|\Delta^{ij}_{\bar{u}\bar{e}}\big|&\gtrsim\frac{1}{16\pi^{2}}\Big|\big(Y^{T}_{Q\bar{u}}\cdot\Delta^{*}_{QL}\cdot Y_{L\bar{e}}\big)^{ij}\Big|,\\ \implies\quad\lambda^{\omega^{ij}_{\bar{u}\bar{e}}+\big|[\bar{u}_{i}]+[\bar{e}_{j}]\big|}&\gtrsim\frac{1}{16\pi^{2}}\,\big|Y^{T}_{Q\bar{u}}\big|^{ik}\,\lambda^{\omega^{kl}_{QL}+\big|[Q_{k}]+[L_{l}]\big|}\,\big|Y_{L\bar{e}}\big|^{lj},\end{split} \tag{14}\]
where there is an implicit summation over the indices \(k\) and \(l\) above. While the SM Yukawas on the right hand side of this relation may also contain wrinkles, it is the IR value of the coupling that appears, which is fit to the SM masses and mixing angles.
Similar _consistency conditions_ were proposed in Refs. [32, 33, 34], neglecting the loop factor. Other similar constraints (including the loop factor) have been considered as naturalness constraints on models of flavorful new physics [57]. We settle for the weaker constraint, including the loop factor, as a concrete, irreducible bound.4 Note that there are also other higher order contributions to the spurions, such as those from higher-dimensional operators with the Higgs replaced by its vacuum expectation value, but they will be smaller than the one in Eq. (13), since \(v^{2}/M^{2}<1/16\pi^{2}\).
Footnote 4: RG evolution of the leptoquark couplings also does not change the above set of bounds, as long as one imposes the consistency conditions at the matching scale of order \(M\). The structure of the one loop Yukawa RGEs involves the same higher order operators as appearing in our consistency condition. Thus, imposing the consistency condition at the matching scale ensures that running is a small effect and can be neglected. Consequently, RG evolution to scales below the matching scale ensures that the consistency condition (inequality) holds at all such scales.
More generally, a complete set of consistency conditions can be derived by again considering the Yukawas as spurions under \(G_{\text{flavor}}\). In the absence of any additional symmetries, contributions similar to Eq. (13) arise from any combination of Yukawa couplings that transform in the same representation of \(G_{\text{flavor}}\). The complete list of leading consistency conditions for all of the Yukawa couplings in the SM extended with the \(S_{1}\) leptoquark are listed in Appendix A.
These inequalities must be satisfied for any wrinkled FN setup involving additional flavor spurions, and they impose non-trivial constraints on the size of the wrinkles introduced in Eq. (11).5 The details of these constraints depend on the particular charge assignment of the SM fermions, but once these are fixed, a degree of predictiveness is returned to the FN Ansatz, even in the presence of wrinkles. As pointed out in Refs. [32, 33, 34], these consistency conditions are trivially satisfied in a vanilla FN setup without wrinkles, as a result of the triangle inequality.
Footnote 5: The consistency conditions, as written, hold neglecting \(\mathcal{O}(1)\) couplings; there may be small deviations from including them.
As an example of how this bound works with nonzero wrinkles, consider the charge assignment in Table 1 with \(q_{0}=0\), \(l_{0}=-1\), \(X=+1\), \(Y=-1\) and all other sign choices being positive. Assuming no wrinkles in the SM Yukawas, the bound on \(\omega^{33}_{\bar{u}\bar{e}}\) from Eq. (14)
becomes
\[\begin{array}{rcl}\omega_{\bar{u}\bar{e}}^{33}&\lesssim&\sum_{k,l} \left(\left|\left[Q_{k}\right]+\left[\bar{u}_{3}\right]\right|+\left|\left[Q_{k} \right]+\left[L_{l}\right]\right|+\omega_{QL}^{kl}+\left|\left[L_{l}\right]+ \left[\bar{e}_{3}\right]\right|\right)\\ &&\qquad+\log_{\lambda}\frac{1}{16\pi^{2}}-\left|\left[\bar{e}_{3} \right]+\left[\bar{u}_{3}\right]\right|\\ &\lesssim& 2+\omega_{QL}^{33}+\log_{\lambda}\frac{1}{16\pi^{2}},\end{array} \tag{15}\]
where in the last line we have assumed that \(k=l=3\) is the largest entry in \(\omega_{QL}^{kl}\), which is typically the case. We see that, at least for this consistency condition, up to five wrinkles on \(\Delta_{\bar{u}\bar{e}}^{33}\) are allowed, even without extra wrinkles on \(\Delta_{QL}^{33}\).
A similar argument for general couplings, again using the triangle inequality, makes it clear that if all \(\omega_{\psi\bar{\chi}}^{ij}\geq 0\), a _sufficient_ condition on the wrinkles is that they are all greater than a loop factor:
\[(W_{\psi\bar{\chi}})^{ij}\gtrsim\frac{1}{16\pi^{2}}. \tag{16}\]
Note that in this equation, we have assumed a mild separation of scales so that the logarithms in the loop contribution can be neglected along with other \(\mathcal{O}(1)\) factors in the loop calculation. In this work, we focus on the bound in Eq. (16) and leave further studies of more accurate lower bounds on wrinkles for future work. As shown in Eq. (15), this bound may be overly restrictive, but it provides a useful shortcut for employing wrinkles in an EFT without having to manually check all the consistency conditions.
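The bound of Eq. (15) and the shortcut of Eq. (16) can be checked numerically along the following lines. This is a rough sketch with all \(\mathcal{O}(1)\) coefficients set to one, using the same illustrative charge assignment as in the example above (\(q_{0}=0\), \(l_{0}=-1\), \(X=+1\), \(Y=-1\), positive first-generation signs) and placing a wrinkle only on \(\Delta_{\bar{u}\bar{e}}^{33}\):

```python
import numpy as np

lam, loop = 0.2, 1.0 / (16 * np.pi**2)

# Table 1 with q0 = 0, l0 = -1, X = +1, Y = -1 and positive first-generation signs.
Q    = np.array([-3, -2,  0]);  ubar = np.array([10, -1,  0])
L    = np.array([-2, -1, -1]);  ebar = np.array([10, -4, -2])

def texture(left, right, wrinkles=0):
    """Wrinkled FN texture lam^(omega_ij + |[left_i] + [right_j]|), cf. Eqs. (11)-(12)."""
    return lam ** (wrinkles + np.abs(left[:, None] + right[None, :]))

Y_Qu, Y_Le = texture(Q, ubar), texture(L, ebar)  # IR values of the SM Yukawas
Delta_QL   = texture(Q, L)                       # no wrinkles on Delta_QL

omega_ue = np.zeros((3, 3))
omega_ue[2, 2] = 4                               # four wrinkles on the (3,3) entry
Delta_ue = texture(ubar, ebar, omega_ue)

# Consistency condition of Eq. (14): tree-level entries vs. the loop-induced piece.
loop_piece = loop * (Y_Qu.T @ Delta_QL @ Y_Le)
print("analytic bound of Eq. (15): omega_ue^33 <~", 2 + np.log(loop) / np.log(lam))
print("tree / loop for the (3,3) entry:", Delta_ue[2, 2] / loop_piece[2, 2])
print("condition satisfied entrywise:", bool((Delta_ue >= loop_piece).all()))
```

With the \(\mathcal{O}(1)\) coefficients dropped, four wrinkles on \(\Delta_{\bar{u}\bar{e}}^{33}\) pass the check with a factor of a few to spare, consistent with the estimate of roughly five quoted above.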
### 3.3 UV Completions
We now turn to UV completions of the wrinkles introduced in Eq. (11). Our goal is not to provide an exhaustive or detailed list of examples, but demonstrate a proof of principle of potential ways these wrinkles can arise from more complicated UV completions.
#### 3.3.1 Missing Heavy Fermions
As a first concrete realization of the idea sketched in Figure 2, we consider a situation where one of the heavy fermions with a particular horizontal charge does not exist in the spectrum. Instead, the chain leading to the effective operator can only be completed by including additional fermions and scalars, causing additional suppression.
To illustrate this mechanism, we consider the example in Figure 1 and replace a single heavy vector-like pair of fermions \(U_{1}\), \(\bar{U}_{1}\) with two sets of vector-like pairs, which we will denote by \(U_{1}^{(1)}\), \(\bar{U}_{1}^{(1)}\) and \(U_{1}^{(2)}\), \(\bar{U}_{1}^{(2)}\). These are assumed to have the same SM and horizontal charges as \(U_{1}\), \(\bar{U}_{1}\), but also transform as conjugate pairs under new symmetry groups, \(G_{1}\) and \(G_{2}\), respectively. To be explicit, we will take \(G_{1}=\mathrm{SU}(N_{1})\) and \(G_{2}=\mathrm{SU}(N_{2})\) to be two different continuous, non-Abelian groups, but the following construction works for arbitrary (continuous or discrete) groups as well, with straightforward modifications. To
complete the chain diagram, we must also introduce new flavons, which we take to be in the representations,
\[\varphi^{(1)}:(\mathbf{N_{1}},\mathbf{1})_{-1},\qquad\varphi^{(2)}:(\mathbf{1}, \overline{\mathbf{N_{2}}})_{-1},\qquad\Phi^{(1,2)}:(\overline{\mathbf{N_{1}}}, \mathbf{N_{2}})_{0}, \tag{17}\]
where the parentheses indicate the \(\mathrm{SU}(N_{1})\times\mathrm{SU}(N_{2})\) representation, and the subscript is the horizontal charge. These allow us to construct the diagram shown in Figure 4, where both of the extra heavy fermion pairs are traversed between \(Q_{2}\) and \(\bar{u}_{3}\). The charge assignments forbid the couplings \(\varphi^{(1)}U_{2}\bar{U}_{1}^{(2)}\) and \(\varphi^{(2)}U_{1}^{(1)}\bar{u}_{3}\), so that this diagram is the leading effective operator containing \(HQ_{2}\bar{u}_{3}\).
Assuming all the scalars acquire vevs \(\sim\langle\varphi\rangle\) and that the new fermions have vector-like masses \(\sim M\), this replaces the \(\lambda^{2}\) suppression inferred from the horizontal charges with a \(\lambda^{3}\) suppression. In other words, this leads to a "wrinkle", \(W_{Q\bar{u}}^{23}\sim\lambda\).
This construction can be extended to include arbitrarily many wrinkles in place of a single heavy fermion. For example, \(W_{Q\bar{u}}^{23}\sim\lambda^{2}\) is obtained by introducing additional mirror quarks, \(U_{1}^{(3)}\), \(\bar{U}_{1}^{(3)}\), replacing \(\varphi^{(2)}\) with a bi-fundamental \(\Phi^{(2,3)}\) transforming as \((\mathbf{1},\overline{\mathbf{N_{2}}},\mathbf{N_{3}})_{0}\), closing the chain with \(\varphi^{(3)}\), which transforms as a \((\mathbf{1},\mathbf{1},\overline{\mathbf{N_{3}}})_{-1}\); further wrinkles are obtained for additional mirror quarks. In these types of examples, the Higgs and chiral fermions of the SM are neutral under the new symmetries, so as to be compatible with the general arguments in Ref. [12].
Note that with this mechanism, we see an example of the correlation between wrinkles in different chains. We constructed this wrinkle in the context of the \(Q_{2}\) and \(\bar{u}_{3}\) chain, but since we have removed \(U_{1}\), \(\bar{U}_{1}\) from the spectrum, the wrinkle necessarily appears in any chain involving them. For instance, assuming heavy up-like quarks are responsible for all of the up-type Yukawa couplings, it would also appear in the \(Q_{1}H\bar{u}_{3}\) operator.
Figure 4: An explicit realization of a “wrinkled” FN chain, where the heavy quark with horizontal charge \(+1\) is replaced by two heavy quarks, along with additional flavons, transforming under additional symmetries.
#### 3.3.2 Extra Abelian Symmetries
Another concrete example in which wrinkles can appear in an effective theory with the FN Ansatz is realized by considering additional Abelian symmetries in the UV, under which the SM fermions are charged. In particular, we can consider gauging the non-anomalous combinations of baryon number, \(B\), and the individual lepton numbers, \(L_{e}\), \(L_{\mu}\), and \(L_{\tau}\), as is frequently done in model-building for various flavor anomalies [58]. These symmetries are preserved by the SM Yukawa couplings, but generically violated by neutrino masses and additional Yukawa couplings between SM fermions and new BSM fields, such as leptoquarks. For concreteness, we again work with the \(S_{1}\) leptoquark and assume it is neutral under the new symmetry; therefore the flavor spurion must absorb the remaining U(1) charge. This means that additional flavons charged under the extra symmetries also must be included in order to complete the leptoquark Yukawa couplings. The usual flavon, with U(1)\({}_{H}\) charge \(-1\), is still present, since it is required to complete the SM Yukawa couplings.6
Footnote 6: In the presence of neutrino masses, the extra flavons may also be required to generate the PMNS matrix structure, depending on the additional symmetries we impose.
In contrast to the UV completions discussed in §3.3.1, where the wrinkles are always additional suppression factors, the wrinkles that result from these extra symmetries can naturally either suppress or enhance the size of the flavor spurions. Another distinction is that we have not removed any fermions of particular charges from the UV spectrum in this case: we allow fermions with all required quantum numbers to exist.
Just like other UV models, additional symmetries and the flavons charged under them can generate a correlated pattern of wrinkles for the different chains. The details of those correlations depend on whether the new symmetries are flavor universal or flavor specific; we will discuss examples of both cases. In order to maintain the predictivity of our example, we also assume additional symmetries are spontaneously broken at similar scales to the U(1)\({}_{H}\) symmetry.
The flavor universal case is simpler, but also less flexible because of interdependence between different chains. For example, assuming U(1)\({}_{B-L}\) is a symmetry of the theory, we can construct the leptoquark Yukawa spurions by introducing an additional flavon, \(\tilde{\varphi}\), which we take to have \(B-L\) charge \(1/3\). The new flavon will not change any of the SM chains, since they respect the \(B-L\) symmetry, but the leptoquark chains can be different from the usual FN scenario. For instance, if \(\tilde{\varphi}\) has no U(1)\({}_{H}\) charge, all the leptoquark Yukawas will become smaller by \(\lambda^{2}\), since the external fermions all have \(B-L\) charge difference \(\pm 2/3\) without the new flavon. If \(\tilde{\varphi}\) also has U(1)\({}_{H}\) charge \(\geq 1\), then the pattern of leptoquark chains becomes more intricate. Since each leptoquark chain must contain exactly two copies of the new \(B-L\) flavon and the remaining difference in horizontal charge requires insertions of the original flavon, whether a given chain becomes shorter or longer depends on the details of the assigned horizontal charges.
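As a quick check of the \(\lambda^{2}\) counting above (assuming, as stated, that \(\langle\tilde{\varphi}\rangle\) is of order \(\langle\varphi\rangle\)): with \(S_{1}\) neutral under \(B-L\), the operator \(S_{1}QL\) carries net \(B-L\) charge
\[[Q]_{B-L}+[L]_{B-L}=\tfrac{1}{3}-1=-\tfrac{2}{3},\]
so exactly two insertions of \(\tilde{\varphi}\) (charge \(+1/3\)) are needed to form an invariant, and if \(\tilde{\varphi}\) carries no horizontal charge each insertion costs one extra power of \(\langle\tilde{\varphi}\rangle/M\sim\lambda\). The \(S_{1}^{\dagger}\bar{u}\bar{e}\) coupling works out the same way, with two conjugate insertions.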
We have somewhat more freedom in the flavor specific case. As an example, consider introducing a new \(\mathrm{U}(1)_{B-3L_{e}}\) symmetry. Now both the PMNS matrix and chains for the leptoquark Yukawa couplings require new flavons charged under both \(\mathrm{U}(1)_{H}\) and \(\mathrm{U}(1)_{B-3L_{e}}\). We introduce two additional flavons: \(\varphi_{2}\) is necessary to generate PMNS matrix entries of the correct size, and \(\varphi_{3}\) is necessary to complete the leptoquark chains while respecting the additional symmetries. These flavons have \(B-3L_{e}\) charges
\[[\varphi]=0\qquad[\varphi_{2}]=3\qquad[\varphi_{3}]=-1/3. \tag{18}\]
Each also carries \(\mathrm{U}(1)_{H}\) charge \(-1\). Including these extra symmetries and flavons charged under them creates wrinkles by changing the required number of vev insertions for the leptoquark couplings compared to the spurion size we would naively expect with only these \(\mathrm{U}(1)_{H}\) charges. For example, if we consider only the couplings to the third generation leptons, we make the right-handed \(\mu\) and \(\tau\) couplings smaller while leaving the right-handed \(e\) coupling and the left-handed couplings unaffected. This is shown in Figure 5. Nonetheless, despite the additional freedom in the flavor specific case, it is still challenging to obtain certain patterns of wrinkles, such as those constrained by the triangle inequality.
Finally, we comment on a few modifications to the examples above. First, we note that it is possible to modify this approach by charging the leptoquark under \(\mathrm{U}(1)_{H}\), instead of or in addition to introducing additional flavon(s). Similar to the \(B-L\) charged flavons, this is another mechanism to add wrinkles to the leptoquark couplings without affecting the SM couplings. In principle, we can also charge the leptoquark under the additional symmetries we discussed in this section, but note that we are not always guaranteed a charge assignment which
Figure 5: Chains before and after adding flavons charged under a \(\mathrm{U}(1)_{B-3L_{e}}\) symmetry to generate wrinkles. The horizontal charges correspond to Table 1 with \(q_{0}=0\), \(l_{0}=2\), and \(Y=1\). We observe that the new \(\mathrm{U}(1)_{B-3L_{e}}\) symmetry and its flavons modify the prediction of the model for some of the leptoquark couplings in the IR.
makes all of the couplings invariant. Second, we note that, as in the previous case, other modifications, such as using discrete Abelian symmetries, behave similarly. However, we cannot replace these Abelian symmetries with non-Abelian ones [12], because we are charging the SM fermions under the new symmetry. This is in contrast to the previous case, where only internal fermions are charged under new non-Abelian symmetries.
While we have provided two different ways in which wrinkles could be generated, we have not exhausted the possibilities. These are only examples, and there are undoubtedly many more options for generating wrinkles, which would be interesting for future work. Since the details of a particular model are not the central point of this paper, we now move to discussing a full example in the IR.
## 4 Dessert: \(B\to K\bar{\nu}\nu\) in a Wrinkled Setup
To demonstrate the ideas of the previous sections with a specific example, in this section we study the phenomenology of the \(S_{1}\) leptoquark introduced in Eq. (8) with particular flavor Ansatze in detail. Such Ansatze correlate the contribution of \(S_{1}\) to different observables. As mentioned in the previous section, the inclusion of wrinkles in a FN Ansatz can change the relative sizes of predictions for different flavor observables. This could allow a model to accommodate a significant excess over the SM in one observable, while suppressing other observables that would otherwise be too constraining.7
Footnote 7: Signals of leptoquarks in all flavor experiments can also be suppressed by choosing \(q_{0}\) and \(l_{0}\) (defined in Table 1) such that the quarks’ and leptons’ charges are very far apart, but this requires an unnaturally large separation of charges. This choice also does not permit the explanation of any discrepancies in flavor experiments because it suppresses leptoquark contribution to all observables.
As an illustration, we will focus on constructing a model that can give rise to a large signal in the semi-leptonic decay \(B^{+}\to K^{+}\bar{\nu}\nu\). \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\) is an interesting test case for several reasons. Assuming the vanilla FN Ansatz, the mass range preferred for new physics near the current experimental sensitivity is in the few TeV range, and small hints of flavorful new physics may have already been detected [37]. Like all flavor-changing neutral currents, the \(b\to s\bar{\nu}\nu\) transition is greatly suppressed in the SM. It is also relatively clean theoretically, with the uncertainties in the hadronic form factors and from perturbative effects well under control [59, 60, 61, 62, 63, 64, 65, 66, 67]. This situation, along with the prospect of observing the decay at the Belle II experiment in the near future, makes it an intriguing probe of BSM physics [68, 69, 70]. We use this specific observable as a testbed for the various ideas introduced in the previous section; similar studies can be carried out for any other flavorful anomalies that may emerge in experimental data.
### 4.1 \(B\to K\bar{\nu}\nu\) in the SM and Beyond
In order to understand how various FN Ansatze contribute to \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\), we first need to discuss the SM and leptoquark contributions, as well as experimental bounds. Typically, \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\) is parameterized in terms of the Wilson coefficients \(C_{R}^{ij}\) and \(C_{L}^{ij}\), which are defined implicitly in the effective Hamiltonian governing \(b\to s\bar{\nu}\nu\) transitions
\[\mathcal{H}_{\text{eff}}=-\frac{4G_{F}}{\sqrt{2}}V_{tb}V_{ts}^{*}\big{(}C_{L}^ {ij}\mathcal{O}_{L}^{ij}+C_{R}^{ij}\mathcal{O}_{R}^{ij}\big{)}+\text{h.c.} \tag{19}\]
where
\[\mathcal{O}_{L}^{ij}=\frac{\alpha_{\text{em}}}{2\pi}\big{(}s_{L}^{\dagger} \bar{\sigma}^{\mu}b_{L}\big{)}\big{(}\nu_{j}^{\dagger}\bar{\sigma}_{\mu}\nu_{ i}\big{)},\qquad\mathcal{O}_{R}^{ij}=\frac{\alpha_{\text{em}}}{2\pi}\big{(}s_{R}^{ \dagger}\sigma^{\mu}b_{R}\big{)}\big{(}\nu_{j}^{\dagger}\bar{\sigma}_{\mu} \nu_{i}\big{)}, \tag{20}\]
and \(i,j=e,\mu,\tau\) are neutrino flavor indices.
In the SM (and in the \(S_{1}\) leptoquark model we consider below), only \(C_{L}\) is non-zero. The leading contribution to the SM value of the Wilson coefficient arises from diagrams such as those in Figure 6. Also including NLO QCD corrections [59, 60, 61] and two-loop electroweak contributions [65], the SM Wilson coefficient is
\[C_{L}^{ij,\,\text{SM}}=\left(-6.353\pm 0.074\right)\delta_{ij}, \tag{21}\]
where \(\delta_{ij}\) captures the fact that the SM contributions are lepton flavor conserving. This leads to a prediction for the branching ratio [69, 70],
\[\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\Big{|}_{\text{SM}}=\left(0.46\pm 0.05\right)\times 10^{-5}, \tag{22}\]
where the branching ratio is summed over neutrino flavors.
This process has been searched for at Belle and BaBar by tagging the second \(B\) meson in either a hadronic or semileptonic decay [71, 72, 73]. Similar searches exist for \(\text{BR}(B\to K^{*}\bar{\nu}\nu)\), e.g. see Refs. [71, 72]. Each of these channels leads to the same qualitative conclusions; thus,
Figure 6: Example Feynman diagrams leading to \(b\to s\bar{\nu}\nu\) transitions in the SM extended with an \(S_{1}\) leptoquark. The left (center) diagram shows the leading one-loop SM contribution with the penguin (box) topology, while the right diagram illustrates the tree-level leptoquark contribution. The \(Z\) in the left diagram could also connect to the top line instead.
for the rest of this work we will focus on BR (\(B^{+}\to K^{+}\bar{\nu}\nu\)) measurements, for simplicity. A combination of these results yields a 90% C.L. upper limit on the branching ratio of
\[\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)<1.6\times 10^{-5}. \tag{23}\]
Recently, Belle II has searched for the same decay using an inclusive tagging technique, which allows them to partially compensate for their smaller dataset and larger backgrounds [37]. Though not yet statistically significant, a combination of these results (assuming their uncertainties are uncorrelated) leads to a best fit value of
\[\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)=(1.1\pm 0.4)\times 10^{-5}, \tag{24}\]
which leaves room for a BSM contribution on top of the SM prediction in Eq. (22). The uncertainties in all of these estimates--both the tagged and inclusive searches--are predominantly statistical, and are expected to improve and become comparable to the theoretical uncertainty in Eq. (22) with the forthcoming full Belle II dataset [74]. Therefore, while it remains to be seen if any signals of new physics exist in this channel, it provides an interesting application of our wrinkled FN setup.
The \(S_{1}\) leptoquark contributes to \(b\to s\bar{\nu}\nu\) transitions via the tree-level diagram shown on the right in Figure 6. It generates a Wilson coefficient
\[C_{L}^{ij}\propto\frac{v^{2}}{m_{S_{1}}^{2}}\Delta_{QL}^{3i}\Delta_{QL}^{2j\, *} \tag{25}\]
for the effective theory of Eq. (19). Since this is the same operator as generated in the SM, it is convenient to capture these effects by considering the ratio:
\[R_{K}^{\nu\nu}\equiv\frac{\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)}{ \text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\big{|}_{\text{SM}}}. \tag{26}\]
The contribution from \(S_{1}\) is given by [75, 76] (see also Refs. [69, 35])
\[R_{K}^{\nu\nu}=1-y\,\text{Re}\left[\frac{(\Delta_{QL}^{3i}\Delta_{QL}^{2i\,*} )}{V_{tb}V_{ts}^{*}}\right]+\frac{3y^{2}}{4}\frac{(\Delta_{QL}^{3i}\Delta_{QL} ^{3i\,*})(\Delta_{QL}^{2j}\Delta_{QL}^{2j\,*})}{\big{|}V_{tb}V_{ts}^{*}\big{|} ^{2}}, \tag{27}\]
with a sum over repeated lepton indices in each term, and
\[y\equiv-\frac{2\pi v^{2}}{6C_{L}^{\text{SM}}\alpha_{\text{em}}m_{S_{1}}^{2}} \simeq\left(\frac{1.2\,\text{TeV}}{m_{S_{1}}}\right)^{2}. \tag{28}\]
In terms of \(R_{K}^{\nu\nu}\), the 90% C.L. limit and 68% C.L. preferred values of the branching ratio
in Eqs. (23) and (24) translate to
\[R_{K}^{\nu\nu}<3.4,\qquad R_{K}^{\nu\nu}\in[1.5,3.3], \tag{29}\]
respectively. The interpretation of these bounds in the context of the leptoquark depends on the assumptions made about the hierarchies in \(\Delta_{QL}^{ij}\), to which we now turn.
### Constraints with Different Flavor Ansatze
In addition to the \(b\to s\bar{\nu}\nu\) transitions discussed above, the \(S_{1}\) leptoquark can contribute to a number of flavor-changing processes or precision observables that are constrained by experiments. These include electric and magnetic dipole moments of SM particles, LFV decays, leptonic and semi-leptonic meson decays, flavor-violating decays of gauge bosons, and neutral meson mixing. Some of the most powerful observables, and their dependence on the leptoquark Yukawa couplings, are summarized in Table 2.8 As is apparent from the table,
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Observable & \(S_{1}\) Yukawa Couplings & Experimental Result & Future Bounds \\ \hline BR(\(B^{+}\to K^{+}\bar{\nu}\nu\)) & \(\Delta_{QL}^{3i}\times(\Delta_{QL}^{2j})^{*}\) & \((1.1\pm 0.4)\times 10^{-5}\)[37] & - \\ \hline \hline electron EDM & \((V^{*}\Delta_{QL})^{31}\times(\Delta_{\bar{u}\bar{e}}^{31})^{*}\) & \(<4.1\times 10^{-30}\)\(e\,\)cm [77] & \(<10^{-31}\)\(e\) cm [78, 79] \\ \hline BR(\(\mu\to e\gamma\)) & \(\begin{array}{c}(V^{*}\Delta_{QL})^{32}\times\Delta_{\bar{u}\bar{e}}^{31}\\ \Delta_{\bar{u}\bar{e}}^{32\,*}\times(V^{*}\Delta_{QL})^{31\,*}\end{array}\) & \(<4.2\times 10^{-13}\)[80] & \(<6\times 10^{-14}\)[81] \\ \hline CR(\(\mu\to e\))\({}_{N}\) & \((V^{*}\Delta_{QL})^{11\,*}\times(V^{*}\Delta_{QL})^{12}\) & \(<7.0\times 10^{-13}\)[82] & \(<2.5\times 10^{-18}\)[83, 84] \\ \hline BR(\(\tau\to\mu\gamma\)) & \(\begin{array}{c}(V^{*}\Delta_{QL})^{33}\times\Delta_{\bar{u}\bar{e}}^{32}\\ \Delta_{\bar{u}\bar{e}}^{33\,*}\times(V^{*}\Delta_{QL})^{32\,*}\end{array}\) & \(<4.2\times 10^{-8}\)[85] & \(<6.9\times 10^{-9}\)[86, 87] \\ \hline BR(\(K^{+}\to\pi^{+}\bar{\nu}\nu\)) & \(\Delta_{QL}^{2k}\times(\Delta_{QL}^{1k})^{*}\) & \(<1.88\times 10^{-10}\)[88] & \((8.4\pm 0.4)\times 10^{-11}\)[89] \\ \hline \(\Delta m_{B_{s}}\) & \((\Delta_{QL}\,\Delta_{QL}^{\dagger})^{32}\) & \(\Delta C_{B_{s}}\leq 0.09\)[90] & \(\Delta C_{B_{s}}\leq 0.026\)[90] \\ \hline \hline \end{tabular}
\end{table}
Table 2: Here we show the experimental results for BR (\(B^{+}\to K^{+}\bar{\nu}\nu\)) and a few other constraining observables; we also show the predominant \(S_{1}\) Yukawa couplings contributing to each. Note that for \(B\)-mixing, we use the experimental uncertainty on the quantity \(C_{B_{s}}\) as defined in Eq. (33). For \(K^{+}\to\pi^{+}\nu\bar{\nu}\), the future bound corresponds to reaching a 5% experimental uncertainty on the SM branching ratio [91]. The muon to electron conversion rate in nuclei, CR(\(\mu\to e\))\({}_{N}\), gets contributions from both dipole and four-fermion operators; we show the Yukawas entering the four-fermion operator that is dominant in the FN Ansatz (associated with a left-handed vector current) here, while the complete set is given in Appendix B. The current (future) bound listed for it is on the conversion rate in a gold (aluminum) nucleus.
the observables depend on numerous different combinations of the leptoquark couplings. More details about the observables, including the dependence on the leptoquark couplings and references to more complete treatments in the literature, are given in Appendix B.
Because the contributions to various observables are correlated, we need to pick a particular Ansatz and study it in order to understand these constraints. In the rest of this section, we study these constraints in the context of three different flavor Ansatze: flavor anarchy, vanilla FN, and FN with wrinkles. In particular, we explore how adding wrinkles can alleviate constraints while maintaining consistency with \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\) measurements.
Absent any assumption about the underlying flavor structure, the minimal choice is that all elements of \(\Delta_{QL}\) and \(\Delta_{\bar{u}\bar{e}}\) are \(\mathcal{O}(1)\). This assumption is commonly referred to as "flavor anarchy". Under this assumption, the mass of the leptoquark consistent with the \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\) measurements is \(m_{S_{1}}\in\left(9,\ 18\right)\) TeV. On the other hand, measurements of the electron EDM and other flavor-changing processes constrain the mass of the leptoquark to be above \(\sim 10^{5}\text{ TeV}\). The resulting limits for some of the observables considered are shown as yellow bars in Figure 7. To calculate these ranges for observables that are already measured experimentally, we demand that the leptoquark contribution be within one standard deviation of the measured value, while for others we use the reported upper bounds from Ref. [92].9 For the electron EDM, a CP-odd observable, we assume a purely imaginary coupling to show the maximum reach of the experimental results.
Footnote 9: The exceptions to this are \(R_{D}\) and \(a_{\mu}\), where we take the maximum leptoquark mass consistent to within \(3\sigma\) and \(4\sigma\), respectively, of the experimental measurement for the anarchic coupling case, and use the \(2\sigma\) ellipse for the preferred mass range in the wrinkled case.
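To make the anarchic estimate above concrete, the following is a minimal numerical sketch of Eqs. (26)-(28). It evaluates \(R_{K}^{\nu\nu}\) with all entries of \(\Delta_{QL}\) set to one (with an assumed common sign such that the interference term adds constructively) and approximate CKM inputs; the function names, inputs, and mass scan are ours and only illustrate why the preferred window sits around 10 TeV, up to \(\mathcal{O}(1)\) factors.

```python
# Minimal sketch of Eqs. (26)-(28): R_K^{nunu} for the S1 leptoquark with anarchic,
# O(1) couplings. Inputs are approximate and O(1) factors/signs can shift the result.
import numpy as np

v, alpha_em, CL_SM = 246.0, 1 / 137.0, -6.353   # Higgs vev [GeV], alpha_em, Eq. (21)
Vtb, Vts = 0.999, -0.0405                       # approximate CKM entries
Delta_QL = np.ones((3, 3))                      # anarchic Ansatz: all entries O(1)

def RK_nunu(m_S1):
    """Eq. (27), with y from Eq. (28); m_S1 in GeV."""
    y = -2 * np.pi * v**2 / (6 * CL_SM * alpha_em * m_S1**2)
    vckm = Vtb * np.conj(Vts)
    interf = np.real(np.sum(Delta_QL[2, :] * np.conj(Delta_QL[1, :])) / vckm)
    quad = (np.sum(np.abs(Delta_QL[2, :])**2) *
            np.sum(np.abs(Delta_QL[1, :])**2) / abs(vckm)**2)
    return 1 - y * interf + 0.75 * y**2 * quad

# Scan for the masses giving R_K^{nunu} in the preferred range of Eq. (29):
masses = np.linspace(5e3, 25e3, 400)
window = [m for m in masses if 1.5 <= RK_nunu(m) <= 3.3]
print(f"R_K in [1.5, 3.3] for m_S1 ~ {min(window)/1e3:.0f} - {max(window)/1e3:.0f} TeV")
```

The printed window is in rough agreement with the \((9,\,18)\) TeV range quoted above.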
It is clear that without any flavor texture on the leptoquark Yukawas, observables such as the electron EDM, LFV decays, or meson-mixing parameters rule out the leptoquark mass range relevant for \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\).10 We have also checked the contribution of our setup to many other similar observables (electron and tau MDM, \(\tau\to e\gamma\), \(K\to e\nu\), various other \(D\) meson decays, \(D_{s}\to e\nu\), \(B\to e\nu\), \(\pi\to ee\), \(\pi\to\mu e\)), but find that the constraints they place are not as competitive for our model.
Footnote 10: Our model can also contribute to \(a_{\mu}\) at one loop to explain the observed anomaly [93, 94], although recent lattice calculations [95, 96, 97, 98, 99, 100, 101] and measurements [102] hint toward a smaller discrepancy with the experimental data. However, other observables already rule out the leptoquark mass range that has a large enough contribution to \(a_{\mu}\). See Refs. [103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115] for other solutions to this anomaly, including attempts at embedding the solution in a FN construction.
Thus we are led to consider embedding the \(S_{1}\) leptoquark in a FN model of flavor. This has the benefit of not only alleviating some of the experimental constraints discussed above, but also relating it to the SM flavor puzzle.
As discussed in SS2, aside from the general shifts in the lepton and quark horizontal charges, there are only a handful of possible charge assignments that give rise to the correct pattern of SM masses and mixing angles. For concreteness, we choose horizontal charges
from Table 1 with \(q_{0}=0\), \(l_{0}=-1\), and \(X=-Y=-1\). This yields
\[\begin{split}([Q_{1}],\,[Q_{2}],\,[Q_{3}])=(3,2,0),\qquad([\bar{u}_{ 1}],\,[\bar{u}_{2}],\,[\bar{u}_{3}])=(4,1,0),\qquad([\bar{d}_{1}],\,[\bar{d}_{2}],\,[\bar{d}_{3}])=(3,3,2),\\ ([L_{1}],\,[L_{2}],\,[L_{3}])=(0,-1,-1),\qquad([\bar{e}_{1}],\,[ \bar{e}_{2}],\,[\bar{e}_{3}])=(8,6,4).\end{split} \tag{30}\]
With these charge assignments, the FN Ansatz for the leptoquark couplings is:
\[\Delta_{QL}\ \sim\ \begin{pmatrix}\lambda^{3}&\lambda^{2}&\lambda^{2}\\ \lambda^{2}&\lambda&\lambda\\ 1&\lambda&\lambda\end{pmatrix},\qquad\Delta_{\bar{u}\bar{e}}\ \sim\ \begin{pmatrix}\lambda^{12}&\lambda^{10}&\lambda^{8}\\ \lambda^{9}&\lambda^{7}&\lambda^{5}\\ \lambda^{8}&\lambda^{6}&\lambda^{4}\end{pmatrix}. \tag{31}\]
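The textures in Eq. (31) can be generated mechanically from the charges in Eq. (30). The short sketch below does this, assuming (consistently with the powers displayed above) that \(S_{1}\) carries no net horizontal charge, so that each entry scales as \(\lambda^{|q_{i}+q^{\prime}_{j}|}\); the absolute value accounts for charge sums that are negative (e.g. \([Q_{3}]+[L_{2}]=-1\)). The \(\mathcal{O}(1)\) prefactors are dropped and the helper names are ours.

```python
# Minimal sketch: FN power counting for the leptoquark textures of Eq. (31),
# generated from the horizontal charges of Eq. (30).
import numpy as np

lam = 0.22
Q  = np.array([3, 2, 0])        # [Q_i]
ub = np.array([4, 1, 0])        # [ubar_i]
L  = np.array([0, -1, -1])      # [L_j]
eb = np.array([8, 6, 4])        # [ebar_j]

def fn_powers(left, right):
    """Exponent of lambda for each entry: |q_i + q'_j|."""
    return np.abs(left[:, None] + right[None, :])

print("Delta_QL ~ lambda^n, n =\n", fn_powers(Q, L))     # reproduces Eq. (31), left block
print("Delta_ue ~ lambda^n, n =\n", fn_powers(ub, eb))   # reproduces Eq. (31), right block
print("Delta_QL numerically:\n", np.round(lam ** fn_powers(Q, L).astype(float), 4))
```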
The resulting bounds, neglecting \(\mathcal{O}(1)\) Yukawa factors, are shown as the green bars in Figure 7. Compared to the anarchic Ansatz, the bounds on the leptoquark mass are significantly relaxed.
Nevertheless, it is clear that the mass range consistent with the BR (\(B^{+}\to K^{+}\bar{\nu}\nu\))

Figure 7: The leptoquark mass range probed by various observables if the Yukawa couplings of the leptoquark are either \(\mathcal{O}(1)\) (yellow), follow the vanilla FN setup in Eq. (31) (green), or follow the same FN setup plus the wrinkles from Eq. (32) (blue). The preferred ranges for explaining some existing anomalies are shown in red, assuming the wrinkled setup. The undetermined \(\mathcal{O}(1)\) factors in the Yukawas (folded in \(r_{ij}\) in Eq. (1)) can further affect the leptoquark contribution and slightly change the mass range probed by each observable. We see that in our wrinkled setup, the mass range that explains the current discrepancy in the BR (\(B^{+}\to K^{+}\bar{\nu}\nu\)) measurement (between the horizontal dashed lines) can also be probed by the LFV processes \(\mu\to e\gamma\) and CR(\(\mu\to e\)), and the electron EDM in near-future measurements.

measurements at Belle II is still excluded by other observables under the FN Ansatz. We have checked that--while the exact bounds for different observables can change significantly--this conclusion remains unchanged for the other possible charge assignments enumerated in Table 1. If any deviation from the SM is observed in \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\), the \(S_{1}\) leptoquark embedded in a vanilla FN model cannot explain the anomaly while respecting bounds from other measurements.
Adding wrinkles to the FN Ansatz as discussed in SS3 can ameliorate the tension with these observables. Using the scaling of the observables with the leptoquark Yukawas shown in Table 2 as a guide, we add the following wrinkles (as defined in Eq. (11)) to the leptoquark Yukawa matrices:
\[W^{ij}_{\bar{u}\bar{e}}=\lambda^{3},\qquad W_{QL}=\begin{pmatrix}\lambda^{3}& \lambda^{3}&\lambda^{3}\\ \lambda^{3}&1&1\\ \lambda^{3}&1&1\end{pmatrix}. \tag{32}\]
This is the largest number of wrinkles we can add to suppress the leptoquark contribution to the most constraining observables (especially electron EDM, \(\mu\to e\gamma\), \(\tau\to\mu\gamma\), and meson mixing observables), while retaining consistency with the naive constraint \(\omega\gtrsim\lambda^{3}\sim 1/16\pi^{2}\) from SS3.2 and leaving the contribution to \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\) mostly intact. Further suppression with additional powers of \(\lambda\) may be possible, but must be carefully checked with all of the consistency conditions in Appendix A.
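For concreteness, the sketch below applies the wrinkles of Eq. (32) elementwise to the vanilla \(\Delta_{QL}\) texture (as in Eq. (11)) and compares the smallest wrinkle to the loop factor \(1/16\pi^{2}\) quoted above. The inputs simply restate Eqs. (31)-(32) with \(\mathcal{O}(1)\) factors dropped; it is illustrative only.

```python
# Minimal sketch: wrinkled Delta_QL texture, Eqs. (31)-(32), and the naive bound
# omega >~ lambda^3 ~ 1/(16 pi^2) on the size of a wrinkle.
import numpy as np

lam = 0.22
Delta_QL = np.array([[lam**3, lam**2, lam**2],      # vanilla texture, Eq. (31)
                     [lam**2, lam,    lam   ],
                     [1.0,    lam,    lam   ]])
W_QL = np.array([[lam**3, lam**3, lam**3],          # wrinkles, Eq. (32)
                 [lam**3, 1.0,    1.0   ],
                 [lam**3, 1.0,    1.0   ]])

Delta_QL_wrinkled = W_QL * Delta_QL
powers = np.round(np.log(Delta_QL_wrinkled) / np.log(lam)).astype(int)
print("wrinkled Delta_QL ~ lambda^n, n =\n", powers)
print("smallest wrinkle lambda^3 =", round(lam**3, 4),
      " vs 1/(16 pi^2) =", round(1 / (16 * np.pi**2), 4))
```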
It is worth emphasizing that it is not obvious how to get the pattern of wrinkles in Eq. (32) from the example UV completions discussed in SS3.3. Nevertheless, we can treat them consistently in an effective field theory approach, and leave the model-building to future work. Note also that with the additional suppression of the right-handed Yukawa couplings, the phenomenology of this model resembles that of the RPV down squark as discussed in SS2.2.
The contribution of this wrinkled FN setup to various observables is shown by blue bars in Figure 7. We find that the set of wrinkles from Eq. (32) sufficiently suppresses the contribution to other observables, so that they are all compatible with the mass range of interest for \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\). In particular, bounds from meson mixing observables and leptonic meson decays are circumvented. Within this wrinkled setup, the viable leptoquark mass range that can account for a signal in \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\) is slightly above the current direct search bounds at the LHC (see Refs. [116, 117]) and could be detected in future searches at the LHC or future hadron [118, 119, 120, 121] or lepton [122, 123, 124, 125, 126, 127] colliders.
There are several observables that probe a mass range similar to that of \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\) and that will see significant experimental improvement soon. In particular, these observables include \(\mu\to e\gamma\), \(\text{CR}(\mu\to e)\), and the electron EDM, though the precise mass range depends on the \(\mathcal{O}(1)\) Yukawa couplings in the UV completion. As a result, they could be the smoking-gun signal of an FN-like \(S_{1}\) leptoquark solution to any future excess observed in \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\). Since the experimental precision on these (and several other)
observables is expected to improve significantly in the near future, we will dedicate the next subsection to discussing potential discovery prospects for this wrinkled FN scenario.
### Predictions for Future Measurements
We have already seen that adding wrinkles to a FN Ansatz allows for greater flexibility in simultaneously accommodating experimental deviations from the SM while satisfying constraints from other observables and explaining the observed pattern of SM masses and mixing angles. As we will now emphasize, despite this added flexibility, these choices still make concrete predictions for other observables, which can be tested in future experiments. These tests are important because they probe indirect information about the underlying UV model that is encoded in the charge assignments and wrinkles in the IR.
Several upcoming experiments will provide concrete tests of our wrinkled Ansatz. When assuming the wrinkled FN Ansatz from Eq. (32) for the leptoquark Yukawa couplings, several classes of observables-- including LFV processes, the electron EDM, meson-mixing measurements, and the decay \(K\to\mu\nu\)--have a present sensitivity to roughly the same mass scale as \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\). Moreover, the mass reach of many of these observables is expected to improve significantly with forthcoming experimental data. Since we have suppressed our model contribution to these observables as far as possible while satisfying the bound from the consistency condition in Eq. (16), these correlated signals allow for a definitive test of these types of wrinkled models within the FN mechanism.
At the moment, the strongest bound on LFV processes involving muons is the 90% C.L. limit \(\text{BR}(\mu\to e\gamma)<4.2\times 10^{-13}\) set by the MEG experiment [80]. In the future, however, the most powerful probes of this model will come from searches for \(\mu\to e\) conversion in atomic nuclei. As discussed in more detail in Appendix B, the conversion rate depends not only on the dipole operator relevant for \(\mu\to e\gamma\) and \(\mu\to 3e\) decays, but also on four-fermion operators involving first-generation quarks that are generated by integrating out the leptoquark. Future prospects for detecting \(\mu\to e\) conversion include the COMET experiment, which will set a limit on the conversion rate of \(7\times 10^{-15}\) (\(2.6\times 10^{-17}\)) in Phase-I (Phase-II) [128, 129], and Mu2e, which aims at a final sensitivity of \(2.5\times 10^{-18}\)[83, 84]11, both in aluminum nuclei. For more discussion on current and forthcoming searches for LFV, see Refs. [10, 130, 131, 132, 133, 134].
Footnote 11: This sensitivity might be achievable at Mu2e-II, a proposed upgrade of Mu2e using the PIP-II accelerator at Fermilab, potentially with a target material other than aluminum [130, 131].
In the top left panel of Figure 8, we show the predicted \(\mu\to e\) conversion rate in aluminum nuclei as a function of the leptoquark mass, with the wrinkled FN Ansatz taken for the Yukawa couplings. The \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\)-preferred region discussed in SS4.2 is highlighted in red, while the dashed horizontal lines show the future sensitivities for the conversion rate. We see that even Phase-I of the COMET experiment will be sensitive to the mass range preferred by \(B^{+}\to K^{+}\bar{\nu}\nu\) measurements, while Mu2e will decisively test all of the relevant
parameter space predicted by this model of flavor.
For the electron EDM, the bounds from the ACME II and JILA experiments [77, 135] are at the level \(d_{e}<1.1\times 10^{-29}\) and \(4.1\times 10^{-30}\,e\) cm, respectively. For anarchic flavor couplings, this excludes masses up to \(\sim 10^{5}\,\text{TeV}\). A vanilla FN Ansatz relaxes this constraint to \(\sim 10^{2}\,\text{TeV}\), and with the additional wrinkles invoked in Eq. (32), this bound weakens to \(m_{S_{1}}\gtrsim 2.7\,\text{TeV}\). Random factors of \(\mathcal{O}(1)\), neglected throughout our calculations, can slightly affect the reach on \(m_{S_{1}}\). The fact that this is the same mass range as favored by \(\text{BR}\,(B^{+}\to K^{+}\bar{\nu}\nu)\) measurements, and that the reach in \(m_{S_{1}}\) scales faster with improvements to electron EDM measurements compared to other observables, underscores the importance of future electron EDM experiments in probing our model. In the coming years, experimental advances and new technologies promise to increase the sensitivity of EDM experiments by an order of magnitude or more [78, 79, 136]. In the lower-left panel of Figure 8, we show the predicted value of the electron EDM as a function of the leptoquark mass, alongside current
Figure 8: Predictions for the \(S_{1}\) leptoquark contributions to precision observables with the wrinkled (blue, solid) and vanilla (green, dashed) FN Ansätze described in §4.2. We show the \(\mu\to e\) conversion rate in an aluminum nucleus (top left), the electron EDM (bottom left), the relative new physics contribution to \(\Delta m_{B_{s}}\) (top right), and \(\text{BR}(\tau\to\mu\gamma)\) (bottom right), using solid (dashed) lines for current (future) experimental bounds or sensitivity. We do not show the best current bounds on \(\mu\to e\) conversion rate, \(<7\times 10^{-13}\), from SINDRUM II [82] since it was made with a different nucleus (gold). The red band indicates the mass range of interest for \(\text{BR}\,(B^{+}\to K^{+}\bar{\nu}\nu)\), as in Figure 7.
bounds and a projected constraint of \(10^{-31}\,e\) cm, assuming an \({\cal O}(1)\) CP-violating phase. As is clear from the figure, future EDM experiments will decisively test this model, up to scales \(m_{S_{1}}\sim{\cal O}(10)\,\text{TeV}\).
For the meson mixing observables, we focus in particular on the neutral \(B_{s}\) meson mass difference, \(\Delta m_{B_{s}}\), whose matrix element is directly related to the \(B^{+}\to K^{+}\bar{\nu}\nu\) process for the \(S_{1}\) leptoquark. To understand the current sensitivity to new physics of \(B_{s}-\bar{B}_{s}\) mixing, we follow the UTFit analysis [137, 90, 138] and compute the quantity \(C_{B_{s}}\), defined as
\[C_{B_{s}}e^{2i\phi_{B_{s}}}\equiv\frac{\langle B_{s}|{\cal H}^{\text{SM+NP}}_{ \text{mix}}|\bar{B}_{s}\rangle}{\langle B_{s}|{\cal H}^{\text{SM}}_{\text{mix} }|\bar{B}_{s}\rangle}, \tag{33}\]
where \({\cal H}_{\text{mix}}\) includes the four-fermion operators responsible for \(\Delta F=2\) transitions, as defined in Appendix B.6. The SM is defined as the point \(C_{B_{s}}=1\), \(\phi_{B_{s}}=0\), and the allowed size of the new physics contribution is determined by a global fit to the flavor sector, with the range determined primarily by the uncertainties on the input parameters, such as the CKM matrix elements. To be conservative, we consider only the absolute value of the matrix elements above, and avoid making any assumptions about the relative phase between the SM and leptoquark contributions, which is constrained by \(\phi_{B_{s}}\).
The resulting current and future sensitivities (where we assume the current central value is at the SM, for consistency with future projections) are shown on the top right in Figure 8. The projected future sensitivity of \(\Delta C_{B_{s}}=0.026\) is taken from Ref. [90], based on projections of HL-LHC results and Belle II results with \(50\,\text{ab}^{-1}\) integrated luminosity. We see that the improved sensitivity will start to probe the leptoquark mass range preferred by the \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\) measurements. It is also worth emphasizing that these projections do not account for potential improvements in lattice inputs, and thus could be quite conservative. A statistically significant signal in any of the aforementioned channels would also warrant a much more careful analysis of these \(B_{s}\)-mixing constraints and projections, including phase information that depends in more detail on the flavor Ansatz, which could improve sensitivity even further.
A number of additional flavor-changing or flavor-violating decays will be probed with increasing sensitivity at Belle II. A notable example is the LFV decay \(\tau\to\mu\gamma\), for which the current bound set by Belle is \(\text{BR}(\tau\to\mu\gamma)<4.2\times 10^{-8}\)[85]. Belle II is projected to improve this bound to \(6.9\times 10^{-9}\)[86, 87]. In the lower-right panel of Figure 8, we show the predicted branching ratio of \(\tau\to\mu\gamma\) as a function of mass. We see that, for the mass range preferred by the \(\text{BR}\left(B^{+}\to K^{+}\bar{\nu}\nu\right)\) measurements, the addition of wrinkles in our flavor Ansatz suppresses what would otherwise be a predicted signal from assuming the FN mechanism.
Finally, the \(K\to\pi\nu\bar{\nu}\) decays, which would rule out the preferred mass range for \(B^{+}\to K^{+}\bar{\nu}\nu\) without wrinkles, have a sensitivity \(\sim 1\,\text{TeV}\) in the wrinkled FN Ansatz.
The \(K^{+}\to\pi^{+}\nu\bar{\nu}\) decay was only recently measured (with a significance of \(3.4\sigma\)) at the NA62 experiment [88]. A 10 - 20% precision on this branching ratio is necessary to start excluding \(m_{S_{1}}\sim 2\) - 3 TeV, and the requirement for \(K_{L}\to\pi^{0}\nu\bar{\nu}\) is similar. Both of these may be achievable with future runs at NA62, or at future experiments planned at the NA62 hall at CERN [89, 139] and at J-PARC [140], and would be an interesting complementary probe of the same physics considered here.
The preceding discussion demonstrates that all of these powerful, forthcoming measurements could have a similar sensitivity to new mass scales for an appropriate choice of wrinkles. Exactly which search channel is ideal depends on the precise pattern of charges and wrinkles in the IR. However, the expectation that we will probe these other correlated signals is relatively robust since the wrinkles in Eq. (32) were chosen to saturate the bound in Eq. (16) without diminishing the \(B^{+}\to K^{+}\bar{\nu}\nu\) signal. While this enhancement to \(B^{+}\to K^{+}\bar{\nu}\nu\) was only for illustration, and not a fit to a true, significant deviation from the SM, it reveals that for some motivated UV models of flavor, upcoming experiments can simultaneously test explanations for the SM flavor puzzle.
## 5 Digestifs
When new physics is embedded in the FN mechanism, the FN Ansatz determines the size of both the SM and new physics couplings. In this paper, we have put forward a systematic extension of this Ansatz which can change the expected scaling of the new physics and SM couplings. These changes, referred to as wrinkles, deviate from the FN pattern that is dictated by the horizontal symmetry charges. Wrinkles allow us to demand consistency with other experimental measurements and searches: modifying the relative size of couplings restores some theories that would otherwise be infeasible due to the correlations between different observables from the FN Ansatz. Therefore, they vastly increase the FN mechanism's versatility in accommodating solutions to flavor anomalies. However, owing to radiative corrections, we have also argued that wrinkles cannot give rise to arbitrarily large deviations from vanilla FN predictions. There are consistency conditions which must be obeyed by the size of the new wrinkled Yukawas.
While the primary purpose of wrinkles is to give a consistent IR description for various flavor observables, we have also explored how they can be UV completed by various different models. Specifically, in this paper we have given some simple schematic examples of possible UV realizations. In future work, it would also be interesting to understand more about what patterns of wrinkles can be realistically realized in the UV and the various models that can be used to realize them.
Throughout this work, we focused on the phenomenological example of the \(S_{1}\) leptoquark. We discussed the implementation in the IR when the leptoquark is embedded in a FN model. We also provided a detailed example of how an enhancement of the leptoquark contribution to
\({\rm BR}\,(B^{+}\to K^{+}\bar{\nu}\nu)\) can consistently respect other experimental bounds, but only if wrinkles are invoked. This wrinkled setup also motivates future measurements, since several signals would be on the verge of discovery in this model, even when the number of wrinkles is enlarged to saturate the simplest consistency condition. In particular, we showed predictions for the most sensitive upcoming probes, namely \(\mu\to e\) conversion and the electron EDM.
While we limited our exploration to a specific example with the \(S_{1}\) leptoquark in this paper, it would be interesting to explore how wrinkles can be applied more broadly. For instance, in our example we fixed the horizontal charges of the SM particles, but there are many other possible choices that reliably yield the SM masses and mixing angles. One could explore how changing the charges affects the correlations and hence the allowed wrinkled Ansatz, and see which observables remain correlated to the same mass scale more generally. It would also be intriguing to include other flavor spurions or to add wrinkles to the SM couplings in addition to the new physics couplings. Moreover, it would be useful to do a broad methodical study on the effect of \({\cal O}(1)\) numbers in different spurions to explore naturalness in these types of models; see Refs. [43, 44, 47] for previous studies of naturalness in such models.
Aside from the flexibility permitted by wrinkles, it is worthwhile to emphasize a separate point about FN models in general: there is more than one charge assignment that can naturally generate the observed SM masses and mixings, beyond just the overall shift in the quark and lepton charges. In particular, we find that the charges of the first-generation fermions can be either larger or smaller than those of the other two generations. This is in contrast to a criterion in Ref. [11], where it was demanded that charges increase monotonically between generations. However, this general FN charge assignment is still not anomaly free and requires some cancellation mechanism, such as Green-Schwarz.
With a number of precision flavor experiments gathering data in the near future that could probe the underlying mechanisms for the flavor structure of the SM, it is the right moment to think about sophisticated UV flavor structures beyond the vanilla FN setup. Wrinkles--a systematic deviation from the vanilla FN prediction for the relationship between different couplings--are one such example that significantly increase the versatility of FN constructions in confronting potential signs of flavorful new physics. We encourage their use in embedding solutions to anomalous signals in UV complete models of flavor.
### Acknowledgments
We thank Daniel Aloni, Wolfgang Altmannshofer, Avital Dery, Darius Faroughy, Seth Koren, Graham Kribs, Clara Murgui, Matthew Reece, Matthew Strassler, and Lian-Tao Wang for helpful discussions. The work of PA is supported in part by the U.S. Department of Energy under Grant Number DE-SC0011640. AB, KF and SH are supported in part by the DOE grant DE-SC0013607. KF and SH are also supported in part by the Alfred P. Sloan
Foundation Grant No. G-2019-12504, and KF is also supported in part by the NASA ATP Grant NNX16AI12G. The work of AP is supported in part by the US National Science Foundation Grant PHY2210533 and the Simons Foundation Grant No. 623940. PA thanks Mainz Institute for Theoretical Physics (MITP) of the Cluster of Excellence PRISMA\({}^{+}\) (Project ID 39083149) and KF thanks the Aspen Center for Physics (which is supported by NSF grant PHY-2210452) for their hospitality during the completion of this work.
## A Full Set of Consistency Conditions
Here we list the full set of consistency conditions that arise for the Yukawa couplings of the \(S_{1}\) leptoquark model embedded in an FN setup. They arise from considering the representation of the Yukawas under the SM flavor symmetry group, \(G_{\text{flavor}}\) (see Eq. (6)), and constructing the other combinations of Yukawas that transform in the same way. Each combination produces a one-loop contribution via a diagram analogous to Figure 3. The representations of \(Y_{Q\bar{u}}\), \(Y_{Q\bar{d}}\), \(Y_{L\bar{e}}\), \(\Delta_{QL}\), and \(\Delta_{\bar{u}\bar{e}}\) are listed in Eqs. (7) and (9). We find
\[\begin{split}\Big{|}\Delta^{ij}_{\bar{u}\bar{e}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}\Delta_{\bar{u}\bar{e}}\cdot\Delta^{\dagger}_{\bar{u}\bar{e}}\cdot\Delta_{\bar{u}\bar{e}}\big{)}^{ij}\Big{|},&\qquad\Big{|}\Delta^{ij}_{QL}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}\Delta_{QL}\cdot\Delta^{\dagger}_{QL}\cdot\Delta_{QL}\big{)}^{ij}\Big{|},\\ \Big{|}\Delta^{ij}_{\bar{u}\bar{e}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}Y_{Q\bar{u}}^{T}\cdot Y_{Q\bar{u}}^{*}\cdot\Delta_{\bar{u}\bar{e}}\big{)}^{ij}\Big{|},&\qquad\Big{|}\Delta^{ij}_{QL}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}Y_{Q\bar{d}}\cdot Y_{Q\bar{d}}^{\dagger}\cdot\Delta_{QL}\big{)}^{ij}\Big{|},\\ \Big{|}\Delta^{ij}_{\bar{u}\bar{e}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}Y_{Q\bar{u}}\cdot\Delta^{*}_{QL}\cdot Y_{L\bar{e}}\big{)}^{ij}\Big{|},&\qquad\Big{|}\Delta^{ij}_{QL}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}Y_{Q\bar{u}}\cdot Y_{Q\bar{u}}^{\dagger}\cdot\Delta_{QL}\big{)}^{ij}\Big{|},\\ \Big{|}Y^{ij}_{Q\bar{d}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}\Delta_{QL}\cdot\Delta^{\dagger}_{QL}\cdot Y_{Q\bar{d}}\big{)}^{ij}\Big{|},&\qquad\Big{|}Y^{ij}_{Q\bar{u}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}Y_{Q\bar{u}}\cdot Y_{Q\bar{u}}^{\dagger}\cdot Y_{Q\bar{u}}\big{)}^{ij}\Big{|},\\ \Big{|}Y^{ij}_{Q\bar{u}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}\Delta_{QL}\cdot\Delta^{\dagger}_{QL}\cdot Y_{Q\bar{u}}\big{)}^{ij}\Big{|},&\qquad\Big{|}Y^{ij}_{L\bar{e}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}Y_{L\bar{e}}\cdot Y_{L\bar{e}}^{\dagger}\cdot Y_{L\bar{e}}\big{)}^{ij}\Big{|},\\ \Big{|}Y^{ij}_{Q\bar{u}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}Y_{Q\bar{u}}\cdot\Delta^{*}_{\bar{u}\bar{e}}\cdot\Delta^{T}_{\bar{u}\bar{e}}\big{)}^{ij}\Big{|},&\qquad\Big{|}Y^{ij}_{L\bar{e}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}\Delta^{T}_{QL}\cdot\Delta^{*}_{QL}\cdot Y_{L\bar{e}}\big{)}^{ij}\Big{|},\\ \Big{|}Y^{ij}_{Q\bar{u}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}Y_{Q\bar{d}}\cdot Y_{Q\bar{d}}^{\dagger}\cdot Y_{Q\bar{u}}\big{)}^{ij}\Big{|},&\qquad\Big{|}Y^{ij}_{L\bar{e}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}Y_{L\bar{e}}\cdot\Delta^{\dagger}_{\bar{u}\bar{e}}\cdot\Delta_{\bar{u}\bar{e}}\big{)}^{ij}\Big{|},\\ &\qquad\Big{|}Y^{ij}_{Q\bar{u}}\Big{|}\geq\frac{1}{16\pi^{2}}\Big{|}\big{(}\Delta_{QL}\cdot Y_{L\bar{e}}^{*}\cdot\Delta^{T}_{\bar{u}\bar{e}}\big{)}^{ij}\Big{|}.\end{split} \tag{34}\]
We could also consider additional consistency conditions with more Yukawa couplings on the right-hand side, but those will be sub-dominant to those listed above. The consistency conditions listed above are specific to the spurions we have considered, but this procedure generalizes to arbitrary new spurions under \(G_{\text{flavor}}\).
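As a cross-check, conditions of this type are straightforward to verify numerically for a given Ansatz. The sketch below codes only the two "cubic" conditions built purely from the leptoquark spurions, evaluated on the wrinkled textures of Eqs. (31)-(32); the conditions involving the SM Yukawas follow the same pattern. The \(\mathcal{O}(1)\) factors are dropped and the helper names are ours.

```python
# Minimal sketch of the self-consistency checks of Eq. (34): each spurion entry must
# exceed the one-loop combinations transforming the same way under G_flavor.
import numpy as np

LOOP = 1 / (16 * np.pi**2)
lam = 0.22

def fn(powers):
    """Texture matrix lambda^n from a matrix of exponents n."""
    return lam ** np.array(powers, dtype=float)

Delta_QL = fn([[6, 5, 5], [5, 1, 1], [3, 1, 1]])         # wrinkled Delta_QL
Delta_ue = fn([[15, 13, 11], [12, 10, 8], [11, 9, 7]])   # wrinkled Delta_ubar_ebar

def cubic_condition(X):
    """Check |X_ij| >= |(X X^dag X)_ij| / (16 pi^2), entry by entry."""
    return bool(np.all(np.abs(X) >= LOOP * np.abs(X @ X.conj().T @ X)))

print("Delta_QL cubic condition satisfied:", cubic_condition(Delta_QL))
print("Delta_ue cubic condition satisfied:", cubic_condition(Delta_ue))
```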
## B Calculation of Other Observables
In this appendix we review the contributions of the \(S_{1}\) leptoquark to various flavor observables. The emphasis is on the dependence on the flavor spurions, \(\Delta_{QL}\) and \(\Delta_{\bar{u}\bar{e}}\), with many details left to the references. In what follows, \(V\) is the CKM matrix and \(v\) is the SM Higgs vev. We use the CKM parameters as determined in Ref. [138] while the remainder of our inputs are taken from the PDG [92]. Furthermore, we work with a set of operators where the neutrinos are left in the flavor basis as the processes we consider have either a final state neutrino of a specific flavor, or a sum over all possible final state neutrinos, which can be done in any basis. Therefore, we do not include explicit factors of the PMNS matrix in the expressions for the Wilson coefficients. We assume the leptoquark Yukawas are given in the IR and neglect the running effects. These calculations are used in SS4 to identify the most relevant constraints and the wrinkles which are useful for evading them. We also employ mostly four-component spinor notation in this appendix for consistency with the majority of the references.
### Dipole Moments
First we calculate the contribution of \(S_{1}\) to the electric and magnetic dipole moments of SM particles. After integrating out the leptoquark, the one loop diagrams of Figure 9 can give rise to the effective operators
\[\mathcal{L}\supset c_{ij}^{R}\bar{f}_{i}\sigma^{\mu\nu}P_{R}f_{j}F_{\mu\nu}+ \text{h.c.}, \tag{35}\]
where \(f_{i,j}\) are SM fermions, \(F_{\mu\nu}\) is the electromagnetic field strength, and \(c_{ij}^{R}\) is the corresponding Wilson coefficient. By matching the diagrams in Figure 9 to this operator, we can calculate \(c_{ij}^{R}\) values in our setup. See Refs. [141, 142, 143, 144, 145, 43] for details of the calculation. Following the notation of Ref. [43], we have
\[c_{ij}^{R}=\sum_{\bar{q}}\frac{e}{64\pi^{2}m_{S_{1}}^{2}}\bigg{[} m_{\bar{q}}(V^{*}\Delta_{QL})^{\bar{q}i\,*}\Delta_{\bar{u}\bar{e}}^{\bar{q}j\,*} \Big{(}Q_{S_{1}}A(r)-Q_{\bar{q}}B(r)\Big{)}\\ +\big{(}m_{i}\Delta_{\bar{u}\bar{e}}^{\bar{q}i}\Delta_{\bar{u} \bar{e}}^{\bar{q}\,*}+m_{j}(V^{*}\Delta_{QL})^{\bar{q}i\,*}(V^{*}\Delta_{QL})^ {\bar{q}j}\big{)}\,\Big{(}Q_{S_{1}}\bar{A}(r)-Q_{\bar{q}}\bar{B}(r)\Big{)} \bigg{]}, \tag{36}\]
where the sum is over all possible up-type anti-quarks \(\bar{q}\) that can go in the loop, \(Q\) is the electric charge, \(m_{i,j}\) are the masses of the external leptons, \(r=m_{\bar{q}}^{2}/m_{S_{1}}^{2}\), and the loop functions are defined in the appendix of Ref. [43].
In terms of these Wilson coefficients, the electric and magnetic dipole moments can be written as
\[d_{f}=2\ \text{Im}\,c_{ff}^{R}\qquad a_{f}=\frac{4m_{f}}{e}\ \text{Re}\,c_{ff}^{R}. \tag{37}\]
Note that because the two fermions in the operator Eq. (35) have opposite chirality, all the contributions in Eq. (36) are proportional to the external fermion or internal quark mass. As a result, unless the Yukawas are very suppressed, the \(S_{1}\) contribution to EDMs and MDMs are dominated by diagrams with the top quark in the loop, which are proportional to \(m_{t}\).
### Lepton Flavor Violating Observables
The Lagrangian from Eq. (35) also contributes to LFV decays as [132, 141, 43]
\[\text{BR}\left(\ell\to\ell^{\prime}\gamma\right)=\frac{48\pi^{2}}{G_{F}^{2}m_ {\ell}^{2}}\left(|c_{\ell\ell^{\prime}}^{R}|^{2}+|c_{\ell^{\prime}\ell}^{R}|^ {2}\right). \tag{38}\]
Similar to the previous section, dominant contributions to \(c_{\ell^{\prime}\ell}^{R}\) come from diagrams with the heaviest quarks in the loop. More concretely, we find that in the limit \(m_{\ell},m_{\ell^{\prime}}\ll m_{S_{1}}\)
\[c_{\ell\ell^{\prime}}^{R}\approx\frac{em_{q}}{16\pi^{2}m_{S_{1}}^{2}}\left[ \text{ln}\bigg{(}\frac{m_{S_{1}}^{2}}{m_{q}^{2}}\bigg{)}-\frac{7}{4}\right] \left(V^{*}\Delta_{QL}\right)^{q\ell\,*}\Delta_{\bar{u}\bar{e}}^{q\ell^{ \prime}}. \tag{39}\]
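To indicate the size of these effects, the sketch below combines Eqs. (38) and (39) for the top-quark loop, evaluated with the wrinkled textures of Eqs. (31)-(32). It assumes \(V\simeq 1\) so that \((V^{*}\Delta_{QL})\simeq\Delta_{QL}\), drops the \(\mathcal{O}(1)\) Yukawa factors, and uses rough numerical inputs, so the output is an order-of-magnitude estimate only; the function and variable names are ours.

```python
# Rough sketch of Eqs. (38)-(39): radiative LFV decays from the top loop, with the
# wrinkled leptoquark textures. Order-of-magnitude only.
import numpy as np

GF, alpha_em = 1.166e-5, 1 / 137.0        # GeV^-2, fine-structure constant
e = np.sqrt(4 * np.pi * alpha_em)
m_t, m_mu, m_tau = 173.0, 0.1057, 1.777   # GeV
lam = 0.22

# Wrinkled third-row (top) entries: Delta_QL^{3l} and Delta_ue^{3l}
D_QL_3 = {"e": lam**3, "mu": lam, "tau": lam}
D_ue_3 = {"e": lam**11, "mu": lam**9, "tau": lam**7}

def cR(l, lp, m_S1):
    """Approximate dipole coefficient c^R_{l l'} of Eq. (39), top loop only [GeV^-1]."""
    pref = e * m_t / (16 * np.pi**2 * m_S1**2) * (np.log(m_S1**2 / m_t**2) - 7 / 4)
    return pref * D_QL_3[l] * D_ue_3[lp]

def BR_l_to_lp_gamma(l, lp, m_l, m_S1):
    """Branching ratio of Eq. (38)."""
    return 48 * np.pi**2 / (GF**2 * m_l**2) * (abs(cR(l, lp, m_S1))**2
                                               + abs(cR(lp, l, m_S1))**2)

m_S1 = 2000.0  # GeV
print("BR(mu  -> e  gamma) ~", BR_l_to_lp_gamma("mu", "e", m_mu, m_S1))
print("BR(tau -> mu gamma) ~", BR_l_to_lp_gamma("tau", "mu", m_tau, m_S1))
```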
These dipole operators also contribute to the well-constrained LFV processes \(\mu\to 3e\) and \(\mu\to e\) conversion in nuclei. In our leptoquark model, the dipole operator is the only contribution to \(\mu\to 3e\), so these branching ratios are directly correlated:
\[\text{BR}(\mu\to 3e)=\frac{\alpha}{3\pi}\Big{(}\log\frac{m_{\mu}^{2}}{m_{e}^{2} }-\frac{11}{4}\Big{)}\,\text{BR}(\mu\to e\gamma)\simeq\frac{1}{162}\text{BR}( \mu\to e\gamma). \tag{40}\]
The \(\mu-e\) conversion process in nuclei, however, also receives contributions from four-fermion operators coupling the muon and electron to quarks. The effective Hamiltonian for
Figure 9: Feynman diagram (in two-component notation) for the \(S_{1}\) leptoquark contribution to the dipole operators of charged fermions, including \((g-2)_{\mu}\). The largest contribution arises from the top quark in the loop. The photon can attach to either internal line in the loop.
this process can be written [146, 147]:
\[\begin{split}\mathcal{H}\supset\frac{G_{F}}{\sqrt{2}}&\sum_{q=u,d,s}\bigg{[}\left(c_{LS}^{(q)}\bar{e}P_{R}\mu+c_{RS}^{(q)}\bar{e}P_{L}\mu\right)\bar{q}q+\left(c_{LP}^{(q)}\bar{e}P_{R}\mu+c_{RP}^{(q)}\bar{e}P_{L}\mu\right)\bar{q}\gamma_{5}q\\ &+\left(c_{LV}^{(q)}\bar{e}\gamma^{\mu}P_{L}\mu+c_{RV}^{(q)}\bar{e}\gamma^{\mu}P_{R}\mu\right)\bar{q}\gamma_{\mu}q+\left(c_{LA}^{(q)}\bar{e}\gamma^{\mu}P_{L}\mu+c_{RA}^{(q)}\bar{e}\gamma^{\mu}P_{R}\mu\right)\bar{q}\gamma_{\mu}\gamma_{5}q\\ &+\frac{1}{2}\big{(}c_{LT}^{(q)}\bar{e}\sigma^{\mu\nu}P_{R}\mu+c_{RT}^{(q)}\bar{e}\sigma^{\mu\nu}P_{L}\mu\big{)}\bar{q}\sigma_{\mu\nu}q+\text{h.c.}\bigg{]}\end{split} \tag{41}\]
where for the \(S_{1}\) leptoquark,
\[\begin{split} c_{LS}^{(u)}=+c_{LP}^{(u)}& =-c_{LT}^{(u)}=-\frac{1}{2}\frac{v^{2}}{m_{S_{1}}^{2}}(V^{*} \Delta_{QL})^{11\,*}\,\Delta_{\bar{u}\bar{e}}^{12\,*}\\ c_{RS}^{(u)}=-c_{RP}^{(u)}&=-c_{RT}^{(u)}=-\frac{1}{ 2}\frac{v^{2}}{m_{S_{1}}^{2}}(V^{*}\Delta_{QL})^{12}\,\Delta_{\bar{u}\bar{e}} ^{11}\\ c_{LV}^{(u)}=-c_{LA}^{(u)}&=-\frac{1}{2}\frac{v^{2}} {m_{S_{1}}^{2}}(V^{*}\Delta_{QL})^{12}\,(V^{*}\Delta_{QL})^{11\,*}\\ c_{RV}^{(u)}=c_{RA}^{(u)}&=-\frac{1}{2}\frac{v^{2}} {m_{S_{1}}^{2}}\Delta_{\bar{u}\bar{e}}^{11}\,\Delta_{\bar{u}\bar{e}}^{12\,*} \end{split} \tag{42}\]
The conversion rate is then computed by evaluating the overlap integrals of the fermion wavefunction and nucleon densities. This has been performed in Ref. [148], assuming the coherent conversion process (where the initial and final state nucleus are the same) dominates. We use the average values of their overlap integrals for the different nuclei (Al and Au).
### Leptonic Meson Decays
#### B.3.1 \(P\to\ell\nu\)
The EFT for a generic meson decaying to a neutrino and a charged lepton is [149, 75, 34]
\[\begin{split}\mathcal{H}_{\text{eff}}=\frac{4G_{F}V_{ud}}{\sqrt{2}}\Big{[}&C_{L,ud\ell\nu}^{V}\left(\bar{u}_{L}\gamma^{\mu}d_{L}\right)\left(\bar{\ell}_{L}\gamma_{\mu}\nu_{L}\right)+C_{R,ud\ell\nu}^{V}\left(\bar{u}_{R}\gamma^{\mu}d_{R}\right)\left(\bar{\ell}_{L}\gamma_{\mu}\nu_{L}\right)\\ &+C_{L,ud\ell\nu}^{S}\left(\bar{u}_{R}d_{L}\right)\left(\bar{\ell}_{R}\nu_{L}\right)+C_{R,ud\ell\nu}^{S}\left(\bar{u}_{L}d_{R}\right)\left(\bar{\ell}_{R}\nu_{L}\right)\Big{]}+\text{h.c.},\end{split} \tag{43}\]
where \(u\) (\(d\)) labels the involved up-type (down-type) quark. In the SM, these decays are mediated by a \(W\) exchange and the overall normalization is chosen such that \(C_{L}^{V}=1\), with other Wilson coefficients set to zero.
For the \(S_{1}\) leptoquark, we can show that at the leptoquark mass scale
\[C^{V}_{L,ud\ell\nu} =\frac{\Delta^{d\nu}_{QL}(V^{*}\Delta_{QL})^{u\ell\,*}}{V_{ud}} \frac{v^{2}}{4m_{S_{1}}^{2}}, \tag{44}\] \[C^{S}_{L,ud\ell\nu} =\frac{\Delta^{d\nu}_{QL}\Delta^{u\ell}_{\bar{u}\bar{e}}}{V_{ud}} \frac{v^{2}}{4m_{S_{1}}^{2}},\]
In our model there are no couplings to RH down-type quarks, so \(C^{S}_{R,ud\ell\nu}=C^{V}_{R,ud\ell\nu}=0\).
The meson branching ratio to \(\ell\nu\) is given by
\[\text{BR}\left(P^{-}_{ud}\to\ell\nu\right)=\tau_{P}\frac{m_{P}f_{ P}^{2}G_{F}^{2}|V_{ud}|^{2}}{8\pi}m_{\ell}^{2}\left(1-\frac{m_{\ell}^{2}}{m_{P}^{2 }}\right)^{2}\\ \times\left|(C^{V}_{L,ud\ell\nu}-C^{V}_{R,ud\ell\nu})+\frac{m_{P} ^{2}}{m_{\ell}(m_{u}+m_{d})}(C^{S}_{R,ud\ell\nu}-C^{S}_{L,ud\ell\nu})\right|^{ 2}, \tag{45}\]
where \(\tau_{P}\) is the meson lifetime, \(m_{P}\) is the meson mass, \(f_{P}\) is the meson decay constant, \(m_{\ell}\) is the final state lepton's mass, and \(m_{u}\) (\(m_{d}\)) is the mass of the up-type (down-type) valence quark of the meson. This equation has been used to calculate the contribution of our model to various leptonic meson decays in the main text.
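A direct numerical implementation of Eq. (45) is straightforward. The sketch below codes the branching ratio as a function of the Wilson coefficients and checks that the SM limit (\(C^{V}_{L}=1\), all others zero) roughly reproduces the measured \(\text{BR}(K^{+}\to\mu^{+}\nu)\); the leptoquark coefficients of Eq. (44) can then be added on top. The meson and CKM inputs are approximate values chosen for illustration, radiative corrections are neglected, and the function name is ours.

```python
# Minimal sketch of Eq. (45), with a SM-limit sanity check for K+ -> mu nu.
import numpy as np

GF = 1.166e-5                  # GeV^-2
hbar = 6.582e-25               # GeV s

def BR_P_to_lnu(tau_s, mP, fP, Vud, ml, mu_q, md_q,
                CVL=1.0, CVR=0.0, CSL=0.0, CSR=0.0):
    """Eq. (45); Wilson coefficients default to the SM values."""
    tau = tau_s / hbar         # lifetime converted to GeV^-1
    pref = tau * mP * fP**2 * GF**2 * abs(Vud)**2 / (8 * np.pi)
    helicity = ml**2 * (1 - ml**2 / mP**2)**2
    amp = (CVL - CVR) + mP**2 / (ml * (mu_q + md_q)) * (CSR - CSL)
    return pref * helicity * abs(amp)**2

# K+ -> mu nu with SM couplings only (approximate inputs):
br_sm = BR_P_to_lnu(tau_s=1.24e-8, mP=0.4937, fP=0.156, Vud=0.2243,
                    ml=0.1057, mu_q=2.2e-3, md_q=95e-3)
print("BR(K+ -> mu nu), SM limit ~", round(br_sm, 2))   # close to the measured ~0.64
```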
#### B.3.2 \(P\to\ell\ell^{\prime}\) and \(P\to\nu\nu^{\prime}\)
The Hamiltonian describing a meson \(P\) decaying to charged leptons \(l\) and \(l^{\prime}\) is [35, 150]
\[\mathcal{H}_{\text{eff}}\supset\frac{4G_{F}}{\sqrt{2}}\lambda_{\text{CKM}} \left[\sum_{X=S,P,9,10}C^{qq^{\prime},\ell\ell^{\prime}}_{X}\mathcal{O}^{qq^{ \prime},\ell\ell^{\prime}}_{X}+C^{qq^{\prime},\ell\ell^{\prime}}_{X^{\prime} }\mathcal{O}^{qq^{\prime},\ell\ell^{\prime}}_{X^{\prime}}+\text{h.c.}\right]. \tag{46}\]
Here, \(\lambda_{\text{CKM}}\) is a combination of two CKM entries involving the valence quarks of the meson, \(C_{X}\) are Wilson coefficients, and their associated operators are
\[\mathcal{O}^{qq^{\prime};\ell\ell^{\prime}}_{S}=\frac{\alpha_{ \text{em}}}{4\pi}(\bar{q}P_{R}q^{\prime})(\bar{\ell}\ell^{\prime}) \mathcal{O}^{qq^{\prime};\ell\ell^{\prime}}_{P}=\frac{\alpha_{\text{em}}}{4 \pi}(\bar{q}P_{R}q^{\prime})(\bar{\ell}\gamma^{5}\ell^{\prime}) \tag{47}\] \[\mathcal{O}^{qq^{\prime};\ell\ell^{\prime}}_{9}=\frac{\alpha_{ \text{em}}}{4\pi}(\bar{q}\gamma^{\mu}P_{L}q^{\prime})(\bar{\ell}\gamma_{\mu} \ell^{\prime}) \mathcal{O}^{qq^{\prime};\ell\ell^{\prime}}_{10}=\frac{\alpha_{ \text{em}}}{4\pi}(\bar{q}\gamma^{\mu}P_{L}q^{\prime})(\bar{\ell}\gamma_{\mu} \gamma^{5}\ell^{\prime}),\]
The operators with a prime on the subscript are obtained by the replacement \(P_{L/R}\to P_{R/L}\).
At tree level, our leptoquark only gives rise to decays of \(D\) and \(\pi\) via \(t\)-channel diagrams, while decays of \(K,\;\;B\), and \(B_{s}\) take place at one-loop level and are suppressed. For the tree-level decays, the Wilson coefficients above can be calculated as a function of the leptoquark
Yukawa couplings [35]
\[\begin{split} C_{9}^{qq^{\prime};\ell\ell^{\prime}}=-C_{10}^{qq^{ \prime};\ell\ell^{\prime}}&=-\frac{v^{2}\pi}{2\alpha_{\text{em}} \lambda_{\text{CKM}}m_{S_{1}}^{2}}(V^{*}\Delta_{QL})_{q^{\prime}\ell^{\prime}} (V\Delta_{QL})_{q\ell}^{*}\\ C_{9^{\prime}}^{qq^{\prime};\ell\ell^{\prime}}=C_{10^{\prime}}^{qq^ {\prime};\ell\ell^{\prime}}&=-\frac{v^{2}\pi}{2\alpha_{\text{em}} \lambda_{\text{CKM}}m_{S_{1}}^{2}}(\Delta_{\bar{u}\bar{e}})_{q^{\prime}\ell^{ \prime}}^{*}(\Delta_{\bar{u}\bar{e}})_{q\ell}\\ C_{S}^{qq^{\prime};\ell\ell^{\prime}}=C_{P}^{qq^{\prime};\ell\ell^ {\prime}}&=-\frac{v^{2}\pi}{2\alpha_{\text{em}}\lambda_{\text{CKM }}m_{S_{1}}^{2}}(\Delta_{\bar{u}\bar{e}})_{q^{\prime}\ell^{\prime}}^{*}(V^{*} \Delta_{QL})_{q\ell}^{*}\\ C_{S^{\prime}}^{qq^{\prime};\ell\ell^{\prime}}=-C_{P^{\prime}}^{qq ^{\prime};\ell\ell^{\prime}}&=-\frac{v^{2}\pi}{2\alpha_{\text{em}} \lambda_{\text{CKM}}m_{S_{1}}^{2}}(V^{*}\Delta_{QL})_{q^{\prime}\ell^{\prime}} (\Delta_{\bar{u}\bar{e}})_{q\ell}.\end{split} \tag{48}\]
For \(D\) and \(\pi\) mesons decays we set \(\lambda_{\text{CKM}}=V_{q^{\prime}b}^{*}V_{qb}\) with \(q\), \(q^{\prime}\) referring to the valence quarks of the meson.
In terms of the Wilson coefficients above, the BR of the meson to \(\ell^{-}\) and \(\ell^{\prime+}\) is given by [35, 150]
\[\begin{split} BR(P&\to\ell^{-}\ell^{\prime+})=\tau_{P }f_{P}^{2}m_{P}^{3}\frac{\alpha_{\text{em}}^{2}G_{F}^{2}}{64\pi^{3}}\lambda_{ \text{CKM}}^{2}\sqrt{\left(1-\frac{(m_{1}-m_{2})^{2}}{m_{P}^{2}}\right)\left( 1-\frac{(m_{1}+m_{2})^{2}}{m_{P}^{2}}\right)}\\ &\qquad\times\left[\left(1-\frac{(m_{1}+m_{2})^{2}}{m_{P}^{2}} \right)\left|(C_{9}-C_{9^{\prime}})\frac{m_{1}-m_{2}}{m_{P}}+\frac{m_{P}}{m_{ q^{\prime}}+m_{q}}(C_{S}-C_{S^{\prime}})\right|^{2}\right.\\ &\qquad+\left.\left(1-\frac{(m_{1}-m_{2})^{2}}{m_{P}^{2}}\right) \left|(C_{10}-C_{10^{\prime}})\frac{m_{1}+m_{2}}{m_{P}}+\frac{m_{P}}{m_{q^{ \prime}}+m_{q}}(C_{P}-C_{P^{\prime}})\right|^{2}\right],\end{split} \tag{49}\]
where \(\tau_{P}\) is the meson lifetime, \(m_{P}\) is the meson mass, and \(m_{1}\) (\(m_{2}\)) is the mass of the \(\ell\) (\(\ell^{\prime}\)) lepton.
Eq. (49) can also be used to calculate meson decays to a pair of neutrinos, by setting \(m_{1}=m_{2}=0\) and keeping only the couplings to LH fermions in the SM. Doing so, we find that the \(S_{1}\) leptoquark gives no contribution to these decays.
### Semi-leptonic Meson Decays
Next we compute the leptoquark contribution to semi-leptonic meson decays. We ignore constraints from \(B\to K^{(*)}\ell\ell\), since the \(S_{1}\) leptoquark only contributes at loop-level, which is subdominant for leptoquark masses above a few TeV [69]. Instead, we study the more sensitive observables \(B\to D^{(*)}l\nu\) and \(K\to\pi\nu\bar{\nu}\), which receive contributions at tree-level.
#### B.4.1 \(R_{D^{(*)}}\)
\(B\to D^{(*)}l\nu\) proceeds at tree-level via the exchange of the \(W\) and the leptoquark [149, 151, 152, 153, 154, 155, 156, 157, 158, 159, 150, 151, 152, 153]. This and other leptoquark models have generated significant interest in the context of \(B\to D^{(*)}l\nu\) because some evidence of a lepton flavor non-universal BSM contribution in
this channel, captured by the ratio
\[R_{D^{(*)}}\equiv\frac{\text{BR}\left(B\to D^{(*)}\tau\nu\right)}{\text{BR} \left(B\to D^{(*)}\ell\nu\right)}, \tag{50}\]
has been detected in various experiments [154, 155, 156, 157, 158, 159, 160] (\(\ell=e,\mu\)).
When computing the decay rate, integrating the heavy mediators out allows us to work with a set of dimension-6 operators given by
\[\mathcal{H}_{\text{eff}}=\frac{4G_{F}V_{cb}}{\sqrt{2}}\bigg{(}\mathcal{O}^{V}_ {LL}+\sum_{\begin{subarray}{c}X=S,V,T\\ M=L,R\end{subarray}}C^{X}_{ML}\mathcal{O}^{X}_{ML}\bigg{)} \tag{51}\]
where
\[\mathcal{O}^{S}_{ML}\equiv(\bar{c}P_{M}b)(\bar{\tau}P_{L}\nu)\quad\mathcal{O}^ {V}_{ML}\equiv(\bar{c}\gamma^{\mu}P_{M}b)(\bar{\tau}\gamma_{\mu}P_{L}\nu)\quad \mathcal{O}^{T}_{ML}\equiv(\bar{c}\sigma^{\mu\nu}P_{M}b)(\bar{\tau}\sigma_{ \mu\nu}P_{L}\nu) \tag{52}\]
Note that we have split apart the contributions to the vector operator such that the Wilson coefficients only capture leptoquark contributions.
For the process of interest, the helicity amplitude we wish to compute is
\[-i\mathcal{M}=\langle\ell(p_{\ell},\lambda_{\ell}),\bar{\nu}_{\ell}(p_{\nu}), D^{(*)}(p_{\mu},\epsilon(\lambda_{M}))|\mathcal{H}_{eff}|B(p_{B})\rangle. \tag{53}\]
Each of these operators can be split apart into the constituent quark and lepton bilinears, which allows us to split apart the total amplitude into a product of hadronic and leptonic amplitudes. Details of the calculation can be found in [161, 162]. The leptonic amplitudes, which are generically functions of various angles, are identical for both \(D\) and \(D^{*}\), while the hadronic amplitudes, which are functions of \(q^{2}\), vary and are determined by the specific helicity of the \(D^{(*)}\) meson. The leptonic amplitudes can be found in multiple references, including [161]; the expressions for the relevant hadronic functions are taken from [163, 151].12
Footnote 12: The correct sign of \(h_{T_{3}}(w)\) is in Ref. [151].
To compute the differential decay rate, we use
\[\frac{\text{d}\Gamma}{\text{d}q^{2}\,\text{d}\cos\theta}=\frac{1}{2m_{B}}\sum _{\ell}\Big{|}\mathcal{M}(q^{2},\cos\theta)\Big{|}^{2}\frac{\sqrt{(m_{B}+m_{D })^{2}-q^{2}}}{256\pi^{3}m_{B}^{2}}\left(1-\frac{m_{\tau}^{2}}{q^{2}}\right) \tag{54}\]
where we sum over neutrinos in the final state. Performing the angular integral over the leptonic functions first, we recover Eqs. (B.6) and (B.8) in Ref. [162] for the differential decay rates of \(B\to D\tau\nu\) and \(B\to D^{*}\tau\nu\) respectively. This is the result for a \(\tau\) in the final state, but making the replacement \(m_{\tau}\to m_{\ell}\) gives us the expression for decays involving any of the SM leptons. The total decay rate can then be obtained by performing the \(q^{2}\) integral
over the interval \([m_{\ell}^{2},(m_{B}-m_{D})^{2}]\).
The expressions from (B.6) and (B.8) in Ref. [162] are given in terms of the Wilson coefficients defined in Eq. (51); therefore, the last ingredient required to complete this computation is the set of pertinent Wilson coefficients for the leptoquark model. They are given by
\[\begin{split} C_{LL}^{S}&=-\frac{v^{2}}{4m_{LQ}^{2} }\frac{\Delta_{QL}^{3j}\,\Delta_{\bar{u}\bar{e}}^{23}}{V_{cb}}\\ C_{LL}^{V}&=\frac{v^{2}}{4m_{LQ}^{2}}\frac{\Delta_{ QL}^{3j}(V^{*}\Delta_{QL})^{23\,*}}{V_{cb}}\\ C_{LL}^{T}&=\frac{v^{2}}{16m_{LQ}^{2}}\frac{\Delta_ {QL}^{3j}\,\Delta_{\bar{u}\bar{e}}^{23}}{V_{cb}}\end{split} \tag{55}\]
#### B.4.2 \(K\to\pi\nu\bar{\nu}\)
The decays \(K^{+}\to\pi^{+}\nu\bar{\nu}\) and \(K_{L}\to\pi^{0}\nu\bar{\nu}\) can be described with an effective Hamiltonian very similar to Eq. (19) [164, 68]:
\[\mathcal{H}_{\rm eff}=-\frac{4G_{F}}{\sqrt{2}}\Bigg{[}\mathcal{H}_{\rm eff}^ {(c)}+V_{td}^{*}V_{ts}(C_{L}^{K\nu}\mathcal{O}_{L}^{K\nu}+C_{R}^{K\nu} \mathcal{O}_{R}^{K\nu})+\text{h.c.}\Bigg{]} \tag{56}\]
where
\[\mathcal{O}_{L(R)}^{K\nu}=\frac{\alpha_{\rm em}}{4\pi}(\bar{d}\gamma^{\mu}P_{ L(R)}s)(\bar{\nu}\gamma_{\mu}(1-\gamma_{5})\nu), \tag{57}\]
and \(\mathcal{H}_{\rm eff}^{(c)}\) includes operators that encode physics below the weak scale. The branching ratios for \(K^{+}\to\pi^{+}\nu\bar{\nu}\) and \(K_{L}\to\pi^{0}\nu\bar{\nu}\) are then written as
\[\begin{split}\text{BR}(K^{+}\to\pi^{+}\nu\bar{\nu})&=\kappa_{+}\left[\left(\frac{\text{Im}(\lambda_{t}X^{K\nu})}{\lambda^{5}}\right)^{2}+\Big{(}-P_{(u,c)}+\frac{\text{Re}(\lambda_{t}X^{K\nu})}{\lambda^{5}}\Big{)}^{2}\right]\\ \text{BR}(K_{L}\to\pi^{0}\nu\bar{\nu})&=\kappa_{L}\left(\frac{\text{Im}(\lambda_{t}X^{K\nu})}{\lambda^{5}}\right)^{2}\end{split} \tag{58}\]
where \(X^{K\nu}=-\sin^{2}\theta_{W}(C_{L}^{K\nu}+C_{R}^{K\nu})\), \(\lambda_{t}=V_{td}^{*}V_{ts}\) and \(\lambda=0.2255\) is the Wolfenstein parameter of the CKM matrix. The \(\kappa\)-factors encode input from hadronic matrix elements. Following Ref. [68], we take \(\kappa_{+}=(5.27\pm 0.03)\times 10^{-11}\) and \(\kappa_{L}=(2.27\pm 0.01)\times 10^{-10}\). The quantity \(P_{(u,c)}=0.41\pm 0.05\) encodes contributions from charm and light-quark loops. These two decays are related via the Grossman-Nir bound [165].
The SM Wilson coefficient \(C_{L}^{K\nu\,{\rm SM}}\) is the same as Eq. (21), while the leptoquark contribution is
\[C_{L}^{K\nu}=\frac{v^{2}}{m_{S_{1}}^{2}}\frac{\pi}{2\alpha_{\rm em}}\frac{ \Delta_{QL}^{2k}\Delta_{QL}^{1k\,*}}{\lambda_{t}} \tag{59}\]
We set a constraint on the leptoquark mass by demanding that the total predicted branching ratio (including the SM contribution) be less than the \(2\sigma\) upper limit of the measured branching ratio in Ref. [88]: \(\text{BR}(K^{+}\to\pi^{+}\nu\bar{\nu})<1.88\times 10^{-10}\). The analogous limit for \(K_{L}\) decays set by the KOTO experiment, \(\text{BR}(K_{L}\to\pi^{0}\nu\bar{\nu})<4.9\times 10^{-9}\)[166], is not yet competitive in the context of this model.
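The constraint described above can be illustrated with a short numerical sketch of Eqs. (58)-(59): the SM branching ratio, and the relative size of the leptoquark Wilson coefficient for the wrinkled texture at a reference mass. The value of \(\lambda_{t}\) below is an approximate CKM input, the \(\mathcal{O}(1)\) factors are dropped, and the proper per-neutrino-flavor combination in the branching ratio is not tracked, so the numbers are indicative only.

```python
# Rough sketch of Eqs. (58)-(59) for K+ -> pi+ nu nubar.
import numpy as np

kappa_p, P_uc, lam_w = 5.27e-11, 0.41, 0.2255     # hadronic inputs used in Eq. (58)
sin2thw, alpha_em, v = 0.231, 1 / 137.0, 246.0
CL_SM = -6.353                                    # Eq. (21)
lambda_t = -3.3e-4 - 1.3e-4j                      # ~ V_td^* V_ts (approximate)

def BR_Kp(X):
    """Eq. (58), treating X^{K nu} as a single (flavor-universal) coefficient."""
    z = lambda_t * X / lam_w**5
    return kappa_p * (z.imag**2 + (-P_uc + z.real)**2)

X_SM = -sin2thw * CL_SM
print("BR(K+ -> pi+ nu nu), SM ~", BR_Kp(X_SM))   # ~9e-11, near the SM prediction

# Relative size of the leptoquark coefficient, Eq. (59), for the nu_tau flavor:
lam, m_S1 = 0.22, 2000.0                          # GeV
D2k, D1k = lam, lam**5                            # wrinkled Delta_QL^{23}, Delta_QL^{13}
CL_NP = v**2 / m_S1**2 * np.pi / (2 * alpha_em) * D2k * np.conj(D1k) / lambda_t
print("|C_L^NP / C_L^SM| at 2 TeV ~", abs(CL_NP / CL_SM))
```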
### \(Z\to\ell\ell^{\prime}\)
Virtual corrections involving SM fermions and the \(S_{1}\) leptoquark can also contribute to lepton flavor universality violating decays of the SM gauge bosons. The strongest bound on the leptoquark comes from measurements of the \(Z\to\ell\ell^{\prime}\) decays, which are constrained by ATLAS [167]. Constraints on \(Z\) decays can be cast as bounds on anomalous couplings of the \(Z\) boson, \(\delta g\), where
\[\mathcal{L}\supset\frac{g}{\cos\theta_{W}}\sum_{f,i,j}\bar{f}_{i}\gamma^{\mu} \Big{[}(\delta_{ij}g_{\text{SM}}^{f_{L}}+\delta g_{ij}^{f_{L}})P_{L}+(\delta_ {ij}g_{\text{SM}}^{f_{R}}+\delta g_{ij}^{f_{R}})P_{R}\Big{]}f_{j}Z_{\mu}, \tag{60}\]
with \(g_{\text{SM}}^{f_{L}}=T_{3}^{f}-Q_{f}\sin^{2}\theta_{W}\) and \(g_{\text{SM}}^{f_{R}}=-Q_{f}\sin^{2}\theta_{W}\) being the left- and right-handed fermion couplings to the \(Z\) boson in the SM.
The \(S_{1}\) leptoquark contributions to these anomalous couplings have been worked out in Refs. [168, 169]. In particular Ref. [169] includes additional finite terms that are numerically important. The \(S_{1}\) leptoquark contributions to the charged lepton couplings of the \(Z\) is
\[\begin{split}\delta g_{ij}^{\ell\,L(R)}&=\frac{N_{ c}}{16\pi^{2}}w_{L(R)}^{tj}(w_{L(R)}^{ti})^{*}\bigg{[}\big{(}g_{\text{SM}}^{u_{ L(R)}}-g_{\text{SM}}^{u_{R(L)}}\big{)}\frac{x_{t}(x_{t}-1-\log x_{t})}{(x_{t}-1)^{2}}+ \frac{x_{Z}}{12}F_{L(R)}(x_{t})\bigg{]}\\ &\qquad+\frac{N_{c}}{48\pi^{2}}x_{Z}\sum_{k=u,c}w_{L(R)}^{kj}(w_ {L(R)}^{ki})^{*}\bigg{[}g_{\text{SM}}^{u_{L(R)}}\big{(}\log x_{Z}-i\pi-\frac{1 }{6}\big{)}+\frac{1}{6}g_{\text{SM}}^{\ell_{L(R)}}\bigg{]}\end{split} \tag{61}\]
where \(x_{Z}=m_{Z}^{2}/m_{S_{1}}^{2}\), \(x_{t}=m_{t}^{2}/m_{S_{1}}^{2}\), \(w_{L}^{ij}=(V^{*}\Delta_{QL})^{ij}\), \(w_{R}^{ij}=\Delta_{\bar{u}\bar{e}}^{ij}\), and \(F_{L(R)}(x)\) are loop functions, which can be found in Ref. [169].
Ref. [170] sets bounds on combinations of these anomalous couplings with a variety of flavor Ansatze, by combining the LFV decay bounds with LEP data at the \(Z\)-pole [171]. To extract a constraint on the \(S_{1}\) leptoquark, we simply demand that the anomalous couplings computed above satisfy their bounds assuming generic LFV coupling, which limits
\[\sqrt{|\delta g_{12}^{\ell_{L}}|^{2}+|\delta g_{12}^{\ell_{R}}|^{2}}<1.2\times 1 0^{-3},\qquad\sqrt{|\delta g_{23}^{\ell_{L}}|^{2}+|\delta g_{23}^{\ell_{R}}|^{2 }}<4.8\times 10^{-3}. \tag{62}\]
The \(e\mu\) bound is most constraining for the anarchic and vanilla FN flavor Ansatze, while the \(\mu\tau\) bound is strongest with the additional wrinkles from Eq. (32).
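As a rough numerical illustration, the sketch below evaluates only the first (top-quark) term of Eq. (61), dropping the \(x_{Z}\)-suppressed pieces and the loop functions \(F_{L(R)}\), and checks it against the bounds of Eq. (62); the coupling values \(w_{L}^{ti}\), \(w_{L}^{tj}\) and the leptoquark mass are placeholders.

```python
import numpy as np

# Keep only the dominant top-quark term of Eq. (61); the x_Z-suppressed pieces
# and the loop functions F_{L/R} are dropped.  Couplings and mass are placeholders.
N_c, m_t, sw2 = 3, 173.0, 0.231
g_uL_SM = 0.5 - (2.0 / 3.0) * sw2
g_uR_SM = -(2.0 / 3.0) * sw2

def delta_g_L(m_S1, w_ti, w_tj):
    """Leading S_1 contribution to the anomalous left-handed Z coupling delta g_{ij}."""
    x_t = m_t**2 / m_S1**2
    loop = x_t * (x_t - 1.0 - np.log(x_t)) / (x_t - 1.0)**2
    return N_c / (16.0 * np.pi**2) * w_tj * np.conj(w_ti) * (g_uL_SM - g_uR_SM) * loop

# Example: w_L^{t mu} = 0.3, w_L^{t tau} = 0.5 with a 2 TeV leptoquark
dg = delta_g_L(2000.0, 0.3, 0.5)
print(abs(dg), "to be compared with the 4.8e-3 bound of Eq. (62)")
```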
### Meson Mixing
The leptoquark \(S_{1}\) also contributes at the one-loop level to operators in the SM that are responsible for meson mixing. In particular, for the down-type quarks, the important operator for meson mixing is the dimension-six, four-quark operator
\[\mathcal{H}_{\text{mix}}\supset C^{ij}_{\text{mix}}\left(\bar{d}^{i}_{L}\gamma^ {\mu}d^{j}_{L}\right)(\bar{d}^{i}_{L}\gamma^{\mu}d^{j}_{L}). \tag{63}\]
The associated Wilson coefficient for this operator generated by the \(S_{1}\) leptoquark is [153]
\[C^{ij}_{\text{mix}}=\frac{1}{128\pi^{2}m_{S_{1}}^{2}}\sum_{k=1}^{3}\left[\Delta^{ik\,*}_{QL}\,\Delta^{jk}_{QL}\right]^{2}, \tag{64}\]
where the sum above is over all neutrino flavors. Several experimental quantities of interest can then be derived from this; for instance (in the limit of negligible CP violating phases) the mass difference \(\Delta m\) between the mass eigenstates of the oscillating meson is given by
\[\Delta m=\frac{\left\langle P\big{|}\mathcal{H}_{\text{mix}}\big{|}\bar{P} \right\rangle}{m_{P}}\ =\frac{C^{ij}_{\text{mix}}}{m_{P}}\left\langle P\big{|}(\bar{d}^{i}_{L} \gamma^{\mu}d^{j}_{L})\ (\bar{d}^{i}_{L}\gamma^{\mu}d^{j}_{L})\big{|}\bar{P} \right\rangle. \tag{65}\]
Here, \(P\) denotes the meson whose constituent down-type quarks are in the \(i,j\) generation. The non-perturbative hadronic matrix element above is
\[\left\langle P\big{|}\left(\bar{d}^{i}_{L}\gamma^{\mu}d^{j}_{L} \right)\,(\bar{d}^{i}_{L}\gamma^{\mu}d^{j}_{L})\,\big{|}\bar{P}\right\rangle= \frac{2}{3}f_{P}^{2}m_{P}^{2}B_{P}, \tag{66}\]
where \(f_{P}\) is the meson decay constant and \(B_{P}\) is the meson bag factor, which can be extracted from lattice computations [172, 173, 174].
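The sketch below chains Eqs. (64)-(66) together to estimate the new-physics contribution to \(\Delta m_{B_s}\); the coupling matrix \(\Delta_{QL}\), and the decay constant and bag factor used, are illustrative inputs only.

```python
import numpy as np

# New-physics contribution to B_s mixing from Eqs. (64)-(66).
# The couplings Delta_QL and the hadronic inputs below are placeholders.
m_S1 = 2000.0                                   # GeV
f_Bs, B_Bs, m_Bs = 0.2303, 1.35, 5.3669          # GeV, dimensionless, GeV (representative)

Delta_QL = np.array([                            # Delta_QL^{ik}: rows = quark gen, cols = lepton gen
    [1e-3, 1e-3, 1e-3],
    [3e-3, 3e-3, 3e-3],
    [1e-2, 1e-2, 1e-2],
])

def C_mix(i, j):
    """Eq. (64): sum over lepton flavors k of [Delta^{ik*} Delta^{jk}]^2, in GeV^-2."""
    return sum((np.conj(Delta_QL[i, k]) * Delta_QL[j, k])**2
               for k in range(3)) / (128 * np.pi**2 * m_S1**2)

# B_s corresponds to (i, j) = (s, b), i.e. (1, 2) with 0-indexed generations
dm_NP = C_mix(1, 2) * (2.0 / 3.0) * f_Bs**2 * m_Bs * B_Bs   # Eqs. (65)-(66), in GeV
print("NP contribution to Delta m_Bs:", dm_NP, "GeV")
```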
In order to reduce uncertainties from the hadronic matrix elements, we find it advantageous to compare ratios of the matrix elements of the mixing operator (as given in Eq. (33)). We define
\[C_{B_{q}}e^{2i\phi_{B_{q}}}=\frac{\left\langle B_{q}\big{|}\mathcal{H}_{\text{mix}}^{\text{SM+NP}}\big{|}\bar{B}_{q}\right\rangle}{\left\langle B_{q}\big{|}\mathcal{H}_{\text{mix}}^{\text{SM}}\big{|}\bar{B}_{q}\right\rangle}, \tag{67}\]
where \(q=d,s\), and in the SM, \(C_{B_{q}}=1\) and \(\phi_{B_{q}}=0\) by definition. Being a ratio, \(C_{B_{q}}\) is free from the non-perturbative matrix elements and depends only on perturbative, short-distance Wilson coefficients. This ratio is experimentally determined by the UTFit collaboration [137, 138, 90], and can be understood as a short-distance proxy for the mass difference \(\Delta m\). In principle, there can be an intricate interplay between the phases of the leptoquark couplings, leading to interference with the SM contributions in this ratio. In this work, we avoid making any assumptions on the underlying complex phases of the leptoquark couplings in \(C_{B_{q}}\), and simply compute the absolute value of \(C_{B_{q}}\).
Additional CP violation from BSM physics is also strongly constrained by other meson mixing measurements, especially in the Kaon system. The quantity of interest is \(\epsilon_{K}\), which, following standard assumptions (see e.g. [175]), is given by
\[\epsilon_{K}=\frac{1}{4}\frac{\left\langle K^{0}\big{|}\mathcal{H}_{\text{mix}}\big{|}\bar{K}^{0}\right\rangle}{\left\langle\bar{K}^{0}\big{|}\mathcal{H}_{\text{mix}}\big{|}K^{0}\right\rangle}-\frac{1}{4}. \tag{68}\]
To account for \(\epsilon_{K}\), which is much more constraining than the Kaon mass difference, we define
\[C_{\epsilon_{K}}=\frac{\text{Im}\langle K^{0}|\mathcal{H}_{\text{mix}}^{\text {SM+NP}}|\bar{K}^{0}\rangle}{\text{Im}\langle K^{0}|\mathcal{H}_{\text{mix}}^{ \text{SM}}|\bar{K}^{0}\rangle}, \tag{69}\]
where again \(C_{\epsilon_{K}}=1\) in the SM.
For all of these quantities, we compute the leptoquark contributions using Eq. (64). We compare to the SM matrix elements, which are computed following Refs. [175, 176, 177], including the scale-independent, short-distance QCD corrections. Then we set constraints using the latest results from UTFit [138].
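Because the hadronic matrix elements cancel in the ratio, forming \(|C_{B_q}|\) only requires the short-distance coefficients, as in the following sketch; both numbers there are placeholders, with the SM value of \(C^{sb}_{\rm mix}\) standing in for the full computation of Refs. [175, 176, 177].

```python
import numpy as np

# Forming the ratio of Eq. (67): the hadronic matrix elements cancel, so only
# short-distance Wilson coefficients enter.  The NP value follows Eq. (64) for an
# assumed coupling product; the SM value is an illustrative placeholder.
m_S1 = 2000.0                          # GeV
coupling_sb = 3e-3 * 1e-2              # assumed Delta_QL^{2k*} Delta_QL^{3k}, one lepton flavor
C_mix_NP = 3 * coupling_sb**2 / (128 * np.pi**2 * m_S1**2)   # summed over 3 lepton flavors
C_mix_SM = 8.0e-11                     # assumed SM C_mix^{sb} in GeV^-2 (placeholder)

print("|C_{B_s}| =", abs(1 + C_mix_NP / C_mix_SM))
```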
We do not consider effects of the \(S_{1}\) leptoquark on mixing in mesons with up-type quarks such as the \(D^{0}\), primarily due to large hadronic uncertainties [178, 179] in the current SM predictions, which make it difficult to glean any information about new physics contributions. |
2303.03625 | SGDA: Towards 3D Universal Pulmonary Nodule Detection via Slice Grouped
Domain Attention | Lung cancer is the leading cause of cancer death worldwide. The best solution
for lung cancer is to diagnose the pulmonary nodules in the early stage, which
is usually accomplished with the aid of thoracic computed tomography (CT). As
deep learning thrives, convolutional neural networks (CNNs) have been
introduced into pulmonary nodule detection to help doctors in this
labor-intensive task and demonstrated to be very effective. However, the
current pulmonary nodule detection methods are usually domain-specific, and
cannot satisfy the requirement of working in diverse real-world scenarios. To
address this issue, we propose a slice grouped domain attention (SGDA) module
to enhance the generalization capability of the pulmonary nodule detection
networks. This attention module works in the axial, coronal, and sagittal
directions. In each direction, we divide the input feature into groups, and for
each group, we utilize a universal adapter bank to capture the feature
subspaces of the domains spanned by all pulmonary nodule datasets. Then the
bank outputs are combined from the perspective of domain to modulate the input
group. Extensive experiments demonstrate that SGDA enables substantially better
multi-domain pulmonary nodule detection performance compared with the
state-of-the-art multi-domain learning methods. | Rui Xu, Zhi Liu, Yong Luo, Han Hu, Li Shen, Bo Du, Kaiming Kuang, Jiancheng Yang | 2023-03-07T03:17:49Z | http://arxiv.org/abs/2303.03625v1 | # SGDA: Towards 3D Universal Pulmonary Nodule Detection via Slice Grouped Domain Attention
###### Abstract
Lung cancer is the leading cause of cancer death worldwide. The best solution for lung cancer is to diagnose the pulmonary nodules in the early stage, which is usually accomplished with the aid of thoracic computed tomography (CT). As deep learning thrives, convolutional neural networks (CNNs) have been introduced into pulmonary nodule detection to help doctors in this labor-intensive task and demonstrated to be very effective. However, the current pulmonary nodule detection methods are usually domain-specific, and cannot satisfy the requirement of working in diverse real-world scenarios. To address this issue, we propose a slice grouped domain attention (SGDA) module to enhance the generalization capability of the pulmonary nodule detection networks. This attention module works in the axial, coronal, and sagittal directions. In each direction, we divide the input feature into groups, and for each group, we utilize a universal adapter bank to capture the feature subspaces of the domains spanned by all pulmonary nodule datasets. Then the bank outputs are combined from the perspective of domain to modulate the input group. Extensive experiments demonstrate that SGDA enables substantially better multi-domain pulmonary nodule detection performance compared with the state-of-the-art multi-domain learning methods.
Pulmonary nodule Detection, Multi-center Study, Domain Adaptation, Slice Grouped Squeeze-and-Excitation Adapter.
## 1 Introduction
Lung cancer has been the most common cause of cancer death in the world. Prompt diagnosis of the pulmonary nodules and timely treatment can significantly improve lung cancer survival rates. For pulmonary nodule detection, the most effective and widely used tool is thoracic computed tomography (CT). However, a single CT scan contains hundreds of slices; thus interpreting CT data for pulmonary nodule diagnosis is a massive workload for doctors, and computer-aided algorithms have been developed to assist doctors in this laborious task [1, 2]. In recent years, with the prosperity of deep learning, convolutional neural networks (CNNs) have been introduced in the field of pulmonary nodule detection. Powered by the availability of several pulmonary nodule datasets, such as LUNA16 [3], tianchi [4], PN9 [6], etc., CNN based methods have achieved great success, and become the mainstream approach for pulmonary nodule detection.
Nonetheless, the existing pulmonary detection methods are usually domain-specific, e.g. trained and tested on the same dataset. They often show performance degradation when applied to other datasets due to the nontrivial domain shift. As shown in Fig. 1, pulmonary nodule datasets can vary in terms of illumination, color contrast/saturation, resolution, etc. The annotation standards of datasets can also be different. For instance, LUNA16 [3] only covers nodules \(\geq\) 3mm; russia [5] just includes binary labels; PN9 [6] not only annotates nodules of all sizes, but also elaborately categorizes them into 9 classes. It is common that high pulmonary detection performance requires a model specially trained for the target dataset.
Fig. 1: Samples of four pulmonary nodule datasets. Each column belongs to one pulmonary nodule dataset as labeled. CT images of different datasets present domain discrepancy, such as illumination, color contrast/saturation, resolution, number of nodules.
However, in most clinical scenarios, multi-center trials have to be conducted. The CT images to be analyzed are not restricted to any one of the domains in Fig. 1. Hence, it is necessary to have algorithms capable of detecting nodules from CT scans regardless of which medical center they are collected from. In natural images, multi-domain learning (MDL) methods are developed to tackle the learning of representations for multiple domains, where the domain the data come from is known a priori. They often use a combination of parameters shared across domains and parameters specialized for each domain, which are usually known as adapters. Most of them focus on natural image analysis and have achieved significant progress [7, 8, 9, 10]. In [11], MDL is introduced to medical image segmentation tasks, and a universal architecture is proposed for multi-domain medical image segmentation through parallel channel-wise convolutions, one per domain, followed by one point-wise convolution shared by all domains. A more recent work [12] combines MDL with missing annotation mining to develop a universal lesion detection network. However, existing efforts in this area mostly require prior knowledge of the domain of interest. This is undesirable for autonomous systems in real applications, where determining the domain (i.e., which dataset the data are drawn from) is also a nontrivial problem. Therefore, we consider the design of a universal object detection network capable of operating over multiple pulmonary nodule datasets with no need for prior knowledge of the domain of interest.
To achieve this goal, we propose a slice grouped domain attention (SGDA) module for adaptation to different domains, and enhancing the generalization capability of the pulmonary nodule detection networks. The SGDA module can capture the feature subspaces of the domains spanned by all pulmonary nodule datasets from the axial, coronal, and sagittal directions, and in each direction, we soft-route the projections on these subspaces by group. Particularly, this domain attention module can be used as a plug-and-play module for existing pulmonary nodule detection networks (such as NoduleNet [13] and SANet [6]). Taking the widely used NoduleNet [13] network as an example, this module is added in each 3D residual block [14].
We summarize our main contributions as follows:
* We propose a slice grouped domain attention (SGDA) module, a plug-and-play tool, for existing pulmonary nodule detection networks to enhance their generalization abilities. It mainly contains a universal adapter bank, a domain assignment component, and a three way cross-attention module.
* We design a new class of lightweight adapters called slice grouped squeeze-and-excitation (SGSE) adapter to compensate for domain shift.
* We introduce domain assignment to achieve domain-aware soft combination of projections on different domains.
* We develop three way cross-attention to fuse the modulated feature maps in three directions.
To verify the effectiveness of our SGDA, we perform extensive experiments on four pulmonary nodule datasets. Experimental results show that our SGDA outperforms several state-of-the-art multi-domain methods.
The rest of this paper is organized as follows. We summarize the related works of pulmonary nodule detection and multi-domain learning in Section 2. Details of the proposed SGDA method are presented in Section 3. Section 4 includes the experimental results and analysis, and we conclude this paper in Section 5.
## 2 Related Work
### _Pulmonary Nodule Detection_
Pulmonary nodule detection is usually regarded as an object detection task for CT images, and draws public attention due to its great clinical value. In recent years, CNN based methods have been utilized in various detection tasks [15, 16, 17]. These methods are also introduced in pulmonary nodule detection, and have achieved promising success. Many works are based on 2D CNN. For example, a deconvolutional structure is introduced in Faster RCNN for candidate detection on axial slices [18]. In [19], multi-view ConvNets is proposed for pulmonary nodule detection. The proposed architecture comprises multiple streams of 2D ConvNets, which take a set of 2D patches from differently oriented planes as input. Then the outputs are combined using a dedicated fusion method to obtain the final results. More recently, 3D CNN based methods have become the focus of many studies considering the 3D nature of CT images, such as [20, 21, 22, 2, 23, 24, 25], and [26]. Specifically, an end-to-end 3D deep CNN called NoduleNet is proposed in [13] to solve nodule detection, false positive reduction, and nodule segmentation jointly in a multi-task manner. In [6], a slice-aware network for pulmonary nodule detection termed SANet is developed, which mainly contains a slice grouped non-local module and a false positive reduction module. However, existing pulmonary nodule detectors are usually domain specific, e.g. trained and tested on the same dataset. They may not perform well on other datasets because there exists nontrivial domain shift as shown in Fig. 1. Likewise, none of these pulmonary nodule detectors could achieve reliable detection performance on diverse datasets/domains of different distributions [27, 28].
### _Multi-Domain Learning/Adaptation_
The concept of multi-domain learning (MDL) is introduced in [8], which can be regarded as a sub-category of the generic multi-task learning [29, 7, 9]. Different from some general approaches of domain adaptation [30, 31, 32], MDL aims to utilize a single model to simultaneously learn multiple diverse visual domains, known as a priori. This can be realized by packing domain-specific parameters in adapters added to the network. The parameters of the resulting network are either shared across domains or domain-specific. For example, in [7, 8], and [9], domain-specific normalization layers and domain-specific residual adapters are designed for natural image classification. In [10], a squeeze-and-excitation (SE) [33] adapter is proposed for object detection and a domain attention module is further designed for automatic domain assignment. This MDL idea also flourishes in medical imaging analysis. For example, anatomy-specific instance normalization is proposed in [34]
to learn a universal network for under-sampled MRI reconstruction. In [11] and [35], separable convolution consisting of domain-specific channel-wise convolution and shared point-wise convolution are explored for medical image segmentation and anatomical landmark detection respectively. In [12], MDL is combined with missing annotation mining to develop a universal lesion detection network for various lesion detection tasks.
Unfortunately, these approaches either do not take full advantage of the 3D nature of medical images, or can not perform automatic inference without prior knowledge of the domains [36, 37, 38]. These drawbacks can be remedied by the proposed slice grouped domain attention (SGDA) module, which takes the character of pulmonary nodule detection into consideration, and performs data-driven domain assignments of network activations from the axial, coronal, and sagittal directions.
## 3 Universal Pulmonary Nodule Detection
In clinical applications, multi-center trials are often required, which involve various datasets/domains. Hence, we aim to design a universal pulmonary nodule detector capable of operating over multiple pulmonary nodule datasets. For a vanilla pulmonary nodule detection task, the goal is to train a detector using one given pulmonary nodule dataset. In this work, we consider a more realistic and complex task, training a universal pulmonary nodule detector on multiple datasets, between which there exist nontrivial domain shifts as shown in Fig. 1. In other words, the detector needs to be capable of detecting nodules from CT scans no matter which medical center they are collected from, acting like experienced doctors who can diagnose nodules without being affected by domain shift between medical centers.
In order to address the issue that existing pulmonary nodule detectors have little flexibility in dealing with the domain variations in Fig. 1, we propose a slice grouped domain attention (SGDA) module, as illustrated in Fig. 2(b). Particularly, given the 3D feature map of a CT image, we first split the map into slice groups in the axial, coronal, and sagittal directions, so as to explore the inter-dependencies among channels for each group in different directions. Then for each direction, the channel responses of different groups are modulated using the domain attention mechanism, where a new class of light adapters termed slice grouped squeeze-and-excitation (SGSE) adapters are incorporated. This can be seen as a feature based attention. These adapters form as an universal adapter bank, which captures the feature subspaces of the domains spanned by all pulmonary nodule datasets and grouped from the axial, coronal, and sagittal directions. To further achieve automatic inference of domains, a domain assignment component is added to soft-route the universal adapter bank projections by group in three directions. It should be noted that both the projections on domains and domain assignment are data-adaptive and not bind to specific tasks/datasets. Afterwards, the modulated groups are stacked and the resulting feature maps in the three directions are fused by a three way cross attention module (instead of simple summation). Finally, we incorporate the SGDA module into the traditional network for domain shift compensation, as illustrated in Fig. 2(a). More details about the different components of our SGDA are given as follows.
### _Slice Grouped SE Adapter_
We first begin by designing an extra light-weight adapter to compensate for domain shift. As demonstrated in [10], the squeeze-and-excitation module [33] can be seen as a feature-based attention mechanism for dealing with domain shift due to its channel-wise rescaling ability. Nonetheless, it is designed for 2D object detection, and directly applying it to pulmonary nodule detection in 3D CT scans may not be optimal [39]. This is because that simply converting the 2D pooling operation of it to 3D will lead to severe loss of information. To solve this problem, we consider that, in the thoracic CT images, vessels and bronchus have the shape of continuous pipe, whereas nodules are usually isolated and spherical; thus in order to distinguish nodules from other tissues, doctors only need to view several consecutive slices to capture the relevance among them. Inspired by the diagnosis way of doctors, we propose a slice grouped squeeze-and-excitation (SGSE) adapter based on the squeeze-and-excitation module in [33]. The SGSE mimics doctors to learn explicit channel interdependencies across consecutive slices in three different directions to modulate channel responses.
As illustrated in Fig. 3, the SGSE adapter consists of the sequences of operations across several adjacent slices in three directions, and the output feature is the mean of three directional output features. Let \(\mathbf{X}\in\mathbb{R}^{C\times D\times H\times W}\) denote the input feature map for the SGSE adapter, where \(D\), \(H\), \(W\), and \(C\) represent depth, height, width, and the number of channels, respectively. We split the input feature map \(\mathbf{X}\) into \(G\) groups along the axial, coronal, and sagittal axis respectively, to obtain the slice grouped volumes in each direction: \(\mathbf{X}_{a}(i)\in\mathbb{R}^{C\times D^{\prime}\times H\times W}\), \(\mathbf{X}_{c}(i)\in\mathbb{R}^{C\times D\times H^{\prime}\times W}\), \(\mathbf{X}_{s}(i)\in\mathbb{R}^{C\times D\times H\times W^{\prime}}\), where \(i=1,...,G\), \(D^{\prime}=D/G\), \(H^{\prime}=H/G\), and \(W^{\prime}=W/G\). Each group is executed independently as following Eq. (1) to compute \(\mathbf{Y}_{a}(i)\), \(\mathbf{Y}_{c}(i)\), or \(\mathbf{Y}_{s}(i)\) for channel response modulation, which is further applied to the group as Eq. (2) to compute \(\mathbf{\widetilde{X}}_{a}(i)\), \(\mathbf{\widetilde{X}}_{c}(i)\), or \(\mathbf{\widetilde{X}}_{s}(i)\); then the results in the same direction are concatenated to obtain the directional output feature \(\mathbf{\widetilde{X}}_{a}\), \(\mathbf{\widetilde{X}}_{c}\), or \(\mathbf{\widetilde{X}}_{s}\):
\[\mathbf{Y}_{(\varphi)}(i) =\mathbf{F}_{SE}(\mathbf{F}_{avg}(\mathbf{X}_{(\varphi)}(i)), \mathbf{W}_{(\varphi)1},\mathbf{W}_{(\varphi)2})\] \[=\mathbf{W}_{(\varphi)2}\delta(\mathbf{W}_{(\varphi)1}\mathbf{F} _{avg}(\mathbf{X}_{(\varphi)}(i))), \tag{1}\] \[\mathbf{\widetilde{X}}_{(\varphi)}(i) =\mathbf{F}_{scale}(\mathbf{X}_{(\varphi)}(i),\sigma(\mathbf{Y}_{ (\varphi)}(i))),\] (2) \[\mathbf{\widetilde{X}}_{(\varphi)} =[\mathbf{\widetilde{X}}_{(\varphi)}(1),...,\mathbf{\widetilde{X }}_{(\varphi)}(G)]\in\mathbb{R}^{C\times D\times H\times W}, \tag{3}\]
where \(\mathbf{F}_{avg}\) is the 3D average pooling operation, and \(\delta\) refers to the ReLU function. \(\mathbf{W}_{(\varphi)1}\in\mathbb{R}^{\frac{C}{\delta}\times C}\) and \(\mathbf{W}_{(\varphi)2}\in\mathbb{R}^{C\times\frac{C}{\delta}}\) are FC layers; \(r\) denotes the channel dimension reduction factor. \(\mathbf{W}_{(\varphi)1}\) and \(\mathbf{W}_{(\varphi)2}\) are shared by different groups \(\mathbf{X}_{(\varphi)}(i)\) in the same direction, and are distinct for different directions. \(\varphi\) represents axial, coronal, or sagittal axis. \(\sigma\) refers to the sigmoid function, and \(\mathbf{F}_{scale}\) implements a channel-wise multiplication. Finally, the output feature \(\mathbf{\widetilde{X}}\) is
\[\mathbf{\widetilde{X}}_{SGSE}=(\mathbf{\widetilde{X}}_{a}+\mathbf{\widetilde{X }}_{c}+\mathbf{\widetilde{X}}_{s})/3. \tag{4}\]
We note that the 3D version of squeeze-and-excitation module [33] is a special case of our SGSE adapter, when we set the number of groups to be one.
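A minimal PyTorch sketch of Eqs. (1)-(4) is given below; the default group count, the reduction factor \(r\), and the assumption that each spatial dimension is divisible by \(G\) are implementation choices for illustration rather than prescriptions of the method.

```python
import torch
import torch.nn as nn

class SGSEAdapter(nn.Module):
    """Minimal sketch of the SGSE adapter of Eqs. (1)-(4).

    Assumes D, H, W are divisible by the number of groups G; the reduction
    factor r and the default G are illustrative choices.
    """
    def __init__(self, channels, groups=4, r=16):
        super().__init__()
        self.G = groups
        # One shared FC pair per direction (axial, coronal, sagittal), Eq. (1)
        self.fc = nn.ModuleList([
            nn.Sequential(nn.Linear(channels, channels // r),
                          nn.ReLU(inplace=True),
                          nn.Linear(channels // r, channels))
            for _ in range(3)
        ])

    def _direction(self, x, dim, fc):
        # Split (B, C, D, H, W) into G slice groups along `dim` and apply SE per group
        out = []
        for g in torch.chunk(x, self.G, dim=dim):
            s = g.mean(dim=(2, 3, 4))                             # squeeze (3D average pool)
            y = torch.sigmoid(fc(s))                              # excitation, Eq. (1)
            out.append(g * y.view(y.size(0), y.size(1), 1, 1, 1)) # channel rescaling, Eq. (2)
        return torch.cat(out, dim=dim)                            # concatenate groups, Eq. (3)

    def forward(self, x):
        x_a = self._direction(x, 2, self.fc[0])   # axial groups
        x_c = self._direction(x, 3, self.fc[1])   # coronal groups
        x_s = self._direction(x, 4, self.fc[2])   # sagittal groups
        return (x_a + x_c + x_s) / 3.0            # Eq. (4)

# Example: a small 3D feature map with 32 channels
feat = torch.randn(2, 32, 32, 32, 32)
print(SGSEAdapter(32)(feat).shape)                # torch.Size([2, 32, 32, 32, 32])
```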
### _Slice Grouped Domain Attention_
Deep learning has advanced the state-of-the-arts for pulmonary nodule detection; however, existing detectors are usually customized and trained for a certain task/dataset. On the one hand, these detectors typically suffer from significant performance degradation on unseen datasets that have different distributions from the observed data. On the other hand, these detectors lack the capability of operating over multiple domains, which is actually a challenging topic in computer vision. We aim to design a universal pulmonary nodule detector robust for multiple datasets. To the best of our knowledge, this is the first time to learn a universal network for multi-center datasets in the field of pulmonary nodule detection.
In order to allow adaptation to different domains within a universal network, we introduce the abovementioned SGSE adapters as domain-specific layers, and the remainder of the network is shared across domains, as is commonly done in multi-domain learning. In addition, instead of using hard attention mechanism the same as multi-domain learning to force the network to fully attend to a single domain, inspired by [10], we adopt the domain assignment mechanism of Fig. 2 (c). The overall flowchart of our proposed SGDA is exhibited in Fig. 2 (b), which consists of three branches of a universal SGSE adapter bank and its corresponding domain assignment mechanism. In this way, the domain can be inferred automatically from three directions, and more importantly, any of the tasks can be solved in any of the domains without prior knowledge of the tasks/datasets, since it is not necessary to limit domains according to the tasks/datasets. For example, the widely used nodule dataset LUNA16 [3] is co-created by several academic centers and medical imaging companies, which can have many sub-domains, e.g. due to CT devices (GE Medical vs. Siemens), annotation habits of doctors, the severity of nodules (malignancy vs. benignity), radiation dose, etc. Another example is that, one sub-domain of LUNA16 [3] may follow similar distribution as one sub-domain of tianchi [4]; thus these two sub-domains can be merged into one. Actually, the domains may not even have clear semantics, and they can be data-driven. Thus, the soft domain assignment mechanism proposed makes more sense.
#### 3.2.1 Universal SGSE Adapter Bank
A universal pulmonary nodule detector should involve multi-domain information, thus being able to adapt to different domains. To realize this, we construct the universal SGSE adapter bank.
Fig. 3: Proposed Slice Grouped Squeeze-and-Excitation (SGSE) Adapter. It works in axial, coronal, and sagittal directions to project the input feature map along a subspace matched to the statistics of a particular domain by group, and then sums the three directional projections.
Fig. 2: (a) Overall flowchart of the universal pulmonary nodule detection network with the proposed Slice Grouped Domain Attention (SGDA) module being plugged in some residual blocks. The backbone of the network is shared across all the datasets, whereas there are multiple detection heads, one for each dataset. (b) The proposed SGDA module. It works in axial, coronal, and sagittal directions to remodulate the input feature map by group, and then fuses the three directional modulated feature maps. (c) Domain attention in one direction consists of a universal adapter bank and a domain assignment component.
The universal SGSE adapter bank of Fig. 2(c) is a universal module composed of several single SGSE adapters, which are then integrated by direction as shown in Fig. 4; each SGSE adapter corresponds to one domain. The universality of the universal SGSE adapter bank is implemented by concatenating one group's outputs of certain directional branch of the individual SGSE adapters to form a universal representation space for this group in the certain direction
\[\mathbf{Y}_{(\varphi)}(i)^{Uni}=[\mathbf{Y}_{(\varphi)}(i)^{1},\mathbf{Y}_{( \varphi)}(i)^{2},...,\mathbf{Y}_{(\varphi)}(i)^{N}]\in\mathbb{R}^{C\times N}, \tag{5}\]
where \(N\) is the number of adapters, and \(\mathbf{Y}_{(\varphi)}(i)^{j}\) is the output of the \(j\)-th adapter of the \(i\)-th group in the \(\varphi\) direction, given by Eq. (1). Note that \(N\) is a hyperparameter, and there is no need to make it identical to the number of tasks/datasets. Each SGSE adapter in the bank (non-linearly) projects the input by group along three directional data-driven subspaces of domains. The combination of these data-driven feature subspaces, which are later trained on all pulmonary nodule datasets, is thus able to cover the feature subspaces of the domains spanned by all the datasets. Then the concatenation of the \(N\) projections in certain direction of a group is fed in the corresponding directional domain assignment component to reweight each of these projections for combination also in a data-driven way.
#### 3.2.2 Domain Assignment
The domain assignment component shown in Fig. 2(c) correspondingly also works in three directions to produce a domain-sensitive set of weights for each group to combine the directional SGSE adapter projections. Following [10], each directional domain assignment component first applies a 3D global average pooling to the \(i\)-th input group in this direction to remove spatial dimensions, and then a softmax layer (linear layer plus softmax function)
\[\mathbf{Y}_{(\varphi)}(i)^{DA} =\mathbf{F}_{DA}(\mathbf{X}_{(\varphi)}(i))\] \[=softmax(\mathbf{W}_{(\varphi),DA}\mathbf{F}_{avg}(\mathbf{X}_{( \varphi)}(i))), \tag{6}\]
where \(\mathbf{W}_{(\varphi),DA}\in\mathbb{R}^{N\times C}\) is FC layer. It varies for different directions, while keeps the same for groups in the same direction. Then the vector \(\mathbf{Y}_{(\varphi)}(i)^{DA}\) is used to combine the outputs of the corresponding directional universal SGSE adapter bank after the \(i\)-th input group entered, to obtain a vector of domain adaptive responses for this group
\[\mathbf{Y}_{(\varphi)}(i)=\mathbf{Y}_{(\varphi)}(i)^{Uni}\mathbf{Y}_{( \varphi)}(i)^{DA}. \tag{7}\]
Subsequently, as Eq. (2) in the SGSE adapter, \(\mathbf{Y}_{(\varphi)}(i)\) is used to modulate channel responses of the \(i\)-th input group in this direction, e.g. the \(\varphi\) direction. Finally, the modulated groups in the same direction are concatenated as Eq. (3) to obtain the directional output feature map \(\mathbf{\widetilde{X}}_{a}\), \(\mathbf{\widetilde{X}}_{c}\), or \(\mathbf{\widetilde{X}}_{s}\). Compared with the outputs of the SGSE adapter, the directional output feature maps here involve a wealth of multi-domain information; simple summation of them seems to be a bad choice. Therefore, we propose a three way cross attention to fuse the output feature maps in three directions.
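The following sketch illustrates Eqs. (5)-(7) for a single direction and a single slice group: a bank of \(N\) SGSE-style excitation branches is combined with the softmax weights produced by the domain assignment layer. The bank size \(N\) and the reduction factor are illustrative choices.

```python
import torch
import torch.nn as nn

class DomainAttention1D(nn.Module):
    """Sketch of Eqs. (5)-(7) for one direction and one slice group.

    `n_domains` (the bank size N) and the reduction factor r are illustrative.
    """
    def __init__(self, channels, n_domains=3, r=16):
        super().__init__()
        # Universal adapter bank: N per-domain excitation branches, Eq. (5)
        self.bank = nn.ModuleList([
            nn.Sequential(nn.Linear(channels, channels // r),
                          nn.ReLU(inplace=True),
                          nn.Linear(channels // r, channels))
            for _ in range(n_domains)
        ])
        # Domain assignment: pooled features -> softmax weights over domains, Eq. (6)
        self.assign = nn.Linear(channels, n_domains)

    def forward(self, group):                     # group: (B, C, d, h, w)
        s = group.mean(dim=(2, 3, 4))             # (B, C) global average pooling
        bank_out = torch.stack([b(s) for b in self.bank], dim=-1)   # (B, C, N)
        weights = torch.softmax(self.assign(s), dim=-1)             # (B, N)
        y = torch.einsum('bcn,bn->bc', bank_out, weights)           # Eq. (7)
        scale = torch.sigmoid(y).view(y.size(0), y.size(1), 1, 1, 1)
        return group * scale                      # modulate the group as in Eq. (2)

g = torch.randn(2, 32, 8, 32, 32)
print(DomainAttention1D(32)(g).shape)
```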
#### 3.2.3 Three Way Cross Attention
As mentioned above, rather than merely summing the directional modulated feature maps, we further design a three way cross attention module as shown in Fig. 5 for their fusion. We first compute the similarity of two feature maps \(\mathbf{\widetilde{X}}_{a}\) and \(\mathbf{\widetilde{X}}_{c}\) using embedded dot product, and then apply a softmax function to obtain the weights on the other feature map \(\mathbf{\widetilde{X}}_{s}\) after it gets embedded. Afterwards, the output is embedded for addition to the summation of \(\mathbf{\widetilde{X}}_{a}\), \(\mathbf{\widetilde{X}}_{c}\), and \(\mathbf{\widetilde{X}}_{s}\):
\[\mathbf{\widetilde{X}} =(\mathbf{\widetilde{X}}_{a}+\mathbf{\widetilde{X}}_{c}+\mathbf{ \widetilde{X}}_{s})/3+\mathbf{W}_{CA}\mathbf{Y}_{CA}, \tag{8}\] \[\mathbf{Y}_{CA} =softmax(\mathbf{\widetilde{X}}_{a}^{T}\mathbf{W}_{\theta}^{T} \mathbf{W}_{\phi}\mathbf{\widetilde{X}}_{c})\mathbf{\widetilde{X}}_{s}^{T} \mathbf{W}_{g}^{T}, \tag{9}\]
where \(\mathbf{W}_{CA}\), \(\mathbf{W}_{\theta}\), \(\mathbf{W}_{\phi}\), and \(\mathbf{W}_{g}\) are \(1\times 1\times 1\) convolutional layers. In order to reduce the computation, the number of channels represented by \(\mathbf{W}_{\theta}\), \(\mathbf{W}_{\phi}\), and \(\mathbf{W}_{g}\) are set to be half of those in \(\mathbf{\widetilde{X}}_{(\varphi)}\). \(\mathbf{W}_{CA}\) later restores the number of channels for matrix addition. We also add max pooling layers after \(\mathbf{W}_{\phi}\) and \(\mathbf{W}_{g}\) as Fig. 5.
Fig. 4: Universal SGSE adapter Bank in one direction. It projects the input group along \(N\) subspaces of domains in this direction, and then concatenates these projections.
Fig. 5: Three Way Cross Attention. Feature maps in two of the three directions are computed to obtain the weights on the third one. The shapes of the feature maps after certain operations are shown.
Due to the large computational cost of the pulmonary nodule detection task, although we already use some tricks above, our proposed three way cross attention module may still not fit in some 3D pulmonary nodule detection networks. Therefore, a grouping trick as in [40, 41, 42, 43, 44] can be used to further reduce computation. We divide the three embedded directional feature maps (after pooling) along the depth dimension into \(G\) groups, each of which contains \(D^{\prime}=D/G\) or \(d^{\prime}=d/G\) depths of its corresponding feature map, and then employ the three way cross attention in every three matching groups. Eq. (9) is then modified as
\[\mathbf{Y}_{CA}(i)=softmax((\mathbf{\widetilde{X}}_{a}^{T}\mathbf{W}_{\theta}^{T} )(i)(\mathbf{W}_{\phi}\mathbf{\widetilde{X}}_{c})(i))(\mathbf{\widetilde{X}}_{s }^{T}\mathbf{W}_{g}^{T})(i). \tag{10}\]
The outputs of these groups are concatenated in the axial direction/along the depth dimension for addition to the summation of the three original directional feature maps.
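A compact PyTorch sketch of Eqs. (8)-(9) is shown below (without the grouping trick of Eq. (10)); flattening the spatial dimensions into a single token axis and the particular pooling stride are implementation assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThreeWayCrossAttention(nn.Module):
    """Sketch of Eqs. (8)-(9), without the depth-grouping trick of Eq. (10).

    The halved embedding width and stride-2 max pooling are assumptions.
    """
    def __init__(self, channels):
        super().__init__()
        c = channels // 2
        self.theta = nn.Conv3d(channels, c, 1)   # W_theta
        self.phi   = nn.Conv3d(channels, c, 1)   # W_phi
        self.g     = nn.Conv3d(channels, c, 1)   # W_g
        self.out   = nn.Conv3d(c, channels, 1)   # W_CA
        self.pool  = nn.MaxPool3d(2)

    def forward(self, x_a, x_c, x_s):            # each (B, C, D, H, W)
        B, C, D, H, W = x_a.shape
        q = self.theta(x_a).flatten(2)                       # (B, c, DHW)
        k = self.pool(self.phi(x_c)).flatten(2)              # (B, c, DHW/8)
        v = self.pool(self.g(x_s)).flatten(2)                # (B, c, DHW/8)
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)   # similarity of x_a, x_c
        y = torch.bmm(attn, v.transpose(1, 2))               # weights applied to embedded x_s
        y = y.transpose(1, 2).reshape(B, C // 2, D, H, W)
        return (x_a + x_c + x_s) / 3.0 + self.out(y)          # Eq. (8)

xa = xc = xs = torch.randn(1, 16, 8, 8, 8)
print(ThreeWayCrossAttention(16)(xa, xc, xs).shape)
```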
All in all, the feature subspaces of the domains spanned by all pulmonary nodule datasets are captured from the axial, coronal, and sagittal directions by the universal SGSE adapter bank of the domain attention module; then the domain assignment component soft-routes the combination of the output projections of the bank by group in each direction. Both operations are data-driven, and do not require prior knowledge of the domain. In this way, our proposed SGDA achieves universal pulmonary nodule detection over multiple datasets. What's more, implementing this module allows the network to leverage shared knowledge across domains, which further improves the performance of the network. Note that the output layer has to be task/dataset-specific, since different pulmonary nodule datasets may use different annotation standards.
### _Discussion_
It is noteworthy that the 3D version of [10] is a special case of our approach, when the number of groups in the SGDA module is 1. However, our approach substantially differs from [10] in the following aspects: (1) Their work focuses on universal object detection using 2D CNN networks, while we aim to realize 3D universal pulmonary nodule detection in the pure medical field. The two are fundamentally different in both network structures and tasks. (2) We propose a novel SGSE adapter/form a new SGSE adapter bank according to the characteristics of nodule detection in CT images, and also a three way cross attention to fuse the output feature maps in three directions of our SGDA. (3) They mainly explore their method's capability of working on multiple domains. Nonetheless, we not only study this, but also invest significant efforts in validating the generalization ability of the nodule detection networks after our proposed SGDA module is plugged.
## 4 Experiments
In this section, we conduct extensive experiments to investigate the effectiveness of our proposed SGDA module on multiple pulmonary nodule datasets.
### _Datasets and Evaluation_
Our experiments are mainly conducted on four pulmonary nodule datasets: LUNA16 [3], tianchi [4], russia [5], and PN9 [6]. The details of these datasets are shown in the Table I. We also present the pulmonary nodule size distribution of the four datasets in Table II. The CT scans in these datasets are collected from diverse sites with various thicknesses. We consider only those CT scans with annotations of nodule locations. The annotation files of LUNA16, tianchi, and russia are csv files containing one nodule per line. Each line holds the filename of the CT scan, the center coordinates, and the diameter of one nodule. PN9 has a similar annotation file, except for using top-left and bottom-right coordinates to denote nodule location.
We use the Free-Response Receiver Operating Characteristic (FROC), which is the official evaluation metric of the widely used pulmonary nodule dataset LUNA16, for evaluation in all cases. It is defined as the average recall rate at 0.125, 0.25, 0.5, 1, 2, 4, and 8 false positives per scan. A nodule candidate is regarded as a true positive if it is located within a distance \(R\) from the center of any annotated nodules, where \(R\) denotes the radius of the annotated nodule. Nodule candidates not located in the range of any annotated nodules are regarded as false positives. All the nodule candidates are evaluated on their corresponding dataset respectively. Through this, all detectors are evaluated on their corresponding dataset, because different tasks/datasets use different output layers. All in all, we evaluate the detector for each task/dataset via the aforementioned FROC, and the mean value of the FROCs is used to measure the universal pulmonary nodule detection performance.
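As an illustration of the evaluation protocol, the following sketch computes the FROC score from a detector's recall versus false-positives-per-scan curve; the interpolation between operating points and the toy numbers are assumptions for demonstration only.

```python
import numpy as np

def froc_score(fp_per_scan, recall, thresholds=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """FROC = average recall at the listed false-positive-per-scan operating points.

    `fp_per_scan` and `recall` describe the curve obtained by sweeping the detector's
    confidence threshold (sorted by increasing FP rate); linear interpolation between
    curve points is an implementation choice.
    """
    fp_per_scan = np.asarray(fp_per_scan, dtype=float)
    recall = np.asarray(recall, dtype=float)
    sens = np.interp(thresholds, fp_per_scan, recall)
    return sens.mean()

# Toy example curve (illustrative numbers only)
fps = [0.1, 0.2, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
rec = [0.55, 0.62, 0.71, 0.78, 0.84, 0.88, 0.91, 0.93]
print("FROC =", froc_score(fps, rec))
```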
### _Data Preprocessing_
LUNA16 [3], tianchi [4], and russia [5] are split into 7/1/2 for training, validation, and testing. There are three preprocessing steps for the raw CT data in these three datasets. First, in order to reduce irrelevant computation, we segment lung regions from each CT image using lungmask [47], and after converting the raw data from Hounsfield Units (HU) to uint8, we assign the regions outside the lung masks a padding value of 170. Specifically, the HU values are clipped into \([-1200,600]\) and transformed linearly into \([0,255]\) to obtain uint8 values.
\begin{table}
\begin{tabular}{l|c c c c c c|c|c} \hline Dataset & Year & Scans & Nodules & Class & Raw & File Size & Image Size & Spacing \\ \hline LUNA16 [3] & 2016 & 601 & 1186 & 2 & Yes & 25M-258M & \(512\times 512\times 59-512\times 512\times 733\) & \((0.86,0.86,2.50)-(0.64,0.64,0.50)\) \\ tianchi [4] & 2017 & 800 & 1244 & 2 & Yes & 26M-343M & \(512\times 512\times 114-512\times 512\times 1034\) & \((0.66,0.66,2.50)-(0.69,0.69,0.30)\) \\ russia [5] & 2018 & 364 & 1850 & 2 & Yes & 80M-491M & \(512\times 512\times 313-512\times 512\times 1636\) & \((0.62,0.62,0.80)-(0.78,0.78,0.40)\) \\ PN9 [6] & 2021 & 8796 & 40436 & 9 & No & 5.6M-73M & \(212\times 212\times 181-455\times 455\times 744\) & \((1.00,1.00,1.00)-(1.00,1.00,1.00)\) \\ \hline \end{tabular}
\end{table} TABLE I: Pulmonary nodule datasets. 'Scans' denotes the number of CT scans. 'Nodules' denotes the number of labeled nodules. 'Class' denotes the class number. 'Raw' indicates whether the dataset contains raw CT scans. 'Image Size' gives the dimensions of the CT image matrix along the x, y, and z axes. 'Spacing' gives the voxel sizes (mm) along the x, y, and z axes.
\begin{table}
\begin{tabular}{l|c c c c c|c} \hline Dataset & \(d<3\) & \(3\leq d<5\) & \(5\leq d<10\) & \(10\leq d<30\) & \(30\leq d\) & All \\ \hline LUNA16 [3] & - & 270 & 635 & 279 & 2 & 1186 \\ tianchi [4] & 1 & 213 & 596 & 423 & 11 & 1244 \\ russia [5] & 6 & 552 & 907 & 360 & 25 & 1850 \\ PN9 [6] & 9 & 4678 & 29213 & 6053 & 483 & 40436 \\ \hline \end{tabular}
\end{table} TABLE II: Pulmonary nodule size distribution of datasets. ‘\(d\)’ denotes the diameter (mm).
Second, to avoid too many unnecessary hyperparameters, we resample all the CT images into \(1\times 1\times 1\) mm spacing to keep anchors in all the detectors consistent. Third, in order to further reduce the computation, we utilize the segmented lung masks to restrict the size of the CT images. The preprocessed CT images are shown in Fig. 6. For the PN9 [6] dataset, we use the same data preprocessing as [6]. We adopt voxel coordinates in all cases, and modify the annotation coordinates according to our preprocessing steps.
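A simplified sketch of these preprocessing steps is given below; the interpolation order, the function signature, and the toy inputs are illustrative, and the lung mask is assumed to come from an external tool such as lungmask [47].

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume_hu, spacing, lung_mask, pad_value=170):
    """Sketch of the preprocessing described above.

    `volume_hu` is a (D, H, W) array in Hounsfield units, `spacing` its voxel
    size in mm, and `lung_mask` a boolean array from a lung segmentation tool;
    the interpolation order below is an implementation choice.
    """
    # 1) Clip HU to [-1200, 600] and map linearly to [0, 255] (uint8)
    vol = np.clip(volume_hu, -1200, 600)
    vol = ((vol + 1200) / 1800.0 * 255.0).astype(np.uint8)
    # 2) Pad everything outside the lung mask with the padding value
    vol = np.where(lung_mask, vol, np.uint8(pad_value))
    # 3) Resample to isotropic 1 x 1 x 1 mm spacing
    return zoom(vol, zoom=np.asarray(spacing, dtype=float), order=1)

fake_ct   = np.random.randint(-1000, 400, size=(40, 64, 64)).astype(np.int16)
fake_mask = np.ones_like(fake_ct, dtype=bool)
print(preprocess_ct(fake_ct, spacing=(2.5, 0.7, 0.7), lung_mask=fake_mask).shape)
```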
During training, it is not feasible for 3D CNN to use the entire CT images as input due to the limitation of GPU memory. Thus, small 3D patches are extracted from the CT images and individually input into the network. The size of the extracted 3D patch is \(1\times 128\times 128\times 128\) (Channel\(\times\)Depth\(\times\)Height\(\times\)Width). If a patch exceeds the range of the CT image from which it is extracted, it is padded with a value of 170. During testing, the entire CT images are input into the network without being cropped into 3D patches. In order to avoid an odd size of the entire CT images, they are padded with a value of 170 before being input into the network.
### _Small-Scale Experiments_
In our small-scale experiments, we utilize the widely used pulmonary nodule detection network NoduleNet [13] as the backbone. The small-scale experiments are done on the LUNA16 [3], tianchi [4], and russia [5] datasets, which are all split into 7/1/2 for training, validation, and testing. The multi-domain methods are trained from scratch on all datasets of interest, e.g. LUNA16 [3], tianchi [4], and russia [5], simultaneously. All inputs of a batch are from a single (randomly sampled) pulmonary nodule dataset, and in each epoch, all 3D patches of each dataset are processed only once. We use the Stochastic Gradient Descent (SGD) optimizer with a batch size of 8. The initialization learning rate is set to 0.01; the momentum and weight decay coefficients are set to 0.9 and \(1\times 10^{-4}\), respectively. The learning rate decreases to 0.001 after 200 epochs and 0.0001 after another 120 epochs. The NoduleNet has many hyperparameters, and we use the same hyperparameters as NoduleNet1 across datasets for all networks except for the number of epochs of training without RCNN. It is tuned for each network to obtain the best performance. All the small-scale experiments are implemented using PyTorch on 1 NVIDIA GeForce RTX 3090 GPU with 24GB memory.
Footnote 1: [https://github.com/uci-cbcl/NoduleNet/](https://github.com/uci-cbcl/NoduleNet/)
#### 4.3.1 Multi-center Experiments
Here we perform experiments to evaluate the proposed SGDA in dealing with multi-center pulmonary nodule detection using LUNA16 [3], tianchi [4], and russia [5]. Specifically, we compare with the following approaches:
* Models that have'single' in the name: train a model for each dataset separately;
* Models that have the prefix 'uni-': union different datasets into one dataset for training;
\begin{table}
\begin{tabular}{l|c|c|c|c|c c c|c} \hline Method & \#Adapters & \#Groups & \#Params & \#FLOPs\({}^{a}\) & LUNA16 & tianchi & russia & Avg \\ \hline single NoduleNet [13] & - & - & 16.73M\(\times\)3 & 139G & 77.71 & 68.23 & 37.19 & 61.04 \\ uniModuleNet & - & - & 39.50M & 139G & 79.88 & 68.60 & 33.35 & 60.61 \\ \hline NoduleNet+BN [7] & 3 & - & 39.51M & 139G & 79.94 & 68.12 & 36.52 & 61.52 \\ NoduleNet+series [9] & 3 & - & 40.14M & 145G & 78.44 & 70.41 & 33.39 & 60.74 \\ NoduleNet+parallel [9] & 3 & - & 40.13M & 145G & 78.57 & 70.14 & 35.61 & 61.44 \\ NoduleNet+separable [11] & 3 & - & 34.68M & 13.58G & 66.31 & 62.26 & 32.96 & 53.84 \\ NoduleNet+SNR [46] & - & - & 39.50M & 139G & 69.52 & 66.57 & 36.76 & 57.61 \\ \hline single NoduleNet+SE [10] & - & - & 16.74M\(\times\)3 & 139G & 77.78 & 68.86 & 38.06 & 61.56 \\ uniSEModuleNet [10] & - & - & 39.51M & 139G & 80.53 & 69.13 & 34.34 & 61.33 \\ NoduleNet+SE [10] & 3 & - & 39.54M & 139G & 78.89 & 72.33 & 35.89 & 62.37 \\ DANoduleNet [10] & 3 & - & 39.54M & 139G & **82.63** & 73.29 & 38.50 & 64.80 \\ \hline single NoduleNet+SGSE & - & 4 & 16.77M\(\times\)3 & 139G & 78.30 & 70.36 & **39.01** & 62.55 \\ unISGSERoduleNet & - & 4 & 39.54M & 139G & 81.12 & 71.00 & 38.42 & 63.51 \\ NoduleNet+SGSE & 3 & 4 & 39.62M & 139G & 80.93 & 70.94 & 38.30 & 63.39 \\ SGDANoduleNet & 3 & 4 & 39.82M & 147G & 81.91 & **77.13** & 37.15 & **65.39** \\ \hline \end{tabular}
\end{table} TABLE III: Comparison of our SGDA and other multi-domain methods in terms of FROC on dataset LUNA16, tianchi, and russia. Values below the names of datasets are FROCs (unit: %). All the methods utilize NoduleNet as backbone: (1) shared models with the prefix ‘uni-’, (2) independent models with the word ‘single’ in the name, (3) multi-domain methods, (4) universal models with ‘SG’ in the name (Ours).
Fig. 6: Samples of the preprocessed images in the LUNA16, tianchi, and russia. The 1st row are the raw images, the 2nd row are the segmented lung regions, and the 3rd row are the preprocessed images, respectively.
* The multi-domain methods2;
Footnote 2: We follow the code from [https://github.com/microsoft/SNR](https://github.com/microsoft/SNR) for SNR implementation.
The results in Table III show that schemes powered by our SGDA achieve the best average performance.
#### 4.3.2 Domain Generalization Experiments
Here we perform experiments to evaluate the proposed SGDA in dealing with domain generalization [48] for pulmonary nodule detection using LUNA16 [3], tianchi [4], and russia [5].
We mainly compare with the uniNoduleNet baseline, and the most competitive multi-domain counterpart DANoduleNet. The models are trained on two of the three datasets, and then the remaining dataset is treated as the target dataset for the trained network to be finetuned and tested on. To better investigate the cross-center detection performance, we finetune the trained network using a varied percentage (20%, 40%, 60%, 80%, and 100%) of training images in the target dataset, and test the finetuned network on the whole testing set. The results are shown in Table VIII, Table IX, and Table X. We also list the FROCs of NoduleNets [13], which are trained using all the training images in the corresponding target datasets. We can see that our SGDA achieves the best performance in most cases, and improves the baseline by up to approximately 6% in terms of the FROC score for domain generalization. The barely satisfactory results in Table X may be due to the limited depth information resulting from the resampling operations on LUNA16 [3] and tianchi [4], and then the advantage of grouping is diminished. Besides, results of the models trained and tested on the corresponding two source datasets are reported in Table XI, Table XII, and Table XIII. The results indicate that uniNoduleNet trained on two datasets have better overall performance than the one trained using all the three datasets, while our SGDANoduleNet trained on three datasets performs better than the one trained using two datasets. This further indicates that existing pulmonary nodule detectors are not effective in handling multiple domains, while our SGDA can well exploit the information from multi-center datasets.
Models trained on a single dataset perform well on the source dataset but suffer from performance degradation when tested on a different domain. Then we perform large-scale experiments to evaluate the proposed SGDA in dealing with cross-center pulmonary nodule detection. We take PN9 [6] as the source dataset, and LUNA16 [3], tianchi [4], and russia [5] as the target dataset in turn. Each model trained on PN9 [6] from Sec. 4.4.1 is finetuned on the latter three datasets separately using 20% of the training images, and tested on the whole testing set. The results are shown in Fig. 13, which demonstrates the effectiveness of our SGDA for improving the model generalization ability.
## 5 Conclusion
In this paper, we study the challenging problem of universal pulmonary nodule detection. We propose a slice grouped domain attention (SGDA) module, which aims to enhance the generalization capability of the pulmonary nodule detection networks. It is a _universal_ plug-and-play module, which can be incorporated into existing backbone networks, and works on multiple pulmonary nodule datasets with no requirement for prior domain knowledge. Extensive experimental results show that schemes powered by SGDA achieve the state-of-the-art performance in both multi-center and cross-center pulmonary nodule tasks. In the future, we intend to apply dynamic networks to reduce computational cost, and verify effectiveness of our method on more large-scale datasets.
## Acknowledgment
The authors would like to thank the handling associate editor and all the anonymous reviewers for their constructive comments. This research was supported in part by the National Key Research and Development Program of China under No. 2021YFC3300200, the Special Fund of Hubei Luojia Laboratory under Grant 220100014, and the National Natural Science Foundation of China (Grant No. 62276195 and 62141112).
|
2305.16818 | Trust-Aware Resilient Control and Coordination of Connected and
Automated Vehicles | We address the security of a network of Connected and Automated Vehicles
(CAVs) cooperating to navigate through a conflict area. Adversarial attacks
such as Sybil attacks can cause safety violations resulting in collisions and
traffic jams. In addition, uncooperative (but not necessarily adversarial) CAVs
can also induce similar adversarial effects on the traffic network. We propose
a decentralized resilient control and coordination scheme that mitigates the
effects of adversarial attacks and uncooperative CAVs by utilizing a trust
framework. Our trust-aware scheme can guarantee safe collision free
coordination and mitigate traffic jams. Simulation results validate the
theoretical guarantee of our proposed scheme, and demonstrate that it can
effectively mitigate adversarial effects across different traffic scenarios. | H M Sabbir Ahmad, Ehsan Sabouni, Wei Xiao, Christos G. Cassandras, Wenchao Li | 2023-05-26T10:57:51Z | http://arxiv.org/abs/2305.16818v2 | # Trust-Aware Resilient Control and Coordination
###### Abstract
We address the security of a network of Connected and Automated Vehicles (CAVs) cooperating to navigate through a conflict area. Adversarial attacks such as Sybil attacks can cause safety violations resulting in collisions and traffic jams. In addition, uncooperative (but not necessarily adversarial) CAVs can also induce similar adversarial effects on the traffic network. We propose a decentralized resilient control and coordination scheme that mitigates the effects of adversarial attacks and uncooperative CAVs by utilizing a trust framework. Our trust-aware scheme can guarantee safe collision free coordination and mitigate traffic jams. Simulation results validate the theoretical guarantee of our proposed scheme, and demonstrate that it can effectively mitigate adversarial effects across different traffic scenarios.
## I Introduction
The rise of connected and automated vehicles (CAVs) and advancements in traffic infrastructure [1] promise to offer solutions to transportation issues like accidents, congestion, energy consumption, and pollution [2, 3]. To achieve these benefits, efficient traffic management is crucial, particularly at bottleneck locations such as intersections, roundabouts, and merging roadways [4].
Thus far, two approaches, centralized [5] and decentralized [6], have been proposed for controlling and coordinating CAVs at conflict points. There has been extensive research on cybersecurity of CAVs summarized in [7, 8, 9]. The attacks can be categorized into in-vehicle network attacks and attacks on (V2V or V2X) communication networks [8]. A significant amount of research has been done from a control point of view with the aim of designing smart and efficient coordination algorithms for real-world implementation. However, security for this next generation of CAV algorithms has received virtually no attention, with only [10, 11] tackling security for merging roadways, and our previous work [12] providing an extensive study of security threats to this class of algorithms for various conflict areas.
There is literature that considers cyberattacks on connected vehicles and investigates their effects on intersections [13, 14] and freeway [15] control systems; however, the fundamental difference is that they do not consider the security of cooperative control of CAVs. One class of cooperative algorithms for autonomous vehicles whose security has been extensively studied [16, 17, 18] is Cooperative Adaptive Cruise Control (CACC).
An idea that has been extensively applied to multi-agent systems is the notion of trust/reputation [14, 19, 20]. A novel CBF-based trust metric was introduced in [21] for multi-robot systems (MRS) for providing safe control against adversarial agents; however, it cannot be directly applied to our application. The authors in [22] used a trust framework to address the security of CACC. Lastly, the authors in [23] used a trust framework based on a macroscopic model of the network to tackle Sybil attacks for traffic intersections without analyzing the fidelity of the model and commenting about the classification accuracy of their proposed method.
In this paper, we present a distributed resilient control and coordination scheme for CAVs at conflict areas that is resilient to adversarial agents and uncooperative CAVs. We use Sybil attacks to validate our proposed scheme as they can be used to achieve both adversarial objectives. Sybil attacks cannot be tackled using the existing road infrastructure, namely sensors and cameras, as these are placed sparsely in the network and their reliability degrades with age [23]. The key contributions of the paper are as follows.
1. We propose trust-aware resilient control and coordination that guarantees safe coordination against adversarial attacks and uncooperative CAVs. It is important to add that our proposed framework is agnostic to the specific implementation of the trust framework.
2. We provide resilient coordination using a _robust event-driven scheduling scheme_ that can successfully alleviate traffic holdups due to adversarial attacks and uncooperative CAVs.
3. We present simulation results that validate our proposed resilient control and coordination scheme guarantees safety; and our robust scheduling scheme besides mitigating traffic jams also improves the travel time and fuel economy of real cooperative CAVs in the presence of adversarial attacks and uncooperative CAVs.
Our proposed scheme is computationally tractable, minimally invasive, and can be readily incorporated into the existing intelligent traffic infrastructure like intersections, roundabouts, merging roadways, etc. without extensive overhaul. The paper is organized in seven sections. We present the background materials and the threat models in sections II and III respectively. In section IV, we present the trust
framework for a cooperative network of CAVs in conflict areas. Our proposed resilient control and coordination scheme is presented in section V. The results from our simulations have been included in section VI which is followed by the conclusion in section VII.
## II Background
We present resilient control and coordination approach for secure coordination of CAVs in conflict areas, _using the signal-free intersection presented in [24] as an illustrative example._ Fig. 1 shows a typical intersection with multiple lanes. The Control Zone (CZ) is the area within the outer red circle. It contains eight entries labeled from \(o_{1}\) to \(o_{8}\) and lanes labeled from \(l_{1}\) to \(l_{8}\) each of length \(L\) which is assumed to be the same here. Red dots show all the merging points (MPs) where potential collisions may occur. All the CAVs have the following possible planned trajectories when they enter the CZ: going straight, turning left from the leftmost lane, or turning right from the rightmost lane.
The vehicle dynamics for each CAV in the CZ take the following form:
\[\left[\begin{array}{c}\dot{x}_{i}(t)\\ \dot{v}_{i}(t)\end{array}\right]=\left[\begin{array}{c}v_{i}(t)\\ u_{i}(t)\end{array}\right], \tag{1}\]
where \(x_{i}(t)\) is the distance from the origin at which CAV \(i\) arrives, \(v_{i}(t)\) and \(u_{i}(t)\) denote the velocity and control input (acceleration/deceleration) of CAV \(i\), respectively. We also consider that each CAV has a vision-based perception capability defined by a radius and angle tuple denoted as \((r,\theta)\), (where \(r\in\mathbb{R}^{+},\theta\in[0,2\pi]\)) Let \(t_{i}^{0}\) and \(t_{i}^{f}\) denote the time that CAV \(i\) arrives at the origin and exits the CZ, respectively. The control is implemented in a _decentralized manner_ whereby each CAV \(i\) determines a control policy to jointly minimize the travel time and energy consumption governed by the dynamics (1). Expressing energy through \(\frac{1}{2}u_{i}^{2}(t)\) and normalizing travel time and energy, we use the weight \(\alpha\in[0,1]\) to construct a convex combination as follows:
\[J_{i}(u_{i}(t),t_{i}^{f}):=\beta(t_{i}^{f}-t_{i}^{0})+\int_{t_{i}^{0}}^{t_{i} ^{f}}\frac{1}{2}u_{i}^{2}(t)dt \tag{2}\]
where \(\beta:=\frac{\alpha\max\{u_{\max}^{2},u_{\min}^{2}\}}{2(1-\alpha)}\) is an adjustable weight to penalize travel time relative to the energy cost of CAV \(i\).
A central Roadside unit (RSU) receives the state and control information \([x_{i}(t),v_{i}(t),u_{i}(t)]^{T}\) from CAVs through vehicle-to-infrastructure (V2X) communication and stores them in a table as shown in Fig. 1. It is assumed that the coordinator knows the entry and exit lanes for each CAV upon their arrival and uses them to determine the list of MPs from the set \(\{M_{1},\ldots,M_{24}\}\) (shown in Fig. 1) in its planned trajectory. It facilitates safe coordination by providing each CAV with relevant information about other CAVs in the network, that the CAV has to yield to while traveling through the CZ. It does so by assigning each CAV a unique index based on a passing sequence policy and, tabulates and stores the information of the CAVs according to the assigned indices as shown in Fig. 1. Let \(S(t)\) be the set of CAV indices in the coordinator queue table and \(N(t)=|S(t)|\) be the total number of CAVs in the CZ at time \(t\). The default passing sequence is implemented using First In First Out (FIFO) policy which assigns \(N(t)+1\) to a newly arrived CAV, and decrements the indices of all CAV with index greater than \(i\) by 1, when CAV \(i\) exits the CZ.
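A minimal sketch of this bookkeeping is shown below; it models only the FIFO index assignment of the coordinator queue table, and the stored per-CAV record is a placeholder for the full state and merging-point information maintained by the RSU.

```python
class Coordinator:
    """Minimal sketch of the RSU queue table with the FIFO passing-sequence policy.

    Only index bookkeeping is modeled; the stored record is a placeholder for
    the state [x, v, u] and merging-point lists the RSU actually tracks.
    """
    def __init__(self):
        self.table = []                        # ordered list of CAV records

    def arrive(self, cav_id, state):
        # FIFO: a newly arrived CAV gets index N(t) + 1 (appended at the end)
        self.table.append({"id": cav_id, "state": state})
        return len(self.table)                 # assigned index (1-based)

    def exit(self, cav_id):
        # Remove the exiting CAV; all CAVs behind it implicitly shift down by one
        self.table = [rec for rec in self.table if rec["id"] != cav_id]

    def index_of(self, cav_id):
        for i, rec in enumerate(self.table, start=1):
            if rec["id"] == cav_id:
                return i
        return None

rsu = Coordinator()
rsu.arrive("cav_1", {"x": 0.0, "v": 12.0, "u": 0.0})
rsu.arrive("cav_2", {"x": 0.0, "v": 10.0, "u": 0.0})
rsu.exit("cav_1")
print(rsu.index_of("cav_2"))   # 1 after the first CAV leaves the CZ
```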
### _Constraints/Rules in the Control Zone_
This section summarizes the rules that CAVs in the CZ must follow to navigate safely through the intersection.
**Constraint 1** (Rear-End Safety Constraint): Let \(i_{p}\) denote the index of the CAV which physically immediately precedes CAV \(i\) in the CZ (if one is present). It is required that CAV \(i\) conforms to the following constraint:
\[x_{i_{p}}(t)-x_{i}(t)-\varphi v_{i}(t)-\Delta\geq 0,\ \ \forall t\in[t_{i}^{0},t_{i}^{f}] \tag{3}\]
where \(\varphi\) denotes the reaction time and \(\Delta\) is a given minimum safe distance which depends on the length of these two CAVs.
**Constraint 2** (Safe Merging Constraint): Every CAV \(i\) should leave enough room behind the CAV preceding it upon arriving at an MP, in order to avoid a lateral collision, i.e.,
\[x_{i_{m}}(t_{i}^{m})-x_{i}(t_{i}^{m})-\varphi v_{i}(t_{i}^{m})-\Delta\geq 0, \tag{4}\]
where \(i_{m}\) is the index of the CAV that may collide with CAV \(i\) at the merging points \(m\in\mathcal{M}_{i}\) where \(\mathcal{M}_{i}\subset\{M_{1},...,M_{24}\}\), \(\mathcal{M}_{i}\) is the set of MPs that CAV \(i\) passes in the CZ, and \(t_{i}^{m}\) is time of arrival of CAV \(i\) at the MP.
**Constraint 3** (Vehicle Limitations): Finally, there are constraints on the speed and acceleration for each \(i\in S(t)\):
\[v_{\min}\leq v_{i}(t)\leq v_{\max},\quad\forall t\in[t_{i}^{0},t_{i}^{f}] \tag{5}\]
\[u_{\min}\leq u_{i}(t)\leq u_{\max},\quad\forall t\in[t_{i}^{0},t_{i}^{f}] \tag{6}\]
where \(v_{\min}\geq 0\), \(v_{\max}>0\) denote the minimum and maximum speed, and \(u_{\min}<0\) and \(u_{\max}>0\) denote the minimum and maximum control input, respectively.
## III Threat model
The adversarial effects of malicious attacks were highlighted in our preliminary study [12]: they can create traffic jams across multiple roads due to the cooperative aspect of the control scheme and, in the worst case, cause accidents. These effects warrant making the control robust against such attacks.
**Definition 1**: (Safe coordination) In our context, it is defined as the ability of the coordination and control framework to guarantee the satisfaction of (3) and (4) for every CAV \(i\in S(t)\)\(\forall t\) by conforming to (5) and (6), to navigate through the CZ without any collision.
**Definition 2**: (Uncooperative vehicle) We define a CAV \(i\in S(t)\) as _uncooperative_ if its free-flow speed in the CZ is abnormally low, i.e., \(v_{i}(t)\leq v_{low}\) (where \(v_{low}\) is a speed considered abnormally low for the CZ), thus worsening traffic throughput.
**Definition 3**: (Adversarial agent) An agent is called adversarial if it has one of the following objectives: (i) prevent _safe coordination_, or (ii) _reduce traffic throughput_, by introducing _cyber-attacks_.
Note that adversarial agents introduce attacks with malicious intent, whereas uncooperative CAVs are not malicious and may be going slow due to various reasons like faults, failures, and so on.
**Assumption** 1: Adversarial agents do not collide with other CAVs, nor do they attempt to cause collisions between CAVs and themselves.
### _Sybil Attack:_
A single malicious client (which could be a CAV or an attacker near the \(\mathrm{CZ}\)) may spoof one or multiple unique identities and register them in the coordinator queue table. We assume that at any time \(t\) there are two groups of CAVs in the CZ: (i) normal CAVs and (ii) fake CAVs. Let \(S_{x}(t)\) and \(S_{s}(t)\) be the sets of indices of normal and fake CAVs in the FIFO queue of the coordinator unit. Therefore, at any time \(t\), there are \(N(t)=|S_{x}(t)|+|S_{s}(t)|\) CAVs which communicate their state and control information to the RSU. There can be one or more fake clients/CAVs in the \(\mathrm{CZ}\) at any time \(t\).
A Sybil attack is one where the \(S_{s}(t)\subset S(t)\) is a nonempty set that is located in the coordinator queue table, but unknown to the coordinator. For example, Fig. 1 presents a scenario, where there are multiple fake CAVs with indices \(S_{s}(t)=\{3,5\}\).
**Assumption** 2: There is a limit on the number of fake CAVs that an adversary can spoof during a Sybil attack due to resource and energy limitations.
## IV Trust framework
In this section, we present our trust framework, inspired by the ideas in [19, 20, 25]. We consider that the central coordinator is trustworthy, and that it monitors, computes and stores the trust of every CAV \(i\in S(t)\) in the network at every time \(t\), denoted as \(\tau_{i}(t)\in[0,1]\). The trust is determined based on identified behavioral specifications specific to the CAVs in the CZ, which are described below.
**Behavioral Specifications:**
1. **Co-observation consistency checks**: Based on the reported positions of the CAVs, for each CAV \(i\) the coordinator identifies the set \(S_{i}^{o}(t)\) of CAVs to which CAV \(i\) should be visible at time \(t\). Let \(\hat{S}_{i}^{o}(t)\) be the set of CAVs which actually report estimated states of CAV \(i\). Then the specification is \(\hat{S}_{i}^{o}(t)=S_{i}^{o}(t)\).
2. **Initial condition checks**: The reported initial states, particularly the position information of the CAVs, have to be consistent.
3. **Dynamic model checks**: The physical model similar to (1) is invariant and hence, the data communicated by each CAV has to always satisfy the underlying model.
4. **Control zone rule checks:** The rules for safe coordination and the vehicle limitations presented in II-A are invariant and mandatory for every CAV in the CZ. Hence, the specification is, every CAV \(i\in S(t)\)\(\forall t\) has to conform to all rules in II-A while in the CZ.
Fig. 1: The multi-lane intersection problem. Collisions may happen at the merging points. The table shows the order of the CAVs in the queue based on the FIFO sequencing scheme, trust-aware scheduling scheme, and lane-priority based scheduling scheme.
Let \(\mathcal{B}\) be the index set of the behavioral specifications in the order they are enumerated above. For each CAV \(i\in S(t),\ \forall t\in[t_{i}^{0},t_{i}^{f}]\), the coordinator assigns positive evidence \(r_{i,j}(t)\) and negative evidence \(p_{i,j}(t)\) for conformance and violation of every specification \(j\in\mathcal{B}\), respectively (where \(0\leq r_{i,j}(t)\leq r_{max}\), \(0\leq p_{i,j}(t)\leq p_{max}\)), which it uses to update \(\tau_{i}(t)\). We define \(R_{i}(t)\) and \(P_{i}(t)\) as the cumulative positive and negative evidence for CAV \(i\) at time \(t\), discounted by the trust of the other CAVs involved (if the check involves another CAV, as in (3) and (4), since those CAVs can themselves be untrustworthy). We also use a time discount factor \(\gamma\in(0,1)\) as in (8). In addition, we have a non-informative prior weight \(h_{i}\) as in [19, 25]. Let the set of checks involving other CAV(s) be denoted as \(\mathcal{B}_{a}\subset\mathcal{B}\). The set of other CAVs involved in check \(j\in\mathcal{B}_{a}\) when applied to CAV \(i\) is denoted as \(S_{i,j}(t)\subseteq S(t)\setminus\{i\}\). Then, the trust metric is updated as follows:
\[\tau_{i}(t)=\frac{R_{i}(t)}{R_{i}(t)+P_{i}(t)+h_{i}}\ \ \forall i\in S(t) \tag{7}\]
\[R_{i}(t)=\gamma R_{i}(t-1)+\sum_{j\in\mathcal{B}\setminus\mathcal{B}_{a}}r_{i,j}(t)+\sum_{j\in\mathcal{B}_{a}}\prod_{k\in S_{i,j}(t)}\tau_{k}(t)r_{i,j}(t)\] \[P_{i}(t)=\gamma P_{i}(t-1)+\sum_{j\in\mathcal{B}\setminus\mathcal{B}_{a}}p_{i,j}(t)+\sum_{j\in\mathcal{B}_{a}}\prod_{k\in S_{i,j}(t)}\tau_{k}(t)p_{i,j}(t)\] \[\forall i\in S(t),\ \forall t\in[t_{i}^{0},t_{i}^{f}] \tag{8}\]
Finally, we define a lower trust threshold \(\delta\in(0,1/2)\) and a higher trust threshold \(1-\delta\) for use in subsequent sections. It is important to emphasize that, in practice, the magnitude of negative evidence is significantly higher than that of positive evidence. This model of trust relationships reflects the social aspect of trust, where a single action can cause significant damage to a trust relationship, and recovery from such damage is challenging [22].
**Remark** 1: Our implementation is agnostic to the specific implementation of the trust framework and the ideas can be used for any framework provided that the trust metric can accurately encapsulate the behavioral specification of the network and distinguish between normal and anomalous behavior for every CAV in real-time.
## V Safe and Resilient Control Formulation
We adopt a decentralized _Optimal Control Problem_ (OCP) controller for the CAVs that uses Control Barrier Functions (CBFs). CBFs provide two main benefits: (i) their forward invariance property guarantees satisfaction of the constraints of the OCP, and (ii) they transform the original constraints into constraints that are linear in the control input, which makes them computationally efficient and thus attractive for real-time applications [6].
**The OCBF Controller**[6]. Firstly, Control Barrier Functions (CBFs) that ensure the constraints (3), (4), (5) and (6) are derived, subject to the vehicle dynamics in (1) by defining \(f(\mathbf{x}_{i}(t))=[v_{i}(t),0]^{T}\) and \(g(\mathbf{x}_{i}(t))=[0,1]^{T}\). Each of these constraints can be easily written in the form of \(b_{q}(\mathbf{x}(t))\geq 0\), \(q\in\{1,...,n\}\) where \(n\) stands for the number of constraints only dependent on state variables and \(\mathbf{x}(t)=[\mathbf{x}_{1}(t),\mathbf{x}_{2}(t),...,\mathbf{x}_{N(t)}(t)]\). The CBF method (details provided in [6, 26]) maps a constraint \(b_{q}(\mathbf{x}(t))\geq 0\) onto a new constraint which is _linear_ in the control input and takes the general form
\[L_{f}b_{q}(\mathbf{x}(t))+L_{g}b_{q}(\mathbf{x}(t))u_{i}(t)+\kappa_{q}(b_{q}(\mathbf{x}(t)) )\geq 0. \tag{9}\]
where \(\kappa_{q}\) is a class \(\mathcal{K}\) function.
A Control Lyapunov Function (CLF) is used for velocity tracking with \(v_{i}^{ref}(t)\) as the reference by setting \(V(\mathbf{x}_{i}(t))=(v_{i}(t)-v_{i}^{ref}(t))^{2}\), rendering the following CLF constraint:
\[L_{f}V(\mathbf{x}_{i}(t))+L_{g}V(\mathbf{x}_{i}(t))\mathbf{u}_{i}(t)+c_{3}V(\mathbf{x}_{i}(t)) \leq e_{i}(t), \tag{10}\]
where \(e_{i}(t)\) makes this a soft constraint. _Note that_ the CBFs are used to enforce hard constraints mentioned in section II-A, whereas CLFs are used to enforce soft constraints.
The OCBF problem corresponding to (2) is formulated as:
\[\min_{u_{i}(t),e_{i}(t)}J_{i}(u_{i}(t),e_{i}(t)):=\int_{t_{i}^{0}}^{t_{i}^{f}} \big{[}\tfrac{1}{2}(u_{i}(t)-u_{i}^{ref}(t))^{2}+\lambda e_{i}^{2}(t)\big{]}dt \tag{11}\]
subject to the vehicle dynamics (1), the CBF constraints (9) for all \(q\in\{1,...,n\}\), and the CLF constraint (10). In this approach, (i) \(u_{i}^{ref}\) is generated by solving the unconstrained optimal control problem in (2), and (ii) the resulting \(u_{i}^{ref}\) is optimally tracked such that all constraints, including the CBF constraints (9) for all \(q\in\{1,...,n\}\), are satisfied. We can solve this dynamic optimization problem by discretizing \([t_{i}^{0},t_{i}^{f}]\) into intervals \([t_{i}^{0},t_{i}^{0}+t_{s}],...,[t_{i}^{0}+kt_{s},t_{i}^{0}+(k+1)t_{s}],...\) of equal length \(t_{s}\) and solving (11) over each time interval by solving a QP at each time step:
\[\min_{u_{i,k},e_{i,k}}[\frac{1}{2}(u_{i,k}-u_{i}^{ref}(t_{i,k}))^{2}+\lambda e _{i,k}^{2}] \tag{12}\]
subject to the CBF constraints (9), \(\forall q=\{1,...,n\}\), CLF constraint (10) and dynamics (1), where all constraints are linear in the decision variables.
### _Resilient Control and Coordination Scheme_
We propose a resilient coordination and control scheme to mitigate the adversarial effects in terms of causing (i) collision and (ii) traffic congestion. Resilience is the ability of the framework to guarantee _safe coordination_ and _mitigate any traffic jam_ introduced by adversarial agents and uncooperative CAVs.
#### V-A1 **Resilience goal (collision avoidance)**
**Trust-based search:** The coordinator incorporates trust, in addition to the default passing sequence policy (e.g., FIFO), to identify the indices of the CAVs with which any given CAV may conflict within the CZ based on (3) and (4). Under the default passing sequence, for every CAV \(i\in S(t)\), the coordinator has to identify (i) the index of the CAV that immediately precedes CAV \(i\) physically in its lane and (ii) the index of the CAV that will immediately precede \(i\) at every \(m\in\mathcal{M}_{i}\) in the intersection. For example, in Fig. 1, \(\mathcal{M}_{6}=\{M_{11},M_{13},M_{16},M_{20}\}\), and as per FIFO sequencing, \(6_{M_{20}}=5\), since CAV 5 is the CAV that will precede it.
The trust-based search process identifies all the CAVs that will precede \(i\) until the first CAV whose trust value is greater than or equal to \(1-\delta\) and forms a set \(S_{i,m}(t)\subset S(t)\)
containing all the CAV indices identified during the search process. It follows the same search process for every MP in \(\mathcal{M}_{i}\) and also for (3). Therefore, for each CAV \(i\), the coordinator identifies \(S_{i}^{p}(t)\subset S(t)\), and \(S_{i}^{M}(t)=\bigcup_{m\in\mathcal{M}_{i}}S_{i,m}(t)\) (where \(S_{i}^{p}(t)\) is the set for (3) and \(S_{i}^{M}(t)\) correspond to the set of indices for every MP). The search process is formalized as follows:
\[S_{m}(t)=\{i_{+}\in S(t)\ |\ i_{+}<i,\ m\in\mathcal{M}_{i_{+}}\} \tag{13}\] \[k_{min}=\min\ \left\{k\in S_{m}(t)\ |\ \tau_{k}\geq 1-\delta\right\} \tag{14}\] \[\tilde{S}_{i,m}(t)=S_{m}(t,1) \tag{15}\] \[S_{i,m}(t)=\cup_{k=1}^{k_{min}}S_{m}(t,k) \tag{16}\]
where \(S_{m}(t,k)\) is the \(k\)-th element of the set \(S_{m}(t)\). The set returned by the default search process is given in (15), and the trust-based search returns the set in (16). Note that there are three possible scenarios for the search process: (i) \(k_{min}=\emptyset\), meaning there are no constraints for MP \(m\); (ii) \(\tilde{S}_{i,m}=S_{i,m}\) when \(k_{min}=S_{m}(t,1)\), meaning that the trust of the CAV immediately preceding CAV \(i\) at \(m\) is greater than or equal to \(1-\delta\); and (iii) \(k_{min}>S_{m}(t,1)\), hence \(\tilde{S}_{i,m}(t)\subset S_{i,m}(t)\), implying that the immediately preceding CAV has trust lower than \(1-\delta\). For the example in Fig. 1, notice \(4_{p}=3\). However, since \(\tau_{3}<1-\delta\), the search process will continue and return \(S_{4,p}=\{3,1\}\). Similarly, under the trust-based search scheme, \(6_{M_{20}}=\{5,4,3,2\}\) as CAVs 2, 3, 4, and 5 have trust less than \(1-\delta\).
The state and control information of the CAVs in \(S_{i}^{p}(t)\cup S_{i}^{M}(t)\) is communicated to CAV \(i\) at each \(t\), and the corresponding CBF constraints for these CAVs are incorporated into the control in (12).
**Lemma 1**: The introduction of additional constraints due to _trust-based search_, (including those due to default search process) in the control for any CAV \(i\in S(t)\) in (12) at time \(t^{\prime}\) where \(t^{\prime}\in[t_{i}^{0},t_{i}^{f}]\) does not affect the feasibility of the problem (12) at \(t^{\prime}\).
Proof: As mentioned, for any CAV \(i\in S(t)\), \(\tilde{S}_{i,m}(t)\subset S_{i,m}(t)\), where \(S_{i,m}(t)\) is the set of indices of the CAVs with respect to which \(i\) needs to stay safe at MP \(m\in\mathcal{M}_{i}\) under the trust-based search scheme. Suppose the trust-based search adds the index of a CAV \(i_{-}(<i)\) to \(S_{i,m}(t)\). We define \(i_{1}=S_{i,m}(t,1)\) and \(b_{i,i_{1}}(\boldsymbol{x}(t^{\prime}))=x_{i_{1}}(t^{\prime})-x_{i}(t^{\prime})-\varphi v_{i}(t^{\prime})-\Delta\). Similarly, \(b_{i_{1},i_{-}}(\boldsymbol{x}(t^{\prime}))\) and \(b_{i,i_{-}}(\boldsymbol{x}(t^{\prime}))\) can be defined.
Notice that \(m\in\mathcal{M}_{i_{-}}\). Also notice that \(i_{-}<i_{1}<i\), since \(i_{-}\) will cross MP \(m\) before \(i_{1}\), which will cross it before \(i\). This implies \(b_{i_{1},i_{-}}(\boldsymbol{x}(t^{\prime}))\geq 0\) and \(b_{i,i_{1}}(\boldsymbol{x}(t^{\prime}))\geq 0\). Hence \(b_{i,i_{-}}(\boldsymbol{x}(t^{\prime}))\geq b_{i_{1},i_{-}}(\boldsymbol{x}(t^{\prime}))+b_{i,i_{1}}(\boldsymbol{x}(t^{\prime}))\geq 0\), implying the constraint is initially feasible and \(i\) is safe with respect to \(i_{-}\) at \(t^{\prime}\). Hence, the addition of a new CBF constraint corresponding to \(i_{-}\), due to the _trust-based search_, to the control of CAV \(i\) (or any CAV) in (12) does not affect the feasibility at time \(t^{\prime}\).
**Remark 2**: Lemma 1 is necessary for guaranteeing the satisfaction of the CBF constraints corresponding to the CAVs returned by _trust-based search_ process \(\forall\ t\geqslant t^{\prime}\) using the forward invariant property of CBFs [6].
**Theorem 1**: Given \(0\leq r_{i,j}(t)\leq r_{max}\ \forall t,\ \forall i\in S(t),\ \forall j\in\mathcal{B}\), the introduction of trust-based search guarantees avoidance of collision by guaranteeing the satisfaction of (3) and (4) that can be caused by adversarial agents.
Proof: Suppose an adversarial CAV \(i\in S(t)\setminus\{k\}\) attempts to induce an accident with CAV \(k\) in the CZ at time \(t\) by using one of the attacks in section III. Firstly, notice that \(k\) must be greater than \(i\); otherwise it is impossible to create an accident, due to (i) each CAV staying safe with respect to all immediately preceding CAVs in its trajectory, and (ii) Assumption 1. At some time \(t>t_{i}^{0}\), CAV \(i\) has to violate its own constraint; otherwise, if \(i\) satisfies its own constraint, so will each CAV \(i_{+}\in S(t)\) (\(i_{+}=\{i_{+}\in S(t)|\ i_{+}>i,(\mathcal{M}_{i}\cap\mathcal{M}_{i_{+}})\neq\emptyset\}\)) queuing behind \(i\), and so does CAV \(k\). Upon violation of a constraint by CAV \(i\), two scenarios can occur.
Case (i) \(\tau_{i}(t)>1-\delta\): Given \(r_{i,j}(t)\leq r_{max}\),
\[\sum_{j\in\mathcal{B}\setminus\mathcal{B}_{a}}r_{i,j}(t)+\sum_{j\in\mathcal{B}_{a}}\prod_{k\in S_{i,j}(t)}\tau_{k}(t)r_{i,j}(t)\leq|\mathcal{B}|\,r_{max}\] \[\therefore R_{i}(t)\leq|\mathcal{B}|\,r_{max}+\gamma R_{i}(t-1)\leq\frac{|\mathcal{B}|\,r_{max}}{1-\gamma}\] \[\text{and }p_{i,j}(t)\geq 0\Rightarrow P_{i}(t)\geq 0\]
We need the trust \(\tau_{i}<1-\delta\) immediately in order to trigger the trust-based search. Hence we need to show that, given \(\tau_{i}(t)>1-\delta\), there exist \(p_{i}(t+1)\) and \(p_{i,j}(t+1)\) such that \(\tau_{i}(t+1)<1-\delta\), i.e., _trust-based search_ is triggered in the next iteration.
\[\tau_{i}(t+1)=\frac{R_{i}(t+1)}{R_{i}(t+1)+P_{i}(t+1)+h_{i}}<1-\delta\] \[\Rightarrow P_{i}(t+1)>\frac{R_{i}(t+1)}{1-\delta}-R_{i}(t+1)-h_{i}\] \[\Rightarrow p_{i}(t+1)+\gamma P_{i}(t)>\frac{\delta}{1-\delta} \frac{|\mathcal{B}|r_{max}}{1-\gamma}\] \[\Rightarrow p_{i}(t+1)>\frac{\delta}{1-\delta}\frac{|\mathcal{B}|r_{max} }{1-\gamma}\geq\frac{\delta}{1-\delta}\frac{|\mathcal{B}|r_{max}}{1-\gamma}- \gamma P_{i}(t)\]
Moreover, when \(\tau_{i}(t+1)<1-\delta\), CAV \(k\) will stay safe from \(i\), as well as from all other CAVs that will arrive before \(i\) and from all CAVs in \(S_{i}^{p}(t)\cup S_{i}^{M}(t)\). We set the sampling time to be on the order of milliseconds; combining this with Lemma 1 guarantees safety for CAV \(k\), thus preventing any collision.
Case (ii) \(\tau_{i}(t)<1-\delta\): The same arguments as in the preceding paragraph apply, and hence guarantee safety for CAV \(k\). A similar argument can be extended to guarantee safety for every CAV \(i_{+}\in S(t)\), which completes the proof.
#### V-A2 **Resilience goal (traffic jam avoidance)**
The goal is to avoid traffic buildup in the network caused by uncooperative CAVs or by malicious agents deliberately acting to create traffic congestion.
**Robust Scheduling:** We propose a central, _event-driven_, _robust scheduling_ scheme that implements the FIFO passing sequence for the CAVs in the CZ during normal operation; however, it reschedules the CAVs in the presence of adversarial CAVs to prevent any traffic jam. We define a rescheduling zone in the CZ of length \(L_{1}\) as shown in Fig. 1. We first present the rescheduling schemes, followed by the events resulting in CAV scheduling (rescheduling).
**Trust-aware scheduling:** Under this scheme, CAVs are indexed (sequenced) in descending order of their trust value, which is intended to encourage CAVs to act in a manner that earns them trust as quickly as possible upon arrival in the CZ. The algorithm is presented in Algorithm 1.
The problem of rescheduling (i.e., finding a passing sequence) based on the trust scores of the CAVs is formulated as an Integer Linear Program (ILP) as in (17). We define the index of the first CAV in the queue to re-sequence from as \(k_{min}=\min S_{R}(t)\) (where \(S_{R}(t)\) is defined in Algorithm 1) and \(S_{+}(k_{min})=\{k_{min},\ldots,N(t)\}\), as the set of indices of the CAVs to be rescheduled in \(S(t)\).
\[\underset{\{a_{i}\in S_{+}(k_{min})\}}{\operatorname{argmax}}\ \sum_{i=k_{min}}^{N(t)}(1-\tau_{i}(t))a_{i} \tag{17}\] \[\text{s.t.}\ \ a_{j}-a_{k}\geq\nu\ \ \forall j\in S_{+}(k_{min}),\ k\in S_{j}^{p} \tag{18}\] \[a_{j}\neq a_{k}\ \ \forall j,k\in S_{+}(k_{min}),\ j\neq k \tag{19}\] \[\nu\geq 1 \tag{20}\]
where (18) corresponds to constraint (3), \(\{a_{k_{min}},\ldots,a_{N(t)}\}\) are the new indices of the CAVs in \(S_{+}(k_{min})\). For example in Fig. 1, rescheduling moves CAV 3 (and immediately preceding CAV 4) down in the queue beneath the remaining CAVs in the CZ since \(\tau_{3}\) is the lowest of all CAVs in the Rescheduling zone.
**Lane-priority based rescheduling:** This idea is based on lane priority assignment, where lanes are prioritized by observing the number of uncooperative CAVs in each lane. However, note that the presence of slow CAVs in the trajectory of a particular CAV \(i\) (i.e., in the constraints of CAV \(i\)) can also cause it to go slower than \(v_{low}\). Hence, we identify any CAV \(i\in S(t)\) as uncooperative at time \(t\) if \(v_{i}(t)\leq v_{low}\) and \(\nexists\,i_{+}\in S_{i}^{p}(t)\cup S_{i}^{M}(t)\) s.t. \(v_{i_{+}}(t)\leq v_{low}\); we group the slow moving CAVs at time \(t\) in lane \(l\) into the set \(S_{l}^{a}(t)\), where \(l\in\{l_{1},\ldots,l_{8}\}\). Following that, we compute _the priority of any lane \(l\)_ using the following equation.
\[\zeta_{l}(t)=1-\frac{|S_{l}^{a}(t)|}{\sum_{l^{\prime}\in\{l_{1},\ldots,l_{8}\}}|S_{l^{\prime}}^{a}(t)|+c},\ \ c(\approx 0)\in\mathbb{R}^{+} \tag{21}\]
```
Input: \(\tau_{i}(t),\ \tau_{i}(t-1)\ \forall i\in S(t)\); \(\mathcal{A}\) = allowable proportion of CAVs with low trust
Output: New sequence
\(S_{R}(t)\leftarrow\emptyset\)  (set of CAVs with low trust)
for each CAV \(i\) in the rescheduling zone do
  if \(\tau_{i}(t-1)-\tau_{i}(t)\geq 0\) and \(\tau_{i}(t)\leq\delta\) then
    append \(i\) to \(S_{R}(t)\)
  end if
end for
if \(|S_{R}(t)|\geq\mathcal{A}\times N(t)\) then
  Solve (17)
end if
```
**Algorithm 1** Trust-aware rescheduling algorithm
We define \(k_{min}=\min\{k|k\in S^{a}(t)\}\) and \(S_{+}(k_{min})=\{k_{min},\ldots,N(t)\}\), where \(S^{a}(t)=\cup_{l\in\{l_{1},\ldots,l_{8}\}}S_{l}^{a}(t)\). We also define a set \(S_{+}^{r}(t)\) containing the indices of CAVs that are not physically following any slow moving CAV:
\[S_{+}^{r}(t)=\{i\in S_{+}(k_{min})\ |\ i_{p}\notin S^{a}(t)\}\]
Then, we define the following condition that triggers the re-sequencing event:
\[\frac{|S_{+}^{r}(t)|}{|S_{+}(k_{min})|}\geq\mathcal{A}_{l},\ \ \mathcal{A}_{l}\in\mathbb{R}^{+}\text{ is a preset threshold} \tag{22}\]
The re-sequencing is done by solving the following ILP that returns the new indices of the CAVs in \(S_{+}(k_{min})\)
\[\underset{\{a_{i}\in S_{+}(k_{min})\}}{\operatorname{argmax}}\ \sum_{i\in S^{a}(t)}(1-\zeta_{i}^{l}(t))a_{i}\quad\text{s.t. (18), (19) and (20)}. \tag{23}\]
where \(\zeta_{i}^{l}(t)\) is the priority associated with the lane in which CAV \(i\) is physically located at time \(t\), as computed in (21), and \(\{a_{k_{min}},\ldots,a_{N(t)}\}\) are the new indices of the CAVs in \(S_{+}(k_{min})\). For example, in Fig. 1, the velocities of CAVs 2 and 5 satisfy \(v_{2}<v_{low}\) and \(v_{5}<v_{low}\) in lanes \(l_{8}\) and \(l_{5}\), respectively. Hence, \(S_{l_{8}}^{a}=\{2,8,9\}\) and \(S_{l_{5}}^{a}=\{5,6\}\). Therefore, the priorities of \(l_{8}\) and \(l_{5}\) become \(0.4\) and \(0.6\) respectively, while all other lane priorities remain equal and high. This causes the CAVs in \(S_{l_{8}}^{a}\) to be moved to the very end of the queue, preceded by those in \(S_{l_{5}}^{a}\), as seen in the table in Fig. 1.
**Lemma 2**: The rescheduled sequence is guaranteed to be feasible if \(L-L_{1}\geq\frac{v_{\max}^{2}}{2|u_{\min}|}+\Delta\), where \(\Delta\) is defined as in (3) and (4).
Proof: The maximum velocity for any CAV \(i\) is \(v_{\max}\), and the maximum deceleration is \(|u_{\min}|\). Thus the minimum distance required to come to a full stop for any CAV \(i\) is \(\frac{v_{\max}^{2}}{2|u_{\min}|}\). Hence, to satisfy constraints (3) and (4), the minimum distance between the merging point and the end of the re-sequencing zone has to be greater than or equal to \(\frac{v_{\max}^{2}}{2|u_{\min}|}+\Delta\), which guarantees the feasibility of the rescheduled sequence.
The list of _events_ that cause scheduling (rescheduling) of CAVs in the CZ is enumerated below:
1. **Arrival event:** This corresponds to a CAV that has just arrived at the CZ; it has to be added to the coordinator queue table and assigned an index using the default sequencing scheme (FIFO).
2. **Departure event:** An exiting CAV triggers this event, after which the row corresponding to that CAV is removed from the coordinator table and the indices of all CAVs are decreased by 1.
3. **Reschedule event:** This event is triggered either when the presence of uncooperative CAVs satisfies the condition in (22), or when the presence of low-trust CAVs results in trust-aware rescheduling as in Algorithm 1.
**Remark 3**: Notice, the two robust rescheduling schemes are event-driven, and hence, can be simultaneously incorporated.
The default scheduling scheme in [24] and the presented rescheduling schemes render a unique index list for the CAVs in the CZ, based on which they cross the intersection in the order of their indices (lower index first).
## VI Simulation Results
In this section, we present the results of our proposed resilient control and coordination scheme for the threats mentioned in section III. We performed the simulations in Matlab, using ode45 to integrate the CAV dynamics. The value of \(\delta\) was set to 0.1. The positive and negative evidence magnitudes for the tests, in the order they are mentioned in section IV, are \(r_{i}(t)=[0.6,0.6,0.6,0.6]^{T}\) and \(p_{i}(t)=[1000,100,50,1]^{T}\)\(\forall i\in S(t)\) and \(\forall t\). The intersection dimensions are \(L=300\) m and \(A=30\) m\({}^{2}\); the remaining parameters are \(\varphi=1.8\) s, \(\Delta=3.78\) m, \(\beta_{1}=1\), \(u_{\max}=4.905\) m/s\({}^{2}\), \(u_{\min}=-5.886\) m/s\({}^{2}\), \(v_{\max}=108\) km/h, \(v_{\min}=0\) km/h. Finally, we also adopted a realistic energy consumption model from [24] to supplement the simple surrogate \(L_{2}\)-norm (\(u^{2}\)) model in our analysis.
### _Resilient Control and Coordination_
**Resilience control**: Fig. 2 presents results for scenarios in which a fake CAV attempts to violate the safety constraints between real CAVs with the aim of creating an accident. The plot shows the value of the safety constraints that the fake CAV attempts to violate, with and without our proposed resilient control scheme for _safe coordination_. As can be seen, both a rear-end collision and a collision at a merging point inside the intersection (which can cause traffic disruption and jams inside the intersection) are possible without the scheme; both are eliminated through our proposed safe and resilient control and coordination scheme using _trust-based search_.
**Lane-priority based rescheduling**: An extensive simulation with multiple slow CAVs was conducted to demonstrate the effectiveness and significance of our lane-priority based rescheduling scheme, with the results shown in Fig. 3. We introduced from 2 up to 8 uncooperative CAVs in the intersection, across 3 arbitrarily chosen lanes, during our simulation. As can be noticed, the cooperative nature of the algorithm can cause traffic holdups, with the average travel times of CAVs ranging from over 4 minutes (270 seconds precisely) up to around 5 minutes. However, with our proposed robust scheduling scheme based on lane priority, the average travel time was significantly reduced; the maximum average travel time was a little over 1 minute (74 seconds precisely), an improvement of over 3 minutes compared to the lowest average travel time without our rescheduling scheme.
**Trust-aware rescheduling:** Finally, we present the results for our proposed trust-aware rescheduling scheme in Fig. 4. We introduced various percentages of fake CAVs, ranging from 2% to 15%, through Sybil attacks (using the model in section III). We used the various attacker models for the fake CAVs presented in [12]. Our results demonstrate that the average travel times, energy, and fuel consumption of the real CAVs improve with the inclusion of our proposed rescheduling scheme. However, notice that the average energy eventually becomes identical, since a large proportion of spoofed CAVs causes the average travel times of the normal CAVs to increase, thus decreasing the average acceleration input (related to energy, (2)). However, the average fuel consumption is improved with our proposed rescheduling scheme. Note that, eventually, as the percentage of fake CAVs approaches 100%, the curves for all three metrics will coincide, since all CAVs are fake.
## VII Conclusion
We have presented a resilient coordination and control scheme by incorporating a trust framework that offers resilience against adversarial objectives introduced by malicious attacks and uncooperative CAVs. Based on our previous study, we identified two main adversarial objectives, namely (i) safety violation and (ii) creating traffic congestion in the network. We used Sybil attacks to validate and demonstrate the merit of our proposed scheme, which guarantees _safe coordination_ and can _mitigate traffic jams_. In addition, we demonstrated that our proposed robust scheduling scheme, mainly the lane-priority based rescheduling, can successfully mitigate the effect of uncooperative CAVs and the traffic holdups they introduce due to the cooperative coordination scheme. Finally, we have presented results from computer simulations to validate and demonstrate the effectiveness of our proposed attack-resilient control and coordination scheme for Sybil attacks and uncooperative CAVs.
Fig. 2: Comparison of the rear-end and lateral constraint values given in (3) and (4), for a real CAV with respect to another real CAV, with and without the proposed resilient control scheme.
Fig. 3: Average travel time of real CAVs under lane-priority based scheduling for various numbers of uncooperative CAVs.
|
2310.04072 | AI Regulation in Europe: From the AI Act to Future Regulatory Challenges | This chapter provides a comprehensive discussion on AI regulation in the
European Union, contrasting it with the more sectoral and self-regulatory
approach in the UK. It argues for a hybrid regulatory strategy that combines
elements from both philosophies, emphasizing the need for agility and safe
harbors to ease compliance. The paper examines the AI Act as a pioneering
legislative effort to address the multifaceted challenges posed by AI,
asserting that, while the Act is a step in the right direction, it has
shortcomings that could hinder the advancement of AI technologies. The paper
also anticipates upcoming regulatory challenges, such as the management of
toxic content, environmental concerns, and hybrid threats. It advocates for
immediate action to create protocols for regulated access to high-performance,
potentially open-source AI systems. Although the AI Act is a significant
legislative milestone, it needs additional refinement and global collaboration
for the effective governance of rapidly evolving AI technologies. | Philipp Hacker | 2023-10-06T07:52:56Z | http://arxiv.org/abs/2310.04072v1 | **AI Regulation in Europe: From the AI Act to Future Regulatory Challenges**
#### Abstract:
This chapter provides a comprehensive discussion on AI regulation in the European Union, contrasting it with the United Kingdom's more sectoral and self-regulatory approach. It argues for a hybrid regulatory strategy that combines elements from both philosophies, emphasizing the need for agility and safe harbours to ease compliance. The paper examines the EU's AI Act as a pioneering legislative effort to address the multifaceted challenges posed by AI, asserting that, while the Act is a step in the right direction, it has shortcomings that could hinder the advancement of AI technologies. The paper also anticipates upcoming regulatory challenges, such as the management of toxic content, environmental concerns, and hybrid threats. It advocates for immediate action to create protocols for regulated access to high-performance, potentially open-source AI systems. Although the EU's AI Act is a significant legislative milestone, it needs additional refinement and global collaboration for the effective governance of rapidly evolving AI technologies.
Keywords: AI Act; artificial intelligence; foundation models; product liability; sustainability; toxicity; threats
#### Contents:
* I Introduction
* II Modes of regulation: the EU versus the UK?
* III Architecture and main content of the AI Act
* IV International and economic considerations
* V Critique and policy proposals |
2308.11153 | Information Complexity of Mixed-integer Convex Optimization | We investigate the information complexity of mixed-integer convex
optimization under different types of oracles. We establish new lower bounds
for the standard first-order oracle, improving upon the previous best known
lower bound. This leaves only a lower order linear term (in the dimension) as
the gap between the lower and upper bounds. This is derived as a corollary of a
more fundamental ``transfer" result that shows how lower bounds on information
complexity of continuous convex optimization under different oracles can be
transferred to the mixed-integer setting in a black-box manner.
Further, we (to the best of our knowledge) initiate the study of, and obtain
the first set of results on, information complexity under oracles that only
reveal \emph{partial} first-order information, e.g., where one can only make a
binary query over the function value or subgradient at a given point. We give
algorithms for (mixed-integer) convex optimization that work under these less
informative oracles. We also give lower bounds showing that, for some of these
oracles, every algorithm requires more iterations to achieve a target error
compared to when complete first-order information is available. That is, these
oracles are provably less informative than full first-order oracles for the
purpose of optimization. | Amitabh Basu, Hongyi Jiang, Phillip Kerger, Marco Molinaro | 2023-08-22T03:14:11Z | http://arxiv.org/abs/2308.11153v1 | # Information Complexity of Mixed-integer Convex Optimization
###### Abstract
We investigate the information complexity of mixed-integer convex optimization under different types of oracles. We establish new lower bounds for the standard first-order oracle, improving upon the previous best known lower bound. This leaves only a lower order linear term (in the dimension) as the gap between the lower and upper bounds. This is derived as a corollary of a more fundamental "transfer" result that shows how lower bounds on information complexity of continuous convex optimization under different oracles can be transferred to the mixed-integer setting in a black-box manner.
Further, we (to the best of our knowledge) initiate the study of, and obtain the first set of results on, information complexity under oracles that only reveal _partial_ first-order information, e.g., where one can only make a binary query over the function value or subgradient at a given point. We give algorithms for (mixed-integer) convex optimization that work under these less informative oracles. We also give lower bounds showing that, for some of these oracles, every algorithm requires more iterations to achieve a target error compared to when complete first-order information is available. That is, these oracles are provably less informative than full first-order oracles for the purpose of optimization.
_Keywords: Mixed-integer optimization, convex optimization, information complexity, lower bounds_
## 1 First-order information complexity
We consider the problem class of _mixed-integer convex optimization_:
\[\inf\{f(\mathbf{x},\mathbf{y}):(\mathbf{x},\mathbf{y})\in C,(\mathbf{x}, \mathbf{y})\in\mathbb{Z}^{n}\times\mathbb{R}^{d}\}, \tag{1}\]
where \(f:\mathbb{R}^{n}\times\mathbb{R}^{d}\to\mathbb{R}\) is a convex (possibly nondifferentiable) function and \(C\subseteq\mathbb{R}^{n}\times\mathbb{R}^{d}\) is a closed, convex set. Given \(\varepsilon\geq 0\), we wish to report a point \((\mathbf{x},\mathbf{y})\in C\cap(\mathbb{Z}^{n}\times\mathbb{R}^{d})\) such that \(f(\mathbf{x},\mathbf{y})\leq f(\mathbf{x}^{\prime},\mathbf{y}^{\prime})+\varepsilon\) for all \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\in C\cap(\mathbb{Z}^{n}\times \mathbb{R}^{d})\). Such a point will be called an \(\varepsilon\)_-approximate solution_ and points in \(C\cap(\mathbb{Z}^{n}\times\mathbb{R}^{d})\) will be called _feasible solutions_. We say that \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) are the _integer-valued decision variables_ or simply the _integer variables_ of the problem, and \(\mathbf{y}_{1},\ldots,\mathbf{y}_{d}\) are called the _continuous variables_.
The notion of _information complexity_ (a.k.a. _oracle complexity_ or _analytical complexity_) goes back to foundational work by Nemirovski and Yudin [12] on convex optimization (without integer variables) and is based on the following. An algorithm for reporting an \(\varepsilon\)-approximate solution to an instance \((f,C)\) must be "given" the instance somehow. Allowing only instances
with explicit, algebraic descriptions (e.g., the case of linear programming) can be restrictive in some settings. To work with more general, nonlinear instances, the algorithm is allowed to make queries to an oracle to collect information about the instance. More formally, we have the following definition.
**Definition 1**.: _An oracle \(\mathcal{O}\) for an optimization problem class \(\mathcal{I}\) is given by a family \(\mathcal{Q}\) of possible queries along with a set \(H\) of possible answers or responses. A query \(q\in\mathcal{Q}\) is a function \(q:\mathcal{I}\to H\). We say that \(q(I)\in H\) is the answer or response to the query \(q\) for the instance \(I\in\mathcal{I}\)._
Any algorithm using such an oracle to find an \(\varepsilon\)-approximate solution for an instance makes queries about the instance in a sequence according to some strategy depending on the queries made and answers received, which we define formally as its _query strategy_.
**Definition 2**.: _A query strategy is a function \(D:(\mathcal{Q}\times H)^{*}\to\mathcal{Q}\), where \((\mathcal{Q}\times H)^{*}\) denotes the set of all finite sequences over \(\mathcal{Q}\times H\), including the empty sequence. The transcript \(\Pi(D,I)\) of a strategy \(D\) on an instance \(I=(f,C)\) is the sequence of query and response pairs \((q_{i},q_{i}(I))\), \(i=1,2,\ldots\) obtained when one applies \(D\) on \(I\), i.e., \(q_{1}=D(\emptyset)\) and \(q_{i}=D((q_{1},q_{1}(I)),\ldots,(q_{i-1},q_{i-1}(I)))\) for \(i\geq 2\)._
If different instances with no common \(\varepsilon\)-approximate solution produce the same transcript for the queries an algorithm has made, then the algorithm cannot tell them apart and will be unable to reliably report an \(\varepsilon\)-solution for those instances after those queries. The goal is to design a query strategy that can report an \(\varepsilon\)-approximate solution after making the smallest number of queries. This motivates the following definition of information complexity:
**Definition 3**.: _Given a family of instances \(\mathcal{I}\) and access to an oracle \(\mathcal{O}\), the \(\varepsilon\)-information complexity \(\operatorname{icomp}_{\varepsilon}(D,I,\mathcal{O})\) of an instance \(I\) for a query strategy \(D\), is defined as the minimum natural number \(k\) such that the set of all instances in \(\mathcal{I}\) for which \(\mathcal{O}\) returns the same responses as the instance \(I\) to the first \(k\) queries of \(D\) have a common \(\varepsilon\)-approximate solution. The \(\varepsilon\)-information complexity of the problem class \(\mathcal{I}\) with respect to the oracle \(\mathcal{O}\), is defined as_
\[\operatorname{icomp}_{\varepsilon}(\mathcal{I},\mathcal{O}):=\inf_{D}\sup_{I \in\mathcal{I}}\operatorname{icomp}_{\varepsilon}(D,I,\mathcal{O})\]
_where the infimum is taken over all query strategies._
Thus, to prove an upper bound \(u\) on \(\operatorname{icomp}_{\varepsilon}(\mathcal{I},\mathcal{O})\), it suffices to construct a query strategy that requires, in the worst case, at most \(u\) queries to narrow down to a collection of instances that all have a common \(\varepsilon\)-approximate solution. On the other hand, to establish a lower bound of \(\ell\) on \(\operatorname{icomp}_{\varepsilon}(\mathcal{I},\mathcal{O})\), one needs to show that for any query strategy \(D\), there exists a collection of instances in \(\mathcal{I}\) that give the same responses to the first \(\ell\) queries of \(D\) (on these instances), and there is no point in \(\mathbb{R}^{n}\times\mathbb{R}^{d}\) that is a common \(\varepsilon\)-approximate solution to all these instances.
While we introduce information complexity allowing for any general choice of oracle, the standard oracle that has been studied over the past several decades for convex optimization is the so-called _(full-information) first-order oracle_, which has two types of queries indexed by points in \(\mathbb{R}^{n}\times\mathbb{R}^{d}\): i) a _separation oracle_ query indexed by a point \(\mathbf{z}\in\mathbb{R}^{n+d}\) reports "YES" if \(\mathbf{z}\in C\) and otherwise reports a separating hyperplane for \(\mathbf{z}\) and \(C\), ii) a _subgradient oracle_ query indexed by a point \(\mathbf{z}\in\mathbb{R}^{n+d}\) reports \(f(\mathbf{z})\) and a subgradient for \(f\) at \(\mathbf{z}\). Tight lower and upper bounds (differing by only a small constant factor) on the number of queries required were obtained by Nemirovski and Yudin in their seminal work [12] for the case with no integer
variables; roughly speaking, the bound is \(\Theta\left(d\log\left(\frac{1}{\varepsilon}\right)\right)\). These insights were extended to the mixed-integer setting in [13, 3, 2], with the best known lower and upper bounds stated in [2].
Observe that the response to any separation/subgradient query is a vector in \(\mathbb{R}^{n+d}\). Thus, each query reveals at least \(n+d\) bits of information about the instance. A more careful accounting that measures the "amount of information" accrued would track the total number of bits of information obtained as opposed to just the total number of oracle queries made. A natural question, posed in [2], is whether the bounds from the classical analysis would change if one uses this new measure of the total number of bits, as opposed to the number of queries. The intuition, roughly, is that one should need a factor \((n+d)\log\left(\frac{1}{\varepsilon}\right)\) larger than the number of first-order queries, because one should need to probe at least \(\log\left(\frac{1}{\varepsilon}\right)\) bits in \(n+d\) coordinates to recover the full subgradient/separating hyperplane (up to desired approximations). We attempt to make some progress on this question in this paper.
The above discussion suggests that one should consider oracles that return a desired bit of a desired coordinate of the separating hyperplane vector or subgradient. However, one can imagine making other binary queries on the instance; for example, one can pick a direction and ask for the sign of the inner product of the subgradient and this direction. In fact, one can consider more general binary queries that have nothing to do with subgradients/separating hyperplanes. If one allows _all_ possible binary queries, i.e., one can use any function from the space of instances to \(\{0,1\}\) as a query, then one can simply ask for the appropriate bits of the true minimizer and in \(O((n+d)\log(1/\varepsilon))\) queries, one can get an \(\varepsilon\)-approximate solution. A matching lower bound follows from a fairly straightforward counting argument. Thus, allowing for all possible binary queries gives the same information complexity bound as the original Nemirovski-Yudin bound with subgradient queries in the \(n=0\) (no integer variables) case, but is an exponential improvement when \(n\geq 1\) (see [2] and the discussion below). What this shows is that the bounds on information complexity can be quite different under different oracles. With all possible binary queries, while each query reveals only a single bit of information, the queries themselves are a much richer class and this compensates to give the same bound in the continuous case and exponentially better bounds in the presence of integer variables. Thus, to get a better understanding of this trade-off, we restrict to queries that still extract information by only acting "locally".
### Our contributions
Oracles based on first-order information. Our first contribution is formalizing this notion of general "local" queries. While we focus on first-order information, our framework can be readily extended to consider, for example, information from higher-order derivatives.
**Definition 4**.: _An oracle using first-order information \(\mathcal{O}(\mathcal{G},\mathcal{H})\) consists of two parts:_
1. _For every_ \(\mathbf{z}\in[-R,R]^{n+d}\)_, there exist three maps_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}:\mathcal{I}_{n,d,R,p,M}\to\mathbb{R}^{ n+d}\)_,_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}:\mathcal{I}_{n,d,R,\rho,M}\to\mathbb{R}\)_, and_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}:\mathcal{I}_{n,d,R,\rho,M}\to\mathbb{R} ^{n+d}\) _such that for all_ \((f,C)\in\mathcal{I}_{n,d,R,\rho,M}\) _the following properties hold._ 1. \(C\subseteq\{\mathbf{z}^{\prime}\in\mathbb{R}^{n+d}:\langle\mathbf{g}_{\mathbf{ z}}^{\mathrm{sep}}(f,C),\mathbf{z}^{\prime}\rangle<\langle\mathbf{g}_{\mathbf{z}}^{ \mathrm{sep}}(f,C),\mathbf{z}\rangle\}\) _if_ \(\mathbf{z}\not\in C\) _and_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(f,C)=\mathbf{0}\) _if_ \(\mathbf{z}\in C\)_. In other words,_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(f,C)\) _returns a (normal vector to a) separating hyperplane if_ \(\mathbf{z}\not\in C\)_. We will assume that a nonzero response_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(f,C)\) _has norm 1, since scalings do not change the separation property._ 2. \(\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}(f,C)=f(\mathbf{z})\)_. In other words,_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}(f,C)\) _returns the function value for_ \(f\) _at_ \(\mathbf{z}\)_._ 3. \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(f,C)\in\partial f(\mathbf{z})\)_, where_ \(\partial f(\mathbf{z})\) _denotes the subdifferential (the set of all subgradients) of_ \(f\) _at_ \(\mathbf{z}\)_. In other words,_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(f,C)\) _returns a subgradient for_ \(f\) _at_ \(\mathbf{z}\)
_Such maps will be called first-order maps. A collection of first-order maps, one for every_ \(\mathbf{z}\)_, is called a first-order chart _and will be denoted by \(\mathcal{G}\)._
2. _There are three sets of functions_ \(\mathcal{H}^{\mathrm{sep}}\)_,_ \(\mathcal{H}^{\mathrm{val}}\)_, and_ \(\mathcal{H}^{\mathrm{sub}}\) _and with domains_ \(\mathbb{R}^{n+d}\)_,_ \(\mathbb{R}\) _and_ \(\mathbb{R}^{n+d}\) _respectively. We will use the notation_ \(\mathcal{H}=\mathcal{H}^{\mathrm{sep}}\cup\mathcal{H}^{\mathrm{val}}\cup \mathcal{H}^{\mathrm{sub}}\)_._ \(\mathcal{H}\) _will be called the collection of_ permissible queries of the oracle_._
An algorithm for instances of (1) using \(\mathcal{O}(\mathcal{G},\mathcal{H})\) can, at any iteration, choose a point \(\mathbf{z}\) and a function \(h\in\mathcal{H}\) and receive the response \(h(\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(\widehat{f},\widehat{C}))\), \(h(\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}(\widehat{f},\widehat{C}))\) or \(h(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(\widehat{f},\widehat{C})\), depending on whether \(h\in\mathcal{H}^{\mathrm{sep}}\), \(h\in\mathcal{H}^{\mathrm{val}}\) or \(h\in\mathcal{H}^{\mathrm{sub}}\), where \(\widehat{f}\) and \(\widehat{C}\) are the objective function and feasible region, respectively, of the unknown instance. Hence, queries to an oracle \(\mathcal{O}(\mathcal{G},\mathcal{H})\) using first-order information are indexed by \((\mathbf{z},h)\), \(\mathbf{z}\in\mathbb{R}^{n}\times\mathbb{R}^{d},h\in\mathcal{H}\). Since the goal of this paper is to provide bounds for different types of such oracles, i.e., with different permissible queries \(\mathcal{H}\), let us define some cases of interest.
**Definition 5** (Examples of oracles).:
1. _(Full-information first-order oracle) When_ \(\mathcal{H}\) _consists only of the identity functions, i.e.,_ \(h^{\mathrm{sep}}(\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(\widehat{f},\widehat{ C}))=\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(\widehat{f},\widehat{C})\)_,_ \(h^{\mathrm{val}}(\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}(\widehat{f},\widehat{ C}))=\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}(\widehat{f},\widehat{C})\) _and_ \(h^{\mathrm{sub}}(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(\widehat{f},\widehat{ C}))=\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(\widehat{f},\widehat{C})\)_, we recover a full-information first-order oracle._
2. _(Bit oracle) Let_ \(\mathcal{H}^{\mathit{bit}}\) _be the set of binary queries that return a desired bit (of a desired coordinate) of the binary representation of_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(\widehat{f},\widehat{C})\)_,_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}(\widehat{f},\widehat{C})\) _or_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(\widehat{f},\widehat{C})\)_. Let_ \(\mathcal{H}^{\mathit{bit}^{\star}}\) _be the_ shifted _bit oracle that additionally returns a desired bit of_ \(\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}(\widehat{f},\widehat{C})+u\)_, for any_ \(u\in\mathbb{R}\)_, i.e._ \(\mathcal{H}^{\mathit{bit}^{\star}}\) _allows querying a bit of the function value shifted by some number._
3. _(Inner product threshold queries) Let_ \[\mathcal{H}^{\mathit{dir}} :=\{h_{\mathbf{u},c}^{\mathrm{sep}}:h_{\mathbf{u},c}^{\mathrm{ sep}}(\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(\widehat{f},\widehat{C}))=sgn( \langle\mathbf{u},\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(\widehat{f},\widehat{ C})\rangle-c),\mathbf{u}\in\mathbb{R}^{n+d},c\in\mathbb{R}\}\] \[\cup\{h_{u,c}^{\mathrm{val}}:h_{u,c}^{\mathrm{val}}(\mathbf{g}_{ \mathbf{z}}^{\mathrm{val}}(\widehat{f},\widehat{C}))=sgn(u\cdot\mathbf{g}_{ \mathbf{z}}^{\mathrm{val}}(\widehat{f},\widehat{C})-c),u\in\mathbb{R},c\in \mathbb{R}\}\] \[\cup\{h_{\mathbf{u},c}^{\mathrm{sub}}:h_{\mathbf{u},c}^{\mathrm{ sub}}(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(\widehat{f},\widehat{C}))=sgn( \langle\mathbf{u},\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(\widehat{f},\widehat{ C})\rangle-c),\mathbf{u}\in\mathbb{R}^{n+d},c\in\mathbb{R}\},\] _where sgn denotes the sign function, be the set of binary queries that answers whether the inner product of the separating hyperplane, function value or subgradient, with a vector or a number of choice_ \(\mathbf{u}\) _or_ \(u\) _in the appropriate space, is at least some value_ \(c\) _or not. We write these as_ \(\mathcal{H}^{\mathit{dir}}\) _since these queries allow for the choice of a "direction"_ \(\mathbf{u}\)_, or a number_ \(u\) _in the function value case, as part of the query._
4. _When_ \(\mathcal{H}\) _is the set of all possible binary functions on_ \(\mathbb{R}^{n}\times\mathbb{R}^{d}\) _for the separating hyperplanes,_ \(\mathbb{R}\) _for the function values, and_ \(\mathbb{R}^{n}\times\mathbb{R}^{d}\) _for the subgradients, we will call the resulting oracle the_ general binary oracle _based on_ \(\mathcal{G}\)_._
These now give us a variety of oracles using first-order information, that clearly provide very different information for each query depending on the choice of permissible queries \(\mathcal{H}\). Note that different first-order charts will result in different oracles of each of these types that may give different answers at any point, depending on which separating hyperplane/subgradient the oracle's first-order map selects at those points for that instance.
We are now ready to state our quantitative results for lower and upper bounds on the information complexity of mixed-integer convex optimization under different oracles; see Table 1 for a summary. It is not hard to see that we need to restrict the set of possible instances \(\mathcal{I}\) in order to have meaningful (finite) information complexity \(\mathrm{icomp}_{\varepsilon}(\mathcal{I},\mathcal{O})\). We will focus on the following standard parameterization.
**Definition 6**.: _Define \(\mathcal{I}_{n,d,R,\rho,M}\) to be the set of all instances of (1) such that:_
1. \(C\) _is contained in the box_ \(\{\mathbf{z}\in\mathbb{R}^{n}\times\mathbb{R}^{d}:\|\mathbf{z}\|_{\infty}\leq R\}\)_. The case_ \(C=\{\mathbf{z}\in\mathbb{R}^{n}\times\mathbb{R}^{d}:\|\mathbf{z}\|_{\infty}\leq R\}\) _will be called_ unconstrained_._
2. _If_ \((\mathbf{x}^{\star},\mathbf{y}^{\star})\) _is an optimal solution of the instance, then there exists_ \(\hat{\mathbf{y}}\in\mathbb{R}^{d}\) _satisfying_ \(\{(\mathbf{x}^{\star},\mathbf{y}):\|\mathbf{y}-\hat{\mathbf{y}}\|_{\infty}\leq \rho\}\subseteq C\)_. In other words, there is a "strictly feasible" point_ \((\mathbf{x}^{\star},\hat{\mathbf{y}})\) _in the same fiber as the optimum_ \((\mathbf{x}^{\star},\mathbf{y}^{\star})\)_._
3. \(f\) _is Lipschitz continuous with respect to the_ \(\|\cdot\|_{\infty}\)_-norm with Lipschitz constant_ \(M\) _on_ \(\{\mathbf{x}\}\times[-R,R]^{d}\) _for all_ \(\mathbf{x}\in[-R,R]^{n}\cap\mathbb{Z}^{n}\)_. In other words, for any_ \((\mathbf{x},\mathbf{y}),(\mathbf{x},\mathbf{y}^{\prime})\in(\mathbb{Z}^{n} \times\mathbb{R}^{d})\cap[-R,R]^{n+d}\) _with_ \(\|\mathbf{y}-\mathbf{y}^{\prime}\|_{\infty}\leq R\)_,_ \(|f(\mathbf{x},\mathbf{y})-f(\mathbf{x},\mathbf{y}^{\prime})|\leq M\|\mathbf{ y}-\mathbf{y}^{\prime}\|_{\infty}\) _with the convention that_ \(\infty-\infty=0\)_._
Lower bounds.Our first result is a "transfer" theorem that will be a powerful tool for obtaining concrete mixed-integer lower bounds under different oracles. This theorem lifts lower bounds for unconstrained optimization from the continuous to the mixed-integer setting. In particular, if one has a lower bound \(\ell\) with respect to an oracle using first-order information (Definition 4) for the information complexity for some family of purely continuous instances, then one can "transfer" that lower bound to the mixed-integer case as \(\Omega(2^{n}\ell)\) with access to the "same" oracle in the \(n+d\) dimensional space. For this notion, we require the set of permissible
\begin{table}
\begin{tabular}{l l l l} \hline \hline Type of first-order oracle \(\mathcal{O}(\mathcal{G},\mathcal{H})\) & Variables & Lower bound & Upper bound \\ \hline \(\mathcal{H}\) is hereditary & Mixed & \(\Omega(2^{n}\ell)\), where \(\ell\leq\mathrm{icomp}_{\varepsilon}(\mathcal{I}_{0,d,R,\rho,M},\mathcal{O}(\mathcal{G},\mathcal{H}))\) (Theorem 7) & \\ Full-information first-order oracle & Mixed & \(\Omega\left(2^{n}d\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right)\) (Corollary 8) & \(O\left(2^{n}d(n+d)\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right)\) (Oertel [13], Basu-Oertel [3]) \\ \(\mathcal{H}^{bit},\mathcal{H}^{bit^{\star}},\mathcal{H}^{dir}\), or General Binary Queries & Mixed & \(\tilde{\Omega}\left(2^{n}\max\left\{d^{\frac{8}{7}},d\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right\}\right)\) (Theorem 9) & \(O\left(2^{n}d\left(n+d\right)^{2}\log^{2}\left(\frac{(n+d)MR}{\min\{\rho,1\}\varepsilon}\right)\right)\) (Theorem 10) \\ & Continuous & \(\tilde{\Omega}\left(\max\left\{d^{\frac{8}{7}},d\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right\}\right)\) (Theorem 9) & \(O\left(d^{2}\log^{2}\left(\frac{dMR}{\min\{\rho,1\}\varepsilon}\right)\right)\) (Theorem 11) \\ General Binary & Mixed & \(\tilde{\Omega}\left(2^{n}\max\left\{d^{\frac{8}{7}},d\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right\}\right)\) (Theorem 9) & \(O\left(\log|\mathcal{I}|+2^{n}d(n+d)\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right)\) (Corollary 13) \\ & Continuous & \(\tilde{\Omega}\left(\max\left\{d^{\frac{8}{7}},d\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right\}\right)\) (Theorem 9) & \(O\left(\log|\mathcal{I}|+d\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right)\) (Corollary 13) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of results on the information complexity of mixed-integer convex optimization for the class of instances \(\mathcal{I}_{n,d,R,\rho,M}\) that have \(n\) integer variables, \(d\) continuous variables, the feasible region lies in the box \([-R,R]^{n+d}\) and has a “\(\rho\)-deep feasible point” on the optimal fiber, and the objective function is \(M\)-Lipschitz with respect to \(\ell_{\infty}\) (see Definition 6). The table presents simplified bounds showing only the main parameters.
queries \(\mathcal{H}\) to be _hereditary_. Roughly speaking, this means that the set of queries has the same richness on a purely continuous space as on a mixed-integer space. We formally define hereditary queries in Section 2, and note that all of the types of permissible queries discussed in Definition 5 satisfy this property, except for \(\mathcal{H}^{bit}\) (the slightly enhanced \(\mathcal{H}^{bit^{*}}\) queries are hereditary).
**Theorem 7**.: _Let \(\mathcal{H}_{n,d}\) be any class of hereditary permissible queries, and assume \(\mathcal{H}_{0,d}\) contains function threshold queries \(h_{c}\) that answer \(h_{c}(\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}(\widehat{f},\widehat{C})):=sgn (\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}(\widehat{f},\widehat{C})+c)\) for any \(c\in\mathbb{R}\). Let \(\varepsilon\geq 0\). Suppose, for some \(d\geq 1\), there exists a class \(\mathcal{I}\subseteq\mathcal{I}_{0,d,R,\rho,M}\) of continuous convex (unconstrained) optimization problems in \(\mathbb{R}^{d}\), and a first order chart \(\mathcal{G}_{0}\) for \(\mathcal{I}\) such that \(\mathrm{icomp}_{\varepsilon}(\mathcal{I},\mathcal{O}(\mathcal{G}_{0}, \mathcal{H}_{0,d}))\geq\ell\). Suppose further that all instances in \(\mathcal{I}\) have the same optimal value. Then, for any number of integer variables \(n\geq 1\), there is a first order chart \(\mathcal{G}_{n}\) such that \(\mathrm{icomp}_{\varepsilon}(\mathcal{I}_{n,d,R,\rho,M},\mathcal{O}(\mathcal{ G}_{n},\mathcal{H}_{n,d}))\geq 2^{n-1}\ell\)._
As a first consequence of this transfer theorem we obtain a sharpened lower bound for the standard full-information first-order oracle case for mixed-integer problems. For this setting, Basu [2] proved the lower bound of \(\Omega\left(2^{n}\cdot d\log\left(\frac{2R}{3\rho}\right)\right)\). However, this bound is independent of the Lipschitz constant \(M\) of the objective function, and thus does not capture the hardness of the problem as \(M\) increases. By applying Theorem 7 to the classical lower bound of \(\Omega\left(d\log\left(\frac{MR}{\varepsilon}\right)\right)\) for continuous convex optimization with the standard first-order oracle by Nemirovski and Yudin [12], and combining the result with the existing mixed-integer lower bound, we obtain the following improved bound.
**Corollary 8**.: _There exists a first-order chart \(\mathcal{G}\) such that for the full-information first-order oracle based on \(\mathcal{G}\) (i.e., \(\mathcal{H}\) consists of the identity functions) we have_
\[\mathrm{icomp}_{\,\varepsilon}(\mathcal{I}_{n,d,R,\rho,M},\mathcal{O}( \mathcal{G},\mathcal{H}))=\Omega\left(2^{n}\left(1+d\log\left(\frac{MR}{\min\{ \rho,1\}\varepsilon}\right)\right)\right).\]
Moving on to "non-standard" oracles, we consider mixed-integer convex optimization under the general binary oracle. Recall from Definition 5 that this means that the algorithm can make any binary query on subgradients/separating hyperplanes. Despite the power of these queries, we prove a separation between the information complexity under the standard full-information first-order oracle and the general binary oracle, i.e., the latter provides quantitatively less information for solving the problem. For example, in the pure continuous setting, \(O(d)\) queries suffice (ignoring the logarithmic dependence on other parameters) under the full-information first-order oracle. However, we show that \(\Omega(d^{8/7})\) queries are needed under the general binary oracle. More precisely, we show the following lower bound.
**Theorem 9**.: _For every \(n\geq 0\), there exists a first-order chart \(\mathcal{G}\) such that for the general binary oracle based on \(\mathcal{G}\) we have_
\[\mathrm{icomp}_{\,\varepsilon}(\mathcal{I}_{n,d,R,\rho,M},\mathcal{O}(\mathcal{G},\mathcal{H}))=\tilde{\Omega}\left(2^{n}\left(1+\max\left\{d^{\frac{8}{7}},d\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right\}\right)\right),\]
_where \(\tilde{\Omega}\) hides polylogarithmic factors in \(d\)._
We note that, since \(\mathcal{H}^{bit}\), \(\mathcal{H}^{bit^{*}}\) and \(\mathcal{H}^{dir}\) are more restrictive than the general binary oracle, this lower bound applies to oracles with those permissible queries as well. The proof of this result relies on a connection between information complexity and _memory constrained_ algorithms for convex optimization, and the recent lower bound for the latter from [11] (in addition to Theorem 7 for lifting the result to the mixed-integer case).
Upper bounds. We now present upper bound results that illustrate the connection between information complexity based on full-information first-order oracles and information complexity based on binary queries on separating hyperplanes and subgradients. We first formalize the intuition that by making roughly \(O\left((n+d)\log\left(\frac{1}{\varepsilon}\right)\right)\) bit or inner product sign queries on a separating hyperplane or subgradient, one should have enough information to solve the problem as with full information (Theorems 10 and 11). Next, in Theorem 12 and Corollary 13, we show how this natural bound can be improved in certain settings.
**Theorem 10**.: _Assume \(d\geq 1\). For \(U>0\), consider the subclass of instances of \(\mathcal{I}_{n,d,R,\rho,M}\) whose objective function values lie in \([-U,U]\), and the fiber over the optimal solution contains a \(\mathbf{z}\) such that the \((n+d)\)-dim \(\rho\)-radius \(\ell_{\infty}\) ball centered at \(\mathbf{z}\) is contained in \(C\). There exists a query strategy for this subclass that reports an \(\varepsilon\)-approximate solution by making at most_
\[O\left(2^{n}d\left(n+d\right)\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon} \right)\right)\cdot\left((n+d)\log\left(\frac{(n+d)MR}{\rho\varepsilon}\right) +\log\frac{U}{\varepsilon}\right)\]
_queries to an oracle \(\mathcal{O}(\mathcal{G},\mathcal{H})\), where \(\mathcal{G}\) is any first-order chart and \(\mathcal{H}\) is either \(\mathcal{H}^{\mathrm{bit}}\) or \(\mathcal{H}^{\mathrm{dir}}\)._
Prescribing an _a priori_ range for objective function values is not a serious restriction for two reasons: i) The difference between the maximum and the minimum values of an objective function in \(\mathcal{I}_{n,d,R,\rho,M}\) is at most \(2MR\), and ii) All optimization problems whose objective functions differ by a constant are equivalent. We also comment that while we assume \(d\geq 1\) in Theorem 10, similar bounds can be established for the \(d=0\) (pure integer) case. We omit this here because a unified expression for the \(d=0\) and \(d\geq 1\) cases becomes unwieldy and difficult to parse.
The main idea behind Theorem 10 is to show that existing methods with the best known information complexity for mixed-integer convex optimization that use full-information first-order oracles can also work with approximate separation and subgradient oracles that return desired approximations of the true vectors (with no loss in the information complexity). Then one shows that one can produce these approximations with roughly \(O\left((n+d)\log\left(\frac{1}{\varepsilon}\right)\right)\) bit or inner product sign queries on a separating hyperplane or subgradient. With bit queries, this is just a matter of probing enough bits of each coordinate of the vector. The case with inner product sign queries is a bit more involved and our main tool is a result that shows how to approximate any vector up to desired accuracy with such queries (Lemma 30).
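To make this step concrete, here is a minimal sketch (in Python, with hypothetical names; it is not the construction behind Lemma 30): assuming threshold queries of the form \(sgn(\langle\mathbf{u},\mathbf{g}\rangle-c)\) are permissible, each coordinate of a hidden subgradient with entries bounded by \(B\) can be recovered to accuracy \(\varepsilon\) by bisecting over the threshold \(c\), using roughly \(\log_{2}(2B/\varepsilon)\) queries per coordinate, i.e., \(O(d\log(B/\varepsilon))\) in total.

```python
import numpy as np

def sign_query(g_hidden, u, c):
    """The only access to the hidden vector: the sign of <u, g> - c."""
    return np.sign(np.dot(u, g_hidden) - c)

def recover(g_hidden, d, B, eps):
    """Recover each coordinate of g_hidden (with |g_k| <= B) to accuracy eps."""
    g_approx = np.zeros(d)
    for k in range(d):
        u = np.zeros(d)
        u[k] = 1.0                       # probe only the k-th coordinate
        lo, hi = -B, B
        while hi - lo > eps:             # bisection over the threshold c
            mid = (lo + hi) / 2
            if sign_query(g_hidden, u, mid) >= 0:
                lo = mid                 # g_k >= mid
            else:
                hi = mid                 # g_k < mid
        g_approx[k] = (lo + hi) / 2
    return g_approx

g = np.array([0.7, -2.3, 1.1])
print(recover(g, d=3, B=4.0, eps=1e-6))  # approximately [0.7, -2.3, 1.1]
```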
Subsequently, using similar techniques we present an enhanced upper bound for the scenario where \(n=0\) (pure continuous case).
**Theorem 11**.: _For \(U>0\), consider the subclass of instances of \(\mathcal{I}_{n,d,R,\rho,M}\) where \(n=0\) (pure continuous case) and the objective function values lie in \([-U,U]\). There exists a query strategy for this subclass that reports an \(\varepsilon\)-approximate solution by making at most_
\[O\left(d\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right)\cdot \left(d\log\left(\frac{dMR}{\rho\varepsilon}\right)+\log\frac{U}{\varepsilon}\right)\]
_queries to an oracle \(\mathcal{O}(\mathcal{G},\mathcal{H})\), where \(\mathcal{G}\) is any first-order chart and \(\mathcal{H}\) is either \(\mathcal{H}^{\mathrm{bit}}\) or \(\mathcal{H}^{\mathrm{dir}}\)._
Finally, we provide a kind of transfer result that allows one to transfer algorithms designed for full-information first-order oracles to the (harder) setting of a general binary oracle.
**Theorem 12**.: _Suppose there exists an algorithm that reports an \(\varepsilon\)-approximate solution for instances in \(\mathcal{I}_{n,d,R,\rho,M}\) with at most \(u\) queries to the full-information first-order oracle based on a first-order chart \(\mathcal{G}\). Then, for any subclass of finitely many instances \(\mathcal{I}\subset\mathcal{I}_{n,d,R,\rho,M}\), there
exists a query strategy for this subclass using the general binary oracle based on \(\mathcal{G}\) that reports an \(\varepsilon\)-approximate solution by making at most_
\[O\big{(}\log|\mathcal{I}|+u\big{)}\]
_queries._
Using the centerpoint-based algorithm from [13, 3], we obtain the following corollary:
**Corollary 13**.: _Given any subclass of finitely many instances \(\mathcal{I}\subset\mathcal{I}_{n,d,R,\rho,M}\) and any first-order chart \(\mathcal{G}\), there exists a query strategy for this subclass using the general binary oracle based on \(\mathcal{G}\) that reports an \(\varepsilon\)-approximate solution by making at most_
\[O\left(\log|\mathcal{I}|+2^{n}\,d\,(n+d)\log\left(\frac{dMR}{\min\{\rho,1\} \varepsilon}\right)\right)\]
_queries. In the (pure continuous) case of \(n=0\), \(O\left(\log|\mathcal{I}|+d\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon} \right)\right)\) queries suffice._
In particular, when the number of instances under consideration is \(|\mathcal{I}|=O(2^{2^{n}d(n+d)})\), Corollary 13 gives a strictly better upper bound than Theorem 10. Similarly, for \(n=0\), in the case when \(|\mathcal{I}|=O(2^{d})\), we get a better upper bound compared to Theorem 11; in fact, we beat the lower bound provided by Theorem 9. This demonstrates that even with exponentially many instances under consideration, restricting to finitely many instances yields a lower information complexity than the general case of infinitely many instances. We point out that the first-order chart \(\mathcal{G}\) must be known to implement the query strategy in Theorem 12. In contrast, the algorithms in Theorems 10 and 11 are oblivious to the first-order chart, i.e., they work with any first-order chart.
### Discussion and future avenues
The concept of information complexity in continuous convex optimization and its study go back several decades, and it is considered a fundamental question in convex optimization. In comparison, much less work on information complexity has been carried out in the presence of integer constrained variables. Nevertheless, we believe there are important and challenging questions that come up in that domain that are worth studying. Further, even within the context of continuous convex optimization, the notion of information complexity has almost exclusively focused on the number of full-information first-order queries. As we hope to illustrate with the results of this paper, considering other kinds of oracles leads to very interesting questions at the intersection of mathematical optimization and information theory. In particular, the study of binary oracles promises to give a more refined understanding of the fundamental question "How much information about an optimization instance do we need to be able to solve it with provable guarantees?". For instance, establishing _any superlinear (in the dimension)_ lower bound for the continuous problem with binary oracles, like the one in Theorem 9, seems to be nontrivial. In fact, the results from [11], on which Theorem 9 is based, were considered a breakthrough in establishing superlinear lower bounds on space complexity of convex optimization. Even so, the right bound is conjectured to be quadratic in the dimension (see Theorem 11) and our Theorem 9 is far from that at this point. These other oracles also have a practical motivation. Obtaining exact first-order information may be difficult or impossible in many practical situations, and one has to work with approximations of separating hyperplanes and subgradients. The binary oracles can be viewed as providing these approximations and information complexity under these oracles becomes important from a practical standpoint.
We thus view the results of this paper as expanding our understanding of information complexity of optimization in two different dimensions: what role does the presence of integer variables play and what role does the nature of the oracle play (with or without integer variables)? For the role of integer variables, in the pure optimization case Theorem 7 provides a lifting of lower bound from the continuous case. Allowing for constraints, Corollary 8 brings the lower bound closer to the best known upper bound on information complexity based on the classical subgradient oracle. The remaining gap is now simply a factor linear in the dimension. A conjecture in convex geometry first articulated in [13, Conjecture 4.1.20] and elaborated upon in [3, 2] would resolve this and would show that the right bound is essentially equal to the lower bound we prove in this paper.
Beyond the contributions discussed above, our work also opens up new future directions for study. We believe the following additional conjectures to be good catalysts for future research, especially in regard to understanding the interplay of integer variables and other oracles.
The first conjecture is a generalization of our Theorem 7 to incorporate constraints as well. This would make this "transfer" tool more powerful, and would, for example, give Corollary 8 as a special case without appealing to [2] for the feasibility lower bound.
**Conjecture 1**.: _If there exist continuous, **constrained** convex optimization instances such that \(\ell\) is a lower bound for this family on the information complexity with respect to an oracle, then for every \(n\geq 1\), there exist mixed-integer instances with \(n\) integer variables such that the information complexity of these mixed-integer instances is lower bounded by \(\Omega(2^{n}\cdot\ell)\) for the same oracle._
Another consequence of resolving this conjecture is that if future research on the information complexity of continuous convex optimization results in better/different lower bounds based on feasibility, these would immediately imply new lower bounds for the mixed-integer case. For instance, we believe the following conjecture to be true for the mixed-integer convex optimization problem.
**Conjecture 2**.: _There exists a first-order chart \(\mathcal{G}\) such that the general binary oracle based on \(\mathcal{G}\) has information complexity \(\Omega\left(2^{n}\Big{(}1+d^{2}\log\left(\frac{MR}{\rho\varepsilon}\right) \Big{)}\right)\)._
A version of Conjecture 2 is also stated in the language of "memory-constrained" algorithms in [16, 11] for the continuous case (see Section 3 below); the way we have stated the conjecture here presents its transfer to the mixed-integer case.
Analogously, it would be nice to have "transfer" theorems for upper bounds as well. In the spirit of Theorems 10, 11 and 12, we believe a useful result would be a theorem that takes upper bound results proved in the full-information first-order oracle setting and obtains upper bound results in the general binary oracle setting. A use case of such a result would be the following: if the upper bound for the general mixed-integer problem with full-information first-order oracles is improved by resolving the convex geometry conjecture mentioned above (and we believe the lower bound is correct and the upper bound is indeed loose), then this would also give better upper bounds for the general binary oracle setting. Thus, we make the following conjecture.
**Conjecture 3**.: _If there exists a query strategy with worst case information complexity \(u(n,d,R,\rho,M,\mathcal{G})\) under the full-information first-order oracle based on a first-order chart \(\mathcal{G}\), then there exists a query strategy with worst case information complexity bounded by_
\[u(n,d,R,\rho,M,\mathcal{G})\cdot O\left((n+d)\log\left(\frac{MR}{\rho \varepsilon}\right)\right)\]
_under the general binary oracle based on \(\mathcal{G}\)._
We focus on oracles that use first-order information in this paper (Definitions 4 and 5). Oracles that use "zero-order information" have also been studied in the literature, beginning with the seminal work of Yudin and Nemirovski [12]; see [8] for an exposition of how those ideas can be used in the mixed-integer setting and [5] for an exposition in the nonconvex setting. Such oracles report function values only for the objective, with no subgradient information, and only report membership for the constraints, with no separating hyperplanes. A related oracle is the "value comparison" oracle, which has found many applications. These oracles consist of questions of the form "Is \(f(\mathbf{z})\leq f(\mathbf{z}^{\prime})\)?", with no access to the subgradients of \(f\). Algorithms based on such comparisons are particularly useful in learning from users' behaviors: while a user typically cannot accurately report their (dis)utility value \(f(\mathbf{z})\) for an option \(\mathbf{z}\), they can more reliably compare the values \(f(\mathbf{z})\) and \(f(\mathbf{z}^{\prime})\) of two options; see [10, 14] and references therein for discussions and algorithms in the continuous convex case. The mixed-integer setting under the value comparison oracle has been extensively studied in recent work [4, 6, 7, 15]. The ideas in this paper can also be adapted to give algorithms for mixed-integer convex optimization using the comparison oracle, but we do not undertake a deeper study here. There seems to be scope for future research in this direction, especially in tying together these different strands of ideas for "zero-order information".
The remainder of this paper is dedicated to the formal proofs of our main results discussed above.
## 2 Proof of Theorem 7
The high-level idea for the proof of Theorem 7 is to construct difficult mixed-integer instances by taking hard instances of the continuous case, "placing" one of them on each fiber \(\mathbf{x}\times\mathbb{R}^{d},\mathbf{x}\in\{0,1\}^{n}\), and interpolating between fibers appropriately. We do this in a way such that effectively one needs to solve the continuous problems obtained by restricting to each fiber, which yields the \(\Omega(2^{n}\ell)\) lower bound from an \(\ell\) lower bound on the continuous problems: there is one difficult function from the continuous case placed on each of the \(2^{n}\) fibers, so if one cannot do better than solving each of them separately, one ends up with an \(\Omega(2^{n}\ell)\) lower bound. To make this idea work, the interpolation needs to be done in a way that no query in the full \([0,1]^{n}\times\mathbb{R}^{d}\) space reveals information about two (or more) of the continuous functions placed on different fibers, or reveals significantly more information about a function on a fiber than a query on that fiber would. For example, we need to ensure that a single query at the point \((\frac{1}{2},\ldots,\frac{1}{2},\mathbf{y})\) for \(\mathbf{y}\in\mathbb{R}^{d}\) does not reveal information about multiple functions on different fibers.
### Game-theoretic perspective
So far we have described the information complexity of optimization using an oracle \(\mathcal{O}\) over a family of instances \(\mathcal{I}\) based on having an optimization algorithm that in each round \(t\) makes a query \(q_{t}\) to \(\mathcal{O}\) and receives as answer the result \(q_{t}(\widehat{I})\) for the unknown instance \(\hat{I}\) it is trying to optimize. However, for obtaining lower bounds on the information complexity, it is more helpful to consider the algorithm as interacting with an _adversary_ for the family of instances under \(\mathcal{O}\), instead of the unknown instance \(\hat{I}\). More precisely, at round \(t\), the adversary receives the query \(q_{t}\) of the algorithm and produces, possibly based on all the previous queries \(q_{1},\ldots,q_{t-1}\), a response \(r_{t}\). The only requirement is that there must always exist at least one instance \(\bar{I}\in\mathcal{I}\) that is _consistent_ with all of its responses, namely \(r_{t}=q_{t}(\bar{I})\) for all \(t\), under the oracle \(\mathcal{O}\) being considered. With each such response, the set of instances that are consistent with all responses given may change, motivating the following definition:
**Definition 14**.: _Given a class of instances \(\mathcal{I}\), an oracle \(\mathcal{O}\), and a transcript of query-response pairs \((q_{1},r_{1}),...,(q_{t},r_{t})\), the set of surviving instances for \((q_{1},r_{1}),...,(q_{t},r_{t})\) under \(\mathcal{O}\) is_
\[\{I\in\mathcal{I}:q_{j}(I)=r_{j}\,\forall\,j\in[t]\},\]
_i.e., the set of instances consistent with the responses in the transcript under the oracle \(\mathcal{O}\). When all instances in \(\mathcal{I}\) are unconstrained, let the set of surviving functions be the set of functions corresponding to the surviving instances._
We say that an adversary **Adv** is \(\varepsilon\)_-hard for \(\ell\) rounds_ if for any algorithm **Alg**, after \(\ell\) rounds there are surviving instances in \(\mathcal{I}\) that do not have a common \(\varepsilon\)-approximate solution, i.e., if \(q_{1},\ldots,q_{\ell}\) and \(r_{1},\ldots,r_{\ell}\) are **Alg**'s queries and **Adv**'s responses, respectively, then there is a collection of instances \(\mathcal{J}\subset\mathcal{I}\) that have no common \(\varepsilon\)-approximate solution but such that \(r_{t}=q_{t}(I)\) for all \(I\in\mathcal{J}\) and \(t=1,\ldots,\ell\). Since the sets of \(\varepsilon\)-approximate solutions of instances in \(\mathcal{I}_{n,d,R,\rho,M}\) are compact convex sets, this collection \(\mathcal{J}\) of instances may always be taken to be finite1. Intuitively, the existence of such an adversary should imply that no algorithm can reliably report an \(\varepsilon\)-approximate solution within \(\ell\) iterations, that is, \(\mathrm{icomp}_{\varepsilon}(\mathcal{I},\mathcal{O})>\ell\). The next result shows that this adversary-based perspective is indeed equivalent to information complexity, and may be of independent interest (for a proof see Appendix A).
Footnote 1: If a collection of compact sets has empty intersection, then there exists a finite subcollection that already has empty intersection.
**Lemma 15**.: _Consider a class of instances \(\mathcal{I}\) and an oracle \(\mathcal{O}\). Then \(\mathrm{icomp}_{\varepsilon}(\mathcal{I},\mathcal{O})>\ell\) if and only if there exists an adversary under \(\mathcal{O}\) using \(\mathcal{I}\) that is \(\varepsilon\)-hard for \(\ell\) rounds._
### Proof for the full-information first-order oracle
The full proof of Theorem 7 is a bit technical and requires a few conceptual connections. For a better exposition, we first prove the theorem in the case that the oracle is the full-information first-order oracle. As \(\mathcal{H}\) thus consists of the identity maps, throughout this subsection we will write oracles using first-order information as \(\mathcal{O}(\mathcal{G})\), where \(\mathcal{G}\) is the corresponding first order chart.
Given the assumption of the theorem and the equivalent adversarial perspective from Lemma 15, assume there is a family of continuous, unconstrained instances \(\mathcal{I}_{cont}\subseteq\mathcal{I}_{0,d,R,\rho,M}\), all with the same optimal value OPT, and a full-information first-order adversary **Adv-Cont** for \(\mathcal{I}_{cont}\) that is \(\varepsilon\)-hard for \(\ell-1\) rounds. Let us use \(\mathcal{F}_{cont}\) to denote the objective functions of the instances \(\mathcal{I}_{cont}\). In the full-information first-order case, queries of an optimization algorithm consist of points \(\mathbf{y}_{1},\mathbf{y}_{2},\ldots\in\mathbb{R}^{d}\), and either query the function value or the subgradient. For simplicity, let us allow the algorithm to query _both_ the function value and subgradient in a single query, so that the queries become simply \(\mathbf{y}_{1},\mathbf{y}_{2},\ldots\in\mathbb{R}^{d}\) and the responses of an adversary consist of a sequence of consistent function values and subgradients, namely a sequence \((v_{1},\mathbf{g}_{1}),(v_{2},\mathbf{g}_{2}),\ldots\in\mathbb{R}\times\mathbb{R}^{d}\) such that there is some \(f\in\mathcal{F}_{cont}\) satisfying \(v_{t}=f(\mathbf{y}_{t})\) and \(\mathbf{g}_{t}\in\partial f(\mathbf{y}_{t})\) for all rounds \(t\).
To prove the theorem, we will construct a full-information first-order adversary **Adv-MI** for a family of mixed-integer instances over \(\{0,1\}^{n}\times\mathbb{R}^{d}\) that is \(\varepsilon\)-hard for \(2^{n}\ell-1\) rounds. As alluded to before, the very high-level idea is to place a copy of the continuous adversary **Adv-Cont** on each of the continuous fibers \(\mathbf{x}\times\mathbb{R}^{d}\) for \(\mathbf{x}\in\{0,1\}^{n}\). In fact, we will work with a slightly modified version of the continuous adversary that is constructed next.
#### 2.2.1 Modifying the continuous adversary Adv-Cont
For the mixed-integer adversary **Adv-MI**, it will be important to render a fiber "useless" for the optimization algorithm after it queries (close to) this fiber too many times, so as to intuitively
force it to query (close to) other fibers, or gain no new information otherwise. This will be done by modifying the continuous adversary **Adv-Cont** such that whenever it is probed \(\ell\) or more times, it commits to answering all future queries consistently with a _single_ function that has optimal value \(>\mathrm{OPT}+\varepsilon\); since our mixed-integer instances will be constructed to have optimal value \(\mathrm{OPT}\), gathering more information about the function on such fibers will not help the algorithm solve the mixed-integer problem. To do this, the modified continuous adversary will also keep track of the set \(S\) of surviving functions (Definition 14) given its responses. More precisely, here are its main properties.
**Lemma 16**.: _There is a family of convex functions \(\overline{\mathcal{F}}_{cont}\) corresponding to instances \(\mathcal{I}_{0,d,R,\rho,M}\) of the purely continuous case, a first-order chart \(\mathcal{G}\) and a full-information first-order adversary **Adv-Cont+** that, for any algorithm **Alg**, maintains a set of functions \(S_{t}\subseteq\overline{\mathcal{F}}_{cont}\) for every query-response round \(t\) with the following properties:_
1. _In every round_ \(t\geq 1\)_, all functions in_ \(S_{t}\) _are consistent with the responses returned by_ _Adv-Cont+_ _thus far, under some oracle using first-order information_ \(\mathcal{O}(\mathcal{G})\)_._
2. _In the first_ \(t\leq\ell-1\) _rounds, there is a finite collection of functions in_ \(S_{t}\cap\mathcal{F}_{cont}\) _that do not share an_ \(\varepsilon\)_-approximate solution. In particular,_ _Adv-Cont+_ _is still_ \(\varepsilon\)_-hard for_ \(\ell-1\) _rounds._
3. _For all rounds_ \(t\geq 1\)_,_ \(S_{t}\) _is closed under taking maxima of finitely many of its elements, and also contains a function that has minimum value_ \(>\mathrm{OPT}+\varepsilon\)_._
4. _For rounds_ \(t\geq\ell\)_,_ \(S_{t}\) _contains a single function with minimum value_ \(>\mathrm{OPT}+\varepsilon\)_._
Item 1. means that for rounds \(t<\ell\), there exists a full-information first-order oracle \(\mathcal{O}(\mathcal{G})\) such that \(S_{t}\) is exactly the set of surviving functions under \(\mathcal{O}(\mathcal{G})\) given the responses produced by **Adv-Cont+** up to round \(t\). Hence, we will refer to this \(S_{t}\) as the set of _surviving functions maintained by **Adv-Cont+** at round \(t\)._ We now make precise our modification to the continuous adversary **Adv-Cont** and prove Lemma 16. As a preliminary, let \(\overline{\mathcal{F}}_{cont}\) denote the closure of \(\mathcal{F}_{cont}\) under taking maxima of finitely many functions, i.e. for any finite collection \(\mathcal{J}\subset\mathcal{F}_{cont}\), \(\max_{f\in\mathcal{J}}\{f\}\in\overline{\mathcal{F}}_{cont}\). Notice these functions are still convex. The following lemma highlights the key property of \(\mathcal{F}_{cont}\) we will make use:
**Lemma 17**.: _Let \(\mathcal{J}\subset\mathcal{F}_{cont}\) be a finite set. If the functions in \(\mathcal{J}\) do not have a common \(\varepsilon\)-solution, then the pointwise maximum function \(\max_{f\in\mathcal{J}}\{f\}\) has minimum value greater than \(\mathrm{OPT}+\varepsilon\)._
Proof.: Suppose for the sake of contradiction that there exists a point \(\mathbf{z}\) such that \(\max_{f\in\mathcal{J}}\{f(\mathbf{z})\}\leq\mathrm{OPT}+\varepsilon\). Then \(f(\mathbf{z})\leq\mathrm{OPT}+\varepsilon\) for all \(f\in\mathcal{J}\), which means \(\mathbf{z}\) is an \(\varepsilon\)-solution for all \(f\in\mathcal{J}\), contradicting the assumption that they do not share an \(\varepsilon\)-solution.
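For a toy illustration of Lemma 17 (not an instance from the paper): take \(d=1\), \(\mathrm{OPT}=0\), \(\varepsilon=1\), \(f_{1}(y)=|y|\) and \(f_{2}(y)=|y-3|\). The \(\varepsilon\)-solution sets are \([-1,1]\) and \([2,4]\), so there is no common \(\varepsilon\)-solution, and indeed \(\min_{y}\max\{f_{1}(y),f_{2}(y)\}=3/2>\mathrm{OPT}+\varepsilon\).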
We now formally describe **Adv-Cont+**, and then prove that it satisfies the invariants of Lemma 16.
**Procedure 1. Adv-Cont+**
Initialize set of surviving functions \(S_{0}=\overline{\mathcal{F}}_{cont}\)
For each round \(t=1,2\ldots\):
1. Receive query point \(\mathbf{y}_{t}\in\mathbb{R}^{d}\) from the optimization algorithm
2. If \(t\leq\ell-1\): Send \(\mathbf{y}_{t}\) to the adversary **Adv-Cont**, receiving back a value \(v_{t}\) and subgradient \(\mathbf{g}_{t}\). Obtain \(S_{t}\) by removing from \(S_{t-1}\) the functions \(f\) that are not consistent with this response for any first-order chart \(\mathcal{G}\), namely where \(f(\mathbf{y}_{t})\neq v_{t}\) or \(\mathbf{g}_{t}\notin\partial f(\mathbf{y}_{t})\).
Send the response \((v_{t},\mathbf{g}_{t})\) to the optimization algorithm.
3. If \(t=\ell\): Since \(\mathbf{Adv}\)-\(\mathbf{Cont}\) is \(\varepsilon\)-hard for \(\ell-1\) rounds, there is a finite collection of functions \(\{f_{1},...,f_{k}\}\subset S_{t-1}\cap\mathcal{F}_{cont}\) that do not share an \(\varepsilon\)-solution. Define their pointwise maximum \(f_{\max}=\max\{f_{1},...,f_{k}\}\) and set \(S_{t+j}=\{f_{\max}\}\) for all \(j=0,1,2,\ldots\). Set the value \(v_{t}\) to be \(f_{\max}(\mathbf{y}_{t})\) and set \(\mathbf{g}_{t}\) to be a subgradient in \(\partial f_{\max}(\mathbf{y}_{t})\) (consistent with what the first-order chart \(\mathcal{G}_{0}\) gives for \(f_{1},\ldots,f_{k}\) at \(\mathbf{y}_{t}\), if \(\mathbf{y}_{t}\) has been queried in an earlier round), and send the response \((v_{t},\mathbf{g}_{t})\) to the optimization algorithm.
4. If \(t>\ell\): Let \(f_{\max}\) be the only function in \(S_{t-1}\). If \(\mathbf{y}_{t}\) was queried in an earlier round \(k\), answer \((v_{k},\mathbf{g}_{k})\). Otherwise, set the value \(v_{t}\) to be \(f_{\max}(\mathbf{y}_{t})\) and set \(\mathbf{g}_{t}\) to be any subgradient in \(\partial f_{\max}(\mathbf{y}_{t})\), and send the response \((v_{t},\mathbf{g}_{t})\) to the optimization algorithm.
Proof of Lemma 16.: We will proceed by induction on the number of rounds \(t\). The lemma clearly holds for \(S_{0}\), so suppose it holds for \(S_{t-1}\).
If \(t\leq\ell-1\), then \(S_{t}\) satisfies Item 1 due to the update rule in the procedure for obtaining \(S_{t}\), since all functions that are not consistent with the given response are removed. More precisely, since the responses given are those produced by \(\mathbf{Adv}\)-\(\mathbf{Cont}\), these functions in \(S_{t}\) are consistent with the responses under exactly the oracle \(\mathcal{O}(\mathcal{G}_{0})\) that \(\mathbf{Adv}\)-\(\mathbf{Cont}\) is hard under. \(S_{t}\) also satisfies Item 2 because \(\mathbf{Adv}\)-\(\mathbf{Cont}\) is assumed to be \(\varepsilon\)-hard for \(\ell-1\) rounds, so there exists a finite collection of functions \(\{f_{1},...,f_{k}\}\subset\mathcal{F}_{cont}\) with no common \(\varepsilon\)-solution that are consistent with all responses given to \(\mathbf{Adv}\)-\(\mathbf{Cont}\)+ by \(\mathbf{Adv}\)-\(\mathbf{Cont}\); thus \(S_{t}\) contains them. For Item 3, to show the closure under taking maxima, we need to argue that if functions \(f_{1},...,f_{k}\) were not removed from \(S\), then neither was \(\max(f_{1},...,f_{k})\). Since \(f_{1},...,f_{k}\) are convex, \(\partial f_{j}(\mathbf{y})\subset\partial\max\{f_{1},...,f_{k}\}(\mathbf{y})\) for any \(j\) such that \(f_{j}(\mathbf{y})=\max\{f_{1}(\mathbf{y}),...,f_{k}(\mathbf{y})\}\). Hence, if \(f_{1},...,f_{k}\) all have function value \(v_{t}\) and subgradient \(\mathbf{g}_{t}\) at \(\mathbf{y}\), then so does \(\max\{f_{1},...,f_{k}\}\), so \(\max\{f_{1},...,f_{k}\}\) was not removed from \(S\), as desired. Furthermore, if \(f_{1},...,f_{k}\) are taken to be the functions guaranteed by Item 2, Lemma 17 implies that \(\max\{f_{1},...,f_{k}\}\) has optimal value greater than \(\mathrm{OPT}+\varepsilon\), so since we just showed \(\max\{f_{1},...,f_{k}\}\in S_{t}\), the remainder of Item 3 follows.
If \(t=\ell\), \(S_{t}\) contains the single function \(f_{\max}\), which has optimal value greater than \(\mathrm{OPT}+\varepsilon\) by its construction as a consequence of Lemma 17. Hence, Item 3 follows. To prove item 1, we will use that \(S_{t-1}\) satisfies Item 1, and by Item 3 applied to \(S_{t-1}\), \(f_{\max}\) is consistent with the responses returned by the procedure up to round \(t-1\). For round \(t\) itself, consistency follows from the definition of \(v_{t}\) and \(\mathbf{g}_{t}\), and so \(f_{\max}\) is consistent with all responses given. Item 2 does not apply in this case and Item 4 is immediate by the construction of \(S_{t}:=\{f_{\max}\}\).
If \(t>\ell\), then \(S_{t}=S_{t-1}=\{f_{\max}\}\) and it suffices to check that the response \((v_{t},\mathbf{g}_{t})\) is compatible with \(f_{\max}\), which follows immediately from the definition of the response.
Hence, Items 1-4 of the lemma follow. It remains to show that \(\mathbf{Adv}\)-\(\mathbf{Cont}\)+ is indeed a well-defined adversary under a full-information first-order oracle. For rounds \(t\leq\ell-1\), this is inherited from \(\mathbf{Adv}\)-\(\mathbf{Cont}\), while for \(t\geq\ell\), this is ensured because if the queried point \(\mathbf{y}_{t}\) is the same as \(\mathbf{y}_{t^{\prime}}\) for some round \(t^{\prime}<t\), \(\mathbf{Adv}\)-\(\mathbf{Cont}\)+ provides the same response in round \(t\) as in round \(t^{\prime}\). Thus, there is indeed a first-order chart \(\mathcal{G}\) (derived from the first-order chart \(\mathcal{G}_{0}\) for \(\mathbf{Adv}\)-\(\mathbf{Cont}\)) such that \(\mathbf{Adv}\)-\(\mathbf{Cont}\)+ is an adversary under the corresponding full-information first-order oracle \(\mathcal{O}(\mathcal{G})\).
#### 2.2.2 Constructing the mixed-integer adversary Adv-MI
We now construct the family \(\mathcal{F}_{MI}\) of functions over \(\mathbb{R}^{n}\times\mathbb{R}^{d}\) used to transfer the lower bound to the mixed-integer setting, along with the adversary \(\mathbf{Adv}\)-\(\mathbf{MI}\) for that family. We call functions
over \(\mathbb{R}^{n}\times\mathbb{R}^{d}\) _full-dimensional_ to distinguish them from the functions over \(\mathbb{R}^{d}\), the continuous part of the problem. As indicated previously, these full-dimensional functions \(\psi\) in \(\mathcal{F}_{MI}\) will be obtained by selecting one function \(f_{\tilde{\mathbf{x}}}\) from \(\overline{\mathcal{F}}_{cont}\) for each of the mixed-integer fibers \(\tilde{\mathbf{x}}\times\mathbb{R}^{d},\tilde{\mathbf{x}}\in\{0,1\}^{n}\), letting \(\psi\) equal the selected function over each corresponding fiber, and applying an interpolation scheme between the fibers. This interpolation is illustrated in Figure 1 and described in detail later in this section.
For the behavior of **Adv-MI**, we instantiate a copy of the modified continuous adversary **Adv-Cont+** on each fiber. Whenever the optimization algorithm queries a point \((\tilde{\mathbf{x}},\mathbf{y})\) on a fiber, we send \(\mathbf{y}\) to the continuous adversary on the fiber and report back the response \((v,\mathbf{g})\) received, although \(\mathbf{g}\) needs to be appropriately lifted to the full \(\mathbb{R}^{n}\times\mathbb{R}^{d}\) space to be consistent with the way we interpolate the functions between the fibers. If the optimization algorithm only probes on these fibers, then it is intuitive that such an adversary would be \(\varepsilon\)-hard for \(2^{n}\ell-1\) rounds: informally, up to that round, at least one of the \(2^{n}\) fibers has received no more than \(\ell-1\) queries, so using the hardness of **Adv-Cont+** (Item 2 of Lemma 16) we can obtain full-dimensional functions that do not share an \(\varepsilon\)-approximate solution, which confirms the desired \(\varepsilon\)-hardness of the mixed-integer adversary **Adv-MI**.
The crucial element is how to deal with queries on points outside of the mixed-integer fibers. If such queries provide the algorithm with more information about the full-dimensional functions \(\mathcal{F}_{MI}\) than queries on the fibers do, then we may not have full-dimensional functions with no common \(\varepsilon\)-approximate solution surviving for \(2^{n}\ell-1\) rounds. To handle this issue, the interpolation used to define the full-dimensional functions \(\psi\) guarantees that its behavior on a fractional point \((\tilde{\mathbf{x}},\mathbf{y})\notin\{0,1\}^{n}\times\mathbb{R}^{d}\) is completely determined by the value of the function \(f_{\bar{\mathbf{x}}}(\mathbf{y})\) from \(\overline{\mathcal{F}}_{cont}\) selected for the fiber \(\bar{\mathbf{x}}\times\mathbb{R}^{d}\), where \(\bar{\mathbf{x}}\in\{0,1\}^{n}\) is the closest \(0/1\) point to \(\tilde{\mathbf{x}}\). Thus, **Adv-MI** can also answer such a query at a fractional point by making a query to the appropriate continuous adversary **Adv-Cont+** on \(\{\bar{\mathbf{x}}\}\times\mathbb{R}^{d}\), and the hardness of the latter can still be leveraged.
We now formally define the functions \(\mathcal{F}_{MI}\) and the adversary **Adv-MI**.
Figure 1: (Left) Illustration of two possible functions \(f_{0},f_{1}\in\overline{\mathcal{F}}_{cont}\) (blue and red) of the continuous adversary **Adv-Cont+**, for \(d=1\). (Right) Illustration of the function \(\psi_{(f_{0},f_{1})}\) for the mixed-integer adversary **Adv-MI** obtained by placing the functions \(f_{0}\) and \(f_{1}\) on the fibers \(\{0\}\times\mathbb{R}\) and \(\{1\}\times\mathbb{R}\) and interpolating appropriately between the fibers.
Construction of the functions \(\mathcal{F}_{MI}\). For a \(0/1\) point \(\bar{\mathbf{x}}\in\{0,1\}^{n}\) and a function \(f\in\overline{\mathcal{F}}_{cont}\) in \(\mathbb{R}^{d}\), we first define its (convex) extension to the full-dimensional space \(\mathbb{R}^{n+d}\) as
\[\hat{f}_{\bar{\mathbf{x}}}(\mathbf{x},\mathbf{y})=\max\Big{\{}f(\mathbf{y})+\langle\mathbf{M}_{\bar{\mathbf{x}}},\mathbf{x}-\bar{\mathbf{x}}\rangle,\ \text{OPT}\Big{\}}, \tag{2}\]
with \(\mathbf{M}_{\bar{\mathbf{x}}}:=3MR\cdot sgn(\bar{\mathbf{x}}-0.5\cdot\mathbf{1})\), where \(sgn\) denotes the (componentwise) sign function. This construction effectively places \(f\) along the \(\mathbf{y}\) space at the fiber \(\bar{\mathbf{x}}\) and extends it in each of the \(\mathbf{x}\) variables via a linear function with slope \(\pm 3MR\), in a way that decreases the value as \(\mathbf{x}\) moves into the unit cube, or equivalently, away from \(\bar{\mathbf{x}}\); the value is then truncated to be at least OPT; see Figure 2 for an illustration. We note for later use that wherever the extension is not truncated by OPT, a subgradient is given by appending the vector \(\mathbf{M}_{\bar{\mathbf{x}}}\) to a subgradient of \(f\), and otherwise the all-zeroes vector is a subgradient. More precisely, we have
\[\partial\hat{f}_{\bar{\mathbf{x}}}(\mathbf{x},\mathbf{y})\supseteq\left\{ \begin{array}{ll}\{\mathbf{M}_{\bar{\mathbf{x}}}\}\times\partial f(\mathbf{ y})&\text{, if }\hat{f}_{\bar{\mathbf{x}}}(\mathbf{x},\mathbf{y})>\text{OPT}\\ \{\mathbf{0}\}&\text{, otherwise.}\end{array}\right. \tag{3}\]
Given a collection \(\mathtt{F}=(f_{\bar{\mathbf{x}}})_{\bar{\mathbf{x}}}\) with one function \(f_{\bar{\mathbf{x}}}\in\overline{\mathcal{F}}_{cont}\) for each \(0/1\) point \(\bar{\mathbf{x}}\), we combine them into the convex function
\[\psi_{\mathtt{F}}(\mathbf{x},\mathbf{y}):=\max_{\bar{\mathbf{x}}\in\{0,1\}^{n }}\hat{f}_{\bar{\mathbf{x}}}(\mathbf{x},\mathbf{y}), \tag{4}\]
where we slightly abuse notation and write the convex extension \((\widehat{f_{\bar{\mathbf{x}}}})_{\bar{\mathbf{x}}}\) simply as \(\hat{f}_{\bar{\mathbf{x}}}\). As mentioned above, a crucial property of these functions is that their behavior between the fibers is determined by the behavior on the closest fiber. Intuitively, the slope \(\pm 3MR\) guarantees that as the \(\mathbf{x}\) argument moves away from the base fiber \(\bar{\mathbf{x}}\) of each extended function \(\hat{f}_{\bar{\mathbf{x}}}\), \(\hat{f}_{\bar{\mathbf{x}}}\) decreases rapidly enough so that the maximum in (4) is always achieved by the extended function at the closest fiber to \(\mathbf{x}\). Figure 1 illustrates this, where one can see that both functions placed on the fibers get fully truncated in between the fibers. To make this precise, let \(r:[0,1]^{n}\to\{0,1\}^{n}\) map any \(\mathbf{x}\) in the box to its closest \(0/1\) point in the \(\ell_{\infty}\)-norm, that is, \(r(\mathbf{x}):=\operatorname*{argmin}_{\mathbf{x}^{\prime}}\{\|\mathbf{x}-\mathbf{x}^{\prime}\|_{\infty}:\mathbf{x}^{\prime}\in\{0,1\}^{n}\}\).
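The following is a small numerical sketch (with made-up toy functions and constants, not taken from the paper) of the construction in (2) and (4) for \(n=d=1\): two convex functions with optimal value \(\mathrm{OPT}=0\) and small range are placed on the two fibers, and the conclusion of Lemma 18 below is verified on a grid. The helper names `f_hat`, `psi`, and `r` are illustrative.

```python
import numpy as np

M, R, OPT = 1.0, 1.0, 0.0
fibers = {
    (0,): lambda y: 0.5 * abs(y - 0.3),   # f_0, placed on the fiber x = 0
    (1,): lambda y: 0.5 * abs(y + 0.5),   # f_1, placed on the fiber x = 1
}

def M_vec(xbar):
    """Slope vector M_xbar = 3MR * sgn(xbar - 0.5 * 1)."""
    return 3 * M * R * np.sign(np.array(xbar, dtype=float) - 0.5)

def f_hat(xbar, x, y):
    """Truncated extension (2): max{ f_xbar(y) + <M_xbar, x - xbar>, OPT }."""
    lin = fibers[xbar](y) + M_vec(xbar) @ (np.array(x, dtype=float) - np.array(xbar, dtype=float))
    return max(lin, OPT)

def psi(x, y):
    """Interpolated function (4): maximum of the extensions over all 0/1 fibers."""
    return max(f_hat(xbar, x, y) for xbar in fibers)

def r(x):
    """Closest 0/1 point to x in the l_inf norm."""
    return tuple(int(round(xi)) for xi in x)

# Check Lemma 18 on a grid: psi agrees with the extension of the function
# placed on the nearest fiber, for (x, y) in [0,1] x [-R, R].
for x in np.linspace(0.0, 1.0, 21):
    for y in np.linspace(-R, R, 21):
        assert abs(psi((x,), y) - f_hat(r((x,)), (x,), y)) < 1e-12
print("Lemma 18 verified on the grid for this toy instance.")
```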
**Lemma 18**.: _For every collection \(\mathtt{F}=(f_{\bar{\mathbf{x}}})_{\bar{\mathbf{x}}}\), for every point \((\mathbf{x},\mathbf{y})\in[0,1]^{n}\times[-R,R]^{d}\) we have_
\[\psi_{\mathtt{F}}(\mathbf{x},\mathbf{y})=\hat{f}_{r(\mathbf{x})}(\mathbf{x}, \mathbf{y}),\quad\quad\text{and}\quad\quad\partial\psi_{\mathtt{F}}(\mathbf{x},\mathbf{y})=\partial\hat{f}_{r(\mathbf{x})}(\mathbf{x},\mathbf{y})\]
Figure 2: An example of a possible function from \(f\in\mathcal{F}_{cont}\) (left) together with an illustration of its truncated extension \(\hat{f}_{0}(\mathbf{x},\mathbf{y})\) (right) as constructed in (2).
Proof.: Define \(B_{\bar{\mathbf{x}}}:=\{\mathbf{x}^{\prime}\in\mathbb{R}^{n}:\|\mathbf{x}^{\prime}- \bar{\mathbf{x}}\|_{\infty}\leq\frac{1}{3}\}\) for every \(\bar{\mathbf{x}}\in\{0,1\}^{n}\). Consider an arbitrary \((\mathbf{x},\mathbf{y})\in[0,1]^{n}\times[-R,R]^{d}\).
Case 1: \(\mathbf{x}\not\in B_{\bar{\mathbf{x}}}\) for any \(\bar{\mathbf{x}}\in\{0,1\}^{n}\). This implies that for every \(\bar{\mathbf{x}}\in\{0,1\}^{n}\), we have
\[f_{\bar{\mathbf{x}}}(\mathbf{y})+\langle\mathbf{M}_{\bar{\mathbf{ x}}},\mathbf{x}-\bar{\mathbf{x}}\rangle =f_{\bar{\mathbf{x}}}(\mathbf{y})+\sum_{j\in[n]}3MR\cdot sgn(\bar{ \mathbf{x}}_{j}-0.5)\cdot(\mathbf{x}_{j}-\bar{\mathbf{x}}_{j})\] \[\leq f_{\bar{\mathbf{x}}}(\mathbf{y})-3MR\cdot\max_{j\in[n]}| \mathbf{x}_{j}-\bar{\mathbf{x}}_{j}|<f_{\bar{\mathbf{x}}}(\mathbf{y})-MR\leq \text{OPT},\]
Thus, \(\hat{f}_{\bar{\mathbf{x}}}(\mathbf{x},\mathbf{y})=\text{OPT}\) for all \(\bar{\mathbf{x}}\in\{0,1\}^{n}\). As a result, \(\psi_{F}(\mathbf{x},\mathbf{y})=\text{OPT}=\hat{f}_{r(\mathbf{x})}(\mathbf{x },\mathbf{y})\).
Moreover, since \(f_{\bar{\mathbf{x}}}(\mathbf{y})+\langle\mathbf{M}_{\bar{\mathbf{x}}},\mathbf{ x}-\bar{\mathbf{x}}\rangle<\text{OPT}\) for all \(\bar{\mathbf{x}}\in\{0,1\}^{n}\), there exists a neighborhood of \((\mathbf{x},\mathbf{y})\) such that for any point \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) in the neighborhood, it holds that \(f_{\bar{\mathbf{x}}}(\mathbf{y}^{\prime})+\langle\mathbf{M}_{\bar{\mathbf{x}} },\mathbf{x}^{\prime}-\bar{\mathbf{x}}\rangle<\text{OPT}\), and \(\hat{f}_{\bar{\mathbf{x}}}(\mathbf{x}^{\prime},\mathbf{y}^{\prime})=\text{OPT}\) for all \(\bar{\mathbf{x}}\in\{0,1\}^{n}\). As a result, \(\partial\hat{f}_{\bar{\mathbf{x}}}(\mathbf{x},\mathbf{y})=\{\mathbf{0}\}\) for all \(\bar{\mathbf{x}}\in\{0,1\}^{n}\), and \(\partial\psi_{F}(\mathbf{x},\mathbf{y})=\{\mathbf{0}\}\).
Case 2: \(\mathbf{x}\in B_{\bar{\mathbf{x}}}\) for some \(\bar{\mathbf{x}}\in\{0,1\}^{n}\). In this case, \(r(\mathbf{x})=\bar{\mathbf{x}}\). This is because for any \(\tilde{\mathbf{x}}\in\{0,1\}^{n}\backslash\{\bar{\mathbf{x}}\}\), we have that
\[\|\mathbf{x}-\tilde{\mathbf{x}}\|_{\infty}\geq\frac{2}{3}>\frac{1}{3}\geq\| \mathbf{x}-\bar{\mathbf{x}}\|_{\infty}. \tag{5}\]
It is also true that \(\hat{f}_{\bar{\mathbf{x}}}(\mathbf{x},\mathbf{y})\geq\text{OPT}=\hat{f}_{\tilde{\mathbf{x}}}(\mathbf{x},\mathbf{y})\) for any \(\tilde{\mathbf{x}}\neq\bar{\mathbf{x}}\), which holds due to the result from Case 1.
Moreover, the arguments from Case 1 and (5) imply that there exists a neighborhood of \((\mathbf{x},\mathbf{y})\) such that for any point \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) in the neighborhood, it holds that \(r(\mathbf{x}^{\prime})=r(\mathbf{x})=\bar{\mathbf{x}}\), and \(\hat{f}_{\bar{\mathbf{x}}}(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\geq\text{OPT}=\hat{f}_{\tilde{\mathbf{x}}}(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) for all \(\tilde{\mathbf{x}}\neq\bar{\mathbf{x}}\). As a result, \(\psi_{\mathtt{F}}(\mathbf{x}^{\prime},\mathbf{y}^{\prime})=\hat{f}_{\bar{\mathbf{x}}}(\mathbf{x}^{\prime},\mathbf{y}^{\prime})=\hat{f}_{r(\mathbf{x})}(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) for all such points, and hence \(\partial\psi_{\mathtt{F}}(\mathbf{x},\mathbf{y})=\partial\hat{f}_{\bar{\mathbf{x}}}(\mathbf{x},\mathbf{y})=\partial\hat{f}_{r(\mathbf{x})}(\mathbf{x},\mathbf{y})\).
Construction of the mixed-integer adversary Adv-MI. We finally describe **Adv-MI** in Procedure 2. Its main property is captured in the following invariant.
**Procedure 2. Adv-MI**
Instantiate a copy of **Adv-Cont+** on each fiber \(\mathbf{x}\in\{0,1\}^{n}\), and let \(S(\mathbf{x})\) denote the set of surviving functions maintained in every round by this copy, initialized to \(\overline{\mathcal{F}}_{cont}\).
For each round \(t=1,2\ldots\):
1. **Adv-MI** receives the query \((\mathbf{x}_{t},\mathbf{y}_{t})\) from the algorithm. Send \(\mathbf{y}_{t}\) to the adversary **Adv-Cont+** associated with the closest fiber \(r(\mathbf{x}_{t})\), which then returns a value \(v\) and subgradient \(\mathbf{g}\), and updates its maintained set of surviving functions \(S(r(\mathbf{x}_{t}))\) of its fiber \(r(\mathbf{x}_{t})\).
2. **Adv-MI** returns as its response to the query \((\mathbf{x}_{t},\mathbf{y}_{t})\) the value \[\tilde{v}_{t}=\max\Big{\{}v+\langle\mathbf{M}_{r(\mathbf{x}_{t})},\mathbf{x}_{t}-r(\mathbf{x}_{t})\rangle,\ \text{OPT}\Big{\}},\] and as subgradient returns either \(\tilde{\mathbf{g}}_{t}=(\mathbf{M}_{r(\mathbf{x}_{t})},\mathbf{g})\) or \(\tilde{\mathbf{g}}_{t}=\mathbf{0}\), depending on whether \(\tilde{v}_{t}>\text{OPT}\) or not (i.e., on whether \(\hat{f}_{r(\mathbf{x}_{t})}\) is untruncated or truncated at \((\mathbf{x}_{t},\mathbf{y}_{t})\)), respectively.
**Invariant 1**.: _There exists a first order chart \(\mathcal{G}\) (derived from \(\mathcal{G}_{0}\)) such that, for any algorithm, the sets \(S(\mathbf{x})\), \(\mathbf{x}\in\{0,1\}^{n}\) maintained by **Adv-MI** satisfy the following property._
_In every round, for every collection \(\text{F}=(f_{\mathbf{x}})_{\mathbf{x}\in\{0,1\}^{n}}\) of current surviving functions \(f_{\mathbf{x}}\in S(\mathbf{x})\) for \(\mathbf{x}\in\{0,1\}^{n}\), the function \(\psi_{\text{F}}\) is consistent with the response returned by **Adv-MI** under the full-information first-order oracle \(\mathcal{O}(\mathcal{G})\), i.e., \(\psi_{\text{F}}(\mathbf{x}_{t},\mathbf{y}_{t})=\tilde{v}_{t}\) and \(\tilde{\mathbf{g}}_{t}\in\partial\psi_{\text{F}}(\mathbf{x}_{t},\mathbf{y}_{ t})\)._
Notice that Invariant 1 is indeed maintained after each response in Step 2 of Procedure 2: For every collection \(\text{F}=(f_{\bar{\mathbf{x}}})_{\bar{\mathbf{x}}}\) of still surviving functions \(f_{\bar{\mathbf{x}}}\in S(\bar{\mathbf{x}})\), by the consistency guarantee of **Adv-Cont+** (Item 1 of Lemma 16) the function \(f_{r(\mathbf{x}_{t})}\) selected for the fiber \(r(\mathbf{x}_{t})\) has value \(v\) and subgradient \(\mathbf{g}\) at \(\mathbf{y}_{t}\); thus, Lemma 18 combined with (2) implies that the function \(\psi_{\text{F}}\) has value
\[\psi_{\text{F}}(\mathbf{x}_{t},\mathbf{y}_{t})=\hat{f}_{r(\mathbf{x}_{t})}(\mathbf{x}_{t},\mathbf{y}_{t})=\max\left\{f_{r(\mathbf{x}_{t})}(\mathbf{y}_{t})+\langle\mathbf{M}_{r(\mathbf{x}_{t})},\mathbf{x}_{t}-r(\mathbf{x}_{t})\rangle,\ \text{OPT}\right\}=\tilde{v}_{t},\]
and similarly from (3) we see that \(\tilde{\mathbf{g}}\) is a subgradient in \(\partial\psi_{\text{F}}(\mathbf{x}_{t},\mathbf{y}_{t})=\partial\hat{f}_{r( \mathbf{x}_{t})}(\mathbf{x}_{t},\mathbf{y}_{t})\), as desired.
We now prove that **Adv-MI** is \(\varepsilon\)-hard for \(2^{n}\ell-1\) rounds; using Lemma 15, this implies Theorem 7 for the case of the full-information first-order oracle. Suppose the optimization algorithm runs for fewer than \(2^{n}\cdot\ell\) iterations. Then there is a fiber \(\mathbf{x}^{*}\in\{0,1\}^{n}\) where **Adv-MI** sent at most \(\ell-1\) queries to the adversary **Adv-Cont+** of the fiber \(\mathbf{x}^{*}\). Thus, by the guarantee of the latter (Item 2 of Lemma 16), the surviving set \(S(\mathbf{x}^{*})\) has some finite collection of functions \(f^{1}_{\mathbf{x}^{*}},...,f^{k}_{\mathbf{x}^{*}}\) with no common \(\varepsilon\)-approximate solution. Consider the collections \(\text{F}^{1},...,\text{F}^{k}\) of surviving functions that have \(f^{1}_{\mathbf{x}^{*}},...,f^{k}_{\mathbf{x}^{*}}\), respectively, for the fiber \(\mathbf{x}^{*}\) and, for each of the other fibers \(\bar{\mathbf{x}}\neq\mathbf{x}^{*}\), the same function \(f_{\bar{\mathbf{x}}}\in S(\bar{\mathbf{x}})\) with optimal value \(>\text{OPT}+\varepsilon\) (such functions exist by Item 3 of Lemma 16). By Invariant 1, all functions \(\psi_{\text{F}^{1}},...,\psi_{\text{F}^{k}}\) are compatible with the responses returned by **Adv-MI**. The desired \(\varepsilon\)-hardness of **Adv-MI** then follows from the following claim, which concludes the proof.
**Claim 19**.: _The functions \(\psi_{\text{F}^{1}},...,\psi_{\text{F}^{k}}\) share no common \(\varepsilon\)-approximate solution._
Proof.: From the construction above, we have that \(\text{F}^{\dagger}:=\text{F}^{1}\backslash\{f^{1}_{\mathbf{x}^{*}}\}=\text{F}^{2}\backslash\{f^{2}_{\mathbf{x}^{*}}\}=...=\text{F}^{k}\backslash\{f^{k}_{\mathbf{x}^{*}}\}\). Due to (4) and the definitions of \(\text{F}^{1},...,\text{F}^{k}\), for any fiber \(\bar{\mathbf{x}}\neq\mathbf{x}^{*}\) and \(f_{\bar{\mathbf{x}}}\in\text{F}^{\dagger}\), it follows that \(\psi_{\text{F}^{1}}(\bar{\mathbf{x}},\mathbf{y})=...=\psi_{\text{F}^{k}}(\bar{\mathbf{x}},\mathbf{y})=f_{\bar{\mathbf{x}}}(\mathbf{y})>\text{OPT}+\varepsilon\). Thus, the \(\varepsilon\)-approximate solutions for the functions \(\psi_{\text{F}^{1}},...,\psi_{\text{F}^{k}}\) only exist within the fiber \(\mathbf{x}^{*}\). Given that \(f^{1}_{\mathbf{x}^{*}},...,f^{k}_{\mathbf{x}^{*}}\) have no common \(\varepsilon\)-approximate solution, and considering that \(\psi_{\text{F}^{j}}(\mathbf{x}^{*},\mathbf{y})=f^{j}_{\mathbf{x}^{*}}(\mathbf{y})\) for all \(j\in[k]\), we can conclude our proof.
### Proof of Theorem 7 for general oracles
We now prove Theorem 7 in full generality. We will do this using the exact same family of difficult functions \(\psi_{\text{F}}\) from (4), and also with the same idea of constructing a mixed-integer adversary that produces its answers by making queries to an adversary for the continuous problems on the fibers. Since the mixed-integer adversary will need to answer queries made in the full \(\mathbb{R}^{n}\times\mathbb{R}^{d}\) space by making queries in the continuous space \(\mathbb{R}^{d}\) on each fiber, we will require that the set of permissible queries that can be made to the continuous adversary is, in some sense, as rich as the queries allowed in the full space. For example, if one allows full-information queries to be made in \(\mathbb{R}^{n}\times\mathbb{R}^{d}\), but only binary queries to be made in \(\mathbb{R}^{d}\) to the continuous adversary, one would struggle to determine how the mixed-integer adversary should answer those full-information queries by making only binary queries to the adversaries for the continuous subproblems. Specifically, for a query at \((\mathbf{x},\mathbf{y})\in\mathbb{R}^{n}\times\mathbb{R}^{d}\), knowing how \(\mathbf{x}\) affects the function values and subgradients of \(\psi_{\text{F}}\), the mixed-integer adversary needs to be
able to determine what response to give by making a suitably chosen query about \(f_{r(\mathbf{x})}\) to the continuous adversary. We formalize this requirement of having the same richness of queries for the continuous subproblems as for the full \(\mathbb{R}^{n}\times\mathbb{R}^{d}\) space with the concept of _hereditary queries_.
Hereditary queries. For simplicity, we define the notion of hereditary queries for unconstrained problems (i.e., only for value/subgradient queries), but we remark that the same idea can be applied to separation queries as well.
**Definition 20**.: _Let \(\{\mathcal{H}^{\mathrm{val}}_{n,d}\}_{n,d\in\mathbb{N}}\) and \(\{\mathcal{H}^{\mathrm{sub}}_{n,d}\}_{n,d\in\mathbb{N}}\) be classes of permissible function value and subgradient queries, respectively, with response sets (codomains) \(H^{\mathrm{val}}_{n,d}\) and \(H^{\mathrm{sub}}_{n,d}\). \(\{\mathcal{H}^{\mathrm{val}}_{n,d}\}_{n,d\in\mathbb{N}}\) and \(\{\mathcal{H}^{\mathrm{sub}}_{n,d}\}_{n,d\in\mathbb{N}}\) are said to be hereditary if the following holds for all \(n,d\in\mathbb{N}\) and functions \(\mathcal{M}:\{0,1\}^{n}\to\mathbb{R}^{n}\). For any \(\mathbf{x}\in\{0,1\}^{n}\), \(\delta\in\mathbb{R}\), \(h^{\mathrm{val}}\in\mathcal{H}^{\mathrm{val}}_{n,d}\), and \(h^{\mathrm{sub}}\in\mathcal{H}^{\mathrm{sub}}_{n,d}\), there exists \(h^{\mathrm{val}}_{*}\in\mathcal{H}^{\mathrm{val}}_{0,d}\), \(h^{\mathrm{sub}}_{*}\in\mathcal{H}^{\mathrm{sub}}_{0,d}\) and functions \(B^{\mathrm{val}}:H^{\mathrm{val}}_{0,d}\to H^{\mathrm{val}}_{n,d}\), \(B^{\mathrm{sub}}:H^{\mathrm{sub}}_{0,d}\to H^{\mathrm{sub}}_{n,d}\) such that_
\[B^{\mathrm{val}}(h^{\mathrm{val}}_{*}(v))=h^{\mathrm{val}}(v+\delta)\qquad\forall v\in\mathbb{R}, \tag{6}\] \[B^{\mathrm{sub}}(h^{\mathrm{sub}}_{*}(\mathbf{g}))=h^{\mathrm{sub}}(\mathcal{M}(\mathbf{x}),\mathbf{g})\qquad\forall\mathbf{g}\in\mathbb{R}^{d}. \tag{7}\]
Intuitively, a class of queries being hereditary has the consequence that if, for a point \((\mathbf{x},\mathbf{y})\in\mathbb{R}^{n}\times\mathbb{R}^{d}\), one knows exactly the \(\mathbf{x}\) component \(\mathcal{M}(\mathbf{x})\) of the subgradient, then one can simulate a query in the \(\mathbb{R}^{n}\times\mathbb{R}^{d}\) space by making a query only in the \(\mathbb{R}^{d}\) space, and similarly that there are queries rich enough to handle shifted function values \(v+\delta\), where the interpretation is that \(\delta\) is the effect \(\mathbf{x}\) has on the overall function value (see (2)).
**Example 21**.: _We show that natural permissible queries, as from Definition 5, are hereditary. Let \(\mathcal{M}(\mathbf{x}),\delta\) be as in Definition 20._
1. _(Full-information first-order oracle) If_ \(\mathcal{H}^{\mathrm{val}}_{n,d}\) _and_ \(\mathcal{H}^{\mathrm{sub}}_{n,d}\) _are simply the identity functions, then we can take_ \(B^{\mathrm{val}}\) _to be_ \(B^{\mathrm{val}}(v)=v+\delta\) _and take_ \(B^{\mathrm{sub}}\) _to be the "lifting/rotation" map_ \(B^{\mathrm{sub}}(\mathbf{g})=(\mathcal{M}(\mathbf{x}),\mathbf{g})\)_, noting that_ \(h^{\mathrm{sub}},\;h^{\mathrm{val}},\;h^{\mathrm{sub}}_{*}\)_, and_ \(h^{\mathrm{val}}_{*}\) _are all the identity functions._
2. _(General binary oracle) For the general binary oracle based on a first-order chart_ \(\mathcal{G}\)_,_ \(B^{\mathrm{val}}\) _and_ \(B^{\mathrm{sub}}\) _can be taken to be the identity map from_ \(\{0,1\}\) _to_ \(\{0,1\}\)_, and one can take_ \(h^{\mathrm{val}}_{*}(v)=h(v+\delta),\;h^{\mathrm{sub}}_{*}=h(\mathcal{M}( \mathbf{x}),\mathbf{g}),\) _which are permissible queries since all binary queries are permissible._
3. _(Shifted bit oracle) If_ \(\mathcal{H}^{\mathrm{val}}_{n,d}\) _and_ \(\mathcal{H}^{\mathrm{sub}}_{n,d}\) _are from a shifted bit oracle_ \(\mathcal{H}^{bit^{*}}\)_, then_ \(B^{\mathrm{val}}\) _can be taken to be the identity map from_ \(\{0,1\}\) _to_ \(\{0,1\}\)_. For a query on the function value, if_ \(h^{\mathrm{val}}_{*}\) _reports some bit of_ \(v+\delta\)_, then the appropriate hereditary query is exactly the query_ \(h^{\mathrm{val}}_{*}\) _such that_ \(h^{\mathrm{val}}_{*}(v)=h(v+\delta)\)_, i.e. using the shift_ \(u=\delta\) _in the notation of Definition_ 5_. A subgradient bit query_ \(h^{\mathrm{sub}}(\mathcal{M}(\mathbf{x}),\mathbf{g})\) _returns a bit of either_ \(\mathcal{M}(\mathbf{x})\) _or_ \(\mathbf{g}\)_, so there are two cases._ 1. \(h^{\mathrm{sub}}\) _returns the_ \(j^{th}\) _bit of the_ \(k^{th}\) _entry of_ \(\mathcal{M}(\mathbf{x})\)_. Set_ \(B^{\mathrm{sub}}(\cdot)\) _to return exactly that bit of_ \(\mathcal{M}(\mathbf{x})\)_, no matter the input to_ \(B^{\mathrm{sub}}\)_, so_ \(h^{\mathrm{sub}}_{*}\) _may be chosen arbitrarily._ 2. \(h^{\mathrm{sub}}\) _returns the_ \(j^{th}\) _bit of the_ \(k^{th}\) _entry of_ \(\mathbf{g}\)_. Set_ \(B^{\mathrm{sub}}\) _to be the identity map from_ \(\{0,1\}\) _to_ \(\{0,1\}\)_, and set_ \(h^{\mathrm{sub}}_{*}\) _to return the desired bit of_ \(\mathbf{g}\)_._
4. _(Inner product threshold queries) If_ \(\mathcal{H}^{\mathrm{val}}_{n,d}\) _and_ \(\mathcal{H}^{\mathrm{sub}}_{n,d}\) _consist of the inner product threshold queries, take both_ \(B^{\mathrm{val}}\) _and_ \(B^{\mathrm{sub}}\) _to be the identity maps. For function value queries_ \(h^{\mathrm{val}}_{u,c}(v)=sgn(u\cdot(v+\delta)-c)\)_, use_ \[h^{\mathrm{val}}_{*}(v):=h^{\mathrm{val}}_{u,c-u\delta}(v)=sgn(uv-(c-u\delta)),\]
_since then_ \(h_{*}^{\rm val}(v)=sgn(uv-(c-u\delta))=sgn(u\cdot(v+\delta)-c)=h_{u,c}^{\rm val}(v)\) _as desired. For subgradient queries_ \(h_{\mathbf{u}}^{\rm sub}(\mathcal{M}(x),\mathbf{g})=sgn(\langle\mathbf{u}, \mathcal{M}(x),\mathbf{g}\rangle-c)\)_, with_ \(\mathbf{u}\in\mathbb{R}^{n+d}\)_, denote by_ \(\mathbf{u}_{n}\) _the vector of the first_ \(n\) _entries of_ \(\mathbf{u}\)_, and by_ \(\mathbf{u}_{d}\) _the vector of the last_ \(d\) _entries of_ \(\mathbf{u}\)_. One may use_ \[h_{*}^{\rm sub}(\mathbf{g}):=h_{\mathbf{u}_{d},c-\langle\mathbf{u}_{n}, \mathcal{M}(\mathbf{x})\rangle}^{\rm sub}(\mathbf{g})=sgn\Big{(}\langle \mathbf{u}_{d},\mathbf{g}\rangle-(c-\langle\mathbf{u}_{n},\mathcal{M}( \mathbf{x})\rangle)\Big{)},\] _since then we similarly have_ \[h_{*}^{\rm sub}(\mathbf{g})=sgn\Big{(}\langle\mathbf{u}_{d},\mathbf{g} \rangle-(c-\langle\mathbf{u}_{n},\mathcal{M}(\mathbf{x})\rangle)\Big{)}=sgn( \langle\mathbf{u},(\mathcal{M}(x),\mathbf{g})\rangle-c)=h_{\mathbf{u}}^{\rm sub }(\mathcal{M}(x),\mathbf{g})\] _as desired. For these hereditary queries, note that_ \(h_{u,c-u\delta}^{\rm val}\) _and_ \(h_{\mathbf{u}_{d},c-\langle\mathbf{u}_{n},\mathcal{M}(\mathbf{x})\rangle}^{ \rm sub}\) _are indeed in_ \(\mathcal{H}_{0,d}^{\rm val}\) _and_ \(\mathcal{H}_{0,d}^{\rm sub}\)_, respectively, since_ \(u\in\mathbb{R}\)_,_ \(\mathbf{u}_{n}\in\mathbb{R}^{n}\)_, and_ \(\mathbf{u}_{d}\in\mathbb{R}^{d}\)_._
We remark here that \(\mathcal{H}^{bit}\), without the permitted "shifts" allowed in \(\mathcal{H}^{bit^{*}}\), is not hereditary, as it may not satisfy condition (6) for the function values for all \(\delta\); however, any lower bounds obtained with \(\mathcal{H}^{bit^{*}}\) must also hold for \(\mathcal{H}^{bit}\), since the former is a richer class of queries.
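The inner-product-threshold case above is easy to make concrete. The following Python sketch (illustrative only; the helper names `make_hereditary_value_query` and `make_hereditary_subgradient_query` are ours, not from the text) builds the shifted queries \(h^{\rm val}_{u,c-u\delta}\) and \(h^{\rm sub}_{\mathbf{u}_d,\,c-\langle\mathbf{u}_n,\mathcal{M}(\mathbf{x})\rangle}\) of Example 21 and checks numerically that, with \(B^{\rm val}\) and \(B^{\rm sub}\) taken to be the identity maps, they reproduce the full-space responses.

```python
import numpy as np

def make_hereditary_value_query(u, c, delta):
    # Original full-space query acts on the value v + delta; the hereditary query
    # shifts the threshold so that querying the fiber value v gives the same answer.
    return lambda v: np.sign(u * v - (c - u * delta))

def make_hereditary_subgradient_query(u_full, c, Mx):
    # Split u into the block acting on the integer part and the continuous block,
    # and absorb <u_n, M(x)> into the threshold.
    n = len(Mx)
    u_n, u_d = u_full[:n], u_full[n:]
    c_star = c - np.dot(u_n, Mx)
    return lambda g: np.sign(np.dot(u_d, g) - c_star)

# Quick numerical check of identities (6)-(7) with B^val = B^sub = identity.
rng = np.random.default_rng(0)
u, c, delta = 1.7, 0.3, -2.0
Mx = rng.normal(size=3)                        # plays the role of M(x)
u_full = rng.normal(size=3 + 4)
h_val = lambda w: np.sign(u * w - c)           # full-space value query
h_sub = lambda xg: np.sign(np.dot(u_full, xg) - c)
hv_star = make_hereditary_value_query(u, c, delta)
hs_star = make_hereditary_subgradient_query(u_full, c, Mx)
for _ in range(100):
    v, g = rng.normal(), rng.normal(size=4)
    assert hv_star(v) == h_val(v + delta)
    assert hs_star(g) == h_sub(np.concatenate([Mx, g]))
```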
**Definition of the adversary.** We will define **Adv-Cont+** analogously as in the full-information case, now receiving queries in \(\mathcal{H}\) according to the oracle setting considered. **Adv-Cont+** will be \(\varepsilon\)-hard for \(\ell\) rounds answering queries from \(\mathcal{H}\), and after \(\ell-1\) rounds it commits to a single surviving function with optimal value \(>OPT+\varepsilon\). As queries for general oracles using first-order information consist of a point \(\mathbf{z}=(\mathbf{x},\mathbf{y})\in\mathbb{R}^{n}\times\mathbb{R}^{d}\) and a permissible query \(h\in\mathcal{H}\), let us write \((\mathbf{x},\mathbf{y},h)\) for notational simplicity to denote such a query.
We describe here the behavior of **Adv-Cont+** in the general oracle case, and such that it satisfies the same invariant of Lemma 16 as in the full-information case, i.e. it is \(\varepsilon\)-hard for \(\ell\) rounds and only keeps a single surviving function with optimal value at least \(OPT+\varepsilon\) after \(\ell\) queries have been made.
**Procedure 3. Adv-Cont+**
Initialize the set of surviving functions \(S_{0}=\overline{\mathcal{F}}_{cont}\)
For each round \(t=1,2\ldots\):
1. Receive query point \((\mathbf{y}_{t},h_{t})\in\mathbb{R}^{d}\times\mathcal{H}\) from the optimization algorithm.
2. If \(t\leq\ell-1\): Send \((\mathbf{y}_{t},h_{t})\) to the adversary **Adv-Cont**, receiving back the answer \(\alpha\). Obtain \(S_{t}\) by removing from \(S_{t-1}\) the functions \(f\) that are not consistent with this response under any first-order chart \(\mathcal{G}\), namely \(f\) for which \(h_{t}(f(\mathbf{y}_{t}))\neq\alpha\) if \(h_{t}\in\mathcal{H}^{\rm val}\), or for which there does not exist a \(\mathbf{g}_{t}\in\partial f(\mathbf{y}_{t})\) such that \(h_{t}(f(\mathbf{y}_{t}),\mathbf{g}_{t})=\alpha\) if \(h_{t}\in\mathcal{H}^{\rm sub}\). Send the response \(\alpha\) to the optimization algorithm.
3. If \(t=\ell\): Since **Adv-Cont** is \(\varepsilon\)-hard for \(\ell-1\) rounds, there is a finite collection of functions \(\{f_{1},...,f_{k}\}\subset S_{t-1}\cap\mathcal{F}_{cont}\) that do not share an \(\varepsilon\)-solution. Define their pointwise maximum \(f_{\max}=\max\{f_{1},...,f_{k}\}\) and set \(S_{t+j}=\{f_{\max}\}\) for all \(j=0,1,2,\ldots\). Set the value \(v_{t}\) to be \(f_{\max}(\mathbf{y}_{t})\) and set \(\mathbf{g}_{t}\) to be a subgradient in \(\partial f_{\max}(\mathbf{y}_{t})\) (consistent with what the first-order chart \(\mathcal{G}_{0}\) gives for \(f_{1},\ldots,f_{k}\) at \(\mathbf{y}_{t}\), if \(\mathbf{y}_{t}\) has been queried in an earlier round), and send the response \(h_{t}(v_{t})\) or \(h_{t}(\mathbf{g}_{t})\) to the optimization algorithm, according to whether \(h_{t}\in\mathcal{H}^{\rm val}\) or \(h_{t}\in\mathcal{H}^{\rm sub}\), respectively.
4. If \(t>\ell\): Let \(f_{\max}\) be the only function in \(S_{t-1}\). If \(\mathbf{y}_{t}\) was queried in an earlier round \(k\), answer consistently with \((v_{k},\mathbf{g}_{k})\), i.e., send \(h_{t}(v_{k})\) or \(h_{t}(\mathbf{g}_{k})\) as appropriate. Otherwise, as in the step above, set the value \(v_{t}\) to be \(f_{\max}(\mathbf{y}_{t})\) and set
\(\mathbf{g}_{t}\) to be any subgradient in \(\partial f_{\max}(\mathbf{y}_{t})\), and again send the appropriate response \(h_{t}(v_{t})\) or \(h_{t}(\mathbf{g}_{t})\) to the optimization algorithm.
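As a small illustration of the consistency filtering in Step 2 of Procedure 3, here is a minimal Python sketch. It represents each candidate function by a toy piecewise-linear class and samples only finitely many subgradients per point, which is a simplification of \(\partial f(\mathbf{y}_{t})\); a function survives a subgradient query as soon as one sampled subgradient is a consistent witness. All names are ours and nothing here is part of the formal construction.

```python
class PiecewiseLinear1D:
    """max of affine pieces a*t + b on R (stand-in for the candidate functions)."""
    def __init__(self, pieces):
        self.pieces = pieces                       # list of (a, b)
    def value(self, t):
        return max(a * t + b for a, b in self.pieces)
    def subgradients(self, t):
        m = self.value(t)
        return [a for a, b in self.pieces if abs(a * t + b - m) < 1e-12]

def filter_survivors(survivors, y, query, response, kind):
    # Keep exactly the functions consistent with `response` to `query` at y,
    # mirroring Step 2 of Adv-Cont+ (subgradient queries need only one witness).
    kept = []
    for f in survivors:
        if kind == "val":
            ok = query(f.value(y)) == response
        else:
            ok = any(query(g) == response for g in f.subgradients(y))
        if ok:
            kept.append(f)
    return kept

funcs = [PiecewiseLinear1D([(1, 0), (-1, 0)]), PiecewiseLinear1D([(2, -1), (-2, -1)])]
bit = lambda v: int(v >= 0.5)                      # an arbitrary binary query on the value
print(len(filter_survivors(funcs, 0.75, bit, 1, "val")))
```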
Proving that this **Adv-Cont+** satisfies the invariant from Lemma 16 follows exactly the same steps as in the full-information case. Hence, using this **Adv-Cont+** and the same family of functions \(\psi_{\mathtt{F}}\) from (4), we will be able to construct **Adv-MI** for this general oracle case to satisfy a version of Invariant 1, slightly modified for this general case to refine what we mean by functions being consistent with responses given to the more general queries. To achieve this, let **Adv-MI** operate according to the following procedure.
**Procedure 4. Adv-MI**
and for its subgradient we have
\[\mathbf{g}\in\partial\hat{f}_{r(\mathbf{x}_{t})}\implies(\mathbf{M}_{r(\mathbf{x} _{t})},\mathbf{g})\in\partial\psi_{\mathtt{F}}(\mathbf{x}_{t},\mathbf{y}_{t}), \tag{9}\]
by Lemma 18 and (3).
Suppose first that \(h_{t}\in\mathcal{H}^{\mathrm{val}}\) was a function value query, and denote by \(B^{\mathrm{val}}\) and \(h_{*}^{\mathrm{val}}\) the transformation and hereditary query that **Adv-MI** uses, according to definition 20, giving \(B^{\mathrm{val}}(h_{*}^{\mathrm{val}}(v))=h_{t}(v+\delta)\) for all \(v\in\mathbb{R}\). For every collection \(\mathtt{F}=(f_{\mathtt{F}})_{\mathtt{F}}\) of surviving functions \(f_{\mathtt{F}}\in S(\mathtt{F})\), by the consistency guarantee of **Adv-Cont+** (Item 1 of Lemma 16), the function \(f_{r(\mathbf{x}_{t})}\) selected for the fiber \(r(\mathbf{x}_{t})\) has response \(h_{*}^{\mathrm{val}}(f_{r(\mathbf{x}_{t})}(\mathbf{y}_{t}))=\alpha\). Then, from the definition of hereditary queries and (8), we have \(B^{\mathrm{val}}(h_{*}^{\mathrm{val}}(f_{r(\mathbf{x}_{t})}(\mathbf{y}_{t})) )=h_{t}(f_{r(\mathbf{x}_{t})}(\mathbf{y}_{t})+\delta)=h_{t}(\psi_{\mathtt{F}} (\mathbf{x}_{t},\mathbf{y}_{t}))\), and so \(\psi_{\mathtt{F}}\) is indeed consistent with the response \(B^{\mathrm{val}}(\alpha)\) provided by **Adv-MI**.
If instead \(h\in\mathcal{H}^{\mathrm{sub}}\) was a subgradient query, again denote \(B^{\mathrm{sub}}\) and \(h_{*}^{\mathrm{sub}}\) as the appropriate transformation and hereditary query, with \(B^{\mathrm{sub}}(h_{*}^{\mathrm{sub}}(\mathbf{g}))=h(\mathcal{M}(\mathbf{x}), \mathbf{g})\) for all \(\mathbf{g}\in\mathbb{R}^{d}\). Again, for every choice \(\mathtt{F}\) of surviving functions on the fibers, the function \(f_{r(\mathbf{x}_{t})}\) on \(r(\mathbf{x}_{t})\) has \(h_{*}^{\mathrm{sub}}(\mathbf{g})=\alpha\), for some \(\mathbf{g}\in\partial f_{r(\mathbf{x}_{t})}(\mathbf{y}_{t})\). Then, from the definition of hereditary queries and (9), \(B^{\mathrm{sub}}(h_{*}^{\mathrm{sub}}(\mathbf{g}))=h_{t}(\mathbf{M}_{r( \mathbf{x}_{t})},\mathbf{g})=h_{t}(\mathbf{g}_{\psi})\), with \(\mathbf{g}_{\psi}\in\partial\psi_{\mathtt{F}}(\mathbf{x}_{t},\mathbf{y}_{t})\). Hence, whether \(h_{t}\) is a function value or subgradient query, all functions \(\psi_{\mathtt{F}}\) for choices \(\mathtt{F}\) of the surviving functions on the fibers are consistent with the responses given by **Adv-MI**, for the oracle \(\mathcal{O}(\mathcal{G},\mathcal{H})\) with permissible queries \(\mathcal{H}\) and the first-order chart \(\mathcal{G}\) from Invariant 2.
We now prove that **Adv-MI** is \(\varepsilon\)-hard for \(2^{n-1}\ell-1\) rounds, thus proving Theorem 7 in the general case. Since **Adv-MI** makes at most 2 queries to **Adv-Cont+** in every round (Step 2 of Procedure 4), if the optimization algorithm runs for fewer than \(2^{n-1}\cdot\ell\) iterations, there is a fiber \(\mathbf{x}^{*}\in\{0,1\}^{n}\) where **Adv-MI** sent at most \(\ell-1\) hereditary queries to the adversary **Adv-Cont+** of the fiber \(\mathbf{x}^{*}\). Thus, by the guarantee of the latter (Item 2 of Lemma 16), the surviving set \(S(\mathbf{x}^{*})\) has some finite collection of functions \(f_{\mathbf{x}^{*}}^{1},...,f_{\mathbf{x}^{*}}^{k}\) with no common \(\varepsilon\)-approximate solution, and the remainder of the proof follows as in the full-information case, by considering \(\psi_{\mathtt{F}^{1}},...,\psi_{\mathtt{F}^{k}}\) that have \(f_{\mathbf{x}^{*}}^{1},...,f_{\mathbf{x}^{*}}^{k}\) on the fiber \(\mathbf{x}^{*}\), and some functions with optimal value greater than \(OPT+\varepsilon\) on all the other fibers.
## 3 Proof of Theorem 9
To demonstrate Theorem 9, we need to introduce the idea of _information memory_ of any query strategy/algorithm.
**Definition 22**.: _A first-order query strategy with information memory comprises four functions:_
1. \(\phi_{\mathrm{query}}:\{0,1\}^{*}\to[-R,R]^{n}\times[-R,R]^{d}\)__
2. \(\phi_{\mathrm{update}}^{\mathrm{sep}}:\left(\mathbb{R}^{n}\times\mathbb{R}^{d} \right)\times\{0,1\}^{*}\to\{0,1\}^{*}\)__
3. \(\phi_{\mathrm{update}}^{\mathrm{val}}:\mathbb{R}\times\{0,1\}^{*}\to\{0,1\}^{*}\)__
4. \(\phi_{\mathrm{update}}^{\mathrm{sub}}:\left(\mathbb{R}^{n}\times\mathbb{R}^{d} \right)\times\{0,1\}^{*}\to\{0,1\}^{*}\)_,_
_where \(\{0,1\}^{*}\) denotes the set of all binary strings (finite sequences over \(\{0,1\}\)), including the empty string._
_Given access to a first-order chart \(\mathcal{G}\), the query strategy maintains an information memory \(r_{k}\) at every iteration \(k\geq 0\), which is a finite length binary string in \(\{0,1\}^{*}\), with \(r_{0}\) initialized as the empty string. At every iteration \(k=1,2,\ldots\), the query strategy computes \(\mathbf{z}_{k}:=\phi_{\mathrm{query}}(r_{k-1})\) and
updates its memory using either \(r_{k}=\phi_{\text{update}}^{\text{sep}}\left(\mathbf{g}_{\mathbf{z}_{k}}^{ \text{sep}}(\widehat{f},\widehat{C}),r_{k-1}\right)\), \(r_{k}=\phi_{\text{update}}^{\text{val}}\left(\mathbf{g}_{\mathbf{z}_{k}}^{ \text{val}}(\widehat{f},\widehat{C}),r_{k-1}\right)\) or \(r_{k}=\phi_{\text{update}}^{\text{sub}}\left(\mathbf{g}_{\mathbf{z}_{k}}^{ \text{sub}}(\widehat{f},\widehat{C}),r_{k-1}\right)\), where \((\widehat{f},\widehat{C})\) is the unknown true instance. After finitely many iterations, the query strategy does a final computation based on its information memory and reports an \(\varepsilon\)-approximate solution, i.e., there is a final function \(\phi_{\text{fin}}:\{0,1\}^{*}\to\mathbb{Z}^{n}\times\mathbb{R}^{d}\)._
_The information memory complexity of an algorithm for an instance is the maximum length of its information memory \(r_{k}\) over all iterations \(k\) during the processing of this instance._
The following proposition allows us to relate the information memory complexity of first-order algorithms with information complexity under access to a general binary oracle using first-order information.
**Proposition 23**.: _Let \(\mathcal{G}\) be a first-order chart. For any first-order query strategy \(\mathcal{A}\) with information memory that uses \(\mathcal{G}\), there exists a query strategy \(\mathcal{A}^{\prime}\) using the general binary oracle based on \(\mathcal{G}\), such that for any instance \((f,C)\), if \(\mathcal{A}\) stops after \(T\) iterations with information memory complexity \(Q\), \(\mathcal{A}^{\prime}\) stops after making at most \(Q\cdot T\) oracle queries._
_Conversely, for any query strategy \(\mathcal{A}^{\prime}\) using the general binary oracle based on \(\mathcal{G}\), there exists a first-order query strategy \(\mathcal{A}\) with information memory such that for any instance \((f,C)\), if \(\mathcal{A}^{\prime}\) stops after \(T\) iterations, \(\mathcal{A}\) stops after making at most \(T\) iterations with information memory complexity at most \(T\)._
Proof.: Let \(\mathcal{A}\) be a first-order query strategy with information memory. We can simulate \(\mathcal{A}\) by the query strategy whose queries are precisely the bits of the information memory state \(r_{k}\) at each iteration \(k\) of \(\mathcal{A}\). More formally, the query is \(\mathbf{z}=\phi_{\text{query}}(r_{k-1})\) and \(h(\cdot)=(\phi_{\text{update}}^{\text{sep}}(\cdot,r_{k-1}))_{i}\), \(h(\cdot)=(\phi_{\text{update}}^{\text{val}}(\cdot,r_{k-1}))_{i}\), or \(h(\cdot)=(\phi_{\text{update}}^{\text{sub}}(\cdot,r_{k-1}))_{i}\), depending on which type of query was made, where \(i\) indexes different bits of the corresponding binary string.
Conversely, given a query strategy \(\mathcal{A}^{\prime}\) based on the general binary oracle, we can simulate it with a first-order query strategy with information memory where in each iteration, we simply append the new bit queried by \(\mathcal{A}^{\prime}\) to the current state of the memory.
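A minimal sketch of the first direction of Proposition 23: each round of a memory-bounded strategy is simulated by one binary query per bit of the updated memory, giving at most \(Q\cdot T\) queries. The function names mirror Definition 22, but the toy `phi_query`, `phi_update`, and `oracle` used in the demo are invented placeholders, not part of the formal model.

```python
def simulate_with_binary_queries(phi_query, phi_update, memory_bits, oracle, rounds):
    """Simulate one memory-bounded first-order strategy with single-bit queries.

    phi_query(r)        -> query point z computed from the current memory string r
    phi_update(resp, r) -> new memory string (length <= memory_bits)
    oracle(z, h)        -> h(first-order information at z), a single bit/character
    """
    r = ""                                            # empty initial memory
    total_queries = 0
    for _ in range(rounds):
        z = phi_query(r)
        new_bits = []
        for i in range(memory_bits):
            # The binary query asks for bit i of the memory update, given the old memory.
            h = (lambda idx, mem: (lambda resp: phi_update(resp, mem)[idx]))(i, r)
            new_bits.append(oracle(z, h))             # one binary oracle query
            total_queries += 1
        r = "".join(str(b) for b in new_bits)
    return r, total_queries

# Toy demo: the hidden first-order information at z is just f(z) for f(z) = (z - 3)^2;
# the placeholder strategy stores a 4-bit value in its memory.
f = lambda z: (z - 3.0) ** 2
oracle = lambda z, h: h(f(z))
phi_query = lambda r: int(r, 2) if r else 0
phi_update = lambda resp, r: format(min(15, int(resp)), "04b")
print(simulate_with_binary_queries(phi_query, phi_update, 4, oracle, rounds=3))
```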
We need the following result derived from Marsden et al. [11] on information memory complexity.
**Theorem 24**.: [11, Theorem 1] _For every \(\delta\in[0,1/4]\), there is a class of instances \(\mathcal{I}\subseteq\mathcal{I}_{n,d,R,\rho,M}\), where \(n=0\), and a first-order chart \(\mathcal{G}\) such that any first-order query strategy with information memory must have either \(d^{1.25-\delta}\) information memory complexity (in the worst case) or make at least \(\tilde{\Omega}(d^{1+\frac{4}{3}\delta})\) iterations (in the worst case)._
Proof of Theorem 9.: In the case when \(n=0\), we can set \(\delta=\frac{3}{28}\) in Theorem 24 to obtain that any first-order query strategy uses either \(d^{8/7}\) information memory or makes at least \(\tilde{\Omega}(d^{8/7})\) iterations. Using the second part of Proposition 23, we obtain the lower bound of \(\tilde{\Omega}(d^{8/7})\) on the number of queries made by any query strategy using the general binary oracle based on \(\mathcal{G}\).
Applying Theorem 7 enables us to extend the bound to the mixed-integer scenario (\(n>0\)). Further, by integrating this with Corollary 8, we can obtain the desired bound.
## 4 Proof of Theorems 10 and 11
We will use \(B_{\infty}(\mathbf{p},\delta)\) to denote the \(\ell_{\infty}\) ball of radius \(\delta\) centered at \(\mathbf{p}\in\mathbb{R}^{n}\times\mathbb{R}^{d}\), i.e., \(B_{\infty}(\mathbf{p},\delta)=\{\mathbf{z}\in\mathbb{R}^{n}\times\mathbb{R}^ {d}:\|\mathbf{z}-\mathbf{p}\|_{\infty}\leq\delta\}\). Recall that we consider the subclass of instances \((f,C)\in\mathcal{I}_{n,d,R,\rho,M}\) such that the fiber containing the optimal solution also contains a point that is \(\rho\)_-deep in \(C\)_, that is: if \((\mathbf{x}^{*},\mathbf{y}^{*})\in\mathbb{Z}^{n}\times\mathbb{R}^{d}\) is an optimal solution for this instance, then there
is a point \((\mathbf{x}^{*},\bar{\mathbf{y}})\) such that the full-dimensional ball \(B_{\infty}((\mathbf{x}^{*},\bar{\mathbf{y}}),\rho)\) is contained in \(C\). We use \(\mathcal{I}^{deep}_{n,d,R,\rho,M}\) to denote this subclass of instances. We will use \(C_{-\rho}:=\{\mathbf{z}\in C:B_{\infty}(\mathbf{z},\rho)\subseteq C\}\) to denote the set of all \(\rho\)-deep points in \(C\).
Our strategy for proving Theorems 10 and 11 is to: 1) solve the problems using approximate subgradients/separating hyperplanes; 2) use bit queries/inner product sign queries to construct such approximations.
For the first item, we use an algorithm designed by Oertel [13] (see also [3]) based on the concept of a _centerpoint_: this is a point in the convex set where every halfspace supported on it cuts off a significant (mixed-integer) volume of the set. The algorithm maintains an outer relaxation \(P\) of the feasible region \(C\) in every iteration, and repeatedly applies separation or subgradient-based cuts through the centerpoint of \(P\). The assumption that the feasible region contains a ball (in the optimal fiber) establishes a volume lower bound that essentially limits the number of iterations of the algorithm. While the original algorithm in [13, 3] uses exact separation/subgradient oracles, we show, not surprisingly, that approximate ones suffice. To prove Theorem 11, we employ a similar approach. However, due to the continuous nature of the setting, we can obtain a better upper bound compared to Theorem 10 by applying a stronger bound on the centerpoints from Grunbaum [9].
The next item is to construct approximate separation/subgradient oracles by making only a limited number of binary queries on the separating hyperplanes and/or subgradients. In case of bit queries \(\mathcal{H}^{\mathrm{bit}}\) this can be easily done by querying enough bits of the latter. The case of inner product sign queries \(\mathcal{H}^{\mathrm{dir}}\), where we can pick a direction \(\mathbf{a}\) and ask "Is \(\langle\mathbf{a},\mathbf{g}\rangle\geq 0\)?" for the subgradient or separating hyperplane \(\mathbf{g}\), is more interesting. It boils down to approximating the vector \(\mathbf{g}\) (subgradient/separating hyperplane) using few such queries.2
Footnote 2: This is related to (actively) learning the linear classifier whose normal is given by \(\mathbf{g}\)[1]. These methods can perhaps be adapted to our setting, but we present a different and self-contained statement and proof. See the discussion at the end of Section 4.2.
To formalize the first item, we begin by defining three approximate oracles as follows.
**Definition 25**.: _We have the following:_
* _An_ \(\varepsilon\)_-approximate separation oracle_ \(\hat{\mathbf{g}}^{\mathrm{sep}}\) _is such that_ \(\hat{\mathbf{g}}^{\mathrm{sep}}_{\bar{\mathbf{z}}}(f,C)=\mathbf{0}\) _iff_ \(\bar{\mathbf{z}}\) _belongs to_ \(C\)_, and otherwise the cut_ \(\langle\hat{\mathbf{g}}^{\mathrm{sep}}_{\bar{\mathbf{z}}}(f,C),\mathbf{z}\rangle \leq\langle\hat{\mathbf{g}}^{\mathrm{sep}}_{\bar{\mathbf{z}}}(f,C),\bar{ \mathbf{z}}\rangle\) _is valid for all_ \(\varepsilon\)_-deep points_ \(\mathbf{z}\in C_{-\varepsilon}\)_._
* _An_ \(\varepsilon\)_-approximate value cut oracle_ \(\hat{\mathbf{g}}^{\mathrm{sub}}\) _is such that for every_ \(\mathbf{z}\) _such that_ \(\langle\hat{\mathbf{g}}^{\mathrm{sub}}_{\bar{\mathbf{z}}}(f,C),\mathbf{z} \rangle\geq\langle\hat{\mathbf{g}}^{\mathrm{sub}}_{\bar{\mathbf{z}}}(f,C),\bar{ \mathbf{z}}\rangle\)_, we have_ \(f(\mathbf{z})\geq f(\bar{\mathbf{z}})-\varepsilon\)_._
* _An_ \(\varepsilon\)_-approximate value comparison oracle is such that for every function_ \(f:[-R,R]^{n+d}\to[-U,U]\) _and every pair of points_ \(\mathbf{z},\mathbf{z}^{\prime}\) _we obtain the answer to the query "Is_ \(f(\mathbf{z})\leq f(\mathbf{z}^{\prime})+\varepsilon\)_?"._
Then the first item can be formalized as the following.
**Theorem 26**.: _There exists an algorithm that, for any \(M,R>0\), \(0<\varepsilon\leq MR\) and \(\rho>0\), can report an \(\varepsilon\)-approximate solution for every instance in \(\mathcal{I}^{deep}_{n,d,R,\rho,M}\), using at most_
\[O\bigg{(}2^{n}(n+d)d\log\bigg{(}\frac{MR}{\min\{\rho,1\}\varepsilon}\bigg{)} \bigg{)}\]
_oracle calls, given access to any \(\rho^{\prime}\)-approximate separation oracle, \(\varepsilon^{\prime}\)-approximate value cut oracle, and \(\varepsilon^{\prime}\)-approximate value comparison oracle with \(\rho^{\prime}=\frac{\varepsilon^{\prime}\rho}{4MR}\) and \(\varepsilon^{\prime}=\frac{\varepsilon}{6}\)._
_For the continuous setting with \(n=0\), the bound can be improved to_
\[O\bigg{(}d\log\bigg{(}\frac{MR}{\min\{\rho,1\}\varepsilon}\bigg{)}\bigg{)}.\]
We postpone the proof of Theorem 26 to Section 4.1.
The next lemma shows that one can implement the approximate oracles from Definition 25 using bit queries and inner product sign queries.
**Lemma 27**.: _Consider a first-order chart \(\mathcal{G}\). Let \(f:\mathbb{R}^{n+d}\to\mathbb{R}\) be a convex \(M\)-Lipschitz function taking values in \([-U,U]\), and \(C\subseteq[-R,R]^{n+d}\) a convex set._
_Then for every pair of points \(\bar{\mathbf{z}},\bar{\mathbf{z}}^{\prime}\in[-R,R]^{n+d}\), we can obtain an \(\varepsilon\)-approximate separation oracle vector \(\hat{\mathbf{g}}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C)\), an \(\varepsilon\)-approximate value cut vector \(\hat{\mathbf{g}}_{\bar{\mathbf{z}}}^{\mathrm{sub}}(f,C)\), and an \(\varepsilon\)-approximate value comparison between \(\bar{\mathbf{z}}\) and \(\bar{\mathbf{z}}^{\prime}\) using either a sequence of bit queries from \(\mathcal{H}^{\mathrm{bit}}\), or a sequence of inner product sign queries from \(\mathcal{H}^{\mathrm{dir}}\), on the separating hyperplane \(\mathbf{g}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C)\), the subgradient \(\mathbf{g}_{\bar{\mathbf{z}}}^{\mathrm{sub}}(f,C)\) and the function value \(\mathbf{g}_{\bar{\mathbf{z}}}^{\mathrm{val}}(f,C)\). The number of required queries to implement the approximate oracles is \(O\left((n+d)\log\frac{(n+d)R}{\varepsilon}\right)\), \(O\left((n+d)\log\frac{(n+d)MR}{\varepsilon}\right)\) and \(O\left(\log\frac{U}{\varepsilon}\right)\) respectively, for both \(\mathcal{H}^{\mathrm{bit}}\) and \(\mathcal{H}^{\mathrm{dir}}\)._
The proof of Lemma 27 is deferred to Section 4.2.
Proof.: Theorems 10 and 11 follow from Theorem 26 and Lemma 27.
### Proof of Theorem 26
We first describe the centerpoint algorithm for convex optimization due to Oertel [13] (see also [3]). Let the _mixed-integer volume_ of a (Borel) set \(U\subseteq\mathbb{R}^{n+d}\) be \(\mu(U):=\sum_{\mathbf{x}\in\mathbb{Z}^{n}}\mathrm{vol}_{d}(U\cap(\{\mathbf{x}\}\times\mathbb{R}^{d}))\), where \(\mathrm{vol}_{d}\) is the \(d\)-dimensional Lebesgue measure. The following notion is the main element of the algorithm.
**Theorem 28** (Mixed-integer centerpoint [13, 3]).: _For any compact convex set \(C\subseteq\mathbb{R}^{n+d}\), there is a point \(\mathbf{z}\in C\cap(\mathbb{Z}^{n}\times\mathbb{R}^{d})\) (called a mixed-integer centerpoint) such that for every halfspace \(H\) with \(\mathbf{z}\) on its boundary, we have \(\mu(C\cap H)\geq\frac{1}{2^{n}(d+1)}\,\mu(C)\)._
Algorithm 1 below is the centerpoint-based algorithm for solving mixed-integer convex optimization problems from [13], restated in terms of approximate separation \((\hat{\mathbf{g}}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C))\), value cut \((\hat{\mathbf{g}}_{\bar{\mathbf{z}}}^{\mathrm{sub}}(f,C))\) and value comparison oracles.
To analyze this algorithm we need the following technical lemma regarding deep points in instances in \(\mathcal{I}^{deep}_{n,d,R,\rho,M}\).
**Lemma 29**.: _For every instance \((f,C)\) in \(\mathcal{I}^{deep}_{n,d,R,\rho,M}\), and for every \(0<\varepsilon<2MR\), there is an \(\varepsilon\)-approximate solution \(\mathbf{z}\) such that the ball \(B_{\infty}(\mathbf{z},\frac{\varepsilon\rho}{2MR})\) is contained in \(C\)._
Proof.: Let \(\mathbf{z}^{*}=(\mathbf{x}^{*},\mathbf{y}^{*})\) be an optimal solution for the instance, and let \(\bar{\mathbf{z}}=(\mathbf{x}^{*},\bar{\mathbf{y}})\) be such that \(B_{\infty}(\bar{\mathbf{z}},\rho)\) is contained in \(C\). For \(\alpha=\frac{\varepsilon}{2MR}\), we claim that the point \(\mathbf{z}:=(1-\alpha)\mathbf{z}^{*}+\alpha\bar{\mathbf{z}}\) has the desired properties. First, by convexity of \(C\) we have that the desired ball \(B_{\infty}(\mathbf{z},\frac{\varepsilon\rho}{2MR})=(1-\alpha)\mathbf{z}^{*}+ \alpha B_{\infty}(\bar{\mathbf{z}},\rho)\) is contained in \(C\). In addition, since \(\mathbf{z}-\mathbf{z}^{*}=\alpha\cdot(0,\bar{\mathbf{y}}-\mathbf{y}^{*})\) and \(f\) is \(M\)-Lipschitz over the integer fibers, we have
\[f(\mathbf{z})\,\leq\,f(\mathbf{z}^{*})+\alpha M\cdot\|\mathbf{y}^{*}-\bar{ \mathbf{y}}\|_{\infty}\,\leq\,f(\mathbf{z}^{*})+\varepsilon,\]
where the last inequality uses that \(\mathbf{y}^{*},\bar{\mathbf{y}}\in[-R,R]^{d}\) and the definition of \(\alpha\); so \(\mathbf{z}\) is an \(\varepsilon\)-approximate solution. This concludes the proof.
**Algorithm 1**

1. Initialize the version set \(P_{0}:=[-R,R]^{n+d}\) and the collection of feasible points \(F=\emptyset\). For iterations \(t=0,\ldots,T-1\):

   (a) Let \(\mathbf{z}_{t}\in P_{t}\cap(\mathbb{Z}^{n}\times\mathbb{R}^{d})\) be a mixed-integer centerpoint of \(P_{t}\) given by Theorem 28.

   (b) If the \(\rho^{\prime}\)-approximate separation oracle says that \(\mathbf{z}_{t}\) is infeasible for \(C\), add the cut \(\langle\hat{\mathbf{g}}_{\mathbf{z}_{t}}^{\mathrm{sep}}(f,C),\mathbf{z}\rangle\leq\langle\hat{\mathbf{g}}_{\mathbf{z}_{t}}^{\mathrm{sep}}(f,C),\mathbf{z}_{t}\rangle\) to \(P_{t}\), namely set \(P_{t+1}=P_{t}\cap\{\mathbf{z}:\langle\hat{\mathbf{g}}_{\mathbf{z}_{t}}^{\mathrm{sep}}(f,C),\mathbf{z}\rangle\leq\langle\hat{\mathbf{g}}_{\mathbf{z}_{t}}^{\mathrm{sep}}(f,C),\mathbf{z}_{t}\rangle\}\).

   (c) Else, add \(\mathbf{z}_{t}\) to the set of feasible solutions \(F\), and add the cut from the \(\varepsilon^{\prime}\)-approximate value cut oracle, namely set \(P_{t+1}=P_{t}\cap\{\mathbf{z}:\langle\hat{\mathbf{g}}_{\mathbf{z}_{t}}^{\mathrm{sub}}(f,C),\mathbf{z}\rangle\leq\langle\hat{\mathbf{g}}_{\mathbf{z}_{t}}^{\mathrm{sub}}(f,C),\mathbf{z}_{t}\rangle\}\).
2. Finally, return a point \(\hat{\mathbf{z}}\) from \(F\) that has approximately the minimum value among all solutions in \(F\), namely such that \(f(\hat{\mathbf{z}})\leq\min_{\mathbf{z}\in F}f(\mathbf{z})+\varepsilon^{\prime}\). This can be accomplished by asking \(|F|-1\) queries to the \(\varepsilon^{\prime}\)-approximate value comparison oracle.

Proof of Theorem 26.: We show that Algorithm 1 with the number of iterations set as
\[T=2^{n}(n+d)(d+1)\ln\left(\frac{2R}{\min\{\rho^{\prime},1\}}\right)\in O\left(2^{n}(n+d)d\ln\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right)\]
has the desired properties. First, regarding the number of oracle queries performed: in each iteration it performs at most \(2\) approximate separation/value cut queries, and in Step 2 it performs \(|F|-1\leq T\) approximate value comparison queries. In total, the algorithm performs at most \(3T\) queries, giving the desired complexity.

Now we show that the algorithm returns an \(\varepsilon\)-optimal solution. For that, it suffices to show that for this value of \(T\), the set of feasible solutions \(F\) contains an \(\frac{\varepsilon}{2}\)-optimal solution. Using Lemma 29, let \(\bar{\mathbf{z}}\) be an \(\varepsilon^{\prime}\)-approximate solution such that the ball \(B_{\infty}(\bar{\mathbf{z}},\frac{\varepsilon^{\prime}\rho}{2MR})=B_{\infty}(\bar{\mathbf{z}},2\rho^{\prime})\) is contained in \(C\). Thus, the ball \(B_{\infty}(\bar{\mathbf{z}},\rho^{\prime})\) is contained in \(C_{-\rho^{\prime}}\). Since the cut added to \(P_{t}\), whether in Step (b) or (c), goes through the centerpoint \(\mathbf{z}_{t}\), the mixed-integer volume of \(P_{t}\) is reduced by a factor of at least \((1-\frac{1}{2^{n}(d+1)})\) in each iteration (to simplify the notation let \(\alpha:=\frac{1}{2^{n}(d+1)}\)). The definition of \(T\) shows that the last set \(P_{T}\) has mixed-integer volume at most
\[(1-\alpha)^{T}\mu(P_{0})=(1-\alpha)^{T}(2R)^{n+d}\leq e^{-T\alpha}(2R)^{n+d}\leq\left(\min\{\rho^{\prime},1\}\right)^{n+d}\leq\left(\min\{\rho^{\prime},1\}\right)^{d}. \tag{10}\]
Let \(X\) be the intersection of \(B_{\infty}(\bar{\mathbf{z}},\rho^{\prime})\) with the mixed-integer fiber containing \(\bar{\mathbf{z}}\). \(X\) has the same structure as an \(\ell_{\infty}\) ball of radius \(\rho^{\prime}\) in \(\mathbb{R}^{d}\), and thus has volume at least \((2\rho^{\prime})^{d}\), which is strictly bigger than the right-hand side of (10). This means that some mixed-integer point from \(X\) is cut off by one of the hyperplanes applied by the algorithm. However, such a hyperplane cannot be one added in Step (b), since \(B_{\infty}(\bar{\mathbf{z}},\rho^{\prime})\subseteq C_{-\rho^{\prime}}\) and the cuts in that step are valid for \(C_{-\rho^{\prime}}\). Thus, there is an iteration \(t\) that added a Step (c) approximate value cut that cut off a point \(\tilde{\mathbf{z}}\in X\), that is, \(\langle\hat{\mathbf{g}}_{\mathbf{z}_{t}}^{\mathrm{sub}}(f,C),\tilde{\mathbf{z}}\rangle>\langle\hat{\mathbf{g}}_{\mathbf{z}_{t}}^{\mathrm{sub}}(f,C),\mathbf{z}_{t}\rangle\). Since this is an \(\varepsilon^{\prime}\)-approximate value cut, we get that \(f(\tilde{\mathbf{z}})\geq f(\mathbf{z}_{t})-\varepsilon^{\prime}\).

Since \(f\) is \(M\)-Lipschitz on the fiber containing \(\bar{\mathbf{z}}\) and \(\tilde{\mathbf{z}}\), and the \(\ell_{\infty}\) distance between \(\bar{\mathbf{z}}\) and \(\tilde{\mathbf{z}}\) is at most \(\rho^{\prime}\), we get
\[f(\mathbf{z}_{t})\leq f(\tilde{\mathbf{z}})+\varepsilon^{\prime}\leq f(\bar{\mathbf{z}})+\rho^{\prime}M+\varepsilon^{\prime}\leq\mathrm{OPT}+2\varepsilon^{\prime}+\rho^{\prime}M\leq\mathrm{OPT}+\frac{\varepsilon}{2},\]
where the last inequality uses that \(\rho^{\prime}=\frac{\varepsilon^{\prime}\rho}{4MR}\leq\frac{\varepsilon^{\prime}}{M}\) and that \(\varepsilon^{\prime}=\frac{\varepsilon}{6}\).
This shows that the set of feasible solutions \(F\) contains an \(\frac{\varepsilon}{2}\)-approximate solution, namely the above \(\mathbf{z}_{t}\), as desired. This concludes the proof for the mixed-integer setting.
For the continuous setting with \(n=0\), the improved bound follows from using an improved bound on centerpoints due to Grunbaum [9]. Specifically, \(\alpha\) in the left-hand side of (10) can be taken to be \(\frac{1}{e}\), where \(e\) is Euler's number.
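For intuition only, the following Python sketch mimics the cut logic of Algorithm 1 in the continuous case \(n=0\). It is not the algorithm of Theorem 26: the exact (Grunbaum-type) centerpoint is replaced by a Monte-Carlo estimate of the center of gravity of the current version set, and exact separation/subgradient oracles are used in place of the approximate ones; all function names are ours.

```python
import numpy as np

def cutting_plane_min(f, grad, in_C, sep, R, d, iters, n_samples=20000, seed=0):
    """Centerpoint-style cutting planes for min f over C inside [-R, R]^d (n = 0 case).

    Illustrative sketch: the centerpoint is approximated by the mean of uniform
    samples that survive all previous cuts, and `grad`/`sep` are exact oracles.
    """
    rng = np.random.default_rng(seed)
    cuts = []                                     # list of (a, b): keep {z : a.z <= b}
    best, best_val = None, np.inf
    for _ in range(iters):
        pts = rng.uniform(-R, R, size=(n_samples, d))
        mask = np.ones(n_samples, dtype=bool)
        for a, b in cuts:
            mask &= pts @ a <= b
        if not mask.any():                        # version set too small to sample
            break
        z = pts[mask].mean(axis=0)                # rough stand-in for the centerpoint
        if not in_C(z):
            a = sep(z)                            # separation cut through z
        else:
            if f(z) < best_val:
                best, best_val = z, f(z)
            a = grad(z)                           # objective (value) cut through z
        cuts.append((a, float(a @ z)))
    return best, best_val

# Toy run: minimize ||z - z0||^2 over the unit Euclidean ball in d = 2.
z0 = np.array([0.9, 0.0])
f = lambda z: float(np.sum((z - z0) ** 2))
grad = lambda z: 2 * (z - z0)
in_C = lambda z: np.linalg.norm(z) <= 1.0
sep = lambda z: z / np.linalg.norm(z)
print(cutting_plane_min(f, grad, in_C, sep, R=2.0, d=2, iters=40))
```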
### Proof of Lemma 27
The proof will be divided into two parts: first, we show how to obtain approximate oracles using bit queries, after which we show how to do the same using inner product sign queries.
**Obtaining approximate oracles using bit queries.** Since \(f\) is \(M\)-Lipschitz with respect to the \(\ell_{\infty}\) norm, any subgradient has \(\ell_{\infty}\) norm at most \(M\). Thus, letting \(\varepsilon^{\prime}:=\frac{\varepsilon}{2(n+d)R}\), we will query the sign and the bits indexed by the integers \(\lceil\log M\rceil,\lceil\log M\rceil-1,\ldots,-\lfloor\log\frac{1}{\varepsilon^{\prime}}\rfloor\) of each coordinate of \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(f,C)\) (nonnegative integers index the bits before the decimal, and negative integers index the bits after the decimal in the binary representation). This can be done by querying the bits of \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(f,C)\) for a total of \((n+d)(\log\frac{M}{\varepsilon^{\prime}}+2)\) queries - for each coordinate, one queries \(\log M+\log\frac{1}{\varepsilon^{\prime}}+1\) bits for the desired precision and one additional bit for the sign. This gives a vector \(\hat{\mathbf{g}}_{\mathbf{z}}^{\mathrm{sub}}(f,C)\) such that \(\|\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(f,C)-\hat{\mathbf{g}}_{\mathbf{z}}^{\mathrm{sub}}(f,C)\|_{\infty}\leq\sum_{i>\log\frac{1}{\varepsilon^{\prime}}}\frac{1}{2^{i}}\leq\varepsilon^{\prime}\).
Then \(\hat{\mathbf{g}}_{\mathbf{z}}^{\mathrm{sub}}(f,C)\) is an \(\varepsilon\)-approximate value cut. Let \(\mathbf{g}:=\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(f,C)\) and \(\hat{\mathbf{g}}:=\hat{\mathbf{g}}_{\mathbf{z}}^{\mathrm{sub}}(f,C)\) to simplify notation. For every \(\mathbf{z}\in[-R,R]^{n+d}\) such that \(\langle\hat{\mathbf{g}},\mathbf{z}\rangle\geq\langle\hat{\mathbf{g}},\bar{ \mathbf{z}}\rangle\) we have by convexity of \(f\)
\[f(\mathbf{z})-f(\bar{\mathbf{z}})\geq\langle\mathbf{g},\mathbf{z }-\bar{\mathbf{z}}\rangle =\underbrace{\langle\hat{\mathbf{g}},\mathbf{z}-\bar{\mathbf{z}} \rangle}_{\geq 0}+\langle\mathbf{g}-\hat{\mathbf{g}},\mathbf{z}-\bar{\mathbf{z}}\rangle\] \[\geq-\|\mathbf{g}-\hat{\mathbf{g}}\|_{\infty}\cdot\|\mathbf{z}- \bar{\mathbf{z}}\|_{1}\] \[\geq-2\varepsilon^{\prime}(n+d)R\,=\,-\varepsilon, \tag{11}\]
where the second inequality follows from Hölder's inequality, and so \(\hat{\mathbf{g}}\) has the desired property.
For \(\hat{\mathbf{g}}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C)\), recall that by assumption the separating vector \(\mathbf{g}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C)\) has unit length, and hence \(\|\mathbf{g}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C)\|_{\infty}\leq 1\). Then querying the sign and the bits indexed by \(0,-1,\ldots,-\log\frac{1}{\varepsilon^{\prime}}\) of each coordinate of \(\mathbf{g}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C)\) we obtain a vector \(\hat{\mathbf{g}}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C)\) such that \(\|\mathbf{g}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C)-\hat{\mathbf{g}}_{\bar{ \mathbf{z}}}^{\mathrm{sep}}(f,C)\|_{\infty}\leq\varepsilon^{\prime}\).
We claim that \(\hat{\mathbf{g}}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C)\) is an \(\varepsilon\)-approximate separation oracle, namely the inequality \(\langle\hat{\mathbf{g}}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C),\mathbf{z} \rangle\leq\langle\hat{\mathbf{g}}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C), \bar{\mathbf{z}}\rangle\) holds for all \(\mathbf{z}\in C_{-\varepsilon}\). As before, to simplify the notation we use \(\mathbf{g}:=\mathbf{g}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C)\) and \(\hat{\mathbf{g}}:=\hat{\mathbf{g}}_{\bar{\mathbf{z}}}^{\mathrm{sep}}(f,C)\). For every \(\mathbf{z}\in[-R,R]^{n+d}\) we have
\[\langle\hat{\mathbf{g}},\mathbf{z}\rangle=\langle\mathbf{g},\mathbf{z} \rangle+\langle\hat{\mathbf{g}}-\mathbf{g},\mathbf{z}\rangle\leq\langle\mathbf{ g},\mathbf{z}\rangle+\|\hat{\mathbf{g}}-\mathbf{g}\|_{\infty}\cdot\| \mathbf{z}\|_{1}\leq\langle\mathbf{g},\mathbf{z}\rangle+\varepsilon^{\prime}R( n+d)=\langle\mathbf{g},\mathbf{z}\rangle+\frac{\varepsilon}{2}. \tag{12}\]
Now we claim that for every point \(\mathbf{z}\in C_{-\varepsilon}\) we have \(\langle\mathbf{g},\mathbf{z}\rangle\leq\langle\mathbf{g},\bar{\mathbf{z}} \rangle-\varepsilon\): since the inequality \(\langle\mathbf{g},\mathbf{x}\rangle\leq\langle\mathbf{g},\bar{\mathbf{z}}\rangle\) is valid for the ball \(B(\mathbf{z},\varepsilon)\subseteq C\), we have
\[\langle\mathbf{g},\bar{\mathbf{z}}\rangle\geq\max_{\mathbf{w}\in B(0, \varepsilon)}\langle\mathbf{g},\mathbf{z}+\mathbf{w}\rangle=\langle\mathbf{g},\mathbf{z}\rangle+\varepsilon, \tag{13}\]
proving the claim. Finally, we claim that \(\langle\mathbf{g},\bar{\mathbf{z}}\rangle\leq\langle\hat{\mathbf{g}},\bar{ \mathbf{z}}\rangle+\frac{\varepsilon}{2}\):
\[\langle\mathbf{g},\bar{\mathbf{z}}\rangle-\langle\hat{\mathbf{g}},\bar{ \mathbf{z}}\rangle=\langle\mathbf{g}-\hat{\mathbf{g}},\bar{\mathbf{z}}\rangle\leq \|\mathbf{g}-\hat{\mathbf{g}}\|_{\infty}\cdot\|\bar{\mathbf{z}}\|_{1}\leq \varepsilon^{\prime}R(n+d)=\frac{\varepsilon}{2}. \tag{14}\]
Combining inequalities (12)-(14) proves that the cut \(\langle\hat{\mathbf{g}},\mathbf{z}\rangle\leq\langle\hat{\mathbf{g}},\bar{ \mathbf{z}}\rangle\) is valid for \(C_{-\varepsilon}\).
To obtain the \(\varepsilon\)-approximate value comparison oracle, since \(f\) takes values in \([-U,U]\), it suffices to probe the sign plus \(\log\frac{2U}{\varepsilon}\) bits of \(f(\bar{\mathbf{z}})\) and \(f(\bar{\mathbf{z}}^{\prime})\) to approximate each of the values within \(\pm\frac{\varepsilon}{2}\), in which case we can decide whether \(f(\bar{\mathbf{z}})\leq f(\bar{\mathbf{z}}^{\prime})+\varepsilon\) or not. This concludes the proof when using bit queries.
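A short Python sketch of the bit-query construction just described (the helper names are ours); each call to `bit_of` or `sign_of` stands for a single binary query on one coordinate, and the reconstruction error per coordinate is of the order of `eps_prime`.

```python
import math

def bit_of(x, i):
    # Bit i of |x| in binary (value = sum_i b_i * 2^i); models one bit query.
    return int(math.floor(abs(x) / 2.0 ** i)) % 2

def sign_of(x):
    # One extra query per coordinate for the sign.
    return -1.0 if x < 0 else 1.0

def approx_vector_by_bits(g, M, eps_prime):
    """Recover each coordinate of g (with |g_j| <= M) from its sign and the bits
    indexed ceil(log2 M), ..., -floor(log2(1/eps_prime)); roughly
    log2(M/eps_prime) + 2 queries per coordinate."""
    hi = max(0, math.ceil(math.log2(M)))
    lo = -math.floor(math.log2(1.0 / eps_prime))
    g_hat = []
    for x in g:
        val = sum(bit_of(x, i) * 2.0 ** i for i in range(lo, hi + 1))
        g_hat.append(sign_of(x) * val)
    return g_hat

g = [1.73, -0.4142, 3.0]
print(approx_vector_by_bits(g, M=4.0, eps_prime=1e-3))
```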
**Obtaining approximate oracles using inner product sign queries.** This proof largely follows the same steps as for the first part, except for the need for the following result, which may be of independent interest.
**Lemma 30**.: _For any \(\varepsilon\in(0,1)\) and any nonzero vector \(\mathbf{g}\in\mathbb{R}^{d}\), using \(O(d\log\frac{d}{\varepsilon})\) inner product sign queries one can obtain a unit-length vector \(\hat{\mathbf{g}}\in\mathbb{R}^{d}\) such that \(\left\|\hat{\mathbf{g}}-\frac{\mathbf{g}}{\left\|\mathbf{g}\right\|}\right\|\leq\varepsilon\)._
Proof.: We prove by induction on the dimension \(d\) that for every \(\delta\in(0,2)\), with \(d\log\frac{8}{\delta}\) inner product sign queries we can obtain a vector \(\hat{\mathbf{g}}\) such that \(\left\|\hat{\mathbf{g}}-\frac{\mathbf{g}}{\left\|\mathbf{g}\right\|}\right\|\leq 2d\delta\); the lemma then follows by setting \(\delta=\frac{\varepsilon}{2d}\).
Just one query suffices when \(d=1\), so consider the base case \(d=2\). Perform a binary search as follows: Start with the cone \(K_{0}=\mathbb{R}^{2}\), with corresponding angle \(2\pi\). In iteration \(t\), we maintain a cone \(K_{t}\) containing \(\mathbf{g}\) whose angle is half that of \(K_{t-1}\) as follows. For each iteration, find a line \(\{\mathbf{x}:\langle\mathbf{a},\mathbf{x}\rangle=0\}\) that cuts \(K_{t}\) into two cones \(K_{t}^{L}=K_{t}\cap\{\mathbf{x}:\langle\mathbf{a},\mathbf{x}\rangle\leq 0\}\) and \(K_{t}^{R}=K_{t}\cap\{\mathbf{x}:\langle\mathbf{a},\mathbf{x}\rangle\geq 0\}\) each with half the angle of \(K_{t}\), i.e. bisecting \(K_{t}\). Ask the query "Is \(\langle\mathbf{a},\mathbf{g}\rangle\geq 0\)?", and if so set \(K_{t+1}=K_{t}^{R}\), otherwise set to \(K_{t+1}=K_{t}^{L}\), and repeat the procedure. By construction all the cones \(K_{t}\) contain \(\mathbf{g}\), and after \(\log\frac{8}{\delta}\) iterations we obtain a cone \(K\) with angle \(\frac{\delta\pi}{4}\). Let \(\hat{\mathbf{g}}\) be any vector in this cone with unit \(\ell_{2}\)-norm. For any other \(\mathbf{x}\in K\) also of unit norm, we have
\[\left\|\hat{\mathbf{g}}-\mathbf{x}\right\|_{2}^{2}=2-2\langle\hat{\mathbf{g} },\mathbf{x}\rangle\leq 2-2\cos(\delta\pi/4)\leq(\delta\pi/4)^{2}\leq \delta^{2},\]
where the second inequality uses the fact that \(\cos(\theta)\geq 1-\frac{\theta^{2}}{2}\) for all \(\theta\in(0,\pi/2)\). So \(\|\hat{\mathbf{g}}-\mathbf{x}\|_{2}\leq\delta\) for all unit-norm vectors in \(K\), and in particular \(\hat{\mathbf{g}}\) gives the desired approximation of \(\frac{\mathbf{g}}{\left\|\mathbf{g}\right\|_{2}}\), proving the desired result when \(d=2\).
Now consider the general case \(d>2\). Consider any \(2\)-dimensional subspace \(A\) of \(\mathbb{R}^{d}\), and let \(\Pi_{A}\) denote the projection onto this subspace. Using the \(2\)-dimensional case on the subspace \(A\), we see that by using \(\log\frac{8}{\delta}\) queries of the form "Is \(\langle\mathbf{a},\Pi_{A}\mathbf{g}\rangle\geq 0\)?", we can obtain a unit length vector \(\tilde{\mathbf{g}}\in A\) such that \(\left\|\lambda_{A}\cdot\tilde{\mathbf{g}}-\Pi_{A}\mathbf{g}\right\|\leq\delta \|\Pi_{A}\mathbf{g}\|\), where \(\lambda_{A}:=\left\|\Pi_{A}\mathbf{g}\right\|\). We note that since \(\langle\mathbf{a},\Pi_{A}\mathbf{g}\rangle=\langle\Pi_{A}^{*}\mathbf{a}, \mathbf{g}\rangle\), the required queries can be obtained by inner product sign queries (here \(\Pi_{A}^{*}\) denotes the adjoint linear operator for the projection operator \(\Pi_{A}\), whose matrix representation is given by the transpose of the matrix representing the projection \(\Pi_{A}\)).
Now consider the \((d-1)\)-dimensional subspace \(B:=\operatorname{span}\{\tilde{\mathbf{g}},A^{\perp}\}\), and notice that \(\operatorname{dist}(\mathbf{g},B)\leq\delta\|\mathbf{g}\|\): the vector \(\mathbf{b}:=\lambda_{A}\cdot\tilde{\mathbf{g}}+(\mathbf{g}-\Pi_{A}\mathbf{g})\) belongs to \(B\) and \(\|\mathbf{g}-\mathbf{b}\|=\|\lambda_{A}\cdot\tilde{\mathbf{g}}-\Pi_{A}\mathbf{ g}\|\leq\delta\|\Pi_{A}\mathbf{g}\|\leq\delta\|\mathbf{g}\|\). Since \(\mathbf{g}\) is close to this subspace, we project it there and recurse on dimension. More precisely, consider the projection \(\Pi_{B}\mathbf{g}\) of \(\mathbf{g}\) onto \(B\), and inductively obtain a vector \(\hat{\mathbf{g}}\in B\) such that \(\left\|\lambda_{B}\cdot\hat{\mathbf{g}}-\Pi_{B}\mathbf{g}\right\|_{2}\leq 2(d-1) \delta\cdot\left\|\Pi_{B}\mathbf{g}\right\|\) (letting \(\lambda_{B}:=\left\|\Pi_{B}\mathbf{g}\right\|\)), by using additional \((d-1)\log\frac{8}{\delta}\) queries (for a total of \(d\log\frac{8}{\delta}\) queries).
We claim that \(\hat{\mathbf{g}}\) is the desired approximation of \(\mathbf{g}\), namely \(\|\frac{\mathbf{g}}{\left\|\mathbf{g}\right\|}-\hat{\mathbf{g}}\|_{2}\leq 2d\delta\). To see this, from triangle inequality we have
\[\left\|\mathbf{g}-\left\|\mathbf{g}\right\|\cdot\hat{\mathbf{g}}\right\|\leq \left\|\mathbf{g}-\Pi_{B}\mathbf{g}\right\|+\left\|\Pi_{B}\mathbf{g}-\lambda_{ B}\cdot\hat{\mathbf{g}}\right\|+\left\|\lambda_{B}\cdot\hat{\mathbf{g}}-\left\| \mathbf{g}\right\|\cdot\hat{\mathbf{g}}\right\|. \tag{15}\]
The first term of the right-hand side equals \(\operatorname{dist}(\mathbf{g},B)\), which is at most \(\delta\|\mathbf{g}\|\) as argued above. For the second term, by induction we have
\[\|\Pi_{B}\mathbf{g}-\lambda_{B}\cdot\hat{\mathbf{g}}\|_{2}\leq 2(d-1)\delta\cdot\|\Pi_{B} \mathbf{g}\|\leq 2(d-1)\delta\|\mathbf{g}\|.\]
Finally, we claim that the last term of (15) is at most \(\delta\|\mathbf{g}\|\): since \(\hat{\mathbf{g}}\) has unit norm, it equals \(\left|\lambda_{B}-\|\mathbf{g}\|\right|=\left|\,\|\Pi_{B}\mathbf{g}\|-\|\mathbf{g}\|\,\right|=\|\mathbf{g}\|-\|\Pi_{B}\mathbf{g}\|\), and by triangle inequality we have
\[\|\mathbf{g}\|\leq\|\Pi_{B}\mathbf{g}\|+\|\mathbf{g}-\Pi_{B}\mathbf{g}\|\leq\| \Pi_{B}\mathbf{g}\|+\delta\|\mathbf{g}\|,\]
giving the claim.
Applying all these bounds to (15), we get that \(\left\|\mathbf{g}-\|\mathbf{g}\|\cdot\hat{\mathbf{g}}\right\|\leq 2d\delta\| \mathbf{g}\|\), as desired. This concludes the proof of the lemma.
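The two-dimensional base case of Lemma 30 is essentially a binary search on the angle of \(\mathbf{g}\). The following Python sketch (names ours) implements that bisection with \(\lceil\log_{2}(8/\delta)\rceil\) sign queries and returns a unit vector from the final cone.

```python
import math

def approx_direction_2d(sign_query, delta):
    """Binary search on the angle of an unknown nonzero g in R^2, using only queries
    "is <a, g> >= 0?" (the sign_query callback). Returns a unit vector whose angle
    lies in the final cone of width at most delta*pi/4."""
    lo, hi = 0.0, 2.0 * math.pi                   # cone [lo, hi] of angles containing g
    for _ in range(math.ceil(math.log2(8.0 / delta))):
        mid = 0.5 * (lo + hi)
        # With a = (-sin(mid), cos(mid)), <a, g> = ||g|| * sin(angle(g) - mid), so the
        # query answers whether angle(g) lies in [mid, mid + pi]; this bisects the cone.
        a = (-math.sin(mid), math.cos(mid))
        if sign_query(a):
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    return (math.cos(theta), math.sin(theta))

# Demo: recover the direction of a hidden vector from sign queries alone.
g_secret = (-2.0, 0.7)
queries = []
def sq(a):
    queries.append(a)
    return a[0] * g_secret[0] + a[1] * g_secret[1] >= 0.0
u = approx_direction_2d(sq, delta=1e-3)
norm = math.hypot(*g_secret)
print(u, (g_secret[0] / norm, g_secret[1] / norm), len(queries))
```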
Now we are ready to finish the proof. To obtain an \(\varepsilon\)-approximate value comparison oracle using \(\mathcal{H}^{\mathrm{dir}}\), we can do a binary search on the function values using the queries \(h_{u,c}^{\mathrm{val}}\) with \(u=1\) and different values of \(c\) (as the midpoint of the interval in the binary search) - see Definition 5. Thus, with \(O(\log\frac{U}{\varepsilon})\) queries, we can implement an \(\varepsilon\)-approximate value comparison oracle.
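For completeness, a tiny sketch of that binary search on the function value using threshold queries with \(u=1\) (names ours; illustrative only).

```python
def approx_value(threshold_query, U, eps):
    """Locate f(z) in [-U, U] to within eps/2 using queries "is f(z) >= c?"
    (inner product threshold queries with u = 1); O(log(U/eps)) queries."""
    lo, hi = -U, U
    while hi - lo > eps / 2.0:
        c = 0.5 * (lo + hi)
        if threshold_query(c):
            lo = c
        else:
            hi = c
    return 0.5 * (lo + hi)

# Comparing f(z) and f(z') up to eps then reduces to two such searches.
fz = 0.377
print(approx_value(lambda c: fz >= c, U=1.0, eps=1e-3))
```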
For the \(\varepsilon\)-approximate separation, we apply Lemma 30 above to the separation oracle \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(f,C)\), with \(O\big{(}(n+d)\log\frac{(n+d)R}{\varepsilon}\big{)}\) inner product sign queries to obtain a vector \(\hat{\mathbf{g}}_{\mathbf{z}}^{\mathrm{sep}}(f,C)\) such that \(\|\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(f,C)-\hat{\mathbf{g}}_{\mathbf{z}}^ {\mathrm{sep}}(f,C)\|\leq\frac{\varepsilon}{2R\sqrt{n+d}}\) (recall the non-zero separation oracles are assumed to have unit length). Using the same arguments as in inequalities (12)-(14), we see that \(\hat{\mathbf{g}}_{\mathbf{z}}^{\mathrm{sep}}(f,C)\) gives a cut valid for \(C_{-\varepsilon}\), and hence is an \(\varepsilon\)-approximate separation oracle.
For the \(\varepsilon\)-approximate value cut oracle, we do the same thing, but apply Lemma 30 to \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(f,C)\), with \(O\left((n+d)\log\left(\frac{(n+d)MR}{\varepsilon}\right)\right)\) oracle calls to obtain an approximation \(\|\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(f,C)-\hat{\mathbf{g}}_{\mathbf{z}}^ {\mathrm{sub}}(f,C)\|\leq\frac{\varepsilon}{2MR\sqrt{n+d}}\) and then use the argument from (11).
**Remark 31**.: _Notice that the main ingredient for implementing approximate oracles using inner product sign queries is to use such queries for approximating a given vector \(\mathbf{g}\) (Lemma 30). We remark that this task is related to (actively) learning a linear classifier, namely that whose normal is given by \(\mathbf{g}\). While there are existing procedures for doing this, they only guarantee the desired approximation with high probability (albeit on a slightly weaker query model), instead of with probability 1 as we want here; see for example [1]. While it is possible that these methods can be adapted to our setting, we present a different, self-contained statement and proof._
## 5 Proofs of Theorem 12 and Corollary 13
Proof of Theorem 12.: Suppose we have an algorithm \(\mathcal{A}\) that reports an \(\varepsilon\)-solution to any instance in \(\mathcal{I}_{n,d,R,\rho,M}\) after \(u\) queries to a full-information first-order oracle based on the first-order chart \(\mathcal{G}\), a finite set of instances \(\mathcal{I}\subseteq\mathcal{I}_{n,d,R,\rho,M}\), and a true (unknown) instance \(I\in\mathcal{I}\). Our goal is to report a feasible \(\varepsilon\)-solution using few queries to \(\mathcal{O}(\mathcal{G},\mathcal{H})\), where \(\mathcal{H}\) contains all binary queries. For this, we design a procedure that maintains a family \(\mathcal{U}\subseteq\mathcal{I}\) of the instances, which always includes the true instance \(I\), and possibly determines exact information to pass to \(\mathcal{A}\). We will show that we can always either reduce \(|\mathcal{U}|\) by a constant factor, or determine exact information to use with \(\mathcal{A}\).
Denote by \(D\) the query strategy of \(\mathcal{A}\). Initialize \(\mathcal{U}=\mathcal{I}\), and (ordered) lists \(Q=\emptyset,H=\emptyset\), which will serve as query-response pairs for the algorithm \(\mathcal{A}\). In particular, \(H\) will contain full first-order information about the true instance, and \(Q\) will be the sequence of queries \(\mathcal{A}\) makes. While \(|\mathcal{U}|>1\) and \(|Q|\leq u\), do the following:
* Set \(\mathbf{z}=D(Q,H)\), and query whether \(\mathbf{z}\) is feasible. Let us simply write \(\mathbf{g}_{\mathbf{z}}(I)\) to mean \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}(I)\) if \(\mathbf{z}\) is infeasible for the instance \(I\), and \(\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}(I)\) or \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}(I)\) if feasible, where \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sep}}\), \(\mathbf{g}_{\mathbf{z}}^{\mathrm{val}}\) and \(\mathbf{g}_{\mathbf{z}}^{\mathrm{sub}}\) are the first-order maps for separating hyperplane, function value and subgradient, respectively, used by the general binary oracle at \(\mathbf{z}\). Write \(V_{\mathbf{z}}\) to be the appropriate codomain.
* **Case 1:** For every \(\mathbf{v}\in V_{\mathbf{z}}\), at most half of the instances \(I^{\prime}\in\mathcal{U}\) give \(\mathbf{g}_{\mathbf{z}}(I^{\prime})=\mathbf{v}\). Then there exists a set \(A\subseteq V_{\mathbf{z}}\) such that the number of instances \(I^{\prime}\in\mathcal{U}\) with \(\mathbf{g}_{\mathbf{z}}(I^{\prime})\in A\) is between \(\frac{1}{4}|\mathcal{U}|\) and \(\frac{3}{4}|\mathcal{U}|\). Let \(\mathcal{U}_{0}:=\{I^{\prime}\in\mathcal{U}:\mathbf{g}_{\mathbf{z}}(I^{\prime})\not\in A\}\), \(\mathcal{U}_{1}:=\{I^{\prime}\in\mathcal{U}:\mathbf{g}_{\mathbf{z}}(I^{\prime})\in A\}\); thus, \(|\mathcal{U}_{i}|\leq\frac{3}{4}|\mathcal{U}|\) for \(i=0,1\). Query whether the true instance \(I\) has \(\mathbf{g}_{\mathbf{z}}(I)\in A\), using the binary query \(h_{A}:h_{A}(\mathbf{v})=1\) iff \(\mathbf{v}\in A\), so that \(h_{A}(\mathbf{g}_{\mathbf{z}}(I))=0\) if \(I\in\mathcal{U}_{0}\) and \(h_{A}(\mathbf{g}_{\mathbf{z}}(I))=1\) if \(I\in\mathcal{U}_{1}\). Update \(\mathcal{U}\leftarrow\mathcal{U}_{q}\), where \(q\) is the answer to the query \((\mathbf{z},h_{A})\) given by the oracle.
* **Case 2:** There exists \(\bar{\mathbf{v}}\in V_{\mathbf{z}}\) such that more than half of the instances \(I^{\prime}\in\mathcal{U}\) have \(\mathbf{g}_{\mathbf{z}}(I^{\prime})=\bar{\mathbf{v}}\). Query whether the true instance \(I\) has \(\mathbf{g}_{\mathbf{z}}(I)=\bar{\mathbf{v}}\), using the binary query \(h:h(\bar{\mathbf{v}})=1\) and \(h(\mathbf{x})=0\) for all other inputs \(\mathbf{x}\neq\bar{\mathbf{v}}\), so that \(h(\mathbf{g}_{\mathbf{z}}(I))=1\) iff \(\mathbf{g}_{\mathbf{z}}(I)=\bar{\mathbf{v}}\). If \(\mathbf{g}_{\mathbf{z}}(I)\neq\bar{\mathbf{v}}\), then update \(\mathcal{U}\) by removing from it all instances \(I^{\prime}\) such that \(\mathbf{g}_{\mathbf{z}}(I^{\prime})=\bar{\mathbf{v}}\), reducing the size of \(\mathcal{U}\) by at least half. Otherwise, we then know the exact separating hyperplane (if \(\mathbf{z}\) was infeasible) or the exact function value or subgradient (if \(\mathbf{z}\) is feasible) for the true instance \(I\) and the first-order chart \(\mathcal{G}\), namely \(\mathbf{g}_{\mathbf{z}}(I)=\bar{\mathbf{v}}\), and so employ it to update \(Q\) and \(H\) by appending \(\mathbf{z}\) and \(\bar{\mathbf{v}}\) to them, respectively, which will serve as information for the algorithm \(\mathcal{A}\).
In each step, either at least a quarter of the instances in \(\mathcal{U}\) are removed, or full (exact) first-order information at the query point determined by the query strategy of \(\mathcal{A}\) is obtained and \(Q,H\) are updated. The former can only happen \(O(\log|\mathcal{I}|)\) times until \(\mathcal{U}\) becomes a singleton, in which case we know the true instance and can report its optimal solution, while if the latter happens \(u\) times, one can run the algorithm \(\mathcal{A}\) with the information \((Q,H)\) to report an \(\varepsilon\)-approximate solution to the true instance \(I\), noting that since the points in \(Q\) were determined according to the query strategy of the algorithm, the information in \((Q,H)\) is indeed sufficient to run \(\mathcal{A}\) for \(u\) iterations. Hence, after at most \(\log|\mathcal{I}|+u\) queries to the general binary oracle, one can report an \(\varepsilon\)-solution to the true instance.
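To illustrate one step of this procedure, here is a minimal Python sketch (names ours) of the halve-or-learn dichotomy: given the current candidate set and a query point, it either issues one binary query that removes roughly a quarter of the candidates, or reveals the exact response of the true instance.

```python
from collections import Counter

def halve_or_learn(U, z, g_map, ask):
    """One step of the Theorem 12 procedure on the candidate set U.

    g_map(I, z): the (hashable) first-order response of instance I at point z.
    ask(h):      issue one binary query h about the true instance's response.

    Returns ("halved", U') with |U'| <= roughly 3|U|/4 (up to rounding), or
    ("learned", v) when the exact response v of the true instance is revealed.
    """
    counts = Counter(g_map(I, z) for I in U)
    v_bar, cnt = counts.most_common(1)[0]
    if cnt > len(U) / 2:
        # Case 2: a majority response v_bar; one query settles whether the true
        # instance agrees with it.
        if ask(lambda v: v == v_bar):
            return "learned", v_bar
        return "halved", [I for I in U if g_map(I, z) != v_bar]
    # Case 1: all responses are minorities; greedily build a set A covering roughly
    # between 1/4 and 3/4 of U, and query membership of the true response in A.
    A, covered = set(), 0
    for v, c in counts.items():
        if covered * 4 >= len(U):
            break
        A.add(v)
        covered += c
    in_A = ask(lambda v: v in A)
    return "halved", [I for I in U if (g_map(I, z) in A) == in_A]

# Toy demo: ten candidate instances whose responses at z fall into three classes.
U = list(range(10))
g = lambda I, z: I % 3
truth = 7
print(halve_or_learn(U, z=None, g_map=g, ask=lambda h: h(g(truth, None))))
```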
Corollary 13 follows immediately when one uses the centerpoint-based algorithm of [13, 3] as \(\mathcal{A}\), which is the exact oracle version of Algorithm 1 above and needs at most
\[O\left(2^{n}\,d\,(n+d)\log\left(\frac{dMR}{\min\{\rho,1\}\varepsilon}\right)\right)\]
queries in the mixed-integer case, or
\[O\left(d\log\left(\frac{MR}{\min\{\rho,1\}\varepsilon}\right)\right)\]
queries in the continuous (\(n=0\)) case to produce an \(\varepsilon\)-approximate solution to any instance in \(\mathcal{I}_{n,d,R,\rho,M}\).
## 6 Statements and Declarations
Competing Interests.There are no financial or non-financial interests that are directly or indirectly related to this work.
# Transport properties and doping evolution of the Fermi surface in cuprates

Benjamin Klebel-Knobloch, Wojciech Tabis, Mateusz A. Gala, Osor S. Barišić, Denis K. Sunko, Neven Barišić

arXiv:2303.05254v2, 2023-03-09, http://arxiv.org/abs/2303.05254v2
Measured transport properties of three representative cuprates are reproduced within the paradigm of two electron subsystems, itinerant and localized. The localized subsystem evolves continuously from the Cu 3d\({}^{9}\) hole at half-filling and corresponds to the (pseudo)gapped parts of the Fermi surface. The itinerant subsystem is observed as a pure Fermi liquid (FL) with material-independent universal mobility across the doping/temperature phase diagram. The localized subsystem affects the itinerant one in our transport calculations solely by truncating the textbook FL integrals to the observed (doping- and temperature-dependent) Fermi arcs. With this extremely simple picture, we obtain the measured evolution of the resistivity and Hall coefficients in all three cases considered, including LSCO which undergoes a Lifshitz transition in the relevant doping range, a complication which turns out to be superficial. Our results imply that prior to ewoking polaronic, quantum critical point, quantum dissipation, or even more exotic scenarios for the evolution of transport properties in cuprates, Fermi-surface properties must be addressed in realistic detail.
cuprates, superconductivity, Hall-coefficient, quantum criticality, Lifshitz transition, Fermi surface, ARPES, tight-binding
## Introduction
The discovery of superconductivity in 1911 was one of the most surprising in the field of solid state physics [1]. It took almost fifty years before the phenomenon was successfully explained by BCS theory [2]. The next milestone was the discovery of high-temperature superconductivity (SC) in cuprates about thirty-five years ago [3]. The superconducting (SC) state in these compounds is of type II, which is well understood in the BCS/London framework to mean that the coherence length is shorter than the penetration depth. In cuprates, the coherence length is extremely short, resulting in very high second critical fields, of the order of 100 T, and the SC gap is \(d\)-wave, unlike elemental, phonon-mediated, superconductors where it is always \(s\)-wave. However, the main reason why these compounds are considered unconventional is the unusual evolution of normal-state properties with doping [4, 5]. Here, one should carefully separate compound-specific from universal properties [6]. In cuprates, SC is universally observed in the range between \(p\sim 0.04\)-\(0.05\) (underdoped) and \(0.30\)-\(0.35\) (overdoped), with a maximal value of the SC transition temperature (\(T_{\rm c}\)) around \(p\sim 0.16\). This common pattern implies that the origin of SC stems from universal normal-state behavior, while the wide variation in observed maximal \(T_{\rm c}\)'s (more than an order of magnitude) is due to more subtle non-universal effects which tune the SC in particular compounds.
Indeed, despite many compound-specific properties within this group of materials, a range of underlying universal behaviors was identified [6] precisely in those normal-state transport properties which were long considered to be both the key to their SC and widely, but wrongly, taken as proof that the charge carriers were not a Fermi liquid (FL) [4, 5]. A particular milestone in establishing that the itinerant carriers were, in fact, a FL was the observation that the sheet resistance (i.e., resistance per CuO\({}_{2}\) layer) in these compounds is universal [7]. But perhaps the most surprising universality is that the Hall mobility across the doping-temperature phase diagram of the cuprates is essentially compound- and doping-independent, as discovered through combined measurements of the resistivity (\(\rho\)) and Hall coefficient (\(R_{\rm H}\)) [8]. Moreover, it was shown that the Hall mobility (\(\mu_{H}^{-1}=\frac{\rho}{R_{\rm H}}=\frac{m^{*}}{e\tau}\)) exhibits a robust quadratic temperature dependence (\(\mu_{H}^{-1}=C_{2}T^{2}\)), with an essentially universal value of \(C_{2}=0.0175(20)\,\rm{TK}^{-2}\), as presented for Hg1201, Tl2201 and LSCO at low doping (\(p<0.08\)) in Fig. 1b (for other cuprate
compounds see Ref. [7]).
The universal quadratic dependence of \(\mu_{H}^{-1}\) suggests that the underlying transport scattering rate is FL-like in all temperature and doping regimes of the relevant phase diagram [8]. And indeed, the FL nature of itinerant charges was unambiguously demonstrated in both regimes (under- and overdoped) by experimental observations, e.g., FL scalings in the underdoped regime [9, 10, 11], and the Wiedemann-Franz law [12], angle-resolved photoemission spectroscopy (ARPES) [13], and quantum oscillation measurements in the overdoped regime [14]. These fundamental experimental facts imply that the explanation for the behavior of itinerant charges in cuprates must first be sought within the standard framework of FL charge transport.
The universality of the Hall mobility implies a well-defined, fixed ratio \(\frac{m^{*}}{e\tau}\). Consequently, the resistivity (\(\rho=R_{H}\mu_{H}^{-1}\)) provides direct information about the carrier density in the FL framework [7]. A systematic analysis of the extensive electronic transport data allows one to determine how the effective carrier density \(n_{\text{eff}}\) evolves across the temperature-doping phase diagram. This analysis, summarized in Fig. 1a,c, reveals that in the low-temperature limit, the effective _itinerant_ carrier density \(n_{\text{eff}}\) changes gradually with decreasing doping from \(n_{\text{eff}}=1+p\) to \(n_{\text{eff}}=p\)[8, 15]. Denoting the density of _localized_ carriers by \(n_{\text{loc}}\), the total carrier density satisfies the relation
\[n_{\text{loc}}+n_{\text{eff}}=1+p \tag{1}\]
by charge conservation. Hence, the change in \(n_{\text{eff}}\) means that exactly one hole carrier per CuO\({}_{2}\) unit cell localizes (\(n_{\text{loc}}:0\to 1\)) when crossing from the overdoped to the underdoped region of the phase diagram. Such an evolution of the effective carrier density extracted from resistivity measurements was confirmed several years later by measurements of the doping evolution of the high-field low-temperature Hall number \(n_{H}=\frac{V}{eR_{H}}\) (\(V\) is the elementary cell volume) determined in Bi2201 and Tl2201, shown in Fig. 1c (for YBCO a correction for the anisotropy factor was required [16]). It was also demonstrated, by transport [8] and optical conductivity [10], that the incipient change in the effective carrier density just below optimal doping is also responsible for the linear-in-temperature resistivity observed in this so-called "strange metal" (SM) regime (Fig. 1a). Consequently, some of us attributed the whole unusual evolution of different properties in cuprates to the localized charge, in particular to its gradual delocalization with temperature or doping [6, 8]. Notably, that is by definition the non-Fermi-liquid component of the cuprate problem, because localized charges do not conduct.
Currently, countless alternative interpretations of normal-state properties are based solely on the apparent non-Fermi liquid evolution of the scattering rate, focusing exclusively on the optimally doped or overdoped regimes, and without taking into account that the carrier concentration (i.e., the density of states at the Fermi level) can also change. For example, it is often argued that the linear temperature dependence of resistivity is caused by underlying quantum criticality [17], or, according to most recent interpretations, by the charge scattering rate reaching [18] the Planckian limit [19, 20], where it is further argued that this scattering is momentum-independent and inelastic [21]. Furthermore, the significant reduction in the Hall number from \(1+p\) to \(p\) was attributed to quasiparticle decoherence, despite the fact that the determined \(n_{\text{H}}\)[16] perfectly coincides with \(n_{\text{eff}}\) determined earlier from the resistivity [15]. It was recently suggested that cuprates are best understood in terms of two distinct current-carrying fluids, of which one behaves like a coherent FL [22, 23]. Thus, even within the scattering-rate scenarios alone, the electronic properties of cuprates are intensely discussed, with mutually incompatible proposals [4].
A recent analysis of the optical conductivity data clearly separates scattering-rate from carrier-density effects, revealing unequivocally that the missing part of the Fermi surface (FS) outside the well-known arcs is indeed gapped in cuprates [10]. In the present work, we follow the same gapping scenario to calculate the values of \(R_{H}\) and the longitudinal conductivity \(\sigma_{xx}=(1/\rho)\) as a function of doping directly from the measured FS, and compare them with experimental data. The ungapped segments, Fermi arcs, are the only parts of the FS that contribute to charge transport. After establishing the calculation procedure on compounds with rather simple, nearly circular underlying FS's, we focus specifically on LSCO. It undergoes a Lifshitz transition in the doping range of interest [26], which presents a challenge to our simple FS approach [7]. Indeed, as the Lifshitz transition is approached in doping, the values of \(C_{2}\) and \(n_{\text{H}}\) in LSCO strongly deviate from their universal values, both quantitatively and qualitatively, as shown in Fig. 1 b and c, respectively. We show that even such strong deviations are captured in considerable detail by the suggested (universal FL) calculation approach. Thus, the exception of LSCO turns out to be superficial. Rather, it serves only to corroborate the universality. Because the appearance of the Lifshitz transition in parallel with the gradual (de-)localization process explains even strong deviations from universal behaviors, we will argue that prior to applying exotic approaches to analyze any particular compound, one should try to carefully establish the exact shape of the FS first, and check if the same, quite standard, procedure can be applied. Finally, because our calculations unambiguously show that the whole complexity of cuprates stems from the gradual localization of exactly one charge per CuO\({}_{2}\) plaquette, its role in the superconducting mechanism will be discussed as well.
Figure 1: Phase diagram and carrier density in cuprates. **a** Schematic phase diagram that captures the evolution of key universal features of cuprates.[6, 24] The approximate limit of the antiferromagnetic (AF) phase is shown in grey and the superconducting (SC) dome in yellow. The doping/temperature evolution of the density of localized charge, as extracted from the resistivity, is indicated by blue shading. Solid black and dashed grey lines are isodensity lines. In the OD-FL regime, all carriers contribute to electronic transport. Both the pseudogap Fermi liquid (PG-FL, roughly corresponding to \(>97\%\) localized holes) and the strange metal (SM, marked by an intensive gradual delocalization) are also indicated, though there is no conceptual difference between them. **b** Measurements of the Hall mobility revealed that \(C_{2}=\left((1/\mu_{H})-C_{0}\right)/T^{2}\) is essentially universal in cuprates, with \(C_{2}=0.0175(20)\) TK\({}^{-2}\) (indicated by the dashed line).[8] However, \(C_{2}\) in LSCO for \(p>0.08\) deviates strongly from the universal value. **c** Doping dependence (at \(T=0\) K) of the carrier density (\(n_{\text{eff}}\)) as determined from the resistivity (full lines),[25] compared with values obtained from the Hall coefficient (solid points).[22, 16] Both quantities behave identically and reveal the \(p\) to \(1+p\) change in the carrier density.
## Results
We begin our analysis by invoking the standard definition of the Hall coefficient \(R_{H}\) in terms of the directly measurable diagonal (\(\sigma_{xx},\sigma_{yy}\)) and off-diagonal (\(\sigma_{xy}\)) components of the conductivity tensor [27, 28, 29]:
\[R_{H}=-B^{-1}\frac{\sigma_{xy}}{\sigma_{xx}\sigma_{yy}}, \tag{2}\]
with B the applied magnetic induction. For the tensor terms, we use textbook FL expressions [27, 28, 29]:
\[\sigma_{xx} =\frac{e^{2}\tau}{2\hbar^{2}}\frac{N_{V}}{\Gamma_{2D}}\oint_{E_{F}}dk_{\parallel}\left|\frac{\partial\epsilon_{\bar{k}}}{\partial k_{\perp}}\right| \tag{3}\] \[\sigma_{xy} =\frac{e^{3}B\tau^{2}}{\hbar^{4}}\frac{N_{V}}{\Gamma_{2D}}\oint_{E_{F}}dk_{\parallel}\left|\frac{\partial\epsilon_{\bar{k}}}{\partial k_{\perp}}\right|^{-1}\left[\left(\frac{\partial\epsilon_{\bar{k}}}{\partial k_{x}}\right)^{2}\frac{\partial^{2}\epsilon_{\bar{k}}}{\partial k_{y}^{2}}-\frac{\partial\epsilon_{\bar{k}}}{\partial k_{x}}\frac{\partial\epsilon_{\bar{k}}}{\partial k_{y}}\frac{\partial^{2}\epsilon_{\bar{k}}}{\partial k_{y}\partial k_{x}}\right]. \tag{4}\]
Here, \(k_{\perp}\) and \(k_{\parallel}\) are components of the charge carrier wave vector \(\bar{k}\) perpendicular and parallel to the FS, respectively. \(\Gamma_{2D}\) is the area of the 2D Brillouin zone and \(N_{V}\) is the number of states per unit volume. The integrals are usually taken over the whole FS. However, because parts of the FS are gapped in cuprates, only the Fermi arcs centered at the nodes contribute to these integrals. To describe the arc lengthening with doping, we introduce a parameter \(f_{\rm g}=n_{\rm eff}/\left(1+p\right)\), where the evolution of \(n_{\rm eff}\) is inferred from resistivity measurements [24]. Quantitatively, \(f_{\rm g}\) denotes the fraction of ungapped states contributing to the transport on the FS, relative to the full underlying FS. The doping evolution of \(f_{\rm g}\) is presented in Fig. 2b and Fig. 3b for our three representative materials. Hereafter, the integrations in Eqs. (3) and (4) are understood to be carried out only along the Fermi arcs, whose length is expressed by \(f_{\rm g}\).
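For readers who wish to reproduce this construction, the following is a minimal numerical sketch (not the authors' code) of Eqs. (2)-(4) restricted to Fermi arcs. It assumes \(\hbar=a=1\), includes the spin factor of 2, uses an angular window around the nodal directions as a crude proxy for the arc fraction \(f_{\rm g}\), and fixes the overall normalization of the Hall number by checking against the parabolic-band limit; the nearest-neighbour band at the end is a purely illustrative toy.

```python
import numpy as np

def fermi_contour(eps, EF, center, n_pts=2000, kmax=np.pi):
    """FS points found by bisection of eps(k) - EF along rays from `center`
    (e.g. (pi, pi) for a closed, hole-like cuprate FS). `eps` must accept numpy arrays."""
    phis = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    pts, kept = [], []
    for phi in phis:
        d = np.array([np.cos(phi), np.sin(phi)])
        f = lambda r: eps(*(center + r * d)) - EF
        lo, hi = 1e-6, kmax
        if f(lo) * f(hi) > 0:          # no crossing along this ray (open sheet)
            continue
        for _ in range(60):            # bisection
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
        pts.append(center + 0.5 * (lo + hi) * d)
        kept.append(phi)
    return np.array(pts), np.array(kept)

def hall_number(eps, EF, f_g=1.0, center=(np.pi, np.pi), h=1e-3):
    """Arc-restricted line integrals of Eqs. (3)-(4) and the resulting Hall number per cell."""
    pts, phis = fermi_contour(eps, EF, np.array(center))
    # angular proxy for f_g: four arcs centred on the nodal directions phi = pi/4 + m*pi/2
    u = (phis - np.pi / 4.0) % (np.pi / 2.0)
    keep = np.minimum(u, np.pi / 2.0 - u) <= f_g * np.pi / 4.0
    dk = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)      # arc-length elements
    kx, ky = pts[:, 0], pts[:, 1]
    ex = (eps(kx + h, ky) - eps(kx - h, ky)) / (2 * h)               # d(eps)/dk_x
    ey = (eps(kx, ky + h) - eps(kx, ky - h)) / (2 * h)               # d(eps)/dk_y
    eyy = (eps(kx, ky + h) - 2 * eps(kx, ky) + eps(kx, ky - h)) / h**2
    exy = (eps(kx + h, ky + h) - eps(kx + h, ky - h)
           - eps(kx - h, ky + h) + eps(kx - h, ky - h)) / (4 * h**2)
    grad = np.hypot(ex, ey)                                          # |d(eps)/dk_perp|
    Ixx = np.sum((grad * dk)[keep])                                  # integral in Eq. (3)
    Ixy = np.sum(((ex**2 * eyy - ex * ey * exy) / grad * dk)[keep])  # integral in Eq. (4)
    Gamma_2D = (2.0 * np.pi) ** 2                                    # BZ area for a = 1
    # normalization fixed against the circular-FS limit; > 0 means hole-like transport
    return -Ixx**2 / (2.0 * Gamma_2D * Ixy)

# toy nearest-neighbour band with a hole-like FS closed around (pi, pi)
eps_tb = lambda kx, ky: -2 * 0.25 * (np.cos(kx) + np.cos(ky))
print(hall_number(eps_tb, EF=0.3, f_g=1.0))   # full underlying FS
print(hall_number(eps_tb, EF=0.3, f_g=0.4))   # short arcs: n_H drops roughly with f_g
```

Replacing the toy band by the tight-binding parametrizations of Table 1 and taking \(f_{\rm g}\) from the resistivity reproduces the kind of doping dependence discussed below, with the obvious caveat that the angular proxy for \(f_{\rm g}\) is only a rough stand-in for the state-counting used in the text.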
It can be seen without calculation that an ideal partially gapped parabolic band (i.e., circular underlying FS) immediately leads to a 1:1 correspondence between \(n_{\rm eff}\) and \(n_{\rm H}\): simply because the Fermi velocity \(v_{F}=\hbar^{-1}\left|\partial\epsilon(\bar{k}_{F})/\partial k_{\perp}\right|\) and the scattering rate do not change along the FS, the value of the integrals must give the length of the arc \(f_{\rm g}\), which in turn directly corresponds to \(n_{\rm eff}\) (for a more detailed discussion see Methods, Section 0.2.2).
However, the fact is that the underlying FS is non-universal for different cuprates, exhibiting curvatures with a significant departure from the circular form, with different values of Fermi velocities along the FS. Thus, to calculate \(\sigma_{xx}\) and \(\sigma_{xy}\) from Eqs. (3) and (4), respectively, knowledge of the exact shape of the bands in the \(k_{\rm B}T\) window around the Fermi-level (i.e., the only energy range relevant for transport) is required. ARPES experiments measure the dispersion of bands near the Fermi energy directly. In the case of single-layer cuprates, only one band intersects the Fermi level. This band may be parametrized by 2D tight-binding models (for more details see Methods Section 0.1). Here, we have used previously published best-fit parametrizations of ARPES data, essentially without any further modification.
### Doping evolution of \(R_{h}\) in Hg1201 and Tl2201: the case of nearly circular FS's
The underlying FS's of Hg1201 and Tl2201 are nearly circular. We also recall that \(\mu_{\rm H}\) in both compounds practically does not change with doping or temperature as the arcs lengthen, which implies that all (arc) segments have the same or nearly the same contribution in terms of \(v_{F}\)'s and scattering rates. Thus, it is expected that the calculated \(n_{\rm H}\) correctly corresponds to \(n_{\rm eff}\) (determined from resistivity), if one chooses \(f_{\rm g}=n_{\rm eff}/\left(1+p\right)\) for the range of the integrals in Eqs. (3) and (4).
Fermi surfaces were previously measured by ARPES and parametrized by effective tight-binding models at \(p\sim 0.15\) for Hg1201 [30, 31] and at \(p\sim 0.24\) for Tl2201 [32, 13] (see Table 1 in the Methods section). These doping levels are indicated by shaded vertical bands in Fig. 2, right panel. To extend our calculations to other doping levels of interest without introducing fitting ambiguities, we shifted the chemical potential in the same (rigid) band to satisfy Luttinger's sum rule for the underlying, ungapped FS (see Eq. (10) in the Methods section). Even such a crude, zeroth-order approximation turns out to be sufficient to correctly capture the doping evolution of transport coefficients in Hg1201 and Tl2201. To determine \(n_{\rm eff}\) from the resistivity in the crossover region of \(p\) to \(1+p\), the approach introduced in Ref. [24] was used. Again, only previously published parameters were used (Ref. [24] for Hg1201 and Ref. [33] for Tl2201), with the new parameter \(f_{\rm g}\) fixed by transport (with details in the Methods section). The resulting FS's are shown in Fig. 2. By the dashed (dotted) lines we indicate the underlying FS of Hg1201 (Tl2201), while full lines indicate the length of the ungapped arcs at specific doping levels, which corresponds to the fractional extension of the FS shown in Fig. 2b. The nearly circular shape of the underlying FS's is apparent. From the integration over the arcs, we calculate the Hall-number according to Eq. (2) (full line in Fig. 2c), and compare the result with measured values of \(n_{\rm H}\) from Hg1201 (red) and Tl2201 (blue). Unsurprisingly, the calculated doping dependence of \(n_{\rm H}\) (and \(\rho\) discussed below) in the limit of \(T=0\) correctly represents \(n_{\rm eff}\), as shown in Fig. 1c. As already noted earlier [8], the measured \(n_{\rm H}\) of Hg1201 (see Fig. 2c,) is slightly higher than the calculated values, due to subtle difficulties in determining the exact sample geometry as well as the concentration of holes in the CuO\({}_{2}\) layer accurately in cases of interstitial oxygen doping.
### The case of a Lifshitz transition in LSCO: the exception that confirms the rule
To put our approach to a more challenging test, we extend the same analysis to LSCO, whose FS has a rather interesting evolution with profound consequences on transport coefficients. In this compound, a Lifshitz transition in the crossover region of \(p\) to \(1+p\) is well established. Notably, a similar Lifshitz transition has been seen by ARPES in the bismuth cuprates (Bi,Pb)\({}_{2}\)(Sr,La)\({}_{2}\)CuO\({}_{6+\delta}\) and Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8}+\delta\) as well.[38, 39, 40, 41] However, the Lifshitz transition occurs at higher doping levels there (in the single-layer compound, at \(p\gtrsim 0.3\)), at the limit of synthesis capabilities. Therefore, it is both less interesting and less convenient to investigate the Lifshitz transition in bismuth cuprates. On the other hand, ARPES measurements of LSCO have been extensively documented for a wide doping range, thus the FS is established exceptionally well. Moreover, the band-structure of LSCO was parameterized through tight-binding parameters as reported in Refs. [26, 42]. This parametrization includes doping-dependent tight-binding parameters (see Methods). Notably, with this published parametrization, the total carrier density of the underlying FS deviates slightly from Luttinger's sum rule. However, this roughness introduces only a small uncertainty in our calculations, which is henceforth neglected, highlighting the underlying stability of our approach. To extrapolate between measured doping levels, the tight-binding parameters were interpolated by smooth polynomials (see Table 1).
In Fig. 3a, we show the FS of LSCO parametrized according to Ref. [26]. It accurately reproduces the ARPES measurement, in particular the change from the hole-like circular shape to an electron-like diamond shape with increasing \(p\).[42] To calculate \(\sigma_{xx}\) and \(\sigma_{xy}\), we follow exactly the same procedure as above for Hg1201 and Tl2201. The doping evolution of \(n_{\text{eff}}\) is determined from the resistivity,[24] which in turn defines the length of the arcs \(f_{g}\). The evolution of the underlying FS and the concomitant change of the arc length is displayed in Fig. 3, a and b, respectively. Taking into account the simplicity of our approach against the complexity of the underlying FS, the calculated \(n_{\text{H}}\) agrees surprisingly well with measured values, as shown in Fig. 3c and the Supplementary Information, Figures S1 and S2.
We have thus obtained a simple understanding of why the large deviation of \(n_{\text{H}}\) from \(n_{\text{eff}}\) in LSCO, overshooting \(1+p\) divergently, does not invalidate our general FL approach for the arc carriers in cuprates. Namely, the anomaly is a direct manifestation of the Lifshitz transition in the underlying FS, which causes the _denominator_ in the FL expression for \(n_{\text{H}}\), which measures FS curvature, to go through zero as the FS changes from hole- to electron-like.[28, 29] Concurring with that interpretation, negative values of \(n_{\text{H}}\) have been reported in thin films at \(p\geq 0.32\).[35, 42] This re-entrance of negative \(n_{\text{H}}\) values with doping emerges naturally from our analysis, as further discussed in Supplementary Information 1.
Figure 2: The FS and Hall number of Hg1201 and Tl2201. In **a**, the FS’s as parametrized in Refs. [30, 31] (Hg1201) and [32, 13] (Tl2201) are shown. The underlying FS’s are almost circular as obvious from the dashed (Hg1201) and dotted (Tl2201) lines, where arcs (ungapped states) are indicated with full lines. **b** The arc-length, \(f_{g}\), as extracted from the resistivity. **c** The here calculated \(n_{\text{H}}\) (full line) is compared with the measured values (points) from Refs. [8] (Hg1201) and [22] (Tl2201), where error bars are reproduced from the respective cited works. The shaded areas in **b** and **c** indicate the doping ranges for which ARPES data is available. In case of Hg1201, experimental values of \(n_{\text{H}}\) are collected at \(T=100\,\text{K}\), just above the value of the maximal \(T_{\text{c}}\) (\(\sim 95\) K) in this compound, while for Tl2201 high-field zero-Kelvin extrapolations are shown.
Figure 3: The FS and Hall number of LSCO. In **a**, the FS as parametrized in Ref. [26] is shown, where the dashed lines correspond to the underlying FS, while the arcs appear as full lines. The underlying FS undergoes a Lifshitz transition between \(0.15<p<0.22\). **b** The arc length, \(f_{g}\), as determined from the resistivity is shown as a dashed line. In case of LSCO, \(f_{g}\) was also adjusted to obtain a better fit of the Hall data. The resulting evolution is shown by the dotted line. **c** The combination of the here calculated \(n_{\rm H}\) (dashed and dotted lines) and previously reported experimental data (Refs. [34, 35, 36]) reveals an excellent agreement. Consistently with the case of Hg1201, but also to avoid problems related to the ordering tendencies in LSCO at low temperatures [7, 37], the experimental values of \(n_{\rm H}\) are collected at \(T=100\) K. Higher doping levels and the temperature dependence are discussed in the Supplementary Information 1.
Parenthetically, we mention that, because of the complex shape of the FS and the changes of \(f_{g}\) with doping, it is inherently difficult to define at which exact doping the Lifshitz transition is supposed to occur. If we define this transition as the point where the underlying FS curvature changes sign from hole-like to electron-like, we can pinpoint it at \(p\sim 0.18\). However, because parts of the FS are gapped at this doping, this point is barely noticeable, as a minuscule kink in Fig. 3c (hidden by a measured point), and a kink in Fig. 4a. On the other hand, if we consider the doping dependence of \(n_{\rm H}\) as primary, and interpret its point of divergence as the Lifshitz transition, this puts it at a significantly higher doping level of \(p\sim 0.28\). This difference shows that the precise position of the Lifshitz transition in cuprates manifests itself differently in \(n_{\rm H}\) and in dispersions fitted to ARPES. Finally, we note that \(n_{\rm H}\) starts to deviate from \(n_{\rm eff}\) even below the \(p\) to \(1+p\) crossover. This happens because the arcs begin to flatten due to the proximity of the Lifshitz transition, with the flat sections making a small contribution to the curvature. Importantly, it follows from the same reasoning that the divergence in \(n_{\rm H}\) cannot affect the \(p\) to \(1+p\) crossover in \(n_{\rm eff}\), because the latter is measured simply as the total number of itinerant carriers, irrespective of the shape of the FS.
### Resistivity
Now we turn back to the resistivity to demonstrate the robustness and self-consistency of the above analysis. It is also a necessary step because we have originally relied only on the universality of \(\mu_{\rm H}\) to determine \(n_{\rm eff}\) and consequently the length of the arcs, neglecting all deviations, including the (large) one shown in Fig. 1 for LSCO. In this determination of \(n_{\rm eff}\), all of the underlying \(v_{F}\)'s were tacitly taken to be universal because the mobility is essentially universal. Now, we will calculate the doping dependence of the resistivity from the arced FS's, using Eq. (3), which takes into account the variation of \(v_{F}\) along the arc, but strictly respecting the experimentally established universality of the nodal \(v_{F}\) (see also Supplementary Information 2 for details).[44] To compare our calculations with experimentally established values, we plot the results in the form of \(\tau\rho_{\square}\propto A_{2}/C_{2}\), with \(A_{2\square}\) as in \(\rho_{\square}\sim A_{2\square}T^{2}\) from Ref. [7] combined with \(C_{2}\) as in \(\tau^{-1}\sim C_{2}T^{2}\) from Ref. [8]. \(A_{2}\) and \(C_{2}\) are both pre-factors to a squared temperature behavior, so the temperature cancels in the product, i.e., \(\tau\rho_{\square}\propto A_{2}/C_{2}\) is a temperature-independent parameter (see the details in the Methods section). In this way, we can compare data measured at finite temperatures with our calculation at \(T=0\). As obvious from Fig. 4, the agreement is remarkable, which is perhaps expected in case of Hg1201 and Tl2201 but less so, given the simplicity of our approach, in case of LSCO. This agreement also implies that \(v_{F}\) does not vary significantly along the parts of the arcs with a significant contribution to transport.
Finally, to test our approach even further, we invert it and fit our calculation to the measured \(n_{\rm H}\), to obtain \(n_{\rm eff}\) which defines the arc length \(f_{g}\) (See Table 2 in the Methods for details). In this case, the agreement between measured and calculated values of \(n_{\rm H}\) is by design (dotted line in Fig. 3c). Notably, a better agreement with measured resistivity data is also obtained (dotted line in Fig. 4a). It might be also interesting to note that this approach results in a somewhat broader \(p\) to \(1+p\) crossover than reported elsewhere,[22] as shown in Figs. 3b and 4b. This is perhaps to be expected since LSCO is a compound that is known to be disordered. However, the two approaches are qualitatively the same and the (rather small) difference sets the limits of the expected uncertainty.
The overall universality of the sheet resistance can be understood given that the Cu 3\(d\) orbital is blocked by Coulomb effects, so coherent FL conduction dominantly occurs via the Cu 4s-O 2\(p_{x,y}\), and secondarily via the O 2\(p_{x}\)-2\(p_{y}\), orbital overlaps.[6] Both are chemically invariant across the cuprates[45] in agreement with the universality of \(v_{F}\) along the arcs established here. Notably, in LSCO at the antinodes, \(v_{F}\) has a strong doping dependence due to the Lifshitz transition. However, as apparent from the above, these parts of the FS contribute to transport processes only when they become ungapped at elevated doping levels, at which point the van Hove singularity (vHS) has moved away from the FS again. Therefore, the sheet resistance satisfies the universal value in a broader doping range than would be expected from considering \(v_{F}\) along the full underlying FS.
## Discussion
In the context of the last 35 years of debates in the field of cuprates, each new demonstration that textbook FL formulas can be used to describe a key property of these materials marks essential progress. Here, we have shown that these formulas are perfectly adequate to calculate transport coefficients, even in the complex case of a concurrent Lifshitz transition, while it has been shown elsewhere that they can also describe other key properties, like optical conductivity,[9, 10] specific heat[46], magneto-resistivity[11], quantum oscillations[14, 47, 48, 49], etc. This robustness implies that, even when large deviations from the reported universal behaviors in cuprates are observed, one should seek first to understand them by taking the actual shape of the FS carefully into account, distinguishing between the localized and the itinerant charges.
Fermi arcs in cuprates have been extensively discussed, mostly from the point of view of intraorbital interactions (large Hubbard \(U_{d}\)) and the concomitant AF correlations, which were associated with the pseudogap. Such approaches have difficulties with the proper estimation of the amount of mobile charge available to conduction, and to the Hall effect in particular. To our knowledge, L. Gor'kov and G. Teitel'baum were the first to estimate the FL carrier concentration from the length of the arcs relative to the total underlying FS,[50] as we have done here. The observation that \(n_{\rm H}\) diverges in LSCO because of the Lifshitz transition has been made previously by I. Kupci and S. Barisic.[28, 29] Here, we have harnessed this phenomenology to answer a
Figure 4: Calculated _versus_ measured resistivity and carrier density. **a** To facilitate the comparison between calculated (lines) and measured sheet resistance (\(\rho_{\square}\sim A_{2}T^{2}\) – (opaque symbols)[7] or in the crossover regime \(\rho_{\square}\sim A_{1}T^{1}+A_{2}T^{2}\) – (shaded symbols)[43]) a temperature independent quantity \(\tau\rho_{\square}\sim A_{2}/C_{2}\) is displayed, as a function of doping, for all three discussed compounds (see the Methods Section 0.3 for details). Dashed and dotted lines for LSCO correspond to calculations with the same \(f_{\rm g}\) as presented by dashed and dotted lines in Fig. 3b. The inset shows an extended doping range to \(p=0\) on a logarithmic scale for clarity. A small kink in the calculated doping dependence for LSCO at \(p\sim 0.18\) (dashed line) coincides with the Lifshitz-transition of the underlying FS. **b** Full and dashed lines show \(n_{\rm eff}\) as inferred from resistivity measurements, which is the only input parameter for the performed calculation. In case of LSCO, an additional dotted line indicates \(n_{\rm eff}\) obtained by adjusting \(f_{\rm g}\) (i.e., arc-length) for a better fit of the Hall data. For Hg1201 and Tl2201 \(n_{\rm eff}\) (lines) and \(n_{\rm H}\) (symbols) coincide. This is not the case for LSCO, where \(n_{\rm H}\) diverges at the Lifshitz transition. However, \(n_{\rm eff}\) shows a similarly smooth crossover in LSCO as it does in Hg1201 and Tl2201. The calculated \(\sigma_{\rm cr}\) [Eq. (3)] strongly depends on \(v_{F}\), whose value is usually not controlled in tight-binding fits to ARPES data. Therefore, normalization factors \(f_{\rm norm}\) have been applied to \(\tau\rho_{\square}\) of LSCO and Tl2201. The details of this normalization are in Supplementary Information 2.
precise question: Can the deviation of the quadratic temperature coefficient \(C_{2}\) of the Hall mobility in LSCO from its universal constant value in all other cuprates be wholly explained within the same simplest-possible FL framework? The answer is yes: Once the carrier concentration is read off from the arc lengths, and the Lifshitz transition is taken into account, there is nothing specific left to model in LSCO.
While the narrow point so made is impressive enough--there is really no exception to the universal properties of the conducting FL in cuprates--its indirect repercussions are even greater. It means that the material-specific properties, among which the value of \(T_{\rm c}\) is the most significant, are entirely regulated by the other component in the charge-conservation equation, Eq. (1), namely the localized hole. It confirms that the pseudogap itself is a signature of that hole localization, not of the interactions among itinerant carriers in the arc. The latter was the default assumption of many previous investigations, including the ones cited above.
This interpretation of the pseudogap is expected on both later theoretical and independent experimental grounds. In the meantime, Fermi arcs have been obtained in a one-body DFT+U calculation,[51] once the Coulomb doping mechanism[52] has been correctly taken into account, with its concomitant in-plane orbital disorder. Experimentally, optical spectroscopy shows the localized hole as a clearly gapped mid-infrared feature, once the FL signal calculated from transport is subtracted.[10] These investigations and the present one concur that there are really no itinerant states at the Fermi energy beyond the arcs, so there is no need for any special mechanism--quantum dissipation, or pocket reconstruction, to name but a couple of more popular scenarios--to account for their absence in ARPES. All that is needed is to acknowledge that the pseudogap originates physically in the background (ionic) Coulomb forces which localize part of the charge, not in the interactions between the itinerant carriers.
A number of observations with putative quantum-critical-point interpretations have turned out to be something else on closer inspection. For example, it was recently reported in Ref. [46] that the maximum in the electronic specific heat found[53, 54] around \(p\sim 0.20-0.22\) can be related to a Lifshitz transition by standard expressions for the electronic specific heat based on a tight-binding parametrization of ARPES data. This approach is similar to ours, and with the same conclusion, that it is not necessary to introduce a quantum critical point \(p^{*}\) at \(p=0.19\) to reconcile calculations with the data.
The apparent discontinuity in the evolution of \(n_{\rm H}\) with doping in YBCO was also originally claimed to imply a quantum critical point,[55] despite the fact that \(n_{\rm eff}\) estimated from resistivity, reported earlier, showed a gradual \(p\) to \(1+p\) evolution.[24] However, this discontinuity disappeared when the chain anisotropy was taken into account,[16] as already mentioned in the introduction.
In order to apply the above scheme effectively, some simple pitfalls should be avoided. First, different probes will sometimes see different arc lengths, or a Lifshitz transition at (slightly) different doping levels. Here, the key is that the orbital transition by which the hole delocalizes can be triggered by the probe itself, most easily by temperature, so one observes a considerable change in \(n_{\rm eff}\) as the temperature rises above \(T^{*}\),[8, 10] resulting in elongation of the arcs in a similar manner as demonstrated here as a function of doping. In the context of the Lifshitz transition, a higher-energy probe, smeared by its accompanying finite width, will give the impression that the transition is approached at lower doping levels. In particular, as shown here, it is seen in ARPES sooner than in transport.[46] Second, and more importantly, one should distinguish dispersive and diffusive conduction. If the same (coherent) carriers (quasi-particles) encounter several scattering mechanisms, say internal (umklapp) scattering and impurities, these will add to the total _resistivity_:
\[\frac{1}{\tau_{\rm tot}}=\frac{1}{\tau_{\rm int}}+\frac{1}{\tau_{\rm imp}}. \tag{5}\]
On the other hand, if a part of the dispersive carriers becomes diffusive for an unspecified reason so that there are two conductive subsystems at the same time, their contributions will add to the total _conductivity_:
\[\sigma_{\rm tot} =\sigma_{\rm coh}+\sigma_{\rm diff} \tag{6}\] \[=\frac{e^{2}}{m^{*}}\left(n_{\rm coh}\tau_{\rm coh}+n_{\rm diff} \tau_{\rm diff}\right). \tag{7}\]
Assuming that the coherent part is due to internal FL scattering, \(\tau_{\rm coh}=\tau_{\rm int}\sim T^{-2}\), and the temperature dependence of \(\sigma_{\rm diff}\) may be anything but \(T^{-2}\), one finds
\[\rho_{tot}=\frac{1}{\sigma_{tot}}\sim\frac{1}{T^{-2}+\sigma_{\rm diff}}\sim T ^{\alpha}. \tag{8}\]
In other words, the pure FL \(T^{2}\) behavior is contaminated by the diffusive component, resulting in an effective power law with a real-number exponent \(\alpha\neq 2\). The experimental fact that we can detect a clean \(T^{2}\) behavior of the _resistivity_ deep in the PG regime, at low temperatures,[7, 49] close to \(T_{\rm c}\),[56] shows that any contribution of putative incoherent carriers is completely negligible. Furthermore, irradiation of the sample produces simple offsets of the origin of the \(T^{2}\) dependence according to Matthiessen's rule [57], Eq. (5), essentially ruling out all but FL explanations. Notably, the same conclusion can be drawn from the Hall mobility. Not only is this property (\(C_{2}T^{2}\)) universal across the phase diagram,[8] but also the constant term related to impurity scattering (\(C_{0}\)) was documented very early, precisely in the so-called strange metal regime at optimal doping.[58]
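As a quick illustration of Eqs. (6)-(8) (with made-up magnitudes, not fitted to any data): adding even a modest, weakly temperature-dependent diffusive channel to a \(T^{-2}\) coherent conductivity turns the total resistivity into an effective power law with exponent \(\alpha<2\) over a typical measurement window.

```python
import numpy as np

T = np.linspace(50.0, 300.0, 200)          # K
sigma_coh = 1.0e4 / T**2                   # coherent FL channel, tau ~ T^-2 (arbitrary units)
sigma_diff = 0.05                          # hypothetical diffusive channel, T-independent here
rho_tot = 1.0 / (sigma_coh + sigma_diff)   # Eq. (6)

# effective exponent of Eq. (8) from a log-log fit over this temperature window
alpha = np.polyfit(np.log(T), np.log(rho_tot), 1)[0]
print(round(alpha, 2))                     # comes out below 2: the clean T^2 law is masked
```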
Interestingly, the incoherent contribution is observed in pnictides,[10] where a vHS at the Fermi level[59] provides a ready reservoir of slow carriers, easily turned diffusive even by the lowest temperatures. Such a contribution should also be expected in cuprates in which the vHS approaches the Fermi level at high doping levels, where it is not gapped like in LSCO. Most probably this is indeed the case in bismuth compounds.[10, 60] These parallel examples are useful cross-checks of our interpretation.
The localized hole, responsible for the pseudogap as noted above, being non-conductive, is by definition the "non-FL" part of the total charge active in the cuprates. Importantly, it is active, not just a charge reservoir. In fact, we argue that it plays a central role in the cuprate enigma, on two grounds. First, its universal vanishing (delocalization) on the overdoped side is concomitant with the universal vanishing of SC.[10, 24, 8] Second, NMR experiments[61] directly show that the compound-dependent charge redistribution between Cu and O with doping is related to the compound-dependent value of \(T_{\mathrm{c}}\).[6] A scenario emerges in which the doped FL scatters on the localized hole, and this scattering is responsible for high-\(T_{\mathrm{c}}\) SC in cuprates. In this way, the localized hole introduces necessary and sufficient material-dependence into an otherwise universal FL of mobile charges. A detailed exposition of this scenario has recently been published elsewhere.[6] Suffice it to say that the clear _separation_ between the FL and non-FL sector laid out here is quite different from all polaron scenarios, which rely on charge transport by these _composite_ electron-lattice objects, in contradiction with the observed material- and doping-independence of the FL transport parameters. It is also quite different from all scenarios which assume that the carriers in the arcs are not a FL, in contradiction with observations in all three compounds studied here with the particular purpose of elucidating that point. To repeat, the most important effective interaction in cuprates might well be the scattering of the universal FL on the localized hole, which gives rise to high-\(T_{\mathrm{c}}\) superconductivity. Any microscopic model of the latter must conform to the macroscopic observations presented here.
In summary, we have calculated electronic transport characteristics (resistivity and Hall coefficient) directly from the band structure of several cuprate materials. Combining textbook expressions with the experimentally established pseudogapped FS's, we reproduced the observed doping evolution of the resistivity and the Hall coefficient for Hg1201, Tl2201 and LSCO. This work provides a direct link between transport coefficients and FS geometry in cuprates, showing in particular that the doping evolution of the Hall coefficient can be explained without invoking a quantum critical point or any other, even more exotic scenario. On the contrary, it is sufficient to assume that the ungapped, itinerant charge carriers are always a FL, in agreement with recent measurements of the transport and optical scattering rate. Because our approach is phenomenological, these results are observations, not hypotheses. They invite microscopic considerations on the origin of the experimental facts of Fermi arcs and universal FL scattering, which we have also briefly presented above. These are centered on the other, localized contribution to the charge-conservation equation (1) and its role in high-\(T_{\mathrm{c}}\) superconductivity. Taking the two together, we present a consistent narrative as a necessary part of any final explanation of this fascinating phenomenon.
## Methods
### Tight-binding model parameters
Over the course of the last several decades, ARPES spectra were extensively measured and fitted with tight-binding models to parameterize the bands and the (underlying) FS's, in a number of compounds. We present the tight-binding formula in a very generic form, Eq. (9):
\[\varepsilon_{k}=\varepsilon_{0} -2\ t_{0}\left[\cos\left(k_{x}a\right)+\cos\left(k_{y}a\right)\right]\] \[-4\ t_{1}\cos\left(k_{x}a\right)\cos\left(k_{y}a\right)\] \[-2\ t_{2}\left[\cos\left(2k_{x}a\right)+\cos\left(2k_{y}a\right)\right]\] \[-4\ t_{3}\left[\cos\left(2k_{x}a\right)\cos\left(k_{y}a\right)+ \cos\left(k_{x}a\right)\cos\left(2k_{y}a\right)\right]\] \[+0.5\ t_{4}\cos\left(2k_{x}a\right)\cos\left(2k_{y}a\right) \tag{9}\]
where the tight-binding parameters for compounds studied in this work (Hg1201[30, 31], Tl2201[33, 32] and LSCO[42]) are given in Table 1 and visualized in Fig. 5. Note that the naming convention was standardized.
In the case of LSCO, high-quality ARPES data exists for a range of doping levels, therefore it is possible to extract the evolution of the FS with doping directly. For Hg1201 and Tl2201, the number of doping levels on which ARPES studies have been performed is much more limited (one doping level each). Therefore, the doping dependence for these materials is introduced by a rigid band shift, respecting Luttinger's sum rule in the underlying FS:
\[1+p=2\frac{A_{FS}^{u}}{A_{BZ}} \tag{10}\]
where \(A_{FS}^{u}\) denotes the surface area enclosed by the underlying FS in k-space, while \(A_{BZ}\) denotes the area of the first Brillouin zone.
Figure 5: Doping and compound dependence of used tight-binding parameters. All parameters are given in units of [eV]. Full points denote values reported in the literature according to Table 1, full lines are polynomial interpolations.
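A short sketch of this bookkeeping, using Eq. (9) with the Hg1201 parameters of Table 1 and a rigid-band shift fixed by Luttinger's rule: requiring that a fraction \((1+p)/2\) of the band lies above the Fermi level is equivalent to Eq. (10) for a closed, hole-like FS. The quantile shortcut below is our own convenience, not taken from the original analysis.

```python
import numpy as np

def eps_tb(kx, ky, t0, t1, t2, t3, t4, eps0=0.0, a=1.0):
    """Tight-binding band of Eq. (9)."""
    return (eps0
            - 2 * t0 * (np.cos(kx * a) + np.cos(ky * a))
            - 4 * t1 * np.cos(kx * a) * np.cos(ky * a)
            - 2 * t2 * (np.cos(2 * kx * a) + np.cos(2 * ky * a))
            - 4 * t3 * (np.cos(2 * kx * a) * np.cos(ky * a)
                        + np.cos(kx * a) * np.cos(2 * ky * a))
            + 0.5 * t4 * np.cos(2 * kx * a) * np.cos(2 * ky * a))

hg1201 = dict(t0=0.46, t1=-0.105, t2=0.08, t3=-0.02, t4=0.0)   # Table 1, in eV

def luttinger_shift(p, pars, n_grid=400):
    """eps0 such that, with E_F = 0, the band holds 1+p holes per cell, i.e. Eq. (10)."""
    k = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    KX, KY = np.meshgrid(k, k)
    band = eps_tb(KX, KY, **pars)          # eps0 = 0 reference
    # a fraction (1+p)/2 of the BZ must lie above E_F = 0, i.e. above -eps0 before the shift
    return -np.quantile(band, (1.0 - p) / 2.0)

for p in (0.10, 0.16, 0.24):
    print(p, round(luttinger_shift(p, hg1201), 4))
```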
### Details of the calculation procedure
#### 0.2.1 Circular FS (parabolic band) - ungapped
Using the general expressions Eqs. (3) and (4), it is instructive to derive \(\sigma_{ij}\) for the particularly simple case of a circular FS and isotropic group velocity, \(v_{F}=\hbar^{-1}|\partial\varepsilon/\partial k_{\perp}|=\hbar k_{F}/m^{*}\), where \(m^{*}\) is the effective mass.
\[\sigma_{xx}=\frac{e^{2}\tau n}{m^{*}} \tag{11}\]
where the concentration of charge carriers \(n\) may be expressed in terms of the ratio of the area of the occupied part of the Brillouin zone and the total area of the Brillouin zone,
\[n=(2s+1)\;n_{0}\frac{k_{F}^{2}\pi}{\Gamma_{2D}}\;. \tag{12}\]
The simplest Drude form for \(\sigma_{xx}\) in Eq. (11), which is obtained from the general expression in Eq. (3), is a consequence of the circular shape of the FS and of the fact that the velocity is everywhere normal to the FS and proportional to \(k_{F}\), \(v_{F}\propto k_{F}\).
With the circular FS and the constant velocity \(|v_{F}|\), the nondiagonal part of the conductivity tensor is given by
\[\sigma_{xy} =(\omega_{\rm c}\tau)\;\frac{e^{2}\tau}{m^{*}}\;(2s+1)\;n_{0} \frac{k_{F}^{2}\pi}{\Gamma_{2D}} \tag{13}\] \[=(\omega_{\rm c}\tau)\;\sigma_{xx} \tag{14}\]
with \(\sigma_{xx}\) given by Eq. (11) and \(\omega_{\rm c}\) the cyclotron frequency, \(\omega_{\rm c}=eB/m^{*}\). For a parabolic band, the effective mass \(m^{*}\), characterized by the second derivative of the dispersion at the bottom of the band, is the only model parameter that defines the dispersion at the FS for any doping. However, this simplicity is lost for any more complicated band structure.
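The reduction can also be checked numerically, by evaluating the line integrals of Eqs. (3) and (4) on a discretized circular FS and comparing their ratio with \(\omega_{\rm c}\tau\) of Eq. (14). The snippet below (our own sanity check, with \(e=B=\tau=\hbar=1\), so that \(\omega_{\rm c}\tau=1/m^{*}\), and arbitrary values of \(m^{*}\) and \(k_{F}\)) confirms the formulas.

```python
import numpy as np

m_star, kF = 1.3, 0.8
phi = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
kx, ky = kF * np.cos(phi), kF * np.sin(phi)
dk = kF * (phi[1] - phi[0])                       # arc-length element on the circle
ex, ey = kx / m_star, ky / m_star                 # first derivatives of eps = k^2 / (2 m*)
eyy, exy = 1.0 / m_star, 0.0                      # second derivatives
grad = np.hypot(ex, ey)                           # |d(eps)/dk_perp| = v_F (hbar = 1)

I_xx = np.sum(grad * dk)                          # integral in Eq. (3)
I_xy = np.sum((ex**2 * eyy - ex * ey * exy) / grad * dk)   # integral in Eq. (4)

# with the prefactors of Eqs. (3)-(4), sigma_xy / sigma_xx = 2 I_xy / I_xx,
# which Eq. (14) says must equal omega_c * tau = 1 / m* in these units
print(2.0 * I_xy / I_xx, 1.0 / m_star)            # the two numbers agree
```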
\begin{table}
\begin{tabular}{c c c c} \hline Parameter & Hg1201 [30, 31] & Tl2201 [13, 32] & LSCO [42] \\ \hline \(\varepsilon_{0}\) & \(\varepsilon_{0}=\varepsilon_{0}(p)\) & \(\varepsilon_{0}=\varepsilon_{0}(p)\) & \(\varepsilon_{0}=\varepsilon_{0}(p)\) \\ \(t_{0}\) & 0.46 & 0.18125 & 0.25 \\ \(t_{1}\) & -0.105 & -0.0755 & \(t_{1}=t_{1}(p)\) \\ \(t_{2}\) & 0.08 & -0.003975 & \(-0.5t_{1}\) \\ \(t_{3}\) & -0.02 & -0.0100625 & 0 \\ \(t_{4}\) & 0 & 0.0068 & 0 \\ \hline \end{tabular}
\end{table}
Table 1: Tight binding parameters for the FS models in use. In the case of Hg1201 and Tl2201, the function for \(\varepsilon_{0}\) is determined to satisfy Eq. (10), while all other parameters are held fixed, as in Ref. [31]. In contrast, for LSCO a broad range of doping dependent parameters exist, see Fig. 5. All numerical parameters are given in units of [eV]. Parameters described as doping dependent functions are displayed in Fig. 5.
\begin{table}
\begin{tabular}{c c c c} \hline Parameter & Hg1201 & Tl2201 & LSCO \\ \hline \(\Delta_{0}\) & 4000 & 3700 & 3900 \\ \(\delta\) & 600 & 700 & 800 \\ \(p_{c}\) & 0.2 & 0.22 & 0.22 \\ \(\alpha\) & 2 & 2 & 2 \\ \hline \end{tabular}
\end{table}
Table 2: Parameters of the gap distribution used to determine \(n_{\rm eff}\), according to the approach described in Ref. [24].
#### 0.2.2 Circular FS (parabolic band) - gapped
Assuming a circular FS that does not intersect the zone boundaries, following Luttinger's theorem one obtains
\[k_{F}=\left(\frac{1+x}{2\pi}\right)^{\frac{1}{2}}\frac{\pi}{a}\;.\]
Introducing \(0\leq p(x)\leq 1\), as a parameter that defines the ungapped part of the FS, the concentration of itinerant charges in Eq. (12) takes a particularly simple form,
\[n=(2s+1)\;n_{0}\;\frac{1+x}{2}\;p(x)\;. \tag{15}\]
Assuming a parabolic band, exhibiting a circular FS and an isotropic group velocity, we consider a doping-dependent gapping mechanism that resembles the situation in cuprates. We chose \(f_{\rm g}\) such that \(n_{\rm eff}\) first evolves exactly as \(p\) (for \(p<0.16\)) and then increases more steeply, reaching \(1+p\) at \(p=0.28\). From Fig. 6, it is obvious that this results in a 1:1 correspondence between \(n_{\rm H}\) and \(n_{\rm eff}\).
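A sketch of this toy crossover (our own linear interpolation across the window \(0.16<p<0.28\); the \(n_{\rm eff}(p)\) used for the real compounds comes instead from the gap-distribution analysis of Ref. [24] with the parameters of Table 2):

```python
import numpy as np

def n_eff_toy(p, p1=0.16, p2=0.28):
    """n_eff = p below p1, reaching 1 + p at p2 (linear interpolation in between)."""
    p = np.asarray(p, dtype=float)
    w = np.clip((p - p1) / (p2 - p1), 0.0, 1.0)
    return (1.0 - w) * p + w * (1.0 + p)

p = np.linspace(0.05, 0.35, 7)
f_g = n_eff_toy(p) / (1.0 + p)          # ungapped fraction of the circular FS
print(np.round(f_g, 3))
```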
### Comparison with experimental data: Resistivity
Our calculations are performed in the low-temperature limit. To compare calculation results with experimental data collected at finite temperatures, we introduce a temperature independent variable following the arguments discussed below. A general expression (Taylor expansion) for the resistivity is:
\[\rho=A_{0}+A_{1}T^{1}+A_{2}T^{2} \tag{16}\]
where \(A_{0}\) is associated with sample-dependent impurity scattering; \(A_{1}\), which appears in the crossover/strange metal regime, we attribute to a change in the carrier density due to the delocalization process discussed in the main text; and \(A_{2}\) is the Fermi-liquid term (associated with the \(n_{\text{eff}}(T=0\,\text{K})\) charges). Thus, the coefficient \(A_{2}\) is of main interest here. Specifically, we use \(A_{2,\square}\), as the resistivity per CuO\({}_{2}\) layer was demonstrated to be universal across multiple cuprate families [7].
Figure 6: FS and Hall-coefficient of an ideal parabolic band. In **a**, dashed lines correspond to the underlying FS, while the arcs appear as full lines. The fraction of ungapped (i.e. "active") states \(f_{g}\) is displayed in **b**, and the calculated density of charge carriers \(n_{\rm eff}\) (line) and \(n_{\rm H}\) at selected doping levels (points) in **c**. In this case, the crossover from \(p\) to \(1+p\) is modeled between \(p=0.16\) and \(p=0.28\).
We extract the scattering time \(\tau\), presumably related to the Umklapp process [49], from the measured universal Hall-mobility [8]:
\[\mu_{H} =\frac{e\tau}{m^{*}} \tag{17}\] \[\mu_{H}^{-1} =C_{0}+C_{2}T^{2} \tag{18}\]
As in the case of the resistivity, the constant term \(C_{0}\) is a contribution related to the impurities [58]. We approximate the effective mass with a constant \(m^{*}\sim 3.5m_{e}\) (Eq. (19)), again because the universality of the Hall mobility (Fig. 1) implies it, where the exact value was determined from quantum oscillations in overdoped Tl2201. Notably, we do expect some compound-dependence of the effective mass but such corrections are not essential in the context of the present calculations.
\[\tau =\frac{m^{*}}{e}\cdot\mu_{H}\] \[\simeq\frac{3.5m_{e}}{e}\cdot\frac{1}{C_{2}T^{2}} \tag{19}\]
Combining the two \(T^{2}\)-like behaviors, we arrive at a temperature independent parameter \(\tau\rho_{\square}\):
\[\tau\rho_{\square} =\frac{A_{2,\square}T^{2}}{C_{2}T^{2}}\frac{m^{*}}{e}\] \[=\frac{A_{2,\square}}{C_{2}}\frac{m^{*}}{e} \tag{20}\]
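A compact numerical version of this conversion (the constants \(m^{*}=3.5\,m_{e}\) and \(C_{2}=0.0175\) TK\(^{-2}\) are from the text; the value of \(A_{2,\square}\) below is a placeholder, since the actual doping-dependent coefficients have to be taken from Ref. [7]):

```python
from scipy.constants import e, m_e

m_star = 3.5 * m_e           # effective mass used in the text
C2 = 0.0175                  # T / K^2, universal inverse-mobility prefactor
A2_sq = 10.0                 # Ohm / K^2 per CuO2 sheet -- placeholder, see Ref. [7]

def tau_of_T(T, C0=0.0):
    """Scattering time from the Hall mobility, Eq. (19); C0 is the impurity term."""
    return (m_star / e) / (C0 + C2 * T**2)

tau_rho_sq = (A2_sq / C2) * (m_star / e)   # Eq. (20), temperature independent
print(tau_of_T(100.0), tau_rho_sq)         # ~1e-13 s at 100 K; tau*rho_square in Ohm*s
```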
## Data Availability
The datasets generated and/or analysed during the current study are available from the corresponding authors upon request.
## Code Availability
The code to reproduce the presented results is available from the corresponding authors upon request.
## References
* [1] Onnes, H. K. Further experiments with liquid helium. C. On the change of electric resistance of pure metals at very low temperatures, etc. IV. The resistance of pure mercury at helium temperatures. _Commun. from Phys. Lab. Univ. Leiden_**120b** (1911).
* [2] Bardeen, J., Cooper, L. N. & Schrieffer, J. R. Theory of Superconductivity. _Phys. Rev._**108**, 1175-1204, DOI: 10.1103/PhysRev.108.1175 (1957).
* [3] Bednorz, J. G. & Muller, K. A. Possible high Tc superconductivity in the Ba-La-Cu-O system. _Zeitschrift fur Physik B Condens. Matter_**64**, 189-193, DOI: 10.1007/BF01303701 (1986).
* [4] Phillips, P. W., Hussey, N. E. & Abbamonte, P. Stranger than metals. _Science_**377**, eabh4273, DOI: 10.1126/science.abh4273 (2022). Publisher: American Association for the Advancement of Science.
* [5] Keimer, B., Kivelson, S. A., Norman, M. R., Uchida, S. & Zaanen, J. From quantum matter to high-temperature superconductivity in copper oxides. _Nature_**518**, 179-186, DOI: 10.1038/nature14165 (2015).
* [6] Barisic, N. & Sunko, D. K. High-T\({}_{\text{c}}\) Cuprates: a story of two electronic subsystems. _J. Supercond. Nov. Magn._**35**, 1781-1799, DOI: 10.1007/s10948-022-06183-y (2022).
* [7] Barisic, N. _et al._ Universal sheet resistance and revised phase diagram of the cuprate high-temperature superconductors. _Proc. Natl. Acad. Sci._**110**, 12235-12240, DOI: 10.1073/pnas.1301989110 (2013). Publisher: National Academy of Sciences Section: Physical Sciences.
* [8] Barisic, N. _et al._ Evidence for a universal Fermi-liquid scattering rate throughout the phase diagram of the copper-oxide superconductors. _New J. Phys._**21**, 113007, DOI: 10.1088/1367-2630/ab4d0f (2019). ArXiv: 1507.07885.
* [9] Mirzaei, S. I. _et al._ Spectroscopic evidence for Fermi liquid-like energy and temperature dependence of the relaxation rate in the pseudogap phase of the cuprates. _Proc. Natl. Acad. Sci._**110**, 5774-5778, DOI: 10.1073/pnas.1218846110 (2013).
* [10] Kumar, C. M. N. _et al._ Optical conductivity of cuprates in a new light., DOI: 10.48550/arXiv.2204.10284 (2022). ArXiv:2204.10284 [cond-mat].
* [11] Chan, M. _et al._ In-plane magnetoresistance obeys Kohler's rule in the pseudogap phase of cuprate superconductors. _Phys. Rev. Lett._**113**, 177005, DOI: 10.1103/PhysRevLett.113.177005 (2014).
* [12] Proust, C., Boaknin, E., Hill, R. W., Taillefer, L. & Mackenzie, A. P. Heat transport in a strongly overdoped cuprate: Fermi liquid and a pure \(d\)-wave BCS superconductor. _Phys. Rev. Lett._**89**, 147003, DOI: 10.1103/PhysRevLett.89.147003 (2002).
* [13] Plate, M. _et al._ Fermi surface and quasiparticle excitations of overdoped Tl\({}_{2}\)Ba\({}_{2}\)CuO\({}_{6+\delta}\). _Phys. Rev. Lett._**95**, 077001, DOI: 10.1103/PhysRevLett.95.077001 (2005). Publisher: American Physical Society.
* [14] Vignolle, B. _et al._ Quantum oscillations in an overdoped high-\(T_{c}\) superconductor. _Nature_**455**, 952-955, DOI: 10.1038/nature07323 (2008).
* [15] Pelc, D. _et al._ Emergence of superconductivity in the cuprates via a universal percolation process. _Nat. Commun._**9**, 4327, DOI: 10.1038/s41467-018-06707-y (2018).
* [16] Putzke, C. _et al._ Reduced Hall carrier density in the overdoped strange metal regime of cuprate superconductors. _Nat. Phys._**17**, 826-831, DOI: 10.1038/s41567-021-01197-0 (2021).
* [17] Cooper, R. A. _et al._ Anomalous criticality in the electrical resistivity of La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\). _Science_**323**, 603-607, DOI: 10.1126/science.1165015 (2009).
* [18] Legros, A. _et al._ Universal T -linear resistivity and Planckian dissipation in overdoped cuprates. _Nat. Phys._**15**, 142, DOI: 10.1038/s41567-018-0334-2 (2019).
* [19] Zaanen, J. Why the temperature is high. _Nature_**430**, 512, DOI: 10.1038/430512a (2004).
* [20] Sadovskii, M. V. Planckian relaxation delusion in metals. _Physics-Uspekhi_**64**, 175, DOI: 10.3367/UFNe.2020.08.038821 (2021). Publisher: IOP Publishing.
* [21] Grissonnanche, G. _et al._ Linear-in temperature resistivity from an isotropic Planckian scattering rate. _Nature_**595**, 667-672, DOI: 10.1038/s41586-021-03697-8 (2021). Number: 7869 Publisher: Nature Publishing Group.
* [22] Ayres, J. _et al._ Incoherent transport across the strange-metal regime of overdoped cuprates. _Nature_**595**, 661-666, DOI: 10.1038/s41586-021-03622-z (2021).
* [23] Ayres, J., Katsnelson, M. I. & Hussey, N. E. Superfluid density and two-component conductivity in hole-doped cuprates. _Front. Phys._**10** (2022).
* [24] Pelc, D., Popcevic, P., Pozek, M., Greven, M. & Barisic, N. Unusual behavior of cuprates explained by heterogeneous charge localization. _Sci. Adv._**5**, eaau4538, DOI: 10.1126/sciadv.aau4538 (2019).
* [25] Pelc, D. _et al._ The resistivity phase diagram of cuprates revisited. _arXiv e-prints_ arXiv:1902.00529 (2019).
* [26] Yoshida, T. _et al._ Systematic doping evolution of the underlying Fermi surface of La\({}_{2-\mathrm{x}}\)Sr\({}_{\mathrm{x}}\)CuO\({}_{4}\). _Phys. Rev. B_**74**, 224510, DOI: 10.1103/PhysRevB.74.224510 (2006). Publisher: American Physical Society.
* [27] Ong, N. P. Geometric interpretation of the weak-field Hall conductivity in two-dimensional metals with arbitrary Fermi surface. _Phys. Rev. B_**43**, 193-201, DOI: 10.1103/PhysRevB.43.193 (1991). Publisher: American Physical Society.
* [28] Niksic, G., Kupcic, I., Barisic, O. S., Sunko, D. K. & Barisic, S. Multiband responses in high-\(T_{c}\) cuprate superconductors. _J. Supercond. Nov. Magn._**27**, 969-975, DOI: 10.1007/s10948-013-2420-0 (2014).
* [29] Kupcic, I. & Jedovnicki, I. Memory-function conductivity formula and transport coefficients in underdoped cuprates. _The Eur. Phys. J. B_**90**, 63, DOI: 10.1140/epjb/e2017-70737-0 (2017).
* [30] Das, T. **Q-**0 collective modes originating from the low-lying Hg-O band in superconducting HgBa\({}_{2}\)CuO\({}_{4+\delta}\). _Phys. Rev. B_**86**, 054518, DOI: 10.1103/PhysRevB.86.054518 (2012). Publisher: American Physical Society.
* [31] Vishik, I. M. _et al._ Angle-resolved photoemission spectroscopy study of HgBa\({}_{2}\)CuO\({}_{4+\delta}\). _Phys. Rev. B_**89**, 195141, DOI: 10.1103/PhysRevB.89.195141 (2014). Publisher: American Physical Society.
* [32] Peets, D. C. _et al._ Tl\({}_{2}\)Ba\({}_{2}\)CuO\({}_{6+\delta}\) brings spectroscopic probes deep into the overdoped regime of the high-\(T_{c}\) cuprates. _New J. Phys._**9**, 28-28, DOI: 10.1088/1367-2630/9/2/028 (2007). Publisher: IOP Publishing.
* [33] Pelc, D. _et al._ Resistivity phase diagram of cuprates revisited. _Phys. Rev. B_**102**, 075114, DOI: 10.1103/PhysRevB.102.075114 (2020). Publisher: American Physical Society.
* [34] Ando, Y., Kurita, Y., Komiya, S., Ono, S. & Segawa, K. Evolution of the Hall coefficient and the peculiar electronic structure of the cuprate superconductors. _Phys. Rev. Lett._**92**, 197001, DOI: 10.1103/PhysRevLett.92.197001 (2004).
* [35] Tsukada, I. & Ono, S. Negative Hall coefficients of heavily overdoped La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\). _Phys. Rev. B_**74**, 134508, DOI: 10.1103/PhysRevB.74.134508 (2006). Publisher: American Physical Society.
* [36] Padilla, W. J. _et al._ Constant effective mass across the phase diagram of high-\(T_{c}\) cuprates. _Phys. Rev. B_**72**, 060511, DOI: 10.1103/PhysRevB.72.060511 (2005).
* [37] Li, Y., Tabis, W., Yu, G., Barisic, N. & Greven, M. Hidden Fermi-liquid charge transport in the antiferromagnetic phase of the electron-doped cuprate superconductors. _Phys. Rev. Lett._**117**, 197001, DOI: 10.1103/PhysRevLett.117.197001 (2016).
* [38] Kondo, T. _et al._ Hole-concentration dependence of band structure in (Bi,Pb)\({}_{2}\)(Sr,La)\({}_{2}\)CuO\({}_{6+\delta}\) determined by the angle-resolved photoemission spectroscopy. _J. Electron Spectrosc. Relat. Phenom._**137-140**, 663-668, DOI: 10.1016/j.elspec. 2004.02.104 (2004).
* [39] Piriou, A., Jenkins, N., Berthod, C., Maggio-Aprile, I. & Fischer, O. First direct observation of the Van Hove singularity in the tunnelling spectra of cuprates. _Nat. Commun._**2**, 221, DOI: 10.1038/ncomms1229 (2011). Number: 1 Publisher: Nature Publishing Group.
* [40] Kaminski, A. _et al._ Change of Fermi-surface topology in Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+\delta}\) with doping. _Phys. Rev. B_**73**, 174511, DOI: 10.1103/PhysRevB.73.174511 (2006). Publisher: American Physical Society.
* [41] Drozdov, I. K. _et al._ Phase diagram of Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+\delta}\) revisited. _Nat. Commun._**9**, 5210, DOI: 10.1038/s41467-018-07686-w (2018). Number: 1 Publisher: Nature Publishing Group.
* [42] Yoshida, T. _et al._ Low-energy electronic structure of the high-\(T_{c}\) cuprates La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) by angle-resolved photoemission spectroscopy. _J. Physics: Condens. Matter_**19**, 125209, DOI: 10.1088/0953-8984/19/12/125209 (2007). Publisher: IOP Publishing.
* [43] Hussey, N. E., Gordon-Moys, H., Kokalj, J. & McKenzie, R. H. Generic strange-metal behaviour of overdoped cuprates. _J. Physics: Conf. Ser._**449**, 012004, DOI: 10.1088/1742-6596/449/1/012004 (2013). Publisher: IOP Publishing.
* [44] Zhou, X. J. _et al._ Universal nodal Fermi velocity. _Nature_**423**, 398-398, DOI: 10.1038/423398a (2003).
* [45] Pavarini, E., Dasgupta, I., Saha-Dasgupta, T., Jepsen, O. & Andersen, O. K. Band-structure trend in hole-doped cuprates and correlation with \(T_{\rm cmax}\). _Phys. Rev. Lett._**87**, 047003, DOI: 10.1103/PhysRevLett.87.047003 (2001).
* [46] Zhong, Y. _et al._ Differentiated roles of Lifshitz transition on thermodynamics and superconductivity in La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\). _Proc. Natl. Acad. Sci._**119**, e2204630119, DOI: 10.1073/pnas.2204630119 (2022). Publisher: Proceedings of the National Academy of Sciences.
* [47] Doiron-Leyraud, N. _et al._ Quantum oscillations and the Fermi surface in an underdoped high-\(T_{\rm c}\) superconductor. _Nature_**447**, 565-568, DOI: 10.1038/nature05872 (2007).
* [48] Barisic, N. _et al._ Universal quantum oscillations in the underdoped cuprate superconductors. _Nat. Phys._**9**, 761-764, DOI: 10.1038/nphys2792 (2013).
* [49] Tabis, W. _et al._ Arc-to-pocket transition and quantitative understanding of transport properties in cuprate superconductors. _arXiv:2106.07457 [cond-mat]_ (2021).
* [50] Gor'kov, L. P. & Teitel'baum, G. B. Two regimes in conductivity and the Hall coefficient of underdoped cuprates in strong magnetic fields. _J. Physics: Condens. Matter_**26**, 042202, DOI: 10.1088/0953-8984/26/4/042202 (2014).
* [51] Lazic, P. & Sunko, D. K. Fermi arcs and pseudogap emerging from dimensional crossover at the Fermi surface in La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\). _EPL_**112**, 37011, DOI: 10.1209/0295-5075/112/37011 (2015).
* [52] Mazumdar, S. A unified theoretical approach to superconductors with strong Coulomb correlations: the organics, LiTi\({}_{2}\)O\({}_{4}\), electron- and hole-doped copper oxides and doped BaBiO\({}_{3}\). In Baeriswyl, D. & Campbell, D. K. (eds.) _Interacting electrons in reduced dimensions_, 315-329 (Plenum Press, New York, 1989).
* [53] Momono, N. _et al._ Low-temperature electronic specific heat of La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) and La\({}_{2-x}\)Sr\({}_{x}\)Cu\({}_{1-y}\)Zn\({}_{y}\)O\({}_{4}\). Evidence for a d-wave superconductor. _Phys. C: Supercond._**233**, 395-401, DOI: 10.1016/0921-4534(94)90768-4 (1994).
* [54] Girod, C. _et al._ Normal state specific heat in the cuprate superconductors La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) and Bi\({}_{2+y}\)Sr\({}_{2-x-y}\)La\({}_{x}\)CuO\({}_{6+\delta}\) near the critical point of the pseudogap phase. _Phys. Rev. B_**103**, 214506, DOI: 10.1103/PhysRevB.103.214506 (2021). Publisher: American Physical Society.
* [55] Badoux, S. _et al._ Change of carrier density at the pseudogap critical point of a cuprate superconductor. _Nature_**531**, 210-214, DOI: 10.1038/nature16983 (2016). Number: 7593 Publisher: Nature Publishing Group.
* [56] Popcevic, P. _et al._ Percolative nature of the direct-current paraconductivity in cuprate superconductors. _npj Quantum Mater._**3**, 42, DOI: 10.1038/s41535-018-0115-2 (2018).
* [57] Rullier-Albenque, F., Alloul, H., Balakirev, F. & Proust, C. Disorder, metal-insulator crossover and phase diagram in high-Tc cuprates. _Europhys. Lett._**81**, 37008, DOI: 10.1209/0295-5075/81/37008 (2008).
* [58] Chien, T. R., Wang, Z. Z. & Ong, N. P. Effect of Zn impurities on the normal-state Hall angle in single-crystal YBa\({}_{2}\)Cu\({}_{3-x}\)Zn\({}_{x}\)O\({}_{7-\delta}\). _Phys. Rev. Lett._**67**, 2088-2091, DOI: 10.1103/PhysRevLett.67.2088 (1991).
* [59] Derondeau, G. _et al._ Fermi surface and effective masses in photoemission response of the (Ba\({}_{1-x}\)K\({}_{x}\))Fe\({}_{2}\)As\({}_{2}\) superconductor. _Sci. Reports_**7**, 8787, DOI: 10.1038/s41598-017-09480-y (2017).
* [60] Bi\({}_{2-x}\)Pb\({}_{x}\)Sr\({}_{2-y}\)La\({}_{y}\)CuO\({}_{6+\delta}\) cuprates. _Phys. Rev. B_**106**, 054515, DOI: 10.1103/PhysRevB.106.054515 (2022).
* [61] Rybicki, D., Jurkutat, M., Reichardt, S., Kapusta, C. & Haase, J. Perspective on the phase diagram of cuprate high-temperature superconductors. _Nat. Commun._**7**, 11413, DOI: 10.1038/ncomms11413 (2016).
## Acknowledgments
The work at the TU Wien was supported by the European Research Council (ERC Consolidator Grant No. 725521), while the work at the University of Zagreb was supported by project CeNIKS co-financed by the Croatian Government and the European Union through the European Regional Development Fund-Competitiveness and Cohesion Operational Programme (Grant No. KK.01.1.1.02.0013). The work at AGH University of Science and Technology was supported by the National Science Centre, Poland, Grant No. OPUS: UMO-2021/41/B/ST3/03454, the Polish National Agency for Academic Exchange under "Polish Returns 2019" Programme: PPN/PPO/2019/1/00014, and the subsidy of the Ministry of Science and Higher Education of Poland. O.S.B. acknowledges the support by the QuantiXLie Center of Excellence, a project co-financed by the Croatian Government and European Union through the European Regional Development Fund - the Competitiveness and Cohesion Operational Programme (Grant KK.01.1.1.01.0004).
## Author contributions statement
N.B. conceived the research, B.K.K and O.S.B. performed the analysis and computations. W.T. and M.G. verified the results and methods. B.K.K., W.T., D.K.S. and N.B. wrote the manuscript with input from all authors.
## Additional information
The authors declare no competing interests.
Supplementary Information for: Transport properties and doping evolution of the Fermi surface in cuprates
**B. Klebel-Knobloch\({}^{1}\), W. Tabis\({}^{2,1}\), M. A. Gala\({}^{1,2}\), O. S. Barisic\({}^{3,*}\), D. K. Sunko\({}^{4,*}\), and N. Barisic\({}^{1,4,*}\)**
\({}^{1}\)Institute of Solid State Physics, TU Wien, 1040 Vienna, Austria
\({}^{2}\)AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, 30-059 Krakow, Poland
\({}^{3}\)Institute of Physics, Bijenicka cesta 46, HR-10000, Zagreb, Croatia
\({}^{4}\)Department of Physics, Faculty of Science, University of Zagreb, Bijenicka cesta 32, HR-10000, Zagreb, Croatia
\({}^{*}\)[email protected], [email protected], [email protected]
## 1 LSCO - additional observations
In the main text, our discussion is focused on the doping levels \(p<0.30\). The reason is threefold. First, our main interest is to understand the strong deviation from the universal properties, which starts at \(p\sim 0.08\) in LSCO. Second, we were interested in the evolution of \(n_{\text{H}}\) in the \(p\) to \(1+p\) regime which occurs below \(p<0.27\). Third, the reported ARPES tight-binding parametrization extends to \(p=0.30\). Therefore, we decided to constrain our discussion conservatively.
However, measurements performed on thin films [1] show that \(n_{\text{H}}\) becomes negative both at high doping (\(p>0.32\)) and, even more instructively, rather abruptly upon increasing the temperature at \(p=0.32\), see Fig. S1. Both sign-reversals are easily captured within the proposed arc elongation with temperature/doping.
At high doping levels (\(p>0.35\)), the Fermi surface (FS) is closed and electron-like, thus it is to be expected that \(n_{\text{H}}\) carries a negative sign. To understand the abrupt change as a function of temperature, it is sufficient to note that the FS close to \(p=0.30\) still consists of arcs, with long, flat "metallic" segments that are dominantly hole-like, while the short electron-like segments at the end of the Brillouin zone are partially gapped.
Simply by looking at such a FS, it is easy to deduce that even a small arc elongation with temperature, encompassing the strongly curved electron-like segments, causes large changes in \(n_{\text{H}}\), eventually provoking the observed sign-reversal. As shown in Fig. S1b, the results of our calculations reveal that \(n_{\text{H}}\) indeed changes its sign to become negative at elevated doping levels. We also note that the calculated \(n_{\text{H}}\) agrees with the reported values from Ref. [1] only qualitatively--which we attribute to difficulties of both synthesizing homogeneous LSCO films at high Sr concentrations and ascertaining the exact hole-content of the CuO layers.
We also find it instructive to calculate the \(n_{\text{H}}\) of the (ungapped) underlying Fermi-surface Fig. S2. In Fig. S2, the difference between the \(n_{\text{H}}\)'s calculated from the pseudogapped (dashed line) and ungapped Fermi surface (full line) is a simple vertical shift by one elementary charge, straightforwardly related to the (de)localization of exactly one charge per CuO\({}_{2}\) plaquette. We find that the simplicity of this argument makes it hard to contest the reality of delocalization. Sometimes, a picture is worth a thousand words.
## 2 Universality of nodal \(v_{f}\) and the absolute value of \(\sigma_{\text{xx}}\)
In this work, we use published best-fit parametrizations of ARPES data, in the form of tight-binding dispersions. Typically, the fit for such models takes into account energy windows which capture a significant portion of the band which crosses the Fermi level, or more generally the shape of the FS, not only the (low) energy scale which is relevant for transport processes. Therefore, the nodal Fermi velocities (\(v_{F}^{n}\)) at \(<30\,\mathrm{meV}\) from the Fermi level are not always captured with high accuracy. Because the simple FL transport calculations are sensitive to the value of \(v_{F}^{n}\), only tight-binding parametrizations that capture its observed value with sufficient precision can be used in direct comparisons with the experiment.
A careful analysis shows the \(v_{F}^{n}\) (at \(4\,\mathrm{meV}<\omega<30\,\mathrm{meV}\)) to be universal across a number of cuprate families and the relevant doping range (see Fig. S3a). [2, 3, 4] Because \(v_{F}^{n}\) from the tight-binding fit to ARPES data of Hg1201 is closest to the observed universal value of \(\sim\)\(1.8\,\mathrm{eV}\,\mathrm{\AA}\), in the calculation for \(\tau\rho_{\square}\), we normalize the other compounds to Hg1201. The corresponding factors of normalization \(f_{\text{norm}}\) are given in Fig. S3b. The corrected \(\tau\rho_{\square}\) is shown in the main text (Fig. 4a), while the uncorrected one is in Fig. S3c. This normalization may be incorporated into the line integrals given by Eqs. (3) and (4) in
the main text, applying a simple substitution \(\varepsilon_{k}\to f_{\text{norm}}\varepsilon_{k}\). It is easy to see that such a rescaling of the dispersion keeps the values of the Hall coefficient \(R_{H}\) unchanged, as given by Eq. (2) and shown in the figures in the main text.
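To make the invariance explicit (a scaling argument only; the precise expressions are Eqs. (2)-(4) of the main text, and the line-integral forms quoted below are schematic), note that at fixed \(\tau\) the substitution \(\varepsilon_{k}\to f_{\mathrm{norm}}\varepsilon_{k}\) rescales the band velocities as \(\mathbf{v}\to f_{\mathrm{norm}}\mathbf{v}\), so that
\[\sigma_{xx}\propto\tau\oint_{\mathrm{FS}}\frac{v_{x}^{2}}{|\mathbf{v}|}\,dk\;\to\;f_{\mathrm{norm}}\,\sigma_{xx},\qquad\frac{\sigma_{xy}}{B}\propto\tau^{2}\oint_{\mathrm{FS}}\left(\mathbf{v}\times d\mathbf{v}\right)_{z}\;\to\;f_{\mathrm{norm}}^{2}\,\frac{\sigma_{xy}}{B}.\]
Consequently \(R_{\mathrm{H}}\simeq\sigma_{xy}/(B\,\sigma_{xx}\sigma_{yy})\) is unaffected by the normalization, whereas \(\sigma_{xx}\), and hence \(\tau\rho_{\square}\), carries one power of \(f_{\mathrm{norm}}\), which is why only the resistivity requires the correction shown in Fig. S3b.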
**Figure S2.** The underlying (ungapped) FS and Hall number in LSCO. In **a**, the underlying FS as parametrized in Ref. [7] is shown. In contrast to the pseudogapped case (see Fig. 3 in the main text), the fraction of ungapped states \(f_{\delta}\) is now 100%, depicted in **b** as red line. **c** The calculated \(n_{\text{H}}\) of the underlying FS is depicted with a red line here too, and compared to the real (pseudogapped) case, discussed in the main text and also shown in Fig. 3c.
**Figure S3.****a** The nodal Fermi velocity \(v_{F}\). Full points are experimental data reproduced from Ref. [2] with a mean error of \(0.4\,\mathrm{eV}\,\mathrm{\AA}\). Clearly \(v_{F}\) is, within error bars, compound- and doping-independent. Lines depict the derivative of the electronic dispersion based on the tight-binding models in Refs. [3, 8] (Hg1201), Refs. [9, 10] (TI2201) and Ref. [11] (LSCO). Clearly, the tight-binding \(v_{F}\)'s strongly deviate from the experimentally established universal value, which greatly affects the absolute value of the calculated resistivity. To control this imprecision of the tight-binding procedure, the normalization factor \(f_{\mathrm{norm}}\) is introduced to set the values of TI2201 and LSCO nodal "tight-binding" \(v_{F}\) to that of Hg1201. **b** Normalizing factors \(f_{\mathrm{norm}}\) between the nodal \(v_{F}\)'s of TI2201 and Hg1201 (blue dashed curve) as well as the nodal \(v_{F}\)'s between LSCO and Hg1201 (orange dashed curve). **c** The as-calculated \(\tau\rho_{\square}\)'s are compared to the experimentally obtained values by combining \(\rho_{\square}\) from Refs. [13, 12], and \(\tau\) from Ref. [14]. To obtain Fig. 4 of the main text, we have simply multiplied the calculated \(\tau\rho_{\square}\) values by the corresponding normalization factors from panel **b**.
2308.08402 | Scaling description of frictionless dense suspensions under
inhomogeneous flow | Predicting the rheology of dense suspensions under inhomogeneous flow is
crucial in many industrial and geophysical applications, yet the conventional
`$\mu(J)$' framework is limited to homogeneous conditions in which the shear
rate and solids fraction are spatially invariant. To address this shortcoming,
we use particle-based simulations of frictionless dense suspensions to derive
new constitutive laws that unify the rheological response under both
homogeneous and inhomogeneous conditions. By defining a new dimensionless
number associated with particle velocity fluctuations and combining it with the
viscous number, the macroscopic friction and the solids fraction, we obtain
scaling relations that collapse data from homogeneous and inhomogeneous
simulations. The relations allow prediction of the steady state velocity,
stress and volume fraction fields using only knowledge of the applied driving
force. | Bhanu Prasad Bhowmik, Christopher Ness | 2023-08-16T14:42:53Z | http://arxiv.org/abs/2308.08402v1 | # Scaling description of frictionless dense suspensions under inhomogeneous flow
###### Abstract
Predicting the rheology of dense suspensions under inhomogeneous flow is crucial in many industrial and geophysical applications, yet the conventional '\(\mu(J)\)' framework is limited to homogeneous conditions in which the shear rate and solids fraction are spatially invariant. To address this shortcoming, we use particle-based simulations of frictionless dense suspensions to derive new constitutive laws that unify the rheological response under both homogeneous and inhomogeneous conditions. By defining a new dimensionless number associated with particle velocity fluctuations and combining it with the viscous number, the macroscopic friction and the solids fraction, we obtain scaling relations that collapse data from homogeneous and inhomogeneous simulations. The relations allow prediction of the steady state velocity, stress and volume fraction fields using only knowledge of the applied driving force.
_Introduction._ Dense suspensions are an important class of soft matter system comprising Brownian or non-Brownian particles mixed roughly equally by volume with viscous fluid [1]. Their rheology attracts sustained interest from physicists due to the manifold complex phenomena that arise with apparently simple constituents [2; 3]. These include non-equilibrium absorbing state transitions [4], shear thickening [5], thinning [6], and yield stress behaviour [7]. As well as being of fundamental interest, characterising this complexity is key to the extensive use of dense suspensions in various formulation and processing industries.
A useful model with which to build rheological understanding is the non-Brownian suspension [8], an especially appealing system when one considers the case of inertialess hard spheres. By analogy to dry granular systems [9], a recent study successfully obtained constitutive laws for this system [10], confirming their rate-independence and finding one-to-one relations between the volume fraction \(\phi\) and each of two dimensionless rheological quantities, the viscous number \(J=\eta\dot{\gamma}/P\) and the macroscopic friction coefficient \(\mu=\sigma_{xy}/P\). Here \(\eta\) is the suspending liquid viscosity, \(\dot{\gamma}\) is the shear rate, \(P\) is a measure of the particle contribution to the normal stress, and \(\sigma_{xy}\) is the shear stress. This important result, the so-called \(\mu(J)\)-rheology, forms the basis of subsequent models that introduce rate-dependence through additional stress scales [11; 12].
The applicability of \(\mu(J)\) becomes limited when considering inhomogeneous flows in which \(\dot{\gamma}\) varies spatially [13; 14; 15]. In particular, the lower limit of \(\mu\) (which we denote \(\mu_{J}\)) is non-zero in all homogeneously flowing systems irrespective of the particle-particle friction coefficient \(\mu_{p}\)[16; 17; 18] but can by construction vanish when mechanical balance dictates sign changes in \(\sigma_{xy}\) such as along pipe centrelines. In such scenarios regions that would otherwise be jammed (_i.e._ with \(\mu<\mu_{J}\) and \(J=0\)) can have non-zero \(\dot{\gamma}\) thanks to facilitation by nearby flowing regions [19; 20]. This non-local effect has been extensively studied in amorphous solids [21] and dry granular systems [22], often by formulating a fluidity field with diffusive behaviour characterised by an inhomogeneous Helmholtz equation. Microscopically it is conceptualized that the fluidity originates from an activated process that diffuses through the system in a cooperative way controlled by an inherent length scale [23; 24; 19; 21; 22]. Recent works in dry granular matter [25; 26; 27] interpret the fluidity in terms of particle velocity fluctuations \(\delta u\) and density \(\rho\), defining a fourth dimensionless quantity \(\Theta=\rho\delta u^{2}/P\) and seeking constitutive relations linking it to \(\phi\), \(\mu\) and \(I\)[9] (the dry counterpart to \(J\)). This successfully collapses data from homogeneous and inhomogeneous simulations onto a master curve, but is limited in that the \(\Theta\) fields required to make predictions thereafter must be obtained by simulation. Naturally such findings raise the question of whether similar constitutive equations exist to unify homogeneous and inhomogeneous dense suspension rheology.
Here we use particle-based simulation [28] to model dense suspensions under homogeneous and inhomogeneous conditions, achieving the latter through an imposed Kolmogorov flow following the approach of Saitoh and Tighe [19]. We seek to unify the rheology under both sets of conditions by first defining a dimensionless suspension temperature based on particle velocity fluctuations, as \(\Theta=\eta\delta u/aP\), analogous to the granular temperature [26], and then obtaining relations among the four dimensionless numbers \(\phi\), \(J\), \(\mu\) and \(\Theta\). Although the \(\mu(J)\) framework was devised based on frictional millimetric grains, recent experiments demonstrate it is nonetheless applicable to frictionless ones [29], and we focus here on the latter. Doing so we find scalings that can collapse homogeneous and inhomogeneous rheology data onto a set of master curves that can then be used to predict the rheology of other flow types.
_Simulation details._ We simulate a mixture of frictionless, non-Brownian spheres of radius \(a\) and \(1.4a\) mixed in equal number in a periodic box of dimensions \(L_{x}\), \(L_{y}\), \(L_{z}\), using LAMMPS [30; 31] (see Fig. 1(a)). Particles are suspended in a density (\(\rho\)) matched viscous liquid, and we impose pairwise contact and hydrodynamic forces as described by Ref. [18]. Briefly, the
hydrodynamic lubrication force for particles of radius \(a_{i}\) and \(a_{j}\), with center-to-center vector \(\mathbf{r}_{i,j}\), is given by \(\mathbf{F}_{i,j}^{h}\sim(1/h)\mathbf{u}_{i,j}\), where \(\mathbf{u}_{i,j}\) is the relative velocity of the particles and \(h=|\mathbf{r}_{i,j}|-(a_{i}+a_{j})\) is the surface-to-surface gap. \(F_{i,j}^{h}\) is not computed for \(h>0.05a\), and it saturates to \(\sim(1/h^{c})\mathbf{u}_{i,j}\) for \(h\leq h^{c}\) (with \(h^{c}=0.001a\)), allowing particles to come into contact. Contact forces arise only when \(|\mathbf{r}_{i,j}|<(a_{i}+a_{j})\) and are given by \(\mathbf{F}_{i,j}^{c}=k\left[(a_{i}+a_{j})-|\mathbf{r}_{i,j}|\right]\mathbf{n}_{i,j}\), where \(k\) is a spring constant and \(\mathbf{n}_{i,j}=\mathbf{r}_{i,j}/|\mathbf{r}_{i,j}|\). Particles additionally experience dissipative drag due to motion relative to the fluid, given by \(\mathbf{F}_{i}^{d}=6\pi\eta a\left(\mathbf{u}_{i}-\mathbf{u}^{\infty}(y_{i})\right)\), with \(\mathbf{u}_{i}\) the velocity of particle \(i\) and \(\mathbf{u}^{\infty}(y_{i})\) the liquid streaming velocity at the position of particle \(i\).
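As an illustration of the interaction rules above, the following sketch evaluates the lubrication, contact and drag contributions for a single pair of spheres. It is our own schematic rendering, not the authors' LAMMPS implementation; the overall prefactor and sign chosen for the lubrication term, and the numbers in the example call, are placeholders.

```python
import numpy as np

ETA, K = 1.0, 1.0e4          # liquid viscosity and contact spring constant (arbitrary units)
H_OUT, H_REG = 0.05, 0.001   # outer cutoff and regularisation of the lubrication gap (in units of a)

def pair_force(r_i, r_j, u_i, u_j, a_i, a_j):
    """Lubrication + contact force on particle i due to particle j (schematic)."""
    r_ij = r_i - r_j
    dist = np.linalg.norm(r_ij)
    n_ij = r_ij / dist                       # unit vector from j to i
    u_ij = u_i - u_j                         # relative velocity
    a = 0.5 * (a_i + a_j)
    h = dist - (a_i + a_j)                   # surface-to-surface gap
    f = np.zeros(3)
    if h < H_OUT * a:                        # lubrication ~ u_ij / h, regularised below h^c
        f += -ETA * a * u_ij / max(h, H_REG * a)
    if dist < a_i + a_j:                     # linear contact spring along the line of centres
        f += K * ((a_i + a_j) - dist) * n_ij
    return f

def drag_force(u_i, u_inf_at_y, a_i):
    """Dissipative drag towards the imposed liquid streaming velocity."""
    return -6.0 * np.pi * ETA * a_i * (u_i - u_inf_at_y)

# example: two spheres of radius 1 approaching each other across a gap of 0.01
print(pair_force(np.zeros(3), np.array([2.01, 0.0, 0.0]),
                 np.array([0.1, 0.0, 0.0]), np.array([-0.1, 0.0, 0.0]), 1.0, 1.0))
```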
Flow is generated by specifying \(\mathbf{u}^{\infty}\) to induce particle motion through drag. We obtain homogeneous rheology data for fixed-volume systems of \(\phi=0.48\) to \(0.65\) by generating simple shear _via_\(\mathbf{u}^{\infty}(y)=\dot{\gamma}y\mathbf{\delta}_{x}\), with \(y\) the direction of the velocity gradient and \(\mathbf{\delta}_{x}\) the unit vector along \(x\). We chose our parameters such that \(\rho\dot{\gamma}a^{2}/\eta\ll 1\) and \(\dot{\gamma}\sqrt{\rho a^{3}/k}\ll 1\), recovering rate-independence [10]. To obtain inhomogeneous flow we specify a spatially dependent liquid velocity as \(\mathbf{u}^{\infty}(y)=\kappa\sin\left(2\pi y/L_{y}\right)\mathbf{\delta}_{x}\) (see Fig. 1(b), and the gradient \(\dot{\gamma}^{\infty}\) in Fig. 1(c)), and later test the model with \(\mathbf{u}^{\infty}(y)=\kappa\sin^{3}(2\pi y/L_{y})\mathbf{\delta}_{x}\). We run simulations with \(L_{y}=50a\), \(100a\) and \(200a\) (with \(L_{x},L_{z}=20a\)) and systems containing \(\mathcal{O}(10^{4})\) particles (we verified that larger systems produce equivalent rheology results). We simulated systems with mean volume fraction \(\bar{\phi}=0.5\) to \(0.63\) (achieved by varying the particle number), and \(\kappa\) is a constant with dimensions of velocity, chosen so that the measured \(\rho\dot{\gamma}a^{2}/\eta\) remains \(<0.01\) throughout and particle inertia is negligible. The stress (a tensor) is computed on a per-particle basis as \(\mathbb{\Sigma}_{i}=\sum_{j}(\mathbf{F}_{i,j}^{*}\otimes\mathbf{r}_{i,j})\), counting both contact and hydrodynamic forces.
We aim to compare the spatially-variant values of \(J\), \(\mu\), \(\phi\) and \(\Theta\) obtained _via_ inhomogeneous flow with the spatially-invariant ones obtained _via_ homogeneous flow (the latter follow closely our previous results [18]). Doing so requires computing the variation in \(y\) of the stress and velocity fields under inhomogeneous flow, which we do by binning particle data in blocks of width \(a\) and volume \(V_{b}=L_{x}aL_{z}\), with the per-block value of a quantity being simply the mean of the per-particle quantities of the particles with centers lying therein. We compute the velocity fluctuation (necessary for calculating the \(\Theta\) field) of each particle as \(\delta u_{i}=|u_{i,x}-u_{i,x}^{\dagger}|\) where \(u_{i,x}\) is the \(x\)-component of \(\mathbf{u}_{i}\) and \(u_{i,x}^{\dagger}\) is the average \(x\) velocity of all particles with centers lying in a narrow window \(\pm\epsilon\) (taking \(\epsilon=\mathcal{O}(0.1a)\)) of \(y\), and we then bin \(\delta u_{i}\) per block. As all three components of the velocity fluctuations are statistically equivalent we have used only the \(x\) values to compute \(\Theta\). In what follows we report steady state data only [32], averaging across \(6\) realizations and at least \(500\) configurations per realization.
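The coarse-graining just described can be summarised by a short sketch (ours, with assumed array names): particles are assigned to slabs of width \(a\), and the fluctuation of each particle is measured against the mean \(x\)-velocity in a narrow window around its own \(y\)-coordinate.

```python
import numpy as np

def block_profiles(y, ux, Ly, a, eps=0.1):
    """Return slab centres, mean streaming velocity and mean |delta u| per slab of width a."""
    n_bins = int(round(Ly / a))
    edges = np.linspace(0.0, Ly, n_bins + 1)
    block = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)

    # velocity fluctuation against the local mean in a window of +/- eps*a
    du = np.array([abs(ux[i] - ux[np.abs(y - y[i]) < eps * a].mean()) for i in range(len(y))])

    ux_blk = np.array([ux[block == b].mean() if np.any(block == b) else np.nan for b in range(n_bins)])
    du_blk = np.array([du[block == b].mean() if np.any(block == b) else np.nan for b in range(n_bins)])
    return edges[:-1] + 0.5 * a, ux_blk, du_blk

# The dimensionless temperature of a block then follows as Theta = eta * du_blk / (a * P),
# with P the block-averaged pressure.
rng = np.random.default_rng(0)
y, ux = rng.uniform(0.0, 100.0, 5000), rng.normal(0.0, 1.0, 5000)
print(block_profiles(y, ux, 100.0, 1.0)[2][:5])
```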
_Results._ Shown in Fig. 1(b)-(g) are, respectively, steady-state profiles in \(y\) of the coarse-grained velocity (in \(x\)) \(u_{x}\), shear rate \(\dot{\gamma}=\partial u_{x}/\partial y\), velocity fluctuations \(\delta u\), volume fraction \(\phi\), pressure \(P\) (\(=(1/3)\mathrm{Tr}(\mathbb{\Sigma})\)), and shear stress \(\sigma_{xy}\), for \(\bar{\phi}=0.60\), with each plotted point representing a block. Although at initialisation the particle density is homogeneous (_i.e._\(\phi\neq\phi(y)\)), in the steady
Figure 1: Inhomogeneous flow of a frictionless dense suspension. Shown are (a) a typical configuration of the system for \(\bar{\phi}=0.60\), with the red region highlighting a coarse-graining box; and the steady-state profiles in \(y\) of (b) the \(x\)-components of the externally applied liquid velocity field \(u_{x}^{\infty}\) (green line) and the coarse-grained velocity field of the particles \(u_{x}\) (red points). Velocity is presented here in units of \(\kappa\); (c) the expected shear rate for a Newtonian fluid \(\dot{\gamma}^{\infty}=\partial u_{x}^{\infty}/\partial y\) (green line) and the measured shear rate \(\dot{\gamma}\) (red points), both in units of \(\kappa/a\); (d) the velocity fluctuations \(\delta u\) in units of \(\kappa\); (e) the local volume fraction \(\phi\), noting that the higher values at low \(\dot{\gamma}\) demonstrate particle migration has taken place; (f) the pressure \(P\) expressed in units of \(\eta\kappa/a\); (g) the shear stress \(\sigma_{xy}\) computed from the particle interactions (red points) and by integrating over the left hand side of Eq. 4 (green points), in the same units as \(P\).
state \(\phi\) exhibits spatial variation set up by particle migration to balance the normal stress [13; 14; 33]. The velocity profile follows a similar trend to the applied force, as expected, but is flattened at the regions of largest \(\phi\) leading to significant deviations between \(\dot{\gamma}\) and \(\dot{\gamma}^{\infty}\). The pressure becomes spatially uniform, and the shear stress follows the shear rate in sign. Since \(P\) is spatially invariant in the steady state, one can deduce that the variation of the quantities \(\eta\dot{\gamma}/P\), \(\sigma_{xy}/P\) and \(\eta\delta u/aP\) follow \(\dot{\gamma}\), \(\sigma_{xy}\) and \(\delta u\) respectively.
We analyse inhomogeneous data by computing the dimensionless control parameters in each block, defining the scalar shear rate and stress components on the basis of invariants of the respective tensor quantities so that \(J,\mu>0\). This is done for a range of \(\bar{\phi}\), with parametric plots of \(J(y)\), \(\phi(y)\), \(\mu(y)\) and \(\Theta(y)\) shown in Figs. 2(a)-(c). Each plotted point represents a \(y\)-coordinate, and colors represent different \(\bar{\phi}\). Shown also (in black) are homogeneous data. Reading across the data points of a single color from right-to-left represents moves from regions of high-to-low \(\dot{\gamma}\) in the inhomogeneous domain.
The homogeneous \(\phi(J)\) and \(\mu(J)\) relations follow qualitatively the result of Boyer _et al._[10], though our frictionless particles render \(\phi_{J}\) and \(\mu_{J}\) dissimilar to the frictional values reported there. \(\Theta(J)\) follows a power-law relation, as in dry granular matter [26] though with a different exponent. In general large-\(J\) inhomogeneous data approximately match homogeneous data, though they deviate with decreasing \(J\), demonstrating the shortcomings of the existing constitutive laws.
With the help of scaling theory, we next attempt to find constitutive laws that simultaneously describe the rheology under homogeneous and inhomogeneous flow. We focus first on how the inverse viscosity \(J/\mu=\eta\dot{\gamma}/\sigma_{xy}\) vanishes as \(\phi\) approaches the jamming point \(\phi_{J}\). This trend is followed by all the homogeneous and inhomogeneous simulations, leading to our first scaling relation
\[J/\mu=\alpha\left(\phi_{J}-\phi\right)^{2}, \tag{1}\]
plotted in Fig. 2(d) with \(\alpha=4.1\) and \(\phi_{J}=0.6555\).
The next scaling relation is motivated by Kim and Kamrin [26]. In homogeneous flow, within the range of our data we find \(\mu^{2.5}\sim J\) (Fig. 2(b)) and \(\Theta^{1.44}\sim J\) (Fig. 2(c)). Since for the range of \(\bar{\phi}\) explored here inhomogeneous data follow homogeneous laws at large \(J\), we expect a scaling of the form \(\mu^{2.5}\Theta^{1.44}\sim F_{1}(J)\). Indeed this results in a good collapse as shown in Fig. 2(e), in which data are described by the relation
\[\Theta^{1.44}\mu^{2.5}=\begin{cases}\beta J^{2}&\text{if }J>10^{-3};\\ \vartheta J^{1.33}&\text{if }J\leq 10^{-3};\end{cases} \tag{2}\]
Figure 2: Relations between the dimensionless control parameters. Shown in the top row are the relations between the dimensionless viscous number \(J\) and (a) the volume fraction \(\phi\) for a range of homogeneous \(\phi\) (black data) and inhomogeneous \(\bar{\phi}\); (b) the effective friction coefficient \(\mu\) and; (c) the suspension temperature \(\Theta\). In the bottom row are the collapses using the scaling Eqns. 1 (d), 2 (e) and 3 (f), for different \(\bar{\phi}\) and \(L\). In (d) we show data for \(L/a=50\) to highlight its deviation from the scaling relation. Black triangles represent homogeneous data (simple shear) and all other points are for inhomogeneous flow at different \(\bar{\phi}\).
with \(\beta=3\) and \(\vartheta=0.06\).
The final scaling relation is motivated by the relation between granular fluidity and \(\phi\) reported for dry granular matter. Zhang and Kamrin [25] write a non-dimensional granular fluidity \(\tilde{g}=gd/\delta u\), where \(g=\dot{\gamma}/\mu\), and \(d\) is the spatial dimension. We define an equivalent quantity in terms of the previously discussed dimensionless numbers, namely \(J/\mu\Theta\), though we find a better collapse is achieved through a change to the exponents as
\[\frac{J}{\Theta^{0.8}\mu^{1.2}}=F_{2}(\phi), \tag{3}\]
with \(F_{2}(\phi)=\epsilon\left[(\phi-\phi_{m})+\sqrt{(\phi-\phi_{m})^{2}+\zeta} \right]+\lambda\phi\) (see Fig. 2(f)) and \(\epsilon=-10.98\), \(\phi_{m}=0.618\), \(\zeta=0.0004\) and \(\lambda=1.533\). We thus have three scaling relations, Eqs. 1, 2 and 3, that relate \(\phi\), \(J\), \(\mu\) and \(\Theta\). The collapse appears poorer for \(\bar{\phi}=0.5\) (Fig. 2(f)) and \(L/a=50\) (Fig. 2(d)), indicating limits to the range of applicability. An issue in the former case may be that our simplified hydrodynamics, accounting only for lubrication, becomes nonphysical at lower \(\phi\) and that a more highly resolved fluid field is required.
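For later convenience, the three fitted relations can be transcribed directly into code using the constants quoted above; the snippet below is just such a transcription (function names are ours) and encodes no additional physics.

```python
import numpy as np

ALPHA, PHI_J = 4.1, 0.6555                              # Eq. (1)
BETA, VARTHETA = 3.0, 0.06                              # Eq. (2)
EPS, PHI_M, ZETA, LAM = -10.98, 0.618, 0.0004, 1.533    # Eq. (3)

def inv_viscosity(phi):
    """Eq. (1): J/mu as a function of the volume fraction."""
    return ALPHA * (PHI_J - phi) ** 2

def F1(J):
    """Right-hand side of Eq. (2), equal to Theta^1.44 * mu^2.5."""
    return np.where(J > 1e-3, BETA * J ** 2, VARTHETA * J ** 1.33)

def F2(phi):
    """Right-hand side of Eq. (3), equal to J / (Theta^0.8 * mu^1.2)."""
    return EPS * ((phi - PHI_M) + np.sqrt((phi - PHI_M) ** 2 + ZETA)) + LAM * phi

print(inv_viscosity(0.60), float(F1(0.01)), F2(0.60))
```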
Given a profile of one of the dimensionless numbers, one could therefore fully characterise the rheology of the system. In our simulations, however, the only known input is the externally applied force, which we recall is defined through \(\mathbf{u}^{\infty}\). To use the scaling relations we need to establish another relation that can provide us with one of these dimensionless numbers from the knowledge of the applied force profile. Considering the inertia-free momentum balance \(\nabla\cdot\mathbb{\Sigma}=-\mathbf{f}\) per unit volume, we can write the following equation for the \(k^{th}\) block of the simulation cell (which we verified in Fig. 1(g)):
\[N_{k}6\pi\eta a\left[Au_{x,k}^{\infty}-u_{x,k}\right]=-\left(\frac{\partial \sigma_{xy,k}}{\partial y}\right)V_{b}. \tag{4}\]
Here \(N_{k}\), \(u_{x,k}^{\infty}\), \(u_{x,k}\) and \(\sigma_{xy,k}\) are the particle number in the block, the liquid streaming velocity at the centre of the block, and the particle velocity and stress averaged over the block, which has volume \(V_{b}\). \(A\) is an order unity quantity necessary to account for small variations in \(u_{x,k}^{\infty}\) across the block. The first term of Eq. 4 represents the net applied force and the second represents the net viscous force exerted by the fluid due to drag. The resultant of these is balanced by the net stress gradient inside the block. Using the definition of our dimensionless numbers, Eq. 4 can be rewritten for the streaming velocity at \(y\) as
\[u_{x}^{\prime\infty}(y)=\left[\int_{0}^{y}\frac{1}{a}J^{*}(y^{\prime})dy^{ \prime}-\frac{2a}{9\phi(y)}\left(\frac{\partial\mu^{*}(y)}{\partial y}\right) \right], \tag{5}\]
with \(u_{x}^{\prime\infty}(y)=u_{x}^{\infty}(y)\eta A/aP\) and asterisks representing multiplication by \(\mathrm{sgn}(\dot{\gamma}^{\infty}(y))\), noting that \(P\) is uniform at steady state and using \(\phi(y)=(4/3)\pi a^{3}N(y)/V_{b}\), acknowledging our earlier comment about phase separation [32]. Equation 5 thus relates the externally applied liquid flow field to the profiles of \(J\), \(\mu\) and \(\phi\).
For a known \(\mathbf{u}^{\infty}\) we solve Eqs. 1, 2, 3 and 5 numerically in the following way. We first guess a \(\phi\left(y\right)\) profile by assuming accumulation at points where the spatial derivative of the imposed force vanishes, starting with a simple form as \(\phi(y)=\sum_{j=1}^{n_{p}}a_{j}/[(y-y_{j}^{0})^{2}+b_{j}^{2}]+\phi_{0}\), with mass conserved through \(\bar{\phi}=\frac{1}{L_{y}}\int_{0}^{L_{y}}\phi\left(y\right)dy\). Here \(y_{j}^{0}\) are the coordinates of the point where the first derivative of the applied force vanishes, \(n_{p}\) is the number of such points and \(b_{j}\) is the width of the Lorentzian function peaked at \(y_{j}^{0}\). We then compute directly \(J\), \(\mu\) and \(\Theta\) using Eqs. 1, 2 and 3, before attempting to balance Eq. 5. The imbalance of Eq. 5 reflects the accuracy of our guess. We refine \(\phi(y)\) by tuning \(\phi_{0}\), \(a_{j}\) and \(b_{j}\) until Eq. 5 is satisfied (up to some tolerance). Shown in Fig. 3 are predicted results compared against 'unseen' simulation data (_i.e._ data not used to obtain the scaling exponents) with \(\bar{\phi}=0.55\), \(0.57\) and \(\mathbf{u}^{\infty}(y)=\kappa\sin^{3}(2\pi y/L_{y})\mathbf{\delta}_{x}\), demonstrating the degree of success of the scaling relations for predicting \(y\)-profiles of \(\phi\), \(J\), \(\mu\) and \(\Theta\). Considering the highly nonlinear nature of the scaling relations, the quality of the predictions is reasonably good.
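The following sketch illustrates this procedure under simplifying assumptions (a single Lorentzian, the high-\(J\) branch of Eq. 2, and a rescaled target profile \(u_{x}^{\prime\infty}\) supplied directly); it is our schematic reconstruction, not the authors' code.

```python
import numpy as np
from scipy.optimize import brentq

# Constants of Eqs. (1)-(3) as quoted in the text
ALPHA, PHI_J = 4.1, 0.6555
BETA = 3.0
EPS, PHI_M, ZETA, LAM = -10.98, 0.618, 0.0004, 1.533
F2 = lambda p: EPS * ((p - PHI_M) + np.sqrt((p - PHI_M) ** 2 + ZETA)) + LAM * p

def closure(phi):
    """Solve Eqs. (1)-(3) (high-J branch of Eq. 2) for J and mu at a given phi."""
    g = ALPHA * (PHI_J - phi) ** 2                            # Eq. (1): J = g * mu
    def res(log_mu):
        mu = np.exp(log_mu)
        J = g * mu
        theta = (BETA * J ** 2 / mu ** 2.5) ** (1.0 / 1.44)   # Eq. (2)
        return J - F2(phi) * theta ** 0.8 * mu ** 1.2         # Eq. (3)
    mu = np.exp(brentq(res, np.log(1e-12), np.log(10.0)))     # bracket may need tuning near phi_J
    return g * mu, mu

def phi_profile(y, phi_bar, a0, b0, y0):
    """Single-Lorentzian ansatz with phi0 fixed by mass conservation."""
    lor = a0 / ((y - y0) ** 2 + b0 ** 2)
    return phi_bar - lor.mean() + lor

def imbalance(params, y, a, phi_bar, u_prime_target, sgn):
    """Mismatch of Eq. (5) for trial (a0, b0); sgn is the sign of the imposed shear rate."""
    a0, b0 = params
    phi = phi_profile(y, phi_bar, a0, b0, 0.5 * y[-1])
    J, mu = np.transpose([closure(p) for p in phi])
    lhs = np.cumsum(sgn * J) * (y[1] - y[0]) / a - (2 * a / (9 * phi)) * np.gradient(sgn * mu, y)
    return np.sum((lhs - u_prime_target) ** 2)

# (a0, b0) would then be refined, e.g. with scipy.optimize.minimize, until the
# imbalance falls below tolerance, after which Theta follows from Eq. (2).
```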
_Conclusions._ Using particle-based simulation we seek universality amongst flows of dense, frictionless suspensions.
Figure 3: Predictions of the scaling relations against simulation data not used for obtaining the scaling exponents, with \(\mathbf{u}^{\infty}(y)=\kappa\sin^{3}(2\pi y/L_{y})\mathbf{\delta}_{x}\). Shown are (a) the volume fraction \(\phi\); (b) the viscous number \(J\); (c) the effective friction coefficient \(\mu\); and (d) the suspension temperature \(\Theta\), with predictions given by solid lines and simulation data in points, for \(\bar{\phi}=0.55\) (red) and \(0.57\) (green).
Along with canonical suspension rheology control parameters \(\phi\), \(J\) and \(\mu\), we introduce a fourth quantity \(\Theta\) characterising velocity fluctuations, inspired by recent studies in dry granular physics [26]. We find a trio of scaling relations among these quantities that collapse data for homogeneous and inhomogeneous flow. Utilising a momentum balance we show that from knowledge of the externally applied force, one can use the relations to predict the features of a general inhomogeneous flow. Our work raises manifold avenues for future work. In particular, the microscopic origin of the exponents is not understood, nor is their generalisation to the broader class of suspensions that includes polydisperse particles (for which colloidal forces may become relevant [34]), non-spheres and other complexities. Meanwhile the question of a diverging lengthscale --apparently a staple of non-local rheology in dry granular matter [22; 35; 24]-- remains open. Computing a granular fluidity field from our data, we find, similar to [19], no divergence in the characteristic lengthscale, which remains \(\mathcal{O}(a)\) everywhere. This raises an important open question regarding what are the minimal conditions required for a diverging lengthscale in inhomogeneous particulate flows.
###### Acknowledgements.
B.P.B. acknowledges support from the Leverhulme Trust under Research Project Grant RPG-2022-095; C.N. acknowledges support from the Royal Academy of Engineering under the Research Fellowship scheme. We thank Ken Kamrin, Martin Trulsson, Mehdi Bouzid, Romain Mari and Jeff Morris for useful discussions.
|
2310.12610 | Fracton gauge fields from higher-dimensional gravity | We show that the fractonic dipole-conserving algebra can be obtained as an
Aristotelian (and pseudo-Carrollian) contraction of the Poincar\'e algebra in
one dimension higher. Such contraction allows to obtain fracton electrodynamics
from a relativistic higher-dimensional theory upon dimensional reduction. The
contraction procedure produces several scenarios including the some of the
theories already discussed in the literature. A curved space generalization is
given, which is gauge invariant when the Riemann tensor of the background
geometry is harmonic. | Francisco Peña-Benítez, Patricio Salgado-Rebolledo | 2023-10-19T09:39:17Z | http://arxiv.org/abs/2310.12610v2 | # Fracton gauge fields from higher dimensional gravity
###### Abstract
We show that the fractonic dipole-conserving algebra can be obtained as an Aristotelian (and pseudo-Carrollian) contraction of the Poincare algebra in one dimension higher. Such a contraction allows us to obtain fracton electrodynamics from a relativistic higher-dimensional theory upon dimensional reduction. The contraction procedure produces several scenarios including some of the theories already discussed in the literature. A curved space generalization is given, which is gauge invariant when the Riemann tensor of the background geometry is harmonic.
## 1 Introduction
Fracton phases of matter represent a remarkable class of quantum states with novel and intriguing properties, challenging conventional paradigms in condensed matter physics [1; 2; 3; 4; 5]. These exotic phases are characterized by their restricted mobility of excitations, leading to unconventional patterns of long-range entanglement and topological order [6; 7; 8]. Understanding and describing the nature of fracton phases has emerged as a forefront of research, first in the fields of condensed matter theory and more recently in high energy theory, promising groundbreaking insights into the behavior of quantum matter.
Gauge theories have been instrumental in describing a wide array of physical phenomena, from the fundamental forces of nature to condensed matter systems. Actually, to delve into the fascinating realm of the so-called gapless fracton phases, it is crucial to explore the role of symmetric gauge fields [9; 10; 11; 12; 13; 14; 5]. These gauge fields mediate the interactions between fractonic matter. Therefore, the inclusion of symmetric gauge fields enriches the theoretical framework, paving the way for an insightful analysis of fracton phases in various materials and scenarios. On the other hand, fractonic theories couple naturally to Aristotelian geometries [15; 16; 11; 12; 13], i.e., manifolds whose tangent space isometry group is given only by rotations and space-time translations, but no boosts as in the more familiar case of Riemann-Cartan geometry. Recently, generalizations of fracton electrodynamics defined on curved space have been constructed by gauging the Monopole Dipole Momentum Algebra (MDMA) [11; 12; 13].
The main results of this paper are that the dipole conserving symmetry group can be embedded in Poincare, and that symmetric gauge fields can be obtained from a relativistic
non-Einstein gravity theory in one dimension higher. The higher-dimensional theory is described by a Yang-Mills-like action with Poincare as gauge group. Nonetheless, the theory is assumed to be in a Higgs phase with symmetry breaking pattern \(\mathfrak{iso}(d+1,1)\to\mathfrak{so}(d,1)\). The fracton gauge fields are obtained after a particular limit defined by a Lie algebra contraction that connects the Poincare algebra with the MDMA. Such contraction is a suitable combination of pseudo-Carrollian and Aristotelian limits respectively. Additionally by dimensionally reducing the system we identify the extra dimension of the system with the internal \(U(1)\) associated to monopole transformations. Similarly, the dipole transformations in the dimensionally reduced theory is obtained from the spacetime boosts along the compactified spatial dimension.
The structure of this paper is as follows: In Section 2, we provide a brief overview of dipole conserving systems, highlighting their defining characteristics. In addition, we introduce symmetric gauge theories as the fields mediating the interactions between fractons, and discuss the incompatibility between the gauge principle and spacetime curvature. In Section 3 we define a novel contraction of the Poincare algebra that leads to the MDMA in one dimension less. Then, we define a higher-dimensional Riemann-squared gravitational theory, and after the algebra contraction we dimensionally reduce it. After doing so, we discuss the results and finish with some conclusions and comments on possible future developments in Section 4.
## 2 Symmetric gauge fields and dipole conservation
Before discussing dipole conserving systems, let us consider a spinless particle in \(d+1\) spatial dimensions with momentum \(P_{A}\), and angular momentum \(J_{AB}=x^{A}P_{B}-x^{B}P_{A}\). If the system is translational and rotational invariant \(P_{A}\) and \(J_{AB}\) will be constants of motion, which implies that
\[\dot{P}_{A}=0, \tag{1}\] \[\dot{x}^{A}\propto P_{A}. \tag{2}\]
Actually notice that the conservation of angular momentum does not introduce new constants of motion, instead it constrains the particle's motion by requiring the velocity to be parallel to the momentum. Now let us pick coordinates \(x^{A}=(x^{i},R\,z)\), interpret \(x^{i}\) as the coordinates of the physical space, \(z\) the coordinate of some extra dimension, and \(R\) a constant with dimensions of length. Then, we redefine the transverse momentum as \(Q\equiv RP_{z}\), and the transverse angular momentum \(Q^{i}\equiv RJ_{i\,z}\). After doing so, we zoom out the \(z\)-direction by sending \(R\to 0\). Therefore, the angular momenta become
\[J_{ij} =x^{i}P_{j}-x^{j}P_{i}, \tag{3}\] \[Q^{i} =Qx^{i}. \tag{4}\]
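Writing the intermediate step out explicitly (with \(P_{z}\) the momentum along the extra direction, so that \(J_{i\,z}=x^{i}P_{z}-Rz\,P_{i}\)), one finds
\[Q^{i}\equiv R\,J_{i\,z}=x^{i}\left(RP_{z}\right)-R^{2}z\,P_{i}=Q\,x^{i}+\mathcal{O}(R^{2}),\]
so that only the first term survives the \(R\to 0\) limit quoted in Eq. (4).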
From the lower dimensional perspective the transverse momentum takes the form of an internal charge, and the angular momentum \(Q^{i}\) corresponds to the dipole moment of that
charge \(Q\). For such a particle the conservation of all the charges imply
\[P_{i} =\text{constant}, \tag{5}\] \[x^{i} =\text{constant},\] (6) \[J_{ij} =0, \tag{7}\]
which is unusual since the particle is not allowed to move, however, its momentum is not constrained to be related to the velocity. Although a system with such properties seems to be dynamically trivial, we notice that motion is allowed once more than one particle are included. For example, let us consider two fractons with charges \(Q_{1}=q\), \(Q_{2}=-q\) such that \(Q=0\), and \(Q^{i}=q(x^{i}_{(1)}-x^{i}_{(2)})\) for such a system the conservation of momentum and dipole imply
\[P_{(1)i} =\frac{1}{2}\left(P_{i}+W_{i}(t)\right), x^{i}_{(1)} =X^{i}(t)+\frac{1}{2q}Q^{i} \tag{8}\] \[P_{(2)i} =\frac{1}{2}\left(P_{i}-W_{i}(t)\right), x^{i}_{(2)} =X^{i}(t)-\frac{1}{2q}Q^{i}. \tag{9}\]
Thus the actual dynamical variables for a dipole are the center of mass position \(X^{i}(t)\), and the relative momentum \(W_{i}(t)\) between the particles forming the dipole. If in addition we impose conservation of angular momentum we obtain the condition
\[\dot{X}^{[j}(t)P^{k]}=\frac{1}{2q}\dot{W}^{[j}(t)Q^{k]}, \tag{10}\]
which for \(d=3\) constraints the force \(\dot{W}^{i}\) as
\[\dot{X}^{i}(t) \equiv V^{i}(t), \tag{11}\] \[\dot{W}_{i}(t) =\gamma(t)Q^{i}-\frac{4q^{2}}{|\mathbf{Q}|^{2}}Q^{j}(V^{i}(t)P_{j }-V^{j}(t)P_{i})\,. \tag{12}\]
In fact, we can go beyond the single particle picture assuming locality and introducing the densities \(\varrho,p_{i}\) such that we can express the conserved charges as
\[Q =\int d^{d}x\,\varrho, Q^{i} =\int d^{d}x\,x^{i}\varrho, \tag{13}\] \[P_{i} =\int d^{d}x\,p_{i}, J_{ij} =\int d^{d}x\,\left(x^{i}\,p_{j}-x^{j}\,p_{i}\right). \tag{14}\]
With these definitions and requiring \(\dot{Q}=0\), \(\dot{P}_{i}=0\), we obtain the continuity equations
\[\partial_{0}\varrho+\partial_{i}j_{i} =0, \tag{15}\] \[\partial_{0}p_{j}+\partial_{i}\tau_{ij} =0. \tag{16}\]
On the other hand, dipole and angular momentum conservation \(\dot{Q}^{i}=0\), and \(\dot{J}_{ij}=0\) demands the constraints
\[j_{i} =\partial_{j}K_{ji}, \tag{17}\] \[\tau_{[ij]} =\partial_{k}L_{kij}, \tag{18}\]
with \(L_{ijk}=-L_{ikj}\), and \(K_{ij}\) the dipole current. However, without loss of generality the stress tensor can be improved such that the new stress tensor \(T_{ij}\) is symmetric. With such an improvement the conservation of angular momentum is automatic if momentum is conserved. Similarly, to guarantee conservation of both charge and dipole moment it is enough to satisfy the generalized continuity equation1
Footnote 1: Notice that the antisymmetric part of \(K_{ij}\) drops from Eq. (19)
\[\partial_{0}\varrho+\partial_{i}\partial_{j}K_{ij}=0. \tag{19}\]
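Indeed, assuming all currents fall off sufficiently fast at spatial infinity, Eq. (19) yields both conservation laws at once,
\[\dot{Q}=\int d^{d}x\,\partial_{0}\varrho=-\int d^{d}x\,\partial_{i}\partial_{j}K_{ij}=0,\qquad\dot{Q}^{k}=\int d^{d}x\,x^{k}\,\partial_{0}\varrho=-\int d^{d}x\,K_{ij}\,\partial_{i}\partial_{j}x^{k}=0,\]
where the last equality follows after integrating by parts twice and using \(\partial_{i}\partial_{j}x^{k}=0\).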
### Fracton Gauge Theory
In fact, the charge (dipole) conservation can be derived once the scalar \(\phi\), and symmetric tensor \(A_{ij}\) gauge fields are minimally coupled to the fracton current as
\[S\sim\int d^{d+1}x\left(\varrho\phi+K_{ij}A_{ij}\right), \tag{20}\]
with the gauge field transforming as
\[\delta A_{ij}=\partial_{i}\partial_{j}\varepsilon,\quad\delta\phi=-\dot{ \varepsilon}. \tag{21}\]
Actually, in a system with dynamical gauge fields, the corresponding generalized electrodynamic theory with electric and magnetic fields defined as [9]
\[F_{ijk}=\partial_{i}A_{jk}-\partial_{j}A_{ik},\quad F_{0ij}=\dot{A}_{ij}+ \partial_{i}\partial_{j}\phi, \tag{22}\]
with action
\[S=\frac{1}{2}\int d^{d+1}x\left(F_{0ij}F_{0ij}-\frac{1}{2}F_{ijk}F_{ijk} \right). \tag{23}\]
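The gauge invariance of the field strengths (22) under the transformation (21) amounts to the commuting of partial derivatives; a quick symbolic check (our own snippet, written for \(d=2\)) is:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
coords = (x, y)
eps = sp.Function('epsilon')(t, x, y)

# gauge variations of Eq. (21): delta A_ij = d_i d_j eps, delta phi = -d_t eps
dA = [[sp.diff(eps, coords[i], coords[j]) for j in range(2)] for i in range(2)]
dphi = -sp.diff(eps, t)

# induced variations of the field strengths of Eq. (22)
dF_mag = [sp.simplify(sp.diff(dA[j][k], coords[i]) - sp.diff(dA[i][k], coords[j]))
          for i in range(2) for j in range(2) for k in range(2)]
dF_ele = [sp.simplify(sp.diff(dA[i][j], t) + sp.diff(dphi, coords[i], coords[j]))
          for i in range(2) for j in range(2)]

assert all(v == 0 for v in dF_mag + dF_ele)
print("delta F_ijk = 0 and delta F_0ij = 0: the field strengths are gauge invariant")
```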
Notice that it is possible to construct a symmetric spacetime tensor \(A_{\mu\nu}\) with \(A_{0\mu}=-\partial_{\mu}\phi\), and transformation law \(\delta A_{\mu\nu}=\partial_{\mu}\partial_{\nu}\varepsilon\). The field strength for the enhanced model is defined as
\[F_{\mu\nu\rho}=\partial_{\mu}A_{\nu\rho}-\partial_{\nu}A_{\mu\rho}, \tag{24}\]
and the action (23) can be written as
\[S=\frac{1}{4}\int d^{d+1}xF_{\mu\nu\rho}F^{\mu\nu\rho}, \tag{25}\]
where indices are raised and lowered with the Minkowski metric \(\eta_{\mu\nu}=\text{diag}(-+\cdots+)\). Notice that such a construction possesses an accidental gauge symmetry \(\delta A_{\mu\nu}=\partial_{\mu}\beta_{\nu}\) that leaves the field strength invariant but does not respect the symmetry property of the gauge field. Actually we notice that the alternative gauge field \(\tilde{A}_{\mu\nu}=A_{\mu\nu}+\partial_{\mu}\phi\,\tau_{\nu}\) with \(\tau_{\nu}=\delta^{0}_{\nu}\) has field strength \(\tilde{F}_{\mu\nu\lambda}=F_{\mu\nu\lambda}\), and therefore the same equations of motion. It is important to emphasize that the constraint \(\tau^{\mu}A_{\mu\nu}=-\partial_{\nu}\phi\) explicitly breaks the apparent relativistic symmetry.
### Fractons on curved manifolds
In order to illustrate the tension between the fractonic gauge principle and curved spaces, let us consider a Minkowskian manifold with metric \(g_{\mu\nu}\) and a timelike vector \(\tau^{\mu}\). We can define a spatial metric by introducing a clock \(\tau_{\mu}\) as
\[h_{\mu\nu}=\tau_{\mu}\tau_{\nu}+g_{\mu\nu},\qquad\tau_{\mu}\tau^{\mu}=1,\quad h _{\mu\nu}\tau^{\nu}=0, \tag{26}\]
In addition, we introduce an Aristotelian covariant derivative with the torsion-free connection
\[\nabla_{\mu}\tau_{\nu}=0=\nabla_{\mu}h_{\nu\rho}\qquad\Gamma^{\mu}_{\nu\rho}= \tau^{\mu}\partial_{\nu}\tau_{\rho}+\frac{1}{2}h^{\mu\lambda}\left(\partial_{ \nu}h_{\lambda\rho}+\partial_{\rho}h_{\nu\lambda}-\partial_{\lambda}h_{\nu \rho}\right), \tag{27}\]
and \(\partial_{\mu}\tau_{\nu}-\partial_{\nu}\tau_{\mu}=0\). With these ingredients at hand, we consider the symmetric tensor gauge field \(A_{\mu\nu}\) of the previous section satisfying the covariant condition
\[\tau^{\nu}A_{\mu\nu}=-\partial_{\mu}\phi, \tag{28}\]
under gauge transformations we postulate the field \(A_{\mu\nu}\) transforms as2
Footnote 2: Notice that a covariant theory related to fractons for a symmetric rank-two gauge field with the same type of gauge transformation has been considered in [14; 17]. However the form of the field strength and therefore the action are different.
\[\delta A_{\mu\nu}=\nabla_{\mu}\nabla_{\nu}\epsilon, \tag{29}\]
and its corresponding field strength reads
\[F_{\mu\nu\lambda}=\nabla_{\mu}A_{\nu\lambda}-\nabla_{\nu}A_{\mu\lambda}. \tag{30}\]
The issue this construction has is that the field strength is not gauge invariant, in fact it can be shown that
\[\delta F_{\mu\nu\rho}=[\nabla_{\mu},\nabla_{\nu}]\partial_{\rho}\epsilon=-R^{ \alpha}\,_{\rho\mu\nu}\,\partial_{\alpha}\epsilon. \tag{31}\]
Therefore, a minimal extension of the action (25) can be
\[S=-\int d^{d+1}x\;\sqrt{|g|}\;\left(\frac{1}{2}F_{\mu\nu\lambda}F^{\mu\nu \lambda}-R^{\mu\nu\rho\sigma}A_{\mu\rho}A_{\nu\sigma}\right). \tag{32}\]
However, this action is not invariant in arbitrary backgrounds. In fact, it preserves gauge invariance for background spacetime with curvature satisfying
\[\nabla_{\mu}R^{\mu\nu\rho\sigma}=0. \tag{33}\]
In addition, notice that the theory can be reformulated using frame fields \(e^{a}_{\;\mu}\) such that \(h_{\mu\nu}=\delta_{ab}\,e^{a}_{\;\mu}e^{b}_{\;\nu}\), and satisfying \(\nabla_{\mu}e^{a}_{\;\nu}+\omega^{ab}_{\;\;\mu}e^{b}_{\;\nu}=0\) with \(\omega^{ab}_{\;\;\mu}\) the \(\mathfrak{so}(d)\) connection. Moreover, we define \(A^{a}_{\;\mu}=A_{\mu\nu}e^{a\nu}\), therefore the gauge field \(A_{\mu\nu}\) can be split as
\[A_{\mu\nu}=A^{a}_{\;\;\mu}e_{a\nu}-\partial_{\mu}\phi\,\tau_{\nu} \tag{34}\]
where we have used the constraint (2.28). Besides, the field \(A^{a}_{\ \ \mu}\) satisfies
\[A_{a\mu}\tau^{\mu}=-\partial_{\mu}\phi\,e_{a}^{\ \mu},\qquad A_{a\mu}e_{b}^{\ \mu}=A_{\mu\nu}e_{a}^{\ \mu}e_{b}^{\ \nu}\equiv A_{ab}, \tag{2.35}\]
and thus can be put in the form
\[A_{a\mu}=-\partial_{\nu}\phi\,e_{a}^{\ \nu}\tau_{\mu}+A_{ab}e_{\ \mu}^{b}. \tag{2.36}\]
Using the fact that the Riemann tensor constructed out of the Aristotelian connection (2.27) satisfies \(R^{\mu\nu}_{\ \ \rho\sigma}\tau_{\mu}=0\), and the field strength of \(A_{\mu\nu}\) satisfies \(F_{\mu\nu\rho}\tau^{\rho}=0\) the action (2.32) can be written in terms of \(A_{a\mu}\) as
\[S=-\int d^{d+1}x\,\sqrt{|g|}\,\bigg{[}\frac{1}{2}\tilde{F}^{a}_{\ \mu\nu}\tilde{F}_{a}^{\ \mu\nu}-R^{ab\mu\nu}A_{a\mu}A_{b\nu}\bigg{]}. \tag{2.37}\]
where
\[\tilde{F}^{a}_{\ \mu\nu}=F_{\mu\nu\rho}e^{a\rho}=D_{[\mu}A^{a}_{\ \nu]}, \tag{2.38}\]
and \(D_{\mu}\) is the (rotational) covariant derivative built up with \(\omega^{ab}_{\ \ \mu}\). The transformation law of \(A^{a}_{\ \mu}\) that is compatible with (2.29) reads
\[\delta A^{a}_{\ \mu}=D_{\mu}\left(e^{a\nu}\partial_{\nu}\epsilon\right). \tag{2.39}\]
Alternatively to the action (2.32), it is possible to introduce a Higgs mechanism for the dipole symmetry as shown in Ref. [11], by doing so a gauge invariant action can be obtained after introducing a Stueckelberg field \(\psi_{\alpha}\) transforming as \(\delta\psi_{\alpha}=\partial_{\alpha}\epsilon\) such that the new gauge field is gauge invariant
\[\bar{A}_{\mu\nu}=A_{\mu\nu}-\nabla_{\mu}\psi_{\nu} \tag{2.40}\]
and therefore its field strength
\[\bar{F}_{\mu\nu\rho}=\nabla_{\mu}\bar{A}_{\nu\rho}-\nabla_{\nu}\bar{A}_{\mu \rho}=F_{\mu\nu\rho}+R^{\alpha}_{\ \ \rho\mu\nu}\,\psi_{\alpha} \tag{2.41}\]
is gauge invariant, and no constraints on the background space-time are needed. Therefore, we define the action
\[S=-\frac{1}{2}\int d^{d+1}x\;\sqrt{-g}\;\bar{F}_{\mu\nu\lambda}\bar{F}^{\mu \nu\lambda}. \tag{2.42}\]
Since \(R^{\alpha}_{\ \mu\nu\rho}\tau_{\alpha}=0\), the longitudinal component of the Stuckelberg field \(\psi_{\mu}\tau^{\mu}\) does not appear in the action, therefore, it will be completely undetermined. In fact, if we fix \(\psi_{\mu}\) to be of the form
\[\psi_{\mu}=-\phi\tau_{\mu}+\psi^{a}e_{a\mu}, \tag{2.43}\]
and use Eq. (2.34) we can write the invariant gauge field as
\[\bar{A}_{\mu\nu}=\left(A^{a}_{\ \mu}-D_{\mu}\psi^{a}\right)e_{a\nu}. \tag{2.44}\]
With that choice for the Stuckelberg the action reduces to
\[S=-\frac{1}{2}\int d^{d+1}x\;\sqrt{-g}\;\left(\tilde{F}^{a}_{\ \mu\nu}-R^{ab}_{\ \ \mu\nu}\psi_{b}\right)\left(\tilde{F}_{a}^{\ \mu\nu}-R_{ac}^{\ \ \mu\nu}\psi^{c}\right), \tag{2.45}\]
which corresponds to the action obtained in [11] after gauging the dipole conserving symmetry group.
In the next section, we will embed the MDMA into the Poincare group in one extra dimension and show that the actions (32), (37), (45) can be obtained from a gravitational gauge theory upon dimensional reduction after applying an Aristotelian limit.
## 3 From Poincare gauge theory to fracton gauge fields
Following the intuition built in Sect. 2 we will embed the fracton algebra into the Poincare algebra \(\mathfrak{iso}(d+1,1)\) with the purpose of deriving the fractonic gauge theory as some limit of a gauge theory of gravity in one dimension higher.
### Fractonic Symmetry Algebra
In systems with dipole conservation the charge conservation cannot be described as a regular internal \(U(1)\), since the total value of the dipole moment in the system changes once a space translation is applied (see Eq. (13)). Actually, this property is consequence of the generators of space translations and the transformation generated by the dipole charge not commuting, and forming a non-Abelian symmetry algebra [18; 19]
\[[\mathbf{P}_{i},\mathbf{Q}^{j}]=\delta^{j}_{i}\mathbf{Q}, \tag{17}\]
where \(\mathbf{P}_{i}\), \(\mathbf{Q}^{j}\), \(\mathbf{Q}\) are the generators of translations, "dipole", and \(U(1)\) transformations. In addition, if the system posseses rotational invariance, the bracket Eq. (17) has to be supplemented with
\[[\mathbf{J}_{ij},\mathbf{Q}^{k}]=\delta^{k}_{[\![j}\mathbf{Q}_{i]\!]},\qquad[\mathbf{J}_{ij}, \mathbf{P}_{k}]=\delta_{k[\![j}\mathbf{P}_{i]\!]},\qquad[\mathbf{J}_{ij},\mathbf{J}_{kl}]= \delta_{[\![k}\mathbf{J}_{l]\!]j\!]}, \tag{18}\]
where \(\mathbf{J}_{ij}\) is the generator of the \(\mathfrak{so}(d)\) rotation group. We will refer to this algebra as the Monopole Dipole Momentum Algebra (MDMA).
The commutation relation Eq. (17) has important implications on the properties of fractons systems on curved space. In fact, as we discussed in the previous section spacetime curvature is in tension with the fractonic gauge transformations, and in particular the origin of the symmetry breaking has been argue to have the same origin as the translational invariance breaking in gauge theories of spacetime symmetry groups [11].
We also notice that MDMA is isomorphic to the Carroll algebra [20; 21]
\[\begin{array}{ll}[\mathbf{P}_{i},\mathbf{C}^{j}]=\delta^{j}_{i}\mathbf{H},&[\mathbf{J}_{ij},\mathbf{C}^{k}]=\delta^{k}_{[\![j}\mathbf{C}_{i]\!]},\\ [\mathbf{J}_{ij},\mathbf{P}_{k}]=\delta_{k[\![j}\mathbf{P}_{i]\!]},&[\mathbf{J}_{ij},\mathbf{J}_{kl }]=\delta_{[\![k}\mathbf{J}_{l]\!]j\!]},\end{array} \tag{19}\]
which requires reinterpreting the \(U(1)\) generator \(\mathbf{Q}\) as the Hamiltonian \(\mathbf{H}\) and the dipole generator \(\mathbf{Q}^{i}\) as the generator of Carrollian boosts \(\mathbf{C}^{i}\). The analogy between the dipole conserving group and the Carroll symmetry has been extensively used to construct fractonic models of particles and gauge fields [21; 22; 23]. A remarkable property of Carroll theories is that they correspond to the vanishing speed of light limit of the Poincare algebra [20], and that they are also characterized by immobile excitations [21; 23; 24].
The Carroll algebra can be obtained from the Poincare algebra \([\mathbf{J}_{\mu\nu},\mathbf{P}_{\rho}]=\eta_{\rho[\nu}\mathbf{P}_{\mu]}\), \([\mathbf{J}_{\mu\nu},\mathbf{J}_{\rho\sigma}]=\eta_{[\mu[\rho}\mathbf{J}_{\sigma]\nu]}\), where \(\mu=(0,i)\) and \(\eta_{\mu\nu}=\text{diag}(-,+,\cdots,+)\), by means of the following Lie algebra contraction [25]
\[\mathbf{P}_{0}=\frac{1}{c}\mathbf{H},\qquad\mathbf{J}_{0i}=\frac{1}{c}\mathbf{C}_{i}, \tag{3.4}\]
when the speed of light \(c\) vanishes. This suggests that the dipole conserving algebra can be obtained by means of a similar limiting procedure from Poincare. In the next section we show that this is indeed the case.
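The contraction can be checked explicitly in a finite-dimensional representation. The snippet below (our own illustration; generator normalizations and signs are conventions chosen here) builds the affine vector representation of the Poincare algebra and verifies that, with the rescaling (3.4), the bracket \([\mathbf{P}_{i},\mathbf{C}_{j}]\propto\delta_{ij}\mathbf{H}\) is \(c\)-independent while \([\mathbf{C}_{i},\mathbf{C}_{j}]=c^{2}\mathbf{J}_{ij}\) vanishes in the Carroll limit.

```python
import numpy as np

# Affine (vector) representation of iso(d,1) for d = 3, acting on (x^mu, 1).
d = 3
eta = np.diag([-1.0] + [1.0] * d)
dim = d + 2

def J(mu, nu):
    M = np.zeros((dim, dim))
    M[mu, :d + 1] += eta[nu]
    M[nu, :d + 1] -= eta[mu]
    return M

def P(mu):
    M = np.zeros((dim, dim))
    M[mu, -1] = 1.0
    return M

comm = lambda A, B: A @ B - B @ A

for c in (1.0, 1e-2, 1e-4):
    H = c * P(0)                                  # P_0 = H / c
    C = [-c * J(0, i) for i in range(1, d + 1)]   # J_{0i} = C_i / c (sign is our convention)
    exact = all(np.allclose(comm(P(i + 1), C[j]), (i == j) * H)
                for i in range(d) for j in range(d))
    boosts = max(np.abs(comm(C[i], C[j])).max() for i in range(d) for j in range(d))
    print(f"c={c:8.0e}  [P_i,C_j]=delta_ij H holds: {exact}   max|[C_i,C_j]|={boosts:.1e}")
```

The translation-boost bracket survives unchanged, while the boost commutators die off as \(c^{2}\), which is the algebraic content of the contraction.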
### Poincare and the fracton symmetry algebra
In this section we will show how the MDMA in \(d+1\) dimensions can be obtained after contracting the Poincare algebra \(\mathfrak{iso}(d+1,1)\) in \(d+2\) dimensions, given by
\[[\mathbf{\mathcal{J}}_{\hat{A}\hat{B}},\mathbf{\mathcal{P}}_{\hat{C}}] =\eta_{\hat{C}[\hat{B}}\mathbf{\mathcal{P}}_{\hat{A}]}, [\mathbf{\mathcal{J}}_{\hat{A}\hat{B}},\mathbf{\mathcal{J}}_{\hat{C}\hat{D}}] =\eta_{[\hat{A}[\hat{C}}\mathbf{\mathcal{J}}_{\hat{D}]\hat{B}]}, \tag{3.5}\]
where \(\eta_{\hat{A}\hat{B}}=\text{diag}(-+\cdots+)\) is the Minkowski metric and \(\hat{A}=0,1,\ldots,d,d+1\equiv n\). The resemblance between the dipole conserving algebra and the Carroll algebra suggests that a rescaling analogous to (3.4) will allow us to obtain the former as a Lie algebra contraction of Poincare. However, temporal translations are also present in fractonic systems, where the generator \(\mathbf{H}\) commutes with all the generators of the dipole conserving algebra. Thus, instead of using time as the longitudinal direction in the contraction, we can use a spatial direction \(\hat{A}=n\) to define a pseudo-Carrollian contraction3 similar to (3.4), i.e.
Footnote 3: Similarly, a pseudo-Galilean contraction using a spatial direction instead of time as the longitudinal direction has been considered in [26].
\[\mathbf{\mathcal{P}}_{n}=\frac{1}{\sigma}\mathbf{Q}\qquad\mathbf{\mathcal{J}}_{An}=\frac{ 1}{\sigma}\mathbf{Q}_{A},\qquad A=(0,a),\qquad\sigma\to 0. \tag{3.6}\]
The extra dimension will be interpreted as the internal direction associated with the conservation of the monopole charge. However, the resulting symmetry will still be relativistic, which contrasts with the absence of boost symmetry in the dipole conserving algebra and the Aristotelian character of fractonic systems. Thus, one should supplement (3.6) with a contraction of Aristotelian nature that eliminates the transformations connecting space and time translations. This can be achieved through the rescaling
\[\mathbf{\mathcal{J}}_{0A^{\prime}}=\frac{1}{\varepsilon}\mathbf{G}_{A^{\prime}}, \qquad A^{\prime}=(a,n),\qquad\varepsilon\to 0. \tag{3.7}\]
Therefore, we select the time and spatial direction \((0,n)\) and split the indices as \(\hat{A}=(0,a,n)\), where \(a=1,\ldots,d\). The commutation relations (3.5) then take the form
\[[\mathbf{\mathcal{J}}_{0n},\mathbf{\mathcal{P}}_{n}] =\mathbf{\mathcal{P}}_{0}, \tag{3.8a}\] \[[\mathbf{\mathcal{J}}_{an},\mathbf{\mathcal{P}}_{n}] =\mathbf{\mathcal{P}}_{a},\] (3.8b) \[[\mathbf{\mathcal{J}}_{0n},\mathbf{\mathcal{J}}_{an}] =-\mathbf{\mathcal{J}}_{0a},\] (3.8c) \[[\mathbf{\mathcal{J}}_{an},\mathbf{\mathcal{J}}_{bn}] =-\mathbf{\mathcal{J}}_{ab}, \tag{3.8d}\]
\[[\boldsymbol{\mathcal{J}}_{0a},\boldsymbol{\mathcal{J}}_{0b}] =\boldsymbol{\mathcal{J}}_{ab}, \tag{3.8e}\] \[[\boldsymbol{\mathcal{J}}_{ab},\boldsymbol{\mathcal{J}}_{0c}] =\delta_{c[b}\boldsymbol{\mathcal{J}}_{0a]},\] (3.8f) \[[\boldsymbol{\mathcal{J}}_{ab},\boldsymbol{\mathcal{J}}_{cd}] =\delta_{[a[c}\boldsymbol{\mathcal{J}}_{d]b]},\] (3.8g) \[[\boldsymbol{\mathcal{J}}_{0a},\boldsymbol{\mathcal{J}}_{0n}] =\boldsymbol{\mathcal{J}}_{an},\] (3.8h) \[[\boldsymbol{\mathcal{J}}_{0a},\boldsymbol{\mathcal{J}}_{bn}] =\delta_{ab}\boldsymbol{\mathcal{J}}_{0n}, \tag{3.8i}\]
Combining the Aristotelian contraction (3.7) with the pseudo-Carrollian contraction (3.6) leads us to define the following rescaling of the elements of the higher-dimensional Poincare algebra
\[\boldsymbol{\mathcal{P}}_{0}=\boldsymbol{H}\,,\quad\boldsymbol{\mathcal{P}}_{a}=\boldsymbol{P}_{a}\,,\quad\boldsymbol{\mathcal{P}}_{n}=\frac{1}{\sigma}\boldsymbol{Q}\,,\quad\boldsymbol{\mathcal{J}}_{an}=\frac{1}{\sigma}\boldsymbol{Q}_{a},\quad\boldsymbol{\mathcal{J}}_{ab}=\boldsymbol{J}_{ab}, \tag{3.9}\] \[\boldsymbol{\mathcal{J}}_{0n}=\frac{1}{\varepsilon\sigma}\boldsymbol{K},\quad\boldsymbol{\mathcal{J}}_{0a}=\frac{1}{\varepsilon}\boldsymbol{G}_{a},\]
where the generators in the first line of Eq. (3.9) are to be interpreted as spacetime translations, \(U(1)\), dipole transformations, and rotations, respectively. Taking the limit \(\varepsilon\to 0\), we find that the set of generators \(\{\boldsymbol{J}_{ab},\boldsymbol{H},\boldsymbol{P}_{a},\boldsymbol{Q},\boldsymbol{Q}_{a}\}\) closes into a sub-algebra defined by the following non-vanishing commutators
\[[\boldsymbol{P}_{a},\boldsymbol{Q}_{b}]=\delta_{ab}\boldsymbol{Q}, \tag{3.10a}\] \[[\boldsymbol{J}_{ab},\boldsymbol{P}_{c}]=\delta_{c[b}\boldsymbol{P}_{a]},\] (3.10b) \[[\boldsymbol{Q}_{a},\boldsymbol{Q}]=\sigma^{2}\boldsymbol{P}_{a},\] (3.10c) \[[\boldsymbol{J}_{ab},\boldsymbol{Q}_{c}]=\delta_{c[b}\boldsymbol{Q}_{a]}, \tag{3.10d}\]
whereas the generators \(\{\boldsymbol{G}_{a},\boldsymbol{K}\}\) form an ideal with commutation relations
\[[\boldsymbol{J}_{ab},\boldsymbol{G}_{c}]=\delta_{c[b}\boldsymbol{G}_{a]},\qquad[\boldsymbol{G}_{a},\boldsymbol{Q}_{b}]=\delta_{ab}\boldsymbol{K},\qquad[\boldsymbol{Q}_{a},\boldsymbol{K}]=\sigma^{2}\boldsymbol{G}_{a}. \tag{3.11}\]
The sub-algebra (3.10) is an extension of the dipole symmetry algebra defined by (3.1) and (3.2), and reduces to it in the limit \(\sigma\to 0\). In this limit, the ideal (3.11) becomes trivial, as it reduces to \(\boldsymbol{G}_{a}\) transforming as an \(\mathfrak{so}(d)\) vector. Contrary to standard contractions, where the parameters are removed from the algebra, we will take the strict limit \(\varepsilon\to 0\) but keep the leading terms in \(\sigma\), for reasons that will become clearer below.
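As a quick consistency check (this step is ours and is not part of the original text), the first and third brackets in (3.10) follow directly from the rescaling (3.9) and the Poincare commutators (3.5), with no limit required:
\[[\boldsymbol{P}_{a},\boldsymbol{Q}_{b}]=\sigma\,[\boldsymbol{\mathcal{P}}_{a},\boldsymbol{\mathcal{J}}_{bn}]=\sigma\,\delta_{ab}\,\boldsymbol{\mathcal{P}}_{n}=\delta_{ab}\,\boldsymbol{Q},\qquad[\boldsymbol{Q}_{a},\boldsymbol{Q}]=\sigma^{2}\,[\boldsymbol{\mathcal{J}}_{an},\boldsymbol{\mathcal{P}}_{n}]=\sigma^{2}\,\boldsymbol{P}_{a}.\]
The short SymPy sketch below verifies these two relations in the vector (affine-matrix) representation of \(\mathfrak{iso}(d+1,1)\); the choice \(d=2\) and the matrix realization are made purely for illustration and are not taken from the paper.

```python
# Illustrative check (not from the paper): verify (3.10a) and (3.10c) in the
# vector representation of iso(d+1,1) for d = 2, hatted directions (0, 1, 2, n).
import sympy as sp

d = 2
dim = d + 2                               # number of hatted directions
n = dim - 1                               # the extra, "internal" direction
eta = sp.diag(-1, *[1] * (d + 1))         # mostly-plus Minkowski metric
sigma = sp.symbols('sigma', positive=True)

def J(A, B):
    """Lorentz generator (J_AB)^C_D = delta^C_A eta_BD - delta^C_B eta_AD."""
    m = sp.zeros(dim + 1, dim + 1)        # affine (dim+1)x(dim+1) matrices
    for C in range(dim):
        for D in range(dim):
            m[C, D] = (sp.KroneckerDelta(C, A) * eta[B, D]
                       - sp.KroneckerDelta(C, B) * eta[A, D])
    return m

def P(A):
    """Translation generator: a single unit entry in the affine column."""
    m = sp.zeros(dim + 1, dim + 1)
    m[A, dim] = 1
    return m

def comm(X, Y):
    return X * Y - Y * X

Q = sigma * P(n)                                      # Q   = sigma * P_n   (Eq. (3.9))
Qa = {a: sigma * J(a, n) for a in range(1, d + 1)}    # Q_a = sigma * J_an  (Eq. (3.9))

for a in range(1, d + 1):
    for b in range(1, d + 1):
        assert comm(P(a), Qa[b]) == sp.KroneckerDelta(a, b) * Q    # (3.10a)
    assert comm(Qa[a], Q) == sigma**2 * P(a)                        # (3.10c)
print("relations (3.10a) and (3.10c) hold in this representation")
```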
### Poincare Gauge Theory
After showing that the MDMA can be obtained by contracting the Poincare algebra, we proceed by constructing a relativistic gravity theory in \(d+2\) dimensions with Poincare as gauge group, and then dimensionally reducing it. This analysis has the purpose of understanding the puzzling relation between fracton phases of matter and gravity theories. As we will show, starting from a fully boost invariant action in higher dimensions is not enough to recover an invariant fracton gauge theory. This is related to the fact that in the strict \(\sigma\to 0\) limit the lower dimensional theory is purely gravitational, without fracton gauge fields. Therefore, the gauge group will be realized non-linearly by adding a Higgs field \(\Psi^{A}\) associated with the transverse boosts \(\boldsymbol{\mathcal{J}}_{An}\), where un-hatted capital indices take values
\(A=0,\ldots,d\). Therefore, the non-linear connection one-form \(\boldsymbol{\mathcal{A}}=\boldsymbol{\mathcal{A}}_{\hat{\mu}}dx^{\hat{\mu}}\), \(\hat{\mu}=0,\ldots,d+1\), taking values on the \(\mathfrak{iso}(d+1,1)\) algebra Eq. (3.5) reads
\[\boldsymbol{\mathcal{A}}=E^{\hat{A}}\boldsymbol{\mathcal{P}}_{\hat{A}}+\frac{1} {2}\Omega^{\hat{A}\hat{B}}\boldsymbol{\mathcal{J}}_{\hat{A}\hat{B}}, \tag{3.12}\]
where \(E^{\hat{A}}_{\ \mu}\) and \(\Omega^{\hat{A}\hat{B}}_{\ \mu}\) are the non-linear vielbeins and spin-connection, respectively. On the other hand, the corresponding curvature reads
\[\boldsymbol{\mathcal{F}}=d\boldsymbol{\mathcal{A}}+\boldsymbol{\mathcal{A}} \wedge\boldsymbol{\mathcal{A}}=\mathcal{T}^{\hat{A}}\boldsymbol{\mathcal{P}}_ {\hat{A}}+\frac{1}{2}\mathcal{R}^{\hat{A}\hat{B}}\boldsymbol{\mathcal{J}}_{ \hat{A}\hat{B}}, \tag{3.13}\]
where \(\mathcal{T}^{\hat{A}}\) and \(\mathcal{R}^{\hat{A}\hat{B}}\) are the torsion and the curvature forms
\[\mathcal{T}^{\hat{A}}=dE^{\hat{A}}+\Omega^{\hat{A}}_{\ \hat{B}}\wedge E^{\hat{B}}, \qquad\mathcal{R}^{\hat{A}\hat{B}}=d\Omega^{\hat{A}\hat{B}}+\Omega^{\hat{A}}_ {\ \hat{C}}\wedge\Omega^{\hat{C}\hat{B}}. \tag{3.14}\]
We choose these fields to be such that, under a Poincare transformation with gauge parameter \(\varepsilon=\Upsilon^{\hat{A}}\boldsymbol{\mathcal{P}}_{\hat{A}}+\frac{1}{2}\Theta^{\hat{A}\hat{B}}\boldsymbol{\mathcal{J}}_{\hat{A}\hat{B}}\), they transform only under the lower-dimensional Lorentz group \(\mathfrak{so}(d,1)\) with parameters \(\Theta^{AB}\). Including diffeomorphisms with parameter \(\Xi^{\hat{\mu}}\) as well, we obtain the set of transformations
\[\begin{split}&\delta E^{n}=\mathcal{L}_{\Xi}E^{n},\\ &\delta E^{A}=\mathcal{L}_{\Xi}E^{A}-\Theta^{A}_{\ B}E^{B}\\ &\delta\Omega^{An}=\mathcal{L}_{\Xi}\Omega^{An}-\Theta^{A}_{\ B}\Omega^{Bn},\\ &\delta\Omega^{AB}=\mathcal{L}_{\Xi}\Omega^{AB}+d\Theta^{AB}- \Omega^{[A}_{\ \ \ \ C}\Theta^{B]C},\end{split} \tag{3.15}\]
where \(\mathcal{L}\) stands for the Lie derivative. Actually, \(\boldsymbol{\mathcal{A}}\) is related to the \(\mathfrak{iso}(d+1,1)\) gauge fields
\[\boldsymbol{\tilde{\mathcal{A}}}=\tilde{E}^{\hat{A}}\boldsymbol{\mathcal{P}}_ {\hat{A}}+\frac{1}{2}\tilde{\Omega}^{\hat{A}\hat{B}}\boldsymbol{\mathcal{J}}_ {\hat{A}\hat{B}}, \tag{3.16}\]
by a gauge transformation with an element \(e^{\Psi^{A}\mathcal{J}_{An}}e^{\Phi^{A}\boldsymbol{\mathcal{P}}_{\hat{A}}}\) belonging to the coset \(\mathfrak{iso}(d+1,1)/\mathfrak{so}(d,1)\) (for details, see [27]),
\[\boldsymbol{\mathcal{A}}=e^{\Psi^{A}\mathcal{J}_{An}}e^{\Phi^{\hat{A}} \boldsymbol{\mathcal{P}}_{\hat{A}}}\left(\boldsymbol{\tilde{\mathcal{A}}}+d \right)e^{-\Phi^{\hat{B}}\boldsymbol{\mathcal{P}}_{\hat{B}}}e^{-\Psi^{B} \mathcal{J}_{Bn}}. \tag{3.17}\]
In fact, the vielbein and spin connection transform under the \(\mathfrak{iso}(d+1,1)\) Poincare transformations in the standard way, i.e.
\[\begin{split}&\delta\tilde{E}^{\hat{A}}=d\Upsilon^{\hat{A}}+\tilde{\Omega}^{\hat{A}}_{\ \hat{B}}\Upsilon^{\hat{B}}-\Theta^{\hat{A}}_{\ \hat{B}}\tilde{E}^{\hat{B}},\\ &\delta\tilde{\Omega}^{\hat{A}\hat{B}}=d\Theta^{\hat{A}\hat{B}}-\tilde{\Omega}^{[\hat{A}}_{\ \ \hat{C}}\Theta^{\hat{B}]\hat{C}}.\end{split} \tag{3.18}\]
In addition, the Stueckelberg fields \(\Phi^{\hat{A}}\) and \(\Psi^{A}\) obey the transformation rules
\[\delta\Phi^{\hat{A}}=\Upsilon^{\hat{A}}-\Theta^{\hat{A}}_{\ \hat{B}}\Phi^{\hat{B}},\quad\delta\Psi^{A}=\Theta^{An}-\Theta^{A}_{\ B}\Psi^{B}. \tag{3.19}\]
In terms of these fields, the non-linear gauge fields can be expressed as
\[\begin{split}& E^{n}=E^{\prime n}-E^{\prime A}\Psi_{A}-\frac{1}{2}E^ {\prime n}\Psi^{A}\Psi_{A}+\frac{1}{3!}E^{\prime A}\Psi_{A}\Psi^{B}\Psi_{B}+O \left(\Psi^{4}\right),\\ & E^{A}=E^{\prime A}+E^{\prime n}\Psi^{A}-\frac{1}{2}E^{\prime B} \Psi_{B}\Psi^{A}-\frac{1}{3!}E^{\prime n}\Psi^{A}\Psi^{B}\Psi_{B}+O\left(\Psi^ {4}\right),\\ &\Omega^{An}=\tilde{\Omega}^{An}-d\Psi^{A}-\tilde{\Omega}^{A}_{ \phantom{A}B}\Psi^{B}+\frac{1}{2}\Psi_{B}\Psi^{[A}\tilde{\Omega}^{B]n}+\frac{1} {3!}\tilde{\Omega}^{A}_{\phantom{A}B}\Psi^{B}\Psi^{C}\Psi_{C}+O\left(\Psi^{4} \right),\\ &\Omega^{AB}=\tilde{\Omega}^{AB}-\Psi^{[A}\tilde{\Omega}^{B]n}+ \frac{1}{2}\Psi^{[A}\tilde{\Omega}^{B]}_{\phantom{A}C}\Psi^{C}+\frac{1}{3!} \Psi^{[A}\tilde{\Omega}^{B]n}\Psi^{C}\Psi_{C}+O\left(\Psi^{4}\right).\end{split} \tag{3.20}\]
where we have defined the translational-invariant vielbein
\[E^{\prime\hat{A}}=\tilde{E}^{\hat{A}}-d\Phi^{\hat{A}}-\tilde{\Omega}^{\hat{A}} _{\phantom{A}\hat{B}}\Phi^{\hat{B}}. \tag{3.21}\]
Notice that by construction the fields \(E^{\hat{A}}\) are invertible. In fact, we define the inverse vielbeins \(E^{\hat{\mu}}_{\ \hat{A}}\) satisfying \(E^{\hat{\mu}}_{\ \hat{A}}E^{\hat{A}}_{\ \hat{\nu}}=\delta^{\hat{\mu}}_{\ \hat{\nu}}\) and \(E^{\hat{A}}_{\ \hat{\mu}}E^{\hat{\mu}}_{\ \hat{B}}=\delta^{\hat{A}}_{\ \hat{B}}\). In addition, we can define the space-time metric and its inverse.
\[G_{\hat{\mu}\hat{\nu}}=E^{\hat{A}}_{\ \hat{\mu}}\eta_{\hat{A}\hat{B}}E^{\hat{B}}_{\ \hat{\nu}},\qquad G^{\hat{\mu}\hat{\nu}}=E^{\hat{\mu}}_{\ \hat{A}}\eta^{\hat{A}\hat{B}}E^{\hat{\nu}}_{\ \hat{B}}. \tag{3.22}\]
It is also convenient to introduce an affine connection \(\Gamma^{\hat{\rho}}_{\phantom{A}\hat{\mu}\hat{\nu}}\) by means of the vielbein postulate
\[\hat{\nabla}_{\hat{\mu}}E^{\hat{A}}_{\phantom{A}\hat{\nu}}=\partial_{\hat{\mu }}E^{\hat{A}}_{\phantom{A}\hat{\nu}}+\Omega^{\hat{A}}_{\phantom{A}\hat{B}\hat{ \mu}}E^{\hat{B}}_{\phantom{A}\hat{\nu}}-\Gamma^{\hat{\rho}}_{\phantom{A}\hat{ \mu}\hat{\nu}}E^{\hat{A}}_{\phantom{A}\hat{\rho}}=0. \tag{3.23}\]
In terms of this connection, the components of the torsion and the curvature read
\[\begin{split}&\mathcal{T}^{\hat{\rho}}_{\ \hat{\mu}\hat{\nu}}=E^{\hat{\rho}}_{\ \hat{A}}\mathcal{T}^{\hat{A}}_{\ \hat{\mu}\hat{\nu}}=\Gamma^{\hat{\rho}}_{\ \hat{\mu}\hat{\nu}}-\Gamma^{\hat{\rho}}_{\ \hat{\nu}\hat{\mu}},\\ &\mathcal{R}^{\hat{\rho}}_{\ \hat{\sigma}\hat{\mu}\hat{\nu}}=E^{\hat{\rho}}_{\ \hat{A}}E_{\hat{\sigma}\hat{B}}\mathcal{R}^{\hat{A}\hat{B}}_{\ \ \hat{\mu}\hat{\nu}}=\partial_{\hat{\mu}}\Gamma^{\hat{\rho}}_{\ \hat{\nu}\hat{\sigma}}-\partial_{\hat{\nu}}\Gamma^{\hat{\rho}}_{\ \hat{\mu}\hat{\sigma}}+\Gamma^{\hat{\rho}}_{\ \hat{\mu}\hat{\gamma}}\Gamma^{\hat{\gamma}}_{\ \hat{\nu}\hat{\sigma}}-\Gamma^{\hat{\rho}}_{\ \hat{\nu}\hat{\gamma}}\Gamma^{\hat{\gamma}}_{\ \hat{\mu}\hat{\sigma}}.\end{split} \tag{3.24}\]
Finally, with these ingredients we define the volume form
\[{}^{*}1=\frac{1}{(d+2)!}\epsilon_{\hat{A}_{0}\ldots\hat{A}_{d+1}}E^{\hat{A}_{0} }\wedge\ldots\wedge E^{\hat{A}_{d+1}}, \tag{3.25}\]
and a Hodge dual operation that we can use to define the Yang-Mills-like action
\[S=-\frac{1}{2}\int\langle*\boldsymbol{\mathcal{F}}\wedge\boldsymbol{\mathcal{F}} \rangle\,, \tag{3.26}\]
where \(\langle\cdots\rangle\) stands for an invariant metric on the \(\mathfrak{iso}(d+1,1)\) algebra in \(d+2\) dimensions. However, note that the Higgsing of the theory allows us to define a bi-linear form that is invariant only under the action of the unbroken subgroup \(SO(d,1)\). With that criterion, the most general choice is
\[\begin{split}&\langle\boldsymbol{\mathcal{J}}_{AB}\boldsymbol{ \mathcal{J}}_{CD}\rangle=\hat{\alpha}_{0}\left(\eta_{AC}\eta_{BD}-\eta_{AD}\eta_ {BC}\right),&\langle\boldsymbol{\mathcal{P}}_{n}\boldsymbol{ \mathcal{P}}_{n}\rangle=\hat{\alpha}_{1},\\ &\langle\boldsymbol{\mathcal{J}}_{An}\boldsymbol{\mathcal{J}}_{ Bn}\rangle=\hat{\alpha}_{2}\eta_{AB},&\langle\boldsymbol{\mathcal{P}}_{A} \boldsymbol{\mathcal{P}}_{B}\rangle=\hat{\alpha}_{3}\eta_{AB}.\end{split} \tag{3.27}\]
### Aristotelian contraction
As previously pointed out our goal is to implement the contraction (3.9) of the Poincare algebra on the gravitational theory described by Eq. (3.26). This can be achieved after rescaling the gauge fields as
\[\Omega^{ab} =\omega^{ab}, \tag{3.28a}\] \[\Omega^{an} =\sigma\,v^{a},\] (3.28b) \[\Omega^{0a} =\varepsilon\,\mu^{a},\] (3.28c) \[\Omega^{0n} =\varepsilon\sigma\,u, \tag{3.28d}\]
Actually, in the limit \(\varepsilon\to 0\), the connection shown in Eq. (3.12) reduces to a non-Lorentzian one taking values on the elements of the algebra defined by (3.10) and (3.11), which can be written as
\[\mathbf{\mathcal{A}}=\mathbf{A}+\mathbf{B}, \tag{3.29}\]
where
\[\mathbf{A}=\tau\,\mathbf{H}+e^{a}\mathbf{P}_{a}+\frac{1}{2}\omega^{ab}\mathbf{J}_{ab}+v^{a}\bm {Q}_{a}+\rho\,\mathbf{Q},\qquad\mathbf{B}=\mu^{a}\mathbf{G}_{a}+u\mathbf{K}. \tag{3.30}\]
The corresponding contracted curvature is given by
\[\mathbf{\mathcal{F}}=\mathbf{F}+\mathcal{D}_{\mathbf{A}}\mathbf{B}, \tag{3.31}\]
where \(\mathbf{F}=d\mathbf{A}+\mathbf{A}\wedge\mathbf{A}\) is the curvature associated to the connection \(\mathbf{A}\), which reads
\[\mathbf{F}=d\tau\,\mathbf{H}+\left(T^{a}+\sigma^{2}v^{a}\wedge\rho\right)\mathbf{P}_{a}+ \frac{1}{2}\left(R^{ab}-\sigma^{2}v^{a}\wedge v^{b}\right)\mathbf{J}_{ab}+F^{a}\, \mathbf{Q}_{a}+f\,\mathbf{Q}, \tag{3.32}\]
with
\[T^{a}=De^{a}, \tag{3.33a}\] \[F^{a}=Dv^{a},\] (3.33b) \[R^{ab}=d\omega^{ab}+\omega^{a}{}_{c}\wedge\omega^{cb},\] (3.33c) \[f=d\rho+e^{a}\wedge v_{a}, \tag{3.33d}\]
and \(D\) is the covariant exterior derivative with respect to \(\omega^{ab}\). The term \(\mathcal{D}_{\mathbf{A}}\mathbf{B}\), on the other hand, is the covariant derivative of \(\mathbf{B}\) with respect to \(\mathbf{A}\).
\[\mathcal{D}_{\mathbf{A}}\mathbf{B}=d\mathbf{B}+[\mathbf{A},\mathbf{B}]=\left(D\mu^{a}+\sigma^{2}v ^{a}\wedge u\right)\mathbf{G}^{a}+\left(du+\mu^{a}\wedge v_{a}\right)\mathbf{K}. \tag{3.34}\]
Notice also that in the limit \(\varepsilon\to 0\), the nonvanishing components of the invariant tensor (3.27) are
\[\begin{array}{ll}\langle\mathbf{J}_{ab}\mathbf{J}_{cd}\rangle=\hat{\alpha}_{0}\left( \delta_{ac}\delta_{bd}-\delta_{ad}\delta_{bc}\right),&\langle\mathbf{Q}_{a}\mathbf{Q} _{b}\rangle=\sigma^{2}\hat{\alpha}_{2}\delta_{ab},&\langle\mathbf{Q}\mathbf{Q}\rangle =\sigma^{2}\hat{\alpha}_{1}\\ \langle\mathbf{P}_{a}\mathbf{P}_{b}\rangle=\hat{\alpha}_{3}\delta_{ab},&\langle\mathbf{H} \mathbf{H}\rangle=-\hat{\alpha}_{3},\end{array} \tag{3.35}\]
whereas the components \(\langle\mathbf{G}_{a}\mathbf{G}_{b}\rangle\) and \(\langle\mathbf{K}\mathbf{K}\rangle\) vanish. This means that the gauge fields \(\mu^{a}\) and \(u\) entering \(\mathbf{B}\) decouple from the rest of the gauge fields in the Aristotelian limit and do not appear in the action (3.26) after the contraction. Thus, for simplicity we remove the connection \(\mathbf{B}\) from the analysis and consider only the connection \(\mathbf{A}\).
The transformations in Eqs. (3.15) lead to the following symmetry transformations for the gauge fields in Eqs. (3.30) when \(\varepsilon\to 0\)
\[\delta\tau=\mathcal{L}_{\Xi}\tau, \tag{3.36a}\] \[\delta e^{a}=\mathcal{L}_{\Xi}e^{a}-\theta^{a}_{\ b}e^{b},\] (3.36b) \[\delta\rho=\mathcal{L}_{\Xi}\rho, \tag{3.36c}\]
where the gauge parameters are related to the ones in Eqs. (3.15) by \(\Theta^{ab}=\theta^{ab}\). From the contracted algebra perspective these transformations can be obtained from a gauge transformation of the connection defined in Eq. (3.30) as, \(\delta\mathbf{A}=\mathcal{L}_{\xi}\mathbf{A}+d\mathbf{\lambda}+[\mathbf{A},\mathbf{\lambda}]\), with \(\mathbf{\lambda}=\frac{1}{2}\theta^{ab}\mathbf{J}_{ab}\).
In the Aristotelian limit \(\varepsilon\to 0\), the linear connection (3.16) can also be decomposed in the form (3.29) with
\[\tilde{\mathbf{A}}=\tilde{\tau}\,\mathbf{H}+\tilde{e}^{a}\mathbf{P}_{a}+\frac{1}{2}\tilde{ \omega}^{ab}\mathbf{J}_{ab}+\tilde{v}^{a}\mathbf{Q}_{a}+\tilde{\rho}\,\mathbf{Q}. \tag{3.37}\]
Using (3.21) we can define the translational-invariant vielbein
\[\lambda =\tilde{\tau}-d\phi \tag{3.38}\] \[h^{a} =\tilde{e}^{a}-D\phi^{a}-\sigma^{2}v^{a}\varphi,\] \[a =\tilde{\rho}-d\varphi+v_{a}\phi^{a},\]
where we have defined the following rescaling for the field \(\Phi^{\hat{A}}\)
\[\Phi^{n}=\sigma\varphi,\quad\Phi^{0}=\phi,\quad\Phi^{a}=\phi^{a}. \tag{3.39}\]
Similarly, using (3.20) and rescaling \(\Psi^{A}\) in the form
\[\Psi^{0}=\varepsilon\sigma\psi,\qquad\Psi^{a}=\sigma\psi^{a}, \tag{3.40}\]
the Aristotelian non-linear fields can be expressed in terms of the linear ones as
\[\tau =\lambda \tag{3.41}\] \[e^{a} =h^{a}+\sigma^{2}a\psi^{a}-\frac{\sigma^{2}}{2}h^{b}\psi_{b}\psi ^{a}-\frac{\sigma^{4}}{3!}a\psi^{a}\psi^{b}\psi_{b}+O\left(\psi^{4}\right),\] \[\rho =a-h^{a}\psi_{a}-\frac{\sigma^{2}}{2}a\psi^{a}\psi_{a}+\frac{ \sigma^{2}}{3!}h^{a}\psi_{a}\psi^{b}\psi_{b}+O\left(\psi^{4}\right),\] \[v^{a} =\tilde{v}^{a}-\tilde{D}\psi^{a}+\frac{\sigma^{2}}{2}\psi_{b}\psi ^{[a}\tilde{v}^{b]}+\frac{\sigma^{2}}{3!}\tilde{\omega}^{a}_{\ b}\psi^{b}\psi^{c} \psi_{c}+O\left(\psi^{4}\right),\] \[\omega^{ab} =\tilde{\omega}^{ab}-\sigma^{2}\psi^{[a}\tilde{v}^{b]}+\frac{ \sigma^{2}}{2}\psi^{[a}\tilde{\omega}^{b]}_{\ c}\psi^{c}+\frac{\sigma^{4}}{3!} \psi^{[a}\tilde{v}^{b]}\psi^{c}\psi_{c}+O\left(\psi^{4}\right),\]
where the fields \(\lambda\), \(h^{a}\), \(a\), \(\tilde{v}^{a}\) and \(\tilde{\omega}^{ab}\) transform as
\[\delta\lambda=\mathcal{L}_{\Xi}\lambda, \tag{3.42a}\] \[\delta h^{a}=\mathcal{L}_{\Xi}h^{a}-\theta^{a}{}_{b}h^{b}-\sigma^{2}b^{a}a,\] (3.42b) \[\delta a=\mathcal{L}_{\Xi}a+b_{a}h^{a},\] (3.42c) \[\delta\psi^{a}=\mathcal{L}_{\Xi}\psi^{a}+b^{a}-\theta^{a}{}_{b}\psi^{b}. \tag{3.42f}\]
Notice that, apart from the parameter associated with local rotations, we have defined \(\Theta^{an}=\sigma\,b^{a}\). Since we are interested in expressing the action (3.26) in terms of the rescaled fields given in Eq. (3.28), we notice that the \((d+2)\)-dimensional metric and its inverse can be decomposed as
\[G_{\hat{\mu}\hat{\nu}}=g_{\hat{\mu}\hat{\nu}}+\sigma^{2}\rho_{\hat{\mu}}\rho_{ \hat{\nu}},\qquad G^{\hat{\mu}\hat{\nu}}=g^{\hat{\mu}\hat{\nu}}+\frac{1}{\sigma ^{2}}\rho^{\hat{\mu}}\rho^{\hat{\nu}}, \tag{3.43}\]
where the inverse vielbein has been rescaled as \(E^{\hat{\mu}}_{\phantom{\hat{\mu}}\hat{A}}=(\tau^{\hat{\mu}},e^{\hat{\mu}}_{ \phantom{\hat{\mu}}\hat{a}},\sigma^{-1}\rho^{\hat{\mu}})\) and we have defined
\[g_{\hat{\mu}\hat{\nu}}=-\tau_{\hat{\mu}}\tau_{\hat{\nu}}+\delta_{ab}e^{a}_{\ \hat{\mu}}e^{b}_{\ \hat{\nu}},\qquad g^{\hat{\mu}\hat{\nu}}=-\tau^{\hat{\mu}}\tau^{\hat{\nu}}+\delta^{ab}e^{\hat{\mu}}_{\ a}e^{\hat{\nu}}_{\ b}. \tag{3.44}\]
The fields satisfy the orthogonality relations
\[\rho^{\hat{\mu}}g_{\hat{\mu}\hat{\nu}}=0=\rho_{\hat{\mu}}g^{\hat{\mu}\hat{\nu }},\qquad\rho^{\hat{\mu}}\rho_{\hat{\mu}}=1,\qquad g^{\hat{\mu}\hat{\rho}}g_{ \hat{\rho}\hat{\nu}}+\rho^{\hat{\mu}}\rho_{\hat{\nu}}=\delta^{\hat{\mu}}_{\hat {\nu}}. \tag{3.45}\]
and the square root of the metric determinant can be written as
\[\sqrt{|G|}=\sigma\sqrt{|g\rho|},\qquad|g\rho|=\frac{1}{(d+1)!}\epsilon_{a_{0} \cdots a_{d}}\epsilon^{\mu_{0}\cdots\mu_{d+1}}e^{a_{0}}_{\ \mu_{0}}\cdots e^{a_{d}}_{\ \hat{\mu}_{d}}\rho_{\mu_{d+1}}. \tag{3.46}\]
All this allows us to expand the higher dimensional action in powers of \(\sigma\),
\[S=\frac{1}{\sigma}S_{0}+\sigma S_{1}+\sigma^{3}S_{2}+O(\sigma^{5}), \tag{3.47}\]
with the leading order terms taking the form
\[S_{0}=-\int d^{d+2}x\,\sqrt{|g\rho|}\,g^{\hat{\mu}\hat{\rho}}\rho^{\hat{\nu}}\rho^{\hat{\sigma}}\left[\frac{\hat{\alpha}_{0}}{2}R^{ab}_{\ \ \hat{\mu}\hat{\nu}}\,R_{ab\hat{\rho}\hat{\sigma}}+\hat{\alpha}_{3}T^{a}_{\ \hat{\mu}\hat{\nu}}\,T_{a\hat{\rho}\hat{\sigma}}-\hat{\alpha}_{3}\partial_{[\hat{\mu}}\tau_{\hat{\nu}]}\partial_{[\hat{\rho}}\tau_{\hat{\sigma}]}\right],\] \[S_{1}=-\int d^{d+2}x\,\sqrt{|g\rho|}\left[g^{\hat{\mu}\hat{\rho}}g^{\hat{\nu}\hat{\sigma}}\left(\frac{\hat{\alpha}_{0}}{4}R^{ab}_{\ \ \hat{\mu}\hat{\nu}}\,R_{ab\hat{\rho}\hat{\sigma}}+\frac{\hat{\alpha}_{3}}{2}T^{a}_{\ \hat{\mu}\hat{\nu}}\,T_{a\hat{\rho}\hat{\sigma}}-\frac{\hat{\alpha}_{3}}{2}\partial_{[\hat{\mu}}\tau_{\hat{\nu}]}\partial_{[\hat{\rho}}\tau_{\hat{\sigma}]}\right)\right.\] \[\qquad\left.+2\hat{\alpha}_{3}g^{\hat{\mu}\hat{\rho}}\rho^{\hat{\nu}}T_{a\hat{\mu}\hat{\nu}}\,v^{a}_{\ \hat{\rho}}+g^{\hat{\mu}\hat{\rho}}\rho^{\hat{\nu}}\rho^{\hat{\sigma}}\left(\hat{\alpha}_{2}F^{a}_{\ \hat{\mu}\hat{\nu}}\,F_{a\hat{\rho}\hat{\sigma}}-\hat{\alpha}_{0}R_{ab\hat{\mu}\hat{\nu}}\,v^{a}_{\ [\hat{\rho}}v^{b}_{\ \hat{\sigma}]}+\hat{\alpha}_{1}f_{\hat{\mu}\hat{\nu}}\,f_{\hat{\rho}\hat{\sigma}}\right)\bigg{]},\] \[S_{2}=-\frac{1}{2}\int d^{d+2}x\,\sqrt{|g\rho|}\Bigg{[}g^{\hat{\mu}\hat{\rho}}g^{\hat{\nu}\hat{\sigma}}\left(\hat{\alpha}_{2}F^{a}_{\ \hat{\mu}\hat{\nu}}\,F_{a\hat{\rho}\hat{\sigma}}-\hat{\alpha}_{0}R_{ab\hat{\mu}\hat{\nu}}\,v^{a}_{\ [\hat{\rho}}v^{b}_{\ \hat{\sigma}]}+\hat{\alpha}_{1}f_{\hat{\mu}\hat{\nu}}\,f_{\hat{\rho}\hat{\sigma}}\right)\] \[\qquad\qquad+\hat{\alpha}_{0}g^{\hat{\mu}\hat{\rho}}\rho^{\hat{\nu}}\rho^{\hat{\sigma}}\,v_{a[\hat{\mu}}v_{b\hat{\nu}]}\,v^{a}_{\ [\hat{\rho}}v^{b}_{\ \hat{\sigma}]}+2\hat{\alpha}_{3}g^{\hat{\mu}\hat{\rho}}\,v_{a\hat{\mu}}\,v^{a}_{\ \hat{\rho}}\Bigg{]}. \tag{3.48}\]
### Dimensional reduction
Before starting with the dimensional reduction procedure, we would like to point out that, as generically happens in gravitational theories, local translations are "spontaneously broken" [28]. Nonetheless, we are interested in a system where the fracton charge (momentum in the extra direction) is conserved. Moreover, from the perspective of this construction the fracton \(U(1)\) transformations can be understood as translations along the extra spacetime dimension (see Eq. (3.9)). Therefore, since the generator \(\mathbf{Q}\) commutes with all the remaining generators after the contraction, it is natural to require the existence of a spacelike (transverse) Killing vector \(\mathcal{K}\), and to use coordinates \(x^{\hat{\mu}}=(x^{\mu},z)\) such that \(\mathcal{K}=\partial_{z}\). This guarantees that the gauge fields remain invariant when a transverse diffeomorphism with constant parameter is applied. Thus we require
\[\mathcal{L}_{\mathcal{K}}\mathbf{A}=\partial_{z}\mathbf{A}=0, \tag{3.49}\]
which implies that all components of the fields are \(z\)-independent. After doing so, we introduce the gauge fixing condition
\[\rho_{z}=1, \tag{3.50}\]
From the orthogonality relations between the higher-dimensional vielbein and its inverse, it follows that setting (3.50) implies
\[\rho^{\hat{\mu}}=\delta^{\hat{\mu}}_{z},\quad\tau_{z}=0=e^{a}_{\ z},\quad\tau^ {z}=-\rho_{\mu}\tau^{\mu},\quad e^{z}_{a}=-\rho_{\mu}e^{\mu}_{\ a}, \tag{3.51}\]
whereas the rest of the fields satisfy the relations
\[\tau^{\mu}\tau_{\mu}=1, \tag{3.52a}\] \[e^{a}_{\ \mu}\tau^{\mu}=0,\] (3.52b) \[\tau^{\mu}\tau_{\nu}+e^{\mu}_{\ a}e^{a}_{\ \nu}=\delta^{\mu}_{\nu}, \tag{3.52c}\]
familiar from non-Lorentzian geometry. The higher-dimensional metric tensor then takes the form
\[G_{\hat{\mu}\hat{\nu}}dx^{\hat{\mu}}dx^{\hat{\nu}}=g_{\mu\nu}dx^{\mu}dx^{\nu}+ \sigma^{2}(dz+\rho_{\mu}dx^{\mu})(dz+\rho_{\nu}dx^{\nu}), \tag{3.53}\]
and the determinant of the metric reduces to
\[\sqrt{|g\rho|}=\sigma\sqrt{|g|}. \tag{3.54}\]
In order to make the dipole symmetry explicit, we express the non-linear fields \(\rho_{\mu}\) and \(v_{\hat{\mu}}\) in terms of \(a_{\mu}\), \(\tilde{v}^{a}_{\hat{\mu}}\), \(\psi^{a}\), \(\omega^{ab}_{\hat{\mu}}\), \(\tau\) and \(e^{a}_{\ \mu}\). At leading order in \(\sigma\) we can write
\[\begin{split}\rho_{\mu}&=a_{\mu}-e^{a}_{\ \mu}\psi_{a}+O(\sigma^{2}),\\ v^{a}_{\ \hat{\mu}}&=\tilde{v}^{a}_{\ \hat{\mu}}-D_{ \hat{\mu}}\psi^{a}+O(\sigma^{2}).\end{split} \tag{3.55}\]
Thus, by defining
\[\tilde{F}^{a}=D\tilde{v}^{a},\quad\tilde{f}=da+e^{a}\tilde{v}_{a}, \tag{3.56}\]
the curvatures \(F^{a}_{\hat{\mu}\hat{\nu}}\) and \(f_{\hat{\mu}\hat{\nu}}\) can be written as
\[\begin{split}& F^{a}_{\ \hat{\mu}\hat{\nu}}=\tilde{F}^{a}_{\ \hat{\mu}\hat{\nu}}-R^{ab}_{\ \hat{\mu}\hat{\nu}}\psi_{b}+O(\sigma^{2}),\\ & f_{\hat{\mu}\hat{\nu}}=\tilde{f}_{\hat{\mu}\hat{\nu}}-T^{a}_{\ \hat{\mu}\hat{\nu}}\psi_{a}+O(\sigma^{2}).\end{split} \tag{3.57}\]
Finally, as a last gauge fixing we impose
\[\omega^{ab}_{\ \ z}=0,\qquad\tilde{v}^{a}_{\ \ z}=0. \tag{3.58}\]
Implementing all these conditions, the transformations of the \((d+1)\)-dimensional fields take the form
\[\delta\tau_{\mu} =\mathfrak{L}_{\xi}\tau_{\mu}, \tag{3.59a}\] \[\delta e^{a}_{\ \mu} =\mathfrak{L}_{\xi}e^{a}_{\ \mu}-\theta^{a}_{\ b}e^{b}_{\ \mu},\] (3.59b) \[\delta\omega^{ab}_{\ \ \mu} =\mathfrak{L}_{\xi}\omega^{ab}_{\ \ \mu}+D_{\mu}\theta^{ab},\] (3.59e) \[\delta\psi^{a} =\mathfrak{L}_{\xi}\psi^{a}+b^{a}-\theta^{a}_{\ b}\psi^{b}, \tag{3.59f}\]
where \(\mathfrak{L}\) denotes the Lie derivative in \(d+1\) dimensions and the higher-dimensional diffeomorphism parameter has been redefined as
\[\Xi=\Xi^{\hat{\mu}}\partial_{\hat{\mu}}=\xi^{\mu}\partial_{\mu}-\epsilon \partial_{z}=\xi-\epsilon\partial_{z}. \tag{3.60}\]
where we have renamed \(\Xi^{n}=-\epsilon\). Demanding that the gauge conditions (3.50) and (3.51) are invariant under the gauge transformations (3.59) restricts the diffeomorphism parameter \(\xi^{\mu}\) and the gauge parameters \(\theta^{ab}\) and \(b^{a}\) to be \(z\)-independent.
Due to the conditions (3.50), (3.51) and (3.58), the gauge connection (3.30) satisfies \(\mathbf{A}_{z}=\mathbf{Q}\). This, together with the fact that the fields are \(z\)-independent, implies that the field strength two-form (3.32) satisfies
\[\mathbf{F}_{\ z\mu}=\partial_{[z}\tau_{\mu]}\mathbf{H}+T^{a}_{\ z\mu}\mathbf{P}_{a}+\frac{ 1}{2}R^{ab}_{\ \ z\mu}\mathbf{J}_{ab}+F^{a}_{\ z\mu}\mathbf{Q}^{a}+f_{z\mu}\mathbf{Q}=0. \tag{3.61}\]
As a consequence, the action \(S_{0}\) in (3.48) vanishes. The condition (3.49) allows us to integrate trivially over the coordinate \(z\) in the action. Introducing the new constant
\[\alpha_{n}=\sigma\hat{\alpha}_{n}\int dz. \tag{3.62}\]
we find the following gauge-fixed action
\[S=S_{1}+\sigma^{2}S_{2}+O(\sigma^{4}), \tag{3.63}\]
where
\[\begin{split} S_{1}&=-\frac{1}{2}\int d^{d+1}x\, \sqrt{|g|}\,\Big{[}\frac{\alpha_{0}}{2}R^{ab}_{\ \ \mu\nu}\,R^{\ \ \mu\nu}_{ab}+\alpha_{3}T^{a}_{\ \mu\nu}T_{a}^{\ \mu\nu}-\alpha_{3}\partial_{[\mu}\tau_{\nu]} \partial^{[\mu}\tau^{\nu]}\Big{]}\,,\\ S_{2}&=-\frac{1}{2}\int d^{d+1}x\,\sqrt{|g|}\,\bigg{[} \alpha_{2}\left(\tilde{F}^{a}_{\ \mu\nu}-R^{ab}_{\ \ \mu\nu}\psi_{b}\right)\left(\tilde{F}^{a}_{\ \mu\nu}-R^{\ \ \mu\nu}_{ac}\psi^{c}\right)\\ &-\alpha_{0}R^{ab\mu\nu}\left(\tilde{v}_{a\mu}-D_{\mu}\psi_{a} \right)\left(\tilde{v}_{b\nu}-D_{\nu}\psi_{b}\right)+\alpha_{1}\left(\tilde{ f}_{\mu\nu}-T^{a}_{\ \mu\nu}\psi_{a}\right)\left(\tilde{f}^{\mu\nu}-T^{b\mu\nu}\psi_{b}\right)\\ &+2\alpha_{3}\left(\tilde{v}^{a}_{\ \mu}-D_{\mu}\psi^{a}\right) \left(\tilde{v}_{a}^{\ \mu}-D^{\mu}\psi_{a}\right)\bigg{]}.\end{split} \tag{3.64}\]
Notice that, in the strict limit \(\sigma=0\), the limit procedure leads to \(S=S_{1}\). However, other limits can be defined that lead to the action \(S_{2}\) instead. In the following we discuss a few interesting cases:
\(\bullet\)**Pretko's theory** - we can introduce the rescaling \(\alpha_{n}\to\frac{1}{\sigma^{2}}\alpha_{n}\) together with auxiliary fields \(\lambda^{ab}_{\ \mu\nu}\), \(\lambda^{a}_{\ \mu\nu}\) and \(\lambda_{\mu\nu}\), which allow us to rewrite (3.63) as
\[\begin{split} S&=\alpha_{0}\int d^{d+1}x\,\sqrt{|g|} \,\left(\frac{\sigma^{2}}{2}\ \lambda^{ab}_{\ \ \mu\nu}\,\lambda^{\ \ \mu\nu}_{ab}-\,\lambda^{\ \ \mu\nu}_{ab}\,R^{ab}_{\ \ \mu\nu}\right)\\ &+\alpha_{3}\int d^{d+1}x\,\sqrt{|g|}\,\left(\sigma^{2}\ \lambda^{a}_{\ \mu\nu}\,\lambda^{\ \ \mu\nu}_{a}-2\,\lambda^{\ \mu\nu}_{a}\,T^{a}_{\ \mu\nu}-\sigma^{2}\lambda_{\mu\nu}\lambda^{\mu\nu}-2\lambda^{\mu\nu}\partial_{[ \mu}\tau_{\nu]}\right)+S_{2}+O(\sigma^{2}).\end{split} \tag{3.65}\]
Indeed, one can see that integrating out the auxiliary fields yields the action (3.63) after properly rescaling the constants \(\alpha_{n}\). Now, in the limit \(\sigma\to 0\), \(\lambda^{ab}_{\ \ \mu\nu}\), \(\lambda^{a}_{\ \ \mu\nu}\) and \(\lambda_{\mu\nu}\) become Lagrange multipliers enforcing the constraints
\[R^{ab}_{\ \ \mu\nu}=0,\qquad T^{a}_{\ \ \mu\nu}=0,\qquad\partial_{[\mu}\tau_{ \nu]}=0, \tag{3.66}\]
and thus leading to the action \(S=S_{2}\) on flat space. For \(\alpha_{3}=0\) and \(\alpha_{2}=1\), the action boils down to
\[S=-\frac{1}{2}\int d^{d+1}x\left[\tilde{F}^{a}_{\ \mu\nu}\tilde{F}_{a}^{\ \mu\nu}+\alpha_{1}\tilde{f}_{\mu\nu}\tilde{f}^{\mu\nu}\right]. \tag{3.67}\]
Splitting the index \(\mu\) into space and time components \(\mu=(0,i)\), choosing \(\tau_{\mu}=(1,0,\ldots,0)\), \(e^{a}_{\mu}=(0,\delta^{a}_{i})\), \(\omega^{ab}_{\mu}=0\), and gauge fixing \(a_{\mu}=(\phi,0,\ldots,0)\), we can write
\[\tilde{F}^{a}_{\mu\nu}=\partial_{[\mu}\tilde{v}^{a}_{\ \nu]},\qquad\tilde{f}_{0 i}=-\partial_{i}\phi-\tilde{v}_{i0},\qquad\tilde{f}_{ij}=\tilde{v}_{[ij]}, \tag{3.68}\]
where we have defined \(\tilde{v}_{\mu\nu}=e_{a\mu}\tilde{v}^{a}_{\ \nu}\). We now split the field \(\tilde{v}^{a}_{\ \mu}\) as
\[\tilde{v}^{a}_{\ \mu}=A^{a}_{\ \mu}+B^{a}_{\ \mu}, \tag{3.69}\]
where \(A^{a}_{\ \mu}\) is the part of \(\tilde{v}^{a}_{\ \mu}\) that solves \(\tilde{f}_{\mu\nu}=0\) and thus has the form
\[A_{0i}=-\partial_{i}\phi,\quad A_{ij}=A_{ji}. \tag{3.70}\]
Notice that the curvature \(f_{\mu\nu}\) is gauge invariant in the absence of torsion and the second term in (3.67) is now a mass term for the fields \(B_{0i}\) and \(B_{ij}\). In the low energy regime of the theory we can keep only the gapless modes and therefore neglect \(B^{a}_{\ \mu}\). In this case, defining
\[F_{\mu\nu k}\equiv\delta_{ak}\tilde{F}^{a}_{\ \mu\nu} \tag{3.71}\]
leads to the action (2.23) proposed in [9]. The gauge transformation for \(\phi\) and \(A_{ij}\) follow from (3.59) and the gauge invariance of the gauge condition \(a_{i}=0\). This yields4
Footnote 4: Notice that this result also holds in curved space, where setting \(a_{\mu}e^{\mu}_{a}=0\) as a gauge condition yields \(b_{a}=e_{a}^{\ \mu}\partial_{\mu}\epsilon\), which leads to the transformation law (2.39) for \(A^{a}_{\ \mu}\).
\[b_{a}=\delta^{i}_{a}\partial_{i}\epsilon\quad\Rightarrow\quad\delta\phi=- \dot{\epsilon},\quad\delta A_{ij}=\partial_{i}\partial_{j}\epsilon, \tag{3.72}\]
which matches (2.21).
\(\bullet\)**Proca extension** - Turning on the constant \(\alpha_{3}\) in the previous example leads to a Proca extension of fracton electrodynamics. Indeed, after a renaming and setting \(\alpha_{3}=m^{2}\), the action (2.23) is extended to
\[S=-\int d^{d+1}x\left[\frac{1}{2}D_{[\mu}A^{a}_{\ \nu]}D^{[\mu}A_{a}^{\ \nu]}+m^{2}\left(A^{a}_{\ \mu}-\partial_{\mu}\psi^{a}\right)\left(A^{\ \mu}_{a}-\partial^{\mu}\psi_{a}\right)\right]+\ldots, \tag{3.73}\]
where \(\ldots\) in the action contains the kinetic term for the \(B\) field, and the interactions between the gauge field \(A^{a}_{\ \mu}\) and \(B^{a}_{\ \mu}\). Notice again that the \(B\) field will be massive and gauge invariant.
\(\bullet\) **Curved space generalization** - Implementing the rescaling \(\alpha_{1}\to\sigma^{-2}\alpha_{1}\) and \(\alpha_{2}\to\sigma^{-2}\alpha_{2}\) and taking the limit \(\sigma\to 0\) in the action (3.63) we find
\[S =-\frac{1}{2}\int d^{d+1}x\,\sqrt{|g|}\bigg{[}\frac{\alpha_{0}}{2} R^{ab}_{\phantom{ab}\mu\nu}\,R_{ab}^{\phantom{ab}\mu\nu}+\alpha_{3}T^{a}_{\phantom{ a}\mu\nu}T_{a}^{\phantom{a}\mu\nu}-\alpha_{3}\partial_{[\mu}\tau_{\nu]}\partial^{[\mu} \tau^{\nu]}\] \[+\alpha_{2}\left(\tilde{F}^{a}_{\phantom{a}\mu\nu}-R^{ab}_{ \phantom{ab}\mu\nu}\psi_{b}\right)\left(\tilde{F}^{\phantom{a}\mu\nu}_{a}-R_{ ac}^{\phantom{ac}\mu\nu}\psi^{c}\right)+\alpha_{1}\left(\tilde{f}_{\mu\nu}-T^{a}_{ \phantom{a}\mu\nu}\psi_{a}\right)\left(\tilde{f}^{\mu\nu}-T^{b\mu\nu}\psi_{b }\right)\bigg{]}, \tag{3.74}\]
which corresponds to a torsion-full generalization of the action proposed in [11]. Indeed, the term proportional to \(\alpha_{2}\) is precisely the action (2.45).
\(\bullet\) **Constrained curved background** - Another possibility is to treat the action (3.63) perturbatively in the parameter \(\sigma\), by assuming the fields \(\tilde{v}^{a}_{\phantom{a}\mu}\) and \(a_{\mu}\) do not backreact on the geometry and considering the gravitational fields to be onshell. We consider the case \(\alpha_{3}=0\) and \(\alpha_{0}/2=\alpha_{2}=1\). The action \(S_{1}\) then reduces to an Aristotelian version of the Stephenson-Kilmister-Yang model [29; 30; 31]. The field equations coming from \(S_{1}\) after varying with respect to the metric fields and the spin connection read
\[\delta\omega^{ab}_{\phantom{ab}\mu}: D^{\mu}R_{ab\mu\nu}=0, \tag{3.75}\] \[\delta h^{\alpha\beta}: \frac{1}{2}R^{ab}_{\phantom{ab}\mu(\alpha}R^{\phantom{ab}\mu}_{ ab\phantom{ab}\beta)}-\frac{1}{4}g_{\alpha\beta}R^{ab}_{\phantom{ab}\mu\nu}R_{ ab}^{\phantom{ab}\mu\nu}=0,\] (3.76) \[\delta\tau^{\alpha}: \tau_{\alpha}R^{ab}_{\phantom{ab}\mu\nu}R_{ab}^{\phantom{ab}\mu \nu}-2\tau^{\mu}R^{ab}_{\phantom{ab}\mu\nu}R_{ab\alpha}^{\phantom{ab}\nu}=0. \tag{3.77}\]
We consider a solution with vanishing torsion \(T^{a}_{\mu\nu}=0=\partial_{[\mu}\tau_{\nu]}\). One consequence of this is that the resulting action \(S_{2}\) is gauge invariant even in the absence of the Stueckelberg field \(\psi^{a}\), thanks to the field equation (3.75). This means that on such gravitational backgrounds it is possible to describe a phase of the system with unbroken dipole symmetry where \(\psi^{a}=0\). Moreover, as in the flat case, after decomposing \(\tilde{v}^{a}_{\ \mu}\) in the form (3.69), the term \(\tilde{f}_{\mu\nu}\tilde{f}^{\mu\nu}\) is a mass term for \(B^{a}_{\ \mu}\) and therefore this field can be neglected in a low energy description of the system. Under all these considerations, the action \(S_{2}\) takes the simple form
\[S_{2}=-\int d^{d+1}x\,\sqrt{|g|}\bigg{[}\frac{1}{2}\tilde{F}^{a}_{\phantom{a} \mu\nu}\tilde{F}_{a}^{\phantom{a}\mu\nu}-R^{ab\mu\nu}A_{a\mu}A_{b\nu}\bigg{]}. \tag{3.78}\]
This is precisely the action (2.32) when the gauge fixing condition
\[a_{\mu}=\phi\,\tau_{\mu} \tag{3.79}\]
is imposed. Indeed, the generalization of \(A^{a}_{\phantom{a}\mu}\) in the curved space case has the form
\[A^{a}_{\phantom{a}\mu}=\frac{1}{2}\partial_{[\alpha}a_{\nu]}\left(\delta^{ \alpha}_{\mu}+\tau^{\alpha}\tau_{\mu}\right)e^{a\nu}+A_{ab}e^{b}_{\phantom{b} \mu},\quad A_{ab}=A_{ba}. \tag{3.80}\]
When the conditions (3.49), (3.50), (3.51) and (3.58) are imposed, at leading order in the \(\sigma\)-expansion the vielbein postulate (3.23) and its inverse relation lead to the lower-dimensional vielbein postulate and its inverse
\[\nabla_{\mu}\tau_{\nu}=0,\quad\nabla_{\mu}e^{a}_{\phantom{a}\nu}=0,\quad \nabla_{\mu}\tau^{\nu}=0,\quad\nabla_{\mu}e_{a}^{\phantom{a}\nu}=0. \tag{3.81}\]
where \(\nabla_{\mu}\) acts with \(\omega^{ab}_{\ \ \mu}\) on tangent space indices and with \(\Gamma^{\sigma}_{\mu\nu}\) on space-time indices. Using these relations we can write the projections of the gauge field \(A^{a}_{\mu}\) along the inverse tetrad \(\tau^{\mu}\) and \(e^{\mu}_{a}\) as
\[\begin{split} A_{a\mu}\tau^{\mu}&=\tau^{\mu}\nabla_{\mu}\left(e_{a}^{\ \nu}a_{\nu}\right)-e_{a}^{\ \mu}\nabla_{\mu}\left(\tau^{\nu}a_{\nu}\right),\\ A_{a\mu}e_{b}^{\ \mu}&=e_{b}^{\ \mu}A_{ab}-\frac{1}{2}\nabla_{[\mu}a_{\nu]}e_{a}^{\ \mu}e_{b}^{\ \nu}.\end{split} \tag{3.82}\]
One can show the equivalence between the actions (2.32) and (3.78) by noticing that these relations reduce to the corresponding expressions of Sect. 2 once the gauge fixing condition (3.79) is implemented. Similarly, one can define
\[A_{\mu\nu}=A_{a\mu}e_{\nu}^{a}-\nabla_{\mu}a_{\nu}, \tag{3.83}\]
which can be shown to be explicitly symmetric after replacing (3.82), and which matches the corresponding definition of Sect. 2 once the gauge condition (3.79) is imposed.
## 4 Conclusions and outlook
In this paper, we have studied the connection between symmetric gauge fields and gravity by understanding the MDMA as a contraction of the Poincare algebra. This analysis gives a geometric interpretation to the fracton charge as the momentum of the matter field along a transverse (internal) spacetime dimension, and to the dipole charge as the angular momentum along that direction.
The main result of the paper is twofold: on the one hand, we have constructed a Lie algebra contraction that allows us to obtain the MDMA from the Poincare algebra in one dimension higher by combining a pseudo-Carrollian contraction with an Aristotelian one. On the other hand, we have derived the action (3.63), which describes fracton gauge fields coupled to Aristotelian geometry. This action was obtained from a higher-dimensional Poincare gauge theory in a symmetry-broken phase after applying a dimensional reduction and the pseudo-Carrollian-Aristotelian limit.
Different \(\sigma\to 0\) limits of the resulting action were analyzed, which led in particular to the original gauge theory of fracton electrodynamics proposed by Pretko on flat space, together with a flat space Proca extension of the theory, a spontaneously broken phase in curved space, and a symmetric phase in curved space with the harmonic condition \(D^{\mu}R_{ab\mu\nu}=0\).
As future directions, we envisage possible generalizations of our results. For instance, one could explore the inclusion of fractonic matter in our model by considering higher dimensional relativistic matter fields coupled to our Poincare gauge theory. Additionally, one could generalize the Lie algebra contraction here considered to fractonic symmetries that generalize the MDMA by including higher moment charges. Indeed, due to the isomorphism between the Bargmann algebra and the extension of the MDMA that includes conservation of the trace of the quadrupole moment, it would be interesting to understand the relation between Newton-Cartan gravity and fracton gauge theories with such a gauge group. Moreover, it would be of interest to explore supersymmetric generalizations of our results. By exploiting the relation between the Carroll algebra and the MDMA, supersymmetric extensions of Carroll could be of use in the study of spin 1/2 fractons.
###### Acknowledgments.
We thank E. Bergshoeff, G. Palumbo, O. Castillo-Felisola for enlightening comments and discussions. F. P.-B. acknowledges the Nordita Institute for hospitality while attending the workshop "Hydrodynamics at all scales". This work has been funded by the Norwegian Financial Mechanism 2014-2021 via the Narodowe Centrum Nauki (NCN) POLS grant 2020/37/K/ST3/03390.
|
2303.12728 | LocalEyenet: Deep Attention framework for Localization of Eyes | Development of human machine interface has become a necessity for modern day
machines to catalyze more autonomy and more efficiency. Gaze driven human
intervention is an effective and convenient option for creating an interface to
alleviate human errors. Facial landmark detection is very crucial for designing
a robust gaze detection system. Regression based methods capacitate good
spatial localization of the landmarks corresponding to different parts of the
faces. But there are still scope of improvements which have been addressed by
incorporating attention.
In this paper, we have proposed a deep coarse-to-fine architecture called
LocalEyenet for localization of only the eye regions that can be trained
end-to-end. The model architecture, build on stacked hourglass backbone, learns
the self-attention in feature maps which aids in preserving global as well as
local spatial dependencies in face image. We have incorporated deep layer
aggregation in each hourglass to minimize the loss of attention over the depth
of architecture. Our model shows good generalization ability in cross-dataset
evaluation and in real-time localization of eyes. | Somsukla Maiti, Akshansh Gupta | 2023-03-13T06:35:45Z | http://arxiv.org/abs/2303.12728v1 | # LocalEyenet: Deep Attention framework for Localization of Eyes
###### Abstract
Development of human machine interfaces has become a necessity for modern day machines to catalyze more autonomy and more efficiency. Gaze driven human intervention is an effective and convenient option for creating an interface to alleviate human errors. Facial landmark detection is very crucial for designing a robust gaze detection system. Regression based methods enable good spatial localization of the landmarks corresponding to different parts of the face. But there is still scope for improvement, which has been addressed by incorporating attention.
In this paper, we have proposed a deep coarse-to-fine architecture called LocalEyenet for localization of only the eye regions that can be trained end-to-end. The model architecture, built on a stacked hourglass backbone, learns the self-attention in feature maps, which aids in preserving global as well as local spatial dependencies in face images. We have incorporated deep layer aggregation in each hourglass to minimize the loss of attention over the depth of the architecture. Our model shows good generalization ability in cross-dataset evaluation and in real-time localization of eyes.
keywords: Facial landmark detection, Attention model, Deep Learning, Convolutional Neural Network, Deep Layer Aggregation, Human Machine Interaction, Gaze Controlled Interface
## 1 Introduction
Modern machines are user-friendly and keep the possibility of human machine interaction open. Human interaction provides an effective way of controlling the machine with ease and reduces the risk of error. Gaze driven human machine interfaces have become essential for smooth control of machine parts with no physical intervention. The need for gaze control has been addressed over the past two decades in assistive robotic systems [1] for disabled persons and in controlling the robotic arms in robotic surgical systems [2]. Tracking the eye gaze requires accurate localization of the facial landmarks. Face landmark detection has been playing a significant role in current human machine interface applications, such as gaze tracking for autonomous vehicles [3], face tracking [4][5][6] and facial expression analysis [7][8].
Localization of landmark points on the face helps us perform face alignment in an effective manner. While designing a robust gaze controlled interface, it is mandatory to perform precise localization of the eyes with high accuracy and low latency. Simultaneously, landmark detection methods are required to handle different real-time challenges, such as low lighting conditions, face occlusion and fast head movements that lead to variation in pose. Over the past two decades, with the evolution of new methods in deep learning, there has been a significant improvement in developing more robust solutions. Convolutional neural networks (CNNs) have been widely used for facial landmark detection in different applications. Availability of large annotated face databases covering different environmental conditions has made the task easier. Even with small datasets, different data augmentation techniques have proved really efficient in providing better generalization of the solution. Coarse-to-fine regression type architectures have shown the most prominent localization performance. These techniques learn the coarse features over a shape space at the shallow layers and subsequently learn the fine features at deeper layers [9] by cascading several deep CNNs [10]. Cascaded hourglass and UNet architectures have been able to provide good generalization but still have some issues. The heatmaps generated using most of these state of the art architectures do not compute the local attention after each layer. This affects the learning of correlation between features from the shallow layers and the deep layers.
Here we aim to provide a solution that addresses the following issues by designing dedicated modules.
* We have defined a self-attention module between the individual hourglass modules, which learns the local spatial attention and the global attention to estimate the precise location of the landmarks.
* We have designed a deep layer aggregation module to learn the feature dependencies over the depth of architecture while parsing the features across network.
* We have used differentiable Soft-argmax [11][12] to make the framework an end-to-end trainable architecture to determine the coordinates \((x,y)\) of each facial landmark from the attention heatmaps.
## 2 Related Work
Facial landmark detection and localization has been an active area of research for aligning faces. There have been three broad classes of techniques [13]: a. holistic techniques, b. Constrained Local Model (CLM) based techniques and c. regression-based techniques. Holistic models [14] generate an appearance model of the face images by first building a shape model and then mapping the features onto the shape model. Active appearance models [15][16] have been one of the widely used holistic approaches; they compute and update the shape and appearance coefficients and the parameters for affine transformations for detection of the facial landmarks. CLM based techniques generate a global shape model and local appearance models for each landmark point [17]. Local appearance models help in minimizing the error due to variation in illumination and occlusion. Baltrusaitis et al. presented a Constrained Local Neural Field (CLNF) [18] that takes care of the feature detection problem in wild scenarios with low illumination and blurred faces. Recently, Weighted Iterative Closest Point (ICP) based surface matching [19] and a hierarchical filtering strategy [20] have been used to reduce the effect of noise in face registration. Researchers have recently proposed a semi-supervised method [21], called self-calibrated pose attention network (SCPAN), that computes Boundary-Aware Landmark Intensity (BALI) fields corresponding to a boundary and the landmarks closest to the boundary. They have also extended their work by proposing an implicit multiorder correlating geometry-aware (IMCG) model [22][23] that uses spatial and channel correlations to attain the local as well as global features.
On the other hand, the regression based techniques mainly focus on learning a model that maps the facial landmarks onto the images, instead of developing global and local face models. The regression models can be mainly classified into two categories, viz. i. cascaded regression techniques and ii. coarse-to-fine techniques.
### Cascaded regression techniques
Cascaded regression methods have proved to be very efficient; they mainly rely on extracting features at each stage and optimizing the regression parameters for estimating the positions of facial landmarks. Most of the cascaded regression methods rely on hand-crafted features. Local binary features (LBF) [24] and Histograms of Oriented Gradients (HoG) have been the most used hand-crafted features for the facial landmarks. In [25], LBFs have been extracted using random forests and a linear regression model is learned for each landmark position. However, development of regression models based on dedicated feature extraction methods tends to make the solution more sensitive to the given data. Thus most of the time, it fails to provide a generalized solution in scenarios with varied illumination conditions, occlusions and varied poses. The current cascaded regression models aim to develop a deep cascaded end-to-end architecture to determine the landmark coordinates [26][27][28]. In [29], the authors have developed a Deformable Transformer Landmark Detector (DTLD) model that preserves the local spatial structure of the face images and improves the localization accuracy. Lai et al. [30] have included recurrent neural network (RNN) modules to learn the dependency of the features generated by the cascaded CNN modules. Weng et al. have developed a cascaded deep autoencoder network (CDAN) [31] that learns the feature representation at the global and the local stages simultaneously in a cascaded manner.
### Coarse-to-fine techniques
Coarse-to-fine techniques have been used immensely, with different deep learning architectures [9][32][33][34][35] developed for generating accurate predictions of the landmark points. Dapogny et al. have defined a spatial softargmax [36] to generate landmark-specific attention. Kim et al. developed an extended version of the MTCNN model, called the EMTCNN model [37], where they have used dilated convolution and CoordConv to improve the localization accuracy. Several works have been performed where facial recognition, head pose determination and other activities have been
performed along with facial landmark detection. In such applications, shared CNNs [38] have been developed that use the same set of feature maps and enable better representation learning. Hannane et al. [39] learned an FLM topological model that performs a divide-and-conquer search over different patches of the face using coarse-to-fine CNN techniques and subsequently refines the landmark positions by using a shallow cascaded CNN regression. Gao has developed a supervised encoder-decoder architecture [40] based on EfficientNet-B0 where the dark knowledge extracted from a teacher network is used to supervise the training of a small student network and patch similarity (PS) distillation is used to learn the structural information of the face.
For a given face, generated by a face detection technique, the regression model tries to estimate the positions of the landmark points accurately. Thus researchers have also dedicated their time to solving the problem of face initialization. Lv et al. have developed a deep regression network with two-stage re-initialization [41] of faces at global and local scale to unify the different face bounding boxes obtained using different face detection methods. Heatmap regression methods generate coarse attention maps by developing a heatmap for each landmark position [42]. These methods [43][44][45] preserve the spatial features and thus provide better performance. UNet, Hourglass and Encoder-Decoder [46][47][48] are the most widely used fully convolutional networks for generating high resolution heatmaps. The stacked hourglass network [49][50] has been a popular architecture to generate accurate heatmaps that can also improve the localization in case of partial occlusions. Stacking several hourglass modules improves the spatial mapping of the features in subsequent hourglasses. Stacked U-Nets have been another very popular model where the features are extracted and then attention maps are generated by performing deconvolution. Stacked densely connected U-Nets have been developed in [51] that parse the global and local features across the U-Nets. Guo et al. [52] have defined a channel aggregation block (CAB) to improve the capacity of the stacked U-Net model in parsing the features across the network. In [53] a landmark-guided self attention (LGSA) block has been introduced that processes the output feature map of one hourglass to improve the spatial structure of the landmarks and sends it to the next hourglass. Xiong proposed a Gaussian vector [54] to encode landmark coordinates that provides a better convergence result. As the heatmap-based methods require a non-differentiable post-processing step, Jin has defined a single-stage pixel-in-pixel regression [55] where for each heatmap a grid is detected on the heatmap and then offsets are determined for precise localization of the landmarks.
### Attention models
Prediction of the face landmark coordinates using the regression techniques generates a model that considers the locations of the landmarks. Attention methods have recently gained popularity due to their better performance in localization. The attention methods focus on the most salient parts of the face and thus reduce the unnecessary complexity due to the irrelevant parts of the image. Different attention models have been developed that define attention blocks learning feature channel attention across the architecture, viz., the squeeze-and-excitation block in [56]. Learning the spatial attention is very useful for facial landmark localization due to the structure of the face. A spatial attention block has been designed using a non-local block in Grad-CAM [57][58]. Woo [59] has designed CBAM, which combines both spatial and channel attention. Wang has used a non-local neural net [60] to learn self-attention, which is most suitable due to the spatial geometry of the face. Researchers have also used adversarial training methods [61] by introducing a discriminator module [62] to improve the performance by generating more accurate heatmaps, followed by an attention module that optimizes the spatial correlations [63] between the facial landmarks.
## 3 Methodology
We aim to develop a gaze controlled human machine interface for controlling the navigation of a robotic platform, and thus the primary objective is to localize the eyes of the user. Therefore, instead of detecting 68 landmark points, we have devised our problem to detect the landmarks present only in the eye region of the face. We have selected 12 landmark points out of the 68 point landmark annotations, with indices 37 to 48.
Attention-driven facial landmark detection is usually performed by generating ground truth heatmaps. We have generated 12 ground truth heatmaps, one for each landmark position (6 landmarks for each eye), by applying a Gaussian filter centered at each landmark position. The standard deviation of each Gaussian filter is set to 5. We have proposed an attention based model for localization of the eye heatmaps and detection of the landmarks in the eye region. The architecture of the model is discussed in the following section.
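For concreteness, the ground truth generation described above can be sketched as follows; this is an illustrative NumPy snippet written by us, with the image size and landmark coordinates as placeholders rather than values from the paper.

```python
import numpy as np

def gaussian_heatmaps(landmarks, height, width, sigma=5.0):
    """One 2-D Gaussian heatmap (standard deviation sigma, in pixels) per landmark.

    landmarks: array of shape (K, 2) holding the (x, y) pixel coordinates of the
               K = 12 eye landmarks (indices 37-48 of the 68-point annotation).
    Returns an array of shape (K, height, width) with a peak at each landmark.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    maps = np.zeros((len(landmarks), height, width), dtype=np.float32)
    for k, (cx, cy) in enumerate(landmarks):
        maps[k] = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return maps

# Example: 12 eye landmarks on a 256x256 face crop (coordinates are made up).
eye_points = np.random.randint(60, 200, size=(12, 2))
heatmaps = gaussian_heatmaps(eye_points, 256, 256, sigma=5.0)
```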
### Network Architecture
We have proposed a deep attention model called _LocalEyenet_ by using the stacked hourglass (HG) architecture as backbone for detection of the eye landmark positions. We have stacked 3 hourglass modules in a linear fashion as shown in Fig. 1. The hourglass architecture has been designed in such a way that it combines the coarse and fine features in an efficient manner over the depth of the model. In the standard hourglass architecture, the lower layer features are combined with the higher layer features using residual blocks and upsampling. The residual blocks use a skip connection to pass the low level features to the deeper layers. This alleviates the problem of vanishing gradients and also retains features even after a few layers of downsampling. But this also provides only a shallow aggregation between the layers and does not retain overall information across the network.
#### 3.1.1 Deep Hourglass architecture
The concept of layer aggregation has been incorporated by designing deep layer aggregation schemes such as Iterative Deep Aggregation (IDA), as proposed in [64]. We have adopted this concept and designed a Deep Layer Aggregation Unit (DLAU) that maps the features in an iterative manner. Merging the features from all modules and channels makes the feature combination more appropriate, and the loss of attention over depth is reduced. The structure of the DLAU is described in Fig. 2.
For designing our model, we have replaced the residual blocks in the hourglass modules with DLAUs. A depth-wise convolution of the deep-layer features is concatenated with the shallow features, as sketched below. This provides a deep aggregation between the features, and the correlation between features across the network is preserved in this manner. The architecture of each deep hourglass module is shown in Fig. 3.
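A minimal PyTorch sketch of such an aggregation unit is given below. The kernel size of the depth-wise convolution, the fusion by a \(1\times 1\) convolution and the ReLU activation are our assumptions, since the text only specifies that depth-wise convolved deep features are concatenated with the shallow features.

```python
import torch
import torch.nn as nn

class DLAU(nn.Module):
    """Sketch of a deep layer aggregation unit: a depth-wise convolution of the
    deep-layer features is concatenated with the shallow features and fused."""
    def __init__(self, channels):
        super().__init__()
        # Depth-wise convolution (groups = channels) applied to the deep features.
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels)
        # 1x1 convolution fusing the concatenated shallow + deep features.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, shallow, deep):
        deep = self.depthwise(deep)
        return self.act(self.fuse(torch.cat([shallow, deep], dim=1)))
```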
The heatmaps generated by LocalEyenet are passed to the post-processing stage, where the positions of the landmarks are estimated from the heatmaps. Argmax is the most commonly used operation: it takes the locations with the highest probability in the attention heatmap as the landmark positions. Since argmax is non-differentiable, we have used the soft-argmax [11] method, which is continuous at every point and thus differentiable. This allows us to train the network end to end and to estimate the landmark positions from the attention heatmaps in a single pass.
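A differentiable 2D soft-argmax over per-landmark heatmaps could be sketched in PyTorch as follows; the temperature parameter `beta` and the normalised output coordinates are our assumptions, in the spirit of the coarse-map definition in Equation [1] below rather than a reproduction of the exact implementation.

```python
import torch

def soft_argmax_2d(heatmaps, beta=1.0):
    """Differentiable 2D soft-argmax over per-landmark heatmaps.

    heatmaps : tensor of shape (B, L, H, W), one channel per landmark.
    Returns normalised (x, y) coordinates in (0, 1], shape (B, L, 2).
    """
    B, L, H, W = heatmaps.shape
    probs = torch.softmax(beta * heatmaps.reshape(B, L, -1), dim=-1).reshape(B, L, H, W)
    xs = torch.arange(1, W + 1, dtype=heatmaps.dtype, device=heatmaps.device) / W
    ys = torch.arange(1, H + 1, dtype=heatmaps.dtype, device=heatmaps.device) / H
    x_exp = (probs.sum(dim=2) * xs).sum(dim=-1)  # expectation of x/W
    y_exp = (probs.sum(dim=3) * ys).sum(dim=-1)  # expectation of y/H
    return torch.stack([x_exp, y_exp], dim=-1)
```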
#### 3.1.2 Attention between Hourglass modules
The attention in the feature maps generated by the intermediate hourglass modules is computed and incorporated with the coarse predictions at each stage. This helps in capturing the spatial dependencies and correlations in the images. We have adopted the concept of the non-local neural network and modified it to compute the self-attention at each location of the feature map.
The feature map \(F_{i}\) obtained from the deep hourglass module is passed through a residual block to let the spatial dependencies flow across layers. The updated feature map \(F_{i}{}^{r}\) is passed through a convolution layer with kernel size \(1\times 1\) and \(12\) filters, one per landmark point, to obtain the feature map \(F_{i}{}^{rc}\). The convolved feature map \(F_{i}{}^{rc}\) is then passed through the soft-argmax operator to generate a coarse prediction of the attention map \(Map_{i}\), as defined in Equation [1], where \(W\times H\) is the size of the attention map. The similarity between this initial prediction of the attention heatmap and the residual feature map \(F_{i}{}^{r}\) is then used to refine the attention heatmap, incorporating the spatial association through an element-wise product, as defined in Equation [2].
\[Map_{i}=\sum_{x=1}^{W}\sum_{y=1}^{H}\frac{x}{W}\frac{y}{H}\frac{\exp(F_{i}{}^{rc}(x,y))}{\sum_{k=1}^{W}\sum_{l=1}^{H}\exp(F_{i}{}^{rc}(k,l))} \tag{1}\]
Figure 1: Framework of LocalEyenet model
Figure 3: Architecture of Deep hourglass module with deep layer aggregation
Figure 2: Deep layer aggregation unit (DLAU)
\[Map_{i}{}^{1}=Map_{i}\odot F_{i}{}^{r} \tag{2}\]
The convolved feature map \(F_{i}{}^{rc}\) is passed through the self-attention module. The non-local neural network is instantiated by defining an embedded Gaussian at each pixel of the image in the form \(\phi(x_{i})=W_{\phi}x_{i}\), \(\theta(x_{j})=W_{\theta}x_{j}\) and \(g(x_{i})=W_{g}x_{i}\). The parameters \(W_{\phi}\), \(W_{\theta}\) and \(W_{g}\) are learned through backpropagation. The spatial similarity between the embedded Gaussians at each pair of locations is estimated by Equation [3], and the softmax-normalised similarity map is combined with the embedding \(g\) according to Equation [4].
\[f(i,j)=e^{\phi(x_{i})\theta(x_{j})^{T}} \tag{3}\]
\[S_{i}=softmax(f)\odot g \tag{4}\]
The non-local operation is completed by applying a transform \(W\) to the self-attention map \(S_{i}\). The residual connection \(F_{i}{}^{rc}\) is added to the non-local output to generate the local attention map \(Att_{i}\), as defined in Equation [5]. The updated feature map \(F_{i}{}^{\prime}\) is computed by adding the coarse prediction of the attention map \(Map_{i}{}^{1}\), which carries the global attention learned in the previous hourglass module, to the attention map \(Att_{i}\), as defined in Equation [6]. The architecture of the complete attention block is shown in Fig. 4.
\[Att_{i}=WS_{i}+F_{i}{}^{rc} \tag{5}\]
\[F_{i}{}^{\prime}=Att_{i}+Map_{i}{}^{1} \tag{6}\]
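For concreteness, a minimal PyTorch sketch of this attention block, following Equations [3]-[6], is given below. The embedding dimension, the use of \(1\times 1\) convolutions for \(\phi\), \(\theta\), \(g\) and \(W\), and the assumed tensor shapes are our illustrative choices, not details taken from the original implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalAttention(nn.Module):
    """Sketch of the inter-hourglass attention block of Equations (3)-(6)."""
    def __init__(self, channels, embed_channels=None):
        super().__init__()
        embed_channels = embed_channels or max(channels // 2, 1)
        # Embedded-Gaussian projections phi, theta and g as 1x1 convolutions.
        self.phi = nn.Conv2d(channels, embed_channels, kernel_size=1)
        self.theta = nn.Conv2d(channels, embed_channels, kernel_size=1)
        self.g = nn.Conv2d(channels, embed_channels, kernel_size=1)
        # Output transform W mapping the self-attention map back to `channels`.
        self.W = nn.Conv2d(embed_channels, channels, kernel_size=1)

    def forward(self, f_rc, map_coarse):
        # f_rc       : convolved feature map F_i^rc, shape (B, C, H, W)
        # map_coarse : coarse attention map Map_i^1, same shape as f_rc
        B, C, H, W = f_rc.shape
        phi = self.phi(f_rc).flatten(2)                  # (B, E, H*W)
        theta = self.theta(f_rc).flatten(2)              # (B, E, H*W)
        g = self.g(f_rc).flatten(2)                      # (B, E, H*W)
        sim = torch.matmul(phi.transpose(1, 2), theta)   # pairwise f(i, j), Eq. (3)
        attn = F.softmax(sim, dim=-1)                    # softmax(f), Eq. (4)
        s = torch.matmul(g, attn.transpose(1, 2))        # S_i, Eq. (4)
        s = s.reshape(B, -1, H, W)
        att = self.W(s) + f_rc                           # Att_i, Eq. (5)
        return att + map_coarse                          # F_i', Eq. (6)
```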
#### 3.1.3 Loss function
To obtain the predictions of the deep modules, we have used the soft-argmax operation to determine the coordinates \((x,y)\) of the facial landmarks from the attention heatmaps. The locations with the highest soft-argmax response are the most probable estimates of the landmark positions.
The deviation of the estimated landmark positions from the ground-truth locations is used to optimize the parameters of the model. The developed attention model aims to find a local minimum of the loss. Different loss functions lead to different optimized parameter values; optimization is carried out with RMSprop. We have used 3 different types of loss functions and compared the performance of the resulting models.
As the problem is posed as a regression problem, we have first optimized the mean square error (MSE) loss defined in Equation [8], where L is the number of landmark points. The MSE loss is computed as the normalized squared L2 norm of the error \(d\) defined in Equation [7], where \(gt\) is the ground-truth vector and \(pr\) is the vector of predicted landmark values.
\[d=gt-pr \tag{7}\]
\[MSE=\frac{1}{L}\sum_{i=1}^{L}d_{i}{}^{2} \tag{8}\]
Figure 4: Design of Attention module
The presence of outliers increases the error and impairs the performance of the model. To reduce the risk of fitting to outliers, we have also optimized the Huber loss, defined in Equation [9], which switches to a mean-absolute-error-like penalty for errors with significantly higher values.
\[L_{\delta}(d)=\begin{cases}\frac{d^{2}}{2},&\text{if }|d|\leq\delta\\ \delta\left(|d|-\frac{\delta}{2}\right),&\text{otherwise}\end{cases} \tag{9}\]
Recently, researchers have used [28][65] a loss function called the wing loss, which works well for facial landmark detection. This loss compensates for large as well as small errors by defining a piece-wise nonlinear and linear loss function, given in Equation [10], where the constant \(C\) smoothly connects the nonlinear and linear parts.
\[Wing\left(d\right)=\begin{cases}w\ln\left(1+\frac{|d|}{\epsilon}\right),&\text{if }|d|<w\\ |d|-C,&\text{otherwise}\end{cases} \tag{10}\]
\[C=w-w\ln\left(1+\frac{w}{\epsilon}\right) \tag{11}\]
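Minimal PyTorch implementations of the Huber loss (Equation [9]) and the wing loss (Equations [10] and [11]) could look as follows; the parameter values `delta`, `w` and `epsilon` are illustrative and not taken from this work.

```python
import torch

def huber_loss(pred, target, delta=1.0):
    """Huber loss of Equation (9); `delta` is an illustrative value."""
    d = torch.abs(target - pred)
    return torch.where(d <= delta, 0.5 * d ** 2, delta * (d - 0.5 * delta)).mean()

def wing_loss(pred, target, w=10.0, epsilon=2.0):
    """Wing loss of Equations (10)-(11); `w` and `epsilon` are illustrative values."""
    d = torch.abs(target - pred)
    C = w - w * torch.log(torch.tensor(1.0 + w / epsilon))
    return torch.where(d < w, w * torch.log(1.0 + d / epsilon), d - C).mean()
```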
The developed coarse-to-fine architectures have been optimized with each of these loss functions, and their performance is discussed in the results section.
## 4 Results and Discussion
The faces are first detected and cropped before facial landmark detection. High-resolution face images are down-sampled and smaller images are up-sampled to the size \(256\times 256\). The resized images are passed through the developed deep learning framework.
### Dataset
We have used 2 public-domain datasets in this paper, 300W and 300VW. The designed attention model architecture has been trained on images from the 300W dataset and tested on both datasets.
#### 4.1.1 300W
The 300W dataset [66][67][68] contains facial images captured in two environmental settings, namely indoor and outdoor conditions. This ensures that varied illumination conditions and large variations in expression, pose and occlusion are covered in the dataset. The dataset contains 600 images in total, with 300 in the indoor and 300 in the outdoor category. Faces in the images are annotated with 68 landmark points using a semi-automatic methodology. Images with multiple faces have been annotated separately with different annotation files.
#### 4.1.2 300VW
The 300VW dataset [69][70][14] contains facial videos of 113 subjects recorded in wild scenarios at frame rates of 25-30 fps, where each video is around 1 minute long. All frames have been annotated with 68 landmark points. The videos are recorded under three different scenarios: well-lit conditions, varying illumination conditions, and completely unconstrained conditions including occlusions, make-up, expression, head pose, etc. We have extracted frames from the videos and cropped the faces from each frame. The faces have been reshaped and the annotations updated accordingly, as discussed in the data pre-processing section.
### Data Preprocessing
The images are of different resolutions, and in most cases the face occupies only a small part of the image. Some of the images contain more than one face, so we cannot use the whole image for training the architecture. The faces are therefore cropped from the images and resized to the resolution \(256\times 256\). The landmark annotations are also scaled accordingly, as defined in Equation [12], where \((x,y)\) are the coordinates of any landmark point, \((x_{new},y_{new})\)
are the coordinates of its newly mapped location, and \([h,w]\) and \([h_{new},w_{new}]\) are the height and width of the original image and the resized image, respectively.
\[\begin{split} x_{new}&=x*(w_{new}/w)\\ y_{new}&=y*(h_{new}/h)\end{split} \tag{12}\]
We aim to localize the eyes of the user for the subsequent gaze tracking operation. We have therefore selected only the 12 landmark points that represent the two eyes. We initialize a heatmap for each landmark position by defining a Gaussian centered at the landmark with a standard deviation of 5; thus, for each input image we generate 12 heatmaps, each centered at one of the 12 landmark points.
### Data Augmentation
We have augmented the images with different operations such as horizontal flipping and rotation. We have also blurred the face images to make the training dataset more robust. The landmark points of the horizontally flipped images are redefined in Equation [13].
\[\begin{split} x_{new}&=w_{new}-x\\ y_{new}&=y\end{split} \tag{13}\]
The cropped and resized faces have also been rotated by small angles, and the rotated faces have been added to the augmented dataset. The rotation angles are \(-5^{\circ}\), \(+5^{\circ}\), \(-10^{\circ}\) and \(+10^{\circ}\), and one rotated image has been generated for each angle. The landmark points have been mapped to the new coordinate system as defined in Equation [14].
\[\begin{split} x_{new}&=x\cos\theta+y\sin\theta+x_{ offset}\\ y_{new}&=-x\sin\theta+y\cos\theta+y_{offset}\end{split} \tag{14}\]
We have used a Gaussian filter of size \(9\times 9\) with standard deviation \(\sigma_{x}=\sigma_{y}=1.8\) to generate blurred images. The filter is defined in Equation [15].
\[f(x,y)=\frac{1}{\sqrt{2\pi\sigma_{x}\sigma_{y}}}\,e^{-(x^{2}+y^{2})/2\sigma_{x}\sigma_{y}} \tag{15}\]
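The coordinate remappings of Equations [12]-[14] can be sketched in NumPy as follows; the assumed landmark array shape is \((L,2)\) with \((x,y)\) pixel coordinates, and the function interfaces are ours.

```python
import numpy as np

def scale_landmarks(pts, old_hw, new_hw):
    """Resize remapping of Equation (12): scale x by w_new/w and y by h_new/h."""
    (h, w), (h_new, w_new) = old_hw, new_hw
    return pts * np.array([w_new / w, h_new / h])

def flip_landmarks(pts, new_width):
    """Horizontal-flip remapping of Equation (13): x -> w_new - x, y unchanged."""
    out = pts.copy()
    out[:, 0] = new_width - pts[:, 0]
    return out

def rotate_landmarks(pts, angle_deg, offset=(0.0, 0.0)):
    """Rotation remapping of Equation (14) by a small angle with a translation offset."""
    t = np.deg2rad(angle_deg)
    c, s = np.cos(t), np.sin(t)
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([x * c + y * s + offset[0], -x * s + y * c + offset[1]], axis=1)
```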
A few of the images generated after rotation are partially cropped, which leads to the cropping of some landmark points. These images have been manually checked and removed from the dataset. The resulting augmented 300W dataset contains 4195 images in total; the distribution of images in each category is given in Table [1].
### Evaluation
We have developed a coarse-to-fine architecture incorporating attention and have evaluated its performance in terms of the Normalized Mean Error (NME) and the Area Under the Curve (AUC). The NME is defined in Equation [16], where _iod_ is the inter-ocular distance, defined as the L2 norm between the outer corners of the eyes.
\[NME=\frac{1}{L}\sum_{i=1}^{L}\frac{\left\|gt_{i}-pr_{i}\right\|_{2}}{iod} \tag{16}\]
\begin{table}
\begin{tabular}{l|c|c|c} \hline Category & Indoor & Outdoor & Total \\ \hline Original & 300 & 300 & 600 \\ Horizontally Flipped & 1197 & 1198 & 2395 \\ Rotated & 300 & 300 & 600 \\ Blurred & 300 & 300 & 600 \\ \hline \end{tabular}
\end{table}
Table 1: Images generated after data augmentation of 300W dataset
The AUC and failure rate (FR) are computed from the cumulative error distribution curve with the NME threshold set at 0.05. The performance of the different architectures is listed in Table [2]. The attention heatmaps generated by the different frameworks on sample images of 300W and 300VW are shown in Fig. 5. The NME obtained for the 300W and 300VW datasets evaluated with the different models is shown in Fig. 6.
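A sketch of this evaluation (NME of Equation [16], and the AUC and failure rate derived from the cumulative error distribution) is given below; the indices chosen for the outer eye corners within the 12-point subset and the CED sampling are our assumptions.

```python
import numpy as np

def nme(pred, gt, iod_idx=(0, 9)):
    """Normalised mean error of Equation (16) for one face.
    `iod_idx` are assumed indices of the outer eye corners within the 12-point subset."""
    iod = np.linalg.norm(gt[iod_idx[0]] - gt[iod_idx[1]])
    return np.mean(np.linalg.norm(gt - pred, axis=1)) / iod

def auc_and_failure_rate(nmes, threshold=0.05, steps=1000):
    """AUC of the cumulative error distribution up to `threshold`, plus failure rate."""
    nmes = np.asarray(nmes)
    xs = np.linspace(0.0, threshold, steps)
    ced = np.array([(nmes <= x).mean() for x in xs])
    auc = np.trapz(ced, xs) / threshold
    fr = float((nmes > threshold).mean())
    return auc, fr
```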
Three different losses, viz., MSE, Huber and wing loss, have been optimized to determine the optimal parameters of the developed deep learning framework. The performance of the models for the 3 loss functions is listed in Table [3]. As the table shows, for most of the frameworks, including our architecture, the NME is lowest with MSE loss optimization. Huber loss also generalizes well for the Hourglass model, the DenseUnet model and the Stacked HG model with CAB [52]. The NME obtained with wing loss optimization is comparatively high for almost all models. We have therefore selected the MSE loss for optimizing the model parameters.
The cumulative mean error is computed over all samples of the 300W dataset. The Cumulative Error Distribution (CED) curves shown in Fig. 6 indicate that the DenseUnet model produces the highest NME, while our LocalEyenet model displays the lowest NME and covers the maximum AUC.
## 5 Ablation study
In order to understand the role of the attention block in the eye localization architecture, we have analyzed the model performance for the different scenarios of the 300VW dataset. We have implemented the standard state-of-the-art Stacked Hourglass model and introduced the DLAU in place of the residual block in each hourglass. The performance of the models is compared by computing the NME, failure rate and AUC. The introduction of the attention block between
\begin{table}
\begin{tabular}{l|c|c|c} \hline
**Methodology** & \multicolumn{2}{c|}{**300W**} & \multicolumn{1}{c}{**300VW**} \\ \cline{2-4} & **NME** & AUC & **NME** \\ \hline Hourglass model & 0.0878 & 0.1672 & 0.4781 \\ Unet model & 0.0167 & 0.8825 & 0.3176 \\ DenseUnet model & 0.5901 & 0 & 0.5950 \\ Stacked Unet model & 0.0187 & 0.6371 & 0.3002 \\ Stacked Hourglass model & 0.0784 & 0.1948 & 0.4612 \\ Densely Connected Unet[51] & 0.1453 & 0.0611 & 0.4541 \\ Stacked HG with CAB[52] & 0.1331 & 0.0761 & 0.2781 \\ Stacked Hourglass with DLAU & 0.0094 & 0.8145 & 0.2934 \\
**LocalEyenet model** & **0.0047** & **0.9082** & **0.2635** \\ \hline \end{tabular}
\end{table}
Table 2: Performance of different deep learning framework on 300W and 300VW datasets
Figure 5: Attention Heatmaps generated by different deep learning frameworks on sample image from 300W and 300VW dataset. (a-i) corresponds to image from 300W dataset and (j-r) corresponds to image from 300VW dataset; (a),(j) Original image, (b),(k) Hourglass model, (c),(l) Unet model, (d),(m) DenseUnet model, (e),(n) Stacked Unet model, (f),(o) Stacked Hourglass model, (g),(p) Densely connected Unet model [51], (h),(q) Stacked Hourglass with CAB[52], (i),(f) LocalEyenet Model
the deep hourglass modules improves the localization of the heatmaps in the annotated area. A comparison of the performance in the ablation study is given in Table [4] for the 300W dataset.
The videos in the 300VW dataset cover three different scenarios. Videos from scenario 1 have almost no illumination variation but contain head pose variations. Videos from scenario 2 are recorded in unconstrained conditions with varying illumination, and videos from scenario 3 contain all variations, including illumination, occlusions, make-up and head pose. The performance of the attention model and its ablations for the 3 different scenarios is assessed in Table [5]. The attention heatmaps generated by the standard stacked hourglass model, the stacked hourglass model with DLAU and the LocalEyenet model for the 3 scenarios are shown in Fig. 7.
## 6 Testing on real-time video stream
We have tested the performance of our model on a real-time video stream. The real-time video streaming has been performed using a Logitech C920 Pro webcam, which captures frames at 35 fps with a resolution of 1920\(\times\)1080. The faces in each frame of the video stream are initialized using the dlib face detector. The detected faces are passed through the
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline
**Methodology** & \multicolumn{2}{c|}{**MSE loss**} & \multicolumn{2}{c|}{**Huber loss**} & \multicolumn{2}{c}{**Wing loss**} \\ \cline{2-6} & NME & AUC & NME & AUC & NME & AUC \\ \hline Stacked Hourglass model & **0.0784** & 0.1948 & 0.3349 & 0.0004 & 4.6254 & 0 \\ Stacked Hourglass with DLAU & **0.0094** & 0.8145 & 0.0095 & 0.8143 & 9.7909 & 0 \\
**LocalEyenet model** & **0.0047** & 0.9082 & 0.0915 & 0.1573 & 0.1516 & 0.0542 \\ \hline \end{tabular}
\end{table}
Table 4: Ablation study: Evaluation of NME on 300W dataset
\begin{table}
\begin{tabular}{l|c|c|c} \hline
**Methodology** & \multicolumn{2}{c}{**NME**} \\ \cline{2-4} & Scenario1 & Scenario2 & Scenario3 \\ \hline Stacked Hourglass & 0.0499 & 0.0594 & 0.3531 \\ Stacked HG with DLAU & 0.0446 & 0.0589 & 0.0443 \\
**LocalEyenet model** & **0.0283** & **0.0371** & **0.0402** \\ \hline \end{tabular}
\end{table}
Table 5: Evaluation of NME for different scenarios on 300VW dataset
Figure 6: a. NME Plot, b.Cumulative Error Distribution curves for landmark localization on 300W dataset
LocalEyenet model for inference. We have imposed different conditions, such as head pose variation, removal of spectacles and partial occlusion using the hands, to check the performance of our model. The inference has been performed at a rate of 32 fps, which is very close to the frame rate of the videos. The mean error for the frames is very low, and the eye landmarks detected on the faces in different frames are displayed in Fig. 9.
## 7 Conclusion
Gaze detection systems have great potential as human-machine interface devices. To build a gaze detection system, it is essential to develop a machine learning framework based on facial landmarks. In this
Figure 8: Eye heatmap on frame of real-time video. (a) Real-time image frame, (b) Heatmap.
Figure 7: Attention Heatmaps for ablation study on sample images from 3 scenarios of 300W dataset. (a) Image from Scenario 1, (e) Image from Scenario 2, (i) Image from Scenario 3, (b),(f),(j) Heatmap generated by Stacked Hourglass model on image from Scenario 1, Scenario 2, Scenario 3 respectively, (c),(g),(k) Heatmap generated by Stacked Hourglass with DLAU on image from Scenario 1, Scenario 2, Scenario 3 respectively, (d),(h),(l) Heatmap generated by LocalEyenet Model on image from Scenario 1, Scenario 2, Scenario 3 respectively.
work, we propose a convolutional framework named LocalEyenet. The framework is an attention-driven neural network architecture that focuses on localizing regions of interest in the image. The model detects the facial landmarks corresponding to the eye regions using heatmap-based regression. We show that our framework performs better than other state-of-the-art heatmap regression methods, even with variations in illumination and pose and with occlusion by spectacles and hands. Our model learns to localize the heatmaps at an early stage and thus generates inferences at a very high speed. To summarize, LocalEyenet is a robust facial landmark localizer that performs very well for real-time eye localization under varied environmental conditions.
## Acknowledgment
The authors would like to acknowledge Council of Scientific & Industrial Research (CSIR)-Central Electronic Engineering Research Institute (CEERI), Pilani, Rajasthan, India for providing the facilities for conducting the research work.
|
2303.15047 | Bubble nucleation and jetting inside a millimetric droplet | In this work, we present experiments and simulations on the nucleation and
successive dynamics of laser-induced bubbles inside liquid droplets in
free-fall motion, i.e. a case with a free boundary in all directions. The
droplets of a millimetric size have a nearly spherical shape by the moment the
bubble is nucleated. We have investigated the nucleation of secondary bubbles
induced by the rarefaction wave that is produced when the shock wave emitted by
the laser-induced plasma reflects at the drop surface. Interestingly,
three-dimensional clusters of cavitation bubbles are observed. Their shape is
compared with the negative pressure distribution computed with a CFD model and
allows us to estimate a cavitation threshold value. High-speed recordings of
the drop/bubble dynamics are complemented by the velocity and pressure fields
simulated for the same initial conditions. The effect of the proximity of a
curved free surface on the jetting dynamics of the bubbles was qualitatively
assessed by classifying the cavitation events using a non-dimensional stand-off
parameter which depends on the drop size, the bubble maximum radius and the
relative position of the bubble inside the drop. Additionally, we studied the
role of the drop's curvature by implementing a structural similarity algorithm
to compare cases with bubbles produced near a flat surface to the bubbles
inside the drop. This quantitative comparison method indicated the existence of
equivalent stand-off distances at which bubbles influenced by different
boundaries behave in a very similar way. The oscillation of the laser-induced
bubbles promote the onset of Rayleigh-Taylor and Rayleigh-Plateau
instabilities, observed on the drop's surface. This phenomenon was studied by
varying the ratio of the maximum radii of the bubble and the drop. The specific
mechanisms leading to the destabilisation of the droplet surface were
identified. | Juan Manuel Rosselló, Hendrik Reese, K. Ashoke Raman, Claus-Dieter Ohl | 2023-03-27T09:45:12Z | http://arxiv.org/abs/2303.15047v2 | # Bubble nucleation and jetting inside a millimetric droplet
###### Abstract
In this work, we present experiments and simulations on the nucleation and successive dynamics of laser-induced bubbles inside liquid droplets in free-fall motion, i.e. a case where the bubbles are subjected to the influence of a free boundary in all directions. The droplets of a millimetric size are released from a height of around \(20\,\mathrm{cm}\) and acquire a nearly spherical shape by the moment the bubble is nucleated. Within this droplet, we have investigated the nucleation of secondary bubbles induced by the rarefaction wave that is produced when the shock wave emitted by the laser-induced plasma reflects at the drop surface. Interestingly, three-dimensional clusters of cavitation bubbles are observed. Their shape is compared with the negative pressure distribution computed with a CFD model and allows us to estimate a cavitation threshold value. In particular, we observed that the focusing of the waves in the vicinity of the free surface can give rise to explosive cavitation events that end up in fast liquid ejections. High-speed recordings of the drop/bubble dynamics are complemented by the velocity and pressure fields simulated for the same initial conditions. The effect of the proximity of a curved free surface on the jetting dynamics of the bubbles was qualitatively assessed by classifying the cavitation events using a non-dimensional stand-off parameter \(\Upsilon\) which depends on the drop size, the bubble maximum radius and the relative position of the bubble inside the drop. Additionally, we studied the role of the drop's curvature by implementing a structural similarity algorithm to compare cases with bubbles produced near a flat surface to the bubbles inside the drop. Interestingly, this quantitative comparison method indicated the existence of equivalent stand-off distances at which bubbles influenced by different boundaries behave in a very similar way. The oscillation of the laser-induced bubbles promote the onset of Rayleigh-Taylor and Rayleigh-Plateau instabilities, observed on the drop's surface. This phenomenon was studied by varying the ratio of the maximum radii of the bubble and the drop. The specific mechanisms leading to the destabilisation of the droplet surface were identified through a careful inspection of the high speed images together with the numerical simulations.
## 1 Introduction
Phase explosion in confined liquid volumes has recently gained interest because of its connection with thriving research areas like x-ray liquid crystallography (Grunbein _et al._, 2021), x-ray holography (Vassholz _et al._, 2021; Hagemann _et al._, 2021), extreme UV light, and plasma generation (Favre _et al._, 2002). A better understanding of the interaction of high-power lasers with small liquid particles is also relevant in laser-based atmospheric monitoring techniques (Rohwetter _et al._, 2010; Mei & Brydegaard, 2015) or in optical atomisation techniques that can be applied to the production of airborne transported micro-drops used as drug carriers (Lee _et al._, 2022). At the heart of all of these research fields is the injection of high-power photons into a small liquid sample, the initiation of phase transition from liquid to vapour, the rapid pressure fluctuations, and the successive complex fluid mechanics driven by this impulsive energy input. In this study, we want to shed light on the fundamental flows that can be induced in liquid samples once this phase transition has been initiated. In particular, we focus on the fluid dynamics within a spherically confined liquid sample after the violent phase explosion of the vapour bubble induced by a high-power laser pulse. We explore the non-spherical dynamics of vapour bubbles within a liquid droplet, i.e. surrounded by free boundaries only. Bubble dynamics in droplets have so far mostly been studied from the perspective of destabilisation of the liquid-gas interface of the droplet (Singh & Knight, 1980; Alexander & Armstrong, 1987; Eickmans _et al._, 1987; Lindinger _et al._, 2004; Thoroddsen _et al._, 2009; Marston & Thoroddsen, 2015; Gonzalez-Avila & Ohl, 2016; Zeng _et al._, 2018). Here, we explore the bubble dynamics within the droplet (Obreschkow _et al._, 2006).
Pulsed lasers can be focused into optically transparent media to induce explosive bubble nucleation by dielectric breakdown. This process is accompanied by the emission of an acoustic shock wave with an amplitude on the order of gigapascals depending on the pulse energy, duration, and wavelength. For instance, the initial amplitude of the shock wave (i.e. at the edge of the plasma rim) in water can be in the range from 2.4 GPa to 11.8 GPa for a 6 ns laser pulse with an energy between 1 mJ and 10 mJ and a wavelength of 1064 nm focused with a numerical aperture (NA) of 22\({}^{\circ}\) (Vogel _et al._, 1996; Noack & Vogel, 1998). Recently, the initial shock wave amplitude produced by similar nanosecond laser pulses of 24 mJ (NA = 10\({}^{\circ}\)) was measured with a novel x-ray probing technique, obtaining peak values of around 20 GPa (Vassholz _et al._, 2021).
When a laser-induced cavity is produced in a confined space with free boundaries, like a droplet, most of the sound wave energy reflects back from the interface with an inverted phase, meaning that the original shock wave is transformed into a rarefaction wave. If the negative pressure amplitude of the reflected wave is below the cavitation threshold of the liquid, a trail of bubbles is nucleated after the wave passage. This effect is commonly observed upon wave reflection on the free boundary of a flat surface (Heijnen _et al._, 2009), nearby bubbles (Quinto-Su & Ando, 2013), a liquid column (Sembian _et al._, 2016) or, as we already mentioned, a drop (Obreschkow _et al._, 2006; Gonzalez-Avila & Ohl, 2016; Kondo & Ando, 2016; Kyriazis _et al._, 2018; Wu _et al._, 2018, 2021; Biasiori-Poulanges & Schmidmayer, 2023). Laser cavitation in some of these configurations was lately applied in studies involving x-ray holography or x-ray diffraction to investigate the propagation of shock waves in liquids (Stan _et al._, 2016; Ursescu _et al._, 2020; Hagemann _et al._, 2021). The use of very small amounts of liquid prevents the x-rays from being fully absorbed by the sample, thus improving the contrast of the x-ray images. This technique is suitable to study the properties of opaque liquids without optical aberrations, it is less sensitive to distortions produced by wavy surfaces, and also allows retrieving information about the liquid density changes produced by the passage of the pressure waves (Vassholz _et al._, 2021, 2023), which represents an advantage over traditional optical imaging.
Another interesting aspect of the nucleation of bubbles in the proximity of a boundary resides in their jetting dynamics. Laser-induced bubbles produced under different boundary conditions have been widely studied, both experimentally and numerically. Perhaps the case that has received the most attention is that of a bubble collapsing in the proximity of a boundary of large extent, e.g. a solid boundary (Plesset & Chapman 1971; Lauterborn & Bolle 1975; Blake _et al._ 1999; Brujan _et al._ 2002; Lindau & Lauterborn 2003; Yang _et al._ 2013; Lechner _et al._ 2017; Gonzalez-Avila _et al._ 2021), an elastic boundary (Brujan _et al._ 2001; Rossello & Ohl 2022), or a free surface (Koukouvinis _et al._ 2016; Li _et al._ 2019c; Bempedelis _et al._ 2021; Rossello _et al._ 2022). In real-world conditions, the boundary is of finite extent and the cavity may be spuriously affected by more than a single boundary (for instance, the walls of a container or the liquid free surface), exerting a considerable influence on the direction of the jetting (Kiyama _et al._ 2021; Andrews & Peters 2022).
The jet dynamics are frequently characterised by a stand-off parameter (Lindau & Lauterborn 2003; Supponen _et al._ 2016; Lauterborn _et al._ 2018) computed as the ratio of the distance between the bubble nucleation position and the boundary (\(d\)) and the maximum radius attained by the bubble after its creation (\(R_{max}\)). If the cavity collapse occurs next to boundaries other than a plane, for instance, irregular or curved surfaces (Tomita _et al._ 2002; Blake _et al._ 2015; Wu _et al._ 2018\(a\); Li _et al._ 2019\(b\); Againin _et al._ 2022) like pillars (Koch _et al._ 2021\(b\); Kadivar _et al._ 2021), fibres (Mur _et al._ 2023), corners (Zhang _et al._ 2020; Mahmud _et al._ 2020), crevices (Trummler _et al._ 2020; Andrews _et al._ 2020), perforated plates (Gonzalez-Avila _et al._ 2015; Reese _et al._ 2022), or spheres (Zhang _et al._ 2018; Li _et al._ 2019\(a\); Zevnik & Dular 2020; Ren _et al._ 2022), the anisotropy does not have one predominant direction and thus the use of a single stand-off parameter (e.g. \(d/R_{max}\)) is no longer sufficient to fully characterise the system. The same situation arises in cases where the bubbles are produced in a constricted space, for example in narrow channels (Gonzalez-Avila _et al._ 2011; Wang _et al._ 2018; Brujan _et al._ 2022), between two surfaces (Li _et al._ 2017; Liu _et al._ 2017), in a liquid column (Robert _et al._ 2007), or inside a drop (Obreschkow _et al._ 2006; Thoroddsen _et al._ 2009; Marston & Thoroddsen 2015; Gonzalez-Avila & Ohl 2016; Zeng _et al._ 2018).
The dynamics of jetting bubbles inside drops or curved free surfaces have not been extensively explored. Recently, we have reported experimental and numerical results on the formation of a jetting bubble in the proximity of a curved free boundary, given by the hemispherical top of a water column or a drop sitting on a solid plate (Rossello _et al._ 2022). As a natural extension of that work, we now present a study on the jet formation during the collapse of laser-induced bubbles inside a falling drop. This is a particularly interesting case as the bubble is surrounded entirely by a free boundary. From an experimental point of view, the intrinsic curvature of the liquid surface offers a very clear view into the bubble's interior.
The rapid acceleration induced by the bubble oscillations in the proximity of a free boundary also gives rise to surface instabilities, in particular Rayleigh-Taylor instabilities (RTI) (Taylor 1950; Keller & Kolodner 1954; Zhou 2017\(a\),b_). This situation is more pronounced when the oscillating bubble wall gets close to the free surface, as commonly occurs in reduced volumes like a drop (Zeng _et al._ 2018; Klein _et al._ 2020). The Rayleigh-Taylor instability produces corrugated patterns on the liquid surface that can grow and promote the onset of other instabilities like the Rayleigh-Plateau instability. Furthermore, the multiple pits and ripples produced by the RTI on the liquid surface can interact with the acoustic emissions of the oscillating bubble to generate a fluid focusing which results in a thin outgoing liquid jet (Tagawa _et al._ 2012; Peters _et al._ 2013).
This article is organised into different sections focusing on one of the above-discussed aspects, i.e. the shock wave dynamics and the nucleation of secondary cavitation bubbles, the jetting dynamics of the collapsing laser-induced bubbles, and the formation of instabilities on the drop surface as a consequence of the bubble oscillation.
## 2 Experimental method
The experimental method used to achieve controlled laser bubble inception inside a millimetric drop is depicted in figure 1(a). Individual drops were released from the tip of a blunt metallic needle with an internal diameter of 330 \(\mathrm{\SIUnitSymbolMicro m}\) (and an external diameter of 600 \(\mathrm{\SIUnitSymbolMicro m}\)) by the action of an electronic syringe pump _KD Scientific - Legato SPLG110_. This device pushed a fixed volume of \(\sim 12\,\mathrm{\SIUnitSymbolMicro l}\) of deionised water through the needle, producing single drops with a radius of \((1.42\pm 0.01)\,\mathrm{mm}\). After a drop was released, it travelled a distance of \(h=30\,\mathrm{cm}\) in free-fall motion. Just before it impacted a glass plate, a pulsed laser was focused into the droplet to nucleate the cavitation bubble.
The pulse energy of the laser (Nd:YAG _Q2-1064_ series, pulse duration 4 ns, wavelength 1064 nm) could be varied between 1.9 mJ and 20.3 mJ and was focused with a microscope objective (_Zeiss LD Achroplan_ 20\(\times\), NA = 0.4) see bottom of figure 1(a). In the experiments, a standard microscope slide was placed on top of the laser focusing objective in order to prevent wetting of its outer lens, which would provoke a significant distortion of the laser beam. Accordingly, the protective glass was meticulously cleaned after each drop impact.
The fall distance \(h\) was sufficient for the surface tension to stabilise the liquid into an approximately spherical shape, reaching a velocity of \((1.7\pm 0.1)\,\mathrm{m/s}\) upon laser arrival. At the same time, the variation of the lateral position of the drop centre relative to the laser focus was typically below 200 \(\mathrm{\SIUnitSymbolMicro m}\), which aids experimental repeatability. The vertical position where the bubble is created within the droplet is controlled with some precision by synchronising the laser pulse with the passage of the drop through a light gate. This consists of a red laser diode paired with a photo-diode that triggers a digital delay generator _Quantum 9520_ which then fires the laser after a specified time.
The dynamics of the cavitation bubble within the droplet and the resulting surface
Figure 1: Description of the experimental setup. (a) A water drop with a volume of \(\sim 12\,\mathrm{\SIUnitSymbolMicro l}\) is detached from a cylindrical blunt needle (stainless steel, 600 \(\mathrm{\SIUnitSymbolMicro m}\) of external diameter) by gravitational forces. When the drop reaches a velocity of \((1.7\pm 0.1)\,\mathrm{m/s}\) a cavitation bubble is produced inside it by a laser pulse with a duration of 4 ns and a typical energy of \((2.4\pm 0.1)\,\mathrm{mJ}\). (b) Once reflected from the drop surface, the shock waves emitted from the laser-induced bubble nucleate tiny bubbles inside the liquid drop. (c) The bubble undergoes an asymmetric collapse with jetting, whose shape depends on the position of the bubble inside the drop.
instabilities were captured in high-speed videos using a _Shimadzu XPV-X2_ camera equipped with a photography macro lens _Canon MP-E 65 mm f/2.8 1-_5\(\times\)_. A diffused back illumination from a continuous white LED lamp _SMETec_ (9000 lm) in combination with the curved nature of the drops allowed us to obtain clear images of the droplet interior. Furthermore, the curvature of the liquid refracted the light in a way that reveals the internal structures of the jetting bubbles, knowing that it distorts the apparent position and shape (Koch _et al._, 2021). For direct comparison of the experimental and the numerical results, an _in-house_ script was applied to the simulated results to compensate for such image distortions (Martins _et al._, 2018). This correction (based on Snell's law) was also used to obtain the "real" nucleation position of the laser bubble.
Due to the limited number of recorded frames, the framing rate of the high-speed videos had to be adjusted to capture the important features of the phenomena under study. For instance, visualising the shock wave propagation and the resulting nucleation of bubbles from the reflected rarefaction wave (see figure 1(b)) required a frame rate of 5 Mfps (i.e. the maximum achievable by the camera), while the temporal evolution of the jets (depicted in figure 1(c)) and the instabilities of the drop surface are captured already at 200 kfps or 500 kfps, respectively.
### Definition of a stand-off parameter for a curved boundary \(\Upsilon\)
In order to consider the curvature of the drop's surface in the characterisation of the jet dynamics, we defined a non-dimensional coefficient \(\Upsilon\) that combines two non-dimensional numbers, each one representing a relevant dimension of the problem. First, we use the stand-off distance \(D^{*}\)(Lauterborn _et al._, 2018) as the ratio of the bubble "seeding" position (\(d\)) and the maximum radius achieved by the bubble when produced at the centre of the spherical drop (\(R^{*}_{max}\)). The second non-dimensional distance \(\chi\) is given by the ratio of the drop radius (\(R_{d}\)) and the distance of the bubble from the drop centre (\(r\)). To summarise,
\[D^{*} =\frac{d}{R^{*}_{max}} \tag{1}\] \[\chi =\frac{R_{d}}{r}=\frac{R_{d}}{R_{d}-d}\] (2) \[\Upsilon =\chi\,D^{*}=\frac{R_{d}}{(R_{d}-d)}\,\frac{d}{R^{*}_{max}} \tag{3}\]
A schematic representation of the aforementioned parameters is presented in figure 2. Here, \(R^{*}_{max}\) is tightly related to the energy of the laser pulse (Lauterborn _et al._, 2018) and, as we explain later in section 4.3, it also varies slightly with the drop size as \(R_{d}\rightarrow\infty\). For the purpose of having reproducible results, the use of \(\Upsilon\) should be limited to values of
Figure 2: Schematic of the drop interior with relevant dimensional parameters.
for which the bubble is contained inside the drop volume (i.e. \(0\leq R_{max}^{*}<R_{d}\)) and the drop shape is not significantly distorted by surface instabilities (Zeng _et al._, 2018). Additionally, the symmetry of the drop/bubble configuration implies that \(d\leq R_{d}\).
In principle, the parameter \(\Upsilon\) behaves similarly as the traditional stand-off distance (e.g. \(d/R_{max}\)), however, the addition of \(\chi\) as a weighting factor represents a measure of the influence of the boundaries all around the bubble, and not only its closest point. This means that the regions of the free surface in directions other than \(\theta=0\) could also be relevant to the bubble dynamics as the separation from the bubble and the boundary in those angular directions gets smaller, i.e. when the radius \(R_{d}\) is decreased and the bubble is located at a reduced \(d\). Alternatively, \(\Upsilon\) could be understood as a measure of the anisotropy, with high anisotropy at the liquid boundary (\(d\to 0\)) and perfect isotropy at the bubble centre, \(d=R_{d}\).
The tight relation between the traditional stand-off distance and \(\Upsilon\) is also evidenced by the following considerations and limiting cases:
* \(\Upsilon\) rises monotonically with \(d\) for a fixed laser pulse energy (or \(R_{max}^{*}\)).
* In the limit \(R_{d}\rightarrow\infty\) the traditional stand-off distance is recovered.
* If the bubble is near the drop wall, \(d\to 0\), then \(\Upsilon\to 0\).
* If the bubble is near the drop centre, \(d\to R_{d}\), then \(r=0\), \(\Upsilon\rightarrow\infty\), and we recover the traditional unbounded case, in which the bubble collapses spherically due to symmetry.
It is important to note that \(\Upsilon\) can take the same value for different combinations of \(d\), \(R_{max}^{*}\), and \(R_{d}\). Therefore, two identical values of \(\Upsilon\) computed from two different values of \(D^{*}\) and \(\chi\) do not necessarily result in identical bubble dynamics. A comparison between cases could be made by fixing the value of one or two of the parameters. For example, the effect of the surface curvature \(R_{d}\) on the bubble dynamics can be evaluated by maintaining \(D^{*}\), or the influence of the "seeding" depth \(d\) can be studied by fixing the drop size \(R_{d}\) and the energy of the laser pulse. In this way, the parameter preserves the same functionality as the traditional stand-off parameter (Lauterborn _et al._, 2018), but now includes the surface curvature dimension.
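As an illustration, \(\Upsilon\) can be evaluated directly from equations (1)-(3) with a short Python function; the interface is ours, and the divergence at \(d=R_{d}\) noted above is deliberately left unhandled.

```python
def upsilon(d, r_max_star, r_drop):
    """Curved-boundary stand-off parameter of equations (1)-(3): Upsilon = chi * D*.

    d          : distance of the bubble seeding position from the drop surface
    r_max_star : maximum radius of a bubble seeded at the drop centre
    r_drop     : drop radius; valid for 0 <= d < r_drop (Upsilon diverges at d = r_drop)
    """
    d_star = d / r_max_star          # traditional stand-off distance D*
    chi = r_drop / (r_drop - d)      # curvature weighting factor chi
    return chi * d_star
```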
## 3 Numerical method
Volume-of-Fluid simulations were carried out in _OpenFOAM-v2006_(OpenFOAM-v2006, 2020) using a modification of the solver _compressibleMultiphaseInterFoam_. This modified version is called _MultiphaseCavBubbleFoam_ and was already implemented in previous works to study the formation of the "bullet jet" (Rossello _et al._, 2022) and micro-emulsification (Raman _et al._, 2022). In those works, similar simulations of a single expanding and collapsing bubble in the vicinity of a liquid-gas and a liquid-liquid interface were performed, respectively. Since the solver is explained in detail there, we will only give the information that is specific to the present case of a bubble created in a free-falling liquid drop.
Considering the approximate rotational symmetry of the experimental configuration, we carried out the simulations as quasi-two-dimensional. The computational domain represents a slice of a cylindrical domain with a height of \(3\,\mathrm{mm}\) and a radius of \(3\,\mathrm{mm}\), which is filled with a gas representing the surrounding air at ambient pressure. The domain is divided into a square mesh of cells with a width of \(40\,\mathrm{\SIUnitSymbolMicro m}\), which is then further refined to a cell width of \(10\,\mathrm{\SIUnitSymbolMicro m}\) in the region occupied by the liquid drop. The boundaries of the domain in the radial and axial directions are open, wave transmissive boundaries.
A slightly prolate ellipsoidal liquid drop representing a falling water drop is initiated in the centre of the cylinder with an axial radius of \(1440\,\mathrm{\SIUnitSymbolMicro m}\) and a radial radius of \(1400\,\mathrm{\SIUnitSymbolMicro m}\). We neglect the relative motion of the drop through the air, and thus take the drop and the air to be initially at rest. This is because the speed of the falling drop and the effects of drag are negligible when compared with the speeds developed by the bubble wall and the jets. We
also neglect any subsequent gravitational acceleration, since its effect is negligible on the time scales considered. Inside the drop, a bubble is seeded on the symmetry axis with an initial over-pressure of 1.69 GPa and an initial radius of 25.7 \(\mathrm{\SIUnitSymbolMicro m}\). The initial pressure was chosen such that the initial bubble gas density equals the density of the surrounding liquid, in accordance with equation (1). This is based on the assumption that the laser energy deposition occurs on a much smaller time scale than the expansion of the bubble. The initial bubble radius \(R_{0}\) is chosen to match the maximum expansion \(R_{max}^{*}\) in the experiment.
The bubble contents are modelled with the same properties as the gas surrounding the liquid droplet but are calculated as a separate component. This allows us to apply a mass correction to the gas in the bubble only that accounts for the mass loss due to condensation during the bubble's first oscillation cycle. This is done as a one-time correction at the time of maximum bubble expansion, at which the bubble gas density is reduced by 70 %. More details can be found in our previous work (Rossello _et al._, 2022). The surface tension between the liquid and the gases is 70 mN/m, and that between the gases is 0. The Tait equation of state is used for all components,
\[p=\left(p_{0}+B\right)\left(\frac{\rho}{\rho_{0}}\right)^{\gamma}-B\, \tag{1}\]
with the parameters given in table 1. Here, \(\gamma\) is the adiabatic exponent.
The output of the numerical data was done in intervals of 10 ns to capture shock wave propagation dynamics, and every 1\(\mathrm{\SIUnitSymbolMicro s}\) for the bubble and jetting dynamics.
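For reference, the Tait equation of state (1) with the water parameters of table 1 can be evaluated with a short Python function; this is only an illustrative sketch and not part of the solver.

```python
def tait_pressure(rho, rho0=998.2061, p0=101325.0, B=303.6e6, gamma=7.15):
    """Tait equation of state (1) with the water parameters of table 1 (SI units)."""
    return (p0 + B) * (rho / rho0) ** gamma - B
```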
## 4 Results and discussion
The inception of a laser-induced bubble inside a liquid drop gives rise to a rich and complex chain of events. We start with an overview of the fluid dynamics that are observed following the creation of the cavitation bubble by the dielectric rupture of the liquid, as shown in figure 3. Here, the bubble is nucleated off-centre and close to the upper interface of the droplet. The fluid dynamics can be divided into three stages, which are discussed in detail in the later sections. For now, we provide a brief description of these 3 stages: (1) The bubble is nucleated into a rapidly expanding vapour cavity that launches during its deceleration a shock wave into the droplet, not visible in figure 3. Upon reflection at the acoustic soft liquid-gas interface, the rarefaction wave propagates through the drop leaving behind a trail of cavitation bubbles in certain regions where the wave convergence produces sufficient tension to induce local acoustic cavitation, 2 \(\mathrm{\SIUnitSymbolMicro s}\leq t\leq\) 6 \(\mathrm{\SIUnitSymbolMicro s}\) in figure 3. Depending on the location of the laser bubble the rarefaction wave may focus in a reduced volume close to the interface, creating secondary cavitation and provoking the ejection of a single jet at the opposite side of the laser bubble nucleation site (e.g. \(t>\) 6 \(\mathrm{\SIUnitSymbolMicro s}\) in figure 3). (2) In the second stage, the laser-induced bubble undergoes an asymmetrical collapse from its maximum size. Here, the anisotropy of the boundary conditions results in the formation of a jet, which starts as an indentation on one side of the cavity and grows to pierce the bubble at the opposite extreme. In cases where the laser cavity is created near the drop surface, we also observe the destabilisation of
\begin{table}
\begin{tabular}{l r r r r r} & \(B\) in MPa & \(\rho_{0}\) in kg/m\({}^{3}\) & \(p_{0}\) in Pa & \(\gamma\) & \(\mu\) in mPa s \\ liquid & 303.6 & 998.2061 & 101325 & 7.15 & 1 \\ gases & 0 & 0.12 & 10320 & 1.33 & 0.013 \\ \end{tabular}
\end{table}
Table 1: Tait equation of state parameters and dynamic viscosities \(\mu\) of the simulated fluid components. Both gaseous components are treated as the same type of gas.
the liquid surface by a Rayleigh-Taylor instability. (3) In the third and last stage, the bubble re-expands after jetting, adopting a liquid-gas structure that depends mostly on the stand-off distance (i.e. \(\Upsilon\)). On its second collapse, the cavity fragments and later disperses due to the complex flow created by its first collapse.
In the following, the reported values of \(\Upsilon\) are computed for a surface curvature of \(1.42\,\mathrm{mm}\), which corresponds to the mean radius of the drops produced in this work.
### Acoustic cavitation nucleation
The specific shape of the cavitation bubble clusters produced by the passage of the rarefaction wave is highly dependent on \(\Upsilon\). This is because the negative pressure focuses differently when the original shock wave is emitted from a different location. As the acoustic nucleation only occurs below a certain pressure threshold, the resulting bubble clouds can assume complex
Figure 3: Stages of the events developing inside the drop. The numbers indicate the time in \(\mathrm{\SIUnitSymbolMicro s}\) after the laser shot. In the first stage, spanning from \(t=0\,\mathrm{\SIUnitSymbolMicro s}\) to \(t=52\,\mathrm{\SIUnitSymbolMicro s}\)(framed in red), a rarefaction wave (i.e. the reflection of the shock wave) produces a trail of cavitation bubbles. For low values of \(\Upsilon\) a liquid jet is ejected from the extreme of the drop opposite to the bubble inception. In the second stage, defined between \(t=60\,\mathrm{\SIUnitSymbolMicro s}\) and \(t=106\,\mathrm{\SIUnitSymbolMicro s}\)(framed in blue), the bubble collapses after reaching its maximum size and a jet forms. In some cases, a Rayleigh-Taylor instability (RTI) is observed near the bubble. The third stage (framed in green) runs from \(t=108\,\mathrm{\SIUnitSymbolMicro s}\) until the end of the video at \(t=214\,\mathrm{\SIUnitSymbolMicro s}\). Here, the bubble re-expands after jetting and adopts a characteristic shape during its second collapse that depends mostly on \(\Upsilon\). The width of each frame is \(2.70\,\mathrm{mm}\).
three-dimensional structures. Figure 4 presents experimental results showing the temporal evolution of bubble clouds generated for different values of \(\Upsilon\). In this study, the bubble "seeding" position was varied by changing the delay between the drop release and the laser shot, thus shifting the laser focus position along the vertical symmetry axis of the drop.
As aforementioned, the shock waves emitted from the laser focal spot will reflect from the free boundary of the drop as a rarefaction wave. Due to the nearly spherical shape of the drop, the reflected acoustic waves will focus in a region located at a similar distance from its centre \(r\) (where the laser bubble was created) but on the opposite side of the drop. In the case where the shock wave originates near the surface (i.e. \(\Upsilon\lesssim 1\)), the resulting pressure distribution is characterized by a negative pressure zone moving close to the liquid surface which produces a spherical shell of tiny cavitation bubbles, as displayed in the panels (a) to (c) (and also (i) to (j)) of figure 4. This phenomenon occurs when the sound reflects multiple times on the drop walls and travels circumferentially near the liquid surface without a significant loss of intensity, which is usually referred to as "whispering gallery effect" (Raman & Sutherland 1922). As the rarefaction waves focus at a similar depth where the shock wave was emitted, it produces explosive cavitation events close to the free boundary and on the drop's vertical axis. The rapid expansion of those larger cavitation bubbles gives rise to the liquid jets shown in the first row of figure 3. A more detailed explanation of the formation and dynamics of this particular type of jet will be published elsewhere.
As the laser focusing depth \(d\) is increased, the negative pressure is distributed in larger regions, but still, the nucleation of bubbles predominantly occurs on the side opposite to the laser focus. Additionally, the bubble clusters turn from having the structure of a shell (see panels (g), (h), and (i) of figure 4) into a volumetric cavitation cloud when the laser bubble is generated near the drop centre, as shown in the panels (e) and (f) of figure 4. This transition can be explained by analysing the pressure distribution dynamics with the numerical simulations (Ando _et al._ 2012; Quinto-Su & Ando 2013; Gonzalez-Avila & Ohl 2016). Figure 5 demonstrates the clear correlation between the evolution of the acoustic pressure profile and the nucleation of secondary cavitation bubbles. Furthermore, this correlation can be used to determine the cavitation pressure threshold of the liquid by comparing the shape and the location of the negative pressure front with the shape of the bubble cloud within the drop. Such a comparison was only possible after applying a numerical algorithm to the simulated results to compensate for the image distortions induced by the drop curvature. The last frames in panels (a) and (b) of figure 5 display an overlap of both the experimental video frames and the simulated pressure profiles. From the measurements, we found a consistent cavitation threshold of approximately 4.5 MPa. Considering that we did not filter the water sample we assume that the cavitation is most likely heterogeneous.
The acoustic cavitation thresholds reported for water in the literature vary strongly, depending on the measurement method, water purity, gas saturation, and water temperature. Atchley _et al._ (1988) used distilled, deionised, and filtered (0.2 um) tap water irradiated by pulsed ultrasound and found thresholds between 0.5 and 2.0 MPa, depending on the pulse duration and frequency. Sembian _et al._ (2016) subjected a water column to a single shock wave and found a cavitation threshold between 0.42 and 2.33 MPa. Biasiori-Poulanges & Schmidmayer (2023) compared numerical simulations and experiments of a liquid drop subjected to a planar shock wave and found a threshold between 0.37 and 2.4 MPa. A similar shock front can be found when a droplet impacts on a solid surface at a high speed (e.g. higher than 100 m/s) as studied by Kondo & Ando (2016); Wu _et al._ (2018, 2021). Assuming homogeneous nucleation, Ando _et al._ (2012) and later Quinto-Su & Ando (2013) found a cavitation threshold of 60 MPa and 20 MPa, respectively, comparing experiments and simulations of a reflected shock wave at a free boundary. Therefore, the threshold value obtained in this work falls around the middle of the spectrum of values measured by other
Figure 4: Acoustic cavitation inside a water droplet. The distribution of bubbles in the liquid changes significantly with the position of the laser-induced bubble. The frame width is 3.15 mm. The time between consecutive frames is 600 ns. (a) \(\Upsilon=0.65\). (b) \(\Upsilon=1.1\). (c) \(\Upsilon=1.7\). (d) \(\Upsilon=2.5\). (e) \(\Upsilon=7.5\). (f) \(\Upsilon=68\). (g) \(\Upsilon=13\). (h) \(\Upsilon=5.4\). (i) \(\Upsilon=1.8\). (j) \(\Upsilon=0.9\). Full videos of panels (b), (d) and (f) are available in the online supplementary movies 1-3.
authors. Figure 5(c) evidences a growth in the secondary bubble cluster with increasing energy of the laser pulse, demonstrating the resulting shift in the location of the cavitation threshold isobar for higher amplitudes of the initial shock wave. It is relevant to point out that VoF simulations are notorious for numerical diffusion, which causes the shock wave to smear out over time. Because of this, the simulations may underestimate the pressures reached in the experiments. Please note that the VoF model does not account for phase transitions and the subsequent interaction of nucleated cavitation bubbles with the finite amplitude waves. A model for high-frequency waves interacting with small cavitation clouds that may be applicable was recently developed by Maeda & Colonius (2019). Finally, panels (d) and (e) of figure 5 exemplify some of the hollow three-dimensional bubble structures observed in the experiments.
### Bubble jetting
In the second stage presented in figure 3 the laser-induced bubble reaches its maximum radius and then collapses. At this point, it becomes clear that a non-uniform distance between the
Figure 5: Acoustic cavitation bubble clouds for laser-induced bubbles at different relative positions in the drop. The frames compare the advance of the shock/tension waves within the drop with the observed nucleation sites. The average drop diameter is \((2.84\pm 0.05)\,\mathrm{mm}\) in all cases. The last frame of each series presents an overlay of the frames and the cumulative minimum pressure after the first reflection of the shock wave at the free boundary. The red line indicates the isobar of -4.5 MPa, i.e. the approximate nucleation threshold pressure. (a) Here, the bubble is slightly off-centre (i.e. \(d\simeq R_{d}\)). (b) \(\Upsilon=5.4\). (c) Change in the cluster dimensions with increasing laser pulse energy (indicated in mJ). (d) and (e) present evidence of the formation of complex hollow three-dimensional bubble structures. Here, \(\Upsilon\) is 3.5 and 1.45, respectively.
bubble and the free surface produces an asymmetric collapse, which culminates in a liquid jet. In this section, we explore the effect of varying the parameter \(\Upsilon\) (as performed in section 4.1), but this time we lay focus on the development of the jets, as shown in figure 6.
The experiments reveal that, as the position of the laser focus is varied between the centre and the surface of the drop, the characteristics of the jetting change smoothly: For large values of \(\Upsilon\), a spherical rebound of the bubble without any jetting is observed. The values of \(\Upsilon\gtrsim 3.5\) are accompanied by the formation of a very thin liquid jet crossing through the centre of a weakly deformed bubble. In this "weak jet" case, the tip of the jet separates from the main cavity when it starts to collapse during its second oscillation cycle (see figure 6(b)). For \(1.2\lesssim\Upsilon\lesssim 3.5\), as in panel (c) of figure 6, the "whispering gallery" effect becomes relevant, causing the inception of larger acoustic bubbles on the side opposite to the laser cavity and the ejection of liquid driven by their expansion. The deformation of the bubble in its rebound phase is significantly stronger than in panel (b) of figure 6. As the laser is focused closer to the drop's surface, i.e. \(0.3\lesssim\Upsilon\lesssim 1.2\), the expansion of the bubble provokes the onset of a Rayleigh-Taylor instability. This can be seen in figure 6(d) by the formation of several
Figure 6: Bubble jetting is produced by a laser-induced bubble generated at different relative positions inside the drop. The numbers indicate the time in \(\mu\)s. The length of the scale bars is \(1\,\mathrm{mm}\). (a) Spherical oscillation case, \(\Upsilon=203\). (b) Weak jet case, \(\Upsilon=3.9\). (c) Standard jet case, \(\Upsilon=1.5\). (d) \(\Upsilon=0.44\). (e) Bullet jet case, \(\Upsilon=0.22\). Full videos are available in the online supplementary movies 4-8.
"spikes" growing from the thin liquid film trapped between the cavity and the surrounding air. At the same time, the bubble collapse (from \(t=66\,\mathrm{\SIUnitSymbolMicro s}\)) results in an elongated cavity, similarly as in the "bullet jet" case (Rossello _et al._2022). This behaviour is more pronounced for even smaller stand-off distances, as presented in figure 6(e). The dynamics of this particular jet are described in detail in Ref. (Rossello _et al._2022) and correspond to the case where the laser cavity is generated almost directly on the surface of the drop (i.e. \(0.01\lesssim\Upsilon\lesssim 0.3\)). Here, atmospheric gas is trapped after the closure of a conical ventilated splash and later dragged into the liquid by the liquid jet that grows from a stagnation point located on the top of a "water bell" (at the bottom of the frame at \(t=36\,\mathrm{\SIUnitSymbolMicro s}\)). As a result, an elongated gas cavity is shaped and driven across the drop.
The combination of the curved shape of the drop and the diffuse illumination leads to images of the interior of the gas cavity with remarkable clarity. A few examples of this are presented in figure 7.
Panels (a) and (b) of figure 7 reveal the temporal evolution of the liquid indentation into the bubble, as well as the toroidal shape acquired by the gas upon its collapse. Moreover, figure 7(b) demonstrates the accuracy with which the numerical simulations reproduce the jetting process. In panel (c) we see how a perforation of the thin liquid sheet between the cavity and the atmosphere resulted in a spray of aerosol droplets ejected into the cavity during jetting. This event can be explained by the lower pressure inside the bubble compared to the atmospheric pressure and the disruption of the liquid on the upper side of the drop caused by the RTI. The spray front spreads into the cavity and collides with the lower wall of the bubble, disrupting the smoothness of the interface.
The bullet jet case of figure 6(e) distinguishes itself from the other cases by its unique features, i.e. its enhanced shape stability during its formation from an open splash, but also by its near-complete robustness against the surrounding fluid and geometry. Bullet jets have been observed in shallow waters (Rossello & Ohl 2022) and near flexible or rigid materials, without these conditions affecting their dynamics. Furthermore, in a previous work (Rossello _et al._ 2022), we demonstrated that the bullet jet is scalable and independent of the orientation of the surface with respect to gravity. In figure 8, we expand this list of remarkable properties by showing that the bullet jet exists at various sizes even within a highly curved and finite volume. Here, the bullet jet size was characterised by the ratio between the radius of the initial water bell at its base (\(R_{wb}\)) and the drop radius \(R_{d}\).
Figure 7: Detailed view of the interior of a jetting bubble. The time between frames is \(2\,\mathrm{\SIUnitSymbolMicro s}\). (a) Jet formation for \(\Upsilon=1.6\). The frame width is \(1.46\,\mathrm{mm}\). (b) Comparison between experimental data and a simulation performed for \(\Upsilon=2.9\). The frame width is \(1.38\,\mathrm{mm}\). (c) Spray produced by air entering the gas cavity (in which the pressure is lower than the atmospheric pressure) while the jet is formed. The frame width is \(2.11\,\mathrm{mm}\).
The images show that the penetration depth of both the gas and the liquid forming the bullet jet is proportional to the initial splash size. For instance, in figure 8(a) the jet loses its momentum and stops around the middle of the drop, but it crosses the drop for the larger splashes shown in panels (c) to (e). Remarkably, in the latter case the bullet jet occupies almost the entire drop while still preserving its characteristic features.
The physics behind the evolution of the bubble jetting cases classified in figure 6 can be further explained with the aid of numerical simulations, as presented in figure 9.
Figure 9(a) depicts a purely radial oscillation of both the gas and liquid, found when the bubble is placed in the centre of the drop (i.e. \(\Upsilon\rightarrow\infty\)). The simulations shown in panels (b) and (c) of figure 9 were computed using the same \(\Upsilon\) measured from the experimental cases displayed in the corresponding panels of figure 6. In general, the agreement between the simulations and the experiments is excellent, even though small variations in the size of the experimental and simulated bubbles lead to some differences in the specific timing of their oscillation cycle. The resemblance can be seen in some of the morphological features that characterise the dynamics of each type of jet at different stages, like the width of the indentation formed during bubble piercing, the shape of the cavity after the first rebound, and the way in which the second collapse evolves in each case. More details on noteworthy features are provided below in figure 10.
In panels (a) to (c) of figure 9 the bubble is initiated with a much larger pressure than the atmospheric gas outside the drop. This pressure difference, which is constant in all directions, accelerates the liquid between the two gas domains. Since this force is proportional to the pressure gradient, the liquid gets accelerated more strongly between the bubble and the nearest part of the drop surface (where the liquid is thinner), causing the drop to bulge
Figure 8: Scalability of the bullet jet in a millimetric droplet. The measurements, organised in columns, show bullet jets formed from different splash sizes. In each column, the upper frame shows the time at which the water bell closes. In the lower frame, composed of two vertical stripes, the time at which the bullet jet is fully developed is shown on the left, and a frame illustrating the position of the jet tip at an advanced time indicated in μs is shown on the right. (a) \(R_{wb}/R_{d}=0.16\). (b) \(R_{wb}/R_{d}=0.27\). (c) \(R_{wb}/R_{d}=0.37\). (d) \(R_{wb}/R_{d}=0.56\). (e) \(R_{wb}/R_{d}=0.74\).
out in that location. Within the first few microseconds of the explosive bubble expansion, the pressure within the bubble decreases rapidly and reaches values much smaller than the atmospheric pressure. Thus, the pressure gradient changes its direction and now accelerates the liquid towards the bubble, which first slows down the cavity's expansion and afterward causes its collapse. In the same way as in the expansion phase, the thinnest part of the liquid experiences the strongest acceleration, which ultimately leads to a liquid jet indenting the bubble from the nearest part of the drop surface.
The case presented in figure 9(d) differs greatly from the previous cases by the fact that now the bubble is close enough to the drop surface to generate an open cavity, allowing the ejection of the initially pressurised gas inside it into the atmosphere, and later the flow of gas into the expanded cavity before the splash closes again. Once the cavity is closed, it remains at approximately atmospheric pressure, which prevents it from undergoing a strong collapse as occurs in the previously discussed cases (a) to (c). The radial sealing of the splash forms an axial jet directed toward the centre of the drop, which pierces the bubble and drags its content through the drop. More details on the mechanisms behind the bullet jet formation can be found in Ref. (Rossello _et al._, 2022).
As a consequence of the conservation of momentum, the collapse of the gas cavity gives rise to a stagnation point, from which the liquid flows both inside the pierced bubble and away from it in opposite directions. In particular, the stagnation point is not stationary but moves along the axis of symmetry, following a different trajectory in each case. In the case of figure 9(b) the stagnation point shifts towards the surface as the bubble moves deeper into the drop. For the case in figure 9(c) the stagnation point does not reach the surface and its movement is less pronounced. In the bullet jet case, shown in figure 9(d), the stagnation point forms on the apex of the water bell (i.e. the splash after its closure). It then trails the bell's collapse and remains very close to the drop surface afterward, moving slightly towards the drop centre while the bullet jet moves across the drop.
Figure 9: Numerical simulations of the temporal evolution of jets produced inside the drop for different \(\Upsilon\). The simulated drop has a height of \(2.88\,\mathrm{mm}\) and a width of \(2.8\,\mathrm{mm}\) as measured in the experiments. The plot shows the gas and liquid phases along with the velocity field. The time between frames is \(26\,\mathrm{\SIUnitSymbolMicro s}\) for (a)-(c) and \(30\,\mathrm{\SIUnitSymbolMicro s}\) for (d) starting at \(t=1\,\mathrm{\SIUnitSymbolMicro s}\) in the first frame. (a) Spherical oscillation case, \(\Upsilon\rightarrow\infty\). (b) Weak jet case, \(\Upsilon=3.896\). (c) Standard jet case, \(\Upsilon=1.518\). (d) Bullet jet case, \(\Upsilon=0.028\).
#### 4.2.1 Cavity dynamics on its second collapse
After the jetting, the subsequent re-expansions and collapses of the cavities are characterised by the bubble's and the drop's distorted shapes and even more complicated flow fields. A good example of this can be found in the second collapse of the bubbles analysed in figure 10, which shows a significant dependence on \(\Upsilon\).
Figure 10 compares the shape taken by the bubble for two cases with \(\Upsilon=3.9\) (panels (a) and (b)) and \(\Upsilon=1.9\) (panels (c) and (d)). Interestingly, the flattened side of the "teardrop" shape acquired by the cavity after the re-expansion develops a curved indentation during its second collapse. The numerical simulations make clear that such an indentation is created by the flow produced by an uneven pressure gradient on the cavity surface. The shape of this ring-shaped indentation visibly changes with \(\Upsilon\). For example, the case presented in figure 10(c) displays an annular bubble necking with the detachment of two gaseous rings as the cavity shrinks. These concentric rings have two different diameters and are arranged in two distinct planes, as highlighted in figure 10(d).
#### 4.2.2 Influence of \(R_{d}\) on the jet dynamics: Behavioural similarity vs. structural similarity
The bubble dynamics observed in the falling drop case have many similarities with what is typically seen in bubbles collapsing near a planar rigid surface (Lauterborn _et al._, 2018) or a planar free surface (Supponen _et al._, 2016; Rossello _et al._, 2022). Moreover, the analysis of the values of the stand-off parameter \(D^{*}\) reveals that each type of jet (qualitatively classified
Figure 10: Detailed collapse dynamics of the gas cavity immediately after the jetting of the laser bubble. Experimental (a) and simulated (b) view of the “weak” jet obtained when \(\Upsilon=3.9\). The images were taken at 200 kfps. (c) Ring formation after the necking of the cavity typically observed on cases with \(\Upsilon\approx 1.9\). The images were taken at 500 kfps. (d) Direct comparison between experiment and simulation, revealing the precise flow pattern leading to the ring detachment (indicated by the white arrows). The time between frames is 2.5 \(\upmu\)s. The colour scale in the simulations corresponds to the one in figure 9.
according to figure 7 in Ref. (Rossello _et al._, 2022)) occurs in a comparable range of values of \(D^{*}\). One example of the latter can be found in figure 11.
The parallel found between cases with dissimilar curvature of the liquid surface suggests that, contrary to the reported observations for bubbles collapsing near concave solid surfaces (Again _et al._, 2022), \(R_{d}\) does not have a dominant role in the particular jetting regime adopted by the cavities when the bubbles are located near the free boundary. This statement was confirmed by the numerical simulations depicted in figure 12. There, the dynamics of identical bubbles expanding and collapsing near the surface of the drop, or the flat free surface of an ideally infinite pool, are compared for three stand-off distances \(D^{*}\).
The simulations show that the correspondence between the flat and the curved surface cases is gradually lost when the bubble is placed further away from the drop surface. The deviation between the two cases is already visible in figure 11(c) and (d). There, the jet dynamics are matched only when \(D^{*}\) takes a higher value for the flat free surface measurement. The simulations indicate that this discrepancy starts at around \(D^{*}=1.2\) (shown in figure 12(c)) and keeps growing for higher values. We can portray these changes as being enclosed between two extreme scenarios: (1) the bubble is produced right on the liquid surface, generating a bullet jet, which is not affected by the characteristics of the boundaries and thus is independent of \(R_{d}\). As the cavity is placed closer to the drop centre the surface curvature becomes increasingly relevant to the jet dynamics. This is consistent with our definition of \(\Upsilon\), since \(D^{*}\) and \(\Upsilon\) take similar values for lower values of \(d\), and grow apart as the cavity is placed deeper in the drop. (2) When the bubble is almost at the drop centre (i.e. \(\Upsilon\rightarrow\infty\)) there is no jetting for the curved case. However, in the flat surface case the jetting still occurs for comparable values of \(D^{*}\) (e.g., \(D^{*}\sim 2\)), demonstrating how the curvature weighting factor \(\chi\) becomes increasingly relevant.
It is important to stress that if the bubble is placed near the drop boundary, for instance at \(D^{*}\lesssim 1.4\), the discrepancies found in the jetting dynamics of a bubble in the "semi
Figure 11: Comparison of cases with similar bubble dynamics and a different curvature of the free surface \(R_{d}\). The panels (a) and (c) show cases where the cavity is produced near a flat free surface. The cases in (b) and (d) show similar bubbles generated inside a drop with a mean radius of \(1.42\,\mathrm{mm}\). Here, the numbers represent the time normalised with the time of collapse of the cavities from each case. (a) Here, \(D^{*}=0.85\). (b) \(D^{*}=0.88\). (c) \(D^{*}=1.6\). (d) \(D^{*}=1.37\).
infinite" liquid pool when compared with the droplet case are mainly provoked by the surface curvature, and not by the dissimilar extension of the liquid below the gas cavity. This particular point is corroborated in the Appendix A by means of complementary measurements and numerical simulations of jetting bubbles in the proximity of a hemispherical tip of a cylindrical water column.
So far, we have classified and compared the characteristics of the jetting regimes produced at different \(D^{\star}\) qualitatively, i.e. based on their general morphological features as presented in previous works (Supponen _et al._, 2016; Rossello _et al._, 2022). In the following, we will refer to this as _behavioural similarity_. An alternative and more precise way of analysing the spatial correlation between the dynamics of two different jets can be achieved by contrasting the pixel distribution on the video frames to find common features between images. This quantitative comparison method is usually referred to as _structural similarity_ analysis and can be implemented using different image scanning algorithms (Sampat _et al._, 2009). Here, we use the _complex wavelet structural similarity index_ (CW-SSIM) (Zhou & Simoncelli, 2005) to evaluate the correlation between the temporal evolution of two different jetting cavities. The CW-SSIM approach has some advantages over direct pixel-to-pixel comparison methods (e.g. intensity-based) or the simpler versions of the structural similarity index (e.g. SSIM). For instance, it accounts (to some extent) for both intensity variations and non-structural
Figure 12: Jetting dynamics of identical bubbles produced near a flat surface or the curved surface of a droplet (\(R_{d}=1.42\,\mathrm{mm}\)). The non-dimensional time, indicated by the numbers, was normalised with the collapse time of each bubble. (a) Here, \(D^{\star}=0.61\). (b) \(D^{\star}=1.02\). (c) \(D^{\star}=1.23\).
geometric distortions like object translation, scaling and rotation (Sampat _et al._, 2009). The CW-SSIM index can take values ranging from zero (if there is no correlation at all) to one (when the images are identical).
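For readers who wish to reproduce this type of analysis, the sketch below implements a strongly simplified, single-sub-band version of the CW-SSIM index: one complex Gabor filter stands in for the multi-scale, multi-orientation complex wavelet decomposition discussed above, and all parameter values are illustrative assumptions rather than the settings used for the figures.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import uniform_filter

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """One complex Gabor filter, standing in for a single complex-wavelet sub-band."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.exp(2j * np.pi * xr / wavelength)

def cw_ssim(frame_a, frame_b, window=7, K=0.01):
    """Simplified CW-SSIM index between two grayscale frames (float arrays)."""
    kern = gabor_kernel()
    ca = fftconvolve(frame_a, kern, mode="same")  # complex coefficients of frame A
    cb = fftconvolve(frame_b, kern, mode="same")  # complex coefficients of frame B
    cross = ca * np.conj(cb)
    # local (windowed) averages of the cross term and of the coefficient magnitudes
    cross_win = uniform_filter(cross.real, window) + 1j * uniform_filter(cross.imag, window)
    num = 2.0 * np.abs(cross_win) + K
    den = uniform_filter(np.abs(ca) ** 2, window) + uniform_filter(np.abs(cb) ** 2, window) + K
    return float(np.mean(num / den))  # 0 = no correlation, 1 = identical frames
```

In the actual analysis the frames of two videos are first aligned in non-dimensional time; applying `cw_ssim` to each pair of aligned frames and averaging gives curves analogous to those shown in figure 13.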
In figure 13, we contrast the dynamics of two bubbles initially located at a distance \(D^{*}\) from a flat or a curved surface as already done in figure 12, but this time using the CW-SSIM index to evaluate their similarity. The non-dimensional time (\(t^{*}\)) was computed using the collapse time of the bubbles for each case. Figure 13 presents plots of the temporal evolution of the similarity index next to a series of selected frames at \(t^{*}=0.3;0.6;0.9;1.05;1.2;1.35;1.5;1.7\); and \(1.9\), which illustrate and compare the shape of the cavities in the flat boundary case (grey background on the left side) and the drop case.
The results expose the differences between the behavioural similarity and the structural similarity approaches, i.e. two bubbles can have the same jetting regime but still have dissimilar structures. This is observed in bubbles at lower stand-off distances like the case with \(D^{*}=0.47\) in figure 13. The discrepancy can be explained by the higher degree of fragmentation of the bubble after jetting, acquiring an elongated shape in regimes with a ventilated cavity, or where the liquid layer between the gas in the bubble and the atmosphere is affected by the RTI. In particular, the cases producing an open cavity (i.e. \(D^{*}\lesssim 0.35\)) were not suitable for the structural similarity analysis. Here, the fluctuations of the splashing
Figure 13: Structural similarity of the dynamics of jetting bubbles produced near a flat surface or the drop boundary at an identical stand-off distance. Here, the temporal evolution of the CW-SSIM index is presented for three examples corresponding to cases with \(D^{*}=0.47\), \(D^{*}=0.88\), and \(D^{*}=1.85\). The insets show a comparison of the images of both simulated bubbles. The frames are centered at specific non-dimensional times (blue vertical lines), displaying a half frame corresponding to the flat surface case (grey background on the left) and a half frame taken from the drop case with the same \(D^{*}\).
dynamics observed in the numerical simulations and the impossibility of defining a collapse time due to the non-collapsing nature of those cavities (see figure 6(d) and (e)) prevented us from performing a reliable assessment of the CW-SSIM index.
As the laser bubble is produced deeper into the liquid, both structural and behavioural approaches lead to the same conclusions (previously discussed in figure 12). The similarity found in the development of bubbles near surfaces with or without curvature is excellent for some stand-off distances, e.g. around the middle point located between the liquid surface and the drop centre. One example of this is shown in the central panel of figure 13 corresponding to \(D^{*}=0.88\). As we already mentioned, near the centre of the drop (i.e., \(D^{*}\simeq 2\)) the difference in the anisotropy in both cases produces dissimilar bubble oscillations (see the lower panel of figure 13).
A more general overview of those three scenarios is presented in figure 14, where the mean value of the CW-SSIM index is plotted along with \(D^{*}\) and \(\Upsilon\). Considering that all the bubbles have a very similar initial expansion phase, only the times corresponding to the first collapse and the complete second oscillation cycle were included in the mean value of CW-SSIM. After the second collapse, the bubble is heavily fragmented and there is no longer a recognisable structure. Figure 14 confirms that the structural similarity is rather poor for bubbles near the surface (i.e. \(0\lesssim D^{*}\lesssim 0.7\)). Around \(D^{*}=0.9\), the similarity index reaches a peak where the match is excellent. A good agreement is sustained over a range of \(D^{*}\) values between approximately 0.7 and 1.7, meaning that even when the features of the cavities are not identical they have a similar distribution of the gas phase (and the same jetting regime). For \(D^{*}\gtrsim 1.7\), the similarity index suffers an abrupt fall and the value of \(\Upsilon\) diverges as the bubble seeding position gets closer to the drop centre. This is consistent with the definition of \(\Upsilon\), which relates its magnitude to the influence of the drop curvature on the bubble dynamics.
The experimental results displayed in figure 11 suggest that there is a correspondence between the dynamics of a bubble seeded with a given \(D^{*}_{\rm flat}\) in the flat surface case and the temporal evolution of a bubble with \(D^{*}_{\rm drop}\) inside the droplet. We explore this apparent
Figure 14: Structural similarity between cases with bubbles seeded at different stand-off distances from a flat free surface or inside the drop. The curve shows the mean value of the "total" CW-SSIM index (green) computed as the average of the mean indices observed in the first bubble collapse (red) and in its second oscillation cycle (blue). The sudden increase in \(\Upsilon\) (black markers) as we seed the bubble close to the drop centre is linked to a decay in the similarity between the cavities.
"equivalence" between values of \(D_{\text{flat}}^{*}\) and \(D_{\text{drop}}^{*}\) in figure 15 and figure 16. Since the result of the CW-SSIM analysis is affected by a significant translation of the objects being compared, the initial positions of the bubbles were matched by performing a vertical shift on the drop case simulations. Figure 15 shows evidence of the mentioned correspondence by presenting two examples, one with \(D_{\text{flat}}^{*}=1.02\) and \(D_{\text{drop}}^{*}=0.95\), and a second one where the bubble is closer to the drop centre, i.e., \(D_{\text{flat}}^{*}=2.40\) and \(D_{\text{drop}}^{*}=1.68\).
These two examples in figure 15 prove that there are pairs of \(D^{*}\) values where the similarity between the dynamics of bubbles produced near two surfaces with uneven curvature is remarkable, at least during the whole period spanned by the first two oscillation cycles. This correlation analysis was performed for an extended range of values of \(D^{*}\) to find that for each value of \(D_{\text{drop}}^{*}\) there is one value of \(D_{\text{flat}}^{*}\) with similar dynamics, i.e. which maximises the CW-SSIM index when a simulation made with that particular \(D_{\text{flat}}^{*}\) is compared against simulations with every possible value of \(D_{\text{drop}}^{*}\). As shown in figure 16, the dependence of the
Figure 15: Similarity study of the jetting dynamics of bubbles near a flat surface (grey background) or a drop (white background) with a different \(D^{*}\). The initial position of the bubbles was matched by performing a vertical shift on the drop case simulations. For each value of \(D_{\text{flat}}^{*}\) there is one value of \(D_{\text{drop}}^{*}\) with similar dynamics, i.e. producing the maximum CW-SSIM index when a simulation of a given \(D_{\text{flat}}^{*}\) is compared against simulations with every possible value of \(D_{\text{drop}}^{*}\). The numbers indicate non-dimensional time \(t^{*}\). (a) For \(D_{\text{flat}}^{*}=1.02\) the maximum average CW-SSIM index was achieved with \(D_{drop}^{*}=0.95\). (b) The best match for \(D_{\text{flat}}^{*}=2.40\) was \(D_{\text{drop}}^{*}=1.68\). (c) Temporal evolution of the CW-SSIM index for the cases on panels (a) and (b).
equivalent stand-off distance starts as a linear function in the proximity of the surface and grows rapidly as the bubble is placed deeper in the liquid. Interestingly, the linear fit performed on smaller values of \(D^{*}\), which meet the conditions for the CW-SSIM analysis, projects a ratio between equivalent \(D^{*}_{\rm drop}\) and \(D^{*}_{\rm flat}\) near 1 when the seeding position approaches the surface. Now, bearing in mind the definition of \(\Upsilon=\chi\,D^{*}\), the previous observation is consistent with the limiting case at the surface where \(\chi\to 1\), meaning that the curvature does not play a significant role for the bubble jetting dynamics. At the other extreme, i.e. as \(d\to R_{d}\), \(\Upsilon\) diverges, indicating a strong influence of the drop geometry on the bubble evolution. At this point, it is important to stress that in this context, "equivalence" does not mean that the dynamics are identical, but their structure is "as similar as it could be" for the matching of \(D^{*}_{\rm flat}\) and \(D^{*}_{\rm drop}\). The same clarification applies to bubbles with the same \(\Upsilon\), which is a multivalued function as explained in section 2.1.
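The matching between \(D^{*}_{\rm flat}\) and \(D^{*}_{\rm drop}\) described above amounts to an exhaustive search over the simulated cases. The sketch below illustrates this step, reusing the `cw_ssim` sketch given earlier; the dictionary-of-frames interface and the variable names are assumptions about how the simulation frames would be organised, not the actual data structures used in this work.

```python
import numpy as np

def mean_cw_ssim(frames_a, frames_b):
    """Average CW-SSIM over pairs of frames already aligned in non-dimensional time."""
    return float(np.mean([cw_ssim(a, b) for a, b in zip(frames_a, frames_b)]))

def best_matches(flat_cases, drop_cases):
    """flat_cases / drop_cases map a stand-off value D* to a list of simulation frames.

    For each D*_flat, return the D*_drop whose frames maximise the mean CW-SSIM
    index, as done to build figures 15 and 16.
    """
    matches = {}
    for d_flat, frames_flat in flat_cases.items():
        scores = {d_drop: mean_cw_ssim(frames_flat, frames_drop)
                  for d_drop, frames_drop in drop_cases.items()}
        matches[d_flat] = max(scores, key=scores.get)
    return matches

# The linear trend at small stand-off distances (cf. figure 16) then follows from
# slope, intercept = np.polyfit(sorted(matches), [matches[k] for k in sorted(matches)], 1)
```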
### Radial bubble oscillations
In the previous section, we studied features found in the dynamics of an axisymmetric jetting bubble. Let us now have a closer look at the only case with spherical symmetry, i.e. where the laser cavity is placed in the centre of the drop (\(\Upsilon\to\infty\)). In this scenario, the bubble undergoes several spherical oscillations with a decaying amplitude, as commonly observed in laser bubbles created in unbounded liquids (Liang _et al._, 2022). Figure 17(a) presents a comparison between an experiment and simulated data computed using the VoF solver, finding an excellent agreement. Like for the previously simulated results, here we applied the correction script that accounts for the distortion induced by the drop curvature.
Figure 16: Best match between the dynamics of bubbles in the flat boundary and the drop cases for different stand-off distances, i.e. \(D^{*}_{\rm flat}\) and \(D^{*}_{\rm drop}\). For each value of \(D^{*}_{\rm flat}\), the best match was obtained by finding the corresponding value of \(D^{*}_{\rm drop}\) that maximises the mean CW-SSIM index. As indicated by the parameters of the linear fit, the best match is found at similar values of \(D^{*}\) at the lower depths, but they become increasingly different as \(d\to R_{d}\), where the bubble is seeded at the drop centre (indicated with a vertical dotted line). The dashed grey line was added as a visual reference. The vertical error bars represent the deviation of CW-SSIM from the perfect similarity case (i.e. CW-SSIM = 1)
Figure 17(b) depicts the temporal evolution of the bubble radius \(R(t)\) for the examples in panel (a). In addition, it presents \(R(t)\) calculated for a case of a drop of an ideally infinite size, which corresponds to the case of an unbounded liquid domain. The initial conditions in the VoF model were chosen to match the experimental \(R_{max}^{*}\), and then the other cases were simulated maintaining the same parameters while changing the drop size. Notably, the bubble computed with CFD reaches a slightly larger maximum radius as the liquid layer thickness is increased to infinity (i.e. an unbounded bubble case) and thus also has a larger collapse time. This might be due to the effect produced by the consecutive (and alternating) tension and pressure waves interacting with the bubble during its expansion (see figure 17(a)).
To shed some light on this matter, we use a spherical bubble model based on a modified Rayleigh-Plesset model (RP) (Obreschkow _et al._, 2006; Zeng _et al._, 2018) that accounts for the finite droplet size, viscosity of the liquid and interfacial tension. It is worth noting that in those previous works the millimetre-sized droplet was sitting on top of a blunt needle or deformed into an ellipsoidal shape by a strong levitating acoustic field. This led to non-spherical boundary conditions that affect the bubble dynamics. In the present analysis, the droplet is nearly perfectly spherical, thus matching the purely spherical RP model within a droplet. The results, presented in figure 17(c), show that the bubble grows up to almost the same size independently of the drop size. For the same initial conditions set in the VoF model (\(p_{g}(t=0)=1.69\,\mathrm{GPa}\) and \(R_{b}(t=0)=17.3\,\mathrm{\SIUnitSymbolMicro m}\)) the work is almost completely done against the surrounding pressure (\(p_{\infty}=1\,\mathrm{bar}\)) while surface energy and
Figure 17: Direct comparison between the experiment and a numerical simulation for a case where the laser bubble is placed at the centre of the drop. (a) The median diameter of the drop is \(1.42\,\mathrm{mm}\). The simulated images showing the velocity field have been remapped to account for the distortion provoked by the curvature of the drop. The numbers indicate time in \(\,\mathrm{\SIUnitSymbolMicro s}\) and the colour scale is given in \(\,\mathrm{m}\mathrm{/}\mathrm{s}\). (b) Radial dynamics of the experimental and simulated bubbles. The experimental radius was obtained by fitting a circle on the bubble. The radius in the simulations was estimated using the gas volume (i.e. the spherical equivalent radius). The results were compared with the unbounded case to find that the bubbles inside the drop have a shorter expansion/collapse cycle. (c) Bubble dynamics is obtained with a modified Rayleigh-Plesset model for different drop sizes. The radii of the larger drops remain almost unaltered during the bubble oscillation.
viscous dissipation are negligible. Yet, for smaller droplet volumes, the inertia is reduced and therefore the expansion time to the maximum bubble radius and the duration of the almost symmetrical collapse are reduced, too.
For the particular initial conditions, both models agree on the elongation of the oscillation cycle; however, they predict dissimilar results for the maximum radius reached by the bubbles. The simple Rayleigh-Plesset model is used to give a comparison to the VoF simulations and to evaluate the impact of the shock wave (and its reflections) on the bubble dynamics. From figure 17(c) we can infer that the maximum expansion of the bubble is nearly independent of the droplet size, while in the VoF simulations it is not. The VoF model accounts for the reflected wave; thus, the discrepancy suggests that upon the acoustically soft reflection of the shock wave, momentum is imparted on the droplet interface. The importance of reflected waves for cavitation nucleation in confined liquid samples was recently also found for an acoustically hard reflection, where the bubble expansion was lowered (Bao _et al._, 2023). A more comprehensive formulation for the bubble dynamics than the RP model, which incorporates both the bubble-shockwave interaction and compressibility effects, can be found in Zhang _et al._ (2023).
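As a point of reference for the radius curves in figure 17, the sketch below integrates the classical Rayleigh-Plesset equation for an unbounded liquid with a polytropic gas law, using the initial conditions quoted above. It deliberately omits the finite-droplet corrections of Obreschkow _et al._ (2006) and all compressibility effects, so it is only a baseline against which the modified model and the VoF results can be compared; the material properties are standard values for water and are assumptions made here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Water-like properties (assumed) and the initial conditions quoted in the text.
rho, p_inf, sigma, mu, gamma = 998.0, 1.0e5, 0.072, 1.0e-3, 1.4
R0, Rdot0, pg0 = 17.3e-6, 0.0, 1.69e9

def rayleigh_plesset(t, y):
    """Unbounded-liquid Rayleigh-Plesset equation with an adiabatic gas core."""
    R, Rdot = y
    pg = pg0 * (R0 / R) ** (3.0 * gamma)                       # adiabatic gas pressure
    forcing = (pg - p_inf - 2.0 * sigma / R - 4.0 * mu * Rdot / R) / rho
    Rddot = (forcing - 1.5 * Rdot**2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 60.0e-6), [R0, Rdot0],
                method="LSODA", max_step=5.0e-9, rtol=1e-8)
# sol.t and sol.y[0] give the radius history R(t) to set against figure 17(b).
```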
### Drop surface instabilities
In the previous sections, the formation of radial liquid jets growing from the drop surface in the shape of "spikes" was mentioned. As explained above, this phenomenon stems from an initial perturbation of the liquid interface and the subsequent ejection of liquid produced by the Rayleigh-Taylor instability. This kind of instability occurs when the rapid expansion or the collapse of the bubble wall accelerates a thin liquid layer trapped between the cavity and the atmospheric gas, producing a pattern of ripples on the drop surface that grow further during the consecutive bubble oscillations. A clear example of the events leading to the onset of this kind of instability in this particular experiment is shown in figure 18(a). There, the acoustic emissions from the laser dielectric breakdown nucleate a cloud of bubbles within the drop. As the cavity expands, all these smaller bubbles are incorporated (by coalescence) into the main bubble, producing a series of dimples on the bubble surface, visible at \(t=50\,\mathrm{\SIUnitSymbolMicro s}\) of figure 18(a). These dimples may contribute to the later destabilisation of the drop surface, which is highly dependent on the ratio \(R_{max}^{*}/R_{d}\). Additionally, \(R_{max}^{*}/R_{d}\) determines the liquid layer thickness and its acceleration by the bubble/drop dynamics. The ripples in the drop surface become noticeable just after the first bubble collapse (i.e. \(t=130\,\mathrm{\SIUnitSymbolMicro s}\)) and grow significantly during the bubble re-expansion, as shown at \(t=190\,\mathrm{\SIUnitSymbolMicro s}\). However, the most dramatic events take place after the second bubble collapse (i.e. at \(t=230\,\mathrm{\SIUnitSymbolMicro s}\)). There, the ripples grow into liquid "spikes" which lead to the detachment of small droplets due to the action of the Rayleigh-Plateau instability, as shown in figure 18(b). At the same time, the second bubble collapse releases a strong shock wave in the radial direction. This shock wave interacts with the array of meniscus-shaped pits on the liquid surface to produce fast radial jets (see the frame at \(t=420\,\mathrm{\SIUnitSymbolMicro s}\)). The latter sequence is clearly captured in the frames of figure 18(c). It is important to note that this complex phenomenon not only depends on the shock wave strength but also requires certain conditions to be met (Tagawa _et al._, 2012; Peters _et al._, 2013), like a minimum depth and curvature of the pits, which may explain the absence of "spikes" during the first bubble collapse.
To further analyse the onset of these instabilities, we varied the energy of the laser pulse, hence producing bubbles with various sizes and thus with distinct ratios \(R_{max}^{*}/R_{d}\). The results are presented in figure 19. Even though the extreme image distortion produced near the drop interface prevents us from obtaining an accurate value of the bubble radius, these measurements make it evident that the amplitude of the ripples increases with increasing \(R_{max}^{*}\) and with each consecutive bubble oscillation.
In figure 19(a) the expansion of the bubble is not sufficient to visibly disturb the drop's spherical surface. In the case shown in figure 19(b) the bubble's first collapse does not break up the drop surface; however, a mild wave pattern is observed on the surface after the bubble re-expansion (at \(t=420\,\mathrm{\SIUnitSymbolMicro s}\)). In spite of the presence of these low amplitude ripples, no radial jets are ejected from the drop upon the second bubble collapse. When the ratio \(R_{max}^{*}/R_{d}\) is further increased, as shown in figure 19(c), we find very similar dynamics of the bubble/drop system, but now the valleys between the ripples (and the acoustic pressure wave) are deep enough to trigger the radial jetting. This confirms the existence of threshold conditions for the "spikes" to be formed. In the remaining cases presented in panels (d) to (f) of figure 19, the general dynamics of the bubble/drop system are very similar to the previous cases, although as the laser pulse energy is increased the instabilities become perceptible at an earlier time. For example, in figure 19(f) liquid "spikes" are already formed after the first bubble collapse (Zeng _et al._, 2018).
Figure 20 shows VoF simulations of the Rayleigh-Taylor instability found on the drop surface. From figure 20(a) it is clear that the instability is driven by the volumetric oscillation of the bubble, while the shock waves emitted from the bubble upon its creation (and later at its collapse) accelerate the ripples on the drop surface and form the thin "spikes". This kind of simulation was previously performed by Zeng _et al._ (2018) for an ellipsoidal droplet with the RTI manifesting only in a reduced region of the surface located on the drop poles. In the present work, we study a nearly spherically symmetric case where the spikes have no
Figure 18: Drop surface destabilisation mechanisms. The mean drop radius is \(1.42\,\mathrm{mm}\) and the numbers represent time in \(\mathrm{\SIUnitSymbolMicro s}\). (a) As the main bubble expands, the secondary, acoustic cavitation bubbles produce small dimples on the gas cavity surface (e.g. at \(50\,\mathrm{\SIUnitSymbolMicro s}\)). Those may promote the formation of a series of ripples during the bubble collapse (at \(130\,\mathrm{\SIUnitSymbolMicro s}\)). As the bubble re-expands, the Rayleigh-Taylor instability causes the growth of liquid “spikes” that later lead to the detachment of small drops due to the Rayleigh-Plateau instability, as indicated with a green arrow in panel (b). There, the frame width is \(570\,\mathrm{\SIUnitSymbolMicro m}\). At the same time, the second collapse of the bubble enhances the surface irregularities and pits that appear in the areas between the ripples. The shock wave emitted during the second collapse gives origin to fast liquid jets ejected from the centre of the pits, as highlighted with a blue arrow in panel (c). The frame width in this sequence is \(490\,\mathrm{\SIUnitSymbolMicro m}\). The full video is available in the online supplementary movie 9.
Figure 19: Onset of the drop surface instabilities for bubbles produced with different laser pulse energies. The mean drop radius is 1.42 mm and the numbers represent time in \(\upmu\)s. (a) Here, the energy of the laser pulse is \(L=1.9\) mJ. (b) \(L=3.1\) mJ. (c) \(L=3.9\) mJ. For this energy, the RTI affects the drop surface enough to produce liquid ejection after the second bubble collapse. (d) \(L=4.6\) mJ. (e) \(L=5.2\) mJ. (f) \(L=6.4\) mJ. Note that panels (d)-(f) are shown in wider frames than the panels (a)-(c) to show the larger “spikes”. A full video of panel (f) is available in the online supplementary movie 10.
preferred origin, i.e. they escape the droplet isotropically.
The instability was quantified by defining the spike height as half of the difference between the maximum and minimum radial deviations from the initial drop shape, which was then normalised with the average drop radius, \(R_{d}=1420\,\mathrm{\SIUnitSymbolMicro m}\). In figure 20(b), it is evident that the instability is formed immediately after bubble creation, as it grows during the bubble's initial expansion. It starts shrinking at \(R_{0}\leq 40\,\mathrm{\SIUnitSymbolMicro m}\) during its first collapse and grows again upon its rebound.
For \(R_{0}=25.7\,\mathrm{\SIUnitSymbolMicro m}\), the normalised spike height stays below \(0.2\,\mathrm{\char 37}\), meaning that the instability does not develop further in the first bubble oscillation cycles. As the initial radius \(R_{0}\) is increased in steps of \(5\,\mathrm{\SIUnitSymbolMicro m}\), the spike height approximately doubles. Thus, the spike height is exponentially related to the bubble size, i.e. spike height\(/R_{d}\sim e^{R_{0}\cdot\text{const.}}\). Considering that the instability increases continuously with increasing laser energy, a threshold for the onset of the instability can only be chosen arbitrarily. Here, we choose an _ad hoc_ threshold value as the normalised spike height of \(1\,\mathrm{\char 37}\) of \(R_{d}\), around which the spike height does not shrink during most of the bubble's first collapse.
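The spike-height diagnostic defined above is straightforward to evaluate on a simulated interface. The sketch below assumes that the gas-liquid interface has been extracted from a VoF snapshot as a radius sampled over the polar angle; that input format, and the helper names, are illustrative assumptions rather than the actual post-processing scripts used here.

```python
import numpy as np

R_D = 1.42e-3  # average drop radius in m, as quoted in the text

def spike_height(r_interface, R_d=R_D):
    """Normalised RTI spike height: half the spread of the radial deviation, over R_d."""
    deviation = np.asarray(r_interface) - R_d
    return 0.5 * (deviation.max() - deviation.min()) / R_d

def fit_exponential_growth(R0_values, heights):
    """Check the reported scaling spike_height/R_d ~ exp(const * R0) via a log-space fit."""
    const, log_prefactor = np.polyfit(np.asarray(R0_values), np.log(np.asarray(heights)), 1)
    return const, np.exp(log_prefactor)
```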
Similarly to what is observed in the experiments, the simulations of figure 20(b) show how the spikes are ejected earlier in time as the maximum radius reached by the bubble is increased. For \(R_{0}=35\,\mathrm{\SIUnitSymbolMicro m}\), the spikes nearly cross the threshold (indicated in the plot with a dashed line) in the second oscillation cycle, while for \(R_{0}=40\,\mathrm{\SIUnitSymbolMicro m}\), the threshold is crossed shortly after the first collapse, and for \(R_{0}\geq 45\,\mathrm{\SIUnitSymbolMicro m}\) it is already exceeded during the first oscillation cycle. Figure 20(a) compares the instability for a case below (\(R_{0}=35\,\mathrm{\SIUnitSymbolMicro m}\)) and above (\(R_{0}=45\,\mathrm{\SIUnitSymbolMicro m}\)) the established threshold, showing a strong increase in the spike size as well as the ejection of droplets for the larger bubble. This droplet separation from the spikes, highlighted in the bottom row of figure 20(c), was previously discussed in figure 18(b) as an example of the
Figure 20: Numerical simulations of the instabilities development at the surface of the drop. (a) Selected frames for a case with \(R_{0}=35\,\mathrm{\SIUnitSymbolMicro m}\) (left) and \(R_{0}=45\,\mathrm{\SIUnitSymbolMicro m}\) (right). The non-dimensional time \(t^{*}\) is shown on the top-left of each frame. (b) Temporal evolution of the Rayleigh-Taylor instability (RTI) spike height for various \(R_{0}\). An _ad hoc_ threshold for the instability onset is indicated by a dashed line at \(1\,\mathrm{\char 37}\) of \(R_{d}\). (c) Selected frames of a zoomed view of the drop surface for \(R_{0}=45\,\mathrm{\SIUnitSymbolMicro m}\) (frame window indicated by a red square in (b)), showing the RTI and the Rayleigh-Plateau instability.
effect of the Rayleigh-Plateau instability. An upper limit of the spike height is reached at \(\approx 1\ R_{d}\) for \(R_{0}=65\,\mathrm{\SIUnitSymbolMicro m}\), where the outer spikes reach about twice the drop size, while the inner spikes breach the liquid layer that separates the bubble from the outside air. Because of this, the bubble interior is partially filled with atmospheric gas and the cavity ceases to oscillate. At this point, the drop can no longer be defined as such, as shown in the experiments of figure 19(f) where the liquid mass becomes an intricate collection of spikes and a significant portion of it is ejected away as smaller droplets.
## 5 Conclusion
In this manuscript, we presented some of the complex fluid dynamics occurring once a vapour bubble expands within a water droplet. Specifically, we analysed the appearance of acoustic secondary cavitation, and the formation of liquid jets in the proximity of highly curved free surfaces, and finally, we provided detailed experimental and simulated images of the onset and the development of shape instabilities on the surface of the drop.
The first part of the research highlights that acoustic waves emitted from the micro-explosion nucleate complex secondary cavitation clouds. Further, the study corroborates the existing relation between the evolution of the negative pressure profile and the shape of the bubble clusters inside the drop. A cavitation threshold pressure of around \(-4.5\) MPa was estimated by performing a direct comparison between the experiments and the simulations. The numerical model does not account for the bubble nucleation induced by the rarefaction waves. The implementation of this experimental technique to other liquids, particularly in cases where large samples are not available, might contribute to achieving a deeper understanding of the nucleation of bubbles by sound waves. The present experimental setup may be modified to create a bubble within a superheated droplet to reveal in a well-defined system the coupling of fluid dynamics with thermodynamics, and also study how the liquid temperature affects the later fragmentation dynamics (Bar-Kohany & Levy, 2016).
The secondary bubble clusters and several types of jets, both caused by the generation of laser bubbles at different positions inside the droplet, were classified using a stand-off parameter \(\Upsilon\). The use of a single quantity to characterise the system simplifies the direct comparison between cases. The optical lens effect linked to the spherical shape of the drops allowed us to obtain images of the bubble jet's interior with a remarkable level of detail.
The numerical simulations were crucial to explain the complex flow fields generating these jets, as well as to explain the shape acquired by the gas cavities during their second collapse phase, including many interesting features like the annular bubble necking and the detachment of multiple vapour rings.
The effect of the liquid surface curvature on the bubble jetting has been analysed, by comparing the evolution of a bubble inside a droplet and in a semi-infinite pool, using two complementary points of view. First, a qualitative assessment (here called _behavioural similarity_) indicates that the jetting regime differs rather little when the cavity is seeded near the free boundary. In this part of the study, we have shown that for the droplet case the non-dimensional distance \(D^{*}\) is the most determining quantity, while the curvature of the liquid does not have a dominant role in the evolution of the jetting cavities. This conclusion is based on the analysis of numerical simulations where only the parameter \(R_{d}\) was modified, and also on a comparison of the current results with those previously reported for a flat surface.
A second type of analysis, which uses the CW-SSIM index to evaluate the _structural similarity_ of the cavities, was applied to the same numerical data to perform, this time, a quantitative comparison of the jetting near a flat and a curved surface. Here, we found that for bubbles in the vicinity of the liquid surface (i.e. \(0\lesssim D^{*}\lesssim 0.7\)) the structural similarity is
rather poor, mostly due to the higher degree of fragmentation of the gas phase developed in regimes with a ventilated cavity, or where the liquid surface is affected by the RTI.
Both similarity criteria indicate the existence of a seeding depth around \(D^{*}\sim 1\) where the bubbles in the flat and curved cases resemble each other the most. In addition, as the bubble seeding position is set further away from the surface, the jetting regimes become progressively more dissimilar, in particular when the laser cavity is generated near the drop centre. The sudden drop in the CW-SSIM index found in this situation matches an equally abrupt rise in the value of \(\Upsilon\) starting around \(\Upsilon=10\).
The CW-SSIM analysis confirmed that for each stand-off distance in the flat boundary case (i.e. \(D^{*}_{\rm flat}\)), there is another value of \(D^{*}\) (i.e. \(D^{*}_{\rm drop}\)) where the bubble dynamics of both cases resemble each other the most. The relation between \(D^{*}_{\rm flat}\) and \(D^{*}_{\rm drop}\) supports the definition and the functionality described for \(\Upsilon\). This kind of similarity study could be used to span a more comprehensive parameter space with \(D^{*}_{\rm flat}\) and \(\Upsilon\) computed with different curvature radii, and thus achieve a more general picture of the group of parameter values having "equivalent" jetting dynamics. Moreover, the jet matching would greatly benefit from the implementation of more complex comparison methods or the use of machine learning techniques that consider both the behavioural and the structural criteria.
The spherical bubble oscillations observed in the experiments where the laser was focused on the geometrical centre of the droplet were analysed using two different numerical models. Both models were in excellent agreement with the measured temporal bubble radius evolution. More importantly, both models predict a reduction in the expansion/collapse time when the drop size is decreased. Of course, this study is valid as long as the liquid layer around the bubble is not thin enough to promote the onset of the RTI, as it happens in cases with a low \(R_{d}/R^{*}_{max}\) ratio.
The radial oscillations of a central bubble were also used to study the onset of shape instabilities at the gas-liquid interfaces, given by the Rayleigh-Taylor and Rayleigh-Plateau instabilities. The destabilisation mechanism of each instability and its effect on the droplet surface was illustrated by detailed high-speed images. Here, we have demonstrated how the radial acceleration imposed by the bubble oscillation triggers the RTI, which in turn induces a pattern of superficial ripples on the drop. Those acquire a concave shape during the bubble collapse and give rise to liquid filaments due to the transfer of the momentum from the bubble shock wave emissions to the curved pits formed on the gas-liquid interface. The ejected filaments later break up by the action of the RPI causing the detachment of smaller droplets and thus the atomisation of the drop.
The phase change from liquid to vapour within droplets is observed in a wide variety of applications, such as in flash boiling atomisation (Loureiro _et al._, 2021), in spray-flame synthesis (Jungst _et al._, 2022), spray cooling (Tran _et al._, 2012), extreme ultraviolet light generation (Versolato, 2019), and laser-induced breakdown spectroscopy of liquids (Lazic & Jovicovic, 2014), to name a few. They all have in common that the liquid is fragmented by a micro-explosion within it through a complex, non-spherically symmetric process. While Rayleigh-Taylor instabilities determine the growth of ripples on the surface of the droplet, the non-spherical bubble dynamics that leads to jetting out of the droplet affects the resulting size distribution of liquid particles, too. The high degree of control achieved in the current experiments opens up the possibility of studying the RTI of more complex interfaces, e.g. the effect of particles covering the surface, surfactants, or complex fluids. Those experiments could be supported by complementary numerical simulations to optimise the workflow in the laboratory.
### Funding
J.M.R. and K.A.R. acknowledge support by the Alexander von Humboldt Foundation (Germany) through the Georg Forster and Humboldt Research Fellowships. This project has received funding from the European Union's Horizon research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101064097, as well as the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under contract OH75/4-1 and INST 272/280-1. The authors would also like to thank the anonymous reviewers for their constructive criticism and suggestions that helped to improve this work.
## Declaration of interests
The authors report no conflict of interest.
### Author contributions
**Juan Manuel Rossello**: Conceptualisation, Data curation, Formal Analysis, Funding acquisition, Investigation, Methodology, Software, Visualisation and Writing - original draft, Writing - review & editing. **Hendrik Reese**: Conceptualisation, Data curation, Formal Analysis, Software, Visualisation and Writing - review & editing. **K. Ashoke Raman**: Conceptualisation, Funding acquisition and Investigation. **Claus-Dieter Ohl**: Conceptualisation, Formal Analysis, Funding acquisition, Resources and Writing - review & editing.
## Appendix A Bubble jetting in a liquid pool with a curved free surface
In section 4.2, the role of the curvature of the free surface was analysed by comparing the bubble jetting observed near a flat surface to the jetting dynamics of bubbles within the falling droplet. There, both the experimental and numerical results indicate that the effect of the curvature on the bubble jetting regime is almost negligible for low values of \(D^{*}\), but the specific shapes acquired by the cavity during and after the jetting are no longer similar as \(D^{*}\) takes values larger than 1.2, as shown by the structural similarity analysis. However, the curvature of the surface is not the only difference between these two cases, since in one case the liquid is confined (i.e. the droplet) and in the other the bubble is produced on top of an ideally "semi-infinite" liquid column (which has a length of 5 cm in the experiments and was numerically infinite in the simulations). An intermediate step between those two experimental scenarios is given by the configuration described in figure 21(a). Here, the bubbles are also produced close to the free surface of a liquid pool, but in this case the top of the liquid column presents a curved surface with the shape of a dome. Panels (b), (c) and (d) of figure 21 show three examples of jetting bubbles generated at different depths \(d\). The bubbles were located away from the symmetry axis to make it evident that the jets always point in the direction normal to the surface. As discussed in section 4.2, we observed a similar behaviour of the jetting dynamics for both curved surfaces at comparable values of the stand-off parameter. The example shown in panel (b) of figure 21 corresponds to the case (c) of figure 12, while the jet dynamics of the case (c) of figure 21 match those of figure 12(a).
Figure 22 compares the jetting of bubbles in the drop case with the jetting of bubbles in a configuration such as the one shown in figure 21(a). The results present almost identical bubble dynamics even in the case with \(D^{*}=1.23\), meaning that the differences observed between the case with the flat surface and the drop case are indeed caused by the effect of the surface curvature and not by the difference in the boundary conditions below the bubble or in the
liquid volume. As previously discussed for the flat surface case, the similarity found in the cases displayed in figure 22 will be eventually lost as the laser bubble is produced closer to the drop centre.
2309.00377 | Nonlinear Dirichlet forms, energy spaces, and calculus rules | We review recent contributions on nonlinear Dirichlet forms. Then, we specialise to the case of 2-homogeneous and local forms. Inspired by the theory of Finsler manifolds and metric measure spaces, we establish new properties of such nonlinear Dirichlet forms, which are reminiscent of differential calculus formulae. | Giovanni Brigati | 2023-09-01T10:23:48Z | http://arxiv.org/abs/2309.00377v1

# Nonlinear Dirichlet forms, energy spaces, and calculus rules
###### Abstract
We review recent contributions on nonlinear Dirichlet forms. Then, we specialise to the case of \(2-\)homogeneous and local forms. Inspired by the theory of Finsler manifolds and metric measure spaces, we establish new properties of such nonlinear Dirichlet forms, which are reminiscent of differential calculus formulae.
_To my late grandfather, Roberto, with love and gratitude._
**MSC2020:** Primary 31C45; Secondary 47H20, 31C25, 46E36, 35K55.
**Keywords:** Nonlinear Dirichlet form, Dirichlet form, nonlinear semigroup, energy space, differential calculus.
## 1 Introduction
The theory of (bilinear) Dirichlet forms [13, 25, 29] is a rich topic at the interface between analysis and probability, in connection with Markov semigroups (see Theorem 2.1 below). Dirichlet forms were introduced in [12] as a class of bilinear/quadratic forms generalising the standard Dirichlet energy
\[\mathcal{E}(u)=\begin{cases}\int_{\mathbb{R}^{d}}|Du(x)|^{2}\,dx,\qquad\text{ if}\,u\in\mathrm{W}^{1,2}(\mathbb{R}^{d});\\ +\infty,\qquad\text{otherwise.}\end{cases} \tag{1}\]
An abstract calculus (called \(\Gamma\)-calculus) has been developed in [8] to capture the hypercontractivity and decay properties of linear Markov semigroups
induced by quadratic Dirichlet forms, establishing connections between linear diffusion processes, Riemannian geometry, and functional inequalities [9; 23; 6]. In particular, on Riemannian manifolds, the \(\Gamma\)-calculus applied to the \(\mathrm{W}^{1,2}\)-seminorm (which is a quadratic Dirichlet form) links the long-time behaviour of the heat flow (the associated Markov semigroup) with the Bochner identity [22], via an inequality called the _Bakry-Emery condition_. In the case of a lower bound on the _Ricci curvature_, the Bakry-Emery condition is satisfied. This turns out to be an equivalence, yielding a definition of Ricci lower bounds purely in terms of Dirichlet forms [35]. A third equivalent notion of Ricci lower bounds was given via optimal transport by Lott, Villani, and Sturm [28; 33; 34].
This last definition makes sense even for metric measure spaces. However, unlike on Riemannian manifolds, the analogue of (1) on metric measure spaces, called _Cheeger's energy_ [2; 3; 4], is not a quadratic Dirichlet form in general, and the metric heat flow is nonlinear. So, \(\Gamma\)-calculus and a Bakry-Emery condition could be established only in metric measure spaces whose Cheeger's energy is quadratic (_infinitesimally Hilbertian spaces_) [5]. In this class, Ambrosio, Gigli, and Savare could recover the equivalence between the Bakry-Emery condition and the Ricci lower bounds of [28; 33]. Then, the study of geometry and functional inequalities on RCD spaces (i.e. infinitesimally Hilbertian spaces satisfying a Ricci lower bound) flourished, becoming a main subject in the last ten years [26; 27].
Much less is known in the case where the Cheeger's energy is nonquadratic. At the moment, Ricci lower bounds in this case are given only via the Lott-Villani-Sturm approach. The Cheeger's energy being nonquadratic is not exceptional, as it is the case in all (non-Riemannian) Finsler geometries. Finsler structures will be a source of inspiration for this paper, as Riemannian manifolds are model examples for the RCD case. We detail hereby the state of the art.
* Ricci lower bounds on Finsler manifolds appear in [31; 32; 30]. The equivalence between an _intrinsic definition_, and the Lott-Villani-Sturm approach of [28; 33] holds true. Moreover, a suitable equivalent Bakry-Emery condition has been found. This last condition looks very different from the standard one, as it is a comparison estimate between the nonlinear Finsler Laplacian and a linearisation of it.
* A definition of _nonlinear Dirichlet form_ has been given in [19]. Properties of nonlinear Dirichlet forms have been studied in [21; 20] and [17; 16]. No analogue of the Bakry-Emery \(\Gamma\)-calculus is available in this context, to the best of our knowledge.
* Nonquadratic Cheeger's energies belong to the class of nonlinear Dirichlet forms of [19]. The converse, i.e. a representation theorem of abstract nonlinear Dirichlet forms as metric Cheeger's energies, is missing in general, but available in the quadratic case [5].
Motivated by the last point, our long-term goal is to strengthen the link between nonlinear Dirichlet forms and nonquadratic Cheeger's energies. In this note, we study 2-homogeneous, and local nonlinear Dirichlet forms. We analyse the associated non-Hilbertian energy space, and we recover some abstract calculus rules, which are reminiscent of _concrete_ calculus in metric measure spaces [26, 3], capturing the analogies as much as we can.
The work is organised as follows. Section 2 contains a presentation of the main notions involved in the paper. Section 3 specialises to 2-homogeneous and local nonlinear Dirichlet forms. Properties of the associated energy spaces are collected in Section 4. Sections 5-6 contain first-order and second-order calculus rules, respectively. Finally, in Section 7 we list some desirable results which are still missing in the theory.
## 2 Definitions and tools
### Quadratic Dirichlet forms
Let \((X,\mathcal{F},m)\) be a \(\sigma\)-finite measure space, such that \(\mathcal{F}\) is in a bi-measurable correspondence with the Borel class on \(\mathbb{R}\).
A (quadratic) Dirichlet form is a quadratic, lower-semicontinuous (l.s.c.) functional
\[\mathcal{E}:\mathrm{L}^{2}(X,m)\to[0,\infty],\]
whose domain
\[D(\mathcal{E}):=\big{\{}u\in\mathrm{L}^{2}(X,m)\,:\,\mathcal{E}(u)\neq\infty \big{\}}\]
is a dense subspace of \(\mathrm{L}^{2}(X,m)\), and such that
\[\forall u\in D(\mathcal{E}),\quad\mathcal{E}(0\lor u\wedge 1)\leqslant \mathcal{E}(u). \tag{2}\]
The symbols \(\vee\), \(\wedge\) stand for the maximum and minimum operations, respectively. The quadratic form \(\mathcal{E}\) induces an unbounded, positive semi-definite, self-adjoint linear operator \(A:D(A)\to\mathrm{L}^{2}(X,m)\), such that
\[\forall u\in D(\mathcal{E}),\qquad\mathcal{E}(u)=\int_{X}\sqrt{A}u\,\sqrt{A}u \,dm,\]
\[\forall u\in D(A),\qquad\mathcal{E}(u)=\int_{X}u\,Au\,dm.\]
By construction, one has that \(t\mapsto T_{t}:=\mathrm{e}^{-At}\) is a linear and continuous semigroup of contractions on \(\mathrm{L}^{2}(X,m)\)[18] such that
\[\partial_{t}T_{t}=-A\,T_{t}.\]
Finally, one could introduce the bilinear Dirichlet form associated with \(\mathcal{E}\) as follows
\[\Lambda(u,v):=\int_{X}\sqrt{A}u\,\sqrt{A}v\,dm,\qquad\forall u,v\in D(\mathcal{E}), \tag{3}\]
which is called simply _Dirichlet form_ in the literature [13, 25, 29].
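For instance, for the model energy (1) on \(\mathbb{R}^{d}\) (a standard example, recalled here only for orientation), the objects above take the familiar form
\[A=-\Delta,\qquad D(A)=\mathrm{W}^{2,2}(\mathbb{R}^{d}),\qquad D(\mathcal{E})=\mathrm{W}^{1,2}(\mathbb{R}^{d}),\qquad T_{t}=\mathrm{e}^{t\Delta},\qquad\Lambda(u,v)=\int_{\mathbb{R}^{d}}Du\cdot Dv\,dx,\]
with \((T_{t})_{t}\) the heat semigroup.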
The interest of Dirichlet forms, especially in connection with linear semigroups [25], lies in the following.
**Theorem 2.1**.: _Let \(\mathcal{E}\) be a l.s.c., densely defined, positive semi-definite quadratic form over \(\mathrm{L}^{2}(X,m)\). Then, the following are equivalent._
1. \(\mathcal{E}\) _is a Dirichlet form._
2. _The linear semigroup_ \((T_{t})_{t}=\mathrm{e}^{-At}\) _is a continuous and self-adjoint semigroup of contractions in_ \(\mathrm{L}^{p}(X,m)\) _for all_ \(p\in[1,\infty]\)_:_ \[\forall t\geqslant 0,\quad\forall p\in[1,\infty],\qquad\|T_{t}\|_{\mathrm{L}^{p}(X,m)\to\mathrm{L}^{p}(X,m)}\leqslant 1. \tag{4}\] _Moreover,_ \((T_{t})_{t}\) _is positivity-preserving:_ \[\forall u\in\mathrm{L}^{2}(X,m),\,t\geqslant 0,\qquad u\geqslant 0\ \Longrightarrow\ T_{t}u\geqslant 0.\]
3. _The form_ \(\mathcal{E}\) _verifies the normal contraction property_ \[\forall u\in\mathrm{L}^{2}(X,m),\quad\forall\phi\in\Phi,\qquad\mathcal{E}(\phi(u))\leqslant\mathcal{E}(u), \tag{5}\] _where_ \[\Phi:=\left\{\phi:\mathbb{R}\to\mathbb{R}\,:\,\phi(0)=0,\,\phi\text{ is $1$-Lipschitz}\right\}.\]
The implication \((2\implies 1)\) should be understood in the following sense. Given a linear, self-adjoint semigroup \((T_{t})_{t}\), satisfying the hypotheses of condition 2., one could always (uniquely) define an unbounded, linear, positive semi-definite, and self-adjoint operator
\[A:=\lim_{t\to 0}\,\frac{\mathrm{Id}-T_{t}}{t},\]
then, \(\sqrt{A}\) by functional calculus. This way, the functional
\[\mathcal{E}(u):=\begin{cases}\int_{X}\left|\sqrt{A}u\right|^{2}\,dm,\qquad\text {if }u\in D(\sqrt{A}),\\ +\infty,\qquad\text{otherwise},\end{cases}\]
is a Dirichlet form. Semigroups satisfying condition 2. of Theorem 2.1 are usually called linear _Markov semigroups_.
### Nonlinear Dirichlet forms
In [19], a possible extension of Dirichlet forms to the nonlinear setting has been established. We adopt this approach, in view of a nonlinear extension of Theorem 2.1, namely Theorems 2.2 and 2.3.
Let \(\mathcal{E}:\mathrm{L}^{2}(X,m)\to[0,\infty]\) be a convex, l.s.c. functional with dense domain. Indicate with \(\partial\mathcal{E}:D(\partial\mathcal{E})\subset\mathrm{L}^{2}(X,m)\to 2^{ \mathrm{L}^{2}(X,m)}\) its subdifferential operator:
\[\partial\mathcal{E}(u)=\left\{\xi\in\mathrm{L}^{2}(X,m)\,:\,\forall z\in \mathrm{L}^{2}(X,m),\quad\mathcal{E}(z)-\mathcal{E}(u)\geqslant\int_{X}\xi \,(z-u)\,dm\right\},\]
see [14, 15]. Let \((T_{t})_{t\geqslant 0}\) be the semigroup of nonlinear operators generated by \(-\partial\mathcal{E}\) via the differential equation
\[\begin{cases}\partial_{t}T_{t}\,u\in-\partial\mathcal{E}(T_{t}\,u),&\forall t \in(0,\infty),\quad\forall u\in\mathrm{L}^{2}(X,m),\\ T_{0}\,u=u,&\forall u\in\mathrm{L}^{2}(X,m).\end{cases} \tag{6}\]
Equation (6) is well-posed for all \(u\in\mathrm{L}^{2}(X,m)\). Its solution is usually called the gradient flow of \(\mathcal{E}\) starting at \(u\). See [1, 14].
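For the model energy (1), for instance (a standard illustration of (6)), one has \(\partial\mathcal{E}(u)=\{-2\Delta u\}\) if \(u\in\mathrm{W}^{2,2}(\mathbb{R}^{d})\) and \(\partial\mathcal{E}(u)=\emptyset\) otherwise, so that the gradient flow is the heat flow, up to a constant rescaling of time:
\[\partial_{t}T_{t}\,u=2\Delta\,T_{t}\,u\quad\text{for all }t>0,\qquad T_{0}\,u=u.\]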
The functional \(\mathcal{E}\) is a nonlinear Dirichlet form if the associated semigroup \((T_{t})_{t}\) is a contraction in all \(\mathrm{L}^{p}(X,m)\) spaces (4), and if, in addition, it is order-preserving:
\[\forall u,v\in\mathrm{L}^{2}(X,m),\,t\geqslant 0,\qquad u\geqslant v\ \Longrightarrow\ T_{t}u\geqslant T_{t}v. \tag{7}\]
A semigroup of nonlinear maps satisfying (4) and (7) is called a _nonlinear Markov semigroup_. Notice that a quadratic Dirichlet form is a special case of a nonlinear Dirichlet form. Moreover, if \(\mathcal{E}\) is quadratic, then \(\partial\mathcal{E}(u)=\{2Au\}\) for every \(u\in D(A)\).
Thanks to [11], condition (4) needs to be verified only for \(p=\infty\). Following the results in [10, 14, 17], conditions (4) and (7) can be characterised in terms of the invariance of convex sets in \(\mathrm{L}^{2}(X,m)\) under the action of the semigroup \((T_{t})_{t}\), and then equivalently rewritten as contraction properties of the functional \(\mathcal{E}\) itself.
**Theorem 2.2** ([17]).: _Let \(\mathcal{E}:\mathrm{L}^{2}(X,m)\to[0,\infty]\) be a l.s.c. functional. Then, \(\mathcal{E}\) is a nonlinear Dirichlet form if and only if, for all \(u,v\in\mathrm{L}^{2}(X,m)\), and \(\alpha\in[0,\infty)\), \(\mathcal{E}\) verifies_
\[\mathcal{E}(u\lor v)+\mathcal{E}(u\wedge v) \leqslant\mathcal{E}(u)+\mathcal{E}(v), \tag{8}\] \[\mathcal{E}(H_{\alpha}(u,v))+\mathcal{E}(H_{\alpha}(v,u)) \leqslant\mathcal{E}(u)+\mathcal{E}(v), \tag{9}\]
_with_
\[H_{\alpha}(u,v)(x)=\begin{cases}v(x)-\alpha&u(x)-v(x)<-\alpha,\\ u(x)&u(x)-v(x)\in[-\alpha,\alpha],\\ v(x)+\alpha&u(x)-v(x)>\alpha.\end{cases} \tag{10}\]
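A direct case-by-case check on (10) shows that the two truncations appearing in (9) are complementary, in analogy with the elementary lattice identity underlying (8):
\[H_{\alpha}(u,v)+H_{\alpha}(v,u)=u+v,\qquad\text{just as}\qquad(u\lor v)+(u\wedge v)=u+v.\]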
The _normal contraction property_ (5) was recovered also for the nonlinear setting.
**Theorem 2.3** ([17]).: _Let \(\mathcal{E}\) be a nonlinear Dirichlet form. Then \(\mathcal{E}\) has the normal contraction property_ (5) _if and only if_
\[\mathcal{E}(-f)\leqslant\mathcal{E}(f)\quad\forall f\in\mathrm{L}^{2}(X,m). \tag{11}\]
By homogeneity, condition (11) is trivial in the quadratic setting.
### Finsler and metric Sobolev spaces
Let \(\mathcal{M}\) be a smooth, closed, boundary-free manifold. Let \(m\) be a Borel, \(\sigma\)-finite measure on \(\mathcal{M}\). A Finsler structure on \(\mathcal{M}\) is a smooth map \(x\mapsto|\cdot|_{x}\) associating to each point \(x\in\mathcal{M}\) a norm on \(T_{x}\mathcal{M}.\) Let \(|\cdot|_{x}^{\star}\) be the dual norm, and consider the map \(F:T\mathcal{M}\to 2^{T^{\star}\mathcal{M}}\) defined as follows:
\[F(x,\xi)=\left\{(x,\zeta)\,:\,\zeta\in T_{x}^{\star}\mathcal{M},\,|\zeta|_{x}^ {\star}=|\xi|_{x},\quad\langle\zeta,\xi\rangle_{x}=|\xi|_{x}^{2}\right\},\]
where \(\langle\cdot,\cdot\rangle_{x}\) is the duality product between \(T_{x}^{\star}\mathcal{M}\) and \(T_{x}\mathcal{M}.\) If \(u:\mathcal{M}\to\mathbb{R}\) is a regular function, then \(Du\) is a section of \(T^{\star}\mathcal{M}.\)
We have that the Sobolev seminorm
\[\mathcal{E}(u)=\begin{cases}\int_{\mathcal{M}}\left(|Du(x)|_{x}^{\star}\right) ^{2}\,dm=\int_{\mathcal{M}}|F^{-1}(Du)|_{x}^{2}\,dm,\quad\text{if }u\in\mathrm{W}^{1,2}(\mathcal{M},dm),\\ \infty,\qquad\text{otherwise},\end{cases} \tag{12}\]
is a nonlinear Dirichlet form. Moreover, \(\mathcal{E}\) is a quadratic form if and only if \(F\) is a linear map, if and only if \(|\cdot|_{x}\) is induced by a scalar product for all \(x\in\mathcal{M},\) so that \(\mathcal{M}\) is a Riemannian manifold. One can compute
\[\partial\mathcal{E}(u)=-2\nabla\cdot F^{-1}(Du),\]
which plays the role of the Laplacian on \((\mathcal{M},|\cdot|,m)\), but it is a nonlinear (possibly multi-valued) operator. If \(u\in D(\partial\mathcal{E})\), and \(v\in\mathrm{L}^{2}(\mathcal{M},m)\), then the scalar product
\[\int_{\mathcal{M}}-\nabla\cdot F^{-1}(Du)\,v\,dm\]
makes sense. As Gigli observes in [26], one cannot hope to find an adjoint operator \(S\) of \(-\nabla\cdot F^{-1}(Du)\) such that
\[\int_{\mathcal{M}}-\nabla\cdot F^{-1}(Du)\,v\,dm=\int_{\mathcal{M}}S(v)\,u\,dm,\]
as the right-hand-side is linear in \(u\), while the left-hand-side is not. The best one can do is move one derivative onto \(v\), finding
\[\int_{\mathcal{M}}-\nabla\cdot F^{-1}(Du)\,v\,dm=\int_{\mathcal{M}}\langle F^ {-1}(Du),Dv\rangle_{x}\,dm.\]
The form
\[\Lambda(u,v):=\int_{\mathcal{M}}\langle F^{-1}(Du),Dv\rangle_{x}\,dm \tag{13}\]
is multi-valued, and is defined for \(u,v\in D(\mathcal{E})\). The two arguments play different roles in \(\Lambda\), but there is no hierarchy in their regularity. We also define the maximal and minimal sections of \(\Lambda\), given by
\[\Lambda^{-}(u,v) :=\int_{\mathcal{M}}\inf_{g\in F^{-1}(Du)}\langle g,Dv\rangle_{x} \,dm, \forall u,v\in D(\mathcal{E}), \tag{14}\] \[\Lambda^{+}(u,v) :=\int_{\mathcal{M}}\sup_{g\in F^{-1}(Du)}\langle g,Dv\rangle_{x} \,dm, \forall u,v\in D(\mathcal{E}). \tag{15}\]
Finally, we have [26]
\[\forall u,v\in D(\mathcal{E}),\qquad\Lambda^{\pm}(u,v)=\lim_{\sigma\to 0^{\pm}} \frac{\mathcal{E}(u+\sigma v)-\mathcal{E}(u)}{\sigma}. \tag{16}\]
We remark that the last formula holds even at a pointwise level in the context of Finsler manifolds.
We conclude the section by recalling the definition of the Cheeger's energy, which extends (1) and (12) to metric measure spaces [3].
Let \((X,\tau)\) be a topological space such that it is homeomorphic to a complete and separable metric space. Then \((X,\tau)\) is called a Polish space. Let \((X,\tau)\) be a Polish space equipped with a function \(d:X\times X\to[0,+\infty]\) such that
* \(d\) is an extended distance on \(X\);
* \(d\) is \(\tau-\)l.s.c.;
* for all sequences \((x_{n})_{n}\subset X\) such that \(d(x_{n},x)\to 0\), for an element \(x\in X\), we have \(x_{n}\to x\) in \(\tau\).
* the extended metric space \((X,d)\) is complete.
Let \(m\) be a Borel, \(\sigma-\)finite measure on \((X,d,\tau)\) such that
\[m(B(x,r))\leqslant\exp(Cr^{2}), \tag{17}\]
for a uniform constant \(C\). Let \(f\in\operatorname{Lip}_{b}(X)\) and let \(x\in X.\) The maximum local slope of \(f\) at \(x\) is given by
\[|Df|(x):=\limsup_{y\to x}\frac{|f(y)-f(x)|}{d(x,y)}.\]
Let \(\bar{\operatorname{Ch}}:\operatorname{L}^{2}(X,m)\to[0,+\infty]\) be defined via
\[\bar{\operatorname{Ch}}(u)=\frac{1}{2}\int_{X}|Du|^{2}dm\qquad\text{ if }u\in \operatorname{Lip}_{b}(X),\]
and \(+\infty\) otherwise.
The Cheeger's energy of \((X,d,\tau,m)\) is the functional \(\operatorname{Ch}\) defined via
\[\operatorname{Ch}=sc^{-}\bar{\operatorname{Ch}},\]
where \(sc^{-}\) is the l.s.c. envelope in the \(\operatorname{L}^{2}\)-topology.
We have that \(D(\operatorname{Ch})\) is a vector space, known as the metric Sobolev space and indicated in the literature with the symbol \(\operatorname{W}^{1,2}(X,d,m)\). In general, the Cheeger's energy \(\operatorname{Ch}\) is not quadratic and \(\operatorname{W}^{1,2}(X,d,m)\) is not a Hilbert space. Let \(u\in D(\partial\operatorname{Ch})\). Then,
\[-\Delta_{d,m}(u):=\partial^{0}\operatorname{Ch}(u),\]
is called the metric Laplacian of \(u\), where \(\partial^{0}\) indicates the element of minimal norm in the subdifferential. Our analysis of nonlinear Dirichlet forms is motivated by the following.
**Theorem 2.4** ([3]).: _The Cheeger's energy \(\operatorname{Ch}\) is a nonlinear Dirichlet form in the sense of Cipriani and Grillo [19]._
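In the Euclidean model case, \(X=\mathbb{R}^{d}\) with the Euclidean distance and the Lebesgue measure (a standard fact, recalled only for orientation), these metric notions reduce to the classical ones:
\[\mathrm{Ch}(u)=\frac{1}{2}\int_{\mathbb{R}^{d}}|Du|^{2}\,dx,\qquad\mathrm{W}^{1,2}(X,d,m)=\mathrm{W}^{1,2}(\mathbb{R}^{d}),\qquad\Delta_{d,m}u=\Delta u\quad\text{for }u\in\mathrm{W}^{2,2}(\mathbb{R}^{d}).\]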
For calculus rules in metric spaces we generally refer to [3, 26]. Precise results will be cited wherever they are needed.
## 3 \(2\)-homogeneous and local functionals
Even when the Cheeger's energy is nonquadratic, it is a \(2\)-homogeneous functional satisfying some locality properties [3]. We therefore restrict our attention to the class of nonlinear Dirichlet forms which are \(2\)-homogeneous and local (in a sense defined below).
Let \(\mathcal{E}:\mathrm{L}^{2}(X,m)\to[0,\infty]\) be a functional. Then, \(\mathcal{E}\) is \(2-\)homogeneous if
\[\forall\nu\in\mathbb{R},\,\forall u\in\mathrm{L}^{2}(X,m),\qquad\mathcal{E}( \nu u)=\nu^{2}\mathcal{E}(u).\]
We say that \(\mathcal{E}\) is local if, for all \(u,v\in D(\mathcal{E})\) such that \(u\) is constant on the support of \(v\), we have
\[\mathcal{E}(u+v)=\mathcal{E}(u)+\mathcal{E}(v).\]
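A simple example of a convex, l.s.c., \(2\)-homogeneous and local functional which is not quadratic (an illustration of ours, not taken from the references) is
\[\mathcal{E}(u)=\begin{cases}\int_{\mathbb{R}^{2}}\left(|\partial_{1}u(x)|+|\partial_{2}u(x)|\right)^{2}\,dx,&\text{if }u\in\mathrm{W}^{1,2}(\mathbb{R}^{2}),\\ +\infty,&\text{otherwise},\end{cases}\]
which can be identified, up to the factor \(1/2\), with the Cheeger's energy of \(\mathbb{R}^{2}\) endowed with the \(\ell^{\infty}\) distance and the Lebesgue measure.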
Finally, we introduce the symbol \((\cdot,\cdot)\) as a short-hand notation for the \(\mathrm{L}^{2}(X,m)-\)scalar product.
**Theorem 3.1**.: _Let \(\mathcal{E}\) be a \(2\)-homogeneous, convex, l.s.c. functional. Then, the following hold true._
1. \(\mathcal{E}(0)=0.\)__
2. _The subdifferential_ \(\partial\mathcal{E}\) _is_ \(1-\)_homogeneous. Moreover the set_ \(D(\partial\mathcal{E})\) _is invariant under scalar multiplication. Conversely, if_ \(\mathcal{E}\) _is a non-negative, convex, l.s.c. functional, such that its subdifferential_ \(\partial\mathcal{E}\) _is_ \(1-\)_homogeneous and_ \(\mathcal{E}(0)=0\)_, then_ \(\mathcal{E}\) _is_ \(2-\)_homogeneous._
3. _For all_ \(u\in D(\partial\mathcal{E})\) _and all_ \(\xi\in\partial\mathcal{E}(u)\) _we have_ \[\int_{X}u\xi\,dm=2\mathcal{E}(u). \tag{18}\]
Formula (18) is reminiscent of the integration by parts
\[\int_{\mathbb{R}^{d}}|Du|^{2}\,dx=-\int_{\mathbb{R}^{d}}u\,\Delta u\,dx.\]
In Finsler geometry, the same holds:
\[\mathcal{E}(u)=\int_{\mathcal{M}}\langle F^{-1}(Du),Du\rangle_{x}\,dm=-\int_{ \mathcal{M}}\nabla\cdot F^{-1}(Du)\,u\,dm,\qquad\forall u\in D(\partial \mathcal{E}).\]
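When \(\mathcal{E}\) is Gateaux-differentiable, (18) is simply Euler's identity for \(2\)-homogeneous functions: differentiating \(t\mapsto\mathcal{E}(tu)\) at \(t=1\) in two ways gives
\[\frac{d}{dt}\Big{|}_{t=1}\mathcal{E}(tu)=(\nabla\mathcal{E}(u),u)\qquad\text{and}\qquad\frac{d}{dt}\Big{|}_{t=1}\mathcal{E}(tu)=\frac{d}{dt}\Big{|}_{t=1}t^{2}\,\mathcal{E}(u)=2\,\mathcal{E}(u).\]
The Yosida argument in the proof below extends this identity to the general non-smooth case.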
Proof of Theorem 3.1.: The first property follows by picking any \(u\in D(\mathcal{E})\): \(\mathcal{E}(0)=\mathcal{E}(0\cdot u)=0^{2}\,\mathcal{E}(u)=0.\)
For the second property, suppose that \(u\in D(\partial\mathcal{E})\) and let \(\lambda>0\). Let \(w\in\partial\mathcal{E}(u),\) so that
\[\mathcal{E}(v)-\mathcal{E}(u)\geqslant(w,v-u),\]
for all \(v\in\mathrm{L}^{2}(X,m)\). Then
\[\mathcal{E}(\lambda v)-\mathcal{E}(\lambda u)=\lambda^{2}(\mathcal{E}(v)- \mathcal{E}(u))\]
hence
\[\mathcal{E}(\lambda v)-\mathcal{E}(\lambda u)\geqslant\lambda^{2}(w,v-u)=( \lambda w,\lambda v-\lambda u).\]
Since \(v\) is arbitrary, so is \(\lambda v\); hence \(\lambda w\in\partial\mathcal{E}(\lambda u)\), which proves the \(1\)-homogeneity of \(\partial\mathcal{E}\).
For the converse, suppose in addition that \(\mathcal{E}\) is a \(\mathrm{C}^{1,1}\) functional. For a fixed element \(u,\) it is sufficient to prove that \(t\mapsto\mathcal{E}(tu)\) is \(2\)-homogeneous. This fact is straightforward, since a real \(\mathrm{C}^{1}\) function which vanishes at \(0\) and whose derivative is \(1\)-homogeneous must be \(2\)-homogeneous. In the general case, one can argue by Yosida regularisation [24], provided that the Yosida regularisation of \(\mathcal{E}\) is still \(2\)-homogeneous, as we show below.
For the third property, we use Yosida regularisation, with the notation of [24]. In particular, let \(A_{\lambda}\) be the Yosida regularisation of \(\partial\mathcal{E},\) for all \(\lambda>0.\) If \(\mathcal{E}\) is \(2\)-homogeneous, then so is its Yosida regularisation \(\mathcal{E}_{\lambda}\) (and vice versa). Indeed:
\[\mathcal{E}_{\lambda}(\mu u)=\inf_{z\in\mathrm{L}^{2}(X,m)}\left\{\mathcal{E}(z)+\frac{1}{2\lambda}|\mu u-z|^{2}\right\}=\] \[=\inf_{v\in\mathrm{L}^{2}(X,m)}\left\{\mathcal{E}(\mu v)+\frac{1}{2\lambda}|\mu u-\mu v|^{2}\right\}=\] \[=\mu^{2}\inf_{v\in\mathrm{L}^{2}(X,m)}\left\{\mathcal{E}(v)+\frac{1}{2\lambda}|u-v|^{2}\right\}=\mu^{2}\mathcal{E}_{\lambda}(u),\]
for all \(\mu,\lambda>0.\) As an intermediate result we prove an explicit formula for \(\mathcal{E}_{\lambda}.\) Fix \(u\in D(\partial\mathcal{E})\). Let \(g:[0,+\infty)\to\mathbb{R}\) be defined via
\[g(x)=\mathcal{E}_{\lambda}(xu)-x^{2}\mathcal{E}_{\lambda}(u).\]
We have that \(g\) is \(\mathrm{C}^{1}\) by composition and, by the \(2\)-homogeneity of \(\mathcal{E}_{\lambda}\), \(g\equiv 0\), hence \(g^{\prime}\equiv 0\), which reads as:
\[(A_{\lambda}(xu),u)-2x\,\mathcal{E}_{\lambda}(u)=0,\]
that is, (18) for \(\mathcal{E}_{\lambda}\); taking \(x=1\), we deduce
\[\mathcal{E}_{\lambda}(u)=\frac{1}{2}(A_{\lambda}(u),u).\]
We can now prove the direct implication. Passing to the limit as \(\lambda\downarrow 0,\) one obtains
\[\mathcal{E}(u)=\frac{1}{2}(\partial^{0}\mathcal{E}(u),u).\]
Take any other element \(\xi\in\partial\mathcal{E}(u)\). Via \(G-\)convergence [7], we find a sequence \((u_{n})_{n}\) s.t. \(u_{n}\to u\) strongly in \(\mathrm{L}^{2}\) and \(A_{\frac{1}{n}}(u_{n})\to\xi\) strongly in \(\mathrm{L}^{2}\). Without loss of generality choose \((u_{n})_{n}\) such that \(|u_{n}-u|<n^{-2}\). Then
\[\frac{1}{2}\left(A_{\frac{1}{n}}(u_{n}),u_{n}\right)\to\frac{1}{2}(\xi,u).\]
At the same time,
\[\frac{1}{2}\left(A_{\frac{1}{n}}(u_{n}),u_{n}\right)=\mathcal{E}_{\frac{1}{n} }(u_{n})\to\mathcal{E}(u),\]
see [24, 14].
## 4 Energy spaces
Let \(\mathcal{E}\) be a \(2\)-homogeneous nonlinear Dirichlet form. Then, the space \(D(\mathcal{E})\) is called _Dirichlet space_. Some properties of \(D(\mathcal{E})\) have already been given in [20], but, under our hypotheses, we have a simpler structure with new results.
**Theorem 4.1**.: _Let \(\mathcal{E}\) be a \(2\)-homogeneous nonlinear Dirichlet form. Then, the following hold true._
1. _The space_ \(D(\mathcal{E})\) _is a vector space. The functional_ \[u\mapsto\|u\|_{\mathcal{E}}^{2}:=\|u\|_{\mathrm{L}^{2}(X,m)}^{2}+\mathcal{E}(u)\] _is the square of a norm on_ \(D(\mathcal{E})\)_. Moreover,_ \((D(\mathcal{E}),\|u\|_{\mathcal{E}})\) _is a Banach space._
2. _The pair_ \((D(\mathcal{E}),\|u\|_{\mathcal{E}})\) _is a Hilbert space if and only if_ \(\mathcal{E}\) _is a quadratic Dirichlet form._
3. _Lipschitz functions_ \(\phi:\mathbb{R}\to\mathbb{R}\) _such that_ \(\phi(0)=0\) _act on_ \(D(\mathcal{E})\)_:_ \[\forall u\in D(\mathcal{E}),\qquad\phi\circ u\in D(\mathcal{E}).\] _Moreover,_ \(D(\mathcal{E})\cap\mathrm{L}^{\infty}(X,m)\) _is an algebra._
4. \((D(\mathcal{E}),\|u\|_{\mathcal{E}})\) _is a dual space_ _[_21_]__._
Proof.: Since \(D(\mathcal{E})\) is stable under scalar multiplication (by \(2\)-homogeneity) and convex, it is a vector space: \(u+v=2\left(\tfrac{1}{2}u+\tfrac{1}{2}v\right)\in D(\mathcal{E})\). We shall now prove the triangle inequality for the functional \(\|u\|_{\mathcal{E}},\) while the homogeneity and the condition \(\|u\|_{\mathcal{E}}=0\) iff \(u=0\) are clear. For a homogeneous functional, the triangle inequality is equivalent to the convexity of the closed unit ball. Let \(u,v\in D(\mathcal{E})\) be such that \(\|u\|_{\mathcal{E}}\vee\|v\|_{\mathcal{E}}\leqslant 1.\) Then, for any \(\lambda\in[0,1]\):
\[\|\lambda u+(1-\lambda)v\|_{\mathcal{E}}^{2} =|\lambda u+(1-\lambda)v|^{2}+\mathcal{E}(\lambda u+(1-\lambda)v)\leqslant\] \[\leqslant\lambda|u|^{2}+(1-\lambda)|v|^{2}+\lambda\mathcal{E}(u) +(1-\lambda)\mathcal{E}(v)=\] \[=\lambda(|u|^{2}+\mathcal{E}(u))+(1-\lambda)(|v|^{2}+\mathcal{E}( v))\leqslant\] \[\leqslant\lambda+1-\lambda=1.\]
Let now \((u_{n})_{n}\) be a Cauchy sequence in \(D(\mathcal{E})\) with respect to the norm \(\|\cdot\|_{\mathcal{E}}.\) Then \((u_{n})_{n}\) is a Cauchy sequence in \(\mathrm{L}^{2}\). Hence \(u_{n}\to u\) in \(\mathrm{L}^{2}\), for some \(u\in\mathrm{L}^{2}(X,m)\). Moreover, \(\mathcal{E}(u_{n}-u_{m})\) is bounded by some constant \(C\), uniformly in \(n,m\in\mathbb{N}\). Thanks to l.s.c.
\[\mathcal{E}(u-u_{m})\leqslant\liminf_{n\to\infty}\mathcal{E}(u_{n}-u_{m}) \leqslant C.\]
Hence, \(\mathcal{E}(u)\leqslant 2\mathcal{E}(u-u_{m})+2\mathcal{E}(u_{m})\leqslant 4C,\) which implies \(u\in D(\mathcal{E})\). Finally,
\[0\leqslant\lim_{m}\mathcal{E}(u-u_{m})\leqslant\lim_{m}\liminf_{n}\mathcal{E }(u_{n}-u_{m})\downarrow 0.\]
The first statement is then proved. The second statement follows from the definition of \(\|u\|_{\mathcal{E}}\): the parallelogram identity holds for \(\|\cdot\|_{\mathcal{E}}\) if and only if it holds for \(\mathcal{E}\). For the third statement, note that condition (11) holds by \(2\)-homogeneity, so that, if \(\phi\) is \(1\)-Lipschitz, Theorem 2.3 ensures \(u\in D(\mathcal{E})\ \Longrightarrow\ \phi(u)\in D(\mathcal{E})\). Otherwise, if \(\phi\) is \(L\)-Lipschitz, then \(\phi/L\) is \(1\)-Lipschitz, so
\[\mathcal{E}(\phi(u))=L^{2}\,\mathcal{E}(L^{-1}\phi(u))\leqslant L^{2}\, \mathcal{E}(u).\]
If \(u\in D(\mathcal{E})\cap\mathrm{L}^{\infty}\), then \(u^{2}\) is a Lipschitz transformation of \(u\). Hence, the function \(u^{2}\in D(\mathcal{E}).\) The computation \(uv=1/2\left((u+v)^{2}-u^{2}-v^{2}\right)\) concludes the proof of the third statement of the theorem.
Notice that the third statement of the last theorem replicates a well-known result in the theory of Sobolev spaces.
## 5 First-order calculus
Let \(\mathcal{E}\) be a \(2-\)homogeneous nonlinear Dirichlet form. Our goal is to reconstruct an object which plays the role of (14)-(15), but expressed purely in
terms of \(\mathcal{E}.\) In case \(\mathcal{E}\) is quadratic, the natural associated bi-variate object is the bilinear Dirichlet form (3), and the two limits (14)-(15) coincide. In the general case, a canonical bi-variate object is lacking, so a choice has to be made.
Having (16) in mind, we define
\[\Lambda^{\pm}(u,v):=\lim_{\sigma\to 0^{\pm}}\frac{\mathcal{E}(u+\sigma v)- \mathcal{E}(u)}{\sigma},\qquad\forall u,v\in D(\mathcal{E}),\]
as the left and right slopes of \(\mathcal{E},\) at \(u,\) in the direction of \(v\). Notice that the definition is well-posed, as \(\sigma\mapsto\mathcal{E}(u+\sigma v):\mathbb{R}\to\mathbb{R}\) is convex and finite, \(D(\mathcal{E})\) being a vector space.
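For instance, when \(\mathcal{E}\) is a quadratic Dirichlet form with associated operator \(A\) (a standard computation, added for illustration), the two slopes coincide: expanding \(\mathcal{E}(u+\sigma v)=\mathcal{E}(u)+2\sigma\int_{X}\sqrt{A}u\,\sqrt{A}v\,dm+\sigma^{2}\mathcal{E}(v)\) gives
\[\Lambda^{+}(u,v)=\Lambda^{-}(u,v)=2\int_{X}\sqrt{A}u\,\sqrt{A}v\,dm,\qquad\forall u,v\in D(\mathcal{E}),\]
that is, twice the bilinear form (3). The nonlinear case is genuinely richer, as the next result shows.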
**Theorem 5.1**.: _Let \(\mathcal{E}\) be a \(2\)-homogeneous nonlinear Dirichlet form. Then, the slopes \(\Lambda^{\pm}\) have the following properties._
1. _For all_ \(u,v\in D(\mathcal{E})\)_,_ \(\Lambda^{\pm}(u,v)\) _are finite, and_ \[\big{|}\Lambda^{\pm}(u,v)\big{|}\leqslant 2\,\sqrt{\mathcal{E}(u)}\,\sqrt{ \mathcal{E}(v)},\] (19) _which is sharp for_ \(u=v\)_. Moreover,_ \[\Lambda^{-}(u,v)\leqslant\Lambda^{+}(u,v),\qquad\forall u,v\in D(\mathcal{E}).\]
2. _For all_ \(u\in D(\mathcal{E})\)_, we have_ \(\Lambda^{\pm}(u,u)=2\mathcal{E}(u).\)__
3. _For all_ \(u\in D(\mathcal{E})\)_,_ \(\Lambda^{\pm}(u,\cdot)\) _is positively_ \(1-\)_homogeneous. Moreover_ \(\Lambda^{+}(u,\cdot)\) _is convex, while_ \(\Lambda^{-}(u,\cdot)\) _is concave._
4. _For all_ \(v\in D(\mathcal{E})\)_, we have that_ \(\Lambda^{\pm}(\cdot,v)\) _is positively_ \(1-\)_homogeneous._
5. _For all_ \(u,v\in D(\mathcal{E})\)_, it holds_ \(\Lambda^{+}(u,-v)=-\Lambda^{-}(u,v).\)__
6. _If_ \(\mathcal{E}\) _is local and_ \(u,v\in D(\mathcal{E})\) _are such that_ \(u\) _is constant on_ \(supp(v)\)_, we have_ \[\Lambda^{\pm}(u,v)=0.\]
Proof.: 1.
The inequality \(\Lambda^{-}\leqslant\Lambda^{+}\) is a consequence of convexity for the map \(t\mapsto\mathcal{E}(u+tv).\) Take now any \(u,v\in D(\mathcal{E}).\)
\[\Lambda^{+}(u,v)=\lim_{h\to 0^{+}}h^{-1}(\mathcal{E}(u+hv)-\mathcal{E}(u))=\] \[=\lim_{h\to 0^{+}}h^{-1}\left(\left(\sqrt{\mathcal{E}(u+hv)}\right)^{2}- \left(\sqrt{\mathcal{E}(u)}\right)^{2}\right)\leqslant\] \[\leqslant\lim_{h\to 0^{+}}\left(2\sqrt{\mathcal{E}(u)}\sqrt{\mathcal{E}(v)}+h\, \mathcal{E}(v)\right)=2\sqrt{\mathcal{E}(u)}\sqrt{\mathcal{E}(v)}.\]
We used the monotonicity of \(t\mapsto t^{2}\) and the fact that \(\sqrt{\mathcal{E}}\) is a seminorm. With an analogous argument, one can prove that
\[\Lambda^{-}(u,v)\geqslant-2\sqrt{\mathcal{E}(u)}\sqrt{\mathcal{E}(v)}.\]
2.
Compute
\[\Lambda^{\pm}(u,u)=\lim_{h\to 0^{\pm}}\frac{\mathcal{E}((1+h)u)-\mathcal{E}(u)}{h}= \lim_{h\to 0^{\pm}}\frac{(2h+h^{2})\,\mathcal{E}(u)}{h}=\lim_{h\to 0^{\pm}}(2+h)\,\mathcal{E}(u)=2\,\mathcal{E}(u).\]
3. and 4.
Let \(\lambda>0\).
\[\Lambda^{\pm}(u,\lambda v) =\lim_{h\to 0^{\pm}}h^{-1}(\mathcal{E}(u+h\lambda v)-\mathcal{E}(u))=\] \[=\lambda\lim_{h\to 0^{\pm}}(\lambda h)^{-1}(\mathcal{E}(u+h \lambda v)-\mathcal{E}(u))=\] \[=\lambda\Lambda^{\pm}(u,v).\]
Let \(u,v\in D(\mathcal{E})\), let \(\lambda>0\). Hence
\[\Lambda^{\pm}(\lambda u,v) =\lim_{h\to 0^{\pm}}h^{-1}(\mathcal{E}(\lambda u+hv)- \mathcal{E}(\lambda u))=\] \[=\lim_{h\to 0^{\pm}}h^{-1}(\mathcal{E}(\lambda(u+h\lambda^{-1}v))- \mathcal{E}(\lambda u))=\] \[=\lambda\lim_{h\to 0^{\pm}}\lambda h^{-1}(\mathcal{E}(u+h \lambda^{-1}v)-\mathcal{E}(u))=\] \[=\lambda\Lambda^{\pm}(u,v).\]
We prove only the convexity of \(\Lambda^{+}(u,\cdot)\), the other proof being very similar. Fix \(u,v_{1},v_{2}\in D(\mathcal{E})\) and \(\lambda\in[0,1]\).
\[\Lambda^{+}(u,\lambda v_{1}+(1-\lambda)v_{2})=\] \[=\lim_{\sigma\to 0^{+}}\sigma^{-1}(\mathcal{E}(u+\sigma \lambda v_{1}+\sigma(1-\lambda)v_{2})-\mathcal{E}(u))=\] \[=\lim_{\sigma\to 0^{+}}\sigma^{-1}(\mathcal{E}(\lambda(u+ \sigma v_{1})+(1-\lambda)(u+\sigma v_{2}))-\mathcal{E}(u))\leqslant\] \[\leqslant\lim_{\sigma\to 0^{+}}\sigma^{-1}(\lambda\mathcal{E}(u+ \sigma v_{1})+(1-\lambda)\mathcal{E}(u+\sigma v_{2})-\mathcal{E}(u))=\] \[=\lambda\Lambda^{+}(u,v_{1})+(1-\lambda)\Lambda^{+}(u,v_{2}).\]
5.
It is sufficient to switch \(h\) with \(-h\) and take limits.
The proof of the last statement is a direct calculation. We perform it for the two limits at once:
\[\lim_{h\to 0}\frac{\mathcal{E}(u+hv)-\mathcal{E}(u)}{h}=\lim_{h\to 0}\frac{ \mathcal{E}(hv)}{h}=\lim_{h\to 0}\frac{h^{2}\mathcal{E}(v)}{h}=0.\]
Note that \(\Lambda^{\pm}(u,v)\) generalise the integrals \(\int_{X}D^{\pm}v(\nabla u)\,dm\) introduced by Gigli in [26] for metric measure spaces. There, even the pointwise objects \(D^{\pm}v(\nabla u)(x)\) make sense, while in our setting the slopes \(\Lambda^{\pm}\) are not necessarily represented by a density w.r.t. \(dm\).
In general, \(\Lambda^{-}\neq\Lambda^{+}\), even in Finsler manifolds. In case \(\Lambda^{+}(u,\cdot)=\Lambda^{-}(u,\cdot)\), the form \(\mathcal{E}\) is said to be _regular_ at \(u\), and more structure is available. We also say that \(\mathcal{E}\) is _regular_ if \(\mathcal{E}\) is regular at all \(u\in D(\mathcal{E}).\) If \(\mathcal{E}\) is Frechet-differentiable, we have that \(\mathcal{E}\) is regular and \(\Lambda(u,v)=(\nabla E(u),v),\) for all \(u,v\in D(\mathcal{E}).\)
**Proposition 5.2**.: _Let \(\mathcal{E}\) be a \(2\)-homogeneous, regular nonlinear Dirichlet form. Then, \(\Lambda:=\Lambda^{+}=\Lambda^{-}\) is linear in the second argument. Moreover, for all \(u\in D(\mathcal{E})\), we have that \(\Lambda(u,\cdot)\in D(\mathcal{E})^{\star}\). Finally, \(\mathcal{E}\) is quadratic if and only if \(\mathcal{E}\) is regular and_
\[\forall u,v\in D(\mathcal{E}),\qquad\Lambda(u,v)=\Lambda(v,u). \tag{20}\]
Proof.: The first assertion is entailed by the fact that \(\Lambda(u,\cdot)\) is both concave and convex, and continuous, see Theorem 5.1. If \(\mathcal{E}\) is quadratic, we have
\[\Lambda(u,v)=2\int_{X}\sqrt{A}(u)\,\sqrt{A}(v)\,dm,\]
which is symmetric in \(u,v.\) For the converse, assume \(\mathcal{E}\) to be regular and that (20) holds. Then, the maps \(t\mapsto\mathcal{E}(u+tv)\) are differentiable, for all \(u,v\in D(\mathcal{E})\), and, using (20) together with the linearity of \(\Lambda(v,\cdot)\) and the identity \(\Lambda(v,v)=2\mathcal{E}(v)\),
\[\mathcal{E}(u+v)-\mathcal{E}(u)=\int_{0}^{1}\frac{d}{dt}\, \mathcal{E}(u+tv)\,dt=\int_{0}^{1}\Lambda(u+tv,v)\,dt=\] \[=\int_{0}^{1}\big{(}\Lambda(v,u)+t\,\Lambda(v,v)\big{)}\,dt=\Lambda(v,u)+\mathcal{E}(v).\]
By exchanging \(v\) with \(-v\), and adding up, we prove that \(\mathcal{E}\) satisfies the parallelogram identity, hence \(\|\cdot\|_{\mathcal{E}}\) is induced by a scalar product. Equivalently, \(\mathcal{E}\) is quadratic.
The last point shows that the arguments \(u,v\) of \(\Lambda\) actually play non-interchangeable roles (unless \(\Lambda\) is a bilinear Dirichlet form). This is intuitive in the case of Finsler manifolds, see (16).
## 6 Second-order calculus
In metric measure spaces, the role of the Laplacian is played by the minimal section of \(\partial\)Ch, where Ch is the Cheeger's energy. In this section, we investigate the properties of \(\partial\mathcal{E},\) for a \(2\)-homogeneous nonlinear Dirichlet form. Moreover, we introduce an extended subdifferential, in analogy with [26].
Let \(\mathcal{E}\) be a \(2-\)homogeneous nonlinear Dirichlet form. Consider the space \(D(\partial\mathcal{E}),\) equipped with the distance
\[d_{\partial}^{2}(u,v)=\|u-v\|_{\mathrm{L}^{2}(X,m)}^{2}+\|\partial\mathcal{E} (u)-\partial\mathcal{E}(v)\|_{\mathrm{L}^{2}(X,m)}^{2},\]
where the second contribution is a distance between closed subsets of \(\mathrm{L}^{2}(X,m).\)
**Proposition 6.1**.: _Let \(\mathcal{E}\) be a \(2-\)homogeneous nonlinear Dirichlet form. Then, we have that \(D(\partial\mathcal{E})\subset D(\mathcal{E})\) is dense and continuous at \(0\). Moreover, \(D(\partial\mathcal{E})\subset\mathrm{L}^{2}(X,m)\) is dense and continuous. Finally,_
\[\forall u\in D(\partial\mathcal{E}),\qquad\mathcal{E}(u)\leqslant\frac{1}{2} \left\|\partial^{0}\mathcal{E}(u)\right\|_{\mathrm{L}^{2}(X,m)}\left\|u\right\| _{\mathrm{L}^{2}(X,m)}.\]
Proof.: The density of the two inclusions is proved as follows. Let \(u\in\mathrm{L}^{2}(X,m)\). Then, \(T_{t}u\in D(\partial\mathcal{E})\) for all \(t>0\)[24], where \((T_{t})_{t}\) is the gradient flow generated by \(\mathcal{E}.\) The convergence properties of \(T_{t}u\to u,\) as \(t\to 0\) are sufficient to conclude. In addition, \(d_{\partial}(u,v)\geqslant\|u-v\|_{\mathrm{L}^{2}(X,m)}\) shows the continuity of the inclusion \(D(\partial\mathcal{E})\subset\mathrm{L}^{2}(X,m)\). The inequality \(\mathcal{E}(u)\leqslant\frac{1}{2}\left\|\partial^{0}\mathcal{E}(u)\right\|_{ \mathrm{L}^{2}(X,m)}\left\|u\right\|_{\mathrm{L}^{2}(X,m)}\) is a combination of (18) and the Cauchy-Schwarz inequality. This implies also the continuity at \(0\) of \(D(\partial\mathcal{E})\subset D(\mathcal{E}).\)
Some integration by parts rules, reminiscent of those given in [3, 26], link the subdifferential with the slopes of Section 5, as the next result shows.
**Theorem 6.2**.: _Let \(\mathcal{E}\) be a \(2\)-homogenous nonlinear Dirichlet form. Then, the following rules hold._
1. _For all_ \(u\in D(\partial\mathcal{E}),\) _and all_ \(v\in D(\mathcal{E}),\) _we have that_ \[\Lambda^{-}(u,v)\leqslant(\partial\mathcal{E}(u),v)\leqslant\Lambda^{+}(u,v).\] _In particular, if_ \(\mathcal{E}\) _is such that_ \(\Lambda^{+}(u,\cdot)=\Lambda^{-}(u,\cdot),\) _and_ \(\mathcal{E}\) _is subdifferentiable at_ \(u\)_, we have_ \[\forall v\in D(\mathcal{E}),\qquad\Lambda^{\pm}(u,v)=(\partial\mathcal{E}(u), v),\] _and_ \(\partial\mathcal{E}(u)\) _contains only one element._
2. _For_ \(\lambda>0\)_, let_ \(A_{\lambda}\) _be the Yosida regularisation of_ \(\partial\mathcal{E}\)_. Then,_ \(\forall u,v\in D(\mathcal{E})\)_,_ \[\Lambda^{-}(u,v)\leqslant\liminf_{\lambda\to 0}(A_{\lambda}(u),v) \leqslant\limsup_{\lambda\to 0}(A_{\lambda}(u),v)\leqslant\Lambda^{+}(u,v).\] _In particular, if_ \(\mathcal{E}\) _is regular, we have_ \(\lim_{\lambda\to 0}(A_{\lambda}(u),v)=\Lambda(u,v),\) _for all_ \(u,v\in D(\mathcal{E}).\)__
Proof.: 1.
We perform the calculation for one side of the inequality, being the other one analogous. Fix \(\xi\in\partial\mathcal{E}(u)\) and compute
\[\mathcal{E}(u+hv)-\mathcal{E}(u)\geqslant(\xi,hv),\]
which reads, if \(h<0\), as
\[h^{-1}(\mathcal{E}(u+hv)-\mathcal{E}(u))\leqslant(\xi,v).\]
The inequality follows by taking the supremum for \(h<0\). If \(\mathcal{E}\) is regular at \(u\), we have that \((\partial\mathcal{E}(u),v)\) is prescribed for all \(v\in D(\mathcal{E})\) by the values of \(\Lambda(u,v)\), hence, \(\partial\mathcal{E}(u)\) contains only one element.
2.
We start with the \(\liminf\) inequality. For all \(\lambda>0\), \(u,v\) as in the hypothesis and \(h<0\), consider
\[\mathcal{E}_{\lambda}(u+hv)-\mathcal{E}_{\lambda}(u)\geqslant(A_{\lambda}(u),hv),\]
which leads to
\[\liminf_{\lambda\to 0}h^{-1}(\mathcal{E}_{\lambda}(u+hv)-\mathcal{E}_{\lambda}(u)) \leqslant\liminf_{\lambda\to 0}(A_{\lambda}(u),v).\]
The l.h.s. admits a limit, hence,
\[h^{-1}(\mathcal{E}(u+hv)-\mathcal{E}(u))\leqslant\liminf_{\lambda\to 0}(A_{ \lambda}(u),v).\]
Taking the supremum over \(h<0\) yields the sought inequality. For the \(\limsup\) inequality, still consider \(\lambda>0\), \(u,v\) as in the hypotheses, and \(h>0\). Write
\[\mathcal{E}_{\lambda}(u+hv)-\mathcal{E}_{\lambda}(u)\geqslant(A_{\lambda}u, hv),\]
and take the \(\limsup\) in both sides to get
\[\limsup_{\lambda\to 0}\mathcal{E}_{\lambda}(u+hv)-\mathcal{E}_{\lambda}(u) \geqslant\limsup_{\lambda\to 0}(A_{\lambda}u,hv),\]
which reads
\[\mathcal{E}(u+hv)-\mathcal{E}(u)\geqslant\limsup_{\lambda\to 0}(A_{\lambda}u,hv),\]
hence,
\[h^{-1}\big{(}\mathcal{E}(u+hv)-\mathcal{E}(u)\big{)}\geqslant\limsup_{\lambda\to 0}(A_{ \lambda}u,v).\]
The result follows by taking the infimum in \(h\).
The domain of \(\partial\mathcal{E}\) generalises the space \(\mathrm{W}^{2,2}(\mathbb{R}^{d})\), with
\[\partial\mathcal{E}:D(\partial\mathcal{E})\to\mathrm{L}^{2}(X,m).\]
Hereby, we give an extended definition, in the spirit of [26], mimicking the distributional Laplacian
\[\Delta:\mathrm{W}^{1,2}(\mathbb{R}^{d})\to\mathrm{H}^{-1}(\mathbb{R}^{d}).\]
Let \(\mathcal{E}\) be a \(2\)-homogeneous nonlinear Dirichlet form. Then, we say that a function \(u\in D(\mathcal{E})\) is a point of extended subdifferentiability if there exists a signed measure \(\mu\) on \((X,\mathcal{F})\) such that all \(v\in D(\mathcal{E})\) are \(\mu\)-integrable and the following holds
\[\Lambda^{-}(u,v)\leqslant\int_{X}v\,d\mu\leqslant\Lambda^{+}(u,v). \tag{21}\]
In this case, we write \(u\in D(\bar{\partial}\mathcal{E})\) and \(\mu\in\bar{\partial}\mathcal{E}(u)\). Notice that Theorem 5.1 implies \(\mu\in D(\mathcal{E})^{\star}\).
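A one-dimensional illustration (our example, not taken from [26]): take \(X=\mathbb{R}\) with the Lebesgue measure, \(\mathcal{E}(u)=\int_{\mathbb{R}}|u^{\prime}|^{2}\,dx\) for \(u\in\mathrm{W}^{1,2}(\mathbb{R})\) (and \(+\infty\) otherwise), and the tent function \(u(x)=\max(0,1-|x|)\). Identifying each \(v\in D(\mathcal{E})\) with its continuous representative, one computes
\[\Lambda^{\pm}(u,v)=2\int_{\mathbb{R}}u^{\prime}v^{\prime}\,dx=\int_{\mathbb{R}}v\,d\mu,\qquad\mu:=-2u^{\prime\prime}=4\delta_{0}-2\delta_{-1}-2\delta_{1},\]
so that \(\mu\in\bar{\partial}\mathcal{E}(u)\), while \(\partial\mathcal{E}(u)=\emptyset\), since no \(\mathrm{L}^{2}\) function represents this functional. Hence \(D(\bar{\partial}\mathcal{E})\) is strictly larger than \(D(\partial\mathcal{E})\).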
We collect some properties of \(\bar{\partial}\mathcal{E}\) in the next result, which concludes our analysis.
**Proposition 6.3**.: _Let \(\mathcal{E}\) be a \(2\)-homogeneous nonlinear Dirichlet form. Then, the extended subdifferential \(\bar{\partial}\) satisfies the following:_
1. \(\bar{\partial}\mathcal{E}(u)\) _is convex and_ \(1\)_-homogeneous;_
2. _if_ \(u\in D(\bar{\partial}\mathcal{E})\) _and_ \(\mathcal{E}\) _is regular at_ \(u\)_, then_ \(\bar{\partial}\mathcal{E}(u)\) _contains only one measure._
3. _if_ \(u\in D(\partial\mathcal{E}),\) _and_ \(\xi\in\partial\mathcal{E}(u)\)_, then_ \(\xi\,dm\in\bar{\partial}\mathcal{E}(u).\)__
Proof.: The first assertion is a consequence of the \(1\)-homogeneity of \(\Lambda^{\pm}\) and of the linearity of the integral with respect to the measure. The second assertion follows from the fact that the values of \(\int_{X}v\,d\mu\) are prescribed by \(\Lambda(u,v)\). Finally, the third statement holds by definition, after Theorem 6.2.
The validity of a converse of the third statement in the last proposition has been discussed for metric spaces in [26], and remains unclear. Nor are we able to give an analogue of [26, Proposition 4.11]. Such a formula is one of the hypotheses of [5] for the quadratic case.
## 7 Perspectives and _desiderata_
A missing point in the theory is a pointwise object representing \(\Lambda^{\pm}(u,v)\), which should play the same role as \(D^{\pm}v(\nabla u)(x)\) in [26]. It is unclear whether the existence of a density for \(\Lambda^{\pm}\) should be imposed, or if it is a consequence of some locality hypotheses on \(\mathcal{E}\). In any case, heuristically, it corresponds to finer integration by parts formulae than those of the current paper. Also a Leibniz formula and some chain rules on \(\Lambda^{\pm}(u,\cdot)\) would be advances in the theory. Still, we do not know if one should expect such properties or impose them.
Another desirable statement is some integral representation of \(\mathcal{E}\) under locality assumptions. However, this is not available even if \(X=\mathbb{R}^{2}.\) Finally, a missing notion is that of _linearised_ flow associated to the gradient flow of a nonlinear Dirichlet form. Equivalently, one would need a _linearisation of \(\partial\mathcal{E}\) in the direction of the gradient_, as in [32].
## Declarations
The author has been funded by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 754362. Partial support has been obtained from the EFI ANR-17-CE40-0030 Project of the French National Research Agency.
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
The author has no competing interests to declare that are relevant to the content of this article.
The author thanks the organisers of the \(4^{th}\) BYMAT Conference for the opportunity.
|
2306.01834 | Spectral analogues of Barbarian asteroids among CO and CV chondrites | K- and L-type asteroids are considered to be the parent bodies of CV and CO
chondrites. Spectral models of L-types invoke an enrichment in CAI with respect
to the chondrites in the meteorite collection. Barbarian asteroids are
associated to L-type asteroids yet the relationship between these populations
is still not clear. We aim to investigate the link between the K- and L-type
and Barbarian asteroids and the CV and CO chondrites by means of spectral
matching of a large number of reflectance spectra of objects from either
population. We seek to identify matches based on observed rather than modelled
spectral features. We employ a matching criterion that accounts for the
residuals and the correlation of the spectral features. The only free parameter
in the comparison is the degree of alteration of the asteroids with respect to
the meteorites expressed via an exponential model. We derive an absolute scale
of similarity between the spectra using laboratory data from irradiation
experiments. CVOxA chondrites are the best match to the asteroids, in
particular to K-type (7 out of 11 asteroids matched) and Barbarians (11 out of
16). CO chondrites provide convincing matches for K-types (5 out of 11) and
Barbarians (7 out of 16) as well. A single non-Barbarian L-type is matched to a
meteorite. Only a few asteroids are matched to CVOxB and CVRed chondrites.
Barbarian asteroids are represented among CO and CVOxA chondrites without
requiring an enrichment of CAI in the asteroids. Four candidate Barbarian
asteroids are identified, three of which are classified as K-types. These
asteroids are favourable targets for polarimetric observations. The discrepancy
between L-type asteroids and CV and CO chondrites is likely related to the
ambiguity of the asteroid class itself. An extension of the taxonomy to include
polarimetric properties is required. | Max Mahlke, Jolantha Eschrig, Benoit Carry, Lydie Bonal, Pierre Beck | 2023-06-02T18:00:02Z | http://arxiv.org/abs/2306.01834v1 | # Spectral analogues of Barbarian asteroids among CO and CV chondrites
###### Abstract
K- and L-type asteroids are considered to be the parent bodies of CV and CO chondrites. Spectral models of L-types invoke an enrichment in calcium-aluminium-rich inclusions (CAIs) with respect to the chondrites in the meteorite collection. Barbarian asteroids are associated to L-type asteroids yet the relationship between these populations is still not clear. We aim to investigate the link between the K- and L-type and Barbarian asteroids and the CV and CO chondrites by means of spectral matching of a large number of reflectance spectra of objects from either population. We seek to identify matches based on observed rather than modelled spectral features. We employ a matching criterion that accounts for the residuals and the correlation of the spectral features. The only free parameter in the comparison is the degree of alteration of the asteroids with respect to the meteorites expressed via an exponential model. We derive an absolute scale of similarity between the spectra using laboratory data from irradiation experiments. CV\({}_{\text{OxA}}\) chondrites are the best match to the asteroids, in particular to K-type (7 out of 11 asteroids matched) and Barbarians (11 out of 16). CO chondrites provide convincing matches for K-types (5 out of 11) and Barbarians (7 out of 16) as well. A single non-Barbarian L-type is matched to a meteorite. Only a few asteroids are matched to CV\({}_{\text{OxB}}\) and CV\({}_{\text{Red}}\) chondrites. Barbarian asteroids are represented among CO and CV\({}_{\text{OxA}}\) chondrites without requiring an enrichment of CAIs in the asteroids. Four candidate Barbarian asteroids are identified, three of which are classified as K-types. These asteroids are favourable targets for polarimetric observations. The discrepancy between L-type asteroids and CV and CO chondrites is likely related to the ambiguity of the asteroid class itself. An extension of the taxonomy to include polarimetric properties is required.
Max Mahlke\({}^{1,2}\), Jolantha Eschrig\({}^{3}\), Benoit Carry\({}^{1}\), Lydie Bonal\({}^{3}\), and Pierre Beck\({}^{3}\)\({}^{1}\)Universite Cote d'Azur, Observatoire de la Cote d'Azur, CNRS, Laboratoire Lagrange, France
\({}^{2}\)Institut d'Astrophysique Spatiale, Universite Paris-Saclay, CNRS, F-91405 Orsay, France
\({}^{3}\)Universite Grenoble Alpes, Institut de Planetologie et d'Astrophysique de Grenoble, CNRS-CNES, 38000 Grenoble, France
## 1 Introduction
Establishing links between meteorites and their parent asteroids is a fundamental goal of planetary science (Gaffey, 1993; Binzel, 1995; Greenwood et al., 2020; DeMeo et al., 2022). Detailed mineralogical analyses of meteorites allow us to interpret observational features of single asteroids (e. g. McCord et al., 1970; Lazzaro et al., 2000; de Leon et al., 2004; Dibb et al., 2023) and trends among larger populations (e. g. Fornasier et al., 2010; Thomas & Binzel, 2010; de Leon et al., 2012; Vernazza et al., 2016; Eschrig et al., 2021, 2022), which in turn are used to infer their dynamical history and to constrain models of the formation of the Solar System.
K- and L-type asteroids are rare both in terms of their absolute number and their mass fraction with respect to the general population of the Main Belt; combined, they represent \(<\)10 % in a given mass range and section of the Main Belt (DeMeo & Carry, 2013; Mahlke et al., 2022). They are observationally distinct from members of the C- and S-complex due to their moderate albedos and colours, which typically fall between those of these latter complexes (Tedesco et al., 1989; DeMeo & Carry, 2013; Mainzer et al., 2011; Popescu et al., 2018). K-type asteroids show a 1 \(\upmu\)m feature associated to forsterite olivine and, in some cases, a weak 2 \(\upmu\)m feature associated to orthopyroxene (Bell, 1988; Mothe-Diniz & Carvano, 2005; Clark et al., 2009). L-type asteroids exhibit a 2 \(\upmu\)m feature attributed to Fe\({}^{2+}\)-bearing spinel and a weak or fully absent 1 \(\upmu\)m feature attributed to Fe-rich olivine (Burbine et al., 1992; Sunshine et al., 2008). As the depths of both features vary, the spectral appearance of both classes is continuous and members of either class are frequently reclassified into the other one (Bus & Binzel, 2002a; DeMeo et al., 2009; Mahlke et al., 2022).
Based on spectral similarities, K- and L-type asteroids have been associated primarily to two classes of carbonaceous chondrites (CCs), namely CO and CV (Bell, 1988; Burbine et al., 2001; Mothe-Diniz et al., 2008; Clark et al., 2009). These classes of anhydrous CCs show similar and partially overlapping distributions in oxygen-isotope compositions and petrographic properties, including similar volume percentages of chondrules, matrix, and refractory inclusions (Weisberg et al., 2006; Krot et al., 2014). All CO and CV chondrites are of petrographic type 3. They show the largest abundances of refractory inclusions among CCs with 13 vol% and 10 vol% respectively. More specifically, they include calcium-aluminium-rich inclusions (CAIs) and to a lesser degree amoeboid olivine aggregates (AOAs) (\(<\)5 vol%, Ebel et al., 2016; Pinto et al., 2021). CAIs are some of the oldest components in chondrites and consist of various minerals including melilite, forsterite, and spinel. They are believed to have condensed at high temperatures and low pressures within the solar nebula. AOAs are micro- to millimetre sized aggregates of forsterite, Fe-Ni metal, spinel and anorthite, among others. Most AOAs did not undergo melting (Scott & Krot, 2014).
The key differences between CO and CV chondrites lie in their whole-rock compositions, where the latter are generally enriched in lithophile elements with respect to CO chondrites while CO are generally enriched in siderophile elements. Furthermore, CO chondrites have considerably smaller chondrules (average diameter of 0.15 mm, Krot et al., 2014) compared to CV (1 mm). Spectrally, both chondrite classes show 1 \(\upmu\)m features attributed to olivine and 2 \(\upmu\)m features attributed to Fe\({}^{2+}\)-bearing spinel (Cloutis et al., 2012, 2015). The 2 \(\upmu\)m feature is generally absent in CO chondrites of petrographic type \(\leq\) 3.1, while the 1 \(\upmu\)m feature becomes more pronounced with thermal metamorphism
(Cloutis et al., 2012b; Eschrig et al., 2021).
CV chondrites are further subdivided into the reduced \(\mathrm{CV_{Red}}\) and the oxidised \(\mathrm{CV_{OxA}}\) and \(\mathrm{CV_{OxB}}\) (McSween, 1977; Weisberg et al., 1997). The subgroups are based on different compositional and petrographic properties (Krot et al., 1995; Cloutis et al., 2012c). In particular, in comparison to \(\mathrm{CV_{OxA}}\) and \(\mathrm{CV_{OxB}}\), \(\mathrm{CV_{Red}}\) chondrites are characterised by (i) a lower abundance of matrix, (ii) a higher abundance of metal, and (iii) the presence of Ni-poor sulfides. In comparison to \(\mathrm{CV_{OxB}}\), \(\mathrm{CV_{OxA}}\) are characterised by (i) similar matrix abundance, (ii) a higher abundance of metal, (iii) the presence of metal almost exclusively in the form of awaruite, (iv) lower Ni content of sulfides, and (v) lower magnetic susceptibility and saturation remanence (Bonal et al., 2020). The oxidised CV chondrites (in particular \(\mathrm{CV_{OxA}}\)) generally have larger petrographic types (\(>\)3.6) than the reduced CV and the CO chondrites (Bonal et al., 2016, 2020). Cloutis et al. (2012c) did not identify differences in the spectral appearance between the three subgroups exceeding the variability of the spectra within a single subgroup, while Eschrig et al. (2021) observe systematic differences in the depths and widths of the 1 \(\upmu\)m and 2 \(\upmu\)m features, in particular between \(\mathrm{CV_{Red}}\) and \(\mathrm{CV_{OxA}}\). Eschrig et al. (2021) further note the spectral similarity between CO chondrites and the \(\mathrm{CV_{OxA}}\) subgroup.
While it is commonly assumed that each CC class is derived from an individual parent body, Gattacceca et al. (2020) conclude, based on petrographic and isotopic properties, that oxidised and reduced CV chondrites have two distinct parent bodies. Furthermore, Greenwood et al. (2010) suggest that the oxidised CV subgroups and CK chondrites may have formed in the same parent body given their similar oxygen isotopes and elemental abundances. These authors propose that the thermally metamorphosed CK chondrites represent the core of this parent body while the CV chondrites form the outer shell. A possible remnant of the partially differentiated CV-CK parent body could be the Eos family. Its members are predominantly K-types and show a spectral variability that is consistent with being composed of a partially differentiated ordinary-chondritic parent body (Doressoundiram et al., 1998; Mothe-Diniz et al., 2008; Greenwood et al., 2010). A further candidate family may be the Eunomia family. Its parent body (15) _Eunomia_ appears partially differentiated with an olivine-rich composition (Nathues et al., 2005).
Like K-types, L-type asteroids are commonly associated to CO and CV chondrites (Burbine et al., 1992, 2001). Using radiative transfer models, Sunshine et al. (2008) show that the spectra of (234) _Barbara_, (387) _Aquitania_, and (980) _Anacostia_ can be modelled using endmembers consisting of olivine, CAI-free matrix from \(\mathrm{CV_{OxA}}\) Allende, and a subtype of CAI (fluffy type A) common in CV chondrites in addition to a slope component. The derived CAI abundances are between 22 % and 39 %. A large abundance of refractory inclusions would necessitate an early formation of these asteroids, making them the most ancient probes of the Solar System formation among the small bodies (Sunshine et al., 2008).
Devogele et al. (2018) extend this analysis to a larger sample of L-type asteroids and include an endmember spectrum consisting of a bulk measurement of \(\mathrm{CV_{OxA}}\) Y-86751. For the sample of 28 L-types, the required CAI abundance using the same \(\mathrm{CV_{OxA}}\) Allende endmember as Sunshine et al. (2008) is \((28\pm 13)\) vol%, while for the \(\mathrm{CV_{OxA}}\) Y-86751 endmember, a CAI abundance of \((14\pm 10)\) vol% is required to spectrally match the same sample. The highest refractory inclusion abundance observed in meteorites is 13 vol% for CO chondrites when accounting for both CAIs and AOAs (Brearley et al., 1998; Krot et al., 2014).
A different population of asteroids commonly associated to L-types are the Barbarians. Unlike K- and L-types, this group is defined based on polarimetric rather than spectral features. Specifically, Barbarians are defined based on a high inversion angle (\(\alpha_{\mathrm{min}}>25\) deg) of the negative branch of the polarimetric phase curve, as observed for (234) _Barbara_ by Cellino et al. (2006). While Devogele et al. (2018) concluded that L-types as defined by DeMeo et al. (2009) and Barbarian asteroids are identical populations, Mahlke et al. (2022) show that Barbarians exhibit a spectral variability that is larger than permissible for a single taxonomic class. Nevertheless, Devogele et al. (2018) show that the Barbarian polarimetric feature is correlated with the modelled abundance of CAI in the asteroid, and the authors suggest CAI enrichment as a possible mechanism for the large polarimetric inversion angle. Frattin et al. (2019) measure \(\alpha_{\mathrm{min}}=(22\pm 1)\) deg for \(\mathrm{CV_{OxA}}\) Allende and \(\alpha_{\mathrm{min}}=(20\pm 5)\) deg for CV \(\mathrm{DaG}\) 521 and COs \(\mathrm{FRO}\) 95002 and \(\mathrm{FRO}\) 99040, which does not allow the reported correlation to be confirmed or denied. An alternative explanation is a heterogeneity of high- and low-albedo particles on the asteroid surfaces (Gil-Hutton et al., 2008).
In this work, we investigate the spectral match between CO and CV chondrites and K-type, L-type, and Barbarian asteroids. One of the main focuses of our analysis is the question of whether a larger sample of asteroid and, in particular, meteorite spectra may reveal matches between the chondrites and the asteroids without requiring an enrichment in CAIs. We further divide the CV chondrites into the subgroups \(\mathrm{CV_{OxA}}\), \(\mathrm{CV_{OxB}}\), and \(\mathrm{CV_{Red}}\), in line with the current interpretation of the CV class in the literature. In Sect. 2, we outline the sample preparation of asteroid and meteorite spectra as well as the matching procedure. In Sect. 3, we present our results. In Sect. 4, we draw conclusions based on the derived similarities of the populations.
## 2 Methodology
In this section, we first outline our methods of sample collection and preparation, followed by a description of the matching algorithm and the derivation of an absolute similarity scale used to identify matching pairs of asteroids and meteorites.
### Sample selection
#### 2.1.1 Asteroids
The asteroid spectra used here are compiled from various online repositories and publications (refer to Appendix A). The compilation and dataset are described in detail in Mahlke et al. (2022). From this dataset, we select visible-near-infrared (VisNIR) spectra from 0.45 \(\upmu\)m to 2.45 \(\upmu\)m of asteroids classified as K-types and L-types in Mahlke et al. (2022) (meaning that the probability to be K- or L-type is larger than any other class probability) as well as of confirmed Barbarian asteroids following the census presented in Devogele et al. (2018). The Barbarian asteroids are not necessarily classified as K- or L-types in Mahlke et al. (2022), as is the case for S-type (980) _Anacostia_.
Figure 1: Reflectance spectra of non-Barbarian K- and L-type asteroids (left) and Barbarian asteroids (right). The spectra are sorted by class and decreasing NIR slope. Wavelengths below 0.7 μm (dotted line) are excluded from the following analysis, as outlined in the text. The spectra are shifted along the y-axis for comparability. Their references are given in Appendix A.
Figure 2: Reflectance spectra of CK, CO, and CV chondrites. The spectra are sorted by class and decreasing NIR slope. The two spectra from the RELAB database are marked by an ‘R’ beside the name of the respective meteorite. Wavelengths below 0.7 μm (dotted line) are excluded from the following analysis. The spectra are shifted along the y-axis for comparability. The spectra of the CO chondrites are from Eschrig et al. (2019), and those of CV chondrites are from Eschrig et al. (2019). The measurement of the CK chondrite is unpublished (J. Eschrig).
Spectra with a low signal-to-noise ratio, in particular towards the near-infrared (NIR), are rejected following visual inspection. An example is given in Fig. 1, where the noise towards the NIR does not allow us to reliably identify the presence and depth of a feature around 2 \(\upmu\)m. In total, there are 48 spectra, which are shown in Fig. 1. Twelve spectra belong to 11 K-types, 24 spectra to 20 L-types, 9 spectra belong to 8 M-types, 2 spectra belong to 2 P-types, and 1 spectrum belongs to 1 S-type.
Selecting K- and L-types based on taxonomic classifications is not straightforward because of their spectral variability and the continuity between the classes, making a clear separation challenging (DeMeo et al., 2009; Mahlke et al., 2022). L-types are further prone to misclassification as S-types and vice versa due to their 2 \(\upmu\)m bands. Using a probabilistic classification scheme shows the uncertainty in the spectral classification. Table 1 gives the taxonomic classifications of the asteroids in this study in the systems of Mahlke et al. (2022) and Bus-DeMeo (Bus & Binzel, 2002; DeMeo et al., 2009). For spectra not classified in DeMeo et al. (2009), we used the classy tool\({}^{1}\) to classify them. In some cases, this required extrapolation of the observed spectral range. The maximum extrapolated range represents 7.1% of the classified spectrum and hence we do not expect this to affect the resulting classification.
Footnote 1: [https://classy.readthedocs.io](https://classy.readthedocs.io)
As class definitions in Mahlke et al. (2022) are based on Gaussian distributions, the large spectral variability among L-types necessarily leads to the inclusion of edge cases with significant probabilities of belonging to other taxonomic classes, as shown in Table 1. Example objects are (1658) _Innes_ and (980) _Anacostia_. The class probabilities could be used to exclude asteroids from this study in case of an ambiguous classification; however, we choose not to cut the sample based on the probabilities as the following analysis is on a per-object basis and there is no downside to having a potentially misclassified object in the comparison sample.
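To make the selection rule explicit, the following short sketch (ours; the data layout and names are hypothetical and not those of the classy tool) keeps an object if its most probable taxonomic class is K or L, or if it is a confirmed Barbarian:

```python
# Illustrative sketch of the selection rule described above (not the authors' code).
# `class_probabilities` maps an asteroid designation to its class probabilities;
# `barbarians` is the census of confirmed Barbarian asteroids.

def select_sample(class_probabilities, barbarians):
    selected = []
    for name, probs in class_probabilities.items():
        most_probable = max(probs, key=probs.get)  # class with the highest probability
        if most_probable in ("K", "L") or name in barbarians:
            selected.append(name)
    return selected

# Hypothetical input values, for illustration only
class_probabilities = {
    "asteroid A": {"K": 0.6, "L": 0.3, "S": 0.1},   # kept: most probable class is K
    "asteroid B": {"S": 0.5, "L": 0.4, "K": 0.1},   # kept only because it is a Barbarian
    "asteroid C": {"C": 0.7, "X": 0.2, "L": 0.1},   # rejected
}
barbarians = {"asteroid B"}
print(select_sample(class_probabilities, barbarians))  # ['asteroid A', 'asteroid B']
```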
Further indicated in Table 1 is the Barbarian nature of the asteroids, based on results from Devogele et al. (2018) and Bendjoya et al. (2022). Asteroids are marked with a dash if there are insufficient polarimetric data to determine the Barbarian nature.
Figure 1 shows the variability in spectral features and slopes of the K- and L-types as well as of the Barbarian asteroids. The 2 \(\upmu\)m band in L-types varies between prominent (e.g. (234) _Barbara_) and nearly absent ((824) _Anastasia_), while some depict a 1 \(\upmu\)m band ((3043) _San Diego_) and resemble S-types. For K-types, a variability of the strength of the 1 \(\upmu\)m band is well established (Clark et al., 2009). Among the Barbarian asteroids, we observe a similar variability to that seen among the L-types: (824) _Anastasia_ appears blue and featureless, while (980) _Anacostia_ has a red slope with bands present around 1 \(\upmu\)m and 2 \(\upmu\)m. Nevertheless, as discussed in Mahlke et al. (2022), the class boundaries between K, L, and M based on VisNIR spectra and visual albedos are continuous, giving rise to edge cases without a conclusive classification.
Six asteroids are represented with more than one spectrum in the sample. We choose this to understand the systematic uncertainty that enters into the asteroid-meteorite matching when using a single spectrum of an individual as a reference. For example, the duplicate spectra \(b\) and \(c\) of (599) _Luisa_ contain the same NIR spectrum but different visible spectra, which results in a noticeable shift of the visible feature from around 0.8 um to 1.0 um.
#### 2.1.2 Meteorites
A total of 41 reflectance spectra of 40 individual chondrites are analysed in this study, including 15 CO and 24 CV spectra, the latter of which are divided into 10 CV\({}_{\rm OAA}\), 8 CV\({}_{\rm O\alpha B}\), and 6 CV\({}_{\rm Red}\), as shown in Fig. 2. The majority of the spectra were presented in Eschrig et al. (2019, 2021, 2021) and are available online in the SSHADE database (Schmitt et al., 2018).23
Footnote 2: [https://www.sshade.eu/data/experiment/EXPERIM_LB_20191220_001](https://www.sshade.eu/data/experiment/EXPERIM_LB_20191220_001)
Footnote 3: [https://www.sshade.eu/data/experiment/EXPERIM_LB_20191220_002](https://www.sshade.eu/data/experiment/EXPERIM_LB_20191220_002)
We further add one spectrum for each of the CV\({}_{\rm OAA}\) chondrites Allende and Y-86751 from the RELAB database (Pieters & Hiroli, 2004, respective specimen IDs: MT_JM-071 and MP-TXH-009). These spectra are used in Sunshine et al. (2008) and Devogele et al. (2018) to study the mineralogical abundances of L-type asteroids and therefore allow us to compare our results to those studies. We do not add more spectra from RELAB as we aim for consistent sample treatment and measurement conditions for the meteorite spectra in order to reduce the spectral variability introduced by changes in observation geometry and sample properties such as the grain size (Cloutis et al., 2012, 2012; Eschrig et al., 2021). Finally, we acquire a spectrum of CK4 chondrite ALH 85002 to extend the sample towards the analogue proposed by Mothe-Diniz et al. (2008) for K-types of the Eos family. More samples of CK chondrites were not available to us for this work. Given the variability of CK chondrites reported by Cloutis et al. (2012), any conclusion based on a single sample is highly tentative. Nevertheless, we choose to include this sample based on the relationship between CV and CK chondrites proposed by Greenwood et al. (2010).
Apart from the two RELAB measurements, all reflectance spectra were acquired with a consistent sample preparation and measurement procedure. The chondrite samples were hand ground to a powder of approximately submillimetre grain size (Garenne et al., 2016) using a pestle and mortar. In contrast to the method used to obtain the RELAB spectra, no sieving was performed by Eschrig et al. (2021) in order to avoid a selection effect of harder-to-grind chondrite components. Garenne et al. (2016) estimate the average grain size of hand-ground chondrite samples to 100 um to 200 um. 50 mg of the chondrite powder was added to a sample holder and the surface was flattened using a spatula to facilitate comparison between measurements. Reflectance spectra in the range of 340 nm to 4200 nm were obtained at 80 \({}^{\circ}\)C under vacuum to eliminate terrestrial water contamination using a measuring geometry of \(i=0^{\circ}\), \(e=30^{\circ}\).
There are two spectra available for CV Allende, one from RELAB and one from Eschrig et al. (2021). While the former was measured on a CAI-free powder of \(<\)38 um grain size under ambient temperatures and pressures, the spectrum in Eschrig et al. (2021) was taken on the bulk powder using the conditions described above. These differences in measuring conditions may partly explain the differences observed between these two spectra. Both the removal of the CAI component and the decrease
in grain size of the RELAB sample with respect to that from Eschrig et al. (2021) could explain the decrease in band depth of the 2 um feature (Mustard and Hays, 1997; Eschrig et al., 2022).
Fig. 2 reveals a large degree of spectral variability between the meteorite classes and even among samples of the same class or subclass. Both CV and CO spectra are variable in band structure and slope. The spectra of CO chondrites tend to have more pronounced 1 um bands though there are samples such as MIL 07193, which show no band at all. Cloutis et al. (2012c) note that the petrologic parameters used to differentiate CO and CV chondrites do not give rise to appreciable differences in the spectra, in particular on fine-grained parent body surfaces. Furthermore, the authors cannot establish spectral differences between CV subtypes, while they are apparent in the sample of Eschrig et al. (2021), as shown in Fig. 2 and discussed in the original publication. The spectra of CV\({}_{\text{OAA}}\) look similar to those of CO chondrites. Also apparent are large differences in the visible slope, likely due to the formation of ferric oxides as part of terrestrial weathering (Salisbury and Hunt, 1974; Gooding, 1982; Cloutis et al., 2012c; Eschrig et al., 2021), and we exclude the region below 0.7 um from the analysis due to this systematic uncertainty.
### Spectral matching of asteroids and meteorites
We first define a criterion to quantify the similarity between two reflectance spectra and then outline the assumptions we make to define an absolute scale of similarity.
#### 2.2.1 Similarity criterion \(\Phi\)
To quantify the similarity between two reflectance spectra \(\mathbf{X}\) and \(\mathbf{Y}\) consisting of \(N\) datapoints \(x_{i}\) and \(y_{i}\) with \(i\in\{1,\dots,N\}\) sampled at the same wavelengths \(\lambda_{i}\), we combine two criteria presented in Popescu et al. (2012). The first criterion quantifies the similarity by means of the residuals \(e_{i}\) given by \((x_{i}-y_{i})\),
\[\Phi_{\text{res}}=\frac{1}{N}\sqrt{\sum_{i}^{N}(e_{i}-\tilde{e})^{2}}, \tag{1}\]
where \(\tilde{e}\) is the mean residual value. Smaller values of \(\Phi_{\text{res}}\) indicate an increasing similarity between the spectra. The second criterion quantifies the covariance \(\text{cov}(\mathbf{X},\mathbf{Y})\) of the curves,
\[\Phi_{\text{cov}}=\frac{\text{cov}(\mathbf{X},\mathbf{Y})}{\sigma_{\mathbf{X} }\sigma_{\mathbf{Y}}}, \tag{2}\]
where \(\sigma_{\mathbf{X}}\) and \(\sigma_{\mathbf{Y}}\) are the respective standard deviations. Larger values of \(\Phi_{\text{cov}}\) indicate an increasing similarity between the spectra. In particular, the correlation of the spectra quantifies their similarity in potential absorption features. Both criteria are combined to give the similarity criterion \(\Phi\),
\[\Phi=\frac{\Phi_{\text{cov}}}{\Phi_{\text{res}}}, \tag{3}\]
where larger values indicate an increasing similarity between the spectra.
Prior to the comparison, all reflectance spectra are smoothed using a Savitzky-Golay filter (Savitzky and Golay, 1964). This filter computes a polynomial fit to all data points in a rolling window of a user-defined width and replaces the value at the centre of the window by the value of the fitted polynomial. By visual inspection of the smoothing, we choose a polynomial degree of 3 and a rolling-window width of 41 data points. For (172) _Baucis_, we use a width of 95 data points to smooth the systematic artefact around 0.8 um. Finally, the spectra are resampled to a uniform wavelength grid to ensure that the computed similarity criteria are comparable.
#### 2.2.2 Accounting for secondary spectral alterations
The reflectance spectra of asteroids and meteorites are shaped primarily by their chemical composition and mineralogy, which are the properties we aim to compare in asteroids and meteorites. However, in second order, spectra are shaped by surface properties such as the regolith grain size and porosity as well as alterations due to the space- or the terrestrial environment (Reddy et al., 2015; Cloutis et al., 2018; Eschrig et al., 2022). These secondary effects generally lead to a difference in the spectral appearance of meteorites and their parent-body asteroid populations, which has to be accounted for when establishing compositional connections (Gaffey, 1993; Binzel, 1995; DeMeo et al., 2022). In particular, for carbonaceous chondrites, observing geometry and surface properties such as grain size lead to considerable changes in the spectral appearance, that is, larger than differences due to space weathering (Cloutis et al., 2012b, c; Brunetto et al., 2014; Lantz et al., 2015, 2017; Vernazza et al., 2016). However, these effects generally alter the spectral slope and the depth of absorption bands rather than the central wavelength (Brunetto et al., 2014; Cloutis et al., 2012c; Beck et al., 2021). In this work, we therefore make the assumption that the secondary changes of the spectral continuum may be described in a joint model, for which we use the exponential space-weathering model derived in Brunetto et al. (2006):
\[W(\lambda)=K\exp\Big{(}-\frac{C_{s}}{\lambda}\Big{)}, \tag{4}\]
where \(W\) is the weathering function given by the ratio of the meteorite to the asteroid spectrum, \(K\) a normalising scale factor, and \(C_{S}\) the strength parameter of the space weathering. In the following, we refer to this model as the alteration model in order to highlight the fact that we account for all secondary spectral alterations with this exponential function, including but not limited to the space weathering. The larger \(C_{S}\), the stronger the exponential alteration of the asteroid spectrum with respect to the meteorite spectrum. Negative values of \(C_{S}\) correspond to an asteroid spectrum that is redder than the meteorite spectrum, and positive \(C_{S}\) corresponds to a blueing of the asteroid. As mentioned above, we limit the spectral comparison to the range of 0.7 um \(\leq\lambda\leq\) 2.45 um.
We therefore identify potential matches between asteroids and meteorites by evaluating the similarity of their ratio to the alteration model given in Eq. (4) using the similarity criterion \(\Phi\) given in Eq. (3). As we do not know the absolute values of the asteroid reflectance spectra, we normalise the ratio to a unit L2 norm prior to fitting the exponential model.
#### 2.2.3 Quantification of similarity
Brunetto et al. (2006) derive the model in Eq. (4) under the assumption that space weathering only marginally affects absorp
tion features. This assumption is validated for ordinary chondritic samples and mafic silicates. We therefore have to examine whether the model holds for CV/CO-like material as well. Furthermore, to assess the similarity of a match, we require a scale that indicates whether the computed values of \(\Phi\) are in agreement with the assumption that differences are induced by secondary alterations rather than by mineralogical or compositional features. We validate the model approach and compute this scale using results from irradiation experiments presented by Lantz et al. (2017) who irradiated spectra of CV3 Allende and CO3 chondrites Lance and FM 95002 with He\({}^{+}\) ions. The VisNIR spectra of the pristine and irradiated sample are shown in Fig. 3. For each level of irradiation, we divide the irradiated spectra of each meteorite by their pristine spectrum and compute the \(\Phi\) criterion between the obtained ratio and the fitted alteration model. Two example fits are shown on the left-hand side of Fig. 4 for meteorites Allende and Lance, showing the highest and the lowest similarity respectively. The right-hand side of Fig. 4 shows the results for all the meteorites at the different irradiation levels. The similarity \(\Phi\) between ratio and model decreases with increasing degree of irradiation (i. e. increasing degree of space weathering) for all three meteorites. We compute the mean \(\Phi\) for each irradiation level among the three meteorites, indicated by the dotted horizontal lines, and define them as \(S_{j}\), where \(j\in\{1,2,3,4\}\). In Fig. 4 and from here on, we normalise all \(\Phi\) values by the maximum value from this comparison (\(\Phi_{1\,/\,\mathrm{Pristine}}\) of CV Allende, top left in Fig. 4) for convenience.
The model in Eq. (4) is used to account for additional spectral alterations. On the other hand, the similarity scale derived in this manner only accounts for changes due to space weathering. As such, it is a strict scale and asteroid-meteorite pairs ruled out in this work may be considered matches if additional discrepancies due to other spectral effects are accounted for. Thus, the strict scale gives an increased reliability for the matches we identify.
## 3 Results
Each of the 48 asteroid spectra in Fig. 1 is divided by each of the 41 meteorite spectra in Fig. 2 and the resulting ratio is fit by the alteration model in Eq. (4). The similarities \(\Phi\) of all model fits are shown in Fig. 5. Meteorites are aligned along the \(y\)-axis, asteroids along the \(x\)-axis. Asteroids are indicated by their number, which is given in green if the asteroid is a confirmed Barbarian and in black otherwise. Darker colours in the figure indicate larger values of \(\Phi\) and therefore a better description of the asteroid-meteorite ratio by the alteration model. The cells are coloured red (blue) if the asteroid is redder (bluer) than the meteorite. Asteroid-meteorite pairs with \(\Phi\) values above a value \(S_{j}\) as defined in Sect. 2.2.3 have the respective index \(j\) superimposed in black. For these pairs, the spectral differences are well explained by the alteration model and we consider them to be matches. No match reaches the similarity levels \(S_{1}\) and \(S_{2}\). A
Figure 4: Evaluation of the exponential alteration model using irradiated meteorite samples. Left: Ratios of irradiated to pristine spectra. Here, we show the most (top) and the least (bottom) similar to the fitted alteration model, as quantified by the similarity criterion \(\Phi\). Right: Distribution of \(\Phi\) values for all three reference meteorites and the different irradiation levels \(j\). The dotted horizontal lines indicate the mean similarity \(S_{j}\) for each \(j\).
Figure 3: Reflectance spectra of pristine and irradiated carbonaceous chondrites from Lantz et al. (2017). Wavelengths below the dotted vertical line at 0.7 \(\upmu\)m are not accounted for in the analysis due to terrestrial weathering. Data courtesy of C. Lantz.
selection of the matches with the largest \(\Phi\) is shown in Fig. 6, where the asteroid spectra have been divided by the alteration model fit of the asteroid-meteorite ratio. The spectra are normalised to minimise the root-mean-square difference between them.
Fig. 5 shows that CV\({}_{\text{ONA}}\) chondrites have the most matches with asteroids that surpass the \(S_{4}\) threshold, in particular with K-types (7 out of 11 asteroids) and Barbarians (12 out of 16). For non-Barbarian L-types, only (4917) _Yurilvovia_ is a match, for 9 out of 11 chondrites. The remaining matches for CV\({}_{\text{ONA}}\) chondrites are almost exclusively for confirmed Barbarians. Only half of the L-type Barbarians are matched to chondrites, and all the matches are to different CV\({}_{\text{ONA}}\) chondrites than for the non-L-type Barbarians. (234) _Barbara_ matches Allende and QUE 94688; the latter match is shown in the bottom-right panel of Fig. 6. (402) _Chloe_ and (599) _Luisa_ match QUE 94688 as well, while non-L-type Barbarians do not show similarities to this specific CV\({}_{\text{ONA}}\) chondrite, with the exception of (387) _Aquitania_. Axtell is further only matched by (387) _Aquitania_ and no other non-L-type Barbarian. However, this latter group of asteroids shows large similarities to the remaining CV\({}_{\text{ONA}}\) chondrites, with the exception of (458) _Hercynia_ and one spectrum of (679) _Pax_. (980) _Anacostia_ does not resemble any meteorite in this study, which is consistent with its classification as S-type in Mahlke et al. (2022). Among K-types, (1903) _Adzhimushkaj_ and (2957) _Tatsuo_ match 5 and 8 of the 11 CV\({}_{\text{ONA}}\) meteorites, respectively. (3028) _Zhangguoxi_ matches 5 CV\({}_{\text{ONA}}\) chondrites while consistently showing a bluer spectrum than all of them.
Non-L-type Barbarians further match CV\({}_{\text{ONA}}\) and CV\({}_{\text{Red}}\) chondrites (five out of nine), while L- and K-types do not, with two exceptions among K-types. We note that if an asteroid matches a CV\({}_{\text{ONA}}\) chondrite, it tends to match a CO chondrite as well.
CO chondrites show a dichotomy, 6 out of 15 have noticeably smaller mean \(\Phi\) scores than the others in Fig. 5. Five out of 11 K-types and 7 out of 16 Barbarians are matched to CO chondrites.
Figure 5: Similarity \(\Phi\) values of the alteration model to the ratio of the respective pairs of asteroid–meteorite reflectance spectra. Darker colours indicate greater similarity. Red colours show that the asteroid is reddened with respect to the meteorite, and blue colours indicate blueing. Pairs that exceed the similarity levels \(S_{j}\) have the respective \(j\) superimposed. Black dotted lines separate different classes of asteroids and meteorites. Asteroids are labelled by their number, while Barbarian asteroids are marked with green labels.
For non-Barbarian asteroids, only (4917) _Yurilivovia_ matches CO chondrites, most consistently with ALH 85003.
The CK chondrite ALH 85002 shows the greatest similarity to K-types and matches (653) _Berenike_ and (742) _Edisona_. On the other hand, (980) _Anacostia_ is among the least similar asteroids to the ensemble of meteorites and does not have any match, which is likely due to its pronounced 1 \(\upmu\)m band.
## 4 Discussion
Under the model assumptions outlined in Sect. 2, we draw the following conclusions from the spectral comparison.
### Matches among CO and CV for Barbarians
We identify several CO and CV chondrites that match Barbarian asteroids, including (234) _Barbara_, which shows a prominent 2 \(\upmu\)m band and is matched to CV\({}_{\rm OSA}\) chondrites Axtell and QUE 94688. The latter match is shown in Fig. 6. (387) _Aquitania_ is matched to the same meteorites, while (599) _Luisa_ matches a variety of CV\({}_{\rm OSA}\) and CO chondrites. These asteroids were subjects of the studies of Sunshine et al. (2008) and Devogele et al. (2018) and were shown to match the CV endmember spectra after adding CAI components to the radiative transfer models. The two meteorite spectra used in these latter works (marked with the suffix 'R' in Fig. 5) do not match (234) _Barbara_ or (387) _Aquitania_ in this study either, which are the two Barbarians with the most prominent 2 \(\upmu\)m bands. However, (599) _Luisa_ is considered a match to CV\({}_{\rm OSA}\) Y-86751. Overall, non-L-type Barbarians (generally showing weaker 2 \(\upmu\)m bands than their L-type counterparts) are more similar to the meteorite spectra. CV\({}_{\rm OSA}\) chondrites are the most similar to Barbarian asteroids, while CO chondrites further show considerable similarities to Barbarians as well. The fact that the matched L-type Barbarians are associated to different CV\({}_{\rm OSA}\) than the non-L-type Barbarians highlights the spectral variability in terms of the Barbarian features. This finding further strengthens the link to CV\({}_{\rm OSA}\) chondrites, as the groups display a similar feature variability.
Barbarians (824) _Anastasia_ and (980) _Anacostia_ are not represented among the CO and CV chondrites. Both asteroids represent the spectral extremes of the Barbarian variability, both in terms of features (feature-poor versus feature-rich) and NIR slopes (blue versus red). A possible explanation is therefore that the corresponding extreme endmembers of the chondrites are not present in this study. Furthermore, (172) _Baucis_ has no match.
### Possible Barbarians
Based on the spectral similarities between CV\({}_{\rm OSA}\) chondrites and Barbarian asteroids, it may be worthwhile investigating the Barbarian nature of the remaining four asteroids that show significant similarities to the chondrites, which are the K-types (1903) _Adzhimushkaj_, (2957) _Tatsuo_, (3028) _Zhangguoxi_, and L-type (4917) _Yurilivovia_. To our knowledge, no polarimetric phase curves have been observed for these targets. We recommend prioritising them in order to confirm or rule out their Barbarian
Figure 6: Example matches of asteroids and meteorites. The asteroid spectra (black, solid) have been divided by the exponential alteration-model fitted to the ratio of the asteroid and meteorite spectrum. The similarity \(\Phi\) between the asteroid spectra and the meteorite spectra (red) does not account for wavelengths below 0.7 \(\upmu\)m (vertical, dotted line). The light grey and thick black lines show the spectra before and after smoothing, respectively. The given asteroid classes are from Mahlke et al. (2022). The spectrum of (234) _Barbara_ is the one from DeMeo et al. (2009).
### Matches with K-type asteroids inconclusive
Apart from three asteroids ((1903) _Adzhimushkaj_, (2957) _Tatsuo_, and (3028) _Zhangguoxi_, refer to next part), K-types only match a few meteorites in this study, mostly CO and CV\({}_{\text{OKA}}\) chondrites. (653) _Berenike_ and (742) _Edisona_ match the only CK chondrite ALH 85002. Both are members of the Eos-family, for which Mothe-Diniz et al. (2008) suggested CK chondrites as analogue materials based on comparison of NIR spectra. A larger comparison of K-types and CK chondrites using a consistent meteorite dataset could strengthen this link.
Considering the relationship between CV and CK chondrites proposed by Greenwood et al. (2010), we observe that two out of three matches with CK ALH 85002 are also matches to CV\({}_{\text{OKA}}\) chondrites. However, as noted above, our small sample size prevents us from reaching any form of conclusion here.
### Non-barbarian L-types are not parent bodies of CO and CV
The majority of non-Barbarian L-types are neither matched by CO nor CV. Even not considering possible class interlopers like (1658) _Innes_, (3043) _San Diego_, or (3066) _McFadden_ (all three with considerable probability of being S-types), L-types show little similarity to the chondrites in this study. Based on the comparison here, these L-types may be ruled out as parent bodies of CO and CV chondrites.
### Asteroid taxonomy would benefit from polarimetric measurements
The results displayed in Fig. 5 clearly show that Barbarian asteroids should have their own taxonomic class. However, their spectral variability prevents this in a taxonomic scheme based on spectroscopy and albedo only. Polarimetric properties should therefore be added to the asteroid taxonomy. Mahlke et al. (2022) highlight that polarimetric measurements are the most promising observable to resolve compositional ambiguities in the M-complex. The C-complex may further benefit significantly from the addition of polarimetric measurements into the taxonomic scheme including VisNIR spectra and the visual albedo.
The addition of polarimetric properties to the taxonomy is currently challenging because of the low numbers of observed asteroids. Efforts such as the Calern Asteroid Polarization Survey (CAPS, Bendjoya et al., 2022) are valuable steps towards a more descriptive taxonomy.
### Spectral matching for CCs requires larger sample size
In Sect. 2, we highlight the variability of reflectance spectra of both asteroids and meteorites. The analysis in Sect. 3 shows that this variability can alter the interpretation significantly: for any asteroid and meteorite population combination (except for L-types and CV\({}_{\text{OkB}}\) and CV\({}_{\text{Red}}\)), we identify both matching and non-matching pairs. Relations between populations may therefore only be established reliably when regarding a large number of objects. The three spectra of (599) _Luisa_ illustrate this issue: based on spectrum (a), (599) _Luisa_ does not match any meteorite in the comparison, while the spectra (b) and (c) are matched to 13 and 3 meteorites, respectively.
Concerning the spectra of meteorites, the variability based on the sample processing and observation technique used is well established. Here, we note in particular that the spectra of Allende from Eschrig et al. (2019) and from the RELAB database are matched to different asteroids, in agreement with the different sampling procedures outlined in Sect. 2.
## 5 Conclusion
K- and L-type asteroids are rare and have been associated to various classes of CC. For L-type asteroids, an enrichment in refractory inclusions is considered necessary to arrive at satisfying matches with meteorite spectra. In this study, we perform a large-scale comparison of asteroids and meteorites, focusing on K- and L-types as well as Barbarian asteroids and on their proposed matches, the CO and CV chondrites. The employed matching criterion \(\Phi\) emphasises the correlation between the compared reflectance spectra, which translates into an emphasis on matching absorption features. Spectral alterations are accounted for in a single exponential model function. We establish matches between Barbarian asteroids and CO and CV chondrites that do not require any additional spectral component. Four candidate Barbarian asteroids are identified based on their matches to the same chondrite classes as established Barbarians. For K-types and L-types, matches among CO and CV chondrites are sparse, and we rule out the possibility that these chondrite classes originate from non-Barbarian L-type asteroids.
## Acknowledgements
The authors thank the referee Julia de Leon for the thorough and constructive review which improved the manuscript. The authors further thank Cateline Lantz for providing the data of the irradiation experiments.
MM acknowledges funding from the European Space Agency in the framework of the Network Partnering Initiative. The view expressed in this publication can in no way be taken to reflect the official opinion of the European Space Agency.
Parts of this work have been funded by the ERC grant SOLARYS ERC-CoG2017-771691.
All (or part) of the data utilised in this publication were obtained and made available by the MITHNEOS MIT-Hawaii Near-Earth Object Spectroscopic Survey. The IRTF is operated by the University of Hawaii under contract 80HQTR19D0030 with the National Aeronautics and Space Administration. The MIT component of this work is supported by NASA grant 80NSSC18K0849.
|
2302.02612 | Theoretical description of optofluidic force induction | Optofluidic force induction (OF2i) is an optical nanoparticle
characterization scheme which achieves real-time optical counting with
single-particle sensitivity and high throughput. In a recent paper [\v{S}imi\'c
et al., Phys. Rev. Appl. 18, 024056 (2022)], we have demonstrated the working
principle for standardized polystrene nanoparticles, and have developed a
theoretical model to analyze the experimental data. In this paper we give a
detailed account of the model ingredients including the full working equations,
provide additional justification for the assumptions underlying OF2i, and
discuss directions for further developments and future research. | Marko Šimić, Christian Hill, Ulrich Hohenester | 2023-02-06T08:20:29Z | http://arxiv.org/abs/2302.02612v1 | # Theoretical description of optofluidic force induction
###### Abstract
Optofluidic force induction (of2i) is an optical nanoparticle characterization scheme which achieves real-time optical counting with single-particle sensitivity and high throughput. In a recent paper [Simic _et al._, Phys. Rev. Appl. **18**, 024056 (2022)], we have demonstrated the working principle for standardized polystyrene nanoparticles, and have developed a theoretical model to analyze the experimental data. In this paper we give a detailed account of the model ingredients including the full working equations, provide additional justification for the assumptions underlying or2i, and discuss directions for further developments and future research.
## I Introduction
Nanoparticle characterization in dispersion proves to be a challenging task, in particular for complex and heterogeneous particle systems [1]. Effects such as particle agglomeration and aggregation can lead to highly polydisperse or multi-modal systems, thus calling for robust, accurate and versatile characterization methods [2]. Existing technologies, such as nanoparticle tracking analysis [3] or electron microscopy, provide possibilities for single particle analysis, however, with the bottleneck of low particle throughput and offline measurements.
The recently introduced optofluidic force induction (of2i) technique addresses these problems using the principle of optical tweezers in combination with a continuous flow, in order to perform single particle analysis of polydisperse samples in real-time [4]. The physics underlying this scheme is similar to optical tweezer experiments, where a strongly focused laser beam is used to optically trap particles in three dimensions. The basic principle has been pioneered by Arthur Ashkin in 1970, and has been awarded the Nobel Prize for Physics in 2018 [5]. Optical tweezers allow for precise control of orientation, position and arrangement of the particles under investigation [6; 7; 8]. Besides holding particles in place, a weakly focused laser beam can also achieve two-dimensional optical trapping, where the particles can move along the optical axis of the exciting beam. Within the context of nanoparticle characterizations, this can be employed for optical chromatography [9].
At the heart of optical tweezers simulations lies the calculation of the optical forces [10; 11]. These forces arise from the light-matter interaction of the exciting laser beam with a particle and the resulting photon momentum transfer, ultimately leading to a light scattering problem. While it is well established how such scattering problems can be solved for usual plane wave excitations within Mie theory [12], more attention is required when dealing with higher-order laser modes carrying orbital angular momentum [13; 14; 15], such as the Laguerre-Gaussian beams used in of2i. Again the light scattering theory for such excitations has been developed elsewhere [16; 17], but must be put together with the other ingredients of a full simulation approach with sufficient care. Here, in addition to optical forces, viscous drag and thermal fluctuations contribute considerably to the dynamics of a particle in a liquid medium [18].
In this paper we develop and discuss a four-step model for the simulation of of2i, which accounts for the incoming electromagnetic fields of a Laguerre-Gauss beam, solves Maxwell's equations for such excitations and spherical particles, computes from the solutions of Maxwell's equations the optical scattering spectra and optical forces, and uses Newton's equations of motion to simulate the particle trajectories. A number of representative and prototypical setups are investigated to estimate the importance of the various ingredients entering our model.
The outline of the paper is as follows. In Sec. II we present the theory and derivation of the OF2i trajectory model. The resulting particle trajectories are presented in Sec. III, and we provide detailed discussions of the impact of particle refractive indices, sphere sizes, and Brownian motion. Finally, in Sec. IV we summarize our results and give an outlook to future work. Some of the theoretical details are given in the Appendices.
## II Theory
The basic principle of2i is sketched in Fig. 1. The nanoparticles to be analyzed are immersed in a solution and are pumped through a microfluidic flow cell. Additionally, a weakly focused laser beam propagates in the flow direction. The purpose of this laser is three-fold. First, the optical forces in the transverse directions \(x\),\(y\) (see Fig. 1) push the nanoparticles to the intensity maxima of the laser field, such that particles propagating sufficiently close to the maxima become trapped in the transverse directions. Second, the optical forces in the laser propagation direction \(z\) push the particles and lead to velocity changes depending on size and material properties. Third, light is scattered off the particles and can
be monitored outside the flow cell. By analyzing the velocity changes of the individual particles being transported through the focus region, one obtains detailed information about their properties. The light scattering intensities and emission patterns provide additional information, as will be discussed in more detail below.
An important ingredient of of2i is the use of a vortex laser beam with an orbital angular momentum (oam) [13; 14; 15; 19]. Throughout this work we consider a weakly focused Laguerre-Gaussian laser beam with a polarization along \(x\), with the electric field [20] (see also Appendix A)
\[\mathbf{E}(r,\phi,z)\approx\mathscr{E}_{m}(r,z)e^{im\phi}\,\hat{\mathbf{x}}\,, \tag{1}\]
where \(m\) is the so-called topological charge associated with the oam, and \(\mathscr{E}_{m}(r,z)\) is the field profile in the radial and propagation directions. The intensity profile of such a beam is depicted in Fig. 1 for \(m=2\). Because of the topological charge, it has a ring-like distribution in the transverse directions with zero intensity in the center, and the trapped nanoparticles move along spiral-shaped trajectories through the focus region. This has the advantage that nanoparticles can bypass each other more easily and collisions are strongly surpressed in comparison to laser beams with an intensity maximum on the optical axis.
In Ref. [4] we have experimentally demonstrated the working principle of of2i for an ensemble of standard polystyrene nanoparticles with well-known size distributions, and have developed a theoretical model for the analysis of the experiments. In the remainder of this section, we give a detailed presentation of the various ingredients entering this model. We start by presenting the theory in its most general form, and then specialize on the implementations using either Mie theory or a fully numerical simulation approach.
### Four-step model for OF2i
The theoretical description of of2i consists of an electromagnetic part and a particle trajectory part. We first provide a brief summary of the theoretical ingredients and then ponder on the details. In the electromagnetic part, we account for the optical response of the nanoparticles and compute the optical forces and scattering fields, see also Fig. 2. We start with the incoming fields of the Laguerre-Gauss laser beam, \(\mathbf{E}_{\mathrm{inc}}\), \(\mathbf{H}_{\mathrm{inc}}\), which would be solutions of Maxwell's equations in absence of the nanoparticle. In presence of the nanoparticle we additionally have to consider the scattered fields \(\mathbf{E}_{\mathrm{sca}}\), \(\mathbf{H}_{\mathrm{sca}}\), which are chosen such that the boundary conditions of Maxwell's equations are fulfilled at the particle boundary. The sum of incoming and scattered fields then provides us with the total fields, which are the proper solutions of Maxwell's equations. From the deflection of the incoming fields we can compute the optical force \(\mathbf{F}_{\mathrm{opt}}\), as shown in Fig. 2 and discussed in more detail below. In the particle trajectory part, we consider a Newton's equation of
Figure 1: Schematics of optofluidic force induction (of2i). (a) Nanoparticles to be analyzed are transported through a microfluidic channel alongside a weakly focused laser beam with an optical vortex (optical angular momentum \(m=2\)). The dashed box indicates the region where the field distribution is shown in Fig. 2. The solid box indicates the region where in panel (b) the field intensity distribution of a nanosphere with a diameter of 2 \(\mu\)m, located at the intensity maximum, is shown.
Figure 2: Field distribution in the focus region of the laser, see also dashed box in Fig. 1(a). The light becomes deflected by the nanoparticle, through the actio-reactio principle an optical force (solid lines) is exerted on the particle that leads to a trapping in the transverse \(x\),\(y\) directions and a velocity change in the \(z\) direction. The positions of panels (a–e) are reported in the panel on the left. We show results for nanospheres with diameters of 500 and 1000 nm, respectively, the refractive index is \(n_{b}=1.33\) for the embedding medium (water) and \(n=1.59\) for the nanosphere (polystyrene). Note that in panels (e) the intensity is low and the fields are hardly visible.
motion for the nanoparticle,
\[m\bar{\mathbf{r}}=\mathbf{F}_{\text{opt}}(\mathbf{r})+\mathbf{F}_{\text{drag}}+\mathbf{F}_{\text{ stoch}}\,, \tag{2}\]
where \(m\) is the mass of the particle, which might include the added mass due to the fluid [21, Sec. 4.15], \(\mathbf{r}\) is the particle position, \(\mathbf{F}_{\text{drag}}\) the drag force of the particle moving through the fluid, and \(\mathbf{F}_{\text{stoch}}\) accounts for the stochastic fluid forces that are needed according to the fluctuation-dissipation theorem to counterbalance the drag forces [22]. By successively computing the optical forces and updating the particle position according to Eq. (2), we obtain the nanoparticle trajectories. Altogether, the theoretical model for of2i can be broken up into the following four steps.
1. Provide an expression for the incoming electromagnetic fields of the Laguerre-Gauss laser beam.
2. Solve Maxwell's equations in presence of the nanoparticle, using either an analytical or numerical approach. This step provides us with the scattered electromagnetic fields.
3. Use the total fields, this is the sum of incoming and scattered fields, to compute the optical force acting on the nanoparticle at a given position.
4. Use Newton's equation of motion including optical and microfluidic forces to obtain the particle trajectory.
In this work we will establish the methodology for this four-step model and discuss results of representative simulation setups. In the future we plan to extend this model by tracing the scattered electromagnetic fields through the imaging system, which will allow us a most direct comparison with the experimental results. For completeness, we here list the additional steps that will be needed to simulate the imaging system.
1. Propagate scattered electromagnetic fields through glass boundaries of microfluidic flow cell.
2. Simulate imaging of scattered electromagnetic fields, using for instance the approach of Richards and Wolf [23; 24; 25].
We start by discussing the electromagnetic part of our simulation approach. The power scattered by the nanoparticle is computed from the flow of scattered energy through the nanoparticle boundary [26]
\[P_{\text{sca}}=\frac{1}{2}\oint_{\partial V}\text{Re}\left(\mathbf{E}_{\text{sca} }\times\mathbf{H}_{\text{sca}}^{*}\right)\cdot d\mathbf{a}\,, \tag{3}\]
where \(\partial V\) is the particle boundary with the infinitesimal boundary element \(d\mathbf{a}\). In deriving this expression we have assumed the usual time harmonic dependence \(e^{-i\omega t}\) for the electromagnetic fields and have averaged over an oscillation cycle. Eq. (3) gives an estimate of how bright the nanoparticle appears in an imaging system, although a detailed analysis should additionally include the emission pattern of the scattered fields and the aforementioned deflection of these fields through lenses.
Similarly, the transfer of momentum from the electromagnetic fields to the nanoparticle, this is the optical force, can be computed from the net flux of momentum carried by the electromagnetic fields through the nanoparticle boundary and by utilizing momentum conservation in the composite system formed by the nanoparticle and the electromagnetic fields. This is, the inbalance of electromagnetic flux through the nanoparticle boundary provides us with the momentum transferred from the fields to the nanoparticle. For time harmonic electromagnetic fields and by averaging over an oscillation cycle, we obtain under the assumption of quasi-stationarity, where the nanoparticle motion is negligible on the time scale of the field oscillations, the expression [10; 27; 11; 25]
\[\mathbf{F}_{\text{opt}}=\frac{1}{2}\oint_{\partial V}\text{Re}\left[\tensor{ \theta}{\theta}-\frac{1}{2}\mathds{1}\text{tr}\!\left(\tensor{\theta}{\theta} \right)\right]\cdot d\mathbf{a}\,. \tag{4}\]
The term in brackets is Maxwell's stress tensor accounting for the momentum density flow of the electromagnetic fields, with [26]
\[\theta_{ij}=\varepsilon E_{i}\,E_{j}^{*}+\mu H_{i}\,H_{j}^{*}\,, \tag{5}\]
where \(\varepsilon\) and \(\mu\) are the permittivity and permeability of the embedding background medium, respectively. Eqs. (3) and (4) are the central expressions for the electromagnetic part of our theoretical modeling, and can be evaluated once the electromagnetic fields are at hand. Note that the expression for the optical force can be easily generalized to obtain optical torques acting on nanoparticles, which is of importance for non-spherical particle geometries [10; 25; 11].
For the trajectory part, we consider for the force on a small sphere moving with velocity \(\mathbf{v}\) through a viscous fluid the usual Stokes' drag valid for a creeping flow with a Reynolds number much smaller than one [28]
\[\mathbf{F}_{\text{drag}}=-6\pi\mu R_{\text{hyd}}\big{(}\mathbf{v}-\mathbf{v}_{\text{fluid }}\big{)}\,, \tag{6}\]
where \(\mathbf{v}_{\text{fluid}}\) is the velocity of the fluid and \(\mu\) the dynamic viscosity. In this work we set for simplicity \(R_{\text{hyd}}\) to the radius of the sphere, but in general this hydrodynamic radius might differ from the radius entering the optical calculations [29]. We will address this point in future work.
For sufficiently large spheres, say for diameters above 10 nm, the momentum relaxation time is so short that we can approximately set \(\dot{\mathbf{v}}\approx 0\)[30]. Also the stochastic forces don't play a decisive role for larger spheres, as will be discussed in Sec. III.3. The nanosphere's velocity \(\mathbf{v}\) is then obtained from the condition that the optical force is balanced by the drag force, and we get
\[\mathbf{v}(\mathbf{r})=\mathbf{v}_{\text{fluid}}+\frac{\mathbf{F}_{\text{opt}}(\mathbf{r})}{6\pi \eta R_{\text{hyd}}}\,. \tag{7}\]
We emphasize that our model contains no free parameters, and all laser, fluid, and nanoparticle parameters can be inferred in principle from experiment.
### Mie theory
Mie theory provides an efficient and versatile method for solving Maxwell's equations for spherical nanoparticles [12], as schematically depicted in Fig. 3. The basic idea is to expand the electromagnetic fields in a complete basis with spherical symmetry. The transverse fields can be expanded using [25; 26]
\[z_{\ell}(kr)\mathbf{X}_{\ell,m}(\theta,\phi)\,,\quad\nabla\times z_{\ell}(kr)\mathbf{X }_{\ell,m}(\theta,\phi)\,, \tag{8}\]
where \(z_{\ell}(kr)\) are spherical Bessel or Hankel functions, \(k\) is the wavenumber of the medium, and \(\mathbf{X}_{\ell,m}\) are the vector spherical harmonics. The angular degree and order are denoted with \(\ell\) and \(m\), respectively. The basis of Eq. (8) has the advantage that field matching at the nanosphere boundary can be done easily and seperately for each pair of \(\ell\), \(m\). Unfortunately, Mie theory is often complicated by the fact that the definitions of the various functions are not unique and different choices have been adopted in the literature, such that it is often difficult to compare results. We here largely follow the definitions given in [25; 16; 26]. For the incoming fields we choose spherical Bessel functions, which become plane waves at large distances \(kr\gg 1\). The incoming electromagnetic fields can then be expanded via
\[\mathbf{E}_{\rm inc} =\sum_{\ell,m}\left[b^{\rm inc}_{\ell,m}j_{\ell}\mathbf{X}_{\ell,m}+ \frac{i}{k}a^{\rm inc}_{\ell,m}\nabla\times j_{\ell}\mathbf{X}_{\ell,m}\right]Z\] \[\mathbf{H}_{\rm inc} =\sum_{\ell,m}\left[a^{\rm inc}_{\ell,m}j_{\ell}\mathbf{X}_{\ell,m}- \frac{i}{k}b^{\rm inc}_{\ell,m}\nabla\times j_{\ell}\mathbf{X}_{\ell,m}\right]\,, \tag{9}\]
where \(Z\) is the impedance and \(a^{\rm inc}_{\ell,m}\), \(b^{\rm inc}_{\ell,m}\) are the coefficients to be determined for specific incoming fields. Similarly, for the scattered fields outside the nanoparticle we choose spherical Hankel functions, which become outgoing spherical waves at large distances,
\[\mathbf{E}_{\rm sca} =-\sum_{\ell,m}\left[b^{\rm sca}_{\ell,m}h^{(1)}_{\ell}\mathbf{X}_{ \ell,m}+\frac{i}{k}a^{\rm sca}_{\ell,m}\nabla\times h^{(1)}_{\ell}\mathbf{X}_{ \ell,m}\right]Z\] \[\mathbf{H}_{\rm sca} =-\sum_{\ell,m}\left[a^{\rm sca}_{\ell,m}h^{(1)}_{\ell}\mathbf{X}_{ \ell,m}-\frac{i}{k}b^{\rm sca}_{\ell,m}\nabla\times h^{(1)}_{\ell}\mathbf{X}_{ \ell,m}\right]\,. \tag{10}\]
These scattered fields are uniquely determined upon knowledge of the coefficients \(a^{\rm sca}_{\ell,m}\), \(b^{\rm sca}_{\ell,m}\). Additionally, we need the scattered electromagnetic fields inside the nanosphere, which are identical to Eq. (10), however, with the replacement of the spherical Hankel by spherical Bessel functions that remain finite at the origin, and with different coefficients \(c^{\rm sca}_{\ell,m}\), \(d^{\rm sca}_{\ell,m}\). Below we will discuss how the scattering coefficients can be obtained through field matching at the sphere boundary.
For the incoming fields we consider a weakly focused Laguerre-Gauss laser beam and employ the paraxial approximation [20], which is well justified for our case of weak focusing. The explicit expressions are given in Appendix A. In [16] the coefficients \(a^{\rm inc}_{\ell,m}\), \(b^{\rm inc}_{\ell,m}\) were computed by matching the incoming fields and the Mie expansion of Eq. (9) in the far-field limit. We here proceed somewhat differently and compute the coefficients using the field values on the sphere boundary [25, Eq. (E.5)]
\[a^{\rm inc}_{\ell,m}j_{\ell}(kR) =-\frac{Z^{-1}k}{\sqrt{\ell(\ell+1)}}\oint Y^{*}_{\ell,m}\Big{[} \mathbf{r}\cdot\mathbf{E}_{\rm inc}(\mathbf{r}+\mathbf{r}_{0})\Big{]}\,d\Omega\] \[b^{\rm inc}_{\ell,m}j_{\ell}(kR) =\phantom{-}\frac{k}{\sqrt{\ell(\ell+1)}}\oint Y^{*}_{\ell,m} \Big{[}\mathbf{r}\cdot\mathbf{H}_{\rm inc}(\mathbf{r}+\mathbf{r}_{0})\Big{]}\,d\Omega\,, \tag{11}\]
where the integrals extend over the unit sphere, and \(\mathbf{r}\) is a position determined by the unit sphere angles and located on the sphere with radius \(R\). In Mie theory, the coefficients have to be computed for a reference frame where the sphere center is in the origin. As the incoming electromagnetic fields are defined in a reference frame where the focus is the origin, we have to translate \(\mathbf{r}\) by the center position \(\mathbf{r}_{0}\) of the nanosphere. The computation of the integrals can be considerably accelerated by using an equidistant grid for the azimuthal coordinate and noting that the resulting integral can be computed using the fast Fourier transform [31]. The remaining integral over the polar angle is computed by means of a Legendre-Gauss quadrature. The implementation of Eq. (11) can be easily tested for an incoming plane wave through comparison with the resulting analytic expressions [26, Eq. (10.53)].
The computation of the scattered fields is particularly simple within Mie theory because each pair of angular degrees and orders \(\ell\), \(m\) can be handled separately. Field matching is accomplished through the so-called Mie co
Figure 3: Schematics of optical simulation approach. The incoming fields of the vortex laser are expanded in vector spherical harmonics (vsh), and are used together with the Mie coefficients to compute the scattered electromagnetic fields. Once the incoming and scattered fields are at hand, we can compute optical response properties such as the scattered light or the optical forces. In the right panel we show the \(z\)-component of the force density, this is the integrand of Eq. (4), on the sphere boundary.
efficients [12; 25]
\[a_{\ell} =\frac{Z_{2}\psi_{\ell}(x_{1})\psi_{\ell}^{\prime}(x_{2})-Z_{1}\psi_ {\ell}^{\prime}(x_{1})\psi_{\ell}(x_{2})}{Z_{2}\psi_{\ell}(x_{1})\xi_{\ell}^{ \prime}(x_{2})-Z_{1}\psi_{\ell}^{\prime}(x_{1})\xi_{\ell}(x_{2})}\] \[b_{\ell} =\frac{Z_{2}\psi_{\ell}^{\prime}(x_{1})\psi_{\ell}(x_{2})-Z_{1} \psi_{\ell}(x_{1})\psi_{\ell}^{\prime}(x_{2})}{Z_{2}\psi_{\ell}^{\prime}(x_{1} )\xi_{\ell}(x_{2})-Z_{1}\psi_{\ell}(x_{1})\xi_{\ell}^{\prime}(x_{2})}\,, \tag{12}\]
with \(k_{1}\), \(k_{2}\) being the wavenumbers of the medium inside and outside the nanosphere, respectively, and \(Z_{1}\), \(Z_{2}\) the corresponding impedances. We have introduced the abbreviation \(x=kR\) and the Riccati-Bessel functions \(\psi_{\ell}(x)=xj_{\ell}(x)\), \(\xi_{\ell}(x)=xh_{\ell}^{(1)}(x)\), where a prime indicates the derivative with respect to \(x\). With the Mie coefficients, the scattered and incoming fields can be related through
\[a_{\ell,m}^{\text{sca}}=a_{\ell}\,a_{\ell,m}^{\text{inc}}\,,\quad b_{\ell,m}^ {\text{sca}}=b_{\ell}\,b_{\ell,m}^{\text{inc}}\,. \tag{13}\]
Thus, the entire solution of Maxwell's equations for spherical particles is embodied in the Mie coefficients of Eq. (12), where the matching of fields at the particle boundary has been explicitly worked out. Mie theory can be also used to compute the optical forces from the incoming and scattering coefficients only. We here follow the approach of [17] where analytic expressions are derived. Appendix B gives the explicit formulas used in this work.
### Boundary element method
We additionally performed simulations using a fully numerical Maxwell solver. In this work these simulations are mainly used for testing purposes to check the proper implementation of our Mie theory. However, in future work such an approach might be useful for the investigation of non-spherical or coupled particles. We employ our home-made nanobem solver [32] which is based on a boundary element method (bem) approach that can be easily adopted for the nanospheres under study. Details of the approach and typical runtime examples are discussed in some length in [32]. In the present work we use the optforce function of the galerkin.solution class in order to directly compute the optical forces. Results of our bem simulations will be presented in the next section.
## III Results
Using the methodology developed in the previous section, we performed simulations with the same parameters as previously used in [4]. We consider a Laguerre-Gaussian beam with a topological charge of \(m=2\), a beam waist of \(w_{0}=4.78\)\(\mu\)m for the fundamental Gaussian beam, a wavelength of \(\lambda=532\) nm, and a power of \(1.65\) W. For details of the incoming laser fields see Appendix A. The fluid velocity is set to \(v_{\text{fluid}}=0.3\) mm/s and we use material parameters representative of water, namely a dynamic viscosity of \(\eta=9.544\times 10^{-4}\) Pa s and a refractive index of \(n_{b}=1.33\). The refractive index of the nanospheres is set to \(n=1.59\), a value representative for polystyrene, if not noted differently.
Figure 4 reports results for the optical force in the focus region. The force \(F_{z}\) in the longitudinal direction is largest at the intensity maxima of the vortex beam, see Fig. 1. There the sphere is pushed in the positive \(z\) direction leading to the velocity enhancements to be discussed below. The force \(F_{x}\) in the transverse direction leads to trapping along \(x\), and vanishes at the trapping positions \(\pm w_{0}\), where the intensity and \(F_{z}\) is largest. Additionally, there is an unstable equilibrium position at \(x=0\) where no force is present because of the ring-like intensity profile of the vortex beam. In the figure we compare different computation schemes, namely Mie theory with different cutoff numbers for the angular order, the determination of the incoming Mie coefficients using either Eq. (13) or the scheme presented in [16], and a fully numerical approach based on the boundary element method. All schemes give indistinguishable results, thus demonstrating the accuracy and robustness of our approach.
Figure 5 shows as function of the angular degree \(\ell\) the absolute values of the incoming and scattered Mie coefficients for a nanosphere with 1000 nm diameter, which is trapped in the focus plane. With increasing \(\ell\) the incoming coefficients increase, whereas the Mie coefficients of Eq. (12) decrease (not shown). The scattering coefficients of Eq. (13) are the product of the incoming and Mie coeff
Figure 4: Optical force \(F_{x}\), \(F_{z}\) in the focus plane \(z=0\) and for a nanosphere with a diameter of 500 nm. We compare results for different computation schemes. Mie(\(\ell_{\text{max}}\)) report results for Mie theory with the cutoff number \(\ell_{max}\) for the angular order and for the incoming fields computed within the paraxial approximation given in Appendix A. farfield gives the results for the approach of [16] where the fields are matched in the farfield, for details see text. BEM reports results derived with our nanobem Maxwell solver based on a boundary element method approach. For the sphere discretization we use 796 boundary elements, for details see [32]. The region shaded in gray reports the intensity of the Laguerre-Gauss laser beam in arbitrary units. As apparent from the figure, the optical force in the propagation direction \(F_{z}(x)\) directly follows the intensity profile.
ficients, which have a maximum at \(\ell=6\) for the diameter under investigation, and then drop rapidly. A similar behavior is observed for the optical force \(F_{z}\), the explicit expressions are given in Appendix B. In what follows, we choose a conservative cutoff number \(\ell_{\text{max}}=30\) for the angular degree, which provides a good compromise between fast simulations and highly accurate results.
Figure 6 shows results for the nanosphere trajectories as obtained with the four-step model introduced in Sec. II.1. We compare laser excitations (a-c) with and (a*-c*) without an optical vortex, as well as sphere diameters of (a) 250, (b) 500, and (c) 1000 nm. Let us start by analyzing the sub-figures of the various panels in slightly more detail. In (1,2) we show selected trajectories. Initially, the spheres are located at positions \((x,0,z_{0})\) sufficiently far away from the focus (\(z=0\)) in a region where the optical forces are weak and can be neglected. The nanoparticles are then transported through the fluid into regions of larger field strength, where some of them become trapped in the transverse directions. The velocity changes of the trapped nanospheres in the laser propagation direction \(z\) are shown in (3). The color of the trajectories and velocities corresponds to the scattering power of Eq. (3), see (4) for the color code in arbitrary units. It is apparent that trapped particles scatter more light and appear significantly brighter in an imaging system. In the focus region, the scattered power of the trapped spheres with a diameter of 250 nm is at least three orders larger than that of the untrapped ones, and at least five orders for the larger spheres. Additionally, only the trapped particles experience noticeable velocity changes. The red dots in (1) indicate those particles which are trapped in the focus plane \(z=0\). As can be seen, some spheres become trapped after the focus plane.
When comparing the results for different sphere diameters in panels (a-c) of Fig. 6, we observe that with increasing diameter (i) more particles become trapped and (ii) experience larger velocity enhancements. This can be attributed to the larger optical forces for larger nanoparticles. We also find that (iii) the trajectories of all trapped particles are practically indistinguishable, and (iv) the deflection of the particles out of the \(xz\)-plane increases with increasing diameters [see panels (2)]. This is due to the orbital angular momentum transferred from the vortex laser to the nanospheres. Finally, (v) also the scattering power increases with increasing diameter. All these observations are supported by the experimental findings reported in [4], and suggest a dynamics where the nanospheres become first trapped in the transverse directions, and then propagate along the intensity maxima of the focused laser in presence of almost identical optic and fluidic forces through the focus region. Note that in typical experiments the nanoparticles initially don't propagate in a single plane but are randomly distributed, correspondingly they are also randomly distributed in the focus region around the circular intensity maximum distribution of the vortex beam. This leads to the aforementioned suppression of collisions and blockage in comparison to laser excitations with an intensity maximum on the optical axis.
To make this point more explicit, in panels (a*-c*) we report results for a Laguerre-Gauss excitation with zero topological charge, \(m=0\), this is, for an excitation without an oam. The trajectories are similar to the previous ones, with the exception of the larger velocity enhancements attributed to the higher field strengths of the focused laser without a vortex. Additionally, we observe (2) that all particle trajectories are bound to the \(xz\)-plane because of the missing oam. Owing to the laser intensity distribution that has a maximum at the \(z\)-axis for \(m=0\), all trajectories are located on the \(z\)-axis around the focus regions, thus leading to particle collisions and blockage.
In what follows, we investigate the ability of or2i to infer from the observed velocity changes the size and material composition of the nanospheres. We here only discuss the impact of these parameters and leave the problem of how to solve the inverse problem, namely the determination of size, material, and possibly geometry, to future work.
### Refractive index of nanospheres
Figure 7 shows the maximal velocity in the focus region for dielectric nanospheres with different diameters and refractive indices (see inset). In all simulations we use water with an refractive index of \(n_{b}=1.33\) for the embedding medium. For the smallest refractive indices of the nanospheres, say for \(n\leq 1.6\), the maximal velocity increases monotonically with increasing diameter, at least
Figure 5: Absolute values of incoming and scattering Mie coefficients, and of force \(F_{z}\) as a function of angular degree \(\ell\). We consider a sphere with 1000 nm diameter at the trapping position in the focus plane. For the incoming Mie coefficients we plot \(\sum_{m}\left(\left|a_{\ell,m}^{\text{inc}}\right|+\left|b_{\ell,m}^{\text{ inc}}\right|\right)\), with a similar expression for the scattering coefficients. The contributions are scaled such that the sum of the scattering coefficients gives one. For the optical force, we report the increments \(|F_{z}(\ell)-F_{z}(\ell-1)|\) for different degrees \(\ell\). The force contributions are scaled such that the sum gives one.
for the sphere sizes under investigation. In this regime it is thus possible to directly correlate the observed velocity enhancement with the particle diameter, as we have previously done in [4]. Things somewhat change for larger nanospheres where the optical response is governed by Mie resonances supported by the spherical nanoparticles. Correspondingly, beyond a certain cutoff diameter the maximal velocity no longer simply increases with increasing diameter, but exhibits a more complicated resonance behavior.
For nanoparticles with larger refractive indices and/or larger particles in the micrometre range, in general it thus might be useful to analyze more carefully the light scattered off the nanoparticles. In Fig. 8 we show the emis
Figure 6: Trajectories and velocities for nanospheres with different diameters and for laser beams (a–c) with and (a*–c*) without an optical vortex. In each panel, we report selected trajectories in the (1) \(xz\) and (2) \(xy\) plane, (3) the nanoparticle velocities as a function of propagation length \(z\). The colors of the line segments scale with the total scattering power of the spheres, given in arbitrary units with the colorbar reported in panel (4). Trapped particles scatter more light and can be observed more easily.
sion pattern of nanospheres with different diameters and refractive indices. With increasing diameter the emission pattern sharpens into the forward direction (note that in the plots we use a logarithmic scale), however, at the same time the emission into other directions becomes strongly structured and provides detailed information about the nanosphere properties. Using Fraunhofer diffraction and Mie scattering approaches, the characterization of particle sizes upon knowledge of the refractive indices of the nanoparticle and the embedding medium is a well established technique [33]. A more refined modeling of imaging within of2i would be needed (steps 5 and 6) to address the question whether the viable nanoparticle parameters can be uniquely extracted using this additional information.
### Active volume
When inferring the particle number distribution from OF2i measurements, we have to account for the fact that larger particles become trapped more easily than smaller ones, owing to the increase of optical forces with increasing particle size. See for instance the red dots in panels (1) of Fig. 6 for those particles which are trapped in the focus plane. Recall that in our simulations we start with an initial position \((x,0,z_{0})\) for the particles, where the propagation distance \(z_{0}\) is located in a region where the optical forces are negligible (we use \(z_{0}=-1\) mm). Subsequently, the particles are transported by the fluid into regions of larger field intensities, where they become trapped and experience the velocity changes previously discussed.
In Figure 9 we show the velocities in the focus plane as a function of transverse starting position \(x\) and sphere diameter, and for different refractive indices. We observe that particles become either trapped or not, and for a given diameter and refractive index all trapped particles are transported with the same velocity through the focus plane. This observation agrees with the velocity curves shown in panel (3) of Fig. 6. When measuring particle size distributions one has to account for the different cutoff parameters for trapping \(x_{\text{cut}}(R,n)\), which depend on particle radius \(R\) and refractive index \(n\). For starting position \(x\leq x_{\text{cut}}\) particles are trapped in the focus plane, for \(x>x_{\text{cut}}\) the optical forces are too weak for trapping. As previously discussed in [4], one can define an active volume
\[V_{\text{active}}(R,n)=\Big{[}\pi x_{\text{cut}}^{2}(R,n)\Big{]}v_{\text{fluid }}t_{\text{meas}}\,, \tag{14}\]
where the term in brackets is the cross section in the transverse direction, and \(v_{\text{fluid}}t_{\text{meas}}\) is the size of the sampling volume along the propagation direction in the measurement time \(t_{\text{meas}}\). The active volume corrects for the fact that larger particles are trapped more easily and are observed more frequently in comparison to smaller particles. For \(N_{\text{meas}}\) velocity counts within \(t_{\text{meas}}\), the particle density is then proportional to \(N_{\text{meas}}/V_{\text{active}}\).
### Stochastic forces
We finally comment on the influence of stochastic forces and Brownian motion, which are known to have
Figure 8: Normalized emission pattern for nanospheres with diameters of (a) 250, (b) 500, (c) 1000, and (d) 2000 nm. We use a logarithmic scale in the radial direction and refractive indices of 1.4, 1.6, 1.8, 2.0, with the same color code as in Fig. 7. All plots are scaled to the respective maxima of the emission patterns. In all cases the nanospheres are located in the focus plane at the trapping position around the intensity maxima of the vortex laser.
Figure 7: Maximal velocity in the focus region for nanospheres with different diameters and refractive indices (see inset). For larger refractive indices the velocity increases non-monotonically because of Mie resonances supported by the spheres.
an important impact for optical tweezers and related experiments. The necessity for considering such forces was first noticed in the groundbreaking paper of Albert Einstein on Brownian motion [34]. In our implementation of a stochastic force term we closely follow Ref. [18]. We first compute the drift velocity \(\mathbf{v}\) using Eq. (7) and then update the position according to [18, Eq. (18)]
\[\mathbf{r}(t+\Delta t)\approx\mathbf{r}(t)+\mathbf{v}\Delta t+\left(\frac{k_{B}T\delta t}{3 \pi\eta R}\right)^{\frac{1}{2}}\mathbf{W}\,, \tag{15}\]
where \(\Delta t\) is the computational timestep, \(k_{B}\) is Boltzmann's constant, \(T\) is the temperature, \(R\) is the sphere radius, and \(W_{x}\), \(W_{y}\), \(W_{Z}\) are normally distributed random numbers with a variance equal to one, as obtained for instance by the matlab function randn. The time step \(\Delta t\) has to be chosen sufficiently small such that the optical forces at \(\mathbf{r}(t)\) and \(\mathbf{r}(t+\Delta t)\) do not differ significantly. In all our simulations we used a value of \(\Delta t=1\) ms and a temperature of \(T=293\) K.
Fig. 10 shows results for simulations including stochastic forces. Let us first concentrate on the results for spheres with a sufficiently large diameter, say panels (b-d). In contrast to simulations without stochastic forces (thin lines), the velocity curves exhibit fluctuations that decrease with increasing diameter, and the motion in the transverse direction is altered in regions of weak optical forces. Once particles are trapped, they follow along the intensity maxima of the laser along trajectories that are very similar to the ones we have previously discussed. As in or2i experiments predominantly the trapped particles can be observed, stochastic forces typically have no crucial impact on the observed particle trajectories. Things are somewhat different for the smallest spheres, where the stochastic forces are of equal strength than the op
Figure 10: Velocities (left) and trajectories (right) as a function of propagation distance, with (thick lines) and without (thin lines) consideration of Brownian motion and for different sphere diameters (see inset). We use different colors for the different starting positions of the spheres. For the Brownian motion the velocity \(v=\nicefrac{{\Delta z}}{{\Delta t}}\) is defined as the ratio between the propagation distance \(\Delta z\) travelled by a particle in a time interval \(\Delta t=0.01\) s and the time interval \(\Delta t\). \(r=\sqrt{x^{2}+y^{2}}\) is the transverse distance. For the smallest diameter shown in panel (a) the stochastic forces are of equal strength than the optical forces. For the larger diameters shown in panels (b–d) only the positions where the particles become trapped are somewhat altered by the Brownian motion. Once they are trapped, they essentially follow the trajectories previously discussed for simulations without stochastic forces.
Figure 9: Velocity in focus region for different transverse starting positions \(x\) and diameters, as well as for different refractive indices. In all simulations the particles start at \((x,0,z_{0})\) in a region where the optical forces are negligible. Particles become either trapped or not (gray region), where all trapped particles are transported with the same velocity through the focus region. With increasing sphere diameter more particles become trapped, owing to the increase of optical forces for the larger particles.
tical forces, and trapping can only be observed close to the focus region. Such behavior is not found in experiment where spheres with a diameter of 200 nm are clearly trapped. We attribute this disagreement to our simplified choice of the hydrodynamic radius in Eq. (7), and will analyze this point in more detail elsewhere.
## IV Summary and Outlook
To summarize, we have presented a four-step model for the theoretical description of of2i, which accounts for the nanoparticle propagation in a microfluidic channel in presence of laser excitation. The approach is currently based on Mie theory but can be extended with moderate computational overhead to full Maxwell solvers, using for instance the boundary element method, in order to simulate non-spherical or coupled particles. We have investigated the influence of particle size, refractive index, and Brownian motion on the observed trajectories and velocity enhancements. Quite generally, our results support the unique measurement capabilities of of of2i for single-particle tracking with high throughput.
of2i measurement results provide additonal information such as the emission pattern, which might be used in future work to extract further properties of the particles to be analyzed. With this additional information we might overcome the difficulties regarding Mie resonances, in particular for particles with larger refractive indices, which currently lead to a problematic non-monotonic relation between sphere diameter and velocity enhancement. It will be also interesting to see how our conclusions become modified for non-spherical particles or particles with no sharp interfaces.
From the experimental side, we plan to investigate shorter Rayleigh ranges where smaller particles can be trapped more easily, as well as different polarization states of the incoming laser. For small particles the issue regarding geometric and hydrodynamic radius should be addressed with greater care. We also expect that for absorbing particles heating effects and the resulting photophoretic forces must be taken into account. This leaves us with a relatively large to-do list for the future. However, the four-step model introduced in this work provides us with a solid and versatile machinery for future investigations.
## Acknowledgements
This work was supported in part by the Austrian Research Promotion Agency (FFG) through project AoDiSys 891714, the European Commission (EC) through the projects NanoPAT (H2020-NMBP-TO-IND-2018-2020, Grant Agreement number: 862583) and MOZART (HORIZON-CL4-2021-RESILIENCE-01, Grant Agreement Number: 101058450). We thank the whole nano-medicine workgroup at the Gottfried Schatz Research Center for their cooperation and most helpful discussions.
## Appendix A Fields of Laguerre-Gauss beam
The electromagnetic fields for a Laguerre-Gauss laser beam within the paraxial approximation are taken from [20] and are repeated in this Appendix for the sake of completeness. Let \(m\) be the topological charge of the vortex beam and \(w_{0}\) the beam waist. The radial index is set to \(n=0\) throughout. The wavenumber of the embedding medium is \(k\). We introduce the Rayleigh range
\[z_{R}=\frac{1}{2}kw_{0}^{2} \tag{10}\]
and the \(z\)-dependent waist
\[w(z)=w_{0}\sqrt{1+\zeta^{2}}\,, \tag{11}\]
where \(\zeta=\frac{z}{z_{R}}\). We next define [20, Eq. (3)]
\[u_{0}=\frac{1}{1+i\zeta}\exp\left[-\left(\frac{r}{w_{0}}\right)^{2}\frac{1}{ 1+i\zeta}\right] \tag{12}\]
together with
\[u_{m}=\left(\frac{\sqrt{2}r}{w(z)}\right)^{m}\exp\left[im\left(\phi-\tan^{-1} \zeta\right)\right]\,. \tag{13}\]
The electric field is then given through [20, Eqs. (35,37)]
\[E_{x} =Au_{0}u_{m}e^{ikz} \tag{14}\] \[E_{z} =\left(\frac{m(x+iy)}{kr^{2}}-\frac{ix}{iz-z_{R}}-\frac{4ix}{kw^{ 2}}\right)Au_{0}u_{m}e^{ikz}\,.\]
Here \(A\) is the amplitude of the laser beam. Similarly, the magnetic field reads [20, Eqs. (39,49)]
\[ZH_{y} =Au_{0}u_{m}e^{ikz} \tag{15}\] \[ZH_{z} =\left(\frac{m(iy-x)}{kr^{2}}-\frac{iy}{iz-z_{R}}-\frac{4iy}{kw^{ 2}}\right)Au_{0}u_{m}e^{ikz}\,.\]
## Appendix B Optical forces within Mie theory
In this Appendix we give the expressions for the optical forces in terms of Mie coefficients [17]. A few modifications arise due to the different notations adopted in this work. We first introduce the abbreviations
\[\Lambda^{(1)} =\frac{1}{\ell+1}\sqrt{\frac{(\ell+m+2)(\ell+m+1)\ell(\ell+2)}{(2 \ell+1)(2\ell+3)}}\] \[\Lambda^{(2)} =\frac{1}{\ell+1}\sqrt{\frac{(\ell-m+2)(\ell-m+1)\ell(\ell+2)}{( 2\ell+1)(2\ell+3)}}\] \[\Lambda^{(3)} =-\frac{\sqrt{(\ell+m+1)(\ell-m)}}{\ell(\ell+1)}\]
as well as
\[\Lambda_{z}^{(1)} =\frac{1}{\ell+1}\sqrt{\frac{(\ell-m+1)(\ell+m+1)\ell(\ell+2)}{(2\ell +1)(2\ell+3)}}\] \[\Lambda_{z}^{(2)} =\frac{m}{\ell(\ell+1)}\,.\]
The expressions given in [17, Eq. (29a)] can then be written in the compact form
\[f =\Lambda^{(1)}\left[2a_{\ell+m}^{\text{sca}\,*}a_{\ell+1,m+1}^{ \text{sca}\,*}+a_{\ell,m}^{\text{inc}\,*}a_{\ell+1,m+1}^{\text{sca}\,*}a_{\ell +1,m+1}^{\text{inc}\,*}\right]\] \[+\Lambda^{(1)}\left[2b_{\ell,m}^{\text{sca}\,*}b_{\ell+1,m+1}^{ \text{sca}\,*}+b_{\ell,m}^{\text{inc}\,*}b_{\ell+1,m+1}^{\text{sca}\,*}+b_{\ell,m}^{\text{inc}\,*}\ell_{\ell+1,m+1}^{\text{inc}\,*}\right]\] \[+\Lambda^{(2)}\left[2a_{\ell+1,m-1}^{\text{sca}\,*}a_{\ell,m}^{ \text{sca}\,*}+a_{\ell+1,m-1}^{\text{sca}\,*}a_{\ell,m}^{\text{sca}\,*}+a_{ \ell+1,m-1}^{\text{inc}\,*}\right]\] \[+\Lambda^{(2)}\left[2b_{\ell,m}^{\text{sca}\,*}b_{\ell,m}^{\text {sca}\,*}+b_{\ell+1,m-1}^{\text{inc}\,*}b_{\ell,m}^{\text{sca}\,*}+b_{\ell,m-1 }^{\text{sca}\,*}b_{\ell,m}^{\text{inc}\,*}\right]\] \[+\Lambda^{(3)}\left[2a_{\ell,m}^{\text{sca}\,*}b_{\ell,m+1}^{ \text{sca}\,*}+a_{\ell,m}^{\text{inc}\,*}b_{\ell,m+1}^{\text{sca}\,*}+a_{ \ell,m}^{\text{sca}\,*}b_{\ell,m+1}^{\text{inc}\,*}\right]\] \[-\Lambda^{(3)}\left[2b_{\ell,m}^{\text{sca}\,*}a_{\ell,m+1}^{ \text{sca}\,*}+b_{\ell,m}^{\text{inc}\,*}a_{\ell,m+1}^{\text{sca}\,*}+b_{\ell,m}^{\text{sca}\,*}\ell_{\ell,m+1}^{\text{inc}\,*}\right]\,.\]
|
2310.02403 | 4-Strand Burau is Unfaithful Modulo 5 | We introduce a new algorithm for finding kernel elements in the Burau
representation. Our algorithm applies reservoir sampling to a statistic on
matrices which is closely correlated with Garside length. Using this we exhibit
an explicit kernel element in the Burau representation on 4-strands reduced
modulo 5. | Joel Gibson, Geordie Williamson, Oded Yacobi | 2023-10-03T19:57:08Z | http://arxiv.org/abs/2310.02403v1 | # 4-strand Burau is Unfaithful Modulo \(5\)
###### Abstract.
We introduce a new algorithm for finding kernel elements in the Burau representation. Our algorithm applies reservoir sampling to a statistic on matrices which is closely correlated with Garside length. Using this we exhibit an explicit kernel element in the Burau representation on 4-strands reduced modulo 5.
## 1. Introduction
The Burau representation of the \(n\)-strand braid group \(\pi_{n}:B_{n}\to\operatorname{GL}_{n-1}(\mathbb{Z}[v,v^{-1}])\), plays a prominent role in many applications of braid groups, including to geometry, topology and mathematical physics [3]. A key question underlying many of these developments is determining when \(\pi_{n}\) is faithful.
It is not difficult to prove that \(\pi_{3}\) is faithful (cf. Propostion 3.3), but despite intensive study, there wasn't substantial progress on this question until 1991, when Moody proved that \(\pi_{n}\) is not faithful for \(n\geq 9\)[11]. His work was developed further by Long and Paton to show unfaithfulness for \(n\geq 6\)[10]. Bigelow extended this also to \(n=5\)[2].
The case \(n=4\) remains open, and has attracted considerable attention. One reason for this is that a negative answer provides a non-trivial knot with trivial Jones polynomial, answering a central question in knot theory [1, 9].
Henceforth we set \(\pi=\pi_{4}\). Cooper and Long approached the faithfulness of \(\pi\) by reducing modulo primes. That is, they considered the composition of \(\pi\) with the natural map \(\operatorname{GL}_{3}(\mathbb{Z}[v,v^{-1}])\to\operatorname{GL}_{3}(\mathbb{F }_{p}[v,v^{-1}])\), where \(\mathbb{F}_{p}\) is the field with \(p\) elements [5]. Following their work, we denote this representation \(\pi\otimes\mathbb{F}_{p}\). The study of the faithfulness of \(\pi\otimes\mathbb{F}_{p}\) is interesting for several reasons:
1. Examples when \(\pi\otimes\mathbb{F}_{p}\) is unfaithful can give insights into the difficulty of establishing the faithfulness of \(\pi\).
2. It might be the case that \(\pi\otimes\mathbb{F}_{p}\) is unfaithful modulo all primes, yet \(\pi\) is faithful. Controlling the kernel modulo infinitely many primes might provide a route to establishing faithfulness of \(\pi\).
One of the main results in [5] is that \(\pi\otimes\mathbb{F}_{2}\) is not faithful. In [6] the same authors extended their methods to show that \(\pi\otimes\mathbb{F}_{3}\) is also not faithful. They noted their program runs into obstacles at \(p=5\): "This case remains open and has some features which suggest it may be different to the first two primes." Our main result resolves this case.
**Theorem 1.1**.: _The representation \(\pi\otimes\mathbb{F}_{5}\) is not faithful._
Some remarks on our main theorem:
1. Our approach, which we discuss in the next section, is heavily computational and is quite different from the existing methods. A complete implementation of our algorithm (in Python) is available on github [8].
2. We discover several kernel elements modulo 5, the smallest of which is of Garside length \(54\)1. We note that a straightforward computation shows that there are \(10^{40}\) elements of this Garside length, thus finding this element by brute force search is not feasible. Footnote 1: see §2 for the definition of Garside length
3. We have made extensive searches with our algorithm to try to discover a kernel element of \(\pi\otimes\mathbb{F}_{7}\), without success. We are also unable to rediscover Bigelow's (integral) kernel elements for \(n=5\) and \(n=6\), which indicates the limitations of our algorithm.
## 1. Introduction
### Background
The motivation of this paper is to study the _Birau matrix_ of a given set of \(n\)-dimensional matrices \(\mathcal{A}\). The _Birau matrix_ of \(\mathcal{A}\) is the _Birau matrix_ of \(\mathcal{A}\), and the _Birau matrix_ of \(\mathcal{A}\) is the _Birau matrix_ of \(\mathcal{A}\). The _Birau matrix_ of \(\mathcal{A}\) is the _Birau matrix_ of \(\mathcal{A}\).
## 3. Basic idea
Suppose one can find a statistic on matrices which detects Garside length. In other words, we have a function \(\mathsf{p}:\mathrm{GL}_{n-1}(\mathbb{Z}[v,v^{-1}])\to\mathbb{N}\) such that \(\mathsf{p}(\pi_{n}(\sigma))=C\cdot\ell_{G}(\sigma)\) for some constant \(C{>}0\). Then this implies the faithfulness of \(\pi_{n}\):
\[\pi_{n}(\sigma)=\mathrm{Id}\implies\mathsf{p}(\pi_{n}(\sigma))=0\implies \ell_{G}(\sigma)=0\implies\sigma=\Delta^{d}\]
for some \(d\in\mathbb{Z}\), and \(\pi_{n}(\Delta^{d})=\mathrm{Id}\) if and only if \(d=0\).
On the other hand, if \(\mathsf{p}\) is correlated with \(\ell_{G}\) only for generic braids, then one can try to use \(\mathsf{p}\) as an ansatz to find kernel elements: braids with surprisingly low \(\mathsf{p}\)-value might point towards elements in the kernel. We explain both of these situations, beginning with the simple case of the \(3\)-strand Burau representation.
First we define the statistic which we'll be using throughout. Given \(A\), a non-zero \(r\times s\) array with entries in Laurent polynomials in \(v\), let \(\deg(A)\) be the highest power of \(v\) that occurs in an entry of \(A\), and let \(\mathrm{val}(A)\) be the lowest power. We set
\[\mathsf{projlen}(A)=\deg(A)-\mathrm{val}(A). \tag{3.1}\]
Note that multiplication of a Burau matrix by \(\Delta\) does not affect \(\mathsf{projlen}\):
\[\mathsf{projlen}(\pi_{n}(\sigma))=\mathsf{projlen}(\pi_{n}(\sigma)\Delta)= \mathsf{projlen}(\Delta\pi_{n}(\sigma)). \tag{3.2}\]
**Proposition 3.3**.: _For any \(\sigma\in B_{3}\) we have \(\mathsf{projlen}(\pi_{3}(\sigma))=2\ell_{G}(\sigma)\). Hence, the \(3\)-strand Burau representation is faithful._
Proof.: Let \(\sigma\in B_{3}\) have GNF given by (2.2) and let \(\pi_{3}(\sigma)=[c_{1},c_{2}]\), where \(c_{i}\) are the column vectors in the matrix. Then the following trichotomy holds:
1. \(\deg(c_{1})=\deg(c_{2}),\mathrm{val}(c_{1})=\mathrm{val}(c_{2})\), and \(\sigma=\Delta^{d}\) for some \(d\in\mathbb{Z}\).
2. \(\deg(c_{1})>\deg(c_{2}),\mathrm{val}(c_{1})>\mathrm{val}(c_{2})\), and \(\mathscr{R}(\sigma)=\{s_{1}\}\).
3. \(\deg(c_{1})<\deg(c_{2}),\mathrm{val}(c_{1})<\mathrm{val}(c_{2})\), and \(\mathscr{R}(\sigma)=\{s_{2}\}\).
Moreover, in each case \(\mathsf{projlen}(\pi_{3}(\sigma))=2\ell_{G}(\sigma)\).
To see this, we induct on \(\ell=\ell_{G}(\sigma)\). If \(\ell=0\) then we are in the first case of the trichotomy. Note that in this case \(0=\mathsf{projlen}(\pi_{3}(\sigma))=2\ell_{G}(\sigma)\). Now let \(\ell\geq 1\), and suppose that \(\sigma^{\prime}:=\Delta^{d}\widetilde{w_{1}}\cdots\widetilde{w_{\ell-1}}\) falls into the second case of the trichotomy. Setting \(\pi_{3}(\sigma^{\prime})=[c^{\prime}_{1},c^{\prime}_{2}]\), we then have that \(\deg(c^{\prime}_{1})>\deg(c^{\prime}_{2}),\mathrm{val}(c^{\prime}_{1})> \mathrm{val}(c^{\prime}_{2})\), \(\mathscr{R}(\sigma^{\prime})=\{s_{1}\}\), and \(\mathsf{projlen}(\pi_{3}(\sigma^{\prime}))=2\ell_{G}(\sigma^{\prime})\). The condition \(\mathscr{R}(\widetilde{w_{\ell-1}})\supseteq\mathscr{R}(\widetilde{w_{\ell}})\) forces \(w_{\ell}=s_{1}\) or \(w_{\ell}=s_{1}s_{2}\). If \(w_{\ell}=s_{1}\) then \([c_{1},c_{2}]=[-v^{2}c^{\prime}_{1},-vc^{\prime}_{1}+c^{\prime}_{2}]\), and we are in case (2). Otherwise, \(w_{\ell}=s_{1}s_{2}\) and \([c_{1},c_{2}]=[-vc^{\prime}_{2},v^{3}c^{\prime}_{1}-v^{2}c^{\prime}_{2}]\), so we are in case (3). Either way we have the equality \(\mathsf{projlen}(\pi_{3}(\sigma))=2\ell_{G}(\sigma)\). The other cases follow similarly.
Moving onto the \(4\)-strand braid group, it is easy to see that a straightforward analogue of Proposition 3.3 does not hold. For instance, taking \(\sigma=\widetilde{u}\), where \(u=s_{1}s_{2}s_{1}s_{3}\) we have that
\[\pi_{4}(\sigma)=\begin{pmatrix}0&0&-v^{4}\\ v^{3}&v^{2}&v^{3}\\ 0&-v&-v^{2}\end{pmatrix}\]
We see that \(\mathsf{projlen}(\pi_{4}(\sigma))=3\) is not even, and one cannot read off \(\mathscr{R}(u)=\{s_{1},s_{3}\}\) directly from the columns achieving \(\deg(\pi_{4}(\sigma))\). Nevertheless, we can ask whether generically \(\mathsf{projlen}\) is a good heuristic for Garside length.
The following table shows the result of sampling \(1000\) random braids in \(B_{4}\) of each Garside length, up to \(50\). Each braid is depicted by a blue dot in the plane, where the \(x\)-axis is labelled by the Garside length of the braid, and the \(y\)-axis is labelled by \(\frac{1}{2}\mathsf{projlen}\). The \(y=x\) line is
drawn in red.
We see that it is almost always the case that \(\frac{1}{2}\mathsf{projlen}\) is larger than the Garside length. Moreover, there appears to be a strong linear correlation between these two statistics for generic braids. However, there are a few exceptions corresponding to blue dots below the red line.
Now consider an element
\[\sigma=\Delta^{d}\widetilde{w_{1}}\cdots\widetilde{w_{\ell}}\]
in the kernel of some Burau representation. Because \(\mathsf{projlen}\) of a kernel element is zero and \(\mathsf{projlen}\) is unaffected by multiplication by \(\Delta\) (3.2) we have
\[0=\mathsf{projlen}(\sigma)=\mathsf{projlen}(\widetilde{w_{1}}\cdots\widetilde{ w_{\ell}})\]
It is now reasonable to suspect that the projlens of Garside prefixes of \(\widetilde{w_{1}}\cdots\widetilde{w_{\ell}}\):
\[\mathsf{projlen}(\widetilde{w_{1}}),\mathsf{projlen}(\widetilde{w_{1}} \widetilde{w_{2}}),\ldots,\mathsf{projlen}(\widetilde{w_{1}}\cdots\widetilde{ w_{\ell-1}}),\mathsf{projlen}(\widetilde{w_{1}}\cdots\widetilde{w_{\ell}})=0. \tag{3.4}\]
are typically smaller than those of a generic braid of the same length. Thus searching for GNFs with low projlen might point us in the direction of kernel elements.
## 4. The algorithm
We'll now describe our algorithm for sampling braids with low \(\mathsf{projlen}\). We picture the set-up for our search as placing a "bucket" at each point in \(\mathbb{Z}_{\geq 0}\times\mathbb{Z}_{\geq 0}\). Initially, these buckets are empty and over time they will be filled with Garside normal forms of 4-strand braids, and their Burau matrices. The bucket \(\mathsf{B}_{(\ell,m)}\) at the point \((\ell,m)\) will only contain braids \(\sigma\) such that \(\ell_{C}(\sigma)=\ell\) and \(\mathsf{projlen}(\pi(\sigma))=m\).
The method by which we fill the buckets is called **reservoir sampling**[12]. This is a well-known algorithm which selects a random sample of \(k\) elements without repetition from a set of \(N\) elements which arrive sequentially, and where the value of \(N\) is unknown in advance. Typically, \(N\) is enormous. Thus it is not possible to store all elements seen and the sampling is done in one pass. At any point in time, the selection of \(k\) elements, i.e. the "reservoir", should be uniformly distributed among all elements seen thus far.
We recall the idea in more detail since it is so fundamental to our approach. Suppose we're sampling elements from a set \(X\) and that the elements arrive as a sequence \(x_{1},x_{2},\ldots,x_{N}\).
Let us first consider the case \(k=1\). Our reservoir then consists of a single element, which at the \(i\)-th step we will denote \(r_{i}\). At the first step we let \(r_{1}=x_{1}\). In the second step, we replace \(r_{1}\) with \(x_{2}\) with probability \(\frac{1}{2}\), otherwise \(r_{2}=r_{1}\). In the third step we replace \(r_{2}\) with \(x_{3}\) with probability \(\frac{1}{3}\), otherwise \(r_{3}=r_{2}\). Note that indeed, for every \(i\), \(r_{i}\) is uniformly distributed among \(x_{1},\ldots,x_{i}\).
If \(k>1\), then the reservoir is a \(k\)-subset of \(X\), which at the \(i\)-th step we will denote \(R_{i}=\{r_{1}^{i},\ldots,r_{k}^{i}\}\). First we just fill the reservoir: \(r_{j}^{1}=x_{j}\) for \(j\leq k\). At the next step we randomly generate a number \(j\in\{1,\ldots,k+1\}\), and if \(j\leq k\) replace the \(j\)-th element of \(R_{1}\) by \(x_{k+1}\):
\[r_{q}^{2}:=\begin{cases}x_{k+1}\text{ if }q=j,\\ r_{q}^{1}\text{ otherwise.}\end{cases}\]
Otherwise, \(R_{2}=R_{1}\). Next, randomly generate a number \(j\in\{1,\ldots,k+2\}\), and if \(j\leq k\) replace the \(j\)-th element of \(R_{2}\) by \(x_{k+2}\), and so on. At the \(i\)-th step, we have that \(R_{i}\) is a uniformly distributed \(k\)-subset of \(\{x_{1},\ldots,x_{i+k-1}\}\).
Going back to our set-up, we will now describe an adaptation of the above ideas to place braids in appropriate buckets. At the \(\ell\)-th step in our algorithm, we will be placing braids in buckets at points \((\ell,m)\) for various \(m\). We fix ahead of time an upper bound \(k\), which is the maximum number of braids contained in a single bucket. Set \(\mathsf{B}_{\ell}=\bigcup_{m\geq 0}\mathsf{B}_{(\ell,m)}\).
To begin place the identity \(e\in B_{4}\) in \(\mathsf{B}_{(0,0)}\). Now let \(\ell>0\), and at the \(\ell\)-th step proceed as follows:
```
Set \(N_{m}=0\) for every \(m\)\(\triangleright\) Counts how many braids we've tried to place in \(\mathsf{B}_{(\ell,m)}\) for\(\sigma\in\mathsf{B}_{\ell-1}\)do for\(\widetilde{u}\) a Garside suffix of \(\sigma\)do \(m=\mathsf{projlen}(\sigma\widetilde{u})\) \(M=|\mathsf{B}_{(\ell,m)}|\) if\(M<k\)then place \(\sigma\widetilde{u}\) and its Burau matrix in \(\mathsf{B}_{(\ell,m)}\) else randomly generate a number \(j\in\{1,\ldots,N_{m}\}\) if\(j<k\)then replace \(j\)-th element of \(\mathsf{B}_{(\ell,m)}\) by \(\sigma\widetilde{u}\) and its Burau matrix else discard \(\sigma\widetilde{u}\) endif endif Set \(N_{m}:=N_{m}+1\) endfor endfor
```
Note that since we are storing the Burau matrices along the way, it is easy for us to compute \(\mathsf{projlen}(\sigma\widetilde{u})\). Also, we can easily modify this algorithm to study the Burau representation modulo primes (we simply remember the \(p\)-Burau matrices instead).
## 5. Main result
We'll now describe the most interesting output of our algorithm: in the case \(p=5\) we obtained several elements in the kernel of \(\pi\otimes\mathbb{F}_{5}\). We stress that these are the first known kernel elements in Burau representations \(\pi\otimes\mathbb{F}_{p}\) with \(p>3\), and this represents the first explicit progress on this question in 25 years!
The smallest we found, \(\kappa\), has Garside length 54. We have
\[\kappa=\Delta^{-27}*\sigma\]
where \(\Delta\) denotes the Garside element and \(\sigma\) is the following word in the Artin generators:
\[1,3,1,3,1,3,2,2,1,3,3,2,2,1,3,3,2,2,2,1,3,1,3,2,1,2,1,3,1,3,2,1,2,1,3,1,3,2, 2,2,1,3,3,\] \[2,2,1,3,3,2,2,1,3,1,3,1,2,3,2,1,1,3,2,1,2,1,3,1,2,1,3,1,3,2, 2,1,3,2,2,1,3,2,2,2,1,\] \[3,3,2,2,1,3,3,2,2,1,3,1,2,3,2,1,1,3,2,1,2,1,3,1,3,2,1,2,1,3,1,2, 3,2,1,1,3,2,2,1,1,3,2,2,1,3,3,2,\] \[2,1,3,3,2,2,2,1,3,2,2,1,3,1,2,3,2,2,1,3,1,2,3,2,2,1,3,1,2,3,2, 2,1,3,2,1\]
The length of \(\sigma\) in the Artin generators is 162.
The following plot depicts the journey of \(\sigma\) through the various buckets, as described in (3.4):
The \(x\)-axis is labelled by Garside length, the \(y\)-axis by projlen2, and each blue dot represents a bucket which contains a Garside prefix of \(\kappa\).
Footnote 2: It’s actually labelled by projlen\(+\) but we ignore this technicality.
The rightmost dot corresponds to the bucket containing \(\sigma\). It's interesting to note that this trajectory follows a generic path until about Garside length 30, where suddenly the reservoir sampling starts finding elements with smaller than expected projlen. Around length 40, the algorithm hits a "point of no return", and makes a beeline for \(\sigma\).
In case the reader is interested, here is the Garside normal form of \(\kappa\):
\[\kappa=(\Delta^{-27};s_{0}s_{2},s_{0}s_{2},s_{0}s_{2}s_{1},s_{1}s _{0}s_{2},s_{2}s_{1},s_{1}s_{0}s_{2},s_{2}s_{1},s_{1},s_{1}s_{0}s_{2},s_{0}s_{2 }s_{1}s_{0},s_{1}s_{0}s_{2},\] \[s_{0}s_{2}s_{1}s_{0},s_{1}s_{0}s_{2},s_{0}s_{2}s_{1},s_{1}s_{0}s _{2},s_{2}s_{1},s_{1}s_{0}s_{2},s_{2}s_{1},s_{1}s_{0}s_{2},s_{0}s_{2},s_{0}s_{ 1}s_{2}s_{1}s_{0},s_{0}s_{2}s_{1}s_{0},\] \[s_{1}s_{0}s_{2},s_{0}s_{2}s_{1}s_{0},s_{1}s_{0}s_{2},s_{0}s_{2}s_{ 1},s_{1}s_{0}s_{2}s_{1},s_{1}s_{0}s_{2},s_{2}s_{1},s_{1}s_{0}s_{2},s_{2}s_{1},s _{1}s_{0}s_{2},s_{2}s_{1},s_{1}s_{0}s_{2},\] \[s_{0}s_{1}s_{2}s_{1}s_{0},s_{0}s_{2}s_{1}s_{0},s_{1}s_{0}s_{2},s_{ 0}s_{2}s_{1}s_{0},s_{1}s_{0}s_{2},s_{0}s_{1}s_{2}s_{1}s_{0},s_{0}s_{2}s_{1},s_{ 1}s_{0}s_{2},s_{2}s_{1},\] \[s_{1}s_{0}s_{2},s_{2}s_{1},s_{1},s_{1}s_{0}s_{2}s_{1},s_{1}s_{0}s _{2},s_{0}s_{1}s_{2}s_{1},s_{1}s_{0}s_{2},s_{0}s_{1}s_{2}s_{1},s_{1}s_{0}s_{2}, s_{0}s_{1}s_{2}s_{1}s_{0})\]
Since randomness is built into the algorithm, every time we run it we get different outcomes. About 20% of the time we actually find a kernel element, and this takes about a couple of hours on a standard personal computer.
Other runs of our algorithm discovered two other elements in the kernel of Garside lengths 59 and 65 respectively. They are
\[\kappa_{1}=\Delta^{-29}\sigma_{1}\quad\text{and}\quad\kappa_{2}=\Delta^{-33} \sigma_{2}\]
where \(\sigma_{1}\) and \(\sigma_{2}\) are given by the following words in the Artin generators, of lengths 174 and 198 respectively:
\[\sigma_{1}=(1,2,1,3,2,2,1,3,1,3,2,2,2,1,3,1,2,2,1,3,1,2,2,1,3,1, 3,1,2,3,2,1,1,2,3,2,2,1,3,1,2,3,2,2,1,3,1,3,2,2,2,1,3,1,2,2,1,3,1,\] \[2,2,2,1,3,1,2,3,2,2,1,3,1,2,3,2,2,1,3,1,2,3,2,1,3,1,3,1,3,2,2,1, 3,1,2,2,1,3,1,2,2,1,3,2,1,3,2,1,2,1,\] \[3,1,2,3,2,1,1,3,2,2,1,3,3,2,2,1,3,3,2,1,3,2,3,2,1,3,2,1,3,2,1, 3,2,1,2,1,3,1,3,2,1,2,1,3,1,3,2,2,1,3,2,2,1,3,1)\] \[\sigma_{2}=(1,3,1,3,1,3,2,2,1,3,3,2,1,3,2,2,1,3,1,2,3,1,2,3,1, 2,1,3,1,2,1,3,1,3,2,2,1,3,3,2,2,1,3,3,2,1,3,2,1,3,2,2,1,3,2,1,3,2,\] \[1,1,3,2,1,2,1,3,1,3,2,1,2,1,3,3,2,2,1,3,3,2,2,1,3,1,2,3,2,1,3,2, 1,3,2,1,2,1,3,2,1,3,1,2,1,3,1,3,2,2,2,1,3,3,2,2,1,3,1,\] \[2,3,2,1,1,3,2,1,2,1,3,1,2,1,3,1,3,2,2,1,3,3,2,2,1,3,3,2,2,1,3, 3,2,1,3,2,2,1,3,2,1,3,2,1,3,2,1,2,1,3,1,3,2,2,1,3,2,1,3,2,2,1,3,2,1,3,2,2,1,3,2, 1,3,2)\]
Here are the trajectories of \(\sigma_{1}\) and \(\sigma_{2}\) through the buckets:
**Remark 5.1**.: _The reader wishing to explore more can experiment with the notebook "p=5 kernel elements" available at [8]. We have also implemented a check of our results in Magma in the file "Magma check.m" which is also available at [8]. It should be easy to modify the Magma file to suit any computer algebra system._
## 6. How good is our algorithm?
We regard the fact that our algorithm is able to discover a kernel element nearly 25 years after the last discovery of a kernel element (for \(p=3\) in [6]) as some evidence that our approach is interesting. On the other hand, it is interesting to ask: can our algorithm rediscover known kernel elements?
### n=4
Here we see a plot Garside of length (\(x\)-axis) against the minimal projlens found (\(y\)-axis), over several runs of our algorithms modulo various integers \(m=2,3,4,5,6,7\):
It immediately finds many kernel elements modulo 2 and 3. It also finds kernel elements modulo 4. Note also the jagged red line: during these runs the algorithm found an element of Garside length around 65 with low projlen, but did not find a kernel element of this length. It did, however find a kernel element of Garside length around 105 (where the red line finally touches the \(x\)-axis). Finally, note that although we know that kernel elements exist modulo 6, our algorithm didn't find them. (The purple line.)
### n=5,6
Despite several attempts we have not been able to recover Bigelow's (integral) kernel elements for \(n=5\) and \(n=6\). How close are we to finding these elements?
It turns out that there is a beautiful and simple idea that allows us to accurately answer this question. For concreteness, let us take \(n=6\) in which case Bigelow's kernel element is of Garside length 16:
\[\beta=\Delta^{d}\widetilde{w_{1}}\widetilde{w_{2}}\cdots\widetilde{w_{16}}.\]
Now we can modify our code slightly to force it to add each of the Garside divisors
\[\beta_{1}=\widetilde{w_{1}},\quad\beta_{2}=\widetilde{w_{1}}\widetilde{w_{2} },\quad\dots,\quad\beta_{16}=\widetilde{w_{1}}\widetilde{w_{2}}\cdots\widetilde {w_{16}}\]
to their appropriate buckets during the reservoir search. In other words, we peform the reservoir sampling as usual, however we force it to successfully find \(\beta\):
Now for each reservoir containing \(\beta_{i}\) we can compute the probability that a random sample would contain \(\beta_{i}\). If we denote this probability by \(P(\beta_{i})\), then
\[P(\beta)=\prod_{i=1}^{16}P(\beta_{i})=\prod_{i=1}^{16}\frac{k}{\max(r(\beta_{i} ),k)}\]
where \(k\) denotes the bucket size as above, and \(r(\beta_{i})\) denotes the total number of elements seen by the bucket containing \(\beta_{i}\).
With \(k=500\) this probability is smaller than \(10^{-6}\). Increasing the bucket size to \(10^{5}\) decreased this probability to around \(0.0005\). At this point it is believable that algorithmic improvements (or simply a bigger computer and bigger buckets!) new kernel elements for \(n=6\) might be found. A similar analysis for \(n=5\) shows that this case is considerably harder.
### Future Directions
We briefly comment on three avenues which we believe warrant further exploration.
#### 6.3.1. Monte Carlo methods
The task of searching through Garside normal forms in order to find kernel elements is a textbook example of tree search. This problem features prominently in computer Chess and Go. In Go, major progress was made in the 1990s and 2000s by employing Monte Carlo evaluation strategies [4]: in order to decide the strength of a board position, one randomly finishes the game 5000 times and sees how often one wins. We tried a similar method here:
1. A database \(D\) of promising nodes (with score) is initiated with the identity element.
2. At each step, \(\sigma\) is sampled from \(D\) (randomly with weighting based on the score). We then do reservoir sampling starting from each Garside suffix \(\sigma\widetilde{u}\) for \(N\) steps. Each suffix is then added to \(D\), with score based on the lowest projlen seen.
(Thus, we regard exploring the tree of all possible Garside normal forms as a one-player game, where the goal is to find kernel elements, and a "move" consists of replacing the current element by a Garside successor.)
We implemented this algorithm and it consistently found kernel elements for \(p=5\). We did several long runs in other settings (e.g. (\(n=4,p=7\)), (\(n=5,p=41\)) etc.) without finding kernel elements. In this setting there are several design choices (e.g. how to implement score and select based upon it, how to choose \(N\),...) which have a big impact on performance. This is worth investigating systematically and could lead to a much better algorithm.
#### 6.3.2. Machine learning methods
We begun this project with the aim of employing machine learning methods. We outline here one unsuccessful attempt to use vanilla supervised learning to improve our sampling. Many other experiments are possible, and we hope that our software can provide a good basis for further experiments.
Consider all elements of Garside length \(\ell\). Imagine an oracle \(O\) which takes as input the matrix of an element of Garside length \(\ell\) and tells us the minimum projlen amongst all Garside successors of Garside length \(\ell+k\) for some value of \(k\) (e.g. \(k=5\)). Then it seems intuitive that using the oracle \(O\) to weight reservoir sampling (i.e. elements with low projlen in \(k\) further steps are more likely to be kept in a bucket) would lead to better results.
With this motivation in mind, we tried to train a vanilla neural network which takes as input the matrix of \(\sigma\), and attempts to predict the minimal projlen in \(k\) steps time. Our models attained reasonable training accuracy (\(\sim 80\%\)), but seemed to make no difference at all to the long-term performance of the algorithm.3 In other words, adding the neural network made no discernable difference to the smallest projlen found for large Garside lengths. Thus, our attempt to use neural networks to spot patterns in Burau matrices was unsuccessful. This suggests that such patterns are either not there, or are difficult to detect.4
Footnote 3: Some further details on our algorithm: We encode a matrix \(M\) of Laurent polynomials as a list of matrices encoding the coefficients of \(v^{i}\). Because we only care about our matrices up to scalar multiple, we can assume that the non-zero matrices range from \(0\) to the projlen: \(M_{0},M_{1},\ldots,M_{\mathsf{projlen}(M)}\). Now, after multiplying \(M\) by all Garside successors of length \(k\), it is easy to see that the resulting projlens only depends on the first and last \(k^{\prime}\) matrices in our list, i.e. on \(M_{0},M_{1},\ldots,M_{k^{\prime}},M_{\mathsf{projlen}(M)-k^{\prime}+1},\ldots M _{\mathsf{projlen}(M)}\) for some small value of \(k^{\prime}\). We further simplified our dataset by remembering only \(M_{0}^{\prime},M_{1}^{\prime},\ldots,M_{k^{\prime}}^{\prime},M_{\mathsf{ projlen}(M)-k^{\prime}+1}^{\prime},\ldots M_{\mathsf{projlen}(M)}^{\prime}\) where \(M_{i}^{\prime}\) is a zero-one matrix whose entries record which entries in \(M_{i}^{\prime}\) are non-zero. This list was then serialized and used as input to a vanilla neural network.
Footnote 4: One can also imagine that a drop in projlen is a rare event, which is too rare to be picked up by the neural network.
#### 6.3.3. Commutators
It is striking that all kernel elements found in [5, 6, 2] are commutators. It is tempting to try to modify our search regime to use this fact as a prior. It is not clear how to do so, but a reasonable proposal could lead to better results. We have not tried to express the elements that we have found as (close relatives of) commutators.
|
2307.08287 | Drawing non-planar graphs with rotation systems on the Klein bottle | This paper provides a linear time algorithm in the number of edges that,
given a simple 3-connected non-planar graph G with a Klein bottle rotation
system, outputs a straight line drawing of G with no crossings on the flat
Klein bottle. | François Doré, Enrico Formenti | 2023-07-17T07:17:54Z | http://arxiv.org/abs/2307.08287v1 | # Drawing non-planar graphs with rotation systems on the Klein bottle
###### Abstract
This paper provides a linear time algorithm in the number of edges that, given a simple 3-connected non-planar graph \(G\) with a Klein bottle rotation system, outputs a straight line drawing of \(G\) with no crossings on the flat Klein bottle.
Keywords:Straight-line drawing Rotation Systems Graph Embedding Non-orientable Surfaces
## 1 Introduction
Wagner [15] and Fary [4], independently, proved that simple planar graphs admit a straight-line representation on the plane. Mohar extended this result to flat surfaces (2-polytopes where distinct sides are identified as one) _i.e._ the cylinder, the Mobius band, the flat torus and the flat Klein bottle [9]. However, even knowing that they exists, finding these representations is not an easy task. Read proposed an algorithm to create a straight-line representation of planar graphs given their rotation systems [12]. This algorithm has then been adapted to the torus by Kocay et al. [7] but, to the best of our knowledge, similar algorithms for non-orientable surfaces, especially for the Klein bottle, have not been much studied.
Figure 1: Flat representation of the torus (left) and the Klein bottle (right). Pairs of identified sides of the \([0,1]\) Square have here the same number of markings on them but we will assume that these pairs always concern two opposite sides
Although the torus and the Klein bottle seem similar at a first glance, there are classes of graphs that can be embedded in the torus but not in the Klein bottle and conversely. Riskin studied in detail the nonembeddability of toroidal graphs on the Klein bottle and stated, in his conclusions, the following conjecture [13]:
Klein bottle polyhedral maps with four disjoint homotopic noncontractible circuits of the type that collapse the Klein bottle to the pinched torus are not toroidal.
The class of graphs described by Riskin's conjecture is not the only one which has a Klein bottle embedding but are not toroidal. Indeed, Figure 2 shows a Klein-embeddable graph which does not satisfy the hypothesis of Riskin's conjecture but have a minor homeomorphic to a torus obstruction listed in the database made by Myrvold and Woodcock [11].
Indeed, the whole class of square grids of size \(m\times n\) on the Klein bottle where \(m\) runs along the non-inverted side and \(n\) runs along the inverted side will be non-toroidal when \(m\geq 2\) and \(n\geq 8\) since they contain the sub-grid of Figure 2.
These facts motivate the search for algorithms which given a rotation system can build a straight-line drawing on the Klein bottle.
Figure 2: An Klein embedding of a square grid which is non-toroidal. The subgraph homeomorphic to a torus obstruction is drawn by solid lines (left) and its embedding the plane (right).
This is the main matter of the next sections. We stress that some of the algorithms are sufficiently generic to be extendable to other surfaces.
## 2 Rotation systems and the Klein bottle
We will use the standard definitions and concepts from graph topology (for more details see, for instance, the book of Mohar and Thomassen [10]). An _embedding_\(\Lambda(G)\) of a graph \(G=\langle V,E\rangle\) on a surface \(S\) is a representation of \(G\) in which vertices are points on \(S\) and edges are simple curves homeomorphic to \([0,1]\) over \(S\). This representation is such that endpoints of a curve associated with an edge must coincide with the endpoints of the edge, no curve representing an edge contains more than two vertices and no two curves intersect at a common interior point. A _face_ of an embedding is a maximal contiguous region of \(S-\Lambda(G)\). An embedding is called _cellular_ if all its faces are homeomorphic to disks. The _Euler's characteristic_\(\chi(\Lambda(G))\) of an embedding is equal to \(|V|-|E|+|F|\), where \(F\) is the set of all the faces of \(\Lambda(G)\).
It is well known that _orientable rotation systems_ of a graph \(G\), or _pure_ rotation system, induce a unique embedding of \(G\) on an orientable surface \(S\) up to embedding equivalence, but this relation, is not necessarily the same in the non-orientable case. Below, we define _general_ (or _non-orientable_) rotation systems.
Definition 1: Let \(G=\langle V,E\rangle\) be a graph, its _general rotation system_\(\Pi(G)\) is the structure \(\langle\pi,w\rangle\), where \(\pi=\{\pi_{v}\mid v\in V\}\) is the ordered adjacency of each vertex, and \(w=\{w_{e}\mid e\in E\}\) is the sign of each edge (indicating if they are twisted or not).
From now on, we will consider that nodes labelled by integers in the set \(\{1,\ldots,|V|\}\). Thus, a node \(u\) is lower than a node \(v\) if the same relation holds for their labels.
The rotation system \(\Pi_{2}(G)\) is the _flip_ of \(\Pi_{1}(G)\) iff the adjacencies of all of its vertices are reversed _w.r.t._ those of \(\Pi_{1}(G)\). Two orientable rotation systems \(\Pi_{1}(G)\) and \(\Pi_{2}(G)\) are _equivalent_ if letting \(\Pi\) be \(\Pi_{1}(G)\) or its flip, for each node of \(\Pi\), there exists a cyclic permutation which transforms the ordered adjacency into the one of \(\Pi_{2}(G)\)
In order to test the equivalence of two orientable rotation systems \(\Pi_{1}(G)\) and \(\Pi_{2}(G)\), one can consider the minimal cyclic permutation of the ordered adjacency of \(\Pi_{1}(G)\) and \(\Pi_{2}(G)\) and then check the adjacency of the lowest node, if the second node is greater than the last, we take the _flip_. With this procedure, we can easily check if two rotation systems of one graph are equivalent or not. However, with non-orientable rotation systems, this is no longer sufficient.
As shown in Figure 3, moving one node \(v\) (the node 2 in this case) through an inverted side, implies changes on the rotation system (two changes in our figure). First, the adjacency of \(v\) is _flipped_, meaning the order of its adjacency is reversed. Second, all the edges incident to \(v\) have their signs changed. We call this operation _switching_ the vertex \(v\).
Two rotations systems \(\Pi_{1}(G)\) and \(\Pi_{2}(G)\), such that \(\Pi_{1}(G)\) can be obtained from \(\Pi_{2}(G)\) by a sequence of flip/switch are said to be _switch-equivalent_. We will denote this fact by \(\Pi_{1}(G)\cong\Pi_{2}(G)\). This means that we can freely and independently switch any vertex without altering the rotation system. To have a rotation system independent of the switches, we can switch the nodes \(v\) whose \(\pi_{v}\) needs to be flipped, handle the changes for the edge signs accordingly and then consider the minimal cyclic permutations as for the orientable case. An outline of this formatting method is shown in Algorithm 1.
```
[MISSING_PAGE_POST]
The rotation system formatting algorithm runs through all the vertices and changes the sign of each edge at most twice, therefore its complexity is \(O(|V|+|E|)\).
This algorithm will be particularly useful when enumerating the embeddings needed as a base for our drawing algorithm.
## 3 Enumerating the embeddings
In this section we are going to provide an enumeration algorithm for the embeddings of a graph into a generic surface. We stress that enumeration is not an easy task in general. Indeed, given a graph \(G\) with \(n\) nodes and \(m\) edges, an upper bound for the number of labelled embeddings of \(G\) is given by
\[2^{m}\prod_{0<i\leq n}(d_{i}-1)!\]
where \(d_{i}\) is the degree of node \(i\). Therefore, since we assumed \(d_{i}>2\) the above bound is certainly larger than \(2^{m+n}\) and hence for large values of \(m\) or \(n\), labelled enumeration can be practically unfeasible. We therefore prefer to enumerate the unlabelled embeddings up to isomorphism and switch-equivalence. Some theoretical results exist in this domain, but no tight bounds on the number of these embeddings nor a way to generate them efficiently, to our knowledge, is known (see [2, 5] for further details).
To enumerate all possible unlabelled embeddings of a graph \(G\) on the Klein bottle, one can enumerate all the rotation systems of \(K\)
changing both adjacencies and edge twists. For each possible rotation system \(\Pi(G)\), compute its genus (or Euler characteristic) with a face-walking algorithm to keep only the ones with \(\chi(\Pi(G))=0\), and store the minimal (lexicographically ordered) formatted form among all the relabellisations of \(G\).
Although for small graphs, the complete enumeration can be done quite easily, few optimizations can be done to minimize the number of times a canonical form is generated.
Firstly, it is not mandatory to test all the possible relabellisations of \(\Pi(G)\) when computing its canonical form. One can simply test permutations conserving the automorphism groups1.
Footnote 1: See the Nauty library of McKay for automorphism and isomorphism algorithms that have proved their worth and that are usable in practice [8].
A second step in this direction consists in exploiting a notion coming from the domain of signed graphs, namely the _frustration_ of a graph.
Definition 2: Let \(G=\langle V,E\rangle\) be a graph and \(\Pi(G)\) an embedding of \(G\). The _frustration_\(f(\Pi(G))\) of \(\Pi(G)\) is the minimum number of twisted edges after any sequence of switches.
Now, we are going to define the frustration for standard (_i.e._ unsigned) graphs.
Definition 3: Let \(G=\langle V,E\rangle\) be a graph. The _frustration_\(f(G)\) of \(G\) is the maximum of the frustrations \(f(\Pi(G))\) over all possible \(\Pi(G)\) of \(G\).
Figure 4: Three switch-equivalent signatures of \(K_{5}\) (up to isomorphism) having a frustration of 4. Solid (resp., dotted) lines represent positive (resp., negative or twisted) edges. Curved arrows denote the possible switch-transitions between signatures.
Having the notion of frustration in mind, there is no need to check for all the configurations for edge signs. Indeed, it is well known that the frustration of any graph is at most half the number of edges [1]. So only a subset of all possible signatures is needed when enumerating the labelled embeddings.
Theorem 4.1: _Let \(G=\langle V,E\rangle\) be a graph with \(\Pi(G)\) its embedding (potentially with twisted edges) and with \(\chi(\Pi(G))=0\). If \(f(\Pi(G))=0\), then the corresponding embedding on the Klein bottle is not cellular._
Proof: Let \(G\) be a graph satisfying the hypothesis. Assume \(f(\Pi(G))=0\). Then, there exists a way to move the nodes of \(G\) on the Klein bottle in such a way that there are no twisted edges anymore, _i.e._ we can produce an embedding \(\Pi^{\prime}(G)\) in which all the edges are either fully contained in \([0,1]^{2}\) or passing through the non-inverted identified sides. Thus, there is at least one face, the one containing the inverted side, which is not homeomorphic to a disk.
Finally, the following theorem provides a criterion to exclude some rotation systems which are, in practice, not embeddable in the Klein bottle.
Theorem 4.2: _Let \(G=\langle V,E\rangle\) be a non-planar graph with \(\Pi(G)\). If \(\chi(\Pi(G))=0\) and \(f(\Pi(G))=0\), then there is no embedding of \(G\) corresponding to \(\Pi(G)\) in the Klein bottle._
Proof: By Theorem 4.1, if \(f(\Pi(G))=0\), there is a face homeomorphic to a cylinder. By cutting through it, the supposed embedding of \(G\) becomes a cylindrical one. However, each cylindrical embedding is necessarily planar (by identifying one of the two borders as one point) which is a contradiction.
At this point, all the material for the enumeration algorithm are ready. Let us comment how Algorithm 2 works. First of all, remark that \(S_{n}\) (line 4) denotes the set of permutations of \(n\) elements and \(X^{\sigma}\) (lines 13 and 16) denotes the application of the permutation \(\sigma\) on the set \(X\). Lines 3-4 build the combinatorial objects needed for the enumeration and lines 5 through 14 construct a rotation system given those combinatorial objects. Line 16 finds the canonical
form of the constructed rotation system. The set \(X_{\text{False}}\) stores the pseudo-valid (_i.e._ with Euler's characteristic equal 0) systems but with a frustration of 0, _i.e._ the ones whose a corresponding embedding in the Klein bottle does not exists (see Theorem 2), and \(X_{\text{All}}\) store the pseudo-valid ones regardless of their frustration. For sake of clarity, the two optimisations detailed above are not integrated in the algorithm but they could intervene respectively lines 3 and 16.
## 4 The straight-line drawing algorithm
Our drawing algorithm uses the idea of _embedding extension_: extract a subgraph which we know how to draw and then extend the drawing by adding the missing nodes and edges (see [11, 6, 3] for instance). Since the main motivation of the algorithm is to draw non-planar and even non-toroidal graphs by a straight-line drawing, we can hence take a Kuratowski subgraph, _i.e._\(K_{5}\) or \(K_{3,3}\), as a base. However,
to be sure to be able to draw this subgraph according to the given rotation system, we have first to determine all the ways to draw these two graphs on the Klein bottle. We will also make the standard assumption that our input graph is 3-connected. If not, dummy edges can be added to make \(G\) become 3-connected, then removed at the very end.
We introduce a notation for the drawing of a graph \(G\), usable both in theory and implementation-wise.
Definition 4: Let \(G=\langle V,E\rangle\) be a graph, we define its drawing \(\Gamma(G)=\langle\gamma,\delta\rangle\), with \(\gamma=\{\gamma_{v}\in[0,1]^{2}\mid v\in V\}\), the coordinate of the node \(v\) in the Klein bottle, and \(\delta=\{\delta_{e}\in\mathbb{Z}^{2}\mid e\in E\}\), the shifts of each edge defining if they go through one or multiple identified sides of the Klein bottle.
One coordinate, \(x\) or \(y\), is chosen to correspond to the inverted side; without loss of generality, say \(x\). Let us consider a node \(v\) with coordinates \((\gamma_{x_{v}},\gamma_{y_{v}})\) having a neighbour \(u\) linked by an edge with a \((\delta_{x},\delta_{y})\) shift. To compute the coordinates of \(u\) relative to \(v\), we consider two cases:
\[\gamma_{v\to u}=\begin{cases}(\delta_{x}+\gamma_{x_{u}},\delta_{y}+ \gamma_{y_{u}})&\delta_{x}\text{ is even}\\ (\delta_{x}+\gamma_{x_{u}},\delta_{y}+1-\gamma_{y_{u}})&\delta_{x}\text{ is odd} \end{cases}\]
These relative coordinates are mainly used to draw the edges between the nodes or to compute the center of a face. This can intuitively be extended to non-adjacent nodes, in which case we sum up the shifts of the edges on the path between the nodes. We assume in the following that the paths in question are clear from the context.
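As an illustration, a minimal Python version of this relative-coordinate computation could look as follows. The `Drawing` container and its field names simply mirror Definition 4; they are our own notation, not the authors' implementation, and the path variant follows the text's prescription of summing the edge shifts.

```python
from dataclasses import dataclass, field

@dataclass
class Drawing:
    gamma: dict = field(default_factory=dict)  # node -> (x, y) in [0, 1]^2
    delta: dict = field(default_factory=dict)  # directed edge (v, u) -> (dx, dy)

def relative_coords(drawing, u, shift):
    """Coordinates of u relative to a neighbour, given the edge shift (dx, dy).

    The x axis is taken as the inverted side: an odd x-shift mirrors the
    y coordinate, matching the two cases of the displayed formula.
    """
    dx, dy = shift
    gx, gy = drawing.gamma[u]
    if dx % 2 == 0:
        return (dx + gx, dy + gy)
    return (dx + gx, dy + 1 - gy)

def relative_coords_along_path(drawing, path):
    """Extension to non-adjacent nodes: sum the shifts along the path."""
    tx = ty = 0
    for a, b in zip(path, path[1:]):
        dx, dy = drawing.delta[(a, b)]
        tx, ty = tx + dx, ty + dy
    return relative_coords(drawing, path[-1], (tx, ty))
```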
As explained before, the base of our drawings are the Kuratowski subgraphs present in our input graph. As a preprocessing step, all the possible embeddings of \(K_{5}\) and \(K_{3,3}\) are precomputed. This step is executed only once by Algorithm 2, so as to have a usable database for our main algorithm. After this step we are left with the 13 embeddings (11 for \(K_{5}\) and 2 for \(K_{3,3}\)) shown in Figure 5. We denote this set of embeddings \(\Omega\). We can note that for \(K_{5}\), there are 5 more embeddings on this surface than on the torus, as described by Myrvold [11]. We stress that these are the only possible ways, up to translation of the nodes, to draw these graphs on the Klein bottle, meaning that when extracting our subgraph, the given rotation system of this subgraph will necessarily correspond to exactly one of these drawings, again up to switch-equivalence.
We set up the drawings of these embeddings to be convex (see Figure 6), in order to use a Tutte-like algorithm to place the remaining nodes. Note that having strictly convex embeddings as a starting base is not mandatory, especially since the extracted subgraph homeomorphic to \(K_{5}\) or \(K_{3,3}\) could have nodes of degree 2, which would lead to non-strictly convex embeddings. The following theorem ensures that the final drawing is equivalent to the rotation system given as input, since it proves that for each of the locally planar and orientable faces there exists a unique non-intersecting drawing.
Theorem 4.1: _Let \(G=\langle V,E\rangle\) be a graph, and \(f\) a convex 2-cell region of the Klein bottle bounded by a fixed cycle \(C\) of \(G\). Let \(H\) be the subset of the nodes of \(G\) that have to be embedded in \(f\). If \(G\) is 3-connected, then there exists a unique planar embedding of the subgraph
Figure 5: Unlabelled embeddings of \(K_{5}\) and \(K_{3,3}\) on the Klein bottle.
of \(G\) induced by \(H\cup C\) where all the nodes of \(H\) are on the same side of \(C\)._
Proof: First, let \(v\) be a node on \(C\) and \(u\) a node adjacent to \(v\). The node \(v\) can potentially appear multiple times on \(C\); however, each of the possible attachments corresponds to a distinct position in \(\pi_{v}\). Knowing \(\pi_{v}\) thus tells us on which occurrence of \(v\) in \(C\) the node \(u\) must be attached, so the ambiguity on the multiplicity of any node on the boundary of \(f\) can be cleared up. Assume now that \(u\) can be embedded in two different faces: the one dictated by \(\Pi(G)\) and another face of \(\Pi(G-u)\) which contradicts \(\Pi(G)\). Since \(G\) is 3-connected, \(u\) has at least three neighbours; let \(w_{1}\), \(w_{2}\) and \(w_{3}\) be three of them. If both embeddings are locally planar, both faces have all the \(w_{i}\) on their boundary. Let \(P\) be a path in \(G-u\) starting and ending on \(C\) and containing all the \(w_{i}\). Without loss of generality, let \(w_{1}\) and \(w_{3}\) be such that \(w_{2}\) is included in the subpath of \(P\) starting from \(w_{1}\) and ending at \(w_{3}\). Removing the nodes \(w_{1}\) and \(w_{3}\) does not disconnect \(G\), since it is 3-connected, so there must be a path connecting \(w_{2}\) to \(C\). However, if \(u\) can be embedded in both faces, the edges \((u,w_{1})\) and \((u,w_{3})\) must intersect this path, which is a contradiction.
Figure 6: Strictly convex unlabelled embeddings of \(K_{5}\) and \(K_{3,3}\) on the Klein bottle.
```
Input: \(G=\langle V,E\rangle\), \(\Pi(G)\), \(\Omega\)
Output: \(\Gamma(G)\)
1:  \(H\leftarrow\textsc{KURATOWSKI\_SUBGRAPH}(G)\)
2:  \(\tilde{H}\leftarrow\textsc{SMOOTHED}(H)\)
3:  for \(\Pi(K)\in\Omega\) do
4:    if \(\Pi(K)\cong\Pi(G)\) then
5:      \(\Gamma(\tilde{H})\leftarrow\Gamma(K)\)
6:    end if
7:  end for
8:  for \(v\in V\) do
9:    if \(v\in H\) and \(v\notin\tilde{H}\) then
10:     \(u,w\leftarrow\textsc{GET\_CHAIN\_ENDPOINTS}(v)\)
11:     \(\gamma_{v}\leftarrow(\gamma_{u}+(\gamma_{u\to w}-\gamma_{u})\cdot\frac{P_{uw}.\mathrm{index}(v)}{|P_{uw}|})\bmod 1\)
12:   end if
13: end for
14: for \((u,v)\in E\) do
15:   if \(u\in H\) and \(v\in H\) and \((u,v)\notin H\) then
16:     \(\delta_{(u,v)}\leftarrow\lfloor\gamma_{u\to v}\rfloor\)
17:   end if
18: end for
19: for \(v\in V\) do
20:   if \(v\notin H\) then
21:     \(F\leftarrow\textsc{GET\_FACE}(\Pi(G),H,v)\)
22:     \(\gamma_{v}\leftarrow(\sum_{u\in F}\gamma_{u}/|F|)\bmod 1\)
23:   end if
24: end for
25: \(\Gamma_{0}\leftarrow\Gamma(G)\)
26: \(\Gamma_{1}\leftarrow\textsc{TUTTE}(\Gamma_{0})\)
27: \(i\leftarrow 0\)
28: while \(\Gamma_{i}\neq\Gamma_{i+1}\) do
29:   \(\Gamma_{i+2}\leftarrow\textsc{TUTTE}(\Gamma_{i+1})\)
30:   \(i\leftarrow i+1\)
31: end while
```
**Algorithm 3** Drawing from Rotation System
At this point we have all the main ingredients for our drawing algorithm. We assume the following routines to be available:
* KURATOWSKI_SUBGRAPH, which extracts a subgraph of \(G\) homeomorphic to a Kuratowski subgraph.
* SMOOTHED, which takes a graph and returns a new one where all nodes of degree 2 have been replaced by an edge linking their two neighbours.
* GET_CHAIN_ENDPOINTS, which takes a node of degree 2 and returns the two endpoints of the chain on which the node lies.
* GET_FACE, which computes, according to the order of the adjacency lists of the already fixed nodes, the face in which a node has to be embedded.
* TUTTE, which moves all the non-fixed nodes to the barycenter of the positions of their neighbours. See [14] for more details about Tutte's algorithm.
Moreover, the computation of the center of a face (line 22) and the TUTTE subroutine (line 29) are performed taking the edge shifts into account, as described above. For a face, we consider one point of the face in the square and obtain the coordinates of the other nodes by running through the edges and keeping track of their shifts. For the TUTTE routine, if a node \(v\) would leave the \([0,1]^{2}\) square, its \(\gamma_{v}\) and the shifts of its incident edges have to be updated accordingly to maintain the coherence of \(\Gamma(G)\).
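A possible Python rendering of one TUTTE pass with this wrap-around handling is sketched below, reusing the hypothetical `Drawing` container and `relative_coords` helper introduced earlier. The exact policy for updating the shifts when a node leaves the unit square is our own simplification, not the authors' implementation.

```python
def tutte_pass(drawing, adjacency, fixed):
    """One barycentric update of every non-fixed node (cf. lines 25-31 of Algorithm 3).

    Each free node is moved to the barycenter of its neighbours' relative
    coordinates; the integer part of the new position is folded back into the
    shifts of its incident edges so that Gamma(G) stays coherent.
    """
    new_gamma = dict(drawing.gamma)
    for v, neighbours in adjacency.items():
        if v in fixed or not neighbours:
            continue
        xs, ys = [], []
        for u in neighbours:
            rx, ry = relative_coords(drawing, u, drawing.delta[(v, u)])
            xs.append(rx)
            ys.append(ry)
        bx, by = sum(xs) / len(xs), sum(ys) / len(ys)
        ox, oy = int(bx // 1), int(by // 1)     # how far v escaped [0,1]^2
        new_gamma[v] = (bx - ox, by - oy)       # wrapped position
        for u in neighbours:                    # compensate on incident edges
            dx, dy = drawing.delta[(v, u)]
            drawing.delta[(v, u)] = (dx + ox, dy + oy)
    drawing.gamma = new_gamma
    return drawing
```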
Proposition 1: _Algorithm 3 runs in linear time in the number of edges of the input graph \(G\)._
Proof: The Kuratowski subgraph extraction can be done in linear time (see [16, 17] for details). The search for the base drawing (lines 3-6) is a comparison with a finite set of small graphs and can hence be considered constant time. Then, the drawing of the remaining nodes of the Kuratowski subgraph (lines 8-13) can be done in linear time if we do not recompute the chains each time. The drawing of the edges not present in the Kuratowski subgraph but whose endpoints are (lines 14-18) only needs a run through the boundary of the faces. Finding the faces of the remaining nodes (lines 19-24) can also be done in linear time thanks to memoization. Finally, applying Tutte's algorithm to the non-fixed nodes is also linear: one can stop once each node has been moved at least once to ensure linearity. The total complexity of this algorithm is therefore \(O(n+m)\), with \(n\) the number of nodes and \(m\) the number of edges.
## 5 Conclusions
We have highlighted the interest of having an algorithm for constructing straight-line representations of graphs on the Klein bottle. We proposed an algorithm to compare two general rotation systems and another one to enumerate all the possible ones of a given graph on a given surface, the latter using notions coming from the domain of signed graphs to be slightly more optimized than a naive approach. Above all, we also presented an algorithm to draw a graph, given its rotation system, and build a representation equivalent to it.
This work can be extended along several directions. The first one consists in characterizing more classes of graphs which are non-toroidal but embeddable in the Klein bottle. Another interesting theoretical question would be to obtain tighter bounds on the number of distinct unlabelled embeddings of a graph on the Klein bottle or on other surfaces, following on from work already done on similar questions [2, 5]. Finally, one could study whether our algorithm can be extended to surfaces of higher genera, or whether the flat representation of these surfaces, with polygons having more sides, would lead to insoluble problems.
|
2303.15721 | Design Space Exploration for PCM-based Photonic Memory | The integration of silicon photonics (SiPh) and phase change materials (PCMs)
has created a unique opportunity to realize adaptable and reconfigurable
photonic systems. In particular, the nonvolatile programmability in PCMs has
made them a promising candidate for implementing optical memory systems. In
this paper, we describe the design of an optical memory cell based on PCMs
while exploring the design space of the cell in terms of PCM material choice
(e.g., GST, GSST, Sb2Se3), cell bit capacity, latency, and power consumption.
Leveraging this design-space exploration for the design of efficient optical
memory cells, we present the design and implementation of an optical memory
array and explore its scalability and power consumption when using different
optical memory cells. We also identify performance bottlenecks that need to be
alleviated to further scale optical memory arrays with competitive latency and
energy consumption, compared to their electronic counterparts. | Amin Shafiee, Benoit Charbonnier, Sudeep Pasricha, Mahdi Nikdast | 2023-03-28T04:10:17Z | http://arxiv.org/abs/2303.15721v1 | # Design Space Exploration for PCM-based Photonic Memory
###### Abstract.
The integration of silicon photonics (SiPh) and phase change materials (PCMs) has created a unique opportunity to realize adaptable and reconfigurable photonic systems. In particular, the nonvolatile programmability in PCMs has made them a promising candidate for implementing optical memory systems. In this paper, we describe the design of an optical memory cell based on PCMs while exploring the design space of the cell in terms of PCM material choice (e.g., GST, GSST, Sb\({}_{2}\)Se\({}_{3}\)), cell bit capacity, latency, and power consumption. Leveraging this design-space exploration for the design of efficient optical memory cells, we present the design and implementation of an optical memory array and explore its scalability and power consumption when using different optical memory cells. We also identify performance bottlenecks that need to be alleviated to further scale optical memory arrays with competitive latency and energy consumption, compared to their electronic counterparts.
Integrated Photonics, Phase Change Materials, Photonic Memories
larger than \(T_{g}\) and lower than \(T_{l}\) will recapture the crystalline structure. This process is called _set_. Note that \(T_{l}>T_{g}\), making reset the more power-hungry of the two procedures.
A PCM is in an intermediate state when a portion of the material is in the amorphous state and the rest is in the crystalline state. The crystallized area of a PCM can be estimated by analyzing the temperature distribution of the cell while it is being heated (Garfinkel et al., 2016). The temperature distribution in a PCM can be calculated by solving the unsteady-transient heat flow equation in the cell as heat is transferred through the material (Garfinkel et al., 2016). The required energy to trigger the phase transition of a PCM can be provided electrically, thermally, or optically (Bouquet et al., 2017). For electrically (thermally) controlled PCMs, a PN junction (microheater) can be used to apply heat and initiate the phase transition (Garfinkel et al., 2016). When triggered optically, a laser pulse with specific power and duration is used to set or reset the cells.
To implement PCM-based photonic memory cells, understanding the optical properties of PCMs is important. Upon a phase (state) transition, the optical refractive index of the PCM, and hence the optical transmission of the cell, will change drastically. This can be used to store data on the cell's optical transmission levels. The optical refractive index profile of three PCMs (GST, GSST, and Sb\({}_{2}\)Se\({}_{3}\)) is shown in Fig. 1. Observe the drastic contrast between the crystalline and amorphous state of the PCMs. Note that for C-band (1530-1565 nm), GST shows the highest contrast in refractive index when shifting from amorphous to the crystalline state, and vice versa. This makes GST the most suitable candidate to implement PCM-based photonic memory cells. In addition, observe that the PCMs in the crystalline state have a much higher extinction coefficient compared to their amorphous state. This leads to higher absorption of the optical power in the crystalline state compared to the amorphous state. The absorbed optical power will be converted to heat, and can be used to trigger the phase transition in PCMs. To estimate the optical refractive index profile of the PCMs in any intermediate state, one can use the Lorenz model in (Bouquet et al., 2017; Garfinkel et al., 2016).
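The interpolation of the optical constants at partial crystallization can be illustrated with an effective-medium calculation. The snippet below uses Lorentz-Lorenz mixing of the complex permittivities of the amorphous and crystalline phases, which is one common realization of the "Lorenz model" cited above; the numerical values in the example are placeholders, not the data used in this paper.

```python
import numpy as np

def effective_nk(n_a, k_a, n_c, k_c, crystalline_fraction):
    """Effective (n, kappa) of a partially crystallized PCM via Lorentz-Lorenz mixing.

    The complex permittivities of the two phases are combined through a
    Clausius-Mossotti-type relation, weighted by the crystalline fraction.
    """
    eps_a = (n_a + 1j * k_a) ** 2
    eps_c = (n_c + 1j * k_c) ** 2
    mix = (crystalline_fraction * (eps_c - 1) / (eps_c + 2)
           + (1 - crystalline_fraction) * (eps_a - 1) / (eps_a + 2))
    eps_eff = (1 + 2 * mix) / (1 - mix)
    n_complex = np.sqrt(eps_eff)
    return n_complex.real, n_complex.imag

# Placeholder GST-like optical constants at 1550 nm, 40% crystallized:
print(effective_nk(n_a=3.9, k_a=0.05, n_c=6.1, k_c=1.0, crystalline_fraction=0.4))
```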
### PCM-based Photonic Memory
A PCM-based photonic memory cell can be realized by depositing a PCM on top of an SOI waveguide. The schematic of a PCM-based photonic memory cell is shown in Fig. 2 (Garfinkel et al., 2016). In this design, the light in the input waveguide couples to the lower ring, and then passes through the waveguide with the PCM on top of it (i.e., the memory cell). The heaters on the rings are responsible for tuning the resonant wavelength of the rings. Note that in this design, both rings should have the same resonant wavelength to ensure correct operation with the same wavelength (Garfinkel et al., 2016).
Because of the higher refractive index and extinction coefficient of PCMs in the crystalline state (see Fig. 1), as the PCM in the unit cell starts to crystallize, the optical transmission of the cell decreases due to the absorption of optical power in the PCM. The optical transmission contrast due to the absorption of light in the PCM helps realize multiple, distinct optical transmission levels between the initial and final state of the material, to store single or multiple bits per cell (Bouquet et al., 2017; Bouquet et al., 2017). As the optical transmission contrast between the initial and final state of the PCM increases, the cell is able to store a larger number of bits. However, this comes at a cost of higher power consumption or latency because a larger portion of the PCM needs to be crystallized. The light-PCM interaction reduces when using silicon (e.g., instead of silicon nitride (Bouquet et al., 2017)) in PCM-based photonic memories (Garfinkel et al., 2016). This makes the PCM-based photonic memories with SiPh more power-hungry with higher latency. However, PCM-based photonic memories with SiPh offer a more compact footprint, lower propagation loss, and compatibility with CMOS fabrication foundries (Garfinkel et al., 2016).
Figure 1. Optical refractive index (\(n\)) and extinction coefficient (\(\kappa\)) for different PCM materials and wavelengths (Bouquet et al., 2017; Garfinkel et al., 2016).
Figure 3. Designed PCM-based photonic memory cell. Set and reset are carried out using a microheater on top of the cell.
Figure 2. Unit cell of a PCM-based photonic memory (Garfinkel et al., 2016).
As mentioned earlier, phase transitions in PCMs can be triggered electrically, thermally, or optically. In this paper, the set and reset procedures are carried out thermally using a microheater on top of the cell, due to the decreased light-matter interaction between silicon and the PCM and, consequently, the lower optical absorption in the PCM (on top of the silicon waveguide) in the amorphous and intermediate states. The schematic design of the cell with a microheater is shown in Fig. 3. Using this design, a low-power electrical signal with a long duration can be used to set (crystallize) the cell. A short-duration pulse with higher power (compared to the set pulse) can be used to reset the cell (switch it to the amorphous state), regardless of the initial state.
## 3. PCM-based photonic memory cells
In this section, we present a detailed design-space exploration of PCM-based photonic memory cells for the design demonstrated in Fig. 3, using GST, GSST, and Sb\({}_{2}\)Se\({}_{3}\) PCMs.
### Cell Insertion Loss
The insertion loss can be defined as the attenuation of the input optical signal when the cell is in the amorphous state. When a PCM is in the amorphous state (i.e., stores a 0), it should not attenuate the input optical signal. The insertion loss originates from the extinction coefficient in the amorphous state (see Fig. 1). Note that as the cell starts to crystallize, the loss of the cell in fact originates from the optical power absorption in the cell, which determines the optical transmission contrast being used to store data.
Considering the cell in Fig. 3, Fig.4 shows the optical insertion loss for different PCMs of different geometries (width and thickness) at 1550 nm. Results are based on simulations in Lumerical MODE solver (Lumerical, 2015). Note that we consider the PCM's width and waveguide's width to be the same. Out of the three PCMs under test, GST shows the highest optical insertion loss in the amorphous state due to its higher extinction coefficient in the C-band (see Fig. 1(a)), where its loss can be as high as \(\approx\)0.6 dB/\(\mu\)m (see Fig. 4(a)). Moreover, note that the loss in the amorphous state for GST increases with its thickness, while the effect of PCM or waveguide width is insignificant. Despite GST's high insertion loss in the amorphous state, it has the highest contrast in the refractive index switching from the amorphous to crystalline state, making it the best candidate for photonic memories. Next are GSST and Sb\({}_{2}\)Se\({}_{3}\) that are lossless in the amorphous state (see Figs. 4(b) and 4(c)), but compared to GST, have lower contrast in the refractive index between the two states. Note that to realize PCM-based photonic memory cells, having low loss in the amorphous state and high refractive index contrast between crystalline and amorphous state is ideal.
### Cell Capacity
Cell capacity in PCM-based photonic memories is a parameter that can be determined by capturing the optical transmission contrast between the amorphous and partially or fully crystallized state of the PCM (Lumerical, 2015). As the crystallization fraction increases, the optical-transmission changes increase due to increased attenuation of the input optical signal. This leads to a higher number of separable signal levels to store data, and hence storing a larger number of bits. For example, for a 2-bit PCM-based photonic memory cell, only 4 signal levels are needed to store data (00, 10, 01, 11).
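The link between transmission contrast, level spacing, and bit capacity can be made explicit with a short calculation; the helper below is our own illustration of this counting argument rather than a routine from the paper.

```python
def level_margin(delta_t, n_bits):
    """Spacing between adjacent transmission levels of an n-bit cell."""
    return delta_t / 2 ** n_bits

def max_bits(delta_t, min_margin):
    """Largest bit capacity whose level spacing stays above min_margin."""
    n = 0
    while level_margin(delta_t, n + 1) >= min_margin:
        n += 1
    return n

# A contrast of 0.96 with the 0.96/64 margin quoted in the text supports
# 64 separable levels, i.e. 6 bits per cell:
print(level_margin(0.96, 6))   # ~0.015
print(max_bits(0.96, 0.015))   # 6
```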
The optical transmission contrast (\(\Delta T\)) and optical absorption contrast (\(\Delta P\)) between fully crystalline and fully amorphous state for 2-\(\mu\)m-long PCM-based photonic memory cells of different geometries and materials are shown in Fig. 5. Note that \(\Delta T\) is not only a function of \(\Delta P\) in the cells. \(\Delta T\) partially originates from the optical-refractive-index mismatch between the PCM and SOI waveguide. The effect of the refractive-index contrast is more observable in Sb\({}_{2}\)Se\({}_{3}\). We can see from Figs. 5(c) and 5(f) that for Sb\({}_{2}\)Se\({}_{3}\), although \(\Delta P\) is zero, the material unexpectedly shows some \(\Delta T\) between the two states, which stems from the optical-refractive-index mismatch. Note that such a \(\Delta T\) in Sb\({}_{2}\)Se\({}_{3}\) cannot be controlled actively as it is independent of the material absorption (or phase change of the material). In addition, Sb\({}_{2}\)Se\({}_{3}\) shows lower refractive index contrast, and hence significantly lower \(\Delta T\) compared to GST and GSST (see Fig. 1). These make Sb\({}_{2}\)Se\({}_{3}\) not an ideal candidate to implement PCM-based photonic memories with SOI waveguides, necessitating some additional design optimization to address the optical-refractive-index mismatch.
To avoid optical-refractive-index mismatch when designing GST- and GSST-based photonic memory cells, one should pick a design where both \(\Delta T\) and \(\Delta P\) are maximum. Doing so ensures that the \(\Delta T\) stems from the optical power absorption. Accordingly, considering Figs. 5(a) and 5(d), for a 2-\(\mu\)m-long GST cell, \(\Delta T\) and \(\Delta P\) are at 95% when the thickness of the cell is about 20 nm with the width of 470 nm. Note that the impact of waveguide/PCM width on \(\Delta T\) and \(\Delta P\) is negligible. This cell can store up to 6 bits (up to 64 separable signal levels), considering a \(\approx\)1% (0.96/64) margin between each state of the cell. However, this cell suffers from 0.2 dB/\(\mu\)m
Figure 4. Insertion loss of PCM-based photonic memory cells with different materials and geometries. WG: Waveguide.
insertion loss in the amorphous state (see Fig. 4(a)). Using the same approach, we can design a 2-\(\mu\)m-long, 40-nm-thick GSST-based cell with a width of 470 nm to store 6 bits per cell, but with no insertion loss in the amorphous state.
The bit capacity of a cell is determined by adjusting the crystallized fraction of the cell. As mentioned in Section 2, the refractive index of a PCM in an intermediate state can be estimated using the Lorenz model from (Loren et al., 2010; Loren et al., 2010), assuming a uniform phase transition in the PCM's volume from the amorphous to the crystalline state. Using the Lorenz model and FDTD simulations, and assuming a \(\approx\)1% margin to separate transmission levels (Loren et al., 2010), we found that to store a maximum of 2 (4) bits per cell, up to 20% (40%) of the PCM needs to be crystallized when using GST and GSST. To store 6 bits per cell, these cells should be fully crystallized. Note that the aforementioned values are the required crystallization fractions for the extreme case of writing \(2^{n}-1\) (\(n\) is the bit capacity of the cell) to the cells. The crystalline fraction can be controlled by tuning the power and duration of the heat source used to set the cells, and, as mentioned, it can be estimated by solving the unsteady-transient heat transfer equation in the PCM's volume (Loren et al., 2010). The reset procedure will be the same regardless of the maximum number of bits stored, as the PCM should return to its initial amorphous state. Storing multiple bits per PCM-based photonic memory cell can be challenging due to the essential need for more complex programming and detection policies at the architectural level, when scaling the cells to implement memory arrays.
### Latency and Power Consumption
The latency and power consumption of a PCM-based photonic memory cell are a function of the cell's maximum bit capacity. As the cell's bit capacity increases, due to the need for a higher transmission contrast, more energy is required to reach higher levels of crystallization. Using the design in Fig. 3, a microheater is designed to set and reset PCM-based photonic memory cells. The heater material is Ti/TiN with \(\rho=60~\mu\Omega\cdot\mathrm{cm}\) and a sheet resistance of 5.5 \(\Omega\)/sq. The melting temperature of the heater material (Ti/TiN) is 1941 K, considered to avoid melting the heater upon heating the PCM. The thickness of the heater is 110 nm with the width and length of 2 \(\mu\)m, placed 600 nm above the waveguide to reduce metallic absorption due to metal-light interaction. Lumerical HEAT (Han et al., 2010) is used to carry out unsteady-transient heat transfer simulations to capture the temperature distribution in the PCMs (only GST and GSST; see Section 3.2), as a function of exposure time for a given electric power applied to the heater.
Fig. 6(a) shows the maximum set energy for the GST- and GSST-based photonic memory cells designed in Section 3.2, when a 6 mW electrical pulse is applied to the heater with different pulse durations (\(E=P\cdot t\), where \(t\) is the pulse duration and \(P\) is the electric power applied to the heater). The \(T_{g}\) for GST and GSST is considered to be 453 K and 423 K, respectively. In addition, the melting temperature of 890 K and 900 K is considered for GST and GSST, respectively (Loren et al., 2010; Loren et al., 2010; Loren et al., 2010; Loren et al., 2010). We can see from Fig. 6(a) that, in general, as we increase the cell's bit capacity, the maximum energy required to set the cell (the energy required to write \(2^{n}-1\), where \(n\) is the cell's bit capacity) also increases. This is due to the essential need for larger transmission and optical absorption contrast. For example, in a 6-bit PCM-based photonic memory cell using GSST, a maximum energy of 175 nJ is required to write "111111" to the cell, while for GST, this energy can be as high as 248 nJ. An electric
Figure 5. (a)–(c) Optical transmission contrast (\(\Delta T\)) and (d)–(f) total absorption contrast (\(\Delta P\)) between crystalline and amorphous state for PCM-based photonic memory cells with GST, GSST, and Sb\({}_{2}\)Se\({}_{3}\). Simulations are based on Lumerical FDTD.
pulse of 40 mW with a duration of 3.5 \(\mu\)s is used to reset the cells by reaching the melting temperature of the PCMs, hence returning to the amorphous state.
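Because the write energies above combine pulse power and duration simply through \(E=P\cdot t\), the corresponding time scales follow by direct arithmetic; the derived durations below are our own conversions of the quoted numbers, not values reported in the paper.

```python
def pulse_energy_nj(power_mw, duration_us):
    """E = P * t; milliwatts times microseconds gives nanojoules."""
    return power_mw * duration_us

def pulse_duration_us(energy_nj, power_mw):
    """Pulse duration needed to deliver a given energy at a given power."""
    return energy_nj / power_mw

print(pulse_energy_nj(40.0, 3.5))     # reset pulse: 140 nJ
print(pulse_duration_us(248.0, 6.0))  # ~41 us set pulse for the 6-bit GST cell
print(pulse_duration_us(175.0, 6.0))  # ~29 us set pulse for the 6-bit GSST cell
```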
The power-latency trade-off for the 6-bit GST- and GSST-based cells is shown in Fig. 6(b). As can be seen, the latency decreases nonlinearly as we increase the maximum set power. In addition, note that for power values lower than 6 mW, the phase transition cannot be triggered, regardless of pulse duration. In other words, the required energy to trigger a specific phase transition is not always the same, and it depends on the electrical power used to write on the cells. This effect stems from the physical mechanism of the heat transfer from the heater to the PCMs given the sample's thermal properties, such as thermal conductivity, specific heat capacity, and density.
## 4. PCM-based memory arrays
Leveraging the cell introduced in Section 2 (see Fig. 2), one can realize a PCM-based photonic memory array by cascading \(M\) cells per row, for a total of \(N\) rows, with the configuration depicted in Fig. 7. Here, we consider the design presented in (Han et al., 2017). Note that the original design in (Han et al., 2017) used different ring radii to induce different resonant wavelength shifts in a row, while in our work microheaters on the rings are used to realize the required resonant shift to read and write data with the PCM-based photonic memory cells. The reason for using heaters instead of different radii is to actively control the resonant shift in the rings and the spacing between the resonant peaks, which creates an additional degree of freedom when designing a memory array. Because heaters are used for tuning the rings in this design, there is no need for fine-tuning the input, drop, and output gaps in the rings associated with each PCM cell. The resonant wavelength of the rings in each row is slightly different (\(\Delta\lambda=850\) pm (Han et al., 2017)), which is controlled by the heaters in our design. The readout of each cell in the memory depicted in Fig. 7 can be done in two steps. First, we select the row to be read using output ports S\({}_{1}\) to S\({}_{N}\). Then, the cell to be read within each row is selected via the input wavelength, owing to the slight difference between the resonant wavelengths of the rings in a row (Han et al., 2017). Finally, the optical signal transmission from each cell can be converted to an electrical signal via photodetectors (PDs) at the end of each output port, to retrieve the stored data.
Employing the two cells designed in Section 3, we explore the scalability of the memory array in Fig. 7. The required laser output optical power (\(P_{lsr}\)) for reading from the last cell in each row in this memory array can be defined as (Kal
for the memory array using 6-bit GST- and GSST-based cells to write "111111" is shown in Fig. 8(b) for different memory array capacities (i.e., total number of bits stored) with \(M=N\). As can be seen, as we increase the array capacity, the maximum write energy of the entire memory increases linearly, and it can be as high as 0.6 mJ for GST and 0.4 mJ for GSST. Note that a 6 mW electrical pulse is used to write on the cells via heaters. Considering the results in Fig. 8, we can see that scaling up a memory array to increase its capacity is infeasible without further optimization of the cell's structure, due to the high input optical power required to compensate for the losses. For example, to store 2400 bits (120 bits per row and 6 bits per cell when \(M=N=20\)), the input laser should provide at least 30.4 dBm to compensate for the losses throughout the memory array. Consequently, the 1% margin between optical transmission levels considered in this paper may lead to unreliable readouts due to undesired changes in the state of the cells caused by the increased input optical power. This motivates the need for a trade-off between the transmission-level margin and the number of levels, and therefore the cell's bit capacity. Moreover, optical transmission drift is another limitation that can lead to unreliable readout of the cells. Such drifts impose a higher set pulse duration and lower bit capacity (to achieve a larger margin between the optical transmission levels) to stabilize the cell's state when writing the data (Garshan et al., 2017; Garshan et al., 2017). Another limiting factor in scaling the design in Fig. 7 is the free-spectral range (FSR) of the rings. We cannot arbitrarily increase the number of rings to store more bits per array. Increasing the number of rings per row necessitates a larger number of operating wavelengths to store and read the data from the cells. This leads to the essential need for rings with a large FSR, which requires smaller rings (FSR is inversely proportional to ring radius) at the cost of increased optical loss in the rings.
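Returning to the optical power requirement, reading the array is essentially an optical link-budget problem: the laser must overcome the accumulated insertion loss of every cell and ring between it and the photodetector. Since the specific expression from the cited work is not reproduced above, the sketch below is only a generic worst-case estimate, and all numerical values in it are placeholders rather than parameters of this design.

```python
def required_laser_power_dbm(cells_per_row, il_cell_db, il_ring_db,
                             detector_sensitivity_dbm, margin_db=3.0):
    """Worst-case laser power (dBm) to read the last cell of a row.

    Generic link budget: detector sensitivity plus the accumulated insertion
    loss of every cell and ring traversed, plus a safety margin. Illustrative
    only; not the expression used in the paper.
    """
    total_loss_db = cells_per_row * (il_cell_db + il_ring_db)
    return detector_sensitivity_dbm + total_loss_db + margin_db

# Placeholder numbers for a 20-cell row:
print(required_laser_power_dbm(20, il_cell_db=0.4, il_ring_db=1.0,
                               detector_sensitivity_dbm=-20.0))
```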
## 5. Conclusion
In this paper, we presented a design-space exploration of PCM-based photonic memories with silicon photonics using three well-known PCMs, namely GST, GSST, and Sb\({}_{2}\)Se\({}_{3}\). Parameters such as optical insertion loss of the cell in the amorphous state, optical transmission contrast between amorphous and crystalline state, cell's bit capacity, and set and reset energies are explored to design an optimized photonic memory cell. We showed that for thermally controlled PCM-based photonic memory cells with GST or GSST, as the bit capacity of the cells increases, the maximum set energy also increases drastically. Finally, we presented an example of a memory array using the optimized memory cells and explored the scalability and maximum set energy in the array as the size of the array changes. Our results show the promise of PCM-based photonic memories and the critical need for cross-layer design co-optimization (material to array level) to minimize energy and latency costs in such memories.
## Acknowledgements
This work was supported in part by the National Science Foundation under grants CCF-2006788 and CNS-2046226.
|
2309.00411 | Strongly interacting Bose-Fermi mixture: mediated interaction, phase
diagram and sound propagation | Motivated by recent surprising experimental findings, we develop a
strong-coupling theory for Bose-Fermi mixtures capable of treating resonant
inter-species interactions while satisfying the compressibility sum rule. We
show that the mixture can be stable at large interaction strengths close to
resonance, in agreement with the experiment but at odds with the widely used
perturbation theory. We also calculate the sound velocity of the Bose gas in
the $^{133}$Cs-$^6$Li mixture, again finding good agreement with the
experimental observations both at weak and strong interactions. A central
ingredient of our theory is the generalization of a fermion mediated
interaction to strong Bose-Fermi scatterings and to finite frequencies. This
further leads to a predicted hybridization of the sound modes of the Bose and
Fermi gases, which can be directly observed using Bragg spectroscopy. | Xin Shen, Nir Davidson, Georg M. Bruun, Mingyuan Sun, Zhigang Wu | 2023-09-01T12:12:10Z | http://arxiv.org/abs/2309.00411v1 | # Strongly interacting Bose-Fermi mixture: mediated interaction, phase diagram and sound propagation
###### Abstract
Motivated by recent surprising experimental findings, we develop a strong-coupling theory for Bose-Fermi mixtures capable of treating resonant inter-species interactions while satisfying the compressibility sum rule. We show that the mixture can be stable at large interaction strengths close to resonance, in agreement with the experiment but at odds with the widely used perturbation theory. We also calculate the sound velocity of the Bose gas in the \({}^{133}\)Cs-\({}^{6}\)Li mixture, again finding good agreement with the experimental observations both at weak and strong interactions. A central ingredient of our theory is the generalization of a fermion mediated interaction to strong Bose-Fermi scatterings and to finite frequencies. This further leads to a predicted hybridization of the sound modes of the Bose and Fermi gases, which can be directly observed using Bragg spectroscopy.
_Introduction._--The interest in mixtures of bosonic and fermionic quantum fluids has long predated the discovery of ultracold atomic gases. Indeed, as early as in the 1960s \({}^{3}\)He-\({}^{4}\)He solutions were studied by H. London and others [1], which led to the creation of an indispensable workhorse of low temperature experiments--the dilution refrigerator [2]. For ultracold atomic gases, the Bose-Fermi mixture is not only practically valuable for sympathetically cooling the Fermi gas [3; 4], but also serves as a versatile platform for studying a variety of physics, including polarons [5; 6], mediated interactions [7; 8; 9; 10; 11; 12; 13; 14], unconventional pairing [15; 16] and dual superfluidity [17; 18; 19]. Due to its importance, more than a dozen different Bose-Fermi mixtures have so far been realized and studied experimentally (see Ref. [20] for a review).
Since the inter-species interaction can be tuned in an atomic Bose-Fermi mixture, the first fundamental question concerns its stability and miscibility [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. For a weakly interacting Bose-Einstein condensate (BEC) mixed with a single-component Fermi gas, perturbation theory predicts that a sufficiently large Bose-Fermi scattering length will lead to the collapse of the system on the attractive side [22; 24; 27] and to phase separation on the repulsive side [26; 29; 31; 32]. At typical atomic gas densities, the predicted critical values of the scattering length are quite small such that perturbation theory is expected to be valid. The recent experimental results for the \({}^{133}\)Cs-\({}^{6}\)Li mixture have therefore come as a surprise [34]. By measuring the bosonic sound propagation at varying Bose-Fermi scattering lengths, the experiments found that the mixture regains its stability near the inter-species Feshbach resonance, in contradiction with the perturbation theory [34].
In order to understand this puzzling phenomenon and more broadly the properties of resonant Bose-Fermi mixtures, we develop a strong-coupling approach based on the many-body Bose-Fermi scattering matrix. Importantly, our theory satisfies the compressibility sum rule [35; 36], which plays a crucial role in determining the stability of the mixture. With this approach, we first obtain the zero-temperature phase diagram of the mixture corresponding to the experimental setup. The predicted region of stability is consistent with the experimental observation but differs significantly from that of the perturbative theory near resonance. An integral part of our theory is a generalization of the well-known Ruderman-Kittel-Kasuya-Yosida (RKKY) fermion mediated interaction [37] to the regime of strong Bose-Fermi scattering. Based on this interaction, we further calculate the speed of sound in the BEC and find reasonable agreement with the recent experiment for all interaction strengths. Lastly, we show that the retarded nature of this mediated interaction leads to an intriguing hybridization of the BEC sound mode and an induced fermionic zero sound mode, which can be observed by Bragg spectroscopy.
_Bose-Fermi mixture._--We consider a mixture of a weakly interacting BEC of bosons with mass \(m_{b}\) and a non-interacting gas of fermions with mass \(m_{f}\) at zero temperature and in a configuration that is representative of many current experimental systems [13; 14; 34]. Namely, the BEC is completely immersed in a spatially much larger Fermi gas such that the Fermi gas surrounding the bosons acts effectively as a reservoir for the Fermi gas inside the mixture; this is illustrated in the inset of
Fig. 1. The Hamiltonian for the mixture is
\[\hat{H}= \sum_{\mathbf{p}\neq 0}\left[(\epsilon_{b,\mathbf{p}}+2g_{b}n_{b})\hat{b}^{ \dagger}_{\mathbf{p}}\hat{b}_{\mathbf{p}}+\frac{1}{2}g_{b}n_{b}(\hat{b}^{\dagger}_{\mathbf{p }}\hat{b}^{\dagger}_{-\mathbf{p}}+h.c.)\right]\] \[+ \sum_{\mathbf{p}}\epsilon_{f,\mathbf{p}}\hat{f}^{\dagger}_{\mathbf{p}}\hat{f} _{\mathbf{p}}+g_{bf}\sum_{\mathbf{p}\mathbf{p}^{\prime}\mathbf{q}}\hat{f}^{\dagger}_{\mathbf{p}} \hat{b}^{\dagger}_{\mathbf{p}^{\prime}}\hat{b}_{\mathbf{p}^{\prime}+\mathbf{q}}\hat{f}_{ \mathbf{p}-\mathbf{q}}, \tag{1}\]
where \(\hat{b}^{\dagger}_{\mathbf{p}}(\hat{f}^{\dagger}_{\mathbf{p}})\) creates a boson (fermion) of momentum \(\mathbf{p}\) and energy \(\epsilon_{i,\mathbf{p}}=\mathbf{p}^{2}/2m_{i}\) with \(i=b(f)\). We have used the Bogoliubov theory to describe the BEC with density \(n_{b}\) and interaction strength \(g_{b}=4\pi a_{b}/m_{b}\), where \(a_{b}\) is the bosonic scattering length. Similarly, the Bose-Fermi interaction strength \(g_{bf}\) is determined by the corresponding scattering length \(a_{bf}\). Here we use units where \(\hbar\) and the system volume are unity.
_Strong-coupling theory_.-- In order to describe strong Bose-Fermi interactions, we use a Green's function approach with the Bose-Fermi scattering matrix as a basic building block [38; 39; 40; 41]. Within this framework, the fermionic Green's function is given by (see Fig. 2(a)),
\[G_{f}(p)=\frac{1}{i\omega_{p}-(\epsilon_{f,\mathbf{p}}-\mu_{f})-n_{b}\mathcal{T}_{ bf}(p)}, \tag{2}\]
where \(\mu_{f}\) is the chemical potential of the fermions inside the mixture, \(\omega_{p}\) is the Matsubara frequency and \(p\equiv(i\omega_{p},\mathbf{p})\). The scattering matrix between a boson and a Fermi gas of density \(n_{f}\) is
\[\mathcal{T}_{bf}(p)=\frac{g_{bf}}{1-g_{bf}\Pi_{bf}(p)} \tag{3}\]
with the pair propagator
\[\Pi_{bf}(p)=\int\!\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\left[\frac{1-n_{\rm FD}(\mathbf{ k})}{i\omega_{p}-\epsilon_{b,\mathbf{p}-\mathbf{k}}-\xi_{f,\mathbf{k}}}+\frac{2m_{r}}{k^{2}} \right]. \tag{4}\]
Here \(\xi_{f,\mathbf{k}}=\mathbf{k}^{2}/2m_{f}-k_{f}^{2}/2m_{f}\) with \(k_{f}\equiv(6\pi^{2}n_{f})^{1/3}\) and \(n_{\rm FD}(\mathbf{k})=\theta(-\xi_{f,\mathbf{k}})\) is the zero-temperature Fermi-Dirac distribution. We have neglected the effects of the BEC on the pair propagator, which is a good approximation for \(n_{b}a_{b}^{3}\ll 1\). The last term in Eq. (4) regularizes the divergence coming from the momentum independence of \(g_{bf}\) so that one can establish the relation \(g_{bf}=2\pi a_{bf}/m_{r}\), where \(m_{r}=m_{b}m_{f}/(m_{b}+m_{f})\) is the reduced mass.
To the lowest order in the scattering matrix, the effects of the Fermi gas on the BEC are captured by the self-energy diagram shown in Fig. 2(b). As shown in the Sup. Mat. [42], in order to fulfill the compressibility sum rule one also needs to include the diagrams in Fig. 2(c)-(d), which are second order in \(\mathcal{T}_{bf}\). Incorporating also the usual Bogoliubov self-energies due to the weak boson-boson scattering, we find
\[\Sigma_{11}(p) =2n_{b}g_{b}+\sum_{k}G_{f}(k)\mathcal{T}_{bf}(k+p)+n_{b}\Gamma_{ \rm mi}(p,0;p)\] \[\Sigma_{12}(p) =n_{b}g_{b}+n_{b}\Gamma_{\rm mi}(p,-p;p) \tag{5}\]
as the normal and anomalous self-energies of the BEC. Here we have defined the generalized fermion mediated interaction (shown in Fig. 2(e))
\[\Gamma_{\rm mi}(p,p^{\prime};q)=\sum_{k} G_{f}(k)G_{f}(k+q)\] \[\times\mathcal{T}_{bf}(p+k)\mathcal{T}_{bf}(p^{\prime}+k+q), \tag{6}\]
where \(\sum_{k}\equiv T\sum_{i\omega_{k}}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\) with temperature \(T\). The normal and anomalous Green's functions of the BEC can be obtained from the coupled equations [43]
\[\left[G^{0}(p)^{-1}-\Sigma_{11}(p)\right]G_{11}(p)-\Sigma_{12}(p)G _{12}(p)=1; \tag{7}\] \[\left[G^{0}(-p)^{-1}-\Sigma_{11}(-p)\right]G_{12}(p)-\Sigma_{12}( p)G_{11}(p)=0, \tag{8}\]
where \(G^{0}(p)=(i\omega_{p}-\epsilon_{b,\mathbf{p}}+\mu_{b})^{-1}\) and \(\mu_{b}\) is the bosonic chemical potential. To ensure that the bosonic spectrum is gapless, the chemical potential must satisfy the Hugenholtz-Pines theorem \(\mu_{b}=\Sigma_{11}(0)-\Sigma_{12}(0)\)[43]. From Eq. (5) we find
\[\mu_{b}=n_{b}g_{b}+\sum_{k}G_{f}(k)\mathcal{T}_{bf}(k). \tag{9}\]
Solving Eqs. (7)-(9) yields the bosonic Green's functions, which can be used to calculate the Bogoliubov spectrum \(E_{b,\mathbf{p}}\) and other physical properties.
Figure 2: Self-energy diagrams for fermions (a) and bosons (b)-(d) due to the Bose-Fermi interaction. The (dashed) red line denotes a (condensate) boson and the black line a fermion. The wavy line is the fermion mediated interaction in Eq. (6).
Figure 1: Phase diagram of the \({}^{133}\)Cs-\({}^{6}\)Li mixture with density ratio \(n_{b}/n_{f,res}=10\), where \(n_{f,res}\) is the reservoir fermion density and \(k_{f,res}\equiv(6\pi^{2}n_{f,res})^{1/3}\) is the corresponding Fermi momentum. The dashed lines are the stability boundaries calculated by the perturbation theory.
_Phase diagram._--We first use our strong-coupling theory to construct the zero temperature phase diagram spanned by the two scattering lengths \(a_{b}\) and \(a_{bf}\). The stability and miscibility of the mixture are determined by two conditions [44]: (a) the chemical potential \(\mu_{f}\) of the Fermi gas within the mixture equals that of the reservoir \(\mu_{f,res}\); (b) the compressibility of the BEC under a fixed fermion chemical potential is positive definite [45], i.e., \(\left(\partial\mu_{b}/\partial n_{b}\right)\bigr{|}_{\mu_{f}}\geq 0\). The first condition places a constraint on the fermion density inside the mixture while the second ensures that the mixture is stable against collapse. In Fig. 1, we show the phase diagram obtained from these conditions for the experimentally relevant case of a \({}^{133}\)Cs-\({}^{6}\)Li mixture with density ratio \(n_{b}/n_{f,res}=10\). In the following we discuss in detail how this phase diagram is obtained.
Using the condition \(\mu_{f}=\mu_{f,res}\), the fermionic quasi-particle dispersion \(\varepsilon_{f,\mathbf{p}}\) is determined from the poles of \(G_{f}(p)\) and the fermion density inside the mixture is calculated as \(n_{f}=\sum_{p}G_{f}(p)\). We find that similar to the so-called Bose polaron, i.e., a single fermion in a BEC [46; 47; 48], the fermion Green's function also has two quasi-particle branches: an attractive and a repulsive one; the attractive (repulsive) branch has negative (positive) energy and takes most of the spectral weight for \(a_{bf}<0\) (\(a_{bf}>0\)). These two branches are shown in Fig. 3(a) for the \({}^{133}\)Cs-\({}^{6}\)Li mixture. Hence, we assume that the fermions occupy the attractive branch for \(a_{bf}<0\) and the repulsive branch for \(a_{bf}>0\). Since \(\mu_{f}\) is fixed by the reservoir, it follows that the density \(n_{f}\) of fermions occupying the attractive branch increases as \(a_{bf}\) is tuned from a small negative value to resonance; the opposite is true for fermions occupying the repulsive branch. This is shown in Fig. 3(b). Thus, in the latter case the fermion density inside the mixture vanishes beyond a critical value of \(a_{bf}\), leading to phase separation between the fermions and the bosons indicated by the grey region in Fig. 1.
Outside the region of phase separation, the stability of the mixture is determined by the compressibility of the BEC. From Eq. (9), we find
\[\left.\frac{\partial\mu_{b}}{\partial n_{b}}\right|_{\mu_{f}}=\frac{4\pi}{m_{ b}}\left[a_{b}+\frac{m_{b}}{4\pi}\Gamma_{\rm mi}(0,0;0)\right]. \tag{10}\]
This relation naturally leads to an effective scattering length from the fermion mediated interaction given by
\[a_{\rm eff}\equiv\frac{m_{b}}{4\pi}\Gamma_{\rm mi}(0,0;0). \tag{11}\]
It then follows from Eq. (10) that the BEC collapses when the total scattering length \(a_{b}+a_{\rm eff}\) turns negative.
In the weak Bose-Fermi interaction limit, we can replace \(\mathcal{T}_{bf}\) by \(g_{bf}\) and the fermion mediated interaction in Eq. (6) reduces to the familiar RKKY form \(\Gamma_{\rm mi}(q)=g_{bf}^{2}\chi_{f}^{(0)}(i\omega_{q},\mathbf{q})\)[9; 10], where \(\chi_{f}^{(0)}(i\omega_{q},\mathbf{q})\) is the Lindhard function of a free Fermi gas. Since \(\chi_{f}^{(0)}(0,\mathbf{q})=-(m_{f}k_{f}/2\pi^{2})(1-q^{2}/8k_{f}^{2})\) in the long wavelength limit, second order perturbation theory predicts that \(a_{\rm eff}=-(1/2\pi)(m_{f}/m_{b}+m_{b}/m_{f}+2)k_{f}a_{bf}^{2}\)[42; 43]. In Fig. 4(c) we compare this result against that calculated by our strong-coupling theory. We find that while the two approaches agree for weak coupling as expected, the strong coupling result for \(a_{\rm eff}\) is significantly smaller close to unitarity.
This has important consequences for the phase diagram. Since the BEC collapses for \(a_{b}+a_{\rm eff}<0\) as discussed above, the values of \(a_{\rm eff}\) shown in Fig. 4 directly give the boundaries for the collapse regions shown in Fig. 1. While perturbation theory predicts a collapse region that extends to arbitrarily large values of \(a_{b}\) as unitarity \(1/a_{bf}=0\) is approached, our strong-coupling theory predicts a much smaller collapse region bounded by a maximum value of \(a_{b}\) near resonance. It follows that the mixture is stable even at resonance provided that \(a_{b}\) is sufficiently large. In Fig. 1, we see that the region of stability of the Bose-Fermi mixture indeed is significantly larger than predicted from perturbation theory.
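For orientation, the second-order expression for \(a_{\rm eff}\) quoted above is easy to evaluate numerically. In the snippet below only the Cs and Li masses and a Fermi momentum of the order set by the reservoir density \(n_{f,res}\approx 3\times 10^{17}\,\mathrm{m^{-3}}\) quoted later are taken from the text; the value of \(a_{bf}\) is a placeholder chosen for illustration.

```python
from math import pi

def a_eff_perturbative(m_b, m_f, k_f, a_bf):
    """Second-order mediated scattering length:
    a_eff = -(1/2pi) * (m_f/m_b + m_b/m_f + 2) * k_f * a_bf**2
    """
    return -(m_f / m_b + m_b / m_f + 2.0) * k_f * a_bf ** 2 / (2.0 * pi)

a0 = 0.529e-10                        # Bohr radius in metres
n_f_res = 3e17                        # reservoir fermion density, m^-3 (from the text)
k_f = (6 * pi ** 2 * n_f_res) ** (1 / 3)

# Masses enter only through their ratio; a_bf = -500 Bohr radii is a placeholder.
print(a_eff_perturbative(m_b=133.0, m_f=6.0, k_f=k_f, a_bf=-500 * a0) / a0)
```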
Figure 3: (a) The attractive and repulsive fermion quasi-particle branch \(\varepsilon_{f,\mathbf{p}=0}\) for the \({}^{133}\)Cs-\({}^{6}\)Li mixture of Fig. 1. (b) Corresponding fermion densities inside the mixture. Solid and dashed lines are the strong-coupling and perturbation theory respectively. We assume for the moment that the mixture is always stable.
Figure 4: The effective scattering lengths from the mediated interaction, calculated by the perturbation theory and the strong-coupling theory. The blue dotted line shows the behavior of \(a_{\rm eff}\) assuming the fermions stay on the attractive branch beyond the resonance. The results are for the \({}^{133}\)Cs-\({}^{6}\)Li mixture of Fig. 1.
_Bosonic sound propagation._--We next turn to the discussion of bosonic sound propagation observed recently in a strongly-interacting \({}^{133}\)Cs-\({}^{6}\)Li mixture [34]. As usual, the Bogoliubov sound velocity in the BEC is defined from the Bogoliubov spectrum as \(c_{b}=\lim_{\mathbf{p}\to 0}E_{b,\mathbf{p}}/|\mathbf{p}|\). In a pure BEC, this velocity is given by \(c_{b}^{(0)}=\sqrt{n_{b}g_{b}/m_{b}}\) which coincides with that defined by the compressibility \(c_{b,\text{com}}=\sqrt{(n_{b}/m_{b})\partial\mu_{b}/\partial n_{b}}\)[36]. Interestingly, these two quantities are not equal in the Bose-Fermi mixture due to the retarded nature of the fermion mediated interaction. Retardation effects can however be ignored when the Fermi velocity \(v_{f}\) is much larger than the sound velocity in the pure BEC, i.e., when \(c_{b}^{(0)}/v_{f}=(m_{f}/m_{b})\sqrt{(2/3\pi)(n_{b}/n_{f})(k_{f}a_{b})}\ll 1\). This is indeed the case for the \({}^{133}\)Cs-\({}^{6}\)Li mixture in Ref. [34] due to the very small Fermi-Bose mass ratio.
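The criterion \(c_{b}^{(0)}/v_{f}\ll 1\) involves only quantities quoted in the text, so it is easily checked. The short calculation below reproduces the value \(\sim 0.65\) obtained later for the \({}^{23}\)Na-\({}^{40}\)K parameters and, for comparison, evaluates the same expression with the \({}^{6}\)Li/\({}^{133}\)Cs mass ratio at identical density ratio and \(k_{f}a_{b}\) (an illustrative choice, not the experimental Cs-Li parameters).

```python
from math import pi, sqrt

def retardation_ratio(mf_over_mb, nb_over_nf, kf_ab):
    """c_b^(0)/v_f = (m_f/m_b) * sqrt((2/(3*pi)) * (n_b/n_f) * (k_f * a_b))."""
    return mf_over_mb * sqrt(2.0 / (3.0 * pi) * nb_over_nf * kf_ab)

# 23Na-40K parameters quoted in the text: n_b/n_f = 10, k_f a_b = 0.067
print(retardation_ratio(40.0 / 23.0, 10.0, 0.067))   # ~0.65: retardation matters

# Same density ratio and k_f a_b with the 6Li/133Cs mass ratio (illustrative):
print(retardation_ratio(6.0 / 133.0, 10.0, 0.067))   # ~0.02: retardation negligible
```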
We therefore use the compressibility formula to calculate the sound velocity and compare the results with the recent experiment. In order to do this, we must first analyze the experimental procedure. In Ref. [34], the mixture is first prepared at a small value of \(a_{bf}\) on either side of the Feshbach resonance and is subsequently ramped to a target value of \(a_{bf}\) within a fixed duration of time. This process is approximately adiabatic for small target values \(a_{bf}\), but highly non-adiabatic for target values in the resonant regime. Consequently, a significant fraction of the fermions will not remain on the same quasi-particle branch under such non-adiabatic ramps due to the Landau-Zener transitions [49; 50]. Furthermore, heavy losses of atoms are observed in experiments near resonance [34]. For these reasons, one must expect that near resonance the experimental values for the fermion densities inside the mixture will be much smaller than those predicted by our thermal equilibrium theory described above. To make comparisons with experiments conducted near resonant \(a_{bf}\), we therefore treat \(n_{f}\) as a fitting parameter using \(n_{f}/n_{f,res}=0.006+0.05\times(k_{f}a_{bf})^{-1/2}\) for \(a_{bf}>0\) and \(n_{f}/n_{f,res}=0.006-0.0002\times(k_{f}a_{bf})^{-3}\) for \(a_{bf}<0\) as suggested by the behavior of the loss data in experiments [34]. Other parameters are the same as those in the experiment, i.e., \(n_{b}\approx 1.87\times 10^{19}m^{-3}\), \(n_{f,res}\approx 3\times 10^{17}m^{-3}\) and \(a_{b}=270a_{0}\) where \(a_{0}\) is the Bohr radius. As can be seen in Fig. 5, for small \(a_{bf}\) both perturbative and strong-coupling theory with no fitting of \(n_{f}\) agree well with experiments although the latter performs slightly better. For resonant \(a_{bf}\), however, perturbative theory predicts no sound propagation while the strong-coupling theory with fitted \(n_{f}\) reproduces the experimental measurements well.
_Retardation and induced fermionic zero sound._--The Fermi-Bose mass ratio is much larger for a \({}^{23}\)Na-\({}^{40}\)K mixture [51; 52; 53] compared to a \({}^{133}\)Cs-\({}^{6}\)Li mixture, and it follows from the arguments given above that retardation effects must be significant for the former. A remarkable consequence of this is the possibility of exciting an induced fermionic zero sound mode through a bosonic density perturbation. It is known that in a Bose-Fermi mixture the non-interacting fermions can also experience a mediated interaction due to the Bose gas, which can lead to a fermionic zero sound mode with a speed \(\sim v_{f}\)[54; 55; 56]. When the Bose-Fermi interaction is strong and the zero sound velocity is comparable to that in the pure BEC, we anticipate a strong coupling of these two modes.
In order to demonstrate this, we turn to the calculation of the dynamic structure factor of the BEC, which also gives the sound spectrum [36] and can be directly probed by Bragg spectroscopy [57; 58]. It is defined as
\[S_{b}(\omega,\mathbf{p})\equiv\frac{1}{\pi}\text{Im}\chi_{b}(i\omega_{p}\to \omega+i0^{+},\mathbf{p}). \tag{12}\]
Here \(\chi_{b}(p)\) is the density-density response function of the BEC and is given by \(\chi_{b}(p)=-2N_{b}[G_{11}(p)+G_{11}(-p)+2G_{12}(p)]\) within the Bogoliubov framework, where \(N_{b}\) is the total number of bosons. We now calculate \(S_{b}(\omega,\mathbf{q})\) for a \({}^{23}\)Na-\({}^{40}\)K mixture with \(n_{b}/n_{f}=10\), \(k_{f}a_{b}=0.067\) and \(1/(k_{f}a_{bf})=-3\), which yields \(c_{b}^{(0)}/v_{f}\sim 0.65\). As shown in Fig. 6(a), \(S_{b}(\omega,\mathbf{q})\) exhibits a double peak structure indicating the presence of two modes, in stark contrast with the single peak structure at small \(a_{bf}\) or for \(c_{b}^{(0)}/v_{f}\ll 1\)[42]. Figure 6(b) plots the dispersion of these two modes, which are compared to the single mode in a pure BEC. This explicitly demonstrates that a fermionic
zero sound mode indeed can hybridize with the Bogoliubov sound mode and manifest itself in the excitation spectrum of the BEC.
Figure 5: Comparison of our strong-coupling theory for the BEC sound velocity to the experimental results (dots) in Ref. [34].
Figure 6: (a) Dynamic structure factor \(S_{b}(\omega,\mathbf{q})\) of the BEC at different momenta for a strongly-interacting \({}^{23}\)Na-\({}^{40}\)K mixture. (b) Excitation spectrum obtained from the peaks of \(S_{b}(\omega,\mathbf{q})\) (blue asterisks) and for a pure BEC (black dots).
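For orientation, the pure-BEC reference in Fig. 6(b) (black dots) follows from textbook Bogoliubov theory, where the density response has a single pole at \(E_{p}=\sqrt{\epsilon_{p}(\epsilon_{p}+2g_{b}n_{b})}\) with structure-factor weight \(\epsilon_{p}/E_{p}\) per boson. A minimal sketch of this limit is given below; the interaction strength used is an arbitrary assumption, and this simple limit does not reproduce the strong-coupling hybridization discussed above.

```python
import numpy as np

def bogoliubov_mode(p, g_nb):
    """Pure-BEC Bogoliubov dispersion and structure-factor weight (units: hbar = m_b = 1).

    Returns E_p = sqrt(eps_p * (eps_p + 2 g_b n_b)) and the weight eps_p / E_p
    that multiplies the single delta-function peak of the T = 0 structure factor."""
    eps = p ** 2 / 2.0                                 # free-particle dispersion
    energy = np.sqrt(eps * (eps + 2.0 * g_nb))
    return energy, eps / energy

# Example with an assumed interaction energy g_b * n_b = 0.5
p = np.linspace(1e-3, 2.0, 5)
energy, weight = bogoliubov_mode(p, g_nb=0.5)
print("low-p sound speed ~", round(np.sqrt(0.5), 3))   # E_p -> c p with c = sqrt(g_b n_b)
print(np.round(energy, 3), np.round(weight, 3))
```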
_Concluding remarks._-- We have developed a strong-coupling theory for the ground state and collective excitations of strongly interacting Bose-Fermi mixtures, emphasizing the role of a generalized mediated interaction. Our theory agrees well with recent experimental results for a resonant \({}^{133}\)Cs-\({}^{6}\)Li mixture, which the much used perturbation theory fails to account for. Furthermore, we show that new, interesting physics caused by retardation of the generalized mediated interaction can be revealed by the bosonic dynamic structure factor and observed in future experiments. Finally, in light of the many different mixtures being studied experimentally, our approach may be used to systematically explore the effects of mass and density ratio on properties of strongly interacting Bose-Fermi mixtures.
_Acknowledgement_. We thank Shizhong Zhang and Ren Zhang for helpful discussions. This work is supported by National Key R&D Program of China (Grant No. 2022YFA1404103), NSFC (Grant No. 11974161), NSFC (Grant No. 12004049), NSFC (Grant No. 12104430), Shenzhen Science and Technology Program (Grant No. KQTD20200820113010023) and Key-Area Research and Development Program of Guangdong Province (Grant No. 2019B030330001).
|
2305.09086 | Ultranarrow linewidth room-temperature single-photon source from
perovskite quantum dot embedded in optical microcavity | Ultranarrow bandwidth single-photon sources operating at room-temperature are
of vital importance for viable optical quantum technologies at scale, including
quantum key distribution, cloud based quantum information processing networks,
and quantum metrology. Here we show a room-temperature ultranarrow bandwidth
single-photon source generating polarised photons at a rate of 5MHz based on an
inorganic CsPbI3 perovskite quantum dot embedded in a tunable open-access
optical microcavity. When coupled to an optical cavity mode, the quantum dot
room-temperature emission becomes single-mode and the spectrum narrows down to
just 1 nm. The low numerical aperture of the optical cavities enables efficient
collection of high-purity single-mode single-photon emission at
room-temperature, offering promising performance for photonic and quantum
technology applications. We measure 94% pure single-photon emission into a
single-mode under pulsed and continuous-wave (CW) excitation. | Amit R. Dhawan, Tristan Farrow, Ashley Marshall, Alex Ghorbal, Wonmin Son, Henry J. Snaith, Jason M. Smith, Robert A. Taylor | 2023-05-16T01:07:46Z | http://arxiv.org/abs/2305.09086v1 | Ultranarrow linewidth room-temperature single-photon source from perovskite quantum dot embedded in optical microcavity
###### Abstract
Ultranarrow bandwidth single-photon sources operating at room-temperature are of vital importance for viable optical quantum technologies at scale, including quantum key distribution, cloud-based quantum information processing networks, and quantum metrology. Here we show a room-temperature ultranarrow bandwidth single-photon source generating polarised photons at a rate of 5 MHz based on an inorganic CsPbI3 perovskite quantum dot embedded in a tunable open-access optical microcavity. When coupled to an optical cavity mode, the quantum dot room-temperature emission becomes single-mode and the spectrum narrows down to just \(\sim 1\) nm. The low numerical aperture of the optical cavities enables efficient collection of high-purity single-mode single-photon emission at room-temperature, offering promising performance for photonic and quantum technology applications. We measure 94% pure single-photon emission into a single-mode under pulsed and continuous-wave (CW) excitation.
## Introduction
Ultranarrow linewidth room-temperature (RT) single-photons are essential for photonic quantum technologies [1, 2], but their fabrication poses a challenge. Probabilistic single-photon sources such as attenuated lasers are non-ideal [3], while high-performance single-photon emission has only been demonstrated at cryogenic temperatures [4]. Cryogenic cooling is expensive and cumbersome, which hinders practical use. Peltier coolers are cryogen-free and so offer a cheaper alternative to cryogenic cooling; however, room-temperature operation sets the gold standard for viable ultranarrow-band single-photon sources. Perovskite quantum dots (PQDs) are promising emitters for cost-effective, scalable, spectrally-pure and colour-tunable single-photon sources for quantum technology applications [5, 6]. Quantum confinement in PQDs maintains the non-classical character of the optical signal at RT [7], but, like their semiconductor counterparts, at higher temperatures their emission linewidth broadens by up to tens of nanometres due to phonon broadening, which undermines their technological potential. Strategies have been proposed for producing narrower linewidths (35-65 meV) at room temperature through targeted chemical treatment of the dot surface to quench the low-energy surface phonon modes responsible for broadening [8]. However, restoring the RT linewidths to values close to those obtained in a cryogenic environment is a significantly more demanding task, which may be achieved by constructing a
single-photon source comprising an emitter embedded in a tunable optical micro-cavity [9]. This configuration offers the advantages of narrowband emission, excellent emission directionality and high single-mode photon collection at RT. Whereas light-matter engineering systems such as plasmonic antennas [10, 11, 12, 13] demonstrate very high Purcell factors due to their ultralow mode volumes, open-access optical microcavity systems such as the one demonstrated here offer narrowband single-mode emission and wavelength tunability. Additionally, PQDs can be dispersed in a large range of non-polar solvents after synthesis, and so can be spin-coated on a wide variety of surfaces for integration within devices.
We demonstrate such a single-photon source in air at RT featuring an inorganic CsPbI\({}_{3}\) PQD (Fig 1a) coupled to an optical micro-cavity (Fig 1c). We observe that the narrowband TEM\({}_{00}\) mode emission from individual PQDs embedded in the micro-cavity exhibits strong photon antibunching under both continuous-wave (CW) and pulsed excitation with single-photon purity of 94% in a single-mode with \(\sim\) 1 nm linewidth. In this way we produce bright, pure-colour emission with a detected photon rate of \(5\times 10^{6}\) per second. We also note the challenges associated with PQDs due to photo-induced degradation or photo-bleaching in intense light fields while chemical passivisation techniques are being developed to improve their robustness [14].
PQD emitters offer excellent optical properties, including fast polarised emission with long coherence times and high quantum yields (95%) [15]. Their near-lifetime-limited photoluminescence (PL) linewidth and high quantum yield at cryogenic temperatures offer best-in-class performance for an unprocessed nanocrystal system, outperforming typical semiconductor photon sources by two orders of magnitude [16, 17, 18, 19]. The emission wavelength of PQDs can be tuned over a wide range (430-730 nm) by modifying their chemical composition, and they maintain optical performance and narrow linewidths up to RT [8]. This, coupled with their low cost and ease of synthesis, brings them tantalizingly close to industrial scaling-up, since most applications operate in air at ambient temperatures.
## Results
The custom-fabricated open Fabry-Perot micro-cavities used in this study offer a unique combination of small mode volumes (\(<1\,\mu\)m\({}^{3}\)) and Q-factors in excess of \(10^{4}\) [20], combined with full in situ wavelength tunability of the cavity mode. They consist of a planar mirror, onto which the PQDs are deposited by spin-coating from solution, and a curved mirror, where the distance and angle between the mirrors are controlled using piezoelectric nano-positioning stages. The mirror coatings are tailored to the design wavelength of the PQDs. The operation wavelengths of the cavities can range from 450 nm to 950 nm and higher depending on the choice of mirror coating.
### Out-of-cavity measurements
Photoluminescence from PQD film was characterised using the experimental set-up of Fig 1d) prior to coupling into the optical microcavity. Fig. 2\(a\) and \(b\) compare the PL peaks of single out-of-cavity PQDs at 4 K in vacuum, and at RT in air. At cryogenic temperatures, their characteristic PL spectrum can be fitted with a Lorentzian profile with linewidth 0.6 nm (1.7 meV), which is within the typical range of \(0.6-2\) meV [21] at cryogenic temperatures for single CsPbI\({}_{3}\) nanocrystals with edge length of \(\sim\) 15 nm. At RT, the out-of-cavity FWHM is more than an order of magnitude wider at \(\sim\) 40 nm, which is attributed to homogeneous broadening due to low-energy phonon-coupling [22, 8] present on the surface of the quantum dots.
Our time-resolved photoluminescence (TRPL) measurements on PQDs (Fig 2d,e) show a typical lifetime of 0.4 ns at 4 K, and 12.2 ns at RT, respectively, which is consistent with observed behaviour [23, 24] and is attributed to the fission of excitons into free carriers at higher temperatures [25]. The state lifetime at cryogenic temperature is comparable to the 180-300 ps lifetimes [15, 26] reported in lead halide PQDs. We calculated the decay lifetime using a mono-exponential fit of the TRPL curve typical of the
transition rate dynamics in two-level systems like PQDs excited at low powers [15, 21]. We note that the fast component of the time-resolved PL signal (Fig 2d, e) is more than 2.5 orders of magnitude more intense than the long-lived residual tail of the emission, attributed to delayed carrier recombination during thermalisation and trapping [27]. Detector dark counts account for the flat non-zero intensity segment of the delayed tail of the emission.
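As an illustration of the lifetime extraction described above, a mono-exponential fit of a TRPL trace can be sketched as follows; the synthetic trace, the 12.2 ns target value and the constant dark-count floor are assumptions for demonstration only, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amplitude, tau, background):
    """Mono-exponential decay plus a constant dark-count floor."""
    return amplitude * np.exp(-t / tau) + background

# Synthetic TRPL trace (assumed parameters, roughly mimicking the RT case)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 400)                  # time in ns
truth = mono_exp(t, amplitude=1.0, tau=12.2, background=0.002)
counts = rng.poisson(truth * 1e4) / 1e4           # shot noise on the counts

popt, pcov = curve_fit(mono_exp, t, counts, p0=(1.0, 10.0, 0.0))
print(f"fitted lifetime tau = {popt[1]:.2f} ns (+/- {np.sqrt(pcov[1, 1]):.2f} ns)")
```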
The noteworthy optical performance can in part be attributed to the presence of fast, optically-active, triplet states (Fig 1b), present uniquely in lead halide perovskites [28, 29, 21]. Spin-forbidden triplet transitions delay PL emission, but in these systems they become dipole-allowed due to unusually strong spin-orbit coupling from heavy Pb ions. This results in bright triplet states--the only known example of a material with this property--which can help explain the up-to 1000\(\times\) brighter PL intensity observed in PQDs compared with other semiconductors. A Rashba-type effect due to symmetry perturbation inverts the energies of singlet and triplet exciton states and lifts the fine structure degeneracy to reveal the ultranarrow linewidths within the fine structure in the orthorhombic and tetragonal phases of the crystal [28, 21], but not in the orthogonal phase where the splitting is degenerate. Different PQDs exhibit different decay times, where the variations in lifetime can be attributed to differences in the sizes of the nanocrystals, hence different quantum confinement energies [30].
**Polarised photons**. Polarisation measurements at RT highlight that the PQD emits partially polarised light. Plotting the fluorescence intensity \(I(\theta)\) as a function of a linear polariser angle \(\theta\) (Fig. 2c) reveals that PQD emission is polarised, which is consistent with other reports [26]. Measured data is fitted to Malus' law, \(I(\theta)=I_{\mathrm{min}}+(I_{\mathrm{max}}-I_{\mathrm{min}})\cos^{2}\theta\), where \(I(\theta)\) is the intensity at polariser angle \(\theta\), and \(I_{\mathrm{max}}\) and \(I_{\mathrm{min}}\) are the maximum and minimum intensities respectively. The degree of linear polarisation, defined as \((I_{\mathrm{max}}-I_{\mathrm{min}})/(I_{\mathrm{max}}+I_{\mathrm{min}})\), is found to be 40%. Polarisation of photons in single-photon sources with \(>\) 50% efficiency and near unity indistinguishability can be achieved with polarised cavities [31]. However, single-photon devices where the source itself is polarised are advantageous in technological applications such as entanglement-based quantum key distribution.
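The polarisation analysis amounts to fitting Malus' law and forming the stated intensity ratio; a minimal sketch with made-up intensity readings (chosen so that the result lands near the measured 40%) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def malus(theta_deg, i_min, i_max, phase_deg=0.0):
    """Malus' law: I(theta) = I_min + (I_max - I_min) * cos^2(theta - phase)."""
    theta = np.radians(theta_deg - phase_deg)
    return i_min + (i_max - i_min) * np.cos(theta) ** 2

# Illustrative (not measured) polariser angles and intensities
angles = np.arange(0, 360, 30)
intensities = malus(angles, i_min=3.0, i_max=7.0)
intensities += np.random.default_rng(1).normal(0.0, 0.1, angles.size)

popt, _ = curve_fit(malus, angles, intensities, p0=(2.0, 8.0, 0.0))
i_min, i_max = sorted(popt[:2])
dolp = (i_max - i_min) / (i_max + i_min)   # degree of linear polarisation
print(f"DOLP = {dolp:.2f}")                # ~0.4 for these assumed readings
```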
**In-cavity measurements**
**Coupling a PQD to a microcavity**. The planar mirror with spin-coated CsPbI\({}_{3}\) PQDs was scanned confocally (Fig. 3a), and individual PQDs, which inherently emit single-photons, were selected for cavity coupling and driven towards the mirror with concave features using nano-positioning motion controllers to create an optical microcavity. This pre-cavity-coupling characterization was carried out with the emitter facing the objective to facilitate light extraction. Most PQDs from the tested batch blinked or fluoresced intermittently under 532 nm laser excitation, as shown in Fig. 3b. The photo-bleaching of individual PQDs, which is well-known [7], especially in an intense light field such as inside a cavity at RT, can make closing the cavity and recording measurements challenging. The PQDs remain optically active for periods lasting seconds to minutes once the cavity is closed, owing to the increased field intensity and photo-degradation, aggravated by pulsed illumination, after which time the emission becomes too weak for in-cavity measurements. Owing to this photodegradation, only around 10 out of 100 single PQDs could successfully be coupled to the cavity for measurements.
The PQD in the cavity was excited by shining a laser through the planar mirror and the cavity was finely tuned to couple maximum fluorescence from the PQD to the optical cavity TEM\({}_{00}\) mode. This design featuring a half-symmetric open-access resonator configuration offers two advantages: first, any emitter on the planar mirror can be coupled to a wavelength tunable optical cavity, and second, the concave mirror facilitates optimal coupling by reducing light dissipation due to scattering. Moreover, milling multiple concave features with different radii of curvature on the same plinth permit different coupling possibilities. The cavity-emitted light was collected from the planar mirror side using a 0.85 numerical aperture coverslip-corrected objective. The low angle of cavity emission allows
efficient collection even with lower numerical aperture objectives or lenses [32].
The finesse \(\mathcal{F}\) of our optical cavity was recorded to be 100, which yields a quality factor \(Q=q\mathcal{F}=3\times 100=300\). Here, \(q\) is the axial mode index of the optical cavity. Increasing \(q\) increases the quality factor and the effective mode volume \(V\) (\(0.5\,\mu\mathrm{m}^{3}\) in our case), and hinders electromagnetic field confinement in low-width cavities such as those used here [20, 33]. The curved mirror was \(4.4\,\mu\mathrm{m}\) wide with an \(8\,\mu\mathrm{m}\) radius of curvature. The Purcell factor is \(F_{P}=\xi^{2}\frac{3\lambda_{c}^{3}}{4\pi^{2}}\frac{Q}{V}\), where \(\lambda_{c}\) is the wavelength of the main cavity mode and \(\xi\) is the dipole orientation factor that accounts for the coupling between the emitter and the cavity field. \(\xi^{2}=1\) for a perfectly aligned dipole and \(\xi^{2}=1/3\) if all possible dipole orientations are averaged. Assuming randomly oriented PQD dipoles, this gives \(F_{P}=4.7\), which can reach a maximum value of 14 for a perfectly aligned dipole. This moderate value of \(F_{P}\) is attributed to the relatively low \(Q\) value of our cavity, which can be increased by using higher-finesse cavities.
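A quick numerical check of this Purcell estimate, using the cavity parameters quoted above (\(Q=300\), \(V=0.5\,\mu\mathrm{m}^{3}\), \(\lambda_{c}\approx 690\) nm), is sketched below; small differences from the quoted 4.7 and 14 come only from rounding of the inputs.

```python
import math

def purcell_factor(q_factor, mode_volume_um3, wavelength_nm, xi_sq=1.0):
    """F_P = xi^2 * (3 / (4 pi^2)) * (lambda_c^3 / V) * Q, with lengths in micrometres."""
    lam_um = wavelength_nm * 1e-3
    return xi_sq * 3.0 * lam_um ** 3 / (4.0 * math.pi ** 2) * q_factor / mode_volume_um3

q, v, lam = 300, 0.5, 690.0
print("aligned dipole :", round(purcell_factor(q, v, lam, xi_sq=1.0), 1))    # ~15
print("random dipoles :", round(purcell_factor(q, v, lam, xi_sq=1 / 3), 1))  # ~5
```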
**Single-mode emission**. The PL spectrum of a single PQD at RT is significantly broader compared to that at cryogenic temperatures. When the PQD is inserted into the microcavity, its emission is forced into the optical modes of the cavity, which acts as a narrow bandpass filter.
By changing the cavity length using piezo-electric actuators to adjust the cavity modes, PQD emission was coupled into a cavity mode, which makes the emission narrowband and single-mode. Fig. 2a displays the emission from a TEM\({}_{00}\) with an axial mode index of 3. Compared to the \(40\,\mathrm{nm}\) wide RT free-space emission of a PQD in Fig. 2b, its cavity-coupled emission results in a single-mode with a FWHM of \(1\,\mathrm{nm}\). The open-access microcavity design permits straightforward modification of the axial and the lateral emitter position that enables wavelength tunability and coupling of the emitter to different cavity modes. This design has been employed to demonstrate wavelength tunable narrow-band RT emission linewidths from other single-emitters as well [32, 33]. The coupling of a PQD to a cavity mode leads to a modification of the density of optical states. This Purcell effect alters the spontaneous decay process of the PQD such that its emission is forced into the narrow cavity mode to which it is coupled.
**Single-photon emission**. We performed photon correlation measurements in a Hanbury Brown and Twiss (HBT) setup on the emission from a PQD coupled to an optical cavity TEM\({}_{00}\) mode (Fig 4a) using CW (Fig 4c) and pulsed (Fig 4d) lasers. Pulsed excitation poses additional challenges for systems prone to photo-bleaching due to the high energy in individual pulses. In both regimes, we recorded 94% single-photon purity. Remarkably, the detected photon rate was \(5\,\mathrm{MHz}\)--Fig 4b shows the actual photon rate measured by the photon-detector without taking into account any optical system and photodetection losses. Fig 4c shows the time-decay of the PL emission from a single in-cavity PQD in air at RT with a mono-exponential lifetime of \(12.7\,\mathrm{ns}\).
**Summary**
We have demonstrated a single-photon source in air at RT based on inorganic CsPbI\({}_{3}\) perovskite quantum dots embedded in a microcavity, with a single-photon purity of 94% in CW and pulsed mode operation, generating linearly polarised single-photons at a rate of \(5\,\mathrm{MHz}\). Critically, coupling the emission into the cavity mode reduced the emission linewidth to just \(\sim 1\,\mathrm{nm}\) without the need for cryogenic cooling. The reproducible synthesis of PQDs and the ease of deposition directly onto cavity surfaces result in a highly reproducible, low-cost single-photon system with the potential for transformational impact on quantum technologies at scale with the advent of robust PQDs.
## Materials and Methods
### Mirror fabrication
An ultra-violet fused silica slip (Spectrosil 2000) was diced to create a flat-topped plinth of height 100 \(\mu\)m and top area of 300 \(\mu\)m \(\times\) 300 \(\mu\)m, on which smooth spherical concave features are created using focused ion-beam milling. Here, we used a concave feature of depth 0.3 \(\mu\)m and radius of curvature of 8 \(\mu\)m. A planar substrate made of the same material is used for the planar mirror as well. The dielectric Bragg mirror reflectors are created by depositing alternating layers of SiO\({}_{2}\) and Ta\({}_{2}\)O\({}_{5}\) by ion-beam sputtering. The \(97.5\pm 0.5\)% and \(>\) 99.9% reflectivities of the planar and plinth mirrors, respectively, at a central wavelength of 690 nm (selected to match the PQD emission wavelength) allow the creation of an optimum optical microcavity in which light is extracted through the planar mirror.
### Quantum dot synthesis
Reagents: All chemicals were purchased from Sigma Aldrich and used without further purification. Lead iodide (PbI\({}_{2}\), 99%), cesium carbonate (Cs\({}_{2}\)CO\({}_{3}\), Reagent Plus 99%), 1-octadecene (ODE, technical grade, 90%), oleic acid (OA, technical grade, 90%), oleylamine (OLAm, technical grade, 70%), methyl acetate (MeAc, anhydrous, 99.5%), octane (anhydrous, 99%), toluene (anhydrous, 99.8%), and ethylenediaminetetraacetic acid (EDTA, ACS Reagent, 99.4%).
Perovskite quantum dots (PQDs) were synthesized following the hot-injection method adapted from the literature [34]. Each step up until the PQD purification was done using standard Schlenk line techniques to keep the reaction air-free, under nitrogen. First, 0.407 g Cs\({}_{2}\)CO\({}_{3}\), 1.25 mL OA, and 20 mL ODE were added to a 100 mL 3-neck flask and degassed for 1 hour under vacuum (flask 1). Flask 1 was then heated to 150\({}^{\circ}\)C; the vacuum was switched to an overpressure of nitrogen when the flask temperature reached 100\({}^{\circ}\)C. Flask 1 was left stirring at 150\({}^{\circ}\)C until all of the solid Cs\({}_{2}\)CO\({}_{3}\) was dissolved, indicating that the Cs-oleate had formed. Flask 1 was then cooled to 130\({}^{\circ}\)C before being used in the next step.
Into a 250 mL 3-neck round-bottom flask, 0.5 g of PbI\({}_{2}\) and 25 mL ODE were degassed and then heated to 120\({}^{\circ}\)C under vacuum (flask 2). Meanwhile, 2.5 mL of OA and 2.5 mL of OLAm were heated on a hotplate set at 130\({}^{\circ}\)C. The hot OA-OLAm mixture was injected into flask 2 and left under vacuum until all the PbI\({}_{2}\) had dissolved. Flask 2 was switched from vacuum to nitrogen and the temperature control unit was set to 180\({}^{\circ}\)C. Immediately upon reaching 180\({}^{\circ}\)C, 2 mL of the Cs-oleate solution from flask 1 was injected into flask 2. Flask 2 was then moved from the heating mantle to an ice bath as quickly as possible after injection. Once flask 2 had cooled, the reaction was removed from the Schlenk line and exposed to ambient conditions for the purification steps.
The reaction mixture from flask 2 was separated into 2 centrifuge tubes (10 mL in each) and 70 mL MeAc was used to precipitate the PQDs. The PQDs were then centrifuged to form a pellet and the supernatant was discarded. The pellets were redispersed in 5 mL of hexane, then precipitated with 7 mL methyl acetate and centrifuged again. This pellet was dispersed in 2 mL of octane and stored in a glass vial in the fridge. Some precipitate collected on the bottom of the vial overnight; this precipitate is avoided when removing the sample from the vial.
### Sample preparation
For the RT measurements, the CsPbI\({}_{3}\) QDs were treated with EDTA by stirring 1 mL of PQDs with 5 mg of EDTA overnight to improve photo-stability and filtered through a 200 nm mesh. Size-selective centrifugation was used in order to obtain monodispersed PQDs. The original PQD solution in octane was diluted by at least 10-fold and then centrifuged at low speeds (2000-3000 RPM) for 30 minutes. The resulting supernatant was used for measurements, while the small pellet that formed was discarded. A
well-dissolved (in toluene) and concentration-calibrated PMMA solution was prepared and added to the sample which was then spin-coated onto the flat DBR mirror. This resulted in a monodispersed deposition of PQDs at the right concentration (1-5PQDs/10 \(\mu\)m\({}^{2}\) with a target of 1PQD/10 \(\mu\)m\({}^{2}\), corresponding to the cavity diameter). Coupling of single-PQDs poses a challenge if the sample is prone to clustering of PQDs, even after calibrating the colloidal concentration to the cavity diameter and laser footprint. Clusters couple to the cavity more readily, revealing its modal structure, but are unsuitable for single-photon generation, which requires the coupling of single PQDs.
A polymethyl methacrylate (PMMA) coat helped to isolate the PQDs from air, albeit with marginal effect as compared to previous attempts where no PMMA was used, and attenuated the signal intensity by approximately 10-15%. The collection efficiency can be improved by centrifuging the PQDs, covering them in PMMA with thickness \(\lambda_{c}\), and by replacing the final SiO\({}_{2}\) layer of the planar DBR, since the refractive indices of the two materials match. Chemical passivation such as with Ethylenediaminetetraacetic acid (EDTA) [14] offers an additional strategy for improving the durability of the PQDs against bleaching. We note that there are no restrictions on the operation-temperature of the cavities, since they can work as well in the cryogenic regime as they do at RT.
For the 4 K measurements, solutions of CsPbI\({}_{3}\) nanocrystals in toluene were spin-coated at 4000 rpm for 30 seconds onto glass substrates. Various dilutions (in toluene) were trialed until the concentration allowed for the resolution of the emission spectrum from individual PQDs.
### PL, TRPL, and polarisation measurements
The optical properties of the PQDs at RT were characterized using a confocal micro-photoluminescence setup. The PQDs were excited with a 532 nm Oxiuss diode-pumped solid-state CW laser, and a 532 nm PicoQuant PDL800-D pulsed laser at a repetition rate of 5 MHz with a pulse width of 50 ps. The two microcavity mirrors were mounted on Thorlabs Nanomax 300 piezo-electric stages to control the cavity configuration. Both the laser pump excitation and the cavity fluorescence collection were performed through the planar mirror side using a cover-slip-corrected 0.85 numerical aperture Olympus LCPFLN100XLCD objective. For the out-of-cavity measurements, the same setup was used but without the curved mirror, and the emitter, placed on the planar mirror, faced the objective. Single-photon detection and counting were performed using Exelitas SPCM-AQRH-14 SPADs and a Swabian Instruments Time Tagger 20.
|
2302.02174 | Comment on "Axion-matter coupling in multiferroics" | A previous publication [H. S. Roising et al., Phys. Rev. Research 3, 033236
(2021)] involving the current authors pointed out a coupling between dark
matter axions and ferroic orders in multiferroics. In this comment we argue
that using this coupling for dark matter sensing is likely not feasible for the
material class we considered, with present-day technologies and level of
materials synthesis. The proposed effect (for QCD axions) is small and is
overwhelmed by thermal noise. This finding means that likely materials for the
proposed detection scheme would need to be found with significantly lower
magnetic ordering temperatures. | Alexander Balatsky, Benjo Fraser | 2023-02-04T14:36:26Z | http://arxiv.org/abs/2302.02174v1 | # Comment on "Axion-matter coupling in multiferroics"
###### Abstract
A previous publication [H. S. Roising et al., Phys. Rev. Research **3**, 033236 (2021)] involving the current authors pointed out a coupling between dark matter axions and ferroic orders in multiferroics. In this comment we argue that using this coupling for dark matter sensing is likely not feasible for the material class we considered, with present-day technologies and level of materials synthesis. The proposed effect (for QCD axions) is small and is overwhelmed by thermal noise. This finding means that likely materials for the proposed detection scheme would need to be found with significantly lower magnetic ordering temperatures.
In a previous publication [1] we considered the coupling between dark matter axions and electrons in multiferroics. The coupling was found to yield an energy contribution of the form \(gaV(\mu_{0}/\varepsilon_{0})^{1/2}\mathbf{P}\cdot\mathbf{M}\), where \(\mathbf{P}\) (\(\mathbf{M}\)) is the ferroelectric (ferromagnetic) polarization vector, \(V=L_{\text{domain}}^{3}\) is the volume of the homogeneous ferroic domains, \(a\) is the axion field, and \(g\sim 10^{-10}g_{ae}\) where \(g_{ae}\) is the bare axion-electron coupling. A linear response estimate suggested that the coupling could lead to a time-dependent magnetic response on the order of \(\delta M\sim\mathcal{O}(\text{1aT})\) under ideal conditions with parameters motivated by hexagonal \(\text{Lu}_{1-x}\text{Sc}_{x}\text{FeO}_{3}\) (\(h\)-LSFO), a candidate \(\mathbf{P}\parallel\mathbf{M}\) multiferroic. We suggested that multiferroics therefore might be a platform for sensing dark matter axions using hypersensitive magnetometers and macroscopic sensor volumes. In this comment, we provide order-of-magnitude noise estimates suggesting that mK temperatures and \(V\sim 1\text{m}^{3}\) may be required to achieve a signal-to-noise ratio greater than one. These tight temperature and volume constraints would make it challenging to sense dark matter axions in multiferroics with present-day technologies and existing material candidates.
In Ref. [1] we used a Ginzburg-Landau model for the longitudinal magnetization perturbation \(\delta M(t)\) induced by the axion. In this note, we extend the model to include the effects of noise by adding a stochastic term to the equations of motion. This produces the Langevin equation
\[\begin{split}\frac{\text{d}^{2}\delta M(t)}{\text{d}t^{2}}+& \gamma\frac{\text{d}\delta M(t)}{\text{d}t}+m_{M}^{2}\delta M(t)\\ &=P_{0}\theta(t)+\xi(t),\end{split} \tag{1}\]
where \(\xi(t)\) is the noise term, \(\gamma\) the damping factor (the width of the magnetic resonance), \(P_{0}\) the static ferroelectric polarization, and \(\theta(t)=\theta_{0}a(t)\) is the axion driving term. We allow for the possibility of a bandwidth \(\Delta\omega_{a}\) for the axion signal. Solving (1), the power spectral density \(S_{M}(\omega)\equiv\int\text{d}t\,\text{e}^{i\omega t}\langle\delta M(t) \delta M(0)\rangle\) is equal to
\[S_{M}(\omega) = |\chi_{M}(\omega)|^{2}\left[S_{\theta}(\omega)+S_{\xi}(\omega)\right] \tag{2}\] \[\chi_{M}(\omega) = \frac{1}{-(\omega^{2}-m_{M}^{2})-i\gamma\omega} \tag{3}\]
where \(\chi_{M}(\omega)\) is the response function of equation (1), and \(S_{\theta}(\omega)\), \(S_{\xi}(\omega)\) are the spectral densities of the two driving terms on its right hand side.
We assume white noise with correlation function \(\langle\xi(t)\xi(t^{\prime})\rangle=\lambda\,\delta(t-t^{\prime})\). The kinetic energy of the magnetization in the Ginzburg-Landau model of [1] is \(F(M)=\int\text{d}^{3}x\frac{1}{2}\alpha_{M}(\partial_{t}M)^{2}\), where we guess \(\alpha_{M}=(1\text{meV})^{-2}\) based on typical spin-exchange couplings; the value of this constant has not been directly measured for \(h\)-LSFO. Then the fluctuation-dissipation theorem gives \(\lambda\sim T\gamma/(V\alpha_{M})\) in the classically limited case (we are in this limit since the LSFO Curie temperature is \(T\sim 100K\gg\hbar\omega_{a}/k_{B}\)).
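To make the role of the stochastic term concrete, Eq. (1) can be integrated with a simple Euler-Maruyama scheme; the sketch below uses arbitrary units and illustrative parameter values (including a monochromatic stand-in for the axion drive), not the physical numbers of the text.

```python
import numpy as np

# Illustrative parameters in arbitrary units (not the physical values of the text)
m_M, gamma, lam = 1.0, 0.05, 1e-3      # resonance frequency, damping, noise strength
P0, theta0, omega_a = 1.0, 1e-3, 1.0   # drive amplitude and assumed axion frequency
dt, n_steps = 1e-2, 200_000

rng = np.random.default_rng(0)
x, v = 0.0, 0.0                        # delta M and its time derivative
trace = np.empty(n_steps)
for i in range(n_steps):
    drive = P0 * theta0 * np.cos(omega_a * i * dt)     # axion driving term P0 * theta(t)
    noise = np.sqrt(lam / dt) * rng.standard_normal()  # white noise with <xi xi'> = lam * delta
    accel = -gamma * v - m_M ** 2 * x + drive + noise  # Eq. (1) rearranged for the acceleration
    v += accel * dt
    x += v * dt
    trace[i] = x

print("rms response:", trace[n_steps // 2:].std())
```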
There are three relevant bandwidths in the problem:
* \(\gamma\sim 10^{-6}\)eV the width of the magnetic resonance: our numerical estimate here is based on inelastic neutron scattering measurements [2]
* \(\Delta\omega_{a}\sim\frac{1}{2}\omega_{a}v_{a}^{2}\sim 10^{-12}\)eV the width of the axion signal, expected to be determined by Doppler broadening of the Galactic axion background
* \(\Delta\omega_{\text{meas}}\sim t_{\text{meas}}^{-1}\sim 10^{-18}\)eV the measurement bandwidth, which is determined by the measurement time \(t_{\text{meas}}\): this estimate assumes \(t_{\text{meas}}\sim 1\)hr.
We therefore see that \(\gamma\gg\Delta\omega_{a}\gg\Delta\omega_{\text{meas}}\). In this regime the signal-to-noise ratio is [3; 4]
\[\frac{S}{N}\,=\,\frac{S_{\theta}(\omega_{a})}{S_{\xi}(\omega_{a})}\sqrt{\frac{ \Delta\omega_{a}}{\Delta\omega_{\text{meas}}}} \tag{4}\]
where the square root factor does not come from (3), but from the fact that we can take \(N\sim\Delta\omega_{a}/\Delta\omega_{\text{meas}}\) samples by scanning across the axion resonance - see the Appendix of Ref. [3]. This is the regime where the measurement time is much greater than the axion coherence time, so that the amplitude signal-to-noise ratio \(\sqrt{\frac{S}{N}}\propto t_{\rm meas}^{1/4}\).
For our case we find the order-of-magnitude estimate
\[\begin{split}\frac{S}{N}&\sim\,\frac{V}{\gamma T} \frac{|P_{0}|^{2}}{\Delta\omega_{a}\alpha_{M}}(\chi\theta)^{2}\sqrt{\frac{ \Delta\omega_{a}}{\Delta\omega_{\rm meas}}}\\ &=10^{-8}\left(\frac{V}{1\rm m^{3}}\right)\left(\frac{10^{-5} \rm eV}{\gamma}\right)\left(\frac{100\rm K}{T}\right)\sqrt{\frac{\Delta\omega _{a}}{\Delta\omega_{\rm meas}}}\end{split} \tag{5}\]
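The second line of Eq. (5) can be evaluated directly; the sketch below uses the benchmark bandwidths listed above and shows why mK temperatures and \(\mathrm{m}^{3}\)-scale volumes are needed for \(S/N\gtrsim 1\).

```python
def snr_estimate(volume_m3, gamma_eV, temperature_K,
                 d_omega_axion_eV=1e-12, d_omega_meas_eV=1e-18):
    """Order-of-magnitude S/N from Eq. (5):
    1e-8 * (V / 1 m^3) * (1e-5 eV / gamma) * (100 K / T) * sqrt(bandwidth ratio)."""
    band_factor = (d_omega_axion_eV / d_omega_meas_eV) ** 0.5
    return 1e-8 * volume_m3 * (1e-5 / gamma_eV) * (100.0 / temperature_K) * band_factor

print(snr_estimate(1.0, 1e-5, 100.0))   # ~1e-5 at 100 K and V = 1 m^3
print(snr_estimate(1.0, 1e-5, 1e-3))    # ~1 only for mK temperatures at the same volume
```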
In Fig. 1 we plot the case of classically limited noise as a function of the linear size of the homogeneous domains \(L_{\rm domain}=V^{1/3}\). Untrained samples of \(h\)-LSFO display ferroelectric (ferromagnetic) domains of typical size \(10\,\mu\)m (\(100\,\mu\)m) [5], the former of which can be controlled by the quench rate through the transition [6]. Training techniques can in ideal cases push the homogeneous coupling domains to the order of \(1\) mm [7], which, from the estimate of Fig. 1, falls four orders of magnitude short of achieving a signal-to-noise ratio greater than one, even for mK temperatures.
The absolute value of the magnetic signal, which our estimates suggest to be on the order of \(\mathcal{O}(1\) aT) for \(h\)-LSFO motivated parameters, is at the lower end of what can feasibly be measured with present-day SQUID technologies having sensitivities on the order of \(10^{-16}\) T/\(\sqrt{\rm Hz}\). The Gravity Probe B experiment [8] demonstrated sensitivities to magnetic field deviations of about 5 aT, with sampling times of a few days.
These small numbers for our proposed multiferroic indicate (as it is currently synthesized) that it is unsuitable for detection of the QCD axion through our mechanism. However, we do not rule out the mechanism's future viability, for example by optimizing the material properties to improve the signal-to-noise ratio. We note that the driving term of the axion-induced perturbation is proportional to the ferroelectric polarization, so one could search for multiferroics where the polarization is large, including hybrid structures and field-induced sensing devices. Lone pair ferroelectrics, such as the archetypical BiFeO\({}_{3}\), have the potential to reach polarizations at least an order of magnitude greater than \(h\)-LSFO [9].
Finally, we mention that should an alternative mechanism be found in which the axion couples to the matter fields \(\mathbf{P}\) and \(\mathbf{M}\) directly from the axion-photon coupling, \(\frac{a}{f_{a}}F_{\mu\nu}\tilde{F}^{\mu\nu}\), this could improve significantly on some of the above issues, since such a coupling would not suffer from the \(m_{a}/m_{e}\) suppression that enters in the axion-fermion coupling \(\frac{\partial_{\mu}a}{f_{a}}\bar{\Psi}_{f}\gamma^{\mu}\gamma^{5}\Psi_{f}\) we considered here. For a \(\mu\)eV mass axion this is of the order \(10^{-10}\)!
_Acknowledgements_: This work developed from our discussions with Henrik S. Roising, to whom we are grateful for analysis, comments and critique. We also acknowledge discussions with S.W. Cheong, J. Conrad, S. Griffin, N. Spaldin and F. Wilczek. This work was funded by VR Axion research environment grant 'Detecting Axion Dark Matter In The Sky And In The Lab (AxionDM)' funded by the Swedish Research Council (VR) under Dnr 2019-02337, European Research Council ERC HERO-810451 grant and University of Connecticut.
|
2306.00294 | Affinity-based Attention in Self-supervised Transformers Predicts
Dynamics of Object Grouping in Humans | The spreading of attention has been proposed as a mechanism for how humans
group features to segment objects. However, such a mechanism has not yet been
implemented and tested in naturalistic images. Here, we leverage the feature
maps from self-supervised vision Transformers and propose a model of human
object-based attention spreading and segmentation. Attention spreads within an
object through the feature affinity signal between different patches of the
image. We also collected behavioral data on people grouping objects in natural
images by judging whether two dots are on the same object or on two different
objects. We found that our models of affinity spread that were built on feature
maps from the self-supervised Transformers showed significant improvement over
baseline and CNN based models on predicting reaction time patterns of humans,
despite not being trained on the task or with any other object labels. Our work
provides new benchmarks for evaluating models of visual representation learning
including Transformers. | Hossein Adeli, Seoyoung Ahn, Nikolaus Kriegeskorte, Gregory Zelinsky | 2023-06-01T02:25:55Z | http://arxiv.org/abs/2306.00294v1 | Affinity-based Attention in Self-supervised Transformers Predicts Dynamics of Object Grouping in Humans
###### Abstract
The spreading of attention has been proposed as a mechanism for how humans group features to segment objects. However, such a mechanism has not yet been implemented and tested in naturalistic images. Here, we leverage the feature maps from self-supervised vision Transformers and propose a model of human object-based attention spreading and segmentation. Attention spreads within an object through the feature affinity signal between different patches of the image. We also collected behavioral data on people grouping objects in natural images by judging whether two dots are on the same object or on two different objects. We found that our models of affinity spread that were built on feature maps from the self-supervised Transformers showed significant improvement over baseline and CNN based models on predicting reaction time patterns of humans, despite not being trained on the task or with any other object labels. Our work provides new benchmarks for evaluating models of visual representation learning including Transformers.
## 1 Introduction
A fundamental problem that our visual system must solve is how to group parts of the visual input together into coherent whole objects (Peters and Kriegeskorte, 2021). The role of attention in solving this problem has been experimentally studied for decades (Treisman, 1996; Adeli et al., 2022). A proposed solution is that attention can bind object features through activation spreading within an object using horizontal connectivity in retinotopic visual areas (Roelfsema, 2023). However, the modeling work in this domain has focused on bottom-up Gestalt cues and clear object boundaries for how attention can spread within an object to bind its features (e.g. the "growth cone" model (Jeurissen et al., 2016)). A compelling model of primate vision, however, should be able to handle natural images, where object boundaries are frequently ambiguous and bottom-up cues must be combined with prior object-specific knowledge.
Building a model of object-based attention applicable to natural images requires the modeling of connectivity between image regions that can guide the spread of attention. In this work, we test the hypothesis that features from recent vision Transformers can capture this connectivity and are therefore well-suited to address the spreading of object-based attention. To that end, we introduce a model of object-based attention built on self-supervised vision Transformers. In this model, feature similarity, which we call affinity (Fig. 1) (Chen et al., 2022), guides the spread of attention in the visual input to perform perceptual grouping, playing the role of long-range lateral connections in
linking distant points of the visual input in the retinotopic maps of the ventral pathway (Gilbert and Li, 2013; Ramalingam et al., 2013). We also designed and collected human data on a well-controlled behavioral experiment to probe how people group complex objects in natural scenes. We then test the models on two benchmarks based on this dataset. The first benchmark evaluates the alignment between the affinity signals in the feature maps and the ground truth object boundaries, measuring the extent to which the learned representations are object-centric. The second benchmark evaluates the models' ability to predict human object-based perception and behavior, measured by an object grouping task.
## 2 Related works
Models of object-based attention and grouping in the brain:The human visual system continuously captures objects in a scene by spatially segregating their features from the background and dynamically grouping those features. The formation of these objects relies on different grouping signals, ranging from part-whole matching and Gestalt processes to prior semantic knowledge of object categories (Greff et al., 2020; Vecera, 2000; Wagemans et al., 2012; Gilbert and Li, 2013). Most modeling work, however, has been focused on understanding how the former, more bottom-up cues can be implemented in the retinotopic maps of the visual cortex. For example, models of "association fields" (Field et al., 1993) suggest that the effective connectivity between units depends on the similarity between the simple represented features (e.g. orientation). Attention is believed to modulate the effective connectivity between the units and therefore guide the grouping process. Most modeling work in this domain has also focused on Gestalt cues and clear object boundaries (Jeurissen et al., 2016). These models therefore cannot be applied to objects in natural contexts, where objects become less spatially separated or where contours lack definition.
Self-supervised Vision Transformers:Application of Transformers to vision has been very successful, with these models outperforming convolutional neural networks (CNNs) on object recognition and other tasks (Dosovitskiy et al., 2020). In Transformers, the visual input is first divided into different patches that are then encoded as feature vectors called tokens. At each layer of processing, a given token, that represents a particular image patch, updates its value by interacting with and mixing ("attending" to) the values of all other tokens that it finds relevant. The selective nature of this mixing has motivated naming this process "attention" in Transformers (Vaswani et al., 2017). These attention weights in supervised vision Transformers have been shown to perform some perceptual grouping (Mehrani and Tsotsos, 2023). More recently, studies have explored training these models on self-supervised objectives yielding some intriguing object-centric properties that are not as prominent in the models trained for classification. When trained with self-distillation loss (DINO, Caron et al. (2021) and DINOv2 Oquab et al. (2023)), the attention values contain explicit information about the semantic segmentation of the foreground objects and their parts, reflecting that these models can capture object-centric representations without labels. In Masked auto-encoding (MAE, He et al. (2022)), the input image is significantly occluded and the model is trained to reconstruct the whole image from a small number of visible patches. Minimizing the reconstruction loss enables the model to learn object-centric feature that yield great performance on other downstream tasks.
Self-supervised Transformers for objects and part discovery:There have been recent attempts to investigate the extent to which self-supervised Transformers can learn high-level characteristics
of a scene. These studies involve computing feature similarity among all tokens and examining their correspondence with high-level concepts such as objects and parts. LOST (Simeoni et al., 2021) and Tokencut (Wang et al., 2022) use the similarity graph to perform unsupervised object discovery, showing success when there is one salient object in the scene. Other work (Amir et al., 2021) has used the feature similarity to perform co-segmentation of object parts. These results collectively corroborate that vision Transformers trained with a self-supervised objective begin to represent object-centric information, meaning the patches that have the highest affinity to a given patch are likely to be on the same object (Fig. 1).
Figure 1: Example affinity maps for a few image patches from the grid, generated using features from DINO.
## 3 Behavioral experiment
We use a "two-dot" paradigm (Fig. 2) to directly probe how humans group and segment the regions of natural images into objects. In this paradigm, two dots are placed on an image and participants are asked to indicate by button press whether they are on the same object or two different objects (Fig. 2A). One of these dots is always at the center location, and the other is at a peripheral location. The reaction time (RT) of this button press is the primary measure in this task, and reveals the difficulty of object grouping and the spread of attention within an object. Previous works using this paradigm have been limited in scale or have focused on simpler stimuli (Vecera, 2000; Kim et al., 2019; Korjoukov et al., 2012). For example, in Korjoukov et al. (2012) 24 images were hand selected to depict two instances of either a vehicle or an animal. Our work significantly scales up this effort, and our dataset is available at github.com/Hosseinadeli/affinity_attention.
### Behavioral methods
Participants:72 undergraduate students participated in our experiment for course credit. Their mean age was 20.4 years (range = 17-32) and all had normal or corrected-to-normal vision. This study was approved by the school Institutional Review Board.
Stimuli and Apparatus:We selected 288 images from the Microsoft COCO (Common Objects in Context) dataset (2017 validation set), which has images of complex everyday scenes depicting common objects in their natural context (Lin et al., 2014). The images also come with object-level segmentations, which we used to generate four versions of each display (Fig. 2B): "same-close"
Figure 2: **A) Behavioral procedure. Participants maintain fixation on the center dot during the trial. B) Sample trial from all four conditions, each coded by color. C) Placement of dots in all conditions and trials. D) Mean reaction time for correct trials by condition, with SEM.**
(two dots on the same object with a close distance), "same-far" (two dots on the same object with a far distance), "different-close" (two dots on two different objects with a close distance), and "different-far" (two dots on two different objects with a far distance). We ensured that the distances are controlled between the two same/different conditions, thus preventing the participants from making the same/different decision based on distance. Fig. 2C shows the placement of the dots across all four conditions. Each participant saw each image only in one condition. The assignment of images to the four conditions was counterbalanced across participants so that every 4 participants saw the full set of trials. The experiment was conducted on a 19-inch flat-screen CRT ViewSonic SVGA monitor with a screen resolution of 1024\(\times\)768 pixels and a refresh rate of 100 Hz. Participants were seated approximately 70 cm away from the monitor, which resulted in the screen subtending a visual angle of 30\({}^{\circ}\)\(\times\)22\({}^{\circ}\). This meant that around 34 image pixels spanned approximately 1 degree of visual angle, making the 'close' peripheral dot in the experiment located around 3 degrees from the fixation point (central dot) and 'far' peripheral dot located 6 degrees from the fixation point. Gaze position before the button response was recorded using an EyeLink 1000 eye-tracking system (SR Research) with a sampling rate of 1000 Hz. Gaze coordinates were parsed into fixations using the default Eyelink algorithm, which employed a velocity threshold of 30 degrees per second and an acceleration threshold of 8000 degrees per second squared. Calibration drift was checked before every trial, and recalibration was performed if necessary to ensure accurate eye-tracking data.
Procedure:Participants were instructed to determine whether the two dots were on the same object or two different objects. Each trial started with the presentation of a fixation cross for 500 ms, indicating the location of the central dot. Both the central and peripheral dots were then displayed for 1,000 ms without the image. Next, the cues were superimposed on the image and flickered at a frequency of 5 Hz to ensure their visibility. During the trial, participants were required to maintain their gaze on the center dot for the entire duration. If their gaze deviated more than 1 degree of visual angle away from that location, the trial was terminated. 7 percent of the trials were removed due to breaking fixation. To record their responses, participants utilized a Microsoft gamepad controller, with buttons assigned to the "same" or "different" condition. The assignment of the same button were randomized to the right or left hand across participants to ensure the RT differences were not due to the dominant hand bias. Each participant performed 32 practice trials and 256 experimental trials. The experimental trials were divided into four blocks, with breaks provided between the blocks. The order of image presentation in each block was randomized across the trials. To provide accuracy feedback, a sound was used to indicate an incorrect response to participants. We removed one experimental image from our analyses based on the ground truth response being ambiguous, leaving 255 experimental images and 1020 (255\(\times\)4) trials for behavioral analyses and model evaluations.
### Behavioral results
The average subject accuracy on this task was 90 percent. We only show the analyses for trials where the subject response was correct (however, the patterns were largely the same when considering both correct and incorrect trials). Fig. 2D shows the RT data for each condition. Subjects were faster to respond when the two dots were on the same object compared to when the peripheral dot was on a different object. This effect is known as the same-object advantage [14], indicating that the first dot facilitates the selection of the whole object. This effect interacted with dot distance, where we observed the fastest RTs in the close-separation same-object condition. This behavioral pattern is consistent with the hypothesis that attention spreads from the center dot within the cued object, thereby reaching the closer dot faster than the farther dot. If the peripheral dot is on a different object, dot separation would not be expected to play a large role in RTs [14].
Fig. 3 shows four sample trials for each condition with comparable difficulties. For this visualization, we first ordered the trials for each condition by their RTs and then selected the 50th, 100th, 150th and 200th trials, displayed from left to right, respectively, for each condition. Of note, RTs increase with the distance between the dots when the dots are on the same object. Comparing the cat figures (top row) illustrates this pattern for dots on the same object. When the dots are on different objects, there is no effect of dot separation on RTs, as can be seen by comparing the elephants. While the average and cross-condition behavioral patterns are interesting, there is also interesting variability within each condition. Task difficulty, and RTs, increase when there are within-object boundaries between the dots, when the dots are on different object parts (with different textures), when the dots are on
the narrower parts of the object, when they are close to the boundaries, or when there are multiple objects from the same category. Our behavioral dataset is therefore rich in capturing the variable conditions under which humans group objects in natural scenes. We will test models on how well they can predict the mean RT of the subjects for each trial.
## 4 Modeling experiments
In our modeling experiments, we set out to evaluate a wide range of representation learning models, as shown in Table 1. We considered both Transformer and CNN based models. For the Transformer models we focused on the ViT architecture [14] with different number of parameters (base, large, or giant) using different patch sizes (8, 14, or 16). The models are trained with either the self-distillation method, DINO [1] or the updated method, DINO version 2 [13]. Also included are models trained using masked autoencoding (MAE,[12]). In Transformers, each token is represented with three vectors: key, query, and value. The affinity between tokens can be calculated using any of these feature representations (self-attention is the dot product of one token's key with another token's query). For these models, we extract the patch features from the last Transformer layer and calculate affinity by performing the dot product of each token's feature with all the other tokens. Sample affinity maps for a few patches
\begin{table}
\begin{tabular}{|l||l|l|l|l|l|} \hline run name & training & architecture & model & patch size & feature type \\ \hline DINOv2\_ViTb14 & DINO v2 & ViT & base & 14 & key, query, or value \\ \hline DINOv2\_ViT114 & DINO v2 & ViT & large & 14 & key, query, or value \\ \hline DINOv2\_ViTg14 & DINO v2 & ViT & giant & 14 & key, query, or value \\ \hline DINO\_ViTb16 & DINO & ViT & base & 16 & key, query, or value \\ \hline DINO\_ViTb8 & DINO & ViT & base & 8 & key, query, or value \\ \hline MAE\_ViTb16 & MAE & ViT & base & 16 & key, query, or value \\ \hline DINO\_ResNet50 & DINO & ResNet50 & & & conv \\ \hline ImageNet\_ResNet50 & ImageNet & ResNet50 & & & conv \\ \hline \end{tabular}
\end{table}
Table 1: All model run specifications
Figure 3: **A)** Sample trials for the four conditions in our experiment. The corresponding average reaction times from subjects is displayed on top.
are shown in Fig. 1, generated using the key features from a model trained with the DINO objective. Our code is available at github.com/Hosseinadeli/affinity_attention.
We also included two CNN models in our comparison, both using the ResNet50 architecture [He et al., 2016]. One is pre-trained on ImageNet for object classification and the other is trained using the DINO method. Convolutional features are extracted from the third convolutional block of the model to have feature dimensions comparable to a Transformer model with a patch size of 16. The feature tensors (with size \(h\times w\times d\)) are then divided into \(h\times w\) feature vectors of length \(d\) to represent different patches of the image. We then take the dot product of each feature vector with all the others to calculate the affinity map for each patch.
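A minimal sketch of the affinity computation shared by the Transformer and CNN pipelines (per-patch features in, patch-by-patch affinity out) is given below; the random feature array is a stand-in for whatever key, query, value, or convolutional features are extracted from a given model.

```python
import numpy as np

def affinity_matrix(patch_features, normalize=True):
    """Dot-product affinity between every pair of image patches.

    patch_features: array of shape (num_patches, feature_dim), one row per patch.
    Returns a (num_patches, num_patches) affinity, optionally rescaled to [0, 1]."""
    aff = patch_features @ patch_features.T
    if normalize:
        aff = (aff - aff.min()) / (aff.max() - aff.min() + 1e-12)
    return aff

# Example with random stand-in features for a 14 x 14 grid of patches
rng = np.random.default_rng(0)
features = rng.normal(size=(14 * 14, 768))
aff = affinity_matrix(features)
affinity_map_of_patch_0 = aff[0].reshape(14, 14)   # affinity of patch 0 to every patch
print(aff.shape, affinity_map_of_patch_0.shape)
```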
### Quantifying the object-centric component of affinity
Affinity between two patches of an image is driven by feature similarity. However, in order for object-based attention to spread within an object, the affinity has to have a strong object-centric component. We quantify this by performing an ROC analysis on our experimental dataset. For each of the 1020 trials, we first extract the affinity map from the peripheral dot location. A sample map is shown in Fig. 4A on the left, which used the key features from the ViT-based model (trained on DINO v2 with patch size 14; DINOv2_ViTb14 in Table 1). This affinity map represents the strength of the feature similarity between the patch at the peripheral dot location and all the other patches in the image. We utilized different thresholds to evaluate the spatial distribution of affinity signals at a specific strength and assess their alignment with the ground truth object boundary. This analysis allowed us to determine the extent to which the signals were concentrated within the object or dispersed outside of it, providing insights into the object-centric nature of the learned features. In the example shown in Fig. 4A, the threshold is decreased in steps of 0.05 from 1 to 0. The True Positive Rate (TPR) is calculated for each threshold as the size of the area that is active in the object divided by the entire area of that object. The False Positive Rate (FPR) is calculated as the size of the area active outside of the object divided by the entire area outside of that object. The figure shows that the TPR increases as the threshold decreases, but that the FPR does not increase, meaning the patches having the strongest affinity to the given peripheral location are in the same object. This indicates a strong object-centric signal in this affinity map. Only after the TPR reaches a high level of >0.96 does the FPR start to increase with a decreasing threshold. Figure 4B illustrates an example trial from
the same model, where the object-centric signal is not strong in the affinity map. In this case, the FPR increases significantly before the TPR reaches a high level.
Figure 4: **A) A sample experimental trial with the central and peripheral dots shown on the top left. The affinity map for the peripheral dot is shown below the trial image. The activity remaining on the affinity map after decreasing a threshold in steps of 0.5 from 1 to 0. The True Positive Rate (TPR) and False Positive Rate (FPR) are displayed above the activity map for each step. The TPR increases significantly before the FPR increases, showing a strong object-centric signal in this map. B) Affinity map from another trial, with the same step-wise analysis of TPR and FPR as the threshold decreases. Compared to A), the object-centric signal in this example is not as strong. C) The ROC curves for all the model runs. The figure legend is ordered by the area under the curve with model performance decreasing from top to bottom.**
Averaging the TPR and FPR across all the trials, we can gauge how object-centric the affinity is for a given model. Fig. 4C shows ROC curves for all the model runs in Table 1. We calculate the area under the ROC curves (AUC) as the performance measure. The figure legend is ordered from top to bottom by decreasing AUC. For the Transformer models we report the best results among the key, query, and value features. We generally observed similar performance from the affinity maps built using the key and query features, and found that both outperform the value features. This observation generally agrees with prior work on using these features for unsupervised object discovery [Simeoni et al., 2021, Wang et al., 2022]. The results reveal that DINO v2 clearly improves upon the earlier DINO model. The MAE-based affinity maps perform strongly, outperforming the original DINO model but not DINO v2. Interestingly, the CNN-based models perform poorly on this task regardless of the training objective. The results corroborate earlier findings suggesting that self-supervised learning methods, paired with a Transformer architecture, learn more object-based features.
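A sketch of the per-trial threshold sweep and the resulting ROC statistics (the affinity map and the binary object mask are assumed to be given at the same spatial resolution; the 0.05 step size follows the text):

```python
import numpy as np

def tpr_fpr_curve(affinity_map, object_mask, step=0.05):
    """Sweep the threshold from 1 to 0 and report TPR/FPR against the object mask."""
    obj = object_mask.astype(bool)
    thresholds = np.arange(1.0, -step / 2, -step)
    tprs, fprs = [], []
    for t in thresholds:
        active = affinity_map >= t
        tprs.append((active & obj).sum() / obj.sum())      # area active inside the object
        fprs.append((active & ~obj).sum() / (~obj).sum())  # area active outside the object
    return np.array(tprs), np.array(fprs)

def roc_auc(tprs, fprs):
    order = np.argsort(fprs)
    return np.trapz(np.asarray(tprs)[order], np.asarray(fprs)[order])

# Per-trial curves are averaged over all 1020 trials before the final AUC is computed.
```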
### Affinity spread
We designed a simple algorithm to spread attention through an object using the affinity signal (which is normalized to have all values between 0 and 1). The model, like the subjects, starts every trial from the patch at the center dot location. From this starting location, the model then selects all the tokens with affinity above the starting threshold (tau), causing attention to spread to a bigger segment around the center dot (Fig. 5A top-left). At each new step, the model identifies the patches in the image that have a strong affinity with the already attended segment by averaging the affinity maps of all the tokens in this growing segment. We constrain this process so that the segment grows as one contiguous region, allowing only newly added patches that are connected to the current segment. Fig. 5A shows the iterative spread of attention in two sample objects over 20 steps (generated from DINOv2_ViTb14_k features). As the segment grows, the attention spread becomes more conservative, both because of the contiguity constraint and because all patches in the segment vote on where to spread next. To counter this conservative spread, the model gradually reduces the threshold by a fixed amount (tau_step) at each time step, thereby allowing attention to spread within the entire object before spreading outside of it. Tau and tau_step are two hyperparameters of the algorithm that we explored for each run.
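A minimal sketch of the affinity-spread procedure (4-connectivity for the contiguity constraint and the exact stopping behavior are our own simplifying assumptions):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def affinity_spread_rt(affinity, start, target, tau=0.8, tau_step=0.2, max_steps=20):
    """affinity: (h, w, h, w) normalized patch affinities; start/target: (row, col) patches.
    Returns the step at which attention reaches `target`, or max_steps + 1 if it never does."""
    h, w = affinity.shape[:2]
    segment = np.zeros((h, w), dtype=bool)
    segment[start] = True
    for step in range(1, max_steps + 1):
        mean_aff = affinity[segment].mean(axis=0)        # every patch's affinity to the segment
        frontier = binary_dilation(segment) & ~segment   # 4-connected neighbors of the segment
        segment |= (mean_aff >= tau) & frontier          # grow only into strongly affine neighbors
        if segment[target]:
            return step                                  # model's RT prediction for this trial
        tau = max(tau - tau_step, 0.0)                   # gradually relax the threshold
    return max_steps + 1                                 # attention never reached the peripheral dot
```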
The number of steps that it takes for attention to reach the peripheral dot is the model's prediction of the RT for that trial. Fig. 5B shows the attention spread overlaid on the original images for the two trials. In both cases, the model reaches the close peripheral dot (top) in fewer steps than the far peripheral dot (bottom). Fig. 5C shows more examples of trials where attention reaches the dot in
Figure 5: **A)** 20 steps of attention spread for two trials (top vs bottom two rows), each starting from the center dot. **B)** Attention spread overlaid on the image for the steps that attention reached the peripheral dot in the close (top) and far (bottom) conditions for both trials. **C)** Examples of attention spreading in objects.
the object. We find that, while the individual affinity maps can be noisy and scattered, spreading object-based attention from a point inside the object, which relies on averages of many affinity maps, can yield object-centric incremental segments.
Model predictions were similarly made for the "different" condition trials where the two dots are located on two different objects. Given that the threshold is decreasing at each step, in some trials the attention spreads outside the object and reaches the peripheral dot even when the dot is on a different object. That step number is then the model's prediction for the participants' RT on that trial. In other cases, attention never reaches the peripheral dot when it is on a different object, in which case the model prediction is 21 (maximum number of steps + 1). We plot in Fig. 6A the histogram of the number of trials where attention reaches the peripheral dot as a function of the number of steps (1 to 21) for the DINOv2_ViTb14 run with tau and tau_step set at 0.8 and 0.2, respectively. The rightmost bar at bin 21 in each condition indicates the number of trials where attention did not reach the peripheral dot. Fig. 6B shows the average number of steps that the model took for attention to reach the close and far peripheral dot in each condition. The model reproduces the same-object advantage, evidenced by a smaller number of steps in the same condition compared to the different condition. Also, for the same condition, the model shows the effect of distance on the number of steps, similar to the effect of distance on RT that we observed in humans. However, the model also showed an effect of distance in the different condition trials, contrary to the pattern in human RT. This highlights a limitation in our approach to predicting RT in the different condition that needs to be addressed in future work. Lastly, we observed that the behavioral trends of the model shown in Fig. 6B largely stayed the same for a range of tau and tau_step values, but that the average number of steps it took to reach the peripheral dot was higher with higher starting thresholds and smaller steps.
Fig. 6C shows how affinity spread based on features from different models predicts human behavior on the grouping task. The bars show the correlation of each model prediction with the average reaction time of the subjects across the 1020 trials. We also include a Euclidean model as a simple baseline that predicts longer RT when there is a greater distance between the two dots. The gray horizontal line labeled "Subject model" indicates the upper bound for model prediction. To create this model, we split the 72 subjects' data into two groups of 36 subjects each and correlated the
Figure 6: **A)** Histogram of the number of trials where the attention spread reaches the peripheral dot for each condition, with number of steps on the x-axis. **B)** Mean number of model steps for attention to reach the peripheral dot in the close (light bars) and far (dark bars) conditions for both same and different trials, with SEMs. **C)** Correlation between different model predictions and average subject RT. The gray line shows the subject-subject agreement, serving as a rough upper bound on model performance.
average response of the two groups. We then repeated the process 50 times with different splits and then took the average of the correlation coefficients across the splits.
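A sketch of this split-half estimate, assuming `rt` is a subjects-by-trials matrix of reaction times:

```python
import numpy as np

def subject_noise_ceiling(rt, n_splits=50, seed=0):
    """Average split-half correlation of per-trial mean RTs (rt: n_subjects x n_trials)."""
    rng = np.random.default_rng(seed)
    n_subjects = rt.shape[0]
    corrs = []
    for _ in range(n_splits):
        perm = rng.permutation(n_subjects)
        half = n_subjects // 2
        mean_a = rt[perm[:half]].mean(axis=0)   # average RT per trial, group A
        mean_b = rt[perm[half:]].mean(axis=0)   # average RT per trial, group B
        corrs.append(np.corrcoef(mean_a, mean_b)[0, 1])
    return float(np.mean(corrs))
```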
The results suggest that affinity spread models built using feature maps from self-supervised Transformers better align with human grouping behavior when compared to baseline models and models based on convolutional neural networks (CNN), despite not being trained on the same/different task or with any other object labels. We can observe that models exhibiting stronger object-centric affinity signals tend to achieve superior performance in predicting human behavior, a trend we expect to continue. Another notable observation is that larger models do not necessarily perform better, with ViTg14 underperforming the base and large ViT models. Given the still large gap between model and human behavior on individual trials, this model comparison and dataset will be a useful benchmark for future developments.
## 5 Discussion
In this work, we proposed a novel affinity-based model of object grouping and demonstrated that self-supervised Transformers provide a plausible mechanism for human grouping and object-based attention, thus extending the value of these models beyond core object recognition [20]. Our affinity spread method, building on self-supervised representation learning, does not require a large number of labeled samples for training, making this a more biologically plausible mechanism for how the primate visual system learns to group features and perceive objects. Our work also provides a new behavioral dataset and framework for evaluating models of representation learning, including Transformers. This work further contributes to computer vision by showing how object-based attention, a core element of human cognition, can be integrated into an AI model.
|
2302.12367 | Extracting Victim Counts from Text | Decision-makers in the humanitarian sector rely on timely and exact
information during crisis events. Knowing how many civilians were injured
during an earthquake is vital to allocate aids properly. Information about such
victim counts is often only available within full-text event descriptions from
newspapers and other reports. Extracting numbers from text is challenging:
numbers have different formats and may require numeric reasoning. This renders
purely string matching-based approaches insufficient. As a consequence,
fine-grained counts of injured, displaced, or abused victims beyond fatalities
are often not extracted and remain unseen. We cast victim count extraction as a
question answering (QA) task with a regression or classification objective. We
compare regex, dependency parsing, semantic role labeling-based approaches, and
advanced text-to-text models. Beyond model accuracy, we analyze extraction
reliability and robustness which are key for this sensitive task. In
particular, we discuss model calibration and investigate few-shot and
out-of-distribution performance. Ultimately, we make a comprehensive
recommendation on which model to select for different desiderata and data
domains. Our work is among the first to apply numeracy-focused large language
models in a real-world use case with a positive impact. | Mian Zhong, Shehzaad Dhuliawala, Niklas Stoehr | 2023-02-23T23:50:24Z | http://arxiv.org/abs/2302.12367v1 | # Extracting Victim Counts from Text
###### Abstract
Decision-makers in the humanitarian sector rely on timely and exact information during crisis events. Knowing how many civilians were injured during an earthquake is vital to allocate aids properly. Information about such _victim counts_ is often only available within full-text event descriptions from newspapers and other reports. Extracting numbers from text is challenging: numbers have different formats and may require numeric reasoning. This renders purely string matching-based approaches insufficient. As a consequence, fine-grained counts of injured, displaced, or abused victims beyond fatalities are often not extracted and remain unseen. We cast victim count extraction as a question answering (QA) task with a regression or classification objective. We compare regex, dependency parsing, semantic role labeling-based approaches, and advanced text-to-text models. Beyond model accuracy, we analyze extraction reliability and robustness which are key for this sensitive task. In particular, we discuss model calibration and investigate few-shot and out-of-distribution performance. Ultimately, we make a comprehensive recommendation on which model to select for different desiderata and data domains. Our work is among the first to apply numeracy-focused large language models in a real-world use case with a positive impact.1
Footnote 1: Code is available online at:
[https://github.com/mianzg/victim_counts](https://github.com/mianzg/victim_counts)
## 1 Introduction
Timely and accurate information during crisis events is crucial for rescue operations and the allocation of humanitarian aid Lepuschitz and Stoehr (2021). However, crisis information is often scarce, subjective, or biased, which renders reported numbers in text extremely important Hellmeier et al. (2018); Zavarella et al. (2020); Radford (2021). For instance, the count of injured or missing people provides quantitative information about the catastrophic impact of an earthquake. In this work, we focus on human victims in crisis events, e.g., fatalities in floods, herein referred to as _victim counts_. A reliable estimate of victim counts is helpful during crisis Darcy and Hofmann (2003); Kreutzer et al. (2020), and also post-crisis, benefiting research to diversify measures of crisis intensity. As of now, most intensity measures are either limited to event types Vincent (1979); Goldstein (1992), fatality counts Kalyvas (2006); Chaudoin et al. (2017) or both Stoehr et al. (2022). More fine-grained measures such as injured, displaced, or abused victims are not captured in most popular databases and remain unmonitored Krause (2013); Cruyff et al. (2017); Cullen et al. (2021).
Many victim counts are reported in full-text form within event descriptions in news media. This makes their systematic collection and analysis technically complex. Manual extraction of victim counts from text is very labor-intensive and does not scale to big data collections Schrodt and Ulfelder (2016); Lewis et al. (2016). Computerized approaches such as the event coding software Tabari Schrodt (2009) and Petrarch2 Norris et al. (2017) focus on extracting actor and event types. They rely on lambda calculus and syntactic pattern matching, but disregard mentions of victim counts.
As we will show, parsing-based approaches perform decently well at extracting explicitly reported victim counts. They can identify the mention of the count "\(5\)" in "5 people were injured". However, they are often inadequate when the description _implies_ a correct count -- for example, from the description that "one logger was shot but survived", a human reader may infer that _one_ person is injured. Since neither a count nor the injury is mentioned explicitly, a parsing-based system may fall short. Another difficulty stems from the fact that the counts can be reported in many, different formats. A reported count may be digit-based or
spelled out, define an exact quantity or a range as in "dozens of people were injured". As a consequence, formulating the task of victim count extraction is not an easy endeavor (SS3). Most prior work assumes a setting where the count is explicitly mentioned in an event description (Dohling and Leser, 2011; Imran et al., 2013; Rudra et al., 2018; Camilleri et al., 2019). Such settings can be tackled by sequence labeling models that select a relevant span from the given description. However, if the victim count does not appear verbatim, as in the above "one logger" example, models with some form of abstract reasoning capacity may be needed (Roy et al., 2015). Recently, large language models have shown promising results in answering number-focused questions with and without explicit mentions of relevant numbers (Lewkowycz et al., 2022; Nye et al., 2021; Wei et al., 2022; Lefebvre and Stoehr, 2022).
This paper is concerned with studying these different approaches (SS4): as baselines, we compare regular expression, dependency parsing, and semantic role labeling. We consider the NT5 (Yang et al., 2021) model as a representative numeracy-enhanced pre-trained language model. We use the representation of this model in a generation, a classification, and a regression setting. We evaluate all models along three dimensions: accuracy (SS5), reliability (SS6), and robustness (SS7). We find that the fine-tuned language model outperforms the baseline models, especially when the victim count extraction requires reasoning. Reliability and robustness are particularly important in high-stake, human-centric tasks such as victim count extraction (Zhang et al., 2020; Kong et al., 2020; Russo et al., 2022). Model reliability indicates to which extent model behavior can be trusted within decision-making settings (Leibig et al., 2017; Jiang et al., 2021). One dimension of reliability is model calibration which indicates if a model's confidence is aligned well with it making correct predictions (Guo et al., 2017). While calibration has been widely studied for classification, we add to the discussion of calibrated regression (Song et al., 2019) and generation settings (Widmann et al., 2021). Finally, the dimension of robustness describes how stably a model performs. For instance, when the training set is limited or when the test data is out-of-distribution, a less robust model will forfeit more of its predictive performance. To shed light on this dimension, we conduct experiments in few-shot learning and out-of-distribution settings.
We conclude with an application to showcase the extraction of fine-grained and highly specialized types of victim counts. Lastly, we discuss the benefits and drawbacks of the different approaches to assist practitioners in choosing the most suitable task formulation and model.
## 2 Data
We use publicly available datasets covering natural disasters and armed conflicts, namely: (1) World Atrocities Dataset (WAD) (Schrodt and Ulfelder, 2016), (2) Non-violent and Violent Campaigns and Outcomes 3.0 (NAVCO) (Lewis et al., 2016), and (3) European Media Monitor (EMM) (JRC Science Hub, 2018; Steinberger et al., 2017). For each dataset, we use the event text description and two types of victim counts: the death count and the injury count that we refer to as "WAD death" or "WAD injury". We pre-process the data by removing the samples with missing values (NaN) in the victim counts. For EMM, we only consider samples with a non-zero victim count since "\(0\)" is over-represented.
## 3 Task Formulation
In this section, we discuss some questions and challenges faced in formulating the task of extracting victim counts from event descriptions. We justify some of the choices we make and describe why it is not possible to have a single formulation that fits all needs:
**Is the victim count always present in the text?** Victim counts can be expressed in various ways in the text. When the count is expressed explicitly in the text, say "5 people were injured", a span extraction model can effectively extract the injury count \(5\). However, in certain cases, a single explicit number might not be mentioned, and the victim count needs to be logically or algebraically inferred from the text. Consider the description "a 4-year-old girl and her mother were found dead"; a model would need to logically deduce that the victim count of death is \(2\). To handle this, we not only look at span extraction models but also experiment with models that can understand the text at a deeper level and produce a victim count.
**Is the victim count always a single number?** Often, in the event description, the victim count
is described as a range, such as "at least 330 people died", or in vague terms, like "dozens were injured". Additionally, even within a description, the victim counts for the same event can vary, possibly because the counts were recorded from different sources. This makes extracting a single exact count almost impossible. In such cases, the best a model can do is to output a close estimate of the actual victim count. Another solution would be to provide a range within which the count could lie. For the humanitarian sector deciding on the quantity of aid to be deployed, a range might suffice over a single exact count. To account for this, we also look at models that are trained to output a range by classifying the victim counts into a set of binned categories.
## 4 Models
In §4.1, we introduce baseline models that parse an event description and heuristically extract a victim count. We then specify the model implementations for the different task formulations in §4.2.
### Baseline Models
All baselines extract a victim count by locating the part of the text that could be relevant to victims and finding the nearby victim counts. The locating step requires a pre-defined list of words denoted as _locating list_. For example, to extract death counts, this list would include terms like "kill" and "die".
**Regex.** Regular expressions (regex) are a rule-based method to extract counts by string pattern matching. The patterns (App. A) are built based on active or passive voice to extract a count closest to phrases in the locating list.
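An illustrative sketch of this idea (the patterns shown are hypothetical stand-ins, not the actual patterns from App. A):

```python
import re

INJURY_TERMS = r"(?:injur\w*|wound\w*|hurt)"   # hypothetical locating list for injury counts

# Passive voice: "12 people were injured"; active voice: "... injured 12 people".
PASSIVE = re.compile(rf"(\d+)\D{{0,30}}?{INJURY_TERMS}", re.IGNORECASE)
ACTIVE = re.compile(rf"{INJURY_TERMS}\D{{0,30}}?(\d+)", re.IGNORECASE)

def regex_injury_count(text: str) -> int:
    for pattern in (PASSIVE, ACTIVE):
        match = pattern.search(text)
        if match:
            return int(match.group(1))
    return 0

# regex_injury_count("At least 5 people were injured in the blast.")  ->  5
```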
**Dependency Parsing.** The dependency parsing model collects all possible numeric modifiers and their dependency relationships. Since not every numeric modifier relates to victim counts, e.g., "42-year-old", we construct dependency rules with the locating list to decide if the number is the victim count. For example, one rule checks whether the numeric modifier modifies a subject phrase, which rejects "42" in the example of "42-year-old". If no numeric modifier is found (e.g., "a journalist was injured"), additional rules use the locating list to return "1" if the rule is satisfied and otherwise return "0".
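A sketch of the numeric-modifier heuristic using spaCy; the specific dependency rules below are illustrative simplifications of the rules described above:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
DEATH_TERMS = {"kill", "die", "slay"}            # example locating list for death counts

def dep_death_count(text: str) -> int:
    doc = nlp(text)
    for token in doc:
        if token.dep_ == "nummod" and token.like_num:
            noun = token.head                    # the noun phrase the number modifies
            # Accept the number only if that noun attaches to a locating verb as subject/object.
            if noun.head.lemma_ in DEATH_TERMS and noun.dep_ in {"nsubj", "nsubjpass", "dobj"}:
                # Spelled-out numbers would need a word-to-number step, omitted here.
                return int(token.text) if token.text.isdigit() else 1
    # No usable numeric modifier: fall back to 1 if a locating verb occurs at all.
    return 1 if any(t.lemma_ in DEATH_TERMS for t in doc) else 0
```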
**SRL.** Semantic role labeling (SRL) recursively decomposes the text input into pairs of predicates and their arguments. We define a list of predicate verbs for death and injury counts as the locating list. Then, we iterate over the predicate-argument pairs, check if any predicate from the locating list occurs, and extract the count from its argument if possible. If such a predicate exists, the implementation returns the first number as the count if multiple numbers are found, and returns "1" if no verbatim number is found. If no such predicate appears, the count is set to "0".
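A library-agnostic sketch of this SRL-based heuristic, assuming an SRL system that yields (predicate lemma, argument text) pairs:

```python
import re

DEATH_PREDICATES = {"kill", "die", "slay", "murder"}   # example locating list of predicate verbs

def srl_death_count(frames) -> int:
    """frames: iterable of (predicate_lemma, argument_text) pairs from any SRL system."""
    for predicate, argument in frames:
        if predicate in DEATH_PREDICATES:
            numbers = re.findall(r"\d+", argument)
            return int(numbers[0]) if numbers else 1   # first number, or 1 if none is verbatim
    return 0                                           # no death-related predicate found

# srl_death_count([("kill", "12 protesters near the market")])  ->  12
```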
### Task Modeling
We perform victim count extraction using three methods: generation, regression, and classification. As discussed above, each of these approaches caters to the different formulations of our task and can be beneficial in different scenarios. Across these methods, we use the same underlying NT5 model. For clarity, we denote NT5-Gen, NT5-Reg, and NT5-Clf for the corresponding models. The NT5 model Yang et al. (2021) is a variant of the T5 model Raffel et al. (2020) with further fine-tuning on numerical tasks. We query the model in a similar fashion to previous works by giving the question and event description in the form: "answer me:[question] context:[passage]". We discuss how we fine-tune this model for each of our specific methods below.
**Generation.** For generation, we fine-tune NT5 to decode the victim counts autoregressively. At inference, we use beam search to generate the output. Generation is not guaranteed to produce only numeral tokens; therefore, we follow De Cao et al. (2021) and constrain the possible generation tokens in a prefix-conditioned way, such that only the digit tokens \(0\)-\(9\) and the EOS token are allowed at each decoding step.
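A sketch of the query format and of digit-only constrained beam search using the `prefix_allowed_tokens_fn` hook in Hugging Face transformers; the checkpoint name is a placeholder, and building the whitelist by scanning the vocabulary is our assumption, not necessarily how the original implementation does it:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

MODEL_NAME = "path/to/nt5-checkpoint"    # placeholder for the NT5 weights
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

def build_input(question: str, passage: str) -> str:
    return f"answer me:{question} context:{passage}"

inputs = tokenizer(build_input("How many people were injured?",
                               "Two aid workers were wounded when their convoy was attacked."),
                   return_tensors="pt", truncation=True)

# Whitelist purely numeric vocabulary pieces plus EOS, so decoding can only emit digits.
allowed_ids = [tid for tok, tid in tokenizer.get_vocab().items()
               if tok.lstrip("\u2581").isdigit()]
allowed_ids.append(tokenizer.eos_token_id)

output_ids = model.generate(
    **inputs,
    num_beams=4,
    max_new_tokens=6,
    prefix_allowed_tokens_fn=lambda batch_id, prefix: allowed_ids,
)
predicted_count = tokenizer.decode(output_ids[0], skip_special_tokens=True)
```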
**Regression.** For regression, we add two linear layers (with ReLU activation) on the encoder representation to output the numerical victim count. The model is trained to optimize the \(\log\) mean-squared error between the true and predicted count.
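A minimal sketch of such a regression head (mean pooling over tokens, the hidden size, and reading the objective as MSE on log-transformed counts are our assumptions):

```python
import torch
import torch.nn as nn
from transformers import T5EncoderModel

class CountRegressor(nn.Module):
    def __init__(self, model_name="path/to/nt5-checkpoint", hidden=256):
        super().__init__()
        self.encoder = T5EncoderModel.from_pretrained(model_name)
        d_model = self.encoder.config.d_model
        self.head = nn.Sequential(nn.Linear(d_model, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.head(states.mean(dim=1)).squeeze(-1)   # predicted (log-)count

# One reading of the objective: loss = nn.MSELoss()(pred, torch.log1p(true_counts.float()))
```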
**Classification.** We model the task as a classification problem by binning the victim counts into ordinal classes. Similar to regression, the model has a classification head of a linear layer and a softmax layer on top of an encoder initialized with NT5 weights. Our experiments use a 3-class classification by converting the victim counts into three categories: \([0,3],(3,10],(10,\infty)\).
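A sketch of the bin conversion together with a matching classification head (the head mirrors the regression sketch above; the bin edges follow the three classes stated in the text):

```python
import torch.nn as nn

def count_to_class(count: float) -> int:
    """Map a victim count to the ordinal classes [0, 3], (3, 10], (10, inf)."""
    if count <= 3:
        return 0
    if count <= 10:
        return 1
    return 2

class CountClassifier(nn.Module):
    def __init__(self, encoder, num_classes=3):
        super().__init__()
        self.encoder = encoder                    # e.g. a T5EncoderModel with NT5 weights
        self.head = nn.Linear(encoder.config.d_model, num_classes)

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.head(states.mean(dim=1))      # class logits; softmax is applied in the loss

# loss = nn.CrossEntropyLoss()(logits, class_labels)
```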
## 5 Accuracy of Counts Extraction
We begin by evaluating the efficacy of our proposed methods for victim count extraction. We examine the model accuracy by comparing baselines and the fine-tuned model with a generation objective (§5.1). We then show the results of using classification and regression formulations (§5.2).
### Comparing Baselines with NT5-Gen
We compare the accuracy performance of the baseline models and the fine-tuned NT5-Gen model. Tab. 1 shows the results of extracting the injury counts using _Exact-Match_ and \(F_{1}\)scores commonly used in related tasks [22, 23]. We measure \(F_{1}\) score on digitized tokens (i.e., "\(34\)" \(\rightarrow\) ["\(3\)", "\(4\)"]). The fine-tuned NT5-Gen model has an accuracy boost up by \(7\)-\(13\%\) in _Exact-Match_ and by \(6\)-\(13\%\) in \(F_{1}\) score than the strongest baseline model SRL. The performance of regex and dependency parsing varies heavily across different data, which implies that the regex pattern or dependency relationship may be less helpful in finding the victim counts.
Moreover, we convert the victim counts into four bins, where the bins are selected to have a balanced number of samples in each bin. As an illustration, Fig. 1 shows the confusion matrices on the transformed injury counts. For both victim types, the baseline models have low precision and falsely return "0" too often. Compared with the baselines, the NT5-Gen model is better at extracting victim counts whose numeric values are large (e.g., \(y>10\)).
**Qualitative Analysis.** We qualitatively examine samples on which the SRL model errs but the NT5-Gen model extracts the correct count. We randomly select 20 error samples for each test set to evaluate, and summarize 4 types of errors with examples in Tab. 2. Out of all errors, \(39.2\%\) belong to diverse linguistic
Figure 1: Confusion matrices of the baselines and the fine-tuned NT5-Gen model (columns) for extracting injury counts from different data (rows). We convert the true and predicted victim counts into 4 categories: for any count \(y\), "0" is \(y=0\), "1" is \(0<y\leq 3\), "2" is \(3<y\leq 10\) and "3" is \(y>10\). Values are normalized over true counts. Baselines tend to have low precision on extracting injury counts (dark columns on "0"). SRL and NT5-Gen have comparable accuracy and recall; however, NT5-Gen is slightly better in precision.
| Model | _Exact-Match_ WAD | _Exact-Match_ NAVCO | _Exact-Match_ EMM | \(F_{1}\) WAD | \(F_{1}\) NAVCO | \(F_{1}\) EMM |
| --- | --- | --- | --- | --- | --- | --- |
| Regex | 0.117 | 0.264 | 0.064 | 0.202 | 0.318 | 0.124 |
| Dep | 0.226 | 0.303 | 0.052 | 0.355 | 0.363 | 0.136 |
| SRL | 0.741 | 0.430 | 0.313 | 0.779 | 0.484 | 0.361 |
| NT5-Gen | **0.813** | **0.501** | **0.443** | **0.846** | **0.544** | **0.492** |

Table 1: _Exact-Match_ and \(F_{1}\) scores of the baseline models and the fine-tuned NT5-Gen on injury counts. The best results are **bolded**. Dep refers to the dependency parsing model and SRL refers to the semantic role labeling model.
expressions used to depict victims, \(38.3\%\) contain number ambiguity, \(8.3\%\) need numerical reasoning, and \(5.8\%\) have spelling issues (for the tokenizer). The NT5-Gen model performs better when the count requires numerical reasoning. Even if reasoning is not needed, SRL may fail when the linguistic expression used to depict victims (e.g., "have throats cut") is outside the pre-defined locating list (e.g., ["die", "kill", "slay"]). These error types are difficult to address for the baseline models since the relevant patterns cannot be defined beforehand.
### Results on Classification and Regression
We examine the accuracy of the classification and regression formulations by comparing NT5-Clf and NT5-Reg with different initialization weights. To compare, we use t5-small and bert-base-uncased pre-trained weights for the encoder. Tab. 3 shows the classification results on NAVCO injury data. Fine-tuning t5-small and nt5 reaches comparable performance; precision and recall scores are similar, but precision is slightly higher. The scatter plots (Fig. 2) show the results of regression using different pre-trained weights with the mean squared error (MSE). For a (log-transformed) victim count larger than \(5\), using the regression objective seems more conservative in giving small-valued predictions. The numeracy-rich NT5 weights do not particularly improve accuracy for a classification or regression objective, and employing some standard pre-trained weights might be sufficient.
## 6 Evaluating Reliability
Another important dimension is reliability, which we evaluate through the lens of calibration (§6.1). As we approach the task with multiple formulations, calibration analysis is especially needed to understand whether a model is calibrated (§6.2), and how post-hoc calibration techniques may adjust models to be better calibrated (§6.3).
### Preliminaries: Calibration Metrics
A well-calibrated model ensures that the confidence of the output is well aligned with the chance of the output being accurate. This is a desirable property for our task -- consider a model that extracts "0" when the text depicts an injured person. A calibrated model would assign very low confidence to the extracted count, which may avoid error propagation to downstream decisions, e.g., medical resource dispatch. We here introduce the expected calibration
| Error Type | Context | Truth | SRL | NT5 |
| --- | --- | --- | --- | --- |
| Diverse Expression | _Six passengers_ in a taxi also _had their throats cut_ | 6 | 0 | 6 |
| Numerical Reasoning | Herders shot and _killed four people_ [...]. Herders then shot and _killed a farmer_ at Jokhana [...] | 5 | 4 | 5 |
| Number Ambiguity | _Unidentified gunmen_ clash with army | 1 | 0 | 1 |
| Number Spelling | Twenty-three people were killed [...] | 23 | 1 | 23 |

Table 2: Examples where SRL errs but the NT5-Gen model extracts the correct death count. Diverse Expression refers to string patterns not captured by pre-defined rules. Numerical Reasoning means the correct count has to be derived by some mathematical operation over the text. Number Ambiguity indicates that a verbatim number is not written but an estimate may be made (with domain expertise). Number Spelling refers to problems with the number/text format that are typos or that the tokenizer parses wrongly (e.g., "twenty-three" \(\rightarrow\) "twenty").
| Weights | Accuracy | \(F_{1}\) | Precision | Recall |
| --- | --- | --- | --- | --- |
| NT5 | 0.65 | 0.60 | 0.62 | 0.59 |
| T5 | 0.65 | 0.60 | 0.61 | 0.59 |
| BERT | 0.52 | 0.23 | 0.17 | 0.33 |

Table 3: Classification results on NAVCO injury data with the NT5-Clf model initialized by different pre-trained weights: nt5, t5-small, and bert-base-uncased. \(F_{1}\), precision and recall scores are macro.
Figure 2: Scatter plots of the fine-tuned NT5-Reg model initialized with different pre-trained weights (nt5, t5-small, and bert-base-uncased). The models are trained on log-transformed victim counts.
error (ECE) (Pakdaman Naeini et al., 2015), a standard metric used for classification that has been extended to generation decoding (Widmann et al., 2021). For regression, we apply the quantile calibration error (Kuleshov et al., 2018).
Given \(n\) samples, we create \(M\) equal-width bins over the interval \([0,1]\). ECE takes a weighted average of the differences between the classification accuracy and the mean confidence within each bin \(B_{m}\),
\[\mathrm{ECE}=\sum_{m=1}^{M}\frac{|B_{m}|}{n}\bigg{|}\mathrm{acc}(B_{m})- \mathrm{conf}(B_{m})\bigg{|}.\]
The quantile calibration error averages the differences between the empirical frequency \(\mathrm{freq}(B_{m})\) and the upper bound of \(B_{m}\) (i.e., \(\sup(B_{m})\)), where \(\mathrm{freq}(B_{m})\) is the fraction of the \(n\) samples whose quantiles are lower than or equal to \(\sup(B_{m})\),
\[\mathrm{RegCE}=\frac{1}{M}\sum_{m=1}^{M}\bigg{|}\mathrm{freq}(B_{m})-\sup(B_{ m})\bigg{|}.\]
The calibration error of generation decoding takes the best \(b\) beam search answers and applies a softmax over their scores to represent the confidence. The ECE is then calculated on the best beam search answer, similarly to classification.
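A sketch of both metrics exactly as defined above (equal-width confidence bins for ECE; per-sample predictive quantiles for the regression variant):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences in [0, 1]; correct: boolean array of the same length."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.sum() / n * abs(correct[in_bin].mean() - confidences[in_bin].mean())
    return ece

def quantile_calibration_error(quantiles, n_bins=10):
    """quantiles: per-sample quantile of the true value under the model's predictive distribution."""
    uppers = np.linspace(1.0 / n_bins, 1.0, n_bins)              # sup(B_m) for each bin
    freqs = np.array([(quantiles <= u).mean() for u in uppers])  # freq(B_m)
    return float(np.mean(np.abs(freqs - uppers)))
```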
### Calibration Error on Different Models
We show in Tab. 4 the calibration errors measured on the fine-tuned NT5-Clf, NT5-Reg, and NT5-Gen with different data. Surprisingly, the NT5-Gen model is well-calibrated on most datasets, except for EMM injury: the lowest calibration error is \(0.05\) on NAVCO death, and the errors on other data range between \(0.08\) and \(0.33\). Classification models tend to have large calibration errors (\(>0.19\)). In particular, the error is larger than \(0.3\) on NAVCO and EMM data to classify injury counts. Regression is also prone to large calibration errors (\(>0.15\)).
Another helpful tool is the reliability diagram, which visualizes the calibration errors in different confidence bins. As an illustration, Fig. 3 shows the diagram of the NT5-Clf model fine-tuned on NAVCO injury data; the diagonal line indicates perfect calibration. This model is over-confident, and we observe large gaps when the model confidence is larger than \(0.8\).
### Post-hoc Calibration
Since the above analysis shows that the models can be over-confident, we see the need to calibrate models for victim count extraction. We use temperature scaling for classification and generation decoding, and isotonic regression for regression. The post-hoc calibrators use development data to minimize the negative log-likelihood and are then applied to the test sets to measure calibration errors. As a comparison, Fig. 3 (right) shows the calibrated results of the fine-tuned NT5-Clf model on NAVCO injury data. The calibration error (i.e., ECE) is reduced from \(0.33\) to \(0.06\). In general, when a model has a rather large calibration error (e.g., \(>0.3\)), post-hoc calibration is more helpful and adjusts the model to a better-calibrated level.
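A sketch of temperature scaling for the classification (and, analogously, generation) confidences; the optimizer, step count, and learning rate are arbitrary choices, and the regression model would use isotonic regression in the same spirit:

```python
import torch
import torch.nn as nn

def fit_temperature(dev_logits, dev_labels, steps=200, lr=0.01):
    """Learn a single temperature T on development data by minimizing the NLL."""
    log_t = torch.zeros(1, requires_grad=True)       # optimize log T so that T stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    nll = nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = nll(dev_logits / log_t.exp(), dev_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

# At test time, calibrated probabilities are softmax(test_logits / T).
```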
| Data | Model | Death (Orig.) | Death (Calib.) | Injury (Orig.) | Injury (Calib.) |
| --- | --- | --- | --- | --- | --- |
| NAVCO | Clf | 0.222 | 0.044 | 0.332 | 0.060 |
| NAVCO | Reg | 0.220 | 0.097 | 0.141 | 0.057 |
| NAVCO | Gen | 0.054 | 0.040 | 0.092 | 0.092 |
| WAD | Clf | 0.192 | 0.055 | 0.228 | 0.088 |
| WAD | Reg | 0.272 | 0.107 | 0.167 | 0.294 |
| WAD | Gen | 0.218 | 0.221 | 0.096 | 0.042 |
| EMM | Clf | 0.277 | 0.098 | 0.314 | 0.055 |
| EMM | Reg | 0.201 | 0.189 | 0.368 | 0.188 |
| EMM | Gen | 0.087 | 0.092 | 0.328 | 0.122 |

Table 4: Calibration errors of fine-tuned NT5-Clf, NT5-Reg, and NT5-Gen models before (Orig.) and after (Calib.) applying post-hoc calibration. Post-hoc calibration effectively reduces the errors.
Figure 3: Reliability diagrams compare the calibration error before (left) and after (right) post-hoc calibration of the fine-tuned NT5-Clf model using the NAVCO injury data. This model is prone to large calibration errors (red gaps) in many bins. This is especially true for bins with high model confidence (\(>0.8\)).
## 7 Evaluating Robustness
Typically, conflict or disaster data is noisy and limited. This makes it challenging to train models on a large-scale, high-quality training set. For this reason, we need robust models that excel in few-shot and out-of-distribution settings.
**Reduced Training Size.** We fine-tune the NT5-Gen, NT5-Reg, and NT5-Clf models on different-size portions of the training set. Specifically, we use \(100\%\), \(50\%\), \(10\%\), \(5\%\), \(0.5\%\) and \(0\%\) of the training data, as further discussed in App. C.1. As expected, we find that the accuracy of all models drops when using a smaller training set. The NT5-Gen model proves to be the most robust, keeping the _Exact-Match_ metric above \(0.6\) when fine-tuned on only \(5\%\) of the training data. The calibration error of the fine-tuned NT5-Clf model increases when the training size is reduced, while the fine-tuned NT5-Reg and NT5-Gen models do not follow this trend. In the zero-shot setting, the NT5-Reg and NT5-Gen models reach their largest calibration error. In contrast, the NT5-Clf model reaches its smallest calibration error in the zero-shot setting.
**Out-of-distribution (OOD) Setting.** We set up synthetic tasks in which a fine-tuned model is confronted with an out-of-distribution setting at test time. For example, we fine-tune a model on WAD death and then repurpose it to classify WAD injury. Then, we evaluate the drop in performance of this "out-of-distribution" model compared to an "in-distribution" model that has been trained on WAD injury labels directly. We conduct this comparison on different datasets and models.
In App. C.2, we evaluate the NT5-Clf model in a classification formulation and report accuracy. As expected, we find that accuracy decreases in every setting with performance drops between \(0.001\%\) and \(0.3\%\). In Fig. 15, we evaluate the NT5-Reg model in a regression setting measured in MSE. We find that the performance decreases in the out-of-distribution settings as evidenced by an average increase of \(1.12\) in MSE. Finally, in Fig. 16, we turn to an NT5-Gen model in a generative setting. As an evaluation metric, we consider _Exact-Match_ and observe a decrease of \(0.18\) in _Exact-Match_ on average.
## 8 Application: Overlooked Victim Types
Most event datasets feature only one column detailing victim counts. This column typically quantifies fatalities, as they are considered the least ambivalent and most important (Kalyvas, 2006; Chaudoin
Figure 4: Timeline of victim counts in Syria data from Sept to Nov 2021 as given in the ACLED dataset. We use the NT5-Gen model that is fine-tuned on NAVCO data. Our model can be tested on the extraction of fatality counts which is the only victim count featured in ACLED (Fig. (a)). Beyond fatality counts, it can extract more fine-grained victim types such as (b) injury and (c) abduction counts. Confidence scores are shown for some of the predictions.
et al., 2017). The Armed Conflict Location & Event Data Project (ACLED) (Raleigh et al., 2010; Raleigh and Kishi, 2019) recently published curated datasets containing violence against healthcare workers, media personnel, and women. Considering the ACLED dataset on Political Violence Targeting Women & Demonstrations Featuring Women, we find that more than \(85\)% of events have _zero_ fatalities. This means many forms of violence remain non-quantified, often those against "marginalized" groups of society.
Using the methods presented in this work, we can extract much more fine-grained victim types such as "injured women" and "abducted women". To this end, we rely on the NT5-Gen model that we fine-tuned on the NAVCO data, without specifically asking for "women". In Fig. 4, we present exemplary two-month time series of events in Syria. We find that our model has a higher recall than precision on the ground truth annotations for fatality counts. This may be desirable since we would like to avoid overlooking true victim counts.
## 9 Discussion
This work surveys different task formulations of victim count extraction and inspects desiderata like accuracy, reliability, and robustness of different models. We now summarize our findings and conclude which approach performs best under which circumstances (Tab. 5).
Some of the parsing-based approaches have the advantage of requiring no ground truth annotations of the extracted victim counts. This means there is no need for training; instead, a manually curated list of patterns and rules has to be assembled. The regex approach, for instance, has minimal hardware requirements, but writing regex patterns is very time-intensive and can be prone to mistakes. Overall, the baseline models shine when it comes to speed, and they perform reasonably well when victim counts are explicitly mentioned. Yet they fail at complex reasoning. For instance, when asking for the count of deaths in "one child and four women lost their lives", all baselines mistakenly output "\(1\)".
This is where language model-based methods have a competitive edge. The fine-tuned NT5-Gen model has high accuracy in both the _Exact-Match_ and relative error metrics. Surprisingly, it is also well-calibrated and relatively robust in the few-shot and out-of-distribution settings. This performance comes at the cost of reduced speed, the requirement of large amounts of training data, and the need for resources like GPUs to be deployed on a large scale.
Comparing classification and regression objectives, we conclude that classification is easier to handle. In most settings, it may be sufficient to extract a range rather than an exact number anyway. Compared to generation, models in the classification and regression settings show higher calibration errors and require post-hoc calibration to adjust the model confidence.
## 10 Related Works
This work interfaces with related works from different disciplines to improve the measurement of crisis intensity. It draws inspiration from recent advancements in question answering models with a focus on numbers and math word problems. This includes number-enhanced language models more generally. Our work also connects with model calibration in natural language processing (NLP) more generally.
Measurement of Crisis Intensity.Extracting information about crises has been widely explored using social media data (Temnikova et al., 2015) and newspapers (Keith et al., 2017; Halterman et al., 2021). Most existing measures of crisis intensity focus on counts of event types (Goldstein, 1992; Tereshchenko, 2020; Stoehr et al., 2022) or fatality counts (Kalyvas, 2006). Previous work studies friend-enemy relationships (Han et al., 2019; Russo et al., 2022; Stoehr et al., 2021, 2023) and conflict-indicative changes in word embeddings (Kutuzov et al., 2017).
Numerical Question Answering.Numerical Question Answering pertains to the task of providing numeric answers to questions. An exemplary model is NAQANet (Dua et al., 2019), which extends QANet (Yu et al., 2018) with numerical operations. Neural Module Networks (Gupta et al., 2020) learn and execute a chain of logical learnable and differentiable modules. Some of these modules are specifically targeted at mathematical operations such find-num or count. Other approaches leverage knowledge graphs (Davidov and Rappoport, 2010; Kotnis and Garcia-Duran, 2019) or graph neural networks (Chen et al., 2020). Thawani et al. (2021) provides a detailed overview over methods for representing and modeling numbers in NLP.
Number-enhanced Language Models.More recent work in number question answering relies on pre-trained large language models. GengBERT [1] improves numeric reasoning abilities by including a large amount of synthetic data containing numbers. Codex [3] and NT5 [22] apply similar strategies and are trained on code and math word problems. Other approaches focus on step-by-step reasoning such as Minerva [17], scratchpad [23] and chain-of-thought prompting [24]. Lefebvre and Stoehr (2022) propose a prompting-based method particularly for conflict event classification.
Calibration of NLP Models.The calibration of NLP models has been extensively studied in classification [14] and structured prediction tasks [15, 25]. Calibration methods have been adapted in language modeling [1, 24], question answering [16, 17], and machine translation [15, 25].
## 11 Conclusion
We presented _victim count extraction_, a challenging and impactful task. The task can be tackled using different formulations and models. Models should be evaluated along different dimensions such as accuracy, reliability, and robustness. We survey this ambiguity of victim count extraction, identify promising directions, and discuss outlooks and applications.
## Acknowledgments
We would like to thank and acknowledge ideas, input, support and feedback from Leonie Muggenthaler, Ryan Cotterell as well as the anonymous reviewers. Niklas Stoehr is supported by a scholarship from the Swiss Data Science Center (SDSC).
## Limitations
The models may be biased or reproduce biases inherent in their training data. Presenting unrelated, faulty, or immoral questions to a model can cause unguided and malicious behavior. For example, we caution against asking questions such as "How many people _will be injured...?_" and, even worse, "How many people _should be injured...?_". Improving model calibration will help defend against these issues and enable awareness of when to abstain from answering.
## Ethics Statement
This work originated from the motivation to diversify victim count extraction towards underrepresented victim types and overlooked forms of violence. It ultimately intends to assist researchers and analysts in the humanitarian aid sector who need accurate victim count information.
|
2308.07057 | Understanding Hackers' Work: An Empirical Study of Offensive Security
Practitioners | Offensive security-tests are a common way to pro-actively discover potential
vulnerabilities. They are performed by specialists, often called
penetration-testers or white-hat hackers. The chronic lack of available
white-hat hackers prevents sufficient security test coverage of software.
Research into automation tries to alleviate this problem by improving the
efficiency of security testing. To achieve this, researchers and tool builders
need a solid understanding of how hackers work, their assumptions, and pain
points.
In this paper, we present a first data-driven exploratory qualitative study
of twelve security professionals, their work and problems occurring therein. We
perform a thematic analysis to gain insights into the execution of security
assignments, hackers' thought processes and encountered challenges.
This analysis allows us to conclude with recommendations for researchers and
tool builders to increase the efficiency of their automation and identify novel
areas for research. | Andreas Happe, Jürgen Cito | 2023-08-14T10:35:26Z | http://arxiv.org/abs/2308.07057v3 | # Understanding Hackers' Work:
###### Abstract.
Offensive security-tests are commonly employed to pro-actively discover potential vulnerabilities. They are performed by specialists, also known as penetration-testers or white-hat hackers. The chronic lack of available white-hat hackers prevents sufficient security test coverage of software. Research into automation tries to alleviate this problem by improving the efficiency of security testing. To achieve this, researchers and tool builders need a solid understanding of how hackers work, their assumptions, and pain points.
In this paper, we present a first data-driven exploratory qualitative study of twelve security professionals, their work and problems occurring therein. We perform a thematic analysis to gain insights into the execution of security assignments, hackers' thought processes and encountered challenges. This analysis allows us to conclude with recommendations for researchers and tool builders, to increase the efficiency of their automation and identify novel areas for research.
software testing, offensive security testing, ethical hacking
this is the first work that focuses on how hackers work, i.e., the context within which a security professional moves and the processes that influence their decisions during security assignments.
Huaman et al. (Huaman et al., 2017) performed a large-scale interview study of German small-to-medium enterprises (SMEs). While SMEs make up a third of Germany's GDP, they often lack the resources to establish an effective cyber-security posture. The study analyzes their preconceptions with regard to cybercrime, their adoption of security measures, and their experiences with attacks. In contrast to our study, it focuses upon the potential "victims", not upon security operators. One interesting finding was that 45.1% of interviewed companies had a cybersecurity incident warranting manual response in the preceding 12 months -- further highlighting the need for trained personnel.
Smith, Theisen and Barik (Theisen and Barik, 2017) describe Red Teams working at Microsoft. They cover a wide range of topics including how corporate culture and red teaming interact. They also lightly touched on how people became security professionals and the interactions in their daily work. Its interviewees were recruited from within Microsoft, a single large-scale company, and thus might not reflect wider industry practices which, as referenced by the previously mentioned paper, consists to a large degree of SMEs. In contrast, this publication focuses on the execution of security assignments, highlights hacker's thought processes and details challenges in academic and automation research. Furthermore, this paper is not limited to the discipline of red-teaming.
Van den Hout (Van den Hout, 2017) investigated the impact of different penetration test methodologies on the quality of the tests performed but concluded that only one reviewed methodology had widespread adoption, but its recommendations for a structured approach were not taken into account. This could indicate a gap between "real" penetration testing and codified methodologies.
Multiple papers describe aspects of penetration-testing without focusing on the operator's mindset or their decision processes. Munaiah et al. (Munaiah et al., 2017) analyze event datasets and manually map attack patterns to _MITRE ATT&CK Enterprise_. This is used to show a-posteriori attack patterns but does not analyze how hackers select the attacks to execute. _MITRE ATT&CK_ itself is a taxonomy of TTPs (Tactics, Techniques, and Procedures) and not a full attack methodology. Bhuiyan et al. (Bhuiyan et al., 2017) uses GitHub security bug reports to identify the origins of bug reports. Examples of these origins are software source code, software log files, binary files, etc. This details what data are used during reporting, but does not explain how a security professional identifies potential vulnerabilities for research in the first place, e.g., why a security professional analyzes a mentioned log file for relevant security information.
Other papers focus on narrow sub-disciplines of hacking which cannot be projected upon the hacking industry at large. Ceccato et al. (Cecato et al., 2017) describes how hackers attack protected software, i.e., how software protection mechanisms in provided binary files are analyzed through reverse engineering. Based upon the responses of our interview series, reverse-engineering is not representative for activities performed by offensive operators at large.1
Footnote 1: During the interview series, a single participant mentioned using fuzzing to hunt for vulnerabilities. They were switching to other disciplines due to the high resource and time requirements of fuzzing.
The PhD thesis "How Hackers Think" (Van den Hout, 2017) is a high-level treatise on hacker history, culture and their thought processes. It identifies multiple characteristics of hackers, e.g., being highly self-motivated and curious, being able to tolerate ambiguity, and their use of mental models and patterning. Its focus lies on a high conceptual level and does not analyze how hackers actually identify and chose vulnerabilities. Neither does the study identify how different areas of penetration-testing, e.g., OT or red-teaming, might impact a hacker's mindset.
## 3. Methodology
This paper follows a _pragmatist_ approach (Sandar et al., 2017; Sohn et al., 2017) combining methods from the _empiricist_ and _summarist interpretist_ traditions (Sandar et al., 2017).
We used semi-structured interviews to gather insights into hackers' work and thought processes.
**Ethical Considerations.** Our institution does not have a formal IRB process but offers voluntary submission to a Pilot Research Ethics Committee. As human interviews were conducted, the committee was consulted, and topics were discussed, including ethically relevant methodological clarifications, more specifically questions related to the involvement of voluntary participants in the research, as well as mitigating the risk of contextual identification. Participants gave their informed consent before the interviews took place; all data collected were anonymized by researchers prior to analysis. All data storage and processing complied with national privacy regulations and the EU's General Data Protection Regulation (GDPR).
**Recruitment.** We define the target population as offensive-security practitioners that work directly with customer systems. Previous research has found that security professionals are reluctant to communicate with outsiders (Sandar et al., 2017), especially when it comes to their methodology and techniques. To counteract this, researchers reached out to public figures: the initial seed was populated by contacting security companies, finalists of public security challenges, and security conference participants. We used snowball sampling to improve the interview pool: at the end of each interview, we asked the current interviewee to connect us with other offensive security professionals. In addition, we cold-called both a hacking education
YouTuber and a public hacking collective that is well known for publishing vulnerability disclosures. Both had been mentioned by participants during the interviews; neither reacted to our contact attempt, further reinforcing the idea of a close-knit community (Zhu et al., 2017).

| Participant | Primary | Secondary |
| --- | --- | --- |
| Participant 1 | web | infrastructure, iso27001 |
| Participant 2 | web | infrastructure, mobile |
| Participant 3 | red-team | AD, OT, web |
| Participant 4 | web | social engineering |
| Participant 5 | red-team, IoT/OT | web, social engineering |
| Participant 6 | web | AD, social engineering |
| Participant 7 | infrastructure | web, tool development |
| Participant 8 | web | infrastructure |
| Participant 9 | infrastructure | AD |
| Participant 10 | red-team, AD | |
| Participant 11 | OT, IoT | web |
| Participant 12 | web | |

Table 1. Participants
We sampled new interview participants until theoretical saturation was reached, that is, until no new information was obtained during the interviews. When considering theoretical saturation, we differentiated between common themes and themes specific to the interviewee's specialty area. We continued interviewing until neither two subsequent interviews contributed new specialty-area information, nor three subsequent interviews contributed new common themes. Theoretical saturation was reached after the 12th interview, which fits recommendations in the literature (Zhu et al., 2017; Zhu et al., 2017).
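This stopping rule can be made concrete with a small sketch (illustrative only; the per-interview theme sets below are invented placeholders, not our data):

```python
def saturation_reached(new_specialty_themes, new_common_themes):
    """Stop sampling once neither of the last 2 interviews added specialty-area
    themes and none of the last 3 interviews added common themes.

    Both arguments hold one set per interview (in order) with the themes that
    first appeared in that interview."""
    if len(new_common_themes) < 3 or len(new_specialty_themes) < 2:
        return False
    no_new_specialty = all(len(s) == 0 for s in new_specialty_themes[-2:])
    no_new_common = all(len(s) == 0 for s in new_common_themes[-3:])
    return no_new_specialty and no_new_common

# Placeholder data for five interviews: the last ones add nothing new.
specialty = [{"OT protocol quirks"}, {"AD tooling"}, {"cloud"}, set(), set()]
common = [{"intuition"}, {"checklists"}, set(), set(), set()]
print(saturation_reached(specialty, common))  # True
```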
**Participants.** We considered participants that worked primarily in an offensive security field and excluded participants that primarily worked within social engineering or physical security. If participants were working in a hybrid field, such as reverse-engineering or source-code analysis, their primary focus had to be offensive. To capture seasoned perspectives, we only reached out to professionals with at least four years of experience in the IT security field.
To our dismay, we were not able to recruit any offensive security professionals that identified as non-male. While we come from a culture that naively prides itself on blind meritocracy (Zhu et al., 2017), we found this contradiction disturbing. As we did not deem it relevant, we did not ask about our participants' religious or cultural affiliations, but in hindsight, we can assume diversity in that area.
To protect the anonymity of the participants, we cannot detail their employment status, ethnicity, work experience before security work, time of employment within the security field, etc. When excluding education and CTF participation, participants had an average work experience of 9 years (\(\mu=9.0\), \(\sigma=6.5\), median \(=8\)).
**Interview Protocol.** We conducted semi-structured interviews utilizing video conferencing software. All but two interviewees enabled both video and audio transmission. The average duration of the interview was 55 minutes. Before the interview started, the participants were informed about data processing, and their rights, and asked for their informed consent.
We opened the interviews with questions about the interviewee's job description and how they acquired the needed skill set. Those were followed up by talking about the types of security assignments the participants are involved with. For 1-3 of these areas, detailed questions about particularities, procedures, automation, and problems were asked; since the questions were open-ended, the interviews branched out to subtopics organically. The interviews were closed with questions about grievances and additional thoughts related to the field of IT security.
We recorded and manually transcribed all interviews. During the transcription, sensitive data was scrubbed from the interview; the transcribed interview was then submitted for confirmation to the interviewee. Scrubbed interviews were loaded into _delve_(Bord et al., 2016) for thematic analysis.
**Analysis.** _Reflexive Thematic Analysis_ (Zhu et al., 2017) was chosen to perform a data-driven exploratory analysis of the interview transcriptions. In summary, when performing thematic analysis, the researchers initially familiarize themselves with the data, and extracts of the data are tagged with codes. These codes are then used to create clusters that identify or construct underlying themes. Then, those themes are reviewed, defined, and named. The resulting findings are presented in Sections 4-6.
**Data Availability.** The data used in this study was collected through interviews with a close-knit community of ethical hackers. Deanonymization would likely not be preventable. In accordance with ethical guidelines and agreement with the interview participants, the decision was made not to release the interview data. All meta-information related to the interviews, including the interview guide and consent forms are part of our replication package.
**Threats to Validity.** Any interview-based study faces the threat of _selection bias_ (_internal threat_). To counteract this, we performed snowball sampling, recruited random security professionals during security conferences, and explicitly invited security professionals from different disciplines. For ethical reasons, interview participation was limited to white-hat hackers (_internal threat_). According to prior analysis, the activities of black-hat hackers, e.g., Ransomware groups, can be seen as a subset of the activities performed by ethical red-teams (Bord et al., 2016; Bord et al., 2016) which are covered in this work.
Another potential bias would be _experimenter bias_ (_internal threat_). To reduce this risk, all the data collected was analyzed separately by the different authors, and their respective labeling results were compared for differences; ambiguities were discussed and resolved.
Hacking contains multiple disciplines. Our results might only capture common themes of a subset of those (_external validity_). We tried to counteract this by inviting interviewees from various hacking fields, as is reflected in Table 1. The geographical distribution of participants roughly covered Central Europe. Other geographic regions might be more advanced when it comes to the utilization of the different types of security assignments.
## 4. Becoming a Hacker
The interview responses reveal several interesting themes regarding the path to becoming a hacker.
**Academic Education.** All but one participant attended at least a single university-level class. Nine completed bachelor's degree studies in IT (or a related field, such as CS), and of those, all continued to add a master's level degree. The percentage of interviewees enrolled in IT-security specific programs increased from \(55\%\) (\(n=5\)) for bachelor's studies to \(78\%\) (\(n=7\)) for master's studies. This fits the perceived lack of IT-security and secure-development lectures in non IT-security centric programs, which was partially addressed by attending CTFs or enrolling in non-mandatory security classes. Classes were often taken in an extra-occupational capacity. All of this fits a common theme of "_fascination with IT security_" combined with high intrinsic motivation.
**Experience before IT-Security.** Having 2-3 years of non-security IT exposure before entering the IT security field was found to be advantageous. Another related recommendation was to have a broad IT security base combined with one or two specialization areas. Within our group of interviewees, the common base was web security or internal network assessments; examples of specializations were red teaming or cloud-specific knowledge.
**Staying relevant.** All interviewees perceived a need for ongoing education. The ubiquitous information source was Twitter/X, followed by other online services such as YouTube channels, blog posts, Reddit, Github, or commercial online courses. In the physical
world, colleagues and conferences were mentioned. The quality of online material was considered high, although one interviewee had qualms about publishing information due to potential misuse. A single participant regularly used the Darknet as a news source.
**To CTF or not.** CTF attendance was a common theme. Participants saw a bidirectional information transfer: skills learned in CTFs were applicable at work and vice versa. Tasks in CTFs were considered very targeted in that they narrowly focus on a vulnerability, and solving the challenge or reading a write-up were considered efficient ways of gathering knowledge about the respective vulnerability. Specialized security practitioners, e.g., from the OT or ICS area, found CTFs to be introductory and shallow.
## 5. How do Hackers work?
While we encountered the common muttering of "_every project is different_", the following subsections identify types of penetration tests, each with distinct requirements, strategies, and particular actions. When looking at a pen-tester's work, this is the external view, i.e., how a pen-tester's work is perceived from the outside.
### Types of Security Tests and their Differences
Although different assignments have a similar project organization, their execution differs due to the respective client and target environment. Table 2 shows the main types of security assignments encountered during interviews.
**Vulnerability Assessments** focus upon achieving a high coverage of the targeted assets, which are typically external IP-ranges (including web servers) or internal networks (including clients and internal infrastructure). Enumerating targets, e.g., through web crawling or network scans, leads to the creation of important inventory databases. Those are subsequently used to test against known vulnerability databases, known configuration errors or generic vulnerability classes such as SQL injections. As assignments typically include large amounts of potential targets, a high level of automation is necessary.
**Penetration Tests** (Pen-Tests) share similarities with vulnerability assessments. The demarcation point between those two varied between interviewees. The situation is further complicated as vulnerability scans are often used as an initial step during pen-testing. Generally speaking, while vulnerability assessments focus on breadth, pen-testing focuses on depth, i.e., thoroughly breaking a single target. Pen-Tests are within the realm of application security: in addition to well-known vulnerabilities or configuration errors, new vulnerabilities are hunted within the software under test. Penetration tests are often performed against custom-written software where no prior vulnerabilities are published in vulnerability databases. As the scope is tight, customers commonly provide dedicated test environments against which destructive tests can be performed. Another benefit of the limited scope is that the execution of a penetration test can be highly structured, some (\(n=2\)) interviewees went as far as calling them "_catalog-based_". Pen-tests are primarily performed manually.
**Internal Network Tests** verify the security and resilience of internal networks. Their basic assumption is "_assumed breach_", i.e., the adversary is already within the local network and now attempts to gain sensitive data or achieve higher privileges -- emulating Ransomware scenarios that have recently scourged companies. Microsoft Active Directory (AD) is ubiquitous in corporate networks; thus, if present, it is the main target. In these cases, the security assignment's intent is to obtain domain administrator privileges. The focus lies on exploiting known vulnerabilities, product features, mis-configurations, and insufficient access-control or hardening measures. Another big aspect is Lateral Movement, i.e., using compromised systems to pivot to new targets. Assignments are made against productive environments.
**OT Tests** target Operational Technology (OT) such as SCADA or ICS (Industrial Control System) networks. They can be differentiated into product tests and in-situ network tests of already configured systems. As solutions consist of off-the-shelf software that is highly customized for usage within the corresponding client network, the latter are often preferred by the customer. Tested subjects often use proprietary protocols; therefore, reverse engineering is a common practice in OT tests.
OT facilities, e.g., power plants, are expensive and often hard to come by, thus a dedicated testing environment is rarely available. Testing commonly occurs during scheduled down-times; this severely restricts the available test window. Another related particularity: availability often trumps the breadth or depth of performed security tests. As test subjects are "_connected to the real world_", negative side effects are potentially catastrophic. Security tests are therefore highly coordinated with customers to prevent any negative fallout. This often prohibits any covert action. Regulatory requirements (Zhou et al., 2017) lead to a convergence between IoT and OT devices. In addition, Microsoft Active Directory is starting to enter OT networks, thus creating an overlap with Internal Network Tests.
Compared to other approaches, in **Red-Teaming** the attackers have a concrete mission, e.g., gain access to a defined subset of computers or a source code repository. While during _Internal Network Penetration Tests_ gaining Domain Admin is often the final goal, this is only a means for achieving the mission during Red-Teaming. Attackers holistically target a company and employ additional techniques such as Open Source Intelligence (OSINT) and Social Engineering; Post-Exploitation is more prominent compared to other disciplines. Red teaming is not concerned with broad coverage, but with achieving the team's defined objective. Red-Teaming does not only attack the target's technical security posture but also the response of the blue team, i.e., defenders. Thus covert operations, hidden persistence, command&control systems (C2) and evasion of defensive techniques enter the picture.
Assignments are often performed in larger teams and over extensive time frames, making information transfer between participants more important. Adding additional team members to speed up an
ongoing operation is problematic as the new team members do not share the existing member's target system knowledge.

| Type | Covert | Team-Size | Effort in Days |
| --- | --- | --- | --- |
| Vulnerability Assessment | not typical | 1 | 2-4 |
| Penetration Test | optional | 1-2 | 5-10 |
| Internal Network Test | optional | 1-2 | 7-10 |
| OT Test | never | 1-2 | 7-10 |
| Red-Teaming | always | 3-4 | 30+ |

Table 2. Types of Security Assessments
### Black- vs. Gray-Box Security Testing
When it comes to test execution, an important distinction is the amount of information and support provided by the customer. During black-box tests, practitioners go in "_blind_"; no information except the scope is given. During white-box tests, full system access or even the source-code of the tested application is given. Gray-box tests lie in-between: often access credentials or system architecture descriptions are provided before testing commences.
Pure white-box tests, as in "source-code reviews", are rarely performed due to their prohibitive costs. The type of assignment also matters: red-teaming is almost always performed as a black-box test, as the target's personnel is deliberately not involved. OT tests are often performed in tight lock-step with customers (to reduce the potential fallout) and are thus gray-box. Interviewees overwhelmingly **recommended moving from black-box towards white-box testing.** The reasons given were time and thus cost efficiency, as well as the potential for improved test coverage.
In other areas, customers are helping pen testers to improve efficiency too. "Assumed breach" scenarios in Internal Network Penetration Testing conceptually assume that a client computer will be breached eventually and thus use a breached computer as a starting point for investigations. During web pen tests or during external scans, rate limits or firewalls are commonly disabled to allow swift pen test execution. During web application pen-tests, internal details, such as used technologies, are commonly provided to reduce the search space.
### Typical Testing Workflows
Participants were asked to detail the execution of the different types of assignments. This section describes the peculiarities of the different areas.
Activities performed during **Web Penetration Tests** can be separated into exploratory intuitive testing and exhaustive testing against checklists or standards. All interviewees utilized both; no specific ordering between the two was detected, although if the checklist verification was automated, it was often run in parallel to exploratory testing. If a high level of automation is achieved, the manual exploratory testing can be integrated into the automation: one interviewee detailed a multi-stage automated test setup containing multiple enumeration steps, where the result of each step was manually verified, rectified, and used to instrument subsequent automated steps. Manual testing, e.g., manual crawling, was integrated as an additional input into the automated steps.
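A minimal sketch of such a staged setup is shown below (the stage names and tools are hypothetical; the manual review step is modeled as a callback):

```python
def run_staged_pipeline(seed_targets, stages, review):
    """Run enumeration stages in sequence; after each stage a human reviews,
    prunes, or extends the results before they feed the next stage."""
    current = set(seed_targets)
    for name, stage in stages:
        found = set(stage(current))    # automated step (crawler, scanner, ...)
        current = review(name, found)  # manual verification / rectification
    return current

# Placeholder stages standing in for, e.g., subdomain enumeration and crawling.
stages = [
    ("subdomains", lambda hosts: {h for host in hosts for h in (host, "api." + host)}),
    ("endpoints", lambda hosts: {f"https://{h}/login" for h in hosts}),
]

# In practice this step is interactive; here it simply drops out-of-scope items.
def review(stage_name, results):
    return {r for r in results if "out-of-scope" not in r}

print(run_staged_pipeline({"example.com"}, stages, review))
```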
According to interviewees, most time and effort are spent upon authorization tests. An application typically has multiple user groups with different access rights. During testing, penetration testers request one or more users per existing group and try to perform unauthorized data access with one user using data of another user. To verify responses, testers need documentation about the implemented access groups. If none was given, interviewees approximate a model of the access rules through probing/testing and experience.
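The cross-user replay at the heart of these tests can be sketched as follows (the endpoints, access model, and `fetch` function are placeholders; a real test would issue HTTP requests with each user's session):

```python
def authz_matrix(endpoints, users, fetch, allowed):
    """Replay every object-level endpoint with every user's session and flag
    responses that succeed although the access model forbids them."""
    findings = []
    for owner, resource in endpoints:       # each resource belongs to `owner`
        for user in users:
            status = fetch(user, resource)  # e.g., the returned HTTP status code
            if status == 200 and not allowed(user, owner):
                findings.append((user, resource))
    return findings

# Placeholder access model: a user may only read their own resources.
allowed = lambda user, owner: user == owner
# Fake responses standing in for real HTTP calls: the server (wrongly) serves everything.
fetch = lambda user, resource: 200

endpoints = [("alice", "/api/orders/1"), ("bob", "/api/orders/2")]
print(authz_matrix(endpoints, ["alice", "bob"], fetch, allowed))
# [('bob', '/api/orders/1'), ('alice', '/api/orders/2')]
```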
With the exception of testing for authentication or authorization, automated testing was deemed well-established and automated tooling was commonly employed. Common injection attack vectors were well covered by tooling, for example, _sqlmap_(Kal
as well as easily detectable, and countermeasure systems using honey-tokens are beginning to be deployed at customers' sites.
All interviewees in the IoT area mentioned applying industrial standards as well as the usage of checklists that included the OWASP IoT (Steiner, 2017) and OWASP Firmware Testing guides (Steiner, 2017).
**Red-Teaming** is special due to its evasion- and deception-based methods as well as through its objective-based approach. A red team initially has knowledge of its objective, e.g., gain access to a special server in department X, as well as a broad allowed scope, e.g., the targeted company. Teams initially model how to breach the company, e.g., by identifying potential social engineering victims. After the breach, low-key enumeration is used to covertly model "_how a company works_" and then abuse that knowledge to derive attacks that mirror expected traffic and behavior patterns. Throughout a red-teaming campaign, a map of known or breached elements is built and compared to the imagined map of the company that includes the final objective: if both converge, the objective should be achieved.
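This convergence can be pictured as reachability in the continuously growing map of known elements: once the breached part of the map contains a path from the current foothold to the objective, the mission is, in principle, achievable. A minimal sketch with invented host names:

```python
from collections import deque

def objective_reachable(known_edges, foothold, objective):
    """Breadth-first search over the *currently known* part of the network map."""
    graph = {}
    for src, dst in known_edges:
        graph.setdefault(src, set()).add(dst)
    seen, queue = {foothold}, deque([foothold])
    while queue:
        node = queue.popleft()
        if node == objective:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Placeholder map built up over the course of a campaign.
known = [("workstation", "file-server"), ("file-server", "jump-host"),
         ("jump-host", "source-repo")]
print(objective_reachable(known, "workstation", "source-repo"))  # True
```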
Automation employed for network lateral movement or breaching web applications originates from the other pen-testing disciplines but has to be re-evaluated against its chance of being detected. As red-team assignments are performed against real and live systems, the scope of destructive operations might be limited.
**OT-Tests** have their own challenges. Due to the prevalence of proprietary protocols, time-consuming reverse engineering of those protocols often occurs. The experiences mentioned by our interviewees indicate that security-by-obscurity is still common; this would match the perceived resistance of some ICS suppliers when faced with responsible disclosure requests. Given the timeboxed nature of testing, reverse-engineering frequently has to be aborted before completion.
Due to the potentially catastrophic side-effects of testing, a risk-based approach is often applied: together with the customer, a threat model workshop can be performed, and potential scenarios that warrant testing identified. Those scenarios, and only those, are subsequently manually executed against the OT system. As the available amount of time is fixed, threat modeling and performing the derived tests compete for the same temporal resources.
### Automation
All interviewees used pre-made tooling, while few (\(n=3\)) wrote additional tooling on their own. Overall, the tooling situation for specific testing areas was seen in a positive light. In contrast, "_all-in-one_" tools were seen in a negative light. Multiple interviewees remarked that a "_fully automated tool cannot replace a pen-tester_" or, as one interviewee cynically replied, "_yeah, I want a tool where I can click a button and magically I get a finished pen-tester report_". Practitioners relied on multiple small tools for different areas, e.g., _gobuster_ (Deng et al., 2017) for content discovery or _sqlmap_ (Deng et al., 2017) for testing SQL injections. PortSwigger's _BURP Proxy Suite_ (Deng et al., 2017) was used by every web application pen-tester interviewed. See Table 3 for a list of commonly named automated tools.
**Problems with tooling.** Interviewees remarked that the setup overhead of automation tools can be problematic. Especially for short-term projects, such as vulnerability assessments or tightly-timed web application pen-tests, the initial setup overhead and processing time can be prohibitive for deploying tooling. Another problem was coverage: even within the same problem area, the coverage of different tools widely diverges, and the situation is made worse as commonly no tool provides full coverage of a testing area. To counteract this, practitioners commonly use multiple tools redundantly, yielding more processing time overhead and needing manual merging of the different tools' results.
Some areas were described as not suitable for automation. As OT systems are finicky and the potential fallout catastrophic, automated tests are often not feasible. Additionally, when performing social engineering during red-team assignments, fully automated tools are avoided for both fear of detection and ethical qualms because they would be used on human targets.
**Extendability and Community** was identified as an important discriminator by practitioners. Both are related to fast-paced developments within the exploit community: if a tool can be proactively extended or scripted by the community, it and its implemented methods can evolve faster compared to reactive development within walled gardens. An example of an OSS tool utilizing community-provided detection rules is _nuclei_ (Deng et al., 2017); an example of a commercial tool with good OSS extendability is the PortSwigger _BURP Proxy Suite_ (Deng et al., 2017) with its integrated _BApp Store_.
**Manual fine-tuning to reduce search space.** Multiple interviewees mentioned that they are adjusting the tooling according to their ongoing findings. Examples of this feedback loop would be limiting tested vulnerability classes to feasible ones, e.g., not testing a static website for SQL injections, or limiting tested database queries to concrete database dialects.
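A minimal sketch of this feedback loop (the module names and context keys are invented; in practice this corresponds to, e.g., selecting nuclei templates or restricting sqlmap to a concrete DBMS):

```python
# Which test modules make sense for which observed target properties.
MODULE_RULES = {
    "sql_injection": lambda ctx: ctx.get("has_database", False),
    "sqli_mysql": lambda ctx: ctx.get("dbms") == "mysql",
    "php_deserialization": lambda ctx: ctx.get("language") == "php",
    "xss_reflected": lambda ctx: ctx.get("dynamic_pages", False),
}

def select_modules(context):
    """Drop test modules that cannot apply to the observed target context."""
    return sorted(name for name, applies in MODULE_RULES.items() if applies(context))

# Findings so far: a static site, so no point in testing SQL injection.
print(select_modules({"dynamic_pages": False}))  # []
# Later findings reveal a dynamic PHP application backed by MySQL.
print(select_modules({"has_database": True, "dbms": "mysql",
                      "language": "php", "dynamic_pages": True}))
```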
## 6. How do Hackers think?
While Section 5 describes the external view on pen-tests, their types, and the activities performed during them, this section focuses on the inner workings and thoughts of security professionals during testing, detailing their decision processes and potential sources of their intrinsic motivation.
### Exploiting Configuration vs. Applications
A reoccurring theme was the distinction between searching for known vulnerabilities and hunting for new vulnerabilities.
Examples of the former would be executing a known vulnerability scan against off-the-shelf software or investigating a Microsoft Active Directory for misconfigurations; an example of the latter
would be searching for SQL injections within a custom-written application or discovering a new vulnerability class.

| Tool | Area | Availability | # |
| --- | --- | --- | --- |
| PortSwigger BURP Suite (Deng et al., 2017) | Web-Testing | free, commercial | 7 |
| BloodHound (Brands, 2017) | AD Enumeration | OSS | 5 |
| sqlmap (Deng et al., 2017) | Web/SQLi | OSS | 3 |
| nmap (Deng et al., 2017) | Network | OSS | 7 |
| nessus (Deng et al., 2017) | Network | commercial | 8 |
| gobuster (Deng et al., 2017), dirbuster (Deng et al., 2017) | Network | OSS | 2 |
| certify (Deng et al., 2017) | AD Exploitation | OSS | 4 |
| metasploit (Deng et al., 2017) | Exploitation | OSS | 3 |
| nuclei (Deng et al., 2017) | Exploitation | OSS | 3 |

Table 3. Commonly Named Tools. # denotes the interviewee count mentioning the corresponding tool.
Synonyms given for "_searching for known vulnerabilities vs. hunting for new vulnerabilities_" were "_vulnerability assessments vs. application security_" or "_hacking configuration vs. hacking programs_".
These two categories are fluid. For example, findings from "_hunting for bugs_", i.e., a new 0-day exploit against a software product, can end up within "_searching for known vulnerabilities_", i.e., when a rule for detecting the 0-day is added to a web vulnerability scanner.
While not stated explicitly during the interviews, we assume that our interviewees' mental models are primed through their understanding of this divide, and that it highly impacts tool and technique selection. As an interviewee mentioned, "_you don't hunt for 0-days during an Active Directory assignment_". This implies that pen-testers will not consider spending days fuzzing a domain controller for new vulnerabilities during internal network scans.
### Identifying Vulnerable Areas or Operations
Participants often described exploratory testing during which they were guided by intuition. Through follow-up questions, further information about this intuition was gathered.
All interviewees were analyzing requests and responses; the former for conspicuous parameters and the latter for occurrences of error messages or other suspicious behavior, that is, behavior that does not fulfill the testers' expectations.
During the interviews, multiple areas were identified where security testers possessed a mental model of the expected behavior of the software-under-test; during testing security testers were trying to find operations that could trigger unexpected behavior which, in turn, might turn into a security vulnerability. Those mental models were built from experience, e.g., prior assignments or experience within the specific business area, as well as adapted during the security test itself, e.g., "_learning how the application works_". A summary of multiple observed mental models is shown in Table 5.
Pen-testers attributed their intuition to experience which could be built from previous penetration tests, participation in CTF events, prior engagements with the same client or industry area, or by implementing similar software solutions during their former life as software developers. Participants remarked that during testing, they are triggered by vulnerabilities or exploits they had recently read about and, in response, would start additional research. One penetration tester mentioned creating a topic map during everyday research which they then refer back to during assignments.
Related to experience, practitioners had preconceptions about the technologies used or features implemented. Some functionality, e.g., file uploads or XML processing, were thought to be hard to implement in a secure manner -- to quote a participant, "_there are some things that just cannot be implemented correctly_". Similar resentments were discovered about used technologies. Some programming languages were deemed to increase the probability of an application containing defects; an interviewee mentioned thinking "_let's see how developers have been fooled again_" when going into assignments. As cynical as it may be, PHP was often mentioned as such a technology.
Two distinct positions were experienced regarding the learnability of this intuition. On one side, "_nobody is born a super hacker_", on the other hand, one interviewee mentioned that the best penetration testers in their peer group exhibited hacking-style behavior already during kindergarten. Debating nature-vs-nurture or art-vs-craft would go beyond the scope of this publication. Regardless of this, common consensus was found that hacking skills are improved through practice.
It is important to note that participants may be subject to _selection_ and _survivorship_ bias. They might find vulnerabilities in areas they focus on, while overlooking plentiful vulnerabilities in areas they historically ignore. After a vulnerability has been found in an area, the increased attention upon that area often yields multiple subsequent vulnerabilities (Brandt et al., 2017).
| Subtheme | # | Representative Quotes |
| --- | --- | --- |
| High-Level Targeting | 7 | _"We select the attack that would be the most cost-effective for the attacker"_ |
| | | _"...before we attack proprietary protocols we'll attack a windows domain server missing updates."_ |
| Experience | 12 | _"How we actually work? We look for obvious vulnerabilities, those that jump out immediately"_ |
| | | _"I know that from my time programming C/C++... I find the errors that I made back then"_ |
| | | _"I search for vulnerabilities that I have seen and exploited before."_ |
| | | _"...often I see systems that I have already seen when doing CTFs... then I already know how to attack it"_ |
| Familiarity with Target | 4 | _"If it is a repeat customer then you already know how they tick and what their problems are"_ |
| Observed Features | 10 | _"One runs through the web applications and sees a feature and thinks 'this looks interesting, could it be implemented weirdly'"_ |
| | | _"If there's an upload function, I am interested."_ |
| Observed Technology | 11 | _"Some things cannot be done securely, for example PHP."_ |
| | | _"Well, you always feel happy when the application is somehow a PHP application."_ |
| Modeling Behavior | 9 | _"Testing is manual, as you need to get a feel how the application is supposed to work and answer"_ |
| | | _"You search for unexpected behavior...for example a database that throws an error when you enter a '."_ |
| Intuition | 8 | _"This will be esoteric... but I believe there is some organ that tingles if an operation looks fishy"_ |

Table 4. Excerpt of sub-themes of "Identifying Vulnerable Areas or Operations"
### Dealing with Uncertainty
Pen-testers routinely have to deal with uncertainty as they lack transparency of the tested system: pen-testers must make assumptions about requirements, the tested system's architecture, as well as about accepted input values and the corresponding expected output parameters (Peters et al., 2017). They evaluate those against their expectations, and if a system deviates, examine the deviation for exploitability. When in doubt, testers can escalate and query their clients, but this is deemed to be time-inefficient and thus minimized.
Examples of uncertainty would be a pen-tester issuing an HTTP request where they expect an "access denied" response but instead receive a successful response containing data that cannot be clearly classified as belonging to the current user or not. Another example would be testing for time-based blind SQL injection vulnerabilities where the measured latency is not sufficiently deterministic for verifying the vulnerability. Similarly, second-order attacks cannot easily be attributed to the initial request but only to the operation that eventually contained the vulnerability.
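For the timing example, one common way to tame this uncertainty is to repeat both the baseline and the delayed request several times and compare robust statistics instead of single measurements. A minimal sketch (the latency values are invented placeholders for real measurements):

```python
from statistics import median

def looks_time_based(baseline_ms, injected_ms, expected_delay_ms, margin=0.5):
    """Heuristic verdict for a time-based blind injection: the median latency of
    the injected requests should exceed the baseline median by roughly the delay
    the payload asked for (e.g., SLEEP(2) -> ~2000 ms)."""
    gap = median(injected_ms) - median(baseline_ms)
    return gap >= margin * expected_delay_ms

baseline = [120, 135, 128, 140, 131]       # normal responses (ms)
injected = [2150, 2210, 380, 2190, 2160]   # payload with a 2 s delay; one outlier
print(looks_time_based(baseline, injected, expected_delay_ms=2000))  # True
```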
Penetration testers modify existing valid requests to include malicious payloads. When these requests produce errors, the reason can be uncertain: was it a potential vulnerability? A successful input filtering algorithm? Or an application error that cannot be exploited? This classification impacts the selection of subsequent requests and attacks.
Another instance of uncertainty occurs during tool optimization: tool output is continuously used to further optimize subsequent tool invocations. Interviewees performed a sanity check on whether reported system fingerprints were plausible and discarded them otherwise. In addition, some high-impact decisions, such as limiting the expectations to a single DBMS type, were verified with the client before incorporating them into tooling selection or configuration.
### Don't waste my time
One theme discovered was that interviewees feel the need to be time-efficient. This might be related to tight time-budgets or very constrained test-bed availability being anathema to good test coverage. Shortcuts were taken to reduce menial tasks. For example, during internal network tests, a breach is already assumed. The interviewees defended this decision through "_this will eventually happen through social engineering anyways_". A similar argument was given for being provided accounts with local administrative privileges: "_a real attacker can just wait for the next 0-day_", or for disabling Anti-Virus solutions as evading them "_takes time not skill_". Tests with foregone conclusions were considered tedious; one example given was testing an Anti-Virus solution embedded within a web-application with different payloads. The repetitiveness of such tasks might contribute to this too. A similar aversion to responsible disclosure procedures might be correlated with bad experiences during prior disclosures: the vendors' responses mostly "wasted" the interviewee's time.
### Quality Control
Pen-Testers were concerned about the quality of their work, especially when working with high-stakes data such as health records - "_nobody wants to be that pen-tester that overlooked a vulnerability that was later exploited_". A tester's attention is also a limited resource: at least one pen-tester remarked that web application tests can be monotonous and that after 3-4 days their motivation degrades. Usage of checklists, automated baseline scans, and working in teams were encountered as quality improvement measures.
The applicability of checklists depends upon the testing domain. Some domains, e.g., Web-Applications or Mobile Applications, were seen as narrow and thus supporting the creation of security checklists. Other domains such as IoT were described as diverse and impeding the creation of a unified security checklist.
Checklists were often derived from open industry standards; they were maintained and extended by companies, but the resulting in-house checklists were seldom given back to the community and published. The common base for checklists was the OWASP trifecta of Vulnerability Top 10, Software Verification Standard, and Testing Guide; instances of those are provided by OWASP for multiple domains such as Web-Applications (Shen et al., 2017; Shen et al., 2018; Shen et al., 2019), Mobile Applications (Peters et al., 2017; Shen et al., 2019), IoT (Peters et al., 2017) or Firmware (Peters et al., 2017). Surprisingly, neither MITRE ATT&CK (Shen et al., 2019) nor PTES (Peters et al., 2019) was mentioned by our interviewees. Working in teams or asking colleagues can be seen as a broadening of the available experience pool or as employing a "human checklist". The use of automated tools as baseline scans that uphold minimal quality standards can also be interpreted as quality control. Interviewees mentioned the usage of fully-automated commercial web vulnerability scanners such as NetSparker (Shen et al., 2018) or Acunetix (Beng et al., 2019) for this purpose. Some HTTP inspection proxies, for example, PortSwigger BURP (Peters et al., 2017) or OWASP ZAP (Peters et al., 2018), have gained similar scanning capabilities. Those were used by some of the interviewees and encroached on terrain traditionally taken by web vulnerability scanners. In defense of testers, full coverage of the software-under-test is not feasible due to the black- to gray-box nature of security assignments.
| Area | Input | Identified Elements | Describes | Used for |
| --- | --- | --- | --- | --- |
| Web Testing | Web Traffic | Access Rules | ACL model | Authentication Checks |
| Red-Teaming | Network Traffic | Communication Patterns | Expected Communication | Covert Channels |
| Network Tests | local data and network shares | File data and metadata | Company Data | find juicy information |
| OT tests | data flows | data flow model | system architecture | identify test scenarios |
| OT tests | network traffic | network commands | network protocol | protocol reversing |
| Web Testing | web traffic, context | used technologies | technology stack | potential vulnerabilities |
| Web Testing | web traffic | HTTP requests and responses | input model | generate tests |

Table 5. Excerpt of observed models
### Dealing with Change
Security is in a constant state of flux. Compared to other disciplines, the existence of active adversaries -- the struggle between red and blue teams -- leads to a Red Queen's race: participants must run to stand still (Zang et al., 2017; Wang et al., 2018). Whoever stops evolving will be overcome by their respective adversary.
Interviewees lamented that some areas -- breaking into web applications, breaching external infrastructure/perimeters, and reverse-engineering -- have become harder due to boosted defenses such as usage of frameworks, improved default configurations, and heightened awareness of security posture (cf. Table 6). They are partially switching work areas, i.e., turning towards OT or internal network testing.
## 7. Discussion and Implications
We review our findings following the structure of our initial research questions to formulate points of discussion and implications for security researchers and practitioners.
### Alignment between Research and Industry
We started this study with two questions, **"What do common security tests look like?"** and **"How do Hackers perform their tasks?"**. Those questions were broadly formulated to gain insight into what common assignments look like for practitioners, and how practitioners navigate their tasks within those assignments. These questions were particularly motivated by the fact that existing work is not grounded in the realities of offensive security practices.
#### 7.1.1. Research must match a Project's Scope
During interviews, we identified typical security assignments with their respective typical resource allocations. Research should heed these resource allocations. For example, when targeting web vulnerability assessments, a typical project was allotted 2-4 days of manual effort. Setting up a fuzzing pipeline, running the fuzzer, and analyzing its results is not feasible in this short time frame, thus rendering generic fuzzing rather infeasible for web security practitioners. Still, searching Google Scholar for _"fuzzing web applications"_ yields 23000 results.
Given that interviewees mentioned the prevalence of web application frameworks and their preference for grey-box testing, SBOM-based solutions should be a better fit and would warrant additional research.
Most assignment types were carried out solo or in pairs, indicating that research into collaborative solutions might be of limited use. The one exception involving larger teams was Red-Teaming, although here collaborative solutions integrated into C2 frameworks are already commonly used.
Automation with direct target interaction was deemed problematic in the Red-Teaming and OT areas due to the sensitivity of their targets. In OT, security by obscurity still seems to be common, limiting the opportunities for approaches based on source-code analysis. On the other hand, improvements to reverse-engineering binaries or protocols would be appreciated by practitioners.
Recently, the usage of Large Language Models (LLMs) for automated security testing has been explored (Wang et al., 2018). While preliminary results look promising, to maximize their long-term impact the resulting automations should be aligned to the mentioned industry issues.
#### 7.1.2. Security Researchers and Security Practitioners
Separating security into academic research and industry creates a false dichotomy. Industry itself is, at least, separated into security practitioners and security researchers. The former are practitioners that perform customer-specific assignments: those are the people that typically perform short-term penetration tests and directly communicate with clients to improve their security. In contrast, security researchers do not exclusively work on short-term client projects but spend time researching new attack techniques and vectors. An example of the former would be an anonymous pen-tester working on a different web-application every week; an example of the latter would be James Kettle investigating and documenting a new attack class, HTTP Request Smuggling, over many years (Sundundar et al., 2018; Wang et al., 2018). Security researchers search for new attack vectors or analyze a software product for a prolonged period of time to release exploits or be awarded CVEs. Security practitioners are more focused on hunting configuration errors, exploiting well-known vulnerabilities, or identifying new instances of known attack classes. They utilize information and tools from security researchers for that.
Tools such as fuzzers are thus more applicable to security researchers than to security practitioners. The large amount of research into fuzzing indicates that academic research is targeting security researchers rather than practitioners and thus are only indirectly improving the security landscape when information from security researchers trickles down to practitioners.
| Sub-Theme | # | Representative Quotes |
| --- | --- | --- |
| Impact of Frameworks | 5 | _"Security improves because frameworks help developers write secure code"_ |
| | | _"Pen-Testing has become boring as critical vulnerabilities are found less often"_ |
| | | _"Usage of secure frameworks pushed vulnerability hunting towards business logic."_ |
| Defensive Mindset | 3 | _"Developer awareness about security has become better."_ |
| Changing targets | 7 | _"In the future we might use social engineering not only for the initial foothold, but also for lateral movement"_ |
| | | _"Rich-client applications are still fun...they feel like web applications twenty years ago."_ |
| | | _"Active-Directory: I moved into this area because it is fun to break into a system within days."_ |
| | | _"The situation in OT will stay the same. It's hard to modernize all the legacy hard- and software."_ |
| | | _"Some OT networks are ransomware-ready."_ |

Table 6. Excerpts of _Dealing with Change_: How is Security-Testing changing?
### Opportunities for Research.
We now want to answer the important final question, **"What tedious or time-consuming areas could be improved?"**, throughout the rest of this section and frame the answers as opportunities for future research that directly benefits security practitioners.
#### 7.2.1. **Automating Authorization Testing**
For security tests with a relatively restricted scope such as _web application tests_, we suggest research into covering additional vulnerability classes. **Authorization Testing** is currently performed manually and was named one of the most time-consuming parts of testing and thus would be a fruitful target for automation research. Current gaps are manifold: detection of potential operations, accepted parameters, and potentially malicious parameters; generation of payloads as well as the assessment of an attack's success. A subtle problem is the classification of returned web pages and downloads into authorized and unauthorized content as this is highly context specific.
#### 7.2.2. **Gray-box Testing**
The preference for gray-box testing by software security professionals was surprising and can have a significant impact on software testing design: if the target's configuration or source code can be accessed (or if the target organization is willing to instrument the software under test with sensors, as is done in IAST), **automated software testing approaches using source-code or configuration** become increasingly feasible for security testing. Automated source code and configuration file analysis from a security perspective is currently underexplored and ripe for investigation. Research in this area yields dual-use tools, aiding both offensive security professionals searching for vulnerabilities as well as defensive software developers trying to prevent vulnerabilities from entering their code in the first place.
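As a toy illustration of what configuration-level checks could look like, the snippet below flags a few risky settings in a parsed configuration (the keys and rules are invented examples, not a complete rule set):

```python
RULES = [
    ("debug mode enabled", lambda cfg: cfg.get("debug") is True),
    ("TLS verification disabled", lambda cfg: cfg.get("verify_tls") is False),
    ("wildcard CORS origin", lambda cfg: cfg.get("cors_origin") == "*"),
    ("credential stored in config", lambda cfg: "password" in cfg),
]

def audit_config(cfg):
    """Return the descriptions of all rules that the configuration violates."""
    return [description for description, violated in RULES if violated(cfg)]

cfg = {"debug": True, "verify_tls": False, "cors_origin": "*", "db_host": "localhost"}
print(audit_config(cfg))
# ['debug mode enabled', 'TLS verification disabled', 'wildcard CORS origin']
```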
#### 7.2.3. **API Workflow Discovery for Security Test Generation**
Interviewees lamented that the manual creation of API security test-cases is a tedious and time-consuming process. While the automation of API test generation would be advantageous, the following gaps currently prevent this: discovery of API endpoints and operations, generation of benign requests as a baseline, combining single requests into test flows using social and semantic information, deriving malicious test cases, and finally evaluating test outcomes. The **automatic generation of security test suites based upon API definitions and traffic patterns** would reduce testers' aversion to this important class of testing. While there have been several works that propose approaches for API discovery (Song et al., 2018; Wang et al., 2019), the kind of discovery we envision would focus on maximizing coverage for security tests.
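The first two of these gaps can be sketched compactly: enumerating operations from a specification and deriving one benign baseline request per operation, from which malicious variants could later be derived. The specification below is a hypothetical, heavily trimmed OpenAPI-style dictionary:

```python
def enumerate_operations(spec):
    """Yield (method, path, parameter names) triples from an OpenAPI-style dict."""
    for path, methods in spec.get("paths", {}).items():
        for method, operation in methods.items():
            yield method.upper(), path, [p["name"] for p in operation.get("parameters", [])]

def baseline_request(method, path, params):
    """A benign request skeleton; malicious test cases would be derived from it."""
    return {"method": method, "url": path, "params": {p: "1" for p in params}}

spec = {"paths": {
    "/orders/{id}": {"get": {"parameters": [{"name": "id"}]},
                     "delete": {"parameters": [{"name": "id"}]}},
    "/orders": {"post": {"parameters": []}},
}}

for method, path, params in enumerate_operations(spec):
    print(baseline_request(method, path, params))
```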
#### 7.2.4. **Information Discovery for Security Testing**
_Internal Network Tests_ and _Red-Teaming_ are highly dependent on discovering and utilizing client-specific information. **Stealthy information gathering from compromised systems or network shares** is performed manually, and thus its efficiency could be improved. The goal is the automated identification of "juicy" information while reducing the number of read requests to minimize network impact or the chance of triggering intrusion detection systems. Research in this area would also benefit defenders as it would make forensic work, e.g., analyzing data breaches, more efficient.
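A minimal sketch of prioritised, budgeted discovery (the keyword weights and file names are invented; a real implementation would additionally throttle and randomise access to stay below detection thresholds):

```python
KEYWORDS = {"password": 10, "kdbx": 10, "backup": 5, "finance": 5, "config": 3}

def score(path):
    """Rank a file path by how 'juicy' its name looks."""
    lower = path.lower()
    return sum(weight for keyword, weight in KEYWORDS.items() if keyword in lower)

def plan_reads(paths, budget):
    """Read the most promising files first and stop once the budget is spent."""
    ranked = sorted(paths, key=score, reverse=True)
    return [p for p in ranked if score(p) > 0][:budget]

share = ["/hr/holiday_photos.zip", "/it/passwords_2021.xlsx",
         "/it/router_config_backup.txt", "/finance/q3_report.pdf"]
print(plan_reads(share, budget=2))
# ['/it/passwords_2021.xlsx', '/it/router_config_backup.txt']
```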
#### 7.2.5. **Scaling Personalized Phishing with ML**
_Phishing_ is an important part of the red-teaming workflow and is commonly done manually, due to the nature of customization proper phishing requires. We see an opportunity to investigate the **increase of scalability of social engineering through machine learning** techniques. To create highly effective phishing mails, currently, mails are manually customized to fit the respective recipient. Machine learning techniques could automate this and thus provide **Spear Phishing at Scale**, as they have already been shown to personalize natural language communication in other domains (Kohlfelder and Ferdows, 2017; Wang et al., 2019; Wang et al., 2019). An additional avenue for research is the identification of potential targets for social engineering, both from an external perspective (identifying initial recipients within a company) as well as **detecting informal networks within companies to enrich subsequent social-engineering campaigns** -- this is an example of the red-teaming theme of "_understanding how companies function_".
#### 7.2.6. **Human-in-the-loop for OT testing**
OT professionals were wary of fully automated security tests due to the potential negative impact on stability and thus availability. We suggest research into supplemental areas while letting humans decide which attacks to execute. One example would be to **reduce the pain and effort of reverse engineering protocols**: OT tests are very time-bound, so there is little time for fuzzing or reverse-engineering OT protocols, while the potential benefit might be immense given that security is often provided only by the obscurity of those protocols. Combining fuzzing with automatic reverse-engineering should yield large benefits (Wang et al., 2019). The fear of potential fall-out has other consequences too: OT tests are often performed by executing scenarios in lockstep with the customer. The scenarios are identified through threat modeling of components and their data flows. To reduce the time spent on this effort, ways of **automatically deriving scenarios including attack paths** from system and data flow diagrams should be investigated.
Both OT professionals and red-teams were wary of fully automated testing solutions due to the potential negative impact upon stealth (red-teaming) or stability (OT). To facilitate the deployment of automated systems, **research into Human-Computer Interaction to bolster the acceptance of ML and automated systems** is needed. It is assumed that important topics will include humans-in-the-loop as well as the explainability of automated reasoning.
#### 7.2.7. **Studying Knowledge Communities for Security Testers**
Our interview participants unsurprisingly felt the need for ongoing education w.r.t. new vulnerabilities and security trends. They synthesized information from multiple sources, the pivotal one being Twitter/X. Research on how developers stay current (Wang et al., 2019) and how development communities shape around news outlets (Wang et al., 2019) should be extended to the security arena, especially now that recent stewardship changes at Twitter might impact its reach. **Automated recommender systems utilizing diverse hacking news sources** such as news outlets, social media, and the "darknet" should make it easier for security professionals to stay up to date.
## Acknowledgment
We thank the anonymous interview participants for their time, and Loren Kohnfelder and Geraldine Fitzpatrick for providing feedback. |
2303.05532 | Multiparameter estimation perspective on non-Hermitian
singularity-enhanced sensing | Describing the evolution of quantum systems by means of non-Hermitian
generators opens a new avenue to explore the dynamical properties naturally
emerging in such a picture, e.g. operation at the so-called exceptional points,
preservation of parity-time symmetry, or capitalising on the singular behaviour
of the dynamics. In this work, we focus on the possibility of achieving
unbounded sensitivity when using the system to sense linear perturbations away
from a singular point. By combining multiparameter estimation theory of
Gaussian quantum systems with the one of singular-matrix perturbations, we
introduce the necessary tools to study the ultimate limits on the precision
attained by such singularity-tuned sensors. We identify under what conditions
and at what rate can the resulting sensitivity indeed diverge, in order to show
that nuisance parameters should be generally included in the analysis, as their
presence may alter the scaling of the error with the estimated parameter. | Javid Naikoo, Ravindra W. Chhajlany, Jan Kolodynski | 2023-03-09T19:00:09Z | http://arxiv.org/abs/2303.05532v3 | # Multiparameter estimation perspective on non-Hermitian singularity-enhanced sensing
###### Abstract
Describing the evolution of quantum systems by means of non-Hermitian generators opens a new avenue to explore the dynamical properties naturally emerging in such a picture, e.g. operation at the so-called exceptional points, preservation of parity-time symmetry, or capitalising on the singular behaviour of the dynamics. In this work, we focus on the possibility of achieving unbounded sensitivity when using the system to sense linear perturbations away from a singular point. By combining multiparameter estimation theory of Gaussian quantum systems with the one of singular-matrix perturbations, we introduce the necessary tools to study the ultimate limits on the precision attained by such singularity-tuned sensors. We identify under what conditions and at what rate can the resulting sensitivity indeed diverge, in order to show that nuisance parameters should be generally included in the analysis, as their presence may alter the scaling of the error with the estimated parameter.
_Introduction._--Quantum entanglement dramatically boosts performance in sensing [1; 2], allowing quantum sensors to breach classical limits naively imposed by the i.i.d.-statistics [3]. The corresponding enhancement, however, turns out to be very fragile [4; 5; 6], making methods of quantum control [7; 8; 9] and error-correction [10; 11; 12] essential, if the robustness against decoherence and imperfections is to be maintained. As the impact of noise becomes inevitable with sensor complexity, a change of paradigm is necessary. One way is to adopt a non-Hermitian description of the dynamics and carefully engineer the noise instead, in order to make the evolution extremely sensitive to external perturbations. For instance, by considering deviations from the so-called _exceptional points_ (EPs) in the space of parameters characterising the system [13]--corresponding to special degeneracies at which \(n\) (complex) eigenvalues coalesce along with their respective eigenmodes [14; 15; 16]--a linear perturbation \(\epsilon\) away from the EP leads to an \(n\)th-root splitting \(\sim\sqrt[n]{\epsilon}\) of the eigenmode frequencies [17]. This starkly contrasts the common polynomial energy-splittings arising when perturbing (Hermitian) Hamiltonians. Hence, if one measured the resulting splitting of eigenmode frequencies in a noiseless manner, it would yield infinitely steep signals of unbounded sensitivity as \(\epsilon\to 0\). In the classical regime, in which stochastic noise primarily distorts the signal, such a phenomenon has been spectacularly demonstrated with optical resonators [18; 19], but it is washed out at quantum scales, at which the intrinsic quantum noise dominates [20; 21; 22]--in a similar way as it fundamentally prohibits noiseless amplification of quantum optical signals [23; 24].
Nonetheless, alternative sensing schemes involving linearly coupled systems were proposed (and implemented [25]) that surpass the impact of quantum noise by resorting to different perturbations of the effective non-Hermitian generator, \(\mathbf{H}\), describing the quantum Langevin dynamics [26]--with the operation around an EP being no longer essential [27]. For example, by considering the internal interaction to be non-reciprocal and perturbing the coupling strength instead, the sensitivity--the signal-to-noise ratio (SNR)--was shown to improve by a constant factor [28]. Impressively, it was also shown that by engineering \(\mathbf{H}\) to be _singular_[29] and sensing perturbations of the internal frequency with the probing signal tuned to it, the SNR may diverge boundlessly as \(\epsilon^{-2}\) with \(\epsilon\to 0\)[30]. Despite the apparent similarity to the EP-induced effect, this is a consequence of probing the quantum sensor close to a critical point of a dynamical phase transition, which constitutes a resource in sensing tasks [31; 32; 33; 34; 35; 36; 37; 38]. Although criticality may question the linear model of dynamics [17; 20], for particular sensors--in particular, the one we consider below [39; 40; 41; 42; 43]--it has been verified both at low and high probe powers [42]. Here, we demonstrate that multiparameter analysis is then required to correctly assess the sensing capabilities of a singularity-tuned sensor [30], and hence possibly of other criticality-enhanced sensing schemes [31; 32; 33; 34; 35; 36; 37; 38].
In particular, we analyse the emergence of singularity-induced SNR-divergence within the canonical linear system exhibiting _parity-time_ (PT) _symmetry_ [44], i.e. two coupled bosonic cavities that experience loss and gain while being continuously monitored [43]. We resort to the multiparameter estimation theory of Gaussian
states [45], in order to determine the ultimate limits on sensitivity. We then perform _singular perturbations_[46, 47, 48, 49] of the corresponding frequency response function, in order to show that the rate of the SNR-divergence critically depends on the perturbation form, even when assuming all other system parameters to be perfectly known. Furthermore, not only any slight deviation from the singular point precludes unbounded sensitivity, but also the divergence rate depends on the presence of _nuisance parameters_, i.e. other system parameters unknown prior to estimation [50, 51]. Our results suggest that one must be careful when assessing the sensing capabilities of singularity-tuned sensors [30], as these depend strongly on the ability to fine-tune and calibrate the system.
_Non-Hermitian sensor model._--Sensors [39, 40, 41, 42, 43] can be conveniently described by the model depicted in Fig. 1, in which two cavities containing optical modes \(\hat{a}_{1}\) and \(\hat{a}_{2}\) at frequencies \(\omega_{1}\) and \(\omega_{2}\), respectively, are linearly coupled with strength \(g\), so that the overall free Hamiltonian reads:
\[\hat{H}_{S}=\omega_{1}\hat{a}_{1}^{\dagger}\hat{a}_{1}+\omega_{2}\hat{a}_{2}^{ \dagger}\hat{a}_{2}+g\left(\hat{a}_{1}^{\dagger}\hat{a}_{2}+\hat{a}_{2}^{ \dagger}\hat{a}_{1}\right). \tag{1}\]
We shall consider here the case of degenerate cavities such that \(\omega_{1}=\omega_{2}=:\omega_{0}\)[42]. Each mode \(\hat{a}_{1}\) (\(\hat{a}_{2}\)) is separately coupled to a scattering optical channel \(\hat{B}_{1}\) (\(\hat{B}_{2}\)) that effectively induces loss (gain) of strength \(\eta_{1}\) (\(\eta_{2}\)) on each cavity. In parallel, both cavities are independently probed via channels \(\hat{A}_{1}\) and \(\hat{A}_{2}\), each coupled with strength \(\kappa\), whose outputs are continuously monitored. By resorting to the input-output formalism [52] summarised in App. A, the sensor dynamics can be described by a linear quantum Langevin equation [26]:
\[\partial_{t}\hat{\mathbf{a}}=-\mathrm{i}(\omega_{0}\mathbf{I}+\mathbf{H})\hat{\bm {a}}+\hat{\mathbf{A}}_{\mathrm{in}}+\hat{\mathbf{B}}_{\mathrm{in}}, \tag{2}\]
where \(\hat{\mathbf{a}}\coloneqq\{\hat{a}_{1},\hat{a}_{2}\}^{\mathrm{T}}\), \(\hat{\mathbf{A}}_{\mathrm{in}}\coloneqq\{\sqrt{\kappa}\hat{A}_{1,\mathrm{in}}, \sqrt{\kappa}\hat{A}_{2,\mathrm{in}}\}^{\mathrm{T}}\), \(\hat{\mathbf{B}}_{\mathrm{in}}\coloneqq\{\sqrt{\eta_{1}}\hat{B}_{1,\mathrm{in}},- \sqrt{\eta_{2}}\hat{B}_{2,\mathrm{in}}^{\dagger}\}^{\mathrm{T}}\), and \(\mathbf{I}\) is a 2\(\times\)2 identity matrix. As depicted in Fig. 1, \(\hat{A}_{\ell,\mathrm{in}}\), \(\hat{B}_{\ell,\mathrm{in}}\) with \(\ell=1,2\) denote the effective input fields of the optical channels, whose output fields are then determined by the input-output relations [26]. In particular, as shown in App. A, the outputs being measured are given at any time \(t\) by: \(\hat{A}_{\ell,\mathrm{out}}(t)=\hat{A}_{\ell,\mathrm{in}}(t)-\sqrt{\kappa}\, \hat{a}_{\ell}(t)\).
The inter-cavity interactions, as well as their coupling to optical channels, modify the free evolution of the cavity modes in Eq. (2) in the form of a non-Hermitian dynamical generator:
\[\mathbf{H}=\begin{pmatrix}-\mathrm{i}\gamma_{1}&g\\ g&+\mathrm{i}\gamma_{2}\end{pmatrix} \tag{3}\]
with \(\gamma_{1}\coloneqq(\eta_{1}+\kappa)/2\) (\(\gamma_{2}\coloneqq(\eta_{2}-\kappa)/2\)) being the overall loss (gain) rate of each cavity. As a result, defining \(\gamma_{\pm}\coloneqq(\gamma_{2}\pm\gamma_{1})/2\), we may compactly write the eigenvalues and (unnormalised) eigenmodes of \(\mathbf{H}\) as
\[\lambda_{\pm}=\mathrm{i}\gamma_{-}\pm\sqrt{g^{2}-\gamma_{+}^{2}},|e_{\pm} \rangle\!=\!\begin{pmatrix}-\mathrm{i}\gamma_{+}\pm\sqrt{g^{2}-\gamma_{+}^{2}} \\ g\end{pmatrix}, \tag{4}\]
so that it becomes clear that the spectrum of \(\mathbf{H}\) is real iff \(g\geq\gamma_{+}\) and \(\gamma_{-}=0\), in which case \(\mathbf{H}\) formally exhibits the _PT-symmetry_, manifested by the fact that interchanging the modes and performing complex conjugation does not affect the description of dynamics [44]. The sole condition \(\gamma_{-}=0\), on the other hand, we term the _balanced_ scenario, as the gain then exactly balances out the loss (\(\gamma_{1}=\gamma_{2}\)) [41], which in the presence of PT-symmetry can be interpreted as the lasing threshold [53, 30]. Importantly, it has been demonstrated that by maintaining the PT-symmetry, the validity of the linear model (2) can be extended to high probe powers [42].
Furthermore, the non-Hermitian generator (3) exhibits an EP (of the second order) when gain and loss are set such that \(g=\gamma_{+}\), characterized by the simultaneous coalescence of the two eigenvalues (still potentially complex) \(\lambda_{\pm}\) and eigenmodes \(|e_{\pm}\rangle\)[44]. In what follows, when probing the system at the sensor frequency \(\omega_{0}\), the _singularity_ of the non-Hermitian generator (3) will play a pivotal role. This corresponds to the condition \(\det\mathbf{H}=0\) equivalent to \(g^{2}=\gamma_{1}\gamma_{2}\). Note that in general the singularity of \(\mathbf{H}\) is not related to the EP condition, but is assured if the system is both balanced and at an EP [20].
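These conditions are simple to verify numerically. The following is a minimal sketch (with arbitrary, illustrative parameter values chosen so that the balanced, EP and singular conditions coincide) that builds the generator of Eq. (3) and tests them:

```python
import numpy as np

def generator(g, gamma1, gamma2):
    """2x2 non-Hermitian generator H of Eq. (3)."""
    return np.array([[-1j * gamma1, g],
                     [g,            1j * gamma2]])

g, gamma1, gamma2 = 1.0, 1.0, 1.0           # balanced, at the EP, and singular
H = generator(g, gamma1, gamma2)
gamma_p, gamma_m = (gamma2 + gamma1) / 2, (gamma2 - gamma1) / 2

print("eigenvalues :", np.linalg.eigvals(H))                     # cf. Eq. (4)
print("PT-symmetric:", np.isclose(gamma_m, 0) and g >= gamma_p)  # real spectrum
print("at an EP    :", np.isclose(g, gamma_p))                   # eigenvalues coalesce
print("singular    :", np.isclose(abs(np.linalg.det(H)), 0))     # g^2 = gamma_1 * gamma_2
```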
In sensing tasks with linear perturbations, the non-Hermitian generator (3) is generally modified as follows:
\[\mathbf{H}_{\mathbf{\theta}}\coloneqq\mathbf{H}-\sum_{i=0}^{m}\theta_{i}\mathbf{n}_{i}=\mathbf{H}_{\bar{\mathbf{\theta}}}-\theta_{0}\mathbf{n}_{0}, \tag{5}\]
where \(\mathbf{\theta}\coloneqq\{\theta_{i}\}_{i}\) denotes a set of (\(m+1\), real) parameters to be sensed, each of which modifies the dynamics (3) according to some (complex) 2\(\times\)2 matrix \(\mathbf{n}_{i}\). In Eq. (5), we single out the special case in which \(\theta_{0}\) denotes the _primary_ parameter to be sensed around zero, while the rest of the set, \(\bar{\mathbf{\theta}}\coloneqq\{\theta_{i}\}_{i\neq 0}\), contains _nuisance_ parameters, i.e. ones that are of no interest but nonetheless unknown. This allows us to capture all the following \(\theta_{0}\)-estimation scenarios. By setting \(\mathbf{n}_{0}=\sigma_{z}\) in Eq. (5), we let \(\theta_{0}=(\omega_{1}-\omega_{2})/2\) describe perturbations of the detuning between the cavity frequencies--as originally considered in the EP-based sensing schemes [20]. By choosing \(\mathbf{n}_{0}=\sigma_{x}\) instead, we let \(\theta_{0}\) perturb the coupling strength \(g\)--as investigated in Ref. [28] dealing with non-reciprocal dynamics. Finally, we consider \(\mathbf{n}_{0}=\mathbf{I}\) (or \(\mathbf{n}_{0}=(1,0;0,0)\)), in which case \(\theta_{0}\) describes perturbations of the common frequency \(\omega_{0}\) (or of \(\omega_{1}\) for the first cavity only) [30].
_Linear response in the Fourier domain._--We consider the (linear) sensor to be interacting with Gaussian light [54], so it is sufficient to describe its dynamics using the Gaussian formalism [55, 56], within which evolution of any bosonic modes \(\hat{b}_{i}\) is fully characterised after defining the vector \(\hat{\mathsf{S}}=\{\hat{q}_{1},\hat{q}_{2},\ldots,\hat{p}_{1},\hat{p}_{2},\ldots\}^{\mathrm{T}}\) of their quadratures, \(\hat{q}_{i}=\hat{b}_{i}+\hat{b}_{i}^{\dagger}\) and \(\hat{p}_{i}=-\mathrm{i}(\hat{b}_{i}-\hat{b}_{i}^{\dagger})\), and tracking its mean \(\mathsf{S}\coloneqq\langle\hat{\mathsf{S}}\rangle\) and covariance matrix \(\mathsf{V}\) with entries \(\mathsf{V}_{jk}\coloneqq\frac{1}{2}\langle\{\hat{\mathsf{S}}_{j},\hat{\mathsf{S}}_{k}\}\rangle-\langle\hat{\mathsf{S}}_{j}\rangle\langle\hat{\mathsf{S}}_{k}\rangle\). Moreover, as we are interested in probing the sensor at a particular frequency \(\omega\), we focus on the evolution in the Fourier space, in which according to Eq. (2) the dynamics of measured outputs, i.e. \(\hat{\mathsf{S}}_{\mathrm{out}}^{A}\) containing quadratures of \(\hat{A}_{\ell,\mathrm{out}}[\omega]\coloneqq\int\!\mathrm{d}t\,\mathrm{e}^{\mathrm{i}\omega t}\hat{A}_{\ell,\mathrm{out}}(t)\), is given by \(\mathsf{S}_{\mathrm{out}}^{A}=(1-\kappa\mathsf{G})\,\mathsf{S}_{\mathrm{in}}^{A}\), and \(\mathsf{V}_{\mathrm{out}}^{A}=(1-\kappa\mathsf{G})\,\mathsf{V}_{\mathrm{in}}^{A}\left(1-\kappa\mathsf{G}\right)^{\mathrm{T}}+\kappa\,\mathsf{G}\,\Xi\tilde{\mathsf{V}}_{\mathrm{in}}^{B}\Xi^{\mathrm{T}}\mathsf{G}^{\mathrm{T}}\).
\(\mathsf{V}^{A}_{\mathrm{out}}\) depends also on the overall covariance matrix of input scattering modes, i.e. \(\tilde{\mathsf{V}}^{B}_{\mathrm{in}}\) describing correlations between 8 quadratures of \(\hat{B}_{\ell,\mathrm{in}}[\pm\omega]\) with \(\ell=1,2\), see App. B for derivation. In the above, \(\Xi\) is the transfer matrix associated with the coupling of sensor cavities to the scattering channels, \(\mathsf{I}\) denotes a 4\(\times\)4 identity matrix, while the central object is the _linear-response (or Green's) function_--here, a 4\(\times\)4 matrix:
\[\mathsf{G}[\omega]=\mathsf{J}\left((\omega-\omega_{0})\mathsf{I}-\mathsf{H} \right)^{-1}, \tag{6}\]
whose divergent behaviour, as shown, is responsible for the unbounded precision when sensing perturbations. By \(\mathsf{J}=(0,-\mathbf{I};\mathbf{I},0)\) we denote in Eq. (6) the symplectic form consistent with our notation (see App. B), within which [57]:
\[\mathsf{H}\coloneqq\begin{pmatrix}\mathrm{Re}[\mathbf{H}]&-\mathrm{Im}[ \mathbf{H}]\\ \mathrm{Im}[\mathbf{H}]&\mathrm{Re}[\mathbf{H}]\end{pmatrix}=\begin{pmatrix}0 &g&\gamma_{1}&0\\ g&0&0&-\gamma_{2}\\ -\gamma_{1}&0&0&g\\ 0&\gamma_{2}&g&0\end{pmatrix}, \tag{7}\]
is now the phase-space representation of the dynamical generator (3). Crucially, Eq. (6) diverges if \(\det((\omega-\omega_{0})\mathsf{I}-\mathsf{H})=0\), which--as shown in App. C--can always be assured by tailoring the probing frequency \(\omega\). However, in the special case of probing _in resonance_ with the common internal frequency, i.e. \(\omega=\omega_{0}\) that we assume from now on, this condition becomes tantamount to \(\det\mathsf{H}=|\mathrm{det}\,\mathbf{H}|^{2}=0\), i.e. the dynamical generator (3) being indeed singular (see also App. C).
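As a quick consistency check of this statement, the sketch below (assuming the notation above and generic, non-singular rates) constructs the phase-space representation of Eq. (7), verifies \(\det\mathsf{H}=|\det\mathbf{H}|^{2}\), and evaluates the resonant response function of Eq. (6):

```python
import numpy as np

def phase_space(H):
    """4x4 real (phase-space) representation of a 2x2 complex generator, as in Eq. (7)."""
    return np.block([[H.real, -H.imag],
                     [H.imag,  H.real]])

g, gamma1, gamma2 = 1.0, 0.8, 1.3                        # generic, non-singular values
H = np.array([[-1j * gamma1, g], [g, 1j * gamma2]])
Hp = phase_space(H)

# det(H_phase) = |det H|^2, so the resonant divergence indeed requires det H = 0
print(np.isclose(np.linalg.det(Hp), abs(np.linalg.det(H)) ** 2))

# resonant (omega = omega_0) response function of Eq. (6); finite here since det H != 0
J = np.block([[np.zeros((2, 2)), -np.eye(2)], [np.eye(2), np.zeros((2, 2))]])
G = J @ np.linalg.inv(-Hp)
```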
Considering now \(\mathbf{\theta}\)-parametrised perturbations specified in Eq. (5), the response function (6) becomes
\[\mathsf{G}_{\mathbf{\theta}}[\omega=\omega_{0}]=\mathsf{J}\left(\sum_{i=0}^{m} \theta_{i}\mathsf{n}_{i}-\mathsf{H}\right)^{-1}=\mathsf{J}\left(\theta_{0} \mathsf{n}_{0}-\mathsf{H}_{\bar{\mathbf{\theta}}}\right)^{-1}, \tag{8}\]
where by \(\mathsf{n}_{i}\) we denote the phase-space representation of each \(\mathbf{n}_{i}\)[57], and analogously to Eq. (5) highlight the situation when \(\theta_{0}\) is the primary parameter to be sensed with \(\mathsf{H}_{\bar{\mathbf{\theta}}}\coloneqq\mathsf{H}-\sum_{i\neq 0}\theta_{i} \mathsf{n}_{i}\). When considering singular dynamics (2) such that \(\det\mathbf{H}=0\), the response function (8) diverges when all \(\theta_{i}=0\), and the character of divergence depends on the explicit form of the matrices \(\{\mathsf{n}_{i}\}_{i}\). For instance, when estimating only \(\theta_{0}\approx 0\) the singular behaviour may be maintained despite \(\bar{\mathbf{\theta}}\neq 0\)--as long as the condition \(\det\mathsf{H}_{\bar{\mathbf{\theta}}}=0\Leftrightarrow\det\mathbf{H}_{\bar{\mathbf{ \theta}}}=0\) is assured also for \(\bar{\mathbf{\theta}}\neq 0\).
In order to study this issue in detail, aiming to treat \(\bar{\mathbf{\theta}}\) as nuisance parameters, we focus on the _two-parameter setting_, \(\mathbf{\theta}=\{\theta_{0},\theta_{1}\}\), in which the primary parameter is fixed to represent the \(\omega_{0}\)-frequency perturbations, i.e. \(\mathbf{n}_{0}=\mathbf{I}\)[30]. In contrast, we choose the secondary parameter \(\theta_{1}\) such that its variations either invalidate or maintain the singularity condition. In the space of parameters \(g\), \(\gamma_{1/2}\) depicted in Fig. 2, this corresponds to perturbations either away from or within the surface defined by the constraint \(\det\mathbf{H}=0\) that we mark in green. However, as we demand the PT-symmetry to be maintained in order to ensure the linearity [42], we also require the \(\theta_{1}\)-perturbations not to leave the triangular vertical plane in Fig. 2. Thus, we choose as the _singularity non-preserving_ (NS) case \(\mathbf{n}_{1}^{\mathrm{NS}}\coloneqq\sigma_{x}\), which effectively yields perturbations of the coupling \(g\)[28]--marked in Fig. 2 with a black arrow. On the contrary, in case of _singularity preserving_ (S) perturbations of \(\theta_{1}\), we are constrained to choose \(\mathbf{n}_{1}^{\mathrm{S}}=\sigma_{x}-\mathrm{i}\sigma_{z}\) that maintains \(g=\gamma_{1}=\gamma_{2}\)--in the parameter space of Fig. 2 the diagonal (blue dashed) line is followed, as indicated by red arrows. Interestingly, the latter maintains also the EPIC condition [58]. These two scenarios correspond to generator perturbations (5) of the form \(\mathbf{H}_{\mathbf{\theta}}=\mathbf{H}_{\theta_{1}}^{\mathrm{NS/S}}-\theta_{0} \mathbf{I}\), where
\[\mathbf{H}_{\theta_{1}}^{\mathrm{NS}}\coloneqq\bar{\mathbf{H}}-\theta_{1} \sigma_{x}\quad\text{and}\quad\mathbf{H}_{\theta_{1}}^{\mathrm{S}}\coloneqq \bar{\mathbf{H}}-\theta_{1}(\sigma_{x}-\mathrm{i}\sigma_{z}), \tag{9}\]
and \(\bar{\mathbf{H}}\) is \(\mathbf{H}\) of Eq. (3) with \(g=\gamma_{1}=\gamma_{2}=1\)[59].
_Multiparameter estimation of Gaussian states._--For a quantum system prepared in the state \(\rho_{\mathbf{\theta}}\), given particular parameter values \(\mathbf{\theta}\coloneqq\{\theta_{j}\}_{j}\) to be estimated and a measurement scheme yielding a probability distribution \(p(\xi|\mathbf{\theta})\) of a single-shot outcome \(\xi\), the _classical_ and _quantum Fisher information matrices_ (CFIM and QFIM) are, respectively, defined as [50]:
\[\mathbf{F}_{jk} \coloneqq\mathbb{E}_{p(\xi|\mathbf{\theta})}[\partial_{j}\mathrm{ln}\,p(\xi|\mathbf{\theta})\,\partial_{k}\mathrm{ln}\,p(\xi|\mathbf{\theta})], \tag{10}\] \[\mathbf{\mathcal{F}}_{jk} \coloneqq\mathrm{Tr}[\rho_{\mathbf{\theta}}\tfrac{1}{2}\{L_{j},L_{k}\}], \tag{11}\]
where, for short, we write the derivative w.r.t. any estimated parameter \(\theta_{j}\) as \(\partial_{j}\equiv\partial/\partial_{\theta_{j}}\), while by \(\mathbb{E}_{p(\xi|\mathbf{\theta})}[\mathbf{\bullet}]\coloneqq\int\!\mathrm{d}\xi\,p( \xi|\mathbf{\theta})\bullet\) we denote above the expectation w.r.t. the distribution \(p(\xi|\mathbf{\theta})\) (dropped for brevity below). In the quantum case (11), \(\mathbb{E}_{p(\xi|\mathbf{\theta})}[\mathbf{\bullet}]\) naturally generalises to \(\mathrm{Tr}[\rho_{\mathbf{\theta}}\bullet]\), while \(\partial_{j}\mathrm{ln}\,p(\xi|\mathbf{\theta})\) becomes then the _symmetric_ logarithmic derivative \(L_{j}\) defined as the solution to \(\partial_{j}\rho_{\mathbf{\theta}}=\tfrac{1}{2}\{\rho_{\mathbf{\theta}},L_{j}\}\)[60].
Now, for any unbiased estimator \(\tilde{\mathbf{\theta}}(\mathbf{\xi})\) constructed based on the measurement data \(\mathbf{\xi}=\{\xi_{r}\}_{r=1}^{\nu}\) gathered over \(\nu\) independent shots, its _(squared-)error matrix_ \(\mathbf{\Delta}^{2}\tilde{\mathbf{\theta}}\coloneqq\)
\(\mathbb{E}\Big{[}(\tilde{\mathbf{\theta}}-\mathbf{\theta})(\tilde{\mathbf{\theta}}-\mathbf{\theta})^{ \mathrm{T}}\Big{]}\) satisfies the so-called _quantum Cramer-Rao bound_ (QCRB) [61; 62]:
\[\nu\,\mathbf{\Delta}^{2}\tilde{\mathbf{\theta}}\geq\mathbf{F}^{-1}\geq\mathbf{\mathcal{F}}^{-1}, \tag{12}\]
where the first matrix inequality is guaranteed to be saturable by some \(\tilde{\mathbf{\theta}}\) in the \(\nu\to\infty\) limit, i.e. for any \(\mathbf{W}\geq 0\) for which \(\mathrm{Tr}[\mathbf{W}\mathbf{\Delta}^{2}\tilde{\mathbf{\theta}}]\) is then minimised. In contrast, although the second inequality applies to any quantum measurement, as the optimal measurements for distinct parameters \(\theta_{j}\) may not commute (formally \(\mathrm{Tr}[\rho_{\mathbf{\theta}}[L_{j},L_{k}]]\neq 0\)[63]), it may _not_ be generally saturable by any estimator \(\tilde{\mathbf{\theta}}\) given some \(\mathbf{W}\geq 0\). However, it can differ at most by a factor of 2 from the minimal \(\mathrm{Tr}[\mathbf{W}\mathbf{F}^{-1}]\) attained by the optimal measurements [50].
In the special case when only a single parameter, say \(\theta_{i}\), is of interest, while the others are treated as nuisance ones (i.e. \(\mathbf{W}_{jk}=\delta_{ij}\delta_{ik}\)), we denote the lower bounds on the error in Eq. (12) as \(\Delta_{\mathrm{C}}\theta_{i}\coloneqq\sqrt{[\mathbf{F}^{-1}]_{ii}}\) and \(\Delta_{\mathrm{Q}}\theta_{i}\coloneqq\sqrt{[\mathbf{\mathcal{F}}^{-1}]_{ii}}\), respectively, so that any unbiased estimator of \(\theta_{i}\) satisfies then \(\nu\Delta^{2}\tilde{\theta}_{i}\geq\Delta_{\mathrm{C}}^{2}\theta_{i}\geq \Delta_{\mathrm{Q}}^{2}\theta_{i}\)[64]. This contrasts the _ideal_ single-parameter scenario, in which all parameters apart from \(\theta_{i}\) are known, in which case Eq. (12) simplifies to \(\nu\Delta^{2}\tilde{\theta}_{i}\geq\delta_{\mathrm{C}}^{2}\theta_{i}\geq \delta_{\mathrm{Q}}^{2}\theta_{i}\) with \(\delta_{\mathrm{C}}\theta_{i}\coloneqq 1/\sqrt{[\mathbf{F}]_{ii}}\) and \(\delta_{\mathrm{Q}}\theta_{i}\coloneqq 1/\sqrt{[\mathbf{\mathcal{F}}]_{ii}}\).
Considering any Gaussian measurement of the probe outputs \(\hat{A}_{\ell,\mathrm{out}}\), its outcome \(\mathsf{x}\) is normally distributed \(\mathsf{x}\sim\exp\{-\frac{1}{2}(\mathsf{x}-\bar{\mathsf{x}})\mathsf{C}^{-1}(\mathsf{x}-\bar{\mathsf{x}})^{\mathrm{T}}\}\)[56; 65], with both the mean vector \(\bar{\mathsf{x}}(\mathbf{\theta})\) and the covariance matrix \(\mathsf{C}(\mathbf{\theta})\) generally depending on the parameter set \(\mathbf{\theta}\). The CFIM (10) takes then a special form [66]:
\[\mathbf{F}_{jk}=\frac{1}{2}\,\mathrm{Tr}\big{[}\mathsf{C}^{-1}(\partial_{j} \mathsf{C})\mathsf{C}^{-1}(\partial_{k}\mathsf{C})\big{]}+(\partial_{j}\bar{ \mathsf{x}})^{\mathrm{T}}\mathsf{C}^{-1}(\partial_{k}\bar{\mathsf{x}}), \tag{13}\]
so that in case of heterodyne measurement being performed one should replace \(\bar{\mathsf{x}}=\mathsf{S}_{\mathrm{out}}^{A}\) and \(\mathsf{C}=\mathsf{V}_{\mathrm{out}}^{A}+\mathsf{I}\) above [56]. More generally, allowing for arbitrary quantum measurements performed on a Gaussian state of mean \(\mathsf{S}(\mathbf{\theta})\) and covariance \(\mathsf{V}(\mathbf{\theta})\), the QFIM (11) reads \(\mathbf{\mathcal{F}}_{jk}=\frac{1}{2}\,\mathrm{Tr}[\mathsf{L}_{j}\partial_{k} \mathsf{V}]+(\partial_{j}\mathsf{S})^{\mathrm{T}}\mathsf{V}^{-1}(\partial_{k} \mathsf{S})\)[45; 67; 68; 69; 70], with the matrix \(\mathsf{L}_{j}\) possessing a nontrivial form detailed in App. D. However, by generalising the results of Ref. [71] to the multiparameter case, we show in App. D that the QFIM can always be approximated for noisy, e.g. highly thermalised, Gaussian states as
\[\mathbf{\mathcal{F}}_{jk}\approx\frac{1}{2}\,\mathrm{Tr}\big{[}\mathsf{V}^{-1}( \partial_{j}\mathsf{V})\mathsf{V}^{-1}(\partial_{k}\mathsf{V})\big{]}+( \partial_{j}\mathsf{S})^{\mathrm{T}}\mathsf{V}^{-1}(\partial_{k}\mathsf{S}), \tag{14}\]
as long as the spectrum of \(\mathsf{V}\) satisfies \(\lambda_{\mathrm{min}}(\mathsf{V})\gg 1\)[72].
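Numerically, the approximation (14) only requires the output mean, the covariance and their parameter derivatives; a generic helper of the following form (a sketch, with the derivatives assumed to be supplied analytically from Eq. (8) or by finite differences) suffices:

```python
import numpy as np

def noisy_gaussian_qfim(S, V, dS, dV):
    """Approximate QFIM of Eq. (14) for a noisy Gaussian state.

    S, V  : mean vector and covariance matrix of the measured modes
    dS, dV: lists of derivatives of S and V w.r.t. each parameter theta_j
    Valid provided the smallest eigenvalue of V is much larger than 1.
    """
    Vinv = np.linalg.inv(V)
    m = len(dS)
    F = np.zeros((m, m))
    for j in range(m):
        for k in range(m):
            F[j, k] = 0.5 * np.trace(Vinv @ dV[j] @ Vinv @ dV[k]) + dS[j] @ Vinv @ dS[k]
    return F
```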
While ensuring this to always be the case here--with mean and covariance in Eq. (14) being the ones of measured modes, \(\mathsf{S}_{\mathrm{out}}^{A}\) and \(\mathsf{V}_{\mathrm{out}}^{A}\) (we drop superscript \(A\) for brevity below)--we have that \(\partial_{j}\mathsf{S}_{\mathrm{out}}=-\kappa\,\partial_{j}\mathsf{G}_{\mathbf{\theta}}\mathsf{S}_{\mathrm{in}}\) given \(\mathsf{S}_{\mathrm{out}}=(1-\kappa\mathsf{G}_{\mathbf{\theta}})\,\mathsf{S}_{\mathrm{in}}\), where \(\partial_{j}\mathsf{G}_{\mathbf{\theta}}=\mathsf{G}_{\mathbf{\theta}}\mathsf{n}_{j}\mathsf{J}\mathsf{G}_{\mathbf{\theta}}\) follows from Eq. (8). As we are interested in small perturbations of the primary parameter \(\theta_{0}\) around zero, we keep only terms quadratic in \(\mathsf{G}_{\mathbf{\theta}}\), so that \(\mathsf{V}_{\mathrm{out}}\approx\mathsf{G}_{\mathbf{\theta}}\mathsf{V}_{\mathrm{in}}\mathsf{G}_{\mathbf{\theta}}^{\mathsf{T}}\), yielding \(\partial_{j}\mathsf{V}_{\mathrm{out}}\approx\partial_{j}\mathsf{G}_{\mathbf{\theta}}\mathsf{V}_{\mathrm{in}}\mathsf{G}_{\mathbf{\theta}}^{\mathsf{T}}+\mathsf{G}_{\mathbf{\theta}}\mathsf{V}_{\mathrm{in}}\partial_{j}\mathsf{G}_{\mathbf{\theta}}^{\mathsf{T}}\), where \(\mathsf{V}_{\mathrm{in}}\) is some input covariance matrix (here, \(\mathsf{V}_{\mathrm{in}}=\mathsf{V}_{\mathrm{in}}^{A}+\Xi\tilde{\mathsf{V}}_{\mathrm{in}}^{B}\Xi^{\mathrm{T}}\)) that is invertible for any Gaussian state of finite energy [55]. As a result, the "noisy" QFIM (14) reads
\[\mathbf{\mathcal{F}}_{jk} \approx\mathrm{Tr}[\mathsf{n}_{j}\mathsf{J}\mathsf{G}_{\mathbf{\theta}}\mathsf{n}_{k}\mathsf{J}\mathsf{G}_{\mathbf{\theta}}]+\mathrm{Tr}[\mathsf{V}_{\mathrm{in}}^{-1}\mathsf{n}_{j}\mathsf{J}\mathsf{G}_{\mathbf{\theta}}\mathsf{V}_{\mathrm{in}}\mathsf{G}_{\mathbf{\theta}}^{\mathsf{T}}\mathsf{J}^{\mathrm{T}}\mathsf{n}_{k}^{\mathrm{T}}]\] \[\quad+\mathsf{S}_{\mathrm{in}}^{\mathrm{T}}\mathsf{G}_{\mathbf{\theta}}^{\mathrm{T}}\mathsf{J}^{\mathrm{T}}\mathsf{n}_{j}^{\mathrm{T}}\mathsf{V}_{\mathrm{in}}^{-1}\mathsf{n}_{k}\mathsf{J}\mathsf{G}_{\mathbf{\theta}}\mathsf{S}_{\mathrm{in}}. \tag{15}\]
_Single-parameter sensitivities.--_ When sensing a single parameter \(\theta_{0}\) with others perfectly known, we set \(\bar{\mathbf{\theta}}=0\) in Eq. (8), so that \(\mathsf{H}_{\bar{\mathbf{\theta}}=0}=\mathsf{H}\) and only the entry \(j=k=0\) in Eqs. (13-15) is of relevance. Now, whenever the generator \(\mathsf{H}\) is _non-singular_, the response function (8) admits a Neumann series \(\mathsf{G}_{\theta_{0}}=-\mathrm{J}\mathsf{H}^{-1}(\mathsf{I}+\sum_{k=1}^{ \infty}\theta_{0}^{k}(\mathsf{n}_{0}\mathsf{H}^{-1})^{k})\) such that \(\lim_{\theta_{0}\to 0}\mathsf{G}_{\theta_{0}}=-\mathrm{J}\mathsf{H}^{-1}\)[73], so that Eq. (15) yields
\[\mathbf{\mathcal{F}}_{00}^{\mathrm{NS}} \approx\mathrm{Tr}\big{[}\mathsf{n}_{0}\mathsf{H}^{-1}\mathsf{n}_{0} \mathsf{H}^{-1}\big{]}+\mathrm{Tr}[\mathsf{V}_{\mathrm{in}}^{-1}\mathsf{n}_{0} \mathsf{H}^{-1}\mathsf{V}_{\mathrm{in}}(\mathsf{H}^{\mathrm{T}})^{-1}\mathsf{n}_{0 }^{\mathrm{T}}]\] \[\quad+\mathsf{S}_{\mathrm{in}}^{\mathrm{T}}(\mathsf{H}^{\mathrm{T}})^{-1} \mathsf{n}_{0}^{\mathrm{T}}\mathsf{V}_{\mathrm{in}}^{-1}\mathsf{n}_{0} \mathsf{H}^{-1}\mathsf{S}_{\mathrm{in}}+\mathcal{O}(\theta_{0}), \tag{16}\]
which cannot diverge in the \(\theta_{0}\to 0\) limit. Hence, it sets a non-zero lower bound on the estimation error \(\delta_{\mathrm{Q}}\theta_{0}\).
When \(\mathsf{H}_{\bar{\mathbf{\theta}}}\) is non-singular, \(\mathsf{G}_{\mathbf{\theta}}\) must again allow for a Neumann series expansion with \(\lim_{\theta_{0}\to 0}\mathsf{G}_{\mathbf{\theta}}=-\mathsf{J}\mathsf{H}_{\bar{\mathbf{\theta}}}^{-1}\). Hence, following Eq. (16) with \(\mathsf{H}\) replaced by \(\mathsf{H}_{\bar{\mathbf{\theta}}}\), \(\mathbf{\mathcal{F}}_{00}\) is bounded as before, with the estimation error \(\Delta_{\mathsf{Q}}\theta_{0}=\sqrt{[\mathbf{\mathcal{F}}^{-1}]_{00}}\geq 1/\sqrt{\mathbf{\mathcal{F}}_{00}}\) again forbidden to vanish as \(\theta_{0}\to 0\). When \(\mathsf{H}_{\bar{\mathbf{\theta}}}\) is singular instead, similarly to the single-parameter case (17), we must resort to the SM expansion of \(\mathsf{G}_{\mathbf{\theta}}\), with the expansion coefficients \(\mathsf{X}_{k}\) now depending also on the nuisance parameters \(\bar{\mathbf{\theta}}\). However, the pole-order does _not_ translate directly onto the scaling of sensitivity with \(\theta_{0}\) any more. This is because the estimation error \(\Delta_{\mathsf{Q}}\theta_{0}=\sqrt{[\mathbf{\mathcal{F}}^{-1}]_{00}}\) now involves the inverse of the QFIM and, hence, is in principle affected by correlations between different unknown parameters. We show this explicitly by focussing on the two-parameter estimation scenario with \(\mathsf{G}_{\theta_{0},\theta_{1}}\), with the primary parameter \(\theta_{0}\) generated by \(\mathsf{n}_{0}=\mathsf{I}\), while \(\theta_{1}\) acts as a nuisance parameter parametrising either the non-singular or the singular dynamical generator specified in Eq. (9).
In the case of \(\mathbf{H}_{\theta_{1}}^{\mathrm{NS}}\) in Eq. (9), which for any \(\theta_{1}\neq 0\) leads to \(\mathsf{H}_{\theta_{1}}\) being _non-singular_, the error in estimating \(\theta_{0}\) may not vanish with \(\theta_{0}\to 0\). Considering, e.g., a thermal state with zero displacement as the probe input, we show analytically in App. G.1 that \([\mathbf{\mathcal{F}}^{-1}]_{00}=1/\mathbf{\mathcal{F}}_{00}\propto\theta_{1}^{2}\) at \(\theta_{0}=0\), which is bounded away from zero for any \(\theta_{1}\neq 0\). For completeness, however, we also demonstrate that at the special singular point \(\theta_{1}=0\)--by resorting correctly to the SM expansion--\(\Delta_{\mathsf{Q}}\theta_{0}=\delta_{\mathsf{Q}}\theta_{0}+O(\theta_{0}^{4})\), so that the impact of the nuisance parameter can then be ignored as \(\theta_{0}\to 0\), with single-parameter results being applicable (\(\Delta_{\mathsf{Q}}\theta_{0}\propto\theta_{0}^{2}\), black lines in Fig. 3).
Turning now to \(\mathbf{H}_{\theta_{1}}^{\mathrm{S}}\) of Eq. (9), which importantly leads to \(\mathsf{H}_{\theta_{1}}\) being _singular_ for any value of \(\theta_{1}\), we show that the SM expansion \(\mathsf{G}_{\theta_{0},\theta_{1}}=\mathsf{J}\theta_{0}^{-2}\sum_{k=0}^{1} \theta_{0}^{k}\mathsf{X}_{k}(\theta_{1})\), see App. G.2, exhibits a pole of order two. Substituting this expansion into Eq. (15), we evaluate the corresponding entries of the QFIM, i.e.: \(\mathbf{\mathcal{F}}_{00}\approx\alpha\theta_{0}^{-4}+2\beta\theta_{0}^{-3}+ \gamma\theta_{0}^{-2}\), \(\mathbf{\mathcal{F}}_{11}\approx\alpha\theta_{0}^{-2}\), and \(\mathbf{\mathcal{F}}_{01}\approx\alpha\theta_{0}^{-3}+\beta\theta_{0}^{-2}\), where now \(\alpha\coloneqq\mathrm{Tr}\big{[}\mathsf{V}_{\mathrm{in}}^{-1}\mathsf{X}_{ \mathsf{Q}}\mathsf{V}_{\mathrm{in}}\mathsf{X}_{\mathsf{Q}}^{\mathsf{T}}\big{]} +2\langle\mathsf{X}_{\mathsf{I}}^{\mathsf{T}}\mathsf{V}_{\mathrm{in}}^{-1} \mathsf{X}_{\mathsf{Q}}\rangle\), \(\beta\coloneqq 2\langle\mathsf{V}_{\mathrm{in}}^{-1}\mathsf{X}_{\mathsf{Q}}\rangle\) and \(\gamma\coloneqq 8+2\langle\mathsf{V}_{\mathrm{in}}^{-1}\rangle\) with \(\langle\bullet\rangle=\mathsf{S}_{\mathrm{in}}^{\mathsf{T}}\bullet\mathsf{S}_{ \mathrm{in}}\). Crucially, this implies that the error in estimating the parameter \(\theta_{0}\) reads
\[\Delta_{\mathsf{Q}}\theta_{0}=\left(\mathbf{\mathcal{F}}_{00}-\frac{\mathbf{\mathcal{F} }_{01}\mathbf{\mathcal{F}}_{10}}{\mathbf{\mathcal{F}}_{11}}\right)^{-\frac{1}{2}} \approx\left(\gamma-\frac{\beta^{2}}{\alpha}\right)^{-\frac{1}{2}}\,\theta_{0}, \tag{18}\]
which now scales linearly with \(\theta_{0}\). In particular, the presence of the nuisance parameter \(\theta_{1}\) precludes the scaling from following the quadratic behaviour dictated by the pole. We show this explicitly in Fig. 3, where we plot the exact estimation error \(\Delta_{\mathsf{Q}}\theta_{0}\) in red (blue) for \(\mathbf{H}_{\theta_{1}}^{\mathrm{S}}\) with \(\theta_{1}=0\) (\(\theta_{1}=0.25\)). Not only does it follow the linear scaling predicted by Eq. (18), but so does \(\Delta_{\mathsf{C}}\theta_{0}\) attained by heterodyne detection (dashed lines).
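The linear scaling in Eq. (18) follows directly from the structure of the leading-order QFIM entries quoted above; a short symbolic check (a sketch in sympy, treating \(\alpha\), \(\beta\), \(\gamma\) as free positive constants) confirms that the Schur complement entering \([\mathbf{\mathcal{F}}^{-1}]_{00}\) retains only the \(\theta_{0}^{-2}\) pole:

```python
import sympy as sp

theta0, alpha, beta, gamma = sp.symbols('theta0 alpha beta gamma', positive=True)

F00 = alpha / theta0**4 + 2 * beta / theta0**3 + gamma / theta0**2
F11 = alpha / theta0**2
F01 = alpha / theta0**3 + beta / theta0**2

# 1 / [F^{-1}]_{00} is the Schur complement of F11 in the 2x2 QFIM
schur = sp.simplify(F00 - F01**2 / F11)
print(schur)               # equals (gamma - beta**2/alpha) / theta0**2, up to rearrangement
print(sp.sqrt(1 / schur))  # hence Delta_Q theta_0 ~ theta_0, as in Eq. (18)
```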
_Conclusions.--_We establish the tools necessary to assess the performance of quantum sensors operated in the Gaussian regime to sense linear perturbations away from a singular point. This allows us to verify and investigate the character of the unbounded sensitivity then exhibited, while clarifying that other dynamical properties of the system, e.g. operation at an exceptional point, fulfilment of lasing conditions or non-reciprocity, do not play a primary role. However, we demonstrate that nuisance parameters may then strongly affect the performance, in particular, the rate at which the sensitivity diverges, which proves that even when constructing local estimators these should be taken into account. Such a phenomenon resembles the setting of quantum superresolution problems [75], in which the lack of a spatial reference prevents one from resolving infinitesimal separations between objects [76; 77; 78; 79; 80]. It is natural to ask how our results will change when following the Bayesian approach to estimation [81; 82; 83], accounting for some prior knowledge about the sensed and/or nuisance parameters, which we leave open for the future.
_Acknowledgments.--_We thank Marcin Jarzyna, Mohammad Mehboudi, Giacomo Sorelli and Konrad Banaszek for helpful comments. This work has been supported by the Quantum Optical Technologies project that is carried out within the International Research Agendas programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund. RWC also acknowledges support from the Polish National Science Centre (NCN) under the Maestro Grant No. DEC-2019/34/A/ST2/00081.
|
2304.07382 | Zero-Shot Multi-Label Topic Inference with Sentence Encoders | Sentence encoders have indeed been shown to achieve superior performances for
many downstream text-mining tasks and, thus, claimed to be fairly general.
Inspired by this, we performed a detailed study on how to leverage these
sentence encoders for the "zero-shot topic inference" task, where the topics
are defined/provided by the users in real-time. Extensive experiments on seven
different datasets demonstrate that Sentence-BERT demonstrates superior
generality compared to other encoders, while Universal Sentence Encoder can be
preferred when efficiency is a top priority. | Souvika Sarkar, Dongji Feng, Shubhra Kanti Karmaker Santu | 2023-04-14T20:27:09Z | http://arxiv.org/abs/2304.07382v1 | # Zero-Shot Multi-Label Topic Inference with Sentence Encoders
###### Abstract
Sentence encoders have indeed been shown to achieve superior performances for many downstream text-mining tasks and, thus, claimed to be fairly general. Inspired by this, we performed a detailed study on how to leverage these sentence encoders for the "zero-shot topic inference" task, where the topics are defined/provided by the users in _real-time_. Extensive experiments on seven different datasets demonstrate that _Sentence-BERT_ demonstrates superior generality compared to other encoders, while _Universal Sentence Encoder_ can be preferred when efficiency is a top priority.
## 1 Introduction
As one of the most fundamental problems in text mining, topic modeling and inference have been widely studied in the literature [1, 2, 3]. In this paper, we focus on _Zero-shot_ approaches [22, 23, 24] for inferring topics from documents, where the documents and topics were never seen previously by the model. Furthermore, for developing _Zero-shot_ methods, we exclusively focus on leveraging recent, powerful sentence encoders.
The problem of zero-shot topic inference can be described using an intuitive example, where an end user (possibly a domain expert) is actively involved in the inference process. Consider that the domain expert is analyzing a large volume of health articles and wants to automatically label those articles with health-related topics like "Autoimmune Disorders", "Heart health", "Arthritis", etc. For this real-life use case, the user will provide the collection of documents as well as a set of topics to be used for categorizing the documents. Additionally, the user may also provide a list of relevant keywords/clues associated with each topic, which can be used as expert guidance for the inference process. The _zero-shot topic inference_ algorithm then infers topics for each document.
Naturally, zero-shot topic inference is a difficult task, and only limited previous works tackled this problem [25, 26, 27]. However, recent developments in transfer learning research have demonstrated that pre-trained sentence embeddings like [19, 28, 18] can achieve promising results in many downstream zero-shot NLP tasks. Inspired by these, we focus on exploring zero-shot methods by using various sentence encoders for topic inference.
Thus, this paper aims to examine the transfer learning capabilities of popular sentence encoders, namely InferSent [18], Language-Agnostic SEntence Representations (LASER) [17], Sentence-BERT (SBERT) [16], and Universal Sentence Encoder (USE) [19], for topic inference tasks and, subsequently, to establish a benchmark for future study in this crucial direction. To achieve this, we conducted extensive experiments with multiple real-world datasets, including online product reviews, news articles, and health-related blog articles. We also implemented two zero-shot baselines, based on topic modeling and on classical word embeddings, for comparison. Our experimental results show that, among all four encoders, _Sentence-BERT_ is superior in terms of generality, with _Universal Sentence Encoder_ being the second best.
## 2 Related Work
This work is built upon prior research from multiple areas, including Topic Modeling and Categorization [1, 23], Text Annotation [1, 18, 19], Zero-Shot Learning [26, 27],
and Sentence Embeddings (Casanueva et al., 2020; Cer et al., 2018). A brief discussion of each area, and of how this work is positioned with respect to the state of the art, follows.
### Topic Modeling and Inference
**Classical Unsupervised Topic Models**: Classical topic models emerged in the late '90s. Hofmann (1999) proposed one of the early topic models, PLSA, which modeled word co-occurrence information under a probabilistic framework in order to discover the underlying semantic structure of the data. Later, Blei et al. (2003) proposed Latent Dirichlet Allocation (LDA), which extended PLSA by proposing a probabilistic model at the level of documents. To this day, LDA remains one of the most widely used topic models. Multiple works followed LDA, including Wang et al. (2011), Du et al. (2013), He et al. (2016), and Hingmire and Chakraborti (2014).
**Topic Inference by Supervised Classification**: Several studies, including Tuarob et al. (2015) and Bundschus et al. (2009), have shown that it is possible to categorize topics from well-annotated collections of training data through supervised learning. Iwata et al. (2009) proposed a topic model for analyzing and excerpting content-related categories from noisy annotated discrete data such as web pages stored in bookmarks. Poursabzi-Sangdeh and Boyd-Graber (2015) combined document classification and topic models, where topic modeling was used to uncover the underlying semantic structure of documents in the collection. Engels et al. (2010) came up with an automatic categorization scheme, in which they employed a latent topic model to generate topic distributions given a video and associated text. Meng et al. (2018) proposed a supervised method that addresses the lack of training data in neural text classification. In other work, Hassan et al. (2020) cast sexual violence report tracking as a supervised classification problem.
**Zero-Shot Topic Inference**: Various topic modeling-based approaches have been explored for zero-shot topic inference; for instance, Li et al. (2018) worked towards a topic modeling approach for dataless text classification. Similarly, Zha and Li (2019) proposed a novel Seed-guided Multi-label Topic Model based dataless text classification technique. Karmaker Santu et al. (2016) proposed a zero-shot model that can mine implicit topics from online reviews without any supervised training.
Researchers also explored the zero-shot topic inference paradigm using deep learning techniques where knowledge of topics is incorporated in the form of embeddings. Veeranna et al. (2016), adopted pre-trained word embedding for measuring semantic similarity between a label and documents. Further endeavor has been spent on zero-shot learning using semantic embedding by (Hascoet et al., 2019; Zhang et al., 2019; Xie and Virtanen, 2021; Rios and Kavuluru, 2018; Yin et al., 2019; Xia et al., 2018; Zhang et al., 2019; Pushp and Srivastava, 2017; Puri and Catanzaro, 2019; Yogatama et al., 2017; Pushp and Srivastava, 2017; Chen et al., 2021; Gong and Eldardiry, 2021).
### Sentence Embedding
Sentence encoders (Universal Sentence Encoder, InferSent, Language-Agnostic SEntence Representations, Sentence-BERT) have been used heavily in recent research. In this section, we discuss their applications across this research area.
The utility of these powerful sentence encoders has been tested for many popular NLP tasks, including Intent Classification Casanueva et al. (2020), Fake-News Detection Majumder and Das (2020), Duplicate Record Identification Lattar et al. (2020), Humor detection Annamoradnejad (2020), Ad-Hoc monitoring Sarkar et al. (2023) and COVID-19 Trending Topics Detection from tweets Asgari-Chenaghlu et al. (2020). Authors in Cheng (2021) proposed a dual-view approach that enhances sentence embeddings. Another line of work focused on the performance of sentence embedding techniques for transfer-learning tasks (Perone et al., 2018; Enayet and Sukthankar, 2020), whereas a group of researchers reported that state-of-the-art sentence embeddings are unable to capture sufficient information regarding sentence correctness and quality in the English language Rivas and Zimmermann (2019); Sarkar et al. (2022). Chen et al. (2019) utilized USE to create domain-specific embeddings. Another school of researchers (Hassan et al., 2019; Chen et al., 2018; Tang et al., 2018) leveraged sentence embedding for recommending research articles and computing semantic similarity between articles. Adi et al. (2017) proposed a framework that facilitated a better understanding of the encoded sentence representations and extended this work in (Adi et al., 2017), which discussed the effect of word frequency or word distance on the ability to encode sentences.
### Difference from Previous Works
Despite much research in this area, the latest sentence encoders' potential has not been systematically examined for the goal task. Although a few previous works have leveraged sentence encoders for calculating the similarity between a text and a topic, most of the work so far has mainly focused on a single sentence encoder. In contrast, we study multiple state-of-the-art sentence encoders for the multilabel _zero-shot_ topic inference task and experiment with different ways of encoding both topics and documents, thus conducting a more comprehensive comparative study. We also propose a novel way to incorporate the auxiliary information provided by the user to encode topics, which eventually improved the inference results.
## 3 Problem Statement
The traditional _Topic Inference_ task can be defined as follows:
**Definition 1**: _Given a collection of documents \(D\) and a set of **pre-defined** topics \(T\), infer one or more topics in \(T\) for each document \(d\in D\)._
Thanks to the **pre-defined** set of topics \(T\), the traditional _Topic Inference_ task can benefit from fine-tuning based on a carefully designed training set for supervised learning. On the other hand, we follow the idea of _Definition-Wild Zero-Shot-Text Classification_ coined by Yin et al. (2019), which is as follows:
**Definition 2**: _Definition-Wild 0SHOT-TC aims at learning a classifier f(\(\cdot\)) : \(X\to Y\), where classifier f(\(\cdot\)) never sees Y-specific labeled data in its model development._
Extending on top of _Definition-Wild (0SHOT-TC)_, we formalize our task from the user's standpoint in the following fashion:
**Definition 3**: _Given a collection of documents \(D=\{d_{1},d_{2},...,d_{n}\}\), a user \(x\) and a set of **user-defined topics \(T_{x}=\{t_{1},t_{2},...,t_{m}\}\)** provided in **real-time**, annotate each document \(d_{i}\in D\) with zero or more topics from \(T_{x}\)**without any further fine-tuning**._
Note that, it is possible that two different users will provide a different set of topics for the same dataset based on their application needs and end goals. This essentially means creating customized training datasets beforehand is no longer possible because the target topics/labels are provided in real-time. We also assume that each topic \(t\) is expressed as a word/phrase, and the user can provide a list of additional keywords \(K_{t}\) associated with each topic \(t\). In a nutshell, our ad-hoc problem setting assumes that the end user provides all the documents, the target topic, and an optional list of topic-related keywords as inputs in real time. The user here is usually a domain expert with specialized knowledge or skills in a particular area of endeavor (e.g., a cardiologist or a business analyst).
Notably, a topic \(t\) may not occur explicitly by its name/phrase in a document \(d_{i}\). For example, a document about "Mental Health" may not include the exact phrase "Mental Health", but still talk about "Depression", "Anxiety" and "Antidepressant Drugs". Thus, the topic "Mental Health" is implicit in this document, and annotating such implicit topics is just as important as annotating the explicit ones. Although the user-provided optional keywords may help mitigate this issue, it is almost impossible to provide a comprehensive list of keywords that can capture all possible ways "Mental Health" issues can be described. At the same time, a single appearance of a keyword may not always mean the document as a whole is focused on the corresponding topic. To summarize, neither the presence nor the absence of keywords is sufficient to infer the correct topics associated with a document; they are just informative clues from the user end.
## 4 Method for Zero-shot Topic Inference
In this section, we discuss the _zero-shot topic inference_ approach we studied in this paper. The inputs are a corpus of documents, a set of user-defined topics, and an optional keyword list for each topic, whereas the output is each document labeled with zero to more topics. The end-to-end inference process is shown in Fig 1 and briefly described below.
1. The end user provides the inputs, i.e., article text, custom-defined topics, and optional keywords.
2. Input article, topics, and keywords are fed to the sentence encoder model separately, and this is where we used different sentence encoders (InferSent, LASER, Universal Sentence Encoder (USE), and Sentence-BERT).
3. Next, two separate embedding vectors are generated by the sentence encoders: * Article Embedding: The input article is embedded by sentence encoders using three different approaches, the details of which are discussed in section 4.1. * Topic Embedding: The candidate topics are
embedded by combining the sentence embedding vectors of the topics as well as optional keywords/definitions/explicit mentions. In total, four different approaches have been explored; the details are provided in section 4.2.
4. Once we obtain these two embeddings, we assess the semantic similarity between them. The semantic similarity is quantified using the cosine similarity between the two embeddings. Then, based on the cosine similarity and a user-defined threshold, we assign to the article those topics whose similarity exceeds the threshold. To do an exhaustive analysis, we experimented over a range of thresholds between 0 and 1.
5. The output of the _zero-shot topic inference_ framework is the set of inferred topic(s).
### Article Embedding
For article embedding, we adopted three methods, described in Table 1. As mentioned earlier, in our work, we leveraged contemporary sentence encoders for this task, namely:
1. InferSent Conneau et al. (2017)
2. Language-Agnostic SEntence Representations (LASER) Artetxe and Schwenk (2019)
3. Sentence-BERT (SBERT) Reimers and Gurevych (2019)
4. Universal Sentence Encoder (USE) Cer et al. (2018)
We would like to mention that we did not perform fine-tuning or parameter tuning on top of the pre-trained sentence encoders. We have provided brief descriptions of the above encoders in appendix A.4.
### Topic Embedding
For generating topic embedding, we adopted four approaches, including and excluding the auxiliary information provided by the user, to do a comparative study. The details of topic embedding are given in Table 2. As part of topic embedding using auxiliary information (Embedding approach "Explicit-Mentions"), we performed a rudimentary annotation on the dataset to find explicit mentions of the topics, which is discussed in algorithm 1.
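Algorithm 1 (provided in the paper) performs this explicit-mention annotation; as a rough, hypothetical stand-in for readers, simple case-insensitive phrase matching already conveys the idea (the actual algorithm may differ):

```python
def explicit_mentions(articles, topic):
    """Return the articles that explicitly mention the topic name/phrase.

    Naive illustration only (not Algorithm 1): case-insensitive substring matching.
    """
    needle = topic.lower()
    return [a for a in articles if needle in a.lower()]

# Articles explicitly mentioning, e.g., "Mental Health" are later encoded and averaged
# to form the "Explicit-Mentions" topic embedding described in Table 2.
```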
| **Embedding Approach** | **Description** |
| --- | --- |
| Entire Article (EA) | Encode the entire article using sentence encoders at once, including articles that are long paragraphs and consist of more than one sentence. |
| Sentence Embedding Average (SEA) | Split the article into sentences, encode each sentence, and finally average all sentence embeddings to generate the article embedding. |
| Individual Sentence (ISE) | Split the input article into sentences and encode each sentence separately. Then, unlike averaging (SEA), use the individual sentence embeddings for similarity calculation with the topic embedding. |

Table 1: Three different ways of encoding an input article using sentence encoders.
| **Embedding Approach** | **Description** |
| --- | --- |
| **Topic-Name-Only** | Encode only the topic name/phrase. |
| **Topic + Keywords** | Encode both the topic name and the keywords, then average all embeddings to generate the final topic embedding. |
| **Topic + Keyword + Definition** | Extract the topic's and keywords' definitions from WordNet, encode these definitions separately using sentence encoders, and then average all embeddings to generate the final topic embedding. For example, instead of encoding the keyword "campaign", we generated the embedding of its definition, "race between candidates for elective office". |
| **Explicit-Mentions** | First, extract all the articles explicitly mentioning the topic name/phrase using algorithm 1, for all topics. Then, for each topic, generate embeddings of all articles which are explicitly annotated/labeled with that topic, and average them to obtain the final topic embedding. |

Table 2: Four different ways of encoding a topic using sentence encoders.
Figure 1: Steps for _Zero-shot Multi-Label Topic Inference_ process, leveraging sentence encoders.
### Zero-shot Topic Inference
Once we obtain all the embeddings, we measure cosine similarity between article and topic embeddings and, accordingly, infer topics. For instance, considering article \(a\) and topics \(t\in T\) as well as considering "Entire Article" (EA) and "Topic-Name-Only" (TNO) as the embedding approach for article and topic, respectively, the inference of topic works as follows:
\[\hat{t}=\operatorname*{argmax}_{\mathrm{t\in T}}\left\{\mathrm{cosine\_similarity}\left( \mathrm{EA(a)},\mathrm{TNO(t)}\right)\right\}\]
Where topic \(t\) belongs to a set of input topics \(T\). \(EA(a)\) represents the embedding of article \(a\) using the "Entire Article" embedding approach, and \(TNO(t)\) represents the embedding of topic \(t\) using the "Topic-Name-Only" embedding approach. Note that, the combination of other articles and topic embeddings can be expressed in a similar way and hence, omitted due to lack of space.
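A minimal end-to-end sketch of this step, implementing the thresholded multi-label assignment of Step 4 in Section 4 rather than the single-topic argmax above, is given below; the encoder checkpoint and the threshold value are illustrative assumptions, and `topic_vectors` maps each user-defined topic to an embedding built with any of the strategies in Table 2.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")        # illustrative checkpoint

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def infer_topics(article: str, topic_vectors: dict, threshold: float = 0.4) -> list:
    """Assign to the article every topic whose cosine similarity exceeds the threshold."""
    a = model.encode(article)                          # "Entire Article" (EA) embedding
    return [t for t, v in topic_vectors.items() if cosine(a, v) >= threshold]
```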
## 5 Experimental Design
### Datasets
Although the idea of our goal task is inspired from _Definition-Wild Zero-Shot-Text Classification_ coined by Yin et al. (2019), we realized that the dataset introduced in the paper is suitable for Single-Label Multi-class classification, whereas our zero-shot topic inference setup is a Multi-label classification problem (more than one topic is associated with the input article). Hence in our experiments, we mainly focused on curating/using the following datasets. (A) _Large datasets_ with a higher number of articles for inference and relatively longer text (News and Medical-Blog collected from the web), and (B) _Small datasets_ which contain fewer articles (<2000) for inference and are relatively shorter in length.
_Large Datasets:_ The large datasets were published by Sarkar and Karmaker (2022); they are collections of publicly available online news1 and medical-blog articles2. Each article is already labeled with one or more ground-truth topics and stored in JSON objects. Some statistics about these datasets are summarised in Table 3.
Footnote 1: [https://newsbusters.org/](https://newsbusters.org/)
Footnote 2: [https://www.health.harvard.edu/](https://www.health.harvard.edu/)
_Small Datasets:_ The small datasets are originally a set of 5 different online product reviews; these were initially collected from Hu and Liu (2004) and re-annotated by Karmaker Santu et al. (2016). Unlike large datasets, the product reviews are shorter in length and contain more topics than the larger two datasets (see Table 3).
In zero-shot learning, the auxiliary information about topics is provided by the end user (e.g., domain experts) conducting the inference task in the form of keywords/textual descriptions. In this section, we show some topics and their corresponding keywords from the Medical dataset (Table 4); due to lack of space, more examples are provided in appendix A.2.
### Baseline Approaches
As baselines, we used constrained topic modeling and a classical word embedding-based inference approach.
**Generative Feature Language Models** (GFLM) were proposed by Karmaker Santu et al. (2016). The paper suggested an approach based on generative feature language models that can mine implicit topics effectively through unsupervised statistical learning. The parameters are optimized automatically using an Expectation-Maximization
| **Topic Name** | **Keywords** |
| --- | --- |
| Addiction | Opioids, Alcohol, Drug |
| Headache | Migraine, Sinus, Chronic pain |
| Heart Health | Hypertension, Stroke, Cardiovascular |
| Mental Health | Depression, Anxiety, Antidepressant |
| Women's Health | Pregnancy, Breast, Birth |

Table 4: Topics and optional keywords from the Medical dataset
| **Dataset** | **Articles** | **Avg. article length** | **Topics** | **Topics/article** |
| --- | --- | --- | --- | --- |
| Medical | 2066 | 693 | 18 | 1.128 |
| News | 8940 | 589 | 12 | 0.805 |
| Cellular phone | 587 | 16 | 23 | 1.058 |
| Digital camera 1 | 642 | 18 | 24 | 1.069 |
| Digital camera 2 | 380 | 17 | 20 | 1.039 |
| DVD player | 839 | 15 | 23 | 0.781 |
| Mp3 player | 1811 | 17 | 21 | 0.956 |

Table 3: Statistics on _Large_ and _Small_ datasets
algorithm. Details on the method have been discussed in the appendix A.3.
**Classical word embeddings** are a popular way to encode text data into a dense real-valued vector representation. In order to implement a zero-shot classifier, we encoded both the input document and the target topics using pre-trained word embeddings and then, computed vector similarity between the input document encoding and each target topic encoding separately. The implementation of the classical word-embedding-based zero-shot approach is very similar to our setup (discussed in 4); the differences are:
1. Instead of sentence encoders in step 2, pre-trained Glove embedding is used.
2. Articles are represented in two different ways. a) _Average Sentence Level Embedding:_ For each input article, we encode the article by averaging the pre-trained embeddings (e.g., Glove) of each word present in that article. b) _Dictionary of Word Embeddings_: Extract word embedding of all words in an article, and instead of taking the average, we save them individually as a key-value pair.
3. For semantic similarity assessment between Article and Topic embeddings, we used two different metrics: 1) Euclidean distance and 2) Cosine Similarity.
The rest of the process, i.e., steps 4 and 5, is the same as discussed in section 4.
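A rough sketch of this baseline, assuming the averaged-GloVe representation with cosine similarity and a plain word-to-vector dictionary `glove` (e.g. loaded from the public glove.6B files), is as follows; all names are illustrative rather than the original implementation.

```python
import numpy as np

def avg_word_embedding(text: str, glove: dict) -> np.ndarray:
    """Average the pre-trained GloVe vectors of all in-vocabulary words in the text."""
    vecs = [glove[w] for w in text.lower().split() if w in glove]
    dim = len(next(iter(glove.values())))
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def baseline_similarity(article: str, topic: str, glove: dict) -> float:
    """Cosine similarity between the averaged article and topic word embeddings."""
    a, t = avg_word_embedding(article, glove), avg_word_embedding(topic, glove)
    return float(a @ t / (np.linalg.norm(a) * np.linalg.norm(t) + 1e-12))
```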
### Evaluation Metric
To measure the performance of each _zero-shot topic inference_ approach, we use three popular metrics available in the literature: Precision, Recall, and \(F_{1}\) score. First, for each article, the model inferred topic(s) were compared against the list of "gold" topic(s) to compute the true positive, false positive, and false negative statistics for that article. Then, all such statistics for all the articles in a dataset were aggregated and used to compute the final Precision, Recall, and micro-averaged \(F_{1}\) score.
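Concretely, this aggregation amounts to a small helper of the following form (a sketch, where `gold[i]` and `pred[i]` denote the sets of ground-truth and inferred topics for the \(i\)-th article):

```python
def micro_prf1(gold, pred):
    """Micro-averaged Precision, Recall and F1 over a multi-label topic-inference run."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))   # true positives
    fp = sum(len(p - g) for g, p in zip(gold, pred))   # false positives
    fn = sum(len(g - p) for g, p in zip(gold, pred))   # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```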
## 6 Performance Analysis and Findings
In this section, we present the performance details of each sentence encoder over various types of article and topic encoding techniques (mentioned in Table 1 and 2), respectively. As part of the evaluation, we report the \(F_{1}\) score for all sentence encoders and omit the Precision and Recall scores due to lack of space. Table 5 contains the baseline results for _Small_ and _Large_ datasets. Table 6 shows the performance of _Small datasets_ for four types of topic embedding techniques and four sentence encoders. Note that articles in _Small datasets_ are mostly single sentences; hence, we only considered "Entire Article" as the article embedding for the _Small datasets_. In contrast, Table 7 provides details on _Large datasets_ for twelve combinations, including four types of topic embedding techniques and three types of article embeddings.
1. As a case study, consider a review from one of the _Digital camera_ datasets, which is associated with the ground-truth topics _"Size"_, _"Lens"_, and _"Photo"_. We observed that InferSent and LASER annotated the review with many incorrect topics, e.g. _"Design"_, _"Feature"_, _"Manual"_, _"Weight"_, _"Focus"_, etc. Universal Sentence Encoder (USE) annotated the same review with the correct topics plus some other topics that are semantically correlated with them, for instance _"Size"_ (highly correlated with _"Weight"_), _"Focus"_ (highly correlated with _"Lens"_), and _"Video"_ (highly correlated with _"Photo"_). On the other hand, for the same review, Sentence-BERT inferred the correct topics _"Size"_, _"Lens"_, _"Photo"_ and one incorrect topic, _"Video"_, thus achieving the best \(F_{1}\) score among all the encoders. Due to space limitations, we have added a case study from the _Large_ datasets in appendix A.5.
2. Even though USE could not beat SBERT generally, USE attained a score very close to the baseline methods (GFLM) and SBERT.
3. From the perspective of different topic embedding techniques, we observed that "Topic+Keywords" and "Explicit-Mentions" seemed to surpass the other two topic embedding techniques. Both of them include auxiliary information from end users indicating that using user guidance (in the form of topic keywords) helps zero-shot topic inference in real time.
4. From the perspective of different article embedding techniques, "Entire Article" mostly appeared superior to the others, with "Sentence Embedding Average" being the second best; "Individual Sentence Embedding", in contrast, was not promising.
5. "Explicit-Mentions" topic embedding with "Entire Article" as the article embedding attained the best score, followed by "Topic + Keywords" topic embedding paired with "Entire Article".
6. \(F_{1}\) score obtained by InferSent and LASER indicates that these encoders failed to generalize over unseen datasets and, therefore, may not be a good choice for _zero-shot topic inference_.
7. Despite the observation stated in (6), we would like to point out that the inclusion of user guidance in the inference process boosted the performance of both InferSent and LASER. For example, "Topic-Name Only" embedding achieved around 7% \(F_{1}\) score (Average over all datasets). However, with "Explicit-Mentions" embedding, \(F_{1}\) score elevated to around 30% (Average over all datasets).
8. For small datasets, "Topic-Name-Only" embedding presented an interesting case. Here, USE performed better than SBERT. This suggests that, for the product review domain, if additional keywords for each topic are unavailable, USE may be a better choice than SBERT. However, a detailed investigation is warranted to determine the root cause for this result.
For analyzing computation time, we also logged the time taken by different encoders for different types of article and topic embeddings. Since our goal task focuses on the real-time scenario, an important criterion for picking the right approach is the inference time. Therefore, we have reported embedding generation time (in seconds) for each model in Tables 8 and 9. Some observations from these tables are as below.
1. USE is the fastest of all encoders for generating
\begin{table}
\begin{tabular}{l l|c c c|c c c|c c c|c c c} \hline
\multicolumn{2}{c|}{**Dataset**} & \multicolumn{12}{c}{**Medical**} \\ \hline
\multicolumn{2}{c|}{**Topic Embedding**} & \multicolumn{3}{c|}{**Topic Name Only**} & \multicolumn{3}{c|}{**Topic+Keywords**} & \multicolumn{3}{c|}{**Topic+Keyword+Def’n**} & \multicolumn{3}{c}{**Explicit-Mentions**} \\
\multicolumn{2}{c|}{**Article Embedding**} & **EA** & **SEA** & **ISE** & **EA** & **SEA** & **ISE** & **EA** & **SEA** & **ISE** & **EA** & **SEA** & **ISE** \\ \hline
\multirow{4}{*}{**Sentence Encoder**} & InferSent & 0.128 & 0.146 & 0.120 & 0.102 & 0.105 & 0.119 & 0.140 & 0.132 & 0.131 & 0.154 & 0.217 & 0.227 \\
 & LASER & 0.120 & 0.142 & 0.134 & 0.124 & 0.122 & 0.121 & 0.125 & 0.124 & 0.185 & 0.187 & 0.139 & 0.136 \\
 & SBERT & **0.565** & **0.571** & **0.547** & **0.579** & **0.541** & **0.471** & **0.460** & **0.465** & **0.420** & **0.594** & **0.556** & **0.534** \\
 & USE & 0.488 & 0.516 & 0.429 & 0.500 & 0.484 & 0.340 & 0.390 & 0.409 & 0.375 & 0.520 & 0.504 & 0.468 \\ \hline
\multicolumn{2}{c|}{**Dataset**} & \multicolumn{12}{c}{**News**} \\ \hline
\multirow{4}{*}{**Sentence Encoder**} & InferSent & 0.105 & 0.116 & 0.099 & 0.217 & 0.127 & 0.110 & 0.129 & 0.141 & 0.117 & 0.234 & 0.161 & 0.144 \\
 & LASER & 0.171 & 0.180 & 0.154 & 0.181 & 0.176 & 0.135 & 0.126 & 0.127 & 0.128 & 0.130 & 0.136 & 0.134 \\
 & SBERT & **0.425** & 0.408 & **0.447** & **0.488** & **0.458** & **0.374** & 0.406 & 0.386 & 0.378 & **0.511** & **0.416** & **0.404** \\
 & USE & 0.419 & **0.426** & 0.367 & 0.461 & 0.418 & 0.281 & **0.420** & **0.390** & **0.391** & 0.446 & 0.371 & 0.368 \\ \hline
\end{tabular}
\end{table}
Table 7: F1-Score for the _zero-shot topic inference_ task for _Large datasets_ (Medical and News). Performance comparison of four sentence encoders over various topic embedding procedures and article embedding procedures.
sentence embedding, followed by SBERT.
2. Among all topic embedding techniques, "Explicit Mentions" took the highest time for processing since, for "Explicit Mentions", the encoder needs to traverse the whole dataset.
3. Among all article embedding techniques, "Entire Article" appeared to be the fastest for USE and SBERT. In particular, for "Entire Article" embedding on the News dataset USE took approximately 64 seconds, whereas, InferSent and LASER took around 3867 and 1919 seconds, respectively.
4. The difference in article embedding time is more conspicuous on the _Large datasets_ as they contain longer articles and a larger number of them. USE and SBERT clearly win over InferSent and LASER in terms of time as well.
5. The high processing time over _Large datasets_ suggests that InferSent and LASER are unsuitable for real-time inference if the dataset is vast and comprised of long articles.
In essence, comprehensive performance and runtime analysis show that a) auxiliary information helps in achieving better performance in real-time _zero-shot topic inference_ task, b) even though the sentence encoders are designed to be fairly general, aiming for seamless transfer learning, not all of them serve the purpose accurately, c) the processing time varies a lot across different sentence encoders and should be considered seriously while using these encoders in real-time tasks.
## 7 Conclusion
_Zero-shot topic inference_ is a fundamental yet challenging task. The topic inference task tries to identify latent topics in the documents of a given corpus and has proven easier to solve in a supervised setup. However, the recent shift towards zero-shot and transfer learning, driven by the expense of obtaining training data, motivated us to inspect the task from a zero-shot perspective. Since _zero-shot topic inference_ is a much more difficult problem and recent sentence encoders are relatively under-explored in this context, we leveraged four popular sentence encoders to examine their generalization power for the task.
Our _zero-shot topic inference_ task formulation demands serving end users in real time. Although popular sentence encoders have been shown to achieve better generalization over many downstream NLP tasks, not all performed well for the _zero-shot topic inference_ task. Among the four encoders, Sentence-BERT performed well on unseen data, while USE performed decently and was the second best. However, InferSent and LASER did not achieve performance comparable to USE and SBERT. We also proposed novel ways of incorporating user guidance in the zero-shot process, which improved the overall accuracy of topic inference. Apart from performance on unseen datasets, we compared execution times, which revealed that not only the \(F_{1}\) score but also the runtime argues against the use of InferSent and LASER for this task.
Table 8: Time comparison for generating topic embedding by different sentence encoders (Time unit in seconds).
\begin{table}
\begin{tabular}{l|c c c c c} \hline
\multicolumn{6}{c}{**Total Time for Computing Embedding for Entire Article (Small Datasets)**} \\ \hline
**Sentence Encoder** & **Cellular phone** & **Digital cam. 1** & **Digital cam. 2** & **DVD player** & **Mp3 player** \\ \hline
InferSent & 8.645 & 7.819 & 4.610 & 10.668 & 21.433 \\
LASER & 6.952 & 8.267 & 4.451 & 9.989 & 15.387 \\
SBERT & 6.790 & 6.973 & 4.454 & 9.634 & 18.783 \\
USE & 5.618 & 6.739 & 4.091 & 8.292 & 17.496 \\ \hline
\end{tabular}

\begin{tabular}{l l|c c} \hline
\multicolumn{4}{c}{**Total Time for Computing Article Embedding (Large Datasets)**} \\ \hline
**Article Embedding** & **Sentence Encoder** & **Medical** & **News** \\ \hline
\multirow{4}{*}{**Entire Article**} & InferSent & 902.851 & 3867.350 \\
 & LASER & 514.929 & 1919.147 \\
 & SBERT & 28.805 & 88.633 \\
 & USE & 27.478 & 64.203 \\ \hline
\multirow{4}{*}{**Sentence Embedding Average**} & InferSent & 1035.469 & 3807.204 \\
 & LASER & 639.066 & 2373.594 \\
 & SBERT & 548.573 & 1891.539 \\
 & USE & 412.942 & 1448.037 \\ \hline
\multirow{4}{*}{**Individual Sentence Embedding**} & InferSent & 1022.728 & 3778.628 \\
 & LASER & 631.533 & 2350.273 \\
 & SBERT & 553.106 & 1876.776 \\
 & USE & 428.725 & 1410.522 \\ \hline
\end{tabular}
\end{table}
Table 9: Time comparison for generating article embedding by different sentence encoders for _Small and Large datasets._ (_Time unit in seconds_) |
2310.14507 | Fast Marching based Rendezvous Path Planning for a Team of Heterogeneous
Vehicle | This paper presents a formulation for deterministically calculating optimized
paths for a multiagent system consisting of heterogeneous vehicles. The key
idea is the calculation of the shortest time for each agent to reach every grid
point from its known initial position. Such arrival time map is efficiently
computed using the Fast Marching Method (FMM), a computational algorithm
originally designed for solving boundary value problems of the Eikonal
equation. By leveraging the FMM, we demonstrate that the minimal time
rendezvous point and paths for all member vehicles can be uniquely determined
with minimal computational overhead. The scalability and adaptability of the
present method during online execution are investigated, followed by a
comparison with a baseline method that highlights the effectiveness of the
proposed approach. Then, the potential of the present method is showcased
through a virtual rendezvous scenario involving the coordination of a ship, an
underwater vehicle, an aerial vehicle, and a ground vehicle, all converging at
the optimal location within the Tampa Bay area in minimal time. The results
show that the developed framework can efficiently construct continuous paths of
heterogeneous vehicles by accommodating operational constraints via an FMM
algorithm | Jaekwang Kim, Hyung-Jun Park, Aditya Penumarti, Jaejeong Shin | 2023-10-23T02:35:28Z | http://arxiv.org/abs/2310.14507v2 | # Fast Marching based Rendezvous Path Planning for a Team of Heterogeneous Vehicle
###### Abstract
A formulation is developed for deterministically calculating the optimized paths for a multi-agent system consisting of heterogeneous vehicles. The essence of this formulation is the calculation of the shortest time for each agent to reach every grid point from its known initial position. Such arrival time map can be readily assessed using the Fast Marching Method (FMM), a computational algorithm originally designed for solving boundary value problems of the Eikonal equation. Leveraging the FMM method, we demonstrate that the minimal time rendezvous point and paths for all member vehicles can be uniquely determined with minimal computational concerns. To showcase the potential of our method, we use an example of a virtual rendezvous scenario that entails the coordination of a ship, an underwater vehicle, an aerial vehicle, and a ground vehicle to converge at the optimal location within the Tampa Bay area in minimal time. It illustrates the value of the developed framework in efficiently constructing continuous path planning, while accommodating different operational constraints of heterogeneous member vehicles.
_Keywords--_ Fast marching method, Path planning, Heterogeneous vehicles, Multi-agent system, Autonomous Vehicles
## 1 Introduction
Recent advancements in various types of autonomous vehicles have sparked interest in multi-agent systems, which hold the potential to efficiently address complex tasks. Strategic multi-agent path finding (MAPF) becomes crucial, particularly when the team comprises heterogeneous vehicles with varying operational domains and capabilities, such as different speeds, sizes, and maneuverability. These agents may encompass a wide range of vehicles, including ships, underwater vehicles, aerial vehicles, and ground vehicles. Each type of vehicle can have unique navigational constraints and environmental interactions [1, 2]. Previous studies in multi-agent planning have primarily concentrated on scheduling, with an emphasis on task allocation and agent coordination [3]. However, these approaches still call for rigorous continuous path 1 planning that ensures the smooth and uninterrupted motion of vehicles [4], since all agents in real-world applications must adapt their paths in response to changing environmental conditions and dynamic obstacles.
Footnote 1: Here, a continuous path refers to a path defined on continuous real-world space and thus can serve as a smooth path for autonomous vehicles.
From the perspective of computational science, a general form of continuous MAPF is known to be an NP-hard problem [5]. One of the main challenges is related to the high dimensionality of the problem. With many agents in an environment, the number of potential paths and interactions can become overwhelmingly large. The complexity of the multi-agent path finding problem also increases steeply as the number of agents in the system grows, and thus solving MAPF problems becomes computationally expensive. Moreover, in many cases, efficiency is not the sole concern; safety (collision-free paths) must also be taken into account. Due to the complex nature and conflicting objectives encountered in MAPF problems, one often needs to reduce or approximate the original problem to a simpler form, compromising accuracy and global optimality.
In this work, we consider continuous path planning of a multi-agent system for minimal time rendezvous tasks. In these tasks, some agents initially operating at different locations are tasked with meeting to exchange information or resources. Such tasks are frequently encountered in spacecraft docking scenarios [6, 7]. A team of autonomous underwater vehicles also often needs to initiate information exchange tasks at close distances due to limited data transfer capabilities in deep water [8]. In these scenarios, identifying the optimal rendezvous point and the path for each agent to achieve the earliest possible rendezvous time as a team (or other optimizing goals) is important. Unfortunately, however, planning paths that accommodate the differences between vehicles while optimizing overall performance remains a significant challenge.
The primary contribution of this paper lies in formulating the rendezvous problem of a multi-agent system in a way that is suitable for assessment using the fast marching method (FMM). The FMM is a well-established numerical technique originally developed for solving the Eikonal equation. Beyond its original purpose, however, the FMM has also demonstrated its capability in efficiently computing the shortest paths on continuous grids [9, 10, 11, 12, 13]. Extending these works, we show how the use of the FMM for rendezvous MAPF also enables the enhancement of collaboration, reduction of complexity, and optimization of the overall mission performance of the team. Specifically, we first define an optimization problem that involves continuous path planning for a team of heterogeneous vehicles, each with its operational domain. Then, we exploit the direct output from the FMM as a key component of a new path planning approach. Our approach deterministically calculates the time-optimal rendezvous point for heterogeneous vehicles and determines the path to the optimal rendezvous point from different initial agent positions. Throughout this process, the method also takes into account their unique operational constraints and environmental interactions.
The remainder of the paper is organized as follows. Section 2 introduces the methodologies of the FMM and FMM-based path planning. In Section 3, we formulate an optimization problem for multi-agent path planning of a rendezvous task and introduce a new methodology based on the FMM to efficiently solve the problem. Section 4 presents a virtual path planning experiment to demonstrate the potential of our proposed approach, while Section 5 discusses important features and highlights merits of the suggested methodology. Finally, we conclude the paper in Section 6, listing potential future research directions.
## 2 Background on Fast Marching Method and Its Application to Path Optimization
In this section, we provide a brief overview of the FMM, which will be used to address the challenges of multi-agent path planning for rendezvous missions. Originally developed for solving a nonlinear first-order partial differential equation, the FMM has shown high efficiency in dealing with interface mechanics compared to other algorithms [14, 15, 16, 17]. The FMM has also found applications in diverse research domains,
encompassing materials science [18], computer graphics [19], and image processing [20]. Particularly, its application to path optimization has a long history in various domains of applications, ranging from marine vehicles to social navigation [21, 22, 23, 24, 25, 26]. In the following, we begin by summarizing the main ideas of the FMM in its original context.
### The fast marching method
First introduced in Ref. [27], the FMM is an efficient computational algorithm for tracking the front, or _interface_, that evolves in the outward unit normal direction with speed \(V\). The explicit outcome of FMM is the arrival time \(T(\mathbf{x})\) that the initial surface needs to reach every point \(\mathbf{x}\) on the given domain \(\Omega\). For example, Fig. 1 demonstrates the result of FMM used to track an initial surface \(\Gamma\) (the innermost blue line) growing with a uniform outward normal velocity \(V(\mathbf{x})=1\). In the following, we summarize the FMM algorithm as described in Ref. [28].
Let \(s(t)\) describe a surface evolving with speed \(V(\mathbf{x})\) from a given initial surface \(s(0)=\Gamma\). Instead of solving a time-dependent problem for \(s(t)\) to track the moving surface, the FMM solves for a function \(T(\mathbf{x})\) defined by
\[T(s(t))=t, \tag{1}\]
with \(T=0\) on \(\Gamma\).
Differentiating (1) and noting that \(\nabla T\) is normal to the surface, one arrives at the following boundary value problem,
\[|\nabla T|V=1. \tag{2}\]
Also, the boundary condition for \(T\) equivalent to the original time-dependent problem is
\[T=0\quad\text{on }\Gamma. \tag{3}\]
Equation (2) is commonly referred to as the Eikonal equation.
Now, we describe the algorithm to solve (2) on a two-dimensional discrete grid, i.e. \(\mathbf{x}=(x,y)\). However, it is worth noting that the algorithm is conveniently generalized to arbitrary dimensions. Let \(D_{ij}^{-x}(\cdot)\) denote the standard backward-difference operation at the grid point \(ij\),
\[D_{ij}^{-x}T=\frac{T_{ij}-T_{(i-1)j}}{\delta x}. \tag{4}\]
Likewise, we use \(D^{+x}\), \(D^{-y}\), and \(D^{+y}\) to represent the forward finite difference operator in \(x\) and the backward and forward operators in \(y\), respectively. In order to ensure a unique _viscosity solution_ of the eikonal equation (2), the gradient must be computed consistently with an upwind finite difference scheme. This is compactly written as
\[\frac{1}{V(\mathbf{x})}=\big{[}\max(D_{ij}^{-x}T,\,-D_{ij}^{+x}T,\,0)^{2}+\max(D_{ij}^{-y}T,\,-D_{ij}^{+y}T,\,0)^{2}\big{]}^{1/2}. \tag{5}\]
When the neighboring values of \(T_{ij}\) are known, the discrete eikonal equation (5) becomes a quadratic equation for \(T_{ij}\) at each grid point, allowing for straightforward analytical solutions.
The FMM initiates by performing the following initialization step.
1. Assign \(T(\mathbf{x})=0\) for grid points in the area enclosed by the initial surface, and tag them as _accepted_
2. Assign \(T(\mathbf{x})=+\infty\) for the remaining grid points, and tag them as _far_
3. Among the _accepted_ points, identify the points that are in the neighborhood of points tagged as _far_, and tag them as _considered_
The key step in the fast marching method is to update \(T\) with a trial value using Eq. (5) for grid points tagged as _considered_, while only accepting the update with the smallest value at each iteration. This procedure requires keeping track of the smallest \(T\)-value among points tagged as _considered_. The potential \(T\) values are managed in a specialized data structure inspired by discrete network algorithms [29]. This data structure is known as a min-heap data structure, which represents a complete binary tree with a property that the value at any given node is less than or equal to the values of its children. Utilizing the min-heap, the FMM then proceeds as follows.
1. Form a min-heap structure for the _considered_ points.
2. Access the minimum value of the heap, located at the root of the binary tree.
3. Determine a trial solution \(\tilde{T}\) on the neighbors of the root using Eq. (5). If the trial solution \(\tilde{T}\) is smaller than the present values, then update \(T(\mathbf{x})=\tilde{T}\).
4. If a point, previously tagged as _far_, is updated using a trial value, relabel it as _considered_, and add it to the heap structure.
5. Tag the root of the heap as _accepted_, and delete it from the heap.
6. Repeat steps 2 to 5, until every grid point is tagged as _accepted_.
The principal computational cost of the FMM arises from maintaining the heap structure, which leads to an overall complexity of \(\mathcal{O}(n\log n)\), where \(n\) is the total number of grid points.
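To make steps 1-6 concrete, the following is a minimal Python sketch of the FMM on a uniform two-dimensional grid (a simplified illustration rather than an optimized implementation); the grid spacing `h` is an assumed parameter, speeds are taken to be strictly positive, and obstacle cells, if present, can be pre-tagged as _accepted_ before the sweep so that the front never enters them.

```python
import heapq
import numpy as np

def fast_marching(V, sources, h=1.0):
    """Solve |grad T| V = 1 on a uniform 2D grid (spacing h) with the fast marching method.
    V: 2D array of strictly positive speeds; sources: list of (i, j) seed points where T = 0."""
    T = np.full(V.shape, np.inf)
    accepted = np.zeros(V.shape, dtype=bool)
    heap = []
    for ij in sources:
        T[ij] = 0.0
        heapq.heappush(heap, (0.0, ij))

    def update(i, j):
        # Upwind neighbour values in x and y, restricted to accepted points (Eq. (5))
        tx = min([T[n] for n in ((i - 1, j), (i + 1, j))
                  if 0 <= n[0] < V.shape[0] and accepted[n]], default=np.inf)
        ty = min([T[n] for n in ((i, j - 1), (i, j + 1))
                  if 0 <= n[1] < V.shape[1] and accepted[n]], default=np.inf)
        a, b = sorted((tx, ty))
        f = h / V[i, j]
        if b - a >= f:            # only the smaller neighbour contributes
            return a + f
        # Both directions contribute: solve (t - a)^2 + (t - b)^2 = f^2 for t
        return 0.5 * (a + b + np.sqrt(2.0 * f ** 2 - (a - b) ** 2))

    while heap:
        t, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True     # the smallest tentative value becomes final
        for n in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= n[0] < V.shape[0] and 0 <= n[1] < V.shape[1] and not accepted[n]:
                trial = update(*n)
                if trial < T[n]:
                    T[n] = trial
                    heapq.heappush(heap, (trial, n))
    return T

# Example: a wave expanding from the centre of a 64x64 domain with unit speed
T = fast_marching(np.ones((64, 64)), sources=[(32, 32)])
```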
Figure 1: The level sets of the solution to the Eikonal equation (2), computed using the fast marching method, describe a surface evolving with outward normal velocity \(V(\mathbf{x})=1\). The level set values indicate the time it takes for the initial surface (represented by the innermost blue line) to reach each grid point within the computational domain.
### Adaptation of the FMM for path optimization
While the FMM is originally developed for interface problems, numerous studies have also successfully applied the FMM in vehicle path planning scenarios, enabling agents to navigate complex environments, avoid obstacles, and reach their destinations efficiently. These studies have primarily focused on single-agent path planning under various external conditions. These include time-varying environmental factors, such as waves and currents in oceans [9], time-varying environments with predictive models [10], angle guidance for uncrewed surface vehicles [30], anisotropic Fast Marching (FM)-based approaches for dynamic obstacles [11] and bridge obstacles [12], as well as path planning for autonomous ships [13]. In contrast, its application in multi-agent systems remains relatively unexplored. A few examples include swarm coordination [31] and formation control involving vehicles with different dynamic properties [32].
In the context of path optimization, the computational domain \(\Omega\) of the FMM takes on a new perspective as the configuration space for mobile agents, often depicted through a binary occupancy map as illustrated in Fig. 2. The binary image, which is in a size of \(n=n_{1}\times n_{2}\) pixels, takes the value of \(0\) if the position is occupied by obstacles, and \(1\) otherwise. Also, the initial surface \(\Gamma\) is reduced to a single wave-source point \(\mathbf{x}_{0}\), representing the initial location of an agent. The velocity field \(V(\mathbf{x})\) signifies the permissible speed of vehicles at a given position while considering the proximity of obstacles (such as walls and barriers) to the agents. As part of the FMM's initialization step, every grid point located on obstacles is initially labeled as "accepted".
Next, the FMM algorithm is executed to compute the shortest time \(T(\mathbf{x})\) for the propagating wave to arrive at each grid point. The trajectory of the agent is finally determined by extracting the maximum gradient direction of \(T(\mathbf{x})\) from the target point to the initial point. Since \(T(\mathbf{x})\) is derived from the target point, the resulting \(T\)-field uniquely exhibits a single minimum at the target point, ensuring a unique solution [23].
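The back-tracing step can be sketched as a simple discrete gradient descent on the arrival-time map; in the snippet below (an illustration with an assumed step size of half a grid cell), the time grid is assumed to be finite everywhere, e.g. with obstacle cells assigned a large constant before differentiating.

```python
import numpy as np

def descend_gradient(T, start, step=0.5, max_iter=100000):
    """Trace a path from `start` (row, col) down the arrival-time map T toward its minimum."""
    gy, gx = np.gradient(T)                            # numerical gradient of the time grid
    src = np.array(np.unravel_index(np.argmin(T), T.shape), dtype=float)
    path = [np.asarray(start, dtype=float)]
    for _ in range(max_iter):
        if np.linalg.norm(path[-1] - src) <= 1.0:      # within one cell of the source: done
            break
        i, j = np.clip(np.round(path[-1]).astype(int), 0, np.array(T.shape) - 1)
        g = np.array([gy[i, j], gx[i, j]])
        path.append(path[-1] - step * g / (np.linalg.norm(g) + 1e-12))
    return np.array(path)
```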
A remaining task is to employ an appropriate model for the velocity field \(V(\mathbf{x})\) that respects the environment. While one might simply consider the simplest option, which is to use a constant value \(\mathcal{V}_{max}\) representing the maximum speed of the agents, it is observed that the resulting trajectory lacks realism as it fails to ensure both smoothness and a safe distance between agents and obstacles [21].
To address these issues, the FMM has been advanced into the Fast Marching Square (FMS) method. In order to guarantee a safe distance between obstacles and agents, this approach introduces a penalty to the agent's velocity as it navigates in proximity to obstacles. The FMS method entails the implementation of
Figure 2: An example of binary occupancy map. The binary image, which is on \(512\times 512\) pixel size, takes the value of \(0\) if the position is occupied by obstacles, and \(1\) otherwise.
two distinct FMMs.
The objective of the first FMM is to construct a velocity grid map that takes into account the presence of obstacles. This is achieved by evolving initial surfaces, which represent the boundaries of obstacles in the environment, with a constant velocity \(V(\mathbf{x})=1\). The outcome of this process is the computation of the distance \(d(\mathbf{x})=T(\mathbf{x})\in\mathbb{R}^{+}\) at each grid point, indicating the shortest distance to the nearest obstacle. Consequently, a velocity grid map \(V\) is computed as a function of \(d\), which is artificially designed to penalize the vehicle's speed as it approaches obstacles. A common choice for this penalizing function is a linear relationship \(V\propto d\), which is inspired by a two-dimensional repulsive electrostatic potential [33]. Alternatively, one may also consider
\[V(d(\mathbf{x}))=\mathcal{V}_{\max}\left[1-\exp\left(-\alpha\left(\frac{d}{d_{\max }}\right)\right)\right], \tag{6}\]
where \(d_{\max}\) is the maximum distance in the configuration space and \(\mathcal{V}_{\max}\) is the maximum speed of the agent at a free space, respectively. Note that the form (6) includes a dimensionless free parameter \(\alpha\) that indirectly governs the safety distance. Fig. 4 shows the plots of the velocity function profiles at several values of \(\alpha\). The velocity map created from the binary map (Fig. 2), using the form (6) with \(\alpha=3\), is shown in Fig. 3.
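In array form, the profile (6) is a one-line operation on the obstacle-distance map produced by the first FMM; the sketch below uses \(\alpha=3\), as in Fig. 3, and takes \(d_{\max}\) to be the largest distance value on the grid.

```python
import numpy as np

def velocity_map(d, v_max=1.0, alpha=3.0):
    """Penalized speed of Eq. (6): small near obstacles, saturating to v_max far from them."""
    return v_max * (1.0 - np.exp(-alpha * d / np.max(d)))
```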
Next, the second FMM is executed from the initial position \(\mathbf{x}_{0}\) of the agent (or vehicle) to compute the time grid map \(T(\mathbf{x})\), respecting the environmental constraints through \(V(\mathbf{x})\). Finally, the path is obtained by again applying the gradient descent algorithm to \(T(\mathbf{x})\); the resulting path for the example case is shown in Fig. 6.
The strength of the FMM-based method lies in its unparalleled computational speed when dealing with specific types of optimization problems. For instance, to provide a more intuitive grasp of the computational efficiency inherent in FMM-based methods, we delve into some practical specifics. The process of extracting a path from a grid of size \(10^{7}\) typically demands only a matter of seconds when employing a single-core machine. To put this into a simpler perspective, it is comparable to handling a two-dimensional pixel image measuring \(4000\times 4000\) in dimensions. It is also noteworthy that the application of the FMM across multiple iterations does not burden the optimization process with any substantial computational time constraints. The efficiency of the FMM-based method inspires the development of a new framework for various scenarios of modern operations of uncrewed vehicles in the subsequent section.
Figure 3: Velocity map created using the form (6) with \(\alpha=3\). The velocity values \(V^{*}\) are normalized with the maximum speed of agent \(\mathcal{V}_{\max}\).
## 3 FMM-based Rendezvous Path Planning for a Team of Heterogeneous Vehicles
The goal of this section is to introduce an innovative approach to leveraging the FMM-based method within a multi-agent path planning domain. In particular, we put forward an FMM-based rendezvous path planning algorithm designed for a diverse team of vehicles. The team is tasked with efficiently converging at a single location, aiming for optimal efficiency in pursuit of a general goal.
### Problem Statement
This paper considers the problem of finding paths for \(N(\geq 2)\) heterogeneous vehicles in a team, which are tasked with rendezvousing within a minimal time. The region of interest \(\Omega\) is assumed to be represented by an occupancy grid map, where each pixel is either free \(\mathcal{C}_{\text{free}}\) or occupied \(\Omega\backslash\mathcal{C}_{\text{free}}\). Following Ref. [34], a path is viewed as a continuous function \(\tau:[0,1]\rightarrow\mathcal{C}_{\text{free}}\), in which each point along the path is given by \(\tau(s)\) for some \(s\in[0,1]\). Here, \(\tau(0)\) corresponds to the starting point of the agent, whereas \(\tau(1)\) denotes the target point. Although the orientation of each vehicle will not be considered in this work, it is also feasible to incorporate their orientations using the existing methodology [30].
We assume that the starting position \(\tau^{i}(0)\) of each vehicle in the team is given. Note that we introduced the index \(i=1,...,N\) to denote each vehicle. Then, the rendezvous path planning for the team is divided into two sub-problems. The first is to determine the optimal rendezvous point \(\mathbf{x}_{m}\) such that
\[\mathbf{x}_{m}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathcal{C}_{\text{free}}} \mathcal{F}(\mathbf{x}), \tag{7}\]
where \(\mathcal{F}\) is a general cost function. The second sub-problem is to determine the optimal path \(\tau^{i}\) from the initial point \(\tau^{i}(0)\) of each agent to the rendezvous point \(\tau^{i}(1)=\mathbf{x}_{m}\).
From now on, for the purpose of illustration, we fix the optimizing function \(\mathcal{F}\) as the meeting time. In
Figure 4: Plots of velocity functions (6) as a function of normalized distance \(d/d_{\max}\) for different values of \(\alpha\). The vertical axis \(V^{*}(=V/\mathcal{V}_{\max})\) is a normalized velocity by the maximum speed. In general, a smaller value of \(\alpha\) results in a larger imposed safety distance.
Figure 5: Comparison of velocity maps generated from the different velocity forms shown in Fig. 4. Sharper increase of \(V^{*}\) to value 1 results in a larger safety distance.
rendezvous tasks, this corresponds to the arrival time of the last agent, which is written as
\[\mathcal{F}(\mathbf{x})=\max\left[T^{1}(\mathbf{x}),T^{2}(\mathbf{x}),\ldots,T^{N}(\mathbf{x}) \right]. \tag{8}\]
where \(T^{i}(\mathbf{x})\) denotes the arrival time for all \(\mathbf{x}\in\mathcal{C}_{\text{free}}\), which will be also referred to as a time grid onwards.
### The algorithm
Now, we describe our approach to the aforementioned rendezvous path planning problem. Starting from the initial position of each agent, a single run of the FMS method yields \(T^{i}(\mathbf{x})\) for all points in \(\Omega\). During this step, the arrival time \(T^{i}(\mathbf{x})\) of each agent is determined while accounting for the different velocities and safety distances imposed by the environment. Once the arrival time maps for all agents are prepared, the best meeting point \(\mathbf{x}_{m}\), which minimizes the cost \(\mathcal{F}(\mathbf{x})\), can be conveniently determined by
\[\mathbf{x}_{m}=\underset{\mathbf{x}\in\mathcal{C}_{\text{free}}}{\arg\min}\left(\max\left[T^{1}(\mathbf{x}),T^{2}(\mathbf{x}),\ldots,T^{N}(\mathbf{x})\right]\right) \tag{9}\]
This approach can be easily generalized for situations where agents have different operation conditions, such as moving cost or fuel consumption rate, operation domain, or dynamics properties.
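In terms of array operations, Eqs. (8) and (9) amount to a pointwise maximum over the per-agent arrival-time maps followed by an argmin restricted to free space; a minimal sketch (with `free_mask` marking \(\mathcal{C}_{\text{free}}\), an assumed Boolean array) is:

```python
import numpy as np

def rendezvous_point(T_list, free_mask):
    """T_list: arrival-time maps T^i(x), one 2D array per agent; free_mask: True on C_free."""
    worst_arrival = np.maximum.reduce(T_list)               # F(x) = max_i T^i(x), Eq. (8)
    worst_arrival = np.where(free_mask, worst_arrival, np.inf)
    x_m = np.unravel_index(np.argmin(worst_arrival), worst_arrival.shape)  # Eq. (9)
    return x_m, worst_arrival[x_m]
```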
The following provides the implementation details of the presented approach using an example of rendezvous planning for three agents, initially located at three different corners of the binary occupancy map previously shown in Fig. 2. The initial positions are shown in Fig. 7(a), Fig. 7(b), and Fig. 7(c). For simplicity, we assume that the vehicles are identical, meaning that they travel with the same dynamics and at the same constant speed. Specifically, \(\alpha=3\) and \(\mathcal{V}_{\max}=1\) are used in the example.
The algorithm begins by following the standard step of the FMS method to measure the distance \(d\in\mathbb{R}^{+}\) to the nearest obstacle at every point on the grid. The first FMM runs from the initial surfaces of the obstacles to fill in \(d\)-values at every non-occupied point in \(\mathcal{C}_{\text{free}}\subset\Omega\), using the uniform velocity \(V(\mathbf{x})=1\). Next, we generate a velocity map \(V^{i}(\mathbf{x})\) for each agent \(i\in\{1,2,...,N\}\) using the velocity function (6). Each agent may have a different value of the safety parameter \(\alpha\) and of the maximum allowable speed \(\mathcal{V}_{\max}\). The velocity map for the binary occupancy map, obtained using the form (6), is shown in Fig. 3.
Figure 6: The optimized path after applying the gradient descent algorithm is plotted on the time grid. The white circle denotes the start point, while the square indicates the endpoint.
Then, we run the second FMM multiple times, once from each agent's initial position \(\mathbf{x}_{0}^{i}\), which corresponds to \(\tau^{i}(0)\). This second round of FMM computation propagates a source wave from each starting point until arrival time \(T\) values have been assigned to every grid point. At each iteration, we obtain the arrival time map \(T^{i}(\mathbf{x})\) of one agent. Once the iterations of the second FMM finish, the optimal point \(\mathbf{x}_{m}\) can be determined directly from the form (9). The result of the inner term in (9), \(\max\big{[}T^{1}(\mathbf{x}),\ldots,T^{N}(\mathbf{x})\big{]}\), is shown with a color map in Fig. 7(d).
Lastly, the optimized path \(\tau^{i}\) for each agent to the rendezvous point is determined by applying the gradient descent algorithm to the time grid \(T^{i}(\mathbf{x})\). This trajectory optimization step follows the maximum gradient direction of \(T^{i}(\mathbf{x})\). The final outcome of the FMS method is the optimized continuous path \(\tau^{i}\), a collection of points in \(\Omega\) that guides the trajectory of each agent, as shown in Fig. 6. The procedure is summarized in Algorithm 1.
Figure 7: (a-c) The normalized arrival time \(T^{*}\) maps for three agents located at different initial points. (d) The optimized path drawn on \(\mathcal{F}(\mathbf{x})\) as defined in the form (8).
```
0: A binary occupancy map \(\Omega=\mathcal{C}_{\text{free}}\cup\mathcal{C}_{\text{free}}^{c}\), a cost function \(\mathcal{F}(\mathbf{x})\), positions of obstacles \(\mathbf{x}_{\text{obs}}\) and initial points of total \(N\) agents \(\tau^{i}(0)\in\Omega\)
0: The best point \(\mathbf{x}_{m}\in\mathcal{C}_{\text{free}}\) that optimize the cost \(\mathcal{F}\), and the optimized path \(\tau^{i}\) for each agent
1: Execute the fast marching method from \(\partial\mathcal{C}_{\text{free}}^{c}\) to compute the minimum distance \(d(\mathbf{x})\) from obstacles
2:for\(i=1\) to \(N\)do
3: Calculate the maximum velocity field \(V^{i}(\mathbf{x})\) using \(d(\mathbf{x})\)
4: Execute the fast marching method from \(\mathbf{x}_{0}^{i}\) to compute \(T^{i}(x)\)
5:endfor
6: Determine the optimizing point \(\mathbf{x}_{m}\) that minimizes \(\mathcal{F}(\mathbf{x})\)
7:for\(i=1\) to \(N\)do
8: Execute the fast marching method using \(V^{i}(\mathbf{x})\) from \(\mathbf{x}_{0}^{i}\) to \(\mathbf{x}_{m}\)
9: Use the maximum gradient descent algorithm to determine the path \(\tau^{i}\) from \(\mathbf{x}_{0}^{i}\) to \(\mathbf{x}_{m}\).
10:endfor
```
**Algorithm 1** FMM-based path optimization for multi-agent rendezvous
## 4 Experiment
In this section, we conduct a numerical experiment that showcases an application of the suggested method in more realistic cases. We consider a virtual scenario of rendezvous task for a team of heterogeneous vehicles. The experimental setting is as follows.
### Experimental set up
First, we created a computational domain to simulate a realistic environment. We chose the Tampa Bay area as our test domain. We used a satellite image from NASA's Earth Observatory (as shown in Fig. 8a)2. Also, the GRIP tool (_Graphically Represented Image Processing engine_) [35] was employed to convert the satellite image into a binary configuration space map. The primary objective of image processing at this stage was to distinguish water bodies and land areas, as illustrated by white and black pixels respectively in Fig. 8b.
Footnote 2: [https://earthobservatory.nasa.gov/images/4745/tampa-bay-florida](https://earthobservatory.nasa.gov/images/4745/tampa-bay-florida)
Next, we built a team of heterogeneous agents, of which member vehicles comprised four types: an uncrewed underwater vehicle (UUV), an uncrewed surface vehicle (USV), an uncrewed ground vehicle (UGV), and an uncrewed aerial vehicle (UAV). The UUV operates exclusively underwater but is limited by operational depth constraints. Consequently, UUV operations are required to take place at a considerable distance from the shoreline. On the other hand, the USV is designed for slower mobility, but it has the capability to navigate areas closer to the coastline. In contrast, the UGV's operational domain is limited to land. Lastly, the UAV, being an aerial platform, is assumed to move at a constant speed without encountering any obstacles.
Operational constraints for the aforementioned heterogeneous agents were addressed using their respective velocity maps \(V^{i}(\mathbf{x})\). The primary tools were the magnitude of the penalty parameter \(\alpha\) in (6) and the mirroring of the binary image. To begin with, it is reasonable to impose a higher penalty on the operating velocity of the UUV in proximity to land, since the UUV is required to operate at a far distance from the shoreline.
Thus, we set the penalty parameter to \(\alpha^{uuv}=100\), while the values of \(\alpha^{usv},\alpha^{ugv}\) are set to 3. Moreover, in order to address the specific land-travel limitation of the UGV, the operation domain of the UGV was obtained by mirroring the binary operational domain of the ocean vehicles (i.e. USV, UUV), the result of which is shown in Fig. 8c. For the UAV, which travels above both water and land, the domain was considered free space without obstacles.
The remaining parameters are the maximum operational speeds \(\mathcal{V}^{i}_{\max}\) of the vehicles. In actual applications, these parameters should reflect the actual performance of the agents. In this virtual test, we assumed the following scenario to demonstrate the full potential of the present approach. First, the maximum operational speed of the UGV was assumed to be the slowest among all agents, considering cases where UGVs need to move as a group or encounter additional environmental restrictions (such as traffic or changes in topography). Then, we normalized the velocities of the vehicles using the maximum speed of the UGV, and thus we write \(\mathcal{V}^{ugv}_{\max}=1\). The maximum speeds of the USV and UUV were set to the same value, \(\mathcal{V}^{uuv}_{\max}=\mathcal{V}^{usv}_{\max}=2\), and the UAV was assumed to have the highest navigation speed, set to \(\mathcal{V}^{uav}_{\max}=3\).
With the prescribed setting, we applied the Algorithm 1 to solve the optimization problem of rendezvous path planning.
### Results
We executed the first FMM (of the FMS) for each vehicle from arbitrarily selected initial points, indicated by the red dots in Fig. 9a-Fig. 9d. The computed time grid \(T^{i}(\mathbf{x})\) for each vehicle is also visualized in the same plots. One distinguished case is Fig. 9d, which shows unimpeded paths for the UAV throughout the environment.
Next, we note that an additional procedure that is not described in Algorithm 1 is necessary, as the UGV operates on the complementary domain of the UUV and USV in our test scenario. Note that candidates for the rendezvous point should be located on the shoreline, which belongs to the obstacles (i.e. \(\Omega\backslash\mathcal{C}_{\text{free}}\)) in the first FMM for any agent. To address this issue, we extended the time grid \(T^{i}(\mathbf{x})\) to the edges of obstacles. Grid cells on the edge of the binary images (Fig. 8b and Fig. 8c) were first detected using MATLAB's "edge" function, and the \(T^{i}(\mathbf{x})\) on edges were inferred from the minimum time value among the adjacent grid
Figure 8: (a) Satellite image of Tampa Bay, FL, downloaded from NASA Earth Observatory; (b) Processed binary image from Fig. 8a for the USV and UUV; (c) Processed binary image for the UGV, which is the inverse of Fig. 8b
points. Fig. 10 illustrates this procedure. This results in a subset of the shoreline emerging as the candidate for the rendezvous point as seen in Fig. 11. Then, in the same figure, the black dot, which represents the optimal rendezvous point where all vehicles can converge in the minimum time, is determined by the form (9).
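The extension itself does not depend on MATLAB's edge detector: an obstacle cell that touches free space is a shoreline candidate, and it simply inherits the minimum arrival time among its free 4-neighbours. A brute-force sketch of this reading of the procedure is given below.

```python
import numpy as np

def extend_to_edges(T, free_mask):
    """Assign arrival times to obstacle cells bordering free space (e.g. the shoreline)."""
    T_ext = T.copy()
    rows, cols = T.shape
    for i in range(rows):
        for j in range(cols):
            if free_mask[i, j]:
                continue                                   # only obstacle cells are candidates
            times = [T[n] for n in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= n[0] < rows and 0 <= n[1] < cols and free_mask[n]]
            if times:                                      # the cell touches free space
                T_ext[i, j] = min(times)
    return T_ext
```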
Next, the algorithm computes a path for each vehicle using the gradient descent method. Fig. 12a illustrates the UUV's trajectory, characterized by significant turns to remain within its operational range, attributable to the low alpha value. On the contrary, Fig. 12b exemplifies the USV's efficiency in navigating between islands. The UGV's path in Fig. 12c remains confined to land, adhering to its intended operational domain. Lastly, the paths for all vehicles are summarized in Fig. 13, which also includes the simplest path, that of the UAV, unhindered by obstacles due to its aerial capabilities.
Overall, we see that the resulting paths in Fig. 13 align well with the assumed operational constraints of the member vehicles, offering realistic planning outcomes. While the paths include overlaps between the USV and UGV, path conflicts are not an issue in the test scenario, since the agents have unique operational domains; they would not collide even if their paths overlap at the same time.
## 5 Discussion
In this section, we highlight the merits of our methodology in two ways.
Figure 10: A schematic outlining the procedure for extending the time grid to accommodate situations where vehicles operate in non-intersecting domains, specifically the UUV/USV and the UGV.
Figure 9: The time grids \(T^{i}(\mathbf{x})\) of the (a) UUV, (b) USV, (c) UGV, and (d) UAV. The red dot in each plot denotes the initial position of the vehicle.
Figure 11: An extended time grid defined on the shoreline to compute the rendezvous point \(\mathbf{x}_{m}\), following the form (9).
Figure 12: The paths planned for the (a) UUV, (b) USV, and (c) UGV, shown over the corresponding time grids of Fig. 9.
First, the authors are aware that constructing collision-free paths for agents is one of the most critical aspects of MAPF, even though our test scenario was free of such conflicts. However, even if agents share the same operational domain, under our framework it is easy to check whether overlapping paths will lead to collisions. This is because we have explicit information of each vehicle's arrival time at overlapping points; path conflicts occurring at different times do not result in collisions between agents. We believe this is useful information for extending the current framework for truly collision-free path planning.
We also re-emphasize that the main contribution of the present work is the formulation of the multi-agent rendezvous task in the form of (8). While Algorithm 1 is straightforward under this formulation, a potential improvement can be found in the design of the velocity function (6) to respect more detailed and realistic operational constraints of various types of vehicles.
Lastly, the main advantage of our approach is that the process is deterministic, implying that the resulting paths are guaranteed to be the globally optimized solution. In addition, the computational cost for determining the paths of all agents is only proportional to the number of agents \(N\). Therefore, at least for the rendezvous task as defined in (8), we claim that our approach outperforms heuristic, stochastic, and machine-learning-based methods in terms of solution uniqueness and scalability.
## 6 Conclusion
Recent rapid advancements in uncrewed vehicle technology have significantly improved accessibility and cost-effectiveness, leading to their widespread integration across various domains, including ground, water, and air. As systems with uncrewed vehicles become ubiquitous, the demand for sophisticated navigation methodologies that can efficiently guide their interactions also becomes paramount. In this regard, the present work has introduced a new approach to path planning for multi-agent systems. Our method is rooted in the well-established framework of the FMM. The methodology presented in this paper leverages the capabilities of
Figure 13: Path planning results plotted over the original satellite image. Circles denote starting points of vehicles, and the triangle denotes the computed optimal rendezvous point from the presented algorithm.
the FMM to efficiently optimize trajectories for heterogeneous teams of agents, augmenting their operational efficiency and collective synergy.
To illustrate our approach, we considered an example path planning scenario involving four different types of uncrewed vehicles navigating around the Tampa Bay area. The results of virtual experiment demonstrated how the path planning task of a multi-agent system can benefit from the effectiveness of the FMM-based method, which conveniently incorporates the individual operational characteristics of the heterogeneous vehicles. The computational efficiency and flexibility of our approach open the door to various directions for future work.
* The optimization function \(\mathcal{F}\) can also be extended to incorporate various scenarios of rendezvous tasks other than the minimal time. For example, we plan to include different operational costs for heterogeneous vehicles to maximize the economic efficiency of rendezvous tasks.
* The proposed framework can be extended to path planning in presence of dynamic obstacles. This generalization will allow the algorithm to consider the collision between two different agents. The future work will investigate how the fast marching method can be modified in order to efficiently incorporate moving objects or other moving agents in the computation.
* Finally, we also envision extending our framework to different purposes of path planning for heterogeneous agents beyond rendezvous missions. This could involve group search optimization and assignment tasks.
## Credit author statement
**Jaekwang Kim:** Conceptualization, Methodology, Software (original code development and numerical experiments), Validation, Writing. **Hyung-Jun Park:** Validation, Review. **Jaejeong Shin:** Conceptualization, Software (numerical experiments), Supervision, Writing.
## Acknowledgements
Jaekwang Kim was supported by the Hongik University new faculty research support fund.
|
2308.01426 | The Onset Acceleration for Surfactant Covered Faraday Waves | Faraday waves are gravity-capillary waves that emerge on the surface of a
vertically vibrated fluid when the energy injected via vibration exceeds the
energy lost due to viscous dissipation. Because this dissipation primarily
occurs in the free surface boundary layer, their emergence is particularly
sensitive to free surface properties including the surface tension, elasticity,
and viscosity of surfactants present at the free surface. We study this
sensitivity by considering a Newtonian fluid bath covered by an insoluble
surfactant subject to vertical vibrations which produce sub-harmonic Faraday
waves. By assuming a finite-depth, infinite-breadth, low-viscosity bulk fluid
and accounting for surface tension, Marangoni, and Boussinesq effects, we
derive an expression for the onset acceleration up to second order in the
expansion parameter $\Upsilon = \sqrt{\tfrac{1}{\mathcal{R}e}}$. We recover the
results of previous numerical investigations, but only by modifying the
Marangoni and Boussinesq numbers to account for the low-viscosity limit. The
analytic expression allows us to consider a range of parameters not previously
studied, including a wide variety of fluid depths and driving frequencies. In
addition, we uncover regions of parameter space for which our model predicts
that the addition of surfactant would lower, rather than elevate, the onset
acceleration. We discuss the possible use of this model in developing a surface
viscometer for surfactant monolayers. | Stephen L. Strickland, Karen E. Daniels, Michael Shearer | 2023-08-02T20:52:52Z | http://arxiv.org/abs/2308.01426v1 | # The Onset Acceleration for Surfactant Covered Faraday Waves
###### Abstract
Faraday waves are gravity-capillary waves that emerge on the surface of a vertically vibrated fluid when the energy injected via vibration exceeds the energy lost due to viscous dissipation. Because this dissipation primarily occurs in the free surface boundary layer, their emergence is particularly sensitive to free surface properties including the surface tension, elasticity, and viscosity of surfactants present at the free surface. We study this sensitivity by considering a Newtonian fluid bath covered by an insoluble surfactant subject to vertical vibrations which produce sub-harmonic Faraday waves. By assuming a finite-depth, infinite-breadth, low-viscosity bulk fluid and accounting for surface tension, Marangoni, and Boussinesq effects, we derive an expression for the onset acceleration up to second order in the expansion parameter \(\Upsilon=\sqrt{\frac{1}{Re}}\). We recover the results of previous numerical investigations, but only by modifying the Marangoni and Boussinesq numbers to account for the low-viscosity limit. The analytic expression allows us to consider a range of parameters not previously studied, including a wide variety of fluid depths and driving frequencies. In addition, we uncover regions of parameter space for which our model predicts that the addition of surfactant would lower, rather than elevate, the onset acceleration. We discuss the possible use of this model in developing a surface viscometer for surfactant monolayers.
Faraday waves, Instability, Interfacial Flows
## 1 Introduction
When faced with a roaring ocean, Roman sailors would break open casks of oil, spilling the contents into the sea, and would ride in a patch of quiet oil-covered water until the storm abated. This calming effect of an oil layer on ocean waves (gravity-capillary waves) has been reported by Pliny the Elder (Pliny the Elder), publicized by Benjamin Franklin Franklin et al. (1774), and utilized by Shields at Aberdeen Harbor Aitken (1882) as a way of keeping seafaring craft safe. More recently, the dynamic effects of surface materials on gravity-capillary waves have become useful for the remote detection of crude oil spills Cini et al. (1983); Brekke and Solberg (2005); Ghanmi et al. (2015), detection of biological molecules Picard
and Davoust (2007, 2009), measurement of bulk and interfacial rheology Lucassen-Reynders and Lucassen (1970); Douady (1990); Jiang et al. (1993); Raynal et al. (1999); Saylor et al. (2000); Behroozi et al. (2007); Shao et al. (2018); Lau et al. (2020), and the patterning of interfaces Henderson et al. (1991); Henderson (1998); Wright and Saylor (2003).
In these applications, a layer of surfactant reduces the surface tension (\(\sigma\)) of the bulk fluid by an amount that depends upon the surface density (\(\Gamma\)) of the surfactant, and typically the surface tension decreases monotonically as surfactant density increases. An inhomogeneous distribution of surfactant will result in surface tension gradients (Marangoni stresses) which drive flow in the bulk fluid. This flow then transports the surfactant molecules, modifying their spatial distribution. If left unperturbed, the coupled surfactant-fluid system would reach an equilibrium for which the surfactant becomes homogeneously distributed and the Marangoni stresses vanish via diffusion.
A traveling gravity-capillary wave will compress and expand the surfactant-covered interface, giving rise to Marangoni stresses (Lange and Huhnerfuss 1984). These stresses in turn result in a viscous boundary layer at the fluid surface where dissipation of the wave's energy is enhanced. The energy dissipation is often made apparent through the exponential decay of the wave's amplitude as it propagates (Reynolds 1880; Levich 1941; Dorrestein 1951; Case and Parkinson 1957; Goodrich 1961; Davies and Vose 1965; Lucassen-Reynders and Lucassen 1970; Jiang et al. 1993; Saylor et al. 2000; Behroozi et al. 2007).
For linear small-amplitude gravity-capillary waves, the energy dissipation rate is characterized by the damping parameter \(\delta\) and is related to the surface elastic modulus (a.k.a. surface dilational modulus or Gibbs' elasticity), \(\varepsilon_{0}=-\Gamma_{0}\frac{d\sigma}{d\Gamma}\) where \(\Gamma_{0}\) is the equilibrium mean surfactant density. Contemporary theoretical and experimental research (Reynolds 1880; Levich 1941; Dorrestein 1951; Case and Parkinson 1957; Goodrich 1961; Davies and Vose 1965; Miles 1967; Lucassen-Reynders and Lucassen 1970; Jiang et al. 1993; Saylor et al. 2000; Behroozi et al. 2007) has shown that the damping \(\delta\) increases non-monotonically as a function of \(\varepsilon_{0}\). For \(\varepsilon_{0}=0\), \(\delta\) measures the bulk damping effect in the absence of surfactant. As \(\varepsilon_{0}\) is increased (typically by adding more surfactant), \(\delta\) reaches a maximum that can be an order of magnitude larger than the surfactant-free bulk damping. For larger \(\varepsilon_{0}\), \(\delta\) decreases to a value that is roughly half of its peak value (Reynolds 1880; Levich 1941; Dorrestein 1951; Case and Parkinson 1957; Goodrich 1961; Davies and Vose 1965; Miles 1967; Lucassen-Reynders and Lucassen 1970; Jiang et al. 1993; Saylor et al. 2000; Behroozi et al. 2007).
Small-amplitude standing gravity-capillary waves, known as Faraday waves (Faraday 1831), emerge when a fluid bath is vertically vibrated at an angular frequency \(\omega\), provided the acceleration amplitude \(a\) meets or exceeds the onset acceleration \(a_{c}\) for that frequency. The emergent wave, with angular frequency \(\omega_{0}\), can either be harmonic (\(\omega_{0}=\omega\)) or sub-harmonic (\(\omega_{0}=\frac{1}{2}\omega\)). In this work, we focus exclusively on sub-harmonic Faraday waves.
We understand the emergence of the Faraday waves from an energy-balance standpoint: the fluid bath dissipates energy in every wave mode due to the viscosity of the fluid. On the other hand, vertical vibration injects energy into all wave modes, but not uniformly. When the driving amplitude is less than the onset acceleration, the energy dissipation exceeds energy injection in all modes so that no wave emerges, but at the onset acceleration \(a_{c}\), a single wave mode has more energy injected than dissipated, while all others remain dissipative. Therefore, a wave emerges with a selected pattern of wavenumber \(k_{c}\)(Edwards and Fauve 1994; Gollub 2006; Ibrahim 2015).
Because the energy dissipation largely occurs in the boundary layer near the free surface, the onset acceleration for Faraday waves is very sensitive to the presence of surface stresses such as due to surface tension, surface elasticity, and surface viscosity. Because surfactants
modify these stresses, the onset of Faraday waves can serve as an effective indicator of the presence of a surfactant as well as a means of measuring the rheological properties of the surfactant layer. The effects of soluble surfactant on the Faraday wave onset have been observed (Ballesta and Manneville 2005) as has the effect of insoluble surfactant on the damping rates of Faraday waves (Henderson et al. 1991; Henderson 1998).
Benjamin and Ursell (1954); Kumar and Tuckerman (1994); Chen and Vinals (1999), and Kumar and Matar (2002, 2004a,b) have made theoretical predictions for these non-linear waves that relate the onset acceleration to the rate of energy dissipation in the system. These theoretical predictions for the onset acceleration are typically formulated by using Floquet analysis, first applied by Kumar and Tuckerman (1994), which results in a recursion relation whose truncation is often solved with numerical techniques for a pre-specified parameter regime. This combination of Floquet analysis and numerical solvers has been expanded to consider surfactant effects by Ubal et al. (2005a,b,c); Giavedoni and Ubal (2007); Kumar and Matar (2004b); Mikishev et al. (2016), who found that the onset acceleration is sensitive to the rheological properties of the surfactant layer in much the same way that the viscous damping parameter \(\delta\) for linear gravity-capillary waves depends upon the surface elasticity \(\varepsilon_{0}\). Using a different approach (a purely analytic technique), Chen and Vinals (1999) considered Faraday waves on an infinite-depth surfactant-free fluid and derived an exact expression for the onset acceleration and wave number of the emergent Faraday waves. Chen and Vinals also started with Floquet analysis yielding a recursion relation, but instead of solving this relation numerically, they considered the weak viscosity limit, expanding the driving acceleration and wave number in terms of a small viscous damping parameter \(\gamma\) (in the present notation, \(\Upsilon=\sqrt{\gamma/2}\)) and solving for the coefficients of the expansion.
In this paper, we extend the techniques of Chen and Vinals (1999) into the finite-depth low-viscosity regime with surfactant, and we show that our analysis improves upon the numerical predictions of Kumar and Matar (2004a) and Giavedoni and Ubal (2007). In section §2, we present the parameterization of our system and the linearized governing equations for our model. The techniques for solving these equations are in section §3 while the general solution and special cases are in section §4. In section §5, we compare our analytic solution to the results of previous numerical investigations. Novel features of the onset acceleration are in section §6, and a possible application of this model for developing a surface viscometer is detailed in section §7.
## 2 Parameterization & Governing Equations
We consider, as shown in figure 1 and parameterized in table 1, an incompressible Newtonian fluid of infinite horizontal extent and finite depth \(H\) with flow velocity \(\vec{u}(\vec{r},t)=[u,v,w]\), density \(\rho\), and dynamic viscosity \(\mu\). The elevation of the free surface is given by \(z=\zeta(\vec{r}_{H},t)\) where we denote the horizontal coordinates as \(\vec{r}_{H}\). We will use the subscript \({}_{H}\) to indicate a projection onto the horizontal plane. The dynamics of the air above the free surface are taken to be negligible.
The surfactant monolayer at the free surface is treated as a 2-dimensional Newtonian fluid. At equilibrium the mean surfactant mass density is uniformly \(\Gamma_{0}\) which determines the mean surface tension \(\sigma_{0}(\Gamma_{0})\), mean surface dilational viscosity \(\Lambda(\Gamma_{0})\), mean surface shear viscosity \(M(\Gamma_{0})\), and mean surface diffusivity \(D(\Gamma_{0})\). When the surface is dynamic, the instantaneous local surfactant mass density is \(\Gamma(\vec{r}_{H},t)\); the perturbations this induces in \(\Lambda\), \(M\), and \(D\) away from their equilibrium values are small and are neglected. Variations in the surface tension \(\sigma=\sigma(\Gamma)\) are significant, however; in linearizing the governing equations, the effects of surface tension are divided into the equilibrium surface tension \(\sigma_{0}\) and the surface elasticity \(\varepsilon_{0}=-\Gamma_{0}\frac{d\sigma}{d\Gamma}\), which will both be treated as constants.
To ensure that the unperturbed fluid surface remains at \(z=0\) during vibration, we will
analyze the system in the non-inertial reference frame that is co-vibrated with the container floor. In effect, this gives a gravitational body-force term \(\vec{g}(t)=-\left(g+a\cos(\omega t)\right)\hat{z}\) in which \(g=9.8\) m/s\({}^{2}\) is the gravitational constant and \(a\) is the acceleration amplitude of the vibration.
In developing the model, we non-dimensionalize the governing equations by choosing
\begin{table}
\begin{tabular}{c|c} \hline Symbol & Description \\ \hline \hline \([x,y,z]\) & Cartesian coordinates \\ \(\vec{r}_{H}\) & horizontal coordinates \\ \(H\) & bulk fluid depth \\ \(\zeta\) & surface elevation \\ \(\vec{u}=[u,v,w]\) & flow velocity \\ \(\rho\) & bulk fluid density \\ \(\mu\) & bulk fluid dynamic viscosity \\ \(\delta\) & damping parameter \\ \(\Gamma\), \(\Gamma_{0}\) & surfactant surface density \& mean equilibrium surface density \\ \(\sigma\), \(\sigma_{0}\) & surface tension \& mean equilibrium surface tension \\ \(\varepsilon_{0}\) & surface elastic modulus \\ \(\Lambda\) & surface dilational viscosity \\ \(M\) & surface shear viscosity \\ \(\Omega=\Lambda+2M\) & combined surface viscosity \\ \(D\) & surface diffusivity \\ \(g\) & gravitational constant \\ \(a\) & driving acceleration amplitude \\ \(\omega\) & driving frequency \\ \(\omega_{0}=\frac{1}{2}\omega\) & sub-harmonic Faraday wave frequency \\ \(a_{c}\) & critical (onset) acceleration \\ \(k_{c}\) & critical (onset) wavenumber \\ \(k_{0}\) & wavenumber for gravity-capillary waves on an unvibrated fluid \\ \(l_{0}=1/k_{0}\) & lengthscale \\ \hline \end{tabular}
\end{table}
Table 1: Physical parameters
Figure 1: Schematic showing a horizontally-infinite layer of incompressible fluid with mean finite depth \(H\) covered with a surfactant layer (green) subject to vertical vibrations of angular frequency \(\omega\) with an amplitude \(a\) driven by the container floor (black). The flat equilibrium fluid surface (dashed line) is taken as \(z=0\) and the perturbation of that surface is \(\zeta\). The bulk fluid is assumed to have constant density \(\rho\) and dynamic viscosity \(\mu\), with flow velocity \(\vec{u}\). The free surface has spatiotemporally-varying surfactant mass density \(\Gamma\), equilibrium surface tension \(\sigma_{0}\), surface elasticity \(\varepsilon_{0}\), surface dilational viscosity \(\Lambda\), surface shear viscosity \(M\), and surface diffusivity \(D\).
scales for time, length, and mass. We scale time by \(\frac{1}{\omega_{0}}\), the inverse of the Faraday wave frequency. The length scale is set by \(l_{0}=\frac{1}{k_{0}}\), where \(k_{0}\) is the wavenumber for a gravity-capillary wave on an unvibrated fluid as given by the finite-depth Kelvin dispersion relation \(1=\left(\frac{g}{\omega_{0}^{2}}k_{0}+\frac{\sigma_{0}}{\rho\omega_{0}^{2}}k_{0}^{3}\right)\tanh(k_{0}H)\). We scale mass by \(\rho l_{0}^{3}\). Velocities are scaled by \(l_{0}\omega_{0}\), and \(\zeta\) and \(H\) are scaled by a factor of \(l_{0}\). We also scale the surfactant density by \(\Gamma_{0}\) and pressure by \(\rho l_{0}^{2}\omega_{0}^{2}\). This process results in the standard dimensionless numbers defined in table 2. For the remainder of this manuscript, \(\vec{u}\), \(p\), \(k\), \(H\), \(\zeta\), and \(\Gamma\) will refer to the dimensionless versions of these quantities.
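For concreteness, the following minimal Python sketch (illustrative only, not part of the analysis) shows how the finite-depth Kelvin dispersion relation can be solved numerically for \(k_{0}\) and how the resulting length scale feeds into the dimensionless numbers of table 2; the water-like parameter values are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import brentq

# illustrative, water-like inputs (assumed values, SI units)
rho, mu, sigma0, g = 1.0e3, 1.0e-3, 70e-3, 9.8
H, omega0 = 1.5e-3, np.pi * 120          # depth [m] and Faraday-wave frequency [rad/s]

def dispersion_residual(k):
    """Finite-depth Kelvin dispersion relation, written as a residual in k."""
    return (g * k / omega0**2 + sigma0 * k**3 / (rho * omega0**2)) * np.tanh(k * H) - 1.0

k0 = brentq(dispersion_residual, 1e-3, 1e6)   # the left-hand side is monotonic in k
l0 = 1.0 / k0

# dimensionless groups from table 2
Upsilon = np.sqrt(mu / (rho * omega0 * l0**2))
G       = g / (l0 * omega0**2)
Sigma   = sigma0 / (rho * omega0**2 * l0**3)

# sanity check: in dimensionless variables the dispersion relation reads (G + Sigma) tanh(H/l0) = 1
assert abs((G + Sigma) * np.tanh(H / l0) - 1.0) < 1e-9
print(f"k0 = {k0:.1f} 1/m, l0 = {l0*1e3:.3f} mm, Upsilon = {Upsilon:.4f}")
```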
To analyze the system at onset, we linearize the governing equations and boundary conditions around the equilibrium \(\vec{u}=0\), \(\zeta=0\), \(\Gamma=1\), which we refer to as the trivial solution. The bulk fluid satisfies the linearized incompressible Navier-Stokes equations,
\[\partial_{t}\vec{u}=-\vec{\nabla}p+\frac{1}{\mathcal{R}e}\nabla^{2}\vec{u}-G \,\left(1+\mathcal{F}\cos(2t)\right)\hat{z} \tag{2.1}\]
\begin{table}
\begin{tabular}{c|c} Definition & Description \\ \hline \hline \(\mathcal{F}=\frac{a}{g}\) & dimensionless acceleration amplitude \\ \(G=\frac{g}{l_{0}\omega_{0}^{2}}\) & dimensionless gravitational acceleration \\ \(\Sigma=\frac{\sigma_{0}}{\rho\omega_{0}^{2}l_{0}^{3}}\) & dimensionless surface tension \\ \(\frac{G}{\Sigma}=\frac{\rho gl_{0}^{2}}{\sigma_{0}}\) & Bond number \\ \(\mathcal{R}e=\frac{\rho\omega_{0}l_{0}^{2}}{\mu}\) & bulk Reynolds number \\ \(\Upsilon=\sqrt{\frac{\mu}{\rho\omega_{0}l_{0}^{2}}}=\sqrt{\frac{1}{\mathcal{R}e}}\) & the expansion parameter \\ \(Ca=\frac{1}{\mathcal{R}e\Sigma}=\frac{\mu\omega_{0}l_{0}}{\sigma_{0}}\) & capillary number \\ \(\mathcal{M}=\frac{\varepsilon_{0}/\sigma_{0}}{Ca}=\frac{\varepsilon_{0}}{\mu\omega_{0}l_{0}}\) & Marangoni number \\ \(\mathcal{M}^{\dagger}=\Upsilon\mathcal{M}=\frac{\varepsilon_{0}}{\sqrt{\rho\mu\omega_{0}^{3}l_{0}^{4}}}\) & modified Marangoni number \\ \(\mathcal{P}e=\frac{\omega_{0}l_{0}^{2}}{D}\) & surface Peclet number \\ \(\mathcal{B}_{D}=\frac{\Lambda+M}{\mu l_{0}}\) & dilational Boussinesq number \\ \(\mathcal{B}_{S}=\frac{M}{\mu l_{0}}\) & shear Boussinesq number \\ \(\mathcal{B}=\mathcal{B}_{D}+\mathcal{B}_{S}=\frac{\Omega}{\mu l_{0}}\) & combined Boussinesq number \\ \(\mathcal{B}^{\dagger}=\Upsilon\mathcal{B}=\frac{\Omega}{\sqrt{\rho\mu\omega_{0}l_{0}^{4}}}\) & modified Boussinesq number \\ \end{tabular}
\end{table}
Table 2: Dimensionless numbers
\[\vec{\nabla}\cdot\vec{u}=0 \tag{2.2}\]
with Reynolds number \(\mathcal{R}e\), gravitation number \(G\), and dimensionless driving acceleration \(\mathcal{F}\). The bulk fluid also satisfies the no-slip boundary condition at the floor \(z=-H\) :
\[\vec{u}=0,\ \ \ z=-H. \tag{2.3}\]
At the free surface \(z=\zeta(\vec{r}_{H},t)\), we have further boundary conditions: the kinematic boundary condition,
\[\partial_{t}\zeta=w, \tag{2.4}\]
and the surface continuity equation, expressing the advection and diffusion of the surfactant,
\[0=\partial_{t}\Gamma+\vec{\nabla}_{H}\cdot\vec{u}_{H}-\frac{1}{\mathcal{P}e}\nabla_{S}^{2}\Gamma \tag{2.5}\]
with Peclet number \(\mathcal{P}e\).
Since the mass of the surfactant monolayer is negligible, the surface tangential and normal stress boundary conditions contain no inertial or gravitational terms for the surfactant,
\[0=-\left[\begin{array}{c}\partial_{x}w+\partial_{z}u\\ \partial_{y}w+\partial_{z}v\end{array}\right]-\mathcal{M}\vec{\nabla}_{H}\Gamma+\mathcal{B}_{D}\vec{\nabla}_{H}\vec{\nabla}_{H}\cdot\vec{u}_{H}+\mathcal{B}_{S}\vec{\nabla}_{H}^{2}\vec{u}_{H}, \tag{2.6}\]
\[0=\frac{1}{\Sigma}p-2Ca\,\partial_{z}w-\nabla_{H}^{2}\zeta \tag{2.7}\]
with Marangoni number \(\mathcal{M}\) and dilational and shear Boussinesq numbers \(\mathcal{B}_{D}\) and \(\mathcal{B}_{S}\) respectively. General governing equations for a surfactant-covered surface are given by Scriven (1960), with later corrections by Waxman (1984).
The equations can be reduced and expressed in terms of \(w\), \(\zeta\), and \(\Gamma\) exclusively. In the reduction, the dilational and shear Boussinesq effects can be combined, so it is convenient to define an effective surface viscosity \(\Omega=\Lambda+2M\) and an effective Boussinesq number \(\mathcal{B}=\mathcal{B}_{D}+\mathcal{B}_{S}\). The equations and boundary conditions are then
\[0=\left[\partial_{t}(\nabla_{H}^{2}+\partial_{zz})-\frac{1}{\mathcal{R}e}(\nabla_{H}^{2}+\partial_{zz})^{2}\right]w \tag{2.8a}\]
\[w=0;\ \ \ \partial_{z}w=0,\ \ z=-H \tag{2.8b}\]
\[\partial_{t}\zeta=w,\ \ \ z=\zeta \tag{2.8c}\]
\[\partial_{t}\Gamma-\partial_{z}w-\frac{1}{\mathcal{P}e}\nabla_{H}^{2}\Gamma=0,\ \ \ z=\zeta \tag{2.8d}\]
\[-\left[\nabla_{H}^{2}-\partial_{zz}\right]w-\mathcal{M}\nabla_{H}^{2}\Gamma-\mathcal{B}\nabla_{H}^{2}\partial_{z}w=0,\ \ \ z=\zeta \tag{2.8e}\]
\[Ca\ \partial_{z}\left(\mathcal{R}e\partial_{t}-\left(3\nabla_{H}^{2}+\partial_{zz}\right)\right)w-\frac{G}{\Sigma}\left(1+\mathcal{F}\cos(2t)\right)\nabla_{H}^{2}\zeta+\nabla_{H}^{2}\nabla_{H}^{2}\zeta=0,\ \ \ z=\zeta \tag{2.8f}\]
with Capillary number \(Ca\), dimensionless surface tension \(\Sigma\), and Bond number \(\frac{G}{\Sigma}\).
## 3 Technique for Finding the Onset Acceleration
The onset acceleration \(\mathcal{F}_{c}=\frac{a_{c}}{g}\) is the minimum value of the driving parameter \(\mathcal{F}\) for which \(w\), \(\zeta\), and \(\Gamma\) are non-trivial. As detailed in appendix A, we apply an ansatz and solve for non-trivial \(w\), \(\zeta\), and \(\Gamma\) up to a family of constants \(\zeta_{j}\) which are the amplitudes of each wave mode. We will see that these wave mode amplitudes are coupled in a way that allows us to solve for the onset acceleration.
For a given wavenumber \(k\), the ansatz becomes:
\[\begin{split}& w=\cos(\vec{k}\cdot\vec{r}_{H})\sum_{j\in\mathbb{Z}_{\text{odd}}}ije^{ijt}\left(\mathcal{A}_{j}\sinh(kz)+\mathcal{B}_{j}\cosh(kz)+\mathcal{C}_{j}\sinh(q_{j}z)+\mathcal{D}_{j}\cosh(q_{j}z)\right)\\ &\zeta=\cos(\vec{k}\cdot\vec{r}_{H})\sum_{j\in\mathbb{Z}_{\text{odd}}}\zeta_{j}e^{ijt}\\ &\Gamma=1+\cos(\vec{k}\cdot\vec{r}_{H})\sum_{j\in\mathbb{Z}_{\text{odd}}}\Gamma_{j}e^{ijt}\end{split} \tag{3.1}\]
where \(q_{j}^{2}=k^{2}+ij\mathcal{R}e\). The series for \(w\) represents the general solution of the fourth order partial differential equation (2.8\(a\)), which is satisfied by any choice of coefficients \(\mathcal{A}_{j}\), \(\mathcal{B}_{j}\), \(\mathcal{C}_{j}\), and \(\mathcal{D}_{j}\). Equation (2.8\(d\)) then gives a homogeneous equation expressing the coefficients \(\Gamma_{j}\) as linear combinations of \(\mathcal{A}_{j}\), \(\mathcal{B}_{j}\), \(\mathcal{C}_{j}\), and \(\mathcal{D}_{j}\). Together, the homogeneous equations (2.8\(b\)), (2.8\(c\)), and (2.8\(e\)) form a linear system showing that the coefficients \(\mathcal{A}_{j}\), \(\mathcal{B}_{j}\), \(\mathcal{C}_{j}\), and \(\mathcal{D}_{j}\) not only depend on \(k\), but also are all proportional to \(\zeta_{j}\).
We are left with equation (2.8\(f\)), in which the temporal modes are coupled due to the forcing term \(\mathcal{F}\cos(2t)\). In this term, \(\zeta_{j}e^{ijt}\cos(2t)\) splits into \(\zeta_{j}e^{i(j+2)t}\) and \(\zeta_{j}e^{i(j-2)t}\), so equation (2.8\(f\)) becomes a linear difference equation for the sequence of coefficients \(\{\zeta_{j}\}\):
\[0=-\zeta_{j}H_{j}+\zeta_{j-2}\mathcal{F}+\zeta_{j+2}\mathcal{F}. \tag{3.2}\]
Here, the term \(\zeta_{j}H_{j}\) is given by the formula:
\[\zeta_{j}H_{j}=-\frac{2}{G}\left[\zeta_{j}G+\zeta_{j}\Sigma k^{2}+\frac{ij}{k\mathcal{R}e}\left(\mathcal{A}_{j}(k^{2}+q_{j}^{2})+2kq_{j}C_{j}\right)\right]. \tag{3.3}\]
Since \(\mathcal{A}_{j}\), \(\mathcal{C}_{j}\) are proportional to \(\zeta_{j}\), the right hand side of equation (3.3) is also proportional to \(\zeta_{j}\). Consequently, each \(H_{j}\) depends only on the wavenumber \(k\).
Following Chen and Vinals (1999), to solve for the onset acceleration \(\mathcal{F}_{c}\) (the minimum \(\mathcal{F}\)), we truncate this difference equation as follows. We note that when a wave of wavenumber \(k\) is driven near its onset, it oscillates sub-harmonically, indicating that \(\zeta_{1}\) is the most significant contributor, with nearly all other modes being negligible. But as the driving exceeds the onset, higher order frequencies emerge, making the higher order \(\zeta_{j}\) more significant. Since we want to solve for the onset, we can consider the \(\zeta_{j}\) to approach 0 as \(j\rightarrow\infty\), suggesting that truncation at large \(j=n\) will provide increasingly accurate estimates. Starting with \(\zeta_{n}\), we use the recursion relation to solve for \(\zeta_{j}\), \(j\leqslant n\) :
\[\begin{split}\zeta_{n}=&\zeta_{n-2}\frac{\mathcal{F}}{H_{n}}\qquad\qquad\qquad\zeta_{n-2}=&\zeta_{n-4}\frac{\mathcal{F}}{H_{n-2}-\frac{\mathcal{F}^{2}}{H_{n}}}\qquad\qquad\qquad\dots\end{split} \tag{3.4}\]
Extending to \(\zeta_{1}\), we obtain:
\[\zeta_{1}=\zeta_{-1}\frac{\mathcal{F}}{H_{1}-\frac{\mathcal{F}^{2}}{H_{3}-\frac{\mathcal{F}^{2}}{H_{5}-\cdots}}} \tag{3.5}\]
Because \(\zeta\) (which measures the displacement of the fluid surface) is real valued, \(\zeta_{-1}=\zeta_{1}^{*}\) and therefore \(\frac{\zeta_{1}}{\zeta_{-1}}=e^{i\,\phi}\) where \(\phi=2\arg(\zeta_{1})\) measures the phase difference between the driving vibration (of frequency \(\omega\)) and the wave oscillation (of frequency \(\omega_{0}=\frac{1}{2}\omega\)). Eliminating the
\(\zeta_{j}\), we obtain an expression relating \(\mathcal{F}\) and \(k\) :
\[\mathcal{F}=e^{i\phi}\left(H_{1}-\frac{\mathcal{F}^{2}}{H_{3}-\frac{\mathcal{F}^{2}}{H_{5}-\ldots}}\right). \tag{3.6}\]
Since the coefficients \(H_{j}\) depend only on the wavenumber \(k\), this equation defines the driving acceleration \(\mathcal{F}\) implicitly as a function of \(k\). The onset acceleration \(\mathcal{F}_{c}=\mathcal{F}(k_{c})\) is the minimum of \(\mathcal{F}(k)\), with onset wavenumber \(k=k_{c}\).
In finding \(\mathcal{F}_{c}\), one could proceed with a first derivative test as done by Chen and Vinals (1999). Here, we instead consider that, as with a driven damped oscillator, resonance occurs when the driver optimally injects energy into the oscillator. For a driven oscillator this is often expressed as a phase difference of \(\frac{\pi}{2}\) between the forcing and the position; a similar phase difference occurs for Faraday waves near onset. As shown in Figure 1 of Douady et al. (1989), the vibration acceleration and the Faraday wave have a phase difference \(\phi=\frac{\pi}{2}\), and on close examination, the solution reported by Chen and Vinals (1999) exhibits this same phase difference. Consequently, at the onset acceleration \(\mathcal{F}=\mathcal{F}_{c}\), we have \(\frac{\zeta_{1}}{\zeta_{-1}}=e^{i\phi}=i\), and (3.6) becomes:
\[0=i\mathcal{F}_{c}+H_{1}-\frac{\mathcal{F}_{c}^{2}}{H_{3}-\frac{\mathcal{F}_{c}^{2}}{H_{5}-\ldots}}. \tag{3.7}\]
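As a schematic illustration (not the code used in this work), the truncated onset condition (3.7) can be evaluated numerically once the coupling coefficients \(H_{j}(k)\) are available; in the sketch below, `Hj` is a user-supplied placeholder, and a concrete low-viscosity form for \(H_{j}\) is assembled in appendix B.

```python
def continued_fraction(F, k, Hj, n_max=21):
    """H_1 - F^2/(H_3 - F^2/(H_5 - ...)), truncated at the odd index n_max."""
    tail = Hj(k, n_max)
    for j in range(n_max - 2, 0, -2):          # j = n_max-2, ..., 3, 1
        tail = Hj(k, j) - F**2 / tail
    return tail

def onset_residual(F, k, Hj):
    """|i F + (continued fraction)|; this vanishes at the onset."""
    return abs(1j * F + continued_fraction(F, k, Hj))

# The onset is located by scanning wavenumbers near the inviscid value k = 1 and,
# for each k, finding the smallest F >= 0 that drives the residual to zero; the
# onset acceleration F_c is the minimum of these values over k.
```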
To solve equation (3.7) in the case of weak bulk viscosity, we express the onset wavenumber \(k_{c}\) and onset acceleration \(\mathcal{F}_{c}\) as power series in the parameter \(\Upsilon=\sqrt{\frac{1}{\mathcal{R}e}}\). In the limit \(\Upsilon\to 0\), we observe \(k_{c}\to 1\) and \(\mathcal{F}_{c}\to 0\), so that the power series become:
\[\mathcal{F}_{c}=\sum_{n=1}^{\infty}\alpha_{n}\Upsilon^{n},\ \ \ \ \ k_{c}=1+\sum_{n=1}^{\infty}\beta_{n}\Upsilon^{n}. \tag{3.8}\]
In considering the weak viscosity limit, the behaviors of the Marangoni and Boussinesq numbers need careful attention. A naive consideration of Table 2 would suggest that \(\mathcal{M}\sim\mathcal{B}\sim\frac{1}{\mu}\sim\mathcal{O}(\Upsilon^{-2})\); however, in using the method of dominant balance (see appendix B), we find \(\mathcal{M}\sim\mathcal{B}\sim\Upsilon^{-1}\) and therefore define modified Marangoni and modified Boussinesq numbers \(\mathcal{M}^{\dagger}\) and \(\mathcal{B}^{\dagger}\) as:
\[\mathcal{M}^{\dagger}=\frac{\varepsilon_{0}}{\sqrt{\mu\rho\omega_{0}^{3}l_{0}^{4}}}\,,\qquad\mathcal{B}^{\dagger}=\frac{\Omega}{\sqrt{\mu\rho\omega_{0}l_{0}^{4}}} \tag{3.9}\]
so that \(\mathcal{M}=\Upsilon^{-1}\mathcal{M}^{\dagger}\) and \(\mathcal{B}=\Upsilon^{-1}\mathcal{B}^{\dagger}\) with \(\mathcal{M}^{\dagger}\sim\mathcal{B}^{\dagger}\sim\mathcal{O}(1)\). The particular choice \(\mathcal{M}=\Upsilon^{-1}\mathcal{M}^{\dagger}\) provides a way of putting the surfactant effects as \(\mathcal{O}(1)\) in \(H_{j}\) rather than \(\mathcal{O}(\Upsilon)\). If one were to take \(\mathcal{M}=\mathcal{M}^{\dagger}\), the onset acceleration would monotonically increase without bound as the Marangoni number increases. Alternatively, if one were to take \(\mathcal{M}=\Upsilon^{-2}\mathcal{M}^{\dagger}\), then all of the onset acceleration terms second order and higher would be infinite. In choosing \(\mathcal{M}=\Upsilon^{-1}\mathcal{M}^{\dagger}\), we obtain quantitative agreement with previous numerical work as we will show in section §5.
At this point, we solve for the \(\alpha_{n}\) and \(\beta_{n}\) by substitution into eqn (3.7) and collect terms of like order in \(\Upsilon\). We find that the complex valued equations permit real valued solutions for \(\alpha_{n}\) and \(\beta_{n}\). For reference, appendix B shows the \(H_{j}\) expanded in terms of \(\Upsilon\).
## 4 The solution
Before presenting the general solution for the onset acceleration, it is reassuring to consider the solution in a few specific cases that have already been well-studied. The original analytical work by Chen and Vinals (1999) considered the infinite-depth surfactant-free problem, so in section §4.1, we will show that our analysis recovers their result. In section §4.2, we will extend the surfactant-free problem to the finite-depth limit. In section §4.3, we will consider the infinite-depth surfactant problem since the Marangoni, Boussinesq, and Peclet effects are easier to identify. We will then present the complete solution for a finite-depth surfactant-covered fluid in section §4.4. For all cases, Mathematica was used to help calculate the coefficients of the series expansions.
### The surfactant-free infinite-depth case
We obtain the surfactant-free infinite-depth case by letting \({\cal M}^{\dagger}\to 0\), \({\cal B}^{\dagger}\to 0\), and \(H\to\infty\). In this case, eqn (3.7) becomes:
\[\eqalign{0=&-{2\over G}\left(-1+G+\Sigma\right)\cr&+\Upsilon\left(i\alpha_{1}-{2 \over G}\beta_{1}\left(1+2\Sigma\right)\right)\cr&+\Upsilon^{2}\left(i\left( \alpha_{2}-{8\over G}\right)-{2\over G}\beta_{2}(1+2\Sigma)\right)\cr&+ \Upsilon^{3}\left(i\left(\alpha_{3}+{4\sqrt{2}\over G}\right)-{2\over G} \left(\beta_{3}(1+2\Sigma)-2\sqrt{2}\right)\right)\cr&+\Upsilon^{4}\left(i \alpha_{4}-{2\over G}\left(\beta_{4}(1+2\Sigma)+4+{G^{2}\over 32}\alpha_{2}^{2} \right)\right)\cr&+\Upsilon^{5}\left(i\left(\alpha_{5}-{2\sqrt{2}+8\beta_{3} \over G}\right)-{2\over G}\left(\beta_{5}(1+2\Sigma)-\sqrt{2}+{G^{2}\over 16} \alpha_{2}\alpha_{3}\right)\right)\cr&+{\cal O}(\Upsilon^{6})\cr}\]
Because \(\Upsilon\) is arbitrary, each term must independently vanish. The zeroth order term is a reiteration of the infinite-depth Kelvin dispersion relation, and the higher order terms permit solutions for the \(\alpha_{j}\), \(\beta_{j}\). The onset acceleration and wavenumber thus become:
\[{\cal F}_{c}={1\over G}\left[8\Upsilon^{2}-4\sqrt{2}\Upsilon^{3}+{2\sqrt{2}(11-2G)\over(3-2G)}\Upsilon^{5}+{\cal O}(\Upsilon^{6})\right] \tag{4.2a}\]
\[k_{c}=\left[1+{2\sqrt{2}\over 3-2G}\Upsilon^{3}-{6\over 3-2G}\Upsilon^{4}+{3\sqrt{2}\over 3-2G}\Upsilon^{5}+{\cal O}(\Upsilon^{6})\right] \tag{4.2b}\]
The expression for the onset acceleration \({\cal F}_{c}\) is identical to Chen and Vinals (1999), where \(\Upsilon=\sqrt{\gamma/2}\) in their notation. However, their expression for the wavenumber \(k_{c}\) is the same only up to third order, so the formula in Chen and Vinals (1999) is not a valid solution to equation (3.7).
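The order-by-order bookkeeping described in §3 is easily automated; as an illustrative sketch (using sympy in place of the Mathematica computation mentioned above), the coefficient conditions printed in the expansion above can be solved sequentially, reproducing (4.2a) and (4.2b).

```python
import sympy as sp

G = sp.symbols('G', positive=True)
Sigma = 1 - G                                   # zeroth order: -1 + G + Sigma = 0
a1, a2, a3, a4, a5 = sp.symbols('alpha1:6', real=True)
b1, b2, b3, b4, b5 = sp.symbols('beta1:6', real=True)

# coefficient of each power of Upsilon, copied from the expansion above;
# each must vanish separately (real and imaginary parts)
orders = [
    sp.I*a1 - 2*b1*(1 + 2*Sigma)/G,
    sp.I*(a2 - 8/G) - 2*b2*(1 + 2*Sigma)/G,
    sp.I*(a3 + 4*sp.sqrt(2)/G) - 2*(b3*(1 + 2*Sigma) - 2*sp.sqrt(2))/G,
    sp.I*a4 - 2*(b4*(1 + 2*Sigma) + 4 + G**2*a2**2/32)/G,
    sp.I*(a5 - (2*sp.sqrt(2) + 8*b3)/G) - 2*(b5*(1 + 2*Sigma) - sp.sqrt(2) + G**2*a2*a3/16)/G,
]

sol = {}
for expr in orders:                             # solve order by order, re-using lower orders
    expr = expr.subs(sol)
    unknowns = sorted(expr.free_symbols - {G}, key=lambda s: s.name)
    re_part, im_part = expr.as_real_imag()
    sol.update(sp.solve([re_part, im_part], unknowns, dict=True)[0])

print(sp.simplify(sol[a5]))   # should simplify to 2*sqrt(2)*(11 - 2*G)/(G*(3 - 2*G)), as in (4.2a)
print(sp.simplify(sol[b3]), sp.simplify(sol[b4]), sp.simplify(sol[b5]))   # coefficients in (4.2b)
```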
### The surfactant-free finite-depth case
The surfactant-free finite-depth case is achieved by letting \({\cal M}^{\dagger}\to 0\) and \({\cal B}^{\dagger}\to 0\) while keeping \(H\) arbitrary. The onset acceleration becomes:
\[{\cal F}_{c}={1\over G}\left[\sqrt{2}\operatorname{csch}^{2}(H)\Upsilon+4\coth(H){4\Sigma\cosh(2H)+\cosh(3H)\operatorname{csch}(H)+4H-2\Sigma\over Q_{H}}\Upsilon^{2}+{\cal O}(\Upsilon^{3})\right] \tag{4.3}\]
where \(Q_{H}=2\Sigma\cosh(2H)+\sinh(2H)+2H-2\Sigma\).
In the infinite-depth limit (\(H\to\infty\)), the first-order term vanishes, and the second order term collapses to \(\frac{8}{G}\Upsilon^{2}\), in agreement with (4.2a).
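A direct numerical evaluation of (4.3) is straightforward; the short sketch below (illustrative only) keeps the first two orders and can be used to check the deep-water limit quoted above.

```python
import numpy as np

def Fc_surfactant_free(Upsilon, H, Sigma, G):
    """First two orders of (4.3) for the finite-depth surfactant-free onset."""
    QH = 2*Sigma*np.cosh(2*H) + np.sinh(2*H) + 2*H - 2*Sigma
    first  = np.sqrt(2.0) / np.sinh(H)**2
    second = (4.0/np.tanh(H)) * (4*Sigma*np.cosh(2*H) + np.cosh(3*H)/np.sinh(H)
                                 + 4*H - 2*Sigma) / QH
    return (first*Upsilon + second*Upsilon**2) / G

# deep-water check: for large H the O(Upsilon) term dies off and the O(Upsilon^2)
# coefficient tends to 8, so Fc -> (8/G) Upsilon^2 as noted above, e.g.
# Fc_surfactant_free(0.05, 20.0, Sigma=0.5, G=0.5)   # ~ 0.04 = (8/0.5)*0.05**2
```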
### The surfactant-covered infinite-depth case
We obtain the surfactant-covered infinite-depth case by letting \(H\to\infty\) while keeping \(\mathcal{M}^{\dagger}\) and \(\mathcal{B}^{\dagger}\) arbitrary. The onset acceleration becomes:
\[\mathcal{F}_{c}=\frac{1}{G}\left[\sqrt{2}\left(\frac{\mathcal{Q}_{S}-1+\frac{ \sqrt{2}\mathcal{M}^{\dagger}}{1+\frac{1}{\mathcal{P}e^{2}}}}{\mathcal{Q}_{S}} \right)\Upsilon+\left(\frac{2\Sigma\mathcal{N}_{1}+\mathcal{N}_{2}}{\left(2 \Sigma+1\right)\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}}\right)\Upsilon^ {2}+\mathcal{O}(\Upsilon^{3})\right] \tag{4.4}\]
where
\[\mathcal{Q}_{S}=1+\sqrt{2}\mathcal{B}^{\dagger}+\mathcal{B}^{\dagger 2}+\frac{ \mathcal{M}^{\dagger}}{\mathcal{P}e\left(1+\frac{1}{\mathcal{P}e^{2}}\right)} \left(\sqrt{2}+2\mathcal{B}^{\dagger}-\sqrt{2}\mathcal{P}e+\mathcal{M}^{ \dagger}\mathcal{P}e\right)\]
and \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\) (which are polynomials of \(\mathcal{M}^{\dagger}\), \(\mathcal{B}^{\dagger}\), \(\frac{1}{\mathcal{P}e}\), and \(1+\frac{1}{\mathcal{P}e^{2}}\)) are printed in appendix C.3.
In the surfactant-free limit (\(\mathcal{M}^{\dagger}\to 0\) and \(\mathcal{B}^{\dagger}\to 0\)), we find
\[\frac{\mathcal{N}_{1}}{\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}}\sim\frac{\mathcal{N}_{2}}{\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}}\sim 8\]
Thus the first-order term vanishes and the coefficient of the second-order term approaches \(\frac{16\Sigma+8}{G\left(2\Sigma+1\right)}\Upsilon^{2}\). Hence, equation (4.4) agrees with equation (4.2a) in this limit.
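Because \(\mathcal{Q}_{S}\) is given in closed form, the leading \(\mathcal{O}(\Upsilon)\) piece of (4.4) is easy to evaluate; the sketch below (illustrative only, with arbitrary water-like values assumed for \(\Upsilon\) and \(G\)) already exhibits the non-monotonic dependence of the onset acceleration on the Marangoni number discussed in sections §1 and §5.

```python
import numpy as np

def QS(Md, Bd, Pe):
    """Q_S as defined below (4.4); Md and Bd are the modified Marangoni and Boussinesq numbers."""
    return (1.0 + np.sqrt(2.0)*Bd + Bd**2
            + Md/(Pe*(1.0 + 1.0/Pe**2)) * (np.sqrt(2.0) + 2.0*Bd - np.sqrt(2.0)*Pe + Md*Pe))

def Fc_first_order(Upsilon, G, Md, Bd, Pe):
    """Leading O(Upsilon) term of (4.4) for the infinite-depth surfactant-covered fluid."""
    qs = QS(Md, Bd, Pe)
    return (np.sqrt(2.0)/G) * (qs - 1.0 + np.sqrt(2.0)*Md/(1.0 + 1.0/Pe**2)) / qs * Upsilon

# sweeping the Marangoni number at fixed Bd and large Pe shows the peak near Md ~ 1:
# Md = np.logspace(-2, 2, 200); Fc = Fc_first_order(0.065, 0.5, Md, 0.0, 8.0e4)
```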
### The general solution
The general expression for the onset acceleration is:
\[\mathcal{F}_{c}= \frac{1}{G}\left[\sqrt{2}\left(\frac{\mathcal{Q}_{S}\operatorname{csch}^{2}(H)+\left(\mathcal{Q}_{S}-1+\frac{\sqrt{2}\mathcal{M}^{\dagger}}{1+\frac{1}{\mathcal{P}e^{2}}}\right)\coth^{2}(H)}{\mathcal{Q}_{S}}\right)\Upsilon\right. \tag{4.5}\] \[\left.+\left(\coth(H)\frac{\cosh(2H)\left(4\Sigma\mathcal{L}_{1}+\mathcal{L}_{2}\coth(H)\right)+\cosh(3H)\operatorname{csch}(H)\mathcal{L}_{3}+4H\mathcal{L}_{4}-2\Sigma\mathcal{L}_{5}+\coth(H)\mathcal{L}_{6}}{\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}\left(\mathcal{Q}_{S}\right)^{3}\mathcal{Q}_{H}}\right)\Upsilon^{2}\right.\] \[\left.+\mathcal{O}(\Upsilon^{3})\right]\]
where the coefficients \(\mathcal{L}_{1}\), \(\mathcal{L}_{2}\), \(\mathcal{L}_{3}\), \(\mathcal{L}_{4}\), \(\mathcal{L}_{5}\), and \(\mathcal{L}_{6}\) (which are polynomials of \(\mathcal{M}^{\dagger}\), \(\mathcal{B}^{\dagger}\), \(\frac{1}{\mathcal{P}e}\), and \(1+\frac{1}{\mathcal{P}e^{2}}\)) are printed in appendix C.4.
In the surfactant-free limit, as \(\mathcal{Q}_{S}\to 1\), the first order term approaches \(\frac{\sqrt{2}\operatorname{csch}^{2}(H)}{G}\Upsilon\). Noting that the \(\mathcal{L}\) coefficients approach:
\[\frac{\mathcal{L}_{1}}{\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}}\sim \frac{\mathcal{L}_{3}}{\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}}\sim \frac{\mathcal{L}_{4}}{\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}}\sim \frac{\mathcal{L}_{5}}{\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}}\sim 4\] \[\frac{\mathcal{L}_{2}}{\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}} \sim\frac{\mathcal{L}_{6}}{\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}}\sim 0,\]
the second order term approaches \(\frac{4\coth(H)}{G}\frac{4\Sigma\cosh(2H)+\cosh(3H)\operatorname{csch}(H)+4H-2\Sigma}{\mathcal{Q}_{H}}\Upsilon^{2}\). In this limit, the general solution agrees with eqn 4.3.
In the infinite-depth limit, the first order term approaches \(\frac{\sqrt{2}}{G\mathcal{Q}_{S}}\left(\mathcal{Q}_{S}-1+\frac{\sqrt{2}\mathcal{M} ^{\dagger}}{1+\frac{1}{\mathcal{P}e^{2}}}\right)\Upsilon\). Noting that \(\mathcal{L}_{1}=\frac{1}{2}\mathcal{N}_{1}\) and \(\mathcal{L}_{2}=\mathcal{N}_{2}-2\mathcal{L}_{3}\), the second-order term asymptotically approaches
\[\frac{\frac{1}{2}e^{2H}(2\Sigma\mathcal{N}_{1}+\mathcal{N}_{2}-2\mathcal{L}_{3})+e^{2H}\mathcal{L}_{3}}{(1+\frac{1}{\mathcal{P}e^{2}})^{3}\mathcal{Q}_{S}^{3}\frac{1}{2}(2\Sigma+1)e^{2H}},\]
showing the general solution to agree with eqn 4.4.
## 5 Comparing to previous numerical results
Kumar and Matar (2004a,b); Ubal et al. (2005a,c), and Giavedoni and Ubal (2007) studied the finite-depth surfactant-covered Faraday wave problem, examining the effect of surfactants on the onset acceleration, wave number, and the phase shift between the surfactant distribution and the surface topography. Kumar and Matar (2004a,b) and Ubal et al. (2005a) accounted for Marangoni and Peclet effects while Ubal et al. (2005c) considered Boussinesq effects only. Giavedoni and Ubal (2007) generalize results of Ubal et al. (2005a) and Ubal et al. (2005c), accounting for all three effects. In establishing the efficacy of our analytic solution, we compare to Kumar and Matar (2004a,b) and Giavedoni and Ubal (2007). The parameters for these studies are shown in Table 3, and include a wide range of surface elasticities, viscosities, and diffusivities but only a few values for the fluid depth, viscosity, surface tension, density, and driving frequency.
Before comparing results, it is worth contrasting our analytic approach to the recursion relation (3.2) with previous numerical approaches. In our analysis, we truncated the recursion relation at an arbitrarily large \(n\), established a base case, and Taylor-expanded the problem in the weak-viscosity limit to second order in \(\Upsilon\); the numerical approaches are based on truncating the relation at \(j=10\), casting the relation into matrix form, and numerically solving the remaining expression as an eigenvalue problem, deducing the onset acceleration from the eigenvalue. Neither Kumar and Matar (2004a) nor Giavedoni and Ubal (2007) explicitly considered the weak-viscosity limit, but based on the parameters reported, they studied \(\Upsilon=0.0696\) and \(\Upsilon=0.0644\) respectively, well within the weak-viscosity limit.
In order to compare results, we converted the \(\mathcal{M}\) and \(\mathcal{B}\) from these numerical studies first to physical parameter values \(\varepsilon_{0}\), \(D\), and \(\Omega\) and then into the \(\mathcal{M}^{\dagger}\) and \(\mathcal{B}^{\dagger}\) used here. This process
\begin{table}
\begin{tabular}{c|c|c} Symbol & Kumar and Matar (2004a) & Giavedoni and Ubal (2007) \\ \hline \hline \(\omega_{0}\) (rad/s) & \(\pi\times 60\) & \(\pi\times 120\) \\ \(\rho\) (kg/m\({}^{3}\)) & \(1\times 10^{3}\) & \(1\times 10^{3}\) \\ \(\mu\) (kg/m/s) & \(1\times 10^{-3}\) & \(1\times 10^{-3}\) \\ \(\sigma_{0}\) (kg/s\({}^{2}\)) & \(30\times 10^{-2}\) & \(70\times 10^{-2}\) \\ \(g\) (m/s\({}^{2}\)) & \(9.81\) & \(9.80\) \\ \(H\) (m) & \(1\times 10^{-2}\) & \(1.5\times 10^{-3}\) \\ \(\varepsilon_{0}\) (kg/s\({}^{2}\)) & \(0-10^{-1}\) & \(10^{-6}-10^{-1}\) \\ \(D\) (m\({}^{2}\)/s) & \(10^{-9}-10^{-2}\) & \(10^{-9}-10^{-3}\) \\ \(\Omega\) (kg/s) & N/A & \(0-10^{-3}\) \\ \end{tabular}
\end{table}
Table 3: The parameter values and ranges used to compare our general solution to the numerical results of Kumar and Matar (2004a) and Giavedoni and Ubal (2007). The results of this comparison are shown in figures 2 and 3.
requires careful accounting, not only in the additional use of \(\Upsilon\) in the definitions of \(\mathcal{M}^{\dagger}\) and \(\mathcal{B}^{\dagger}\), but also in the definitions of the length and time scales used to non-dimensionalize the problem. Although we use the wavelength from the finite-depth Kelvin dispersion relation as the length scale, Kumar and Matar (2004a) used the fluid depth \(H\), and Giavedoni and Ubal (2007) used \(l_{0}=\frac{g}{\omega^{2}}+\sqrt[3]{\frac{\sigma_{0}}{\rho\omega^{2}}}\).
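A sketch of this conversion is given below; the numerical values passed to the function are placeholders within the ranges of table 3, not the specific cases of the cited studies, and \(l_{0}\) must be the length scale of the present non-dimensionalization.

```python
import numpy as np

def surface_numbers(eps0, Omega, D, rho, mu, omega0, l0):
    """Modified Marangoni/Boussinesq numbers and surface Peclet number from table 2."""
    Md = eps0  / np.sqrt(rho * mu * omega0**3 * l0**4)   # modified Marangoni number
    Bd = Omega / np.sqrt(rho * mu * omega0    * l0**4)   # modified Boussinesq number
    Pe = omega0 * l0**2 / D                              # surface Peclet number
    return Md, Bd, Pe

# placeholder values inside the ranges of table 3 (not a specific case from the cited papers)
Md, Bd, Pe = surface_numbers(eps0=1e-3, Omega=1e-6, D=1e-7,
                             rho=1e3, mu=1e-3, omega0=np.pi*120, l0=1.1e-3)
```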
Figure 2 shows that our analysis is in quantitative agreement with the results of Kumar and Matar (2004a) for a wide range of \(\mathcal{M}^{\dagger}\) and \(\mathcal{P}e\). These curves clearly show that when diffusion is negligible, the onset acceleration rapidly increases with surface elasticity up to a maximum that is significantly larger than the surfactant-free case. The onset acceleration then decreases to nearly half its peak value, very similar to the energy damping rate in linear gravity-capillary waves as discussed in section §1. Surface diffusivity acts to reduce, broaden, and shift the peak to higher values of Marangoni number.
Figure 3 shows that our analysis also agrees with the results of Giavedoni and Ubal (2007) for five decades of \(\mathcal{M}^{\dagger}\), eight decades of \(\mathcal{B}^{\dagger}\), and six decades of \(\mathcal{P}e\). Figure 3 (a) and (c) show the dependence of \(\mathcal{F}_{c}\) vs \(\mathcal{M}^{\dagger}\) and \(\mathcal{P}e\), and although the fluid depth, driving frequency, and surface tension are different than in Fig. 2, the trends in the plots are the same. Figure 3 (b) shows \(\mathcal{F}_{c}\) vs \(\mathcal{B}^{\dagger}\) which exhibits a steady rise to a plateau. Figure 3 (d) shows how surface viscosity also reduces, broadens, and moves the peak in the \(\mathcal{F}_{c}\) vs \(\mathcal{M}^{\dagger}\) plot. The surface viscosity also increases the onset acceleration at low \(\mathcal{M}^{\dagger}\).
To help visualize the behavior of \(\mathcal{F}_{c}\) in the \(\mathcal{M}^{\dagger}\)-\(\mathcal{B}^{\dagger}\) plane, we have added figure 4. Although no new comparison is made in this figure, it uses the same physical parameters as Fig. 3. This visualization clearly shows that \(\lim_{\mathcal{M}^{\dagger}\to\infty}\mathcal{F}_{c}=\lim_{\mathcal{B}^{ \dagger}\to\infty}\mathcal{F}_{c}\). Further, as \(\mathcal{P}e\to 0\), the peak lessens, broadens, shifts to higher values of \(\mathcal{M}^{\dagger}\), and ultimately vanishes.
Figure 3 (a,b) additionally show the behavior of the first-order and second-order terms of the analytic solution (4.5) and infinite-depth analytic approximation (4.4). The first-order terms capture the overall trends, particularly the maximum when \(\mathcal{M}^{\dagger}\approx 1\) and the rise when
Figure 2: **Comparison of the analytic solution (4.5) to numerical solutions of Kumar and Matar (2004a)**. The onset acceleration \(\mathcal{F}_{c}\) is plotted against the modified Marangoni number \(\mathcal{M}^{\dagger}\) for \(\mathcal{B}^{\dagger}=0\) for a range of Peclet numbers \(\mathcal{P}e\). The numerical results (finely dotted lines) are nearly indistinguishable from the corresponding analytic solution (solid lines).
\(\mathcal{B}^{\dagger}\approx 0.5\), but generally under-predict the onset acceleration. The second-order terms correct the under-prediction.
## 6 Additional features of the analytic solution
The results presented in section §5 cover a wide range of \(\mathcal{M}^{\dagger}\), \(\mathcal{B}^{\dagger}\), and \(\mathcal{P}e\). Table 3 shows that this range was achieved by varying the surface parameters (surface elasticity, surface viscosity, and surface diffusivity) while keeping the bulk-fluid parameters constant. Because of our analytic treatment, we are able to efficiently explore the behavior of \(\mathcal{F}_{c}\) for a wide range of surface and bulk parameters. In this section, we will consider the behavior of \(\mathcal{F}_{c}\) in the \(H\)-\(\omega\) plane, and observe that in some regions of this plane, increasing \(\mathcal{M}^{\dagger}\) can significantly decrease the onset acceleration. We will also explore the dependence of \(\mathcal{F}_{c}\) on \(\mu\), \(\rho\), and \(\sigma_{0}\).
The \(H\) dependence of \(\mathcal{F}_{c}\) (and similarly for the \(\omega\) dependence) arises in two distinct ways. Although the general solution (eqn 4.5) explicitly references \(H\), the fluid depth also affects the finite-depth Kelvin dispersion relation which is used to determine the characteristic length scale \(l_{0}\) which is incorporated into nearly all of the dimensionless numbers. Consequently, probing the depth-dependence cannot be done by merely plotting eqn 4.5 while holding
Figure 3: **Comparison of the analytic solution (4.5) to numerical solutions of Giavedoni and Ubal (2007)**. (a) \(\mathcal{F}_{c}\) vs \(\mathcal{M}^{\dagger}\) with \(\mathcal{B}^{\dagger}=0\) and \(\mathcal{P}e=7.991\times 10^{4}\). The numerical result (finely dotted black line) is very close to the analytic solution (4.5) (solid red line). The first-order term is shown as a dashed red curve. The corresponding infinite-depth analytic approximation (4.4) is shown as a solid yellow curve, and the first order contribution is a dashed yellow curve. (b) \(\mathcal{F}_{c}\) vs \(\mathcal{B}^{\dagger}\) with \(\mathcal{M}^{\dagger}=0\) and \(\mathcal{P}e=7.991\times 10^{4}\). The numerical result (finely dotted black line) is very close to the analytic solution (4.5) (solid blue line). The first-order term is shown as a dashed blue curve. The corresponding infinite-depth analytic approximation (4.4) is shown as a solid cyan curve, and the first order contribution is a dashed cyan curve. (c) \(\mathcal{F}_{c}\) vs \(\mathcal{M}^{\dagger}\) for \(\mathcal{B}^{\dagger}=0\) and a range of \(\mathcal{P}e\). Numerical results are finely dotted curves, barely distinguishable from the corresponding analytic solutions (4.5) shown as solid curves. (d) \(\mathcal{F}_{c}\) vs \(\mathcal{M}^{\dagger}\) for \(\mathcal{P}e=7.991\times 10^{4}\) and a range of \(\mathcal{B}^{\dagger}\). Numerical results are finely dotted curves, barely distinguishable from the corresponding analytic solutions (4.5) shown as solid curves.
Figure 4: **Behavior of \({\cal F}_{c}\) in the \({\cal M}^{\dagger}\)-\({\cal B}^{\dagger}\) plane as the Peclet number is decreased: (a) \({\cal P}e=7.991\times 10^{4}\) (b) \({\cal P}e=7.991\times 10^{0}\) (c) \({\cal P}e=7.991\times 10^{-1}\) (d) \({\cal P}e=7.991\times 10^{-2}\).**
These graphs show the role that surface diffusion plays in reducing, broadening, and moving the maximum to higher \({\cal M}^{\dagger}\). All of these graphs consider a finite-depth water-like bulk fluid using the same physical parameters as Fig. 3.
the dimensionless numbers constant. Similarly, \(\omega\) directly contributes to the dimensionless numbers via \(\omega_{0}\), and it also affects the dimensionless numbers through \(l_{0}\) via the same dispersion relation.
To illuminate the role of the finite-depth Kelvin dispersion relation, fig 5 shows the wave speed \(c=l_{0}\omega_{0}\) in the \(H\)-\(\omega\) plane for terrestrial water (\(\rho=1000\) kg/m\({}^{3}\), \(\mu=0.001\) kg/m/s, \(\sigma_{0}=0.07\) N/m, \(g=9.8\) m/s\({}^{2}\)). This dispersion relation strictly applies to linear gravity-capillary waves, but in the limit \(\Upsilon\to 0\), the dispersion relation for non-linear Faraday waves approaches these plots. These plots clearly show the gravity wave region \(\mathcal{R}_{G}\) and the capillary wave region \(\mathcal{R}_{\Sigma}\) as well as two new regions which we refer to as depth-restricted gravity waves \(\mathcal{R}_{GH}\) and depth-restricted capillary waves \(\mathcal{R}_{\Sigma H}\) since the finite depth of the container results in slower waves. This dispersion relation is so significant that all of these regions are apparent in the behavior of \(\mathcal{F}_{c}\) in the \(H\)-\(\omega\) plane.
Figure 6 (a) shows \(\mathcal{F}_{c}\) in the \(H\)-\(\omega\) plane for surfactant-free water. In later figures, we use this surfactant-free onset acceleration as a reference and denote it as \(\mathcal{F}_{c0}\). In this figure, we have
marked the four regions from figure 5 on the plot. Although the most eye-catching feature is the valley that traces along the \(\mathcal{R}_{G}\)-\(\mathcal{R}_{GH}\) border, each border either coincides with or is next to significant curvature. Since the plot is logarithmic on all axes, any planar surfaces indicate power-law behavior, and any curvature from a plane corresponds to a change in the exponent. Figure 6 (b) shows the low-viscosity expansion parameter \(\Upsilon\) in the same \(H\)-\(\omega\) plane. In considering water, all points shown on the \(H\)-\(\omega\) plane have \(\Upsilon<1\) with the maximum value of \(\Upsilon=0.3458\) occurring at the shallowest depth and highest driving frequency. The two blue dots in figure 6 show the locations in the \(H\)-\(\omega\) plane where we compared our analysis to Kumar and Matar (2004a) and Giavedoni and Ubal (2007) which lie in the \(\mathcal{R}_{\Sigma}\) and \(\mathcal{R}_{\Sigma H}\) regions respectively.
Figure 7 shows the effect of surface elasticity on the onset acceleration by showing a progression of graphs of \(\mathcal{F}_{c}\) and \(\frac{\mathcal{F}_{c}}{\mathcal{F}_{c0}}\) in the \(H\)-\(\omega\) plane. For small elasticities, a new wedge-shaped region appears at shallow depths and mid-range driving frequencies. We will refer to this region as the elasticity-affected region \(\mathcal{R}_{\varepsilon}\). As the surface elasticity increases, \(\mathcal{R}_{\varepsilon}\) descends the graph, pressing towards deeper depths. The tip of the wedge follows the \(\mathcal{R}_{GH}\)-\(\mathcal{R}_{\Sigma H}\) boundary all the way to the quadruple point where the four types of waves meet. Within \(\mathcal{R}_{\varepsilon}\), the onset acceleration is elevated above the surfactant-free behavior, but at the
Figure 6: **Behavior of \(\mathcal{F}_{c}\) and the expansion parameter \(\Upsilon\) in the \(H\)-\(\omega\) plane** using the same surfactant-free water-like conditions as in figure 5. The same regions from fig 5 are evident in the behavior of \(\mathcal{F}_{c}\) in that at the boundaries of each region, \(\mathcal{F}_{c}\) exhibits significant curvature. The two blue dots indicate where our comparison with Kumar and Matar (2004a) and Giavedoni and Ubal (2007) occurred.
boundary, the onset acceleration may either increase or decrease. Figure 7 shows that the boundary tends to decrease the onset acceleration for shallow systems and increase the onset acceleration for deep systems. Figure 8 shows this unusual effect of surface elasticity in more detail by plotting \(\mathcal{F}_{c}\) in the \(\mathcal{M}^{\dagger}\)-\(\mathcal{B}^{\dagger}\) plane for eight different locations in the \(H\)-\(\omega\) plane; the \(H\)-\(\omega\) location of each plot is shown in figure 7 (a). Each column corresponds to a driving frequency and each row corresponds to a fluid depth. Notably, the application of a surfactant
Figure 7: **Behavior of \(\mathcal{F}_{c}\) in the \(H\)-\(\omega\) plane as surface elasticity is increased**. These figures consider the same water-like conditions as in figure 5 and 6 but now with a surfactant that only affects the surface elasticity and does not diffuse. The left column of figures show \(\mathcal{F}_{c}\) while the right column shows the ratio of \(\mathcal{F}_{c}\) to \(\mathcal{F}_{c0}\) (the corresponding surfactant-free onset acceleration from fig. 6). A ratio of \(10^{0}=1\) means that the onset acceleration is indistinguishable from the surfactant-free case. The eight cyan dots indicate locations on the \(H\)-\(\omega\) plane corresponding to the subplots in figure 8.
can decrease the onset acceleration by more than an order of magnitude. Although the effects of diffusion are not shown in fig 7, increasing the diffusivity only results in a lessening of the Marangoni-induced extremes (peaks and valleys) and a broadening of the boundary around \(\mathcal{R}_{\varepsilon}\). Diffusivity does not affect the dependence of the onset acceleration on the Boussinesq number.
Figure 9 shows the effect of surface viscosity on the onset acceleration by showing a progression of graphs of \(\mathcal{F}_{c}\) and \(\frac{\mathcal{F}_{c}}{\mathcal{F}_{c0}}\) in the \(H\)-\(\omega\) plane. Unlike surface elasticity, increasing surface viscosity will only increase the onset acceleration. Weak surface viscosities will elevate the onset acceleration at high frequency across all depths, and further increasing the surface viscosity shifts these effects to lower frequencies.
Figures 10, 11, and 12 show the effects of bulk viscosity, surface tension, and bulk density respectively. The parameters are based on water where \(\mu=10^{-3}\) kg/m/s, \(\sigma_{0}=70\times 10^{-3}\) N/m, and \(\rho=10^{3}\) kg/m\({}^{3}\), and in each figure, we vary a single parameter. Frames (a,b) of these
Figure 8: **Behavior of \(\mathcal{F}_{c}\) in the \(\mathcal{M}^{\dagger}\)-\(\mathcal{B}^{\dagger}\) plane** for several \(H\) and \(\omega\). Each plot corresponds to a cyan dot in fig 7 (a). Each column corresponds to a driving frequency and each row corresponds to a fluid depth. The left column has a frequency of \(\omega=2\pi\times 0.25\) rad/s while the right has \(\omega=2\pi\times 10\) rad/s. The depth of each row is: (a,e) \(H=1\times 10^{-5}\) m (b,f) \(H=1\times 10^{-4}\) m (c,g) \(H=1\times 10^{-2}\) m (d,h) \(H=1\times 10^{0}\) m. Although all of the plots use the same coloration, the color scaling is unique for each plot as indicated by the colorbars.
figures consider depth-restricted gravity waves (\(\omega=2\pi\times 0.25\) rad/s and \(H=10^{-3}\) m), while frames (c,d) consider depth-restricted capillary waves (\(\omega=2\pi\times 120\) rad/s and \(H=1.5\times 10^{-3}\) m).
Figure 10 graphs the onset acceleration \(\mathcal{F}_{c}\) vs the bulk viscosity \(\mu\) showing that increasing the bulk viscosity generally increases the onset acceleration. Although the Boussinesq effects shown in frames (b,d) always increase the onset acceleration, the Marangoni effects shown in frames (a,c) will sometimes decrease the onset acceleration as noted before. Further, the
Figure 9: **Behavior of \(\mathcal{F}_{c}\) in the \(H\)-\(\omega\) plane as surface viscosity is increased**. These figures consider the same water-like conditions as in figures 5, 6, and 7 but now with a surfactant that only affects the surface viscosity and does not diffuse. The left column of figures show \(\mathcal{F}_{c}\) while the right column shows the ratio of \(\mathcal{F}_{c}\) to \(\mathcal{F}_{c0}\) (the corresponding surfactant-free onset acceleration from fig. 6). A ratio of \(10^{0}=1\) means that the onset acceleration is indistinguishable from the surfactant-free case.
presence of surface elasticity can result in cases where a more viscous fluid could have a lower onset acceleration than a less viscous fluid. These figures also show the extreme cases of a surfactant-free fluid (black dashed line) and a surfactant-saturated fluid (red dashed line). For gravity waves, a surfactant-saturated fluid always has a higher onset acceleration than a surfactant-free fluid; however, for capillary waves, there is a crossover where a saturated surface will onset Faraday waves more readily than a surfactant-free surface. This crossover is for large viscosities, and since this model is designed for the low-viscosity limit and only tested against numerical results using a viscosity of \(10^{-3}\) kg/m/s, the crossover may be a limitation of our second-order analysis. However, Suman and Kumar (2008) reported similar numerical results for high-viscosity systems, finding that the onset acceleration for an inertial-less surfactant-free system would be infinite, but Marangoni stresses allow Faraday waves to emerge, thereby preferentially decreasing the onset acceleration.
Fig 11 graphs the onset acceleration \(\mathcal{F}_{c}\) vs equilibrium surface tension \(\sigma_{0}\). The gravity
Figure 10: **Dependence of \(\mathcal{F}_{c}\) on the bulk viscosity**. Similar to the parameters used in Giavedoni and Ubal (2007), the system parameters are \(\rho=1\times 10^{3}\) kg/m\({}^{3}\), \(\sigma_{0}=70\times 10^{-3}\) N/m, and \(g=9.8\) m/s\({}^{2}\). (a,b) consider depth-restricted gravity waves where \(H=1\times 10^{-3}\) m and \(\omega=2\pi\times 0.25\) rad/s while (c,d) consider depth-restricted capillary waves where \(H=1.5\times 10^{-3}\) m and \(\omega=2\pi\times 120\) rad/s. (a,c) consider a range of \(\varepsilon_{0}\) with \(\Omega=0\) while (b,d) consider a range of \(\Omega\) with \(\varepsilon_{0}=0\). The limiting case of surfactant-free is shown by a dashed black line while the case of a surfactant-saturated surface is shown by a red dashed line.
waves in frames (a,b) are unaffected by surface tension while the capillary waves in frames (c,d) do respond to the surface tension. In comparing our results with Giavedoni and Ubal (2007), we used a surface tension of \(70\times 10^{-3}\) N/m where increasing surface elasticity and surface viscosity would preferentially increase the onset acceleration; however upon decreasing the surface tension, we find another crossover where increasing these surface parameters will decrease the onset acceleration.
For completeness, we have included figure 12 which graphs the onset acceleration \(\mathcal{F}_{c}\) vs bulk fluid density \(\rho\). The onset acceleration generally decreases as bulk density increases.
## 7 Conclusion and Discussion
We have derived an analytic expression for the onset acceleration for Faraday waves in a finite-depth infinite-breadth low-viscosity surfactant-covered fluid. We have shown that
Figure 11: **Dependence of \(\mathcal{F}_{c}\) on the surface tension**. Similar to the parameters used in Giavedoni and Ubal (2007), the system parameters are \(\rho=1\times 10^{3}\) kg/m\({}^{3}\), \(\mu=1\times 10^{-3}\) kg/m/s, and \(g=9.8\) m/s\({}^{2}\). (a,b) consider depth-restricted gravity waves where \(H=1\times 10^{-3}\) m and \(\omega=2\pi\times 0.25\) rad/s while (c,d) consider depth-restricted capillary waves where \(H=1.5\times 10^{-3}\) m and \(\omega=2\pi\times 120\) rad/s. (a,c) consider a range of \(\varepsilon_{0}\) with \(\Omega=0\) while (b,d) consider a range of \(\Omega\) with \(\varepsilon_{0}=0\). The limiting case of surfactant-free is shown by a dashed black line while the case of a surfactant-saturated surface is shown by a red dashed line.
this expression accurately reproduces the results of previous numerical works. Our analysis required a novel definition of the Marangoni and Boussinesq numbers to handle the low-viscosity limit, as the standard definitions result in unbounded behavior for \(\mathcal{F}_{c}\).
We have also shown that for shallow systems, the model makes an unexpected prediction: adding a surfactant to a shallow system can lower the onset acceleration for Faraday waves. In the context of the energy-balance perspective on the emergence of Faraday waves, one would expect that by increasing the surface elasticity, which introduces a new viscous boundary layer at the free surface, one would increase the viscous dissipation and thereby increase the onset acceleration. However, there are cases where increasing the bulk viscosity in the presence of a surfactant reduces the onset acceleration. These unexpected results may be related to the work by Suman and Kumar (2008) where Marangoni effects in an inertial-less system act to destabilize the system.
We conclude by noting the potential utility of our analysis in determining the surface
rheology of a surfactant monolayer. Despite the myriad of surface rheometers that utilize macroscopic systems (Fuller and Vermant 2012; Jaensson and Vermant 2018) and microscopic systems (Samaniuk and Vermant 2014), transverse and longitudinal capillary waves have long been used to probe the surface dilational viscosity (Lemaire and Langevin 1992; Buzza et al. 1998; Saylor et al. 2000), an historically difficult measurement. Measuring the onset of Faraday waves is ideal for accessing the dilational viscosity since (i) no mechanical probe is introduced to the system's surface and (ii) at onset there are minimal surfactant concentration gradients, two key challenges that plague other measurement techniques (Fuller and Vermant 2012). Further, the detection of Faraday waves requires a minimum of technical equipment.
Our analysis works effectively in finite-depth systems where the bulk fluid has a viscosity comparable to water (or less). With our general solution, one could measure the onset acceleration for a range of frequencies and fit for the surface rheological parameters. Marangoni and Boussinesq effects have different frequency dependencies and can be readily distinguished. In fitting, one would determine \(\Omega=\Lambda+2M\) rather than the surface dilational viscosity itself; however, with a surface shear viscometer, one could then deduce the dilational viscosity using Faraday waves.
## 8 Acknowledgments
We would like to thank Lake Bookman for the many helpful discussions in the early attempts to formulate the theoretical framework. We would also like to thank the NSF for grants NSF DMS-0604047 and NSF DMS-0968258.
Declaration of Interests: The authors report no conflict of interest.
## Appendix A Integrating the Governing Equations
In §2, we presented the governing equations for our model (eqns 2.8). Here, we solve for \(w\), \(\zeta\), and \(\Gamma\) at the moment that the Faraday waves emerge, the moment when these functions become non-trivial. In this appendix, we will present an ansatz and solve for all but the final constants of integration, the wave mode amplitudes \(\zeta_{j}\). The analysis of these final constants of integration is addressed in §3 and will yield an expression for the onset acceleration.
We use the following dimensionless ansatz:
\[w =\cos(\vec{k}\cdot\vec{r}_{H})\sum_{j\in\mathbb{Z}_{\text{odd}}} ijw_{j}(z)e^{ijt}\] \[\zeta =\cos(\vec{k}\cdot\vec{r}_{H})\sum_{j\in\mathbb{Z}_{\text{odd}}} \zeta_{j}e^{ijt}\] \[\Gamma =1+\cos(\vec{k}\cdot\vec{r}_{H})\sum_{j\in\mathbb{Z}_{\text{odd}} }\Gamma_{j}e^{ijt}\]
The dimensionless wave number \(\vec{k}\) is not identically 1 as the Faraday wavenumber is not equal to the wavenumber from the Kelvin dispersion relation.
With this ansatz, eqn 2.8\(a\) yields a family of 4th order linear ODEs for the \(w_{j}(z)\) which can be readily solved.
\[\left[ij\mathcal{R}e\left(-k^{2}+\partial_{zz}\right)-\left(-k^{2}+\partial_{ zz}\right)^{2}\right]w_{j}(z)=0\]
\[w_{j}(z)=\mathcal{A}_{j}\sinh(kz)+\mathcal{B}_{j}\cosh(kz)+C_{j}\sinh(q_{j}z)+ \mathcal{D}_{j}\cosh(q_{j}z)\]
where \(q_{j}^{2}=k^{2}+ij\mathcal{R}e\). The coefficients \(\mathcal{A}_{j}\), \(\mathcal{B}_{j}\), \(C_{j}\), and \(\mathcal{D}_{j}\) are constants of integration which will be proportional to \(\zeta_{j}\).
The surfactant continuity equation (eqn 2.8\(d\)) yields the coefficients of the surfactant distribution \(\Gamma_{j}\).
\[\Gamma_{j}=\frac{k\mathcal{A}_{j}+q_{j}C_{j}}{ij+\frac{k^{2}}{\mathcal{P}e}}\]
Eqns 2.8\(b\), 2.8\(c\), and 2.8\(e\) yield a system of equations for \(\mathcal{A}_{j}\), \(\mathcal{B}_{j}\), \(\mathcal{C}_{j}\), and \(\mathcal{D}_{j}\).
\[0 =\mathcal{A}_{j}\sinh(-kH)+\mathcal{B}_{j}\cosh(-kH)+\mathcal{C}_ {j}\sinh(-q_{j}H)+\mathcal{D}_{j}\cosh(-q_{j}H)\] \[0 =k\mathcal{A}_{j}\cosh(-kH)+k\mathcal{B}_{j}\sinh(-kH)+q_{j} \mathcal{C}_{j}\cosh(-q_{j}H)+q_{j}\mathcal{D}_{j}\sinh(-q_{j}H)\] \[0 =\zeta_{j}-\mathcal{B}_{j}-\mathcal{D}_{j}\] \[0 =\zeta_{j}k^{2}+\left(k\mathcal{A}_{j}+q_{j}\mathcal{C}_{j} \right)\mathcal{S}_{j}+\left(k^{2}\mathcal{B}_{j}+q_{j}^{2}\mathcal{D}_{j}\right)\]
The solutions are:
\[\mathcal{A}_{j} =\zeta_{j}\frac{\mathcal{S}_{j}q_{j}\mathcal{P}_{1j}-2k^{2}q_{j} \mathcal{P}_{3j}-\left(k^{2}+q_{j}^{2}\right)\left(k\mathcal{P}_{4j}-q_{j} \right)}{\mathcal{Q}}\] \[\mathcal{B}_{j} =-\zeta_{j}\frac{\mathcal{S}_{j}q_{j}\left(k\left(1-\mathcal{P}_ {3j}\right)-q_{j}\mathcal{P}_{4j}\right)-\left(k^{2}+q_{j}^{2}\right)\mathcal{ P}_{2j}}{\mathcal{Q}}\] \[\mathcal{C}_{j} =-\zeta_{j}\frac{\mathcal{S}_{j}k\mathcal{P}_{1j}-2k^{2}\left(k- q_{j}\mathcal{P}_{4j}\right)+k(k^{2}+q_{j}^{2})\mathcal{P}_{3j}}{\mathcal{Q}}\] \[\mathcal{D}_{j} =\zeta_{j}\frac{\mathcal{S}_{j}k\left(k\mathcal{P}_{4j}-q_{j} \left(1-\mathcal{P}_{3j}\right)\right)-2k^{2}\mathcal{P}_{2j}}{\mathcal{Q}}\]
where
\[\mathcal{S}_{j} =k^{2}\left(\frac{\mathcal{M}}{ij+\frac{k^{2}}{\mathcal{P}e}}+ \mathcal{B}\right)\] \[\mathcal{P}_{1j} =q_{j}\tanh(Hq_{j})-k\tanh(Hk)\] \[\mathcal{P}_{2j} =q_{j}\tanh(Hk)-k\tanh(Hq_{j})\] \[\mathcal{P}_{3j} =\operatorname{sech}(Hk)\operatorname{sech}(Hq_{j})\] \[\mathcal{P}_{4j} =\tanh(Hk)\tanh(Hq_{j})\] \[\mathcal{Q} =-(k^{2}-q_{j}^{2})\mathcal{P}_{2j}+\mathcal{S}_{j}\left(\left(k ^{2}+q_{j}^{2}\right)\mathcal{P}_{4j}-2kq_{j}\left(1-\mathcal{P}_{3j}\right)\right)\]
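As an independent cross-check of these closed-form expressions (a sketch, not part of the original derivation), the linear system for \(\mathcal{A}_{j}\), \(\mathcal{B}_{j}\), \(\mathcal{C}_{j}\), and \(\mathcal{D}_{j}\) given earlier in this appendix can also be solved numerically for a given mode \(j\); the example argument values in the final comment are arbitrary.

```python
import numpy as np

def mode_coefficients(j, k, H, Re, Mar, Bou, Pe, zeta_j=1.0):
    """Solve the 4x4 system from (2.8b), (2.8c) and (2.8e) for (A_j, B_j, C_j, D_j)."""
    qj = np.sqrt(k**2 + 1j * j * Re)
    Sj = k**2 * (Mar / (1j * j + k**2 / Pe) + Bou)
    sk, ck = np.sinh(-k * H),  np.cosh(-k * H)
    sq, cq = np.sinh(-qj * H), np.cosh(-qj * H)
    M = np.array([[sk,     ck,     sq,      cq     ],   # w = 0 at z = -H
                  [k * ck, k * sk, qj * cq, qj * sq],   # dw/dz = 0 at z = -H
                  [0.0,    1.0,    0.0,     1.0    ],   # kinematic condition
                  [k * Sj, k**2,   qj * Sj, qj**2  ]],  # surface stress condition
                 dtype=complex)
    rhs = np.array([0.0, 0.0, zeta_j, -zeta_j * k**2], dtype=complex)
    return np.linalg.solve(M, rhs)   # each coefficient is proportional to zeta_j

# e.g. A1, B1, C1, D1 = mode_coefficients(j=1, k=1.0, H=2.0, Re=400.0, Mar=5.0, Bou=0.0, Pe=1e4)
```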
Compiling all of these steps, we obtain the ansatz listed in eqn 3.1, where the wave amplitude \(\zeta_{j}\) is the only remaining unsolved constant of integration.
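Although the closed-form expressions above are what the remainder of the analysis uses, they can also be cross-checked numerically. The short Python sketch below assembles the four boundary conditions as a linear system for \(\mathcal{A}_{j}\), \(\mathcal{B}_{j}\), \(\mathcal{C}_{j}\), \(\mathcal{D}_{j}\) (with \(\zeta_{j}=1\)) and solves it directly; all parameter values are made-up placeholders chosen only for illustration, not values used in the text.

```python
import numpy as np

# Illustrative parameter values only (none are taken from the text).
Re, Pe = 50.0, 100.0    # Reynolds and Peclet numbers
Ma, Bq = 0.5, 0.2       # Marangoni and Boussinesq numbers
H, k = 2.0, 1.1         # dimensionless depth and wavenumber
zeta = 1.0              # wave-mode amplitude zeta_j, normalised to 1

def coefficients(j):
    """Solve the four boundary conditions for (A_j, B_j, C_j, D_j)."""
    q = np.sqrt(k**2 + 1j * j * Re)                # q_j^2 = k^2 + ij*Re
    S = k**2 * (Ma / (1j * j + k**2 / Pe) + Bq)    # S_j
    M = np.array([
        # w_j(-H) = 0 and w_j'(-H) = 0 at the bottom
        [np.sinh(-k * H), np.cosh(-k * H), np.sinh(-q * H), np.cosh(-q * H)],
        [k * np.cosh(-k * H), k * np.sinh(-k * H), q * np.cosh(-q * H), q * np.sinh(-q * H)],
        # kinematic condition: B_j + D_j = zeta_j
        [0, 1, 0, 1],
        # stress/surfactant condition: (k A_j + q C_j) S_j + k^2 B_j + q^2 D_j = -zeta_j k^2
        [k * S, k**2, q * S, q**2],
    ], dtype=complex)
    rhs = np.array([0, 0, zeta, -zeta * k**2], dtype=complex)
    return np.linalg.solve(M, rhs)

for j in (1, 3, 5):
    A, B, C, D = coefficients(j)
    print(f"j={j}:", A, B, C, D)
```

The numerical solution for each odd mode \(j\) can be compared directly against the closed-form coefficients above.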
## Appendix B Dominant Balance in the Weak-Viscosity Limit
In §3 we developed the central problem of this manuscript and mapped the route to the solution by considering a weak-viscosity fluid. In expanding the pertinent quantities in terms of the expansion parameter \(\Upsilon=\sqrt{\frac{1}{\mathcal{R}e}}\), we noted that the Marangoni and Boussinesq numbers had to be rescaled per the method of dominant balance. Here, we detail the expansions for the pertinent quantities and their asymptotic behavior in the weak-viscosity limit.
The quantities \(\mathcal{F}_{c}\), \(k_{c}\), \(q_{j}\), and \(\mathcal{S}_{j}\) are expanded as:
\[\mathcal{F}_{c} =\sum_{n=1}^{\infty}\alpha_{n}\Upsilon^{n}\sim\mathcal{O}(\Upsilon)\] \[k_{c} =1+\sum_{n=1}^{\infty}\beta_{n}\Upsilon^{n}\sim\mathcal{O}(1)\] \[q_{j}^{2} =k^{2}+ij\frac{1}{\Upsilon^{2}}\sim\mathcal{O}(\frac{1}{\Upsilon ^{2}})\] \[\mathcal{S}_{j} =\frac{k^{2}}{\Upsilon}\left(\frac{\mathcal{M}^{\dagger}}{ij+ \frac{k^{2}}{\mathcal{P}e}}+\mathcal{B}^{\dagger}\right)\sim\mathcal{O}( \frac{1}{\Upsilon})\]
In the low-viscosity limit,
\[\tanh(Hq_{j}) \sim 1+\mathcal{O}(e^{-\frac{H}{\Upsilon}})\to 1\] \[\cosh(Hq_{j}) \sim\sinh(Hq_{j})\sim\mathcal{O}(e^{\frac{H}{\Upsilon}})\to\infty\] \[\operatorname{sech}(Hq_{j}) \sim\operatorname{csch}(Hq_{j})\sim\mathcal{O}(e^{-\frac{H}{ \Upsilon}})\to 0\]
which simplifies the parameters \(\mathcal{P}_{1j}\), \(\mathcal{P}_{2j}\), \(\mathcal{P}_{3j}\), and \(\mathcal{P}_{4j}\):
\[\mathcal{P}_{1j} \to q_{j}-k\tanh(Hk)\sim\mathcal{O}(\frac{1}{\Upsilon})\] \[\mathcal{P}_{2j} \to q_{j}\tanh(Hk)-k\sim\mathcal{O}(\frac{1}{\Upsilon})\] \[\mathcal{P}_{3j} \to 0\sim\mathcal{O}(0)\] \[\mathcal{P}_{4j} \to\tanh(Hk)\sim\mathcal{O}(1)\]
The constants of integration can then be expressed as:
\[\mathcal{Q} =\mathcal{S}_{j}\left(\left(k^{2}+q_{j}^{2}\right)\mathcal{P}_{4j}- 2kq_{j}\right)+\tfrac{ij}{\Upsilon^{2}}\mathcal{P}_{2j}\sim\mathcal{O}(\tfrac{1} {\Upsilon^{3}})\] \[\frac{\mathcal{A}_{j}}{\zeta_{j}} =\frac{\mathcal{S}_{j}q_{j}+(k^{2}+q_{j}^{2})}{Q}\mathcal{P}_{1j} \sim\mathcal{O}(1)\] \[\frac{\mathcal{B}_{j}}{\zeta_{j}} =\frac{\mathcal{S}_{j}q_{j}+(k^{2}+q_{j}^{2})}{Q}\mathcal{P}_{2j} \sim\mathcal{O}(1)\] \[\frac{\mathcal{C}_{j}}{\zeta_{j}} =\frac{\mathcal{D}_{j}}{\zeta_{j}}=-\frac{\mathcal{S}_{j}k \mathcal{P}_{1j}+2k^{2}\mathcal{P}_{2j}}{Q}\sim\mathcal{O}(\Upsilon)\]
The coupling coefficient \(H_{j}\) becomes:
\[H_{j}=-\frac{2}{G}\left[G+\Sigma k^{2}-\frac{j^{2}\mathcal{S}_{j}q_{j}\mathcal{ P}_{1j}}{kQ}+\frac{ij\Upsilon^{2}}{kQ}\left((k^{2}+q_{j}^{2})^{2}\mathcal{P}_{1j} -4k^{3}q_{j}\mathcal{P}_{2j}\right)\right]\]
In this form, one can easily check that \(H_{j}\sim\mathcal{O}(1)\). As mentioned in §3, the definitions of \(\mathcal{M}^{\dagger}\) and \(\mathcal{B}^{\dagger}\) ensure that the surfactant effects in the third term \(\frac{j^{2}\mathcal{S}_{j}q_{j}\mathcal{P}_{1j}}{kQ}\) are \(\mathcal{O}(1)\).
## Appendix C Onset Acceleration
Here we present the expressions for the onset acceleration of all the cases given in §4. We also fully define all of the coefficients in the expressions. These expressions were calculated with the aid of Mathematica.
### The surfactant-free infinite-depth limit
\[\mathcal{F}_{c}=\frac{1}{G}\left[8\Upsilon^{2}-4\sqrt{2}\Upsilon^{3}+\frac{2 \sqrt{2}(11-2G)}{(3-2G)}\Upsilon^{5}+\mathcal{O}(\Upsilon^{6})\right]\] (C 1)
### The surfactant-free finite-depth limit
\[\mathcal{F}_{c}=\frac{1}{G}\left[\sqrt{2}\,\mathrm{csch}^{2}(H)\Upsilon+\frac{4\coth(H)\left(4\Sigma\cosh(2H)+\cosh(3H)\,\mathrm{csch}(H)+4H-2\Sigma\right)}{\mathcal{Q}_{H}}\Upsilon^{2}+\mathcal{O}(\Upsilon^{3})\right]\] (C 2)
where
\[\mathcal{Q}_{H}=2\Sigma\cosh(2H)+\sinh(2H)+2H-2\Sigma\]
### The infinite-depth surfactant limit
\[\mathcal{F}_{c}=\frac{1}{G}\left[\sqrt{2}\left(\frac{\mathcal{Q}_{S}-1+\frac{ \sqrt{2}\mathcal{M}^{\dagger}}{1+\frac{1}{\mathcal{P}e^{2}}}}{\mathcal{Q}_{S} }\right)\Upsilon+\left(\frac{2\Sigma\mathcal{N}_{1}+\mathcal{N}_{2}}{\left(2 \Sigma+1\right)\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}\left(Q_{S} \right)^{3}}\right)\Upsilon^{2}+\mathcal{O}(\Upsilon^{3})\right]\] (C 3)
where the constants \(\mathcal{Q}_{S}\), \(\mathcal{N}_{1}\), and \(\mathcal{N}_{2}\) are below:
\[\mathcal{Q}_{S}=1+\sqrt{2}\mathcal{B}^{\dagger}+\mathcal{B}^{\dagger 2}+\frac{ \mathcal{M}^{\dagger}}{\mathcal{P}e\left(1+\frac{1}{\mathcal{P}e^{2}}\right) }\left(\sqrt{2}+2\mathcal{B}^{\dagger}-\sqrt{2}\mathcal{P}e+\mathcal{M}^{ \dagger}\mathcal{P}e\right)\]
\[\mathcal{N}_{1} = \left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)^{3}\left(8+20\sqrt{2} \mathcal{B}^{\dagger}+48\mathcal{B}^{\dagger 2}+34\sqrt{2}\mathcal{B}^{\dagger 3}+30 \mathcal{B}^{\dagger 4}+8\sqrt{2}\mathcal{B}^{\dagger 5}+2\mathcal{B}^{\dagger 6}\right)\] \[+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)^{2}\mathcal{M}^{ \dagger}\left(-20\sqrt{2}-68\mathcal{B}^{\dagger}-54\sqrt{2}\mathcal{B}^{ \dagger 2}-44\mathcal{B}^{\dagger 3}-8\sqrt{2}\mathcal{B}^{\dagger 4}\right)\] \[+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)^{2}\mathcal{M}^{ \dagger 2}\left(48+102\sqrt{2}\mathcal{B}^{\dagger}+180\mathcal{B}^{\dagger 2}+80 \sqrt{2}\mathcal{B}^{\dagger 3}+30\mathcal{B}^{\dagger 4}\right)\] \[+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)\mathcal{M}^{ \dagger 2}\left(-48\sqrt{2}\mathcal{B}^{\dagger}-120\mathcal{B}^{\dagger 2}-64 \sqrt{2}\mathcal{B}^{\dagger 3}-24\mathcal{B}^{\dagger 4}\right)\] \[+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)\mathcal{M}^{ \dagger 3}\left(-54\sqrt{2}-132\mathcal{B}^{\dagger}-48\sqrt{2}\mathcal{B}^{ \dagger 2}\right)+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)\mathcal{M}^{ \dagger 4}\left(30+40\sqrt{2}\mathcal{B}^{\dagger}+30\mathcal{B}^{\dagger 2}\right)\] \[+\mathcal{M}^{\dagger 3}\left(20\sqrt{2}+88\mathcal{B}^{\dagger}+32 \sqrt{2}\mathcal{B}^{\dagger 2}\right)+\mathcal{M}^{\dagger 4}\left(-32\sqrt{2} \mathcal{B}^{\dagger}-24\mathcal{B}^{\dagger 2}\right)+\mathcal{M}^{\dagger 5} \left(-8\sqrt{2}\right)+\mathcal{M}^{\dagger 6} \tag{2}\] \[+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)^{2}\mathcal{M}^{ \dagger}\tfrac{1}{\mathcal{P}e}\left(+20\sqrt{2}+96\mathcal{B}^{\dagger}+102 \sqrt{2}\mathcal{B}^{\dagger 2}+120\mathcal{B}^{\dagger 3}+40\sqrt{2} \mathcal{B}^{\dagger 4}+12\mathcal{B}^{\dagger 5}\right)\] \[+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)\mathcal{M}^{ \dagger 2}\tfrac{1}{\mathcal{P}e}\left(-68-108\sqrt{2}\mathcal{B}^{\dagger }-132\mathcal{B}^{\dagger 2}-32\sqrt{2}\mathcal{B}^{\dagger 3}\right)\] \[+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)\mathcal{M}^{ \dagger 3}\tfrac{1}{\mathcal{P}e}\left(34\sqrt{2}+120\mathcal{B}^{\dagger }+80\sqrt{2}\mathcal{B}^{\dagger 2}+40\mathcal{B}^{\dagger 3}\right)\] \[+\mathcal{M}^{\dagger 3}\tfrac{1}{\mathcal{P}e}\left(20\sqrt{2}-32 \sqrt{2}\mathcal{B}^{\dagger 2}-16\mathcal{B}^{\dagger 3}\right)+\mathcal{M}^{ \dagger 4}\tfrac{1}{\mathcal{P}e}\left(-44-32\sqrt{2}\mathcal{B}^{\dagger }\right)+\mathcal{M}^{\dagger 5}\tfrac{1}{\mathcal{P}e}\left(8\sqrt{2}+12\mathcal{B}^{ \dagger}\right)\]
\[\mathcal{N}_{2} = \mathcal{N}_{1}+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)^{3} \mathcal{B}^{\dagger 3}\left(2\sqrt{2}+4\mathcal{B}^{\dagger}\right)\] \[+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)^{2}\mathcal{B}^{ \dagger}\mathcal{M}^{\dagger}\left(-4+2\sqrt{2}\mathcal{M}^{\dagger}-4\sqrt{2} \mathcal{B}^{\dagger}+12\mathcal{B}^{\dagger}\mathcal{M}^{\dagger}-8\mathcal{B} ^{\dagger 2}-6\sqrt{2}\mathcal{B}^{\dagger 3}\right)\] \[+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)\mathcal{B}^{ \dagger}\mathcal{M}^{\dagger}\left(16\sqrt{2}\mathcal{M}^{\dagger}-16\mathcal{M} ^{\dagger 2}+24\mathcal{B}^{\dagger}\mathcal{M}^{\dagger}-24\sqrt{2}\mathcal{B}^{ \dagger}\mathcal{M}^{\dagger 2}+8\mathcal{B}^{\dagger 2}+4\sqrt{2}\mathcal{B}^{\dagger 3}\right)\] \[+\mathcal{M}^{\dagger 2}\left(-4\sqrt{2}\mathcal{M}^{\dagger}+8 \mathcal{M}^{\dagger 2}-2\sqrt{2}\mathcal{M}^{\dagger 3}-16\sqrt{2}\mathcal{B}^{\dagger}+16\mathcal{B}^{\dagger}\mathcal{M}^{ \dagger}-24\mathcal{B}^{\dagger 2}+20\sqrt{2}\mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger}\right)\] \[+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)^{2}\mathcal{B}^{ \dagger}\mathcal{M}^{\dagger}\tfrac{1}{\mathcal{P}e}\left(4\sqrt{2}\mathcal{B}^{ \dagger}+12\mathcal{B}^{\dagger 2}\right)\] \[+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)\mathcal{B}^{ \dagger}\mathcal{M}^{\dagger}\tfrac{1}{\mathcal{P}e}\left(-4\sqrt{2}\mathcal{M}^{ \dagger}+4\mathcal{M}^{\dagger 2}+4\sqrt{2}\mathcal{B}^{\dagger}-20\mathcal{B}^{\dagger}\mathcal{M}^{ \dagger}+8\mathcal{B}^{\dagger 2}-20\sqrt{2}\mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger}\right)\] \[+\mathcal{M}^{\dagger 2}\tfrac{1}{\mathcal{P}e}\left(-8+8\sqrt{2} \mathcal{M}^{\dagger}-4\mathcal{M}^{\dagger 2}-8\sqrt{2}\mathcal{B}^{\dagger}+32\mathcal{B}^{ \dagger}\mathcal{M}^{\dagger}-12\sqrt{2}\mathcal{B}^{\dagger}\mathcal{M}^{ \dagger 2}+16\mathcal{B}^{\dagger 2}+8\sqrt{2}\mathcal{B}^{\dagger 3}\right)\]
### The general solution
\[\mathcal{F}_{c}= \frac{1}{G}\left[\sqrt{2}\left(\frac{\mathcal{Q}_{S}\,\mathrm{csch}^{2}(H)+\left(\mathcal{Q}_{S}-1+\frac{\sqrt{2}\mathcal{M}^{\dagger}}{1+\frac{1}{\mathcal{P}e^{2}}}\right)\coth^{2}(H)}{\mathcal{Q}_{S}}\right)\Upsilon\right.\] (C 5) \[\left.+\left(\frac{\coth(H)\left(\cosh(2H)\left(4\Sigma\mathcal{L}_{1}+\mathcal{L}_{2}\coth(H)\right)+\cosh(3H)\,\mathrm{csch}(H)\mathcal{L}_{3}+4H\mathcal{L}_{4}-2\Sigma\mathcal{L}_{5}+\coth(H)\mathcal{L}_{6}\right)}{\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)^{3}\left(\mathcal{Q}_{S}\right)^{3}\mathcal{Q}_{H}}\right)\Upsilon^{2}\] \[+\mathcal{O}(\Upsilon^{3})\right]\]
where
\[\mathcal{L}_{1}=\tfrac{1}{2}\mathcal{N}_{1}\] \[\mathcal{L}_{2}=\mathcal{N}_{2}-2\mathcal{L}_{3}\]
\[\mathcal{L}_{3} = \left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}\left(4+17\mathcal{B}^{ \dagger 4}+\mathcal{B}^{\dagger 6}\right)\] \[+\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{2}\mathcal{M}^{ \dagger}\left(-10\sqrt{2}+24\mathcal{M}^{\dagger}+(10\sqrt{2})/\mathcal{P}e-7 \sqrt{2}\mathcal{B}^{\dagger 4}+15\mathcal{M}^{\dagger}\mathcal{B}^{\dagger 4}+\frac{20 \sqrt{2}}{\mathcal{P}e}\mathcal{B}^{\dagger 4}\right)\] \[+\left(1+\frac{1}{\mathcal{P}e^{2}}\right)\mathcal{M}^{\dagger} \left(2\sqrt{2}\mathcal{B}^{\dagger 4}-12\mathcal{M}^{\dagger}\mathcal{B}^{ \dagger 4}-27\sqrt{2}\mathcal{M}^{\dagger 2}+15\mathcal{M}^{\dagger 3}+\frac{-34 \mathcal{M}^{\dagger}+17\sqrt{2}\mathcal{M}^{\dagger 2}}{\mathcal{P}e}\right)\] \[+\mathcal{M}^{\dagger 2}\left(8\sqrt{2}\mathcal{M}^{\dagger}+4 \mathcal{M}^{\dagger 2}-5\sqrt{2}\mathcal{M}^{\dagger 3}+\mathcal{M}^{\dagger 4}+ \frac{-4+14\sqrt{2}\mathcal{M}^{\dagger}-24\mathcal{M}^{\dagger 2}+4\sqrt{2} \mathcal{M}^{\dagger 3}}{\mathcal{P}e}\right)\]
\[\mathcal{L}_{4} = \left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}\left(4+10\sqrt{2} \mathcal{B}^{\dagger}+23\mathcal{B}^{\dagger 2}+14\sqrt{2}\mathcal{B}^{ \dagger 3}+8\mathcal{B}^{\dagger 4}-\mathcal{B}^{\dagger 6}\right)\] \[+\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{2}\mathcal{M}^{ \dagger}\left(-9\sqrt{2}+23\mathcal{M}^{\dagger}-28\mathcal{B}^{\dagger}+42 \sqrt{2}\mathcal{M}^{\dagger}\mathcal{B}^{\dagger}-18\sqrt{2}\mathcal{B}^{ \dagger 2}+48\mathcal{M}^{\dagger}\mathcal{B}^{\dagger 2}\right.\] \[\left.-8\mathcal{B}^{\dagger 3}+\sqrt{2}\mathcal{B}^{\dagger 4}-15 \mathcal{M}^{\dagger}\mathcal{B}^{\dagger 4}+\frac{10\sqrt{2}+46\mathcal{B}^{ \dagger}+42\sqrt{2}\mathcal{B}^{\dagger 2}+32\mathcal{B}^{\dagger 3}-6 \mathcal{B}^{\dagger 5}}{\mathcal{P}e}\right)\] \[+\left(1+\frac{1}{\mathcal{P}e^{2}}\right)\mathcal{M}^{\dagger 2} \left(4+18\sqrt{2}\mathcal{M}^{\dagger}-8\mathcal{M}^{\dagger 2}+24 \sqrt{2}\mathcal{B}^{\dagger}+24\mathcal{B}^{\dagger}\mathcal{M}^{\dagger}+36 \mathcal{B}^{\dagger 2}\right.\] \[\left.-6\sqrt{2}\mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger}+15 \mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger 2}-12\mathcal{B}^{\dagger 4}\right.\] \[\left.+\frac{28-14\sqrt{2}\mathcal{M}^{\dagger}+36\sqrt{2} \mathcal{B}^{\dagger}-32\mathcal{B}^{\dagger}\mathcal{M}^{\dagger}+24 \mathcal{B}^{\dagger 2}-4\sqrt{2}\mathcal{B}^{\dagger 3}+20\mathcal{B}^{\dagger 3} \mathcal{M}^{\dagger}}{\mathcal{P}e}\right)\] \[+\mathcal{M}^{\dagger 3}\left(8\sqrt{2}-4\mathcal{M}^{\dagger}+ \sqrt{2}\mathcal{M}^{\dagger 2}-\mathcal{M}^{\dagger 3}+16\mathcal{B}^{\dagger}-4 \sqrt{2}\mathcal{B}^{\dagger 2}\right.\] \[\left.+12\mathcal{M}^{\dagger}\mathcal{B}^{\dagger 2}+\frac{4 \sqrt{2}-8\mathcal{M}^{\dagger}-8\mathcal{B}^{\dagger 4}+4\sqrt{2}\mathcal{M}^{\dagger} \mathcal{B}^{\dagger}-6\mathcal{M}^{\dagger 2}\mathcal{B}^{\dagger 3}+8\mathcal{B}^{ \dagger 3}}{\mathcal{P}e}\right)\]
\[\mathcal{L}_{5} = \left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{3}\left(4+4\sqrt{2} \mathcal{B}^{\dagger}-12\mathcal{B}^{\dagger 2}-34\sqrt{2}\mathcal{B}^{\dagger 3}-66 \mathcal{B}^{\dagger 4}-32\sqrt{2}\mathcal{B}^{\dagger 5}-14\mathcal{B}^{\dagger 6}\right)\] \[+\left(1+\frac{1}{\mathcal{P}e^{2}}\right)^{2}\mathcal{M}^{ \dagger}\left(-4\sqrt{2}+20\mathcal{B}^{\dagger}+54\sqrt{2}\mathcal{B}^{ \dagger 2}+92\mathcal{B}^{\dagger 3}+32\sqrt{2}\mathcal{B}^{\dagger 4}-12\mathcal{M}^{\dagger}-102\sqrt{2}\mathcal{B}^{ \dagger}\mathcal{M}^{\dagger}\right.\] \[\left.-396\mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger}-320\sqrt{2} \mathcal{B}^{\dagger 3}\mathcal{M}^{\dagger}-210\mathcal{B}^{\dagger 4}\mathcal{M}^{\dagger}\right.\] \[\left.+\frac{4\sqrt{2}-24\mathcal{B}^{\dagger}-102\sqrt{2} \mathcal{B}^{\dagger 2}-264\mathcal{B}^{\dagger 3}-160\sqrt{2}\mathcal{B}^{ \dagger 4}-84\mathcal{B}^{\dagger 5}}{\mathcal{P}e}\right)\] \[+\left(1+\frac{1}{\mathcal{P}e^{2}}\right)\mathcal{M}^{\dagger 2}\left(-48\sqrt{2} \mathcal{B}^{\dagger}-264\mathcal{B}^{\dagger 2}-256\sqrt{2}\mathcal{B}^{\dagger 3}-168\mathcal{B}^{ \dagger 4}-54\sqrt{2}\mathcal{M}^{\dagger}-276\mathcal{B}^{\dagger}\mathcal{M}^{\dagger}\right.\] \[\left.-192\sqrt{2}\mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger}+66 \mathcal{M}^{\dagger 2}+160\sqrt{2}\mathcal{B}^{\dagger}\mathcal{M}^{\dagger 2}+210\mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger 2}\right.\] \[\left.+\frac{-20-108\sqrt{2}\mathcal{B}^{\dagger}-276\mathcal{B}^{ \dagger 2}-128\sqrt{2}\mathcal{B}^{\dagger 3}+34\sqrt{2}\mathcal{M}^{\dagger}+264 \mathcal{B}^{\dagger}\mathcal{M}^{\dagger}+320\sqrt{2}\mathcal{B}^{\dagger 2} \mathcal{M}^{\dagger}+280\mathcal{B}^{\dagger 3}\mathcal{M}^{\dagger}}{\mathcal{P}e}\right)\] \[+\mathcal{M}^{\dagger 3}\left(-20\sqrt{2}-184\mathcal{B}^{\dagger}-128 \sqrt{2}\mathcal{B}^{\dagger 2}+128\sqrt{2}\mathcal{B}^{\dagger}\mathcal{M}^{\dagger}+168 \mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger}+32\sqrt{2}\mathcal{M}^{\dagger 2}-14 \mathcal{M}^{\dagger 3}\right.\] \[\left.+\frac{-20\sqrt{2}+128\sqrt{2}\mathcal{B}^{\dagger 2}+112 \mathcal{B}^{\dagger 3}+92\mathcal{M}^{\dagger}+128\sqrt{2}\mathcal{B}^{\dagger} \mathcal{M}^{\dagger}-32\sqrt{2}\mathcal{M}^{\dagger 2}-84\mathcal{B}^{\dagger}\mathcal{M}^{\dagger 2}}{\mathcal{P}e}\right)\]
\[\mathcal{L}_{6} =\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)^{3}\left(28\mathcal{B}^{ \dagger 2}+48\sqrt{2}\mathcal{B}^{\dagger 3}+95\mathcal{B}^{\dagger 4}+32\sqrt{2} \mathcal{B}^{\dagger 5}+15\mathcal{B}^{\dagger 6}\right)\] \[\quad+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)^{2}\mathcal{M}^{ \dagger}\left(-6\sqrt{2}-48\mathcal{B}^{\dagger}-94\sqrt{2}\mathcal{B}^{ \dagger 2}-140\mathcal{B}^{\dagger 3}-57\sqrt{2}\mathcal{B}^{\dagger 4}+36 \mathcal{M}^{\dagger}+116\sqrt{2}\mathcal{B}^{\dagger}\mathcal{M}^{\dagger}\right.\] \[\qquad\qquad\qquad\qquad\qquad\left.+432\mathcal{B}^{\dagger 2} \mathcal{M}^{\dagger}+320\sqrt{2}\mathcal{B}^{\dagger 3}\mathcal{M}^{\dagger}+225 \mathcal{B}^{\dagger 4}\mathcal{M}^{\dagger}\right.\] \[\qquad\qquad\qquad\qquad\left.+\tfrac{+6\sqrt{2}+40\mathcal{B}^{ \dagger 3}+130\sqrt{2}\mathcal{B}^{\dagger 2}+300\mathcal{B}^{\dagger 3}+180 \sqrt{2}\mathcal{B}^{\dagger 4}+84\mathcal{B}^{\dagger 5}}\right)\] \[\quad+\left(1+\tfrac{1}{\mathcal{P}e^{2}}\right)\mathcal{M}^{ \dagger}\left(16\mathcal{B}^{\dagger}+24\sqrt{2}\mathcal{B}^{\dagger 2}+40 \mathcal{B}^{\dagger 3}+14\sqrt{2}\mathcal{B}^{\dagger 4}+24\mathcal{M}^{ \dagger}+16\sqrt{2}\mathcal{B}^{\dagger}\mathcal{M}^{\dagger}-192\mathcal{B}^ {\dagger 2}\mathcal{M}^{\dagger}\right.\] \[\qquad\qquad\qquad\qquad\qquad\left.-256\sqrt{2}\mathcal{B}^{ \dagger 3}\mathcal{M}^{\dagger}-180\mathcal{B}^{\dagger 4}\mathcal{M}^{\dagger}-93\sqrt{2} \mathcal{M}^{\dagger 2}-364\mathcal{B}^{\dagger}\mathcal{M}^{\dagger 2}\right.\] \[\qquad\qquad\qquad\qquad\left.-264\sqrt{2}\mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger 2}+81 \mathcal{M}^{\dagger 3}+160\sqrt{2}\mathcal{B}^{\dagger}\mathcal{M}^{\dagger 3}+210 \mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger 3}\right.\] \[\qquad\qquad\qquad\left.+\tfrac{+8\sqrt{2}+32\mathcal{B}^{ \dagger 2}+28\sqrt{2}\mathcal{B}^{\dagger 2}+24\mathcal{B}^{\dagger 3}-62 \mathcal{M}^{\dagger}-160\sqrt{2}\mathcal{B}^{\dagger 3}\mathcal{M}^{ \dagger}-392\mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger}-188\sqrt{2}\mathcal{B}^{ \dagger 3}\mathcal{M}^{\dagger}}{\mathcal{P}e}\right.\] \[\qquad\qquad\qquad\left.+\tfrac{+51\sqrt{2}\mathcal{M}^{\dagger 2}+276\mathcal{B}^{\dagger}\mathcal{M}^{ \dagger 2}+320\sqrt{2}\mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger 2}+280\mathcal{B}^{\dagger 3}\mathcal{M}^{ \dagger 2}}{\mathcal{P}e}\right)\] \[\quad+\mathcal{M}^{\dagger 2}\left(-16-64\sqrt{2}\mathcal{B}^{ \dagger}-72\mathcal{B}^{\dagger 2}+24\sqrt{2}\mathcal{M}^{\dagger}+264\mathcal{B}^{ \dagger}\mathcal{M}^{\dagger}+188\sqrt{2}\mathcal{B}^{\dagger 2}\mathcal{M}^{ \dagger}+28\mathcal{M}^{\dagger 2}\right.\] \[\qquad\qquad\qquad\qquad\left.-128\sqrt{2}\mathcal{B}^{\dagger} \mathcal{M}^{\dagger 2}-168\mathcal{B}^{\dagger 2}\mathcal{M}^{\dagger 2}-43\sqrt{2} \mathcal{M}^{\dagger 3}+15\mathcal{M}^{\dagger 4}\right.\] \[\qquad\qquad\qquad\left.+\tfrac{-28-8\sqrt{2}\mathcal{B}^{ \dagger}+64\mathcal{B}^{\dagger 2}+24\sqrt{2}\mathcal{B}^{\dagger 3}+66\sqrt{2}\mathcal{M}^{\dagger}+96\mathcal{B}^{ \dagger}\mathcal{M}^{\dagger}-128\sqrt{2}\mathcal{B}^{\dagger 2}\mathcal{M}^{ \dagger}}{\mathcal{P}e}\right.\] \[\qquad\qquad\qquad\left.+\tfrac{+112\mathcal{B}^{\dagger 3}\mathcal{M}^{\dagger}-136 \mathcal{M}^{\dagger 2}-164\sqrt{2}\mathcal{B}^{\dagger}\mathcal{M}^{\dagger 2}+36\sqrt{2}\mathcal{M}^{\dagger 3}+84\mathcal{B}^{\dagger} \mathcal{M}^{\dagger 3}}{\mathcal{P}e}\right)\]
|
2302.00763 | Collaborating with language models for embodied reasoning | Reasoning in a complex and ambiguous environment is a key goal for
Reinforcement Learning (RL) agents. While some sophisticated RL agents can
successfully solve difficult tasks, they require a large amount of training
data and often struggle to generalize to new unseen environments and new tasks.
On the other hand, Large Scale Language Models (LSLMs) have exhibited strong
reasoning ability and the ability to adapt to new tasks through in-context
learning. However, LSLMs do not inherently have the ability to interrogate or
intervene on the environment. In this work, we investigate how to combine these
complementary abilities in a single system consisting of three parts: a
Planner, an Actor, and a Reporter. The Planner is a pre-trained language model
that can issue commands to a simple embodied agent (the Actor), while the
Reporter communicates with the Planner to inform its next command. We present a
set of tasks that require reasoning, test this system's ability to generalize
zero-shot and investigate failure cases, and demonstrate how components of this
system can be trained with reinforcement-learning to improve performance. | Ishita Dasgupta, Christine Kaeser-Chen, Kenneth Marino, Arun Ahuja, Sheila Babayan, Felix Hill, Rob Fergus | 2023-02-01T21:26:32Z | http://arxiv.org/abs/2302.00763v1 | # Collaborating with language models for embodied reasoning
###### Abstract
Reasoning in a complex and ambiguous environment is a key goal for Reinforcement Learning (RL) agents. While some sophisticated RL agents can successfully solve difficult tasks, they require a large amount of training data and often struggle to generalize to new unseen environments and new tasks. On the other hand, Large Scale Language Models (LSLMs) have exhibited strong reasoning ability and the ability to adapt to new tasks through in-context learning. However, LSLMs do not inherently have the ability to interrogate or intervene on the environment. In this work, we investigate how to combine these complementary abilities in a single system consisting of three parts: a Planner, an Actor, and a Reporter. The Planner is a pre-trained language model that can issue commands to a simple embodied agent (the Actor), while the Reporter communicates with the Planner to inform its next command. We present a set of tasks that require reasoning, test this system's ability to generalize zero-shot and investigate failure cases, and demonstrate how components of this system can be trained with reinforcement-learning to improve performance.
## 1 Introduction.
Achieving complex tasks in embodied environments often requires logical reasoning. Such logical reasoning has been a challenge for machine learning (Russin et al., 2020; Mitchell, 2021) - even more so with embodied agents, where the agent also has to _perceive_ and _control_ in its environment, in addition to _reasoning_ about how to accomplish a complex task. Recent large scale language models (LSLMs), however, have shown great promise for reasoning (Radford et al., 2019; Brown et al., 2020). Can this complex reasoning ability be used for embodied tasks?
One major issue is that LSLMs are not embodied or grounded. They do not have a way to directly take actions in embodied environments, or of knowing what is happening in an environment. For each of these, we rely on other components of an agent model. In this work, we investigate an agent paradigm that we call **Planner-Actor-Reporter**. The **Planner** is the LSLM--it reads the task description, does any required logical reasoning, and breaks the problem down into a sequence of simple instructions. These instructions are passed to the **Actor**, which is an RL agent programmed to complete a small set of simple instructions in the environment. Finally, to complete the feedback loop, we have the **Reporter**, which observes the environment and reports information back to the Planner so it can adjust the instructions it issues. See Figure 1A.
Other recent work has investigated forms of closed-loop feedback for LSLMs in embodied reasoning tasks Huang et al. (2022); Ahn et al. (2022). In this work, we generalize these approaches into a three part Planner-Actor-Reporter paradigm. We highlight the separate and crucial roles played
by these components by introducing and evaluating on a series of tasks which require the agent to explore the world to gather information necessary for planning, break down complex tasks into steps, and communicate visual properties of the world back to the Planner. Finally, we demonstrate that the Reporter module can be trained with reinforcement learning (RL), reducing the need for hand-specified sources of feedback.
## 2 Methods
Environment and Actor:Our environment is a 2D partially observable grid-world. The environment contains unique objects specified by color, shape and texture, and the Actor sees a top-down egocentric pixel RGB view with visibility within 5 squares of the agent. In addition to movement actions, the Actor can perform two special actions when on top of an object: _examine_ which reveals a hidden piece of text about the object, and _pickup_ which adds the object to its inventory.
The Actor is pre-trained with RL to follow instructions of the form "Pick up the X" or "Examine the X". Figure 1B shows an example observation from the environment; details about the Actor architecture and environment can be found in App B.
Planner:We use pre-trained large language models with the same architecture: Chinchilla (Hoffmann et al., 2022), of two sizes: 7B and 70B parameters. To promote grounding with in-context learning (Brown et al., 2020), we provide 5 randomly selected "few-shot examples" of each task (assuming optimal Planner, Reporter, and Actor; see App E for full text), and directly use the model's sampled language as input to the Actor. At every timestep, the sampled language and information generated by the Reporter are appended to the dialogue transcript, and used as the prompt to get a new instruction from the Planner at the next timestep.
Reporter:We specify the role of the Reporter further by drawing parallels to hierarchical RL (Sutton et al., 1999; Kulkarni et al., 2016), where a high-level 'Planner' issues temporally abstracted instructions to a lower-level 'Actor'. A key difference from these setups is that in our experiments, the observation space of the Actor and Planner are different. In our setup, the Actor operates over pixel observations and produces movement actions, while the Planner operates over a language observation (the prompt) and produces language actions (the produced instruction). The Actor is language conditional and can interpret the Planner's instructions. But the Planner cannot parse the results of the Actor's actions (to produce an appropriate next action). The Reporter translates from the Actor's action+observation space to the Planner's. In the most general case, a Reporter takes (a sequence of) Actor actions and pixel observations and produces a text string that contains _true_ and _relevant_ information about what the Actor did and how the environment responded.
There are several ways to implement a Reporter, varying what is reported and how much of it is hard-coded, pre-trained, or learned from scratch. Previous work has used implicit Reporters implemented as part of the Actor that only convey instruction-completion (Ahn et al., 2022), or pre-trained perception models that answer natural language questions about the Actor's observations (Zeng et al., 2022; Huang et al., 2022). In this work, we start with a hard-coded Reporter to first explore the performance of the Planner-Actor interaction in our novel information gathering tasks (Sec 3). We then pioneer learning this Reporter within the Planner-Actor-Reporter loop to optimize reward (Sec 4).
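To make the division of labour concrete, the following Python sketch shows one possible shape of the closed Planner-Actor-Reporter loop. The few-shot example, the dialogue format, and the functions `sample_from_lm` and `act_and_report` are placeholders invented for this sketch; they are not the actual prompts or interfaces used in our experiments.

```python
# Hypothetical few-shot example; the real prompts (App E) differ.
FEW_SHOT = (
    "Task: If the purple box is good, pick up the red ball, otherwise pick up the blue key.\n"
    "Planner: Examine the purple box.\n"
    "Reporter: I examined the purple box, its secret property is bad.\n"
    "Planner: Pick up the blue key.\n\n"
)

def sample_from_lm(prompt: str) -> str:
    """Placeholder for sampling the next instruction from the pre-trained LM."""
    return "Examine the yellow chair."

def act_and_report(instruction: str):
    """Placeholder: the Actor executes the instruction from pixels, and the
    Reporter translates the outcome back into text for the Planner."""
    done = instruction.lower().startswith("pick up")
    report = "I examined the yellow chair, its secret property is good."
    return report, done

def episode(task: str, max_steps: int = 10) -> str:
    transcript = FEW_SHOT + f"Task: {task}\n"
    for _ in range(max_steps):
        instruction = sample_from_lm(transcript + "Planner:")
        transcript += f"Planner: {instruction}\n"
        report, done = act_and_report(instruction)
        transcript += f"Reporter: {report}\n"   # the Planner sees this at the next step
        if done:
            break
    return transcript

print(episode("If the yellow chair is good, pick up the green cup, otherwise pick up the white mug."))
```

The key design point illustrated here is that the Planner only ever sees text: every instruction it samples and every report it receives is appended to the same growing transcript.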
Figure 1: **Setup.** A. Schematic of the **Planner-Actor-Reporter** paradigm and an example of the interaction among them. B. Observation and action space of the PycoLab environment.
Tasks:We create a suite of tasks that examine the challenges of reasoning, generalization, and exploration in embodied environments that LM Planners can help with (detailed in App C). We focus on two types of tasks (_conditional_ and _search_ tasks) that require explicit information gathering such that a) the Planner must issue an explicit information gathering instruction, b) the Actor must carry it out, and c) the Reporter must relay the results before d) the Planner can issue the next instruction.
## 3 Language models as interactive planners
We examine the interaction between Planner, Actor and Reporter in tasks that require all three components for success. Building on top of previous work (Ahn et al., 2022; Zeng et al., 2022) which shows that LSLMs can break down complex real-world tasks into step-by-step instructions, we focus on tasks where the Planner needs to also explicitly issue information gathering instructions and incorporate the reported information for generating the next instructions. Further, our tasks are realized over objects with abstract properties that are not grounded in the LM's previous semantic experiences and therefore require significant abstract logical reasoning. We analyze the performance of different Planners and their robustness. All components are pre-trained.
The task setup is as follows: all the objects in the room have a 'secret property' (good / bad / unknown). When the Actor 'examines' an object, a ground-truth Reporter relays a text string 'I examined {object}, its secret property is {value}' to the Planner. The Planner can then issue the next instruction to the Actor.
### Secret property conditional task
We start with the simplest task which requires information gathering. The goal of the episode is to pick up a correct object, based on another object's secret property. The task description passed to the Planner is as follows: 'If {decider object} is good, pick up {object 1}, otherwise pick up {object 2}'. A successful episode consists of 5 steps: a) the Planner instructs the Actor to examine the {decider object}, b) the Actor examines the object, c) the Reporter relays the revealed information (always done correctly in this setting), d) the Planner reasons which object needs to be picked up based on the report, {object 1} or {object 2}, and instructs the Actor to pick up the correct object e) the Actor picks up the correct object.
Explicit information gathering actions are classically challenging with pure RL. With a LSLM Planner and 5 language traces of solved examples as prompt, and an Actor trained on only simple pick-up and examine tasks, we can complete this complex multi-step task with good accuracy (Fig 2A). A pure RL baseline performs poorly even after 100M learner frames (see App D, Fig 2A).
In our analysis, we identify two main failure cases: the LSLM Planner failing to infer the next instruction given the environment feedback, and the Actor failing to follow the instruction provided by the Planner. In the first case, we observe that smaller language models (7B parameters) are only able to infer the correct object to pick up for reward 58% of the time given all information; larger language models (70B parameters) are able to do so 96% of the time. This shows that even relatively simple reasoning remains out of reach for smaller models without fine-tuning. In the second failure case, we observe that the Actor might encounter distribution shift, for example in episode length or instruction format, which makes it unable to follow the Planner's instruction.
### Secret property search task
We extend the previous task by requiring additional steps of information gathering. Instead of examining a single object, the agent needs to examine multiple objects, note their secret properties, and pick up the correct object for reward. The task is specified as 'The objects are {}, {}, and {}. Pick up the object with the good secret property'. A successful episode consists of the Planner asking the Actor to examine each object in turn until it finds one with a 'good' property, at which point it asks the Actor to pick up that object.
Although this task requires more information gathering steps, and the RL baseline performs worse (see App D), the agent framework with Planner-Actor-Reporter is still able to complete the task zero-shot (i.e. without any additional environment interaction; Fig 2A). Curiously, we observe that our agents perform better in this task than in the previous task where only one object needs to be examined (Fig 2A; and App D). We hypothesize that since the number of information gathering steps varies, the Planner doesn't use a rigid "one examine, one pick up" policy and can be more robust to errors, for example if the Actor examines the wrong object. We see that the Planner can indeed recover from such errors (Sec A.1). Similar to the observations above, we note that larger language models (70B) perform significantly better than smaller models (7B) (Fig 2A).
### Robustness to irrelevant reports
We saw in the _search_ task from the previous section that the 70B Planner is reasonably robust to mistakes from the Actor (e.g. Section A.1). In this section, we examine whether it can also be robust to a noisy Reporter. We break the assumption that only task-relevant actions in the environment are reported: irrelevant actions, e.g. "I have moved left" or "I have moved up and right", are reported 20% of the time.
We find that performance does reduce but not dramatically (Fig 2B). The smaller 7B model is less robust than the 70B model, showing a more dramatic reduction in performance. We find that the 70B Planner uses strategies of repetition (where it repeats an instruction until it receives the relevant report, e.g. Sec A.2) or cycling (where it cycles through examine instructions for all the objects, e.g. Sec A.3), or some combination of the two, until it hits a 'good object'.
The few shot prompts provide no examples of how to respond to irrelevant reports. When we do provide guidance and demonstrate a 'repeating' strategy (e.g. Sec A.2) in the prompted examples, this restores performance to that without the irrelevant reports for the 70B Planner (Fig 2B); the 7B Planner improves but doesn't fully recover. This robustness indicates promise that our approach (particularly with large Planners) scales to imperfect Reporters. However, inference time through a large Planner is expensive, so a Reporter that ignores irrelevant events is more efficient.
## 4 Training a truthful Reporter
In the previous section, we focused on studying the behaviors of the Planner in our agent framework with a Reporter which always reports accurate information. However, such a Reporter does not exist in most environments. In this section, we study how we can train a reporter from scratch with RL.
We consider a 'visual conditional task' where the "secret property" is not directly revealed in text with a special 'examine' action, but rather must be decoded from visual observations. In particular, the task is specified as 'If {decider object} is close to the wall, pick up {object 1}, otherwise pick up {object 2}'. The Reporter's input is the same visual observations as the Actor and its output is a binary classifier head that can choose between one of two reports ('The object is {close to /far from} the wall'). Note that when training first starts, the Reporter does not have any pre-existing grounding mechanisms to report accurate information about the scene. As training continues, the Reporter can use the final reward of the episode to learn what information is most _helpful_ to the Planner, and eventually converge to report only truthful and relevant information.
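The sketch below illustrates one way such a Reporter could be trained: a small network with a binary classification head samples a report, the episode finishes, and a REINFORCE-style policy-gradient update reinforces reports in proportion to the final reward. The observation shape, the network, and the stand-in environment and reward functions are illustrative assumptions, not the exact agent or RL algorithm used in our experiments.

```python
import torch
import torch.nn as nn

OBS_SHAPE = (3, 56, 56)   # assumed egocentric RGB frame size (illustrative)
REPORTS = ["The object is close to the wall.", "The object is far from the wall."]

class Reporter(nn.Module):
    """Binary report head over pixel observations (a minimal stand-in network)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(), nn.Flatten())
        self.head = nn.Linear(16 * 13 * 13, 2)   # logits over the two candidate reports

    def forward(self, obs):
        return self.head(self.conv(obs))

def get_observation():
    return torch.rand(1, *OBS_SHAPE)              # stand-in for an environment frame

def finish_episode(report_idx: int) -> float:
    # Stand-in reward: in the real loop the chosen report is appended to the
    # Planner's prompt, the Planner issues a pick-up instruction, and the task
    # reward depends on whether that instruction (and hence the report) was correct.
    return float(torch.rand(()).item())

reporter = Reporter()
opt = torch.optim.Adam(reporter.parameters(), lr=1e-4)

for step in range(5):
    obs = get_observation()
    dist = torch.distributions.Categorical(logits=reporter(obs))
    idx = dist.sample()
    reward = finish_episode(int(idx))
    loss = -(dist.log_prob(idx) * reward).mean()   # REINFORCE: reinforce helpful reports
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(step, REPORTS[int(idx)], round(reward, 3))
```

Because the only learning signal is the downstream task reward, the Reporter has no supervised notion of "truth"; it converges to truthful, relevant reports only insofar as those are the reports that help the Planner succeed.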
In contrast, recent work has used pretrained models with visual grounding (e.g. vision language models Zeng et al. (2022), or handcrafted mechanisms Huang et al. (2022)) to act as the Reporter module. We believe that building an effective Reporter module should combine both approaches: using a pre-trained module to bootstrap perception and grounding, and then using RL to finetune the pre-trained module to communicate with the Planner module. Our investigations show that Reporter training with RL is indeed viable and beneficial.
## 5 Discussion and future work
We advocate for a three-part system (Planner-Actor-Reporter), using pre-trained language models as a Planner that issues natural language commands to an embodied Actor, with a Reporter translating information back to the Planner. We introduce a series of tasks that leverage a pre-trained language model's abstract reasoning capacities, showing impressive and robust zero-shot performance, and analyse errors in different-sized models. We show the first proof of concept that the Reporter can
Figure 2: **Results.** A. Performance on secret property conditional and secret property search tasks with different Planners and baseline RL. B. Robustness of the Planners under an imperfect Reporter on the secret property search task. C. Improvement in performance as a Reporter is trained on the Visual conditional task. All error-bars are CIs across multiple episodes.
be trained to facilitate better collaboration between Planner and Actor. Exciting directions for future work include incorporating pre-trained components into the Reporter, expanding to more complex/realistic tasks, and improving training with a large model in the loop.
|
2303.08016 | Detection of Abuse in Financial Transaction Descriptions Using Machine
Learning | Since introducing changes to the New Payments Platform (NPP) to include
longer messages as payment descriptions, it has been identified that people are
now using it for communication, and in some cases, the system was being used as
a targeted form of domestic and family violence. This type of tech-assisted
abuse poses new challenges in terms of identification, actions and approaches
to rectify this behaviour. Commonwealth Bank of Australia's Artificial
Intelligence Labs team (CBA AI Labs) has developed a new system using advances
in deep learning models for natural language processing (NLP) to create a
powerful abuse detector that periodically scores all the transactions, and
identifies cases of high-risk abuse in millions of records. In this paper, we
describe the problem of tech-assisted abuse in the context of banking services,
outline the developed model and its performance, and the operating framework
more broadly. | Anna Leontjeva, Genevieve Richards, Kaavya Sriskandaraja, Jessica Perchman, Luiz Pizzato | 2023-03-10T06:10:53Z | http://arxiv.org/abs/2303.08016v1 | # Detection of Abuse in Financial Transaction Descriptions Using Machine Learning
###### Abstract
Since introducing changes to the New Payments Platform (NPP) to include longer messages as payment descriptions, it has been identified that people are now using it for communication, and in some cases, the system was being used as a targeted form of domestic and family violence. This type of tech-assisted abuse poses new challenges in terms of identification, actions and approaches to rectify this behaviour. Commonwealth Bank of Australia's Artificial Intelligence Labs team (CBA AI Labs) has developed a new system using advances in deep learning models for natural language processing (NLP) to create a powerful abuse detector that periodically scores all the transactions, and identifies cases of high-risk abuse in millions of records. In this paper, we describe the problem of tech-assisted abuse in the context of banking services, outline the developed model and its performance, and the operating framework more broadly.
Abuse, NLP, Machine Learning, Offensive Language
## I Introduction
### _Technology Assisted Abuse_
Digital communication plays an increasingly important role in everyday life. As of 2021, 4.55 billion people are active social media users, equating to 57.6% of the world population [1]. The prevalence and variety of digital communication have given us the ability to contact someone 24 hours a day through many different channels, such as social media, text-messaging, and email. Although this has increased convenience for a lot of people, it has also presented significant challenges for personal security and privacy, and in particular for domestic violence victims/survivors [2].
Technology-facilitated abuse is commonly defined as the use of technology such as mobile, online or other digital technologies, as a tool for people to engage in behaviours such as coercive control, intimidation, stalking, monitoring, psychological and emotional abuse, consistent harassment and unwanted contact, sexual harassment, to cause harm and distress to the recipient [3]. This term can be extended to include broader forms of online harassment and cyber bullying; however, it is typically focused on gendered violence (domestic violence) [4]. The impacts of technology-facilitated abuse on the recipient can include depression, worthlessness, fatigue, self-harm, traumatisation, fear, isolation, emotional distress and more. There are also reported economic impacts, functional harms and an intrusion on the recipient's personal freedom [4].
### _Technology assisted abuse in Banking_
Modern payment systems have increased the speed of financial transactions and also enabled richer descriptions of those transactions [5]. The introduction of the New Payments Platform (NPP) in Australia in 20181 allows a person or a business to conduct a transfer to others in real-time and include up to 280 UTF-8 characters for the payment description and an additional 35 printable ASCII characters for the payment reference. NPP has also provided customers with the ability to set up a simple identifier (PayID(r)) for their accounts, such as a mobile number or an email address, so that they no longer need to remember their Bank State Branch (BSB)2 and account number. This resulted in a simple and fast way for people to transfer funds to each other in Australia. As of June 2022, 107 Australian financial institutions use these services, and more than 10 million PayIDs have been registered by customers and businesses [6]. New technologies such as PayID have simplified banking, increasing the ease and volume of transactions; however, they have also provided perpetrators with another tool to use for abuse.
Footnote 1: See: [https://nppa.com.au/the-platform/](https://nppa.com.au/the-platform/)
Footnote 2: BSB is a number that indicates the bank and branch that holds ones account in Australia, facilitating a transaction between banks.
In early 2020, we at the Commonwealth Bank of Australia (CBA) identified the use of real-time transactions as a means of communication between individuals, typically through the use of low value transactions. We found that more than 8,000 customers in a three-month period had received multiple low-value deposits with messages in the transaction description that were potentially abusive. We identified that the intent of the messages ranged from "jokes" using profanity to serious threats or references to domestic abuse and family violence [5]. Utilizing transaction descriptions as a mode of either criminal communication or abuse, rather than as a means to transfer funds, is being detected in financial institutions across the world. The Australian Transaction Reports and Analysis Centre (AUSTRAC) Fintel Alliance report [5] notes that it is not unique to the Australian banking industry. For example, several Brazilian news groups report the arrest of a man harassing a young woman through bank transfers after having his number blocked [7]. We can see that any payment that contains a free text field to be completed by the sender and viewed by the recipient can be a vehicle for criminal communication.
### _Role of the Bank_
Although online banking was never intended to be used as a digital communication technology, its occurrence has meant that financial institutions had to take action in order to protect those being abused. Initial responses by financial institutions have involved actions such as updating their terms and conditions to include references to abusive transactions and introducing real-time word blocks from reference lists [8]. These measures have been shown to significantly reduce the use of profanity and some abuse in transaction descriptions; however, they have not completely stopped serious abuse from happening.
Although these solutions have seen a reduction in profanity used in online banking transactions, they are not stopping abusers with the intention to cause harm or distress to the recipients as they have simply learnt how to circumvent these initial solutions. For example, the word _unblock_, which is associated with these abusive payments, was observed to be modified to _un-block_, _u.n.b.l.o.c.k_ and other versions to bypass it. Because of this, we decided to protect our customers by building a monitoring system that can work in the background identifying cases of serious abuse that may need to be further investigated.
Building a system that identifies abuse involves, among many things, the definition of abuse, and the design of a system and processes that can help the victims and dissuade the perpetrator. To proactively stop abusers or to reach out and provide support to the affected customers, we first need to identify these cases. There is a lot of complexity in this task alone, including the volume of transactions sent each day, understanding the context of the transaction sent, and the nuances of the language and behaviour used. We have addressed the issue by using a multi-step approach. First, the model is applied to score all the transactions. The cases with the highest scores are then sent to a dedicated team of customer vulnerability specialists who manually review the cases and contact the victims of abuse identified by the model. The team will then take the most appropriate action, which may involve contacting the victim, as well as sending warning letters to the perpetrator letting them know their behaviour is not tolerated. In some cases it might involve welfare checks to ensure the victim's safety and to gain their consent to take further action. Due to the capacity and complexity of cases and interventions offered, the team is only able to manually review and process a limited number of cases a month. Therefore, it is crucial that we control the number of false positive cases while ensuring we detect all the true positive cases.
Due to the novelty of the problem, the current approach does not include a comparison with other models, and we believe it can be improved by efforts of the wider research community. However, the current work establishes a solid baseline for future comparison. We also hope that it encourages the adoption of these techniques in other financial institutions that are currently relying on simple filters and keyword detection, which can be easily circumvented.
## II Problem Statement
In this paper we propose an approach to the problem of high-risk abuse case detection in banking payment systems using a combination of features from different deep learning models. Despite the fact that technology assisted abuse is not a new problem and some research has been conducted to investigate it (see Section III for more details), this type of abuse using banking transactions was only recently identified and poses a new set of challenges. One of the biggest challenges is the sensitivity of the matter. It should be handled with utmost care considering that both action and inaction can be potentially dangerous. Bank transactions are different to social media messages that can be easily deleted or blocked. The transactions have a much longer "life-span". They might be visible to someone beyond the victim. They might be delivered in printed form. They might be used as evidence for people's applications to loans and other services, which can cause retraumatisation when the victim revisits them. Similarly, the victim might have much lower tolerance towards the abuser's behaviour, and all these situations can be difficult to deal with.
In terms of actions taken, the bank often contacts abusers asking them to stop. If the behaviour continues, it is possible to _unbank_ a customer. However, unlike social media bans, unbanking is a decision by a financial institution to end its relationship with a particular individual and could lead to serious consequences affecting people's lives. To complicate things further, transaction descriptions are often limited in context and open to interpretation. Therefore, a dedicated team has to investigate and approach each case individually, which is a labour-intensive process that leads to the prioritisation of cases. This is known as a human-in-the-loop system, defined by needing both human and machine performance to contribute to improving the overall system results [9]. Therefore, in this paper we focus on the framework of detecting high-risk cases that need to be prioritised, allowing us to adhere to Australia's AI Ethics Principles of reliability and safety [10].
**Definition II.1**: _High-risk cases of abuse are defined by the severity and volume of the following:_
* _the presence of repetitive, abusive, degrading or hateful comments about a person or persons_
* _threats of physical or sexual violence to a person_
* _threats of self-harm_
* _endangering or causing distress to a minor_
* _repeated or unwanted sexual requests to a person._
## III Literature review
A report by [11] provides an in-depth investigation of the different technologies utilized by abusers to commit technology-facilitated abuse related to domestic violence. The report explores types of abuse associated with coercive control, financial abuse, smart homes and stalking, and how these technologies are misused by abusers. The report also provides a framework for inclusive safety when designing technology systems; however, it does not suggest any solutions around how to identify when a system is being misused. Although the report touches on how financial abuse can happen in banking systems, it does not explore the problem of transaction descriptions being utilized by abusers to send abusive messages and exhibit control and stalking behaviours.
The problem of detecting abusive messages in bank transaction descriptions is novel. While similar problems of detecting offensive language, toxicity levels in text, bullying and hate speech have been a subject of research over the past 20 years, this has mostly been in the context of social network moderation, for example, employing machine learning techniques to identify this type of content from Twitter [12] and Facebook [13].
In a similar fashion to the issue of abusive messages, data within social networks is highly unstructured, informal, and often misspelled, therefore, papers such as [14] have utilised natural language processing techniques to detect both lexical and syntactic features of sentences. Branching out from solely using features from the text, [14] used style, structure and posting pattern features to improve detection of offensive messages. [15] used joy, emoticons, uppercase, number of followers, amongst other features.
[13] outlines a new approach called Entailment as Few Shot Learner (EFL). With the aim to improve language models as few-shot learners, the approach involves converting class labels into a natural language sentence which is used to describe the label, and determine whether the label entails the description. The EFL approach can also leverage techniques such as unsupervised contrastive data augmentation and can be extended to multilingual few-shot learning. [16] proposes a novel shallow neural network using GloVe embeddings on Wikipedia public datasets to classify whether the comments are toxic or are instances of attack in cyber bullying context.
[12] leveraged machine learning to detect targeted vs untargeted offensive language. This was done by creating a three-level annotation schema, corresponding to three subtasks. The first Subtask A focussed on purely the language in a dataset, classifying it as either offensive or non offensive. Subtask B further classified the data as targeted or untargeted, i.e. general offensive language or hate speech, and Subtask C classifies whether the hate speech was targeted at an individual or a group.
Other techniques to detect abuse have leveraged systems based on pre-trained language models such as RoBERTa and BERT [17], which have reached new state-of-the-art performances on numerous tasks [18]. In [19], a BERT model fine-tuned with binary cross-entropy loss was used to identify abusive language in Twitter Hatespeech and Wikipedia datasets. BERT embedded models outperformed other embeddings such as fastText, TextCNN and TextCNN + Character n-grams. One issue that was found with pre-trained models is that they are trained on general datasets, so they have limitations on domain-specific language tasks. Re-training pre-trained models on domain-specific datasets is a popular method to address this, as seen in [20]. This is especially useful in contexts such as abusive language detection where there is not enough data to train a BERT-like model from scratch. Their model, "TweetBERT", was re-trained on a Twitter-based corpus and outperformed other BERT based models when analysing Twitter content.
Paper [21] outlined another method to improve BERT based models in order to detect instances of cyberbullying and harmful speech on Australian-based Twitter data. This was done through appending additional features onto BERT as special tokens. The features included emoji paths, metadata such as user information (e.g. age, gender, number of posts), data on their network (e.g. number of follower and friends) and their power (followers/friends ratio). Results showed that BERT with the extra tokens (BERT + emoji + network + power) yielded the most accuracy.
We observed that prior work focuses on identifying instances of abusive messaging instead of the abusive relationship. As mentioned in Definition II.1 of high-risk abuse, these transaction descriptions need to be relatively consistent and occur at a higher frequency. In a similar fashion, some papers have implemented techniques on user-centric data to identify potentially abusive users, rather than lone instances of abusive language. For example, [22] used graph machine learning to identify hateful users. Hateful accounts were characterised using attributes such as creation date, user activity, network centrality, sentiment and lexical analysis, amongst other attributes. The methodology involved using a process based on DeGroot's learning model [22] to sample users in a neighbourhood, and label them as hateful or non-hateful. There were significant patterns found to be associated with 'hateful users', including increased activity and increased frequency in using particular language.
While there is a lot of work that focus on online social networks, there has been no research into detecting abuse in transaction descriptions in a context of financial services. In this work, we leverage several machine learning techniques to not only identify abusive language in transaction descriptions, but identify the transaction relationships of the customers who are using it.
## IV Data
In this section, we introduce the specifics of the dataset we use as well as the data preprocessing methods. This paper relies on Commonwealth Bank transaction data. We extracted details from the bank's database, including transaction descriptions, the corresponding dollar amounts, the date of each transaction, and the sender and recipient account numbers. We gathered transaction data from both the new payment platform (NPP) and the non-NPP processes. This data was used to generate features for the model training, which were aggregated by _relationships_, as described in Section V. Note, a relationship in this context means a sender and a recipient pair of a transaction; that is, if sender \(a\) sends a transaction to recipient \(b\), we have the \(\langle a,b\rangle\) relationship; if \(b\) sends a transaction to \(a\), this creates a different and new \(\langle b,a\rangle\) relationship. The number of transactions used per relationship is defined by the historical time-window; in this study we fixed the time-window to one month.
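As an illustration of this grouping, the pandas sketch below builds directed sender-recipient relationships over a fixed one-month window; the column names and toy records are invented for the example and do not reflect the bank's actual schema.

```python
import pandas as pd

# Toy records with illustrative column names (not the real schema).
trx = pd.DataFrame({
    "sender":      ["a", "a", "b", "a"],
    "recipient":   ["b", "b", "a", "c"],
    "amount":      [0.01, 0.01, 50.00, 12.50],
    "description": ["please answer me", "unblock me now", "rent", "lunch"],
    "timestamp":   pd.to_datetime(["2022-01-03", "2022-01-04", "2022-01-05", "2022-01-20"]),
})

# Fix a one-month historical window, then aggregate per directed relationship:
# <a, b> and <b, a> are treated as two distinct relationships.
window = trx[(trx["timestamp"] >= "2022-01-01") & (trx["timestamp"] < "2022-02-01")]
relationships = (
    window.groupby(["sender", "recipient"])
          .agg(n_trx=("description", "size"),
               total_amount=("amount", "sum"),
               descriptions=("description", list))
          .reset_index()
)
print(relationships)
```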
Our dataset contains 1,039 relationships that were labelled as either (1) highly abusive or (0) non-abusive. Among those unique relationships, 283 were branded as 'highly-abusive' by several domestic violence experts who used the definition of high-risk abuse as a guide (see Definition 2.1). They had an agreement score of 87%. Negative sampling was created by randomly choosing non-abusive relationships as well as
a sample of cases where transactions had "conversational" descriptions that do not meet the abuse requirements but were significantly different to normal transactions. Some cases, for example, included customers sending song lyrics to one another or a perfectly natural chat. This was done to avoid using a machine learning model to detect only long messages rather than high-risk abuse. This training set contains data from July 2021 to January 2022. We used this dataset for our experiments and validated our proposed system using k-fold cross validation. It is important to note that there are no overlapping relationship pairs between folds.
In addition, for an out-of-sample dataset, we extracted one month of transaction data. We used data from the month of February 2022 for this. This demonstrates a model scoring use-case as part of the current business process. In any given month, less than 0.0005% of cases are abusive, resulting in a highly imbalanced situation. As manually scoring all of these monthly transactions is impractical, the out-of-sample test set was created by labelling the top 50 highest scored relationships of the corresponding month for each of the candidate models, 35 of which turned out to be highly abusive.
## V Methodology
In this section we describe our approach in more detail. The first task was to decide whether it was better to detect abuse at the transaction, customer, or relationship level. The transaction level lacks sufficient textual information to capture the context. Consider a transaction text that says "I love you". Without more transactions to observe the dynamics, it's unclear whether this is a case of harassment or a regular message between a couple. However, if this type of description is sent every 5 minutes and the other party requests that it stop, the case becomes much clearer. On the other hand, if we consider collecting all transactions at the customer level, the abusive information may be diluted: abusive customers frequently harass only one person among their recipients despite having a large network of regular recipients. As a result, we detect abuse at the relationship level, using descriptions gleaned from transactions between each sender-recipient pair. Figure 1 shows an example of an abuser having multiple victims, which in this case we would flag as two distinct relationships of high risk.
Following that, we describe the overall approach developed to detect abuse in transaction descriptions (AITD). It should be noted here that our target is to detect the highly abusive cases. The diagram in Figure 2 depicts an overview of our system, which involves the following steps:
1. Transaction-level feature generation: creating appropriate features from each single transaction description (Section V-A)
2. Relationship-level feature generation: aggregating these features on each relationship (sender-recipient pair) in order to detect abusive customers, not just individual abusive transactions (Section V-B)
3. Incorporating reciprocity information: generating features related to the replies a potential victim might have sent (Section V-C)
4. Training a machine learning model to predict the labels: a Random Forest model was used to classify relationships as either highly abusive or non-abusive
### _Transaction-level feature generation_
As described previously, the first step is to generate features on a transaction level. This transaction-level feature vector is then aggregated to create features on a relationship level. There are three types of features involved in our model, as follows:
* **Transaction details features (TRX)** are solely related to the specifics of the transaction between sender and receiver, such as: dollar amount transacted, date of the transaction, number of transactions a day, maximum number of transactions per day and the time between maximum and minimum number of transactions.
* **Simple text features (ST)** are related to the basic information we can extract from the transaction description, such as the length of the transaction description, upper/lower/mixed case flags, the number of words, the length of the longest word in the transaction description, whether the message contains special characters or numbers, an empty description flag, and various punctuation and number-related flags.
Fig. 1: Some abusers intimidate several people. In this case, the abuser's communication with his ex-wife and the abuser's communication with his child are considered as separate relationships in our dataset. **Note: the transaction descriptions in this image are not real and were made up as illustrations of how abusive these messages are.**
Fig. 2: An overview of the system architecture. Three sets of the features are created from the raw data and combined on the level of relationships. The final model takes both original features and reciprocity features.
* **Emotion, Toxicity and Sentiment features (ETS)** are features based on three pre-trained language models that provide valuable information for abuse detection based on the text in the message descriptions. Seven _toxicity_ features were generated per transaction using the unbiased version of the BERT-based, pre-trained language model Detoxify [23]. The unbiased version of Detoxify recognises toxicity while minimising unintended bias towards identities. It scores the text on seven categories of toxicity: toxicity, severe toxicity, obscene, threat, insult, identity attack and sexual explicit. These scores were then used as the seven toxicity features for the proposed AITD model. For _emotion_ features we used DistilBERT models trained on four data sources, including the dailydialog, emotion-stimulus, isear and huggingface emotion datasets [24]. This pre-trained model determines whether the given text is neutral, joyful, sad, angry, or contains love, fear, or surprise emotions. We predicted the scores for each emotion class for each transaction description. We used VADER (Valence Aware Dictionary for Sentiment Reasoning) for finding the _sentiment_ of a transaction description. It indicates both the polarity (positive/negative) and the intensity (strength) of emotion. The sentiment analysis in VADER is based on a dictionary that maps lexical features to emotion intensities known as sentiment scores. A text's sentiment score can be calculated by adding the intensity of each word in the text. VADER's sentiment intensity analyser accepts a string and returns a dictionary of scores in four categories: positive, negative, compound, and neutral [25]. A minimal sketch of this per-transaction feature extraction is shown below.
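The following sketch illustrates how the toxicity and sentiment scores can be computed for a single transaction description, assuming the open-source Detoxify and vaderSentiment packages; the function and feature names are illustrative and not part of the production system.

```python
# Illustrative per-transaction toxicity and sentiment scoring (not the bank's code).
from detoxify import Detoxify
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

toxicity_model = Detoxify("unbiased")            # seven toxicity categories
sentiment_model = SentimentIntensityAnalyzer()   # VADER: pos/neg/neu/compound

def ets_features(description: str) -> dict:
    """Return toxicity and sentiment scores for one transaction description."""
    features = {}
    # Detoxify returns a dict such as {"toxicity": ..., "severe_toxicity": ..., ...}
    features.update({f"tox_{k}": float(v) for k, v in toxicity_model.predict(description).items()})
    # VADER returns {"neg": ..., "neu": ..., "pos": ..., "compound": ...}
    features.update({f"sent_{k}": v for k, v in sentiment_model.polarity_scores(description).items()})
    # Emotion scores from a fine-tuned DistilBERT model would be appended analogously.
    return features

print(ets_features("pay me back or you will regret it"))
```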
### _Relationship level feature generation_
The above-mentioned features were calculated for every transaction. Because our prediction task focuses on relationships, we need to aggregate the information from all the transactions between a sender and a receiver. This aggregation is done in a slightly different way depending on the feature, as shown in Table I.
We also used additional features derived from all transactions in a relationship. These are the number of transactions sent in a relationship, the maximum number of transactions sent in a single day, and the number of unique days on which a transaction has occurred.
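As a rough illustration of this aggregation step, the sketch below groups per-transaction features by sender-recipient pair with pandas; the column names and the choice of aggregation functions are illustrative assumptions rather than the exact set listed in Table I.

```python
# Illustrative relationship-level aggregation of per-transaction features.
import pandas as pd

def aggregate_by_relationship(trx: pd.DataFrame) -> pd.DataFrame:
    """Aggregate transaction-level rows into one row per (sender, recipient) pair."""
    grouped = trx.groupby(["sender", "recipient"])
    return grouped.agg(
        n_transactions=("description", "size"),
        total_amount=("amount", "sum"),
        mean_toxicity=("tox_toxicity", "mean"),
        max_toxicity=("tox_toxicity", "max"),
        mean_sentiment=("sent_compound", "mean"),
        n_unique_days=("date", "nunique"),
    ).reset_index()
```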
### _Reciprocity_
We also include the same features calculated on the replies within a relationship. That is, we calculate features on relationship \(\langle a,b\rangle\) as well as features on the reciprocal relationship \(\langle b,a\rangle\). This reflects our hypothesis that reciprocity might be useful, as a recipient often avoids replying to an abusive sender.
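A minimal sketch of how the reciprocal features can be attached is shown below, assuming a relationship-level table like the one produced by the previous sketch; the "_reply" suffix and column names are illustrative.

```python
# Illustrative join of each <a, b> relationship with its reciprocal <b, a> features.
import pandas as pd

def add_reciprocity(rel: pd.DataFrame) -> pd.DataFrame:
    reply = rel.rename(columns={"sender": "recipient", "recipient": "sender"})
    reply = reply.add_suffix("_reply").rename(
        columns={"sender_reply": "sender", "recipient_reply": "recipient"})
    # Left join keeps relationships where the recipient never replied (reply features become NaN).
    return rel.merge(reply, on=["sender", "recipient"], how="left")
```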
## VI Results and Discussion
First, we performed an experiment to investigate which sets of features discriminate best between highly abusive and non-abusive cases. We evaluated models with the following combinations of the feature sets: transaction details (TRX), simple text (ST), and emotion, toxicity and sentiment features (ETS). We show the results of repeated 5-fold cross-validation in Table II and use precision, recall, F1, AUC and AUC-PR metrics for evaluation. Overall, the best performing model was the random forest using the simple text, transaction details, and emotion, toxicity and sentiment features combined (ETS + ST + TRX). After selecting the best set of features, we experimented with adding reciprocal features and observed further improvements (see Table III).
Next, we evaluated our results on an out-of-sample test set as outlined in Section IV. The best system (ETS + ST + TRX + reciprocity) from the previous experiment was used to demonstrate the capability of our model. The aim of this validation was to make sure that the model was consistent and produced no false positives among the highest-scored results, thus allowing us to confidently select top cases for a manual review. Figure 3 displays the ROC curve of the models on the out-of-sample test set of transaction descriptions collected over one month, with no overlap between this month of data and our training dataset.
Next, the top 50 sender-recipient pairs were manually labelled and used to produce the ROC curve; we did not manually verify the remaining cases, as they contain hundreds of transaction descriptions and would require substantial manual effort.
The ROC curve shows the trade-off between sensitivity (true positive rate, or recall) and specificity (1 - false positive rate). The black dotted line in Figure 3 corresponds to a random guess. Note that classifiers producing curves closer to the top-left corner perform better on the highest-scored cases. From Fig. 3 we can clearly see that the best system predicts the highly abusive cases successfully, with the first truly non-abusive case appearing only at rank 26.
## VII Conclusion
In this paper we outlined a new problem related to harassment and abuse happening in the financial services domain. While it bears similarities to abuse on other online social platforms, this problem poses new challenges and requires careful consideration. We described a particular case of abuse in transaction descriptions in the largest bank of Australia, and suggested ways to address it. We explored models with different feature sets and measured their performance in a real-life scenario. We showed that the best performing model is a supervised model trained on a variety of features that range in complexity from simple transaction and text features to toxicity and emotion features calculated using state-of-the-art advances in the field of NLP. The final model is already fully operational in the bank. To increase the model's robustness, we regularly retrain it once the cases sent for review have been verified by the customer vulnerability specialists.
We continue improving our system in order to provide better protection to our customers. There is a range of potential improvements we are currently working on and aim to publish in future work. Some examples are: better foreign-language coverage, the use of several months of conversation history to detect long-term abuse, and the use of BERT embeddings instead of high-level features once the labelled training set is large enough.
|
2310.10761 | Simulation Based Composite Likelihood | Inference for high-dimensional hidden Markov models is challenging due to the
exponential-in-dimension computational cost of the forward algorithm. To
address this issue, we introduce an innovative composite likelihood approach
called "Simulation Based Composite Likelihood" (SimBa-CL). With SimBa-CL, we
approximate the likelihood by the product of its marginals, which we estimate
using Monte Carlo sampling. In a similar vein to approximate Bayesian
computation (ABC), SimBa-CL requires multiple simulations from the model, but,
in contrast to ABC, it provides a likelihood approximation that guides the
optimization of the parameters. Leveraging automatic differentiation libraries,
it is simple to calculate gradients and Hessians to not only speed-up
optimization, but also to build approximate confidence sets. We conclude with
an extensive experimental section, where we empirically validate our
theoretical results, conduct a comparative analysis with SMC, and apply
SimBa-CL to real-world Aphtovirus data. | Lorenzo Rimella, Chris Jewell, Paul Fearnhead | 2023-10-16T18:48:57Z | http://arxiv.org/abs/2310.10761v1 | # Simulation Based Composite Likelihood
###### Abstract
Inference for high-dimensional hidden Markov models is challenging due to the exponential-in-dimension computational cost of the forward algorithm. To address this issue, we introduce an innovative composite likelihood approach called "Simulation Based Composite Likelihood" (SimBa-CL). With SimBa-CL, we approximate the likelihood by the product of its marginals, which we estimate using Monte Carlo sampling. In a similar vein to approximate Bayesian computation (ABC), SimBa-CL requires multiple simulations from the model, but, in contrast to ABC, it provides a likelihood approximation that guides the optimization of the parameters. Leveraging automatic differentiation libraries, it is simple to calculate gradients and Hessians to not only speed-up optimization, but also to build approximate confidence sets. We conclude with an extensive experimental section, where we empirically validate our theoretical results, conduct a comparative analysis with SMC, and apply SimBa-CL to real-world Aphtovirus data.
_Keywords:_ Hidden Markov model; Composite likelihood; Monte Carlo approximation.
## 1 Introduction
Discrete-state hidden Markov models (HMMs) are common in many applications, such as epidemics (Allen, 2008; Britton, 2010), systems biology (Wilkinson, 2018) and ecology (Glennie et al., 2023). Increasingly there is interest in individual-based models (e.g. Keeling and Eames, 2005; Rimella et al., 2023), in which the HMM explicitly describes the state of
each individual agent in a population. For example, an individual-based epidemic model may characterise each person in a population as having a latent state, being either susceptible, infected or recovered. This state is typically observed noisily, with a sample of individuals being detected as infected with a possibly imperfect diagnostic test (e.g. Cocker et al., 2023b). Thus, whilst there may be only a small number of states for each individual, this corresponds to a latent state-space that grows exponentially with the number of individuals.
In theory, likelihood calculations for such discrete-state HMMs are tractable using the forward-backward recursions (Scott, 2002). However, the computational cost of these recursions is at least linear in the size of the state-space of the HMM: this means that they are infeasible for individual-based models with moderate or larger population sizes. This has led to a range of approximate inference methods. These include Monte Carlo methods such as MCMC and sequential Monte Carlo. Whilst such methods can work well, often they scale poorly with the population size, which may lead to poor mixing of MCMC algorithms or large Monte Carlo variance of the weights in sequential Monte Carlo. An alternative approach is approximate Bayesian computation (ABC), where one simulates from the model for different parameter values, and then approximates the posterior for the parameter based on how similar each simulated data set is to the true data. Such a method needs informative, low-dimensional summary statistics to be available so that one can accurately measure how close a simulated data set is to the true data. Furthermore, ABC can struggle with complex models with many parameters, as the number of summary statistics needs to increase with the number of parameters (Fearnhead and Prangle, 2012).
In this paper we consider individual-based HMMs where we have individual level observations. We present a computationally efficient method for inference that is based on the simple observation: if we fix the state of all members of the population except one, then we can analytically calculate the conditional likelihood of that one individual using forward-backward recursions. This idea has been used before within MCMC algorithms that update the state of each individual in turn conditional on the states of the other individuals (Touloupou et al., 2020). Here we use it in a different way. By simulating multiple realisations of the states of the other individuals we can average the conditional likelihood to obtain a Monte Carlo estimate of the likelihood of the data for a given individual. We then sum the log of these estimated likelihoods over individuals to obtain a composite log-likelihood (Varin, 2008) that can be maximised using, for example, stochastic gradient descent, to estimate the parameters.
We introduce the general class of models we consider in Section 2. We then show how to obtain a Monte Carlo estimate of the likelihood for the observations associated with a single individual, which can be used as the basis of a composite likelihood for our model. The calculation of the likelihood for each individual involves accounting for feedback between the state of the individual in question, and the probability distribution of future states of the rest of the population. A computationally more efficient method can be obtained by ignoring this feedback - and we present theory that bounds the error of this approach, and show that it can decay to zero as the population size tends to infinity. Then in Section 4 we show how we can get confidence regions around estimators based on maximising our composite
likelihood. We then demonstrate the efficiency for individual-based epidemic models both on simulated data and on data from the 2001 UK foot and mouth outbreak.
## 2 Model
### Notation
Given the integer \(t\in\mathbb{N}\), we denote the set of integers from \(1\) to \(t\) as \([t]\), and we use \([0:t]\) if we want to include \(0\). Additionally, we use \([t]\) as shorthand for indexing, for instance \(x_{[t]}\) denotes the collection \(x_{1},\ldots,x_{t}\). Given an index \(n\in[N]\), we use \(x^{n}\) to denote the \(n\)th component of \(x\) and \(x^{\setminus n}\) to denote the \((N-1)\)-dimensional vector obtained by removing the \(n\)th component from \(x\). If required, we augment the superscript notation and use \(x^{(i)}\) to refer to the \(i\)th such vector, with components \(x^{(i),n}\). For a finite and discrete set \(\mathcal{S}\), we represent the cardinality of \(\mathcal{S}\) as \(\mathbf{card}\left(\mathcal{S}\right)\), and we use the shorthand \(\sum_{x}\) to express the sum over all elements of \(\mathcal{S}\).
We use bold font to denote random variables and regular font for deterministic quantities. For the underlying probability measure, we commonly use \(p\) and, for the sake of clarity, we focus on its functional form, for instance we use \(p\left(x_{t}|x_{t-1},\theta\right)\) for the probability of \(\mathbf{x}_{t}=x_{t}\) given \(\mathbf{x}_{t-1}=x_{t-1}\) and the parameters \(\theta\).
### Hidden Markov models and likelihood computation
A hidden Markov model (HMM) \(\left(\mathbf{x}_{0},\left(\mathbf{x}_{t},\mathbf{y}_{t}\right)_{t\geq 1}\right)\) is a stochastic process where the unobserved process \(\left(\mathbf{x}_{t}\right)_{t\geq 0}\) is a Markov chain, and the observed process \(\left(\mathbf{y}_{t}\right)_{t\geq 1}\) is such that, for any \(t\geq 1\), \(\mathbf{y}_{t}\) is conditionally independent of all the other variables given \(\mathbf{x}_{t}\). See Chopin and Papaspiliopoulos (2020) for a comprehensive review of HMMs.
Within this study, we focus on the specific scenario of HMMs on finite dimensional state-spaces. Precisely, we consider \(\left(\mathbf{x}_{t}\right)_{t\geq 0}\) to take values on the state-space \(\mathcal{X}^{N}\), which satisfies a product form, \(\mathcal{X}^{N}=\bigtimes_{n\in[N]}\mathcal{X}\), where \(\mathcal{X}\) is finite and discrete. We also consider \(\left(\mathbf{y}_{t}\right)_{t\geq 1}\) to be on the space \(\mathcal{Y}^{N}\), which also satisfies a product form, \(\mathcal{Y}^{N}=\bigtimes_{n\in[N]}\mathcal{Y}\), but here \(\mathcal{Y}\) can be of any form.
Given a collection of parameters \(\theta\), an HMM is fully defined through its components: the initial distribution \(p\left(x_{0}|\theta\right)\), which is the distribution of \(\mathbf{x}_{0}\); the transition kernel \(p\left(x_{t}|x_{t-1},\theta\right)\), which is the distribution of \(\mathbf{x}_{t}\) given \(\mathbf{x}_{t-1}\); and the emission distribution \(p\left(y_{t}|x_{t},\theta\right)\), which is the distribution of \(\mathbf{y}_{t}\) given \(\mathbf{x}_{t}\). Given the assumption that \(\mathcal{X}\) is finite and discrete, the probability distribution \(p\left(x_{0}|\theta\right)\) takes the form of a probability vector with \(\mathbf{card}\left(\mathcal{X}\right)^{N}\) elements, while \(p\left(x_{t}|x_{t-1},\theta\right)\) corresponds to a \(\mathbf{card}\left(\mathcal{X}\right)^{N}\times\mathbf{card}\left(\mathcal{X }\right)^{N}\) stochastic matrix.
For a given time horizon \(T\in\mathbb{N}\), we may assume that the data sequence \(y_{1},\ldots,y_{T}\) is generated from the aforementioned hidden Markov model with parameters \(\theta^{\star}\). Our primary interest is in inferring the parameter \(\theta^{\star}\) responsible for generating the data or, in cases where the model is not fully identifiable, a set of parameters that are equally likely. The computation of the likelihood for HMMs with discrete state-space is relatively straightforward and
involves marginalization over the entire state-space:
\[p\left(y_{[T]}|\theta\right)=\sum_{x_{[0:T]}}p\left(x_{0}|\theta\right)\prod_{t \in[T]}p\left(x_{t}|x_{t-1},\theta\right)p\left(y_{t}|x_{t},\theta\right). \tag{1}\]
In practice, to avoid marginalizing on an exponential-in-time state-space, the likelihood is computed recursively using the Forward algorithm, which recursively computes the filtering distribution \(p(x_{t}|y_{[t]},\theta)\) and the likelihood increments \(p(y_{t}|y_{[t-1]},\theta)\). The \((t+1)\)th step of the Forward algorithm comprises two operations, namely, prediction:
\[\left\{\begin{matrix}p\left(x_{t+1}|x_{t},\theta\right)\\ p\left(x_{t}|y_{[t]},\theta\right)\end{matrix}\right\}\stackrel{{ \text{prediction}}}{{\longrightarrow}}\left\{p\left(x_{t+1}|y_{[t]}, \theta\right)=\sum\limits_{x_{t}}p\left(x_{t+1}|x_{t},\theta\right)p\left(x_{t }|y_{[t]},\theta\right)\right\},\]
where the transition kernel is applied to the previous filtering distribution, and correction:
\[\left\{\begin{matrix}p\left(y_{t+1}|x_{t+1},\theta\right)\\ p\left(x_{t+1}|y_{[t]},\theta\right)\end{matrix}\right\}\stackrel{{ \text{correction}}}{{\longrightarrow}}\left\{\begin{matrix}p\left(x_{t+1}|y_{[t +1]},\theta\right)=\frac{p\left(y_{t+1}|x_{t+1},\theta\right)p\left(x_{t+1}|y_ {[t]},\theta\right)}{p\left(y_{t+1}|y_{[t]},\theta\right)}\\ p\left(y_{t+1}|y_{[t]},\theta\right)=\sum\limits_{x_{t+1}}p(y_{t+1}|x_{t+1}, \theta)p\left(x_{t+1}|y_{[t]},\theta\right)\end{matrix}\right\},\]
from which the likelihood increments \(p\left(y_{t}|y_{[t-1]},\theta\right)\), with \(p\left(y_{1}|y_{[0]},\theta\right)\coloneqq p\left(y_{1}|\theta\right)\), are then combined to compute the likelihood:
\[p\left(y_{[T]}|\theta\right)=\prod_{t\in[T]}p\left(y_{t}|y_{[t-1]},\theta \right).\]
Despite its simplicity, the Forward algorithm necessitates a marginalization on the full state-space, incurring a computational cost that is, at worst, quadratic in the cardinality of the state-space. For the considered scenario, this translates into a complexity of \(\mathcal{O}\left(\mathbf{card}\left(\mathcal{X}\right)^{2N}\right)\), making the Forward algorithm infeasible for large values of \(N\).
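To make the recursion concrete, here is a minimal NumPy sketch of the Forward algorithm for a small discrete HMM; the interface and toy dimensions are illustrative assumptions and not the paper's implementation.

```python
# Illustrative Forward algorithm for a small discrete HMM (NumPy).
import numpy as np

def forward_log_likelihood(init, trans, emis, obs):
    """log p(y_{1:T}) for an HMM with S latent states and discrete observations.
    init: (S,) initial distribution p(x_0); trans: (S, S), trans[i, j] = p(x_t = j | x_{t-1} = i);
    emis: (S, Y), emis[j, y] = p(y_t = y | x_t = j); obs: length-T list of observed symbols."""
    filt = init
    log_lik = 0.0
    for y in obs:
        pred = filt @ trans            # prediction: apply the transition kernel
        incr = pred * emis[:, y]       # correction: weight by the emission distribution
        p_y = incr.sum()               # likelihood increment p(y_t | y_{1:t-1})
        log_lik += np.log(p_y)
        filt = incr / p_y              # updated filtering distribution
    return log_lik
```

Each step costs \(\mathcal{O}(S^{2})\) for \(S\) latent states, which is precisely what becomes prohibitive when the latent space is \(\mathcal{X}^{N}\).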
Obviously, more sophisticated techniques are available to perform inference in HMMs. Notable among these are techniques such as Approximate Bayesian Computation (Beaumont, 2019), Sequential Monte Carlo (Doucet et al., 2001), and Variational Inference (Blei et al., 2017). While a complete overview of alternative methods is out of the scope of this work, it is noteworthy that even for these approaches, tackling the challenges of scaling up to high-dimensional HMMs, large values of \(N\), remains a significant obstacle, comparable to the challenges faced by the Forward algorithm.
### Factorial structure
Research has demonstrated that introducing certain factorization structures into the underlying model can yield approximate algorithms with interesting theoretical and computational properties (Rebeschini and Van Handel, 2015; Rimella and Whiteley, 2022). Moreover, the prospect of inference in high-dimensional HMMs without any assumptions about the
model structure appears implausible. Therefore, we restrict our study to the HMMs with initial distribution, transition kernel, and emission distribution that satisfy the following factorizations:
\[\begin{split}& p\left(x_{0}|\theta\right)=\prod_{n\in[N]}p\left(x_{0 }^{n}|\theta\right),\quad p\left(x_{t}|x_{t-1},\theta\right)=\prod_{n\in[N]}p \left(x_{t}^{n}|x_{t-1},\theta\right),\\ & p\left(y_{t}|x_{t},\theta\right)=\prod_{n\in[N]}p\left(y_{t}^{n }|x_{t}^{n},\theta\right),\end{split} \tag{2}\]
which essentially says that we can decompose the initial distribution into \(N\) probability vectors of size \(\mathbf{card}\left(\mathcal{X}\right)\), and the transition kernel into \(N\) stochastic matrices of size \(\mathbf{card}\left(\mathcal{X}\right)\times\mathbf{card}\left(\mathcal{X}\right)\) whose elements also depend on \(x_{t-1}\), and that observation \(n\) is conditionally independent of all the other variables given \(\mathbf{x}_{t}^{n}\).
It is important to mention that the introduced factorisation does not resolve our problems; rather, it serves as an essential foundation upon which we construct our approximation. Furthermore, the factorization given by (2) is common in several real world applications, among which: epidemics (Rimella et al., 2023, 2020), traffic modelling (Silva et al., 2015), sociology (Bianchi and Squazzoni, 2015) and finance (Samanidou et al., 2007).
## 3 Simulation based composite likelihood: SimBa-CL
From the model structure shown in Section 2.3, we can notice that by fixing the state of all but one component of the latent process, \(x_{[T]}^{\setminus n}\) say, we can leverage the factorisation and calculate probabilities related to the time-trajectory of the remaining state, \(x_{[T]}^{n}\), with a computational cost that is \(\mathcal{O}\left(\mathbf{card}\left(\mathcal{X}\right)\right)\). This idea has been used within Gibbs-style MCMC updates for epidemics; see Touloupou et al. (2020). We show how to use this idea, together with Monte Carlo averaging over \(x_{[T]}^{\setminus n}\), to calculate the marginal likelihoods \(p(y_{[T]}^{n}|\theta)\). We can then use the product of these marginal likelihoods, \(\prod_{n\in[N]}p(y_{[T]}^{n}|\theta)\), as an approximate likelihood that can be maximised to estimate \(\theta\). This idea is related to some approximations in discrete state-space HMMs and sequential Monte Carlo (Boyen and Koller, 1999, 2013; Rebeschini and Van Handel, 2015; Rimella and Whiteley, 2022), and corresponds to the framework of composite marginal likelihood (Varin, 2008; Varin et al., 2011). We return to this latter point in Section 4.
Using \(p\left(y_{[T]}|\theta\right)\approx\prod_{n\in[N]}p\left(y_{[T]}^{n}|\theta\right)\) still falls short, as the computation of \(p\left(y_{[T]}^{n}|\theta\right)\) continues to require a recursive marginalization on \(\mathcal{X}^{N}\). Yet, we can express the marginal likelihood \(p\left(y_{[T]}^{n}|\theta\right)\) as:
\[p\left(y_{[T]}^{n}|\theta\right)=\sum_{x_{[0:T-1]}^{\setminus n}}p\left(x_{[0: T-1]}^{\setminus n}|\theta\right)p\left(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n}, \theta\right), \tag{3}\]
where:
\[p\left(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n},\theta\right)=\sum_{x_{[0:T]}^{n}}p \left(x_{T}^{n}|x_{T-1},\theta\right)p\left(x_{[0:T-1]}^{n}|x_{[0:T-1]}^{ \setminus n},\theta\right)\prod_{t\in[T]}p\left(y_{t}^{n}|x_{t}^{n},\theta \right).\]
We have two necessary ingredients for calculating \(p\left(y_{[T]}^{n}|\theta\right)\): firstly, \(p\left(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n},\theta\right)\), which demands \(T\) recursive marginalizations on \(\mathcal{X}\) given \(x_{[0:T-1]}^{\setminus n}\); secondly, a marginalization on \(\mathcal{X}^{N-1}\) through \(p\left(x_{[0:T-1]}^{\setminus n}|\theta\right)\), see (3). At first glance, Equation (3) might appear to involve circular reasoning, given that marginalizing over \(\mathcal{X}^{N-1}\) remains computationally infeasible. However, when sampling from the process is both inexpensive and straightforward, we can resort to estimating (3) using Monte Carlo sampling.
We refer to this procedure as "Simulation Based Composite Likelihood", or "SimBa-CL" in short. In the following sections, we give an in-depth discussion on SimBa-CL, and show how we can target the true marginals of the likelihood and build a likelihood approximation in \(\mathcal{O}\left(N^{2}\right)\), see Section 3.1, how to approximate the marginals of the likelihood and build an approximation of the likelihood in \(\mathcal{O}\left(N\right)\), see Section 3.2, and how to generalise SimBa-CL, see Section 3.4. For the sake of presentation, we remove the dependence on the parameter \(\theta\), and focus on the filtering aspects of the algorithms for a fixed \(\theta\).
### SimBa-CL with feedback
Given efficient sampling from \(p(x_{[0:T-1]}^{\setminus n})\) and low-cost evaluation of \(p(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n})\) for a given \(x_{[0:T-1]}^{\setminus n}\), we can readily deduce a Monte Carlo estimate of the marginal likelihood from (3):
\[p\left(y_{[T]}^{n}\right)\approx\frac{1}{P}\sum_{i\in[P]}p\left(y_{[T]}^{n}|x_ {[0:T-1]}^{(i),\setminus n}\right), \tag{4}\]
where \(P\in\mathbb{N}\) is the number of Monte Carlo samples and \(x_{[0:T-1]}^{(i),\setminus n}\sim p\left(x_{[0:T-1]}^{\setminus n}\right)\). Repeating (4) for all \(n\in[N]\) and computing the product across \(n\) of these Monte Carlo estimates then represents a reasonable strategy for approximating the likelihood of the model.
Two ingredients are pivotal in the computation of (4): (i) sampling from the model and (ii) calculating \(p(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n})\). Sampling from \(p(x_{[0:T-1]}^{\setminus n})\) can be achieved by sampling \(x_{[0:T-1]}\) from \(p(x_{[0:T-1]})\) and then selecting the subset \(x_{[0:T-1]}^{\setminus n}\). It is worth noting that sampling from the entire process is generally straightforward and commonly employed in simulation-based algorithms like approximate Bayesian computation (ABC) (Beaumont, 2019) and sequential Monte Carlo (SMC) (Doucet et al., 2001). However, calculating \(p(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n})\) is intricate and demands a meticulous derivation of an alternative Forward algorithm, which takes into account the simulation outcome \(x_{[0:T-1]}^{\setminus n}\).
For the computation of \(p(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n})\), it is important to recognise that \(p(x_{[0:T-1]}^{n}|x_{[0:T-1]}^{\setminus n})\) can be reformulated as a product between the transition dynamics and the probability of
observing a certain simulation outcome:
\[p\left(x_{[0:T-1]}^{n}|x_{[0:T-1]}^{\setminus n}\right)=p\left(x_{0}^{n}\right) \prod_{t\in[T-1]}p\left(x_{t}^{n}|x_{t-1}\right)f\left(x_{t-1}^{n},x_{[0:t]}^{ \setminus n}\right), \tag{5}\]
where we refer to \(f(x_{t-1}^{n},x_{[0:t]}^{\setminus n})\coloneqq p(x_{t}^{\setminus n}|x_{t-1} ^{n},x_{[0:t-1]}^{\setminus n})\) as the simulation feedback, and so:
\[f\left(x_{t-1}^{n},x_{[0:t]}^{\setminus n}\right)=\frac{\prod \limits_{\bar{n}\in[N]\setminus n}p\left(x_{t}^{\bar{n}}|x_{t-1}\right)}{ \sum_{\bar{x}_{t-1}^{n}}\prod\limits_{\bar{n}\in[N]\setminus n}p\left(x_{t}^{ \bar{n}}|\bar{x}_{t-1}^{n},x_{[t-1]}^{\setminus n}\right)p\left(\bar{x}_{t-1}^ {n}|x_{[0:t-1]}^{\setminus n}\right)}, \tag{6}\]
where \(p\left(x_{0}^{n}|x_{[0:0]}^{\setminus n}\right)=p\left(x_{0}^{n}\right)\) for the factorisation of the initial distribution. More details on the factorisation (5) and the derivation of the simulation feedback (6) are available in the supplementary material.
By reformulating \(p\left(x_{[0:T-1]}^{n}|x_{[0:T-1]}^{\setminus n}\right)\) as depicted in (5), we arrive at the following expression:
\[p\left(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n}\right)=\sum_{x_{[0:T]}^{n}}p \left(x_{0}^{n}\right)\prod_{t\in[T-1]}f\left(x_{t-1}^{n},x_{[0:t]}^{\setminus n }\right)\prod_{t\in[T]}p\left(x_{t}^{n}|x_{t-1}\right)p\left(y_{t}^{n}|x_{t}^ {n}\right), \tag{7}\]
which resembles the likelihood of an HMM, see (1). Specifically, it comprises the usual transition dynamic term \(p\left(x_{t}^{n}|x_{t-1}\right)\) accompanied by two likelihood terms: one originating from the simulation outcome \(f\left(x_{t-1}^{n},x_{[0:t]}^{\setminus n}\right)\), and another concerning the observation \(p\left(y_{t}^{n}|x_{t}^{n}\right)\). We can then establish a Forward algorithm involving two corrections, one that is correcting according to the emission distribution:
\[\left\{\begin{matrix}p\left(y_{t}^{n}|x_{t}^{n}\right)\\ p\left(x_{t}^{n}|y_{[t-1]}^{n},x_{[0:t]}^{\setminus n}\right)\end{matrix} \right\}\xrightarrow{\text{observation}}\left\{\begin{matrix}p\left(x_{t}^{n}|y_{[ t]}^{n},x_{[0:t]}^{\setminus n}\right)=\frac{p(y_{t}^{n}|x_{t}^{n})p\left(x_{t}^{n}|y_{[ t-1]}^{n},x_{[0:t]}^{\setminus n}\right)}{p\left(y_{t}^{n}|y_{[t-1]}^{n},x_{[0:t]}^{ \setminus n}\right)}\\ p\left(y_{t}^{n}|y_{[t-1]}^{n},x_{[0:t]}^{\setminus n}\right)=\sum\limits_{x_ {t}^{n}}p\left(y_{t}^{n}|x_{t}^{n}\right)p\left(x_{t}^{n}|y_{[t-1]}^{n},x_{[0: t]}^{\setminus n}\right)\end{matrix}\right\}, \tag{8}\]
and the other that is correcting according to the simulation feedback:
\[\left\{\begin{matrix}f\left(x_{t}^{n},x_{[0:t+1]}^{\setminus n}\right)\\ p\left(x_{t}^{n}|y_{[t]}^{n},x_{[0:t]}^{\setminus n}\right)\end{matrix}\right\} \xrightarrow{\text{feedback}}\left\{\begin{matrix}p\left(x_{t}^{n}|y_{[t]}^{n},x _{[0:t+1]}^{\setminus n}\right)=\frac{f\left(x_{t}^{n},x_{[0:t+1]}^{\setminus n }\right)p\left(x_{t}^{n}|y_{[t]}^{n},x_{[0:t]}^{\setminus n}\right)}{p\left(x_ {t+1}^{\setminus n}|y_{[t]}^{n},x_{[0:t]}^{\setminus n}\right)}\\ p\left(x_{t+1}^{\setminus n}|y_{[t]}^{n},x_{[0:t]}^{\setminus n}\right)=\sum \limits_{x_{t}^{n}}f\left(x_{t}^{n},x_{[0:t+1]}^{\setminus n}\right)p\left(x_ {t}^{n}|y_{[t]}^{n},x_{[0:t]}^{\setminus n}\right)\end{matrix}\right\}. \tag{9}\]
The prediction follows as in the basic HMM scenario, with \(p(x_{t}^{n}|x_{t-1})\) as the transition kernel and \(p\left(x_{t-1}^{n}|y_{[t-1]}^{n},x_{[0:t]}^{\setminus n}\right)\) as the distribution to update:
\[\left\{\begin{matrix}p(x_{t}^{n}|x_{t-1})\\ p\left(x_{t-1}^{n}|y_{[t-1]}^{n},x_{[0:t]}^{\setminus n}\right)\end{matrix}\right\}\xrightarrow{\text{prediction}}\left\{p\left(x_{t}^{n}|y_{[t-1]}^{n},x_{[0:t]}^{\setminus n}\right)=\sum\limits_{x_{t-1}^{n}}p(x_{t}^{n}|x_{t-1})p\left(x_{t-1}^{n}|y_{[t-1]}^{n},x_{[0:t]}^{\setminus n}\right)\right\}. \tag{10}\]
Note that the order of these operations depends on the model structure; in our specific case we have: feedback correction, prediction, observation correction.
It is important to note that the computation of the simulation feedback \(f\left(x_{t-1}^{n},x_{[0:t-1]}^{\setminus n}\right)\) relies on \(p\left(x_{t-1}^{n}|x_{[0:t-1]}^{\setminus n}\right)\). In this particular context, the HMMs theory still proves to be handy as \(p\left(x_{t-1}^{n}|x_{[0:t-1]}^{\setminus n}\right)\) is the posterior distribution on \(x_{t-1}^{n}\) given the simulation output as observations. Consequently, this interpretation enables the employment of another Forward algorithm to compute recursively these intermediate quantities, where the correction step is given by:
\[\left\{\begin{aligned} &\prod\limits_{\bar{n}\in[N]\setminus n}p \left(x_{t}^{\bar{n}}|x_{t-1}\right)\\ & p\left(x_{t-1}^{n}|x_{[0:t-1]}^{\setminus n}\right)\end{aligned} \right\}\overset{\text{correction}}{\longrightarrow}\left\{\begin{aligned} p\left(x_{t-1}^{n}|x_{[0:t]}^{\setminus n}\right)=\frac{\prod \limits_{\bar{n}\in[N]\setminus n}p\left(x_{t}^{\bar{n}}|x_{t-1}\right)p\left( x_{t-1}^{n}|x_{[0:t-1]}^{\setminus n}\right)}{p\left(x_{t}^{\setminus n}|x_{[0:t-1]}^{ \setminus n}\right)}\\ & p(x_{t}^{\setminus n}|x_{[0:t-1]}^{\setminus n})=\sum\limits_{x _{t-1}^{n}}\prod\limits_{\bar{n}\in[N]\setminus n}p\left(x_{t}^{\bar{n}}|x_{t -1}\right)p\left(x_{t-1}^{n}|x_{[0:t-1]}^{\setminus n}\right)\end{aligned} \right\}, \tag{11}\]
and the prediction follows:
\[\left\{\begin{aligned} p\left(x_{t}^{n}|x_{t-1}\right)\\ p\left(x_{t-1}^{n}|x_{[0:t]}^{\setminus n}\right)\end{aligned} \right\}\overset{\text{prediction}}{\longrightarrow}\left\{p\left(x_{t}^{n}|x_{[0: t]}^{\setminus n}\right)=\sum\limits_{x_{t-1}^{n}}p\left(x_{t}^{n}|x_{t-1} \right)p\left(x_{t-1}^{n}|x_{[0:t]}^{\setminus n}\right)\right\}. \tag{12}\]
An iterative application of the aforementioned steps provides a collection of likelihood increments on both the simulation output and the observations, enabling the computation of \(p\left(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n}\right)\) as follows:
\[p\left(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n}\right)=p\left(y_{T}^{n}|y_{[T-1]} ^{n},x_{[0:T-1]}^{\setminus n}\right)\prod\limits_{t\in[T-1]}p\left(y_{t}^{n} |y_{[t-1]}^{n},x_{[0:t]}^{\setminus n}\right)p\left(x_{t}^{\setminus n}|y_{[t -1]}^{n},x_{[0:t-1]}^{\setminus n}\right), \tag{13}\]
where \(p\left(y_{1}^{n}|y_{[0]}^{n},x_{[0:0]}^{\setminus n}\right)\coloneqq p\left(y_ {1}^{n}|x_{0}^{\setminus n}\right)\) and \(p\left(x_{1}^{\setminus n}|y_{[0]}^{n},x_{[0]}^{\setminus n}\right)\coloneqq p \left(x_{1}^{\setminus n}|x_{[0]}^{\setminus n},\theta\right)\).
The final algorithm, named "SimBa-CL with feedback", is presented in Algorithm 1; its key steps are: simulation from the model, application of the forward step with feedback, and application of the forward step for the feedback. The computational complexity of running Algorithm 1 is \(\mathcal{O}\left(PTN^{2}\textbf{card}\left(\mathcal{X}\right)^{2}\right)\), wherein \(P\), \(T\), \(N\) come from looping over the number of simulations, time steps and dimensions, \(\textbf{card}\left(\mathcal{X}\right)^{2}\) comes from marginalizing over the state-space \(\mathcal{X}\), and the extra factor of \(N\) refers to the simulation feedback computation. It is noteworthy that this cost can potentially be reduced to \(\mathcal{O}\left(PTN\max_{n}\{\textbf{card}\left(\textbf{Neig}\left(n\right)\right)\}\textbf{card}\left(\mathcal{X}\right)^{2}\right)\) if the transition kernel \(p\left(x_{t}^{n}|x_{t-1},\theta\right)\) presents some local structure; precisely, if \(p\left(x_{t}^{n}|x_{t-1},\theta\right)=p\left(x_{t}^{n}|\bar{x}_{t-1},\theta\right)\) for any \(x_{t-1}^{\textbf{Neig}\left(n\right)}=\bar{x}_{t-1}^{\textbf{Neig}\left(n\right)}\), where **Neig** is a function mapping any \(n\in[N]\) onto a set in the power set of \([N]\). In practical terms, this indicates that computing the simulation feedback can be cheaper if the inter-dimension interactions are sparse. It is also important to notice that the algorithm can be run in parallel over both the dimensions and the simulations, making the dependence on \(P\) and the first dependence on \(N\) less burdensome.
```
Require: \(p\left(x_{0}\right)\), \(p\left(x_{t}|x_{t-1}\right)\), \(p\left(y_{t}|x_{t}\right)\) and their factorisations
for each \(i\in[P]\) do
    \(x_{0}^{(i)}\sim p\left(x_{0}\right)\)
    for \(t\in[T]\) do
        \(x_{t}^{(i)}\sim p\left(x_{t}|x_{t-1}^{(i)}\right)\) and compute \(f\left(x_{t-1}^{n},x_{[0:t]}^{(i),\setminus n}\right)\)
        for each \(n\in[N]\) do
            if \(t\neq T\), run the feedback correction (9) and get: \(\left\{p\left(x_{t-1}^{n}|y_{[t-1]}^{n},x_{[0:t]}^{(i),\setminus n}\right)\right\}\)
            Run the prediction (10) and get: \(\left\{p\left(x_{t}^{n}|y_{[t-1]}^{n},x_{[0:t]}^{(i),\setminus n}\right)\right\}\)
            Run the observation correction (8) and get: \(\left\{p\left(x_{t}^{n}|y_{[t]}^{n},x_{[0:t]}^{(i),\setminus n}\right)\right\}\)
            Run the correction (11) and get: \(\left\{p\left(x_{t-1}^{n}|x_{[0:t]}^{(i),\setminus n}\right)\right\}\)
            Run the prediction (12) and get: \(\left\{p\left(x_{t}^{n}|x_{[0:t]}^{(i),\setminus n}\right)\right\}\)
        end for
    end for
    Compute \(p\left(y_{[T]}^{n}|x_{[0:T-1]}^{(i),\setminus n}\right)\) as in (13)
end for
Return \(\frac{1}{P}\sum_{i\in[P]}p\left(y_{[T]}^{n}|x_{[0:T-1]}^{(i),\setminus n}\right)\)
```
**Algorithm 1** SimBa-CL with feedback
### SimBa-CL without feedback
An important aspect of SimBa-CL with feedback is that it targets the true marginals of the likelihood, trading off with a computational cost that, in the worst case scenario, scales quadratically with the dimension \(N\). However, one could contemplate a strategy involving the removal of the simulation feedback and design a SimBa-CL that is more computationally efficient, while being "close" to SimBa-CL with feedback.
Looking back at (7), omitting the simulation feedback yields:
\[p\left(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n}\right)\approx\sum_{x_{[0:T]}^{n} }p\left(x_{0}^{n}\right)\prod_{t\in[T]}p\left(x_{t}^{n}|x_{t-1}\right)p\left( y_{t}^{n}|x_{t}^{n}\right)\eqqcolon\tilde{p}\left(y_{[T]}^{n}|x_{[0:T-1]}^{ \setminus n}\right), \tag{14}\]
from which we have the following marginal likelihood approximation:
\[p\left(y_{[T]}^{n}\right)\approx\sum_{x_{[0:T-1]}^{n}}p\left(x_{[0:T-1]}^{ \setminus n}\right)\tilde{p}\left(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n}\right) \eqqcolon\tilde{p}\left(y_{[T]}^{n}\right),\]
where we emphasised that we are relying on an approximation by using the notation \(\tilde{p}\). It can be seen that \(\tilde{p}\left(y_{[T]}^{n}\right)\) is still a proper marginal likelihood, as it sums to one when marginalizing on \(\mathcal{Y}\), but it is not the true marginal likelihood, as the latter has to consider the simulation feedback.
Upon scrutinizing (14), we can recognise the same likelihood structure as in (1). This time, we are isolating our calculation to a single component \(n\) and fixing the others through simulation from the model. This suggests that a simple Forward algorithm can be run in isolation on each dimension, by fixing the others to the simulation outcome, and so provides approximate likelihood increments \(\tilde{p}\left(y_{t}^{n}|y_{[t-1]},x_{[0:t-1]}^{\setminus n}\right)\). Concretely, the corresponding Forward algorithm will require a prediction step:
\[\begin{cases}p\left(x_{t+1}^{n}|x_{t}\right)\\ \tilde{p}\left(x_{t}^{n}|y_{[t]}^{n},x_{[0:t-1]}^{\setminus n}\right)\end{cases} \xrightarrow{\text{prediction}}\left\{\tilde{p}\left(x_{t+1}^{n}|y_{[t]}^{n},x_{[ 0:t]}^{\setminus n}\right)=\sum_{x_{t}^{n}}p\left(x_{t+1}^{n}|x_{t}\right) \tilde{p}\left(x_{t}^{n}|y_{[t]}^{n},x_{[0:t-1]}^{\setminus n}\right)\right\}, \tag{15}\]
and a correction step:
\[\begin{cases}p\left(y_{t+1}^{n}|x_{t+1}^{n}\right)\\ \tilde{p}\left(x_{t+1}^{n}|y_{[t]}^{n},x_{[0:t]}^{\setminus n}\right)\end{cases} \xrightarrow{\text{correction}}\begin{cases}\tilde{p}\left(x_{t+1}^{n}|y_{[t+1] }^{n},x_{[0:t]}^{\setminus n}\right)=\frac{p\left(y_{t+1}^{n}|x_{t+1}^{n} \right)\tilde{p}\left(x_{t+1}^{n}|y_{[t]}^{n},x_{[0:t]}^{\setminus n}\right)}{ \tilde{p}\left(y_{t+1}^{n}|y_{[t]}^{n},x_{[0:t]}^{\setminus n}\right)}\\ \tilde{p}\left(y_{t+1}^{n}|y_{[t]}^{n},x_{[0:t]}^{\setminus n}\right)=\sum_{x_ {t+1}^{n}}p\left(y_{t+1}^{n}|x_{t+1}^{n}\right)\tilde{p}\left(x_{t+1}^{n}|y_{[ t]}^{n},x_{[0:t]}^{\setminus n}\right)\end{cases}, \tag{16}\]
where the dependence on \(x_{t}^{\setminus n}\) is introduced during prediction without any feedback. Treating \(x_{t}^{\setminus n}\) in this way is essentially considering a functional dependence rather than a probabilistic one, in the sense that we fix the other \(N-1\) components just to compute the transition kernel \(p(x_{t+1}^{n}|x_{t})\) on component \(n\).
Recursively applying (15) and (16) provides a sequence of approximate marginal likelihood increments which can be then used to approximate the marginal likelihood for a fixed simulation in the usual way:
\[\tilde{p}\left(y_{[T]}^{n}|x_{[0:T-1]}^{\setminus n}\right)=\prod_{t\in[T]} \tilde{p}\left(y_{t}^{n}|y_{[t-1]}^{n},x_{[0:t-1]}^{\setminus n}\right),\]
where \(\tilde{p}\left(y_{1}^{n}|y_{[0]}^{n},x_{[0:0]}^{\setminus n}\right)\coloneqq \tilde{p}\left(y_{1}^{n}|x_{[0]}^{\setminus n}\right)\). The marginal likelihood approximation is then obtained as the mean of the Monte Carlo approximations, and we named this final algorithm SimBa-CL without feedback, see Algorithm 2.
When comparing Algorithm 2 with Algorithm 1, the simplicity of the former becomes evident. The computational cost is reduced from \(\mathcal{O}\left(PTN^{2}\mathbf{card}\left(\mathcal{X}\right)^{2}\right)\) to \(\mathcal{O}\left(PTN\mathbf{card}\left(\mathcal{X}\right)^{2}\right)\). Also, as with Algorithm 1, our new SimBa-CL procedure is parallelisable on both \(N\) and \(P\).
```
Require: \(p\left(x_{0}\right)\), \(p\left(x_{t}|x_{t-1}\right)\), \(p\left(y_{t}|x_{t}\right)\) and their factorisations
for each \(i\in[P]\) do
    \(x_{0}^{(i)}\sim p\left(x_{0}\right)\)
    for \(t\in[T]\) do
        \(x_{t}^{(i)}\sim p\left(x_{t}|x_{t-1}^{(i)}\right)\)
        for each \(n\in[N]\) do
            Run the prediction (15) and get: \(\left\{\tilde{p}\left(x_{t}^{n}|y_{[t-1]}^{n},x_{[0:t-1]}^{(i),\setminus n}\right)\right\}\)
            Run the correction (16) and get: \(\left\{\tilde{p}\left(x_{t}^{n}|y_{[t]}^{n},x_{[0:t-1]}^{(i),\setminus n}\right)\right\}\)
        end for
    end for
end for
Return \(\frac{1}{P}\sum_{i\in[P]}\tilde{p}\left(y_{[T]}^{n}|x_{[0:T-1]}^{(i),\setminus n}\right)\)
```
**Algorithm 2** SimBa-CL without feedback
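To illustrate the structure of Algorithm 2, the following NumPy sketch implements the without-feedback recursion for a generic factorised model; the model interface (`simulate`, `trans_matrix`, `emis_lik`) is an illustrative assumption and is not the authors' TensorFlow implementation.

```python
# Illustrative SimBa-CL without feedback (cf. Algorithm 2), not the authors' code.
import numpy as np
from scipy.special import logsumexp

def simba_cl_no_feedback(init, trans_matrix, emis_lik, simulate, y, P):
    """Approximate composite log-likelihood sum_n log p~(y^n_{1:T}).
    init: (N, S) initial distributions; y: (T, N) observations;
    trans_matrix(n, x_prev): (S, S) kernel for component n, rows indexed by its own
        previous state, with the other components fixed to the simulated vector x_prev;
    emis_lik(n, y_tn): (S,) emission likelihood p(y_t^n | x_t^n = s);
    simulate(): one latent trajectory of shape (T + 1, N) drawn from the model."""
    T, N = y.shape
    marg = np.zeros((P, N))                 # per-simulation marginal log-likelihoods
    for i in range(P):
        x = simulate()                      # fix the "other" components by simulation
        for n in range(N):
            filt, loglik = init[n], 0.0
            for t in range(T):
                pred = filt @ trans_matrix(n, x[t])     # prediction, cf. (15)
                incr = pred * emis_lik(n, y[t, n])      # correction, cf. (16)
                p_y = incr.sum()
                loglik += np.log(p_y)
                filt = incr / p_y
            marg[i, n] = loglik
    # Average the likelihoods over simulations, then sum the logs over components.
    return float(np.sum(logsumexp(marg, axis=0) - np.log(P)))
```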
### KL-bound for SimBa-CL with and without feedback
To evaluate the impact of excluding the simulation feedback from SimBa-CL, a natural approach is to compare the two estimates of the marginal likelihood:
\[p\left(y_{[T]}^{n}\right)=\sum_{x_{[0:T]}^{\setminus n}}p\left(x_{[0:T]}^{\setminus n}\right)\sum_{x_{[0:T]}^{n}}p\left(x_{[0:T]}^{n}|x_{[0:T]}^{\setminus n}\right)\prod_{t\in[T]}p\left(y_{t}^{n}|x_{t}^{n}\right),\] \[\tilde{p}\left(y_{[T]}^{n}\right)=\sum_{x_{[0:T]}^{\setminus n}}p\left(x_{[0:T]}^{\setminus n}\right)\sum_{x_{[0:T]}^{n}}p\left(x_{0}^{n}\right)\prod_{t\in[T]}p\left(x_{t}^{n}|x_{t-1}\right)p\left(y_{t}^{n}|x_{t}^{n}\right).\]
As both \(p\left(y_{[T]}^{n}\right)\) and \(\tilde{p}\left(y_{[T]}^{n}\right)\) are probability distributions on \(\mathcal{Y}^{T}\), a simple way of comparing them is the Kullback-Leibler divergence, which we denote with \(\mathbf{KL}\left[p\left(\mathbf{x}\right)||q\left(\mathbf{x}\right)\right]\) for \(p,q\) probability distributions on a general discrete space and \(q\left(x\right)=0\) implying \(p\left(x\right)=0\) (absolute continuity).
The objective of this section is then to establish an upper bound for \(\mathbf{KL}\left[p\left(\mathbf{y}_{[T]}^{n}\right)||\tilde{p}\left(\mathbf{y }_{[T]}^{n}\right)\right]\), which will also demonstrate that, under certain conditions, our "without feedback" approximation consistently improves. Naturally, for theoretical results, we must rely on technical assumptions, which we will strive to explain from an intuitive perspective as much as possible.
**Assumption 1**.: _For any \(n,\bar{n}\in\left[N\right]\) and for any \(x_{t}^{\bar{n}}\in\mathcal{X}\), if \(x_{t-1},\bar{x}_{t-1}\in\mathcal{X}^{N}\) are such that \(x_{t-1}^{\setminus n}=\bar{x}_{t-1}^{\setminus n}\) then:_
\[\left|p\left(x_{t}^{\bar{n}}|x_{t-1}\right)-p\left(x_{t}^{\bar{n}}|\bar{x}_{t- 1}\right)\right|\leq\frac{s_{\bar{n}}}{N}\left|d_{n,\bar{n}}\left(x_{t-1}^{n} \right)-d_{n,\bar{n}}\left(\bar{x}_{t-1}^{n}\right)\right|,\]
_where \(d_{n,\bar{n}}:\mathcal{X}\rightarrow\mathbb{R}_{+}\) and \(s_{\bar{n}}\) is a positive constant._
Assumption 1 ensures the boundedness of the transition dynamics when altering the state of only component \(n\in[N]\) at time \(t-1\). This essentially asserts that changing the state of a single component at time \(t-1\) minimally impacts the dynamics of the other components. In essence, the impact is measured in terms of the function \(d_{n,\bar{n}}\), which shows how changes in \(n\) affect any other dimension \(\bar{n}\). This concept is similar in flavour to the decay of correlation property explained in Rebeschini and Van Handel (2015) and Rimella and Whiteley (2022), which ensures a weak sensitivity of the conditional distributions on any \(n\) given a perturbation on any other dimension. Compared with Rebeschini and Van Handel (2015) and Rimella and Whiteley (2022), our assumption is far less intricate and simply requires some form of linear decay in the overall dimension \(N\). In simple terms, in large systems with interconnected dimensions minor changes have diminishing effects, which is also intuitively true in many real-world applications, like individual-based models for epidemiology (Shamil et al., 2021; Rimella et al., 2023; Cocker et al., 2023).
**Assumption 2**.: _For any \(n,\bar{n}\in[N]\), if \(x_{t-1},\bar{x}_{t-1}\in\mathcal{X}^{N}\) are such that \(x_{t-1}^{\backslash\bar{n}}=\bar{x}_{t-1}^{\backslash\bar{n}}\) then there exists \(0<\epsilon<1\) such that:_
\[\sum_{x_{t}^{n}}p\left(x_{t}^{n}|x_{t-1}\right)\frac{1}{p\left(x_{t}^{n}|\bar{ x}_{t-1}\right)^{2}}\leq\frac{1}{\epsilon^{2}},\quad\text{and}\quad\sum_{x_{t}^{n}}p \left(x_{t}^{n}|x_{t-1}\right)\frac{1}{p\left(x_{t}^{n}|\bar{x}_{t-1}\right)^{ 3}}\leq\frac{1}{\epsilon^{3}}.\]
Assumption 2 imposes bounds on the expectations of the square and the cube of the reciprocal of the transition kernel. It is worth noting that this assumption does not exclude the case \(p\left(x_{t}^{n}|x_{t-1},\theta\right)=0\), but rather ensures that the non-zero elements of the transition kernel remain bounded away from zero when a single component of the conditioning state is changed.
Under assumptions 1-2 and the additional assumption that the effect of an interaction on a single dimension does not exceed \(N\), we can state the following theorem.
**Theorem 1**.: _If \(|d_{n,\bar{n}}\left(x^{n}\right)-d_{n,\bar{n}}\left(\bar{x}^{n}\right)|<N\) for any \(x^{n},\bar{x}^{n}\in\mathcal{X}\) and assumptions 1-2 hold, then for any \(n\in[N]\):_
\[\mathbf{KL}\left[p\left(\mathbf{y}_{[T]}^{n}\right)||\tilde{p}\left(\mathbf{y }_{[T]}^{n}\right)\right]\leq\frac{a(\epsilon)}{N}\sum_{t\in[T]}\mathbb{E} \left\{\frac{1}{N}\sum_{\bar{n}\in[N],\bar{n}\neq n}s_{\bar{n}}^{MAX}\mathbb{ V}ar\left[d_{n,\bar{n}}\left(\mathbf{x}_{t-1}^{n}\right)\left|\mathbf{x}_{[0:t-1]}^{ \backslash n}\right]\right\},\]
_where \(a(\epsilon)\coloneqq 2\left[\frac{1}{2\epsilon^{2}}+\frac{1}{3\epsilon^{3}}\right]\) and \(s_{n}^{MAX}\coloneqq\max\{s_{n}^{2},s_{n}^{3}\}\)._
Proof.: The proof requires the Data Processing inequality, a Taylor expansion of the function \(f\left(z\right)=\log\left(1+z\right)\) and Jensen's inequality. The full proof is available in the supplementary material.
From Theorem 1 we can observe that the approximation improves when: (i) \(N\) increases; and (ii) the expected variance of the interaction term across dimensions decreases. Hence, our SimBa-CL without feedback will be essentially indistinguishable from SimBa-CL with feedback whenever we consider a sufficiently large \(N\) and a not-too-noisy scenario.
### SimBa-CL on general partitions
Up until now, we have implicitly assumed that approximating \(p\left(y_{[T]}\right)\) involves a product of marginals across all dimensions. However, it is worth considering that a complete factorization across \([N]\) might not be the optimal choice, as it potentially disregards interdependencies among dimensions.
It is hence not too difficult to imagine the existence of a more suitable factorization that better captures interdimensional interactions. Specifically, we consider a partition \(\mathcal{K}\) on \([N]\) and reformulate our likelihood approximation as follows:
\[p\left(y_{[T]}\right)\approx\prod_{K\in\mathcal{K}}p\left(y_{[T]}^{K}\right), \quad\text{or}\quad p\left(y_{[T]}\right)\approx\prod_{K\in\mathcal{K}}\tilde{ p}_{\mathcal{K}}\left(y_{[T]}^{K}\right), \tag{17}\]
where on the left we have the actual product of the true marginals and on the right \(\tilde{p}_{\mathcal{K}}\) denotes the generalization of \(\tilde{p}\). As for SimBa-CL with and without feedback we can reformulate our marginals and approximate marginals as a simulation from the model followed by an HMM likelihood:
\[p\left(y_{[T]}^{K}\right)\coloneqq\sum_{x_{[0:T]}^{\setminus K}} p\left(x_{[0:T]}^{\setminus K}\right)\sum_{x_{[0:T]}^{K}}p\left(x_{[0:T]}^{K}|x_{[0:T ]}^{\setminus K}\right)\prod_{t\in[T]}\prod_{n\in K}p\left(y_{t}^{n}|x_{t}^{n} \right); \tag{18}\] \[\tilde{p}_{\mathcal{K}}\left(y_{[T]}^{K}\right)\coloneqq\sum_{x_{[ 0:T]}^{\setminus K}}p\left(x_{[0:T]}^{\setminus K}\right)\sum_{x_{[0:T]}^{K}} \prod_{n\in K}p\left(x_{0}^{n}\right)\prod_{t\in[T]}p\left(x_{t}^{n}|x_{t-1} \right)p\left(y_{t}^{n}|x_{t}^{n}\right). \tag{19}\]
As can be seen from (17), the likelihood approximations now include all the interactions inside each \(K\), while still enforcing some form of factorisation.
As seen in Section 3.1 and in Section 3.2, (18) aims to approximate the true marginals over \(K\in\mathcal{K}\), while (19) only offers approximations. Once again, akin to SimBa-CL with feedback, if we want to approximate the true marginals of the likelihood we need some form of simulation feedback. This time, the simulation feedback will be from \(x_{[0:t]}^{\setminus K}\) onto \(x_{t-1}^{K}\), as we are considering probability distributions on \(K\).
We can then easily adapt Algorithm 1 and Algorithm 2, transitioning from a full factorisation to a factorisation on the partition \(\mathcal{K}\). Note that, the algorithm requires operations on the space \(\mathcal{X}^{K}\) and so a computational cost that is exponential in the maximum number of components contained in \(K\): \(\mathcal{O}\left(PT\mathbf{card}(\mathcal{K})\mathbf{card}(\mathcal{X})^{2 \max_{K\in\mathcal{K}}\mathbf{card}(K)}\right)\). More details and theoretical results are available in the supplementary material.
## 4 Confidence sets for SimBa-CL
Composite likelihoods are generally obtained as products of likelihood components, whose structure depends on the considered model; see Varin et al. (2011) for a review of composite likelihood methods. There are two classes of composite likelihoods: composite conditional likelihoods and composite marginal likelihoods. Composite conditional likelihoods combine
conditional distributions obtained from the main likelihood (Besag, 1974, 1975; Vecchia, 1988; Mardia et al., 2008), while composite marginal likelihoods work on the marginal distributions (Cox and Reid, 2004; Chandler and Bate, 2007; Varin and Vidoni, 2008). Considering that both SimBa-CL with and without feedback are represented as a product of the marginals of the likelihood, one can easily draw parallels with the composite marginal likelihoods literature and exploit the asymptotic theory to build confidence sets.
The first step is to find the maximum composite likelihood estimator \(\hat{\theta}_{CL}\) by maximising the composite likelihood \(\mathcal{L}_{CL}\left(\theta;y_{[T]}\right)=\prod_{K\in\mathcal{K}}\mathcal{L }_{CL}^{K}\left(\theta;y_{[T]}^{K}\right)\) or, equivalently, the composite log-likelihood \(\ell_{CL}\left(\theta;y_{[T]}\right)=\sum_{K\in\mathcal{K}}\ell_{CL}^{K}\left( \theta;y_{[T]}^{K}\right)\), where \(\mathcal{L}_{CL}^{K}\left(\theta;y_{[T]}^{K}\right)\) and \(\ell_{CL}^{K}\left(\theta;y_{[T]}^{K}\right)\) depends on the considered SimBa-CL. As SimBa-CL does not require any resampling, the resulting procedure is suited to automatic differentiation, which allows us to optimise the parameters via any gradient descent technique (Hinton et al., 2012; Zeiler, 2012; Kingma and Ba, 2014).
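As a rough illustration of this optimisation step, the sketch below maximises a composite log-likelihood with TensorFlow's automatic differentiation, since the implementation described in Section 5 uses TensorFlow; `composite_loglik` is an illustrative placeholder for a differentiable SimBa-CL evaluation, not the authors' code.

```python
# Illustrative gradient-based maximisation of a composite log-likelihood.
import tensorflow as tf

def fit(composite_loglik, theta_init, y, steps=500, learning_rate=0.05):
    """Return an approximate maximum composite likelihood estimate via Adam."""
    theta = tf.Variable(theta_init, dtype=tf.float32)
    opt = tf.keras.optimizers.Adam(learning_rate=learning_rate)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = -composite_loglik(theta, y)   # minimise the negative composite log-likelihood
        grads = tape.gradient(loss, [theta])
        opt.apply_gradients(zip(grads, [theta]))
    return theta.numpy()
```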
The second step is to notice that in the composite likelihood literature, we have some form of asymptotic normality for \(\hat{\theta}_{CL}\)(Lindsay, 1988; Varin, 2008):
\[\hat{\theta}_{CL}\overset{d}{\approx}\mathcal{N}\left(\theta,G\left(\theta \right)^{-1}\right),\]
where \(G\left(\theta\right)\) is the Godambe information matrix (Godambe, 1960). It then follows that, upon estimation Godambe information matrix, we can build confidence sets for \(\hat{\theta}_{CL}\) as multidimensional ellipsoids.
The final step is to estimate the Godambe information matrix, which is given by \(G\left(\theta\right)=S\left(\theta\right)V\left(\theta\right)^{-1}S\left( \theta\right)\), and so decomposed in the sensitivity matrix and the variability matrix:
\[S\left(\theta\right)=\mathbb{E}_{\theta}\left\{-\operatorname{Hess}_{\theta} \left[\ell_{CL}\left(\theta;\mathbf{y}_{[T]}\right)\right]\right\}\text{ and }V\left(\theta\right)=\mathbb{V}\mathrm{ar}_{\theta}\left\{\nabla_{ \theta}\left[\ell_{CL}\left(\theta;\mathbf{y}_{[T]}\right)\right]\right\},\]
where \(\operatorname{Hess}_{\theta}\) and \(\nabla_{\theta}\) are the Hessian and the gradient with respect to \(\theta\), which can be computed via automatic differentiation given \(\theta\) and a realisation of \(\mathbf{y}_{[T]}\). Given that we can compute the Hessian and the gradient for a given \(\theta,y_{[T]}\), we can also estimate the expectation and the variance via simulations from the model (expected information). Approximating \(S\left(\theta\right)\) and \(V\left(\theta\right)\) with the actual observations (observed information) is briefly discussed in the supplementary material along with some experiments.
### Bartlett identities
The computation of the Hessian can be resource-intensive, especially when using automatic differentiation, and variance estimation might be noisy. We can then simplify the form of the sensitivity and variability matrices by invoking the first and second Bartlett identities. Note that, when considering SimBa-CL with feedback, the identities hold asymptotically in the number of Monte Carlo samples \(P\), while for SimBa-CL without feedback they hold only
approximately. The sensitivity matrix and variability matrix can be then reformulated as:
\[S\left(\theta\right)=\sum_{K\in\mathcal{K}}\mathbb{E}_{\theta} \left\{\nabla_{\theta}\left[\ell_{CL}^{K}\left(\theta;y_{[T]}^{K}\right)\right] \nabla_{\theta}\left[\ell_{CL}^{K}\left(\theta;y_{[T]}^{K}\right)\right]^{\top }\right\},\] \[V\left(\theta\right)=\sum_{K,\tilde{K}\in\mathcal{K}}\mathbb{E}_{ \theta}\left\{\nabla_{\theta}\left[\ell_{CL}^{K}\left(\theta;y_{[T]}^{K}\right) \right]\nabla_{\theta}\left[\ell_{CL}^{\tilde{K}}\left(\theta;y_{[T]}^{\tilde{K }}\right)\right]^{\top}\right\},\]
where \(=\) becomes \(\approx\) if we do not include the feedback, and where both matrices can be once again estimated via simulation. More details are available in the supplementary material.
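A rough sketch of estimating the Godambe matrix from these simplified expressions is given below; the per-block score function `grad_block` and the simulator are illustrative placeholders, obtained for example from automatic differentiation of a SimBa-CL evaluation.

```python
# Illustrative simulation-based estimate of the Godambe matrix G = S V^{-1} S.
import numpy as np

def godambe(theta, simulate_data, grad_block, blocks, n_sims):
    """Estimate the Godambe information at theta using n_sims synthetic datasets."""
    d = len(theta)
    S = np.zeros((d, d))                         # sensitivity matrix
    V = np.zeros((d, d))                         # variability matrix
    for _ in range(n_sims):
        y = simulate_data(theta)                 # one dataset simulated at theta
        grads = [grad_block(theta, y, K) for K in blocks]
        for g in grads:
            S += np.outer(g, g)                  # sum_K E[ grad_K grad_K^T ]
        total = np.sum(grads, axis=0)
        V += np.outer(total, total)              # E[ (sum_K grad_K)(sum_K grad_K)^T ]
    S /= n_sims
    V /= n_sims
    return S @ np.linalg.solve(V, S)             # confidence ellipsoids then use G^{-1}
```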
## 5 Experiments
The experiments center on the field of epidemiological modelling, and specifically they focus on individual-based models (IBMs). Individual-based models arise when we want to describe an epidemic from an individual perspective. The complexity of IBMs lies in their high-dimensional state-space, making closed-form likelihood computation infeasible. However, these models satisfy the factorisation outlined in (2), making them perfect candidates for our SimBa-CL.
SimBa-CL is implemented in Python using the TensorFlow library and is available at the GitHub repository: ??. All the experiments were run on a 32GB Tesla V100 GPU.
More experiments and more details on the presented experiments are available in the supplementary material.
### The effect of feedback and partition's choice on SimBa-CL
In this section we perform, on a susceptible-infected-susceptible (SIS) individual-based model, a cross-comparison of SimBa-CL with feedback on \(\mathcal{K}=\left\{\left\{1\right\},\ldots,\left\{N\right\}\right\}\) ("fully factorised SimBa-CL with feedback"), SimBa-CL without feedback on \(\mathcal{K}=\left\{\left\{1\right\},\ldots,\left\{N\right\}\right\}\) ("fully factorised SimBa-CL without feedback"), and SimBa-CL without feedback on \(\mathcal{K}=\left\{\left\{1,2\right\},\ldots,\left\{N-1,N\right\}\right\}\) ("coupled SimBa-CL without feedback"), where \(N\) is assumed to be even.
#### 5.1.1 Model
Building upon the framework introduced by Ju et al. (2021) and Rimella et al. (2023), we let \(n\) represent an individual and \(w_{n}\) a vector of covariates. We consider an initial distribution \(p\left(x_{0}^{n}|\theta\right)\) with probability of infection \(\frac{1}{1+\exp\left(-\beta_{0}^{\top}w_{n}\right)}\), and a transition kernel \(p\left(x_{t}^{n}|x_{t-1},\theta\right)\) with probability of transitioning from S to I of \(1-\exp\left[-\lambda_{n}\left(\frac{\sum_{\tilde{n}\in[N]}\mathbb{I}\left(x_{t-1}^{\tilde{n}}=2\right)}{N}+\iota\right)\right]\) and probability of transitioning from I to S of \(1-\exp\left(-\gamma_{n}\right)\), where \(\lambda_{n}=1/(1+\exp\left(-\beta_{\lambda}^{\top}w_{n}\right))\), \(\gamma_{n}=1/(1+\exp\left(-\beta_{\gamma}^{\top}w_{n}\right))\), and \(w_{n},\beta_{0},\beta_{\lambda},\beta_{\gamma}\in\mathbb{R}^{2}\). Moreover, we consider the emission distribution \(p\left(y_{t}^{n}|x_{t}^{n},\theta\right)=q^{x_{t}^{n}}\mathbb{I}\left(y_{t}^{n}\neq 0\right)+\left(1-q^{x_{t}^{n}}\right)\mathbb{I}\left(y_{t}^{n}=0\right)\), with \(q\in[0,1]^{2}\). Unless specified otherwise, our baseline model employs \(N=1000\), \(T=100\), covariates \(w_{n}\) such that \(w_{n}^{1}=1\) and \(w_{n}^{2}\sim\mathbf{Normal}\left(0,1\right)\), and the data generating parameters \(\beta_{0}=[-\log\left(\left(1/0.01\right)-1\right),0]^{\top}\), \(\beta_{\lambda}=[-1,2]^{\top}\), \(\beta_{\gamma}=[-1,-1]^{\top}\), \(q=[0.6,0.4]^{\top}\) and \(\iota=0.001\). It is also important to mention that the considered SIS model satisfies Assumption 1 and Assumption 2.
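A minimal NumPy sketch of this data-generating process is given below. It is purely illustrative: it assumes the observation \(y_{t}^{n}\) reports the current state whenever the individual is observed (and \(0\) otherwise), and the function and argument names are not part of the released implementation.

```python
import numpy as np

def simulate_sis(beta0, beta_lam, beta_gam, q, iota, w, T, seed=0):
    """Forward-simulate the SIS individual-based model above (state 1 = S, state 2 = I). Illustrative sketch."""
    rng = np.random.default_rng(seed)
    N = w.shape[0]
    p0 = 1.0 / (1.0 + np.exp(-w @ beta0))                # initial infection probabilities
    lam = 1.0 / (1.0 + np.exp(-w @ beta_lam))            # individual infection rates lambda_n
    gam = 1.0 / (1.0 + np.exp(-w @ beta_gam))            # individual recovery rates gamma_n
    x = np.where(rng.random(N) < p0, 2, 1)
    xs, ys = [x.copy()], []
    for _ in range(T):
        prevalence = np.mean(x == 2)
        p_si = 1.0 - np.exp(-lam * (prevalence + iota))  # S -> I probability
        p_is = 1.0 - np.exp(-gam)                        # I -> S probability
        u = rng.random(N)
        x = np.where(x == 1, np.where(u < p_si, 2, 1), np.where(u < p_is, 1, 2))
        reported = rng.random(N) < q[x - 1]              # reported with probability q^{x}
        ys.append(np.where(reported, x, 0))              # y = 0 encodes an unreported individual
        xs.append(x.copy())
    return np.array(xs), np.array(ys)
```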
#### 5.1.2 Empirical evaluation of the Kullback-Leibler divergence
We start by comparing our SimBa-CL methods in terms of empirical Kullback-Leibler divergence (KL) on a set of simulated data. Different settings are considered: an increasing population size \(N=[10,100,1000]\); either "high" \(\beta_{0}=[-6.9,0]^{\top}\) or "base" \(\beta_{0}=[-4.60,0]^{\top}\) or "low" \(\beta_{0}=[-2.20,0]^{\top}\), i.e. around either \(0.1\%\) or \(1\%\) or \(10\%\) of initial infected; and "base" and "low" \(\iota=[0.001,0.01]\). Note that different \(\beta_{0}\) and \(\iota\) control the variance of the process, as having more infected at the beginning of the epidemic or including more environmental effect results in an epidemic that is closer to the equilibrium.
Table 1 reports the means and standard deviations of the empirical KL for each scenario. Focusing on the "KL on feedback" entries, which contrast the fully factorised SimBa-CL with and without feedback, we can notice that increasing \(N\) decreases the KL and that decreasing the variance decreases the KL. Similar conclusions can be drawn from the "KL on partition" entries, which compare the fully factorised SimBa-CL without feedback with the coupled SimBa-CL without feedback. These comments suggest smaller and smaller differences across the methods when increasing \(N\) and decreasing the variance, which is in line with our theoretical results.
#### 5.1.3 Comparing likelihood surfaces
We proceed to undertake a comparison of profile likelihood surfaces for the baseline IBM SIS model using the following protocol: (i) choose one among \(\beta_{0}\), \(\beta_{\lambda}\), \(\beta_{\gamma}\) and \(q\); (ii) simulate using the baseline model, ensuring at least 10 infected in the epidemic realisation; (iii) create a bi-dimensional grid on the chosen parameter; (iv) for each element of the grid, compute our SimBa-CL methods while fixing the other parameters to their true values.
The outcomes of this experiment are illustrated in Figure 1. Interestingly, all the considered SimBa-CL exhibit a consistent shape, meaning that, in the SIS scenario, including the feedback or choosing a coarser partition has a limited impact on the overall likelihood. Furthermore, it becomes evident that all these methods effectively recover the data-generating
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(N\) & 1000 & 100 & 100 & 100 & 100 & 10 \\ \(\beta_{0}\) & base & high & low & base & base & base \\ \(\iota\) & base & base & base & base & low & base \\
**KL** on feedback & 0.3 (0.2) & 7.5 (3.2) & 0.8 (0.4) & 5.9 (2.9) & 1.4 (1) & 49.6 (17.2) \\
**KL** on partition & 1.4 (0.3) & 36.1 (6.6) & 3 (0.9) & 35.1 (7) & 5.6 (2) & 177.3 (36.5) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparing empirical KL between the SimBa-CL variants under different scenarios. All the numerical values have been multiplied by \(10^{9}\) to improve visualisation.
parameter, except for \(\beta_{0}\). Note that there is an obvious identifiability issue with \(\beta_{0}\), as \(\beta_{0}^{2}=0\) implies that covariate \(w_{n}^{2}\) is not used. However, given that \(w_{n}^{2}\) is random, we could have more initial infected associated with \(w_{n}^{2}<0\), which gives a higher likelihood to models where \(\beta_{0}^{2}<0\). A similar reasoning can be replicated for \(w_{n}^{2}>0\), which explains the symmetry of the likelihood surface of \(\beta_{0}\) on the vertical axis.
### Asymptotic properties of SimBa-CL
We can now turn to the problem of computing the maximum composite likelihood estimator and the corresponding confidence sets. As proven in Theorem 1, for a sufficiently "regular" model and a sufficiently large \(N\), including the simulation feedback and using a coarser partition is of marginal significance for SimBa-CL. We hence narrow our studies to the asymptotic properties of the fully factorised SimBa-CL without feedback. As a toy model, we consider again the IBM SIS described in Section 5.1.1.
Figure 1: Profile log-likelihood surfaces for \(\beta_{0},\beta_{\lambda},\beta_{\gamma},q\). From top to bottom, fully factorised SimBa-CL without feedback, coupled SimBa-CL without feedback, and fully factorised SimBa-CL with feedback. Red dots locate the data generating parameter, while black dots are used for the maximum on the grid.
#### 5.2.1 Maximum Simba-CL convergence and coverage in two dimensions
We start our exploration by looking at the bi-dimensional parameter \(\beta_{\lambda}\), with all the other parameters fixed to their baseline values. The hope is that \(\beta_{\lambda}\) is easily identifiable, as it strongly influences the evolution of the epidemic, and so we can test the asymptotic properties on a well-behaved parameter.
To explore the asymptotics of SimBa-CL we investigate four scenarios with an increasing amount of data: (i) \(N=100,T=100\); (ii) \(N=100,T=300\); (iii) \(N=1000,T=100\); (iv) \(N=1000,T=300\). For each scenario, we simulate 100 epidemics, and for each dataset we optimize \(\beta_{\lambda}\) through Adam optimization (Kingma and Ba, 2014), aiming to minimize the negative log-likelihood. After optimization, we have a sample of 100 bi-dimensional parameters per scenario, which can be turned into box-plots as shown in Figure 2. Here, as both \(T\) and \(N\) increase, we can observe an evident shrinkage towards the true parameter, suggesting consistency of the maximum SimBa-CL estimator as \(N\) and \(T\) increase.
Taking the investigation a step further, we analyse the empirical coverage of confidence sets built as explained in Section 4. We consider \(N=1000,T=300\) and the optimized parameters from the previous experiment. We start by calculating the Godambe information matrix without using the approximate Bartlett identities, and we build 95% 2-dimensional confidence sets for our parameter \(\beta_{\lambda}\). The procedure results in a coverage of 1, and so an overestimation of the uncertainty. However, when repeating the same procedure using the approximate Bartlett identities, the coverage aligns with the theoretical coverage of 0.95. This favourable outcome can be attributed to less noisy estimates, as we exploit the factorisation in the model and so compute expectations on lower-dimensional spaces.
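Concretely, once \(\hat{\theta}_{CL}\) and an estimate of \(G\) are available, checking whether the 95% ellipsoid covers the data-generating parameter amounts to a chi-squared threshold, as in the illustrative snippet below (the variable names are not from our implementation).

```python
import numpy as np
from scipy.stats import chi2

def ellipsoid_covers(theta_true, theta_hat, G_hat, level=0.95):
    """Check whether the Godambe-based confidence ellipsoid around theta_hat contains theta_true."""
    diff = np.asarray(theta_hat) - np.asarray(theta_true)
    return diff @ G_hat @ diff <= chi2.ppf(level, df=diff.size)

# Empirical coverage over the 100 repetitions (theta_hats and G_hats assumed precomputed):
# coverage = np.mean([ellipsoid_covers(theta_true, th, G) for th, G in zip(theta_hats, G_hats)])
```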
Figure 2: Box-plots on the optimized \(\beta_{\lambda}\). On the left, \(\beta_{\lambda}^{1}\), on the right, \(\beta_{\lambda}^{2}\). Horizontal solid orange lines show the medians and green triangles are used for the means. Horizontal red solid lines show the true parameters.
#### 5.2.2 Maximum Simba-CL convergence and coverage in nine dimensions
Transitioning to a substantially more intricate scenario, we set our sights on the estimation of all model parameters \(\theta=(\beta_{0},\beta_{\lambda},\beta_{\gamma},q,\iota)\). Analogous to the 2-dimensional case, we simulate from the model 100 times, and for each simulation we optimize the parameters using the Adam optimizer. The outcomes are reported in Figure 3.
Foremost, it becomes apparent that increasing the value of \(T\) does not influence \(\beta_{0}\). This arises from the fact that observations in the later time periods carry scarce information on the initial condition. Moreover, recall that \(w_{n}^{2}\sim\mathbf{Normal}\left(0,1\right)\) and \(\beta_{0}^{2}=0\). This makes the parameter not identifiable, as commented in Section 5.1. This ill-posed model definition implies that increasing \(N\) will not improve the uncertainty around our estimate, as \(\beta_{0}^{2}<0\) and \(\beta_{0}^{2}>0\) can be equally likely, while unbiasedness is preserved due to the symmetry of the set of equally likely parameters. At the same time, the parameters \(\iota\) and \(\beta_{\lambda}\) are also hard to identify, as smaller (or bigger) estimates of \(\beta_{\lambda}\) will lead to bigger (or smaller) estimates of \(\iota\). Indeed, \(\beta_{\lambda}\) governs the infection rates from the community, while \(\iota\) represents the environmental effect. It is then clear that generating an epidemic from \(\beta_{\lambda},\iota\) is equivalent to generating one by decreasing \(\beta_{\lambda}\) and increasing \(\iota\) accordingly. This correlation is especially vivid in Figure 3, as an over-estimation of \(\iota\) (log-scale) leads to an underestimation of \(\beta_{\lambda}\).
Clearly, in a 9-dimensional scenario, the process of recovering empirically the theoretical coverage is substantially more complicated. Jointly, we find 0 coverage of the 9-dimensional 95% confidence sets, irrespective of whether the Bartlett identities are employed or not. We then compute the confidence intervals on single parameters by marginalising the 9
Figure 3: Box-plots on the optimized \(\theta=(\beta_{0},\beta_{\lambda},\beta_{\gamma},q,\iota)\). Parameters labels are reported on the y-axes. Horizontal orange solid lines show the medians and green triangles are used for the means. Horizontal red solid lines show the true parameters.
dimensional Gaussian distribution. Marginally, we find that using the approximate Bartlett identities improves the coverages; see Table 2 for numerical values.
### Comparing SimBa-CL with sequential Monte Carlo
As SimBa-CL methods provide biased estimates of the likelihood, the objective of this section is to compare SimBa-CL methods with both sequential Monte Carlo (SMC) and block sequential Monte Carlo (BSMC) algorithms. While SMC provides unbiased particle estimates of the likelihood (Chopin and Papaspiliopoulos, 2020), this quantity can suffer from high variance in high-dimensional scenarios. On the other hand, BSMC deals with the curse of dimensionality by providing a factorised, albeit biased, particle estimate of the likelihood (Rebeschini and Van Handel, 2015), aligning with SimBa-CL.
Building upon the insights of the previous sections, we compare SMC and BSMC with the fully factorised SimBa-CL without feedback. Regarding the SMC comparison, we consider two approaches: the auxiliary particle filter (APF) and the SMC with proposal distribution given by the approximate optimal proposal distribution developed by Rimella et al. (2023). Due to the curse of dimensionality, we expect poor performance from the APF, hence we include the Block APF in our analysis. The Block APF works like the Block particle filter (Rebeschini and Van Handel, 2015), a BSMC algorithm, but it proposes particles according to the transition kernel informed by the current observation.
#### 5.3.1 Model
In the subsequent sections we work again with the IBM SIS model from Section 5.1.1. However, we also analyse an individual-based susceptible-exposed-infected-removed (SEIR) model. Specifically, we still have bi-dimensional covariates \(w_{n}\), while \(p(x_{0}^{n}|\theta)\) is now 4-dimensional with the second and the fourth components being zero and the first and the third components being as in the IBM SIS. Similarly, the transition kernel \(p(x_{t}^{n}|x_{t-1},\theta)\) is a 4 by 4 matrix with the same dynamics as the IBM SIS when considering transitions from S to E and from I to R, with the addition of a transition from E to I with probability \(1-e^{-\rho}\). Unless specified otherwise, we consider as the baseline model the one with: \(N=1000\), \(T=100\) and the data generating parameters set to \(\beta_{0}=\left[-\log\left(\left(1/0.01\right)-1\right),0\right]^{\top}\), \(\beta_{\lambda}=[-1,2]^{\top}\), \(\rho=0.2\), \(\beta_{\gamma}=[-1,-1]^{\top}\), \(q=[0,0,0.6,0.4]^{\top}\).
\begin{table}
\begin{tabular}{l c c c c c} Parameter & \(\beta_{0}\) & \(\beta_{\lambda}\) & \(\beta_{\gamma}\) & \(q\) & \(\iota\) \\ \hline Without Bartlett & 0.17 and 0.05 & 0.61 and 0.87 & 0.8 and 1. & 0.87 and 0.5 & 0.02 \\ With Bartlett & 0.98 and 0.89 & 0.99 and 0.75 & 0.97 and 0.97 & 1. and 0.98 & 0.92 \\ \hline \end{tabular}
\end{table}
Table 2: Empirical coverage for each parameter when computing the Godambe information matrix with and without the approximate Bartlett identities. Whenever the parameter is bi-dimensional, the coverage for each component is reported in the same cell separated by “and”.
#### 5.3.2 SimBa-CL and SMC for an individual-based SIS model
We consider the baseline SIS model with \(N=1000\), and proceed as follows: we generate the data; we run SimBa-CL and the baseline algorithms 100 times on the given data using the data generating parameters; and we estimate the mean and standard deviation of the log-likelihood. The results are reported in Table 3.
Notably, the method proposed by Rimella et al. (2023b) emerges as the best method in terms of log-likelihood mean and variance, as it yields unbiased estimates of the likelihood and reduces the variance. Our SimBa-CL exhibits superior computational efficiency, with a running time that is also almost three times faster than the vanilla Block APF.
As the bias from our SimBa-CL seems significant compared to the one from the Block APF, we run a paired comparison on the profile likelihood surfaces as in Section 5.1, reported in Figure 4. It can be noticed that both SimBa-CL and the Block APF generate similar likelihood surfaces whose maxima are close to the data generating parameter. Unfortunately, we could not include in our studies the SMC from Rimella et al. (2023b) for computational reasons.
#### 5.3.3 SimBa-CL and SMC for an individual-based SEIR model
The section concludes with a comparison on the baseline IBM SEIR. As for the SIS model, we simulate synthetic data, run our SimBa-CL along with the baseline algorithms using the data generating parameters, and estimate the mean and variance of the resulting log-likelihood computations. The outcomes are reported in Table 4.
It is important to note that the SEIR scenario is considerably more complex than the SIS scenario. In the SEIR case, observing only infected and removed makes it difficult for the SMC algorithms to prevent particle failure without the use of a very informative proposal distribution.
Table 4 clearly shows that, in order to avoid failure of the SMC, we need a smart proposal distribution such as the one proposed by Rimella and Whiteley (2022), and also a large \(h\) to reach a reasonable log-likelihood variance. On the other hand, our SimBa-CL is able to reach an almost comparable log-likelihood variance almost ten times faster than the SMC.
\begin{table}
\begin{tabular}{l c c c c} P & 512 & 1024 & 2048 & Time (sec) \\ \hline APF & -81103.37 (46.04) & -81046.57 (49.65) & -80976.34 (36.08) & 1.05s \\ h=5 & -79551.92 (1.79) & -79552.24 (1.6) & -79552.81 (1.57) & 3.78s \\ h=10 & -79551.9 (1.81) & -79552.22 (1.47) & -79553.01 (1.56) & 5.61s \\ Block APF & -79565.69 (5.84) & -79558.44 (3.95) & Out of memory & 2.97s \\ SimBa-CL & -79612.74 (3.4) & -79612.31 (2.37) & -79612.34 (1.55) & 1.03s \\ \hline \end{tabular}
\end{table}
Table 3: Log-likelihood means and log-likelihood standard deviations for the baseline SIS model with \(N=1000\). \(h\) is the number of future observations included in Rimella et al. (2023b) (\(h=0\) corresponds to the APF).
### 2001 UK Foot and mouth disease outbreak
In the year 2001, the United Kingdom experienced an outbreak of foot and mouth disease, a highly contagious virus affecting cloven-hoofed animals. Over an 8-month period, 2026 farms out of 188361 in the UK were infected, concentrated in the North and South West of England, costing an estimated £8 billion to the public and private sectors [UK National Audit Office]. The publicly available dataset ([http://www.defra.gov.uk](http://www.defra.gov.uk)) has been extensively studied, and we choose as an example an analysis of the 8791 farms in the Cumbria region to compare to a previous similar MCMC-based analysis in Jewell et al. (2009).
\begin{table}
\begin{tabular}{l c c c c} P & 512 & 1024 & 2048 & Time (sec) \\ \hline APF & Failed & Failed & Failed & 1.2s \\ h=5 & -43447.56 (52.04) & -43419.52 (51.08) & -43391.0 (52.41) & 4.44s \\ h=20 & -43004.55 (5.38) & -43001.9 (4.65) & -42999.76 (3.7) & 11.08s \\ h=50 & -42999.93 (3.44) & -42998.13 (2.72) & -42996.74 (2.39) & 20.88s \\ Block APF & Failed & Failed & Failed & 2.09s \\ SimBa & -43683.85 (9.54) & -43683.67 (7.35) & -43683.76 (5.16) & 1.25s \\ \hline \end{tabular}
\end{table}
Table 4: Log-likelihood means and log-likelihood standard deviations for the baseline SEIR model with \(N=1000\). \(h\) is the number of future observations included in Rimella et al. (2023b) (\(h=0\) corresponds to the APF).
Figure 4: Profile log-likelihood surfaces for \(\beta_{0},\beta_{\lambda},\beta_{\gamma},q\) from fully factorised SimBa-CL without feedback (first and third column) and Block APF (second and the fourth columns). Red dots are used for the data generating parameter, while black dots locate the maximum on the grid.
#### 5.4.1 Model
Similar to previous models, we consider an individual-based model with farms as the individuals. We assume farms exist in Susceptible, Infected, Notified (i.e. quarantined on detection), and Removed states (the SINR model). Transitions from S to I and from I to N follow a discrete-time stochastic process, with infected farms quarantined immediately upon notification, while the transition from N to R (farm culling) occurs deterministically after 1 day.
We consider an initial probability of infection for farm \(n\) of \(1-\exp\left\{-\tau\frac{\sum_{\tilde{n}\in[N]}\lambda_{\tilde{n},n}}{N}\right\}\), with parameter \(\tau>0\). We assume the notification probability \(Pr(x_{t+1}^{n}=\mathrm{N}|x_{t}^{n}=\mathrm{I})=1-\exp\{-\gamma\},\ \gamma>0\), and the culling probability \(Pr(x_{t+1}^{n}=\mathrm{R}|x_{t}^{n}=\mathrm{N})=1\). We assume individual infection probabilities \(Pr(x_{t+1}^{n}=\mathrm{I}|x_{t}^{n}=\mathrm{S},x_{t})=1-\exp\left\{-\frac{\sum_{\tilde{n}\in[N]}\lambda_{\tilde{n},n}\mathbb{I}\left(x_{t}^{\tilde{n}}=2\right)}{N}\right\}\), where \(\lambda_{\tilde{n},n}\) is the infection pressure exerted by an infected farm \(\tilde{n}\) on a susceptible farm \(n\), formulated as:
\[\lambda_{\tilde{n},n}=\frac{\delta}{N}\left[\zeta\left(w_{\tilde{n}}^{c} \right)^{\chi}+\left(w_{\tilde{n}}^{s}\right)^{\chi}\right]\left[\xi\left(w_{ n}^{c}\right)^{\chi}+\left(w_{n}^{s}\right)^{\chi}\right]\frac{\psi}{E_{\tilde{n},n}^ {2}+\psi^{2}},\]
with \(\delta,\xi,\zeta,\chi,\psi\) positive parameters, \(w_{n}^{c}\) the number of cattle on the \(n\)-th farm, \(w_{n}^{s}\) the number of sheep on the \(n\)-th farm, and \(E_{\tilde{n},n}\) the Euclidean distance in kilometres between farm \(\tilde{n}\) and farm \(n\). This SINR model is an example of a heterogeneously mixing individual-based model, as the infectious contacts are not homogeneous in space. The emission distribution follows the usual formulation \(p\left(y_{t}^{n}|x_{t}^{n},\theta\right)=q^{x_{t}^{n}}\mathbb{I}\left(y_{t}^{n}\neq 0\right)+\left(1-q^{x_{t}^{n}}\right)\mathbb{I}\left(y_{t}^{n}=0\right)\), with \(q=\left[0,0,1,0\right]^{\top}\), as we observe the notified farms perfectly.
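For reference, the infection pressure matrix can be assembled directly from the farm covariates; the sketch below vectorises the formula above with NumPy, using illustrative argument names rather than those of the released code.

```python
import numpy as np

def infection_pressure(cattle, sheep, coords, delta, zeta, xi, chi, psi):
    """Pairwise infection pressure lambda_{n~,n} for the SINR model above (illustrative sketch).

    `cattle`, `sheep` are counts per farm and `coords` are farm locations in km;
    entry (i, j) is the pressure exerted by farm i on farm j.
    """
    N = cattle.shape[0]
    infectivity = zeta * cattle**chi + sheep**chi                   # source-farm term
    susceptibility = xi * cattle**chi + sheep**chi                  # target-farm term
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)   # squared distances E^2
    kernel = psi / (d2 + psi**2)                                    # spatial decay
    return (delta / N) * np.outer(infectivity, susceptibility) * kernel
```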
#### 5.4.2 Inference
We run 100 optimizations using Adam on our fully factorised SimBa-CL without feedback and select the "best" run according to its final SimBa-CL score. We then estimate the Godambe information matrix using the approximate Bartlett identities. As we learned the parameters on a log-scale, we use log-Normal distributions when plotting the parameter distributions; see Figure 5.
The parameter \(\tau\) can be intuitively understood as the time interval between the first infection and the first notified infections, marking the onset of the notification process. In Figure 5 we can recognise an optimal \(\tau\) of about 40, suggesting a relatively slow start in notifying farms. Furthermore, we can observe a mean time before notification of about 2.5 days, leading to an estimate of 3.5 days for the mean infection period, encompassing the period from farm infection to culling. This implies a relatively fast intervention once the notification process is implemented. Also, from the last row of Figure 5, we can notice a decrease of over 60% in infectivity after just 2 km, which could be used to define containment zones around infected farms.
Parameters \(\zeta,\chi,\xi\) are more difficult to interpret as they regulate the susceptibility and infectivity of farms according to the number of animals. To help visualise their effect we produce Table 5, which shows the average susceptibility and infectivity of: a medium-size farm with only cattle, a medium-size farm with only sheep, a large-size farm and a small-size farm. From Table 5 we can deduce that the effect of owning cattle is significantly higher than that of owning sheep for both infectivity and susceptibility, and that even small farms can affect the epidemic spread. This agrees with the study of Jewell et al. (2013) and also with the Directive of the Council of the European Union.
## Funding
This work is supported by EPSRC grants EP/R018561/1 (Bayes4Health) and EP/R034710/1 (CoSInES).
Figure 5: FMD parameters’ distributions and spatial kernel decay. Parameters’ labels are reported on the x-axes. Red solid lines represent the maximum SimBa-CL estimator. The last plot of the second row shows the spatial decay of infectivity in Km.
\begin{table}
\begin{tabular}{r r r r} Nr. cattle & Nr. sheep & Mean susceptibility & Mean infectivity \\ \hline
100 & 0 & 131.83 & 1364.99 \\
0 & 1000 & 54.69 & 54.69 \\
50 & 500 & 124.84 & 950.17 \\
2 & 6 & 16.49 & 144.37 \\ \hline \end{tabular}
\end{table}
Table 5: Mean susceptibility and mean infectivity for four farm configurations. |
2303.10659 | COVID-19 event extraction from Twitter via extractive question answering
with continuous prompts | As COVID-19 ravages the world, social media analytics could augment
traditional surveys in assessing how the pandemic evolves and capturing
consumer chatter that could help healthcare agencies in addressing it. This
typically involves mining disclosure events that mention testing positive for
the disease or discussions surrounding perceptions and beliefs in preventative
or treatment options. The 2020 shared task on COVID-19 event extraction
(conducted as part of the W-NUT workshop during the EMNLP conference)
introduced a new Twitter dataset for benchmarking event extraction from
COVID-19 tweets. In this paper, we cast the problem of event extraction as
extractive question answering using recent advances in continuous prompting in
language models. On the shared task test dataset, our approach leads to over 5%
absolute micro-averaged F1-score improvement over prior best results, across
all COVID-19 event slots. Our ablation study shows that continuous prompts have
a major impact on the eventual performance. | Yuhang Jiang, Ramakanth Kavuluru | 2023-03-19T13:47:56Z | http://arxiv.org/abs/2303.10659v2 | # COVID-19 event extraction from Twitter via extractive question answering with continuous prompts
###### Abstract
As COVID-19 ravages the world, social media analytics could augment traditional surveys in assessing how the pandemic evolves and capturing consumer chatter that could help healthcare agencies in addressing it. This typically involves mining disclosure events that mention testing positive for the disease or discussions surrounding perceptions and beliefs in preventative or treatment options. The 2020 shared task on COVID-19 event extraction (conducted as part of the W-NUT workshop during the EMNLP conference) introduced a new Twitter dataset for benchmarking event extraction from COVID-19 tweets. In this paper, we cast the problem of event extraction as extractive question answering using recent advances in continuous prompting in language models. On the shared task test dataset, our approach leads to over 5% absolute micro-averaged F1-score improvement over prior best results, across all COVID-19 event slots. Our ablation study shows that continuous prompts have a major impact on the eventual performance.
COVID-19, event extraction, question answering, social media mining
## 1 Introduction
Social media has emerged as a double-edged sword in the health communication world. Increasingly, consumers are seeking health information from online social platforms [1]. Hence, it is imperative that health agencies and professionals disseminate accurate information in a timely manner in appropriate social networks. On the other hand, misinformation is also proliferating on these platforms. The sudden rise of misinformation and the need to counter it has never been more urgent than during the ongoing COVID-19 pandemic [2]. For example, our prior efforts indicated concerted chatter surrounding promotions of nicotine, smoking, and vaping as potentially helpful to prevent/treat COVID-19, without any substantial evidence [3]. As smart phones become ubiquitous in the world, it's easy to both produce and consume information about health-related events. During the pandemic, these include disclosures of people testing positive or negative for COVID-19 and their takes/perceptions on what strategies or medications worked (or did not work) for them to prevent or treat the condition. If these are carefully extracted from massive online posts, they could help health agencies detect new spikes in infections and help track most frequently mentioned therapeutic options (some of which could be misinformation). The 2020 COVID-19 event extraction shared
task\({}^{2}\) tackled this problem as part of the workshop on noisy user generated text (W-NUT) at the EMNLP 2020 conference. The organizers of the shared task created a manually curated dataset of five event types, each with multiple slots (more in the next section). The overall aim was to facilitate a benchmark for COVID-19 event extraction on social media and to facilitate the creation of a massive COVID knowledge base of events that can be queried in a structured manner. In this paper, we improve on the prior best results [4] by over 5% in absolute micro-averaged F1 score on the test set of the shared task. To do this, we design a new approach using recent advances in continuous prompts for language models (LMs) to cast the event extraction task as an extractive question answering (QA) problem via questions prepended with the continuous prompts. Our code and config details are here: [https://github.com/bionlproc/twitter-covid-QA-extraction](https://github.com/bionlproc/twitter-covid-QA-extraction)
Footnote 2: [http://noisy-text.github.io/2020/extract_covid19_event-shared_task.html](http://noisy-text.github.io/2020/extract_covid19_event-shared_task.html)
## 2 Methods
### The COVID-19 event extraction dataset
The five event types are (1) tested positive (TPos) (2) tested negative (TNeg) (3) cannot test (CT) (4) death (D) and (5) cure and prevention (C&P). The CT event was more relevant during the early phases of the pandemic when it was very difficult to find reliable and timely testing centers. Each event has its own slots that characterize the event. For example, a C&P event has three slots: (a) _what_: which method of cure/prevention is being mentioned? (b) _opinion_: does the author of the tweet believe that the cure/prevention is effective? (c) _who_: who is promoting the cure/prevention? The slot fillers are typically spans of text in the tweet that answer the corresponding slot question. We can see that (b) is binary response and hence not a span of text and (c) can have a special "author of the tweet" non-span label if the author of the tweet is promoting the cure and they don't explicitly refer to themselves in the tweet text. For example, consider the sentence: _"No doubt that **vaping** could have prevented a multitude of Covid19 deaths as reported by some **French scientists**"_. The _what_ slot answer here would be the 4th token "vaping", the _opinion_ slot's binary label would be YES (because the author of the tweet appears to believe in the effectiveness), and the _who_ slot's answer is the last bigram "French scientists". For the TPos event, besides the expected _who_, _where_, and _when_ slots, there are slots for _recent travel_ and _employer_ capturing their recent visit to a place before they tested positive and the company/org they work for, respectively. It's not hard to see that some slots do not necessarily have a valid span in the text. If recent travel and employer are not discussed in a tweet, the model is expected to not identify any spans. For a full description of all slots for all five events, please see the paper by the dataset creators [4]. The dataset has a total of 7500 training and development tweets and 2500 test set tweets annotated with various events.
### Baselines and our methods
The main baseline is _bidirectional encoder representations from transformers_ (BERT), a well-known transformer model for NLP applications [5]. The current best results are from the shared task organizers Zong et al. [4] by using a special BERT model trained on COVID-19 tweets called the COVID-Twitter BERT (CT-BERT [6]) model
(specifically, _covid-twitter-bert_ on HuggingFace). We note that the candidate tweets for each event type are selected using special keyword-based queries as described in the appendix of Zong et al.'s paper [4]. The main approach used in all these prior approaches is to select possible different spans of an input tweet and predict if each of them answers the question corresponding to a particular slot. This can sometimes yield two different spans for the same slot, which is allowed in the dataset (e.g., for the "who" TPos slot, more than one person mentioned in the tweet could be testing positive). This approach can be very expensive as the number of candidate spans is typically all noun phrases and named entities mentioned in the tweet.
Our main approach is to cast the slot filling task as an extractive QA task by passing the slot question text Q along with the tweet text T, separated by a special SEP token, as the input <Q> [SEP] <T> to a transformer language model (LM). It is trained with the output being the begin and end position tags for the tokens corresponding to the span that answers the question (for the slot being filled). We begin with the well-known RoBERTa pretrained LM trained on the popular SQuAD QA dataset (V2) (specifically, _deepset/roberta-large-squad2_ on HuggingFace). Since we are using a QA strategy, it is reasonable for us to use an LM trained on a QA dataset. We train and fine-tune this model using the training and development instances of the COVID event task. The parameter sizes of the state-of-the-art CT-BERT model and our fine-tuned RoBERTa model are both near 350 million, and as such our model is not inherently more expensive to train. However, it is much faster at test time because we do not have to run a BERT classifier for each noun phrase; the model simply outputs the span boundaries of the answer.
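For example, with the HuggingFace `pipeline` API a single slot could be queried roughly as follows. The slot question wording is our own illustrative choice, and the checkpoint is used here before any fine-tuning on the shared task data.

```python
from transformers import pipeline

# SQuAD2-pretrained checkpoint named above, prior to task-specific fine-tuning
qa = pipeline("question-answering", model="deepset/roberta-large-squad2")

tweet = ("No doubt that vaping could have prevented a multitude of Covid19 deaths "
         "as reported by some French scientists")
slot_question = "What is the cure or prevention being mentioned?"  # illustrative slot question

pred = qa(question=slot_question, context=tweet, handle_impossible_answer=True)
print(pred["answer"], pred["score"])  # an empty answer is treated as a missing slot
```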
We introduce a new variation besides passing the question text Q to the LM, inspired by the notion of continuous prompts. Although the question text Q encodes the information need, there are often multiple ways of asking the same question and it is well known that different variations of the same question may have different performances. So, to counter this, continuous prompts (additional trainable parameter vectors) were introduced recently by Li and Liang [7] as part of their so called "prefix tuning" strategy. The central idea is to use new "virtual tokens" that are not part of the base LM vocabulary as prefixes for downstream tasks. A high-level schematic of our approach is in Figure 1 with virtual tokens prepended to the input for each slot. Based on experiments with the development dataset, we determined 60 virtual tokens were apt for each slot, each with a dimension of 1024 generated through a multi-layer perceptron network that further parametrizes the model.
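To make the idea concrete, a simplified sketch of such a model is shown below. It implements input-level prompt tuning (trainable embeddings prepended to the token embeddings of <Q> [SEP] <T>) rather than the layer-wise prefixes of Li and Liang [7]; the class name, MLP shape and initialisation are our own illustrative choices, not the released code.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForQuestionAnswering

class PromptedExtractiveQA(nn.Module):
    """Prepend trainable 'virtual token' embeddings to the <Q> [SEP] <T> input (illustrative sketch)."""

    def __init__(self, model_name="deepset/roberta-large-squad2", n_virtual=60, hidden=1024):
        super().__init__()
        self.qa = AutoModelForQuestionAnswering.from_pretrained(model_name)
        self.prompt_seed = nn.Parameter(torch.randn(n_virtual, hidden))
        # small MLP producing the continuous prompt from the trainable seed, as described above
        self.prompt_mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                        nn.Linear(hidden, hidden))

    def forward(self, input_ids, attention_mask):
        token_embeds = self.qa.get_input_embeddings()(input_ids)        # (B, L, hidden)
        prompts = self.prompt_mlp(self.prompt_seed)                      # (n_virtual, hidden)
        prompts = prompts.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
        inputs_embeds = torch.cat([prompts, token_embeds], dim=1)
        prompt_mask = torch.ones(token_embeds.size(0), prompts.size(1),
                                 dtype=attention_mask.dtype, device=attention_mask.device)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        # predicted start/end positions are shifted by n_virtual relative to the original tokens
        return self.qa(inputs_embeds=inputs_embeds, attention_mask=mask)
```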
A few final considerations in our method involve some nuanced aspects of the dataset. We encode a missing slot with the output of the [CLS] (special start token standard in any LM) token span. For the "author of the tweet" outcome for the _who_ slot (e.g.,
Figure 1: Our extractive QA architecture with continuous prompts for COVID event extraction
"just tested positive folks"), at training time, we select the whole tweet as the gold outcome. At test time, if the model predicts a span that is longer than eight tokens, we output the "author of the tweet" label as it is inconceivable to have a _who_ slot spanning many tokens. We transform the 3-way _gender_ slot (male, female, unspecified) into two 2-way slots _gender-male_ (male or not) and _gender-female_ (female or not) to account for our span-based method. Because the RoBERTa model only outputs a single span and given our task allows for multiple spans per slot, we split the single span based on commas and conjunctions to identify potential multiple spans. Finally, all shared task participants and the annotators who created the dataset were only allowed to select answer spans from pre-chunked noun phrases and named entities. These chunks are provided for each tweet as part of the dataset. The baseline methods and CT-BERT operate on these chunks; since ours is a span detection method, our spans do not always match with organizer provided chunks. We address this by identifying the closest match of a predicted span using Jaccard similarity among tweet chunks. If there is no overlap at all with a chunk, we predict a missing slot. Our model is fine-tuned with a learning rate \(4\times 10^{-6}\) using the Adam optimizer for 8 epochs.
## 3 Results
We first present the event level classification results for BERT, CT-BERT, and our approach. We note that since different search terms were used by the dataset creators for different types, event type classification boils down to the binary setting where an event is identified if any of the slots returns a valid non-[CLS] span. From Table 1, we clearly see that across all event types our method outperforms the prior best results [4] by double-digit margins in terms of F1-scores. The macro-average F1 score is 89.1 for our model compared to 77.4 for CT-BERT. While this is encouraging, identifying the event type is not as useful unless individual slots are accurately filled for those events.
Overall, the micro-averaged F1 score across all 31 slots (across the five event types) for CT-BERT is 67.0 and for our approach is 72.5, indicating over 5% absolute improvement. All individual slot-level scores are difficult to display within space constraints. However, a common trend we observed is that our approach greatly improves recall compared to the CT-BERT method, while it sometimes loses some precision; overall, we see more balance between precision and recall with our method. We demonstrate this via slot-level performances for the _cure and prevention_ event shown in Table 2. With the bold values we can see that CT-BERT is better in precision, while our method is superior in recall and F1-score. We also conducted ablation experiments and found that our micro-averaged F1 across all slots drops from 72.5 to 66.8 if we drop
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & BERT & CT-BERT & Ours \\ \hline
**Tested Positive** & 90.0 & 88.5 & **94.5** \\ \hline
**Tested Negative** & 72.0 & 77.2 & **89.0** \\ \hline
**Cannot Test** & 72.0 & 72.8 & **86.6** \\ \hline
**Death** & 73.0 & 78.7 & **88.0** \\ \hline
**Cure and Prev.** & 64.0 & 69.8 & **87.9** \\ \hline \end{tabular}
\end{table}
Table 1: F1 scores for event level classification
the continuous prompts and simply used the **Q [SEP] T** input to the model, indicating the major importance of those virtual tokens. Likewise, we see the score dip to 67.9 if we don't use pre-training on the SQuAD dataset, confirming that pretraining on a task that is similar to the downstream task is very important.
## 4 Discussion and Conclusion
From Tables 1 and 2, and the overall micro-average improvement of over 5% in slot filling for the full test set compared with prior best results that use CT-BERT [4], we believe our method is a nontrivial advance in pandemic-related event extraction from Twitter data. Though our focus is exclusively on COVID-19, we believe the general event types and associated slots are applicable to any pandemic situation. We conducted preliminary error analyses and noticed that we need to improve further in mapping our spans to the chunks provided by the dataset, especially for the age slot, where the Jaccard similarity is not very reliable. Additionally, our analyses revealed ambiguous/subtle cases that could also be construed as correct extractions. For instance, our model extracts Notre Dame for the _employer_ slot from the tweet "A Notre Dame football player has tested positive for COVID-19". Although the tweet does not clearly say the player works for the university, it satisfies the general intention behind the _employer_ slot. Despite these limitations, we believe our model will lead to improved social media based infodemiology studies for pandemics.
**Acknowledgments**: This work is supported by the U.S. National Library of Medicine through grant R01LM013240.
|
2308.14995 | WSAM: Visual Explanations from Style Augmentation as Adversarial
Attacker and Their Influence in Image Classification | Currently, style augmentation is capturing attention due to convolutional
neural networks (CNN) being strongly biased toward recognizing textures rather
than shapes. Most existing styling methods either perform a low-fidelity style
transfer or a weak style representation in the embedding vector. This paper
outlines a style augmentation algorithm using stochastic-based sampling with
noise addition to improving randomization on a general linear transformation
for style transfer. With our augmentation strategy, all models not only present
incredible robustness against image stylizing but also outperform all previous
methods and surpass the state-of-the-art performance for the STL-10 dataset. In
addition, we present an analysis of the model interpretations under different
style variations. At the same time, we compare comprehensive experiments
demonstrating the performance when applied to deep neural architectures in
training settings. | Felipe Moreno-Vera, Edgar Medina, Jorge Poco | 2023-08-29T02:50:36Z | http://arxiv.org/abs/2308.14995v1 | WSAM: Visual Explanations from Style Augmentation as Adversarial Attacker and Their Influence in Image Classification
###### Abstract
Currently, style augmentation is capturing attention due to convolutional neural networks (CNN) being strongly biased toward recognizing textures rather than shapes. Most existing styling methods either perform a low-fidelity style transfer or a weak style representation in the embedding vector. This paper outlines a style augmentation algorithm using stochastic-based sampling with noise addition to improving randomization on a general linear transformation for style transfer. With our augmentation strategy, all models not only present incredible robustness against image stylizing but also outperform all previous methods and surpass the state-of-the-art performance for the STL-10 dataset. In addition, we present an analysis of the model interpretations under different style variations. At the same time, we compare comprehensive experiments demonstrating the performance when applied to deep neural architectures in training settings.
Style augmentation, adversarial attack, understanding, style, convolutional networks, explanation, interpretability, domain adaptation, image classification, model explanation, model interpretation.
## 1 Introduction
Currently, deep learning neural nets require a large amount of data, usually annotated, to increase the generalization and obtain high performance. To deal with this problem, methods for artificial data generation are performed to increase the training samples; this common learning strategy is called data augmentation. In computer vision, data augmentation increases the number of images through pixel-level processing and transformations. For supervised tasks where labels are known, these operations perform label-preserving transformations controlled by the probability of applying the operation and usually a magnitude that intensifies the operation effects on the image (Szegedy et al., 2016; Tanaka and Aranha, 2019). More recently, random erasing (DeVries and Taylor, 2017) and GAN-based augmentation (Tanaka and Aranha, 2019) improved the previous accuracy. In contrast, recent advances in style transfer (Ghiasi et al., 2017; Jackson et al., 2018) lead us to think about the influence of applying random styling and what deep networks learn from this.
Style augmentation is a technique that generates variations from an original set of images changing only the style information and keeping the main content. The style transformation applied to the image changes the image's pixel information, generating a new diverse set of samples that follow the same original distribution. In contrast, content information remains equal (Ghiasi et al., 2017). However, original style transfer techniques started with heavy computation to generate one stylized image. Experimentally, augmenting the training set randomly shows a new level of stochastic behavior, avoids overfitting in a small dataset, and stabilizes performance on large ones (Zheng et al., 2019). Nowadays, some can work close to real-time performance while others can generate a batch of styles per image (Ghiasi et al., 2017; Jackson et al., 2018).
In Interpretable Machine Learning (IML), specifically in image-based models such as CNN, several methods exist to interpret and explain predictions. Usually, large and complex models like CNN are called "black-box" due to their vast number of parameters (hidden layers). So, to know the information shared through each layer, some methods were developed using information from layers and gradients such as Saliency Maps (Simonyan et al., 2013), and CAM-based methods (Zhou et al., 2016; Selvaraju et al., 2017). These methods help explain complex "black box" image-based models and identify essential features in each sample prediction. In our approach, we will use these model explainers to highlight regions inside the input images to provide a visual interpretation of them.
In this work, we propose an augmentation strategy based on traditional augmentation plus style transformations. Besides, we implement new methods to visualize, explain, and interpret the behavior of our trained models. Also, we can understand which features are activating based on the style augmentation selected and study the influence of that style. Our main contributions in the present work are summarized as follows:
* We give an explanation of the successful augmentation strategy based on interpretation methods.
* We propose a **Style Activation Map** (SAM), **Weighted Style Activation Map** (WSAM), and **WSAM Variance** to visualize and understand the influence of style augmentation.
* We outperform previous results on the STL-10 dataset using traditional and style augmentations.
## 2 Related Works
### Style Transfer
In the first neural algorithm (Gatys et al., 2015), a content image and a style image are inputted to the neural network to obtain an output image with the original content but a new style. (Jing et al., 2017) employed the Gram matrices to model textures by encoding the correlations between convolutional features from different layers. Previous style transfer works (Ulyanov et al., 2017) improved the visual fidelity, in which semantic structure was preserved for images with higher resolution. In (Geirhos et al., 2018) it was concluded that neural networks have a strong bias toward texture. Although the initial developments generated exciting results compared to the pioneer method, drawbacks such as weak texture synthesis and high computational cost were present (Ulyanov et al., 2017; Jing et al., 2017). More recently, (Li et al., 2018; Ghiasi et al., 2017) solved the problem by relying on arbitrary styles without retraining the neural model. Also, other techniques adjusted a new parameter or inserted noise carefully to generate more style variations from one style input (Ghiasi et al., 2017; Kotovenko et al., 2018). Using these latter strategies, the first work to successfully employ style augmentation for a cross-domain classification task (Jackson et al., 2018) follows the methodology adopted in (Ghiasi et al., 2017), which uses an Inception-v3 (Szegedy et al., 2015) architecture for the encoder and residual blocks for the decoder networks. However, the latent space is modified by a multivariate normal distribution which changes the style embedding. Other contemporary approaches (Zheng et al., 2019; Georgievski, 2019) used style augmentation and reported exciting results in classification tasks, specifically on the STL-10, CIFAR-100, and Tiny-ImageNet-200 datasets. Other interesting applications extend to segmentation tasks (Hesse et al., 2019; Gkitsas et al., 2019).
Based on this literature review, we used a neural transfer model following a trade-off between edge preservation, flexibility to generate style variations, processing time, and best visual fidelity under different styles. We also compare our methodology to prior approaches used for style augmentation.
### Deep Network Explanations
Explaining a CNN focuses on analyzing the information passed through each layer inside the network. Following this idea, several methods were proposed to visualize and obtain a notion about which features of a deep CNN were activated in one specific layer. Saliency maps (Simonyan et al., 2013) showed the convolutional activations, and (Zeiler and Fergus, 2014) showed the impact of applying occlusion to the input image. Other methods use the gradients to visualize features and explain deep CNN networks, such as DeepLIFT (Shrikumar et al., 2017), which computes scores for each feature; Integrated Gradients (Sundararajan et al., 2017), which computes features based on gradients; CAM (Zhou et al., 2016); and Grad-CAM (Selvaraju et al., 2017), which compute relevant regions using gradients and feature maps. Each method identifies features with high and strong activation representing the prediction for a specific predicted category.
Guided by this literature review, we propose a new method called **Style Activation Maps (SAM)** based on the Grad-CAM method applied to style augmentation. We choose this one due to better behavior and performance against adversarial attacks or noise-adding techniques (Adebayo et al., 2018; Gilpin et al., 2018). Our main goal is to understand and interpret the impact of applying style augmentation in classification tasks and analyze their influence.
## 3 Proposed Method
In this section, we present theoretical formulation and some interpretation methods used.
### Style Augmentation
For our experiments, we used the same methodology as Jackson et al. (2018); we nevertheless used a faster VGG-based network and added noise to diversify the style features. Specifically, we used an architecture composed of a generalized form of a linear transformation (Li et al., 2018). Also, we compare with other related works (Jackson et al., 2018; Zheng et al., 2019) that use neural style augmentation.
Formally, let \(C=\big\{c_{1},c_{2},...,c_{j}\big\}\), \(c_{i}\in\mathbb{R}^{N\times M\times C}\), be the content image set and let \(Z=\big\{z_{1},z_{2},...,z_{i}\big\}\), \(z_{i}\in\mathbb{R}^{n}\), be the precomputed style embedding set from \(S=\big\{s_{1},s_{2},...,s_{i}\big\}\), \(s_{i}\in\mathbb{R}^{N\times M\times C}\), which are used to feed the styling algorithm to generate the output set \(O=\big\{o_{1},o_{2},...,o_{j}\big\}\), \(o_{j}\in\mathbb{R}^{N\times M\times C}\). Moreover, we denote zero-mean vectors \(\overline{c}_{j}\in\mathbb{R}^{N\times M\times C}\) and \(\overline{z}_{i}\in\mathbb{R}^{n}\). Our style strategy transfers elements \(z_{i}\) from the style set \(Z\) to a specific element from the content set \(C\).
The VGG ("r41") architecture, denoted as \(M(.)\), maps \(\mathbb{R}^{N\times M\times C}\rightarrow\mathbb{R}^{N_{1}\times M_{1}\times F}\), and a non-linear function \(\phi(.)\) maps \(\mathbb{R}^{N_{1}\times M_{1}\times F_{1}}\rightarrow\mathbb{R}^{n}\), where \(N_{1}<N\), \(M_{1}<M\) and \(F_{1}>F\). Also, we denote \(C(.)\), \(U(.)\) as the compress and uncompress CNN-based networks from the original paper (Li et al., 2018). \(\phi(.)\) embeds the input image to an embedding vector that contains the semantic information of the image. More concisely, we use this non-linear function to map the original image to an embedding vector as shown in Eq. 1 for the content image and Eq. 2 for the style image. In our implementation, the function \(\phi(.)\) employs a CNN whose output is used to compute the covariance matrix and feed it to a fully-connected layer.
Since we use an architecture based on linear transformations, which is generalized from previous approaches (Ghiasi et al., 2017), the transformation matrix \(T\) sets and preserves the feature affinity of the content image (determined by the covariance matrix of the content and the style). This is expressed in Eq. 3. In our implementation, we precomputed the style vectors and saved all textures in memory; thereby, our modifications are described in Eq. 4 and 5.
\[\phi_{c}=\phi_{1}(VGG(\overline{c}_{j})) \tag{1}\] \[\phi_{s}=\phi_{2}(VGG(\overline{s}_{i})) \tag{2}\] \[T=\phi_{c}\phi_{c}^{T}\phi_{s}\phi_{s}^{T} \tag{3}\]
\[T=\phi_{c}\phi_{c}^{T}(\alpha\phi_{c}\phi_{c}^{T}+(1-\alpha) \hat{z_{i}}) \tag{4}\] \[o_{i}=U(T\ C(c_{j}))+(\alpha)\mu_{c_{i}}+(1-\alpha)\mu_{z_{i}} \tag{5}\]
Where \(\alpha\) is the interpolation hyper-parameter which controls the strength of the style transfer similarly to Jackson et al. (2018), and \(\hat{z_{i}}\), defined in Eq. 6, is the embedding vector of the style set with a noise addition for style randomization.
\[\hat{z_{i}}\sim\overline{z}_{i}+\mathcal{N}(\mu_{i},\sigma_{i}^{2}) \tag{6}\]
As argued in prior methodologies, minor variations increase the randomization in the process; thereby, we apply noise instead of using a sampling strategy, similar to applying Gaussian noise in the latent space of generative networks during training. In particular, we set this noise source as a multivariate normal distribution whose mean and covariance scale and shift \(\overline{z}_{i}\) in the embedding space. This is also useful for understanding the randomization process and the influence of the latent space.
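As a small illustration, the noise injection of Eq. 6 and the interpolation of Eq. 4 could be written as below; here `phi_c` stands for the content embedding of Eq. 1 and `z_hat` is assumed to already have the shape required by Eq. 4, while the full stylized output of Eq. 5 additionally needs the compress/uncompress networks \(C(.)\), \(U(.)\) and the interpolated channel means. The names are illustrative, not our released code.

```python
import torch

def noisy_style_embedding(z_bar, sigma):
    """Eq. (6): perturb a precomputed (zero-mean) style embedding for style randomization."""
    return z_bar + sigma * torch.randn_like(z_bar)

def blended_transform(phi_c, z_hat, alpha):
    """Eq. (4): interpolate between the content covariance and the (reshaped) noisy style term.

    Assumes z_hat has already been mapped to the same shape as phi_c @ phi_c.T.
    """
    cov_c = phi_c @ phi_c.t()
    return cov_c @ (alpha * cov_c + (1.0 - alpha) * z_hat)
```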
### Model Interpretation
In this work, we propose a new method **Style Activation Map** based on Grad-CAM to visualize the predictions and the highlighting regions with the most representative activated features from styled images. To do this, we extract from the penultimate layer the \(A^{k}\in\mathbb{R}^{u\times v}\) feature maps of width \(u\) and height \(v\), with each element indexed by \(ij\). So \(A^{k}_{i,j}\) refers to the activation at location \((i,j)\) of the feature map \(A^{k}\). We apply the GlobalAveragePooling (GAP) technique to the feature maps to get the neuron importance weights defined in Eq. 7.
\[\widehat{\delta}^{c}_{k}=\overbrace{\frac{1}{Z}\sum_{i}\sum_{j}}^{\text{GAP}} \underbrace{\frac{\partial y^{c}}{\partial A^{k}_{ij}}}_{\text{grad-backprop}} \tag{7}\]
Where \(\widehat{\delta}^{c}_{k}\) represents the neuron importance weights, \(c\) is the class, \(Z=u\times v\) is the number of spatial locations in the feature map, \(k\) indexes the \(k\)-th feature map, \(A^{k}_{ij}\) is the activation at location \((i,j)\),
\(y^{c}\) is the score for class \(c\), and \(\frac{\partial y^{c}}{\partial A^{k}_{ij}}\) is the gradient obtained via back-propagation. Next, we calculate the corresponding activation maps for each prediction using Eq. 7. From this point, we propose a new technique to visualize the highlighted regions under stylization and their variations. We present two methods: the **Style Activation Map (SAM)**, defined as the relevant highlighted regions of the different styles in the predictions, and the **Weighted Style Activation Map (WSAM)**, defined as the prediction-weighted sum of the SAMs over all styles applied to all samples of a class.
Let \(I^{c}_{\alpha,\sigma}\) denote an input image of class \(c\) stylized with style \(\sigma\) at intensity \(\alpha\). Its **SAM** is computed from the \(k\)-th feature activation maps \(A^{k}\in\mathbb{R}^{u\times v}\) and the class score \(y^{c}\) for class \(c\):
\[SAM^{c}_{\alpha,\sigma}=ReLU(\sum_{k}\widehat{\delta}^{c}_{k}A^{k}_{\alpha,\sigma}) \tag{8}\]
We apply the ReLU function to the weighted linear combination of the feature maps \(A^{k}\) because we are only interested in features with a **positive influence**. Then, we use this result to obtain the **WSAM** by taking a weighted mean of \(SAM^{c}_{\alpha,\sigma}\) and the corresponding predictions \(y^{c}_{\alpha,\sigma}\) over all styles and all intensities. We define \(\Omega\) as the product of the total number of styles and the total number of intensities evaluated, so we have:
\[WSAM^{c}=\frac{1}{\Omega}\sum_{\alpha}\sum_{\sigma}y^{c}_{\alpha,\sigma}\times SAM ^{c}_{\alpha,\sigma} \tag{9}\]
Once \(WSAM^{c}\) is calculated as in Eq. 9, we compute the total variance of the highlighted regions over \(m\) samples to identify the style features that are most significant for the classifier:
\[WSAM^{c}_{variance}=\frac{1}{Z\times m}\sum_{i}^{m}(WSAM^{c}_{i}-y^{c}_{i} \times I^{c}_{i})^{2} \tag{10}\]
Where \(I^{c}_{i}\) is the \(i\)-th input sample stylized with \(\alpha=1.0\) (no style), \(Z=u\times v\) is the map size, and \(y^{c}_{i}\) is its class score for class \(c\). Our metric quantifies how much the highlighted regions of an image vary across its styled versions with different values of \(\alpha\).
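The following Python sketch (an illustration, not the original code) shows one way to realize Eqs. 7–10 with automatic differentiation. The function names and tensor shapes are assumptions; `feat_maps` is expected to be the penultimate-layer activations with `requires_grad=True`, and `class_score` must be computed from them so the gradient in Eq. 7 exists.

```python
import torch
import torch.nn.functional as F

def style_activation_map(feat_maps, class_score):
    """SAM for one (image, style, alpha) triple (Eqs. 7-8).

    feat_maps  : (K, u, v) penultimate-layer activations, requires_grad=True
    class_score: scalar y^c computed from feat_maps for the class of interest
    """
    grads = torch.autograd.grad(class_score, feat_maps, retain_graph=True)[0]
    weights = grads.mean(dim=(1, 2))                                  # Eq. 7 (GAP of gradients)
    return F.relu((weights[:, None, None] * feat_maps).sum(dim=0))    # Eq. 8

def weighted_sam(sams, scores):
    """WSAM (Eq. 9): prediction-weighted mean of SAMs over styles and intensities.

    sams  : (S, u, v) SAMs of one image under S style/alpha combinations
    scores: (S,) class scores y^c for the same combinations
    """
    return (scores[:, None, None] * sams).mean(dim=0)

def wsam_variance(wsams, base_maps, base_scores):
    """Eq. 10: mean squared deviation between WSAMs and the unstyled reference maps.

    wsams, base_maps: (m, u, v); base_scores: (m,) scores of the unstyled samples
    """
    u, v = base_maps.shape[1:]
    diff = wsams - base_scores[:, None, None] * base_maps
    return (diff ** 2).sum() / (u * v * len(base_maps))
```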
## 4 Experiments and Results
We perform our experiments on the STL-10 (\(96\times 96\)) dataset, with 5,000 labeled images for training and 8,000 for testing. We disregard the 100,000 unlabeled images in all our experiments. All experiments were performed using five high-performing networks: Xception (Chollet, 2016), InceptionV3-299 (Szegedy et al., 2015), InceptionV4 (Szegedy et al., 2016), WideResNet-96 (Zengedy et al., 2017), and WideResNet-101 (Kabir et al., 2020). We also compare our results with other state-of-the-art augmentation methods: SWWAE (Zhao et al., 2015), Exemplar Convnet (Dosovitskiy et al., 2014), IIC (Ji et al., 2018), Ensemble (Thoma, 2017), WideResNet+cutout (DeVries and Taylor, 2017), InceptionV3 (Jackson et al., 2018), and STADA (Zheng et al., 2019).
### Style Augmentation
First, we explore the effects of style augmentation through a t-SNE visualization of images after applying the styler network to a subset of the test set (Figure 1a); we note that some clusters of original images and their stylized versions, such
Figure 1: (a) Visualization using t-SNE, samples with their style augmentation. (b) Different styles with variations of the parameter \(\alpha\) from 1.0 (no stylization) to 0.0 (style augmentation) on images.
as truck and horse, separate by a small distance. In Figure 1(b), we applied several styles with different \(\alpha\) values to find the best balance between style and content information, as described above in Eq. 5. We emphasize the difference between style augmentation and classical techniques such as rotation, mirroring, cutout [14], etc.: with style augmentation, we increase the number of samples using about 80,000 styles and by sampling the style intensity. Figure 1(b) shows different styles and different \(\alpha\) values (style intensity) from 0.0 to 1.0 in steps of 0.2.
Images with augmentation strategies for training deep models include traditional augmentations, cutouts, and our style augmentation method using a lower style effect (\(\alpha=0.7\)). At this point, we can consider **style augmentation** as a **noise-adding technique** or **adversarial attacker** due to the style distortion, which makes images more challenging to represent and be associated with the correct class.
### Training Models
For experiments, we define four learning strategies, which are composed of no augmentation (None or N/A), traditional augmentation (Trad), style augmentation (SA), and both (Trad+SA) for each model. In Table 1, we present the quantitative comparisons between the state-of-the-art methods in style augmentation using styling and architectures for the STL-10 dataset; the Extra column means additional data is used to train that model, Trad column means traditional augmentation plus cutout, and Style column indicates our style augmentation.
We note that in all cases, style augmentation helps to improve results. Besides, we found that models with higher input resolution reached higher accuracy after applying the styling method, as shown in Figure 2(a). Experiments on different input sizes support this observation [1].
Furthermore, in Figure 2(b), we analyze the influence of style additions on a subset of the test set composed of 100 samples (10 samples per class), computing the average accuracy at each point on the X axis over 20,000 random styles, sorted from higher to lower accuracy. Note that the accuracy of the model trained without style augmentation decreased drastically for some styles. In contrast, using styles during training makes the same architecture more robust to strong variations without losing accuracy.
## 5 Style Activation Maps Visualization
Once the training step was finished, we evaluated and analyzed the stylization behavior of our models. First, in Figure 3(a), we show how our Style Activation Map works. Each row is a model, and each column is a learning strategy: no augmentation (N/A), only style augmentation (SA), traditional augmentation plus cutout (Trad), and both (Trad+SA). We take a random sample with no style (\(\alpha=1\)) and compute its SAM (style activation map) for each model and each augmentation strategy. From this, we see how both Trad and Trad+SA help the models focus on the plane instead of other regions, unlike no augmentation (N/A). It is also important to highlight that the better the prediction, the more accurately the highlighted region covers the object (in this case, a plane).
On the other hand, using the best model, WideResNet-101, we take the same random sample (a plane) to test the different learning strategies using the same style but varying the \(\alpha\) parameter; that is, the stylized sample is used as input. In Figure 3(b), we show the influence of image stylization. Each row corresponds to a learning strategy: N/A, SA, Trad, and Trad+SA. Each column indicates that
| Network | Extra | Trad | Style | Acc |
| --- | :-: | :-: | :-: | --- |
| SWWAE | ✓ | ✓ | | 74.33 |
| Exemplar Conv | ✓ | ✓ | | 75.40 |
| IIC | ✓ | ✓ | | **88.80** |
| Baseline | | ✓ | | 75.67 |
| Ensemble | | ✓ | | 77.62 |
| STADA\({}^{*}\) | | ✓ | ✓ | 75.31 |
| InceptionV3-299\({}^{*}\) | | ✓ | ✓ | **80.80** |
| Xception-96\({}^{*}\) | | ✓ | ✓ | **82.67** |
| Xception-128\({}^{*}\) | | ✓ | ✓ | **85.11** |
| Xception-256\({}^{*}\) | | | | 73.37 |
| Xception-256\({}^{*}\) | | ✓ | | 86.19 |
| Xception-256\({}^{*}\) | | | ✓ | 74.89 |
| Xception-256\({}^{*}\) | | ✓ | ✓ | **86.85** |
| InceptionV4-299\({}^{*}\) | | | | 79.17 |
| InceptionV4-299\({}^{*}\) | | ✓ | | 86.49 |
| InceptionV4-299\({}^{*}\) | | | ✓ | 80.52 |
| InceptionV4-299\({}^{*}\) | | ✓ | ✓ | **88.18** |
| WideResNet-96\({}^{*}\) (WRN) | | | | 77.28 |
| WideResNet-96\({}^{*}\) (WRN) | | ✓ | | 87.26 |
| WideResNet-96\({}^{*}\) (WRN) | | | ✓ | 83.58 |
| WideResNet-96\({}^{*}\) (WRN) | | ✓ | ✓ | **88.83** |
| WideResNet-101\({}^{*}\) (WRN) | | | | 87.83 |
| WideResNet-101\({}^{*}\) (WRN) | | ✓ | | 88.23 |
| WideResNet-101\({}^{*}\) (WRN) | | | ✓ | 92.23 |
| WideResNet-101\({}^{*}\) (WRN) | | ✓ | ✓ | **94.67** |

Table 1: Accuracy comparison of data augmentation methods in STL-10. (\({}^{*}\)) indicates results performed by us.
the styled input sample, with the \(\alpha\) value varying from 1.0 (no style) to 0.0 (full style intensity), is evaluated by each network. We observe that a styled image tested on a model that does not use SA yields poor results, whereas this does not happen for the model trained with SA. Also, the SAM-relevant regions of the styled models tend to remain constant across the \(\alpha\) variations.
In Figure 4(a), we show different samples, styles, and \(\alpha\) values, illustrating the influence of style on random samples with random styles; we note that some styles do not help to improve the prediction, and others even make it worse. From this result, we conclude that **styles can influence the input image positively, negatively, or not at all**. In addition, this result shows how the regions relevant to the network change depending on the style; these two effects are shown in Figure 4(b), improving the confidence of the prediction or not.
## 6 Discussions
We train, test, and visualize the impact of style augmentation, varying both the \(\alpha\) values (from 0.0 to 1.0 in steps of 0.2) and the learning strategies (N/A, SA, Trad, and Trad+SA) on the STL-10 dataset. We achieve high performance, with the best result given by WideResNet-101. We show the behavior of the proposed style augmentation technique (see Figure 1(a) and Figure 1(b)). We identify that some styles perturb the images more than others for the same sample, much like adding noise. We also argue that by using larger input sizes and removing some overly complex styles, we can probably remove the negative impacts on training (see Figure 2(a)). Furthermore, our experiments showed notable robustness to styles when styling is included in the training (see Figure 2(b)). Nonetheless, we also observed that the accuracy of models trained only with Trad decreased drastically for some styles. Additionally, we found that some textures are more challenging for style transfer, even with cutting-edge networks.
We explored more deeply the effects of particular styles and their influence on training and testing. In Figure 3(a), we show how style made a model more robust thanks to the different intensities of \(\alpha\),
Figure 3: Comparing SAM results: (a) We compare SAM from different models (rows) using the augmentation strategies None, Trad, SA, and Trad+SA (columns). (b) We compare SAM of the WideResNet-101 trained using N/A, Trad, SA, and Trad+SA, tested on the same image and style but varying the style intensity \(\alpha\) of the input.
Figure 2: (a) Influence of the application of styles on a subset of the test set. (b) Comparison of WideResNet-101 robustness under style augmentation setting during training. Accuracy vs. style transfer (\(\alpha=0.5\)) for a subset of the test set.
which behaves as noise, although this does not hold for every style. Specifically, we took the case of the plane evaluated in Figure 3(b). We got a low score (0.341) with the highest style intensity (\(\alpha=0.0\)) and the highest score (0.988) with \(\alpha=0.8\). Furthermore, the experimental results suggest that the best fit for \(\alpha\) lies **between** 0.3 and 0.8; similar results were found in Jackson et al. (2018). In Figure 4(a), we note that some styles have no effect, while for others the network learns how to classify images correctly even at higher intensities (noise). Also, style strengthens the correlation between the predictions and the styled feature activation maps (see Figure 4(b)).
We now calculate the **WSAM variance** and the WSAM for each class sample, using all styles and \(\alpha\)s. In Table 2, we present the **WSAM variance** of all **SAMs**. Besides, in Figure 5, we show the resulting **WSAM** for one sample per class. These results give us an idea of the impact of applying **79,424** styles with different \(\alpha\) intensities during the training phase and of how the network learns to deal with those noisy samples (styled images), improving the robustness of the model. Finally, these results allow us to understand the influence of style augmentation on image classification. We can say that style augmentation acts as a noise adder or adversarial attacker, making our model more robust against adversarial attacks.
## 7 Conclusions and Future Work
In this work, we define a metric to explain, through experimentation, the behavior and impact of style augmentation and how it can lead to better results in classification tasks. This metric is composed of three main outputs: the Style Activation Map (SAM), the Weighted Style Activation Map (WSAM), and \(WSAM_{variance}\); the last one measures the variance of the regions of relevant features in styled samples. We outperform the state of the art **without extra data** in style augmentation accuracy with WideResNet-101 trained on the STL-10 dataset; in addition, our method gives robustness to input variations. From the results and experiments, style augmentation has a measurable impact on the model, and this impact can be visualized through the generated SAM regions. We conclude that styles may modify and perturb different features of the input images (like an adversarial attacker), producing images with slight variations in distribution or even outliers that make the prediction fail. In future directions, we will
| Category | \(WSAM_{variance}\) | Category | \(WSAM_{variance}\) |
| --- | --- | --- | --- |
| airplane | 0.107 | horse | 0.269 |
| truck | 0.129 | bird | 0.316 |
| deer | 0.175 | dog | 0.338 |
| cat | 0.193 | monkey | 0.380 |
| car | 0.228 | ship | 0.456 |

Table 2: Results of the total WSAM variance, sorted, for each class in STL-10 after normalization.
Figure 4: Comparing SAM results: (a) WideResNet-101 SAM results using different values for \(\alpha\), different styles, and different samples. (b) WideResNet-101 SAM results show the negative impact styles (3 on the left side), positive impact styles (3 on the right side), and no style evaluated for \(\alpha=(0,0.5,0.9)\) to one input image (middle).
Figure 5: Results after calculating the WSAM for each class sample, varying styles, and \(\alpha\) as defined in Eq. 9. We can see the total variance of the relevant region after stylization.
extend this study to more complex models with a larger number of parameters (such as transformers) and to larger image sizes such as ImageNet, and explain how style could influence their internal behavior. We also propose to investigate more deeply which features are preserved in each style and which distortions they may generate through the network layers.
## 8 Acknowledgements
This work was supported by Carlos Chagas Filho Foundation for Research Support of Rio de Janeiro State (FAPERJ)-Brazil (grant #E-26/201.424/2021), Sao Paulo Research Foundation (FAPESP)-Brazil (grant #2021/07012-0), and the School of Applied Mathematics at Fundacao Getulio Vargas (FGV/EMAp). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the FAPESP, FAPERJ, or FGV.
|
2306.02483 | Performance of near-optimal protocols in weak processes | A natural criticism of the optimal protocol of the irreversible work found
for weakly driven processes is its experimental difficulty in being
implementable due to its singular part. In this work, I explore the possibility
of taking its continuous linear part as an acceptable near-optimal protocol.
First, I prove that such a solution is the optimal protocol for non-singular
admissible functions. I corroborate this result by observing successful
comparisons with test protocols on six reasonable examples. Also, extending
such analysis, I conclude that the error committed on this near-optimal
protocol is considerable compared to the first-order singular approximation
solution, except for sudden and slowly-varying processes. A conjecture is made
about a general structure of a near-optimal protocol for systems under
arbitrarily strong perturbations. | Pierre Nazé | 2023-06-04T21:19:53Z | http://arxiv.org/abs/2306.02483v3 | # Performance of near-optimal protocols in weak processes
###### Abstract
A natural criticism of the universal optimal protocol of the irreversible work found in the context of weak processes is the experimental difficulty of implementing it, due to its singular part. In this work, I propose, as a partial solution to this problem, its continuous linear part as an acceptable near-optimal protocol. This is based on the analysis of several examples of the error committed by approximating the solution, extended up to second order, by its continuous linear part. The result appears to be universal: depending mainly on the ratio between switching time and waiting time \(\tau/\tau_{w}\), the error for sudden and slowly-varying processes is less than \(1\%\), while for \(\tau\approx\tau_{w}\) it has a peak with an upper bound around \(8\%\). Although implementing Dirac deltas could be an experimental challenge, I also present the error when those functions are included, where the results of these new near-optimal protocols become slightly better.
## I Introduction
The solutions of optimization problems are very sensitive to the space of admissible functions [1]. In some cases, computational studies may not be able to explore some regions completely due to their own limitations. The optimal protocols found in such studies are nothing more than near-optimal protocols when related to a bigger space of admissible functions.
Another situation where this happens is when the admissible functions include generalized functions, such as Dirac deltas and their derivatives [2]. In particular, the experimental implementation of these cases can be a challenge, although a way to implement them has been proposed [2]. In this case, accessible near-optimal protocols, tangible to experimentalists, are of utmost importance.
In Ref. [2], a universal optimal protocol for the irreversible work and its variance for isothermal and weak processes has been proposed, where the solution is split into a continuous linear part and a singular part, composed of Dirac deltas and their derivatives. In the face of the problem reported above, I propose as an acceptable near-optimal protocol for this case the continuous linear part. This proposition is based on a universal behavior verified in the analyses of several examples of the error committed by not adding the Dirac deltas and first derivatives of the singular part: for sudden and slowly-varying processes the error is less than \(1\%\), while for other cases it is less than \(8\%\). Finally, I present the case where the Dirac deltas are included in the near-optimal protocol, whose result becomes slightly better.
## II Preliminaries
I start defining notations and developing the main concepts to be used in this work.
Consider a classical system with a Hamiltonian \(\mathcal{H}(\mathbf{z}(\mathbf{z_{0}},t)),\lambda(t))\), where \(\mathbf{z}(\mathbf{z_{0}},t)\) is a point in the phase space \(\Gamma\) evolved from the initial point \(\mathbf{z_{0}}\) until time \(t\), with \(\lambda(t)\) being a time-dependent external parameter. During a switching time \(\tau\), the external parameter is changed from \(\lambda_{0}\) to \(\lambda_{0}+\delta\lambda\), with the system being in contact with a heat bath of temperature \(\beta\equiv\left(k_{B}T\right)^{-1}\), where \(k_{B}\) is Boltzmann's constant. The average work performed on the system during this interval of time is
\[\overline{W}\equiv\int_{0}^{\tau}\left\langle\overline{\partial_{\lambda} \mathcal{H}}(t)\right\rangle_{0}\dot{\lambda}(t)dt, \tag{1}\]
where \(\partial_{\lambda}\) is the partial derivative in respect to \(\lambda\) and the superscripted dot the total time derivative. The generalized force \(\left\langle\overline{\partial_{\lambda}\mathcal{H}}\right\rangle_{0}\) is calculated using the averaging \(\overline{\cdot}\) over the stochastic path and the averaging \(\langle\cdot\rangle_{0}\) over the initial canonical ensemble. The external parameter can be expressed as
\[\lambda(t)=\lambda_{0}+g(t)\delta\lambda, \tag{2}\]
where, to satisfy the initial conditions of the external parameter, the protocol \(g(t)\) must satisfy the following boundary conditions
\[g(0)=0,\quad g(\tau)=1. \tag{3}\]
We consider as well that \(g(t)\equiv g(t/\tau)\), which means that the intervals of time are measured according to the switching time unit.
Linear-response theory aims to express average quantities until the first-order of some perturbation parameter considering how this perturbation affects the observable to be averaged and the process of average [3]. In our case, we consider that the parameter does not considerably changes during the process, \(|g(t)\delta\lambda/\lambda_{0}|\ll 1\), for
all \(t\in[0,\tau]\). In that manner, using such framework, the generalized force can be approximated until first-order as
\[\begin{split}\left\langle\overline{\partial_{\lambda}\mathcal{H}}(t) \right\rangle_{0}&=\left\langle\partial_{\lambda}\mathcal{H} \right\rangle_{0}+\delta\lambda\left\langle\partial_{\lambda\lambda}^{2} \mathcal{H}\right\rangle_{0}g(t)\\ &\quad-\delta\lambda\int_{0}^{t}\phi_{0}(t-t^{\prime})g(t^{ \prime})dt^{\prime}.\end{split} \tag{4}\]
The quantity \(\phi_{0}(t)\) is the so-called response function [3], which can be conveniently expressed as the derivative of the relaxation function \(\Psi_{0}(t)\)[3]
\[\phi_{0}(t)=-\frac{d\Psi_{0}}{dt}. \tag{5}\]
In our particular case, the relaxation function is calculated as
\[\Psi_{0}(t)=\beta\left\langle\partial_{\lambda}\mathcal{H}(0)\overline{ \partial_{\lambda}\mathcal{H}}(t)\right\rangle_{0}-\mathcal{C}, \tag{6}\]
where the constant \(\mathcal{C}\) is chosen so that the relaxation function vanishes at long times [3]. We define the relaxation timescale of the system as the quantity
\[\tau_{R}=\int_{0}^{\infty}\frac{\Psi(t)}{\Psi(0)}dt. \tag{7}\]
The generalized force, written in terms of the relaxation function, can be expressed as
\[\begin{split}\left\langle\overline{\partial_{\lambda}\mathcal{H }}(t)\right\rangle_{0}&=\left\langle\partial_{\lambda}\mathcal{H }\right\rangle_{0}-\delta\lambda\widetilde{\Psi}_{0}g(t)\\ &\quad+\delta\lambda\int_{0}^{t}\Psi_{0}(t-t^{\prime})\dot{g}(t^{ \prime})dt^{\prime},\end{split} \tag{8}\]
where \(\widetilde{\Psi}_{0}\equiv\Psi_{0}(0)-\left\langle\partial_{\lambda\lambda}^ {2}\mathcal{H}\right\rangle_{0}\). Finally, combining Eqs. (1) and (8), the average work performed at the linear response of the generalized force is
\[\begin{split}\overline{W}=&\,\delta\lambda\left\langle \partial_{\lambda}\mathcal{H}\right\rangle_{0}-\frac{\delta\lambda^{2}}{2} \widetilde{\Psi}_{0}\\ &+\delta\lambda^{2}\int_{0}^{\tau}\int_{0}^{t}\Psi_{0}(t-t^{ \prime})\dot{g}(t^{\prime})\dot{g}(t)dt^{\prime}dt.\end{split} \tag{9}\]
We observe that the double integral on Eq. (9) vanishes for long switching times [4]. Therefore the other terms are part of the contribution of the difference of free energy, since this quantity is exactly the average work performed for quasistatic processes in isothermal drivings. Thus, we can split the average work into the difference of free energy \(\Delta F\) and irreversible work \(W_{\text{irr}}\)
\[\Delta F=\delta\lambda\left\langle\partial_{\lambda}\mathcal{H}\right\rangle _{0}-\frac{\delta\lambda^{2}}{2}\widetilde{\Psi}_{0}, \tag{10}\]
\[W_{\text{irr}}=\delta\lambda^{2}\int_{0}^{\tau}\int_{0}^{t}\Psi_{0}(t-t^{\prime})\dot{g}(t^{\prime})\dot{g}(t)dt^{\prime}dt. \tag{11}\]
In particular, the irreversible work can be rewritten using the symmetric property of the relaxation function [3]
\[W_{\text{irr}}=\frac{\delta\lambda^{2}}{2}\int_{0}^{\tau}\int_{0}^{\tau}\Psi_{0}(t-t^{\prime})\dot{g}(t^{\prime})\dot{g}(t)dt^{\prime}dt. \tag{12}\]
I establish at this point the regimes where linear-response theory is able to describe thermodynamic processes. Those regimes are determined by the relative strength of the driving with respect to the initial value of the protocol, \(\delta\lambda/\lambda_{0}\), and the rate by which the process occurs with respect to the relaxation time of the system, \(\tau_{R}/\tau\). See Fig. 1 for a diagram depicting the regimes. In region 1, the so-called slowly-varying processes, the ratio \(\delta\lambda/\lambda_{0}\) is arbitrary, while \(\tau_{R}/\tau\ll 1\). By contrast, in region 2, the so-called finite-time and weak processes, the ratio \(\delta\lambda/\lambda_{0}\ll 1\), while \(\tau_{R}/\tau\) is arbitrary. In region 3, the so-called arbitrarily far-from-equilibrium processes, both ratios are arbitrary. Linear-response theory can only describe regions 1 and 2 [4]. In this work, we are going to focus on region 2 only.
Consider the irreversible work rewritten in terms of the protocols \(g(t)\) instead of its derivative
\[W_{\text{irr}}= \frac{\delta\lambda^{2}}{2}\Psi(0)+\delta\lambda^{2}\int_{0}^{ \tau}\dot{\Psi}_{0}(\tau-t)g(t)dt \tag{13}\] \[-\frac{\delta\lambda^{2}}{2}\int_{0}^{\tau}\int_{0}^{\tau}\ddot{ \Psi}(t-t^{\prime})g(t)g(t^{\prime})dtdt^{\prime}. \tag{14}\]
Using calculus of variations, we can derive the Euler-Lagrange equation that furnishes the optimal protocol \(g^{*}(t)\) of the system that will minimize the irreversible work [5]
\[\int_{0}^{\tau}\ddot{\Psi}_{0}(t-t^{\prime})g^{*}(t^{\prime})dt^{\prime}=\dot{ \Psi}_{0}(\tau-t). \tag{15}\]
In particular, the optimal irreversible work will be [5]
\[W_{\text{irr}}^{*}=\frac{\delta\lambda^{2}}{2}\Psi(0)+\frac{\delta\lambda^{2}}{ 2}\int_{0}^{\tau}\dot{\Psi}_{0}(\tau-t)g^{*}(t)dt. \tag{16}\]
The Euler-Lagrange equation (15) also furnishes the optimal protocol that minimizes the variance of the work
Figure 1: (Color online) Diagram of nonequilibrium regions. Region 1: slowly-varying processes, Region 2: finite-time but weak processes and Region 3: arbitrarily far-from-equilibrium processes. Linear response theorem can describe regions 1 and 2.
[6]. In this case, the optimal variance of work is
\[\sigma_{\rm W}^{2^{*}}=\frac{\beta\delta\lambda^{2}}{4}\Psi(0)+\frac{\beta\delta \lambda^{2}}{4}\int_{0}^{\tau}\dot{\Psi}_{0}(\tau-t)g^{*}(t)dt. \tag{17}\]
In Ref. [2], the following universal solution was found
\[g^{*}(t)=\frac{t+\tau_{w}}{\tau+2\tau_{w}}+\sum_{n=0}^{\infty}\frac{a_{n}( \delta^{(n)}(t)-\delta^{(n)}(\tau-t))}{\tau+2\tau_{w}}, \tag{18}\]
where the waiting time \(\tau_{w}\) is defined as
\[\tau_{w}=\mathcal{L}_{t}[\Psi(t)](0). \tag{19}\]
The objective of this work is to respond to a natural criticism of such a universal solution: due to its singular part, its experimental implementation can be a real challenge, so how can one overcome this problem? I present here an analysis of the error committed in the irreversible work in two approximations: considering only the continuous linear part and this part added to the Dirac deltas. As we are going to see, these near-optimal protocols are quite reasonable.
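To illustrate how close the continuous linear part alone is to a good protocol, the following Python sketch (not part of the original paper) evaluates Eq. (12) numerically for the Gaussian relaxation function of Eq. (30) with \(\Psi_{0}=1\) and \(\delta\lambda=1\), so that \(\tau_{w}=\tau_{R}\). It compares a naive linear ramp \(g(t)=t/\tau\) with the continuous linear protocol of Eq. (18); the jumps of the latter at \(t=0\) and \(t=\tau\) enter \(\dot{g}\) as delta contributions and are handled analytically. The parameter values are arbitrary choices for illustration.

```python
import numpy as np
from scipy.integrate import quad, dblquad

tau_R = 1.0                                                   # relaxation time (unit of time)
psi = lambda t: np.exp(-np.pi / 4.0 * (t / tau_R) ** 2)       # Gaussian relaxation, Psi(0) = 1
tau_w = quad(psi, 0.0, np.inf)[0]                             # waiting time, Eq. (19)

def w_irr(tau, jump, slope):
    """Eq. (12) for a protocol with equal jumps at t = 0, tau and constant slope in between."""
    area = quad(psi, 0.0, tau)[0]                             # int_0^tau Psi(t) dt
    bulk = dblquad(lambda t, tp: psi(t - tp), 0.0, tau, 0.0, tau)[0]
    return (jump ** 2 * (psi(0.0) + psi(tau))                 # delta-delta contributions
            + 2.0 * jump * slope * area                       # delta-slope cross terms
            + 0.5 * slope ** 2 * bulk)                        # continuous-slope contribution

for tau in (0.1, 1.0, 10.0):
    ramp = w_irr(tau, jump=0.0, slope=1.0 / tau)              # naive protocol g(t) = t / tau
    near = w_irr(tau, jump=tau_w / (tau + 2.0 * tau_w),       # continuous part of Eq. (18)
                 slope=1.0 / (tau + 2.0 * tau_w))
    print(f"tau/tau_w = {tau / tau_w:5.1f}: W_irr(ramp) = {ramp:.4f}, W_irr(linear) = {near:.4f}")
```

With these parameters the two protocols nearly coincide in the sudden limit, while for \(\tau\gtrsim\tau_{w}\) the continuous linear protocol yields a noticeably smaller irreversible work than the naive ramp.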
## III Approximation errors
How much does the addition of the delta peaks and their derivatives affect the optimal irreversible work? Let us define the following quantity
\[\Delta W_{n}=\frac{\delta\lambda^{2}}{2}\int_{0}^{\tau}\dot{\Psi}(\tau-t)g_{n }^{*}(t)dt, \tag{20}\]
where
\[g_{-1}^{*}(t)=\frac{t+\tau_{w}}{\tau+2\tau_{w}}, \tag{21}\]
and
\[g_{n}^{*}(t)=\frac{a_{n}(\delta^{(n)}(t)-\delta^{(n)}(\tau-t))}{\tau+2\tau_{w}}, \tag{22}\]
for \(n\geq 0\). Two errors will be analyzed
\[\epsilon_{-1}(\tau/\tau_{w})=\left|\frac{\sum_{n=0}^{1}\Delta W_{n}}{\frac{ \delta\lambda^{2}}{2}\Psi(0)+\sum_{n=-1}^{1}\Delta W_{n}}\right|, \tag{23}\]
and
\[\epsilon_{0}(\tau/\tau_{w})=\left|\frac{\Delta W_{1}}{\frac{\delta\lambda^{2 }}{2}\Psi(0)+\sum_{n=-1}^{1}\Delta W_{n}}\right| \tag{24}\]
where the first measures the effect of neglecting the whole singular part, while the second measures the effect of neglecting only the derivative term. Observe that the errors do not depend on \(\Psi(0)\). I also considered, without loss of generality, \(\delta\lambda=1\).
## IV Examples
I present now the examples that will be analyzed in the next section.
### Overdamped Brownian motion
We consider in this example a white-noise overdamped Brownian motion subjected to a time-dependent harmonic potential, with the mass of the system equal to one, \(\gamma\) the damping coefficient and \(\omega_{0}\) the natural frequency of the potential. The relaxation function for both the moving laser and stiffening traps [5] is given by
\[\Psi_{1}(t)=\Psi_{0}\exp\bigg{(}-\frac{|t|}{\tau_{R}}\bigg{)}, \tag{25}\]
where \(\tau_{R}\) is the relaxation timescale of each case.
### Underdamped Brownian motion: moving laser trap
We consider in this example a white-noise underdamped Brownian motion subjected to a time-dependent harmonic potential, with \(m\) the mass of the particle, \(\gamma\) the damping coefficient, and \(\omega_{0}\) the natural frequency of the potential. The relaxation function for the moving laser trap [5] is given by
\[\Psi_{2}(t)=\Psi_{0}\exp\bigg{(}-\frac{\gamma}{2}|t|\bigg{)}\left(\cos\omega t +\frac{\gamma}{2\omega}\sin\omega|t|\right), \tag{26}\]
where \(\omega=\sqrt{\omega_{0}^{2}-\gamma^{2}/4}\).
### Underdamped Brownian motion: stiffening laser trap
For the same system, but in the stiffening trap case, the relaxation function is [4]
\[\Psi_{3}(t)=\Psi_{0}\exp\left(-\gamma|t|\right)\left[\frac{2\omega_{0}^{2}}{\omega^{2}}+\left(\frac{\omega^{2}-2\omega_{0}^{2}}{\omega^{2}}\right)\cos\omega t+\frac{\gamma}{\omega}\sin\omega|t|\right], \tag{27}\]
where \(\omega=\sqrt{4\omega_{0}^{2}-\gamma^{2}}\).
### Sinc relaxation function
In Ref. [7], it was shown that applying the method of time averaging to a thermally isolated system performing an adiabatic process produces a new system performing an isothermal process with a typical relaxation time. In particular, for
thermally isolated systems that have a relaxation function equal to
\[\Psi(t)=\Psi_{0}\cos{(\omega t)}, \tag{28}\]
will have for time-averaged relaxation function
\[\Psi_{4}(t)=\Psi_{0}\operatorname{sinc}\left(\frac{\pi}{2}\frac{t}{\tau_{R}} \right), \tag{29}\]
where \(\tau_{R}\) is the relaxation timescale of the system.
### Gaussian relaxation function
A relaxation function that satisfies the criteria of compatibility with the Second Law of Thermodynamics [4] is the Gaussian relaxation function
\[\Psi_{5}(t)=\Psi_{0}\exp\left(-\frac{\pi}{4}\left(\frac{t}{\tau_{R}}\right)^{2 }\right), \tag{30}\]
where \(\tau_{R}\) is the relaxation timescale of the system.
### Bessel relaxation function
The Bessel relaxation function is given by
\[\Psi_{6}(t)=\Psi_{0}J_{0}\left(\frac{t}{\tau_{R}}\right), \tag{31}\]
where \(J_{0}\) is the Bessel function of the first kind with \(\nu=0\) and \(\tau_{R}\) is its relaxation timescale. It satisfies the criteria for compatibility with the Second Law of Thermodynamics. Such relaxation function can model the Ising chain subjected to a time-dependent magnetic field and evolving in time at equilibrium accordingly to Glauber-Ising dynamics [8].
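As a quick numerical illustration (not from the original paper), the waiting time of Eq. (19) and the relaxation timescale of Eq. (7) can be evaluated directly for some of the relaxation functions above; the parameter values below are arbitrary choices with \(\Psi_{0}=1\).

```python
import numpy as np
from scipy.integrate import quad

tau_R, gamma, omega0 = 1.0, 1.0, 2.0
omega = np.sqrt(omega0 ** 2 - gamma ** 2 / 4.0)

relaxation_functions = {
    "exponential, Eq. (25)": lambda t: np.exp(-t / tau_R),
    "underdamped moving trap, Eq. (26)": lambda t: np.exp(-gamma * t / 2.0)
        * (np.cos(omega * t) + gamma / (2.0 * omega) * np.sin(omega * t)),
    "Gaussian, Eq. (30)": lambda t: np.exp(-np.pi / 4.0 * (t / tau_R) ** 2),
}

t_max = 60.0 * tau_R        # the integrands have decayed to negligible values by here
for name, psi in relaxation_functions.items():
    tau_r = quad(lambda t: psi(t) / psi(0.0), 0.0, t_max, limit=200)[0]   # Eq. (7)
    tau_w = quad(psi, 0.0, t_max, limit=200)[0]                           # Eq. (19)
    print(f"{name:35s} tau_R = {tau_r:6.3f}   tau_w = {tau_w:6.3f}")
```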
## V Error analysis
Fig. 2 depicts, for each of the examples presented in the previous section, the errors \(\epsilon_{-1}\) and \(\epsilon_{0}\) as functions of the ratio \(\tau/\tau_{w}\). For \(\Psi_{1}(t)\), the errors \(\epsilon_{-1}\) and \(\epsilon_{0}\) are null, since the optimal protocol does not depend on the deltas. For \(\Psi_{2}(t)\), the error \(\epsilon_{0}\) is null, since the optimal protocol does not depend on derivatives of the Dirac delta. For the other cases, a general feature appears for the error \(\epsilon_{-1}\): for sudden and slowly-varying processes, where respectively \(\tau/\tau_{w}\ll 1\) and \(\tau/\tau_{w}\gg 1\), the error becomes less than 1%. This is easy to understand, since there is almost no participation of the Dirac delta and its derivatives in these regimes [2]. However, when \(\tau\approx\tau_{w}\) the error reaches a peak with an upper bound around 8%, which indicates a larger participation of the terms of the singular part. For the error \(\epsilon_{0}\), the general feature persists, but the results are better, with a peak with an upper bound of 4%. Therefore, it is quite reasonable to consider such a linear solution, or the linear solution plus Dirac deltas, as acceptable near-optimal protocols. Finally, it is important to remark that these numbers can in principle increase as higher orders are added. However, by their very nature, these increments should be negligible.
## VI Final remarks
In this work, I proposed the continuous linear part of the universal optimal protocol for weak processes as a reasonable near-optimal protocol. The proposition is based on the analysis of several examples where the errors of omitting the terms of the singular part were considered. The error \(\epsilon_{-1}\), where no singular term is added, is less than 1% for sudden and slowly-varying processes, while it has a peak with an upper bound of 8%. For the error \(\epsilon_{0}\), this universal feature persists, but the result is better: the upper bound reduces to 4%. In this way, it is reasonable to consider such approximations as near-optimal protocols.
|
2302.10570 | Co-Driven Recognition of Semantic Consistency via the Fusion of
Transformer and HowNet Sememes Knowledge | Semantic consistency recognition aims to detect and judge whether the
semantics of two text sentences are consistent with each other. However, the
existing methods usually encounter the challenges of synonyms, polysemy and
difficulty to understand long text. To solve the above problems, this paper
proposes a co-driven semantic consistency recognition method based on the
fusion of Transformer and HowNet sememes knowledge. Multi-level encoding of
internal sentence structures via data-driven is carried out firstly by
Transformer, sememes knowledge base HowNet is introduced for knowledge-driven
to model the semantic knowledge association among sentence pairs. Then,
interactive attention calculation is carried out utilizing soft-attention and
fusion the knowledge with sememes matrix. Finally, bidirectional long
short-term memory network (BiLSTM) is exploited to encode the conceptual
semantic information and infer the semantic consistency. Experiments are
conducted on two financial text matching datasets (BQ, AFQMC) and a
cross-lingual adversarial dataset (PAWSX) for paraphrase identification.
Compared with lightweight models including DSSM, MwAN, DRCN, and pre-training
models such as ERNIE etc., the proposed model can not only improve the accuracy
of semantic consistency recognition effectively (by 2.19%, 5.57% and 6.51%
compared with the DSSM, MWAN and DRCN models on the BQ dataset), but also
reduce the number of model parameters (to about 16M). In addition, driven by
the HowNet sememes knowledge, the proposed method is promising to adapt to
scenarios with long text. | Fan Chen, Yan Huang, Xinfang Zhang, Kang Luo, Jinxuan Zhu, Ruixian He | 2023-02-21T09:53:19Z | http://arxiv.org/abs/2302.10570v1 | Co-Driven Recognition of Semantic Consistency via the Fusion of Transformer and HowNet Sememes Knowledge
###### Abstract
Semantic consistency recognition aims to detect and judge whether the semantics of two text sentences are consistent with each other. However, the existing methods usually encounter the challenges of synonyms, polysemy and difficulty to understand long text. To solve the above problems, this paper proposes a co-driven semantic consistency recognition method based on the fusion of Transformer and HowNet sememes knowledge. Multi-level encoding of internal sentence structures via data-driven is carried out firstly by Transformer, sememes knowledge base HowNet is introduced for knowledge-driven to model the semantic knowledge association among sentence pairs. Then, interactive attention calculation is carried out utilizing soft-attention and fusion the knowledge with sememes matrix. Finally, bidirectional long short-term memory network (BiLSTM) is exploited to encode the conceptual semantic information and infer the semantic consistency. Experiments are conducted on two financial text matching datasets (BQ, AFQMC) and a cross-lingual adversarial dataset (PAWSX) for paraphrase identification. Compared with lightweight models including DSSM, MwAN, DRCN, and pre-training models such as ERNIE etc., the proposed model can not only improve the accuracy of semantic consistency recognition effectively (by 2.19%, 5.57% and 6.51% compared with the DSSM, MWAN and DRCN models on the BQ dataset), but also reduce the number of model parameters (to about 16M). In addition, driven by the HowNet sememes knowledge, the proposed method is promising to adapt to scenarios with long text.
Keywords: semantic consistency, HowNet, Transformer, sememes knowledge, knowledge fusion.
## 1 Introduction
Semantic consistency recognition (or text semantic matching) is one of the important tasks of natural language processing (NLP), and can be applied to a wide variety of downstream tasks, such as information retrieval, question answering, dialogue systems, machine translation, etc. The inputs of this task are mainly sentence pairs. Different from natural language inference (and text entailment), which aims to recognize the semantic relationship (neutral, entailment, contradiction) between the sentence pairs, the final objective of semantic consistency recognition is to judge whether the semantic meanings of the sentence pairs are consistent or similar.
Semantic consistency recognition is a challenging task due to polysemy and synonymy, which are prominent and may lead to ambiguity and misunderstanding, especially in the Chinese language. However, most existing algorithms, including bag-of-words (BOW), vector space model (VSM), term frequency-inverse document frequency (TF-IDF) and DSSM [2], MwAN [3], DRCN [4], cannot capture the semantic meaning accurately: they mainly solve the matching or similarity problem at the lexical level and struggle to understand text semantics accurately from context. In detail, the existing text matching algorithms based on lexical coincidence and data-driven neural networks have the following limitations:
(1) The semantic diversity of words. The same word can express different meanings in different contexts; for example, "Apple" may denote a kind of fruit on a menu but can also denote "Apple Inc." in the electronic product market.
(2) The meaning of a phrase depends on the order of expression. For a lexical phrase, the meaning expressed will be completely different if the order is exchanged, such as "奶牛 (cow)" and "牛奶 (milk)" in Chinese, or "one another" and "another one" in English.
In order to solve the above problems, this paper proposes a semantic consistency recognition method based on Transformer [1] and HowNet [22], which expands the research on sentence semantic information acquisition. First, we use Transformer to conduct multi-level encoding via data-driven for representation of the internal structure and intrinsic semantics of text sentences. We then introduce an external knowledge base, i.e. HowNet, to conduct knowledge-driven modeling of semantic knowledge association between vocabularies. In addition, soft-attention is exploited to calculate mutual attention and to achieve knowledge fusion with the semantic matrix. Finally, BiLSTM is incorporated to further encode the semantic information of the conceptual level of text and infer the semantic consistency. A number of experiments show that, compared with the existing lightweight models and the pre-training models, the fusion of HowNet sememes knowledge and Transformer's advantage for long text can improve the accuracy of the semantic consistency recognition to a certain extent.
The model proposed in this paper has the following innovations:
* In order to solve the problem of text synonyms, polysemy and difficulty to understand long text for semantic consistency recognition, a co-driven method based on the fusion of Transformer and HowNet sememes knowledge is proposed. In addition to Transformer encoding and model pre-training via data-driven, HowNet sememes are incorporated to enhance the understanding of synonyms and polysemy via knowledge-driven embedding and inference.
* A technical approach of semantic knowledge fusion is proposed. The multi-level internal structure and semantic information of sentence pairs are encoded through Transformer, and the external knowledge base HowNet is introduced to model the semantic knowledge similarity between sememes sequences. At last, soft-attention
is utilized to calculate the interactive attention and conduct knowledge fusion via the sememes matrix.
* Experiments are conducted on text matching of ant finance, banking finance scenarios and multiple paraphrase identification application. Compared with lightweight models such as DSSM [2], MwAN [3], DRCN [4] and pre-training models such as ERNIE, the proposed method can not only improve the accuracy of text semantic consistency recognition effectively, but also reduce the model parameters. It is worth highlighting that our model is capable of adapting to long text. The code will be released at [https://github.com/Platanus-hy/sememes_codriven_text_matching](https://github.com/Platanus-hy/sememes_codriven_text_matching).
## 2 Related works
In recent years, with the rapid development of machine learning, a large number of methods have been proposed to solve the problem of text consistency recognition. In terms of semantic information acquisition, the classical short text matching model DSSM [2] solved the problem of dictionary explosion present in LSA (latent semantic analysis), LDA (latent Dirichlet allocation) and other methods, but also loses context information due to its use of the bag-of-words model. The ESIM [5] model, proposed in 2016, comprehensively utilizes BiLSTM and the attention mechanism, and was the first to perform interaction between sentence pairs during local inference. The DIIN [6] model, proposed in 2018, uses CNN and LSTM for feature extraction, but the authors use both word vectors and local vectors in the input layer, add extra syntactic features, and use DenseNet for feature extraction. The DRCN model, proposed in 2018, borrows the dense connection operation from DenseNet [7] for image recognition: it retains the most original information of the text through dense connections to the RNN, continuously adds interaction information to the matrix vectors through multiple iterations, and finally produces the output via a fully connected layer. The KIM [8] model, proposed in 2018, uses the external knowledge base WordNet [9] to infer the logical relationship between sentence pairs and embeds external prior knowledge into the similarity matrix. In the MwAN [3] model, proposed in 2018, the authors use a variety of attention mechanisms (concatenation, bilinear, dot product, subtraction) to fully capture the relationship between sentence pairs; finally, the multiple results are weighted, combined, and passed through GRU and fully connected layers to output the final probability.
In terms of sentence structure, CT-LSTM [10], proposed in 2015, introduces a tree-structured LSTM to address the fact that a standard LSTM cannot extract the structural information of sentences, and also discusses the long-sequence dependency problem. Different from the commonly used sequential RNN modeling, it uses the dependency relations of sentences as the input of the LSTM, which also offers some inspiration for future research.
With the proposal of the BERT [11] model in 2018, a wave of pre-training models swept the whole NLP field and ranked among the top of the major NLP leaderboards. BERT is built from Transformer encoder blocks, whose basic component is multi-head attention. It is a model built purely with attention, which can solve the long-distance dependency problem in text sequences, i.e. the attention mechanism is capable of modeling and remembering semantic context over longer distances. The advantage of the BERT model is that it can learn more grammatical and semantic information from large corpora, making the output word vectors more representative; the larger number of parameters improves its expressive ability and makes it perform well in various downstream tasks. In order to accelerate the training of pre-trained models and to address their high hardware requirements on consumer GPUs, Tim Dettmers [12] proposed LLM.int8() for Transformers, which enables ordinary consumer GPUs to use very large models without sacrificing performance. In addition to Transformers, Hanxiao Liu [13] proposed another simple, attention-free architecture, gMLP, which reaches the same level as Transformers on some pre-training metrics and even outperforms Transformers in some downstream tasks.
distance. The advantage of BERT model is that it can learn more grammatical and semantic information from large corpus, making the output word vector more representative, the larger number of parameters improve the expressive ability and make it perform well in various downstream tasks. In order to accelerate the training speed of the pre-training models and to tackle the high hardware requirements on consumer GPUs, Tim Dettmers [12] proposed LLM.Int8() for Transformer, which enables ordinary consumer GPUs to use very large models without sacrificing performance. In addition to Transformers, Hanxiao [13] proposed another simple, attention independent architecture, gMLP, which achieves the same level as Transformers in some indicators of pre-training, and even outperforms Transformers in some downstream tasks.
Due to the polysemy of lexical words, text matching presents certain difficulties. Different words often express the same meaning, such as "China" and "Huaxia", which are semantically consistent but unrelated at the character level. In order to solve this problem, many researchers choose to use part-of-speech, dependency syntax and other information to calculate similarity. For example, Yan [14] and others tried to label text with parts of speech, keeping only nouns, verbs and adjectives. They obtained word pairs by combining dependency syntax analysis. With PageRank [15] and degree centrality as indicators, they established a grammar network over a large amount of text, and proposed a text similarity calculation method combining syntactic relations and lexical semantics. Yan [16] proposed a discourse-level automatic text generation model based on topical constraints, where the synonymy of the keyword set is used to generate multiple article topic plans. Lin [17] introduced a concept vector space and represented each document as a set of concept words to build the vector space, then calculated semantic similarity through cosine similarity, which performs better than the bag-of-words (BOW) model with Word2Vec [18].
The LET model proposed by Boer [19] in 2021 utilizes HowNet for text entailment recognition. They transform the initial vectors of all the sememes of each lexical word with graph attention, then fuse the sememe vectors of each word through attention pooling, and obtain the final word vector by integrating GRU and BERT word vectors. Lexical words are often ambiguous, and only a few relevant sememes are essential to identify the correct meaning; this can lead to sememe vectors that do not match the actual sentence, resulting in redundant semantic information. In this paper, the sememes are filtered in advance before being incorporated into the interaction matrix, to avoid adding redundant information.
## 3 Research Method
This section mainly introduces the co-driven text semantic consistency recognition model based on the fusion of Transformer and HowNet sememes knowledge, and analyzes the model structures and their functions. The proposed model structure is shown in Figure 1, which is divided into 6 layers, namely the Transformer encoding layer, the attention layer, the BiLSTM layer, the pooling layer, the fully connected layer and the prediction layer.
### The Transformer Encoding Layer
The role of the encoding layer is to model the sequential text to obtain deep semantic information through the neural network. Instead of utilizing the commonly used neural networks such as CNN, LSTM, etc., we exploit Transformer encoder architecture for sentence pair encoding, which is mainly composed of multi-head attention modules and can alleviate the problem of gradient vanishing. The multi-head attention mechanism is formalized as in Eq.(1) to Eq.(4):
\[Q=\mathrm{W}^{Q}X \tag{1}\] \[K=\mathrm{W}^{K}X \tag{2}\] \[V=\mathrm{W}^{V}X \tag{3}\]
\[\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}} \right)V \tag{4}\]
where \(X\) is the input sentence, \(W^{Q}\), \(W^{K}\), \(W^{V}\in R^{d_{\mathrm{model}}\times d_{k}}\) are weight matrices, and \(Q\), \(K\), \(V\) are the query, key and value embeddings of \(X\), respectively; i.e., the appropriate keys are queried using \(Q\) and \(K\), and the corresponding values \(V\) are then selected via the softmax.
Figure 1: Model structure.
In addition, the absolute position encodings are calculated as in Eq.(5) and Eq.(6).
\[PE_{(p,2m)}=\sin\left(\frac{p}{10000^{2m/d}}\right) \tag{5}\]
\[PE_{(p,2m+1)}=\cos\left(\frac{p}{10000^{2m/d}}\right) \tag{6}\]
where \(d\) denotes the dimension of the word embedding, \(p\) denotes the position of the word in the sentence, and \(2m\) and \(2m+1\) index the even and odd embedding dimensions.
### The Attention Layer
The attention layer is an important component of text consistency recognition models, with the advantages of being fast, effective and lightweight. Various types of attention mechanisms have been proposed in recent years, such as soft-attention, hard attention, self-attention, etc. Haili [20] also proposed an attention mechanism which focuses on fine-grained sentiments. This paper adopts the commonly used soft-attention mechanism, but incorporates the semantic matrix generated from HowNet sememes knowledge, balanced by a trainable weight \(\gamma\).
HowNet is a lexical sememes knowledge base for Chinese and English, which discloses the semantic relationships between concepts and their sememes. This paper mainly uses HowNet to obtain all the lexical sememes corresponding to the words in the sentence pairs. Namely, if two words share the same sememe, the value of their corresponding position in the HowNet sememes matrix is set to 1, otherwise it is set to 0. Figure 2 shows an example of the sememes interaction calculation for a sentence pair.
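As an illustration of this construction (not the original implementation), the sketch below builds the binary sememes matrix \(M\) for a tokenized sentence pair. The helper `get_sememes`, which returns the set of HowNet sememes of a word, is an assumption standing in for a lookup such as the OpenHowNet toolkit; the toy dictionary at the end exists only to make the snippet runnable.

```python
import torch

def sememe_matrix(words_p, words_h, get_sememes):
    """Binary HowNet sememes interaction matrix M of size (l_p, l_h).

    M[i, j] = 1 if the i-th word of P and the j-th word of H share at least one sememe.
    """
    sememes_p = [get_sememes(w) for w in words_p]
    sememes_h = [get_sememes(w) for w in words_h]
    M = torch.zeros(len(words_p), len(words_h))
    for i, sp in enumerate(sememes_p):
        for j, sh in enumerate(sememes_h):
            if sp & sh:            # non-empty intersection of the two sememe sets
                M[i, j] = 1.0
    return M

# Toy usage with a hand-written lookup standing in for HowNet.
toy_hownet = {"bank": {"institution", "money"}, "loan": {"money", "borrow"}, "river": {"water"}}
M = sememe_matrix(["bank", "river"], ["loan"], lambda w: toy_hownet.get(w, set()))
```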
When training from scratch, the input tokens are vectorized via random initialization; with pre-training, the word embeddings of BERT are used. The generation process of the attention matrix is formalized in Eq.(9):
\[e=PH^{T}+\gamma\cdot M \tag{9}\]
where \(\gamma\) is a trainable parameter. The attention matrix \(e\) not only integrates the structural and semantic information between the sentences, but also captures the synonymous and polysemous relationships between word pairs across the sentences. The heat map of the matrix change is shown in Figure 3. After injecting the sememes information, the weights at some intersection positions increase, which indicates that those positions reflect the key features of semantic consistency between the two sentences. After incorporating the attention matrix, the soft-attention is calculated as follows:
\[\hat{P}_{i}=\sum_{j=1}^{l_{h}}\frac{\exp\left(e_{ij}\right)}{\sum_{k=1}^{l_{h}}\exp\left(e_{ik}\right)}H_{tf,j},\quad\forall i\in\left[1,2,\cdots,l_{p}\right] \tag{10}\]
\[\hat{H}_{j}=\sum_{i=1}^{l_{p}}\frac{\exp\left(e_{ij}\right)}{\sum_{k=1}^{l_{p}}\exp\left(e_{kj}\right)}P_{tf,i},\quad\forall j\in\left[1,2,\cdots,l_{h}\right] \tag{11}\]
In Eq.(10) and Eq.(11), \(P_{tf}\) and \(H_{tf}\) are the embedding matrices of the sentence pair after encoding via the Transformer, \(l_{p}\) and \(l_{h}\) are the lengths of the two sentences, and \(\hat{P}\) and \(\hat{H}\) are the outputs of the soft-attention module.
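The following PyTorch sketch (an illustration, not the released code) assembles Eqs. (9)–(11), reading them as the standard cross-sentence soft-attention; the tensor shapes and the toy inputs are assumptions.

```python
import torch
import torch.nn.functional as F

def sememe_soft_attention(P_tf, H_tf, M, gamma):
    """Sememes-aware interaction matrix (Eq. 9) and soft-attention (Eqs. 10-11).

    P_tf : (l_p, d) Transformer encodings of sentence P
    H_tf : (l_h, d) Transformer encodings of sentence H
    M    : (l_p, l_h) binary HowNet sememes matrix
    gamma: trainable scalar balancing semantic and sememes information
    """
    e = P_tf @ H_tf.t() + gamma * M              # Eq. (9)
    P_hat = F.softmax(e, dim=1) @ H_tf           # Eq. (10): each P token attends over H
    H_hat = F.softmax(e, dim=0).t() @ P_tf       # Eq. (11): each H token attends over P
    return P_hat, H_hat

# Toy usage.
P_tf, H_tf = torch.randn(5, 16), torch.randn(7, 16)
M = torch.zeros(5, 7)
M[0, 2] = 1.0
gamma = torch.nn.Parameter(torch.tensor(1.0))
P_hat, H_hat = sememe_soft_attention(P_tf, H_tf, M, gamma)
```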
Figure 2: Analysis of HowNet semantics.
### The BiLSTM Layer
This layer is used to process the output of Transformer layer after the soft-attention mechanism, i.e. \(\hat{P}\) and \(\hat{H}\). Through the Bidirectional Long Short-Term Memory (BiLSTM), the forward encoding and backward encoding are concatenated to obtain the context information.
The output of the Long Short-Term Memory (LSTM) network is as follows:
\[P_{bi-lstm}=\text{BiLSTM}(\hat{P}) \tag{12}\]
where \(\hat{P}\) refers to the output of the sentence \(P\) via the soft-attention mechanism, and \(P_{bi-lstm}\) refers to the encoding result of \(\hat{P}\) via the BiLSTM module.
### Average Pooling and Max-Pooling
In order to fuse the text information from the Transformer and the BiLSTM, the model splices multiple inputs through max-pooling and average pooling. The purpose of this layer is to transform the representation of each sentence from a sequence of token vectors into a fixed-length sentence vector, facilitating the input to the subsequent fully connected layer:
\[P_{o}=\left[P_{tf};P_{bi-lstm}\right] \tag{13}\] \[P_{rep}=\left[\text{MaxPool}\left(P_{o}\right);\text{AvgPool} \left(P_{o}\right)\right] \tag{14}\]
where \(P_{tf}\) denotes the output of Transformer encoding module, \(P_{bi-lstm}\) denotes the output of BiLSTM module, \(P_{O}\) is the concatenation of \(P_{tf}\) and \(P_{bi-lstm}\), \(P_{rep}\) denotes the result after concatenation of max-pooling and average pooling output of \(P_{O}\).
### The Fully Connected Layer
After obtaining the complete sentence vector representations of the sentence pair, i.e. \(P_{rep}\) and \(H_{rep}\), the commonly used approach is to directly concatenate them and feed them
Figure 3: Changes of attention matrix.
into a multi-layer feed-forward neural network to obtain the result. When concatenating, the information in the HowNet matrix is also taken into account: the sums of the HowNet matrix along its two dimensions are obtained as \(HN_{row}\) (sum over rows) and \(HN_{col}\) (sum over columns). The final input \(H\) of the feed-forward neural network is obtained via concatenation with \(P_{rep}\) and \(H_{rep}\).
\[\begin{split} HN_{\text{row}}=\text{sum}(M,axis=0)\\ HN_{\text{col}}=\text{sum}(M,axis=1)\\ H=\text{concat}\left(P_{rep};H_{\text{rep}};P_{rep}-H_{rep}; HN_{\text{col}}\;;HN_{\text{row}}\right)\end{split} \tag{15}\]
where \(sum(M,axis)\) denotes summation of \(M\) along the axis dimension. For example, \(HN_{row}\) denotes the result of the sum of the HowNet matrix along the first dimension, that is, the HowNet information corresponding to sentence \(P\). Through vector concatenation, the corresponding semantic information of the two sentences is obtained, in which, \(P_{rep}-H_{rep}\) also represents the difference between the two sentence vectors.
### The Prediction Layer
After obtaining the final sentence vector representation of the sentence pairs, the model uses a two-layer fully connected neural network and a softmax layer to classify the sentence matching results into 1(positive) or 0(negative) as in Eq.(16). The cross entropy loss function is calculated as in Eq.(17).
\[p=softmax(FFN(FFN(H))) \tag{16}\]
\[Loss=\frac{1}{N}\sum_{i=1}^{N}-\left[y_{i}\times\log\left(p_{i}\right)+\left(1-y_{i}\right)\times\log\left(1-p_{i}\right)\right] \tag{17}\]
where \(y_{i}\) represents the label of the \(i\)-th sample \((P_{i},H_{i})\): the label of a positive sample is 1, and 0 otherwise. \(p_{i}\) indicates the probability that the sample is predicted to be positive.
In addition to the commonly used cross-entropy loss, we also tried the CoSent loss function, which forces the similarity of positive sample pairs to be greater than that of negative pairs and pushes positive and negative samples as far apart as possible in the vector space. Experiments show that the CoSent loss benefits pre-trained methods such as BERT and Sentence-BERT [21] by reducing their convergence time. For the proposed model without pre-training, however, the CoSent loss performs worse than the cross-entropy loss.
In the training phase, we used MultiStepLR to dynamically adjust the learning rate, which multiplies the learning rate of each parameter group by a decay factor of 0.5 once the epoch number reaches one of the milestones (i.e., the 20th, 50th, 80th, 100th, and 150th epochs). By dynamically adjusting the learning rate as training progresses, the convergence speed of the model increases; the resulting decay trend is shown in Figure 4.
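A minimal PyTorch sketch of this schedule is shown below; the stand-in model and the initial learning rate of 1e-3 are illustrative assumptions, while the milestones and decay factor follow the text.

```python
import torch

model = torch.nn.Linear(10, 2)                        # stand-in for the actual matching model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[20, 50, 80, 100, 150], gamma=0.5)

for epoch in range(200):
    # ... forward pass, loss.backward() and the real optimizer.step() would go here ...
    optimizer.step()                                   # placeholder step so the scheduler is valid
    scheduler.step()                                   # halves the LR at each milestone epoch
```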
## 4 Experiments And Analysis
### Datasets
In order to verify the effectiveness of the proposed text consistency recognition model driven by both Transformer and HowNet, we conduct experiments on three open datasets: PAWSX, AFQMC, and BQ Corpus.
The PAWSX dataset is a multilingual paraphrase pair dataset released by Google. It is characterized by highly overlapping vocabulary, which helps to further evaluate the model's judgment on difficult samples and tests its ability to discriminate between highly similar sentences. The AFQMC dataset is an Ant Financial question similarity dataset containing 34,334 training, 4,316 validation, and 3,861 test examples. BQ Corpus is a question matching dataset in the banking and finance domain, consisting of question pairs extracted from one year of online banking system logs. It is currently the largest question matching dataset in the banking field, with 100,000 training, 10,000 validation, and 10,000 test examples.
### Experimental Setup
The experiments in this paper are conducted on a four-GPU server with RTX 2080 Ti cards. The parameters and software used for model training are shown in Table 3.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Dataset Name & Training Set Size & Validation Set Size & Test Set Size \\ \hline PAWSX & 49401 & 2000 & 2000 \\ AFQMC & 34334 & 4316 & 3861 \\ BQ Corpus & 100000 & 10000 & 10000 \\ \hline \end{tabular}
\end{table}
Table 1: Dataset Size
Figure 4: The decay trend of learning rate.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Sentence1 & Sentence2 & label \\ \hline (Does Wechat consumption count) & (How much money is still unpaid) & 0(negative) \\ (What are the good products next week) & (What financial products are available in January) & 1(positive) \\ (May I check the bill) & (You can check the bill) & 0(negative) \\ \hline \end{tabular}
\end{table}
Table 2: Example sentence pairs from the BQ Corpus (English translations of the original Chinese questions).
### Comparison of experimental results
In order to verify the actual effect of the proposed model, three classical text matching models, DSSM, MwAN, and DRCN, are selected for comparison in the setting without pre-training. For the pre-trained setting, we choose BERT-wwm-ext, BERT, and Baidu ERNIE.
The selected data sets are PAWSX, AFQMC and BQ Corpus. In order to ensure the unity of the experiment, all models use the same Jieba vocabulary for the same dataset, and the indicators of comparison are ACC and F1-score.
It can be seen from Table 4 that the accuracy of the proposed model on the BQ dataset is higher than that of the other models. As shown in Table 5, from the perspective of the datasets, the results of the three models on AFQMC are relatively poor. Preliminary analysis shows that the language standardization of the sample data is poor, with issues such as incomplete
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Model name & Pre-trained & Acc & F1 \\ \hline DSSM & \(\bigtimes\) & 77.12 & 76.47 \\ MwAN & \(\bigtimes\) & 73.99 & 73.29 \\ DRCN & \(\bigtimes\) & 74.65 & 76.02 \\ Ours & \(\bigtimes\) & **78.81** & **76.62** \\ \hline Improvement & -- & +2.19\% & +1.96\% \\ \hline BERT-wwm-ext & \(\bigvee\) & 84.71 & 83.94 \\ BERT & \(\bigvee\) & 84.50 & 84.00 \\ ERNIE & \(\bigvee\) & 84.67 & 84.20 \\ Ours-BERT & \(\bigvee\) & **84.82** & **84.33** \\ \hline Improvement & -- & +0.177\% & +0.464\% \\ \hline \end{tabular}
\end{table}
Table 4: Experimental results on the BQ Dataset
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Model name & Pre-trained & Acc & F1 \\ \hline DSSM & \(\bigtimes\) & 57.02 & 30.75 \\ MwAN & \(\bigtimes\) & 65.43 & 28.63 \\ DRCN & \(\bigtimes\) & 66.05 & 40.60 \\ Ours & \(\bigtimes\) & **66.62** & **42.93** \\ \hline Improvement & -- & +0.86\% & +5.7\% \\ \hline BERT-wwm-ext & \(\bigvee\) & 81.76 & 80.62 \\ BERT & \(\bigvee\) & 81.43 & 79.77 \\ ERNIE & \(\bigvee\) & 81.54 & 80.81 \\ Ours-BERT & \(\bigvee\) & **81.84** & **81.93** \\ \hline Improvement & -- & +0.097\% & +1.38\% \\ \hline \end{tabular}
\end{table}
Table 5: Experimental Results on the AFQMC Dataset
sentences and non-standard expressions. The ablation results further show that incorporating HowNet sememe information can significantly improve the sensitivity of the model to polysemy and synonymy, and thus significantly improve the performance of the model.
According to the results in Table 8, HowNet effectively improves performance both for texts shorter than 15 characters and for texts between 15 and 50 characters, and it extracts more useful semantic information from longer texts, leading to better results. Based on experiments with the longest text segments in the dataset, the maximum text length this model can handle while maintaining performance is 50 characters.
From the results in Table 9, the more Transformer layers, the better the model performs. Stacking Transformer encoding layers can improve the performance of the model to a certain extent, but at the same time the number of
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Tokenizer & HowNet & Acc & F1 \\ \hline \multirow{2}{*}{Jieba} & \(\bigvee\) & **0.7881** & **0.7662** \\ & \(\bigtimes\) & 0.7783 & 0.7624 \\ \hline \multirow{2}{*}{PKUseg} & \(\bigvee\) & **0.7869** & **0.7653** \\ & \(\bigtimes\) & 0.7792 & 0.7611 \\ \hline \multirow{2}{*}{HanLP} & \(\bigvee\) & **0.7853** & **0.7599** \\ & \(\bigtimes\) & 0.7735 & 0.7512 \\ \hline \end{tabular}
\end{table}
Table 7: Semantic consistency recognition results on the BQ dataset using different tokenizers with or without HowNet.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Sequence length & HowNet & Acc & F1 \\ \hline
1\(\sim\)15 & \(\bigvee\) & **0.7869** & **0.7662** \\ & \(\bigtimes\) & 0.7763 & 0.7521 \\ \hline
15\(\sim\)50 & \(\bigvee\) & **0.7884** & **0.7684** \\ & \(\bigtimes\) & 0.7792 & 0.7545 \\ \hline \end{tabular}
\end{table}
Table 8: Semantic consistency recognition results on the BQ dataset for different sequence length with or without HowNet.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Number of Transformer layers & Acc & F1 \\ \hline
2 & 0.7433 & 0.7353 \\
4 & 0.7648 & 0.7572 \\
6 & 0.7752 & 0.7731 \\
8 & 0.7853 & 0.7658 \\
10 & 0.7881 & 0.7662 \\ \hline \end{tabular}
\end{table}
Table 9: Semantic consistency recognition results on the BQ dataset with different number of Transformer encoding layers.
parameters and the training time increase significantly, and convergence becomes noticeably slower. We take the model with 6 encoding layers as the optimal model; it has 16M parameters, while the DRCN model, which performs best among the models without pre-training, has 19M parameters. During training, the trainable parameters change as follows:
As can be seen from Figure 5, the weight of the HowNet sememe information matrix within the attention matrix gradually increases as the number of iterations grows, which indicates that the sememe information derived from the raw text plays a positive role in improving the model.
## 5 Conclusion
This paper proposed a new text semantic consistency recognition model based on Transformer and HowNet, which uses HowNet sememe knowledge to tackle the synonymy and polysemy problems in semantic matching of sentence pairs. Experiments on the BQ, AFQMC, and PAWSX datasets show that the proposed method improves over models without pre-training, including DSSM, MwAN, and DRCN, as well as over pre-trained models such as ERNIE. Stacking Transformer layers can effectively improve semantic consistency recognition, at the cost of introducing more parameters. In contrast, the model proposed in this paper has fewer parameters and better performance. Compared with LET, which also uses HowNet as an external knowledge base, our method filters out irrelevant sememe information while utilizing the relevant sememes, avoiding the impact of redundant sememes on the results, which makes it more accurate and more intuitive. Experiments show that the proposed model can effectively improve the accuracy of consistency recognition by using HowNet sememe knowledge,
Figure 5: Variation diagram of trainable parameters
and can also adapt to long-text scenarios of up to 50 characters. Clear improvements are observed both for the lightweight model and for the pre-trained model.
In the future, we will study in depth the knowledge limitations of language expression, i.e., sentences that are correct in morphology and syntax but wrong with respect to common sense, such as "the sun travels around the earth". More work is needed to supplement external knowledge, and we will continue to utilize commonsense knowledge to further improve the semantic understanding of raw text.
|
2304.14498 | MWaste: A Deep Learning Approach to Manage Household Waste | While computer vision methods have been shown to be effective in classifying garbage
into recycling categories for waste processing, existing methods are costly,
imprecise, and unclear. To tackle this issue, we introduce MWaste, a mobile
application that uses computer vision and deep learning techniques to classify
waste materials as trash, plastic, paper, metal, glass or cardboard. Its
effectiveness was tested on various neural network architectures and real-world
images, achieving an average precision of 92\% on the test set. This app can
help combat climate change by enabling efficient waste processing and reducing
the generation of greenhouse gases caused by incorrect waste disposal. | Suman Kunwar | 2023-04-02T16:56:49Z | http://arxiv.org/abs/2304.14498v1 | # MWaste: A Deep Learning Approach to Manage Household Waste
###### Abstract
While computer vision methods have been shown to be effective at classifying garbage into recycling categories for waste processing, existing methods are costly, imprecise, and unclear. To tackle this issue, we introduce MWaste, a mobile application that uses computer vision and deep learning techniques to classify waste materials as trash, plastic, paper, metal, glass, or cardboard. Its effectiveness was tested on various neural network architectures and real-world images, achieving an average precision of 92% on the test set. This app can help combat climate change by enabling efficient waste processing and reducing the greenhouse gas emissions caused by incorrect waste disposal.
Waste Classification, Deep Learning, Waste Management
## I Introduction
Waste is a growing global concern: it is on the rise due to the growth of urban areas and population, with predictions showing a potential increase of 70% by 2050 if no measures are taken to address it [1]. The increasing complexity of waste composition and the absence of a standardized waste classification system make waste identification challenging, resulting in disparities in waste generation and management practices across different regions [2][3].
Comprehending household solid waste management practices is essential for the progress of integrated solid waste management [4]. Identifying waste plays a pivotal role in the waste management process as it enables facilities to manage, recycle, and diminish waste suitably, while ensuring compliance with regulations and monitoring their advancement over time.
Various studies have applied deep learning models to efficient waste sorting and management, which can contribute to a more sustainable environment. Models such as RWNet [5], Garbage Classification Net [6], Faster Region-Based Convolutional Neural Network [7], and ConvoWaste [8] were proposed and evaluated, reporting high accuracy and precision in waste classification. These studies also highlight the importance of accurate waste disposal in fighting climate change and reducing greenhouse gas emissions. Some studies also incorporate IoT [9] and waste grid segmentation mechanisms [10] to classify and segregate waste items in real time.
By integrating machine learning models with mobile devices, waste management efforts can be made more precise, efficient, and effective. One study uses an app with optimized deep learning techniques to instantly classify waste into trash, recycling, and compost, achieving an accuracy of 0.881 on the test set [11].
While this shows potential, benchmarking against other state-of-the-art models is still needed, and the app is limited to classifying waste into three types. In response, we introduce MWaste, a mobile app that utilizes computer vision and deep learning to classify waste materials into trash, plastic, paper, metal, glass, or cardboard. The app also provides users with suggestions on how to manage waste in a straightforward and engaging way.
The app is tested on various neural network architectures and real-world images, achieving the highest precision of 92% on the test set. This app can function with or without an internet connection and rewards users by mapping the carbon footprint of the waste they manage. The app's potential to facilitate efficient waste processing and minimize greenhouse gas emissions that arise from improper waste disposal can help combat climate change. Additionally, the app can furnish valuable data for tracking the waste managed and the carbon footprint saved.
The rest of this paper is structured as follows: Section II explains the system architecture of MWaste. Section III and IV detail the training and experimental evaluations. Finally, Section V summarizes the findings of this research.
## II Methods
This section discusses the architecture of the system and the flow of the process.
### _System Architecture_
Classifying waste with machine learning is challenging because the material properties that determine recyclability or compostability are hard to infer from images alone. In addition, waste can take on a wide variety of shapes and forms, which the model must handle, and the recyclability of an item depends on the capabilities of the local recycling center, which the app must also take into account.
Taking these considerations into account, the app is designed to collect feedback from users and to operate smoothly with or without an internet connection. A waste image is obtained from the gallery or the camera and passed through the waste classification model, which is trained to categorize the waste.
The classification model is the result of training a specific CNN model on a dataset of labeled images. Several state-of-the-art convolutional neural network methods is tested in this research, which included Inception V3 [12], MobileNet V2 [13], Inception Resnet V2 [14], Resnet 50 [15], Mobile Net [16], and Xception [17].
The trained model is then converted into a TensorFlow Lite model, since TensorFlow Lite models are highly optimized, efficient, and versatile, making them ideal for running real-time predictions on mobile devices [18]. Once the waste is identified, the app calculates the carbon emissions associated with the material and provides waste management recommendations. In case of misclassification, the user can submit the waste image for further analysis. Managing waste earns reward points, and the amount of carbon footprint saved is also tracked. An internet connection is required to submit wrongly predicted waste images and to sync accumulated points.
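A minimal sketch of such a conversion with the TensorFlow Lite converter is shown below; the backbone choice, quantization option, and file name are illustrative assumptions.

```python
import tensorflow as tf

# Stand-in for the trained Keras classifier (e.g., a fine-tuned MobileNetV2 with six classes)
model = tf.keras.applications.MobileNetV2(weights=None, classes=6)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization for mobile deployment
tflite_model = converter.convert()

with open("mwaste_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```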
## III Training
This section describes the training procedure and parameter settings used in this research.
### _Datasets_
For this research, the publicly available trashnet dataset [19] is utilized, consisting of 2,527 images across six classes: glass, paper, cardboard, plastic, metal, and trash. These images were captured using Apple iPhone 7 Plus, Apple iPhone 5S, and Apple iPhone SE, with the objects placed on a white posterboard in sunlight or room lighting. The dataset was annotated by experts. To ensure robustness, 60% of the images were used for training, 13% for testing, and 17% for validation.
### _Procedure_
The gathered dataset is processed through the different models while keeping all parameters constant, and the outcomes are then carefully analyzed. Categorical cross-entropy is employed to gauge the loss, as it is suitable for multiclass problems [20]. Meanwhile, accuracy serves as a metric, and Adam is the optimizer of choice, given that it applies momentum and adaptive gradients to compute adaptive learning rates for each parameter [21].
Global average pooling is added to create one feature map per category in the final convolutional layer for the classification task [22]. Three dense layers are then employed to learn complex functions and improve the accuracy of classification. To avoid overfitting, dropout is added as a regularization technique [23]. Softmax is used as an activation function to convert the output values into probabilities [24].
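A minimal Keras sketch of this classification head is given below; the backbone, input size, dense-layer widths, and dropout rate are illustrative assumptions, while the global average pooling, three dense layers, dropout, softmax output, Adam optimizer, and categorical cross-entropy loss follow the text.

```python
import tensorflow as tf

def build_classifier(backbone_name="MobileNetV2", num_classes=6, dropout_rate=0.2):
    """Attach the head described above (GAP -> dense layers -> dropout -> softmax)
    to an ImageNet-pretrained convolutional backbone."""
    backbone = getattr(tf.keras.applications, backbone_name)(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    for units in (256, 128, 64):                   # three dense layers; widths are illustrative
        x = tf.keras.layers.Dense(units, activation="relu")(x)
    x = tf.keras.layers.Dropout(dropout_rate)(x)   # regularization against overfitting
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```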
## IV Evaluation
In this section, different evaluation metrics are discussed and the results are compared based on them.
### _Evaluation Metrics_
The evaluation measures can be used to explain the performance of various models. The study employs the Accuracy Score and F1 Score as evaluation metrics.
#### Iv-A1 Accuracy Score
Classification accuracy is defined as the percentage of accurate predictions out of the total number of samples analyzed. To calculate accuracy in classification, the number of correct predictions is divided by the total number of predictions, and the resulting fraction is expressed as a percentage by multiplying it by 100 [25]. The formula for the accuracy score is as follows:
\[Accuracy=\frac{TP+TN}{TP+TN+FP+FN} \tag{1}\]
#### Iv-A2 F1 Score
Comparing two models with contrasting precision and recall, such as one with poor precision but strong recall, can be challenging: improving precision may have an adverse effect on recall, and vice versa, which can result in confusion [26]. Hence, the F1-score is used to combine the two quantities and serves as a valuable metric for evaluating recall and precision simultaneously.
Fig. 1: System Architecture of MWaste
The F1-Score is employed when dealing with imbalanced class data situations [27]. As most real-world classification problems involve uneven case distributions, the F1-score is a more suitable metric for evaluating the model compared to accuracy.
\[F1=\frac{2*Precision*Recall}{Precision+Recall} \tag{2}\]
### _Model Evaluation_
Models are evaluated with the same settings, and their outputs are measured using the evaluation metrics described above: accuracy score and F1-score.
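A minimal sketch of computing these metrics with scikit-learn is shown below; the toy labels and the macro averaging over the six classes are illustrative assumptions.

```python
from sklearn.metrics import accuracy_score, f1_score

# y_true / y_pred stand in for the test-set labels and a model's predictions
y_true = [0, 2, 5, 1, 1, 3]
y_pred = [0, 2, 4, 1, 1, 3]

acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average="macro")   # macro-average across the six waste classes
print(f"Accuracy = {acc:.4f}, F1 = {f1:.4f}")
```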
After comparing the models as shown in Table I, it can be seen that InceptionResNetV2 and Xception have higher accuracy, but the loss is higher for InceptionResNetV2 and Inception V3 models. Figure 3 illustrates the classification result of a waste material from the given training set.
The accuracy and loss of each model during training are shown in Figure 4.
Fig. 4: Training Loss and Accuracy graph of (a) MobileNet (b) Inception V3 (c) InceptionResNet V2 (d) ResNet50 (e) MobileNet V2 (f) Xception models with given datasets
Fig. 3: Classification output in MWaste App
Fig. 2: Datasets comprising the count of classes
## V Conclusion and Future Work
This study presents a mobile application that utilizes deep learning techniques to classify waste in real-time. The app categorizes waste into six groups, including plastic, paper, metal, glass, cardboard, and trash, and is publicly available with a trained model1. The app incorporates gamification strategies, such as a leaderboard based on waste management points, to motivate users to dispose of waste properly.
Footnote 1: [https://github.com/summ2u/deep-waste-app](https://github.com/summ2u/deep-waste-app)
The team plans to improve the accuracy of the classification system, form partnerships with local recycling companies, and expand the dataset to raise awareness of environmental impacts and reduce incorrect waste disposal.
## VI Acknowledgements
My heartfelt appreciation goes out to Gary Thung and Mindy Yang for sharing the TrashNet dataset on Github for public use. This dataset has proven to be an invaluable asset for my research or project on waste management and classification, and I am deeply thankful for their hard work in gathering and disseminating this information to a larger audience.
|
2308.05640 | A Comparative Visual Analytics Framework for Evaluating Evolutionary
Processes in Multi-objective Optimization | Evolutionary multi-objective optimization (EMO) algorithms have been
demonstrated to be effective in solving multi-criteria decision-making
problems. In real-world applications, analysts often employ several algorithms
concurrently and compare their solution sets to gain insight into the
characteristics of different algorithms and explore a broader range of feasible
solutions. However, EMO algorithms are typically treated as black boxes,
leading to difficulties in performing detailed analysis and comparisons between
the internal evolutionary processes. Inspired by the successful application of
visual analytics tools in explainable AI, we argue that interactive
visualization can significantly enhance the comparative analysis between
multiple EMO algorithms. In this paper, we present a visual analytics framework
that enables the exploration and comparison of evolutionary processes in EMO
algorithms. Guided by a literature review and expert interviews, the proposed
framework addresses various analytical tasks and establishes a multi-faceted
visualization design to support the comparative analysis of intermediate
generations in the evolution as well as solution sets. We demonstrate the
effectiveness of our framework through case studies on benchmarking and
real-world multi-objective optimization problems to elucidate how analysts can
leverage our framework to inspect and compare diverse algorithms. | Yansong Huang, Zherui Zhang, Ao Jiao, Yuxin Ma, Ran Cheng | 2023-08-10T15:32:46Z | http://arxiv.org/abs/2308.05640v1 | A Comparative Visual Analytics Framework for Evaluating Evolutionary Processes in Multi-objective Optimization
###### Abstract
Evolutionary multi-objective optimization (EMO) algorithms have been demonstrated to be effective in solving multi-criteria decision-making problems. In real-world applications, analysts often employ several algorithms concurrently and compare their solution sets to gain insight into the characteristics of different algorithms and explore a broader range of feasible solutions. However, EMO algorithms are typically treated as black boxes, leading to difficulties in performing detailed analysis and comparisons between the internal evolutionary processes. Inspired by the successful application of visual analytics tools in explainable AI, we argue that interactive visualization can significantly enhance the comparative analysis between multiple EMO algorithms. In this paper, we present a visual analytics framework that enables the exploration and comparison of evolutionary processes in EMO algorithms. Guided by a literature review and expert interviews, the proposed framework addresses various analytical tasks and establishes a multi-faceted visualization design to support the comparative analysis of intermediate generations in the evolution as well as solution sets. We demonstrate the effectiveness of our framework through case studies on benchmarking and real-world multi-objective optimization problems to elucidate how analysts can leverage our framework to inspect and compare diverse algorithms.
Visual analytics, evolutionary multi-objective optimization
## 1 Introduction
Decision making under multiple criteria often arises in real-world optimization problems. Unlike single-aspect decision tasks where only one target must be satisfied, multiple aspects should be considered simultaneously to obtain optimal solutions with trade-offs between different, often conflicting objectives. Consider water reservoir systems as an example [49]: operators aim to design operation policies that maximize power production, but blindly optimizing for power production may negatively impact irrigation or even increase the risk of flooding. To solve such multi-criteria decision-making problems, various types of multi-objective optimization algorithms have been developed in recent decades. The underlying mechanism of these algorithms is to search for a solution set in which no solution dominates another, i.e., no solution is better than another on all objectives. Owing to this non-dominance property, decision makers can choose among a variety of feasible solutions to meet incoming requirements.
Fig. 1: Our visual analytics framework comprises three primary modules, namely the Algorithm-level Comparison module (V1), Evolution-level Exploration module (V2-4), and Solution-level Inspection module (V5). For the DDMPO2 test problem, the three algorithms show similar rankings and positions in the table and the projection result, respectively (A1), together with similar distributions on all four quality measures and the solution sets with the best IGD values in the respective algorithms (A2 and B1). The three \(k\)NN graphs display alternative relationships between generations from the same or different algorithms (B2, B2.1, B2.2).
than another solution on all objectives. Owing to the non-dominance nature in the solution set, decision makers are able to choose between a variety of feasible solutions to meet the incoming requirements.
Among the existing categories of methods, evolutionary multi-objective optimization (EMO) algorithms have been demonstrated as one of the most effective approaches to find an optimal solution set [55]. With a proper design of the evolution strategy, diversity and accuracy of solutions can be achieved simultaneously in a single run, providing wide-ranging decision choices for experts or stakeholders in real-world applications. Despite the fact that EMO algorithms are widely-used in many applications [2, 38, 51], a key challenge in evaluating EMO algorithms is to conduct comparative analysis between results from different algorithms [77, 29]. Conventional approaches utilize numerical quality indicators, such as Inverted Generational Distances (IGD) and Hypervolume (HV), to quantify the performance of the algorithms, and these metrics are naturally inherited in identifying the best or worst algorithms in comparison tasks. However, recent surveys [30, 55] emphasized that evaluating and comparing solution sets is a non-trivial task since aggregated measures, such as IGD and HV, are insufficient for characterizing EMO algorithms from multiple aspects including convergence, diversity, and uniformity. In addition, EMO algorithms usually work as a black-box, which could hinder the trustworthiness and in-depth evaluation of the behaviors in the evolutionary processes [59, 70]. Thus, there is an urgent need for incorporating comprehensive assessment in evaluating and comparing solution sets from different algorithms.
Inspired by the success of visual analytics approaches in Explainable AI, we believe that the human-in-the-loop paradigm lends itself well in analyzing solution sets of evolutionary multi-objective optimization algorithms. In this work, we propose a visual analytics framework for comparative analysis of multiple EMO algorithms. The framework follows the algorithm-agnostic approach, allowing for the incorporation of various EMO algorithms as long as they meet the same evolutionary computing protocol. Supported by a multi-faceted visualization scheme, analysts are allowed to compare EMO algorithms at different levels of granularity, ranging from overall performance to individual generations. To gain insights into the evolutionary processes, a nearest-neighbor-based visual design is proposed to reveal the relationships between generations from multiple algorithms, which could benefit domain experts in understanding the underlying evolutionary behaviors of the algorithms. We demonstrate our framework through the widely-adopted DTLZ benchmarking suite and a real-world multi-objective optimization problem. Our contributions include:
* An interactive visual analytics framework for explaining and comparing the evolutionary multi-objective optimization algorithms;
* A suite of visualization and interaction designs that facilitate the exploration and comparison between measures, iterations, and individual solution sets;
* Case studies and interviews on analyzing evolutionary processes in two test problems to demonstrate the effectiveness of the framework.
## 2 Related Work
Our framework focuses on explaining and comparing evolutionary processes in EMO algorithms. In this section, we review the relevant works on visualization and visual analysis techniques in multi-objective optimization and algorithmic models.
### Visualization in Multi-objective Optimization
Visualization has emerged as an effective means for analyzing solutions and attracted significant attention in the past decades [47, 56, 60]. Given the multi-dimensional nature of the decision and objective space, common techniques for visualizing high-dimensional data, including projections [53, 44, 35] and parallel coordinates plots (PCP) [31], have been widely-adopted. Chen et al. [9] propose a visualization design for characterizing Pareto fronts with self-organizing maps (SOM), while later work by Nagar et al. [44] employs an enhanced, interpretable SOM that improves coverage and topographic correctness. Tusar and Filipic [56] formulate the visualization problem of 4-D objective space as a multi-objective optimization problem, where the projected results should preserve shapes, ranges, and distributions of the objective vectors. Building on the 2-D RadViz plot, 3D-RadVis [23] extends the ability to present data distributions of objective vectors by mapping the third dimension to the distances to a hyperplane. Meanwhile, PaletteViz [53] proposes an alternative presentation by stacking multiple RadViz plots representing different layers based on the distances to a core location in the objective space. To address the readability issue of PCPs, Li et al. [31] conducted a study on how PCPs reveal the distribution and quality of a solution set.
Beyond static visualization techniques, interactive visual analysis methods have proven to be effective for exploring solution sets in an intuitive manner. Cibulski et al. [11] conduct a design study on how visualizing Pareto fronts can aid decision-making in the engineering field. They proposed an interactive system called PAVED that adopts a parallel coordinates plot to support the exploration of feasible solutions generated from optimization algorithms. More recent works in route planning [67] and interpretable machine learning [8] integrate the visualization of solution sets for a domain-specific problem into context-aware views, such as city maps with bus routes and multi-dimensional projections of trained models.
While various visualization approaches for solution set analysis have been proposed, there remains an urgent need for inspecting and exploring evolutionary processes, which may not be thoroughly facilitated by basic static visualizations of algorithms' final results. Our work supplements the visualization of solution sets and enables a more effective exploration of the dynamics and behaviors of evolutionary processes.
### Explainable AI
Along with the recent advances in Explainable AI, there has been a considerable amount of research tackling the problem of explaining and diagnosing algorithmic models. Various surveys [3, 14, 19, 28, 37, 73] have systematically summarized the research questions in this field. Here we mainly survey the literature related to visualizing execution processes of algorithms or models as well as model comparison.
**Model Explainability.** Owing to the complexity of the underlying mechanisms, the black-box metaphor is widely-used in literature to describe the difficulties in understanding how a certain model works. Thus, some existing works tend to "open the black box" by exposing and interpreting the running processes of complex models. An early attempt proposed by Tzeng and Ma [57] depicts neurons and weights in an artificial neural network with a node-link-based design in order to show the dependencies between inputs and outputs. Muhlbacher et al. [42] provide a structured summarization of how visual exploration can be involved in an ongoing computational procedure.
Regarding the execution stage involved in the explanation, Wang et al. [66] design CNN Explainer which supports education and inspection of the prediction processes. For analyzing training processes of convolutional neural network (CNN) models, DeepTracker [32] addresses the challenge in visualizing large-scale training log data with a hierarchical down-sampling approach. Besides the CNN classification models, some works discuss the explainability issues in generative networks, such as Liu et al [33] and Kahng et al. [24].
**Model Comparison.** Model comparison is a fundamental and critical task in assessing the performance of different models trained for a specific prediction problem [7, 48, 27]. In addition to selecting the best model or parameter setting, a thorough comparative analysis can also enhance the understanding of the learning and prediction behaviors. For typical classification tasks, comparisons between different classifiers can assist the understanding of the predicted class labels as well as the critical internal structures that affect the labeling process, such as Manifold [76], the "learning-from-disagreement" framework proposed by Wang et al. [61], and the work by Gleicher et al. [17]. In graph learning, works by Pister et al. [46] and Xie et al. [68] focus on comparing important nodes or edges to disclose hidden patterns in network structures. With the outstanding success of deep learning methods in the past decade [22, 28, 62, 10, 72], comparing neural networks becomes essential for disclosing learned knowledge in the complex models, in
cluding DeepCompare [43], CNNComparator [75], VAC-CNN [69]. Alongside the works mentioned above that explicitly compare neural networks, several advanced tasks also require model comparison as a critical component in their workflow [39, 45, 63, 71, 64].
In summary, the visual analytics community has produced plenty of works that address the issue of explainability regarding computation processes in training or prediction stages as well as comparing model inputs and outputs. However, a research gap remains in the visual exploration of evolutionary processes which can help open the black box of EMO algorithms. To this end, PIVE [26] is by far the closest in spirit to our work, which illustrates a per-iteration visualization framework for iterative algorithms in machine learning and optimization. Nevertheless, adapting this framework to the analysis of EMO algorithms requires extra effort in addressing the domain-specific requirements in multi-objective optimization, including multi-aspect measures of the generations and support in understanding relationships between algorithms, evolutions, and solution sets.
## 3 Design Overview
In this section, we illustrate the fundamental components of evolutionary multi-objective optimization algorithms, which constitute the primary area of interest. Research challenges and analytical tasks are then outlined from a literature review and preliminary expert interviews.
### Background
**Definition and Terminology.** Given a decision space \(X\), a multi-objective optimization problem [55, 29] can be characterized as finding the extrema of \(m\) objective functions:
\[\min f(\mathbf{x})=(f_{1}(\mathbf{x}),f_{2}(\mathbf{x}),...,f_{m}(\mathbf{x})) \tag{1}\]
where \(\mathbf{x}=(x_{1},x_{2},...,x_{d})\in X\) is a \(d\)-dimensional vector in the _decision space_, and the space spanned by the objective functions forms an _objective space_. Optimizing one objective function often deteriorates the outputs of other objectives, making it almost impossible to find a single decision vector that minimizes all objectives. To formally represent relationships between solutions, we define _Pareto dominance_ as follows: for two solutions \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), \(\mathbf{x}_{1}\)_dominates_ \(\mathbf{x}_{2}\) (denoted \(\mathbf{x}_{1}\prec\mathbf{x}_{2}\)) if and only if \(f_{i}(\mathbf{x}_{1})\leq f_{i}(\mathbf{x}_{2})\) for all \(m\) objectives and \(f_{i}(\mathbf{x}_{1})<f_{i}(\mathbf{x}_{2})\) for at least one objective. Thus, we expect to find a set of trade-off solutions \(\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime},...,\mathbf{x}_{n}^{\prime}\in S\) where no solution in the set dominates another. All such \(\mathbf{x}_{i}^{\prime}\) in \(S\) are called _Pareto optimal solutions_, and \(S\) is thereafter called a _Pareto set_. Accordingly, the corresponding objective vectors of \(\mathbf{x}_{i}^{\prime}\) in the objective space are called _Pareto optimal (objective) vectors_, and together they form a _Pareto front_. Fig. 2 (A) presents two examples of solution distributions in two- and three-objective problems.
However, the majority of multi-objective optimization algorithms currently available can only offer an approximation to the ideal Pareto front. As a result, in benchmarking problems, a _reference set_ (gray dots in Fig. 2 (A)) is typically provided as a sampled representation of the continuous or discrete _true Pareto front_[54]. Various quality measures have been developed to quantify the proximity of the data distributions between a solution set and the reference set. These measures are then used to evaluate and compare the performance of different algorithms.
**Evolutionary Algorithms in Multi-objective Optimization.** The Evolutionary algorithm is a stochastic search strategy which has been shown to find optimal solutions that converge towards the ideal Pareto front while maintaining diversity in the solutions. Fig. 2 (B) illustrates the typical pipeline of EMO algorithms, which involves the following main steps [78]. Initially, with a given population of decision vectors, a mating pool is created based on the individuals' fitness scores. Variations including recombination and mutation are then applied to the mating pool to create crossover individuals. In the final environmental selection step, the individuals from the original population and the modified mating pool are evaluated to build a new population via a survival testing strategy. Such a complete loop of the steps is referred to as a _generation_, and a new solution set is derived from the population in each generation. The entire process is executed iteratively until a termination criterion is met, such as thresholds for number of iterations or quality standards. As such, the solution set from the best generation can be treated as the final result of the algorithm.
It should be noted that strategies of mating, performing variations, and conducting environmental selection differ across EMO algorithms. For the purpose of supporting visual exploration and comparison of the evolutionary processes, we adopt an algorithm-agnostic paradigm based on generations. Under this approach, the evolutionary processes for different algorithms can be abstracted as a series of solution sets that correspond to the generations, Fig. 3.
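A minimal Python sketch of this algorithm-agnostic abstraction is given below; the callback names and the fixed generation budget are illustrative assumptions, and the point is only that each algorithm exposes one solution set per generation.

```python
def run_emo(initialize, mate, vary, select, evaluate, n_generations=200):
    """Algorithm-agnostic skeleton of the evolutionary loop described above.

    The callbacks encapsulate the algorithm-specific strategies; the framework
    only relies on the per-generation solution sets that this loop yields.
    """
    population = initialize()
    history = []                                    # one solution set per generation
    for _ in range(n_generations):
        pool = mate(population)                     # mating selection
        offspring = vary(pool)                      # recombination + mutation
        population = select(population, offspring)  # environmental selection
        history.append(evaluate(population))        # objective vectors of this generation
    return history
```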
### Requirement Analysis
Given the key features in evolutionary multi-objective optimization in Fig. 2 (B), we intended to design a visual analytics framework that incorporates comparisons between solution sets and generations in the evolutionary processes of multiple algorithms. To better illustrate the analytical tasks, we conducted a literature review on EMO algorithms [21, 29, 30, 55] and compiled a list of major research gaps. In addition, we organized an open-ended pilot interview with two domain experts (E1 and E2) to validate and refine the research gaps. E1 (who serves as one of the co-authors) is a professor majoring in evolutionary computing and has 10- years of research experience on EMO algorithms. E2 is a researcher in intelligent transportation systems who adopts multi-objective optimization algorithms in the workflow.
During the interview, we discussed several questions with the experts, including their approaches to applying EMO algorithms in their research, whether they had used visualization methods in their workflow, and how interactive visualizations could facilitate their analysis. Both experts affirmed the value of visualization in inspecting and comparing solution sets when developing and benchmarking new algorithms. They also noted that many popular EMO tools [1, 54, 5] only provide basic scatterplots or PCPs of the solution sets with limited or even no interactions, highlighting the necessity for an interactive visual exploration framework. In addition, choosing an appropriate algorithm becomes a critical consideration when determining which algorithm is best suited for a given problem. During the development of new algorithms, comparative analysis plays an important role in uncovering whether these novel algorithms outperform existing counterparts. Specifically, we identified three key requirements for visual comparative analysis:
* [leftmargin=*,noitemsep,topsep=0pt,parsep=0pt,leftmargin=*]
* **R1: Level of details.** When it comes to identifying comparison targets [16], it is important to consider three key components in EMO algorithms: quality indices, evolutionary processes, and individual generations. Quality indices are the most common way to compare the performance between algorithms and identify the best solutions [30, 21]. For an in-depth exploration of the algorithms, the evolutionary processes should be considered, which consists of the output of the generations in the iterative optimization process [29, 77]. E1 emphasized the value of investigating the behaviors of solution sets as they evolve iteratively, particularly when testing novel algorithms. In addition, comparing solution sets of individual generations facilitates a fine-grained quality analysis in the objective space.
* **R2: Patterns along the evolutionary processes or between in
Fig. 2: (A) Examples of the objective space for two and three-objective problems. (B) An illustration of typical evolutionary algorithm pipelines.
dividual generations.** In relation to the purpose (or "actions") of comparisons [16], the experts have addressed two main categories of patterns in examining evolutionary processes: evolutionary patterns that entail comparing two (sub)sequences of generations, and patterns when comparing two individual generations. As surveyed in Section 2.1, current research primarily targets on visualizing individual or the best generations, while exploring trends, progressions, and anomalies in the evolutionary process is an equally crucial aspect.
* **R3: Measures for assessing different quality aspects.** Regarding the comparison strategy [16], the majority of research in EMO algorithms only reports limited types of quality measures for result comparisons, such as Inverted Generational Distance (IGD) and HyperVolume (HV). However, a more comprehensive analysis of solution sets in the evolutionary process necessitates a multi-aspect evaluation [30, 55]. In real-world multi-criteria decision-making scenarios, trade-offs between different quality aspects can enhance the diversity of the feasible solutions and offer a wider range of options to meet diverse requirements, as commented by E2.
### Analytical Tasks
We further summarize the following tasks based on the aforementioned requirements. Notably, the first requirement, **R1**, is used as the primary axis to organize the tasks by aligning with the visual information-seeking mantra [52] which provides the "overview+detail" scheme.
**T1: Summarize the performance of algorithms.** Summarizing algorithm performance is the entry point of the comparative analysis. Analysts are interested in an overall comparison between algorithms:
* _How do the algorithms perform under different quality measures? What are the best solution sets in each algorithm?_ **(R3)**
* _How similar are the algorithms with respect to best solution sets and quality measures?_ **(R3)**
**T2: Reveal relationships among evolutionary processes.** At an intermediate level, the similarities between different evolutionary processes should be characterized to expose the behaviors of algorithms:
* _How do measures vary along the evolutionary processes?_ **(R2, R3)**
* _How do solution sets in the generations change among different algorithms? How similar are such changes?_ **(R2)**
**T3: Examine differences between solution sets.** At the generation level, a detailed comparison of solution sets from different generations should be considered as the fine-grained analysis:
* _How data in the solution sets distributes in the objective space?_ **(R2)**
* _How similar are two solution sets in terms of different aspects of quality measures?_ **(R3)**
## 4 Visual Analytics Framework
Based on the identified considerations and analytical tasks, we have developed a visual analytics framework to support comparative analysis between EMO algorithms. Our framework consists of two main stages in the workflow:
**Similarity Modeling.** As a data preprocessing stage, the logs recording the evolutionary processes of various algorithm candidates applied to a test problem are obtained and loaded into our framework. The similarities among the algorithms are subsequently computed, along with the similarities among solution sets of the corresponding generations.
**Visual Exploration.** With the outputs from the preprocessing stage, we have designed three interactive visualization modules to inspect and compare the evolutionary processes from multiple granularities, Fig. 3:
* _Algorithm-level Comparison_ **(T1)**: The **statistical overview** shows the basic information of the test problem alongside quality measure statistics for all loaded algorithms. The overall similarity among all algorithms is visualized in the **algorithm similarity view**.
* _Evolution-level Exploration_ **(T2)**: The **quality measure view** presents the trend of quality measures for all generations in selected algorithms, while the corresponding details are provided in the **time-line view**. The **generation similarity view** facilitates the exploration of inter-generational relationships between different algorithms.
* _Solution-level Inspection_ **(T3)**: The **solution set view** enables direct comparison of data distributions in the objective space between the solution sets of selected generations.
Fig. 1 illustrates the interface of our framework. Analysts are allowed to switch between views to finish their tasks. Our modular design supports the loading of any EMO algorithm results on selected test problems once they follow the same output protocol.
### Data Preprocessing and Similarity Modeling
The processing stage provides a formatted data abstraction of the evolutionary processes. Additional computations are performed including algorithm and generation similarities as well as quality measures on solution sets.
**Data Abstraction and Sampling.** The evolutionary process for an algorithm can be regarded as a sequence of generations, each of which is associated with a solution set. As previously mentioned in Section 3.1, this data abstraction is commonly adopted by most EMO algorithms, allowing for algorithm-agnostic comparisons between evolutionary processes. However, the number of generations for an algorithm execution can be excessively large. As such, a uniform down-sampling strategy with a tunable sample rate can be applied in the preprocessing stage to reduce the size of the original generations before loading it, as illustrated in Fig. 4 (A).
**Quality Measures.** Our framework utilizes four distinct quality measures to evaluate solution sets with respect to key performance aspects including convergence, spread, and uniformity [30] (**R3**), Fig. 4 (B).
_Inverted Generational Distance (IGD)_[12]: IGD is one of the most widely-used measures in multi-objective optimization. Given a solution set \(S=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n}\}\) and a reference set \(P^{*}=\{\mathbf{r}_{1},\mathbf{r}_{2},...,\mathbf{r}_{m}\}\), IGD is formulated as follows:
\[\text{IGD}(P^{*},S)=\frac{\sum_{i=1}^{m}\min_{\mathbf{x}\in S}\text{dist}(\mathbf{r}_{i},\mathbf{x})}{m} \tag{2}\]
where \(\text{dist}(\mathbf{r}_{i},\mathbf{x})\) is the Euclidean distance from \(\mathbf{r}_{i}\) to a solution \(\mathbf{x}\). In other words, Equation 2 calculates the average distance from each known ground-truth reference point \(\mathbf{r}_{i}\) to its nearest solution in a given generation. A lower IGD value means a better performance of the solution set with respect to convergence towards the ground-truth as well as diversity on the true Pareto front.
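A minimal NumPy sketch of Eq. (2) is given below; variable names are illustrative.

```python
import numpy as np

def igd(reference_set, solution_set):
    """Inverted Generational Distance: average distance from each reference point
    to its nearest solution in the objective space (Eq. 2)."""
    R = np.asarray(reference_set)   # (m, n_obj) sampled true Pareto front
    S = np.asarray(solution_set)    # (n, n_obj) objective vectors of one generation
    dists = np.linalg.norm(R[:, None, :] - S[None, :, :], axis=-1)  # (m, n) pairwise distances
    return dists.min(axis=1).mean()
```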
_Hypervolume (HV)_[79]: HV is another commonly-used quality measure for evaluating solution sets. By specifying an anchor point in the objective space, HV can be seen as the union volume of the hypercubes determined by the anchor point and each solution \(\mathbf{x}\) in the solution set \(S\). A higher HV value signifies a superior performance of the solution set.
_Spacing (SP)_[50] and _Maximum Spread (MS)_[77]: As per the taxonomy proposed in the study by Li et al. [30], IGD and HV are both classified as aggregated indicators that reflect multiple aspects through a single value. While IGD and HV can indicate the quality of convergence and diversity, the uniformity and spread of solutions is an equally significant aspect that needs consideration. To achieve this
Fig. 3: An overview of our visual analytics framework.
goal, we employ two additional quality measures, namely, SP and MS, to assess the uniformity and spread of the solutions and ensure a better representation of the entire Pareto front. SP aims to calculate the variation in distances between the solutions where a large deviation implies a non-uniform distribution, while MS gauges the covered area connected by the minimum and maximum values on each objective in the objective space.
**Generation and Algorithm Similarity.** In EMO, the conventional approach for algorithm selection relies solely on quality measures as the basis for comparison. In this context, comparable measurement values are strong indicators of similar distributions of the solution sets and potentially similar evolutionary processes. For a more thorough comparison, we model the similarities at two levels of detail, for generations and for the evolutionary processes of algorithms, respectively (**R1**).
_Generation Similarity_: The purpose of assessing generation similarity is to quantify the degree of similarity between the distributions of solution sets in the objective space. To this end, we harness the Wasserstein distance, also referred to as Earth Mover's Distance (EMD), to reveal the similarity between the distributions, Fig. 4 (C). The reason for adopting this distance measure is that solution sets of varying shapes and sizes are common in EMO algorithms, and the Wasserstein distance is effective in comparing such distributions by measuring the "work" required to transform one distribution into another. Moreover, this interpretable and intuitive analogy of the distances, i.e., the amount of "work", facilitates human understanding when conducting visual comparisons.
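A minimal sketch of this generation distance is shown below; it assumes two equally sized solution sets with uniform weights, in which case the optimal transport reduces to an assignment problem, whereas sets of different sizes would require a general optimal transport solver (e.g., the POT library).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def generation_distance(S1, S2):
    """Earth Mover's Distance between two equally sized solution sets with
    uniform weights, solved as an optimal assignment of objective vectors."""
    S1, S2 = np.asarray(S1), np.asarray(S2)
    cost = np.linalg.norm(S1[:, None, :] - S2[None, :, :], axis=-1)  # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)                         # minimum-cost matching
    return cost[rows, cols].mean()
```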
_Algorithm Similarity_: To analyze the similarities between algorithms, we consider the generation sequences in the algorithms as time series, Fig. 4 (D), in order to utilize the well-established time series similarity measures. The attribute values for each time step can be assigned as the IGD or HV values, and we utilize the dynamic time warping (DTW) distance and the Euclidean distance to determine similarities between these time series. Additionally, we employ the generation similarity of the representative solution sets obtained from two distinct algorithms to represent their algorithm similarity, as evaluated by the generations possessing the best IGD or HV values.
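A minimal sketch of the DTW distance used for comparing per-generation quality-measure series is given below; the textbook dynamic-programming recurrence is shown without windowing or normalization, and the two IGD series are invented for illustration.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two 1-D series (e.g., per-generation IGD values)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best alignment so far: match, or stretch one of the two series.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return float(D[n, m])

igd_run_a = np.array([2.0, 1.2, 0.6, 0.3, 0.2])
igd_run_b = np.array([2.1, 1.9, 1.0, 0.5, 0.3, 0.2])   # slower convergence, more generations
print(dtw_distance(igd_run_a, igd_run_b))
```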
### _Comparing Algorithms_
The algorithm comparison module is designed to facilitate a coarse-grained analysis of the relationships between different algorithms by leveraging the quality measures and similarities that were computed during the data preprocessing stage (**T1**). Specifically, this module presents an overview of the quality measures and the interrelationships among the loaded algorithms on the test problem.
**Visualization.** Shown in Fig. 1 (V1), the region in the interface consists of three panels. In the statistical overview on the left side, the basic information regarding the workspace is presented, including the name of the test problem, the number of dimensions in both the decision space and the objective space, and the total number of algorithms loaded in the framework. A quality measure table is provided underneath the basic information, which itemizes the IGD and HV values for the best and last generations of each algorithm.
To depict the coarse-grained similarities between the algorithms, the algorithm similarity view deploys a scatterplot that shows the distances between algorithms, with the t-SNE projection method [58] used to determine the coordinates of the dots. Each dot on the scatterplot represents an algorithm, and the distance between algorithms is reflected in the relative positions of the dots. The input pair-wise similarity used for the t-SNE projection can be selected from the drop-down menu on the title bar of the view, where the various algorithm similarity measures computed in the preprocessing stage are available for comparison.
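The dot coordinates can be obtained roughly as sketched below by feeding a precomputed pairwise distance matrix to scikit-learn's t-SNE; the synthetic distance matrix, perplexity, and seed are illustrative assumptions, not the framework's settings.

```python
import numpy as np
from sklearn.manifold import TSNE

# D: symmetric (n_algorithms x n_algorithms) distance matrix, e.g. derived from the
# "Best IGD" or "Euclidean (IGD)" algorithm similarities computed in preprocessing.
rng = np.random.default_rng(0)
X = rng.random((36, 5))                      # stand-in per-algorithm feature vectors
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)

coords = TSNE(
    n_components=2,
    metric="precomputed",                    # consume the pairwise distances directly
    init="random",                           # PCA initialization is unavailable with precomputed distances
    perplexity=10,
    random_state=0,
).fit_transform(D)
print(coords.shape)                          # (36, 2) dot positions for the scatterplot
```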
**Interaction.** Clicking on the algorithm names in the initial column of the quality measure table enables users to activate the visualization of the corresponding algorithm in the quality measure view and the timeline view. A categorical color scheme is applied to the activated rows with one color per algorithm, whereby visual components sharing the same hue pertain to the respective algorithm throughout the interface. In addition, analysts can click on the dots in the scatterplot to achieve the same effect as clicking on the algorithm names in the table.
### _Analyzing Evolutionary Processes_
Since the behaviors of algorithms are manifested in the solution sets of the generations, analyzing the evolution of these generations can aid in comprehending the characteristics and underlying mechanisms of the algorithms. In this module, we present a set of visualizations that illuminate prominent evolutionary patterns (**T2**).
**Quality Measure View.** Shown in Fig. 1 (V2), this view is designed to illustrate overall temporal patterns of quality measures across evolutionary processes. It comprises four line charts, each corresponding to a specific quality measure, i.e., IGD, HV, MS, and SP. Upon activation of an algorithm in the quality measure table or the algorithm similarity view, the corresponding measurement values will be depicted as time series in the line charts. The horizontal axis of the line charts represents the order of generations, while the vertical ranges are scaled to fit the largest value in the visible area of the line charts. The colors of the lines are consistent with the colors assigned to the activated algorithms. The four line charts of the quality measures support zooming and panning, allowing for interactions while keeping the vertical value ranges filling the viewport of the charts. Analysts can select a specific generation in the timeline view by clicking on a data point in the corresponding time series.
**Timeline View.** The timeline view, Fig. 1 (V3), serves as a detailed explanation of the evolutionary processes for the activated algorithms. It shows the objective vectors of all generations in a juxtaposed manner, as illustrated in Fig. 1 (V3). Each row represents an activated algorithm, consisting of two components.
* _Left Side_: A _summary panel_ at the beginning of the row displays the best measurement values of HV, MS, and SP in a bar chart. To provide a comprehensive view of the aggregated IGD value in Equation 2, we expand the representation of this measure with a histogram of all \(\text{dist}(\mathbf{r}_{i},\mathbf{x})\) terms in its definition. This allows analysts to observe the distributions of the nearest distances to the reference set and obtain a better understanding of the qualities of individual objective
Fig. 4: The data preprocessing stage. (A) An optional down-sampling stage to reduce the number of generations. (B) Four quality measures are calculated for each generation. (C) Generation similarity and (D) Algorithm similarity are measured between individual generations or the entire evolutionary processes.
vectors in a solution set. The aggregated IGD value is marked with a vertical line. The value ranges in the bar charts and histograms share the same scale among all rows in the timeline view, enabling visual comparisons between multiple algorithms.
* _Right Side:_ A series of scatterplots are arranged as small multiples, with each scatterplot corresponding to a generation and presenting the distribution of the solution set in the objective space. If the number of objectives, i.e., the dimension of the objective space, exceeds two, PCA is adopted to project the objective vectors onto a 2-D plane. Note that we fit the PCA parameters based on the objective vectors in the reference set, and the derived projection matrix is shared among all the scatterplots in all algorithms. The dots in the scatterplots use the theme color of the corresponding algorithm, while the objective vectors in the reference set are plotted in gray color to depict a ground-truth.
The timeline view enables several interactions. By hovering over the bars in the bar chart, analysts can obtain the numerical values and the order of generations associated with each measure. Additionally, analysts may select a particular generation by clicking on the corresponding scatterplot, which triggers a mode change in the quality measures panel. In this comparison mode, the measures for the selected generation are displayed alongside those of the best-performing generations in a grouped bar chart style. Furthermore, to facilitate the comparison of the distributions of the best and selected generations, a histogram with stripe texture is drawn beneath the existing best-performing one, which represents the IGD distribution of the selected generation.
**Generation Similarity View.** An issue that arises in the quality measure view and the timeline view is the lack of explicit depiction of relationships among individual generations. These relationships, whether they are related to the generations in the same or different algorithms, remain elusive in visual representations. To address this limitation, the generation similarity view, Fig. 1 (V4), has been introduced as a means of revealing similarities between generations, which aids analysts in identifying common or distinct evolutionary behaviors. Fig. 5 (A) illustrates the design of this view. This approach allows for the inclusion of all generations of a given algorithm by toggling the switch adjacent to the algorithm name in the timeline view. Furthermore, it is possible to select multiple algorithms simultaneously.
_KNN Graph Building:_ To discern the underlying structure of the neighborhood in the chosen generations, we employ the \(k\)-nearest-neighbor graph (kNN graph) to delineate their proximity. The graph comprises nodes that correspond to generations from the selected algorithms, with nearest neighbors for each node detected based on the generation similarity, i.e., the Wasserstein distance between the corresponding solution sets in the objective space. Notably, the graph-building procedure includes generations from all selected algorithms, and this is critical for capturing the intra- and inter-relationships among the generations from different algorithms.
_Layout and Visual Encoding:_ Once the \(k\)NN graph is prepared, we use the Kamada-Kawai layout algorithm [25] to compute the coordinates of the nodes, Fig. 5 (A). The nodes are represented as colored points in the view, where the categorical color mapping for the activated algorithms is followed. In order to denote the sequential order of the generations, we assign varying degrees of lightness to the point colors with lighter colors indicating earlier positions in the evolutionary processes. Moreover, the size of the points can be mapped to one of the quality measures, which is controlled by a drop-down menu in the title bar of the view, Fig. 5 (A1). The edges in the kNN graph are illustrated as light gray lines.
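A compact sketch of the graph construction and layout steps is given below, assuming a precomputed matrix of pairwise generation distances; the synthetic data, the NetworkX layout call, and the choice of \(k\) are illustrative rather than the exact implementation.

```python
import numpy as np
import networkx as nx

def knn_graph_layout(D: np.ndarray, k: int = 10):
    """Build a kNN graph from pairwise generation distances and compute node positions.

    D: (n_generations x n_generations) Wasserstein distances; rows may mix algorithms.
    """
    n = D.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    order = np.argsort(D, axis=1)
    for i in range(n):
        for j in order[i, 1:k + 1]:                   # skip self (zero distance), keep k nearest
            G.add_edge(i, int(j), weight=float(D[i, j]))
    pos = nx.kamada_kawai_layout(G, weight="weight")  # 2-D coordinates for the node-link view
    return G, pos

rng = np.random.default_rng(0)
pts = rng.random((60, 3))                             # stand-in generation "signatures"
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
G, pos = knn_graph_layout(D, k=5)
print(G.number_of_nodes(), G.number_of_edges())
```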
Beyond the backbone graph structure, we augment the visual presentation with several supplementary design elements to facilitate the identification of significant structures or features hidden in the graph.
* While the \(k\)NN graph is useful for showing the neighborhood structure of generations, it may not accurately convey the intrinsic clustering patterns in the measurement space. To address this, we use the HDBSCAN [40] clustering algorithm with the pair-wise generation similarities as input to detect groupings and outliers among the generations. The nodes belonging to the same cluster are enclosed by a gray bubble underneath, Fig. 5 (A2). Furthermore, the outlying nodes identified by HDBSCAN are decorated with a dark gray border, indicating that they do not belong to any clusters, Fig. 5 (A3).
* Nodes with nearest neighbors from different algorithms may suggest that the corresponding evolutionary processes share similar behaviors and produce comparable solution sets. We signify such nodes through a nested design where an additional outer ring is included, Fig. 5 (A4). This outer ring is a colored donut chart and represents the proportions of neighboring nodes that originate from the same or different algorithms.
* It may sometimes be challenging to trace the sequential ordering of generations based solely on the lightness of the node colors. Inspired by the Time Curve technique from Bach et al. [4], we use a Bezier curve to emphasize the temporal sequence of nodes from the same algorithm, Fig. 5 (A5). However, the curves may result in severe visual clutter with a large number of nodes. To address this, the curves are partitioned into segments by removing the portions that lie within a cluster (A2 in Fig. 5). In this manner, only the connections between clusters are displayed, which significantly reduces the visible length of the curves while preserving chronological information.
_Interactions:_ A rich set of interactions are supported in the generation similarity view. Semantic zooming is supported in exploring the details of the \(k\)NN graph. The gray bubbles, outer rings, and time curves can be disabled to provide a clear interface. To avoid visual clutter, the number of visible \(k\)NN graph edges is controlled in a slider. Hovering the mouse pointer on the nodes and time curves opens an information prompt that contains details such as node neighborhoods or covered nodes of the curve, Fig. 5 (A6). Clicking on a gray bubble can highlight the nodes in the cluster.
_Alternative Design:_ We have also considered utilizing chronological order as the primary horizontal axis to layout the nodes and representing similarities with edges between generations, which is similar to the CareerLine design [65]. Nonetheless, when the number of generations is large, such edges may result in severe visual clutter. Moreover, we prioritize similarity information between generations in the layout, which can be naturally encoded as 2-D distances.
### _Inspecting Solution Sets_
Upon examining the evolutionary processes based on quality measures and generation similarities, a nuanced comparative analysis of particular generations may be required. Thus, we design a solution set view that enables a direct comparison of the solution sets in the objective space.
**Visualizing Solution Sets.** Illustrated in Fig. 1 (V5), when analysts select a particular scatterplot patch in the timeline view, the corresponding scatterplot is magnified and displayed on the right side of the solution
Fig. 5: The visual design of (A) the generation similarity view and (B) the scatterplot in the solution set view.
set view ( Fig. 5 (B)), with the relevant patch snapshot listed on the left side. The coordinates of the solutions in the generations follow the same approach described in Section 4.3, wherein the projection matrix of PCA is fit on the objective vectors in the reference set and then applied to the selected generations. The colors of the dots are determined based on their originating algorithms. To mitigate potential visual clutter, we employ two additional methods:
* The first additional design is to draw a contour map for each selected generation using the kernel density estimation method, Fig. 5 (C). This is intended to reveal the density of the dots, and the density levels are represented by the lightness of the filled contour colors and the stroke width of the contour lines.
* The second design is the application of the outlier-biased random sampling method [34, 74] to reduce the number of visible dots while maintaining the coverage of the overall data distributions. However, this sampling method operates on the projected 2-D plane and may therefore miss outliers in the original objective space. To address this issue, the Local Outlier Factor method [6] is employed in the objective space to preserve all outlying extremum solutions, as sketched after this list. The detected outliers and extremum solutions are marked as crosses instead of dots, thereby maintaining the spread information of the solution sets.
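The sketch below illustrates the outlier-preserving part of this scheme: LOF flags outliers in the full objective space and those are always retained, while the remaining inliers are thinned; for brevity, plain uniform sampling stands in for the outlier-biased sampler of [34, 74], and all data and parameters are illustrative.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def sample_with_outliers(F: np.ndarray, rate: float = 0.2, seed: int = 0):
    """Down-sample a solution set while keeping outliers detected in the objective space.

    F: (n, d) objective vectors of the selected generation.
    Returns the indices of points to draw and the indices of forced outliers (drawn as crosses).
    """
    labels = LocalOutlierFactor(n_neighbors=min(20, len(F) - 1)).fit_predict(F)
    outliers = np.flatnonzero(labels == -1)
    inliers = np.flatnonzero(labels == 1)
    rng = np.random.default_rng(seed)
    if len(inliers) > 0:
        sampled = rng.choice(inliers, size=max(1, int(rate * len(inliers))), replace=False)
    else:
        sampled = inliers
    return np.concatenate([sampled, outliers]), outliers

F = np.vstack([np.random.default_rng(1).normal(size=(200, 3)), [[8.0, 8.0, 8.0]]])
keep, forced = sample_with_outliers(F, rate=0.1)
print(len(keep), "points drawn,", len(forced), "forced outliers")
```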
**Visualizing the Reference Set.** To establish a ground-truth for comparison, the reference solutions in the reference set are also visualized in the scatterplot. Due to the excessively large quantity of reference solutions, we design three different display modes that highlight various aspects of the true Pareto front. In Fig. 5, the reference solutions can be presented as scatters of their objective vectors (B), a grayscale density map, or a textured gray alpha hull that depicts the boundaries of the reference set (D). Analysts can switch between the three modes flexibly, depending on the desired aspect of information they wish to examine, such as density for the density map and coverage for the alpha hull.
**Interactions.** The scatterplot supports semantic zooming to facilitate detailed examination on demand. An exclamation mark displayed in a generation snapshot indicates that certain solutions lie outside the current viewport, prompting analysts to zoom out to locate them. Hovering over the dots opens a tooltip of the objectives' actual values. A slider is used to control the sample rate of the sampling method.
## 5 Case Study and Expert Interview
In this section, we describe how our framework can facilitate the understanding and investigation of the interrelations among EMO algorithms on a test problem. Furthermore, we outline the feedback from domain experts in an interview. Our framework follows a browser-server architecture and relies on the PlatEMO [54] platform for running EMO algorithms and generating logs of the generations, including the solution sets for each generation. Python Flask library is employed for the backend, while Vue3 and d3.js are used for the frontend.
### DTLZ3 Benchmark Problem
The DTLZ (Deb-Thiele-Laumanns-Zitzler) suite is a family of multi-objective optimization test problems initially proposed by Deb et al. [13], which is widely used in evaluating the performance of EMO algorithms. In each problem, a known Pareto front is provided in the form of a reference set, enabling researchers to assess the accuracy and diversity of the approximations obtained from EMO algorithms. The first case study aims to investigate the ability of selected EMO algorithms to tackle the DTLZ3 problem, which involves three objective functions and a 12-dimensional decision space. A total of 36 EMO algorithms1 were utilized to test the DTLZ3 problem with the same initial population size of 100. We ran each algorithm once for 500 generations and uniformly down-sampled them to 100. The parameter \(k\) for the \(k\)NN graphs in the generation similarity view was set to 10.
Footnote 1: Please refer to the supplemental material for a full list of the algorithms.
**Algorithm-level Comparison (T1).** After loading data into the framework, the quality measure table and algorithm similarity view are displayed. To filter the algorithms with optimal performance, the table is ranked based on the generations with the best IGD values in each algorithm, i.e., the "Best IGD" column, in ascending order. It can be observed that t-DEA, MSEA, FDV, NSGA-III, and VaEA are the most effective algorithms in the table. After activating these five algorithms, we find that their corresponding projected points are clustered together when switching to the "Best IGD" measure in the algorithm similarity view, consistent with the rankings in the table.
When the algorithm similarity view was switched to "Euclidean (IGD)" which measures the IGD value series across generations, FDV and MSEA were found to be significantly distant from the other three. This implies that they present different behaviors related to the changes in the IGD values during evolutionary processes. This observation is also confirmed by examining the IGD line chart in the quality measure view, which shows significantly lower values in the first ten generations for the two corresponding time series of FDV and MSEA. The HV values indicate similar trends, with FDV and NSGA-III showing outlying behaviors compared to t-DEA, MSEA, and VAEA.
**Generation-level Similarity Analysis (T2, T3).** To comprehensively investigate the five activated algorithms, we utilize the interplay between various views to analyze the evolutionary processes. We observe from the summary panels in the timeline view that the activated algorithms show similar best measure values and IGD value distributions. However, due to the differences in the HV and IGD series mentioned above, we need further investigation (**T2**). To this aim, we select t-DEA, which has the best IGD value in the quality measure table, as the standard and perform paired comparisons with the other activated algorithms in the generation similarity view.
_t-DEA vs. FDV (Fig. 6 (B1)):_ In the \(k\)NN graph constructed using the generations obtained from t-DEA (red) and FDV (blue), we make the following observations (**T2**). First, both algorithms depict clear evolutionary trends from the top right corner to the bottom left. When setting the visual mapping of point sizes to IGD values, it can be observed that the sizes reduce gradually along the evolution paths. Furthermore, there are only a few intersections between them, Fig. 6 (B1.1). This implies that the two algorithms are designed to have distinct evolutionary strategies with little to no overlap. Second, in the intersection area, FDV's generations are in the early stages of the entire process (from generation #5 to #7), whereas t-DEA's process is in the intermediate stage (from #51 to #54). This suggests that FDV converges rapidly to the Pareto front and achieves a quality that t-DEA requires over 50 generations to reach. This pattern is more evident when enabling the time curves, where long jumps in the yellow curves are observed from the top-right corner, Fig. 6 (B1.2). We further examine the solution sets of generation #5 in FDV and #52 in t-DEA (**T3**), Fig. 6 (B1.2). The scatterplot of the solution sets indicates that both generations have outlying solutions beyond the reference set area, and their similarities are relatively larger than those of the best generations, which is the main cause of having such intersections.
_t-DEA vs. MSEA (Fig. 6 (B2)):_ The \(k\)NN graph for t-DEA and MSEA exhibits a heavy overlap until almost half of the evolutionary process (over 50 generations) (**T2**). In the overlapping early stages, the solution sets of both algorithms are far from the reference set but have the momentum to converge towards the central area, as depicted in the solution set scatterplots in Fig. 6 (B2.1). The evolutionary processes begin to diverge after approximately 50 generations, with the later generations forming two distinct branches. When comparing the solution sets of the pivotal generations (#55 in t-DEA and #56 in MSEA) and their subsequent generations, Fig. 6 (B2.2), we observed that t-DEA achieves a more uniform coverage of the reference set, whereas MSEA presents poorer uniformity near the central area (**T3**), Fig. 6 (B2.3).
_t-DEA vs. VaEA (Fig. 6 (B3)):_ During the early stages of the evolutionary process, there is an interweaving pattern between the two algorithms (**T2**). Similar to the comparison between t-DEA and MSEA, two diverging branches begin to emerge after 50-60 generations. However, the subsequent development behaviors differ between the two algorithms. As depicted in Fig. 6 (B3.1), the generations in t-DEA continue to evolve until the end of the process. Nevertheless, in Fig. 6 (B3.2), an intriguing cluster A is observed where the points with varying color
lightness are mixed up. By observing the points through hovering, it can be determined that they correspond to generations ranging from #60 to #100. After enabling the time curve for VaEA, multiple connections among cluster A, cluster B, and some outlying generations in between are observed. Further examination reveals that the points in cluster B correspond to generations from #78 to #83. By analyzing the line charts in the quality measure view, jitters are observed between the 77th and 83rd generations in the time series of VaEA, Fig. 6 (B3.3). We further scrutinize the small multiples of scatterplots of VaEA in the timeline view from the 70th to 90th generations (**T2, T3**). In the ranges where jitters occur, some solutions start to move towards the bottom corner and then spread away after the 86th generation, indicating a sudden exploitation behavior in the evolutionary process.
**Experts' Comments.** First, FDV displays significantly faster convergence compared to other algorithms. With such observation, the experts mentioned that further investigations could be conducted to determine whether such a phenomenon is due to the inherent advantages of FDV or merely a coincidental matching between FDV's customized reproduction operators and the objective functions of the test problem. This is crucial in directing the development of more generalized algorithm designs rather than those tailored to specific test problems. Second, VaEA's behavior aligns with their knowledge that "novelty search" [41] can be initiated in evolutionary processes when optimization encounters barriers. The experts commented that despite acknowledging the presence of such behavior before using our system, it remains essential to employ intuitive visualizations to observe when and how novel solutions can be generated. This may also assist in the future optimization of variation strategies to produce superior novel solutions.
### _DDMOP2 Problem_
DDMOP [20] is a test suite developed from real-world multi-objective optimization problems. In the second case study, we have selected the DDMOP2 test problem, which involves generating optimal frontal structures for vehicles to handle crashes. The problem consists of five decision variables for reinforced members around the frontal structure. Three factors representing the severity of a crash are assigned to the three objectives, which need to be minimized. We have employed the same algorithm and parameter settings as in the first case study.
**Algorithm-level Comparison (T1).** Our exploration commences with the NSGA-II algorithm, which is the baseline employed in the reference [20]. We have employed NNIA and RM-MEDA, which are, respectively, the most similar and the best-performing algorithms based on the algorithm similarity of the generations with the best IGDs, Fig. 1 (A1). After examining the line charts in the quality measure view (Fig. 1 (A2)) as well as the summary panel (Fig. 1 (A3)) in the timeline view, we have observed that the three algorithms exhibit a similar trend in the evolutionary processes and achieve comparable measurement values. However, it is still necessary to determine whether the evolving behaviors concealed in the generation series are identical.
**Generation-level Similarity Analysis (T2, T3).** Initially, we enabled the best generations of the three algorithms on IGD values in the solution set view, Fig. 1 (B1). In the scatterplot, the three solution sets appear to display a similar distribution, which further supports the similarity observed in the quality measures (**T3**). However, the evolutionary processes in the generation similarity view indicate differences in the comparisons. As depicted in Fig. 1 (B2), when only the \(k\)NN graph for NSGA-II is enabled, two major clusters appear, with intra-connections established by several outlying generations between the two clusters. After adding the generations from NNIA, a considerable number of nodes are displayed with outer rings, indicating that the generations from the two algorithms are strongly interconnected in the \(k\)NN graph (**T2**). However, when comparing generations from NSGA-II (Fig. 1 (B2.1)) and RM-MEDA (Fig. 1 (B2.2)), the nodes are clearly separated based on their originating algorithms, with only a few inter-algorithm neighboring relationships preserved.
**Experts' Comments.** For relationships between the three algorithms, domain experts observed that this case exemplifies a situation where algorithms that belong to the same category may not display substantial differences in evolutionary behaviors. NSGA-II and NNIA share the concept of non-dominated neighbor-based selection in the population selection stage, as described in previous research [18]. Although updated crowding-distance measurements in NNIA have been reported to provide advantages in certain scenarios, additional inspection should be conducted to make sure whether the pronounced similarity between NSGA-II and NNIA results from the DDMOP2 test problem settings, which may not effectively showcase such advantages. This also highlights the issue that in real-world test problems like DDMOP, characteristics that maximize the advantages of a particular algorithm may not be as apparent as in artificial benchmarking problems, which presents new challenges in real-world decision-making scenarios.
Fig. 6: The DTLZ3 problem. (A) The algorithm-level similarity analysis. (B) Comparative analysis between t-DEA and the other three algorithms.
### Expert Interview
We conducted an expert interview involving three domain experts to further evaluate our framework. E1 and E2 were the experts who have participated in a preliminary interview for compiling analytical tasks, while E3 is a research scientist with a Ph.D. in evolutionary optimization and machine learning. During the interview, we presented the background, analytical tasks, and visualization design of the framework as well as the two test problems used in the case study. We then facilitated an open discussion, allowing the experts to freely explore and provide feedback on the implemented system. Comments on the framework design and functionality were collected during the interviews, which lasted between 1 and 1.5 hours each. Some of the comments provided by the experts on the case study results have already been reported in "Experts' Comments" in Section 5.1 and 5.2.
All three experts agreed on the effectiveness of the proposed visual analytics framework for comparative analysis of EMO algorithms. E3 noted that such interactive comparison had not been extensively explored in the field of evolutionary computing and multi-objective optimization, thus highlighting the usefulness of the framework in terms of revealing the relationships between different algorithms. E1, who has experience in designing EMO algorithms, remarked that the framework could prove to be helpful in inspecting the effectiveness of the newly proposed variation and environmental selection strategies based on existing baselines. "Sometimes, when designing new algorithms for specific types of problems, it is uncertain to what extent the strategy will work, and existing quality measures may not reveal the fundamental differences between two algorithms. An intuitive comparison could facilitate the assessment of whether the algorithm is performing in accordance with the researcher's expectations."
With respect to the visualization design in the framework, positive feedback was received from the domain experts. E2 noted that although some existing EMO platforms and libraries have incorporated visualization features, an integrated visualization framework with comprehensive support for visual exploration could alleviate the burden of programming. In addition, the interplay between multiple views facilitates the understanding of evolutionary processes from diverse perspectives. E1 and E3 also highlighted the generation similarity view as an interesting approach to understanding the similarities between generations. "It allows us to easily perceive the characteristics of various evolutionary processes, such as how fast an algorithm can converge and whether unexpected exploratory evolutions, such as Cluster B in the result of VaEA on DTLZ3, occurred," commented E1.
We also received constructive suggestions to improve the current framework design. E2 discussed the feasibility of integrating application-specific visualizations into the framework to provide better support for real-world decision-making scenarios. E3 suggested exploring the evolutionary processes in single-objective optimization problems, which would be an interesting topic for further research.
## 6 Discussion and Conclusion
In this paper, we present a visual analytics framework for the comparative analysis of multiple evolutionary multi-objective optimization (EMO) algorithms. Based on the analytical tasks, the algorithm-agnostic framework allows for comprehensive comparison at multiple levels of detail, including algorithm-level assessment, evolution-level comparison, and solution-set-level inspection. The effectiveness of the proposed framework is demonstrated through two test problems.
**Comparison with Existing Work.** To better contextualize our framework within the current literature on visualizing multi-objective optimizations, we have chosen three representative works and compared them across various dimensions in a qualitative manner, Fig. 7. Our framework offers an interactive solution primarily concentrating on the analysis of evolutionary processes, whereas the existing works mainly tackle the task of solution set analysis for individual generations.
**Scalability.** We discuss the scalability of our framework from two aspects: data preprocessing, and visual design.
_Data Preprocessing_: A bottleneck that impedes scaling up the number of algorithms is the similarity computation process. Computing the Wasserstein distance is expensive due to the approximation required in its evaluation [15]. Furthermore, the time required for pairwise similarity computation increases substantially with the number of generations in each algorithm. In the two case studies presented in this paper, the data preprocessing stage for each problem with 36 algorithm runs took approximately 3.5 hours. To address this issue, increasing the down-sampling rate to reduce the number of generations included in subsequent stages is a viable solution. Advanced sampling techniques can also be explored to preserve only informative generation subsequences.
_Visual Design_: Our framework utilizes scatterplot-based designs, in the algorithm similarity view, the generation similarity view, and the solution set view. When a large number of elements, i.e., points, are rendered in the same view, visual clutter may occur. Moreover, considering that colors are employed to represent distinct algorithms, there may be difficulties when a considerable number of algorithms are activated in the aforementioned views, as previously recognized in the literature on multi-class scatterplots [36, 74]. Our framework has deployed various techniques such as semantic zooming and density maps to mitigate the issues. Subsequent work could explore additional simplification strategies, such as hierarchical visualization techniques, additional filtering methods, and multi-class sampling approaches for scatterplots.
**Generalizability.** First, with regard to the number of objectives, our framework can be readily extended to address many-objective optimization problems where the number of objectives exceeds that of the multi-objective setting. This can be achieved through the incorporation of advanced multi-dimensional projection methods, such as t-SNE and UMAP, into the visualization of solution sets. Second, a promising extension for generalizing our framework lies in the realm of stochastic behaviors in multiple runs of an algorithm. Here, additional uncertainty analysis and visualization techniques can be leveraged to enhance the analysis of such behaviors. In addition, our framework can be employed in a wide range of iterative optimization algorithms, where iterations can be treated as generations in the EMO context. Nonetheless, significant modifications to the solution visualization components may be necessary to support the analysis of only one objective.
**Limitations and Future Work.** Currently, our framework adopts an abstract representation of the objective space. In the future, we intend to enhance our framework for specific application scenarios by incorporating contextual information and semantic meanings into the visualizations. We also aim to uncover the underlying mechanisms of the evolutionary processes by applying white-box explanation techniques to EMO algorithms. Furthermore, there are current plans to enhance the visual exploration components for the solution sets by adopting additional multi-dimensional visualization methods. A linked analysis between the decision and the objective space can significantly facilitate the understanding of solution distributions in a single solution set. Lastly, there remain several parameters that users can adjust, including the down-sampling rate of the algorithm runs and the \(k\) value for the kNN graph. Further investigation can be undertaken to understand the impact of these hyperparameters on the analytical workflow.
Figure 7: A comparative analysis involving 3D-RadViz [23], iSOM [44], PAVED [11], and our framework is presented. In the columns denoting supported tasks, + symbolizes the primary task supported by the work, * designates partially or indirectly supported tasks, and - means tasks that are not supported.
## Supplemental Materials
The supplemental materials include (1) a demo video of our framework, (2) the subtitle file of the demo video, and (3) the list of all EMO algorithms selected in the case studies. An implementation is released at [https://github.com/VIS-SUSTech/visual-analytics-for-emo-algorithm-comparison](https://github.com/VIS-SUSTech/visual-analytics-for-emo-algorithm-comparison).
## Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (No. 62202217), Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515012889), Guangdong Talent Program (No. 2021QN02X794), and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2017ZT07X386). Y. Ma would like to thank his wife, Y. Qi, for her love and constant support.
|
2302.09021 | Energy Efficient Computation Offloading in Aerial Edge Networks With
Multi-Agent Cooperation | With the high flexibility of supporting resource-intensive and time-sensitive
applications, unmanned aerial vehicle (UAV)-assisted mobile edge computing
(MEC) is proposed as an innovational paradigm to support the mobile users
(MUs). As a promising technology, digital twin (DT) is capable of timely
mapping the physical entities to virtual models, and reflecting the MEC network
state in real-time. In this paper, we first propose an MEC network with
multiple movable UAVs and one DT-empowered ground base station to enhance the
MEC service for MUs. Considering the limited energy resource of both MUs and
UAVs, we formulate an online problem of resource scheduling to minimize the
weighted energy consumption of them. To tackle the difficulty of the
combinational problem, we formulate it as a Markov decision process (MDP) with
multiple types of agents. Since the proposed MDP has huge state space and
action space, we propose a deep reinforcement learning approach based on
multi-agent proximal policy optimization (MAPPO) with Beta distribution and
attention mechanism to pursue the optimal computation offloading policy.
Numerical results show that our proposed scheme is able to efficiently reduce
the energy consumption and outperforms the benchmarks in performance,
convergence speed and utilization of resources. | Wenshuai Liu, Bin Li, Wancheng Xie, Yueyue Dai, Zesong Fei | 2023-02-14T14:53:57Z | http://arxiv.org/abs/2302.09021v1 | # Energy Efficient Computation Offloading in Aerial Edge Networks With Multi-Agent Cooperation
###### Abstract
With the high flexibility of supporting resource-intensive and time-sensitive applications, unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) is proposed as an innovational paradigm to support the mobile users (MUs). As a promising technology, digital twin (DT) is capable of timely mapping the physical entities to virtual models, and reflecting the MEC network state in real-time. In this paper, we first propose an MEC network with multiple movable UAVs and one DT-empowered ground base station to enhance the MEC service for MUs. Considering the limited energy resource of both MUs and UAVs, we formulate an online problem of resource scheduling to minimize the weighted energy consumption of them. To tackle the difficulty of the combinational problem, we formulate it as a Markov decision process (MDP) with multiple types of agents. Since the proposed MDP has huge state space and action space, we propose a deep reinforcement learning approach based on multi-agent proximal policy optimization (MAPPO) with Beta distribution and attention mechanism to pursue the optimal computation offloading policy. Numerical results show that our proposed scheme is able to efficiently reduce the energy consumption and outperforms the benchmarks in performance, convergence speed and utilization of resources.
Mobile edge computing, unmanned aerial vehicle, computation offloading, deep reinforcement learning, digital twin.
## I Introduction
With the rapid advance of Internet of Things (IoT) technology and the extensive deployment of 5G networks, the envision and planning of future 6G networks is in progress. It is expected to be empowered with the ability of providing intelligent and green communication features, and realizing the ubiquitous connection between massive objectives [1, 2, 3]. In order to achieve these requirements, the quality of service and application experience in 6G network need to be significantly enhanced, which involves higher requirements for the computing resources and communication resources of various devices [4]. In this sense, mobile edge computing (MEC) as a popular paradigm naturally arises to alleviate the pressure on resource-constrained devices where the heterogeneous resource-intensive applications are performed. Specifically, the devices can offload their computational tasks to edge servers with powerful computing resources to pursue satisfactory energy consumption and latency [5, 6].
However, the transmission links of terrestrial MEC servers may be influenced by blockage, and the mobile users (MUs) located away from the servers are typically not well covered. Especially in some emergency situations such as disaster area and hot spot area, the fixed MEC servers are intricate to deploy. Recently, unmanned aerial vehicle (UAV) has emerged as a promising technology to assist traditional infrastructure-based MEC networks due to the representative features of high mobility, flexible deployment, and low cost [7, 8]. By cooperatively carrying MEC servers, UAVs can fly closer to the MUs to offload part of the tasks and provide reliable communication links, which is a fast and cost-effective deployment way for computation offloading with low delivery delay and high data rate requirements [9, 10]. The dynamicity of the MEC networks is further enhanced with the participation of UAVs and MUs, and thus motivates the application of online optimizing in MEC service, especially the integration of artificial intelligence approaches that outperforms traditional offline optimization in decision time and adaptability to the environment.
For the learning-based optimizing techniques, the dynamicity in networks imposes strict requirements on accurate state sensing, real-time decision-making, and the precise execution, which presents technical challenges for implementing the UAV-assisted MEC networks [11, 12]. In view of this, as an emerging technology in the 6G era, digital twin (DT) has come into the research vision, which is capable of maintaining the virtual models of physical entities in digital space and monitoring of the overall system status. Currently, DT has been widely used in smart city, healthcare, and intelligent manufacturing [13]. By collecting massive timely data from real space, DT can also realize efficient model training in dynamic MEC networks, thus providing the MEC services with more intelligent decisions.
### _Related Work_
In recent years, UAV-assisted MEC has attracted growing research interests, and extensive research efforts have been conducted to enhance network performance in various scenarios. To exploit the performance of UAV in special situations, Do-Duy _et al._[14] and Tran _et al._[15] studied the joint deployment and resource configuration in UAV-aided disaster emergency communications by using iterative methods. To pursue the real-time decision of UAV-assisted
networks, Wang _et al._[16] integrated deep reinforcement learning (DRL) method into iterative convex optimization approaches, and Dai _et al._[17] studied the minimization of the execution latency in an edge-cloud heterogeneous network comprising UAVs and base stations via DRL approach. For some distributed scenarios, multi-agent reinforcement learning (MARL) emerges as a mighty technique that trains efficient decentralized models, and thereby avoids the dependence of centralized DRL on the whole network state. Wang _et al._[18] considered the load balance in multi-UAV-assisted MEC under the constraints of users' energy consumption. Peng _et al._[19] investigated the utilization of MARL to solve the dynamic network resource allocation in UAV-assisted vehicular edge networks. Cai _et al._[20] studied the data sensing and computation offloading problem in a multi-UAV-assisted MEC, and integrated attention mechanism into MARL algorithm to accelerate the training speed of the network. To further investigate the cooperation of multiple agents, Ji _et al._[10] exploited the acquisition latency minimization in multi-UAV cellular networks via MARL. Zhao _et al._[21] investigated the cooperation between multi-UAV and multi-ground MEC servers to minimize the total network utility and enhance the performance of service.
In view of incomparable merits of DT, the idea of combining DT and MEC has made many attempts to build a digital twin edge network (DITEN), but the current research on DITEN is still in its infancy. To fully exploit the capability of DT network for capturing the real-time state of the network, Sun _et al._[22] investigated the latency minimization problem of DITEN via DRL method and considered the estimated deviation of DT. Sun _et al._[23] developed an approach to automatically adjust the aggregation frequency of federated learning in DITEN network. Lu _et al._[24] proposed a blockchain-based federated learning framework that runs in DT for collaborative computing, improving the reliability and security of the system and enhancing data privacy. Liu _et al._[25] introduced edge collaboration into the DITEN network and realized intelligent offloading of tasks. As a step forward, Do-Duy _et al._[26] iteratively optimized the joint resources of mobile devices and multiple edge servers to minimize the total latency in industrial IoT networks, and Zhang _et al._[27] combined DT technology and decomposed MARL into the design of vehicle edge computing networks. Recently, Li _et al._[28] focused on the DT-aided computation offloading in UAV edge networks where the double deep Q-network algorithm is explored to reduce the system energy consumption.
### _Contributions and Organization_
The above-mentioned excellent works have investigated the UAV/DT networks from various aspects. However, the scenarios that involve the joint cooperation among UAVs, base station (BS), and MUs have not been fully exploited. Especially when MUs are deployed in remote or disaster areas, they are not able to directly communicate and synchronize status with BS and DT. Therefore, different from these prior studies, we in this paper propose a real-time distributed optimization framework for the multi-UAV-assisted MEC networks based on MARL, where the policy model can be deployed on MUs and UAVs to make resource allocation decisions, and a DT layer is deployed at the BS to pursue timely monitoring and centralized training, thereby saving the energy of model training on resource-limited MUs and UAVs. Particularly, the main contributions of this paper are presented as follows.
1. We propose a multi-UAV-assisted MEC network with air-ground cooperation to provide computing services for the resource-limited MUs, where the interplay between the physical environment and the DT layer is taken into account. The MUs can partially offload their computational tasks to UAVs, or relay them to the BS via UAVs for further processing. Considering the energy limits of both MUs and UAVs, a weighted energy consumption minimization problem is formulated by jointly designing the association, offloading proportion, trajectory control, and resource allocation on computation and communication.
2. To tackle the high complexity of the formulated real-time problem, we formulate it as a Markov decision process (MDP), and address it with an MARL framework with heterogeneous agents. To handle the high-dimensional hybrid action spaces, the multi-agent proximal policy optimization (MAPPO) algorithm is utilized to train both MUs and UAVs to cooperatively make offloading decisions in dynamic MEC networks.
3. To enhance the training performance, we adopt the Beta distribution and an attention mechanism in the actor and critic networks, respectively. Simulation results show the efficient training convergence and the effectiveness of our proposed MAPPO with attention mechanism and Beta distribution (AB-MAPPO) algorithm in optimizing energy consumption. The proposed algorithm outperforms other benchmarks, especially in the utilization of computational resources and the flexibility to different deployment scenarios.
The rest of this paper is organized as follows. We first present the system model and formulate the joint problem in Section II. The design of AB-MAPPO algorithm is given in Section III. Section IV presents the simulation results. Finally, we conclude this paper in Section V.
## II System Model and Problem Formulation
In this section, we first introduce a multi-UAV-assisted MEC system model, including network model, mobility model of MUs, DT model, communication model, and computation model. Then, we formulate the optimization problem to minimize the weighted energy consumption of MUs and UAVs while ensuring the delay requirement of the tasks.
### _Network Model_
As shown in Fig. 1, we consider a multi-UAV-assisted air-ground MEC network containing a BS, \(M\) UAVs, and \(K\) MUs. For simplicity, we denote the set of MUs by \(\mathcal{K}\triangleq\{1,2,\ldots,K\}\), the set of UAVs by \(\mathcal{M}\triangleq\{1,2,\ldots,M\}\), and the index of BS by \(M+1\). The MUs typically have limited battery life and computing capability, which are not capable of completing their resource-intensive tasks efficiently. We
assume that UAVs and BS are equipped with MEC servers to provide computing service for MUs. The set of servers is denoted by \(\mathcal{M}^{\star}\triangleq\mathcal{M}\cup\{M+1\}\). Without loss of generality, we introduce a time period \(T\), which is divided into \(N\) time slots with time of \(\delta_{t}\). The set of time slots is denoted as \(\mathcal{N}\triangleq\{1,2,\ldots,N\}\). Using three-dimensional Cartesian coordinate system, the locations of MU \(k\), UAV \(m\) and BS at time slot \(n\) are denoted as \(\mathbf{u}_{k}[n]=[x_{k}^{\mathrm{MU}}[n],y_{k}^{\mathrm{MU}}[n],0]^{\mathsf{ T}}\), \(\mathbf{q}_{m}[n]=[x_{m}^{\mathrm{UAV}}[n],y_{m}^{\mathrm{UAV}}[n],H]^{\mathsf{T}}\), and \(\mathbf{u}_{\mathbf{BS}}=[x_{\mathbf{BS}},y_{\mathbf{BS}},H_{\mathsf{B}}]^{ \mathsf{T}}\), respectively. Note that the UAVs are flying at a fixed altitude \(H\). To establish an efficient mapping of the physical MEC network, a DT layer is deployed at BS and is equipped with essential functions of data storage, status synchronization and model training. The main notations in this paper are summarized in Table I.
### _Mobility Model of MUs_
At the beginning of time slot \(n=1\), all MUs are randomly located, and the MUs are moving according to Gaussian-Markov random model [29]. Considering that each time slot is of short period, the location of MUs are assumed to be static during one time slot. Specifically, at each time slot \(n\), the velocity of \(v_{k}[n]\) and direction \(\theta_{k}[n]\) of MU \(k\) are respectively given by
\[v_{k}[n] =\mu_{1}v_{k}[n-1]+(1-\mu_{1})\bar{s}+\sqrt{1-\mu_{1}^{2}}\Phi_{k}, \tag{1a}\] \[\theta_{k}[n] =\mu_{2}\theta_{k}[n-1]+(1-\mu_{2})\bar{\theta}+\sqrt{1-\mu_{2}^{2 }}\Psi_{k}, \tag{1b}\]
where \(0\leq\mu_{1},\mu_{2}\leq 1\) are the parameters representing the effect of the previous state, and \(\bar{s}\) and \(\bar{\theta}\) are the average velocity and direction of all MUs, respectively. Also, \(\Phi_{k}\) and \(\Psi_{k}\) are generated by two independent Gaussian distributions with mean-variance pairs \((\bar{\xi}_{v_{k}},\varsigma_{v_{k}}^{2})\) and \((\bar{\xi}_{\theta_{k}},\varsigma_{\theta_{k}}^{2})\) for MU \(k\), respectively. Therefore, the coordinates of MUs can be updated as
\[x_{k}^{\mathrm{MU}}[n] =x_{k}^{\mathrm{MU}}[n-1]+v_{k}[n-1]\cos(\theta_{k}[n-1])\delta_{ t}, \tag{2a}\] \[y_{k}^{\mathrm{MU}}[n] =y_{k}^{\mathrm{MU}}[n-1]+v_{k}[n-1]\sin(\theta_{k}[n-1])\delta_{ t}. \tag{2b}\]
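For illustration, a minimal sketch of one Gaussian-Markov update following Eqs. (1)-(2) is given below; the numerical parameters are placeholders and, for brevity, the innovations \(\Phi_{k}\) and \(\Psi_{k}\) are drawn as zero-mean Gaussians.

```python
import numpy as np

def step_mobility(x, y, v, theta, dt=1.0, mu1=0.8, mu2=0.8,
                  s_bar=1.0, theta_bar=np.pi / 4, sigma_v=0.2, sigma_theta=0.2, rng=None):
    """One slot of the Gaussian-Markov mobility model.

    The position is advanced with the previous slot's velocity and direction (Eq. (2)),
    after which the velocity and direction memory processes are updated (Eq. (1)).
    """
    rng = rng if rng is not None else np.random.default_rng()
    x_new = x + v * np.cos(theta) * dt                                                              # Eq. (2a)
    y_new = y + v * np.sin(theta) * dt                                                              # Eq. (2b)
    v_new = mu1 * v + (1 - mu1) * s_bar + np.sqrt(1 - mu1**2) * rng.normal(0.0, sigma_v)            # Eq. (1a)
    th_new = mu2 * theta + (1 - mu2) * theta_bar + np.sqrt(1 - mu2**2) * rng.normal(0.0, sigma_theta)  # Eq. (1b)
    return x_new, y_new, v_new, th_new

x, y, v, th = 0.0, 0.0, 1.0, 0.0
rng = np.random.default_rng(42)
for _ in range(5):                                      # simulate five time slots for one MU
    x, y, v, th = step_mobility(x, y, v, th, rng=rng)
print(round(x, 2), round(y, 2))
```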
### _DT Model of the UAV-assisted MEC Network_
In the considered multi-UAV network with MEC service, the DT layer deployed at BS is responsible for state monitoring and virtual twin mapping. To maintain the virtual twins, the physical devices need to upload the DT information of themselves to the DT layer. In this paper, two types of entities are represented, i.e., MUs and the UAVs. To model the virtual twins of them, we focus on their key features which efficiently represent their real-time state corresponding to the optimization scheme.
The virtual twin of each MU \(k\) needs to record its location and task information at each time slot \(n\), which is given by
\[\mathrm{DT}_{k}[n]=\{\mathbf{u}_{k}[n],\Omega_{k}[n],\tilde{f}_{k}^{\mathrm{ loc}}[n]\}, \tag{3}\]
where \(\Omega_{k}[n]\) is the task information of MU \(k\) and \(\tilde{f}_{k}^{\mathrm{loc}}[n]\) is its estimated computational frequency, which will be illustrated in the following subsection.
The DT of each UAV should reflect its service status, involving the resource allocation and movement, and is denoted as
\[\mathrm{DT}_{m}^{U}[n]=\{\mathbf{q}_{m}[n],\alpha_{k,m}[n],\tilde{f}_{k,m}[n], \forall k\in\mathcal{K}\}, \tag{4}\]
where \(\alpha_{k,m}[n]\) and \(\tilde{f}_{k,m}[n]\) denote the association factor and the estimated computational resource allocated to MU \(k\) by UAV \(m\), respectively.
### _Computation Model_
Each MU generates a task \(\Omega_{k}[n]=(L_{k}[n],C_{k}[n])\) with a latency requirement of \(\delta_{t}\) at the beginning of each time slot \(n\), where \(L_{k}[n]\) and \(C_{k}[n]\) denote the size of the input data and the average number of central processing unit (CPU) cycles for processing one bit of MU \(k\)'s task, respectively. Each task can be arbitrarily divided into two parts that are executed concurrently, with \((1-\rho_{k}[n])L_{k}[n]\) bits of input data for local computing, and \(\rho_{k}[n]L_{k}[n]\) bits offloaded to the MEC servers associated with the BS and UAVs for edge computing. Denoting the offloading association of MU \(k\) at time slot \(n\) as \(\alpha_{k,m}[n]\), we have
\[\alpha_{k,m}[n]\in\{0,1\},\forall k\in\mathcal{K},m\in\mathcal{M}^{\star},n \in\mathcal{N}. \tag{5}\]
Fig. 1: System model.
The details are illustrated as follows.
#### Iii-B1 Local computing
With \(\alpha_{k,0}[n]=1\), MU \(k\) computes total task by itself locally, indicating that \(\rho_{k}[n]=0\).
According to prior MEC research [17], by adopting dynamic voltage frequency scaling (DVFS) technique, the estimated frequency of MU \(k\) at time slot \(n\) in DT can be denoted as
\[\tilde{f}_{k}^{\text{loc}}[n]=\min\{Y_{k}^{\text{loc}}[n]/\delta_{t},f_{\max}^{ \text{loc}}\}, \tag{6}\]
where \(f_{\max}^{\text{loc}}\) denotes the maximum CPU frequency of MUs, and \(Y_{k}^{\text{loc}}[n]=(1-\rho_{k}[n])L_{k}[n]C_{k}[n]\) denotes the total number of computing cycles of MU \(k\)'s task. That is, for the energy-efficiency objective, DVFS lets the task be processed with the minimum CPU frequency required to complete it within the slot; thus we have \(\tilde{f}_{k}^{\text{loc}}[n]=Y_{k}^{\text{loc}}[n]/\delta_{t}\), with \(\tilde{f}_{k}^{\text{loc}}[n]\) limited by the maximum CPU frequency \(f_{\max}^{\text{loc}}\).
It is noteworthy that the DT layer cannot fully reflect the state of MUs and UAVs due to issues such as the hysteresis of status synchronization and transmission errors. Thus, a deviation is introduced to model the estimation error of DT, which can also be utilized to verify the robustness of the system. The deviation may arise in the computational frequency [22, 25, 26] and in the location data. Since \(\tilde{f}_{k}^{\text{loc}}[n]\) is the CPU frequency of MU \(k\) estimated in the DT, the estimated local computing time of MU \(k\) at time slot \(n\) can be expressed as \(\tilde{T}_{k}^{\text{loc}}[n]=Y_{k}^{\text{loc}}[n]/\tilde{f}_{k}^{\text{loc}}[n]\). By denoting \(\hat{f}_{k}^{\text{loc}}[n]\) as the estimation deviation, such that the actual frequency is \(f_{k}^{\text{loc}}[n]=\tilde{f}_{k}^{\text{loc}}[n]+\hat{f}_{k}^{\text{loc}}[n]\), the computing latency gap of MU \(k\) between the DT and the actual value can be expressed as
\[\Delta T_{k}^{\text{loc}}[n]=\frac{-Y_{k}^{\text{loc}}[n]\hat{f}_{k}^{\text{loc}}[n]}{\tilde{f}_{k}^{\text{loc}}[n]\left(\tilde{f}_{k}^{\text{loc}}[n]+\hat{f}_{k}^{\text{loc}}[n]\right)}. \tag{7}\]
As a result, the actual local computing time of MU \(k\) can be calculated by
\[T_{k}^{\text{loc}}[n]=\tilde{T}_{k}^{\text{loc}}[n]+\Delta T_{k}^{\text{loc}} [n]. \tag{8}\]
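A small numerical sketch of Eqs. (6)-(8) is given below; the task parameters and the deviation value are placeholders, chosen only to check that the DT-estimated latency plus the gap of Eq. (7) reproduces the actual local computing time.

```python
def local_latency_with_dt_gap(L, C, rho, f_max, f_dev, dt=1.0):
    """Local computing latency seen in the DT versus on the actual device.

    L, C : task size [bits] and CPU cycles per bit; rho : offloaded fraction;
    f_max: maximum local CPU frequency [Hz]; f_dev: DT estimation deviation [Hz].
    """
    Y_loc = (1.0 - rho) * L * C                         # cycles computed locally
    f_est = min(Y_loc / dt, f_max)                      # DVFS frequency estimated in the DT, Eq. (6)
    T_est = Y_loc / f_est                               # latency predicted by the DT
    gap = -Y_loc * f_dev / (f_est * (f_est + f_dev))    # latency gap, Eq. (7)
    return T_est, T_est + gap                           # (DT estimate, actual latency), Eq. (8)

T_dt, T_real = local_latency_with_dt_gap(L=2e5, C=1e3, rho=0.5, f_max=1e9, f_dev=-2e7)
print(round(T_dt, 4), round(T_real, 4))                 # 1.0 s estimated vs. 1.25 s actual
```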
#### Iii-B2 Edge computing
MUs can request UAVs for offloading their tasks to MEC servers at UAVs or BSs. The procedures are respectively illustrated as follows.
* **UAV computing**: A part of MU \(k\)'s task is transmitted to UAV \(m\), and executed by the MEC server on UAV. In this case, we have \(\alpha_{k,m}[n]=1\) and \(\alpha_{k,M+1}[n]=0\).
* **BS computing**: Considering the complex ground environment such as obstacle blocking, the MU-BS links are too poor to support transmission. Therefore, a certain part of task is first transmitted to UAV \(m\), and further relayed to BS for executing. To this end, we have \(\alpha_{k,m}[n]=\alpha_{k,M+1}[n]=1\).
We adopt a probabilistic line-of-sight (LoS) channel model for the UAV-MU links. The probability of geometrical LoS between a UAV and an MU depends on the environment and the elevation angle. Denoting the elevation angle as \(\vartheta_{k}[n]\), the LoS probability between MU \(k\) and UAV \(m\) at time slot \(n\), \(\mathbb{P}(\text{LoS},\vartheta_{k}[n])\), is approximated by the following form
\[\mathbb{P}(\text{LoS},\vartheta_{k}[n])=\frac{1}{1+a\text{exp}\left(-b( \vartheta_{k}[n]-a)\right)}, \tag{9}\]
where \(a\) and \(b\) are the parameters related to environment, and \(\vartheta_{k}[n]\) is given by
\[\vartheta_{k}[n]=\frac{180}{\pi}\arctan\left(\frac{H}{\|\mathbf{u}_{k}[n]- \mathbf{q}_{m}[n]\|}\right). \tag{10}\]
Additionally, the non-line-of-sight (NLoS) probability can be expressed as \(\mathbb{P}(\text{NLoS},\vartheta_{k}[n])=1-\mathbb{P}(\text{LoS},\vartheta_{k }[n])\). Therefore, the expected channel power gain from MU \(k\) to UAV \(m\) is given by
\[h_{k,m}[n]=\frac{\beta_{0}\mathbb{P}(\text{LoS},\vartheta_{k}[n])+\nu\beta_{0}\mathbb{P}(\text{NLoS},\vartheta_{k}[n])}{\|\mathbf{u}_{k}[n]-\mathbf{q}_{m}[n]\|^{\tilde{\iota}}}, \tag{11}\]
where \(\tilde{\iota}\) is the path loss exponent, \(\nu\) is the NLoS attenuation, \(\beta_{0}\) is the channel power gain at the reference distance of 1 m.
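A small sketch of Eqs. (9)-(11) is given below; the environment parameters and the way the 2D positions are passed in are illustrative assumptions, not the paper's simulation settings.

```python
import math

def expected_gain(u_k, q_m, H, a, b, beta0, nu, iota):
    """Expected MU-UAV channel power gain under the probabilistic LoS model."""
    d = math.dist(u_k, q_m)                               # horizontal MU-UAV distance
    theta = math.degrees(math.atan(H / d))                # elevation angle, Eq. (10)
    p_los = 1.0 / (1.0 + a * math.exp(-b * (theta - a)))  # LoS probability, Eq. (9)
    p_nlos = 1.0 - p_los
    return beta0 * (p_los + nu * p_nlos) / d ** iota      # Eq. (11)

# illustrative values for an urban-like environment, UAV altitude 200 m
print(expected_gain((100.0, 200.0), (150.0, 260.0), H=200.0,
                    a=9.61, b=0.16, beta0=1e-5, nu=0.2, iota=2.0))
```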
Similar to [30], we assume that the change of LoS probability \(\mathbb{P}(\text{LoS},\vartheta_{k}[n])\) in each time slot is negligible. Furthermore, channel from UAV \(m\) to BS can be modeled by quasi-static block fading LoS link [31], i.e.,
\[h_{m}^{\text{rel}}[n]=\frac{\beta_{0}}{\|\mathbf{q}_{m}[n]-\mathbf{u}_{\text{BS }}\|^{2}}. \tag{12}\]
We consider the orthogonal frequency division multiple access scheme for data transmission. In each time slot, MU \(k\) first requests offloading, and then UAV \(m\) allocates an orthogonal frequency bandwidth of \(B_{k,m}[n]\) for transmission. Therefore, the transmit rates between MU \(k\) and the UAVs, and between UAV \(m\) and the BS, are given by
\[R_{k}[n]=\sum_{m=1}^{M}\alpha_{k,m}[n]B_{k,m}[n]\log_{2}\left(1+\frac{p_{k}h_{ k,m}[n]}{B_{k,m}[n]N_{0}}\right), \tag{13}\]
\[R_{m}^{\text{rel}}[n]=B_{u}\log_{2}\left(1+\frac{p_{m}h_{m}^{\text{rel}}[n]}{B_ {u}N_{0}}\right), \tag{14}\]
where \(p_{k}\) and \(p_{m}\) denote the uplink transmit power of MU \(k\) and UAV \(m\), respectively, \(B_{u}\) is the bandwidth allocated to UAVs for relaying, and \(N_{0}\) is the noise power density.
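As an illustration of Eq. (13), the rate seen by one MU can be computed as below; the numbers are placeholders and the helper name is an assumption.

```python
import math

def mu_uplink_rate(alpha, B_alloc, p_k, h, N0):
    """Uplink rate of one MU over its allocated orthogonal bandwidths, Eq. (13).
    alpha, B_alloc and h are lists over the M UAVs."""
    rate = 0.0
    for a, B, g in zip(alpha, B_alloc, h):
        if a and B > 0.0:                      # only associated links carry traffic
            rate += B * math.log2(1.0 + p_k * g / (B * N0))
    return rate

# MU associated with the second of M = 2 UAVs, 1 MHz bandwidth, 0.1 W power
print(mu_uplink_rate([0, 1], [0.0, 1e6], 0.1, [1e-9, 1e-8], 1e-17))  # ~6.6 Mbit/s
```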
Since each MU can associate with at most one UAV or BS in each time slot \(n\), and the bandwidth should be only allocated to links between associated devices, we have
\[\sum_{m=0}^{M}\alpha_{k,m}[n]=1,\forall k\in\mathcal{K},n\in \mathcal{N}, \tag{15}\]
\[\sum_{m=1}^{M}\sum_{k=1}^{K}B_{k,m}[n]\leq B,\forall n\in\mathcal{N}, \tag{16}\]
\[B_{k,m}[n]\geq 0,\forall k\in\mathcal{K},\forall m\in\mathcal{M},n\in \mathcal{N}, \tag{17}\]
\[\lceil B_{k,m}[n]/B\rceil=\alpha_{k,m}[n],\forall k\in\mathcal{K},m\in \mathcal{M},n\in\mathcal{N}, \tag{18}\]
where \(B\) denotes the available channel bandwidth. Then the total size of the tasks transmitted to UAV \(m\) by the MUs and the CPU cycles of MU \(k\)'s task executed by UAV \(m\) can be calculated, respectively, as
\[L_{m}^{\text{tmss}}[n]=\sum_{k=1}^{K}\alpha_{k,m}[n]\rho_{k}[n]L_{k}[n], \tag{19}\]
\[Y_{k,m}^{\text{edge}}[n]=\alpha_{k,m}[n](1-\alpha_{k,M+1}[n])\rho_{k}[n]L_{k}[ n]C_{k}[n]. \tag{20}\]
Following prior research [17, 32], we assume that the computation result, which is small in size, can be downloaded to the MU with negligible transmission latency. Additionally, the BS is equipped with a stable energy supply and a powerful MEC server; hence, the computing time and energy at the BS are not considered. Thus, the total estimated edge computing latency of MU \(k\) can be expressed as
\[T_{k}^{\text{edge}}[n]= \sum_{m=1}^{M}\alpha_{k,m}[n]\left(T_{k}^{\text{off}}[n]+\alpha_ {k,M+1}[n]T_{k,m}^{\text{rel}}[n]\right.\] \[\left.+(1-\alpha_{k,M+1}[n])T_{k,m}^{\text{comp}}[n]\right), \tag{21}\]
where \(T_{k}^{\text{off}}[n]=L_{k}[n]\rho_{k}[n]/R_{k}[n]\) denotes the transmission delay of MU \(k\), \(T_{k,m}^{\text{rel}}[n]=L_{k}[n]\rho_{k}[n]/R_{m}^{\text{rel}}[n]\) denotes the time for UAV \(m\) to relay the task to the BS, and \(\tilde{f}_{k,m}^{\text{edge}}[n]\) is the estimated CPU frequency of server \(m\) allocated to MU \(k\) at time slot \(n\). Accordingly, the computing latency gap of UAV \(m\) between the DT and the actual value can be expressed as
\[\Delta T_{k,m}^{\text{comp}}[n]=\frac{-Y_{k,m}^{\text{edge}}[n]\hat{f}_{k,m}^{\text{edge}}[n]}{\tilde{f}_{k,m}^{\text{edge}}[n]\left(\tilde{f}_{k,m}^{\text{edge}}[n]+\hat{f}_{k,m}^{\text{edge}}[n]\right)}, \tag{22}\]
where \(\hat{f}_{k,m}^{\text{edge}}[n]\) is the estimation deviation of \(f_{k,m}^{\text{edge}}[n]\). Denoting \(\tilde{T}_{k,m}^{\text{comp}}[n]=Y_{k,m}^{\text{edge}}[n]/\tilde{f}_{k,m}^{\text{edge}}[n]\) as the estimated time of UAV \(m\) for computing the task of MU \(k\), the actual edge computing time of MU \(k\) at each time slot \(n\) can be calculated by
\[T_{k,m}^{\text{comp}}[n]=\tilde{T}_{k,m}^{\text{comp}}[n]+\Delta T_{k,m}^{ \text{comp}}[n], \tag{23}\]
and the actual CPU frequency allocated to MU \(k\) by UAV \(m\) is given by \(f_{k,m}^{\text{edge}}[n]=\tilde{f}_{k,m}^{\text{edge}}[n]+\hat{f}_{k,m}^{\text{edge}}[n]\). Since CPU frequency should not be allocated to MUs that do not offload to the UAVs, i.e., those computing locally or at the BS, we have
\[\lceil\tilde{f}_{k,m}^{\text{edge}}[n]/f_{\max}^{\text{edge}}\rceil= \alpha_{k,m}[n](1-\alpha_{k,M+1}[n]),\] \[\forall k\in\mathcal{K},m\in\mathcal{M},n\in\mathcal{N}. \tag{24}\]
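The edge-computing latency of a single MU served by one UAV (Eqs. (19)-(23)) can be sketched as follows; the BS branch is simplified by ignoring the BS computing time, as stated above, and all numbers are illustrative.

```python
def edge_latency(L, C, rho, relay_to_bs, R_up, R_rel, f_tilde, f_hat):
    """Edge-computing latency of one MU served by one UAV, Eqs. (21)-(23).
    relay_to_bs plays the role of alpha_{k,M+1}[n]."""
    T_off = rho * L / R_up                              # MU -> UAV transmission
    if relay_to_bs:                                     # BS branch: relay delay only
        return T_off + rho * L / R_rel
    Y = rho * L * C                                     # cycles on the UAV, Eq. (20)
    T_comp = Y / f_tilde                                # DT-estimated computing time
    dT = -Y * f_hat / (f_tilde * (f_tilde + f_hat))     # latency gap, Eq. (22)
    return T_off + T_comp + dT                          # Eq. (21) with alpha_{k,m}[n] = 1

print(edge_latency(5e5, 1e3, 0.6, False, 5e6, 2e7, 2e9, 1e8))  # about 0.20 s
```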
For the location data, random noise is imposed on the observations of the devices and thus influences the decisions of the algorithm, as will be illustrated in Section III.
### _Energy Consumption Model_
As the entities (MUs and UAVs) can upload their essential status information to the DT layer via the UAVs over a dedicated bandwidth, the energy consumption can be evaluated by the DT layer. The energy models of the MUs and UAVs are illustrated as follows.
#### Iii-E1 MU energy consumption
The energy consumption of MU \(k\) comprises the local computing energy and transmit energy, which are denoted as \(E_{k}^{\text{loc}}[n]\) and \(E_{k}^{\text{off}}[n]\), respectively. Therefore, we have
\[E_{k}^{\text{loc}}[n]=\kappa f_{k}^{\text{loc}}[n]^{2}Y_{k}^{\text{loc}}[n], \tag{25}\]
\[E_{k}^{\text{off}}[n]=p_{k}[n]T_{k}^{\text{off}}[n], \tag{26}\]
where \(\kappa\) is the effective capacitance coefficient determined by chip architecture.
#### Iii-E2 UAV energy consumption
The energy consumption of UAVs comprises flying and computation energy. In each time slot \(n\), UAV \(m\) is assumed to fly subject to the maximum speed \(v_{\max}\) and the maximum acceleration \(a_{\max}\), which indicates that
\[\|\mathbf{a}_{m}[n]\|=\frac{\|\mathbf{v}_{m}[n+1]-\mathbf{v}_{m}[n]\|}{\delta_{t }}\leq a_{\max},\forall m\in\mathcal{M},n\in\mathcal{N}, \tag{27}\]
\[\mathbf{q}_{m}[n+1]=\mathbf{q}_{m}[n]+\mathbf{v}_{m}[n]\delta_{t}+\frac{1}{2} \mathbf{a}_{m}[n]\delta_{t}^{2},\forall m\in\mathcal{M},n\in\mathcal{N}, \tag{28}\]
\[\|\mathbf{v}_{m}[n]\|=\frac{\|\mathbf{q}_{m}[n+1]-\mathbf{q}_{m}[n]\|}{\delta_{t }}\leq v_{\max},\forall m\in\mathcal{M},n\in\mathcal{N}. \tag{29}\]
Subsequently, we adopt the UAV propulsion energy model introduced in [33]; the flying energy of UAV \(m\) is then expressed as follows
\[E_{m}^{\text{fly}}[n]=\Bigg{[}\frac{1}{2}d_{0}\rho sA\|\mathbf{v}_{m}[n]\|^{3}+P_{0}\left(1+\frac{3\|\mathbf{v}_{m}[n]\|^{2}}{U_{\text{tip}}^{2}}\right)+P_{i}\left(\sqrt{1+\frac{\|\mathbf{v}_{m}[n]\|^{4}}{4v_{0}^{4}}}-\frac{\|\mathbf{v}_{m}[n]\|^{2}}{2v_{0}^{2}}\right)\Bigg{]}\delta_{t}, \tag{30}\]
where \(P_{0}\) and \(P_{i}\) are the blade profile power and induced power in hovering status, respectively, \(U_{\text{tip}}\) is the tip speed of the rotor blade, \(v_{0}\) is the mean rotor velocity, \(d_{0}\) is fuselage drag ratio, \(s\) denotes the rotor solidity, \(\rho\) denotes the air density, and \(A\) denotes the rotor disc area.
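For reference, a sketch of the propulsion energy in Eq. (30) is shown below; the default parameter values are commonly used settings for this rotary-wing model and are assumptions here, not values from this paper's simulation tables.

```python
import math

def fly_energy(speed, delta_t, P0=79.86, Pi=88.63, U_tip=120.0, v0=4.03,
               d0=0.6, rho_air=1.225, s=0.05, A=0.503):
    """Rotary-wing propulsion energy over one slot, Eq. (30); speed = ||v_m[n]||."""
    blade = P0 * (1.0 + 3.0 * speed ** 2 / U_tip ** 2)
    induced = Pi * (math.sqrt(1.0 + speed ** 4 / (4.0 * v0 ** 4))
                    - speed ** 2 / (2.0 * v0 ** 2))
    parasite = 0.5 * d0 * rho_air * s * A * speed ** 3
    return (blade + induced + parasite) * delta_t

print(fly_energy(speed=10.0, delta_t=1.0))  # roughly 100 J for this setting
```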
Note that the communication energy of a UAV is relatively small compared to the energy consumed by intensive computation and flying, and can thus be neglected. The computation energy of UAV \(m\) is calculated as
\[E_{m}^{\text{edge}}[n]=\sum_{k=1}^{K}\kappa\alpha_{k,m}[n]f_{k,m}^{\text{edge}}[n]^{2}Y_{k,m}^{\text{edge}}[n]. \tag{31}\]
As a consequence, the overall energy consumptions of MU \(k\) and UAV \(m\) are calculated by
\[E_{k}^{\text{MU}}[n] =E_{k}^{\text{loc}}[n]+E_{k}^{\text{off}}[n], \tag{32}\] \[E_{m}^{\text{UAV}}[n] =E_{m}^{\text{fly}}[n]+E_{m}^{\text{edge}}[n]. \tag{33}\]
### _Problem Formulation_
Considering the deficient energy resource of MUs and limited energy budget of UAVs, we pursue a multi-UAV-assisted MEC network with air-ground cooperation design, aiming at minimizing the weighted energy consumption of MUs and UAVs with latency sensitive tasks, by jointly optimizing the MU association \(\mathbf{A}\triangleq\{\alpha_{k,m}[n],\forall n\in\mathcal{N},k\in\mathcal{K },m\in\mathcal{M}^{*}\cup\{0\}\}\), the offloading proportion \(\boldsymbol{\varrho}\triangleq\{\rho_{k}[n],\forall n\in\mathcal{N},k\in \mathcal{K}\}\), the CPU frequency allocation \(\mathbf{F}\triangleq\{f_{k,m}^{\text{edge}}[n],\forall m\in\mathcal{M},k\in \mathcal{K},n\in\mathcal{N}\}\), the bandwidth allocation \(\mathbf{B}\triangleq\{B_{k,m}[n],\forall m\in\mathcal{M},k\in\mathcal{K},n \in\mathcal{N}\}\), and the UAV velocity \(\mathbf{V}\triangleq\{\mathbf{v}_{m}[n],\forall m\in\mathcal{M},n\in\mathcal{ N}\}\). Therefore, the weighted energy minimization problem is formulated as
\[\mathcal{P}1:\ \min_{\mathbf{A},\mathbf{B},\mathbf{F},\boldsymbol{\varrho},\mathbf{V}}\ \varpi\sum_{n=1}^{N}\sum_{m=1}^{M}E_{m}^{\text{UAV}}[n]+\sum_{n=1}^{N}\sum_{k=1}^{K}E_{k}^{\text{MU}}[n] \tag{34a}\]
\[\text{s.t. }(5),(15)\text{--}(18),(24),(27)\text{--}(29), \tag{34b}\]
\[0\leq\rho_{k}[n]\leq 1,\forall k\in\mathcal{K},n\in\mathcal{N}, \tag{34c}\]
\[\lceil\rho_{k}[n]\rceil=1-\alpha_{k,0}[n],\forall k\in\mathcal{K},n\in\mathcal{N}, \tag{34d}\]
\[\sum_{k=1}^{K}f_{k,m}^{\text{edge}}[n]\leq f_{\max}^{\text{edge}},\forall m\in\mathcal{M},n\in\mathcal{N}, \tag{34e}\]
\[T_{k}^{\text{loc}}[n]\leq\delta_{t},\forall k\in\mathcal{K},\forall m\in\mathcal{M},n\in\mathcal{N}, \tag{34f}\]
\[T_{k}^{\text{edge}}[n]\leq\delta_{t},\forall k\in\mathcal{K},\forall m\in\mathcal{M},n\in\mathcal{N}, \tag{34g}\]
\[\|\mathbf{q}_{i}[n]-\mathbf{q}_{j}[n]\|^{2}\geq d_{\min}^{2},\forall i,j\in\mathcal{M},i\neq j,n\in\mathcal{N}, \tag{34h}\]
where \(\varpi\) is the weight factor and \(d_{\min}\) denotes the minimum safe distance between the UAVs. Constraints (5) and (15) ensure the feasibility of the association status, constraints (16)-(18) guarantee that the bandwidth is only allocated to valid links, constraints (27)-(29) denote the velocity and acceleration limits of the UAVs, constraints (34c) and (34d) indicate the range and validity of the offloading proportion, constraints (24) and (34e) denote the feasibility of the computational resource allocation, constraints (34f) and (34g) denote the computational latency constraints, and constraint (34h) limits the minimum safe distance between UAVs.
It can be readily seen that \(\mathcal{P}1\) is a mixed-integer, non-convex combinatorial problem with a large number of highly coupled variables. Furthermore, the problem needs to be solved in a timely manner in each time slot \(n\) due to the randomly generated tasks and the movement of devices. Nonetheless, it is very intricate to solve the problem with typical iterative techniques such as alternating optimization or genetic algorithms. In the following, we propose an efficient algorithm for this complicated problem by leveraging the MARL approach.
## III Proposed DRL Approach: AB-MAPPO
In this section, the essential elements of MDP in multi-agent DRL are first illustrated. Then, we endeavor to develop an MAPPO approach with attention mechanism and Beta distribution to address problem \(\mathcal{P}1\).
### _MDP Elements Formulation_
In MARL setup, problem \(\mathcal{P}1\) can be modeled as an MDP with the set of agents \(\mathcal{I}\triangleq\{1,\ldots,K+M\}\), state space \(\mathcal{S}\), action space \(\mathcal{A}=\mathcal{A}_{1}\times\mathcal{A}_{2}\times\cdots\times\mathcal{A} _{K+M}\). The agents can enhance their policy by interacting with the environment in discrete steps. In each step \(t\), each agent \(i\) obtains the current observation \(o_{i}(t)\) from the global environment state \(s(t)\triangleq\{o_{i}(t),\forall i\in\mathcal{I}\}\), takes action \(a_{i}(t)\in\mathcal{A}_{i}\), then obtains a reward \(r_{i}(t)\), and the environment transfers to a new state \(s(t+1)\).
The MDP involves two types of agents, i.e., MUs and UAVs. At the beginning of each step \(t\), the MUs generate actions and then request offloading according to these actions. Afterwards, the UAVs obtain the actions and serve the MUs. The status information is periodically synchronized to the DT if required, and rewards are evaluated centrally by the DT. The formulation of the MDP elements is illustrated as follows.
#### Iii-1 MDP elements of MUs
The MDP elements of MUs involve the observation \(o_{k}(t)\), action \(a_{k}(t)\) and reward \(r_{k}(t)\), which are presented as follows.
* **Observation:** For privacy reasons, each MU can only obtain its own position \(\mathbf{u}_{k}[n]\) via the Global Positioning System, and the positions of the MEC servers \(\mathbf{q}_{m}[n]\) from the broadcast of the DT layer. The position data carries a random deviation, which is also imposed on the observations of the agents. Furthermore, the task information \(\Omega_{k}[n]\) can be observed. As the task load on the UAVs is highly relevant to the MUs' offloading decisions, the previous computation load of the UAVs is also included.
Therefore, the observation of MU \(k\) at step \(t\) can be expressed as \[o_{k}(t)=\Big{\{}\mathbf{u}_{k}[n],\mathbf{q}_{m}[n],\Omega_{k}[n],Y_{k,m}^{ \text{edge}}[n-1],\forall m\in\mathcal{M}\Big{\}}.\] (35)
* **Action:** The decisions of MUs involve offloading association **A**, and offloading proportion \(\boldsymbol{\varrho}\). Therefore, for each MU \(k\), the action is decomposed by \[a_{k}(t)=\Big{\{}\alpha_{k,m}[n],\rho_{k}[n],\forall m\in\mathcal{M}^{*}\cup \{0\}\Big{\}}.\] (36)
* **Reward:** It can be noticed from (34a) that the MUs need to consider their own influence on the total weighted energy consumption and the load on UAV servers. Hence, the reward of each MU \(k\) should involve the energy consumption of both MU \(k\) itself and the UAV associated by MU \(k\). The reward of each MU agent \(k\) is given by \[r_{k}(t)= -\sum_{m=1}^{M}\alpha_{k,m}[n]\left(\varpi E_{m}^{\text{UAV}}[n] +P^{t}(T_{k}^{\text{loc}}[n],T_{m}^{\text{edge}}[n])\right)\] \[-E_{k}^{\text{MU}}[n],\] (37) where \[P^{t}(T_{k}^{\text{loc}}[n],T_{m}^{\text{edge}}[n])= \frac{\mu_{t}}{\delta_{t}}\Big{(}\text{ReLU}(T_{k}^{\text{loc}} [n]-\delta_{t})\] \[+\text{ReLU}(T_{m}^{\text{edge}}[n]-\delta_{t})\Big{)}\] (38) denotes the penalty for unsatisfaction of latency constraints, \(\mu_{t}\) is a penalty factor, and the \(\text{ReLU}(\cdot)\) is the rectified linear unit function.
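The latency penalty of Eq. (38), which is shared by the MU and UAV rewards, reduces to a few lines; the numbers below are illustrative.

```python
def latency_penalty(T_loc, T_edge, delta_t, mu_t):
    """Penalty for violating the latency requirement, Eq. (38)."""
    relu = lambda x: max(x, 0.0)
    return mu_t / delta_t * (relu(T_loc - delta_t) + relu(T_edge - delta_t))

# no penalty from the local task (0.9 s), 0.2 s violation on the edge task
print(latency_penalty(T_loc=0.9, T_edge=1.2, delta_t=1.0, mu_t=5.0))  # 1.0
```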
#### Iii-A2 MDP element of UAVs
Note that each UAV needs to make its decisions after the MUs give their associations. Herein, the MDP elements are defined as follows
* **Observation:** UAV agent \(m\) observes its own location, the locations of all MUs and of the other UAVs, and the task information from its associated MUs. Denoting \(-m\) as the index set of UAVs except for UAV \(m\), we have \[o_{K+m}(t)=\Big{\{}\rho_{k}[n],\Omega_{k}[n],\mathbf{u}_{k}[n],\mathbf{q}_{m}[n],\mathbf{q}_{-m}[n]\Big{\}}.\] (39)
* **Action:** After receiving the requests, UAVs need to allocate the bandwidth, configure the computational frequency, and adjust velocity according to the observations. Therefore, the action of UAV agent \(m\) is given by \[a_{K+m}(t)=\bigg{\{}B_{k,m}[n],\tilde{f}_{k,m}^{\text{edge}}[n],\mathbf{a}_{ m}[n],\forall k\in\mathcal{K}\bigg{\}}.\] (40)
* **Reward:** After receiving, relaying and executing the tasks from the MUs, the UAVs obtain rewards from the environment. The reward of each UAV \(m\) needs to consider the energy consumption of both itself and the served MUs, i.e., \[r_{K+m}(t)=-\sum_{k=1}^{K}\alpha_{k,m}[n]\left(E_{k}^{\text{MU}}[n]+P^{t}(T_{k}^{\text{loc}}[n],T_{m}^{\text{edge}}[n])\right)+\varpi E_{m}^{\text{UAV}}[n]+P^{o}(\mathbf{q}_{m}[n])+P^{c}(\mathbf{q}_{m}[n])+P^{d}(\mathbf{q}_{m}[n]),\] (41) where \[P^{o}(\mathbf{q}_{m}[n])=\mu_{o}\|\mathbf{q}_{m}[n]-\text{clip}(\mathbf{q}_{m}[n],0,W)\|,\] (42) denotes the penalty when UAVs try to fly out of the square boundary with width \(W\), and \(\mu_{o}\) is a penalty factor. \[P^{d}(\mathbf{q}_{m}[n])=\frac{1}{W}\bigg{(}\|\mathbf{q}_{m}[n]-\frac{1}{|\mathcal{K}_{m}|}\sum_{k=1}^{K}\alpha_{k,m}[n]\mathbf{w}_{k}[n]\|-d_{\mathrm{th}}\bigg{)}\] (43) guides the distance from UAV \(m\) to the MUs, where \(d_{\mathrm{th}}\) is a distance threshold, \(\mathcal{K}_{m}\) is the set of MUs associated with UAV \(m\), and \(|\mathcal{K}_{m}|\) its cardinality. In addition, \[P^{c}(\mathbf{q}_{m}[n])=\mu_{c}\sum_{j=1,j\neq m}^{M}\min\Big{\{}\|\mathbf{q}_{m}[n]-\mathbf{q}_{j}[n]\|-d_{\mathrm{min}},0\Big{\}}/d_{\mathrm{min}}\] (44)
is the penalty for disobeying the safety distance \(d_{\mathrm{min}}\) between UAVs, and \(\mu_{c}\) is the corresponding penalty factor. It is assumed that the UAVs stop at the boundary if they try to fly out of it, and thus \(\mathbf{q}_{m}[n]\leftarrow\text{clip}(\mathbf{q}_{m}[n-1]+\mathbf{v}_{m}[n-1]\delta_{t}+\frac{1}{2}\mathbf{a}_{m}[n-1]\delta_{t}^{2},0,W)\).
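The boundary and collision terms of the UAV reward (Eqs. (42) and (44)) can be sketched as follows; the distance-guidance term (43) is omitted for brevity, and the signs of the penalty factors (here \(\mu_{o}<0\), \(\mu_{c}>0\)) are assumptions chosen so that both terms reduce the reward.

```python
import math

def boundary_penalty(q, W, mu_o):
    """Out-of-boundary term P^o of Eq. (42); q = (x, y) position of one UAV."""
    cx, cy = min(max(q[0], 0.0), W), min(max(q[1], 0.0), W)   # clip(q, 0, W)
    return mu_o * math.hypot(q[0] - cx, q[1] - cy)

def collision_penalty(q, q_others, d_min, mu_c):
    """Safety-distance term P^c of Eq. (44); non-positive when mu_c > 0."""
    total = 0.0
    for qj in q_others:
        total += min(math.dist(q, qj) - d_min, 0.0) / d_min
    return mu_c * total

# a UAV 50 m outside a 1000 m region and 46 m too close to its neighbor
print(boundary_penalty((1050.0, 500.0), 1000.0, mu_o=-1.0))                   # -50.0
print(collision_penalty((1050.0, 500.0), [(1000.0, 480.0)], d_min=100.0, mu_c=1.0))
```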
### _MAPPO-based DRL Approach With CTDE_
As a variant of PPO specialized for multi-agent settings, MAPPO is one of the state-of-the-art MARL algorithms [34]. Being an on-policy scheme, each agent has an actor, a critic, and a replay buffer. As the DT is capable of evaluating the global state of the environment, the centralized training and decentralized execution (CTDE) scheme can be adopted. In MAPPO with CTDE, the critics evaluate the centralized state-value function of the global environment state. Specifically, the agents repeatedly feed the observations \(o_{i}(t)\) into their actor networks and obtain \(a_{i}(t)\) and \(r_{i}(t)\) for an episode, storing the experiences into their buffers. At the end of an episode, the agents update their policies. First, they sample batches of experiences \(\{o_{i}(t),a_{i}(t),r_{i}(t),s(t),\text{pr}_{i}(t)\},\forall i\in\mathcal{I}\) from their buffers, where \(\text{pr}_{i}(t)\) denotes the log-probability of sampling the action \(a_{i}(t)\). Then, in each update, the actor and critic update their parameters with the policy loss and the global state-value loss, respectively. The loss of actor \(i\) is calculated by
\[L^{\text{actor}}(\theta_{i})=\mathbb{E}_{\pi_{\theta_{i}}}\Bigg{\{} \min\bigg{[}\frac{\pi_{\theta_{i}}(a_{i}(t)|o_{i}(t))}{\pi_{\theta^{{}^{\prime}}_{i} }(a_{i}(t)|o_{i}(t))}\hat{A}_{i}(t),\] \[\text{clip}\left(\frac{\pi_{\theta_{i}}(a_{i}(t)|o_{i}(t))}{\pi_{ \theta^{{}^{\prime}}_{i}}(a_{i}(t)|o_{i}(t))},1-\epsilon,1+\epsilon\right)\hat {A}_{i}(t)\bigg{]}\Bigg{\}}, \tag{45}\]
where \(\pi_{\theta^{{}^{\prime}}_{i}}\) and \(\pi_{\theta_{i}}\) denote the old and current policies, respectively, and \(\hat{A}_{i}(t)\) is the estimate of the advantage function \(A_{i}(t)=Q_{i}(s(t),a_{i}(t))-V_{i}(s(t))\). In the PPO family, generalized advantage estimation (GAE) is adopted to improve the performance, which is defined as follows
\[\hat{A}_{i}(t)= \sum_{l=0}^{\infty}(\gamma\lambda)^{l}\Big{(}R_{i}(t+l)+\gamma V_{ i}\big{(}s(t+1+l)\big{)}\] \[-V_{i}(s(t+l))\Big{)}, \tag{46}\]
where \(\gamma\) is the discount factor, \(\lambda\) is the parameter of GAE for bias-variance tradeoff in estimation, and \(V_{i}(s(t))=\sum\limits_{l=0}^{\infty}\gamma^{l}R_{i}(t+l)\) is the cumulative discounted reward, which also represents the state-value function. Denoting \(V^{\xi_{i}}(s(t))\) as the state-value function estimated by the critic of agent \(i\), the loss of critic \(i\) is given by
\[L_{i}^{\text{critic}}(\xi_{i})=\frac{1}{2}\Big{[}V^{\xi_{i}}\big{(}s(t)\big{)}- V_{i}\big{(}s(t)\big{)}\Big{]}^{2}, \tag{47}\]
where \(\xi_{i}\) is the parameter of \(i\)-th critic network. Therefore, the actor and critic can be updated according to (45) and (47), respectively.
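A compact PyTorch-style sketch of the update quantities in Eqs. (45)-(47) is given below; it is a generic clipped-PPO/GAE implementation under our own naming, not the authors' code.

```python
import torch

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimation over one episode, Eq. (46);
    `values` holds V(s_t) for t = 0..T (one extra bootstrap value)."""
    T = rewards.shape[0]
    adv = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

def mappo_losses(logp_new, logp_old, adv, v_pred, v_target, eps=0.2):
    """Clipped surrogate actor loss (negative of Eq. (45)) and critic loss, Eq. (47)."""
    ratio = torch.exp(logp_new - logp_old)
    actor = -torch.min(ratio * adv,
                       torch.clamp(ratio, 1 - eps, 1 + eps) * adv).mean()
    critic = 0.5 * (v_pred - v_target).pow(2).mean()
    return actor, critic
```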
It is worth noting that the constraints are satisfied by action remapping as follows. First, the output vectors of the actor networks in MAPPO are sampled from the distributions and split according to the order in which the action variables are concatenated. Then, the split vectors are scaled back to their original domains. Furthermore, some constraints are handled by normalization or rounding. For each MU \(k\), the index of its associated UAV corresponds to the maximum entry of the split vector for \(\{\alpha_{k,m}[n],\forall m\in\mathcal{M}^{*}\cup\{0\}\}\), from which \(\{\alpha_{k,m}[n],\forall m\in\mathcal{M}\cup\{0\}\}\) is obtained. If \(\alpha_{k,0}[n]=0\), \(\alpha_{k,M+1}[n]\) is decided by rounding its entry in the split vector. For constraints (18) and (34d) on \(B_{k,m}[n]\) and \(\rho_{k}[n]\), we multiply their split vectors by \([\alpha_{k,1}[n],\alpha_{k,2}[n],\ldots,\alpha_{k,M}[n]]\) and \(1-\alpha_{k,0}[n]\) as action masks. Then, constraints (16) and (34e) are satisfied by normalizing the split vectors.
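A simplified sketch of this remapping for one UAV agent is shown below; the layout of the raw action vector, the helper name and the normalization details are assumptions that only approximate the procedure described above.

```python
import numpy as np

def remap_uav_action(raw, alpha, B_total, f_max_edge, a_max):
    """Map one UAV's raw policy outputs (each in [0, 1]) to feasible decisions.
    raw = [bandwidth part (K), frequency part (K), acceleration part (2)]."""
    K = alpha.shape[0]
    b = raw[:K] * alpha                                         # mask non-associated MUs
    B = np.zeros(K) if b.sum() == 0 else b / b.sum() * B_total  # normalize, cf. (16)
    f = raw[K:2 * K] * alpha * f_max_edge                       # per-MU frequency proposal
    f = f / max(f.sum() / f_max_edge, 1.0)                      # enforce the budget (34e)
    a = (2.0 * raw[2 * K:] - 1.0) * a_max                       # [0, 1] -> [-a_max, a_max]
    return B, f, a

B, f, a = remap_uav_action(np.random.rand(2 * 3 + 2), np.array([1.0, 0.0, 1.0]),
                           B_total=1e7, f_max_edge=3e9, a_max=5.0)
```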
### _MAPPO-based Training Framework_
As displayed in Fig. 2, during the proposed DT-assisted CTDE process, the MUs and UAVs perform computation offloading in the physical environment according to the actions given by the actor networks of their agents, and send their experiences and synchronized system status to the DT layer. Then, the DT layer evaluates the global environment state from the observations of the agents, updates the buffers, and obtains the predicted values. After updating the actors and critics, the parameters of the actor networks are downloaded to the UAVs and MUs. Note that the network parameters can be shared between homogeneous agents [34]. To fully exploit the performance of the MAPPO approach, we introduce the Beta distribution and the attention mechanism, as stated below.
#### Iv-C1 Beta policy
The above-mentioned actions are typically continuous and bounded, such as \(\varrho\in[0,1]\) and \(\mathbf{a}_{m}[n]\in[-a_{\max},a_{\max}]\). However, conventional action sampling from a Gaussian distribution in the policy networks unavoidably introduces an estimation bias in the policy gradient, since boundary effects arise from force-clipping the values of out-of-bound actions. To tackle this problem, we adopt the Beta distribution instead of the Gaussian distribution for the output of the policy networks, which has the following form
\[f(x,\alpha,\beta)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{ \alpha-1}\Big{(}1-x\Big{)}^{\beta-1}, \tag{48}\]
where \(\alpha\) and \(\beta\) are the parameters of the Beta distribution. Since (48) also has a bounded domain, it is suitable for sampling bounded actions. Moreover, with proper initialization the Beta distribution can be closer to a uniform distribution than a Gaussian, and thus the agents can explore the action space more thoroughly at the initial stage of training.
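One common way to realize such a Beta policy head is sketched below in PyTorch; the shift by one keeps the distribution unimodal, and the class name and layer sizes are assumptions, not the authors' network design.

```python
import torch
from torch.distributions import Beta

class BetaPolicyHead(torch.nn.Module):
    """Outputs a Beta distribution per action dimension with alpha, beta > 1."""
    def __init__(self, hidden_dim, action_dim):
        super().__init__()
        self.alpha = torch.nn.Linear(hidden_dim, action_dim)
        self.beta = torch.nn.Linear(hidden_dim, action_dim)

    def forward(self, h):
        a = torch.nn.functional.softplus(self.alpha(h)) + 1.0
        b = torch.nn.functional.softplus(self.beta(h)) + 1.0
        return Beta(a, b)

dist = BetaPolicyHead(64, 3)(torch.randn(2, 64))
action = dist.sample()                 # in [0, 1], rescaled to the action range later
logp = dist.log_prob(action).sum(-1)   # log-probability used in the PPO ratio
```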
#### Iv-C2 Attention mechanism
As the number of agents increases, the growing dimension of the environment state makes it difficult for a critic network with a simple fully-connected layer to process the input, leading to slow or even failed convergence of the critic network and harming the behavior of the actor network. Moreover, for each agent, other agents affect the state-value with different intensities, so distinct attention should be paid to them, particularly to nearby devices and devices with higher computational load. Therefore, we introduce a multi-head attention unit before the multi-layer perceptron (MLP) of the critic networks to improve the training process. The observation vectors of the two types of agents are first passed through their corresponding 3-layer MLPs to
Fig. 2: Training framework of AB-MAPPO.
get feature values \(e_{i}\). Then, all the feature values of agents are sent to the attention heads to get the attention values \(x_{i}\) by
\[\alpha_{i,j}=\text{Softmax}\left(\frac{e_{j}^{\mathsf{T}}W_{\mathsf{key}}^{ \mathsf{T}}W_{q}e_{i}}{\sqrt{d_{\mathsf{key}}}}\right), \tag{49}\]
\[x_{i}=\sum_{j\neq i}\alpha_{i,j}W_{v}e_{j}, \tag{50}\]
where \(e_{j}\) is the feature value of another agent \(j\), \(d_{\mathsf{key}}\) is the variance of \(e_{j}^{\mathsf{T}}W_{\mathsf{key}}^{\mathsf{T}}W_{q}e_{i}\). The matrix \(W_{\mathsf{key}}\) transforms \(e_{j}\) into a key, the matrix \(W_{v}\) transforms \(e_{j}\) into a value, and the matrix \(W_{q}\) transforms \(e_{i}\) into a query. Finally, \(x_{i}\) and \(o_{i}(t)\) are sent to the MLP to get the estimated state-value \(V^{\xi_{i}}(s(t))\).
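A single-head version of Eqs. (49)-(50) can be written as follows; the multi-head extension and the exact dimensions used in this paper are not reproduced here.

```python
import torch

def attention_values(E, W_q, W_key, W_v):
    """Attention over agents' feature values, Eqs. (49)-(50); E is (I x d)."""
    d_key = W_key.shape[0]
    q, k, v = E @ W_q.T, E @ W_key.T, E @ W_v.T
    scores = q @ k.T / d_key ** 0.5                    # e_j^T W_key^T W_q e_i, scaled
    mask = torch.eye(E.shape[0], dtype=torch.bool)
    attn = torch.softmax(scores.masked_fill(mask, float('-inf')), dim=-1)  # j != i
    return attn @ v                                    # x_i, Eq. (50)

E = torch.randn(5, 32)                                 # 5 agents, 32-dim features
W_q, W_key, W_v = (torch.randn(16, 32) for _ in range(3))
print(attention_values(E, W_q, W_key, W_v).shape)      # torch.Size([5, 16])
```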
### _Complexity Analysis_
Based on the above discussions, we propose the AB-MAPPO algorithm, which is summarized in Algorithm 1. The complexity of the attention module is \(\mathcal{O}(I^{2}V)\), where \(V\) is the length of the feature-value vectors, according to [35]. For an MLP, the computational complexity of the \(j\)-th layer is \(\mathcal{O}(Z_{j-1}Z_{j}+Z_{j}Z_{j+1})\), where \(Z_{j}\) is the number of neurons of the \(j\)-th layer. Hence, the computational complexity of a \(J\)-layer MLP is \(\mathcal{O}\left(\sum_{j=2}^{J-1}Z_{j-1}Z_{j}+Z_{j}Z_{j+1}\right)\). The actor networks are all 3-layer MLPs, and each of the two critic networks for MU and UAV agents has two 3-layer MLPs for the observations of the two types of agents and one 3-layer MLP after the attention module for the output of the value function. Therefore, the overall computational complexity of training is the sum of the complexities imposed by the actor and critic networks, \(\mathcal{O}\left(\mathrm{e}^{\max}(\mathrm{Pe}\,I^{2}V+\mathrm{epl}\sum_{j=2}^{J-1}Z_{j-1}Z_{j}+Z_{j}Z_{j+1})\right)\), while that of one-step execution is just \(\mathcal{O}\left(\sum_{j=2}^{J-1}Z_{j-1}Z_{j}+Z_{j}Z_{j+1}\right)\).
```
1:Initialize \(n=1\), episode length epl, PPO epochs Pe, and maximum episodes \(\mathrm{e}^{\max}\).
2:for agent \(i\in\mathcal{I}\)do
3: Initialize actor networks \(\theta_{i}\), critic networks \(\xi_{i}\), replay buffer \(\mathbf{B}_{i}\);
4:endfor
5:for Episode = 1,..., \(\mathrm{e}^{\max}\)do
6:for \(t\) = 1,..., epl do
7: Obtain \(o_{k}(t),\forall k\in\mathcal{K}\) from the environment;
8: Execute the action \(a_{k}(t),\forall k\in\mathcal{K}\);
9: Obtain \(o_{K+m}(t),\forall m\in\mathcal{M}\) from the environment;
10: Execute the action \(a_{K+m}(t),\forall m\in\mathcal{M}\);
11: Calculate log-probability \(\text{pr}_{i}(t),\forall i\in\mathcal{I}\);
12: Synchronize status information to the DT layer if required;
13:if\(n\) mod \(N\) = 0 then
14: Synchronize observations \(o_{i}(t)\), actions \(a_{i}(t)\) to the DT layer;
15: Evaluate reward \(r_{i}(t),\forall i\in\mathcal{I}\) in DT layer;
16: Store transition \(\text{Tr}_{i}(t)=\){\(o_{i}(t),a_{i}(t),r_{i}(t),s(t),\text{pr}_{i}(t)\)}, \(\forall i\in\mathcal{I}\) into buffer \(B_{i}\);
17:endif
18: Update \(n=n\) mod \(N\) + 1
19:endfor
20:for epoch = \(1,\dots\mathrm{Pe}\)do
21:for agents \(i\in\mathcal{I}\)do
22: Update actor \(\theta_{i}\) and critic \(\xi_{i}\) according to (45) and (47) by \(\forall\text{Tr}_{i}(t)\in\mathbf{B}_{i}\);
23:endfor
24:endfor
25: Download the actor networks from DT layer to UAVs and MUs;
26:endfor
```
**Algorithm 1** Proposed AB-MAPPO algorithm
## IV Numerical Results
In this section, we evaluate the performance of the proposed AB-MAPPO algorithm. The simulation settings and numerical results are presented as follows.
### _Simulation Settings_
In our simulations, a 1000 \(\times\) 1000 m square area is considered, where the MUs are randomly distributed in the region with \(x,y\in[0,1000]\) m, the BS is set at (-500 m, 0 m, 10 m), and the UAVs fly at \(H=200\) m. Note that since the UAVs and MUs are energy-sensitive while the BS typically has an abundant power supply, the energy consumption of the BS for computation and DT construction is not considered. Other simulation parameter settings and the hyperparameters of MAPPO are summarized in Table II and Table III, according to prior works [16, 17, 33].
The baseline algorithms for comparison with our proposed AB-MAPPO are listed as follows:
* **B-MAPPO:** MAPPO without attention mechanism and with Beta distribution.
* **AG-MAPPO:** MAPPO with Gaussian distribution and attention mechanism.
* **MADDPG:** MADDPG is an off-policy algorithm with a deterministic policy [18]. The exploration noise is set as 0.5, the buffer size is set as 20000, and other parameters are the same as in the proposed scheme.
* **Randomized:** The MUs and UAVs execute randomly generated actions.
### _Convergence of the MARL Training Algorithm_
We first evaluate the convergence performance of the proposed AB-MAPPO algorithm in Fig. 3(a) and Fig. 3(b). The average episode rewards of the agents are evaluated in comparison with the RL baselines under \(M=10\) UAVs and \(K=60\) MUs. It can be seen that the average episode rewards obviously increase during the training process and then reach stable values. Intuitively, the proposed AB-MAPPO scheme, B-MAPPO scheme, AG-MAPPO scheme, and MADDPG scheme converge at around 30k, 45k, 60k and 60k steps, respectively. As expected, the proposed scheme outperforms the other baselines, with the fastest convergence and the highest reward. Specifically, the proposed AB-MAPPO scheme converges faster than the B-MAPPO scheme, and the MAPPO schemes with Beta distribution achieve greater rewards than the AG-MAPPO scheme. The reason can be explained as follows: 1) The attention mechanism makes the critic networks quickly concentrate on the significant parts of the global state, thereby accelerating the convergence. 2) With the Beta distribution, the agents explore more uniformly at the initial stage of training, retain better exploration ability, and converge to better solutions, which also shows the superiority of the Beta distribution over the Gaussian distribution and MADDPG's exploration noise.
Subsequently, Fig. 4 presents the convergence of the average weighted energy consumption of users with \(K=60\) MUs and \(M=10\) UAVs. It indicates that the energy consumption is also optimized as the reward increases during the training process. Intuitively, the proposed AB-MAPPO scheme achieves lower energy consumption and the most rapid convergence. In terms of stability, the energy consumption of the proposed AB-MAPPO scheme decreases more smoothly, the AG-MAPPO and B-MAPPO schemes fluctuate more during training, and the MADDPG scheme is the most erratic. The above results verify the effectiveness and reliability of our proposed scheme.
To verify the fairness of service, we present the evolution of Jain's fairness index of the MUs during training in Fig. 5. It is worth noting that Jain's fairness index of service is defined as \(\frac{\left(\sum\limits_{k=1}^{K}E_{k}^{\text{all}}[n]\right)^{2}}{K\sum\limits_{k=1}^{K}\left(E_{k}^{\text{all}}[n]\right)^{2}}\). We can find that the fairness of the MUs gradually grows during the training process and that the fairness of the proposed AB-MAPPO is higher and more stable than that of the AG-MAPPO scheme. Another observation is that the fairness of both schemes slightly decreases at first and then ascends, while the AG-MAPPO scheme converges faster but has lower fairness. The increase of fairness is due to
Fig. 3: The convergence of the rewards of agents.
the fact that the policy of MUs gradually becomes stable, indicating that both the quality and reliability of service have been improved.
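For reference, Jain's index used above can be computed directly; the example values are illustrative.

```python
import numpy as np

def jain_index(e):
    """Jain's fairness index of per-MU energy values e (a list or array)."""
    e = np.asarray(e, dtype=float)
    return e.sum() ** 2 / (len(e) * (e ** 2).sum())

print(jain_index([1.0, 1.0, 1.0, 1.0]))  # 1.0: perfectly fair
print(jain_index([4.0, 0.0, 0.0, 0.0]))  # 0.25 = 1/K: most unfair
```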
### _Comparison of Benchmarks Under Different Settings_
To verify the impact of the number of MUs on the energy expense of the network, we then evaluate the average weighted energy consumption of MUs, i.e., \(\sum\limits_{k=1}^{K}\left(\varpi\sum\limits_{m=1}^{M}\alpha_{k,m}[n]E_{m}^{\text{UAV}}[n]+E_{k}^{\text{loc}}[n]\right)/K\), versus the number of MUs in Fig. 6 with \(M=10\) UAVs. From this figure, we observe that the energy cost gradually increases as the number of MUs grows, and the proposed AB-MAPPO scheme outperforms the baselines. Additionally, the gaps between the RL-based schemes tend to become larger, the randomized scheme retains the highest energy consumption, and the performance gain of the attention mechanism increases as the number of MUs grows. Since the average communication resource declines as the number of MUs grows, the transmission latency and energy of the MUs increase, and the MEC servers can process a smaller amount of tasks. Therefore, the MUs tend to compute locally, leading to an increase of the local computing energy. Furthermore, the randomized scheme generates actions uniformly and thus poorly utilizes the abundant resources at the BS and UAVs.
Fig. 7 presents the comparison of the proposed scheme and the baselines on the average weighted energy consumption of \(K=70\) MUs under different numbers of UAVs. As shown in this figure, when more UAVs participate in the service, the average weighted energy of the MUs gradually declines. It can be seen that the proposed AB-MAPPO scheme has the lowest energy cost, and the B-MAPPO scheme is slightly higher. The likely reason is that the Beta distribution can efficiently improve the exploration of the UAVs in a large action space. Moreover, the MADDPG scheme converges to worse solutions than the MAPPO schemes, which indicates that
Fig. 4: The convergence of weighted energy consumption.
Fig. 5: The improvement of fairness of MUs.
Fig. 6: The impact of MUs on average weighted energy consumption of MUs.
Fig. 7: The impact of number of UAVs on average weighted energy consumption of MUs.
the algorithm struggles to handle the increasingly challenging action space.
To gain further insights, Fig. 8 depicts the average weighted energy consumption of MUs versus the channel bandwidth \(B\). It can be seen that the energy consumption gradually decreases as the bandwidth increases. Furthermore, the proposed AB-MAPPO scheme obtains the best performance, and as the bandwidth becomes larger, the gaps between the RL schemes tend to decrease. This is because more abundant communication resources enable the MUs to offload more tasks to the UAVs and BS; in contrast, a smaller bandwidth imposes tighter constraints on the optimization problem, which may introduce larger penalties that hinder the exploration of the agents. This in turn verifies that the proposed AB-MAPPO scheme outperforms the baselines in tackling problems with strong constraints.
### _The Impact of Environment Settings on the Performance_
In Fig. 9, we examine the impact of weight factor \(\varpi\) on the optimized energy consumption of UAVs and MUs, under \(K=60\) MUs and \(M=10\) UAVs. It can be observed that as the weight factor \(\varpi\) increases from 0 to 0.009, the energy consumption of UAVs sharply decreases at first, and then becomes smoother. Meanwhile, the energy of MUs increases sharply. The reason for this trend is that as \(\varpi\) increases, the energy consumption of UAVs is more emphasized and more tasks are computed locally by MUs, which means that more computing energy is consumed by MUs and less energy is consumed by UAVs.
To evaluate the impact of the computational resource on the energy expense of users, we present the average weighted energy consumption of users versus the maximum computational frequency of users and UAVs in Fig. 10(a) and Fig. 10(b), respectively. It can be readily observed that as the computational frequency of users and UAVs increases, the energy consumption gradually decreases and then remains roughly constant. The likely reason is that the increase of computational resources makes the computational service requirements easier to satisfy. Hence, the MUs' policy on the offloading proportion is adjusted accordingly to balance the energy consumption and the penalty so as to improve the reward. Once the abundant computational resources satisfy the latency requirements, the policies become stable.
We then evaluate the impact of the DT deviation on system performance in Fig. 11. We can find that as the deviation rate of
Fig. 11: The impact of DT deviation.
Fig. 8: The impact of bandwidth on energy consumption of MUs.
Fig. 10: The impact of computational resource.
Fig. 9: The impact of weight factor on energy consumption of MUs and UAVs.
DT increases, the average energy consumption of MUs tends to increase. This can be explained by the fact that the DT deviation reduces the accuracy of the observations and actions of the agents. As such, the improvement of the agents' policies is affected, and thus the reward and the energy consumption become worse. Another observation is that as the deviation of DT increases from 0 to 0.25, the weighted energy consumption rises by only about 15%, which verifies the robustness of the proposed MARL approach.
Fig. 12 shows the weighted energy consumption versus the maximum task size \(D_{\max}\) for different numbers of MUs, with the minimum task size fixed at \(5\times 10^{5}\) bits, and also evaluates the scheme without DT (labeled as w/o DT). The w/o DT scheme is evaluated by adding a random deviation rate in [-0.5,0.5] to the normalized location and computational-frequency variables. This is under the following consideration. The DT-assisted framework enables UAVs to directly synchronize the estimated information with the DT layer at the BS. Hence, the estimation deviation is reduced by avoiding frequent queries for exchanging environment information between UAVs, and thus the devices can obtain more timely system status. Therefore, the estimation deviation is set higher to evaluate the w/o DT scheme, whose performance is intuitively worse than that of the DT-assisted schemes. Additionally, it can be readily observed that more energy is consumed as the task size grows and as the number of MUs increases, i.e., as the average bandwidth per MU decreases. This can be explained by the fact that more computation resources are required to satisfy the latency requirements of the MUs, while the declining average communication resource raises the transmission expense, so the MUs tend to compute more tasks locally, leading to the increase of energy consumption.
Fig. 13 displays the trajectories of MUs and UAVs in different scenarios. In Fig. 13 (a), the MUs are relatively crowded in one part of the region, and the UAVs start from random locations, with the width of the region 500 m and \(T=50\) s. It can be observed that the trained UAVs are capable of flying closer to the MUs and keep hovering over the crowded MUs to pursue a higher transmit rate. In Fig. 13 (b), the UAVs start at the corner of the region, and the MUs are relatively dispersed. At the beginning, the UAVs quickly fly towards the MUs, and then cooperate with each other to serve the associated MUs. Additionally, the UAVs stay close to most of their associated MUs rather than fully covering the remote ones, which indicates that the UAVs can skillfully optimize the long-term reward. In Figs. 13 (c) and (d), we present the trajectories of large-scale simulations with \(K=60\) users and \(M=10\) UAVs. It can be seen that the UAVs move to the regions with more MUs and cooperatively cover the region of the MUs to provide better service and maintain fairness. This verifies that at larger scale, the reward function can guide the UAVs to rapidly approach the MUs.
## V Conclusion
In this paper, we proposed a multi-UAV-assisted MEC network with air-ground cooperation where DT is applied to enhance the offloading service. We formulated a weighted-sum energy consumption minimization problem by jointly optimizing the offloading decision, bandwidth, flying trajectory, communication resource and computation resource. To tackle this challenging problem, we modeled it as an MDP where MUs and UAVs act as agents that collectively interact with the environment and receive distinct observations. Considering the high-dimensional hybrid action space, the MAPPO algorithm with attention mechanism and Beta distribution was leveraged to efficiently obtain the optimal policy. Simulation results verified that the proposed scheme can significantly reduce the energy consumption of the network compared with other benchmarks. Additionally, the combination of UAV and BS offloading strategies can take full advantage of the communication and computational resources, as well as adapt to time-varying network environments.
Fig. 12: The impact of maximum task size of users.
Fig. 13: The trajectories of UAVs with different scenarios. |
2310.06103 | Leveraging Multilingual Self-Supervised Pretrained Models for
Sequence-to-Sequence End-to-End Spoken Language Understanding | A number of methods have been proposed for End-to-End Spoken Language
Understanding (E2E-SLU) using pretrained models, however their evaluation often
lacks multilingual setup and tasks that require prediction of lexical fillers,
such as slot filling. In this work, we propose a unified method that integrates
multilingual pretrained speech and text models and performs E2E-SLU on six
datasets in four languages in a generative manner, including the prediction of
lexical fillers. We investigate how the proposed method can be improved by
pretraining on widely available speech recognition data using several training
objectives. Pretraining on 7000 hours of multilingual data allows us to
outperform the state-of-the-art ultimately on two SLU datasets and partly on
two more SLU datasets. Finally, we examine the cross-lingual capabilities of
the proposed model and improve on the best known result on the
PortMEDIA-Language dataset by almost half, achieving a Concept/Value Error Rate
of 23.65%. | Pavel Denisov, Ngoc Thang Vu | 2023-10-09T19:22:51Z | http://arxiv.org/abs/2310.06103v1 | Leveraging Multilingual Self-Supervised Pretrained Models for Sequence-to-Sequence End-to-End Spoken Language Understanding
###### Abstract
A number of methods have been proposed for End-to-End Spoken Language Understanding (E2E-SLU) using pre-trained models, however their evaluation often lacks multilingual setup and tasks that require prediction of lexical fillers, such as slot filling. In this work, we propose a unified method that integrates multilingual pretrained speech and text models and performs E2E-SLU on six datasets in four languages in a generative manner, including the prediction of lexical fillers. We investigate how the proposed method can be improved by pretraining on widely available speech recognition data using several training objectives. Pretraining on 7000 hours of multilingual data allows us to outperform the state-of-the-art ultimately on two SLU datasets and partly on two more SLU datasets. Finally, we examine the cross-lingual capabilities of the proposed model and improve on the best known result on the PortMEDIA-Language dataset by almost half, achieving a Concept/Value Error Rate of 23.65%.
Pavel Denisov, Ngoc Thang Vu
Institute for Natural Language Processing (IMS), University of Stuttgart, Germany
spoken language understanding, self-supervised learning, end-to-end, sequence-to-sequence, multilingual
## 1 Introduction
Spoken Language Understanding (SLU) is a common name for tasks combining speech and language processing to extract semantic concepts from spoken sentences, such as intents, slots, and named entities. This functionality is essential to various systems with a voice interface, including intelligent assistants and automatic call answering services. Traditionally, SLU has been decomposed to Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU) subtasks that are solved sequentially in a pipeline manner. In this scenario, ASR converts speech recording to text representation that is then processed by NLU. The advantage of a pipelined approach is that both ASR and NLU can be optimized independently using numerous datasets labelled for the corresponding tasks. The disadvantages are that: first, the text representation lacks paralinguistic information, such as prosody and punctuation in most cases, and in addition to that contains errors introduced by ASR and propagated to NLU; and second, sequential execution of ASR and NLU introduces a time lag that is not desirable in an interactive context. These downsides motivated the community to work on the End-to-End (E2E) SLU methods that allow building a single model performing SLU task directly on speech input.
One of the critical challenges in E2E SLU is data sparsity since the availability of labelled datasets for SLU is even lower than for ASR or NLU. Transfer learning is a popular technique that alleviates the data sparsity problem for one task by learning from another related task. More recently, Self-Supervised Learning (SSL) methods demonstrate the possibility of gaining improvements even without requiring annotation for any task by training on unlabelled data of corresponding modalities, such as text [1] or speech [2].
Usually, SLU tasks are treated as a classification problem, either on the sequence level (Intent Classification (IC) [3]) or on the token level (Slot Filling (SF) [4] and Named Entity Recognition (NER) [5]). Few works, however, represent SLU tasks as generation problems [6, 7]. There are multiple examples of classification-based SLU systems utilizing SSL pretrained speech and text encoders [3, 8]. However, usage of SSL pretrained models in generation-based SLU systems is limited to speech encoders [9]. Aside from that, generation-based SLU descriptions are typically focused on one language only [10, 11]. We aim to close these two gaps by employing multilingual SSL pretrained speech and text models for solving SLU tasks in multiple languages via sequence-to-sequence modeling. In this work, we propose a unified architecture for building E2E SLU models and evaluate it on a diverse set of established SLU benchmarks. Our experiments demonstrate that multilingual SSL pretrained text-to-text model can be fine-tuned to solve token level NLU tasks in a generative way and this can be transferred to speech modality with the help of multilingual SSL pretrained speech encoder. Furthermore, we improve the NLU to SLU transferability by aligning the hidden representations of speech and text using a medium-sized multilingual ASR dataset and different training approaches: Connectionist Temporal Classification (CTC), Attention Encoder-Decoder (AED) and a novel Modality Correlation (MC) objective. In several cases, our results are better than the best previously reported. We provide the implementation, configurations, data preparation and scoring scripts and pretrained models at [https://github.com/DigitalPhonetics/multilingual-seq2seq-slu](https://github.com/DigitalPhonetics/multilingual-seq2seq-slu).
## 2 Method
### SLU model
The proposed approach is outlined in Figure 1. Our SLU model combines SSL pretrained speech encoder and text-to-text encoder-decoder models. The raw speech input \(x\) is processed by the speech encoder that outputs an acoustic representation \(\mathbf{H}^{\mathrm{Feat}}\). The Adaptor maps the acoustic representation \(\mathbf{H}^{\mathrm{Feat}}\) from the speech encoder to a quasi-graphemic representation \(\mathbf{H}^{\mathrm{PreEnc}}\) that resembles text token embeddings and is fed to the text encoder. Output of the text encoder \(\mathbf{H}^{\mathrm{PostEnc}}\) is decoded by the autoregressive text decoder producing an output hidden representation \(\mathbf{H}^{\mathrm{PostDec}}\) that is projected to text token logits used to produce an output text token sequence \(y\). The speech and text encoder layers are initialized from the parameters of the general pretrained models without any changes. The text decoder parameters are first fine-tuned on the corresponding SLU dataset using the ground truth transcriptions as input and the SLU annotations as output, thus resulting in the text based NLU model. The text encoder is kept frozen during this step to ensure the transferability to the SLU model initialized from the general parameters. Only the Adaptor's parameters have to be learned during the SLU training, and this motivates us to investigate the Adaptor pretraining.
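The composition of Figure 1 can be summarized by the following abstract PyTorch sketch; the five sub-modules are placeholders standing in for the pretrained XLS-R encoder, the Adaptor, the mBART50 encoder/decoder and the output projection, not actual library calls.

```python
import torch

class Seq2SeqSLU(torch.nn.Module):
    """Abstract composition of the proposed SLU model (Fig. 1)."""
    def __init__(self, speech_encoder, adaptor, text_encoder, text_decoder, vocab_proj):
        super().__init__()
        self.speech_encoder = speech_encoder   # frozen SSL speech model
        self.adaptor = adaptor                 # the only module trained from scratch
        self.text_encoder = text_encoder       # frozen SSL text encoder
        self.text_decoder = text_decoder       # decoder fine-tuned on NLU outputs
        self.vocab_proj = vocab_proj           # projection to text token logits

    def forward(self, speech, prev_tokens):
        h_feat = self.speech_encoder(speech)            # H^Feat
        h_pre = self.adaptor(h_feat)                    # H^PreEnc (quasi-graphemic)
        h_post = self.text_encoder(h_pre)               # H^PostEnc
        h_dec = self.text_decoder(prev_tokens, h_post)  # H^PostDec (autoregressive)
        return self.vocab_proj(h_dec)                   # logits over output tokens y
```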
### Adaptor pretraining
Generally, the Adaptor pretraining aims to minimize the distance between the speech representation in the SLU model and the text representation in the SSL text model. As Figure 2 shows, this can be done on multiple levels of the neural network using various training strategies. We select three types of a text representation and the corresponding levels of the SLU model to extract a logically similar speech representation: (i) text token embeddings \(\mathbf{H}_{\mathrm{Text}}^{\mathrm{PreEnc}}\) and the SLU Adaptor output \(\mathbf{H}_{\mathrm{Speech}}^{\mathrm{PreEnc}}\); (ii) the hidden text representation \(\mathbf{H}_{\mathrm{Text}}^{\mathrm{PostEnc}}\) after the encoder of the SSL text model and the hidden speech representation \(\mathbf{H}_{\mathrm{Speech}}^{\mathrm{PostEnc}}\) after the text encoder of the SLU model; (iii) the output text representation \(\mathbf{H}_{\mathrm{Text}}^{\mathrm{PostDec}}\) after the decoder of the SSL text model and the hidden speech representation \(\mathbf{H}_{\mathrm{Speech}}^{\mathrm{PostDec}}\) after the text decoder of the SLU model.
The Adaptor output can be aligned with the text token embeddings using the CTC loss function in the vein of LegoNN approach [12]. While this is the most straightforward and computationally cheap way to pretrain Adaptor parameters in our framework, it might set an unnecessary strict target for the alignment because the text-to-text network has some level of robustness to noisy inputs and not all differences between text and speech representations are equally harmful for the prediction of the correct output.
Given this consideration, it might be better to pretrain the Adaptor in the full AED model while keeping all parameters except of the Adaptor in a frozen state. This way, the output of the decoder serves as a source of training signal and the Adaptor parameters receive only the relevant updates for the prediction of the correct output. The disadvantage of this approach is that the differences between the original decoder and a task specific NLU decoder used in a SLU model can hinder the transferability of the pretrained Adaptor output. It is fair to assume that the practical importance of this problem depends on the actual differences between the original and task specific decoders. These differences can be avoided by the parameter efficient tuning of the decoder [13].
Finally, any hidden speech representation can be trained directly on a hidden text representation as a target. In order to do that, we propose the modality matching approach. Its design is inspired by the cross-modal grounding methods [14, 15] and the Barlow twins loss [16]. First, we construct a speech-text correlation matrix:
\[\mathbf{C}_{\mathrm{Speech-Text}}=\mathbf{H}_{\mathrm{Speech}}(\mathbf{H}_{ \mathrm{Text}})^{\intercal},\]
where \(\mathbf{H}_{\mathrm{Speech}}\in\mathbb{R}^{L_{\mathrm{max}}\times d_{\mathrm{ model}}}\) and \(\mathbf{H}_{\mathrm{Text}}\in\mathbb{R}^{L_{\mathrm{max}}\times d_{\mathrm{ model}}}\) are padded normalized hidden speech and text representations, \(L_{\mathrm{max}}=max(L_{\mathrm{Speech}},L_{\mathrm{Text}})\) is the maximum length of the speech and text sequences and \(d_{\mathrm{model}}\) is the hidden representation dimension in both modalities. The representations are zero-padded and normalized to Euclidean norm along each hidden representation vector. We mask out the loss for elements outside of \(L_{\mathrm{Text}}\), because the zero-dominated matrices caused convergence problems. This can be reduced to the truncation instead of the padding and masking, but we implement it that way because of the batching. A ground truth for \(\mathbf{C}_{\mathrm{Speech-Text}}\) is set to be analogous text-text self-correlation matrix:
\[\mathbf{C}_{\mathrm{Text-Text}}=\mathbf{H}_{\mathrm{Text}}(\mathbf{H}_{ \mathrm{Text}})^{\intercal}\]
During the training, we minimize the mean square error between \(\mathbf{C}_{\mathrm{Speech-Text}}\) and \(\mathbf{C}_{\mathrm{Text-Text}}\). The MC approach can
Figure 1: General architecture of our SLU model. The speech encoder and text encoder are initialized from unchanged SSL pretrained models. The text decoder is initialized from an SSL pretrained model that is fine-tuned on the ground truth transcriptions of an SLU dataset. The Adaptor is trained from scratch on an SLU dataset or is pretrained on ASR data (section 2.2).
be applied to the encoder outputs and therefore allows a trade-off between the two previously mentioned levels of the neural network. In addition, it can be employed as an alternative to the hard label based CTC and AED approaches applicable only at the specific levels of the neural network.
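A minimal PyTorch sketch of the MC objective is given below for one utterance; the block-wise masking of elements beyond the text length is our reading of the description above and should be taken as an assumption.

```python
import torch

def modality_correlation_loss(h_speech, h_text, len_text):
    """MC loss: match the speech-text correlation matrix to the text-text
    self-correlation; h_speech, h_text are (L_max x d) zero-padded hidden states."""
    hs = torch.nn.functional.normalize(h_speech, dim=-1)   # unit-norm hidden vectors
    ht = torch.nn.functional.normalize(h_text, dim=-1)
    c_st = hs @ ht.T                                       # C_{Speech-Text}
    c_tt = ht @ ht.T                                       # C_{Text-Text} (target)
    mask = torch.zeros_like(c_tt)
    mask[:len_text, :len_text] = 1.0                       # keep the text-valid block
    return ((c_st - c_tt) ** 2 * mask).sum() / mask.sum()

loss = modality_correlation_loss(torch.randn(12, 8), torch.randn(12, 8), len_text=7)
```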
## 3 Experimental Setup
### Data
We assess the performance of the proposed method using the six established SLU benchmarks: SLURP [4], SLUE-VoxPopuli [5] (evaluating on the validation set), CATSLU [17], MEDIA [18], PortMEDIA-Domain and PortMEDIA-Language [19]. These datasets cover three SLU tasks in four languages belonging to three families and have medium to low resource data regimes. Details of the SLU datasets are given in Table 1. In addition to SF annotation, each utterance of SLURP is labeled with one out of 59 unique intents. We evaluate the SLU systems using the established metrics for each dataset: IC accuracy and SLU-F1 for SLURP, F1 and label-F1 for SLUE-VoxPopuli, accuracy and F1 for CATSLU, Concept Error Rate (CER) and Concept/Value Error Rate (CVER) for MEDIA, PortMEDIA-Domain and PortMEDIA-Language datasets. The SLU-F1 combines the slot label F1
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline & SLURP & SLUE & CATSLU & MEDIA & PM-Dom & PM-Lang \\ \hline Samples & 120K/ & 5.0K/ & 7.0K/ & 12.0K & 5.8K/ & 6.0K/ \\ (train/ & 8.6K/ & 1.7K/ & 1.6K/ & 1.2K/ & 1.3K/ & 1.9K/ \\ dev/test) & 13.0K & 1.8K & 5.8K & 3.5K & 2.8K & 3.8K \\ \hline Hours & 84.7/ & 14.07 & 5.5 & 16.1/ & 7.4/ & 7.5 \\ (train/ & 6.9/ & 4.9/ & 1.3/ & 1.6/ & 1.6/ & 2.5/ \\ dev/test) & 10.3 & 4.9 & 4.8 & 4.8 & 3.4 & 5.0 \\ \hline Language & English & English & Mandarin & French & French & Italian \\ Tasks & IC, SF & NER & SF & SF & SF & SF \\ \#Concepts & 56 & 7 & 54 & 143 & 34 & 124 \\ \hline \end{tabular}
\end{table}
Table 1: SLU benchmarks used for the evaluation. #Concepts denotes number of unique slot labels for SF or unique entity labels for NER.
Figure 2: Investigated options for the Adaptor pretraining using pairs of speech recordings and text transcriptions. The Adaptor is optimized to predict the hidden representation of the speech that is as close to the hidden representation of the text as possible. Each Adaptor pretraining option refers to a combination of the hidden representation layer (PreEnc, PostEnc, PostDec) and the training method (MC, CTC, AED). The MC loss can be applied directly to the hidden representations of the speech and the text before the text encoder (b), after the text encoder (c) or after the text decoder (d). Alternatively, the hidden speech representation can be projected to the token logits via a regular linear transformation (Token Projection) allowing CTC training if the hidden speech representation is extracted before the text encoder (a) or AED training if the hidden speech representation is extracted after the text decoder (e).
with the word and character edit distances of the slot value [4]. The CER and CVER are based on the edit distance and are calculated the same way as the word error rate, but on the sequences of slot labels or slot labels combined with the values instead of words.
The Adaptor pretraining experiments are performed using subsets of Common Voice Corpus 9.0 [20] and WenetSpeech [21]. We select the 36 languages that are present in both XLS-R [22] and mBART50 [23] pretraining data and sample down the resulting training and validation subsets uniformly to 1000 and 20 hours respectively.
### Training details
**NLU models** are obtained by fine-tuning the SSL text-to-text model on the ground truth transcriptions of each SLU dataset and its task-specific outputs. After preliminary experiments, the mBART50 Large model [23] was chosen because it achieved the best scores. According to the original mBART50 approach, the language is encoded as a special token that is added at the beginning of the input sequence and is given as the initial output token to the decoder. The parameters of the model's encoder and token embeddings are frozen.
**SLU models** are implemented in the ESPnet-SLU toolkit [9] and follow its SLURP recipe. We use the weighted sum of hidden states [24, 25] of the XLS-R (0.3B) pretrained model [22] as speech features. The Adaptor module is organized as a VGG/Conformer based encoder [26, 27] followed by a convolutional Length Adaptor [28]. This design is based on the encoder architecture of ASR [25] and SLU [9] systems and demonstrated better results in our preliminary experiments than direct fine-tuning of SSL speech encoders. The output of the Adaptor module is fed to the general mBART50 text encoder, which is followed by the fine-tuned text decoder from the NLU model. As in NLU, both the text encoder and decoder are conditioned on the SLU dataset language by adding the special language token embedding at the beginning of the text encoder's input sequence and by setting the special language token as the initial output of the text decoder. Conformer layers are configured with \(d_{\mathrm{model}}=1024\), \(d_{\mathrm{ff}}=4096\), \(d_{h}=8\), \(E=8\) and a \(\mathrm{Conv}\) kernel size of 31. The Length Adaptor contains a 1-dimensional convolutional layer with stride 2 and reduces the length of the input sequence by a factor of 2. Label smoothing with a penalty of 0.1 and the 3-way speed perturbation [29] data augmentation method are used during training. The training is done with the Adam optimizer [30] with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\epsilon=10^{-8}\) and a warmup learning rate scheduler. The number of epochs, number of warmup steps, maximum learning rate and batch size are tuned individually for each SLU dataset.
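As an illustration of the length reduction described above, here is a minimal PyTorch sketch of a convolutional Length Adaptor; only the stride-2 convolution halving the sequence length reflects the description in the text, while the kernel size and activation are placeholders chosen for the example.

```python
import torch
import torch.nn as nn

class LengthAdaptor(nn.Module):
    """1-D convolution with stride 2 that halves the length of the feature sequence."""
    def __init__(self, in_dim: int = 1024, out_dim: int = 1024, stride: int = 2):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, out_dim, kernel_size=3, stride=stride, padding=1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feature); Conv1d expects (batch, feature, time)
        y = self.conv(x.transpose(1, 2))
        return self.act(y).transpose(1, 2)   # (batch, time // stride, feature)

# A 100-frame sequence of 1024-dim speech features is reduced to 50 frames.
out = LengthAdaptor()(torch.randn(4, 100, 1024))
print(out.shape)   # torch.Size([4, 50, 1024])
```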
**SLU Adaptor pretraining** is carried out with a configuration similar to that of the SLU model training. We set the maximum learning rate to 5e-5, the number of warmup steps to 25k and the number of epochs to 30, with early stopping after 3 epochs without improvement in validation accuracy.
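For intuition only, the following sketch outlines one Adaptor pretraining step with a matching-style (MC) objective, loosely following the PostEnc option in Figure 2. The mean-squared error on time-pooled representations is an assumption made here to sidestep the different sequence lengths of speech and text, and `adaptor`, `text_encoder`, `speech_feats`, `text_emb` and `optimizer` are placeholders rather than names from the actual implementation.

```python
import torch
import torch.nn.functional as F

def mc_pretraining_step(adaptor, text_encoder, speech_feats, text_emb, optimizer):
    # Speech path: SSL speech features -> Adaptor -> (frozen) text encoder.
    speech_hidden = text_encoder(adaptor(speech_feats))        # (batch, T_speech, d)
    # Text path: the transcription provides the target hidden representation.
    with torch.no_grad():
        text_hidden = text_encoder(text_emb)                   # (batch, T_text, d)
    # Matching loss on time-pooled representations (assumed form of the MC objective);
    # only the Adaptor's parameters are assumed to be registered in `optimizer`.
    loss = F.mse_loss(speech_hidden.mean(dim=1), text_hidden.mean(dim=1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```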
## 4 Results
### Baseline
Table 2 shows the results of the NLU models on the ground truth transcriptions and the baseline SLU results on the speech recordings. Each baseline SLU system is essentially the corresponding NLU model transferred to the speech modality using the SSL pretrained speech encoder as described in Section 2.1. The Adaptor's parameters in the baseline SLU systems have to be trained from scratch using only the speech recordings from the SLU datasets. First of all, we note that the SSL text-to-text model can in principle be fine-tuned to perform various NLU tasks in a generative manner. Next, we observe that the gap between the NLU and SLU performance correlates with the amount of training data for the English benchmarks, which suggests that the low resource setting is an issue for our initial SLU model. On the other hand, SLU performance is slightly better than the NLU performance for the French benchmarks and much better for the Italian benchmark, despite the small amount of training data. This might be due to the freezing of the encoder and token embeddings during NLU training: both French and Italian likely have a smaller presence in the pretraining data of the original mBART50 model. Finally, the Mandarin benchmark shows a large gap between the NLU and SLU performance, which we explain by the very small presence of Mandarin and related languages in the XLS-R pretraining.
### Adaptor pretraining
A comparison of the Adaptor pretraining methods is given in Table 3. It is evident from these results that both medium and low resource benchmarks benefit from the Adaptor pretraining in general. Hard label loss appears to yield better
\begin{table}
\begin{tabular}{l|l|l l} \hline \hline Dataset & Metrics & NLU & SLU \\ \hline \multirow{2}{*}{SLURP} & IC Acc.\(\uparrow\) & 85.67 & 86.97 \\ & SLU-F1\(\uparrow\) & 79.30 & 77.71 \\ \hline \multirow{2}{*}{SLUE} & F1\(\uparrow\) & 83.25 & 68.90 \\ & label-F1\(\uparrow\) & 87.76 & 82.28 \\ \hline \multirow{2}{*}{CATSLU} & Acc.\(\uparrow\) & 82.56 & 63.87 \\ & F1\(\uparrow\) & 73.48 & 48.33 \\ \hline \multirow{2}{*}{MEDIA} & CER\(\downarrow\) & 16.50 & 13.67 \\ & CVER\(\downarrow\) & 19.09 & 16.28 \\ \hline \multirow{2}{*}{PM-Dom} & CER\(\downarrow\) & 23.49 & 21.43 \\ & CVER\(\downarrow\) & 26.59 & 24.62 \\ \hline \multirow{2}{*}{PM-Lang} & CER\(\downarrow\) & 40.76 & 25.13 \\ & CVER\(\downarrow\) & 43.93 & 29.11 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of the NLU models on the ground truth transcriptions and the SLU models on the speech recordings (without Adaptor pretraining).
performance than the MC for all datasets on the PreEnc level and for three out of six datasets on the PostDec level. Interestingly, the PostDec level provides the best results on all datasets except MEDIA. Four of these datasets contain the smallest amount of training sentences, so the changes in the decoder during the NLU fine-tuning are probably small enough that it remains compatible with the Adaptor pretrained with the general mBART50 decoder. The SLURP dataset also works best with the PostDec AED pretrained Adaptor despite having the largest amount of training sentences. Unlike the rest of the datasets, we observe the best scores on MEDIA for the PostEnc MC aligned Adaptor, possibly because the NLU fine-tuning on this task leads to larger modifications in the mBART50 decoder, reducing its compatibility with the general decoder. The visualization of the parameter differences in Figure 3 shows that, after fine-tuning on the MEDIA dataset, a larger number of parameters exhibit relative changes in the higher range compared to the other tasks. Additionally, we calculate the ratio of unique tokens from each SLU dataset that is seen during the pretraining, as shown in Table 4. It can be observed from these numbers that the AED method is more effective for the datasets with at least 95% of unique test set tokens seen during the pretraining, and the MC loss is more effective otherwise. The MEDIA dataset again stands out in this analysis with the lowest ratio of unique training set tokens seen in the pretraining, which might also explain the lower PostDec pretraining effectiveness.
After exploring the training approaches separately, we try to combine them by simply summing the loss outputs. None of the systems with the multitask pretrained Adaptor degrades considerably compared to the best system for each dataset, which indicates that the different pretraining objectives regularize each other when combined. Although the multitask pretraining does not outperform any single-objective pretraining, the combination of CTC and AED can be recommended as a trade-off solution when little is known about the SLU data.
\begin{table}
\begin{tabular}{l|c c|l} \hline \hline Dataset & \multicolumn{2}{c|}{Tokens seen in pretraining, \%} & Best pretraining \\ \cline{2-3} & Train & Test & method \\ \hline SLURP & 98.45 & 98.93 & PostDec AED \\ SLUE & 98.93 & 99.66 & PostDec MC/AED \\ CATSLU & 98.93 & 99.41 & PostDec AED \\ MEDIA & 91.59 & 93.38 & PostEnc MC \\ PM-Dom & 93.47 & 93.20 & PostDec MC \\ PM-Lang & 95.52 & 95.67 & PostDec AED \\ \hline \hline \end{tabular}
\end{table}
Table 4: Influence of the SLU benchmark’s unique tokens seen in the Adaptor pretraining data on the best pretraining method for that SLU benchmark.
\begin{table}
\begin{tabular}{l|l|l c|c c|c c|c c|c c|c c|c} \hline \hline ID & Adaptor & \multicolumn{2}{c|}{SLURP} & \multicolumn{2}{c|}{SLUE} & \multicolumn{2}{c|}{CATSLU} & \multicolumn{2}{c|}{MEDIA} & \multicolumn{2}{c|}{PM-Dom} & \multicolumn{2}{c|}{PM-Lang} & \multicolumn{1}{c}{Average} \\ \cline{3-14} & pretraining & IC Acc.\(\uparrow\) & SLU-F1\(\uparrow\) & F1\(\uparrow\) & label-F1\(\uparrow\) & Acc.\(\uparrow\) & F1\(\uparrow\) & CER\(\downarrow\) & CVER\(\downarrow\) & CER\(\downarrow\) & CVER\(\downarrow\) & CER\(\downarrow\) & CVER\(\downarrow\) & RER, \% \\ \hline
0 & None & 86.97 & 77.71 & 68.90 & 82.28 & 48.33 & 63.87 & 13.67 & 16.28 & 21.43 & 24.62 & 25.13 & 29.11 & - \\ \hline
1 & PreEnc MC & 87.79 & 77.85 & 69.60 & 83.60 & 43.63 & 59.02 & 13.65 & 16.66 & 20.51 & 24.02 & 26.49 & 30.62 & -3.15 \\
2 & PreEnc CTC & 88.59 & 78.80 & 71.25 & 84.04 & 50.33 & 65.73 & 12.44 & 15.13 & 19.54 & 23.06 & 22.93 & 26.50 & 6.81 \\
3 & PostEnc MC & 88.13 & 78.91 & 71.93 & 85.10 & 47.50 & 62.65 & 12.31 & 15.07 & 19.67 & 22.81 & 23.36 & 27.22 & 5.12 \\
4 & PostDec MC & 88.93 & 79.22 & 72.25 & 86.56 & 50.52 & 65.84 & 12.60 & 15.25 & **17.90** & **21.15** & 22.52 & 26.06 & 9.95 \\
5 & PostDec AED & **89.33** & **80.08** & 72.94 & **86.57** & **54.54** & **69.24** & 13.64 & 16.18 & 18.00 & 21.27 & **21.88** & **25.04** & **12.63** \\ \hline
6 & 2 + 5 & 89.14 & 80.07 & **73.07** & 86.05 & 53.63 & 68.37 & 12.12 & **14.71** & 19.38 & 22.36 & 23.28 & 26.78 & 10.97 \\
7 & 2 + 3 + 5 & 89.20 & 79.58 & 73.05 & 86.20 & 50.81 & 65.55 & **12.11** & 14.83 & 18.85 & 21.91 & 23.40 & 26.72 & 9.17 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results depending on the layer and loss function for Adaptor pretraining and the average Relative Error Reduction (RER) compared to the error rate without Adaptor pretraining. The error rate is calculated by subtracting the score from 100 for the accuracy, SLU-F1, F1 and label-F1 metrics. The last two lines contain the results for the combined losses. Underlined numbers show the best scores for the individual losses (without combination). **Bold** numbers show the best overall scores.
Figure 3: Distribution of the differences between the parameters of the original mBART50 decoder and the fine-tuned NLU decoders depending on the dataset. The difference is computed as a percentage of the original parameter value. A wider line indicates a larger number of parameters having certain difference.
#### 4.2.1 Language of pretraining data
In order to examine the importance of the multilingual Adaptor pretraining, we experiment with the PostEnc MC approach but replace the training data with 1000 hours of English recordings sampled from the Common Voice corpus. The results are displayed in Table 5. While the differences with the multilingual pretraining are rather small, the results confirm that the Adaptor pretraining should at least include the languages corresponding to the downstream SLU tasks. Moreover, the spoken NER model on the English recordings is also improved by the multilingual Adaptor pretraining, which can be attributed to the relatively large number of loanwords in this task.
#### 4.2.2 Scaling up Adaptor pretraining
We select the best performing configuration (PostDec AED) and run the Adaptor pretraining on our full ASR dataset comprising 7000 hours. It can be seen from Table 6 that the additional pretraining data improves the results in almost all cases, indicating the scaling potential of our method. In order to compare our E2E SLU with the pipeline approach, we transcribe the SLU datasets using the PostDec AED model trained on 7000 hours and subsequently fine-tune and evaluate the NLU models on these ASR transcriptions. As the comparison with the pipeline results in Table 6 shows, our E2E SLU approach can offer more accurate predictions given the same training data and pretrained models as the pipeline. Moreover, Table 6 compares our best result with the previous work. The proposed approach demonstrates overall competitive results with the exception of the CATSLU dataset. Our systems outperform the previous results on four datasets according to the metrics that evaluate predictions with variable length and large vocabulary (SLU-F1, CVER), and only on two datasets if we take into account the metrics that evaluate either a single element from a small predefined set of values (IC accuracy) or a sequence of such elements (CER). This suggests that for tasks with a larger prediction space, the generative capabilities of our model are more important.
### Cross-lingual SLU
Our SLU approach is mostly multilingual, so the model is highly likely to have cross-lingual capabilities. The PortMEDIA-Language benchmark is specifically designed to evaluate this aspect because it uses the same SF tags as the MEDIA corpus, but is recorded in Italian instead of French. Similarly to [8], we experiment with our best MEDIA model and evaluate it on the PortMEDIA-Language test set, both without any modification and after fine-tuning on the PortMEDIA-Language training set. The results shown in Table 7 outperform by a large margin most of the zero-shot results reported in Table 3 of [8] and all of the fine-tuning results reported in Table 5 of [8]. On top of that, our fine-tuning numbers are better than any previously reported.
## 5 Conclusions
We propose a unified E2E SLU approach based on multilingual SSL pretrained speech and text-to-text models and evaluate it on multiple SLU benchmarks. The evaluation results are comparable to or better than previously reported ones, while covering a more diverse set of SLU benchmarks. Pretraining on a medium amount of ASR data using the popular CTC and AED objectives and the novel MC approach improves the scores across the board and outperforms the best-known configurations in several cases, suggesting that the proposed model can be improved further.
\begin{table}
\begin{tabular}{l|c c|c|c} \hline \hline Dataset & Metrics & \multicolumn{2}{c|}{Data, hours} & Pipeline & Prior work \\ \cline{3-5} & & 1K & 7K & & \\ \hline \multirow{2}{*}{SLURP} & IC Acc.\(\uparrow\) & 89.33 & 90.04 & 64.88 & **90.07** \\ & SLU-F1\(\uparrow\) & 80.08 & **80.66** & 54.78 & 79.90 \\ \hline \multirow{2}{*}{SLUE} & F1\(\uparrow\) & 72.94 & 75.47 & 63.57 & **77.20** \\ & label-F1\(\uparrow\) & 86.57 & 88.14 & 76.66 & **88.70** \\ \hline \multirow{2}{*}{CATSLU} & Acc.\(\uparrow\) & 54.54 & 56.34 & 38.46 & **86.30** \\ & F1\(\uparrow\) & 69.24 & 71.07 & 58.49 & **92.56** [33] \\ \hline \multirow{2}{*}{MEDIA} & CER\(\downarrow\) & 13.64 & 12.07 & 29.51 & **11.20** [34] \\ & CVER\(\downarrow\) & 16.18 & **14.57** & 33.45 & 17.20 \\ \hline \multirow{2}{*}{PM-Dom} & CER\(\downarrow\) & 18.00 & **17.90** & 44.80 & 21.90 \\ & CVER\(\downarrow\) & 21.27 & **21.08** & 51.52 & 35.90 \\ \hline \multirow{2}{*}{PM-Lang} & CER\(\downarrow\) & 21.88 & **21.50** & 49.53 & 26.18 \\ & CVER\(\downarrow\) & **25.04** & **25.04** & 54.62 & 39.28 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Scaling up the Adaptor PostDec AED pretraining with additional data and comparison with the pipeline approach and with the prior work.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Metrics & No transfer & Zero-shot & Fine-tuning \\ \hline CER\(\downarrow\) & 21.50 & 64.93 & **20.30** \\ CVER\(\downarrow\) & 25.04 & 71.64 & **23.65** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Results of transfer learning from MEDIA (French) to PM-Lang (Italian).
\begin{table}
\begin{tabular}{l|l|c c} \hline \hline Dataset & Metrics & \multicolumn{2}{c}{Pretraining languages} \\ \cline{3-5} & & English & Multiple \\ \hline \multirow{2}{*}{SLURP} & IC Acc.\(\uparrow\) & **88.62** & 88.13 \\ & SLU-F1\(\uparrow\) & **79.09** & 78.91 \\ \hline \multirow{2}{*}{SLUE} & F1\(\uparrow\) & **72.06** & 71.93 \\ & label-F1\(\uparrow\) & 84.40 & **85.10** \\ \hline \multirow{2}{*}{CATSLU} & Acc.\(\uparrow\) & 47.08 & **47.50** \\ & F1\(\uparrow\) & 62.43 & **62.65** \\ \hline \multirow{2}{*}{MEDIA} & CER\(\downarrow\) & 12.88 & **12.31** \\ & CVER\(\downarrow\) & 15.75 & **15.07** \\ \hline \multirow{2}{*}{PM-Dom} & CER\(\downarrow\) & **19.65** & 19.67 \\ & CVER\(\downarrow\) & 23.55 & **22.81** \\ \hline \multirow{2}{*}{PM-Lang} & CER\(\downarrow\) & 23.71 & **23.36** \\ & CVER\(\downarrow\) & 27.83 & **27.22** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Effect of the languages of Adaptor pretraining data (PostEnc MC loss). |
2306.12848 | On the Direct Construction of MDS and Near-MDS Matrices | The optimal branch number of MDS matrices makes them a preferred choice for
designing diffusion layers in many block ciphers and hash functions.
Consequently, various methods have been proposed for designing MDS matrices,
including search and direct methods. While exhaustive search is suitable for
small order MDS matrices, direct constructions are preferred for larger orders
due to the vast search space involved. In the literature, there has been
extensive research on the direct construction of MDS matrices using both
recursive and nonrecursive methods. On the other hand, in lightweight
cryptography, Near-MDS (NMDS) matrices with sub-optimal branch numbers offer a
better balance between security and efficiency as a diffusion layer compared to
MDS matrices. However, no direct construction method is available in the
literature for constructing recursive NMDS matrices. This paper introduces some
direct constructions of NMDS matrices in both nonrecursive and recursive
settings. Additionally, it presents some direct constructions of nonrecursive
MDS matrices from the generalized Vandermonde matrices. We propose a method for
constructing involutory MDS and NMDS matrices using generalized Vandermonde
matrices. Furthermore, we prove some folklore results that are used in the
literature related to the NMDS code. | Kishan Chand Gupta, Sumit Kumar Pandey, Susanta Samanta | 2023-06-22T12:43:53Z | http://arxiv.org/abs/2306.12848v3 | # On the Direct Construction of MDS and Near-MDS Matrices
###### Abstract
The optimal branch number of MDS matrices makes them a preferred choice for designing diffusion layers in many block ciphers and hash functions. Consequently, various methods have been proposed for designing MDS matrices, including search and direct methods. While exhaustive search is suitable for small order MDS matrices, direct constructions are preferred for larger orders due to the vast search space involved. In the literature, there has been extensive research on the direct construction of MDS matrices using both recursive and nonrecursive methods. On the other hand, in lightweight cryptography, Near-MDS (NMDS) matrices with sub-optimal branch numbers offer a better balance between security and efficiency as a diffusion layer compared to MDS matrices. However, no direct construction method is available in the literature for constructing recursive NMDS matrices. This paper introduces some direct constructions of NMDS matrices in both nonrecursive and recursive settings. Additionally, it presents some direct constructions of nonrecursive MDS matrices from the generalized Vandermonde matrices. We propose a method for constructing involutory MDS and NMDS matrices using generalized Vandermonde matrices. Furthermore, we prove some folklore results that are used in the literature related to the NMDS code.
Keywords:Linear Code MDS code Near-MDS code Diffusion Layer MDS matrix Near-MDS matrix.
## 1 Introduction
The concept of confusion and diffusion, introduced by Shannon [28], is commonly employed in the design of symmetric key cryptographic primitives. Typically, the round function of such designs uses both non-linear and linear layers to achieve confusion and diffusion, respectively. The focus of this paper is on the
construction of linear diffusion layers that maximize the spreading of internal dependencies. One way to formalize the concept of perfect diffusion is through the use of multipermutations, introduced in [27, 32]. Another way is through Maximum Distance Separable (MDS) matrices [3]. Due to their optimal branch number, many block ciphers and hash functions use MDS matrices in their diffusion layers. The construction of MDS matrices has been studied extensively in the literature, and the approaches can be categorized mainly in two ways: nonrecursive and recursive. In nonrecursive constructions, the constructed matrices are themselves MDS, whereas in recursive constructions we generally start with a sparse matrix \(A\) of order \(n\), with a proper choice of elements such that \(A^{n}\) is an MDS matrix.
The advantage of recursive MDS matrices is that they are particularly well suited for lightweight implementations: the diffusion layer can be implemented by repeatedly executing the implementation of the sparse matrix, at the cost of additional clock cycles. Recursive MDS matrices based on companion matrices were used in the PHOTON [8] family of hash functions and the LED block cipher [9] because companion matrices can be implemented by a simple LFSR.
One can further classify the techniques used to construct MDS matrices based on whether the matrix is constructed directly or found by a search method that enumerates some search space. While an exhaustive search may be appropriate for finding small order MDS matrices, direct constructions are favored for higher orders, owing to the enormous search space.
In the literature, there has been extensive research on the direct construction of MDS matrices using both recursive and nonrecursive methods. Nonrecursive direct constructions mainly rely on Cauchy and Vandermonde based constructions [10, 15, 18, 19, 23, 26], while recursive direct constructions are obtained through certain coding-theoretic methods. Augot et al. [1] employed shortened BCH codes, and Berger [2] used Gabidulin codes in their method. Then, in a series of works [12, 13, 14], the authors proposed many approaches for the construction of recursive MDS matrices from the companion matrices over finite fields.
Near-MDS (NMDS) matrices have sub-optimal branch numbers, leading to a slower diffusion speed compared to MDS matrices. However, as a diffusion layer, NMDS matrices can provide a more favorable trade-off between security and efficiency than MDS matrices. Despite their potential benefits, research on NMDS matrices has been limited in the literature, and there is currently no direct construction method available for them in the recursive setting. In 2017, Li et al. [20] studied the construction of NMDS matrices from circulant and Hadamard matrices. In [21], the focus is on recursive NMDS matrices with the goal of achieving the lowest possible hardware cost. Furthermore, recent studies such as [16, 30] have presented direct constructions of NMDS codes, which can be utilized to derive nonrecursive NMDS matrices.
This paper aims to address the absence of direct constructions for recursive NMDS matrices by presenting some direct constructions of NMDS matrices in the recursive setting. It also includes a novel direct construction of recursive
MDS matrices. Additionally, the paper introduces generalized Vandermonde matrices for direct constructions of nonrecursive MDS and NMDS matrices. We also propose a method for constructing involutory MDS and NMDS matrices. Furthermore, the paper provides formal proofs of some commonly referenced folklore results in the literature on NMDS codes.
This paper is structured as follows: Section 2 provides the necessary notations and presents some fundamental results, including useful results on NMDS codes. Section 3 describes several direct construction methods for nonrecursive MDS and NMDS matrices, while Section 4 presents direct construction methods for recursive MDS and NMDS matrices. Finally, Section 5 concludes the paper.
## 2 Definition and Preliminaries
Let \(\mathbb{F}_{q}\) be the finite field containing \(q\) elements, where \(q=p^{r}\) for some prime \(p\) and a positive integer \(r\). The set of vectors of length \(n\) with entries from the finite field \(\mathbb{F}_{q}\) is denoted by \(\mathbb{F}_{q}^{n}\). Let \(\mathbb{F}_{q}[x]\) denote the polynomial ring over \(\mathbb{F}_{q}\) in the indeterminate \(x\). We denote the algebraic closure of \(\mathbb{F}_{q}\) by \(\bar{\mathbb{F}}_{q}\) and the multiplicative group by \(\mathbb{F}_{q}^{*}\). It is a well-established fact that elements of a finite field with characteristic \(p\) can be represented as vectors with coefficients in \(\mathbb{F}_{p}\). In other words, there exists a vector space isomorphism from \(\mathbb{F}_{p^{r}}\) to \(\mathbb{F}_{p}^{r}\) defined by \(x=(x_{1}\alpha_{1}+x_{2}\alpha_{2}+\cdots+x_{r}\alpha_{r})\rightarrow(x_{1},x_{2},\ldots,x_{r})\), where \(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{r}\}\) is a basis of \(\mathbb{F}_{p^{r}}\). If \(\alpha\) is a primitive element of \(\mathbb{F}_{p^{r}}\), every nonzero element of \(\mathbb{F}_{p^{r}}\) can be expressed as a power of \(\alpha\), i.e. \(\mathbb{F}_{p^{r}}^{*}=\{1,\alpha,\alpha^{2},\alpha^{3},\ldots,\alpha^{p^{r}-2}\}\).
Let \(\mathcal{M}_{k\times n}(\mathbb{F}_{q})\) denote the set of all matrices of size \(k\times n\) over \(\mathbb{F}_{q}\). For simplicity, we use \(\mathcal{M}_{n}(\mathbb{F}_{q})\) to denote the ring of all \(n\times n\) matrices (square matrices of order \(n\)) over \(\mathbb{F}_{q}\). Let \(I_{n}\) denote the identity matrix of \(\mathcal{M}_{n}(\mathbb{F}_{q})\). The determinant of a matrix \(A\in\mathcal{M}_{n}(\mathbb{F}_{q})\) is denoted by \(\det(A)\). A square matrix \(A\) is said to be nonsingular if \(\det(A)\neq 0\) or equivalently, if the rows (columns) of \(A\) are linearly independent over \(\mathbb{F}_{q}\). We now recall some concepts from coding theory.
A linear code \(\mathcal{C}\) of length \(n\) and dimension \(k\) over \(\mathbb{F}_{q}\) is denoted as an \([n,k]\) code. If the minimum distance of \(\mathcal{C}\) is equal to \(d\) then we denote it as an \([n,k,d]\) code. The dual code \(\mathcal{C}^{\perp}\) of a code \(\mathcal{C}\) can be defined as the subspace of dimension \((n-k)\) that is orthogonal to \(\mathcal{C}\).
A generator matrix of \(\mathcal{C}\) over \(\mathbb{F}_{q}\) is defined as a \(k\times n\) matrix \(G\) whose rows form a basis for \(\mathcal{C}\). On the other hand, a parity check matrix of \(\mathcal{C}\) over \(\mathbb{F}_{q}\) is a \((n-k)\times n\) matrix \(H\) such that for every \(c\in\mathbb{F}_{q}^{n}\), \(c\in\mathcal{C}\iff\,Hc^{T}=\mathbf{0}\). In other words, the code \(\mathcal{C}\) is the kernel of \(H\) in \(\mathbb{F}_{q}^{n}\). A generator matrix \(G\) is said to be in standard form if it has the form \(G=[I_{k}\mid A]\), where \(A\) is a \(k\times(n-k)\) matrix. If \(G=[I_{k}\mid A]\) is a generator matrix, then \(H=[-A^{T}|I_{n-k}]\) is a parity check matrix for \(\mathcal{C}\).
The following lemma establishes a connection between the properties of a parity check matrix and the minimum distance \(d\) of a linear code \(\mathcal{C}\).
Lemma 1: _[_22_, page 33]_ _Let \(H\) be a parity check matrix of a code \(\mathcal{C}\). Then the code has minimum distance \(d\) if and only if_
1. _every_ \(d-1\) _columns of_ \(H\) _are linearly independent,_
2. _some_ \(d\) _columns are linearly dependent._
Constructing a linear code with large values of \(k/n\) and \(d\) is desirable in coding theory. However, there is a trade-off between the parameters \(n,k,\) and \(d\). For instance, the well-known Singleton bound gives an upper bound on the minimum distance for a code.
Theorem 3.1: _(The Singleton bound)[22, page 33] Let \(C\) be an \([n,k,d]\) code. Then \(d\leq n-k+1\)._
Definition 1: (MDS code) A code with \(d=n-k+1\) is called maximum distance separable code or MDS code in short.
Remark 1: An \([n,k]\) MDS code is defined as having minimum distance of \(n-k+1\). Thus, every set of \(n-k\) columns of the parity check matrix are linearly independent.
Remark 2: Since the dual of an MDS code is again an MDS code [22, page 318], every \(k\) columns of the generator matrix are linearly independent.
Theorem 3.2: _[_22_, page 321]_ _An \([n,k,d]\) code \(\mathcal{C}\) with generator matrix \(G=[I\mid A]\), where \(A\) is a \(k\times(n-k)\) matrix, is MDS if and only if every square submatrix (formed from any \(i\) rows and any \(i\) columns, for any \(i=1,\ 2,\ \ldots,\)\(min\{k,n-k\}\)) of \(A\) is nonsingular._
Now we will briefly discuss another important class of linear code which found many applications in cryptography. In [5], the concept of Near-MDS codes is introduced as a relaxation of some constraints of the MDS code. The widely used approach to defining Near-MDS codes is through generalized Hamming weights [33].
Definition 2: [33] Let \(\mathcal{C}\) be an \([n,k]\) code with \(\mathcal{D}\subset\mathcal{C}\) as a subcode of \(\mathcal{C}\). The support of \(\mathcal{D}\), denoted by \(\chi(\mathcal{D})\), is the set of coordinate positions, where not all codewords of \(\mathcal{D}\) have zero i.e.
\[\chi(\mathcal{D})=\{i:\exists(x_{1},x_{2},\ldots,x_{n})\in\mathcal{D}\text{ and }x_{i}\neq 0\}.\]
Using the terminology, an \([n,k]\) code is a linear code of dimension \(k\) and support size at most \(n\). The rank of a vector space is its dimension, and we may use the terms rank and dimension interchangeably.
Example 1: Let \(\mathcal{C}\) be the linear code with a generator matrix
\[G=\begin{bmatrix}1&0&0&0&1&0\\ 0&1&0&0&1&1\\ 0&0&1&0&0&1\end{bmatrix}.\]
Then \(\chi(\mathcal{C})=\{1,2,3,5,6\}\) and \(\chi(\mathcal{D})=\{2,3,5,6\}\) for the subcode \(\mathcal{D}\) generated by the second and third rows of \(G\).
Definition 3: [33] For a linear code \(\mathcal{C}\), the \(r\)-th generalized Hamming weight, denoted as \(d_{r}(\mathcal{C})\), is defined as the cardinality of the minimal support of an \(r\)-dimensional subcode of \(\mathcal{C}\), where \(1\leq r\leq k\), i.e.
\[d_{r}(\mathcal{C})=\min\{|\chi(\mathcal{D})|:\text{$\mathcal{D}$ is a subcode of $\mathcal{C}$ with rank $r$}\}.\]
Note that \(d_{1}(\mathcal{C})=d\) is the minimum distance of \(\mathcal{C}\).
Example 2: Consider the linear code \(\mathcal{C}\) in Example 1. It is easy to check that \(d_{1}(\mathcal{C})=2\). By determining the minimal support of all two-dimensional subspaces \(\mathcal{D}\subset\mathcal{C}\), we get \(d_{2}(\mathcal{C})=4\). Also, there is at least one codeword in \(\mathcal{C}\) with a \(1\) in each position except the fourth position, which implies that \(d_{3}(\mathcal{C})=5\).
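For small binary codes, the generalized Hamming weights can be computed by brute force directly from Definition 3. The following Python sketch (illustrative, not optimized) does so for the code of Example 1 and reproduces \(d_{1}=2\), \(d_{2}=4\), \(d_{3}=5\).

```python
from itertools import combinations

def generalized_weights(G):
    """Brute-force d_r for a binary code given by generator matrix G (small codes only)."""
    k, n = len(G), len(G[0])
    add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))
    code = {tuple([0] * n)}                      # enumerate all 2^k codewords
    for row in G:
        code |= {add(c, tuple(row)) for c in code}
    nonzero = [c for c in code if any(c)]
    weights = {}
    for r in range(1, k + 1):
        best = n
        for gens in combinations(nonzero, r):
            sub = {tuple([0] * n)}               # span of the chosen codewords
            for v in gens:
                sub |= {add(c, v) for c in sub}
            if len(sub) != 2 ** r:               # generators not linearly independent
                continue
            support = sum(any(c[i] for c in sub) for i in range(n))
            best = min(best, support)
        weights[r] = best
    return weights

G = [[1, 0, 0, 0, 1, 0],
     [0, 1, 0, 0, 1, 1],
     [0, 0, 1, 0, 0, 1]]
print(generalized_weights(G))   # {1: 2, 2: 4, 3: 5}
```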
Theorem 3.1: _(Monotonicity) [33] For every \([n,k,d]\) linear code, we have_
\[1\leq d_{1}(\mathcal{C})=d<d_{2}(\mathcal{C})<d_{3}(\mathcal{C})\cdots<d_{k}( \mathcal{C})\leq n.\]
Corollary 1: _(Generalized Singleton bound) [33] For an \([n,k]\) linear code \(\mathcal{C}\), \(d_{r}(\mathcal{C})\leq n-k+r\). (When \(r=1\), this is the Singleton bound.)_
The following theorem provides another method to compute the generalized Hamming weights of a linear code. Let \(H\) be a parity check matrix of \(\mathcal{C}\) and let \(H_{i}\), \(1\leq i\leq n\), be its \(i\)-th column vector. Let \(<H_{i}:i\in I>\) be the space generated by the column vectors \(H_{i}\) for \(i\in I\).
Theorem 3.3: _[_33_]_ _For all \(r\leq k\),_
\[d_{r}(\mathcal{C})=\min\{|I|:|I|-\text{rank}(<H_{i}:i\in I>)\geq r\}.\]
The following Theorem establishes a connection between the properties of a parity check matrix and the generalized Hamming weight of a linear code \(\mathcal{C}\). Although this theorem is well-known, we have not found its proof, so we are providing it below.
Theorem 3.4: _[_33, 5_]_ _Let \(H\) be a parity check matrix for a linear code \(\mathcal{C}\). Then \(d_{r}(\mathcal{C})=\delta\) if and only if the following conditions hold:_
1. _any_ \(\delta-1\) _columns of_ \(H\) _have rank greater or equal to_ \(\delta-r\)_,_
2. _there exist_ \(\delta\) _columns in_ \(H\) _of rank_ \(\delta-r\)_._
Proof: For any \(I\subset\{1,2,\ldots,n\}\), let \(S(I)=<H_{i}:i\in I>\) be the space spanned by the vectors \(H_{i}\) for \(i\in I\), where \(H_{i}\) denotes the \(i\)-th column of the parity check matrix \(H\) of \(\mathcal{C}\). Let
\[S^{\perp}(I)=\Bigg{\{}x\in\mathcal{C}:x_{i}=0\text{ for }i\not\in I\text{ and }\sum_{i\in I}x_{i}H_{i}=0\Bigg{\}}.\]
Then \(\text{rank}(S(I))+\text{rank}(S^{\perp}(I))=|I|\).
Let \(d_{r}(\mathcal{C})=\delta\), and we will prove that both conditions hold. To do so, let us assume for the sake of contradiction that there exist some \(\delta-1\) columns of \(H\), say \(H_{i_{1}},H_{i_{2}},\ldots,H_{i_{\delta-1}}\), with rank \(\leq\delta-r-1\).
Now let \(I=\{i_{1},i_{2},\ldots,i_{\delta-1}\}\subset\{1,2,\ldots,n\}\). Then \(\operatorname{rank}(S(I))\leq\delta-r-1\). Thus, we have
\[\operatorname{rank}(S^{\perp}(I)) =|I|-\operatorname{rank}(S(I))\] \[\geq\delta-1-(\delta-r-1)=r.\]
Therefore, we have \(\operatorname{rank}(S^{\perp}(I))\geq r\). Also, by the construction, \(S^{\perp}(I)\) is a subcode of \(\mathcal{C}\) and \(|\chi(S^{\perp}(I))|\leq\delta-1\). This leads to a contradiction since \(d_{r}(\mathcal{C})=\delta\). Therefore, we can conclude that any \(\delta-1\) columns of \(H\) have rank greater or equal to \(\delta-r\).
Since \(d_{r}(\mathcal{C})=\delta\), there exist a subcode \(\mathcal{D}\) of \(\mathcal{C}\) with \(\operatorname{rank}(D)=r\) and \(|\chi(\mathcal{D})|=d_{r}(\delta)\). Let \(I=\chi(\mathcal{D})\). Now we will show that \(D=S^{\perp}(I)\).
Let \(c=(c_{1},c_{2},\ldots,c_{n})\in\mathcal{D}\) be a codeword. Then we have
\[\sum_{i=1}^{n}c_{i}H_{i} =\mathbf{0}\] \[\implies\sum_{i\in I}c_{i}H_{i}+\sum_{i\not\in I}c_{i}H_{i} =\mathbf{0}\] \[\implies\sum_{i\in I}c_{i}H_{i} =\mathbf{0}\ \ [\text{Since }c_{i}=0\ \forall i\not\in I=\chi( \mathcal{D})]\] \[\implies c\in S^{\perp}(I)\] \[\implies D\subset S^{\perp}(I).\]
If possible, let \(\operatorname{rank}(S^{\perp}(I))=r^{\prime}>r\). Now since \(\operatorname{rank}(S(I))+\operatorname{rank}(S^{\perp}(I))=|I|\), we have
\[|I|-\operatorname{rank}(S(I))=r^{\prime}>r\implies d_{r^{\prime}}(\mathcal{C})\leq|I|=\delta\ \ [\text{By Theorem 4}].\]

But by the monotonicity of the generalized Hamming weights, \(d_{r}(\mathcal{C})<d_{r^{\prime}}(\mathcal{C})\leq\delta=d_{r}(\mathcal{C})\), which is a contradiction. Hence \(\operatorname{rank}(S^{\perp}(I))=r\), and therefore \(\operatorname{rank}(S(I))=|I|-r=\delta-r\). Thus there exist \(\delta\) columns of \(H\) of rank \(\delta-r\), which proves condition \((ii)\).

Conversely, suppose that conditions \((i)\) and \((ii)\) hold. By condition \((ii)\), there exists a set \(I\subset\{1,2,\ldots,n\}\) with \(|I|=\delta\) such that \(\operatorname{rank}(S(I))=\delta-r\).
Since \(|I|-\mathrm{rank}(S(I))=r\), by Theorem 4, we have \(d_{r}(\mathcal{C})\leq\delta\).
If possible, let \(d_{r}(\mathcal{C})=\delta-t\) for some \(t\geq 1\). Now by Theorem 4, there exist some \(I^{\prime}\subset\{1,2,\ldots,n\}\) with \(|I^{\prime}|=\delta-t\) such that
\[|I^{\prime}|-\mathrm{rank}(S(I^{\prime}))\geq r\] \[\implies \mathrm{rank}(S(I^{\prime}))\leq|I^{\prime}|-r\] \[\implies \mathrm{rank}(S(I^{\prime}))\leq\delta-t-r.\]
Therefore, there exist \(|I^{\prime}|=\delta-t\) many columns, say \(H_{i_{1}},H_{i_{2}},\ldots,H_{i_{\delta-t}}\), of \(H\) of \(\mathrm{rank}\leq\delta-t-r\). Now by adding any other \(t-1\) columns of \(H\) to that \(\delta-t\) columns we have \(\delta-1\) columns, say \(H_{i_{1}},H_{i_{2}},\ldots,H_{i_{\delta-t}},H_{i_{\delta-t+1}},\ldots,H_{i_{ \delta-1}}\), of \(H\) of \(\mathrm{rank}\leq(\delta-t-r)+(t-1)=\delta-r-1<\delta-r\). This leads to a contradiction to condition \((i)\). Hence, we must have \(d_{r}(\mathcal{C})=\delta\).
Definition 4: (NMDS code)[5] A linear \([n,k]\) code \(\mathcal{C}\) is said to be Near-MDS or NMDS if
\[d_{1}(\mathcal{C})=n-k\ \ \text{and}\ \ d_{i}(\mathcal{C})=n-k+i,\ \ \text{for}\ i=2,3,\ldots,k.\]
Remark 3: From the monotonicity of generalized Hamming weights, we can say that an \([n,k]\) linear code is NMDS if and only if \(d_{1}(\mathcal{C})=n-k\) and \(d_{2}(\mathcal{C})=n-k+2\).
The above characterization of the generalized Hamming weights in terms of the parity check matrix provides the following useful result on NMDS codes.
Lemma 2: _[_5_]_ _Let \(H\) be a parity check matrix of an \([n,k]\) code \(\mathcal{C}\). Then the code \(\mathcal{C}\) is NMDS if and only if \(H\) satisfies the conditions_
1. _every_ \(n-k-1\) _columns of_ \(H\) _are linearly independent,_
2. _there exist some_ \(n-k\) _columns that are linearly dependent,_
3. _any_ \(n-k+1\) _columns of_ \(H\) _are of full rank._
Proof: Let \(\mathcal{C}\) be an NMDS code. Therefore, we have \(d_{1}=n-k\) and \(d_{2}=n-k+2\). Since \(d_{1}\) is the minimum distance of \(\mathcal{C}\), from Lemma 1, we can say that \(d_{1}=n-k\) if and only if any \(n-k-1\) columns of \(H\) are linearly independent and there exist some \(n-k\) columns that are linearly dependent. Moreover, the characterization above implies that \(d_{2}=n-k+2\) if and only if any \(n-k+1\) columns of \(H\) have rank greater than or equal to \((n-k+2)-2=n-k\) and there exist \(n-k+2\) columns of \(H\) of rank \((n-k+2)-2=n-k\). Since \(H\) is a parity check matrix of \(\mathcal{C}\), we have \(\mathrm{rank}(H)=n-k\). Therefore, we can conclude that \(d_{2}=n-k+2\) if and only if any \(n-k+1\) columns of \(H\) are of full rank. Hence, the lemma.
It can be deduced from the properties of the generalized Hamming weights that the dual of an NMDS code is also an NMDS code.
Lemma 3: _[_5_]_ _If a linear \([n,k]\) code is NMDS, then its dual code is also NMDS._
Corollary 2: _[_5_]_ _A linear \([n,k]\) code \(\mathcal{C}\) is NMDS if and only if \(d(\mathcal{C})+d(\mathcal{C}^{\perp})=n\), where \(d(\mathcal{C})\) and \(d(\mathcal{C}^{\perp})\) denotes the minimum distance of the code \(\mathcal{C}\) and its dual \(\mathcal{C}^{\perp}\), respectively._
One can infer from Lemma 3 that a generator matrix of a linear \([n,k]\) NMDS code must satisfy conditions similar to those in Lemma 2.
Lemma 4: _[_5_]_ _Let \(G\) be a generator matrix of an \([n,k]\) code \(\mathcal{C}\). Then the code \(\mathcal{C}\) is NMDS if and only if \(G\) satisfies the conditions_
1. _every_ \(k-1\) _columns of_ \(G\) _are linearly independent,_
2. _there exist some_ \(k\) _columns that are linearly dependent,_
3. _any_ \(k+1\) _columns of_ \(G\) _are of full rank._
Remark 4: It is worth noting that not all \([n,k,n-k]\) codes are necessarily NMDS codes. For example, consider the linear code \(\mathcal{C}\) with generator matrix
\[G=\begin{bmatrix}1&0&0&\alpha^{2}&\alpha&0\\ 0&1&0&\alpha&\alpha&0\\ 0&0&1&\alpha&0&\alpha\end{bmatrix}\]
over the finite field \(\mathbb{F}_{2^{2}}\) constructed by the polynomial \(x^{2}+x+1\), where \(\alpha\) is a root of \(x^{2}+x+1\). Then it can be checked that \(\mathcal{C}\) is a \([6,3,3]\) code. Also, by determining the minimal support of all two-dimensional subspaces \(\mathcal{D}\subset\mathcal{C}\), we get \(d_{2}(\mathcal{C})=4<5\). This value is achieved by the subspace spanned by the first two rows of the generator matrix \(G\). Hence, \(\mathcal{C}\) is not an NMDS code.
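The conditions of Lemma 4 can be checked mechanically for small matrices. The following Python sketch (an illustration, not an optimized implementation and not part of the paper's constructions) represents elements of \(\mathbb{F}_{2^{m}}\) as integers, performs polynomial arithmetic modulo a given irreducible polynomial, and tests the three conditions by brute force over column subsets. Applied to the generator matrix of Remark 4 (encoding \(\alpha\) as 2, \(\alpha^{2}\) as 3 and the modulus \(x^{2}+x+1\) as `0b111`), it returns `False`, in agreement with the remark; the helpers `gf_mul`, `gf_inv` and `rank` defined here are reused in the later sketches.

```python
from itertools import combinations

def gf_mul(a, b, mod, deg):
    """Multiply a, b in GF(2^deg), elements encoded as integers, modulo `mod`."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if (a >> deg) & 1:
            a ^= mod
        b >>= 1
    return r

def gf_inv(a, mod, deg):
    """Multiplicative inverse by exhaustive search (fine for small fields)."""
    for x in range(1, 1 << deg):
        if gf_mul(a, x, mod, deg) == 1:
            return x
    raise ZeroDivisionError("zero has no inverse")

def rank(rows, mod, deg):
    """Rank of a matrix over GF(2^deg) by Gaussian elimination (addition is XOR)."""
    rows = [list(row) for row in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = gf_inv(rows[r][c], mod, deg)
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = gf_mul(rows[i][c], inv, mod, deg)
                rows[i] = [x ^ gf_mul(f, y, mod, deg) for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def is_nmds_generator(G, mod, deg):
    """Check the three conditions of Lemma 4 on a k x n generator matrix G."""
    k, n = len(G), len(G[0])
    cols = [[G[i][j] for i in range(k)] for j in range(n)]
    cond1 = all(rank(s, mod, deg) == k - 1 for s in combinations(cols, k - 1))
    cond2 = any(rank(s, mod, deg) < k for s in combinations(cols, k))
    cond3 = all(rank(s, mod, deg) == k for s in combinations(cols, k + 1))
    return cond1 and cond2 and cond3

a, a2 = 0b10, 0b11   # alpha and alpha^2 = alpha + 1 in GF(4)
G = [[1, 0, 0, a2, a, 0],
     [0, 1, 0, a,  a, 0],
     [0, 0, 1, a,  0, a]]
print(is_nmds_generator(G, mod=0b111, deg=2))   # False
```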
Almost-MDS codes, introduced in [4], are closely related to NMDS codes.
Definition 5: (AMDS code)[4] An \([n,k,d]\) code \(\mathcal{C}\) is said to be Almost-MDS or AMDS code if \(d=n-k\).
As pointed out in Remark 4, not every AMDS code is NMDS, but for large \(n\) both notions coincide.
Theorem 4.1: _[_5_]_ _If \(n>k+q\), every \([n,k,n-k]\) code over \(\mathbb{F}_{q}\) is NMDS._
From Corollary 2, we have the following fact which serves as an alternative definition of an NMDS code.
**Fact 1**: _A linear \([n,k]\) code \(\mathcal{C}\) is NMDS if and only if both the code \(\mathcal{C}\) and its dual \(\mathcal{C}^{\perp}\) are AMDS codes._
We will now explore MDS and NMDS matrices, which have notable cryptographic applications. The concept of MDS and NMDS matrices is derived from the MDS and NMDS codes, respectively. Generally, the matrix \(A\) in the generator matrix \(G=[I\ |\ A]\) of an \([n,k]\) code \(\mathcal{C}\) is considered to be an MDS or NMDS matrix depending on whether the code \(\mathcal{C}\) is MDS or NMDS. Since square matrices are typically used in practice, for the sake of simplicity, we will consider the \([2n,n]\) code instead of the generic form of the \([n,k]\) code throughout the rest of this paper.
**Definition 6**: _[_15_]_ _Let \(\mathbb{F}_{q}\) be a finite field and \(n\) be an integer. Let \(x\to A\times x\) be a mapping from \(\mathbb{F}_{q}^{n}\) to \(\mathbb{F}_{q}^{n}\) defined by the \(n\times n\) matrix \(A\). We say that \(A\) is an MDS matrix if the set of all pairs \((x,A\times x)\) is an MDS code i.e. a linear code of dimension \(n\), length \(2n\) and minimum distance \(n+1\)._
Therefore, from Theorem 2, we have the following fact as another characterization of an MDS matrix.
**Fact 2**: _A square matrix \(A\) is an MDS matrix if and only if every square submatrix of \(A\) is nonsingular._
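Fact 2 translates directly into a brute-force test for small matrices. The sketch below (reusing `gf_mul`, `gf_inv` and `rank` from the previous sketch) simply verifies that every square submatrix has full rank; it is practical only for small orders, since the number of submatrices grows exponentially.

```python
from itertools import combinations

def is_mds(A, mod, deg):
    """Check Fact 2: every square submatrix of A over GF(2^deg) is nonsingular."""
    n = len(A)
    for size in range(1, n + 1):
        for rs in combinations(range(n), size):
            for cs in combinations(range(n), size):
                sub = [[A[r][c] for c in cs] for r in rs]
                if rank(sub, mod, deg) < size:
                    return False
    return True
```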
The goal of lightweight cryptography is to design ciphers that require minimal hardware resources, consume low energy, exhibit low latency, or optimize a combination of these criteria. One proposed method for reducing chip area is the use of recursive MDS matrices.
Definition 7: Let \(q\) be a positive integer. A matrix \(B\) is said to be recursive MDS or \(q\)-MDS if the matrix \(A=B^{q}\) is MDS. If \(B\) is \(q\)-MDS then we say \(B\) yields an MDS matrix.
Example 3: For example, the matrix
\[B=\left[\begin{array}{cccc}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 1&\alpha&0&0\end{array}\right]\]
is 22-MDS, where \(\alpha\) is a primitive element of the field \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\).
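Whether a given matrix \(B\) is \(q\)-MDS can be tested numerically by raising it to the \(q\)-th power over the field and applying the check above. A small sketch (again reusing `gf_mul` and `is_mds` from the earlier sketches, encoding \(\alpha\) as 2 and the modulus \(x^{4}+x+1\) as `0b10011`) is given below; it is an illustration only, not part of the construction.

```python
def mat_mul(A, B, mod, deg):
    """Product of two square matrices over GF(2^deg); addition is XOR."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            if A[i][k]:
                for j in range(n):
                    C[i][j] ^= gf_mul(A[i][k], B[k][j], mod, deg)
    return C

def mat_pow(A, q, mod, deg):
    """q-th power of A over GF(2^deg) by repeated multiplication."""
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(q):
        R = mat_mul(R, A, mod, deg)
    return R

alpha = 0b10                       # a root of x^4 + x + 1 in GF(2^4)
B = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, alpha, 0, 0]]
# is_mds(mat_pow(B, 22, 0b10011, 4), 0b10011, 4) tests the claim of Example 3.
```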
We will now discuss NMDS matrices, which have numerous uses in lightweight cryptographic primitives. The concept originates from coding theory, specifically the NMDS codes.
Definition 8: A matrix \(A\) of order \(n\) is said to be an NMDS matrix if \([I\mid A]\) is a generator matrix of an \([2n,n]\) linear NMDS code.
Since we know from Lemma 3 that the dual of an NMDS code is also an NMDS code, we can deduce the following result for NMDS matrices.
Corollary 3: _If \(A\) is an NMDS matrix, then \(A^{T}\) is also an NMDS matrix._
Remark 5: Note that if \(A\) is an MDS matrix, then \(A^{T}\) is also an MDS matrix [10].
Definition 9: Let \(q\) be a positive integer. A matrix \(B\) is said to be recursive NMDS or \(q\)-NMDS if the matrix \(A=B^{q}\) is NMDS. If \(B\) is \(q\)-NMDS then we say \(B\) yields an NMDS matrix.
Example 4: The matrix in Example 3 is a recursive NMDS matrix with \(q=10\).
Vandermonde matrices have gained significant attention in the literature on constructing MDS codes. However, Vandermonde matrices defined over a finite field may contain singular square submatrices [22, Page 323]. Consequently, these matrices by themselves need not be MDS. To address this issue, Lacan and Fimes [18, 19] employed two Vandermonde matrices to construct an MDS matrix. Later, Sajadieh et al. [26] used a similar approach to obtain an MDS matrix that is also involutory.
Definition 10: (Vandermonde matrix) The matrix
\[A=vand(x_{1},x_{2},x_{3},\ldots,x_{n})=\begin{bmatrix}1&1&1&\ldots&1\\ x_{1}&x_{2}&x_{3}&\ldots&x_{n}\\ x_{1}^{2}&x_{2}^{2}&x_{3}^{2}&\ldots&x_{n}^{2}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ x_{1}^{n-1}&x_{2}^{n-1}&x_{3}^{n-1}&\ldots&x_{n}^{n-1}\end{bmatrix}\]
is called a Vandermonde matrix, where \(x_{i}\)'s are elements of a finite or infinite field.
We sometimes use the notation \(vand(\mathbf{x})\) to represent the Vandermonde matrix \(vand(x_{1},x_{2},x_{3},\ldots,x_{n})\), where \(\mathbf{x}=(x_{1},x_{2},x_{3},\ldots,x_{n})\).
It is known that
\[\det(vand(\mathbf{x}))=\prod_{1\leq i<j\leq n}(x_{j}-x_{i}),\]
which is nonzero if and only if the \(x_{i}\)'s are distinct.
There are several generalizations of the Vandermonde matrices in the literature, as documented in [6, 7, 17, 24, 29, 31] and the references therein. Our focus is on the variant presented in [17], due to its applications in cryptography and error correcting codes. The definition of this variant is as follows.
Definition 11: (Generalized Vandermonde matrix) Let \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{F}_{q}^{n}\) and \(T=\{t_{1},t_{2},\ldots,t_{n}\}\subset\mathbb{Z}\) with \(0\leq t_{1}<t_{2}<\ldots<t_{n}\). Then the matrix
\[V(\mathbf{x};T)=\begin{bmatrix}x_{1}^{t_{1}}&x_{2}^{t_{1}}&\ldots&x_{n}^{t_{1 }}\\ x_{1}^{t_{2}}&x_{2}^{t_{2}}&\ldots&x_{n}^{t_{2}}\\ \vdots&\vdots&\vdots&\\ x_{1}^{t_{n}}&x_{2}^{t_{n}}&\ldots&x_{n}^{t_{n}}\end{bmatrix}\]
is said to be a generalized Vandermonde matrix with respect to \(T\).
Remark 6: Observe that the matrix \(V(\mathbf{x};T)\) is the Vandermonde matrix \(vand(\mathbf{x})\) if \(T=\{0,1,\ldots,n-1\}\).
Let \(I\) denote the set of discontinuities in \(T\), i.e., \(I=\{0,1,\ldots,t_{n}\}\setminus T\). Clearly, \(0\leq t_{1}<t_{2}<\ldots<t_{n}=n+|I|-1\). Throughout the rest of the paper, the notation \(V_{\perp}(\mathbf{x};I)\) is used interchangeably with \(V(\mathbf{x};T)\).
Now we will see how the determinant of \(V(\mathbf{x};T)\) can be computed with the help of the determinant of a Vandermonde matrix when \(T\) has discontinuities. To do so, we require the following definition.
**Definition 12**.: _The elementary symmetric polynomial of degree \(d\) is defined as_
\[\sigma_{d}(x_{1},x_{2},\ldots,x_{n})=\sum_{w(e)=d}x_{1}^{e_{1}}x_{2}^{e_{2}} \cdots x_{n}^{e_{n}},\]
_where \(e=(e_{1},e_{2},\ldots,e_{n})\in\mathbb{F}_{2}^{n}\) and \(w(e)\) denotes the Hamming weight of \(e\)._
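Over a binary extension field, \(\sigma_{d}\) can be evaluated directly from this definition. A short sketch (reusing `gf_mul` from the earlier sketch) is given below for illustration.

```python
from itertools import combinations

def sigma(d, xs, mod, deg):
    """Elementary symmetric polynomial sigma_d evaluated at xs over GF(2^deg)."""
    total = 0
    for subset in combinations(xs, d):
        term = 1
        for x in subset:
            term = gf_mul(term, x, mod, deg)
        total ^= term                  # field addition is XOR in characteristic 2
    return total

# sigma(1, xs, ...) equals x_1 + ... + x_n, and sigma(len(xs), xs, ...) equals their product.
```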
**Theorem 7**.: _[_17_, Theorem 1]_ _If \(I=\{l_{1},l_{2},\ldots,l_{s}\}\), we have_
\[\det(V_{\perp}(\mathbf{x};I))=\det(vand(\mathbf{x}))\det(S(\mathbf{x}))\]
_where \(S(\mathbf{x})=(\sigma_{n-l_{i}+j-1}(\mathbf{x}))_{i,j=1}^{s}\)._
**Lemma 5**.: _[_17_, Lemma 1]_ _If \(I=\{l\}\), we have_
\[\det(V_{\perp}(\mathbf{x};I))=\det(vand(\mathbf{x}))\sigma_{n-l}(\mathbf{x}).\]
By substituting \(I=\{n-1\}\) and \(I=\{1\}\) into Lemma 5, we can derive Corollaries 4 and 5, respectively.
**Corollary 4**.: _Let \(I=\{n-1\}\), then \(\det(V_{\perp}(\mathbf{x};I))=\det(vand(\mathbf{x}))(\sum_{i=1}^{n}x_{i})\)._
**Corollary 5**.: _Let \(I=\{1\}\) and each \(x_{i}\) be a nonzero element of a field. Then we can express the determinant of \(V_{\perp}(\mathbf{x};I)\) as_
\[\det(V_{\perp}(\mathbf{x};I))=(\prod_{i=1}^{n}x_{i})\det(vand(\mathbf{x}))( \sum_{i=1}^{n}x_{i}^{-1}).\]
Now, we will consider the case when \(T\) has more than one discontinuity, specifically, we will explore how to compute the determinant of \(V_{\perp}(\mathbf{x};I)\) when \(I=\{1,n\}\).
**Corollary 6**.: _Let \(I=\{1,n\}\) and each \(x_{i}\) be a nonzero element of a field. Then we can express the determinant of \(V_{\perp}(\mathbf{x};I)\) as_
\[\det(V_{\perp}(\mathbf{x};I))=\det(vand(\mathbf{x}))\left(\prod_{i=1}^{n}x_{i }\right)\left[(\sum_{i=1}^{n}x_{i})(\sum_{i=1}^{n}x_{i}^{-1})-1\right].\]
Proof.: From Theorem 7, we know that
\[\det(V_{\perp}(\mathbf{x};I))=\det(vand(\mathbf{x}))\det(S(\mathbf{x})),\]
where \(S(\mathbf{x})=\begin{bmatrix}\sigma_{n-1}(\mathbf{x})&\sigma_{n}(\mathbf{x}) \\ \sigma_{0}(\mathbf{x})&\sigma_{1}(\mathbf{x})\end{bmatrix}\). Thus, we have
\[\det(S(\mathbf{x})) =\sigma_{n-1}(\mathbf{x})\sigma_{1}(\mathbf{x})-\sigma_{n}( \mathbf{x})\sigma_{0}(\mathbf{x})\] \[=\left[(\prod_{i=1}^{n}x_{i}\sum_{i=1}^{n}x_{i}^{-1})(\sum_{i=1}^ {n}x_{i})\right]-\prod_{i=1}^{n}x_{i}\] \[=\prod_{i=1}^{n}x_{i}\left[(\sum_{i=1}^{n}x_{i})(\sum_{i=1}^{n}x_ {i}^{-1})-1\right].\]
Therefore, \(\det(V_{\perp}(\mathbf{x};I))=\det(vand(\mathbf{x}))\left(\prod_{i=1}^{n}x_{i }\right)\left[(\sum_{i=1}^{n}x_{i})(\sum_{i=1}^{n}x_{i}^{-1})-1\right]\).
Now let us recall the companion matrix structures which are used for the construction of recursive MDS matrices.
Definition 13: (Companion matrix) Let \(g(x)=a_{1}+a_{2}x+\ldots+a_{n}x^{n-1}+x^{n}\in\mathbb{F}_{q}[x]\) be a monic polynomial of degree \(n\). The companion matrix \(C_{g}\in M_{n}(\mathbb{F}_{q})\) associated to the polynomial \(g(x)\) is given by
\[C_{g}=\left[\begin{array}{cccc}0&1&0&\ldots&0\\ \vdots&&\ddots&&\vdots\\ 0&0&\ldots&\ldots&1\\ -a_{1}&-a_{2}&\ldots&\ldots&-a_{n}\end{array}\right].\]
Definition 14: A square matrix \(M\in M_{n}(\mathbb{F}_{q})\) is said to be diagonalizable if \(M\) is similar to a diagonal matrix. This means \(M=PDP^{-1}\) for some diagonal matrix \(D\) and a nonsingular matrix \(P\).
Now, we will consider some results related to diagonalizable companion matrices.
Lemma 6: _[_11_]_ _Let \(C_{g}\in M_{n}(\mathbb{F}_{q})\) be a nonsingular companion matrix which is diagonalizable, say \(C_{g}=PDP^{-1}\) where P is a nonsingular matrix of order \(n\) and \(D=diag(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\). Then all entries of \(P\) will be nonzero. Moreover, \(C_{g}\) can be expressed as \(C_{g}=VDV^{-1}\), where \(V=vand(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\)._
Corollary 7: _[_11_]_ _A companion matrix \(C_{g}\) is nonsingular and diagonalizable if and only if all eigenvalues of \(C_{g}\) are distinct and nonzero._
Lemma 7: _[_25_]_ _If \(M\) is an \(n\times n\) matrix with \(n\) distinct eigenvalues, then \(M\) is diagonalizable._
Theorem 7.1: _[_25_]_ _The characteristic polynomial of \(C_{g}\), as defined in Definition 13, is the polynomial \(g(x)=a_{1}+a_{2}x+\ldots+a_{n}x^{n-1}+x^{n}\)._
Since the roots of a characteristic polynomial are the eigenvalues, based on Lemma 6, Lemma 7 and Theorem 7.1, we can conclude the following result for a companion matrix.
Theorem 7.2: _If the monic polynomial \(g(x)=a_{1}+a_{2}x+\ldots+a_{n}x^{n-1}+x^{n}\) has \(n\) distinct roots \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\), then \(C_{g}\) can be expressed as \(C_{g}=VDV^{-1}\), where \(V=vand(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) and \(D=diag(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\)._
We close this section by providing Lemma 8, which will be beneficial in this paper. To prove this lemma, we need the following result from linear algebra.
Theorem 7.3: _[_25_, Theorem 3.5.4]_ _Let \(A\) be an \(n\times n\) matrix and \(B\) be an \(n\times l\) matrix. If \(A\) is nonsingular, then the rank of \(AB\) is equal to the rank of \(B\)._
Lemma 8: _Let \(A\) be an \(n\times n\) nonsingular matrix and \(G\) be a generator matrix of an \([l,n]\) linear code \(\mathcal{C}\). Then \(AG\) is also a generator matrix of the code \(\mathcal{C}\)._
Proof: We know that the rows of the generator matrix \(G\) form a basis for the linear code \(\mathcal{C}\) and \(\mathrm{rank}(G)=n\). Also, since \(A\) is nonsingular, according to the preceding theorem, we have \(\mathrm{rank}(AG)=\mathrm{rank}(G)=n\). Therefore, all \(n\) rows of \(AG\) are linearly independent.
Note that each row of \(AG\) is a linear combination of the rows of \(G\). Therefore, each row of \(AG\) represents a codeword of \(\mathcal{C}\), and these rows are linearly independent. Consequently, the rows of \(AG\) form a basis for \(\mathcal{C}\). Therefore, \(AG\) is also a generator matrix of the code \(\mathcal{C}\).
## 3 Direct Construction of Nonrecursive MDS and NMDS Matrices
The application of Vandermonde matrices for constructing MDS codes is well documented in literature [10, 15, 18, 19, 23, 26]. In this section, we explore the use of generalized Vandermonde matrices for the construction of both MDS and NMDS matrices. Specifically, we focus on the generalized Vandermonde matrices \(V_{\perp}(\mathbf{x};I)\), where \(I\) is a subset of \(\{1,n-1,n\}\).
Generalized Vandermonde matrices with these parameters, defined over a finite field, can contain singular square submatrices (see Example 5). Consequently, these matrices by themselves need not be MDS over a finite field. However, as in the Vandermonde based constructions, we can use two generalized Vandermonde matrices for constructing MDS matrices.
Example 5: Consider the generalized Vandermonde matrix \(V_{\perp}(\mathbf{x};I)\) with \(\mathbf{x}=(1,\alpha,\alpha^{2},\alpha^{5})\) and \(I=\{3\}\)
\[V_{\perp}(\mathbf{x};I)=\begin{bmatrix}1&1&1&1\\ 1&\alpha&\alpha^{2}&\alpha^{5}\\ 1&\alpha^{2}&\alpha^{4}&\alpha^{10}\\ 1&\alpha^{4}&\alpha^{8}&\alpha^{20}\end{bmatrix},\]
where \(\alpha\) is a primitive element of the finite field \(\mathbb{F}_{2^{4}}\) constructed by the polynomial \(x^{4}+x+1\). Consider the \(2\times 2\) submatrix
\[\begin{bmatrix}1&\alpha^{5}\\ 1&\alpha^{20}\end{bmatrix}\]
which is singular as \(\alpha^{20}=\alpha^{5}\).
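The matrix of Example 5 can be reproduced and examined with a few lines of code. The sketch below (reusing `gf_mul` and the other helpers from the earlier sketches) builds \(V_{\perp}(\mathbf{x};I)\) from the exponent set \(T\); running `is_mds` on it detects the singular \(2\times 2\) submatrix exhibited above.

```python
def gf_pow(a, e, mod, deg):
    """a^e in GF(2^deg) by repeated multiplication."""
    r = 1
    for _ in range(e):
        r = gf_mul(r, a, mod, deg)
    return r

def generalized_vandermonde(xs, T, mod, deg):
    """Rows indexed by the exponents in T, columns by the evaluation points xs."""
    return [[gf_pow(x, t, mod, deg) for x in xs] for t in T]

mod, deg = 0b10011, 4                    # GF(2^4) with x^4 + x + 1
alpha = 0b10
xs = [1, alpha, gf_pow(alpha, 2, mod, deg), gf_pow(alpha, 5, mod, deg)]
V = generalized_vandermonde(xs, [0, 1, 2, 4], mod, deg)   # I = {3}, so T = {0, 1, 2, 4}
# is_mds(V, mod, deg) returns False: the 2x2 submatrix from Example 5 is singular.
```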
Theorem 3.1: _Let \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) be two generalized Vandermonde matrices with \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\), \(\mathbf{y}=(x_{n+1},x_{n+2},\ldots,x_{2n})\) and \(I=\{n-1\}\). The elements \(x_{i}\) are \(2n\) distinct elements from \(\mathbb{F}_{q}\), and \(\sum_{i=1}^{n}x_{r_{i}}\neq 0\) for all \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\), where \(E=\{1,2,\ldots,2n\}\). Then the matrices \(V_{1}^{-1}V_{2}\) and \(V_{2}^{-1}V_{1}\) are such that any square submatrix of them is nonsingular and hence MDS matrices._
Proof: Let \(U\) be the \(n\times 2n\) matrix \([V_{1}\mid V_{2}]\). By Corollary 4, we can conclude that both \(V_{1}\) and \(V_{2}\) are nonsingular matrices. Consider the product \(G=V_{1}^{-1}U=[I\mid A]\), where \(A=V_{1}^{-1}V_{2}\). We will now prove that \(A\) does not contain any singular submatrix.
Now, since \(U=V_{1}G\), from Lemma 8, we can say that \(U\) is also a generator matrix for the linear code \(\mathcal{C}\) generated by the matrix \(G=[I\mid A]\). From Remark 2, we know that the generator matrix \(U\) generates a \([2n,n,n+1]\) MDS code if and only if any \(n\) columns of \(U\) are linearly independent.
Now we can observe that any \(n\) columns of \(U\) form a generalized Vandermonde matrix of the same form as \(V_{1}\) and \(V_{2}\). Since the \(x_{i}\) are distinct and \(\sum_{i=1}^{n}x_{r_{i}}\neq 0\) for all \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\), from Corollary 4, we can say that every \(n\) columns of \(U\) are linearly independent. Hence, the code \(\mathcal{C}\) is an MDS code.
Therefore, \(G\) generates an \([2n,n,n+1]\) MDS code and hence \(A=V_{1}^{-1}V_{2}\) is an MDS matrix. For \(V_{2}^{-1}V_{1}\), the proof is identical.
Remark 7: We know that the inverse of an MDS matrix is again MDS [10], therefore, if \(V_{1}^{-1}V_{2}\) is MDS, then \(V_{2}^{-1}V_{1}\) is also MDS and vice versa.
Example 6: Consider the generalized Vandermonde matrices \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) with \(\mathbf{x}=(1,\alpha,\alpha^{2},\alpha^{3})\), \(\mathbf{y}=(\alpha^{4},\alpha^{5},\alpha^{6},\alpha^{7})\) and \(I=\{3\}\), where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{8}}\) and a root of \(x^{8}+x^{7}+x^{6}+x+1\). It can be verified that \(V_{1}\) and \(V_{2}\) satisfy the conditions of the above theorem. Therefore, the matrices
\[V_{1}^{-1}V_{2}=\begin{bmatrix}\alpha^{7}&\alpha^{234}&\alpha^{57}&\alpha^{1 56}\\ \alpha^{37}&\alpha^{66}&\alpha^{55}&\alpha^{211}\\ \alpha^{205}&\alpha^{100}&\alpha^{30}&\alpha^{86}\\ \alpha^{227}&\alpha^{50}&\alpha^{149}&\alpha^{40}\end{bmatrix}\text{ and }V_{2}^{-1}V_{1}=\begin{bmatrix}\alpha^{136}&\alpha^{49}&\alpha^{235}& \alpha^{30}\\ \alpha^{210}&\alpha^{77}&\alpha^{201}&\alpha^{198}\\ \alpha^{144}&\alpha^{72}&\alpha^{52}&\alpha^{220}\\ \alpha^{42}&\alpha^{228}&\alpha^{23}&\alpha^{248}\end{bmatrix}\]
are MDS matrices.
Similar to MDS matrices, generalized Vandermonde matrices with \(I=\{n-1\}\) themselves may not be NMDS over a finite field (see Example 7). As a consequence, we use two generalized Vandermonde matrices for constructing NMDS matrices.
Example 7: Consider the generalized Vandermonde matrix \(A=V_{\perp}(\mathbf{x};I)\) with \(\mathbf{x}=(1,\alpha,\alpha^{3},\alpha^{7})\) and \(I=\{3\}\).
\[A=\begin{bmatrix}1&1&1&1\\ 1&\alpha&\alpha^{3}&\alpha^{7}\\ 1&\alpha^{2}&\alpha^{6}&\alpha^{14}\\ 1&\alpha^{4}&\alpha^{12}&\alpha^{28}\end{bmatrix},\]
where \(\alpha\) is a primitive element of the finite field \(\mathbb{F}_{2^{4}}\) constructed by the polynomial \(x^{4}+x+1\) and \(\alpha\) is a root of it. Now consider the generator matrix
\[G=[I\ |\ A]=\begin{bmatrix}1&0&0&0&1&1&1&1\\ 0&1&0&0&1&\alpha&\alpha^{3}&\alpha^{7}\\ 0&0&1&0&1&\alpha^{2}&\alpha^{6}&\alpha^{14}\\ 0&0&0&1&1&\alpha^{4}&\alpha^{12}&\alpha^{28}\end{bmatrix}.\]
Now consider matrix
\[M=\begin{bmatrix}0&1&1&1&1\\ 0&1&\alpha&\alpha^{3}&\alpha^{7}\\ 1&1&\alpha^{2}&\alpha^{6}&\alpha^{14}\\ 0&1&\alpha^{4}&\alpha^{12}&\alpha^{28}\end{bmatrix},\]
which is constructed by the five columns: the third, fifth, sixth, seventh, and eighth columns of \(G\). It can be observed that \(\text{rank}(M)=3<4\), which violates the condition \((iii)\) in Lemma 4. Therefore, \(A\) is not an NMDS matrix.
Theorem 4.1: _Let \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) be two generalized Vandermonde matrices with \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\), \(\mathbf{y}=(x_{n+1},x_{n+2},\ldots,x_{2n})\) and \(I=\{n-1\}\). The elements \(x_{i}\) are \(2n\) distinct elements from \(\mathbb{F}_{q}\) such that \(\sum_{i=1}^{n}x_{i}\neq 0\), \(\sum_{i=1}^{n}x_{n+i}\neq 0\) and \(\sum_{i=1}^{n}x_{r_{i}}=0\) for some other \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\), where \(E=\{1,2,\ldots,2n\}\). Then the matrices \(V_{1}^{-1}V_{2}\) and \(V_{2}^{-1}V_{1}\) are NMDS matrices._
Proof: Let \(U\) be the \(n\times 2n\) matrix \([V_{1}\ |\ V_{2}]\). By Corollary 4, we can conclude that both \(V_{1}\) and \(V_{2}\) are nonsingular matrices. Consider the product \(G=V_{1}^{-1}U=[I\ |\ A]\), where \(A=V_{1}^{-1}V_{2}\). To show, \(A=V_{1}^{-1}V_{2}\) is an NMDS matrix, we need to prove that the \([2n,n]\) linear code \(\mathcal{C}\) generated by \(G=[I\ |\ A]\) is an NMDS code.
Now, since \(U=V_{1}G\), from Lemma 8, we can say that \(U\) is also a generator matrix for the linear code \(\mathcal{C}\). Thus, we can conclude that \(A=V_{1}^{-1}V_{2}\) is an NMDS matrix if and only if \(U\) meet the three conditions mentioned in Lemma 4.
A submatrix \(U[R]\), constructed from any \(t\) columns of \(U\), is given by
\[U[R]=\begin{bmatrix}1&1&\ldots&1\\ x_{r_{1}}&x_{r_{2}}&\ldots&x_{r_{t}}\\ x_{r_{1}}^{2}&x_{r_{2}}^{2}&\ldots&x_{r_{t}}^{2}\\ \vdots&\vdots&\ddots&\vdots\\ x_{r_{1}}^{n-2}&x_{r_{2}}^{n-2}&\ldots&x_{r_{t}}^{n-2}\\ x_{r_{1}}^{n}&x_{r_{2}}^{n}&\ldots&x_{r_{t}}^{n}\end{bmatrix}, \tag{1}\]
where \(R\) denotes a set \(\{r_{1},r_{2},\ldots,r_{t}\}\subset E=\{1,2,\ldots,2n\}\) of \(t\) elements.
So for \(R=\{r_{1},r_{2},\ldots,r_{n-1}\}\subset E\) we have
\[U[R]=\left[\begin{array}{cccc}1&1&\ldots&1\\ x_{r_{1}}&x_{r_{2}}&\ldots&x_{r_{n-1}}\\ \vdots&\vdots&\ddots&\vdots\\ x_{r_{1}}^{n-2}&x_{r_{2}}^{n-2}&\ldots&x_{r_{n-1}}^{n-2}\\ x_{r_{1}}^{n}&x_{r_{2}}^{n}&\ldots&x_{r_{n-1}}^{n}\end{array}\right].\]
Now, we consider the \((n-1)\times(n-1)\) submatrix \(U^{\prime}[R]\) of \(U[R]\), which is constructed from the first \(n-1\) rows of \(U[R]\). Therefore, we have
\[U^{\prime}[R]=\left[\begin{array}{cccc}1&1&\ldots&1\\ x_{r_{1}}&x_{r_{2}}&\ldots&x_{r_{n-1}}\\ \vdots&\vdots&\ddots&\vdots\\ x_{r_{1}}^{n-3}&x_{r_{2}}^{n-3}&\ldots&x_{r_{n-1}}^{n-3}\\ x_{r_{1}}^{n-2}&x_{r_{2}}^{n-2}&\ldots&x_{r_{n-1}}^{n-2}\end{array}\right]=vand(x_{r_{1}},x_{r_{2}},\ldots,x_{r_{n-1}}),\]
which is nonsingular since each \(x_{i}\) is a distinct element. Therefore, any submatrix of \(U\) constructed from any \(n-1\) columns has a nonsingular \((n-1)\times(n-1)\) submatrix, implying that any \(n-1\) columns of \(U\) are linearly independent.
Now suppose \(\sum_{i=1}^{n}x_{r_{i}^{\prime}}=0\) for some \(R^{\prime}=\{r_{1}^{\prime},r_{2}^{\prime},\ldots,r_{n}^{\prime}\}\subset E\). Then for \(R^{\prime}\), we have
\[U[R^{\prime}]=\left[\begin{array}{cccc}1&1&\ldots&1\\ x_{r_{1}^{\prime}}&x_{r_{2}^{\prime}}&\ldots&x_{r_{n}^{\prime}}\\ \vdots&\vdots&\ddots&\vdots\\ x_{r_{1}^{\prime}}^{n-2}&x_{r_{2}^{\prime}}^{n-2}&\ldots&x_{r_{n}^{\prime}}^{n-2}\\ x_{r_{1}^{\prime}}^{n}&x_{r_{2}^{\prime}}^{n}&\ldots&x_{r_{n}^{\prime}}^{n}\end{array}\right],\]
which is a generalized Vandermonde matrix \(V_{\perp}(\mathbf{x};I)\) with \(\mathbf{x}=(x_{r_{1}^{\prime}},x_{r_{2}^{\prime}},\ldots,x_{r_{n}^{\prime}})\) and \(I=\{n-1\}\). Thus, from Corollary 4, we have
\[\det(U[R^{\prime}])=\left[\prod_{1\leq i<j\leq n}(x_{r_{j}^{\prime}}-x_{r_{i}^{\prime}})\right]\Bigg(\sum_{i=1}^{n}x_{r_{i}^{\prime}}\Bigg).\]
Since \(\sum_{i=1}^{n}x_{r_{i}^{\prime}}=0\), we have \(\det(U[R^{\prime}])=0\), i.e. the columns of \(U[R^{\prime}]\) are linearly dependent. Hence, there exist \(n\) columns (depending on \(R^{\prime}\)) that are linearly dependent.
Now we need to show that the third condition of Lemma 4 is also satisfied by \(U\). To prove this, we will use a contradiction argument. Suppose, for the sake of contradiction, that some set of \(n+1\) columns of \(U\) is not of full rank. Let \(R^{\prime\prime}=\{r_{1},r_{2},\ldots,r_{n},r_{n+1}\}\subset E\) be such a set of \(n+1\) elements, so that the corresponding submatrix \(U[R^{\prime\prime}]\) of \(U\) satisfies \(\operatorname{rank}(U[R^{\prime\prime}])<n\). Then each \(n\times n\) submatrix of \(U[R^{\prime\prime}]\) is singular. Since \(x_{r}\neq x_{r^{\prime}}\) for distinct \(r,r^{\prime}\in E\), it follows from Corollary 4 that
\[x_{r_{2}}+x_{r_{3}}+x_{r_{4}}+x_{r_{5}}+\cdots+x_{r_{n+1}} =0\] \[x_{r_{1}}+x_{r_{3}}+x_{r_{4}}+x_{r_{5}}+\cdots+x_{r_{n+1}} =0\] \[x_{r_{1}}+x_{r_{2}}+x_{r_{4}}+x_{r_{5}}+\cdots+x_{r_{n+1}} =0\] \[\vdots\] \[x_{r_{1}}+x_{r_{2}}+x_{r_{3}}+x_{r_{4}}+\cdots+x_{r_{n}} =0.\]
This system of equations can be written as \(MX=0\), where \(M\) is an \((n+1)\times(n+1)\) matrix given by
\[M=\begin{bmatrix}0&1&1&1&\dots&1\\ 1&0&1&1&\dots&1\\ 1&1&0&1&\dots&1\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&1&1&\dots&0\end{bmatrix}\ \ \text{and}\ \ X=[x_{r_{1}},x_{r_{2}},x_{r_{3}},\dots,x_{r_{n+1}}]^{T}.\]
Note that \(\det(M)=(-1)^{n}n\). Suppose \(p\) is the characteristic of the field \(\mathbb{F}_{q}\). We will now examine two scenarios: first, when \(p\) does not divide \(n\); and second, when \(p\) divides \(n\).
**Case 1: \(p\nmid n\)**.
In this case, we have \(\det(M)\neq 0\). Therefore, \(MX=0\) has a unique solution \(X=[0,0,\dots,0]^{T}\). This means \(x_{r_{i}}=0\) for \(i=1,2,\dots,n+1\), which is a contradiction because the \(x_{i}\) are distinct.
**Case 2: \(p|n\)**.
If \(p|n\), \(M\) is a singular matrix. Let \(M^{\prime}\) be the \(n\times n\) submatrix obtained by deleting the 1st row and 1st column of \(M\). The determinant of \(M^{\prime}\) is given by \(\det(M^{\prime})=(-1)^{n-1}(n-1)\). Since \(p\) is a prime and \(p|n\), we must have \(p\nmid(n-1)\). Therefore, \(\det(M^{\prime})\neq 0\). From this, we conclude that the rank of \(M\) is \(n\) and so the solution space of \(MX=0\) has dimension 1.
Since \(p|n\), it is easy to verify that \([1,1,\dots,1]^{T}\) is a solution of \(MX=0\). As this vector is nonzero, we deduce that the solution space of \(MX=0\) is given by
\[X=\big{\{}c\cdot[1,1,\dots,1]^{T}:\ c\in\mathbb{F}_{q}\big{\}}.\]
Therefore, we have
\[[x_{r_{1}},x_{r_{2}},x_{r_{3}},\dots,x_{r_{n+1}}]^{T}=c\cdot[1,1,\dots,1]^{T}\]
for some \(c\in\mathbb{F}_{q}\), which contradicts the fact that \(x_{r}\neq x_{r^{\prime}}\) for distinct \(r,r^{\prime}\in E\).
Thus, we can conclude that \(U\), and hence \(G=[I\mid A]\), generates a \([2n,n]\) linear NMDS code. Therefore, according to Definition 8, \(A=V_{1}^{-1}V_{2}\) is an NMDS matrix. For \(V_{2}^{-1}V_{1}\), the proof is identical.
Remark 8: In Theorem 12, it is assumed that \(\sum_{i=1}^{n}x_{i}\neq 0\) and \(\sum_{i=1}^{n}x_{n+i}\neq 0\). This assumption is made based on Corollary 4, which states that
\(\det(V_{\perp}(\mathbf{x};I))=\det(vand(\mathbf{x}))\left(\sum_{i=1}^{n}x_{i}\right)\) and \(\det(V_{\perp}(\mathbf{y};I))=\det(vand(\mathbf{y}))\left(\sum_{i=1}^{n}x_{n+i}\right)\). If either of these sums is zero, it would result in the determinant of either \(V_{1}\) or \(V_{2}\) being zero, making them singular. Hence, the assumption is necessary to ensure the nonsingularity of \(V_{1}\) and \(V_{2}\).
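The determinant identity quoted here from Corollary 4 can be spot-checked numerically. The sketch below (plain Python; the helper names and the sample tuple are ours, not from the paper) compares \(\det(V_{\perp}(\mathbf{x};\{n-1\}))\) with \(\det(vand(\mathbf{x}))\sum_{i}x_{i}\) over \(\mathbb{F}_{2^{4}}\).

```python
IRR, R = 0b10011, 4  # GF(2^4) defined by x^4 + x + 1

def gf_mul(a, b):
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> R:
            a ^= IRR
    return res

def gf_pow(a, e):
    res = 1
    while e:
        if e & 1:
            res = gf_mul(res, a)
        a = gf_mul(a, a)
        e >>= 1
    return res

def gf_inv(a):
    return gf_pow(a, 2**R - 2)

def det(rows):
    """Determinant over GF(2^R); row swaps need no sign in characteristic 2."""
    M = [r[:] for r in rows]
    n, d = len(M), 1
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c]), None)
        if piv is None:
            return 0
        M[c], M[piv] = M[piv], M[c]
        d = gf_mul(d, M[c][c])
        inv = gf_inv(M[c][c])
        for i in range(c + 1, n):
            if M[i][c]:
                f = gf_mul(M[i][c], inv)
                M[i] = [u ^ gf_mul(f, v) for u, v in zip(M[i], M[c])]
    return d

alpha = 0b0010
x = [1, alpha, gf_pow(alpha, 2), gf_pow(alpha, 3)]      # a sample 4-tuple
n = len(x)

vand = [[gf_pow(v, e) for v in x] for e in range(n)]     # exponents 0..n-1
gen  = [[gf_pow(v, e) for v in x] for e in [0, 1, 2, 4]] # exponents for I = {n-1}

s = 0
for v in x:
    s ^= v                                               # field sum of the x_i

lhs = det(gen)
rhs = gf_mul(det(vand), s)
print("det(V_perp) =", lhs, " det(vand)*sum =", rhs, " equal:", lhs == rhs)
```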
Example 8: Consider the generalized Vandermonde matrices \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) with \(\mathbf{x}=(1,\alpha,\alpha^{2},\alpha^{3})\), \(\mathbf{y}=(\alpha^{4},\alpha^{5},\alpha^{6},\alpha^{7})\) and \(I=\{3\}\), where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\). It is easy to check that the \(x_{i}\) are distinct and that \(1+\alpha+\alpha^{3}+\alpha^{7}=0\). Therefore, the matrices
\[V_{1}^{-1}V_{2}=\begin{bmatrix}\alpha^{7}&\alpha^{9}&\alpha^{9}&1\\ \alpha^{14}&\alpha^{14}&\alpha^{3}&1\\ \alpha^{10}&\alpha^{5}&\alpha^{5}&0\\ \alpha^{2}&\alpha^{2}&\alpha^{8}&1\end{bmatrix}\text{ and }V_{2}^{-1}V_{1}= \begin{bmatrix}0&\alpha^{7}&1&\alpha^{7}\\ 1&\alpha^{14}&0&\alpha^{3}\\ 1&\alpha^{5}&1&\alpha^{10}\\ 1&\alpha^{8}&1&\alpha^{8}\end{bmatrix}\]
are NMDS matrices.
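Example 8 can also be verified directly from the three code-theoretic conditions used in the proof of Theorem 12, applied to \(U=[V_{1}\ |\ V_{2}]\). The following sketch (plain Python; the condition names and helpers are ours, not from the paper) enumerates the column subsets of \(U\) over \(\mathbb{F}_{2^{4}}\).

```python
from itertools import combinations

IRR, R = 0b10011, 4  # GF(2^4) defined by x^4 + x + 1

def gf_mul(a, b):
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> R:
            a ^= IRR
    return res

def gf_pow(a, e):
    res = 1
    while e:
        if e & 1:
            res = gf_mul(res, a)
        a = gf_mul(a, a)
        e >>= 1
    return res

def gf_inv(a):
    return gf_pow(a, 2**R - 2)

def rank(rows):
    M = [r[:] for r in rows]
    rk = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(rk, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        inv = gf_inv(M[rk][c])
        M[rk] = [gf_mul(inv, v) for v in M[rk]]
        for i in range(len(M)):
            if i != rk and M[i][c]:
                f = M[i][c]
                M[i] = [u ^ gf_mul(f, v) for u, v in zip(M[i], M[rk])]
        rk += 1
        if rk == len(M):
            break
    return rk

alpha, n = 0b0010, 4
x = [gf_pow(alpha, k) for k in range(n)]           # 1, a, a^2, a^3
y = [gf_pow(alpha, k) for k in range(n, 2 * n)]    # a^4, ..., a^7
exps = [0, 1, 2, 4]                                # row exponents for I = {3}

V1 = [[gf_pow(v, e) for v in x] for e in exps]
V2 = [[gf_pow(v, e) for v in y] for e in exps]
U = [V1[i] + V2[i] for i in range(n)]              # U = [V1 | V2]

def sub(idx):
    return [[U[i][c] for c in idx] for i in range(n)]

c1 = all(rank(sub(s)) == n - 1 for s in combinations(range(2 * n), n - 1))
c2 = any(rank(sub(s)) < n for s in combinations(range(2 * n), n))
c3 = all(rank(sub(s)) == n for s in combinations(range(2 * n), n + 1))
print("every n-1 columns independent:", c1)
print("some n columns dependent:     ", c2)
print("every n+1 columns full rank:  ", c3)
print("V1^{-1} V2 is NMDS:", c1 and c2 and c3)
```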
In the context of implementing block ciphers, we know that if an efficient matrix \(M\) used in encryption is involutory, then its inverse \(M^{-1}=M\) applied for decryption will also be efficient. Hence, it is important to find MDS or NMDS matrices that are also involutory. In the following theorem, we prove a result for obtaining involutory matrices from the generalized Vandermonde matrices with \(I=\{n-1\}\). The proof technique used in this theorem is similar to the proof of [10, Theorem 4.3] for Vandermonde matrices.
Theorem 13: _Let \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) be two generalized Vandermonde matrices of even order over \(\mathbb{F}_{2^{r}}\) with \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\), \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{n})\) and \(I=\{n-1\}\). If \(y_{i}=l+x_{i}\) for \(i=1,2,\ldots,n\), for some \(l\in\mathbb{F}_{2^{r}}^{\star}\), then \(V_{2}V_{1}^{-1}\) is a lower triangular matrix whose nonzero elements are determined by powers of \(l\). Also, \(V_{1}^{-1}V_{2}\)\((=V_{2}^{-1}V_{1})\) is an involutory matrix._
Proof: Let \(V_{1}^{-1}=(t_{i,j})_{n,n}\) and \(V=V_{2}V_{1}^{-1}=(v_{i,j})_{n,n}\). As \(V_{1}V_{1}^{-1}=I\), we have
\[V_{1_{row(1)}}\cdot V_{1_{column(1)}}^{-1} =\sum_{i=1}^{n}t_{i,1}=1 \tag{2}\] \[V_{1_{row(k)}}\cdot V_{1_{column(1)}}^{-1} =\sum_{i=1}^{n}x_{i}^{k-1}\cdot t_{i,1}=0\text{ for }2\leq k\leq n-1\text{ and}\] (3) \[V_{1_{row(n)}}\cdot V_{1_{column(1)}}^{-1} =\sum_{i=1}^{n}x_{i}^{n}\cdot t_{i,1}=0. \tag{4}\]
Therefore, from Equation 2, we have \(v_{1,1}=V_{2_{row(1)}}\cdot V_{1_{column(1)}}^{-1}=1\).
Now for \(2\leq k\leq n-1\), we have
\[v_{k,1} =V_{2_{row(k)}}\cdot V_{1_{column(1)}}^{-1}\] \[=\sum_{i=1}^{n}y_{i}^{k-1}\cdot t_{i,1}=\sum_{i=1}^{n}\left(l+x_{i} \right)^{k-1}\cdot t_{i,1}\] \[=\sum_{i=1}^{n}\left({}^{k-1}C_{0}x_{i}^{k-1}+{}^{k-1}C_{1}x_{i}^ {k-2}\cdot l+\ldots\right.\] \[\qquad\qquad\qquad\left.+{}^{k-1}C_{k-2}x_{i}\cdot l^{k-2}+{}^{k- 1}C_{k-1}l^{k-1}\right)\cdot t_{i,1}\] \[=\sum_{i=1}^{n}l^{k-1}\cdot t_{i,1}=l^{k-1}\qquad\text{[By Equation 3].}\]
Also, we have
\[v_{n,1} =V_{2_{row(n)}}\cdot V_{1_{column(1)}}^{-1}\] \[=\sum_{i=1}^{n}y_{i}^{n}\cdot t_{i,1}=\sum_{i=1}^{n}\left(l+x_{i} \right)^{n}\cdot t_{i,1}\] \[=\sum_{i=1}^{n}\left({}^{n}C_{0}x_{i}^{n}+{}^{n}C_{1}x_{i}^{n-1} \cdot l+\ldots+{}^{n}C_{n-1}x_{i}\cdot l^{n-1}+{}^{n}C_{n}l^{n}\right)\cdot t _{i,1}\] \[=\sum_{i=1}^{n}{}^{n}C_{1}x_{i}^{n-1}l\cdot t_{i,1}+\sum_{i=1}^{ n}l^{n}\cdot t_{i,1}\qquad\text{[By Equations 3 and 4]}\] \[=l^{n}\qquad\text{[Since $n$ is even, ${}^{n}C_{1}=0$ in $\mathbb{F}_{2^{r}}$ and by Equation 2].}\]
So we have computed the 1st column of \(V=V_{2}V_{1}^{-1}\).
Again since \(V_{1}V_{1}^{-1}=I\), we have
\[V_{1_{row(1)}}\cdot V_{1_{column(2)}}^{-1} =\sum_{i=1}^{n}t_{i,2}=0, \tag{5}\] \[V_{1_{row(2)}}\cdot V_{1_{column(2)}}^{-1} =\sum_{i=1}^{n}x_{i}\cdot t_{i,2}=1,\] (6) \[V_{1_{row(k)}}\cdot V_{1_{column(2)}}^{-1} =\sum_{i=1}^{n}x_{i}^{k-1}\cdot t_{i,2}=0\text{ for $3\leq k\leq n-1$ and}\] (7) \[V_{1_{row(n)}}\cdot V_{1_{column(2)}}^{-1} =\sum_{i=1}^{n}x_{i}^{n}\cdot t_{i,2}=0. \tag{8}\]
Therefore, from Equation 5, we have \(v_{1,2}=V_{2_{row(1)}}\cdot V_{1_{column(2)}}^{-1}=0\).
Also, we have
\[v_{2,2} =V_{2_{row(2)}}\cdot V_{1_{column(2)}}^{-1}=\sum_{i=1}^{n}y_{i}\cdot t_{i,2}\] \[=\sum_{i=1}^{n}\left(l+x_{i}\right)\cdot t_{i,2}=\sum_{i=1}^{n}l\cdot t_{i,2}+\sum_{i=1}^{n}x_{i}\cdot t_{i,2}=1\quad\quad\text{[By Equations 5 and 6]}\]
Now for \(3\leq k\leq n-1\), we have
\[v_{k,2} =V_{2_{row(k)}}\cdot V_{1_{column(2)}}^{-1}\] \[=\sum_{i=1}^{n}y_{i}^{k-1}\cdot t_{i,2}=\sum_{i=1}^{n}\left(l+x_{i}\right)^{k-1}\cdot t_{i,2}\] \[=\sum_{i=1}^{n}\left({}^{k-1}C_{0}x_{i}^{k-1}+{}^{k-1}C_{1}x_{i}^{k-2}\cdot l+\ldots+{}^{k-1}C_{k-2}x_{i}\cdot l^{k-2}+{}^{k-1}C_{k-1}l^{k-1}\right)\cdot t_{i,2}\] \[=\sum_{i=1}^{n}{}^{k-1}C_{k-2}x_{i}l^{k-2}\cdot t_{i,2}+\sum_{i=1}^{n}l^{k-1}\cdot t_{i,2}\quad\quad\text{[By Equation 7]}\] \[={}^{k-1}C_{1}l^{k-2}\quad\quad\text{[By Equations 5 and 6, and since ${}^{k-1}C_{k-2}={}^{k-1}C_{1}$]}.\]
Also, we have
\[v_{n,2} =V_{2_{row(n)}}\cdot V_{1_{column(2)}}^{-1}\] \[=\sum_{i=1}^{n}y_{i}^{n}\cdot t_{i,2}=\sum_{i=1}^{n}\left(l+x_{i}\right)^{n}\cdot t_{i,2}\] \[=\sum_{i=1}^{n}\left({}^{n}C_{0}x_{i}^{n}+{}^{n}C_{1}x_{i}^{n-1}\cdot l+\ldots+{}^{n}C_{n-1}x_{i}\cdot l^{n-1}+{}^{n}C_{n}l^{n}\right)\cdot t_{i,2}\] \[=\sum_{i=1}^{n}{}^{n}C_{1}x_{i}^{n-1}l\cdot t_{i,2}+\sum_{i=1}^{n}{}^{n}C_{n-1}x_{i}l^{n-1}\cdot t_{i,2}\quad\text{[By Equations 5, 7 and 8]}\] \[={}^{n}C_{1}l^{n-1}=0\quad\quad\text{[By Equation 6, and since $n$ is even, so ${}^{n}C_{1}={}^{n}C_{n-1}=0$ in $\mathbb{F}_{2^{r}}$]}.\]
So we have computed the 2nd column of \(V=V_{2}V_{1}^{-1}\). Similarly,
\[v_{1,3} =v_{2,3}=0,v_{3,3}=1,v_{k,3}={}^{k-1}C_{2}l^{k-3}\text{ for $4\leq k \leq n-1$ and}\] \[v_{n,3} ={}^{n}C_{2}l^{n-2}\] \[v_{1,4} =v_{2,4}=v_{3,4}=0,v_{4,4}=1,v_{k,4}={}^{k-1}C_{3}l^{k-4}\text{ for $5\leq k \leq n-1$ and}\] \[v_{n,4} ={}^{n}C_{3}l^{n-3}\text{ and so on.}\]
Therefore, \(V=V_{2}V_{1}^{-1}\)
\[=\left[\begin{array}{cccccccc}1&0&0&0&\dots\dots&0&0\\ l&1&0&0&\dots\dots&0&0\\ l^{2}&{}^{2}C_{1}l&1&0&\dots\dots&0&0\\ l^{3}&{}^{3}C_{1}l^{2}&{}^{3}C_{2}l&1&\dots\dots&0&0\\ l^{4}&{}^{4}C_{1}l^{3}&{}^{4}C_{2}l^{2}&{}^{4}C_{3}l&\dots\dots&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\dots\dots&\vdots&\vdots\\ l^{n-2}&{}^{n-2}C_{1}l^{n-3}&{}^{n-2}C_{2}l^{n-4}&{}^{n-2}C_{3}l^{n-5}&\dots &1&0\\ l^{n}&{}^{n}C_{1}l^{n-1}&{}^{n}C_{2}l^{n-2}&{}^{n}C_{3}l^{n-3}&\dots\dots&{}^{n}C _{n-2}l^{2}&1\end{array}\right].\]
Thus, \(V_{2}V_{1}^{-1}\) is a lower triangular matrix.
Therefore, for \(1\leq i\leq n-1\) and \(1\leq j\leq n\), we have
\[(VV_{2})_{i,j} =V_{row(i)}\cdot V_{2_{column(j)}}\] \[=l^{i-1}+{}^{i-1}C_{1}l^{i-2}\cdot y_{j}+{}^{i-1}C_{2}l^{i-3} \cdot y_{j}^{2}+\dots+{}^{i-1}C_{i-2}l\cdot y_{j}^{i-2}+y_{j}^{i-1}\] \[=(l+y_{j})^{i-1}=x_{j}^{i-1}=(V_{1})_{i,j}.\]
Now for \(1\leq j\leq n\), we have
\[(VV_{2})_{n,j} =V_{row(n)}\cdot V_{2_{column(j)}}\] \[=l^{n}+{}^{n}C_{1}l^{n-1}\cdot y_{j}+{}^{n}C_{2}l^{n-2}\cdot y_{j }^{2}+\dots+{}^{n}C_{n-2}l^{2}\cdot y_{j}^{n-2}+y_{j}^{n}\] \[=l^{n}+{}^{n}C_{1}l^{n-1}\cdot y_{j}+{}^{n}C_{2}l^{n-2}\cdot y_{j }^{2}+\dots+{}^{n}C_{n-2}l^{2}\cdot y_{j}^{n-2}\] \[\qquad\qquad\qquad+{}^{n}C_{n-1}l\cdot y_{j}^{n-1}+y_{j}^{n}\ \ \mbox{[ Since ${}^{n}C_{n-1}=0$ in $\mathbb{F}_{2^{r}}$]}\] \[=(l+y_{j})^{n}=x_{j}^{n}=(V_{1})_{n,j}.\]
Thus, we have \(V_{2}V_{1}^{-1}V_{2}=V_{1}\) which implies that \((V_{1}^{-1}V_{2})^{2}=I\) i.e. \(V_{1}^{-1}V_{2}=V_{2}^{-1}V_{1}\) is involutory.
Remark 9: \(V_{1}^{-1}V_{2}\) is involutory if and only if \(V_{1}^{-1}V_{2}=V_{2}^{-1}V_{1}\).
Now by applying Theorem 11 and Theorem 13, we can find involutory MDS matrices over \(\mathbb{F}_{2^{r}}\), as follows.
Corollary 8: _Let \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) be two generalized Vandermonde matrices of even order over \(\mathbb{F}_{2^{r}}\) with \(\mathbf{x}=(x_{1},x_{2},\dots,x_{n})\), \(\mathbf{y}=(x_{n+1},x_{n+2},\dots,x_{2n})\) and \(I=\{n-1\}\). If \(V_{1}\) and \(V_{2}\) satisfy the following three properties:_
1. \(x_{n+i}=l+x_{i}\) _for_ \(i=1,2,\dots,n\)_, for some_ \(l\in\mathbb{F}_{2^{r}}^{\star}\)_,_
2. \(x_{i}\neq x_{j}\) _for_ \(i\neq j\) _where_ \(1\leq i,j\leq 2n\)_, and_
3. \(\sum_{i=1}^{n}x_{r_{i}}\neq 0\) _for all_ \(R=\{r_{1},r_{2},\dots,r_{n}\}\subset E\)_, where_ \(E=\{1,2,\dots,2n\}\)_,_
_then \(V_{1}^{-1}V_{2}\) is an involutory MDS matrix._
Example 9: Let \(\alpha\) be a primitive element of \(\mathbb{F}_{2^{8}}\) and a root of \(x^{8}+x^{7}+x^{6}+x+1\). Let \(l=\alpha\), \(\mathbf{x}=\left(1,\alpha,\alpha^{2},\alpha^{3},\alpha^{4},\alpha^{5}\right)\), and \(\mathbf{y}=\left(\alpha+1,0,\alpha^{2}+\alpha,\alpha^{3}+\alpha,\alpha^{4}+\alpha,\alpha^{5}+\alpha\right)\). Consider the generalized Vandermonde matrices \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) with \(I=\{5\}\). Then it can be checked that both matrices \(V_{1}\) and \(V_{2}\) satisfy the conditions of Corollary 8. Therefore, the matrix
\[V_{1}^{-1}V_{2}=\begin{bmatrix}\alpha^{113}&\alpha^{33}&\alpha^{227}&\alpha^{9 3}&\alpha^{16}&\alpha^{174}\\ \alpha^{63}&\alpha^{107}&\alpha^{186}&\alpha^{149}&\alpha^{175}&\alpha^{10}\\ \alpha^{105}&\alpha^{34}&\alpha^{116}&\alpha^{97}&\alpha^{198}&\alpha^{197}\\ \alpha^{40}&\alpha^{66}&\alpha^{166}&\alpha^{43}&\alpha^{213}&\alpha^{52}\\ \alpha^{136}&\alpha^{10}&\alpha^{185}&\alpha^{131}&\alpha^{5}&\alpha^{136}\\ \alpha^{211}&\alpha^{17}&\alpha^{101}&\alpha^{142}&\alpha^{53}&\alpha^{56}\\ \end{bmatrix}\]
is an involutory MDS matrix.
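The involutory property in Example 9 (and the lower-triangular structure of \(V_{2}V_{1}^{-1}\) from Theorem 13) can be checked as in the sketch below, written in plain Python over \(\mathbb{F}_{2^{8}}\) with the constructing polynomial \(x^{8}+x^{7}+x^{6}+x+1\); the helper names are ours, not from the paper. MDS-ness itself can be verified with the submatrix-rank checks of the earlier sketches.

```python
IRR, R = 0b111000011, 8  # GF(2^8) defined by x^8 + x^7 + x^6 + x + 1

def gf_mul(a, b):
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> R:
            a ^= IRR
    return res

def gf_pow(a, e):
    # note: gf_pow(v, 0) returns 1 even for v = 0, matching the all-ones first row
    res = 1
    while e:
        if e & 1:
            res = gf_mul(res, a)
        a = gf_mul(a, a)
        e >>= 1
    return res

def gf_inv(a):
    return gf_pow(a, 2**R - 2)

def mat_mul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            s = 0
            for k in range(inner):
                s ^= gf_mul(A[i][k], B[k][j])
            C[i][j] = s
    return C

def mat_inv(A):
    """Gauss-Jordan inverse; assumes A is nonsingular."""
    n = len(A)
    M = [A[i][:] + [1 if i == j else 0 for j in range(n)] for i in range(n)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c])
        M[c], M[piv] = M[piv], M[c]
        inv = gf_inv(M[c][c])
        M[c] = [gf_mul(inv, v) for v in M[c]]
        for i in range(n):
            if i != c and M[i][c]:
                f = M[i][c]
                M[i] = [u ^ gf_mul(f, v) for u, v in zip(M[i], M[c])]
    return [row[n:] for row in M]

alpha, n = 0b0010, 6
l = alpha
x = [gf_pow(alpha, k) for k in range(n)]      # 1, a, ..., a^5
y = [xi ^ l for xi in x]                      # y_i = l + x_i (addition is XOR)
exps = [0, 1, 2, 3, 4, 6]                     # row exponents for I = {5}

V1 = [[gf_pow(v, e) for v in x] for e in exps]
V2 = [[gf_pow(v, e) for v in y] for e in exps]

B = mat_mul(mat_inv(V1), V2)                  # V1^{-1} V2
I_n = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
print("involutory:", mat_mul(B, B) == I_n)

L = mat_mul(V2, mat_inv(V1))                  # V2 V1^{-1}
print("lower triangular:",
      all(L[i][j] == 0 for i in range(n) for j in range(i + 1, n)))
```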
Remark 10: It is worth mentioning that the above result is not true for odd order matrices. For example, consider the \(3\times 3\) generalized Vandermonde matrices \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) with \(I=\{2\}\), \(\mathbf{x}=(1,\alpha,\alpha^{2})\) and \(\mathbf{y}=(1+\alpha^{3},\alpha+\alpha^{3},\alpha^{2}+\alpha^{3})\), where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\). Then it can be checked that the matrices \(V_{1}\) and \(V_{2}\) satisfy the conditions in Corollary 8. However, the matrix
\[V_{1}^{-1}V_{2}=\begin{bmatrix}\alpha^{10}&\alpha^{13}&\alpha^{1}\\ \alpha^{3}&\alpha^{11}&\alpha^{11}\\ \alpha^{11}&\alpha^{1}&\alpha^{13}\\ \end{bmatrix}\]
is not an involutory matrix.
Also, by using Theorem 12 and Theorem 13, we can obtain involutory NMDS matrices over \(\mathbb{F}_{2^{r}}\) with the following approach.
Corollary 9: _Let \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) be two generalized Vandermonde matrices of even order over \(\mathbb{F}_{2^{r}}\) with \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\), \(\mathbf{y}=(x_{n+1},x_{n+2},\ldots,x_{2n})\) and \(I=\{n-1\}\). If \(V_{1}\) and \(V_{2}\) satisfy the following three properties:_
1. \(x_{n+i}=l+x_{i}\) _for_ \(i=1,2,\ldots,n\)_, for some_ \(l\in\mathbb{F}_{2^{r}}^{\star}\)_,_
2. \(x_{i}\neq x_{j}\) _for_ \(i\neq j\) _where_ \(1\leq i,j\leq 2n\)_, and_
3. \(\sum_{i=1}^{n}x_{i}\neq 0\)_,_ \(\sum_{i=1}^{n}x_{n+i}\neq 0\) _and_ \(\sum_{i=1}^{n}x_{r_{i}}=0\) _for some other_ \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\)_, where_ \(E=\{1,2,\ldots,2n\}\)_,_
_then \(V_{1}^{-1}V_{2}\) is an involutory NMDS matrix._
Example 10: Let \(\alpha\) be a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\). Let \(l=1\), \(\mathbf{x}=(1,\alpha,\alpha^{2},\alpha^{3})\), and \(\mathbf{y}=(0,1+\alpha,1+\alpha^{2},1+\alpha^{3})\). Consider the generalized Vandermonde matrices \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) with \(I=\{3\}\). Then it
can be checked that both matrices \(V_{1}\) and \(V_{2}\) satisfy the conditions of Corollary 9. Therefore, the matrix
\[V_{1}^{-1}V_{2}=\begin{bmatrix}\alpha^{9}&\alpha^{7}&\alpha^{7}&\alpha^{7}\\ \alpha^{3}&\alpha^{14}&\alpha^{3}&\alpha^{3}\\ \alpha^{10}&\alpha^{10}&\alpha^{5}&\alpha^{10}\\ \alpha^{2}&\alpha^{2}&\alpha^{2}&\alpha^{8}\end{bmatrix}\]
is an involutory NMDS matrix.
We will now focus on using the generalized Vandermonde matrices \(V_{\perp}(\mathbf{x};I)\) with \(I=\{1\}\) for constructing MDS and NMDS matrices. Similar to the case of generalized Vandermonde matrices with \(I=\{n-1\}\), these matrices alone may not be MDS or NMDS (as shown in Example 11). Therefore, we will consider two generalized Vandermonde matrices for the construction of MDS and NMDS matrices.
Example 11: Consider the generalized Vandermonde matrix \(V_{\perp}(\mathbf{x};I)\) with \(\mathbf{x}=(1,\alpha,\alpha^{5},\alpha^{10})\) and \(I=\{1\}\)
\[V_{\perp}(\mathbf{x};I)=\begin{bmatrix}1&1&1&1\\ 1&\alpha^{2}&\alpha^{10}&\alpha^{20}\\ 1&\alpha^{3}&\alpha^{15}&\alpha^{30}\\ 1&\alpha^{4}&\alpha^{20}&\alpha^{40}\end{bmatrix},\]
where \(\alpha\) is a primitive element of the finite field \(\mathbb{F}_{2^{4}}\) constructed by the polynomial \(x^{4}+x+1\). But it contains a singular \(2\times 2\) submatrix \(\begin{bmatrix}1&1\\ \alpha^{15}&\alpha^{30}\end{bmatrix}\). Hence, \(V_{\perp}(\mathbf{x};I)\) is not an MDS matrix. Also, it can be checked that \(V_{\perp}(\mathbf{x};I)\) is not an NMDS matrix.
We can prove the following theorem using Corollary 5, with a proof similar to that of Theorem 11. For brevity, we state the result without presenting a proof.
Theorem 14: _Let \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) be two generalized Vandermonde matrices with \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\), \(\mathbf{y}=(x_{n+1},x_{n+2},\ldots,x_{2n})\) and \(I=\{1\}\). Suppose that the elements \(x_{i}\) are \(2n\) distinct nonzero elements from \(\mathbb{F}_{q}\), and \(\sum_{i=1}^{n}x_{r_{i}}^{-1}\neq 0\) for all \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\), where \(E=\{1,2,\ldots,2n\}\). Then the matrices \(V_{1}^{-1}V_{2}\) and \(V_{2}^{-1}V_{1}\) are such that any square submatrix of them is nonsingular and hence MDS matrices._
Example 12: Consider the generalized Vandermonde matrices \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) with \(\mathbf{x}=(1,\alpha,\alpha^{2},\alpha^{3})\), \(\mathbf{y}=(\alpha^{4},\alpha^{5},\alpha^{6},\alpha^{7})\) and \(I=\{1\}\), where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{8}}\) and a root of \(x^{8}+x^{7}+x^{6}+x+1\). It can be verified that \(V_{1}\) and \(V_{2}\) satisfy the conditions in Theorem 14. Therefore, the matrices
\[V_{1}^{-1}V_{2}=\begin{bmatrix}\alpha^{9}&\alpha^{43}&\alpha^{252}&\alpha^{70 }\\ \alpha^{232}&\alpha^{68}&\alpha^{92}&\alpha^{168}\\ \alpha^{206}&\alpha^{213}&\alpha^{93}&\alpha^{230}\\ \alpha^{34}&\alpha^{243}&\alpha^{61}&\alpha^{152}\end{bmatrix}\text{ and }V_{2}^{-1}V_{1}= \begin{bmatrix}\alpha^{24}&\alpha^{137}&\alpha^{42}&\alpha^{223}\\ \alpha^{66}&\alpha^{14}&\alpha^{88}&\alpha^{197}\\ \alpha^{187}&\alpha^{35}&\alpha^{50}&\alpha^{25}\\ \alpha^{128}&\alpha^{33}&\alpha^{214}&\alpha^{246}\end{bmatrix}\]
are MDS matrices.
In the following theorem we discuss a new construction of NMDS matrices from the generalized Vandermonde matrices with \(I=\{1\}\). The proof can be derived using Corollary 5, following a similar approach to that of Theorem 12. We state the result without providing a proof.
Theorem 15: _Let \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) be two generalized Vandermonde matrices with \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\), \(\mathbf{y}=(x_{n+1},x_{n+2},\ldots,x_{2n})\) and \(I=\{1\}\). Assume that the elements \(x_{i}\) are \(2n\) distinct nonzero elements from \(\mathbb{F}_{q}\) such that \(\sum_{i=1}^{n}x_{i}^{-1}\neq 0\), \(\sum_{i=1}^{n}x_{n+i}^{-1}\neq 0\) and \(\sum_{i=1}^{n}x_{r_{i}}^{-1}=0\) for some other \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\), where \(E=\{1,2,\ldots,2n\}\). Then the matrices \(V_{1}^{-1}V_{2}\) and \(V_{2}^{-1}V_{1}\) are NMDS matrices._
Remark 11: Similar to Theorem 12, according to Corollary 5, the assumption \(\sum_{i=1}^{n}x_{i}^{-1}\neq 0\) and \(\sum_{i=1}^{n}x_{n+i}^{-1}\neq 0\) in Theorem 15 is necessary to ensure the nonsingularity of \(V_{1}\) and \(V_{2}\).
Example 13: Consider the generalized Vandermonde matrices \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) with \(\mathbf{x}=(1,\alpha,\alpha^{2},\alpha^{3})\), \(\mathbf{y}=(\alpha^{4},\alpha^{5},\alpha^{6},\alpha^{7})\) and \(I=\{1\}\), where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\). It is easy to check that the \(x_{i}\) are distinct and that \(1+\alpha^{-1}+\alpha^{-2}+\alpha^{-7}=0\). Therefore, the matrices
\[V_{1}^{-1}V_{2}=\begin{bmatrix}\alpha^{9}&\alpha^{5}&\alpha^{2}&\alpha^{13}\\ \alpha^{7}&\alpha&\alpha^{10}&\alpha^{9}\\ \alpha^{11}&0&1&\alpha^{5}\\ \alpha^{11}&\alpha^{8}&\alpha^{4}&0\end{bmatrix}\text{ and }V_{2}^{-1}V_{1}= \begin{bmatrix}\alpha^{14}&\alpha^{11}&\alpha^{9}&\alpha^{13}\\ 0&\alpha^{4}&\alpha^{8}&\alpha^{2}\\ \alpha^{6}&\alpha^{13}&\alpha^{13}&\alpha^{2}\\ \alpha^{2}&1&\alpha^{4}&\alpha^{6}\end{bmatrix}\]
are NMDS matrices.
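The hypotheses of Theorem 15 for Example 13 amount to conditions on sums of inverses, which are easy to enumerate. The sketch below (plain Python; the helper names are ours, not from the paper) lists all 4-element subsets of \(\{1,\alpha,\ldots,\alpha^{7}\}\subset\mathbb{F}_{2^{4}}\) whose inverse-sum vanishes.

```python
from itertools import combinations

IRR, R = 0b10011, 4  # GF(2^4) defined by x^4 + x + 1

def gf_mul(a, b):
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> R:
            a ^= IRR
    return res

def gf_pow(a, e):
    res = 1
    while e:
        if e & 1:
            res = gf_mul(res, a)
        a = gf_mul(a, a)
        e >>= 1
    return res

def gf_inv(a):
    return gf_pow(a, 2**R - 2)

alpha = 0b0010
elems = [gf_pow(alpha, k) for k in range(8)]   # x_1, ..., x_8 = 1, a, ..., a^7

def inv_sum(idx):
    s = 0
    for i in idx:
        s ^= gf_inv(elems[i])
    return s

print("sum of inverses over x_1..x_4 is nonzero:", inv_sum(range(0, 4)) != 0)
print("sum of inverses over x_5..x_8 is nonzero:", inv_sum(range(4, 8)) != 0)

vanishing = [s for s in combinations(range(8), 4) if inv_sum(s) == 0]
print("4-subsets (0-indexed exponents) with vanishing inverse-sum:", vanishing)
# the subset {0, 1, 2, 7} corresponds to 1 + a^{-1} + a^{-2} + a^{-7} = 0
```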
Now we consider generalized Vandermonde matrices \(V(\mathbf{x};T)\) where \(T\) has more than one discontinuity; specifically, we consider \(V_{\perp}(\mathbf{x};I)\) with \(I=\{1,n\}\) to provide a new direct construction of MDS matrices. The proof follows a similar approach to that of Theorem 11 and can be derived using Corollary 6. For brevity, we state the result without presenting a proof.
Theorem 16: _Let \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) be two generalized Vandermonde matrices with \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\), \(\mathbf{y}=(x_{n+1},x_{n+2},\ldots,x_{2n})\) and \(I=\{1,n\}\). The elements \(x_{i}\) are \(2n\) distinct nonzero elements from \(\mathbb{F}_{q}\), and \((\sum_{i=1}^{n}x_{r_{i}})(\sum_{i=1}^{n}x_{r_{i}}^{-1})-1\neq 0\) for all \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\), where \(E=\{1,2,\ldots,2n\}\). Then the matrices \(V_{1}^{-1}V_{2}\) and \(V_{2}^{-1}V_{1}\) are such that any square submatrix of them is nonsingular and hence MDS matrices._
Example 14: Consider the generalized Vandermonde matrices \(V_{1}=V_{\perp}(\mathbf{x};I)\) and \(V_{2}=V_{\perp}(\mathbf{y};I)\) with \(\mathbf{x}=(1,\alpha,\alpha^{2},\alpha^{3})\), \(\mathbf{y}=(\alpha^{4},\alpha^{5},\alpha^{6},\alpha^{7})\) and \(I=\{1,4\}\), where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\). It can be verified that \(V_{1}\) and \(V_{2}\) satisfy the conditions in Theorem 16. Therefore, the matrices
\[V_{1}^{-1}V_{2}=\begin{bmatrix}\alpha^{10}&\alpha^{2}&\alpha^{2}&\alpha^{14} \\ \alpha^{12}&\alpha^{2}&\alpha^{10}&\alpha^{5}\\ \alpha&\alpha^{9}&1&1\\ \alpha^{7}&\alpha^{7}&\alpha^{4}&\alpha^{12}\end{bmatrix}\text{ and }V_{2}^{-1}V_{1}= \begin{bmatrix}\alpha^{7}&\alpha^{4}&\alpha^{12}&\alpha^{2}\\ \alpha^{5}&\alpha^{10}&\alpha^{9}&\alpha^{6}\\ \alpha^{5}&1&\alpha^{12}&\alpha^{12}\\ \alpha^{9}&\alpha^{2}&\alpha^{7}&\alpha^{5}\end{bmatrix}\]
are MDS matrices.
Remark 12: It is important to note that in Theorem 11 and Theorem 12, at most one \(x_{i}\) may be zero for \(V_{1}^{-1}V_{2}\) and \(V_{2}^{-1}V_{1}\) to be MDS or NMDS. However, in Theorem 14, Theorem 15, and Theorem 16, each \(x_{i}\) needs to be nonzero; otherwise, the term \(x_{i}^{-1}\) in the conditions will not be defined.
Remark 13: We have presented a method for constructing involutory MDS and NMDS matrices using generalized Vandermonde matrices \(V_{\perp}(x;I)\) with \(I=\left\{n-1\right\}\). However, we have not been able to determine the conditions for constructing involutory MDS and NMDS matrices from generalized Vandermonde matrices with \(I=\left\{1\right\}\) and \(I=\left\{1,n\right\}\).
Remark 14: This paper does not consider the generalized Vandermonde matrices \(V(\mathbf{x};T)\) with discontinuities other than \(\left\{1\right\}\), \(\left\{n-1\right\}\), or \(\left\{1,n\right\}\), or those with more than two discontinuities. This is because the conditions for being MDS or NMDS matrices become more complicated. However, it is possible to find additional direct constructions of MDS and NMDS matrices by using Theorem 7.
So far, we have discussed nonrecursive constructions of MDS and NMDS matrices. In the next section, we will explore recursive constructions of MDS and NMDS matrices using the direct method.
## 4 Direct Construction of Recursive MDS and NMDS Matrices
In this section, we present various techniques for the direct construction of MDS and NMDS matrices over finite fields in a recursive approach. To the best of our knowledge, we are the first to provide a direct construction method for recursive NMDS matrices. We begin by establishing a condition for the similarity between a companion matrix and a diagonal matrix. Using this condition, we can represent the companion matrix as a combination of a Vandermonde matrix and a diagonal matrix. We utilize determinant expressions for generalized Vandermonde matrices to present several techniques for constructing recursive NMDS matrices that are derived from companion matrices. Furthermore, a new direct construction for recursive MDS matrices is introduced.
Lemma 9: _Let \(g(x)\in\mathbb{F}_{q}[x]\) be a monic polynomial of degree \(n\) with \(n\) distinct roots, say \(\lambda_{1},\ldots,\lambda_{n}\in\bar{\mathbb{F}}_{q}\). Then the matrix_
\[G^{\prime}=\left[\begin{array}{ccccc}1&\lambda_{1}&\ldots&\lambda_{1}^{n-1}& \lambda_{1}^{m}&\lambda_{1}^{m+1}&\ldots&\lambda_{1}^{m+n-1}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ 1&\lambda_{n}&\ldots&\lambda_{n}^{n-1}&\lambda_{n}^{m}&\lambda_{n}^{m+1}& \ldots&\lambda_{n}^{m+n-1}\end{array}\right] \tag{9}\]
_is also a generator matrix for the \([2n,n]\) linear code \(\mathcal{C}\) with generator matrix \(G=[I\ |\ (C_{g}^{T})^{m}]\)._
Proof: Since \(g(x)\) has \(n\) distinct roots \(\lambda_{1},\ldots,\lambda_{n}\), the companion matrix \(C_{g}\) associated with \(g(x)\) can be written as \(C_{g}=VDV^{-1}\), where
\[V =vand(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\] \[=\left[\begin{array}{cccc}1&1&\ldots&1\\ \lambda_{1}&\lambda_{2}&\ldots&\lambda_{n}\\ \lambda_{1}^{2}&\lambda_{2}^{2}&\ldots&\lambda_{n}^{2}\\ \vdots&\vdots&\vdots&\vdots\\ \lambda_{1}^{n-1}&\lambda_{2}^{n-1}&\ldots&\lambda_{n}^{n-1}\end{array}\right]\]
and \(D=diag(\lambda_{1},\ldots,\lambda_{n})\).
Let \(\mathcal{C}\) be a \([2n,n]\) linear code with generator matrix \(G=[I\ |\ (C_{g}^{T})^{m}]\). Now
\[G =[I\ |\ (C_{g}^{T})^{m}]=[I\ |\ ((V^{T})^{-1}DV^{T})^{m}]\] \[=[I\ |\ (V^{T})^{-1}D^{m}V^{T}] \tag{10}\] \[=(V^{T})^{-1}[V^{T}\ |\ D^{m}V^{T}]\] \[=(V^{T})^{-1}G^{\prime},\]
where \(G^{\prime}=[V^{T}\ |\ D^{m}V^{T}]\). Therefore, we have
\[G^{\prime} =[V^{T}\ |\ D^{m}V^{T}]\] \[=\left[\begin{array}{cccc}1&\lambda_{1}&\ldots&\lambda_{1}^{n-1 }&\lambda_{1}^{m}&\lambda_{1}^{m+1}&\ldots&\lambda_{1}^{m+n-1}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ 1&\lambda_{n}&\ldots&\lambda_{n}^{n-1}&\lambda_{n}^{m}&\lambda_{n}^{m+1}& \ldots&\lambda_{n}^{m+n-1}\end{array}\right].\]
Also, from (10), we have \(G^{\prime}=V^{T}G\). Hence, according to Lemma 8, we can conclude that \(G^{\prime}\) is also a generator matrix for the linear code \(\mathcal{C}\).
Let \(C_{g}\) be the companion matrix associated with a monic polynomial \(g(x)\) of degree \(n\geq 3\). Then for \(m<n\), it can be observed that the first row of \(C_{g}^{m}\) is a unit vector. Hence, the linear code generated by \([I\ |\ C_{g}^{m}]\) has minimum distance \(<n\). Therefore, for \(m<n\), \(C_{g}^{m}\) cannot be an MDS or NMDS matrix.
Theorem 17: _Let \(g(x)\in\mathbb{F}_{q}[x]\) be a monic polynomial of degree \(n\). Suppose that \(g(x)\) has \(n\) distinct roots, say \(\lambda_{1},\ldots,\lambda_{n}\in\bar{\mathbb{F}}_{q}\). Let \(m\) be an integer with \(m\geq n\). Then the matrix \(M=C_{g}^{m}\) is MDS if and only if any \(n\) columns of the matrix \(G^{\prime}\) given in (9) are linearly independent._
Proof: From Remark 5, we know that \(C_{g}^{m}\) is an MDS matrix if and only if its transpose \((C_{g}^{m})^{T}=(C_{g}^{T})^{m}\) is also an MDS matrix. Also, according to Definition 6, \((C_{g}^{T})^{m}\) is MDS if and only if the \([2n,n]\) linear code \(\mathcal{C}\), with generator matrix \(G=[I\ |\ (C_{g}^{T})^{m}]\), is an MDS code.
Now since \(\lambda_{1},\ldots,\lambda_{n}\) are \(n\) distinct roots of \(g(x)\), from Lemma 9, we can say that the matrix \(G^{\prime}\) in (9) is also a generator matrix for the code \(\mathcal{C}\). Therefore, by Remark 2, we can establish that \((C_{g}^{m})^{T}\) is MDS, and hence \(C_{g}^{m}\), if and only if any \(n\) columns of \(G^{\prime}\) are linearly independent.
Theorem 18: _Let \(g(x)\in\mathbb{F}_{q}[x]\) be a monic polynomial of degree \(n\). Suppose that \(g(x)\) has \(n\) distinct roots, say \(\lambda_{1},\ldots,\lambda_{n}\in\bar{\mathbb{F}}_{q}\). Let \(m\) be an integer with \(m\geq n\). Then the matrix \(M=C_{g}^{m}\) is NMDS if and only if the matrix \(G^{\prime}\) given in (9) satisfies the three conditions outlined in Lemma 4._
Proof: From Corollary 3, we know that \(C_{g}^{m}\) is an NMDS matrix if and only if its transpose \((C_{g}^{m})^{T}=(C_{g}^{T})^{m}\) is also an NMDS matrix. Also, by Definition 8, \((C_{g}^{T})^{m}\) is an NMDS matrix if and only if the \([2n,n]\) linear code \(\mathcal{C}\), with generator matrix \(G=[I\ |\ (C_{g}^{T})^{m}]\), is an NMDS code.
As \(\lambda_{1},\ldots,\lambda_{n}\) are \(n\) distinct roots of \(g(x)\), we can infer from Lemma 9 that the matrix \(G^{\prime}\) defined in (9) is also a generator matrix for the code \(\mathcal{C}\). Consequently, we can conclude that \((C_{g}^{m})^{T}\) is NMDS, and therefore \(C_{g}^{m}\) is NMDS, if and only if the matrix \(G^{\prime}\) satisfies the three conditions outlined in Lemma 4.
Lemma 10: _If \(g(x)=\prod_{i=1}^{n}(x-\lambda_{i})\in\mathbb{F}_{q}[x]\) yields a recursive MDS (NMDS) matrix then for any \(c\in\mathbb{F}_{q}^{*}\) the polynomial \(c^{n}g\left(\dfrac{x}{c}\right)=\prod_{i=1}^{n}(x-c\lambda_{i})\) also yields a recursive MDS (NMDS) matrix._
Proof: Let \(g^{*}(x)=c^{n}g\left(\dfrac{x}{c}\right)\). The matrix \(C_{g^{*}}=cDC_{g}D^{-1}\) where
\[D=\begin{bmatrix}1&0&0&\ldots&0&0\\ 0&c&0&\ldots&0&0\\ 0&0&c^{2}&\ldots&0&0\\ &&\ldots&&\\ 0&0&0&\ldots&c^{n-2}&0\\ 0&0&0&\ldots&0&c^{n-1}\end{bmatrix}\]
The matrix \(C_{g^{*}}^{m}=c^{m}DC_{g}^{m}D^{-1}\) is MDS (NMDS) if and only if \(C_{g}^{m}\) is MDS (NMDS).
Using the above lemma, it is possible to obtain more polynomials that produce recursive MDS or NMDS matrices from an initial polynomial.
Now, we present two methods for the construction of polynomials that yield recursive NMDS matrices. The polynomials constructed using these methods have distinct roots. The main idea behind these methods is Theorem 18: we suitably choose \(\lambda_{i},1\leq i\leq n\), and verify that the polynomial \(g(x)=\prod_{i=1}^{n}(x-\lambda_{i})\in\mathbb{F}_{q}[x]\) satisfies the condition of Theorem 18. To do so, we must examine the rank of the submatrices of \(G^{\prime}\) constructed from any \(t\) columns (here we examine \(t=n-1,n,n+1\)) corresponding to the \(\lambda_{i}\)'s, as given in (9). A submatrix \(G^{\prime}[R]\), constructed from any \(t\) columns of \(G^{\prime}\), is given by
\[G^{\prime}[R]=\begin{bmatrix}\lambda_{1}^{r_{1}}&\lambda_{1}^{r_{2}}&\ldots& \lambda_{1}^{r_{t}}\\ \lambda_{2}^{r_{1}}&\lambda_{2}^{r_{2}}&\ldots&\lambda_{2}^{r_{t}}\\ \vdots&\vdots&\ddots&\vdots\\ \lambda_{n}^{r_{1}}&\lambda_{n}^{r_{2}}&\ldots&\lambda_{n}^{r_{t}}\end{bmatrix}, \tag{11}\]
where \(R\) denotes a set \(\{r_{1},r_{2},\ldots,r_{t}\}\subset E=\{0,1,\ldots,n-1,m,m+1,\ldots,m+n-1\}\) of \(t\) elements.
Theorem 19: _Let \(\lambda_{i}=\theta^{i-1}\) for \(1\leq i\leq n-1\) and \(\lambda_{n}=\theta^{n}\) for some \(\theta\in\mathbb{F}_{q}^{*}\). Let \(g(x)=\prod_{i=1}^{n}(x-\lambda_{i})\). Then for an integer \(m\geq n\), the matrix \(C_{g}^{m}\) is NMDS if and only if \(\theta^{r}\neq\theta^{r^{\prime}}\) for distinct \(r,r^{\prime}\in E\) and \(\sum_{i=1}^{n}\theta^{r_{i}}=0\) for some \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\), where \(E=\{0,1,\ldots,n-1,m,m+1,\ldots,m+n-1\}\)._
Proof: We have \(\lambda_{i}=\theta^{i-1}\) for \(1\leq i\leq n-1\) and \(\lambda_{n}=\theta^{n}\). So for \(R=\{r_{1},r_{2},\ldots,r_{t}\}\subset E\) we have
\[G^{\prime}[R]=\left[\begin{array}{cccc}1&1&\ldots&1\\ \theta^{r_{1}}&\theta^{r_{2}}&\ldots&\theta^{r_{t}}\\ \vdots&\vdots&\ddots&\vdots\\ (\theta^{n-2})^{r_{1}}&(\theta^{n-2})^{r_{2}}&\ldots&(\theta^{n-2})^{r_{t}}\\ (\theta^{n})^{r_{1}}&(\theta^{n})^{r_{2}}&\ldots&(\theta^{n})^{r_{t}}\end{array} \right]=\left[\begin{array}{cccc}1&1&\ldots&1\\ \theta^{r_{1}}&\theta^{r_{2}}&\ldots&\theta^{r_{t}}\\ \vdots&\vdots&\ddots&\vdots\\ (\theta^{r_{1}})^{n-2}&(\theta^{r_{2}})^{n-2}&\ldots&(\theta^{r_{t}})^{n-2}\\ (\theta^{r_{1}})^{n}&(\theta^{r_{2}})^{n}&\ldots&(\theta^{r_{t}})^{n}\end{array} \right].\]
Now, to prove the theorem, we can assume \(x_{r_{i}}=\theta^{r_{i}}\) for \(1\leq i\leq t\) and proceed exactly as in the proof of Theorem 12.
Example 15: Consider the field \(\mathbb{F}_{2^{4}}\) with the constructing polynomial \(x^{4}+x+1\) and let \(\alpha\) be a root of it. Let \(\theta=\alpha\). We can verify that \(\theta^{0}+\theta^{1}+\theta^{3}+\theta^{7}=0\). Now, let us consider the polynomial \(g(x)=(x-1)(x-\alpha)(x-\alpha^{2})(x-\alpha^{4})\). It can be verified that \(C_{g}^{m}\) is an NMDS matrix for \(4\leq m\leq 11\).
Remark 15: The above theorem assumes that \(\sum_{i=1}^{n}\theta^{r_{i}}=0\) for some \(R=\{r_{1},\)\(r_{2},\)\(\ldots,\)\(r_{n}\}\subset E\). However, to ensure MDS property, the condition needs to be changed to \(\sum_{i=1}^{n}\theta^{r_{i}}\neq 0\) for all \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\)[14, Theorem 3].
Remark 16: We can see that the condition on \(\theta\) in Theorem 19 is applicable even if we take \(\lambda_{i}=\theta^{i-1}c,1\leq i\leq n-1\), and \(\lambda_{n}=\theta^{n}c\) for some \(c\in\mathbb{F}_{q}^{*}\). By considering the roots in this way, the polynomials that we get are the same as those obtained by applying Lemma 10.
Lemma 11: _Let \(\lambda_{1}=1\), and \(\lambda_{i}=\theta^{i},\)\(2\leq i\leq n\), for some \(\theta\in\mathbb{F}_{q}^{*}\). Let \(g(x)=\prod_{i=1}^{n}(x-\lambda_{i})\). Then for an integer \(m\geq n\), the matrix \(C_{g}^{m}\) is NMDS if and only if \(\theta^{r}\neq\theta^{r^{\prime}}\) for \(r,r^{\prime}\in E\) and \(\sum_{i=1}^{n}\theta^{-r_{i}}=0\) for some \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\), where \(E=\{0,1,\ldots,n-1,m,m+1,\ldots,m+n-1\}\)._
Proof: Consider \(\gamma_{i}=\lambda_{n-i+1}=(\theta^{-1})^{i-1}c,1\leq i\leq n-1\) and \(\gamma_{n}=\lambda_{1}=(\theta^{-1})^{n}c\) for \(c=\theta^{n}\). Then by Theorem 19 and the above remark, the matrix \(C_{g}^{m}\) is NMDS if and only if \(\theta^{-r_{i}},1\leq i\leq n\), are distinct and \(\sum_{i=1}^{n}\theta^{-r_{i}}=0\) for some \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\). Hence, the proof.
Example 16: Consider the field \(\mathbb{F}_{2^{4}}\) with the constructing polynomial \(x^{4}+x+1\) and let \(\alpha\) be a root of it. Let \(\theta=\alpha\). We can verify that \(\theta^{0}+\theta^{-1}+\theta^{-2}+\theta^{-7}=0\). Now, let us consider the polynomial \(g(x)=(x-1)(x-\alpha^{2})(x-\alpha^{3})(x-\alpha^{4})\). It can be verified that \(C_{g}^{m}\) is an NMDS matrix for \(4\leq m\leq 11\).
Remark 17: The proof of the above lemma can also be obtained in the same way as the proof of Theorem 19, by using Corollary 5.
Remark 18: The above lemma assumes that \(\sum_{i=1}^{n}\theta^{-r_{i}}=0\) for some \(R=\{r_{1},\)\(r_{2},\)\(\dots,\)\(r_{n}\}\subset E.\) However, to ensure MDS property, the condition needs to be changed to \(\sum_{i=1}^{n}\theta^{-r_{i}}\neq 0\) for all \(R=\{r_{1},r_{2},\dots,r_{n}\}\subset E\)[14, Corollary 1].
Now, we will present a direct construction of polynomials that yield recursive MDS matrices.
Theorem 20: _Let \(\lambda_{1}=1\), \(\lambda_{i}=\theta^{i}\) for \(2\leq i\leq n-1\) and \(\lambda_{n}=\theta^{n+1}\) for some \(\theta\in\mathbb{F}_{q}^{*}\). Let \(g(x)=\prod_{i=1}^{n}(x-\lambda_{i})\). Then for an integer \(m\geq n\), the matrix \(C_{g}^{m}\) is MDS if and only if \(\theta^{r}\neq\theta^{r^{\prime}}\) for distinct \(r,r^{\prime}\in E\) and \((\sum_{i=1}^{n}\theta^{r_{i}})(\sum_{i=1}^{n}\theta^{-r_{i}})-1\neq 0\) for all \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\), where \(E=\{0,1,\ldots,n-1,m,m+1,\ldots,m+n-1\}\)._
Proof: We have \(\lambda_{1}=1\), \(\lambda_{i}=\theta^{i}\) for \(2\leq i\leq n-1\) and \(\lambda_{n}=\theta^{n+1}\). From Theorem 17, we know that the matrix \(C_{g}^{m}\) is MDS if and only if any \(n\) columns of \(G^{\prime}\) are linearly independent. So for any \(R=\{r_{1},r_{2},\ldots,r_{n}\}\subset E\) we have
\[G^{\prime}[R]=\left[\begin{array}{cccc}1&1&\dots&1\\ (\theta^{2})^{r_{1}}&(\theta^{2})^{r_{2}}&\dots&(\theta^{2})^{r_{n}}\\ \vdots&\vdots&\ddots&\vdots\\ (\theta^{n-1})^{r_{1}}&(\theta^{n-1})^{r_{2}}&\dots&(\theta^{n-1})^{r_{n}}\\ (\theta^{n+1})^{r_{1}}&(\theta^{n+1})^{r_{2}}&\dots&(\theta^{n+1})^{r_{n}}\end{array}\right]=\left[\begin{array}{cccc}1&1&\dots&1\\ (\theta^{r_{1}})^{2}&(\theta^{r_{2}})^{2}&\dots&(\theta^{r_{n}})^{2}\\ \vdots&\vdots&\ddots&\vdots\\ (\theta^{r_{1}})^{n-1}&(\theta^{r_{2}})^{n-1}&\dots&(\theta^{r_{n}})^{n-1}\\ (\theta^{r_{1}})^{n+1}&(\theta^{r_{2}})^{n+1}&\dots&(\theta^{r_{n}})^{n+1}\end{array}\right].\]
Let \(y_{r_{i}}=\theta^{r_{i}}\) for \(1\leq i\leq n.\) Therefore, we have
\[G^{\prime}[R]=\left[\begin{array}{cccc}1&1&\dots&1\\ y_{r_{1}}^{2}&y_{r_{2}}^{2}&\dots&y_{r_{n}}^{2}\\ \vdots&\vdots&\ddots&\vdots\\ y_{r_{1}}^{n-1}&y_{r_{2}}^{n-1}&\dots&y_{r_{n}}^{n-1}\\ y_{r_{1}}^{n+1}&y_{r_{2}}^{n+1}&\dots&y_{r_{n}}^{n+1}\\ \end{array}\right],\]
which is a generalized Vandermonde matrix of the form \(V_{\perp}(\mathbf{y};I)\) with \(I=\{1,n\}\). Therefore, from Corollary 6, \(\det(G^{\prime}[R])\neq 0\) if and only if the \(y_{r_{i}}\) are distinct and \((\sum_{i=1}^{n}y_{r_{i}})(\sum_{i=1}^{n}y_{r_{i}}^{-1})-1\neq 0\). Hence the proof.
Example 17: Consider the field \(\mathbb{F}_{2^{4}}\) with the constructing polynomial \(x^{4}+x+1\) and let \(\alpha\) be a root of it. Let \(\theta=\alpha\) and consider the polynomial \(g(x)=(x-1)(x-\alpha^{2})(x-\alpha^{3})(x-\alpha^{5})\). It can be checked that the polynomial \(g(x)\) satisfies the condition in Theorem 20, so it yields a recursive MDS matrix of order \(4\). It can be verified that \(C_{g}^{4}\) is an MDS matrix.
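Example 17 can be verified by building the companion matrix explicitly and checking every square submatrix of \(C_{g}^{4}\). The sketch below is plain Python over \(\mathbb{F}_{2^{4}}\); the companion-matrix convention used here (ones on the superdiagonal, coefficients of \(g\) in the last row) is our assumption, and the transposed convention would give the same MDS verdict. All helper names are ours.

```python
from itertools import combinations

IRR, R = 0b10011, 4  # GF(2^4) defined by x^4 + x + 1

def gf_mul(a, b):
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> R:
            a ^= IRR
    return res

def gf_pow(a, e):
    res = 1
    while e:
        if e & 1:
            res = gf_mul(res, a)
        a = gf_mul(a, a)
        e >>= 1
    return res

def gf_inv(a):
    return gf_pow(a, 2**R - 2)

def det(rows):
    M = [r[:] for r in rows]
    n, d = len(M), 1
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c]), None)
        if piv is None:
            return 0
        M[c], M[piv] = M[piv], M[c]
        d = gf_mul(d, M[c][c])
        inv = gf_inv(M[c][c])
        for i in range(c + 1, n):
            if M[i][c]:
                f = gf_mul(M[i][c], inv)
                M[i] = [u ^ gf_mul(f, v) for u, v in zip(M[i], M[c])]
    return d

def mat_mul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            s = 0
            for k in range(inner):
                s ^= gf_mul(A[i][k], B[k][j])
            out[i][j] = s
    return out

def poly_mul(p, q):
    """Coefficient lists, lowest degree first, over the field."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gf_mul(a, b)
    return out

alpha, n = 0b0010, 4
roots = [1, gf_pow(alpha, 2), gf_pow(alpha, 3), gf_pow(alpha, 5)]
g = [1]
for r0 in roots:
    g = poly_mul(g, [r0, 1])          # (x - r) = (x + r) in characteristic 2

# Companion matrix (assumed convention): ones on the superdiagonal,
# the coefficients g[0..n-1] of the monic polynomial g in the last row.
Cg = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n - 1)]
Cg.append(g[:n])

M = Cg
for _ in range(3):
    M = mat_mul(M, Cg)                # M = Cg^4

def is_mds(A):
    m = len(A)
    for k in range(1, m + 1):
        for rs in combinations(range(m), k):
            for cs in combinations(range(m), k):
                if det([[A[i][j] for j in cs] for i in rs]) == 0:
                    return False
    return True

print("C_g^4 is MDS:", is_mds(M))
```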
## 5 Conclusion
There has been significant research in the literature on the direct construction of MDS matrices using both recursive and nonrecursive methods. However, research |
2308.05898 | Unveiling the Tricks: Automated Detection of Dark Patterns in Mobile
Applications | Mobile apps bring us many conveniences, such as online shopping and
communication, but some use malicious designs called dark patterns to trick
users into doing things that are not in their best interest. Many works have
been done to summarize the taxonomy of these patterns and some have tried to
mitigate the problems through various techniques. However, these techniques are
either time-consuming, not generalisable or limited to specific patterns. To
address these issues, we propose UIGuard, a knowledge-driven system that
utilizes computer vision and natural language pattern matching to automatically
detect a wide range of dark patterns in mobile UIs. Our system relieves the
need for manually creating rules for each new UI/app and covers more types with
superior performance. In detail, we integrated existing taxonomies into a
consistent one, conducted a characteristic analysis and distilled knowledge
from real-world examples and the taxonomy. Our UIGuard consists of two
components, Property Extraction and Knowledge-Driven Dark Pattern Checker. We
collected the first dark pattern dataset, which contains 4,999 benign UIs and
1,353 malicious UIs of 1,660 instances spanning 1,023 mobile apps. Our system
achieves a superior performance in detecting dark patterns (micro averages:
0.82 in precision, 0.77 in recall, 0.79 in F1 score). A user study involving 58
participants further shows that UIGuard significantly increases users'
knowledge of dark patterns. | Jieshan Chen, Jiamou Sun, Sidong Feng, Zhenchang Xing, Qinghua Lu, Xiwei Xu, Chunyang Chen | 2023-08-11T01:18:56Z | http://arxiv.org/abs/2308.05898v1 | # Unveiling the Tricks: Automated Detection of Dark Patterns in Mobile Applications
###### Abstract.
Mobile apps bring us many conveniences, such as online shopping and communication, but some use malicious designs called dark patterns to trick users into doing things that are not in their best interest. Many works have been done to summarize the taxonomy of these patterns and some have tried to mitigate the problems through various techniques. However, these techniques are either time-consuming, not generalisable or limited to specific patterns. To address these issues, we propose UIGuard, a knowledge-driven system that utilizes computer vision and natural language pattern matching to automatically detect a wide range of dark patterns in mobile UIs. Our system relieves the need for manually creating rules for each new UI/app and covers more types with superior performance. In detail, we integrated existing taxonomies into a consistent one, conducted a characteristic analysis and distilled knowledge from real-world examples and the taxonomy. Our UIGuard consists of two components, Property Extraction and Knowledge-Driven Dark Pattern Checker. We collected the first dark pattern dataset, which contains 4,999 benign UIs and 1,353 malicious UIs of 1,660 instances spanning 1,023 mobile apps. Our system achieves a superior performance in detecting dark patterns (micro averages 0.82 in precision, 0.77 in recall, 0.79 in F1 score). A user study involving 58 participants further shows that UIGuard significantly increases users' knowledge of dark patterns.
2303.16228 | Disk flaring with TNG50: diversity across Milky Way and M31 analogs | We use the sample of 198 Milky Way (MW) and Andromeda (M31) analogs from
TNG50 to quantify the level of disk flaring predicted by a modern,
high-resolution cosmological hydrodynamical simulation. Disk flaring refers to
the increase of vertical stellar disk height with galactocentric distance. The
TNG50 galaxies are selected to have stellar disky morphology, a stellar mass in
the range of $M_* = 10^{10.5 - 11.2}~\rm{M_{\odot}}$, and a MW-like Mpc-scale
environment at $z=0$. The stellar disks of such TNG50 MW/M31 analogs exhibit a
wide diversity of structural properties, including a number of galaxies with
disk scalelength and thin and thick disk scaleheights that are comparable to
those measured or inferred for the Galaxy and Andromeda. With one set of
physical ingredients, TNG50 returns a large variety of flaring flavours and
amounts, also for mono-age stellar populations. With this paper, we hence
propose a non-parametric characterization of flaring. The typical MW/M31
analogs exhibit disk scaleheights that are $1.5-2$ times larger in the outer
than in the inner regions of the disk for both old and young stellar
populations, but with a large galaxy-to-galaxy variation. Which stellar
population flares more, and by how much, also varies from galaxy to galaxy.
TNG50 de facto brackets existing observational constraints for the Galaxy and
all previous numerical findings. A link between the amount of flaring and the
$z=0$ global galaxy structural properties or merger history is complex.
However, a connection between the scaleheights and the local stellar vertical
kinematics and gravitational potential is clearly in place. | Diego Sotillo-Ramos, Martina Donnari, Annalisa Pillepich, Neige Frankel, Dylan Nelson, Volker Springel, Lars Hernquist | 2023-03-28T18:00:04Z | http://arxiv.org/abs/2303.16228v1 | # Disk flaring with TNG50: diversity across Milky Way and M31 analogs
###### Abstract
We use the sample of 198 Milky Way (MW) and Andromeda (M31) analogs from TNG50 to quantify the level of disk flaring predicted by a modern, high-resolution cosmological hydrodynamical simulation. Disk flaring refers to the increase of vertical stellar disk height with galactocentric distance. The TNG50 galaxies are selected to have stellar disky morphology, a stellar mass in the range of \(M_{*}=10^{10.5-11.2}\) M\({}_{\odot}\), and a MW-like Mpc-scale environment at \(z=0\). The stellar disks of such TNG50 MW/M31 analogs exhibit a wide diversity of structural properties, including a number of galaxies with disk scalelength and thin and thick disk scaleheights that are comparable to those measured or inferred for the Galaxy and Andromeda. With one set of physical ingredients, TNG50 returns a large variety of flaring flavours and amounts, also for mono-age stellar populations. With this paper, we hence propose a non-parametric characterization of flaring. The typical MW/M31 analogs exhibit disk scaleheights that are \(1.5-2\) times larger in the outer than in the inner regions of the disk for both old and young stellar populations, but with a large galaxy-to-galaxy variation. Which stellar population flares more, and by how much, also varies from galaxy to galaxy. TNG50 de facto brackets existing observational constraints for the Galaxy and all previous numerical findings. A link between the amount of flaring and the \(z=0\) global galaxy structural properties or merger history is complex. However, a connection between the scaleheights and the local stellar vertical kinematics and gravitational potential is clearly in place.
keywords: methods: numerical -- galaxies: formation -- Galaxy: disc -- Galaxy: evolution -- Galaxy: structure
## 1 Introduction
Understanding the formation and evolution of our Galaxy, of Andromeda, and of other disk galaxies is one of the main quests of modern astrophysics. Over the last decade, large spectroscopic surveys have constrained quantities such as the ages, element abundances and phase-space properties of the stars in the Milky Way, mostly in the proximity of the Sun but also at several kpc distance, throughout the disk, bulge and stellar halo. These include LAMOST (Deng et al., 2012), RAVE (Steinmetz et al., 2020, and references therein), SEGUE/SDSS (Blanton et al., 2017), APOGE (Majewski et al., 2017), GALAH (Martell et al., 2017), H3 Survey (Conroy et al., 2019) and finally GAIA (Gaia Collaboration et al., 2016), with the delivery of positions and proper motions for more than 1.4 billion stars in the third data release (Gaia Collaboration et al., 2022). Similarly, albeit from a distance of about 750 kpc, photometric and spectroscopic surveys like PHAT (Dalcanton et al., 2012) and SPLASH (Gilbert et al., 2009) have mapped large portions of the disk of Andromeda and its disk-halo interface.
### The stellar disk of the Galaxy
A remarkable feature uncovered over the past few years about the stellar disk of our Galaxy is the existence of two different stellar populations in the solar neighbourhood: on the one hand, alpha-rich and metal-poor stars seem to associate well with the _geometrical_ or _morphological_ "thick" disk, with scaleheight of \(\sim\) 600-1400 pc and with old and kinematically-hotter stars; on the other hand, metal-rich stars with lower, i.e. solar [\(\alpha\)/Fe] abundances are thought to populate the _geometrical_ or _morphological_ "thin" disk, with scaleheight of \(\sim\) 150-350 pc and characterised by young and kinematically-colder stellar populations (e.g. Gilmore & Reid, 1983; Juric et al., 2008; Adibekyan et al., 2012; Haywood et al., 2013; Bland-Hawthorn & Gerhard, 2016, the latter for a compilation of several measurements).
Far away from the solar neighbourhood, the vertical distribution of the chemical composition, positions and kinematics of disk stars in our Galaxy are more uncertain and, possibly, more complicated. Assuming that stellar chemistry is a good proxy for stellar ages (Twarog, 1980), the concordance picture posits that the thick disk of the Milky Way (and possibly of other spiral galaxies) is mainly composed of old stars, whereas young stars dominate the thin disk. This
is consistent with the so-called "inside-out" and "upside-down" scenario, whereby at early times stars were born in a radially-compact but vertically-thick disk and, later on, a thin and more extended disk developed (e.g. Robin et al., 2014; Bovy et al., 2016, for observational inferences) and (Bird et al., 2013; Stinson et al., 2013; Minchev et al., 2014; Buck et al., 2020; Agertz et al., 2021; Nelson et al., 2021; Bird et al., 2021; Yu et al., 2022, for numerical modeling).
However, observations of our Galaxy have suggested that two simultaneous facts are in place at larger galactocentric distances than 8 kpc: on the one hand, the scaleheight of the Galactic stellar disk increases at larger galactocentric distances, a phenomenon called _flaring_ (see below; first observed in the HI disk); on the other hand, the Milky Way's geometric thick disk, here denoted as stars at large heights over the disk plane (\(\gtrsim 2\) kpc), also contains young stars (Ness et al., 2016; Xiang et al., 2017, 2018; Feuillet et al., 2019).
### Disk flaring in the Galaxy
The flaring of the stellar disk of our Galaxy has been studied quantitatively, namely by inferring from observations the changes in stellar disk height with galactocentric distance. Studies have found that the stellar disk of the Galaxy is flared in the outskirts (Evans et al., 1998; Alard, 2000), but this phenomenon is unlikely to be present in the inner disk (Mateu & Vivas, 2018). Flaring may be more appreciable when the disk stars are dissected into mono-age and/or mono-abundance populations (e.g. Stinson et al., 2013; Minchev et al., 2017), populations that are assumed to be roughly equivalent. There is strong empirical evidence also for the flaring of the Milky Way's (low-alpha) disk (Ness et al., 2019). For example, Bovy et al. (2016) binned APOGEE stars in mono-abundance populations and quantified the changes in stellar disk height between 4 and 15 kpc from the Galactic Center. They found that the high-[\(\alpha\)/Fe] population - mainly associated with old stars - does not show any evidence of flaring, whereas low-[\(\alpha\)/Fe] stars - associated with young populations - present clear evidence of a flaring, with scaleheights exponentially increasing as a function of galactocentric distance. Mackereth et al. (2017) binned APOGEE RGB stars between 3 and 15 kpc from the Galactic Center in mono-age & mono-[Fe/H] populations and reached similar albeit not identical conclusions as above (and ones shared by Minchev et al., 2017): all mono-age populations flare although to different levels - the radial profiles of the scaleheight of high-[\(\alpha\)/Fe] stars are generally flatter, whereas the low-alpha populations flare more strongly, albeit mostly linearly. Finally, Ting & Rix (2019) studied the vertical motion of low-alpha disk stars via their vertical actions, and published an analytical function for the mean vertical action of stars at given age and radius. Their findings imply a manifest flaring of the young stellar population (3 Gyr) with scaleheights of \(120\) and \(500\) pc at 4 and 14 kpc, respectively (see SS4).
These recent results point to a consistent picture for the Galaxy, at least, and only when mono-age or mono-abundance stellar populations are analyzed separately: young stars in the Galaxy exhibit some level of flaring, at least at radii \(\gtrsim 10-11\) kpc (see otherwise Mateu & Vivas, 2018). However, as of today, some confusion and uncertainties remain as to how different levels of flaring may map into the distributions of stellar ages in the height vs. radial distance plane and as to how the flaring of different mono-age stellar populations translates into the flaring of the morphological thin and thick disks. Finally, it remains unclear whether the phenomenology in the Milky Way is representative of most spiral galaxies or not.
### Thin and thick Galactic disks
Whether the morphological or geometrical thin and thick disks of the Galaxy are two distinct components, or just the manifestation of a single variable structure, is also a matter of debate. The former is the classical view, as described for example in Juric et al., 2008, whereas recent analyses (e.g. Bovy & Rix, 2013; Rix & Bovy, 2013) argue for the latter, with the vertical structure of the Galactic disk being a continuum of stellar populations. The claim is that, even if the vertical stellar mass density profile is well described by a double exponential fit or similar, this does not necessarily imply a physically-originated decomposition. It is because of these arguments that it is now customary to characterize the vertical stellar disk structure in terms of mono-age or mono-abundance stellar populations, which should promise clearer physical insight.
### Flaring of Andromeda and other spiral galaxies
The stellar disk of Andromeda also seems to be well described by a double vertical component, with thin and thick disks separating in both kinematics and metallicity (Collins et al., 2011), but with scaleheights approximately two to three times larger than those observed in the Milky Way: about 0.9-1.3 and 2.2-3.4 kpc, respectively. On the other hand, the level of disk flaring in Andromeda remains unclear and, de facto, inaccessible. Disk flaring has, however, been suggested by observations in a number of edge-on spiral galaxies (de Grijs & Peletier, 1997; Narayan & Jog, 2002; Kasparova et al., 2016; Rich et al., 2019; Sarkar & Jog, 2019).
### Disk flaring with theoretical and numerical models
The formation of thick disks and disk flaring are two closely linked phenomena. Several processes have been suggested to be responsible for the thickening of the stellar disk in general or for the disk flaring in particular: radial migration, accretion of stars from satellites, heating of a thinner pre-existing disk through mergers, and in-situ star formation from gas-rich mergers.
The thickening of the stellar disk with galactic radius has been suggested to be a natural consequence of radial orbit migration by Sales et al. (2009); Schonrich & Binney (2009); Loebman et al. (2011); Roskar et al. (2013). For example, Minchev et al. 2012 suggest that radial migration leads to thickening in the outer disk while having the opposite effect over most of the disk, leading to a significant effect on flaring. On the other hand, other studies find that radial migration does not contribute to the thickening of stellar disks (Martig et al., 2014; Vera-Ciro et al., 2014, 2016; Minchev et al., 2014; Grand et al., 2016) and thus that the effect of radial migration on disk flaring cannot be large. Flaring could also be a consequence of heating caused by external triggers, i.e. not because of secular processes but rather events such as the infall of satellites or the interaction with other flying-by galaxies (e.g. Kazantzidis et al., 2009, with N-body only models of disks bombarded by cosmologically-consistent subhaloes).
Cosmological hydrodynamical simulations of well-resolved Milky Way (MW)-like galaxies, which have become increasingly realistic over the last decade (see e.g. Guedes et al., 2011; Wetzel et al., 2016; Grand et al., 2017; Agertz et al., 2021, for Eris, LATTE, the Auriga sample, and VINTERGATAN, respectively), have also made it possible to address the question of disk flaring.
Minchev et al. (2015), analysing two simulated galactic disks formed in a cosmological context - one from Martig et al. 2012 and one from Aumer et al. 2013 -, demonstrated that a non-flaring
thick disk can actually be in place even if several mono-age populations with different levels of flaring are superposed - a statistical phenomenon commonly known as _Simpson's paradox_: a trend can appear in several groups of data but disappear or reverse when the groups are combined. By using a cosmological zoom-in simulation from the FIRE project of a MW-mass galaxy (\(M_{\rm stars}\simeq 6\times 10^{10}{\rm M}_{\odot}\) at \(z=0\)), Ma et al. (2017) found that the scaleheight of mono-age stellar populations shows an outward and somewhat linear flaring, being higher at larger galactocentric distances. However, differently from Minchev et al. (2015), in the FIRE galaxy the scaleheights of both the thin and thick disks are found to be flared, with nearly the same slope of the mono-age populations. Also in all the 30 Auriga MW-analogs (Grand et al., 2017), an exponential flaring is a common feature: the flaring is in place for both young stars (\(<3\) Gyr) and the whole stellar populations, even though by different amounts. Indeed, by fitting the flaring with an exponential trend, Grand et al. (2017) found that, in the majority of the Auriga galaxies, young stars show a higher degree of flaring with respect to the global ones.
With one of the APOSTLE cosmological hydrodynamical simulations, Navarro et al. (2018) demonstrated that the stellar-disk flaring reflects the flaring of the gaseous disk - as stars inherit the properties of the gas at their birth - and argued that the age and metallicity gradients are settled at birth and are not the result of radial migration or disk instabilities.
The flaring of mono-age populations is non-negligible in all five NIHAO-UHD MW-like galaxies (Buck et al., 2020) and in the MW-mass disk galaxy VINTERGATAN (Agertz et al., 2021). However, some of them flare linearly, while others flare with an exponential radial trend of the heights. Moreover, the increase of the scaleheight in the NIHAO-UHD sample is found to be much stronger for the old stellar populations, unlike the case of the Galaxy, and mild-to-no flaring is appreciable when all stellar populations are combined, similar to the cases of Minchev et al. (2015).
Finally, more recently, Garcia de la Cruz et al. (2021) expanded the work of Minchev et al. (2015) by showing the vertical structures of 27 MW-like galaxies with \(M_{\rm stars}\simeq 10^{10}-2\times 10^{11}{\rm M}_{\odot}\) at \(z=0\): they found that in 44 per cent of their galaxies, the morphological thick disk does not flare and this typically occurs in galaxies with \(M_{\rm stars}<5\times 10^{10}{\rm M}_{\odot}\), with a thin disk (\(<1\)kpc) and a rather quiescent merger history. On the other hand, the remaining 15 galaxies show a flared thick disk and they are more massive, have a thicker disk and have undergone a major merger with respect to their non-flaring counterparts.
Despite the many recent results on the topic put forward by the simulation community, the scientific and general interpretation of the findings above, and their applicability to the cases of the Galaxy or Andromeda, are impeded by a number of limitations. Firstly, most of the analyses based on state-of-the-art cosmological models remain qualitative and refer to one or just a few galaxies formed within a specific galaxy-formation model: namely, they are often reduced to the plotting of the stellar scaleheights (and/or vertical stellar velocity dispersion) as a function of radius for stars in e.g. different age bins, and are associated to only one or just a few specific realizations of galaxies that span a limited range (if any) of mass, merger history and stellar disk structure. Secondly, when the study of more than a handful of objects is possible, the quantification of the flaring is not consistently derived across the analyses, making the comparison of the predicted outcome problematic.
### TNG50 and the scope of this paper
In this paper, we use the most recent and highest-resolution simulation of the IllustrisTNG project (Pillepich et al., 2018; Nelson et al., 2018; Marinacci et al., 2018; Naiman et al., 2018; Springel et al., 2018), TNG50 (Pillepich et al., 2019; Nelson et al., 2019), and quantify the stellar disk flaring of 198 MW- and M31-like galaxies, thereby tripling the number of cosmologically-simulated galaxies analyzed to this end. This is possible thanks to the mass and spatial resolution of the simulation, which returns galaxies with disks as thin as \(100-200\) pc (Pillepich et al., 2019; Sotillo-Ramos et al., 2022), and to the encompassed volume, with realistic galaxy properties and galaxy populations across a wide range of masses, types, and environments i.e. not only for the case of disk, star-forming galaxies that form in \(10^{12}{\rm M}_{\odot}\) haloes.
We hereby focus on the vertical distribution of the stellar mass in disks at \(z=0\) and on its connection to the vertical stellar velocity dispersion. We assess the flaring both for the morphological thin and thick disks (i.e. when single-component vertical fits are not appropriate to obtain scaleheights) and especially by separately studying mono-age stellar populations. We again postpone to future work the study of how the latter connect to mono-abundance populations in the context of the IllustrisTNG model and enrichment, but we give particular emphasis to whether and how often (i.e. across the selected galaxy sample) young disk stars flare more or less than old disk stars, and we explore the relationship between the degree of the flaring and \(z=0\) galaxy and disk properties.
In Section 2, we hence summarize the salient aspects of the TNG50 simulation, describe the adopted selection of MW/M31-like galaxies, and define the ways we characterize the simulated stellar disks. In Section 3 we show the range of stellar-disk structures encompassed by the TNG50 MW/M31 analogs, including their scaleheights as a function of radius and the cases of warped and disturbed stellar disks. We quantify the vertical disk structure and flaring predicted by TNG50 for MW/M31-like galaxies in §4. There we also argue for, and propose, a non-parametric and more-generally applicable and comparable method to quantify the amount of the disk flaring and compare the flaring of stars of different ages and to the inferences for our Galaxy. We connect stellar disk heights to the underlying stellar kinematics and potential in §5. In Section 6, we quantitatively compare the TNG50 results to those from previous simulations, by casting them all under the same general and non-parametric flaring quantification; we discuss our results, limitations, and the possible origin of the diversity predicted by TNG50, and connect to observations of the distributions of the stellar ages as a function of galactocentric radius and height. Summary and conclusions are given in Section 7.
## 2 Methods
### The TNG50 simulation
The TNG50 simulation is, among the flagship runs of the IllustrisTNG project (Nelson et al., 2019), the smallest in volume but best in resolution: it evolves a cubic box of \(\sim 50\) comoving Mpc a side, sampled by \(2160^{3}\) dark-matter particles and \(2160^{3}\) initial gas cells (Nelson et al., 2019; Pillepich et al., 2019), with a resulting gas-cell and stellar-particle mass resolution of about \(8.5\times 10^{4}\)\({\rm M}_{\odot}\) and a dark-matter mass resolution of about \(4.5\times 10^{5}\)\({\rm M}_{\odot}\).
TNG50 uses the code Arepo(Springel, 2010) and includes the IllustrisTNG galaxy-formation model introduced and described in the method papers by Weinberger et al. 2018; Pillepich et al. 2018a: in
practice, it solves the coupled equations of gravity and magneto-hydrodynamics in an expanding Universe, in addition to prescribing the cooling and heating of the cosmic gas, star formation, stellar evolution and enrichment, as well as phenomena such as stellar feedback and the seeding, growth, and feedback of supermassive black holes (SMBHs). The initial conditions of TNG50 are set at redshift \(z=127\) and assume a cosmology compatible with the Planck 2015 results (Planck Collaboration et al., 2016).
As in previous large-scale and zoom-in cosmological simulations of MW-mass galaxies, also in TNG50 stellar particles do not represent individual stars but rather simple, mono-age stellar populations of thousands of stars characterized by an initial stellar mass function (Chabrier, 2003, for TNG50). On the other hand, a few modeling elements set apart TNG50 from the great majority of cosmological simulations that have been used so far to study the vertical structure and flaring of galactic disks: chiefly, the inclusion of magnetic fields and the effects of SMBH feedback (both also in Auriga, Grand et al., 2017). Importantly, TNG50 is a relatively large uniform-volume simulation and so, differently than in many of the aforementioned zoom-in simulations, it returns a large number of massive galaxies (\(\simeq 800\) at \(z=0\) above \(10^{10}\,\mathrm{M}_{\odot}\)) and hence, among them, also many MW and M31-mass objects spanning a wide range of merger histories, i.e. _without_ any a-priori choice about the number and time of their past major mergers (Sotillo-Ramos et al., 2022).
TNG50 is suitable for studying disk flaring thanks to its mass and spatial resolution (see Nelson et al., 2019; Pillepich et al., 2019, 2021, for more details). The smallest gas cell in TNG50 at \(z=0\) measures 9 pc across, whereas the average gas cells within the star-forming regions of massive galaxies at \(z=0\) are typically of the order \(50-200\) pc: this means that processes such as star formation and feedback are implemented below such spatial scales. The gravitational potential, on the other hand, is softened on different scales for different types of resolution elements: the softening length of the stellar and DM particles reads 288 pc, the smallest softening length of the gas cells is 72 pc. These are sufficient to capture half-light disk heights of \(200-400\) pc for the typical massive star-forming galaxy at low redshift, but also thinner ones (i.e. thinner than the softening length for stellar particles; see Pillepich et al. (2019) and next Sections). Pillepich et al. (2019) also presents a study of the resolution effects on galaxy sizes and heights in TNG50: stellar disk thickness (as stellar half-mass height) can be considered to be converged in TNG50 to better than 20-40 per cent.
Finally, the choice and functioning of the IllustrisTNG model as implemented in TNG50 have been validated against observations not only of the Galaxy or Andromeda, but of large galaxy populations (see Pillepich et al., 2021, for a summary).
### Galaxy selection: choosing MW and M31 analogs
In the following, we identify (sub)halos within the TNG50 volume by using the Friends-of-Friends (FoF) and SubFind algorithms (Davis et al., 1985; Springel et al., 2001). Also, we define a "virial" halo boundary, \(R_{200c}\), as the radius within which the mean enclosed mass density is 200 times the critical density of the Universe. We refer to the total mass enclosed within this radius as the virial mass, \(M_{200c}\), of the host halo. Additionally, all galaxies residing within one virial radius of the host center are dubbed as "satellite" or "galaxy", whereas the galaxy settled at the deepest potential within a FoF is named "central", and it is typically, but not always, the most massive one. The galaxy stellar mass adopted in this work (\(M_{\mathrm{stars}}\)) is the sum of all stellar particles within a fixed aperture of 30 physical kpc, unless otherwise stated.
With these definitions in mind, we select MW/M31-like galaxies from TNG50 at \(z=0\) by means of the following three criteria, all in turn based on _observable_ rather than _halo-based_ properties. Extended motivations and characterizations for this selection are given in Pillepich et al. in prep. and the resulting sample has already been used in Engler et al. (2021, 2022); Pillepich et al. (2021); Sotillo-Ramos et al. (2022); Chen et al. (2023); Ramesh et al. (2023).
Namely, at \(z=0\) we select galaxies:
* with \(M_{\mathrm{stars}}\) (\(<\)30 kpc) \(=10^{10.5-11.2}\mathrm{M}_{\odot}\);
* with a disk stellar shape (see Section 2.2.1 for more details);
* in isolation: no other galaxies with \(M_{\mathrm{stars}}>10^{10.5}\mathrm{M}_{\odot}\) within 500 kpc and host halo mass \(M_{200c}<10^{13}\mathrm{M}_{\odot}\).
This leads to a sample of 198 TNG50 MW/M31-like galaxies, and since it is not required for a galaxy to be the central of its halo, our sample also includes a few pairs of Local Group-like systems. Additionally, we note that, differently from the majority of zoom-in simulations of MW-mass haloes (see for example Guedes et al., 2011; Roca-Fabrega et al., 2016; Agertz et al., 2021; Renaud et al., 2021), our sample is not a-priori _biased_ in its history, namely, we have not imposed a recent quiescent merger history for the MW/M31-like galaxies to be part of our sample (see Sotillo-Ramos et al. (2022) for an in-depth analysis of the merger history of all TNG50 MW/M31 analogs).
Throughout the text and in some selected Figures, we at times label _MW-mass_ and _M31-mass_ those galaxies within the MW/M31-like sample with stellar mass in the ranges \(10^{10.5-10.9}\mathrm{M}_{\odot}\) and \(10^{10.9-11.2}\mathrm{M}_{\odot}\), respectively. MW-mass (M31-mass) galaxies are sampled, on average, by \(\simeq 5.5\times 10^{5}\) (\(1.3\times 10^{6}\)) gravitationally-bound star particles.
#### 2.2.1 Stellar morphology selection
As stated above, each galaxy in the TNG50 MW/M31 sample has been selected to be _disky_, i.e. it either satisfies a stellar-morphology constraint based on the minor-to-major axis ratio (\(c/a\)) of the stellar mass distribution or is disky by visual inspection. The first criterion is satisfied if \(c/a<0.45\) (see Chua et al., 2019; Pillepich et al., 2019, for more details), where \(c\) and \(a\) are the minor and major axes of the ellipsoidal distribution of stellar mass between 1 to 2 times the stellar half-mass radius (\(R_{\mathrm{stars},1/2}\)). Additionally, the TNG50 MW/M31-like sample also includes galaxies that, even with \(c/a>0.45\), clearly appear disky and with well-defined spiral arms by visual inspection, based on 3-band images in face-on and edge-on projections. Of 198 TNG50 MW/M31-like galaxies, 25 have been included via the visual-inspection step. See Pillepich et al. in prep for more details.
### Measurement of stellar disk properties
#### 2.3.1 Definition of disk stars
Throughout this paper, we quantify the structures of the simulated galactic stellar disks based on mass, i.e. based on the spatial location of stellar particles and the stellar mass densities they sample.
We call disk stars all the gravitationally-bound (according to SubFind) stellar particles that are on nearly circular orbits, i.e. with circularity \(\epsilon=L_{z}/L_{z,\mathrm{circ}}>0.7\). Here \(L_{z}\) is the z-component of the angular momentum of a given star particle and \(L_{z,\mathrm{circ}}\) is the angular momentum of a star located at the same radius but following a perfectly circular orbit. The z direction for each galaxy (its "up
vector") is chosen to be the direction of the total angular momentum of all stars within \(2\times R_{\rm stars,1/2}\). The galactic plane is hence the plane perpendicular to this up vector. The center of a galaxy is chosen as the location of its most gravitationally-bound element, typically the location of its SMBH.
#### 2.3.2 Stellar disk lengths
We measure the disk length, for any given galaxy, by selecting all stellar particles on circular orbits (i.e. disk stars, \(\epsilon>\)0.7) between one and four times the half-mass radius (i.e. excluding the bulge region). We fit an exponential profile to the radial stellar surface density distribution in face-on projection, in bins of 2 kpc:
\[\Sigma(R)=\Sigma_{\rm d}\exp\left(-\frac{R}{R_{\rm d}}\right), \tag{1}\]
where \(\Sigma_{d}\) is the stellar mass surface density of the disk at \(R=0\) and \(R_{\rm d}\) is the disk scalelength, a characteristic scale that is commonly used as a proxy for the extension or the size of the stellar disk.
We perform 100 fits for each galaxy, by starting with random initial values around the values taken at the limits of the cylindrical shell. The fitting routine (python curve_fit) uses a non-linear least squares method to fit our defined function to the data. The best measure of the scalelength of a given galaxy is then the mode of the distribution; an error can be obtained as the interquartile range.
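A minimal sketch of this fitting procedure, assuming the binned face-on surface-density profile has already been computed, is given below; the initial-guess ranges and the histogram-based mode estimate are illustrative choices and not the exact values used in our pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_profile(R, Sigma_d, R_d):
    """Exponential radial profile of Eq. (1)."""
    return Sigma_d * np.exp(-R / R_d)

def fit_scalelength(R, Sigma, n_fits=100, seed=0):
    """Fit Eq. (1) to the binned surface-density profile of disk stars,
    repeating the fit with randomized initial guesses and returning the
    mode of the resulting R_d values and their interquartile range."""
    rng = np.random.default_rng(seed)
    R_d_values = []
    for _ in range(n_fits):
        p0 = [Sigma[0] * rng.uniform(0.5, 2.0), rng.uniform(1.0, 10.0)]
        try:
            popt, _ = curve_fit(exp_profile, R, Sigma, p0=p0, maxfev=5000)
            R_d_values.append(popt[1])
        except RuntimeError:
            continue  # skip fits that do not converge
    R_d_values = np.asarray(R_d_values)
    counts, edges = np.histogram(R_d_values, bins=20)
    i = np.argmax(counts)
    R_d_mode = 0.5 * (edges[i] + edges[i + 1])               # mode of the distribution
    iqr = np.subtract(*np.percentile(R_d_values, [75, 25]))  # error estimate
    return R_d_mode, iqr
```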
#### 2.3.3 Stellar disk heights
Given the up-vector for each galaxy, we analyze its vertical stellar disk structure by rotating its stars into an edge-on projection and by extracting and fitting the vertical stellar mass density distribution of the disk stars. We determine the latter at various radii, centered at multiples of the scalelength of the galaxy, \(R_{\rm d}\), from 1 to 5 in steps of 0.5, dividing the galactic disk into radial annuli (cylindrical shells) of fixed width. The profiles are also extracted in annuli centered at fixed physical radii, in cylindrical shells of 2 kpc.
We use either a single or a double parametric formula to fit the vertical mass profiles at fixed galactocentric distance. In the literature, there are a variety of formulas to describe these profiles: the most common are exponential, hyperbolic secant and squared hyperbolic secant. All three can be seen as special cases of the general formula (see, e.g., van der Kruit, 1988):
\[f(z)\propto\mathrm{sech}^{2/n}\!\left(\frac{nz}{2h_{z}}\right), \tag{2}\]
for the cases \(n\rightarrow\infty\), \(n=2\) and \(n=1\), respectively. All three tend to the exponential profile as \(z\) increases, with the main difference among the three being the shape of the profile at low values of \(z\).
The single squared hyperbolic secant profile reads
\[\rho(z)=\rho_{0}\,\mathrm{sech}^{2}\!\left(\frac{z}{2h_{z}}\right)\,, \tag{3}\]
where \(\rho_{0}\) is the normalization and \(h_{z}\) is the disk scaleheight. The factor 2 in the denominator ensures that the scaleheights of the exponential, hyperbolic secant, and squared hyperbolic secant cases are comparable in magnitude. As shown in the upcoming sections, this provides a good description of the vertical mass distribution of mono-age stellar populations, for all selected galaxies. The fit of Eq. 3 is therefore the one we adopt to quantify the stellar disk scaleheight of mono-age stellar populations.
However, when all disk stars in a galaxy are considered, a two-component vertical formula returns a better fit for the majority of TNG50 MW/M31-like galaxies:
\[\rho(z)=\rho_{\rm thin}\,\mathrm{sech}^{2}\!\left(\frac{z}{2h_{\rm thin}}\right)+\rho_{\rm thick}\,\mathrm{sech}^{2}\!\left(\frac{z}{2h_{\rm thick}}\right)\,. \tag{4}\]
This gives us the scaleheights of both a "thin" (\(h_{\rm thin}\)) and a "thick" (\(h_{\rm thick}\)) disk component (Gilmore & Reid, 1983; Yoachim & Dalcanton, 2006; Comeron et al., 2011b, 2012; Ma et al., 2017; Buck et al., 2020; Agertz et al., 2021; Navarro et al., 2018). We stress here that the division into a thin and thick disk of our TNG50 MW/M31-like galaxies is purely geometrical, i.e. morphological, and does not necessarily imply a meaningful physical decomposition into two separate structures (see Bovy & Rix, 2013).
We choose to proceed with the single and double sech\({}^{2}\) formula because it has been extensively used: e.g. by Yoachim & Dalcanton (2006); Bizyaev et al. (2014) with observed edge-on spiral galaxies and Villalobos & Helmi (2008); Stinson et al. (2013); Ma et al. (2017); Park et al. (2021) with simulated galaxies. The main justification is the physical motivation: it represents the vertical density variation of a self-gravitating isothermal population (Spitzer, 1942; van der Kruit & Searle, 1981). However, de Grijs & Peletier (1997) and Hammersley et al. (1999) claim that it never reproduces well the densities of the Galactic midplane. In fact, other works have preferred the exponential formula: it has been used to fit the vertical stellar density profile of the Galaxy (Pritchet, 1983; Siegel et al., 2002; Juric et al., 2008; Bovy et al., 2016; Mackereth et al., 2017) and of other galaxies (Comeron et al., 2011a); it has also been used for simulations in the works by Roca-Fabrega et al. (2016); Buck et al. (2020); Agertz et al. (2021). van der Kruit (1988) proposed that an intermediate solution, i.e. a sech profile, would better reproduce the stellar vertical densities in the MW. It is adopted in the works by de Grijs & Peletier (1997); Matthews (2000).
Similarly to what is done for the scalelengths and in Sotillo-Ramos et al. 2022, we perform 100 fits for each galaxy, galactocentric distance, age bin, etc. The initial guess values are in the ranges 20 to 1000 pc and 800 to 7000 pc for the thin and thick disks when using the double function, and 100 to 7000 pc for the single function. We choose the mode of the distribution as the best measure of the scaleheight and quote errors as one standard deviation of the estimated parameters, provided by the fitting function.
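The two model profiles of Eqs. (3) and (4) can be written compactly as below; this is a sketch with illustrative parameter names, and the same repeated-fit procedure sketched above for the scalelengths is then applied to the vertical profiles.

```python
import numpy as np

def sech2_single(z, rho0, h_z):
    """Single squared-hyperbolic-secant profile of Eq. (3)."""
    return rho0 / np.cosh(z / (2.0 * h_z)) ** 2

def sech2_double(z, rho_thin, h_thin, rho_thick, h_thick):
    """Two-component profile of Eq. (4): geometric thin + thick disks."""
    return (rho_thin / np.cosh(z / (2.0 * h_thin)) ** 2 +
            rho_thick / np.cosh(z / (2.0 * h_thick)) ** 2)

# As for the scalelengths, each vertical profile is fit 100 times (scipy curve_fit)
# with initial guesses drawn from the ranges quoted above, and the mode of the
# resulting scaleheights is adopted as the best estimate.
```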
## 3 The structural and age properties of the stellar disks in TNG50 MW/M31-like galaxies
Before quantifying the stellar disk flaring according to TNG50, we first comment on the structural properties of the stellar disks of the 198 TNG50 MW/M31-like galaxies - additional global and structural properties can be found in Pillepich et al. in prep. and references therein. An extensive analysis of the merger history of each MW/M31-like galaxy, and on how stellar disks can survive major mergers, is instead given in Sotillo-Ramos et al. 2022.
### Diversity of stellar disk lengths and heights
An in-depth analysis of the global structural properties of the stellar disks of the 198 TNG50 MW/M31-like galaxies can be found in Pillepich et al. in prep. We refer the reader to that work for details, whereas here we report the most relevant facts.
TNG50 predicts a wide range of stellar disk sizes, also at fixed stellar mass. Within the TNG50 MW/M31 sample, the stellar disk scalelengths vary between \(\sim 1.5\) and \(\sim\)17 kpc, denoting a remarkable variety of disk extents in such a narrow range of stellar mass (Pillepich et al. in prep., their Fig.11, top). These sizes are consistent with previous zoom-in simulations of \(\sim 10^{12}\,{\rm M}_{\odot}\) haloes, e.g. Auriga (Grand et al., 2017), Eris (Guedes et al., 2011), NIHAO-UHD (Buck et al., 2020) and VINTERGATAN (Agertz et al., 2021).
Also, TNG50 disk sizes are compatible with those measured for local disky and spiral galaxies (based on stellar light rather than stellar mass; Gadotti 2009; Lelli et al. 2016) and for the Galaxy (Hammer et al. 2007; Juric et al. 2008; Bovy & Rix 2013) and Andromeda (Worthey et al. 2005; Barmby et al. 2006; Hammer et al. 2007). We note that a number of TNG50 MW/M31-like galaxies fall within the observed values for the scalelength and stellar mass of the Galaxy and Andromeda, whereas the rest have more or less extended stellar disks for their mass: compared to the total TNG50 sample of MW/M31 analogs, the Milky Way has a rather compact stellar disk given its mass, as it settles at the lower end of the TNG50 distribution, while for Andromeda the value is rather average.
The scaleheights of TNG50 galaxies, evaluated at galactocentric distances of a few times the disk length, can be as small as \(\simeq\) 200 pc (lowest 10th percentiles). Yet, stellar disks of TNG50 MW/M31-like galaxies (as selected in Section 2.2) can be as thick as a few kpc (Pillepich et al. in prep., their Fig.11, bottom panels). As is the case for the disk extent, TNG50 MW/M31-like galaxies typically have thicker thin disks than the Galaxy but not necessarily than Andromeda. TNG50 disk heights are consistent with those of zoom-in simulations (e.g. Guedes et al. 2011; Grand et al. 2017; Ma et al. 2017; Buck et al. 2020). There are even a number of TNG50 MW/M31-like galaxies that exhibit thin and thick disks with similar heights as the observational estimates of both the Galaxy and Andromeda.
We in particular highlight and take note of six galaxies (Subhalo IDs 516101, 535774, 538905, 550149, 552581, 536365), which we refer to as _MW-analogs_ and whose stellar disk properties are within the observational estimates for the Galaxy. These MW-analogs are chosen among the TNG50 MW/M31-like galaxies that have thin and thick disk heights consistent with those of the Galaxy (approximately in the range \(175-360\) pc and \(625-1450\) pc, respectively), measured at either 7-9 kpc or \(2.7-4.7\times R_{\rm d}\); and with disk scalelength and stellar mass in the ranges \(1.7-2.9\) kpc and \(10^{10.5-10.9}\,{\rm M}_{\odot}\), encompassing available literature constraints. There is also one galaxy (Subhalo ID 432106) that could be considered an _M31-analog_, based on its stellar mass, disk scalelengths and thickness (Pillepich et al. in prep.).
### Vertical stellar mass profiles
In Fig. 1 we show the vertical surface mass density profiles at different radii of disk stars for nine TNG50 MW/M31 analogs. These are selected to be representative of the whole sample, namely below, in between and above the 25\({}^{th}\) and 75\({}^{th}\) percentiles, in either galaxy stellar mass (from left to right) or in stellar disk scalelength
Figure 1: **Vertical surface density profiles of example TNG50 MW/M31-like galaxies.** In each panel, we show the vertical stellar surface mass density profiles at different radii for one of nine TNG50 MW/M31 analogs. Solid curves represent the two-component fit, as described in the text (see Eq. 4). These galaxies are selected to be representative of the whole sample, below the 25\({}^{th}\) percentile, in between 25\({}^{th}\) and 75\({}^{th}\) percentile and above the 75\({}^{th}\) percentile, of both galaxy stellar mass (left to right columns) and scalelength (top to bottom rows).
(top to bottom). The scalelength and stellar mass are labeled in each panel. Dots represent the measured values of the stellar density, solid curves represent the resulting function (as per Eq. 4), whereas different colors denote different radii.
Some of the TNG50 MW/M31-like galaxies exhibit warped or otherwise disturbed stellar disks, which can complicate the quantification of the flaring (see next Sections). These can appear in the form of the common _S-shaped_ warps, as well as asymmetries, and have been generally ascribed to e.g. the tidal distortions imparted by an external (merging) satellite in fly-by (Ostriker & Binney, 1989; Kazantzidis et al., 2009; Gomez et al., 2013; D'Onghia et al., 2016; Gomez et al., 2017; Semczuk et al., 2020) or to a misaligned accretion of high angular momentum cold gas (see for example Roskar et al., 2010; Aumer et al., 2013).
In this paper we do not attempt an accurate and in-depth analysis of warped stellar disks, which we leave to future works, but we at least signal those TNG50 MW/M31-like galaxies that may exhibit some warps or disturbed stellar disks based on a visual inspection of their edge-on stellar maps. Among the TNG50 MW/M31 sample, we identify 21 galaxies that have a well-defined S-shaped warp and 17 galaxies with a generally disturbed or distorted stellar disk, which we refer to as _disturbed_ disks. When needed, we will properly identify such galaxies in plots and discuss them in the text (e.g. Section 4.3). A catalog with corresponding flags is released with this paper.
## 4 Disk Flaring with TNG50
Equipped with the vertical stellar mass distributions of stars and of mono-age stellar distributions throughout the simulated stellar disks, we can quantify how, and by how much, if at all, the disk scaleheights of all, young and old stellar populations increase with galactocentric distance. Namely, in the following, we provide results from TNG50 about the vertical structure of stellar disks in MW/M31 analogs across galactocentric distances. The questions we would like to answer are the following: how often does flaring occur in MW/M31-like galaxies according to TNG50? And across all MW/M31-mass disk galaxies, how often do young and old stellar populations display the same or different amount of flaring?
### Diversity in disk flaring across TNG50 MW/M31 analogs
Fig. 4 shows the disk scaleheights as a function of galactocentric distance of mono-age stellar populations in six MW/M31-like galaxies from the TNG50 simulation. Curves of different colors represent different bins of stellar ages, as labeled in the legend (blue to red from young to old), with scaleheights obtained via a single-component fit (Eq. 3). The dashed and dotted curves account for all disk stars in the simulated galaxies, with scaleheights obtained via a double-component fit (Eq. 4): black for the geometrical thick disk and grey for the geometrical thin disk. These galaxies are chosen to highlight the variety of disk flaring that we find across the TNG50 MW/M31-like sample:
1. _Top left panel_: both mono-age and total stellar populations exhibit a substantial flaring, with the older stars flaring, upon visual impression, a bit more than younger ones. In this galaxy, the flaring of most mono-age populations is almost exponential (in analogy with Minchev et al., 2015; Grand et al., 2017 and with some galaxies of Buck et al., 2020).
2. _Top right panel_: both mono-age and total stellar populations exhibit a flaring, which appears in this case linear rather than exponential (in analogy with Ma et al., 2017; Agertz et al., 2021; Garcia de la Cruz et al., 2021 and some galaxies of Buck et al., 2020).
3. _Middle left panel_: flaring is more evident when considering mono-age populations than when considering all disk stars at once (colored curves vs. black and grey ones; consistently with the results of Minchev et al., 2015).
4. _Middle right panel_: in this galaxy, young and old stellar populations follow two somewhat different trends, that can be identified, respectively, with the geometrical thin and thick disks.
Figure 3: **Age distributions of disk stars in selected TNG50 MW/M31-like galaxies**. Each line represents one galaxy. In the top, we show example galaxies with a _young_ disk, i.e. with a mean stellar age younger than a Gyr. In the middle and bottom panels, we show example galaxies with an _old_ disk, i.e. with a mean stellar age older than 9 Gyr. Additionally, we split this sample into two bins of stellar mass, below (middle) and above (bottom) \(10^{10.9}\mathrm{M}_{\odot}\). Only low-mass galaxies show a relatively young stellar disk (7 galaxies, top panel) while old stellar disks are present in both low and high-stellar mass MW/M31-like galaxies (9 and 8 galaxies in the middle and bottom panel, respectively). All the other MW/M31-like galaxies (174 of 198, not shown) have an intermediate-age stellar disk.
5. _Bottom left panel_: in this galaxy, all mono-age stellar populations and the thick disk are flared, but not the thin disk (or at least not out to 4.5\(\times R_{d}\)). The latter does not follow any of the mono-age populations: clearly a double functional profile was necessary to unravel it.
6. _Bottom right panel_: a linear flaring of both mono-age and total populations is manifest in the inner part of the galaxy. The flaring disappears in the outer parts, as the scaleheights remain either constant or become non-monotonic.
The significance of the results uncovered here is remarkable: with a single set of physical-model ingredients, TNG50 returns all the
Figure 4: **Visualization of disk flaring in a few example TNG50 MW/M31-like galaxies**. We show disk scaleheight as a function of galactocentric distance normalized by the scalelength for six TNG50 MW/M31 analogs, chosen to highlight the diverse ways in which the flaring is manifested across the whole galaxy sample (see the text for details). Curves of different colors represent different stellar ages, as labeled in the legend. The black (grey) curves with circles denote the geometrical thick (thin) disk, i.e. all disk stars without splitting in stellar ages.
manifestations of flaring that have been found so far in individual simulations and simulated galaxies. This demonstrates that the diversity of disk flaring outcomes can arise naturally from the diversity of the galaxy population.
It is also manifest from Fig. 4 that the change of scaleheights with galactocentric distance is not always smooth or monotonic. This can be explained by the fact that realistic disk galaxies, even if simulated, can have complex structures, including over- and underdense regions across their disks. Namely, TNG50 galaxies are certainly not akin to idealized smooth exponential disks: this is reflected in their more complex radial and vertical stellar density distributions, which in turn may simply not be well described by parametric functions (see Sections 2.3.2 and 2.3.3). We notice that these effects are more accentuated towards the disks' outskirts, whereby, together with sparser populations of stars, the error bars on the scaleheight estimates typically get larger.
### A new non-parametric and generally-accessible quantification of disk flaring
As TNG50 returns a wide variety of disk flaring, a unique prescription to quantify it would be inadequate. We have seen, for example, that the radial trends of scaleheights can be both exponential and linear. Past works have fitted the flaring with an exponential formula only (Lopez-Corredoira et al., 2002; Grand et al., 2017) or with a linear function only (Evans et al., 1998; Alard, 2000; Garcia de la Cruz et al., 2021; Lian et al., 2022). The TNG50 phenomenology suggests that a non-parametric quantification of the flaring is of the essence.
We propose a quantification of the flaring that is independent of the shape of the flaring (linear, exponential or otherwise) and that can be applied in the most general manner to any data, so long as the latter allows the estimate of the disk scaleheight at two different galactocentric distances. Namely, we advocate for a quantification of the flaring based simply on the _relative enhancement_ of disk heights at two locations (inner and outer disk), i.e. the difference between the scaleheights at two fixed galactocentric distances divided by the height in the innermost location. Now, as discussed in §3.1, stellar disks can span a wide diversity of extents even in a relatively narrow range of galaxy mass: \(\sim 1.5-17\) kpc. This hence requires quantifying the amount of flaring after normalizing the galactocentric distance by the scalelength of each MW/M31-like galaxy. We then propose and evaluate the amount of flaring between (1-5)\(\times R_{\rm d}\) (see also Minchev et al., 2014, for a similar approach):
\[\tau_{\rm flare}=\frac{h_{z,5\times R_{\rm d}}-h_{z,1\times R_{\rm d}}}{h_{z,1 \times R_{\rm d}}}\, \tag{5}\]
where \(h_{z,r}\) denotes the vertical scaleheight in a narrow radial annulus at distance \(r\), in our case according to §2. A \(\tau_{\rm flare}\) equal to 1 (3) means that the disk height is 2 (4) times larger at \(5\times R_{\rm d}\) than at the inner radius.
### What flares more? Young or old stars?
We proceed with our analysis by quantifying the degree of flaring of the stellar disk populations as per Eq. 5, separately for old and young stars, i.e. with stellar ages of \(8-10\) and \(2-4\) Gyr, respectively. As noted before, the vertical distributions of stars in relatively narrow age intervals are well described by a single-component formula (Eq. 3), which hence gives the corresponding scaleheight, in agreement with previous simulations (Martig et al., 2014; Minchev et al., 2015; Ma et al., 2017) and observations (e.g. Bovy et al., 2016, assuming the chemistry as a good proxy for ages).
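As a concrete illustration of this procedure, the following sketch computes \(\tau_{\rm flare}\) of Eq. 5 for a single mono-age population; it assumes a user-supplied `measure_scaleheight` routine returning the best-fit \(h_{z}\) of Eq. 3 in a given annulus, and the annulus half-width and interface are illustrative choices rather than our exact pipeline.

```python
import numpy as np

def tau_flare(ages, R, z, mass, R_d, age_range, measure_scaleheight):
    """Non-parametric flaring (Eq. 5) for a mono-age population: relative
    scaleheight enhancement between 1 and 5 disk scalelengths."""
    sel = (ages >= age_range[0]) & (ages < age_range[1])
    h = {}
    for mult in (1.0, 5.0):
        # narrow annulus around mult * R_d (half-width of 0.25 R_d, illustrative)
        ann = sel & (np.abs(R - mult * R_d) < 0.25 * R_d)
        h[mult] = measure_scaleheight(z[ann], mass[ann])   # fit of Eq. (3)
    return (h[5.0] - h[1.0]) / h[1.0]

# tau_young = tau_flare(ages, R, z, m, R_d, (2, 4), measure_scaleheight)
# tau_old   = tau_flare(ages, R, z, m, R_d, (8, 10), measure_scaleheight)
```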
The main results of this paper are shown in Fig. 5. In the top panel, we show the comparison between the flaring of young and old stellar populations across 159 TNG50 MW/M31-like galaxies\({}^{1}\).
Footnote 1: In fact, in 39 of the 198 TNG50 MW/M31 analogs, one or both of the stellar age bins are not sufficiently populated to ensure a good vertical profiling and fitting of the stellar vertical mass distribution.
The black dashed line separates galaxies in which old stars flare more than the young ones (above the bisector) or vice versa, young stars flare more than the old ones (below the bisector): according to TNG50, a slight majority of MW/M31-like galaxies (89 galaxies, i.e. 56 per cent) have stellar disks whereby the young stars flare more than the old ones. However, a non-negligible fraction of TNG50 galaxies display a similar amount of flaring between young and old stars, settling on top of (or very close to) the dashed line. For the average or typical TNG50 MW/M31-like galaxy, young and old stellar populations exhibit scaleheights in the outer disk that are \(\sim 1.5-2\) times larger than those in the inner part of the disk (median values of 1.44 and 1.66 for young and old stars, respectively; see inset of the main panel of Fig. 5).
A few galaxies populate the bottom right and top left corners, where the flaring of the young stars is considerably more pronounced than of the old stars, or vice versa. Still, across the studied TNG50 MW/M31 sample, slightly more frequent are galaxies where the young, rather than the old, stars reach high levels of flaring, e.g. \(\tau_{\rm flare}\gtrsim 4\), corresponding to scaleheights at large distances that are \(\gtrsim 5\) times larger than in the inner disk regions. Young stars show a somewhat broader diversity of flaring (with a weak peak at \(\tau_{\rm flare}=1-4\)), whereas the old stellar populations exhibit a narrower distribution concentrated around \(\tau_{\rm flare}=0-3\).
To visualize how the flaring quantification corresponds to diverse vertical structures, we show the change of scaleheight as a function of galactocentric distance for three galaxies, each representing different regions of the \(\tau_{\rm flare}^{\rm young}-\tau_{\rm flare}^{\rm old}\) plane. A series of stellar light images of TNG50 MW/M31-like galaxies with substantial flaring are shown in Fig. 6.
Importantly, a few galaxies from TNG50 seem to reproduce the flaring phenomenology inferred observationally for our Galaxy. The vertical magenta areas in the panels of Fig. 5 represent the flaring (evaluated using Eq. 5) of the "young" stellar population of our Milky Way, with the scaleheights inferred from the vertical-action values estimated in Ting & Rix (2019). In the isothermal regime, a population of stars with a mean vertical action \(\langle J_{z}\rangle(R,t)\) has vertical distribution \(p(z)\propto\mathrm{sech}^{2}\!\left(\frac{z}{2h_{z}}\right)\) with a scaleheight \(h_{z}(R,t)=\sqrt{\frac{\langle J_{z}\rangle(R,t)}{2\nu_{z}(R)}}\), where \(\nu_{z}(R)\) is the local vertical frequency, which we determine with Galpy (Bovy, 2015) in the same way as Ting & Rix (2019). Therefore, assuming a vertical stellar density distribution proportional to sech\({}^{2}\) and a Milky Way gravitational potential from Bovy (2015), we can infer a level of flaring for our Galaxy. About seven galaxies in the TNG50 sample show a flaring similar to the one inferred observationally for the Galaxy.
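A minimal sketch of this conversion is given below, assuming galpy's MWPotential2014 as the Milky Way potential and its default natural-unit scalings (\(r_{0}=8\) kpc, \(v_{0}=220\) km/s); the function name is illustrative, and the exact potential and unit handling used for the magenta bands may differ.

```python
import numpy as np
from galpy.potential import MWPotential2014, verticalfreq

RO, VO = 8.0, 220.0   # assumed galpy unit scalings [kpc, km/s]

def scaleheight_from_action(Jz_mean, R_kpc):
    """h_z = sqrt(<J_z> / (2 nu_z)) for an isothermal population with mean
    vertical action <J_z> [kpc km/s] at galactocentric radius R [kpc]."""
    nu_z = verticalfreq(MWPotential2014, R_kpc / RO) * VO / RO   # [km/s/kpc]
    return np.sqrt(Jz_mean / (2.0 * nu_z))                       # [kpc]
```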
In particular, a good fraction of TNG50 galaxies with flaring similar to the Galaxy also have similar galaxy stellar mass. In the bottom left panel of Fig. 5, we show the same plot as in the top but with different symbols denoting different subsamples of the TNG50 MW/M31-like galaxies: squares indicate M31-mass objects (53 in total, \(\geq 10^{10.9}{\rm M}_{\odot}\)), stars indicate the 106 MW-mass galaxies. The magenta star (orange square) symbols represent the MW analogs (M31 analogs) identified in §3, i.e. the six (one) galaxies that have
detailed stellar disk structural properties consistent with the Galaxy (Andromeda). Within TNG50, there is no simulated galaxy with the same stellar disk structure, including extent, thickness, and flaring, as the Galaxy. However, our Galaxy represents one among many realizations of disky galaxies and, in terms of flaring of the young stars per se, according to TNG50, it appears rather common.
Finally, the bottom right panel of Fig. 5 is meant to convince us that the general picture depicted so far is not systematically affected or biased by cases of warps or disturbed disks (Section 3.4). These are highlighted in blue or orange, based on the visual inspection presented above. Although our measurements are all azimuthally-averaged, distorted disks could potentially imply an under- or over-estimation of the amount of flaring. There is no manifest bias toward large or small degrees of disk flaring when we focus on warped and disturbed stellar disks, namely they populate all regions of the depicted space as the rest of the population. However, the few
Figure 5: **Flaring of young vs. old stellar populations in TNG50 MW/M31-like galaxies.** We compare the flaring of young (2-4 Gyr) and old (8-10 Gyr) stellar populations. The black solid line separates galaxies in which old stars flare more than the young ones (above the bisector) or vice versa (below the bisector). The vertical magenta areas indicate the flaring of the young stellar population of our Milky Way, inferred from Ting & Rix (2019) – see the text for more details. We have selected three galaxies from three different regions of the plane and show in the small panels their scaleheights vs. radius. In the bottom left panel, TNG50 MW/M31-like galaxies are separated according to their stellar mass, above (squares, M31-mass like) and below (stars, MW-mass like) \(\log(M_{\rm stars}/{\rm M}_{\odot})=10.9\). Moreover, galaxies dubbed as MW-analogs (i.e. having stellar mass, scalelength, thick and thin scaleheight similar to the Milky Way) are depicted with pink stars. In the bottom right panel, we highlight the cases in which galaxies show a warped (in navy) or disturbed (in orange) stellar disk, showing that these visually-identified features do not seem to systematically bias our quantification of flaring.
identified warped disks tend to have a young stellar population that flares more than the old one.
### The cases of TNG50 galaxies with stellar disk properties compatible with the Galaxy's
As already mentioned in Section 3.1, in the TNG50 MW/M31 sample there are six galaxies with stellar mass and disk sizes similar to the Milky Way and one galaxy with disk properties similar to Andromeda. For the MW-analogs, and to connect more directly with the observational opportunities in our Galaxy, we show in Fig. 7 the change of the scaleheight as a function of galactocentric distance.
As already pointed out throughout this work, even for galaxies sharing the main disk properties, the flaring can be qualitatively very diverse. Indeed, for two MW-analogs, Subhalo IDs 538905 and 566365, the flaring is quite linear, whether we consider mono-age stellar populations (colored curves) or all disk stars (black and grey curves); on the other hand, the galaxy with Subhalo ID 552581 shows an exponential flaring. Additionally, the MW-analogs 516101 and 535774 show more irregular trends where mono-age populations and the thin and thick disks flare quite differently. Once we evaluate the degree of flaring in the way proposed in this work (i.e. by using Eq. 5), the relative enhancement between 1 and 5\(R_{\rm d}\) turns out to be diverse (see bottom left panel of Fig. 5, magenta stars with black contours): for two of them, young stars flare more than old ones, with different levels of flaring. Additionally, we have an analog where young and old stars flare equally, and an analog where old stars flare much more than young stars.
## 5 Disk flaring and kinematics in TNG50
Our fiducial quantification and definition of disk flaring is based on a geometrical estimation of the stellar disk scaleheights. However, the structural properties of galaxies are expected to be the global manifestation of the underlying orbital configuration and interaction of all present matter components. In Pillepich et al. 2019, we showed that the vertical structure of both the stellar and gaseous components of TNG50 star-forming galaxies across epochs indeed is the resolved outcome of an ensemble of physical ingredients - chiefly, the shape and depth of the overall gravitational potential, which in turn is the result of the interaction and orbital mixture of both collisional and collisionless material. Here we expand upon that analysis by focusing on MW/M31-like galaxies, and link the flaring with the
Figure 6: **Stellar-light composite images of selected TNG50 MW/M31-like galaxies.** We show the face-on and edge-on projections of 18 MW/M31 analogs from TNG50 at z = 0 that exhibit substantial flaring (i.e. \(\tau>3\)).
kinematics of disk stars: we hence examine their vertical velocity dispersion.
From a theoretical point of view, the scaleheight of an isothermal sheet is related, at any given position in the disk, to the local vertical velocity dispersion (\(\sigma_{z}^{2}\)) and the local stellar surface density (\(\Sigma_{\star}\)), according to the relation (Spitzer, 1942): \(h_{z}\propto\sigma_{z}^{2}/\Sigma_{\star}\).
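For reference, the classical self-gravitating isothermal-sheet solution can be written, in the convention of Eq. (3), as (a standard result; the numerical prefactor depends on the adopted definition of the scaleheight, and \(\Sigma\) is strictly the total surface density of the self-gravitating sheet, which the stellar surface density only approximates):

\[\rho(z)=\rho_{0}\,\mathrm{sech}^{2}\!\left(\frac{z}{2h_{z}}\right),\qquad h_{z}=\frac{\sigma_{z}^{2}}{2\pi G\,\Sigma}\,,\]

which makes the proportionality \(h_{z}\propto\sigma_{z}^{2}/\Sigma\) explicit.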
We show in Fig. 8 this connection for the disk stars of the TNG50 MW/M31 analogs, separating between stars in the inner vs. outer disk (different shades of colors) and young vs. old stellar populations (left vs. right panels, respectively). Here we adopt our fiducial choice for the height measurements, as in the previous Sections.
A number of interesting considerations can be made. Firstly, despite the complexity of the realistic disks realized by TNG50 and even though stars are not the only matter components in the disk regions, the stellar disk heights of both young and old stellar populations relate to the underlying vertical velocity dispersion and average mass surface density: the higher the \(\sigma_{z}^{2}/\Sigma_{\star}\) ratio, the thicker the disk. However, and secondly, the relation may not be perfectly linear and is steeper than \(h_{z}\propto\sigma_{z}^{2}/\Sigma_{\star}\) (dotted lines): this is probably due to the gas and dark matter contributing to the disk potential and also to the morphological complexity of a real galactic stellar disk compared to the idealized isothermal sheet.
Figure 7: **Vertical disk structure and flaring of six TNG50 MW-like galaxies.** The scaleheight as a function of the galactocentric distance (normalized by the scalelength) is shown for the six MW analogs with stellar mass and disk properties most similar to the Galaxy. Error bars represent one standard deviation errors of the parameters, provided by the fitting function.
We have quantified the same relation by using the half-mass heights of the disks instead of the best fit to a squared hyperbolic secant function, as a more robust estimate of the scaleheight against disk internal structure and inhomogeneities: we notice, although we do not show, that the scatter in the relationships of Fig. 8 at fixed galactocentric distance is considerably reduced in such a case. This suggests that the galaxy-to-galaxy scatter in Fig. 8 is not all due to physical effects and the most severe cases of galaxy outliers are those where the parametric functional forms are not a very good description of the vertical stellar mass distribution in the disk.
Fig. 8 shows that, also according to TNG50 and as argued in previous works, at fixed galactocentric distance in the disk, older stars are not only distributed with larger scaleheights but are also hotter than younger ones, i.e. exhibit overall larger velocity dispersions (left vs. right panel). The same phenomenology has been measured in the Milky Way (Nordstrom et al., 2004; Dorman et al., 2015) and also in other cosmological simulations (Buck et al., 2020).
We also find, although we do not show, that at fixed galactocentric distance the stellar disk heights correlate with both the local stellar velocity dispersion and the local stellar mass surface density, individually: namely, stellar disks are thicker when they are hotter or less dense. What, then, determines the flaring of the stellar heights?
In Fig. 9, we show how the level of disk-height flaring of TNG50 MW/M31-like galaxies depends on the stellar velocity dispersion (top), the inverse of the stellar mass surface density (middle) and the ratio of the two, i.e. \(\sigma_{z}^{2}/\Sigma_{*}\) (bottom), whereby these quantities are evaluated in the outer disk regions. To avoid any issue with fitting the vertical mass profiles with parametric functions, here we quantify the flaring by measuring the stellar half-mass heights (see also §6). We also quantify the flaring and its relationship with stellar kinematics and potential for young and old stellar populations separately: left vs. right panels of Fig. 9.
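A minimal sketch of this non-parametric height measure, for the stellar particles in a given radial annulus (variable names are illustrative):

```python
import numpy as np

def half_mass_height(z, mass):
    """Return the |z| below which half of the stellar mass in the annulus lies."""
    order = np.argsort(np.abs(z))
    cum = np.cumsum(mass[order])
    return np.abs(z)[order][np.searchsorted(cum, 0.5 * cum[-1])]
```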
From this analysis we can see that, according to TNG50, galaxies with hotter or less dense outer stellar disks flare more strongly, in both young and old stellar populations. By comparing the top to the middle panels, we can also conclude that the diversity in flaring is mostly driven by the diversity in outer-disk temperatures, i.e. in stellar kinematics, rather than by a diversity in stellar surface density.
## 6 Discussion
### On other methods to quantify the disk flaring
One of the main messages of this paper is that, not only across differently-simulated galaxies, but also within one given simulation, the nature and amount of disk flaring can be very diverse, even for disk galaxies within a narrow range of galaxy stellar mass and environments. We hence advocate for a non-parametric quantification of flaring, as provided in §4.2.
One possible drawback of this proposition is that, at least in our fiducial implementation, it still relies on a parametric measure of the stellar disk heights. In §2.3.3, we have described different methods that are commonly used to quantify disk heights, and all results in the previous sections, unless otherwise stated, are based on our fiducial choice: profile fitting with single and double squared hyperbolic secant, for mono-age stellar populations and all-ages disk stars, respectively.
We show now in Fig. 10 how the flaring values for young and old stellar populations depend on the method for height measurements. We show again the main panel of Fig. 5 in the top left, unchanged, and repeat it for the additional parametric cases in the top right (hyperbolic secant) and bottom left (exponential). The bottom right panel shows the flaring measurement based on non-parametric half-mass height measurements, as in Fig. 9.
All four cases exhibit roughly similar flaring ranges and three of them also have similar distributions on the depicted plane (squared and single hyperbolic secant and half-mass heights), whereas for the exponential profiles the distribution appears more sparse and dissimilar. The fraction of galaxies where the young stars flare more than the old stars (and vice versa) also changes from method to method, although not in qualitatively significant manners, barring the case of the exponential fits. The latter is the method that returns the most different qualitative and quantitative quantification of the flaring.
Figure 8: **Relationship between stellar heights and stellar kinematics for TNG50 MW/M31-like galaxies.** We plot the ratio between the squared local stellar vertical velocity dispersion and the stellar surface density vs. stellar disk scaleheight, for young (left) and old (right) stellar populations. We do so at different locations within the stellar disk. The dashed line defines the linear correspondence that is expected for an idealized self-gravitating disk.
The quantification based on the non-parametric measurements of the stellar half-mass disk heights provides confidence that, in the population, fit-based assessments are not dramatically affected by possible numerical/fitting issues or by the possibility that parametric functional forms do not describe the stellar mass distributions well. On the other hand, individual galaxies may exhibit somewhat different amounts of flaring depending on the adopted method to measure their disk heights. Comparisons can hence only be done once the same operational definitions are adopted on all sides.
Considering that some of these galaxies may have a bar at a galactocentric distance of \(1R_{\rm d}\), we have repeated the measurements of the flaring parameter in the range \((2-5)\times R_{\rm d}\) instead of \((1-5)\times R_{\rm d}\): the results are shown in the top left panel of Fig. 10, in grey. The values are smaller (as we can expect from the flaring phenomenon) but they are distributed similarly to the original definition on both sides of the identity line.
Figure 9: **Disk flaring vs. kinematic properties for TNG50 MW/M31-like galaxies.** We plot the amount of flaring of young (left column, in blue) and old (right column, in red) stellar populations as a function of the stellar vertical velocity dispersion at \(5R_{\rm d}\) (top) and the inverse of the stellar surface density at \(5R_{\rm d}\) (middle), and finally their ratio (bottom). In each panel, the galaxy number density is estimated with a Gaussian kernel and represented with the shaded contour areas. Solid curves are medians in bins of the quantity on the \(x\)-axis. Galaxies with hotter or less dense outer stellar disks flare more.
### A note to observers: on the "flaring" based on the spatial distribution of stellar ages
As introduced in Section 1, it has become customary to inspect the vertical structure of the Galactic stellar disk by looking at the mean or median stellar ages as a function of galactocentric radius \(R_{\rm gal}\) and of vertical distance from the midplane \(|z|\). This is typically done in observations of the Galaxy (Ness et al., 2016; Xiang et al., 2017; Feuillet et al., 2019) as well as with simulation data (Ma et al., 2017; Buck et al., 2020; Agertz et al., 2021).
The common emergent picture is that of a _funnel_ shape, in which at each radius \(R_{\rm gal}\), the mean age of disk stars increases with \(|z|\), whereas at fixed \(|z|\) the mean or median stellar age decreases with galactocentric radius. This can be appreciated in Fig. 11, where we have compiled examples from the literature, for the Galaxy (left column Feuillet et al., 2019; Xiang et al., 2017; Ness et al., 2016) and from MW-like zoom-in cosmological simulations (central column Agertz et al., 2021; Ma et al., 2017; Navarro et al., 2018), and where it can be seen that young stars at large galactocentric distances can indeed be found at large altitudes.
For comparison, in the right column of Fig. 11, we show five TNG50 MW/M31 analogs that are representative of the whole sample and that qualitatively reproduce the previously-quantified observational and theoretical scenarios, for \(R_{\rm gal}=3-14\) kpc and \(|z|=0-4\) kpc. Here we divide all stars into \(100\times 100\) bins on the \(|z|-R_{\rm gal}\) plane, and the colors denote the mean stellar ages in each pixel. Also in TNG50 MW/M31-like galaxies, the mean stellar age increases
Figure 10: **Flaring of young vs. old stellar populations in TNG50 MW/M31-like galaxies for different methods to measure disk heights.** Top left: quantification of the flaring with the fiducial height measurements based on fitting a squared hyperbolic secant profile (as in all figures so far): blue data points refer to the flaring measured between 1 and 5\(\times R_{\rm d}\) (same as main panel of Fig. 5), whereas in grey the flaring is evaluated at 2 vs. 5\(\times R_{\rm d}\).
with \(|z|\) at fixed \(R_{\rm gal}\), and decreases with \(R_{\rm gal}\) at fixed \(|z|\), but with different detailed patterns depending on the galaxy.
However, it is important to realize that, even if these funnel features have often been connected to the flaring of the disk, they in fact do not imply it. Flaring is the increase of the stellar disk thickness with increasing radius. The change with radius and height of the typical ages of the disk stars _does not_ imply flaring: an un-flared, inside-out growth model could still produce age maps with a funnel shape.
To demonstrate this we produce two MW disk mocks from the model described in Frankel et al. (2020) and inspect for both cases the stellar density and mean age as a function of radius and height (we do not show the plots here). Without flaring and without inside-out growth, no funnel shape is present. However, when we activate inside-out growth (still with no flaring), we recover the same pictures as in Fig. 11.
To illustrate this further, we show in Fig. 12 the \(R-z\) plane color-coded by mean age that results from two toy models, one with flaring and one without. The model without flaring exhibits the typical funnel structure purely as a result of inside-out formation and vertical heating. The model with flaring is adapted from the model described in Frankel et al. (2020), which builds on the best-fit vertical distribution of Ting & Rix (2019). In summary, the model describes a star formation history of the disk from the inside out, forming stars on an exponential radial profile, with subsequent orbit evolution in the plane (radial migration and radial heating as diffusion in angular momentum and radial action) and out of the plane via a scale-height that increases with radius and time. The toy model without flaring has exactly the same formation scenario, parameters, and in-plane orbit evolution, but the vertical distribution is now modelled with a scale-height that increases with time as \(h_{z}(t)=0.15+0.1\,(t/1\,{\rm Gyr})\) kpc, independent of radius. As can be appreciated, a funnel-like distribution of stellar ages can be in place with or without actual flaring, i.e. with or without orbital changes of the stars.
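A minimal numerical sketch of the non-flaring toy model is given below; only the scale-height law \(h_{z}(t)=0.15+0.1\,(t/1\,{\rm Gyr})\) kpc is taken from the text, while the star formation history and the inside-out growth rate are placeholder choices meant only to reproduce the qualitative behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)
n_star, t_now = 200_000, 12.0                        # stars, Gyr of star formation

# Inside-out growth: the radial scalelength of newly formed stars grows with time.
age = t_now * rng.random(n_star)                     # ages in Gyr (flat SFH)
rd_birth = 1.5 + 0.25 * (t_now - age)                # kpc, larger for younger stars
R = rng.exponential(rd_birth)                        # exponential radial profile

# No flaring: the scaleheight depends on age only, not on radius.
hz = 0.15 + 0.1 * age                                # kpc
z = hz * np.arctanh(2.0 * rng.random(n_star) - 1.0)  # draws from a sech^2 profile

# Mean stellar age in (R, |z|) pixels: a funnel-shaped map despite zero flaring.
R_edges, z_edges = np.linspace(3, 14, 23), np.linspace(0, 4, 17)
num, _, _ = np.histogram2d(R, np.abs(z), bins=(R_edges, z_edges), weights=age)
den, _, _ = np.histogram2d(R, np.abs(z), bins=(R_edges, z_edges))
mean_age = num / np.maximum(den, 1)                  # as in Fig. 12, bottom panel
```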
### Comparison to previous simulations
As we have already described in Section 1.5, even though the flaring of the stellar disk has been studied and investigated in a number of previous theoretical works in the literature, a comparison among them is non-trivial. This is due to the different ways of quantifying the flaring and the diverse ways in which the flaring manifests in the various simulated galaxies and simulation models. In this section we attempt a closer comparison between the TNG50 results and those from selected zoom-in simulations of MW-mass disk galaxies, such as NIHAO-UHD (Buck et al., 2020), Latte (Ma et al., 2017), VINTERGATAN (Agertz et al., 2021), and Auriga (Grand et al., 2017).
We note that in all the simulations used for this comparison, the scaleheight of the mono-age stellar populations that constitute the stellar disk is evaluated from a single exponential fit, which is not our fiducial choice; this is the case for all but Latte, where a single \(\rm{sech}^{2}\) profile is used instead. We hence proceed as follows. From the available literature, we extrapolate the scaleheights of the young and old stellar populations at 4 and 12 kpc from the galactic center (to be consistent with all the selected models). Unless otherwise stated, we try to stick to our definition of _young_ (2-4 Gyr) and _old_ (8-10 Gyr) age bins. Then, using Eq. 5 (but with the heights at the physical distances of 4 and 12 kpc instead of \(1\times R_{\rm d}\) and \(5\times R_{\rm d}\)), we evaluate the amount of flaring \(\tau_{\rm flare}\) of each model and plot it together with the TNG50 results in Fig. 13. In order to make an apples-to-apples comparison, we also measure the scaleheights of the TNG50 sample by fitting single exponential profiles to the vertical stellar density distribution.
In the left panel of Fig. 13, TNG50 is compared with NIHAO-UHD (black plus symbols), Latte (black diamond) and VINTERGATAN (black crosses). We note that in Latte, the "old" stellar population is composed of all stars older than 8 Gyr, while in VINTERGATAN the stellar age bins are different from those used in this work, with \(\Delta\)age = 1 Gyr. To be consistent, we therefore plot both VINTERGATAN age bins, a thick cross denoting stellar populations of 3-4 Gyr and a thin cross representing stars of 2-3 Gyr.
Because of the different treatment of the flaring of the mono-age stellar populations adopted for the Auriga galaxies, we choose to separate this comparison from the others and show it in the right panel of Fig. 13. In this case, we apply to our TNG50 sample the same choices as in the Auriga paper: the young stars are defined to be all disk stars younger than 3 Gyr, and their flaring is shown against that of all disk stars. For the 30 MW-like galaxies in the Auriga sample, the vertical density distribution of the disk is fitted at a series of different radii with a single exponential profile. However, as already mentioned in §3.2, the vertical profiles of the TNG50 MW/M31-like galaxies are more often better described with a double than with a single functional profile when all disk stars are considered. Nevertheless, in this case too we measure the heights for the TNG50 galaxies by fitting a single exponential profile. As in the left panel, the flaring is evaluated between 4 and 12 kpc.
As it is clear from Fig. 13, the TNG50 MW/M31-like sample returns and brackets all the other theoretical findings, including the most extreme cases: very small flaring of the old population, as in Latte, or cases where the old stars flare much more than the young ones, as in one of the NIHAO-UHD galaxies. We see also that in the TNG50 sample we have galaxies where the flaring values are larger than in previous simulations. The consistency of the outcomes in Fig. 13 is indeed a remarkable result. Until now it was not possible to say whether the diversity of flaring manifestations predicted by simulations was a genuine manifestation of galaxy-to-galaxy variation or was due to different numerical codes, galaxy-formation models, galaxy/halo selection or even different numerical resolution. The results with TNG50 demonstrate that MW/M31-like galaxies can exhibit very diverse levels and flavours of flaring simply due to galaxy-to-galaxy diversity.
### Disk flaring vs. \(z=0\) structural and global properties of MW/M31-like galaxies
In §5, we have shown that the flaring of the stellar disk thickness is a direct manifestation of larger stellar velocity dispersions and lower stellar surface densities in the outer disk regions. Are there other global and/or structural properties of galaxies that correlate with the disk flaring? In this section we examine whether or not the flaring of the mono-age stellar populations is connected with the global or structural \(z=0\) properties of the TNG50 MW/M31-like galaxies.
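For reference, the expectation behind this statement (and behind the dashed line in Fig. 8) is presumably the classical isothermal, self-gravitating sheet solution; we quote the textbook relation here, noting that the exact normalization used in the figures may differ:

\[
\rho(z)=\rho_{0}\,\mathrm{sech}^{2}\!\left(\frac{z}{z_{0}}\right),\qquad z_{0}=\frac{\sigma_{z}^{2}}{\pi\,G\,\Sigma_{*}},
\]

so that, at fixed radius, a hotter or less dense disk is expected to be thicker.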
For the sake of clarity, we divide the \(z=0\) properties into two subgroups: disk structural properties, including the gas mass fraction in the disk region (Fig. 14), and mass properties (Fig. A1). In each figure, the amount of flaring of young (left columns) and old stellar populations (right columns) is shown separately, with lines representing the running medians.
From Fig. 14, we see that there is no clear trend or correlation between the degree of flaring and the scalelength of the disk. On the other hand, a clearer positive trend appears when we plot the dependence of flaring with the scaleheight of the stellar disk, both for the young and for the old stars, albeit with a significant scatter: namely, galaxies with thicker stellar disks appear to host a larger amount of
disk flaring. As Fig. A1 shows, the current galaxy stellar mass, stellar disk mass, and disk-to-total mass ratio of a MW/M31-like galaxy are not predictors of disk flaring (even though this statement may differ across a larger range of stellar or halo masses). However, interestingly and related to the discussions of Section 5, larger gas mass fractions in the disk of these galaxies imply larger degrees of flaring (Fig. 14).
We have confirmed, although we do not show it here, that the findings above hold irrespective of whether the flaring is evaluated based on stellar half-mass heights or on scaleheights from parametric fits of the vertical stellar mass distribution.
Finally, we have compared the amount of flaring for young and old stellar populations with additional galactic and environmental properties (we omit the corresponding plots for brevity). The proportion of barred galaxies is not significantly different in the two subsets of systems where the young population flares more than the old stellar populations or vice versa. Nor does the presence of a bar itself seem to correlate with the amount of flaring. We have also examined whether the presence of other (satellite) galaxies in the proximity of our MW/M31 analogs at \(z=0\) may be a predictor of, or may be associated with, larger degrees of flaring; we cannot discern any statistically-robust trend.
### Disk flaring vs. merger histories of MW/M31-like galaxies
Is the amount of disk flaring determined by the past merger history of a galaxy?
As already mentioned in the Introduction, our TNG50 MW/M31-like sample is built with no constraint on past history, so that the \(z=0\) MW/M31-like galaxies from TNG50 are the result of very diverse merger histories. In Fig. 15, we investigate whether the amount of flaring is correlated with aspects of the past merger history. To this aim, we highlight those galaxies that have undergone at least one major merger in the last 2 Gyr (red), in the last 5 Gyr (orange), and since \(z=1\) (navy). There seems to be a trend whereby MW/M31-like galaxies that underwent recent major mergers are more likely to have young stars flaring more than the old population. To test the statistical significance of this statement (for the case of major mergers since \(z=1\), where the counts are higher), we perform two-sample tests of proportions, obtaining a p-value of 0.12 for both the Z-test and the chi-square test: this means that the trend that we see in Fig. 15 is only weak.
Now, we can speculate that even if stars younger than 2 Gyr were not yet born at the time of the major merger, they were born later in a "perturbed" stellar disk likely heated up by the merger event. Alternatively, as discussed in Ma et al. (2017), the young stars flare more because they inherit the flaring of the gas from which they formed, which may itself have been perturbed by the recent mergers. However, the occurrence of recent or less recent major mergers does not seem to imply overall systematically stronger disk flaring. In fact, we have not identified trends between disk flaring and a number of merger-history statistics, including the number of all major and minor
Figure 11: **Stellar age distributions in the MW/M31 midplane**: vertical distance from the midplane as a function of galactocentric distance with the color code representing the mean or median stellar age. Here we compare observations from Ness et al. (2016); Xiang et al. (2017); Feuillet et al. (2019) (left), simulations from Ma et al. (2017); Navarro et al. (2018); Agertz et al. (2021) (center), and a sample of five TNG50 MW/M31 analogs (right). Barring those in the right column, the plots are replicated from the respective research studies without modification.
mergers, the ex situ total mass, and the ex situ fraction of the galaxy.
### On possible resolution effects
Before closing, we offer a few remarks on the possible effects of numerical resolution. In this paper we have characterized the stellar disk structures and flaring predicted for MW/M31-like galaxies by one simulation only, the highest-resolution run of the IllustrisTNG project: TNG50. As shown in Pillepich et al. (2019), stellar disk heights are affected by the underlying resolution: TNG50 galaxies are on average thinner than those in the 8-times lower mass resolution counterpart TNG50-2 (see their Figures B2 and B3). However, by how much galaxies are thinner in the higher-resolution simulation in comparison to lower-resolution runs depends on redshift and stellar mass range. With this in mind, we notice that all the results shown in this paper are based on the relative enhancement, \(\tau_{\rm flare}\), of scaleheights across the simulated stellar disks. It is therefore plausible that any effect of numerical resolution on the values of the disk scaleheights is mitigated by this definition. Furthermore, the levels of disk flaring predicted by the TNG50 simulation are consistent with those returned in MW analogs simulated at much better numerical resolution, such as Latte and VINTERGATAN. This gives further confidence in the quantitative soundness of the results provided herein.
## 7 Summary and conclusions
In this paper we have presented a comprehensive study of the stellar disk structure of a large, unbiased sample of Milky Way (MW) and Andromeda (M31)-like galaxies from TNG50, the highest-resolution cosmological magneto-hydrodynamical large-volume simulation of the IllustrisTNG project (§2.1).
Our TNG50 MW/M31-like sample includes objects with disky stellar morphology, with a stellar mass in the range \(M_{*}=10^{10.5-11.2}~{\rm M}_{\odot}\), and within a MW-like Mpc-scale environment at \(z=0\) (§2.2). We have focused on the vertical structure of the TNG50 MW/M31-like galaxies and, in particular, on the flaring of their stellar disks, distinguishing across mono-age stellar populations. Throughout this paper, by disk flaring we mean the quantification of by how much the scaleheight of the stellar disk changes (i.e. increases) with galactocentric distance, once stars are selected according to their ages (or other properties).
Our analysis and results rely on the resolution, sample size, and realism of the TNG50 simulated galaxies. In fact, thanks to its high numerical resolution, approaching that typical of "zoom-in" simulations, TNG50 returns 198 different realizations of MW/M31-like galaxies at \(z=0\). The stellar disk scalelength of TNG50 MW/M31-like galaxies ranges across \(\sim 1.5-17\) kpc, with good qualitative and quantitative agreement when compared to other models and available observational findings for local disky and spiral galaxies (Pillepich et al. in prep.). Moreover, the vertical stellar distribution in most TNG50 MW/M31-like galaxies can be well described with a double (squared hyperbolic secant) profile, allowing us to distinguish between a thin and a thick disk _geometric_ component. For some galaxies, the stellar thin (thick) disk is found to be as thin (thick) as the observed one for the Milky Way, i.e. with a scaleheight of about \(175-360\) (\(900-1300\)) pc.
Our main results on disk flaring can be summarized as follows:
* By fitting the vertical stellar density distribution of each simulated galaxy with a double or single functional profile, we have estimated the stellar disk scaleheights at a series of different galactocentric distances and for different mono-age stellar populations (Figs. 1 and 2). TNG50 predicts, in general, systematically higher values of the stellar scaleheight moving outward from the galactic center, i.e. it predicts "disk flaring". In fact, we show that, with one unique set of physical ingredients, TNG50 is able to reproduce diverse levels and kinds of flared stellar disks (Fig. 4).
* Because, according to TNG50, the increase of stellar scaleheight with galactocentric distance can be linear, exponential or other, depending on the galaxy and stellar population, and because disk galaxies in a narrow range of stellar mass can exhibit disk scalelengths varying by up to factors of 8, with this paper we propose and advocate for an easy, non-parametric, fit-independent measurement of the degree of flaring. Namely, we propose to quantify flaring simply based on the relative enhancement of the scaleheight between 1 and 5 times the scalelength of each MW/M31 galaxy (Eq. 5).
* We have compared the amount of flaring displayed by the young (2-4 Gyr) and old (8-10 Gyr) stellar populations, finding that which stars flare more, and by how much, changes from galaxy to galaxy, with both populations typically exhibiting \(1.5-2\) times thicker disk heights in the outskirts than towards the center (Fig. 5). The young stellar populations in about eleven MW/M31-like galaxies exhibit a degree of flaring similar to that of our Milky Way, for which we have extrapolated
Figure 12: **Stellar mean age distributions for two MW-like disk mocks.** We illustrate the \(R-z\) plane of two model galaxies color-coded by mean age (as in Fig. 11), from a toy disk model that includes flaring (top) and one that does not (bottom). The toy model is adapted from the best fit of Frankel et al. (2020); Ting & Rix (2019). In the top panel, the vertical distribution of stars is a \(\rm{sech}^{2}\) function with an age- and radius-dependent scale-height, as described in Frankel et al. (2020), that captures the flaring of the Galactic disk as described and fit by Ting & Rix (2019). In the bottom panel, the model is the same, but the vertical distribution of stars is replaced by a non-flaring toy distribution, where the scaleheight is essentially only a function of age. In particular, we took \(h_{z}(t)=0.15+0.1\,(t/1{\rm Gyr})\) kpc as a (not physically motivated, but simple) example. The funnel shape of iso-age contours in the bottom panel arises naturally from the combination of inside-out formation of the disk and subsequent vertical heating, not from disk flaring.
the scaleheight as a function of radius and ages from Ting & Rix (2019).
* We have applied our method to the data available in the literature for selected zoom-in simulations of MW-like galaxies: namely, Latte, VINTERGATAN, NIHAO-UHD and Auriga. The amount of flaring of old and young stellar populations found in TNG50 encompasses qualitatively and quantitatively all the aforementioned simulations, implying that the stellar-disk flaring of the unbiased sample of TNG50 MW/M31 analogs, returned by a fixed physical model, covers all the previous findings of zoom-in simulations performed with different codes, in some cases with better resolution, different galaxy-formation models and varying assumptions on the past assembly history of the simulated galaxies (Fig. 13).
* The scaleheights we measure in the TNG50 simulated galaxies are a manifestation of the underlying orbits and overall potential, with their values exhibiting a clear correlation with the local stellar vertical velocity dispersion and the local stellar surface density (Fig. 8), as predicted by theoretical models. Namely, stellar disks are thicker where they are hotter and where their surface density is lower.
* According to our analysis, galaxies with hotter or less dense outer stellar disks flare more strongly, in both young and old stellar populations, and the diversity in flaring seems to be mostly driven by a diversity in vertical stellar velocity dispersion in the outer disk regions (Fig. 9).
* On the other hand, the flaring of the mono-age populations does not manifestly depend on global \(z=0\) structural properties of the MW/M31 sample. However, two key albeit mild trends can be highlighted: old stellar populations flare more in galaxies with larger disk scaleheight and larger gas mass fraction in the disks, the former relationship being in place also for young stars (Figs. 14, A1).
* The TNG50 MW/M31-like galaxies also reproduce the funnel-shaped distribution of mean stellar age in the \(R_{\rm gal}-|z|\) plane seen in Milky Way data and in previous simulations, i.e. the mean stellar age increasing with \(|z|\) at fixed radius and decreasing with radius at fixed \(|z|\) (Fig. 11, right panels compared to left and middle ones). However, we argue that these observed trends do not necessarily imply flaring, and in fact should not be used as a proxy for flaring, as a change in average stellar age with galactocentric distance can be realized by a varying trend with radius of the stellar age distributions and not necessarily by a change in stellar orbits (Fig. 12).
Overall, with the analysis of this paper we have demonstrated that a sample of 198 \(z=0\) MW/M31-like galaxies - selected to be disky, isolated (but allowing Local Groups), and within a narrow range of stellar mass (covering the MW and M31 stellar masses) -, exhibit a great diversity in their vertical disk structures. The heterogeneity of the stellar disk properties and the diverse flavors and amounts of stellar disk flaring are a key result. Indeed, TNG50 is a cosmological magneto-hydrodynamical simulation that - with a given set of physical-model ingredients and with a resolution that bridges the gap between the volume and zoom-in simulations - is able to reproduce, replicate, and expand upon the features of the stellar disk flaring
Figure 13: **Flaring of TNG50 MW/M31 analogs in comparison to the results of other cosmological MW-like galaxy simulations.** We show again the flaring of old and young stars for TNG50 MW/M31 analogs in comparison to, on the left, simulations from Agertz et al. (2021, VINTERGATAN: thin and thick crosses, with young stellar populations of 2-3 Gyr and 3-4 Gyr, respectively), Ma et al. (2017, Latte: diamonds), and Buck et al. (2020, NIHAO-UHD: plus symbols); on the right, simulations from the Auriga project (Grand et al., 2017). The magenta area denotes the stellar flaring of young stars of the Milky Way extrapolated from Ting & Rix (2019), as in Fig. 5. We note that in all the simulation models used for this comparison (except for Latte, which uses \(\rm{sech}^{2}\)), the scaleheight of the mono-age stellar populations in the disk is evaluated from a single exponential fit. In these plots, the heights are hence measured using exponential profiles also for TNG50.
already investigated and shown in the literature, based on zoom-in cosmological simulations.
The fact that there is no trivial correlation between the amount of flaring and the \(z=0\) structural _global_ properties of galaxies is intriguing. We have also tentatively investigated whether flaring may be correlated with the number of more or less recent major/minor/total mergers, finding no obvious dependence (Fig. 15). These considerations, together with the findings on kinematics, lead us to speculate that the increase of the stellar disk scaleheight with galactocentric distance could be ascribed to a possible heating of the stellar disk due to external perturbations such as flybys (without the need for coalescence). In fact, keeping in mind that we have not imposed constraints on past history, the pathways leading to \(z=0\) MW/M31-like galaxies could be very diverse across the whole sample (see also Sotillo-Ramos et al., 2022). Future and further analyses are required to connect the disk flaring to the past history of MW/M31-like galaxies and to the dynamical evolution of their stars.
Figure 14: **Disk flaring vs. disk properties for TNG50 MW/M31-like galaxies.** We plot the amount of flaring of young (left column, in blue) and old (right column, in red) stellar populations as a function of stellar disk scalelength (top), disk scaleheight at 4\(\times R_{\rm d}\) (middle) and gas fraction within the disk (bottom). In each panel, the galaxy number density is estimated with a Gaussian kernel and represented with the shaded contour areas. Solid curves are medians in bins of the quantity on the \(x\)-axes. Galaxies with thicker stellar disks and larger gas disk fractions seem to be characterized by somewhat larger degrees of disk flaring.
## Data Availability Statement
The entire data of the IllustrisTNG simulations, including TNG50, are publicly available and accessible at www.tng-project.org/data (Nelson et al., 2019). Additional and easier-to-use data products and particle data cutouts related to the 198 MW/M31-like galaxies from TNG50 used in this paper are also publicly available (Pillepich et al. in prep.). With this paper, we also make public a series of catalogs for the various measures of disk flaring and for the flags of warped and disturbed TNG50 MW/M31-like galaxies.
## Acknowledgements
DS, MD, and AP acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 138713538 - SFB 881 ("The Milky Way System", subprojects A01 and A06). DN acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) through an Emmy Noether Research Group (grant number NE 2441/1-1). The TNG50 simulation used in this work has been run on the Hazel Hen Cray XC40 system at the High Performance Computing Center Stuttgart under the Gauss Centre for Supercomputing (GCS) Large-Scale Project GCS-DWAR (2016; PIs Nelson/Pillepich).
|
2310.18329 | Unveiling Energy Efficiency in Deep Learning: Measurement, Prediction,
and Scoring across Edge Devices | Today, deep learning optimization is primarily driven by research focused on
achieving high inference accuracy and reducing latency. However, the energy
efficiency aspect is often overlooked, possibly due to a lack of sustainability
mindset in the field and the absence of a holistic energy dataset. In this
paper, we conduct a threefold study, including energy measurement, prediction,
and efficiency scoring, with an objective to foster transparency in power and
energy consumption within deep learning across various edge devices. Firstly,
we present a detailed, first-of-its-kind measurement study that uncovers the
energy consumption characteristics of on-device deep learning. This study
results in the creation of three extensive energy datasets for edge devices,
covering a wide range of kernels, state-of-the-art DNN models, and popular AI
applications. Secondly, we design and implement the first kernel-level energy
predictors for edge devices based on our kernel-level energy dataset.
Evaluation results demonstrate the ability of our predictors to provide
consistent and accurate energy estimations on unseen DNN models. Lastly, we
introduce two scoring metrics, PCS and IECS, developed to convert complex power
and energy consumption data of an edge device into an easily understandable
manner for edge device end-users. We hope our work can help shift the mindset
of both end-users and the research community towards sustainability in edge
computing, a principle that drives our research. Find data, code, and more
up-to-date information at https://amai-gsu.github.io/DeepEn2023. | Xiaolong Tu, Anik Mallik, Dawei Chen, Kyungtae Han, Onur Altintas, Haoxin Wang, Jiang Xie | 2023-10-19T23:55:00Z | http://arxiv.org/abs/2310.18329v2 | Unveiling Energy Efficiency in Deep Learning: Measurement, Prediction, and Scoring across Edge Devices
###### Abstract.
Today, deep learning optimization is primarily driven by research focused on achieving high inference accuracy and reducing latency. However, the energy efficiency aspect is often overlooked, possibly due to a lack of sustainability mindset in the field and the absence of a holistic energy dataset. In this paper, we conduct a threefold study, including energy measurement, prediction, and efficiency scoring, with an objective to foster transparency in power and energy consumption within deep learning across various edge devices. Firstly, we present a detailed, first-of-its-kind measurement study that uncovers the energy consumption characteristics of on-device deep learning. This study results in the creation of three extensive energy datasets for edge devices, covering a wide range of kernels, state-of-the-art DNN models, and popular AI applications. Secondly, we design and implement the first kernel-level energy predictors for edge devices based on our kernel-level energy dataset. Evaluation results demonstrate the ability of our predictors to provide consistent and accurate energy estimations on unseen DNN models. Lastly, we introduce two scoring metrics, PCS and IECS, developed to convert complex power and energy consumption data of an edge device into an easily understandable manner for edge device end-users. We hope our work can help shift the mindset of both end-users and the research community towards sustainability in edge computing, a principle that drives our research. Find data, code, and more up-to-date information at [https://amai-gsu.github.io/DeepEn2023](https://amai-gsu.github.io/DeepEn2023).
Edge AI, Deep Neural Network, Energy Consumption
## 1. Introduction
Recently, there has been heavy investment in implementing various AI applications on mobile and edge devices, for instance, (1) _vision-based_ AI applications, such as image classification [(1; 2; 3)], face recognition [(4; 5)], object detection and tracking [(6; 7; 8)], image super-resolution [(9; 10; 11)], segmentation [(12)], pose estimation [(13)], and gesture recognition [(14)]; (2) _natural language processing_ (NLP) based applications, such as smart reply [(15)], question answering [(16)], language translation [(17; 18)], and sentiment analysis [(19; 20)]; and (3) _voice-based_ applications, such as virtual-assistant [(21)], speech recognition [(22)], and sound classification [(23)].
Despite the remarkable advances in edge device capabilities such as functionality, computation power, and storage capacity, the limited energy capacity has been the major bottleneck in promoting advanced edge AI applications. On one hand, edge AI applications, particularly those that involve intensive computing resources such as deep learning algorithms, tend to consume a significant amount of energy [(24; 25)]. On the other hand, mobile and edge devices are typically powered solely by embedded batteries, so their energy capacity is significantly constrained by form factor requirements, safety considerations, manufacturing costs, and concerns about the environmental impact of the battery technology used. As a result, heavy battery usage of an application often results in low ratings or a subpar user experience. A survey [(26)] finds that about 55% of users surveyed would give a negative review to a mobile application that consumes a lot of battery, indicating that energy consumption is a crucial aspect of the user experience that cannot be overlooked. These observations raise intuitive questions: _How can we identify the energy bottlenecks and optimize the energy efficiency of on-device deep learning for diverse edge devices? What are the primary factors that have a large impact on the energy consumption of deep neural network (DNN) executions, the core of on-device deep learning? Where is the energy spent inside a DNN execution?_ Answering these questions, however, is challenging, due to the lack of holistic understanding of the intricacies of power and energy consumption in DNN executions on edge devices. First and foremost, _we cannot optimize what cannot be measured_. The energy efficiency of an edge device is more than its AI hardware capability in isolation. Instead, it is coupled with the on-device deep learning software stack, whose net performance is shrouded beneath the DNN models and end-to-end processing pipeline of diverse edge AI applications. Second, _we cannot optimize what is under-appreciated or neglected in the design_. Most existing research and development in deep learning primarily aims to reduce inference latency and enhance accuracy, often neglecting to consider the impact on energy efficiency. As a result, it becomes crucial to strike a balance between improving energy efficiency and enhancing performance in on-device deep learning for modern edge devices.
In this paper, we study the problem of accurate energy measurement, prediction, and understandable scoring of on-device deep learning, and make three concrete contributions towards enabling _transparency of power and energy consumption inside on-device deep learning across diverse edge devices_.
First, we conduct the first detailed measurement study to accurately quantify the energy consumed by on-device deep learning across diverse modern edge devices. Our measurement study covers three dimensions, including the power and energy consumption of kernels, state-of-the-art (SOTA) DNN models, and widely-used edge AI applications. Our measurements reveal multiple key observations, which remain consistent across eight different measured edge devices. Overall, we measure and collect fine-grained power traces and accurate energy consumption data for (1) 16 types of kernels with \(1,847\) unique configurations, (2) nine SOTA DNN models with \(50\) variants each, and (3) six widely-used edge AI applications on eight commercial edge devices executed with mobile CPU and GPU. These measurements result in the creation of three large-scale power and energy datasets, including kernel-, model-, and application-level datasets for on-device deep learning on edge devices.
Second, based on our kernel-level energy dataset and the observations gained in the measurement study, we design and implement kernel-level energy predictors on both mobile CPU and GPU. To the best of our knowledge, this is the first energy predictor for on-device deep learning on commercial edge devices (e.g., modern smartphones), which can provide consistently accurate energy estimation on unseen DNN models. This offers an effective approach to extend our measurements and observations derived from a limited DNN model space to new DNN models, which enhances the extensibility of our measurement study.
Lastly, beyond valuing research that aims at improving the energy efficiency of on-device deep learning, it is crucial that our measurement study is accessible to a wide audience, such as end-users with non-technical backgrounds. For instance, presenting an energy efficiency score, ranging from \(0\) to \(100\), should be more straightforward and easier to understand than telling end-users that their device will consume \(120.090\) mJ per inference to run MobileNetV1 with CPUs. To this end, we develop two scoring metrics: _power consumption score (PCS)_ and _inference energy consumption score (IECS)_. These two scoring metrics help to distill the power and energy efficiency of an edge device in an intuitive and understandable way. We present complete scoring results for eight edge devices benchmarked by leveraging our application-level dataset.
## 2. Background and Challenges
### Background
DNN models are the core of on-device deep learning and consume a major portion of both computational and energy resources on mobile and edge devices. A DNN model consists of a sequence of primitive operations, such as convolution2D (conv), depthwise convolution2D (dwconv), activations, pooling, and fully-connected (fc), which are organized into layers, allowing the network to learn complex patterns from input data. To enhance the computational efficiency of the DNN inference (i.e., to reduce inference latency and avoid redundant memory access), kernel fusion (or operator fusion) is a key optimization and has been incorporated in SOTA DNN execution frameworks, such as TVM (Tran et al., 2017), TFLite (Tran et al., 2017), and MNN (Tran et al., 2017). For instance, three individual operations, conv, batch normalization (bn), and rectified linear unit (relu) can be fused into one composite operation, conv+bn+relu1, to achieve inference acceleration on edge devices. This means the entire sequence can be processed as a single step, which reduces memory access (since intermediate results don't need to be written to and read from memory) and kernel launch overhead. Hence, given its crucial role in runtime optimization, a kernel is typically considered as the fundamental unit for scheduling and execution in deep learning frameworks, particularly on edge devices (Zhu et al., 2018).
Footnote 1: In this paper, + represents kernel fusion.
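To illustrate why such a fusion is possible at inference time, the batch-normalization parameters can be folded directly into the convolution weights and bias, after which the activation is applied in the same pass. The sketch below is schematic and does not reflect any particular framework's implementation.

```python
import numpy as np

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BN(conv(x)) = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    into new conv weights/bias; relu is then applied as a fused activation.
    w: (KS, KS, C_in, C_out) conv weights; b, gamma, beta, mean, var: (C_out,)."""
    scale = gamma / np.sqrt(var + eps)
    w_folded = w * scale                      # broadcasts over the C_out axis
    b_folded = (b - mean) * scale + beta
    return w_folded, b_folded
```

The fused kernel then computes relu of the folded convolution in a single pass, which is why kernels, rather than individual operators, are the natural unit for latency and energy accounting.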
### Challenges
**C1: Accuracy.** In order to optimize the energy efficiency of DNN executions on resource-constrained edge devices, it is crucial to gain a deep understanding of the energy consumption characteristics associated with various DNN models across different edge hardware platforms, such as mobile CPUs and GPUs. Consequently, the importance of conducting accurate measurement studies on real devices is becoming increasingly paramount. However, measuring accurate energy consumption on a real edge device is non-trivial. The challenges arise from two main observations: (1) existing energy profiling methods for mobile and edge devices, which rely on built-in current sensors, cannot capture power consumption at high time granularity (i.e., less than \(100\) ms); and (2) the growing level of integration in the electronic circuits of edge devices presents challenges when attempting to connect them with an external power monitor.
First, most SOTA DNN models can achieve inference latencies of \(10\) to \(200\) ms when executed on mobile CPUs. These latencies can be significantly reduced to a range of \(1\) to \(50\) ms when executed on mobile GPUs (Zhu et al., 2018). On the other hand, a DNN model usually consists of tens or hundreds of kernels that run sequentially on the edge device (Zhu et al., 2018; Zhu et al., 2018; Zhu et al., 2018), each potentially having an execution time of less than a millisecond. Therefore, to accurately capture the instantaneous power variations within a DNN inference, which includes the precise power consumption of individual kernels, an ideal power sampling rate should be less than \(1\) ms. However, we have observed that existing edge devices, such as smartphones, typically have built-in current sensors (e.g., fuel gauge) with a time-granularity of approximately \(100\) ms to \(1\) second. This restricts the sampling rate at which the sensors can measure the power drawn by the device to \(1-10\) times per second. This indicates that the existing built-in current sensors cannot fully capture the fine-grained, kernel-level power variations within a DNN inference on the edge device, resulting in inaccurate measurements.
We have conducted a measurement study on a real device, Huawei P40 Lite, to investigate the extent of this discrepancy compared to
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{CPU} & \multicolumn{2}{c}{GPU} \\ \cline{2-5} & Energy & Error & Energy & Error \\ \hline Built-in & 132.420mJ & 10.3\% & 19.254mJ & 30.64\% \\ \hline Ground-truth & 120.090mJ & - & 27.760mJ & - \\ \hline \hline \end{tabular}
\end{table}
Table 1. MobileNetV1 energy consumption.
the ground-truth power and energy consumption2. As shown in Tables 1 and 2, measurements dependent on the device's built-in current sensor produce large errors in both the overall DNN model (\(10.3\%-30.64\%\)) and individual kernel (\(1.76\%-31.8\%\)) energy consumption3. Moreover, as we show in Section 4, using the energy dataset created by a built-in current sensor to train an energy predictor results in consistently poor prediction accuracy. In addition, Fig. 1 demonstrates that the built-in current sensor also fails to capture the characteristics of power variations among kernels within a DNN inference. For instance, there is usually a sudden power rise at the start of conv executions, and a sudden power drop in most dwconv executions.
Footnote 2: In this paper, the ground-truth power and energy consumption is measured by connecting the real device to the Monsoon power monitor [33].
Footnote 3: The energy consumption is calculated by multiplying the measured power consumption by the model/kernel inference latency. To ensure that the energy consumption errors are primarily caused by the power measurement inaccuracy, we use the ground-truth latency when calculating the energy consumption in the built-in current sensor measurements.
_Consequently, these observations indicate that existing energy profiling solutions for mobile and edge devices that heavily rely on the built-in current sensor may fail to offer accurate power measurements for DNN executions (e.g., power profiler [34], reading the virtual file current_now from /sys/class/power_supply/battery/ [35], and reading battery level drops from ACTION_BATTERY_CHANGED [36, 37])._
Second, one of the common methods to measure accurate and fine-grained power consumption for mobile and edge devices in the research community is to connect the device to an external power monitor with a high sampling rate [38, 39, 40, 41, 42, 43]. However, we find that connecting newer commercial devices, especially smartphones released after 2017, to an external power monitor requires significant effort due to the increasing level of integration of their electronic circuits. Fig. 2 compares the battery connector in an older Samsung smartphone, the Galaxy S5, released in 2014, with that of a newer Samsung model, the Galaxy S20, released in 2020. The battery connector in a smartphone is used to connect a battery to its integrated circuit board. Older smartphones, including the Galaxy S5, use a specific type of battery connector known as a "snap-type connector". Featuring four metal prongs, as shown in Fig. 2(a), the snap-type connector allows for easy identification of the positive and negative terminals and enables connection to an
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Kernels} & \multicolumn{3}{c}{CPU} \\ \cline{2-4} & Built-in (mJ) & Ground-truth (mJ) & Error \\ \hline conv+relu & 3.914 & 3.984 & 1.76\% \\ dwconv+relu & 5.578 & 4.814 & 15.8\% \\ conv+relu & 8.020 & 7.739 & 3.62\% \\ dwconv+relu & 8.682 & 8.193 & 5.96\% \\ conv+relu & 2.649 & 2.422 & 9.36\% \\ dwconv+relu & 5.211 & 4.428 & 17.6\% \\ conv+relu & 1.225 & 0.930 & 31.8\% \\ dwconv+relu & 1.541 & 1.285 & 20.0\% \\ conv+relu & 2.030 & 1.643 & 25.5\% \\ dwconv+relu & 7.824 & 6.549 & 19.5\% \\ conv+relu & 3.450 & 2.933 & 17.6\% \\ dwconv+relu & 0.174 & 0.149 & 16.8\% \\ conv+relu & 1.179 & 0.972 & 21.3\% \\ dwconv+relu & 2.879 & 2.448 & 17.6\% \\ conv+relu & 12.394 & 11.324 & 9.45\% \\ dwconv+relu & 0.524 & 0.466 & 12.4\% \\ conv+relu & 14.112 & 12.976 & 8.76\% \\ dwconv+relu & 0.906 & 0.771 & 17.5\% \\ conv+relu & 12.065 & 11.095 & 8.74\% \\ dwconv+relu & 1.108 & 0.944 & 17.3\% \\ conv+relu & 14.446 & 13.327 & 8.39\% \\ dwconv+relu & 0.409 & 0.367 & 11.5\% \\ conv+relu & 12.240 & 11.357 & 7.77\% \\ dwconv+relu & 0.299 & 0.267 & 11.7\% \\ conv+relu & 4.349 & 4.019 & 8.20\% \\ dwconv+relu & 0.110 & 0.093 & 17.6\% \\ conv+relu & 4.353 & 3.902 & 11.6\% \\ global-pool & 0.071 & 0.062 & 14.4\% \\ fully connected & 0.664 & 0.636 & 4.42\% \\ \hline \multicolumn{4}{c}{\(=\) error \(\leq\) 5\%} & \multicolumn{2}{c}{\(=\) 5\% \(<\) error \(\leq\) 10\%} \\ \multicolumn{4}{c}{\(=\) 10\% \(<\) error \(\leq\) 20\%} & \multicolumn{2}{c}{\(=\) error \(>\) 20\%} \\ \hline \hline \end{tabular}
\end{table}
Table 2. MobileNetv1 individual kernel energy consumption.
Figure 1. Comparison of time-granularity between the device’s built-in current sensor and external power monitor. Tested mobile device: Huawei P40 Lite
Figure 2. Comparison between an older snap-type battery connector and a modern FPC connector.
external power monitor. However, advanced smartphones such as the Galaxy S20 use a proprietary, tiny, and delicate Flexible Printed Circuit (FPC) battery connector, as shown in Fig. 2(b). The FPC connector's small size and delicate construction make it challenging to work with, requiring specialized tools and expertise to connect it to an external power monitor that offers higher accuracy. This might be one of the main reasons that recent research papers typically rely on the built-in current sensor for measuring coarse-grained power consumption on mobile and edge devices (Shi et al., 2017; Wang et al., 2018; Wang et al., 2018).
_Consequently, although external power monitors with high sampling rates show promising accuracy in measurement, the challenges associated with connecting newer commercial devices to such external monitors can be a significant barrier._
**C2: Extensibility.** In recent years, we have witnessed a significant surge in the development of DNNs, particularly those specifically designed to address the increasing demand for mobile and edge devices. This has led to the invention of several milestone Convolutional Neural Network (CNN) models, including, but not limited to, AlexNet, DenseNet, GoogleNet, and MobileNet. Moreover, the advent of Neural Architecture Search (NAS) has accelerated advancements in the design and optimization of novel CNN models by automating the design process and facilitating customization. While measuring the energy consumption of DNN inferences on real devices is highly desirable for various tasks, such as serving as a ground-truth dataset for training energy predictors for on-device deep learning, it is practically infeasible and excessively time-consuming to measure all DNN models individually. For example, we spend approximately 2.1 days to measure 200 models on a single device, while ProxylessNAS (Shi et al., 2017) explores nearly 0.3 million models in a single round of search. This predicament leads to a critical challenge: how can we ensure the observations and measurements derived from a limited DNN model space can be extensible to new (unseen) DNN models?
_Consequently, the huge and expansive model-design space significantly challenges the extensibility of energy measurements on real mobile and edge devices._
**C3: Understandability.** In addition to valuing research aimed at reducing the energy consumption of DNN executions, it is essential that our measurement study is accessible to a wide audience, such as end-users with non-technical backgrounds. As we presented in Section 1, end-users consider the energy efficiency of their devices as one of the most critical factors. Results that are easy to understand can help end-users make informed purchasing decisions. For instance, presenting an energy efficiency score, ranging from 0 to 100, could be more straightforward and easier to understand than simply telling the end-user that the device will consume 120.090 mJ per inference to run MobileNetv1 with CPUs. Consequently, end-users can compare different devices and choose the one that best suits their needs. On the other hand, for the research community, an easily adoptable measurement method or energy dataset can accelerate progress in developing energy-efficient DNN models, designing energy predictors, or searching for DNN models with energy/power constraints within a vast model-design space. Currently, due to a lack of sustainability mindset, the optimization of DNNs is primarily driven by research focused on achieving high inference accuracy and minimizing latency.
_We hope our work can help shift the mindset of both end-users and the research community towards sustainability, a principle that drives our research._
## 3. Energy Measurement and Dataset
We conduct a measurement study and create three energy datasets: kernel-, model-, and application-level datasets. Overall, we collect fine-grained power traces and accurate energy consumption data for (1) 16 types of kernels with 1,847 unique configurations, (2) nine SOTA DNN models with 50 variants each, and (3) six widely-used edge AI applications on eight commercial edge devices.
### Energy Measurement
We develop a reproducible energy measurement methodology, which facilitates the collection of accurate and fine-grained power consumption of kernels, DNN models, and end-to-end edge AI applications on modern edge devices.
**Proposed solution for C1: accuracy.** As discussed in Section 2, although external power monitors demonstrate promising accuracy and time granularity for tracing power variations within a DNN execution, establishing a physical connection between a modern edge device with an FPC battery connector and such a monitor is nontrivial. To address this challenge, we first use a mechanic mobile device DC power cable (Wang et al., 2018) that is designed to fit multiple device models, including those with FPC connectors, to connect the tested devices to an external power monitor. This method requires little effort on the part of the benchmarking researchers. However, we find that the tested devices cannot boot due to the lack of proprietary battery management system (BMS) chips. BMS is an electronic system that manages and monitors the performance and safety of a device battery, and is typically attached to the battery in modern edge devices. The device OS must communicate with the proprietary BMS to check the status and safety of the battery before allowing the phone to power on. Hence, the device cannot boot if its battery is disconnected or an unauthorized battery is connected. We have studied multiple alternatives to address this issue, and we find that the most effective method is to segregate the BMS chip from the device battery without tearing it down, and use it as a bridge to connect the device to the external power monitor. This method strikes a good balance between the effort required and reproducibility. We have validated this method on eight different modern smartphones, as illustrated in Fig. 3. All of the tested devices are able to power on with full functionality using this method.
_We have developed detailed documentation that provides step-by-step instructions on how to implement this method on other modern edge devices, which will help the community reproduce our measurements and apply this technique to their own research or practical applications._
**Rules for measurement.** Since the power consumption of mobile and edge devices can be easily influenced by the environment, such as heat dissipation and background activities, it is crucial to create specific rules for measurement. These rules can bolster the consistency and reliability of power measurements across diverse devices and testing conditions. By controlling and accounting for environmental factors, we can mitigate their influence on our power data collection, and thus gain a more accurate understanding of the
inherent power and energy consumption characteristics of DNN executions. To this end, we establish the following set of rules for power measurements; in our experience, these rules effectively ensure consistency and reproducibility4. A minimal sketch of scripting some of these settings over adb is given after the list.
Footnote 4: Although understanding how the power consumption of DNN executions may vary with noisy background activities is important (since it is close to practical use cases), it is equally crucial to isolate and understand the intrinsic power characteristics of the DNNs, independent of these variations. This is one of the primary goals of our measurement study in this paper.
* Disable adaptive brightness and set the display to the lowest brightness level.
* Turn off WiFi, Bluetooth, cellular network, and Near-Field Communication (NFC) interfaces to minimize the interference on the accuracy of power measurements.
* Shut down and disable any background applications and services to minimize the interference on the accuracy of measurements.
* Conduct measurements with a room-temperature between 20 and 25\({}^{\circ}\)C.
* Maintain an air gap with proper ventilation to regulate the temperature of the smartphone and prevent run-time thermal throttling.
* Configure the screen refresh rate to 60 Hz.
* Configure the camera sample rate to 15 frames per second, if the executed edge AI applications require the use of the device camera.
* Set up a 2-minute cooldown interval between individual tests to allow the device to cool down.
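The sketch below is purely illustrative and is not part of our measurement pipeline: command availability and exact syntax vary across Android versions and vendors, so each setting should still be verified on the device at hand.

```python
import subprocess

# Illustrative adb shell commands (availability varies by Android version/vendor).
ADB_SHELL_CMDS = [
    "settings put system screen_brightness_mode 0",  # disable adaptive brightness
    "settings put system screen_brightness 1",       # lowest brightness level
    "svc wifi disable",
    "svc data disable",                              # cellular data
    "svc bluetooth disable",
    "svc nfc disable",
]

def apply_measurement_settings(serial=None):
    """Push the display/radio settings above to a connected device over adb."""
    base = ["adb"] + (["-s", serial] if serial else [])
    for cmd in ADB_SHELL_CMDS:
        subprocess.run(base + ["shell", cmd], check=True)
```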
**Devices and tools.** We select eight modern edge devices with eight distinct mobile SoCs that include at least one high-end and one mid-range SoC from leading chipset vendors, such as Qualcomm, HiSilicon, and MediaTek. Their specifications are summarized in Table 3. The selected mobile SoCs serve as representative examples of advanced and widely used mobile AI silicon from the past two years. Unless otherwise stated, all power consumption data are measured by the Monsoon power monitor with a 5000 Hz sampling rate. Note that other power monitors with sub-millisecond sampling are also applicable. The latency of DNN inferences, including both model-level and kernel-level latencies, is measured by the TFLite benchmark tool [46].
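As a rough illustration of how per-inference energy can be derived from such a trace (a sketch only; aligning the inference window with the trace timestamps and subtracting idle-power baselines require additional care):

```python
import numpy as np

def inference_energy_mJ(power_mW, fs_hz, t_start_s, t_end_s):
    """Integrate a power trace (in mW, sampled at fs_hz) over one window.
    Since mW x s = mJ, the result is the energy of that window in mJ."""
    t = np.arange(len(power_mW)) / fs_hz
    sel = (t >= t_start_s) & (t <= t_end_s)
    return np.sum(power_mW[sel]) / fs_hz

# e.g., average per-inference energy over 100 back-to-back runs at 5000 Hz:
# energy_mJ = inference_energy_mJ(trace, 5000, t0, t0 + 100 * latency_s) / 100
```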
### Energy Dataset
**Kernel-level.** As we introduced in Section 2, kernels constitute the fundamental units of execution in deep learning frameworks, with their types and configuration parameters significantly influencing the energy consumption during DNN executions. Table 4 illustrates that conv+bn+relu kernels typically consume more energy than
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Model & OnePlus & Xiaomi & Huawei & Huawei & Huawei & Huawei & Xiaomi & Motorola \\ & 8 Pro & Redmi Note8 & Mate40 Pro & P40 Pro & P40 Lite & P40 Lite & Redmi K30 Ultra & One Macro \\ \hline SoC & SD 865 & SD 665 & Kirin 9000 & Kirin 990 5G & Kirin 810 & Kirin 710F & Dimensity1000+ & Helio P70 \\ \hline Vendor & Qualcomm & Qualcomm & HiSilicon & HiSilicon & HiSilicon & HSiSilicon & MediaTek & MediaTek \\ \hline & M & A77+A55 & A73+A53 & A77+A55 & A76+A55 & A73+A53 & A77+A55 & A73+A53 \\ CPU & C1 & 4+4 & 4+4 & 4+4 & 4+4 & 2+6 & 4+4 & 4+4 & 4+4 \\ & F1 & 2.84 GHz & 2.0 GHz & 3.13 GHz & 2.86 GHz & 2.27 GHz & 2.2 GHz & 2.6 GHz & 2.1 GHz \\ \hline GPU & Adreno 650 & Adreno 610 & Mali G78 & Mali G76 & Mali G52 & Mali G51 & Mali G77 & Mali G72 \\ \hline Dedicated & Hexagon698 & Hexagon686 & Ascendlet Lite+Tiny & Lite+Tiny & D100Lite & MediaTek3.0 & MediaTek \\ AI & DSP & DSP & NPU & NPU & NPU & None & APU & APU \\ accelerator & & & Da Vinci 2.0 & Da Vinci & Da Vinci & & & \\ \hline OS (Android) & 10 & 10 & 10 & 10 & 10 & 10 & 9 \\ \hline NNAPI & & & Yes & Yes & Yes & Yes & Yes & Yes \\ support & Yes & Yes & Yes & Yes & Yes & Yes & Yes \\ \hline Battery & C2 & 4510 mAh & 4500 mAh & 4400 mAh & 4200 mAh & 4200 mAh & 4000 mAh & 4500 mAh & 4000 mAh \\ & R & No & No & No & No & No & No & No \\ \hline Class & Flag & Mid-range & Flag & Flag & Mid-range & Mid-range & Flag & Mid-range \\ \hline \hline \end{tabular}
* SD: Snapdragon, M: Microarchitecture, C1: CPU Cores, F1: Maximum Frequency, S: Display Size, C2: Battery Capacity, and R: If battery is removable.
\end{table}
Table 3. Specifications of Measured Edge Devices and Chipsets
Figure 3. Measured devices with segregated BMS chips.
other kernel types. Furthermore, the configuration for each kernel type varies. For conv+bn+relu and dwconv+bn+relu kernels, the primary configurations include input height and width (\(HW\))5, input channel number (\(C_{in}\)), output channel number (\(C_{out}\)), kernel size (\(KS\)), and stride (\(S\)). Table 5 presents a comparison of the energy consumption between two conv+bn+relu kernels with different configurations, both run on a mobile CPU. One kernel configuration consumes a considerable 125.232mJ of energy, whereas the other expends a mere 0.064mJ. As a result, examining the impact of kernel configurations on energy consumption lays the foundation for a comprehensive understanding of energy consumption during DNN executions on edge devices.
Footnote 5: In CNN models, input height usually is equal to input width.
To this end, we present our kernel-level energy dataset collected from real edge devices. To build the dataset, as presented in Table 4, we initially generate a large number of kernels with a variety of types (16 types for CPU and 10 types for GPU) featuring a range of configurations in the tflite format (e.g., 1032 conv+bn+relu and 349 dwconv+bn+relu kernels). The number of sampled configurations for each kernel type hinges on two main factors: its configuration dimension and its impact on the overall energy consumption during DNN executions (e.g., we observe that the conv+bn+relu kernel accounts for more than 70% of the total energy consumption in most SOTA CNN models on edge devices). These kernel configurations are randomly sampled in accordance with the sampling strategy proposed in [30]. Then, we measure the average power consumption and inference latency for each generated kernel running on individual edge devices. Each power and latency value is the average of at least 100 inference runs. We conduct these measurements independently on both CPUs and GPUs. As shown in Table 4, our kernel-level energy dataset spans a broad spectrum with different levels of energy consumption.
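The exact sampling strategy follows [30]; the sketch below only illustrates the general shape of the procedure, with hypothetical value ranges, for generating conv+bn+relu configurations before exporting them to tflite and measuring them.

```python
import random

# Illustrative configuration sampler (ranges are examples, not the ones in [30]).
HW_CHOICES = [7, 14, 28, 56, 112, 224]
KS_CHOICES = [1, 3, 5, 7]
S_CHOICES = [1, 2]

def sample_conv_bn_relu():
    return {
        "HW": random.choice(HW_CHOICES),
        "C_in": random.randint(3, 960),
        "C_out": random.randint(8, 960),
        "KS": random.choice(KS_CHOICES),
        "S": random.choice(S_CHOICES),
    }

# e.g., 1032 conv+bn+relu configurations; each is later exported to .tflite and
# its power/latency averaged over at least 100 inference runs on every device.
configs = [sample_conv_bn_relu() for _ in range(1032)]
print(configs[0])
```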
In Fig. 4, we seek to investigate how the five configurations (i.e., \(HW\), \(C_{in}\), \(C_{out}\), \(KS\), and \(S\)) impact the energy consumption of conv+bn+relu. In each evaluation, we vary a single configuration (e.g., \(HW\)) while maintaining the other four constants. The results reveal that the relationship between the energy consumption and the configurations is non-linear. As illustrated in Fig. 4(a), the energy consumption demonstrates a progressive increase with the growth of \(HW\). For instance, when running on the mobile CPU, the energy consumption of conv+bn+relu increases by approximately 1.85\(\times\) (0.077mJ to 0.22mJ), 3.2\(\times\) (0.22mJ to 0.93mJ), 4.37\(\times\) (from 0.93mJ to 5.0mJ), 3.36\(\times\) (5.0mJ to 21.81mJ), as \(HW\) doubles from 14 to 28, 28 to 56, 56 to 112, and 112 to 224, respectively. While operating on the mobile GPU, the energy consumption of the conv+bn+relu exhibits a similar trend but at a different rate. In this case, its energy consumption increases by roughly 1.21\(\times\) (0.013mJ to 0.029mJ), 1.89\(\times\) (0.029mJ to 0.083mJ), 3.79\(\times\) (0.083mJ to 0.399mJ), 3.98\(\times\) (0.399mJ to 1.988mJ) when \(HW\) doubles from 14 to 28, 28 to 56, 56 to 112, and 112 to 224, respectively. Moreover, we find that \(KS\) has the most significant impact on the energy consumption of conv+bn+relu. This is because the majority of energy consumption of kernel conv+bn+relu is attributed to convolutional layer. Within the convolutional layer, \(KS\) has the most significant impact due to its quadratic relationship with computational cost, while other parameters have a linear relationship. Specifically, when doubling each of the configuration, \(KS\) (from 3 to 5), \(HW\) (from 14 to 28), \(C_{in}\) (from 128 to 256), and \(C_{out}\) (from 128 to 256), the corresponding increases in energy consumption are approximately 2.08\(\times\), 1.85\(\times\), 1.05\(\times\), and 1.18\(\times\) respectively. This finding demonstrates the disproportionate influence of \(KS\) on energy consumption relative to the other parameters.
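The standard operation count for a convolutional layer makes the special role of the kernel size explicit (this is the usual textbook approximation rather than a formula taken from our measurements):

\[ \mathrm{MACs}\;\approx\;H_{out}\cdot W_{out}\cdot C_{in}\cdot C_{out}\cdot KS^{2},\qquad H_{out}=W_{out}\approx HW/S, \]

so doubling \(KS\) roughly quadruples the multiply-accumulate count of the convolution, whereas doubling \(C_{in}\) or \(C_{out}\) only doubles it, which is consistent with the measured energy trends above.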
As shown in Fig. 5, executing a kernel on a mobile GPU does not always consume less energy than executing it on a mobile CPU, especially when the \(HW\) and \(KS\) parameters are on the lower side. For instance, when testing on the Huawei P40 Pro with the kernel configurations of \(HW=1\), \(KS=1\), \(C_{in}=480\), and \(C_{out}=20\), we find that the energy consumed by the GPU exceeds that of the CPU by more than a factor of 6.6. While the magnitude of this difference may vary across different edge devices, the overall pattern of increased energy consumption on the GPU under these conditions appears to be consistent. Typically, GPUs are more energy-efficient than CPUs as they exhibit lower inference latency, especially for large kernels that require high computational power. For small kernels, however, the inference latency on GPUs and CPUs does not show a significant difference, and GPUs might be less energy-efficient than CPUs. This is because the power consumption of GPUs is usually higher than that of CPUs, attributed to their greater I/O bandwidth and multiple cores designed for parallel computing [47].
_Insights: This observation is crucial for designing effective kernel execution scheduling strategies on edge devices. Rather than only considering the type of kernel, the specific configuration of the kernel should also be taken into account when deciding where to execute it (e.g., on the mobile CPU or GPU)._
In addition, our kernel-level dataset includes fine-grained power traces for each individual kernel, referred to as _power slices_ in this paper. These collected power slices provide valuable insights for analyzing intra-kernel power variations. One of the primary observations in power slices is that the intra-kernel power variation exhibits a "high-initial, flat-later" pattern, illustrated in Fig. 6, when the kernel is executed on a mobile CPU and the execution time exceeds a certain threshold. Fig. 6(a) reveals an initial power surge at the beginning of kernel execution on Huawei P40 Pro, equipped with a Kirin 990 5G chipset. This ramp-up phase continues for approximately 10.5ms. Following the initial ramp-up, the power
Figure 4. Energy consumption of conv+bn+relu vs. kernel configurations.
Figure 5. Comparison of energy consumption of conv+bn+relu with identical configurations on mobile CPU and GPU (\(HW=1,KS=1,S=1\), measured device: Huawei P40 Pro). Using the mobile GPU to execute the kernel does not always save the device’s energy compared to using the mobile CPU.
Figure 6. Measured fine-grained power slices for conv+bn+relu with \(HW=112,C_{in}=20,C_{out}=120,S=1\). The intra-kernel power variation exhibits a “high-initial, flat-later” pattern.
consumption settles into a more consistent, flatter profile that persists until the end of the kernel's execution. We conduct validations across varying kernel configurations, with varying execution time, as well as on various edge devices to ascertain the consistency of this observation. We find the same pattern on the measured devices, as demonstrated in Figs. 6(b) and 6(c). Interestingly, devices powered by chipsets from the same vendor (e.g., Kirin 990 5G and Kirin 810) exhibit a nearly identical ramp-up time (10.5ms and 10.2ms), while the Snapdragon 855's6 ramp-up time is around 6.2ms. The "high-initial, flat-later" pattern primarily arises due to power management techniques implemented in modern processors on edge devices. For instance, the Dynamic Voltage and Frequency Scaling (DVFS) technique can dynamically adjust a processor's voltage and frequency during runtime, based on computational demands. At the beginning of a computationally intensive kernel execution, DVFS may increase the frequency to ensure the task's timely completion. It then lowers the frequency once the task becomes more manageable, resulting in a relatively flat power consumption profile. The variation in ramp-up times among different chipsets and vendors can be attributed to the unique DVFS strategies they employ.
Footnote 6: This device is not listed in Table 3.
_Insights: The ramp-up time can negatively impact power and energy efficiency on edge devices, particularly when executing kernels with relatively small configurations (where the execution time is less than the ramp-up time). As illustrated in Fig. 6(b), the ramp-up phase causes the conv+bn+relu kernel with \(HW=112\), \(C_{in}=20\), \(C_{out}=120\), \(S=1\) to consume 25.3% and 19.6% more power than kernels with larger \(KS\)s, specifically 3 and 5. Nevertheless, kernels with smaller configurations are often preferred for implementation on edge devices to save computational resources. This highlights the critical importance of optimizing the ramp-up phase for edge devices. For instance, if the aforementioned kernel with \(KS=1\) can be executed directly in the flat phase, it can result in a reduction of energy consumption by 23.1%._
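As a simple way to quantify this effect from a power slice, the heuristic below estimates the ramp-up duration and the extra energy it contributes relative to the flat level; the thresholds and names are illustrative choices on our part, not part of the dataset tooling.

```python
import numpy as np

def rampup_stats(power_w, fs_hz=5000, tail_frac=0.5, tol=0.05):
    """Estimate ramp-up duration (s) and its extra energy (mJ) for one power
    slice, taking the mean of the last `tail_frac` of the trace as the flat
    level and ending the ramp at the first sample within `tol` of that level."""
    p = np.asarray(power_w, dtype=float)
    flat = p[int(len(p) * (1 - tail_frac)):].mean()
    within_band = np.abs(p - flat) <= tol * flat
    ramp_end = int(np.argmax(within_band))  # 0 if the trace starts at the flat level
    extra_mj = np.sum(p[:ramp_end] - flat) / fs_hz * 1e3
    return ramp_end / fs_hz, extra_mj
```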
**Model-level.** We introduce our model-level energy dataset, which collects nine SOTA DNN models. These models represent a mix of both manually-designed and NAS-derived models, each with distinct kernel types and configurations. For each model, we generate 50 variants for conducting power and energy measurements by re-sampling the \(C_{out}\) and \(KS\) for each layer. Specifically, we randomly sample the new output channel number from a range of 20% to 180% of the original \(C_{out}\), while the \(KS\) is sampled from the set of values: \(\{1,3,5,7,9\}\). Table 6 summarizes the details of the measured DNN models in the dataset. In general, running these models on mobile GPUs results in an energy consumption reduction of approximately 49% to 79%, compared to the execution on mobile CPUs.
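A minimal sketch of the variant-generation step described above is given below; the layer dictionaries and field names are hypothetical, since the released dataset stores the variants directly as tflite models.

```python
import random

KS_SET = [1, 3, 5, 7, 9]

def make_variant(base_layers):
    """Re-sample C_out (20%-180% of the original) and KS for each layer."""
    variant = []
    for layer in base_layers:
        new = dict(layer)
        new["C_out"] = max(1, int(round(layer["C_out"] * random.uniform(0.2, 1.8))))
        if "KS" in layer:
            new["KS"] = random.choice(KS_SET)
        variant.append(new)
    return variant

base = [{"type": "conv+bn+relu", "C_out": 64, "KS": 3}, {"type": "fc", "C_out": 1000}]
variants = [make_variant(base) for _ in range(50)]  # 50 variants per base model
```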
Fig. 7 presents the energy consumption breakdown of individual models by kernel types. The four kernel types that consume the most energy are conv+bn+relu, dwconv+bn+relu, fc, and concat. They account for 79.2%, 14.79%, 2.03%, and 1.5% of the total model energy consumption on mobile CPUs, respectively. On mobile GPUs, these kernels represent 78.17%, 10.91%, 4.01%, and 4.28% of the total model energy consumption. Furthermore, in most models, conv+bn+relu and dwconv+bn+relu account for the main energy percentages. On average, conv+bn+relu and dwconv+bn+relu together account for 93.97% and 87.74% of the total model energy consumption on the mobile CPU and GPU, respectively.
In addition, similar to the kernel-level dataset, our model-level dataset collects fine-grained power slices for all the measured DNN models. For instance, Fig. 8 illustrates the measured power slices of two AlexNets with distinct kernel configurations, whose specific configurations are detailed in Table 7. These model-level power slices offer (1) a holistic view of the precise power variations associated with each kernel within the DNN model, (2) the temporal and sequential aspects of kernel executions, and (3) a visual approach to easily identify the power and energy bottlenecks within a specific DNN model.
**Applications-level.** While the kernel- and model-level datasets can be beneficial for researchers and developers in understanding, modelling, and optimizing power and energy efficiency of DNN executions, end-users generally have a greater interest in the energy consumption of those frequently used AI applications on their devices. This is because the application's energy efficiency directly affects device's battery life, which is critical to the user experience. To this end, we create an application-level dataset, which uncovers the end-to-end energy consumption of six popular edge AI applications, covering three main categories: vision-based (object detection, image classification, super resolution, and image segmentation), NLP-based (natural language question answering), and voice-based applications (speech recognition). As shown in Table 8,
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{Energy consumption (mJ)} & \multirow{2}{*}{Avg. FLOPs} \\ & CPU & GPU & \\ & min - max & min - max & (M) \\ \hline AlexNets & 36.97 - 355.58 & 7.69 - 91.80 & 815 \\ DenseNets & 231.93 - 488.87 & 66.21 - 133.58 & 1760 \\ GoogleNets & 145.03 - 262.45 & 52.66 - 90.04 & 1535 \\ MobileNetv1s & 53.59 - 136.79 & 17.36 - 42.44 & 519 \\ MobileNetv2s & 30.85 - 175.07 & 8.81 - 48.35 & 419 \\ ProxylessNASs & 58.34 - 162.11 & 17.70 - 49.29 & 526 \\ ResNet18s & 251.52 - 1432.67 & 64.19 - 391.97 & 3888 \\ ShuffleNetv2s & 25.26 - 81.41 & - & 319 \\ SqueezeNets & 92.55 - 388.16 & 34.55 - 134.65 & 1486 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Measured DNN models in our model-level dataset.
\begin{table}
\begin{tabular}{c c c} \hline \hline Kernels & AlexNet 1 (Fig. 8(a)) & AlexNet 2 (Fig. 8(b)) \\ \hline conv+relu 1 & (224, 39, 5, 4) & (224, 3, 70, 7, 4) \\ maxpool 1 & (224, 89, 3, 2) & (55, 70, 3, 2) \\ conv+relu 2 & (28, 89, 153, 7, 1) & (28, 70, 115, 7, 1) \\ maxpool 2 & (28, 153, 3, 2) & (28, 115, 3, 2) \\ conv+relu 3 & (13, 153, 460, 5, 1) & (13, 115, 345, 5, 1) \\ conv+relu 4 & (13, 460, 230, 1, 1) & (13, 345, 128, 5, 1) \\ conv+relu 5 & (13, 230, 204, 7, 1) & (13, 128, 307, 3, 1) \\ maxpool 3 & (13, 204, 3, 2) & (13, 307, 3, 2) \\ global-pool 1 & (1, 204) & (1, 307) \\ fc 1 & (204, 3686) & (307, 3686) \\ fc 2 & (3686, 6144) & (3686, 6963) \\ fc 3 & (3686, 1000) & (3686, 1000) \\ \hline Total energy (mJ) & 242.888 & 151.414 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Kernel configurations of two AlexNets.
we measure the power and energy consumption of each application with multiple reference DNN models that operate under four distinct computational settings, including CPU with a single thread, CPU with four threads, GPU delegate, and the NNAPI delegate. The dataset can serve as a resource for exploring the energy consumption distribution throughout the end-to-end processing pipeline of an edge AI application. For example, we can use the dataset to examine the energy consumed in generating image frames, converting these frames from YUV to RGB, and conducting DNN inference within an object detection application. Fig. 9 depicts the energy consumption breakdown based on the processing phases in the object detection application. It demonstrates that our application-level dataset can provide interpretable observations for identifying which phase is the primary energy consumer in the end-to-end edge AI application.
Figure 8. The model-level fine-grained power slices provided by our dataset can offer (1) a holistic view of the precise power variations associated with each kernel within the DNN model, (2) the temporal and sequential aspects of kernel executions, and (3) a visual approach to easily identify the power and energy bottlenecks within a specific DNN model.
Figure 7. DNN model energy consumption percentage breakdown. The top four most energy-consuming kernel types are conv+bn+relu (conv), dwconv+bn+relu (dwconv), fc, and concat.
Figure 9. End-to-end energy consumption breakdown for object detection and classification (application-level dataset).
Additionally, the application-level dataset offers essential inputs for our edge device scoring system (Section 5). Due to the page limit, we will not present additional measurement results in this paper.
**Time cost.** Finally, in Table 9, we report the time cost associated with performing measurements and creating our datasets. On a single edge device, we spend 23.1, 4.7, and 1.5 days, respectively, on (1) measuring the power and energy consumption of all the generated kernels, DNN models, and edge AI applications, and (2) creating the corresponding power and energy datasets. We will open-source our datasets and code for other researchers and developers. Collectively, we anticipate that the community will collaborate to create a larger scale energy dataset for a variety of edge devices.
## 4. Energy Prediction
In this section, we present our proposed solution to address **C2: extensibility**. To extend the applicability of our measurement study to a wider variety of DNN models, including those not present in our dataset, we design and implement a kernel-level energy predictor which can accurately predict the energy consumption of new DNN models on edge devices. The predictors are trained using our kernel-level dataset and evaluated by our model-level dataset.
### Design and Implementation
Our designed kernel-level energy prediction method is inspired by nn-meter (Wang et al., 2017) which proposed a kernel-based latency predictor for DNN models. However, nn-meter does not support energy prediction. We propose using a rationale akin to that of nn-meter for the design of our kernel-level energy predictor, especially given that kernels run sequentially on current edge devices. The key contributions of our proposed energy predictor include: (1) being the first energy predictor for modern edge devices, achieving an accuracy of 86.2% (making it the most accurate energy predictor for edge devices to date) for unseen DNNs (i.e., those with unfamiliar kernel configurations); and (2) being the first kernel-level energy predictor for DNN executions on modern edge devices. Notably, most existing research primarily uses FLOPs to estimate the energy consumption of DNN executions, resulting in generally low prediction accuracy for unseen DNN models. The core of our kernel-level energy prediction method is that we build and train a predictor for each type of kernel (e.g., conv+bn+relu) using the kernel-level energy dataset presented in Section 3. The total energy consumption of a DNN model is then predicted by summing the estimated energy consumption of all kernels within that DNN model. Our energy predictors are implemented using the random forests regression, a machine learning algorithm known for its robustness, handling of high dimensional spaces, and its capability to model complex non-linear relationships.
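A minimal sketch of this two-step scheme is shown below; the feature set, hyperparameters, and data layout are illustrative assumptions rather than the exact implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

FEATURES = ["HW", "C_in", "C_out", "KS", "S"]  # assumed per-kernel features

def train_predictors(kernel_dataset):
    """kernel_dataset: {kernel_type: [(config_dict, energy_mJ), ...]}.
    Trains one random-forest energy regressor per kernel type."""
    predictors = {}
    for ktype, samples in kernel_dataset.items():
        X = np.array([[cfg.get(f, 0) for f in FEATURES] for cfg, _ in samples])
        y = np.array([energy for _, energy in samples])
        predictors[ktype] = RandomForestRegressor(n_estimators=100).fit(X, y)
    return predictors

def predict_model_energy(predictors, model_kernels):
    """Model energy = sum of the predicted energies of its kernels
    (kernels currently execute sequentially on edge devices)."""
    total = 0.0
    for ktype, cfg in model_kernels:
        x = [[cfg.get(f, 0) for f in FEATURES]]
        total += float(predictors[ktype].predict(x)[0])
    return total
```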
### Performance Evaluation
**Comparison baselines.** We implement two baselines to compare the energy prediction accuracy: (1) FLOPs-based predictor: recent work has leveraged FLOPs to estimate the energy consumption of DNN inference (Wang et al., 2017). We train FLOPs predictors using linear regression. Given the FLOPs of a DNN model, the predictor can estimate its inference energy consumption. (2) BIC-based predictor: to demonstrate the critical role that our fine-grained kernel-level dataset plays in accurately predicting energy consumption, we also train energy predictors using the power data sampled by the edge device’s built-in current (BIC) sensor. To ensure a fair comparison (i.e., to confirm that any prediction errors in energy consumption are largely due to the inaccuracy of the built-in current sensor’s power measurement), we take the following steps: (1) using the ground-truth latency when calculating the energy consumption in the BIC training dataset, and (2) training the predictor with the same amount of data, covering the same number of kernels and identical configurations, and using the random forests regression.
**Metrics.** The prediction performance is evaluated through the root mean square error (RMSE), root mean square percentage error (RMSPE), \(\pm 10\%\), and \(\pm 15\%\) accuracy. The latter two metrics represent the percentage of models whose predicted energy consumption lies within the specified error bounds relative to actual measured energy consumption. In this paper, \(\pm 15\%\) accuracy is the default metric. Smaller RMSE/RMSPE and larger \(\pm 10\%\)/\(\pm 15\%\) indicate better prediction performance.
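For completeness, one common way to compute these four metrics is sketched below; the RMSPE definition used here (a root mean square of relative errors) is our assumption of the intended formula.

```python
import numpy as np

def prediction_metrics(y_true, y_pred):
    """RMSE, RMSPE (%), and the share of predictions within +/-10% and +/-15%."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rel = err / y_true
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "RMSPE": float(np.sqrt(np.mean(rel ** 2)) * 100),
        "acc_10": float(np.mean(np.abs(rel) <= 0.10) * 100),  # +/-10% accuracy
        "acc_15": float(np.mean(np.abs(rel) <= 0.15) * 100),  # +/-15% accuracy
    }
```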
**Comparison results on unseen DNN models.** For the comparison study, we select AlexNets, GoogleNets, MobileNetv1s, MobileNetv2s, and ShuffleNetv2s. As the FLOPs-based predictor requires training with model-level data (i.e., the FLOPs of DNN models), we adopt a leave-one-out cross-validation approach. We set aside one model (e.g., 50 models of GoogleNets) as the test set, and use the remaining four models (e.g., 50 models each of AlexNets, MobileNetv1s, MobileNetv2s, and ShuffleNetv2s) as the training set to train the predictor. Our kernel-level predictor and the BIC-based predictor do not require model-level data for training.
The comparison results are depicted in Fig. 10. Our kernel-level energy predictor consistently outperforms the other two baselines, delivering the highest prediction accuracy. Those baselines fail to achieve comparable levels of prediction accuracy on unseen DNN models. Specifically, our predictor achieves an average prediction accuracy of 86.2%, significantly higher than FLOPs, 31.3%, and BIC, 12.7%. The poor prediction accuracy of BIC, particularly on mobile GPU, demonstrates the indispensability of a fine-grained power and energy dataset when training a reliable energy predictor for edge devices. The significant drop in prediction performance of BIC on the mobile GPU is due to the fact that DNNs typically achieve much shorter execution time on the GPU compared to the CPU. This shorter execution time on the GPU necessitates a higher power sampling rate. Moreover, the performance gap between our kernel-level predictor and the FLOPs-based predictor reflects the gain derived through considering the runtime optimization of edge devices, such as kernel fusion. Table 10 presents the prediction results evaluated across all nine DNN models in our model-level dataset. In addition, we calculate the kernel configuration overlaps between the training (kernel-level dataset) and the evaluation (model-level dataset) datasets. Results show that our energy predictors have only seen 1.1% (CPU) and 1.8% (GPU) of the configurations in the evaluation dataset, which further attests the effectiveness of our kernel-level energy predictors on unseen models.
**Discussion.** Our kernel-level energy predictor exhibits slightly lower prediction accuracy compared to the latency predictor developed in nn-meter (Wang et al., 2017). This might primarily be due to the fact that (1) nn-meter manually sets CPU frequency of the measured device to a fixed value (2.42GHz) when profiling the latency for building
the training dataset and evaluating the prediction accuracy. This creates a more controlled environment for latency measurement and prediction. However, to ensure practicality, our kernel-level energy predictor does not establish a fixed CPU frequency during energy measurement and prediction. This results in greater variability and potential uncertainty in the energy prediction, yet it more accurately reflects real-world usage scenarios where the CPU frequency is typically dynamic. (2) The scale of our energy training dataset is less extensive than that of the latency training dataset in nn-meter, as collecting fine-grained power data is significantly more time-consuming than profiling latency data, particularly on modern edge devices. Hence, we anticipate the community will collectively collaborate to further enhance the scale of our datasets.
## 5. Scoring System
In this section, we introduce our method to tackle challenge **C3: understandability**. We develop a scoring system for diverse edge devices by leveraging our application-level dataset. To ensure that the energy efficiency assessment result is accessible to a broad audience, in particular, edge device end-users with non-technical backgrounds, we develop two scoring metrics, namely _power consumption score (PCS)_ and _inference energy consumption score (IECS)_. These two scoring metrics help to distill the power and energy efficiency of a device in an intuitive and understandable way.
**PCS.** The PCS is designed to capture the aggregated power efficiency (PE) for running all six edge AI applications with 12 reference DNN models using CPU, GPU, and NNAPI delegates. It is calculated as \(PCS=\frac{\sum_{i=1}^{n}PE_{i}}{n}\), where \(n\) is the total number of reference DNN models and \(PE=(1-\frac{APC}{TDP})\times 100\). APC denotes the average power consumption for inferences. Thermal design power (TDP), measured in watts, represents the maximum power an edge device is designed to consume under normal operating conditions. The ratio \(\frac{APC}{TDP}\) provides an indication of how efficiently a device is using its power budget, with a lower ratio indicating better PE.
**IECS.** The IECS is designed to assess edge device energy efficiency, and calculated as the sum of inference energy consumption (IEC) for all six edge AI applications under CPU, GPU, and NNAPI delegates. IEC is defined as the number of inferences per unit of energy, where it factors in the trade-off between PE and inference latency. An edge device with a higher IECS is considered more energy-efficient.
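A compact sketch of both scores follows; reading IEC as "inferences per joule" is our interpretation of "number of inferences per unit of energy", and the numbers in the example are hypothetical.

```python
def pcs(avg_power_w, tdp_w):
    """Power consumption score: mean of PE = (1 - APC/TDP) * 100 over all runs."""
    return sum((1.0 - p / tdp_w) * 100.0 for p in avg_power_w) / len(avg_power_w)

def iecs(energy_per_inference_j):
    """Inference energy consumption score: sum of inferences-per-joule over runs."""
    return sum(1.0 / e for e in energy_per_inference_j)

# Hypothetical device with a 5 W TDP and three reference runs.
print(pcs([1.2, 2.0, 0.8], tdp_w=5.0))  # ~73.3
print(iecs([0.10, 0.25, 0.05]))         # 34.0 inferences per joule
```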
**Results.** Fig. 11 compares our proposed PCS with the AI inference score developed by AI Benchmark (Wang et al., 2018) across diverse edge devices. Note that the AI inference score does not take into account power and energy efficiency. The figure illustrates a tradeoff between AI performance, power consumption, and its selling price, where a larger ball in the figure represents a higher selling price for the device. An edge device that exhibits superior power efficiency
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Category} & \multirow{2}{*}{Application} & \multirow{2}{*}{No.} & \multirow{2}{*}{Reference DNN models} & \multicolumn{3}{c}{Delegate} & \multirow{2}{*}{Model size} \\ \cline{3-4} \cline{6-9} & & & CPU1 & CPU4 & GPU & NNAPI & (MB) \\ \hline \multirow{8}{*}{Vision-based} & \multirow{4}{*}{Image detection} & DNN1 & MobileNetv2, FP32, 300 \(\times\) 300 pixels & ✓ & ✓ & & ✓ & 24.2 \\ \cline{3-9} & & DNN2 & MobileNetv2, INTs, 300 \(\times\) 300 pixels & ✓ & ✓ & & ✓ & 6.9 \\ \cline{3-9} & & DNN3 & MobileNetv2, FP32, 400 \(\times\) 640 pixels & ✓ & ✓ & & ✓ & 12.3 \\ \cline{3-9} & & DNN4 & MobileNetv2, INTs, 640 \(\times\) 640 pixels & ✓ & ✓ & & ✓ & 4.5 \\ \cline{3-9} & & DNN5 & EfficientNet, FP32, 224 \(\times\) 224 pixels & ✓ & ✓ & ✓ & ✓ & 18.6 \\ \cline{3-9} & \multirow{4}{*}{Image classification} & DNN6 & EfficientNet, INT8, 224 \(\times\) 224 pixels & ✓ & ✓ & & ✓ & 5.4 \\ \cline{3-9} & & DNN7 & MobileNetv1, FP32, 224 \(\times\) 224 pixels & ✓ & ✓ & ✓ & ✓ & 4.3 \\ \cline{3-9} & & DNN8 & MobileNetv1, INTs, 224 \(\times\) 224 pixels & ✓ & ✓ & & ✓ & 16.9 \\ \cline{3-9} & & Super resolution & DNN9 & ESRGAN (Wang et al., 2018), FP32, 50 \(\times\) 50 pixels & ✓ & & ✓ & & 5 \\ \cline{3-9} & & Image segmentation & DNN10 & DeepLab3 (Feng et al., 2018), FP32, 257 \(\times\) 257 pixels & ✓ & & & 2.8 \\ \hline NLP-based & Natural language question answering & DNN11 & MobileBERT (Wang et al., 2018), FP32 & ✓ & ✓ & & ✓ & 100.7 \\ \hline Voice-based & Speech recognition & DNN12 & Conv-Actions-Frozen (Wang et al., 2018), FP32 & ✓ & ✓ & & ✓ & 3.8 \\ \hline \hline \end{tabular}
\end{table}
Table 8. Measured edge AI applications per device in our application-level dataset.
Figure 10. Comparison of energy prediction performance. Our predictors trained by the kernel-level dataset achieves the highest accuracy on unseen DNNs.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Kernels & Models & Applications \\ \hline Measure time per device & 23.1 days & 4.7 days & 1.5 days \\ \hline \hline \end{tabular}
\end{table}
Table 9. Time cost of measurements per edge device.
(higher PCS) and AI inference performance (higher AI performance score) is positioned towards the top right corner of the figure.
We find that scoring metrics significantly influence benchmarking results for edge devices. For instance, although the Huawei Mate40 Pro achieves the highest AI performance score, it holds the second worst PCS. Conversely, the Xiaomi Redmi Note8 attains the highest PCS while having the second lowest AI performance score. These observations highlight the need for the development of the IECS, which balances power efficiency with AI inference performance. In Fig. 11, the color of each ball indicates the IECS of each edge device. The Huawei P40 Pro presents the best equilibrium between AI performance and power efficiency, as indicated by its IECS and its position in the figure. Table 11 presents complete IECS results.
## 6. Discussion
**Limitations.** Our current measurements and datasets are on modern smartphones equipped with mobile CPUs and GPUs. While they cover a broad spectrum of edge hardware, they might not be comprehensive. To further increase the heterogeneity, we plan to extend our energy datasets by including other modern edge devices, such as Jetson Nano, Coral TPU, and Raspberry Pi 4.
The proposed kernel-level energy predictor is built offline and will not be updated dynamically during DNN executions. Naturally, the prediction accuracy could be further improved by factoring in more environmental complexities, such as the available computing and memory resources on an edge device. We will leave this as an area for our future work.
**Automated measurement.** Table 9 illustrates that the majority of the time cost comes from energy profiling. Developing an automated measurement and profiling method can enhance the time efficiency for collecting a large-scale and more comprehensive dataset that includes a variety of edge devices and kernel configurations. The kernel-level energy predictor could also benefit, as prediction accuracy may improve with more training data. Furthermore, automated profiling could help minimize human influence, leading to more accurate measurements.
**Energy prediction for concurrent executions.** Our energy predictor is premised on the fact that kernels currently run sequentially on edge devices. In the future, DNN inference may run concurrently on multi-core chipsets. Kernels processed in parallel might consume less energy than when processed sequentially, but more than individual kernels. The energy prediction performance for concurrent execution might be lower than for sequential execution, as concurrent operations introduce greater uncertainties in energy consumption. This aspect requires further experimentation.
## 7. Related Work
**Energy measurement for edge devices.** A number of research works have proposed different methodologies and developed frameworks for measuring the energy consumption in mobile and edge devices. The Green Miner proposed in (Wang et al., 2018) can physically measure the energy consumption of mobile devices such as Android phones and automate the testing of applications. The GfxDoctor developed in (Zhuang et al., 2018) can systematically diagnose energy inefficiencies in app graphics at the app source-code level. However, none of these works have studied fine-grained energy measurement of DNNs on modern edge devices.
**Edge AI benchmark.** A few recent studies developed mobile AI benchmarks that measure the performance of on-device training and inference. For example, AI Benchmark (Wang et al., 2018; Wang et al., 2018) is arguably the first benchmark suite for mobile devices, which primarily focuses on Android smartphones and measures only the latency. MLPerf Mobile (Wang et al., 2018; Wang et al., 2018) presents the first industry-standard open-source benchmark for performance and accuracy evaluation of mobile AI devices. Additionally, AIoTBench (Wang et al., 2018) comprises a wider range of model architectures and AI frameworks, with a focus on assessing the inference capabilities of mobile and embedded devices. However,
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Model & \multicolumn{4}{c}{Mobile CPU} & \multicolumn{4}{c}{Mobile GPU} \\ \cline{2-9} variants & RMSE (m) & RMSPE (\%) & \(\pm\)10\% (Acc.) & \(\pm\)15\% (Acc.) & RMSE (m) & RMSPE (\%) & \(\pm\)10\% (Acc.) & \(\pm\)15\% (Acc.) \\ \hline AlexNets & 32.2 & 12.9 & 60.0\% & 65.0\% & 9.4 & 15.5 & 40.0\% & 70.0\% \\ DenseNets & 30.8 & 7.1 & 70.0\% & 100\% & 16.5 & 19.6 & 10.0\% & 35.0\% \\ GoogleNets & 25.1 & 11.9 & 20.0\% & 90.0\% & 3.8 & 5.5 & 95.0\% & 100\% \\ MobileNetv1s & 7.8 & 8.7 & 80.9\% & 95.2\% & 1.7 & 6.7 & 80.9\% & 100\% \\ MobileNetv2s & 7.8 & 8.3 & 76.2\% & 90.5\% & 3.1 & 11.5 & 47.6\% & 66.7\% \\ ProxylessNASs & 13.3 & 11.7 & 47.6\% & 71.4\% & 2.5 & 8.2 & 76.2\% & 95.2\% \\ ResNet18s & 44.6 & 6.1 & 95.2\% & 100\% & 30.5 & 13.1 & 38.1\% & 71.0\% \\ ShuffleNetv2s & 3.2 & 5.8 & 100\% & 100\% & - & - & - & - \\ SqueezeNets & 19.6 & 10.4 & 57.1\% & 90.5\% & 7.9 & 10.0 & 61.9\% & 85.7\% \\ \hline \hline \end{tabular}
\end{table}
Table 10. Energy prediction results on mobile CPU and GPU.
Figure 11. Comparison of the proposed PCS and AI inference score (Wang et al., 2018). It presents a tradeoff between AI performance, power consumption, and the selling price. The larger the ball, the higher the selling price of the device.
none of these edge AI benchmarks focused on energy efficiency of on-device learning and energy dataset creation for edge devices.
## 8. Conclusion
We conduct energy consumption measurement studies for on-device deep learning. We have created extensive energy datasets at the kernel-level, model-level, and application-level to facilitate research aimed at improving the energy efficiency of deep learning on diverse edge devices. Building upon our energy datasets, we have developed kernel-level predictors that can accurately estimate the energy consumption of DNN executions on edge devices. Furthermore, we have implemented two scoring metrics to enhance the understandability of our energy measurement results. These contributions provide valuable resources and tools for advancing energy-efficient deep learning on edge devices.
###### Acknowledgements.
This work was supported by funds from Toyota Motor North America and by the US National Science Foundation (NSF) under Grant No. 1910667, 1910891, and 2025284.
\begin{table}
\begin{tabular}{l l l l l l l l l l l} \hline \hline \multirow{2}{*}{Device Model} & \multicolumn{2}{c}{OnePlus} & \multicolumn{2}{c}{Xiaomi} & \multicolumn{2}{c}{Huawei} & \multicolumn{2}{c}{Huawei} & \multicolumn{2}{c}{Huawei} & \multicolumn{2}{c}{Huawei} & \multicolumn{2}{c}{Xiaomi} & \multicolumn{1}{c}{Motorola} \\ & & 8 Pro & Redmi Note8 & Mate40 Pro & P40 Pro & P40 Lite & P40 Lite & P40 Lite & Redmi K30 Ultra & One Macro \\ \hline SoC & & \multicolumn{2}{c}{Snapdragon 865} & \multicolumn{2}{c}{Snapdragon 665} & \multicolumn{2}{c}{Kirin 9000} & \multicolumn{2}{c}{Kirin 990 5G} & \multicolumn{2}{c}{Kirin 810} & \multicolumn{2}{c}{Kirin 710F} & \multicolumn{2}{c}{Dimensiony 1000+} & \multicolumn{2}{c}{Helio P70} \\ \hline \multirow{3}{*}{DNN1} & CPU1 & ECI & 0.3209 & 0.4421 & 0.2421 & 0.1715 & 0.2649 & 0.5132 & 0.4640 & 0.4481 \\ & CPU4 & ECI & 0.2858 & 0.4656 & 0.2439 & 0.1854 & 0.2790 & 0.5418 & 0.4735 & 0.4441 \\ & NNAPI & ECI & 0.3130 & 0.4384 & 0.2221 & 0.2030 & 0.2419 & 0.4145 & 0.4421 & 0.4237 \\ \hline \multirow{3}{*}{DNN2} & CPU1 & ECI & 0.2228 & 0.2592 & 0.1228 & 0.1069 & 0.1469 & 0.2452 & 0.2158 & 0.2270 \\ & CPU4 & ECI & 0.2034 & 0.70186 & 0.1021 & 0.1497 & 0.2375 & 0.1800 & 0.2334 \\ & NNAPI & ECI & 0.2149 & 0.0265 & 0.1128 & 0.0933 & 0.1361 & 0.1623 & 0.3032 & 0.2443 \\ \hline \multirow{3}{*}{DNN3} & CPU1 & ECI & 1.3727 & 2.3948 & 1.2222 & 1.2684 & 1.3527 & 0.4054 & 1.6245 & 2.4968 \\ & CPU4 & ECI & 1.3868 & 2.4183 & 1.1794 & 1.3390 & 1.3869 & 3.1238 & 1.6299 & 2.4786 \\ & NNAPI & ECI & 1.3905 & 2.4250 & 1.1156 & 1.2899 & 1.3407 & 2.7439 & 1.7859 & 2.4988 \\ \hline \multirow{3}{*}{DNN4} & CPU1 & ECI & 0.6314 & 1.2265 & 0.5194 & 0.5117 & 0.6088 & 1.4706 & 0.6435 & 1.2757 \\ & CPU4 & ECI & 0.6150 & 1.2402 & 0.5437 & 0.5424 & 0.5692 & 1.5131 & 0.7760 & 1.2519 \\ & NNAPI & ECI & 0.6120 & 1.2311 & 0.5082 & 0.5174 & 0.5942 & 1.3755 & 0.7759 & 1.3334 \\ \hline \multirow{3}{*}{DNN5} & CPU1 & ECI & 0.1792 & 0.2009 & 0.1055 & 0.1103 & 0.1301 & 0.2058 & 0.2261 & 0.2125 \\ & CPU4 & ECI & 0.1778 & 0.2041 & 0.1249 & 0.1154 & 0.1419 & 0.1875 & 0.2252 & 0.2144 \\ & GPU & ECI & 0.0979 & 0.1432 & 0.1118 & 0.0964 & 0.0954 & 0.1214 & 0.1615 & 0.1504 \\ & NNAPI & ECI & 0.1238 & 0.2295 & 0.2350 & 0.3672 & 0.3287 & 0.1916 & 0.4128 & 0.2351 \\ \hline \multirow{3}{*}{DNN6} & CPU1 & ECI & 0.1923 & 0.1678 & 0.1015 & 0.0797 & 0.1079 & 0.1675 & 0.1928 & 0.1657 \\ & CPU4 & ECI & 0.1870 & 0.1619 & 0.1191 & 0.0806 & 0.1061 & 0.1403 & 0.1962 & 0.1457 \\ & NNAPI & ECI & 0.9329 & 0.2756 & 1.3280 & 1.6405 & 1.3641 & 0.1267 & 0.0819 & 2.9408 \\ \hline \multirow{3}{*}{DNN7} & CPU1 & ECI & 0.3309 & 0.2139 & 0.1475 & 0.1261 & 0.1449 & 0.2247 & 0.2803 & 0.2302 \\ & CPU4 & ECI & 0.1854 & 0.2348 & 0.1448 & 0.1228 & 0.1414 & 0.2429 & 0.2572 & 0.2348 \\ & GPU & ECI & 0.0803 & 0.1326 & 0.1461 & 0.0816 & 0.0943 & 0.1536 & 0.1586 & 0.1650 \\ & NNAPI & ECI & 0.1277 & 0.2623 & 0.2247 & 0.1652 & 0.1758 & 0.2246 & 0.1631 & 0.2644 \\ \hline \multirow{3}{*}{DNN8} & CPU1 & ECI & 0.1633 & 0.1635 & 0.0956 & 0.0814 & 0.0887 & 0.1248 & 0.1766 & 0.1943 \\ & CPU4 & ECI & 0.1555 & 0.18181 & 0.0884 & 0.0764 & 0.0887 & 0.1250 & 0.1743 & 0.1614 \\ & NNAPI & ECI & 0.0711 & 0.0678 & 0.0457 & 0.0466 & 0.0772 & 0.0998 & 0.0758 & 0.0789 \\ \hline \multirow{3}{*}{DNN9} & CPU1 & ECI & 1.6846 & 3.0001 & 1.4124 & 1.5211 & 1.5064 & 8.9297 & 2.0102 & 3.1099 \\ & GPU & ECI & 0.1369 & 0.5716 & 0.9453 & 0.1502 & 0.3426 & 0.4584 & 0.2993 & 0.4053 \\ \hline \multirow{3}{*}{DNN10} & CPU4 & ECI & 0.3411 & 0.3481 & 0.3943 & 0.3819 & 0.3451 & 0.7609 & 
0.6042 & 0.7436 \\ & CPU1 & ECI & 1.5320 & 3.1282 & 1.9101 & 1.8226 & 1.8624 & 3.7850 & 2.2376 & 3.2543 \\ \cline{1-1} & CPU4 & ECI & 2.1198 & 2.8539 & 1.817 & 1.8889 & 0.3534 & 3.9754 & 2.0826 & 2.9359 \\ \cline{1-1} & NNAPI & ECI & 4.5117 & / & 2.0306 & 0.4334 & 0.4605 & 2.0196 & 2.4897 & 7.0580 \\ \hline \multirow{3}{*}{DNN12} & CPU1 & ECI & 0.3367 & 0.3038 & 0.1094 & 0.1681 & 0.1209 & 0.3491 & 0.6355 & 0.4 |
2306.12684 | Dynamic Versus Static Oxidation of Nb/Al-AlOx/Nb Trilayer | High quality Nb-based superconductor-insulator-superconductor (SIS) junctions
with Al oxide (AlO$_x$) tunnel barriers grown from Al overlayers are widely
reported in the literature. However, the thin barriers required for high
critical current density (J$_c$) junctions exhibit defects that result in
significant subgap leakage current that is detrimental for many applications.
High quality, high-J$_c$ junctions can be realized with AlN$_x$ barriers, but
control of J$_c$ is more difficult than with AlO$_x$. It is therefore of
interest to study the growth of thin AlO$_x$ barriers with the ultimate goal of
achieving high quality, high-J$_c$ AlO$_x$ junctions. In this work, 100\%\
O$_2$ and 2\%\ O$_2$ in Ar gas mixtures are used both statically and
dynamically to grow AlO$_x$ tunnel barriers over a large range of oxygen
exposures. In situ ellipsometry is used for the first time to extensively
measure AlO$_x$ tunnel barrier growth in real time, revealing a number of
unexpected patterns. Finally, a set of test junction wafers was fabricated that
exhibited the well-known dependence of J$_c$ on oxygen exposure (E) in order to
further validate the experimental setup. | Tannaz Farrahi, Alan W. Kleinsasser, Michael Cyberey, Jie Wang, Micahel B. Eller, Jian Z. Zhang, Anthony R. Kerr, Joseph G. Lambert, Robert M. Weikle, Arthur W. Lichtenberger | 2023-06-22T06:12:21Z | http://arxiv.org/abs/2306.12684v2 | # Dynamic Versus Static Oxidation of Nb/Al-AlO\({}_{x}\)/Nb Trilayer
###### Abstract
High quality Nb-based superconductor-insulator-superconductor (SIS) junctions with Al oxide (AlO\({}_{x}\)) tunnel barriers grown from Al overlayers are widely reported in the literature. However, the thin barriers required for high critical current density (J\({}_{c}\)) junctions exhibit defects that result in significant subgap leakage current that is detrimental for many applications. High quality, high-J\({}_{c}\) junctions can be realized with AlN\({}_{x}\) barriers, but control of J\({}_{c}\) is more difficult than with AlO\({}_{x}\). It is therefore of interest to study the growth of thin AlO\({}_{x}\) barriers with the ultimate goal of achieving high quality, high-J\({}_{c}\)AlO\({}_{x}\) junctions. In this work, 100% O\({}_{2}\) and 2% O\({}_{2}\) in Ar gas mixtures are used both statically and dynamically to grow AlO\({}_{x}\) tunnel barriers over a large range of oxygen exposures. In situ ellipsometry is used for the first time to extensively measure AlO\({}_{x}\) tunnel barrier growth in real time, revealing a number of unexpected patterns. Finally, a set of test junction wafers was fabricated that exhibited the well-known dependence of J\({}_{c}\) on oxygen exposure (E) in order to further validate the experimental setup.
superconductor-insulator-superconductor (SIS), aluminum oxide, dynamic oxidation, static oxidation, ellipsometry, superconducting qubits
## I Introduction
Nb/Al-AlO\({}_{x}\)/Nb trilayer superconductor-insulator-superconductor (SIS) tunnel junctions are widely used in applications such as single flux quantum (SFQ) circuits, quantum bits (qubits), and millimeter and sub-millimeter wave heterodyne mixers [1][2][3][4][5]. Device performance is highly dependent on the critical current density (J\({}_{c}\)) and junction quality (e.g., the degree of subgap leakage), which in turn depends on the detailed nature of the \(\sim\)1 nm-thick insulating AlO\({}_{x}\) barrier material. The barrier thickness is controlled by the oxygen exposure E = P\({}_{ox}\)t\({}_{ox}\), where P\({}_{ox}\) is the oxygen partial pressure and t\({}_{ox}\) is the oxidation time. Reducing the AlO\({}_{x}\) thickness to realize junctions with higher J\({}_{c}\) results in more tunnel barrier defects (e.g., pinholes or quantum point contacts), which increasingly dominate current transport, resulting in Multiple Andreev Reflections (MAR) in the superconducting state, indicating high quantum-mechanical transparency [6][7][8][9]. High quality, high-J\({}_{c}\) junctions can be realized with AlN barriers grown by plasma nitridation of Al overlayers [10][11][12][13]; however, AlO\({}_{x}\) barriers exhibit better control of J\({}_{c}\). Therefore, it is desirable to better understand how to realize thinner oxide barriers with fewer defects. Historically, the formation of Gurvitch-style trilayer junctions [14] relies on static (no pumping of the oxidation gas) thermal oxidation of thin Al overlayers in pure oxygen for tunnel barrier formation. The fabrication processes have been extensively reported [15][16][17][18], including distinct low and high J\({}_{c}\) growth regimes [19]. There are fewer reports on dynamic oxidation (active flow of the oxidation gas due to pumping) and/or the use of diluted O\({}_{2}\). However, recent work by Tolpygo _et al._[20][21] indicates that high quality Nb-based trilayers with higher current densities can be realized using dynamic oxidation with diluted O\({}_{2}\) gas mixtures, and hence longer growth times, under proper chamber and Nb deposition conditions. Our work investigated four barrier growth modes for the first time: both static (unpumped) and dynamic (pumped) oxidation using both 100% O\({}_{2}\) and diluted oxygen gas (2% O\({}_{2}\) in Ar, the largest dilution the gas manufacturer could provide) over a range of pressures, all in a single oxidation chamber. In situ ellipsometry was used for the first time to measure and compare AlO\({}_{x}\) tunnel barrier growth in real time for these four oxidation modes. As will be described in Section III-A, all of the ellipsometer data shown in this paper were obtained on bilayers with thick Al (100 nm) on Si/SiO\({}_{2}\) wafers. In Section III-D, we discuss the connection between the tunnel barrier thickness obtained using our ellipsometric technique with thick Al layers and the J\({}_{c}\) regimes characteristic of trilayer junctions based on thin Al layers.
## II Experimental Setup
The University of Virginia (UVA) trilayer deposition system (Tri-3), modified for these experiments, is composed of three chambers separated by two gate valves as shown in Figure 1. Three individual turbo-molecular pumps are used to evacuate the load lock, main deposition chamber, and oxidation chamber (which is also used, in other experiments, for the nitridation of Al barriers). All three chambers are equipped with VAT valves. The load lock and oxidation chamber have MKS Baratron gauges with full scale pressure ranges of 1 Torr and 0.1 Torr, respectively. The system base pressure is monitored using a residual gas analyzer (RGA). It has been
determined for the UVA trilayer system that the tunnel barrier growth process is not influenced (as measurable by the ellipsometer) by background oxidation of the Al overlayer once the base pressure of the system and the partial pressure of H\({}_{2}\)O reach below 10E-8 and 3E-9 Torr, respectively [22]. The oxidation chamber is outfitted with a J. A. Woollam Co. M-2000U® ellipsometer in order to record spectroscopic data, from 235-1000 nm at a fixed angle of 70\({}^{\circ}\), during tunnel barrier growth. The ellipsometer uses 470 individual charge-coupled device (CCD) detectors to capture all wavelengths simultaneously for real time spectroscopic ellipsometry (SE) analysis across the entire spectrum. The ellipsometry software Complete EASE (J. A. Woollam Co., Inc.) was used to fit the experimental curves; since no theoretical spectroscopic ellipsometry was studied, a comparison of experimental and theoretical data is not applicable. With this software it is possible to select a physical model and fit it to data acquired by means of spectroscopic ellipsometry. For the data presented in this work, an ellipsometric growth model, similar to that reported in [23] for Al-AlN tunnel barrier growth, was used to track the real-time oxide growth in situ. The optically thin Al overlayer was modeled using a sum of Lorentz oscillators and a single Drude oscillator, the Al-oxide layer was modeled using a Cauchy-Urbach dispersion, and all optically thick Nb and Al films were modeled using pseudo optical constants obtained by directly inverting the ellipsometry equations from a single measurement. Small angular offsets from the substrate mounting can introduce appreciable measurement errors run-to-run. To correct for any angular offset in the system, the oxidized Si wafer was first measured before deposition and used as a calibration standard. More details of the UVA Tri-3 system can be found elsewhere [24].
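For reference, the dispersion models named above have the following standard textbook forms (generic expressions only; the fitted parameter values for our films are not reproduced here):

\[ \varepsilon(E)\;=\;\varepsilon_{\infty}\;-\;\frac{E_{p}^{2}}{E^{2}+i\Gamma_{D}E}\;+\;\sum_{j}\frac{A_{j}}{E_{j}^{2}-E^{2}-i\gamma_{j}E} \]

for the Drude plus Lorentz description of the metallic Al overlayer, and

\[ n(\lambda)\;=\;A+\frac{B}{\lambda^{2}}+\frac{C}{\lambda^{4}},\qquad k(\lambda)\;=\;\alpha\,e^{\beta\,[E(\lambda)-E_{b}]} \]

for the Cauchy index with an Urbach absorption tail used for the growing AlO\({}_{x}\) layer, where the plasma energy \(E_{p}\), broadenings \(\Gamma_{D}\) and \(\gamma_{j}\), oscillator amplitudes and energies \(A_{j}\) and \(E_{j}\), the Cauchy coefficients \(A\), \(B\), \(C\), and the Urbach parameters \(\alpha\), \(\beta\), \(E_{b}\) are the fit parameters.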
For this work, in order to investigate the growth of AlO\({}_{x}\) tunnel barriers, all of the oxidation experiments were performed in the separate oxidation chamber. The system base pressure and H2O partial pressure for all the reported experiments were \(\sim\)7.7E-9 and 1.4E-9 (or lower), respectively. All experiments used 50.8 mm diameter, 450 \(\mu\)m thick, double side polished Si substrates with 300 nm thermally grown SiO\({}_{2}\). During oxidation, the wafer, which is heatsunk with Apiezon@ L grease to a metal wafer holder block, is wedged into a tapered opening of the 17 \({}^{\circ}\)C water-cooled growth table where the growth can be monitored in situ by the ellipsometer. The ellipsometer's light source module was aligned to just off the center of the wafer block during the initial system setup, and did not need to be further adjusted during this study. After positioning of the wafer-block on the oxidation growth table, and prior to introducing gas into the load lock and start of oxidation, the ellipsometer is triggered to start acquiring data where each data point was acquired at \(\sim\) 4-second intervals and integrating measured results over the acquisition window. In the original setup, static oxidation of the AlO\({}_{x}\) tunnel barrier layer took place in the load lock. Therefore, the static oxidation line was connected to the load lock and consists of a 99.995% purity undiluted O\({}_{2}\) source, a micrometer and a series of on-off valves to control the pressure. In order to conduct this new study's diluted oxidation experiments, a second oxidation line (dynamic line) was added to the load lock. This line consists of a 2% O\({}_{2}\) in Ar gas source and a mass flow controller. In both static and dynamic oxidation cases, the gate valve between the load lock and oxidation chamber in Figure 1, called 'oxidation valve', is left open and the gate valve between the load lock and main deposition chamber (called 'deposition valve') is left close. In the case of static oxidation after growing Nb/Al (or just Al) layers on a Si/SiO\({}_{2}\) substrate in the main chamber, the wafer is transferred to the growth table in the oxidation chamber without breaking the vacuum. Prior to introducing oxygen gas to the system, both the load lock and oxidation chamber VAT valves are closed. For dynamic oxidation, the load lock VAT valve is closed and the oxidation gas pressure is controlled by adjusting the oxidation chamber VAT valve. For gas pressures below and above 90 mTorr, the 0.1 Torr oxidation chamber Baratron and the 1 Torr load lock pressure gauges are used, respectively. To achieve the desired working gas pressure in the dynamic oxidation setup for pressures below 90 mTorr, the oxidation chamber VAT valve position is adjusted automatically with feedback from the 0.1 Torr sensor,
Fig. 1: **Left**: Picture and **right**: schematic of the UVA trilayer system (Tri-3) modified for these experiments. Oxidation lines are connected to the load lock highlighted in green. The main deposition chamber and oxidation chamber are highlighted in red and blue, respectively.
but for pressures above 90 mTorr, the VAT valve is adjusted manually. The oxidation gas flow is set at 28 sccm in both cases. After completing the oxidation and turning off the gas flow in both growth modes, in order to quickly evacuate the chambers of oxygen, both the load lock and oxidation VAT valves are opened.
## III Results and discussion
### _Initial Study-AlO\({}_{x}\) Thickness Repeatability_
In theory, the critical current density of a tunnel junction is exponentially dependent on the tunnel barrier thickness. Therefore, our experiments require a high degree of reproducibility in the ellipsometric measurement of barrier thickness. In an initial exploratory study, the repeatability of the _ellipsometrically-measured_ AlO\({}_{x}\) growth was assessed. Similar to the approach UVA uses for realizing trilayers for SIS junctions, after a "modest" ion gun clean to remove \(\sim\)10 nm of SiO\({}_{2}\) from the wafer surface, 165 nm of Nb and a \(\sim\)5 nm Al overlayer were sputter deposited onto the wafer. Films were exposed to undiluted and diluted oxygen under both dynamic and static thermal oxidation settings for 8100 seconds, and the thickness profile of each oxidation process was recorded in situ by the ellipsometer with a 25 \(\mu\)m by 60 \(\mu\)m spot size from just off the center of the wafer. The optical constants of the Al overlayer film were modeled using a sum of four Lorentz oscillators and a single Drude oscillator [23].
A typical plot of thickness versus time shows an initial period of rapid growth followed by a slower, decreasing growth rate. Thickness plots obtained through the ellipsometric model for this Nb/Al (\(\sim\)7 nm) material stack reveal a thickness variation of 0.04 nm after the initial rapid growth regime. This variation in oxide thickness is significant when considered in terms of critical current density. The measured variation is believed to be due primarily to our ellipsometric growth model having to take both the thin Al overlayer and the underlying Nb thickness into account before and during the growth. Therefore, we changed to the use of an optically opaque Al layer (\(\sim\)100 nm thick) with the AlO\({}_{x}\) film growth modeled using a Cauchy and Urbach dispersion for the real and imaginary parts of the refractive index, respectively. The growth experiments were repeated with the same oxidation conditions on the 100 nm layer of aluminum deposited directly on Si/SiO\({}_{2}\) wafers, yielding a significantly more repeatable _measured_ growth rate with only \(\pm\) 5 pm run-to-run scatter. Henceforth, unless otherwise specified, the ellipsometric data reported in this study are based on samples with a 100 nm aluminum layer. While small local atomic variations in barrier thickness are expected across the oxidation front, it is evident that the \(\sim\)1500 \(\mu m^{2}\) sampling area effectively averages these local variations in barrier thickness. Given the very small \(\pm\) 5 pm run-to-run thickness scatter, it is found that this ellipsometer-measured average thickness is repeatable wafer to wafer for the same oxidation conditions. Since our measured ellipsometer thickness is very repeatable from run to run, if the Al-oxide layer is uniform and of high quality, it seems reasonable to hope that this thickness should also be a good predictor of J\({}_{c}\).
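To see why a 0.04 nm spread is significant, it helps to recall the standard WKB-type estimate for tunneling through a barrier of average height \(\bar{\varphi}\) and thickness \(d\) (a generic textbook relation, not a fit to these films):

\[ J_{c}\;\propto\;e^{-2\kappa d},\qquad \kappa=\frac{\sqrt{2m^{*}\bar{\varphi}}}{\hbar}, \]

so a thickness change \(\Delta d\) rescales \(J_{c}\) by \(e^{-2\kappa\Delta d}\). Taking a representative AlO\({}_{x}\) barrier height of order 2 eV with the free-electron mass gives \(\kappa\sim 7\) nm\({}^{-1}\), for which \(\Delta d=0.04\) nm already changes \(J_{c}\) by roughly a factor of \(e^{0.56}\approx 1.8\), whereas the \(\pm\) 5 pm scatter of the thick-Al measurements corresponds to a change of less than 10%.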
### _Dynamic vs Static AlO\({}_{x}\) Growth_
We first studied ellipsometric growth data at room temperature (RT), for several pressures in the 3-100 mTorr range, versus time. We compared statically grown AlO\({}_{x}\) in 100% O\({}_{2}\), the most commonly-used mode of trilayer growth, with dynamically grown AlO\({}_{x}\) in 2% O\({}_{2}\) because diluted O\({}_{2}\) is
Fig. 2: **(a)** The ellipsometrically-measured AlO\({}_{x}\) thickness as a function of time for dynamically grown oxide in 2% O\({}_{2}\) and statically grown oxide in 100% O\({}_{2}\) for different gas pressures. **(b)** The same data plotted as a function of time on log scale. Note that typically only two data points are contained in the first ten seconds of growth. The Mott-Cabrera theory (_L(t)_\(\propto\)\(t^{1/2}\)) is fitted for the first three 100 mTorr static experimental data points (first regime of AlO\({}_{x}\) growth) and is shown in the long-dashed orange curve.
commonly used for high-J\({}_{c}\) junctions. The results are shown in Figure 2(a). As expected, the oxide growth rate increases with increasing oxygen pressure in both cases. For a given pressure, the growth rate is much larger in the undiluted case. One would expect that, in order to achieve a given oxide thickness in a particular time, the O\({}_{2}\) partial pressure would have to be 50 times higher in the diluted case (to achieve the same effective oxygen dose). Indeed, the 100 mTorr 2% O\({}_{2}\) (2 mTorr O\({}_{2}\) partial pressure) data in Figure 2(a) lie close to the 3 mTorr 100% O\({}_{2}\) data (3 mTorr partial pressure).
The growth kinetics of oxides are described by Fromhold and Cook [25] based on the Mott-Cabrera [26] theory, in which two regimes are defined for the oxide thickness (L) as a function of time (t). This theory predicts an initial regime of thin oxide film growth during which oxidation is extremely rapid. This behavior results from electrons tunneling through the thin oxide (the electron current is large), leading to the formation of an electronic potential on the film surface (the Mott potential). The oxide thickness in this first stage of growth is described by L(t) \(\propto\) t\({}^{1/2}\) (the Mott-Cabrera equation). In the second regime, oxide growth is limited by the electron tunneling current through the thicker oxide without a potential on the surface of the film. Growth in this stage is slow and logarithmic, described by L(t) \(\propto\) log(t) [26]. These two regimes are observed in our data as well as in that of other groups [27][28]. A sharp jump of AlO\({}_{x}\) thickness with the initial flux of oxygen into the chamber is followed by a slower and decreasing oxide growth rate.
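A minimal sketch of how the two regimes can be separated numerically is given below; the data points are synthetic placeholders standing in for one measured trace, and the split between regimes is chosen by hand.

```python
import numpy as np
from scipy.optimize import curve_fit

def mott_cabrera(t, a):
    return a * np.sqrt(t)          # first regime: L(t) ~ t**0.5

def log_growth(t, b, c):
    return b + c * np.log(t)       # second regime: L(t) grows as log(t)

# Placeholder (time [s], thickness [angstrom]) values, not measured data.
t = np.array([4.0, 8.0, 12.0, 60.0, 300.0, 1500.0, 8100.0])
L = np.array([4.0, 5.2, 5.9, 7.0, 7.8, 8.6, 9.4])

(a,), _ = curve_fit(mott_cabrera, t[:3], L[:3])    # early, fast-growth points
(b, c), _ = curve_fit(log_growth, t[3:], L[3:])    # later, slow logarithmic points
print(f"early: L ~ {a:.2f}*sqrt(t);  late: L ~ {b:.2f} + {c:.2f}*ln(t)")
```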
To further consider these data, we expanded and re-plotted the Figure 2(a) data on a logarithmic time scale as shown in Figure 2(b). The sudden initial increase of thickness to several angstroms for all the curves presumably occurs in the first Mott regime where oxide forms on the metal surface through
Fig. 3: The measured AlO\({}_{x}\) thickness, as determined by spectroscopic ellipsometry, as a function of oxygen exposure (oxygen partial pressure times time) for oxides grown (a) dynamically in 2% O\({}_{2}\), (b) statically in 100% O\({}_{2}\), (c) statically in 2% O\({}_{2}\), and (d) dynamically in 100% O\({}_{2}\) environment for various oxygen pressures.
electron tunneling. The growth rate for times longer than \(\sim\)10 seconds is significantly less than the first stage growth and the growth curves become linear on this log(t) plot for considerably longer growth times. Note that for all growths, the first logged ellipsometer data point does not correspond exactly to the moment that oxidation gas starts to flow. As described above, the ellipsometer is manually started before the introduction of the oxidation gas, and the ellipsometer takes data every 4-5 seconds. In order to plot the data, we have taken the last ellipsometer data point with no growth as t = 0. The result is a small inaccuracy due to the time shift of the Figure 2b curves. This error is of little consequence beyond \(\sim\)10-20 seconds. A fit of the Mott theory to the first three data points of the 100 mTorr static data is also shown, further illustrating the expected form of initial growth.
It is interesting to note that all of the oxidation curves in Figure 2b exhibit what may be a third regime of growth, a modest, gradual step well above the initial 10 seconds of growth but below the eventual steady-state linear regime, which corresponds to much longer growth times. For example, the 100 mTorr dynamic data have such a step around 8-8.5 Å, while the 10 mTorr dynamic data have a step around 6-7 Å. Similar step features were also found for growth samples using Nb/Al (\(\sim\) 7 nm) layers. In fact, Lindmark _et al._[27] reported a similar step in one of their data traces, but suggested that it might have been due to a pressure fluctuation during growth. However, closer examination of their data reveals modest steps in more of their traces. It can be seen that, within each of the two sets of our curves, (i) the inflection step occurs earlier the lower the pressure (and hence, the lower the exposure), and (ii) the step is wider in time and larger in height the lower the pressure used. Yet, for example, in comparing these two sets of curves, the step for the 100 mTorr undiluted static curve occurs before that of the 100 mTorr diluted O\({}_{2}\) dynamic curve, even though this static curve has a much larger exposure for a given time. This suggests that these two growth modes may have different underlying growth mechanics.
It is well known that the J\({}_{c}\) of SIS junctions depends monotonically on oxygen exposure, E. In fact, until recently, the literature was consistent with J\({}_{c}\) depending _only_ on E (for a given temperature and no other reactive gases in the chamber). To explore the limits of this correlation, we examined oxide thickness as a function of oxygen exposure for all four growth modes. The results are shown in Figure 3. Since J\({}_{c}\) depends exponentially on AlO\({}_{x}\) thickness, we expect the E-dependence of the oxide thickness (on a linear scale) to resemble the J\({}_{c}\) dependence of Nb/Al-AlO\({}_{x}\)/Nb tunnel junctions (on a log scale). The middle step feature is exhibited in all four subfigures of Figure 3, although the steps for undiluted dynamic growth are less pronounced (they are clearly visible when the data are expanded). For low to medium exposure values in all four growth modes, distinct thickness traces are found. In addition, the growth curves for the diluted dynamic, undiluted dynamic, and diluted static growth modes, but not the undiluted static mode, collapse onto a single dependence at high exposure.
Figure 4a compares the behavior of the E-dependence of AlO\({}_{x}\) thickness for two growth modes: diluted dynamic growth, represented by solid squares, and undiluted static growth, represented by solid circles, in different colors. For low exposures, distinct thickness traces for each pressure can also be seen for both undiluted static and diluted dynamic growth. In this low exposure regime, the oxide thickness depends not only on exposure but also on pressure. This result was not expected. It could reflect the initial Mott-Cabrera mode of growth or an initial non-ideal oxide growth. In this low exposure regime, for a given growth mode and E, a lower pressure results in a thicker aluminum oxide. Additionally, in the low E regime and for the same E and pressure, dynamic oxide thickness is significantly larger than static oxide thickness. This might be an indication of more complete incorporation of oxygen in the Al at low E for the diluted growth because the oxidation is occurring more slowly. In this scenario, for higher E, and hence longer growth time, the affected Al layer has the opportunity to be oxidized more completely and all traces should tend to converge. Yet as shown in Figure 3b, this argument does not hold for undiluted static growth. In Figure 4b, an expanded view of Figure 4a, the full coalescing of the diluted dynamic traces is clearly seen, in contrast to the case of undiluted static growth. The contrast in the behavior of undiluted static growth at high E is striking: each pressure trace follows a different growth path, even for high exposure. For high exposures, the 10 mTorr undiluted static growth trace coincidentally falls on the coalesced data line for the other growth modes. However, for large E at an even lower pressure of 3 mTorr, the undiluted static growth curve yields a thicker oxide layer than the coalesced trace. Clearly, for a given pressure, undiluted growth occurs faster than diluted growth. However, it is not clear why the undiluted static growth mode gives such different results from the other growth modes.
### _Static vs Dynamic Growth in 2% O\({}_{2}\)_
To further analyze growth in diluted oxygen, Al films were oxidized statically and dynamically at working pressures ranging from 40 mTorr to 200 mTorr. Ellipsometric data of these two experiments, along with the data of dynamically grown AlO\({}_{x}\) in undiluted oxygen are shown in Figure 5 (undiluted, static is not included). For high enough E, all traces coalesce to a single trace in the high O\({}_{2}\) exposure regime. Moreover, as we found in Figure 2b for the low pressure regime, within each growth set for the same E, the lower the pressure the greater the oxide thickness.
### _J\({}_{c}\)(E) Regimes_
The second goal of our study was to examine the dependence of J\({}_{c}\) for Nb/Al-AlO\({}_{x}\)/Nb tunnel junctions on E for both diluted and undiluted, and both dynamic and static, oxidation conditions and to compare the results with our thickness studies. The UVA SIS junction fabrication process has been extensively described elsewhere [29][30]. For any particular pressure and oxidation time, the AlO\({}_{x}\) thickness is unlikely to be precisely the same for growth on thick and thin Al. However, we make this assumption in order to choose reasonable parameters for initial diluted dynamic growth conditions. We first examined the well-established and characterized trilayer
growth conditions from our previous work on 100% O\({}_{2}\) static oxidation and chose two exposure values that corresponded to 4 and 7 \(\frac{kA}{cm^{2}}\). Next, the two corresponding oxide thicknesses were identified from our new ellipsometer results for undiluted static growth on thick Al. Finally, using the diluted dynamic ellipsometer data for thick Al, growth curves were chosen for these thickness values with relatively long growth times: 8180 seconds at 400 mTorr and 9450 seconds at 40 mTorr. We also ensured that the identified growth conditions were beyond the step of the second oxidation regime. For both of these conditions, we fabricated wafers and determined J\({}_{c}\). For this work, we calculated J\({}_{c}\) by dividing I\({}_{c}\)R\({}_{n}\) = 1.8 mV (the typical 4.2 K value from our prior work) by R\({}_{n}\)A, where R\({}_{n}\) is the junction resistance (the ratio of junction V and I at 5 mV) and A is the junction area. The resulting J\({}_{c}\) values were 3 and 10.7 \(\frac{kA}{cm^{2}}\), respectively. These two experimental results are shown in Figure 6, a log-log plot of J\({}_{c}\) versus E.
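For concreteness, the J\({}_{c}\) extraction described above amounts to a one-line calculation from the measured R\({}_{n}\)A product; the resistance and junction-area values below are illustrative placeholders, not parameters of the fabricated devices.

```python
# J_c = (I_c * R_n) / (R_n * A), using the nominal 4.2 K value I_c*R_n = 1.8 mV.
IcRn_mV = 1.8      # mV, typical value from prior work
Rn_ohm = 45.0      # ohm, placeholder normal-state resistance (V/I at 5 mV)
area_um2 = 1.0     # um^2, placeholder junction area

Jc_A_cm2 = (IcRn_mV * 1e-3) / (Rn_ohm * area_um2 * 1e-8)  # A/cm^2
print(f"J_c = {Jc_A_cm2 / 1e3:.1f} kA/cm^2")
```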
The established UVA empirical AlO\({}_{x}\) formula for undiluted static growth with J\({}_{c}\) in the range of 1 to 7 \(\frac{kA}{cm^{2}}\) is shown by a black dotted line of the form J\({}_{c}\)\(\propto\) E\({}^{-\alpha}\), with \(\alpha\) = 0.4. The best fit to the initial new UVA experimental data is shown as a green dashed line with \(\alpha\) = 0.48 in this low J\({}_{c}\) regime. For reference, the dashed blue lines show Kleinsasser _et al._'s trend lines, based on averaged data from many groups, where \(\alpha\), for the low and high J\({}_{c}\) regions (predominantly undiluted, static oxidations), is 0.4 and 1.6, respectively [17][18][19]. The red solid lines display the trends from the recent work of Tolpygo _et al._, where \(\alpha\) for the high J\({}_{c}\) region with diluted dynamic growth is 1.0 and \(\alpha\) for undiluted static growth is 0.521 [21]. We note that our initial J\({}_{c}\) versus exposure data for diluted dynamic growth, as well as our previous undiluted static results, appear reasonable for the low J\({}_{c}\) regime. More fabrication runs will be needed in order to fully examine the behavior of the J\({}_{c}\) dependencies for the UVA system at a variety of pressure and growth conditions.
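The oxidation exponent quoted above can be extracted from a straight-line fit in log-log space, J\({}_{c}\)\(\propto\) E\({}^{-\alpha}\). The exposure and J\({}_{c}\) pairs in the sketch below are placeholders; measured wafer-run values would be substituted once more fabrication data are available.

```python
import numpy as np

# Placeholder (exposure [mTorr*s], J_c [kA/cm^2]) points; replace with measured values.
E  = np.array([4e5, 1e6, 3e6, 8e6])
Jc = np.array([10.0, 6.5, 3.8, 2.4])

# J_c ~ E^(-alpha)  =>  log10(J_c) = const - alpha * log10(E)
slope, intercept = np.polyfit(np.log10(E), np.log10(Jc), 1)
print(f"oxidation exponent alpha = {-slope:.2f}")
```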
## IV Conclusion
In this work, we have studied AlO\({}_{x}\) growth with diluted (2% O\({}_{2}\) in Ar) and undiluted (100% O\({}_{2}\)) oxygen under both dynamic and static oxidation processes using _in situ_ spectroscopic ellipsometry. Several interesting features were noted:
Fig. 4: **Left: (a)** The measured AlO\({}_{x}\) thickness, and **right: (b)** the expanded view of the measured AlO\({}_{x}\) thickness as determined through spectroscopic ellipsometry, as a function of oxygen exposure (oxygen partial pressure times time) is plotted for statically grown oxides in 100% O\({}_{2}\) and dynamically grown oxides in 2% O\({}_{2}\) for various oxygen pressures.
Fig. 5: The measured AlO\({}_{x}\) thickness, as determined through spectroscopic ellipsometry, as a function of oxygen exposure is plotted for statically grown oxide in 2% O\({}_{2}\), dynamically grown oxide in 2% O\({}_{2}\), and dynamically grown oxide in 100% O\({}_{2}\).
(i) For low to mid exposure values in all four growth modes, distinct thickness traces are found for each pressure, and therefore there is no simple correlation between oxide growth and exposure.
(ii) In addition to the expected two 'standard' regimes of growth, all traces exhibit a possible third growth regime appearing as an intermediate gradual step. For a given growth approach (e.g., diluted and dynamic), the step occurs earlier in time the lower the pressure (i.e., the lower the exposure). For a given growth approach, the step is also wider and taller the lower the pressure (i.e., the lower the exposure). However, comparing different growth approaches at a given pressure, the steps occur at different E values depending on the growth mode. For example, the undiluted static step occurs at higher E than the diluted dynamic step. This suggests that these growth modes may have different underlying growth mechanics.
(iii) The growth curves for three of the growth modes (namely diluted dynamic, undiluted dynamic, and diluted static), when plotted versus E on a log scale, coalesce to essentially one growth trace at sufficiently high exposure. Individual growth traces for a given growth mode join the coalesce-line at lower exposure the lower the growth pressure (and hence the longer the growth time for that growth mode). Choosing a \(\sim\)50 mTorr growth pressure for comparison, the undiluted dynamic trace joins the coalesce-line at \(\sim\)8000 mTorr-sec, while both the diluted static and diluted dynamic growth modes join the coalesce-line at \(\sim\)1000 mTorr-sec. Therefore, for a given pressure, the individual growth traces join the coalesce-line at lower E the slower the growth (and hence the longer the growth time), independent of which of the three growth modes is studied. One might therefore be tempted to conclude that there is fundamentally little difference between dynamic and static growth, with dilution perhaps playing the most important role through the growth time. However, the behavior of the undiluted static growth mode is quite different, with the growth curves for each pressure following a distinct trace, even at high E. For undiluted static growth, the thickness is found to depend not only on exposure but also on pressure with, for the same E, thicker oxide layer growth at lower pressures (and hence longer growth times). It is also interesting to compare the undiluted static growth rate plotted versus t on a log scale for the lowest pressure against other growth mode traces at higher pressures, as shown in Figure 7. The 3 mTorr undiluted static growth rate is comparable to the 200 mTorr diluted growth rates, yet from Figure 4b, the 3 mTorr static curve does not follow the coalesce-line, indicating that growth rate alone does not determine whether a growth trace will join the coalesced curve. It is also interesting to note that, for a given pressure, the growth rates for the diluted static and diluted dynamic modes are comparable. The data therefore suggest that (a) the undiluted static growth mode is strikingly different from the other three modes, and (b) both dilution and the static versus dynamic choice play important roles in the oxide growth, along with the oxidation pressure. Test wafers with Al-oxide tunnel barriers grown in the low-J\({}_{c}\) regime using the diluted dynamic oxidation method produced J\({}_{c}\) versus exposure values that were in close agreement with our previous and longstanding undiluted static trilayer growth method. Additional work is needed to characterize a wider range of pressure and oxygen exposure modes.
Fig. 6: The measured J\({}_{c}\) data of SIS junctions with dynamically grown tunnel barriers in 2% O\({}_{2}\) as a function of oxygen exposure. The J\({}_{c}\) experimental data are shown by solid magenta star markers. The dashed blue lines show the Kleinsasser _et al._ trends for the low and high J\({}_{c}\) regimes, while the red solid lines display the oxidation exponent trends for the Tolpygo _et al._ J\({}_{c}\) regimes [21]. The UVA empirical formula for AlO\({}_{x}\) static growth in pure O\({}_{2}\) is shown as a black dotted line for the J\({}_{c}\) range of 1 to 7 \(\frac{kA}{cm^{2}}\). The best fit to the UVA experimental data is shown as a green dashed line and has an oxidation exponent of E\({}^{-0.48}\) in the low J\({}_{c}\) regime.
Fig. 7: The measured AlO\({}_{x}\) thickness, as determined through spectroscopic ellipsometry, as a function of oxidation time is plotted for statically grown oxide in 2% O\({}_{2}\), dynamically grown oxide in 2% O\({}_{2}\), and statically grown oxide in 100% O\({}_{2}\) for selected pressures.
## V Acknowledgments
This work was supported by the National Radio Astronomy Observatory. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The research at the Jet Propulsion Laboratory, California Institute of Technology, was carried out under a contract with the National Aeronautics and Space Administration (80NM0018D0004). The authors have no conflicts to disclose.
|
2302.00777 | Coupling a Cosmic String to a TQFT | A common framework of particle physics consists of two sectors of particles,
such as the Standard Model and a dark sector, with some interaction between
them. In this work, we initiate the study of a qualitatively different setup in
which one of the sectors is a topological quantum field theory (TQFT). Instead
of particles, the physics of a TQFT only manifests itself in non-trivial
spacetime topologies. Topological defects provide a natural place to
investigate such effects. In particular, we consider two possible ways in which
axionic cosmic strings can interact with a Zn TQFT. One of them, by extending
the structure of the axion coupling, leads to specific predictions for the
localized degrees of freedom on the cosmic string, which can in turn effect
their evolution and leave observable signals. The second approach, by gauging a
discrete subgroup of the axionic shift symmetry, leads to dramatic changes in
the string spectrum. We stress that the scenario considered here should be
regarded as a plausible way for new physics to arise since it can be the low
energy effective field theory for quite generic scenarios at high energies. To
demonstrate this point and further illustrate the physical implications, we
constructed such UV completions for both of the cases of couplings to TQFTs.
The detailed prediction for observable signals of such scenarios needs further
investigation. At the same time, our results demonstrate that there are rich
new phenomena in this scenario. | T. Daniel Brennan, Sungwoo Hong, Lian-Tao Wang | 2023-02-01T22:21:35Z | http://arxiv.org/abs/2302.00777v3 | # Coupling a Cosmic String to a TQFT
###### Abstract
A common framework of particle physics consists of two sectors of particles, such as the Standard Model and a dark sector, with some interaction between them. In this work, we initiate the study of a qualitatively different setup in which one of the sectors is a topological quantum field theory (TQFT). Instead of particles, the physics of a TQFT only manifests itself in non-trivial spacetime topologies. Topological defects provide a natural place to investigate such effects. In particular, we consider two possible ways in which axionic cosmic strings can interact with a \(\mathbb{Z}_{n}\) TQFT. One of them, by extending the structure of the axion coupling, leads to specific predictions for the localized degrees of freedom on the cosmic string, which can in turn effect their evolution and leave observable signals. The second approach, by gauging a discrete subgroup of the axionic shift symmetry, leads to dramatic changes in the string spectrum. We stress that the scenario considered here should be regarded as a plausible way for new physics to arise since it can be the low energy effective field theory for quite generic scenarios at high energies. To demonstrate this point and further illustrate the physical implications, we constructed such UV completions for both of the cases of couplings to TQFTs. The detailed prediction for observable signals of such scenarios needs further investigation. At the same time, our results demonstrate that there are rich new phenomena in this scenario.
###### Contents

* 1 Introduction
* 2 TQFT-Coupling I: Axion-Portal to a TQFT
* 2.1 Anomaly Inflow and 2d String Worldsheet QFT
* 2.1.1 Anomaly Inflow
* 2.1.2 Anomaly Cancellation by Fermion Zero Modes
* 2.2 UV Field Theory Completion
* 2.2.1 UV theory
* 2.2.2 Fermion Zero Modes
* 2.2.3 Anomaly Cancellation
* 3 TQFT-Coupling II: Gauging a Discrete Subgroup
* 3.1 Gauging \(\mathbb{Z}_{M}^{(0)}\subset\mathbb{Z}_{K}^{(0)}\) and Axion-String Spectrum
* 3.1.1 Discrete Gauging of Free \(U(1)\) Goldstone Boson
* 3.1.2 Discrete gauging of Axion-Maxwell Theory
* 3.2 UV Field Theory Completion
* 3.3 3-Group
* 3.4 Other TQFT Couplings via Discrete Gauging
* 4 Brief Comments on Phenomenological Implications
* A Brief Introduction to Generalized Global Symmetries
* A.1 Ordinary symmetry
* A.2 Higher-form symmetry
* B \(\mathbb{Z}_{n}\) TQFT
* C Global Symmetries of TQFT-Coupling I
* C.1 Symmetries of Uncoupled Theories
* C.1.1 Anomalies
* C.2 Symmetries with TQFT Coupling
* C.2.1 Anomalies
* C.2.2 Constraints from Symmetry
* C.2.3 Other TQFT Couplings via Discrete Gauging
## 1 Introduction
There are many proposed scenarios of physics beyond the Standard Model. One universally adopted framework to incorporate new physics is to couple a known particle physics sector, such as the full Standard Model, to a "new physics" sector. The new physics sector includes _new particles_, and these extra local degrees of freedom, together with _new interactions_, may introduce novel dynamics and lead to solutions to existing problems in particle physics. In studying these new theories, _symmetry_ provides an extremely powerful tool.
Historically, new understandings of symmetry in physics have almost always led to clarifications of existing puzzles and provided new insights. In this paper, we initiate the study of a new class of couplings and analyze them by means of new symmetries. Specifically, we study the effect of coupling a particle physics theory described by a local relativistic quantum field theory (QFT) to a topological quantum field theory (TQFT). This "coupling a QFT to a TQFT" was introduced in [1] (see also [2; 3]). Yet, to the best of our knowledge, our current work is the first to consider such possibilities in the context of particle physics. Since a TQFT is not characterized by any local excitations, understanding the physics of TQFTs and TQFT-couplings requires a new set of tools, which is afforded by generalized global symmetry [3].
Our goal is to demonstrate through a couple of simple examples that such TQFT-couplings can lead to non-trivial and interesting phenomenological implications, including possible observable effects, which are more difficult to analyze using traditional methods. We also hope to emphasize that TQFT-couplings, which may appear somewhat exotic, in fact arise as _IR discrete remnants_ of familiar examples of local QFTs. For instance, as described in detail in Appendix B, a \(U(1)\) gauge theory with a charge \(n\) scalar field flows to a non-trivial \(\mathbb{Z}_{n}\) TQFT with non-trivial physical observables. This suggests that the UV completions of many theories in particle physics may also have effects from such discrete remnants. In these cases, it is crucial to be able to identify their physical implications and formulate appropriate experimental search strategies. Using generalized global symmetry seems to be the best available technique to accomplish this goal.
Ever since the notion of generalized global symmetry was introduced [3], it has been a very active field of research and has led to many insights in theoretical QFT (see [4] for a summary and references therein). Accordingly, it has become increasingly important to determine how to effectively implement generalized global symmetry in the study of particle physics (for example, see the recent works [5; 6; 7; 8; 9]).
In this paper we will apply the techniques of generalized global symmetries to study the effects of coupling a TQFT to axion-Maxwell theory1. Axion-Maxwell theory is described
by an action2
Footnote 2: In the rest of our paper, we will adopt differential form notation. The action of axion-Maxwell theory in terms of differential forms is given by eq. (1).
\[S_{\text{\tiny a-MW}}=\int\frac{1}{2}\partial_{\mu}a\partial^{\mu}a-\frac{1}{4g^ {2}}F_{A\mu\nu}F_{A}^{\mu\nu}-\frac{iK_{A}}{16\pi^{2}}\frac{a}{f_{a}}F_{A}^{ \mu\nu}\tilde{F}_{A\mu\nu} \tag{1}\]
and appears frequently in the literature. Here, \(K_{A}\in\mathbb{Z}\) is a discrete coupling constant that matches the \(U(1)_{\text{PQ}}\left[U(1)\right]^{2}\) Adler-Bell-Jackiw (ABJ) [10; 11] anomaly coefficient of any UV completion - we will have in mind a completion by a KSVZ-type theory [12; 13]. Here we will distinguish between the axion-Maxwell sector and TQFT sector by using a subscript \(A\) for axion-Maxwell sector and \(B\) for the TQFT sector.
For the TQFT sector, we consider a gauge theory associated with a \(\mathbb{Z}_{n}\) discrete gauge group whose action is given by3
Footnote 3: BF theory admits several different descriptions and details can be found in [1].
\[S_{\text{\tiny BF}}=\frac{in}{2\pi}\int B^{(2)}\wedge F_{B}^{(2)}=\frac{in}{4 \pi}\int d^{4}x\,\epsilon^{\mu\nu\rho\sigma}B_{\mu\nu}^{(2)}\partial_{\rho}B_ {\sigma}^{(1)} \tag{2}\]
where \(B^{(2)}\) is a 2-form gauge field (hence two antisymmetric indices) and \(F_{B}^{(2)}=dB^{(1)}\) is the field strength of a 1-form gauge field associated with a gauge group \(U(1)_{B}\), which is restricted to \(\mathbb{Z}_{n}\subset U(1)_{B}\) by the form of the above action.
There are many ways to couple axion-Maxwell sector to a \(\mathbb{Z}_{n}\) TQFT, each of which lead to distinct physical effects. In Section 2, we discuss the TQFT-coupling via axion-portal given by
\[S_{\text{\tiny TQFT-coupling\,I}}=-\frac{iK_{AB}}{4\pi^{2}f_{a}}\int aF_{A}^{ (2)}\wedge F_{B}^{(2)}-\frac{iK_{B}}{8\pi^{2}f_{a}}\int aF_{B}^{(2)}\wedge F_ {B}^{(2)} \tag{3}\]
For this coupling, we show that
* [Section 2.1] On an axion string, there must exist a set of chiral fermion zero modes to cancel gauge anomalies from the bulk topological interactions (via anomaly inflow). In particular, the couplings to the TQFT sector implies that those chiral modes must carry \(\mathbb{Z}_{n}\) charges as well as \(U(1)_{A}\) charges.
* [Section 2.2] This theory can be an IR effective field theory of an extended version of the KSVZ theory where the KSVZ fermions are charged under an additional \(U(1)_{B}\) that is spontaneously broken, \(U(1)_{B}\mapsto\mathbb{Z}_{n}\).
* [Appendix C] The TQFT-coupling in (3) enriches the generalized global symmetry structure and, in particular, modifies the 3-group structure. A non-trivial 3-group symmetry implies constraints on the renormalization group flow in the form of (parametric) inequalities among the energy scales at which the different higher-form symmetries emerge.4 Footnote 4: For earlier works on 3-group global symmetries of axion-Maxwell theory, see [5, 14, 15]. In addition, recently, it was pointed out in [16] that the axion-Maxwell theory also has an infinite set of non-invertible symmetries [17, 18].
* [Section 2 and 4] These features may lead to multiple interesting observable effects. Our theory predicts not only axion strings but also strings coming from the TQFT sector. It may also include "coaxial hybrid strings" whose existence and properties have not been studied. The fact that the chiral zero modes living on the string core carry \(\mathbb{Z}_{n}\) charges may additionally lead to significant changes in the evolution of the axion string. These in turn might have far reaching consequences, such as for "cosmological plasma collider" effects and vorton stability.
In Section 3, we discuss a different TQFT-coupling which can be obtained by gauging a discrete symmetry. Concretely, we describe a gauging of the \(\mathbb{Z}_{M}\) subgroup of the (0-form) axion shift symmetry, \(a\to a+\frac{2\pi f_{a}}{K_{A}}\). This leads to a TQFT-coupling of the form
\[S_{\text{TQFT-coupling\,\Pi}}=\frac{1}{2}\int(da-f_{a}C^{(1)})\wedge*(da-f_{a} C^{(1)})+\frac{iK}{f_{a}}\int(da-f_{a}C^{(1)})\wedge\omega_{3}(A^{(1)}) \tag{4}\]
where \(C^{(1)}\) is the dynamical \(\mathbb{Z}_{M}\) gauge field of a \(\mathbb{Z}_{M}\) TQFT and \(\omega_{3}(A^{(1)})\) is the 3d \(U(1)\) Chern-Simons action as defined in eq. (6).
The implications of this TQFT-coupling via discrete gauging can be summarized as follows.
* [Section 3.1.1] In order to single out and clarify the important physical features of discrete gauging, we first discuss discrete gauging of the _free_ \(U(1)\) Goldstone boson theory, i.e. the \(K=0\) case. We show two consequences of discrete gauging of the shift symmetry of the Goldstone boson, \(\phi(x)\to\phi(x)+c\): (i) it projects out some of the local operators \(I(q,x)=e^{iq\phi(x)}\) and (ii) it adds additional cosmic strings with _fractional_ winding numbers. Specifically, if we gauge a \(\mathbb{Z}_{M}\) subgroup of the axion shift symmetry, local operators with \(q\notin M\mathbb{Z}\) are removed since they are not invariant under \(\mathbb{Z}_{M}\) gauge transformations. Simultaneously, surface operators (i.e. cosmic strings) with fractional winding \[\oint\frac{d\phi}{2\pi}\in\frac{1}{M}\mathbb{Z}\] (5) are included; these objects are identified as cosmic strings of the TQFT sector.
* [Section 3.1.2] Next, we describe Axion-Maxwell theory which couples to a TQFT via discrete gauging of a \(\mathbb{Z}_{M}\) subgroup of the axion shift symmetry. In this case, we show how the spectra of local operator and cosmic string are modified in the presence of TQFT-coupling.
* [Section 3.2] We present a KSVZ-type UV field theory which flows in the IR limit to the axion-Maxwell theory coupled to a \(\mathbb{Z}_{M}\) TQFT as described above. This result further illustrates our claim that non-trivial TQFT-couplings can appear from standard UV field theories when discrete remnants are left over in the IR.
* [Section 3.3] We show how \(\mathbb{Z}_{M}\) discrete gauging changes the 3-group symmetry structure of the theory. In particular, in addition to the above described features, the 1-form electric symmetry is broken, \(\mathbb{Z}_{K}^{(1)}\to\mathbb{Z}_{K/M}^{(1)}\), thus altering the spectrum of Wilson lines.
* [Section 3.4] It is possible to systematically classify the possible ways to couple axion-Maxwell theory to TQFTs via discrete gauging. We list possible discrete gaugings of the axion-Maxwell theory and briefly discuss the resulting symmetry structure for each case.
In summary, the analysis in this paper demonstrates the importance of understanding couplings to TQFTs and tracking them along RG flows in particle physics settings. Further, the fact that the coupled models we consider in this paper have such simple UV completions illustrates that such couplings to TQFTs are not at all exotic or unrealistic. Instead, they can generically arise in the long distance behavior of standard QFTs. We hope that our work demonstrates the utility of the techniques provided by generalized global symmetries in a concrete setup and helps to initiate a broader effort in the application of generalized global symmetry techniques in particle physics.
## 2 TQFT-Coupling I: Axion-Portal to a TQFT
In this section, we discuss an axion-Maxwell theory coupled to a \(\mathbb{Z}_{n}\) topological quantum field theory. The relevant actions are presented in eqns. (1), (2), and (3). For convenience of the discussion, we reproduce them below. The axion-Maxwell sector without a TQFT-coupling is described by an action
\[S_{0}=\frac{1}{2}\int da\wedge*da+\frac{1}{2g^{2}}\int F_{A}^{(2)}\wedge*F_{A}^ {(2)}-\frac{iK_{A}}{8\pi^{2}f_{a}}\int aF_{A}^{(2)}\wedge F_{A}^{(2)}. \tag{1}\]
Here, \(a\) is a periodic scalar field (axion) with \(a\sim a+2\pi f_{a}\) (which is the remnant of a \(U(1)_{\rm PQ}\) global symmetry that is spontaneously broken at a scale set by the axion decay constant \(f_{a}\)) and \(A^{(1)}\) and \(F_{A}^{(2)}\) are \(U(1)_{A}\) one-form gauge field and its two-form field strength, respectively. The coupling constant \(K_{A}\in\mathbb{Z}\) is quantized, which is necessary in order to ensure the periodicity of \(a\), and furthermore is the coefficient of a perturbative \(\left[U(1)_{\rm PQ}U(1)_{A}^{2}\right]\) Adler-Bell-Jackiw (ABJ) anomaly [10; 11].
We would now like to study what happens when we couple axion-Maxwell theory to a TQFT. One can imagine various ways to achieve this. A simple choice of TQFT to couple to is a \(\mathbb{Z}_{n}\) gauge theory [19; 20; 1; 3; 21] which can be viewed as the low energy limit of a spontaneously broken \(U(1)_{B}\) gauge theory. We can couple such a \(\mathbb{Z}_{n}\) TQFT to the
axion-Maxwell theory via an axion portal coupling5
Footnote 5: The factor of 2 difference between \(K_{AB}\) and \(K_{B}\) (and \(K_{A}\)) terms comes from the fact that the anomaly polynomial is given by \(ch_{2}(F^{(2)})=\text{Tr}\Big{[}e^{\frac{iF^{(2)}}{2\pi}}\Big{]}\) restricted to the degree 4 differential form, where the trace is over the total bundle. The cross term comes with a factor of 2 due to the fact that a \(U(1)_{A}\times U(1)_{B}\) bundle has \(ch_{2}(F_{A}+F_{B})=\frac{1}{2!\times(2\pi)^{2}}\left(F_{A}^{(2)}+F_{B}^{(2)} \right)^{2}\).
\[S_{1} = \frac{in}{2\pi}\int B^{(2)}\wedge F_{B}^{(2)}-\frac{iK_{AB}}{4\pi^ {2}f_{a}}\int aF_{A}^{(2)}\wedge F_{B}^{(2)}-\frac{iK_{B}}{8\pi^{2}f_{a}}\int aF _{B}^{(2)}\wedge F_{B}^{(2)} \tag{2}\]
where \(F_{B}^{(2)}=dB^{(1)}\) is the field strength of a one-form \(\mathbb{Z}_{n}\) gauge field \(B^{(1)}\) and \(B^{(2)}\) is a two-form gauge field associated with one-form \(\mathbb{Z}_{n}^{(1)}\) gauge invariance.6
Footnote 6: In fact, it is possible and interesting to consider coupling to a broader class of \(\mathbb{Z}_{n}\) TQFTs such as [1]
\[S_{\text{BF}^{\prime}}=\frac{in}{2\pi}\int B^{(2)}\wedge F_{B}^{(2)}+\frac{ inp}{4\pi}B^{(2)}\wedge B^{(2)}. \tag{3}\]
The first term in eq. (2) is the action for the \(\mathbb{Z}_{n}\) TQFT, often also called a BF theory. It describes a \(\mathbb{Z}_{n}\) gauge theory and a brief review is presented in Appendix B (see [1, 3, 19, 20, 21] for more discussion). The BF theory sector admits two gauge invariant "electric" operators, a Wilson line and a Wilson surface operator:
\[W_{1}(\Sigma_{1},\ell)=e^{i\ell\oint_{\Sigma_{1}}B^{(1)}},\ \ \ W_{2}(\Sigma_{2},m)=e^{im\oint_{\Sigma_{2}}B^{(2)}} \tag{4}\]
which act as sources for \(B^{(2)}\) and \(B^{(1)}\), respectively, due to the form of the equations of motion.
The second and third terms in eq. (2) describe (local) interactions between the axion-Maxwell sector and the TQFT sector. The goal of the rest of this section is to investigate the implications of this TQFT-coupling. In particular, we are interested in properties of the IR effective theory described by \(S_{0}+S_{1}\) which are _universal_, i.e. independent of specific UV completions.
In the rest of this section we study the implications of the TQFT for the 2d QFT living on the axion string and how this physics can be realized in a simple UV model. The coupling discussed in this section has the possibility to produce interesting observational signals. We relegate these details to Section 4 where we more broadly discuss the phenomenological implications of coupling axion-Maxwell theory to TQFTs.
### Anomaly Inflow and 2d String Worldsheet QFT
In this section, we discuss the axion strings in the theory described by \(S_{0}+S_{1}\) shown in eq. (1) and (2), with a special emphasis on the universal IR features.
Imagine a cosmic string placed in the spacetime \(M_{4}\). We would like to study the effects on the world volume induced by the axionic couplings
\[S_{\text{axion}}=-\frac{iK_{A}}{8\pi^{2}f_{a}}\int aF_{A}^{(2)}\wedge F_{A}^{ (2)}-\frac{iK_{AB}}{4\pi^{2}f_{a}}\int aF_{A}^{(2)}\wedge F_{B}^{(2)}-\frac{iK _{B}}{8\pi^{2}f_{a}}\int aF_{B}^{(2)}\wedge F_{B}^{(2)}. \tag{5}\]
To simplify expressions, we often use the language of the descent equations of the chiral anomaly (see e.g. [22; 23; 24])
\[\omega_{4}=\frac{1}{8\pi^{2}}F^{(2)}\wedge F^{(2)}=d\omega_{3},\ \ \omega_{3}=\frac{1}{8\pi^{2}}A^{(1)}\wedge F^{(2)}, \tag{6}\] \[\delta_{\alpha}\omega_{3}=d\omega_{2},\ \ \omega_{2}=\frac{\alpha}{8\pi^{2}}F^{(2)}. \tag{7}\]
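For orientation, these descent relations can be checked directly: using \(dF^{(2)}=0\) and writing a gauge transformation as \(A^{(1)}\to A^{(1)}+d\alpha\),
\[d\omega_{3}=\frac{1}{8\pi^{2}}\,dA^{(1)}\wedge F^{(2)}=\frac{1}{8\pi^{2}}F^{(2)}\wedge F^{(2)}=\omega_{4},\qquad\delta_{\alpha}\omega_{3}=\frac{1}{8\pi^{2}}\,d\alpha\wedge F^{(2)}=d\left(\frac{\alpha}{8\pi^{2}}F^{(2)}\right)=d\omega_{2},\]
where the last equality again uses \(dF^{(2)}=0\).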
We comment that both the 3d Chern-Simons (CS) action \(\omega_{3}\) and the 2d chiral anomaly \(\omega_{2}\) are defined as cohomology classes (shifting by an exact term corresponds to shifting by local counter terms), and the expressions shown are canonical choices (see the appendices of [24] for a comprehensive review). This allows us to rewrite \(S_{\rm axion}\) as
\[S_{\rm axion} = -\frac{i}{f_{a}}\int a\left(K_{A}\,\omega_{4}(A^{(1)})+2K_{AB}\, \omega_{4}(A^{(1)},B^{(1)})+K_{B}\,\omega_{4}(B^{(1)})\right) \tag{8}\] \[= \frac{i}{f_{a}}\int da\wedge\left(K_{A}\,\omega_{3}(A^{(1)})+2K_{ AB}\,\omega_{3}(A^{(1)},B^{(1)})+K_{B}\,\omega_{3}(B^{(1)})\right).\]
In the presence of the axion vortex, neither \(a\) nor \(da\) is well-defined at the vortex worldsheet \(r=0\).7 However, as demonstrated in [25] we can obtain an action well-defined everywhere in \(M_{4}\) even in the presence of an axion string. This can be achieved by "smoothing out" the axion string singularity by inserting a bump function \(\rho(r)\) into \(S_{\rm axion}\):
Footnote 7: By this, we really mean that at distance scales smaller than \(f_{a}^{-1}\), the axion winding is no longer protected by topology, i.e. there are large enough energy fluctuations to unwind the axion. In the deep IR, the region \(r<f_{a}^{-1}\) is represented as a singularity around which the axion winds, but in the UV it is a smooth field configuration.
\[S_{\rm axion}=\frac{i}{f_{a}}\int(1+\rho)\,da\wedge\left(K_{A}\,\omega_{3}(A^{ (1)})+2K_{AB}\,\omega_{3}(A^{(1)},B^{(1)})+K_{B}\,\omega_{3}(B^{(1)})\right). \tag{9}\]
so that \(\rho(r)=0\) for \(r>1/f_{a}\) and \((1+\rho(r))\to 0\) smoothly as \(r\to 0\). This insertion of \(1+\rho(r)\) regularizes the action in the string core \(r=0\), extending the validity of our description there. In addition, we recover the original action well outside the vortex, where \(\rho\) vanishes. As we discuss below, another advantage of this formulation is that implementing a bump function regulator allows us to switch from covariant to consistent anomalies on the axion string world sheet, so that the anomaly cancellation can be understood straightforwardly [25; 26]. In other words, \(\rho(r)\) can be thought of as a smooth generalization of a \(\delta\)-function that describes the embedding of the cosmic string world-sheet \(M_{2}^{\rm st}\) into \(M_{4}\).
These conditions on \(\rho(r)\) imply that for a cosmic string with winding number \(m\)
\[m=\oint_{S^{1}}(1+\rho)\frac{da}{2\pi f_{a}}=\int_{D_{2}}d\left[(1+\rho)\frac{ da}{2\pi f_{a}}\right]\ \ \Longrightarrow\ \ d\rho\wedge da=2\pi mf_{a}\delta^{(2)}(M_{2}^{\rm st}). \tag{10}\]
Here, \(\delta^{(2)}(M_{2}^{\rm st})\) is the two-form \(\delta\)-function that is non-vanishing only on the \((1+1)\)d cosmic string world-sheet \(M_{2}^{\rm st}\).
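To spell out the second equality in eq. (10): away from the core \(d^{2}a=0\), while at the core the factor \((1+\rho)\to 0\) removes the singular contribution, so
\[\oint_{S^{1}}(1+\rho)\frac{da}{2\pi f_{a}}=\int_{D_{2}}\frac{d\rho\wedge da}{2\pi f_{a}}=m\]
for every disk \(D_{2}\) pierced once by the string, which forces \(d\rho\wedge da\) to localize as \(2\pi mf_{a}\,\delta^{(2)}(M_{2}^{\rm st})\).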
#### 2.1.1 Anomaly Inflow
Now let us consider the axion interaction term \(S_{\rm axion}\). Under a \(U(1)_{A}\) gauge transformation, \(A^{(1)}\to A^{(1)}+d\lambda_{A}\), the action varies as
\[\delta_{A}S_{\rm axion} = \frac{i}{8\pi^{2}f_{a}}\int(1+\rho)da\wedge\left(K_{A}d\lambda_{A} \wedge F_{A}^{(2)}+2K_{AB}d\lambda_{A}\wedge F_{B}^{(2)}\right) \tag{11}\] \[= \frac{im}{4\pi}\int_{M_{4}}\lambda_{A}\delta^{(2)}(M_{2}^{\rm st} )\wedge\left(K_{A}F_{A}^{(2)}+2K_{AB}F_{B}^{(2)}\right)\]
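The second line of eq. (11) follows by integrating by parts (dropping a total derivative), using \(dF_{A,B}^{(2)}=0\) together with eq. (10): for either field strength,
\[\int(1+\rho)\,da\wedge d\lambda_{A}\wedge F^{(2)}=\int\lambda_{A}\,d\rho\wedge da\wedge F^{(2)}=2\pi mf_{a}\int_{M_{4}}\lambda_{A}\,\delta^{(2)}(M_{2}^{\rm st})\wedge F^{(2)}.\]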
Here we see that the variation leads to an anomalous term that is localized on the cosmic string worldsheet. Similarly, the variation of \(S_{\rm axion}\) under \(\mathbb{Z}_{n}\) gauge transformations: \(B^{(1)}\to B^{(1)}+d\lambda_{B}\), is given by
\[\delta_{B}S_{\rm axion}=\frac{im}{4\pi}\int_{M_{4}}\lambda_{B}\delta^{(2)}(M_{ 2}^{\rm st})\wedge\left(2K_{AB}F_{A}^{(2)}+K_{B}F_{B}^{(2)}\right) \tag{12}\]
Since our theory is well defined everywhere, the anomalies localized on the cosmic string world-sheet in eqns. (11) and (12) must be canceled, which implies that there must be degrees of freedom living on the cosmic string worldsheet that cancel these anomalies. In terms of 2d dynamics, these anomalies must be reproduced by _consistent_ gauge anomalies [25; 26]: a \(U(1)_{A}^{2}\) gauge anomaly with coefficient \(-K_{A}\), a \(U(1)_{A}\times\mathbb{Z}_{n}\) mixed anomaly with coefficient \(-K_{AB}\), and a \(\mathbb{Z}_{n}^{2}\) gauge anomaly with coefficient \(-K_{B}\).
It is also illuminating to reproduce our discussion on the existence of charged matter on the cosmic string world volume in terms of currents. The current is defined by taking a functional derivative with respect to \(A^{(1)}\),
\[*J_{1}(a,A^{(1)})=K_{A}\left(\frac{1}{4\pi^{2}f_{a}}(1+\rho)da\wedge F_{A}^{( 2)}-\frac{1}{8\pi^{2}f_{a}}d\rho\wedge da\wedge A^{(1)}\right)+\frac{K_{AB}} {4\pi^{2}f_{a}}(1+\rho)da\wedge F_{B}^{(2)}. \tag{13}\]
which satisfies the conservation equation
\[d*J_{1}(a,A^{(1)})=\frac{K_{A}}{4\pi}\delta^{(2)}(M_{2}^{\rm st})\wedge F_{A} ^{(2)}+\frac{K_{AB}}{2\pi}\delta^{(2)}(M_{2}^{\rm st})\wedge F_{B}^{(2)} \tag{14}\]
in the presence of an axion string with unit winding. This shows that there are indeed charged matter fields that are localized on the world sheet of the cosmic string.
Here there is a subtlety to the matching of the gauge anomalies involving the \(\mathbb{Z}_{n}\) gauge symmetry. The point is that at short distances, the \(\mathbb{Z}_{n}\) gauge symmetry can be enhanced to a \(U(1)_{B}\) gauge symmetry. In these cases, the \(\mathbb{Z}_{n}\) anomaly cancellation is only required \(\text{mod}\;n\). We can see this reduction of the anomaly coefficients mod \(n\) as follows. First note that any terms of the form
\[S_{\rm ct}=\frac{in}{8\pi^{2}f_{a}}\int(1+\rho)da\wedge\left(2z_{1}B^{(1)} \wedge F_{A}^{(2)}+z_{2}B^{(1)}\wedge F_{B}^{(2)}\right) \tag{15}\]
with \(z_{1,2}\in\mathbb{Z}\) are gauge invariant. This term is clearly \(U(1)_{A}\) invariant, so we only need to discuss \(\mathbb{Z}_{n}\) invariance. Under \(\mathbb{Z}_{n}\) gauge transformations, the above counter terms transform
as
\[\delta S_{\rm ct}=2\pi ik\int\frac{d\rho\wedge da}{2\pi f_{a}}\wedge\left(z_{1} \frac{F_{A}^{(2)}}{2\pi}+z_{2}\frac{F_{B}^{(2)}}{2\pi}\right). \tag{16}\]
Comparing with eq. (5) and (15), we see that the local counter terms eq. (15) allow us to reduce the anomaly coefficients for \(\mathbb{Z}_{n}\) mod \(n\), which of course is what we expect for discrete anomalies [27].
#### 2.1.2 Anomaly Cancellation by Fermion Zero Modes
As we have shown, the consistency of the IR EFT requires that the anomalies in eq. (11) and (12) be canceled by the 2d QFT on the cosmic string world sheet. These anomalies can be matched by 2d massless chiral fermions that are localized on the string. And indeed, such 2d fields often arise in UV completions as perturbations around zero-modes of bulk 4d fermions in the presence of cosmic strings [28]. We will demonstrate this feature of anomaly matching in a particular UV completion in Section 2.2.
Here, we will argue for the existence of 2d chiral fermions on the cosmic string world sheet purely based on IR consistency and determine conditions for their quantum numbers. For a set of \((1+1)\)d Weyl fermions \(\{\alpha_{i}(z,t)\}\) living on the core of the cosmic string, if their charges under \(U(1)_{A}\times\mathbb{Z}_{n}\) are \((Q_{i},k_{i})\), the anomaly cancellation conditions are written as
\[\sum_{i}Q_{i}^{2}=-K_{A},\ \ \ \sum_{i}Q_{i}k_{i}=-K_{AB}\;\text{mod}\;n,\ \ \ \sum_{i}k_{i}^{2}=-K_{B}\;\text{mod}\;n. \tag{17}\]
This is a straightforward but interesting result. Adding the TQFT-coupling, while not altering the 4d bulk QFT, has modified the 2d QFT on the string worldsheet: anomaly cancellation requires that the chiral degrees of freedom carry \(\mathbb{Z}_{n}\) as well as \(U(1)_{A}\) charges. In particular, as shown in the second requirement in eq. (17), at least some of the zero mode fermions localized on the string must be charged under both \(U(1)_{A}\) and \(U(1)_{B}\). As we discuss in Section 4, this property can lead to interesting features in cosmic string physics, which could in principle be observable.
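A minimal numerical sketch of the bookkeeping in eq. (17) is given below: for a candidate set of worldsheet Weyl fermions with charges \((Q_{i},k_{i})\), the three conditions are checked, with the two involving \(\mathbb{Z}_{n}\) imposed only mod \(n\). The charge assignment in the example is purely illustrative, and modes of opposite 2d chirality (which would enter the sums with opposite signs, cf. the explicit check in Section 2.2.3) are not included.

```python
# Check the anomaly inflow conditions of eq. (17) for a list of worldsheet
# Weyl fermions, each given as a pair (Q, k) of U(1)_A and Z_n charges.
def inflow_ok(modes, n, K_A, K_AB, K_B):
    sQQ = sum(Q * Q for Q, k in modes)
    sQk = sum(Q * k for Q, k in modes)
    skk = sum(k * k for Q, k in modes)
    return (sQQ == -K_A,               # U(1)_A^2 anomaly, exact
            (sQk + K_AB) % n == 0,     # U(1)_A x Z_n anomaly, mod n
            (skk + K_B) % n == 0)      # Z_n^2 anomaly, mod n

# Toy example (illustrative only): n = 3, two modes with charges (1, 1) and (1, 2).
print(inflow_ok([(1, 1), (1, 2)], n=3, K_A=-2, K_AB=3, K_B=1))
# -> (True, True, True)
```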
### UV Field Theory Completion
In this section, we introduce UV completions that reduce in the IR to the effective theory described by eq. (1) and (2). We solve the Dirac equation in the vortex background and determine the spectrum of fermion zero modes. Using this, we will check the anomaly cancellation discussed in Section 2.1.2 explicitly.
#### 2.2.1 UV theory
Here we take a \(U(1)_{A}\times U(1)_{B}\) gauge theory coupled to scalars and fermions. The Lagrangian is
\[\mathcal{L}= -\frac{1}{4g_{A}^{2}}F_{A}^{(2)}\wedge*F_{A}^{(2)}-\frac{1}{4g_{ B}^{2}}F_{B}^{(2)}\wedge*F_{B}^{(2)}+\sum_{i=1}^{2}\left(\bar{\psi}_{i}i\not{D} \psi_{i}+\bar{\chi}_{i}i\not{D}\chi_{i}\right) \tag{18}\] \[-|D\Phi_{1}|^{2}-|D\Phi_{2}|^{2}-\lambda_{1}\Phi_{1}^{\dagger} \psi_{1}\chi_{1}-\lambda_{2}\Phi_{2}\psi_{2}\chi_{2}+{\rm h.c.}+V(\Phi_{1}, \Phi_{2}).\]
The scalar potential is such that both \(\langle\Phi_{1}\rangle=f_{1}\) and \(\langle\Phi_{2}\rangle=f_{2}\) are non-zero. The quantum numbers of the fields are summarized in Table 1.
The vacuum expectation value (vev) \(f_{2}\) spontaneously breaks \(U(1)_{B}\to\mathbb{Z}_{n}\), producing a \(\mathbb{Z}_{n}\) BF theory, while \(f_{1}\) spontaneously breaks a combination of global \(U(1)_{\rm PQ}\) and \(U(1)_{B}\), which leads to a low energy axion. The fermion covariant derivatives contain \(U(1)_{A}\) and \(U(1)_{B}\) gauge fields with proper charges.
To get an effective theory below \(f_{1}\) and \(f_{2}\) by integrating out heavy fermions, it is convenient to rotate away the phases of the scalar fields from the Yukawa terms. We first write the scalar fields as \(\Phi_{i}=\varphi_{i}e^{i\theta_{i}},i=1,2\), where \(\varphi_{i}\) are classical configurations. For the vacuum solution \(\varphi_{i}=f_{i}\), while for a string solution (with winding number \(n_{i}\)) the Higgs vev has a non-trivial profile
\[\varphi_{i}=F_{i}(r). \tag{19}\]
The phases \(\theta_{i}\) are (would-be) Goldstone bosons and can be removed from the Yukawa terms by the following field redefinitions:
\[U(1)_{1}:\ \ \psi_{1}=e^{i\theta_{1}}\hat{\psi}_{1},\ \ \ \ U(1)_{2}:\ \ \chi_{2}=e^{-i\theta_{2}}\hat{\chi}_{2} \tag{20}\]
which are associated to the global symmetry transformations \(U(1)_{1}\) and \(U(1)_{2}\), respectively. These are, however, anomalous transformations which will generate axionic terms in the effective action.
\[S_{\rm anom}=\frac{i}{8\pi^{2}}\int(\theta_{1}-\theta_{2})\left(F_{A}^{(2)} \wedge F_{A}^{(2)}+2qF_{A}^{(2)}\wedge F_{B}^{(2)}+q^{2}F_{B}^{(2)}\wedge F_{B }^{(2)}\right) \tag{21}\]
where the prefactors \(1,q\) and \(q^{2}\) are respectively anomaly coefficients for \(U(1)_{i}\times U(1)_{A}^{2}\), \(U(1)_{i}\times U(1)_{A}\times U(1)_{B}\), and \(U(1)_{i}\times U(1)_{B}^{2}\) anomalies (\(i=1,2\)). After these field redefinitions, the Yukawa interactions become just fermion mass terms at energies below \(f_{1}\) and \(f_{2}\). At these energies, the effective action is eq. (1) and (2) with the matching
\[\theta_{1}-\theta_{2}=\frac{\Pi_{1}}{f_{1}}-\frac{\Pi_{2}}{f_{2}} \equiv\frac{a}{f_{a}},\ \ f_{a}=\frac{f_{1}f_{2}}{\sqrt{f_{1}^{2}+f_{2}^{2}}} \tag{22}\] \[K_{A}=1,\ \ K_{AB}=q,\ \ K_{B}=q^{2}. \tag{23}\]
The other orthogonal combination of \(\theta\)'s is the would-be Goldstone boson eaten by the
\(U(1)_{B}\) gauge boson. In the IR, the eaten Goldstone boson leads to a \(\mathbb{Z}_{n}\) gauge theory since the condensing scalar has charge \(n>1\). See Appendix B for more details.
In the limit \(f_{2}\gg f_{1}\), this theory describes the case where \(U(1)_{B}\) is Higgsed down to a \(\mathbb{Z}_{n}\) gauge theory below \(f_{2}\), generating BF strings. Provided that \(\lambda_{2}\) is not too small, the pair \(\psi_{2}\) and \(\chi_{2}\) are integrated out around \(f_{2}\), and the effective theory at \(f_{2}\gg E\gg f_{1}\) is a KSVZ-type theory [12; 13] coupled to a \(\mathbb{Z}_{n}\) BF theory. This latter coupling is in the form of anomalous terms proportional to \(\theta_{2}\) in eq. (21). Here, \(\theta_{2}\) encodes the 2-form BF degree of freedom \(B^{(2)}\) via the 4d duality relation \(d\theta_{2}\sim*dB^{(2)}\) so that the "BF"-sector of the theory is described by
\[\mathcal{L}=\frac{in}{2\pi}B^{(2)}\wedge F_{B}^{(2)}+\frac{i}{8\pi^{2}}q^{2}* dB^{(2)}\wedge B^{(1)}\wedge F_{B}^{(2)} \tag{24}\]
At lower energies with \(E\ll f_{1}\), the \(U(1)_{\rm PQ}\) is broken by the vev of \(\Phi_{1}\), resulting in the physical axion field in \(S_{\rm axion}\) (see eq. (22)).
In cosmology, at temperatures \(f_{1}<T<f_{2}\), a \(\mathbb{Z}_{n}\) BF theory arises and BF strings can form according to the Kibble-Zurek mechanism [29; 30; 31; 32]. The BF string is supported by a non-zero \(B^{(2)}\) magnetic flux through its core, which corresponds to the winding of \(\theta_{2}\) in the dual picture. Equivalently, the BF string defect can be thought of as the Wilson surface operator defined in eq. (4).
We make a few remarks about the Wilson operators of the BF theory. BF theory has \(\mathbb{Z}_{n}\)-classified line operators \(W_{1}=e^{i\oint_{\Sigma_{1}}B^{(1)}}\) that are charged under a 1-form \(\mathbb{Z}_{n}^{(1)}\) global symmetry which acts as \(B^{(1)}\to B^{(1)}+\frac{2\pi}{n}\lambda^{(1)}\), where \(\lambda^{(1)}\) is a 1-form with integer periods. These operators can be cut by \(\mathbb{Z}_{n}\) charged fermions - in our model \(\{\psi_{1},\chi_{1}\}\). Therefore, the topologically protected line operators are characterized by \(\mathbb{Z}_{\rm GCD(q,n-q)}\).
Additionally, the theory has \(\mathbb{Z}_{n}\)-classified string/surface operators \(W_{2}=e^{i\oint_{\Sigma_{2}}B^{(2)}}\) that are charged under a 2-form \(\mathbb{Z}_{n}^{(2)}\) global symmetry which acts as \(B^{(2)}\to B^{(2)}+\frac{2\pi}{n}\lambda^{(2)}\) where \(\lambda^{(2)}\) is a 2-form with integer periods. They may be cut and broken by the creation of a monopole-anti-monopole pair. In the absence of monopoles in the theory, all \(\mathbb{Z}_{n}\) BF strings are topologically stable.8
Footnote 8: \(U(1)\) gauge theory with only electrically charged particles has one-form magnetic _global_ symmetry characterized by \(dF=0\). It is believed that quantum gravity does not admit any exact global symmetry and so this symmetry should be either gauged or explicitly broken. If \(U(1)\) is embedded in a GUT group, then monopoles exist and they explicitly break the 1-form magnetic symmetry.
These \(\mathbb{Z}_{n}\) BF strings are quasi-Aharonov-Bohm strings according to the classification by Polchinski [33; 21]. Indeed, the light fermions at \(f_{1}<E<f_{2}\) give rise to discrete Aharonov-Bohm phases when they circle around the BF strings. These strings are "quasi" because the probe states are not only charged under \(\mathbb{Z}_{n}\subset U(1)_{B}\), but they are also charged under the low energy unbroken gauge group \(U(1)_{A}\).
At energies below \(f_{1}\), all charged fermions are integrated out. Since there are no charged states, at this scale the BF strings become _local strings_ which do not have any topological charges that can be probed by observers at spatial infinity. In addition, due to the \(U(1)_{\rm PQ}\) breaking, we get _global strings_ which are measured by a topological charge: the axion winding.9
Footnote 9: Our theory with the charge assignment given in Table 1, however, does not allow for the \(\mathbb{Z}_{n}\) TQFT to emerge below PQ symmetry breaking. Note that as we try to take the other limit \(f_{1}\gg f_{2}\), \(U(1)_{B}\) is still broken to \(\mathbb{Z}_{n}\) since both scalars carry \(U(1)_{B}\) charge \(n\). However, it is possible to construct models which have \(f_{a}\gg\Lambda_{\text{BF}}\).
#### 2.2.2 Fermion Zero Modes
In this section, we solve the Dirac equation of the UV theory eq. (18) with the charge assignment of Table 1, and determine the spectrum of fermion zero modes, thereby demonstrating the anomaly cancellation. Our analysis below closely follows [34].
The equations of motion for the pair \(\{\psi_{1},\chi_{1}\}\) are given by
\[i\not{D}\psi_{1}=\lambda_{1}\Phi_{1}\overline{\chi}_{1},\ \ \ \ \ i\not{D}\chi_{1}=\lambda_{1}\Phi_{1} \overline{\psi}_{1} \tag{25}\]
We look for a solution in the background of the string \(\Phi_{1}=F_{1}(r)e^{i\phi}\) of the form10
Footnote 10: The case with arbitrary winding number can be studied by adopting the method of [28].
\[\psi_{1}(x,y,z,t)=\alpha_{1}(z,t)\beta_{1}(x,y),\ \ \ \ \ \chi_{1}(x,y,z,t)=\eta_{1}(z,t)\xi_{1}(x,y) \tag{26}\]
where \(\alpha_{1}\) and \(\eta_{1}\) are zero modes localized on the string core and \(\beta_{1}\) and \(\xi_{1}\) are transverse zero modes. We split the Dirac operator into string- and transverse-part \(i\not{D}=i\not{D}_{s}(z,t)+i\not{D}_{T}(x,y)\) and we also define chirality operators \(\Gamma^{\text{int}}=\gamma^{0}\gamma^{3}\) and \(\Gamma^{\text{ext}}=i\gamma^{1}\gamma^{2}\) that act on the string-core and transverse space, respectively.
Let us first solve for the transverse zero modes, which satisfy
\[i\not{D}_{T}\beta_{1}=\lambda_{1}\Phi_{1}\overline{\xi}_{1},\ \ \ \ \ i\not{D}_{T}\xi_{1}=\lambda_{1}\Phi_{1}\overline{\beta}_{1}. \tag{27}\]
For \(\phi\)-independent solution, the transverse Dirac operator can be written as
\[i\not{D}_{T}=i\left(\gamma^{1}\partial_{1}+\gamma^{2}\partial_{2}\right)=i \gamma^{1}\left(\cos\phi+i\Gamma^{\text{ext}}\sin\phi\right)\partial_{r} \tag{28}\]
and, in the background of a winding number one string, the Dirac equations become
\[i\gamma^{1}\left(\cos\phi+i\Gamma^{\text{ext}}\sin\phi\right) \partial_{r}\beta_{1}=\lambda_{1}F_{1}e^{i\phi}\overline{\xi}_{1} \tag{29}\] \[i\gamma^{1}\left(\cos\phi+i\Gamma^{\text{ext}}\sin\phi\right) \partial_{r}\xi_{1}=\lambda_{1}F_{1}e^{i\phi}\overline{\beta}_{1}. \tag{30}\]
The angular dependence requires that
\[\Gamma^{\text{ext}}\beta_{1}=+\beta_{1},\ \ \ \ \ \Gamma^{\text{ext}}\xi_{1}=+\xi_{1} \tag{31}\]
and with this the radial part can be straightforwardly integrated to obtain
\[\beta_{1}(r)=\exp\left(-\int_{0}^{r}\lambda_{1}F_{1}(r^{\prime})dr^{\prime} \right),\ \ \ \ \ \overline{\xi}_{1}=-i\gamma^{1}\beta_{1}. \tag{32}\]
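As an illustration of eq. (32), the transverse zero-mode profile can be evaluated numerically once a vortex profile \(F_{1}(r)\) is specified. The tanh profile used in the sketch below is a standard illustrative ansatz, not the solution of the scalar field equations of this particular model.

```python
import numpy as np

# Evaluate beta_1(r) = exp(-int_0^r lambda_1 F_1(r') dr') for an illustrative
# vortex profile F_1(r) = f_1 * tanh(f_1 * r).
lam1, f1 = 1.0, 1.0
r = np.linspace(0.0, 10.0, 2001)
F1 = f1 * np.tanh(f1 * r)

# Cumulative trapezoidal integral of F_1 from 0 to r.
integral = np.concatenate(([0.0], np.cumsum(0.5 * (F1[1:] + F1[:-1]) * np.diff(r))))
beta1 = np.exp(-lam1 * integral)

# The mode is O(1) at the core and decays exponentially for r >> 1/(lam1 * f1).
print(beta1[0], beta1[-1])
```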
Using the transverse zero mode equations, the equation for the string zero mode can be shown to be
\[\left(i\not{D}_{s}\alpha_{1}\right)\beta_{1}=\left(\overline{\eta}_{1}-\alpha_{1 }\right)\left(\lambda_{1}\Phi_{1}\overline{\xi}_{1}\right) \tag{33}\]
and this shows that a non-trivial string zero mode exists if and only if \(\alpha_{1}=\overline{\eta}_{1}\); that is, there is only one string zero mode per pair \(\{\psi_{1},\chi_{1}\}\). The string zero mode equation becomes
\[0=i\not{D}_{s}\alpha_{1}=i\gamma^{0}\left(\partial_{t}+\Gamma^{\rm int} \partial_{z}\right)\alpha_{1}(z,t). \tag{34}\]
Recalling \(\gamma^{5}\psi_{1}=-\psi_{1}\), eq. (31) implies that \(\alpha_{1}\) should be a 2d LH chiral fermion, \(\Gamma^{\rm int}\alpha_{1}=-\alpha_{1}\). The zero mode equation is then solved by
\[\alpha_{1}(z,t)=g(z+t) \tag{35}\]
for an arbitrary function \(g\). This means that the 2d LH chiral zero mode propagates in the \(-\hat{z}\) direction at the speed of light. Overall, in the background of a \(\Phi_{1}\)-string of winding number one, the pair \(\{\psi_{1},\chi_{1}\}\) coupled to \(\Phi_{1}\) gives rise to a single LH 2d zero mode traveling in the \(-\hat{z}\) direction at the speed of light. On an anti-\(\Phi_{1}\) string, the same procedure shows that \(\beta_{1}\) should be a negative helicity state while \(\alpha_{1}\) is now a positive helicity mode running in the \(+\hat{z}\) direction.
The analysis for \(\{\psi_{2},\chi_{2}\}\) coupled to \(\Phi_{2}\) follows similarly, with one exception: while \(\{\psi_{1},\chi_{1}\}\) couples to \(\Phi_{1}^{\dagger}\), \(\{\psi_{2},\chi_{2}\}\) couples to \(\Phi_{2}\) (see eq. (18)). This has the effect that the pair \(\{\psi_{2},\chi_{2}\}\) coupled to a winding number one \(\Phi_{2}\)-string behaves as if it couples to an anti-\(\Phi_{2}\)-string (winding \(-1\)). Practically, we need to make the replacement \(e^{i\phi}\to e^{-i\phi}\) in eq. (29) and (30) with the \(1\to 2\) relabeling. The final result is that the pair \(\{\psi_{2},\chi_{2}\}\) in the background of a winding number one \(\Phi_{2}\)-string has a single RH zero mode localized at the core which propagates in the \(+\hat{z}\) direction.
The quantum numbers of the string zero modes are read off from those of the original fermions after taking into account the field redefinitions eq. (20) (note that the Dirac equations are in the basis where all the phase degrees of freedom are removed from the Yukawa interaction).11 The final results are summarized in Table 2.
Footnote 11: Additionally, we must take into account the change of charges due to the field redefinitions so that the condition \(\alpha_{1}=\overline{\eta}_{1}\) can be fulfilled.
#### 2.2.3 Anomaly Cancellation
Recall from eq. (21) that at \(E<f_{1},f_{2}\), the phase factor appearing in the anomalous term is the combination \(\theta_{1}-\theta_{2}=a/f_{a}\). Here, \(\theta_{1}\) measures the winding of \(\Phi_{1}\)-string
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline & chirality & \(U(1)_{\rm PQ}\) & \(U(1)_{A}\) & \(U(1)_{B}\) \\ \hline \hline \(\alpha_{1}\) & LH & 0 & 1 & \(q-n\) \\ \hline \(\alpha_{2}\) & RH & 0 & 1 & \(q-n\) \\ \hline \end{tabular}
\end{table}
Table 2: Quantum numbers of string zero modes in the \(\Phi_{1}\)-string of winding number one (for \(\alpha_{1}\)) and the \(\Phi_{2}\)-string of winding number one (for \(\alpha_{2}\)).
and \(\theta_{2}\) measures that of \(\Phi_{2}\)-string. Let us denote the winding numbers as \(\{n_{1},n_{2}\}\). We are interested in checking the anomaly cancellation in the background of winding number \(\{n_{1},n_{2}\}\) string. In this case, the bump function satisfies
\[d\rho\wedge da=2\pi(n_{1}-n_{2})f_{a}\,\delta^{(2)}\left(M_{2}^{\rm st}\right). \tag{36}\]
Using this, anomalous variation of \(S_{\rm axion}\) is found to be
\[\delta_{A}S_{\rm anom} =\frac{1}{4\pi}\int_{M_{2}^{\rm st}}(n_{1}-n_{2})\lambda_{A}\left( F_{A}^{(2)}+qF_{B}^{(2)}\right) \tag{37}\] \[\delta_{B}S_{\rm anom} =\frac{1}{4\pi}\int_{M_{2}^{\rm st}}(n_{1}-n_{2})\lambda_{B} \left(qF_{A}^{(2)}+q^{2}F_{B}^{(2)}\right). \tag{38}\]
One immediately notices that for \(n_{1}=n_{2}\), there is no localized anomaly on the composite cosmic string. The reason is that the 2d fermions \(\alpha_{1}\) and \(\alpha_{2}\) form a vector-like pair, so that when \(n_{1}=n_{2}\) (i.e. there are equal numbers of \(\alpha_{1}\) and \(\alpha_{2}\) fields) the 2d QFT is completely vector-like. In fact, for \(n_{1}=n_{2}\), one may identify \(\Phi_{1}=\Phi_{2}\) in the action and realize that the theory is just Witten's vector-like theory of superconducting strings [34], augmented by a coupling to \(U(1)_{B}\).
For \(n_{1}\neq n_{2}\), there are non-vanishing anomalies from the bulk term. The 2d chiral anomalies from the string zero modes are given by
\[\partial_{\mu}J_{A}^{\mu} =-\frac{1}{4\pi}\left(n_{1}-n_{2}\right)\left(F_{A}^{(2)}+(q-n)F_{ B}^{(2)}\right) \tag{39}\] \[\partial_{\mu}J_{B}^{\mu} =-\frac{1}{4\pi}\left(n_{1}-n_{2}\right)\left((q-n)F_{A}^{(2)}+(q- n)^{2}F_{B}^{(2)}\right). \tag{40}\]
Here, we see that the 2d anomalies cancel the bulk anomalies exactly mod \(n\); taking into account the gauge-invariant local counterterms of eq. (15), this leads to full cancellation of the bulk variations in eqs. (11) and (12).
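As a quick cross-check of this mod-\(n\) cancellation, the bookkeeping can be reproduced symbolically. The following sympy sketch is our own illustrative check (with the overall factor \((n_{1}-n_{2})/4\pi\) and the gauge parameters stripped off); it confirms that the sum of the bulk variations in eqs. (37)-(38) and the zero-mode anomalies in eqs. (39)-(40) is proportional to \(n\), which is precisely the part that the counterterms of eq. (15) can absorb.

```python
import sympy as sp

q, n = sp.symbols('q n', integer=True)

# Bulk anomalous variations, eqs. (37)-(38): coefficients of [F_A^(2), F_B^(2)],
# with the common factor (n1 - n2)/(4*pi) stripped off.
bulk = {'A': [1, q], 'B': [q, q**2]}

# Localized 2d anomalies of the string zero modes, eqs. (39)-(40); the zero
# modes carry U(1)_B charge (q - n) after the field redefinitions.
zero_modes = {'A': [-1, -(q - n)], 'B': [-(q - n), -(q - n)**2]}

for key in bulk:
    residual = [sp.expand(b + z) for b, z in zip(bulk[key], zero_modes[key])]
    # A polynomial in n is divisible by n iff it vanishes at n = 0.
    assert all(r.subs(n, 0) == 0 for r in residual)
    print(key, residual)  # -> A: [0, n],  B: [n, 2*n*q - n**2]
```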
## 3 TQFT-Coupling II: Gauging a Discrete Subgroup
In this section, we discuss alternative ways to couple the axion-Maxwell theory to a TQFT. In Section 3.1 we consider a coupling of axion-Maxwell theory to a \(\mathbb{Z}_{M}\) BF theory which effectively gauges a \(\mathbb{Z}_{M}\) subgroup of the axion 0-form shift symmetry. As we show below, this TQFT-coupling modifies the cosmic string spectrum in an interesting way, which can in principle be observable. This coupling also changes the 3-group symmetry structure, as we show in Section 3.3. We then generalize this discussion further to classify all possible TQFT-couplings via discrete gauging in Section 3.4.
We begin with a charge \(K\) axion-Maxwell theory
\[S=\frac{1}{2}\int da\wedge*da+\frac{1}{2g^{2}}\int F^{(2)}\wedge*F^{(2)}-\frac {iK}{8\pi^{2}f_{a}}\int aF^{(2)}\wedge F^{(2)}. \tag{41}\]
The global symmetry structure of this simple theory, without gauging the subgroup
\(\mathbb{Z}_{M}^{(0)}\subset\mathbb{Z}_{K}^{(0)}\), is quite rich. It is discussed in detail in [5, 14, 15] and follows from the analysis of Appendix C by removing all terms involving \(K_{B}\) and \(K_{AB}\). For the reader's convenience, we give a quick summary below.
The symmetry structure of the charge \(K\) axion-Maxwell theory consists of
1. 0-form \(\mathbb{Z}_{K}^{(0)}\) axion shift symmetry: without the axion coupling to gauge fields, the theory has a \(U(1)^{(0)}\) shift symmetry \(a\to a+cf_{a},\ c\in\mathbb{S}^{1}\) with a corresponding current \(*J_{1}=if_{a}*da\). The charge \(K\) topological coupling breaks \(U(1)^{(0)}\rightarrow\mathbb{Z}_{K}^{(0)}\).12 Footnote 12: As shown in [16, 35], the more correct statement is that the _invertible_ \(U(1)\) shift symmetry gets converted into a set of _non-invertible_ shift symmetries. In particular, as far as correlation functions of local operators are concerned, the presence of non-invertible symmetry imposes selection rules that respect rational shift symmetries \(\mathbb{Q}/\mathbb{Z}\subset U(1)^{(0)}\) (rather than \(\mathbb{Z}_{K}\subset U(1)^{(0)}\)). On the other hand, the non-invertible shift symmetry acts non-trivially on 't Hooft line operators. See [16, 35] for more details and [7] for non-invertible symmetries in the Standard Model and beyond and novel applications in particle physics model building.
2. 2-form \(U(1)^{(2)}\) axion winding symmetry: this symmetry is dual to the 0-form shift symmetry and has the corresponding current \(*J_{3}=\frac{1}{2\pi f_{a}}da\). This symmetry is a consequence of the Bianchi identity \(d^{2}a=0\) and acts on 2d cosmic/axion strings.
3. 1-form \(\mathbb{Z}_{K}^{(1)}\) electric symmetry: similar to the axion shift symmetry, pure Maxwell theory has a \(U(1)^{(1)}\) electric 1-form symmetry following from the equation of motion \(d*F^{(2)}=0\) which is then broken \(U(1)^{(1)}\rightarrow\mathbb{Z}_{K}^{(1)}\) by the charge \(K\) topological coupling. This symmetry acts on Wilson line operators by shifting the gauge field by a discrete gauge field.
4. 1-form \(U(1)^{(1)}\) magnetic symmetry: this symmetry is dual to the electric 1-form symmetry and has the corresponding current \(*J_{2}=\frac{1}{2\pi}F\). This symmetry is a direct consequence of the Bianchi identity \(dF^{(2)}=0\) and acts on 't Hooft lines which can be thought of as the IR-realization of massive, stable monopoles of the UV theory.
In the following, we will refer to \(\mathbb{Z}_{K}^{(0)},\mathbb{Z}_{K}^{(1)}\) as "electric symmetries" - since they shift local fields - and the \(U(1)^{(1)},U(1)^{(2)}\) as "magnetic symmetries" because they are both dual to an electric symmetry.
In order to analyze these symmetries, we will couple to background gauge fields of the global symmetries listed above. The naive coupling leads to
\[S= \frac{1}{2}\int_{M_{4}}(da-f_{a}\mathcal{A}_{e}^{(1)})\wedge*(da -f_{a}\mathcal{A}_{e}^{(1)})+\frac{i}{2\pi f_{a}}\int_{M_{4}}da\wedge\mathcal{ A}_{m}^{(3)}\] \[+ \frac{1}{2g^{2}}\int_{M_{4}}(F^{(2)}-\mathcal{B}_{e}^{(2)})\wedge *(F^{(2)}-\mathcal{B}_{e}^{(2)})+\frac{i}{2\pi}\int_{M_{4}}F^{(2)}\wedge \mathcal{B}_{m}^{(2)}\] \[- \frac{iK}{8\pi^{2}f_{a}}\int_{N_{5}}(da-f_{a}\mathcal{A}_{e}^{(1 )})\wedge(F^{(2)}-\mathcal{B}_{e}^{(2)})\wedge(F^{(2)}-\mathcal{B}_{e}^{(2)}),\]
where, in the last line, we have written the axion-coupling on an auxiliary 5-manifold \(N_{5}\) bounding our 4d spacetime \(M_{4}\): \(\partial N_{5}=M_{4}\). This presentation makes the theory manifestly invariant under background gauge transformations up to terms that are independent of the
dynamical fields.13 However, the above presentation is in fact dependent on the choice of \(N_{5}\) due to the fact that the background gauge fields \({\cal A}_{e}^{(1)}\) and \({\cal B}_{e}^{(2)}\) are \(\mathbb{Z}_{K}\)-valued; hence the integrand in the last line, evaluated on a closed 5-manifold, is not necessarily valued in \(2\pi\mathbb{Z}\).
Footnote 13: The additional terms that are generated by background gauge transformations that are independent of the dynamical fields can be interpreted as ‘t Hooft anomalies. We will discuss these ‘t Hooft anomalies in Section 3.3.
The theory can be made independent of \(N_{5}\) by modifying the "magnetic" symmetries so that instead of the standard \(U(1)\)-transformations, the background gauge fields \({\cal A}_{m}^{(3)},{\cal B}_{m}^{(2)}\) additionally transform under the electric symmetries:
\[{\cal A}_{m}^{(3)}\to{\cal A}_{m}^{(3)}+d\lambda_{m}^{(2)}-\frac{K }{4\pi}\left(2\lambda_{e}^{(1)}\wedge{\cal B}_{e}^{(2)}+\lambda_{e}^{(1)} \wedge d\lambda_{e}^{(1)}\right) \tag{3.3}\] \[{\cal B}_{m}^{(2)}\to{\cal B}_{m}^{(2)}+d\lambda_{m}^{(1)}-\frac{ K}{2\pi}\left(\lambda_{e}^{(0)}{\cal B}_{e}^{(2)}+\lambda_{e}^{(1)}\wedge{\cal A }_{e}^{(1)}+\lambda_{e}^{(0)}d\lambda_{e}^{(1)}\right). \tag{3.4}\]
Note that there is a \(\mathbb{Z}_{K}^{(1)}\) that participates in the 3-group global symmetry rather than the \(\mathbb{Z}_{\sqrt{K}}^{(1)}\) which is a genuine 1-form global symmetry.
These modified symmetries then have modified field strengths \({\cal G}^{(4)},{\cal H}^{(3)}\):
\[{\cal G}^{(4)}=d{\cal A}_{m}^{(3)}+\frac{K}{4\pi}{\cal B}_{e}^{(2 )}\wedge{\cal B}_{e}^{(2)} \tag{3.5}\] \[{\cal H}^{(3)}=d{\cal B}_{m}^{(2)}+\frac{K}{2\pi}{\cal A}_{e}^{(1 )}\wedge{\cal B}_{e}^{(2)}. \tag{3.6}\]
so that the action is written
\[S= \frac{1}{2}\int(da-f_{a}{\cal A}_{e}^{(1)})\wedge*(da-f_{a}{\cal A }_{e}^{(1)})+\frac{i}{2\pi f_{a}}\int a{\cal G}^{(4)}\] \[+ \frac{1}{2g^{2}}\int(F^{(2)}-{\cal B}_{e}^{(2)})\wedge*(F^{(2)}- {\cal B}_{e}^{(2)})+\frac{i}{2\pi}\int A^{(1)}\wedge{\cal H}^{(3)}\] \[- \frac{iK}{8\pi^{2}f_{a}}\int_{N_{5}}(da-f_{a}{\cal A}_{e}^{(1)}) \wedge(F^{(2)}-{\cal B}_{e}^{(2)})\wedge(F^{(2)}-{\cal B}_{e}^{(2)}),\]
and is independent of the choice of \(N_{5}\). The modified transformation rules in eqs. (3.3) - (3.4) show that the axion-Maxwell theory possesses a 3-group symmetry. Loosely speaking, an \(n\)-group is constructed from 0-, 1-,\(\cdots\), \((n-1)\)-form global symmetries in which the \(p\)-form symmetries mix non-trivially with the \(q\)-form symmetries for \(q>p\). In our case, eq. (3.5) shows that the \(\mathbb{Z}_{K}^{(1)}\) symmetry turns on the field strength of the \(U(1)^{(2)}\) symmetry and, similarly, eq. (3.6) shows that \(\mathbb{Z}_{K}^{(0)}\) and \(\mathbb{Z}_{K}^{(1)}\) mix with the \(U(1)^{(1)}\) symmetry. See Appendix C.2 for more details.
### Gauging \(\mathbb{Z}_{M}^{(0)}\subset\mathbb{Z}_{K}^{(0)}\) and Axion-String Spectrum
Since \(\mathbb{Z}_{K}^{(0)}\) has no ABJ-anomaly, it is a good quantum symmetry and can be gauged14. Here, we will study the gauging of a subgroup \(\mathbb{Z}_{M}^{(0)}\subset\mathbb{Z}_{K}^{(0)}\) and its implications.
First, note that gauging the \(\mathbb{Z}_{M}\) discrete subgroup is equivalent to coupling the original theory to a \(\mathbb{Z}_{M}\) TQFT. Indeed, this is not specific to discrete gauging and is familiar from gauging continuous symmetries. Imagine a theory with a continuous global symmetry with a current \(J_{\mu}\) coupled to a background gauge field \(\mathcal{V}^{\mu}\)
\[S\supset i\int J_{\mu}\mathcal{V}^{\mu}. \tag{3.8}\]
Gauging this global symmetry is achieved by supplementing the theory with a dynamical gauge field. The above term then describes nothing but the coupling between the original theory and the newly introduced gauge sector. In our case, the gauge theory sector is a \(\mathbb{Z}_{M}\) TQFT, and it couples to the theory analogously to the case of a continuous gauge field because the \(\mathbb{Z}_{M}^{(0)}\) symmetry results from an explicitly broken \(U(1)^{(0)}\) symmetry.
In order to understand how the discrete gauging affects the axion theory, it is instructive to first study the discrete gauging of a free \(U(1)\) Goldstone boson.
#### 3.1.1 Discrete Gauging of Free \(U(1)\) Goldstone Boson
Consider a \(U(1)\)-valued Goldstone boson \(\phi\). This field has the standard action15
Footnote 15: We can alternatively write the action in terms of a canonically normalized field \(\Phi=f\,\phi\) which satisfies \(\Phi\sim\Phi+2\pi f\) analogously to the axion \(a\sim a+2\pi f_{a}\).
\[S=\int\frac{f^{2}}{2}(d\phi)^{2}\quad,\quad\phi\sim\phi+2\pi. \tag{3.9}\]
This periodic identification of \(\phi\) can be thought of as a gauge redundancy that transforms a naturally \(\mathbb{R}\)-valued scalar field to a \(U(1)\)-valued scalar field. This theory has a \(U(1)^{(0)}\times U(1)^{(2)}\) global symmetry corresponding to shift symmetry and winding symmetry. These symmetries have currents
\[*J_{1}=if^{2}*d\phi\quad,\quad*J_{3}=\frac{d\phi}{2\pi}. \tag{3.10}\]
Here \(*J_{1}\) is the momentum density - it generates shifts of \(\phi\) - and \(*J_{3}\) measures the winding of \(\phi\). The corresponding charged operators are \(I(q,x)=e^{iq\phi(x)}\) and the cosmic string operator \(\mathcal{S}_{2}(\ell,\Sigma_{2})\) of charge \(\ell\), respectively.
Now let us consider gauging the \(\mathbb{Z}_{M}^{(0)}\subset U(1)^{(0)}\) shift symmetry. In this case, we modify the action by coupling to the 1-form \(\mathbb{Z}_{M}\) gauge field \(C^{(1)}\)
\[S=\int\frac{f^{2}}{2}(d\phi-C^{(1)})\wedge*(d\phi-C^{(1)})+\frac{iM}{2\pi} \int dC^{(1)}\wedge D^{(2)}\, \tag{3.11}\]
where \(D^{(2)}\) is a 2-form \(\mathbb{Z}_{M}\) gauge field. This gauging modifies the spectrum of operators in the theory in two complementary ways: it projects out local operators (those charged under \(\mathbb{Z}_{M}^{(0)}\)) and adds "dual" string operators (\(\mathbb{Z}_{M}^{(2)}\) BF strings).
First, note that the gauging identifies
\[\phi\sim\phi+\frac{2\pi}{M}. \tag{3.12}\]
This means that the gauging projects out local operators
\[I(q,x),\quad q\notin M\mathbb{Z}\, \tag{3.13}\]
because under the shift \(\phi\mapsto\phi+\frac{2\pi}{M}\), the local operator shifts as
\[I(q,x)\mapsto e^{\frac{2\pi i\,q}{M}}I(q,x)\, \tag{3.14}\]
so that it is not gauge invariant unless \(q\in M\mathbb{Z}\). Alternatively, such a charged operator can be made a gauge invariant operator by attaching a (discrete) Wilson line to it
\[I(q,x)\rightarrow\tilde{I}(q,x)=e^{iq\int_{x}^{\infty}C^{(1)}}I(q,x). \tag{3.15}\]
For \(q\in M\mathbb{Z}\), however, the operator \(\tilde{I}(q,x)\) becomes a well-defined _local_ operator because the attached Wilson line is trivial and \(\tilde{I}(q,x)\) reduces to \(I(q,x)\).16
Footnote 16: As discussed in detail in Appendix B, the line operators \(W(\Sigma_{1})=e^{i\oint_{\Sigma_{1}}C^{(1)}}\) satisfy \(W^{M}=1\) and so for \(q\in M\mathbb{Z}\), one sees that \(\tilde{I}(q,x)\) reduces to \(I(q,x)\).
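As a minimal numerical illustration of this projection (with \(M=4\) chosen arbitrarily for display), the following snippet lists which charges \(q\) give genuine local operators after the gauging and which instead require the Wilson-line dressing of eq. (3.15):

```python
# Minimal illustration of the Z_M projection of local operators I(q, x),
# with M = 4 chosen arbitrarily for display purposes.
M = 4
for q in range(0, 9):
    # phase picked up under phi -> phi + 2*pi/M, in units of 2*pi
    phase = (q % M) / M
    status = "genuine local operator" if q % M == 0 else "needs a Wilson line, eq. (3.15)"
    print(f"q = {q}: phase = {phase:.2f} x 2*pi  ->  {status}")
```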
Similarly, due to the identification (3.12) we are now allowed to have _fractional_ winding numbers:
\[\oint\frac{d\phi}{2\pi}\in\frac{1}{M}\mathbb{Z}. \tag{3.16}\]
These fractional winding numbers are in a sense "added" to the spectrum - they are constructed by \(\phi\) passing through \(2\pi/M\) of the \(2\pi\) period which is then accompanied by a \(\mathbb{Z}_{M}\) gauge transformation. This \(\mathbb{Z}_{M}\) gauge transformation requires a non-trivial gauge background given by an insertion of the BF string operator
\[W_{2}(\ell,\Sigma_{2})=e^{i\ell\oint_{\Sigma_{2}}D^{(2)}}\, \tag{3.17}\]
which are charged under the 2-form \(\mathbb{Z}_{M}^{(2)}\) global symmetry that shifts by \(D^{(2)}\to D^{(2)}+\frac{2\pi}{M}\lambda_{2}\) with \(\oint\lambda_{2}\in 2\pi\mathbb{Z}\). These string operators are classified by the \(\mathbb{Z}_{M}\) charges \(\ell=0,1,\cdots,(M-1)\).
We can see that the insertion of the BF string operator leads to the fractional \(\phi\)-winding configurations explicitly by noting that in the presence of the BF string the equations of motion for \(C^{(1)}\) are given by:
\[dC^{(1)}=\frac{2\pi\,\ell}{M}\delta^{(2)}(\Sigma_{2})\, \tag{3.18}\]
which implies that \(\oint\frac{C^{(1)}}{2\pi}=\frac{\ell}{M}\) in the presence of the BF string. Now we can use the fact that the combination \(d\phi-C^{(1)}\) is \(\mathbb{Z}_{M}\)-gauge invariant to see that in the presence of the BF
string
\[\oint\frac{C^{(1)}}{2\pi}=\oint\frac{d\phi}{2\pi}\ {\rm mod}_{ \mathbb{Z}}=\frac{\ell}{M}. \tag{3.19}\]
See Appendix B for a detailed discussion.
As far as local physics is concerned, this theory is IR equivalent to the original free periodic scalar. This may be seen explicitly as follows. Consider the rescaling \(\phi\to\tilde{\phi}=M\phi,\ C^{(1)}\to\tilde{C}^{(1)}=MC^{(1)},\ f\to\tilde{f}=f/M\). It is easy to show that the theory with discrete gauging eq. (3.11) becomes
\[S=\int\frac{\tilde{f}^{2}}{2}(d\tilde{\phi}-\tilde{C}^{(1)}) \wedge*(d\tilde{\phi}-\tilde{C}^{(1)})+\frac{i}{2\pi}\int d\tilde{C}^{(1)} \wedge D^{(2)}. \tag{3.20}\]
The second term is recognized as a trivial theory (it is a \(\mathbb{Z}_{M}\) gauge theory with \(M=1\)), and it effectively gauges none of the \(U(1)\) shift symmetry of \(\tilde{\phi}\). Accordingly, \(\tilde{C}^{(1)}\) is a background (as opposed to dynamical) gauge field for the \(U(1)\) shift symmetry. One can easily check that \(\tilde{\phi}\) has \(2\pi\) periodicity and its winding number is given by \(\oint d\tilde{\phi}\in 2\pi\mathbb{Z}\).
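This rescaling can be checked schematically by treating the differential forms as commuting symbols. The following sympy sketch is a purely illustrative check (with the factor of \(i\) in the BF term dropped) that the substitution maps eq. (3.11) into eq. (3.20):

```python
import sympy as sp

M = sp.symbols('M', positive=True, integer=True)
f, ft = sp.symbols('f f_tilde', positive=True)
dphi, C, dC, D = sp.symbols('dphi C dC D')
dphit, Ct, dCt = sp.symbols('dphi_tilde C_tilde dC_tilde')

# Schematic density of eq. (3.11): kinetic term plus BF term, with wedge
# products treated as ordinary products and the factor of i dropped.
S_orig = sp.Rational(1, 2) * f**2 * (dphi - C)**2 + M / (2 * sp.pi) * dC * D

# Rescaling: phi -> M*phi, C^(1) -> M*C^(1), f -> f/M
S_rescaled = S_orig.subs({dphi: dphit / M, C: Ct / M, dC: dCt / M, f: M * ft})

# Target form, eq. (3.20): the BF coupling becomes level 1 (i.e. trivial)
S_target = sp.Rational(1, 2) * ft**2 * (dphit - Ct)**2 + dCt * D / (2 * sp.pi)

assert sp.simplify(S_rescaled - S_target) == 0
```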
One may then wonder whether the \(\mathbb{Z}_{M}\subset U(1)\) discrete gauging has any physical consequences. The answer is that discrete gauging does have important physical implications, and in the current example they are captured by cosmic string physics. In terms of the original variables, we showed that in addition to the original global strings, the discrete gauging adds \(\mathbb{Z}_{M}\)-classified local (BF) strings. While the tension of the former is of the order \(\sim f^{2}\), the tension of the latter is set by the square of the scale at which the BF theory emerges, \(\Lambda_{\rm BF}^{2}\). In general, this second scale can be parametrically larger than \(f\), and therefore the varying tensions in the cosmic string spectrum can encode the effect of the \(\mathbb{Z}_{M}\) gauging.
#### 3.1.2 Discrete Gauging of Axion-Maxwell Theory
Now let us return to axion-Maxwell theory and discuss the partial gauging of discrete \(\mathbb{Z}_{M}^{(0)}\subset\mathbb{Z}_{K}^{(0)}\). After gauging, the action becomes
\[S = \frac{1}{2}\int(da-f_{a}C^{(1)})\wedge*(da-f_{a}C^{(1)})+\frac{1} {2g^{2}}\int F^{(2)}\wedge*F^{(2)} \tag{3.21}\] \[+\frac{iK}{f_{a}}\int(da-f_{a}C^{(1)})\wedge\omega_{3}(A^{(1)})+ \frac{iM}{2\pi}\int D^{(2)}\wedge dC^{(1)}\]
where \(C^{(1)}\) is the dynamical gauge field for the \(\mathbb{Z}_{M}^{(0)}\) and \(\omega_{3}(A^{(1)})\) is the 3d \(U(1)\) Chern-Simons action as defined in Section 2.1. Also, \(D^{(2)}\) is the 2-form gauge field within the \(\mathbb{Z}_{M}\) BF theory sector.
It is clear from the definition of the axion shift symmetry that \(\mathbb{Z}_{K}^{(0)}\) is a non-linearly realized discrete symmetry; the gauging therefore corresponds to gauging a Higgsed discrete symmetry. This is in general a result of the fact that the axion is the pseudo-Goldstone field for a spontaneously broken (anomalous) \(U(1)\) symmetry, of which \(\mathbb{Z}_{K}^{(0)}\subset U(1)^{(0)}\) is a non-anomalous subgroup.
However, even though we are gauging the axion shift symmetry, we still have a continuous field with dynamical excitations. While gauging the full \(U(1)\) global symmetry would have removed all of the fluctuations of the periodic scalar, here the gauging is applied only to a discrete subgroup: it turns only a "measure-zero" part into gauge degrees of freedom, leaving most of the continuous excitations intact.
As in the case of the free periodic scalar, the discrete gauging removes some of the local operators and increases the number of surface operators. First, consider the allowed local operators. Without gauging, the objects charged under the 0-form \(\mathbb{Z}_{K}^{(0)}\) are the local operators \(I(q,x)=e^{iq\,a(x)/f_{a}}\) with charge \(q\in\mathbb{Z}\) (recall that the axion is \(2\pi f_{a}\)-periodic). They shift as
\[I(q,x)=e^{iq\,a(x)/f_{a}}\to e^{\frac{2\pi i}{K}q}I(q,x) \tag{3.22}\]
under global \(\mathbb{Z}_{K}^{(0)}\) transformations. As in the case of the free periodic scalar, after gauging \(\mathbb{Z}_{M}^{(0)}\subset\mathbb{Z}_{K}^{(0)}\), these local operators are no longer gauge invariant for general \(q\in\mathbb{Z}\). Rather, they are gauge invariant only for \(q\in M\mathbb{Z}\) - otherwise they require attaching a Wilson line and become non-local.
Now let us consider the non-local string-like operators. Before gauging \(\mathbb{Z}_{M}^{(0)}\), the axion is a periodic scalar with the period \(a\sim a+2\pi f_{a}\). After gauging \(\mathbb{Z}_{M}^{(0)}\), the \(\mathbb{Z}_{M}\) gauge transformations shift \(a\sim a+2\pi f_{a}/M\). This allows axion field configurations with fractional winding number
\[\oint\frac{da}{f_{a}}\in\frac{2\pi}{M}\mathbb{Z} \tag{3.23}\]
because they are now gauge equivalent to winding configurations that traverse a complete period. Cosmic strings with integral winding are global axion strings, and the ones with fractional \(\mathbb{Z}_{M}\)-valued windings are BF strings. As discussed in the previous section, this can be seen explicitly by noting that the equation of motion for \(a\) implies that
\[\oint\frac{da}{2\pi f_{a}}\text{mod}_{\mathbb{Z}}=\oint\frac{C^{(1)}}{2\pi} \in\frac{1}{M}\mathbb{Z}, \tag{3.24}\]
which is activated by an insertion of BF surface operator \(W_{2}(\Sigma_{2},m)=e^{im\oint_{\Sigma_{2}}D^{(2)}}\).
At this point we recall that the scale at which the \(\mathbb{Z}_{M}\) gauge theory emerges,17 \(\Lambda_{\text{BF}}\), should generically be higher than the scale, \(f_{a}\), at which the axion emerges from a spontaneously broken, anomalous \(U(1)\) symmetry. The reason for the hierarchy \(\Lambda_{\text{BF}}\gtrsim f_{a}\) is simply that, in any UV completion, a discrete global symmetry in the low energy EFT (non-linearly realized or otherwise) can only be gauged if the gauge field is also discrete at or above the scale where the symmetry emerges: \(\Lambda_{\text{sym}}\gtrsim\Lambda_{\text{EFT}}\). Therefore, one expects on general grounds that \(\Lambda_{\text{BF}}\gtrsim f_{a}\).
Footnote 17: Here we insist on the existence of a UV completion which only contains continuous gauge symmetries. The scale \(\Lambda_{\text{BF}}\) is the scale at which the continuous gauge symmetry is broken to \(\mathbb{Z}_{M}\).
This hierarchy has a measurable effect on the spectrum of cosmic strings in the theory. In the \(\mathbb{Z}_{M}^{(0)}\)-gauged theory, the cosmic strings consist of global axion strings with tension \(T\sim f_{a}^{2}\) and \(\mathbb{Z}\)-valued winding numbers, and BF strings with tension \(T\sim\Lambda_{\text{BF}}^{2}\) and "winding
numbers" \(m\in\frac{1}{M}\times(0,1,\cdots,(M-1))\).18
Footnote 18: The local strings of BF theory are not really defined by a winding number as they do not possess any topological charge [33]. Rather, they are measured by a conserved magnetic flux \(m=\frac{1}{2\pi}\oint C^{(1)}\). In an Abelian Higgs model, however, we have \(\oint(d\varphi-C^{(1)})\in\mathbb{Z}\) and the magnetic flux is transferred to the would-be Goldstone boson as a winding number.
### UV Field Theory Completion
In this section, we show that the axion-Maxwell theory coupled to a TQFT as in the previous section can arise as a long distance description of a local QFT. This result demonstrates that TQFT couplings and their associated remarkable features can appear rather ubiquitously in a broad class of particle physics models.
In order to formulate a UV completion, we recall that coupling to the TQFT can be thought of as gauging a discrete subgroup of the axion shift symmetry. If we take the KSVZ UV completion of axion-Maxwell theory, the discrete gauging corresponds to gauging \(\mathbb{Z}_{M}^{(0)}\subset U(1)_{\rm PQ}\). We can then try to couple to the \(\mathbb{Z}_{M}\) TQFT via an Abelian Higgs model which breaks a gauge symmetry \(U(1)_{B}\to\mathbb{Z}_{M}\). Requiring that the emergent \(\mathbb{Z}_{M}\) gauge symmetry acts on \(U(1)_{\rm PQ}\) leads to the UV symmetry structure:
\[G_{\rm UV}=U(1)_{A}\times\frac{U(1)_{B}\times U(1)_{\rm PQ}}{\mathbb{Z}_{M}} \tag{3.25}\]
where we used the same notation for the gauge groups as in Section 2: \(U(1)_{A}\) for unbroken electromagnetism and \(U(1)_{B}\) for the gauge group that is spontaneously broken \(U(1)\to\mathbb{Z}_{M}\). The second factor means that \(\mathbb{Z}_{M}\subset U(1)_{\rm PQ}\) is redundant and can be undone by means of \(\mathbb{Z}_{M}\subset U(1)_{B}\) rotations. This means that the true global symmetry of the theory is given by projecting out the \(\mathbb{Z}_{M}\) gauge symmetry from \(U(1)_{\rm PQ}\), so that
\[G_{\rm global}=U(1)_{\rm PQ}/\mathbb{Z}_{M}. \tag{3.26}\]
Below, we will present a theory that realizes this symmetry structure.
\begin{table}
\begin{tabular}{|l||c|c|c|c|} \hline & \(U(1)_{A}\) & \(U(1)_{B}\) & \(U(1)_{\rm PQ}\) & \(U(1)_{F}\) \\ \hline \hline \(\psi_{+}\) & \(1\) & \(0\) & \(-M\) & \(M\) \\ \(\psi_{-}\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) \\ \hline \(\chi_{+}\) & \(1\) & \(M+1\) & \(M+1\) & \(M+1\) \\ \(\chi_{-}\) & \(-1\) & \(0\) & \(-M\) & \(-M\) \\ \hline \(\eta_{+}\) & \(1\) & \(1\) & \(1\) & \(1\) \\ \(\eta_{-}\) & \(-1\) & \(-M-1\) & \(-M-1\) & \(-M-1\) \\ \hline \(\Phi_{1}\) & \(0\) & \(1\) & \(M+1\) & \(-M+1\) \\ \(\Phi_{2}\) & \(0\) & \(-M-1\) & \(-1\) & \(-1\) \\ \(\Phi_{3}\) & \(0\) & \(M\) & \(M\) & \(M\) \\ \hline \end{tabular}
\end{table}
Table 3: Quantum numbers of the fields of the UV completion eq. (3.27). Below \(E<\langle\Phi_{I}\rangle\), this theory matches to our effective theory.
Consider a theory whose Lagrangian is
\[{\cal L}_{\rm UV}= -\frac{1}{4g_{A}^{2}}F_{A}^{(2)}\wedge*F_{A}^{(2)}-\frac{1}{4g_{B}^{2}}F_{B}^{(2)}\wedge*F_{B}^{(2)}+\sum_{k=\pm}\bar{\psi}_{k}i\not{D}\psi_{k}+\bar{\chi}_{k}i\not{D}\chi_{k}+\bar{\eta}_{k}i\not{D}\eta_{k} \tag{3.27}\] \[+|D\Phi_{1}|^{2}+|D\Phi_{2}|^{2}+|D\Phi_{3}|^{2}\] \[-\lambda_{1}\Phi_{1}\psi_{+}\psi_{-}-\lambda_{2}\Phi_{2}\chi_{+}\chi_{-}-\lambda_{3}\Phi_{3}\eta_{+}\eta_{-}+{\rm h.c.}-V(\Phi_{1},\Phi_{2},\Phi_{3}).\]
The scalar potential is such that \(\langle\Phi_{1}\rangle\), \(\langle\Phi_{2}\rangle\), and \(\langle\Phi_{3}\rangle\) are all non-zero.
We will assign gauge and global symmetry charges to the fields as summarized in Table 3. These choices of quantum numbers are tightly constrained by multiple requirements:
1. In order to construct a well defined UV theory, we first choose the charges of the gauge symmetries \(U(1)_{A}\) and \(U(1)_{B}\) so that they are anomaly free. The choices above ensure that cubic gauge anomalies vanish.
2. In order to produce an axion, we demand that \(U(1)_{\rm PQ}\) is the only spontaneously broken global symmetry with any ABJ anomalies and that it only has an ABJ anomaly with \(U(1)_{A}\). The ABJ anomalies for \(U(1)_{\rm PQ}\) are given by19 Footnote 19: We can additionally obtain a theory with axion-Maxwell coupling \(K=-2Mp\) by taking \(p\) copies of the fermion sector without increasing the number of scalar fields. This increases the number of vector-like symmetries that are not spontaneously broken in the IR, while increasing the size of the \(U(1)_{\rm PQ}\times[U(1)_{A}]^{2}\) ABJ anomaly. \[U(1)_{\rm PQ}\,[U(1)_{A}]^{2}=-2M\quad,\quad U(1)_{\rm PQ}\,[U(1)_{B}]^{2}=U(1)_{\rm PQ}\,U(1)_{A}\,U(1)_{B}=0\,.\] (3.28) By a simple counting argument, there are three remaining \(U(1)\) global symmetries, which are parametrized by \(U(1)_{F}\) and two vector-like symmetries \(U(1)_{V_{1}}\) and \(U(1)_{V_{2}}\) (not present in Table 3). The vector-like symmetries \(U(1)_{V_{i}}\) act only on the fermions and hence are not spontaneously broken by Higgs condensation. On the other hand, the global symmetry \(U(1)_{F}\) is spontaneously broken and has vanishing ABJ anomalies.
3. In order to realize non-trivial overlap described by \(\frac{U(1)_{B}\times U(1)_{\rm PQ}}{\mathbb{Z}_{M}}\), we need the charges of \(U(1)_{\rm PQ}\) to reduce to those of \(U(1)_{B}\) mod\({}_{M}\). This can be easily verified upon inspection of Table 3.
The theory described by the Lagrangian in eq. (3.27) with charge assignments in Table 3 manifestly has the symmetry structure in eq. (3.25).
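As a sanity check of conditions 1-3, the charge assignments of Table 3 can be verified symbolically. The following sympy sketch is our own cross-check (it assumes all six fermions are left-handed Weyl fermions, as suggested by the holomorphic Yukawa couplings in eq. (3.27)); it reproduces the anomaly-free gauge sector, the ABJ anomalies of eq. (3.28), and the mod-\(M\) overlap of the \(U(1)_{\rm PQ}\) and \(U(1)_{B}\) charges:

```python
import sympy as sp

M = sp.symbols('M', integer=True, positive=True)

# (Q_A, Q_B, Q_PQ) for the left-handed Weyl fermions of Table 3
fermions = {
    'psi+': (1, 0, -M),        'psi-': (-1, -1, -1),
    'chi+': (1, M + 1, M + 1), 'chi-': (-1, 0, -M),
    'eta+': (1, 1, 1),         'eta-': (-1, -M - 1, -M - 1),
}
scalars = {'Phi1': (0, 1, M + 1), 'Phi2': (0, -M - 1, -1), 'Phi3': (0, M, M)}

def anomaly(f):
    """Sum f(Q_A, Q_B, Q_PQ) over the Weyl fermions."""
    return sp.simplify(sum(f(*q) for q in fermions.values()))

# Condition 1: the U(1)_A x U(1)_B gauge sector is anomaly free
for coeff in (lambda a, b, p: a**3, lambda a, b, p: b**3,
              lambda a, b, p: a**2 * b, lambda a, b, p: a * b**2,
              lambda a, b, p: a, lambda a, b, p: b):
    assert anomaly(coeff) == 0

# Condition 2: the only ABJ anomaly of U(1)_PQ is with U(1)_A, eq. (3.28)
assert sp.simplify(anomaly(lambda a, b, p: p * a**2) + 2 * M) == 0
assert anomaly(lambda a, b, p: p * b**2) == 0
assert anomaly(lambda a, b, p: p * a * b) == 0

# Condition 3: Q_PQ reduces to Q_B mod M for every field
for a, b, p in list(fermions.values()) + list(scalars.values()):
    assert sp.simplify((p - b) / M).is_integer
```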
To reproduce the axion-Maxwell theory coupled to a \(\mathbb{Z}_{M}\) TQFT in the IR, we choose the parameters of the scalar potential such that \(\langle\Phi_{3}\rangle\gg\langle\Phi_{1,2}\rangle\). At intermediate energies \(E\), where \(\langle\Phi_{3}\rangle\gg E\gg\langle\Phi_{1,2}\rangle\), \(U(1)_{B}\) is broken by \(\langle\Phi_{3}\rangle\) to \(\mathbb{Z}_{M}\), and the theory flows to a KSVZ model coupled to a \(\mathbb{Z}_{M}\) TQFT, where the \(\mathbb{Z}_{M}\) subgroup of \(U(1)_{\rm PQ}\) is gauged via \(\mathbb{Z}_{M}\subset U(1)_{B}\). This intermediate theory has additional string-defects corresponding to the \(\mathbb{Z}_{M}\) BF strings, which come from the unscreened \(\Phi_{3}\) Nielsen-Olesen vortices. Then, in the deep IR \(E\ll\langle\Phi_{1,2}\rangle\), the theory flows to charge \(2M\) axion-Maxwell theory coupled to a \(\mathbb{Z}_{M}\) gauge field and a "decoupled" \(U(1)\) Goldstone boson, where the axion and the auxiliary Goldstone boson have charge 1 under \(\mathbb{Z}_{M}\).
In this UV completion, we find that \(\Lambda_{\rm BF}=\langle\Phi_{3}\rangle\) and \(f_{a}\sim\langle\Phi_{1,2}\rangle\) and that we respect the hierarchy \(\Lambda_{\rm BF}\gtrsim f_{a}\). However, there are other limits of this UV theory we can consider - for example by trying to "invert" the hierarchy. We will find that when we invert the hierarchy, the RG flow will lead to a different IR theory, thus side-stepping the constraints from EFT considerations.
Let us first consider the limit \(\langle\Phi_{1}\rangle\gg\langle\Phi_{2,3}\rangle\). At intermediate energies \(E\), where \(\langle\Phi_{1}\rangle\gg E\gg\langle\Phi_{2,3}\rangle\), \(U(1)_{B}\) is completely broken. Now the intermediate theory is given by a KSVZ theory. In this KSVZ theory, the \(U(1)_{\rm PQ}\) and \(U(1)_{F}\) symmetries have become degenerate, as shown in Table 4.
Instead, the physics of condensing \(\Phi_{1}\) mandates that we redefine the global symmetries \(U(1)_{\rm PQ},U(1)_{F}\) so that they are orthogonal to the broken gauge symmetry \(U(1)_{B}\), such as \(U(1)_{\widetilde{PQ}},U(1)_{\widetilde{F}}\) with charges defined in Table 4.20 The reason we should redefine the global symmetries is that by breaking \(U(1)_{B}\), we are required to implicitly project onto orthogonal global symmetries so that the transformations of the remaining fields do not move along the broken, gauged direction.
Footnote 20: Note that this differs from the case where \(\langle\Phi_{3}\rangle\gg\langle\Phi_{1,2}\rangle\), in which the preserved \(\mathbb{Z}_{M}\subset U(1)_{B}\) gauge symmetry overlaps with \(\mathbb{Z}_{M}\subset U(1)_{\rm PQ}\times U(1)_{F}\). In this case, we are not required to project onto the orthogonal symmetry because of the overlapping, preserved gauge symmetry.
Now consider the other limit where \(\langle\Phi_{2}\rangle\gg\langle\Phi_{1,3}\rangle\). At intermediate energies \(E\), where \(\langle\Phi_{2}\rangle\gg E\gg\langle\Phi_{1,3}\rangle\), \(U(1)_{B}\) is broken down to \(\mathbb{Z}_{M+1}\) and the intermediate theory is described by a KSVZ model coupled to a \(\mathbb{Z}_{M+1}\) BF TQFT. The charges of the resulting KSVZ model are given in Table 5.
Due to the symmetry structure:
\[G_{\rm total}=U(1)_{A}\times\frac{U(1)_{B}\times U(1)_{\rm PQ}\times U(1)_{F}} {\mathbb{Z}_{M}\times\mathbb{Z}_{M}}\, \tag{3.29}\]
we know that \(\langle\Phi_{2}\rangle\) breaks the symmetry to
\[G_{\rm total}\to U(1)_{A}\times\mathbb{Z}_{M+1}\times\frac{U(1)_{\rm PQ} \times U(1)_{F}}{\mathbb{Z}_{M}}\, \tag{3.30}\]
and the \(\mathbb{Z}_{M+1}\) gauge symmetry can be decoupled from \(U(1)_{\rm PQ},U(1)_{F}\). Again, we see that breaking the continuous part of \(U(1)_{B}\) demands that we project \(U(1)_{B}\) out of the global symmetries. This can be accomplished by the modified \(U(1)_{\widetilde{P}\widetilde{Q}},U(1)_{\widetilde{F}}\) symmetries generated by
\[Q_{\widetilde{P}\widetilde{Q}}=Q_{\rm PQ}-Q_{A}\quad,\quad Q_{ \widetilde{F}}=Q_{F}-Q_{A}\, \tag{3.31}\]
which completely decouples from \(\mathbb{Z}_{M+1}\). Thus, when we flow to the deep IR, \(\mathbb{Z}_{M+1}\) is spontaneously broken by \(\langle\Phi_{1,3}\rangle\), and there is a remaining axion with charge 2 and a decoupled \(U(1)\) Goldstone boson. Therefore, in our UV completion, we find that if we tune the scales of the theory so that \(\sigma=\Lambda_{\rm BF}/f_{a}\to 0\), there will be a phase transition near \(\sigma\approx 1\), across which the theory has different IR dynamics, as described above.
Note that we can physically think of the \(\mathbb{Z}_{M}\) gauging as being a consequence of the fact that we "chose our symmetries" in a particular way. To see this, note that if we lift condition 3 above, it is more natural to parametrize the global symmetries by
\[\begin{array}{c|cccccc|ccc}&\psi_{+}&\psi_{-}&\chi_{+}&\chi_{-}&\eta_{+}& \eta_{-}&\Phi_{1}&\Phi_{2}&\Phi_{3}\\ \hline U(1)_{1}&1&-1&0&0&0&0&0&0&0\\ U(1)_{2}&0&0&1&-1&0&0&0&0&0\\ U(1)_{3}&1&0&0&1&0&0&-1&-1&0\\ U(1)_{4}&1&0&0&-1&0&0&-1&1&0\end{array}\]
Here \(U(1)_{1,2}\) correspond to \(U(1)_{V_{1},V_{2}}\) which are not spontaneously broken when the \(\Phi_{I}\) condense. The global symmetries \(U(1)_{\rm PQ}\) and \(U(1)_{F}\) (generated by \(Q_{\rm PQ}\) and \(Q_{F}\) respectively) are related to \(U(1)_{B}\) and \(U(1)_{3,4}\) (generated by \(Q_{B},Q_{3,4}\)) as
\[Q_{F}=Q_{B}+MQ_{3}\quad,\quad Q_{\rm PQ}=Q_{B}+MQ_{4}. \tag{3.32}\]
In other words, we have chosen \(U(1)_{\rm PQ}\) and \(U(1)_{F}\) so that they overlap with \(U(1)_{B}\) mod\({}_{M}\) by construction - this redundancy is in some sense artificially engineered.
In this construction, it is clear how the \(\mathbb{Z}_{M}\subset U(1)_{B}\) gauge symmetry affects the resulting IR theory. First, note that after condensing \(\Phi_{3}\), the \(\mathbb{Z}_{M}\) charges of the remaining fields are given by their \(U(1)_{B}\) charges mod \(M\) (equivalently, by their \(U(1)_{\rm PQ}\) charges mod \(M\)).
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline & \(U(1)_{A}\) & \(\mathbb{Z}_{M+1}\) & \(U(1)_{\rm PQ}\) & \(U(1)_{F}\) \\ \hline \hline \(\psi_{+}\) & \(1\) & \(0\) & \(-M\) & \(M\) \\ \(\psi_{-}\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) \\ \hline \(\eta_{+}\) & \(1\) & \(1\) & \(1\) & \(1\) \\ \(\eta_{-}\) & \(-1\) & \(0\) & \(-M-1\) & \(-M-1\) \\ \hline \(\Phi_{1}\) & \(0\) & \(1\) & \(M+1\) & \(-M+1\) \\ \(\Phi_{3}\) & \(0\) & \(M\) & \(M\) & \(M\) \\ \hline \end{tabular}
\end{table}
Table 5: The quantum numbers of the fields in the theory at intermediate energy levels \(\langle\Phi_{2}\rangle\gg E\gg\langle\Phi_{1,3}\rangle\).
By shifting by \(\mathbb{Z}_{M}\subset U(1)_{A}\), we can see that \(\mathbb{Z}_{M}\) gauges \(U(1)_{4}\) and does not act on the \(U(1)_{3}\). This implies that the \(\mathbb{Z}_{M}\) gauging can be decoupled from the axion by a field redefinition although the BF strings are indeed physical.
In summary, we find that in the parameter space spanned by the vevs \(\langle\Phi_{1,2,3}\rangle\), there is only one hierarchy that flows to the axion-Maxwell theory coupled to a \(\mathbb{Z}_{M}\) BF theory. This hierarchy of scales is given by \(\langle\Phi_{3}\rangle\gg\langle\Phi_{1,2}\rangle\) and leads to \(\Lambda_{\rm BF}\gg f_{a}\), thus reproducing the expectation from EFT and higher-group symmetry considerations.
### 3-Group
We now return to the 3-group symmetry structure of axion-Maxwell coupled to the \(\mathbb{Z}_{M}\) TQFT and its associated 't Hooft anomalies. Understanding the higher-group symmetry structure of this theory is essential to understanding all gaugable symmetries, hence all possible non-trivial TQFT-couplings that arise from discrete gauging. We will discuss the set of possible gaugings in the next section. Here, we will focus on the 3-group symmetry of charge \(K\) axion-Maxwell theory and the effect of \(\mathbb{Z}_{M}^{(0)}\subset\mathbb{Z}_{K}^{(0)}\) gauging.
To this end, let us first review the 't Hooft anomalies of the theory without discrete gauging. 't Hooft anomalies can be probed by turning on background gauge fields for the global symmetries and studying the behavior of the partition function under their associated gauge transformations. In particular, we say that a symmetry has an 't Hooft anomaly if the path integral is not invariant under the associated background gauge transformations when all background gauge fields are turned on:
\[Z[A_{I}+d\lambda_{I}]=e^{-i\int\mathcal{A}^{(1)}[\{A_{I}\};\lambda_{I}]}\,Z[ \{A_{I}\}]\quad\Longleftrightarrow\quad\{\text{'t Hooft anomaly}\}\, \tag{3.33}\]
where here \(\mathcal{A}^{(1)}[\{A_{I}\};\lambda]\) captures the anomalous variation of the action. In general, the anomalous variation \(\mathcal{A}^{(1)}\) can be described by inflow of the variation of a 5d "anomaly action" \(\mathcal{A}[\{A_{I}\}]\) following the "descent procedure".
In axion-Maxwell theory, the theory contains couplings of the form
\[S=...+\int_{M_{4}}\frac{i}{2\pi f_{a}}a\,\mathcal{G}^{(4)}+\frac{i}{2\pi}A^{( 1)}\wedge\mathcal{H}^{(3)}. \tag{3.34}\]
These terms are not invariant under electric background gauge transformations and generate transformations of the form
\[\delta S=\frac{i}{2\pi}\int_{M_{4}}\lambda_{e}^{(0)}\mathcal{G}^{(4)}+\frac{i }{2\pi}\int\lambda_{e}^{(1)}\wedge\mathcal{H}^{(3)}. \tag{3.35}\]
Since the background gauge fields transform as \(\delta\mathcal{A}_{e}^{(1)}=d\lambda_{e}^{(0)}\) and \(\delta\mathcal{B}_{e}^{(2)}=d\lambda_{e}^{(1)}\), the
anomalous variation is captured by the 5d anomaly inflow action
\[S_{\text{inflow}} =\frac{i}{2\pi}\int_{N_{5}}\mathcal{A}^{(1)}_{e}\wedge\mathcal{G}^{ (4)}+\mathcal{B}^{(2)}_{e}\wedge\mathcal{H}^{(3)} \tag{3.36}\] \[=\frac{i}{2\pi}\int_{N_{5}}\mathcal{A}^{(1)}_{e}\wedge\left(d \mathcal{A}^{(3)}_{m}+\frac{K}{4\pi}\mathcal{B}^{(2)}_{e}\wedge\mathcal{B}^{( 2)}_{e}\right)+\mathcal{B}^{(2)}_{e}\wedge\left(d\mathcal{B}^{(2)}_{m}+\frac{ K}{2\pi}\mathcal{A}^{(1)}_{e}\wedge\mathcal{B}^{(2)}_{e}\right).\]
The first term describes a mixed anomaly between \(\mathbb{Z}^{(0)}_{K}\) and \(U(1)^{(2)}\), which due to the 3-group symmetry, also leads to a mixed anomaly between \(\mathbb{Z}^{(0)}_{K}\) and \(\mathbb{Z}^{(1)}_{K}\). Similarly, the second term describes a mixed anomaly between \(\mathbb{Z}^{(1)}_{K}\) and \(U(1)^{(1)}\), which also leads to a mixed anomaly between \(\mathbb{Z}^{(0)}_{K}\) and \(\mathbb{Z}^{(1)}_{K}\) due to the 3-group symmetry.
Now let us consider the effect of gauging \(\mathbb{Z}^{(0)}_{M}\subset\mathbb{Z}^{(0)}_{K}\). Here, the background gauge field \(\mathcal{A}^{(1)}_{e}\) decomposes into a dynamical gauge field \(C^{(1)}\) for the gauged \(\mathbb{Z}^{(0)}_{M}\) part and a background gauge field (which by abuse of notation we call \(\mathcal{A}^{(1)}_{e}\)) for the \(\mathbb{Z}^{(0)}_{K/M}\) part. Now, the mixed anomalies involving the dynamical gauge field \(C^{(1)}\) can be viewed as ABJ-type anomalies.
First, let us consider the effect of the gauging on \(U(1)^{(2)}\). To study this, let us consider turning off \(\mathcal{B}^{(2)}_{e},\mathcal{B}^{(2)}_{m}\). Now we can write the anomaly action as
\[S_{\text{inflow}}=-\frac{i}{2\pi}\int_{N_{5}}d\mathcal{A}^{(1)}_{e}\wedge \mathcal{A}^{(3)}_{m}. \tag{3.37}\]
In this case, we find that the anomalous variation of the action is given by
\[\delta S_{\text{inflow}}=-\frac{i}{2\pi}\int_{M_{4}}\mathcal{A}^{(1)}_{e} \wedge d\Lambda^{(2)}_{m}\quad,\quad\delta\mathcal{A}^{(3)}_{m}=d\Lambda^{(2) }_{m}. \tag{3.38}\]
This implies that gauging \(\mathbb{Z}^{(0)}_{M}\) extends the periodicity of \(U(1)^{(2)}\) to \(2\pi M\). This is a consequence of the fact that \(U(1)^{(2)}\) is the dual symmetry of \(\mathbb{Z}^{(0)}_{K/M}\subset U(1)/\mathbb{Z}^{(0)}_{M}\cong U(1)^{(0)}\). This is analogous to the fact that rescaling the periodicity of a periodic scalar (i.e. gauging) also rescales the quantum numbers of the momentum and winding states oppositely.
Now let us consider the consequence of the \(\mathbb{Z}_{M}\) gauging for the \(\mathbb{Z}^{(1)}_{K}\) global symmetry. The correct way to show that the \(\mathbb{Z}^{(1)}_{K}\) symmetry is modified is to first choose counterterms so that the action is invariant under \(\mathbb{Z}_{M}\) gauge transformations. Then, we see that the variation of the action from inflow is given by
\[\delta S_{\text{inflow}}=\frac{iK}{4\pi}\int\mathcal{A}^{(1)}_{e}\wedge\left(2 \lambda^{(1)}_{e}\wedge\mathcal{B}^{(2)}_{e}+\lambda^{(1)}_{e}\wedge d\lambda ^{(1)}_{e}\right). \tag{3.39}\]
Because the \(\mathbb{Z}_{M}\) gauging sums over \(\mathcal{A}^{(1)}_{e}\) with \(\oint\mathcal{A}^{(1)}_{e}\in\frac{2\pi}{M}\mathbb{Z}\), we see that \(\mathbb{Z}^{(1)}_{K}\) is broken \(\mathbb{Z}^{(1)}_{K}\rightarrow\mathbb{Z}^{(1)}_{K/M}\).21 Note that the gauging of \(\mathbb{Z}^{(0)}_{M}\) does not affect the \(U(1)^{(1)}\) magnetic
symmetry since \(\mathbb{Z}_{K/M}^{(1)}\) has trivial pairing with \(\mathbb{Z}_{M}^{(0)}\) in the \(\mathcal{B}_{m}^{(2)}\) transformation laws.
Footnote 21: We note that the \(\mathbb{Z}_{K/M}^{(0)}\) symmetry is not a \(3\)-group.
We thus find that the symmetry of the theory after \(\mathbb{Z}_{M}^{(0)}\)-gauging is a \(3\)-group consisting of \(\mathbb{Z}_{K/M}^{(0)}\), \(U(1)_{m}^{(1)}\), \(\mathbb{Z}_{K/M}^{(1)}\), \(U(1)^{(2)}\) which describes the global symmetry group of charge \(K/M\) axion-Maxwell theory.
### Other TQFT Couplings via Discrete Gauging
As we have seen, one way to couple a theory to a TQFT is by gauging a discrete symmetry. It is clear from our analysis above that this process is non-trivial and can lead to interesting features [1]. Given a theory described by a local QFT, potential TQFT couplings of this sort can be systematically analyzed by studying the higher-group structure and associated 't Hooft anomalies. Here, we demonstrate this analysis on charge \(K\) axion-Maxwell theory as a concrete example.
As we have discussed in the previous section, charge \(K\) axion-Maxwell theory possesses a 3-group global symmetry structure. This 3-group involves a \(U(1)^{(2)}\) axion winding symmetry that is intertwined with the \(\mathbb{Z}_{K}^{(1)}\) electric symmetry, and a \(U(1)^{(1)}\) magnetic symmetry that is interlaced with \(\mathbb{Z}_{K}^{(1)}\) and the \(\mathbb{Z}_{K}^{(0)}\) axion shift symmetry. The 't Hooft anomalies of this theory are described by the 5d inflow action eq. (3.36), which encodes a mixed anomaly between \(\mathbb{Z}_{K}^{(0)}\) and the mixed \(U(1)^{(2)}-\mathbb{Z}_{K}^{(1)}\) symmetry, as well as a mixed anomaly between \(\mathbb{Z}_{K}^{(1)}\) and the interlaced \(U(1)^{(1)}-\mathbb{Z}_{K}^{(0)}-\mathbb{Z}_{K}^{(1)}\) symmetry. The existence of these 't Hooft anomalies restricts the consistent gaugings:
1. \(0\)-form axion shift \(\mathbb{Z}_{K}^{(0)}\) or its subgroup \(\mathbb{Z}_{M}^{(0)}\subset\mathbb{Z}_{K}^{(0)}\): This is what we have focused on so far. From the first term in the inflow action eq. (3.36) we see that such a gauging turns some of the 't Hooft anomalies among global symmetries into ABJ-type anomalies between the background gauge fields of global symmetries and dynamical gauge fields, hence resulting in quantum mechanical breaking of global symmetries. It leads to the modification of \(U(1)^{(2)}\) and breaks \(\mathbb{Z}_{K}^{(1)}\to\mathbb{Z}_{K/M}^{(1)}\). This in turn leads to changes in the line and surface (cosmic string) operators of the theory.
2. \(1\)-form electric \(\mathbb{Z}_{K}^{(1)}\) or its subgroup \(\mathbb{Z}_{M}^{(1)}\subset\mathbb{Z}_{K}^{(1)}\): Due to the \(3\)-group structure, gauging \(\mathbb{Z}_{K}^{(1)}\) or its subgroup alone is not a consistent operation. Rather, the \(3\)-group transformation rules eq. (3.3) imply that gauging \(\mathbb{Z}_{M}^{(1)}\subset\mathbb{Z}_{K}^{(1)}\) (\(M\) may be equal to \(K\)) must be accompanied by gauging of \(\mathbb{Z}_{M}^{(2)}\subset U(1)^{(2)}\). The anomalies then imply that \(\mathbb{Z}_{K}^{(0)}\) has an ABJ anomaly that breaks it completely, and \(U(1)^{(1)}\) is modified so that it has periodicity \(2\pi M\).22 Footnote 22: Although \(\mathbb{Z}_{K}^{(0)}\) is broken as a global symmetry, it participates in a non-invertible symmetry structure. See [16, 35, 36] for related discussions.
3. \(2\)-form axion winding \(U(1)^{(2)}\) or its subgroup \(\mathbb{Z}_{M}^{(2)}\subset U(1)^{(2)}\):
Here the anomaly becomes an ABJ anomaly for \(\mathbb{Z}_{K}^{(0)}\) that breaks the symmetry completely while leaving the other parts of the 3-group untouched.
4. 1-form magnetic \(U(1)^{(1)}\) or its subgroup \(\mathbb{Z}_{L}^{(1)}\subset U(1)^{(1)}\): Here the anomaly becomes an ABJ anomaly for \(\mathbb{Z}_{K}^{(1)}\) that breaks the symmetry completely while leaving the other parts of the 3-group untouched.
It would be interesting to analyze each of these cases and understand the observable consequences of these discrete gaugings (e.g. the spectrum of local, line, and surface operators). Furthermore, it is an interesting question whether some or all of these TQFT couplings can arise as a long-distance effective description of more fundamental QFTs at short distance scales. We leave this analysis to future investigations.
## 4 Brief Comments on Phenomenological Implications
We conclude by briefly commenting on the potential phenomenological implications of the scenarios discussed in this paper. Further studies are certainly required to make precise predictions, which we leave for future work. Our discussion here will be brief and qualitative, highlighting potential differences with respect to the well-studied signals of cosmic strings (see [37] for a review).
We begin with the implications which are largely independent of the UV completion. The main focus of this paper is a Higgsed \(U(1)_{B}\) gauge symmetry as the origin of a TQFT. However, there is another, unbroken \(U(1)_{A}\) gauge symmetry in the story. To be concrete, we will proceed by first assuming it is the SM \(U(1)_{\rm EM}\). A TQFT does not have low energy excitations, hence it is "absent" in the IR. Yet, it still leaves some imprints.
As described in detail in this paper, a main portal can be an axion coupled to a TQFT, as shown in eq. (2). Such a coupling does lead to important differences in comparison with the "usual" well-studied axion strings. Potential signals for axion strings with localized fermionic zero modes have been studied [38; 39; 40; 26; 34; 36; 41]. However, the emphasis has been on fermions charged under the SM \(U(1)_{\rm EM}\), in which case axion strings are charged up by passing through regions of magnetic field in the universe. In our case, the fermionic zero modes localized on the string are required to carry specific \(U(1)_{B}\) charges in addition to the \(U(1)_{A}\) charge. This leads to some remarkable differences.
First of all, since \(U(1)_{B}\) is Higgsed, there are no macroscopic regions with non-zero \(U(1)_{B}\) field. However, if we assume the unbroken \(U(1)_{A}\) is the SM \(U(1)_{\rm EM}\), then the axion strings will be similarly charged up by the magnetic fields in the universe. One of the interesting consequences of the \(U(1)_{B}\) charges carried by the zero mode fermions is that they can potentially affect the fate of the string loops.
As pointed out in [42; 43; 44; 45; 46; 47; 48], charged fermions present on the string can provide a pressure which prevents the string from shrinking, leading to potentially stable final string loops (vortons). However, fermions with EM charge can be expected to decay into SM charged particles, which leads to the decay of the vorton [41; 47]. Such a decay, however, would not be possible in our case, given the absence of light particles charged under \(U(1)_{B}\), leading to stable vortons with potentially distinct signals.
We emphasize that, from the point of view of the IR theory, the stability could be attributed to a global symmetry which is imposed "by hand". However, in the scenario discussed here, we see that it is a consequence of the TQFT coupling and the requirement of anomaly matching. It is possible that the \(U(1)_{\rm PQ}\) is also broken explicitly, for example by instanton effects in QCD. In this case, a string domain-wall network will form and collapse. The anomaly on the string then implies a certain Chern-Simons theory living on the domain wall [14; 49]. Additionally, it has been shown that a charged axion string can influence the evolution of the string network and the dynamics of the string domain-wall network [41]. It would be interesting to investigate such phenomena further in our case.
As discussed in Ref. [39], it is expected that the axion string is approximately electrically neutral due to Schwinger pair production of light SM charged particles in the vicinity of the string, in the strong electromagnetic field produced by the charged particles localized on the string. Such an effect would not operate for the BF charge on the string, since the BF-charged fermions are heavy and \(U(1)_{B}\) is Higgsed outside of the string. Hence, if the axion string is charged up with BF-charged particles, it would not be neutralized in the BF charge. At the same time, we do not expect this to lead to macroscopic effects, since the \(U(1)_{B}\) field is short-ranged.
If BF strings are present in the universe, they can in principle lead to different signals as well. It is generically expected that their tension will be different from that of the axion string. A more distinctive feature of the BF string is the non-trivial holonomy of the \(U(1)_{B}\) gauge field around the string. In principle, BF-charged particles (for example, a DM candidate) passing around the BF string could experience an Aharonov-Bohm (AB) effect which changes their distribution. It remains to be investigated whether this can lead to observable effects.
Thus far we have discussed the possible IR (universal) signals. However, signals that depend on the UV completion can be equally important. In addition to enhancing the discovery potential, they also provide complementary information which may eventually lead to a more complete picture. We briefly mention a couple of such possibilities in the following.
The mechanisms discussed in this paper are largely independent of the absolute scales of UV symmetry breaking. At the same time, the potential signals are sensitive to these scales. A large class of cosmic string signatures arise through their gravitational interactions, which are highly sensitive to the string tension \(\mu\); the current limit is roughly in the range \(G\mu<10^{-8}\) to \(10^{-9}\) [50; 51; 52] and may potentially reach \(G\mu<10^{-10}\) to \(10^{-12}\) in the near future [53]. If the cosmic strings discussed here can give rise to such signals, they will certainly provide highly valuable information. This is especially important for the second case of TQFT coupling discussed in the paper, in which the main feature is the varying tension across the string spectrum.
If both the axion string and the BF string are present in the universe, there is potential for richer dynamics. In particular, the existence of two different kinds of strings differs from well studied scenarios. For example, if the symmetry breaking dynamics are such that it is energetically favored for the strings to overlap, they would tend to align rather
than cut through each other. This allows for the possibility of producing co-axial strings with different properties and can also potentially affect the evolution of the string network. There can also be interesting differences with the standard axion string story depending on the relative scales of \(U(1)_{B}\) and \(U(1)_{\rm PQ}\) breaking. Either one can happen at the higher scale in the UV theory for the first type of TQFT coupling. In particular, if \(U(1)_{\rm PQ}\) breaks at a higher scale (earlier in the evolution of the universe), there could be an epoch in which the universe is filled with a background of primordial (unbroken) \(U(1)_{B}\) field. This can charge up the axion string through the interaction with the zero mode fermions. The influence of a primordial electromagnetic field on the evolution of the axion string network has been studied [41]. It would be interesting to generalize this to the case of a primordial \(U(1)_{B}\) background field.
We have been assuming that the unbroken \(U(1)_{A}\) is the SM \(U(1)_{\rm EM}\). However, \(U(1)_{A}\) could instead be a dark photon. Another intriguing possibility is that the gauge boson of \(U(1)_{B}\) is the dark photon. Instead of (or in addition to) coupling to the SM via a kinetic mixing with the photon, it couples through the TQFT portal described in this paper. Since it is common to assume the dark photon mass is small, this could be an extreme example of the case in which PQ symmetry breaking happens at a much higher scale. It has been pointed out recently [54] that cosmic strings associated with the dark photon can be produced in a broad range of dark photon production scenarios, such as through the axion coupling in eq. (1). Hence, this would be a natural stage to study the interplay between these two kinds of strings and the implications of the couplings in eq. (2).
We are grateful to Clay Cordova, Thomas Dumitrescu, Jeffrey Harvey, Seth Koren, Shu-Heng Shao for helpful discussions and related collaborations. S.H. was supported by the DOE grants DE-SC-0013642 and DE-AC02-06CH11357. LTW is supported by the DOE grant DE-SC0013642. TDB is supported by Simons Foundation award 568420 (Simons Investigator) and award 888994 (The Simons Collaboration on Global Categorical Symmetries).
## Appendix A Brief Introduction to Generalized Global Symmetries
The modern notion of global symmetry, in addition to group-like symmetries, also includes non-group-like symmetries such as higher-group symmetries, non-invertible symmetries, and subsystem symmetries. In this section, we provide a short introduction to this notion of generalized global symmetries, with a main focus on higher-form symmetries. We refer to [4] and references therein for a more extensive and detailed discussion.
### Ordinary symmetry
We begin our discussion by recalling the general properties of ordinary (i.e. 0-form) symmetries in a language that admits straightforward generalizations to higher-form symmetries.
According to Noether's theorem, an ordinary continuous symmetry corresponds to a conserved current
\[\partial_{\mu}j_{1}^{\mu}=0 \tag{114}\]
In differential-form notation, the conserved current is written in terms of a co-closed 1-form:
\[d*j_{1}=0. \tag{115}\]
Here \(*j_{1}\) is a 3-form that is the Hodge dual of \(j_{1}\). By definition, the existence of a symmetry means there are charged objects which transform under the symmetry. In the case of the ordinary (0-form) symmetry, the charged objects are local (i.e. 0-dimensional, hence the name 0-form symmetry) operators \(\mathcal{O}(x)\). For example, they can be elementary or composite field operators. In general, \(\mathcal{O}(x)\) transforms under a symmetry transformation \(g\) as
\[\mathcal{O}(x)\mapsto R(g)\cdot\mathcal{O}(x) \tag{116}\]
where \(R(g)\) denotes the representation of \(g\). A set of ordinary symmetry transformations \(\{g\}\) often forms a group \(G\), which can be either continuous or discrete. The group \(G\) can be either abelian or non-abelian. More recently, it has become clearer that there exist symmetries whose mathematical structure is not a group, such as higher-group [5; 14; 15; 55; 56] (as we also discuss in Section 3.3 and Appendix C.2) and non-invertible [7; 16; 17; 18; 35; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69] symmetries, which strictly speaking are described by category theory.23
Footnote 23: For applications of higher-group and non-invertible symmetries in particle physics model building, see [7; 9].
In the case of a continuous symmetry, it is often useful to analyze the behavior of the theory coupled to background gauge fields. For an ordinary global symmetry, we can couple the conserved current to a background gauge field \(A_{\mu}\)
\[Z\left[A\right]=\int\left[d\psi\right]\ \exp\left\{i\,S\left[\psi\right]+i\int d ^{d}x\,A_{\mu}(x)j_{1}^{\mu}(x)\right\}. \tag{117}\]
Here, we used \(\psi\) to denote collectively all quantum fields of the theory.24
Footnote 24: We emphasize that the theory under consideration needs not have a Lagrangian description. We simply imagine having an action here to streamline the discussion.
In terms of differential forms, we can couple a 0-form continuous symmetry to a background gauge field \(A^{(1)}\) as
\[Z[A^{(1)}]=\int[d\psi]\,\exp\left\{i\,S\left[\psi\right]+i\int A^{(1)}\wedge* j_{1}\right\}, \tag{118}\]
Since a 0-form symmetry has a conserved current that is a 1-form (its dual \(*j_{1}\) is a \((d-1)\)-form in \(d\) spacetime dimensions), the background gauge field \(A^{(1)}\) is a 1-form gauge field. The conservation of \(j_{1}\), eq. (115), then implies that the above partition function is invariant under background gauge transformations \(A^{(1)}\to A^{(1)}+d\lambda_{0}\), where \(\lambda_{0}\) is a 0-form transformation parameter.25
In a symmetry-preserving vacuum, all non-vanishing expectation values \(\langle{\cal O}(x)\cdots\rangle\) are invariant under \(G\) transformations. The invariance under infinitesimal \(G\) transformations implies the Ward identity
\[\partial_{\mu}j_{1}^{\mu}(x){\cal O}(y)=\delta^{(d)}(x-y)\,R(Q)\cdot{\cal O}(y) \tag{100}\]
where \(R(Q)\) is the generator for the infinitesimal transformation in the representation \(R\).
Using the Ward identity, one can then define a charge operator that generates the \(G\) symmetry in the quantum theory by integrating the dual current \(*j_{1}\) over a closed \((d-1)\)-manifold \(\Sigma_{d-1}\)
\[Q(\Sigma_{d-1})=\int_{\Sigma_{d-1}}*j_{1}. \tag{101}\]
This charge operator \(Q(\Sigma_{d-1})\) is topological in the sense that any correlation function containing \(Q(\Sigma_{d-1})\) is unchanged under continuous deformations of the manifold \(\Sigma_{d-1}\), so long as such a deformation does not cross a charged operator.
In the more familiar case of \(d=4\), we often choose \(\Sigma_{3}\) to be a spatial slice at a fixed time on which the Hilbert space is defined, and the above expression becomes
\[Q(\Sigma_{3})=\int d^{3}x\;j^{0}. \tag{102}\]
However, in general we can choose any closed \((d-1)\)-manifold to define the charge operator due to the topological nature of the definition in eq. (101).
This allows us to define a topological operator for any group element \(g=e^{i\lambda}\) by exponentiating the charge operator
\[U(g,\Sigma_{d-1})=\exp\left(i\lambda Q(\Sigma_{d-1})\right) \tag{103}\]
called a _symmetry defect operator_. This operator is topological in the same way as \(Q(\Sigma_{d-1})\). More explicitly, suppose \(\Sigma^{\prime}_{d-1}\) is a small continuous deformation of \(\Sigma_{d-1}\) that does not cross any charged local operators; then comparing the two symmetry defect operators with the same group element gives
\[\begin{split} U(g,\Sigma)\cdot U(g,\Sigma^{\prime})^{-1}& =U(g,\Sigma)\cdot U(g^{-1},\Sigma^{\prime})=\exp\left(i\lambda \left(\int_{\Sigma}*j_{1}-\int_{\Sigma^{\prime}}*j_{1}\right)\right)\\ &=\exp\left(i\lambda\int_{\bar{\Sigma}}d*j_{1}\right)=1\end{split} \tag{104}\]
where \(\bar{\Sigma}\) is the \(d\)-dimensional manifold whose boundary is the union of \(\Sigma\) and \(\Sigma^{\prime}\). Here we used the fact that \(g^{-1}=e^{-i\lambda}\) and the conservation equation \(d*j_{1}=0\).
Using similar manipulations, it is easy to show that the symmetry defect operators
satisfy the \(G\) multiplication law
\[U(g_{1},\Sigma_{d-1})\cdot U(g_{2},\Sigma_{d-1})=U(g_{3},\Sigma_{d-1}) \tag{111}\]
with \(g_{3}=g_{1}g_{2}\).
The Ward identity eq. (106) then implies that these symmetry defect operators (SDOs) implement the \(G\) action on charged operators that cross their world volume. For example, if we consider a \(\Sigma_{d-1}\) that links the point \(x\), then wrapping an SDO for the element \(g\) on \(\Sigma_{d-1}\) will act on a charged operator \(\mathcal{O}(x)\) as26
Footnote 26: A \((d-1)\)-manifold \(\Sigma_{d-1}\) that wraps a point \(x\) is one that can only be contracted to a point by passing through \(x\). The relation (112) then follows from the topological property of \(U(g,\Sigma_{d-1})\) by contracting it to a point (the trivial operator) and acting on \(\mathcal{O}(x)\) as \(\Sigma_{d-1}\) passes through \(x\).
\[U(g,\Sigma_{d-1})\,\mathcal{O}(x)=R(g)\cdot\mathcal{O}(x). \tag{112}\]
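As a simple illustration, for \(G=U(1)\) and a local operator \(\mathcal{O}_{q}(x)\) of charge \(q\), the relation above reduces to
\[U(e^{i\lambda},\Sigma_{d-1})\,\mathcal{O}_{q}(x)=e^{iq\lambda}\,\mathcal{O}_{q}(x)\,\]
whenever \(\Sigma_{d-1}\) wraps \(x\), while \(\mathcal{O}_{q}(x)\) is left untouched when \(\Sigma_{d-1}\) can be shrunk to a point without crossing \(x\).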
We can similarly define symmetry defect operators for discrete 0-form global symmetries. These are again topological operators that implement symmetry transformations on charged local operators. This structure, in which a global symmetry corresponds to the existence of topological operators, can be taken as a definition. Alternatively, one can think of a discrete abelian symmetry as part of some continuous abelian symmetry that is broken to a discrete subgroup (explicitly or otherwise), in which case the structure of symmetry defect operators is inherited from the continuous completion.
### Higher-form symmetry
We can generalize the discussion in the previous subsection to higher-form symmetries. A \(p\)-form global symmetry in a QFT defined on a \(d\)-dimensional spacetime, denoted \(G^{(p)}\), acts on charged objects supported on \(p\)-dimensional manifolds (obviously \(p\leq d\)). A continuous \(p\)-form symmetry has a \((p+1)\)-form conserved current
\[d*j_{p+1}=0 \tag{113}\]
and one can construct the charge and symmetry defect operators as before
\[Q(\Sigma_{d-p-1})=\int_{\Sigma_{d-p-1}}*j_{p+1}\,\ \ \ \ U(g,\Sigma_{d-p-1})= \exp\left(i\lambda\oint_{\Sigma_{d-p-1}}*j_{p+1}\right), \tag{114}\]
where now the charge operator and symmetry defect operators are defined on codimension \(p+1\) manifolds.27
Footnote 27: For \(d\) space-time dimensional space, a codimension \(p+1\) manifold has \(d-p-1\) dimensions.
The conservation of \(j_{p+1}\) ensures that the associated symmetry defect operator is topological; the argument follows analogously to the case of 0-form symmetry described in Section A.1. If the collection of symmetry transformations \(\{g\}\) forms a group \(G\), the products of symmetry defect operators furnish the group multiplication law. For any \(p>0\), a \(p\)-form symmetry group is necessarily abelian; this follows from the fact that there is no consistent ordering of the codimension-\((p+1)\geq 2\) defect manifolds that could accommodate non-abelian multiplication.28
Footnote 28: The reason is that any pair of marked manifolds \(\Sigma^{(1)},\Sigma^{(2)}\) of codimension \(\geq 2\) can be freely deformed \(\Sigma^{(1)},\Sigma^{(2)}\longmapsto\widetilde{\Sigma}^{(1)},\widetilde{\Sigma}^{(2)}\) so that \(\Sigma^{(1)}\cong\widetilde{\Sigma}^{(2)}\) and \(\Sigma^{(2)}\cong\widetilde{\Sigma}^{(1)}\). For example, any two loops in \(\mathbb{R}^{3}\) can be exchanged by smooth deformations.
A \(p\)-form symmetry acts on \(p\)-dimensional operators \(W_{p}(m,\Sigma_{p})\), where \(m\) denotes the charge. Again, there is an associated Ward identity, which leads to an action of the symmetry defect operator on \(W_{p}(m,\Sigma_{p})\)
\[U(g,\Sigma_{d-p-1})W_{p}(m,\Sigma_{p})=\exp\Bigl{(}i\lambda m\ {\rm Link}( \Sigma_{d-p-1},\Sigma_{p})\Bigr{)}W_{p}(m,\Sigma_{p})\,\quad g=e^{i\lambda}. \tag{111}\]
in analogy with the 0-form case, where \({\rm Link}(\Sigma_{d-p-1},\Sigma_{p})\) is the linking number of the two manifolds. This equation holds for both continuous (e.g. \(\lambda\in\mathbb{S}^{1}\) for \(U(1)\)) and discrete (e.g. \(\lambda=2\pi/n\) for \(\mathbb{Z}_{n}\)) symmetries.
In the rest of this section, we will demonstrate these points with a simple and tractable example with continuous global symmetries. We present an analogous example with discrete higher-form symmetry (\(\mathbb{Z}_{n}\) gauge theory) in Appendix B.
Consider Maxwell theory in \((3+1)\) dimensions. This theory enjoys a \(U(1)_{e}^{(1)}\) 1-form electric symmetry and a \(U(1)_{m}^{(1)}\) 1-form magnetic symmetry. The action is
\[S=\frac{1}{2g^{2}}\int F^{(2)}\wedge*F^{(2)} \tag{112}\]
where \(F^{(2)}=dA^{(1)}\) is the field strength of 1-form gauge field \(A^{(1)}\). The equation of motion and Bianchi identity are written as
\[d*F^{(2)}=0\,\qquad dF^{(2)}=0. \tag{113}\]
These equations can be thought of as current conservation equations, where the former is interpreted as the conservation of a 1-form electric symmetry and the latter as the conservation of a dual magnetic 1-form symmetry. Note that for \(d=4\), both \(F^{(2)}\) and its dual \(*F^{(2)}\) are 2-forms, and these are the 2-form currents for a pair of dual 1-form global symmetries.
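For readers more used to component notation, these are (up to signature-dependent conventions) just the source-free Maxwell equations:
\[d*F^{(2)}=0\;\Longleftrightarrow\;\partial_{\mu}F^{\mu\nu}=0\,\qquad dF^{(2)}=0\;\Longleftrightarrow\;\partial_{[\mu}F_{\nu\rho]}=0\.\]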
The 1-form electric symmetry \(U(1)_{e}^{(1)}\) has a 2-form current and symmetry defect operators
\[J_{2}^{e}=\frac{i}{g^{2}}F^{(2)},\ \ \ \ U_{e}(g,\Sigma_{2})=\exp\left(\frac{i \lambda}{g^{2}}\oint_{\Sigma_{2}}*F^{(2)}\right). \tag{114}\]
for \(g=e^{i\lambda}\in U(1)\). This symmetry is an "electric" symmetry because \(\oint_{\Sigma_{2}}*F^{(2)}\) measures the electric flux through \(\Sigma_{2}\) and it acts on the dynamical electric gauge field by a shift \(A^{(1)}\to A^{(1)}+\lambda_{e}^{(1)}\) where the transformation parameter \(\lambda_{e}^{(1)}\) itself is a closed 1-form (i.e. a flat gauge connection) normalized as \(\oint\lambda_{e}^{(1)}\in U(1)\).29
Footnote 29: The relation of the 1-form parameter \(\lambda_{e}^{(1)}\) and \(\lambda\in 2\pi\mathbb{Z}\) appearing in eq. (114) is the following.
\[\oint_{M_{4}}d\lambda_{e}^{(1)}\wedge*F^{(2)}=\lambda\oint_{\Sigma_{2}}*F^{(2)}. \tag{115}\]
The gauge invariant operators that are charged under \(U(1)_{e}^{(1)}\) are the Wilson line operators \(W_{1}(m,\Sigma_{1})=e^{im\int_{\Sigma_{1}}A^{(1)}}\), where \(m\in\mathbb{Z}\). They are acted on, via a non-trivial linking, by the symmetry defect operator
\[U_{e}(g,\Sigma_{2})W_{1}(m,\Sigma_{1})=\exp\left(i\lambda m\ {\rm Link}(\Sigma_{2}, \Sigma_{1})\right)W_{1}(m,\Sigma_{1})\,\quad g=e^{i\lambda} \tag{114}\]
where \({\rm Link}(\Sigma_{2},\Sigma_{1})\) denotes the linking number between \(\Sigma_{2}\) and \(\Sigma_{1}\) (see [20] for an explanation of linking number in the language of QFT).
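One way to understand why the Wilson line carries charge \(m\) under \(U(1)_{e}^{(1)}\) is via Gauss's law. Schematically (suppressing the factors of \(i\) appropriate to the conventions above), the insertion of \(W_{1}(m,\Sigma_{1})\) sources the equation of motion as
\[\frac{1}{g^{2}}\,d*F^{(2)}\sim m\,\delta(\Sigma_{1})\quad\Longrightarrow\quad\frac{1}{g^{2}}\oint_{\Sigma_{2}}*F^{(2)}\sim m\,{\rm Link}(\Sigma_{2},\Sigma_{1})\,\]
where \(\delta(\Sigma_{1})\) denotes the delta-function 3-form localized on the worldline \(\Sigma_{1}\); exponentiating this flux reproduces the linking phase above.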
The 1-form electric symmetry can be explicitly broken by coupling to electrically charged fields. The presence of those electrically charged particles modifies the equation of motion to
\[d*J_{2}^{e}=\frac{i}{g^{2}}d*F^{(2)}=j_{\rm charge}^{(3)} \tag{115}\]
which violates the conservation law for \(J_{2}^{e}\). Here, \(j_{\rm charge}^{(3)}\) is the Hodge dual of the ordinary 1-form electric current density of the charged particles. If the particles have charge \(n\), the source in eq. (115) breaks \(U(1)_{e}^{(1)}\to\mathbb{Z}_{n}\). Physically, this breaking occurs because a dynamical field with charge \(n\) can pair produce and thereby break Wilson lines whose charge is a multiple of \(n\). This means that the Wilson line charge is only preserved mod \(n\), and consequently \(U(1)_{e}^{(1)}\) is broken to \(\mathbb{Z}_{n}^{(1)}\).
The 1-form magnetic symmetry \(U(1)_{m}^{(1)}\) has a 2-form current and associated symmetry defect operator
\[J_{2}^{m}=\frac{1}{2\pi}*F^{(2)},\ \ \ \ U_{m}(g,\Sigma_{2})=\exp\left(\frac{i \lambda}{2\pi}\oint_{\Sigma_{2}}F^{(2)}\right), \tag{116}\]
for \(g=e^{i\lambda}\in U(1)\). We say that this symmetry is "magnetic" because \(\oint_{\Sigma_{2}}F^{(2)}\) measures the magnetic flux through \(\Sigma_{2}\) and consequently this symmetry acts on the dual magnetic photon \(\tilde{A}^{(1)}\) by a shift \(\tilde{A}^{(1)}\to\tilde{A}^{(1)}+\lambda_{m}^{(1)}\) where \(\lambda_{m}^{(1)}\) is a closed 1-form (i.e. a flat gauge connection) normalized as \(\oint\lambda_{m}^{(1)}\in U(1)\).
The gauge invariant operators that are charged under \(U(1)_{m}^{(1)}\) are 't Hooft line operators \(T_{1}(\ell,\Sigma_{1})=e^{i\ell\oint_{\Sigma_{1}}\tilde{A}^{(1)}},\ \ell\in\mathbb{Z}\):
\[U_{m}(g,\Sigma_{2})T_{1}(\ell,\Sigma_{1})=\exp\left(i\lambda\ell\ {\rm Link}(\Sigma_{2},\Sigma_{1})\right)T_{1}(\ell,\Sigma_{1})\,\qquad g=e^{i\lambda}. \tag{117}\]
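In parallel with the electric case, the 't Hooft line can be thought of as the worldline of a probe monopole of charge \(\ell\), which sources magnetic flux: schematically, \(\frac{1}{2\pi}dF^{(2)}\sim\ell\,\delta(\Sigma_{1})\), so that \(\frac{1}{2\pi}\oint_{\Sigma_{2}}F^{(2)}\sim\ell\,{\rm Link}(\Sigma_{2},\Sigma_{1})\), reproducing the linking phase above.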
The magnetic 1-form global symmetry can be broken by dynamical monopoles, similar to the way the 1-form electric symmetry is broken by electrically charged particles. Such dynamical states modify the Bianchi identity to
\[\frac{1}{2\pi}dF^{(2)}=j_{\rm mon}^{(3)}. \tag{118}\]
As with the Wilson lines and charged particles, dynamical monopoles can break 't Hooft lines by monopole-anti-monopole pair production, thereby breaking \(U(1)_{m}^{(1)}\).
We can couple the Maxwell theory to background gauge fields for these two 1-form global symmetries. The action with such couplings is given by
\[S=\frac{1}{2g^{2}}\int\left(F^{(2)}-\mathcal{B}_{e}^{(2)}\right)\wedge*\left(F^{ (2)}-\mathcal{B}_{e}^{(2)}\right)+\frac{i}{2\pi}\int\mathcal{B}_{m}^{(2)} \wedge\left(F^{(2)}-\mathcal{B}_{e}^{(2)}\right) \tag{101}\]
where \(\mathcal{B}_{e}^{(2)}\) and \(\mathcal{B}_{m}^{(2)}\) are the 2-form background gauge fields of the electric and magnetic 1-form symmetries. They transform under the respective symmetry as background gauge transformations.
\[\mathcal{B}_{e}^{(2)}\rightarrow\mathcal{B}_{e}^{(2)}+d\lambda_{e}^{(1)}, \hskip 14.226378pt\mathcal{B}_{m}^{(2)}\rightarrow\mathcal{B}_{m}^{(2)}+d \lambda_{m}^{(1)}. \tag{102}\]
Note that in addition to coupling \(\mathcal{B}_{e,m}^{(2)}\) to their respective currents (the analogue of the coupling \(i\int A^{(1)}\wedge*j_{1}\) above), we have also added background counterterms to make the theory explicitly invariant under \(U(1)_{e}^{(1)}\) background gauge transformations, which shift \(F^{(2)}\to F^{(2)}+d\lambda_{e}^{(1)}\).
However, now we see that the action is not invariant under \(U(1)_{m}^{(1)}\) gauge transformations
\[\delta S=-\frac{i}{2\pi}\int\lambda_{m}^{(1)}\wedge d\mathcal{B}_{e}^{(2)}. \tag{103}\]
We can make a different choice for local counterterms that makes the action invariant under \(U(1)_{m}^{(1)}\) background gauge transformations so that the theory is given by
\[S^{\prime}=\frac{1}{2g^{2}}\int\left(F^{(2)}-\mathcal{B}_{e}^{(2)}\right) \wedge*\left(F^{(2)}-\mathcal{B}_{e}^{(2)}\right)+\frac{i}{2\pi}\int\mathcal{ B}_{m}^{(2)}\wedge F^{(2)} \tag{104}\]
However, we now see that the theory is not invariant under \(U(1)_{e}^{(1)}\) gauge transformations
\[\delta S^{\prime}=-\frac{i}{2\pi}\int\lambda_{e}^{(1)}\wedge d \mathcal{B}_{m}^{(2)}. \tag{105}\]
In fact, one can show that no choice of local counterterms makes the theory invariant under both the electric and magnetic 1-form symmetries with generic background gauge fields \(\mathcal{B}_{e}^{(2)}\) and \(\mathcal{B}_{m}^{(2)}\) turned on. This means that we cannot gauge both global symmetries. Such a "tension" among different global symmetries is indicative of a 't Hooft anomaly involving both \(U(1)_{e}^{(1)}\) and \(U(1)_{m}^{(1)}\).
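A quick way to see this is to interpolate between the two choices of counterterm with a parameter \(s\) (a sketch, displaying only the term whose variation matters; the kinetic term is invariant under both background transformations):
\[S_{s}\supset\frac{i}{2\pi}\int\mathcal{B}_{m}^{(2)}\wedge\left(F^{(2)}-s\,\mathcal{B}_{e}^{(2)}\right)\,\qquad\delta_{m}S_{s}=-\frac{is}{2\pi}\int\lambda_{m}^{(1)}\wedge d\mathcal{B}_{e}^{(2)}\,\qquad\delta_{e}S_{s}=-\frac{i(1-s)}{2\pi}\int\lambda_{e}^{(1)}\wedge d\mathcal{B}_{m}^{(2)}\.\]
Since \(S_{s}\) interpolates linearly between the two actions above (\(s=1\) and \(s=0\), respectively), these variations follow directly from the ones already computed, and no value of \(s\) cancels both simultaneously.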
It is often useful to organize the anomalies of a \(d\)-dimensional quantum field theory in terms of a \((d+1)\)-dimensional topological quantum field theory. This is known as anomaly inflow [70; 71] (see also [24] for a recent discussion of anomaly inflow in the context of AdS/CFT duality and with relevance to particle phenomenology). In our current example, the \(U(1)_{e}^{(1)}\)-\(U(1)_{m}^{(1)}\) mixed anomaly is described by a 5d anomaly TQFT:
\[S_{\text{inflow}}=\frac{i}{2\pi}\int_{N_{5}}\mathcal{B}_{m}^{(2)}\wedge d \mathcal{B}_{e}^{(2)} \tag{106}\]
where \(\partial N_{5}=M_{4}\), i.e. the boundary of the auxiliary 5d manifold \(N_{5}\) is the 4d spacetime.
In fact, one easily sees that under a magnetic transformation \({\cal B}_{m}^{(2)}\to{\cal B}_{m}^{(2)}+d\lambda_{m}^{(1)}\) with \({\cal B}_{e}^{(2)}\) activated (or similarly under an electric transformation with \({\cal B}_{m}^{(2)}\) turned on) this reproduces the same anomaly as the original \(4d\) action eq. (102).
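Concretely, under \(\mathcal{B}_{m}^{(2)}\to\mathcal{B}_{m}^{(2)}+d\lambda_{m}^{(1)}\) one finds
\[\delta S_{\text{inflow}}=\frac{i}{2\pi}\int_{N_{5}}d\lambda_{m}^{(1)}\wedge d\mathcal{B}_{e}^{(2)}=\frac{i}{2\pi}\int_{M_{4}}\lambda_{m}^{(1)}\wedge d\mathcal{B}_{e}^{(2)}\,\]
where the second equality uses Stokes' theorem and \(\partial N_{5}=M_{4}\). Up to the overall sign fixed by one's orientation conventions, this matches the boundary variation found above, so the combined bulk-boundary system can be taken to be invariant.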
## Appendix B \(\mathbb{Z}_{n}\) TQFT
In this appendix, we review an example of a topological field theory, also known as a BF theory, given by \(\mathbb{Z}_{n}\) gauge theory and introduced in [19; 20]. This theory appears ubiquitously in the literature on generalized global symmetries (see [1; 3; 21; 72] for a useful introduction).30
Footnote 30: Among other things, BF theories are prototypical discrete gauge theories and can be used to describe the IR theory of spontaneously broken discrete higher-form symmetries [73].
The action for \(4d\)\(\mathbb{Z}_{n}\) TQFT is given by31
Footnote 31: Although our discussion here is focused on \(4d\), most of the details generalize straightforwardly to any dimension.
\[S_{\text{BF}}=\frac{in}{2\pi}\int B^{(2)}\wedge dA^{(1)}. \tag{103}\]
It can also be written as \(B^{(2)}\wedge F^{(2)}\), hence the name BF theory. While this theory has many subtleties associated to the fact that it describes a discrete gauge theory, it is illuminating to keep in mind a particularly simple UV completion.
Consider an Abelian Higgs model with a charge \(n\) Higgs field \(\Phi\).
\[{\cal L}=\left|d\Phi-inA^{(1)}\Phi\right|^{2}+\frac{1}{2g^{2}}F^{(2)}\wedge*F^ {(2)}-V(\Phi). \tag{104}\]
where \(V(\Phi)\) is chosen so that \(\Phi\) condenses in the IR. It is clear that the condensation of \(\Phi\) will break the \(U(1)\) gauge group down to \(\mathbb{Z}_{n}\), since the vev of \(\Phi\) is invariant under \(\mathbb{Z}_{n}\subset U(1)\) gauge transformations. Thus, in the IR the theory will flow to a \(\mathbb{Z}_{n}\) gauge theory.
To see this, note that the radial mode of \(\Phi\) has a mass of order the symmetry-breaking scale and is integrated out. Hence, we can decompose \(\Phi=\Lambda e^{i\varphi}\), where \(\Lambda\) is the scale of the symmetry breaking and \(\varphi\) is a periodic scalar field of charge \(n\). Substituting this into the above action yields
\[{\cal L}\sim\Lambda^{2}\left(d\varphi-nA^{(1)}\right)\wedge*\left(d\varphi-nA ^{(1)}\right)+\frac{1}{2g^{2}}F^{(2)}\wedge*F^{(2)}. \tag{105}\]
The low-energy limit, which effectively sends \(\Lambda\to\infty\), imposes \(A^{(1)}=\frac{d\varphi}{n}\) (i.e. a pure gauge configuration) and sets the gauge kinetic term to zero, leaving no local degrees of freedom. This is not surprising, since all but a discrete part of the scalar degrees of freedom are "eaten" by the gauge boson. The key point, however, is that there is still an important discrete remnant.
Let us study the action (105) a bit more closely. To proceed further, it is useful to dualize \(\varphi\). This can be achieved by introducing a new 3-form field \(H\) and rewriting the
Lagrangian as
\[{\cal L}=\frac{1}{(4\pi)^{2}\Lambda^{2}}H\wedge*H+\frac{i}{2\pi}H\wedge(d\varphi- nA^{(1)}). \tag{104}\]
One can check that by integrating out \(H\) using its equation of motion, \(*H=4\pi i\Lambda^{2}(d\varphi-nA^{(1)})\), we recover the Lagrangian in eq. (103). Additionally, the equation of motion for \(\varphi\) is \(dH=0\), which means we can locally introduce a 2-form \(B^{(2)}\) with \(dB^{(2)}=H\). The Lagrangian then becomes
\[{\cal L}=\frac{1}{(4\pi)^{2}\Lambda^{2}}H\wedge*H+\frac{in}{2\pi}B^{(2)}\wedge dA ^{(1)}. \tag{105}\]
Taking the limit \(\Lambda\to\infty\), we arrive at the BF theory in eq. (101).
Having shown that eq. (101) describes a \(\mathbb{Z}_{n}\) gauge theory, we now study it in more detail. First note that the theory is completely independent of the metric and hence is a "topological" field theory.32 Additionally, the equations of motion set the two field strengths to vanish
Footnote 32: In the language of differential forms, the metric only enters via the Hodge star operation.
\[dA^{(1)}=dB^{(2)}=0. \tag{106}\]
This eliminates all local degrees of freedom in the IR, and confirms once again that the theory is topological.
This theory also has two \(U(1)\) gauge symmetries
\[A^{(1)}\to A^{(1)}+d\lambda^{(0)}\quad,\quad B^{(2)}\to B^{(2)}+d\lambda^{(1)}. \tag{107}\]
The \(A^{(1)}\) gauge symmetry follows from the UV theory while the \(B^{(2)}\) gauge symmetry is a consequence of the fact that \(B^{(2)}\) is dual to \(\varphi\).33 These two gauge fields have corresponding \(\mathbb{Z}_{n}\) higher-form global symmetries under which fields transform as
Footnote 33: More explicitly, the vortices of \(\varphi\) correspond to the Wilson-like surface operators of \(B^{(2)}\). The fact that \(\varphi\) is periodic means that these vortices are integer quantized which requires that \(B^{(2)}\) is a 2-form gauge field with the transformation properties above.
\[\begin{split}\mathbb{Z}_{n}^{(1)}:& A^{(1)}\to A^{(1)}+\frac{1}{n}\epsilon^{(1)}\,\qquad\oint\epsilon^{(1)}\in 2\pi\mathbb{Z}\\ \mathbb{Z}_{n}^{(2)}:& B^{(2)}\to B^{(2)}+\frac{1}{n}\epsilon^{(2)}\,\qquad\oint\epsilon^{(2)}\in 2\pi\mathbb{Z}.\end{split} \tag{108}\]
The existence of these symmetries is also manifested by the presence of gauge invariant Wilson line and surface operators:
\[W_{1}(\ell,\Sigma_{1})=\exp\left(i\ell\oint_{\Sigma_{1}}A^{(1)}\right)\, \qquad W_{2}(m,\Sigma_{2})=\exp\left(im\oint_{\Sigma_{2}}B^{(2)}\right)\,\ \ \ell,m\in\mathbb{Z}. \tag{109}\]
Since there are no dynamical degrees of freedom in the theory, there is no dynamical screening of these operators and they are absolutely stable.
In other words, the operators in eq. (109) are protected by the \(\mathbb{Z}_{n}\) global symmetries
above: the lines are charged under the 1-form global symmetry \(\mathbb{Z}_{n}^{(1)}\) and the surface operators are charged under the 2-form global symmetry \(\mathbb{Z}_{n}^{(2)}\). This is seen by checking that these Wilson operators transform under the global symmetries as
\[\mathbb{Z}_{n}^{(1)}: W_{1}(\ell,\Sigma_{1})\longmapsto e^{\frac{2\pi i\ell}{n}\oint_{\Sigma_{1}}\frac{\epsilon^{(1)}}{2\pi}}W_{1}(\ell,\Sigma_{1}) \tag{111}\] \[\mathbb{Z}_{n}^{(2)}: W_{2}(m,\Sigma_{2})\longmapsto e^{\frac{2\pi im}{n}\oint_{\Sigma_{2}}\frac{\epsilon^{(2)}}{2\pi}}W_{2}(m,\Sigma_{2}). \tag{112}\]
In order to see that the spectrum of these operators is consistent with \(\mathbb{Z}_{n}\), we first recall that \(A^{(1)}=\frac{1}{n}d\varphi\). This shows
\[\left(W_{1}(1,\Sigma_{1})\right)^{n}=\exp\left(i\oint_{\Sigma_{1}}nA^{(1)} \right)=\exp\left(i\oint_{\Sigma_{1}}d\varphi\right)=1. \tag{113}\]
Therefore, the Wilson line operators are classified by a charge \(\ell=0,\cdots,(n-1)\), as we expect for \(\mathbb{Z}_{n}^{(1)}\) symmetry.
For the surface operator, we further dualize \(A^{(1)}\) to the dual photon field \(\tilde{A}^{(1)}\). To this end, we view the field strength \(F^{(2)}=dA^{(1)}\) as an independent field and add a Lagrange multiplier term to impose the Bianchi identity
\[\mathcal{L}=\frac{in}{2\pi}B^{(2)}\wedge F^{(2)}+\frac{i}{2\pi}d\tilde{A}^{(1 )}\wedge F^{(2)}=\frac{i}{2\pi}F^{(2)}\wedge\left(d\tilde{A}^{(1)}+nB^{(2)} \right). \tag{114}\]
In this presentation, one views the dual photon field as a matter field that Higgses the 2-form gauge field \(B^{(2)}\) with charge \(n\): the \(U(1)\) 1-form gauge symmetry is broken down to \(\mathbb{Z}_{n}^{(1)}\). This is linked to the fact that the gauge symmetry of eq. (114) is
\[\tilde{A}^{(1)} \rightarrow \tilde{A}^{(1)}+d\tilde{\lambda}^{(0)}-n\lambda^{(1)} \tag{115}\] \[B^{(2)} \rightarrow B^{(2)}+d\lambda^{(1)}, \tag{116}\]
where we see that \(\tilde{A}^{(1)}\) also comes with its own 0-form gauge symmetry with parameter \(\tilde{\lambda}^{(0)}\). This is an emergent gauge symmetry as a result of the duality transformation (similarly to the 1-form gauge symmetry of \(B^{(2)}\)). The equation of motion for \(F^{(2)}\) sets \(d\tilde{A}^{(1)}+nB^{(2)}=0\). This is all we need to prove the \(\mathbb{Z}_{n}\) spectrum of surface operators. For the sake of completeness, we repeat the exercise:
\[(W_{2}(1,\Sigma_{2}))^{n}=\exp\left(i\oint_{\Sigma_{2}}nB^{(2)}\right)=\exp \left(i\oint_{\Sigma_{2}}d\tilde{A}^{(1)}\right)=1. \tag{117}\]
In order to understand the correlation function of Wilson operators, we first show that an insertion of surface operator \(W_{2}(m,\Sigma_{2})\) in the path integral (equivalent to introducing a source term) modifies the equation of motion of \(B^{(2)}\) and effectively turns on non-trivial
\(\mathbb{Z}_{n}\) valued \(F^{(2)}\) flux localized on the worldvolume of the surface operator:
\[\int\left[dA^{(1)}\right]\left[dB^{(2)}\right]\,\underbrace{e^{im\oint_{\Sigma_{2}}B^{(2)}}}_{W_{2}(m,\Sigma_{2})}e^{\frac{in}{2\pi}\int_{M_{4}}B^{(2)}\wedge dA^{(1)}}\,\,\Rightarrow\,\,F^{(2)}=\frac{2\pi m}{n}\delta^{(2)}(\Sigma_{2}). \tag{111}\]
Here, \(\delta^{(2)}(\Sigma_{2})\) is a 2-form delta function which is non-zero only on \(\Sigma_{2}\) and is normalized as \(\int_{\Gamma_{2}}\delta^{(2)}(\Sigma_{2})=1\), where \(\Gamma_{2}\) is a 2-manifold that transversely intersects \(\Sigma_{2}\) once. The modified equation of motion means that the holonomy of \(W_{1}=e^{i\oint A^{(1)}}\) around \(\Sigma_{2}\) evaluates to a phase \(e^{\frac{2\pi im}{n}}\).34 Similarly, an insertion of a line operator \(W_{1}(\ell,\Sigma_{1})\) modifies the equation of motion of \(A^{(1)}\) to \(dB^{(2)}=\frac{2\pi\ell}{n}\delta^{(1)}(\Sigma_{1})\), inducing a holonomy for \(W_{2}=e^{i\oint B^{(2)}}\) which is \(\mathbb{Z}_{n}\) valued. This means that
Footnote 34: Physically, surface operators \(W_{2}(m,\Sigma_{2})\) are cosmic strings of the UV Abelian Higgs model. This non-trivial holonomy of \(W_{1}\) around the BF string worldvolume is a statement that BF strings are supported by a \(\mathbb{Z}_{n}\) valued magnetic flux.
\[\langle W_{1}(\ell,\Sigma_{1})\,W_{2}(m,\Sigma_{2})\rangle\sim\exp\left(\frac{ 2\pi i}{n}\ell m\,\,\text{Link}(\Sigma_{1},\Sigma_{2})\right) \tag{112}\]
where \(\text{Link}(\Sigma_{1},\Sigma_{2})\) is the linking number of \(\Sigma_{1}\) and \(\Sigma_{2}\).
Note that the \(\mathbb{Z}_{n}\) theory does not have any non-trivial 't Hooft operators (dual to Wilson operators for \(U(1)\) gauge symmetries) nor any local operators (dual to string operators for \(U(1)\) symmetries). The reason is that the local operators constructed from the dual of \(B^{(2)}\) are written in terms of \(\varphi\): \(I(m,x)=e^{im\varphi(x)},\ m\in\mathbb{Z}\), which is not gauge invariant. However, it can be made gauge invariant by attaching a \(\mathbb{Z}_{n}\) Wilson line operator to it (see eq. (109))
\[\tilde{I}(m,x)=e^{im\varphi(x)}e^{-imn\int_{P}A^{(1)}} \tag{113}\]
at the expense that the operator has now become non-local. The equation of motion, on the other hand, sets \(d\varphi-nA^{(1)}=0\), showing that the above operator, while gauge invariant, is trivial.
Similarly, one may consider an 't Hooft line operator \(T(\ell,\Sigma_{1})=e^{i\ell\oint\tilde{A}^{(1)}}\). The dual formulation eq. (108) shows that such an operator is not gauge invariant under \(B^{(2)}\) gauge transformations. It can similarly be made gauge invariant by attaching it to a surface operator
\[\tilde{T}(\ell,\Sigma_{1})=e^{i\ell\oint\tilde{A}^{(1)}}e^{i\ell n\int_{ \Sigma_{2}}B^{(2)}},\,\,\,\partial\Sigma_{2}=\Sigma_{1} \tag{114}\]
Again, the equation of motion makes this a trivial operator.
Before we conclude, we would like to point out that there exist interesting variations of the BF theory, obtained by adding a discrete \(\theta\)-parameter term.
\[S=\int_{M_{4}}\frac{in}{2\pi}B^{(2)}\wedge dA^{(1)}+\frac{ipn}{4\pi}B^{(2)} \wedge B^{(2)}. \tag{115}\]
Here, \(p\in\mathbb{Z}\). The discrete \(\theta\)-parameter \(\frac{ipn}{4\pi}\int B^{(2)}\wedge B^{(2)}\) is also known as "discrete torsion" or as a "Symmetry Protected Topological phase" (SPT phase) associated with the \(\mathbb{Z}_{n}^{(1)}\) symmetry. This theory appears in many situations, including in the discussion of \(SU(n)/\mathbb{Z}_{n}\) gauge theory (see e.g. [1; 3; 73]) and of non-invertible symmetry as a way of gauging a \(\mathbb{Z}_{n}\) subgroup of the bulk 1-form magnetic symmetry (see e.g. [35]). This addition introduces several interesting and subtle effects: it makes the 1-form gauge field \(A^{(1)}\) charged under the 1-form gauge symmetry of \(B^{(2)}\), and it modifies the global symmetry to \(\mathbb{Z}_{n}^{(1)}\times\mathbb{Z}_{J}^{(2)}\), where \(J=\frac{1}{2}\mathrm{gcd}(p,n)\) for \(p,n\) even and \(J=\mathrm{gcd}(p,n)\) otherwise. This clearly changes the spectrum of conserved operators and may lead to interesting signals different from those discussed in our paper. For a very nice and detailed discussion, we highly recommend Section 6 of [1].
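As a concrete illustration of the last point, using only the formula quoted above: for \((n,p)=(4,2)\) both are even and \(J=\tfrac{1}{2}\mathrm{gcd}(2,4)=1\), so the surviving 2-form symmetry \(\mathbb{Z}_{J}^{(2)}\) is trivial, whereas for \((n,p)=(9,3)\) one finds \(J=\mathrm{gcd}(3,9)=3\) and a \(\mathbb{Z}_{3}^{(2)}\) symmetry remains.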
## Appendix C Global Symmetries of TQFT-Coupling I
In this appendix, we discuss in detail symmetries of the theory studied in Section 2: axion-Maxwell theory coupled to a \(\mathbb{Z}_{n}\) gauge theory.35
Footnote 35: The generalized symmetry and associated higher-group structure of the axion-Maxwell were studied in [5; 14; 15].
Since there are many different symmetries and we need to introduce a background gauge field for each of them, we list these symmetries and the associated currents and background gauge fields in Table 6, setting up our notation.36
Footnote 36: Strictly speaking, discrete symmetries do not have associated currents. In Table 6, by the conserved current generating a discrete symmetry, we mean current of an associated \(U(1)\) symmetry that is broken down to the appropriate discrete subgroup.
The action of the theory is
\[\begin{split} S=&\frac{1}{2}\int da\wedge*da+\frac {1}{2g}\int F_{A}^{(2)}\wedge*F_{A}^{(2)}+\frac{in}{2\pi}\int B^{(2)}\wedge F_ {B}^{(2)}\\ -&\frac{iK_{A}}{8\pi^{2}f_{a}}\int aF_{A}^{(2)} \wedge F_{A}^{(2)}-\frac{iK_{AB}}{4\pi^{2}f_{a}}\int aF_{A}^{(2)}\wedge F_{B}^ {(2)}-\frac{iK_{B}}{8\pi^{2}f_{a}}\int aF_{B}^{(2)}\wedge F_{B}^{(2)}.\end{split} \tag{100}\]
The above theory has six generalized global symmetries: four 'electric' symmetries and two 'magnetic' symmetries. Electric symmetries are obtained from the equations of motion of the fields appearing in the action, and magnetic symmetries come from Bianchi identities.
The procedure for analyzing the symmetries of this theory is to couple the classical symmetries to background gauge fields and study the transformation of the action under background gauge transformations. Then, we can probe the anomalies and higher symmetry
\begin{table}
\begin{tabular}{|c|c|c||c|c|c|} \hline \multicolumn{3}{|c||}{Electric Symmetries} & \multicolumn{3}{|c|}{Magnetic Symmetries} \\ \hline \hline
0-form axion shift & \(\mathbb{Z}_{K_{A}}^{(0)}\) & \(*J_{1a}^{e}\) & \(\mathcal{A}_{\mathrm{e}}^{(1)}\) & 2-form axion string & \(U(1)^{(2)}\) & \(*J_{3a}^{m}\) & \(\mathcal{A}_{m}^{(3)}\) \\ \hline
1-form \(A\)-electric & \(\mathbb{Z}_{1}^{(1)}\) & \(*J_{2A}^{e}\) & \(\mathcal{B}_{\mathrm{e}}^{(2)}\) & 1-form \(A\)-magnetic & \(U(1)^{(1)}\) & \(*J_{2A}^{m}\) & \(\mathcal{B}_{m}^{(2)}\) \\ \hline
1-form \(B\)-electric & \(\mathbb{Z}_{K_{A}}^{(1)}\) & \(*J_{2B}^{e}\) & \(\mathcal{C}_{\mathrm{e}}^{(2)}\) & & & & \\ \hline
2-form BF string & \(\mathbb{Z}_{n}^{(2)}\) & \(*J_{3H}^{e}\) & \(\mathcal{D}_{\mathrm{e}}^{(3)}\) & & & & \\ \hline \hline \multicolumn{3}{|c|}{Field Strength of Magnetic Symmetries} \\ \hline \hline
2-form axion string & \(\mathcal{G}^{(4)}=d\mathcal{A}_{m}^{(3)}+\cdots\) & 1-form \(A\)-magnetic & \(\mathcal{H}^{(3)}=d\mathcal{B}_{m}^{(2)}+\cdots\) \\ \hline \end{tabular}
\end{table}
Table 6: List of generalized symmetries, their currents, and background gauge fields in decoupled axion-Maxwell and \(\mathbb{Z}_{n}\) BF theory.
structures by coupling all such symmetries to background gauge fields simultaneously and studying their gauge transformations.
A complementary viewpoint from which to study the symmetry groups of the theory is to look at the conservation equation (take for example a \(U(1)\) symmetry). When there are anomalous terms
\[d*J=j\, \tag{104}\]
such that \(j\) is a quantized charge density \(\int j\in K\mathbb{Z}\), the \(U(1)\) symmetry is broken \(U(1)\mapsto\mathbb{Z}_{K}\). In a sense, this is a quick and easy way to detect symmetry breaking interactions.
The theory can be coupled to background gauge fields as
\[S=\frac{1}{2}\int(da-f_{a}\mathcal{A}_{e}^{(1)})\wedge*(da-f_{a }\mathcal{A}_{e}^{(1)})+\frac{1}{2g}\int(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)}) \wedge*(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)})\] \[+\frac{in}{2\pi}B^{(2)}\wedge(F_{B}^{(2)}-\mathcal{C}_{e}^{(2)}) -\frac{in}{2\pi}\int\mathcal{D}_{e}^{(3)}\wedge B^{(1)}-\frac{i}{2\pi f_{a}} \int a\,d\mathcal{A}_{m}^{(3)}+\frac{i}{2\pi}\int A^{(1)}\wedge d\mathcal{B}_ {m}^{(2)}\] \[-\frac{iK_{A}}{8\pi^{2}f_{a}}\int a(F_{A}^{(2)}-\mathcal{B}_{e}^{ (2)})\wedge(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)})-\frac{iK_{AB}}{4\pi^{2}f_{a}} \int a(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)})\wedge(F_{B}^{(2)}-\mathcal{C}_{e}^{( 2)})\] \[-\frac{iK_{B}}{8\pi^{2}f_{a}}\int a(F_{B}^{(2)}-\mathcal{C}_{e}^{( 2)})\wedge(F_{B}^{(2)}-\mathcal{C}_{e}^{(2)}).\]
In general we choose conventions so that all background gauge fields satisfy the analog of Dirac quantization:
\[\oint\frac{d\mathcal{A}^{(p)}}{2\pi}\in\mathbb{Z}. \tag{105}\]
As we will see below, some of the electric symmetries mix with the magnetic symmetries, which leads to a higher-group structure. Practically, this happens when the only gauge-invariant field strengths the theory admits for the magnetic symmetries are shifted by terms non-linear in the electric-symmetry background gauge fields.
### Symmetries of Uncoupled Theories
Now we will discuss the symmetries of the theory. We find it simplest to first analyze the decoupled theory where \(K_{AB}=K_{B}=0\) (i.e. the charge \(K_{A}\) axion-Maxwell theory and the \(\mathbb{Z}_{n}\) BF theory are decoupled), starting with the magnetic symmetries before discussing the electric symmetries. We will discuss the symmetry structure of the full theory in the following section.
\(U(1)^{(2)}\) **2-form axion string symmetry:**
The axion, being a smooth field, obeys a Bianchi identity \(d^{2}a=0\). This implies the existence of a two-form \(U(1)^{(2)}\) symmetry with a current
\[*J_{3}=\frac{1}{2\pi f_{a}}da. \tag{109}\]
This couples to a background 3-form gauge field \(\mathcal{A}_{m}^{(3)}\)
\[S=...+i\int\mathcal{A}_{m}^{(3)}\wedge\frac{1}{2\pi f_{a}}da=i\int\frac{a}{f_{a }}\frac{d\mathcal{A}_{m}^{(3)}}{2\pi}\,. \tag{110}\]
which is consistent with the Dirac quantization condition and the gauge redundancy \(a\to a+2\pi f_{a}\).
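As a quick check of the latter point, under \(a\to a+2\pi f_{a}\) the right-hand side of the coupling shifts by
\[\delta S=2\pi i\int\frac{d\mathcal{A}_{m}^{(3)}}{2\pi}\in 2\pi i\,\mathbb{Z}\,\]
so the path-integral weight is unchanged, as required for the coupling to be well defined.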
The charged objects of \(U(1)^{(2)}\) are the 2d worldsheets of axion strings, \(V(m,\Sigma_{2})\). The charge and symmetry defect operators are given by
\[Q_{aM}(\Sigma_{1})=\int_{\Sigma_{1}}*J_{3}\,\qquad U_{aM}(\alpha,\Sigma_{1})=e^ {i\alpha Q_{aM}(\Sigma_{1})}\, \tag{111}\]
Such symmetry defect operators act on the axion strings with which they have non-trivial linking
\[\langle U_{aM}(\alpha,\Sigma_{1})V(m,\Sigma_{2})\rangle=e^{i\alpha m\ \text{Link}(\Sigma_{1},\Sigma_{2})}\langle V(m,\Sigma_{2})\rangle. \tag{112}\]
This symmetry is a \(U(1)^{(2)}\) and it is broken in the presence of dynamical strings
\[d*J_{3}=j_{2}^{\text{string}}. \tag{113}\]
Indeed, in the presence of a string, the Bianchi identity \(d^{2}a=0\) is violated and the axion carries non-zero winding around the string, see eq. (10). A dual version of this appeared in the BF theory case, i.e. the electric 2-form BF string symmetry discussed above.
\(U(1)^{(1)}_{mA}\) **1-form \(A\)-magnetic symmetry:**
This symmetry is a direct consequence of the Bianchi identity \(dF_{A}^{(2)}=0\) and the current is given by
\[*J_{2m}=\frac{1}{2\pi}F_{A}^{(2)}. \tag{114}\]
If there were dynamical monopoles present in the theory, they would lead to a source term in the conservation equation
\[\frac{1}{2\pi}dF_{A}^{(2)}=j_{3}^{m} \tag{115}\]
and this symmetry would be broken (possibly to an unbroken discrete subgroup). 't Hooft line operators \(T(m,\Sigma_{1})=e^{im\int_{\Sigma_{1}}\tilde{A}^{(1)}}\) (where \(\tilde{A}^{(1)}\) is the dual gauge field) are charged under this
magnetic 1-form symmetry.
This symmetry couples to a background 2-form \(U(1)^{(1)}\) gauge field \(\mathcal{B}_{m}^{(2)}\) as
\[S=...+\frac{i}{2\pi}\int\mathcal{B}_{m}^{(2)}\wedge F_{A}^{(2)}. \tag{109}\]
\(\mathbb{Z}_{K_{A}}^{(0)}\) **0-form axion shift symmetry:**
The equation of motion of the axion is
\[d\left(if_{a}*da\right)=\underbrace{\frac{K_{A}}{8\pi^{2}}F_{A}^{(2)}\wedge F_ {A}^{(2)}}_{=j_{4}^{\text{inst}}(A)}. \tag{110}\]
In the absence of anomalous/instanton terms on the right hand side, the axion has \(U(1)^{(0)}\) shift symmetry \(a\to a+f_{a}c,\ c\in\mathbb{S}^{1}\) and its current is \(*J_{1}=if_{a}*da\). The above equation shows that the axion coupling breaks the continuous shift symmetry to a discrete subgroup \(U(1)^{(0)}\rightarrow\mathbb{Z}_{K_{A}}^{(0)}\).
This can also be seen explicitly by studying the shift of the action under \(a\to a+f_{a}c\):
\[\delta S=\frac{icK_{A}}{8\pi^{2}}\int F_{A}^{(2)}\wedge F_{A}^{(2)} \tag{111}\]
Clearly, the action is invariant only if
\[a\longmapsto a+\frac{2\pi f_{a}}{K_{A}}\, \tag{112}\]
where we used the identity
\[\frac{1}{8\pi^{2}}\int F_{A}^{(2)}\wedge F_{A}^{(2)}\in\mathbb{Z}. \tag{113}\]
We can define a topological symmetry defect operator for \(\mathbb{Z}_{K_{A}}^{(0)}\) by
\[\begin{split} U_{aE}(\ell,\Sigma_{3})&=e^{\frac{2 \pi i\ell}{K_{a}}Q_{aE}(\Sigma_{3})}\,\\ Q_{aE}(\Sigma_{3})&=\int_{\Sigma_{3}}*J_{1}-K_{A} \,\omega_{3}(A^{(1)})-2K_{AB}\,\omega_{3}(A^{(1)},B^{(1)})-K_{B}\,\omega_{3}( B^{(1)}),\end{split} \tag{114}\]
where \(\ell=0,1,\cdots,K_{A}-1\) and \(\omega_{3}(A^{(1)}),\omega_{3}(A^{(1)},B^{(1)}),\omega_{3}(B^{(1)})\) are Chern-Simons terms that satisfy
\[\begin{split} d\omega_{3}(A^{(1)})&=\frac{1}{8\pi^{ 2}}F_{A}^{(2)}\wedge F_{A}^{(2)}\,\\ d\omega_{3}(A^{(1)},B^{(1)})&=\frac{1}{8\pi^{2}}F_{A }^{(2)}\wedge F_{B}^{(2)}\,\\ d\omega_{3}(B^{(1)})&=\frac{1}{8\pi^{2}}F_{B}^{(2)} \wedge F_{B}^{(2)}\.\end{split} \tag{115}\]
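A convenient explicit choice (one of many; any two choices differ by a closed 3-form) is
\[\omega_{3}(A^{(1)})=\frac{1}{8\pi^{2}}A^{(1)}\wedge F_{A}^{(2)}\,\qquad\omega_{3}(A^{(1)},B^{(1)})=\frac{1}{8\pi^{2}}A^{(1)}\wedge F_{B}^{(2)}\,\qquad\omega_{3}(B^{(1)})=\frac{1}{8\pi^{2}}B^{(1)}\wedge F_{B}^{(2)}\,\]
whose exterior derivatives reproduce the instanton densities above since \(dF_{A}^{(2)}=dF_{B}^{(2)}=0\).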
The gauge invariant operators that are charged under this symmetry are the vertex operators \(I(m,x)=e^{\frac{im}{f_{a}}a(x)},\ m\in\mathbb{Z}\). These are invariant under the gauge transformation \(a\to a+2\pi f_{a}\), and \(U_{aE}(\ell,\Sigma_{3})\) acts on them when there is non-trivial linking
\[\langle U_{aE}(\ell,\Sigma_{3})\,I(m,x)\rangle=e^{\frac{2\pi i}{K_{a}}\ell m \,\operatorname{Link}(\Sigma_{3},x)}\langle I(m,x)\rangle \tag{111}\]
where on the right-hand side we used the topological invariance of the defect operator \(U_{aE}(\ell,\Sigma_{3})\) to shrink it away after it passes through the charged operator \(I(m,x)\).
In order to couple a background gauge field to this symmetry, we in principle need to couple to a \(\mathbb{Z}_{K_{A}}\) gauge field \(\mathcal{A}_{e}^{(1)}\). However, since \(\mathbb{Z}_{K_{A}}\subset U(1)\) we can treat \(\mathcal{A}_{e}^{(1)}\) as a \(U(1)\) gauge field that has been restricted so that:
\[e^{i\oint\frac{\mathcal{A}_{e}^{(1)}}{2\pi}}=e^{\frac{2\pi in}{K_{A}}}. \tag{112}\]
This has the consequence of imposing the condition
\[K_{A}\frac{\mathcal{A}_{e}^{(1)}}{2\pi}=d\lambda^{(1)}\sim[0]\in H^{1}(M; \mathbb{Z})\, \tag{113}\]
where \([d\lambda^{(1)}]\) is a trivial cohomology class. Additionally, there is a "discrete" version of a \(\mathbb{Z}_{n}\) gauge field which is simply given by37
Footnote 37: This operation is called the \(\mathbb{Z}_{n}\) Bockstein map.
\[\beta\left(\frac{\mathcal{A}^{(1)}}{2\pi}\right)=\frac{\mathcal{A}^{(1)}}{2 \pi}\ \text{mod}_{n}\in H^{2}(M;\mathbb{Z}_{n})\, \tag{114}\]
by which we mean \(\oint\beta\left(\frac{\mathcal{A}^{(1)}}{2\pi}\right)=0,1,...,n-1\). See [1] for more in depth discussion of discrete gauge theories.
There is a mixed 't Hooft anomaly between the 'electric' \(\mathbb{Z}_{K_{A}}^{(0)}\) shift symmetry and the 'magnetic' \(U(1)^{(2)}\) symmetry. To see this, we couple the theory to background gauge fields
\[S=...+\frac{1}{2}\int\left(da-f_{a}\mathcal{A}_{e}^{(1)}\right)\wedge*\left( da-f_{a}\mathcal{A}_{e}^{(1)}\right)+\frac{i}{2\pi f_{a}}\int\mathcal{A}_{m}^{(3)} \wedge\left(da-f_{a}\mathcal{A}_{e}^{(1)}\right) \tag{115}\]
where we added a local counterterm consisting of only background gauge fields to make the action manifestly invariant under electric symmetry. Now we can see that there is a non-trivial 't Hooft anomaly by noting that the action is not invariant under \(U(1)^{(2)}\) transformations:
\[\delta S=\frac{i}{2\pi}\int\lambda_{m}^{(2)}\wedge d\mathcal{A}_{e}^{(1)}\, \qquad\delta\mathcal{A}_{m}^{(3)}=d\lambda_{m}^{(2)}. \tag{116}\]
Alternatively, if we chose instead to omit the counterterm, the magnetic symmetry would be preserved but the action would manifestly break axion shift symmetry.
As discussed in Appendix A, we can identify this anomaly with a 5D TQFT which is
given by
\[S_{\text{inflow}}=\frac{i}{2\pi}\int_{N_{5}}\mathcal{A}_{m}^{(3)}\wedge d\mathcal{ A}_{e}^{(1)}. \tag{102}\]
\(\mathbb{Z}_{K_{A}}^{(1)}\) **1-form \(A\)-electric symmetry:**
The 1-form electric symmetry associated to the \(U(1)_{A}\) gauge field is a bit trickier. The equation of motion for \(A^{(1)}\) is
\[d\underbrace{\left(\frac{i}{g_{A}^{2}}*F_{A}^{(2)}\right)}_{=*J_{2A}^{e}}= \underbrace{-\frac{K_{A}}{4\pi^{2}f_{a}}da\wedge F_{A}^{(2)}}_{=j_{SA}^{e}(a,A ^{(1)})}+\underbrace{-\frac{K_{AB}}{4\pi^{2}f_{a}}da\wedge F_{B}^{(2)}}_{=j_{ SA}^{e}(a,B^{(1)})}. \tag{103}\]
In the absence of coupling terms, the pure Maxwell sector has a \(U(1)_{e}^{(1)}\) 1-form electric symmetry with a current \(*J_{2A}^{e}\). Naively, the interaction terms would appear to break \(U(1)^{(1)}\rightarrow\mathbb{Z}_{K_{1}}^{(1)}\) for \(K_{1}=\text{GCD}(K_{A},K_{AB})\).
However, let us couple the theory to a 2-form \(U(1)\)-gauge field \(\mathcal{B}_{e}^{(2)}\) and restrict to the case where \(K_{AB}=0\):
\[\begin{split} S=&...+\frac{1}{2g^{2}}\int(F_{A}^{(2 )}-\mathcal{B}_{e}^{(2)})\wedge*(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)})\\ &-\frac{iK_{A}}{8\pi^{2}f_{a}}\int a(F_{A}^{(2)}-\mathcal{B}_{e} ^{(2)})\wedge(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)})\,\end{split} \tag{104}\]
Now, due to the fact that
\[\oint\frac{\mathcal{B}_{e}^{(2)}}{2\pi}\in\frac{1}{K}\mathbb{Z}\quad \Longrightarrow\quad\oint\frac{\mathcal{B}_{e}^{(2)}\wedge\mathcal{B}_{e}^{( 2)}}{8\pi^{2}}\in\frac{1}{K^{2}}\mathbb{Z}\, \tag{105}\]
we see that the axion interaction term appears to break \(U(1)^{(1)}\mapsto\mathbb{Z}_{k_{A}}\) where \(K_{A}=k\,k_{A}^{2}\).
However, we can actually extend the electric 1-form global symmetry to include a \(\mathbb{Z}_{K_{A}}^{(1)}\) as follows. Recall that the 2-form axion string symmetry \(U(1)^{(2)}\) also couples to the axion as
\[S=...-\frac{i}{2\pi f_{a}}\int a\,d\mathcal{A}_{m}^{(3)}. \tag{106}\]
If we modify the transformation laws for \(\mathcal{A}_{m}^{(3)}\), we can cancel the fractional contribution38
Footnote 38: Note that here we are taking \(\mathcal{B}_{e}^{(2)}\) to be a \(\mathbb{Z}_{K}\)-valued 2-form gauge field. This can be achieved in analogy with the \(\mathbb{Z}_{K_{a}}\) 1-form gauge field for the \(\mathbb{Z}_{K_{a}}^{(0)}\) 0-form axion shift global symmetry.
\[\delta\mathcal{A}_{m}^{(3)}=d\lambda_{m}^{(2)}-\frac{K_{A}}{4\pi}\left(2\lambda _{e}^{(1)}\wedge\mathcal{B}_{e}^{(2)}+\lambda_{e}^{(1)}\wedge d\lambda_{e}^{( 1)}\right)\,\qquad\delta\mathcal{B}_{e}^{(2)}=d\lambda_{e}^{(1)} \tag{107}\]
so that the associated gauge invariant field strength is
\[\mathcal{G}^{(4)}=d\mathcal{A}_{m}^{(3)}+\frac{K_{A}}{4\pi} \mathcal{B}_{e}^{(2)}\wedge\mathcal{B}_{e}^{(2)}\, \tag{108}\]
and the theory has the coupling
\[\begin{split} S=&...+\frac{1}{2g^{2}}\int(F_{A}^{(2)}- \mathcal{B}_{e}^{(2)})\wedge*(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)})-\frac{i}{2\pi f _{a}}\int a\,\mathcal{G}^{(4)}\\ &-\frac{iK_{A}}{8\pi^{2}f_{a}}\int a(F_{A}^{(2)}-\mathcal{B}_{e}^ {(2)})\wedge(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)})\.\end{split} \tag{109}\]
This modified transformation law (106) is the hallmark of a 3-group global symmetry.
However, now that the 1-form electric symmetry has mixed with \(U(1)^{(2)}\), it also acquires a mixed 't Hooft anomaly with the \(\mathbb{Z}_{K_{a}}^{(0)}\) axion shift symmetry. In order to probe these anomalies, we need to determine how to turn on the background gauge fields for \(\mathbb{Z}_{K_{a}}^{(0)}\) and \(\mathbb{Z}_{K_{A}}^{(1)}\) simultaneously. This requires picking a 5-manifold \(N_{5}\) with \(\partial N_{5}=M_{4}\). In this case we find that
\[S=...+\frac{iK_{A}}{8\pi^{2}}\int_{N_{5}}(da-f_{a}\mathcal{A}_{e}^{(1)})\wedge( F_{A}^{(2)}-\mathcal{B}_{e}^{(2)})\wedge(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)}). \tag{110}\]
The requirement that our action describe a well-defined, local 4d theory then becomes the demand that the theory be independent of the choice of \(N_{5}\). One can explicitly check that the action is not independent of this choice, due to the terms
\[S=...+\frac{iK_{A}}{4\pi^{2}}\int\mathcal{A}_{e}^{(1)}\wedge\mathcal{B}_{e}^{ (2)}\wedge F_{A}^{(2)}. \tag{111}\]
However, we see that we can actually cancel this \(N_{5}\)-dependence by also modifying the transformation law of the \(U(1)^{(1)}\) magnetic 1-form background gauge field:
\[\begin{split}\delta\mathcal{B}_{m}^{(2)}&=d\lambda _{m}^{(1)}-\frac{K_{A}}{2\pi}\left(\lambda_{e}^{(0)}\mathcal{B}_{e}^{(2)}+ \lambda_{e}^{(1)}\wedge\mathcal{A}_{e}^{(1)}+\lambda_{e}^{(0)}d\lambda_{e}^{( 1)}\right)\,\\ \delta\mathcal{B}_{e}^{(2)}&=d\lambda_{e}^{(1)}\,\quad \delta\mathcal{A}_{e}^{(1)}=d\lambda_{e}^{(0)}\,\end{split} \tag{112}\]
so that the gauge invariant field \(U(1)^{(1)}\) field strength is given by
\[\mathcal{H}^{(3)}=d\mathcal{B}_{m}^{(2)}+\frac{K_{A}}{2\pi}\mathcal{A}_{e}^{( 1)}\wedge\mathcal{B}_{m}^{(2)}\, \tag{113}\]
and the \(U(1)^{(1)}\) coupling now appears as
\[S=...+\frac{i}{2\pi}\int A^{(1)}\wedge\mathcal{H}^{(3)}. \tag{114}\]
This is the hallmark of a 2-group global symmetry which is then interlaced with the 3-group global symmetry indicated by the mixing of \(\mathbb{Z}_{K_{A}}^{(1)}\) with \(U(1)^{(2)}\).
\(\mathbb{Z}_{n}^{(1)}\) **1-form \(B\)-electric symmetry:**
Now let us turn to the symmetries of the \(\mathbb{Z}_{n}\) BF theory. The equation of motion for \(B^{(1)}\) is given by
\[d\underbrace{\frac{i}{g_{B}^{2}}*F_{B}^{(2)}}_{=*J_{2B}^{e}}=\underbrace{\frac{n}{2\pi}H^{(3)}}_{=j_{3B}^{e}(B^{(2)})} \tag{102}\]
Here we see that the BF source term \(j_{3B}^{e}(B^{(2)})\) breaks \(U(1)_{e}^{(1)}\) down to \(\mathbb{Z}_{n}^{(1)}\). Because \(F_{B}^{(2)}\) transforms electrically under \(\mathbb{Z}_{n}^{(1)}\), we can couple the theory to a \(\mathbb{Z}_{n}^{(1)}\) background gauge field by
\[S=...+\frac{in}{2\pi}\int(F_{B}^{(2)}-\mathcal{C}_{e}^{(2)})\wedge B^{(2)}. \tag{103}\]
\(\mathbb{Z}_{n}^{(2)}\) **2-form BF string symmetry:**
Similarly, the equation of motion for \(B^{(2)}\) is the same as in BF theory
\[d\underbrace{\frac{i}{g_{H}^{2}}*H^{(3)}}_{=*J_{3H}^{e}}=\underbrace{-\frac{ in}{2\pi}F_{B}^{(2)}}_{=j_{2}^{\text{string}}}. \tag{104}\]
Here, the source term \(j_{2}^{\text{string}}\) breaks \(U(1)^{(2)}\) down to \(\mathbb{Z}_{n}^{(2)}\). In the dual picture, \(*H^{(3)}\sim d\varphi\), and the above equation shows how the usual Bianchi identity \(d^{2}\varphi=0\) is violated by the presence of a string configuration, a familiar story from the Abelian Higgs model.
Again, we can couple this symmetry to a \(\mathbb{Z}_{n}\) 3-form background gauge field \(\mathcal{D}_{e}^{(3)}\) as
\[S=\frac{in}{2\pi}\int B^{(1)}\wedge(dB^{(2)}-\mathcal{D}_{e}^{(3)}). \tag{105}\]
Note that this makes it clear that there is no local way to turn on both \(\mathcal{C}_{e}^{(2)}\) and \(\mathcal{D}_{e}^{(3)}\) without the action being non-invariant under \(\mathbb{Z}_{n}^{(1)}\times\mathbb{Z}_{n}^{(2)}\) gauge transformations. This is indicative of a mixed 't Hooft anomaly which we will now discuss.
#### c.1.1 Anomalies
Now we are able to discuss the 't Hooft anomalies of the decoupled theory. Here we will turn on all of the background fields except for \(\mathbb{Z}_{n}^{(2)}\) (recall that we can only turn on a background gauge field for \(\mathbb{Z}_{n}^{(1)}\) or \(\mathbb{Z}_{n}^{(2)}\)), and analyze the variation of the partition function under the background gauge transformations. Here we see that there are terms in the action that are explicitly not invariant under the electric symmetries:
\[\begin{split} S=&...-\frac{i}{2\pi f_{a}}\int_{M_{4 }}a\,\mathcal{G}^{(4)}+\frac{i}{2\pi}\int_{M_{4}}A^{(1)}\wedge\mathcal{H}^{(3 )}\\ &+\frac{in}{2\pi}\int B^{(2)}\wedge(F_{B}^{(2)}-\mathcal{C}_{e}^{ (2)})-\frac{in}{2\pi}\int B^{(1)}\wedge\mathcal{D}_{e}^{(3)}\,\end{split} \tag{106}\]
which leads to the anomalous variation
\[\delta S=...-\frac{i}{2\pi}\int\lambda^{(0)}_{e}{\cal G}^{(4)}+\frac{i}{2\pi}\int \lambda^{(1)}_{e}\wedge{\cal H}^{(3)}+\frac{in}{2\pi}\int\left(\tilde{\lambda}^ {(2)}_{e}\wedge{\cal C}^{(2)}_{e}-\tilde{\lambda}^{(1)}_{e}\wedge{\cal D}^{(3 )}_{e}\right)\, \tag{111}\]
where here
\[\delta a=f_{a}\lambda^{(0)}_{e}\ \,\quad\delta A^{(1)}=\lambda^{(1)}_{e}\,\quad\delta B^{(2)}=\tilde{\lambda}^{(2)}_{e}\,\quad\delta{\cal D}^{(3)}_{e}=d\tilde{\lambda}^{(2)}_{e}\, \tag{112}\] \[\delta B^{(1)}=\tilde{\lambda}^{(1)}_{e}\,\quad\delta{\cal C}^{(2)}_{e}=d\tilde{\lambda}^{(1)}_{e}\.\]
These anomalies can be described by the 5d TQFT:
\[S_{\rm inflow}=\frac{i}{2\pi}\int\left({\cal A}^{(1)}_{e}\wedge{\cal G}^{(4)}+{ \cal B}^{(2)}_{e}\wedge{\cal H}^{(3)}+{\cal D}^{(3)}_{e}\wedge{\cal C}^{(2)}_{ e}\right). \tag{113}\]
### Symmetries with TQFT Coupling
Now let us consider the effect of turning on the coupling between the axion-Maxwell theory and the \(\mathbb{Z}_{n}\) BF gauge theory, i.e. \(K_{AB},K_{B}\neq 0\). This has many effects on the symmetry structure.
However, there are several symmetries that are not affected by turning on the coupling:
* \(U(1)^{(2)}\) 2-form axion string symmetry: Adding the new axionic couplings to the theory does not affect the symmetry structure of \(U(1)^{(2)}\). This is evident from the fact that the normal axion coupling does not affect the winding 2-form symmetry of a \(U(1)\)-valued scalar field.
* \(U(1)^{(1)}_{mA}\) 1-form \(A\)-magnetic symmetry: Similarly, the new axionic couplings do not affect the magnetic 1-form symmetry of the \(U(1)_{A}\) gauge field.
* \(\mathbb{Z}_{n}^{(2)}\) 2-form BF string symmetry: Additionally, the new axionic couplings do not affect the 2-form BF string symmetry.
The symmetries that are affected by turning on \(K_{AB},K_{B}\neq 0\) are the 0-form axion shift symmetry and the 1-form electric symmetries, which we now discuss in turn.
\(\mathbb{Z}_{K_{a}}^{(0)}\) **0-form axion shift symmetry:**
Now the equation of motion of the axion is
\[d\left(if_{a}*da\right)=\underbrace{\frac{K_{A}}{8\pi^{2}}F_{A}^{(2)}\wedge F _{A}^{(2)}}_{=j_{4}^{\rm inst}(A)}+\underbrace{\frac{K_{AB}}{4\pi^{2}}F_{A}^ {(2)}\wedge F_{B}^{(2)}}_{=j_{4}^{\rm inst}(A,B)}+\underbrace{\frac{K_{B}}{8 \pi^{2}}F_{B}^{(2)}\wedge F_{B}^{(2)}}_{=j_{4}^{\rm inst}(B)}. \tag{114}\]
We now see that there are additional source terms that can further break the \(\mathbb{Z}_{K_{A}}^{(0)}\) axion shift symmetry down to the subgroup \(\mathbb{Z}_{K_{A}}^{(0)}\mapsto\mathbb{Z}_{K_{a}}^{(0)}\) where \(K_{a}=\text{GCD}(K_{A},K_{AB},K_{B})\).
**1-form \(A\)-electric and \(B\)-electric symmetry**
Now let us consider the effect of adding the coupling \(K_{AB}\) on the 1-form electric symmetry for \(U(1)_{A}\) and the 1-form electric symmetry for \(\mathbb{Z}_{n}\). As we saw, the axion coupling for generic \(K_{A}\) broke the free-theory \(U(1)^{(1)}\) to a discrete \(\mathbb{Z}_{K_{A}}^{(1)}\) that is part of a 3-group.
When analyzing the role of the \(\mathbb{Z}_{n}^{(1)}\) BF 1-form electric symmetry, we can effectively treat \(F_{B}^{(2)}\) as the field strength for a \(U(1)\) gauge symmetry and \(\mathcal{C}_{e}^{(2)}\) as a \(\mathbb{Z}_{n}^{(1)}\subset U(1)_{B}^{(1)}\) 2-form background gauge field. The reason is that \(F_{B}^{(2)}\) does indeed satisfy the Dirac quantization condition, and hence there is no appreciable difference for the purpose of analyzing symmetries.
Now the symmetry structure of the theory follows straightforwardly from the analysis of the 3-group symmetry structure of axion-Maxwell theory. In particular, the \(K_{A}\) and \(K_{B}\) couplings break \(U(1)_{A}^{(1)}\mapsto\mathbb{Z}_{K_{A}}^{(1)}\) and \(\mathbb{Z}_{n}^{(1)}\mapsto\mathbb{Z}_{k_{B}}\), where \(k_{B}=\text{GCD}(K_{B},n)\).
The mixed axion coupling term is a bit more tricky. Let us denote \(L=\text{LCM}(K_{A},k_{B})\). The \(\mathbb{Z}_{K_{A}}^{(1)},\mathbb{Z}_{k_{B}}^{(1)}\) embed into a \(\mathbb{Z}_{L}^{(1)}\) enveloping group. We then see that the \(K_{AB}\) coupling now breaks \(\mathbb{Z}_{L}^{(1)}\mapsto\mathbb{Z}_{M}\) where \(M=\text{GCD}(L,K_{AB})\). Since \(L\) is a least common multiple, breaking \(\mathbb{Z}_{L}\rightarrow\mathbb{Z}_{M}\) uniquely determines the unbroken \(\mathbb{Z}_{\kappa_{A}}^{(1)}\subset\mathbb{Z}_{K_{A}}^{(1)}\) and \(\mathbb{Z}_{\kappa_{B}}^{(1)}\subset\mathbb{Z}_{k_{B}}^{(1)}\) as
\[\begin{split}\kappa_{A}&:=\text{GCD}(K_{A},M)= \text{GCD}\Big{(}K_{A},\,\text{GCD}\big{(}\text{LCM}(\text{GCD}(K_{B},n),K_{A} )\,,K_{AB}\big{)}\,\Big{)}\,\\ \kappa_{B}&:=\text{GCD}(k_{B},M)=\text{GCD}\Big{(} \text{GCD}(K_{B},n)\,,\,\text{GCD}\big{(}\text{LCM}(\text{GCD}(K_{B},n),K_{A} )\,,K_{AB}\big{)}\,\Big{)}\.\end{split} \tag{100}\]
In our UV complete model in Section 2.2, \(K_{A}=1,\ K_{AB}=q\), and \(K_{B}=q^{2}\), so that \(\kappa_{A}=1\) and \(\kappa_{B}=q\), leaving only a \(\mathbb{Z}_{q}^{(1)}\) 1-form global symmetry.
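Since the chain of GCDs and LCMs above is easy to misread, a short script evaluating it can be helpful. The snippet below simply implements the definitions of \(k_{B}\), \(L\), \(M\), \(\kappa_{A}\), \(\kappa_{B}\) (and \(K_{a}\)) given above; the particular values \(q=3\) and \(n=q^{2}\) are an assumption chosen purely for illustration, since \(n\) is not fixed in this appendix.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def unbroken_symmetries(K_A, K_AB, K_B, n):
    """Evaluate the residual discrete symmetries defined in the text above."""
    k_B = gcd(K_B, n)          # naive breaking of Z_n^(1) by the K_B coupling
    L = lcm(K_A, k_B)          # enveloping Z_L^(1) containing both 1-form factors
    M = gcd(L, K_AB)           # further breaking by the mixed K_AB coupling
    kappa_A = gcd(K_A, M)      # unbroken subgroup of Z_{K_A}^(1)
    kappa_B = gcd(k_B, M)      # unbroken subgroup of Z_{k_B}^(1)
    K_a = gcd(gcd(K_A, K_AB), K_B)  # residual 0-form axion shift symmetry Z_{K_a}
    return kappa_A, kappa_B, K_a

# Example: K_A = 1, K_AB = q, K_B = q^2 as in the UV model quoted above.
# Taking n = q^2 is an assumption made here only for illustration.
q = 3
print(unbroken_symmetries(K_A=1, K_AB=q, K_B=q**2, n=q**2))
# -> (1, 3, 1), i.e. kappa_A = 1 and kappa_B = q, as stated in the text.
```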
However, when we define the 5d gauge invariant axionic coupling, we also need to cancel the 5-dimensional dependence of the terms
\[\begin{split} S=&...+\frac{iK_{AB}}{4\pi^{2}}\int_ {N_{5}}\mathcal{A}_{e}^{(1)}\wedge\Big{(}F_{B}^{(2)}\wedge\mathcal{B}_{e}^{(2 )}+F_{A}^{(2)}\wedge\mathcal{C}_{e}^{(2)}\Big{)}\\ &+\frac{iK_{B}}{4\pi^{2}}\int_{N_{5}}\mathcal{A}_{e}^{(1)}\wedge F _{B}^{(2)}\wedge\mathcal{C}_{e}^{(2)}\.\end{split} \tag{101}\]
These additionally need to be cancelled by modifying the transformations of \(\mathcal{B}_{m}^{(2)}\) and \(\mathcal{D}_{e}^{(3)}\), which is accomplished by a straightforward generalization of eq. (100).
Now we see that the coupled theory has a 3-group symmetry involving the electric symmetries \(\mathbb{Z}_{K_{a}}^{(0)},\mathbb{Z}_{\kappa_{A}}^{(1)},\mathbb{Z}_{\kappa_{B}}^{(1)},\mathbb{Z}_{n}^{(2)}\) and the magnetic symmetries \(U(1)^{(2)},U(1)^{(1)}\), where the transformation rules are
\[\begin{split}\delta\mathcal{A}_{m}^{(3)}=& d\lambda_{m}^{(2)}-\frac{K_{A}}{4\pi}\left(2 \lambda_{e}^{(1)}\wedge\mathcal{B}_{e}^{(2)}+\lambda_{e}^{(1)}\wedge d\lambda_ {e}^{(1)}\right)\\ &-\frac{K_{AB}}{2\pi}\left(\lambda_{e}^{(1)}\wedge\mathcal{C}_{e }^{(2)}+\tilde{\lambda}_{e}^{(1)}\wedge\mathcal{B}_{e}^{(2)}+\tilde{\lambda}_ {e}^{(1)}\wedge d\lambda_{e}^{(1)}\right)\\ &-\frac{K_{B}}{4\pi}\left(2\tilde{\lambda}_{e}^{(1)}\wedge \mathcal{C}_{e}^{(2)}+\tilde{\lambda}_{e}^{(1)}\wedge d\tilde{\lambda}_{e}^{( 1)}\right)\\ \delta\mathcal{B}_{m}^{(2)}=& d\lambda_{m}^{(1)}- \frac{K_{A}}{2\pi}\left(\lambda_{e}^{(0)}\mathcal{B}_{e}^{(2)}+\lambda_{e}^{( 1)}\wedge\mathcal{A}_{e}^{(1)}+\lambda_{e}^{(0)}d\lambda_{e}^{(1)}\right)\\ &-\frac{K_{A,B}}{2\pi}\left(\lambda_{e}^{(0)}\mathcal{C}_{e}^{(2 )}+\tilde{\lambda}_{e}^{(1)}\wedge\mathcal{A}_{e}^{(1)}+\lambda_{e}^{(0)}d \tilde{\lambda}_{e}^{(1)}\right)\\ \delta\mathcal{D}_{e}^{(3)}=& d\tilde{\lambda}_{e}^ {(2)}-\frac{K_{B}}{2\pi}\left(d\lambda_{e}^{(0)}\wedge\mathcal{C}_{e}^{(2)}+d \tilde{\lambda}_{e}^{(1)}\wedge\mathcal{A}_{e}^{(1)}+d\lambda_{e}^{(0)}\wedge d \tilde{\lambda}_{e}^{(1)}\right)\\ &-\frac{K_{A,B}}{2\pi}\left(d\lambda_{e}^{(0)}\wedge\mathcal{B}_{ e}^{(2)}+d\lambda_{e}^{(1)}\wedge\mathcal{A}_{e}^{(1)}+d\lambda_{e}^{(0)} \wedge d\lambda_{e}^{(1)}\right)\\ \delta\mathcal{A}_{e}^{(1)}=& d\lambda_{e}^{(0)}\,\qquad \delta\mathcal{B}_{e}^{(2)}=d\lambda_{e}^{(1)}\,\qquad\delta\mathcal{C}_{e}^{(2)}=d\tilde{ \lambda}_{2}^{(1)}\end{split} \tag{108}\]
so that the gauge invariant field strengths are given by
\[\begin{split}\mathcal{G}^{(4)}=& d\mathcal{A}_{m}^{(3)} +\frac{K_{A}}{4\pi}\mathcal{B}_{e}^{(2)}\wedge\mathcal{B}_{e}^{(2)}+\frac{K_{ AB}}{2\pi}\mathcal{B}_{e}^{(2)}\wedge\mathcal{C}_{e}^{(2)}+\frac{K_{B}}{4\pi} \mathcal{C}_{e}^{(2)}\wedge\mathcal{C}_{e}^{(2)}\,\\ \mathcal{H}^{(3)}=& d\mathcal{B}_{m}^{(2)}+\frac{K_{A }}{2\pi}\mathcal{A}_{e}^{(1)}\wedge\mathcal{B}_{e}^{(2)}+\frac{K_{AB}}{2\pi} \mathcal{A}_{e}^{(1)}\wedge\mathcal{C}_{e}^{(2)}\end{split} \tag{109}\]
while \(\mathcal{D}_{e}^{(3)}\) is now replaced by its \(\mathbb{Z}_{K_{a}}^{(0)}\times\mathbb{Z}_{\kappa_{B}}^{(1)}\) gauge-invariant form
\[\mathcal{D}_{e}^{(3)}\rightarrow\widetilde{\mathcal{D}}_{e}^{(3)}=\mathcal{D}_{e}^{(3)}+\frac{K_{AB}}{2\pi}\mathcal{A}_{e}^{(1)}\wedge\mathcal{B}_{e}^{(2)}+\frac{K_{B}}{2\pi}\mathcal{A}_{e}^{(1)}\wedge\mathcal{C}_{e}^{(2)}. \tag{110}\]
Here we can clearly see that the effect of coupling the axion-Maxwell theory to the BF TQFT is that the 3-group structure has been dramatically modified. In summary, the effect of adding the coupling is the following:
* The axion shift symmetry is reduced \(\mathbb{Z}_{K_{A}}^{(0)}\mapsto\mathbb{Z}_{K_{a}}^{(0)}\) for \(K_{a}=\text{GCD}(K_{A},K_{B},K_{AB})\),
* The 1-form \(U(1)_{A}\) electric symmetry is reduced \(\mathbb{Z}_{K_{A}}^{(1)}\mapsto\mathbb{Z}_{\kappa_{A}}^{(1)}\) where \(\kappa_{A}\) is defined in eq. (106),
* The 1-form BF electric symmetry is reduced \(\mathbb{Z}_{n}^{(1)}\mapsto\mathbb{Z}_{\kappa_{B}}^{(1)}\) where \(\kappa_{B}\) is defined in eq. (106),
* The 1-form BF electric symmetry now participates in a 3-group that mixes with \(U(1)^{(2)},U(1)^{(1)}\) magnetic symmetries as well as the electric symmetries \(\mathbb{Z}_{K_{a}}^{(0)},\mathbb{Z}_{\kappa_{A}}^{(1)},\mathbb{Z}_{n}^{(2)}\).
#### c.2.1 Anomalies
The full action with all background gauge fields turned on is given by
\[S=\frac{1}{2}\int(da-f_{a}\mathcal{A}_{e}^{(1)})\wedge*(da-f_{a} \mathcal{A}_{e}^{(1)})+\frac{1}{2g}\int(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)}) \wedge*(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)})\] \[+\frac{in}{2\pi}\int B^{(2)}\wedge(F_{B}^{(2)}-\mathcal{C}_{e}^{( 2)})-\frac{in}{2\pi}\int B^{(1)}\wedge\widetilde{\mathcal{D}}^{(3)}\] \[-\frac{i}{2\pi f_{a}}\int a\,\mathcal{G}^{(4)}+\frac{i}{2\pi}\int A ^{(1)}\wedge\mathcal{H}^{(3)}-\frac{iK_{A}}{8\pi^{2}f_{a}}\int a(F_{A}^{(2)}- \mathcal{B}_{e}^{(2)})\wedge(F_{A}^{(2)}-\mathcal{B}_{e}^{(2)})\] \[-\frac{iK_{AB}}{4\pi^{2}f_{a}}\int a(F_{A}^{(2)}-\mathcal{B}_{e}^ {(2)})\wedge(F_{B}^{(2)}-\mathcal{C}_{e}^{(2)})-\frac{iK_{B}}{8\pi^{2}f_{a}} \int a(F_{B}^{(2)}-\mathcal{C}_{e}^{(2)})\wedge(F_{B}^{(2)}-\mathcal{C}_{e}^{( 2)}). \tag{113}\]
Similarly, we can straightforwardly write down the anomalies of the theory by looking at the non-invariance of the above action under the electric global symmetries:
\[\delta S=...-\frac{i}{2\pi}\int\lambda_{e}^{(0)}\mathcal{G}^{(4)}+\frac{i}{2 \pi}\int\lambda_{e}^{(1)}\wedge\mathcal{H}^{(3)}+\frac{in}{2\pi}\int\left( \tilde{\lambda}_{e}^{(2)}\wedge\mathcal{C}_{e}^{(2)}-\tilde{\lambda}_{e}^{(1 )}\wedge\widetilde{\mathcal{D}}_{e}^{(3)}\right)\, \tag{114}\]
where here
\[\begin{split}&\delta a=f_{a}\lambda_{e}^{(0)}\ \,\quad\delta A^{(1)}=\lambda_{e}^{(1)}\,\quad\delta B^{(2)}=\tilde{\lambda}_{e}^{(2)}\,\quad\delta\mathcal{D}_{e}^{(3)}=d\tilde{\lambda}_{e}^{(2)}\,\\ &\delta B^{(1)}=\tilde{\lambda}_{e}^{(1)}\,\quad\delta\mathcal{C}_{e}^{(2)}=d\tilde{\lambda}_{e}^{(1)}\.\end{split} \tag{115}\]
These anomalies can be described by the 5d TQFT:
\[S_{\text{inflow}}=\frac{i}{2\pi}\int\left(\mathcal{A}_{e}^{(1)}\wedge\mathcal{ G}^{(4)}+\mathcal{B}_{e}^{(2)}\wedge\mathcal{H}^{(3)}+\mathcal{D}_{e}^{(3)} \wedge\mathcal{C}_{e}^{(2)}\right). \tag{116}\]
The full set of background gauge fields and their associated symmetries are summarized in Table 7. 39
Footnote 39: As in Table 6, by the conserved current generating a discrete symmetry, we mean the current of an associated \(U(1)\) symmetry that is broken down to the appropriate discrete subgroup.
\begin{table}
\begin{tabular}{|c|c|c||c|c|c|c|} \hline \multicolumn{3}{|c||}{Electric Symmetries} & \multicolumn{3}{|c|}{Magnetic Symmetries} \\ \hline \hline
0-form axion shift & \(\mathbb{Z}_{K_{a}}^{(0)}\) & \(*J_{1a}^{e}\) & \(\mathcal{A}_{e}^{(1)}\) & 2-form axion string & \(U(1)^{(2)}\) & \(*J_{3a}^{m}\) & \(\mathcal{A}_{m}^{(3)}\) \\ \hline
1-form \(A\)-electric & \(\mathbb{Z}_{\kappa_{A}}^{(1)}\) & \(*J_{2A}^{e}\) & \(\mathcal{B}_{e}^{(2)}\) & 1-form \(A\)-magnetic & \(U(1)^{(1)}\) & \(*J_{2A}^{m}\) & \(\mathcal{B}_{m}^{(2)}\) \\ \hline
1-form \(B\)-electric & \(\mathbb{Z}_{\kappa_{B}}^{(1)}\) & \(*J_{2B}^{e}\) & \(\mathcal{C}_{e}^{(2)}\) & & & & \\ \hline
2-form BF string & \(\mathbb{Z}_{n}^{(2)}\) & \(*J_{3H}^{e}\) & \(\widetilde{\mathcal{D}}_{e}^{(3)}\) & & & & \\ \hline \hline \multicolumn{3}{|c||}{Field Strength of Magnetic Symmetries} \\ \hline \hline
2-form axion string & \(\mathcal{G}^{(4)}=d\mathcal{A}_{m}^{(3)}+\cdots\) & 1-form \(A\)-magnetic & \(\mathcal{H}^{(3)}=d\mathcal{B}_{m}^{(2)}+\cdots\) \\ \hline \end{tabular}
\end{table}
Table 7: List of generalized symmetries, their currents, and background gauge fields in the full coupled axion-Maxwell and \(\mathbb{Z}_{n}\) BF theory. Additionally, \(\widetilde{\mathcal{D}}_{e}^{(3)}\) is defined in eq. (113) and \(\mathcal{G}^{(4)},\mathcal{H}^{(3)}\) are defined in eq. (113), indicating that the 0- and 1-form electric symmetries all participate in 3-groups, mixing into the magnetic symmetries and 2-form BF string symmetry.
#### c.2.2 Constraints from Symmetry
As discussed in [5], one of the physical consequences of having an EFT with a 3-group global symmetry is that any UV completion that gives rise to it must satisfy an inequality among the scales at which the different components of the 3-group emerge. In particular, since turning on backgrounds for the 0- and 1-form component symmetries necessarily turns on the background gauge field of the 2-form \(U(1)^{(2)}\) symmetry, we must have the inequality
\[E_{\text{2-form}}\gtrsim E_{\text{1-form}}. \tag{112}\]
In terms of physical quantities, this is given by
\[T_{\rm string}\gtrsim m_{\psi}^{2}\, \tag{113}\]
where \(T_{\rm string}\) is the tension of the axion string and \(m_{\psi}\) is the mass of the lightest charged particle, which must be charged under both \(U(1)_{A},\mathbb{Z}_{n}\) (or \(U(1)_{A},U(1)_{B}\) where \(U(1)_{B}\) breaks to \(\mathbb{Z}_{n}\) at a scale \(E_{\mathbb{Z}_{n}}\gtrsim m_{\psi}\)) that breaks the 1-form electric symmetries. See [5] for further discussion.
#### c.2.3 Other TQFT Couplings via Discrete Gauging
As discussed in Section 3.4, we can get many new couplings to TQFTs by gauging discrete subgroups of the 3-group global symmetry. Due to the similarity of the structure of the 3-group, we find that most of the possible discrete gaugings follow straightforwardly. The main difference is that now we can additionally gauge 1.) \(\mathbb{Z}_{n}^{(2)}\), 2.) \(\mathbb{Z}_{\kappa_{B}}^{(1)}\), and 3.) \(\mathbb{Z}_{\kappa_{A}}^{(1)}\times\mathbb{Z}_{\kappa_{B}}^{(1)}\). Case 1.) is straightforward and breaks the \(\mathbb{Z}_{\kappa_{B}}^{(1)}\) 1-form symmetry due to their mixed anomaly.
Case 2.) is similar to the case of gauging just \(\mathbb{Z}_{\kappa_{A}}^{(1)}\) and requires additionally gauging a discrete subgroup of \(U(1)^{(2)}\). This additionally breaks \(\mathbb{Z}_{K_{a}}^{(0)}\) due to an ABJ anomaly and (at least partially) breaks \(\mathbb{Z}_{n}^{(2)}\).
Case 3.) combines the effects of gauging \(\mathbb{Z}_{\kappa_{A}}^{(1)}\) and \(\mathbb{Z}_{\kappa_{B}}^{(1)}\). It requires gauging a discrete subgroup of \(U(1)^{(2)}\), breaks \(\mathbb{Z}_{K_{a}}^{(0)}\), extends the periodicity of \(U(1)^{(1)}\) and (at least partially) breaks \(\mathbb{Z}_{n}^{(2)}\).
It would be interesting to study the theories produced by these discrete gaugings in more detail.
|
2306.03951 | Reinforcement Learning-Based Control of CrazyFlie 2.X Quadrotor | The objective of the project is to explore synergies between classical
control algorithms such as PID and contemporary reinforcement learning
algorithms to come up with a pragmatic control mechanism to control the
CrazyFlie 2.X quadrotor. The primary objective would be performing PID tuning
using reinforcement learning strategies. The secondary objective is to leverage
the learnings from the first task to implement control for navigation by
integrating with the lighthouse positioning system. Two approaches are
considered for navigation, a discrete navigation problem using Deep Q-Learning
with finite predefined motion primitives, and deep reinforcement learning for a
continuous navigation approach. Simulations for RL training will be performed
on gym-pybullet-drones, an open-source gym-based environment for reinforcement
learning, and the RL implementations are provided by stable-baselines3 | Arshad Javeed, Valentín López Jiménez | 2023-06-06T18:29:10Z | http://arxiv.org/abs/2306.03951v2 | # Reinforcement Learning-Based Control of CrazyFlie 2.X Quadrotor
###### Abstract
The objective of the project is to explore synergies between classical control algorithms such as PID and contemporary reinforcement learning algorithms to come up with a pragmatic control mechanism to control the CrazyFlie 2.X quadrotor. The primary objective would be performing PID tuning using reinforcement learning strategies. The secondary objective is to leverage the learnings from the first task to implement control for navigation by integrating with the lighthouse positioning system. Two approaches are considered for navigation: a discrete navigation problem using Deep Q-Learning with finite predefined motion primitives, and deep reinforcement learning for a continuous navigation approach. Simulations for RL training will be performed on gym-pybullet-drones, an open-source gym-based environment for reinforcement learning, and the RL implementations are provided by stable-baselines3.
## 1 Introduction
Modeling a quadrotor such as the CrazyFlie (figure 1) is not a straightforward task due to the non-linearities involved. Often, the system is linearized around a specific stationary point, but this is task-specific and can be daunting. Instead, we focus on a gray-box approach, where we simulate the system using a physics engine [6] to circumvent physical modeling. In this paper, the synergy between classical PID control and reinforcement learning is explored to perform a navigation task on the CrazyFlie 2.X quadrotor. Purely classical or purely reinforcement learning approaches are less practical in terms of convergence and interpretability: a pure RL approach demands a larger network architecture and long training hours to reach convergence, while an end-to-end classical controller [3][4] can be complex to implement because the design specifications are not trivial.
In this work, we first focus on PID tuning, where the parameters of the attitude and position Mellinger controller [5] are approximated using the Twin Delayed Deep Deterministic Policy Gradient (TD3) [1] algorithm and compared against the quadcopter's original values. Next, using the obtained PID parameters, a closed-loop controller is implemented in the simulation environment, where the quadrotor's task is to navigate to a given point in a continuous environment. The RL agent is responsible for the high-level tasks and the PID loop executes the actions. Finally, differences in robustness are explored by training with and without noise disturbances for the hovering navigation task.
During the implementation of the RL tasks, the algorithm selected is TD3, from stable-baselines3 [7], considering its simplicity and robustness. We expect other more advanced actor-critic models to perform similarly.
## 2 PID Tuning
The objective of the task is for the reinforcement learning agent to determine the PID parameters of Mellinger's PID architecture, which consists of 18 parameters divided into position and attitude controllers (table 5). This architecture is already implemented in CrazyFlie's firmware, where the user writes the parameters directly to its memory.
The following subsections describe the process of obtaining and validating the parameters, from agent training to hardware testing.
### Agent training
The agent's PID parameters are trained over a fixed number of episodes, starting from random parameters and converging towards an optimal solution. The task's main objective is to successfully complete a trajectory: a circle for training and a helix for testing.
Figure 1: CrazyFlie 2.1

Using TD3 as the reinforcement learning algorithm, convergence has been achieved in 1000 time steps with a small network architecture for the Actor [50, 50] and Critic [50, 50]. The observation space is defined by the position \(XYZ\) and orientation \(RPY\). The reward function is computed as
\[r=-(s^{\prime}-t)^{2}\,\]
where \(t\) is the next position in the target trajectory and \(s^{\prime}\) is the current state. See table 1 for detailed information.
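The tuning loop just described can be viewed as a single-step episodic environment: each episode evaluates one candidate set of gains on the reference trajectory and scores it with the negative squared tracking error. The sketch below is a minimal, hedged illustration of that formulation in Python (gymnasium); the `fly_trajectory` surrogate, the gain bounds, and the observation layout are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


def fly_trajectory(gains, waypoints):
    """Placeholder for the simulated flight: in practice this would write the
    gains to the simulated Mellinger controller and fly the reference path
    (e.g., in gym-pybullet-drones). A trivial surrogate (perfect tracking plus
    gain-dependent noise) is used here only so the sketch runs end to end."""
    spread = 0.01 * (1.0 + np.abs(gains).mean())
    return waypoints + np.random.normal(0.0, spread, size=waypoints.shape)


class PIDTuningEnv(gym.Env):
    """Single-step episodes: the action is a candidate set of 18 PID gains and
    the reward is the negative squared tracking error over a reference circle."""

    def __init__(self):
        super().__init__()
        # Bounds on the 18 Mellinger gains are illustrative, not firmware defaults.
        self.action_space = spaces.Box(low=0.0, high=10.0, shape=(18,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(3,), dtype=np.float32)
        theta = np.linspace(0.0, 2.0 * np.pi, 100)
        # Reference circle; the 0.3 m radius and 0.5 m height are illustrative
        # values chosen to mirror the helix parameters listed in Table 1.
        self.waypoints = np.stack(
            [0.3 * np.cos(theta), 0.3 * np.sin(theta), 0.5 * np.ones_like(theta)], axis=1)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        return self.waypoints[0].astype(np.float32), {}

    def step(self, gains):
        visited = fly_trajectory(gains, self.waypoints)
        reward = -float(np.sum((visited - self.waypoints) ** 2))  # -(s' - t)^2 over the path
        obs = visited[-1].astype(np.float32)
        return obs, reward, True, False, {}  # one evaluation per episode
```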
### Hardware implementation
The same helix trajectory implemented in simulation is used in hardware, applying a constant velocity. The Crazyflie's positioning reference is obtained using one of two methods, a relative one (Flow Deck) or an absolute one (Lighthouse). The absolute positioning system brings accuracy and stability but depends on external modules in a fixed environment; the relative one offers mobility without any extra tracking devices, but adds drift and instability to the actions.
## 3 Navigation
The aim of the navigation task is to train a reinforcement-learning policy that learns to navigate the CrazyFlie 2.X quadrotor given the destination coordinates in the specified environment. In contrast to the previous task, where we fed in a sequence/trajectory, here the RL policy is expected to learn to generate the trajectory (actions) on its own. The idea is to formulate a navigation environment, train the model in the simulation testbed, and then export the model for evaluation on real hardware.
To start things off, we focus on the relatively simple task of hovering the CF2.X quadrotor to get a sense of convergence. Although hovering is something that can be accomplished by classical PID (as done in the previous task), the idea here is to gradually introduce complexities in the environment where a simple PID control would fall short: for instance, introducing a wall that must be maneuvered around, or a dynamic or stochastic environment where classical path planning algorithms like Dijkstra's or A* can have a hard time accommodating the dynamics. It is also worth noting that the phase space for a system like the CF2.X has 12 states, which can further exacerbate the task when employing such algorithms.
### Environment
The objective is simple: the goal for the reinforcement-learning agent (CF2.X) is to move to the destination specified by a set of coordinates \([x_{t},y_{t},z_{t}]\), starting from an initial state \(S_{0}\). The state space/observation space consists of the coordinates, orientation, and linear and angular velocities of the quadrotor, so the state vector comprises 12 state variables, \(S=[x,y,z,r,p,y,v_{x},v_{y},v_{z},w_{x},w_{y},w_{z}]\). Given a state \(S\), the possible actions constitute moving within a 3D cube \(|\Delta x|\leq 1,|\Delta y|\leq 1,|\Delta z|\leq 1\) (a continuous action space). There are several ways of executing the action: i. a pure RL approach, where the policy function outputs the low-level control signals, i.e. the 4 motor RPMs; ii. an open-loop control, where a controller is used to compute the control signals (motor RPMs) needed to execute the action, and the control signals are applied for a fixed number of iterations; iii. a closed-loop control, where the RL policy is responsible for predicting the optimal high-level actions \((\Delta x,\Delta y,\Delta z)\) and the actions are executed by the trusted PID controller, relieving the RL policy from having to learn granular controls. Approaches i and ii are supported by gym-pybullet-drones out of the box, while approach iii, proposed as part of the project, is a custom implementation and was found to outperform the others in terms of convergence and expected reward.
The RL agent was trained using the Twin Delayed Deep Deterministic Policy Gradient (TD3) implementation from stable_baselines3. The actor is responsible for predicting an optimal action (\(a\)) given a state (\(S\)), \(\pi:S\to a\), and the critic is responsible for predicting the reward (\(r\)) given a state and an action, \(Q:(S,a)\to r\). Table 2 summarizes the variables involved. The reward function is defined as the negative squared error between the current state and the target state (equation 1). Thus, the objective is to maximize the negative reward (ideally bringing it close to 0). Both the actor and critic are deep neural networks; the actor net has a tanh output activation and the critic has a linear output activation.
\[r(S,a)=-[(x+\Delta x-x_{t})^{2}+(y+\Delta y-y_{t})^{2}+(z+\Delta z-z_{t})^{2}] \tag{1}\]
Given a state (\(S=[x,y,z]\)) and an action (\(a=[\Delta x,\Delta y,\Delta z]\)), executing the action involves successfully moving to the relative coordinates, i.e. the next state is \(S^{\prime}=(x+\Delta x,y+\Delta y,z+\Delta z)\). To avoid the agent taking long strides while executing the actions, we scale the output of the actor according to equation 2. The scaling factor of 0.05 implies that the agent is restricted to a relative displacement of 0.05 m along each individual axis. The TD3 algorithm also defines an action noise for exploration and better convergence. We define the action noise as a multivariate Gaussian (equation 3), so the effective action is then \(a=0.05(a+a_{n})\).
\[a=0.05*[\Delta x,\Delta y,\Delta z] \tag{2}\]
\[a_{n}\sim\frac{\exp\!\left(-\frac{1}{2}(a_{n}-\mu)^{T}\Sigma^{-1}(a_{n}-\mu)\right)}{\sqrt{(2\pi)^{3}|\Sigma|}} \tag{3}\]
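To make the interplay of equations (1)-(3) with the closed-loop execution of approach (iii) concrete, a minimal sketch is given below. The `goto_waypoint` stub stands in for the trusted low-level PID loop and is an assumption of ours, as is the explicit noise handling (during training, stable-baselines3 applies the action noise internally).

```python
import numpy as np

ACTION_SCALE = 0.05                   # eq. (2): at most 0.05 m per axis and per step
TARGET = np.array([0.0, 0.0, 1.0])    # hovering target S_t


def goto_waypoint(waypoint):
    """Placeholder for the closed-loop PID execution of approach (iii): here we
    assume ideal tracking and return a 12-dimensional state with the commanded
    position reached; in practice this would wrap the low-level PID controller."""
    state = np.zeros(12)
    state[:3] = waypoint
    return state


def navigation_step(state, raw_action, rng=np.random):
    """One high-level RL step: add exploration noise (eq. 3), scale the action
    (eq. 2), execute the waypoint with the PID loop, and score it with eq. (1)."""
    noise = rng.normal(0.0, np.sqrt(0.5), size=3)     # diagonal covariance, 0.5 per axis
    delta = ACTION_SCALE * (np.clip(raw_action, -1.0, 1.0) + noise)
    new_state = goto_waypoint(state[:3] + delta)
    reward = -float(np.sum((new_state[:3] - TARGET) ** 2))
    return new_state, reward
```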
Table 3 lists the hyperparameters for training. The control frequency determines the number of control actions applied during a period of 2 simulation seconds to execute the corresponding action.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Hyperparameter** & **Value** \\ \hline \hline Actor Net Arch & [50,50] \\ Critic Net Arch & [50,50] \\ \hline Number of Timesteps & 1000 \\ Learning Rate & 0.001 \\ \hline Reward Function & \(-(s^{\prime}-t)^{2}\) \\ \hline Helix Height & 0.5 m \\ Helix Radius & 0.3 m \\ \hline \end{tabular}
\end{table}
Table 1: Hyperparameters for PID Tuning
\begin{table}
\begin{tabular}{|l|c|} \hline
**Variable** & **Notation** \\ \hline \hline State Space & \(S=[x,y,z,r,p,y,v_{x},v_{y},v_{z},w_{x},w_{y},w_{z}]\) \\ Action Space & \(a=[\Delta x,\Delta y,\Delta z]\) \\ Min Action & \(a_{min}=[-1,-1,-1]\) \\ Max Action & \(a_{max}=[+1,+1,+1]\) \\ \hline \end{tabular}
\end{table}
Table 2: Navigation Task Variables
## 4 Robustness
Assessing the robustness of black-box models has always been a challenge. While control algorithms built on classical control principles have empirical tools to quantify robustness and evaluate performance, quantifying the robustness of black-box models relies on deliberate disturbance and adversarial techniques. We resort to injecting disturbances to evaluate the RL policy and also explore the possibility of improving robustness by subjecting the agent to external disturbances during the training phase. At first glance, it might be reasonable to expect that training with disturbances would make the RL policy more resilient to external disturbances. However, the experimental results refute this assumption. We find the RL agent to have inherent robustness, and training with external disturbances did not have a significant impact. Our findings corroborate similar results reported for other continuous control systems [2].
We focus on step/pulse disturbances, as they are more realistic and emulate the wind disturbance experienced by the quadcopter. The disturbance is applied in multiple directions (along the XYZ axes), and to make it challenging, the direction of the disturbance is switched randomly every few iterations during the training phase. Figure 2 shows the training disturbances applied: the disturbance is applied along X, then Z, and then along all three axes XYZ. The evaluation is based on fixed step disturbances, and the performance is measured individually for varying magnitudes of disturbance along individual axes. This also ensures that the replay buffer always contains a good sample.
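In simulation, such a step disturbance can be injected directly into the physics engine underlying gym-pybullet-drones. The sketch below shows one plausible way to do this with PyBullet's `applyExternalForce`; the force magnitude, switching interval, and the way the drone body id is obtained are our own illustrative choices rather than the exact values used in the experiments.

```python
import numpy as np
import pybullet as p

AXES = {"x":   np.array([1.0, 0.0, 0.0]),
        "z":   np.array([0.0, 0.0, 1.0]),
        "xyz": np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)}

direction = AXES["x"]   # current wind direction, re-drawn periodically


def apply_wind_disturbance(drone_id, step, magnitude=0.005, switch_every=500, rng=np.random):
    """Apply a constant ('step') force along the current axis and re-draw the
    axis every `switch_every` simulation steps. Magnitude and interval are
    illustrative, not the values used in the experiments."""
    global direction
    if step % switch_every == 0:
        direction = AXES[rng.choice(list(AXES))]
    pos, _ = p.getBasePositionAndOrientation(drone_id)
    p.applyExternalForce(objectUniqueId=drone_id,
                         linkIndex=-1,                          # base link
                         forceObj=(magnitude * direction).tolist(),
                         posObj=pos,                            # act at the centre of mass
                         flags=p.WORLD_FRAME)
```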
### Hardware Implementation
Once the reinforcement learning was done, the actor network (neural network) was extracted. Real-time inference was achieved by running the model on the computer, continuously reading the sensor measurements, and sending the actions to the CrazyFlie 2.1 over the radio. The compatibility of states was ensured by reading all of the required measurements from the onboard sensors and converting the quantities to the respective units. The position \(([x,y,z])\), orientation \(([r,p,y])\) and linear velocities \(([v_{x},v_{y},v_{z}])\) were read from the Kalman filter estimates, and the angular velocities \(([\omega_{x},\omega_{y},\omega_{z}])\) were obtained from the gyro.
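A rough outline of this real-time loop using the cflib Python API is sketched below. The logging variable names, rates, and the use of position setpoints are plausible choices consistent with the description above, not the authors' exact code, and `actor` is assumed to be the exported policy network wrapped as a function mapping the 12-dimensional state to a 3-dimensional action.

```python
import time
import numpy as np
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.log import LogConfig
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = "radio://0/80/2M/E7E7E7E7E7"   # example radio address
state = np.zeros(12)                  # [x, y, z, r, p, y, vx, vy, vz, wx, wy, wz]


def on_state(timestamp, data, logconf):
    # Illustrative subset (position + attitude); velocities and gyro rates would
    # need additional log configurations because of the log packet size limit.
    state[0:3] = [data["stateEstimate.x"], data["stateEstimate.y"], data["stateEstimate.z"]]
    state[3:6] = np.radians([data["stabilizer.roll"], data["stabilizer.pitch"], data["stabilizer.yaw"]])


cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache="./cache")) as scf:
    log = LogConfig(name="state", period_in_ms=20)
    for var in ["stateEstimate.x", "stateEstimate.y", "stateEstimate.z",
                "stabilizer.roll", "stabilizer.pitch", "stabilizer.yaw"]:
        log.add_variable(var, "float")
    scf.cf.log.add_config(log)
    log.data_received_cb.add_callback(on_state)
    log.start()

    for _ in range(1000):                 # ~20 s of flight at 50 Hz
        delta = 0.05 * actor(state)       # trained policy network (assumed callable)
        x, y, z = state[0:3] + delta
        scf.cf.commander.send_position_setpoint(float(x), float(y), float(z), 0.0)
        time.sleep(0.02)
```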
## 5 Results
### PID
The objective of the agent is to complete a full 360-degree helix. In the simulation environment, a circle is used for training, and convergence is measured against the CrazyFlie trajectory when tracing a helix. Figures 2(a), 2(b), and 2(c) compare the tuned parameters (output of the algorithm) against the default ones for the test trajectory along the X, Y, and Z axes.
Hardware testing includes the two methods mentioned above. First, the relative positioning system uses the Flow deck, which shows a rougher point-to-point displacement; meanwhile, the absolute positioning using the Lighthouse shows smoother performance. See figures 3(a) and 3(b) for more details.
Finally, figure 2(d) shows the step responses, where the tuned and default parameters give similar results. Therefore, we can conclude that the estimated model approximates the real one closely enough; table 4 presents more detailed step response information.
### Navigation
The objective for the RL agent is to hover at \(S_{t}=[0,0,1]\) starting from the origin \(S_{0}=[0,0,0]\). Throughout the training iterations, the starting point remains fixed. Once trained, we evaluate the RL policy in a setting similar to the training task, i.e. starting at the origin, but also test the ability to generalize by changing the starting point to an arbitrary location that was never explored by the agent during training. We also subject the agent to external disturbances (wind) while evaluating the model in the real world.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Hyperparameter** & **Value** \\ \hline \hline Actor Net Arch & \([50,100,500,100,50,3]\) \\ Critic Net Arch & \([50,100,500,100,50,1]\) \\ \hline Number of Timesteps & \(100\,000\) \\ Learning Rate & \(0.001\) \\ \hline Action Noise Mean (\(\mu\)) & \([0,0,0]\) \\ Action Noise Variance (\(\Sigma\)) & \(\begin{bmatrix}0.5&0&0\\ 0&0.5&0\\ 0&0&0.5\\ \end{bmatrix}\) \\ \hline Control Frequency & \multicolumn{2}{c|}{50 Hz} \\ \hline Initial State (of the drone) & \multicolumn{2}{c|}{\(S_{0}=[0,0,0]\)} \\ \hline \end{tabular}
\end{table}
Table 3: Hyperparameters
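For concreteness, the entries of Table 3 map onto the stable-baselines3 TD3 interface roughly as in the sketch below. The environment object `env` (e.g., the closed-loop navigation environment built on gym-pybullet-drones) and the noise handling are illustrative assumptions of ours, while the layer sizes, learning rate, and step count are the values listed in the table.

```python
import numpy as np
from stable_baselines3 import TD3
from stable_baselines3.common.noise import NormalActionNoise

# `env` is assumed to be the closed-loop navigation environment described above;
# its construction is omitted here.
n_actions = 3
action_noise = NormalActionNoise(
    mean=np.zeros(n_actions),
    sigma=np.sqrt(0.5) * np.ones(n_actions),   # per-axis variance of 0.5 (Table 3)
)

# Hidden layers only: stable-baselines3 appends the 3-dim (actor) and 1-dim
# (critic) output layers listed in Table 3 automatically.
policy_kwargs = dict(net_arch=dict(pi=[50, 100, 500, 100, 50],
                                   qf=[50, 100, 500, 100, 50]))

model = TD3(
    "MlpPolicy",
    env,
    learning_rate=1e-3,
    action_noise=action_noise,
    policy_kwargs=policy_kwargs,
    verbose=1,
)
model.learn(total_timesteps=100_000)
model.save("td3_hover")
```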
Figure 3: Helix Test Results in Gym-Pybullet-Drones
Figure 2: External Disturbances Applied - Training
The results are presented and compared for the three approaches described before: i. pure RL, ii. RL with open-loop control, and iii. RL with closed-loop control. Table 6 summarizes the results. The pure RL approach (directly controlling the motor RPMs) was not found to converge even after training for 10 million steps. The RL with open-loop control approach managed to converge after 10 hrs of training, while the custom implementation of RL and a PID loop managed to converge in significantly less training time and achieves a better expected reward (reward per step in the episode). Figure 5 compares the results of approaches (ii) and (iii). We see that the agent in approach (iii) learns significantly faster, attains a higher reward early in the training phase, and also attains a higher expected reward at the end of training. This is due to the fact that the agent has robust low-level control to execute the necessary motion primitives (using a closed-loop PID controller) and does not have to learn them from scratch, unlike approach (ii). It is also worth noting that the magnitude of the expected reward in (iii) is quite high compared to (ii), a negative reward of -6 vs -3 (per step); this is again due to the fact that we leverage robust low-level control, because of which the agent can execute actions concretely and explore more of the environment. The actor reward after convergence is close to about 1, which is expected, as the shortest distance between \(S_{0}=[0,0,0]\) and \(S_{t}=[0,0,1]\) is 1 (Euclidean distance).
Approach (iii) was simulated after training converged. Figure 6a is the same as the training task, with the starting point being the origin. In figure 6b, the starting point was chosen arbitrarily, \(S_{0}=[3,3,3]\), which was never explored by the agent during the training phase, yet the policy generalizes well. The subtle thing to note is that the trajectory is not entirely optimal (the shortest path between two points is a straight line); instead, the agent first navigates to a location close to the destination and then pursues a familiar trajectory encountered during training.
The model was also tested to control the real hardware, and the results were similar. While testing on the real hardware, external disturbances were applied in attempts to drift the CF2.X away from the hovering point, and the model was able to stabilize and return to the hovering point.
The plots trace the movements of the agent in the environment. Red indicates the starting point and green is the destination.
### Robustness
Figures 6(a) - 6(c) compare the performance of the model trained with and without the disturbance. During the evaluation run, the disturbance vector is held constant throughout the episode (step disturbance). The magnitude of the disturbance is varied along with the directions and the expected rewards are recorded individually.
When a disturbance is applied along X or XYZ (figures 6(a) & 6(c)), the expected reward appears to increase, while applying the disturbance along Z (figure 6(b)) seems to have a negative impact. A disturbance along the XY direction appears to tilt the quadrotor, which then leverages the combined effect of the actions taken and the XY component of the wind force, making it approach the hovering point much faster and thus earning a better reward. This also explains why a disturbance along Z has a drastic negative impact: the wind is constantly pushing the quadrotor upwards, and when the magnitude gets large, the quadrotor has to work against the wind. We also observe that the model trained without any disturbance attains a better reward when evaluating disturbances along the XY direction. In the case of disturbance along Z, both models seem to perform similarly, and as expected the expected reward decreases with the magnitude of the noise, because the agent cannot use the wind force along Z to its advantage and instead has to work against it.
The robustness evaluation reveals that the model trained without any disturbance during the training phase is inherently robust to external disturbances when subjected to wind forces during the testing phase. Not only does the model exhibit some form of robustness, it in fact outperforms the model that was subjected to random step disturbances during training. It was also observed that injecting a higher magnitude of disturbance during training resulted in the model not converging and going unstable, as expected. A plausible explanation is that a single RL agent model is unable to learn/differentiate between the behaviors of the combined system, i.e. the drone plus the environment dynamics. Our observations corroborate findings reported by a similar study for the cart pole experiment [2]. Thus we conclude that training a model subject to external disturbance does not have a significant impact on the model's robustness, but it does help in exploring the environment faster.
Figures 7(a) - 7(b) show the simulated 3D trajectories, demonstrating that the model is stable and reaches the target point \([0,0,1]\). This behavior is confirmed in hardware in figures 7(c) and 9, where the CrazyFlie is subjected to disturbances in different directions (wind force exerted manually).
Figure 6: RL Policy Test
Figures plotting the RL results for the task of hovering using open loop control to execute the actions. The X-axis represents the relative timesteps. The Y-axis represents the magnitude. Total number of timesteps for RL + open loop system is 1 000 000 (1 M), and 100 000 (100 K) for the RL + closed loop system. The performance metric used here is the expected reward per step (higher the better)
Figure 5: RL Training Performance: Comparison of RL + Open Loop Control and RL + Closed Loop Control
Figure 7: Robustness Evaluation - External Disturbance Injection
## 6 Conclusion
The project focused on reinforcement learning-based control of a CrazyFlie, exploring aspects of combining classical control and reinforcement learning approaches. Our first objective was to determine how suitable this virtual environment was, choosing as the primary task the PID tuning of the controller coefficients. In order to do so, the agent needed to complete a predefined trajectory in simulation and in reality. This formed the basis for the navigation task, where the RL agent was responsible for predicting high-level actions for maneuvers, and the PID controller was leveraged to execute the actions successfully.
The navigation task also explored 3 distinct approaches: i. a pure RL approach, where the RL agent directly outputs the low-level control signals (without the PID controller); ii. an RL and open-loop approach, where the agent predicts the actions and the low-level PID control computes the control signals and executes them in an open-loop fashion; iii. finally, an RL and closed-loop approach, which runs a closed-loop PID to ensure the high-level actions are fully executed before moving on to the next RL step. We observe that approach (iii) performed remarkably well, achieving the best results. Approach (iii) aims to borrow the best from both fields: robust and explainable principles (PID) from classical control, and the ability to adapt and perform complex tasks from reinforcement learning.
Finally, as part of the robustness evaluation, we turned towards assessing the inherent robustness of RL models and explored whether subjecting the agent to external disturbances would improve performance. The main objective was to establish a comparison showing how an agent trained against disturbances could improve model performance. In our case, the model trained with disturbances showed poorer performance. We conclude that, for our experiment, the RL agents have inherent robustness and training with disturbances does not improve performance, but it does help the agent explore the environment.
## 7 Acknowledgements
We acknowledge our project advisors Johan Gronqvist and Emma Tegling for all the guidance and support. We would also like to acknowledge the Department of Automatic Control, Lund University for all the resources.
|
2301.01139 | Crystal Chemistry at High Pressure | An overview of the behavior of materials at high pressure is presented,
starting from the effects on single atoms driving electronic transitions and
changes in periodic trends. A range of high-pressure-induced phenomena in the
solid state are then discussed building on the atomic changes, including
bizarre electronic structures, electrides, compounds of noble gases, changes in
elemental miscibility, and strange structural and bonding configurations. In
the final section, the field of high pressure superconductivity is discussed,
as high pressure phases have generated immense study and excitement as some of
their critical superconducting temperatures approach room temperature. | Katerina P. Hilleke, Eva Zurek | 2023-01-03T15:03:18Z | http://arxiv.org/abs/2301.01139v1 | # Crystal Chemistry at High Pressure
###### Abstract
Jupiter, the largest planet in our solar system, is a gas giant comprised mostly of hydrogen and helium. After penetrating its atmosphere, a mixture of hydrogen and helium, methane and ammonia, one would find a massive sea of hydrogen - behaving very differently than one might expect from general chemistry classes [1]. The hydrogen layer surrounding Jupiter's core, under immense pressure and at high temperature, is metallic. This is just one example of how the pressure variable is so important in determining chemical and physical behavior, bearing consequences for the evolution and dynamics of planetary interiors [2].
Under pressure, the behavior of the elements as well as the compounds they form can change drastically from what we, as beings living and learning at 1 atmosphere, are used to. Hydrogen, as described above, can become metallic at high pressure, joining its Group I cousins the alkali metals. It may be perplexing then to learn that the alkali metals lithium [3] and sodium [4; 5] become semiconducting and insulating, respectively, when squeezed. Potassium starts to behave more like a transition metal [6; 7; 8]. When combined with one another, the unexpected behavior of the elements under pressure makes for correspondingly curious compounds. Sodium chlorides with vastly divergent stoichiometries from the typical 1:1 (Figure 1) have been predicted and synthesized [9], stubbornly inert helium atoms form a compound with sodium [10], and metal superhydrides wherein the hydrogen atoms coalesce into clathrate-like networks have been reported [11]. Various "rules" for the behavior of materials at high pressure have been proposed by Prewitt and Downs [12],
Grochala and co-workers [13], and Zhang _et. al._[14], to name a few - and have undergone progressive revision as more is uncovered about the structures and materials far below us in the Earth's core and far away in the planets and stars.
Over the past century, experimental methods have evolved to create progressively higher pressures in a laboratory setting, allowing us to directly probe the behavior of materials under extreme conditions. Pressures in the megabar range can now be routinely achieved, albeit still requiring delicate setups [15; 16; 17]. Diamond anvil cells (DACs) combine the superior hardness of this desired polymorph of carbon with its optical transparency, enabling the creation of the highest static pressures and interrogation of the sample via visual and spectroscopic means. Engineering advances from multistage compression apparatus [18; 19] to toroidal DACs [20; 21] have driven the experimental ceiling for static pressures ever higher. Dynamic compression experiments under ramp or shock conditions, driven by gas guns, laser pulses, or magnetic fields, can reach well into the terapascal regime [22; 23; 24], allowing us to study the behavior of diamond at 5 TPa [25] and iron at conditions thought to be in super-Earths [26]. Diagnostic techniques for characterizing the resulting substances can be difficult to implement in both dynamic and static compression, often requiring theoretical support for their interpretation [27].
Far cheaper than high-pressure experiments are computations modeling high-pressure systems. Band structures, phonons, Raman or infra-red spectra and numerous material properties can all be calculated without so much as stepping foot into a laboratory. Crystal structure prediction (CSP) techniques, not weighted down by preconceived, atmospheric pressure-based, notions of how atoms might appropriately arrange themselves in a unit cell, can identify which structures can exist under high-pressure conditions [28; 29; 30]. The computational exploration of potential energy landscapes will not be discussed
Figure 1: The familiar crystal structure of NaCl, table salt, (a) is the only known stable compound of sodium and chlorine at ambient pressure. Under pressures exerted by a diamond anvil cell (b), a variety of additional stoichiometries and crystal structures decorate the sodium-chlorine phase diagram, including the \(P4/mmm\) Na\({}_{3}\)Cl, \(P4/m\) Na\({}_{3}\)Cl\({}_{2}\), \(Imma\) NaCl\({}_{2}\), and \(Pm\bar{3}\) NaCl\({}_{7}\)[10] structures shown here.
in our contribution; to learn about methods that can be employed to identify the global, as well as important local, minima we point the reader to an excellent chapter in this book "Crystal Structure Prediction" by Andreas Hermann, Lewis J. Conway and Chris J. Pickard. Exploratory calculations highlight promising phases for further experimental investigation, but computation can just as well follow experimental results, elucidating behavior and filling in the gaps. The resulting feedback loop of experiment and theory has driven the discovery and characterization of a plethora of phases ranging from the superhard [31; 32; 33] to the superconducting [34; 35; 36; 37; 11; 38].
In the following sections we build a framework for understanding the behavior of materials at high pressure, starting from the effects of pressure on the atoms themselves, driving electronic transitions and altering periodic trends. From there, the various manifestations of high pressure in the solid state are sorted into categories (which are not mutually exclusive, but rather illustrative), starting with exotic electronic structures and electrides. We discuss compounds of the noble gases and those containing elements that are immiscible at ambient pressure, as well as crystal lattices that contain bizarre geometrical motifs and bonding configurations. Finally, we survey the effects of high pressure on superconductivity, a field that has recently undergone a veritable explosion as high pressure phases toe the line of room-temperature superconductivity [39].
## 2 The atom under pressure
Chemistry describes the interactions between the 118 distinct elements that are organized in the periodic table, proposed by Mendeleev while classifying elements according to their chemical properties observed at atmospheric conditions. The trends found within the periodic table can be used to compare atoms according to their size, the number of electrons surrounding their nuclei, and to make predictions as to whether those electrons are held tightly or loosely. Moreover, the periodic table allows students and researchers to predict how different elements will interact with one another: will they form compounds, emulsions or alloys, or will they be unreactive? If reactivity is suspected, the periodic table can be used to guess if the bonds are covalent, ionic, or (usually) somewhere in between. Across the periodic table, trends in properties such as atomic radii, electronegativity, and oxidation state can be mapped, leading one to conclude that fluorine, the most electronegative element, will gain electrons in a binary phase thereby achieving an F\({}^{-}\) configuration with a filled valence shell. On the other hand, cesium, as the least electronegative element (neglecting francium, whose miniscule half-life renders it basically experimentally irrelevant), typically assumes an oxidation state of 0 or +1.1 Yet at high pressure, several Li\({}_{n}\)Cs phases have been predicted [44] where Cs attains unusual formal oxidation states thought to be in excess of -2 due to
substantial electron transfer from lithium to cesium. How can the stability of these unintuitive stoichiometries and their resulting electronic structures be rationalized?
Let us consider electronegativity a little more. Although Pauling's [45] is the most widely adopted, a number of metrics have been used to produce scales of electronegativity for the elements. In Pauling's formulation, electronegativity differences between pairs of atoms A and B are calculated from the homo- and heteronuclear bond dissociation energies, then referenced against the electronegativity of H being set to 2.1 (later adjusted to 2.2). In this regime, the electronegativities of fluorine, lithium, and cesium are 3.98, 0.98, and 0.79, respectively. Several other scales have been proposed, including Mulliken's "absolute electronegativity" [46] being the average of the first ionization energy and electron affinity of an atom [47]. Dong _et al._ modify the Mulliken definition for elements under high pressure taking as a reference the homogeneous electron gas rather than the vacuum [48]. Allen's electronegativities are derived from the average energies of the valence electrons in the atom [49], and a closely related scheme has recently been proposed by Rahm et al [50; 51; 52], where electronegativity is calculated as the average of the electron binding energies of ground state valence electrons - approximately translated to the Allen scale if averaging over valence electrons alone. Broadly speaking, the common factor of importance in all of these is the collection of atomic orbital energies and the differences between them. Under pressure, those change.
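For reference, the two quantitative scales mentioned above can be written compactly (the notation here is ours):
\[\chi_{\text{Mulliken}}=\tfrac{1}{2}\left(I_{1}+E_{\text{ea}}\right)\,\qquad\left|\chi_{A}-\chi_{B}\right|_{\text{Pauling}}=(\text{eV})^{-1/2}\sqrt{D_{AB}-\tfrac{1}{2}\left(D_{AA}+D_{BB}\right)}\,\]
where \(I_{1}\) is the first ionization energy, \(E_{\text{ea}}\) the electron affinity, and \(D_{XY}\) the dissociation energy of the X-Y bond; both inputs are precisely the orbital-energy-related quantities that pressure reshuffles.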
The prototypical model for understanding the quantized behavior of energy levels in a confined system is the particle in a box. The resulting energy levels for a particle of mass \(m\) in a box of width \(L\) are given by \(E_{n}=\frac{\hbar^{2}n^{2}\pi^{2}}{2mL^{2}}\), where \(n\) is the principal quantum number. To consider the effects of pressure on this model, we could simply make the box smaller by reducing the width \(L\), which has the effect of increasing the \(E_{n}\). Thus, energy levels (orbital energies) will increase under pressure. The complicating factor is that the rate of this increase is not the same for each orbital because the number of radial nodes plays a role. The peak density of a 4s orbital is further from the nucleus than a 4p orbital, since the electrons occupying the 4s orbital must maintain orthogonality to those in the 1s, 2s, and 3s orbitals, while those in the 4p orbital must only contend with the 3p and 2p orbitals (this analogy may be extended to the 4d and 4f shells). With more electron density being further from the nucleus, the electrons in the 4s orbital will feel the effects of pressure more strongly than those in the 4p orbital. This is illustrated schematically in Figure 2a and 2b. At certain levels of confinement, electronic \(n\)s \(\rightarrow\)\(n\)p, \(n\)s \(\rightarrow\)\((n-1)\)d, and \((n-1)\)d \(\rightarrow\)\((n-2)\)f transitions can become favorable. This is the reason why pressure drives rearrangements of the orbital energies of an atom, with ensuing electronic transitions. (In section 3.2, we will explore the opportunities presented by another sort of destination orbital, this time one not centered on an atom.)
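As a simple worked illustration of this point (our own, within the model just described): for a cubic box of side \(L\) and volume \(V=L^{3}\), each level scales as \(E_{n}\propto L^{-2}\propto V^{-2/3}\), so the pressure a confined electron exerts on the walls is
\[P_{n}=-\frac{\partial E_{n}}{\partial V}=\frac{2}{3}\frac{E_{n}}{V}\,\]
and shrinking the box from \(L\) to \(L/\lambda\) (with \(\lambda>1\)) raises every level by \(\Delta E_{n}=(\lambda^{2}-1)E_{n}\). The destabilization is therefore not uniform: levels with larger confinement energies rise faster in absolute terms, a rough one-dimensional caricature of why compression reorders orbital energies rather than simply shifting them all together.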
In compressed lithium, the ground state electronic configuration transitions from 1s\({}^{2}\)2s\({}^{1}\)\(\rightarrow\) 1s\({}^{2}\)2p\({}^{1}\), while cesium undergoes a 6s\({}^{1}\)\(\rightarrow\) 5d\({}^{1}\) pressure induced
transformation. The exact pressures at which these electronic changes occur depend, of course, on the chemical environment of the lithium or cesium atoms - and in calculations, on the theoretical methodology. In Cs, the s \(\rightarrow\) d transition is thought to drive the transformation to the complex Cs-III structure with 84 atoms in the unit cell that is stable near 4.2 GPa [53; 54; 55; 56; 57; 58], while in lithium the pressure at which the s-p mixing occurs is believed to be somewhat higher, in the megabar to multimegabar range, dependent on the chemical environment and level of theory [57; 59; 60; 61]. Another example includes potassium, whose complex phase diagram under pressure (see, _e.g._ Figure 2c,d), has been in part attributed to this s \(\rightarrow\) d electronic transition. [62] The energies of core orbitals can also become relevant; for example, in Cs-VI the 5p bands hybridize with the 6s [63; 64], making them accessible for chemical interactions, and the core orbitals of K have also been proposed to be key to its structural diversity. [62] Overall, one effect of pressure on the alkali metals is that their electronegativities undergo quite a rearrangement, ending up with cesium being more electronegative than lithium [48; 57]. From this perspective, the formation of cesium anions in the Li\({}_{n}\)Cs phases begins to make sense.
Among the \(d\)-block similar reorderings are predicted to take place, with the group 10 metals Ni, Pd, and Pt preferring \(d^{10}\) closed-shell configurations, as compared to the \(s^{2}d^{8}\) favored at ambient conditions, while the group 11 and 12 metals become electron donors [48]. The former transition is associated with a spike in the estimated chemical hardness - obtained as half of the HOMO-LUMO gap - of Ni, Pd, and Pt, reaching values comparable to some of the noble gases. This is contrary to the general trend where the hardness of
Figure 2: External pressure on an atom raises the energy of atomic orbitals as they are constrained to a smaller space, but the rate of this increase differs between s, p, and d orbitals (a), (b), favoring s \(\rightarrow\) d transitions for many of the alkali metals, including potassium. Potassium adopts the _bcc_ structure at 0 GPa (c) but at higher pressures takes on a series of complex structures including the incommensurately modulated host-guest _t_I19 phase [65] (d).
most of the elements in the periodic table decreases with pressure as energy levels become closer to one another [48]. The resulting changes in the relative hardness of pairs of elements can lead to changes in compound stability arising from HSAB (hard-soft acid-base) arguments, and the appearance of strange multicenter bonding manifolds in certain high pressure phases have been linked to general increases in softness [66].
From transition-metal-like behavior in the \(s\)-block to relative inertness in the formerly-\(d^{8}\)-transition metals, atoms under compression can behave very differently from their ambient-pressure selves, and the consequences for materials under pressure are far-reaching. We will now explore some of the wild and wonderful structures and phenomena that result.
## 3 The crystal under pressure
At ambient pressure a majority of the solid, metallic elements of the periodic table adopt very simple, symmetric, structures that are close-packed. The most stable geometries are those that minimize the free energy. However, at low temperatures the entropic contributions between different solid phases are typically negligible, so computational studies often employ the enthalpy to determine the structures that are preferred. With increasing pressure the enthalpy, consisting of the internal energy and pressure-volume terms (\(H=U+PV\)), becomes dominated by the \(PV\) term. It would be natural, therefore, to imagine that close-packed structures with increased coordination numbers become preferred at high pressures. The reality, however, does not coincide with our expectations. For example, within cesium the nearest neighbor coordination number first increases from 8 (bcc) to 12 (fcc), then decreases to about 10 (Cs-III), 8 (Cs-IV) and finally increases again to 10/11 (Cs-V) and 12 (Cs-VI). These structural transitions are believed to be driven by the previously discussed pressure-induced \(s\to d\) valence electronic transition within the constituent atoms, which causes the interatomic distances to become smaller compared to the ranges of the wavefunction [53].
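To get a feel for the magnitudes involved (a back-of-the-envelope estimate of ours): since \(1\,\text{GPa}\cdot\text{\AA}^{3}\approx 6.24\times 10^{-3}\,\text{eV}\), a volume reduction of just \(1\,\text{\AA}^{3}\) per formula unit at 100 GPa lowers the \(PV\) term by
\[P\Delta V\approx 100\,\text{GPa}\times 1\,\text{\AA}^{3}\approx 0.6\,\text{eV}\,\]
an energy on the scale of chemical bonding differences, which is why denser packings, and entirely different stoichiometries, can win out under compression even when they cost internal energy.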
Moving beyond elemental crystal structures, several compounds with seemingly bizarre stoichiometries (at least from the perspective of minds that experience a 1 atmosphere reality) have been predicted and/or synthesized under pressure. The familiar combination of sodium and chlorine, table salt and prototypical ionic compound with a 1:1 ratio, is not the only stable crystalline structure in the Na-Cl phase diagram. At least two unique stoichiometries, Na\({}_{3}\)Cl and NaCl\({}_{3}\), were synthesized, and several others were predicted to become stable when squeezed [9]. Noble gases xenon, argon, and helium are active components of solid compounds that have been synthesized [67; 10; 68], and _very_ hydrogen-rich compounds such as YH\({}_{9}\), LaH\({}_{10}\), and CaH\({}_{6}\) that are high-temperature superconductors with superconducting critical temperatures (\(T_{c}\)s) approaching room temperature have been made [69; 70; 71; 11; 72]. These metal superhydrides are materials-by-design success stories inspired by theoretical predictions [34; 35; 73].
In the following sections, we explore the plethora of exciting materials that can be created using high pressure, with all their intriguing structural and behavioral phenomena. While their stability and existence can be traced to the pressure driven electronic rearrangements of the constituent atoms described in Section 2, the manifestations of these rearrangements can take many forms. Below, we describe a variety of illustrative phases sorted into a series of categories, but by necessity these categories will, at times, overlap. Nevertheless, all exemplify the ramifications of high pressure on solid-state chemistry.
### Electronic structure
At atmospheric pressure, the stoichiometries of many inorganic solid-state compounds can be predicted from the most common oxidation states of their constituent elements. Usually, alkali metals and alkaline earth metals possess oxidation states of +1 and +2 respectively, so when combined with the O\({}^{2-}\) ion one would expect Na\({}_{2}\)O and MgO, as well as K\({}_{2}\)O and SrO to form. The noble gases are mostly inert, the p-block is amenable to forming covalently bonded networks, and while a wide variety of oxidation states, which correspond to various filled or half-filled orbitals, are available to several transition metals, Zn, Cd, and Hg steadily persist in maintaining their \(d^{10}\) configurations.
The consequences of the orbital energy shifts discussed in Section 2 mean that several of these "rules" no longer apply at high pressures, and elements can adopt unusual oxidation states in compounds with unexpected stoichiometries. The Li\({}_{n}\)Cs phases [44] used to illustrate the effects of pressure on electronegativity provide one such example. Above 70 GPa, the very lithium rich Li\({}_{5}\)Cs phase is predicted to become stable, joined at higher pressures by Li\({}_{3}\)Cs, Li\({}_{4}\)Cs, and LiCs [44]. Remarkably, the calculated Bader charges on the Cs atoms in these stoichiometries are all more negative than -1, and since calculated Bader charges frequently underestimate formal oxidation states, in these compounds Cs may attain a formal oxidation state that is potentially lower than -2. While alkali metal anions (alkalides) have previously been captured at ambient pressures with cryptands [74], they achieve charges only up to -1 with the additional electron going into the \(n\)s orbital. In the case of the Li\({}_{n}\)Cs phases, however, pressure induces significant Li 2s \(\rightarrow\) 2p and Cs 6s \(\rightarrow\) 5d electronic transitions, with the latter increasing progressively with higher Li content, thereby facilitating the acceptance of electron density by the typically unoccupied Cs 5d orbitals.
For K, Cs, and Rb the pressure-driven \(n\)s \(\rightarrow\) (\(n-1\))d transitions led to predictions of transition-metal like behavior [6; 75] in the formation of intermetallic compounds with actual transition metals [76; 77; 8; 78]. Within some of these compounds, the transition metal elements assume exotic electronic configurations, as in the case of the predicted potassium iridide K\({}_{3}\)Ir (Figure 3a) containing the Ir\({}^{3-}\) anion [79]. The Ir 5\(d\) orbital becomes fully occupied as a result of electron transfer from K, echoed in the later predicted Rb\({}_{3}\)Ir and Cs\({}_{3}\)Ir phases [80]. K\({}_{3}\)Ir and Rb\({}_{3}\)Ir share the \(Pmmm\) Cu\({}_{3}\)Ti structure type, while
Cs\({}_{3}\)Ir adopts the \(P2_{1}/m\) Ni\({}_{3}\)Ta type, which consist of Ir@M\({}_{8}\) and M@M\({}_{8}\) distorted cubes, and Ir@M\({}_{12}\) distorted cuboctahedra respectively. In combination with Li under high pressure, Au displays a similar ability to adopt a significantly negative formal charge of less than -3 in the predicted phases Li\({}_{4}\)Au and Li\({}_{5}\)Au - where electrons donated from Li are placed into the empty Au 6p orbitals [81], which are less destabilized than the Li 2s or 2p under pressure.
Pressure can also promote chemical interactions with core or semi-core orbitals, as in the case of HgF\({}_{3}\) and HgF\({}_{4}\)[82]. These stoichiometries are predicted to become stable above 73 and 38 GPa, respectively, and above 200 GPa HgF\({}_{4}\) is computed to decompose into HgF\({}_{3}\) and F\({}_{2}\). The \(I4m\) symmetry HgF\({}_{4}\) crystal possesses square planar HgF\({}_{4}\) units typical of a d\({}^{8}\) organometallic complex, with the Electron Localization Function (ELF) [83] confirming covalent Hg-F interactions. To form the four Hg-F bonds, not only the Hg \(6s\) but also two of the semicore \(5d\) electrons are required. In HgF\({}_{3}\), the \(Fm\bar{3}m\) structure (which distorts below 100 GPa to \(C2/m\) symmetry) involves a fluorite-type HgF\({}_{2}^{+}\) lattice stuffed with F\({}^{-}\) ions, leaving Hg with a d\({}^{9}\) configuration. A series of predicted CsF\({}_{n}\) phases, in which the Cs 5p electrons participate in Cs-F covalent bonds, are another example of the pressure-induced activation of core electrons [84]. Their crystal structures display motifs resembling the isoelectronic \([\)XeF\({}_{n}]^{-}\) molecules - for example, \(Fdd2\) CsF\({}_{5}\) contains planar pentagonal CsF\({}_{5}\) units, similar to the \([\)XeF\({}_{5}]^{-}\) anion. With increasing fluorine content, the formal oxidation state on cesium reaches values greater than \(+1\).
### High Pressure Electrides
Electrides are solids where the electrons, localized on non nuclear-centric sites, behave as anions [85]. They are conceptually related to solvated electrons, in
Figure 3: High pressure compounds where the transition metal atoms adopt curious electronic configurations in (a) K\({}_{3}\)Ir [79] (Cu\({}_{3}\)Ti type), with Ir@Ks and K@Ks distorted cubes and iridide I\({}^{3-}\) anions, and (b) HgF\({}_{4}\)[82], in which d\({}^{8}\) configurations on the Hg lead to square planar geometries.
which the excess electrons can be thought to occupy cavities in the fluid [86], as well as alkalide liquids [41; 42] or alkalide solids where alkali metal anions fill the interstitial voids [74]. Although many types of electric families are known at atmospheric conditions, including those that are organic, inorganic, intermetallic and those where the electron localization is restricted to various dimensions or possesses topological properties, herein, we restrict the discussion to high pressure electrides: systems where the electron localization occurs as a response to compression [87].
Though the formation of high pressure electrides has been rationalized in many ways, including pressure induced orbital rehybridizations [88; 89; 4], and multicenter bond formation [90; 91], herein we focus on a simple model proposed by Miao and Hoffmann [87; 92]. As atoms in a solid compound are compressed, raising their orbital energies, the electrons in the highest-energy orbitals can vacate the atom entirely and occupy the interstices of the crystal lattice instead. To understand why this might occur, Miao and Hoffmann pointed out that orbitals can be ascribed to these voids, which can be thought of as interstitial quasiatoms (ISQs). At ambient pressure, the ISQ energies are higher than the atom-centered ones. However, unlike the atom-centered orbitals, those of the ISQ do not experience the repulsive effect caused by the core electrons, and their increase in energy with pressure will be less than the atom-centered orbitals. When the ISQ orbital energies fall below the atom-centered ones they will be occupied, thereby localizing the valence electrons in the interstitial regions. These electrons, detached from the nuclei, serve as anions and the corresponding compounds are called _electrodes_.
For several simple metals, high-pressure phases identified as electrides via calculations have been subsequently studied experimentally. In sodium, Neaton and Ashcroft posited that under pressures high enough to induce overlap of the \(2p\) orbitals, a combination of Pauli repulsion and core orthogonality constraints would drive the valence electrons away from the ionic cores, to localize in the crystalline interstices instead. This electronic redistribution would result in a metal-to-insulator transition [88]. Later CSP calculations - with experimental confirmation of an optically transparent, wide-bandgap insulating phase in the same publication - proposed an insulating \(hP4\) phase with \(P6_{3}/mmc\) symmetry to become stable above 260 GPa. [4] The new \(hP4\) phase was in fact experimentally observed at pressures as low as 200 GPa, but the discrepancy was ascribed to a combination of thermal effects as well as the preferential stabilization of metallic states by the computational method employed. Moving to higher pressures and temperatures, evidence for the \(hP4\) phase has been obtained in shock-ramp experiments [5]. However, _in situ_ X-ray diffraction (XRD) revealed peaks that could not be attributed to \(hP4\) between 240-325 GPa at temperatures in the thousands-of-degrees Kelvin. Consistent with these dynamic compression experiments, calculations showed that the free energy of a \(P6_{3}/m\) symmetry phase that is a topological electric was lower than that of \(hP4\) at these pressures and temperatures [93]. At higher
pressures yet, ca. 15.5 TPa, sodium is predicted to adopt a curious, metallic \(cI24\) electride phase consisting of Na\({}_{12}\) icosahedra [94]. In the insulating \(hP4\) structure that dominates much of the high-pressure landscape in sodium, highlighted in Figure 4a, the atoms occupy the Ni sites of the Ni\({}_{2}\)In structure type with the ISQs on the In sites, in line with the treatment of this phase as (Na\({}^{+}\))\({}_{2}\)E\({}^{2-}\) (where E\({}^{2-}\) denotes a doubly-occupied ISQ). In fact, several A\({}_{2}\)X alkali metal chalcogenides adopt the antifluorite \(Fm\bar{3}m\) crystal structure at ambient conditions, but eventually transition to the Ni\({}_{2}\)In structure type under pressure [95; 96]. Potassium also adopts the \(hP4\) phase when squeezed [62; 97].
Lithium presents another example of complex structural and electronic behavior under pressure, as first postulated by Neaton and Ashcroft who suggested it may adopt an insulating, paired ground state [59]. Subsequent experiments revealed that Li assumes the same semimetallic \(cI16\)[60] structure found in Na [98]. At higher pressures, Li transitions to a number of unique phases, such as those with orthorhombic C-centered lattices and 88, 40, and 24 atoms in their unit cells which have been observed [99]. One of these, \(oC40\) with the \(Aba2\) space group, is an electride displaying especially interesting behavior [100]. In this phase, ISQs occupy three separate symmetry-distinct sites, two that are doubly occupied (E\({}^{\rm II}\)) and the third singly occupied (E\({}^{\rm I}\)), so its primitive cell can be considered as Li\({}_{20}\)E\({}_{8}^{\rm II}\)E\({}_{4}^{\rm I}\). The E\({}^{\rm I}\)-E\({}^{\rm I}\) distance remains roughly constant at a short 1.3 A from 50-80 GPa, with an elevated electron density found between the ISQs. Crystal Orbital Hamilton Population (COHP) [101], ELF, and projected density of states (PDOS) analyses all indicate bonding character between the E\({}^{\rm I}\) sites, and examination of the \(\Gamma\)-point band-decomposed charge densities revealed bonding and antibonding states analogous to the \(\sigma_{g}\) and \(\sigma_{u}^{*}\) orbitals in H\({}_{2}\). In fact, computations have shown that ISQs can form covalent, ionic and metallic bonds with atoms as well as with other ISQs [87; 89; 100; 102; 103]. For this reason, Miao has espoused the idea that ISQs may be thought of as a chemical element and placed above helium in the periodic table under pressure [104].
The proclivity of the elements to ISQ formation has been investigated by comparing the energies of their orbitals calculated at different pressures using a He confinement model with those of an ISQ 1s orbital [92]. Unsurprisingly, Li and Na were found to favor ISQ formation at relatively low pressures, with Mg, Al, In, and Tl predicted to follow suit at higher pressures. Among the heavier alkali metals, the energies of the valence s orbitals were found to rapidly increase in energy relative to that of the ISQ 1s - but as previously discussed, these elements are also susceptible to a pressure induced electronic \(n\)s \(\rightarrow(n-1)\)d transition. Within cesium, for example, the increased d occupation, already noted by Sternheimer in 1950 [53], was invoked to explain the curious structure of Cs-IV adopted at 4.3 GPa, where the Cs atoms possess a coordination number of 8 [55]. This decrease in coordination number with increasing pressure appeared so counterintuitive to Linus Pauling that he presented an alternative structure solution assuming cubic symmetry and invoking icosahedral clusters [105]. Pauling's hypothesis turned out to be incorrect.
Importantly, von Schnering and Nesper [106] recognized that the Cs atoms in the Cs-IV structure occupied the Th sites of the ThSi\({}_{2}\) lattice - and that the valence electron density exhibited maxima not near the Cs atoms but at the Si positions of ThSi\({}_{2}\) and between the Si-Si bonds, dubbing this phase an electride. This \(I4_{1}/amd\) structure is also assumed by K and Rb under pressure [107; 108; 109].
As if the high-pressure phase behavior of the alkali metals were not complex enough, Na, K, and Rb all adopt different versions of an incommensurately modulated host-guest lattice (similar to the W\({}_{5}\)Si\({}_{3}\) type), often referred to as the \(tI19\) structure, a model of which is illustrated for K in Figure 2d [4; 107; 110; 111; 112; 113; 114]. All three share the same host lattice, but display different periodicity in the guest lattice. In a study using commensurate approximants to model the electronic structure, electron localization in the interstitial spaces was found, with some highly localized basins as well as one-dimensional tracts of electron density lying in the channels of the host structure [114]. Another study proposed that electrides in the heavy alkali metals could be stabilized via ferromagnetic ordering [75].
Numerous predicted binary and ternary systems under pressure also behave as electrides, including two Li\({}_{3}\)Fe phases (with \(P6/mmm\) and \(P4/mbm\) symmetries) [115], \(P4/mbm\) Na\({}_{3}\)Fe [116], and a superconducting Y\({}_{3}\)Si phase [117]. In a range of Li\({}_{n}\)I stoichiometry phases predicted above 50 GPa, ISQs form within the interstices between I-centered Li polyhedra - but higher pressures drive electron density back from the ISQs to the iodine atoms, filling the 5p and eventually the 5d orbitals, skipping the 6s, in line with disfavoring s orbital occupation under pressure [103]. Finally, several electride phases have been
Figure 4: High pressure electride formation is observed in a number of complex high-pressure phases of the alkali metals, including both the insulating (a) \(hP4\) phase of Na [4] (Ni\({}_{2}\)In structure type) where ISQs containing two electrons occupy the In sites, and (b) semiconducting \(oC40\) Li, containing three inequivalent ISQ sites of which two (black) are doubly occupied, and the third (pink) is singly occupied, so a bond is formed between nearest neighbors, as highlighted by the lines that join them.
identified involving the famously unreactive noble gases - including a particularly surprising case [10] where the noble gas is crucial to the stability of the synthesized structure.
### Compounds of noble gases
Where did all the xenon go?
Relative to the abundance of Ar and Kr in the Earth's atmosphere, the amount of Xe is strikingly lower than it should be, a problem known as the "missing xenon paradox". Geoscientists have explained this discrepancy in many different ways [118; 119; 120], but a growing body of evidence suggests that the Xe did not escape, and instead it has been incorporated into the minerals found within the Earth. At Earth's core pressures, both the atomic orbital energies and the relative electronegativities of the elements, including Xe and the Fe and Ni that comprise the majority of the core, are significantly perturbed [48; 50]. Therefore, it should not be a surprise that their reactivity differs from our 1 atmosphere expectations.
Prior to the advent of widespread CSP, computational investigations concluded that Xe incorporation into Fe and Ni would not occur, at least not to a large extent [121; 122]. These studies, which relied on the assumption that the Xe-metal alloys adopted crystal lattices similar to those of the elemental metals, turned out to be incorrect. Later CSP-based studies predicted the emergence of stable Xe-Fe and Xe-Ni compounds above 250 and 200 GPa, respectively [123], with \(Pm\bar{3}m\) XeFe\({}_{3}\) (AuCu\({}_{3}\)-type) and \(Pmmn\) XeNi\({}_{3}\) (based on Xe@Ni\({}_{12}\) cuboctahedra) having the lowest enthalpies of formation, although \(P\bar{6}2m\) XeFe\({}_{5}\) and XeNi\({}_{5}\), as well as \(P2_{1}/m\) XeNi\({}_{6}\) also appeared on the convex hull. Confirming these predictions, experimental studies have synthesized XeNi\({}_{3}\) and Xe(Fe/Ni)\({}_{3}\) phases at high pressure, although with slightly different structures than those predicted. This includes \(Pm\bar{3}m\) for XeNi\({}_{3}\), either as an ordered AuCu\({}_{3}\)[67] or disordered CrNi\({}_{3}\) alloy [68], and a mixture of \(fcc\) and \(Pmmn\)-symmetry phases for XeFe\({}_{3}\)[68]. In these systems, the Fe and Ni atoms behave as oxidants, accepting 5p electron density from Xe [68; 123], in agreement with the predicted increase in electronegativity differences between Xe and the transition metals at high pressure [50]. Xe\({}_{2}\)FeO\({}_{2}\) and XeFe\({}_{3}\)O\({}_{6}\), both involving substantial Fe-O and Xe-O bonding, have also been computed to be stable at pressures relevant to the Earth's core [124]. The high-pressure ArNi phase, in which some Ni 3d electron density is transferred to the Ar 4s, inducing a magnetic moment on the Ni, has been synthesized [125].
CSP calculations have also predicted stable compounds containing Xe, or other noble gas (NG) elements, and Mg above 125-250 GPa [102]. This includes Mg-Xe and Mg-Kr phases, which adopt structures based on stacked square lattices of Mg and the NGs in different patterns, ranging from the CsCl type (\(Pm\bar{3}m\)) to more complex \(P4/nmm\) or \(I4/mmm\) arrangements for MgNG and Mg\({}_{2}\)NG stoichiometries. Compounds of Mg with Ar, on the other hand, were found to favor hexagonal arrangements such as anti-NiAs type MgAr (\(P6_{3}/mmc\)). In these compounds the energies of the metal 3s orbitals
increase precipitously in comparison to the outer shell d orbitals of the noble gases inducing Mg 3s \(\rightarrow\) NG d orbital transfer. The ELF of Mg\({}_{2}\)NG (NG=Xe, Kr, and Ar) phases shows an additional interesting feature: ISQ formation. This occurs far below the pressures at which elemental Mg is predicted to form an electride [89; 92]. Two reasons have been used to explain this phenomenon [102]. First, far fewer ISQ sites - concomitantly occupying less space - relative to the elemental Mg electride are necessary to accept the displaced valence electrons of Mg, as many of them are transferred to the NG atoms instead. In addition, the NG atoms promote the formation of larger interstitial spaces in the structure, stabilizing the ISQ at lower pressures. Under moderate pressures of ca. 50-300 GPa, the energies of the outer shell Xe d orbitals are similar to those of the ISQ 1s, although with higher pressure they become lower in energy, congruent with the gradual ISQ 1s \(\rightarrow\) NG d electron transfer with increasing pressure up to 600 GPa [102].
Several other stable compounds of Xe have been predicted at high pressures, including XeO, XeO\({}_{2}\), and XeO\({}_{3}\)[126], as well as Xe\({}_{3}\)O\({}_{2}\), Xe\({}_{2}\)O, and Xe\({}_{7}\)O\({}_{2}\)[127], while Xe\({}_{2}\)O\({}_{5}\) and Xe\({}_{3}\)O\({}_{2}\) have both been experimentally observed at pressures lower than 100 GPa [128]. Krypton oxide, KrO, has been predicted as well [129], as have xenon nitrides [130; 131] and carbides [132]. Fluorides of argon [133], krypton [134], and xenon [135] - with Xe-Xe dimers cropping up in Xe\({}_{2}\)F and XeF - have all been predicted. Several xenon chlorides, including XeCl, XeCl\({}_{2}\), and XeCl\({}_{4}\), have been computationally studied, with the former two being metastable by 10 GPa and reaching the convex hull by 60 GPa [136]. When they are combined with Li, the noble gases Ar [137] and Xe [138] are predicted to behave as anions, with the Li 2s orbital rising above the Xe and Kr outer shell d orbitals. Several cesium xenides are predicted to be stable at high pressures, many adopting alternate colorings of a distorted bcc lattice [139]. There is experimental evidence for the formation of a phase mixing Xe with water ice at conditions expected for planets such as Uranus and Neptune [140].
Helium is famously the most inert element at ambient pressure by virtue of its closed-shell electronic configuration, zero electron affinity and large ionization potential. Nonetheless, a number of stable helium-containing compounds have recently been predicted at high pressure, including those with iron [141], ammonia [142], water [143; 144], nitrogen [145; 146], and even with _other noble gases_[147; 148] (the van der Waals compound NeHe\({}_{2}\), a Laves phase in the MgZn\({}_{2}\) structure, has been experimentally observed [149]).
A particularly noteworthy example is provided by Na\({}_{2}\)He [10], an electride phase with a fluorite-like lattice (Figure 5a) in which every Na\({}_{8}\) cube that does not contain a He atom is instead occupied by an electron pair (Figure 5b), so that the phase can be expressed as (Na\({}^{+}\))\({}_{2}\)(E\({}^{2-}\))He. Although He does not participate in any bond formation, its presence is nevertheless a crucial stabilizing force in this phase, which has been successfully synthesized above 113 GPa [10]. A subsequent computational study showed that the He atoms act as inert "spacers" to reduce the Madelung repulsion resulting from the unequal amounts of cations and anions in the parent (Na\({}^{+}\))\({}_{2}\)(E\({}^{2-}\)) phase [150]. Reaction
enthalpies for helium in combination with ionic AB, A\({}_{2}\)B and AB\({}_{2}\) compounds revealed that He incorporation was generally favored when the cation:anion ratios were unequal such as in MgF\({}_{2}\) and Li\({}_{2}\)O, but not for AB-type phases such as LiF or MgO, in line with the prediction of successful He incorporation into certain alkali oxides and sulfides [151]. Helium placement in the ionic lattices tends to separate ions of similar charge as shown schematically in Figure 5c. An FeO\({}_{2}\)He phase in which the Fe and O atoms form a fluorite lattice and the He atoms occupy the remaining Fe\({}_{8}\) cubes (isopointal to Na\({}_{2}\)EHe) was predicted to be stable above 120 GPa [152], with He appearing to play the same role of spacing agent. This mechanism allows even the most inert noble gases to play an active role in stabilizing compounds at high pressure, all without forming a single chemical bond.
### Miscibility under pressure
The noble gases obtained their names due to their general lack of reactivity, but under ambient conditions, numerous combinations of elements resist mixing to form alloys or stoichiometric compounds. The proclivity or reluctance of a pair of elements towards mixing has been explained in a variety of ways, resulting in predictive rules including those of Hume-Rothery and co-workers [153], Miedema's model [154; 155], Darken and Gurry's maps [156], and more [157; 158; 159]. As we will soon see, pressure turns out to be a useful variable that can alter the (im)miscibility of two or more elements.
Consider, for example, magnesium and iron, whose large size mismatch and small electronegativity difference at ambient pressure make compound formation intractable [160]. According to Miedema's rules, compound formation is favored by large electronegativity differences and similar charge densities [154]. The electronegativity difference between the two elements greatly increases under pressure [48; 57] - and because Mg is more compressible, its radius
Figure 5: Conventional unit cell of helium-containing Na\({}_{2}\)He [10] in the \(Fm\bar{3}m\) space group (a), an electride with localized electron pairs occupying octahedral vacancies (b). The mechanism by which He incorporation serves to stabilize the structure is illustrated in (c), which shows schematically how He insertion reduces electrostatic repulsions involved in the A\({}_{2}\)B (Na\({}_{2}\)E) stoichiometry.
approaches that of Fe when squeezed, thereby increasing the miscibility of the two elements. As a result, stable Mg-Fe compounds have been computationally [161; 162] and experimentally [163] studied under pressure. The higher compressibility of K has also been found to favor compound formation with transition metals such as Ag, even at pressures below those at which K is anticipated to undergo an \(n\)s \(\rightarrow\) (\(n-1\))d transition [7].
Another case in point of pressure-induced reactivity is provided by the Li-Be alloys predicted to be stable in the megabar regime [164]. By 20 GPa LiBe\({}_{2}\) reaches the Li-Be convex hull, where it is joined by LiBe\({}_{4}\) (shown in Figure 6a) and LiBe by 80 GPa. At 100 GPa the latter trades its place on the convex hull with Li\({}_{3}\)Be. Alignment of strong diffraction peaks with \(2k_{F}\) (twice the free-electron Fermi wavevector) is suggestive of stabilization through a Fermi surface-Brillouin zone interaction mechanism, which has been used to explain the particular stabilities (and electron counts which make them so) of Hume-Rothery electron phases [165; 166]. Furthermore, at around 82 GPa, an odd feature emerges in the DOS curve of \(P2_{1}/m\) LiBe: the base of the valence band appears as a step-like function, remains flat for \(\sim\)4 eV, and sharply increases once more before more complex features take over. This is linked to a distinct separation - made possible by the pressure-induced increase in the electronegativity difference between Li and Be - between high- and low-electron density planes associated with the Be and Li atoms, respectively, leading to quasi-2D-like behavior in a geometrically 3D structure. Stabilization through Fermi sphere interaction with higher zones (referred to as Jones zones from the Mott-Jones formulation of this mechanism) was invoked to propose a high-pressure NaAl phase in the NaTl structure type [167] just above 12 GPa.
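To make the \(2k_{F}\) criterion invoked above more concrete, the short sketch below computes the free-electron Fermi wavevector for a compressed cell and compares it with the reciprocal lattice vector of a strong diffraction peak; a ratio near unity is the signature of the Fermi surface-Brillouin zone (Hume-Rothery/Jones zone) stabilization mechanism. The electron count, cell volume, and d-spacing are illustrative placeholders, not values taken from the LiBe or NaAl studies cited above.

```python
# Minimal sketch of the Hume-Rothery / Jones-zone criterion: stabilization is
# favored when a strong diffraction peak satisfies |G| ~ 2 k_F, i.e. the
# free-electron Fermi sphere touches the corresponding zone boundary.
import numpy as np

def fermi_wavevector(n_electrons, volume_A3):
    """Free-electron k_F (in 1/Angstrom) for n_electrons in a cell of volume_A3 (Angstrom^3)."""
    n = n_electrons / volume_A3                      # electron density in 1/A^3
    return (3.0 * np.pi**2 * n) ** (1.0 / 3.0)

# Hypothetical compressed cell: 6 valence electrons in a 40 A^3 unit cell.
k_F = fermi_wavevector(n_electrons=6, volume_A3=40.0)

# Strong diffraction peak with a hypothetical d-spacing of 1.9 A -> |G| = 2*pi/d.
G = 2.0 * np.pi / 1.9

print(f"2 k_F = {2 * k_F:.2f} 1/A, |G| = {G:.2f} 1/A, ratio |G|/2k_F = {G / (2 * k_F):.2f}")
# A ratio close to 1 signals the Fermi sphere-Brillouin zone interaction
# invoked to rationalize Hume-Rothery electron phases.
```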
Another element that does not undergo compound formation with Li at ambient conditions is Fe. Just above 40 GPa, however, Li\({}_{3}\)Fe (\(P6/mmm\)) and LiFe (\(Fd3m\), NaCl-type) phases are computed to lie on the convex hull [115]. Some interstitial electron localization appears in Li\({}_{3}\)Fe, both in \(P6/mmm\) symmetry as well as the \(P4/mbm\) symmetry computed to prevail just above 60 GPa which is shown in Figure 6b. Both Li\({}_{3}\)Fe phases are host-guest lattices, with the Fe atoms lying in larger hexagonal (\(P6/mmm\)) and heptagonal (\(P4/mbm\)) channels, and the electron localization is found within the smaller triangular and square channels. Na\({}_{3}\)Fe is predicted to adopt this same \(P4/mbm\) phase between 120 and 300 GPa [116]. An additional Li\({}_{3}\)Fe\({}_{2}\) phase with \(C2/m\) symmetry appears on the convex hull by 80 GPa, involving Fe zigzag chains in combination with alternating Li linear or armchair chains.
Recently, a host of bismuth-containing phases have been found to be stabilized at high pressure. Bismuth is a component in numerous topologically nontrivial materials [170; 171] and superconducting systems at ambient pressure [172; 173; 174], yet under these conditions does not form stable compounds with many of the transition metals. In combination with the exotic bulk properties of bismuth, the various magnetic or otherwise electronically nontrivial properties of the transition metals make for a tantalizing combination in high pressure experiments. Under pressure, the estimated electronegativity of Bi
drops precipitously in comparison to most of the d-block, rendering it more reactive [48; 57]. Indeed, above 32 GPa, FeBi\({}_{2}\)[175; 176] was observed to form in DAC experiments in the \(I4/mcm\) Al\({}_{2}\)Cu structure type shared by NiBi\({}_{2}\)[177] and MnBi\({}_{2}\)[178] (both are stabilized by high pressure although other Ni-Bi and Mn-Bi phases are accessible at atmospheric pressure), as well as certain high-pressure transition metal pnictides [179; 180; 181]. A second phase, FeBi\({}_{3}\) with \(Cmcm\) symmetry, has also been predicted to lie on the convex hull between 36 and 39 GPa, but this narrow stability range likely hinders synthetic accessibility [176].
In fact, high-pressure high-temperature methods have been used to synthesize a wide variety of bismuth-containing compounds including CoBi\({}_{3}\)[182; 183], which adopts the \(Pnma\) NiBi\({}_{3}\) structure type, becomes stable by 5 GPa, and is a superconductor with a \(T_{c}\) just below 0.5 K. Synthesized binaries of Cu and Bi include Cu\({}_{11}\)Bi\({}_{7}\) at 3 GPa [184] and CuBi (Figure 6c) at 6 GPa [168; 169], as well as a \(I4/mmm\) Cu\({}_{2}\)Bi phase above 50 GPa [185], which
Figure 6: High pressure enables the mixing of elements which do not form compounds or alloys otherwise. These include (a) LiBe\({}_{4}\), a layered compound of Li and Be [164], (b) Li\({}_{3}\)Fe, a host-guest compound with Fe atoms as the guests to a Li-based host lattice [115], and (c) CuBi [168; 169], one representative of the many bismuth-containing compounds which have been found at high pressure. CuBi has been experimentally observed.
is possibly overtaken by a Cu\({}_{7}\)Bi\({}_{2}\) phase [177]. Of the second-row transition metals, MoBi\({}_{2}\), also in the Al\({}_{2}\)Cu structure type, has been synthesized above 35 GPa, while evidence for a Mo-Bi bcc-type alloy appeared above 5 GPa [186].
The Linear Approximation to Enthalpy (LAE), a tool for rapid and computationally cheap evaluation of formation enthalpies, was used to explore the high-pressure stability of structures in binary ambient-immiscible systems [177]. In concert with the minima hopping CSP method, several new phases were found to be stabilized by 50 GPa: PbAs, Si\({}_{3}\)Al, SiAl, SiAl\({}_{3}\), BiSn\({}_{3}\) - yet another bismuthide - In\({}_{3}\)Fe, Hg\({}_{3}\)In, HgIn, HgIn\({}_{3}\), Hg\({}_{3}\)Sn, ReSn\({}_{3}\), ReBr\({}_{3}\), ReGa, and ReGa\({}_{3}\). Only a limited range of stoichiometries (A\({}_{3}\)B, AB, and AB\({}_{3}\)) was sampled, encouraging further investigation into each of these systems - but now there is preliminary data to suggest fertile ground.
### Geometries and bonding
In the previous sections, we have explored curious electronic interactions made possible by external pressure and compound formation between unexpected species. Here, we shift our focus to the particular geometrical arrangements that emerge in materials under pressure. With higher density, atoms are forced into closer proximity - the possibility of electride formation notwithstanding - which can promote multicenter bonding in both electron-poor and electron-rich contexts as coordination numbers increase [13]. When electron-precise species are closely bunched under compression, electron-deficient multicenter bonding can emerge as the constituent electrons are needed to span more bonding interactions.
Bond symmetrization is a frequent secondary consequence of compression: an asymmetric fragment forced to occupy a progressively smaller space has less room for asymmetry. Often, this leads to a collapse into a symmetric and multicentered bonding regime, as was predicted for water ice under pressure by Pauling [187]. He suggested that the intermolecular hydrogen bonds between adjacent water molecules would shorten with pressure [187], eventually becoming equivalent with the intramolecular O-H bonds, as illustrated in Figure 6(a). This prediction was verified experimentally upon the discovery of ice X, where the lone pairs of the oxygen atoms are used to form additional covalent bonds, rendering the oxygen atoms tetrahedrally coordinated by hydrogens in a diamond-like network [188; 189]. Pressure-induced hydrogen bond symmetrization has also been observed in computations on hydrogen halide systems such as HF, HCl and HBr [190], as well as in the record-breaking superconductor \(Im\bar{3}m\) H\({}_{3}\)S [191].
Small homoatomic clusters alien to the 1 atmosphere pressure-trained mind are found or predicted for other elements in high pressure crystal structures. The wide structural variety has been explained by the increased stabilization of homonuclear bonds as compared to more polar or ionic bonds under pressure [104], favoring single-element clustering. An example of a compound containing novel homonuclear motifs is \(Pnma\) NaCl\({}_{3}\) (Figure 6(b)), computed to be stable from 20 to 48 GPa, featuring a linear Cl\({}_{3}^{-}\) anion reminiscent of
the more familiar triiodide I\({}_{3}^{-}\)[9]. Another such motif is the pentazolate N\({}_{5}^{-}\) ring, which can store more energy than the related azide anion N\({}_{3}^{-}\), but is challenging to synthesize at ambient pressure [192; 193]. This species, ubiquitous in high-pressure phases, is predicted to be a constituent of LiN\({}_{5}\)[194; 195; 196; 197] - a phase that has been successfully quenched to ambient conditions after synthesis at 45 GPa [198] - as well as of the sodium pentazolates NaN\({}_{5}\) and Na\({}_{2}\)N\({}_{5}\)[199], CsN\({}_{5}\)[200], CuN\({}_{5}\)[201], MgN\({}_{10}\) and BeN\({}_{10}\)[202], ZnN\({}_{10}\)[203], BaN\({}_{5}\) and BaN\({}_{10}\)[204], SnN\({}_{20}\)[205], and IrN\({}_{7}\)[206]. Polynitrogen chains feature in many proposed high-pressure compounds of Cs [200], Fe [207], Zn [203], Ba [204], Sn [205], Cd [208], Gd [209], and Ta [210], the last of which has been experimentally realized. High pressure also facilitates the formation of silicon clusters in predicted phases including Si\({}_{4}\) squares in CaSi [211], extended networks and clathrate-like cages in silicides such as CsSi\({}_{6}\)[212], MgSi\({}_{5}\)[213], and several lithium silicide compounds [214].
Figure 7: Geometrical and bonding adaptations at high pressures. With pressure (a), covalent O-H and intermolecular hydrogen bonds between separate water molecules in the ice VIII phase equilibrate to yield the symmetric ice X phase [188; 189]. Pressure coincides with the appearance of strange motifs including (b) linear trichloride anions in \(Pnma\) NaCl\({}_{3}\)[9], (c) clusters of Ge dumbbells (inset) in \(I4/mmm\) BaGe\({}_{3}\)[215; 216], and (d) Li\({}_{8}\)H “superatom-like” building blocks (inset) in \(Abm2\) Li\({}_{5}\)H [217]. Except for Li\({}_{5}\)H, all have been experimentally realized.
Tetrel clusters comprise a family of polar intermetallic \(I4/mmm\) symmetry compounds formed from alkaline earth or rare earth metals and group 14 tetrels in a 1:3 ratio. An example of these isotypic compounds, BaGe\({}_{3}\), is shown in Figure 7c. The clusters within it may be described as tetrel dumbbells condensed into cubes, which are capped on four equatorial faces by additional dumbbells shared with a neighboring cube, forming a loose three-dimensional network. This structure has been experimentally observed in Ca, Y, and Lu silicides [218] (and later identified in a CSP investigation of the Y-Si system [117]), and a related distorted \(I\bar{4}2m\) BaSi\({}_{3}\) phase has also been synthesized [219]. Alkaline earth trigermanides CaGe\({}_{3}\)[220], SrGe\({}_{3}\)[221], and BaGe\({}_{3}\)[215; 216] have also been found to adopt this structure. The distribution of electrons within the clusters aligns with two-center two-electron bonds along the tetrel dumbbells, with multicenter interactions between the tetrel and rare earth/alkaline earth [215; 221; 222]. Superconductivity has been measured in some of these compounds, albeit at low temperatures [216; 218], augmented by predictions from first principles calculations [104; 117; 222]. Additionally, the stability conferred by the strong covalently bonded networks permits some of these phases, synthesized at high-temperatures and high-pressures, to be recovered at ambient conditions [221; 222]. A similar example is presented by elemental carbon - diamond is its ground state at high-pressures, but due to the immense strength of its sp\({}^{3}\) covalently bonded network, the energetic barrier for its transition to the lower-enthalpy allotrope graphite is too high and it persists "forever" under ambient conditions. Furthermore, laser-driven ramp compression studies of carbon to 2 TPa have found that carbon stubbornly maintains the diamond structure well beyond its predicted high-pressure stability limits [223], as the barriers to breaking the sp\({}^{3}\) bonds remain large under pressure.
Another example of unique clusters predicted to form only at high pressures are found within a family of lithium subhydrides [217]. Computations uncovered two nearly isoenthalpic Li\({}_{5}\)H phases that had the most negative enthalpies of formation. Both were built of Li\({}_{8}\)H units that behaved as superatoms analogous to similar units in synthesized Rb\({}_{9}\)O\({}_{2}\) and Cs\({}_{11}\)O\({}_{3}\) suboxides [224]. One of these, with \(Abm2\) symmetry, is shown in Figure 7d. The Li\({}_{8}\)H cluster, a distorted bicapped trigonal antiprism of Li encapsulating a single H atom, has one electron in excess of the closed-shell octet and thus behaves as a superalkali atom.
In addition to the well-known H\({}_{2}\) molecular units and H\({}^{-}\) hydridic species, hydrogen atoms can form other distinct clusters. One of these, the trihydrogen cation, H\({}_{3}^{+}\), is in fact one of the most abundant species in the universe - but it is also largely relegated to interstellar space and the atmospheres of gas giant planets [225]. High pressure crystal chemistry offers another opportunity. The halogen polyhydride \(Cc\) H\({}_{5}\)Cl [226; 227; 228], predicted to become stable above 100 GPa, contains slightly distorted H\({}_{3}^{+}\) clusters with H-H distances of 0.74, 0.97 and 1.01 A [226]. By 300 GPa the three distances converge to 0.87-0.88 A, yielding a nearly-perfect equilateral triangle. With sufficient pressure, this H\({}_{3}^{+}\)
unit interacts with a neighboring H\({}_{2}\) molecule forming a twisted bowtie-like loosely interacting H\({}_{5}^{+}\) motif [226; 228]. Metastable predicted H\({}_{2}\)F, H\({}_{3}\)F, and H\({}_{5}\)F species, as well as H\({}_{5}\)Br also contain this triangular H\({}_{3}^{+}\) cation [227], as does the metastable \(P1\) LiF\({}_{4}\)H\({}_{4}\)[229].
With two extra electrons, the trihydride anion, H\({}_{3}^{-}\), prefers a linear arrangement involving a three-center four-electron bond. Quantum chemical calculations have shown that the ground state geometry of the isolated trihydride anion possesses one H-H bond that is substantially longer than the other (2.84 vs. 0.75 A), with the transition state between the H-H\(\cdot\cdot\cdot\)H and H\(\cdot\cdot\cdot\)H-H configurations corresponding to the symmetric case [230]. Nevertheless, certain predicted high-pressure hydrides of the heavy alkali metals K [66], Rb [231], and Cs [232] (as well as the alkaline earth metal Ba [233]) feature an H\({}_{3}^{-}\) anion symmetrized via pressure. Synthesized NaH\({}_{7}\) is thought to contain an asymmetric linear H\({}_{3}^{-}\) motif [234]. Linear H\({}_{3}^{-}\) units are also predicted to appear in various indium [235] and lithium polyhydrides [236].
Scandium polyhydrides, meanwhile, are predicted to feature five-membered rings of hydrogen atoms in various arrangements. In \(I4_{1}/md\) ScH\({}_{9}\), which lies on the convex hull around 300 GPa [237], strips of edge-sharing H\({}_{5}\) pentagons are stacked perpendicular to one another along the \(c\) axis, linked by vertices. The strips are separated by additional H atoms in molecular H\({}_{2}\) units. Around 250 GPa, ScH\({}_{10}\) adopts a \(Cmcm\) structure in which H\({}_{5}\) pentagons are grouped into sets of three, sharing edges and a single common vertex. This phase is nearly isoenthalpic with another ScH\({}_{10}\) structure with the same (H\({}_{5}\))\({}_{3}\) "pentagraphenelike" clusters but arranged in \(P6_{3}/mmc\) symmetry [238]. ELF demonstrates bonding within the H\({}_{5}\) units [237; 238]. This same pentagraphenelike structure is expected to lie on the Lu-H convex hull at 300 GPa and to be very near the Hf-H and Zr-H hulls [238]. For higher hydrogen contents yet, ScH\({}_{12}\) is predicted to be built of stacked strips of edge-sharing H\({}_{5}\) pentagons spaced by Sc [237].
Indeed, the first, most simple, element does not like to be outdone! One more class of high-pressure hydrogen rich materials whose prediction sparked tremendous experimental synthetic efforts are the so-called "metal superhydrides". The reason why scientists have pursued them in earnest is the prediction, verified by recent experiments, of conventional superconductivity at temperatures approaching those experienced in a cold room (the \(T_{c}\) of LaH\({}_{10}\) is about 10 \({}^{\circ}\)F - January in Siberia), or a crisp fall day (the \(T_{c}\) of C-S-H is about 60 \({}^{\circ}\)F), albeit still at very high pressures! CSP investigations into the high-pressure Ca-H system revealed a curious \(Im\bar{3}m\) CaH\({}_{6}\) phase (Figure 8a) stable above 150 GPa in which the Ca atoms were arranged in a bcc lattice and the H atoms condensed into a sodalite-like H\({}_{24}\) framework [73]. All H-H distances were equivalent, 1.24 A at 150 GPa, and ELF analysis confirmed weak covalent bonding between the H atoms. This phase, a good metal, was predicted to exhibit large electron-phonon coupling, and indeed first principles
calculations estimated a superconducting transition temperature \(T_{c}\) of 220-235 K at 150 GPa. Subsequent synthetic exploration led to measurement of \(T_{c}\) over 200 K at 160-170 GPa for CaH\({}_{6}\)[71; 72].
The computational discovery of CaH\({}_{6}\) was shortly followed up by fruitful theoretical investigations into related metal-hydrogen systems, turning up isostructural phases for Mg [239], Sc [237; 240; 241], Y [34; 35; 242], Pu [243], Tb [244], Eu [245], and Pm-Lu [246]. Structures that are distortions of this high symmetry phase have been predicted as well. This includes a tetragonally-distorted \(I4/mmm\) ZrH\({}_{6}\) variant [247], along with an \(R\bar{3}m\) phase in which opposite hexagonal faces of the H\({}_{24}\) cubic sodalite framework are opened for SrH\({}_{6}\)[248; 249; 250] and LaH\({}_{6}\)[34; 35], although the latter may not lie on the convex hull. \(Imm2\) BaH\({}_{6}\)[233], which contains some of the H\({}_{3}^{-}\) trihydride anions explored above, can be thought of as a highly fragmented version of the sodalite framework. Distortions of the high-symmetry \(Im\bar{3}m\) structure adopted by CaH\({}_{6}\) tend to reduce the density of states at the Fermi level, \(E_{F}\), thereby also lowering \(T_{c}\). The origin of such distortions has recently been investigated using the lens of DFT-Chemical Pressure [251].
Higher hydrogen content allows for other clathrate-like arrangements of hydrogen, from \(P6_{3}/mmc\) YH\({}_{9}\)[35] (Figure 8b) to \(Fm\bar{3}m\) LaH\({}_{10}\)[34; 35] (Figure 8c). Distorted versions of these two structures have also been predicted, including \(C2/m\) CaH\({}_{9}\)[252] and \(P1\) Eu\({}_{4}\)H\({}_{36}\)[245], as well as \(R\bar{3}m\) CaH\({}_{10}\)[252] and AcH\({}_{10}\)[253]. In the case of LaH\({}_{10}\), quantum anharmonic effects were found to be key in stabilizing the \(Fm\bar{3}m\) structure over less symmetric variants [254]. Other more complex clathrate-like hydrogenic frameworks have been predicted as well. One example is a diamond-like lattice of Mg@H\({}_{28}\) clusters intercalated with Li@H\({}_{18}\) units, which comprise the \(Fd\bar{3}m\) Li\({}_{2}\)MgH\({}_{16}\) phase. This compound is an example of a "hot" superconductor whose estimated \(T_{c}\), 351 K at 300 GPa, is well above room temperature [255]. Such materials are
Figure 8: Clathrate-like metal hydrides predicted then synthesized at high pressures. Their remarkable superconducting properties are tied to strong electron-phonon coupling. Several metal-hydrogen stoichiometries adopt these so-called “superhydride” motifs, including (a) \(Im\bar{3}m\) MH\({}_{6}\), exemplified by CaH\({}_{6}\)[71; 72; 73], (b) \(P6_{3}/mmc\) MH\({}_{9}\), exemplified by YH\({}_{9}\)[35; 69], and (c) \(Fm\bar{3}m\) MH\({}_{10}\), exemplified by LaH\({}_{10}\)[34; 35; 37; 70].
under intense speculation and investigation for their promise towards achieving room-temperature superconductivity, as described in Section 4.3 below.
## 4 Superconductivity
The 1911 discovery of a phenomenon in which a substance's resistivity can plummet to zero [256] sparked countless investigations and resulted in a Nobel prize for Heike Kamerlingh Onnes, as well as a number of future Nobel prizes (directly or indirectly). The mechanism of superconductivity, and the search for new superconducting materials, has fascinated scientists for over a century. A key parameter for superconductors is the critical temperature, \(T_{c}\), below which a material becomes superconducting. For a number of illustrative superconducting materials, \(T_{c}\) is plotted against the pressure at which they are stable in Figure 9. Of course, practical applications of superconductivity are limited if temperatures very near 0 K are required, and for some decades the highest known \(T_{c}\) values lingered in the low twenties [257; 258]. This sparked debate regarding a possible natural "cap" on superconductivity around these temperatures [259]. However, a family of cuprates whose superconducting mechanism has yet to be explained was the first to break the liquid nitrogen barrier [260], achieving \(T_{c}\)s over 160 K in the case of pressurized HgBa\({}_{2}\)Ca\({}_{m-1}\)Cu\({}_{m}\)O\({}_{2m+2+\delta}\)[261].
The 2001 discovery of superconductivity up to 39 K in MgB\({}_{2}\)[263] provided experimental evidence that non-cuprate phases could be promising superconductors, despite the fact that they deviated from the collection of empirical rules enumerated by B. T. Matthias in the 1950s and 1960s [272]. These ranged from favorable valence electron concentrations to a general distrust for theorists. MgB\({}_{2}\) belongs, with early and long-term record holders Nb\({}_{3}\)Sn and Nb\({}_{3}\)Ge, as well as the clathrate-like hydrides discussed in Section 3.5, to the family of "conventional" superconductors whose mechanism is thought to follow the theory propounded by Bardeen, Cooper, and Schrieffer in 1957 [273; 274]. This discovery revitalized interest in conventional superconductors [275], leading researchers to wonder what the trajectory of superconductivity research would have looked like had the 1957 measurements of the heat capacity of MgB\({}_{2}\)[276] captured the discontinuity that appears upon the superconducting transition.
Within BCS theory, two electrons of opposite momentum and spin that are within \(\pm\hbar\omega_{\rm cut}\) (an energy in line with the phonon energies) of the Fermi surface may, at long distances, overcome Coulombic repulsion and experience a net attractive potential when the lattice is polarized through phonon vibrations. This forms a Cooper pair, a composite boson of weakly interacting species. Thermal energy can break the Cooper pairs, with \(T_{c}\) describing the temperature at which this occurs and the superconducting state is destroyed. From
this construction, the \(T_{c}\) for a material can be estimated by
\[k_{B}T_{c}=1.14\hbar\omega\exp\left[\frac{-1}{N_{F}V}\right] \tag{1}\]
where \(\omega\) is the average phonon energy, \(N_{F}\) is the single spin electronic density of states at \(E_{F}\), and \(V\) is the pairing potential between two electrons resulting from the electron-phonon interaction. This suggests that high \(T_{c}\) is correlated with a large \(N_{F}\) (a feature potentially tunable via judicious doping), strong coupling between electrons and phonons, and high phonon frequencies. A frequently used semiempirical formula to estimate the \(T_{c}\) of a conventional superconductor is the Allen-Dynes modified McMillan equation [27; 28; 29]:
\[T_{c}=\frac{\omega_{\mathrm{ln}}}{1.2}\exp\left[-\frac{1.04(1+\lambda)}{\lambda-\mu^{*}(1+0.62\lambda)}\right]. \tag{2}\]
Figure 9: Critical superconducting temperature, \(T_{c}\), plotted against pressure for selected materials. Data points based on experimental measurements of \(T_{c}\) are plotted with filled circles, while those estimated via theoretical calculations are plotted with empty circles. Elemental \(T_{c}\)s are given in brown, intermetallics in green, cuprates (belonging to the non-BCS unconventional family) in orange, and hydrides in blue. The boiling point of liquid nitrogen and room temperature are provided to guide the eye, with a star marking the “holy grail” of room-temperature superconductivity at ambient pressure. Data was collected from references [255; 260; 261; 262; 263; 264; 265; 266; 267; 268; 269; 270; 271].
where \(\lambda\) is the electron-phonon coupling constant, \(\omega_{\ln}\) is the logarithmic average phonon frequency and \(\mu^{*}\) is the Coulomb repulsion parameter, which is typically treated semiempirically. An approximate - and illustrative - formula to estimate \(\lambda\) was proposed by Hopfield as [280]
\[\lambda=\frac{N_{F}\langle I^{2}\rangle}{M\langle\omega^{2}\rangle} \tag{3}\]
where \(\langle I^{2}\rangle\) are the electron-phonon matrix elements averaged over the Fermi surface, \(M\) is the atomic mass, and \(\langle\omega^{2}\rangle\) is the mean-square phonon frequency. To increase \(\lambda\), then, \(N_{F}\) and the electron-phonon matrix elements should be increased. In contrast to expectations from Equation 1 and Equation 2, where the average phonon energy was directly proportional to \(T_{c}\), here an increase in \(\langle\omega^{2}\rangle\) serves to decrease \(\lambda\). In the denominator of Equation 3, \(\langle\omega^{2}\rangle\) and \(M\) will naturally tend to counteract one another, as increases in atomic mass lead to softer phonon frequencies. In fact, evidence suggests that \(T_{c}\) frequently increases at the edge of dynamic instability, where soft phonons promote strong coupling [281].
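As a rough numerical illustration of how Equation 2 behaves, the sketch below evaluates the Allen-Dynes modified McMillan expression over a small grid of coupling strengths and logarithmic phonon frequencies, with the Coulomb parameter fixed at a commonly assumed value of \(\mu^{*}=0.10\). The inputs are generic illustrative numbers, not parameters fitted to any particular material discussed in this review.

```python
import numpy as np

def allen_dynes_tc(lam, omega_ln_K, mu_star=0.10):
    """Tc (K) from Eq. 2; omega_ln_K is the logarithmic average phonon frequency in kelvin."""
    return (omega_ln_K / 1.2) * np.exp(-1.04 * (1.0 + lam)
                                       / (lam - mu_star * (1.0 + 0.62 * lam)))

# Stronger coupling and stiffer (hydrogen-dominated) phonons both push Tc upward:
for lam in (0.5, 1.0, 2.0):
    for omega_ln_K in (300.0, 800.0, 1300.0):
        tc = allen_dynes_tc(lam, omega_ln_K)
        print(f"lambda = {lam:3.1f}, omega_ln = {omega_ln_K:6.1f} K -> Tc ~ {tc:6.1f} K")
```

For the largest inputs shown (a coupling constant of 2 and a logarithmic phonon frequency of 1300 K) the estimate lands near 190 K, in the same ballpark as the hydride superconductors discussed below, which is precisely why strong coupling to high-frequency hydrogen phonons is so sought after.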
In the following sections, we describe the effect of pressure on the propensity for superconductivity of the elements, hydrogen in particular. We end by discussing families of hydrogen-rich phases that are extremely promising towards achieving the once-distant goal of room-temperature superconductivity (albeit potentially only at very high pressures).
### The elements
Most of the elements in the periodic table can be superconducting given the right conditions. This includes, so far without exception, rather low temperatures. However, external pressure can affect the \(T_{c}\), or even induce superconductivity in some elements [282; 283; 284; 285]. In fact, calcium at ambient pressure is not a superconductor and achieves \(T_{c}=29\) K at 216 GPa [264]. In comparison, the highest elemental \(T_{c}\) at ambient pressure is 9.2 K for niobium [262], while only lithium among the alkali metals [286], and only beryllium [287] among the alkaline earth metals are known to superconduct at ambient pressure, both well under 0.1 K. Of the fifty-four known superconducting elements of the periodic table, only thirty-one are superconductors at ambient pressure.
Early indications regarding the ability of pressure to either enhance or suppress superconductivity [288; 289; 290] were less than encouraging. For simple metals whose electronic structure aligns with a mostly free-electron model, such as Zn, Cd, Hg, and the group 13 metals, applied pressure serves to suppress what superconductivity is present at ambient pressure [289; 290]. In such free-electron-like metals, the effect of pressure is to broaden electronic bands and increase phonon frequencies due to the stiffer lattice. Band broadening reduces the electronic density of states at \(E_{F}\), while a stiffer lattice is correlated
with weaker coupling between electrons and phonons, both effects being detrimental to the superconductivity of a system. At ambient pressure, the alkali metals behave as free-electron metals, but as we have seen in Section 3.2, with pressure their electronic structure rapidly diverges from these expectations.
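A toy free-electron estimate, sketched below, illustrates the statement that compression alone broadens bands and depletes the density of states at \(E_{F}\): holding the electron count fixed while shrinking the volume raises \(E_{F}\) and lowers \(N(E_{F})=3N_{e}/2E_{F}\). The volumes and electron count are arbitrary illustrative numbers, not data for any specific metal.

```python
HBAR2_OVER_2ME = 3.81  # hbar^2 / (2 m_e) in eV * Angstrom^2
PI = 3.141592653589793

def free_electron_EF_and_dos(n_electrons, volume_A3):
    """Return (E_F in eV, total N(E_F) in states/eV) for a free-electron gas."""
    density = n_electrons / volume_A3                          # electrons per A^3
    E_F = HBAR2_OVER_2ME * (3.0 * PI**2 * density) ** (2.0 / 3.0)
    return E_F, 1.5 * n_electrons / E_F                        # N(E_F) = 3 N_e / (2 E_F)

for volume in (100.0, 80.0, 60.0):    # progressively compressed cell, in A^3
    E_F, dos = free_electron_EF_and_dos(n_electrons=10, volume_A3=volume)
    print(f"V = {volume:5.1f} A^3 -> E_F = {E_F:5.2f} eV, N(E_F) = {dos:4.2f} states/eV")
```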
For Li, \(T_{c}\) is highly pressure-dependent and reliant on complex crystal chemistry [291; 292; 293]. Like all of the alkali metals, it adopts a bcc crystal structure at ambient pressure, which in short order transitions to the fcc structure. Up to 8 GPa, the increase in pressure is reflected in an increase in phonon frequencies, typical for a stiffening lattice, but with further pressure the phonons become softer and just above 30-40 GPa, imaginary modes related to a structural instability appear. This pressure corresponds to another phase transition, this time to the \(hR1\) (\(R\bar{3}m\)) structure, and shortly thereafter to the \(cI16\). Maxima in \(T_{c}\) are associated with the onset of dynamical instability, as the very soft phonon motions strongly bolster the electron-phonon coupling [281; 294]. From its mostly spherical character at 0 GPa, where lithium adopts a bcc crystal structure, the pressure-induced 2s \(\rightarrow\) 2p electronic transition drives an increasingly anisotropic Fermi surface featuring _hot spots_ of especially strong coupling [295], losing free-electron-like behavior and leading to Fermi surface nesting (FSN) [296; 297; 298]. Phonon softening, in particular along the \(\Gamma\rightarrow\) K path, is accompanied by enhancement of the electron-phonon coupling [299; 300], with the result that \(T_{c}\) grows from practically zero to a maximum of \(\sim\)20 K. This value is among the higher elemental \(T_{c}\)s, as a result of the pressure-induced electronic transitions in lithium. Following the structural transition to the \(cI16\) phase, lithium remains superconducting but \(T_{c}\) decreases due to a reduction of the FSN and concomitant smaller electron-phonon coupling [299; 301; 302]. At high enough pressures, lithium undergoes a metal-semiconductor transition - and eventually goes back to being a (poor) metal upon transitioning to the \(oC24\) (\(Cmca\)) phase [303], but one nonetheless predicted to be superconducting with an estimated \(T_{c}\) of 14 K at 200 GPa [304].
In cesium [305; 306] and rubidium [307; 110], the onset of superconductivity with pressure is associated with the \(oC16\) (Cs-V and Rb-IV) structures, alongside the \(n\)s \(\rightarrow\) (\(n-1\))d transition [307]. An increase in d-character in the electronic states at \(E_{F}\) is correlated with higher \(T_{c}\) in the transition metals [308; 309], and the same correlation applies to the heavier alkali metals - which, as we have seen, behave under pressure as transition metals themselves.
### Hydrogen
Vitaly Ginzburg, awarded the Nobel Prize in Physics in 2003 for his work in superconductivity and superfluidity, formulated a list of, in his view, the thirty most pressing problems for physics in the 21st century [310; 311]. Following controlled nuclear fusion, the second and third items on this list were high-temperature (room-temperature) superconductivity and metallic hydrogen. These problems are not unrelated.
In 1926, J. D. Bernal proposed that under sufficient pressure hydrogen would transition to a metallic state, but it took nearly a decade for pen to
be put to paper by Wigner and Huntington [312]. Their 1935 suggestion that hydrogen could be metallized by 25 GPa, estimated using a series of assumptions regarding crystal structure and compressibility, proved to be an immense underestimate. In 1968 Neil Ashcroft explicitly linked the quest for metallic hydrogen with the quest for superconductors with higher \(T_{c}\)s, with the suggestion that metallic hydrogen itself would be quite a fantastic superconductor [313]. Hydrogen, being the lightest element, can possess the highest vibrational frequencies (as a diatomic molecule) and experience a large electron-phonon coupling due to the lack of screening by core electrons. Moreover, in the metallic state its DOS at \(E_{F}\) is thought to be quite high, making for a very attractive material.
The pressure required to metallize hydrogen, however, is in the multi-megabar range. Claims of metallic or semimetallic hydrogen have been made for DAC experiments at very high pressures and low temperatures, toeing the line of the practical limits of these techniques [314; 315; 316]. Complicating the picture, different experiments used different measures to characterize hydrogen's transition to metallicity, ranging from vibrational spectroscopic techniques such as IR [316] to optical measurements such as reflectance and opacity [314] and resistivity measurements [315], as well as different scales to calibrate pressure. At times, this led to seemingly contradictory results, prompting questions regarding experimental accuracy and reproducibility [317; 318; 319]. _Ab initio_ calculations taking into account the quantum fluctuations of the hydrogen nuclei, however, can reconcile some of these differences, finding that in the \(C2/c\)-24 high-pressure phase the electronic gap closes (marking the transition to metallicity) before the optical gap does (marking the transition from transparency to reflectivity) [320].
Additionally, the impractically low \(T_{c}\)s of ambient pressure materials are then traded for a much higher \(T_{c}\) in pure metallic hydrogen, but at impractically high pressures! In fact, _ab initio_ modeling suggests progressive jumps in \(T_{c}\) with a transition from the molecular (estimated \(T_{c}\) = 356 K near 500 GPa) to the atomic phase (\(T_{c}\) increasing to 481 K ca. 700 GPa), and with an atomic-atomic phase transition at \(\sim\)1-1.5 TPa driving up \(\lambda\) and resulting in an immense estimated \(T_{c}\)= 764 K [321]. To address the hydrogen metallization problem, Ashcroft proposed another strategy - instead of pure hydrogen, hydrogen-rich metallic alloys could be targeted as putative superconductors [322]. The presence of additional atoms in the hydrogen matrix would confer a _chemical precompression_, thereby lowering the external pressure required to reach the metallic state. Under ambient conditions, the crystal structures adopted by metal hydrides are largely subject to the dictates of balanced oxidation states, hence alkali metal hydrides assume the rock salt structure, and hydrides of +2 metals favor fluorite or Co\({}_{2}\)Si structures, while trivalent metal hydrides tend towards the BiF\({}_{3}\) structure, and so on [323; 324]. Much higher hydrogen content would be needed for such hydrogen-rich alloys, as suggested by Ashcroft, and furthermore many of the resulting structures might differ greatly from anything observed at ambient conditions. Defying the
recommendations of Matthias, the simultaneous and serendipitous advances in CSP methods meant that theoreticians were well poised to answer this call!
### Clathrate-like hydrides
They were successful. As described above, calculations on the high-pressure Ca-H system located the CaH\({}_{6}\) phase described in Section 3.5[73], in which Ca atoms are embedded into a hydrogenic sodalite-like clathrate framework. The strong electron-phonon coupling predicted for \(Im\bar{3}m\) CaH\({}_{6}\) can be traced to breathing and rocking modes of the square H\({}_{4}\) units of the sodalite framework [73]. The molecular orbital diagram for such an H\({}_{4}\) square has, above a filled bonding state, a half-occupied degenerate non-bonding state. Assuming full ionization (integrated charges within atomic basins according to the Quantum Theory of Atoms in Molecules indicate roughly 1 electron per Ca is transferred to the hydrogen network), these orbitals accept the roughly 1/3 electron per H transferred from the Ca atom. This favors symmetry-breaking Jahn-Teller distortions - key contributions to the electron-phonon coupling - that lift the degeneracy.
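The level pattern invoked in this argument can be reproduced with a back-of-the-envelope Hückel model of a square H\({}_{4}\) unit, sketched below; the on-site and hopping parameters are arbitrary, and the model is meant only to show the filled bonding level, the half-filled degenerate pair, and why the additional electrons donated by Ca make the unit susceptible to degeneracy-lifting (Jahn-Teller-like) distortions.

```python
import numpy as np

alpha, beta = 0.0, -1.0                 # arbitrary on-site and nearest-neighbor Hueckel terms
H = alpha * np.eye(4)
for i in range(4):                      # couple each site of the 4-membered ring to its neighbors
    H[i, (i + 1) % 4] = beta
    H[(i + 1) % 4, i] = beta

levels = np.sort(np.linalg.eigvalsh(H))
print("H4 square levels (in units of |beta|):", np.round(levels, 3))
# -> [-2.  0.  0.  2.]: one bonding level, a doubly degenerate non-bonding pair,
# and one antibonding level. The four H electrons fill the bonding level and
# half-fill the degenerate pair; the ~1/3 electron per H donated by Ca enters
# this partially filled manifold, so degeneracy-lifting (Jahn-Teller-like)
# distortions of the square are favored and couple strongly to the electrons.
```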
Similar clathrate-like hydrides, as outlined in Section 3.5, were rapidly predicted in several other systems. \(Im\bar{3}m\) YH\({}_{6}\) has been synthesized with a measured \(T_{c}\) of 224 K at 160 GPa [325]. Hydrides with even higher hydrogen content have been synthesized as well, including a \(P6_{3}/mmc\) YH\({}_{9}\) phase with a \(T_{c}\) of 262 K at ca. 180 GPa [69] (a subsequent study reported a slightly lower \(T_{c}\) of 243 K at 200 GPa [326]), and an isotypic CeH\({}_{9}\) phase whose \(T_{c}\) has only been predicted (57 K at 88 GPa and up to 100 K at 130 GPa [35], or 105-117 K by 200 GPa [267]). The relatively low pressure required to stabilize CeH\({}_{9}\) has been ascribed to strong chemical precompression from the delocalized Ce 4\(f\) electrons [327]. The reported \(T_{c}\)s for \(Fm\bar{3}m\) LaH\({}_{10}\) (250 and 260 K at 170 and 185 GPa, respectively [70; 37]) are in line with theoretical predictions of 257-274 K at 250 GPa [34]. Isotypic \(Fm\bar{3}m\) YH\({}_{10}\) is computed to be a room temperature superconductor with a \(T_{c}\) of 305-327 K at 250 GPa [34]. However, YH\({}_{10}\) has thus far eluded synthetic efforts. Partial doping with lanthanum appears to be one strategy to stabilize YH\({}_{10}\): a series of ternary (La/Y)H\({}_{10}\) phases have been experimentally observed, with measured \(T_{c}\) = 253 K [38].
Although the \(T_{c}\)s of many clathrate-like hydrides are stunning, these phases will surely decompose well above atmospheric pressures! CeH\({}_{9}\) is remarkable for the comparatively low, sub-megabar, pressures at which it maintains dynamic stability [267]. In an attempt to preserve the loosely-bound hydrogenic clathrate frameworks that are associated with such strong electron-phonon coupling to lower pressures, one promising strategy involves the addition of a third element in an attempt to further chemically precompress the hydrogenic lattices. The \(Fm\bar{3}m\) LaBH\({}_{8}\) phase, which is predicted to maintain dynamic stability down to 40 GPa [269], can be derived from LaH\({}_{10}\) by removing two hydrogen atoms per formula unit and placing boron atoms into the center of H\({}_{8}\) cubes that are empty in LaH\({}_{10}\)[271; 328]. LaBH\({}_{8}\) has an estimated \(T_{c}\) of 126 K at 50 GPa [269] - and the isostructural LaBeH\({}_{8}\) phase
is predicted to achieve a \(T_{c}\) of 183 K at 20 GPa [271]. Other XYH\({}_{8}\) phases have been proposed, with a variety of possible elemental combinations ripe for tuning stability and properties [271]. A second possibility is afforded by the XY\({}_{2}\)H\({}_{8}\) phases that can be constructed by leaving the H\({}_{8}\) cubes empty, but stuffing the center of H\({}_{4}\) tetrahedra instead. KB\({}_{2}\)H\({}_{8}\) (dynamically stable to 12 GPa [270]) and LaC\({}_{2}\)H\({}_{8}\) (dynamically stable down to 70 GPa [329]) are two representatives of this structural arrangement, with estimated \(T_{c}\)s of 134-146 and 69 K, respectively.
Key to the success of the clathrate-like hydrides is the maintenance of loosely-coordinated networks of hydrogen, rather than condensation into H\({}_{2}\) molecules. The effect of H-H interatomic distances on superconductivity can be seen in the MH\({}_{4}\) hydrides, which adopt the \(I4/mmm\) structure shared with ThCr\({}_{2}\)Si\({}_{2}\)[330; 331]. Hydrogen occupies two inequivalent sites in the ThCr\({}_{2}\)Si\({}_{2}\) structure - the apical H\({}_{a}\) (Wyckoff position 4\(e\), Si) and basal H\({}_{b}\) (4\(d\), Cr). A plethora of metal hydrides have been predicted or synthesized in this structure type under pressure, including Ca [73; 252; 332] and Sr [248; 249], Sc [237; 240; 241], Y [34; 242], and Zr [247] and rare earths La [34], Ce [35; 267; 333], Pr [334; 335; 333], Pu [243], Tb [244], Eu [245], Nd [35; 336], and Th [337; 338], making systematic study enticing and useful [331]. With metal oxidation states ranging from +2 to +4, the formulas of these compounds can be written as M\({}^{x+}\)(H\({}_{b}^{-}\))\({}_{2}\)(H\({}_{a}\))\({}_{2}^{(x-2)-}\), with hydride H\({}_{b}\) and a range of charges possible on the H\({}_{a}\) atoms. The \(T_{c}\)s of these phases are correlated with the length of the H\({}_{a}\)-H\({}_{a}\) contacts, which can behave anywhere from covalently bound H\({}_{2}\) units to fully dissociated hydridic anions depending on the metal atom - similar to the behavior of the X-X bond in ThCr\({}_{2}\)Si\({}_{2}\)-type AB\({}_{2}\)X\({}_{2}\) phases [339]. The size of the metal atom can be relevant, as larger atoms will stretch the H\({}_{a}\)-H\({}_{a}\) contact through purely steric interactions, but more important is the valency of the metal atom. Electron transfer from the electropositive metal into the H\({}_{a}\)-H\({}_{a}\) motif directly populates the H\({}_{2}\)\(\sigma_{u}^{*}\) antibonding orbitals, but is also driven by a Kubas-like two-pronged mechanism of H\({}_{2}\)\(\sigma_{g}\)\(\rightarrow\) M \(d\) donation, and M \(d\)\(\rightarrow\) H\({}_{2}\)\(\sigma_{u}^{*}\) back-donation. With enough H\({}_{2}\)\(\sigma_{u}^{*}\) population, the H\({}_{a}\) atoms behave in a hydridic fashion lowering the \(T_{c}\), as seen in ZrH\({}_{4}\)[247] and ThH\({}_{4}\)[337]. Donation of sufficient electron density to weaken, but not fully break, the H\({}_{a}\)-H\({}_{a}\) bonding interaction results in a much higher DOS at \(E_{F}\) and enhanced \(T_{c}\), as in YH\({}_{4}\)[242].
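The formal bookkeeping behind the M\({}^{x+}\)(H\({}_{b}^{-}\))\({}_{2}\)(H\({}_{a}\))\({}_{2}^{(x-2)-}\) description can be made explicit with the small sketch below, which tracks how many electrons are left for the apical H\({}_{a}\)-H\({}_{a}\) unit (and hence its \(\sigma_{u}^{*}\) population) as the metal valence grows. The ionic limit is only a bookkeeping device, the metals listed are just representative examples, and the qualitative labels paraphrase the trend described in the text.

```python
def apical_unit_charge(metal_valence):
    """Formal charge on the apical (Ha)2 pair once both basal Hb are counted as hydrides."""
    return -(metal_valence - 2)

for valence, metal in ((2, "Ca"), (3, "Y/La"), (4, "Zr/Th")):
    q = apical_unit_charge(valence)
    sigma_u_star_electrons = -q          # electrons formally entering the Ha-Ha sigma_u*
    if sigma_u_star_electrons == 0:
        behavior = "essentially intact, H2-like Ha-Ha unit"
    elif sigma_u_star_electrons == 1:
        behavior = "weakened (stretched) Ha-Ha bond, boosting N(E_F) and Tc"
    else:
        behavior = "Ha-Ha bond formally broken into two hydridic Ha-"
    print(f"M = {metal:5s} (x = +{valence}): (Ha)2 charge = {q:+d} -> {behavior}")
```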
### Covalent hydrides
The first hydride to top the charts, as it were, was not of the metal clathrate-like family, but instead came from attempts to metallize H\({}_{2}\)S. Theory identified an H\({}_{2}\)S compound that was computed to possess a \(T_{c}\) of 80 K at 160 GPa [340]. Experimental confirmation followed shortly thereafter, finding a phase with \(T_{c}<\) 100 K, but in the process a higher-temperature preparation method yielded a sample with a \(T_{c}\) of 203 K at 150 GPa [266]. A few years before, synthetic exploration into the (H\({}_{2}\)S)\({}_{2}\)H\({}_{2}\) stoichiometry found a phase stabilized by pressure-induced hydrogen bonding above 3.5 GPa [341]. This inspired
CSP investigations of the H\({}_{3}\)S stoichiometry, which found an \(R3m\) phase with \(T_{c}=155\)-\(166\) K at \(130\) GPa [342]. By \(180\) GPa this structure transitioned to one with \(Im\bar{3}m\) symmetry (Figure 10a) for which the estimated \(T_{c}\) was \(191\)-\(204\) K at \(200\) GPa. Serendipitously the experimental [266] and theoretical [342] manuscripts appeared at nearly the same time. Subsequent XRD studies supported the identification of the \(203\) K superconductor as \(Im\bar{3}m\) H\({}_{3}\)S [343], though other structures have been proposed [344; 345; 346; 347; 348; 349].
A host of studies on the H\({}_{3}\)S superconductor have followed, exploring the isotope effect, role of anharmonicity, and possible quantum effects [191; 350; 351; 352; 353; 354; 355; 356; 357; 358; 359]. The inclusion of quantum nuclear motions lowered the pressure at which the less symmetric \(R3m\) phase was predicted to transition to the \(Im\bar{3}m\) structure with symmetric H-S bonds, bringing it into the range of pressures where high \(T_{c}\)s had been measured [191]. One of the most striking features of the electronic structure of H\({}_{3}\)S is a pair of van Hove singularities bracketing \(E_{F}\)[360; 361]. Shifting the position of \(E_{F}\), potentially by doping, could increase the number of states that can participate in the electron-phonon coupling mechanism, and therefore increase the \(T_{c}\) of the system.
Doping is a common strategy used for precise tuning of \(E_{F}\), and computations using the virtual crystal approximation (VCA) suggested that the addition of a little bit of phosphorus, carbon, or silicon could raise the \(T_{c}\) into the room-temperature regime \(>280\) K [362; 363; 364]. In the VCA, alchemical pseudoatoms are constructed from weighted averages of the component atom potentials. The resulting chemical chimeras, however, cannot accurately model the local structural and electronic effects that arise when one atom is replaced with an entirely different element. This throws off, in particular, the very dynamical response properties that one must calculate carefully to obtain reasonable estimates of \(T_{c}\). Additional studies based on actual doped H\({}_{3}\)S models constructed as supercells have sought to explore the local effects of doping [365; 366; 367; 368], although the calculation of dynamical properties of the requisite large unit cells can be prohibitively expensive.
One particularly promising system involved the addition of carbon to the H\({}_{3}\)S lattice by way of methane intercalation [369; 372; 373]. Stoichiometries that are a linear combination of CH\({}_{4}\) and H\({}_{3}\)S (the most simple of which is CSH\({}_{7}\)) proved especially interesting. They yielded a variety of dynamically stable (although energetically metastable) structures, which differed in the orientation of the methane molecules encapsulated in the H\({}_{3}\)S lattice. Some of the highest \(T_{c}\)s predicted for these phases were \(181\) K at \(100\) GPa for \(I\bar{4}3m\)[372], and \(181\)-\(194\) K for the \(R3m\) symmetry structure [369] shown in Figure 10b.
Independently, photochemical synthesis in the C-S-H system yielded the first report of room-temperature superconductivity, achieving a \(T_{c}\) of \(288\) K at \(267\) GPa [268]. This report has inspired a slew of follow-up work and much debate [371; 374; 375]. XRD analysis performed at pressures below the purported room-temperature superconducting transition [370; 376] is consistent with the Al\({}_{2}\)Cu geometry (as well as an orthorhombic \(Pnma\) variant) associated with CH\({}_{4}\)-H\({}_{2}\)[377] and H\({}_{2}\)S-H\({}_{2}\)[341] (Figure 10c). This may suggest an
overall stoichiometry of [(CH\({}_{4}\))\({}_{2}\)H\({}_{2}\)]\({}_{x}\)[(H\({}_{2}\)S)\({}_{2}\)H\({}_{2}\)]\({}_{y}\) for the room-temperature C-S-H superconductor, although subsequent phase transitions at higher pressure to the high-\(T_{c}\) superconducting phase cannot be ruled out. In fact, additional studies indicate just such a structural transition occurs to form the room-temperature superconducting phase, with indications of methane signatures in the Raman spectra [371]. As was the case for the binary H\({}_{3}\)S system, it appears that a panoply of metastable phases may be accessible by slight variations on synthetic procedure, in particular on carbon content [378], offering plenty of space for further experimental and theoretical discoveries.
In addition to sulfur-based covalent hydrides, phosphorus hydrides have sparked interest after compression of a phosphine (PH\({}_{3}\)) sample yielded a material that became superconducting at 30 K at 83 GPa, increasing to 103 K at 207 GPa [265]. The structure and composition of the responsible phase or phases were unclear, prompting an array of follow-up studies leveraging CSP techniques to identify plausible compounds [379; 380; 381; 382; 383]. Pressure was found to drive the decomposition of phosphine into a variety of products with stoichiometries including PH, PH\({}_{2}\), PH\({}_{3}\), and more. A predicted \(C2/m\) PH\({}_{3}\) phase [380] featuring P-P bonds (in contrast to the H\({}_{3}\)S superconductor, which has no S-S bonding) was estimated to be superconducting below 83 K at 200 GPa, in line with the experimental values. Another study suggested that multiple metastable decomposition products of phosphine, including those with PH and PH\({}_{2}\) stoichiometries, might in combination be responsible for the observed superconductivity [382]. PH\({}_{2}\) phases with \(C2/m\) and \(I4/mmm\) symmetries, differing by a tilt in the component H-P-H moieties, had estimated \(T_{c}\)s of 76 and 70 K, respectively [381]. Later, another set of PH\({}_{2}\) phases was proposed consisting of simple cubic layers of phosphorus capped with hydrogen
Figure 10: The covalent hydride \(Im\bar{3}m\) H\({}_{3}\)S [266] (a) represents a breakthrough in high-\(T_{c}\) conventional superconductivity. Peaks in the electronic DOS near \(E_{F}\) have prompted numerous investigations on doped versions of H\({}_{3}\)S, discovering phases such as the methane-intercalated (CH\({}_{4}\))H\({}_{3}\)S = CSH\({}_{7}\) [369] (b), among many others. A recent synthesis of a room-temperature superconductor consisting of carbon, sulfur, and hydrogen generated even more momentum, with diffraction analysis performed at pressures below those where the room-temperature superconducting transition was observed, revealing the presence of a phase based on the \(I4/mcm\) Al\({}_{2}\)Cu-like structure adopted by the van der Waals (H\({}_{2}\)S)H\({}_{2}\) phase [370; 371; 341] (c).
atoms and further intercalated with H\({}_{2}\) molecules acting as Coulombic spacing agents [383]. At 80 GPa, these structures had estimated \(T_{c}\)s ca. 30 K, similar to the values that were measured. Raman spectroscopic measurements provided evidence for phosphine dimerization coupled with dehydrogenation under pressure, yielding compositions such as P\({}_{2}\)H\({}_{4}\) and P\({}_{3}\)H\({}_{6}\)[384; 385]. In these phases, low temperatures were required to maintain stability at multi-megabar pressures. The \(T_{c}\) of a predicted \(C2/m\) P\({}_{4}\)H\({}_{6}\) phase was estimated to be 67 K at 200 GPa [385].
The plethora of metastable P-H compounds under pressure has prompted computational investigations into ternary systems containing phosphorus and hydrogen. Above 250 GPa, an \(R\bar{3}\) LiP\({}_{2}\)H\({}_{14}\) phase, consisting of P@H\({}_{9}\) clusters spaced by Li atoms as well as isolated H atoms, achieves an estimated \(T_{c}\) of 169 K at 230 GPa (where it is metastable) [386]. \(Pm\bar{3}\) LiPH\({}_{6}\), a colored variant of the A15 crystal structure adopted by intermetallic superconductors Nb\({}_{3}\)Ge [258] and Nb\({}_{3}\)Sn [257], has an estimated \(T_{c}\) of 150-167 K at 200 GPa (where it is metastable) [387]. In the S-P-H system, obviously tantalizing for its connection to the H\({}_{3}\)S superconductor as well as to phosphine derivatives, relatively low \(T_{c}\)s were predicted for phases on the high-pressure convex hulls, but low-lying metastable structures based on phosphorus substitution into the \(Im\bar{3}m\) H\({}_{3}\)S lattice were promising, including \(Im\bar{3}m\) S\({}_{7}\)PH\({}_{24}\), which had an estimated \(T_{c}\) of 183 K at 200 GPa [388].
## 5 Conclusion
Although the entirety of the lived human experience resides within a vastly narrow pressure range, the universe is not so simple. The chemistry we know at 1 atmosphere is not the chemistry of Jupiter, Saturn, or even the center of our own planet Earth. Starting from the periodic table itself, the ramifications of pressure are rapidly found to alter elemental behavior and, consequently, how the elements interact with one another to form new and bizarre phases. Potassium, in its guise as a "transition metal", enjoys all manner of new chemical interactions - in compound formation and in the wildly complex electride elemental structures it adopts. Cesium can become anionic, and helium takes an active role in stabilizing a network of sodium and interstitial quasiatoms. Strange geometrical and bonding motifs from clusters to networks abound.
Yet not only are the structures of phases - electronic and crystalline - molded by high pressure, but high-pressure studies have revolutionized the search for high-temperature superconductivity. Pressure induces superconductivity in a plethora of elements, and drives the formation of phases containing structural motifs whose atomic vibrations can be strongly coupled to the underlying electronic structure. From the clathrate-like LaH\({}_{10}\) to the covalent H\({}_{3}\)S - and the intensely-discussed CSH room-temperature superconductor - the playing field of high-pressure materials is a promising one for the future.
## Acknowledgments
We acknowledge the NSF (DMR-1827815) for financial support. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Fusion Energy Sciences under Award Number DE-SC0020340 to E.Z. K.P.H. thanks the US Department of Energy, National Nuclear Security Administration, through the Chicago-DOE Alliance Center under Cooperative Agreement Grant No. DE-NA0003975 for financial support. We thank Giacomo Scilla for his help in editing and preparing the manuscript.
|
2310.07209 | Multi-task Explainable Skin Lesion Classification | Skin cancer is one of the deadliest diseases and has a high mortality rate if
left untreated. The diagnosis generally starts with visual screening and is
followed by a biopsy or histopathological examination. Early detection can aid
in lowering mortality rates. Visual screening can be limited by the experience
of the doctor. Due to the long tail distribution of dermatological datasets and
significant intra-variability between classes, automatic classification
utilizing computer-aided methods becomes challenging. In this work, we propose
a multitask few-shot-based approach for skin lesions that generalizes well with
few labelled data to address the small sample space challenge. The proposed
approach comprises a fusion of a segmentation network that acts as an attention
module and classification network. The output of the segmentation network helps
to focus on the most discriminatory features while making a decision by the
classification network. To further enhance the classification performance, we
have combined segmentation and classification loss in a weighted manner. We
have also included the visualization results that explain the decisions made by
the algorithm. Three dermatological datasets are used to evaluate the proposed
method thoroughly. We also conducted cross-database experiments to ensure that
the proposed approach is generalizable across similar datasets. Experimental
results demonstrate the efficacy of the proposed work. | Mahapara Khurshid, Mayank Vatsa, Richa Singh | 2023-10-11T05:49:47Z | http://arxiv.org/abs/2310.07209v1 | # Multi-task Explainable Skin Lesion Classification
###### Abstract
Skin cancer is one of the deadliest diseases and has a high mortality rate if left untreated. The diagnosis generally starts with visual screening and is followed by a biopsy or histopathological examination. Early detection can aid in lowering mortality rates. Visual screening can be limited by the experience of the doctor. Due to the long tail distribution of dermatological datasets and significant intra-variability between classes, automatic classification utilizing computer-aided methods becomes challenging. In this work, we propose a multitask few-shot-based approach for skin lesions that generalizes well with few labelled data to address the small sample space challenge. The proposed approach comprises a fusion of a segmentation network that acts as an attention module and classification network. The output of the segmentation network helps to focus on the most discriminatory features while making a decision by the classification network. To further enhance the classification performance, we have combined segmentation and classification loss in a weighted manner. We have also included the visualization results that explain the decisions made by the algorithm. Three dermatological datasets are used to evaluate the proposed method thoroughly. We also conducted cross-database experiments to ensure that the proposed approach is generalizable across similar datasets. Experimental results demonstrate the efficacy of the proposed work.
keywords: Deep Learning, Few-shot Learning, Medical Imaging, Melanoma Classification, Prototypical Networks, Segmentation +
Footnote †: journal: Elsevier
## 1 Introduction
Skin diseases are common human illnesses that affect between 30 and 70% of people and can worsen if left untreated. These diseases can be caused by a number of factors, including microscopic bacteria, fungi that thrive on the skin, allergies, or pigmentation disorders [1]. These cause lesions on the skin. A skin lesion refers to any imperfection or defect located either above or below the skin surface. These lesions are divided into two categories - benign and malignant. Benign lesions correspond to non-threatening skin tumours such as cysts or moles; malignant tumours are cancerous lesions, including melanoma, squamous cell carcinoma, and basal cell carcinoma, among others [2]. Melanoma originates from melanocyte cells, has a mole-like appearance, and is often black or brown in colour [3]. Despite constituting only 7% of cases, melanoma has a mortality rate of 75%, making it the deadliest type of skin cancer [4]. According to the WHO [5], there were around 324,000 "melanoma of the skin" cases globally in 2020. Furthermore, roughly 106,110 additional cases and 7,180 skin cancer deaths were estimated in the United States alone in 2021, according to [6]. Early detection of melanoma can prevent metastases and, as a result, mortality in the majority of cases.
An experienced dermatologist diagnoses skin cancer visually, followed by various clinical tests such as biopsy or histopathological examination. Generally, the doctors use dermoscopy, a non-invasive procedure, that magnifies the region at a high resolution and help them to clearly identify the minute spots and other details of the lesions. However, it is limited by human vision, perception and sensitivity. It requires significant training and experience in addition to the time-consuming procedure. This triggered the research community to develop precise automated approaches to assist doctors in early screening. This research aims to propose a non-invasive computer-aided screening method to help classify skin lesions and aid in the visual screening process.
Classification of skin lesion images has a relatively extensive literature. Some early methods use hand-crafted features [7; 8; 9; 10] such as texture, colour, and shape. However, these methods have limitations due to high visual similarity, intra-class variability, and the
presence of various artefacts. Furthermore, these methods were dataset-specific and did not generalize well across datasets. In addition to the boundary between the lesion and the healthy skin, as shown in Fig. 1, artefacts such as hair, reference scales and low contrast are examples that might affect classification performance. This leads to the need to extract the region of interest (ROI) before performing classification. It led to the development of approaches that are dependable, resilient, and generalizable.
In the field of medicine, deep learning has had a lot of success [11; 12; 13; 14; 15]. These approaches have shown good performance [16], can automatically learn relevant features [17], are capable of handling large and complex datasets and are highly adaptable. Despite advancements in deep learning algorithms, there are some challenges that need research focus. The first challenge is the lack of large annotated datasets, particularly for rare diseases. Using deep learning on small datasets will lead to overfitting of the model. Hence, learning a reliable and robust model with a small number of samples is another challenge. Existing techniques are ineffective in dealing with such scenarios where retraining on new data is required. Due to the small number of available samples, existing techniques may not be able to generalize effectively, resulting in lower performance.
The research community is focusing on developing approaches to overcome the aforementioned challenges. One line of research is the few-shot learning-based approaches. These approaches can be meta-learning-based [18], metric-based [19; 20], or optimization-based [21]. These approaches are gaining popularity because the model generalizes well to fewer data samples. Despite achieving success in a variety of tasks, an automated medical diagnosis system ought to be transparent, comprehensible, and explainable. One of the most desired properties is explainability [22]. Explainability is referred to as the process of examining the decisions that the system has made. It should be able to justify the reasoning behind any decision. The end-user or health professional should be able to retrace the decisions that inculcate trust among them. Additionally, it is important to demonstrate features in such a way that is fully understandable to non-deep learning users, such as medical professionals.
Continuing in the path of dealing with low data regimes and introducing explainability, this work aims to present a multitask explainable few-shot learning-based approach by fusing segmentation and classification tasks for skin lesions. In this work, we use prototypical networks, a metric-based few-shot learning approach. This approach is more straightforward, robust and computationally efficient as compared to other approaches. To extract the ROI from the input image, we have added a segmentation step in our proposed approach that will extract the ROI from the input image and enable the classification model to focus on only that region. We have also added visual descriptions of the decisions made by the system that aid in comprehending the regions that were focused on while making any decision.
To summarise, we propose a multitask few-shot learning technique for skin lesion classification that fuses segmentation and classification tasks in a unified manner. The following are the key contributions of this paper:
* We propose a prototypical few-shot learning-based end-to-end network for the skin lesions
* The proposed framework combines the segmentation and classification in such a way that the artefacts are excluded before classifying the input image
* We evaluate the performance of the proposed method in comparison to the Cosine (Cos) and Euclidean (Euc) distances
* Cross-database experiments are conducted utilising the HAM10000, Derm7pt and PH2 datasets to illustrate the generalizability of the proposed approach.
* Explainability of the proposed approach is illustrated using GradCAM results.
Figure 1: Showcasing the skin samples from HAM10000 (top row), PH2 (middle row) and Derm7pt (bottom row). This figure highlights the need for segmentation in skin lesion classification, as there are artifacts that need to be removed before classifying the image.
## 2 Related Work
This section provides a brief overview of approaches developed for segmenting and classifying skin lesion images.
### Skin Lesion Segmentation
Jafari et al. [23] have proposed using a guided filter as a pre-processing filter and extracted local and global patches based on fixed window size to classify the pixel as a lesion or normal. Xue et al. [24] have proposed an adversarial-based skin lesion segmentation network. In this work, the authors have proposed a multi-scale L1 loss function that learns global and local features by capturing long- and short-range pixel relationships. Wei et al. [25] conducted a similar research in which the authors utilised an attention module that automatically suppresses irrelevant features and focuses just on lesion features. In addition, the authors propose a new loss function that combines the Jaccard distance loss and the adversarial feature mapping loss. Experiments show that their proposed method produces precise masks with sharp boundaries. A GAN-based strategy for skin lesion segmentation has also been proposed by Tu et al. [26]. The authors employed a network with a Dense-residual block to help in feature propagation and reuse and multi-scale feature mapping from several layers. Bi et al. [27] have developed a stacked adversarial learning technique for segmenting skin lesions. The authors iteratively learned class-specific features to maximise feature diversity and added them to the FCN training data. The results of the experiments show that this method enhanced segmentation accuracy.
Mishra et al. [28] have proposed an algorithm for skin lesion segmentation using the concept of U-Net architecture with fewer layers. The authors have compared their results with otsu's thresholding method and concluded that deep learning gives better results. SkinNet, a modified version of U-Net, is proposed by Vesal et al. [29]. The architecture employed dilated and densely block convolutions during training to incorporate multiscale and global information from the input image. The results show that their proposed method can handle the poor contrast between the lesion and healthy tissue. Hasan et al. [30] proposed the DSNet, a semantic segmentation network for skin lesions. To reduce the number of parameters, the authors used dense blocks in the encoder and depth-wise separable convolutions in the decoder. As a result, a lightweight segmentation network with good segmentation results is developed. Song et al. [31] have proposed an end-to-end approach for skin lesion analysis where they jointly performed detection, segmentation, and classification using focal loss and Jaccard distance to improve the class imbalance and segmentation performance. CMM-Net is a network proposed by Al-Masni et al. [32] for biomedical image segmentation. In the UNet design, the authors have combined the global contextual features of multiple spatial scales.
Khadga et al. have proposed an optimization-based few-shot approach for medical image segmentation [33; 34]. The authors suggested employing bi-level optimization to solve the vanishing gradient problem. During training, a compound loss function consisting of log-cosh-dice and binary cross-entropy loss is used. The results of the experiments show that the proposed method performs well and has a higher generalisation ability. Zhang et al. [35] have proposed an approach to extract rich embedding features like global embedding, peak embedding, adaptive embedding and a depth-priority context module in one-shot semantic segmentation. To incorporate the prior knowledge from support and query images, Xiao et al. [36] proposed an approach for skin lesion segmentation for low-data regimes. The authors have extracted the prior mask from the samples (support and query) that helps discard the background and improve the segmentation performance. Sun et al. [37] have proposed a transformer-based approach for the few-shot semantic segmentation. The authors have employed transformer blocks to extract global information and convolutional layers for local information. In continuation to this, Shen et.al. [38] proposed to use semi-supervised few-shot semantic segmentation where the authors have used poisson learning to model the relationship between labelled and unlabeled samples. Also, the authors have used spatial consistency, which further improves the performance.
### Skin Lesion Classification
Various studies have been conducted that utilize transfer learning while classifying skin lesions [39; 40; 41]. The authors have used deep architectures and feature maps of various layers while designing the approach. These approaches demonstrate the feasibility of using deep learning in classifying skin lesions. Yu et al. [42] have proposed a method for detecting melanoma. The authors used a deep residual network and Fisher Vector encoding to generate a global feature representation. The authors employed SVM as a classifier, and experimental studies indicate the efficacy of their proposed method. Huang et al. [43] have proposed MelanomaNet as a melanoma detection architecture to boost feature diversity. To model the relationship between feature channels, the authors used the Inception-v4 network and included a residual-squeeze-and-excitation (RSE) block. The authors employed SVM with RBF kernel for classification. Gessert et al. [44] have proposed an approach for addressing resolution and class-imbalance issues in dermoscopic skin lesions. The authors propose a new patch-based attention mechanism that explicitly models local and global information. To resolve the class imbalance, the authors used a loss function with diagnosis-guided loss weighting, providing more weight to hard samples.
To use an ensemble approach to classify skin images, Harangi et al. [45] have proposed the use of an ensemble consisting of deep architectures such as AlexNet, VGGNet and GoogLeNet. Tang et al. [46] used ensemble learning to develop a skin lesion classification approach that included both global and local information from the input image. Adegun et al. [47] have proposed a deep-learning approach to automatically segment and classify skin lesion images. The authors have used multi-scale encoder-decoder architecture and FCN-based DenseNet for segmentation and classification. Rodrigues et al. [48] have proposed an IOT-based solution for the classification of skin lesions by using a web service called LINDA, which performs all the computations over the cloud. Qin et al. [49] have presented the use of style-based GAN to generate new data for training in skin lesion classification. Other approaches have posed skin lesion analysis as a few-shot problem and tried to design methods that work well with fewer data.
To improve the existing prototype-based method for skin lesion classification, Zhuet. al. [50] proposed a temperature network that generates query-specific prototypes leading to compact intra-class distributions. In this direction, Roy et. al. [51] proposed to evaluate each sample's influence (maximum mean discrepancy) with mean embedding on the sample distribution of the particular class. To further strengthen the concept of using prototypes, Prabhu et al. [52] have proposed a prototypical clustering network for dermatological conditions. They selected more than one prototype for a class and then refined them during training. Mahajan et al. [53] have proposed a few-shot learning approach for the identification of skin diseases. The authors have used group equivariant convolutions (G-convolutions) instead of standard convolutions, which improves the network's performance. Li et al. [12] have proposed a meta-learning-based approach for rare disease diagnosis. The authors have used a dynamically scaled cross-entropy loss that automatically down-weights the easy tasks and focuses on hard tasks. To use the attention mechanism for feature extraction, Liu et. al. [54] proposed to use a relative position network (RPN) and relative mapping network (RMN). The authors used RPN to extract features, and RMN was used to obtain the similarity during the classification process.
The aforementioned algorithms yield good accuracy on the task at hand. However, they only perform segmentation or classification, and none of them integrates the two tasks in an end-to-end manner for the low data regime scenario. Also, limited algorithms incorporate explainability in few-shot learning-based algorithms for skin lesion analysis. To the best of our knowledge, this is the first paper to propose an explainable few-shot learning-based end-to-end system for skin lesion analysis that employs a segmentation mask as attention while classifying the input images.
## 3 Methodology
This paper presents a multitask deep learning few-shot-based approach for skin lesion classification. The steps involved in the proposed network are summarized in Fig. 3.
### Problem Definition
The problem is to train a multi-task model that can jointly perform skin segmentation and classification on a dataset with a small number of labelled samples. We have adopted a prototypical-based approach for classification and an encoder-decoder-based architecture for segmentation.
### Multitask Network for Segmentation and Classification
This section explains the working of the proposed approach.
#### 3.2.1 Segmentation Network
A segmentation model is generally an encoder-decoder-based architecture where the encoder outputs the encoding, and the decoder uses it to construct the prediction binary mask. In this work, for segmentation, we have used UNet [55] as it has shown significant performance in medical images [56]. Almost all skin lesion images have one region that has to be classified; hence, the segmentation model is trained using skin lesion images only. We train the segmentation network using the skin lesion images from publicly available datasets, including those used in this work. For each image \(X_{i}\), segmentation loss is computed between the predicted mask
and the ground truth mask using the pixel-wise binary cross-entropy loss below.
\[L_{s}=-\sum_{x,y}\Big[\mathbf{y}_{i}(x,y)\log(\widehat{\mathbf{y}}_{i}(x,y))+(1-\mathbf{y}_{i}(x,y))\log(1-\widehat{\mathbf{y}}_{i}(x,y))\Big]\]
where \(L_{s}\) is the segmentation loss for image \(X_{i}\), \(y_{i}(x,y)\) and \(\widehat{y}_{i}(x,y)\) are the pixel value of ground truth and predicted masks at point (x,y), respectively.
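For concreteness, this pixel-wise binary cross-entropy can be written as a short PyTorch function; the tensor names, shapes, and the clamping constant below are illustrative assumptions rather than details of the original implementation.

```python
import torch

def segmentation_loss(pred_mask: torch.Tensor, gt_mask: torch.Tensor) -> torch.Tensor:
    """Pixel-wise binary cross-entropy between predicted and ground-truth masks.

    pred_mask, gt_mask: tensors of shape (B, 1, H, W) with values in [0, 1].
    """
    eps = 1e-7  # guard against log(0); illustrative choice
    pred_mask = pred_mask.clamp(eps, 1.0 - eps)
    loss = -(gt_mask * torch.log(pred_mask) + (1.0 - gt_mask) * torch.log(1.0 - pred_mask))
    return loss.sum(dim=(1, 2, 3)).mean()  # sum over pixels, average over the batch
```

Up to the reduction convention, this is equivalent to `torch.nn.functional.binary_cross_entropy`.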
#### 3.2.2 Classification Network
This work follows the concept of the prototypical approach for classification as in [19]. Fig. 2 illustrates the basic working of such networks.
Similar to previous few-shot learning-based approaches [21; 19; 57], we also follow an episodic way of learning. In episodic learning, each few-shot task is treated as an episode, meaning the model must learn to recognize novel classes based on only a few examples.
In prototypical networks, features are extracted from support and query images which act as the input to the rest of the pipeline. For each class in the support set, a representative vector (mean feature vector) is computed as the class prototype, and a distance is calculated between the query and the mean vectors. To predict the class label of the query image, the query image is assigned to the class with the closest distance, and a softmax activation function is used to get the class probabilities. Formally, the problem can be defined as:
Let \(\mathcal{L}=\{l_{1},\dots,l_{x}\}\) be the set of seen classes with a large number of labelled samples and \(\mathcal{N}=\left\{n_{1},\dots,n_{y}\right\}\) be the set of new and rare classes with fewer labelled samples, where \(x\) and \(y\) are the total number of samples in \(\mathcal{L}\) and \(\mathcal{N}\), respectively. The two sets are disjoint i.e., \(\mathcal{L}\cap\mathcal{N}=\phi\). From \(\mathcal{L}\), we can construct a large number of classification tasks by repeated sampling without replacement. Each task consists of a support set and a query set. The aim is to predict the labels of the query images based on the support images.
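A minimal sketch of the prototype construction and nearest-prototype prediction described above is given below (PyTorch); the function and variable names are assumptions made for illustration.

```python
import torch

def class_prototypes(support_feats: torch.Tensor, support_labels: torch.Tensor, n_way: int) -> torch.Tensor:
    """Mean embedding (prototype) of each class in the support set.

    support_feats: (N_support, D) embeddings; support_labels: (N_support,) integers in {0, ..., n_way - 1}.
    """
    return torch.stack([support_feats[support_labels == k].mean(dim=0) for k in range(n_way)])

def nearest_prototype(query_feats: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Assign each query embedding to the class whose prototype is closest in Euclidean distance."""
    dists = torch.cdist(query_feats, prototypes)  # (N_query, n_way)
    return dists.argmin(dim=1)
```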
### Fused Network
This section presents the fusion of the two networks to obtain a unified pipeline. The segmentation model helps to focus on the region of interest and is used in two experiments.
* (E1) The segmentation model is frozen (no training done) and used in the fused network to generate masks without additional refinement.
* (E2) We freeze some layers of the pretrained segmentation model and only train the last layer with the classification task to predict the segmentation masks. These masks are refined during back-propagation of the segmentation loss. This step helps to improve the classification accuracy.
The overall working of the fused network is as follows: The input images and ground truth masks pass through the segmentation network. The prediction masks are generated for the images. For each predicted mask, the segmentation loss is calculated using the binary cross-entropy loss defined in Section 3.2.1. The predicted mask is then used to extract the ROI from each respective input image using pixel-wise multiplication. These masked images are fed to the classification network, where they are divided into support and query images. In this work, ResNet50 [58] pre-trained on ImageNet is employed as the backbone for feature encoding. Episodic training is then performed on these support and query sets.
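A sketch of this masking step is shown below; `seg_net` stands in for the UNet, and the assumption that it outputs logits (hence the sigmoid) is ours, not a statement about the authors' code.

```python
import torch

def extract_roi(images: torch.Tensor, seg_net: torch.nn.Module) -> torch.Tensor:
    """Suppress background pixels using the mask predicted by the segmentation network.

    images: (B, 3, H, W); the predicted mask (B, 1, H, W) broadcasts over the colour channels.
    """
    pred_mask = torch.sigmoid(seg_net(images))  # assumed: seg_net returns per-pixel logits
    return images * pred_mask                   # pixel-wise multiplication keeps the lesion region

# The masked images are then split into support and query sets and embedded by the
# ResNet50 backbone, e.g. feats = backbone(extract_roi(batch, seg_net)).
```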
Figure 2: Illustrating the basic working of prototypical networks
The classification loss is then computed from the distances between the query embeddings and the class prototypes. To compute the distance between a query and the class prototypes, we have used the Euclidean and cosine distances, which are given in Eq. 1 and 2. Let \(Q\) represent the embedding of the query image, and \(M_{i}\) represent the mean vector of the \(i\)-th class. The Euclidean distance (EUC) is calculated as follows:
\[\mathrm{EUC}(Q,M_{i})=\sqrt{\sum_{j=1}^{n}(Q_{j}-M_{i,j})^{2}} \tag{1}\]
and the Cosine distance is calculated as:
\[\mathrm{CD}(Q,M_{i})=\frac{Q\cdot M_{i}}{\|Q\|_{2}\,\|M_{i}\|_{2}} \tag{2}\]
The classification loss is computed by taking the log-softmax over the negative computed distances. Eq. 3 gives the probability that the query sample x belongs to class \(k\), given the distance between its embedding and the prototype \(\textbf{p}_{k}\) of each class. Learning is performed by minimizing the negative log-probability of the true class via the optimizer. The classification loss is computed as,
\[p_{\theta}(y=k\mid\textbf{x})=\frac{\exp(-d(f_{\theta}(\textbf{x}),\textbf{p }_{k}))}{\sum_{k^{\prime}}\exp(-d(f_{\theta}(\textbf{x}),\textbf{p}_{k^{\prime }}))} \tag{3}\]
\[L_{c}=-\log p_{\theta}(y=k\mid\textbf{x}) \tag{4}\]
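Putting Eqs. 1-4 together, the classification loss can be sketched as follows (PyTorch). Treating one minus the cosine similarity as a distance is a common convention we adopt here for illustration, and the helper names are assumptions.

```python
import torch
import torch.nn.functional as F

def classification_loss(query_feats, prototypes, query_labels, metric="euclidean"):
    """Prototypical classification loss: log-softmax over negative distances (Eqs. 3-4)."""
    if metric == "euclidean":
        dists = torch.cdist(query_feats, prototypes)  # Eq. 1, shape (N_query, n_way)
    else:
        sims = F.cosine_similarity(query_feats.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)  # Eq. 2
        dists = 1.0 - sims
    log_p = F.log_softmax(-dists, dim=1)    # Eq. 3
    return F.nll_loss(log_p, query_labels)  # Eq. 4, averaged over the query set
```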
The final loss that gets backpropagated is calculated by adding the segmentation loss to the weighted classification loss, as given in Eq. 5. Here, the value of \(\lambda\) is set to 2.
\[L_{total}=L_{s}+\lambda\times L_{c} \tag{5}\]
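A single joint optimization step on this objective can then be written as below, with \(\lambda=2\) as stated above; the `optimizer` is assumed to hold the trainable parameters of both networks.

```python
import torch

def joint_step(loss_seg: torch.Tensor, loss_cls: torch.Tensor,
               optimizer: torch.optim.Optimizer, lam: float = 2.0) -> torch.Tensor:
    """One update on the combined objective of Eq. 5."""
    loss_total = loss_seg + lam * loss_cls
    optimizer.zero_grad()
    loss_total.backward()  # gradients reach both the segmentation head and the classification backbone
    optimizer.step()
    return loss_total.detach()
```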
The classification loss is given more weightage, so the segmentation mask will be more refined. This way, the
Figure 3: Showcasing the proposed multitask approach that fuses the segmentation and few-shot-based classification
classification performance gets improved, and the same can be seen in experimental results (Tables 1, 3, and 2).
During testing for new and rare classes, the trained feature extractor is used for encoding, and the same episodic learning is used. The performance of the approach is evaluated using average accuracy, which is computed over the samples from the randomly chosen classes (which can be less than or equal to the number of new and unseen/rare classes).
## 4 Experimental Setup
This section presents the dataset description, implementation details and experimental results for the experiments.
### Dataset Description and Preparation
The proposed approach is evaluated on the three benchmark dermatological datasets, viz., HAM10000 [59], PH2 [60], and Derm7pt [61]. These datasets are chosen because they consist of fewer samples that can act as unseen/rare classes during testing.
**HAM10000** contains 10,015 dermatoscopic images corresponding to seven classes - Melanoma (1113), Melanocytic Nevi (6705), Basal Cell Carcinoma (514), Actinic Keratoses and Intraepithelial Carcinoma (327), Dermatofibroma (115), Benign Keratosis (1099), and Vascular lesions (142).
**PH2** consists of a set of 200 dermoscopic images corresponding to 3 classes - Common nevi (80), Atypical nevi (80), and Melanoma (40).
**Derm7pt** contains over 2000 (clinical and dermoscopy) images pertaining to 20 classes. We have used the dataset split mentioned in [53].
**Data Preparation:** Out of the seven classes in HAM10000, we selected four classes with more samples (Melanoma, Benign Keratosis, Melanocytic Nevi, Basal Cell Carcinoma) in the training set. The testing set included the remaining 3 classes (Actinic Keratoses and Intraepithelial Carcinoma, Dermatofibroma, and Vascular Lesions). To verify the generalizability of the proposed approach, we used the PH2 dataset and test classes of Derm7pt only during evaluation to perform the cross-database experiments. All the images were normalized and resized to (224,224,3) to reduce the computational cost. We used several augmentation techniques on the input training images, such as vertical and horizontal flipping (with p = 0.5) and colour jittering, to add variation in the training data. The performance of the model is evaluated using \(k-\)way \(n-\)shot learning protocols. Particularly, in our experiments, \(k=2\) classes with varying \(n=1\), 3 and 5. In other words, the support set comprises \(k\)=2 classes and
* for each class, if the number of samples is 5, it is referred to as the 5-shot setting
* for each class, if we have 10 samples each, it is referred to as the 10-shot setting
For comparison with the baseline, the proposed model is also evaluated on 3-way with 5-shot and 10-shot settings for the classification task.
### Implementation Details
The proposed framework is implemented in Python using the PyTorch library. The models have been trained and tested on an Nvidia 1080Ti GPU. As a preliminary step for the segmentation model, we take the UNet [55] architecture pre-trained on ImageNet [62] and fine-tune it using publicly available skin datasets. In the fused multitask network, we train only the last layer of the segmentation network. The binary cross-entropy loss of Section 3.2.1 is used to compare the predicted mask with the ground truth mask. For classification, we take ResNet50 [58] as the backbone network for feature extraction. We use Adam as the optimizer, with the learning rate set to 1e-3 for the segmentation network and 1e-6 for the classification network. We jointly train the segmentation and classification networks for 10 epochs with 100 few-shot tasks each. For each experiment, all images are resized to (224x224x3) to reduce the computation.
### Experimental Results
The performance of the proposed approach is evaluated for 3 benchmark dermatological datasets. For evaluation, we have used average accuracy (%) as the performance metric. During the testing phase, the images are distributed equally across all the classes so that the accuracy does not get affected by the class imbalance. The reported accuracy is the average accuracy across 100 few-shot tasks sampled from the test set. The model is evaluated by varying the number of shots \(n\) as 1, 3 and 5, and the distance metric as cosine and Euclidean distance for the 2-way classification task.
Comparative results for various models on HAM10000, PH2 and Derm7pt are summarized in Tables 1, 3, 2 respectively. The average accuracy on all the datasets increases as the number of shots increases. This can be attributed to the fact that more samples provide more information about the underlying distribution and hence allow the class prototype to be estimated more accurately. For skin lesions, it is observed that the proposed method
outperforms other methods on all three datasets. In the case of HAM10000, the average accuracy for 1, 3 and 5 shots improves by around 2%; on PH2, there is a significant improvement in average accuracy from 70.98% (ProtoNet with Cosine distance) to 76.06% (Proposed Network with Euclidean distance) for \(n\) = 5; for \(n\) = 1 and 3, the proposed network also outperforms the other networks. For the Derm7pt dataset, the proposed approach is also able to outperform the other methods. It is quite evident from the experimental results that both the cosine distance and the Euclidean distance are effective distance metrics for few-shot prototypical-based classification networks in skin lesion classification.
As a baseline experiment, a pre-trained ResNet50 model is trained on the training data (4 classes from HAM10000, same as used in the proposed approach). All the parameters, such as learning rate, epochs, and optimizer, are kept the same for a fair comparison. For testing, the model is fine-tuned for the rare classes (same as in the proposed approach). The average accuracy for the test classes of HAM10000, PH2 and Derm7pt is shown in Fig. 4. This accuracy is low compared to the accuracy obtained by the proposed approach on 3-way classification. This behaviour is quite explainable as the pre-trained model is trained on classes different from those seen at test time. The trained model is biased towards the training data and does not adapt to the new unseen data. On the other hand, the proposed approach is able to generalize well for the unseen classes as it tries to learn the metric space that helps to better classify the images. This demonstrates the limitation of transfer learning of deep models to adapt to new unseen data. We have also reported the results in terms of confidence intervals. Table 4 summarizes the results for the datasets by varying the values of CI as 75, 90 and 95%. The margin of error decreases with the number of shots (samples). The margin of error is used to measure the confidence of the model in its predictions. A larger margin of error indicates lower confidence and vice versa. As the number of shots increases, the model gets more confident about the predictions; hence the margin of error is low. In the HAM10000 dataset, the margin of error decreases from 0.51 to 0.43 and from 1.06 to 0.73 for the 75% and 95% CI, respectively, indicating the model is getting more confident about the predictions. Generally, as we increase the CI, the width of the CI will also increase and result in a higher margin of error. However, other factors, such as data and sample size variability, also affect the margin of error and can be seen in the behaviour of the 90% CI for the HAM10000 dataset. In the future, we will try to build a more resilient and robust system that will have a lower margin of error.
As observed in Fig. 7, the proposed approach focuses only on the affected region. Without segmentation, however, the algorithm examines a variety of unaffected visual components that are, therefore, of little interest; this can also be seen in Fig. 7. Since there are fewer test samples, the proposed approach focuses on the smallest region that best discriminates the samples. This explains the smaller portion being highlighted in the GradCAM results. These explanations help end-users to get a better idea of the working of the algorithm and increase the credibility of any AI algorithm.
### Ablation Study
To study and investigate the effect of varying feature extractors and the value of \(\lambda\), we have performed various experiments to showcase the effect on classification accuracy.
#### 4.5.1 Effect of Feature Extractor
The proposed algorithm is evaluated by varying the feature-extraction backbone between architectures with fewer layers (VGG16, ResNet18) and a larger number of layers (DenseNet-121). Table 5 presents the reported results for the 5-shot 2-way setting. The results are clearly inferior to those shown in Tables 1, 3 and 2. This pattern can be attributed to the smaller sample size in each rare class. Prototypical networks are based on the calculation of a mean (representative) vector with which the distances are computed, so choosing the right architecture helps to make the approach more effective. Architectures with fewer layers do not allow sufficiently discriminative features to be selected, while deeper architectures reduce generalizability due to overfitting. Based on the results, we opted to use ResNet50 as the backbone for feature extraction in this work.
#### 4.5.2 Effect of \(\lambda\) in loss function
We evaluated the proposed algorithm by changing the value of \(\lambda\) in Eq. 5 in order to examine the impact of combining the classification loss and segmentation loss. The results are reported in Table 6. As we can see from the results, combining the classification loss with the segmentation loss improves the classification accuracy. Misclassifications direct the segmentation network to refine the mask, which acts as attention for the classification network to focus only on the affected segmented region. This step helps in improving both classification
\begin{table}
\begin{tabular}{c c c c c} \hline \hline CI/Datasets & **75** & **90** & **95** \\ \hline \multirow{3}{*}{**HAM**} & 1 & 62.65 \(\pm\) 0.51 & 62.65 \(\pm\) 0.55 & 62.65 \(\pm\) 1.06 \\ & 3 & 73.12 \(\pm\) 0.47 & 73.12 \(\pm\) 0.67 & 73.12 \(\pm\) 0.80 \\ & 5 & **77.57 \(\pm\) 0.43** & **77.57 \(\pm\) 0.61** & **77.57 \(\pm\) 0.73** \\ \hline \multirow{3}{*}{**Ph2**} & 1 & 59.86\(\pm\) 0.62 & 59.86 \(\pm\) 0.89 & 59.86 \(\pm\) 1.06 \\ & 3 & 68.89 \(\pm\) 0.59 & 68.89 \(\pm\) 0.84 & 68.89 \(\pm\) 1 \\ & 5 & **76.06 \(\pm\) 0.57** & **76.06 \(\pm\) 0.82** & **76.06 \(\pm\) 0.98** \\ \hline \multirow{3}{*}{**Derm7pt**} & 1 & 59.95 \(\pm\) 0.62 & 59.95 \(\pm\) 0.89 & 59.95 \(\pm\) 1.06 \\ & 3 & 71.72 \(\pm\) 0.6 & 71.72 \(\pm\) 0.85 & 71.72 \(\pm\) 1.02 \\ \cline{1-1} & 5 & **77.64 \(\pm\) 0.56** & **77.64 \(\pm\) 0.80** & **77.64 \(\pm\) 0.96** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of Average Accuracy (in %) over 1000 randomly generated episodes with varying confidence interval (75, 90 and 95) on HAM10000 (HAM), Ph2, and Derm7pt dataset for 2-way classification (shots = 1,3,5)
Figure 4: Comparing the performance of the proposed approach (with cosine distance) with baseline (transfer learning) for 3-Way and 5-way classification. Each set of bars corresponds to the average accuracy on HAM10000, PH2 and Derm7pt, respectively.
Figure 5: Showing the images (a), their ground truth masks (b) and the predicted masks (c) by the segmentation network while training the last layer of it in parallel to the classification network.
and segmentation performance. The accuracy, however, declines if we increase the weight of classification loss or the value of \(\lambda\). It may be because fewer samples are available, and giving classification loss more weight results in poor generalizability.
## 5 Conclusion
Skin cancer is one of the most serious skin diseases. Skin lesions that are detected early can help prevent complications, including death. Because the trained model becomes biased towards the classes seen during training, the transfer learning approach does not generalise well to new and unseen classes. This behaviour can also be observed in medical settings, where new classes are introduced with a small sample size. This paper presents a multitask few-shot learning-based network for skin lesion classification. In the proposed approach, the segmentation and classification tasks are fused into a single pipeline. We evaluated the proposed network using Cosine and Euclidean distances, and it was proved from the experimental results that both distances are effective in the case of classifying skin lesions. Experiments with the HAM10000, PH2 and Derm7pt datasets show that the proposed method yields good accuracies even when applied across databases, exhibiting generalizability. Any skin disease can benefit from the proposed approach. To further improve the performance in future, we plan to propose a more robust segmentation network with varying loss functions. Moreover, we intend to expand our research to other medical modalities with limited data in the future.
|
2308.14981 | Sub-universal variational circuits for combinatorial optimization
problems | Quantum variational circuits have gained significant attention due to their
applications in the quantum approximate optimization algorithm and quantum
machine learning research. This work introduces a novel class of classical
probabilistic circuits designed for generating approximate solutions to
combinatorial optimization problems constructed using two-bit stochastic
matrices. Through a numerical study, we investigate the performance of our
proposed variational circuits in solving the Max-Cut problem on various graphs
of increasing sizes. Our classical algorithm demonstrates improved performance
for several graph types to the quantum approximate optimization algorithm. Our
findings suggest that evaluating the performance of quantum variational
circuits against variational circuits with sub-universal gate sets is a
valuable benchmark for identifying areas where quantum variational circuits can
excel. | Gal Weitz, Lirandë Pira, Chris Ferrie, Joshua Combes | 2023-08-29T02:16:48Z | http://arxiv.org/abs/2308.14981v1 | # Sub-universal variational circuits for combinatorial optimization problems
###### Abstract
Quantum variational circuits have gained significant attention due to their applications in the quantum approximate optimization algorithm and quantum machine learning research. This work introduces a novel class of classical probabilistic circuits designed for generating approximate solutions to combinatorial optimization problems constructed using two-bit stochastic matrices. Through a numerical study, we investigate the performance of our proposed variational circuits in solving the Max-Cut problem on various graphs of increasing sizes. Our classical algorithm demonstrates improved performance for several graph types to the quantum approximate optimization algorithm. Our findings suggest that evaluating the performance of quantum variational circuits against variational circuits with sub-universal gate sets is a valuable benchmark for identifying areas where quantum variational circuits can excel.
+
Footnote †: These two authors contributed equally.
## I Introduction
There is much interest in constructing parameterized quantum circuits as variational ansatzes to solve mathematical problems [1]. Such circuits can find applications in quantum chemistry, for example, using variational quantum eigensolvers [2] or quantum machine learning [3]. This class of circuits also includes the quantum approximate optimization algorithm (QAOA) of Farhi _et al._[4], which has received significant attention. Its purpose is to efficiently approximate the global optima of constraint satisfaction problems (CSP) in combinatorial optimization.
A popular constraint satisfaction problem that is relevant in physics is called Max-Cut. The solution to Max-Cut can be used to find the minimum energy of the Ising Hamiltonian [5]. The Max-Cut problem is defined on a graph as follows. If the vertices of a graph can take one of two labels, the objective of Max-Cut is to maximize the number of edges with opposing labels. The constraint comes from the topology of the graph. If we have \(n\) vertices, the optimum assignment is somewhere among the \(2^{n}\) combinations. Finding the exact minimum (or maximum) of Max-Cut is considered an NP-complete problem see e.g. Appendix A2.2 of [6]. While there are no current algorithms (beyond brute force searching) that guarantee an exact solution to Max-Cut, there are classical and quantum approximation techniques that produce "good-enough" solutions in polynomial time.
The simplest solution is random guessing, which would "cut" half of the edges in the graph, on average. The fraction of the number of cut edges to the optimal solution is known as the approximation ratio. The gold standard of classical techniques is the Goemans-Williamson algorithm, which guarantees a ratio of \(0.8785\)[7].
On the quantum side, Farhi et al. [4] proved that for a depth one (\(p=1\)) quantum circuit on 3-regular, triangle-free graphs, QAOA guarantees a solution with an approximation ratio of \(0.6924\). It was also shown that as \(p\rightarrow\infty\), the QAOA will always find the true optimal solution. Recent experiments have implemented QAOA on larger numbers of qubits (see, e.g. Refs. [8; 9; 10; 11]). However, these demonstrations solve Max-Cut on small graphs, and the time to solution is slow relative to real-world applications. Furthermore, the Goemans-Williamson algorithm has an almost optimal approximation ratio and, with small modifications, can efficiently solve problems on sparse graphs with 20 million vertices on a laptop [12]. Given these factors, one might question the relevance of using QAOA. Some possible responses are (i) variational
Figure 1: Perspective on the relationship between three model classes considered in this work. Parameterized quantum circuits (PQCs) are the paradigm that represents the largest class of parameterized quantum algorithms. QAOA (Farhi et al. [4]) is a popular example. PAOA represents a class of probabilistic methods that are entirely classical. The “distance” between classical and quantum methods we consider to be in the practical sense — i.e., do they achieve comparable performance _in practice_ where near-term quantum computers are expected to be used.
approaches like QAOA might possess broader applicability than the SDP relaxation, and (ii) QAOA could outperform the Goemans-Williamson algorithm on extremely large problems as it is a hardware-based solution. This raises the question of which classical protocol is the appropriate one to compare against.
In this article, we introduce PAOA (_probabilistic_ approximate optimization algorithm), a classical probabilistic variational circuit inspired by QAOA, that could be implemented as a hardware-based solution. In numerical experiments, we have found that PAOA can achieve performance comparable, and in some cases superior, to QAOA. The protocol we suggest provides a fairer comparison to classical techniques than the typical approach of comparing to random guessing. This provides a way to benchmark QAOA's performance in a way closer in spirit than the bound provided by the Goemans-Williamson algorithm and other excellent approaches like that of Refs. [13; 14]. This paper intends not to compete with the Goemans-Williamson algorithm but to provide a compelling alternative to quantum solutions.
At the core of the PAOA protocol lies probabilistic bits or "p-bits." These are intermediate between standard classical bits and qubits. While qubits can be prepared in a superposition state of "0" and "1", p-bits can only be prepared in classical mixtures of 0 and 1. Interestingly, p-bits can be physically implemented using modern technology [15; 16; 17]. If these p-circuits (probabilistic circuits) can be implemented using current technology, yet yield similar results to that achieved using current quantum circuits, is there a real quantum advantage? The envisioned relationship between these classes is illustrated in Figure 1.
In Section II, we briefly review constraint satisfaction problems and Max-Cut. In Section III, we summarize QAOA. In Section IV, we introduce a class of classical variational circuits that can solve some constraint satisfaction problems. We compare QAOA to our classical variational circuits numerically in Section V. Our numerics indicate that a low-depth classical variational algorithm has comparable performance to a quantum algorithm. We conclude with a summary and discussion of some open questions in Section VI.
## II Constraint Satisfaction Problems and Max-Cut
A constraint satisfaction problem is specified by a set of items \(\{z\}\), and \(m\) constraints. Each constraint involves a subset of the items. The computational task is to find a combination of items that maximize the number of satisfied constraints. For each constraint \(a\in[m]\) and each string in a given CSP, we define,
\[C_{a}(z)=\begin{cases}1&\text{if $z$ satisfies constraint a},\\ 0&\text{otherwise}\end{cases}. \tag{1}\]
Hence, the goal is to maximize the cost function \(C(z)\) over the set \(\{z\}\), where,
\[C(z)=\sum_{a=1}^{m}C_{a}(z). \tag{2}\]
Max-Cut is a CSP defined on a graph \(G(z,E)\). We represent the set of vertices as \(n\)-bit strings, i.e., \(z=z_{n}z_{n-1}...z_{2}z_{1}\), and the set of edges \(\langle ij\rangle\in E\). We will consider a simple graph \(G(z,E)\), and denote the number of edges \(|E|\). For each vertex \(i\) we assign a label, denoted as \(z_{i}\in\{0,1\}\). Consequently, the \(n\)-bit string \(z\) becomes a string of ones and zeros of length \(n\), denoted as \(z\in\{0,1\}^{n}\). Thus there are \(2^{n}\) distinct states spanning the set \(\{0,1\}^{n}\), each representing a unique assignment of the vertices. The goal of Max-Cut is to find the maximum number of edges whose vertices on each end have different labels, over the set \(\{0,1\}^{n}\). Equivalently we find the string \(z\) that maximizes,
\[C(z)=\sum_{\langle ij\rangle\in E}C_{\langle ij\rangle}(z), \tag{3}\]
where,
\[C_{\langle ij\rangle}(z)=\frac{1}{2}\big{(}1-(-1)^{z_{i}}(-1)^{z_{j}}\big{)}. \tag{4}\]
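As a concrete illustration of Eqs. (3)-(4), the cut value of a labelling can be computed directly from the edge list; the small example graph below is an arbitrary choice and not taken from our numerical study.

```python
def cut_value(bits, edges):
    """Number of satisfied Max-Cut constraints, Eqs. (3)-(4).

    bits  : sequence of 0/1 labels, bits[i] is the label of vertex i.
    edges : iterable of (i, j) vertex pairs.
    """
    return sum(1 for i, j in edges if bits[i] != bits[j])

# Example: a 4-cycle; the labelling 0101 cuts all four edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(cut_value([0, 1, 0, 1], edges))  # -> 4
```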
### Quantum reformulation of Max-Cut objective
The maximization of \(C(z)\) corresponds to finding the maximum energy of a Hamiltonian \(C\), a \(2^{n}\times 2^{n}\) operator acting on the \(2^{n}\)-dimensional Hilbert space with basis vectors \(\ket{z}\in\mathbb{C}^{2^{n}}\), defined by the following eigenvalue equation
\[C\ket{z}=C(z)\ket{z}, \tag{5}\]
where \(C(z)\) is defined in Eq. (3). For a simple graph \(G(z,E)\), each vertex \(i\) is associated with a spin state \(\ket{z_{i}}\), where \(z_{i}\in\{0,1\}\) and \(Z_{i}\ket{z_{i}}=(-1)^{z_{i}}\ket{z_{i}}\). The state associated with the entire graph is \(\ket{z}=\otimes_{i=1}^{n}\ket{z_{i}}\) and \(Z_{i}\ket{z}=(-1)^{z_{i}}\ket{z}\) where \(Z_{i}\) is identity on all spins except the \(i^{th}\) spin. The quantum representation of the cost operator of Max-Cut is
\[C=\sum_{\langle ij\rangle\in E}C_{\langle ij\rangle},\ \ \text{where}\ \ \ C_{\langle ij\rangle}=-\frac{1}{2}(I-Z_{i}Z_{j}). \tag{6}\]
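Since \(C\) is diagonal in the computational basis (Eq. 5), its spectrum can be enumerated classically for small \(n\); the sketch below builds that diagonal with the sign convention of Eq. (6), so the Max-Cut solution corresponds to the most negative eigenvalue. The triangle graph is an arbitrary example.

```python
import numpy as np

def cost_diagonal(n, edges):
    """Eigenvalues C(z) of the cost operator in Eq. (6), indexed by the integer value of z."""
    diag = np.zeros(2 ** n)
    for z in range(2 ** n):
        bits = [(z >> k) & 1 for k in range(n)]
        diag[z] = -sum(1 for i, j in edges if bits[i] != bits[j])  # each cut edge contributes -1
    return diag

edges = [(0, 1), (1, 2), (2, 0)]       # a triangle: at most two edges can be cut
print(cost_diagonal(3, edges).min())   # -> -2.0
```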
## III Quantum approximate optimization algorithm
Here we summarize QAOA, following the presentation of Farhi _et al._ [4]. The objective of QAOA is to find an \(n\)-bit string \(z\) that approximately minimizes the cost \(C\). Using \(C\) from Eq. (6), we define the unitary cost operator \(U(C,\gamma)\),
\[U(C,\gamma)=e^{-i\gamma C}=\prod_{\langle jk\rangle\in E}e^{-i\frac{\gamma}{2 }(Z_{j}Z_{k}-I)}, \tag{7}\]
where \(\gamma\in[0,2\pi)\) applies a phase to pairs of bits according to the cost function. Now we also define the operator \(B\),
\[B=\sum_{k=1}^{n}X_{k}, \tag{8}\]
with \(X_{k}\) being the single qubit Pauli \(X\) operator operating on the qubit corresponding to the \(k^{th}\) bit in \(z\). We define the "mixer" unitary operator with \(\beta\in[0,\pi)\),
\[U(B,\beta)=e^{-i\beta B}=\prod_{k=1}^{n}e^{-i\beta X_{k}}. \tag{9}\]
This unitary operator drives transitions between bitstrings within a superimposed state [10].
QAOA involves a sequential application of \(U(C,\gamma)\) and \(U(B,\beta)\) to an initially uniform superposition of computational basis states. Let \(\ket{+_{n}}:=H^{\otimes n}\ket{0}^{\otimes n}\) denote the uniform superposition of all possible states, where \(H\) is the Hadamard gate. The _depth number_ of the circuit, \(p\), is an integer that counts sequential applications of the two unitaries,
\[U(\mathbf{\gamma},\mathbf{\beta})=\prod_{k=1}^{p}U(B,\beta_{k})U(C,\gamma_{k}), \tag{10}\]
with a total of \(2p\) angles defined via \(\mathbf{\gamma}:=(\gamma_{1},...,\gamma_{p})\) and \(\mathbf{\beta}:=(\beta_{1},...,\beta_{p})\). Thus the state after one application of the circuit is
\[\ket{\mathbf{\gamma},\mathbf{\beta}}=U(\mathbf{\gamma},\mathbf{\beta})\ket{+_{n}}, \tag{11}\]
and the expectation of the cost operator in the final state is
\[\langle C\rangle=\bra{\mathbf{\gamma},\mathbf{\beta}}C\ket{\mathbf{\gamma},\mathbf{\beta}}. \tag{12}\]
It is the vector of parameters \(\mathbf{\gamma},\mathbf{\beta}\) that we will optimize over.
With an initial set of parameters, we conduct a repeated measurement of the state in Eq. (11). Each measurement will have the superposition state collapse to one of the basis states with probability
\[\Pr(z)=|\langle z|\mathbf{\gamma},\mathbf{\beta}\rangle|^{2}. \tag{13}\]
Given the binary string \(z\), we can efficiently compute the cut and record the corresponding cost \(C(z)\). Repeating this process many times we collect a sample of costs \(\{C(z)\}\), from which can get _estimates_ for the expectation in Eq. (12) and the minimum value of the sample
\[C_{\min}=\min_{z}\{C(z)\}. \tag{14}\]
We then define the approximation ratio,
\[R=\bra{C}/C_{\min}. \tag{15}\]
Note that, in practice, \(C_{\min}\) is rarely the absolute minimum cost achievable, as that would be the cost of the solution to the Max-Cut problem. In most applications of such optimization procedures, \(R\) is an estimate of the "true" approximation ratio.
Once estimated, \(R\) is treated as an objective function for a classical optimization algorithm. Hence, in an iterative fashion, QAOA will initialize the parameterized circuit with better problem-specific parameters, causing constructive and destructive interference to states that are better and worst for the problem respectively. After optimization the candidate optimal solution is
\[z^{*}=\underset{z}{\operatorname{argmin}}\{C(z)\}\,. \tag{16}\]
For \(p=1\) on 3-regular graphs, we are guaranteed an approximation ratio that corresponds to 0.6924 of the cost of the true optimal state [4]. It was also shown in [4] that as \(p\) goes to infinity, \(C_{\min}\) corresponding to the solution is achieved, and the approximation ratio converges to 1 as the distribution around the mean converges to the optimal cost of the problem.
## IV Classical probabilistic variational circuits
In this section, we describe our classical probabilistic variational circuits. We begin by summarizing Markov chains and their application in reversible logic in Sec. IV.1. Then we present our variational circuit ansatz in Sec. IV.2 for solving the Max-Cut problem.
### Classical probabilistic circuits
Let us consider probability distributions over a classical bit, which have the following zero entropy states,
\[\ket{0}=(1,0)^{\mathsf{T}},\quad\ket{1}=(0,1)^{\mathsf{T}}. \tag{17}\]
Although these are classical vectors, we have used Dirac notation to make a stronger analogy with QAOA. A linear combination of these vectors is a probabilistic-bit or p-bit,
\[\ket{\psi}=(1-p)\ket{0}+p\ket{1}=(1-p,p)^{\mathsf{T}}, \tag{18}\]
where \(p\in[0,1]\).
The two possible logical operations on a single p-bit are identity and NOT, which correspond to the (Pauli) permutation matrices,
\[I=\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\quad\text{and}\quad X=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}. \tag{19}\]
The convex hull of these two permutations gives rise to the bit-flip channel,
\[\mathcal{E}_{q}\ket{\psi}=(1-q)I\ket{\psi}+qX\ket{\psi}=\begin{bmatrix}(1-q) &q\\ q&(1-q)\end{bmatrix}\ket{\psi}\,, \tag{20}\]
where \(q\in[0,1]\).
For \(n\) p-bits, the zero entropy states are the familiar computational basis states \(|z\rangle\), \(z\in\{0,1\}^{n}\), and the convex hull of the permutation matrices is the set of doubly stochastic matrices [18; 19].
In general, the probability of moving from state \(|j\rangle\) to state \(|i\rangle\) is denoted \(P_{ij}\), which need not be equal to \(P_{ji}\). Such processes are physically motivated by e.g. the probability of an excited state of an atom to decay to the ground state but not vice versa. A probability transition matrix \(P\) for which \(P_{ij}\neq P_{ji}\) for some \(i,j\) is called a _stochastic matrix_. As the total probability of transitioning from state \(|j\rangle\) to any other state in \(\{0,1\}^{n}\) is \(1\), a stochastic matrix must obey
\[\sum_{i=1}^{2^{n}}P_{ij}=1, \tag{21}\]
for all \(j\in\{0,1\}^{n}\). In what follows, we will use stochastic matrices, rather than doubly stochastic matrices, to construct our circuit ansatz.
Since the Max-Cut problem is specified on a graph, binary strings will encode vertices. For each edge in the graph, the possible bit-string assignments for two vertices are \(\{0,1\}^{2}=\{00,01,10,11\}\). Thus, the general two-bit stochastic matrix describing the transitions between the states of the bits on edge \(\langle kl\rangle\) is
\[\mathbf{p}_{\langle kl\rangle}=\begin{bmatrix}P_{11}&P_{12}&P_{13}&P_{14}\\ P_{21}&P_{22}&P_{23}&P_{24}\\ P_{31}&P_{32}&P_{33}&P_{34}\\ P_{41}&P_{42}&P_{43}&P_{44}\end{bmatrix}. \tag{22}\]
For the task of optimization, \(\mathbf{p}_{\langle kl\rangle}\) can be written as a \(12\)-dimensional vector, noting the four linear constraints imposed by Eq. (21).
Since the state would remain a "classical" mixture, it is obvious that the circuits created with these gates are not universal for quantum computing. Interestingly, they are not universal for classical reversible computation either -- it is known that three-bit gates (e.g., the Fredkin or Toffoli gate) are required to make classical reversible computing universal [20; 21]. Nevertheless, with these "sub-universal" resources, we can still solve the Max-Cut problem with high approximation ratios.
### Paoa
In analogy with the parameterized quantum circuit of QAOA, here we propose a parameterized probabilistic circuit called the related protocol PAOA. Given a graph, a parameterized probabilistic circuit is constructed using independent stochastic matrices that act on some subset of the vertices, see Fig. 2. Generally, one could imagine applying one, two, and three-bit stochastic matrices to the vertex bits and iterating over layers as in QAOA. The limiting case would be a single \(n\) bit stochastic matrix, see Eq. (21), which is like a (classical) Boltzmann machine [22] or Ising machine [23]. However, in designing PAOA, we have found that a simple single-depth circuit of two-bit stochastic matrices is sufficient and this has the benefit of reducing the dimensionality of the optimization space. Thus for all of the variants we consider below, a depth \(1\) PAOA circuit creates a parameterized probability distribution on the graph \(G\) of the form
\[\Pr(G|\mathbf{x})=\prod_{\begin{subarray}{c}\langle kl\rangle\in E\\ k<l\end{subarray}}\mathbf{p}_{\langle kl\rangle}(\mathbf{x}_{\langle kl\rangle}) \tag{23}\]
where \(\mathbf{x}\) is the vector of all variational parameters for the graph and \(\mathbf{x}_{\langle kl\rangle}\) are the variational parameters for the
Figure 2: Illustrative example of the Max-Cut algorithms for a \(3\)-node graph \(G\) showing both QAOA and PAOA circuits. (Left) \(H\) is the Hadamard gate and \(p\) is the circuit depth (see Section III for the in-depth explanation). (Right) \(R\) denotes a random initial state, and \(P\) represents the probabilistic gate ansatz we use (see Section IV.2). We assume it is possible to run the PAOA circuit directly on the graph as SWAPs are basically free in classical computation. Thus the SWAPs depicted are virtual but allow us to map the linear circuit topology to the problem graph topology.
edge \(\langle kl\rangle\). We will see that as our ansatz only the distribution between two edges doing higher depth circuits doesn't add to the expressivity of our ansatz. Instead, we have to add different probability distribution to vertices that don't have edges, see Fig. 3.
In principle, we could use Eq. (22) to construct a variational ansatz for the edges of a graph, which would involve 12 parameters for every edge in the graph. To reduce the number of parameters intelligibly, let's re-examine our objective Eq. (7). Recall the objective of Max-Cut: Find the arrangement of vertices that maximizes the number of edges with opposing bits on each end. Thus, we set transition probabilities to unwanted states, such as \(|01\rangle\mapsto|00\rangle\), for example, to zero. This ensures maximum disagreement between edges.
This motivates the following variational ansatz, which forms the basis of PAOA,
\[\mathbf{p}_{(kl)}^{\text{PAOA}}=\begin{bmatrix}0&0&0&0\\ p_{1}&p_{2}&p_{3}&p_{4}\\ 1-p_{1}&1-p_{2}&1-p_{3}&1-p_{4}\\ 0&0&0&0\end{bmatrix}. \tag{24}\]
The vector \(\mathbf{p}\) only encodes four variational parameters per edge where \(p_{i}\in[0,1]\). If the number of edges in the graph is \(|E|\), PAOA has \(4|E|\) parameters. This ansatz allows all of the four bitstrings between vertices to transition to the strings \(\{01,10\}\) with unique probabilities.
In an effort to minimize the number of optimization parameters, we also construct Reduced PAOA, which sets all probabilities per edge to be equal, such that each edge is associated with a gate of the following form,
\[\mathbf{p}_{(kl)}^{\text{p-PAOA}}=\begin{bmatrix}0&0&0&0\\ p&p&p&p\\ 1-p&1-p&1-p\\ 0&0&0&0\end{bmatrix}, \tag{25}\]
which amounts to \(|E|\) parameters in total. Notice that if we multiply two of these stochastic matrices that \(\mathbf{p}^{\prime}\times\mathbf{p}=\mathbf{p}^{\prime}\), so, in this case, larger depth circuits don't add expressivity to our anzatz.
The objective of the PAOA is to replicate QAOA classically (Figure 2 provides an example of solution circuits using both methods). Given a graph, a parameterized probabilistic circuit is constructed using independent stochastic matrices for each edge. As one last ansatz, we define Min PAOA, which is parameterized as follows,
\[\mathbf{p}_{(kl)}^{\text{HEPA-PAOA}}=\begin{bmatrix}0&0&0&0\\ p&q&1-q&1-p\\ 1-p&1-q&q&p\\ 0&0&0&0\end{bmatrix} \tag{26}\]
which has _the same_ two parameters for every edge. Like QAOA, we allow this circuit to have multiple layers with different parameters. The form of this ansatz is chosen so that the transitions \(|00\rangle\Leftrightarrow|01\rangle\) and \(|11\rangle\Leftrightarrow|10\rangle\) are controlled by \(p\), while \(|01\rangle\Leftrightarrow|10\rangle\) are parameterized by \(q\). Both Min PAOA and standard QAOA use \(2p\) parameters.
Once the parameterized probabilistic circuit is defined, we follow the same protocol as in the QAOA. Namely, we conduct a sequence of experiments where the output of an experiment is the approximation ratio as defined in Eq. (15). After each experiment, an optimizer attempts to improve the choice of parameters such that the approximation ratio for the next experiment is larger than the previous one. We will discuss the details of our numerical experiments next.
## V Numerical experiments
In this section, we present numerical evidence for the effectiveness of PAOA on several graphs, see Fig. 4. We start by considering small 3 regular graphs with up to 10 edges and comparing the performance of QAOA to PAOA and show PAOA's performance is better than QAOA.
As numerical simulations of large quantum systems are difficult we then switch tact and simulate larger graphs. We consider several graph types an compare the performance of QAOA to PAOA on a graph with 20 edges. Then we consider the scaling of the performance of PAOA to random guessing for graphs with 50 to 250 edges. For each considered graph, we run each Max-Cut optimization algorithm 100 times. The average and standard deviation of the cut sizes as well as the estimated approximation ratio will form the basis of our conclusions.
For the fairest comparison, an "out of the box" optimizer [24] was used to train the QAOA and PAOA circuits. Each was given 100 iterations and 100 experimental runs per iteration. In no cases were the optimization algorithm's hyperparameters tuned (the defaults were used). The QAOA circuit was built and simulated using Qiskit [25]. The code to reproduce these results can be found at [26].
Figure 3: (left) In the simple version of depth one (\(p=1\)) PAOA we optimize stochastic matrices on the problem graph. To make the probability distribution richer we have to either optimize over higher-depth circuits or more than two-bit stochastic matrices. (right) The original problem graph is depicted in grey. A possible second layer of PAOA is depicted with the blue edges.
### Performance as a function of graphs size for 3-regular graphs
Regular, or \(k\)-regular graphs are those where each node has an equal \((k)\) number of edges. For \(k=0\), there are no edges. For \(k=1\), the graph consists of disjoint pairs of nodes connected by a single edge so the cut is trivial. Here we will consider \(3\)-regular graphs.
Fig. 5 is a comparative analysis of the variants of PAOA, QAOA, random guessing, and brute force methods for Max-Cut on \(3\)-regular graphs of increasing size. Specifically we consider graphs with \(|V|\in\{4,6,8,10\}\) where the number of corresponding \(3\)-regular graphs are \(\{1,2,6,21\}\)[27, A005638]. We run the different protocols on all possible graphs of size \(|V|\) and average the results.
We compare several protocols and several performance metrics as a function of graph size. The x-axis represents graph size, i.e. the order or number of vertices \(|V|\), and the y-axis showcases the performance metrics. Across the graph sizes, PAOA consistently outperforms other methods, with the exception of Best Cut which the brute force method obviously is the most performant. Moreover, PAOA's narrower distribution, denoted by a smaller standard deviation in Figure (b), indicates its consistent and reliable performance. The fact that all methods get close to the brute force method with respect to the best cut is not surprising as the graphs are small and our optimizer gets \(100\) shots. This also explains why the approximation ratios also look good. For this reason, we will study larger graphs in the latter part of this section.
Notice that we did not include the original PAOA ansatz, see Eq. (24), this is because it's performance is worse than "Reduced PAOA" and "Min. PAOA" which are Eqs. (25) and (26) respectively. We conjecture this is because of the curse of dimensionality in the original PAOA which has \(4|E|\). While Reduced and Min. PAOA have parameters vs \(|E|\) and \(2\) parameters for each layer.
### Larger \(2\) and \(3\)-regular graphs
In this section, we will focus on comparing the performance of QAOA and PAOA for larger graphs when \(k=2\) or \(k=3\). The maximum cut is trivial for \(k=2\), where the graph (if fully connected) is a cycle. The maximum cut is clearly \(n\), where \(n\) is the (even) number of nodes. However, this example provides an interesting test case as we will soon see. We have also considered \(k=3\), where
Figure 4: We consider running QAOA and variants of PAOA on these graphs to solve Max-Cut.
Figure 5: A Comparison of the different protocols to solve Max-Cut on \(3\)-regular graphs of increasing graph size i.e. \(|V|\). For each graph of size \(|V|\) the protocols were given \(100\) iterations and \(100\) experimental runs per iteration. If there is more than one graph for a given size \(|V|\) we average the performance metrics.
the solution is non-trivial. The two example graphs corresponding to the data collected below are shown here.
Table 1 compares the performance for the algorithms on a 2-regular graph with \(n=20\) nodes for several performance metrics. As the algorithms are probabilistic and run several times, outcomes are distributed. Thus we consider the following metrics to explore performance. The first metric we consider is the best cut found by the algorithm in any trial; we call this metric "Best." The metrics "Average" and "SD" refer to the average cut of the distribution and the standard deviation of the distribution. Finally, "R" is the approximation ratio given in Eq. (15).
Surprisingly, this very simple graph seems to present a relative challenge for all algorithms -- except, of course, the brute force algorithm, which will always yield the optimal cut. However, since it is searching over every possible cut, its performance on the other metrics is quite low. According to the approximation ratio cost function, Reduced PAOA is the best performer. However, a single layer of Min PAOA does quite well and is more "reliable" based on the standard deviation of the cut sizes it produces. QAOA and larger depth circuits seem to struggle with the chosen meta-heuristics.
In Table 2, the performance for a 3-regular graph with \(n=20\) nodes is presented. In this case, both of the reduced parameter variants of PAOA find the solution with similar approximation ratios. While the approximation ratio of QAOA is better than random guessing, the standard deviation at deeper circuits is higher than PAOA.
Over the course of many trials and observations not presented here, we note that either Min PAOA or Reduced PAOA are the best-performing algorithms for both \(k=2\) and \(k=3\) regular graphs. In either case, the number of gates (and hence the runtime) grows linearly with the number of nodes.
Since QAOA was simulated classically, its space/time complexity was exponential in the number of graph edges -- thus, we were limited to the range below roughly 20 graph nodes. On the other hand, PAOA scales linearly with the number of edges. Since, as we will see in the following sections, Reduced PAOA is the best-performing algorithm, we tested its performance well-beyond the 20 qubit limit. The results, summarized in Figure 6, show that PAOA continues to outperform random guessing as the graph size increases.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Best & Average & SD & \(R\) \\ \hline Brute force & 26 & 15.00 & 2.74 & 0.58 \\ Random & 24 & 15.02 & 2.72 & 0.63 \\ PAOA & 24 & 19.26 & 2.28 & 0.80 \\ Reduced PAOA & 26 & 21.35 & 2.54 & 0.82 \\ Min PAOA & 26 & 21.66 & 2.39 & 0.83 \\ Min PAOA (3 layers) & 22 & 15.99 & **2.30** & 0.73 \\ QAOA (1 layer) & 25 & 17.62 & **2.32** & 0.70 \\ QAOA (3 layers) & 24 & 17.75 & 2.81 & 0.74 \\ QAOA (6 layers) & 22 & 15.38 & 2.68 & 0.70 \\ \hline \hline \end{tabular}
\end{table}
Table 2: 3-regular graph performance (\(n=20\))
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Best & Average & SD & \(R\) \\ \hline Brute force & 20 & 10.00 & 2.24 & 0.50 \\ Random & 16 & 10.16 & 2.34 & 0.64 \\ PAOA & 18 & 14.58 & 1.53 & 0.81 \\ Reduced PAOA & 18 & 16.72 & 1.54 & 0.93 \\ Min PAOA (1 layer) & 18 & 16.22 & **1.10** & 0.90 \\ Min PAOA (3 layers) & 16 & 11.24 & 2.17 & 0.70 \\ QAOA (1 layer) & 16 & 10.76 & **1.97** & 0.67 \\ QAOA (3 layers) & 16 & 11.12 & 1.95 & 0.69 \\ QAOA (6 layers) & 14 & 10.22 & 2.00 & 0.73 \\ \hline \hline \end{tabular}
\end{table}
Table 1: 2-regular graph performance (\(n=20\))
Figure 6: A Comparison of Reduced PAOA and random guessing to solve max cut on 2-regular graphs of increasing size. (Top) Is a comparison of the approximation ratios, and (bottom) is the maximum cut normalized to the size of the graph i.e. (the best cut)/\(|V|\). Each algorithm was run 10 times on a 2-regular graph. All data are presented along with the average (line).
### Complete graphs
A complete graph is one in which every pair of nodes is connected by an edge, see Fig. 4. By symmetry, any cut which equally partitions an even number of nodes will be maximum. This was not difficult to find for any of the algorithms we considered. Note, however, that variational algorithms have runtimes that scale with the number of edges in a graph. While we did not intentionally benchmark the timing of each algorithm, complete graphs were certainly the most time-consuming to optimize.
The performance results of each algorithm on a complete graph with \(n=10\) nodes are presented in Table 3. Surprisingly, Min PAOA is nearly perfect, finding a set of variational parameters which produce a maximum cut deterministically, for all practical purposes. The other variants of PAOA also performed extremely well.
### Other random graphs
Lastly, we assessed the performance of the algorithms on some other popular random graphs from network theory. First, the Barabasi-Albert model is a technique for generating "scale-free" networks that mimic human-made networks such as the Internet. It uses a mechanism called _preferential attachment_ whereby new nodes are added to the graph with connections to existing nodes that exist probabilistically, weighted by how many connections each node currently has. The graph in Fig. 4 the left was created starting with two connected nodes.
An Erdos-Renyi graph is one where each potential edge is added with some fixed and independent probability \(p\). Here, and in Fig. 4, we have chosen \(p=\frac{1}{2}\), which you can intuit as a complete graph with roughly half the edges removed.
In Table 4, the performance of the algorithms is presented for a Barabasi-Albert graph on \(n=20\) nodes. In this case, Reduced PAOA is a clear winner with the other variants of PAOA falling slightly behind on all metrics. Again, QAOA seems to struggle using the same metaheuristics.
The last detailed comparison appears in Table 5, where the performance of all algorithms is shown for an Erdos-Renyi graph on \(n=20\) nodes. The results bear a resemblance to the complete graph, whereby all algorithms achieve a reasonably high approximation ratio. However, Reduced PAOA appears to be the most reliable, though it did not find the absolute optimal solution.
As in Figure 6 from Section V.2, we test the continued performance of PAOA on much larger graphs. In Figure 7, the results again demonstrate a reliable improvement over random guessing on a different class of random graphs.
## VI Discussion
In this work, we have introduced the idea of using sub-universal variational circuits to compare with quantum variational circuits. We explored a single-layer circuit composed of variational stochastic matrices, which can be physically realized. Numerical experiments have demonstrated that the ansatz we have called PAOA is preferred to QAOA, its quantum analog. Though, the following caveats apply.
First, and most obvious, is the limited scope available to test quantum algorithms with classical simulators. It would be difficult to justify the extrapolation to practically relevant sized problems from such small qubit numbers. Second, PAOA and QAOA were trained using the same algorithm -- in other words, neither was optimized. The effect of this can be evidenced in some data not presented here for very small graphs. (The reader
\begin{table}
\begin{tabular}{l|c c c c} \hline Method & Best & Average & SD & \(R\) \\ \hline Brute force & 61 & 44.00 & 4.69 & 0.72 \\ Random & 53 & 43.32 & 4.36 & 0.82 \\ PAOA & 61 & 48.99 & 4.87 & 0.80 \\ Reduced PAOA & 57 & 50.04 & 3.44 & 0.88 \\ Min PAOA (1 layer) & 55 & 45.81 & 3.53 & 0.83 \\ Min PAOA (3 layers) & 55 & 44.07 & 5.03 & 0.80 \\ QAOA (1 layer) & 52 & 44.39 & 3.91 & 0.85 \\ QAOA (3 layers) & 53 & 44.77 & 4.93 & 0.84 \\ QAOA (6 layers) & 52 & 43.88 & 44.67 & 0.84 \\ \hline \end{tabular}
\end{table}
Table 5: Erdos-Renyi graph performance (\(n=20\))
is encouraged to play with the simulations themselves at [26]). On small graphs, QAOA (as trained here) performs equally well as PAOA -- it is only on larger graphs that it struggles. We expect QAOA to perform better when its classical optimizer is optimized. That being said, PAOA might also improve its performance with hyperparameter tuning.
The striking thing we have found is that PAOA works well "out of the box." It is fast, convenient, and produces high-quality results with no tuning. If PAOA can be implemented directly in hardware (as in [16], for example) and used to find solutions to CSPs in a cost-effective manner, it would impose a significant opposition to justifying investment in some quantum alternatives. In particular, the Reduced PAOA ansatz has good performance on a variety of graphs. Based on the strong performance of Reduced PAOA on all the graphs considered, we conjecture that this improved performance is due to it finding the sweet spot where it has enough parameters to compute the cut but not too many to burden the optimizer. Moreover, as our ansatz is local, it would be interesting to prove if quantum methods have an advantage over classical as in Ref. [28].
Beyond the particular case we have solved, one could imagine benchmarking quantum machine learning protocols that have quantum inputs and quantum outputs in a similar fashion. In particular, one could construct parametric circuits out of other non-universal gate sets like (discrete) Cliffords [29, 30] or Match gates [31]. As the Match gates are continuously parameterized, it is easy to imagine using gradient descent type optimization, but the Cliffords could be optimized over genetic algorithms, for example.
In any case, we hope PAOA can provide a useful benchmark for testing parameterized quantum circuits in the future. While random guessing can provide a theoretical lower bound, PAOA serves as a practical performance gauge in numerical experiments.
_Acknowledgments:_ The authors acknowledge helpful discussions with Charlie Carlson, Stuart Hadfield, Zackary Jorquera, Alexandra Kolla, Steven Kordonowy, Laurent Laborde, Nicholas Rubin, and Andrew Sornborger. LP was supported by the Sydney Quantum Academy, Sydney, NSW, Australia.
|
2301.09565 | Estimating the energy requirements for long term memory formation | Brains consume metabolic energy to process information, but also to store
memories. The energy required for memory formation can be substantial, for
instance in fruit flies memory formation leads to a shorter lifespan upon
subsequent starvation (Mery and Kawecki, 2005). Here we estimate that the
energy required corresponds to about 10mJ/bit and compare this to biophysical
estimates as well as energy requirements in computer hardware. We conclude that
biological memory storage is expensive, but the reason behind it is not known. | Maxime Girard, Jiamu Jiang, Mark CW van Rossum | 2023-01-16T13:02:22Z | http://arxiv.org/abs/2301.09565v2 | # Estimating the energy requirements for long term memory formation
###### Abstract
Brains consume metabolic energy to process information, but also to store memories. The energy required for memory formation can be substantial, for instance in fruit flies memory formation leads to a shorter lifespan upon subsequent starvation (Mery and Kawecki, 2005). Here we estimate that the energy required corresponds to about 10mJ/bit and compare this to biophysical estimates as well as energy requirements in computer hardware. We conclude that while the reason behind it is not known, biological memory storage is metabolically expensive,
1 School of Psychology
2 School of Mathematical Sciences
University of Nottingham, Nottingham NG7 2RD, United Kingdom
3 Integrative Biology and Physiology Master, Faculty of Sciences and Engineering, Sorbonne University, Paris 75005, France
The human brain consumes some 20W of energy, 20% of the body's total consumption at rest. The cost for computation and information transmission, mostly for synaptic transmission and spike generation, is well documented, and the brain's design is now widely believed to be constrained by energy needs (Attwell and Laughlin, 2001; Lennie, 2003; Harris et al., 2012; Karbowski, 2012). More recently the metabolic cost of learning has been added to the brain's energy budget. Experiments in Drosophila indicate that these costs are substantial. In Mery and Kawecki (2005) flies were exposed to a classical conditioning protocol and learned to associate an odor to a mechanical shock. After the protocol, all feeding was stopped and the time to die from starvation was measured. It was found that the conditioning reduced the lifespan compared to control flies. After controlling for exposure to unconditioned and conditioned stimuli separately, the decrease in lifespan was some 20%.
Currently, it is not clear which neural processes are the main energy consumers associated to learning and memory. However, it is known that not all forms of memory are equally costly. Persistent forms, such as Long Term Memory (LTM) in the fly, are costly, but the less persistent Anaesthesia Resistant Memory (ARM) memory which decays in a few days (Margulies et al., 2005), is not. Interestingly, aversive LTM is halted under low energy conditions (Placais and Preat, 2013). Such adaptive regulation is also found in mammals where late-phase Long Term Potentiation (late-LTP) is halted under low energy conditions, while early phase LTP is not (Potter et al., 2010).
In this note we review estimates for the energy required to store a few bits of information, namely the association of an odour with a noxious stimulus as happens in the protocol of Mery and Kawecki (2005). The estimates have large uncertainties that will hopefully be narrowed down in the future.
Nevertheless, we feel that these 'ball park' figures are useful for theoretical considerations and future experiments. We also discuss the estimate in the context of computer hardware.
How much information is stored in the classical odor-shock conditioning? In order to learn the association of odor and shock requires at least one bit of information, namely whether the stimulus is to be avoided or not. If the valence of the stimuli were stored in more detail, a few extra bits would be needed. Furthermore, the animal could store the context of the stimulus, which would be functionally beneficial. However, in contrast to mammals, we have not seen evidence for contextual fear conditioning in flies. We therefore estimate that some 10 bits are stored.
### Direct measurement of energy intake after learning
There are various methods to estimate the energy need for memory formation from experiments. The first method is based on the fact that right after learning, flies increase their sucrose intake to about double the normal rate (Fig1.c in Placais et al., 2017). In the Capillary Feeder assay (CAFE), the fly's energy uptake is determined from the consumption of sugar water from a capillary (5% sucrose; sucrose carries 16.2kJ/g) (Ja et al., 2007). The increase corresponds to an additional intake of 42\(\pm\)160mJ (19\(\pm\)85mJ in Fig 6.e) compared to control flies, where the errors denote standard deviations.1
Footnote 1: When comparing across experiments it should be noted that in the CAFE assay, energy consumption is strongly reduced during an initial habituation period during the first few days (Van den Bergh, 2022).
Of this energy intake, some will be lost due to metabolic inefficiency and some will be lost in urine and feces. Assuming a 43% conversion efficiency to produce ATP (Nakrani et al., 2022), one can infer that learning consumed some 20mJ in the form of ATP.
### Estimation via lifetime
An alternative estimate of the energy used for memory formation can be found from the reduction in survival time upon starvation after learning. It is simplest to assume that the fly dies whenever its energy reserve \(E(t)\) drops below zero. Next, assume that the energy reserve decreases linearly in time with a rate \(\beta\). I.e. \(E(t)=E_{0}-\beta t\), where \(E_{0}\) is the initial energy reserve. Calorimetry can be used to estimate the consumption rate for a non-starving fly at \(\beta=(7\pm 2)\mu\)W at 23 C. (Fiorino et al., 2018). (Noting that the metabolic rate varies across strains and that the basal metabolic rate increases steeply with increasing temperature; Klepsatel et al., 2019). The average observed lifetime shortening, denoted \(\Delta l\), caused by LTM memory formation was about 4.5hrs in both the experiments of Mery and Kawecki (2005) and Placais et al. (2017). Thus under this linear decrease model, one finds \(E_{\text{LTM}}=\beta\,\Delta l=(110\pm 100)\)mJ.
To examine the robustness of this estimate we add realistic features to this model and show how this affects the estimate. First, the energy consumption decreases as reserves are diminishing (Fiorino et al., 2018). That is, the energy reserve is a convex function of time. Fig. 1 left shows the energy reserve versus time in two conditions. In the control condition (blue curve) starvation start at time 0, causing a gradual drop in the reserve. In the learning condition (orange curve) learning causes a rapid drop in the reserve and takes place right before starvation starts.
We assume that metabolic rate is a function of the current energy reserve only. This means that after expending energy on learning, the energy reserve follows the same trajectory as that of a fly that has been starved some time already. In other words, the learning associated expenditure of the energy advances the energy trace by an amount \(\Delta t\).
We denote the initial rate of consumption as a positive number \(\beta\) (slope of purple line; horizontally shifted horizontally for clarity), and that after learning as rate \(\beta^{\prime}\) (\(\beta^{\prime}\leq\beta\); green line). From Fig. 1, it can be seen that the energy estimate is bounded as
\[\beta^{\prime}\Delta t\leq E_{\text{LTM}}\leq\beta\Delta t\]
This means that an estimate of the energy cost from lifetime differences \(\Delta t\) based on \(\beta\), is possibly an over-estimate, but that based on \(\beta^{\prime}\) is an underestimate. The metabolic rate after learning, \(\beta^{\prime}\), has to our knowledge not been measured directly, however in the setup of Fiorino et al. (2018) the metabolic rate drops some 30% under a calorie restricted diet.
The calculation also holds when the energy consumption caused by learning is not instantaneous as long as \(\beta^{\prime}\) is measured after the additional consumption caused by learning has stopped.
### Hazard model
A more involved model to estimate the energy consumed by learning is to use a hazard function formulation. A hazard function describes the instantaneous probability for dying at a certain energy reserve level (see e.g. Modarres et al., 1999; Gerstner and Kistler, 2002). In the hazard formulation, even if a population of flies all start with the same energy reserve, they will die at different times. The most basic example is a constant hazard. In that case the lifetimes are exponentially distributed and the mean lifetime is the inverse of the hazard rate.
We denote the hazard at a given energy reserve by \(h(E)\). The hazard increases as the energy reserve drops. We assume that the starvation experiments are so drastic that any age dependence of the hazard can be ignored. (Note that inclusion of age dependence would lead to further underestimation of the energy - in the extreme case that life time is only age dependent, large changes in \(E_{\text{LTM}}\) will not affect lifespan).
Figure 1: Left: Diagram for estimating the energy used for learning. Energy reserve is plotted over time. At time 0, well fed flies either learn an association, leading to a decrease in energy reserve (orange), or are part of the control group (blue). Subsequently both group are starved and die when their reserve hits zero; taught flies die an interval \(\Delta t\) earlier. The estimated energy used for learning \(E_{\text{LTM}}\) can be estimated from \(\Delta t\) using either the consumption rates right before (purple line) and right after learning (green line). The true value falls between these estimates. Right: Simulation of hazard model for 1000 flies in either the naive (top) or learning population (bottom).
In general, the mean lifetime \(l\) in a hazard model is given by
\[l=\int_{0}^{\infty}S(t)dt\]
where the survival function \(S(t)\) is given by \(S(t)=\exp\left[-\int_{0}^{t}h(t^{\prime})dt^{\prime}\right]\). We explore how advancing of the energy trace due to learning as in Fig.1 changes the average lifetime. With a tilde we denote the quantities after learning. The advance means that \(\tilde{h}(t)=h(t+\Delta t)\), so that the survival function for the learning flies is
\[\tilde{S}(t)=\exp\left[-\int_{0}^{t}h(t^{\prime})dt^{\prime}+\int_{0}^{\Delta t }h(t^{\prime})dt^{\prime}-\int_{t}^{t+\Delta t}h(t^{\prime})dt^{\prime}\right]\]
For small \(\Delta t\) this can be approximated as
\[\tilde{S}(t) \approx S(t)\left[1-\int_{0}^{\Delta t}h(t^{\prime})dt^{\prime}- \int_{t}^{t+\Delta t}h(t^{\prime})dt^{\prime}\right]\] \[\approx S(t)[1-\Delta t.h(0)+\Delta t.h(t)]\]
The average lifetime of the learned fly is \(\tilde{l}=\int_{0}^{\infty}\tilde{S}(t)dt\). Using that \(\int_{0}^{\infty}S(t)h(t)dt=\int_{0}^{\infty}\frac{dS(t)}{dt}dt=1\), the lifetime after spending an amount \(E_{\text{LTM}}\) at time zero is reduced to
\[\tilde{l}=l+\Delta t\left[1-h(0)\,l\right] \tag{1}\]
It is instructive to study the two limiting cases: When the hazard is independent of energy and hence constant in time (\(h(t)=h(0)\)), one has \(\tilde{l}=l\). In that case there is no change in lifetime. In the other case, when there is no hazard before starvation, that is, \(h(0)=0\), one has \(\tilde{l}=\Delta t\). In general the lifetime change \(\Delta l=l-\tilde{l}\) will range between 0 and the shift in the energy profile \(\Delta t\). Combined with the above result,
\[E_{\text{LTM}}\geq\beta^{\prime}\Delta t\geq\beta^{\prime}\Delta l\]
Thus using consumption rate \(\beta^{\prime}\), one will always _underestimate_ the energy expended on learning. The hazard formulation always exaggerates the underestimate. With the caveat of the unknown rate \(\beta^{\prime}\), we conclude that \(E_{\text{LTM}}\gtrsim 100mJ\).
As an illustration of this model we simulated 1000 flies, Fig.1b. The initial reserve was set to 0.6J and we assumed it decayed exponentially as \(\gamma dE/dt=-E-c\), where \(\gamma=40\)hrs and \(c=0.3\)J. The hazard was modeled as \(h=\exp(-kE)/\text{hr}\), with \(k=20J^{-1}\). This resulted in a lifetime of 32.3 hrs without learning, and 27.6hrs when learning. The estimated expenditure (\(\beta^{\prime}\Delta l\)) was 95mJ, compared to a true value of \(E_{\text{LTM}}=\)100mJ used in the simulation.
## Discussion
In summary, we used two ways of estimating the amount of energy needed to learn a simple association from behavioural data, namely from excess sucrose consumption and from change in lifespan. Encouragingly, the estimates yield comparable numbers on the order of 100mJ, or some 10mJ/bit. It is interesting to compare these costs to memory costs in digital computers. Both data storage and data transmission from CPU to memory cost substantial amounts of energy. In typical personal computers the slowest, most persistent, and most energy costly storage is farthest removed from the processor
(Das et al., 2015). For instance, a typical modern Solid State Drive (SSD) can write up to 3GByte/s, taking about 10W (Samsung 970). Hence the energy cost of storage on an SSD is about 0.5 nJ/bit. A hierarchy of smaller and faster caches (L3, L2, L1) speeds up read and write access of data that is repeatedly used by the CPU. Energy costs of these are only of the order of pJ/bit (Molka et al., 2010; Das et al., 2015). With the caveat is that computers are highly optimized for processing large chunks of data, memory storage in computers is therefore some \(6\ldots 7\) orders of magnitude less costly than biological memory storage.
Why is biological learning so metabolically demanding? Currently it is not clear whether most energy is consumed on the synaptic level, network level, or organism level. The biophysical cost of synaptic plasticity in mammals was estimated by Karbowski (2019). The leading cost there is by far protein phosphorylation, which far outweighs estimates for protein synthesis, transport costs and other costs such as actin thread milling. It is estimated as \(3\times 10^{6}\)ATP/synapse/min. Hence the cost of increased phosphorylation in a single synapse during 1 hour would come to 9pJ. Even with 1000 synapses undergoing plasticity this number is still 3 orders of magnitude below the behaviour based estimates above. Moreover, phosphorylation is more characteristic of early, inexpensive early phase LTP than of the expensive late phase LTP. Interestingly, there is recent evidence for different metabolic pathways for different types of plasticity (Dembitskaya et al., 2022). So while those estimates are thus not inconsistent with our estimates, a large amount of energy use remains unaccounted for.
To determine if the missing energy is used directly by synaptic plasticity, it would be of interest to measure energy consumption when the number of modified synapses or the number of memoranda is varied. If learning two associations would costs double the energy, synaptic processes are likely the main consumer. In that case the energy needed to learn multiple associations could rapidly become enormous. For instance learning the well-known MNIST data set requires at some \(10^{8}\) synaptic updates (in preparation). Saving strategies will be needed in that case (Li and Van Rossum, 2020).
An alternative is that the major consumers are changes in brain activity by coordinated processes such as replay - which contributes to memory consolidation in mammals, but also in flies (Cognigni et al., 2018). Calorimetry during learning could provide insight into such hypotheses. Finally, behavioral or physiological changes resulting from the learning protocol might explain the increased consumption. Control experiments with unpaired stimuli in Mery and Kawecki (2005) might not have completely corrected for such effects.
No matter the answer, animals are likely constrained by the high metabolic cost of learning; their savings strategies will help to understand biological memory formation.
#### Acknowledgments
We would like to thank Pjotr Dudek and William Levy for discussion. Jiamu Jiang is supported by a Vice-Chancellor International award from the University of Nottingham.
|
2302.06306 | Extinction of Taurus, Orion, Perseus and California Molecular Clouds
Based on the LAMOST, 2MASS and Gaia surveys I: Three-dimensional Extinction
and Structure | The three-dimensional extinction and structure are studied for the Taurus,
Orion, Perseus and California molecular clouds based on the LAMOST
spectroscopy. Stellar color excess is calculated with the intrinsic color index
derived from the atmospheric parameters in the LAMOST DR8 catalog and the
observed color index in the Gaia EDR3 and the 2MASS PSC. In combination with
the distance from the Gaia EDR3 parallax, the three-dimensional dust extinction
maps are retrieved in the color excesses $E_{\rm{G_{BP},G_{RP}}}$ and
$E_{\rm{J,K_{S}}}$ with an uncertainty of $\sim$0.03mag and $\sim$0.07mag
respectively. The extinction maps successfully separate the clouds that overlap
in the sky area and manifest the structure of the individual cloud. Meanwhile,
a bow-like structure is found with a distance range from 175pc to 250pc, half
of which is a part of the Per-Tau Shell in similar coordinates and distance
while the other half is not. Three low-extinction rings are additionally
discovered and briefly discussed. | Zhetai Cao, Biwei Jiang, He Zhao, Mingxu Sun | 2023-02-13T12:17:42Z | http://arxiv.org/abs/2302.06306v2 | Extinction of Taurus, Orion, Perseus and California Molecular Clouds Based on the LAMOST, 2MASS and Gaia surveys I: Three-dimensional Extinction and Structure
###### Abstract
The three-dimensional extinction and structure are studied for the Taurus, Orion, Perseus and California molecular clouds based on the LAMOST spectroscopy. Stellar color excess is calculated with the intrinsic color index derived from the atmospheric parameters in the LAMOST DR8 catalog and the observed color index in the Gaia EDR3 and the 2MASS PSC. In combination with the distance from the Gaia EDR3 parallax, the three-dimensional dust extinction maps are retrieved in the color excesses \(E_{\rm Gp,Gp}\) and \(E_{\rm J,K_{S}}\) with an uncertainty of \(\sim\)0.03mag and \(\sim\)0.07mag respectively. The extinction maps successfully separate the clouds that overlap in the sky area and manifest the structure of the individual cloud. Meanwhile, a bow-like structure is found with a distance range from 175pc to 250pc, half of which is a part of the Per-Tau Shell in similar coordinates and distance while the other half is not. Three low-extinction rings are additionally discovered and briefly discussed.
Distance measure (395); Interstellar dust (836); Molecular clouds (1072); Extinction (505); Interstellar dust extinction (837) +
Footnote †: journal: ApJ
0000-0002-4880-707X]ZheTai Cao
0000-0002-1888-7885]Biwei Jiang
0000-0002-1888-0885]He Zhao
0000-0002-1888-7885]Minggu Sun
## 1 Introduction
Molecular clouds (MCs), as the star birthplaces, are generally dense that causes high extinction. A precise estimation of the extinction is crucial to revealing the true brightness and color of the stars embedded and behind the cloud. In addition, molecular clouds are place for dust growth. The determination of the extinction law of MCs would help understand the dust evolution in various star-forming environments and the dust properties such as grain size. The nearby MCs to be studied in this work, specifically the Taurus MC (hereafter TMC), Orion MC (OMC), Perseus MC (PMC) and California MC (CMC), represent different including massive and low-mass star-forming environments. With precise measurements and large quantity of tracers, the extinction of MCs can be calculated with high precision and high spatial resolution, and therefore serve as the references of extinction for star forming regions.
Many works have been devoted to studying the distribution and properties of interstellar extinction, which cover star forming regions. The most widely used extinction map is the all-sky two dimensional reddening map by Schlegel, Finkbeiner and Davis (Schlegel et al., 1998) derived from dust infrared emission. With the distance measurements of billions of stars by Gaia (Gaia Collaboration et al., 2016, 2021, 2022), some three-dimensional (3D) extinction maps are produced. For example, Green et al. (2015, 2019) created an extinction map that extends to a few kiloparsec over three-quarters of the sky with the Pan-STARRS and 2MASS photometry. As for individual molecular cloud, Lombardi et al. (2010, 2011) presented the near-infrared extinction map of several nearby molecular clouds that include TMC, PMC, CMC, OMC. Dharmawardena et al. (2022) produced the continuous dust density and extinction maps of Orion,
Cygnus X, Taurus, and Perseus MC by using the Gaussian Process. The Gaussian Process is also used to infer the structure of Orion A and California (Rezaei Kh. & Kainulainen, 2022; Rezaei Kh. et al., 2020).
Most of the previous works used the photometric data to investigate the extinction of molecular clouds. Photometry has the advantage of being able to detect faint objects which leads to further distance and higher spatial resolution achievable. Spectroscopy, on the other hand, can determine the extinction more accurately with spectroscopically derived stellar parameters. The development of multi-fiber observation has greatly increased the efficiency of spectroscopy. The H-band APOGEE survey takes 300 spectra at a time, which has accumulated almost six hundred thousand stellar spectra in DR17 (Majewski et al., 2017). LAMOST, a reflective Schmidt telescope with a diameter of 5-m and a F.O.V of 5\({}^{\circ}\) that provides spectra for about 4000 objects in one exposure, has obtained stellar parameters for about ten millions stars (Luo et al., 2015). Consequently, the calculation of high precision extinction becomes feasible for very large sample of stars from spectroscopy. Moreover, such calculation of color excess is independent of extinction law so that the multi-wavelength color-excess can be used to determine the extinction law to various molecular clouds and their dust properties.
This work intends to build the 3D extinction map and structure of the nearby MCs, specifically, Taurus, Orion, Perseus and California based on spectroscopic survey. Stellar intrinsic color indexes will be calculated from the atmospheric parameters derived from the LAMOST spectrum. In combination with the distance from Bailer-Jones et al. (2021) and photometery from 2MASS and Gaia EDR31, the extinction and the structure of the MCs are obtained. In brief, the color excess of each star is calculated in the Gaia bands (i.e. \(E_{\rm GBP,GBP}\)) and in the 2MASS bands (i.e. \(E_{\rm J,K_{S}}\)), then a non-decreasing function is fitted to determine the variation of extinction with distance in the given sightline that reveals the 3D structure of the MC. Section 2 describes the data from Gaia, 2MASS and LAMOST to be used. Section 3 and Section 4 present the methods to calculate the color excess and to decompose the MCs into distance slice. Section 5 discusses the 3D structures of the four molecular clouds one by one. Section 6 is a summary.
Footnote 1: The recently released Gaia DR3 data contains the broad-band photometry already published as part of Gaia EDR3, and the astrometric data in Gaia DR3 are the same as those of Gaia EDR3 (Gaia Collaboration et al., 2022).
## 2 Data
### The Sample Selection
The optical and near-infrared (NIR) bands are selected for measuring the clouds' extinction. Because the visual extinction is much larger than the near-infrared (the V band extinction is about ten times that in the K band (Wang & Chen, 2019)), the two wavelength ranges are expected to trace both the diffuse and dense clouds. Besides, the ratio of the optical-to-NIR extinction is an indicator of the extinction law to be studied in our next work.
The optical photometric data are taken from the space telescope Gaia for its high precision and full sky coverage. The adopted Gaia EDR3 data contains 1.5 billion sources whose photometry is accurate to mmag-level in the \(G\)\({}_{\rm BP}\), \(G\), \(G_{\rm RP}\) bands centering at 532, 673, and 797 nm respectively (Jordi et al., 2010). The very wide band, \(G\), is not used in this work because the extinction coefficient of this band varies strongly with stellar effective temperature and the extinction itself (Danielski et al., 2018). The NIR data are taken from the PSC of 2MASS that surveyed the whole sky in the \(J\), \(H\), \(K_{\rm s}\) band, which bring about the photometry over more than 500 million sources with the average uncertainty better than 0.03mag (Skrutskie et al., 2006).
The stellar parameters are taken from the LAMOST DR8 (Luo et al., 2015). The LAMOST DR8 provides the effective temperature (\(T_{\rm eff}\)), surface gravity (log \(g\)) and metallicity ([Fe/H]) for more than six-million stars. For the duplicated sources, only the parameters derived from the spectrum with the highest signal-to-noise ratio are kept. Then about 6.4 million stars are kept and cross-matched with the Gaia EDR3 and 2MASS PSC datasets respectively by a radius of 1\({}^{\prime\prime}\). The distance is a key parameter to investigate the 3D extinction of the clouds. Here, we take the geometric distance provided by Bailer-Jones et al. (2021) which contains the distances and their uncertainties for 1.47 billion stars.
The data quality is further restricted as following to exclude the poor measurements:
1. [Fe/H] is within [-1.0, 0.5].
2. The error of \(T_{\rm eff}\), [Fe/H] and log \(g\) is smaller than 150K, 0.15 dex and 0.3 dex respectively.
3. The signal-to-noise ratio in the \(g\) band (the parameter "snrg" in the LAMOST DR8 catalog) is larger than 10.
4. The photometric error is \(<0.05\)mag in the Gaia EDR3 bands and \(<0.1\)mag in the 2MASS bands.
With these limitations, \(\sim\)4.6 million are left for calculating the intrinsic color index \(C^{0}_{\rm G_{\rm BP},G_{\rm RP}}\) while \(\sim\)4.3 million stars are kept for \(C^{0}_{\rm J,K_{s}}\). Furthermore, only dwarf stars are retained for three reasons: (1) metallicity has less effect on the intrinsic colors of dwarfs than of giants (see Figure 1 in Zhao et al., 2020); (2) giants in the LAMOST catalog are mostly at a distance \(>1\)kpc, much further than the studied MCs; and (3) stellar parameters of dwarfs are more accurately determined from the LAMOST observations. In practice, the dwarf stars are selected from the Kiel diagram in Figure 1 by log \(g>3.8\) for \(T_{\rm eff}>6600K\) or \(-0.0644\times T_{\rm eff}^{2}+0.457\times T_{\rm eff}+3.09<\log g\) for \(T_{\rm eff}<6600K\). This reduces the sample to \(\sim\) 3.2 million and 3.1 million stars for \(G_{\rm BP}-G_{\rm RP}\) and \(J-K_{\rm s}\) respectively.
### The Cloud Regions
Lombardi et al. (2010) defined the boundary of TMC in \((l,b)=([165^{\circ},180^{\circ}],[-10^{\circ},-20^{\circ}])\). Later, Bialy et al. (2021) recognized an ellipse substructure in the 3D space called the Tau Ring centering at \((l,b,d)=(179^{\circ}.5,~{}-14^{\circ}.2,~{}179\rm pc)\) with a semimajor and a semiminor axis of 39pc and 26pc respectively, and a projection radius of about \(10^{\circ}\) at \(d=218\rm pc\). Including all these structures, we extend the boundaries a little to cover a larger area where the extinction is comparatively low, and select the boundary as \((l,b)=([160^{\circ},195^{\circ}],[10^{\circ},-40^{\circ}]\).
The area selection for the other three MCs follows the same rule as for the TMC, i.e. we select an area that is extended a little from the previously defined. Lombardi et al. (2010) delimited the PMC to \(155^{\circ}<l<165^{\circ}\) and \(-25^{\circ}<b<-15^{\circ}\), and Rezaei Kh. & Kainulainen (2022) defined \(155^{\circ}<l<170^{\circ}\) and \(-14^{\circ}<b<-6^{\circ}\) for CMC which agrees with Lombardi et al. (2010). For OMC that is composed of three major structures, Lombardi et al. (2011) suggested \(203^{\circ}<l<217^{\circ}\) and \(-21^{\circ}<b<-17^{\circ}\) for Orion A, \(201^{\circ}<l<210^{\circ}\) and \(-17^{\circ}<b<-5^{\circ}\) for Orion B and \(188^{\circ}<l<201^{\circ}\) and \(-18^{\circ}<b<-7^{\circ}\) for \(\lambda\) Orionis. Taking all these into consideration, our selected regions of the four MCs are listed in the first and second column of Table 1. There are some overlaps in sightlines between some clouds such as TMC and CMC. Section 4 will introduce the method to further separate them in the 3D space.
By summing up the four clouds region in Table 1, the whole region to be studied is within \((l,b)=([130^{\circ},220^{\circ}],[-50^{\circ},+20^{\circ}])\), a total area of about 5000 deg\({}^{2}\). The area of each cloud and the density of tracing stars are displayed in Figure 2.
## 3 The Intrinsic Color Index: The Blue-Edge Method
The blue-edge method takes the bluest observed color index as the intrinsic one for a given set of stellar parameters by assuming that the bluest star experiences little or no extinction among a large collective. It was first suggested by Ducati et al. (2001) and then refined by Jian et al. (2017) and Wang & Jiang (2014), and applied in several works, e.g. in Xue et al. (2016); Zhao et al. (2018); Sun et al. (2018). In practice, the relation of intrinsic color index with effective temperature is derived for a given luminosity class and a given range of metallicity since temperature is the primary factor to influence the color index. Although there have been some determinations of the relations in the above mentioned works, we re-construct the relation of the intrinsic color index, \(C^{0}_{\rm G_{\rm BP},G_{\rm RP}}\) and \(C^{0}_{\rm J,K_{s}}\), with temperature for two reasons. One is that the \(G_{\rm BP}-G_{\rm RP}\) color is not included in previous works. The other is that the accuracy of the relation is improved by dividing the metallicity into more groups.
The metallicity is divided into six groups with a step of 0.25 dex ranging from -1.0 to 0.5 dex to account for the effect on intrinsic color index by metallicity. Following Jian et al. (2017), we take the bluest 5% star in a given metallicity and temperature bin as the extinction-free star, and the color of the bluest 5% star is used to represent the intrinsic color for stars in the given bin that is shown by the red dots in Figure 3. Then, a curve is fit to derive the relation between the temperature and the color of the extinction-free stars in each metallicity bin. Consequently, the color excess is the difference of the observed and the intrinsic color index calculated from the curve. Here, the metallicity and temperature bins are 0.25 dex and 200 K respectively.
An exponential function with three free parameters is used to fit the extinction-free stars (i.e. the red dots in Figure 3):
\[C^{0}_{\lambda 1,\lambda 2}=a\times\exp\left(-\frac{T_{\rm eff}}{b}\right)+c \tag{1}\]
The case of \(G_{\rm BP}-G_{\rm RP}\) in six metallicity bins ranging from -1.00 to 0.50 dex is displayed in Figure 3 while the case of \(J-K_{S}\) is similar and not displayed. It can be seen that the range of \(T_{\rm eff}\) is cut at both the low and the high ends marked by vertical dashed lines where the stars are not numerous enough or the trend cannot be depicted by the same
exponential function. The decrease of the sample due to this cutting would influence the results little, but guarantee the precision of the color index.
The uncertainty of the intrinsic color index comes from mainly photometric error and the blue-edge error. The mean photometric error and its standard deviation are \(4\pm 2\), \(4\pm 2\), \(26\pm 6\), and \(32\pm 17\)mmag in the \(G_{\rm BP},G_{\rm RP},J\), and \(K_{\rm s}\) band respectively. The error induced by the bluest edge is about 30mmag for dwarfs (Jian et al., 2017). In total, the uncertainty of \(C^{0}_{\rm 4\lambda,\lambda 2}\) is \(\sim\)30mmag for \(C^{0}_{\rm G_{\rm BP},G_{\rm RP}}\) and \(\sim\)50mmag for \(C^{0}_{\rm J,K_{\rm s}}\), which means that the major error comes from the blue-edge method for \(C^{0}_{\rm G_{\rm BP},G_{\rm RP}}\) while from both photometry and the blue-edge method for \(C^{0}_{\rm J,K_{\rm s}}\).
The color excess is calculated straightforwardly by subtracting the intrinsic color index from the observed one. Consequently, the error is \(\sim\)30mmag for \(E_{\rm G_{\rm BP},G_{\rm RP}}\) and \(\sim\)70mmag for \(E_{\rm J,K_{\rm s}}\).
## 4 Distance-Sliced Extinction in the Area of the Molecular Clouds
### The Extinction Map
The principle in deriving the 3D extinction in the area of the TMC, OMC, PMC and CMC is that the extinction along a sightline is a non-decreasing function of distance. With the distances from Bailer-Jones et al. (2021) and the color excesses calculated above, a compromise between the number of tracers and the spatial resolution yields a step of 0.2\({}^{\circ}\) in both longitude and latitude. Due to the non-uniform extinction within a selected 0.2\({}^{\circ}\) square and the errors in distance and color excess, the variation of color excess with distance is scattered and sometimes not monotonically increasing, as shown by the dots in Figure 4.
The isotonic regression in the SCIKIT-LEARN package for PYTHON (Pedregosa et al., 2011) is applied to trace the general tendency. For a given set of observations (\(x_{1}\), \(y_{1}\)), (\(x_{2}\), \(y_{2}\)),...,(\(x_{n}\), \(y_{n}\)), isotonic regression is a non-parametric regression that seeks a weighted least-square fit \(\hat{y_{i}}\) for all \(i\) in a monotonic model. It solves the following problem:
\[\min\sum_{i=1}^{n}w_{i}(y_{i}-\hat{y_{i}})^{2},\ \mbox{subject to}\ \hat{y_{i}} \leq\hat{y}_{j}\ \mbox{whenever}\ x_{i}\leq x_{j} \tag{2}\]
where \(y_{i}\) is the calculated color excess of star \(i\), \(\hat{y_{i}}\) is the fitted color excess along the given line of sight at the distance of star \(i\), and \(w_{i}\) is the weight. Typically, the weights are equal to 1 for all \(i\). However, because the stars closer to the center of the given sightline are more representative, an exponential kernel function is adopted to describe \(w_{i}\) as follows:
\[w_{i}=\left\{\begin{array}{ll}\exp(-\frac{\theta^{2}}{2\gamma^{2}}),&\mbox{ if }0<\theta<\theta_{0}\\ 0,&\mbox{otherwise}\end{array}\right. \tag{3}\]
where \(\theta\) is the angular distance of star \(i\) to the center of the selected sightline, \(\gamma\) is a scale parameter, and \(\theta_{0}\) defines the radius of the selected bin. Meanwhile, the number of stars with non-zero weights must be more than 10 in one sightline to ensure the credibility of the model fitting. Since some sightlines contain fewer than 10 objects within a 0.2\({}^{\circ}\) circle, a larger \(\theta_{0}\) is selected. The scale parameter \(\gamma\) is set equal to the resolution. Figure 4 shows the fitting results with three test values of \(\gamma\) in three sightlines as examples. After testing the model with several values of the resolution (i.e. \(\gamma\)), 0.2\({}^{\circ}\) and 1.0\({}^{\circ}\) (5 times \(\gamma\)) are selected for \(\gamma\) and \(\theta_{0}\) respectively.
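A minimal sketch of this weighted, monotonic fit for a single sightline is given below, using scikit-learn's `IsotonicRegression`; the function and variable names are illustrative rather than taken from the original code.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_sightline(dist, excess, ang_sep, gamma=0.2, theta0=1.0):
    """Weighted, non-decreasing fit of color excess vs. distance (Eqs. 2-3)
    for one sightline; `ang_sep` is each star's angular distance to the center."""
    weights = np.where(ang_sep < theta0,
                       np.exp(-ang_sep**2 / (2.0 * gamma**2)), 0.0)
    keep = weights > 0
    if keep.sum() < 10:                       # require at least 10 weighted tracers
        return None
    iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
    iso.fit(dist[keep], excess[keep], sample_weight=weights[keep])
    return iso                                # iso.predict(d_grid) gives the fitted run of extinction
```

Calling `iso.predict` on a grid of distances (e.g. every 25 pc) then yields the distance-sliced extinction described next.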
The extinction is sliced every 25pc from 100pc to 600pc by the above non-parametric fitting, and the results are presented in Figure 5. In combination with the continuity in the sky area, it can be seen that the four MCs appear in order of distance, i.e. TMC, PMC, OMC and CMC. The distance extent of each molecular cloud listed in Table 1 is judged by observing the main extinction structure (high-extinction areas) within its boundary in Figure 5. It should be noted that the extent is partly caused by substructures that appear at different distances rather than by the thickness of the cloud. The details of each substructure will be discussed later.
The results are compared with Green et al. (2019) and Leike et al. (2020) in the same area and distance range. For the comparison with Green et al. (2019), the integrated extinction maps up to 600pc from this work and from Green et al. (2019) are displayed in Figure 6. The value of \(E_{\rm B,V}^{\rm Green}\) in Green et al. (2019) is converted to \(E_{\rm G_{BP},G_{RP}}\) with the factor \(E_{\rm B,V}^{\rm Green}/E_{\rm G_{BP},G_{RP}}=0.71\pm 0.01\) suggested by Sun et al. (2021). It can be seen that the two results are roughly identical, with a mean difference and standard deviation of \(0.007\pm 0.093\)mag. Nevertheless, the extinction value from this work is comparatively smaller in the high-extinction regions (the blue part in Figure 6), such as the dense MC regions marked by the black lines in the right panel of Figure 6. This can be explained by the relatively shallow depth of the LAMOST survey, which has a limiting magnitude of about 15 to 17mag in the \(g\) band and is thus unable to detect stars with large extinction. The largest extinction in this work is about 2.0mag in \(E_{\rm G_{BP},G_{RP}}\) and 0.8mag in \(E_{\rm J,K_{s}}\), equivalent to \(A_{V}\sim 5\)mag, occurring in some Galactic plane areas. This can be taken as the limiting depth of extinction in this work. For the comparison with Leike et al. (2020), the integrated extinction maps up to 350pc from this work and from Leike et al. (2020) are displayed in Figure 7, where the factor that converts the \(G\) band extinction to \(E_{\rm G_{BP},G_{RP}}\) is 1.89 (Wang and Chen, 2019). It can be seen that the results from this paper are slightly larger than those from Leike et al. (2020), with a mean difference of \(0.04\pm 0.05\)mag.
### The Distance to the Clouds
Though the distance-sliced extinction gives the rough range of each cloud, the 25pc step is quite large. The distance to various parts of a molecular cloud can be determined more accurately. The basic method to determine the distance to a cloud is the same as that in Zhao et al. (2020) for supernova remnants (SNRs). It assumes that the extinction presents a sharp increase (i.e. a 'jump') at the distance of the dense cloud due to its high dust grain density, so that the distance to the cloud can be recognized from the variation of the interstellar extinction with distance. Such a model has been used to analyze the distance to extended sources (see e.g. Chen et al., 2017 for MCs and Zhao et al., 2020 for SNRs). Zucker et al. (2019, 2020) also used the change of reddening with parallax to derive the distance to molecular clouds.
Consistent with the above analysis, a circular area with a radius of 1.0\({}^{\circ}\) is taken for the distance determination. For a given area, the extinction in terms of color excess is a function of distance:
\[E(d)=E^{\rm fgd}(d)+E^{\rm MC}(d) \tag{4}\]
where \(E(d)\) is the total color excess along the line of sight until \(d\), \(E^{\rm fgd}(d)\) is the extinction from the foreground diffuse medium, and \(E^{\rm MC}(d)\) is the contribution by the molecular cloud. The \(E^{\rm MC}(d)\) is described by a Gaussian error function:
\[E^{\rm MC}(d)=\frac{\delta E}{2}\times\left[1+\textit{erf}\left(\frac{d-d^{ \rm MC}}{\sqrt{2}\times\sigma}\right)\right] \tag{5}\]
where \(\delta E\), \(d^{\rm MC}\), and \(\sigma\) represent the extinction 'jump', the distance to the cloud, and the half thickness of the specific cloud region (combined with the distance error) in the given sightline, respectively. However, instead of the second-order polynomial used in Chen et al. (2017); Zhao et al. (2020) or the exponential function in Sun et al. (2021), a constant is assumed to describe the foreground extinction, i.e.:
\[E^{\rm fgd}(d)=E_{0} \tag{6}\]
This modification accounts for the small distances of the clouds, which imply very small foreground extinction. Moreover, there are too few stars to determine the variation of the foreground extinction within such a small distance range, in particular for the TMC at \(\sim 150\)pc.
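For illustration, the sightline model of Equations 4-6 can be written compactly as follows; this is a sketch rather than the authors' implementation, with the parameter ordering chosen here for convenience.

```python
import numpy as np
from scipy.special import erf

def extinction_model(d, E0, dE, d_mc, sigma):
    """E(d) = E_fgd + E_MC(d): a constant foreground E0 plus a Gaussian-error-function
    'jump' of amplitude dE centered on the cloud distance d_mc (Equations 4-6)."""
    return E0 + 0.5 * dE * (1.0 + erf((d - d_mc) / (np.sqrt(2.0) * sigma)))
```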
In addition, only the stars with \(E_{\rm G_{BP},G_{RP}}>\)0.15mag, i.e. \(>5\sigma\), are selected to ensure that the 'jump' is significant in the given sightline. Because the 'jump' is likely to appear at the edge of the cloud, we extend the distance range of each molecular cloud (as listed in Table 1) by 100pc on both the near and the far side, so that a jump at the edge of the cloud can still be detected.
An MCMC analysis is performed to find the best set of parameters in Equation 4 under uniform priors for all parameters, i.e. the set that maximizes the likelihood defined as:
\[L(\mathbf{x}\mid\theta)=\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}\,\sigma_{E_{i}}}\exp\left(-\frac{(E_{i}-E(d_{i}\mid\theta))^{2}}{2\sigma_{E_{i}}^{2}}\right) \tag{7}\]
where \(\theta\) denotes the parameters to be determined, i.e. \(E_{0}\), \(\delta E\), \(d^{MC}\) and \(\sigma\); \(E_{i}\) is the color excess calculated from the blue-edge method and \(E(d_{i}\mid\theta)\) is the model color excess from Equation 4 evaluated at \(d_{i}\) with parameters \(\theta\); \(\sigma_{E_{i}}\) is the error of the color excess; \(\mathbf{x}\) is the data used to fit the equation, including \(E_{i}\), \(d_{i}\) and \(\sigma_{E_{i}}\); and \(n\) is the total number of stars in each pixel. We take the distance uncertainty into account by combining it with the uncertainty of the derived color excess, following Chen et al. (2019) (c.f. their Equation 2). With the distances from Bailer-Jones et al. (2021), the \(\sigma_{E_{i}}\) in the likelihood function is given by:
\[\sigma_{E_{i}}^{2}=\sigma_{E_{i}^{obs}}^{2}+\sigma_{E_{i}^{BlueEdge}}^{2}+(E_ {i}\frac{\sigma_{d_{i}}}{d_{i}})^{2} \tag{8}\]
where \(\sigma_{E_{i}^{\rm obs}}\) is the uncertainty of the observed color index, \(\sigma_{E_{i}^{\rm BlueEdge}}\) is the uncertainty introduced by the blue-edge method as discussed in Section 3, and \(E_{i}\frac{\sigma_{d_{i}}}{d_{i}}\) results from the distance uncertainty, which is only an approximation under the assumption that the dust opacity is constant along the line of sight. The distance uncertainties are simply adopted as \(\sigma_{d_{i}}=\frac{d_{hi}-d_{lo}}{2}\), where \(d_{hi}\) and \(d_{lo}\) are the upper and lower bounds in the Bailer-Jones et al. (2021) distance catalog.
Stars that lie within both the angular radius and the extended distance range of the specific MC are adopted to fit Equations 4, 5 and 6. The MCMC procedure (Foreman-Mackey et al., 2013) is performed to fit the parameters in the model. The 'burn-in' chain has 50 walkers and 500 steps to stabilize the chain before the final Monte Carlo simulation. Then 3000 steps with 50 walkers are run to estimate the final parameters and their errors. The median values (50th percentile) of the final chain are taken as the best estimates, and the uncertainties correspond to the 16th and 84th percentiles. Although the input distance range is extended, the result is required to satisfy the following condition so that the 'jump' is still in the distance range of the molecular cloud:
\[d_{\rm lower}^{\rm MC}<d^{\rm MC}-\sigma<d^{\rm MC}+\sigma<d_{\rm upper}^{\rm MC} \tag{9}\]
where \(d_{\rm lower}^{\rm MC}\) and \(d_{\rm upper}^{\rm MC}\) mark the lower and upper limit of the MC's distance, \(d^{\rm MC}\) and \(\sigma\) mark the distance and the half thickness with error of the specific cloud region.
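The following condensed sketch shows how such a fit could be set up with emcee; it reuses the `extinction_model` helper sketched above, and the prior bounds `lo` and `hi` are placeholders to be set per sightline rather than values from this work.

```python
import numpy as np
import emcee

def log_likelihood(theta, d, E, sigma_E):
    """Gaussian log-likelihood of Equation 7; sigma_E already combines the
    photometric, blue-edge and distance terms of Equation 8."""
    model = extinction_model(d, *theta)
    return -0.5 * np.sum((E - model)**2 / sigma_E**2 + np.log(2 * np.pi * sigma_E**2))

def log_posterior(theta, d, E, sigma_E, lo, hi):
    if np.any(theta < lo) or np.any(theta > hi):      # uniform (flat) priors
        return -np.inf
    return log_likelihood(theta, d, E, sigma_E)

def fit_cloud_distance(d, E, sigma_E, lo, hi, nwalkers=50):
    ndim = 4                                          # E0, dE, d_mc, sigma
    p0 = np.random.uniform(lo, hi, size=(nwalkers, ndim))
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
                                    args=(d, E, sigma_E, lo, hi))
    state = sampler.run_mcmc(p0, 500)                 # burn-in
    sampler.reset()
    sampler.run_mcmc(state, 3000)                     # production chain
    chain = sampler.get_chain(flat=True)
    return np.percentile(chain, [16, 50, 84], axis=0) # best estimates and uncertainties
```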
We adopt the Gelman\(-\)Rubin statistic to determine whether the fit reaches convergence. The MCMC fitting is regarded as converged if the square root of \(\hat{R}\) (\(\sqrt{\hat{R}}\)) is smaller than 1.01 for all the parameters (Vehtari et al., 2019). We first run the chain twice for each sightline and adopt the parameter set with the smaller \(\sqrt{\hat{R}}\) for \(d^{\rm MC}\). If \(\sqrt{\hat{R}}\) is smaller than 1.01 for every parameter, this set of parameters is regarded as converged and taken as the final result for the given sightline; about 55% of the fittings converge under this criterion. Moreover, the integrated autocorrelation time \(\tau\) is used to check the effective number of independent samples for the converged fittings (typically, chains longer than about 50\(\tau\) are sufficient; Foreman-Mackey et al., 2013). Figure 8 shows the convergence results for the distance (\(d^{\rm MC}\)). The right panel of Figure 8 indicates that almost all (about 99%) of the converged fittings have sufficient independent samples, which strengthens the validity of the results selected by \(\hat{R}\).
Figure 9 displays four examples of the fitting results, one for each MC, i.e. the \(\hat{R}\) for the distance parameter (\(d^{\rm MC}\)) and the distribution of the posterior samples of \(d^{\rm MC}\). It can be seen that the model follows the trend of the observational points well in the expected distance range and that the fitting is well converged.
In some sightlines a one-cloud model is not optimal, because two or more clouds overlap along the line of sight. As an example, we test a two-cloud sightline, the fourth example in Figure 9, which traverses both the TMC and the CMC, with four models (Figure 10): (1) the distance range is limited to 100-350pc and a one-cloud model is used, which converges and detects the TMC; (2) the distance range is limited to 300-700pc and a one-cloud model is used, which converges and yields the distance of the CMC; (3) the distance range is 100-700pc and a one-cloud model is used, which gives a result similar to case (2), i.e. it detects the CMC, the cloud with higher extinction and more stellar tracers; and (4) the distance range is again 100-700pc but a two-cloud model is used, which detects two distances, those of the TMC and the CMC, that coincide with the one-cloud fits with preselected distance ranges. This shows that the one-cloud model with a preselected distance range yields the same results as a two-cloud model without distance preselection. The third model also shows that, when directly applied to a two-cloud sightline, a one-cloud model detects the cloud with the highest extinction or the most stellar tracers. The clouds are complexes and may contain multiple cloudlets spread over a large range of space, with sub-structures mostly dispersed in distance. In our analysis the spatial bin is only 0.2\({}^{\circ}\) on a side, within which we do not consider complexes. Indeed, a small distance separation (say less than 5pc) would be difficult to distinguish with the present accuracy.
## 5 Results and Discussions
### The extinction structure of individual molecular cloud
With the distance and the sliced extinction determined, the 2D extinction map can be decomposed into an individual map for each MC. In Figures 11, 12, 13 and 14 for the four clouds, the left and right panels show \(E_{\rm G_{BP},G_{RP}}\) and \(E_{\rm J,K_{s}}\) respectively, while the middle panels show the distance structure over the background contour map of \(E_{\rm G_{BP},G_{RP}}\). The blanks denote positions with no reliable result.
#### 5.1.1 Taurus
The extinction map of TMC is integrated over distance from 0 to 250pc and displayed in Figure 11. TMC has four prominent substructures: TMC1 and TMC2 (Lombardi et al., 2010), the most studied regions; the Tau Ring (Zucker et al., 2021); and the TMC filament. These four substructures are clearly visible in the extinction map (Figure 11), where the boundaries of TMC1 and TMC2 are taken from Lombardi et al. (2010) and Dharmawardena et al. (2022), and the Tau Ring and the filament are plotted according to Bialy et al. (2021). Consistent with previous studies, TMC1 presents the most serious extinction: its densest position has \(E_{\rm G_{BP},G_{RP}}>1.5\)mag or \(E_{\rm J,K_{s}}>0.6\)mag. TMC2 also has high extinction, though smaller than TMC1; the extinction map shows that some positions within TMC2 also have \(E_{\rm J,K_{s}}>0.6\)mag. In contrast, the Tau Ring and the filament are not so dense, having a moderate extinction mostly with \(E_{\rm G_{BP},G_{RP}}<1.0\)mag; accordingly, their structure looks much more diffuse than TMC1 and TMC2.
In terms of distance, TMC1 and TMC2 are close to each other: TMC1 extends from 129pc to 157pc and TMC2 from 132pc to 156pc. This result coincides with that of Yan et al. (2019), who found the average distance of the TMC to be \(145^{+12}_{-16}\)pc, and also agrees with the result of Zucker et al. (2021) that the TMC extends from 131pc to 168pc. Meanwhile, the TMC filament and the Tau Ring are at a comparable distance of around 174pc, clearly farther than TMC1 and TMC2. But these structures are connected. Specifically, both TMC1 and TMC2 extend to larger distances with increasing longitude and finally connect to the Tau Ring at the edges. Independently, the Tau Ring can be described by an ellipse as suggested by Bialy et al. (2021), whose semi-major and semi-minor axes are 39pc and 26pc respectively (c.f. Figure 5 in Bialy et al., 2021). The center of the ellipse is located at \((l,b,d)=(179^{\circ}.5,\ -14^{\circ}.2,\ 179\)pc). The distance of the Tau Ring extends from \(\sim\)150pc to 220pc, where the closer side lies at higher latitude and the farther side at lower latitude. The TMC filament extends toward the midplane area at a distance near 174pc. Part of this filament overlaps with the CMC along the sightline. Such overlapping can be seen in the fourth panel of Figure 9, where two 'jumps' are visible: the first is induced by the TMC filament at about 174pc and the second by the CMC at around 480pc. Fortunately, the distance separates them unambiguously in this work.
In addition to the four sub-structures that belong to the TMC, there is a large-scale bow-like structure, to which the Tau Ring belongs, that appears in the extinction slices (Figure 5) from 175pc to 250pc. The shell and the bow are discussed in Section 5.2.1.
#### 5.1.2 Perseus
The extinction map of PMC is integrated over the distance from 250pc to 350pc and displayed in Figure 12. Consistent with previous studies, the Perseus Main structure is obvious. There are two other substructures that appear in a few 3D extinction maps, e.g. in Leike et al. (2020) and as filamentary structures in Dharmawardena et al. (2022). Here, they are clearly recognizable in the middle panel of Figure 12 in that their distances increase with longitude, the same as for the Perseus Main. In addition, their extinction distribution is continuous with the Perseus Main. Thus we consider them part of PMC and name them Per Arm1 and Per Arm2 respectively.
In comparison with TMC, PMC is not so dense. The most serious extinction occurs in the Main part, where the largest color excess \(E_{\rm G_{BP},G_{RP}}\) is about 1.0mag. The Arm1 and Arm2 have an extinction of about \(E_{\rm G_{BP},G_{RP}}\sim 0.4\)mag, i.e. \(A_{\rm V}\sim 1.0\)mag, consistent with the feature of a translucent molecular cloud. Indeed, the extension of the cloud in the radial direction is only about 30pc, which can partly account for the relatively low extinction.
There are two famous clusters located within the Perseus Main, i.e. IC 348 (\((l,b)=(160^{\circ}.50,\ -18^{\circ}.27)\)) and NGC 1333 (\((l,b)=(158^{\circ}.34,\ -20^{\circ}.64)\)) (Zucker et al., 2018). Figure 12 shows an obvious distance gradient of the PMC Main, which gradually becomes farther from bottom to top within a range of \(\sim\)285pc to \(\sim\)306pc, implying a thickness of \(\sim\)30pc for this cloud. The distances to IC 348 and NGC 1333 are 304pc and 287pc respectively. This result agrees with that of Zucker et al. (2018), who suggested \(295\pm 4\)pc and \(299\pm 3\)pc for IC 348 and NGC 1333 respectively. However, Ortiz-Leon et al. (2018) recommended distances of \(321\pm 27\)pc and \(294\pm 28\)pc for IC 348 and NGC 1333 respectively from the Gaia data. Their results bear a large uncertainty, possibly due to the dispersion of the YSO parallaxes, though they are still consistent within the uncertainties.
A few high latitude molecular clouds including MBM 11-14 (Magnani et al., 1985) are in the area of the Perseus Arm1 in the left panel of Figure 12. Sun et al. (2021b) derived the distance to these four clouds to be 147pc for MBM 11,
278pc for MBM 12, 409pc for MBM 13, and 295pc for MBM 14, which implies that only MBM 12 and MBM 14 are associated with the Perseus Cloud, while MBM 11 is in front of the cloud and MBM 13 is behind the cloud. The distribution of \(E_{\rm G_{BP},G_{RP}}\) looks consistent with this result in that the integrated extinction from 250-350pc is small for MBM 11 and MBM 13.
Dharmawardena et al. (2022) named some filamentary structures, such as the California, Taurus and Perseus Filaments, that are visible in their Figure 12. Compared with our work, Per Arm2 and the California filament are likely the same structure in that they share approximately the same coordinates and distance. Specifically, Per Arm2 ranges from about 290pc to 300pc while the California filament spans about 250pc to 350pc. In contrast, the Taurus and Perseus Filaments may not be the same structure as Perseus Arm1, since their angular sizes are only a few degrees while Perseus Arm1 has an angular size of more than \(10^{\circ}\). Besides, the field in Figure 12 of Dharmawardena et al. (2022) covers about \(154^{\circ}<l<164^{\circ}\) and \(-25^{\circ}<b<-15^{\circ}\), while the majority of Perseus Arm1 is higher than \(-25^{\circ}\) in Galactic latitude.
#### 5.1.3 Orion
The extinction map of OMC is integrated over distance from 350pc to 500pc, and displayed in Figure 13. OMC is usually divided into three parts according to the position and the morphology in the extinction map, i.e. Orion A, Orion B and \(\lambda\) Orionis. Due to the data limits, only the head part is investigated for Orion A. The boundaries of each part in Figure 13 are taken from Lombardi et al. (2011) and Dharmawardena et al. (2022). Orion B exhibits the most serious extinction, where the densest extinction of the head and the tail can reach up to \(E_{\rm G_{BP},G_{RP}}>1.5\)mag. The observable part of Orion A is much less obscured, and \(\lambda\) Orionis is even more diffuse with a color excess of about \(E_{\rm G_{BP},G_{RP}}\sim 0.3-0.5\)mag.
The closest part of OMC is \(\lambda\) Orionis, for which a clear ring-like structure is evident in the extinction map. Figure 13 shows that \(\lambda\) Orionis stretches from \(\sim\)380pc to \(\sim\)400pc, in agreement with the range of 375pc to 397pc found by Zucker et al. (2021). The ring can be divided into two halves: the lower half extends from 380pc to 400pc, while the upper half is connected with the Orion B head, which is at a distance of around 410pc. For Orion B, the closest part is the head at 410pc at \(b=-10^{\circ}\), and the distance slowly increases to 420pc towards the tail. For the head of Orion A, the distance is about 420pc, which is larger than the \(393\pm 25\)pc of Großschedl et al. (2018) obtained using YSOs as tracers. Overall, Orion A and B are at comparable distances and slightly farther than \(\lambda\) Orionis, and OMC is not as extended in the radial direction as TMC.
#### 5.1.4 California
The extinction map of CMC is integrated over distance from 400pc to 600pc and displayed in Figure 14. The densest position has \(E_{\rm G_{BP},G_{RP}}>1.2\)mag, slightly higher than PMC, though it still resembles a translucent rather than a dense molecular cloud.
As shown in Figure 14, the structure of CMC is comparatively simple. It is a sheet extending from \(\sim 448\)pc to \(\sim\)504pc within \((l,b)=([160^{\circ},170^{\circ}],\ [-10^{\circ},-5^{\circ}])\). A bubble structure (the name is from Rezaei Kh. & Kainulainen, 2022) exists at \((l,b)=([155^{\circ},160^{\circ}],\ [-13^{\circ},-7^{\circ}])\) in the extinction map at a distance from 440pc to 448pc. Rezaei Kh. & Kainulainen (2022) present a 3D view of CMC in their Figure 2, in which the dense region of the bubble lies at smaller longitudes and appears around 455pc, while the filament at larger longitudes lies at around 495pc to 515pc, in agreement with this work.
### The Shell-Like Structure
#### 5.2.1 The Per-Tau Shell and a Bow Like Structure
Bialy et al. (2021) revealed the Per-Tau Shell, an extended, near-spherical shell associated with the PMC and TMC. They discuss a scenario in which the ISM swept up by supernova and stellar feedback events forms an expanding shell containing both TMC and PMC. They suggest that the Per-Tau Shell projects as a circle with center at \((l,b,d)=(161^{\circ}.1,\ -22^{\circ}.7,\ 218\)pc) and a radius of 78pc.
The bow-like structure is most obvious in the 200-225pc slice in Figure 5, while visible from \(d=175\)pc to \(d=250\)pc. Thus the extinction is integrated from 175pc to 250pc to increase the visibility and shown in the left panel of Figure 15, where the identified shell is marked by the red dashed line and the Per-Tau shell is indicated by the black dashed line. It can be seen that the lower part of the shell coincides with the Per-Tau Shell in both the coordinates and the distance. At the top, the Per-Tau Shell is consistent with the TMC filament visible in the 175-200pc slice, while the
bow structure identified in this work stretches to another filament that is at higher latitude than the TMC filament and visible in the 200-225pc slice. Overall, the bow-like structure around 180-220pc is further than the main substructures of TMC, i.e. TMC1 and TMC2 around 140-160pc, therefore it can be regarded as an independent structure.
#### 5.2.2 The Low-extinction Rings
Three other ring-like structures are revealed in the extinction map and shown by the red dashed circles in the right panel of Figure 15. They are farther than the known Per-Tau Shell, and the extinction is integrated from 250pc to 350pc. Indeed, they are in the same distance range as PMC, so the two high-extinction structures in Figure 15 belong to PMC rather than to the rings. Excluding the PMC structures, the projected centers are located approximately at \((l,b)=(152^{\circ},\ -1^{\circ})\) with a radius of 13\({}^{\circ}\) for R1, \((l,b)=(150^{\circ},\ -5^{\circ})\) with a radius of 8\({}^{\circ}\) for R2, and \((l,b)=(168^{\circ},\ -17^{\circ})\) with a radius of 10\({}^{\circ}\) for R3. R1 and R2 are visible in the 250pc to 300pc slice, and R3 is visible in the 300pc to 350pc slice in Figure 5. For an assumed distance of 300pc, the linear radii of the three rings are about 68pc, 52pc, and 42pc for R1, R2 and R3 respectively, which is compatible with the size of an old supernova remnant. Moreover, the color excess in the main part of the rings is only \(\sim 0.1\)mag (at the 3\(\sigma\) level) to 0.2mag in \(E_{\rm G_{BP},G_{RP}}\), i.e. \(A_{V}\sim 0.2\)mag to 0.5mag, smaller than that of the above bow-like structure, while consistent with an old supernova remnant. However, they are not in Green's catalog of supernova remnants (Green, 2019). Further identification of the rings is interesting but beyond the scope of this work.
The parameters of all the substructures identified in the four clouds are summarized in Table 2.
## 6 Summary
The extinction structure is studied using high-precision color excesses of the stars in the sky area of the Taurus, Orion, Perseus and California molecular clouds. The intrinsic color indexes are derived by the blue-edge method from the atmospheric parameters obtained by the LAMOST spectroscopic survey, and the observed ones are calculated from the Gaia and 2MASS photometry in the \(G_{\rm BP}\), \(G_{\rm RP}\), \(J\) and \(K_{\rm s}\) bands. The resultant error is about \(\sim\)0.03mag and \(\sim\)0.07mag for \(E_{\rm G_{BP},G_{RP}}\) and \(E_{\rm J,K_{s}}\) respectively.
In combination with the distances measured by Gaia, a distance-sliced extinction map with a step of 25pc is built by assuming that the extinction increases monotonically with distance. It separates the clouds well by distance and delimits the range of each cloud in 3D space. In addition, the distance to each cloud segment is determined more accurately from the extinction-jump model, i.e. the extinction increases sharply at the distance of the cloud segment. The extinction map of each cloud is then obtained by integrating the extinction over the distance range of the cloud, which includes some low-extinction regions. The extinction structure confirms the previously identified dense sub-structures such as TMC1, TMC2, the Tau Ring, Orion A, Orion B, \(\lambda\) Orionis, and the Perseus Main. It also reveals additional structures. Two arms in the Perseus cloud (Perseus Arm2 is similar to the California filament in Dharmawardena et al. (2022)) are identified from their geometrical connection with the Perseus Main and their evident extinction. A bow-like structure is present at a distance around 200pc, which overlaps partly with the Per-Tau Shell but deviates from it at about \(b=-10^{\circ}\). Three new rings are visible at the level of \(E_{\rm G_{BP},G_{RP}}\sim 0.1-0.2\)mag. The stellar color excesses and the extinction maps will be used in future work to study the extinction law in star-forming regions and its dependence on the environment.
## Acknowledgments
We are grateful to Drs. Jian Gao, Haibo Yuan, Jun Li, Shu Wang, Cunying Xiao and Mr. Tianding Wang for their friendly help and discussion. We thank the anonymous referee for very useful suggestions to improve the work. This work is supported by the NSFC projects 12133002 and 12203016, National Key R&D Program of China No.2019YFA0405503, CMS-CSST-2021-A09, Natural Science Foundation of Hebei Province (No.A2022205018) and Science Foundation of Hebei Normal University (No.L2022B33). This work has made use of the data from LAMOST, Gaia and 2MASS.
scikit-learn (Pedregosa et al., 2011), emcee (Foreman-Mackey et al., 2013), dustmaps (Green, 2018). |
2303.11553 | Dynamic Vertex Replacement Grammars | Context-free graph grammars have shown a remarkable ability to model
structures in real-world relational data. However, graph grammars lack the
ability to capture time-changing phenomena since the left-to-right transitions
of a production rule do not represent temporal change. In the present work, we
describe dynamic vertex-replacement grammars (DyVeRG), which generalize vertex
replacement grammars in the time domain by providing a formal framework for
updating a learned graph grammar in accordance with modifications to its
underlying data. We show that DyVeRG grammars can be learned from, and used to
generate, real-world dynamic graphs faithfully while remaining
human-interpretable. We also demonstrate their ability to forecast by computing
dyvergence scores, a novel graph similarity measurement exposed by this
framework. | Daniel Gonzalez Cedre, Justus Isaiah Hibshman, Timothy La Fond, Grant Boquet, Tim Weninger | 2023-03-21T02:44:15Z | http://arxiv.org/abs/2303.11553v2 | # Dynamic Vertex Replacement Grammars
###### Abstract
Context-free graph grammars have shown a remarkable ability to model structures in real-world relational data. However, graph grammars lack the ability to capture time-changing phenomena since the left-to-right transitions of a production rule do not represent temporal change. In the present work, we describe dynamic vertex-replacement grammars (DyVeRG), which generalize vertex replacement grammars in the time domain by providing a formal framework for updating a learned graph grammar in accordance with modifications to its underlying data. We show that DyVeRG grammars can be learned from, and used to generate, real-world dynamic graphs faithfully while remaining human-interpretable. We also demonstrate their ability to forecast by computing dyvergence scores, a novel graph similarity measurement exposed by this framework.1
Footnote 1: [https://github.com/daniel-gonasalez-cedre/DyVeRG](https://github.com/daniel-gonasalez-cedre/DyVeRG).
## 1 Introduction
Like the string grammars upon which they are based, graph grammars usually deal with static data. Although it might be attractive to think of LHS \(\rightarrow\) RHS replacement schemes as indicative of change, growth, or evolution over time, this is rarely the case in grammar-based formalisms. Instead, grammars are typically used to represent hierarchical refinements of a static structure. The replacements that occur by applying production rules rarely have anything to do with time.
However, modeling time-varying data for real-life processes is fundamentally important for many scholars and scientists. Because graphs are capable of expressing immensely-complicated discrete topological relationships, they are widely used to model real world phenomena. In particular, temporal graph models have come to prominence to account for the time-varying nature of many real phenomena. For example, the Temporal Exponential Random Graph Model (TERGM) [18], Dynamic Stochastic Block Model (ARSBM) [27], and certain versions of newer Graph Neural Network models (GraphRNN, GRAN) [26, 41] are able to fit sequential graph data and make predictions about future relationships, but these models are difficult to inspect and tend to break down.
Graph grammars have seen a recent increase in popularity, with applications in molecular synthesis [15, 23, 39], software engineering [24, 25], and robotics [43]. Related models focusing on subgraph-to-subgraph transitions are readily interpretable, but need to be hand-tuned to model subgraphs of a predetermined (usually very small) size, usually for computational complexity reasons [20, 5]. These transition models tend to set out a schema for the set of permitted transitions and perform modeling by simply counting transition frequencies. Despite their simplicity, these transition models are effective tools for understanding changes in dynamic systems. However, these models struggle with larger changes outside of 3-or-4-node (or similarly small) subgraph sizes [30].
More recently, researchers have found data-driven ways to learn representative hyperedge replacement grammars (HRG) [2, 38] and vertex replacement grammars (VRG) [32, 33]. These models permit the extraction of production rules from a graph and the resulting grammar can be used to reconstruct the graph or generate similar graphs. However, as discussed earlier, these models are still limited by the inherent static nature of the formalism. The lack of a dynamic, interpretable, learnable model presents a clear challenge to modeling real-world relational data.
In the present work, we tackle this challenge by introducing the **D**ynamic **V**ertex **R**eplacement **G**rammars (DyVeRG). As the name implies, this model extends the VRG framework, which typically begins with a hierarchical clustering of the graph and then extracts graph rules in a
bottom-up fashion from the resulting dendrogram. In order to adapt VRGs to the dynamic setting, the DyVeRG model finds stable mappings between filtrations of the nodes in the dynamic graph across time. The filtration mappings provide a transparent way to inspect the changes in the graph without significant performance degradation.
This dynamic graph grammar takes the form of a sequence of production rules we call _rule transitions_ that are interpretable and inspectable. An example of such a rule transition is illustrated in Fig. 1: the rule on the left is extracted from a graph \(G_{t}\) at time \(t\); the rule on the right covers the same nodes, but corresponds to time \(t+1\) and incorporates changes from \(G_{t+1}\). In this example, the nonterminal node on the LHS of the left production rule signifies that the RHS has two boundary edges (used to connect elsewhere in the graph). The RHS also has four terminal nodes and three terminal edges. However, as the graph changes between times \(t\) and \(t+1\), the topology of the rule on the right of Fig. 1 changes correspondingly. The blue dotted edges illustrate the addition of one terminal edge and one new boundary edge, which is why the nonterminal label on the LHS increased from 2 to 3. The red nodes and wavy dotted lines represent the deletion of a node and edge respectively across this temporal chasm.
The paper is organized as follows. We first introduce some basic concepts and terminology. Then, we describe the DyVeRG model with the help of illustrations and examples. We then introduce the dyvergence score, a byproduct of DyVeRG, and explain how rudimentary forecasting can be done as well as the more-traditional graph generation. Finally, we provide a quantitative and qualitative analysis of the model on real-world dynamic graphs and compare its predictive performance against other generative models.
## 2 Preliminaries
A graph \(G=(V,E)\) is a set of nodes \(V\) with a relation \(E\subseteq V\times V\) defining edges between the nodes. We say that \(G\) is connected if there is a path within \(G\) between any two nodes. If \(E\) is symmetric, then we say \(G\) is undirected; otherwise, \(G\) is directed. We say that \(G\) is node-labeled if we have a function \(\lambda:V\to L\) that assigns a label from \(L\) to each node in \(G\). If we have two such node-labeling functions, we call the graph doubly-node-labeled. We say that \(G\) is edge-weighted if we have a function \(\omega:E\to W\) assigning each edge in the graph some weight from \(W\). If these weights are natural numbers, then we say \(G\) is a multigraph, whose edge multiplicities are given by \(\omega\).
There are two common ways to model temporality for graphs: as continuous streams of (hyper-)edges, and as discrete sequences of graph snapshots [22]. In the present work, we consider the latter form of dynamic graph, which we represent as a (finite) sequence \(\langle G_{t}\rangle_{t=0}^{T}=\langle G_{0},\ldots G_{T}\rangle\) of graphs \(G_{t}=(V_{t},E_{t})\).
### Context-Free Grammars.
A context-free grammar (CFG) on strings is determined by a finite set of nonterminal symbols \(N\) with a distinguished starting symbol \(S\in N\), a finite set of terminal symbols \(T\), and a finite set of production rules \(R\subseteq N\times(N\cup T)^{*}\). Each rule \(P_{i}\in R\) represents a transition from a left-hand side (LHS) nonterminal to a finite sequence of symbols on the right-hand side (RHS), each of which is either terminal or nonterminal. We say \(P_{i}\) is terminal if its RHS only contains terminal symbols; otherwise, \(P_{i}\) is nonterminal.
Given a string \(\Sigma=\sigma_{1}\ldots\sigma_{i}\ldots\sigma_{n}\in(N\cup T)^{*}\), the application of a production rule \(P_{i}\) to a particular nonterminal symbol \(\sigma_{i}\in N\) from \(\Sigma\) involves replacing the symbol \(\sigma_{i}\) with the string on the RHS of \(P_{i}\). Formally, the result of applying \(P_{i}=(\sigma_{i},\pi_{P_{i}})\) to \(\sigma_{i}\) in \(\Sigma\) is a new string \(\tilde{\Sigma}=(\sigma_{1}\ldots\sigma_{i-1})\cdot\pi_{P_{i}}\cdot(\sigma_{i+ 1}\ldots\sigma_{n})\), where \(\cdot\) represents the string-concatenation operation.
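As a small, self-contained sketch (not taken from the authors' codebase), a CFG production rule can be applied to a sequence of symbols as follows.

```python
def apply_cfg_rule(symbols, i, rule):
    """Apply a production rule at position i of a list of symbols.

    `rule` is a pair (lhs, rhs): a nonterminal `lhs` and its replacement
    sequence `rhs` of terminal and nonterminal symbols."""
    lhs, rhs = rule
    assert symbols[i] == lhs, "the rule is not applicable at this position"
    return symbols[:i] + list(rhs) + symbols[i + 1:]

# Example: applying S -> a S b at the nonterminal in ['a', 'S', 'b']
print(apply_cfg_rule(['a', 'S', 'b'], 1, ('S', ['a', 'S', 'b'])))
# ['a', 'a', 'S', 'b', 'b']
```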
### Vertex-Replacement Graph Grammars.
A natural way to generalize CFGs would be to think of the characters in a string like nodes in a graph. We can then think of CFG rules as producing graphs whose nodes are arranged in a path, with attributes given by the different characters in the language and boundary conditions specifying whether or not additional characters can be added at the beginning or end of the string. Clearly, changing the connectivity structure and boundary conditions will lead to rules with more expressive RHS structures.
There are many specific formalisms and nuances, but generally a vertex-replacement grammar (VRG) is given by a finite set of nonterminal symbols \(N\subseteq\mathbb{N}\) with the distinguished starting symbol \(0\in N\) along with a set of terminal symbols \(T\subseteq V_{G}\) representing nodes in a graph. The production rules \(P_{i}\) for a VRG then look like transitions from a nonterminal symbol \(n\in N\) to a doubly-node-labeled multigraph \((H,\lambda_{H},\delta_{H})\) whose first node-labeling function \(\lambda_{H}:V_{H}\to N\cup T\) distinguishes between terminal and nonterminal symbols, and whose second node-labeling
Figure 1: An example of a rule transition comprised of two production rules: the left-rule extracted from a graph at time \(t\) and the right-rule updated at time \(t+1\). Here we see evidence of triadic closure, the disappearance of a node with its incident edge, and resulting changes to the LHS symbol.
function \(\delta_{H}:V_{H}\to\mathbb{N}\) assigns a natural number _boundary degree_ to each node of \(H\).
As was the case with CFGs, we apply rules at nonterminals by replacing those symbols with the structure on the RHS of a suitable production rule, while accounting for VRG rules' nontrivial boundary conditions (_i.e.,_ the boundary degrees). Given a connected, node-labeled multigraph \(G=(V_{G},E_{G},\lambda_{G})\) with a node \(v\in V_{G}\) having nonterminal label \(\lambda_{G}(v)\in N\), the application of a rule \(P_{i}=(\lambda_{G}(v),(H,\lambda_{H},\delta_{H}))\) at \(v\) consists of replacing \(v\) with the graph \(H\) and rewiring the _broken edges_--those edges previously connected to \(v\)--to those nodes in \(H\) such that the number of broken edges incident on a node \(v_{H}\) of \(H\) does not exceed its boundary degree \(\delta(v_{H})\).
Random rewiring is the most rudimentary way to address the boundary condition. If our data were augmented with node labels, we could guide the rewiring process using an estimated assortativity mixing matrix, or by minimizing a loss function computed over the nodes [33]. Even without node labels, we could consider greedy rewiring strategies that try to reduce discrepancy along some measured statistic of the data--_e.g.,_ modularity, average local clustering coefficient, graphlet distribution. For simplicity, we consider only the random approach in the present work.
Formally, we will say that a production rule \(P_{i}=(s,(H,\lambda_{H},\delta_{H}))\) is suitable for a node \(v\in V_{G}\) if \(\lambda_{G}(v)=s\) and \(\deg(v)=\sum_{v_{H}\in V_{H}}\delta_{H}(v_{H})\). This means that the label associated with \(v\) is the same as the nonterminal symbol \(P_{i}\) is expecting, and that the number of broken edges \(v\) will leave behind is the same as the total number of boundary edge slots \(H\) has available. With these conditions, the application of a suitable \(P_{i}\) at a node \(v\) results in a well-defined, though not necessarily deterministic, vertex-to-subgraph substitution.
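A minimal sketch of suitability checking and rule application with uniformly random rewiring is shown below for simple (non-multi) graphs; the rule representation as a tuple `(lhs, (H, labels, boundary))` with dictionary-valued labels and boundary degrees is an assumption of this sketch rather than the authors' data structure.

```python
import random
import networkx as nx

def suitable(G, v, rule):
    """Suitability: matching nonterminal label and matching total boundary degree."""
    lhs, (H, labels, boundary) = rule
    return G.nodes[v]["label"] == lhs and G.degree(v) == sum(boundary.values())

def apply_vrg_rule(G, v, rule):
    """Replace node v with the rule's RHS graph H, rewiring v's broken edges
    uniformly at random among the available boundary-degree slots."""
    lhs, (H, labels, boundary) = rule
    assert suitable(G, v, rule)
    broken = list(G.neighbors(v))                       # endpoints of the broken edges
    G.remove_node(v)
    new = {x: f"{v}::{x}" for x in H.nodes}             # keep RHS node names unique
    G.add_nodes_from((new[x], {"label": labels[x]}) for x in H.nodes)
    G.add_edges_from((new[x], new[y]) for x, y in H.edges)
    slots = [new[x] for x in H.nodes for _ in range(boundary[x])]
    random.shuffle(slots)
    for u, x in zip(broken, slots):                     # random rewiring
        G.add_edge(u, x)
    return G
```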
Typically, we only distinguish between the rules in a grammar _up to isomorphism_ once an appropriate notion of isomorphism for production rules is specified. We say two rules \(\dot{P}=\left(\dot{s},(\dot{H},\dot{\lambda},\dot{\delta})\right)\) and \(\ddot{P}=\left(\ddot{s},(\ddot{H},\ddot{\lambda},\ddot{\delta})\right)\) from a VRG are rule-isomorphic _if and only if_\(\dot{s}=\ddot{s}\) and there is a graph isomorphism \(\dot{H}\simeq\ddot{H}\) that preserves the labels from \(\dot{\lambda}\) and \(\ddot{\lambda}\) (but not necessarily \(\dot{\delta}\) and \(\ddot{\delta}\)).
### Filtrations.
A filtration of a graph \(G=(V,E)\) is a sequence of node partitions \(\mathcal{F}=\langle F_{i}\rangle_{i=1}^{n}\), where each \(F_{i}\) partitions the node set \(V\) into mutually-disjoint subsets--called _covers_--so that each \(F_{i}\) is a refinement of \(F_{i+1}\). When mining real-world networks, filtrations are often the result of hierarchical node clusterings [4, 13], \(k\)-core decompositions [11, 31], and, more recently, methods [28, 34] inspired by persistent homology [10] and topological data analysis [7]. The aforementioned methods usually produce filtrations as an intermediate result for an analysis of a network involving, for example, community detection [42], representation learning [40], or visualization [9]. Filtrations lend themselves well to generative approaches to network structures since they can highlight salient hierarchical and recursive patterns. Vertex-replacement graph grammars can induce filtrations (_cf._ Sec. 3.1) suitable for dynamic graph modeling.
## 3 Dynamic Vertex-Replacement Grammars
Incredible advances in theory and application have enabled researchers to parse real-world networks by learning an appropriate graph grammar--including vertex replacement schemes [15, 32, 33], hyperedge replacement schemes [2, 38], and Kemp-Tenenbaum [19] grammars. An important limitation of these established formalisms is their semantic inability to express temporality in terms of change to their underlying data. This presents a serious problem for practitioners: time does not stand still, and new data may lead to changes in prior beliefs. By associating the grammar extracted from a dataset with a filtration on the data, and describing how filtrations on graphs produce compatible grammars, we construct temporal transitions between grammars by defining transitions between their associated filtrations, which are driven by changes in the data. In the proceeding sections, we detail the **D**ynamic **V**ertex-**R**eplacement **G**rammar (DyVeRG) framework for generalizing VRGs in the time domain.
### Extracting Rules.
We describe in this section how to parse a graph with a VRG and how its associated filtration is computed. For simplicity, we focus on just two temporally-sequential graphs \(G_{t}=(V_{t},E_{t})\) and \(G_{t+1}=(V_{t+1},E_{t+1})\) at a time, though the idea generalizes to an arbitrarily-long finite sequence of graphs. First, an initial filtration \(\mathcal{F}_{t}^{\text{clust}}\) on \(G_{t}\) is produced using the Leiden hierarchical clustering algorithm [35]. We choose to use Leiden for our analyses as opposed to the myriad alternatives--the Louvain algorithm [6], smart local moving [36], hierarchical Markov clustering [37], recursive spectral bi-partitioning [17]--because its iterative modularity-maximization approach is intuitively appealing, and it realizes better performance [35] than Louvain and smart local moving (on which Leiden is based) while remaining efficient on larger graphs.
Taking inspiration from clustering-based node replacement grammars [32], we use the filtration \(\mathcal{F}_{t}^{\text{clust}}\) derived from the hierarchical clustering to recursively extract rules for the grammar. Starting from the bottom, we consider subfiltrations covering at most \(\mu\) nodes (terminal or nonterminal) total, where \(\mu\in\mathbb{N}_{+}\) is set _a priori_ to limit the maximum size of any rule's RHS. Among all subfiltrations of size at-most \(\mu\), we then select a subfiltration \(\mathcal{F}^{*}\) that minimizes the overall description length of the grammar. \(\mathcal{F}^{*}\) then determines a rule \(P^{*}\) that gets added to our grammar
and \(\mathcal{F}_{t}^{\text{clust}}\) is compressed until every node is in the same cover, as described by Sikdar _et al._[32].
Concurrently, we also construct a filtration \(\mathcal{F}_{t}^{\text{gram}}\) whose node covers are determined by the right-hand sides of the rules in the grammar as they are extracted. The filtration \(\mathcal{F}_{t}^{\text{gram}}\) acts as a rule-based hierarchical decomposition of \(G_{t}\), keeping track of the parallel hierarchical structure shared by the rules in the grammar and the nodes in the graph. This naturally produces a one-to-one correspondence between the rules \(R_{t}=\{P_{t,1},\dots P_{t,r}\}\) in our (unweighted) grammar and their corresponding covers in \(\mathcal{F}_{t}^{\text{gram}}\), allowing us to construct a surjective association \(f_{t}:V_{t}\to R_{t}\) between each node \(v\in V_{t}\) and the unique rule \(f_{t}(v)\in R_{t}\) that was extracted with \(v\) as a terminal node. Further, we keep track of the particular terminal symbol node on the right-hand side of \(f_{t}(v)\) corresponding to \(v\) when \(f_{t}(v)\) was extracted. We call this \(\alpha_{t}(v)\), the _alias_ of \(v\), where \(\alpha_{t}:V_{t}\hookrightarrow T\). An illustration of this process on a small example network is shown in Fig. 2. The two filtrations are highlighted along with a hierarchical decomposition of the graph induced by the grammar's rules.
The _root_ of a grammar is defined to be the rule whose left-hand side is the distinguished starting symbol \(S\). Clearly, this rule covers every node in \(G_{t}\). Given two rules \(\dot{P}_{t},\tilde{P}_{t}\in R_{t}\), we say \(\dot{P}_{t}\) is an _ancestor_ of \(\tilde{P}_{t}\)_iff_ every node covered by \(\tilde{P}_{t}\) in the filtration is also covered by \(\dot{P}_{t}\). If additionally \(\dot{P}_{t}\neq\tilde{P}_{t}\), then it is a _proper ancestor_. We define one rule to be a _descendant_ of another conversely to how ancestors are defined. Finally, a _common ancestor_ of a set of rules \(\tilde{R}_{t}\subseteq R_{t}\) is a rule that is an ancestor of every rule in \(\tilde{R}_{t}\), and the _least common ancestor_ is the one having no common ancestor of \(\tilde{R}_{t}\) as a proper descendant. Note that the least common ancestor of a nonempty subset of \(R_{t}\) always exists since the root rule is an ancestor of every rule in \(\mathcal{G}_{t}\).
### Updating the Filtration
We will refer interchangeably to the covers in the filtration \(\mathcal{F}_{t}^{\text{gram}}\) and the rules \(P_{t,i}\in R_{t}\) in the grammar \(\mathcal{G}_{t}\). For a visual summary of the proceeding description, please refer to Fig. 3. First, we categorize the edges \((u,v)\in E_{t}\cup E_{t+1}\):
1. _persistent_: \(u\in V_{t}\), \(v\in V_{t}\), \(u\in V_{t+1}\), \(v\in V_{t+1}\), and \((u,v)\in E_{t}\cap E_{t+1}\)
2. _internal additions_: \(u\in V_{t}\), \(v\in V_{t}\), \(u\in V_{t+1}\), \(v\in V_{t+1}\), and \((u,v)\in E_{t+1}\setminus E_{t}\)
3. _frontier additions_: \(u\in V_{t}\), \(v\not\in V_{t}\), \(u\in V_{t+1}\), \(v\in V_{t+1}\), and \((u,v)\in E_{t+1}\setminus E_{t}\)
4. _external additions_: \(u\not\in V_{t}\), \(v\not\in V_{t}\), \(u\in V_{t+1}\), \(v\in V_{t+1}\), and \((u,v)\in E_{t+1}\setminus E_{t}\)
5. _edge deletions_: \(u\in V_{t}\), \(v\in V_{t}\), and \((u,v)\in E_{t}\setminus E_{t+1}\)
Examples of edges from each category are shown in Fig. 4. These classes determine how each edge induces a change in the filtration: class 1 edges do not influence the filtration, class 2 edges add new connections between already-existing nodes (thus altering the induced subgraph covers of the filtration), class 3 edges introduce a new neighbor for an already-existing node, class 4 edges produce two entirely-new neighboring nodes, and class 5 edges account for the removal of connections from the graph, which may or may not be associated with nodes' exodus from the network. Of these, only classes 3, 4, and 5 are capable of causing structural changes to the hierarchy of the filtration, but all except for class 1 edges will affect the grammar's rules.
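The classification can be sketched as follows for networkx-style snapshots; self-loops and multi-edges are ignored here for simplicity, and the class names are shorthand for the five categories above.

```python
def classify_edges(G_t, G_next):
    """Assign each edge of E_t ∪ E_{t+1} to one of the five classes
    (simple graphs, no self-loops assumed)."""
    classes = {"persistent": [], "internal": [], "frontier": [], "external": [], "deleted": []}
    E_t = {frozenset(e) for e in G_t.edges()}
    E_next = {frozenset(e) for e in G_next.edges()}
    for e in E_t | E_next:
        u, v = tuple(e)
        if e in E_t and e in E_next:
            classes["persistent"].append((u, v))        # class 1
        elif e in E_t:
            classes["deleted"].append((u, v))           # class 5
        else:
            old_endpoints = (u in G_t) + (v in G_t)     # how many endpoints already existed
            key = {2: "internal", 1: "frontier", 0: "external"}[old_endpoints]
            classes[key].append((u, v))                 # classes 2, 3, 4
    return classes
```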
#### 3.2.1 Internal Additions.
If an edge \((u,v)\) corresponds to an _internal addition_, we first find the covering rules \(f_{t}(u)=P_{u}\) and \(f_{t}(v)=P_{v}\) of the nodes incident on that edge and let \(G_{u}\) and \(G_{v}\) be their respective RHS graphs. If \(G_{u}=G_{v}\), then we simply add an edge between \(\alpha_{t}(u)\) and \(\alpha_{t}(v)\). However, if \(G_{u}\neq G_{v}\), then we find their least common ancestor and add an edge between the appropriate nodes on its right-hand side. Note that if a nonterminal symbol is incident on the added edge, then this will necessarily change the symbol--recall that the nonterminal symbols are defined to be the sum of their degree with their boundary degree. This change in the symbol must be propagated _down_ through the hierarchy by commensurately increasing the LHS's of rules and adding boundary degrees.

Figure 2: Overview of the extraction process for a VRG on a static graph \(G_{t}\) pictured in (a). A filtration induced by a hierarchical clustering is shown in (b), from which the rules in (d) are extracted bottom-up. At the same time, the filtration in (c) is derived from the node coverings the extracted rules produce.

Figure 3: The graph from Fig. 1(a), the filtration from Fig. 1(c), and the grammar from Fig. 1(d) are shown here with changes from time \(t+1\). The changes to \(G_{t+1}\) in (a) stimulate changes to the filtration in (b), modifying the corresponding rules in the grammar (c). Following Fig. 1, the node and edge addition is shown in blue, while node and edge removal is indicated in red. Nonterminal nodes’ borders are colored blue or red if they increased or decreased in value respectively.
#### 3.2.2 Frontier & External Additions.
We handle frontier and external edge additions jointly by cases. Given an edge \((u,v)\) of class 3 or 4, let \(H_{(u,v)}\triangleleft G_{t+1}\) be the maximally connected induced subgraph of \(G_{t+1}\) containing \((u,v)\).
In the first case, suppose none of the vertices of \(H_{(u,v)}\) coincide with \(G_{t}\). We begin by independently extracting a grammar \(\mathcal{H}_{(u,v)}\) on \(H_{(u,v)}\) with its own induced filtration. We then _merge_ this grammar with \(\mathcal{G}_{t}\) by combining the two filtrations under one larger cover. Specifically, this takes the form of a new root rule whose LHS is \(S\) and whose RHS consists of two disconnected nonterminal symbols--one for \(G_{t}\) and one for \(H_{(u,v)}\)--incorporating the rules of \(\mathcal{G}_{t}\) and \(\mathcal{H}_{(u,v)}\) as descendants. To disambiguate, the LHS's of the root rules of \(\mathcal{G}_{t}\) and \(\mathcal{H}_{(u,v)}\) are updated accordingly.
In the second case, there is at least one node in common between \(H_{(u,v)}\) and \(G_{t}\). Define the _frontier_\(F(G_{t},H_{(u,v)})\) between \(G_{t}\) and \(H_{(u,v)}\) to be the collection of all such class 3 edges. Then, for each edge \((u_{G},v_{H})\in F(G_{t},H_{(u,v)})\), we find the rules \(f_{t}(u_{G})=P_{u_{G}}\) and \(P_{v_{H}}\) from \(\mathcal{G}_{t}\) and \(\mathcal{H}_{(u,v)}\) that cover \(u_{G}\) and \(v_{H}\) respectively, and increase their boundary degrees by \(1\) to indicate that these nodes should expect to receive a new edge. This increase in boundary degree necessitates a change to the LHS symbols of the two rules, which in turn induces more changes to their ancestor rules; these changes propagate _up_ to the roots of their respective hierarchies. Finally, once these changes have been made for each frontier edge, a new rule is created (_cf._ the prior case) with two nonterminal symbols connected by as many edges as there are in \(F(G_{t},H_{(u,v)})\), concluding the subgrammar-merging process. This accounts for all of the class 3 and 4 edges that participate in the connected component \(H_{(u,v)}\).
#### 3.2.3 Deletions.
An edge deletion \((u,v)\) must have both of its incident nodes existing in \(G_{t}\), but they need not exist in \(G_{t+1}\). As a result, we handle class 5 edges by first finding the covering rules \(f_{t}(u)\) and \(f_{t}(v)\) and removing the edge between the nodes corresponding to \(\alpha_{t}(u)\) and \(\alpha_{t}(v)\) from their common ancestor. Note that if this edge was incident on a nonterminal symbol, its removal will cause a cascade of changes that must be propagated _down_ the hierarchy. Then, if \(u\) is not present in \(G_{t+1}\), we also remove the node \(\alpha_{t}(u)\) from the RHS of \(f_{t}(u)\); similarly with \(v\) and the removal of \(\alpha_{t}(v)\) from \(f_{t}(v)\).
### Measuring Deviation
Now that we know how to take a grammar \(\mathcal{G}_{t}\) and temporally modify it into \(\mathcal{G}_{t+1}\), we can analyze what the specific changes were between the two grammars. From the process delineated in Sec. 3.1, we obtain a natural correspondence \(\pi_{t}:R_{t}\to R_{t+1}\) between every rule \(P_{t}\in R_{t}\) from \(\mathcal{G}_{t}\) and its updated version \(\pi_{t}(P_{t})\in R_{t+1}\) in \(\mathcal{G}_{t+1}\). We can use this mapping to quantify the difference between the two grammars in terms of the number of changes introduced by the temporal update process. Given a rule \(P_{t+1}\in R_{t+1}\), we define the change introduced by this rule by computing the graph edit distance (GED) [1] between the RHS's of \(P_{t+1}\) and \(\pi_{t}^{-1}(\{P_{t+1}\})\) (with a small penalty to any modifications that need to be made to make the LHS's the same)2. If \(P_{t+1}\) is a rule introduced as part of the subgrammar-merging process for class iii. and iv. edges, then we let \(\pi_{t}^{-1}(\{P_{t+1}\})\) be the empty graph by definition. By aggregating these edit distances across the rules of \(\mathcal{G}_{t+1}\), we get an indication of how much \(\mathcal{G}_{t}\) had to be perturbed to accommodate the data seen in \(G_{t+1}\). Specifically, we compute
Footnote 2: The notation \(\pi_{t}^{-1}(\{P_{t+1}\})\) denotes the _preimage_ of \(P_{t+1}\) under \(\pi_{t}\).
\[\Delta=-\ln\frac{1}{1+\sum_{P_{t+1}\in R_{t+1}}\operatorname{GED}\!\left(\pi_{t}^{-1}(\{P_{t+1}\}),\,P_{t+1}\right)}\]
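A sketch of this aggregation is shown below; the rule attributes (`lhs`, `rhs`), the `preimage` lookup, and the size of the LHS penalty are illustrative assumptions, and networkx's generic `graph_edit_distance` stands in for whatever GED routine is actually used.

```python
import math
import networkx as nx

def dyvergence(rules_next, preimage, lhs_penalty=1.0):
    """Aggregate per-rule edit distances between the grammars at t and t+1 into Δ.

    `rules_next` : the rules R_{t+1} of the updated grammar
    `preimage(P)`: the corresponding rule of the grammar at time t under π_t,
                   or None for rules created by the subgrammar-merging process."""
    total = 0.0
    for P_next in rules_next:
        P_prev = preimage(P_next)
        prev_rhs = P_prev.rhs if P_prev is not None else nx.Graph()   # empty graph for new rules
        ged = nx.graph_edit_distance(prev_rhs, P_next.rhs)
        if P_prev is not None and P_prev.lhs != P_next.lhs:
            ged += lhs_penalty                 # small penalty for a modified LHS symbol
        total += ged
    return -math.log(1.0 / (1.0 + total))      # equivalently, log(1 + total)
```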
them by how frequently (up to isomorphism) each rule occurred in \(\mathcal{G}_{t+1}\). The idea now is identical to the approach traditionally taken by weighted VRGs. We start with the root rule, which has LHS symbol \(S\), and use the structure on its RHS as our initial graph \(\hat{G}_{t+1}\). We then iteratively grow the graph by randomly selecting a nonterminal symbol in \(\hat{G}_{t+1}\) and randomly sampling a compatible rule to apply at that symbol, with the sampling probability for the rules determined by the frequencies of the possible candidate rules for that nonterminal symbol. Once no nonterminal symbols remain in \(\hat{G}_{t+1}\), we stop and obtain our resulting graph.
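To make this stochastic expansion loop concrete, here is a minimal Python sketch. The `Rule` class, the `grammar` dictionary keyed by LHS symbol, and the uniform random reattachment of boundary edges are simplifying assumptions made purely for illustration; they do not reproduce DyVeRG's exact bookkeeping of boundary degrees.

```python
import random
import networkx as nx

class Rule:
    """Hypothetical, minimal production rule: an LHS symbol, an RHS graph whose nodes may
    carry an 'nt' attribute naming a nonterminal symbol, and a frequency used as a weight."""
    def __init__(self, lhs, rhs, frequency=1):
        self.lhs, self.rhs, self.frequency = lhs, rhs, frequency

def sample_rule(rules):
    # Sample a rule with probability proportional to how often it occurred in the grammar.
    return random.choices(rules, weights=[r.frequency for r in rules])[0]

def generate(grammar, root_symbol="S"):
    """Expand nonterminals until none remain; `grammar` maps an LHS symbol to its list of Rules."""
    g = nx.convert_node_labels_to_integers(sample_rule(grammar[root_symbol]).rhs.copy())
    while True:
        nonterminals = [n for n, d in g.nodes(data=True) if d.get("nt") is not None]
        if not nonterminals:
            return g
        node = random.choice(nonterminals)                 # nonterminal to expand next
        rule = sample_rule(grammar[g.nodes[node]["nt"]])   # compatible candidate rules
        offset = max(g.nodes) + 1
        rhs = nx.convert_node_labels_to_integers(rule.rhs.copy(), first_label=offset)
        boundary = list(g.neighbors(node))                 # edges that must be reattached
        g.remove_node(node)
        g = nx.compose(g, rhs)
        for u in boundary:                                 # simplification: reattach at random
            g.add_edge(u, random.choice(list(rhs.nodes)))
```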
## 4 Evaluation
We perform three types of analysis to better understand the quantitative and qualitative characteristics of the DyVeRG model. In the first quantitative benchmark, we task the model with distinguishing genuine temporal dynamics from realistic imposter data created by other generative models. The second quantitative analysis asks all of the models, including DyVeRG, to generate a graph corresponding to a slice of time from the data; the generated graphs are then compared to the ground truth. We conclude with a short qualitative analysis and interpretation of the temporal transitions DyVeRG induces between grammar rules.
### Datasets
In this evaluation, we consider four dynamic datasets, listed in Tab. 1. DNC Emails and EU Emails are email networks where user email accounts are nodes and an email from one user to another at a given time is represented by an undirected edge labeled with a UNIX time. Both of these datasets are aggregated by month; DNC Emails contains a number of self-edge loops, while EU Emails contains none. The DBLP dataset is an undirected academic coauthorship graph where nodes correspond to researchers and an edge is drawn between two researchers during a particular year _iff_ they coauthor a paper during that year. Finally, the Facebook dataset is an undirected graph tracking friendships on a monthly basis, with two users sharing an edge if they were friends during that month.
We take snapshots \(0\) through \(10\) for each dataset. Because these datasets are dynamic, we summarize their orders and sizes in Fig. 5, noting they tend to grow over time.
### Baselines
We compare the DyVeRG model against 5 baselines. The Erdos-Renyi model generates random graphs of a fixed size \(n\) with probability \(p\) of an edge between any two nodes [12]; for evaluation we set \(n\) and \(p=\nicefrac{{2m}}{{n(n-1)}}\) to the ground-truth values within each timestep. The configuration model of Chung and Lu generates a random graph approximating a given degree distribution [8, 16]; for this baseline, we use the degree distribution from the dataset.
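For concreteness, both of these simple baselines can be instantiated in a few lines with `networkx`; the helper names below are ours, and using `networkx` here is an assumption rather than a description of the authors' code.

```python
import networkx as nx

def erdos_renyi_baseline(G_true):
    """Random graph matching the ground truth's node count, with p = 2m / (n(n-1))."""
    n, m = G_true.number_of_nodes(), G_true.number_of_edges()
    return nx.erdos_renyi_graph(n, 2 * m / (n * (n - 1)))

def chung_lu_baseline(G_true):
    """Chung-Lu graph approximating the ground truth's degree distribution."""
    degrees = [d for _, d in G_true.degree()]
    return nx.expected_degree_graph(degrees, selfloops=False)
```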
The Erdos-Renyi and Configuration models learn very rudimentary features from an input graph. The following three graph models are different in that they take a whole graph as input and use their own inductive biases to learn features. The Stochastic Block Model (SBM) uses matrix reductions to represent graphs with structured communities [21, 29]. Likewise, the more advanced graph recurrent neural network (GraphRNN) [41] is able to learn a generative model from an input collection of graphs by adapting walks over nodes as sequential data. We also provide a static implementation of DyVeRG based on CNRG [32], which we call VeRG, as a final point of comparison.
Note that some of the GraphRNN results are missing from the figures: when training and testing the model on the two NVIDIA GeForce RTX 2080 Ti cards available to us, with 10 GB of RAM each, we regularly ran out of memory on the larger datasets.
### Inference
The goal of this task, given a temporal sequence of graphs \(\langle G_{t}\rangle_{t=0}^{10}\), is to distinguish the graph that genuinely comes next from an assortment of impostors.
For each timestep \(t\in\{0,\ldots 9\}\), we extract a VRG from \(G_{t}\) and update it using \(G_{t+1}\), yielding DyVeRG grammars \(\langle\mathcal{G}_{t}\rangle_{t=1}^{10}\). These grammars are used to compute dyvergence scores (_cf._ Sec. 3.3) for the ground truth. This is performed \(10\) times independently for each \((G_{t},G_{t+1})\) pair, and we let \(D_{t}\) denote the mean.
We use the average ground-truth dyvergences
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & node count & edge count & \# timestamps & \# interactions & \# snapshots \\ \hline DNC Emails & 1.891 & 4.465 & 19.389 & 32.878 & 11 \\ EU Emails & 966 & 16.064 & 30.280 & 327.333 & 19 \\ DBLP & 95.9191 & 164.479 & 21 & 200.792 & 21 \\ Facebook & 61.096 & 614.797 & 736.674 & 788.135 & 29 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the datasets used in the evaluation.
Figure 5: Number of nodes and edges in the datasets over time.
\(\{D_{0},\ldots D_{t-1}\}\) to compute an estimate \(\hat{D}_{t}\) for the expected divergence of the next graph pair \((G_{t},G_{t+1})\)--_i.e._, an estimate for \(D_{t}\). Specifically, we let \(A_{t}=\nicefrac{{\sum_{i=0}^{t}D_{i}}}{{(t+1)}}\) and compute
\[\hat{D}_{t}=A_{t-1}+(D_{t-1}-A_{t-2}). \tag{2}\]
Separately, each impostor model \(\mathcal{M}\) is trained on \(G_{t+1}\) and \(10\) graphs \(\langle M_{t+1,i}\rangle_{i=1}^{10}\) are sampled from its distribution. Dyvergence scores are calculated for these graphs by extracting a VRG from \(G_{t}\) and then updating it with each of the \(M_{t+1,i}\); aggregate edits are then computed as in Eq. 1. Average divergences \(D_{\mathcal{M},t}\) are then found for the \((G_{t},M_{t+1,i})\). We define the _dyvergence_ of \(\mathcal{M}_{t}\) by
\[\text{dyvergence}(G_{t},\mathcal{M}_{t})=|\hat{D}_{t}-D_{\mathcal{M},t}| \tag{3}\]
Dyvergence for the ground truth is similarly defined by \(\text{dyvergence}(G_{t},G_{t+1})=|\hat{D}_{t}-D_{t}|\). The lower this score is, the higher our confidence would be that the scored graph comes from the same generating distribution as the data.
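A minimal sketch of this scoring logic is given below; the mean deltas `D` are assumed to have already been computed via Eq. (1), and the numeric values are purely illustrative.

```python
def estimate_next(D):
    """Eq. (2): predict the next ground-truth deviation from the past means D_0, ..., D_{t-1}."""
    A = [sum(D[:i + 1]) / (i + 1) for i in range(len(D))]   # running averages A_0, ..., A_{t-1}
    return A[-1] + (D[-1] - A[-2])                          # A_{t-1} + (D_{t-1} - A_{t-2})

def dyvergence(D_hat, D_observed):
    """Eq. (3): distance between predicted and observed deviation; lower is better."""
    return abs(D_hat - D_observed)

# Illustrative usage with made-up deviation values.
D = [0.8, 1.1, 0.9, 1.0]               # mean deltas for timesteps 0..3
D_hat = estimate_next(D)               # estimate for the next timestep
score_truth = dyvergence(D_hat, 0.95)  # scored against the real next graph
score_model = dyvergence(D_hat, 1.60)  # scored against a model-generated impostor
```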
We illustrate our results in Fig. 6. Here, success means that the ground truth receives a lower dyvergence score than the impostor graphs. We outperform the competing baselines on the EU Emails and Facebook datasets.
Our model is also largely successful on the DNC email graph, ranking the ground truth as least-dyvergent the majority of the time, shown clearly by the ranking subfigures in Fig. 6. The model performs poorly only on the DBLP graph. We conjecture that the amount of dyvergence in DBLP from one time step to another fluctuates more drastically due to the longer timescale for data aggregation in this dataset; whereas the other three datasets were grouped into monthly snapshots, DBLP snapshots are taken annually. This might lead to inaccuracies in \(\hat{D}_{t}\), negatively impacting the dyvergence scores for the real graph while boosting performance on imposters that are not as temporally turbulent.
### Generation
A natural way to interrogate a generative graph model--like a graph grammar--is to generate graphs with it. Generative graph models are widely used in modern AI systems for contrastive and adversarial learning. Here, we use these models in the more traditional way they might be used for a task like hypothesis-testing; we fit the models, generate a graph at a particular time, and then compare the generated graph with the ground truth. For each baseline model, we train on the ground truth at time \(t\) and then generate at this same time. If the two graphs are similar according to some empirical measure of graph similarity, then we would say that the model performed well. For DyVeRG, we train on time \(t-1\), update with time \(t\), and then generate at time \(t\).
Comparing two (or more) graphs is a nontrivial task since the distributions from which graphs can be sampled can behave erratically and are often very high-dimensional.
Figure 6: Dyvergence scores and model rankings. The top subplots show, for each model, the deviations over time from the mean dyvergence score. The relative rankings of the models are then shown in the corresponding bottom subplots. Note that lower is better, so the best-performing model is listed at the bottom.
Figure 7: Portrait Divergence comparing a generated graph from each model and timestep against a corresponding ground truth graph. Lower is better.
The most natural way to determine similarity between two graphs is by an isomorphism test; however, in addition to being computationally intractable, this provides a far-too-narrow view of graph similarity. We instead take two alternative views to graph similarity. Graph portrait divergence [3] provides a holistic view of a graph based on a matrix of random-walk counts sorted by length; these results will be averaged across 10 independent trials. Maximum mean discrepancy (MMD) [14] is a kernel-based sampling test--which will thus not require any averaging--with desirable stability and computational efficiency characteristics. For both of these, lower is better.
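As a rough sketch of how the spectral comparison might be carried out (the Gaussian kernel, its bandwidth, and the helper names below are our own choices, not necessarily those used in the evaluation):

```python
import numpy as np
import networkx as nx

def laplacian_spectrum(G):
    """Sorted eigenvalues of the graph Laplacian."""
    return np.sort(nx.laplacian_spectrum(G))

def mmd_squared(x, y, sigma=1.0):
    """Biased MMD^2 estimate between two 1-D samples under a Gaussian kernel."""
    x, y = np.asarray(x)[:, None], np.asarray(y)[:, None]
    k = lambda a, b: np.exp(-((a - b.T) ** 2) / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# Compare the ground truth's spectrum against a generated graph's spectrum (toy graphs here).
G_true, G_gen = nx.gnp_random_graph(50, 0.1, seed=0), nx.gnp_random_graph(50, 0.1, seed=1)
print(mmd_squared(laplacian_spectrum(G_true), laplacian_spectrum(G_gen)))
```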
We begin with the Portrait Divergence results, shown in Fig. 7. In general, we can see that the DyVeRG-generated graphs tend to have lower portrait divergence compared to the other models, thus outperforming them.
Next, we analyse the MMD of the eigenvalue spectra of the graphs' Laplacian matrices. MMD values are bounded between 0 and 2, with a value of 0 indicating belief that the spectrum of the ground truth and the sample spectra of the generated graphs were certainly sampled from the same underlying distribution. These results are shown in Fig. 8. Here, we find that DyVeRG performs no worse than VeRG, its static counterpart, on three of the datasets. However, on the DBLP dataset, DyVeRG performs worse than almost all of the other models, despite its static analogue VeRG outperforming every model.
There are, of course, many additional metrics by which to compare these models, but the main power of DyVeRG comes from its ability to express graph dynamics in a human-interpretable way.
### Interpretability
To illustrate how the DyVeRG model can help a practitioner understand a complex temporal dataset, we illustrate some specific examples of frequent _rule transitions_--analogous to subgraph-to-subgraph transitions--learned by the model.
We focus our analysis here on the first \(10\) timesteps of the EU Emails dataset. For each \(t\in\{0,\dots 9\}\), we extract a grammar on \(G_{t}\) and then update it according to the procedure described in Section 3, giving us a list of DyVeRG grammars \(\langle\mathcal{G}_{t}\rangle_{t=1}^{10}\). Then, given two rules \(\dot{P}\) and \(\ddot{P}\), each of which could be a rule from any of the grammars \(\mathcal{G}_{t}\) for \(t\in\{1,\dots 10\}\), we say that a transition of _type_ \(\dot{P}\Rightarrow\ddot{P}\) has occurred _iff_ there is a grammar \(\mathcal{G}_{i}\) such that, during the temporal updating procedure, a rule isomorphic to \(\dot{P}\) was modified into a rule isomorphic to \(\ddot{P}\). We then go through our list of grammars and tally up the frequency with which every possible rule transition occurs, with the idea that the most frequent rule transitions might provide some salient insight into the dynamics of the dataset. In Fig. 9, we have a sample of four of the most frequent rule transitions learned from EU Emails, which we will refer to as \(\dot{P}_{1}\Rightarrow\ddot{P}_{1},\dots,\dot{P}_{4}\Rightarrow\ddot{P}_{4}\) respectively.
Both \(\dot{P}_{1}\Rightarrow\ddot{P}_{1}\) and \(\dot{P}_{2}\Rightarrow\ddot{P}_{2}\) illustrate that a new structure emerged at time \(t+1\) among nodes that did not already exist in \(G_{t}\). In the first case, we have the introduction of a new user participating in an email exchange with two other people, and this occurs \(61\) times throughout the whole dataset. In the second, we can see a pair of new users emailing each other, one of whom has sent two emails elsewhere in the network and the other of whom has sent out one additional email, a structure that occurs \(5\) times in the data. In either case, because there was no rule at time \(t\) that \(\ddot{P}_{1}\) and \(\ddot{P}_{2}\) are updated versions of, we can be certain they were additions that participated in a larger connected component that was introduced wholly at time \(t+1\). This
Figure 8: The MMD of the eigenvalues (Spectrum) of a generated graph from each model and timestep compared to a corresponding ground truth graph. Lower is better.
Figure 9: A sample of the top rule transitions from the EU Emails dataset. \(\times 61\) denotes that the first rule transition was repeated 61 times. These rule transitions describe various changes in graph structure over time.
reveals a temporal property of the EU Email network: it is much more frequent for users to send out emails following periods of inactivity when other previously-inactive users are also sending out emails to new people, than it would be for them to suddenly begin sending emails to active users.
The next transition, \(\dot{P}_{3}\Rightarrow\ddot{P}_{3}\), shows us that three times throughout the data, a heterophilous dyad--a communicating pair of users where one is involved in many emails and the other is not--will see a reduction in the number of emails in which the less popular user participates. By contrast, the final transition \(\dot{P}_{4}\Rightarrow\ddot{P}_{4}\) exemplifies a more extreme version of the _opposite_ phenomenon: twice in the data, when a heterophilous wedge consisting of two unpopular users is bridged by a high-volume email-sender, the bridging user will experience a reduction in email output while the unpopular users become more popular.
The insights obtained by this analysis are over-specific due largely to the precise nature of rule isomorphism. However, if a more relaxed view of rule isomorphism is adopted, and the definition of rule transition is broadened, then our model could describe even more general temporal trends. Even so, our model has shown its ability to provide significant insight into network dynamics.
## 5 Conclusion
We introduced the **D**ynamic **V**ertex **R**eplacement **G**rammar (DyVeRG) formalism, which is a graph grammar model that learns rule transitions from a dynamic graph. Unlike typical graph grammars, these rule transitions encode the dynamics of a graph's evolution over time. Further, unlike subgraph-to-subgraph transition models, which learn transitions between small configurations of nodes, DyVeRG encodes rule transitions across multiple levels of granularity.
We show through our quantitative analysis across two tasks and three metrics that the fidelity of the DyVeRG model is comparable or better than many existing graph models, even a highly-parameterized, uninterpretable graph neural network. Finally, we presented a short case study demonstrating how the induced rule transitions can provide insight into a temporal dataset.
|
2304.04847 | Existence of Traveling Waves in a Nicholson Blowflies Model with Delayed
Diffusion Term | In this paper we consider traveling waves for a diffusive Nicholson Blowflies
Equation with different discrete time delays in the diffusion term and birth
function. We construct quasi upper and lower solutions via the monotone
iteration method. This also allows for the construction of C2 upper and lower
solutions, and then traveling wave solutions. We then provide numerical results
for the kernel for the iteration. | William Barker | 2023-04-10T20:08:44Z | http://arxiv.org/abs/2304.04847v1 | # Existence of traveling waves in a Nicholson Blowflies model with delayed diffusion term
###### Abstract.
In this paper we consider traveling waves for a diffusive Nicholson Blowflies Equation with different discrete time delays in the diffusion term and birth function. We construct quasi upper and lower solutions via the monotone iteration method. This also allows for the construction of \(C^{2}\) upper and lower solutions, and then traveling wave solutions. We then provide numerical results for the kernel for the iteration.
Key words and phrases: Traveling waves; Reaction-diffusion equations; Delay; Nicholson Blowflies Equation. 2000 Mathematics Subject Classification: Primary 35C07; Secondary 35K57
## 1. Introduction
The diffusive Nicholson Blowflies Model with a single delay in the birth term is of the form
\[\frac{\partial u(t,x)}{\partial t}=\frac{\partial^{2}u(t,x)}{\partial x^{2}}- \delta u(t,x)+pu(t-\tau_{1},x)e^{-au(t-\tau_{1},x)} \tag{1.1}\]
for \(x\in\mathbb{R},\ t\geq 0\). Here \(u(t,x)\) is the blowfly population at position \(x\) and time \(t\). The death rate is \(\delta>0\), the impact rate of births on the immature population is \(p>0\), and \(\tau_{1}>0\) is the maturation delay. For more information about the model, see Nicholson's groundbreaking work [13, 14] and the adaptation to include spatial diffusion [17, 18, 23]. The above model has been studied in several celebrated papers.
In 2001, So and Zou, [19] provided an elegant result providing a construction of upper and lower solutions when there is delay in the birth function. This extended the seminal results on the existence of traveling wave solutions via the so called monotone iteration method put forth by Wu and Zou, [22]. In the standard monotone or quasi monotone iteration methods, it is often important to construct upper and lower wave front solutions. This idea was extended by Ma, [10] who developed super and sub solutions, which relaxed the requirements in Wu and Zou.
The dynamics of traveling waves for the Nicholson Blowflies model with distributed delay was discussed in 2000 by Gourley and Ruan, [5] by using energy methods and by a comparison principle for functional differential equations. In a recent paper, the global attractivity of the positive steady state was studied for a non-monotone model with distributed delay by Deng and Wu, [4].
In 2004, Mei _et al._, [12] discussed the nonlinear stability of traveling wavefronts of a time-delayed diffusive Nicholson blowflies equation under a weighted \(L^{2}\) norm. This result is very interesting and was extended by Lin and Mei, [9] who also provided numerical results via a finite difference method. Furthermore, recent results by Huang and Liu, [7] and Huang and Xu [8] studied the existence of traveling waves with a birth function given two different delays.
Boumenir and Nguyen [3] developed the idea of quasi upper and lower solutions via a modified Perron Theorem; see Theorem (4.1) in Mallet-Paret [11]. More information can also be found in Pruss [15]. This leads us to the motivation of this paper. The idea of placing a delay in the diffusion term is relatively new. In fact, it was shown in Barker and Nguyen [2] that traveling waves exist for reaction-diffusion equations with a discrete delay in both the diffusion and reaction terms of the form
\[\frac{\partial u(x,t)}{\partial t}=D\frac{\partial^{2}u(x,t-\tau_{1})}{ \partial x^{2}}+f(u_{t}), \tag{1.2}\]
where \(t\in\mathbb{R},\tau_{1}>0,x,u(x,t)\in\mathbb{R},\ D>0,\ f:C\left([-\tau_{2},0], \mathbb{R}\right)\rightarrow\mathbb{R}\) is continuous and \(u_{t}(x)\in C\left([-\tau_{2},0],\mathbb{R}\right),\) defined as
\[u_{t}(x)=u(x,t+\theta),\ \theta\in[-\tau_{2},0],\ t\geq 0,\ x\in\mathbb{R}.\]
where \(f\) is Lipschitz continuous and satisfies
\[f(0)=f(K)=0,\ \text{and}\ f(u)\neq 0,\ 0<u<K.\]
and the existence holds under certain monotone conditions.
This paper will be organized as follows: Section 2 will lay out some preliminary notation that may be used in the later sections. Our main results will be stated in Section 3. The existence of traveling waves will be discussed as well as solutions to exponential type polynomials of second order. The last part of Section 3 will be the explicit construction of quasi upper and lower solutions for the Nicholson Blowflies Equation with diffusive delay. Section 4 will cover some numerical results for the convolution kernel and the resulting upper and lower solutions.
## 2. Preliminaries
In this paper we will use some standard notations such as \(\mathbb{R},\mathbb{C}\) standing for the fields of reals and complex numbers. \(\Re z\) and \(\Im z\) denote the real part and imaginary part of a complex number \(z\). The space of all bounded and continuous functions from \(\mathbb{R}\to\mathbb{R}^{n}\) is denoted by \(BC(\mathbb{R},\mathbb{R}^{n})\) which is equipped with the sup-norm \(\|f\|:=\sup_{t\in\mathbb{R}}\|f(t)\|\). \(BC^{k}(\mathbb{R},\mathbb{R}^{n})\) stands for the space of all \(k\)-time continuously differentiable functions \(\mathbb{R}\to\mathbb{R}^{n}\) such that all derivatives up to order \(k\) are bounded.
If the boundedness is dropped from the above function spaces we will simply denote them by \(C(\mathbb{R},\mathbb{R}^{n})\) and \(C^{k}(\mathbb{R},\mathbb{R}^{n})\). We will use the natural order in \(BC(\mathbb{R},\mathbb{R}^{n})\) that is defined as follows: For \(f,g\in BC(\mathbb{R},\mathbb{R}^{n})\) we say that \(f\leq g\) if and only if \(f(t)\leq g(t)\) for all \(t\in\mathbb{R}\), and we will say that \(f<g\) if \(f(t)\leq g(t)\) for all \(t\in\mathbb{R}\), and \(f(t)\neq g(t)\) for all \(t\in\mathbb{R}\).
## 3. Main Results
We consider the following diffusive delayed Nicholson Blowflies model
\[\frac{\partial u(t,x)}{\partial t}=\frac{\partial^{2}u(t-\tau_{1},x)}{\partial x ^{2}}-\delta u(t,x)+pu(t-\tau_{2},x)e^{-au(t-\tau_{2},x)} \tag{3.1}\]
Moreover, we assume that \(1<p/\delta\leq e\), then we can find two equilibria
\[U_{0}=0,\ U_{e}=\frac{1}{a}(\ln{(p/\delta)}).\]
We are interested in the question: Is there a traveling wave front connecting the two equilibria of Eq. (3.1)?
To this end, we use the traveling wave ansatz \(\xi=x+ct\), where \(c\) is a positive wave speed. Applying the transformation \(\phi(\xi)=u(t,x)\) to Eq. (3.1) gives
\[\phi^{\prime\prime}(\xi-c\tau_{1})-c\phi^{\prime}(\xi)-\delta\phi(\xi)+p\phi( \xi-c\tau_{2})e^{-a\phi(\xi-c\tau_{2})}=0. \tag{3.2}\]
Moving the delay out of the diffusion term via the transformation \(\zeta=\xi-r_{1}\), where \(r_{1}=c\tau_{1},r_{2}=c\tau_{2}\) yields the equivalent model
\[\phi^{\prime\prime}(\zeta)-c\phi^{\prime}(\zeta+r_{1})-\delta\phi(\zeta+r_{1}) +p\phi(\zeta+(r_{1}-r_{2}))e^{-a\phi(\zeta+(r_{1}-r_{2}))}=0. \tag{3.3}\]
For simplicity let \(\zeta=t\) and the model becomes
\[\phi^{\prime\prime}(t)-c\phi^{\prime}(t+r_{1})-\delta\phi(t+r_{1})+p\phi(t+(r_{ 1}-r_{2}))e^{-a\phi(t+(r_{1}-r_{2}))}=0. \tag{3.4}\]
We invoke the following monotone conditions on \(f\).
* \(f(\hat{0})=f(\hat{U}_{e})=0\), where \(\hat{0}\) (\(\hat{U}_{e}\), respectively) is the constant function \(\phi(\theta)=0\) (\(\phi(\theta)=U_{e}\), respectively), for all \(\theta\in[-\tau_{2},0]\);
* There exists a positive constant \(\beta\) such that \[f(\varphi)-f(\psi)+\beta(\varphi(0)-\psi(0))\geq 0\] for all \(\varphi,\psi\in C([-\tau_{2},0],\mathbb{R})\) with \(0\leq\psi(s)\leq\varphi(s)\leq U_{e}\) for all \(s\in[-\tau_{2},0]\);
Using the monotone conditions (H1) and (H2) we can write Equation (3.4) as
\[\phi^{\prime\prime}(t)-c\phi^{\prime}(t+r_{1})-\delta\phi(t+r_{1})-\beta\phi( t+r_{1})+H(\phi(t))=0, \tag{3.5}\]
where \(H(\phi(t))=p\phi(t+(r_{1}-r_{2}))e^{-a\phi(t+(r_{1}-r_{2}))}+\beta\phi(t+r_{1}).\) The asymptotic behavior is
\[\lim_{t\to-\infty}\phi(t)=U_{0},\ \lim_{t\to\infty}\phi(t)=U_{e}.\]
**Definition 3.1**.: A function \(\varphi\in BC^{2}(\mathbb{R},\mathbb{R})\) is called an upper solution (lower solution, respectively) for the wave equation (3.5) if it satisfies the following
\[D\varphi^{\prime\prime}(t)-c\varphi^{\prime}(t+r_{1})+f^{c}( \varphi_{t+r_{1}})\leq 0,\] \[(D\varphi^{\prime\prime}(t)-c\varphi^{\prime}(t+r_{1})+f^{c}( \varphi_{t+r_{1}})\geq 0,\ \text{respectively})\]
for all \(t\in\mathbb{R}\).
Here, \(f_{c}:\mathbb{X}_{c}\to\mathbb{R}\), with \(\mathbb{X}_{c}:=C([-c\tau_{2},0],\mathbb{R}^{n})\), is defined as
\[f_{c}(\psi)=f(\psi^{c}),\quad\psi^{c}(\theta):=\psi(c\theta),\quad\theta\in[ -\tau_{2},0].\]
**Definition 3.2**.: A function \(\varphi\in C^{1}(\mathbb{R},\mathbb{R})\), where \(\varphi,\varphi^{\prime}\) are bounded on \(\mathbb{R}\), \(\varphi^{\prime\prime}\) is locally integrable and essentially bounded on \(\mathbb{R}\) (that is, \(\varphi^{\prime\prime}\in L^{\infty}\)), is called a quasi- upper solution (quasi-lower solution, respectively) for the wave equation (3.5) if it satisfies the following for almost every \(t\in\mathbb{R}\)
\[D\varphi^{\prime\prime}(t)-c\varphi^{\prime}(t+r_{1})+f^{c}( \varphi_{t+r_{1}})\leq 0,\] \[(D\varphi^{\prime\prime}(t)-c\varphi^{\prime}(t+r_{1})+f^{c}( \varphi_{t+r_{1}})\geq 0,\ \text{respectively}).\]
Consider the quadratic equation \(\lambda^{2}-c\lambda-\delta=0\), which has two distinct real roots
\[\lambda_{1}=\frac{c-\sqrt{c^{2}+4\delta}}{2}<0,\ \lambda_{2}=\frac{c+\sqrt{c^{2}+4 \delta}}{2}>0,\]
whenever \(c>2\sqrt{\delta}.\) This allows for the following lemma.
**Lemma 3.3**.: _Let \(c>2\sqrt{\delta}\) and consider the characteristic equation for Eq. (3.5)_
\[\lambda^{2}-c\lambda e^{r_{1}\lambda}-\delta e^{r_{1}\lambda}=0. \tag{3.6}\]
_Define an open strip \(U\subset\mathbb{C}\) by \(\{z\in\mathbb{C}:\lambda_{1}-\varepsilon\leq\Re z\leq 0\}.\) Then, for sufficiently small \(r_{1},\varepsilon>0\):_
1. _the characteristic equation has no roots on the imaginary axis,_
2. _the characteristic equation has a single root continuously dependent on_ \(r_{1}\) _in_ \(U,\) _denoted as_ \(\eta_{1}(r_{1}).\)__
This is a special case of Proposition (3.1) from [2]. This allows us to see that we have a unique bounded solution from the modified Perron Theorem, see [11]. Thus, we have the following results from [2]. The solution for Eq. 3.5 is given by
\[F\left(\phi\right)=\int_{-\infty}^{\infty}G(t-s,r)H(\phi(s))ds, \tag{3.7}\]
where there exist positive constants \(M_{1},\delta_{1}\) such that \(|G(t,r)|\leq M_{1}e^{-\delta_{1}|t|}\) for all \(t\in\mathbb{R}\).
**Lemma 3.4**.: _Let \(\phi\) be a quasi-upper solution (quasi-lower solution, respectively) of Eq. (3.5). Then, \(F(\phi)\) is an upper solution (lower solution, respectively) of Eq. (3.5)._
**Theorem 3.5**.: _Assume \((H1)\) and \((H2)\) hold. If there are an upper solution \(\overline{\phi}\) and a lower solution \(\underline{\phi}\) for Eq. (3.5) in \(\Gamma\) such that for all \(t\in\mathbb{R}\)_
\[0\leq\underline{\phi}(t)\leq\overline{\phi}(t).\]
_Then, there exists a monotone traveling wave solution to the system (3.5)._
### Quasi Upper Solutions
We will now explicitly construct a quasi-upper solution. Consider the quadratic equation \(\mu^{2}-c\mu+p=0\), which has two distinct real roots \(0<\mu_{1}<\mu_{2}\) whenever \(c>2\sqrt{p}\). In fact, we need the following lemma.
**Lemma 3.6**.: _Let \(c>2\sqrt{p}\) and consider the equation_
\[\mu^{2}-c\mu e^{r_{1}\mu}+pe^{r_{1}\mu}=0, \tag{3.8}\]
_and define an open strip \(V\subset\mathbb{C}\) by \(\{z\in\mathbb{C}:0<\Re z<\mu_{2}+\varepsilon\}\). Then, for sufficiently small \(r_{1},\varepsilon>0\) such that \(V\cap\{\mu_{2}\}=\emptyset\), Eq. (3.8) has a single root in \(V\) depending continuously on \(r_{1}\), denoted \(\eta_{2}(r_{1})\). Moreover, \(\eta_{2}(r_{1})\) is real and_
\[\lim_{r_{1}\to 0}\eta_{2}(r_{1})=\mu_{2}. \tag{3.9}\]
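The root \(\eta_{2}(r_{1})\) can also be located numerically. The Python sketch below uses parameter values \(p=2\), \(c=3\) chosen by us purely for illustration (they are not taken from the paper) and shows the root approaching \(\mu_{2}\) as \(r_{1}\to 0\).

```python
import numpy as np
from scipy.optimize import brentq

p, c = 2.0, 3.0                               # illustrative values with c > 2*sqrt(p) ~ 2.83
mu2 = (c + np.sqrt(c**2 - 4.0 * p)) / 2.0     # larger root of mu^2 - c*mu + p = 0

def g(mu, r1):
    """Left-hand side of Eq. (3.8)."""
    return mu**2 - c * mu * np.exp(r1 * mu) + p * np.exp(r1 * mu)

for r1 in (0.05, 0.02, 0.01, 0.005):
    # g(mu2) = (c*mu2 - p)(1 - exp(r1*mu2)) < 0 and g(mu2 + 1) > 0 for these r1,
    # so the bracket [mu2, mu2 + 1] contains the root eta2(r1).
    eta2 = brentq(g, mu2, mu2 + 1.0, args=(r1,))
    print(f"r1 = {r1:6.3f}   eta2(r1) = {eta2:.5f}   (mu2 = {mu2:.5f})")
```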
**Claim 3.7**.: For sufficiently small \(r_{1}\), \(c>2\sqrt{p}\), and \(\eta_{2}\) the root of Eq. (3.8) in \(V\), the function
\[\varphi_{1}(t)=\left\{\begin{array}{ll}\frac{U_{e}}{2}e^{\eta_{2}t},&t\leq 0,\\ U_{e}(1-\frac{1}{2}e^{-\eta_{2}t}),&t>0\end{array}\right.\]
is a quasi-upper solution of (3.5).
Proof.: From elementary calculus it is easy to see
\[\varphi_{1}^{\prime}(t)=\left\{\begin{array}{ll}\frac{U_{e}\eta_{2}}{2}e^{\eta_{2}t},&t\leq 0,\\ \frac{U_{e}\eta_{2}}{2}e^{-\eta_{2}t},&t>0\end{array}\right.,\quad\varphi_{1}^{\prime\prime}(t)=\left\{\begin{array}{ll}\frac{U_{e}\eta_{2}^{2}}{2}e^{\eta_{2}t},&t\leq 0,\\ \frac{-U_{e}\eta_{2}^{2}}{2}e^{-\eta_{2}t},&t>0\end{array}\right.\]
Note that \(\varphi_{1}^{\prime}\) is continuous and bounded on \(\mathbb{R}\), and \(\varphi_{1}^{\prime\prime}\) exists, is continuous, and is bounded everywhere except at \(t=0\). For brevity we will take \(U_{e}=1\), since it appears in every term. The proof can now be completed in three cases.
**Case 1**\(t<-r_{1}\): We have
\[\phi^{\prime\prime}(t)-c\phi^{\prime}(t+r_{1})-\delta\phi(t+r_{1 })+p\phi(t+(r_{1}-r_{2}))e^{-a\phi(t+(r_{1}-r_{2}))}\] \[=\frac{\eta_{2}^{2}}{2}e^{\eta_{2}t}-\frac{c\eta_{2}}{2}e^{\eta_{ 2}(t+r_{1})}-\frac{\delta}{2}e^{\eta_{2}(t+r_{1})}+\frac{p}{2}e^{\eta_{2}(t+r _{1})}e^{-a\phi(t+(r_{1}-r_{2}))}\] \[\leq\left(\frac{\eta_{2}^{2}}{2}-\frac{c\eta_{2}}{2}e^{\eta_{2}r _{1}}+\frac{p}{2}e^{\eta_{2}r_{1}}\right)e^{\eta_{2}t}-\frac{\delta}{2}e^{\eta _{2}(t+r_{1})}\] \[=-\frac{\delta}{2}e^{\eta_{2}(t+r_{1})}\leq 0.\]
**Case 2**\(-r_{1}\leq t\leq 0\): We have
\[\phi^{\prime\prime}(t)-c\phi^{\prime}(t+r_{1})-\delta\phi(t+r_{1 })+p\phi(t+(r_{1}-r_{2}))e^{-a\phi(t+(r_{1}-r_{2}))}\] \[=\frac{\eta_{2}^{2}}{2}e^{\eta_{2}t}-\frac{c\eta_{2}}{2}e^{-\eta_ {2}(t+r_{1})}-\delta\phi(t+r_{1})+\frac{p}{2}e^{\eta_{2}(t+r_{1}-r_{2})}e^{-a \phi(t+(r_{1}-r_{2}))}\] \[\leq\frac{\eta_{2}^{2}}{2}e^{\eta_{2}t}-\frac{c\eta_{2}}{2}e^{- \eta_{2}(t+r_{1})}-\delta\phi(t+r_{1})+\frac{p}{2}e^{\eta_{2}(t+r_{1})}\] \[=\frac{\eta_{2}^{2}}{2}e^{\eta_{2}t}-\frac{c\eta_{2}}{2}e^{\eta_{ 2}(t+r_{1})}+\frac{p}{2}e^{\eta_{2}(t+r_{1})}+\frac{c\eta_{2}}{2}e^{\eta_{2}( t+r_{1})}-\frac{c\eta_{2}}{2}e^{-\eta_{2}(t+r_{1})}-\delta\phi(t+r_{1})\] \[\left(\frac{\eta_{2}^{2}}{2}-\frac{c\eta_{2}}{2}e^{\eta_{2}r_{1}} +\frac{p}{2}e^{\eta_{2}r_{1}}\right)e^{\eta_{2}t}+c\eta_{2}\sinh(r_{1}\eta_{ 2})-\delta\phi(t+r_{1})\] \[=c\eta_{2}\sinh(r_{1}\eta_{2})-\delta\phi(t+r_{1}),\]
because \(\eta_{2}\) is a root of Eq. (3.8). When \(r_{1}\to 0,\ c\eta_{2}\sinh(r_{1}\eta_{2})\to 0.\) Thus, there is a small enough \(r_{1}\) such that
\[c\eta_{2}\sinh(r_{1}\eta_{2})-\delta\phi(t+r_{1})\leq 0.\]
**Case 3**: \(0\leq t\): We have
\[\phi^{\prime\prime}(t)-c\phi^{\prime}(t+r_{1})-\delta\phi(t+r_{1})+ p\phi(t+(r_{1}-r_{2}))e^{-a\phi(t+(r_{1}-r_{2}))}\] \[=\frac{-\eta_{2}^{2}}{2}e^{-\eta_{2}t}-\frac{c\eta_{2}}{2}e^{- \eta_{2}(t+r_{1})}+p\left(1-\frac{1}{2}e^{-\eta_{2}(t+(r_{1}-r_{2}))}\right)e^ {-a\phi(t+(r_{1}-r_{2}))}-\delta\phi(t+r_{1})\] \[\leq\frac{-\eta_{2}^{2}}{2}e^{-\eta_{2}t}-\frac{c\eta_{2}}{2}e^{- \eta_{2}(t+r_{1})}+p\left(1-\frac{1}{2}e^{-\eta_{2}(t+(r_{1}-r_{2}))}\right)- \delta\phi(t+r_{1})\] \[=\frac{-\eta_{2}^{2}}{2}e^{-\eta_{2}t}+\frac{c\eta_{2}}{2}e^{- \eta_{2}t+\eta_{2}r_{1}}-\frac{p}{2}e^{-\eta_{2}t+\eta_{2}r_{1}}+\frac{p}{2}e^ {-\eta_{2}t+\eta_{2}r_{1}}-\frac{c\eta_{2}}{2}e^{-\eta_{2}t+\eta_{2}r_{1}}- \frac{c\eta_{2}}{2}e^{-\eta_{2}(t+r_{1})}\] \[+p\left(1-\frac{1}{2}e^{-\eta_{2}(t+(r_{1}-r_{2}))}\right)- \delta\phi(t+r_{1}).\]
Using the fact that \(\eta_{2}\) is a root of Eq. (3.8) we see the following simplification
\[=\frac{p}{2}e^{-\eta_{2}t+\eta_{2}r_{1}}-\frac{c\eta_{2}}{2}e^{- \eta_{2}t+\eta_{2}r_{1}}-\frac{c\eta_{2}}{2}e^{-\eta_{2}(t+r_{1})}+p\left(1- \frac{1}{2}e^{-\eta_{2}(t+(r_{1}-r_{2}))}\right)-\delta\phi(t+r_{1})\] \[=-c\eta_{2}e^{-\eta_{2}t}\cosh(\eta_{2}r_{1})+\frac{p}{2}e^{- \eta_{2}t}\left(e^{\eta_{2}r_{1}}-e^{\eta_{2}(r_{1}-r_{2})}\right)-\delta\phi (t+r_{1})+p.\]
Noticing that when \(r_{1}\to 0\), implies \(\cosh(\eta_{2}r_{1})\to 1\) and \(e^{\eta_{2}r_{1}}-e^{\eta_{2}(r_{1}-r_{2})}\to 1-e^{-\eta_{2}r_{2}}\) we can take \(r_{1},r_{2}\) small enough such that \(e^{\eta_{2}r_{1}}-e^{\eta_{2}(r_{1}-r_{2})}\approx o(r_{1}-r_{2})\). This allows one to see that for sufficiently small \(r_{1},r_{2}\)
\[=-c\eta_{2}e^{-\eta_{2}t}\cosh(\eta_{2}r_{1})+\frac{p}{2}e^{- \eta_{2}t}\left(e^{\eta_{2}r_{1}}-e^{\eta_{2}(r_{1}-r_{2})}\right)-\delta\phi (t+r_{1})+p\] \[\approx\left(-c\eta_{2}+\frac{p}{2}o(r_{1}-r_{2})\right)e^{- \eta_{2}t}-\delta\phi(t+r_{1})+p\] \[\leq-c\eta_{2}+\frac{p}{2}o(r_{1}-r_{2})-\delta\phi(t+r_{1})+p.\]
Moreover,
\[\lim_{c\to 2\sqrt{p}}-c\mu_{1}=-p,\]
and
\[\lim_{r_{1}\to 0}\eta_{2}(r_{1})=\mu_{1}\]
we have that
\[-c\eta_{2}+\frac{p}{2}o(r_{1}-r_{2})-\delta\phi(t+r_{1})+p\] \[\approx\frac{p}{2}o(r_{1}-r_{2})-\delta\phi(t+r_{1}).\]
Thus, we can take \(r_{1},r_{2}\) small enough such that
\[\frac{p}{2}o(r_{1}-r_{2})<\delta\phi(t+r_{1}).\]
This proves the result.
In order to construct quasi lower solutions we will look at a function, defined as
\[f(t)=a(t-T)^{3}+b(t-T)^{2}+\frac{1}{2}.\]
Furthermore, for some large \(T>0\) we have the properties:
* This bridges smoothly the function \(e^{\eta_{1}t}/4\) and the constant function \(1/2\)
* \(f(-T)=(1/4)e^{\eta_{1}T}\), \(f^{\prime}(-T)=(-\eta_{1}/4)e^{\eta_{1}T}\), \(f^{\prime}(T)=0\), \(f(T)=1/2\).
It was shown in Barker and Nguyen, [2] that
\[a =\frac{\eta_{1}Te^{-\eta_{1}T}+e^{-\eta_{1}T}-2}{16T^{3}}\] \[b =\frac{-\eta_{1}Te^{-\eta_{1}T}+6\left(\frac{e^{-\eta_{1}T}}{4}- \frac{1}{2}\right)}{8T^{2}},\]
as well as the following claim.
**Claim 3.8**.: Define \(f(t)\) to be the bridge function from above, then
\[\lim_{T\to\infty}\sup_{-T\leq t\leq T}\max\{|f^{\prime}(t)|,|f^{\prime\prime}( t)|\}=0. \tag{3.10}\]
**Claim 3.9**.: For sufficiently small \(r_{1}\), \(c>2\sqrt{\delta}\), and \(\eta_{1}\) the positive root of Eq. (3.6), the function
\[\underline{\varphi}(t)=\begin{cases}\frac{e^{\eta_{1}t}}{4},\ t<-T,\\ f(t),\ -T\leq t\leq T\\ \frac{1}{2},\ t>T,\end{cases}\]
is a quasi-lower solution of (3.5).
Proof.: We will do this in cases just as before.
**Case 1:**\(t\leq-T-r_{1}\)
\[\phi^{\prime\prime}(t)-c\phi^{\prime}(t+r_{1})-\delta\phi(t+r_{1}) +p\phi(t+(r_{1}-r_{2}))e^{-a\phi(t+(r_{1}-r_{2}))}\] \[=\frac{\eta_{1}^{2}}{2}e^{\eta_{1}t}-\frac{c\eta_{1}}{2}e^{\eta_{ 1}(t+r_{1})}-\frac{\delta}{2}e^{\eta_{1}(t+r_{1})}+\frac{p}{2}e^{\eta_{1}(t+r_ {1})}e^{-a\phi(t+(r_{1}-r_{2}))}\] \[=\left(\frac{\eta_{1}^{2}}{2}-\frac{c\eta_{1}}{2}e^{\eta_{1}r_{1 }}-\frac{\delta}{2}e^{\eta_{1}r_{1}}\right)e^{\eta_{1}t}+\frac{p}{2}e^{\eta_{ 1}(t+r_{1})}e^{-a\phi(t+(r_{1}-r_{2}))}\] \[=\frac{p}{2}e^{\eta_{1}(t+r_{1})}e^{-a\phi(t+(r_{1}-r_{2}))}\geq 0,\]
because \(\eta_{1}\) is a root for Eq. (3.6).
**Case 2:**\(-T-r_{1}\leq t\leq T\) This case follows from the fact that on this interval
\[\sup_{-T-r_{1}\leq t\leq T}|\underline{\varphi}_{1}^{\prime\prime}(t)|\pm c| \underline{\varphi}_{1}^{\prime}(t+r_{1})|=\sup_{-T\leq t\leq T}|f^{\prime}(t )|\pm c|f^{\prime\prime}(t)|\]
that could be made as small as we like by taking \(T\) sufficiently large, and \(p>\delta.\) Thus, we have for some large \(T\)
\[\phi^{\prime\prime}(t)-c\phi^{\prime}(t+r_{1})-\delta\phi(t+r_{1 })+p\phi(t+(r_{1}-r_{2}))e^{-a\phi(t+(r_{1}-r_{2}))}\] \[=-\delta\phi(t+r_{1})+p\phi(t+(r_{1}-r_{2}))e^{-a\phi(t+(r_{1}-r_ {2}))}+o(T)\geq 0.\]
This case has been proven.
**Case 3:**\(T\leq t\). This case is trivial, since the function is constant on this interval.
**Corollary 3.10**.: Assume that \(c>2\sqrt{p}\) is given. Then, Eq. (3.5) has a traveling wave solution \(u(x,t)=\phi(x+ct)\) for sufficiently small delays \(\tau_{1},\tau_{2}\).
## 4. Numerical Simulations
In this section we will construct, via a specific example, numerical upper and lower solutions for Eq. (3.5). Using the formula found in Theorem (4.1) from Mallet-Paret [11], we have
\[G(t,r)=-\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{i\xi t}}{-\xi^{2}-ci\xi e ^{i\xi r_{1}}-\delta e^{i\xi r_{1}}}d\xi \tag{4.1}\]
A relatively simple numerical scheme should be appropriate due to the nature of the quasi-upper and lower solutions and the smoother upper and lower solutions. To this end, we have the following lemma.
**Lemma 4.1**.: _Let \(G(t,r)\) be the Green's function found in from Eq. (4.1), then_
1. _for all_ \(N\in\mathbb{N},\ t,\xi\in\mathbb{R}\) _there is some positive constant,_ \(K\) _independent of_ \(N\) _such that_ \[|G(-N,r)+G(N,r)|<K.\]
2. _For any small positive number_ \(\varepsilon\) _there is some sufficiently large_ \(N\in\mathbb{N}\) _dependent upon_ \(\varepsilon\) _such that for all_ \(t,\xi\in\mathbb{R}\)__ \[|G(-N,r)+G(N,r)|<\varepsilon.\]
Proof.: The proof of both parts is straightforward, due to the fact that there are positive constants \(K_{1},\delta_{1}\) such that \(\|G(t,r)\|\leq K_{1}e^{-\delta_{1}|t|}\) for all \(t\in\mathbb{R}\). In fact, part \(i.)\) can be shown via induction on \(N\). Take \(N=1\); then
\[|G(-N,r)+G(N,r)|\leq 2K_{1}e^{-\delta_{1}}<2K_{1}.\]
Take \(K=2K_{1}\) then the result follows. In order to show part \(ii.)\) fix \(\varepsilon>0\) then for some \(N\in\mathbb{N}\) such that
\[N\geq\ln\left(\left(\frac{\varepsilon}{2K_{1}}\right)^{-1/\delta_{1}}\right)\]
we have the following estimate:
\[|G(-N,r)+G(N,r)|\leq 2K_{1}e^{-\delta_{1}N}<\varepsilon.\]
This allows us to disregard the tails of the Green's function and focus on some finite interval in order to numerically approximate the integral for any fixed \(t\in\mathbb{R}.\) In fact, the interval can be chosen to be relatively small. We know that the following iteration is convergent
\[\phi_{n}=\int_{-\infty}^{\infty}G(t-s,r)H(\phi_{n-1}(s))ds,\]
where \(H\) is as defined following Eq. (3.5). We will take \(r_{1}=1\), \(r_{2}=1/4\), \(p=2\), \(\delta=1\), \(c=2\sqrt{2}\), \(N=50\), \(T=1\); then we will approximate
\[G(t,r)\approx-\frac{1}{2\pi}\int_{-50}^{50}\frac{e^{i\xi t}}{-\xi^{2}-ci\xi e^{i\xi}-\delta e^{i\xi}}d\xi.\]
Furthermore, since \(\phi_{n}\geq 0\), we have the approximation
\[\phi_{n}=\left|\int_{-\infty}^{\infty}G(t-s,r)H(\phi_{n-1}(s))ds\right|\leq \int_{-\infty}^{\infty}|G(t-s,r)H(\phi_{n-1}(s))|\,ds.\]
With this in mind, we know that \(|G(t-s,r)|\leq K_{1}e^{-\delta_{1}|t|}.\) Shifting the line of integration to the parallel line \(z=\xi+i|\lambda_{1}|,\) where \(\lambda_{1}\) is the negative root of Eq. (3.6) gives the following
\[|G(t-s,r)|\leq K_{1}e^{\lambda_{1}|t|}.\]
A composite Simpson's rule will be used to approximate
\[K_{1}\approx\frac{1}{2\pi}\int_{-50}^{50}\frac{1}{\left(\xi+i|\lambda_{1}| \right)^{2}+\left[2i\left(\xi+i|\lambda_{1}|\right)+1\right]e^{\left(\xi+i| \lambda_{1}|\right)}}d\xi.\]
For brevity, we denote
\[f(\xi)=\frac{1}{2\pi}\int_{-50}^{50}\frac{1}{\left(\xi+i|\lambda_{1}|\right)^{ 2}+\left[2i\left(\xi+i|\lambda_{1}|\right)+1\right]e^{\left(\xi+i|\lambda_{1} |\right)}},\]
then it is well know that the composite Simpson's rule can be written as
\[I_{n}=\int_{-50}^{50}f(\xi)\,d\xi\approx\frac{h}{3}\left[f(-50)+4\sum_{i=1}^{n/2}f(\xi_{2i-1})+2\sum_{i=1}^{n/2-1}f(\xi_{2i})+f(50)\right],\]
where \(h=100/n\) and \(n\) is the number of subintervals. The results for various step sizes (calculated in MATLAB) can be found in the table below, together with plots of the quasi-upper and lower solutions.
\begin{tabular}{||c c c||} \hline \(n\) & \(h\) & \(|I_{n}|\) \\ \hline \hline \(100\) & \(1\) &.2861 \\ \hline \(1000\) & \(.1\) &.3064 \\ \hline \(10000\) & \(.01\) &.3066 \\ \hline \(100000\) & \(.001\) &.3067 \\ \hline \(1000000\) & \(.0001\) &.3067 \\ \hline \end{tabular}
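The MATLAB computation above can also be mirrored in Python. The sketch below solves Eq. (3.6) numerically for \(\lambda_{1}\) (the bracket \([-1,0]\) is our own choice), uses the integrand \(f(\xi)\) exactly as written above, and applies the composite Simpson's rule; we have not re-verified that it reproduces the tabulated values.

```python
import numpy as np
from scipy.optimize import brentq

r1, delta, c = 1.0, 1.0, 2.0 * np.sqrt(2.0)

# lambda_1: the negative root of the characteristic equation (3.6).
char = lambda lam: lam**2 - c * lam * np.exp(r1 * lam) - delta * np.exp(r1 * lam)
lam1 = brentq(char, -1.0, 0.0)          # char(-1) > 0 > char(0) for these parameters

def f(xi):
    """Integrand on the shifted line z = xi + i*|lambda_1|, as written above."""
    z = xi + 1j * abs(lam1)
    return 1.0 / (2.0 * np.pi) / (z**2 + (2j * z + 1.0) * np.exp(z))

def composite_simpson(func, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    x = np.linspace(a, b, n + 1)
    y = func(x)
    h = (b - a) / n
    return h / 3.0 * (y[0] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-2:2].sum() + y[-1])

for n in (100, 1000, 10000, 100000):
    print(f"n = {n:7d}   |I_n| = {abs(composite_simpson(f, -50.0, 50.0, n)):.4f}")
```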
|
2301.10034 | Open-World Multi-Task Control Through Goal-Aware Representation Learning
and Adaptive Horizon Prediction | We study the problem of learning goal-conditioned policies in Minecraft, a
popular, widely accessible yet challenging open-ended environment for
developing human-level multi-task agents. We first identify two main challenges
of learning such policies: 1) the indistinguishability of tasks from the state
distribution, due to the vast scene diversity, and 2) the non-stationary nature
of environment dynamics caused by partial observability. To tackle the first
challenge, we propose Goal-Sensitive Backbone (GSB) for the policy to encourage
the emergence of goal-relevant visual state representations. To tackle the
second challenge, the policy is further fueled by an adaptive horizon
prediction module that helps alleviate the learning uncertainty brought by the
non-stationary dynamics. Experiments on 20 Minecraft tasks show that our method
significantly outperforms the best baseline so far; in many of them, we double
the performance. Our ablation and exploratory studies then explain how our
approach beat the counterparts and also unveil the surprising bonus of
zero-shot generalization to new scenes (biomes). We hope our agent could help
shed some light on learning goal-conditioned, multi-task agents in challenging,
open-ended environments like Minecraft. | Shaofei Cai, Zihao Wang, Xiaojian Ma, Anji Liu, Yitao Liang | 2023-01-21T08:15:38Z | http://arxiv.org/abs/2301.10034v3 | # Open-World Multi-Task Control Through Goal-Aware Representation Learning and Adaptive Horizon Prediction
###### Abstract
We study the problem of learning goal-conditioned policies in Minecraft, a popular, widely accessible yet challenging open-ended environment for developing human-level multi-task agents. We first identify two main challenges of learning such policies: 1) the indistinguishability of tasks from the state distribution, due to the vast scene diversity, and 2) the non-stationary nature of environment dynamics caused by partial observability. To tackle the first challenge, we propose Goal-Sensitive Backbone (GSB) for the policy to encourage the emergence of goal-relevant visual state representations. To tackle the second challenge, the policy is further fueled by an adaptive horizon prediction module that helps alleviate the learning uncertainty brought by the non-stationary dynamics. Experiments on 20 Minecraft tasks show that our method significantly outperforms the best baseline so far; in many of them, we double the performance. Our ablation and exploratory studies then explain how our approach beat the counterparts and also unveil the surprising bonus of zero-shot generalization to new scenes (biomes). We hope our agent could help shed some light on learning goal-conditioned, multi-task agents in challenging, open-ended environments like Minecraft. The code is released at [https://github.com/CraftJarvis/MC-Controller](https://github.com/CraftJarvis/MC-Controller).
## 1 Introduction
Building agents that can accomplish a vast and diverse suite of tasks in an open-ended world is considered a key challenge towards devising generally capable artificial intelligence [3, 2, 6, 35]. In recent years, environments like Minecraft have drawn much attention from the related research communities [16, 18, 19, 20, 26], since they are not only popular, and widely accessible, but also offer an open-ended universe with myriad of tasks, making them great platforms for developing human-level multi-task agents. Although groundbreaking successes have been observed in many challenging sequential decision-making problems such as Atari [32], Go [39], and MOBA games [13, 44, 45], such successes have not been transferred to those open worlds. To understand the gap and design corresponding solutions, we need to first understand the distinct challenges brought by these environments. Let's take Minecraft [24] as an example: there are over twenty types of landscapes
Figure 1: Comparison of states between Meta-world [49] (left) and Minecraft [24] (right) based on t-SNE visualization. The points with the same color represent states from the trajectories that complete the same task. It can be seen that the states are much more distinguishable in terms of tasks in Meta-world than in Minecraft, implying the higher diversity of states and tasks in open worlds like Minecraft over traditional multi-task agent learning environments like Meta-world.
ranging from flat lands like Savannah and desert to rough mountains with forests and caves. These diverse landscapes also enable countless tasks that could be achieved by the agents: mining, harvesting, farming, combating, constructing, etc. Compared to canonical agent learning environments like Go [39], Atari [32], and robotic control suite [41, 43, 48], Minecraft provides a substantially more diverse distribution of states thanks to the rich scenes and tasks built with the game, making it exceptionally difficult to extract the pivotal task-relevant visual state representations for goal-conditioned policies. To help our readers understand the significance of this challenge, we visualize the states from trajectories that complete some tasks in Minecraft and Meta-world [48] (a popular multi-task learning environment but with fewer states and tasks) in Fig. 1. States of different tasks are annotated with different colors. Clearly, the states in Minecraft are much less distinguishable in terms of tasks than in Meta-world. Therefore goal-conditioned policies are more likely to struggle in mapping those states and tasks (served as goals) to actions.
Another grand challenge in an open-ended environment like Minecraft hails from the setting of such games, where an agent can only have very limited observations of the world. For example, in MineDoJo [16] (a recent agent benchmark built on Minecraft), the observation space comprises a first-person view image and a list of possessed items. However, many more aspects of the surroundings remain hidden from the agents. That is, the agent now has to work with a **partially observable environment**. A plague embedded with such an environment is _non-stationary dynamics_, which makes it almost impossible to predict what will happen next. Therefore, the distances from states to the current goal become much less clear due to the world uncertainty, leading to less distinguishable states in terms of goal completeness and more faulty decisions emitted by the goal-conditioned policies.
This paper aims at mitigating both aforementioned challenges that emerge from most open-world environments. First, we observe that the architecture of the policy network is crucial to learning goal-relevant visual state representations that allow goal-conditioned actions in domains with low inter-goal state diversity (cf. Fig. 1). To this end, we propose Goal-Sensitive Backbone (GSB), which enables effective learning of goal-conditioned policies over 20 tasks in the Minecraft domain. Next, to mitigate the challenge posed by the partially observed and non-stationary environment, we introduce horizon as an extra condition for the policy and a corresponding horizon prediction module. Specifically, the policy is also _explicitly_ conditioned on the remaining time steps till achieving certain goals (i.e., distance-to-goal). We find it significantly boosts the performance of our agents in open-world multi-task domains. However, the ground-truth distance-to-goal is unavailable during evaluation. To fix this problem, we train a horizon prediction module and feed the estimated distance-to-goal to the horizon commanding policy in evaluation. This leads to a \(27\%\) gain in average success rate under the multi-task settings.
We evaluate the proposed approaches based on the simple yet effective behavior cloning algorithm [10]. The experiments are conducted in three common biomes. In multi-task settings, our proposed method outperforms the baseline in terms of success rate and precision by a large margin. It also achieves consistent improvement in single-task settings. Our ablation and exploratory studies then explain how our approach beat the counterparts and also unveil the surprising bonus of zero-shot generalization to new scenes (biomes).
To summarize, targeting two identified challenges distinct to open worlds, our contributions are threefold:
* We propose Goal-Sensitive Backbone (GSB), a neural network that enables effective learning of goal-relevant visual state representations at multiple levels for goal-conditioned policies, aiming at addressing the challenge of diverse state distribution in open-ended environments.
* We further introduce adaptive horizon prediction to explicitly condition the policy on the distance from the current state to the goal, yielding much better performances in a partially observable open-ended environment with non-stationary dynamics.
* We conduct extensive studies on the popular yet challenging Minecraft domain with baselines and our proposed method. The results demonstrate superior advantages of our approach over the counterparts in terms of both success rate and precision of task completion.
## 2 Preliminaries
**Goal-conditioned policy**, as its name suggests, is a type of agent's policy \(\pi\) for decision-making that is conditioned on goals besides states. Specifically, we denote \(\pi(a|s,g)\) as a goal-conditioned policy that maps the current state \(s\) and goal \(g\) to an action \(a\). Compared to the canonical formulation of policy where the goal is absent, the goal-conditioned policy offers flexibility of learning _multi-task_ agent as it allows different behaviors for different tasks by simply altering the goal. There are multiple ways to specify the goal, e.g., natural language instructions [2] and goal images [36].
**Goal-conditioned imitation learning** is a simple yet effective way to learn goal-conditioned policies. Specifically, \(\pi(a|s,g)\) is optimized by imitating the demonstrations \(\mathcal{D}\), where \(\mathcal{D}=\{\tau^{1},\tau^{2},\tau^{3},\dots\}\) is a collection of trajectories \(\tau^{i}\). A trajectory is a sequence of states, actions, and goals, defined as \(\tau^{i}=\{(s^{i}_{t},a^{i}_{t},g^{i})\}_{t=0}^{T}\), where \(T\) is the trajectory length. The imitation learning objective is to maximize the likelihood of the action in demonstrations when attempting
to reach the desired goal
\[J_{IL}(\pi)=\mathbb{E}_{\tau\sim\mathcal{D}}\big{[}\sum\nolimits_{t=0}^{T}\text{ log }\pi(a_{t}|s_{t},g)\big{]}. \tag{1}\]
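In practice this objective reduces to a standard cross-entropy over demonstrated actions. The following PyTorch-style sketch is only illustrative: the `policy` interface, the batch keys, and the assumption of a discrete action space are ours, not the paper's.

```python
import torch.nn.functional as F

def goal_conditioned_bc_loss(policy, batch):
    """Minibatch version of Eq. (1): minimize -log pi(a_t | s_t, g) over demonstration tuples."""
    logits = policy(batch["state"], batch["goal"])     # (B, num_actions) action logits
    return F.cross_entropy(logits, batch["action"])    # negative mean log-likelihood
```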
**Notation.** At each timestep, our architecture takes in a tuple \((\mathbf{s}_{t},\mathbf{a}_{t},h_{t},\mathbf{g},\mathbf{a}_{t-1})\) as the input, where \(\mathbf{s}_{t}=\{\mathbf{o}_{t}^{I},\mathbf{o}_{t}^{E}\}\), \(\mathbf{o}_{t}^{I}\) is the raw image observation, and \(\mathbf{o}_{t}^{E}\) is the extra observation provided by the environment. \(h_{t}\) comes from the demonstration; \(\tilde{h}_{t}\) and \(\tilde{\mathbf{a}}_{t}\) are the predicted horizon and action, respectively. For simplicity, we also use the same symbols (\(\mathbf{o}_{t}^{E},\mathbf{g},\mathbf{a}_{t-1}\)) to represent their embeddings.
## 3 Method
In this section, we describe the proposed algorithm for learning goal-conditioned policies that are capable of completing various preliminary tasks in open-world domains. First, we revisit and provide a detailed illustration of the identified challenges in open-world domains (§3.1). Aiming at solving these challenges, we proceed to introduce the proposed goal-sensitive backbone (§3.2) and adaptive horizon prediction module (§3.3). Finally, we provide an overview of the proposed method in Section 3.4.
### Challenges
As demonstrated in Section 1, the **first** major challenge of open-world environments is the indistinguishability of states in terms of different goals (cf. Fig. 1). That is, it is often hard to identify the task/goal by looking at individual states. Compared to environments with clear goal indicators in their states, agents in open-world domains need to learn goal-conditioned diverse behaviors under similar states.
This challenge can be reflected by the illustrative experiment in Fig. 2. Two multi-task environments are created based on the Minecraft domain. Both environments consist of two preliminary tasks: collect logs and hunt sheep, where the former can be done by chopping trees and the latter requires the agent to slaughter sheep. Both tasks require the agent to first locate and approach the corresponding target. As shown in Fig. 2 (center), in the single-biome environment (blue blob in Fig. 2), the agent is tasked to collect logs and hunt sheep both inside a randomly generated plain area with grass, trees, and various mobs. In contrast, in the cross-biome environment (red blob in Fig. 2), whenever the agent is tasked to hunt sheep, it is spawned randomly in a snowy plain. Although different in visual appearance, snowy plains and plains have very similar terrains, so the difficulty of each task in the cross-biome environment is similar to its counterpart in the single-biome environment. The main consequence of this change is that the agent can determine its goal by solely looking at the current state, which mimics the setting of Meta-World in Fig. 1(left).
We collect demonstrations by filtering successful trajectories played by VPT [4] (see §4.1 for more details) and use behavior cloning to train multi-task policies on both environments. Perhaps surprisingly, as shown in Fig. 2, despite the minor difference, performance in the single-biome environment is significantly weaker than in the cross-biome one. This clearly demonstrates that the common practice of directly concatenating observation features and goal features suffers from learning diverse actions (e.g., locate trees, find sheep) given similar observations. In contrast, in the cross-biome environment, the difficulty of the two tasks fundamentally remains the same, yet the agent only needs to learn a consistent behavior in each biome (i.e., plains and snow fields). This alleviates the need to learn goal-conditioned diverse behaviors in similar states and leads to a better success rate.
The **second** key challenge comes from the partial observability of the game and non-stationary environment dynamics. Specifically, in Minecraft, the biome and mobs surrounding the agent are generated procedurally and randomly after each reset. Further, only a small fraction of the whole terrain is visible to the agent in one observation, leading to more uncertainty of the world. From the perspective of learning goal-conditioned policies, the distances from states to the current goal will become much less clear compared to canonical learning environments like Atari [12]. We refer to Appendix B for more discussion on this. Since the goal-conditioned policies also rely on distinguishable states in terms of goal completeness, they're more likely to make wrong decisions as a result of world uncertainty.
### Incentivize Goal-Conditioned Behavior with Stacked Goal-Sensitive Backbone
As elaborated in Section 3.1, learning goal-conditioned policies becomes extremely hard when states collected from
Figure 2: Demonstrations of the cross-biome environment and the more challenging single-biome environment. The challenge comes from the fact that the agent needs to learn diverse behaviors in similar states conditioned on different goals.
trajectories that accomplish different tasks are indistinguishable. While certain algorithmic design choices could improve multi-task performance in such open-world environments, we find that the structure of the policy network is a key factor towards higher episode reward. Specifically, we observe that existing CNN-based backbones can excel at completing many single tasks (e.g., hunt cow, collect stone), but struggle to learn goal-conditioned behavior when trained on these tasks in a goal-conditioned manner. This motivates the need to properly fuse goal information into the network. Despite the existence of various feature fusion approaches such as concatenation and bilinear layers [27], they all perform poorly even with a moderate number of tasks. This motivates the need to carry goal information into multiple layers of the network. Specifically, we propose the goal-sensitive backbone (GSB), which effectively blends goal information into the state features at multiple levels. As shown in Fig. 3 (right), GSB is composed of multiple goal convolution blocks (g-conv blocks), which are obtained by augmenting the vanilla convolution block with a goal branch. Functionally, it provides deep feature fusion between multi-level visual features and the goal information. As we will show in Section 4.3, adding GSB leads to a significant performance boost in multi-task environments. The g-conv block processes its input visual features \(\mathbf{x}^{(l)}\in\mathbb{R}^{C\times H\times W}\) with two convolution layers
\[\hat{\mathbf{x}}^{(l)}=\text{ReLU}(\text{Conv}(\text{ReLU}(\text{Conv}(\mathbf{x}^{( l)})))). \tag{2}\]
Meanwhile, it maps the goal embedding \(\mathbf{g}\) to the same feature space as the intermediate features \(\hat{\mathbf{x}}^{(l)}\) with two fully-connected layers, described as
\[\hat{\mathbf{g}}^{(l)}=\text{FC}(\text{ReLU}(\text{FC}(\mathbf{g}))). \tag{3}\]
The goal feature \(\hat{\mathbf{g}}^{(l)}\) is then used to modulate the intermediate features \(\hat{\mathbf{x}}^{(l)}\) channel-wise. By adding a residual connection [21], the output feature \(\mathbf{x}^{(l+1)}\) is expressed by
\[\mathbf{x}^{(l+1)}=\sigma(\hat{\mathbf{g}}^{(l)})\odot\hat{\mathbf{x}}^{(l)}+\mathbf{x}^{(l)}, \tag{4}\]
where \(\sigma(\cdot)\) is the sigmoid function and \(\odot\) is the element-wise product. This channel-wise modulation encourages the module to focus on goal-specific regions and discard background information by adaptively weighing the channel importance. We highlight that the g-conv block can be plugged into any convolutional backbone to improve its capability of extracting goal-aware visual features. The proposed goal-sensitive backbone is constructed by replacing the 6 convolution blocks of the widely-adopted Impala CNN [14] with g-conv blocks. In our experiments, a GSB is used to compute goal-conditioned state features \(\mathbf{I}_{t}^{g}=\text{GSB}(\mathbf{o}_{t}^{I},\mathbf{g})\). Such an idea of fusing condition information into the backbone layer by layer was also used by some prior works [5, 33, 34, 22]. Here, we demonstrate that it plays a critical role in open-world multi-task control.
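To make the structure concrete, the following is a minimal PyTorch sketch of a g-conv block implementing Eqs. (2)-(4); the kernel sizes, channel counts, and goal-embedding dimension are illustrative assumptions rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class GConvBlock(nn.Module):
    """Goal convolution block: fuses a goal embedding into visual features (Eqs. 2-4)."""

    def __init__(self, channels: int, goal_dim: int):
        super().__init__()
        # Visual branch: two convolutions with ReLU activations (Eq. 2).
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Goal branch: two fully-connected layers producing one scale per channel (Eq. 3).
        self.fc1 = nn.Linear(goal_dim, channels)
        self.fc2 = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) visual features; g: (B, goal_dim) goal embedding.
        x_hat = torch.relu(self.conv2(torch.relu(self.conv1(x))))   # Eq. (2)
        g_hat = self.fc2(torch.relu(self.fc1(g)))                   # Eq. (3)
        gate = torch.sigmoid(g_hat).unsqueeze(-1).unsqueeze(-1)     # (B, C, 1, 1)
        return gate * x_hat + x                                     # Eq. (4), residual connection
```

Stacking several such blocks in place of the plain convolution blocks of an Impala CNN yields the goal-sensitive backbone described above.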
### Combat World Uncertainty with Adaptive Horizon Prediction
To address the challenge brought by the uncertainty of the world, we need to ensure the goal-conditioned policies
Figure 3: **Our Goal-conditioned Policy Architecture**. Our contributions are in red and purple. **Right:** The _goal-sensitive backbone_ (GSB) is a key component to incentivize goal-condition behaviors. It consists of a stack of g-conv blocks. It takes the image observation \(\mathbf{o}_{t}^{I}\) and the goal embedding \(\mathbf{g}\) as input, and outputs the goal-attended visual representation \(\mathbf{I}_{t}^{g}\). The multimodal joint representation \(\mathbf{f}_{t}\) is the concatenation of visual representation \(\mathbf{I}_{t}^{g}\), goal embedding \(\mathbf{g}\), extra observation embedding \(\mathbf{o}_{t}^{E}\) and previous action embedding \(\mathbf{a}_{t-1}\). The horizon prediction module \(\mu\) uses it to predict the horizon \(\tilde{h}_{t}\) while the horizon commanding policy \(\pi_{\theta}\) uses it to predict the action \(\tilde{\mathbf{a}}_{t}\). **Top:** During the training, the predicted horizon \(\tilde{h}_{t}\) is only used to compute the horizon loss \(\mathcal{L}_{h}\). The policy is conditioned on \(h_{t}\) that comes from the demonstration. **Bottom:** During the evaluation, the policy is conditioned on the predicted horizon \(\tilde{h}_{t}\) which needs to be adjusted.
to be more aware of goal-completeness given the current state. We observe that conditioning the policy additionally on the number of remaining steps toward achieving a goal, i.e., distance-to-goal, or **horizon**, can significantly improve the accuracy of predicted actions on held-out offline datasets [17, 37]. Here, we define the horizon \(h_{t}:=T-t\), where \(T\) is the trajectory length, as the remaining time steps to complete the given goal. This motivates the design of a horizon commanding policy \(\pi_{\theta}:\mathcal{S}\times\mathcal{G}\times\mathcal{H}\to\mathcal{A}\) that takes a state \(s\), a goal \(g\), and a horizon \(h\) as inputs and outputs an action \(a\). A key problem of the horizon commanding policy is that it cannot be directly used for evaluation: during gameplay, the horizon is unknown, since computing it requires completing the whole trajectory. To fix this problem, we introduce an additional horizon prediction module, which estimates the horizon given a state \(s\) and a goal \(g\). Combining the two modules, we can apply the horizon commanding policy during gameplay.
Both modules can be trained efficiently with dense supervision. Specifically, the horizon commanding policy \(\pi_{\theta}\) can be learned by any policy loss specified by RL algorithms. For example, when behavior cloning is used, \(\pi_{\theta}\) can be optimized by minimizing the loss
\[\mathcal{L}_{a}=-\text{log}\;\pi_{\theta}(\mathbf{a}_{t}|h_{t},\mathbf{f}_{t}), \tag{5}\]
where \(\mathbf{f}_{t}\) is the joint representation of the state and goal embedded by a neural network (see §3.4). The horizon prediction module is trained by a supervised learning loss
\[\mathcal{L}_{h}=-\text{log}\;\mu(h_{t}|\mathbf{f}_{t}), \tag{6}\]
where \(\mu\) is a network that predicts the horizon.
During the evaluation, after computing the embedding \(\mathbf{f}_{t}\) for \(s_{t}\) and \(g\), the horizon prediction module \(\mu\) is first invoked to compute an estimated horizon \(\bar{h}_{t}=\mu(\mathbf{f}_{t})\). This predicted horizon can then be fed to the horizon commanding policy to compute the action distribution \(\pi_{\theta}(\mathbf{a}_{t}|\bar{h}_{t},\mathbf{f}_{t})\). In practice, we observe that feeding an adaptive version \(\hat{h}_{t}:=\max(\bar{h}_{t}-c,0)\) (\(c\) is a hyperparameter) to \(\pi_{\theta}\) leads to better performance. We hypothesize that this advantageous behavior comes from the fact that by supplying the adaptive horizon \(\hat{h}_{t}\), the agent is encouraged to choose actions that lead to speedy completion of the goal. The effectiveness of the adaptive horizon will be demonstrated in Section 4.3.
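The train/evaluation asymmetry can be summarized by the following minimal PyTorch sketch; discretizing the horizon into classes, conditioning the policy through an additive horizon embedding, and the default value of \(c\) are assumptions made for illustration, not the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HorizonModules(nn.Module):
    """Horizon prediction module mu (Eq. 6) and horizon commanding policy pi (Eq. 5)."""

    def __init__(self, feat_dim: int, num_actions: int, max_horizon: int):
        super().__init__()
        self.horizon_head = nn.Linear(feat_dim, max_horizon)    # mu(h | f_t)
        self.horizon_emb = nn.Embedding(max_horizon, feat_dim)  # conditions pi on h
        self.policy_head = nn.Linear(feat_dim, num_actions)     # pi(a | h, f_t)

    def losses(self, f, action, horizon):
        # Training: the policy is conditioned on the ground-truth horizon from the demonstration.
        action_logits = self.policy_head(f + self.horizon_emb(horizon))
        loss_a = F.cross_entropy(action_logits, action)          # Eq. (5)
        loss_h = F.cross_entropy(self.horizon_head(f), horizon)  # Eq. (6)
        return loss_a, loss_h

    @torch.no_grad()
    def act(self, f, c: int = 5):
        # Evaluation: predict the horizon, subtract c (adaptive horizon), then act greedily.
        h_pred = self.horizon_head(f).argmax(dim=-1)
        h_adapt = (h_pred - c).clamp(min=0)
        return self.policy_head(f + self.horizon_emb(h_adapt)).argmax(dim=-1)
```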
### Model Summary
As shown in Fig. 3, our model sequentially connects the proposed goal-sensitive backbone, horizon prediction module, and horizon commanding policy. At each time step \(t\), the image observation and goal information are first fed forward into the goal-sensitive backbone to compute goal-aware visual feature \(\mathbf{I}_{t}^{g}\). The visual feature is then fused with additional input information including the extra observation embedding \(\mathbf{o}_{t}^{E}\), the goal embedding \(\mathbf{g}\), and the previous action embedding \(\mathbf{a}_{t-1}\) by concatenation and a feed-forward network:
\[\mathbf{f}_{t}=\text{FFN}(\big{[}\mathbf{I}_{t}^{g}\parallel\mathbf{o}_{t}^{E}\parallel \mathbf{g}\parallel\mathbf{a}_{t-1}\big{]}). \tag{7}\]
Then, \(\mathbf{f}_{t}\) is input to the horizon prediction module to predict the horizon \(\bar{h}_{t}=\mu(\mathbf{f}_{t})\), and the horizon commanding policy takes in the horizon and the features \(\mathbf{f}_{t}\) to compute the action. When trained with behavior cloning, the overall objective function is \(\mathcal{L}=\mathcal{L}_{a}+\mathcal{L}_{h}\). During the evaluation, the adaptive horizon \(\hat{h}_{t}\) is fed to the horizon commanding policy in place of \(\bar{h}_{t}\).
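As a rough sketch of Eq. (7) and the joint objective (module names, the flattening of the visual feature map, and the batch keys are assumed for illustration):

```python
import torch
import torch.nn.functional as F

def joint_representation(gsb, ffn, obs_img, obs_extra, goal_emb, prev_action_emb):
    """Eq. (7): concatenate the goal-aware visual feature with the other embeddings and fuse with an FFN."""
    visual = gsb(obs_img, goal_emb).flatten(start_dim=1)  # I_t^g with spatial dims flattened
    fused = torch.cat([visual, obs_extra, goal_emb, prev_action_emb], dim=-1)
    return ffn(fused)

def training_step(model, batch):
    """One behavior-cloning step with the joint objective L = L_a + L_h."""
    f = joint_representation(model.gsb, model.ffn, batch["image"], batch["extra_obs"],
                             batch["goal_emb"], batch["prev_action_emb"])
    action_logits = model.policy_head(f + model.horizon_emb(batch["horizon"]))  # pi(a | h_t, f_t)
    loss_a = F.cross_entropy(action_logits, batch["action"])                    # Eq. (5)
    loss_h = F.cross_entropy(model.horizon_head(f), batch["horizon"])           # Eq. (6)
    return loss_a + loss_h
```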
## 4 Experiments
This section analyzes and evaluates the proposed goal-sensitive backbone and the adaptive horizon prediction module in the open-world domain Minecraft. To minimize performance variation caused by the design choices in RL algorithms, we build the proposed method on top of the simple yet effective behavior cloning algorithm. In Section 4.1, we first introduce three suites of tasks; the agent is asked to collect and combat various target objects/mobs with indistinguishable states conditioned on different goals (challenge #1) and non-stationary environment dynamics (challenge #2). Single-task and multi-task performance on the benchmarks is evaluated and analyzed in Section 4.2, and ablation studies are conducted in Section 4.3. Finally, we unveil the surprising bonus of zero-shot generalization to new scenes and tasks in Section 4.4.
### Experimental Setup
**Environment and task.** To best expose the challenges described in Sections 1 and 3.1, a key design principle of our benchmark environments is to task the agent to complete multiple preliminary tasks in similar yet highly randomized scenes. By specifying the biome that surrounds the agent, Minecraft provides a perfect way to create such environments. Specifically, as shown in Fig. 4, every biome has unique and consistent observations; randomness comes from the fact that the terrain is generated randomly in each episode. To evaluate the scalability of the proposed method in terms of the number of tasks, we choose **Plains** and
Figure 4: Snapshots of the RGB camera view in three biomes.
**Forest**, the two most common biomes that contain a large number of resources and mobs.
In addition to the two challenges, **Plains** and **Forest** also add unique difficulties to learning goal-conditioned policies. Specifically, although we have better views in **Plains**, the resources/targets are located further away from the agent and require more exploration. In contrast, there exist more occlusions and obstacles in **Forest**.
The **Plains** benchmark consists of four tasks: harvest oak wood, and combat sheep, cow, and pig. In the **Forest** benchmark, the agent is tasked to complete thirteen tasks: combat sheep, cow, and pig, and harvest dirt, sand, oak wood, birch wood, oak leaves, birch leaves, wool, grass, poppy, and orange tulip.
In addition to the above two benchmarks, we also test the agent on a "hunt animals" benchmark based on the **Flat** biome, which contains a flattened world. Specifically, the agent needs to combat sheep, cow, pig, spider, polar bear, chicken, donkey, horse, wolf, llama, and mushroom cow in the **Flat** environment. Compared to other benchmarks, the challenge of **Flat** comes from the fact that the mobs are constantly wandering around, which makes it hard to locate and approach the correct target.
We adopt the original observation space provided by MineDoJo [16], which includes an RGB camera view, yaw/pitch angles, GPS location, and the types of the \(3\times 3\) blocks surrounding the agent. We discretize the original multi-discrete action space provided by MineDojo into 42 discrete actions. Details are included in Appendix A.1.
**Data collection pipeline.** One significant downside of behavior cloning algorithms is the need for high-quality and densely-labeled trajectories, which often requires enormous human effort to collect. To mitigate this problem, we collect goal-conditioned demonstrations by filtering successful trajectories from gameplays by pretrained non-goal-conditioned policies. Specifically, we adopt Video Pre-Training (VPT) [4], which is trained on a tremendous amount of non-goal-conditioned gameplay. We roll out the VPT policy in the three benchmarks and record all episodes that accomplish any of the defined goals. These trajectories are then converted to a goal-conditioned demonstration dataset. Please refer to Appendix A.2 for detailed settings and an efficiency analysis of our data collection pipeline.
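A rough sketch of this relabeling step is shown below; the trajectory format and the `achieved` predicate are assumptions for illustration and do not reflect the actual MineDojo or VPT interfaces.

```python
def build_goal_conditioned_dataset(rollouts, goals, achieved):
    """Convert non-goal-conditioned VPT rollouts into goal-conditioned demonstrations.

    `rollouts` is an iterable of trajectories (lists of (obs, action) pairs) and
    `achieved(traj, goal)` is an assumed predicate checking whether `goal` was
    accomplished in `traj`.
    """
    dataset = []
    for traj in rollouts:
        for goal in goals:
            if achieved(traj, goal):
                T = len(traj)
                # Relabel every step with the goal and the remaining horizon h_t = T - t.
                dataset.extend(
                    {"obs": obs, "action": act, "goal": goal, "horizon": T - t}
                    for t, (obs, act) in enumerate(traj)
                )
    return dataset
```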
**Evaluation.** During the evaluation, the maximum episode length is set to 600, 600, and 300 on the **Flat**, **Plains** and **Forest** benchmarks, respectively. **Plains** and **Forest** are given more time steps since, in these environments, the agent needs more time to locate and approach the target. We use _Success Rate_ and _Precision_ as our evaluation metrics. A gameplay is successful if the agent completes the goal within the episode. Precision is defined as the number of times the specified goal is achieved divided by the total number of goals completed in an episode. It measures how well the agent can be aware of the specified goal, instead of simply accomplishing any goal during gameplay.
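For clarity, the two per-episode metrics can be computed as in the following sketch (the logging format of completed goals is an assumption):

```python
def episode_metrics(completed_goals, target_goal):
    """Success indicator and precision for one episode, per the definitions above."""
    success = int(target_goal in completed_goals)
    precision = (completed_goals.count(target_goal) / len(completed_goals)
                 if completed_goals else 0.0)
    return success, precision

# Example: the agent was asked to hunt a cow but also chopped two logs.
print(episode_metrics(["log", "cow", "log"], "cow"))  # -> (1, 0.333...)
```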
### Experimental Results
We first focus on the simpler single-task learning setting in order to isolate the challenge introduced by non-stationary dynamics and partial observability (§4.2.1). We then examine whether the proposed method can better address both challenges by examining its multi-task performance (§4.2.2).
#### 4.2.1 Single task experiments
We select three typical tasks, i.e., harvest log, hunt cow, and hunt sheep, from the **Plains** benchmark for single-task training. We compare the proposed method against the following baselines. First, MineAgent [16] is an online RL algorithm that leverages pretrained state representations and dense reward functions to boost training. BC (VPT) [4], BC (CLIP) [16], and BC (I-CNN) [14] are variants of the behavior cloning algorithm that use different backbone models (indicated in the corresponding brackets) for state feature extraction. The backbones are finetuned with the BC loss (see Appendix A.3 for more details).
Results are reported in Table 1. First, we observe that even the individual tasks are extremely challenging for online RL algorithms such as MineAgent, even though its networks are pretrained on Minecraft data. We attribute this failure to its inconsistent dense reward when facing a hard-exploration task (e.g., the additional provided reward is not consistently higher when the agent is moving closer to a target object). Next, compared to BC (I-CNN), which uses a randomly initialized Impala CNN model, the Minecraft-pretrained backbones in BC (VPT) and BC (CLIP) do not bring any benefit. This could be caused by the lack of plasticity, i.e., the ability to keep learning, in these well-trained models, echoing similar findings in computer vision and RL [11]. Finally, our approach outperforms all baseline methods, especially in terms of precision. This demonstrates that our method is more robust against non-stationary dynamics and partial observability.
#### 4.2.2 Multi-task experiments
We move on to evaluate the proposed method on the three multi-task benchmarks introduced in Section 4.1. The baselines include three behavior cloning methods (we use "MT-BC" as an abbreviation of multi-task behavior cloning). We also include two variations of our method: one without the goal-sensitive backbone, and the other without the adaptive horizon prediction module. Results on the **Plains**, **Flat**, and **Forest** environments are reported in Table 2. First, we observe that our method significantly outperforms all baselines in terms of both success rate and precision in all three benchmarks. Moreover, scaling up the number of tasks does not necessarily deteriorate the performance of our method. Specifically, we compare the average success rate on the **Plains** and **Flat** benchmarks, which contain 4 and 9 tasks, respectively. While the baselines struggle to maintain their success rate on the **Flat** environment, our approach is capable of maintaining high performance despite the increased number of tasks. Taken together, the results on multi-task benchmarks clearly demonstrate the superiority of our method when facing open-world environments with the two elaborated challenges (cf. §3.1).
### Ablation Study
**Ablation study on goal-sensitive backbone.** To examine the effectiveness of our proposed goal-sensitive backbone, we compare the following two groups of architectures: 1) **Ours** (I-CNN) vs. **Ours** (w/GSB), 2) MT-BC (I-CNN) vs. MT-BC (w/GSB). The key distinction within each group is whether the backbone employs a standard Impala CNN or a goal-sensitive backbone. As depicted in Table 2, our findings indicate that the goal-sensitive backbone consistently enhances performance in terms of both success rate and precision across all environments. Remarkably, in the **Flat** biome, our approach with the goal-sensitive backbone attains a \(26\%\) and \(22\%\) performance improvement in success rate and precision, respectively. This demonstrates that the goal-sensitive backbone effectively fuses the goal information into visual features and leads to goal-aware behavior.
**Parameter sensitivity on horizon prediction.** To investigate the sensitivity of the horizon-based control policy to the constant \(c\) (outlined in §3.3), we perform experiments with \(c\) values ranging from 0 to 14. We train and evaluate the model using the multi-task setting on the **Flat** benchmark, shown in Figure 5. Our findings indicate that within the 0 to 10 range, decreasing \(c\) enhances performance, while further reduction leads to decline. This implies that subtracting a small constant from the predicted horizon-to-goal yields a more effective policy. However, subtracting a larger value results in performance deterioration, as attaining the goal within such a limited horizon may be unfeasible.
**Comparison with recurrent architecture.** We built two recurrent variants ("Ours + RNN", "Ours - horizon pred + RNN") by using a GRU module to fuse the joint representation \(f_{t}\) and optionally also removing the horizon prediction module. During training, the batch size, number of frames, and frame skip are set to 8, 16, and 5, respectively. Table 3 (exp1 _vs._ exp3) shows that "Ours - horizon pred + RNN" becomes significantly worse, likely due to the partial observability issue (\(-26\%\) SR). However, when combining the RNN and the horizon module (exp2), the performance improves significantly over our original method (\(+10\%\) SR). To sum up, while RNNs can aid in addressing partial observability, our findings indicate that in our open-world scenario they are considerably more effective when combined with our horizon prediction module.
**Ablation on horizon loss, extra observation, and language condition.** Table 3 demonstrates that excluding horizon loss (exp5) and extra observation (exp6) can result in a decrease of success rate by \(8\%\) and \(5\%\), respectively. Furthermore, as depicted in Table 4, when the language condition is removed from the input (exp7), the policy primarily accomplishes the "chopping tree" task (\(44\%\) SR) while scarcely completing the "hunting pig" task (\(11\%\) SR). The tasks "hunting sheep" and "hunting cow" are executed fairly evenly (around \(24\%\) SR). This is likely due to trees appearing more frequently than animals in the environment.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{**Success Rate (\%)**} & \multicolumn{3}{c}{**Avg. Precision (\%)**} \\ \cline{2-7} & Plains & Flat & Forest & Plains & Flat & Forest \\ \hline MT-BC (VPT) [4] & \(25_{\pm 06}\) & \(17_{\pm 05}\) & \(15_{\pm 04}\) & \(22_{\pm 05}\) & \(17_{\pm 03}\) & \(14_{\pm 04}\) \\ MT-BC (CLIP) [16] & \(22_{\pm 05}\) & \(14_{\pm 03}\) & \(14_{\pm 03}\) & \(23_{\pm 04}\) & \(15_{\pm 03}\) & \(13_{\pm 03}\) \\ MT-BC (I-CNN) [14] & \(25_{\pm 02}\) & \(18_{\pm 02}\) & \(15_{\pm 03}\) & \(23_{\pm 04}\) & \(14_{\pm 02}\) & \(13_{\pm 03}\) \\ \hline MT-BC (w/GSB) & \(32_{\pm 05}\) & \(36_{\pm 03}\) & \(19_{\pm 05}\) & \(43_{\pm 06}\) & \(36_{\pm 02}\) & \(17_{\pm 03}\) \\ **Ours** (I-CNN) & \(31_{\pm 06}\) & \(31_{\pm 04}\) & \(18_{\pm 02}\) & \(22_{\pm 03}\) & \(28_{\pm 04}\) & \(15_{\pm 04}\) \\ **Ours** (w/GSB) & \(\mathbf{55_{\pm 09}}\) & \(\mathbf{57_{\pm 09}}\) & \(\mathbf{30_{\pm 06}}\) & \(\mathbf{70_{\pm 09}}\) & \(\mathbf{50_{\pm 00}}\) & \(\mathbf{29_{\pm 06}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of **multi-goal** tasks (§4.2.2) on three biomes.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{**Success Rate (\%)**} & \multicolumn{3}{c}{**Precision (\%)**} \\ \cline{2-7} & Log & Cow & Sheep & Log & Cow & Sheep \\ \hline MineAgent [16] & \(0_{\pm 00}\) & \(01_{\pm 00}\) & \(01_{\pm 00}\) & \(-\) & \(-\) & \(-\) \\ BC (CLIP) [16] & \(18_{\pm 06}\) & \(26_{\pm 05}\) & \(25_{\pm 06}\) & \(51_{\pm 08}\) & \(43_{\pm 08}\) & \(44_{\pm 05}\) \\ BC (VPT) [4] & \(2_{\pm 08}\) & \(27_{\pm 06}\) & \(22_{\pm 06}\) & \(58_{\pm 09}\) & \(46_{\pm 05}\) & \(42_{\pm 05}\) \\ \hline BC (I-CNN) [14] & \(45_{\pm 05}\) & \(46_{\pm 04}\) & \(48_{\pm 07}\) & \(\mathbf{86}_{\pm 05}\) & \(55_{\pm 12}\) & \(45_{\pm 07}\) \\ \hline
**Ours** & \(\mathbf{50_{\pm 07}}\) & \(\mathbf{58_{\pm 10}}\) & \(\mathbf{60_{\pm 08}}\) & \(\mathbf{83_{\pm 10}}\) & \(\mathbf{75_{\pm 10}}\) & \(\mathbf{75_{\pm 06}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of **single-goal** tasks (§4.2.1) on **Plains**.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Goal** & Log & Sheep & Cow & Pig & Avg. \\ \hline
**Success Rate (\%)** & \(44_{\pm 19}\) & \(24_{\pm 06}\) & \(23_{\pm 11}\) & \(11_{\pm 07}\) & \(25_{\pm 03}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Success rate for each goal when the language condition is removed from the input (exp7), on the **Plains** biome.
### Generalization Performance
In the open-ended Minecraft environment, which features a variety of biomes with distinct appearances, a decent agent should be capable of generalizing across these diverse biomes. To evaluate the agent's zero-shot generalization ability in a new biome, we initially train the agent using data exclusively from the Plains biome. Subsequently, we test it in the Flat biome, where it faces the challenge of combatting sheep, cows, and pigs. Complicating the task, numerous distracting mobs, such as wolves and mushroom cows, appear in the testing biome but not in the training biome. The results are presented in Table 5. Our zero-shot agent demonstrates success rates comparable to those of an agent trained directly on the Flat biome. The high precision of our zero-shot agent also indicates its robust performance, even amidst numerous novel distracting mobs in the new testing biome. Therefore, we believe that our agent displays a degree of zero-shot generalization to new environments, achieved through goal-aware representation learning and adaptive horizon prediction.
## 5 Related Works
Open-ended Environments.A variety of environments have been developed for open-ended agent training, such as grid worlds [8, 9], maze worlds [42, 25, 46], and indoor worlds [1, 15, 38, 40]. Although these benchmarks have advanced agent development, they generally lack complexity in perception and task domains. This paper concentrates on Minecraft, a voxel-based 3D, first-person, open-world game centered around survival and creation. Microsoft introduced the first Gym-style API platform called Malmo [24] for Minecraft, which has spawned numerous secondary development variants. Building on Malmo, MineRL [20] offers a human-interface simulator and a dataset of human play demonstrations for the annual Diamond Challenge at NeurIPS [18, 19, 26]. MineDoJo [16], an extension of MineRL, broadens the APIs for customizing tasks and provides thousands of pre-defined compositional tasks aimed at developing a generally capable embodied agent, which we use to evaluate our method.
Embodied Agents in Minecraft.Some prior studies have utilized a hierarchical reinforcement learning framework to develop sophisticated embodied agents. For instance, SEIHAI [31] divides a long-horizon task into several subtasks, training an appropriate agent for each subtask and designing a scheduler to manage the execution of these agents. Similarly, JueWu-MC [28] adopts this concept but enhances the agent with action-aware representation learning capabilities. In recent times, the internet-scale pretraining paradigm has made a significant impact on embodied research in open-ended environments. VPT [4], for example, undergoes pretraining on an extensive collection of online gameplay videos using imitation learning. However, it lacks the ability to process any command input. MineAgent [16] takes a different approach by pretraining a language-conditioned reward function using online video-transcript pairs, which is then utilized to support multi-task reinforcement learning.
Progress Monitor.The horizon-to-goal prediction technology has already been employed as a progress monitor in the Vision-Language Navigation (VLN) communities [29, 30, 47]. This technology aids in understanding the task structure and expediting the training procedure. Generally, current progress monitors primarily function as supplementary objectives. Their estimated progress is utilized to reassess actions or execute beam search. In contrast, our estimated horizon is explicitly incorporated into the policy network to guide agent behaviors. During inference, the horizon input can be adjusted for enhanced performance.
## 6 Conclusion
In this paper, we explore the issue of learning goal-oriented policies in open-world environments. We pinpoint two major challenges unique to such settings: 1) the difficulty in distinguishing tasks from the state distribution due to immense scene variety, and 2) the non-stationary nature of environmental dynamics resulting from partial observability. We propose a goal-sensitive backbone and an adaptive horizon prediction module to overcome both. Our experiments on challenging Minecraft confirm the advantages of our proposed methods over baselines in terms of both success rate and precision of task completeness.
**Acknowledgement.** This work was supported by the National Key R&D Program of China 2022ZD0160301, and in part by the NSF grants #IIS-1943641, #IIS-1956441, #CCF-1837129, Samsung, CISCO, and a Sloan Fellowship. We thank Hongming Xu for his engineering support.
Figure 5: Multi-task performance as a function of the subtracted horizon constant \(c\). Results show that setting \(c\) to a small constant leads to better overall performance, as it incentivizes the agent to exhibit behaviors that lead to faster task completion.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Train \(\rightarrow\) Eval**} & \multicolumn{4}{c}{**Success Rate (\%)**} & \multicolumn{4}{c}{**Precision (\%)**} \\ \cline{2-9} & Sheep & Cow & Pig & Avg. & Sheep & Cow & Pig & Avg. \\ \hline **Flat\(\rightarrow\)Flat** & \(72\) & \(60\) & \(57\) & **63** & \(44\) & \(48\) & \(54\) & **49** \\ **Plains\(\rightarrow\)Flat** & \(67\) & \(47\) & \(60\) & **58** & \(89\) & \(89\) & \(70\) & **83** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Quantitative results on generalization to a novel biome. |
2310.06117 | Take a Step Back: Evoking Reasoning via Abstraction in Large Language
Models | We present Step-Back Prompting, a simple prompting technique that enables
LLMs to do abstractions to derive high-level concepts and first principles from
instances containing specific details. Using the concepts and principles to
guide reasoning, LLMs significantly improve their abilities in following a
correct reasoning path towards the solution. We conduct experiments of
Step-Back Prompting with PaLM-2L, GPT-4 and Llama2-70B models, and observe
substantial performance gains on various challenging reasoning-intensive tasks
including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back
Prompting improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7%
and 11% respectively, TimeQA by 27%, and MuSiQue by 7%. | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H. Chi, Quoc V Le, Denny Zhou | 2023-10-09T19:48:55Z | http://arxiv.org/abs/2310.06117v2 | # Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
Huaixiu Steven Zheng1
Swaroop Mishra1
Xinyun Chen
Heng-Tze Cheng
Ed H. Chi
Quoc V Le
Denny Zhou
Google DeepMind
Equal Contribution
###### Abstract
We present Step-Back Prompting, a simple prompting technique that enables LLMs to do abstractions to derive high-level concepts and first principles from instances containing specific details. Using the concepts and principles to guide the reasoning steps, LLMs significantly improve their abilities in following a correct reasoning path towards the solution. We conduct experiments of Step-Back Prompting with PaLM-2L models and observe substantial performance gains on a wide range of challenging reasoning-intensive tasks including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back Prompting improves PaLM-2L performance on MMLU Physics and Chemistry by \(7\%\) and \(11\%\), TimeQA by \(27\%\), and MuSiQue by \(7\%\).
_The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise. -- Edsger W. Dijkstra_
## 1 Introduction
The field of natural language processing (NLP) is witnessing a ground-breaking revolution because of the Transformer-based (Vaswani et al., 2017) large language models (LLMs) (Devlin et al., 2018; Raffel et al., 2020; Brown et al., 2020; Anil et al., 2023). Scaling up the model size and pre-training corpus (Hoffmann et al., 2022; Chowdhery et al., 2022) has brought remarkable improvement in model capabilities and sample efficiency with insights from the scaling law (Kaplan et al., 2020; Hoffmann et al., 2022), as well as emergent abilities (Wei et al., 2022a) such as multi-step reasoning (Wei et al., 2022b; Zhou et al., 2022) and instruction following (Mishra et al., 2022b; Wei et al., 2021).
Figure 1: Strong Performance of Step-Back Prompting: our proposed Abstraction-and-Reasoning scheme leads to a substantial improvement in a wide range of challenging tasks in STEM, Knowledge QA and Multi-Hop Reasoning requiring complex (often multi-hop) reasoning.
Despite the great advancements, complex multi-step reasoning remains challenging for even the state-of-the-art LLMs. Lightman et al. (2023) show that process-supervision with step-by-step verification is a promising remedy to improve the correctness of intermediate reasoning steps. Techniques such as Chain-of-Thought prompting (Wei et al., 2022b) were introduced to produce a coherent series of intermediate reasoning steps to increase the success rate of following the right decoding path. Inspired by the fact that when faced with challenging tasks humans often step back and do abstractions to arrive at high-level concepts and principles to guide the process, we propose Step-Back Prompting to ground reasoning on abstractions to reduce the chance of making errors in the intermediate reasoning steps.
Among many of the cognitive skills, abstraction (Lachmy et al., 2022) is central to humans' ability to process vast amounts of information and derive general rules and principles. For example, Kepler compressed thousands of measurements into Kepler's three laws of planetary motion, which precisely describe the orbits of planets around the Sun (Russell, 1964). In critical decision making, humans find abstraction to be helpful since it provides a broader view of the environment. This work explores how LLMs can tackle complex tasks involving many low-level details through a two-step process of abstraction-and-reasoning. The first step is to teach LLMs to step back and derive high-level abstractions such as concepts and first principles from the specific example. The second step is to leverage the reasoning ability to ground the solution on the high-level concepts and first principles. We use few-shot exemplar demonstrations to execute Step-Back Prompting on LLMs.
We experiment across a range of tasks involving domain specific reasoning such as Physics and Chemistry, knowledge-intensive question answering requiring factual knowledge, multi-hop commonsense reasoning. We observe significant performance improvements (up to \(27\%\)) in PaLM-2L (Anil et al.,
Figure 2: Illustration of Step-Back Prompting with two steps of Abstraction and Reasoning guided by concepts and principles. _Top_: an example of MMLU high-school physics (Hendrycks et al., 2020) where the first principle of Ideal Gas Law is retrieved via abstraction. _Bottom_: an example from TimeQA (Chen et al., 2021) where the high-level concept of education history is a result of the abstraction. _Left_: PaLM-2L (Anil et al., 2023) fails to answer the original question. Chain-of-Thought prompting (Wei et al., 2022b; Kojima et al., 2022) ran into errors during intermediate reasoning steps (highlighted as red). _Right_: PaLM-2L (Anil et al., 2023) successfully answers the question via Step-Back Prompting.
2023), demonstrating the efficacy of Step-Back Prompting in tackling complex tasks that are otherwise challenging due to the amount of detail involved in reasoning through them. Figure 1 shows a summary of all the key results presented in this paper. Some of the tasks are very challenging: both PaLM-2L and GPT-4 achieve only \(\sim 40\%\) accuracy on TimeQA and MuSiQue. Chain-of-Thought prompting leads to a minor improvement on a few tasks, while Step-Back Prompting improves the performance of PaLM-2L across the board: \(7\%\) and \(11\%\) on MMLU Physics and Chemistry, \(27\%\) on TimeQA, and \(7\%\) on MuSiQue.
We conduct a variety of analyses and find that Step-Back Prompting has strong performance improvements (up to \(36\%\)) over chain-of-thought (CoT) prompting (Wei et al., 2022b) and take-a-deep-breath (TDB) prompting (Yang et al., 2023). We perform a qualitative evaluation where we find that Step-Back fixes a large portion of the errors of the base model (up to \(\sim 40\%\)) while introducing a small portion of new errors (max \(\sim 12\%\)). We also conduct an error analysis and find that the majority of the errors made by Step-Back Prompting are attributed to the intrinsic limitations of the reasoning capabilities of LLMs, while abstraction skills are relatively easy to teach LLMs, pointing out the direction for future improvements of methods like Step-Back Prompting.
## 2 Step-Back Prompting
Step-Back Prompting is motivated by the observation that many tasks contain a lot of details, making it hard for LLMs to retrieve the relevant facts needed to tackle the task. As shown in the first example (top) in Figure 2, for a Physics question of _"What happens to the pressure, P, of an ideal gas if the temperature is increased by a factor of 2 and the volume is increased by a factor of 8?"_, the LLM can deviate from the first principle of Ideal Gas Law when reasoning directly on the question. Similarly, a question of _"Estella Leopold went to which school between Aug 1954 and Nov 1954?"_ is very hard to address directly given the detailed time range constraint. In both cases, taking a step back and asking a step-back question helps the model solve the problem effectively.
We define a step-back question as a derived question from the original question at a higher level of abstraction. For instance, instead of directly asking _"which school Estella Leopold went to during a specific period"_, a step-back question (Figure 2 bottom) would ask about the _"education history"_, which is a high-level concept that encompasses the original question. Answering the step-back question of _"Estella Leopold's education history"_ in this case will provide all the necessary information to reason about _"which school Estella Leopold went to during a specific period"_. The premise is that the step-back question is often much easier to address than the original question. Grounding the reasoning on top of such abstractions helps to avoid reasoning errors in the intermediate steps, such as the example shown in Figure 2 (left) from Chain-of-Thought. In short, Step-Back Prompting consists of two simple steps (a minimal code sketch follows the list):
* **Abstraction**: Instead of addressing the question directly, we first prompt the LLM to ask a generic step-back question about a higher-level concept or principles, and retrieve relevant facts about the high-level concept or principles.
* **Reasoning**: Grounded on the facts regarding high-level concept or principles, the LLM can reason about the solution to the original question. We term this _Abstraction-grounded Reasoning_.
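As an illustration, the two steps can be organized as in the sketch below, where `llm` stands for any text-completion interface and the prompt strings are simplified placeholders rather than the exact few-shot prompts listed in the appendices.

```python
def step_back_answer(llm, question, abstraction_examples, reasoning_examples):
    """Two-step Step-Back Prompting: Abstraction, then Abstraction-grounded Reasoning.

    `llm` is assumed to be a callable mapping a prompt string to a completion.
    """
    # Step 1: Abstraction -- derive a higher-level step-back question and answer it.
    step_back_q = llm(f"{abstraction_examples}\n"
                      f"Original question: {question}\n"
                      f"Step-back question:")
    principles = llm(f"Answer the following question:\n{step_back_q}")

    # Step 2: Reasoning grounded on the high-level facts and principles.
    return llm(f"{reasoning_examples}\n"
               f"Relevant facts and principles:\n{principles}\n"
               f"Question: {question}\n"
               f"Answer:")
```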
In the following sections, we present an empirical study of Step-Back Prompting on a range of challenging tasks covering STEM, Knowledge QA and Multi-Hop Reasoning involving complex reasoning.
## 3 Experimental Setup
Here we define the tasks and models we experiment with. We also describe our evaluation metric and the baselines we consider.
### Tasks
We experiment with the following diverse tasks: (a) STEM, (b) Knowledge QA and (c) Multi-Hop Reasoning. We describe below the datasets we consider (see Appendix B for more details).
* **STEM**: MMLU (Hendrycks et al., 2020) contains a series of benchmarks across diverse domains to evaluate model's language understanding. We consider the high school physics and chemistry portions of MMLU because of the deep reasoning involved.
* **Knowledge QA**: We consider TimeQA (Chen et al., 2021) since it contains complex queries that requires challenging time-sensitive knowledge. We also experiment with SituatedQA (Zhang and Choi, 2021), another challenging open-retrieval QA dataset requiring model to answer questions given temporal or geographical contexts.
* **Multi-Hop Reasoning**: We experiment with MuSiQue (Trivedi et al., 2022), a hard multihop reasoning dataset created via composable pairs of single-hop questions, and StrategyQA (Geva et al., 2021) with open-domain questions that demands some strategy to solve.
### Models
We use the following state of the art LLMs: PaLM-2L (Anil et al., 2023) and GPT-4 (OpenAI, 2023). We experiment with a variety of baselines with an instruction-tuned PaLM-2L model.
### Evaluation
Conventional evaluation metrics such as accuracy and F1 score have limitations for evaluating the generations of state-of-the-art LLMs, since these models often generate long-form answers that such metrics fail to capture. We instead conduct evaluation using the PaLM-2L model, where we few-shot prompt the model to identify equivalence between target answers and the model predictions. Few-shot examples, prompts and other details we use for this evaluation are in Appendix C.
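A minimal sketch of this evaluation protocol is given below; `llm` is an arbitrary completion function and the prompt wording is a simplified stand-in for the actual few-shot prompt in Appendix C.

```python
def judged_correct(llm, question, target, prediction, few_shot_examples=""):
    """Few-shot LLM-based equivalence check between a target answer and a model prediction."""
    prompt = (f"{few_shot_examples}\n"
              f"Question: {question}\n"
              f"Gold answer: {target}\n"
              f"Predicted answer: {prediction}\n"
              f"Are the two answers equivalent? Answer yes or no:")
    return llm(prompt).strip().lower().startswith("yes")

def accuracy(llm, examples):
    """Accuracy over (question, target, prediction) triples using the LLM judge."""
    return sum(judged_correct(llm, q, t, p) for q, t, p in examples) / len(examples)
```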
### Baseline Methods
* **PaLM-2L, PaLM-2L 1-shot**: PaLM-2L is either queried directly with the question or has a single demonstration exemplar of question-answer included in the prompt.
* **PaLM-2L + CoT, PaLM-2L + CoT 1-shot**: The PaLM-2L model is queried with zero-shot CoT prompting (Kojima et al., 2022): "_Let's think step by step_" is appended to the question. For 1-shot, one demonstration example of a question and answer pair is provided in the prompt, where the answer is in the style of CoT (Wei et al., 2022b) with intermediate reasoning steps.
* **PaLM-2L + TDB**: Zero-shot prompting with "_Take a deep breath and work on this problem step-by-step._" (Yang et al., 2023) prepended to the question.
* **PaLM-2L + RAG**: For Sections 5 and 6, we use retrieval-augmented generation (RAG) where the relevant passage retrieved is used as context by the LLM.
* **GPT-4**: GPT-4 API is directly queried.
We do not use RAG for MMLU, because of the inherent reasoning nature of this benchmark contrary to the other fact-seeking datasets. All inferences are done using greedy decoding.
## 4 Stem
We evaluate Step-Back prompting on STEM tasks (Hendrycks et al., 2020) to gauge the efficacy of our method on reasoning in highly-specialized domains. We explain below our experimental setup, result and analysis of applying Step-Back prompting on the MMLU high-school Physics and Chemistry benchmarks.
### Step-Back prompting
Questions in the MMLU benchmarks require deeper reasoning. Furthermore, they also require understanding and application of formulae which are often physics and chemistry principles and concepts. In this case, we first teach the model to do abstraction in the form of concepts and first principles such as _Newton's first law of motion_, _Doppler effect_, and _Gibbs free energy_ etc. The implicit step-back question here is "_what are the physics or chemistry principles and concepts involved in
solving this task?_". We provide demonstrations to teach the model to recite from its own knowledge relevant principles for solving the task (see Appendix D.1 for few-shot exemplars).
### Results
Table 1 illustrates model performance across various setups. PaLM-2L baseline performance is \(66.4\%\) and \(70.9\%\) on Physics and Chemistry, respectively. We find that CoT and TDB zero-shot prompting do not significantly increase model performance, which could be due to the inherent hardness of, and deep reasoning associated with, these tasks. In addition, PaLM-2L 1-shot and PaLM-2L + CoT 1-shot do not improve much over the baseline, highlighting the challenge of demonstrating the reasoning steps to the model. In contrast, Step-Back Prompting significantly improves model performance: +7% and +11% compared to PaLM-2L, achieving state-of-the-art performance surpassing GPT-4.
### Ablation and Analysis
**Few-shot Ablation**: First, in Figure 3 we observe that Step-Back Prompting is robust to the number of few-shot exemplars of (question, principles) pairs used as demonstrations. Adding demonstration examples beyond a single one does not bring further gains. This indicates that the task of retrieving the relevant principles and concepts is relatively easy to learn and a single demonstration suffices.
**Error Analysis**: Figure 4 (left) shows the error analysis of the predictions of Step-Back Prompting compared to the baseline PaLM-2L model for MMLU high-school Physics: Step-Back Prompting corrects \(20.5\%\) errors from the baseline while introducing \(11.9\%\) errors.
To further understand where the errors come from in Step-Back Prompting, we annotate all the wrong predictions of Step-Back Prompting in the test set, and categorize them into 5 classes (see Appendix E.1 for examples in each class):
* **Principle Error**: The error happens at the step of Abstraction, where the first principles generated by models are wrong or incomplete.
* **Factual Error**: There is at least one factual error when the model recites its own factual knowledge.
* **Math Error**: There is at least one math error in the intermediate steps when math calculations are involved in deriving the final answer.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Method & MMLU Physics & MMLU Chemistry \\ \hline PaLM-2L & 66.4\% (0.8\%) & 70.9\% (0.9\%) \\ PaLM-2L 1-shot & 64\% (1.6\%) & 75.6\% (0.4\%) \\ PaLM-2L + CoT & 65\% (2\%) & 75.3\% (1.5\%) \\ PaLM-2L + CoT 1-shot & 61.5\% (1.8\%) & 76.6\% (1\%) \\ PaLM-2L + TDB & 65.7\% (0.7\%) & 73.8\% (1.1\%) \\ PaLM-2L + Step-Back (ours) & **73.2\%** (1.9\%) & **81.8\%** (1.4\%) \\ \hline GPT-4 & 70.3\% (2.3\%) & 79.9\% (1.0\%) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Strong performance of Step-Back Prompting on STEM tasks, achieving state-of-the-art performance surpassing GPT-4. CoT: zero-shot Chain of Thought prompting (Kojima et al., 2022), TDB: Take a Deep Breath prompting (Yang et al., 2023). The table reports the average accuracy over 5 evaluation runs, with standard deviations in the parentheses.
Figure 3: Ablation study of Step-Back Prompting accuracy on MMLU high-school Physics against number of few shot exemplars: robust performance with respect to varying number of shots.
* **Context Loss**: There is at least one error when the model response loses context from the question, and deviates from addressing the original question.
* **Reasoning Error**: We define Reasoning Error as the case where the model makes an error in the intermediate reasoning steps before arriving at the final answer.
All five types of errors occur during the Reasoning step except _Principle Error_, which points to a failure of the Abstraction step. As shown in Figure 4 (right), _Principle Error_ in fact comprises only a small fraction of the errors the model makes: more than \(90\%\) of the errors happen at the Reasoning step. Among the four error types during Reasoning, _Reasoning Error_ and _Math Error_ are the major loss buckets. This corroborates the finding in the ablation study above that very few exemplars are needed to teach LLMs the Abstraction skill. The Reasoning step is still the bottleneck for how well Step-Back Prompting can perform on tasks such as MMLU that require complex reasoning. For MMLU Physics specifically, the Reasoning and Math skills are critical for solving the problems successfully: even if the first principles are retrieved correctly, deep reasoning and math are involved in deriving a correct final answer through a typical multi-step reasoning process.
## 5 Knowledge QA
We evaluate Step-Back Prompting on question answering benchmarks requiring intensive factual knowledge. Knowledge QA has been challenging for LLMs. In this section, we first describe the experimental setup, followed by results and analysis on Step-Back Prompting.
\begin{table}
\begin{tabular}{l|c|c c|c} \hline Method & TimeQA & TQA Easy & TQA Hard & SituatedQA \\ \hline PaLM-2L & 41.5\% & 42.6\% & 40.4\% & 54.3\% (0.3\%) \\ PaLM-2L 1-shot & 40.7\% & 41.7\% & 39.1\% & 51.8\% (0.6\%) \\ PaLM-2L + CoT & 40.8\% & 41.8\% & 39.8\% & 56.4\% (0.2\%) \\ PaLM-2L + CoT 1-shot & 38.1\% & 39.3\% & 36.8\% & 54\% (0.8\%) \\ PaLM-2L + TDB & 40.9\% & 42.6\% & 39.1\% & 54\% (0.5\%) \\ PaLM-2L + RAG & 57.4\% & 67.8\% & 46.8\% & 59.3\% (0.4\%) \\ PaLM-2L + Step-Back (ours) & 66\% & 70.4\% & 61.6\% & 57.5\% (0.3\%) \\ PaLM-2L + Step-Back + RAG (ours) & **68.7\%** & **75.2\%** & **62.3\%** & 61\% (0.4\%) \\ \hline GPT-4 & 45.6\% & 48.9\% & 42.6\% & **63.2\%** (0.4\%) \\ \hline \end{tabular}
\end{table}
Table 2: Strong performance of Step-Back Prompting on Knowledge QA tasks. CoT: Chain of Thought prompting, TDB: Take a Deep Breath prompting, RAG: retrieval-augmented generation. Step-Back Prompting results in significant performance improvements.
Figure 4: Error Analysis of Step-Back Prompting on MMLU high-school Physics. _Left_: example categories in four buckets regarding whether the baseline or Step-Back prediction is right or wrong. _Right_: five classes of errors Step-Back makes with Reasoning being the dominating class.
### Step-Back Prompting
We evaluate Step-Back Prompting on TimeQA (Chen et al., 2021) and SituatedQA (Zhang and Choi, 2021) in the Knowledge QA category. We first teach the LLMs to do Abstraction. The step-back question "_What was Estella Leopold's education history_" in Figure 2 is generated by the LLM through few-shot demonstrations (see Appendix D.2 for details). Given the knowledge-intensive nature of these queries, we use retrieval augmentation (RAG) in combination with Step-Back Prompting. The step-back question is used to retrieve relevant facts, which works as additional context (see Table 12 for the prompting template) to ground the final reasoning step.
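Concretely, the combination can be organized as in the sketch below; `retrieve` stands for an arbitrary retriever over the corpus and the prompt strings are simplified placeholders rather than the template in Table 12.

```python
def step_back_rag_answer(llm, retrieve, question, abstraction_examples, k=5):
    """Step-Back Prompting combined with retrieval augmentation for Knowledge QA.

    `llm` maps a prompt to a completion and `retrieve(query, k)` returns k relevant
    passages; both interfaces are assumptions made for illustration.
    """
    # Abstraction: generate the step-back question via few-shot demonstrations.
    step_back_q = llm(f"{abstraction_examples}\n"
                      f"Original question: {question}\n"
                      f"Step-back question:")
    # Retrieval: fetch facts about the high-level concept (e.g., an education history).
    context = "\n".join(retrieve(step_back_q, k))
    # Abstraction-grounded reasoning over the retrieved context.
    return llm(f"Context:\n{context}\n\n"
               f"Question: {question}\n"
               f"Answer:")
```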
### Results
We evaluate the models on the test-set of TimeQA. As shown in Table 2, the baseline models of GPT-4 and PaLM-2L achieved \(45.6\%\) and \(41.5\%\), highlighting the difficulty of the task. Applying either CoT or TDB zero-shot (and one-shot) prompting to the baseline model shows no improvement. In contrast, augmenting the baseline model by regular retrieval augmentation (RAG) improves the accuracy to \(57.4\%\), highlighting the factual intensive nature of the task. The result of Step-Back + RAG shows the effectiveness of going back to a high-level concept, which enables much more reliable retrieval augmentation: the accuracy on TimeQA achieves a remarkable \(68.7\%\).
Next, we segment TimeQA into the Easy and Hard difficulty levels provided in the original dataset. As expected, all methods perform worse on the Hard segment. While RAG can improve the Easy accuracy from \(42.6\%\) to \(67.8\%\), the improvement is much smaller on the Hard accuracy: \(40.4\%\) to \(46.8\%\). This is where Step-Back Prompting really shines by retrieving facts regarding high-level concepts to ground the final reasoning: Step-Back + RAG further improves the Hard accuracy to \(62.3\%\), outperforming \(42.6\%\) from GPT-4. We hypothesize that facts regarding high-level concepts (such as _education history_) are much more accessible than low-level details.
On the SituatedQA benchmark, we observe a moderate quality gain from \(54.3\%\) to our best method of Step-Back + RAG \(61\%\) with a small gap to GPT-4's \(63.2\%\). Similar to TimeQA, prompting techniques such as CoT and TDB don't help significantly for SituatedQA.
### Ablation and Analysis
**Few-shot Ablation**: We observe in Figure 5 (left) that the performance of Step-Back Prompting is robust against the number of exemplars used in demonstration, highlighting again the sample efficiency of learning Abstraction skills for models like PaLM-2L.
**Error Analysis:** Figure 5 (right) shows the breakdown of the all the remaining errors made by Step-Back Prompting predictions. Similar to Section 4.3, we categorize the errors:
\(\bullet\)**StepBack**: The step-back question generated is not helpful in solving the task.
\(\bullet\)**RAG**: RAG fails to retrieve relevant information even though the step-back question is on target.
\(\bullet\)**Scoring Error**: The evaluation by the judge model made a mistake.
Figure 5: Ablation and error analysis of Step-Back Prompting on TimeQA. _Left_: ablation against number of few-shot exemplars. _Right_: four classes of errors Step-Back makes with Reasoning and RAG being the dominating error sources.
* **Reasoning Error**: The retrieved context is relevant, but the model still fails to reason through the context to arrive at the right answer.
StepBack rarely fails. In contrast, we find that more than half of the errors are due to reasoning errors. \(45\%\) of the errors are due to failure in retrieving the right information, even though the abstraction provided by step-back makes retrieval a much easier task. This reflects the difficulty of the TimeQA task. Additional error analysis of TimeQA is in Appendix A.
## 6 Multi-Hop Reasoning
We evaluate Step-Back Prompting on the challenging multi-hop reasoning benchmarks MuSiQue (Trivedi et al., 2022) and StrategyQA (Geva et al., 2021). We follow the same protocol as in Section 5 to implement Step-Back Prompting.
### Results
Table 3 shows the performance of various baselines on the dev sets of MuSiQue and StrategyQA. Baseline performance of PaLM-2L and GPT-4 is low on MuSiQue (\(35.5\%\) and \(38.5\%\), respectively) since it is a hard multi-hop reasoning benchmark. In contrast, StrategyQA has stronger baselines (\(82.8\%\) and \(78.3\%\) for PaLM-2L and GPT-4, respectively), probably because of its binary classification format. CoT and TDB improve model performance a bit in the case of MuSiQue (\(\sim\) 3% and 3.5%, respectively), which can be attributed to the inherent reasoning nature of this task, where these methods are shown to be helpful. In the case of StrategyQA, there is no significant performance gain with CoT and TDB, which could be due to the high baseline performance on this task, leaving limited scope for these prompting methods to improve performance. Often, 1-shot performance is significantly lower than the corresponding zero-shot methods, which could be attributed to potential example bias (Zhao et al., 2021; Parmar et al., 2023). RAG improves model performance (\(\sim\) 4% and 2% for MuSiQue and StrategyQA, respectively). Step-Back Prompting, with the power of abstraction, produces the best performance of all methods: \(42.8\%\) on MuSiQue and \(86.4\%\) on StrategyQA, significantly outperforming GPT-4 on both tasks.
### Analysis
Similar to our observation in previous sections, we find that Step-Back Prompting with RAG is able to turn \(15.4\%\) wrong predictions of base model into correct predictions, while leading to \(6.1\%\) errors the other way around. Furthermore, Step-Back + RAG fixes \(12.7\%\) errors coming from RAG. The errors introduced to RAG by Step-Back is just \(4.4\%\). More detailed analysis is in Appendix A.2.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Method & MuSiQue & StrategyQA \\ \hline PaLM-2L & 35.5\% (3\%) & 82.8\% (0.7\%) \\ PaLM-2L 1-shot & 29.0\% (0.5\%) & 76.6\% (0.5\%) \\ PaLM-2L + CoT & 38.7\% (3.2\%) & 83.6\% (0.4\%) \\ PaLM-2L + CoT 1-shot & 38.5\% (2.2\%) & 76.8\% (1.4\%) \\ PaLM-2L + TDB & 39.0\% (2.3\%) & 82.7\% (0.9\%) \\ PaLM-2L + RAG & 39.6\% (2.8\%) & 84.2\% (0.5\%) \\ PaLM-2L + Step-Back (ours) & 42.6\% (3.1\%) & 82.7\% (0.4\%) \\ PaLM-2L + Step-Back + RAG (ours) & **42.8\%** (2.0\%) & **86.4\%** (1\%) \\ \hline GPT-4 & 38.5\% (0.2\%) & 78.3\% (1.1\%) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of Step-Back Prompting on Multi-Hop Reasoning. CoT: Chain of Thought prompting, TDB: Take a Deep Breath prompting, RAG: retrieval-augmented generation. Average accuracy is over 5 evaluation runs with the standard deviations included in the parentheses.
## 7 Discussion
Abstraction helps humans solve complex tasks by removing irrelevant details and distilling the high-level concepts and principles that guide the problem-solving process. Step-Back Prompting breaks complex tasks such as knowledge-intensive QA, multi-hop reasoning and science questions into two separate steps of Abstraction and Reasoning. We demonstrate through empirical experiments that Abstraction is an easy skill to teach LLMs such as PaLM-2L via sample-efficient demonstrations. Grounding on the high-level concepts and principles, LLMs can leverage their intrinsic Reasoning capabilities to derive the solution. This reduces the chance of reasoning failures in the intermediate steps, and is shown to improve the performance on a wide range of complex reasoning tasks. Despite the success, through error analysis we find that Reasoning is still one of the hardest skills for LLMs to acquire, as it remains the dominating failure mode even after the large reduction in task complexity achieved by Step-Back Prompting.
Nevertheless, Abstraction is neither necessary nor possible in all scenarios. For instance, the task can be as simple as _who was the president of the United States in 2000?_, in which case there is no need to step back and ask a high-level question, as the answer to such questions is readily available. Similarly, questions such as _what is the speed of light?_ point to the first principles themselves. Doing Abstraction in this case would not make a difference.
## 8 Related Work
Step-Back Prompting is related to the literature of prompting and decomposition.
### Prompting
Few-shot prompting (Brown et al., 2020; Liu et al., 2023; Mishra et al., 2022; Wei et al., 2022b) has significantly improved model performance across a range of tasks without requiring any model parameter updates. Our work, Step-Back Prompting, is in the same category as chain-of-thought prompting (Wei et al., 2022b) and scratchpad (Nye et al., 2021) owing to its simplicity and generic nature; however, it focuses on the key idea of abstraction, inspired by the fact that taking a step back and looking at the broader picture often helps humans perform complex tasks. Our work is also related to recitation-augmented language models (Sun et al., 2022); however, in contrast to their work, we explicitly perform step-back and abstraction, with optional use of retrieval augmentation depending on the nature of the task at hand.
### Decomposition
Decomposing a task into simpler sub-tasks and solving those sub-tasks has been an effective way (Zhou et al., 2022; Patel et al., 2022; Khot et al., 2022; Press et al., 2022) to improve model performance on complex tasks, and several prompting methods based on this idea have been successful. Our work, Step-Back Prompting, in contrast, is about making the question more abstract and high level, which is different from decomposition, which typically produces low-level breakdowns of the original question. Furthermore, abstract questions such as _what is the employment history of person X?_ are often generic in nature, so there is a many-to-one mapping: many questions (e.g. _which employer did X work for in 1990?_ and _which employer did X work for in 2000?_) can share the same abstract question. This is in contrast to decomposition, where there is often a one-to-many mapping, since multiple decomposed sub-problems are necessary to solve a given question.
## 9 Conclusion
We introduce Step-Back Prompting as a simple and generic method to elicit deep reasoning via abstraction in large language models. Experiments with LLMs across fact-seeking, commonsense reasoning, and domain-specific reasoning benchmarks show that Step-Back Prompting significantly improves model performance. We hypothesize that abstraction helps models hallucinate less and reason better, possibly reflecting capabilities of the model that are often hidden when responding to the original question without abstraction. We hope our work will inspire more human-inspired approaches to eliciting the hidden potential of large language models. |
2307.02966 | Diagnostics for categorical response models based on quantile residuals
and distance measures | Polytomous categorical data are frequent in studies, that can be obtained
with an individual or grouped structure. In both structures, the generalized
logit model is commonly used to relate the covariates on the response variable.
After fitting a model, one of the challenges is the definition of an
appropriate residual and choosing diagnostic techniques. Since the polytomous
variable is multivariate, raw, Pearson, or deviance residuals are vectors and
their asymptotic distribution is generally unknown, which leads to difficulties
in graphical visualization and interpretation. Therefore, the definition of
appropriate residuals and the choice of the correct analysis in diagnostic
tools is important, especially for nominal data, where a restriction of methods
is observed. This paper proposes the use of randomized quantile residuals
associated with individual and grouped nominal data, as well as Euclidean and
Mahalanobis distance measures, as an alternative to reduce the dimension of the
residuals. We developed simulation studies with both data structures
associated. The half-normal plots with simulation envelopes were used to assess
model performance. These studies demonstrated a good performance of the
quantile residuals, and the distance measurements allowed a better
interpretation of the graphical techniques. We illustrate the proposed
procedures with two applications to real data. | Patrícia Peres Araripe, Idemauro Antonio Rodrigues de Lara, Gabriel Rodrigues Palma, Niamh Cahill, Rafael de Andrade Moral | 2023-07-06T13:05:10Z | http://arxiv.org/abs/2307.02966v1 | # Diagnostics for categorical response models based on quantile residuals and distance measures
###### Abstract
Polytomous categorical data, on a nominal or ordinal scale, are frequent in many studies in different areas of knowledge. Depending on the experimental design, these data can be obtained with an individual or grouped structure. In both structures, the multinomial distribution may be suitable to model the response variable and, in general, the generalized logit model is used to relate the potential effects of covariates to the response variable. After fitting a multi-categorical model, one of the challenges is defining an appropriate residual and choosing diagnostic techniques to assess goodness-of-fit and validate inferences based on the model. Since the polytomous variable is multivariate, raw, Pearson, or deviance residuals are vectors and their asymptotic distribution is generally unknown, which leads to difficulties in graphical visualization and interpretation. Therefore, defining appropriate residuals and choosing the correct diagnostic tools is very important, especially for nominal categorical data, for which few methods are available. This paper proposes the use of randomized quantile residuals for individual and grouped nominal data, as well as Euclidean and Mahalanobis distance measures for grouped data only, as an alternative to reduce the dimension of the residuals and to study outliers. To show the effectiveness of the proposed methods, we developed simulation studies with individual and grouped categorical data structures associated with generalized logit models. Parameter estimation was carried out by maximum likelihood, and half-normal plots with simulation envelopes were used to assess model performance using residuals and distance metrics. These studies demonstrated a good performance of the quantile residuals, and the distance measures allowed a better interpretation of the graphical techniques. We illustrate the proposed procedures with two applications to real data, for which the employed techniques validated the model choice.
Original Research Article
Keywords: Generalized logit model; maximum likelihood; model selection; half-normal plot; normality.
## 1 Introduction
Nominal polytomous variables are defined by a finite set of categories (more than two), being of interest in experimental design in many disciplines, especially in biological
and agricultural sciences. The subjects can be an individual (a plant, an insect, or an animal) or groups of individuals (a stall with animals, a cage with insects, or a plant with its branches), in which the categorical responses are observed. Therefore, the categorical data structure can be individual or grouped, depending on experimental design and goals of the study. Independent of structure, in general, the multinomial distribution (or an extension of it) is assumed to model the response variable, and the generalized logit model is used to describe the relationship between the polytomous response and covariates [3].
Additionally, the assumptions of the fitted model must be verified to validate statistical inference. In this process, residual analysis is fundamental. The first step involves an appropriate definition for the residuals and, after that, formal (hypothesis tests) and informal (exploratory plots) techniques can be used to assess goodness-of-fit and model assumptions. According to [12], residual analyses are essential to identify discrepancies between the model and the data, detecting outliers and influential points. For example, the deviance and Pearson statistics are quantitative measures widely used to test the goodness-of-fit of generalized linear models (GLMs), however they can only be applied to multinomial data in the grouped structure and are not reliable for small sample sizes [41]. In fact, residual analysis is still a challenge for the multinomial case. Since the polytomous response is multivariate, the ordinary residual, defined by the difference between the observed response and the estimated probabilities, is a vector for each individual, with a dimension defined by the number of categories. In addition, these residuals have an unknown asymptotic distribution, making them difficult to interpret in diagnostic plots [34]. It is important, therefore, to find or adapt diagnostic techniques to overcome these limitations.
A few alternatives have been proposed: the first is to reduce the number of categories (grouping them into two) and to carry out residual analysis by means of logistic regression, whose techniques are well consolidated (e.g. [32], [24], [18]). However, grouping categories leads to loss of information. Another alternative would be to fit the generalized logit models (for pairs of variables) separately, define residuals for each sub-model, and apply diagnostic tools, as proposed by [39] for three categories. However, the maximum likelihood estimates from the separately fitted models differ from those obtained in the simultaneous fit, and their standard errors tend to be larger [3].
For ordinal responses, [6] defined a continuous residual vector for the individual structure, with three categories, based on the methodology of [26]. They also presented the deviance and Pearson residual vectors and plots of residuals versus covariates. However, this technique is not suitable for the nominal case. For nominal data with a grouped structure, [36] defined a vector of residuals based on the projected residuals presented by [7], and Pearson residuals were presented by [16] to detect influential points. However, these methodologies require theoretical development and are not implemented in statistical software.
A residual defined for a broad class of models that can be easily implemented is the randomized quantile residual [10]. For discrete data, these residuals are an extension of the quantile residuals for continuous data and they follow approximately a normal distribution if the estimated parameters are consistent, but it is important to investigate their properties in small sample sizes [31, 12]. Therefore, the quantile residuals are an alternative for the multinomial case associated with generalized logit models, but there is a lack of investigation into their performance. In addition to the adoption of quantile residuals, another alternative is to use distance metrics, such as the Euclidean and Mahalanobis distances, to reduce the dimension of the ordinary residuals for diagnostic analyses. These metrics are widespread in the literature on multivariate analysis, when
calculating how far two individuals are in the original variable space (e.g. by using principal components and cluster analysis) [20]. In the context of diagnostics, these distances have already been used to detect outliers in linear regression ([17] and [14]). However, there are no records of their use in models for nominal data.
Here, we propose the adoption of quantile residuals and the use of multivariate distance metrics. Our objectives are: (i) to assess the normality of randomized quantile residuals for nominal categorical models; and (ii) to reduce the dimension of ordinary residuals associated with nominal data using Euclidean and Mahalanobis distances for grouped structures. We review models and residuals for nominal polytomous data in Section 2. Then, we present the randomized quantile residual and the distance metrics in Sections 3 and 4, respectively. The framework based on randomized quantile residuals and distances for nominal responses, which are the contributions of this work, are presented in Section 5. We present results of simulation studies in Section 6, and illustrate with two applications from the literature in Section 7. Finally, we present concluding remarks in Section 8.
## 2 Models and residuals for nominal polytomous data
Statistical models for polytomous data (nominal or ordinal) are based on the multinomial probability distribution [2]. The definition of the linear predictor structure is essential when defining the model, and influences the construction of residuals, as well as the diagnostic techniques.
### Nominal data structures
It is important to distinguish between individual and grouped data structures. To establish the notation, consider a sample of subjects, \(i=1,2,\ldots,n\), and the set of \(J\) categories \(A=\{1,2,\ldots,J\}\). In the individual case, each subject is a single individual, which is classified in some category of set \(A\). Then, the random vector referring to individual \(i\) is given by \(\mathbf{Y}_{i}=(Y_{i1},\ldots,Y_{iJ})^{\prime}\), where \(Y_{ij}=1\) if the response of individual \(i\) is in category \(j\), \(j=1,2,\ldots,J\), and \(Y_{ij}=0\) otherwise, with \(\sum\limits_{j=1}^{J}{Y_{ij}=1}\). For the grouped case, each subject is composed of a group of \(m_{i}\) individuals. Then, the random variable \(Y_{ij}\) represents the number of times category \(j\) was observed in the \(m_{i}\) individuals, with \(\sum\limits_{j=1}^{J}{Y_{ij}=m_{i}}\). In both cases, we have a multinomial trial, that is, it is assumed that the random vector \(\mathbf{Y}_{i}\) follows a multinomial distribution, \(\mathbf{Y}_{i}\sim\text{Multi}(m_{i},\boldsymbol{\pi}_{i})\), with parameters \(m_{i}\) and \(\boldsymbol{\pi}_{i}=(\pi_{i1},\ldots,\pi_{iJ})^{\prime}\), restricted to \(\sum\limits_{j=1}^{J}{\pi_{ij}=1}\).
### Generalized logit model
The multinomial distribution belongs to the canonical multi-parametric exponential family, with vector of canonical parameters \(\boldsymbol{\theta}=\left[\log\left(\frac{\pi_{1}}{\pi_{J}}\right),\ldots, \log\left(\frac{\pi_{J-1}}{\pi_{J}}\right)\right]^{\prime}\), whose components \(\theta_{j}=\log\left(\frac{\pi_{j}}{\pi_{J}}\right)\), \(j=1,\ldots,J-1\), are the canonical link functions. Considering a random sample of \(n\) subjects, \(i=1,2,\ldots,n\), the generalized logit model is defined
as
\[\text{logit}\left[\pi_{ij}(\mathbf{x}_{i})\right]=\log\left[\frac{\pi_{ij}(\mathbf{ x}_{i})}{\pi_{iJ}(\mathbf{x}_{i})}\right]=\alpha_{j}+\sum\limits_{k=1}^{p}\beta_{jk}x_{ ik}=\alpha_{j}+\boldsymbol{\beta}_{j}^{\prime}\mathbf{x}_{i},\ \ j=1,\ldots,J-1, \tag{1}\]
where \(J\) is the number of categories, \(\pi_{ij}(\mathbf{x}_{i})\) is the probability of a response of individual \(i\) in the \(j\)-th category, \(\mathbf{x}_{i}=(x_{i1},x_{i2},\ldots,x_{ip})^{\prime}\) is the vector of covariates, \(\boldsymbol{\beta}_{j}=(\beta_{j1},\beta_{j2},\ldots,\beta_{jp})^{\prime}\) represents the vector of parameters, and \(\alpha_{j}\) is the intercept. According to [3], the covariates can be quantitative, factors (using dummy variables), or both. Model (1) compares each category with one chosen as the reference, generally the first or the last category, although this choice is arbitrary [40, 5]. Also, from equation (1), we have:
\[\pi_{ij}(\mathbf{x}_{i})=\frac{\exp\left(\alpha_{j}+\boldsymbol{\beta}_{j}^{ \prime}\mathbf{x}_{i}\right)}{1+\sum\limits_{j=1}^{J-1}\exp\left(\alpha_{j}+ \boldsymbol{\beta}_{j}^{\prime}\mathbf{x}_{i}\right)},\ \ j=1,\ldots,J-1, \tag{2}\]
and the probability for the reference category:
\[\pi_{iJ}(\mathbf{x}_{i})=1-\left[\pi_{i1}(\mathbf{x}_{i})+\cdots+\pi_{i(J-1)} (\mathbf{x}_{i})\right]=\frac{1}{1+\sum\limits_{j=1}^{J-1}\exp\left(\alpha_{j} +\boldsymbol{\beta}_{j}^{\prime}\mathbf{x}_{i}\right)}. \tag{3}\]
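For concreteness, the short R sketch below (not part of the original paper) evaluates eqs. (2)-(3) for a single covariate; the function name and all parameter values are illustrative only.

```r
# Minimal sketch: category probabilities of the generalized logit model,
# eqs. (2)-(3), for a single covariate; the last returned entry is the
# reference-category probability. Names and values are illustrative.
multinom_probs <- function(alpha, beta, x) {
  eta <- alpha + beta * x          # linear predictors of eq. (1), j = 1, ..., J-1
  denom <- 1 + sum(exp(eta))       # common denominator of eqs. (2)-(3)
  c(exp(eta) / denom, 1 / denom)   # (pi_1, ..., pi_{J-1}, pi_J); sums to 1
}

multinom_probs(alpha = c(1.5, 3.0), beta = c(-3.0, -5.0), x = 0.2)  # J = 3 example
```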
The parameter estimation process is done by maximum likelihood, maximizing the likelihood written in terms of the \(\pi_{ij}(\mathbf{x}_{i})\) while simultaneously satisfying the \(J-1\) equations that specify the model.
We present here a brief summary of the process, distinguishing the individual and grouped structures. First, consider the data with individual structure with the observed vector \(\mathbf{y}_{i}=(y_{i1},\ldots,y_{iJ})\) satisfying \(\sum\limits_{j=1}^{J}y_{ij}=1\), with mean \(\text{E}(Y_{ij}|\mathbf{x}_{i})=\pi_{ij}(\mathbf{x}_{i})\), \(j=1,2,\ldots,J\). Then, the log-likelihood function is given by
\[l=\log\prod\limits_{i=1}^{n}\Bigg{\{}\prod\limits_{j=1}^{J}\left[\pi_{ij}\left( \mathbf{x}_{i}\right)\right]^{y_{ij}}\Bigg{\}}=\log\prod\limits_{i=1}^{n} \Bigg{\{}\prod\limits_{j=1}^{J-1}\left[\pi_{ij}\left(\mathbf{x}_{i}\right) \right]^{y_{ij}}\left[\pi_{iJ}\left(\mathbf{x}_{i}\right)\right]^{y_{iJ}} \Bigg{\}}.\]
Using (2) and (3) we have
\[l=\sum\limits_{i=1}^{n}\Bigg{\{}\sum\limits_{j=1}^{J-1}y_{ij}\left(\alpha_{j} +\boldsymbol{\beta}_{j}^{\prime}\mathbf{x}_{i}\right)-\log\Bigg{[}1+\sum \limits_{j=1}^{J-1}\exp(\alpha_{j}+\boldsymbol{\beta}_{j}^{\prime}\mathbf{x}_{ i})\Bigg{]}\Bigg{\}}.\]
Now, considering the grouped data where the observed vector \(\mathbf{y}_{i}=(y_{i1},\ldots,y_{iJ})\) satisfies \(\sum\limits_{j=1}^{J}y_{ij}=m_{i}\), with mean \(\text{E}(Y_{ij}|\mathbf{x}_{i})=m_{i}\pi_{ij}(\mathbf{x}_{i})\), \(j=1,\ldots,J\), the log-likelihood
is given by
\[l^{*} =\sum\limits_{i=1}^{n}\Bigg{\{}\sum\limits_{j=1}^{J}y_{ij}\log\left[ \pi_{ij}\left(\mathbf{x}_{i}\right)\right]+\log\left[\frac{m_{i}!}{y_{i1}!\dots y _{iJ}!}\right]\Bigg{\}}\] \[=\sum\limits_{i=1}^{n}\Bigg{\{}\sum\limits_{j=1}^{J-1}y_{ij}\log \left[\pi_{ij}\left(\mathbf{x}_{i}\right)\right]+y_{iJ}\log\left[\pi_{iJ}\left( \mathbf{x}_{i}\right)\right]+\log\left[\frac{m_{i}!}{y_{i1}!\dots y_{iJ}!} \right]\Bigg{\}}.\]
and, similarly to the individual process, by successive substitutions one finds:
\[l^{*}=\sum\limits_{i=1}^{n}\Bigg{\{}\sum\limits_{j=1}^{J-1}y_{ij}(\alpha_{j}+ \boldsymbol{\beta}_{j}^{\prime}\mathbf{x}_{i})-m_{i}\log\left[1+\sum\limits_ {j=1}^{J-1}\exp(\alpha_{j}+\boldsymbol{\beta}_{j}^{\prime}\mathbf{x}_{i}) \right]+\log\left[\frac{m_{i}!}{y_{i1}!\dots y_{iJ}!}\right]\Bigg{\}}.\]
An iterative method such as Newton-Raphson can be used to maximize \(l\) and \(l^{*}\) to obtain the maximum likelihood estimates [41]. More details can be found in [2].
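In practice this maximization is rarely coded by hand. The sketch below fits the model of eq. (1) with nnet::multinom(), the routine used in the simulations of Section 6; the data are simulated here purely to make the example self-contained, and all parameter values are illustrative.

```r
# Sketch: maximum likelihood fitting of a generalized logit model with nnet.
library(nnet)

set.seed(1)
n <- 200
x <- rnorm(n)
eta2 <- 1.5 - 3.0 * x                        # logit of category 2 vs the reference
eta3 <- 3.0 - 5.0 * x                        # logit of category 3 vs the reference
p <- cbind(1, exp(eta2), exp(eta3))
p <- p / rowSums(p)                          # eqs. (2)-(3), reference in column 1
y <- apply(p, 1, function(pr) sample(1:3, 1, prob = pr))

fit <- multinom(factor(y) ~ x, trace = FALSE)  # maximizes l by iterative optimization
summary(fit)$coefficients                       # estimates of alpha_j and beta_j
```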
### Residuals associated with models for nominal categorical data
An important step in model diagnostic checking is residuals analysis, used to validate model assumptions and detect outliers or influential points [30]. The definition of the residuals as well as the analytical techniques are essential tools that contribute to this.
#### 2.3.1 Residuals for individual data
The ordinary residuals measure the deviations between the observed values and the predicted probabilities. For model (1) they are vectors of dimension \(J\times 1\) per individual, \(i=1,2,\dots,n\), given by [34]
\[\hat{\mathbf{r}}_{i}=\mathbf{y}_{i}-\hat{\boldsymbol{\pi}}_{i}=\left(y_{i1}- \hat{\pi}_{i1},y_{i2}-\hat{\pi}_{i2},\dots,y_{iJ}-\hat{\pi}_{iJ}\right)^{ \prime},\]
where \(\mathbf{y}_{i}=(y_{i1},y_{i2},\dots,y_{iJ})^{\prime}\) is the vector of observations with \(y_{ij}=1\) if the individual response \(i\) belongs to category \(j\) and \(y_{ij}=0\), otherwise, and \(\boldsymbol{\pi}_{i}=(\hat{\pi}_{i1},\hat{\pi}_{i2},\dots,\hat{\pi}_{iJ})^{\prime}\) is the vector of predicted probabilities. These residuals do not follow a multivariate normal distribution, and when used in diagnostic plots, they may not be informative, since visual interpretation is not straightforward.
The Pearson and deviance residuals for model (1) are given, respectively, by the vectors \(r_{i}^{P}=\left[r_{i1}^{P},r_{i2}^{P},\dots,r_{iJ}^{P}\right]^{\prime}\) and \(r_{i}^{D}=\left[r_{i1}^{D},r_{i2}^{D},\dots,r_{iJ}^{D}\right]^{\prime}\), whose elements are obtained by [6]
\[r_{ij}^{P}=\frac{\left(y_{ij}-\hat{\pi}_{ij}\right)}{\sqrt{\hat{\pi}_{ij}(1- \hat{\pi}_{ij})}}\]
and
\[r_{ij}^{D}=\operatorname{sign}\left(y_{ij}-\hat{\pi}_{ij}\right)\sqrt{2\left[ \left(y_{ij}-1\right)\log\left(1-\hat{\pi}_{ij}\right)-y_{ij}\log\left(\hat{ \pi}_{ij}\right)\right]},\]
where \(j=1,2,\dots,J\). These definitions are extensions of the residuals used in logistic regression. Specifically for variables on the ordinal scale, [6] proposed the surrogate residuals, that are based on the methodology presented by [26]. As the scope of this
work is centered on the nominal measurement scale, we leave it to the interested readers to consult [6] for more details.
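As a sketch, the ordinary, Pearson and deviance residuals above can be computed elementwise from an indicator matrix of observed categories and the matrix of fitted probabilities (e.g. fitted() from a nnet::multinom fit); the small matrices below are toy inputs, not data from the paper.

```r
# Sketch: elementwise residuals of Section 2.3.1 for individual data.
# y01: n x J indicator matrix of observed categories; pihat: n x J fitted probabilities.
ordinary_res <- function(y01, pihat) y01 - pihat
pearson_res  <- function(y01, pihat) (y01 - pihat) / sqrt(pihat * (1 - pihat))
deviance_res <- function(y01, pihat) {
  sign(y01 - pihat) * sqrt(2 * ((y01 - 1) * log(1 - pihat) - y01 * log(pihat)))
}

# toy example with n = 2 individuals and J = 3 categories
y01   <- rbind(c(1, 0, 0), c(0, 0, 1))
pihat <- rbind(c(0.6, 0.3, 0.1), c(0.2, 0.3, 0.5))
pearson_res(y01, pihat)
deviance_res(y01, pihat)
```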
#### 2.3.2 Residuals for grouped data
The \(J\)-dimensional ordinary residuals vector for model (1) per subject \(i\), \(i=1,2,\ldots,n\), each with \(m_{i}\) individuals, according to [41] is defined by
\[\hat{\mathbf{r}}_{i} = \frac{\mathbf{y}_{i}-m_{i}\times\hat{\boldsymbol{\pi}}_{i}}{m_{i}}\] \[= \frac{1}{m_{i}}(y_{i1}-m_{i}\hat{\pi}_{i1},y_{i2}-m_{i}\hat{\pi}_ {i2},\ldots,y_{iJ}-m_{i}\hat{\pi}_{iJ})^{\prime}\,,\]
where \(\mathbf{y}_{i}=(y_{i1},y_{i2},\ldots,y_{iJ})^{\prime}\) is the vector of observed counts, such that \(\sum\limits_{j=1}^{J}y_{ij}=m_{i}\), and \(\hat{\boldsymbol{\pi}}_{i}=(\hat{\pi}_{i1},\hat{\pi}_{i2},\ldots,\hat{\pi}_{iJ })^{\prime}\) is the vector of predicted probabilities. The \(J\)-dimensional vector of Pearson residuals is given by \(r_{i}^{P}=\left[r_{i1}^{P},r_{i2}^{P},\ldots,r_{iJ}^{P}\right]^{\prime}\) with elements [41]
\[r_{ij}^{P}=\frac{(y_{ij}-m_{i}\hat{\pi}_{ij})}{\sqrt{m_{i}\hat{\pi}_{ij}(1- \hat{\pi}_{ij})}},\]
where \(i=1,2,\ldots,n\) and \(j=1,2,\ldots,J\).
## 3 Randomized quantile residuals
The quantile residual was proposed by [10] for continuous variables. For a continuous response, \(y_{i}\), the quantile residual is defined by
\[r_{i}^{Q}=\Phi^{-1}\left\{F(y_{i};\hat{\theta}_{i},\hat{\phi})\right\},\;\;i= 1,\ldots,n,\]
where \(\Phi^{-1}\) is the inverse of the cumulative distribution function (CDF) of the standard normal distribution, \(F(y_{i};\hat{\theta}_{i},\hat{\phi})\) is the CDF associated with the response variable, \(\hat{\theta}_{i}\) is the maximum likelihood estimate of parameter \(\theta_{i}\) and the \(\hat{\phi}\) is the estimated dispersion parameter.
If the response \(y_{i}\) is discrete, we introduce randomization through a uniform random variable in the CDF for each individual, obtaining the randomized quantile residual
\[r_{i}^{Q}=\Phi^{-1}\left\{F(u_{i})\right\},\;\;i=1,\ldots,n,\]
where \(u_{i}\) represents a uniform random variable between \(a_{i}=\lim_{y\to y_{i}}F(y;\hat{\theta}_{i},\hat{\phi})\) and \(b_{i}=F(y_{i};\hat{\theta}_{i},\hat{\phi})\). Under a well-fitting model, these residuals follow, approximately, a normal distribution.
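The construction can be illustrated with any discrete fit. The sketch below uses an intercept-only Poisson model only because its CDF is built into R; it mirrors the definition above, with \(a_{i}\) the CDF just below \(y_{i}\) and \(b_{i}\) the CDF at \(y_{i}\) (illustrative data, not from the paper).

```r
# Sketch: randomized quantile residuals for a discrete response (Poisson example).
set.seed(2)
y      <- rpois(100, lambda = 4)
lamhat <- mean(y)                          # ML estimate of the Poisson mean
a <- ppois(y - 1, lamhat)                  # a_i: CDF just below y_i
b <- ppois(y,     lamhat)                  # b_i: CDF at y_i
u <- runif(length(y), min = a, max = b)    # randomization step
rq <- qnorm(u)                             # randomized quantile residuals
shapiro.test(rq)                           # approximately normal for a well-fitting model
```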
The quantile residuals have received little attention in the literature as model diagnostic tools until recently. For example, [23] used the standardized quantile residuals in goodness-of-fit tests for generalized linear models with inverse Gaussian and gamma variables. [31] investigated the performance of the quantile residual for diagnostics of
the beta regression model and [12] used the standardized randomized quantile residuals to examine the goodness-of-fit of models applied to count data. Here, we introduce their use with polytomous data associated with generalized logit models.
## 4 Distances
Consider having \(n\) individuals denoted by the random vectors \(\mathbf{z}_{i}=(z_{i1},z_{i2},\ldots,z_{iq})^{\prime}\), \(i=1,2,\ldots,n\). Each individual is represented by a point in \(q\)-dimensional space, with each dimension representing a variable [38]. Distance metrics can quantify how far two individuals are by a scalar which measures their proximity. The Euclidean and Mahalanobis distances are widely known (see [43] and [22]) and can be calculated in the original scale of the response variable [27]. The Euclidean distance between individuals \(i\) and \(t\) is defined by
\[d_{it}^{E}=\sqrt{(\mathbf{z}_{i}-\mathbf{z}_{t})^{\prime}(\mathbf{z}_{i}- \mathbf{z}_{t})}=\sqrt{\sum_{k=1}^{q}{(z_{ik}-z_{tk})^{2}}},\]
where \(z_{k}\) is the \(k\)-th variable, with \(k=1,2,\ldots,q\), and \(i,t=1,2,\ldots,n\). According to [43], this measure is the most popular to calculate the distance between individuals in \(q\)-dimensional space.
If the individuals are correlated, the covariance or correlation between them can be considered when calculating the distance [38]. In this case, the Mahalanobis distance is useful, and is expressed by
\[d_{it}^{M}=(\mathbf{z}_{i}-\mathbf{z}_{t})^{\prime}\mathbf{C}^{-1}(\mathbf{z} _{i}-\mathbf{z}_{t}),\]
where \(\mathbf{C}^{-1}\) is the inverse of the \(q\times q\) variance-covariance matrix. In the case where \(\mathbf{C}=\mathbf{I}\), with \(\mathbf{I}\) representing the identity matrix, the Mahalanobis distance reduces to the Euclidean distance. If \(\mathbf{C}\) is a diagonal matrix, then it results in the standardized Euclidean distance [20]. The Euclidean distance yields quicker calculations than the Mahalanobis distance, but considering the covariances between variables can be important [14]. However, [27] reported that some issues must be observed when using Mahalanobis distances, such as problems that may lead to singular covariance matrices and the restriction that the sample size must be greater than the number of variables.
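As a small sketch, both distances are available through base R: rowSums() for the Euclidean case and mahalanobis(), which returns exactly the quadratic form \((\mathbf{z}_{i}-\mathbf{z}_{t})^{\prime}\mathbf{C}^{-1}(\mathbf{z}_{i}-\mathbf{z}_{t})\) defined above. The data are simulated purely for illustration.

```r
# Sketch: Euclidean and Mahalanobis distances of q-dimensional points from the origin.
set.seed(3)
Z <- matrix(rnorm(50 * 3), ncol = 3)               # 50 individuals, q = 3 variables
C <- cov(Z)                                         # q x q covariance matrix

d_euclid <- sqrt(rowSums(Z^2))                      # Euclidean distance to the origin
d_mahal  <- mahalanobis(Z, center = rep(0, 3), cov = C)
head(cbind(d_euclid, d_mahal))
```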
## 5 Methods
Here, we describe the methodological procedures associated with residual analysis of generalized logit models fitted to nominal polytomous data. We propose the use of quantile residuals for individual data, and a new methodology to reduce the dimension of ordinary residuals associated with grouped data using distance metrics.
### Individual data
For individual data, we obtain the standardized randomized quantile residuals considering the cumulative distribution function (CDF), \(F(\mathbf{y}_{i};\mathbf{\hat{\pi}}_{i},\hat{\phi})\), for the response vector
\(\mathbf{y}_{i}\) given the vector \(\mathbf{x}_{i}\), \(i=1,2,\ldots,n\). The CDF for the multinomial distribution follows from its relationship with an independent Poisson sum, given a fixed total, i.e., the multinomial CDF is computed as the convolution of \(J\) truncated Poisson random variables, as shown by [25] and implemented in R through the pmultinom package ([9]).
Now let \(\hat{\boldsymbol{\pi}}_{i}=(\hat{\pi}_{i1}(x_{i}),\hat{\pi}_{i2}(x_{i}),\ldots, \hat{\pi}_{iJ}(x_{i}))^{\prime}\) be the vector of estimated probabilities. Consider the probability mass function \(f(\mathbf{y}_{i};\hat{\boldsymbol{\pi}}_{i})\), corresponding to the response of individual \(i\) being in category \(j\) (\(y_{ij}=1\), and \(y_{ij}=0\) otherwise). Then, the estimated CDF for individual \(i\) is
\[F^{*}(\mathbf{y}_{i},u_{i};\hat{\boldsymbol{\pi}}_{i})=F(\mathbf{1}-\mathbf{y }_{i};\hat{\boldsymbol{\pi}}_{i})+u_{i}\times f(\mathbf{y}_{i};\hat{ \boldsymbol{\pi}}_{i}), \tag{4}\]
where \(\mathbf{1}\) is a \(J\times 1\) unit vector, and \(u_{i}\) is a realization of a uniformly distributed random variable, i.e. \(U_{i}\sim\mathrm{U}(0,1)\). The randomized quantile residual for a polytomous response \(\mathbf{y}_{i}\) is given by
\[r_{i}^{Q}=\Phi^{-1}[F^{*}(\mathbf{y}_{i},u_{i};\hat{\boldsymbol{\pi}}_{i})], \tag{5}\]
where \(\Phi^{-1}\) is the quantile function of the standard normal distribution. We have therefore a scalar value for each \(i\), and these residuals are approximately normal under the null hypothesis that the model was correctly specified.
Here, we used a standardized version of the randomized quantile residuals, given by
\[r_{i}^{S}=\frac{r_{i}^{Q}-\bar{r}^{Q}}{s_{r^{Q}}}, \tag{6}\]
where \(\bar{r}^{Q}\) and \(s_{r^{Q}}\) are the mean and standard deviation of the residuals \(r_{i}^{Q}\), respectively.
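If the multinomial CDF is evaluated componentwise (as in the pmultinom package mentioned above), then for individual data (\(m_{i}=1\)) \(F(\mathbf{1}-\mathbf{y}_{i};\hat{\boldsymbol{\pi}}_{i})\) reduces to \(1-\hat{\pi}_{ij}\) and \(f(\mathbf{y}_{i};\hat{\boldsymbol{\pi}}_{i})\) to \(\hat{\pi}_{ij}\) for the observed category \(j\). The sketch below implements eqs. (4)-(6) through this reduction; the inputs obs_cat and pihat (observed categories and fitted probabilities, e.g. from nnet::multinom) are assumptions for illustration.

```r
# Sketch of eqs. (4)-(6) for individual data (m_i = 1), using the reduction
# F(1 - y_i) = 1 - pihat_ij and f(y_i) = pihat_ij for the observed category j.
rqr_individual <- function(obs_cat, pihat) {
  n   <- nrow(pihat)
  pij <- pihat[cbind(1:n, obs_cat)]      # fitted probability of the observed category
  u   <- runif(n)                         # randomization, U_i ~ U(0, 1)
  Fstar <- (1 - pij) + u * pij            # eq. (4) with m_i = 1
  rq  <- qnorm(Fstar)                     # eq. (5)
  (rq - mean(rq)) / sd(rq)                # standardization, eq. (6)
}
```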
For individual data, distance measurements do not effectively contribute to the analysis of residuals, since regardless of the number of individuals in the sample, the individual structure always leads to \(J\) unique distance measurements, under the assumption that the model is correctly specified, i.e, \(\mathrm{E}(\mathbf{r}_{i}|\mathbf{x}_{i})=\mathbf{0}\).
### Grouped data
For grouped data we can use a similar procedure to construct the randomized quantile residuals (eq. 6), with the following modification in the estimated CDF (eq. 4):
\[F^{*}(\mathbf{y}_{i},u_{i};\hat{\boldsymbol{\pi}}_{i})=F(\mathbf{m}-\mathbf{y }_{i};\hat{\boldsymbol{\pi}}_{i})+u_{i}\times f(\mathbf{y}_{i};\hat{ \boldsymbol{\pi}}_{i}),\]
where \(\mathbf{m}\) is a \(J\times 1\) vector whose entries equal the group size, and \(\mathbf{y}_{i}\) is the vector of category counts in group \(i\), which sum to \(m_{i}\). Additionally, unlike the individual case, we reduce the dimension of the vector of ordinary residuals using distance metrics, namely the Euclidean and Mahalanobis distances. Under the assumption that the model is specified correctly, we have that \(\mathrm{E}(\mathbf{r}_{i}|\mathbf{x}_{i})=\mathbf{0}\), which is a null vector of dimension \(J\). The Euclidean and Mahalanobis distances between residual vector \(i\) and the null vector are, respectively, written as
\[d_{i}^{E}=\sqrt{(\mathbf{r}_{i}-\mathbf{0})^{\prime}(\mathbf{r}_{i}-\mathbf{0 })}=\sqrt{\sum\nolimits_{j=1}^{J}r_{ij}^{2}}\]
\[d_{i}^{M}=(\mathbf{r}_{i}-\mathbf{0})^{\prime}\mathbf{C}^{-1}(\mathbf{r}_{i}- \mathbf{0})=\mathbf{r}_{i}^{\prime}\mathbf{C}^{-1}\mathbf{r}_{i},\]
where \(\mathbf{C}\) is the \(J\times J\) covariance matrix of the residuals.
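A sketch of the two distances applied to the matrix of ordinary residuals is given below. Because each residual vector sums to zero across categories, their covariance matrix is singular, so a generalized inverse (MASS::ginv) is used here as a pragmatic workaround; this workaround is an assumption of the sketch, not something prescribed by the text.

```r
# Sketch: Euclidean and Mahalanobis distances of each residual vector from 0.
# R_mat is assumed to be the n x J matrix of ordinary residuals of Section 2.3.2.
library(MASS)   # for ginv()

res_distances <- function(R_mat) {
  d_E  <- sqrt(rowSums(R_mat^2))             # Euclidean distance to the null vector
  Cinv <- ginv(cov(R_mat))                    # generalized inverse (cov is singular)
  d_M  <- rowSums((R_mat %*% Cinv) * R_mat)   # r_i' C^{-1} r_i
  data.frame(d_E = d_E, d_M = d_M)
}
```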
### Residual analytic tools
Once the randomized quantile residuals and distance measures are defined, formal (tests) and informal (plots) techniques are employed for diagnostics. Formally, a powerful and widely known test for detecting deviations from normality due to asymmetry or kurtosis (or both) is the Shapiro-Wilk test [37].
Informally, one can first visualize the distribution of residuals through a histogram, comparing its shape with that of the normal distribution. In the plot of residuals versus fitted values, it is possible to observe the existence of variance heterogeneity or the presence of outliers. The expected pattern in this plot is the zero-centered distribution of residuals with constant amplitude [11].
Additionally, the half-normal plot with a simulated envelope can be used to assess whether the observed data are a plausible realization of the fitted model. The absolute values of a given diagnostic measure (residuals or distances) are compared to the expected order statistics of the half-normal distribution obtained by
\[\Phi^{-1}\left[\frac{(i+n-1/8)}{2n+1/2}\right],\]
where \(\Phi^{-1}\) is the standard normal quantile function. Here, we follow the steps established by [29] for the construction of these graphs. Given a well-fitted model, we expect most points to lie within the simulated envelope.
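A bare-bones version of the plot (without the simulated envelope of [29], which the hnp package provides) can be drawn directly from the order-statistic formula above; here res stands for any of the diagnostic measures discussed (residuals or distances).

```r
# Sketch: half-normal plot of a vector of diagnostic measures, without the envelope.
half_normal_plot <- function(res) {
  n <- length(res)
  i <- 1:n
  q <- qnorm((i + n - 1/8) / (2 * n + 1/2))   # expected half-normal order statistics
  plot(q, sort(abs(res)),
       xlab = "Half-normal quantiles", ylab = "Absolute diagnostic measure")
}

# shapiro.test(res)  # formal normality check for (quantile) residuals
```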
## 6 Simulation studies
We carried out simulation studies to evaluate the performance of the standardized quantile residuals for individual and grouped data, as well as distance measures for grouped data only.
### Models and scenarios
We simulated from generalized logit models with 3, 4 and 5 response categories for both data structures (individual and grouped). We used two types of linear predictors: one with an intercept and a single continuous covariate effect (eq. 7), and one also including the effect of a factor with two levels (eq. 8). In the data simulation process, we considered sample sizes of 50, 100 and 200, and for grouped data we used the group dimensions \(m\in\{5,10,15\}\).
For model 1 the response variables were simulated from:
\[\log\left(\frac{\pi_{ij}}{\pi_{i1}}\right)=\alpha_{j}+\beta_{j}x_{i},\ \ j=2,\ldots,J, \tag{7}\]
where \(x_{i}\) are realizations of a standard normal random variable, and \(J=3,4,5\) according to the number of categories. The true parameter values were set as:
\[\boldsymbol{\theta}_{(J=3)}=(\alpha_{2},\alpha_{3},\beta_{2},\beta_{3})=(1.5,3.0,-3.0,-5.0)\]
\[\boldsymbol{\theta}_{(J=4)}=(\alpha_{2},\alpha_{3},\alpha_{4},\beta_{2},\beta_{3},\beta_{4})=(1.5,3.0,2.0,-3.0,-5.0,-4.0)\]
\[\boldsymbol{\theta}_{(J=5)}=(\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5},\beta_{2},\beta_{3},\beta_{4},\beta_{5})=(1.5,3.0,2.0,4.0,-3.0,-5.0,-4.0,-7.0)\]
For model 2, the linear predictor was:
\[\log\left(\frac{\pi_{ij}}{\pi_{i1}}\right)=\alpha_{j}+\beta_{1j}x_{i1}+\beta_{ 2j}x_{i2},\ \ j=2,\ldots,J, \tag{8}\]
where \(x_{i1}\) are realizations of a standard normal random variable, \(x_{i2}\) is a dummy variable (factor with two levels), and \(J=3,4,5\) according to the number of categories. The true values used were:
\[\boldsymbol{\theta}_{(J=3)}=(\alpha_{2},\alpha_{3},\beta_{12},\beta_{13},\beta_{22},\beta_{23})=(1.5,3.0,-3.0,-5.0,1.5,2.5)\]
\[\boldsymbol{\theta}_{(J=4)}=(\alpha_{2},\alpha_{3},\alpha_{4},\beta_{12},\beta_{13},\beta_{14},\beta_{22},\beta_{23},\beta_{24})=(1.5,3.0,2.0,-3.0,-5.0,-4.0,1.5,2.5,3.0)\]
\[\boldsymbol{\theta}_{(J=5)}=(\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5},\beta_{12},\beta_{13},\beta_{14},\beta_{15},\beta_{22},\beta_{23},\beta_{24},\beta_{25})=(1.5,3.0,2.0,4.0,-3.0,-5.0,-4.0,-7.0,1.5,2.5,3.0,3.5)\]
All simulations were implemented in R software [33], using the nnet package to fit the multinomial models [35], and the hnp package [29] (hnp function) to generate the half-normal plots with a simulated envelope.
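A condensed sketch of one replicate of the grouped-data simulation (model 1, \(J=3\), \(m=10\)) is shown below, using the parameter values listed above; multinom() accepts a matrix of category counts as the response, and the fitted objects can then be passed to the residual and distance computations of Section 5.

```r
# Sketch: one replicate of the grouped-data simulation (model 1, J = 3, m = 10).
library(nnet)

set.seed(4)
N <- 100; m <- 10
x <- rnorm(N)
eta2 <- 1.5 - 3.0 * x
eta3 <- 3.0 - 5.0 * x
p <- cbind(1, exp(eta2), exp(eta3))
p <- p / rowSums(p)                                            # category probabilities
Y <- t(sapply(1:N, function(i) rmultinom(1, size = m, prob = p[i, ])))

fit_true <- multinom(Y ~ x, trace = FALSE)   # correct linear predictor (model 1)
fit_null <- multinom(Y ~ 1, trace = FALSE)   # intercept-only (null) model
```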
### Results for individual data
We first compare residuals obtained from fitting model 1 to the data generated by model 1 itself, and from fitting the null model (intercept only; scenario 1). The distribution of the p-values of the Shapiro-Wilk test is presented in Figure 1, in which we observe that the residuals under the null model are mostly classified as not normal, while a uniform pattern is seen for the p-values of the residuals obtained from the correct model. Similar patterns are observed for the scenario where model 2 was considered (scenario 2; Figure 2). It should also be noted, in both scenarios (model 1 and model 2), that the number of categories has no influence on the residual analysis, unlike the sample size, although this is also related to the sensitivity of the Shapiro-Wilk test. Specifically, the normality of residuals was rejected by the Shapiro-Wilk test (\(p<0.05\)) in most simulations considering the null model. For example, with \(J=3\) and \(N=50\), normality was rejected \(86.6\%\) of the time when considering model 1, and \(92.7\%\) of the time when considering model 2. However, when considering the correct linear predictors, normality was rejected for only \(4.0\%\) and \(5.7\%\) of the simulated datasets for models 1 and 2, respectively (i.e., close to \(5\%\), as
expected). This shows we may identify lack-of-fit of a multinomial model fitted to individual data by analysing the normality of the randomized quantile residuals.
Figure 1: Histograms of p-values obtained via the Shapiro-Wilk test for the standardized randomized quantile residuals for \(1,000\) simulations when fitting (a) the null model (intercept only), and (b) model 1 (including continuous covariate; correct linear predictor) for \(N=50,100,200\) and \(J=3,4,5\).
Figure 2: Histograms of p-values obtained via the Shapiro-Wilk test for the standardized randomized quantile residuals for \(1,000\) simulations when fitting (a) the null model (intercept only), and (b) model 2 (including continuous and dummy covariates; correct linear predictor) for \(N=50,100,200\) and \(J=3,4,5\).
### Results for grouped data
The results for the grouped data, in general, were similar for all \(m\) values, indicating that the group dimension did not represent a source of variation for the standardized randomized quantile residuals or for the Euclidean and Mahalanobis distance measures in this study. We therefore present here the results for \(m=10\), with the other results available at [https://github.com/GabrielRPalma/DiagnosticsForCategoricalResponse](https://github.com/GabrielRPalma/DiagnosticsForCategoricalResponse). Initially, we present the distribution of p-values from the Shapiro-Wilk test applied to the quantile residuals for grouped data, considering scenario 3 (model 1 versus null) and scenario 4 (model 2 versus null). Just as in the individual case, the results were satisfactory, i.e., the normality of residuals was rejected by the Shapiro-Wilk test (\(p<0.05\)) in most simulations considering the null model, as can be observed from Figures 3 (scenario 3) and 4 (scenario 4).
Next, we present the results for the distance measures, considering model 1 versus the null model with the Euclidean distance (scenario 5) and the Mahalanobis distance (scenario 6). For both scenarios, it was possible to distinguish the true model from the null model by using half-normal plots with a simulated envelope for the distances (Figure 5).
The median of the percentage of points outside the envelope is less than \(5\%\) for model 1, considering both distances, as opposed to almost \(100\%\) for the null model. Also, the distribution of these values within each level appears to be symmetric and has approximately the same variability (Figures 5 and 6). Similar conclusions can be drawn for model 2 for the Euclidean (Figure 7) and Mahalanobis (Figure 8) distances, given that the median of the percentage of points outside the envelope is less than \(5\%\) for model 2 and close to \(100\%\) for the null model using both distances.
Figure 3: Histograms of p-values obtained via the Shapiro-Wilk test for the standardized randomized quantile residuals for \(1,000\) simulations when fitting (a) the null model (intercept only), and (b) model 1 (including continuous covariate; correct linear predictor) with grouped data (\(m=10\)), for \(N=50,100,200\) and \(J=3,4,5\).
This confirms that the proposed diagnostics are useful to identify well-fitting multinomial models for grouped nominal data.
Figure 4: Histograms of p-values obtained via the Shapiro-Wilk test for the standardized randomized quantile residuals for \(1,000\) simulations when fitting (a) the null model (intercept only), and (b) model 2 (including continuous and dummy covariates; correct linear predictor) with grouped data (\(m=10\)), for \(N=50,100,200\) and \(J=3,4,5\).
Figure 5: Boxplots of the percentage of points outside the simulated envelope for model 1 (including continuous covariate) and the null model (intercept only) using the Euclidean distance, for \(N=50,100,200\) and \(J=3,4,5\).
## 7 Applications
Here, two motivation studies available in the literature are considered to illustrate the procedures presented in Sections 3 and 4.
Figure 6: Boxplots of the percentage of points outside the simulated envelope for model 1 (including continuous covariate) and the null model (intercept only) using the Mahalanobis distance, for \(N=50,100,200\) and \(J=3,4,5\).
Figure 7: Boxplots of the percentage of points outside the simulated envelope for model 2 (including continuous and dummy covariates) and the null model (intercept only) using the Euclidean distance, for \(N=50,100,200\) and \(J=3,4,5\).
### Wine Classification
This first dataset (individual structure) arises from a study carried out by [13], involving wine classification techniques ([4, 19]). In this study, a chemical analysis was carried out at the Institute of Pharmaceutical and Food Analysis and Technologies on 178 wines from three grape cultivars from the Liguria region in Italy, with the objective of classifying the different cultivars. The response variable represents the type of cultivar, assuming values \(\{1,2,3\}\). In the analysis, the amounts of 13 chemical constituents of each cultivar were determined, among which are magnesium and phenols, which can be considered good indicators of wine origin [21]. Further details as well as the dataset are available in the rattle.data [15] package for R software [33].
We define the following linear predictors: \(M1\): intercept only (null model); \(M2\): intercept + phenols; \(M3\): intercept + magnesium + phenols (additive model) and \(M4\): intercept + magnesium * phenols (interaction model).
The final model was selected by applying likelihood-ratio (LR) tests to a sequence of nested models, and we obtained: \(M1\times M2\), LR = 123.98 (\(p<0.01\)); \(M2\times M3\), LR = 13.14 (\(p<0.01\)); and \(M3\times M4\), LR = 1.25 (\(p=0.54\)), all statistics associated with 2 degrees of freedom. Therefore, model M3 was selected. The Akaike Information Criterion (AIC) was also used to compare models, and the lowest AIC value (261.50) was for model M3, but this measure does not verify the goodness-of-fit of the model or validate the distributional assumption.
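The sketch below reproduces the structure of this comparison (not the exact numbers), assuming the wine data of the rattle.data package with columns Type, Magnesium and Phenols; the LR statistics are obtained from deviance differences of nested multinom fits.

```r
# Sketch of the nested-model comparison of Section 7.1 (column names assumed).
library(nnet)
library(rattle.data)
data(wine)

M1 <- multinom(Type ~ 1,                   data = wine, trace = FALSE)
M2 <- multinom(Type ~ Phenols,             data = wine, trace = FALSE)
M3 <- multinom(Type ~ Magnesium + Phenols, data = wine, trace = FALSE)
M4 <- multinom(Type ~ Magnesium * Phenols, data = wine, trace = FALSE)

lr_test <- function(small, big) {                  # LR test for two nested fits
  stat <- deviance(small) - deviance(big)
  df   <- length(coef(big)) - length(coef(small))
  c(LR = stat, df = df, p.value = pchisq(stat, df, lower.tail = FALSE))
}
rbind(lr_test(M1, M2), lr_test(M2, M3), lr_test(M3, M4))
AIC(M1, M2, M3, M4)
```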
The histogram of the randomized quantile residuals (Figure 9(a)) indicates that the residuals of model M3 are normally distributed. This is confirmed by the Shapiro-Wilk test (\(p=0.167\)). Also, in the plot of residuals versus fitted values (Figure 9(b)), the residuals vary mainly between \(-2\) and \(2\) and no pattern is evident, which also suggests that model M3 is well fitted to the data.
Figure 8: Boxplots of the percentage of points outside the simulated envelope for model 2 (including continuous and dummy covariates) and the null model (intercept only) using the Mahalanobis distance, for \(N=50,100,200\) and \(J=3,4,5\).

The half-normal plots with a simulated envelope for the standardized randomized quantile residuals are shown in Figure 10 for model M3: intercept + magnesium + phenols (a) and the null model (b). It appears that the model fits the data well, since no point is outside the envelope.
Figure 10: Half-normal plot with a simulated envelope (confidence level = 95%) of the standardized randomized quantile residuals for model M3:intercept + magnesium + phenols (a) and the null model (b) applied to the wine study data [13].
Figure 9: Distribution of standardized randomized quantile residuals from model M3 (a standard normal density curve in red) and fitted values versus standardized randomized quantile residuals, according to the wine study data ([13]).
### Student preference
The second dataset (grouped structure) refers to the choice made by high school students among different programs. This sample of 200 individuals was made available in 2013 by the statistical consulting group at the University of California at Los Angeles (UCLA) and has been used in studies involving polytomous data (e.g. [28], [1] and [8]). The response variable is the choice of a program (1: academic, 2: general, 3: vocational). There are 11 covariates available in this study, including socioeconomic status, gender, and scores in specific subjects (mathematics, social studies, writing, among others). Here, we consider the maths score as a continuous covariate, to verify whether the score contributed to the student's decision. The data were organized into \(N=34\) groups, with group sizes varying from \(m=2\) to \(m=13\). For more details, see [42].
Considering the null hypothesis that program choice is independent of maths score, we employed an LR test to compare a null model (intercept only) with a model including the maths score in the linear predictor (model M1). We obtained a test statistic of 51.97 on 2 degrees of freedom (\(p<0.01\)). Model M1 also presented a lower AIC (182.81) when compared to the null model (230.77). Based on this result, it is concluded that the maths score is significant in explaining program choice.
The half-normal plots with a simulated envelope indicate that model M1 (intercept + maths score) is suitable to analyse the data, for both the Euclidean (Figure 11) and Mahalanobis (Figure 12) distances.
## 8 Conclusion
Figure 11: Half-normal plot with a simulated envelope (confidence level = 95%) using Euclidean distance for the null model (a) and M1: intercept + maths score (b) for the data available in [42].

In this work we presented alternatives to residual analysis for nominal data with individual and grouped data structures, using randomized quantile residuals and distance measures, respectively. The simulation studies showed that these residuals and the proposed distances presented good performance in assessing model goodness-of-fit with continuous and categorical covariates. Therefore, the randomized quantile residuals and the distances may be useful tools for the diagnostic checking of generalized logit models. However, the analysis of residuals for polytomous data still has many challenges to be explored. Studies focusing on small sample sizes, which could lead to sampling uncertainty in the residuals and distances, are necessary to further assess the fit of the model. Avenues for future work also include simulation studies focusing on longitudinal designs.
## Acknowledgments
This work derived from the thesis entitled "Residuals and diagnostic methods in models for polytomous data", with support from the Brazilian foundation "Coordenação de Aperfeiçoamento de Pessoal de Nível Superior" (CAPES), process number \(88882.378344/2019-01\). This publication also had additional support from CAPES, process number \(88887.716582/2022-00\), and from Science Foundation Ireland under grant number \(18/CRT/6049\).
## Supplementary material
All R code, including the implementations of the proposed methods, are available at [https://github.com/GabrielRPalma/DiagnosticsForCategoricalResponse](https://github.com/GabrielRPalma/DiagnosticsForCategoricalResponse).
Figure 12: Half-normal plot with a simulated envelope (confidence level = 95%) using Mahalanobis distance for null model (a) and M1: intercept + maths score (b) for the data available in [42] |
2303.10895 | Leapfrog Diffusion Model for Stochastic Trajectory Prediction | To model the indeterminacy of human behaviors, stochastic trajectory
prediction requires a sophisticated multi-modal distribution of future
trajectories. Emerging diffusion models have revealed their tremendous
representation capacities in numerous generation tasks, showing potential for
stochastic trajectory prediction. However, expensive time consumption prevents
diffusion models from real-time prediction, since a large number of denoising
steps are required to assure sufficient representation ability. To resolve the
dilemma, we present LEapfrog Diffusion model (LED), a novel diffusion-based
trajectory prediction model, which provides real-time, precise, and diverse
predictions. The core of the proposed LED is to leverage a trainable leapfrog
initializer to directly learn an expressive multi-modal distribution of future
trajectories, which skips a large number of denoising steps, significantly
accelerating inference speed. Moreover, the leapfrog initializer is trained to
appropriately allocate correlated samples to provide a diversity of predicted
future trajectories, significantly improving prediction performances. Extensive
experiments on four real-world datasets, including NBA/NFL/SDD/ETH-UCY, show
that LED consistently improves performance and achieves 23.7%/21.9% ADE/FDE
improvement on NFL. The proposed LED also speeds up the inference
19.3/30.8/24.3/25.1 times compared to the standard diffusion model on
NBA/NFL/SDD/ETH-UCY, satisfying real-time inference needs. Code is available at
https://github.com/MediaBrain-SJTU/LED. | Weibo Mao, Chenxin Xu, Qi Zhu, Siheng Chen, Yanfeng Wang | 2023-03-20T06:32:48Z | http://arxiv.org/abs/2303.10895v1 | # Leapfrog Diffusion Model for Stochastic Trajectory Prediction
###### Abstract
To model the indeterminacy of human behaviors, stochastic trajectory prediction requires a sophisticated multi-modal distribution of future trajectories. Emerging diffusion models have revealed their tremendous representation capacities in numerous generation tasks, showing potential for stochastic trajectory prediction. However, expensive time consumption prevents diffusion models from real-time prediction, since a large number of denoising steps are required to assure sufficient representation ability. To resolve the dilemma, we present LEapfrog Diffusion model (LED), a novel diffusion-based trajectory prediction model, which provides real-time, precise, and diverse predictions. The core of the proposed LED is to leverage a trainable leapfrog initializer to directly learn an expressive multi-modal distribution of future trajectories, which skips a large number of denoising steps, significantly accelerating inference speed. Moreover, the leapfrog initializer is trained to appropriately allocate correlated samples to provide a diversity of predicted future trajectories, significantly improving prediction performances. Extensive experiments on four real-world datasets, including NBA/NFL/SDD/ETH-UCY, show that LED consistently improves performance and achieves 23.7%/21.9% ADE/FDE improvement on NFL. The proposed LED also speeds up the inference 19.3/30.8/24.3/25.1 times compared to the standard diffusion model on NBA/NFL/SDD/ETH-UCY, satisfying real-time inference needs. Code is available at [https://github.com/MediaBrain-SJTU/LED](https://github.com/MediaBrain-SJTU/LED).
## 1 Introduction
Trajectory prediction aims to predict the future trajectories for one or multiple interacting agents conditioned on their past movements. This task plays a significant role in numerous applications, such as autonomous driving [5, 24], drones [11], surveillance systems [46], human-robot interaction systems [6], and interactive robotics [21, 26]. Recently, lots of fascinating research progresses have been made from many aspects, including temporal encoding [7, 14, 47, 54], interaction modeling [19, 1, 50, 44, 1, 1], and rasterized prediction [12, 49, 55, 49, 13]. In practice, to capture multiple possibilities of future trajectories, a real-world prediction system needs to produce multiple future trajectories. This leads to the emergence of stochastic trajectory prediction, aiming to precisely model the distribution of future trajectories.
Previous works have proposed a series of deep generative models for stochastic trajectory prediction. For example, [16, 19] exploit generative adversarial networks (GANs) to model the future trajectory distribution; [28, 39, 50] consider the conditional variational auto-encoder (CVAE) structure; and [3] uses a conditional normalizing flow to relax the Gaussian prior in CVAEs and learn more representative priors. Recently, with the great success in image generation [18, 34] and audio synthesis [4, 22], denoising diffusion probabilistic models have been applied to time-series analysis and trajectory prediction, and show promising prediction performances [45, 15]. Compared to many other generative models, diffusion models have advantages in stable training and modeling sophisticated distributions through sufficient denoising steps [9].
Figure 1: Leapfrog diffusion model uses the leapfrog initializer to estimate the denoised distribution and substitute a long sequence of traditional denoising steps, accelerating inference and maintaining representation capacity.

However, there are two critical problems in diffusion models for stochastic trajectory prediction. First, real-time inference is time-consuming [15]. To ensure the representation ability and generate high-quality samples, an adequate number of denoising steps is required in standard diffusion models, which costs more computational time. For example, experiments show that on the NBA dataset, diffusion models need about \(100\) denoising steps to achieve decent prediction performances, which would take \(\sim\)886ms to predict, while the next frame arrives every 200ms. Second, as mentioned in [2], a limited number of independent and identically distributed samples might not be able to capture sufficient modalities in the underlying distribution of a generative model. Empirically, a few independently sampled trajectories could miss some important future possibilities due to the lack of appropriate sample allocation, significantly deteriorating prediction performances.
In this work, we propose leapfrog diffusion model (LED), a novel diffusion-based trajectory prediction model, which significantly accelerates the inference speed and enables adaptive and appropriate allocations of multiple correlated predictions, providing sufficient diversity in predictions. The core idea of the proposed LED is to learn a rough, yet sufficiently expressive distribution to initialize denoised future trajectories; instead of using a plain Gaussian distribution as in standard diffusion models. Specifically, our forward diffusion process is the same as standard diffusion models, which assures that the ultimate representation ability is pristine; while in the reverse denoising process, we leverage a powerful initializer to produce correlated diverse samples and leapfrog or skip a large number of denoising steps; and then, use only a few denoising steps to refine the distribution.
To implement such a leapfrog initializer, we consider a reparameterization to alleviate the learning burden. We disassemble a denoised distribution into three parts: mean trajectory, variance, and sample positions under the normalized distribution. To estimate these three, we design three corresponding trainable modules, each of which leverages both a social encoder and a temporal encoder to learn the social-temporal features and produce accurate estimation. Furthermore, all the sample positions are simultaneously generated based on the same social-temporal features, enabling appropriate sample allocations to provide diversity.
To evaluate the effectiveness of the proposed method, we conduct experiments on four trajectory prediction datasets: NBA, NFL Football Dataset, Standford Drones Dataset, and ETH-UCY. The quantitative results show we outperform the previous methods and achieve state-of-the-art performance. Specifically, compared to MID [15], the proposed leapfrog diffusion model reduces the average prediction time from \(\sim\)886ms to \(\sim\)46ms on the NBA dataset, while achieving a 15.6%/13.4% ADE/FDE improvement.
The main contributions are concluded as follows,
\(\bullet\) We propose a novel LEapfrog Diffusion model (LED), which is a denoising-diffusion-based stochastic trajectory prediction model. It achieves precise and diverse predictions with fast inference speed.
\(\bullet\) We propose a novel trainable leapfrog initializer to directly model sophisticated denoised distributions, accelerating inference speed, and adaptively allocating the sample diversity, improving prediction performance.
\(\bullet\) We conduct extensive experiments on four datasets including NBA, NFL, SDD, and ETH-UCY. Results show that i) our approach consistently achieves state-of-the-art performance on all datasets; and ii) our method speeds up the inference by around 20 times compared to the standard diffusion model, satisfying real-time prediction needs.
## 2 Related Work
**Trajectory prediction.** Early works on trajectory prediction focus on a deterministic approach by exploring force models [17, 31], RNNs [1, 33, 48], and frequency analysis [29, 30]. For example, [17] models an agent's behavior with attractive and repulsive forces and builds the force equations for prediction. To capture the multi-modalities and model the future distribution, recent works focus on stochastic trajectory prediction and have proposed a series of deep generative models. Generative Adversarial Network (GAN) structures [16, 19, 37, 38, 10, 43] are proposed to generate multi-modal future trajectory distributions. [23, 28, 39, 50, 52] use the Variational Auto-Encoder (VAE) structure and learn the distribution through variational inference. [3] relaxes the Gaussian prior and proposes to use a normalizing flow, while heatmaps [12, 13, 27] are used to model the distribution of future trajectories on rasterized images. In this work, we propose a new diffusion-based model for trajectory prediction. Compared to previous generative models, our method has a large representation capacity and can model sophisticated trajectory distributions by using a number of diffusion steps. We also enable correlation between samples to adaptively adjust sample diversity, improving prediction performance.
**Denoising diffusion probabilistic models.** Denoising diffusion probabilistic models (diffusion models) [18, 40, 42] have recently achieved significant results in image generation [9, 34] and audio synthesis [4, 22]. The idea of diffusion models was first proposed by DPM [40], which imitates the diffusion process in non-equilibrium statistical physics and reconstructs the data distribution using a denoising model. Later, [36, 45] propose diffusion models, combined with sequence-to-sequence models, for probabilistic time series forecasting. MID [15] is the first to build diffusion models for trajectory prediction, modeling the indeterminacy variation process.
Standard diffusion models use hundreds of denoising steps, preventing these models from real-time applications. To accelerate the sampling process, DDIM [41] first predicts the original data and then estimates the direction to the next expected timestamp based on a non-Markovian process. PD [38] applies knowledge distillation to the denoising steps with a deterministic diffusion sampler, repeating the procedure several times to accelerate sampling. All these fast sampling methods start denoising from noise inputs, which are randomly and independently initialized. In this work, we
use a trainable leapfrog initializer to initialize a sufficiently expressive distribution, which replaces a large number of former denoising steps for much faster inference speed.
## 3 Background
### Problem Formulation
Trajectory prediction aims to predict an agent's future trajectory based on the past trajectories of itself and surrounding agents. For a to-be-predicted agent, let \(\mathbf{X}=[\mathbf{x}^{-T_{\mathrm{p}}+1},\mathbf{x}^{-T_{\mathrm{p}}+2},\dots,\mathbf{x}^{0}]\in\mathbb{R}^{T_{\mathrm{p}}\times 2}\) be the observed past trajectory over \(T_{\mathrm{p}}\) timestamps, where \(\mathbf{x}^{t}\in\mathbb{R}^{2}\) records the 2D spatial coordinate at timestamp \(t\). Let \(\mathcal{N}\) be the neighbouring agent set and \(\mathbb{X}_{\mathcal{N}}=[\mathbf{X}_{\mathcal{N}_{1}},\mathbf{X}_{\mathcal{N}_{2}},\cdots,\mathbf{X}_{\mathcal{N}_{L}}]\in\mathbb{R}^{L\times T_{\mathrm{p}}\times 2}\) be the past trajectories of neighbours, where \(\mathbf{X}_{\mathcal{N}_{\ell}}\in\mathbb{R}^{T_{\mathrm{p}}\times 2}\) is the trajectory of the \(\ell\)th neighbour. The corresponding ground-truth future trajectory for the to-be-predicted agent is \(\mathbf{Y}=[\mathbf{y}^{1},\mathbf{y}^{2},\dots,\mathbf{y}^{T_{\mathrm{f}}}]\in\mathbb{R}^{T_{\mathrm{f}}\times 2}\) over \(T_{\mathrm{f}}\) timestamps, where \(\mathbf{y}^{t}\in\mathbb{R}^{2}\) is the 2D coordinate at future timestamp \(t\).
Because of the indeterminacy of future trajectories, it is usually more reliable to predict more than one trajectory to capture multiple possibilities. Here we consider stochastic trajectory prediction, which predicts the distribution of a future trajectory, instead of a single future trajectory. The goal of stochastic trajectory prediction is to train a prediction model \(g_{\theta}(\cdot)\) with parameters \(\theta\) to generate a distribution \(\mathcal{P}_{\theta}=g_{\theta}(\mathbf{X},\mathbb{X}_{\mathcal{N}})\). Based on this distribution \(\mathcal{P}_{\theta}\), we can draw \(K\) samples, \(\widehat{\mathcal{Y}}=\{\widehat{\mathbf{Y}}_{1},\widehat{\mathbf{Y}}_{2}, \dots,\widehat{\mathbf{Y}}_{K}\}\), so that at least one sample is close to the ground-truth future trajectory. The overall learning problem is
\[\theta^{*}=\min_{\theta}\min_{\widehat{\mathbf{Y}}_{i}\in\widehat{\mathcal{Y}} }D(\widehat{\mathbf{Y}}_{i},\mathbf{Y}),\ \ \ \mathrm{s.t.}\ \ \widehat{\mathcal{Y}}\sim\mathcal{P}_{\theta}. \tag{1}\]
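To make the best-of-\(K\) criterion in Eq. (1) concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper); the function name and array shapes are assumptions.

```python
import numpy as np

def best_of_k_error(samples, gt):
    """samples: (K, T_f, 2) predicted futures; gt: (T_f, 2) ground-truth future."""
    # Per-sample displacement, averaged over the T_f future timestamps.
    per_sample = np.linalg.norm(samples - gt[None], axis=-1).mean(axis=-1)  # (K,)
    # Eq. (1) keeps only the sample closest to the ground truth.
    return per_sample.min()

rng = np.random.default_rng(0)
print(best_of_k_error(rng.normal(size=(20, 12, 2)), rng.normal(size=(12, 2))))
```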
### Diffusion Model for Trajectory Prediction
Here we present a standard diffusion model for trajectory prediction, which lays a foundation for the proposed method. The core idea is to learn and refine a sophisticated underlying distribution of trajectories through cascading a series of simple denoising steps. To implement this, a diffusion model performs a forward diffusion process to intentionally add a series of noises to a ground-truth future trajectory; and then, it uses a conditional denoising process to recover the future trajectory from noise inputs conditioned on past trajectories.
Mathematically, let \(\mathbf{X}\) and \(\mathbb{X}_{\mathcal{N}}\) be the past trajectories of the ego agent and the neighboring agents, respectively, and \(\mathbf{Y}\) be the future trajectory of the ego agent. The diffusion model for trajectory prediction works as follows,
\[\mathbf{Y}^{0} =\mathbf{Y}, \tag{2a}\] \[\mathbf{Y}^{\gamma} =f_{\mathrm{diffuse}}(\mathbf{Y}^{\gamma-1}),\ \gamma=1,\cdots,\Gamma,\] (2b) \[\widehat{\mathbf{Y}}_{k}^{\Gamma}\overset{i.i.d.}{\sim}\mathcal{P}(\widehat{\mathbf{Y}}^{\Gamma})=\mathcal{N}(\widehat{\mathbf{Y}}^{\Gamma};\mathbf{0},\mathbf{I}),\ \text{sample }K\text{ times},\] (2c) \[\widehat{\mathbf{Y}}_{k}^{\gamma} =f_{\mathrm{denoise}}(\widehat{\mathbf{Y}}_{k}^{\gamma+1},\mathbf{X},\mathbb{X}_{\mathcal{N}}),\ \gamma\!=\!\Gamma\!-\!1,\!\cdots\!,\!0, \tag{2d}\]
where \(\mathbf{Y}^{\gamma}\) is the noisy trajectory at the \(\gamma\)th diffusion step and \(\widehat{\mathbf{Y}}_{k}^{\gamma}\) is the \(k\)th sample of denoised trajectory at the \(\gamma\)th denoising step. The final \(K\) predicted trajectories are \(\widehat{\mathcal{Y}}=\{\widehat{\mathbf{Y}}_{1}^{0},\widehat{\mathbf{Y}}_{2}^ {0},\dots,\widehat{\mathbf{Y}}_{K}^{0}\}\).
Step (2a) initializes the diffused trajectory; Step (2b) uses a forward diffusion operation \(f_{\mathrm{diffuse}}(\cdot)\) to successively add noises to \(\mathbf{Y}^{\gamma-1}\) and obtain the diffused trajectory \(\mathbf{Y}^{\gamma}\); Step (2c) draws \(K\) independent and identically distributed samples to initialize denoised trajectories \(\widehat{\mathbf{Y}}_{k}^{\Gamma}\) from a normal distribution; and Step (2d) iteratively applies a denoising operation \(f_{\mathrm{denoise}}(\cdot)\) to obtain the denoised trajectory \(\widehat{\mathbf{Y}}_{k}^{\gamma}\) conditioned on past trajectories \(\mathbf{X},\mathbb{X}_{\mathcal{N}}\). Note that i) Steps (2a) and (2b) correspond to the forward diffusion process and are not used in inference; ii) During training, \(\mathbf{Y}^{\gamma}\) is naturally the supervision for \(\widehat{\mathbf{Y}}_{k}^{\gamma}\) at the \(\gamma\)th step. Conceptually, each denoising step is the reverse of the diffusion step, and each pair of \(\mathbf{Y}^{\gamma}\) and \(\widehat{\mathbf{Y}}_{k}^{\gamma}\) shares the same underlying distribution.
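As a rough illustration of Steps (2b) and (2c), the forward diffusion can be applied in closed form and the denoised trajectories are initialized i.i.d. from a standard normal. The PyTorch sketch below is our own illustration and assumes a precomputed cumulative-product schedule `alphas_cumprod`.

```python
import torch

def q_sample(y0, gamma, alphas_cumprod, noise=None):
    """Diffuse Y^0 directly to step gamma: sqrt(abar_g)*Y^0 + sqrt(1-abar_g)*eps."""
    noise = torch.randn_like(y0) if noise is None else noise
    abar = alphas_cumprod[gamma]
    return abar.sqrt() * y0 + (1.0 - abar).sqrt() * noise

def init_iid_samples(K, T_f):
    """Step (2c): K independent standard-normal initializations of the future."""
    return torch.randn(K, T_f, 2)
```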
The standard diffusion model is expressively powerful in learning sophisticated distributions and has achieved great success in many generation tasks. However, motion prediction requires real-time inference, while the running time of a diffusion model is constrained by the large number of denoising steps. Meanwhile, fewer denoising steps usually lead to a weaker representation ability for future distributions. To achieve higher efficiency while preserving a promising representation ability, we propose the leapfrog diffusion model, which uses a trainable initializer to capture sophisticated distributions and substitute for a large number of denoising steps.
## 4 Leapfrog Diffusion Model
### System Architecture
In this section, we propose the leapfrog diffusion model. Here leapfrog means that a large number of small denoising steps can be replaced by a single, yet powerful leapfrog initializer, which can significantly accelerate the inference speed without losing representation ability. Let \(\mathbf{X}\) and \(\mathbb{X}_{\mathcal{N}}\) be the past trajectories of the ego agent and its neighboring agents, and \(\mathbf{Y}\) be the future trajectory of the ego agent. Denote \(\tau\) as the leapfrog step. The overall procedure of the proposed leapfrog diffusion model is formulated as follows,
\[\mathbf{Y}^{0} =\mathbf{Y}, \tag{3a}\] \[\mathbf{Y}^{\gamma} =f_{\mathrm{diffuse}}(\mathbf{Y}^{\gamma-1}),\ \gamma=1,\cdots,\Gamma,\] (3b) \[\widehat{\mathcal{Y}}^{\tau}\overset{K}{\sim}\mathcal{P}(\widehat{\mathbf{Y}}^{\tau})=f_{\mathrm{LSG}}(\mathbf{X},\mathbb{X}_{\mathcal{N}}),\] (3c) \[\widehat{\mathbf{Y}}_{k}^{\gamma} =f_{\mathrm{denoise}}(\widehat{\mathbf{Y}}_{k}^{\gamma+1},\mathbf{X},\mathbb{X}_{\mathcal{N}}),\ \gamma=\!\tau\!-\!1,\!\cdots\!,0. \tag{3d}\]
Compared to the standard diffusion model (2), the main difference lies in Step (3c). The standard diffusion model initializes the \(\Gamma\)th denoised distribution \(\mathcal{P}(\widehat{\mathbf{Y}}^{\Gamma})\) by a plain normal distribution (2c) and requires many denoising steps to enrich the expressiveness of the denoised distribution; while in Step (3c), we propose a novel leapfrog initializer \(f_{\mathrm{LSG}}(\cdot)\) to directly model the \(\tau\)th denoised distribution \(\mathcal{P}(\widehat{\mathbf{Y}}^{\tau})\), which is hypothetically equivalent to the output of executing \((\Gamma-\tau)\) denoising steps (2d). We then draw samples from the distribution \(\mathcal{P}(\widehat{\mathbf{Y}}^{\tau})\) and obtain \(K\) future trajectories \(\widehat{\mathcal{Y}}^{\tau}=\{\widehat{\mathbf{Y}}_{1}^{\tau},\widehat{\mathbf{Y}}_{2}^{\tau},\ldots,\widehat{\mathbf{Y}}_{K}^{\tau}\}\), where \(\overset{K}{\sim}\) in (3c) means the \(K\) samples are dependent, so as to intentionally allocate appropriate sample diversity. Then, in Step (3d), we only need to apply the remaining \(\tau\) denoising steps to each trajectory \(\widehat{\mathbf{Y}}_{k}^{\gamma}\) to obtain the final prediction \(\widehat{\mathcal{Y}}=\{\widehat{\mathbf{Y}}_{1}^{0},\widehat{\mathbf{Y}}_{2}^{0},\ldots,\widehat{\mathbf{Y}}_{K}^{0}\}\).
Note that i) the proposed leapfrog diffusion model reduces the denoising steps from \(\Gamma\) to \(\tau(\ll\Gamma)\) in Step (3d) as the leapfrog initializer directly provides the trajectories at denoising step \(\tau\), accelerating the inference; ii) instead of taking independent and identically distributed samples in Step (2c), the proposed leapfrog initializer generates \(K\) trajectories \(\widehat{\mathcal{Y}}^{\tau}\) simultaneously in Step (3c), allowing \(K\) samples to be aware of each other; and iii) the standard diffusion model and the proposed leapfrog diffusion model share the same forward diffusion process, assuring that the representation capacity is not reduced.
### Leapfrog Initializer
We now dive into the design details of the proposed leapfrog initializer, which leapfrog \((\Gamma-\tau)\) denoising steps. In leapfrog initializer, we model the \(\tau\)th denoised distribution \(\mathcal{P}(\widehat{\mathbf{Y}}^{\tau})\) through learning models. However, it is nontrivial for a learning model to directly capture the sophisticated distribution, which usually causes unstable training. To ease the learning burden of the model, we disassemble the distribution \(\mathcal{P}(\widehat{\mathbf{Y}}^{\tau})\) into three representative parts: the mean, global variance and sample prediction. For each part, we design trainable modules correspondingly. Mathematically, let \(\mathbf{X}\) and \(\mathbb{X}_{\mathcal{N}}\) be the past trajectories of the ego agent and the neighboring agents, respectively. The proposed leapfrog initializer generates \(K\) samples as follows,
\[\mu_{\theta} =f_{\mu}(\mathbf{X},\mathbb{X}_{\mathcal{N}})\in\mathbb{R}^{T_{ \mathrm{f}}\times 2},\] \[\sigma_{\theta} =f_{\sigma}(\mathbf{X},\mathbb{X}_{\mathcal{N}})\in\mathbb{R},\] \[\widehat{\mathbb{S}}_{\theta} =[\widehat{\mathbf{S}}_{\theta,1},\cdots,\widehat{\mathbf{S}}_{ \theta,K}]=f_{\widehat{\mathbb{S}}}(\mathbf{X},\mathbb{X}_{\mathcal{N}},\sigma _{\theta})\in\mathbb{R}^{T_{\mathrm{f}}\times 2\times K},\] \[\widehat{\mathbf{Y}}_{k}^{\tau} =\mu_{\theta}+\sigma_{\theta}\cdot\widehat{\mathbf{S}}_{\theta,k }\in\mathbb{R}^{T_{\mathrm{f}}\times 2}, \tag{4}\]
where \(f_{\mu}(\cdot),f_{\sigma}(\cdot),f_{\widehat{\mathbb{S}}}(\cdot)\) are three trainable modules, \(\mu_{\theta},\sigma_{\theta}\) are the mean and standard deviation of \(\mathcal{P}(\widehat{\mathbf{Y}}^{\tau})\), respectively, and \(\widehat{\mathbf{S}}_{\theta,k}\) is the normalized positions for the \(k\)th sample.
To be specific, the mean estimate module \(f_{\mu}(\cdot)\) infers the mean trajectory of the \(\tau\)th denoised distribution \(\widehat{\mathcal{P}}(\widehat{\mathbf{Y}}^{\tau})\) with past trajectories \((\mathbf{X},\mathbb{X}_{\mathcal{N}})\). The mean trajectory \(\mu_{\theta}\) is shared across all the \(K\) samples. The variance estimate module \(f_{\sigma}(\cdot)\) infers the standard deviation of the \(\tau\)th denoised distribution \(\widehat{\mathcal{P}}(\widehat{\mathbf{Y}}^{\tau})\), reflecting the overall uncertainty of the trajectory, which is also shared across all the \(K\) samples. The sample prediction module \(f_{\widehat{\mathbb{S}}}(\cdot)\) takes the past trajectories \((\mathbf{X},\mathbb{X}_{\mathcal{N}})\) and the predicted uncertainty \(\sigma_{\theta}\) as the input and predicts \(K\) normalized positions where each \(\widehat{\mathbf{S}}_{\theta,k}\in\mathbb{R}^{T_{\mathrm{f}}\times 2}\).
Note that i) the reparameterization in Eq. (4) allows us to avoid learning a raw sophisticated distribution, making the training much easier; and ii) \(K\) normalized predictions are generated simultaneously from the same underlying feature, assuring appropriately allocated trajectories with variance estimation and better capturing the multi-modalities.
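A minimal sketch of the reparameterization in Eq. (4) (our own illustration): all \(K\) initialized trajectories share one predicted mean and one scalar standard deviation, and differ only through the normalized sample predictions.

```python
import torch

def leapfrog_init(mu, sigma, S):
    """mu: (T_f, 2); sigma: scalar tensor; S: (K, T_f, 2) normalized sample predictions.
    Returns the K correlated trajectories Y_k^tau = mu + sigma * S_k."""
    return mu.unsqueeze(0) + sigma * S
```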
To implement the three trainable modules: \(f_{\mu}(\cdot)\), \(f_{\sigma}(\cdot)\), \(f_{\widehat{\mathbb{S}}}(\cdot)\), we consider a similar network design: a social encoder to capture social influence, a temporal encoder to learn temporal embedding, and an aggregation layer to fuse both social and temporal information; see Figure 2. Here we take the mean estimation module \(f_{\mu}(\cdot)\) as an example. The mean trajectory is obtained as follows,
\[\mathbf{e}_{\mu_{\theta}}^{\mathrm{social}}=\mathrm{softmax} \left(\frac{f_{\mathrm{q}}(\mathbf{X})f_{\mathrm{k}}(\mathbb{X}_{\mathcal{N}}) ^{\mathsf{T}}}{\sqrt{d}}\right)f_{\mathrm{v}}(\mathbb{X}_{\mathcal{N}}), \tag{5a}\] \[\mathbf{e}_{\mu_{\theta}}^{\mathrm{temp}}=f_{\mathrm{GRU}}(f_{ \mathrm{conv1D}}(\mathbf{X})),\] (5b) \[\mu_{\theta}=f_{\mathrm{fusion}}([\mathbf{e}_{\mu_{\theta}}^{ \mathrm{social}}:\mathbf{e}_{\mu_{\theta}}^{\mathrm{temp}}]). \tag{5c}\]
Step (5a) obtains the social embedding \(\mathbf{e}_{\mu_{\theta}}^{\mathrm{social}}\) based on multi-head attention, with \(d\) the embedding dimension and \(f_{\mathrm{q}}(\cdot),f_{\mathrm{k}}(\cdot),f_{\mathrm{v}}(\cdot)\) the query/key/value embedding functions. Step (5b) obtains the temporal embedding through the feature encoder \(f_{\mathrm{conv1D}}(\cdot)\), mapping the raw coordinates into a high-dimensional feature, followed by the gated recurrent units \(f_{\mathrm{GRU}}(\cdot)\), capturing the temporal dependence in the high-dimensional sequence. Step (5c) concatenates both social and temporal embeddings and uses a multi-layer perceptron \(f_{\mathrm{fusion}}(\cdot)\) to obtain the final mean estimation.
Figure 2: Proposed leapfrog diffusion model (LED) in inference phase. The red agent is the to-be-predicted agent. LED first predicts \(K\) initialized trajectories at \(\tau\)th denoised step through a trainable leapfrog initializer. Then, followed by a few denoising steps, LED obtains the final predictions. In leapfrog initializer, LED learns statistics and generates correlated samples with the reparameterization.
Note that the sample prediction module \(f_{\widehat{\mathbb{S}}}(\cdot)\) also takes the estimated standard deviation as input, working as
\[\mathbf{e}_{\widehat{\mathbb{S}}_{\theta}}^{\sigma}=f_{\mathrm{encode }}(\sigma_{\theta}),\] \[\widehat{\mathbb{S}}_{\theta}=f_{\mathrm{fusion}}([\mathbf{e}_{ \widehat{\mathbb{S}}_{\theta}}^{\mathrm{social}}:\mathbf{e}_{\widehat{\mathbb{S} }_{\theta}}^{\mathrm{temp}}:\mathbf{e}_{\widehat{\mathbb{S}}_{\theta}}^{ \sigma}]),\]
where an encoder \(f_{\mathrm{encode}}(\cdot)\) operates on the estimated variance \(\sigma_{\theta}\) and generates a high-dimensional embedding \(\mathbf{e}_{\widehat{\mathbb{S}}_{\theta}}^{\sigma}\). In this way, the variance estimate is also involved in the sample prediction process, instead of merely scaling the predictions.
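The following PyTorch sketch mirrors the shared encoder design of Eq. (5): social attention over neighbours, a Conv1D+GRU temporal encoder, and an MLP fusion head. It is our own schematic reconstruction; layer sizes follow the implementation details reported later, but the module name, the flattened-history embedding, and the output head are assumptions.

```python
import torch
import torch.nn as nn

class TrajEncoder(nn.Module):
    """Schematic social+temporal encoder, not the released LED architecture."""
    def __init__(self, t_past=10, t_future=20, d=64):
        super().__init__()
        self.embed = nn.Linear(2 * t_past, d)               # flatten a past trajectory
        self.attn = nn.MultiheadAttention(d, num_heads=2, batch_first=True)
        self.conv = nn.Conv1d(2, 32, kernel_size=3, padding=1)
        self.gru = nn.GRU(32, 256, batch_first=True)
        self.fuse = nn.Sequential(nn.Linear(d + 256, 256), nn.ReLU(),
                                  nn.Linear(256, 2 * t_future))
        self.t_future = t_future

    def forward(self, x, x_neigh):
        # x: (B, T_p, 2) ego history; x_neigh: (B, L, T_p, 2) neighbour histories.
        q = self.embed(x.flatten(1)).unsqueeze(1)            # (B, 1, d) query from ego
        kv = self.embed(x_neigh.flatten(2))                  # (B, L, d) keys/values
        social, _ = self.attn(q, kv, kv)                     # Eq. (5a): social embedding
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)     # Eq. (5b): per-step features
        _, temp = self.gru(h)                                # (1, B, 256) temporal embedding
        feat = torch.cat([social.squeeze(1), temp.squeeze(0)], dim=-1)
        return self.fuse(feat).view(x.shape[0], self.t_future, 2)  # e.g. mean trajectory
```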
After obtaining \(K\) samples \(\widehat{\mathcal{Y}}^{\tau}=\{\widehat{\mathbf{Y}}_{1}^{\tau},\widehat{ \mathbf{Y}}_{2}^{\tau},\ldots,\widehat{\mathbf{Y}}_{K}^{\tau}\}\) from leapfrog initializer, we execute the remaining \(\tau\) denoising steps to iteratively refine those predicted trajectories (3d).
### Denoising Module
Here we elaborate on the design of the denoising module \(f_{\mathrm{denoise}}(\cdot)\), which denoises the trajectory \(\widehat{\mathbf{Y}}_{k}^{\gamma+1}\) conditioned on the past trajectories \((\mathbf{X},\mathbb{X}_{\mathcal{N}})\). In the denoising module, two parts are trainable: a transformer-based context encoder \(f_{\mathrm{context}}(\cdot)\) to learn a social-temporal embedding, and a noise estimation module \(f_{\mathbf{\epsilon}}(\cdot)\) to estimate the noise to be removed. Mathematically, the \(\gamma\)th denoising step works as follows,
\[\mathbf{C}=f_{\mathrm{context}}(\mathbf{X},\mathbb{X}_{\mathcal{ N}}), \tag{6a}\] \[\mathbf{\epsilon}_{\theta}^{\gamma}=f_{\mathbf{\epsilon}}(\widehat{\mathbf{ Y}}_{k}^{\gamma+1},\mathbf{C},\gamma+1),\] (6b) \[\widehat{\mathbf{Y}}_{k}^{\gamma}=\frac{1}{\sqrt{\alpha_{\gamma}}}( \widehat{\mathbf{Y}}_{k}^{\gamma+1}\!\!-\!\frac{1-\alpha_{\gamma}}{\sqrt{1- \bar{\alpha}_{\gamma}}}\mathbf{\epsilon}_{\theta}^{\gamma})\!+\!\!\sqrt{\!1\!-\! \alpha_{\gamma}}\mathbf{z}, \tag{6c}\]
where \(\alpha_{\gamma}\) and \(\bar{\alpha}_{\gamma}=\prod_{i=1}^{\gamma}\alpha_{i}\) are parameters in the diffusion process and \(\mathbf{z}\sim\mathcal{N}(\mathbf{z};\mathbf{0},\mathbf{I})\) is a noise. Step (6a) uses a context encoder \(f_{\mathrm{context}}(\cdot)\) on past trajectories \((\mathbf{X},\mathbb{X}_{\mathcal{N}})\) to obtain the context condition \(\mathbf{C}\), which shares a similar structure to mean estimation module \(f_{\mu}(\cdot)\); Step (6b) estimates the noise \(\mathbf{\epsilon}_{\theta}^{\gamma}\) in the noisy trajectory \(\widehat{\mathbf{Y}}_{k}^{\gamma+1}\) through noise estimation \(f_{\mathbf{\epsilon}}(\cdot)\) implemented by multi-layer perceptions with the context \(\mathbf{C}\); Step (6c) provides a standard denoising step [18]; see more details in the supplementary material.
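A compact sketch of the update in Eq. (6c), given the predicted noise (our own illustration; at the final step \(\gamma=0\) the added noise \(\mathbf{z}\) is typically set to zero).

```python
import torch

def denoise_step(y_next, eps_theta, alpha_g, alpha_bar_g, add_noise=True):
    """One reverse step: takes Y^{gamma+1} and the estimated noise, returns Y^gamma."""
    z = torch.randn_like(y_next) if add_noise else torch.zeros_like(y_next)
    mean = (y_next - (1.0 - alpha_g) / (1.0 - alpha_bar_g) ** 0.5 * eps_theta) / alpha_g ** 0.5
    return mean + (1.0 - alpha_g) ** 0.5 * z
```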
### Training Objective
To train a leapfrog diffusion model, we consider a two-stage training strategy, where the first stage trains a denoising module and the second stage focuses on a leapfrog initializer. The reason to use two stages is because the training of leapfrog initializer is more stable given fixed distribution \(\mathcal{P}(\widehat{\mathbf{Y}}^{\tau})\), avoiding non-convergent training.
Concretely, the first stage trains a denoising module \(f_{\mathrm{denoise}}(\cdot)\) in Step (3d) based on a standard training schedule of a diffusion model [15, 18] through noise estimation loss:
\[\mathcal{L}_{\mathrm{NE}}=\|\mathbf{\epsilon}-f_{\mathbf{\epsilon}}(\mathbf{Y}^{\gamma+1},f_{\mathrm{context}}(\mathbf{X},\mathbb{X}_{\mathcal{N}}),\gamma+1)\|_{2},\]
where \(\gamma\sim\mathrm{U}\{1,2,\cdots,\Gamma\}\), \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{\epsilon};\mathbf{0},\mathbf{I})\) and the diffused trajectory \(\mathbf{Y}^{\gamma+1}=\sqrt{\bar{\alpha}_{\gamma}}\;\mathbf{Y}^{0}+\sqrt{1- \bar{\alpha}_{\gamma}}\mathbf{\epsilon}\). We then backpropagate this loss and train the parameters in the context encoder \(f_{\mathrm{context}}(\cdot)\) and the noise estimation module \(f_{\mathbf{\epsilon}}(\cdot)\).
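Stage one can then be written as the following training-step sketch (our own illustration for a single example; the sampling range and the schedule indexing are assumptions consistent with the equations above).

```python
import torch

def stage1_step(f_eps, f_context, y0, x, x_neigh, alphas_cumprod, Gamma):
    """One noise-estimation training step; assumes alphas_cumprod is indexed 1..Gamma."""
    gamma = int(torch.randint(1, Gamma + 1, (1,)))         # gamma ~ U{1, ..., Gamma}
    eps = torch.randn_like(y0)
    abar = alphas_cumprod[gamma]                           # \bar{alpha}_gamma
    y_diffused = abar.sqrt() * y0 + (1.0 - abar).sqrt() * eps   # diffused trajectory
    ctx = f_context(x, x_neigh)
    return torch.norm(eps - f_eps(y_diffused, ctx, gamma + 1))  # L_NE
```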
In the second stage, we optimize a leapfrog diffusion model with a trainable leapfrog initializer and frozen denoising modules. For each sample, the loss function is
\[\mathcal{L} = \mathcal{L}_{\mathrm{distance}}+\mathcal{L}_{\mathrm{uncertainty}}\] \[= w\cdot\min_{k}\|\mathbf{Y}\!\!-\!\widehat{\mathbf{Y}}_{k}\|_{2}+ \Big{(}\frac{\sum_{k}\|\mathbf{Y}\!\!-\!\widehat{\mathbf{Y}}_{k}\|_{2}}{\sigma_ {\theta}^{2}K}+\log\sigma_{\theta}^{2}\Big{)},\]
where \(w\in\mathbb{R}\) is a hyperparameter weight. The first term constrains the minimum distance in \(K\) predictions. Intuitively, if a leapfrog initializer generates high-quality estimations for distribution \(\mathcal{P}(\widehat{\mathbf{Y}}^{\tau})\), then one of the \(K\) predictions in \(\widehat{\mathcal{Y}}\) should be close to the ground-truth trajectory \(\mathbf{Y}\). The second term normalizes the variance estimation \(\sigma_{\theta}\) in reparameterization (4) through an uncertainty loss, balancing the prediction diversity and mean accuracy. Note that the variance estimation controls the dispersion of the predictions, bridging scenery complexity and prediction diversity. The first part \(\sum_{k}\frac{\|\mathbf{Y}\!\!-\!\widehat{\mathbf{Y}}_{k}\|_{2}}{\sigma_{\theta} ^{2}K}\) makes the value of \(\sigma_{\theta}\) proportional to the complexity of the scenario. The second part \(\log\sigma_{\theta}^{2}\) is a regulariser used to avoid a trivial solution for \(\sigma_{\theta}\), i.e., generating high variance for all predictions.
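A sketch of the stage-two objective (our own illustration; the trajectory error is computed per timestep and averaged, which is one natural reading of the norm in the loss).

```python
import torch

def stage2_loss(preds, gt, sigma, w=50.0):
    """preds: (K, T_f, 2) final denoised predictions; gt: (T_f, 2); sigma: scalar tensor."""
    dists = (preds - gt.unsqueeze(0)).norm(dim=-1).mean(dim=-1)        # (K,) average errors
    l_distance = w * dists.min()                                       # best-of-K term
    l_uncertainty = dists.mean() / sigma.pow(2) + torch.log(sigma.pow(2))
    return l_distance + l_uncertainty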
Technically, we can also explicitly supervise the estimation of leapfrog initializer in stage two, since the distribution \(\mathcal{P}(\widehat{\mathbf{Y}}^{\tau})\) can be denoised from a normal distribution. For the explicit supervision, we draw \(M\gg K\) samples from \(\mathcal{P}(\widehat{\mathbf{Y}}^{\Gamma})\) under the normal distribution and iteratively denoise these samples through Step (2d) until we get expected denoised trajectories \(\widehat{\mathbf{Y}}^{\tau}\). And then, we calculate the statistics of the denoised distribution \(\mathcal{P}(\widehat{\mathbf{Y}}^{\tau})\) using these \(M\) samples, serving as explicit supervisions for mean estimation \(f_{\mu}(\cdot)\) and variance estimation \(f_{\sigma}(\cdot)\). However, since \(\tau\ll\Gamma\), we need to run \((\Gamma-\tau)\approx\Gamma\)-steps denoising for \(M\gg K\) samples to get statistics, resulting in unacceptable time and storage consumption for training (e.g. \(\sim\) 6 days per epoch on NBA
dataset). We thus do not use explicit supervision.
### Inference Phase
During inference, instead of \(\Gamma\) denoising steps, the leapfrog diffusion model only takes \(\tau\) steps, accelerating the inference. To be specific, we first generate \(K\) correlated samples to model the distribution \(\mathcal{P}(\hat{\mathbf{Y}}^{\tau})\) using the trained leapfrog initializer. Then, these samples are fed into the denoising process and iteratively refined to produce the final predictions; see Algorithm 1.
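The inference procedure can be summarized by the following sketch (our own illustration of Algorithm 1; the schedule indexing is an assumption).

```python
import torch

@torch.no_grad()
def led_inference(f_lsg, f_context, f_eps, x, x_neigh, alphas, alphas_cumprod, tau):
    y = f_lsg(x, x_neigh)                        # (K, T_f, 2): correlated samples at step tau
    ctx = f_context(x, x_neigh)
    for g in reversed(range(tau)):               # only tau denoising steps: gamma = tau-1, ..., 0
        eps = f_eps(y, ctx, g + 1)
        z = torch.randn_like(y) if g > 0 else torch.zeros_like(y)
        y = (y - (1 - alphas[g]) / (1 - alphas_cumprod[g]).sqrt() * eps) / alphas[g].sqrt() \
            + (1 - alphas[g]).sqrt() * z
    return y
```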
## 5 Experiments
### Datasets
We evaluate our method on four trajectory prediction datasets, including two sports datasets (NBA SportVU Dataset, NFL Football Dataset) and two pedestrian datasets (Stanford Drone Dataset, ETH-UCY).
**NBA SportVU Dataset (NBA)**: NBA trajectory dataset is collected by NBA using the SportVU tracking system, which records the trajectories of the 10 players and the ball in real basketball games. In this task, we predict the future 4.0s (20 frames) using the 2.0s (10 frames) past trajectory.
**NFL Football Dataset (NFL)**: NFL Football Dataset records the position of every player on the field during each play in the 2017 year. We predict the 22 players' (11 players per team) and the ball's future 3.2s (16 frames) trajectory using the historical 1.6s (8 frames) trajectory.
**Stanford Drone Dataset (SDD)**: SDD is a large-scale pedestrian dataset collected from a university campus in bird's eye view. Following previous works [28, 51], we use the standard train-test split and predict the future 4.8s (12 frames) using 3.2s (8 frames) past.
**ETH-UCY**: ETH-UCY dataset contains 5 subsets: ETH, HOTEL, UNIV, ZARA1, and ZARA2, containing various motion scenes. We use same segment length of 8s as SDD following previous works [19, 28] and use the leave-one-out approach with four sets for training and a left set for testing.
### Implementation Details
In the leapfrog diffusion model, we set the diffusion step \(\Gamma=100\) for all four datasets and the leapfrog step \(\tau=5\) on the NBA dataset. In the leapfrog initializer, we build a transformer-based social encoder with a feed-forward dimension of 256, 2 attention heads, and 2 encoder layers; the temporal encoder uses a 1D convolution with kernel size 3 and 32 output channels, followed by a GRU with a hidden size of 256. In the denoising module, we apply a transformer with the same parameters to extract the context information, and we build the core denoising module with a hidden size of 256. To train the leapfrog diffusion model, we first train the denoising module for 100 epochs with an initial learning rate of \(10^{-2}\), decayed by half every 16 epochs. With the denoising module frozen, we then train the leapfrog initializer for 200 epochs with an initial learning rate of \(10^{-4}\), decayed by 0.9 every 32 epochs. We set the weight parameter \(w=50\) to emphasize the distance loss. The entire framework is trained with the Adam optimizer on one GTX-3090 GPU. All models are implemented with PyTorch 1.7.1. See more details in the supplementary material.
### Comparison with SOTA Methods
We measure the performance of different trajectory prediction methods using two metrics: minADE\({}_{K}\) and minFDE\({}_{K}\), following previous work [28, 50]. 1) minADE\({}_{K}\) calculates the minimum time-averaged distance among \(K\) predictions and the ground-truth future trajectory; 2) minFDE\({}_{K}\) measures the minimum distance among the \(K\) predicted endpoints and the ground-truth endpoints. We calculate these two metrics at different timestamps on sports datasets to better evaluate the performance.
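For reference, the two metrics can be computed as in the short sketch below (our own illustration; shapes are assumptions).

```python
import numpy as np

def min_ade_fde(preds, gt):
    """preds: (K, T_f, 2) predicted futures; gt: (T_f, 2). Returns (minADE_K, minFDE_K)."""
    err = np.linalg.norm(preds - gt[None], axis=-1)   # (K, T_f) per-step displacement
    return err.mean(axis=1).min(), err[:, -1].min()
```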
**NBA dataset.** We compare our method with the current 10 state-of-the-art prediction methods at different timestamps; see Table 1. We see that i) our method significantly outperforms all baselines in ADE and FDE at all timestamps. Our method reduces the ADE/FDE at 4.0s from 0.96/1.27
| Time | Social-GAN [16] | STGAT [20] | Social-STGCNN [32] | PECNet [28] | STAR [53] | Trajectron++ | MemoNet | NPSN [2] | GroupNet [50] | MID [15] | **Ours** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.0s | 0.41/0.62 | 0.35/0.51 | 0.34/0.48 | 0.40/0.71 | 0.43/0.66 | 0.30/0.38 | 0.38/0.56 | 0.35/0.58 | 0.26/0.34 | 0.28/0.37 | **0.18/0.27** |
| 2.0s | 0.81/1.32 | 0.73/1.10 | 0.71/0.94 | 0.83/1.61 | 0.75/1.24 | 0.59/0.82 | 0.71/1.14 | 0.68/1.23 | 0.49/0.70 | 0.51/0.72 | **0.37/0.56** |
| 3.0s | 1.19/1.94 | 1.04/1.75 | 1.09/1.77 | 1.27/2.44 | 1.03/1.51 | 0.85/1.24 | 1.00/1.57 | 1.01/1.76 | 0.73/1.02 | 0.71/0.98 | **0.58/0.84** |
| Total (4.0s) | 1.59/2.41 | 1.40/2.18 | 1.53/2.26 | 1.69/2.95 | 1.13/2.01 | 1.15/1.57 | 1.25/1.47 | 1.31/1.79 | 0.96/1.30 | 0.96/1.27 | **0.81/1.10** |

Table 1: Comparison with baseline models on the NBA dataset. minADE\({}_{20}\)/minFDE\({}_{20}\) (meters) are reported. **Bold** fonts represent the best result. Compared to the previous SOTA method, MID, our method achieves a 15.6%/13.4% ADE/FDE improvement.
| Time | Social-GAN [16] | STGAT [20] | Social-STGCNN [32] | PECNet [28] | STAR [53] | Trajectron++ [39] | LB-EBM [35] | NPSN [2] | GroupNet [50] | MID [15] | **Ours** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.0s | 0.37/0.68 | 0.35/0.64 | 0.45/0.64 | 0.52/0.97 | 0.49/0.84 | 0.41/0.65 | 0.75/1.05 | 0.43/0.64 | 0.32/0.57 | 0.30/0.58 | **0.21/0.34** |
| 2.0s | 0.83/1.53 | 0.82/1.60 | 1.06/1.87 | 1.19/2.47 | 1.02/1.84 | 0.93/1.65 | 1.26/2.28 | 0.83/1.52 | 0.73/1.39 | 0.71/1.31 | **0.49/0.91** |
| Total (3.2s) | 1.44/2.51 | 1.39/2.48 | 1.82/3.18 | 1.99/3.84 | 1.51/2.97 | 1.54/2.58 | 1.90/3.25 | 1.32/2.27 | 1.21/2.15 | 1.14/1.92 | **0.87/1.50** |

Table 2: Comparison with baseline models on the NFL dataset. minADE\({}_{20}\)/minFDE\({}_{20}\) (meters) are reported. **Bold** fonts represent the best result. Compared to the previous SOTA method, MID, our method achieves a 23.7%/21.9% improvement.
to 0.81/1.10 compared to the current state-of-the-art methods, MID, achieving 15.6%/13.4% improvement; and ii) performance improvement over previous methods increases with timestamps, reflecting the proposed method can capture more sophisticated distributions at further timestamps.
**NFL dataset.** We compare our method with the current 10 state-of-the-art prediction methods at different timestamps; see Table 2. We see that our model significantly outperforms all baselines in ADE and FDE at all timestamps. Our method reduces the ADE/FDE at 3.2s from 1.14/1.92 to 0.87/1.50 compared to the current state-of-the-art methods, MID, achieving 23.7%/21.9% improvement.
**SDD dataset.** We compare our method with the current 10 state-of-the-art prediction methods; see Table 3. We see that our method reduces FDE from 11.85 to 11.66 compared to the current state-of-the-art method, NPSN. Notably, the original MID [15] uses a different protocol from all the other methods, we update its code for a fair comparison.
**ETH-UCY dataset.** We compare our method with 10 state-of-the-art prediction methods; see Table 4. We see that i) our method reduces FDE from 0.35 to 0.33 compared to the current state-of-the-art method, MemoNet, achieving a 5.7\(\%\) improvement; and ii) our method achieves the best or second best to the best performance on most of the subsets.
### Ablation Studies
**Effect of components in leapfrog initializer.** We explore the effect of three key components in leapfrog initializer, including mean estimation, variance estimation, and sample prediction. Table 5 presents the results with mean and variance based on 5 experimental trials. We see that i) the leapfrog initializer achieves stable results with better performance even when prediction number \(K\) is small; and ii) the proposed mean estimation, variance estimation, and sample prediction all contribute to promoting prediction accuracy.
**Effect of leapfrog step \(\tau\).** Table 6 reports the influence of different leapfrog steps in LED. We see that i) under similar inference time, our method significantly outperforms the standard diffusion model with better representation ability; ii) when \(\tau\) is too small, leapfrog initializer targets to learn more sophisticated distribution, causing worse prediction performance; and iii) when \(\tau\) is too large, leapfrog initializer has already captured the denoised distribution, encountering performance bottleneck and wasting inference time.
**Comparison to other fast sampling methods.** Table 7 compares the performance of our method and the other two fast sampling methods: PD [38] and DDIM [41]. We see that our method significantly outperforms two fast sampling methods under similar inference time since the proposed LED promotes the correlation between predictions.
| Method | Steps | 1.0s | 2.0s | 3.0s | Total (4.0s) | Inference (ms) |
| --- | --- | --- | --- | --- | --- | --- |
| Standard Diffusion (\(\Gamma\)) | 10 | 0.45/0.51 | 0.98/1.55 | 1.62/2.56 | 2.21/2.77 | \(\sim\)87 |
|  | 50 | 0.26/0.36 | 0.56/0.91 | 0.89/1.42 | 1.21/1.73 | \(\sim\)446 |
|  | 100 | 0.21/0.28 | 0.44/0.64 | 0.69/0.95 | 0.94/1.21 | \(\sim\)86 |
|  | 200 | 0.21/0.29 | 0.44/0.65 | 0.69/0.97 | 0.94/1.21 | \(>\)1s |
|  | 500 | 0.21/0.30 | 0.45/0.68 | 0.70/0.99 | 0.95/1.23 | \(>\)1s |
| Leapfrog Diffusion (\(\tau\)) | 3 | 0.20/0.31 | 0.40/0.62 | 0.62/0.88 | 0.84/1.10 | \(\sim\)30 |
|  | 5 | 0.18/**0.27** | **0.37/0.56** | **0.58/0.84** | **0.81**/1.10 | \(\sim\)46 |
|  | 10 | **0.17**/0.27 | 0.37/0.58 | 0.59/0.85 | 0.82/**1.08** | \(\sim\)89 |

Table 6: Different steps \(\Gamma/\tau\) in the standard/leapfrog diffusion model on NBA. \(\tau=5\) provides the best performance.
### Qualitative Results
**Visualization of predicted trajectory**. Figure 3 compares the predicted trajectories of two baselines PECNet and GroupNet, our LED (Ours), and the ground-truth (GT) trajectories on the NBA dataset. We see that our method produces more accurate predictions than the previous methods.
**Visualization of estimated mean and variance**. Figure 4 illustrates the mean and variance estimation in the leapfrog initializer under four scenes on the NBA dataset. We see that the variance estimation can well describe the scene complexity for the current agent by the learned variance, showing the rationality of our variance estimation.
**Visualization of different sampling mechanisms**. Figure 5 compares two sampling mechanisms: I.I.D sampling and correlated sampling in the leapfrog initializer. We see that the proposed correlated sampling can appropriately allocate sample diversity and capture more modalities when the number of trials \(K\) is small.
## 6 Conclusion
This paper proposes the leapfrog diffusion model (LED), a diffusion-based trajectory prediction model, which significantly accelerates the overall inference speed and enables appropriate allocations of multiple correlated predictions. During the inference, LED directly models and samples from the denoised distribution through a novel leapfrog initializer with reparameterization. Extensive experiments show that our method achieves state-of-the-art performance on four real-world datasets and satisfies real-time inference needs.
**Limitation and future work**. This work achieves inference acceleration for trajectory prediction tasks partially because the dimension of trajectory data is relatively small and the corresponding distribution is much easier to learn compared with those of image/video data. A possible future work is to explore diffusion models and fast sampling methods for higher-dimensional tasks.
## Acknowledgements
This research is partially supported by National Natural Science Foundation of China under Grant 62171276 and the Science and Technology Commission of Shanghai Municipal under Grant 21511100900 and 22DZ2229005.
| Method | 1.0s | 2.0s | 3.0s | Total (4.0s) | Inference (ms) |
| --- | --- | --- | --- | --- | --- |
| PD (K=1) | 0.20/0.33 | 0.45/0.75 | 0.72/1.13 | 0.98/1.39 | \(\sim\)452 |
| PD (K=2) | 0.21/0.34 | 0.46/0.78 | 0.73/1.15 | 0.98/1.41 | \(\sim\)230 |
| PD (K=3) | 0.23/0.37 | 0.48/0.79 | 0.73/1.15 | 0.98/1.43 | \(\sim\)121 |
| PD (K=4) | 0.25/0.38 | 0.50/0.80 | 0.75/1.16 | 0.99/1.44 | \(\sim\)64 |
| DDIM (S=2) | 0.20/0.29 | 0.42/0.65 | 0.66/0.96 | 0.91/1.21 | \(\sim\)530 |
| DDIM (S=10) | 0.22/0.32 | 0.44/0.71 | 0.69/1.04 | 0.93/1.31 | \(\sim\)107 |
| DDIM (S=20) | 0.24/0.35 | 0.49/0.81 | 0.76/1.21 | 1.02/1.51 | \(\sim\)54 |
| **Ours** | **0.18/0.27** | **0.37/0.56** | **0.58/0.84** | **0.81/1.10** | \(\sim\)**46** |

Table 7: Comparison to other fast sampling methods on NBA. \(\eta=1\) in DDIM. Our method achieves the best performance.
Figure 4: Mean and variance estimation in leapfrog initializer on NBA with \(K\)=20. The estimated variance can reflect the scene complexity of the current agent and produce diverse predictions.
Figure 5: Comparison between I.I.D and correlated sampling mechanisms in NFL with \(K\)=4. Correlated samples appropriately capture multi-modalities, significantly improving prediction performances.
Figure 3: Visualization comparison on NBA. We compare the best-of-20 predictions by our method and two previous methods. Our method generates a more precise trajectory prediction. (Light color: past trajectory; blue/red/green color: two teams and the basketball.) |
2310.19037 | Spherically Symmetric Configurations in Unimodular Gravity | Unimodular gravity (UG) is considered, under many aspects, equivalent to
General Relativity (GR), even if the theory is invariant under a more
restricted diffeomorphic class of transformations. We discuss the conditions
for the equivalence between the two formulations by applying the UG to the
static and spherically symmetric configurations being the energy-momentum
tensor sourced by a scalar field or by the electromagnetic field. We argue that
the equivalence between UG and GR may be broken when analyzing the stability of
the solutions at perturbative level. | Júlio C. Fabris, Mahamadou Hamani Daouda, Hermano Velten | 2023-10-29T15:04:06Z | http://arxiv.org/abs/2310.19037v1 | # Spherically Symmetric Configurations in Unimodular Gravity
###### Abstract
Unimodular gravity (UG) is considered, under many aspects, equivalent to General Relativity (GR), even if the theory is invariant under a more restricted diffeomorphic class of transformations. We discuss the conditions for the equivalence between the two formulations by applying the UG to the static and spherically symmetric configurations being the energy-momentum tensor sourced by a scalar field or by the electromagnetic field. We argue that the equivalence between UG and GR may be broken when analyzing the stability of the solutions at perturbative level.
## I Introduction
General Relativity (GR) is the modern theory of the gravitational interaction. The gravitational phenomenon is viewed as the structure of space-time itself, induced dynamically by matter. GR is considered a very successful theory: all local tests confirm its predictions with high precision. At cosmological scales, it leads to the Standard Cosmological Model (SCM), which consistently addresses all available observations, from the scales of galaxies up to the largest structures of the universe. It also successfully accounts for the different phases of the evolution of the universe, including the primordial phases at least up to the primordial nucleosynthesis scale. Seen from this point of view, the SCM, based on GR, is an almost perfect model to describe the entire evolution of the universe.
However, seen from a different perspective, the SCM is at least problematic. To account for the observations at different scales, it demands the introduction of two as yet undetected components in the matter/energy content of the universe. The dynamics of galaxies and clusters of galaxies, and even the formation of such structures, asks for an additional pressureless component, dubbed dark matter, which manifests itself only indirectly. Moreover, to explain the present accelerated phase of the universe and the CMB spectrum, and to obtain an age of the universe consistent with the age of globular clusters, the SCM asks for another component, with negative pressure, which does not agglomerate, dubbed dark energy.
Dark energy is now frequently associated with the vacuum energy predicted by quantum field theory. However, its observed value appears inconsistent with the theoretical predictions by dozens of orders of magnitude [1; 2; 3]. There are many proposals to cope with this problem. One of them is to replace it by a self-interacting scalar field, called quintessence [4]. However, the quintessence program must still explain why the vacuum energy is exactly, or at least nearly, zero. Therefore, the vacuum energy must, somehow, _degravitate_[5; 6; 7]. There are many mechanisms to implement such _degravitation_, but so far these proposals remain, in some sense, under construction. For a general overview of the dark energy problem, see Ref. [8].
One interesting approach to the cosmological constant problem described above is the unimodular gravity (UG) class of theories [9; 10; 11; 12], in which the determinant of the metric, \(g\), is fixed. Originally, \(g=1\), but other possibilities can be explored, see the next section. UG leads to traceless gravitational equations. The energy-momentum tensor is no longer necessarily conserved, since UG is not invariant under the full diffeomorphism group, but only under a more restricted structure called transverse diffeomorphisms [13]. If the conservation of the energy-momentum tensor is imposed, GR is recovered with a cosmological term that appears as an integration constant. If the conservation of the energy-momentum tensor is not imposed, as we will show below, a class of _dynamical vacuum_ theories is obtained, implying an interaction of the matter sector with the decaying cosmological term.
In previous works we have explored the distinction between GR and UG mainly at the perturbative level in the cosmological context, see [14] and references therein (see also [15]). Here we will extend such studies to static, spherically symmetric configurations. In UG, with the imposition of the conservation of the energy-momentum tensor, the static, spherically symmetric solutions are identical to those of GR, but now containing a cosmological constant. The non-conservation of the energy-momentum tensor, on the other hand, can be mapped into the GR structure with a dynamical cosmological term. Indeed, the non-conservation of the energy-momentum tensor is allowed in this context, leading to a new formulation of the UG theory. It is also worth mentioning that such a non-conservation mechanism appears in many other situations; for a review, see Ref. [16]. All these aspects are discussed in the next section. In section III the general equations for a static, spherically symmetric configuration are set out. Some examples of interacting models, resulting from the non-conservation of the energy-momentum tensor, are presented in section IV in the presence of an electromagnetic field, and in section V for a self-interacting scalar field. For the latter case, we perform, in section VI, a perturbative analysis aiming to show how the usual results of GR change in the unimodular context. In particular, the unimodular condition on the determinant of the metric implies vanishing perturbations at the linear level. The results obtained are discussed in section VII.
## II Field equations
The Einstein-Hilbert action, in presence of the cosmological term and the matter Lagrangian,
\[{\cal S}=\int d^{4}x\sqrt{-g}\biggl{\{}\frac{R}{16\pi G}+2\Lambda+{\cal L}_{m }\biggr{\}}, \tag{1}\]
implies in the following field equations:
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi GT_{\mu\nu}+g_{\mu\nu}\Lambda. \tag{2}\]
The application of the Bianchi identities leads to the energy-momentum tensor \(T^{\mu\nu}\) conservation:
\[{T^{\mu\nu}}_{;\mu}=0. \tag{3}\]
The conservation laws related to the energy-momentum tensor can be alternatively deduced from the invariance of the Einstein-Hilbert Lagrangian by diffeomorphic transformations [17].
In order to obtain the UG equations, we introduce a constraint in the action via a Lagrange Multiplier \(\chi\) and an external field \(\xi\)[15]:
\[{\cal S}=\int d^{4}x\biggl{\{}\sqrt{-g}R-\chi(\sqrt{-g}-\xi)\biggr{\}}+\int d^ {4}x\sqrt{-g}{\cal L}_{m}. \tag{4}\]
The presence of the external field allows one to use a suitable coordinate system according to the problem under analysis, for example, the usual spherical coordinates or the quasi-global coordinates employed in spherical symmetric space-time.
The final field equations for this case are
\[R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R = 8\pi G\biggl{(}T_{\mu\nu}-\frac{1}{4}g_{\mu\nu}T\biggr{)}, \tag{5}\] \[\frac{R^{;\nu}}{4} = 8\pi G\biggl{(}{T^{\mu\nu}}_{;\mu}-\frac{1}{4}T^{;\nu}\biggr{)}. \tag{6}\]
The above equation (6) is obtained by using the Bianchi identities in (5).
As highlighted in Ref. [17], it is important to note that in UG the conservation of the energy-momentum tensor cannot be derived from the conventional diffeomorphism invariance, because the theory only exhibits invariance with respect to a limited set of diffeomorphisms, referred to as transverse diffeomorphisms. The latter implies that the divergence of the energy-momentum tensor is equal to the gradient of an (undetermined) scalar function:
\[{T^{\mu}_{\nu}}_{;\mu}=\Theta_{;\nu}, \tag{7}\]
On one hand, it is entirely permissible to set the gradient of \(\Theta\) to zero. If this is done, we recover (2), with \(\Lambda\) appearing as an integration constant. On the other hand, one can also choose,
\[\Theta=\frac{R}{4}+2\pi GT. \tag{8}\]
From now on, we will identify \(\Theta\equiv-\Lambda\). With this identification, \(\Lambda\) becomes a dynamical term. If \(\Lambda\) is constant, as already stressed, we return to the GR equations in presence of a cosmological constant. But, if \(\Lambda\) is a function of the space-time coordinates, we end up with the following set of equations,
\[R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R = 8\pi G\bigg{\{}T_{\mu\nu}-\frac{1}{4}g_{\mu\nu}T\bigg{\}}, \tag{9}\] \[T^{\mu}_{\nu\;;\mu}=-\Lambda_{;\nu}. \tag{10}\]
This is equivalent (up to the restriction in the diffeomorphic class of transformations) to the GR equations in the presence of a dynamical cosmological term:
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R = 8\pi GT_{\mu\nu}+g_{\mu\nu}\Lambda, \tag{11}\] \[T^{\mu}_{\nu\;;\mu}=-\Lambda_{;\nu}, \tag{12}\]
provided that \(\Lambda\) is identified with \(\Theta\) as given by (8). Hence, the non-conservation of the energy-momentum tensor allows to map the UG theory into GR equipped with a dynamical cosmological term, implying in an interacting like model in the GR context.
It is also convenient, for reasons that will become clear later on in the work, to write down the UG equations in a more compact form such as
\[E_{\mu\nu}=8\pi G\,\tau_{\mu\nu}, \tag{13}\]
with the definitions,
\[E_{\mu\nu} = R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R=G_{\mu\nu}+\frac{1}{4}g_{\mu \nu}R, \tag{14}\] \[\tau_{\mu\nu} = T_{\mu\nu}-\frac{1}{4}g_{\mu\nu}T. \tag{15}\]
We will call \(E_{\mu\nu}\) the unimodular gravitational tensor and \(\tau_{\mu\nu}\) the unimodular energy-momentum tensor.
## III Equations for a symmetric and static configuration
In this section, we will write down the general expressions for a symmetric and static configuration. In the appendix A the corresponding expressions with a time dependence will be derived, which are necessary to perform the perturbative analysis to be described later.
Let us consider the metric,
\[ds^{2}=e^{2\gamma}dt^{2}-e^{2\alpha}du^{2}-e^{2\beta}d\Omega^{2}. \tag{16}\]
The non-vanishing Christoffel symbols are the following.
\[\Gamma^{0}_{10}=\gamma^{\prime}\quad,\quad\Gamma^{1}_{00}=e^{2(\gamma-\alpha)}\gamma^{\prime}, \tag{17}\] \[\Gamma^{1}_{11}=\alpha^{\prime}\quad,\quad\Gamma^{1}_{22}=-e^{2(\beta-\alpha)}\beta^{\prime}\quad,\quad\Gamma^{1}_{33}=-e^{2(\beta-\alpha)}\beta^{\prime}\sin^{2}\theta,\] (18) \[\Gamma^{2}_{12}=\Gamma^{3}_{13}=\beta^{\prime},\quad\Gamma^{2}_{33}=-\sin\theta\cos\theta\quad,\quad\Gamma^{3}_{23}=\cot\theta. \tag{19}\]
Also, the non-vanishing components of the Ricci tensor and the Ricci scalar are the following.
\[R_{00} = e^{2(\gamma-\alpha)}[\gamma^{\prime\prime}+\gamma^{\prime}( \gamma^{\prime}+2\beta^{\prime}-\alpha^{\prime})], \tag{20}\] \[R_{11} = -\gamma^{\prime\prime}-2\beta^{\prime\prime}+\gamma^{\prime}( \alpha^{\prime}-\gamma^{\prime})+2\beta^{\prime}(\alpha^{\prime}-\beta^{ \prime}),\] (21) \[R_{22} = 1-e^{2(\beta-\alpha)}[\beta^{\prime\prime}+\beta^{\prime}( \gamma^{\prime}+2\beta^{\prime}-\alpha^{\prime})],\] (22) \[R_{33} = R_{22}\sin^{2}\theta,\] (23) \[R = -2e^{-2\beta}+2e^{-2\alpha}[\gamma^{\prime\prime}+2\beta^{\prime \prime}+3\beta^{\prime 2}+\gamma^{\prime}(\gamma^{\prime}+2\beta^{\prime}-\alpha^{\prime})-2 \alpha^{\prime}\beta^{\prime}]. \tag{24}\]
Consequently, the non-vanishing components of the unimodular gravitational tensor defined in (14) are the following:
\[E_{00} = e^{2(\gamma-\alpha)}\biggl{[}\frac{\gamma^{\prime\prime}}{2}- \beta^{\prime\prime}-\frac{3}{2}\beta^{\prime 2}+\frac{\gamma^{\prime}}{2}( \gamma^{\prime}+2\beta^{\prime}-\alpha^{\prime})+\beta^{\prime}\alpha^{\prime} \biggr{]}+\frac{e^{2(\gamma-\beta)}}{2}, \tag{25}\] \[E_{11} = -\frac{\gamma^{\prime\prime}}{2}-\beta^{\prime\prime}-\frac{ \beta^{\prime 2}}{2}-\frac{\gamma^{\prime}}{2}(\gamma^{\prime}-\alpha^{\prime})+ \beta^{\prime}(\alpha^{\prime}+\gamma^{\prime})-\frac{e^{2(\alpha-\beta)}}{2},\] (26) \[E_{22} = \frac{1}{2}+\frac{e^{2(\beta-\alpha)}}{2}[\gamma^{\prime\prime}- \beta^{\prime 2}+\gamma^{\prime}(\gamma^{\prime}-\alpha^{\prime})],\] (27) \[E_{33} = E_{22}\sin^{2}\theta. \tag{28}\]
The left-hand side of the UG field equations for the symmetric and static configuration has been set up. The next step is to characterize the source field. In the next couple of sections, the electromagnetic field and a scalar field will be considered as sources of the gravitational field.
## IV The electromagnetic field
For the case of a electromagnetic field as the source of the energy-momentum tensor one has,
\[8\pi GT_{\mu\nu}^{EM}=-2\biggl{\{}F_{\mu\rho}F_{\nu}^{\rho}-\frac{1}{4}g_{\mu \nu}F_{\rho\sigma}F^{\rho\sigma}\biggr{\}}. \tag{29}\]
It is worth mentioning that it has zero trace:
\[T^{EM}=0. \tag{30}\]
Equations (5) and (6) become,
\[R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R = -2\biggl{\{}F_{\mu\rho}F_{\nu}{}^{\rho}-\frac{1}{4}g_{\mu\nu}F_{ \rho\sigma}F^{\rho\sigma}\biggr{\}}, \tag{31}\] \[F^{\mu\rho}{}_{;\mu}F_{\nu\rho} = -\frac{R_{;\nu}}{8}. \tag{32}\]
Remark that, contrary to GR, the traceless character of the energy-momentum tensor does not imply \(R=0\), unless the Maxwell equations are obeyed.
Imposing the spherical symmetry, the only non-vanishing component is \(F^{01}=E\equiv E(r)\). Then, the equations are:
\[\frac{\gamma^{\prime\prime}}{2}-\beta^{\prime\prime}-\frac{3}{2} \beta^{\prime 2}+\frac{\gamma^{\prime}}{2}(\gamma^{\prime}+2\beta^{\prime}- \alpha^{\prime})+\beta^{\prime}\alpha^{\prime}+\frac{e^{2(\alpha-\beta)}}{2} = e^{2\gamma+4\alpha}E^{2}, \tag{33}\] \[-\frac{\gamma^{\prime\prime}}{2}-\beta^{\prime\prime}-\frac{ \beta^{\prime 2}}{2}+\frac{\gamma^{\prime}}{2}(\alpha^{\prime}+2\beta^{\prime}- \gamma^{\prime})+\beta^{\prime}\alpha^{\prime}-\frac{e^{2(\alpha-\beta)}}{2} = -e^{2\gamma+4\alpha}E^{2},\] (34) \[\frac{1}{2}\biggl{[}\gamma^{\prime\prime}-\beta^{\prime 2}+\gamma^{ \prime}(\gamma^{\prime}-\alpha^{\prime})\biggr{]}+\frac{e^{2(\alpha-\beta)}}{2} = e^{2\gamma+4\alpha}E^{2},\] (35) \[(E^{2})^{\prime}+2(\alpha^{\prime}+\gamma^{\prime}+2\beta^{ \prime})E^{2} = \frac{e^{-2(\alpha+\gamma)}}{4}R^{\prime}, \tag{36}\]
with \(R\) given by (24).
Until now, no coordinate condition has been imposed. Adding (33) and (34), we obtain,
\[\beta^{\prime}(\alpha^{\prime}+\gamma^{\prime})-\beta^{\prime\prime}-\beta^{ \prime 2}=0. \tag{37}\]
The use of the quasi-global coordinates, with \(\alpha=-\gamma\), leads to,
\[\beta=\log r. \tag{38}\]
As in the usual Reissner-Nordstrom (RN) solution in GR, there is a center at \(r=0\). Equation (8), with \(T=0\) and with the identification of \(\Theta\) with \(-\Lambda\), implies \(R=-4\Lambda\). The equations of motion reduce to,
\[\gamma^{\prime\prime}+2\gamma^{\prime 2}-\frac{1}{r^{2}}+\frac{e^{ -2\gamma}}{r^{2}} = 2e^{-2\gamma}E^{2}, \tag{39}\] \[(E^{2})^{\prime}+4\frac{E^{2}}{r} = -\Lambda^{\prime}, \tag{40}\]
In order to proceed further, we must impose a condition. This is a crucial step in working with UG, as already stressed in [14]. One possibility is to fix \(R=-4\Lambda\equiv\text{constant}\). This leads to the Reissner-Nordstrom-de Sitter (RNdS) solution. In fact, this amounts to recovering the conservation law \(F^{\mu\nu}{}_{;\mu}=0\). If \(\Lambda=0\), we re-obtain the RN solution. If \(\Lambda>0\), the RNdS solution is obtained, and if \(\Lambda<0\), the Reissner-Nordstrom-(Anti) de Sitter (RNAdS) solution is recovered, as will be seen below. On the other hand, there are also other possibilities to be explored, since \(\Lambda\) can be non-constant, covering the possibility of a dynamical cosmological term.
Three cases will be considered, namely a constant and two dynamical cosmological terms, corresponding to either the usual or the modified conservation laws.
### Constant cosmological term
If \(\Lambda=\text{constant}\),
\[E=\frac{Q}{r^{2}}, \tag{41}\]
after identifying an integration constant with the total charge \(Q\). The Coulomb law is recovered, as in the RN solution.
Using the quasi-global coordinate condition and the solution for the electric field \(E\), equation (33) becomes:
\[e^{2\gamma}(\gamma^{\prime\prime}+2\gamma^{\prime 2})-\frac{e^{2\gamma}}{r^{2}}=-\frac{1}{r^{2}}+2\frac{Q^{2}}{r^{4}}. \tag{42}\]
Defining \(A=e^{2\gamma}\), the equation takes the form,
\[A^{\prime\prime}-2\frac{A}{r^{2}}=-\frac{2}{r^{2}}+4\frac{Q^{2}}{r^{4}} \tag{43}\]
This is a second order, linear, non-homogeneous differential equation whose solution is
\[A=1+\frac{C_{1}}{r}+\frac{Q^{2}}{r^{2}}+\frac{C_{2}}{3}r^{2}, \tag{44}\]
\(C_{1,2}\) being integration constants. Inserting this solution into the condition \(R=-4\Lambda\), we find that it is satisfied provided \(C_{2}=-\Lambda\), while \(C_{1}\) remains arbitrary, being fixed by using the Newtonian limit.
The final solution is given by,
\[A=1-2\frac{GM}{r}+\frac{Q^{2}}{r^{2}}-\frac{\Lambda}{3}r^{2}. \tag{45}\]
This is the RNdS solution. It coincides with the static and spherically symmetric solution in GR with an electromagnetic field and a cosmological constant. This could be expected from the beginning, since UG (satisfying the usual conservation laws) leads to the same field equations as GR with a cosmological term, with the only (but important, as we will see later) difference that UG is restricted to transverse diffeomorphisms instead of the full diffeomorphism group.
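As a cross-check of the algebra above, the following sympy sketch verifies that the metric function (45) satisfies the radial equation (43) and the condition \(R=-4\Lambda\) written as an ordinary differential equation for \(A\) (cf. Eq. (46) below); this is our own verification script, not part of the original derivation.

```python
import sympy as sp

r, G, M, Q, Lam = sp.symbols('r G M Q Lambda', positive=True)
A = 1 - 2*G*M/r + Q**2/r**2 - Lam*r**2/3                                   # Eq. (45)

ode = sp.diff(A, r, 2) - 2*A/r**2 - (-2/r**2 + 4*Q**2/r**4)                # Eq. (43)
cond = sp.diff(A, r, 2) + 4*sp.diff(A, r)/r + 2*A/r**2 - (2/r**2 - 4*Lam)  # R = -4*Lambda
print(sp.simplify(ode), sp.simplify(cond))                                 # both print 0
```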
### Varying cosmological term
For a varying cosmological term, it is necessary to impose an ansatz on the behaviour of the function \(\Lambda\). This is also true in GR when the cosmological term is dynamical. Since a static and spherically symmetric configuration is considered, the cosmological term must be a function on the coordinate \(r\) only: \(\Lambda\equiv\Lambda(r)\).
Let us restrict ourselves again to the condition \(R=-4\Lambda\). Using the previous results and also identifying \(\beta=\ln r\), \(\alpha=-\gamma\) and \(A=e^{2\gamma}\), then:
\[A^{\prime\prime}+4\frac{A^{\prime}}{r}+2\frac{A}{r^{2}}=\frac{2}{r^{2}}-4 \Lambda(r). \tag{46}\]
The solution of the homogeneous equation is,
\[A_{h}=\frac{C_{1}}{r}+\frac{C_{2}}{r^{2}}. \tag{47}\]
To obtain the inhomogeneous solution, we write,
\[A=\frac{f}{r^{2}}, \tag{48}\]
obtaining,
\[f^{\prime\prime}=2-4r^{2}\Lambda(r). \tag{49}\]
with a solution which depends on \(r\):
\[f=r^{2}-4\int\biggl{[}\int^{r}{r^{\prime}}^{2}\Lambda(r^{\prime})dr^{\prime} \biggr{]}dr. \tag{50}\]
We will consider two different configurations for the function \(\Lambda(r)\), corresponding to two distinct behaviors both asymptotically and at the center (\(r=0\)).
#### iii.1.1 Case A
First, it is imposed a power law behavior for \(\Lambda(r)\),
\[\Lambda(r)=\Lambda_{0}+\Lambda_{1}r^{p}, \tag{51}\]
with \(\Lambda_{0,1}\) constants.
The final solution is given by the following expressions.
* \(p\neq-4\):
\[A = 1-\frac{2GM}{r}+\frac{Q^{2}}{r^{2}}-\frac{\Lambda_{0}}{3}r^{2}- \frac{4\Lambda_{1}}{(p+3)(p+4)}r^{p+2}, \tag{52}\] \[E^{2} = \frac{Q^{2}}{r^{4}}-\frac{p}{p+4}\Lambda_{1}r^{p}; \tag{53}\]
* \(p=-4\):
\[A = 1-\frac{2GM}{r}+\frac{Q^{2}}{r^{2}}-\frac{\Lambda_{0}}{3}r^{2}- \frac{16\Lambda_{1}}{9r}\biggl{\{}3(\ln r)^{2}+\ln r\biggr{\}}, \tag{54}\] \[E^{2} = \frac{Q^{2}}{r^{4}}+4\frac{\Lambda_{1}}{r^{4}}\ln r. \tag{55}\]
The case \(p=-4\) is clearly pathological since the electric field becomes imaginary near \(r=0\) when \(\Lambda_{1}>0\) or for large \(r\) if \(\Lambda_{1}<0\). For \(p\neq-4\) a change of sign of \(E^{2}\) can be avoided by choosing \(\Lambda_{1}>0\) for \(-4<p<0\), or \(\Lambda_{1}<0\) for \(p<-4\) or \(p>0\). The values \(p=0,-3\) correspond to the cases already included in the constants \(C_{1}\) and \(C_{2}\) of the homogenous solution.
The solution with \(p\neq 0\), under the conditions required to avoid an imaginary electric field, contains either black holes with multiple horizons and a singularity at \(r=0\), or naked singularities, similarly to the RNdS solution in GR, but the metric functions in the UG case may present a different shape, mainly near the singularity. These solutions are asymptotically non-flat except if \(\Lambda_{0}=0\) and \(p>-2\). The corresponding equations in GR, equipped with a cosmological term with the same functional dependence and using the same symmetries, lead to the same solution, as can be explicitly verified.
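The Case A profile can be checked symbolically in the same way; the sketch below (our own verification, valid for \(p\neq-3,-4\)) confirms that the metric function (52) solves Eq. (46) with \(\Lambda(r)=\Lambda_{0}+\Lambda_{1}r^{p}\).

```python
import sympy as sp

r = sp.symbols('r', positive=True)
G, M, Q, L0, L1, p = sp.symbols('G M Q Lambda_0 Lambda_1 p', real=True)

A = 1 - 2*G*M/r + Q**2/r**2 - L0*r**2/3 - 4*L1*r**(p + 2)/((p + 3)*(p + 4))  # Eq. (52)
lhs = sp.diff(A, r, 2) + 4*sp.diff(A, r)/r + 2*A/r**2
rhs = 2/r**2 - 4*(L0 + L1*r**p)                                             # Eq. (46)
print(sp.simplify(lhs - rhs))   # prints 0
```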
#### iii.1.2 Case B
We will exploit now the functional form,
\[\Lambda(r)=\Lambda_{0}+\frac{\Lambda_{1}}{(r^{2}+a^{2})^{2}}. \tag{56}\]
If \(\Lambda_{0}=0\), this functional form represents an asymptotically constant cosmological term near the origin, which becomes zero at infinity. Following the same steps of the previous case, the final form of the metric function is:
\[A = 1-\frac{2GM}{r}+\frac{Q^{2}}{r^{2}}-\frac{\Lambda_{0}}{3}r^{2} \tag{57}\] \[- 2\frac{\Lambda_{1}}{r^{2}}\bigg{[}\frac{r}{a}\arctan\frac{r}{a}- \ln\biggl{(}1+\frac{r^{2}}{a^{2}}\biggr{)}\bigg{]},\] \[E^{2} = \frac{Q^{2}}{r^{4}}+\Lambda_{1}\bigg{\{}-\frac{1}{(r^{2}+a^{2})^ {2}}+2\biggl{[}\frac{1}{r^{2}a^{2}}-\frac{1}{r^{4}}\ln\biggl{(}1+\frac{r^{2}}{ a^{2}}\biggr{)}\bigg{]}\bigg{\}}. \tag{58}\]
Again, the same solution is obtained in GR with a varying cosmological term given by (56). There are multiple horizons and naked singularities, as in the previous case. No change of sign in the \(E^{2}\) term can be assured by imposing \(\Lambda_{1}>0\).
In all the cases discussed above, the presence of a cosmological term, constant or not, introduces new features in the solutions with respect to the usual RN solution, but does not remove the singularity at \(r=0\).
## V Scalar field
The energy-momentum tensor for a self-interacting scalar field is,
\[T_{\mu\nu}=\epsilon\biggl{(}\phi_{;\mu}\phi_{;\nu}-\frac{1}{2}g_{\mu\nu}\phi_ {;\rho}\phi^{;\rho}\biggr{)}+g_{\mu\nu}V(\phi). \tag{59}\]
The ordinary scalar field is denoted by \(\epsilon=+1\) and the phantom scalar field by \(\epsilon=-1\). In GR, in four dimensions, black holes exist only for the phantom case [19; 20].
Inserting the expression for the energy-momentum tensor (59) in the UG equations one obtains,
\[R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R = \epsilon\biggl{(}\phi_{;\mu}\phi_{;\nu}-\frac{1}{4}g_{\mu\nu} \phi_{;\rho}\phi^{;\rho}\biggr{)}, \tag{60}\] \[\frac{R_{;\nu}}{4} = \epsilon\biggl{(}\phi_{;\nu}\Box\phi+\frac{\phi^{;\rho}\phi_{; \nu;\rho}}{2}\biggr{)}. \tag{61}\]
One distinguishing feature of the above equations is the absence of the potential (or, as before, the cosmological term, which is the particular case of a constant potential): it naturally disappears due to the traceless structure of the UG equations. Equation (61) can be written as,
\[\biggl{(}\frac{R}{4}-\frac{\epsilon}{4}\phi_{\rho}\phi^{;\rho}\biggr{)}_{;\nu } = \epsilon\phi_{;\nu}\Box\phi. \tag{62}\]
Identifying,
\[\frac{R}{4}-\frac{\epsilon}{4}\phi_{\rho}\phi^{;\rho} = -V(\phi), \tag{63}\]
equations (60) and (61) take the following form,
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R = \epsilon\biggl{(}\phi_{;\mu}\phi_{;\nu}-\frac{1}{2}g_{\mu\nu} \phi_{;\rho}\phi^{;\rho}\biggr{)}+g_{\mu\nu}V(\phi), \tag{64}\] \[\Box\phi=-\epsilon V_{\phi}(\phi). \tag{65}\]
In this way, we recover the GR equations equipped with a self-interacting scalar field.
In a static and spherically symmetric configuration, the UG field equations (60) read,
\[\frac{\gamma^{\prime\prime}}{2}-\beta^{\prime\prime}-\frac{3}{2} \beta^{\prime 2}+\frac{\gamma^{\prime}}{2}(\gamma^{\prime}+2\beta^{\prime}- \alpha^{\prime})+\beta^{\prime}\alpha^{\prime}+\frac{e^{2(\alpha-\beta)}}{2} = \epsilon\frac{\phi^{\prime 2}}{4}, \tag{66}\] \[-\frac{\gamma^{\prime\prime}}{2}-\beta^{\prime\prime}-\frac{ \beta^{\prime 2}}{2}-\frac{\gamma^{\prime}}{2}(\gamma^{\prime}-\alpha^{\prime}) +\beta^{\prime}(\alpha^{\prime}+\gamma^{\prime})-\frac{e^{2(\alpha-\beta)}}{2} = \epsilon\frac{3}{4}\phi^{\prime 2},\] (67) \[\frac{1}{2}[\gamma^{\prime\prime}-\beta^{\prime 2}+\gamma^{\prime}( \gamma^{\prime}-\alpha^{\prime})]+\frac{e^{2(\alpha-\beta)}}{2} = -\epsilon\frac{\phi^{\prime 2}}{4}. \tag{68}\]
Combining these equations, we have the following relations:
\[\gamma^{\prime\prime}-\beta^{\prime\prime}-2\beta^{\prime 2}+\gamma^{ \prime}(\gamma^{\prime}+\beta^{\prime}-\alpha^{\prime})+\alpha^{\prime}\beta^{ \prime}+e^{2(\alpha-\beta)} = 0, \tag{69}\] \[-\beta^{\prime\prime}-\beta^{\prime}+\beta^{\prime}(\gamma^{ \prime}+\alpha^{\prime}) = \frac{\phi^{\prime 2}}{2}. \tag{70}\]
Note that in the UG equations there is no potential, even though it appears in the energy-momentum tensor. Moreover, there are three metric functions (which can be reduced to two by gauging the radial coordinate) and the scalar field to be determined, but only two independent equations, (69) and (70). Hence an ansatz must be introduced. From the conservation law, we have the relation,
\[R+e^{-\alpha}\phi^{\prime 2}=-4V(\phi), \tag{71}\]
where \(V(\phi)\) is a function to be determined. We have slightly changed the notation (\(V\) instead of \(\Lambda\)) to identify the unknown function with the potential. With this identification, the UG equations become identical to the GR equations with a potential. In GR, the potential must be chosen. In UG, a functional form for the scalar field (or for one of the metric functions) must be chosen. There is a correspondence between the choice of the functional form of the scalar field and the choice of the potential in GR.
Two possible examples are the following.
1. If the scalar field is chosen such that, \[\phi = -\epsilon\frac{C}{2k}\ln P,\] (72) \[P = 1-2\frac{k}{\rho},\] (73) we find, \[ds^{2} = P^{a}dt^{2}-P^{-a}d\rho^{2}-P^{1-a}\rho^{2}d\Omega,\] (74) \[a^{2} = 1-\epsilon\frac{C^{2}}{k^{2}}.\] (75) Using (71) we find \(V=0\). This solution represents a black hole only if \(\epsilon=-1\). This solution has been determined in the GR context in Ref. [18].
2. The regular black hole determined in Ref. [20] is also a solution in the UG case, without a potential. Imposing that the scalar field is given by, \[\psi=\frac{\phi}{\sqrt{2}}=\arctan\frac{\rho}{b},\] (76) the metric is then given, in the quasi-global coordinates, by, \[ds^{2}=A(\rho)dt^{2}-\frac{d\rho^{2}}{A(\rho)}-(\rho^{2}+b^{2})d\Omega^{2},\] (77) where the explicit form of the metric function \(A(\rho)\) can be found in Ref. [20].
## VI Birkhoff theorem and perturbations

For the electromagnetic field as the source, the Birkhoff theorem is valid in GR. The same occurs for the corresponding solution in UG. This can be seen by supposing radial, time-dependent configurations. The argument follows the same reasoning used in GR, see for example [21]. From the expressions presented in the appendix, and considering only a radial electric field, the \(0-1\) component of the field equations, using the Schwarzschild coordinate system with \(\beta=\ln r\), implies that \(\alpha\) must be time independent, since the right-hand side of the equation is zero for a pure radial electric field. Combining the \(0-0\) and \(1-1\) equations, it follows that \(\alpha=-\gamma\). Hence, all metric functions are time independent.
For the scalar field case, the Birkhoff theorem is not valid, because the right-hand side contains a term of the type \(\dot{\phi}\phi^{\prime}\), which prevents the metric function \(\alpha\) from being time independent, as also happens in the GR case. The Birkhoff theorem is verified only if the scalar field is static [22].
In the two examples discussed in the previous section, with the scalar field as the source of the geometry and considering the GR context, the solutions are unstable, except for the regular solution in the very special case in which the minimum of the areal function coincides with the horizon [23; 24]. However, this result can change in the UG context, since the unimodular condition implies new relations for the perturbed functions that are absent in GR.
We will illustrate the special features of the perturbative analysis by considering the case of black holes with a scalar field. Only radial perturbations will be considered. In the GR context, this is enough to establish the instability of the solution [23]. We will show that in UG, if we try to follow the same procedure as in GR, the perturbations at first order are strictly zero due to the unimodular condition.
The unimodular condition implies,
\[g=\text{det}g_{\mu\nu}=e^{\alpha+\gamma+2\beta}=\xi. \tag{80}\]
Since the function \(\xi\) is fixed, the unimodular condition leads, at linear perturbative order, to
\[\delta\alpha+\delta\gamma+2\delta\beta=0. \tag{81}\]
There is still the freedom to impose a coordinate condition due to the diffeomorphic (even if transverse) invariance. The choice \(\delta\beta=0\) is related to the gauge invariant variables [23]. Hence, we end up with the conditions,
\[\delta\alpha=-\delta\gamma,\quad\delta\beta=0. \tag{82}\]
We write down the perturbations in a generic way as
\[\delta f(x,t)=f(x)e^{-i\omega t}. \tag{83}\]
The perturbed equations, under the conditions above, are the following:
\[\delta\gamma^{\prime\prime}+4\gamma^{\prime}\delta\gamma^{\prime}-\bigg{\{}\omega^{2}e^{-4\gamma}+2e^{-2(\gamma+\beta)}\bigg{\}}\delta\gamma = \phi^{\prime}\delta\phi^{\prime}, \tag{84}\] \[\delta\gamma^{\prime\prime}+4\gamma^{\prime}\delta\gamma^{\prime}-\bigg{\{}\omega^{2}e^{-4\gamma}+2e^{-2(\gamma+\beta)}\bigg{\}}\delta\gamma = -3\phi^{\prime}\delta\phi^{\prime},\] (85) \[\delta\gamma^{\prime\prime}+4\gamma^{\prime}\delta\gamma^{\prime}-\bigg{\{}\omega^{2}e^{-4\gamma}+2e^{-2(\gamma+\beta)}\bigg{\}}\delta\gamma = -\phi^{\prime}\delta\phi^{\prime},\] (86) \[-2\beta^{\prime}\delta\gamma = \phi^{\prime}\delta\phi. \tag{87}\]
It is clear that the equations are consistent only in the trivial case \(\delta\phi=\delta\gamma=0\). Hence, it is not possible to obtain information on the stability of the solution, at least at the linear level and following a procedure close to that used in GR. This is a distinguishing feature of unimodular gravity in comparison with GR.
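The algebraic step behind this conclusion can be made explicit with a minimal symbolic check: Eqs. (84)-(86) share the same left-hand side, call it \(X\), while their right-hand sides are \(Y\), \(-3Y\) and \(-Y\) with \(Y=\phi^{\prime}\delta\phi^{\prime}\). The short sympy sketch below (an illustration, not part of the original derivation) confirms that the only simultaneous solution is \(X=Y=0\), from which the trivial result \(\delta\phi=\delta\gamma=0\) follows as stated above.

```python
import sympy as sp

# X: common left-hand side of Eqs. (84)-(86); Y = phi' * delta_phi'.
X, Y = sp.symbols('X Y')

# The three perturbation equations written as X = Y, X = -3Y, X = -Y.
system = [X - Y, X + 3*Y, X + Y]
print(sp.linsolve(system, (X, Y)))   # {(0, 0)}: only the trivial solution survives
```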
## VII Conclusion
Unimodular gravity (UG) is one of the first alternatives to General Relativity (GR). It is a geometric theory which is invariant with respect to a restricted class of diffeomorphisms, the transverse diffeomorphisms, due to the imposition of a constraint on the determinant of the metric. In UG the usual conservation of the energy-momentum tensor is not assured: the conservation of the energy-momentum tensor is a choice. If it is imposed, UG becomes in principle equivalent to GR with a cosmological term. However, the restriction on the determinant of the metric may lead to some important new features at the perturbative level. We have shown here that if the conservation of the energy-momentum tensor is relaxed, UG becomes equivalent to GR with a dynamical cosmological term, still with the same important difference due to the UG constraint, which can manifest itself at the perturbative level.
We have discussed, in this context, the static and spherically symmetric solutions in UG. For the vacuum configuration, the Schwarzschild solution is also verified in UG. The same occurs with the Reissner-Nordström solution, but only if the energy-momentum tensor is conserved. If not, the dynamical cosmological term induces new features, but it does not prevent the appearance of the singularity at \(r=0\). Similar features appear when a scalar field is the main source. In this case, the potential term, representing the self-interaction of the scalar field, disappears in the UG context and an ansatz must be imposed in order to close the set of equations. This amounts, in the GR context, to choosing a given potential for the scalar field. For a discussion of UG in static, spherical configurations focusing on compact objects, see Ref. [25].
We have shown that the Birkhoff theorem displays the same features as in GR, being satisfied for the charged solution and possibly violated for a dynamical scalar field. The linear radial perturbations have been analyzed when a scalar field is present. Once more, GR black hole solutions are generically unstable in the latter case. In UG, using the gauge-invariant approach employed in GR and restricting to radial perturbations, the condition on the determinant of the metric leads to vanishing perturbations at linear order, and possibly also at higher orders. As already discussed in the cosmological context, this result seems to point to a breaking of the equivalence of UG and GR at the perturbative level. There are other viewpoints on the implementation of the UG constraints when performing a perturbative analysis, see for example Ref. [26]. However, the results reported here indicate that a direct application of the procedures used in GR combined with the unimodular constraint may lead to conclusions different from those obtained in GR.
**Acknowledgements:** We thank CNPq, FAPES and FAPEMIG for partial financial support. We thank K.A. Bronnikov for enlightening discussions on some aspects of the problem treated in this work and L.F. de Oliveira Guimaraes for his remarks on the text.
## Appendix A The spherically symmetric non static metric
A dynamical spherically symmetric metric, admitting radial oscillations, is given by,
\[ds^{2}=e^{2\gamma(t,u)}dt^{2}-e^{2\alpha(t,u)}du^{2}-e^{2\beta(t,u)}d\Omega^{2}. \tag{88}\]
The non-vanishing Christoffel symbols are the following.
\[\Gamma^{0}_{00}=\dot{\gamma},\quad\Gamma^{0}_{10}=\gamma^{\prime},\quad\Gamma^{0}_{11}=e^{2(\alpha-\gamma)}\dot{\alpha}, \tag{89}\] \[\Gamma^{0}_{22}=e^{2(\beta-\gamma)}\dot{\beta},\quad\Gamma^{0}_{33}=\Gamma^{0}_{22}\sin^{2}\theta,\] (90) \[\Gamma^{1}_{00}=e^{2(\gamma-\alpha)}\gamma^{\prime},\quad\Gamma^{1}_{10}=\dot{\alpha},\quad\Gamma^{1}_{11}=\alpha^{\prime},\] (91) \[\Gamma^{1}_{22}=-e^{2(\beta-\alpha)}\beta^{\prime},\quad\Gamma^{1}_{33}=\Gamma^{1}_{22}\sin^{2}\theta,\] (92) \[\Gamma^{2}_{02}=\Gamma^{3}_{03}=\dot{\beta},\quad\Gamma^{2}_{12}=\Gamma^{3}_{13}=\beta^{\prime},\] (93) \[\Gamma^{2}_{33}=-\sin\theta\cos\theta,\quad\Gamma^{3}_{23}=\cot\theta. \tag{94}\]
The non-vanishing components of the Ricci tensor and the Ricci scalar are the following.
\[R_{00} = -\ddot{\alpha}-2\ddot{\beta}+\dot{\gamma}(\dot{\alpha}+2\dot{ \beta})-\dot{\alpha}^{2}-2\dot{\beta}^{2} \tag{95}\] \[+ e^{2(\gamma-\alpha)}[\gamma^{\prime\prime}+\gamma^{\prime}( \gamma^{\prime}+2\beta^{\prime}-\alpha^{\prime})],\] \[R_{11} = e^{2(\alpha-\gamma)}\bigg{\{}\ddot{\alpha}+\dot{\alpha}(\dot{ \alpha}-\dot{\gamma}+2\dot{\beta})\bigg{\}}\] (96) \[- \gamma^{\prime\prime}-2\beta^{\prime\prime}+\gamma^{\prime}( \alpha^{\prime}-\gamma^{\prime})+2\beta^{\prime}(\alpha^{\prime}-\beta^{ \prime}),\] \[R_{22} = 1+e^{2(\beta-\gamma)}[\ddot{\beta}+\dot{\beta}(\dot{\alpha}+2 \dot{\beta}-\dot{\gamma})]\] (97) \[- e^{2(\beta-\alpha)}[\beta^{\prime\prime}+\beta^{\prime}(\gamma^ {\prime}+2\beta^{\prime}-\alpha^{\prime})],\] \[R_{33} = R_{22}\sin^{2}\theta,\] (98) \[R_{01} = 2\bigg{\{}\dot{\beta}^{\prime}+\dot{\beta}(\gamma^{\prime}- \beta^{\prime})+\dot{\alpha}\beta^{\prime}\bigg{\}},\] (99) \[R = -2e^{-2\beta}+2e^{-2\alpha}[\gamma^{\prime\prime}+2\beta^{\prime \prime}+3\beta^{\prime 2}+\gamma^{\prime}(\gamma^{\prime}+2\beta^{\prime}-\alpha^{\prime} )-2\alpha^{\prime}\beta^{\prime}]\] (100) \[- 2e^{-2\gamma}[\ddot{\alpha}+2\ddot{\beta}+3\dot{\beta}^{2}+ \dot{\alpha}(\dot{\alpha}+2\dot{\beta}-\dot{\gamma})-2\dot{\gamma}\dot{ \beta}]\]
The non-vanishing components of the unimodular gravitational tensor,
\[E_{\mu\nu}=R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R, \tag{101}\]
are the following:
\[E_{00} = -\frac{\ddot{\alpha}}{2}-\ddot{\beta}+\frac{\dot{\gamma}}{2}(\dot{\alpha}+2\dot{\beta})-\frac{\dot{\alpha}^{2}}{2}-\frac{\dot{\beta}^{2}}{2}+\dot{\alpha}\dot{\beta} \tag{102}\] \[+ e^{2(\gamma-\alpha)}\bigg{[}\frac{\gamma^{\prime\prime}}{2}-\beta^{\prime\prime}-\frac{3}{2}\beta^{\prime 2}+\frac{\gamma^{\prime}}{2}(\gamma^{\prime}+2\beta^{\prime}-\alpha^{\prime})+\beta^{\prime}\alpha^{\prime}\bigg{]}+\frac{e^{2(\gamma-\beta)}}{2},\] \[E_{11} = e^{2(\alpha-\gamma)}\bigg{[}\frac{\ddot{\alpha}}{2}-\ddot{\beta}-\frac{3}{2}\dot{\beta}^{2}+\frac{\dot{\alpha}}{2}(\dot{\alpha}+2\dot{\beta}-\dot{\gamma})+\dot{\gamma}\dot{\beta}\bigg{]}\] (103) \[- \frac{\gamma^{\prime\prime}}{2}-\beta^{\prime\prime}-\frac{\beta^{\prime 2}}{2}-\frac{\gamma^{\prime}}{2}(\gamma^{\prime}-\alpha^{\prime})+\beta^{\prime}(\alpha^{\prime}+\gamma^{\prime})-\frac{e^{2(\alpha-\beta)}}{2},\] \[E_{22} = \frac{1}{2}-\frac{e^{2(\beta-\gamma)}}{2}[\ddot{\alpha}-\dot{\beta}^{2}+\dot{\alpha}(\dot{\alpha}-\dot{\gamma})]\] (104) \[+ \frac{e^{2(\beta-\alpha)}}{2}[\gamma^{\prime\prime}-\beta^{\prime 2}+\gamma^{\prime}(\gamma^{\prime}-\alpha^{\prime})],\] \[E_{01} = 2[\dot{\beta}^{\prime}+\dot{\beta}(\gamma^{\prime}-\beta^{\prime})+\dot{\alpha}\beta^{\prime}],\] (105) \[E_{33} = E_{22}\sin^{2}\theta. \tag{106}\]
For the static case, the above expressions reduce to,
\[E_{00} = e^{2(\gamma-\alpha)}\bigg{[}\frac{\gamma^{\prime\prime}}{2}- \beta^{\prime\prime}-\frac{3}{2}\beta^{\prime 2}+\frac{\gamma^{\prime}}{2}( \gamma^{\prime}+2\beta^{\prime}-\alpha^{\prime})+\beta^{\prime}\alpha^{\prime }\bigg{]}+\frac{e^{2(\gamma-\beta)}}{2}, \tag{107}\] \[E_{11} = -\frac{\gamma^{\prime\prime}}{2}-\beta^{\prime\prime}-\frac{ \beta^{\prime 2}}{2}-\frac{\gamma^{\prime}}{2}(\gamma^{\prime}-\alpha^{\prime})+ \beta^{\prime}(\alpha^{\prime}+\gamma^{\prime})-\frac{e^{2(\alpha-\beta)}}{2},\] (108) \[E_{22} = \frac{1}{2}+\frac{e^{2(\beta-\alpha)}}{2}[\gamma^{\prime\prime}- \beta^{\prime 2}+\gamma^{\prime}(\gamma^{\prime}-\alpha^{\prime})],\] (109) \[E_{33} = E_{22}\sin^{2}\theta. \tag{110}\]
|
2310.16304 | Deep Learning Approach to Photometric Redshift Estimation | Photometric redshift estimation plays a pivotal role in modern astronomy,
enabling the determination of celestial object distances by analyzing their
magnitudes across various wavelength filters. This study leveraged a dataset of
50,000 objects sourced from the Sloan Digital Sky Survey (SDSS), encompassing
magnitudes in five distinct bands alongside their corresponding redshift
labels. Traditionally, redshift prediction relied on the use of spectral
distribution templates (SED), which, while effective, pose challenges due to
their cost and limited availability, particularly when dealing with extensive
datasets. This paper explores innovative data-driven methodologies as an
alternative to template-based predictions. By employing both a decision tree
regression model and a Fully Connected Neural Network (FCN) for analysis, the
study reveals a notable discrepancy in performance. The FCN outperforms the
decision tree regressor significantly, demonstrating a notable improvement in
root mean square error (RMSE) compared to the decision tree. This improvement
highlights the FCN's ability to effectively capture complex relationships
within space data. The potential of data-driven redshift estimation is
underscored, positioning it as a valuable tool for advancing astronomical
surveys and enhancing our comprehension of the universe. With the adaptability
to either replace or complement template-based methods, FCNs are poised to
reshape the field of photometric redshift estimation, opening up new
possibilities for precision and discovery in astronomy. | Krishna Chunduri, Mithun Mahesh | 2023-10-25T02:24:37Z | http://arxiv.org/abs/2310.16304v2 | # Deep Learning Approach to Photometric Redshift Estimation
###### Abstract
Photometric redshift estimation, an essential process in astronomy for distance estimation, obtains the redshift of celestial structures by utilizing the magnitude of objects in varying wavelength filters. This research capitalized on a dataset of 50,000 objects from the Sloan Digital Sky Survey, comprising 5 bands of magnitudes and their corresponding redshift labels. Typically, studies use spectral distribution templates (SED) for redshift prediction. However, these templates are expensive and hard to obtain, especially with larger datasets. The paper explores approaches for Data-Driven methodology instead of template based prediction. Adopting both a decision tree regression model and a Fully Connected Neural Network (FCN) for analysis, the FCN significantly outperformed the decision tree regressor, achieving an impressive root mean square error (RMSE) of 0.009 compared to the decision tree's RMSE above 0.16. The strong performance of the FCN highlights its ability to capture intricate relationships in astronomical data, holding the potential for data-driven redshift estimation, which will help advance next generation surveys.
## 1 Introduction
Photometric redshift estimation is an essential process in modern astronomy, determining the redshift of celestial objects such as galaxies and quasars. By measuring an object's magnitude in different wavelength filters, such as ultraviolet (u) or green (g), and evaluating the differences in magnitude to determine the object's color (u-g), we can use these color values to estimate the redshift of the celestial object (Newman and Gruen, 2022). Such estimations play a pivotal role in the interpretation and understanding of large astronomical data surveys, shedding light on distances to celestial objects. Acquiring accurate redshift data is imperative for advancing our grasp of galaxy formation and evolution.
Traditional methods often employ spectroscopy to determine redshift, utilizing galaxy spectral signature and wavelength shifts. However, this technique can be resource-intensive and expensive. Furthermore, faint celestial objects can pose challenges to spectroscopic observations. These drawbacks have led to the emergence of photometric redshift as a viable alternative. Photometric redshift estimation harnesses the magnitude of extragalactic objects as observed across multiple filters(Salvato et al., 2018). Rather than relying on a detailed spectrum, astronomers utilize the intensity of light across select broad wavelength bands to infer redshift.
In the realm of galaxy evolution studies, the performance of photometric redshifts (photo-z's) has profound implications. With systematic uncertainties in modeling galaxy evolution anticipated to persist in the foreseeable future, ensuring the precision of photometric redshift becomes even more important. For instance, the subdivision of objects according to their redshifts is instrumental in targeting specific redshift ranges in spectroscopic surveys. The overarching takeaway is clear: the efficacy of photo-z estimation is integral to the success of galaxy evolution studies. Creating a model that estimates photometric redshift given magnitude data is an optimal tool to assist many areas of research within the astronomical world.
Previous studies have found significant advancements. The CANDELS GOODS-S survey, utilizing the HST WFC3 H-band and ACS z-band, has helped expand our understanding of photometric redshifts(Dahlen et al., 2013). This dataset, with TFIT photometry, explored the efficacy of various codes and template Spectral Energy Distributions. It found that methods which incorporated training using a spectroscopic sample achieved enhanced accuracy. Importantly, the
research found a direct correlation between the source magnitude and the precision of redshift estimation, emphasizing the role of magnitude in estimation.
Another approach was utilizing Bayesian methodologies(Benitez, 2000). By employing prior probabilities and Bayesian marginalization, this method was adept at utilizing previously overlooked data like the expected shape of redshift distributions and galaxy type fractions. When applied to B130 HDF-N spectroscopic redshifts, this Bayesian approach showcased promising results, reinforcing its potential to address existing gaps. Importantly, these advancements were realized without the reliance on a training-set procedure, while utilizing template libraries.
Both studies used template Spectral Energy Distribution (SED) data to help test their different methodologies. While template SEDs do help estimate photometric redshift, it's become increasingly more difficult to obtain these distributions with larger datasets. Given the next generation of surveys from the James Webb Space Telescope (JWST) and Rubin Observatory (LSST), photometric redshift estimation needs a more data-driven approach to accurately predict redshift based on observational data.
The primary objective of this paper is to explore novel computational methods that take a data-driven approach to estimation, while increasing accuracy. A data-driven approach involves relying on actual observational data, such as magnitude or flux values, rather than theoretical template SEDs. Specifically, this research aims to evaluate the reliability of Fully Connected Neural Networks (FCN) in estimating photometric redshift using magnitude data.
Recent advancements in the field of machine learning have opened up new opportunities to utilize novel methods such as artificial neural networks. Fully Connected Neural Networks, a subset of artificial neural networks, are designed to capture complex relationships in data, increasing overall predictive abilities for a model(Schwing and Urtasun, 2015).
Despite the clear potential of neural network applications in astronomy, there remains a gap in comprehensive studies that use magnitude and color index data to make redshift predictions. Our research seeks to bridge this gap by comparing a Fully Connected Network with a decision tree regressor to assess the efficacy of both when provided with photometric data from the Sloan Digital Sky Survey (SDSS).
We aim to create both a decision tree regression model and a FCN for photometric redshift estimation. The scope encompasses the design, training and testing of these models, followed by an analysis of their performance. Comparison metrics between the two methods will be RMS values and overall prediction accuracy.
## 2 Data
Our study utilized a dataset from the Sloan Digital Sky Survey (Kollmeier et al., 2017) with 50,000 celestial objects. For each of the objects, magnitudes in 5 different bands were included in the data. The 5 bands - \(u\), \(g\), \(r\), \(i\) and \(z\) - represent different wavelengths of light from each galaxy or quasar (Wyder et al., 2007). Alongside the magnitudes, the dataset came with redshift labels for each object. These redshifts were obtained from spectroscopic measurements from SDSS. The first 5 rows can be found in Table 1.
Before delving into model training and testing, we visualized different portions of the data to better understand the distributions. In Fig. 1, the distribution of the redshift and magnitude values is illustrated.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline u & g & r & i & z & redshift \\ \hline
18.27449 & 17.01069 & 16.39594 & 16.0505 & 15.79158 & 0.0369225 \\
18.51085 & 17.42787 & 16.94735 & 16.61756 & 16.46231 & 0.06583611 \\
18.86066 & 17.91374 & 17.56237 & 17.26353 & 17.13068 & 0.1202669 \\
19.38744 & 18.37505 & 17.63306 & 17.25172 & 17.00577 & 0.1806593 \\
18.38328 & 16.59322 & 15.77696 & 15.3979 & 15.08755 & 0.04035749 \\ \hline \end{tabular}
\end{table}
Table 1: Data Set
For preprocessing, we performed sigma-clipping using a sigma value of 3 standard deviations to remove outliers while retaining 95% of the data. Additionally, we removed redshift values less than zero as these are not physical. As a result, we ended with a dataset of 47,484 celestial objects out of the original 50,000.
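A minimal sketch of this preprocessing step is shown below; the file name and the choice to clip on the redshift column are illustrative assumptions (the text does not specify which columns were clipped), while the 3-sigma threshold and the removal of negative redshifts follow the description above.

```python
import pandas as pd
from astropy.stats import sigma_clip

# Load the SDSS table with columns u, g, r, i, z, redshift (cf. Table 1).
df = pd.read_csv("sdss_photoz.csv")          # hypothetical file name

# Remove unphysical negative redshifts.
df = df[df["redshift"] >= 0]

# 3-sigma clipping to remove outliers (applied here to the redshift column).
clipped = sigma_clip(df["redshift"].to_numpy(), sigma=3)
df = df[~clipped.mask]
print(len(df), "objects retained")
```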
## 3 Methodology
We compared two methods, a decision tree regressor and a fully connected neural network.
The decision tree regressor works by partitioning the dataset into small subsets. Each split is based on the value of the input features. Our features consist of the 5 bandpass filters (\(u\), \(g\), \(r\), \(i\), \(z\)) as well as the colors formed by their magnitude differences (\(u-g\), \(g-r\), \(r-i\), \(i-z\)). After splitting the data, we arrive at leaf nodes where the redshift values are as similar as possible. Each leaf of the tree then predicts the average redshift of the instances that fall into it. The model is simple and transparent, but it does not produce very good results in terms of RMSE and prediction accuracy.
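A sketch of this baseline is given below; it reuses the cleaned table `df` from the preprocessing sketch above, and the `max_depth` value and train/test split are illustrative assumptions rather than settings reported in the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

mags = df[["u", "g", "r", "i", "z"]].to_numpy()
colors = -np.diff(mags, axis=1)               # u-g, g-r, r-i, i-z
X = np.hstack([mags, colors])                 # 9 features per object
y = df["redshift"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeRegressor(max_depth=12, random_state=0)
tree.fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, tree.predict(X_test)))
print(f"decision-tree RMSE: {rmse:.3f}")
```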
Our next step was to create a model that could predict the redshift, given our inputs of the object in question. We chose a fully connected neural network that used the Adaptive Moment Estimation Optimizer(Kingma & Ba, 2017) in order to create a regression model to predict redshift. The architecture is illustrated in Fig. 2.
Our input layer consists of 9 inputs and an output shape of 100. The 9 inputs are composed of magnitudes across each of the bandpasses and the magnitude differences as follows,
\[\text{Input}=[m_{u},\,m_{g},\,m_{r},\,m_{i},\,m_{z},\,m_{u}-m_{g},\,m_{g}-m_{r},\,m_{r}-m_{i},\,m_{i}-m_{z}]. \tag{1}\]
We then added two more hidden layers with 65 and 30 neurons, respectively, followed by an output layer with a single neuron that represents the predicted redshift. We used the Rectified Linear Unit (ReLU) activation function (Agarap, 2019), which worked better than the sigmoid function, both to accommodate redshift predictions with values greater than 1 and to improve the efficiency of the network. Lastly, we added a dropout rate of 0.2 after each layer to prevent overfitting. A visual representation of the neural network is shown in Fig. 2.
We minimise the mean squared error as the loss function in our neural network,
\[\mathcal{L}=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}, \tag{2}\]

where \(y_{i}\) is the true redshift, \(\hat{y}_{i}\) is the predicted redshift, and \(n\) is the number of objects in a batch of the training set.
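A compact sketch of this network and training objective is given below (written in PyTorch purely for illustration; the framework, learning rate, batch handling and epoch count are assumptions, while the layer widths, ReLU activations, 0.2 dropout, Adam optimizer and MSE loss follow the description above). `X_train` and `y_train` are the feature matrix and redshift labels from the preceding split.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(9, 100), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(100, 65), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(65, 30), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(30, 1),                    # single output neuron: predicted redshift
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X_t = torch.tensor(X_train, dtype=torch.float32)
y_t = torch.tensor(y_train, dtype=torch.float32).unsqueeze(1)
for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    optimizer.step()
```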
Figure 2: The chart above shows the layers and dropouts for the neural network.
## 4 Results
For the decision tree regressor, the RMSE was above 0.16. Using our neural network, we reduced the RMSE for redshift prediction to 0.009.
Figure 4: The chart above shows error bars along with the true redshift vs predicted redshift graph.
Figure 3: The chart above shows the true redshift vs predicted redshift correlation. There are few outliers, with the majority of predictions being close to the best-fit line.
The figures above indicate that our results clearly improve upon the decision tree model previously used to predict photometric redshift for stars and quasars. Figure 4 and Figure 5 show the redshift values predicted by our neural network, indicated by the linear graph, compared to the actual observations, which follow a similar trend. Figure 6 gives insights into how we trained our model and prevented overfitting to our dataset, so our results can be generalized to this data format.
have been overlooked. The integration of hybrid models, such as combining FCNs with random forests, could improve overall accuracy. Lastly, fine-tuning pre-trained models could build on the foundation of previous successes to improve predictive power for redshift.
## 6 Conclusion
Our study underscores the untapped potential of data-driven methodologies in photometric redshift estimation, particularly highlighting the superior capabilities of FCNs over decision tree regressors. While traditional methods, such as decision tree regression, continue to hold value, the evolving landscape of computational methods offers new opportunities for precision and discovery in our universe. As we anticipate the demands of next-generation astronomical surveys, including those from the James Webb Space Telescope and the Large Synoptic Survey Telescope (Ivezic et al., 2019), these data-centric approaches will be pivotal in estimating redshifts, and hence distances, for faraway celestial objects, from quasars to galaxies.
## 7 Acknowledgements
We would like to acknowledge Cambridge Centre for International Research, Ltd and faculty from MIT for their contributions and resources in this project. We want to thank our mentor Dr. Daniel Muthukrishna and Fatima Zaidouni from the MIT Kavli Institute for Astrophysics and Space Research.
|
2308.14759 | May the Force be with You: Unified Force-Centric Pre-Training for 3D
Molecular Conformations | Recent works have shown the promise of learning pre-trained models for 3D
molecular representation. However, existing pre-training models focus
predominantly on equilibrium data and largely overlook off-equilibrium
conformations. It is challenging to extend these methods to off-equilibrium
data because their training objective relies on assumptions of conformations
being the local energy minima. We address this gap by proposing a force-centric
pretraining model for 3D molecular conformations covering both equilibrium and
off-equilibrium data. For off-equilibrium data, our model learns directly from
their atomic forces. For equilibrium data, we introduce zero-force
regularization and forced-based denoising techniques to approximate
near-equilibrium forces. We obtain a unified pre-trained model for 3D molecular
representation with over 15 million diverse conformations. Experiments show
that, with our pre-training objective, we increase forces accuracy by around 3
times compared to the un-pre-trained Equivariant Transformer model. By
incorporating regularizations on equilibrium data, we solved the problem of
unstable MD simulations in vanilla Equivariant Transformers, achieving
state-of-the-art simulation performance with 2.45 times faster inference time
than NequIP. As a powerful molecular encoder, our pre-trained model achieves
on-par performance with state-of-the-art property prediction tasks. | Rui Feng, Qi Zhu, Huan Tran, Binghong Chen, Aubrey Toland, Rampi Ramprasad, Chao Zhang | 2023-08-24T01:54:02Z | http://arxiv.org/abs/2308.14759v1 | # May the Force be with You: Unified Force-Centric Pre-Training for 3D Molecular Conformations
###### Abstract
Recent works have shown the promise of learning pre-trained models for 3D molecular representation. However, existing pre-training models focus predominantly on equilibrium data and largely overlook off-equilibrium conformations. It is challenging to extend these methods to off-equilibrium data because their training objective relies on assumptions of conformations being the local energy minima. We address this gap by proposing a force-centric pretraining model for 3D molecular conformations covering both equilibrium and off-equilibrium data. For off-equilibrium data, our model learns directly from their atomic forces. For equilibrium data, we introduce zero-force regularization and forced-based denoising techniques to approximate near-equilibrium forces. We obtain a unified pre-trained model for 3D molecular representation with over 15 million diverse conformations. Experiments show that, with our pre-training objective, we increase forces accuracy by around 3 times compared to the un-pre-trained Equivariant Transformer model. By incorporating regularizations on equilibrium data, we solved the problem of unstable MD simulations in vanilla Equivariant Transformers, achieving state-of-the-art simulation performance with 2.45 times faster inference time than NequIP. As a powerful molecular encoder, our pre-trained model achieves on-par performance with state-of-the-art property prediction tasks.
## 1 Introduction
Representation learning for 3D molecules is a crucial task in the field of AI for science. Recent years have seen significant advancements in Graph Neural Networks [24; 19; 12] and Equivariant Graph Neural Networks architectures [30; 9; 1; 11; 23] to capture interatomic features in 3D molecular
conformations, thereby laying foundations for learning powerful representations. Leveraging such architectures, several existing works [33, 10, 17] have explored pre-training on 3D molecular structures on equilibrium data (PCQM4MV2 [16, 15]). Specifically, [33] developed a de-noising approach that learns meaningful information about the equilibrium conformations and their surroundings. [17, 10] leveraged the potential energy of equilibrium conformations to derive a supervised pretraining objective. Such pre-trained models have achieved promising performance on diverse molecular property prediction tasks.
Nonetheless, these existing 3D molecule pre-training models are based on the equilibrium assumption and predominantly overlook off-equilibrium conformations. In this paper, we consider _off-equilibrium_ conformations to be those whose atomic forces significantly deviate from zero. In this case, existing work cannot be indiscriminately extended to non-zero forces, because their modeling assumption is fundamentally tied to equilibrium conformations being the local energy minima. The de-noising technique in [33], which is theoretically equivalent to learning forces around equilibrium conformations, may potentially be ill-posed for off-equilibrium conformations. Similarly, the energy-based supervised pretraining objectives in [17, 10] are based on the local minimality of these conformations and cannot be straightforwardly extended to off-equilibrium data. To date, the pre-training of a representation model for 3D molecular conformations that encompasses unifying both equilibrium and off-equilibrium molecular conformations remains largely underexplored. On the other hand, off-equilibrium conformations represent a significant portion of the chemical space, which is crucial for comprehensive modeling of the molecular representation space. For instance, applications such as molecular dynamics (MD) simulations are heavily dependent on the model's ability to accurately represent off-equilibrium data.
In this paper, we incorporate both off-equilibrium and equilibrium conformations into a unified representation learner. We propose a new pre-training model, "**E**quivariant **T**ransformer **O**ptimization and **R**epresentations for **E**quilibrium and **O**ff-equilibrium Molecules" (**ET-OREO**) for 3D molecular conformations. This model integrates both equilibrium and off-equilibrium data from multiple large-scale datasets. Its training objective hinges on _atomic forces_, defined as the negative gradient of a molecule's potential energy with respect to atomic coordinates. Atomic forces exhibit several notable properties:(1) they are _physically well-defined_ observable, i.e., the force acting on an atom is determined solely and uniquely from its local environment, defined as the real-space distribution of its neighboring atoms; (2) they are generalizable across various molecules in the sense that atoms from different molecules that have the same local environment should experience the same atomic forces; and (3) they can unify equilibrium and off-equilibrium data, as equilibrium data can be conceptualized as local minima in the latent (configuration) space with zero forces, while off-equilibrium data aid the model in more accurately characterizing the high-energy chemical space beyond equilibrium. Among these points, (2), which is the direct consequence of (1), makes atomic forces fundamentally different from potential energy which is defined for the whole system under consideration (molecules) and can only be determined up to an additional constant. Therefore, a predictive model of atomic forces is transferable, i.e., in principle, it can be used directly for molecules of any size (number of atoms). This advantage makes the "learning atomic forces" fundamentally different from the traditional approach in which a fictitious concept of "atomic energy" must be defined, predicted, and combined to obtain the total potential energy of the whole system [2].
Inspired by this, we develop our force-centric pretraining model that trains on both off-equilibrium and equilibrium molecular conformations. For off-equilibrium data, our model learns directly from their atomic forces, aligning the model gradient with respect to the input coordinates with the atomic forces. A perceived obstacle to leveraging forces lies in data acquisition: high-accuracy forces require the application of _ab-initio_ methods such as Density Functional Theory (DFT) [14, 20]. However, we argue that DFT, despite its time complexity of \(O(N^{3})\), remains tractable for moderately sized molecules [31]. Additionally, the major computational cost overlaps with that of the potential energy, which is commonly available in existing 3D conformation datasets.
For equilibrium data, we impose zero-force regularizations, reflecting their status as local minima of the potential energy surface. We also introduce random noise into equilibrium conformations and consider the _random noise directions as approximate forces_ on the perturbed conformation coordinates. This allows the model to "denoise" the perturbed conformations by gradients. Our approach draws inspiration from the recent success of denoising techniques in molecular learning [33, 13], energy-based supervised pretraining [10, 17], and score-matching generative models [27, 28, 25]. We integrate our gradient-based denoising and zero-force regularization on equilibrium conformations
with off-equilibrium force optimization, thereby providing the model with a unified landscape of molecular structures and the potential energy surface.
For model pre-training, we collated more than 15 million 3D molecular conformations. Our pre-training data leverages three open-domain datasets, including PCQM4Mv2 [16; 15], MD17 [6; 5], ANI1-x [26]. Moreover, we contribute a new dataset, _poly24_, consisting of simulation trajectories of a diverse family of polymers. Pretrained on these diverse sources of equilibrium and off-equilibrium data, our model attained state-of-the-art simulation performance on both MD17 and polymers, achieving efficient and accurate simulations in terms of distributional properties and accuracy relative to DFT calculations. Our model also serves as a potent representation learner for equilibrium data, attaining performance on par with state-of-the-art molecular learning methods that focus exclusively on equilibrium data.
In summary, our contributions are as follows:
* We introduce a novel force-centric molecular conformation pretraining paradigm that trains a unified conformation representation encompassing both equilibrium and off-equilibrium molecules. Our paradigm enables the representation learner to portray not only the equilibrium conformation space but also the extensive off-equilibrium spaces between them.
* Our model achieves highly accurate molecular dynamics (MD) simulations, by efficiently fine-tuning its parameters for use with molecules and polymers. This allows for fast and reliable MD simulation, as our model is able to accurately replicate the _ab initio_ forces and distributional properties of conformations. Furthermore, our model has demonstrated comparable performance to state-of-the-art models that are solely focused on equilibrium data in molecular property prediction.
* We provide the community with a diverse set of DFT simulation data comprising a varied set of polymers, which are valuable not only for studying polymer properties such as ring-opening enthalpies but also for the general modeling of molecular forces.
## 2 Related Work
Machine Learning Forcefield. In recent years, there has been a surge in the development of deep learning models for molecular forcefields. Two prominent groups of models have emerged: geometric message-passing models [24; 19; 12] and the Tensor Field Network (TFN) family [30; 9; 1; 11]. Geometric message-passing models [24; 19; 12] utilize traditional graph neural networks or message-passing networks on pairwise radial features that are translationally and rotationally invariant. On the other hand, the TFN family of models learns SE(3)-equivariant features by leveraging harmonic functions and the irreducible representations of the SE(3) group. NequIP [1] is the state-of-the-art model in this family and has achieved high stability and fidelity on the MD17 dataset.
Along another line, EGNN [23] learns equivariant features by integrating directional distance vectors between atom pairs in their implicit 3-dimensional coordinates. This approach allows for the learning of equivariant features without incurring the computational cost of the TFN family. Our method's backbone model is TorchMDNet [29], which we view as an extension of EGNN. TorchMDNet embeds the implicit 3-dimensional coordinates in a latent high-dimensional space, enhancing the model's capacity to represent 3-dimensional equivariant features. In addition, TorchMDNet leverages the concept of continuous distance filters from SchNet [24] to further increase its representation power.
Molecular Pretraining. Inspired by the success of pre-training foundation models in the fields of NLP and CV, there have been several attempts at pre-training for 3D molecular structures [33; 10; 17]. Specifically, the NoisyNode [33] work adopted the denoising regularization technique of graph neural networks [13] as a pretraining method for equilibrium conformations, achieving state-of-the-art performance on molecular property predictions. [10] and [17] both base their pre-training on supervised energy data from equilibrium conformations with force regularization. However, as aforementioned, these existing methods predominantly rely on equilibrium data and cannot be easily extended to off-equilibrium data. While achieving high performance for property prediction tasks, they leave the vast chemical space of off-equilibrium conformations unexplored.
Force-based training over off-equilibrium molecular conformations has been explored in the literature. [6; 5; 7; 4; 3] all focus on learning atomic forces and predict energy based on numerical integration, based on the claims that the potential energy and forces have different noise distribution patterns [6; 5] and that forces are local and immediate atomic features [4; 3]. Their methods still focus on training on off-equilibrium simulation data from _single molecules_. In the supplementary materials, we show that the joint optimization of potential energy and forces is problematic for multi-molecule settings. We instead propose force-centered and energy-free objectives to learn a unified pretraining model for both off-equilibrium and equilibrium data, and we show that force-centered pre-training improves multi-molecule optimization and generalization.
## 3 Methodology
### Problem Setup
Consider a molecule type \(x\) that can exist in various 3D conformations. Let \(n_{x}\) represent the number of atoms in molecule \(x\). The distribution of the molecule's conformations is represented as \(\mathbf{r}_{x}\in\mathbb{R}^{n_{x}\times 3}\), and its atom types are represented as \(\mathbf{z}_{\mathbf{x}}\in\mathbb{Z}^{n_{x}}\). The potential energy of a 3D molecule is determined by its conformation \(\mathbf{r}\sim\mathbf{r}_{x}\) and atom types \(\mathbf{z}\), denoted as \(E=E(\mathbf{r},\mathbf{z})\in\mathbb{R}\). The forces applied to the atoms, which are defined as the negative gradient of the potential energy with respect to atomic coordinates, are given by \(F=-\frac{\partial E}{\partial\mathbf{r}}\in\mathbb{R}^{n_{x}\times 3}\). In theory, a stable conformation is one where the potential energy achieves a local minimum with zero forces. In molecular dynamics (MD) simulations, the molecular conformations are moved according to forces and the thermostat of the simulation.
Our goal is to learn an equivariant model \(\Phi_{\theta}(\mathbf{r},\mathbf{z})\) parameterized by \(\theta\) that learns molecular representations from both equilibrium and off-equilibrium conformations. In this paper, \(\Phi_{\theta}(\mathbf{r},\mathbf{z})\) needs to predict energy-conserving atomic forces. To achieve this, we follow the standard machine learning forcefield paradigm. For simplicity, we omit \(\mathbf{z}\) in \(E\) and \(F\), and we ignore the molecule type index \(x\) when referring to general molecules. We use \(\Phi:\mathbb{R}^{n\times 3}\times\mathbb{Z}^{n}\rightarrow\mathbb{R}\) to output the potential energy and take \(-\nabla\Phi_{\theta}\) as forces. To distinguish between equilibrium and off-equilibrium conformations, we define the equilibrium set of conformations as \(\mathcal{E}\coloneqq\{\mathbf{r}:F(\mathbf{r})=0\}\) and the off-equilibrium set \(\mathcal{S}\) as the complementary set where the forces are non-zero.
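In code, this paradigm amounts to differentiating the scalar energy output with respect to the atomic coordinates. The sketch below illustrates the pattern with PyTorch autograd; `model` stands for any energy predictor with the assumed call signature `model(pos, z)`, which is an illustration rather than the actual TorchMDNet interface.

```python
import torch

def energy_and_forces(model, pos, z):
    """Return the predicted energy and forces F = -dE/dr for one conformation."""
    pos = pos.clone().requires_grad_(True)        # atomic coordinates, shape (n_atoms, 3)
    energy = model(pos, z)                        # scalar potential energy
    forces = -torch.autograd.grad(energy, pos, create_graph=True)[0]
    return energy, forces
```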
### Joint Training of Forces on Equilibrium and Off-equilibrium Data
Our methodology is centered on forces optimization. For off-equilibrium conformations \(x\sim\mathcal{S}\) with non-zero forces, we directly optimize on the atomic forces by minimizing \(\|-\nabla\Phi_{\theta}(\mathbf{r}_{x})-F(\mathbf{r}_{x})\|_{2}^{2}\). For equilibrium molecular conformations \(x\sim\mathcal{E}\), we assume their forces to be zero and impose a zero-force regularization: \(\|-\nabla\Phi_{\theta}(\mathbf{r}_{x})\|_{2}^{2}\). However, this objective gives the model little knowledge of the conformation structures in the neighborhood of \(\mathbf{r}_{x}\). To better inform the model of the forcefield around equilibrium conformations, we further use a _de-noising equilibrium regularization_, where we add Gaussian noise on equilibrium conformations and use the noise direction as noisy forces:
\[\mathbb{E}_{\mathcal{E}\sim\mathcal{N}(0,\sigma^{2})}\left[\|\nabla_{\mathbf{ r}_{x}}\Phi(\mathbf{r}_{x}-\epsilon)-\epsilon\|_{2}^{2}\right],\qquad x\sim \mathcal{E}. \tag{1}\]
The rationale of the above force-guided denoising objective is as follows. The perturbed conformation, denoted as \(\mathbf{r}_{x}-\epsilon\), could either approximate a high-energy, off-equilibrium confirmation or, alternatively, represent an unphysical state. In the former scenario, the conformation is anticipated to relax back to the proximate local minimum, \(\mathbf{r}_{x}\), thereby yielding forces that are consistent with the direction \(\epsilon\). Conversely, in the latter situation, the learning model is still sufficiently equipped to maintain robust representations, thereby enabling a return to a stable conformation from any unphysical deviations. More formally, (1) can be interpreted as learning approximate forcefield around equilibrium conformations, as will be discussed in Section 3.3.
Combining forces optimization on off-equilibrium data and zero-force regularization and de-noising objective on equilibrium data, we have the unified force-centric pre-training objective
\[\mathbb{E}_{x\sim\mathcal{E}}\left[\underbrace{\|\nabla_{\mathbf{r}_{x}}\Phi (\mathbf{r}_{x})\|_{2}^{2}}_{\text{zero-force regularization}}+\underbrace{ \mathbb{E}_{\epsilon\sim\mathcal{N}(0,\sigma^{2})}\left[\|\nabla_{\mathbf{r} _{x}}\Phi(\mathbf{r}_{x}-\epsilon)-\epsilon\|_{2}^{2}\right]}_{\text{de- noising equilibrium}}\right]+\mathbb{E}_{x\in\mathcal{S}}\left[\underbrace{\|F(\mathbf{r}_{x})-\nabla_{ \mathbf{r}_{x}}\Phi(\mathbf{r}_{x})\|_{2}^{2}}_{\text{forces optimization}}\right], \tag{2}\]
where the first expectation over \(x\sim\mathcal{E}\) samples equilibrium conformations and imposes a zero-force regularization and the denoising objectives on the atomic coordinates. The expectation over \(x\sim\mathcal{S}\) samples off-equilibrium conformations and optimizes the model gradient with forces.
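A minimal sketch of this combined objective for a single conformation is given below, reusing the `energy_and_forces` helper above. The equilibrium/off-equilibrium flag, the noise scale `sigma` and the per-conformation (rather than batched) formulation are illustrative assumptions; the noise term is written so that the predicted forces push the perturbed conformation back toward the equilibrium one, following the physical reading in Section 3.3.

```python
import torch

def pretraining_loss(model, pos, z, forces_ref=None, is_equilibrium=True, sigma=0.2):
    if is_equilibrium:
        # Zero-force regularization on the equilibrium conformation.
        _, f_eq = energy_and_forces(model, pos, z)
        loss = (f_eq ** 2).sum(-1).mean()
        # De-noising term: forces on the perturbed conformation should point
        # back along the displacement toward the equilibrium structure.
        eps = sigma * torch.randn_like(pos)
        _, f_noisy = energy_and_forces(model, pos - eps, z)
        loss = loss + ((f_noisy - eps) ** 2).sum(-1).mean()
    else:
        # Direct force matching on off-equilibrium data.
        _, f_pred = energy_and_forces(model, pos, z)
        loss = ((f_pred - forces_ref) ** 2).sum(-1).mean()
    return loss
```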
### On the De-noising Equilibrium Objective
Objective function (1) can be viewed as learning an approximate forcefield around equilibrium data. In fact, by well-known results in de-noising score-matching [32; 27; 28], the objective
\[\mathbb{E}_{\epsilon\sim\mathcal{N}(0,\sigma^{2})}\left[\left\|\mathbf{V}_{ \mathbf{r}}\Phi(\mathbf{r}-\epsilon)-\epsilon/\sigma^{2}\right\|_{2}^{2}\right] \tag{3}\]
is equivalent to the score-matching-like objective
\[\mathbb{E}_{q_{\sigma}(\mathbf{\bar{r}})}\left[\left\|\mathbf{V}_{\mathbf{r}} \Phi(\mathbf{\bar{r}})-\mathbf{V}_{\mathbf{r}}\log q_{\sigma}(\mathbf{\bar{r} })\right\|_{2}^{2}\right],\quad q_{\sigma}(\mathbf{\bar{r}}|\mathbf{r})\coloneqq \mathbf{r}+\epsilon, \tag{4}\]
which by [33] is equivalent to learning implicit forces around \(\mathbf{r}\), assuming \(\mathbf{r}\) is an local minimizer of the energy. Hence, (1) is equivalent to learning forces around equilibrium conformations up to a scaling constant. This proof, however, cannot be naively extended to molecules whose forces are non-zero, which is the driving motivation for us to supplement the de-noising objective with forces from off-equilibrium conformations.
While [33] used the same argument for their de-noising objective, our implementation is different. They used a prediction head on top of an Equivariant Transformer to predict the noise, while we directly predict the noise with the forces given by the model gradient. Our design has several unique advantages: 1) Forces principally govern atomic movements. By formulating the de-noising objective with predicted forces, our model has a physical interpretation where perturbed conformations, be they high-energy off-equilibrium conformations or unphysical conformations, can relax back to stable and physical conformations. This property is especially helpful in MD simulations, as will be shown in Section 4. 2) We can unify the prediction of the forces for both equilibrium and off-equilibrium data, providing the model with a consistent energy landscape across conformations of different molecules and states.
### Model Pre-Training and Fine-Tuning
Model Architecture. We use TorchMDNet [29] (also known as the Equivariant Transformer) as our molecule encoder \(\Phi_{\theta}\), following [33]. TorchMDNet is one of the best-performing models in terms of predicting molecular properties and atomic forcefields. While [1] achieves higher accuracy in molecular prediction tasks, we find TorchMDNet more favorable for pre-training due to its expressivity and better computational efficiency.
Pre-training. Our model has 8 layers, an embedding size of 256, 8 attention heads, and a radial basis function (RBF) dimension of 64, consistent with the best-performing model of [29] and [33]. The model parameters are optimized with the AdamW [22; 18] optimizer with a peak learning rate of \(1e-4\). The learning rate is scheduled with 10,000 warmup steps that gradually increase it from \(1e-7\) to \(1e-4\); afterwards, it decreases with a multiplier of 0.8 after 30 validation epochs of non-decreasing validation loss. Every 500 training steps, one validation epoch is performed. The model is trained with a batch size of 32 samples for 3 epochs, corresponding to 468,750 gradient steps. For the denoising objective, the variance of the noise \(\sigma^{2}\) is set to 0.04.
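A sketch of this optimizer and learning-rate schedule is shown below; the particular scheduler classes and the way the warm-up is expressed are implementation assumptions, while the peak rate, warm-up length, decay factor and patience follow the description above (`model` stands for the Equivariant Transformer being pre-trained).

```python
import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

warmup_steps = 10_000
def warmup_factor(step):
    # Linear ramp from 1e-7 to the peak 1e-4, expressed as a multiplier of the peak.
    if step < warmup_steps:
        return (1e-7 + (1e-4 - 1e-7) * step / warmup_steps) / 1e-4
    return 1.0

warmup_sched = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_factor)
plateau_sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.8, patience=30)

# During the warm-up phase: call warmup_sched.step() after every optimizer step.
# After warm-up: call plateau_sched.step(val_loss) after every validation epoch.
```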
Model Fine-tuning. The pre-training stage provides the model with a good initial representation of molecules in general but is not necessarily optimal for each specific dataset or task. For our experiments, the model is further fine-tuned on the target dataset to optimize task-specific performance. For ET-OREO, the training/testing splits of overlapping datasets in the pre-training and fine-tuning stages are kept consistent, so that during fine-tuning the test set does not include any data seen during training.
### Pre-Training Data
Our paper focuses primarily on the structures of organic molecules and polymers in a vacuum. This approach enables us to maintain consistent learning of quantum mechanical interactions between
atoms within a uniform environment. For this purpose, we use three public 3D molecular conformation datasets MD17, ANI1-x, and PCQM4Mv2; and we also create a new polymer simulation dataset, detailed as follows.
Poly24: MD Simulations for Polymers. We contribute _poly24_, a DFT-based MD simulation dataset for polymers. 1 We generated DFT simulation data for 24 polymer families, broadly categorized into cycloalkanes, lactones, ethers, and others. Each polymer family consists of a cyclic monomer and its ring-opening polymerizations. In ring-opening polymerization, the ring of the cyclic monomer is broken and the "opened" monomer is added to a long chain, forming a polymer chain. The details of the data generation can be found in the Appendix. In total, we run roughly 10 DFT simulations with different initializations for each \(L\)-loop (\(L=1,3,4,5,6\)) polymer across the 24 polymer families. We have 1311 DFT trajectories and 6,552,624 molecular conformations. Only polymers with at most 64 atoms are used for pre-training, totaling 3,851,540 conformations. The remaining larger-polymer conformations are used for fine-tuning and testing.
Footnote 1: Dataset will be made available upon publication.
MD17, ANI1-x, and PCQM4Mv2. In addition to our own _poly24_, we have utilized three existing public datasets, namely MD17 [6; 5], ANI1-x [26], and PCQM4Mv2 [16; 15], for our model pre-training.
These datasets contain small organic molecules in a vacuum, and property prediction for such molecules is an area of great interest to the cheminformatics community. The machine learning for molecules community has also extensively studied and benchmarked these datasets. Table 1 summarizes all the dataset used in the pre-training stage. In total, we have more than 15 million samples from MD17, ANI1-x, PCQM4Mv2, and Poly24, covering equilibrium and off-equilibrium conformations for diverse organic molecules.
## 4 Experiments
### MD Simulations for Small Molecules on MD17
Setup. After pre-training our model ET-OREO, we further fine-tune it on the MD17 dataset to validate the performance of molecular simulations using the forces predicted by the fine-tuned model. We run simulations with our model's predicted forces under a Nose-Hoover thermostat, with a temperature of 500K and a 0.5fs timestep, for 600k steps. The simulation setting is the same as in [8], which benchmarked popular deep learning forcefields for MD simulations. We use the ASE package for the implementation of the molecular simulation environment [21], with the characteristic time of the Nose-Hoover thermostat set to 25fs.
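The sketch below illustrates how a fine-tuned force predictor can drive such a simulation through ASE. The paper's Nose-Hoover thermostat is replaced here by a Langevin thermostat purely as a simple stand-in, and `atoms`, `predict_energy_and_forces` and the friction value are illustrative assumptions.

```python
from ase import units
from ase.calculators.calculator import Calculator, all_changes
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution

class MLCalculator(Calculator):
    """Wraps the fine-tuned model as an ASE calculator."""
    implemented_properties = ["energy", "forces"]

    def calculate(self, atoms=None, properties=("energy",), system_changes=all_changes):
        super().calculate(atoms, properties, system_changes)
        energy, forces = predict_energy_and_forces(atoms)   # model inference (eV, eV/A)
        self.results = {"energy": energy, "forces": forces}

atoms.calc = MLCalculator()                    # `atoms`: the initial conformation
MaxwellBoltzmannDistribution(atoms, temperature_K=500)
dyn = Langevin(atoms, timestep=0.5 * units.fs, temperature_K=500, friction=0.002)
dyn.run(600_000)                               # 600k steps of 0.5 fs
```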
Baselines and Metrics.Following [8], we report 3 metrics:
* _Forces MAE_, which is the mean absolute error of forces on the DFT trajectories;
* _Stability_, which measures how long the simulation can run before blowing up. According to [8], this is detected when the radial distribution function (RDF) deviates from the reference simulation by more than a threshold (0.10A).
* \(h(r)\) _MAE_, with \(h(r)\) representing the distribution of interatomic distances during simulation. According to [8], the MAE here is calculated as the \(l_{1}\)-norm between the reference distribution and the predicted distribution; a computational sketch is given after this list.
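The sketch below shows one way to evaluate this statistic from two trajectories stored as arrays of shape (n_frames, n_atoms, 3); the bin range, bin count and the use of the mean absolute deviation are illustrative assumptions, and `ref_traj`/`pred_traj` stand for the reference and model-driven trajectories.

```python
import numpy as np

def distance_histogram(traj, bins):
    """Normalized histogram of all pairwise interatomic distances, pooled over frames."""
    n_atoms = traj.shape[1]
    iu = np.triu_indices(n_atoms, k=1)
    dists = np.linalg.norm(traj[:, :, None, :] - traj[:, None, :, :], axis=-1)
    dists = dists[:, iu[0], iu[1]].ravel()
    hist, _ = np.histogram(dists, bins=bins, density=True)
    return hist

bins = np.linspace(0.0, 10.0, 201)         # angstrom
h_ref = distance_histogram(ref_traj, bins)
h_pred = distance_histogram(pred_traj, bins)
h_mae = np.abs(h_ref - h_pred).mean()
```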
We compare ET-OREO with the best-performing models reported by [8]. Furthermore, ET-OREO is compared with TorchMDNet [29] trained on MD17 from scratch, and ET-ORE, which is an ablation version of ET-OREO without the zero-force and de-noising regularizations. Consistent with previous
\begin{table}
\begin{tabular}{l|c c c} Dataset & \# Conformations & Equilibrium & Off-equilibrium \\ PCQM4Mv2 & 3,378,606 & ✓ & ✗ \\ ANI1x & 4,956,005 & ✓ & ✓ \\ MD17 & 3,611,115 & ✗ & ✓ \\ poly24 & 3,851,540 & ✗ & ✓ \\ Total & 15,718,279 & ✓ & ✓ \\ \end{tabular}
\end{table}
Table 1: Datasets used in our model pre-training process.
simulation benchmark [8], TorchMDNet, ET-ORE, and ET-OREO are fine-tuned with 9,500 conformations for training and 500 for validation. The training and validation conformations are sampled from the same training data used in the pre-training of ET-OREO, and the metrics are reported on the rest of MD17 data unseen to both the fine-tuning and the pre-training of ET-OREO and ET-ORE. Furthermore, in our implementation, we focus solely on forces optimization for TorchMDNet. We show in Appendix that this improves forces accuracy without sacrificing the model's ability to predict the potential energy.
Results. In Table 2, we compare the performance of ET-OREO with baseline models. We make the following key observations: (1) On all of the simulations, we achieve a lower MAE on the interatomic distance distribution \(h(r)\) than the previous best-performing model, NequIP, as reported by [8]. In particular, we reduced the \(h(r)\) error for aspirin and salicylic acid by half, by 10% for ethanol, and by 18% for naphthalene. While NequIP [1] can achieve stable and accurate MD simulations by training from scratch on MD17, it suffers from a lower FPS as it is computationally expensive. In contrast, ET-OREO can achieve both fast and high-quality MD simulations. (2) ET-OREO improves forces accuracy over TorchMDNet by over three times, achieving state-of-the-art MAE on all four tested molecules. The major difference between ET-OREO and TorchMDNet is that ET-OREO is pre-trained with our force-centric objective before fine-tuning on MD17 data. Such significant performance gains show the benefits of pre-training over diverse equilibrium and off-equilibrium conformations. (3) ET-OREO produces
\begin{table}
\begin{tabular}{l l|l l l l|l l l} \hline Molecule & Metric & DimNet & GenNet-T & GenNet-dT & NequIP & TorchMDNet & ET-ORE & ET-OREO \\ \hline Aspirin & Force (\(\downarrow\)) & 10.0 & 3.3 & 5.1 & 2.3 & 7.4 & 4.2 & **1.0** \\ & Stability (\(\uparrow\)) & 54\({}_{12}\)(\(\uparrow\)) & 72\({}_{(9)}\) & 19\({}_{(12)}\) & 300\({}_{(0)}\) & 102\({}_{(45)}\) & 94\({}_{(42)}\) & 300\({}_{(0)}\) \\ & \(h(r)\) (\(\downarrow\)) & 0.04\({}_{(0.00)}\) & 0.04\({}_{(0.02)}\) & 0.04\({}_{(0.001)}\) & 0.02\({}_{(0.00)}\) & 0.04\({}_{(0.00)}\) & 0.04\({}_{(0.00)}\) & 0.02\({}_{(0.00)}\) \\ Ethanol & Force & 4.2 & 2.1 & 1.7 & 1.3 & 5.6 & 3.1 & **1.0** \\ & Stability & 26\({}_{(10)}\) & 169\({}_{(98)}\) & 300\({}_{(0)}\) & 300\({}_{(0)}\) & 121\({}_{(34)}\) & 300\({}_{(0)}\) & 300\({}_{(0)}\) \\ & \(h(r)\) & 0.15\({}_{(0.03)}\) & 0.10\({}_{(0.002)}\) & 0.09\({}_{(0.00)}\) & 0.08\({}_{(0.00)}\) & 0.12\({}_{(0.01)}\) & 0.10\({}_{(0.00)}\) & 0.03\({}_{(0.00)}\) \\ Naphthalene & Force & 5.7 & 1.5 & 1.9 & 1.1 & 3.3 & 2.0 & **0.9** \\ & Stability & 85\({}_{(68)}\) & 8\({}_{(2)}\) & 25\({}_{(10)}\) & 300\({}_{(0)}\) & 50\({}_{(20)}\) & 25\({}_{(9)}\) & 300\({}_{(0)}\) \\ & \(h(r)\) & 0.10\({}_{(0.00)}\) & 0.13\({}_{(0.00)}\) & 0.12\({}_{(0.01)}\) & 0.12\({}_{(0.00)}\) & 0.12\({}_{(0.00)}\) & 0.11\({}_{(0.00)}\) & 0.03\({}_{(0.00)}\) \\ Salicylic Acid & Force & 9.6 & 4.0 & 4.0 & 1.6 & 4.7 & 2.5 & **0.9** \\ & Stability & 73\({}_{(82)}\) & 26\({}_{(24)}\) & 94\({}_{(109)}\) & 300\({}_{(0)}\) & 60\({}_{(0.09)}\) & 94\({}_{(58)}\) & 300\({}_{(0)}\) \\ & \(h(r)\) & 0.06\({}_{(0.02)}\) & 0.08\({}_{(0.04)}\) & 0.07\({}_{(0.03)}\) & 0.03\({}_{(0.00)}\) & 0.06\({}_{(0.02)}\) & 0.05\({}_{(0.01)}\) & 0.02\({}_{(0.00)}\) \\ \hline \end{tabular}
\end{table}
Table 2: Simulation results on MD17. For all results, force MAE is reported in the unit of [meV/A], and stability is reported in the unit of [ps]. The distribution of interatomic distances \(h(r)\) MAE is unitless. FPS stands for frames per second. For all metrics (\(\downarrow\)) indicates the lower the better, and (\(\uparrow\)) indicates the higher the better. The first group of methods is taken from [8]. The second group of methods is our new baselines, including TorchMDNet [8], ET-ORE, and ET-OREO. These models share the same architecture and have the same FPS.
Figure 1: Illustration of simulation statistics for ET-OREO on MD17 dataset. Figure (a): distributions of interatomic distances during reference simulation and simulation with ET-OREO predictions finetuned on MD17. Figure (b): the curve of potential energy predicted by ET-OREO during simulation.
stable simulations and accurate interatomic distance distributions. ET-OREO is able to stably run simulations on all four molecules with accurate interatomic distance distribution \(h(r)\) compared to the reference trajectory in MD17. Figure 1 visualizes the close approximation of ET-OREO predicted \(h(r)\) compared with reference data, and that ET-OREO can produce energy-conserving simulations that sample around the equilibrium.
Regularization on Equilibrium Conformations is Vital for Simulations. We found that training TorchMDNet from scratch on MD17 cannot produce stable MD simulations. Meanwhile, the performance of ET-ORE shows that pre-training on forces alone only improves force accuracy: except for ethanol, ET-ORE cannot produce stable MD simulations, despite consistently better force accuracy. This is in line with the observation of [8] that higher force accuracy does not always guarantee simulation performance. The difference between ET-ORE and ET-OREO is that the latter incorporates stable zero-force conformations and a de-noising regularization objective. This additional regularization on the conformation space proves vital for stable and accurate simulations.
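The exact form of these terms is not spelled out here, but a schematic of the kind of force-centric objective described above is sketched below; the `model(positions, species)` signature, the batch fields, the noise scale, and the de-noising target are all illustrative assumptions rather than the actual implementation.

```python
import torch
import torch.nn.functional as F

def pretraining_loss(model, batch, sigma=0.05, lam=1.0):
    # Off-equilibrium conformations with reference forces: plain force matching.
    f_pred = model(batch["offeq_pos"], batch["offeq_species"])
    loss_force = F.l1_loss(f_pred, batch["offeq_forces"])

    # Equilibrium conformations: the true forces are (approximately) zero.
    f_eq = model(batch["eq_pos"], batch["eq_species"])
    loss_zero = f_eq.abs().mean()

    # De-noising: perturb the equilibrium geometry and ask the force head to point
    # back toward the minimum (one common choice of target; assumed here).
    noise = sigma * torch.randn_like(batch["eq_pos"])
    f_noisy = model(batch["eq_pos"] + noise, batch["eq_species"])
    loss_denoise = F.mse_loss(f_noisy, -noise)

    return loss_force + loss_zero + lam * loss_denoise
```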
ET-OREO achieves accurate MD simulation with roughly 2.5 times faster inference. From Table 2, NequIP reports simulation performance similar to ET-OREO without any pre-training. However, NequIP has a significantly slower inference time. The NequIP model in Table 2 has an inference time of approximately 119 ms per step on an NVIDIA V100 with 1.05 million model parameters [8]. In comparison, ET-OREO has an inference time of 48.5 ms on the same hardware with 6.87 million parameters. Hence, we achieve state-of-the-art simulation performance with a 2.45× faster inference speed.
### Simulation on Large-loop Polymers
The vast majority of our model's training data is small molecules. In this section, we investigate the model's ability to generalize to larger molecules, consisting of 15-loop polymers unseen in the training data.
Setup. We fine-tune our model on the forces of small polymers (5 loops or fewer) in the poly24 dataset for one epoch with a learning rate of 0.0001. To test the model's MD simulation performance on large polymers, we take the larger 15-loop polymers from the poly24 dataset and run MD simulations on them with the ET-OREO fine-tuned force field for a maximum of 600K steps. The numbers of atoms of these 15-loop polymers are, respectively, 360, 360, 180, and 240. The training data contains molecules of at most \(\approx\)100 atoms. Therefore, ET-OREO is required to perform accurate MD simulations on unseen polymers, testing both its simulation and generalization ability.
Results. Figure 2 (a) visualizes the comparison between ET-OREO predicted forces and the DFT reference data. For all 15-loop polymers, ET-OREO obtains forces that are highly accurate relative to the DFT calculations, with an MAE of about 0.01 eV/Å and a cosine distance close to zero. Hence, ET-OREO can perform MD simulations on multiple types of polymers with uniformly high correlation to the DFT references. The test 15-loop polymers contain up to 360 atoms, unseen in the training data. This shows that ET-OREO extrapolates well to large unseen polymers built from known monomers. Furthermore, in practice, ET-OREO only needs training data from small polymers, for which _ab initio_ data are cheap to generate, which greatly reduces the cost of DFT simulation.
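The two quantities quoted above can be computed from paired force arrays as in the following sketch (the array shapes are assumptions):

```python
import numpy as np

def force_metrics(f_pred, f_ref, eps=1e-12):
    """f_pred, f_ref: arrays of shape (n_frames, n_atoms, 3), e.g. in eV/Å."""
    mae = np.mean(np.abs(f_pred - f_ref))
    fp, fr = f_pred.reshape(-1, 3), f_ref.reshape(-1, 3)
    cos = np.sum(fp * fr, axis=1) / (np.linalg.norm(fp, axis=1) *
                                     np.linalg.norm(fr, axis=1) + eps)
    return mae, float(np.mean(1.0 - cos))   # force MAE and mean cosine distance
```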
Figure 2 (b) shows the potential energy curves during simulation. The potential energy curves indicate that the polymer in the simulation converges to a stable near-equilibrium distribution quickly after
\begin{table}
\begin{tabular}{c|c c c} \hline \hline ID & Polymer Name & \# atoms & SMILES \\ \hline CK & cyclooctane & 360 & [*]CCCCCCCC[*] \\ OTH & n-alkane substituted \(\delta\)-valerolactone & 360 & [*]OC(CCC)CCCC[*]=O \\ LAC1 & \(\gamma\)-butylactone & 180 & [*]OCOCOC([*])=O \\ LAC2 & 3-methyl-1,4-dioxan-2-one & 240 & [*]OCOCOC(C)C([*])=O \\ \hline \hline \end{tabular}
\end{table}
Table 3: Details of the 15-loop polymers used for simulations, including their ID, polymer name, number of atoms of the 15-loop polymer, and SMILES string (the rdkit visualizations of the monomer and polymer are omitted here).
initial fluctuations. Furthermore, ET-OREO can explore different possible conformation states of the polymers, as illustrated in Figure 2 (c).
### Property Prediction on QM9
To test ET-OREO's ability to encode equilibrium conformation for property prediction tasks, we follow the same experiment setting as NoisyNode [33] on QM9. [33] exclusively pre-trained on equilibrium conformations from PCQM4Mv2 with a de-noising objective. In comparison, ET-OREO has access to larger off-equilibrium data supervised by forces. We follow the exact same fine-tuning setup as [33] save the de-noising regularization in the fine-tuning stage. In Table 4, we compare ET-OREO with NoisyNode and TorchMDNet [29] on HOMO-LUMO properties prediction on QM9. We report the performance of NoisyNode trained on the TorchMDNet encoder reported in [33], hence all three models have the same encoders, except that TorchMDNet trains from scratch, and NoisyNode and ET-OREO fine-tune from their pre-trained parameters. ET-OREO improves the performance of property prediction by \(\sim\)30% compared to TorchMDNet trained from scratch, implying that our pre-training paradigm provided the model with useful information about the quantum mechanical properties of equilibrium conformations. Our performance is on par with NoisyNode, and unlike in simulation experiments, we observe no accuracy improvement by the inclusion of the rich off-equilibrium conformation. We speculate that such equilibrium property prediction tasks benefit little from additional information on off-equilibrium conformation. We want to particularly point out that, despite the pre-training effort, NoisyNode and ET-OREO both require extensive fine-tuning with a large amount of data (more than 10,000) and training epochs (roughly 150 epochs for convergence) for optimal performance. An important future improvement in molecular pre-training should be in improving the data and time efficiency of fine-tuning.
\begin{table}
\begin{tabular}{c|c c c} & TorchMDNet & NoisyNode & ET-OREO \\ \(\epsilon_{\text{HOMO}}\) & 20.3 & **15.6** & 16.8 \\ \(\epsilon_{\text{LUMO}}\) & 17.5 & **13.2** & 14.5 \\ \(\Delta\epsilon\) & 36.1 & **24.5** & 26.4 \\ \end{tabular}
\end{table}
Table 4: Fine-tuning on HOMO-LUMO properties on QM9. Metrics are MAE in meV.
Figure 2: Accuracy and Robustness of ET-OREO in MD simulations for large polymers. In Figure (a) it is observed that ET-OREO forces are highly correlated with DFT forces during its own MD simulations. Figure (b) shows the RMSD and potential energy curves of ET-OREO during MD simulations. The curves suggest that ET-OREO simulations experience equilibration. Figure (c) shows the visualization of sample conformations captured at various time points during the simulation.
## 5 Discussion
We presented ET-OREO, a pre-trained model for 3D molecules on \(\approx\)15M 3D conformations of both equilibrium and off-equilibrium states. We unified the pre-training of molecular conformations of different sources and states with a force-centric pre-training objective. With our pre-training model, we achieved state-of-the-art performance on MD simulations, with both high force accuracy and simulation efficiency. As a potent encoder for conformations, our model also attained a state-of-the-art level of performance on property prediction on equilibrium data.
Our current model is limited mostly to single and small molecules in a vacuum, leaving more complex molecular environments as future work. We also deem it a promising direction to explore more model architectures and larger model sizes. Furthermore, our de-noising objective on equilibrium data does not leverage the model's learned information on atomic forces. It will be interesting to study the possibility of a more sophisticated coupling of equilibrium and off-equilibrium optimization, to make the model progressively leverage its quantum mechanical knowledge for equilibrium data.
|
2306.10093 | Musico-acoustic Depictions of Laminar and Turbulent Flows in Ligeti
Piano Etude No. 9 and a Novel Method of Analysis | The relationship between musical material and physical phenomena has become a
topic in the musicological literature over the last several decades,
particularly concerning elements of the musical system itself, and
constructions found in the work of contemporary classical composers such as
Gyorgy Ligeti and Iannis Xenakis. Most scholars, who adopt this approach,
explore the physical phenomena of fractals in the analysis of musical works,
but fluid mechanical frameworks, such as laminar and turbulent flows, offer a
new avenue to be explored. In this paper I will propose a novel method of
musical analysis for examining musical structures in terms of fluid-like
behaviour such that Ligeti etude no. 9 serves as a model, whereby the metaphors
of laminar and turbulent flows take precedence. The methodological design
includes the utility of converting terms (by proposing correlations between
physical concepts and the acoustic properties of music), theoretical frameworks
for musicological application, and scatter plots, which provide central
analytic support to demonstrating the fluid-like tendencies in musical
materials, for they capture a formal development over time. | Noah Chuipka | 2023-06-17T19:06:41Z | http://arxiv.org/abs/2306.10093v1 | Musico-acoustic Depictions of Laminar and Turbulent Flows in Ligeti's Piano Etude No. 9 and a Novel Method of Analysis Noah Chuipka April 26, 2022
## Abstract
The relationship between musical material and physical phenomena has become a topic in the musicological literature over the last several decades, particularly concerning elements of the musical system itself, and constructions found in the work of contemporary classical composers such as Gyorgy Ligeti and Iannis Xenakis. Most scholars, who adopt this approach, explore the physical phenomena of fractals in the analysis of musical works, but fluid mechanical frameworks, such as laminar and turbulent flows, offer a new avenue to be explored. In this paper I will propose a novel method of musical analysis for examining musical structures in terms of fluid-like behaviour such that Ligeti's etude no. 9 serves as a model, whereby the metaphors of laminar and turbulent flows take precedence. The methodological design includes the utility of converting terms (by proposing correlations between physical concepts and the acoustic properties of music), theoretical frameworks for musicological application, and scatter plots, which provide central analytic support to demonstrating the fluid-like tendencies in musical materials, for they capture a formal development over time.
_Although I am an artist, my working method is that of a scientist active in basic research rather than in applied science. Or of a mathematician working on a new mathematical structure, or of a physicist looking for the tiniest particle of the atomic nucleus. I do not worry about the impact my music will make or what it will turn out to be like. What interests me is to find out the way things are. I am driven by curiosity to discover reality. Of course, there is no reality in art the way there is in science, but the working method is similar. Exactly as in basic research where the solution of a problem throws up innumerable new ones, the completion of a composition raises a host of new questions to be answered in the next piece._ - Ligeti, G (Varga, 2013, p. 32).
## 1 Introduction
Claims of mathematical and scientific depictions in Ligeti's work have emerged in the musicological literature, but mostly with regards to geometrical concepts vested in nature, mathematics, and art. For instance, the proposed fractals in the 4th movement of Ligeti's Piano Concerto (Steinitz, 1996a); the repeating series of expanded and contracted pitch structures in etude no. 14 that depict the repeating ascending columns in Constantin Brancusi's (1876-1957) 29-metre-high sculpture (Steinitz, 1996a); appearances of Lorentz's butterflies, Koch's curve, Gaston Julia's fractal, and Cantor's function in the etudes (Blanaru, 2020); and chaotic determinisms in etude no. 1 (Steinitz, 1996b), where initial musical patterns (i.e., initial conditions) result in similar yet dispersive patterns. Although most of the relevant literature discusses |
2302.02160 | Directed Acyclic Graphs With Tears | Bayesian network is a frequently-used method for fault detection and
diagnosis in industrial processes. The basis of Bayesian network is structure
learning which learns a directed acyclic graph (DAG) from data. However, the
search space will scale super-exponentially with the increase of process
variables, which makes the data-driven structure learning a challenging
problem. To this end, the DAGs with NOTEARs methods are being well studied not
only for their conversion of the discrete optimization into continuous
optimization problem but also their compatibility with deep learning framework.
Nevertheless, there still remain challenges for NOTEAR-based methods: 1) the
infeasible solution results from the gradient descent-based optimization
paradigm; 2) the truncation operation to promise the learned graph acyclic. In
this work, the reason for challenge 1) is analyzed theoretically, and a novel
method named DAGs with Tears method is proposed based on mix-integer
programming to alleviate challenge 2). In addition, prior knowledge is able to
incorporate into the new proposed method, making structure learning more
practical and useful in industrial processes. Finally, a numerical example and
an industrial example are adopted as case studies to demonstrate the
superiority of the developed method. | Zhichao Chen, Zhiqiang Ge | 2023-02-04T13:00:52Z | http://arxiv.org/abs/2302.02160v1 | # DAGs with Tears: A Novel Structure Learning Method under Deep Learning Framework
###### Abstract
Bayesian network is a frequently-used method for fault detection and diagnosis in industrial processes. The basis of Bayesian network is structure learning which learns a directed acyclic graph from data. Since the search space in structure learning will scale super-exponentially with the increase of process variables, data-driven structure learning is a challenging problem. As a novel method for structure learning, DAGs with No Tears methods are being well studied in recent years due to their compatibility with deep learning framework. However, the DAGs with No Tears methods is far from application in industrial scenario due to problems in the gradient descent based solving stage and the post-processing stage. In this work, those problems are theoretically analyzed in detail by mathematical derivations. To solve these problems, the DAGs with Tears method is proposed by using mix-integer linear programming under the deep learning framework. In addition, prior knowledge is able to incorporate into the new proposed method, making structure learning more practical and useful in industrial processes. Finally, a numerical example and an industrial simulation example are adopted as case studies to demonstrate the superiority of the developed method.
Structure learning, directed acyclic graph, Bayesian network, gradient descent, mix-integer linear programming
## I Introduction
Due to the high complexity mechanism and the lack of rigorous mathematical models, the mechanism-driven plant-level optimization is far from widespread application [1, 2]. Thanks to the great improvement of industrial intelligence and information technology, the acquisition and wide application of industrial big data have become possible. Therefore, the application of data-driven optimization is being a trending research topic. Quantities of new data-driven modeling method [3, 4] have recently been proposed especially using the deep learning method [5], which play a crucial part in the guidance of control, the increasing of economic value, and the guarantee of process safety. However, using the data acquired from the process directly could not promise the reliability of the model and results in the obstacle of the data-driven method promotion [6]. Meanwhile, as a branch of probabilistic graphical model, Bayesian network [7] is being more attractive, which can be regarded as a data structure that provides the skeleton for representing a joint distribution compactly in a factorized way [8]. As a compromise proposal of mechanism and data, Bayesian network makes it possible to open the black box of data-driven models. The fault detection & diagnosis technology [9, 10] and soft sensor method [11] based on Bayesian network have been well studied and hence the research on the application of Bayesian network in industrial big data is of great importance.
The construction of Bayesian network consists of parameter learning and structure learning [8]. The structure learning is the basic of Bayesian network construction and a popular research topic in this field [12]. Actually, learning the directed acyclic graph (DAG) from the data directly as the graph of Bayesian network is an NP-Hard problem. The major difficulty is the combinational explosion of binary variables and the non-convexity for the acyclic constraint in the optimization problem. To learn the DAGs from data, three major methods are being used namely constraint based method, score based method, and DAGs with No Tears (NOTEAR) [13] method. Constraint based methods like PC algorithm [14] view a Bayesian network as a representation of independencies. They try to test for conditional dependence and independence in the data and then to find a network (or more precisely an equivalence class of networks) that best explains these dependencies and independencies. Score-based methods like K2 algorithm [15], MCMC algorithm [16], and Hill Climbing Search algorithm [17] view a Bayesian network as specifying a statistical model and then address learning as a model selection problem. Score based methods all operate on the same principle: Define a hypothesis space of potential models -- and a scoring function that measures how well the model fits the observed data. The mentioned above methods are usually based on heuristic rule and could not consider all nodes simultaneously to find an optimal structure. Different from traditional heuristic methods, GoBNILP algorithm [18], as a score based method, used the mix-integer linear programming (MILP) to traverse all the nodes to maximize the score function. The prior knowledge can be added in the Bayesian network via the regulation of binary variables, while the combinational optimization and the corresponding constraints makes it to be an NP-Hard problem when solving the MILP linear programming problem. As a combination of score based and combinational optimization method, NOTEAR utilizes the statistical properties of the least square loss of structural equation model (SEM) in scoring DAGs. That is, the minimizer of the least square loss in the SEM provably recovers a true DAG with high probability on finite-samples and in high dimensions [19]. Therefore, the combinational optimization of DAG structure learning can be converted into continuous optimization problem and the computational efficiency and performance can be improved substantially.
The earliest NOTEAR method is a linear model, which makes it difficult to confront with the strong nonlinearity of process data. Therein, the least square loss of structural equation model in NOTEAR under deep learning framework is being studied in recent years to improve the accuracy when applying NOTEAR method to nonlinear data. Yu et al. [20] adopted the graph convolution operation to combine the VAE and NOTEAR as well as changed the constraint in NOTEAR simultaneously. The so-called DAG-GNN model is the first method that extends the NOTEAR to the nonlinear data under deep learning framework and the non-convexity constraint of the original is improved in a power form. Ng et al. [21] analyzed the SEM in an abstract function form and generalized the NOTEAR method to nonlinear case. Wang et al. [22] proposed a generative adversarial framework for NOTEAR to improve the DAG mining capability in nonlinear data.
Even though NOTEAR has been well studied and has achieved considerable improvement, problems remain in practice. The non-convex acyclicity constraint is difficult to satisfy under the deep learning framework, whose parameters are updated via the gradient descent method [23], so the final result frequently contains an infeasible solution. Therefore, the DAGs obtained from the NOTEAR methods must undergo post-processing to satisfy the acyclicity constraint. Generally, the tearing problem is converted into a truncation problem [24], where some elements of the adjacency matrix are set to 0 to remove cycles. This post-processing may greatly perturb the least-squares result. Meanwhile, to the best of our knowledge, NOTEAR is a purely data-driven method, and prior knowledge cannot be added to the final DAG. This drawback reduces the reliability of the DAG, since models driven by both mechanism and data tend to be more effective than those driven purely by data. To solve the two problems mentioned above, the DAGs with Tears (WITHTEAR) method is proposed in this work, taking the advantages of both NOTEAR and combinatorial optimization into account. The innovations of this work can be summarized as follows:
1) The occurrence of infeasible solutions in the NOTEAR method under gradient descent is theoretically analyzed, and the principle of the truncation operation in the post-processing stage is analyzed from the perspective of least-squares perturbation.
2) The WITHTEAR method is formulated on top of the NOTEAR method using an MILP model. In addition, an MILP model that accounts for prior knowledge of the data is integrated into the DAG structure learning.
The paper is organized as follows: the problem statement is given in Section 2. In Section 3, the corresponding analyses of NOTEAR are provided in detail, and WITHTEAR is presented in Section 4. Two case studies are given in Section 5 to show the superiority of WITHTEAR based on the DAG-GNN model. Conclusions are drawn in Section 6.
## II Problem Statement
The problem to be solved in this paper is stated as follows. We are given \(n\) variables \(x_{i}\) belonging to a set \(X\) defined as \(X\)= {\(x_{1}\), \(x_{2}\),..., \(x_{n}\)}. The prior knowledge about the variables is represented by the connections between variables encoded in a matrix \(P\). The problem is how to combine NOTEAR and an MILP model to learn a Bayesian network (DAG) under the deep learning framework, and how to represent the corresponding DAG topology with an adjacency matrix \(A\).
## III Theoretical analyses of the NOTEAR method
### _The occurrence of infeasible solution_
To motivate the WITHTEAR method, an analysis of the NOTEAR method is given as follows. According to reference [13], the NOTEAR methods convert the traditional combinatorial optimization problem:
\[\min_{W}F(W)\] \[s.t.\ G(W)\in DAGs\]
into a continuous programming problem shown as Problem P1:
\[(P1)\ \min_{A,\Theta}\frac{1}{2n}\sum_{j=1}^{n}\left\|X_{(j)}-f(X_{ (j)}|A,\Theta)\right\|_{F}^{2}+\lambda\|A\|_{1}\] \[s.t.h(A)=0\]
The equality constraint can be written as Eq. (1) [13] or Eq. (2) [20]:
\[h(A)=Tr(e^{A\odot A})-d=0 \tag{1}\]
\[h(A)=Tr[(I+\gamma A\odot A)^{d}]-d=0 \tag{2}\]
where _Tr_ is the trace of a matrix, \(\odot\) is the Hadamard product, \(\gamma\) is a hyper-parameter, and \(d\) is the dimension of the square matrix \(A\).
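For reference, both constraint functions can be evaluated directly; the sketch below assumes a square \(A\) stored as a PyTorch tensor, with \(\gamma\) the hyper-parameter of Eq. (2).

```python
import torch

def h_expm(A):
    # Eq. (1): h(A) = tr(exp(A ⊙ A)) - d
    d = A.shape[0]
    return torch.trace(torch.matrix_exp(A * A)) - d

def h_poly(A, gamma=1.0):
    # Eq. (2): h(A) = tr[(I + gamma * A ⊙ A)^d] - d
    d = A.shape[0]
    M = torch.eye(d, dtype=A.dtype) + gamma * A * A
    return torch.trace(torch.linalg.matrix_power(M, d)) - d
```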
To solve this constrained optimization problem, following reference [25], the augmented Lagrangian method is adopted, and the objective function of Problem P1 without the constraint can be rewritten as shown in Eq. (3). The \(\alpha\) and \(\beta\) in Eq. (3) are the Lagrange multiplier and the penalty parameter, respectively. When \(\beta\rightarrow\infty\), the minimizer of Eq. (3) must satisfy \(h(A)=0\), in which case Eq. (3) is equal to the objective function of Problem P1.
\[\begin{split}\min_{A,\Theta}\frac{1}{2n}\sum_{j=1}^{n}& \left\|X_{(j)}-f(X_{(j)}|A,\Theta)\right\|_{F}^{2}\\ &+\lambda\|A\|_{1}+\alpha h(A)+\frac{\beta}{2}|h(A)|^{2}\end{split} \tag{3}\]
Hence, the strategy is to progressively increase \(\beta\), while the Lagrange multiplier \(\alpha\) is correspondingly updated, as shown in Eqs. (4) and (5).
\[\alpha_{k+1}=\alpha_{k}+\beta_{k}h(A_{k}) \tag{4}\]
\[\beta_{k+1}=\left\{\begin{array}{l}10\beta_{k},if\left|h(A_{k})\right|>0.25 \left|h(A_{k-1})\right|\\ \beta_{k},otherwise\end{array}\right. \tag{5}\]
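A schematic of the resulting outer loop is sketched below; `inner_minimize` stands for the (unspecified) gradient-based minimization of Eq. (3) at fixed \(\alpha\) and \(\beta\), and the stopping tolerances are assumptions.

```python
def augmented_lagrangian(inner_minimize, h, A0, alpha=0.0, beta=1.0,
                         beta_max=1e10, progress=0.25, h_tol=1e-8):
    A, h_prev = A0, float("inf")
    while beta <= beta_max:
        A = inner_minimize(A, alpha, beta)       # minimize Eq. (3) for fixed alpha, beta
        h_val = h(A)
        alpha = alpha + beta * h_val             # Eq. (4)
        if abs(h_val) > progress * abs(h_prev):  # Eq. (5)
            beta = 10.0 * beta
        h_prev = h_val
        if abs(h_val) < h_tol:
            break
    return A
```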
However, during the iterations of the gradient descent method, \(A\) may not satisfy the acyclicity constraint, which results in an infeasible solution. To illustrate this phenomenon, Problem P1 is first simplified to the linear form shown as Problem P2. The constraints written in Eq. (1) and Eq. (2) will be discussed in turn.
\[(P2)\ \min_{A}\frac{1}{2n}\sum_{j=1}^{n}\left\|X_{(j)}-X_{(j)}A \right\|_{F}^{2}\] \[s.t.h(A)=0\]
When Eq. (1) is adopted as constraint, the objective function can be formulated as Eq. (6) shown:
\[\begin{split} Loss=&\frac{1}{2n}\sum_{j=1}^{n} \left\|X_{(j)}-X_{(j)}A\right\|_{F}^{2}\\ &+\alpha(Tr(e^{A\odot A})-d)+\frac{\beta}{2}\big{|}Tr(e^{A\odot A })-d\big{|}^{2}\end{split} \tag{6}\]
Assume that in the gradient descent iterations, the \(A_{k}\) calculated at the \(k\)-th iteration satisfies the constraint; then the \(A_{k+1}\) at the next iteration is given by Eq. (7).
\[A_{k+1}=A_{k}-LR\times\frac{\partial Loss}{\partial A_{k}} \tag{7}\]
where _LR_ corresponds to the learning rate in the gradient descent method.
The Eq. (7) can be expanded [26, 27] in Eq. (8):
\[\begin{split} A_{k+1}=A_{k}-LR\times&\big{[}\frac{1}{n}\sum_{j=1}^{n}X_{(j)}^{T}(X_{(j)}A_{k}-X_{(j)})\\ &+2\alpha A_{k}\odot(e^{A_{k}\odot A_{k}})^{T}\big{]}\end{split} \tag{8}\]
Then, Eq. (9) and Eq. (10) can be derived by taking the Hadamard product of both sides with \(A_{k}\) and \(A_{k+1}\), respectively.
\[\begin{split} A_{k+1}\odot A_{k}=& A_{k}\odot A_{k}\\ &-LR\times[\frac{1}{n}\sum_{j=1}^{n}X_{(j)}^{T}(X_{(j)}A_{k}-X_{(j)})\\ &+2\alpha A_{k}\odot(e^{A_{k}\odot A_{k}})^{T}]\odot A_{k}\end{split} \tag{9}\]
\[\begin{split} A_{k+1}\odot A_{k+1}=& A_{k}\odot A_{k+1}\\ &-LR\times[\frac{1}{n}\sum_{j=1}^{n}X_{(j)}^{T}(X_{(j)}A_{k}-X_{(j)})\\ &+2\alpha A_{k}\odot(e^{A_{k}\odot A_{k}})^{T}]\odot A_{k+1}\end{split} \tag{10}\]
Substituting Eq. (10) into Eq. (9), Eq. (11) can be given as follows:
\[\begin{split} A_{k}\odot A_{k}=& A_{k+1}\odot A_{k +1}\\ &+LR\times[\frac{1}{n}\sum_{j=1}^{n}X_{(j)}^{T}(X_{(j)}A_{k}-X_{( j)})\\ &+2\alpha A_{k}\odot(e^{A_{k}\odot A_{k}})^{T}]\odot(A_{k+1}+A_{ k})\end{split} \tag{11}\]
Finally, the LHS of Eq. (11) appears in the constraint Eq. (1); hence, Eq. (12) can be derived by substituting Eq. (11) into Eq. (1):
\[\begin{split} h&\{A_{k+1}\odot A_{k+1}\\ &+LR\times[\frac{1}{n}\sum_{j=1}^{n}X_{(j)}^{T}(X_{(j)}A_{k}-X_{( j)})\\ &+2\alpha A_{k}\odot(e^{A_{k}\odot A_{k}})^{T}]\odot(A_{k+1}+A_{ k})\}=0\end{split} \tag{12}\]
Since the data is fed batch-by-batch, the direction of gradient descent is not fixed, because the data is shuffled at every iteration. Moreover, the learning rate _LR_, being a hyperparameter, cannot be tuned so as to guarantee that \(A\) is feasible at the final iteration. Therefore, Eq. (13) cannot be maintained throughout the iterations of Eq. (12), which means that the algorithm will generally report an infeasible solution \(A\) at the end of the iterations.
\[\begin{split}&[\frac{1}{n}\sum_{j=1}^{n}X_{(j)}^{T}(X_{(j)}A_{k}-X_ {(j)})+2\alpha A_{k}\odot(e^{A_{k}\odot A_{k}})^{T}]\\ &\odot(A_{k+1}+A_{k})=0\end{split} \tag{13}\]
Next, Eq. (2) is adopted in Problem P2, and the objective function can be written as Eq. (14):
\[\begin{split} Loss=&\frac{1}{2n}\sum_{j=1}^{n}\left\|X _{(j)}-X_{(j)}A\right\|_{F}^{2}\\ &+\alpha\{Tr[(1+\gamma A\odot A)^{m}]-d\}\\ &+\frac{\beta}{2}\big{|}Tr[(1+\gamma A\odot A)^{m}]-d\big{|}^{2} \end{split} \tag{14}\]
Similarly, the _A\({}_{k+1}\)_ can be derived as shown in Eq. (15):
\[\begin{split} A_{k+1}=& A_{k}-LR\times\{\frac{1}{n} \sum_{j=1}^{n}X_{(j)}^{T}(X_{(j)}A_{k}-X_{(j)})\\ &+2\alpha A_{k}\odot\sum_{k=1}^{d}C_{d}^{k}k\Big{[}\gamma(A_{k} \odot A_{k})^{k-1}\Big{]}^{T}\}\end{split} \tag{15}\]
where \(C\) stands for the binomial coefficient; after a suitable transformation, Eq. (15) can be rewritten as Eq. (16):
\[\begin{split}& A_{k}\odot A_{k}=A_{k+1}\odot A_{k+1}\\ &+LR\times\{\frac{1}{n}\sum_{j=1}^{n}X_{(j)}^{T}(X_{(j)}A_{k}-X_ {(j)})\\ &+2\alpha A_{k}\odot\sum_{k=1}^{d}C_{d}^{k}k\Big{[}\gamma(A_{k} \odot A_{k})^{k-1}\Big{]}^{T}\}\\ &\odot(A_{k}+A_{k+1})\end{split} \tag{16}\]
Substituting Eq. (16) into Eq. (2), Eq. (17), which is similar to Eq. (12), can be derived; it likewise cannot guarantee a feasible solution after the iterations.
\[\begin{split}& h[A_{k+1}\odot A_{k+1}\\ &+LR\times\{\frac{1}{n}\sum_{j=1}^{n}X_{(j)}^{T}(X_{(j)}A_{k}-X_ {(j)})\\ &+2\alpha A_{k}\odot\sum_{k=1}^{d}C_{d}^{k}k\Big{[}\gamma(A_{k} \odot A_{k})^{k-1}\Big{]}^{T}\}\\ &\odot(A_{k}+A_{k+1})]=0\end{split} \tag{17}\]
Last, when the nonlinear form of Problem P1 is adopted, the analogous equations are given in Eq. (18) and Eq. (19), from which the same phenomenon can be derived.
\[\begin{split}& h\{A_{k+1}\odot A_{k+1}+LR\\ &\times[\frac{1}{n}\sum_{j=1}^{n}\big{(}\frac{\partial f(X_{(j)} |A_{k},\Theta)}{\partial A_{k}}\big{)}^{T}(f(X_{(j)}|A_{k},\Theta)-X_{(j)})\\ &+2\alpha A_{k}\odot(e^{A_{k}\odot A_{k}})^{T}]\\ &\odot(A_{k+1}+A_{k})\}=0\end{split} \tag{18}\]
\[h\{A_{k+1}\odot A_{k+1}+LR \tag{19}\] \[\times[\frac{1}{n}\sum_{j=1}^{n}{(\frac{\partial f(X_{(j)}|A_{k}, \Theta)}{\partial A_{k}})}^{T}{(f(X_{(j)}|A_{k},\Theta)-X_{(j)})}\] \[+2\alpha A_{k}\odot\sum_{k=1}^{d}{C_{d}^{k}k\Big{[}\gamma(A_{k} \odot A_{k})^{k-1}\Big{]}}^{T}]\] \[\odot(A_{k+1}+A_{k})\}=0\]
It should be noticed that, in practice, if L1 norm regularization is added to the objective function, \(A\) tends toward 0. The reason can be interpreted by observing Eq. (12), (17), (18) and (19). As the penalty terms grow with the iterations, the factor shown on the LHS of Eq. (20), which appears in the above equations, is driven toward 0; meanwhile, the L1 norm forces \(A\) to be sparse [28], and hence \(A\) tends to 0 during the iterations.
\[A_{k+1}+A_{k}=0 \tag{20}\]
In conclusion, due to the limitations of the gradient descent method and the non-convexity of the equality constraint, the DAGs acquired via NOTEAR methods are not guaranteed to be acyclic. This is why the post-processing operation that accompanies the NOTEAR methods arises. In the next section, the principle of the post-processing operation (also known as truncation, as mentioned before) is stated, to better introduce the WITHTEAR method.
### _The principle of the post-processing operation_
In the post-processing operation, a threshold is set and the elements in matrix \(A\) lower than this threshold are set to 0, yielding a new matrix \(A\). If the new matrix \(A\) satisfies the acyclicity condition, it is output as the final result; otherwise, the new matrix \(A\) is still cyclic, so the threshold is increased and the matrix is truncated again until it is acyclic. In this truncation process, zeroing out elements of \(A\) makes the least-squares result deviate from its optimum. Therefore, the rationale of the truncation operation should be analyzed, which to the best of our knowledge has not been studied before. A simple analysis of the least-squares problem is provided as follows to show the principle of the post-processing operation.
Firstly, the perturbation of the linear least-squares problem is considered, starting from the least-squares model shown in Eq. (21):
\[\tilde{X}=XA \tag{21}\]
If a perturbation \(\delta A\) is added to \(A\), the induced perturbation on the LHS of Eq. (21) is given as follows:
\[\delta\tilde{X}=X\times\delta A \tag{22}\]
Then, the inequality can be derived:
\[\left\|\delta\tilde{X}\right\|_{2}\leqslant\left\|\delta A\right\|_{2}\! \left\|X\right\|_{2} \tag{23}\]
Dividing both sides of Eq. (23) by the L2 norm of \(\tilde{X}\), the relative error of the perturbation can be given as Eq. (24). Similarly, for the nonlinear form, the perturbation is given as Eq. (25).
\[\frac{\left\|\delta\tilde{X}\right\|_{2}}{\left\|\tilde{X}\right\|_{2}} \leq\frac{\left\|A\right\|_{2}\!\left\|X\right\|_{2}}{\left\|XA \right\|_{2}}\frac{\left\|\delta A\right\|_{2}}{\left\|A\right\|_{2}} \tag{24}\] \[=(\frac{\left\|A\right\|_{2}\!\left\|X\right\|_{2}}{\left\|XA \right\|_{2}\times\left\|A\right\|_{2}})\times\left\|\delta A\right\|_{2}\]
\[\frac{\left\|\delta\tilde{X}\right\|_{2}}{\left\|\tilde{X}\right\|_{2}}\leq \frac{\left\|\frac{\partial f(X\!\left|A,\Theta)}{\partial A}\right\|_{2}}{ \left\|f(X\!\left|A,\Theta\right\|)_{2}\right\|}\!\left\|\delta A\right\|_{2} \tag{25}\]
Summarizing Eq. (24) and (25), it can be concluded that, once the matrix \(A\) and the parameters of the regression model are determined, the relative error depends only on the perturbation magnitude. This means that, to deteriorate the least-squares result as little as possible, the elements of \(A\) should be set to 0 from small to large until the graph constructed from \(A\) is acyclic. This process can be accomplished by a tearing operation. For convenience, however, NOTEAR turns the tearing problem into a truncation problem, as mentioned before. Note that the truncation operation may deteriorate the least-squares loss more than the tearing operation does. Meanwhile, prior knowledge cannot be merged with the knowledge learned from data in NOTEAR through such rough truncation. Therefore, the following WITHTEAR method is proposed to solve the two problems mentioned above.
## IV DAGs with Tears Method
### _Loop tearing by MILP problem_
In the previous section, it was shown that, to guarantee a minimum perturbation of the least-squares result while tearing all loops of the graph simultaneously, as few elements of \(A\) as possible should be changed. An MILP model can therefore be formulated to fulfill these two goals, and the corresponding method is named DAGs with Tears (WITHTEAR). Before presenting the WITHTEAR method, the following concept is stated for a better understanding of it.
The loop matrix given in Eq. (26) is a matrix \([u_{i,j}]\) whose columns correspond to connections between nodes (also known as streams, whose set is written as STR) and whose rows correspond to loops. If loop \(i\) includes stream \(j\), the element of the loop matrix is \(u_{i,j}=1\); otherwise \(u_{i,j}=0\).
\[U=\left[\begin{array}{ccc}0&\ldots&1\\ \vdots&\ddots&\vdots\\ 1&\ldots&0\end{array}\right] \tag{26}\]
Based on Eq. (26) and inspired by [29, 30], the loop tearing cost is introduced in Eq. (27) to measure the cost of breaking the streams.
\[Cost=\sum_{j\in STR}w_{j}\times y_{j},y_{j}\in\{0,1\} \tag{27}\]
where \(w_{j}\) is the weight of the stream, which can be obtained from the corresponding coefficient of \(A\), and \(y_{j}\) is a binary variable to be solved. If \(y_{j}=1\), stream \(j\) is torn; otherwise it is kept. To tear all loops, the corresponding constraint is given in Eq. (28). This constraint indicates that every loop in matrix \(U\) must be torn at least once. By adopting this constraint, the acyclicity of the graph constructed from matrix \(A\) can be guaranteed.
\[\sum\limits_{j\in STR}u_{ij}y_{j}\geq 1,u\in U,y_{j}\in\{0,1\} \tag{28}\]
In all, the MILP problem can be stated as Problem P3. By solving Problem P3, the loops of matrix \(A\) can be torn and the final DAG can be formed with the minimum deterioration of the least-squares result.
\[\min Cost=\sum\limits_{j\in STR}w_{j}\times y_{j}\] \[(P3) s.t.\left\{\begin{array}{c}\sum\limits_{j\in STR}u_{ij}y_{j} \geq 1,u\in U\\ y_{j}\in\{0,1\}\end{array}\right.\]
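A small sketch of Problem P3 is given below, using networkx to enumerate the loops and PuLP to solve the MILP. Taking \(w_{j}=|A_{ij}|\) for the edge weights and enumerating all simple cycles in one pass are modelling assumptions (the enumeration can be expensive for dense graphs), not part of the original formulation.

```python
import networkx as nx
import pulp

def tear_loops(A, eps=1e-8):
    """A: numpy adjacency matrix. Returns a copy of A with a minimum-cost set of
    entries zeroed so that every directed loop is broken (in the spirit of Problem P3)."""
    d = A.shape[0]
    G = nx.DiGraph([(i, j) for i in range(d) for j in range(d) if abs(A[i, j]) > eps])
    cycles = list(nx.simple_cycles(G))           # may be expensive for dense graphs
    if not cycles:
        return A.copy()

    edges = sorted(G.edges())
    y = {e: pulp.LpVariable(f"y_{e[0]}_{e[1]}", cat="Binary") for e in edges}
    prob = pulp.LpProblem("loop_tearing", pulp.LpMinimize)
    prob += pulp.lpSum(float(abs(A[i, j])) * y[(i, j)] for (i, j) in edges)  # cost, Eq. (27)
    for cyc in cycles:                                                       # Eq. (28)
        cyc_edges = [(cyc[k], cyc[(k + 1) % len(cyc)]) for k in range(len(cyc))]
        prob += pulp.lpSum(y[e] for e in cyc_edges) >= 1
    # Prior knowledge in the spirit of Problem P4 could be injected here by fixing
    # bounds on selected y variables (e.g. y[e].upBound = 0 for obligatory streams).
    prob.solve(pulp.PULP_CBC_CMD(msg=False))

    A_torn = A.copy()
    for (i, j), var in y.items():
        if var.value() is not None and var.value() > 0.5:
            A_torn[i, j] = 0.0
    return A_torn
```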
### _Loop tearing combining prior knowledge and the MILP problem_
Note that prior knowledge is not included in the MILP problem P3. Hence, in this section, Problem P3 is extended to incorporate prior knowledge through logical propositions and the corresponding disjunctions. The prior knowledge can be given in 3 scenarios: (1) the existence of stream \(j\) is unknown; (2) the existence of stream \(j\) is obligatory; (3) the existence of stream \(j\) is forbidden. For scenario (3), the corresponding elements of matrix \(A\) can be set to 0 before solving the MILP problem P3. Scenarios (1) and (2) are handled as follows. To express the relative position relationships, the logical variables [31] are defined as below:
1). \(V_{1,j}\) If the existence of stream \(j\) is unknown, \(V_{1,j}\) is True.
2). \(V_{2,j}\) If the existence of stream \(j\) is obligatory, \(V_{2,j}\) is False.
The disjunctions can be given as Eq. (29) shown:
\[\left[\begin{array}{c}V_{1,j}\\ UB_{1,j}=1.0\\ LB_{1,j}=0.0\\ j\in STR\end{array}\right]\vee\left[\begin{array}{c}V_{2,j}\\ UB_{2,j}=0.5\\ LB_{2,j}=0.0\\ j\in STR\end{array}\right] \tag{29}\] \[V_{1,j},V_{2,j}\in\{True,False\}\]
where _UB_ and _LB_ are the upper bound and lower bound of binary variable \(y_{j}\) defined in Eq. (28). The extra constraint for the binary variable \(y_{j}\) is given as Eq. (30):
\[LB_{j}\leq y_{j}\leq UB_{j} \tag{30}\]
There are two disjuncts in Eq. (29), but only one of them takes effect at a time while the other is inactive. Therefore, the logical variables should satisfy the constraint shown in Eq. (31):
\[V_{1,j}\lor V_{2,j} \tag{31}\]
Finally, the optimization problem can be formulated as problem P4, and the value of the logical variable can be derived from matrix \(P\) that contains prior knowledge.
\[\min Cost=\sum\limits_{j\in STR}w_{j}\times y_{j}\] \[(P4) s.t.\left\{\begin{array}{c}\sum\limits_{j\in STR}u_{ij}y_{j} \geq 1,\ u\in U\\ y_{j}\in\{0,1\}\\ LB_{j}\leq y_{j}\leq UB_{j},\ j\in STR\\ V_{1,j}\lor V_{2,j}\end{array}\right.\]
### _The algorithm for DAGs with Tears method_
```
input : Data \(X\), Parameter \(\beta_{max}\), the maximum iteration time of deep learning model _Epoch_, the initial value of \(\alpha\), \(\beta\), and matrix \(A\). The coefficient of L1 norm \(\lambda\). The initial value of the best objective function of Problem P1\(\text{Loss}_{best}=\infty\), and matrix _A_\(\text{Abest}\) = None. The matrix which loads the prior knowledge \(P\). The hyper-parameter \(\omega\) output : The best matrix \(\text{Abest}\)
1Training Stage while\(\beta\leq\beta_{max}\)do
2for\(i=1:\)Epochdo
3 \(A_{k+1}=\begin{array}{c}\operatorname*{arg\,min}\limits_{A}\frac{1}{2n}\sum \limits_{j=1}^{n}\left\|X_{(j)}-f(X_{(j)}|A_{k},\Theta)\right\|_{F}^{2}\\ +\lambda\|A\|_{1}+\alpha h(\text{Ab}_{k})+\frac{\rho}{2}|h(\text{Ab}_{k})|^{2} \end{array}\)
4if\(\frac{\sum\limits_{j=1}^{n}\left\|X_{(j)}-f(X_{(j)}|A_{k},\Theta)\right\|_{F}^{2} }{2n}\leq\text{Loss}_{best}\)then
5 \(Loss_{best}=\frac{\sum\limits_{j=1}^{n}\left\|X_{(j)}-f(X_{(j)}|A_{k},\Theta) \right\|_{F}^{2}}{2n}\) ;
6 \(A_{best}=A_{k}\) ;
7
8 \(\alpha_{k+1}=\alpha_{k}+\beta_{k}h(\text{Ab}_{k})\) ;
9 \(\beta_{k+1}=\begin{cases}10\times\beta_{k},\ if\ |h(\text{Ab}_{k})|>0.25\,|h(\text{Ab}_{k-1})|\\ \beta_{k},\ otherwise\end{cases}\) ;
10
11
12
```
**Algorithm 1**Pseudo-code of WITHTEAR Method
According to Problem P3 and P4, the pseudo-code of WITHTEAR method can be given based on the results of NOTEAR as Algorithm 1. From the pseudo-code, it can be concluded that the training stage has no differences between NOTEAR method, any NOTEAR under deep learning framework can be used to acquire matrix \(A\).
In the tear stage, the post processing strategy is changed comparing to the truncation operation adopted in NOTEAR
method. By solving the MILP problem, the prior knowledge of data can be added into the DAGs structure learning increasing the validity of the DAG structure mined from data. Meanwhile, comparing to traditional DAG structure learning using MILP like GoBNILP, the binary variables in this research merely depend on the number of streams rather than that of the parent node, results in a reduction in computation complexity.
```
17Tear Stage
10PreprocessingFor the elements in A lower than the hyper-parameter \(\omega\) and the existence of corresponding streams are forbidden in \(P\), the 0 should be placed in the corresponding location. For those should streams be existence in \(P\) while the corresponding elements are 0 in \(A\), the corresponding elements set to be \(\left\|A\right\|_{\max}\) ;
18whileLoops exist in matrix \(\mathbf{A}_{best}\)do
19 Formulate loop matrix \(U\);
20 Solve Problem \((P3)\) or \((P4)\);
21 For those streams should be tore, set the corresponding elements in \(A_{best}\) to 0;
22 Detect the existence of loop in A
23
```
## V Case Studies
The DAG structures found by DAG-GNN, WITHTEAR (Problem P3), and WITHTEAR (Problem P4) are given in Fig. 4, and the corresponding scores are listed in Table 4. The BGe score using WITHTEAR is 29.7% to 34.8% higher than that of DAG-GNN, and the Gaussian BIC score is 27.8% to 28.6% higher than that of DAG-GNN. The behavior of WITHTEAR on the score functions indicates that tearing the matrix with the MILP problem, rather than roughly truncating it, results in a graph that better represents the data. Note that, when solving Problems P3 and P4, the BGe score and the Gaussian BIC score differ, which indicates that the prior knowledge may introduce bias; hence the prior should be specified prudently when using WITHTEAR.
## VI Conclusions
This paper proposed WITHTEAR to address the problems that arise when applying NOTEAR methods to learn DAG structures under the deep learning framework. Firstly, the reason why infeasible solutions arise from the acyclicity constraints adopted in the NOTEAR methods when they are solved by the gradient descent method was analyzed. After that, the principle behind the truncation used in the post-processing operation was stated from the perspective of least-squares perturbation analysis. Based on this principle, MILP models were formulated that consider the tearing cost and prior knowledge simultaneously in forming the final DAG. Finally, two case studies were carried out to demonstrate the effectiveness of the WITHTEAR method compared with the NOTEAR method, using the DAG-GNN model as the baseline. Future work may concentrate on the
Fig. 4: The adjacent matrix \(A\) (a), DAG-GNN; (b), WITHTEAR (P3); (c), WITHTEAR (P4)
Fig. 3: The prior knowledge matrix \(P\).
Fig. 2: The flowsheet of Tennessee-Eastman process.
hyper-parameter used before solving the MILP problem, in order to reduce the burden of the loop-detection algorithm, or on tearing methods that act directly on the adjacency matrix.
## Appendix A The derivation of Matrix Derivative
The derivation of the matrix derivatives in Section III is stated as follows. First, considering Eq. (1) as the constraint, define \(\phi\) as shown in Eq. (A.1):
\[\phi=Tr(e^{A\odot A})=Tr(I+\sum_{k=1}^{\infty}\frac{\left(A\odot A\right)^{k} }{k!})\] (A.1)
Then, the matrix derivative of the trace function [26, 27] can be given as Eq. (A.2):
\[d\phi =\sum_{k=0}^{\infty}\frac{\left[\left(A\odot A\right)^{k}\right]^ {T}}{k!}:d(A\odot A)\] \[=\sum_{k=0}^{\infty}\frac{\left[\left(A\odot A\right)^{k}\right]^ {T}}{k!}:(dA\odot A+A\odot dA)\] \[=\sum_{k=0}^{\infty}\frac{\left[\left(A\odot A\right)^{k}\right]^ {T}}{k!}:2A\odot dA\] (A.2)
where : denotes the Frobenius (trace) inner product. The Hadamard product and the Frobenius product can then be commuted, and Eq. (A.3) can be derived:
\[d\phi=\sum_{k=0}^{\infty}\frac{\left[\left(A\odot A\right)^{k}\right]^{T}}{k! }\odot 2A:dA\] (A.3)
Hence, the derivative of Eq. (1) with respect to the matrix \(A\) is shown in Eq. (A.4):
\[\frac{\partial\phi}{\partial A}=2A\odot\sum_{k=0}^{\infty}\frac{\left[\left(A \odot A\right)^{k}\right]^{T}}{k!}=2A\odot(e^{A\odot A})^{T}\] (A.4)
Similarly, for the matrix derivative of Eq. (2), define \(\zeta\) as shown in Eq. (A.5):
\[\zeta =Tr[(I+\gamma A\odot A)^{d}]\] \[=Tr[\sum_{k=1}^{d}C_{d}^{k}(I)^{d-k}(\gamma A\odot A)^{k}]\] \[=Tr[\sum_{k=1}^{d}C_{d}^{k}(\gamma A\odot A)^{k}]\] (A.5)
Then, Eq. (A.6) can be derived:
\[d\zeta =\sum_{k=1}^{d}C_{d}^{k}k\Big{[}(\gamma A\odot A)^{k-1}\Big{]}^{T}:d(A \odot A)\] \[=\sum_{k=1}^{d}C_{d}^{k}k\Big{[}(\gamma A\odot A)^{k-1}\Big{]}^{T}:2 A\odot dA\] (A.6)
Commuting the Hadamard product and the Frobenius product yields Eq. (A.7):
\[d\zeta=2A\odot\sum_{k=1}^{d}C_{d}^{k}k\Big{[}(\gamma A\odot A)^{k-1}\Big{]}^{ T}:dA\] (A.7)
Finally, the matrix derivative of Eq. (2) can be given as follows:
\[\frac{\partial\zeta}{\partial A}=2A\odot\sum_{k=1}^{d}C_{d}^{k}k\Big{[}( \gamma A\odot A)^{k-1}\Big{]}^{T}\] (A.8)
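The derivations above are easy to sanity-check numerically; the snippet below compares the analytic gradient of Eq. (A.4) with automatic differentiation (the matrix size and scaling are arbitrary).

```python
import torch

d = 6
A = (0.3 * torch.randn(d, d, dtype=torch.double)).requires_grad_(True)

phi = torch.trace(torch.matrix_exp(A * A))            # phi = tr(exp(A ⊙ A)), Eq. (A.1)
(grad_auto,) = torch.autograd.grad(phi, A)

grad_analytic = 2 * A * torch.matrix_exp(A * A).T     # Eq. (A.4)
print(torch.allclose(grad_auto, grad_analytic, rtol=1e-6))  # expected: True
```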
## Appendix B The DAG-GNN model
In Appendix B, the DAG-GNN model will be introduced. The architecture of DAG-GNN model is shown in Fig. B.1.
The expression of encoder and decoder are given in Eq. (B.1) and (B.2) [20], respectively:
\[[M_{Z}|\log S_{Z}]=(I-A^{T})MLP(X)\] (B.1)
\[[M_{X}|\log S_{X}]=MLP((I-A^{T})^{-1}Z)\] (B.2)
where the \(M\) and \(S\) are mean and variance respectively. The _MLP_ is the multi-layer perceptron and defined as Eq. (B.3). The \(W_{1}\), \(W_{2}\), \(b_{1}\), and \(b_{2}\) in Eq. (B.3) are parameters to be learnt via gradient descent.
\[MLP(X)=\mathrm{ReLU}(XW_{1}^{T}+b_{1})W_{2}^{T}+b_{2}\] (B.3)
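A minimal PyTorch sketch of Eqs. (B.1)-(B.3) is given below; the tensor shapes, hidden sizes, and the use of a linear solve for \((I-A^{T})^{-1}Z\) are schematic assumptions rather than the original implementation.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    # Eq. (B.3): MLP(X) = ReLU(X W1^T + b1) W2^T + b2
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.l1, self.l2 = nn.Linear(d_in, d_hidden), nn.Linear(d_hidden, d_out)

    def forward(self, x):
        return self.l2(torch.relu(self.l1(x)))

def encode(X, A, mlp_enc):
    # Eq. (B.1): [M_Z | log S_Z] = (I - A^T) MLP(X)
    I = torch.eye(A.shape[0], dtype=A.dtype)
    return ((I - A.T) @ mlp_enc(X)).chunk(2, dim=-1)

def decode(Z, A, mlp_dec):
    # Eq. (B.2): [M_X | log S_X] = MLP((I - A^T)^{-1} Z)
    I = torch.eye(A.shape[0], dtype=A.dtype)
    return mlp_dec(torch.linalg.solve(I - A.T, Z)).chunk(2, dim=-1)
```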
Therefore, the loss function can be given as Eq. (B.4) where the \(D_{KL}\) is the Kullback-Leibler divergence, and the \(p(Z)\) is the prior distribution.
\[Loss=E_{q(Z|X)}[\log p(X|Z)]+D_{KL}[q(Z|X)||p(Z)]\] (B.4)
The RHS of Eq. (B.4) can be expanded as follows, where \(m\) is the encoder dimension, \(d\) is the number of variables, \(L\) is the number of Monte Carlo samples, and \(c\) is a constant:
\[D_{KL}[q(Z|X)||p(Z)]\] \[=\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{d}\left(S_{Z}\right)_{i,j}^{2 }+(M_{Z})_{i,j}^{2}-2\log\left(S_{Z}\right)_{i,j}-1\] (B.5)
\[E_{q(Z|X)}[\log p(X|Z)]\] \[\approx-\frac{1}{L}\sum_{l=1}^{L}\sum_{i=1}^{m}\sum_{j=1}^{d} \frac{\left(X_{i,j}-\left(M_{X}^{(l)}\right)_{i,j}\right)^{2}}{2(S_{X}^{(l)})_{ i,j}^{2}}\] \[-\frac{1}{L}\sum_{l=1}^{L}\sum_{i=1}^{m}\sum_{j=1}^{d}\log\left(S_ {X}^{(l)}\right)_{i,j}-c\] (B.6)
Fig. B.1: The architecture of DAG-GNN model
To guarantee the acyclicity of \(A\), the constraint in Eq. (2) is adopted and the optimization problem can be formulated as below:
\[\min_{A}\ Loss =E_{q(Z|X)}[\log p(X|Z)]+D_{KL}[q(Z|X)||p(Z)]\] \[s.t. h(A)=Tr[\left(I+\gamma A\odot A\right)^{d}]-d=0\]
## Acknowledgment
The first author would like to thank Wenjian Du at Sun Yatsen University and Wenshen Zhao at University of Chinese Academy of Sciences for reviewing the mathematical derivation. He would also like to thank Le Yao and Hao Wang at Zhejiang University for their help with the discussion of the logic of the research.
|
2305.09092 | ProtoVAE: Prototypical Networks for Unsupervised Disentanglement | Generative modeling and self-supervised learning have in recent years made
great strides towards learning from data in a completely unsupervised way.
There is still however an open area of investigation into guiding a neural
network to encode the data into representations that are interpretable or
explainable. The problem of unsupervised disentanglement is of particular
importance as it proposes to discover the different latent factors of variation
or semantic concepts from the data alone, without labeled examples, and encode
them into structurally disjoint latent representations. Without additional
constraints or inductive biases placed in the network, a generative model may
learn the data distribution and encode the factors, but not necessarily in a
disentangled way. Here, we introduce a novel deep generative VAE-based model,
ProtoVAE, that leverages a deep metric learning Prototypical network trained
using self-supervision to impose these constraints. The prototypical network
constrains the mapping of the representation space to data space to ensure that
controlled changes in the representation space are mapped to changes in the
factors of variations in the data space. Our model is completely unsupervised
and requires no a priori knowledge of the dataset, including the number of
factors. We evaluate our proposed model on the benchmark dSprites, 3DShapes,
and MPI3D disentanglement datasets, showing state of the art results against
previous methods via qualitative traversals in the latent space, as well as
quantitative disentanglement metrics. We further qualitatively demonstrate the
effectiveness of our model on the real-world CelebA dataset. | Vaishnavi Patil, Matthew Evanusa, Joseph JaJa | 2023-05-16T01:29:26Z | http://arxiv.org/abs/2305.09092v1 | # ProtoVAE: Prototypical Networks for Unsupervised Disentanglement
###### Abstract
Generative modeling and self-supervised learning have in recent years made great strides towards learning from data in a completely unsupervised way. There is still however an open area of investigation into guiding a neural network to encode the data into representations that are interpretable or explainable. The problem of unsupervised disentanglement is of particular importance as it proposes to discover the different latent factors of variation or semantic concepts from the data alone, without labeled examples, and encode them into structurally disjoint latent representations. Without additional constraints or inductive biases placed in the network, a generative model may learn the data distribution and encode the factors, but not necessarily in a disentangled way. Here, we introduce a novel deep generative VAE-based model, ProtoVAE, that leverages a deep metric learning Prototypical network trained using self-supervision to impose these constraints. The prototypical network constrains the mapping of the representation space to data space to ensure that controlled changes in the representation space are mapped to changes in the factors of variations in the data space. Our model is completely unsupervised and requires no a priori knowledge of the dataset, including the number of factors. We evaluate our proposed model on the benchmark dSprites, 3DShapes, and MPI3D disentanglement datasets, showing state of the art results against previous methods via qualitative traversals in the latent space, as well as quantitative disentanglement metrics. We further qualitatively demonstrate the effectiveness of our model on the real-world CelebA dataset.
## 1 Introduction
One theory of the success of deep learning models for supervised learning revolves around their ability to learn mappings from the input space to a lower dimensional abstract representation space which are best predictive of the corresponding labels [31]. However, for the models to be robust to noise and adversarial examples, be transferable to different domains and distributions and interpretable, we need to impose additional constraints on the learning paradigm. As a promising solution to this, the models can be encouraged to focus on _all_ the latent "distinctive properties" of the data distribution and encode them into a representation for downstream supervised tasks. These latent distinctive properties or _factors of variations_ are the interpretable abstract concepts that describe the data. The intuitive notion of _disentanglement_, first proposed in [1], proposes to discover all the different factors of variations from the data, and encode each factor in a separate subspace or dimension of the learned latent representation. These disentangled representations are not only interpretable and give valuable insights into the data distribution but are also more robust for multiple downstream tasks [1, 28] which might depend only on a subset of factors [29].
The problem of learning these disentangled representations in a completely _unsupervised_ way is particularly challenging as we do not have access to the ground truth labels of factors nor are privy to the true number of factors or their nature. Recent works have proposed to solve this problem by training generative networks to effectively model the data distribution and in turn the factors of variations. From this generative perspective of disentanglement, higher dimensional data is assumed to be a non-linear mapping of these factors of variation, where each factor assumes different values to generate specific examples in the data distribution. [23] intuitively characterizes representations which encode the factors as _disentangled_ if a change in a single underlying factor of variation in the data produces a change in a single factor of the learned representation (or a change in the subspace of the representation that encodes that factor). Conversely, from the generative perspective, for a representation to be disentangled, a change in a single subspace of the learned representation, when mapped to the data space, must produce a change in a single factor of variation.
For this generative mapping from changes in the representation space to changes in the factors of variations (in the data space) to be injective, we propose constraints on the changes in the factors of variations for pre-determined changes in the representation space. Each separate subspace of the representation, when changed, must map to a change in a _unique_ factor of variation, which in turn encourages information about the different factors to be encoded in separate subspaces of the representation. Moreover, each separate subspace must _consistently_ map to a change in a single factor throughout the subspace range. This encourages the different subspaces of the representation to encode information only about a single factor of variation. The recent work of [13] also demonstrated empirically that the concept of _local isometry_ was a good inductive bias for unsupervised disentanglement, and it can aid generative models in discovering a "natural" decomposition of data into factors of variation. This local isometry constraint on the mapping enforces the changes in the data space to be proportional to any changes made in the representation space. In order to effectively impose the above constraints in an unsupervised manner, we turn towards deep metric learning.
In recent years, metric learning has emerged as a powerful unsupervised learning paradigm for deep neural networks, in conjunction with self-supervised data augmentation. One of the more successful metric learning models, Prototypical Networks, projects the data into a new metric space where examples from the same class cluster around a prototype representation of the class and away from the prototypes of other classes. We use this ability of the network to cluster the different changes in the data space mapped by the corresponding changes in the representation space and thereby enforce the above described constraints.
We develop a novel deep generative model, ProtoVAE, consisting of a Prototypical Network and Variational Autoencoder network (VAE). The VAE acts as the generative component, while the Prototypical Network guides the VAE in separating out the representation space by imposing the constraints for disentanglement.
To learn these representations in an unsupervised way, given that the prototypical network needs labeled data for clustering, we train the prototypical network using generated self-supervised datasets. To produce the self-supervised dataset, we perform _interventions_ in the representation space, which change individual elements of the latent space and map the intervened representations to the data space. Owing to the self-supervised training, our model is able to disentangle without any explicit prior knowledge of the data, _including_ the number of desired factors.
In this work, our core contributions are:
* We design a self-supervised data generation mechanism using a VAE that creates new samples via a process of intervention to train a metric-learning prototypical network.
* We design and implement a novel model, ProtoVAE, which combines a VAE and prototypical network to perform disentanglement without any prior knowledge of the underlying data.
* We empirically evaluate ProtoVAE on standard benchmark DSprites, 3DShapes, MPI3D, and CelebA datasets, showing state of the art results.
## 2 ProtoVAE
Our proposed model consists of a VAE [27, 17] as the base generative model (Section 2.1). The VAE consists of an inference network which encodes the data into lower dimensional latent representations and a generator network that maps the representations back into the data space. To implicitly encourage the inference network to encode disentangled representations, we impose constraints on the generative mapping from changes in the representation space to changes in the factors of variations in the data space. This generative mapping is determined by both the generator and the inference networks. To generate self-supervised data for the prototypical network, we perform interventions (Sec 2.2) which changes individual dimensions of the representation. Given a batch of latent representations encoded by the inference network, we first intervene on a dimension of the representation by changing its value to the value of another representation from the batch for the same dimension. The original representations and the intervened representations are then mapped into the data space by the generator network and concatenated to form a pair of original data and generated data from interventions. Given that the original and the intervened representations differ in a single dimension, the generative mapping should be constrained to ensure that the corresponding pair of original and generated data differs only in a single factor of variation.
This constraint is enforced using a Prototypical network (Appendix A.2), which is based on the idea that there exists an embedding in which examples from the same class cluster around a prototype representation for that class. Our proposed prototypical network (Section 2.3) takes as input pairs of data generated by the self-supervised process described above, and maps these pairs of data into a metric space in which pairs generated by intervening on the same dimension cluster together. These clusters, which are identifiable with intervening dimensions, in turn become identifiable with the factors of variation that differ in value between the pair when a dimension is intervened upon. We further augment the prototypical network with a separate output head that enforces local isometry, by predicting the difference in the value of the intervened dimensions from the pair in the data space. Fig. 1 gives a diagram overview of the complete model.
Lastly, for the intervened representations to be mapped
into the data space such that only a _factor of variation_ is changed, we constrain the generated data to lie in the true data distribution. This constraint can be effectively enforced in the representation space by minimizing the distance between the distribution of the original representations of the inference network and the intervened representations such that the generator network maps both the distributions to the true data distribution. We do so by training a discriminator network (Section 2.4) in the representation space to distinguish between the original and the intervened representations. The inference network which generates the original representations is then trained to fool the discriminator thus effectively bridging the distance between the distributions.
### Variational Autoencoder
The base generative model consists of an inference network \(q_{\phi}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{d}\) that encodes the data \(x\) to a lower dimensional representation \(z\) and a generator network \(p_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{D}\) which reconstructs the data \(\hat{x}\) from the representations \(z\). The inference and the generator network are trained together to maximize the evidence lower bound (ELBO) of the data log-likelihood as in eq. 1.
\[\max_{\theta,\phi}\mathcal{L}_{V}(\theta,\phi)=\mathbb{E}_{q_{\phi}(z|x)}[ \log p_{\theta}(x|z)]-\text{KL}(q_{\phi}(z|x)||p(z)) \tag{1}\]
Maximizing the first term of eq. 1 ensures that the latent representation encodes all the information needed to faithfully reconstruct the data from the representation alone. This ensures that the representations encode all the different factors of variations in the data. The KL divergence term creates an information bottleneck which enforces optimal, compact encoding of the data by enforcing the posterior distribution to be similar to the independent, non-informative prior distribution. For more details please refer to Appendix A.1.
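To make the objective concrete, the sketch below is a minimal PyTorch-style implementation of the negative ELBO of eq. 1 for a diagonal-Gaussian posterior and a standard-normal prior. The fully-connected layers, Bernoulli decoder and layer sizes are illustrative assumptions of ours; the actual ProtoVAE architecture and hyperparameters are listed in Appendix B.2 and B.3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE: q_phi(z|x) is a diagonal Gaussian, p(z) = N(0, I)."""
    def __init__(self, x_dim=4096, z_dim=10, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar

def neg_elbo(x, x_logits, mu, logvar):
    # Reconstruction term: E_q[log p(x|z)] with a Bernoulli decoder.
    rec = F.binary_cross_entropy_with_logits(x_logits, x, reduction='sum')
    # KL(q(z|x) || N(0, I)) in closed form, summed over latent dimensions.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl  # minimizing this maximizes the ELBO of eq. 1
```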
### Self-Supervised Data Generation
The prototypical network works to cluster changes in the factors of variations in the data space and provides gradients to the generator and inference network to better separate out the factors in the representation space. To do so, the prototypical network requires a set of supervised examples, called the support set, from each class to compute a prototype around which examples from the same class cluster. Furthermore, a supervised query set is required to compute the distance of query examples from the target prototypes and the subsequent loss is used to update the network. For learning disentangled representations in an _unsupervised_ way, we propose to generate these support and query sets using self-supervision. We describe the full algorithm in Appendix B.1.
Given a batch of data \(x\), we first use the inference network to encode the data into the representation space \(z\in\mathbb{R}^{d}\). Following [29], we define _interventions_ as the act of changing the value of a single dimension of the representation \(k\in_{R}[d]\) while keeping the values of the other dimensions the same. We change the value of the intervened dimension to another example's value for the same dimension. The result of intervening on representation \(z\) in dimension \(k\) produces the intervened representation \(\hat{z}_{k}\). The representation and the intervened representation are then mapped by the generator network to \(\hat{x}\) and \(\hat{x}_{k}\) respectively. This pair of \((\hat{x},\hat{x}_{k})\) forms the input data for the prototypical network with the intervened dimension \(k\) being the label and the difference in the representation space \(|z-\hat{z}_{k}|\) as the label for the isometry head. The representation \(z\) is intervened upon every dimension to generate \(d\) support sets and is also intervened upon one dimension chosen uniformly from \([d]\) to generate the query set. The self-supervised data generation algorithm takes in a batch of data \(x\) and outputs \(d\) support sets \(S=\{S_{1},\cdots,S_{d}\}\), one query set \(Q\), labels for the query set \(L\) and labels for training the isometric head \(I\).
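The sketch below illustrates one way the intervention step could be implemented. The helper names, tensor shapes, and the choice to draw replacement values from a random permutation of the batch are our own assumptions, not the exact released algorithm (see Appendix B.1 for the full procedure).

```python
import torch

def intervene(z, k):
    """Swap dimension k of each code with the value of another example from the
    batch, keeping all other dimensions fixed (an 'intervention' on dim k)."""
    z_hat = z.clone()
    z_hat[:, k] = z[torch.randperm(z.size(0)), k]
    return z_hat

def build_episode(z, generator):
    """From a batch of codes z (B, d), build d support sets (one per intervened
    dimension), one query set for a random dimension, the query labels L, and
    the isometry targets I = |z - z_hat_k|."""
    d = z.size(1)
    x_rec = generator(z)
    support = []
    for k in range(d):
        # Each support element is a pair (reconstruction, intervened reconstruction).
        support.append(torch.cat([x_rec, generator(intervene(z, k))], dim=1))
    k_q = int(torch.randint(d, (1,)))            # query: one random dimension
    z_q = intervene(z, k_q)
    query = torch.cat([x_rec, generator(z_q)], dim=1)
    labels = torch.full((z.size(0),), k_q, dtype=torch.long)
    iso_targets = (z - z_q).abs()
    return support, query, labels, iso_targets
```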
### Prototypical Network
Our proposed prototypical network maps a pair of data generated by intervening on a single dimension of the representation to a lower dimensional metric space. By mapping
Figure 1: Architecture of our model consisting of a VAE, a discriminator and a Prototypical network. The representation \(z\) from the inference network of the VAE (Sec 2.1), is changed at a particular dimension \(k\) to get the intervened representation \(\hat{z}_{k}\). A discriminator (Sec 2.4) is trained to distinguish between \(z\) and \(\hat{z}_{k}\) and the inference network is updated to fool the discriminator. \(z\) and \(\hat{z}_{k}\) are passed to the generator network to map it to the reconstructed data \(\hat{x}\) and the intervened data \(\hat{x}_{k}\). The original and the intervened data are concatenated to form the pair \((\hat{x},\hat{x}_{k})\), which is then passed to the prototypical network. The prototypical network (Sec 2.3) maps the pair closer to other pairs with the same dimension intervened. The prototypical network is updated by it’s ability to correctly predict the intervened dimension of the query examples and the magnitude of the change \(\|z-\hat{z}_{k}\|\).
a pair of data to the metric space, the prototypical network can focus on the factor differing in value between the pair while being invariant to the values of the other non-differing factors. Critically, the factor differing in value remains the same across pairs of different examples when the same dimension is intervened upon and hence should be mapped closer in the metric space. Thus, comparing a pair of data allows the prototypical network to focus on the _change_ or _difference_ that was brought about by the intervened dimension, and makes this change the central focus of the losses.
The prototypical network first takes in elements of the generated support set \(S\) (described in Section 2.2) and computes an \(m\)-dimensional representation through the embedding function \(f_{\gamma}:\mathbb{R}^{D}\times\mathbb{R}^{D}\rightarrow\mathbb{R}^{m}\). In this \(m\)-dimensional space, the prototypical network computes a prototype embedding \(c_{k}\) for each element in the support set \(S_{k}\in S\) using eq. 2:
\[c_{k}=\frac{1}{|S_{k}|}\sum_{s_{k}^{(i)}\in S_{k}}f_{\gamma}(s_{k}^{(i)}) \tag{2}\]
While the support set is used to compute the prototypes, the query set is used to compute the loss by calculating the distance of its embeddings in the metric space to the target prototypes. For each dimension of the representation to encode information about a _unique_ factor of variation, each dimension when intervened upon and mapped to the data space must change a different factor of variation. Thus embeddings of pairs of data generated with the same intervening dimension of the representation must cluster closer in the metric space and away from the clusters of other dimensions. To enforce this, we introduce the _uniqueness_ loss which is computed for each query \(q^{(i)}\) example by calculating the negative log-likelihood of the true class \(l\) as in eq. 3:
\[\begin{split}&\min_{\gamma,\phi,\theta}\mathcal{L}_{U}(\gamma, \phi,\theta)=\\ &-\frac{1}{|Q|}\sum_{q^{(i)}\in Q}\log p_{\gamma}(t=l|q^{(i)}) \cdot\text{KL}(q_{\phi}(z_{l}|x)||p(z))\end{split} \tag{3}\]
where the probability of each class \(p_{\gamma}(t=l|q^{(i)})\) is calculated as a distribution over the Euclidean distance \(d\) to the prototypes as in eq. 4.
\[p_{\gamma}(t=l|q^{(i)})=\frac{\exp{(-d(f_{\gamma}(q^{(i)}),c_{l}))}}{\sum_{k^{ \prime}}\exp{(-d(f_{\gamma}(q^{(i)}),c_{k^{\prime}}))}} \tag{4}\]
The loss for every intervening dimension of the query examples is multiplied by the KL-divergence of that dimension, averaged for the batch of examples. This ensures that the loss for the intervening dimensions is scaled by the amount of information encoded by that dimension. For the dimensions that do not encode any information, and hence do not change any factor of variation upon intervention, the corresponding loss is scaled by zero. This is important as we do not need any prior assumption on how many dimensions of the representation are needed to encode all the factors; the VAE can find the right number on its own.
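A minimal sketch of eqs. 2-4, assuming the prototypical embeddings of the support and query pairs have already been computed; the KL-weighting of eq. 3 is applied per intervened dimension. Function and argument names are ours.

```python
import torch
import torch.nn.functional as F

def prototypes(support_emb):
    """support_emb: list of d tensors, each (N, m), holding the embeddings
    f_gamma of the pairs in support set S_k. Returns the prototypes c_k (eq. 2)."""
    return torch.stack([e.mean(dim=0) for e in support_emb])        # (d, m)

def uniqueness_loss(query_emb, labels, protos, kl_per_dim):
    """Eqs. 3-4: softmax over negative Euclidean distances to the prototypes,
    negative log-likelihood of the intervened dimension, scaled by its KL term."""
    dists = torch.cdist(query_emb, protos)       # (Q, d) Euclidean distances
    log_p = F.log_softmax(-dists, dim=1)         # eq. 4
    nll = F.nll_loss(log_p, labels, reduction='none')
    return (nll * kl_per_dim[labels]).mean()     # KL-weighted, eq. 3
```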
In addition to the uniqueness loss, we want each dimension to consistently encode only a single factor of variation. When the representation \(z\) is first intervened on dimension \(k\) and mapped to the data space it makes a certain change in factor. When the representation is intervened again at dimension \(k\) by a different amount and mapped to the data space it should produce a change in the same factor, irrespective of the amount it was changed. When passed in to the prototypical network, the pair of data generated by the original \(z\) and intervened \(\hat{z}_{k}\) must be embedded in the prototypical metric space closer to the pair generated by the representation intervened in the same dimension by a different amount. To enforce this we introduce the _consistency_ loss in eq. 5 where the prototypes are replaced by the embeddings of the same example in the support set.
\[\begin{split}&\min_{\gamma,\phi,\theta}\mathcal{L}_{C}(\gamma, \phi,\theta)=\\ &-\frac{1}{|Q|}\sum_{q^{(i)}\in Q}\log r_{\gamma}(t=l|q^{(i)}) \cdot KL(q_{\phi}(z_{l}|x)||p(z))\end{split} \tag{5}\]
With \(r_{\gamma}\) calculated as follows:
\[r_{\gamma}(t=l|q^{(i)})=\frac{\exp{(-d(f_{\gamma}(q^{(i)}),f_{\gamma}(s_{l}^{( i)})))}}{\sum_{k^{\prime}}\exp{(-d(f_{\gamma}(q^{(i)}),f_{\gamma}(s_{k^{\prime}} ^{(i)}))})} \tag{6}\]
The consistency loss and the uniqueness loss are added together to get a combined prototypical loss eq. 7
\[L_{P}(\gamma,\phi,\theta)=L_{C}+L_{U} \tag{7}\]
As an additional inductive bias, as proposed in [13], we constrain the generative mapping between original and intervened representation \((z,\hat{z}_{k})\) and the generated pair \((\hat{x},\hat{x}_{k})\) to be locally isometric [7]. Thus the factor changed in \(\hat{x}_{k}\) when compared with \(\hat{x}\) must differ in value proportional to the corresponding change in dimension \(k\) of \(z\) and the intervened \(\hat{z}_{k}\). This serves as an imperative inductive bias for unsupervised disentanglement.
The additional head \(h_{\psi}:\mathbb{R}^{D}\times\mathbb{R}^{D}\rightarrow\mathbb{R}^{d}\), when given a pair of data, is trained to predict the difference in the values for the all the dimensions of \(z\) and \(\hat{z}_{k}\) through the loss function in eq. 8.
\[\min_{\psi,\theta,\phi}\mathcal{L}_{I}(\psi,\theta,\phi)=\|h_{\psi}((\hat{x}, \hat{x}_{k}))-|z-\hat{z}_{k}|\|^{2} \tag{8}\]
The training data for the isometry head is generated in a self-supervised manner as described in Section 2.2, where the support set \(S\) consists of data pairs and the set \(I\) consists of the corresponding targets. In the final implementation, \(f_{\gamma}\) and \(h_{\psi}\) share all hidden convolutional layers and differ only in the final fully connected layer.
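The consistency loss of eqs. 5-6 and the isometry loss of eq. 8 can be sketched in the same style. Here the per-query support embeddings are assumed to be arranged as a (Q, d, m) tensor, which is an implementation choice of ours rather than the released code.

```python
import torch
import torch.nn.functional as F

def consistency_loss(query_emb, support_emb_same_example, labels, kl_per_dim):
    """Eqs. 5-6: like the uniqueness loss, but each query is compared with the
    embeddings of the *same example* intervened on every dimension, rather than
    with the class prototypes. support_emb_same_example: (Q, d, m)."""
    dists = torch.norm(query_emb.unsqueeze(1) - support_emb_same_example, dim=2)  # (Q, d)
    log_r = F.log_softmax(-dists, dim=1)
    nll = F.nll_loss(log_r, labels, reduction='none')
    return (nll * kl_per_dim[labels]).mean()

def isometry_loss(pred_delta, z, z_hat):
    """Eq. 8: the auxiliary head h_psi predicts |z - z_hat| from the data pair."""
    return F.mse_loss(pred_delta, (z - z_hat).abs())
```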
### Representation Space Discriminator
To restrict the realm of "changes" in the data space made by intervened representations to only the factors of variation present in the dataset, we regularize the intervened representations to map to data in the true data distribution. By minimizing the reconstruction loss in eq. 1, the decoder learns to map the latent representations learned by the inference network \(z\sim q_{\phi}(z)\) to the true data distribution \(q(x)\). To enforce the decoder to also map the intervened representations \(\hat{z}\sim q(\hat{z})\) to the true data distribution, we propose to minimize the distance between the distribution of the true representations \(q_{\phi}(z)\) and that of the representations after intervention \(q(\hat{z})\) by minimizing the KL divergence between the distributions \(\text{KL}(q_{\phi}(z)||q(\hat{z}))\). To this effect, we introduce a discriminator in the representation space, as proposed in [16], which is trained to distinguish samples from the two distributions by minimizing the loss in eq. 9.
\[\min_{w}\mathcal{L}_{D}(w)=-[\mathbb{E}_{\hat{z}}[\log(D_{w}(\hat{z}))]+\mathbb{E}_{z}[\log(1-D_{w}(z))]] \tag{9}\]
The inference network, which generates the representations \(z\sim q_{\phi}(z)\), is regularized to fool this discriminator by minimizing the loss in eq. 10. This encourages the inference network to encode data into representations whose individual dimensions can be intervened on and mapped to the same data distribution.
\[\min_{\phi}\mathcal{L}_{E}(\phi)=\mathbb{E}_{z}[\log(1-D_{w}(z))] \tag{10}\]
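A sketch of the adversarial regularizer of eqs. 9-10. The discriminator architecture and latent dimensionality are placeholders, and the encoder term uses the standard non-saturating surrogate for eq. 10.

```python
import torch
import torch.nn as nn

latent_dim = 10                                   # placeholder size
disc = nn.Sequential(nn.Linear(latent_dim, 256), nn.LeakyReLU(0.2),
                     nn.Linear(256, 1))           # outputs the logit of D_w(.)
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(z, z_hat):
    """Eq. 9: D_w learns to tell original codes z from intervened codes z_hat."""
    ones = torch.ones(z_hat.size(0), 1)
    zeros = torch.zeros(z.size(0), 1)
    return bce(disc(z_hat.detach()), ones) + bce(disc(z.detach()), zeros)

def encoder_loss(z):
    """Eq. 10 (non-saturating form): the inference network is updated so that its
    codes z are scored like intervened codes, closing the gap between the two
    distributions."""
    return bce(disc(z), torch.ones(z.size(0), 1))
```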
The final objective (eq. 11) of our method is a weighted sum of the different losses, and is optimized by the network parameters of the VAE and the prototypical network corresponding to each loss.
\[\begin{split}\min_{\phi,\theta,\gamma,\psi}\mathcal{L}=- \mathcal{L}_{V}(\phi,\theta)+\alpha\mathcal{L}_{E}(\phi)&+\lambda \mathcal{L}_{P}(\gamma,\phi,\theta)\\ &+\kappa\mathcal{L}_{I}(\psi,\phi,\theta)\end{split} \tag{11}\]
## 3 Empirical Evaluation
To empirically evaluate our method, we perform both quantitative and qualitative evaluation on two synthetic datasets and one real dataset with known factors of variation, and qualitative evaluation on the CelebA dataset [22]. The two synthetic and one real datasets are generated from independent ground truth factors of variation: DSprites [24], binary 64 x 64 images with 5 factors of variation (3 shapes, 6 scales, 40 orientations, 32 x-positions and 32 y-positions); 3DShapes [2], 64 x 64 x 3 color images with 6 factors of variation (4 shapes, 8 scales, 15 orientations, 10 floor colors, 10 wall colors and 10 object colors); and MPI3D real [10], 64 x 64 x 3 color images with 7 factors of variation (6 colors, 6 shapes, 2 sizes, 3 camera heights, 3 background colors, 40 horizontal axis positions and 40 vertical axis positions). The fourth, real-world dataset is CelebA. The details of the architecture for the different components and the corresponding hyperparameters are listed in the supplementary material (Appendix B.2) and (Appendix B.3) respectively.
We qualitatively evaluate our model by intervening on the different dimensions of the learned representations and traversing the range of values of the dimension linearly in a fixed range \([-2,2]\). A model is better disentangled if the changes made in the data space while traversing a dimension are similar to the changes in a factor of variation in the data space.
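The qualitative traversals are produced by sweeping one latent dimension over a fixed range while holding the others constant, roughly as in the sketch below; the step count and decoder interface are our own choices.

```python
import torch

@torch.no_grad()
def traverse(decoder, z, dim, steps=8, lo=-2.0, hi=2.0):
    """Decode a sweep over one latent dimension while the others stay fixed,
    reproducing the qualitative traversals shown in the figures."""
    frames = []
    for v in torch.linspace(lo, hi, steps):
        z_t = z.clone()
        z_t[:, dim] = v
        frames.append(decoder(z_t))
    return torch.stack(frames)   # (steps, B, C, H, W)
```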
Figure 2: Output of the prototypical network with embedding dim \(m=2\) when the input is real pairs of data from the 3DShapes dataset which differ in a single factor of variation. Each color corresponds to a unique factor which differs in value amongst the pair. The network clusters the changes correctly on the pairs from the original dataset. This suggests that the prototypical network is clustering pairs of images based on the changed factor of variation. _Left_: \(\lambda=10\). _Right_: \(\lambda=5\).
Our model both finds the correct number of factors and encodes them separately without any specific hyperparameter tuning. From Figure 3 we can see that our method, ProtoVAE, produces disentangled traversals covering both the number of factors as well as the entirety of the range of the values for the DSprites and the 3DShapes dataset. The latent traversals on the MPI3D dataset can be found in Figure 4. Our method effectively separates the factors, thus disentangling the learned latent representation without compromising on the reconstruction quality, as seen from row 2. Owing to its unsupervised nature, our method struggles to exactly disentangle the non-isometric discrete factor of shape in the DSprites dataset. For the 3DShapes dataset, in our traversals in Figure 3, we achieved near-perfect disentanglement, completely unsupervised. In Appendix D, we show the performance of ProtoVAE for a subset of the DSprites dataset with only a few factors. We show traversals from the models FactorVAE and \(\beta-\)VAE, along with our model, with only a few isometric factors for comparison. Our proposed ProtoVAE is the only model that does not conflate two factors and encodes them in separate dimensions of the representation.
Furthermore, we quantitatively evaluate the learned representation by calculating state-of-the-art disentanglement metrics. We choose metrics from each of the three kinds of metrics described in [33]; Intervention-based FactorVAE [16], Predictor-based Disentanglement-Completeness-Informativeness (DCI) [8] and Information
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Datasets & \multicolumn{3}{c}{Dsprites} & \multicolumn{3}{c}{3DShapes} \\ \hline Model & FVAE & DCI & MIG & FVAE & DCI & MIG \\ \hline \(\beta\)-VAE & 0.51 \(\pm\).10 & 0.23 \(\pm\).10 & 0.15 \(\pm\).10 & 0.81 \(\pm\).10 & 0.44 \(\pm\).17 & 0.28 \(\pm\).18 \\ AnnVAE & 0.70 \(\pm\).10 & 0.28 \(\pm\).10 & 0.23 \(\pm\).10 & 0.84 \(\pm\).09 & 0.46 \(\pm\).16 & 0.31 \(\pm\).15 \\ \(\beta\)-TCVAE & 0.68 \(\pm\).10 & 0.35 \(\pm\).06 & 0.17 \(\pm\).09 & 0.88 \(\pm\).07 & 0.63 \(\pm\).10 & 0.40 \(\pm\).18 \\ FVAE & 0.74 \(\pm\).06 & 0.38 \(\pm\).10 & 0.28 \(\pm\).09 & 0.81 \(\pm\).06 & 0.47 \(\pm\).12 & 0.33 \(\pm\).14 \\ Gr-FVAE & **0.75 \(\pm\).08** & 0.41 \(\pm\).07 & 0.31 \(\pm\).06 & 0.79 \(\pm\).06 & 0.49 \(\pm\).06 & 0.43 \(\pm\).11 \\ \hline
**ProtoVAE** & 0.70 \(\pm\).06 & **0.51 \(\pm\).04** & **0.37 \(\pm\).09** & **0.90 \(\pm\).06** & **0.84 \(\pm\).07** & **0.71 \(\pm\).11** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Various disentanglement metrics evaluated across a number of state of the art methods for the DSprites and 3DShapes dataset. For all metrics, **higher is better.** The results for the other models are obtained using the hyperparameter settings and experimental conditions as described in [23]. The scores for all the models were averaged across ten runs with different random seeds, with standard deviation shown as \(\pm\). Gr-FVAE is the GroupifyVAE variant applied to the FactorVAE, as this is the closest variant of the GroupifyVAE to our model and the results for which are taken from [32]. The highest values in a column are written in bold. As we see, the ProtoVAE outperforms the state of the art on a majority of the metrics. ProtoVAE hyperparameters for DSprites and 3DShapes results shown are \(\{\alpha=10,\lambda=10,\kappa=10\}\) and \(\{\alpha=20,\lambda=20,\kappa=20\}\)
Figure 3: A comparison of latent traversals in latent space for the 3DShapes and Dsprites dataset. _Left:_ 3DShapes, _Right:_ Dsprites. ProtoVAE produces smooth, disentangled latent representations. Row 1 and 2 are some sample original images, and their reconstructions generated by our model, respectively. Rows 3 downward are the traversals for each latent element, as detailed below. For 3DShapes, we actually see a near-perfect traversal across all of the known factors of variation.
-based Mutual Information Gap (MIG) [5]. The metrics were implemented as proposed in [23] with the same hyperparameters. We refer the interested readers to [33] for an intuitive understanding of the metrics. We highlight here that our model achieves a higher DCI metric and a higher corresponding _completeness_ and _informativeness_ metric, which reflects the mode covering capabilities of the learned representation.
From Table 1 we see that on the DSprites dataset, our method outperforms the state of the art models in a majority of the metrics. Our model performs similarly well on the 3DShapes dataset, as seen in the same table. Also, the variance in the metrics across the different runs is significantly lower than that of the previous methods, thus ensuring a more robust way to disentangle representations. For the real disentanglement dataset of MPI3D [10], consisting of a camera taking photos of an object attached to a jointed arm, we see that our model consistently either matches or outperforms the state of the art. Among the baselines, the FactorVAE model is especially important: it is the base model upon which we add our contributions for ProtoVAE, and hence serves as a comparison demonstrating the effectiveness of those contributions.
On the CelebA dataset, we find that across multiple runs, our model is able to find the same "natural" decompositions that correspond to human-interpretable factors of variation consistently (Fig. 5). We notice that the model is not constrained to completely encode one factor per latent dimension and the model might encode different ranges of a factor in different latents; we see this occur for example when it encodes half of the azimuth in one latent, and half in another. However, as we can see, for the most part, each latent dimension contains information only about one factor of variation and even in the unsupervised regime our model still encodes natural decompositions.
We visualize the embedding space of a trained prototypical network using our method in Fig. 2. We see that when input to the prototypical network is pairs of images from the dataset, with one ground truth factor differing in value between the pair, the prototypical network effectively clusters the pairs based on the differing factor. This clustering aligns with the labels based on the intervened dimensions during training and thus points to the effectiveness of the prototypical network for encouraging disentanglement.
We also performed quantitative and qualitative ablation studies on the 3DShapes dataset by changing the values of \(\alpha\), \(\lambda\) and \(\kappa\) to understand the effectiveness of each of the components and losses we introduce. The results of these ablations can be found in the supplementary material (Appendix C). Furthermore, we also perform ablation studies on the effect of the dimension \(m\) of the metric space of the prototypical network on the metric scores. We also show in the Appendix (Section C) some limitation cases where the representations of the model did not axis-align with a few factors but were rotated with respect to those factors. We see that smaller values of the prototypical network metric space dimension \(m\) perform better by encoding data in tighter clusters, which in turn imposes stronger constraints on the VAE. The discriminator and the corresponding \(\mathcal{L}_{E}\) help in confining the encoding of the factors into a single dimension, whereas \(\mathcal{L}_{P}\) alone fails to do this effectively, as seen in Table 3.
## 4 Related Works
Many state-of-the-art unsupervised disentanglement methods extend the VAE objective function to impose additional constraints on the structure of the latent space to match the assumed independent factor distribution. \(\beta\)-VAE [12] and AnnealedVAE [3] heavily penalize the KL diver
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & FVAE & DCI & MIG \\ \hline \(\beta\)-VAE &.41 \(\pm\).05 &.23 \(\pm\).04 &.06 \(\pm\).03 \\ AnnVAE &.29 \(\pm\).04 &.12 \(\pm\).02 &.07 \(\pm\).07 \\ \(\beta\)-TCVAE &.45 \(\pm\).06 &.27 \(\pm\).03 &.16 \(\pm\).03 \\ FVAE &.40 \(\pm\).04 &.30 \(\pm\).03 & **.23 \(\pm\).03** \\ DisCo &.39 \(\pm\).07 &.29 \(\pm\).02 &.07 \(\pm\).03 \\ \hline
**ProtoVAE** & **.46 \(\pm\).04** & **.38 \(\pm\).05** & **.25 \(\pm\).11** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative comparative metrics on the MPI3D dataset. ProtoVAE performs comparably or better consistently across multiple metrics on a difficult real disentanglement dataset. See Fig. 4 for latent traversals on MPI3D. ProtoVAE hyperparameters for results shown are \(\{\alpha=10,\lambda=2,\kappa=2\}\). The numbers for DisCo have been borrowed from their paper [26] for the VAE-based methods.
Figure 4: Latent traversals on the MPI3D real world disentanglement dataset. The data is collected via a camera that observes a jointed arm with known changed ground truth factors of variation. From top to bottom: original data, reconstruction, arm angle left/right, arm angle top/bottom, background height, arm end color, size. The KL values represent the amount of information encoded by that dimension of the representation.
gence term, thus forcing the learned posterior distribution \(q_{\phi}(z|x)\) to be independent like the prior. Factor-VAE [16] and \(\beta\)-TCVAE [5] penalize the total correlation \(TC=KL(q(z)||\prod_{i=1}^{K}q(z_{i}))\) of the aggregated posterior, calculated as \(q_{\phi}(z)=\mathbb{E}_{p(x)}[q(z|x_{i})]=\frac{1}{N}\sum_{i=1}^{N}q_{\phi}(z|x_{i})\), using adversarial and statistical techniques respectively. DIP-VAE [18] forces the covariance matrix of the aggregated posterior \(q(z)\) to be close to the identity matrix by the method of moment matching. The changes that we described in the latent space are defined as interventions by [29] to study the robustness of the learned representations under the Independent Mechanisms (IM) [28] assumption. Most closely related to our work are VAE models that learn to disentangle by altering the latent code. In [15], the authors use a VAE or AE with a split double latent code and a cycle-consistent loss, but required that attribute labels be known _a priori_, which was also a requirement in [9], which learned by swapping out chunks of the latent code, and [30], which used the labels as a constraint to find unique disentanglement. The authors in [4] also used a cycle-consistent loss, but again required labeling. In [14] the authors attempted unsupervised disentanglement with regular (non-variational) Autoencoder network models stacked one after another; our model instead uses a prototypical neural network. In [19] the authors derive a novel Jacobian loss combined with a student-teacher iterative training algorithm with an Autoencoder network model. In [25] the authors develop a latent-manipulating model aimed at human-interactive image manipulation tasks.
In [32] the authors use the group-based definition by [11] and a cycle-consistency loss to define the elements of a group. Our work differs significantly as we do not re-encode the reconstructed data nor the generated data from interventions and instead use a prototypical network. [35] encode the latent space of a VAE using the commutative Lie group and enforce constraints on the latent space. A recent work [26] proposes to learn disentangled representations from pre-trained models using contrastive methods.
The most prominent work from the GAN family is InfoGAN [6] which learns disentangled, semantically meaningful representations by maximizing a lower bound on the intractable mutual information between the conditioning latent variables \(c\) and the generated samples \(G(z,c)\). InfoGAN-CR [20] and [34] add a contrastive regularizer to the InfoGAN model to further encourage disentanglement. [21] add orthogonal regularization to encourage independent representations. However, all the GAN methods suffer from the limitation that they require _a priori_ the number of factors to be discovered, in addition to the number of values for all the discrete factors. For fairness of comparison, we thus only compare against methods that do not require these priors.
## 5 Conclusion and Future Work
In this work, we proposed a novel generative model consisting of a VAE and a Prototypical Network for learning disentangled representations in a completely unsupervised way, inspired by the recent discovery of sufficient inductive biases. We impose constraints on the structure of the learned representations by training the model in a self-supervised manner to encode information about the different factors in separate dimensions of the representation. Our proposed method is able to outperform other state of the art networks on a number of metrics on three prominent disentanglement datasets. For future work, our method can be easily adapted to be trained in a weakly supervised regime, with pairs of data differing in a known number of factors serving as the prototypes for the prototypical network. The results can possibly be improved by intervening on multiple dimen
Figure 5: Latent traversals on the CelebA dataset. ProtoVAE successfully captures ground-truth factors of variation on real-world data. From top to bottom: background color, hairstyle, head angle, age, hairstyle, hair color, skin color, face profile.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Metric & \(\alpha=0\) & \(\alpha=10\) & \(\alpha=20\) \\ \hline FVAE &.93 \(\pm\).04 &.85 \(\pm\).03 &.88 \(\pm\).05 \\ DCI &.78 \(\pm\).05 &.81 \(\pm\).06 &.81 \(\pm\).07 \\ MIG &.59 \(\pm\).07 &.63 \(\pm\).04 &.65 \(\pm\).08 \\ \(\beta\)-VAE &.92 \(\pm\).06 &.88 \(\pm\).04 &.90 \(\pm\).05 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation results for different values of \(\alpha\), which shows that the discriminator helps in confining the encoding of the factors into a single dimension. This can be seen by a higher value of the \(\beta\) VAE and the FactorVAE metrics for \(\alpha=0\) but not for the MIG and the DCI metrics which require factors to be encoded in a single dimension.
sions of the representations simultaneously. The importance of methods that can disentangle data _without labels_ is critical as data is plentiful and the resulting representations give interpretable insights into the variations in the data distribution, and can be used for downstream tasks. Our hope is this work adds evidence that self-supervised generative methods are important in this endeavor.
|
2305.00908 | Estimation of the Impact of COVID-19 Pandemic Lockdowns on Breast Cancer
Deaths and Costs in Poland using Markovian Monte Carlo Simulation | This study examines the effect of COVID-19 pandemic and associated lockdowns
on access to crucial diagnostic procedures for breast cancer patients,
including screenings and treatments. To quantify the impact of the lockdowns on
patient outcomes and cost, the study employs a mathematical model of breast
cancer progression. The model includes ten different states that represent
various stages of health and disease, along with the four different stages of
cancer that can be diagnosed or undiagnosed. The study employs a natural
history stochastic model to simulate the progression of breast cancer in
patients. The model includes transition probabilities between states, estimated
using both literature and empirical data. The study utilized a Markov Chain
Monte Carlo simulation to model the natural history of each simulated patient
over a seven-year period from 2019 to 2025. The simulation was repeated 100
times to estimate the variance in outcome variables. The study found that the
COVID-19 pandemic and associated lockdowns caused a significant increase in
breast cancer costs, with an average rise of 172.5 million PLN (95% CI [82.4,
262.6]) and an additional 1005 breast cancer deaths (95% CI [426, 1584]) in
Poland during the simulated period. While these results are preliminary, they
highlight the potential harmful impact of lockdowns on breast cancer treatment
outcomes and costs. | Magdalena Dul, Michal K. Grzeszczyk, Ewelina Nojszewska, Arkadiusz Sitek | 2023-04-27T10:01:43Z | http://arxiv.org/abs/2305.00908v2 | Estimation of the Impact of COVID-19 Pandemic Lockdowns on Breast Cancer Deaths and Costs in Poland using Markovian Monte Carlo Simulation
###### Abstract
This study examines the effect of COVID-19 pandemic and associated lockdowns on access to crucial diagnostic procedures for breast cancer patients, including screenings and treatments. To quantify the impact of the lockdowns on patient outcomes and cost, the study employs a mathematical model of breast cancer progression. The model includes ten different states that represent various stages of health and disease, along with the four different stages of cancer that can be diagnosed or undiagnosed. The study employs a natural history stochastic model to simulate the progression of breast cancer in patients. The model includes transition probabilities between states, estimated using both literature and empirical data. The study utilized a Markov Chain Monte Carlo simulation to model the natural history of each simulated patient over a seven-year period from 2019 to 2025. The simulation was repeated 100 times to estimate the variance in outcome variables. The study found that the COVID-19 pandemic and associated lockdowns caused a significant increase in breast cancer costs, with an average rise of 172.5 million PLN (95% CI [82.4, 262.6]) and an additional 1005 breast cancer deaths (95% CI [426, 1584]) in Poland during the simulated period. While these results are preliminary, they highlight the potential harmful impact of lockdowns on breast cancer treatment outcomes and costs.
Keywords: Breast Cancer, Costs, Markov Model, COVID Lockdowns.
## 1 Introduction
The COVID-19 pandemic impacted the lives of people around the world. To slow down the spread of the disease, many countries introduced lockdown restrictions in the form of banning gatherings, limiting outdoor trips and canceling public events [23]. While lockdowns positively influenced the pandemic progression (decreased doubling time) [25] or even the environment [6], the negative impact on mental health [1], physical fitness [49], dietary habits [8] and other important aspects of
our lives are evident. In this work we analyze the effect of pandemic lockdowns on breast cancer care in Poland.
Breast cancer is the most frequent cause of cancer deaths among women [48] and is a high burden to public finance. There is an estimated 2.3 million women diagnosed with breast cancer and 685,000 deaths globally in 2020 [46]. The direct cause of breast cancer is unknown, but there exist a number of risk factors like obesity, late menopause or alcohol use [22]. Since there are few to no symptoms at the early stage of breast cancer, many countries introduced screening programs in the form of free mammography procedures to support the early detection of the disease [47]. COVID lockdowns resulted in restricted access to healthcare [19] which consequently reduced the number of diagnosed and treated breast cancer patients.
In this paper, we present a Markov Model-based approach to the Monte Carlo simulation of breast cancer disease progression. The nodes of the model are different states or cancer stages that the subject can be in at a specific point in time. The probabilities of transitions between states are computed based on the existing literature and empirical experiments. We use this method to conduct 100 repetitions of seven-year-long simulations on 1% of the total women population in Poland. In the simulation, we consider the direct costs (medicines, surgeries, chemotherapy), indirect costs (premature death, absenteeism, disability) of breast cancer and statistics of the number of subjects in all states. We conduct two types of experiments. First, we perform the simulation taking into consideration the impact of COVID lockdowns on the accessibility of public healthcare, screening programs and treatment. Secondly, we conduct the simulation as if there was no pandemic. We extrapolate results to the population of the entire country.
The main contributions of this paper are as follows:
1. We present a Markov Model-based simulation of the progression of breast cancer.
2. We analyze the impact of COVID lockdowns on mortality and healthcare costs using a comparison of simulations conducted on the population of Polish women with and without the simulated effect of pandemic.
3. We provide a publicly available code to simulate the progression of breast cancer: [https://github.com/SanoScience/BC-MM](https://github.com/SanoScience/BC-MM).
The rest of the paper is structured as follows. In Section 2, we describe the existing methods for the simulation of disease progression and present our approach to breast cancer modeling based on Markov Models in Section 3. We show the results of simulations with and without the effects of pandemic and discuss how COVID-19 impacted breast cancer patients and the costs of the disease in Section 4 and conclude in Section 5.
## 2 Related work
In this section, we describe works presented in the literature related to the investigation of the impact of the COVID-19 pandemic on healthcare, modeling the progression of diseases and analysis of disease costs in public finance.
### The impact of COVID-19 pandemic on healthcare
Since the beginning of the pandemic, researchers have been concerned about the possible, negative side effects of lockdowns [41]. Paltrinieri _et al._[33] reported that 35.1% lifestyle survey participants encountered worsening of physical activity during lockdowns. Similar concerns were presented by Tsoukos and Bogdanis [49] who described lower body fitness, poorer agility tests results and increased body mass index in adolescent students as the effects of a five-month lockdown. The negative impact of lockdowns does not end on the deterioration of physical fitness. Mental health is one of the factors that suffered the worst during the pandemic. Adams _et al._[1] discussed a significant decrease in self-reported mental health in the United States. The self-harm incidents due to stress related to COVID-19 were reported in India [40]. Cases of depression, anxiety and post-traumatic stress disorders were expected to rise in Sub-Saharan Africa [43].
During the pandemic, access to healthcare, especially related to the treatment of other diseases was limited [19]. Many global healthcare resources were reallocated to prevent and treat coronavirus infections [14]. The expected results of the depletion of healthcare resources were the increase of COVID-19 and all-cause mortality [38]. Additionally, more than 28 million surgeries were expected to be canceled in the UK due to lockdowns [15]. In most cases, those were operations for benign diseases, however, the effect cannot be neglected. Jiang _et. al_[21] described examples of the co-epidemics of COVID-19 and other infectious diseases and potential negative effects on the treatment of non-communicable and chronic diseases.
Concerns regarding the impact of COVID-19 on the treatment of diseases give a justified basis for analysing the influence of lockdowns on breast cancer prevalence and costs. As reported by Gathani _et al._[18], there was a steep decrease in the number of referrals for suspected breast cancer (28% lower) and breast cancer diagnosis (16% lower) in the UK in 2020. Yin _et al._[50] describe the decline in the usage of three services (breast imaging, breast surgery and genetics consultation) in the early stages of the pandemic. In [17], the pauses in screening programs that occurred in various countries were described, and disease modeling was mentioned as one of the possible approaches to analyse the repercussions of COVID-19 on breast cancer. In this paper, we analyse the impact of those radical changes in breast cancer diagnosis and treatment on the costs of breast cancer in public finance.
### Modelling progression of the disease
There are multiple methods for developing disease models for the purpose of conducting simulations [24]. One of the approaches to stochastic process modeling (like the progression of the chronic disease) and economic impact analysis is the utilization of Markov Modelling [11]. In such a graph model, nodes represent stages of the disease and edges the probabilities of moving from one state to another. For instance, Liu _et al._[26] presented the Continuous-Time Hidden Markov Model for Alzheimer's disease progression. In [37], a multi-state semi-Markov model was used to investigate the impact of type 2 diabetes on the co-occurrence of cardiovascular diseases.
Markov Model can be successfully utilized to conduct an analysis of breast cancer. Momenzadeh _et al._[29] used hidden Markov Model to predict the recurrence of breast cancer, while Pobiruchin _et al._[35] presented a method for Markov Model derivation out of real-world datasets (cancer registry's database). Breast cancer modeling was also used to investigate the decline in screening, delays in diagnosis and delays in treatment during COVID-19 pandemic in the USA [3]. Alagoz _et al._[3] developed three models representing each issue. The conducted simulation exposed that there is a projected excess of breast cancer deaths by 2030 due to the pandemic. In this paper, we present the results of the experiments conducted with Monte Carlo simulation based on the Markov Model of breast cancer progression in Poland.
### Costs of breast cancer care
The analysis of disease costs for public finance is a difficult task as there are different methods that could be used and various types of costs that have to be taken into consideration [12]. Costs in pharmacoeconomics can be divided into four categories: direct medical costs, direct non-medical costs, indirect costs and intangible costs [34]. Direct medical costs are the easiest to determine. They include the costs of medicines, diagnostic tests, hospital visits etc. Direct non-medical costs are costs mainly related to the treatment of the patient, but not having a medical basis. Another group of costs are indirect costs. They are mainly related to the loss of productivity associated with the patient's illness or death. The intangible costs are the costs associated with pain, suffering, fatigue and anxiety associated with the disease, as well as side effects of treatment such as nausea. They are difficult to estimate and measure, as they are mainly related to the patient's feelings [39]. In this paper, we take into consideration direct and indirect costs only.
Depending on the methodology the calculated costs may highly vary (e.g. $US20,000 to $US100,000 of the per-patient cost) [12]. Different studies analyse different types of costs, making it difficult to compare them. In [7], the mathematical model of continuous tumor growth and screening strategies was applied for the evaluation of screening policies. In [16], cost-effectiveness studies were used to estimate the costs of breast cancer screening per year of life saved to be $13,200-$28,000. The total cost of breast cancer in the USA was estimated to be $3.8 billion. Blumen _et al._[10] conducted a retrospective study to compare the treatment costs by tumor stage and type of service. The analysis was undertaken on a population selected from the commercial claims database which facilitated the study as costs-related data was directly available.
## 3 Methodology
In this section, we describe the Markov Model used for the simulation of breast cancer progression. We define the types and values of costs used and provide details on the parameters used in simulations. For more details on the actual algorithmic implementation refer to the source code available at [https://github.com/SanoScience/BC-MM](https://github.com/SanoScience/BC-MM).
#### 3.0.1 Breast cancer Markov Model
We use the Monte Carlo simulation based on the Markov Model to describe the course of breast cancer disease. The time horizon of the analysis is divided into equal time increments, called Markov cycles. During each cycle, the patient can transition from one state to another. Arrows connecting two different states indicate allowed transitions. We applied values of those transitions using clinical information, derived from previous studies or estimated empirically. The arrows from the state to itself indicate that the patient may remain in that state during cycles [44]. Transition probabilities are used to estimate the proportion of patients who will transfer from one state to another. The probability of an event occurring at a constant rate (\(r\)) during time (\(t\)) can be expressed by the equation:
\[p=1-e^{-rt} \tag{1}\]
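As an illustration, Eq. 1 reduces to a simple conversion from an annual event rate to a per-cycle (weekly) transition probability. The sketch below is ours and the rate value is only an example.

```python
import math

def rate_to_cycle_prob(rate_per_year, cycles_per_year=52):
    """Eq. 1: p = 1 - exp(-r t), with t set to one Markov cycle (one week)."""
    return 1.0 - math.exp(-rate_per_year / cycles_per_year)

print(rate_to_cycle_prob(0.024))  # an annual rate of 2.4% gives ~0.00046 per week
```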
In Figure 1, we present the Markov Model describing the progression of breast cancer. There are ten states in our model: _healthy_, four states describing a non
Figure 1: Breast cancer Markov Model with 10 states. Blue arrows indicate transitions between different stages of breast cancer, yellow ones show the diagnosis of breast cancer, green arrows indicate that the patient was healed and red represent the death event related to breast cancer.
diagnosed person with breast cancer at four stages [5] of the disease (\(stageN_{i}\), where \(i\in\{1,2,3,4\}\)), four states for diagnosed stages (\(stageD_{i}\), where \(i\in\{1,2,3,4\}\)) and _deceased_. _Deceased_ is a terminal stage of our model and if a subject reaches this state its simulation is terminated. We follow The American Joint Committee on Cancer which defines four stages of breast cancer [5].
#### 3.1.2 Simulation
We assume that each Markov cycle is equal to one week. Breast cancer is a disease that develops over many years; however, the longer the simulated period is, the less reliable the results are due to assumptions made about the future. Therefore, we set the number of cycles to 364, corresponding to 7 years (assuming that every year has 52 weeks). This period allows us to measure the long-term effects of the COVID-19 pandemic. We set the beginning of the simulation to January 1st 2019 so that the simulation stabilizes (reaches equilibrium) before the pandemic year 2020. We conduct two types of simulations, one taking into account COVID-19 lockdowns, and one which assumes that there was no effect of COVID-19 on breast cancer treatment and progression. We assume that lockdowns in Poland lasted from March 2020 till the beginning of March 2021. We repeat each type of simulation 100 times and average the collected results.
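A minimal sketch of the weekly Monte Carlo loop for a single simulated woman is shown below. The state names follow Figure 1, while the `probs_for` callable standing in for the age- and state-dependent transition probabilities is a placeholder; the full implementation is available in the released repository.

```python
import random

STATES = ["healthy", "stageN1", "stageN2", "stageN3", "stageN4",
          "stageD1", "stageD2", "stageD3", "stageD4", "deceased"]

def simulate_patient(age, probs_for, cycles=364):
    """probs_for(state, age) -> {next_state: per-cycle probability}; any
    remaining probability mass corresponds to staying in the current state."""
    state = "healthy"
    for c in range(cycles):
        if state == "deceased":
            break
        if c > 0 and c % 52 == 0:
            age += 1                          # age increases every 52 cycles
        r, acc = random.random(), 0.0
        for nxt, p in probs_for(state, age).items():
            acc += p
            if r < acc:
                state = nxt
                break
    return state
# The simulation is run for 1% of the female population and repeated 100 times.
```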
In the simulation, we take into account malignant breast cancer only (C50 ICD-10 code). Stage 0 of breast cancer (which has a different ICD-10 code) often has a 100% 5-year survival rate. Thus, we omit this stage in the analysis as it should not have a significant impact on costs and survival. We simulate the breast cancer progression in women as this sex accounts for most of the cases of the disease. We conduct computation on 1% of the representative women population in Poland with the age distribution according to Table 1 - after the end of simulations, we multiply results by 100. The minimum age of simulated patients is set to 25 because below this age the occurrence of breast cancer is rare (Table 2). We increase the age of each simulated person every 52 cycles.
To find the number of women diagnosed with breast cancer in 2019 we compute a linear trend line based on the available data. There were 143,911, 151,831, 158,534, 166,031, and 174,005 patients with breast cancer in 2010, 2011, 2012, 2013, and 2014 respectively [31]. According to Agency for Health Technology Assessment and Tariff System in Poland, those numbers rose to 227,784 and 242,838 in 2016 and 2017 [2]. The projected trend line indicated that in 2018 and 2019 there were 247,013 and 263,590 women with this disease in Poland. Taking into consideration the distribution of cancer stages among women with the diagnosed disease in the UK [13] and the distribution of age (Table 1) we derive the number of diagnosed women in Poland by cancer stage and by age (Table 3). In addition, we estimate the number of undiagnosed people. We assume that breast cancer would only be detected using mammography and follow-up diagnostic regimen, and around 71% of patients show up at this procedure [42]. In 2019 the number of mammography tests was 1.041 million with 19620 cases detected (2%). The number of people who fell ill in 2019 was 263,590. This is 2% of the 71% of people who appeared on mammograms. On this basis, the
remaining people who did not show up on the mammogram and would have a positive result can be calculated. They are 2% of the remaining 29% that should come for a mammogram. The estimated number of people is 108,427. Using this number and information about the percentage of patients in a specific stage, we calculate the number of undiagnosed patients in stages II, III and IV (Table 3). The same strategy for stage I destabilizes the model. Thus, we set the number of undiagnosed patients in the first stage to the same value as for those diagnosed in the first stage in 2019. We make this assumption due to the fact that people in stage I very often do not have symptoms yet and the model would otherwise be destabilized.
#### 4.2.2 State transition probabilities
We derive the following state transition probabilities:
1. \(P(healthy\to stageN_{1})\) - the probability of developing breast cancer,
2. \(P(healthy\to deceased)\) - the probability of non-cancer related death,
3. \(P(stageN_{i}\to stageN_{i+1})\) - the probability of cancer stage increase,
4. \(P(stageN_{i}\to stageD_{i})\) - the probability of cancer diagnosis,
5. \(P(stageN_{i}\to deceased)\) - the probability of breast cancer death,
6. \(P(stageD_{i}\to stageD_{i+1})\) - the probability of cancer stage increase,
7. \(P(stageD_{i}\to healthy)\) - the probability of healing,
8. \(P(stageD_{i}\to deceased)\) - the probability of breast cancer death.
\begin{table}
\begin{tabular}{c|c|c} Age & Number of women & Percentage \\ \hline
25-29 & 1,233,777 & 8\% \\
30-34 & 1,436,161 & 10\% \\
35-39 & 1,596,757 & 11\% \\
40-44 & 1,502,164 & 10\% \\
45-49 & 1,294,636 & 9\% \\
50-54 & 1,142,203 & 8\% \\
55-59 & 1,234,478 & 8\% \\
60-64 & 1,460,924 & 10\% \\
65-69 & 1,359,815 & 9\% \\
70-74 & 1,013,368 & 7\% \\
75-79 & 640,118 & 4\% \\
80-84 & 581,529 & 4\% \\
85+ & 583,545 & 4\% \\ \hline Total & 15,079,475 & 100\% \\ \end{tabular}
\end{table}
Table 1: The age distribution of Polish women above 25 years in 2019 [45].
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} Age & 0-24 & 25-29 & 30-34 & 35-39 & 40-44 & 45-49 & 50-54 & 55-59 & 60-64 & 65-69 & 70-74 & 75-79 & 80-84 & 85+ \\ \hline \# & 8 & 79 & 292 & 697 & 1,252 & 1,591 & 1,832 & 2,120 & 2,970 & 3,377 & 2,039 & 1,442 & 1,107 & 814 \\ \hline \% & 0.0 & 0.4 & 1.5 & 3.6 & 6.4 & 8.1 & 9.3 & 10.8 & 15.1 & 17.2 & 10.4 & 7.3 & 5.6 & 4.1 \\ \end{tabular}
\end{table}
Table 2: The distribution of patients diagnosed with breast cancer in 2019 in Poland [36].
To simulate the effects of covid lockdowns we modify three transition probabilities. The probability of cancer diagnosis is decreased because of lockdowns and restricted access to healthcare. The probability of breast cancer-related death is increased due to a lack of proper healthcare assistance, and the probability of healing is decreased due to poorer treatment during the COVID-19 pandemic. Numerically we implement the models as follows.
We assume **probability of developing breast cancer** is only dependent on women's age. We set this probability in the following manner - age (probability; probability in one cycle): 20 (0.1%; 0.0002%), 30 (0.5%; 0.001%), 40 (1.5%; 0.0029%), 50 (2.4%; 0.0046%), 60 (3.5%; 0.0067%), 70 (4.1%; 0.0079%), 80 (3.0%; 0.0058%) [4]. We define the **probability of non-cancer related death** according to the life tables from 2019 [45]. There are multiple resources defining the Progression-Free Survival parameter which is a time during which the patient lives with the disease but their state is not worsening. For example, Haba-Rodriguez _et al._[20] state that the median of PFS varies between 4 to 18 months. Thus, we empirically set the **probability of cancer stage increase for diagnosed patient** to \(p=k(1-e^{-\lambda t})\) where \(k\) is 0.0009, \(t\) is the number of cycles and \(\lambda\) is 10, 15, 20 or 25 depending on the stage of the disease. It is difficult and highly uncertain to assess the progression of breast cancer in an undiagnosed patient as no data exist that describe those transitions. Therefore, we define the **probability of cancer stage increase for undiagnosed women** in the same manner as in the case of diagnosed cases and set \(\lambda\) to 20, 25, 30 or 35 depending on the stage which is a reasonable approximation.
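A sketch of how the age-dependent onset probability could be looked up per cycle, using the per-cycle values quoted above; treating each value as constant over the following decade is our own simplification.

```python
# Per-cycle (weekly) probability of developing breast cancer by age bracket,
# taken from the values quoted above (age threshold -> weekly probability).
ONSET_PER_CYCLE = [(80, 0.000058), (70, 0.000079), (60, 0.000067),
                   (50, 0.000046), (40, 0.000029), (30, 0.00001), (20, 0.000002)]

def onset_probability(age):
    """Return the weekly probability of the healthy -> stageN1 transition."""
    for threshold, p in ONSET_PER_CYCLE:
        if age >= threshold:
            return p
    return 0.0
```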
We define the **probability of healing** based on the 5-year Disease Free Survival (DFS) parameter which changes depending on the cancer stage [32]. The 5-year DFS in 2019 was 0.987 (stage I), 0.873 (stage II), 0.52 (stage III), and 0.037 (stage IV). We decrease those values during lockdowns by 19% (due
\begin{table}
\begin{tabular}{r|r r r r|r r r r} & \multicolumn{4}{c|}{Diagnosed} & \multicolumn{4}{c}{Undiagnosed} \\ \hline Age \textbackslash{Stage} & I & II & III & IV & I & II & III & IV \\ \hline
25-29 & 461 & 441 & 102 & 57 & 461 & 181 & 42 & 24 \\
30-34 & 1,705 & 1,630 & 377 & 211 & 1,705 & 670 & 155 & 87 \\
35-39 & 4,069 & 3,891 & 900 & 504 & 4,069 & 1,600 & 370 & 208 \\
40-44 & 7,308 & 6,989 & 1,617 & 906 & 7,308 & 2,875 & 665 & 373 \\
45-49 & 9,287 & 8,881 & 2,055 & 1,151 & 9,287 & 3,653 & 845 & 474 \\
50-54 & 10,694 & 10,227 & 2,366 & 1,326 & 10,694 & 4,207 & 973 & 545 \\
55-59 & 12,375 & 11,834 & 2,738 & 1,534 & 12,375 & 4,868 & 1,126 & 631 \\
60-64 & 17,337 & 16,579 & 3,836 & 2,150 & 17,337 & 6,820 & 1,578 & 884 \\
65-69 & 19,713 & 18,851 & 4,361 & 2,444 & 19,713 & 7,754 & 1,794 & 1,005 \\
70-74 & 11,902 & 11,382 & 2,633 & 1,476 & 11,902 & 4,682 & 1,083 & 607 \\
75-79 & 8,417 & 8,050 & 1,862 & 1,044 & 8,417 & 3,311 & 766 & 429 \\
80-84 & 6,462 & 6,179 & 1,430 & 801 & 6,462 & 2,542 & 588 & 330 \\
85+ & 4,752 & 4,544 & 1,051 & 589 & 4,752 & 1,869 & 432 & 242 \\ \end{tabular}
\end{table}
Table 3: The projected distribution of diagnosed and undiagnosed women with breast cancer in 2019 in Poland.
to a 19% decrease in hospitalizations [30]) to 0.801 (stage I), 0.708 (stage II), 0.422 (stage III), and 0.03 (stage IV) and set the probability of healing to \(p=k(1-e^{-\lambda t})\) where \(k\) is set to \(\frac{1}{3}\) and \(\lambda\) is computed based on the 5-year DFS. The **probability of death for diagnosed patient** is computed from the 5-year survival rate which indirectly provides the probability of death within 5 years. Taking into consideration the 5-year survival in stages I, II, III and IV of 0.975, 0.856, 0.44, 0.23 [32], we compute \(\lambda\) parameter of the probability of death in cycle \(\leq t\) (\(p(x\leq t)=1-e^{-\lambda t}\)) to be 0.0061, 0.0302, 0.1642 and 0.2939 respectively. For covid simulation according to predictions that the mortality rate might increase by 9.6% [27] the \(\lambda\) parameter is set to 0.0056, 0.0344, 0.1903, 0.3715 for every stage. The 3-month delay in cancer diagnosis may increase the chances of premature death by 12% [9]. We, therefore, find the **probability of death for undiagnosed patient** by increasing the 5-year death probability for diagnosed patients and compute \(\lambda\) for those probabilities equal to 0.0061, 0.0349, 0.1989, 0.3932 depending on the stage.
In 2019, approximately 7% of all women aged 25 and over had a mammogram. The situation changed in 2020 when the number of mammograms decreased by over 305,000, which was a decrease of about 29%. We assume that malignant breast cancer can only be detected by mammography and diagnostic follow-ups. The newly detected cases in 2019 (19,620) accounted for 2% of all mammograms performed. In 2020, the number of detected cases decreased due to the COVID-19 pandemic. The average annual growth rate of new cases of breast cancer is 3%. This means that around 20,209 new cases of cancer should have been diagnosed in 2020. We assume that the percentage of positive mammograms did not change, which will bring the number of detected cases to about 13,873. This is a difference of about 6,000 cases compared to what should have been detected. About 12.5% of mammograms are thought to have a false-negative result and data shows that only 71% of all women show up for the examination. In 2020, the number of these women probably decreased even more (by 29%). Therefore, we define the **probability of diagnosis** in one year as \(p=\frac{l_{pos}}{l_{pos}+l_{l_{neg}}+l_{n}}\) where \(l_{pos}\) is the number of positive mammography cases, \(l_{fneg}\) is the number of false negative mammography and \(l_{nm}\) is the number of women that should have taken part in the mammography and would have had positive results. Thus, this probability for 2019 is 12.4% and 11.6% during lockdowns.
#### 3.2.2 Costs of breast cancer in Poland
We collect two types of costs during simulation: direct costs and indirect costs. We divide the latter into indirect costs related to premature death and other indirect costs related to absenteeism, presenteeism, absenteeism of informal caregivers, presenteeism of informal carers, disability etc.
We derive direct per-person, per-stage costs in the following way. We estimate the total direct costs for 2019 based on the estimated number of breast cancer patients and direct costs in 2010-2014 years [31] to be 846,653 thousand PLN. We follow the distribution of the stage-specific costs in [28]. We compute that direct per-stage yearly costs, based on the number of patients in every stage,
in the simulation are: stage I (25% of total costs, 207,560 thousand PLN, 1881 PLN per person), stage II (38%, 325,108 thousand PLN, 3,185 PLN per person), stage III (25%, 215,076 thousand PLN, 5,573 PLN per person), stage IV (12%, 98,909 thousand PLN, 7,869 PLN per person). We add those costs (divided by the number of cycles in one year) for every diagnosed woman in each cycle of the simulation.
We compute indirect death costs by averaging the per-person death costs related to breast cancer patients in 2010-2014 years [31]. The average value of premature death cost is 123,564 PLN. We add this value every time a simulated person enters _deceased_ state from one of the breast cancer stages. We estimate other indirect costs in the same way. The average value of indirect per-patient cost in 2010-2014 years is 13,159 PLN. We add this value (divided by the number of cycles in year) for every patient in the \(stageD_{i}\) state in every cycle.
#### 3.3.2 Experimental setup
We develop the Monte Carlo simulation with Python 3.10 programming language. We conduct all simulations on a 1.80 GHz Intel Core i7 CPU and 16 GB RAM. The execution of all simulations took 3 hours to complete.
## 4 Results and discussion
#### 4.0.1 Costs of breast cancer
In Table 4, we present changes in average direct and indirect costs over the 7-years period. In all cases, the total costs incurred in the absence of lockdowns are smaller than the ones that resulted from the simulation with the COVID-19 pandemic. However, the only statistically significant difference (p-value < 0.001) is in the case of indirect costs related to premature death. This is reasonable because breast cancer is a long-lasting disease and the costs of treatment or patient care did not change drastically due to lockdowns. On the other hand, delayed diagnoses and surgeries resulted in more premature deaths. The impact of the pandemic is also reflected in the total costs of breast cancer (Table 5). the pandemic resulted in an increase in breast cancer costs of 172.5 million PLN (average total costs with covid - average total costs without covid) with 95% confidence interval (CI) of [82.4, 262.6]. The difference between total costs with and without lockdowns is statistically significant (p-value < 0.001). The positive influence of lockdowns on the progression of the pandemic should not be neglected. However, the presented results suggest that lockdowns had a negative impact on overall disease treatment, both socially and economically.
#### 4.0.2 Breast cancer with and without COVID-19 pandemic
Simulations also showed that there was a significant difference in the number of women deaths due to COVID-19. On average, during 7 years, 60,052 women died taking into consideration lockdowns. This number would be smaller by 1005 deaths if the pandemic did not occur (95% CI [426, 1584]). Year-by-year visualization of deaths is presented in Figure 2. It can be noticed that delayed diagnoses and poorer
treatment resulted in an overall increase in the number of deaths. The long-term effects will be visible in years to come. Figure 2 depicts also the average number of cases of diagnosed breast cancer in Poland. There is a sharp decline in the number of diagnoses between covid and no covid simulations in 2020. The delayed diagnoses resulted in an increased probability of complications and death. In the following years, the trend was reversed and more of the delayed cases were diagnosed in covid simulation. However, the inefficient healthcare system is not capable of making up for the lost diagnostic time.
#### 4.2.2 Limitations
Our study is subject to limitations. First, the model and simulations presented are only approximations of the real-world situation, and therefore, the results should be interpreted with caution. Second, the impact of the COVID-19 pandemic on the costs associated with breast cancer is complex and rapidly evolving, and our study provides only a snapshot of the situation at the time of the analysis. Third, in order to build the model, we had to make several assumptions and rely on estimates or information from countries other than Poland due to a lack of national data. Access to healthcare and treatment may vary across different countries, and this may have resulted in overestimated or underestimated data in our model. Therefore, our findings should be interpreted in the context of these limitations and further research is needed to validate and
\begin{table}
\begin{tabular}{c c c|c c|c c} & \multicolumn{2}{c|}{DIRECT} & \multicolumn{2}{c|}{INDIRECT\_DEATH} & \multicolumn{2}{c}{INDIRECT\_OTHER} \\ year & NO COVID & COVID & NO COVID & COVID & NO COVID & COVID \\ \hline
2019 & 763\(\pm\)4 & 764\(\pm\)4 & 625\(\pm\)79 & 625\(\pm\)93 & 3297\(\pm\)14 & 3302\(\pm\)13 \\
2020 & 770\(\pm\)7 & 771\(\pm\)7 & 827\(\pm\)93 & 875\(\pm\)106 & 3317\(\pm\)26 & 3321\(\pm\)28 \\
2021 & 775\(\pm\)10 & 775\(\pm\)9 & 990\(\pm\)107 & 996\(\pm\)113 & 3337\(\pm\)36 & 3338\(\pm\)34 \\
2022 & 777\(\pm\)11 & 777\(\pm\)12 & 1087\(\pm\)118 & 1119\(\pm\)120 & 3352\(\pm\)40 & 3355\(\pm\)44 \\
2023 & 777\(\pm\)11 & 779\(\pm\)13 & 1186\(\pm\)116 & 1197\(\pm\)121 & 3367\(\pm\)43 & 3372\(\pm\)50 \\
2024 & 775\(\pm\)12 & 779\(\pm\)14 & 1275\(\pm\)120 & 1270\(\pm\)119 & 3377\(\pm\)49 & 3387\(\pm\)51 \\
2025 & 774\(\pm\)13 & 777\(\pm\)15 & 1306\(\pm\)150 & 1340\(\pm\)30 & 3389\(\pm\)53 & 3399\(\pm\)56 \\ \hline TOTAL & 5411\(\pm\)55 & 5422\(\pm\)64 & 7296\(\pm\)246 & 7420\(\pm\)267 & 23437\(\pm\)213 & 23474\(\pm\)236 \\ \end{tabular}
\end{table}
Table 4: Direct and indirect simulated costs (in million PLN) of breast cancer in Poland with and without COVID-19 pandemic.
\begin{table}
\begin{tabular}{c|c c} & \multicolumn{2}{c}{TOTAL} \\ year & NO COVID & COVID \\ \hline
2019 & 4686\(\pm\)77 & 461\(\pm\)91 \\
2020 & 4915\(\pm\)91 & 4967\(\pm\)107 \\
2021 & 5101\(\pm\)116 & 5109\(\pm\)118 \\
2022 & 5216\(\pm\)133 & 5251\(\pm\)122 \\
2023 & 5329\(\pm\)122 & 5348\(\pm\)135 \\
2024 & 5428\(\pm\)130 & 5435\(\pm\)133 \\
2025 & 5469\(\pm\)170 & 5516\(\pm\)140 \\ \hline TOTAL & 36144\(\pm\)316 & 36317\(\pm\)333 \\ \end{tabular}
\end{table}
Table 5: Total simulated costs (in million PLN) of breast cancer in Poland.
expand our results. To account for the uncertainty around the course of the tumor, empirical fitting of transition probabilities was necessary. This is because, upon diagnosis, patients are immediately referred for treatment, leaving no research data on the disease's development. Furthermore, the study assumes that people with cancer did not directly die from coronavirus infection, but those at an advanced stage of the disease may have had a higher risk of succumbing faster after being infected with the pathogen. It is also worth noting that the model does not consider potential changes in healthcare policies or treatment protocols during the pandemic, which could have affected breast cancer care costs and patient outcomes. Despite these limitations, the study provides valuable insights into the potential impact of the pandemic on breast cancer care costs, and its findings could be beneficial to healthcare policymakers, clinicians, and researchers. Nevertheless, more research is necessary to confirm and expand the results presented in this study.
## 5 Conclusion
In this study, we have used a Monte Carlo simulation approach and a Markov Model to analyze the effects of COVID-19 lockdowns on the costs and mortality of breast cancer in Poland. Our findings indicate a significant negative impact on breast cancer treatment, resulting in increased costs and higher mortality rates. Although these findings are preliminary, they offer important insights for future discussions on strategies that could be employed during future pandemics. As part of our ongoing research, we plan to conduct a sensitivity analysis of model parameters and expand our analysis to estimate the impacts of lockdowns on other diseases.
Figure 2: The average number of breast cancer-related deaths (top) and the average number of breast cancer diagnoses (bottom) with and without lockdowns.
## Acknowledgements
This publication is partly supported by the European Union's Horizon 2020 research and innovation programme under grant agreement Sano No. 857533 and the International Research Agendas programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund. We would like to thank Tadeusz Satlawa, Katarzyna Tabor, Karolina Tkaczuk, and Maja Wieckiewicz from Sano for help and their initial contribution in the development of the model.
|
2308.08370 | Agglomerative Transformer for Human-Object Interaction Detection | We propose an agglomerative Transformer (AGER) that enables Transformer-based
human-object interaction (HOI) detectors to flexibly exploit extra
instance-level cues in a single-stage and end-to-end manner for the first time.
AGER acquires instance tokens by dynamically clustering patch tokens and
aligning cluster centers to instances with textual guidance, thus enjoying two
benefits: 1) Integrality: each instance token is encouraged to contain all
discriminative feature regions of an instance, which demonstrates a significant
improvement in the extraction of different instance-level cues and subsequently
leads to a new state-of-the-art performance of HOI detection with 36.75 mAP on
HICO-Det. 2) Efficiency: the dynamical clustering mechanism allows AGER to
generate instance tokens jointly with the feature learning of the Transformer
encoder, eliminating the need of an additional object detector or instance
decoder in prior methods, thus allowing the extraction of desirable extra cues
for HOI detection in a single-stage and end-to-end pipeline. Concretely, AGER
reduces GFLOPs by 8.5% and improves FPS by 36%, even compared to a vanilla
DETR-like pipeline without extra cue extraction. | Danyang Tu, Wei Sun, Guangtao Zhai, Wei Shen | 2023-08-16T13:48:02Z | http://arxiv.org/abs/2308.08370v1 | # Agglomerative Transformer for Human-Object Interaction Detection
###### Abstract
We propose an **ag**glomerative Transformer (AGER) that enables Transformer-based human-object interaction (HOI) detectors to flexibly exploit extra instance-level cues in a single-stage and end-to-end manner for the first time. AGER acquires instance tokens by dynamically clustering patch tokens and aligning cluster centers to instances with textual guidance, thus enjoying two benefits: 1) Integrality: each instance token is encouraged to contain all discriminative feature regions of an instance, which demonstrates a significant improvement in the extraction of different instance-level cues and subsequently leads to a new state-of-the-art performance of HOI detection with 36.75 mAP on HICO-Det. 2) Efficiency: the dynamical clustering mechanism allows AGER to generate instance tokens jointly with the feature learning of the Transformer encoder, eliminating the need of an additional object detector or instance decoder in prior methods, thus allowing the extraction of desirable extra cues for HOI detection in a single-stage and end-to-end pipeline. Concretely, AGER reduces GFLOPs by \(8.5\%\) and improves FPS by \(36\%\), even compared to a vanilla DETR-like pipeline without extra cue extraction. The code will be available at [https://github.com/six6607/AGER_git_](https://github.com/six6607/AGER_git_).
## 1 Introduction
Human-object interaction (HOI) detection aims at understanding human activities at a fine-grained level. It involves both the localization of interacted human-object pairs and the recognition of their interactions, where the latter poses the major challenges as a higher-level vision task [9].
Since interactions describe the relations between different instances (_i.e._, humans and objects), instance-level cues (_e.g._, human pose and gaze) are unanimously recognized as pivotal to discriminating subtle visual differences between similar relation patterns in interaction recognition. However, extracting these instance-level cues intuitively indicates a multi-stage pipeline, where an off-the-shelf object detector is essential to generate instance proposals firstly [11, 45, 23, 7, 51, 59]. Such a paradigm struggles in proposal generation, yielding less competitive performance in model efficiency. In this work, we seek to explore a _single-stage_ Transformer-based HOI detector to flexibly and efficiently exploit extra instance-level cues, thus continuing their success in HOI detection.
The challenge stems from task-bias, _i.e._, different tasks have different preferences of discriminative feature regions [62]. Gaze tracking, for example, prefers the discriminative regions of human heads [41], whereas pose estimation favours holistic human body contexts [22]. Therefore, the crux of building a single-stage pipeline lies in a proper design of information carrier, which need to ensure the integrality of instance-level representations (IRs), _i.e._, containing all discriminative regions of an instance to satisfy the diverse region preferences of different tasks. However, most popular Transformer-based detectors deal with local patches, neglecting the integrality of different instances.
Some previous methods partially tackled the above challenge. STIP [59] employs an additional DETR detector to generate instance proposals, which yet suffers from the low efficiency of the multistage pipeline. Several works [27, 3, 18] propose to use an additional query-based instance decoder to extract instance queries individually. Despite being ingenious, these queries are task-driven and
Figure 1: **Instance queries vs. instance tokens**. Instance queries typically attend to instance parts, while our instance tokens are encouraged to contain integral discriminative regions of instances. More examples are presented in supplementary materials.
learned to highlight only the most distinguishable feature regions preferred by a given task, as verified by the sparsity of learned attention map [66]. As shown in Fig. 1, the object detection driven human queries in existing methods typically contain only instance extremities, which likewise fails to guarantee the integrality of IRs, limiting its adaptability to other tasks (, pose estimation) due to task bias (Sec. 4.2). Although joint multitask learning can partially alleviate the sparsity of instance queries, it introduces unexpected ambiguities and makes the model fitting harder [53].
In this paper, we present AGER, short for **AG**glomerative **F**rans**EM**, a new framework that improves prior methods by proposing instance tokens, handling the above-mentioned challenges favorably. Specifically, we formulate tokenization as a text-guided dynamic clustering process, which progressively agglomerates semantic-related patch tokens (, belonging to the same instance) to enable the emergence of instance tokens through feature learning. Being decoupled from downstream tasks, the clustering mechanism encourages instance tokens to ensure the integrality of extracted IRs (Fig. 1) and eliminate task bias, thus allowing a flexible extraction of different instance-level cues for HOI detection. Despite being conceptually simple, instance tokens have some striking impacts. Unlike instance proposals being regular rectangles, the instance tokens are generated over irregularly shaped regions that are aligned to different instances with arbitrary shapes (Fig. 1), thus being more expressive. With this, AGER already outperforms QPIC [39] by \(\mathbf{10.6\%}\) mAP even without involving any extra cues (Sec. 4.3). Additionally, compared to instance queries, instance tokens demonstrate a significant precision improvement in cue extraction (Fig. 3), leading to a new state-of-the-art performance of HOI detection on HICO-Det [2] with \(\mathbf{36.75}\) mAP. Of particular interest, the dynamical clustering mechanism can be seamlessly integrated with Transformer encoder, dispensing with additional object detectors or instance decoders and showing an impressive efficiency. Concretely, taking as input an image with size of \(640\times 640\), AGER reduces GFLOPs by \(\mathbf{8.5\%}\) and improves FPS by \(\mathbf{36.0\%}\) even compared to QPIC that built on an vanilla DETR-like Transformer pipeline (Sec. 4.3), and the relative efficiency gaps become more evident as the image resolution grows (Fig 3c).
## 2 Related Work
Modern HOI detection methods are built on three different information carriers of IRs,, instance proposals, points and instance queries, which show different effects on the utilization of instance-level cues.
**Instance proposals** dominated CNN-based HOI detection approaches for almost the entire era [2, 7, 8, 9, 11, 13, 17, 20, 23, 25, 30, 32, 36, 44, 45, 46, 48, 51, 54, 60, 64, 65]. These methods conventionally shared a two-stage pipeline, employing an object detector [38, 12] to generate instance proposals in the first stage. Then, the human and object proposals are processed separately to extract various instance-level cues, such as human pose [11, 45, 23], human parsing [32], spatial configurations [7], human gaze [51], object labels [60], among others. Along with visual features of appearance, these auxiliary cues were leveraged either individually or conjunctively to further reason about the interactions. Although a fine-selected proposal contains integral IRs, thus allowing the extraction of various fine-grained cues, the additional object detector inevitably compromises the efficiency of these methods. Furthermore, the cropped proposals lack global contextual information, leading to lower effectiveness. In contrast, the generation of the instance tokens in AGER does not involve an object detector but is optimized as a dynamically clustering process in an end-to-end manner along with the Transformer encoder. Moreover, the clustering mechanism enables instance tokens to be aggregated from a global perception field and potentially eliminates visual redundancy among similar patch tokens, leading to stronger expressiveness of instance tokens.
**Points** were proposed to represent instances to achieve a one-stage framework for HOI detection. Specifically, [26, 49, 61] represented the interactions as the midpoints of human-object pairs and detected them based on keypoint detection networks [35, 55], dispensing with additional detectors. Thus, they enjoyed a simpler and faster pipeline, but at the expense of the capability to freely extract extra cues due to the lack of integral IRs.
**instance queries** were first introduced in the Transformer-based detector [1], which interact with patch tokens and aggregate information through several interleaved cross-attention and self-attention modules. Thanks to the impressive global context modeling capability, Transformer rapidly revolutionizes HOI detection methods [67, 4, 18, 3, 39, 57, 19, 14, 63, 59, 42, 31, 27, 43]. Most works [67, 39, 3] focused on designing an end-to-end pipeline and continuing the success of the attention mechanism for HOI detection, dealing with visual appearance features solely and neglecting the potential of extra instance-level cues. Additionally, some methods [18, 27] propose to use additional queries to detect instances individually by stacking more decoders. Nevertheless, instance queries are task-driven and fail to extract integral IRs, weakening their ability to extract extra cues due to task bias. In comparison, our AGER introduces clustering mechanism into Transformer to enable the generation of instance tokens that guarantee the integrality of IRs, which continues the success of global attention and meanwhile enjoys the potential of extra instance-level cues.
## 3 Method
In this section, we aim to explore the solution for a _single-stage_ pipeline that allows us to leverage extra instance-level cues for HOI detection. We start with a detailed description of our instance encoder, which incorporates the attention mechanism with dynamical clustering to extract instance tokens, in Sec. 3.1. Then, we take three instance-level cues as guidance to explain the scheme of the extraction and aggregation of extra cues in Sec. 3.2. Next, in Sec 3.3, we propose a new interaction decoder that enumerates all human-object pairs to recognize their interactions in a multi-pattern manner. Finally, we design a special loss function that enables the textual guidance in Sec. 3.4.
### Instance Encoder
As shown in Fig. 2, the instance encoder is organized as a backbone followed by a hierarchical Transformer encoder, where the latter incorporates self-attention and clustering mechanism to extract instance tokens iteratively.
**Backbone.** An input image is first downsampled through a plain CNN backbone and then flattened to add a cosine positional embedding to harvest the initialized and sequenced patch tokens \(\mathbf{T}_{\text{b}}\in\mathbb{R}^{N_{\text{b}}\times D_{\text{b}}}\).
**Overall architecture.** Transformer encoder consists of two stages that share an identical architecture, which comprises of several self-attention layers and a clustering layer.
Concretely, in the \(s\)-th stage, we first initialize two sets of learnable clustering centers for humans \(\mathbf{C}_{\text{h}}^{s}\in\mathbb{R}^{N_{\text{h}}^{s}\times D_{\text{h}}^{ s}}\) and objects \(\mathbf{C}_{\text{o}}^{s}\in\mathbb{R}^{N_{\text{o}}^{s}\times D_{\text{o}}^{ s}}\) separately, which are then concatenated with image tokens \(\mathbf{T}_{\text{i}}^{s}\) and learned to update representations through several self-attention (SA) layers. Subsequently, at the end of each stage, we assign each image token to different clustering centers based on feature affinities, and the assigned image tokens are then aggregated in the clustering layer. Formally, each stage is computed as
\[[\hat{\mathbf{C}}_{\text{h}}^{s};\hat{\mathbf{C}}_{\text{o}}^{s}; \hat{\mathbf{T}}_{\text{i}}^{s}]=\text{SA-Layer}([\mathbf{C}_{\text{h}}^{s}; \mathbf{C}_{\text{o}}^{s};\mathbf{T}_{\text{i}}^{s}]), \tag{1}\] \[[\mathbf{G}_{\text{h}}^{s};\mathbf{G}_{\text{o}}^{s}]=\text{ ClusteringLayer}([\hat{\mathbf{C}}_{\text{h}}^{s};\hat{\mathbf{C}}_{\text{o}}^{s};\hat{ \mathbf{T}}_{\text{i}}^{s}]). \tag{2}\]
Here, \([\ ;\ ]\) denotes concatenation operator, \(\mathbf{G}_{\text{h}}^{s}\in\mathbb{R}^{N_{\text{h}}^{s}\times D_{\text{h}}^ {s}}\) and \(\mathbf{G}_{\text{o}}^{s}\in\mathbb{R}^{N_{\text{o}}^{s}\times D_{\text{o}}^{ s}}\) are the agglomerated image tokens after the \(s\)-th stage. Note that we omit the modules of token projection, residual connection and normalization here. Specifically, \(\mathbf{T}_{\text{i}}^{1}=\mathbf{T}_{\text{b}}\) and \(\mathbf{T}_{\text{i}}^{2}=[\mathbf{G}_{\text{h}}^{1};\mathbf{G}_{\text{o}}^{ 1}]\), _i.e._, we feed the initialized patch tokens from the backbone to the \(1\)-th stage, and these small local patches are dynamically agglomerated into relatively larger segments, which are subsequently fed into the \(2\)-th stage to generate the final instance tokens. Following [52], we propagate the learned clustering centers in the 1st stage to the 2nd stage through a MLP-Mixer layer [40]. Meanwhile, to make the human and object clustering centers distinct, we add two sets of position embedding to them. Then, for the human centers, they are obtained via
\[\mathbf{P}_{\text{h}}^{s} =\text{Embedding}(N_{\text{h}}^{s},D_{\text{h}}^{s}), \tag{3}\] \[\tilde{\mathbf{C}}_{\text{h}}^{s} =\text{Zeros}(N_{\text{h}}^{s},D_{\text{h}}^{s}),\] (4) \[\mathbf{C}_{\text{h}}^{1} =\tilde{\mathbf{C}}_{\text{h}}^{1}+\mathbf{P}_{\text{h}}^{1},\] (5) \[\mathbf{C}_{\text{h}}^{2} =\tilde{\mathbf{C}}_{\text{h}}^{2}+\mathbf{P}_{\text{h}}^{2}+ \text{MLP-Mixer}(\hat{\mathbf{C}}_{\text{h}}^{1}). \tag{6}\]
\(\tilde{\mathbf{C}}_{\text{h}}^{s}\) indicate the centers that are initialized as zeros and \(\hat{\mathbf{C}}_{\text{h}}^{1}\) are updated center representations that are calculated by Eq. 7. Object centers share the same process.
**Clustering layer.** The clustering layer at the end of each stage aims to aggregate local image tokens into a new token
Figure 2: **Architecture of AGER.** AGER performs tokenization as a text-guided dynamic clustering process in the instance encoder, dispensing with any additional object detector or instance decoder, which outputs instance tokens that encourage the integrality of instance-level representations. This property enables the extraction of different instance-level cues in a _single-stage_ pipeline. Finally, a new interaction decoder leverages these desirable cues to recognize interactions in a multi-pattern manner.
based on their feature affinity, thus the small local patch tokens can be gradually merged into a larger segment and finally into an instance token that covers the integral discriminative feature region of an instance.
In particular, we first employ a cross-attention module to update the representation of clustering centers, which enables information propagation between clustering centers and image tokens via
\[\hat{\mathbf{C}}^{s}_{[\text{h},\text{o}]}=\text{softmax}(\frac{\hat{\mathbf{C} }^{s}_{[\text{h},\text{o}]}\cdot(\hat{\mathbf{T}}^{s}_{\text{i}})^{\top}}{ \sqrt{D^{s}_{\text{i}}}})\cdot\hat{\mathbf{T}}^{s}_{\text{i}}, \tag{7}\]
where \(\hat{\mathbf{C}}^{s}_{[\text{h},\text{o}]}=[\hat{\mathbf{C}}^{s}_{\text{h}}; \hat{\mathbf{C}}^{s}_{\text{o}}]\) is the concatenation of human and object centers from Eq. 1. \(D^{s}_{\text{i}}\) is the channel dimension of image tokens. Subsequently, we adopt the scheme in [52] to employ a Gumbel-softmax[15] to compute the similarity matrix \(\mathbf{A}^{s}\) between the clustering centers and the image tokens as
\[\mathbf{A}^{s}_{(k,j)}=\frac{\exp(W_{\text{c}}\hat{\mathbf{c}}^{s}_{k}\cdot W _{\text{i}}\hat{\mathbf{t}}^{s}_{j}+\gamma_{i})}{\sum_{n=1}^{N^{s}_{\text{c}} }\exp(W_{\text{c}}\hat{\mathbf{c}}^{s}_{n}\cdot W_{\text{i}}\hat{\mathbf{t}}^ {s}_{j}+\gamma_{n})}, \tag{8}\]
where \(\hat{\mathbf{c}}^{s}_{k}\) stands for the \(k\)-th clustering center in \(\hat{\mathbf{C}}^{s}_{[\text{h},\text{o}]}\) and \(\hat{\mathbf{t}}^{s}_{j}\) denotes the \(j\)-th updated image token in \(\hat{\mathbf{T}}^{s}_{\text{i}}\). \(N^{s}_{\text{c}}=\hat{N}^{s}_{\text{h}}+N^{s}_{\text{o}}\) counts the total number of clustering centers in the \(s\)-th stage. \(W_{\text{c}}\) and \(W_{\text{i}}\) are the weights of the learned linear projections for the clustering centers and the image tokens, respectively. \(\{\gamma\}_{n=1}^{N^{s}_{\text{c}}}\) are _i.i.d_ random samples drawn from the Gumbel(0,1) distribution that enables the Gumbel-softmax distribution to be close to the real categorical distribution. Finally, we merge \(N^{s}_{\text{i}}\) image tokens with corresponding clustering centers to calculate grouped tokens \(\mathbf{G}^{s}_{[\text{h},\text{o}]}=[\mathbf{G}^{s}_{\text{h}};\mathbf{G}^{ s}_{\text{o}}]\) via
\[\mathbf{g}^{s}_{k}=\hat{\mathbf{c}}^{s}_{k}+W_{u}\frac{\sum_{j=1}^{N^{s}_{ \text{i}}}\mathbf{A}^{s}_{(k,j)}W_{v}\hat{\mathbf{t}}^{s}_{j}}{\sum_{j=1}^{N^{ s}_{\text{i}}}\mathbf{A}^{s}_{(k,j)}}, \tag{9}\]
where \(\mathbf{g}^{s}_{k}\) is the \(k\)-th grouped token in \(\mathbf{G}^{s}_{[\text{h},\text{o}]}\), \(W_{u}\) and \(W_{v}\) are learned weights to project the merged features.
### Cues Extraction & Aggregation
This work realizes three instance-level cues, _i.e._, human poses (P), spatial locations (S) and object categories (T), as guidance, other valuable cues can be extracted similarly.
**Extraction.** Unlike prior methods that use different specially customized models to extract different cues, we extract those cues through several lightweight MLPs in parallel, thanks to the excellent expressiveness of the instance tokens. Concretely, we perform a 5-layer MLP to estimate the normalized locations of 17 keypoints for human pose estimation. Note that object tokens do not have a pose representation. Meanwhile, a 3-layer MLP is used to predict the normalized bounding boxes of all humans and objects as their spatial locations. Additionally, we adopt a 1-layer FFN to predict each category of humans and objects \(\hat{\mathbf{y}}\). Specifically, for the \(i\)-th human instance, its prediction \(\hat{\mathbf{y}}^{i}_{\text{h}}\in[0,1]^{2}\), where the \(2\)-th element indicates _no-human_. For object instance, similarly, \(\hat{\mathbf{y}}^{i}_{\text{o}}\in[0,1]^{N^{s}_{\text{o}}+1}\), where \(N^{s}_{\text{o}}\) is the number of object classes and the \((N^{\text{c}}_{\text{o}}+1)\)-th element denotes _no-object_.
**Aggregation.** We first adopt two fully connected layers to project all cues into a united and embedded feature space, leading to four new cue representations \(\mathbf{E}_{\text{pos}}\in\mathbb{R}^{N^{s}_{\text{h}}\times D_{\text{pos}}}\) (human poses), \(\mathbf{E}_{\text{h-spa}}\in\mathbb{R}^{N^{s}_{\text{o}}\times D_{\text{pos}}}\) (human spatial locations), \(\mathbf{E}_{\text{o-spa}}\in\mathbb{R}^{N^{s}_{\text{o}}\times D_{\text{pos}}}\) (object spatial locations) and \(\mathbf{E}_{\text{cls}}\in\mathbb{R}^{N^{s}_{\text{o}}\times D_{\text{cls}}}\) (object classes). Particularly, the text of the predicted object name is first transformed into a vector using Word2Vec [34]. Since these cues may introduce noise due to misrecognition, we manually set a threshold \(\gamma=0.7\) over the confidence of category prediction to decide their employment. Concretely, if the category (_no-object_ and _no-human_ are excluded) prediction confidence of an instance is larger than \(\gamma\), we keep its corresponding cues otherwise reset them as 0. Finally, these cues are concatenated to corresponding instance tokens to obtain the final representations via:
\[\hat{\mathbf{T}}_{\text{h}}=W_{\text{h}}[\mathbf{T}_{\text{h}};\ \mathbf{E}_{\text{pos}};\ \mathbf{E}_{\text{h-spa}}], \tag{10}\] \[\hat{\mathbf{T}}_{\text{o}}=W_{\text{o}}[\mathbf{T}_{\text{o}};\ \mathbf{E}_{\text{cls}};\ \mathbf{E}_{\text{o-spa}}], \tag{11}\]
where \(\mathbf{T}_{\text{h}}=\mathbf{G}^{2}_{\text{h}}\) and \(\mathbf{T}_{\text{o}}=\mathbf{G}^{2}_{\text{o}}\) are human and object tokens generated by the instance encoder. \(W_{\text{h}}\) and \(W_{\text{o}}\) are learned weights to project the concatenated features.
### Interaction Decoder
We adopt a 3-layer Transformer decoder to recognize interactions, of which each layer consists of a cross-attention module and a self-attention module. As the clustering mechanism in the instance encoder has located different humans and objects, our decoder aims to associate the interacted human-object pairs and recognize their interactions.
**Association.** Formally, a given image is invariably transformed into \(N^{\text{pred}}_{\text{h}}=N^{2}_{\text{h}}\) human tokens \(\hat{\mathbf{T}}_{\text{h}}\in\mathbb{R}^{N^{\text{pred}}_{\text{h}}\times D_{ \text{h}}}\) and \(N^{\text{pred}}_{\text{o}}=N^{2}_{\text{o}}\) object tokens \(\hat{\mathbf{T}}_{\text{o}}\in\mathbb{R}^{N^{\text{pred}}_{\text{o}}\times D_{ \text{o}}}\) after the instance encoder and the cue utilization module. By design, \(D_{\text{h}}=D_{\text{o}}\) and we simplify them as \(D\). Then, we add two sets of position embedding to \(\hat{\mathbf{T}}_{\text{h}}\) and \(\hat{\mathbf{T}}_{\text{o}}\) respectively via:
\[\mathbf{P}_{\text{h}}=\text{Embedding}(N^{\text{pred}}_{\text{h}},D),\ \hat{ \mathbf{T}}_{\text{h}}=\hat{\mathbf{T}}_{\text{h}}+\mathbf{P}_{\text{h}}; \tag{12}\] \[\mathbf{P}_{\text{o}}=\text{Embedding}(N^{\text{pred}}_{\text{o}},D), \hat{\mathbf{T}}_{\text{o}}=\hat{\mathbf{T}}_{\text{o}}+\mathbf{P}_{\text{o}}. \tag{13}\]
Next, the position embedding for interaction queries is initialized as the one-to-one sum of the human and object position embedding. Concretely, the position of the \((ij)\)-th interaction is the sum of the position of the \(i\)-th hu
man and the position of the \(j\)-th object, _i.e.,_\(\mathbf{p}_{\text{a}}^{(ij)}=\mathbf{p}_{\text{h}}^{i}+\mathbf{p}_{\text{o}}^{j}\), leading to an interaction position embedding \(\mathbf{P}_{\text{a}}\in\mathbb{R}^{N_{\text{s}}^{\text{pred}}N_{\text{a}}^{ \text{pred}}\times D}\), which actually enumerates total \(N_{\text{h}}^{\text{pred}}N_{\text{o}}^{\text{pred}}\) possible human-object pairs.
Moreover, in practical scenarios, one human-object pair may have multiple interaction labels. Thus, we follow [50] to incorporate multiple patterns into each interaction position. Concretely, we use a small set pattern embedding \(\mathbf{P}_{\text{pattern}}=\texttt{Embedding}(N_{\text{pattern}},D)\) to detect different interactions from each human-object pair. \(N_{\text{pattern}}\) is the number of patterns that is very small, here \(N_{\text{pattern}}=3\). Next, we share the \(\mathbf{P}_{\text{pattern}}\) to each interaction position \(\mathbf{p}_{\text{a}}\) to get the multi-pattern interaction position embedding \(\hat{\mathbf{P}}_{\text{a}}\in\mathbb{R}^{N_{\text{s}}\times D}\), where \(N_{\text{a}}=N_{\text{pattern}}\times N_{\text{h}}^{\text{pred}}\times N_{ \text{o}}^{\text{pred}}\). Finally, our interaction queries are initialized as:
\[\mathbf{Q}_{\text{a}}=\text{Zeros}(N_{\text{a}},D)+\hat{\mathbf{P}}_{\text{a}}. \tag{14}\]
**Recognition.** Along with the human and object instance tokens from the instance encoder, we feed the interaction queries \(\mathbf{Q}_{\text{a}}\) into the interaction decoder. After that, the interactions are recognized through a 1-layer FFN, following QPIC [39].
### Loss Function
The loss function consists of three parts: 1) loss of interaction recognition \(\mathcal{L}_{\text{a}}\), 2) loss of cues extraction \(\mathcal{L}_{\text{e}}\), and 3) loss of instance token generation \(\mathcal{L}_{\text{t}}\). Specifically, \(\mathcal{L}_{\text{e}}\) consists of pose estimation and location regression. Category recognition is jointly optimized with \(\mathcal{L}_{\text{t}}\). We use the focal loss [28] as \(\mathcal{L}_{\text{a}}\) and adopt \(L_{2}\) loss as \(\mathcal{L}_{\text{e}}\). The total loss is the weighted sum of them, _i.e._, \(\mathcal{L}=\alpha_{1}\mathcal{L}_{\text{a}}+\alpha_{2}\mathcal{L}_{\text{e}}+ \alpha_{3}\mathcal{L}_{\text{t}}\). More details are described in _supplementary materials_. Here, we mainly introduce the design of \(\mathcal{L}_{\text{t}}\), which enables text representations to guide the generation of instance tokens.
#### 3.4.1 Textual Guidance
Actually, some works have tried to incorporate clustering with Transformer for other tasks, such as GroupViT [52] and kMaX [56], and we borrow some ideas from them for model design. However, training the model for HOI detection is not easy. GroupViT use contrastive loss, which demands large training batch size (4096) and kMaX uses heavy decoder and dense annotations. All of these are unaffordable for HOI detection. Thus, we devise a new loss function that uses a textual signal to guide the learning of the instance encoder by enforcing a similarity between the textual representation and the instance token representation. To this end, we first define a similarity metric and then match instance tokens to each ground truth instance with this metric and finally optimize the instance encoder.
**Similarity metric.** Suppose an input image contains \(N_{\text{h}}^{\text{gt}}\) humans and \(N_{\text{o}}^{\text{gt}}\) objects, in which the \(j\)-th object is labeled as \(\mathbf{y}_{\text{o}}^{j}\). Then, taking objects as examples, our similarity metric \(\text{sim}(\cdot,\cdot)\) between the \(j\)-th ground truth object and the \(k\)-th generated object token \(\mathbf{t}_{\text{o}}^{k}\) is defined as
\[\text{sim}(j,k)=\hat{\mathbf{y}}_{\text{o}}^{k}(j)\times\text{Cosine}(\mathbf{ r}_{\text{vis}}^{k},\mathbf{r}_{\text{txt}}^{j}), \tag{15}\]
where \(\hat{\mathbf{y}}_{\text{o}}^{k}(j)\in[0,1]\) is the probability of predicting the \(j\)-th class and \(\text{Cosine}(\cdot,\cdot)\) denotes cosine similarity. \(\mathbf{r}_{\text{vis}}^{k}\) is visual representation vector projected from the \(k\)-th object token \(\mathbf{t}_{\text{o}}^{k}\) through two FC layers, and \(\mathbf{r}_{\text{txt}}^{j}\) is a text representation vector from CLIP [37]. Concretely, we prompt the _noun_ word of \(j\)-th ground truth class with a handcrafted sentence template, _i.e._, "_A photo of a {noun_}_". Then, we feed this sentence into a frozen text encoder of CLIP followed by two FC layers as projector to acquire the text representation \(\mathbf{r}_{\text{txt}}^{j}\). The human tokens share the same progress. Note that for human, the \(j\) ranges from 1 to 2, indicating _human_ and _no-human_, while for object, \(j=[1,2,...,N_{\text{o}}^{\text{c}},N_{\text{o}}^{\text{c}}+1]\), denoting total \(N_{\text{o}}^{\text{c}}\) different object categories and a _no-object_.
**Instance matching.** By design, \(N_{\text{h}}^{\text{pred}}>N_{\text{h}}^{\text{gt}}\) and \(N_{\text{o}}^{\text{pred}}>N_{\text{o}}^{\text{gt}}\). We first pad \((N_{\text{h}}^{\text{pred}}-N_{\text{h}}^{\text{gt}})\) and \((N_{\text{o}}^{\text{pred}}-N_{\text{o}}^{\text{gt}})\)_"nothing_"s to human and object ground truths respectively, leading to two new ground truth sets \(\{\mathbf{y}_{\text{h}}^{i}\}_{i=1}^{N_{\text{h}}^{\text{pred}}}\) and \(\{y_{\text{o}}^{j}\}_{j=1}^{N_{\text{pred}}^{\text{pred}}}\). Following, in case of the object tokens (same for the human tokens), we search for a permutation of \(N_{\text{o}}^{\text{pred}}\) elements \(\hat{\sigma}\in\mathfrak{S}_{N_{\text{e}}^{\text{pred}}}\) to achieve the maximum total similarity:
\[\hat{\sigma}=\operatorname*{arg\,max}_{\sigma\in\mathfrak{S}_{N_{\text{h}}^{ \text{pred}}}}\sum_{i=1}^{N_{\text{pred}}^{\text{pred}}}\text{sim}(i,\sigma(i)). \tag{16}\]
The optimal assignment is calculated with the Hungarian algorithm [21], following DETR [1].
**Objective.** After finding the optimal assignment \(\hat{\sigma}\), we are inspired by [47] to define the objective taking into account both positive predictions (assigned to ground truths that are not _nothing_) and negative (assigned to _nothing_) predictions into account. In case of object instances, the positive loss is calculated as:
\[\mathcal{L}_{\text{o}}^{\text{pos}} =\sum_{i=1}^{N_{\text{o}}^{\text{gt}}}\texttt{S}(\hat{\mathbf{y}}_ {\text{o}}^{\hat{\sigma}(i)}(i))\cdot[-\text{Cosine}(\mathbf{r}_{\text{vis}}^{ \hat{\sigma}(i)},\mathbf{r}_{\text{txt}}^{i})]\] \[+\sum_{i=1}^{N_{\text{o}}^{\text{gt}}}\texttt{S}(\text{Cosine}( \mathbf{r}_{\text{vis}}^{\hat{\sigma}(i)},\mathbf{r}_{\text{txt}}^{i}))\cdot[- \log\hat{\mathbf{y}}_{\text{o}}^{\hat{\sigma}(i)}(i)]. \tag{17}\]
Intuitively, \(\mathcal{L}_{\text{o}}^{\text{pos}}\) is equivalent to optimizing a cosine loss weighted by the class correctness and optimizing a cross-entropy loss weighted by the cosine similarity. Note that the stop gradient operator \(\texttt{S}\texttt
Thus, we enforce the representation and class to be correct at the same time. Besides, we define the negative loss as:
\[\mathcal{L}_{\text{o}}^{\text{neg}}=\sum_{j=N_{\text{o}}^{\text{p}}+1}^{N^{ \text{pred}}_{\text{o}}}[-\log\hat{\mathbf{y}}_{\text{o}}^{\hat{\mathbf{y}}}({N _{\text{o}}^{\text{c}}}+1)]. \tag{18}\]
Finally, the objective for the object instances is designed as \(\mathcal{L}_{\text{t}}^{\text{o}}=\lambda\mathcal{L}_{\text{o}}^{\text{pos}}+(1 -\lambda)\mathcal{L}_{\text{o}}^{\text{neg}}\). The objective of human instances \(\mathcal{L}_{\text{t}}^{\text{h}}\) shares the same process. Finally, \(\mathcal{L}_{\text{t}}=\mathcal{L}_{\text{t}}^{\text{o}}+\mathcal{L}_{\text{t}} ^{\text{h}}\).
## 4 Experiments
**Technical details.** Most of our default settings follow QPIC [39], _e.g._, data augmentation, backbone, etc. Specifically, the channel dimension of all tokens, clustering centers and position embedding are set to 256. \(D_{\text{pos}}=64\), \(D_{\text{spa}}=16\), \(D_{\text{cls}}=64\). We design \(N_{\text{h}}^{1}=16;N_{\text{h}}^{2}=4\) and \(N_{\text{o}}^{1}=64;N_{\text{o}}^{2}=8\). There are 4 and 2 self-attention layers in the first and second stage. For loss calculation, \(\alpha_{1,2,3}\) are set to 2.5, 1, 1.5, and \(\lambda=0.75\). For human pose, we use the annotations provided by [6, 24] for HICO-Det [2] and the annotations from [16] for V-COCO [10].
**Training.** Our batch size is 32, with an initialized learning rate of the backbone \(10^{-5}\), that of the others \(2.5e^{-4}\), and the weight decay \(10^{-4}\). We adopt the AdamW [33] optimizer for a total of 150 epochs where learning rates are decayed after 80 and 120 epochs.
### Importance of Instance-level Cues
This subsection aims to verify the importance of different instance-level cues and explore why they facilitate interaction recognition. As Tab. 1(a) verified, all cues contribute a performance gain for HOI detection, especially for the "rare" case (with fewer than 10 training instances), ranging from \(3.4\%\) to \(10.1\%\). The transformer shows excellent performance when dealing with a large number of training samples yet an inferior performance with inadequate sample volume due to the lack of inductive bias [5]. However, HOI detection has always been plagued by the long-tail distribution problem, interactions (_e.g._, _stand on chair_) with a minority of samples are thereby more likely to be misrecognized as an interaction (_e.g._, _sit on chair_) with similar visual pattern but massive samples. In this case, instance-level cues serve as some explicit priori knowledge that may be prioritised by the Transformer to recognize interactions. We further verify this solution in Tab. 1(b). Concretely, we choose 5,000 images of _wheel_ bicycle and 5,000 images of _ride_ bicycle to retrain the interaction decoder with the instance encoder being frozen. As the table shows, when the sample volumes of two interaction instances differ substantially (_e.g._, 500 vs. 5,000), additional cues can significantly improve performance, especially for small samples (\(79.1\%\) vs. \(13.7\%\)). However, the gain diminishes as the sample size tends to equalize (\(5.3\%\) vs. \(6.4\%\) with 5,000/5,000 samples). Additionally, Tab. 1(c) reports the mean difference between the cues extracted from these two interaction examples. Empirically, a relatively larger mean difference indicates a better recognizability and thus facilitates the process of classification. From this point, the various instance-level cues are more desirable features for interaction recognition.
### Importance of Clustering
As mentioned previously, the integrality of IRs is the cornerstone of extracting different cues in a single-stage framework. Fig. 2(a) first shows the coverage rate of different instance information carriers over the instance bounding box. Concretely, the proposals extracted by an extra object detector (DETR in here) show best performance, but enforces a two-stage pipeline that compromises the efficiency. Meanwhile, object detection-driven instance queries in GEN [27] attend to instance parts (14.85% coverage rate), which leads to an inferior performance in extracting other cues, as shown in Fig. 2(b). In comparison, the instance tokens generated by clustering enable a sufficient coverage over instances, allowing one to flexibly extract different extra cues (\(3\times\) precision improvement). Interestingly, the clustering mechanism natively eliminates the visual redundancy in similar tokens, promising the instance tokens a capability for increased expressiveness. Therefore, even without using any additional decoder, AGER already shows a competitive result of object detection (57.48@AP50) compared to other more complex methods.
### Analysis of Effectiveness & Efficiency
**Effectiveness**. Tab. 2 and Tab. 3 verify the effectiveness of AGER on HICO-Det [2] and V-COCO [10], respectively. First, AGER even without involving any additional cues already achieves a competitive result, with a relative \(10.6\%\) mAP gain compared to QPIC [39] on HICO-Det. It is ascribable to the CLIP-guided dynamic clustering process, which reduces the visual redundancy in patch tokens and leads to more expressive instance tokens. Secondly, AGER achieves a new state-of-the-art performance (36.75 mAP) based on human poses, spatial distributions and object categories. Note that this result can be further improved by using more valuable cues (37.10/37.77 mAP with gaze/interactiveness) at a negligible cost of additional parameters (+2.36M). However, we are not striving for that, but aim to provide the first paradigm that enables us to use extra cues in a single-stage manner, giving some valuable points to the HOI detection community. Although AGER does not achieve the optimal results on V-COCO, its performance is still very competitive.
**Efficiency.** In Tab. 4, we compare four different yet typical Transformer-based methods, including: **(i)** QPIC [39] that
adopt a vanilla DETR-like Transformer (6-layer encoder and 6-layer decoder); **(ii)** AS-Net [3] that performs two decoders to detect instances and interactions respectively (6-layer encoder and \(2\times 6\)-layer decoder); **(iii)** STIP [59] that built on a two-stage pipeline where instances are first detected through DETR [1] and **(iv)** our AGER. As shown in the table, AGER is even more efficient than QPIC that has the most simple architecture in prior Transformer-based HOI detection methods, with a relative \(36.0\%\) gain of FPS and a \(8.5\%\) reduction of FLOPs. Formally, additional computational costs of ATER are mainly introduced by calculating instance-level cues and clustering centers. However, for the former, thanks to the expressiveness of instance tokens, several lightweight MLPs are adequate to extract different cues, which bring a negligible additional complexity compared to the method using different customized tools. For the latter, although the first stage of the instance encoder takes more computation to update the clustering centers, the second stage starts to process much less tokens after clustering, and the number of tokens is further reduced after the second stage. Thus, the decoder demands a minority of computational complexity. Meanwhile, thanks to the great representation ability of the instance tokens, the decoder of AGER is much shallower than that of QPIC (3 vs. 6). Also, unlike QPIC has a quadratic computational cost _w.r.t_ the number of pixels, the size of input image does not introduce additional computations to AGER but the first stage of encoder. This is because except the first stage, AGER deals with a fixed number of tokens regardless of the input size. We visualize the relations between the complexity of different methods and the image resolution in Fig. 2(c), and present a detailed validation in _supplementary materials_.
### Ablation Study
**Clustering center numbers.** In Tab. 4(a), we compare different numbers of clustering centers. Overall, increasing centers consistently improves performance, and we find (16,64) for the first stage and (4,8) for the second stage to be optimal. Empirically, an inadequate amount of centers may fail to characterize an image sufficiently, while an excessive amount of centers are likely to introduce unexpected noises.
**Pattern numbers.** Tab. 4(b) shows the effect of multi-pattern mechanism in the interaction decoder. Specially, when the number of patterns is one, we adopt the strategy of QPIC [39] to predict a _not-one-hot-like_ label, _i.e._, a label with multiple true values. However, such an intuitive solution brings more ambiguity. In contrast, our multi-pattern strategy explicitly encourages each position embedding to attend to one specific interaction, leading to a relative \(5.6\%\) mAP gain.
**Strategies.** We verify the effectiveness of the proposed strategies in Tab. 4(c). Concretely, without explicitly adding
\begin{table}
\end{table}
Table 1: **Importance of instance-level cues**. (a) The results of incorporating visual appearance features (A) with other cues in Sec. 3.2. (b) The results of using (w/) and not using (w/o) extra cues with different sample volumes. (c) The mean differences of different cues.
Figure 3: **Importance of clustering. (a) We report the coverage as the proportion of the area of the feature region highlighted by an information carriers to the area of an instance. (b) Performance of different information carriers for different tasks (both fine-tuned with the same supervisory signals as ours). (c) The model parameters over image resolution.**
position embedding to human and object centers respectively, the increased ambiguity leads to a \(8.3\%\) performance degradation. Besides, we observe a relative \(6.8\%\) degradation when invalidating the "_cue-switch_" strategy in cue aggregation module (Sec. 3.2), _i.e._, treating all generated instance tokens as valid without using the threshold \(\gamma\) to invalidate mis-recognized instances. Note that our utilization of CLIP is quite different from other methods. Concretely, other methods (_e.g._, GEN [27]) perform CLIP to transfer interaction-specific linguistic knowledge to a visual model by using interaction (HOI-specific) labels to customize an interaction classifier, while we use just instance labels to generate general IRs. Actually, the majority of HOI detection methods use such general IRs since they are initialized using a pre-trained object detection or classification network.
**Similarity metric.** Tab. 4 compares different similarity metrics for our new objective function. When using cross-entropy (CE) solely, _i.e._, involving no textural guidance, we observe severe performance degradation (\(\approx 50\%\)), indicating that simple CE loss cannot enable dynamical clustering. We conjecture that using CE loss is more like a recognition task that may introduce unexpected task bias, _i.e._, highlighting partial features. In comparison, text representation is decoupled from downstream tasks and thus involves no task-bias. However, when adopting cosine similarity individually, we also observe a \(18.9\%\) performance degradation. It is because that the frozen text encoder of CLIP cannot differentiate two instances in the same category but with different attributes (_e.g._, a standing human and a sit
\begin{table}
\begin{tabular}{l|c c c} Method & Param. & GFIOPs & FPS \\ \hline OPIC [39] & **42.35M** & 36.95 & 20.0 \\ AS-Net [3] & 59.14M & 52.94 & 1.6 \\ STIP [59] & 54.71M & 48.27 & 1.6 \\ Ours & 44.47M & **33.81** & **27.2** \\ \end{tabular}
\end{table}
Table 4: **Analysis of efficiency**. All models are tested using a sigle GTX 1080Ti taking as input an image with a size of \(640\times 640\). Here, we adopt ResNet50-FPN as the backbone.
\begin{table}
\begin{tabular}{l c|c c|c c c|c c c} Method & Detector & Backbone & Cues & Single & Full & Rare & Non-Rare & Full & Rare & Non-Rare \\ \hline \multicolumn{10}{l}{CNN-based Methods:} \\ InteractNet [9] & COCO & R50-FPN & ✗ & ✗ & 9.94 & 7.16 & 10.77 & - & - & - \\ iCAN [8] & COCO & R50 & ✗ & ✗ & 14.84 & 10.45 & 16.15 & 16.26 & 11.33 & 17.73 \\ PMFNet [45] & COCO & R50-FPN & ✓ & ✗ & 17.46 & 15.65 & 18.00 & 20.34 & 17.47 & 21.20 \\ DRG [7] & COCO & R50-FPN & ✓ & ✗ & 19.26 & 17.74 & 19.71 & 23.40 & 21.75 & 23.89 \\ FCMNet [32] & COCO & R50 & ✓ & ✗ & 20.41 & 17.34 & 21.56 & 22.04 & 18.97 & 23.12 \\ DJ-RN [23] & COCO & R50 & ✓ & ✗ & 21.34 & 18.53 & 22.18 & 23.69 & 20.64 & 24.60 \\ SCG [58] & COCO & R50-FPN & ✓ & ✗ & 21.85 & 18.11 & 22.97 & - & - & - \\ UnionDet [17] & COCO & R50 & ✗ & 17.58 & 11.72 & 19.33 & 19.76 & 14.68 & 21.27 \\ IP-Net [49] & COCO & Hg-104 & ✗ & ✓ & 19.56 & 12.79 & 21.58 & 22.05 & 15.77 & 23.92 \\ PPDM [26] & HICO-Det & Hg-104 & ✗ & ✓ & 21.94 & 13.97 & 24.32 & 24.81 & 17.09 & 27.12 \\ GG-Net [61] & HICO-Det & Hg-104 & ✗ & ✓ & 23.47 & 16.48 & 25.60 & 27.36 & 20.23 & 29.48 \\ \hline \multicolumn{10}{l}{Transformer-based Methods:} \\ HOI-T [67] & HICO-Det & R50 & ✗ & ✓ & 23.46 & 16.91 & 25.41 & 26.15 & 19.24 & 28.22 \\ PST [4] & - & R50 & ✗ & ✓ & 23.93 & 14.98 & 26.60 & 26.42 & 17.61 & 29.05 \\ HOTR [18] & HICO-Det & R50 & ✗ & ✓ & 25.10 & 17.34 & 27.42 & - & - & - \\ AS-Net [3] & HICO-Det & R50 & ✗ & ✓ & 28.87 & 24.25 & 30.25 & 31.74 & 27.07 & 33.14 \\ QPTC [39] & HICO-Det & R101 & ✗ & ✓ & 29.90 & 23.92 & 31.69 & 32.38 & 26.06 & 34.27 \\ CDN-L [57] & HICO-Det & R101 & ✗ & ✓ & 32.07 & 27.19 & 33.53 & 34.79 & 29.48 & 36.38 \\ MSTR [19] & HICO-Det & R50 & ✗ & ✓ & 31.17 & 25.31 & 32.92 & 34.02 & 28.83 & 35.57 \\ SSRT [14] & HICO-Det & R50 & ✗ & ✓ & 31.34 & 24.31 & 33.32 & - & - & - \\ DT [63] & HICO-Det & R50 & ✗ & ✓ & 31.75 & 27.45 & 33.03 & 34.50 & 30.13 & 35.81 \\ STIP [59] & HICO-Det & R50 & ✗ & ✓ & 32.22 & 28.15 & 33.43 & 35.29 & 31.43 & 36.45 \\ Iwin [42] & HICO-Det & R101 & ✗ & ✓ & 32.79 & 27.84 & 35.40 & 35.84 & 28.74 & 36.09 \\ IF [31] & HICO-Det & R50 & ✗ & ✓ & 33.51 & 30.30 & 34.46 & 36.28 & 33.16 & 37.21 \\ GEN [27] & HICO-Det & R101 & ✗ & ✓ & 34.95 & 31.18 & 36.08 & 38.22 & 34.36 & 39.37 \\ \hline Our w/o cues & HICO-Det & R50 & ✓ & ✓ & 33.07 & 29.87 & 34.05 & 35.21 & 32.04 & 37.09 \\ Our w/ cues & HICO-Det & R50 & ✓ & ✓ & **36.75** & **33.53** & **37.71** & **39.84** & **35.58** & **40.23** \\ \end{tabular}
\end{table}
Table 2: **Performance comparison on the HICO-Det test set**. We present an additional tag “Cues” to indicate the ability to flexibly use a variety of instance-level cues, as well as “Single” to denote a single-stage pipeline.
Table 3: **Performance on the V-COCO**. Limited by space, the detailed comparison is listed in _supplementary materials_.
If we jointly trained the text encoder and provided more fine-grained labels (_e.g._, _a photo of a standing human_), the results should improve, yet this would introduce much more training complexity and annotation workload. In comparison, our proposed loss is a dynamical fusion of features' generality (both are humans) and variability (with different attributes), which eliminates task bias and also facilitates model training.
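As a rough sketch of how a cross-entropy term can be fused with textual guidance from a frozen CLIP text encoder (the exact AGER objective, its weighting, and the token/text pairing are not reproduced here; the function and argument names below are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def text_guided_loss(token_feats, class_logits, labels, text_embeds, alpha=1.0):
    """Illustrative fusion of CE and cosine similarity to frozen CLIP text embeddings.

    token_feats:  (N, D) instance-token features projected into the CLIP space
    class_logits: (N, C) classification logits for the same tokens
    labels:       (N,)   ground-truth class indices
    text_embeds:  (C, D) frozen CLIP text embeddings, one per class prompt
    """
    ce = F.cross_entropy(class_logits, labels)
    feats = F.normalize(token_feats, dim=-1)
    texts = F.normalize(text_embeds, dim=-1)
    cos = (feats * texts[labels]).sum(dim=-1)    # similarity to the matching prompt
    return ce + alpha * (1.0 - cos).mean()
```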
## 5 Discussion & Conclusion
**Limitation.** We find that clustering demands a relatively high resolution, so AGER struggles to handle small and occluded instances. Besides, our instance decoder enumerates all human-object pairs without considering interactiveness. Both of these issues await further exploration.
**Conclusion.** In this paper, we present AGER, a novel vision Transformer for HOI detection, which provides the first paradigm that enables Transformer-based HOI detectors to leverage extra cues in an efficient (single-stage) manner. AGER performs tokenization as a text-guided dynamic clustering process, improving on prior methods with instance tokens that ensure the integrity of IRs. We validate AGER on two challenging HOI benchmarks and achieve a considerable performance boost over SOTA results.
|
2301.04041 | Manifold Restricted Interventional Shapley Values | Shapley values are model-agnostic methods for explaining model predictions.
Many commonly used methods of computing Shapley values, known as off-manifold
methods, rely on model evaluations on out-of-distribution input samples.
Consequently, explanations obtained are sensitive to model behaviour outside
the data distribution, which may be irrelevant for all practical purposes.
While on-manifold methods have been proposed which do not suffer from this
problem, we show that such methods are overly dependent on the input data
distribution, and therefore result in unintuitive and misleading explanations.
To circumvent these problems, we propose ManifoldShap, which respects the
model's domain of validity by restricting model evaluations to the data
manifold. We show, theoretically and empirically, that ManifoldShap is robust
to off-manifold perturbations of the model and leads to more accurate and
intuitive explanations than existing state-of-the-art Shapley methods. | Muhammad Faaiz Taufiq, Patrick Blöbaum, Lenon Minorics | 2023-01-10T15:47:49Z | http://arxiv.org/abs/2301.04041v2 | # Manifold Restricted Interventional Shapley Values
###### Abstract
Shapley values are model-agnostic methods for explaining model predictions. Many commonly used methods of computing Shapley values, known as _off-manifold methods_, rely on model evaluations on out-of-distribution input samples. Consequently, explanations obtained are sensitive to model behaviour outside the data distribution, which may be irrelevant for all practical purposes. While _on-manifold methods_ have been proposed which do not suffer from this problem, we show that such methods are overly dependent on the input data distribution, and therefore result in unintuitive and misleading explanations. To circumvent these problems, we propose _ManifoldShap_, which respects the model's domain of validity by restricting model evaluations to the data manifold. We show, theoretically and empirically, that ManifoldShap is robust to off-manifold perturbations of the model and leads to more accurate and intuitive explanations than existing state-of-the-art Shapley methods.
## 1 Introduction
Explaining model predictions is highly desirable for reliable applications of machine learning. This is especially important in risk-sensitive settings like medicine and credit scoring (Hakkoum et al., 2022; Lee et al., 2019; Ahmad et al., 2018; Kvamme et al., 2018) where an incorrect model prediction could prove very costly. Explainability is becoming increasingly relevant because of regulations like the General Data Protection Regulation (Regulation, 2016), which may require being able to explain model predictions before deploying a model in the real world. This is less of a challenge in models like linear models and decision trees, which tend to be easier to interpret. However, the same is not true for more complex models like Neural Networks, where explaining predictions may not be straightforward (Ribeiro et al., 2016).
Explainable AI is an area of machine learning which aims to provide methodologies for interpreting model predictions. Various different techniques of explaining models have been proposed, with each approach satisfying different properties (Linardatos et al., 2021). In this paper, we focus on Shapley values (Strumbelj and Kononenko, 2010, 2014; Lundberg and Lee, 2017), a popular approach for quantifying feature relevance, which is model-agnostic, i.e., is independent of model implementation. Additionally, this is a local explanation method, i.e., it can be used to explain individual model predictions. Shapley values are based on ideas from cooperative game theory (Bilbao, 2000) and come with various desirable theoretical properties (Sundararajan and Najmi, 2020) which make it a very attractive method in practice.
At a high-level, Shapley values treat features as 'players' in a game, where the total payout is the model prediction at a given point. To quantify the feature importance, this method distributes the total payout among each player in a 'fair' manner using a _value_ function. Different types of Shapley value functions have been proposed which differ in the way they distribute payout among players (Sundararajan and Najmi, 2020; Frye et al., 2021). These can be broadly divided into two categories: (i) _on-manifold_ value functions, which only depend on the model behaviour on the input data distribution, and (ii) _off-manifold_ value functions which also depend on the model behaviour outside the input data distribution.
Off-manifold Shapley values are not robust to changes in model behaviour outside the data distribution. This means that the explanations obtained using these methods may be highly influenced if the model behaviour outside the data distribution changes, even if it remains fixed on the data distribution (Frye et al., 2021; Slack et al., 2020; Yeh et al., 2022). Such changes to the model can change the Shapley values drastically, resulting in misleading explanations, and can even be used to hide model biases. On the other hand, while the on-manifold Shapley values are robust to
such model perturbations, the explanations obtained are highly sensitive to changes in the feature distribution. Additionally, these methods do not capture the _causal_ contribution of features as they attribute importance based on feature correlations. For example, we show that on-manifold Shapley values can be 'fooled' into attributing similar importance to two positively correlated features, even if the model depends on only one of them.
In this paper, we bridge this gap between _on-manifold_ and _off-manifold_ Shapley values by proposing ManifoldShap (illustrated in Figure 1), a Shapley value function, which remains robust to changes in model behaviour outside the data distribution, while estimating the _causal_ contribution of features. We show that ManifoldShap is significantly less sensitive to changes in the feature distribution than other on-manifold value functions. We extend the formal notion of robustness in Yeh et al. (2022) by providing an alternative definition which may be more desirable in many cases. We additionally show that our proposed method satisfies both notions of robustness, while other methods do not. Moreover, ManifoldShap satisfies a number of other desirable properties which we verify theoretically and empirically on real-world datasets.
## 2 Shapley values
In this section, we will introduce Shapley values for model explainability. For any given model \(f:\mathcal{X}\rightarrow\mathcal{Y}\), our goal is to obtain localised model explanations at a given point \(\textbf{x}\in\mathcal{X}\). We assume that \(\mathcal{X}\subseteq\mathbb{R}^{d}\) and \(\mathcal{Y}\subseteq\mathbb{R}\).
Shapley values (Strumbelj and Kononenko, 2010, 2014; Lundberg and Lee, 2017) provide a natural tool for obtaining such explanations. For a specific input **x**, Shapley values define a way of distributing the difference between \(f(\textbf{x})\) and a baseline, which we denote as \(b_{0}\), among the \(d\) input features. This can naturally be interpreted as the contribution of each feature towards the difference \(f(\textbf{x})-b_{0}\), and is commonly referred to as feature attributions. One possible choice of baseline explored in the literature is the model evaluated at an auxiliary input \(\textbf{x}^{\prime}\), i.e., \(b_{0}=f(\textbf{x}^{\prime})\). Alternatively, many methods use the average model output \(\mathbb{E}[f(\textbf{X})]\) as the baseline, i.e., \(b_{0}=\mathbb{E}[f(\textbf{X})]\). This can be used to explain _why_ the output at a point **x** deviates from the average output. The average output provides a more intuitive and interpretable baseline compared to the choice of an auxiliary input \(\textbf{x}^{\prime}\), which can be arbitrary. In this work, we therefore restrict our attention to the latter category.
As an example, consider a model which predicts an individual's salary, with input features corresponding to the individual's information. If feature \(i\in[d]\) represents the age of the individual, the attribution for feature \(i\), which we will denote as \(\phi_{i}\), tells us the contribution of the individual's age to the salary prediction for **x**, relative to the average salary prediction, i.e., \(f(\textbf{x})-\mathbb{E}[f(\textbf{X})]\). To compute the contribution for feature \(i\) at **x**, Shapley values consider a value function \(v:2^{[d]}\rightarrow\mathbb{R}\) where \(v\) may implicitly depend on **x**. Given a subset \(S\subseteq[d]\setminus\{i\}\), we can intuitively interpret the difference \(v(S\cup\{i\})-v(S)\) as the contribution of feature \(i\) w.r.t. the set \(S\). Next, the Shapley value for feature \(i\) is defined as a weighted sum over all possible subsets \(S\):
\[\phi_{i}\coloneqq\sum_{S\subseteq[d]\setminus\{i\}}\frac{|S|!(d-|S|-1)!}{d!}( v(S\cup\{i\})-v(S)).\]
The quantity \(\phi_{i}\) can be intuitively considered as the average contribution of feature \(i\) to the prediction at **x**. In order for the explanations obtained to be interpretable and intuitive, the value function \(v\) must be chosen such that it satisfies a number of desirable properties. We present some of the most important such properties here:
1. _Sensitivity:_ If \(f\) does not depend on \(x_{i}\), then \(v(S\cup\{i\})=v(S)\), and hence \(\phi_{i}=0\).
2. _Symmetry:_ If \(f\) is symmetric in components \(i\) and \(j\) and \(x_{i}=x_{j}\), then \(v(S\cup\{i\})=v(S\cup\{j\})\) and hence \(\phi_{i}=\phi_{j}\).
3. _Efficiency:_ If \(\phi_{i}\) denotes the attribution of feature \(i\) to \(f(\textbf{x})-\mathbb{E}[f(\textbf{X})]\), then \(v([d])-v(\emptyset)=f(\textbf{x})-\mathbb{E}[f(\textbf{X})]\) and hence, \(\sum_{i}\phi_{i}=f(\textbf{x})-\mathbb{E}[f(\textbf{X})]\).
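For concreteness, the weighted sum defining \(\phi_{i}\) above can be evaluated exactly whenever a value function \(v\) is available, although the cost grows as \(2^{d-1}\) and is therefore only feasible for small \(d\). A minimal sketch (the function names are illustrative, not taken from any particular library):

```python
from itertools import combinations
from math import factorial

def shapley_value(i, d, v):
    """Exact Shapley attribution of feature i for a value function v.

    v maps a frozenset S (a subset of range(d)) to a real number.
    """
    others = [j for j in range(d) if j != i]
    phi = 0.0
    for r in range(len(others) + 1):
        for subset in combinations(others, r):
            S = frozenset(subset)
            weight = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
            phi += weight * (v(S | {i}) - v(S))
    return phi
```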
Next, we present various commonly used value functions, which can be classified into _off-manifold_ and _on-manifold_ value functions.
### Off-Manifold Value Functions
This class of value functions does not restrict function evaluations to the data distribution, and consequently, computing Shapley values involves evaluating the model on out-of-distribution inputs, where the model has not been trained (see Figure 1). The most commonly used off-manifold value function is Marginal Shapley (MS) (also called RB-Shap (Sundararajan and Najmi, 2020)):
Figure 1: The datapoints at which model is evaluated when computing Shapley values for test point **x**, along with the data manifold. Off-manifold methods evaluate the model outside the data manifold whereas our proposal, ManifoldShap, restricts model evaluations to the data manifold.
### Marginal Shapley (MS).
\[v^{\text{MS}}_{\mathbf{x},f}(S)\coloneqq\mathbb{E}[f(\mathbf{x}_{S},\mathbf{X}_{ \bar{S}})].\]
Specifically, Marginal Shapley takes the expectation of \(f(\mathbf{x}_{S},\mathbf{X}_{\bar{S}})\) over the marginal density of \(\mathbf{X}_{\bar{S}}\).
In addition to this, there has been some recent work proposing a causal perspective when computing Shapley values (Janzing et al., 2020; Heskes et al., 2020; Jung et al., 2022). Specifically, these works observe that manually fixing the values of features \(\mathbf{X}_{S}\) to \(\mathbf{x}_{S}\) when computing Shapley values corresponds to _intervening_ on the feature values. In Pearl's do calculus (Pearl, 2000, 2012), this is expressed as \(do(\mathbf{X}_{S}=\mathbf{x}_{S})\). This leads to the definition of Interventional Shapley (IS) value functions:
#### Interventional Shapley (IS).
\[v^{\text{IS}}_{\mathbf{x},f}(S)\coloneqq\mathbb{E}[f(\mathbf{X})\mid do( \mathbf{X}_{S}=\mathbf{x}_{S})]. \tag{1}\]
A detailed discussion of how Interventional Shapley differs from other _non-causal_ value functions has been deferred to Section 2.4. How to compute \(v^{\text{IS}}_{\mathbf{x},f}(S)\) depends on the causal structure of the features. Janzing et al. (2020) only consider the causal relations between the function inputs and outputs, rather than between the real-world features and the true output \(Y\). This corresponds to the set-up in Figure 2, where the true feature values \(\tilde{X}_{i}\) are formally distinguished from the features \(X_{i}\) input into the function, \(f\), with \(X_{i}\) being a direct causal descendant of \(\tilde{X}_{i}\) and no interactions between \(X_{i}\). In this set-up, intervening on \(\mathbf{X}_{S}\) yields the following interventional distribution:
\[p(\mathbf{X}_{\bar{S}}\mid do(\mathbf{X}_{S}=\mathbf{x}_{S}))=p(\mathbf{X}_{\bar{S}}).\]
In this case, the value function, \(v^{\text{IS}}_{\mathbf{x},f}(S)\) can straightforwardly be computed as
\[v^{\text{IS}}_{\mathbf{x},f}(S)=\mathbb{E}[f(\mathbf{X})\mid do(\mathbf{X}_{S}=\mathbf{x}_{S})]=\mathbb{E}_{\mathbf{X}_{\bar{S}}\sim p(\mathbf{X}_{\bar{S}})}[f(\mathbf{x}_{S},\mathbf{X}_{\bar{S}})].\]
This is equivalent to Marginal Shapley. Therefore, Marginal Shapley can be considered a special case of Interventional Shapley. In contrast, Heskes et al. (2020) seeks to estimate the causal contributions of the real-world features towards the true output \(Y\), and therefore, does not distinguish between the true features and the features input into the model. The resulting IS value function also takes into account the causal relations among the true features themselves.
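Under this causal structure, the interventional value function is just a marginal expectation, so it can be estimated by Monte Carlo from a background sample of the data. A minimal sketch (assuming a vectorized model `f` and a background data matrix `X_bg`; these names are illustrative):

```python
import numpy as np

def marginal_value_function(x, f, X_bg):
    """Builds v(S) ≈ E[f(x_S, X_notS)], averaging over background rows X_bg."""
    def v(S):
        X = X_bg.copy()
        idx = list(S)
        X[:, idx] = x[idx]      # intervene: fix the features in S to x_S
        return f(X).mean()      # average over the marginal of the remaining features
    return v
```

Combined with `shapley_value` above, this reproduces Marginal (equivalently, Interventional) Shapley for small \(d\).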
### On-Manifold Value Functions
These value functions only rely on function values in data distribution when computing Shapley values. As a result, any changes in the function outside data distribution does not change the explanations obtained. One of the first on-manifold value functions proposed was Conditional Expectation Shapley (CES) (Sundararajan and Najmi, 2020):
#### Conditional Expectation Shapley (CES).
\[v^{\text{CES}}_{\mathbf{x},f}(S)\coloneqq\mathbb{E}[f(\mathbf{X})\mid\mathbf{ X}_{S}=\mathbf{x}_{S}].\]
Unlike Marginal Shapley, CES takes the expectation of \(f(\mathbf{x}_{S},\mathbf{X}_{\bar{S}})\) over the conditional density of \(\mathbf{X}_{\bar{S}}\) given \(\mathbf{X}_{S}=\mathbf{x}_{S}\) (and not the marginal density of \(\mathbf{X}_{\bar{S}}\)). This has undesired implications for the obtained Shapley values, which we discuss in detail in Section 2.4.
Apart from this, recently Yeh et al. (2022) proposed Joint Baseline Shapley (JBShap), a value function which aims to make Shapley values robust to model changes in regions of low data-density. This value function explicitly takes the density \(p(\mathbf{x})\) into consideration when calculating explanations:
#### Joint Baseline Shapley (JBShap).
\[v^{\text{J}}_{\mathbf{x},f,p}(S)\coloneqq f(\mathbf{x}_{S},\mathbf{x}^{\prime}_{\bar{S}})\,p(\mathbf{x}_{S},\mathbf{x}^{\prime}_{\bar{S}}),\]
where \(\mathbf{x}^{\prime}\) is an auxiliary baseline. The authors also propose an extension of JBShap, called _Random Joint Baseline Shapley_ (RJBShap) where the value function averages over all possible baseline values:
#### Random Joint Baseline Shapley (RJBShap).
\[v^{\text{RJ}}_{\mathbf{x},f,p}(S)\coloneqq\mathbb{E}_{p_{\text{b}}(\mathbf{X}_ {\bar{S}})}[f(\mathbf{x}_{S},\mathbf{X}_{\bar{S}})p(\mathbf{x}_{S},\mathbf{X}_ {\bar{S}})].\]
Here, \(p_{b}(\mathbf{X}_{\bar{S}})\) is some prior distribution over the baseline features \(\mathbf{x}^{\prime}_{\bar{S}}\). A natural choice of prior is the marginal density \(p(\mathbf{X}_{\bar{S}})\), which we use to compute RJBShap later.
Having listed the most relevant on and off manifold value functions, we discuss their limitations in the following sections. This will motivate our proposal of an alternative value function, which aims to circumvent these limitations.
### Limitations of off-manifold value functions
As Slack et al. (2020); Frye et al. (2021) point out, dependence of Shapley explanations on off-manifold behaviour
Figure 2: Causal structure considered in Janzing et al. (2020). The true features are \(\tilde{X}_{i}\) while features input into the model are \(X_{i}\).
of the model can be problematic. For example, computing Interventional Shapley at **x** requires evaluating the model at points \((\textbf{x}_{S},\textbf{X}_{\bar{S}})\) for \(S\subseteq[d]\), where \(\textbf{X}_{\bar{S}}\sim p(\textbf{X}_{\bar{S}}\mid do(\textbf{X}_{S}=\textbf{x}_{S}))\). Such points may lie outside the distribution of training data, where the model was not trained. Consider a model which is identical to the ground truth function on the data distribution. The train/test errors of the model will be 0, suggesting that it captures the ground truth function perfectly. However, if the model differs from the ground truth outside the data distribution, the model's Shapley values may be drastically different from the ground truth Shapley values, resulting in highly misleading explanations.
This limitation of off-manifold Shapley values can be exploited to 'fool' Shapley values into hiding model biases. In Slack et al. (2020), the authors consider models which are highly biased on the data manifold (i.e., solely rely on sensitive features, like racial background, for predictions). They show that these models can be perturbed outside the data manifold in such a way that the resulting Shapley values give no attribution to the sensitive features, despite the models relying solely on these sensitive features on the data manifold. Therefore, off-manifold Shapley values are highly vulnerable to off-manifold manipulations.
### Limitations of on-manifold value functions
While the on-manifold value functions do not consider model behaviour outside the data distribution, the existing methods can lead to unintuitive or misleading Shapley explanations as they do not consider the _causal_ contributions of features, and are highly sensitive to feature correlations. Specifically, as Janzing et al. (2020) point out, when computing feature contributions at **x**, the value function for a subset \(S\), \(v(S)\), must capture the effect of fixing the feature values \(\textbf{X}_{S}\) to \(\textbf{x}_{S}\). This is _not_ given by \(\mathbb{E}[f(\textbf{X})\mid\textbf{X}_{S}=\textbf{x}_{S}]\) as in CES, because observing \(\textbf{X}_{S}=\textbf{x}_{S}\) also changes the distribution of \(\textbf{X}_{\bar{S}}\). Instead, the impact of setting \(\textbf{X}_{S}\) to \(\textbf{x}_{S}\) is captured by \(\mathbb{E}[f(\textbf{X})\mid do(\textbf{X}_{S}=\textbf{x}_{S})]\), which in general is different from the conditional expectation. Therefore, Interventional Shapley is inherently proposed to capture the _causal_ effect of fixing feature values.
Since CES considers the conditional expectation \(\mathbb{E}[f(\textbf{X})\mid\textbf{X}_{S}=\textbf{x}_{S}]\) when computing Shapley values, the resulting Shapley values are highly influenced by feature correlations. As a result, two highly correlated features may receive similar feature attributions even if the model under consideration depends on only one of them. We make this concrete with an example in Appendix D. We also demonstrate empirically in Section 5 and Appendix G that CES can be highly sensitive to the feature correlations, and consequently can lead to wrong explanations. Additionally, computing CES is computationally challenging when the feature-space is continuous. While Frye et al. (2021) propose training a surrogate model \(g\) with masked inputs to estimate the conditional expectation (see Appendix E), training \(g\) is even more difficult than training the model \(f\).
Aside from this, the JBShap and RJBShap value functions proposed by Yeh et al. (2022) explain the feature contributions for the function \(\tilde{f}_{p}(\textbf{x})\coloneqq f(\textbf{x})p(\textbf{x})\), rather than \(f(\textbf{x})\) itself. Specifically, RJBShap explains the contribution of individual features towards the difference \(\tilde{f}_{p}(\textbf{x})-\mathbb{E}_{p_{b}(\textbf{X})}[\tilde{f}_{p}(\textbf{X})]\). The resulting Shapley values therefore do not explain the underlying function \(f\) itself. We make this more concrete with an example with \(\mathcal{X}\subseteq\mathbb{R}^{2}\):
\[\textbf{X}\sim\mathcal{N}(\textbf{0},I_{2}),\quad f(\textbf{x})=\exp\big{(}x _{1}^{2}/2\big{)}. \tag{2}\]
For this example, \(\tilde{f}_{p}(\textbf{x})\) only depends on \(x_{2}\) and consequently, the RJBShap values for feature 1, \(\phi_{1}=0\), for all \(\textbf{x}\in\mathcal{X}\), even though the function \(f(\textbf{x})\)_only_ depends on \(x_{1}\). RJBShap can therefore lead to _highly_ misleading explanations. We confirm this empirically in Appendix G.2.4. Additionally, the notion of off-manifold robustness satisfied by JBShap and RJBShap value functions can be restrictive. We expand upon this in Section 3.1, where we propose an alternative definition of robustness which is less restrictive, and is not satisfied by JBShap and RJBShap.
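The claim that \(\tilde{f}_{p}\) depends only on \(x_{2}\) in this example is easy to verify symbolically:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)
f = sp.exp(x1**2 / 2)
p = sp.exp(-(x1**2 + x2**2) / 2) / (2 * sp.pi)   # density of N(0, I_2)
print(sp.simplify(f * p))                        # exp(-x2**2/2)/(2*pi): no x1 dependence
```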
## 3 Manifold Restricted Shapley Values
In this paper, we argue that a model must be mainly characterised by its behaviour on the data manifold. While _intervening_ on features provides the correct notion of fixing features, we must restrict our attention to the data manifold when estimating Shapley values. This allows us to avoid the issues of non-identifiability outside the data manifold, thereby making the Shapley estimates robust against adversarial attacks as in Slack et al. (2020). In order to estimate Shapley values which are robust to off-manifold manipulations, we must restrict the function evaluations to the data manifold. Before we proceed, we introduce our value function in terms of general sets \(\mathcal{Z}\subseteq\mathcal{X}\).
**Definition 1** (ManifoldShap).: _Let \(\mathcal{Z}\subseteq\mathcal{X}\) be an open set with \(\textbf{x}\in\mathcal{Z}\), and \(\mathbb{P}(\textbf{X}\in\mathcal{Z}\mid do(\textbf{X}_{S}=\textbf{x}_{S}))>0\) for \(S\subseteq[d]\). Then, we define the ManifoldShap on \(\mathcal{Z}\) as:_
\[v^{\text{\tiny MAN}}_{\textbf{x},f,\mathcal{Z}}(S)\coloneqq\mathbb{E}[f( \textbf{X})\mid do(\textbf{X}_{S}=\textbf{x}_{S}),\textbf{X}\in\mathcal{Z}]. \tag{3}\]
**Remark.** The notation \(\mathbb{E}[\cdot\mid do(\textbf{X}_{S}=\textbf{x}_{S}),\textbf{X}\in\mathcal{Z}]\) denotes the expectation w.r.t. the density \(p_{\mathcal{Z},\textbf{x}_{S}}(\cdot)\) where
\[p_{\mathcal{Z},\textbf{x}_{S}}(\textbf{y})\coloneqq\frac{p(\textbf{y}\mid do( \textbf{X}_{S}=\textbf{x}_{S}))\mathds{1}(\textbf{y}\in\mathcal{Z})}{\mathbb{P} (\textbf{X}\in\mathcal{Z}\mid do(\textbf{X}_{S}=\textbf{x}_{S}))}. \tag{4}\]
The condition \(\mathbb{P}(\textbf{X}\in\mathcal{Z}\mid do(\textbf{X}_{S}=\textbf{x}_{S}))>0\) ensures that \(p_{\mathcal{Z},\textbf{x}_{S}}(\textbf{x})\) (and hence \(v^{\text{\tiny MAN}}_{\textbf{x},f,\mathcal{Z}}(S)\)) is well-defined. By conditioning on the event \(\textbf{X}\in\mathcal{Z}\), the ManifoldShap value function restricts the function evaluations to the set \(\mathcal{Z}\). In
practice, \(\mathcal{Z}\) can be chosen to be the data manifold, or any other region of interest, where model behaviour is relevant to explanations sought. In this way, ManifoldShap will disregard the model behaviour outside the region of interest when computing Shapley values. A detailed discussion of how to choose the sets \(\mathcal{Z}\) is deferred to the next section.
Our formulation of _ManifoldShap_ is general as it does not assume a specific causal structure on the features. In our methodology, we assume that the expectation \(\mathbb{E}[f(\mathbf{X})\mid do(\mathbf{X}_{S}=\mathbf{x}_{S})]\) can be computed using observational data. This is a standard assumption needed to compute Interventional Shapley, and holds true under the causal structure in Figure 2. Under this assumption, we can compute the value function using the following result.
**Lemma 1**.: _The value function \(v^{\text{\tiny MAN}}_{\mathbf{x},f,\mathcal{Z}}\) can be written as,_
\[v^{\text{\tiny MAN}}_{\mathbf{x},f,\mathcal{Z}}(S)=\frac{\mathbb{E}[f(\mathbf{X}) \mathds{1}(\mathbf{X}\in\mathcal{Z})\mid do(\mathbf{X}_{S}=\mathbf{x}_{S})]}{\mathbb{P}( \mathbf{X}\in\mathcal{Z}\mid do(\mathbf{X}_{S}=\mathbf{x}_{S}))}\]
In practice, all we need is a manifold classifier, trained to estimate the value of the indicator, i.e. \(\hat{g}(\mathbf{x})\approx\mathds{1}(\mathbf{x}\in\mathcal{Z})\). The value function (3) can then be estimated using:
\[v^{\text{\tiny MAN}}_{\mathbf{x},f,\mathcal{Z}}(S)\approx\frac{\mathbb{E}[f( \mathbf{X})\hat{g}(\mathbf{X})\mid do(\mathbf{X}_{S}=\mathbf{x}_{S})]}{ \mathbb{E}[\hat{g}(\mathbf{X})\mid do(\mathbf{X}_{S}=\mathbf{x}_{S})]}. \tag{5}\]
We also provide alternative methodologies of estimating ManifoldShap using rejection sampling and regression techniques in Appendix C.
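Under the causal structure of Figure 2, where the interventional expectation reduces to a marginal one, Eq. (5) amounts to a reweighted version of the marginal Monte Carlo estimator sketched earlier. A minimal sketch (with `g_hat` any in-manifold classifier returning values in \([0,1]\); the names are illustrative):

```python
import numpy as np

def manifold_value_function(x, f, X_bg, g_hat, eps=1e-12):
    """v_MAN(S) ≈ E[f(X) ĝ(X) | do(X_S = x_S)] / E[ĝ(X) | do(X_S = x_S)]."""
    def v(S):
        X = X_bg.copy()
        idx = list(S)
        X[:, idx] = x[idx]               # intervene on the features in S
        w = g_hat(X)                     # ≈ 1(X ∈ Z), soft manifold membership
        return float((f(X) * w).sum() / (w.sum() + eps))
    return v
```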
**Choosing the sets \(\mathcal{Z}\).** Next, we discuss general-purpose methodologies for choosing sets \(\mathcal{Z}\) that can serve as practical estimates of the data manifold in most cases. One can obtain \(\mathcal{Z}\) by training an out-of-distribution classifier directly. Slack et al. (2020) do so by perturbing each data-point on randomly chosen features, and subsequently using these to train the classifier. In general, users may wish to choose different regions of interest \(\mathcal{Z}\) on an ad hoc basis when computing Shapley values. In what follows, we outline a few specific choices of \(\mathcal{Z}\), each of which satisfies different notions of robustness to off-manifold manipulations. We discuss this at greater length in Section 3.1.
**Definition 2** (Density manifold).: _Given an \(\epsilon>0\), we define the \(\epsilon\)-density manifold (\(\epsilon\)-DM) of the data distribution, denoted as \(\mathcal{D}_{\epsilon}\), as: \(\mathcal{D}_{\epsilon}\coloneqq\{\mathbf{x}\in\mathbb{R}^{d}:p(\mathbf{x})>\epsilon\}\). Here, \(p(\mathbf{x})\) denotes the joint density of the data._
The \(\epsilon\)-DM contains all regions of the feature space where the data density exceeds \(\epsilon\). Using \(\mathcal{Z}=\mathcal{D}_{\epsilon}\) in our value function therefore restricts function evaluations to regions of high density. An alternative way to choose \(\mathcal{Z}\) is via the probability mass captured by \(\mathcal{Z}\), i.e., for a given level \(\alpha\), we may pick sets \(\mathcal{Z}=\mathcal{P}_{\alpha}\) such that \(\mathbb{P}(\mathbf{X}\in\mathcal{P}_{\alpha})\geq\alpha\). One such set can be defined as:
**Definition 3** (Mass manifold).: _Given an \(\alpha>0\), we define the \(\alpha\)-mass manifold (\(\alpha\)-MM) of the data distribution, denoted as \(\mathcal{P}_{\alpha}\), as \(\mathcal{P}_{\alpha}\coloneqq\mathcal{D}_{\epsilon^{(\alpha)}}\), where \(\epsilon^{(\alpha)}\coloneqq\sup\{\epsilon\geq 0:\mathbb{P}(\mathbf{X}\in \mathcal{D}_{\epsilon})\geq\alpha\}\)._
We show in Proposition 9 (Appendix B) that the Lebesgue measure of \(\mathcal{P}_{\alpha}\) is smallest among the sets \(\mathcal{Z}\) with \(\mathbb{P}(\mathbf{X}\in\mathcal{Z})\geq\alpha\). It should be noted that \(\mathcal{P}_{\alpha}\) is not necessarily the unique such set. One can use techniques like kernel density estimation and VAEs to approximate the manifolds described in this section (more details in Appendix F).
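For instance, a kernel density estimate can be used in place of the VAE-based density estimates of Appendix F to approximate \(\mathcal{D}_{\epsilon}\) or \(\mathcal{P}_{\alpha}\). A minimal sketch (the bandwidth and the quantile-based threshold are illustrative assumptions):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_mass_manifold(X_train, alpha=0.999, bandwidth=0.5):
    """Returns an indicator function for an approximate alpha-mass manifold."""
    kde = KernelDensity(bandwidth=bandwidth).fit(X_train)
    dens_train = np.exp(kde.score_samples(X_train))
    # threshold chosen so that roughly a fraction alpha of training points exceed it
    eps_alpha = np.quantile(dens_train, 1 - alpha)
    def in_manifold(X):
        return np.exp(kde.score_samples(X)) > eps_alpha
    return in_manifold
```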
### Robustness to off-manifold manipulation
We say that a Shapley value function is robust to off-manifold manipulation, if changing the model \(f\) outside the data manifold does not lead to 'large' changes in its Shapley values. In this section, we formalise this idea of robustness and show that ManifoldShap satisfies this notion, while the existing value functions do not. First, we present the definition of robustness as used in Yeh et al. (2022), to formalise the notion of off-manifold manipulations.
**Definition 4** (T-robustness (Yeh et al., 2022)).: _Given two models \(f_{1}(\mathbf{x}),f_{2}(\mathbf{x})\) and any probability density \(p(\mathbf{x})\), we say that a value function, \(v_{\mathbf{x},f}\), is strong T-robust if it satisfies the following condition: if \(\max_{\mathbf{x}}|f_{1}(\mathbf{x})-f_{2}(\mathbf{x})|p(\mathbf{x})\leq\delta\), then, \(|v_{\mathbf{x},f_{1}}(S)-v_{\mathbf{x},f_{2}}(S)|\leq T\delta\) for any \(S\subseteq[d]\)._
As per Yeh et al. (2022),"The premise \(\max_{\mathbf{x}}|f_{1}(\mathbf{x})-f_{2}(\mathbf{x})|p(\mathbf{x})\leq\delta\) bounds the maximum perturbation on low density regions." Additionally, Yeh et al. (2022) show that JBShap and RJBShap value functions satisfy strong T-robustness to off-manifold manipulation, while other value functions like MS and CES do not. Likewise, since MS is a special case of IS, the latter also does not satisfy strong T-robustness. On the other hand, ManifoldShap restricted to \(\epsilon\)-density manifold, \(\mathcal{D}_{\epsilon}\), satisfies this notion of robustness.
**Proposition 1**.: _The value function \(v^{\text{\tiny MAN}}_{\mathbf{x},f,\mathcal{D}_{\epsilon}}(S)=\mathbb{E}[f(\mathbf{ X})\mid do(\mathbf{X}_{S}=\mathbf{x}_{S}),\mathbf{X}\in\mathcal{D}_{\epsilon}]\) is strong \(T\)-robust for \(T=1/\epsilon\)._
Proposition 1 shows that with decreasing \(\epsilon\), the robustness parameter \(T\) increases and ManifoldShap gets less robust.
**Alternative definition of robustness.** Definition 4 considers a very specific notion of model perturbation. In particular, the perturbation in the model \(f(\mathbf{x})\) must not exceed \(\delta/p(\mathbf{x})\) for all \(\mathbf{x}\in\mathbb{R}^{d}\) and some \(\delta>0\). This does not encapsulate the case where the function perturbation remains bounded on a region of interest \(\mathcal{Z}\), but may increase arbitrarily outside \(\mathcal{Z}\). For example, we may have the case that the function \(f(\mathbf{x})\) remains fixed on a set \(\mathcal{Z}\) with \(\mathbb{P}(\mathbf{X}\in\mathcal{Z})>0.99\). Robustness of Shapley values should dictate that changing the function outside \(\mathcal{Z}\) should not lead to arbitrarily different Shapley values. We later show that Def. 4 does not provide such robustness guarantees.
To encapsulate this, we provide an alternative definition of robustness, which allows us to take into account model manipulation on sets with small probability mass. First, we define the notion of robustness on a general feature subspace \(\mathcal{Z}^{\prime}\subseteq\mathcal{X}\):
**Definition 5** (Subspace T-robustness).: _Let \(\mathcal{Z}^{\prime}\subseteq\mathcal{X}\) be such
that \(\mathbb{P}(\mathbf{X}\in\mathcal{Z}^{\prime})>0\). We say that a value function \(v_{\mathbf{x},f}\) is strong T-robust on subspace \(\mathcal{Z}^{\prime}\) if it satisfies the following condition: if \(\sup_{\mathbf{x}\in\mathcal{Z}^{\prime}}|f_{1}(\mathbf{x})-f_{2}(\mathbf{x})|\leq\delta\), then, \(|v_{\mathbf{x},f_{1}}(S)-v_{\mathbf{x},f_{2}}(S)|\leq T\delta\) for any \(S\subseteq[d]\)._
A value function satisfying strong T-robustness on \(\mathcal{Z}\) would not result in drastically different Shapley values when the model perturbation is bounded on the set \(\mathcal{Z}\), by some value \(\delta>0\). The above definition allows us to directly consider robustness of value functions on sets based on probability mass, \(\mathcal{P}_{\alpha}\). Moreover, by restricting the function evaluations to a set \(\mathcal{Z}\), ManifoldShap is naturally set up to provide subspace T-robustness guarantee. We formalise this as follows:
**Proposition 2**.: _The value function \(v_{\mathbf{x},f,\mathcal{Z}}^{\text{\tiny{MAN}}}\) is strong T-robust on any set \(\mathcal{Z}^{\prime}\) satisfying \(\mathcal{Z}\subseteq\mathcal{Z}^{\prime}\) with \(T=1\)._
In contrast, we show that all other value functions under consideration do not satisfy this notion of robustness:
**Proposition 3**.: _For any set \(\mathcal{Z}^{\prime}\) with \(\mathbb{P}(\mathbf{X}\in\mathcal{Z}^{\prime})<1\), the IS value function \(v_{\mathbf{x},f}^{\text{\tiny{IS}}}(S)\), the CES value function \(v_{\mathbf{x},f}^{\text{\tiny{CES}}}(S)\), the MS value function \(v_{\mathbf{x},f}^{\text{\tiny{MS}}}(S)\), the JBShap value function \(v_{\mathbf{x},f}^{\text{\tiny{J}}}(S)\) and the RJBShap value function \(v_{\mathbf{x},f}^{\text{\tiny{RJ}}}(S)\) are all not strong T-robust on subspace \(\mathcal{Z}^{\prime}\) for \(|T|<\infty\)._
Consider the family of value functions which _drop_ features in \(\bar{S}\) through randomisation, i.e., \(v_{f,p_{S}}(S)=\mathbb{E}_{\mathbf{X}\sim p_{S}}[f(\mathbf{X})]\). We note that IS, MS, CES and ManifoldShap all fall into this family. For example, when \(p_{S}=p(\textbf{X}\mid do(\textbf{X}_{S}=\textbf{x}_{S}))\) we obtain IS, and when \(p_{S}=p(\textbf{X}\mid\textbf{X}_{S}=\textbf{x}_{S})\) we obtain CES. We show in Appendix A.1 that the choice of \(p_{S}\) in ManifoldShap (i.e. \(p_{Z,\textbf{x}_{S}}\) in Eq. (4)) minimises the Total Variation distance with interventional distribution \(p(\textbf{X}\mid do(\textbf{X}_{S}=\textbf{x}_{S}))\) subject to the condition that \(v_{f,p_{S}}(S)\) is strong T-robust on \(\mathcal{Z}\). This ensures that ManifoldShap values provide reasonable estimation of _causal_ contribution of features.
### Comparison with existing methods
**Causal Accuracy.** Recall that CES attributes feature importance based on feature correlations. Consequently, two highly correlated features may be attributed similar feature importance even if the model under consideration depends on only one of them, i.e., the sensitivity property is violated. ManifoldShap, on the other hand, seeks to estimate the _causal_ contribution of features towards the prediction \(f(\textbf{x})\), as it uses the _interventional_ measure restricted to the manifold \(\mathcal{Z}\) to drop features. The experiments in Appendix G confirm this, as the ManifoldShap results are significantly less sensitive to feature correlations than CES.
Our example in Eq. (2) shows how the explicit dependence of RJBShap on the density can lead to extremely inaccurate Shapley explanations. In Appendix G.2.4, we show that because of its causal nature, ManifoldShap provides significantly more accurate and intuitive explanations. Additionally, unlike RJBShap, ManifoldShap only depends on the density estimation via the indicator \(\mathds{1}(p(\textbf{x})\geq\epsilon)\). Therefore, as we show in Appendix G.2.6, ManifoldShap is significantly more robust to density estimation errors than RJBShap.
Aside from this, Ghalebikesabi et al. (2021) propose Neighbourhood SHAP, a value function aimed to provide explanations for the localised behaviour of the model near the datapoint **x** where explanations are sought. While the authors empirically show the robustness of the methodology against off-manifold perturbations, they do not consider the causal perspective and therefore the main object of interest is not the causal contribution of features.
**Robustness.** As outlined in Section 3.1, ManifoldShap is robust to model changes outside the manifold and therefore is not vulnerable to adversarial attacks as in Slack et al. (2020). In light of this, we argue that ManifoldShap provides a compromise between conditional and interventional Shapley values. It attempts to estimate causal contributions of features, while providing robustness guarantees.
**Trade-off between Accuracy and Robustness.** Restricting function evaluations to the manifold \(\mathcal{Z}\), as in ManifoldShap, means that the resulting Shapley values are dependent on the manifold itself, and may not purely reflect the causal contribution of features. This is because these are no longer pure Interventional Shapley values. This results in a trade-off between robustness to off-manifold manipulation and the 'causal accuracy' of the Shapley values. ManifoldShap gives us flexibility over this trade-off through the size of the manifold \(\mathcal{Z}\). When \(\mathcal{Z}=\mathcal{D}_{\epsilon}\), the size of the manifold is modulated through the \(\epsilon\) parameter. As \(\epsilon\to 0\), the size of the manifold increases and ManifoldShap values tend towards IS values. However, as mentioned above, this comes at the cost of reduced robustness, as the Shapley evaluations include an increasing number of datapoints 'far' from the training data. On the other hand, increasing \(\epsilon\) increases the robustness of Shapley values, while reducing their causal accuracy, as the resulting Shapley values discard a significant number of datapoints which lie outside \(\mathcal{D}_{\epsilon}\).
**Computational Considerations.** Computing CES may be computationally expensive and may require different supervised or unsupervised learning techniques (Frye et al., 2021; Sundararajan and Najmi, 2020; Yeh et al., 2022). In contrast, while ManifoldShap requires estimating a manifold classifier, estimating \(v_{\mathbf{x},f,\mathcal{Z}}^{\text{\tiny{MAN}}}(S)\) does not incur any computational cost over and above computing the interventional expectations. Lemma 1 illustrates this by expressing the ManifoldShap value function as a ratio of interventional expectations. This is even more straightforward when the causal structure is as in Figure 2, and the interventional expectation is equivalent to a marginal expectation. Additionally, to avoid the exponential time complexity of computing the value function for all \(S\subseteq[d]\), we propose a sampling-based estimation in Appendix C.2 which
makes computation of ManifoldShap feasible for high dimensional feature spaces (see Appendix G.2.5).
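One standard way to realize such a sampling-based estimation (a minimal sketch, not necessarily the exact scheme of Appendix C.2) is to average marginal contributions over random feature permutations, avoiding the \(2^{d}\) subset enumeration:

```python
import numpy as np

def sampled_shapley(v, d, n_perm=200, seed=0):
    """Monte Carlo Shapley values from a value function v defined over frozensets."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        S, prev = frozenset(), v(frozenset())
        for i in order:
            cur = v(S | {int(i)})
            phi[int(i)] += cur - prev
            S, prev = S | {int(i)}, cur
    return phi / n_perm
```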
## 4 Robustness in other explanation methods
Shapley values are not the only explanation method affected by off-manifold model behaviour. This problem has also been explored for other explanation methods like LIME [14, 15, 16] and gradient-based methods [17, 18]. For example, Heo et al. [19] illustrate this problem in gradient-based interpretability methods for Neural Networks, showing that these explanations are not stable when the model is manipulated without hurting its accuracy. Numerous solutions have also been proposed: Qiu et al. [20] address this problem for explanation methods like RISE, OCCLUSION and LIME by quantifying a similarity metric for perturbed data, which is then integrated into the explanation methods. Likewise, Saito et al. [20] propose to make LIME robust to off-manifold manipulation by using a GAN to sample more realistic synthetic data, which are then used to generate LIME explanations. Aside from this, Anders et al. [20] propose an alternative robust gradient-based explanation method. However, unlike Shapley values, gradient-based methods rely on model properties (e.g., differentiability), and are not model-agnostic.
## 5 Experimental results
In this section, we conduct experiments on synthetic and real world datasets to demonstrate the utility of ManifoldShap and compare it with existing methods. Instead of training the models, we compute Shapley values for the underlying true functions directly. Additional experiments investigating the sensitivity of the different Shapley methods to changing feature correlations, manifold size and feature dimensions have been included in Appendix G. The code to reproduce our experiments can be found at github.com/amazon-science/manifold-restricted-shapley.
### Synthetic data experiments
Here we investigate the effect of model perturbation in low density regions on Shapley values.
**Data generating mechanism.** In this experiment, \(\mathcal{Y}\subseteq\mathbb{R}\) and \(\mathcal{X}\subseteq\mathbb{R}^{2}\) follow a causal DAG in which \(X_{1}\) is a parent of both \(X_{2}\) and \(Y\). Specifically, the Structural Causal Model (SCM) [13] for the ground truth data generating mechanism is:
\[X_{1}=\epsilon_{1},\qquad X_{2}=\rho X_{1}+\sqrt{1-\rho^{2}}\,\epsilon_{2},\qquad Y=X_{1}.\]
Here, \(\epsilon_{i}\stackrel{\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,1)\) and \(\rho=0.85\) is the correlation between \(X_{1}\) and \(X_{2}\). Next, we define the perturbed models.
**Perturbed models.** We define the following family of perturbed models \(g_{\delta}:\mathcal{X}\rightarrow\mathbb{R}\), parameterised by \(\delta\in\mathbb{R}\).
\[g_{\delta}(\mathbf{X})\coloneqq Y+\delta X_{2}\mathds{1}(\mathbf{X}\not\in \mathcal{P}_{\alpha}).\]
Here, we use VAEs to estimate \(\mathcal{P}_{\alpha}\) (see Appendix F) and choose \(\alpha=1-10^{-3}\). By construction, the models \(g_{\delta}\) should agree with the ground truth on the \(\alpha\)-manifold, i.e. \(g_{\delta}(\mathbf{X})=Y\) when \(\mathbf{X}\in\mathcal{P}_{\alpha}\), but these models differ from the ground truth for \(\mathbf{X}\not\in\mathcal{P}_{\alpha}\). Figure 4 shows the model heatmaps for \(\delta=0,5\) along with the original data. It is impossible to distinguish between these models on the data manifold, as both have test mean squared error of 0.
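A minimal, self-contained sketch of this setup (the radius-based membership check below is only a crude stand-in for the VAE-estimated \(\mathcal{P}_{\alpha}\), and its threshold is an illustrative assumption rather than the value used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.85, 500
x1 = rng.standard_normal(n)
x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
X, y = np.stack([x1, x2], axis=1), x1           # SCM: Y = X1

def in_P_alpha(X, radius=3.5):
    # crude proxy for the alpha-mass manifold of the (correlated) Gaussian data
    return (X**2).sum(axis=1) < radius**2

def g_delta(X, delta):
    # agrees with the ground truth on the manifold, perturbed only off-manifold
    return X[:, 0] + delta * X[:, 1] * (~in_P_alpha(X))
```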
**Results.** Recall that the ground truth model does not depend on \(X_{2}\), so the ground truth Shapley value for feature 2 is \(\phi_{2}=0\). As a result, for any prediction, feature 1 has a greater absolute Shapley value than feature 2, i.e. \(|\phi_{1}|\geq|\phi_{2}|\). We compute Shapley values for \(g_{\delta}\) using different value functions on 500 datapoints \(\{\mathbf{x}^{(i)}\}_{i=1}^{500}\), sampled from the SCM defined above. We compute CES using the ground truth conditional distributions of \(X_{i}\mid X_{j}\) for \(i\neq j\), which can be obtained analytically in this setting. Figure 3 shows the results, with the bar plots on the left of Figures 3(a) and 3(b) showing the most important features as per different value functions for \(\delta=0,5\).
For \(\delta=0\), Figure 3(a) confirms that the IS values of the ground truth model attribute the greatest feature importance to feature 1 for all datapoints. This is expected as the ground truth model does not depend on \(x_{2}\). For ManifoldShap, we observe that for 4% of the datapoints, feature 2 is attributed greater importance. This highlights that the robustness of ManifoldShap comes at the cost of reduced causal accuracy of the Shapley values. Furthermore, it can be seen that the CES value function attributes the greatest importance to feature 2 for more than 30% of the datapoints. This is because CES provides similar Shapley values for positively correlated features. We observe similar behaviour for RJBShap, which attributes the greatest importance to feature 2 for about 20% of datapoints. This happens because RJBShap provides feature contributions for \(\tilde{f}_{p}(\mathbf{x})=f(\mathbf{x})p(\mathbf{x})\) rather than \(f(\mathbf{x})\), and can therefore be misleading.
When \(\delta=5\), Figure 3(b) shows that, for more than 50% of the datapoints, IS attributes greater importance to feature 2 than to feature 1 in the perturbed model. This shows that IS is sensitive to off-manifold perturbation. For ManifoldShap, on the other hand, feature 2 is attributed greater importance for only about \(10\%\) of the datapoints, less than all other baselines.
We have also plotted the difference between the estimated Shapley values and the ground truth IS values for each value function. For a fair comparison between different value functions, we scale the Shapley values so that \(\sum_{i\in\{1,2\}}|\phi_{i}|=1\). As \(\delta\) increases from 0 to 5, we can see that the errors in Shapley values increase for IS, while the errors in ManifoldShap remain more concentrated around 0 than for any other baseline.
The results show that ManifoldShap values, unlike IS, remain robust to off-manifold manipulations, while providing explanations which remain closer to ground truth IS values overall. CES and RJBShap, on the other hand can result in misleading explanations.
### Real world datasets
In this subsection, we evaluate the effect of adversarial off-manifold manipulation of models on Shapley values using real-world datasets. Specifically, using the same setup as in Slack et al. (2020), we show that existing methodologies may fail to identify highly problematic model biases, whereas ManifoldShap can mitigate this problem due to its robustness properties. We consider the causal structure in Figure 2 where the true features \(\tilde{X}_{i}\) are distinguished from input features \(X_{i}\), and therefore IS is equivalent to MS here.
**Datasets.** The COMPAS dataset, collected by ProPublica (Angwin et al., 2016), includes information for 6172 defendants from Broward County, Florida. This information comprises 52 features including defendants' criminal history and demographic attributes. The sensitive attribute in this dataset is defendants' race. The second dataset, Communities and Crime (CC), is a UCI dataset (Dua and Graff, 2017) which includes crime data in communities across the US, where each community constitutes a datapoint comprising 128 features. The sensitive attribute in CC is the percentage of Caucasian population. From here onwards, we use 'race' to refer to the sensitive attribute for both datasets.
**Biased classifier.** Following the strategy of Slack et al. (2020), we construct the binary classifier \(f\) to depend only on the sensitive feature for both datasets. Additional details are given in Appendix G.1.
**Manifold estimation.** As in Slack et al. (2020), we determine the manifold \(\mathcal{Z}\) by training an OOD classifier. In particular, we follow their strategy of perturbing each datapoint on randomly chosen features, and subsequently using these newly generated perturbations to train the OOD classifier.
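One way such an OOD classifier can be built (a sketch in the spirit of this strategy; the number of perturbed features, the perturbation scale, and the classifier choice are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_ood_classifier(X, n_perturbed=3, scale=3.0, seed=0):
    """Label real rows as on-manifold (1) and randomly perturbed copies as OOD (0)."""
    rng = np.random.default_rng(seed)
    X_ood = X.copy().astype(float)
    for row in X_ood:
        idx = rng.choice(X.shape[1], size=n_perturbed, replace=False)
        row[idx] = rng.normal(X[:, idx].mean(axis=0), scale * X[:, idx].std(axis=0) + 1e-9)
    X_all = np.vstack([X, X_ood])
    y_all = np.r_[np.ones(len(X)), np.zeros(len(X_ood))]
    return RandomForestClassifier(n_estimators=200, random_state=seed).fit(X_all, y_all)
```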
**Out-of-manifold perturbation.** To perturb the model outside the manifold \(\mathcal{Z}\), we construct 2 synthetic features (referred to as 'unrelated columns'), as in Slack et al. (2020). For datapoints that lie outside \(\mathcal{Z}\), only the 'unrelated columns' are used to classify the datapoints. However, unlike Slack et al. (2020), these 'unrelated columns' are positively correlated with race. This is done to highlight a shortcoming of CES: even though CES is an on-manifold value function, the positive correlation between the unrelated columns and race 'fools' CES into attributing non-zero credit to the synthetic features.
**Results.** We compute the Shapley values for the perturbed models on 500 datapoints from a randomly chosen held-out dataset. We use the supervised approach to estimate CES as outlined in Appendix E. The barplots in Figures 5(a) and 5(b) show the percentage of data points in the COMPAS and CC datasets, respectively, for which each feature shows up as the top feature as per different value functions. For RJBShap, CES, and IS, there are more data points in both datasets whose top feature is among the 'unrelated columns' than data points whose top feature is race. For IS, this happens as a result of the OOD perturbation of the model, and shows that when using IS, we can hide biases in the model
Figure 4: Heatmaps for ground truth and perturbed models \(g_{\delta}\). Each model has test mean squared error of 0.
Figure 3: Synthetic data experiments for \(\delta=0,5\). The barplots on the left of each subfigure shows the most important features for different Shapley value functions. The boxplots show the approximation errors of the Shapley values for different value functions.
by perturbing the model outside the manifold. For RJBShap, this could be explained by the fact that it explicitly depends on the joint density \(p(\mathbf{x})\) of the data. Since the 'unrelated columns' are positively correlated with race, the dependence of the density \(p(\mathbf{x})\) on these features and on race is similar. As a result, the 'unrelated columns' get non-zero attributions in RJBShap.
This positive correlation between race and the 'unrelated columns' also causes CES to attribute similar importance to the 'unrelated columns' as to race. This can be especially misleading when the data contains multiple correlated features which are not used by the model.
On the other hand, for ManifoldShap, the majority of datapoints have race as the top feature, whereas none of them have a top feature among the 'unrelated columns'. Figure 5 also shows the difference between the estimated Shapley values and the ground truth IS values of the biased model. We have again rescaled the Shapley values so that \(\sum_{i\in[d]}|\phi_{i}|=1\) for a fair comparison between different value functions. We can see that for the feature race, the errors of ManifoldShap are more concentrated around 0 than for any other baseline considered. For the 'unrelated columns', the ManifoldShap values are \(\hat{\phi}_{i}=\phi_{i}=0\), i.e., ManifoldShap satisfies the sensitivity property in this case. This shows that ManifoldShap is significantly more robust to adversarial manipulation of the function outside the manifold, as well as robust to the attribution of credit based on correlations among features.
## 6 Discussion and Limitations
In this paper, we propose ManifoldShap, a Shapley value function which provides a compromise between existing on and off manifold value functions, by providing explanations which are robust to off-manifold perturbations of the model while estimating the causal contribution of features. However, ManifoldShap also has its limitations.
While our work does not make any assumptions on the set \(\mathcal{Z}\), the properties of ManifoldShap are inherently linked to the choice of \(\mathcal{Z}\). ManifoldShap is only robust to perturbations of the model outside \(\mathcal{Z}\), and perturbations inside \(\mathcal{Z}\) could lead to significant changes in the computed Shapley values. It is therefore important to choose a \(\mathcal{Z}\) that is a good representative of the true data manifold, as otherwise, the Shapley values may not be robust to off-manifold perturbations. Additionally, as pointed out in Section 3.2, restricting model evaluations to the set \(\mathcal{Z}\) can reduce the causal accuracy of Shapley values. This becomes especially evident when the data manifold \(\mathcal{Z}\) is _sparse_ or low-dimensional relative to the space \(\mathcal{X}\). We highlight this empirically in Appendix G.2.2. Likewise, as we show in Appendix A, the sensitivity and symmetry properties of ManifoldShap are also dependent on the properties of \(\mathcal{Z}\). It is therefore worth exploring methodologies of choosing \(\mathcal{Z}\) which provide the ideal trade-off between desirable properties like causal accuracy and robustness of explanations. We believe these limitations suggest interesting research questions that we leave for future work.
#### Acknowledgements
We would like to thank Dominik Janzing for his valuable suggestions and insightful discussions. We are also grateful to Kailash Budhathoki and Philipp Faller for providing feedback on an earlier version of the manuscript.
|
2306.08804 | PEACE: Cross-Platform Hate Speech Detection- A Causality-guided
Framework | Hate speech detection refers to the task of detecting hateful content that
aims at denigrating an individual or a group based on their religion, gender,
sexual orientation, or other characteristics. Due to the different policies of
the platforms, different groups of people express hate in different ways.
Furthermore, due to the lack of labeled data in some platforms it becomes
challenging to build hate speech detection models. To this end, we revisit if
we can learn a generalizable hate speech detection model for the cross platform
setting, where we train the model on the data from one (source) platform and
generalize the model across multiple (target) platforms. Existing
generalization models rely on linguistic cues or auxiliary information, making
them biased towards certain tags or certain kinds of words (e.g., abusive
words) on the source platform and thus not applicable to the target platforms.
Inspired by social and psychological theories, we endeavor to explore if there
exist inherent causal cues that can be leveraged to learn generalizable
representations for detecting hate speech across these distribution shifts. To
this end, we propose a causality-guided framework, PEACE, that identifies and
leverages two intrinsic causal cues omnipresent in hateful content: the overall
sentiment and the aggression in the text. We conduct extensive experiments
across multiple platforms (representing the distribution shift) showing if
causal cues can help cross-platform generalization. | Paras Sheth, Tharindu Kumarage, Raha Moraffah, Aman Chadha, Huan Liu | 2023-06-15T01:18:02Z | http://arxiv.org/abs/2306.08804v2 | # PEACE: Cross-Platform Hate Speech Detection
###### Abstract
Hate speech detection refers to the task of detecting hateful content that aims at denigrating an individual or a group based on their religion, gender, sexual orientation, or other characteristics. Due to the different policies of the platforms, different groups of people express hate in different ways. Furthermore, due to the lack of labeled data in some platforms it becomes challenging to build hate speech detection models. To this end, we revisit if we can learn a generalizable hate speech detection model for the cross platform setting, where we train the model on the data from one (source) platform and generalize the model across multiple (target) platforms. Existing generalization models rely on linguistic cues or auxiliary information, making them biased towards certain tags or certain kinds of words (e.g., abusive words) on the source platform and thus not applicable to the target platforms. Inspired by social and psychological theories, we endeavor to explore if there exist inherent causal cues that can be leveraged to learn generalizable representations for detecting hate speech across these distribution shifts. To this end, we propose a causality-guided framework, **PEACE**, that identifies and leverages two intrinsic causal cues omnipresent in hateful content: the overall sentiment and the aggression in the text. We conduct extensive experiments across multiple platforms (representing the distribution shift) showing if causal cues can help cross-platform generalization.
Keywords:Causal Inference Generalizability Hate-Speech Detection.
## 1 Introduction
**Warning:**_this paper contains contents that may be offensive or upsetting._
Social media sites have served as global platforms for users to express and freely share their opinions. However, some people utilize these platforms to share hateful content targeted toward other individuals or groups based on their religion, gender, or other characteristics, resulting in the generation and spread of hate speech. Failing to moderate online hate speech has been shown to have negative impacts in real-world scenarios, ranging from mass lynchings to a global increase in violence toward minorities [20]. Thus, building hate speech detection models has become a necessity to limit the spread of hatred. Recent years have witnessed the development of these models across disciplines [28; 14; 40; 2].
Hate speech varies based on the platform and the specific targets of the speech, influenced by factors such as social norms, cultural practices, and legal frameworks. Platforms with strict regulation policies may lead to users expressing hate in subtle ways (e.g., sarcasm), while platforms with lenient policies may have more explicit language. Collecting large labeled datasets for hate speech detection models is challenging due to the emotional burden of labeling and the requirement for skilled annotators [22]. One solution is to train a generalizable model under a cross-platform setting, leveraging the labeled data from other platforms.
Recent works developed to improve cross-platform performance utilize either linguistic cues such as vocabulary [30] or Parts-Of-Speech (POS) tags [23]. Another direction leverages datasets with auxiliary information, such as the implications of various hate posts [18] or the groups or individuals attacked in the hate post [16]. Although effective, these methods suffer from shortcomings: linguistic methods form spurious correlations with certain POS tags (e.g., adjectives and adverbs) or a particular category of words (e.g., abusive words). In addition, methods that utilize auxiliary information (e.g., implications of the post or the target(s)) are not extendable, as the auxiliary information may not be available for large datasets or different platforms.
In contrast to previous approaches, we contend that identifying inherent causal cues is necessary for developing effective cross-platform hate speech detection models that can distinguish between hateful and non-hateful content. Since causal cues are immune to distribution shifts [6], leveraging them for learning the representations can aid in better generalization. Various studies in social sciences and psychology verify the existence of several cues that can aid in detecting hate [35; 10; 19; 5; 45] such as the hater's prior history, the conversational thread, overall sentiment, and aggression in the text. However, when dealing with a cross-platform setting, several cues may not be accessible. For instance, not all platforms allow access to user history or the entire conversation thread. Thus, we propose to leverage two causal cues namely, the overall sentiment and the aggression in the text. Both these cues can be measured easily with the aid of aggression detection tasks [3] and sentiment analysis task [44]. Moreover, both aggression and sentiment are tightly linked to hate speech. For instance, due to the anonymity on online platforms, users adopt more aggressive behavior when expressing hatred towards someone [32]. Thus, the aggression in the content could act as a causal cue to indicate hate. Similarly, hateful content is meant to denigrate someone. Thus, the sentiment also serves as a causal cue [31].
To this end, we propose a novel causality-guided framework, namely, **P**latform-ind**E**pendent c**A**usal **C**ues for generalizable hat**E** speech detection (**PEACE**4), that leverages the overall sentiment and the aggression in the text to learn generalizable representations for hate speech detection across different platforms. We summarize our main contributions as follows:
Footnote 4: The code for PEACE can be accessed from: [https://github.com/paras2612/PEACE](https://github.com/paras2612/PEACE)
* We identify two causal cues, namely, the overall sentiment and the aggression in the text content, to learn generalizable representations for hate speech detection.
* We propose a novel framework, namely, **PEACE** consisting of multiple modules to capture the essential latent features helpful for predicting sentiment and aggression. Finally, we utilize these features and the original content to learn generalizable representations for hate speech detection.
* Experimental results on five different platforms demonstrate that **PEACE** achieves state-of-the-art performance compared with vital baselines, and further experiments highlight the importance of each causal cue and interpretability of **PEACE**.
## 2 Related Work
Social media provides a vast and diverse medium for users to interact with each other effectively and share their opinions. Unfortunately, however, a large share of users exploits these platforms to spread and share hateful content mainly directed toward an individual or a group of people. Considering the massive volume of online posts, it is impractical to moderate them manually. To address this shortcoming, researchers have proposed various methods ranging from lexical-based approaches [15, 23, 39] to deep learning-based approaches [25, 37, 33].
However, these models have been shown to possess poor generalization capabilities. Hate speech on social media is highly volatile and is constantly evolving. A hate speech detection model that fails to generalize well may exhibit poor detection skills when dealing with a new topic of hate [27, 11] or when dealing with different styles of expressing hate [9, 1], thus making it critical to develop generalizable hate speech detection models. Over recent years there has been an increase in developing generalizable models.
Generalizable hate speech detection methods can be broadly classified into two parts, namely models that leverage auxiliary information such as implications of hate posts [18], information of the dataset annotators [42], or user attributes [37]. For instance, the authors of the work [18] proposed a generalizable model for implicit hate speech detection that utilizes the implications of hateful posts and learns contrastive pairs for a more generalizable representation of the hate content. Similarly, the authors of the work [42] argue that when dealing with subjective tasks such as hate speech detection, it is hard to achieve agreement amongst annotators. To this end, they propose leveraging the annotator's characteristics and the ground truth label during the training to learn better representations and improve hate speech detection. Unlike annotators' information, the authors of [37] trained a bert model with users' profiles and related social environment and generated tweets to infer better representations for hate speech detection. Although these models have improved generalizability, the auxiliary information utilized may not be easily accessible and challenging to get when dealing with cross-platform settings.
Since language models are trained on large corpora, they exhibit some generalization prowess [36]. However, the generalization can be improved by finetuning these models on datasets related to a specific downstream task. Thus, the second category leverages language models such as BERT [12] and finetuning them on large hate speech corpora [7, 24]. For instance, the authors of [7] finetuned a BERT model on approximately 1.6 million hateful data points from Reddit and generated HateBERT, a state-of-the-art model for hate speech detection. Similarly, the authors of [24] finetuned BERT for explainable hate speech detection. Aside from these works, some methods focus on
leveraging lexical cues such as the vocabulary used [34], emotion words and different POS tags in the content [23], and target-specific keyphrases [13].
Although these methods have been shown to improve hate speech detection capabilities, they either require large labeled corpora for finetuning language models, which may not be feasible in real-world settings where the number of posts generated at any moment is extremely large, or rely on lexical features, which may not help because many social media posts are filled with grammatical inconsistencies (such as misspelled words). In this work, inspired by works in the social and psychological fields, we leverage inherent characteristics readily available in the text, such as the aggression and the overall sentiment of the text, to learn generalizable representations.
## 3 Methodology
This section describes the methodology behind our **PEACE** framework. As shown in Figure 1 the framework consists of two major components: (i) a cue extractor component and (ii) a hate detector component. The cue extractor component extracts the proposed innate cues, sentiment, and aggression. Moreover, this component is responsible for navigating the hate detector component toward learning a cross-platform generalized representation for hate speech detection. Consequently, the hate detector component classifies a given input to hate or non-hate classes while attending to the causal guidance of the cue extractor. In the subsequent sections, we discuss the cue extractor and hate detector components in detail.
Figure 1: Proposed framework architecture for **PEACE**. The pre-trained sentiment and aggression modules guide the representation learning process to ensure generalizability.
### Causal Cue Extraction
We propose utilizing sentiment and aggression as two inherent causal cues for learning generalizable representations for better hate speech detection. Therefore, the cue extractor consists of two modules, one for extracting sentiment and one for aggression. Given an input text \(X=(x_{1},x_{2},...,x_{k})\), the purpose of the cue extractor model is to generate an attention vector \(C_{k\times 1}\) where \(k\) is the input sequence length. And here, the vector \(C_{k\times 1}\) should represent an accumulation of sentiment and aggression score for each token in the sequence \(X\), i.e., for a given token in the input \(X\), \(C_{k\times 1}\) contains how vital that token is towards the overall input's sentiment and/or aggression. We will first discuss the architecture of each cue module (sentiment and aggression) and then elaborate on how the attention vector \(C_{k\times 1}\) is generated.
#### 3.1.1 Sentiment Module
The sentiment module is a transformer encoder stack with \(n\) encoders that have learned a function \(s_{\gamma}\) such that given an input text \(X=(x_{1},x_{2},...,x_{k})\), it can classify the sentiment of \(X\), i.e., this module is a pre-trained transformer-based large language model finetuned for the sentiment detection downstream task where given an input text \(X\), it predicts the sentiment label \(y\) (positive, neutral, negative), \(y=s_{\gamma}(X)\).
#### 3.1.2 Aggression Module
Similarly, the aggression module is also a transformer encoder stack with \(n\) encoders that have learned a function \(a_{\lambda}\) such that given an input text \(X=(x_{1},x_{2},...,x_{k})\), it can classify whether \(X\) contains aggressive speech, i.e., this module is a pre-trained transformer-based large language model finetuned for the aggression detection downstream task where given an input text \(X\), it predicts the aggression label \(y\) (aggressive, non-aggressive), \(y=a_{\lambda}(X)\).
It is essential to note here that the cue extraction modules' weights are frozen when we conduct the end-to-end training of the hate detector component, i.e., we do not finetune the sentiment and aggression modules with the hate speech data.
#### 3.1.3 Attention Extraction for Individual Causal Cues
As mentioned above, the cue extractor component aims to integrate the two cue modules, sentiment, and aggression, towards generating the final causal cue guidance as an attention vector \(C_{k\times 1}\). The first step towards this objective is extracting each individual attention vector from the cue modules. Since both the sentiment and aggression cue modules are same-sized transformer encoder stacks (\(n\)-encoders), the attention extraction process is the same for both modules. Let's take the sentiment cue module; it contains \(n\)-encoder blocks and thus consists of \(n\) multi-head attention layers. The multi-head attention layer of a given encoder block can be defined as the Equation 1.
\[\begin{split}\mathrm{MultiHead}(Q,K,V)&=\mathrm{head}_{1}(Q,K,V)\oplus\dots\oplus\mathrm{head}_{n}(Q,K,V),\\ \text{where}\quad\mathrm{head}_{i}(Q,K,V)&=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{i}}}\right)V\end{split} \tag{1}\]
Here \(Q,K,V\) are Query, Key, and Value vectors of the transformer block \(i\), and \(d_{i}\) is the hidden state size [38].
Our goal in using the sentiment cue module attention is to figure out the words/phrases in the input text that has particular importance towards the sentiment of the text. Therefore, we need to consider an encoder block that gives comprehensive attention to the whole input. Previous research shows that the attention heads in the BERT model's last encoder block have very broad attention - i.e., attending broadly to the entire input [8]. The architecture we consider for the sentiment module is similar to the BERT architecture (transformer encoder blocks); thus, we select the last (\(n^{th}\)) encoder block's multi-head attention layer as the candidate to extract the final attention from the sentiment module. We take the mean pooling output of the \(n^{th}\) block's multi-headed attention layer as a matrix \(M_{k\times k}\) where \(k\) is the input sequence length.
\[M_{k\times k}=Mean(MultiHead_{n}(Q,K,V)) \tag{2}\]
Then the final attention vector \(S_{k\times 1}\) for the input sequence is obtained by selecting the attention at the CLS token of the matrix \(M_{k\times k}\). Following the same process, we extract the aggression attention vector \(A_{k\times 1}\) from the aggression cue module.
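As an illustration of this extraction step, the following sketch uses PyTorch and Hugging Face Transformers to compute a cue attention vector from the last encoder block; the checkpoint name is a placeholder for any encoder finetuned on the corresponding cue task and is not taken from the authors' released code.

```python
# Hedged sketch of the attention extraction in Eqs. (1)-(2); the checkpoint
# name is a placeholder, not necessarily the model used by the authors.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def extract_cue_attention(model, tokenizer, text):
    """Return the CLS-row attention vector (length k) of the last encoder block."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs, output_attentions=True)
    last_block = out.attentions[-1]        # (batch, heads, k, k) of the n-th encoder
    M = last_block.mean(dim=1)             # mean-pool over attention heads, Eq. (2)
    return M[0, 0, :]                      # attention at the CLS token -> S (or A)

tok = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment")
sentiment_model = AutoModelForSequenceClassification.from_pretrained(
    "cardiffnlp/twitter-roberta-base-sentiment")
S = extract_cue_attention(sentiment_model, tok, "example input text")
```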
#### 3.1.4 Cue Integration
The final step towards creating the attention vector \(C_{k\times 1}\) is to aggregate each attention vector we get from cue modules. i.e., we need to weigh and aggregate the token attentions from each cue module to get the final accumulated attention vector \(C_{k\times 1}\). Once the representative attention vectors from both sentiment and aggression modules are extracted, we input the concatenated vectors through the attention selector head (\(g_{\theta}\)). The attention selector head is a fully connected neural network that takes concatenated aggression and sentiment attention to map the final attention vector \(C_{k\times 1}\).
\[C_{k\times 1}=g_{\theta}([S_{k\times 1}\oplus A_{k\times 1}]) \tag{3}\]
The intuition behind the attention selector head is that we need our framework to learn how to weigh the sentiment and aggression cues relevant to the context of the given input. For example, there can be cases where aggression could be the stronger cue towards hate speech than sentiment or vice versa.
### Hate Detector
The hate detector component consists of a similar transformer encoder stack to learn the semantic representation of the given input. However, the output of the cue detector component, attention vector \(C_{k\times 1}\), will be provided as an auxiliary signal. We select the representation learned by the hate detector blocks as \(R_{k\times d}\) where \(k\) is the sequence length, and \(d\) is the hidden state size of an encoder block. Then the extracted attention is used to navigate the hate detector to adjust the representation to incorporate the causal cues. The final representation \(F_{k\times d}\) is calculated as; \(F_{k\times d}=R_{k\times d}\odot C_{k\times 1}\). Then the representation corresponding to the end of the sequence token (\(F_{1\times d}^{CLS}\)) is passed through the classification head (\(f_{\phi}\)). The classification head (\(f_{\phi}\)) is a fully connected neural network that takes the learned semantic embedding as the input and predicts the hate label \(\hat{y}\) as \(\hat{y}=f_{\phi}(F_{1\times d}^{CLS})\).
The overall framework is trained via the cross-entropy loss for the classification, where \(y\) is the ground truth.
\[L=-\sum_{i}y_{i}\log(\hat{y}_{i}) \tag{4}\]
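For concreteness, a minimal PyTorch sketch of the cue integration (Eq. 3), the modulated representation, and the classification loss (Eq. 4) is given below; the per-token form of the attention selector head and all layer sizes are our assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of Eqs. (3)-(4); layer sizes and the per-token reading of the
# attention selector head g_theta are assumptions for illustration only.
import torch
import torch.nn as nn

class PEACEHead(nn.Module):
    def __init__(self, hidden_size=768, num_labels=2):
        super().__init__()
        # g_theta: maps the concatenated cue attentions [S ; A] to C (Eq. 3)
        self.attn_selector = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
        # f_phi: classification head on the CLS-token representation
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, R, S, A):
        # R: (k, d) token representations from the hate-detector encoder stack
        # S, A: (k,) sentiment / aggression attention vectors from the cue modules
        C = self.attn_selector(torch.stack([S, A], dim=-1))   # (k, 1)
        F = R * C                                              # element-wise modulation
        return self.classifier(F[0])                           # logits from F^CLS

head = PEACEHead()
R, S, A = torch.randn(128, 768), torch.rand(128), torch.rand(128)
logits = head(R, S, A)
loss = nn.CrossEntropyLoss()(logits.unsqueeze(0), torch.tensor([1]))  # Eq. (4)
```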
## 4 Experiments
This section discusses the experimental settings used to validate our framework, including the datasets and evaluation metrics used, and the baselines, followed by a detailed analysis of the experiments. We conducted a series of experiments to understand whether the identified causal cues, namely the sentiment and the aggression in the text, can aid in learning generalizable representations for hate speech detection and answer the following research questions.
* **RQ.1** Does the identified causal cues, namely, sentiment and aggression, enhance the generalization performance?
* **RQ.2** What is the importance of each causal cue in improving the generalization performance (ablation study)?
* **RQ.3** Which features does the **PEACE** utilize in input and whether these features are causal when compared to the other baselines?
### Datasets and Evaluation metrics
We perform binary classification of detecting hate speech on various widely used benchmark hate datasets. Since we aim to verify cross-platform generalization, for cross-platform evaluation, we use four datasets from different platforms: Wikipedia, Facebook, Reddit, GAB, and Twitter-Reddit-YouTube. All datasets are in the English language.
| **Dataset** | **Description** | **Number of Posts/Comments** | **Hateful Posts/Comments** | **Percent of Hateful Posts/Comments** |
| --- | --- | --- | --- | --- |
| GAB [16] | A collection of posts from the GAB social media platform | 31,640 | 7,657 | 24.2 |
| Reddit [29] | Conversation threads from the Reddit platform | 13,633 | 4,219 | 31 |
| Wikipedia [41] | A collection of comments on the Wikipedia website | 113,728 | 22,796 | 20 |
| Twi-Red-You | Social media comments from three sites, namely, Twitter, Reddit, and YouTube | 86,283 | 49,273 | 57.2 |
| FRENK | Social media comments from Facebook targeting LGBT and Migrants | 10,034 | 3,592 | 35.8 |

Table 1: Dataset statistics of the experimental datasets with corresponding platforms and percentage of hateful comments or posts.
Wikipedia dataset [41] is a collection of user comments from the Wikipedia platform consisting of binary labels denoting whether a comment is hateful. Reddit [29] is a collection of conversation threads classified into hate and not hate. GAB [16] is a collection of annotated posts from the GAB website. It consists of binary labels indicating whether a post is hateful or not. Finally, Twitter-Reddit-YouTube [17] is a collection of posts and comments from three platforms: Twitter, Reddit, and YouTube. It contains ten ordinal labels (sentiment, (dis)respect, insult, humiliation, inferior status, violence, dehumanization, genocide, attack/defense, hate speech), which are debiased and aggregated into a continuous hate speech severity score (hate speech score). We binarize this data such that any data with a hate speech score less than 0.5 is considered non-hateful and vice-versa. Although Twi-Red-You and Reddit both contain data from Reddit, these data do not necessarily have the same distribution. The distribution of datasets from the same platform can still differ due to variations in the timestamps, targets, locations, and demographic attributes. The FRENK dataset [21] contains Facebook comments in English and Slovene covering LGBTQ and Migrant targets. We only consider the English dataset. The dataset was manually annotated for different types of unacceptable discourses (e.g., violence, threat). We use the binary hate speech classes hate and not-hate. A summary of the datasets can be found in Table 1. For comparison with baseline methods, macro F-measure (F1) is used as an evaluation metric for validation.
### Baselines
* **ImpCon** [18]: this baseline utilizes contrastive learning with data augmentation to map similar posts closer to each other in the representation space to enable better generalization.
* **POS+EMO** [23]: this baseline proposed to use linguistic cues such as POS tags, stylometric features, and emotional cues derived from different words and the global NRC emotion lexicon [26] to enhance the generalization capabilities for multilingual cross-domain hate speech detection.
* **HateBERT** [7]: finetunes the BERT-base model using approximately 1.5 million Reddit messages published by communities suspended for promoting hateful content. It results in a shifted BERT model that has learned language variety and hate polarity (e.g., hate, abuse). We report the results of fine-tuned HateBERT for all the datasets.
* **HateXplain** [24]: fine-tuned using hate speech detection datasets from Twitter and Gab for a three-class classification task (hate, offensive, or normal). It combines human-annotated rationales and BERT to improve performance by reducing unintended bias toward target communities. For each dataset, we present the results of fine-tuned HateXplain.
Both HateBERT and HateXplain are not explicitly designed for generalizability but primarily for better hate speech detection. We include these baselines as they are state-of-the-art hate speech detection methods, and due to the generalization capabilities of large language models these baselines do possess better generalization [43, 18].
### Implementation Details
Our framework **PEACE** is implemented using the Huggingface Transformers library5. For our sentiment and aggression modules, we used existing RoBERTa-base models that have been finetuned for the sentiment and aggression downstream tasks [4]. Both these models are finetuned on a plethora of social media posts and have shown good performance in detecting sentiment and aggression in text. Moreover, we used a pre-trained RoBERTa-base model as our hate detector encoder blocks where \(n=12\).
Footnote 5: [https://huggingface.co/docs/transformers](https://huggingface.co/docs/transformers)
The overall architecture was trained using the cross-entropy loss with class balancing and optimized with the Adam optimizer. The learning rate was set to the standard value of 0.00002, and the dropout rate was 0.2 for the best performance. For learning **PEACE**, we trained the framework on a 40 GB VRAM NVIDIA GeForce RTX 3090 GPU with the early-stopping strategy.
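The snippet below sketches how the frozen cue modules and the trainable hate-detector encoder could be assembled with the Transformers library; the aggression checkpoint path is a placeholder, since the exact finetuned models of Ref. [4] are not pinned down here.

```python
# Hedged sketch of assembling PEACE's components; checkpoint names are
# placeholders for the RoBERTa-base sentiment/aggression models cited as [4].
from transformers import AutoModel, AutoModelForSequenceClassification

sentiment = AutoModelForSequenceClassification.from_pretrained(
    "cardiffnlp/twitter-roberta-base-sentiment")            # cue module s_gamma
aggression = AutoModelForSequenceClassification.from_pretrained(
    "path/to/aggression-finetuned-roberta")                 # cue module a_lambda (placeholder)
hate_encoder = AutoModel.from_pretrained("roberta-base")    # hate-detector encoder (n = 12)

# The cue modules stay frozen; only the hate detector, g_theta and f_phi are updated.
for module in (sentiment, aggression):
    for p in module.parameters():
        p.requires_grad = False
```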
| **Source** | **Target** | **HateBERT** | **ImpCon (AugCon variant)** | **HateXplain** | **POS+EMO** | **PEACE** |
| --- | --- | --- | --- | --- | --- | --- |
| Twi-Red-You | GAB | 0.58 | 0.58 | 0.60 | 0.54 | **0.63** |
| Twi-Red-You | Reddit | 0.71 | 0.64 | **0.74** | 0.54 | **0.74** |
| Twi-Red-You | Wikipedia | 0.71 | 0.70 | 0.70 | 0.60 | **0.78** |
| Twi-Red-You | Twi-Red-You | **0.96** | 0.94 | 0.92 | 0.87 | 0.95 |
| Twi-Red-You | FRENK | 0.46 | 0.44 | 0.48 | 0.45 | **0.53** |
| GAB | GAB | **0.84** | 0.65 | **0.84** | 0.76 | 0.76 |
| GAB | Reddit | 0.69 | 0.64 | 0.70 | 0.56 | **0.71** |
| GAB | Wikipedia | 0.74 | 0.64 | 0.70 | 0.49 | **0.78** |
| GAB | Twi-Red-You | 0.61 | **0.71** | 0.61 | 0.59 | 0.70 |
| GAB | FRENK | **0.71** | 0.57 | 0.60 | 0.59 | 0.69 |
| Reddit | GAB | 0.56 | 0.51 | 0.59 | 0.53 | **0.61** |
| Reddit | Reddit | 0.88 | 0.84 | **0.89** | 0.59 | 0.88 |
| Reddit | Wikipedia | 0.66 | 0.63 | 0.64 | 0.56 | **0.74** |
| Reddit | Twi-Red-You | 0.73 | 0.70 | 0.77 | 0.65 | **0.78** |
| Reddit | FRENK | 0.42 | 0.42 | 0.44 | 0.49 | **0.54** |
| Wikipedia | GAB | 0.65 | 0.63 | 0.64 | 0.56 | **0.68** |
| Wikipedia | Reddit | 0.73 | 0.71 | **0.74** | 0.58 | 0.72 |
| Wikipedia | Wikipedia | 0.95 | 0.93 | 0.86 | 0.94 | **0.97** |
| Wikipedia | Twi-Red-You | 0.73 | 0.72 | 0.74 | 0.69 | **0.78** |
| Wikipedia | FRENK | 0.60 | 0.51 | 0.61 | 0.52 | **0.65** |
| FRENK | GAB | 0.65 | 0.67 | 0.63 | 0.58 | **0.69** |
| FRENK | Reddit | 0.62 | 0.66 | 0.66 | 0.55 | **0.71** |
| FRENK | Wikipedia | 0.67 | 0.76 | 0.73 | 0.53 | **0.81** |
| FRENK | Twi-Red-You | 0.65 | 0.65 | 0.64 | 0.62 | **0.78** |
| FRENK | FRENK | 0.78 | **0.79** | 0.75 | 0.72 | 0.78 |

Table 2: Cross-platform and in-dataset evaluation results for the different baseline models compared against **PEACE**. Boldfaced values denote the best performance among different baselines.
### RQ.1 Performance Comparison
#### 4.4.1 Cross-Platform Generalization
We compare the different baseline models with **PEACE** on five real-world datasets. To evaluate the generalization capabilities of the models for each dataset, we split the data into train and test tests. We train all the models on the training data for one platform and evaluate the test sets of all the platforms. Table 2 demonstrates the performance comparison across the different test sets for the macro-F1 metric. The column **Platforms** showcases the Source platform on which the models were trained and the Target platforms used for evaluation. For each source dataset, we show the Average Performance of each model in both in-platform and cross-platform settings. As a result, we have the following observations regarding the cross-platform performance w.r.t. RQ.1:
* Overall, **PEACE** consistently yields the best performance across cross-platform evaluation for all the datasets while maintaining good in-platform macro F1. Comparing only the cross-platform performance, **PEACE** leads to a 5% improvement when trained on the Twi-Red-You dataset, 3% improvement for the GAB dataset, 6% improvement for Reddit, 3% improvement for the Wikipedia dataset, and 4% improvement for FRENK dataset.
* Among the four baselines, HateBERT serves as the strongest baseline in most cases, followed by HateXplain. This result is justified as both HateBERT and HateXplain are fine-tuned BERT models on large corpora of hateful content. We further fine-tune both HateBERT and HateXplain for each dataset. ImpCon performs well for some of the combinations, while for others, it cannot outperform HateBERT and HateXplain. We believe this is because the AugCon variant utilizes simple data augmentation. As a result, it might not be able to learn as good representations as the ImpCon variant that leverages the implications of hate. Furthermore, the utilization of the ImpCon variant is a challenging task in real-world scenarios, as the implications are not readily available for large datasets.
* The linguistic feature-based baseline (POS + EMO) doesn't generalize well to these datasets. We argue this is because the posts in these datasets are highly unstructured and grammatically incorrect. Even after pre-processing the inferred POS tags and emotion words may not be reflective of the hate content. As a result, the reliance on these features hurts the generalization performance.
* The majority of the baselines attain improved performance when trained on the Wikipedia dataset. We argue this is because of the size of the dataset. Among these datasets, Wikipedia is the largest, indicating that a model can generalize better when it is trained on large datasets.
#### 4.4.2 Cross-Target Generalization
Furthermore, we also conducted another experiment for the FRENK dataset to evaluate how the different models generalize in a cross-target setting, where the datasets belong to the same platform (i.e., have similar ways of expressing hate) but discuss different targets of hate. Along with the hate labels, the FRENK dataset also provides the targets of hate in the dataset, namely, _LGBTQ_ and _Migrants_. Table 3 demonstrates the performance comparison for the macro-F1 metric.
We had the following observations regarding the cross-target generalization performance w.r.t. **RQ.1**:
* Comparing the cross-target generalization, we observe that **PEACE** leads to an average gain of 4% over the baselines. The results indicate that utilizing causal cues such as the overall sentiment and the aggression aids in learning generalizable representations and improves cross-target generalization performance.
* Across the different baselines, HateBERT and ImpCon perform the best. The overall performance of HateBERT indicates that large language models such as BERT, when fine-tuned on a particular downstream task (fine-tuning BERT on hate content resulted in the generation of HateBERT), can lead to competitive generalization capabilities. Furthermore, the ImpCon model performs well as it leverages data augmentation, which results in more training data and leads to better generalization.
### RQ.2 Importance of each cue
To assess the individual importance of the different causal cues used in **PEACE** with regard to the performance, we conduct the following experiments. We consider three variants of **PEACE**: one which utilizes only sentiment as the causal cue, namely, _Sentiment_; one which utilizes only aggression as the causal cue, namely, _Aggression_; and one which utilizes a RoBERTa-base classifier without any causal cues, namely, _Base Roberta_.
| **Source** | **Target** | **HateBERT** | **ImpCon (AugCon variant)** | **HateXplain** | **POS+EMO** | **PEACE** |
| --- | --- | --- | --- | --- | --- | --- |
| Migrants | LGBTQ | 0.74 | 0.68 | 0.65 | 0.61 | **0.78** |
| LGBTQ | Migrants | 0.66 | 0.67 | 0.64 | 0.58 | **0.72** |

Table 3: Cross-target evaluation results for the different baseline models compared against **PEACE**. Boldfaced values denote the best performance among different baselines.
Figure 2: Comparison of cross-platform macro-F1 score to calculate the importance of each cue compared with the final model for Reddit and GAB datasets.
We conduct cross-platform experiments by training these three variants on the Reddit and the GAB datasets. The results obtained can be seen in Figure 2(a) for Reddit and Figure 2(b) for GAB. As observed, **PEACE** performs the best when both causal cues are considered. The results can deteriorate by as little as 5% or as much as 13% without the inclusion of causal cues. Among the three variants, it is observed that **PEACE** mostly benefits from the aggression cue, and for some datasets it benefits from the sentiment cue. The main reason aggression is a strong cue is that aggression and hate are closely related tasks, and earlier works have shown that aggression leads to hatred [35]. However, the base model consistently does worst, indicating that the utilization of causal cues is important to enhance the generalization performance for hate speech detection.
### RQ.3 Case Study
Here we provide a case study that verifies the importance of causal cues in identifying the correct context for detecting hate speech. Moreover, here we visually compare **PEACE**'s token-level attention with the baseline models HateXplain and ImpCon. In order to visualize the token importance of a given model towards its prediction, we followed a similar procedure as the cue extractor [8], where the final encoder block's attention layer was utilized to accumulate the token importance by visualizing the attention weights.
We randomly sampled hate speech text from the Reddit and Gab platforms to select candidate examples for the case study. Table 4 shows a few such samples with the attention token importance visualization. In the **PEACE** row, we highlight the tokens attended to by the sentiment module and, in orange, those attended to by the aggression module. The example from the Gab platform is an instance of hate towards feminist liberals. The word _"sheeple"_ and the phrase _"get it one day"_ can be considered as the deciding components of the text being hate speech. In contrast to HateXplain and ImpCon, **PEACE** is attending to the
Table 4: Case study illustrating the different features/tokens chosen as important tokens to detect hateful content across the different models (HateXplain, ImpCon, and **PEACE**) on examples from the Gab and Reddit platforms. Darker shades of the color represent the importance level of the token.
word _"sheeple"_ correctly. And we see that both the sentiment and aggression modules are giving high importance to _"sheeple."_ We have a similar observation about the phrase _"get it one day"_, where **PEACE** is successful in giving more attention to that phrase towards hate speech detection. A notable observation here is that the sentiment module is attending to the above phrase well, which could be the reason behind **PEACE** successfully identifying the correct context towards hate.
The next example, from the Reddit platform, was a complex sentence for hate speech detection, given that hate is implied, not directly expressed. As we can see, both the ImpCon and HateXplain models attend to the word _"putridity"_ but not to the critical contextual components that signify implicit hate, such as _"forced eradication"_ and _"unworthy."_ This example illustrates the issue with vocabulary-based approaches to generalized hate speech detection. On the contrary, we can see that the sentiment and aggression modules accurately attend to the _"forced eradication"_ and _"unworthy"_ phrases, navigating **PEACE** to correctly identify the hate speech context.
## 5 Limitations and Error Analysis
In this section, we conduct an error analysis to better understand our work's limitations and aid future work in cross-platform generalized hate speech detection. For this analysis, we select the FRENK dataset (Facebook) as the testing dataset, given it contains fine-grained information about the data, such as hate targets (LGBTQ vs. migrants) and hate types (offense vs. violence). We used the **PEACE** models trained on other platforms (Twitter, Gab, Reddit, and Wiki) to run the test on the FRENK dataset mentioned above. Finally, we analyze each model's misclassification rate/error rate under dimensions of hate target and hate type.
As seen in Figure 3(a), the model tends to have a higher error rate in detecting migrants-related samples, particularly when trained on Reddit and Twi-Red-You datasets. One notable characteristic we observed in the Reddit and Twi-Red-You datasets is that the hate examples tend to include a majority of targeted hate towards particular individuals. Similarly, the LGBTQ target in FRENK dataset contains a majority of hate examples towards individuals. However, in contrast, the migrant target contains more
Figure 3: Analysing the error rate of **PEACE** under different dimensions such as (a) hate targets (LGBTQ vs. migrants) and (b) hate type (offense vs. violence).
generic hate examples towards a group of people. This mismatch between the training and testing platforms might be causing the higher error rate for the migrants target compared to the LGBTQ target.
According to the error analysis shown in Figure 3(b), we see that the **PEACE** model has a higher error rate for the offensive hate type than for the violence type. We further analyze this matter by examining the traits in the text that correspond to each of these hate types. Table 5 contains some representative samples from each of these two categories. In the violence hate type, the hate aspect is quite explicit to the reader/model. Moreover, here the sentiment and aggression cues are easily detectable. However, in the offensive hate type, we see hate to be inherently more implicit than explicit. Moreover, learning valuable signals through sentiment or aggression becomes problematic when the expressed hatred is implicit.
## 6 Conclusions and Future Work
The widespread popularity and easy accessibility of online social media platforms have led humans to easily share their opinions with the rest of the world. However, some people misuse this privilege to spread hateful content targeted to denigrate an individual or group. As a result, automated hate speech detection has become a crucial task. However, due to various factors, such as the evolving nature of hate and the limited availability of labeled data in a platform, it is challenging to develop a generalizable hate speech detection model. To address the poor generalization problem, in this paper, we proposed a generalizable hate speech detection model, named **PEACE**, that considers the inherent causal cues that characterize whether a text content is hateful. Studies in various disciplines, such as sociology and psychology, indicate that hateful content contains specific inherent cues that can be leveraged and quantified better to detect hate speech across cross-platform and cross-target settings. We leverage the text's aggression and the content's overall sentiment to learn generalizable representations for improved
| Hate Type | Example |
| --- | --- |
| Violence | shoot them all, done!!! let the communists solve the problem!!! |
| Violence | coz i believe that these people wont stop, sooner or later, Germany will have to use guns |
| Violence | Quick... Bomb it. |
| Violence | Send troops to reinforce the entry's in Europe, the countries they are in is in safe zones, if they continue to move forward shoot to kill, as this is regarded as invasion. |
| Offensive | The annoying thing is that 75% of the migrants are Young men, why aren't they fighting for THEIR country? Or is it more a case of they can get more from European countries (money, house, education etc) |
| Offensive | Are there terrorists hidden in migration groups? Likely. |
| Offensive | And they breed like grasshoppers.. Bye bye Europe. |

Table 5: Examples representing the different kinds of hate. The violence hate type is more explicit and direct whereas the offense hate type is more subtle and implicit.
hate speech detection. We conducted extensive experiments and showed that **PEACE** can generalize better across five different social media platforms and two different targets when compared with various state-of-the-art baselines. We further conducted experiments to show the importance of each causal cue and case study to identify the features **PEACE** relies on for detecting hate speech.
**PEACE**'s generalization prowess comes from the two primary causal cues, which are manually identified. One potential direction would be to investigate how to automate identifying the cues and build an end-to-end system. Moreover, hate speech detection can be further enriched by considering the context of the conversation. Another direction would be to explore how to leverage context in a cross-platform setting to improve generalization capabilities further.
## 7 Ethical Statement
#### 7.0.1 Freedom of Speech and Censorship
Our research on cross-platform hate speech detection aims to develop algorithms that can effectively identify and mitigate harmful language across multiple platforms. We recognize the importance of protecting individuals from the adverse effects of hate speech and the need to balance this with upholding free speech. Content moderation is one application where our method could detect and censor hate speech on social media platforms such as Twitter, Facebook, Reddit, etc. However, one ethical concern is our system's false positives, i.e., if the system incorrectly flags a user's text as hate speech, it may censor legitimate free speech. Therefore, we discourage incorporating our methodology in a purely automated manner for any real-world content moderation system until and unless a human annotator works alongside the system to determine the final decision.
#### 7.0.2 Use of Hate Speech Datasets
In our work, we incorporated publicly available well-established datasets. And we have correctly cited the corresponding dataset papers and followed the necessary steps in utilizing those datasets in our work. Moreover, we understand that the hate speech examples used in the paper are potentially harmful content that could be used for malicious activities. However, our work aims to help better investigate, comprehend, and help mitigate the harms of online hate. Therefore, we have assessed that the benefits of incorporating these real-world examples to explain our work better outweigh the potential risks.
#### 7.0.3 Fairness and Bias in Detection
Our work strives to prioritize using natural language processing tools for social good while respecting the principles of fairness and impartiality. To reduce biases and ethical problems, we openly disclose our methodology, results, and limitations and will continue to assess and improve our system in the future.
## 8 Acknowledgements
This material is based upon work supported by, or in part by the Office of Naval Research (ONR) under contract/grant number N00014-21-1-4002 and the Army Research Office under the grant number W911NF2110030. |
2310.04700 | Importance of physical information on the prediction of heavy-ion fusion
cross section with machine learning | In this work, the Light Gradient Boosting Machine (LightGBM), which is a
modern decision tree based machine-learning algorithm, is used to study the
fusion cross section (CS) of heavy-ion reaction. Several basic quantities
(e.g., mass number and proton number of projectile and target) and the CS
obtained from phenomenological formula are fed into the LightGBM algorithm to
predict the CS. It is found that, on the validation set, the mean absolute
error (MAE) which measures the average magnitude of the absolute difference
between $log_{10}$ of the predicted CS and experimental CS is 0.129 by only
using the basic quantities as the input, this value is smaller than 0.154
obtained from the empirical coupled channel model. MAE can be further reduced
to 0.08 by including an physical-informed input feature. The MAE on the test
set (it consists of 280 data points from 18 reaction systems that not included
in the training set) is about 0.19 and 0.53 by including and excluding the
physical-informed feature, respectively. We further verify the LightGBM
predictions by comparing the CS of $^{ 40,48}{\rm Ca }$+$^{78}{\rm Ni}$
obtained from the density-constrained time-dependent Hartree-Fock approach. Our
study demonstrates the importance of physical information in predicting fusion
cross section of heavy-ion reaction with machine learning. | Zhilong Li, Zepeng Gao, Ling Liu, Yongjia Wang, Long Zhu, Qingfeng Li | 2023-10-07T06:19:22Z | http://arxiv.org/abs/2310.04700v1 | Importance of physical information on the prediction of heavy-ion fusion cross section with machine learning
###### Abstract
In this work, the Light Gradient Boosting Machine (LightGBM), which is a modern decision tree based machine-learning algorithm, is used to study the fusion cross section (CS) of heavy-ion reaction. Several basic quantities (e.g., mass number and proton number of projectile and target) and the CS obtained from phenomenological formula are fed into the LightGBM algorithm to predict the CS. It is found that, on the validation set, the mean absolute error (MAE) which measures the average magnitude of the absolute difference between \(log_{10}\) of the predicted CS and experimental CS is 0.129 by only using the basic quantities as the input, this value is smaller than 0.154 obtained from the empirical coupled channel model. MAE can be further reduced to 0.08 by including an physical-informed input feature. The MAE on the test set (it consists of 280 data points from 18 reaction systems that not included in the training set) is about 0.19 and 0.53 by including and excluding the physical-informed feature, respectively. We further verify the LightGBM predictions by comparing the CS of \({}^{40,48}\)Ca+\({}^{78}\)Ni obtained from the density-constrained time-dependent Hartree-Fock approach. Our study demonstrates the importance of physical information in predicting fusion cross section of heavy-ion reaction with machine learning.
## I Introduction
The heavy ion fusion reaction is a process in which two colliding atomic nuclei overcome the fusion barrier and then form an excited compound nucleus. It has important scientific and applied implications, including a deeper understanding of the properties and reactions of atomic nuclei, the study of the synthesis and properties of superheavy elements, exploring the origin of the elements, and providing a reference for the development and application of fusion energy. In addition, it is one of the most important ways for us to explore the boundary of the nuclear landscape and to get insights into the nuclear interactions. Therefore, it has been a hot topic of research in the field of nuclear physics for more than 60 years [1; 2; 3; 4; 5]. To perform heavy-ion reactions, a few facilities have been established and some are under construction all over the world, for example, the Cooler-Storage-Ring (CSR) [6] and High Intensity heavy-ion Accelerator Facility (HIAF) in China [7; 8], the rare-isotope beam accelerator complex (RAON) in Korea [9], the Facility for Rare Isotope Beams (FRIB) in the United States, the Facility for Antiproton and Ion Research (FAIR) in Germany, the Système de Production d'Ions Radioactifs (SPIRAL2) in France, and the Radioactive Isotope Beam Factory (RIBF) in Japan. By using various facilities, more than 1000 excitation functions for different projectile-target combinations have been measured [10], but there are still many that have not been measured, or have large errors.
The heavy-ion fusion reaction is a complex quantum many-body process, and it involves the mutual coupling of nuclear structure and reaction dynamics [11; 12; 13; 14; 15; 16; 17; 18]. Thus it is very difficult to study strictly from first principles. Several theoretical models or empirical formulas have been proposed to study the fusion cross section (CS), which is one of the most important observables for studying heavy-ion reactions, such as the coupled channel calculations [19; 20], the time-dependent Hartree-Fock (TDHF) theory plus solving the Schrödinger equation [21; 22], and the empirical coupled channel model [23; 24; 25; 26; 27; 28; 29]. However, the cross sections calculated by these methods are not completely compatible with the experimental data.
In recent years, machine learning (ML) methods have been widely and successfully applied for analyzing data in many branches of science, such as physics (see, e.g., Refs. [30; 31; 32]). In the field of nuclear physics, ML has shown a strong ability in the study of heavy ion collisions [33; 34; 35; 36; 37], properties of strongly interacting QCD matter [38; 39; 40; 41], nuclear spallation and projectile fragmentation reactions [42; 43; 44], nuclear fission [45; 46; 47; 44], nuclear masses [48; 49; 50; 51; 52; 53], \(\beta\)-decay half-lives and energy [54; 55; 56; 54], \(\alpha\)-decay [57], the charge radius of atomic nuclei [58; 59; 60; 61], nuclear density distribution [62; 63], and the evaporation residual cross sections for superheavy nuclei [64]. Recently, a novel artificial intelligence approach has been applied to model cross section data [65], in which phenomenological formulas for the calculation of CS are derived based on a hybridization of genetic programming and artificial neural networks. The derived phenomenological formulas can qualitatively reproduce the trend but not the absolute value of the CS. ML is a rapidly growing and flourishing field. Nowadays, a diverse array of ML algorithms has been developed and continues to be refined to cover a wide variety of data types and tasks. It is interesting to study whether other ML algorithms can also refine models used in the calculation of CS and, more importantly, whether physical insights into the heavy-ion fusion reaction can be derived.
The rest of this paper is organized as follows. In Sect. II, we introduce the methodology that we use in the present work, including the machine learning algorithm, the dataset, and the input features. The CS obtained with the machine learning algorithm are discussed in detail in Sect. III. The conclusions are given in Sect. IV.
## II Methodology
In the present work, the prediction of heavy-ion fusion cross section is a supervised task and requires a machine learning algorithm, and a set of labelled data with input and output variables. In this section, we introduce briefly these items.
The machine learning algorithm we use in the present work is the Light Gradient Boosting Machine (LightGBM), which was developed by Microsoft in 2016; it is a gradient boosting framework that uses tree-based learning algorithms [66]. LightGBM is becoming increasingly popular due to its advantages including (1) faster training speed and higher efficiency, (2) lower memory usage, (3) better accuracy, (4) support of parallel and graphics processing unit learning, and (5) capability of handling large-scale data. Moreover, LightGBM is a white-box model and has an excellent degree of explainability because of its decision-tree-based nature. This is important for studying a real physical problem, as explainability may improve our knowledge about the relationship between the input features and the output. In our previous works, the strong ability of LightGBM to refine nuclear mass models [49] and mine physical information has been demonstrated [67; 68; 69]. Thus it is also employed in the present work, and parameters in LightGBM are set to their default values; we have checked that the results are insensitive to parameters in LightGBM.
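As a rough illustration of this setup, the sketch below trains a LightGBM regressor with default parameters to predict log\({}_{10}\) of the fusion cross section; the data file and column names are placeholders for the quantities listed in Tab. 3, not part of the actual analysis pipeline.

```python
# Hedged sketch of the LightGBM setup described in the text; the data file and
# column names are placeholders for the basic features (BF) of Tab. 3.
import lightgbm as lgb
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("fusion_data.csv")                       # placeholder dataset
basic_features = ["Ecm", "Z1", "N1", "A1", "Z2", "N2", "A2", "Z3", "N3", "A3",
                  "B1", "B2", "B3", "Q", "Sp", "S2p", "Sn", "S2n"]   # Mode_BF
X, y = df[basic_features], np.log10(df["sigma_exp_mb"])

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = lgb.LGBMRegressor()                               # default hyperparameters
model.fit(X_tr, y_tr)
mae = np.mean(np.abs(model.predict(X_val) - y_val))       # MAE defined in Eq. (1)
print(f"validation MAE = {mae:.3f}")
```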
\begin{table}
\begin{tabular}{l l l|l l l|l l l} \hline System & Data & Energy range & System & Data & Energy range & System & Data & Energy range \\ & points & (MeV) & & & points & (MeV) & & points & (MeV) \\ \hline \({}^{12}\)C+\({}^{99}\)Y & 9 & 26-41 & \({}^{19}\)F+\({}^{93}\)Nb & 12 & 43-60 & \({}^{32}\)S+\({}^{184}\)W & 7 & 118-144 \\ \({}^{12}\)C+\({}^{92}\)Zr & 16 & 27-45 & \({}^{19}\)F+\({}^{139}\)La & 13 & 61-115 & \({}^{32}\)S+\({}^{208}\)Pb & 32 & 139-184 \\ \({}^{12}\)C+\({}^{144}\)Sm & 15 & 41-70 & \({}^{19}\)F+\({}^{208}\)Pb & 33 & 73-145 & \({}^{33}\)S+\({}^{90}\)Zr & 13 & 74-97 \\ \({}^{12}\)C+\({}^{152}\)Sm & 6 & 42-58 & \({}^{27}\)Al+\({}^{45}\)Sc & 16 & 31-51 & \({}^{33}\)S+\({}^{91}\)Zr & 16 & 72-97 \\ \({}^{12}\)C+\({}^{154}\)Sm & 12 & 43-58 & \({}^{19}\)F+\({}^{209}\)Bi & 10 & 80-116 & \({}^{33}\)S+\({}^{92}\)Zr & 16 & 72-97 \\ \({}^{12}\)C+\({}^{181}\)Ta & 12 & 50-92 & \({}^{23}\)Na+\({}^{48}\)Ti & 11 & 32-46 & \({}^{34}\)S+\({}^{24}\)Mg & 11 & 24-32 \\ \({}^{12}\)C+\({}^{194}\)Pt & 10 & 50-69 & \({}^{27}\)Al+\({}^{76}\)Ge & 21 & 50-60 & \({}^{34}\)S+\({}^{25}\)Mg & 19 & 24-33 \\ \({}^{12}\)C+\({}^{198}\)Pt & 10 & 50-69 & \({}^{27}\)Al+\({}^{72}\)Ge & 17 & 49-61 & \({}^{34}\)S+\({}^{93}\)Mg & 21 & 24-35 \\ \({}^{40}\)Ca+\({}^{96}\)Zr & 56 & 87-113 & \({}^{27}\)Al+\({}^{73}\)Ge & 18 & 50-62 & \({}^{34}\)S+\({}^{89}\)Y & 32 & 72-92 \\ \({}^{46}\)Ti+\({}^{90}\)Zr & 10 & 99-120 & \({}^{27}\)Al+\({}^{74}\)Ge & 18 & 49-62 & \({}^{34}\)S+\({}^{168}\)Er & 41 & 111-164 \\ \({}^{12}\)C+\({}^{204}\)Pb & 12 & 50-85 & \({}^{27}\)Al+\({}^{76}\)Ge & 19 & 48-62 & \({}^{35}\)Cl+\({}^{24}\)Mg & 12 & 26-36 \\ \({}^{12}\)C+\({}^{206}\)Pb & 8 & 54-81 & \({}^{27}\)Al+\({}^{197}\)Au & 29 & 107-151 & \({}^{35}\)Cl+\({}^{25}\)Mg & 12 & 27-38 \\ \({}^{12}\)C+\({}^{208}\)Pb & 12 & 54-89 & \({}^{28}\)Si+\({}^{28}\)Si & 38 & 22-68 & \({}^{35}\)Cl+\({}^{26}\)Mg & 12 & 27-38 \\ \({}^{12}\)C+\({}^{237}\)Np & 11 & 56-77 & \({}^{28}\)Si+\({}^{30}\)Si & 22 & 22-49 & \({}^{35}\)Cl+\({}^{27}\)Al & 11 & 30-75 \\ \({}^{12}\)C+\({}^{238}\)U & 21 & 60-119 & \({}^{28}\)Si+\({}^{68}\)Zn & 12 & 50-71 & \({}^{35}\)Cl+\({}^{57}\)Cr & 17 & 52-79 \\ \({}^{14}\)N+\({}^{59}\)Co & 8 & 25-45 & \({}^{28}\)Si+\({}^{90}\)Zr & 13 & 65-93 & \({}^{35}\)Cl+\({}^{52}\)Cr & 16 & 53-78 \\ \({}^{14}\)N+\({}^{232}\)Th & 9 & 67-87 & \({}^{28}\)Si+\({}^{92}\)Zr & 22 & 65-89 & \({}^{35}\)Cl+\({}^{51}\)Vi & 18 & 49-81 \\ \({}^{14}\)N+\({}^{238}\)U & 17 & 72-138 & \({}^{28}\)Si+\({}^{94}\)Zr & 15 & 63-95 & \({}^{35}\)Cl+\({}^{50}\)Ti & 22 & 47-75 \\ \({}^{15}\)N+\({}^{56}\)Fe & 10 & 25-37 & \({}^{28}\)Si+\({}^{93}\)Nb & 13 & 68-92 & \({}^{35}\)Cl+\({}^{54}\)Fe & 18 & 55-82 \\ \hline \end{tabular}
\end{table}
Table 1: Training set.
Data used in this work consists of three parts: the training set, the validation set, and the test set. The training set
is used to adjust the parameters in LightGBM; usually, the validation set is a part of the training set and is used to monitor and avoid overfitting. The test set is used to evaluate the actual predictive power of LightGBM on unseen data. The training set is made of 3635 experimental data points from 220 reaction systems collected in Ref. [23]; it is randomly split into the training set and validation set with a certain ratio. These experimental data were measured before 2016. We note that the training set is built considering systems with \(12\leq Z_{1}\leq 48\) and \(24\leq Z_{2}\leq 208\). In this way, we can neglect too heavy systems, for which fusion-fission and quasi-fission can be the dominant reaction modes, and also too light systems, where the presence of break-up and transfer reactions complicates the analysis. The test set consists of 280 data points from 18 reaction systems measured in recent experiments [70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80]. We note here that these 18 reaction systems in the test set have not appeared in the training set. The reaction system, number of data points for each system, and the energy range of the training and test sets are listed in Tabs. 1 and 2, respectively.
Usually, fusion cross sections at different energies vary over several orders of magnitude; thus, the logarithm of the fusion cross section is set as the output of LightGBM. The input quantities, including the center-of-mass energy, the charge, neutron, and mass numbers and binding energies of the projectile, target, and compound nuclei, the fusion Q-value, as well as the one (two)-proton (neutron) separation energies of the compound nuclei, are listed in Tab. 3. These quantities are chosen because they are basic features (BF) of a nucleus and are expected to relate to the fusion process. Usually, adding more related features may benefit the performance of the trained ML model. We have tried to add more features and different combinations of these BF quantities, but the performance of the trained ML model is only slightly improved. To introduce physical information related to the heavy-ion fusion process, the fusion cross sections calculated with the empirical coupled channel (ECC) model and the Wong formula are also used. Details of the ECC model and the Wong formula can be found in Ref. [23]. In addition, we introduce a simplified empirical quantity Z\({}_{1}\)Z\({}_{2}\)/E\({}_{c.m}\). This quantity includes some important factors of the heavy-ion fusion reaction, e.g., Z\({}_{1}\)Z\({}_{2}\) relates to the Coulomb barrier. These quantities are physical-informed or physical-guided features, which are listed in Tab. 4.
The main aim of this work is to establish the relationship between the characteristic quantities of the fusion reaction and the fusion cross section by learning the training set with LightGBM. In the process of training, four different input feature combinations are used, as given in Tab. 5. Mode_BF means that the input features comprise the 18 basic quantities as
| Feature | Description |
| --- | --- |
| sig_ECC | log\({}_{10}\) of the CS obtained by the ECC model |
| sig_W | log\({}_{10}\) of the CS obtained by the Wong formula |
| sig_E | Z\({}_{1}\)Z\({}_{2}\)/E\({}_{c.m}\) |

Table 4: Physical-informed quantities.
| System | Data points | Energy range (MeV) | Ref. |
| --- | --- | --- | --- |
| \({}^{35}\)Cl + \({}^{130}\)Te | 16 | 90-125 | [70] |
| \({}^{37}\)Cl + \({}^{130}\)Te | 17 | 90-125 | [71] |
| \({}^{37}\)Cl + \({}^{68}\)Zn | 11 | 60-90 | [72] |
| \({}^{16}\)O + \({}^{61}\)Ni | 8 | 25-41 | [73] |
| \({}^{18}\)O + \({}^{61}\)Ni | 16 | 25-41 | [73] |
| \({}^{18}\)O + \({}^{62}\)Ni | 15 | 25-42 | [73] |
| \({}^{18}\)O + \({}^{116}\)Sn | 21 | 40-75 | [74] |
| \({}^{30}\)Si + \({}^{136}\)Gd | 14 | 90-116 | [75] |
| \({}^{28}\)Si + \({}^{100}\)Mo | 13 | 65-98 | [76] |
| \({}^{36}\)S + \({}^{50}\)Ti | 23 | 40-60 | [77] |
| \({}^{36}\)S + \({}^{51}\)V | 22 | 40-60 | [77] |
| \({}^{12}\)C + \({}^{182}\)W | 14 | 40-80 | [78] |
| \({}^{12}\)C + \({}^{184}\)W | 14 | 40-80 | [78] |
| \({}^{12}\)C + \({}^{186}\)W | 14 | 40-80 | [78] |
| \({}^{40}\)Ca + \({}^{92}\)Zr | 16 | 89-108 | [79] |
| \({}^{48}\)Ca + \({}^{116}\)Cd | 20 | 104-130 | [80] |
| \({}^{48}\)Ca + \({}^{118}\)Sn | 16 | 104-130 | [80] |
| \({}^{48}\)Ca + \({}^{120}\)Te | 10 | 104-130 | [80] |
| Total: 18 systems | 280 | | |

Table 2: Test set.
\begin{table}
\begin{tabular}{c l} \hline \hline features & Description \\ \hline E\({}_{c.m}\) & collision center-of-mass energy (MeV) \\ Z\({}_{1}\) & charge of the first reaction partner \\ N\({}_{1}\) & number of neutrons of the first reaction \\ & partner \\ A\({}_{1}\) & mass number of the first reaction partner \\ Z\({}_{2}\) & charge of the second reaction partner \\ N\({}_{2}\) & number of neutrons of the second reaction \\ & partner \\ A\({}_{2}\) & mass number of the second reaction partner \\ Z\({}_{3}\) & charge of the compound nucleus \\ N\({}_{3}\) & number of neutrons of the compound \\ & nucleus \\ A\({}_{3}\) & mass number of the compound nucleus \\ B\({}_{1}\) & binding energy of the first reaction partner \\ B\({}_{2}\) & binding energy of the second reaction \\ & partner \\ B\({}_{3}\) & binding energy of the compound nucleus \\ Q & fusion Q-value (MeV) \\ S\({}_{\rm p}\) & one-proton separation energy of the \\ & compound nucleus (MeV) \\ S\({}_{\rm 2p}\) & two-proton separation energy of the \\ & compound nucleus (MeV) \\ S\({}_{\rm n}\) & one-neutron separation energy of the \\ & compound nucleus (MeV) \\ S\({}_{\rm 2n}\) & two-neutron separation energy of the \\ & compound nucleus (MeV) \\ \hline \end{tabular}
\end{table}
Table 3: Selection of basic features (BF).
listed in Tab. 3. Mode_ECC means that the input features comprise the 18 basic quantities plus sig_ECC, while Mode_W and Mode_E add sig_W and sig_E, respectively, to the 18 basic quantities. By comparing the performances of these modes, one can infer the importance of the physical-informed features. The performance of the ML algorithm is quantitatively evaluated via the mean absolute error (MAE),
\[\text{MAE}=\frac{1}{N}\sum_{i=1}^{N}\left|\log_{10}\left(\sigma_{pred}\right)-\log_{10}\left(\sigma_{exp}\right)\right| \tag{1}\]
Here N is the number of tested data points, and \(\sigma_{pred}\) and \(\sigma_{exp}\) are the predicted and experimental cross sections, respectively.
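As an illustration, the training and evaluation loop described above can be sketched in a few lines of Python; the loader `load_fusion_dataset`, the hyperparameter values, and the exact column layout are placeholders rather than the settings actually used in this work.

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split

# Placeholder loader: X holds the 18 basic features of Tab. 3 (plus, e.g. for Mode_E,
# the column Z1*Z2/Ecm of Tab. 4); y is log10 of the measured fusion cross section.
X, y = load_fusion_dataset()

# 4:1 split between training and validation data.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = lgb.LGBMRegressor(n_estimators=1000, learning_rate=0.05)  # illustrative settings
model.fit(X_tr, y_tr, eval_set=[(X_val, y_val)])

# Eq. (1): mean absolute error in log10(sigma).
mae = np.mean(np.abs(model.predict(X_val) - y_val))
print(f"MAE on the validation set: {mae:.3f}")
```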
## III Results
### Performance on the training set
With the training data set built from 2908 data points and the remaining 727 data points constituting the validation data set, the MAE reduces to 0.081\(\pm\)0.005, which is better than that of many physical models. The value of the MAE fluctuates between runs: because the training and validation sets are selected randomly, there is a chance that all data points of a given reaction system are excluded from the training set; predicting the CS for such a system is challenging and results in a larger MAE. If a few data points of a reaction system are included in the training set, predicting the remaining points on the excitation function of that system is much easier than predicting an excitation function without any data point.
By increasing the size of the training set, the model can learn more information and reduce the MAE. However, the uncertainty of the MAE on the validation set increases as the fraction of data assigned to training grows, because the fewer the validation points, the larger the fluctuations, as shown in Fig. 3. In the present work, to avoid both a large MAE value and a large uncertainty of the MAE, the ratio between the training size and the validation size is set to 4:1 in the following discussion.
#### iii.1.2 Comparison of different modes
This section demonstrates the impact of the input features on the performance of LightGBM. To do so, the density distributions of the MAE for Mode_E, Mode_BF, Mode_ECC, and Mode_W, together with the MAE from the ECC model, are displayed in Fig. 4. The corresponding mean values and standard deviations are listed in Table 6.
First, the MAE for the Wong formula is about 2.381, which means the average difference between the predicted and experimental CS is as large as two orders of magnitude. This is understandable, as the training set contains many data points at deep sub-barrier energies, where the Wong formula is unreliable.
Second, the MAE for Mode_BF is 0.129, which is smaller than that obtained with the ECC model. The ECC model includes the effects of neutron transfer channels, the couplings between the relative motion and intrinsic degrees of freedom, as well as nuclear deformation, and it gives a reasonable fit to the fusion excitation function in the vicinity of the Coulomb barrier. By feeding in the basic features of a reaction, LightGBM is able to achieve a better performance in predicting the CS than the ECC model.
Third, the value of the MAE can be significantly reduced by including the physical-informed features in the input, which manifests the importance of physical information for predicting the CS with an ML algorithm. The MAE for Mode_ECC is 0.068\(\pm\)0.004, the smallest of all. The MAE values for Mode_W and Mode_E are slightly larger than that of Mode_ECC. Considering that calculating the CS with the Wong formula or the ECC model is much more complicated than evaluating Z\({}_{1}\)Z\({}_{2}\)/E\({}_{c.m}\), Mode_E is favored over the other modes.
### Performance on the test set
The performance of LightGBM is further validated using the test set, which consists of 280 data points from 18 reaction systems. Fig. 5 displays the comparison of the CS predicted by Mode_BF, Mode_E, and Mode_W with the recent experimental data. It can be seen that the CS obtained with the Wong formula is close to the experimental data only at high energies and is too small at low energies. This behaviour is caused by the breakdown of the parabolic approximation at energies well below the Coulomb barrier and has been widely found and discussed in the literature, see e.g. Refs. [2; 81]. Both the CS and its energy-dependent behaviour predicted by Mode_E and Mode_W are close to the experimental data, while for some reaction systems the energy-dependent behaviour obtained with Mode_BF differs considerably from the experimental data.
\begin{table}
\begin{tabular}{l l} \hline Model & MAE \\ \hline Wong Formula & 2.38 \(\pm\) 0.09 \\ ECC model & 0.154 \(\pm\) 0.008 \\ Mode\_BF & 0.129 \(\pm\) 0.007 \\ Mode\_ECC & 0.068 \(\pm\) 0.004 \\ Mode\_W & 0.081 \(\pm\) 0.005 \\ Mode\_E & 0.081 \(\pm\) 0.005 \\ \hline \end{tabular}
\end{table}
Table 6: The average MAE on the validation set obtained from the different modes and from the Wong formula and the ECC model. The ratio of training set to validation set is 4:1.
Figure 4: (Color online) Density distribution of MAE for different modes. Results from 500 runs for each mode (Mode_ECC, Mode_E, and Mode_W) and from the ECC model are displayed. Dashed lines denote a Gaussian fit to the distribution. In each run, the 3635 data points were randomly split into training and validation sets at a ratio of 4:1.
The values of the MAE on the test set obtained with Mode_E, Mode_W, and Mode_BF are 0.197\(\pm\)0.006, 0.187\(\pm\)0.005, and 0.526\(\pm\)0.013, respectively. This indicates that the physical-informed features can guide the machine learning algorithm to successfully capture the energy-dependent behaviour and thus improve the performance. The MAE values on the test set are larger than those on the validation set, which is understandable because the reaction systems in the test set are not included in the training set.
### Comparison with the DC-TDHF approach
The density-constrained time-dependent Hartree-Fock (DC-TDHF) approach is a fully microscopic approach that provides a good description of the fusion excitation function for many reaction systems [21; 82]. To further verify the performance of LightGBM, the nuclear fusion cross sections for \({}^{40,48}\)Ca + \({}^{78}\)Ni obtained from the DC-TDHF approach are compared with the predictions of Mode_E, as shown in Fig. 6. The uncertainties of DC-TDHF result from different potentials [83]. It can be seen that the results predicted with Mode_E are in line with the DC-TDHF calculations. However, in contrast to the enhancement of the fusion cross sections of \({}^{40}\)Ca + \({}^{78}\)Ni at sub-barrier energies observed in the DC-TDHF calculations, the CS of \({}^{40}\)Ca + \({}^{78}\)Ni predicted with Mode_E is smaller than that of \({}^{48}\)Ca + \({}^{78}\)Ni. As discussed in Ref. [83], this enhancement for \({}^{40}\)Ca + \({}^{78}\)Ni is due to the narrower width of its ion-ion potential. We note that in Refs. [84; 85] Bourgin et al. reported that the fusion cross section of the \({}^{40}\)Ca + \({}^{64}\)Ni system is higher than that of \({}^{40}\)Ca + \({}^{58}\)Ni, because the large neutron transfer probabilities in \({}^{40}\)Ca + \({}^{64}\)Ni result in a lowering of the fusion threshold. The experimental data of \({}^{40}\)Ca + \({}^{58,64}\)Ni are contained in the training set, thus LightGBM presumably learned from these data and predicted a higher CS for \({}^{48}\)Ca + \({}^{78}\)Ni. Further studies of the isospin dependence of the fusion cross section are needed in order to clarify the role of isospin in fusion dynamics.
### Interpretability of the model
As a decision-tree-based algorithm, LightGBM has excellent interpretability. This is important because one expects the ML algorithm not only to perform well in refining the theoretical fusion cross section model, but also to provide some fundamental physics that the theoretical model does not capture. Understanding what happens when the ML algorithm makes predictions can further improve our knowledge of the relationship between the input feature quantities and the predicted values. One possible way to understand how LightGBM arrives at a specific prediction is to find the most important features that drive the model. To do this, one of the most popular feature attribution methods, SHapley Additive exPlanations (SHAP) [86], is applied to obtain the importance ranking of the input features, as displayed in Fig. 7. The top is the most important feature, while the bottom is the least relevant feature for the prediction of the fusion cross section in each mode. It is seen that the physical-informed features (sig_ECC, sig_W, and sig_E) in Mode_W, Mode_ECC, and Mode_E are ranked at the top, and their SHAP values are significantly larger than the others. Besides these physical-informed features, the collision center-of-mass energy \(E_{\rm c.m}\) and the fusion Q-value are also ranked in the top five. It is well known that these two quantities are essential to the heavy-ion fusion process. In addition, the neutron number (N\({}_{3}\)) and mass number (A\({}_{3}\)) of the compound nucleus also exhibit high importance; this indicates that these two quantities are strongly related to the fusion cross section and can be further considered in modeling it.
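For reference, the attribution step can be sketched as follows with the `shap` package, assuming `model` and `X_val` are the trained LGBMRegressor and the validation features from the earlier sketch; the bar plot is chosen only to illustrate a global importance ranking.

```python
import shap

# Tree-based explainer suited to gradient-boosted models such as LightGBM.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_val)   # per-sample, per-feature attributions

# Global importance ranking (mean |SHAP value| per feature), analogous to Fig. 7.
shap.summary_plot(shap_values, X_val, plot_type="bar")
```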
Figure 6: (Color online) The nuclear fusion cross sections for \({}^{40}\)Ca + \({}^{78}\)Ni (upper panel) and \({}^{48}\)Ca + \({}^{78}\)Ni (lower panel). Red points denote the predictions with Mode_E. The shaded bands denote the calculated results from the density-constrained time-dependent Hartree-Fock (TDHF) approach, taken from Ref. [83].
## IV Summary
To summarize, the underlying basic quantities and the physical-informed quantities are fed to LightGBM to predict the cross section of heavy-ion fusion reactions. The physical-informed quantities used in this work include the fusion cross sections calculated with the empirical coupled channel (ECC) model and the Wong formula, as well as the simplified quantity Z\({}_{1}\)Z\({}_{2}\)/E\({}_{c.m}\). It is found that, using only the basic quantities, LightGBM can reproduce the experimental cross sections within a factor of 10\({}^{0.129}\)=1.35, which is better than the factor 10\({}^{0.154}\)=1.43 obtained from the coupled channel model. When the physical-informed quantities are included in the input features, the performance of LightGBM is significantly improved. The MAE on the test set, which consists of 280 data points from 18 reaction systems not seen during training, is about 0.53 when only the basic quantities are used as input, whereas it is reduced to about 0.2 when the physical-informed quantities are included. In addition, the trend of the excitation function is reproduced by LightGBM when the input features include the physical-informed quantities. Altogether, our study demonstrates the importance of physical information in predicting the fusion cross sections of heavy-ion reactions with machine learning algorithms.
## V Acknowledgement
The authors are grateful to the C3S2 computing center in Huzhou University for calculation support. The work is supported in part by the National Natural Science Foundation of China (Nos. U2032145, 12075327, 12335008), Fundamental Research Funds for the Central Universities, Sun Yat-sen University under Grant No. 23lgbj003, and Guangdong Major Project of Basic and Applied Basic Research under Grant No. 2021B0301030006.
|
2302.04511 | A Large-Scale Analysis of Persian Tweets Regarding Covid-19 Vaccination | The Covid-19 pandemic had an enormous effect on our lives, especially on
people's interactions. By introducing Covid-19 vaccines, both positive and
negative opinions were raised over the subject of taking vaccines or not. In
this paper, using data gathered from Twitter, including tweets and user
profiles, we offer a comprehensive analysis of public opinion in Iran about the
Coronavirus vaccines. For this purpose, we applied a search query technique
combined with a topic modeling approach to extract vaccine-related tweets. We
utilized transformer-based models to classify the content of the tweets and
extract themes revolving around vaccination. We also conducted an emotion
analysis to evaluate the public happiness and anger around this topic. Our
results demonstrate that Covid-19 vaccination has attracted considerable
attention from different angles, such as governmental issues, safety or
hesitancy, and side effects. Moreover, Coronavirus-relevant phenomena like
public vaccination and the rate of infection deeply impacted public emotional
status and users' interactions. | Taha ShabaniMirzaei, Houmaan Chamani, Amirhossein Abaskohi, Zhivar Sourati Hassan Zadeh, Behnam Bahrak | 2023-02-09T09:08:19Z | http://arxiv.org/abs/2302.04511v3 | # A Large-Scale Analysis of Persian Tweets Regarding Covid-19 Vaccination
###### Abstract
The Covid-19 pandemic had an enormous effect on our lives, especially on people's interactions. By introducing Covid-19 vaccines, both positive and negative opinions were raised over the subject of taking vaccines or not. In this paper, using data gathered from Twitter, including tweets and user profiles, we offer a comprehensive analysis of public opinion in Iran about the Coronavirus vaccines. For this purpose, we applied a search query technique combined with a topic modeling approach to extract vaccine-related tweets. We utilized transformer-based models to classify the content of the tweets and extract themes revolving around vaccination. We also conducted an emotion analysis to evaluate the public happiness and anger around this topic. Our results demonstrate that Covid-19 vaccination has attracted considerable attention from different angles, such as governmental issues, safety or hesitancy, and side effects. Moreover, Coronavirus-relevant phenomena like public vaccination and the rate of infection deeply impacted public emotional status and users' interactions.
**Keywords:** Covid-19, Public Vaccination, Topic Modeling, Social Analysis, Emotion Analysis
## 1 Introduction
The first officially known outbreak of the Covid-19 was initiated in Wuhan, China, at the end of 2019 (Organization, 2021). According to the rapid dissemination of Coronavirus and the number of lost lives from this infection, the Covid-19 pandemic has massively impacted our daily lives, interactions, behaviors, and routines. Although upcoming breakouts are potential and the future of the Covid-19 pandemic is uncertain (Bonnevie et al, 2021), currently, there are several vaccines which act as controlling measures for the disease outbreak.
As mentioned by Le et al (2020), controlling factors other than the quality of the vaccines, such as public support and trust towards authorities, are essential to ensure the efficiency of vaccination programs. However, these types of treatments, particularly those offered as an emergency response to a rapidly spreading pandemic, are sometimes looked upon with reservation and reluctance (Troiano and Nardi, 2021). Therefore, with respect to the overall aim of global immunization and prevention of social consequences of the pandemic, there exists great potential and need for studies that analyze both supportive and critical viewpoints related to mass vaccination of the population. Understanding critical viewpoints and their rationales is helpful to convince a wider proportion of society into getting vaccinated and increasing the success rate of such programs worldwide.
Nowadays, social media platforms play a significant role in our lives. People communicate, express their feelings and passions, and inform or get informed about the latest news via these platforms. Investigating social media can shed light on measuring people's attitudes toward any discussed topic and recognizing how their opinions evolve over time. In recent years, Twitter has been a key source of information dissemination as one of the most powerful social networks. Each user on Twitter can broadcast a message that may contain any desired content, as long as he/she abides by the platform's safety, privacy, and authenticity rules1.
Footnote 1: [https://help.twitter.com/en/rules-and-policies/twitter-rules](https://help.twitter.com/en/rules-and-policies/twitter-rules)
Despite the fact that content on Twitter is publicly accessible, conducting research on tweets requires a detailed plan for acquiring and analyzing relevant data. This paper presents a practical approach for mining and classification of Persian tweets and users regarding Coronavirus vaccination, leading to a detailed analysis of public supportive and critical attitudes on vaccination in Iran. Moreover, our study is focused on Persian, which is a resource-limited language that has received scant levels of attention from social studies compared to English. In addition, this research provides insights into the relationship between different events and social media reactions to them. The contribution of this paper can be summarized as follows:
* We describe a topic modeling approach combined with a keyword-based method for extracting Persian tweets related to vaccination.
* We apply transformer-based machine learning techniques for tweet classification.
* We conduct an emotion analysis using the labelled dataset for happiness and anger emotions in Persian words.
* We quantify different supportive and critical vaccination themes extracted from tweets.
* We investigate users' connections before and after the initiation of vaccination.
The remainder of this paper is organized as follows: Section 2 gives a brief synopsis of the previous related works. Afterwards, Section 3 explains how Persian tweets relevant to Covid-19 have been collected. In Section 4, we present the preprocessing methodologies as well as our approaches for obtaining tweets related to vaccination, and introduce a strategy to classify the tweets into three classes: negative, positive, and neutral. Techniques used for emotion analysis and further evaluations, such as extracting vaccine themes and user study, are also explored in this section. Section 5 analyzes classified tweets and extracted themes. Furthermore, multiple pieces of analysis about the Covid-19 timeline, user groups and influential users, and overall emotion analysis results are included in this section. Finally, Section 6 concludes the paper and outlines future research directions.
## 2 Related Work
Considering the diversity, richness, and availability of Twitter data, several pieces of research have been conducted utilizing tweets to analyze the impact of Covid-19 on societies and social media platforms. According to Covid-19 Data Explorer 2, Iran was one of the first countries to be affected by Covid-19; nevertheless, only a few analyses have been carried out to investigate Iranians' opinions toward the Coronavirus and vaccination. Hosseini et al (2020) performed one of the early studies conducted to gauge responses to ongoing events by categorizing Persian tweets into different classes and demonstrating how the reactions evolved over time. Besides, Shokrollahi et al (2021) provides a Post-structuralist Discourse Analysis (PDA) of the Covid-19 phenomenon in Persian society using social network graphs to cluster and explore influencers. Moreover, sentiment analysis of Persian tweets related to Covid-19 has been conducted in this piece of research. Lastly, Nezhad and Deihimi (2022) presented a sentiment analysis approach to assess the Persian community's position toward domestic and imported Coronavirus vaccines.
Footnote 2: [https://ourworldindata.org/explorers/coronavirus-data-explorer](https://ourworldindata.org/explorers/coronavirus-data-explorer)
Generally, topic detection can help structure an extensive data collection by grouping records into different classes. In order to achieve a reliable classification, many topic modeling techniques are available. Lyu et al (2021) aims to identify the topics of tweets related to Covid-19, fetched with relevant keywords, using Latent Dirichlet Allocation (LDA) topic modeling developed by Blei et al (2003). Similarly, Wicke and Bolognesi (2021) employs LDA to illustrate how the subjects linked with the pandemic growth change over time. On
the other hand, we compared LDA with Gibbs Sampling for Dirichlet Multinomial Mixture (GSDMM) from Yin and Wang (2014) as the first-step in classifying Persian tweets. GSDMM is a modified LDA technique mainly used for short text topic modeling (STTM) tasks, assuming only one topic for each document rather than a probability distribution on all the potential topics from the original LDA. We have considered both LDA and GSDMM models and compared their results to extract the most relevant topics.
One important factor for analyzing public opinions toward vaccination is to explore trends and reactions during the pandemic. According to the temporal evolution study of different emotional categories and influencing factors implemented in Chopra et al (2021), expressing doubt about vaccination attracts the highest health-related conversations in all the countries studied during the research. Furthermore, Thelwall et al (2021) applies a manual content analysis on a small portion of vaccine-hesitant Coronavirus tweets in English to extract major themes discussed regarding hesitancy. Likewise, quantifications introduced in Bonnevie et al (2021) compare vaccine-critical posts on Twitter before and after the Covid-19 spread in the United States, which depicts a significant increase in vaccine disapproval, especially in areas related to health authorities, vaccine ingredients, and research trials. Moreover, in Bonnevie et al (2020), vaccine opposition themes are manually coded, and afterward, misinformation in each theme, as well as top influencers, are identified. The results show that prominent influencers appear to be well coordinated in misinformation dissemination. Apart from vaccine trends, another direction of our study is to classify vaccine-related tweets into three categories and discuss the evolution of each position (critical, supportive, and neutral) during the pandemic.
In addition to vaccination topics, there are pieces of research conducted on sentiment analysis of tweets with respect to the Covid-19 vaccination. One example is Wicke and Bolognesi (2021) that performs sentiment analysis based on the Pattern library, which uses a dictionary of manually-tagged adjectives with values for sentiment polarity in tweets Smedt and Daelemans (2012). Similarly, Yousefinaghani et al (2021) utilizes Valence Aware Dictionary and sEntiment Reasoner (VADER), a Python lexicon and rule-based sentiment analysis tool, to assign sentiment polarity to every tweet Hutto and Gilbert (2014). Furthermore, in a recent study, Nezhad and Deihimi (2022) applies a deep learning model reinforced with a sarcasm detection approach to achieve high accuracy for Persian tweets.
Although several projects have been carried out on vaccine theme identification and sentiment analysis, many plausible analyses in these areas have received little attention, especially in Persian, which is a low-resource language. In previous studies, the main focus has usually been on vaccine-opposition themes, whereas we explore both support and opposition themes and demonstrate how they develop over time using a grounded theory methodology devised by Khan (2014). Furthermore, we performed emotion
analysis over different prominent vaccination opinions, i.e., positive, negative, and neutral, using our tagged Persian words emotion dataset.
As for the focus on studying users involved in Covid-19 related conversations, one of the first studies was carried out by Bonnevie et al (2020). By analyzing "Top Authors" and user engagement, they found that vaccine opposition and misinformation does not come from a diverse distribution of users. Additionally, Yousefinaghani et al (2021) has classified Twitter users into three categories, namely pro-vaccine, anti-vaccine, and neutral and determined how each user belongs to each group. A similar study for the Turkish Twitter has been conducted by Durmaz and Hengirmen (2022). A key point to their work is that they have identified anti-vaccine influencers both before and after the pandemic. As for the study at hand, we have used a robust method to categorize each user into the positions mentioned above and study user interactions after and before the public vaccination in Iran.
## 3 Data
As previously stated, this study aims to analyze Persian tweets about vaccination to give insight into the public opinion toward Coronavirus vaccines in Iran. In order to fulfill this goal, we first need to collect relevant data for processing. The data acquisition and preprocessing procedures are fully explored in the following according to the workflow shown in Figure 1.
Figure 1: Data Acquisition and Preprocessing
### Data Acquisition
To collect Persian tweets and their respective users, we did not just focus on our task at hand; instead, we gathered a comprehensive dataset to be potentially utilized for further studies. This dataset contains 709,460,922 tweets and 6,661,480 active users from Jan. 2012 to Dec. 2021.
In this endeavor, we used Twitter Intelligence Tool (TWINT), which is an advanced Twitter scraping tool developed by Zacharias (2020), allowing us to gather Twitter users' profiles and tweets. We modified TWINT so that we could extract users' and tweets' information for every hour and saved them in Elasticsearch. We chose Elasticsearch as our database and search engine because of its robustness and scalable architecture.
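A minimal sketch of one such hourly scraping step is shown below; the attribute names follow TWINT's Config object as commonly documented, while the example keyword, time window, and Elasticsearch address are illustrative assumptions and do not reproduce the modified pipeline used in this work.

```python
import twint

config = twint.Config()
config.Search = "واکسن"                          # example keyword; the full crawl was not keyword-restricted
config.Lang = "fa"                               # restrict to Persian tweets
config.Since = "2020-02-01 00:00:00"             # one-hour window, iterated over the whole period
config.Until = "2020-02-01 01:00:00"
config.Elasticsearch = "http://localhost:9200"   # store results directly in Elasticsearch
twint.run.Search(config)
```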
In the next step and in order to separate Covid-19-related tweets, we extracted tweets from Feb. 2020, when the first infected case in Iran was publicly announced, up until Dec. 2021, based on at least one of the following keywords in Persian: **Corona, Covid, vaccine, and quarantine**. More information on the number of tweets for each keyword is provided in Table 1.
Extracted information contains features for users and tweets. We store the list of mentioned users in a tweet and whether a tweet is a reply to another one, along with the count of users' interactions with the tweet. For example, we only save the number of likes per tweet, not the list of users who liked the tweet, since we were only interested in the quantity of this statistic. Similarly, the number of followers and followings are gathered for each user, but the list of the followers or followings is not available. Table 2 and Table 3 describe further details regarding the main properties of tweets and users datasets, respectively.
#### Removing Duplicates
Repeated characters are abundant, notably in emojis, vowels, and stress letters within a word. To rectify this discrepancy, we replaced any character repeated more than twice in a row with only one character of that type. For instance, the word "Hellooool" would be reduced to "Hellol" during this phase. Furthermore, we substituted runs of identical emojis with a single one. Apart from handling the discrepancy mentioned above, this action could in principle affect emotion analysis methodologies: we are aware that, for example, the negative stance coming from "I haaattee this!" is probably much stronger than that of "I hate this!"; however, due to our word-based emotion analysis approach, this technique did not impact our analysis.
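A regular-expression sketch of this collapsing rule (runs of three or more identical characters reduced to one) is given below; it is a simplified stand-in for the actual cleaning code.

```python
import re

def collapse_repeats(text: str) -> str:
    # Any character repeated more than twice in a row is reduced to a single instance;
    # runs of identical single-codepoint emojis are handled by the same pattern.
    return re.sub(r"(.)\1{2,}", r"\1", text)
```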
Afterward, we removed duplicate records with similar tweet content and one-word tweets because these tweets often do not imply any meaningful concepts. After this phase, 286,546 records were eliminated.
\begin{table}
\begin{tabular}{|l|l|} \hline Feature Name & Description \\ \hline Tweet ID & Unique ID for every tweet \\ User ID & Unique ID employed by Twitter for the owner of the tweet \\ Conversation ID & ID employed by Twitter for the conversation \\ Retweet Count & Number of retweets \\ Reply Count & Number of replies \\ Like Count & Number of likes \\ Reply to & This field contains the User ID of the replied tweet if the current tweet is a reply to another tweet \\ Mentions & List of mentioned user IDs \\ Created at & Creation time of the tweet \\ Source & Twitter Source (Android, iPhone, iPad, Web App) \\ Hashtags & List of hashtags in the tweet \\ URLs & List of URLs in the tweet \\ Tweet & Tweet content \\ \hline \hline \end{tabular}
\end{table}
Table 2: Description of Tweet Features
\begin{table}
\begin{tabular}{|l|l|} \hline Feature Name & Description \\ \hline ID & Unique ID employed by Twitter for every user \\ Username & The name that identifies the user \\ Bio & Biography of the user \\ Location & Location of the user \\ URL & Link in the user account \\ Joined Time & Time of account creation \\ Tweet Count & Number of tweets \\ Like Count & Number of total likes \\ Followers & Number of followers \\ Followings & Number of followings \\ Private & Whether user account is private \\ Verified & Whether user account is verified \\ \hline \end{tabular}
\end{table}
Table 3: Description of User Features
#### Text Cleaning
We used the Clean-Text library in Python (Filter, 2022) in addition to our customized techniques for data cleaning. Clean-Text provides a cleaner text representation; we employed it to fix various Unicode errors and to remove URLs, phone numbers, emails, and currency symbols. Moreover, we also removed HTML tags as well as meaningless characters and punctuation.
#### Normalization
For this purpose, we utilized the Hazm library, which is implemented for digesting Persian text (HAZM, 2018). We used Hazm Normalizer to unify different classes of terms.
#### Removing Stopwords
We defined a set of Persian stopwords to be removed from tweets using a combination of the Hazm stopwords dataset and Persian stopwords defined in Kharazi (2021). Afterward, we investigated every word in these two sets and removed those that might be relevant to Covid-19 and vaccination. Finally, we evaluated top-appearing words in Covid-19-related tweets and checked whether they refer to any meaningful notion; if not, we appended them into our stopwords set.
#### Lemmatization
We also performed lemmatization for our Persian dataset using the Hazm lemmatizer in order to reduce inflections and variant forms to a base form. Since lemmatization can change or even invert the meaning of words (especially when turning negated verb forms into infinitives), we created two datasets to compare the effect of lemmatization on the topic modeling results and subsequent steps: one with lemmatization (LEM) and the other without it (N-LEM).
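The normalization, stopword-removal, and lemmatization steps can be sketched with Hazm as follows; the `stopwords_list` helper is assumed to be available in the installed Hazm version, and the customized stopword additions described above are omitted for brevity.

```python
from hazm import Normalizer, Lemmatizer, word_tokenize, stopwords_list

normalizer = Normalizer()
lemmatizer = Lemmatizer()
stopwords = set(stopwords_list())   # later merged with our customized stopword set

def preprocess(tweet, lemmatize=True):
    tokens = word_tokenize(normalizer.normalize(tweet))
    tokens = [t for t in tokens if t not in stopwords]
    if lemmatize:                   # produces the LEM variant; skip for N-LEM
        tokens = [lemmatizer.lemmatize(t) for t in tokens]
    return tokens
```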
## 4 Methods
In order to figure out a way to filter tweets relevant to vaccination, we used a topic modeling approach combined with a keyword-based search. We also applied a transformer-based machine learning technique to classify vaccine-related tweets into three major groups (vaccine-critical, vaccine-supportive, and neutral).
Additional details about the exploited research methodology are shown in Figure 2.
### Topic Modeling
Topic modeling, which is also referred to as probabilistic clustering, is an approach to structuring a large dataset and classifying it into smaller, more interpretable, and spatially separated clusters. There are many topic modeling methodologies available, of which we chose LDA, which is an unsupervised machine learning algorithm and the most widely used technique, and GSDMM, an approach for short-text classification tasks. We applied these two topic modeling techniques to our dataset and compared their results to see which one performs better.
We used a combination of two criteria to assess the performance of our topic modeling algorithms:
1. Coherence measure (\(C_{v}\)) by Newman et al (2010): topic coherence measures calculate the degree of semantic similarity between high-scoring terms in a topic to determine its score. These metrics aid in distinguishing between semantically and non-semantically interpretable issues.
2. Human judgment: similar to what Chang et al (2009) has proposed, we carried out the word and topic intrusion tasks, focusing on the meaning of the words in subjects to examine topics and assess the interpretability of each group.
In order to achieve the most reasonable topic models, we evaluated several factors over a sample of 100,000 tweets. First, we compared the LEM dataset with N-LEM based on the coherence value over the changes of multiple hyper-parameters and word representations. LEM dataset outperforms N-LEM on
Figure 2: Workflow of Twitter Analysis toward Covid-19 Vaccination
an average of 2.3% in \(C_{v}\) score over 25 executions. Because of the mentioned reason, we opted to use LEM dataset for the rest of the topic modeling process.
Next, we compared Bag-of-Words (BoW) and Term Frequency/Inverse Document Frequency (TF-IDF) word representation techniques. For this purpose, we filtered out extreme tokens that appeared in less than 15 tweets or more than 50% of all tweets and kept only the top 100,000 tokens for topic modeling execution. On an average of 20 executions, BoW results were 2.2% better than TF-IDF.
Finally, we tuned LDA and GSDMM hyper-parameters to find the best results for each method. The parameters giving the best results are described below.
LDA parameters:
* \(NT\): The number of topics to be retrieved from the training corpus.
* \(NP\): Number of passes through the corpus during training.
* \(\alpha\): A number for a symmetric prior over document-topic distribution.
* \(CS\): Number of documents/tweets in each training chunk.
GSDMM parameters:
* \(NT_{G}\): The upper limit for the number of topics.
* \(NI\): The upper limit for the number of iterations to perform.
* \(\alpha_{G}\): A parameter ranging from 0 to 1, controlling records' affinity for a larger cluster.
* \(\beta_{G}\): A parameter ranging from 0 to 1, controlling records' affinity for a more homogeneous cluster.
We evaluated results for \(NT\) (and \(NT_{G}\)) between 5 and 10 and \(NP\) (and \(NI\)) between 6 and 12 for the LDA and GSDMM models. Based on the \(C_{v}\) coherence measures shown in Table 4 and on human judgment, the LDA model outperforms GSDMM on our dataset.
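A condensed sketch of this tuning loop with Gensim is given below; the token filtering follows the thresholds quoted above, while the specific hyperparameter values are illustrative picks from the tuned ranges rather than the final ones, and `texts` stands for the tokenized (LEM) tweets from the preprocessing step.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

dictionary = Dictionary(texts)
dictionary.filter_extremes(no_below=15, no_above=0.5, keep_n=100_000)
corpus = [dictionary.doc2bow(t) for t in texts]          # Bag-of-Words representation

lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=10,    # NT
               passes=8,         # NP, within the tuned 6-12 range
               alpha=0.1,        # symmetric document-topic prior
               chunksize=2000)   # CS

coherence = CoherenceModel(model=lda, texts=texts,
                           dictionary=dictionary, coherence="c_v")
print("C_v coherence:", coherence.get_coherence())
```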
After finding the best model for the tweets using the LDA technique, we manually labeled each group according to the concept perceived from each cluster. More information about these topics is provided in Table 5.
### Vaccine-related Tweets
Keyword-based search is usually practical for providing a required subset; however, it only relies on the presence of a list of words. Thereby, there is a lack of implication and sentence meaning when utilizing keywords to provide data. In order to deal with this challenge and obtain the most relevant tweets to
\begin{table}
\begin{tabular}{|c|c|c|} \hline \hline Topic model & Number of Topics & Coherence (C\_v) \\ \hline LDA & 10 & 52.72\% \\ GSDMM & 9 & 42.46\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Best Results Gained from Topic Modeling
Covid-19 vaccination, we developed a hybrid approach and merged the results gained from the keyword-based technique with our topic modeling outcomes.
According to our topic modeling results, two groups were related to vaccination, i.e., vaccination opinions and vaccination news and reports. First, we extracted tweets with a high probability of belonging to one of these two clusters, where we defined a probability greater than or equal to 0.5 as high. Based on this criterion, 499,228 tweets were extracted from the dataset.
Then, we defined a series of vaccine-related keywords, whose English translations are as follows: **vaccine, vaccination, Astra, AstraZeneca, Pfizer, Moderna, Sputnik, Covaxin, Sinopharm**. The rest of the Covid-19-related tweets were checked against these words, and 538,212 tweets contained at least one of them. Consequently, we stored 1,037,440 tweets related to vaccination for further studies.
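The resulting hybrid filter amounts to a simple disjunction of the two criteria, sketched below; the topic indices and the Persian spellings of the keywords are illustrative assumptions.

```python
VACCINE_TOPICS = {2, 8}   # indices of the two vaccine-related LDA topics (illustrative)
KEYWORDS = ["واکسن", "واکسیناسیون", "آسترازنکا", "فایزر",
            "مدرنا", "اسپوتنیک", "کوواکسین", "سینوفارم"]

def is_vaccine_related(tweet_text, topic_probs):
    """topic_probs: list of (topic_id, probability) pairs returned by the LDA model."""
    from_topics = any(t in VACCINE_TOPICS and p >= 0.5 for t, p in topic_probs)
    from_keywords = any(k in tweet_text for k in KEYWORDS)
    return from_topics or from_keywords
```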
### Vaccine-related Tweets Classification
After providing vaccine-related tweets, we aimed to classify them into three major groups: vaccine-critical, neutral, and vaccine-supportive. To achieve this, first, we manually labeled 6000 tweets using the grounded theory approach. For the first 1000 items of the extracted dataset, the first two authors separately labeled the tweets into the three categories mentioned above. Then, the two labeled datasets were compared against each other using Cohen's Kappa metric, having a consistency of 78 percent. After a discussion over the tweets that did not get the same label, the consistency of 90 percent was reached over the first 1000 labeled tweets. Afterward, the remaining part was split into two datasets of length 2500; each one labeled by only one person. The results are mentioned in Table 6:
\begin{table}
\begin{tabular}{|c|c|c|} \hline \hline Position & Count & Percentage \\ \hline Vaccine-Critical & 1735 & 28.9\% \\ Neutral & 2611 & 43.5\% \\ Vaccine-Support & 1654 & 27.5\% \\ \hline \end{tabular}
\end{table}
Table 6: Polarity Distribution of Hand-Labeled Dataset
\begin{table}
\begin{tabular}{|c|c|c|} \hline \hline Topic description & Number of Tweets & \% of all tweets \\ \hline Religious and governmental & 257,314 & 7.27\% \\ Relatives and mourning & 370,644 & 10.47\% \\ Vaccination opinions & 527,294 & 14.90\% \\ Regional news & 293,378 & 8.29\% \\ Reports and statistics & 188,088 & 5.31\% \\ Symptoms & 501,034 & 14.16\% \\ Political and dissatisfaction & 161,774 & 4.57\% \\ quarantine and education & 456,551 & 12.90\% \\ Vaccination (news, reports) & 424,406 & 12.00\% \\ Political and financial & 358,713 & 10.13\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Final Topics
Subsequently, the manually labeled data were utilized for vaccine opinion classification. We applied a combination of four different preprocessing factors, as demonstrated in Table 7. For text cleaning and stopword removal, we considered three different criteria, i.e., extreme, moderate, and no filtering. The details of these three criteria are as follows:
* Extreme: Applying all the methods mentioned in Section 3.2.
* Moderate: Allowing the presence of vaccine-related words, for which we reduced the size of the stopwords set by 30%. Also, for the text cleaning part, punctuations, numbers, and conversational forms were kept in tweets.
* No Filtering: Keeping tweet contents intact.
On the other hand, we assumed only two possibilities for duplicate removal and lemmatization, whether or not to apply them. We created 36 different datasets from our original vaccine-related tweets in this stage.
Finally, we employed transformer-based machine learning techniques to accomplish our vaccine-related tweets classification. We fine-tuned and compared a series of these approaches with pre-trained models that use a masked language modeling (MLM) objective to find the best result. Utilized strategies are discussed in the following:
#### Bidirectional Encoder Representations from Transformers (BERT)
BERT, introduced in Devlin et al (2018), applies bidirectional training of the transformer, a popular attention model, to language modeling. This contrasts with previous efforts, which viewed a text sequence either from left to right or as a combination of separate left-to-right and right-to-left training. We first employed the BERT-base and BERT-large models. Then, we utilized ParsBERT from Farahani et al (2021), a monolingual language model based on Google's BERT architecture, pre-trained on large Persian corpora with more than 3.9M documents, 73M sentences, and 1.3B words. As with the previous models, we fine-tuned ParsBERT v3.0 and compared the results with BERT-base and BERT-large.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \hline Criteria & States & \# of States \\ \hline Duplicate Removal & Keep / Remove & 2 \\ Text Cleaning & Extreme / Moderate / No Filter & 3 \\ Lemmatization & Apply / Ignore & 2 \\ Stopword Elimination & Extreme / Moderate / No Filter & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Dataset Extension Criteria
#### Robustly Optimized BERT Pretraining Approach (RoBERTa)
Liu et al (2019) trained BERT with more input data and epochs and came up with RoBERTa, showing that both techniques help in achieving better results. Furthermore, this approach slightly improved masking and data pretraining processes. Firstly, we used RoBERTa-base and large models, like the method used with the pre-trained BERT models. Next, we utilized Twitter-RoBERTa-base for sentiment analysis which is trained on about 58M tweets and fine-tuned for sentiment analysis with the TweetEval benchmark from Barbieri et al (2020). Finally, we assessed Persian RoBERTa, which is a model similar to ParsBERT's idea but based on RoBERTa architecture.
#### Lite BERT for Self-supervised Learning of Language Representations (ALBERT)
ALBERT, introduced by Lan et al (2019), brought up two significant innovations over BERT. First, it factorized embedding parameterization. ALBERT uses a small embedding size and then projects it to the transformer hidden size. Moreover, ALBERT shares all parameters between transformer layers too. For our classification task, we employed the Persian ALBERT v3.0 model, which is provided in ParsBERT.
#### Distilled Version of BERT (DistilBERT)
Distillation, as mentioned by Hinton et al (2015), is the procedure of training a small student model to mimic a larger teacher model as close as possible, and DistilBERT was introduced based on this concept (Sanh et al, 2019). To incorporate DistilBERT into the study, we utilized Persian DistilBERT v3.0 model implemented by ParsBERT.
#### Generalized Auto-regressive Pretraining for Language Understanding (XLNet)
BERT has two main limitations. It distorts the input with masks and suffers from dissimilarity of pretraining and fine-tuning. In addition, BERT ignores the dependency between masked positions. To address these issues, Yang et al (2019) used a permutation language modeling idea to create XLNET. Furthermore, they employed some techniques for masking and using the position of the prediction token. We used XLNet-base and XLNet-large pre-trained models to assess this architecture, evaluate the results, and compare them with other transformer-based models.
#### 4.3.6 Unsupervised Cross-lingual Representation Learning at Scale (XLM-R)
In addition to monolingual models, we also fine-tuned and evaluated XLM-RoBERTa (XLM-R) from Conneau et al (2019), a transformer-based multilingual masked language model pre-trained on text in 100 languages. We used the XLM-RoBERTa-large model for this purpose.
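All of the models above were fine-tuned with essentially the same recipe, sketched here for ParsBERT with the Hugging Face Trainer; the checkpoint identifier, the hyperparameters, and the construction of the labeled splits (`train_texts`, `train_labels`, etc., with labels 0: critical, 1: neutral, 2: supportive) are assumptions for illustration only.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL = "HooshvareLab/bert-fa-base-uncased"   # an assumed ParsBERT checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

# Placeholder lists holding the 5000/1000 hand-labeled training and validation splits.
train_ds = Dataset.from_dict({"text": train_texts, "label": train_labels}).map(tokenize, batched=True)
val_ds = Dataset.from_dict({"text": val_texts, "label": val_labels}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="parsbert-vaccine", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
    eval_dataset=val_ds,
)
trainer.train()
```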
### Emotion Analysis
To analyze the emotion of vaccine-related tweets during the Covid-19 pandemic, we used a proprietary dataset, which provides the level of happiness and anger for a lexicon of 8,375 common Persian words found on Twitter. Six individuals participated in evaluating and labeling this dataset. Every word in the lexicon was assigned two numbers between 1 and 9, indicating the intensity of happiness and anger; a score of 5 refers to a neutral state, and higher numbers refer to more extreme emotions. This method is similar to the Hedonometer approach, proposed by Dodds et al (2011), for measuring expressed happiness in other languages. We calculated an average happiness and anger weight for each word in the dataset. Then we fitted the inverse of the normal distribution function to assign weights to each number between 1 and 9. The purpose of using this function was to highlight the effect of extremely emotional words.
Afterward, we used the dataset to scale up the emotion analysis from individual words to texts. In order to evaluate the weighted average level of anger and happiness, we used an algorithm (H-AVG), based on Hedonometer's proposal, which is as follows:
\[h_{\text{avg}}(T)=\frac{\sum_{i=1}^{N}h_{\text{avg}}\left(w_{i} \right)\times freq_{i}}{\sum_{i=1}^{N}freq_{i}}\] \[a_{\text{avg}}(T)=\frac{\sum_{i=1}^{N}a_{\text{avg}}\left(w_{i }\right)\times freq_{i}}{\sum_{i=1}^{N}freq_{i}}\]
where \(freq_{i}\) is the frequency of the word \(w_{i}\) (\(i\)th word) in text \(T\), and \(N\) is the number of words present in \(T\).
Before calculating the averages, we dropped every word not found in the emotion dataset and removed all neutral words in order to focus on the sheer level of happiness and anger in tweets. Next, we calculated the average happiness and anger of each tweet while disregarding every word not found in our initial emotion dataset. To obtain more robust results, we then estimated average happiness and anger scores for the missing words as shown below:
\[h_{\text{avg}}(w)=\frac{\sum_{i=1}^{M}h_{\text{avg}}\left(T_{i }\right)}{\sum_{i=1}^{M}freq_{i}}\] \[a_{\text{avg}}(w)=\frac{\sum_{i=1}^{M}a_{\text{avg}}\left(T_{i }\right)}{\sum_{i=1}^{M}freq_{i}}\]
where \(T_{i}\) is the \(i\)th text containing word \(w\), and \(M\) is the number of texts containing \(w\).
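A minimal implementation of these weighted averages is shown below, assuming the lexicon is stored as a word-to-score dictionary with neutral words already removed; the anger score is computed identically from the anger lexicon.

```python
from collections import Counter

def weighted_avg_score(tokens, lexicon):
    """Frequency-weighted average emotion score of a text (h_avg or a_avg)."""
    counts = Counter(t for t in tokens if t in lexicon)   # drop words outside the lexicon
    if not counts:
        return None                                       # no lexicon word in this tweet
    total = sum(counts.values())
    return sum(lexicon[w] * f for w, f in counts.items()) / total
```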
Later, we utilized H-AVG again to compute the average happiness and anger per day during the Covid-19 pandemic and compared the results with Covid-19-related events in Iran. The results are reported in Section 5.
### Vaccine Themes
Upon achieving an acceptable result (discussed in detail in Section 5.2) for vaccine-related tweet classification, the main subjects in vaccine opposition and support were extracted. At first, 500 randomly selected tweets from the two groups combined were considered. Next, we used a grounded theory approach and inductive analysis to identify the main themes manually. We analyzed and assigned related themes to each tweet and extracted essential keywords relevant to each theme using the content of the tweets. In order to focus only on the principal matters of each tweet, at most three relevant themes were considered per tweet. Afterward, for each theme found in the vaccine opposition and support groups, we established a set of keywords identifying the concept of the subject. Finally, these keywords were used to categorize the rest of the tweets in each vaccine-related group, as sketched below. The aim was to find one or more themes for at least 85% of tweets (except for neutral ones); we continued grouping tweets while adding extra categories until this goal was reached. In the end, 15 distinct themes, each with its unique set of keywords, were found for the vaccine opposition group, and 16 for the vaccine-supportive group, meaning that the core topics of 85% of vaccine opposition and support tweets were identified using 31 themes. The remaining 15% had vague or unknown overall topics. Most of the short tweets (fewer than four words) fell into this group; since we utilized a keyword-based approach, an insufficient number of words was the most common reason for a tweet not being categorized into any pre-defined theme. For instance, _How about Vaccination?_ is a good example that does not convey any meaningful or subjective opinion on the subject.
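The keyword-based assignment of themes can be sketched as follows; the theme names follow Table 11, while the keyword entries shown here are English placeholders standing in for the actual Persian keyword sets.

```python
# Placeholder keyword sets; the real sets contain Persian keywords per theme.
THEMES = {
    "Side Effects": {"side-effect", "fever", "clot"},
    "Political / Government": {"government", "ministry", "import"},
    "Research Trials": {"trial", "phase-three", "efficacy"},
}

def tag_themes(tweet_tokens, themes=THEMES):
    """Return every theme whose keyword set intersects the tweet's tokens."""
    hits = [name for name, kws in themes.items() if kws & set(tweet_tokens)]
    return hits or ["Others"]   # unmatched tweets fall into the Others group
```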
### User Interaction Analysis
Evaluating user activities, especially for influencers (users with high interaction rate), can give us insight into user attitudes and changes in trends that are not perceivable via assessing tweets.
In order to evaluate users' behavior toward the Covid-19 vaccination, first, we categorized users, monthly from February 2020 to December 2021, into four different groups, i.e., anti-vaccination, neutral, pro-vaccination, and mixed. If 60% or more of a user's tweets about vaccination in a month belonged to the vaccine opposition group, the user was categorized in the anti-vaccination group for that specific month. In a similar fashion, we classified pro-vaccination and neutral groups. Based on these criteria and the mentioned threshold, if a user could not be fitted into any specific group in a month, we considered
him/her as mixed. The full details of the method used for user classification are presented in Algorithm 1.
```
\(a\leftarrow\) percentage of anti-vaccination tweets in a month
\(p\leftarrow\) percentage of pro-vaccination tweets in a month
\(n\leftarrow\) percentage of neutral tweets in a month
if any of the representative variables is greater than 60 then
    User is categorized accordingly
else if \(a==0\) then  \(\triangleright\) 40 \(\leq\) p, n \(\leq\) 60
    User is classified as pro-vaccination
else if \(p==0\) then  \(\triangleright\) 40 \(\leq\) a, n \(\leq\) 60
    User is classified as anti-vaccination
else
    User is classified as mixed
end if
```
**Algorithm 1** Single User Classification Algorithm
In the next step, we assessed influencers' activities and interactions. We made a user interaction graph where there is an edge between two users if one is mentioned or has replied to the other. The total number of a user's connections (degree of a node) is stored as the metric for analyzing the influence of a person. By computing the number of connections each user had per month, we considered the top 40 users with the highest degree for each of the 23 available months as the influencers (top 0.2 percent of each month's users). Then, we studied the distribution of influencers with respect to the four categories mentioned before. Lastly, to have an overview of the overall interactions and the effect of vaccine program, we created two social networks out of the users. One before the public vaccination in Iran, 1 June 2021, and the other one after that date.
## 5 Results
### Vaccine-related Tweets
We gathered 3,539,196 tweets relevant to Covid-19, and 1,037,440 of them were categorized as vaccine-related tweets (Shown in Figure 3) based on our hybrid approach described in Section 4.
From February 2020 to December 2021, an average of 37.65% (median 42.09%) of Covid-19 tweets per day were related to vaccination. To delve deeper into this analysis, we assessed our data concerning two important dates, the introduction of Coronavirus vaccines (9 February 2021) and the beginning of the public vaccination in Iran (1 June 2021).
According to our evaluations shown in Table 8, a greater proportion of the tweets after 9 February 2021 were related to vaccination in comparison to the previous period of the Covid-19 pandemic. Similarly, after the beginning of the
public vaccination, the rate of vaccine-related tweets was significantly higher than before that date. We found that subsequent to the official introduction of vaccines and public vaccination, vaccine-related tweets increased enormously, referring to the new subjects arising from vaccine matters, such as side effects, effectiveness, and general opinions toward taking vaccines.
### Vaccine-related Tweets Classification
For classifying our vaccine-related tweets into vaccine-opposition, neutral, and vaccine-support groups, after labeling 6000 tweets, we randomly split our tagged data into train and validation sets. 5000 tweets were considered as the training set, and the rest for the validation. Further details of the partition are provided in Table 9.
Based on the dataset extension method described in Section 4, where we had four hyperparameters to tune, the best average result belonged to the dataset with no duplicate removal, moderate text cleaning, no lemmatization, and no stopword elimination. We call this dataset the final dataset. We continued by fine-tuning our transformer-based models on the final dataset
\begin{table}
\begin{tabular}{|l|c|} \hline \hline \multicolumn{1}{|c|}{Pandemic Period} & Daily Avg. Vaccine-related Tweets \\ \hline Before 9 Feb. 2021 & 21.11\% \\ After 9 Feb. 2021 & 54.43\% \\ \hline Before 1 Jun. 2021 & 27.29\% \\ After 1 Jun. 2021 & 58.71\% \\ \hline \hline \end{tabular}
\end{table}
Table 8: Vaccine-related Tweets over Covid-19 Pandemic
Figure 3: Relative Percentage of Vaccine Tweets Over Time
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \hline Sets & Critical & Neutral & Supportive \\ \hline Training & 1467 & 2167 & 1366 \\ Validation & 268 & 444 & 288 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Polarity Distribution in Training and Validation Sets
and compared their results to find the best model for our classification task. Table 10 displays more information about the top results of our classification. As can be seen, our fine-tuned Pars-BERT model outperforms all the other approaches with an F1-score of 62.03%. Other models such as BERT, RoBERTa, Twitter-RoBERTa, XLM-R, and XLNet did not reach an F1-score of more than 30%.
### Emotion Analysis
The results of tweet emotion detection are presented in Figures 4 and 5. In these time series, several important dates (peaks and valleys) exist for each emotion type (shown by black triangles). We cross-referenced these dates with the introduction of vaccines and two available time series, namely, the daily number of new cases and the number of deaths. We found several interesting correlations, including:
* April 2020: The first peak of the pandemic (first happiness valley and anger peak): Along with the first worldwide Covid-19 shock and the unavailability of vaccines and other treatments, there was a huge public panic concerning the consequences of the Coronavirus.
* July 2020: The recovery from the first peak (first happiness peak and anger valley): Although no vaccine was available yet, the overall downward trend of Covid-19 infections gave rise to the thought that the public was less susceptible to the disease.
* November 2020: The start of the third epidemic wave (second anger peak): The initiation of vaccination in other countries and reports on the effectiveness of vaccines, combined with the critical situation and high rate of infection in Iran, produced widespread dissatisfaction and outrage at the public state of Covid-19.
* September 2021: The period of the Delta variant (third happiness valley and anger peak): The Delta variant of the Coronavirus marked one of the most severe periods in terms of daily new cases and deaths. As a result, although vaccination effectively moderated sad and angry opinions, the last anger peak and happiness valley are more pronounced than at the other important dates.
Furthermore, looking at the entire happiness time series, we observe an overall rising tendency. This upward trend is apparent when we compare the
\begin{table}
\begin{tabular}{|c|c|c|} \hline \hline Models & F1-Score & Accuracy (O, N, S) \\ \hline Persian ALBERT & 39.9 & 77.22, 50.34, 4.24 \\ Persian RoBERTa & 53.78 & 48.40, 61.78, 46.64 \\ Persian DistilBERT & 58.45 & 51.25, 69.57, 49.12 \\
**Pars-BERT** & **62.03** & **63.06, 60.81, 61.81** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Vaccine-related Tweets Classification Results
periods before and after the introduction of vaccines (February 2021). Conversely, the anger time series shows the opposite behaviour: we see a declining tendency when comparing the averages before and after the vaccines. We used Spearman's rho and Pearson's coefficients to evaluate the correlation between the happiness and anger trends. The coefficient for both measures was -0.965 with p-value \(<\) 0.001, showing a strong negative correlation between these two trends.
As the results show, vaccination significantly affected public happiness and anger toward the Coronavirus. Due to the vaccines' effectiveness, people trust vaccination more as a remedy for the Coronavirus; hence, they tend to post fewer sad or angry tweets around the Covid-19 subject. Furthermore, we found a strong correlation between sadness and anger regarding Covid-19 vaccination, which could be an example of how different emotions are affected in a similar manner by an external factor such as a pandemic. It might also suggest that aligned negative (or positive) emotions can significantly strengthen each other.
### Vaccine Themes
Classified data were analyzed to extract themes for both vaccine-critical and supportive tweets. Based on the extraction methods mentioned before,
Figure 4: Happiness Trend of Vaccine-Related Tweets during Covid-19 Pandemic
Figure 5: Anger Trend of Vaccine-Related Tweets during Covid-19 Pandemic
219,646 tweets were labeled as having vaccine-opposition content. These tweets belonged to 15 distinct categories (plus one category named \(other\)). The same approach was adopted for the vaccine-supportive tweets, which consisted of 339,351 distinct tweets in 16 different themes. Just like the critical side, a category called \(other\) was also considered. The details of both theme sets are available in Table 11. Since we utilized a keyword-based approach (a short illustrative sketch follows Table 11), a single tweet may belong to more than one category (in both theme sets). Therefore, the sums of the theme frequencies for the vaccine-supportive and vaccine-critical groups exceed 100%.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline \hline Theme Name & Description & Critical & Supportive \\ \hline Side Effects & Mentions of health impacts caused by vaccines & 43,608 (19.85\%) & 46,551 (13.72\%) \\ \hline Pharmaceuticals & Talks about vaccine names and companies making vaccines & 34,398 (15.66\%) & 47,506 (14.00\%) \\ \hline Political / Government & Conversations on governmental actions towards mass vaccination & 94,748 (43.14\%) & 135,095 (39.81\%) \\ \hline Vaccine Ingredients & Related to how vaccines are created and their materials & 10,293 (4.69\%) & 7,991 (2.35\%) \\ \hline Research Trials & References to experiments and lab works & 26,394 (12.02\%) & 62,252 (18.42\%) \\ \hline Religion & Topics on faith and religious practices & 9,793 (4.46\%) & 18,971 (5.59\%) \\ \hline Ineffectiveness / Hesitancy & Conversations on low vaccine impression and incapability to fight Covid-19 & 50,639 (23.05\%) & - \\ \hline Safety / Sufficiency & References to vaccine performance and ability & - & 88,627 (26.12\%) \\ \hline Disease Prevalence & Mentions of virus mutations over time & 4,756 (2.17\%) & 20,843 (6.14\%) \\ \hline Family & Expression of the concern for family members and relatives & 15,278 (6.96\%) & 28,253 (8.33\%) \\ \hline Foreign Countries & Talks of pandemic state in other countries and imported vaccines & 93,478 (42.56\%) & 79,794 (23.51\%) \\ \hline Lockdown Denial & Related to ignoring the pandemic and worldwide crisis & 5,602 (2.55\%) & - \\ \hline Pandemic Confirmation & Relevant to accepting the pandemic & - & 88,913 (26.20\%) \\ \hline Mandatory vaccination & Criticism of forced vaccination and encouragements & 15,616 (7.11\%) & - \\ \hline Influential Users & Mentions of influencers and their actions towards vaccination & 15,334 (6.98\%) & 31,970 (9.42\%) \\ \hline Vaccine Alternatives & Other vaccine substitutes, their advantages and disadvantages & 4,914 (2.24\%) & 2,406 (0.71\%) \\ \hline Medics and Hospitals & Relevant to doctors and other treatment staff & 37,053 (16.87\%) & 48,472 (14.28\%) \\ \hline Hope / Envy & Expressions of impatience towards receiving vaccination & - & 37,142 (10.95\%) \\ \hline Availability & Demanding public vaccination from authorities & - & 9,467 (2.79\%) \\ \hline Others & Not categorized in any of themes & 31,953 (14.53\%) & 45,659 (13.45\%) \\ \hline \hline \end{tabular}
\end{table}
Table 11: Brief Description of Vaccine Themes
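The keyword-based theme assignment mentioned above can be illustrated with a few lines of code. The sketch below is ours, and the keyword lists are invented placeholders rather than the study's actual lists; it only shows why a single tweet can contribute to several themes at once.

```python
# Illustrative multi-label, keyword-based theme assignment; keyword lists are placeholders.
THEME_KEYWORDS = {
    "Side Effects": ["side effect", "fever", "blood clot"],
    "Pharmaceuticals": ["pfizer", "sinopharm", "astrazeneca"],
    "Foreign Countries": ["imported vaccine", "abroad"],
}

def themes_of(tweet_text):
    text = tweet_text.lower()
    return [theme for theme, keywords in THEME_KEYWORDS.items()
            if any(kw in text for kw in keywords)]

print(themes_of("Got my Pfizer dose today, only a mild fever as a side effect"))
# -> ['Side Effects', 'Pharmaceuticals'] : one tweet, two themes
```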
Figure 6 illustrates the correlation of themes for both supportive and critical groups. The left correlation matrix belongs to supportive themes, and the right one represents the critical side. There are several strong relationships that are worth mentioning, which are as follows:
* Influencers and Political (Both supportive and critical): Most of the tweets concerning influencers, such as actors and officials, regard their reactions and decisions toward Covid-19 in light of the political situation in Iran.
* Prevalence and Pandemic Confirmation (Supportive): As Covid-19 prevalence and mutations affect more and more people, acceptance of the pandemic and supportive opinions about taking vaccines increase.
* Ingredients and Side Effects (Supportive): Discussion of vaccine ingredients usually implies concerns about short- or long-term impacts on human health, which is why most tweets discussing ingredients also refer to side effects.
* Denial and Ineffectiveness (Critical): Ignoring the pandemic goes hand in hand with disregarding the Covid-19 crisis. On the one hand, people who deny the Coronavirus may also tend to deny vaccines and their effectiveness; on the other hand, they may consider both Covid-19 and the vaccines a delusion.
* Religious and Political (Critical): Tweets containing spiritual concepts discuss Covid-19 from a religious viewpoint. According to the results, these tweets tend to relate the political decisions on vaccination in Iran to religious instructions.
### User Interaction Analysis
Assessing users' mindsets behind their tweets led us to categorize them into four different groups: anti-vaccination, pro-vaccination, neutral, and mixed. Figure 7 presents the flow of changes in anti, pro, and mixed classes based on
Figure 6: Support (a) and Opposition (b) Themes Correlation
the relative percent of monthly coverage for each group during the Covid-19 pandemic.
As shown, between the introduction of Coronavirus vaccines in Iran and the beginning of public vaccination (February to June 2021), the percentage of anti-vaccination users was 4.18% lower than in the months when vaccination was in progress, and 2.78% lower than in the months prior to the vaccine introduction. On the other hand, analyzing the pro-vaccination users showed that, in the period from the vaccine introduction up to the end of 2021, the percentage of vaccination supporters increased by 9.67% compared to the time before February 2021.
By analyzing the results, we observed that vaccination and its outcomes helped reduce criticism of vaccines. To evaluate the activity of each group, we calculated the ratio of the number of tweets to the number of users for both the supportive and the critical group. According to Figure 8, the average ratio for the critical group is 0.2 (15.7%) higher than for the supportive group. This difference is even more pronounced between the introduction of vaccines and the start of public vaccination. From this we can infer that during this period people understood that vaccination was inevitable; hence their opposition and hesitancy were expressed even more strongly. On the other side, those who agreed with vaccination voiced their thoughts more widely than before.
After the initiation of public vaccination, there was a considerable fall in the rate of the critical group, showing that the recovery results convinced some critics to accept the efficacy of Covid-19 vaccines. However, judging from the slight decline in the supportive group, it appears that the vaccination results were not as promising as supporters had expected.
In the next step, we evaluated influencers by considering user interactions, which include replies and mentions. As previously mentioned, the top 40
Figure 7: User Classes During Covid-19 Pandemic
users with the highest rate of interaction in each month were labeled as influencers. Figure 9 shows the classification of such users during the pandemic. By looking at the number of influencers categorized as pro-vaccination and anti-vaccination, we discovered that vaccine-critical influencers made up 7.91% of the whole influencer population before the introduction of Covid-19 vaccines in Iran; afterwards, this share rose to 8.63%. On the other side, the share of vaccine-supportive influencers increased from 16.04% to 18.18%. From these observations, we can infer that the dissemination of vaccines resulted in more non-neutral tweets and conversations from influencers, as factors such as efficacy and side effects became much more apparent than before and users became more opinionated.
In order to summarize the overall interactions and the impact of the vaccination program, we created two networks, one for the period before (\(BV\)) and one for the period after (\(AV\)) the start of public vaccination (June 2021) in Iran, represented in Figure 10. For these networks, we excluded users who had fewer than 350 interactions in each of the two periods. Green nodes represent pro-vaccination users and red ones anti-vaccination users. Neutral and mixed users
Figure 8: Tweet to User Ratio
Figure 9: Top Influencers Per Month
appear as blue and gray nodes, respectively. Furthermore, some users are not found in our separately gathered dataset of users; these might be non-Persian users mentioned or replied to by others and are shown in black. Moreover, the number of connections of a user is reflected by the node diameter.
According to our evaluations, before June 2021, anti-vaccination users constituted 7.11% of all users, while they formed only 4.73% after that time. Likewise, pro-vaccination members accounted for 12.44% before June 2021, whereas they made up 9.82% afterward. These two trends disagree with what we observed for the influencers, meaning that ordinary users from both sides of the argument became less fixated on their positions and, on average, either posted less content or took relatively neutral stances toward vaccination.
Table 12 shows the overall statistics of both networks. Based on the in-degree and density measures, we observed that users tend to receive fewer mentions and replies after public vaccination compared to the previous period. Similarly, the rate of contribution to vaccine-related tweets decreased. These results show that after public vaccination and its observable effects, the level of Covid-related reactions in tweets decreased significantly. Nevertheless, the share of top influencers (denoted with a large diameter) increased, especially among anti-vaccination and pro-vaccination users, indicating that substantive discussions among prominent members intensified.
Furthermore, consistent with the tendency of similar nodes to connect, as captured by the homophily measure, we observed that before vaccination the discussion was dominated by exchanges among users with similar views, driven by influencers such as news accounts. After vaccination, as its healing outcomes and side effects became visible, controversy among groups with different viewpoints increased.
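The graph measures referred to above (in-degree, density, and homophily) are standard network statistics; the following sketch shows how they could be computed with networkx. The edge list and the "stance" node attribute are placeholder data, not the study's actual interaction network.

```python
# Sketch of the network measures discussed above; edges and stances are placeholder data.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("u1", "u2"), ("u3", "u2"), ("u2", "u1"), ("u4", "u3")])
nx.set_node_attributes(g, {"u1": "pro", "u2": "anti", "u3": "pro", "u4": "neutral"}, name="stance")

in_degrees = dict(g.in_degree())
print("density:", nx.density(g))
print("mean in-degree:", sum(in_degrees.values()) / g.number_of_nodes())
# Homophily: do users interact mostly with users sharing their stance?
print("stance assortativity:", nx.attribute_assortativity_coefficient(g, "stance"))
```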
Figure 10: User Interactions before (a) and after (b) Public Vaccination in Iran. Red nodes show anti-vaccination users, green ones pro-vaccination users, blue nodes neutral users, gray nodes mixed users, and black nodes unclassified users.
## 6 Conclusion
In this study, using a keyword-based method, we extracted Covid-19-related tweets and performed topic modeling to identify the main subjects discussed around Covid-19. Combining the topic modeling results with a keyword-based search, we obtained vaccine-related tweets posted in Iran during the Coronavirus pandemic up to the end of 2021. We then classified the vaccine-related tweets into vaccine-critical, neutral, and vaccine-supportive groups and extracted the main themes discussed around Covid-19 vaccination.
Moreover, we carried out a happiness and anger analysis to further evaluate public opinion toward vaccination. Afterwards, we performed a range of analyses to assess how users reacted to the evolution of Covid-19 vaccines. The results demonstrate the immense potential of online platforms to provide insight into people's reactions to a crisis and how their behavior evolves. Although utilizing data from such platforms to understand the public response to Covid-19 has been explored to a certain degree, this study is among the first to address the issue in the Persian language. Future work includes a more comprehensive analysis of network properties and structures, such as community detection, to gain a richer understanding of influential users and their connections. Furthermore, we did not segregate real accounts from fake users and bots; an accurate methodology to exclude bots from the user base would enable more robust insights into user behavior. Another important topic related to bots is their influence in steering society's way of thinking about vaccination and social matters in general. Studying their presence, the attributes that separate them from normal users, and the content they spread can be explored in the future, so that more cohesive and reliable content can be handed to people searching for information.
|
2303.13344 | Stochastic Decision Petri Nets | We introduce stochastic decision Petri nets (SDPNs), which are a form of
stochastic Petri nets equipped with rewards and a control mechanism via the
deactivation of controllable transitions. Such nets can be translated into
Markov decision processes (MDPs), potentially leading to a combinatorial
explosion in the number of states due to concurrency. Hence we restrict
ourselves to instances where nets are either safe, free-choice and acyclic nets
(SAFC nets) or even occurrence nets and policies are defined by a constant
deactivation pattern. We obtain complexity-theoretic results for such cases via
a close connection to Bayesian networks, in particular we show that for SAFC
nets the question whether there is a policy guaranteeing a reward above a
certain threshold is $\mathsf{NP}^\mathsf{PP}$-complete. We also introduce a
partial-order procedure which uses an SMT solver to address this problem. | Florian Wittbold, Rebecca Bernemann, Reiko Heckel, Tobias Heindel, Barbara König | 2023-03-23T15:22:34Z | http://arxiv.org/abs/2303.13344v1 | # Stochastic Decision Petri Nets
###### Abstract
We introduce stochastic decision Petri nets (SDPNs), which are a form of stochastic Petri nets equipped with rewards and a control mechanism via the deactivation of controllable transitions. Such nets can be translated into Markov decision processes (MDPs), potentially leading to a combinatorial explosion in the number of states due to concurrency. Hence we restrict ourselves to instances where nets are either safe, free-choice and acyclic nets (SAFC nets) or even occurrence nets and policies are defined by a constant deactivation pattern. We obtain complexity-theoretic results for such cases via a close connection to Bayesian networks, in particular we show that for SAFC nets the question whether there is a policy guaranteeing a reward above a certain threshold is \(\mathsf{NP}^{\mathsf{PP}}\)-complete. We also introduce a partial-order procedure which uses an SMT solver to address this problem.
## 1 Introduction
State-based probabilistic systems are typically modelled as Markov chains [28], i.e., transition systems where transitions are annotated with probabilities. This admits an intuitive graphical visualization and efficient analysis techniques [17]. By introducing additional non-determinism, one can model a system where a player can make decisions, enriched with randomized choices. This leads to the well-studied model of Markov decision processes (MDPs) [6, 15] and the challenge is to synthesize strategies that maximize the reward of the player.
In this paper we study stochastic systems enriched with a mechanism for decision making in the setting of concurrent systems. Whenever a system exhibits a substantial amount of concurrency, i.e., events that may potentially happen in parallel, compiling it down to a state-based system - such as an MDP - can result in a combinatorial state explosion and a loss in efficiency of MDP-based methods. We base our models on stochastic Petri nets [21], where Petri nets are a standard formalism for modelling concurrent systems, especially such systems where resources are generated and consumed. When considering the discrete-time semantics of such stochastic nets, it is conceptually easy to transform them into Markov chains, but this typically leads to a state space explosion.
There exist successful partial order methods for analyzing concurrent systems that avoid explicit interleavings and the enumeration of all reachable states. Instead, they work with partial orders - instead of total orders - of events. While
such techniques are well understood in the absence of random choices, leading for instance to methods such as unfoldings [14], there are considerable difficulties to reconcile probability and partial order. Progress has been made by the introduction of the concept of branching cells [1] that encapsulate independent choices, but to our knowledge there is no encompassing theory that provides off-the-shelf partial order methods for computing the probability of reaching a certain goal (e.g. marking a certain place) in a stochastic net.
The contributions of this paper are the introduction of a new model, stochastic decision Petri nets (SDPNs), and its connection to Markov decision processes (MDPs). The transformation of SDPNs into MDPs is relatively straightforward, but may lead to state space explosion, i.e., exponentially many markings, due to the concurrency inherent in the Petri net. This can make the computation of the optimal policy infeasible. We restrict ourselves to a subclass of nets which are safe, acyclic and free-choice (SAFC) and to constant policies, and study the problem of determining a policy that guarantees a payoff above some bound. Our result is that the problem SAFC-POL of determining such a policy, despite the restrictions, is still NP\({}^{\mathsf{PP}}\)-complete. We reduce from the D-MAP problem for Bayesian networks [24] (in fact the two problems are interreducible under mild restrictions) and show the close connection between reasoning about stochastic Petri nets and reasoning about Bayesian networks. Furthermore, for SAFC nets, there is a partial-order solution procedure via an SMT solver, for which we obtain encouraging runtime results. For the simpler free-choice occurrence nets, we obtain an NP-completeness result.
Note that the main body of the paper contains some proof sketches, while full proofs and an additional example can be found in the appendix.
## 2 Preliminaries
By \(\mathbb{N}\) we denote the natural numbers without \(0\), while \(\mathbb{N}_{0}\) includes \(0\).
Given two sets \(X,Y\) we denote by \((X\to Y)\) the set of all functions from \(X\) to \(Y\). Given a function \(f\colon X\to\mathbb{N}_{0}\) or \(f\colon X\to\mathbb{R}\) with \(X\) finite, we define \(\|f\|_{\infty}=\max_{x\in X}f(x)\) and \(\operatorname{supp}(f)=\{x\in X\mid f(x)\neq 0\}\).
Complexity Classes: In addition to well-known complexity classes such as P and NP, our results also refer to PP (see [23]). This class is based on the notion of a probabilistic Turing machine, i.e., a non-deterministic Turing machine whose transition function is enriched with probabilities, which means that the acceptance function becomes a random variable. A language \(L\) lies in PP if there exists a probabilistic Turing machine \(M\) with polynomial runtime on all inputs such that a word \(w\in L\) iff it is accepted with probability strictly greater than \(\nicefrac{{1}}{{2}}\). As probabilities we only allow numbers \(\rho\) that are efficiently computable, meaning that the \(i\)-th bit of \(\rho\) is computable in a time polynomial in \(i\). (See [2] for a discussion on why such probabilistic Turing machines have equal expressivity with those based on fair coins, which is not the case if we allow arbitrary numbers.)
Given two complexity classes \(A,B\) and their corresponding machine models, by \(A^{B}\) we denote the class of languages that are solved by a machine of class
\(A\), which is allowed to use an oracle answering yes/no-questions for a language \(L\in B\) at no extra cost in terms of time or space complexity. In particular \(\mathsf{NP}^{\mathsf{PP}}\) denotes the class of languages that can be accepted by a non-deterministic Turing machine running in polynomial time that can query a black box oracle solving a problem in \(\mathsf{PP}\).
By Toda's theorem [27], a polynomial time Turing machine with a \(\mathsf{PP}\) oracle (\(\mathsf{P}^{\mathsf{PP}}\)) can solve all problems in the polynomial hierarchy.
In order to prove hardness results we use the standard polynomial-time many-one reductions, denoted by \(A\leq_{p}B\) for problems \(A,B\) (see [16]).
Stochastic Petri Nets: A stochastic Petri net [21] is given by a tuple \(N=(P,T,\ ^{\bullet}(\,),(\,)^{\bullet},\Lambda,m_{0})\) where \(P\) and \(T\) are finite sets of places and transitions, \({}^{\bullet}(\,),(\,)^{\bullet}:T\to(P\to\mathbb{N}_{0})\) determine for each transition its pre-set and post-set including multiplicities, \(\Lambda\colon T\to\mathbb{R}_{>0}\) defines the firing rates and \(m_{0}\colon P\to\mathbb{N}_{0}\) is the initial marking. By \(\mathcal{M}(N)\) we denote the set of all markings of \(N\), i.e., \(\mathcal{M}(N)=(P\to\mathbb{N}_{0})\).
We will only consider the discrete-time semantics of such nets. The firing rates determine stochastically which transition is fired in a marking where multiple transitions are enabled: When transitions \(t_{1},\dots,t_{n}\in T\) are enabled in a marking \(m\in\mathcal{M}(N)\) (i.e., \({}^{\bullet}t_{i}\leq m\) pointwise), then transition \(t_{i}\) fires with probability \(\Lambda(t_{i})/\sum_{j=1}^{n}\Lambda(t_{j})\), resulting in a discrete step \(m\to_{t_{i}}m^{\prime}\coloneqq m-{}^{\bullet}t_{i}+{t_{i}}{}^{\bullet}\). In particular, the firing rates have no influence on the reachability set \(\mathcal{R}(N)\coloneqq\{m\in\mathcal{M}(N)\mid m_{0}\to^{*}m\}\) but only define the probability of reaching certain places or markings. Defining "empty" transitions \(m\to_{\varepsilon}m\) for markings \(m\in\mathcal{R}(N)\) where no transition is enabled, such a stochastic Petri net can be interpreted as a Markov chain on the set of markings \(\mathcal{M}(N)\).
This Markov chain thus generates a (continuous) probability space over sequences \((m_{0},m_{1},\dots)\in\mathcal{M}(N)^{\omega}\) where a sequence is called valid if \(m_{0}\) is the initial marking of the Petri net and for a prefix \((m_{0},\dots,m_{n})\) all cones \(\{(m^{\prime}_{0},m^{\prime}_{1},\dots)\in\mathcal{M}(N)^{\omega}\mid\forall k =0,\dots,n:m^{\prime}_{k}=m_{k}\}\) have non-zero probability. We write \(\mathcal{FS}(N)\coloneqq\{\mu\in\mathcal{M}(N)^{\omega}\mid\mu\text{ is valid}\}\) to denote the set of valid sequences. We assume that no two transitions have the same pre- and postconditions to have a one-to-one-correspondence between valid sequences and firing sequences \(\mu:(m_{0}\to_{t_{1}}m_{1}\to_{t_{2}}\dots)\).
For a firing sequence \(\mu\), we write \(\mu^{k}:m_{0}\to_{t_{1}}m_{1}\to_{t_{2}}\dots\to_{t_{k}}m_{k}\) to denote the finite subsequence of the first \(k\) steps, \(\operatorname{len}(\mu)\coloneqq\min\{k\in\mathbb{N}\mid t_{k}=\varepsilon\}-1\), for its length, as well as
\[pl(\mu)\coloneqq\bigcup_{n=0}^{\infty}\operatorname{supp}(m_{n})\qquad\qquad tr (\mu)\coloneqq\{t_{n}\mid n\in\mathbb{N}\}\setminus\{\varepsilon\}\]
to denote the set of places reached in \(\mu\) (or, analogously, \(\mu^{k}\)), and the set of fired transitions in \(\mu\) (independent of their firing order), respectively.
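For intuition, the discrete-time firing rule above can be simulated directly; the following sketch (our own illustration, not part of the paper) performs one probabilistic step of a stochastic Petri net given as dictionaries of pre-sets, post-sets and rates.

```python
# One discrete-time step: among the enabled transitions, t fires with probability
# Lambda(t) / (sum of Lambda over all enabled transitions).
import random

def enabled(marking, pre):
    """marking and pre[t] are dicts mapping places to token counts."""
    return [t for t, need in pre.items()
            if all(marking.get(p, 0) >= n for p, n in need.items())]

def fire_step(marking, pre, post, rates, rng=random):
    ts = enabled(marking, pre)
    if not ts:                                   # no transition enabled: empty step
        return marking, None
    t = rng.choices(ts, weights=[rates[u] for u in ts], k=1)[0]
    new = dict(marking)
    for p, n in pre[t].items():
        new[p] -= n
    for p, n in post[t].items():
        new[p] = new.get(p, 0) + n
    return new, t

# Tiny example: two transitions compete for the single token on place "p".
pre = {"t1": {"p": 1}, "t2": {"p": 1}}
post = {"t1": {"q": 1}, "t2": {"r": 1}}
print(fire_step({"p": 1}, pre, post, {"t1": 2.0, "t2": 1.0}))
```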
We are, furthermore, interested in the following properties of Petri nets: A Petri net \(N\) as above is called
* _ordinary_ iff all transitions require and produce at most one token in each place \((\|\,^{\bullet}t\|_{\infty},\|t^{\bullet}\|_{\infty}\leq 1\) for all \(t\in T)\);
* _safe_ iff it is ordinary and all reachable markings also only have at most one token in each place \((\|m\|_{\infty}\leq 1\) for all \(m\in\mathcal{R}(N))\);
* _acyclic_ iff the transitive closure \(\prec_{N}^{+}\) of the causal relation \(\prec_{N}\) (with \(p\prec_{N}t\) if \(\,^{\bullet}t(p)>0\) and \(t\prec_{N}p\) if \(t^{\bullet}(p)>0\)) is irreflexive;
* an _occurrence net_ iff it is safe, acyclic, free of backward conflicts (all places have at most one predecessor transition, i.e., \(|\{t\mid t^{\bullet}(p)>0\}|\leq 1\) for all \(p\in P\)) and self-conflicts (for \(x\in P\cup T\), there exist no two distinct conflicting transitions \(t,t^{\prime}\in T\), i.e., transitions sharing preconditions, on which \(x\) is causally dependent, i.e., \(t,t^{\prime}\prec_{N}^{+}x\)), and the initial marking has no causal predecessors (for all \(p\in P\) with \(m_{0}(p)=1\), we have \(t^{\bullet}(p)=0\) for all \(t\in T\));
* _free-choice_[13] iff it is ordinary and all transitions \(t,t^{\prime}\in T\) are either both enabled or disabled in all markings (i.e., \(\,^{\bullet}t=^{\bullet}t^{\prime}\) or \(\operatorname{supp}(^{\bullet}t)\cap\operatorname{supp}(^{\bullet}t^{\prime})=\emptyset\));
* \(\varphi\)_-bounded_ (for \(\varphi\colon\mathbb{N}_{0}\to\mathbb{N}_{0}\)) iff all its runs, starting from \(m_{0}\), have at most length \(\varphi(|P|+|T|)\), i.e., iff \(\operatorname{len}(\mu)\leq\varphi(|P|+|T|)\) for all firing sequences \(\mu\in\mathcal{FS}(N)\).
We will abbreviate the class of free-choice occurrence Petri nets as FCON, safe and acyclic free-choice nets as SAFC nets, and the class of \(\varphi\)-bounded Petri nets as \([\varphi]\)BPN. Note that \(\operatorname{FCON}\subseteq\operatorname{SAFC}\) and also \(\operatorname{SAFC}\subseteq[id]\)BPN for the identity _id_.4
Footnote 4: Indeed, \([id]\)BPN contains any safe and acyclic Petri net, omitting the free-choice constraint.
We also introduce some notation specifically for SAFC nets: As common in the analysis of safe Petri nets, we will interpret markings as well as pre- and postconditions of transitions as subsets of the set \(P\) of places rather than functions \(P\to\{0,1\}\subseteq\mathbb{N}_{0}\).
The set of maximal configurations will be denoted by \(\mathcal{C}^{\omega}(N)\coloneqq\{\mathit{tr}(\mu)\mid\mu\in\mathcal{FS}(N)\}\) and configurations by \(\mathcal{C}(N)\coloneqq\{\mathit{tr}(\mu^{k})\mid\mu\in\mathcal{FS}(N),k\in \mathbb{N}_{0}\}\).
An important notion in the analysis of a (free-choice) net is that of branching cells (see also [8, 1]). We will define a cell to be a subset of transitions \(\mathbb{C}\subseteq T\) where all transitions \(t\in\mathbb{C}\) share their preconditions and all \(t^{\prime}\in T\setminus\mathbb{C}\) share no preconditions with \(t\in\mathbb{C}\). In other words, \(\mathbb{C}\) is an equivalence class of a relation \(\leftrightarrow\) on \(T\) defined by
\[\forall t,t^{\prime}\in T:t\leftrightarrow t^{\prime}\Longleftrightarrow\, ^{\bullet}t=\,^{\bullet}t^{\prime}.\]
We will write \(\mathbb{C}_{t}\coloneqq[t]^{\leftrightarrow}\) to denote the equivalence class of transition \(t\in T\) and \(\,^{\bullet}\mathbb{C}\coloneqq\bigcup_{t\in\mathbb{C}}\,^{\bullet}t\) as well as \(\mathbb{C}^{\bullet}\coloneqq\bigcup_{t\in\mathbb{C}}t^{\bullet}\) to denote the sets of pre- and postplaces of \(\mathbb{C}\), respectively. The set of all cells of a net \(N\) is denoted by \(\mathit{BC}(N)\).
_Markov decision processes:_ A Markov decision process (MDP) is a tuple \((S,A,\delta,r,s_{0})\) consisting of finite sets \(S\), \(A\) of states and actions, a function \(\delta\colon S\times A\to\mathcal{D}(S)\) of probabilistic transitions (where \(\mathcal{D}(S)\) is the set of probability distributions on \(S\)), a reward function \(r\colon S\times A\times S\to\mathbb{R}\) of rewards and an initial state \(s_{0}\in S\) (see also [6, 15]).
A policy (or strategy) for an MDP is some function \(\pi\colon S\to A\). It has been shown that such stationary deterministic policies can act optimally in such an (infinite-horizon) MDP setting (see also [15]). A policy gives rise to a Markov chain on the set of states with transitions \(s\mapsto\delta(s,\pi(s))\in\mathcal{D}(S)\). The associated probability space is \(s_{0}S^{\omega}\), the set of all infinite paths on \(S\) starting with \(s_{0}\), which - due to its uncountable nature - has to be dealt with using measure-theoretic concepts. As before we equip the probability space with a \(\sigma\)-algebra generated by all cones, i.e., all sets of words sharing a common prefix.
The value (or payoff) of a policy \(\pi\) is then given as the expectation of the (undiscounted) total reward (where \(\mathbf{s}_{i}\), \(i\in\mathbb{N}_{0}\) are random variables, mapping an infinite path to the \(i\)-th state, i.e., they represent the underlying Markov chain):
\[\mathbb{E}\left[\sum_{n\in\mathbb{N}_{0}}r(\mathbf{s}_{n},\pi(\mathbf{s}_{n}), \mathbf{s}_{n+1})\right].\]
To avoid infinite values, we have to assume that the sum is bounded.
The problem of finding an optimal policy \(\pi\colon S\to A\) for a given MDP \((S,A,\delta,r,s_{0})\) with finite state and action space is known to be solvable in polynomial time using linear programming [15, 19].
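Besides the exact linear-programming solution, the value of a fixed stationary policy can also be estimated by simulation. The following is a rough Monte Carlo sketch (our own illustration) under the assumption that the total reward is bounded; runs are cut off at a fixed horizon.

```python
# Monte Carlo estimate of the undiscounted total reward of a stationary policy pi.
# delta(s, a) is assumed to return a dict {next_state: probability}, r(s, a, s_next) a float.
import random

def estimate_value(s0, pi, delta, r, episodes=10_000, horizon=1_000, rng=random):
    total = 0.0
    for _ in range(episodes):
        s, acc = s0, 0.0
        for _ in range(horizon):
            a = pi(s)
            dist = delta(s, a)
            s_next = rng.choices(list(dist), weights=list(dist.values()), k=1)[0]
            acc += r(s, a, s_next)
            if s_next == s and dist.get(s, 0.0) == 1.0:   # absorbing state: stop early
                break
            s = s_next
        total += acc
    return total / episodes
```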
Bayesian Networks: Bayesian networks are graphical models that give compact representations of discrete probability distributions, exploiting the (conditional) independence of random variables.
A (finite) probability space \((\Omega,\mathbb{P})\) consists of a finite set \(\Omega\) and a probability function \(\mathbb{P}\colon\Omega\to[0,1]\) such that \(\sum_{\omega\in\Omega}\mathbb{P}(\omega)=1\). A Bayesian network [25] is a tuple \((X,\Delta,P)\) where
* \(X=(X_{i})_{i=1,\ldots,n}\) is a (finite) family of random variables \(X_{i}\colon\Omega\to V_{i}\), where \(V_{i}\) is finite.
* \(\Delta\subseteq\{1,\ldots,n\}\times\{1,\ldots,n\}\) is an acyclic relation that describes dependencies between the variables, i.e., its transitive closure \(\Delta^{+}\) is irreflexive. By \(\Delta^{i}=\{j\mid(j,i)\in\Delta\}\) we denote the parents of node \(i\) according to \(\Delta\).
* \(P=(P_{i})_{i=1,\ldots,n}\) is a family of probability matrices \(P_{i}\colon\prod_{j\in\Delta^{i}}V_{j}\to\mathcal{D}(V_{i})\), whose entries are given by \(P_{i}(v_{i}\mid(v_{j})_{j\in\Delta^{i}})\).
A probability function \(\mathbb{P}\) is consistent with such a Bayesian network whenever for \(v=(v_{i})_{i=1,\ldots,n}\in\prod_{i=1}^{n}V_{i}\) we have
\[\mathbb{P}(X=v)=\prod_{i=1}^{n}P_{i}(v_{i}\mid(v_{j})_{j\in\Delta^{i}}).\]
The size of a Bayesian network is not just the size of the graph, but the sum of the size of all its matrices (where the size of an \(m\times n\)-matrix is \(m\cdot n\)). In particular, note that a node with \(k\) parents in a binary Bayesian network (i.e., with \(|V_{i}|=2\) for all \(i\)) is associated with a \(2\times 2^{k}\) probability matrix.
**Example 2.1**.: _An example Bayesian network is given in Figure 1. There are four random variables (\(a,b,c,d\)) with codomain \(\{0,1\}\). The tables in the figure denote the conditional probabilities, for instance \(P_{d}(0\mid 01)=\mathbb{P}(X_{d}=0\mid X_{a}=0,X_{b}=1)=\nicefrac{{1}}{{6}}\), i.e., one records the probability that a random variable has a certain value, dependent on the value of its parents in the graph. The probability \(\mathbb{P}(X=0100)=\mathbb{P}(X_{a}=0,X_{b}=1,X_{c}=0,X_{d}=0)\) is obtained by multiplying \(P_{a}(0)\cdot P_{b}(1)\cdot P_{c}(0\mid 0)\cdot P_{d}(0\mid 01)=\nicefrac{{1}}{{3}} \cdot\nicefrac{{1}}{{2}}\cdot\nicefrac{{2}}{{3}}\cdot\nicefrac{{1}}{{6}}= \nicefrac{{1}}{{54}}\)._
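To make the factorization concrete, the sketch below recomputes the probability from Example 2.1, using only the conditional-probability entries quoted there (the remaining entries of the tables in Figure 1 are not reproduced).

```python
# P(X = v) = prod_i P_i(v_i | parent values), evaluated on the assignment from Example 2.1.
from fractions import Fraction

parents = {"a": (), "b": (), "c": ("a",), "d": ("a", "b")}
# Only the CPT entries quoted in the text: (node, value, parent values) -> probability.
cpt = {
    ("a", 0, ()): Fraction(1, 3),
    ("b", 1, ()): Fraction(1, 2),
    ("c", 0, (0,)): Fraction(2, 3),
    ("d", 0, (0, 1)): Fraction(1, 6),
}

def joint_probability(assignment):
    prob = Fraction(1)
    for node, value in assignment.items():
        parent_values = tuple(assignment[p] for p in parents[node])
        prob *= cpt[(node, value, parent_values)]
    return prob

print(joint_probability({"a": 0, "b": 1, "c": 0, "d": 0}))   # 1/54
```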
We are interested in the following two problems for Bayesian networks (see also [24]):
* D-PR: Given the Bayesian network \((X,\Delta,P)\) and \(E=\{X_{i_{1}},\ldots,X_{i_{\ell}}\}\subseteq X\), \(e\in V_{E}\coloneqq\prod_{j=1}^{\ell}V_{i_{j}}\) (the evidence) and a rational \(p>0\), does it hold that \(\mathbb{P}(E=e)>p\)? This problem is known to be PP-complete [20].
* D-MAP: Given a Bayesian network \((X,\Delta,P)\), a rational number \(p>0\), disjoint subsets \(E,F\subseteq X\),5 and evidence \(e\in V_{E}\), does there exist \(f\in V_{F}\) such that \(\mathbb{P}(F=f,E=e)>p\), or, if \(\mathbb{P}(E=e)\neq 0\), equivalently, \(\mathbb{P}(F=f\mid E=e)>p\) (by adapting the bound \(p\))? It is known that this problem, also known as the maximum a-posteriori problem, is NP\({}^{\sf{PP}}\)-complete (see [20, 11]).
Footnote 5: The variables contained in \(F\) are called MAP variables.
The corresponding proof in [24] also shows that the D-MAP problem remains NP\({}^{\sf{PP}}\)-complete if \(F\) only contains uniformly distributed 'input' nodes, i.e., nodes \(X_{i}\) with \(\Delta^{i}=\emptyset\) and \(P_{i}(x_{i})=1/|V_{i}|\), as well as \(V_{i}=\{0,1\}\) for all \(i=1,\ldots,n\).
In particular, the following problem (where \(E,F\) are switched!) is still NP\({}^{\sf{PP}}\)-complete: Given a binary Bayesian network \((X,\Delta,P)\) (i.e., \(V_{i}=\{0,1\}\) for all \(i\)), a rational \(p>0\), disjoint subsets \(E,F\subseteq X\) where \(F\) only contains uniformly distributed input nodes, as well as evidence \(e\in V_{E}\), does there exist \(f\in V_{F}\) such that \(\mathbb{P}(E=e\mid F=f)>p\) (as \(\mathbb{P}(F=f)=1/2^{|F|}\) is independent of \(f\) and known due to uniformity)? We will, in the rest of this paper, refer to this modified problem as D-MAP instead of the original problem above.
**Example 2.2** (D-Map).: _Given the Bayesian Network in Figure 1 with \(F=\{X_{a}\}\) (MAP variable), \(E=\{X_{c},X_{d}\}\), \(e=(0,1)\in V_{c}\times V_{d}\) (evidence) and \(p=\nicefrac{{1}}{{3}}\), we ask whether \(\exists f\in\{0,1\}\colon\mathbb{P}(X_{c}=0,X_{d}=1\mid X_{a}=f)>\nicefrac{{1}} {{3}}\). When choosing \(f=1\in V_{a}\), the probability \(\mathbb{P}(X_{c}=0,X_{d}=1\mid X_{a}=1)=\nicefrac{{3}}{{4}}\cdot(\nicefrac{{1} }{{2}}\cdot\nicefrac{{3}}{{4}}+\nicefrac{{1}}{{2}}\cdot\nicefrac{{1}}{{3}})= \nicefrac{{13}}{{32}}>\nicefrac{{1}}{{3}}\) exceeds the bound. Note that to compute the value in this way, one has to sum up over all possible valuations of those variables that are neither evidence nor MAP variables, indicating that this is not a trivial task._
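A direct check of this example only requires summing out the unobserved variable \(X_{b}\); the sketch below uses exactly the probability entries appearing in the calculation above.

```python
# Check of Example 2.2 for the MAP choice X_a = 1: sum out the unobserved X_b.
from fractions import Fraction

p_b = {0: Fraction(1, 2), 1: Fraction(1, 2)}               # X_b is uniform
p_c0_given_a1 = Fraction(3, 4)                              # P(X_c = 0 | X_a = 1)
p_d1_given_a1_b = {0: Fraction(3, 4), 1: Fraction(1, 3)}    # P(X_d = 1 | X_a = 1, X_b = b)

prob = p_c0_given_a1 * sum(p_b[b] * p_d1_given_a1_b[b] for b in (0, 1))
print(prob, prob > Fraction(1, 3))   # 13/32 True
```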
Figure 1: A Bayesian Network
## 3 Stochastic decision Petri nets
We will enrich the definition of stochastic Petri nets to allow for interactivity, similar to how MDPs [6] extend the definition of Markov chains.
Definition 3.1: A stochastic decision Petri net (SDPN) is a tuple \((P,T,\,{}^{\bullet}(\,),(\,)^{\bullet},\Lambda,m_{0},C,R)\) where \((P,T,\,{}^{\bullet}(\,),(\,)^{\bullet},\Lambda,m_{0})\) is a stochastic Petri net; \(C\subseteq T\) is a set of controllable transitions; \(R\colon\mathcal{P}(P)\to\mathbb{R}\) is a reward function.
Here we describe the semantics of such SDPNs in a semi-formal way. The precise semantics is obtained by the encoding of SDPNs into MDPs in Section 4.
Given an SDPN, an external agent may in each step choose to manually deactivate any subset \(D\subseteq C\) of controllable transitions (regardless of whether their preconditions are fulfilled or not). As such, if transitions \(D\subseteq C\) are deactivated in marking \(m\in\mathcal{M}(N)\), the SDPN executes a step according to the semantics of the stochastic Petri net \(N_{D}=(P,T\setminus D,\,{}^{\bullet}(),()^{\bullet},\Lambda_{D},m_{0})\) where the pre- and post-set functions and \(\Lambda_{D}\) are restricted accordingly.
For all rewarded sets \(Q\in\operatorname{supp}(R)\), the agent receives an "immediate" reward \(R(Q)\) once all the places \(p\in Q\) have been reached at some point in the execution of the Petri net (although not necessarily simultaneously). In particular, any reward is only received once. Note that this differs from the usual definition of rewards as in MDPs, where a reward is received each time certain actions are taken in given states. However, logical formulae over reached places (such as "places \(p_{1}\) and \(p_{2}\) are reached without reaching place \(q\)") are more natural to represent by such one-time rewards instead of cumulative rewards.6 The framework can be extended to reward markings instead of places, but at the cost of an exponential explosion, since to be able to compute the one-time step-wise rewards not only the already reached places but the already reached markings would have to be memorized. Note that a reward need not be positive.
Footnote 6: Firings of transitions can also easily be rewarded by adding an additional place.
More formally, given a firing sequence \(\mu:m_{0}\to_{t_{1}}m_{1}\to_{t_{2}}\dots\), the agent receives a value or payoff of \(V(pl(\mu))\) where \(V(M)\coloneqq\sum_{Q\subseteq M}R(Q)\).
Example 3.2: As an example consider the SDPN in Figure 2. The objective is to mark both places coloured in yellow at some point in time (not necessarily at the same time). This can be described by a reward function \(R\) which assigns \(1\) to the set \(\{p_{4},p_{5}\}\) containing both yellow places and \(0\) to all other sets.
The transitions with double borders (\(t_{1},t_{2}\)) are controllable, and it turns out that the optimal strategy is to deactivate both \(t_{1}\) and \(t_{2}\) first, in order to let \(t_{5}\) or \(t_{6}\) mark either of the two goal places before reaching the marking \((1,1,0,0,0)\), from which it can no longer be determined which of the two goal places has been marked. An optimal strategy thus has to have knowledge of already achieved subgoals in terms of visited places. In this case, the strategy can deactivate the transition among \(t_{1},t_{2}\) that leads to the place already visited.
Figure 2: Example SDPN
Policies may be dependent on the current marking and the places accumulated so far. Now, for a given policy \(\pi:\mathcal{M}(N)\times\mathcal{P}(P)\to\mathcal{P}(C)\), determining the set \(\pi(m,Q)\subseteq C\) of deactivated transitions in marking \(m\) for the set \(Q\) of places seen so far, we consider the (continuous) probability space \(m_{0}\mathcal{M}(N)^{\omega}\), describing the infinite sequence \(m_{0}\to_{t_{1}}m_{1}\to_{t_{2}}\dots\) of markings generated by the Petri net under the policy \(\pi\) (i.e., if in step \(n\) the transitions \(D_{n}\coloneqq\pi(m_{n-1},\bigcup_{k=0}^{n-2}\operatorname{supp}(m_{k}))\) are deactivated).
Then we can consider the expectation of the random variable \(V\circ pl\), i.e.,
\[\mathbb{V}^{\pi}\coloneqq\mathbb{E}^{\pi}\left[V\circ pl\right],\]
over the probability space \(m_{0}\mathcal{M}(N)^{\omega}\). We will call this the value of \(\pi\) and, if \(\pi\equiv D\subseteq C\) is constant, simply write \(\mathbb{V}^{D}\) which we will call the value of \(D\).
For the complexity analyses we assume that \(R\) is only stored on its support, e.g., as a set \(R\subseteq\mathcal{P}(P)\times\mathbb{R}\), which we will interpret as a dictionary with entries \([Q:R(Q)]\) for some \(Q\subseteq P\), as for many problems of interest the size of the support of the reward function can be assumed to be polynomially bounded w.r.t. the set of places and transitions.
We consider the following problems for stochastic Petri nets, where we parameterize over a class \(\mathcal{N}\) of SDPNs and (for the second problem) over a class \(\Psi\subseteq(\mathcal{M}(N)\times\mathcal{P}(P)\to\mathcal{P}(C))\) of policies:
* \(\mathcal{N}\)-\(\mathsf{VAL}\): Given a rational \(p>0\), a net \(N\in\mathcal{N}\) and a policy \(\pi\in\Psi\) for \(N\), decide whether \(\mathbb{V}^{\pi}>p\).
* \(\mathcal{N}\)-\(\mathsf{POL}\): Given a rational \(p>0\) and a net \(N\in\mathcal{N}\), decide whether there exists a policy \(\pi\in\Psi\) such that \(\mathbb{V}^{\pi}>p\). Although parameterized over sets of policies, we will omit \(\Psi\) if it is clear from the context (in fact we will restrict to constant policies from Section 5 onwards).
## 4 Stochastic decision Petri nets as Markov decision processes
We now describe how to transform an SDPN into an MDP, thus fixing the semantics of such nets. For unbounded Petri nets, the resulting MDP has an infinite state space, but we will restrict to the finite case later.
**Definition 4.1**.: _Given an SDPN \(N=(P,T,F,\Lambda,C,R,m_{0})\) where \(m_{0}\) is not the constant zero function, the MDP for \(N\) is defined as the tuple \((S,A,\delta,r,s_{0})\) where_
* \(S=\mathcal{R}(N)\times\mathcal{P}(P)\) _(product of reachable markings and places collected),_
* \(A=\mathcal{P}(C)\) _(sets of deactivated transitions as actions),_
* \(\delta\colon(\mathcal{R}(N)\times\mathcal{P}(P))\times\mathcal{P}(C)\to \mathcal{D}(\mathcal{R}(N)\times\mathcal{P}(P))\)_, with_ \[\delta((m,Q),D)((m^{\prime},Q^{\prime}))\coloneqq\begin{cases}p(m^{\prime} \mid m,D)&\text{if }Q^{\prime}=Q\cup\operatorname{supp}(m),\\ 0&\text{otherwise,}\end{cases}\]
_where_ \[p(m^{\prime}\mid m,D)=\frac{\sum_{t\in En(m,D),m\rightarrow_{t}m^{\prime}} \Lambda(t)}{\sum_{t\in En(m,D)}\Lambda(t)}\] _whenever_ \(En(m,D)\coloneqq\{t\in T\backslash D\mid\ ^{\bullet}t\leq m\}\neq\emptyset\)_. If_ \(\text{En}(m,D)=\emptyset\)_, we set_ \(p(m^{\prime}\mid m,D)=1\) _if_ \(m=m^{\prime}\) _and_ \(0\) _if_ \(m\neq m^{\prime}\)_. That is,_ \(p(m^{\prime}\mid m,D)\) _is the probability of reaching_ \(m^{\prime}\) _from_ \(m\) _when transitions_ \(D\) _are deactivated._
* \(r\colon S\times A\times S\rightarrow\mathbb{R}\) _(reward function) with_ \[r((m,Q),D,(m^{\prime},Q^{\prime}))\coloneqq\begin{cases}\sum_{Q\subseteq Y \subseteq Q^{\prime}}R(Y)&\text{if }Q=\emptyset,\\ \sum_{Q\subsetneq Y\subseteq Q^{\prime}}R(Y)&\text{if }Q\neq\emptyset.\end{cases}\]
* \(s_{0}=(m_{0},\emptyset)\)__
The transition probabilities are determined as for regular stochastic Petri nets where we consider only the rates of those transitions that have not been deactivated and that can be fired for the given marking. If no transition is enabled, we stay at the current marking with probability \(1\).
Note that the reward for the places reached in a marking \(m\) is only collected when we fire a transition leaving \(m\). This is necessary as in the very first step we also obtain the reward for the empty set, which might be non-zero, and due to the fact that the initial marking is assumed to be non-empty, this reward for the empty set is only collected once.
The following result shows that the values of policies \(\pi:S\to A\) (note that these are exactly the policies for the underlying SDPN) over the MDP are equal to the ones over the corresponding SDPN.
**Proposition 4.2**.: _Let \(N=(P,T,F,\Lambda,C,R,m_{0})\) be an SDPN and \(M=(S,A,\delta,r,s_{0})\) the corresponding MDP. For any policy \(\pi:S\to A\), we have_
\[(\mathbb{V}^{\pi}=)\mathbb{E}^{\pi}\left[V\circ\text{pl}\right]=\mathbb{E}^{ \pi}\left[\sum_{n\in\mathbb{N}_{0}}r(\mathbf{s}_{n},\pi(\mathbf{s}_{n}), \mathbf{s}_{n+1})\right]\]
_where \((\mathbf{s}_{n})_{n}\) is the Markov chain resulting from following policy \(\pi\) in \(M\)._
This provides an exact semantics for SDPNs via MDPs. Note, however, that for analysis purposes, even for safe Petri nets, the reachability set \(\mathcal{R}(N)\) (as a subset of \(\mathcal{P}(P)\)) is generally of exponential size, whence the transformation into an MDP can in general only yield algorithms with exponential worst-case running time. Hence, we will now restrict to specific subproblems, and it will turn out that even with fairly severe restrictions on the type of net and the policies allowed, we obtain completeness results for complexity classes high in the polynomial hierarchy.
## 5 Complexity analysis for specific classes of Petri nets
For the remainder of this paper, we will consider the problem of finding optimal _constant_ policies for certain classes of nets. In other words, the agent chooses _before_ the execution of the Petri net which transitions to deactivate for its _entire_ execution. For a net \(N\), the policy space is thus given by
\[\Psi(N)=\{\pi:\mathcal{M}(N)\to\mathcal{P}(C)\mid\pi\equiv D\subseteq C\}\ \hat{=}\ \mathcal{P}(C).\]
Since one can non-deterministically guess the maximizing policy (there are only exponentially many) and compute its value, it is clear that the complexity of the policy optimization problem \(\mathcal{N}\)-POL is bounded by the complexity of the corresponding value problem \(\mathcal{N}\)-VAL as follows: If, for a given class \(\mathcal{N}\) of Petri nets, \(\mathcal{N}\)-VAL lies in the complexity class C, then \(\mathcal{N}\)-POL lies in \(\mathsf{NP}^{\mathsf{C}}\).
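As a naive baseline, this guess-and-evaluate argument can be made explicit by enumerating all constant policies; the sketch below treats the evaluation of \(\mathbb{V}^{D}\) as a black box (the function value_of is a placeholder for any exact or approximate computation of the value).

```python
# Exhaustive baseline over constant policies: try every deactivation pattern D ⊆ C.
# `value_of` is a placeholder for any way of computing (or estimating) V^D.
from itertools import chain, combinations

def all_subsets(C):
    items = list(C)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def best_constant_policy(C, value_of):
    candidates = [frozenset(D) for D in all_subsets(C)]
    best = max(candidates, key=value_of)
    return best, value_of(best)
```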
We will now show the complexity of these problems for the three Petri net classes FCON, SAFC, and \([\varphi]\)BPN and work out the connection to Bayesian networks. In the following we will assume that all probabilities are efficiently computable, allowing us to simulate all probabilistic choices with fair coins.
### Complexity of safe and acyclic free-choice decision nets
We will first consider the case of Petri nets where the length of runs is bounded.
**Proposition 5.1**.: _For any polynomial \(\varphi\), the problem \([\varphi]\mathsf{BPN}\)-VAL is in \(\mathsf{PP}\). In particular, \([\varphi]\mathsf{BPN}\)-POL is in \(\mathsf{NP}^{\mathsf{PP}}\)._
Proof (sketch).: Given a Petri net \(N\), a policy \(\pi\) and a bound \(p\), a \(\mathsf{PP}\)-algorithm for \([\varphi]\mathsf{BPN}\)-VAL can simulate the execution of the Petri net and calculate the resulting value, checking whether the expected value for \(\pi\) is greater than the pre-defined bound \(p\). For this, we have to suitably adapt the threshold (with an affine function \(\psi\)) so that the probabilistic Turing machine accepts with probability greater than \(\nicefrac{{1}}{{2}}\) iff the reward for the given policy is strictly greater than \(p\).
As the execution of the Petri net takes only polynomial time in the size of the Petri net (\(\varphi\)), this can be performed by a probabilistic Turing machine in polynomial time whence \([\varphi]\mathsf{BPN}\)-VAL lies in \(\mathsf{PP}\).
Since a policy can be guessed in polynomial time, we can also infer that \([\varphi]\mathsf{BPN}\)-POL is in \(\mathsf{NP}^{\mathsf{PP}}\).
This easily gives us the following corollary for SAFC nets.
**Corollary 5.2**.: _The problem \(\mathsf{SAFC}\)-\(\mathsf{VAL}\) is in \(\mathsf{PP}\) and \(\mathsf{SAFC}\)-POL in \(\mathsf{NP}^{\mathsf{PP}}\)._
Proof.: This follows directly from Proposition 5.1 and the fact that \(\mathsf{SAFC}\subseteq[id]\mathsf{BPN}\).
**Proposition 5.3**.: _The problem \(\mathsf{SAFC}\)-POL is \(\mathsf{NP}^{\mathsf{PP}}\)-hard and, therefore, also \(\mathsf{NP}^{\mathsf{PP}}\)-complete._
Proof (sketch): This can be proven via a reduction \(\mathsf{D\mbox{-}MAP}\leq_{p}\mathsf{SAFC\mbox{-}POL}\), i.e., representing the modified \(\mathsf{D\mbox{-}MAP}\) problem for Bayesian networks as a decision problem in safe and acyclic free-choice nets. \(\mathsf{NP}^{\mathsf{PP}}\)-completeness then follows together with Corollary 5.2. Note that we are using the restricted version of the \(\mathsf{D\mbox{-}MAP}\) problem as explained in Section 2 (uniformly distributed input nodes, binary values).
We sketch the reduction via an example: we take the Bayesian network in Figure 1 and consider a \(\mathsf{D\mbox{-}MAP}\) instance where \(E=\{X_{c},X_{d}\}\) (evidence, where we fix the values of \(c,d\) to be \(0,1\)), \(F=\{X_{a}\}\) (MAP variables) and \(p\) is a threshold. That is, the question being asked for the Bayesian network is whether there exists a value \(x\) such that \(\mathbb{P}(X_{c}=0,X_{d}=1\mid X_{a}=x)>p\).
This Bayesian network is encoded into the SAFC net in Figure 3, where transitions with double borders are controllable and the yellow places give a reward of \(1\) when both are reached (not necessarily at the same time). Transitions either have an already indicated rate of \(1\) or the rate can be looked up in the corresponding matrix of the BN. The rate of a transition \(t^{i}_{x_{1}x_{2}\to x_{3}}\) is the probability value \(P_{i}(x_{3}\mid x_{1}x_{2})\), where \(P_{i}\) is the probability matrix for \(i\in\{a,b,c,d\}\).
Intuitively the first level of transitions simulates the probability tables of \(P^{a},P^{b}\), the nodes without predecessors in the Bayesian network, where for instance the question of whether \(P^{a}_{0}\) or \(P^{a}_{1}\) are marked corresponds to the value of the random variable \(X_{a}\) associated with node \(a\). Since \(X_{a}\) is a MAP variable, its two transitions are controllable. Note that enabling both transitions will never give a higher reward than enabling only one of them. (This is due to the fact that \(\max\{x,y\}\geq p_{1}\cdot x+p_{2}\cdot y\) for \(p_{1},p_{2}\geq 0\) with \(p_{1}+p_{2}=1\).)
The second level of transitions (each with rate \(1\)) is inserted only to obtain a free-choice net by creating sufficiently many copies of the places in order to make all conflicts free-choice.
The third level of transitions simulates the probability tables of \(P^{c}\), \(P^{d}\), only to ensure the net being free-choice we need several copies. For instance, transition \(t^{c}_{0\to 0}\) consumes a token from place \(P^{a,c}_{0}\), a place specifically created for the entry \(P^{c}(c=0\mid a=0)\) in the probability table of node \(c\).
In the end the aim is to mark the places \(P^{c}_{0}\) and \(P^{d}_{1}\), and we can find a policy (deactivating either \(t^{a}_{()\to 0}\) or \(t^{a}_{()\to 1}\)) such that the probability of reaching both places exceeds \(p\) if and only if the \(\mathsf{D\mbox{-}MAP}\) instance specified above has a solution.
Figure 3: SAFC net corresponding to BN in Figure 1
This proof idea can be extended to more complex Bayesian networks, for a more formal proof see the appendix.
In fact, a reduction in the opposite direction (from Petri nets to Bayesian networks) is possible as well under mild restrictions, which shows that these problems are closely related.
Proposition 5.4: _For two given constants \(k,\ell\), consider the following problem: let \(N\) be a SAFC decision Petri net, where for each branching cell the number of controllable transitions is bounded by some constant \(k\). Furthermore, given its reward function \(R\), we assume that \(|\cup_{Q\in\operatorname{supp}(R)}Q|\leq\ell\). Given a rational number \(p\), does there exist a constant policy \(\pi\) such that \(\mathbb{V}^{\pi}>p\)?_
_This problem can be polynomially reduced to \(\mathsf{D}\text{-}\mathsf{MAP}\)._
Proof (sketch): We sketch the reduction, which is inspired by [8], via an example: consider the SAFC net in Figure 5, where the problem is to find a deactivation pattern such that the payoff exceeds \(p\). We encode the net into a Bayesian network (Figure 4), resulting in an instance of the \(\mathsf{D}\text{-}\mathsf{MAP}\) problem.
We have four types of random variables: place variables (\(X_{p}\), \(p\in P\)), which record which place is marked; transition variables (\(X_{t_{1}},X_{t_{5}},X_{t_{6}}\)), one for each controllable transition, which are the MAP variables; cell variables (\(X_{\mathbb{C}_{i}}\) for \(\mathbb{C}_{1}=\{t_{1},t_{2}\}\), \(\mathbb{C}_{2}=\{t_{3},t_{4}\}\), \(\mathbb{C}_{3}=\{t_{5},t_{6}\}\)) which are non-binary and which record which transition in the cell was fired or whether no transition was fired (\(\varepsilon\)); a reward variable (\(X_{rew}\)) such that \(\mathbb{P}(X_{rew}=1)\) equals the function \(\psi\) applied to the payoff. Note that we use the affine function \(\psi\) from the proof of Proposition 5.1 to represent rewards as probabilities in the interval \([0,1]\). The threshold for the \(\mathsf{D}\text{-}\mathsf{MAP}\) instance is \(\psi(p)\). Dependencies are based on the structure of the given SAFC net. For instance, \(X_{\mathbb{C}_{3}}\) is dependent on \(X_{p_{3}}\), \(X_{p_{4}}\) (since \(\,{}^{\bullet}\mathbb{C}_{3}=\{p_{3},p_{4}\}\)) and \(X_{t_{5}}\), \(X_{t_{6}}\) (since \(t_{5},t_{6}\) are the controllable transitions in \(\mathbb{C}_{3}\)).
Both the matrices of cell and place variables could become exponentially large; however, this problem can be resolved easily by dividing the matrices into smaller ones and cascading them. Since the number of controllable transitions is bounded by \(k\) and the number of rewarded places by \(\ell\), these will not cause an exponential blowup of the corresponding matrices.
Figure 4: Bayesian network obtained from the SAFC net in Figure 5 below. Entries \(*\) are ‘don’t-care’ values.
**Corollary 5.5**.: _The problem \(\mathsf{SAFC}\)-\(\mathsf{VAL}\) is \(\mathsf{PP}\)-hard and, therefore, also \(\mathsf{PP}\)-complete._
Proof.: We note that using the construction in the proof of Proposition 5.3 with the set \(F\) of MAP variables being empty, we can reduce the \(\mathsf{D}\)-\(\mathsf{PR}\) problem for Bayesian networks to the \(\mathsf{SAFC}\)-\(\mathsf{VAL}\) problem, showing that \(\mathsf{SAFC}\)-\(\mathsf{VAL}\) is \(\mathsf{PP}\)-hard. Using Corollary 5.2, this yields that \(\mathsf{SAFC}\)-\(\mathsf{VAL}\) is \(\mathsf{PP}\)-complete.
**Corollary 5.6**.: _For any polynomial \(\varphi:\mathbb{N}_{0}\to\mathbb{N}_{0}\) fulfilling \(\varphi(n)\geq n\) for all \(n\in\mathbb{N}_{0}\), the problem \([\varphi]\mathsf{BPN}\)-\(\mathsf{VAL}\) is \(\mathsf{PP}\)-complete and \([\varphi]\mathsf{BPN}\)-\(\mathsf{POL}\) is \(\mathsf{NP}^{\mathsf{PP}}\)-complete._
Proof.: As any safe and acyclic free-choice net is an \(id\)-bounded net, it is, in particular, a \(\varphi\)-bounded net with \(\varphi\) as above, and we have \(\mathsf{SAFC}\)-\(\mathsf{VAL}\leq_{p}[\varphi]\mathsf{BPN}\)-\(\mathsf{VAL}\) and \(\mathsf{SAFC}\)-\(\mathsf{POL}\leq_{p}[\varphi]\mathsf{BPN}\)-\(\mathsf{POL}\). Propositions 5.1 and 5.3, as well as Corollary 5.5, therefore show that \([\varphi]\mathsf{BPN}\)-\(\mathsf{VAL}\) is \(\mathsf{PP}\)-complete and \([\varphi]\mathsf{BPN}\)-\(\mathsf{POL}\) is \(\mathsf{NP}^{\mathsf{PP}}\)-complete.
### Complexity of free-choice occurrence decision nets
Now we further restrict SAFC nets to occurrence nets, which leads to a substantial simplification. The main reason for this is the absence of backwards-conflicts, which means that each place is uniquely generated, making it easier to trace causality, i.e., there is a unique minimal configuration that generates each place.
**Proposition 5.7**.: _The problem \(\mathsf{FCON}\)-\(\mathsf{VAL}\) is in \(\mathsf{P}\). In particular, \(\mathsf{FCON}\)-\(\mathsf{POL}\) is in \(\mathsf{NP}\)._
Proof (sketch).: Determining the probability of reaching a set of places \(Q\) in an occurrence net amounts to multiplying the probabilities of the transitions on which the places in \(Q\) are causally dependent. This can be done for every set \(Q\) in the support of the reward function \(R\), which enables us to determine the expected value in polynomial time, implying that \(\mathsf{FCON}\)-\(\mathsf{VAL}\) lies in \(\mathsf{P}\). By guessing a policy for an occurrence net with controllable transitions, we obtain that \(\mathsf{FCON}\)-\(\mathsf{POL}\) lies in \(\mathsf{NP}\).
**Proposition 5.8**.: _The problem \(\mathsf{FCON}\)-\(\mathsf{POL}\) is \(\mathsf{NP}\)-hard and, therefore, also \(\mathsf{NP}\)-complete._
Proof (sketch).: To show \(\mathsf{NP}\)-hardness we reduce \(\mathsf{3}\)-\(\mathsf{SAT}\) (the problem of deciding the satisfiability of a propositional formula in conjunctive normal form with at most three literals per clause) to \(\mathsf{FCON}\)-\(\mathsf{POL}\). Given a formula \(\psi\), this is done by constructing a simple occurrence net with parallel controllable transitions, one for each atomic proposition \(\ell\) in \(\psi\). Then we define a reward function with polynomial support in such a way that the expected reward for the constructed net is greater than or equal to the number of clauses iff the formula has a model. The correspondence between the model and the policy is such that transitions whose atomic propositions are evaluated as true are deactivated.
## 6 An algorithm for SAFC decision nets
Here we present a partial-order algorithm for solving the policy problem for SAFC (decision) nets. It takes such a net and converts it into a formula for an SMT solver. We will assume the following, which is also a requirement for occurrence nets:
**Assumption 6.1**.: _For all places \(p\in m_{0}\): \({}^{\bullet}p\coloneqq\{t\in T\mid p\in t^{\bullet}\}=\emptyset\)._
This is a mild assumption since any transition \(t\in{}^{\bullet}p\) for a place \(p\in m_{0}\) in a safe and acyclic net has to be dead as all places can only be marked once.
We are now using the notion of (branching) cells, introduced in Section 2: The fact that the SDPN is safe, acyclic and free-choice ensures that choices in different cells are taken independently of one another, so that the probability of a configuration \(\tau\in\mathcal{C}(N)\) under a specific deactivation pattern \(D\subseteq C\) is given by
\[\mathbb{P}^{D}(\mathit{tr}\supseteq\tau)=\prod_{t\in\tau}\frac{\chi_{T\setminus D}(t)\cdot\Lambda(t)}{\sum_{t^{\prime}\in\mathbb{C}_{t}\setminus D}\Lambda(t^{\prime})}=\begin{cases}0&\text{if }\tau\cap D\neq\emptyset\\ \prod_{t\in\tau}\frac{\Lambda(t)}{\sum_{t^{\prime}\in\mathbb{C}_{t}\setminus D}\Lambda(t^{\prime})}&\text{otherwise}\end{cases}\]
where \(\chi_{T\setminus D}\) is the characteristic function of \(T\setminus D\) and \(0/0\) is defined to yield \(0\).
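As a small illustration of this formula (not part of the formal development), the following sketch computes \(\mathbb{P}^{D}(\mathit{tr}\supseteq\tau)\), assuming \(\tau\) and \(D\) are given as Python sets of transitions and that `rate` and `cell_of` map transitions to firing rates and branching cells.

```python
def config_probability(tau, D, rate, cell_of):
    """Probability that a run extends the configuration tau when the
    transitions in D are deactivated (0/0 is defined to yield 0)."""
    if tau & D:                          # a deactivated transition cannot fire
        return 0.0
    p = 1.0
    for t in tau:
        total = sum(rate[u] for u in cell_of[t] if u not in D)
        if total == 0:
            return 0.0
        p *= rate[t] / total
    return p
```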
The general idea of the algorithm is to rewrite the reward function \(R:\mathcal{P}(P)\to\mathbb{R}\) on sets of places to a reward function on sets of transitions that yields a compact formula for computing the value \(\mathbb{V}^{D}\) for specific sets \(D\) (i.e., solving SAFC-VAL), that we can also use to solve the policy problem SAFC-POL via an SMT solver.
We first need some definitions:
**Definition 6.2**.: _For a maximal configuration \(\tau\in\mathcal{C}^{\omega}(N_{D})\) for a given deactivation pattern \(D\subseteq C\), we define its set of prefixes in \(\mathcal{C}(N_{D})\) to be_
\[\mathrm{pre}^{D}(\tau)\coloneqq\{\tau^{\prime}\in\mathcal{C}(N_{D})\mid\tau^{ \prime}\subseteq\tau\}\]
_which corresponds to all configurations that can lead to the configuration \(\tau\). We also define the set of extensions of a configuration \(\tau\in\mathcal{C}(N_{D})\) in \(\mathcal{C}^{\omega}(N_{D})\), which corresponds to all maximal configurations that \(\tau\) can lead to, as_
\[\mathrm{ext}^{D}(\tau)\coloneqq\{\tau^{\prime}\in\mathcal{C}^{\omega}(N_{D}) \mid\tau\subseteq\tau^{\prime}\}.\]
**Definition 6.3**.: _Let \(N\) be a Petri net with a reward function \(R\colon\mathcal{P}(P)\to\mathbb{R}\) on places and a deactivation pattern \(D\). A reward function \([R]\colon\mathcal{P}(T)\to\mathbb{R}\) on transitions is called consistent with \(R\) if for each firing sequence \(\mu\in\mathcal{FS}(N_{D})\):_
\[V(\mathit{pl}(\mu))=\sum_{Q\subseteq\mathit{pl}(\mu)}R(Q)=\sum_{\tau\in \mathrm{pre}^{D}\,(\mathit{tr}(\mu))}[R](\tau).\]
This gives us the following alternative method to determine the expected value for a net (with given policy \(D\)):
**Lemma 6.4**.: _Using the setting of Definition 6.3, whenever \([R]\) is consistent with the reward function \(R\) and \([R](\tau)=0\) for all \(\tau\not\in\mathcal{C}(N)\), the expected value for the net \(N\) under the constant policy \(D\) is:_
\[\mathbb{V}^{D}=\sum_{\tau\subseteq T}\mathbb{P}^{D}(\mathit{tr}\supseteq\tau) \cdot[R](\tau).\]
Note that \([R](\mathit{tr}(\mu))\coloneqq V(\mathit{pl}(\mu))\) for \(\mu\in\mathcal{FS}(N)\) fulfills these properties trivially. However, rewarding only maximal configurations can lead, already in occurrence nets with some concurrency, to an exponential support (w.r.t. the size of the net and its reward function). The goal of our algorithm is to instead make use of the sum over the configurations by rewarding reached places immediately in the corresponding configuration, generating a function \([R]\) that fulfills the properties above and whose support remains of polynomial size in occurrence nets. Hence, we have some form of partial-order technique, in particular concurrent transitions receive the reward independently of each other (if the reward is not dependent on firing both of them).
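Assuming a transition-level reward function \([R]\) with small support is available (e.g. as a dictionary from sets of transitions to rewards), the value of Lemma 6.4 can be evaluated by iterating over that support only. A sketch, reusing the `config_probability` helper from above:

```python
def value(D, reward_on_transitions, rate, cell_of):
    """V^D as in Lemma 6.4: sum of P^D(tr >= tau) * [R](tau) over supp([R])."""
    return sum(config_probability(frozenset(tau), D, rate, cell_of) * r
               for tau, r in reward_on_transitions.items())
```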
The rewriting process is performed by iteratively 'removing maximal cells' and resembles a form of backward-search algorithm. First of all, \(\preceq_{N}^{*}\) (the reflexive and transitive closure of causality \(\prec_{N}\)) induces a partial order \(\sqsubseteq\) on the set \(\mathit{BC}(N)\) of cells via
\[\forall\mathbb{C},\mathbb{C}^{\prime}\in\mathit{BC}(N):\mathbb{C}\sqsubseteq \mathbb{C}^{\prime}\Longleftrightarrow\exists t\in\mathbb{C},t^{\prime}\in \mathbb{C}^{\prime}:t\preceq_{N}^{*}t^{\prime}.\]
Let all cells \((\mathbb{C}_{1},\ldots,\mathbb{C}_{m})\) with \(m=|\mathit{BC}(N)|\) be ordered conforming to \(\sqsubseteq\), then we let \(N_{k}\) denote the Petri net consisting of places \(P_{k}\coloneqq P\setminus(\bigcup_{l>k}\mathbb{C}_{l}{}^{\bullet})\cup(\bigcup _{l\leq k}\mathbb{C}_{l}{}^{\bullet})\) (where the union with the post-sets is only necessary if backward-conflicts exist) and transitions \(T_{k}\coloneqq\bigcup_{l\leq k}\mathbb{C}_{l}\), the remaining components being accordingly restricted (note that the initial marking \(m_{0}\) is still contained in \(P_{k}\) by Assumption 6.1). In particular, it holds that \(N=N_{m}\) as well as \(T_{0}=\emptyset\) and \(P_{0}=\{p\in P\mid\forall t\in T:p\notin t^{\bullet}\}\).
Let \(N\) be a Petri net with deactivation pattern \(D\), \(\mu\in\mathcal{FS}(N_{D})\) be a firing sequence and \(k\in\{1,\ldots,|\mathit{BC}(N)|\}\). We write \(\mathit{tr}_{\leq k}(\mu)\coloneqq\mathit{tr}(\mu)\cap T_{k}\) for the transitions in the first \(k\) cells and \(\mathit{tr}_{>k}(\mu)\coloneqq\mathit{tr}(\mu)\setminus T_{k}\) for the transitions in the cells after the \(k\)-th cell as well as \(\mathit{pl}_{\leq k}(\mu)\coloneqq m_{0}\cup(\bigcup_{t\in\mathit{tr}_{\leq k }(\mu)}t^{\bullet})\) for the places reached after all transitions in the first \(k\) cells were fired.
We will now construct auxiliary reward functions \(R[k]\) that take pairs of a set of places \((U\subseteq P_{k})\) and of transitions \((V\subseteq T\setminus T_{k})\) as input and return a reward. Intuitively, \(R[k](U,V)\) corresponds to the reward for reaching all places in \(U\) and then firing all transitions in \(V\) afterwards where reaching \(U\) ensures that all transitions in \(V\) can fire.
Starting with the reward function \(R[m]:\mathcal{P}(P)\times\{\emptyset\}\to\mathbb{R},(M,\emptyset)\mapsto R(M)\), we iteratively compute reward functions \(R[k]:\mathcal{P}(P_{k})\times\mathcal{P}(T\setminus T_{k})\to\mathbb{R}\) for \(k\geq 0\):
\[R[k](U,V)\coloneqq\begin{cases}R[k+1](U,V)&\text{if }\mathbb{C}_{k+1}\cap V=\emptyset\\ \sum\limits_{\begin{subarray}{c}U^{\prime}\cap t^{\bullet}\neq\emptyset\\ U=U^{\prime}\setminus t^{\bullet}\cup{}^{\bullet}t\end{subarray}}R[k+1](U^{\prime},V\setminus\{t\})&\text{if }\mathbb{C}_{k+1}\cap V=\{t\}\\ 0&\text{otherwise}\end{cases}\]
From \(R[0]\) we obtain the reward function \([R]\) on transitions (see Example 6.7 below), to which Lemma 6.4 can be applied in order to express the value \(\mathbb{V}^{D}\) as a function of the deactivation pattern \(D\).
Whether some deactivation pattern \(D\) exists such that this term is greater than some bound \(p\) can then be checked by an SMT solver.
Note that, in contrast to the naive definition of \([R]\) only on maximal configurations, this algorithm constructs a reward function on configurations that, for occurrence nets, has a support with at most \(|\operatorname{supp}(R)|\) elements. For arbitrary SAFC nets, the support of \([R]\) might be of exponential size.
**Example 6.7**.: _We take the Petri net from Figure 5 as an example (where all transitions have firing rate 1). The reward function \(R\) is given in the table below. By using the inclusion-exclusion principle we ensure that one obtains reward \(1\) if one or both of the yellow places are marked at some point without ever marking the red place._
_The optimal strategy is obviously to only deactivate the one transition (\(t_{6}\)) which would mark the red place._
_The net has three cells_ \(\mathbb{C}_{1}=\{t_{1},t_{2}\},\mathbb{C}_{2}=\{t_{3},t_{4}\},\) _and_ \(\mathbb{C}_{3}=\{t_{5},t_{6}\}\) _where_ \(\mathbb{C}_{1},\mathbb{C}_{2}\sqsubseteq\mathbb{C}_{3}\)_. As such,_ \(R[3]=R\) _with_ \(R\) _as given below, and we obtain_ \(R[2]\) _(due to_ \(P_{2}=\{p_{1},p_{2},p_{3},p_{4},p_{5}\}\)_). In the next step, we get (by removing_ \(t_{3}\) _and_ \(t_{4}\)_)_ \(R[1]\) _and finally_ \(R[0]\)_, from which we can derive_ \([R]\)_, the reward function on transitions, as described above._
_This allows us to write the value for a set_ \(D\) _of deactivated transitions as follows (where if both_ \(t_{5},t_{6}\in D\)_, we assume the last quotient to be zero)_
\[\mathbb{V}^{D}=\frac{\chi_{T\setminus D}(t_{1})}{\chi_{T\setminus D}(t_{1})+1 }+\frac{1}{\chi_{T\setminus D}(t_{1})+1}\frac{1}{2}\frac{\chi_{T\setminus D}( t_{5})}{\chi_{T\setminus D}(t_{5})+\chi_{T\setminus D}(t_{6})}\]
\[\begin{split} R=&[\{p_{5}\}:1,\{p_{6}\}:1,\{p_{5},p _{6}\}:-1,\{p_{5},p_{7}\}:-1,\{p_{6},p_{7}\}:-1,\{p_{5},p_{6},p_{7}\}:1]\\ R[2]=&[(\{p_{5}\},\emptyset):1,(\{p_{3},p_{4}\},\{t_{5} \}):1,(\{p_{3},p_{4},p_{5}\},\{t_{6}\}):-1]\\ R[1]=&[(\{p_{5}\},\emptyset):1,(\{p_{2},p_{3}\},\{t_{ 3},t_{5}\}):1,(\{p_{2},p_{3},p_{5}\},\{t_{3},t_{6}\}):-1]\\ R[0]=&[(\{p_{1}\},\{t_{1}\}):1,(\{p_{1},p_{2}\},\{t_{ 2},t_{3},t_{5}\}):1]\\ [R]=&[\{t_{1}\}:1,\{t_{2},t_{3},t_{5}\}:1]\\ \end{split}\]
_Writing_ \(x_{i}\coloneqq\chi_{T\setminus D}(t_{i})\in\{0,1\},i=1,5,6,\) _the resulting inequality_
\[\frac{x_{1}}{x_{1}+1}+\frac{1}{2}\frac{1}{x_{1}+1}\frac{x_{5}}{x_{5}+x_{6}}>p\]
_can now be solved by an SMT solver with Boolean variables_ \(x_{1},x_{5},\) _and_ \(x_{6}\) _(i.e.,_ \(x_{1},x_{5},x_{6}\in\{0,1\}\)_)._
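A minimal sketch of this last step using z3's Python bindings; the bound \(p=0.6\) is an arbitrary illustrative value, and the variable layout is an assumption made for the example.

```python
from z3 import Int, Solver, If, ToReal, RealVal, sat

x1, x5, x6 = Int('x1'), Int('x5'), Int('x6')
p = 0.6                                   # illustrative bound, not from the paper

s = Solver()
for x in (x1, x5, x6):
    s.add(x >= 0, x <= 1)                 # 0/1 integer encoding of chi_{T\D}

# the last quotient is defined to be zero when both t5 and t6 are deactivated
last = If(x5 + x6 == 0, RealVal(0), ToReal(x5) / ToReal(x5 + x6))
value = ToReal(x1) / ToReal(x1 + 1) + 0.5 * last / ToReal(x1 + 1)
s.add(value > p)

if s.check() == sat:
    m = s.model()
    # one satisfying assignment deactivates only t6, i.e. x6 = 0
    print({str(v): m[v] for v in (x1, x5, x6)})
```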
Figure 5: A SAFC decision net. The goal is to mark one or both of the yellow places at some point without ever marking the red place.
Runtime results:To test the performance of our algorithm, we performed runtime tests on specific families of simple stochastic decision Petri nets, focussing on the impact of concurrency and backward-conflicts on its runtime. All families are based on a series of simple branching cells each containing two transitions, one controllable and one non-controllable, reliant on one place as a precondition. Each non-controllable transition marks a place to which we randomly assigned a reward according to a normal distribution (in particular, it can be negative). The families differ in how these cells are connected, testing performance with concurrency, backward-conflicts, and sequential problems, respectively (for a detailed overview of the experiments see Appendix D).
Rewriting the reward function (and, thus, solving the value problem) produced expected results: Runtimes on nets with many backward-conflicts are exponential while the rewriting of reward functions of occurrence nets exhibits a much better performance, reflecting its polynomial complexity.
To solve the policy problem based on the rewritten reward function, we compared the performances of naively calculating the values of each possible deactivation pattern with using an SMT solver (Microsoft's z3, see also [12]). Tests showed a clear impact on the representation of the control variables (describing the deactivation set \(D\)) as booleans or as integers bounded by 0 and 1 with the latter showing a better performance. Furthermore, the runtime of solving the rewritten formula with an SMT solver showed a high variance on random reward values. Nonetheless, the results show the clear benefit of using the SMT solver on the rewritten formula in scenarios with a high amount of concurrency, with much faster runtimes than the brute force approach. In scenarios without concurrency, this benefit vanishes, and in scenarios with many backward-conflicts, the brute force approach is considerably faster than solving the rewritten function with an SMT solver. The latter effect can be explained by the rewritten reward function \([R]\) having an exponential support in this scenario.
All in all, the runtime results reflect the well-known drawbacks and benefits of most partial-order techniques, excelling in scenarios with high concurrency while having a reduced performance if there are backward- and self-conflicts.
## 7 Conclusion
We have introduced the formalism of stochastic decision Petri nets and defined its semantics via an encoding into Markov decision processes. It turns out that finding optimal policies for a model that incorporates concurrency, probability and decisions, is a non-trivial task. It is computationally hard even for restricted classes of nets and constant policies. However, we remark that workflow nets are often SAFC nets and a constant deactivation policy is not unreasonable, given that one cannot monitor and control a system all the time. We have also presented an algorithm for the studied subproblem, which we view as a step towards efficient partial-order techniques for stochastic (decision) Petri nets.
Related Work:Petri nets [26] are a well-known and widely studied model of concurrent systems based on consumption and generation of resources. Several
subclasses of Petri nets have received attention, among them free-choice nets [13] and occurrence nets, where the latter are obtained by unfolding Petri nets for verification purposes [14].
Our notion of stochastic decision Petri nets is an extension of the well-known model of stochastic Petri nets [21]. This model and a variety of generalizations are used for the quantitative analyses of concurrent systems. Stochastic Petri nets come in a continuous-time and in a discrete-time variant, as treated in this paper. That is, using the terminology of [28], we consider the corresponding Markov chain of jumps, while in the continuous-time case, firing rates determine not only the probability which transition fires next, but also how fast a transition will fire dependent on the marking. These firing times are exponentially distributed, a distribution that is memoryless, meaning that the probability of a transition firing is independent on its waiting time.
Our approach was motivated by extending the probabilistic model of stochastic Petri nets by a mechanism for decision making, as in the extension of Markov chains [28] to Markov decision processes (MDPs) [6]. Since the size of a stochastic Petri net might be exponentially smaller than the Markov chain that it generates, the challenge is to provide efficient methods for determining optimal strategies, preferably partial order methods that avoid the explicit representation of concurrent events in an interleaving semantics. Our complexity results show that the quest for such methods is non-trivial, but some results can be achieved by suitably restricting the considered Petri nets.
A different approach to include decision-making in Petri nets was described by Beccuti et al. as Markov decision Petri nets [5, 4]. Their approach, based on a notion of well-formed Petri nets, distinguishes explicitly between a probabilistic part and a non-deterministic part of the Petri net as well as a set of components that control the transitions. They use such nets to model concurrent systems and obtain experimental results. In a similar vein, graph transformation systems - another model of concurrent systems into which Petri nets can be encoded - have been extended to probabilistic graph transformation systems, including decisions in the MDP sense [18]. The decision is to choose a set of rules with the same left-hand side graph and a match, then a randomized choice is made among these rules. Again, the focus is on modelling and to our knowledge neither of these approaches provides complexity results.
Another problem related to the ones considered in this paper is the computation of the expected execution time of a timed probabilistic Petri net as described in [22]. The authors treated timed probabilistic workflow nets (TPWNs) which assumes that every transition requires a fixed duration to fire, separate from the firing probability. They showed that approximating the expected time of a sound SAFC TPWN is #P-hard which is the functional complexity class corresponding to PP. While the problems studied in their paper and in our paper are different, the fact that both papers consider SAFC nets and obtain a #P- respectively PP-hardness result seems interesting and deserves further study.
Our complexity results are closely connected with the analysis of Bayesian networks [25], which are a well-known graphical formalism to represent conditional dependencies among random variables and can be employed to reason about and compactly represent probability distributions. The close relation between Bayesian networks and occurrence nets was observed in [8], which gives a Bayesian network semantics for occurrence nets, based on the notion of branching cells from [1] that were introduced in order to reconcile partial order methods - such as unfoldings - and probability theory. We took inspiration from this reduction in Proposition 3, and another of our reductions (Proposition 5.3) - encoding Bayesian networks as Petri nets - is a transformation going into the other direction, from Bayesian networks to SAFC nets.
In our own work [9, 7] we considered a technique for uncertainty reasoning, combining both Petri nets and Bayesian networks, albeit in a rather different setting. There we considered Petri nets with uncertainty, where one has only probabilistic knowledge about the current marking of the net. In this setting Bayesian networks are used to compactly store this probabilistic knowledge and the main challenge is to update respectively rewrite Bayesian networks representing such knowledge whenever the Petri net fires.
Future Work:As future work we plan to consider more general classes of Petri nets, lifting some of the restrictions imposed in this paper. In particular, it would be interesting to extend the method from Section 6 to nets that allow infinite runs. Furthermore, dropping the free-choice requirement is desirable, but problematic. While the notion of branching cells does exist for stochastic nets (see [1, 8]), it does not accurately reflect the semantics of stochastic nets (see e.g. the discussion on confusion in the introduction of [8]).
As already detailed in the introduction, partial-order methods for analyzing probabilistic systems, modelled for instance by stochastic Petri nets, are in general poorly understood. Hence, it would already be a major result to obtain scalable methods for computing payoffs values for a stochastic net without decisions, but with a high degree of concurrency.
In addition we plan to use the encoding of Petri nets into Bayesian networks from [8] (on which we based the proof of Proposition 5.4) and exploit it to analyze such nets by using dedicated methods for reasoning on Bayesian networks.
Naturally, it would be interesting to extend analysis techniques in such a way that they can deal with uncertainty and derive policies when we have only partial knowledge, as in partially observable Markov decision processes (POMDPs), first studied in [3]. However, this seems complex, given the fact that determining the best strategy for POMDPs is a non-trivial problem in itself [10].
Similarly, it is interesting to introduce a notion of time as in continuous-time Markov chains [28], enabling us to compute expected execution times as in [22].
Last but not least, our complexity analysis and algorithm focus on finding optimal constant policies. A natural step would be to instead consider the problem of finding optimal positional strategies as defined in Chapter 3, which is the focus of most works on Markov decision processes (see for example [10]). |
2310.14513 | Turn-Level Active Learning for Dialogue State Tracking | Dialogue state tracking (DST) plays an important role in task-oriented
dialogue systems. However, collecting a large amount of turn-by-turn annotated
dialogue data is costly and inefficient. In this paper, we propose a novel
turn-level active learning framework for DST to actively select turns in
dialogues to annotate. Given the limited labelling budget, experimental results
demonstrate the effectiveness of selective annotation of dialogue turns.
Additionally, our approach can effectively achieve comparable DST performance
to traditional training approaches with significantly less annotated data,
which provides a more efficient way to annotate new dialogue data. | Zihan Zhang, Meng Fang, Fanghua Ye, Ling Chen, Mohammad-Reza Namazi-Rad | 2023-10-23T02:53:46Z | http://arxiv.org/abs/2310.14513v1 | # Turn-Level Active Learning for Dialogue State Tracking
###### Abstract
Dialogue state tracking (DST) plays an important role in task-oriented dialogue systems. However, collecting a large amount of turn-by-turn annotated dialogue data is costly and inefficient. In this paper, we propose a novel _turn-level_ active learning framework for DST to actively select turns in dialogues to annotate. Given the limited labelling budget, experimental results demonstrate the effectiveness of selective annotation of dialogue turns. Additionally, our approach can effectively achieve comparable DST performance to traditional training approaches with significantly less annotated data, which provides a more efficient way to annotate new dialogue data1.
Footnote 1: Code and data are available at [https://github.com/hyintell/AL-DST](https://github.com/hyintell/AL-DST).
## 1 Introduction
Dialogue state tracking (DST) constitutes an essential component of task-oriented dialogue systems. The task of DST is to extract and keep track of the user's intentions and goals as the dialogue progresses Williams et al. (2013). Given the dialogue context, DST needs to predict all _(domain-slot, value)_ pairs at each turn. Since the subsequent system action and response rely on the predicted values of specified domain-slots, an accurate prediction of the dialogue state is vital.
Despite the importance of DST, collecting annotated dialogue data for training is expensive and time-consuming, and how to efficiently annotate dialogue is still challenging. It typically requires human workers to manually annotate dialogue states Budzianowski et al. (2018) or uses a Machines Talking To Machines (M2M) framework to simulate user and system conversations Shah et al. (2018). Either way, every turn in the conversation needs to be annotated because existing DST approaches are generally trained in a fully supervised manner, where turn-level annotations are required. However, if it is possible to find the most informative and valuable turn in a dialogue to label, which enables the training of a DST model to achieve comparable performance, we could save the need to annotate the entire dialogue, and could efficiently utilize the large-scale dialogue data collected through API calls.
Active Learning (AL) aims to reduce annotation costs by choosing the most important samples to label Settles (2009); Fang et al. (2017); Zhang et al. (2022). It iteratively uses an acquisition strategy to find samples that benefit model performance the most. Thus, with fewer labelled data, it is possible to achieve the same or better performance. AL has been successfully applied to many fields in natural language processing and computer vision Schumann and Rehbein (2019); Casanova et al. (2020); Ein-Dor et al. (2020); Hu and Neubig (2021). However, the adoption of AL in DST has been studied very rarely. Xie et al. (2018) have studied to use AL to reduce the labelling cost in DST, using a _dialogue-level_ strategy. They select a batch of dialogues in each AL iteration and label the entire dialogues (e.g., every turn of each dialogue), which is inefficient to scale to tremendous unlabelled data. To our knowledge, _turn-level_ AL remains unstudied for the task of DST.
Furthermore, existing DST approaches Wu et al. (2019); Heck et al. (2020); Tian et al. (2021); Zhu et al. (2022) treat each dialogue turn as a single, independent training instance with no difference. In fact, in the real-world, utterances in a dialogue have different difficulty levels Dai et al. (2021) and do not share equal importance in a conversation. For example, in Fig.1, turn-1 is simple and only contains a single domain-slot and value (i.e., _hotel-name=Avalon_), while turn-2 is more complex and generates three new domain-slots, i.e., _hotel-book day, hotel-book people, hotel-book stay_. Given the limited labelling budget, it is an obvious choice to label turn-2 instead of turn-1 since the former is
more informative2. In addition, we observe that the complete states of the dialogue session are updated at turn-8, while turn-9 and turn-10 simply show humans' politeness and respect without introducing any new domain-slots. Therefore, while the "last turn" has been studied before Lin et al. (2021), it is often not the case that only the last turn of a dialogue session generates summary states. Using redundant turns such as turn-9 and turn-10 for training not only requires additional labelling but also possibly distracts the DST model since it introduces irrelevant context information, thus hindering the overall performance Yang et al. (2021).
Footnote 2: Here, _informative_ refers to the turn that has more valid dialogue states.
Built on these motivations, we investigate a practical yet rarely studied problem: _given a large amount of unlabelled dialogue data with a limited labelling budget, how can we annotate the raw data more efficiently and achieve comparable DST performance?_ To this end, we propose a novel turn-level AL framework for DST that selects the most valuable turn from each dialogue for labelling and training. Experiments on MultiWOZ 2.0 and 2.1 show that our approach outperforms two strong DST baselines in the weakly-supervised scenarios and achieves comparable DST performance with significantly less annotated data, demonstrating both effectiveness and data efficiency. We summarize the main contributions of our work as follows:
* We propose a novel model-agnostic _turn-level_ Active Learning framework for dialogue state tracking, which provides a more efficient way to annotate new dialogue data. To our best knowledge, this is the first attempt to apply turn-level AL to DST.
* The superiority of our approach is twofold: firstly, our approach strategically selects the most valuable turn from each dialogue to label, which largely saves annotation costs; secondly, using significantly reduced annotation data, our method achieves the same or better DST performance under the weakly-supervised setting.
* We investigate how turn-level AL can boost the DST performance by analyzing the query sizes, base DST models, and turn selection strategies.
## 2 Related Work
### Dialogue State Tracking
Dialogue state tracking is an essential yet challenging task in task-oriented dialogue systems Williams et al. (2013). Recent state-of-the-art DST
Figure 1: An example of DST from the MultiWOZ dataset Budzianowski et al. (2018). Utterances at the left and the right sides are from user and system, respectively. Orange color denotes only the selected turn is used in the weakly-supervised training setup. Only two domains (e.g _hotel, taxi_) are shown due to space limitation. (best viewed in color).
models (Wu et al., 2019; Kim et al., 2020; Heck et al., 2020; Ye et al., 2021; Tian et al., 2021; Lee et al., 2021; Zhu et al., 2022; Hu et al., 2022) using different architectures and mechanisms have achieved promising performance on complex multi-domain datasets (Budzianowski et al., 2018; Eric et al., 2020). However, they are generally trained with extensive annotated data, where each dialogue turn requires comprehensive labelling.
To mitigate the cost of dialogue annotation, some works train DST models on existing domains and perform few-shot learning to transfer prior knowledge to new domains (Wu et al., 2019; Zhou and Small, 2019), while others further improve transfer learning by pre-training extensive heterogeneous dialogue corpora using constructed tasks (Wu et al., 2020; Peng et al., 2021; Lin et al., 2021; Su et al., 2022). Recently, Liang et al. (2021); Lin et al. (2021) propose a weakly-supervised training setup, in which only the last turn of each dialogue is used. Despite the promising results, we have shown the potential drawbacks of using the last turns in Section 1. In contrast, in this work, we consider the differences between the turns and strategically select the turn that benefits the DST model the most from a dialogue for training.
### Active Learning
Active Learning uses an acquisition strategy to select data to minimize the labelling cost while maximizing the model performance (Settles, 2009). While AL has been successfully used in many fields, such as image segmentation (Casanova et al., 2020), named entity recognition (Shen et al., 2017), text classification (Schumann and Rehbein, 2019), and machine translation (Zeng et al., 2019; Hu and Neubig, 2021), rare work has attempted to apply AL to DST. Moreover, recently proposed AL acquisition methods are, unfortunately, not applicable to turn-level DST since they are designed for specific tasks or models. For instance, BADGE (Ash et al., 2019) calculates gradient embeddings for each data point in the unlabelled pool and uses clustering to sample a batch, whereas we treat each turn within a dialogue as a minimum data unit and only select a single turn from each dialogue; therefore, the diversity-based methods are not applicable to our scenario. ALPS (Yuan et al., 2020) uses the masked language model loss of BERT (Devlin et al., 2019) to measure uncertainty in the downstream text classification task, while CAL (Margatina et al., 2021) selects contrastive samples with the maximum disagreeing predictive likelihood. Both are designed for classification tasks, so these strategies are not directly applicable. Therefore, studying an AL acquisition strategy that is suitable for DST is still an open question.
## 3 Preliminaries
We formalize the notations and terminologies used in the paper as follows.
Active Learning (AL)AL aims to strategically select informative unlabelled data to annotate while maximizing a model's training performance (Settles, 2009). This paper focuses on pool-based active learning, where an unlabelled data pool is available. Suppose that we have a model \(\mathcal{M}\), a small seed set of labelled data \(\mathcal{L}\) and a large pool of unlabelled data \(\mathcal{U}\). A typical iteration of AL contains three steps: (1) train the model \(\mathcal{M}\) using \(\mathcal{L}\); (2) apply an acquisition function \(\mathcal{A}\) to select \(k\) instances from \(\mathcal{U}\) and ask an oracle to annotate them; and (3) add the newly labelled data into \(\mathcal{L}\).
Dialogue State Tracking (DST)Given a dialogue \(D=\{\left(X_{1},B_{1}\right),\ldots,\left(X_{T},B_{T}\right)\}\) that contains \(T\) turns, \(X_{t}\) denotes the dialogue turn consisting of the user utterance and system response at turn \(t\), while \(B_{t}\) is the corresponding dialogue state. The dialogue state at turn \(t\) is defined as \(B_{t}=\{\left(d_{j},s_{j},v_{j}\right),1\leq j\leq J\}\), where \(d_{j}\) and \(s_{j}\) denote domain (e.g. _attraction_) and slot (e.g. _area_) respectively, \(v_{j}\) is the corresponding value (e.g. _south_) of the domain-slot, and \(J\) is the total number of predefined domain-slot pairs. Given the dialogue context up to turn \(t\), i.e. \(H_{t}=\{X_{1},\ldots,X_{t}\}\), the objective of DST is to predict the value for each domain-slot in dialogue state \(B_{t}\).
LabellingSuppose that we have selected a turn \(t\) from the dialogue \(D\) (\(1\leq t\leq T\)) to label. An oracle (e.g. human annotator) reads the dialogue history from \(X_{1}\) to \(X_{t}\) and labels the current dialogue state \(B_{t}\). We use the gold training set to simulate a human annotator in our experiments.
Full vs. Weakly-supervised TrainingGenerally, the training dataset for DST is built in the way that each turn in a dialogue (concatenated with all previous turns) forms an individual training instance. That is, the input of a single training instance for turn \(t\) is defined as \(M_{t}=X_{1}\oplus X_{2}\oplus\cdots\oplus X_{t}\), where \(\oplus\) denotes the concatenation of sequences,
and the output is the corresponding dialogue state \(B_{t}\). By providing the entire dialogue utterances from the first turn to turn \(t\) to the model, the information from the earlier turns is kept in the dialogue history. Let \(\mathcal{D}_{D}\) be the set of training instances created for the dialogue \(D\) and \(t\) is the selected turn. Given the example in Fig.1, for full supervision, all turns are used for training (i.e., \(\mathcal{D}_{D}=\left\{\left(M_{1},B_{1}\right),\ldots,\left(M_{T},B_{T}\right)\right\}\)), whereas in weakly-supervised training, only the selected turn is used (i.e., \(\mathcal{D}_{D}=\left\{\left(M_{t},B_{t}\right)\right\}\)).
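For illustration, a small sketch of how the two training setups differ when building instances; the representation of turns and states as parallel Python lists is an assumption made for the example.

```python
def build_instances(turns, states, selected=None):
    """turns[i] holds the utterances of turn i+1, states[i] the gold state B_{i+1}.
    With selected=None every prefix becomes an instance (full supervision);
    otherwise only the selected turn is kept (weak supervision)."""
    instances = []
    for i in range(len(turns)):
        if selected is not None and i != selected:
            continue
        history = " ".join(turns[: i + 1])    # M_t = X_1 + ... + X_t
        instances.append((history, states[i]))
    return instances
```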
## 4 Active Learning for Dialogue State Tracking
In this section, we first define our turn-level AL-based DST framework, followed by the turn selection strategies.
### Turn-Level AL for DST
Framework.Our turn-level AL-based DST consists of two parts. First, we use AL to model the differences between turns in a dialogue and find the turn that is the most beneficial to label. The pseudo-code of this step is shown in Algo. 1. Second, after acquiring all labelled turns, we train a DST model as normal and predict the dialogue states for all turns in the test set for evaluation, as described in Section 3. In this paper, we assume the training set is unlabelled and follow the cold-start setting (Algo. 1 Line 4), where the initial labelled data pool \(\mathcal{L}=\emptyset\). We leave the warm-start study for future work.
Active Learning Loop.In each iteration, we first randomly sample \(k\) dialogues from the unlabelled pool \(\mathcal{U}\). Then, we apply a turn acquisition function \(\mathcal{A}\) and the intermediate DST model trained from the last iteration to each dialogue \(D\) to select an unlabelled turn (Algo. 1 Line 10). It is noteworthy that we consider each turn within a dialogue as a minimum data unit to perform query selection. This is significantly different from Xie et al. (2018), where they select a few dialogues from the unlabelled pool and label all turns as the training instances. Orthogonal to Xie et al. (2018)'s work, it is possible to combine our turn-level strategy with dialogue-level AL. However, we leave it as future work because the AL strategies to select dialogues and turns could be different to achieve the best performance. In this work, we focus on investigating the effectiveness of AL strategies for turn selection.
To avoid overfitting, we re-initialize the base DST model and re-train it on the current accumulated labelled data \(\mathcal{L}\). After \(R\) iterations, we acquire the final training set \(\mathcal{L}\).
```
0:  Initial DST model \(\mathcal{M}\), unlabelled dialogue pool \(\mathcal{U}\), labelled data pool \(\mathcal{L}\), number of queried dialogues per iteration \(k\), acquisition function \(\mathcal{A}\), total iterations \(R\)
1:  if \(\mathcal{L}\neq\emptyset\) then
2:      \(\mathcal{M}_{0}\leftarrow\) Train \(\mathcal{M}\) on \(\mathcal{L}\)    \(\triangleright\) Warm-start
3:  else
4:      \(\mathcal{M}_{0}\leftarrow\mathcal{M}\)    \(\triangleright\) Cold-start
5:  end if
6:  for iterations \(r=1:R\) do
7:      \(\mathcal{X}_{r}=\emptyset\)
8:      \(\mathcal{U}_{r}\leftarrow\) Random sample \(k\) dialogues from \(\mathcal{U}\)
9:      for dialogue \(D\in\mathcal{U}_{r}\) do
10:         \(X\leftarrow\mathcal{A}(\mathcal{M}_{r-1},D)\)    \(\triangleright\) Select a turn
11:         \(\mathcal{X}_{r}=\mathcal{X}_{r}\cup\{X\}\)
12:     end for
13:     \(\mathcal{L}_{r}\leftarrow\) Oracle labels \(\mathcal{X}_{r}\)
14:     \(\mathcal{L}=\mathcal{L}\cup\mathcal{L}_{r}\)
15:     \(\mathcal{U}=\mathcal{U}\setminus\mathcal{U}_{r}\)
16:     \(\mathcal{M}_{r}\leftarrow\) Re-initialize and re-train \(\mathcal{M}\) on \(\mathcal{L}\)
17: end for
18: return \(\mathcal{L}\)    \(\triangleright\) The final training set
```
**Algorithm 1** Turn-level AL for DST
### Turn Selection Strategies
As mentioned in Section 2.2, recently proposed AL acquisition strategies are not applicable to DST. Therefore, we adapt the common uncertainty-based acquisition strategies to select a turn from a dialogue:
Random Sampling (RS)We randomly select a turn from a given dialogue. Despite its simplicity, RS acts as a strong baseline in literature (Settles, 2009; Xie et al., 2018; Ein-Dor et al., 2020).
\[X=\mathrm{Random}(M_{1},\ldots,M_{T}) \tag{1}\]
where \(T\) is the total number of turns in the dialogue.
Maximum Entropy (ME)(Lewis and Gale, 1994) Entropy measures the prediction uncertainty of the dialogue state in a dialogue turn. In particular, we calculate the entropy of each turn in the dialogue and select the highest one. To do that, we use the base DST model to predict the value of the \(j\)th domain-slot at turn \(t\), which gives us the value
prediction distribution \(\mathbf{P}_{t}^{j}\). We then calculate the entropy of the predicted value using \(\mathbf{P}_{t}^{j}\) (Eq.2):
\[\mathbf{e}_{t}^{j}=-\sum_{i=1}^{V}\mathbf{p}_{t}^{j}[i]\log\mathbf{p}_{t}^{j}[i] \tag{2}\] \[\mathbf{e}_{t}=\sum_{j=1}^{J}\mathbf{e}_{t}^{j} \tag{3}\] \[X=\operatorname{argmax}(\mathbf{e}_{1},\dots,\mathbf{e}_{T}) \tag{4}\]
where \(V\) is the number of possible tokens in the vocabulary. We then sum the entropy of all domain-slots as the turn-level entropy (Eq.3) and select the dialogue turn with the maximum entropy (Eq.4).
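A minimal sketch of this selection rule (Eqs. 2-4), assuming `probs[t][j]` holds the model's value distribution for the \(j\)th domain-slot at turn \(t\):

```python
import numpy as np

def select_turn_max_entropy(probs):
    """Return the index of the turn with the largest summed slot-value entropy."""
    turn_entropy = []
    for slot_dists in probs:                       # one list of distributions per turn
        e = sum(-np.sum(np.asarray(p) * np.log(np.asarray(p) + 1e-12))
                for p in slot_dists)               # Eq. (2) per slot, summed as in Eq. (3)
        turn_entropy.append(e)
    return int(np.argmax(turn_entropy))            # Eq. (4)
```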
Least Confidence (LC)LC typically selects instances where the most likely label has the lowest predicted probability (Culotta and McCallum, 2005). In DST, we use the sum of the prediction scores for all \(J\) domain-slots to measure the model's confidence when evaluating a dialogue turn, and select the turn with the minimum confidence:
\[\mathbf{c}_{t} =\sum_{j=1}^{J}\mathbf{c}_{t}^{j} \tag{5}\] \[X =\operatorname{argmin}(\mathbf{c}_{1},\dots,\mathbf{c}_{T}) \tag{6}\]
where \(\mathbf{c}_{t}^{j}=\max(\operatorname{logits}_{t}^{j})\) denotes the maximum prediction score of the \(j\)th domain-slot at turn \(t\) and \(\operatorname{logits}_{t}^{j}\) is the predictive distribution.
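The corresponding sketch for Least Confidence (Eqs. 5-6), under the assumption that `logits[t][j]` holds the predictive scores of the \(j\)th domain-slot at turn \(t\):

```python
import numpy as np

def select_turn_least_confidence(logits):
    """Return the index of the turn whose summed maximum slot scores are lowest."""
    turn_conf = [sum(float(np.max(slot)) for slot in slot_logits)   # Eq. (5)
                 for slot_logits in logits]
    return int(np.argmin(turn_conf))                                # Eq. (6)
```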
## 5 Experiments
### Setup
Datasets.We evaluate the weakly-supervised DST performance on the MultiWOZ 2.0 (Budzianowski et al., 2018) and MultiWOZ 2.1 (Eric et al., 2020) datasets3 as they are widely adopted in DST. We use the same preprocessing as Lin et al. (2021) and Su et al. (2022), and focus on five domains (i.e. _restaurant, train, hotel, taxi, attraction_). The statistics of the datasets are summarized in Appendix A.
Footnote 3: We also tried to use the SGD dataset (Rastogi et al., 2020). However, the PPTOD model is already pre-trained on this dataset, making it unsuitable for downstream evaluation. KAGE-GPT2 requires the predefined ontology to build a graph neural network, but SGD does not provide all possible values for non-categorical slots (See Section 8).
Base DST Model.We use **KAGE-GPT2**(Lin et al., 2021) as the base DST model to implement all experiments. KAGE-GPT2 is a generative model that incorporates a Graph Attention Network to explicitly learn the relationships between domain-slots before predicting slot values. It shows strong performance in both full and weakly-supervised scenarios on MultiWOZ 2.0 (Budzianowski et al., 2018). To show that the effectiveness of our AL framework is not tied to specific base models, we also experiment with an end-to-end task-oriented dialogue model **PPTOD**(Su et al., 2022). PPTOD pre-trained on large dialogue corpora gains competitive results on DST in the low-resource settings. The model training and implementation details are in Appendix B.
### Evaluation Metrics
We use **Joint Goal Accuracy (JGA)** to evaluate DST performance, which is the ratio of correct dialogue turns. It is a strict metric since a turn is considered as correct if and only if all the slot values are correctly predicted. Following the community convention, although it is not a distinguishable metric (Kim et al., 2022), we also report **Slot Accuracy (SA)**, which compares the predicted value with the ground truth for each domain-slot at each dialogue turn. Additionally, we define a new evaluation metric, **Reading Cost (RC)**, which measures the number of turns a human annotator needs to read to label a dialogue turn. As shown in Fig.1, to label the dialogue state \(B_{t}\) at turn \(t\), a human annotator needs to read through the dialogue conversations from \(X_{1}\) to \(X_{t}\) to understand all the domain-slot values that are mentioned in the dialogue history:
\[\text{RC}=\frac{\sum_{i=1}^{|\mathcal{L}|}\frac{t}{T_{D_{i}}}}{|\mathcal{L}|} \tag{7}\]
where \(|\mathcal{L}|\) denotes the total number of annotated dialogues and \(T_{D_{i}}\) is the number of turns of the dialogue \(D_{i}\). If all last turns are selected, then \(\text{RC}=1\), in which case the annotator reads all turns in all dialogues to label, resulting in a high cost. Note that we take JGA and RC as primary evaluation metrics.
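For concreteness, a small sketch of Eq. (7), assuming the selections are recorded as (selected turn index, total turns) pairs, one per annotated dialogue:

```python
def reading_cost(selections):
    """Average fraction of each dialogue an annotator must read (Eq. 7)."""
    return sum(t / T for t, T in selections) / len(selections)

# labelling only last turns gives RC = 1, i.e. every dialogue is read in full
assert reading_cost([(8, 8), (5, 5)]) == 1.0
```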
### Baselines
Our main goal is to use AL to actively select the most valuable turn from each dialogue for training, therefore reducing the cost of labelling the entire dialogues. We evaluate the effectiveness of our
approach from two angles. First, we compare DST performance of two settings _without_ involving AL to show the benefits that AL brings:
* **Full Data (100%)**: all the turns are used for training, which shows the upper limit of the base DST model performance.
* **Last Turn (14.4%4)**: following Liang et al. (2021) and Lin et al. (2021), for each dialogue, only the last turn is used for training.
Footnote 4: \(14.4\%=\frac{\#\text{ turns used}}{\#\text{ total turns}}=\frac{7888}{54945}\)
Second, when using AL, we compare our turn-level framework with the dialogue-level approach:
* **CUDS (\(\sim\)14%)** (Xie et al., 2018): a dialogue-level method that selects a batch of dialogues in each AL iteration based on the combination of labelling cost, uncertainty, and diversity, and uses all the turns for training. We carefully maintain the number of selected dialogues in each iteration so that the total number of training instances is roughly the same (i.e., \(k\simeq 2000\)) for a fair comparison.
* **Selected Turn (14.4%)**: we apply Algo.1 and set \(|\mathcal{U}|=7888\), \(\mathcal{L}=\emptyset\), \(k=2000\), and use the turn selection methods mentioned in Section 4.2 to conduct experiments. As a trade-off between computation time and DST performance, here we use \(k=2000\); however, we find that a smaller \(k\) tends to yield better performance (Section 6.2). Given \(k=2000\), we have selected 7,888 turns after four rounds, and use them to train a final model.
## 6 Results & Analysis
### Main Results
Due to space limitations, we report the final results after the four AL iterations in Table 1. We present the intermediate results in Fig.2.
Our _turn-level_**AL strategy improves DST performance.** From Table 1, we first observe that, using the same amount of training data (14.4%), our proposed AL approach (i.e. \(\texttt{PPTOD}_{\text{base}}\)+ME and KAGE-GPT2+ME) outperforms the non-AL settings, **Last Turn**, in terms of both joint goal accuracy and slot accuracy. Specifically, compared with \(\texttt{PPTOD}_{\text{base}}\)+LastTurn, our \(\texttt{PPTOD}_{\text{base}}\)+ME significantly boosts the JGA by 3.1% on MultiWOZ 2.0 and 2.3% on MultiWOZ 2.1. KAGE-GPT2+ME also improves its baselines by around 0.9% on both datasets. Compared with the dialogue-level AL strategy **CUDS**, our turn-level methods improve the JGA by a large margin (2.3%\(\sim\)4.3% on both datasets). Considering that DST is a difficult task Budzianowski et al. (2018); Wu et al. (2019); Lee et al. (2021), such JGA improvements demonstrate the effectiveness of our turn-level AL framework, which can effectively find the turns that the base DST model can learn the most from.
Our _turn-level_**AL strategy reduces annotation cost.** The reading costs (RC) of \(\texttt{PPTOD}_{\text{base}}\)+ME and KAGE-GPT2+ME drop by a large margin (around 29%\(\sim\)43%) compared to the Last Turn and CUDS settings, indicating the benefits and necessity of
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline \multirow{2}{*}{**Training Data**} & \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**MultiWOZ 2.0**} & \multicolumn{3}{c}{**MultiWOZ 2.1**} \\ \cline{3-8} & & JGA \(\uparrow\) & SA \(\uparrow\) & RC \(\downarrow\) & JGA \(\uparrow\) & SA \(\uparrow\) & RC \(\downarrow\) \\ \hline \multicolumn{8}{c}{_Without Active Learning_} \\ \hline \multirow{3}{*}{**Full Data (100\%)**} & \(\texttt{PPTOD}_{\text{base}}\) & 53.37\(\pm\)0.46 & 97.26\(\pm\)0.02 & 100 & 57.10\(\pm\)0.51 & 97.94\(\pm\)0.02 & 100 \\ & KAGE-GPT2 & 54.86\(\pm\)0.12 & 97.47\(\pm\)0.02 & 100 & 52.13\(\pm\)0.59 & 97.18\(\pm\)0.02 & 100 \\ \hline \multirow{3}{*}{**Last Turn (14.4\%)**} & \(\texttt{PPTOD}_{\text{base}}\)-LastTurn & 43.83\(\pm\)1.55 & 96.87\(\pm\)0.06 & 100 & 45.94\(\pm\)0.72 & 97.11\(\pm\)0.04 & 100 \\ & KAGE-GPT2-LastTurn & 50.43\(\pm\)0.23 & 97.14\(\pm\)0.01 & 100 & 49.12\(\pm\)0.13 & 97.05\(\pm\)0.02 & 100 \\ \hline \multicolumn{8}{c}{_With Active Learning_ (\(k=2000\))} \\ \hline \multirow{3}{*}{**CUDS (\(\sim\)14\%)*
selecting dialogue turns. This significantly saves the annotation cost because a human annotator does not need to read the entire dialogue to label the last turn but only needs to read until the selected turn.
**Our approach achieves the same or better DST performance with less annotated data.** To further explore the capability of our AL approach, we plot the intermediate DST performance during the four iterations, as shown in Fig.2. Notably, \(\text{PPTOD}_{\text{base}}\) with the Least Confidence (LC) and Maximum Entropy (ME) turn selection methods surpasses the Last Turn baselines at just the second or third iteration on MultiWOZ 2.0 and MultiWOZ 2.1 respectively, showing the large data efficiency of our approach (only 7.3% / 10.9% of the data are used). This can be explained by the fact that \(\text{PPTOD}_{\text{base}}\) is fine-tuned on the so-far selected turns after each iteration and gains a more robust perception of unseen data, thus tending to choose the turns that are more beneficial to the model. In contrast, KAGE-GPT2 underperforms the Last Turn setting in early iterations, achieving slightly higher accuracy in the final round. Despite this, the overall performance of KAGE-GPT2 is still better than \(\text{PPTOD}_{\text{base}}\) under the weakly-supervised settings. This is possibly because the additional graph component in KAGE-GPT2 enhances the predictions at intermediate turns and the correlated domain-slots Lin et al. (2021). However, when using CUDS, both DST models underperform considerably on both datasets, especially during early iterations. This indicates that the dialogue-level strategy, which does not distinguish the importance of turns in a dialogue, might not be optimal for selecting training data. In Section 6.2, we show that a smaller query size \(k\) can achieve higher data efficiency.
### Ablation Studies
In this section, we further investigate the factors that impact our turn-level AL framework.
Effect of Dialogue Query Size.Theoretically, the smaller size of queried data per AL iteration, the more intermediate models are trained, resulting the better model performance. Moreover, smaller query size is more realistic since the annotation budget is generally limited and there lack enough annotators to label large amount of dialogues after each iteration. To this end, we initialize the unlabelled pool \(\mathcal{U}\) by randomly sampling 3,000 dialogues from the MultiWOZ 2.0 training set, and apply our AL framework to KAGE-GPT2, using different query sizes, i.e., \(k=500,1000,1500\), which leads to \(6,3,2\) rounds respectively.
Figure 3: Joint goal accuracy on test sets of KAGE-GPT2 on MultiWOZ 2.0 with \(k=500,1000,1500\).
Figure 2: Joint goal accuracy on test sets of AL over four iterations with \(k=2000\) dialogues queried per iteration.
From Fig.3, we first observe that a smaller \(k\) improves the intermediate DST performance: when \(k=500\), both LC and ME strategies boost the accuracy by a large margin already at the second iteration compared with \(k=1000\), and at the third iteration compared with \(k=1500\). This suggests that, with the same number of training data, the repeatedly re-trained DST model gains the ability to have a more accurate perception of the unseen data. By calculating the prediction uncertainty of the new data, the model tends to choose the turns that it can learn the most from. In contrast, RS chooses a random turn regardless of how many AL rounds have been run, and therefore does not show the same pattern as LC and ME. Finally, we find that a smaller \(k\) tends to achieve higher data efficiency when using the LC and ME strategies. It is clear from the figure that \(k=500\) uses the least data when reaching the same level of accuracy. However, the drawback of a smaller query size is that it increases the overall computation time as more intermediate models have to be trained. We provide a computational cost analysis in Section 6.3.
Effect of Base DST Model.It is no doubt that the base DST model is critical to our turn-level AL framework as it directly determines the upper and lower limit of the overall performance. However, we are interested to see how our approach can further boost the performance of different DST models. We randomly sample \(\mathcal{U}=500\) dialogues from the MultiWOZ 2.0 training set and set the query size \(k=100\) for both models. As shown in Fig.4, we also report the results of the two models using the non-AL strategy of Last Turn, which can be considered as the lower performance baselines.
We first confirm that both PPTOD\({}_{\text{base}}\) and KAGE-GPT2 outperform their Last Turn baselines after applying our AL framework, demonstrating both data efficiency and effectiveness of our approach. Secondly, we notice that PPTOD\({}_{\text{base}}\) achieves comparable accuracy in the first two rounds, while KAGE-GPT2 nearly stays at 0 regardless of the turn selection methods, showing the superiority of PPTOD\({}_{\text{base}}\) under the extreme low-resource scenario. This is possibly because PPTOD\({}_{\text{base}}\) is pre-trained on large dialogue corpora thus gains few-shot learning ability Su et al. (2022), whereas only 200 training data are not enough for KAGE-GPT2 to be fine-tuned. However, in the later iterations, the performance of KAGE-GPT2 grows significantly, especially when using the ME strategy, eventually reaching the same level as PPTOD\({}_{\text{base}}\). In contrast, the accuracy of PPTOD\({}_{\text{base}}\) increases slowly, indicating the model gradually becomes insensitive to the newly labelled data.
Effect of Turn Selection Strategy.From Fig.2, while both ME and LC improve over the RS baseline, ME does not consistently outperform LC during AL iterations in terms of the joint goal accuracy, and vice versa. However, as shown in Table 1, LC results in a higher Reading Cost (RC) than ME, which means LC tends to select latter half of turns in dialogues. Conversely, ME significantly reduces RC in the last iteration (Fig.5; more in Appendix C) and is consistently better than LC and RS for
\begin{table}
\begin{tabular}{c|c c} \hline \hline
**Method** & KAGE-GPT2 & PPTOD\({}_{\text{base}}\) \\ \hline LC & 76.51\({}_{\text{24.7}}\) & 81.13\({}_{\text{22.3}}\) \\ ME & 68.18\({}_{\text{29.1}}\) & 58.68\({}_{\text{31.5}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Reading Cost (RC) (%) of different turn selection methods. The lower the better.
Figure 4: Joint goal accuracy on test sets of KAGE-GPT2 and PPTOD\({}_{\text{base}}\) on MultiWOZ 2.0 with \(k=100\). Results are averaged over three runs.
Figure 5: Visualization of the turns selected by PPTOD\({}_{\text{base}}\) at the final round (\(k=100\)). ME reduces RC the most.
both DST models (Fig.4), which demonstrates the effectiveness of ME under a small query size \(k\). We report their RC in Table 2, which also confirms that ME incurs a lower reading cost than LC. An example of the turns selected by ME and LC in a dialogue is shown in Table 3, with more examples in Appendix D.
### Cost Analysis
Our AL-based method saves annotation costs and achieves comparable DST performance with traditional methods at the expense of increased computation time. In this section, we conduct a cost analysis covering both computation and annotation costs. We initialize the unlabelled pool \(\mathcal{U}\) by randomly sampling 3,000 dialogues from the MultiWOZ 2.0 training set, apply our AL framework to KAGE-GPT2, and set the query size to \(k=1000\). As shown in Table 4, our method improves JGA and RC over the Last Turn baseline, but with an increased runtime, since our method requires three rounds of iteration.
Due to a lack of budget, we are unable to employ human annotators to evaluate the actual annotation cost. Instead, we conduct a theoretical cost analysis to show the potential cost reduction of our method. Suppose a dialogue \(D\) has \(T\) turns in total, it takes \(x\) minutes for a human annotator to read each turn (_i.e._, reading time) and \(y\) minutes to annotate a single turn (_i.e._, annotating time), and it costs \(z\) dollars per minute to hire a human annotator. Assume our proposed method selects the \(t\)th (\(1\leq t\leq T\)) turn to annotate. The total annotation costs, including the reading time and annotating time, of the three methods are listed in Table 5. Since the Full Dialogue baseline takes each accumulated turn as a training instance (Section 3), it requires the highest annotation cost. Our method only annotates a single turn per dialogue, the same as the Last Turn baseline; its annotation cost therefore depends on the selected turn \(t\), which is measured by RC in our experiments. As shown in Table 1 and discussed in Section 6.1, our method generally saves RC by a large margin (around 29%\(\sim\)43% across different models) compared to the Last Turn baseline, and saves even more compared to the Full data setting. Therefore, from a theoretical cost estimation point of view, our proposed method can save annotation costs while maintaining DST performance.
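For concreteness, the sketch below restates this cost model in code (an added illustration; the per-method formulas are reconstructions of the description above, not a copy of Table 5, and the numbers in the example are arbitrary).

```python
def annotation_cost(T, t, x, y, z, method):
    """Theoretical labelling cost (dollars) for one dialogue with T turns.

    x: minutes to read one turn, y: minutes to annotate one turn,
    z: dollars per annotator-minute, t: turn selected by turn-level AL (1 <= t <= T).
    The per-method formulas below are assumptions reconstructed from the text,
    not a copy of Table 5.
    """
    if method == "full_dialogue":    # every accumulated turn is annotated
        minutes = T * x + T * y
    elif method == "last_turn":      # read the whole dialogue, annotate only the last turn
        minutes = T * x + y
    elif method == "turn_level_al":  # read up to the selected turn t, annotate it
        minutes = t * x + y
    else:
        raise ValueError(method)
    return z * minutes

# Example: a 10-turn dialogue, 0.5 min to read and 1 min to annotate a turn, $0.4/min.
for m in ("full_dialogue", "last_turn", "turn_level_al"):
    print(m, annotation_cost(T=10, t=6, x=0.5, y=1.0, z=0.4, method=m))
```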
## 7 Conclusion
This paper tackles the practical dialogue annotation problem by proposing a novel turn-level AL framework for DST, which strategically selects the most valuable turn from each dialogue for labelling and training. Experiments show that our approach outperforms strong DST baselines in weakly-supervised scenarios and achieves the same or better joint goal and slot accuracy with significantly less annotated data. Further analyses are conducted to investigate the impact of AL query sizes, base DST models, and turn selection methods.
\begin{table}
\begin{tabular}{c|c c c c} \hline
**Method** & **\# of Training data (\%) \(\downarrow\)** & **JGA \(\uparrow\)** & **RC \(\downarrow\)** & **Runtime (hour) \(\downarrow\)** \\ \hline
Full data & 21072 (100\%) & 46.7 & 100 & 2.3 \\
Last Turn & 3000 (14.2\%) & 41.4 & 100 & 0.6 \\
ME & 3000 (14.2\%) & 44.3 & 59.3 & 1.6 \\ \hline
\end{tabular}
\end{table}
Table 4: Computational cost comparison using KAGE-GPT2 on MultiWOZ 2.0 with \(\mathcal{U}=3000\) and \(k=1000\).
Table 3: Example dialogue (Dialogue MIL2395) showing the turns selected by the ME and LC strategies for annotation (table content truncated in the source).
## 8 Limitations
We acknowledge the limitations of this paper as follows.
First, our AL approach adds extra computation time compared to directly training a DST model using only the last turns of dialogues. A smaller query size \(k\) may further increase the runtime, as more intermediate models have to be trained. That is, we achieve similar or even better DST performance with significantly less annotated data at the cost of increased computation time. Therefore, the trade-off between computational cost, DST performance, and annotation cost needs to be balanced carefully.
Second, we are unable to employ human annotators to evaluate the actual cost due to a lack of budget. In practice, the number of annotators required depends on the financial budget, project timeline, and the proficiency of annotators. Estimating the exact number of annotators and the annotation cost is challenging. As a mitigation, we provide a theoretical cost analysis in Section 6.3. However, it is a rough estimation and may not reflect the actual cost.
Third, our experiments are limited to the MultiWOZ 2.0 (Budzianowski et al., 2018) and MultiWOZ 2.1 (Eric et al., 2020) datasets. We also tried to use the SGD dataset (Rastogi et al., 2020). However, the PPTOD model is already pre-trained on this dataset, making it unsuitable for downstream evaluation. KAGE-GPT2 requires a predefined ontology (i.e., all possible domain-slot value pairs in the dataset) to build a graph neural network, but SGD does not provide all possible values for non-categorical slots. For example, MultiWOZ predefines all possible values for the non-categorical domain-slot _train-arriveBy_, while SGD does not, since such values are not enumerable. Our AL framework is built upon the base DST model and thus inherits the same drawbacks; we may try other DST models and datasets in the future.
## Acknowledgements
This work is supported by TPG Telecom. We would like to thank anonymous reviewers for their valuable comments.
|
2301.02430 | Some Solitons on Homogeneous Almost $α$-Cosymplectic $3$-Manifolds
and Harmonic Manifolds | In this paper, we investigate the nature of Einstein solitons, whether it is
steady, shrinking or expanding on almost $\alpha$-cosymplectic $3$-manifolds.
We also prove that a simply connected homogeneous almost $\alpha$-cosymplectic
$3$-manifold, admitting a contact Einstein soliton, is an unimodular semidirect
product Lie group. Finally, we show that a harmonic manifold admits a Ricci
soliton if and only if it is flat. | Naeem Ahmad Pundeer, Paritosh Ghosh, Hemangi Madhusudan Shah, Arindam Bhattacharyya | 2023-01-06T09:26:19Z | http://arxiv.org/abs/2301.02430v2 | [
###### Abstract
In this paper, we investigate the nature of Einstein solitons, whether it is steady, shrinking or expanding on almost \(\alpha\)-cosymplectic \(3\)-manifolds. We also prove that a simply connected homogeneous almost \(\alpha\)-cosymplectic \(3\)-manifold, admitting a contact Einstein soliton, is an unimodular semidirect product Lie group. Finally, we show that a harmonic manifold admits a non-trivial Ricci soliton if and only if it is flat. Thus we show that rank one symmetric spaces of compact as well as non-compact type are stable under a Ricci soliton. In particular, we obtain a strengthening of Theorem 1 and Theorem 2 of [1].
Keywords: Almost \(\alpha\)-cosymplectic manifold, Harmonic manifold, Ricci soliton, Einstein soliton.
Naeem Ahmad Pundeer, Paritosh Ghosh, Hemangi Madhusudan Shah and Arindam Bhattacharyya
Mathematics Subject Classification: 53B40, 58B20, 53C25, 53D15
## 1 Introduction
The study of solitons, in particular Ricci solitons, on Riemannian manifolds plays a vital role in understanding the geometry of the underlying manifold. It is very interesting to study Ricci and Einstein solitons on almost \(\alpha\)-cosymplectic \(3\)-manifolds. Recently, Jin and Ximin [10] showed that a simply connected homogeneous almost \(\alpha\)-cosymplectic \(3\)-manifold, admitting contact Ricci solitons, is cosymplectic; and the manifold under consideration is an unimodular semidirect product Lie group \(\mathbb{R}^{2}\rtimes_{A}\mathbb{R}\), where \(A=\left(\begin{array}{cc}0&b\\ -b&0\end{array}\right)\), equipped with a flat left invariant cosymplectic structure.
Motivated by this result we show in this paper that, if a simply connected homogeneous almost \(\alpha\)-cosymplectic \(3\)-manifold, with some additional
hypothesis, admits a contact Einstein soliton, then the manifold is an unimodular semidirect product Lie group \(G\) of type \(G_{0b\overline{b}}=\mathbb{R}^{2}\rtimes_{A}\mathbb{R}\), where \(A=\left(\begin{array}{cc}0&b\\ -b&0\end{array}\right)\neq 0\). And also \(G\) is the Lie group \(\tilde{E}^{2}\) equipped with its flat left invariant cosymplectic structure (see Corollary 3.5). In order to prove this result, we first obtain a characterization of almost \(\alpha\)-cosymplectic 3-manifold admitting contact Einstein solitons, which is the main theorem (Theorem 3.4) of Section 3. To establish this aforementioned theorem we derive an identity (Lemma 3.3) involving scalar curvature, Lie derivative of the metric and Ricci operator on a Riemannian manifold admitting Einstein soliton. We also give some conditions on \(\alpha\) for contact Einstein solitons to be steady, shrinking or expanding on almost \(\alpha\)-cosymplectic 3-manifolds (see Theorem 3.1).
Another interesting topic in differential geometry is the geometry of harmonic manifolds. In this paper, we prove that a harmonic manifold admits a non-trivial Ricci soliton if and only if it is flat. The flat harmonic manifold admits Ricci solitons of steady, expanding or shrinking type. We also determine the corresponding potential function. In fact, the Busemann function on \(\mathbb{R}^{n}\) turns out to be the potential function in the case of steady solitons (see Theorem 4.1 of Section 4).
Note that any rank one symmetric space is harmonic. Therefore, in particular, we obtain that there are no non-trivial Ricci solitons on rank one symmetric spaces. Thus harmonic manifolds, and in particular rank one symmetric spaces, are stable under a Ricci soliton. It is shown in Theorem 1 and Theorem 2 of [1] that any small perturbation of the non-compact symmetric metric flows back to the original metric under an appropriately rescaled Ricci flow. Thus we obtain a strengthening of this result in the case of non-compact rank one symmetric spaces. Moreover, we also obtain that compact rank one symmetric spaces are stable under Ricci solitons.
The paper is divided into four sections. Section 2 is devoted to the preliminaries about Ricci soliton, Einstein soliton, almost \(\alpha\)-cosymplectic 3-manifolds and harmonic manifolds. In Section 3, we prove our main results on almost \(\alpha\)-cosymplectic 3-manifold admitting contact Einstein solitons, as stated above. In the last section, we prove the main result about harmonic manifolds admitting Ricci solitons.
## 2 Preliminaries
In this section, we discuss some notions required to prove the results of this paper.
### Ricci solitons
Ricci solitons are the self-similar solutions of the Ricci flow. The concept of Ricci flow was first introduced by Hamilton [8] in 1982, motivated by the work of Eells and Sampson [7] on harmonic maps, and the flow is given by the equation
\[\frac{\partial g}{\partial t}=-2S,\]
where \(S\) is the Ricci tensor.
_Ricci solitons_ are the generalizations of the Einstein metrics and are the solutions of the equation
\[Ric(g)+\frac{1}{2}\mathfrak{L}_{X}g=\lambda g, \tag{1}\]
where \(Ric(X,Y)=S(X,Y)\) is the Ricci curvature tensor, \(\mathfrak{L}_{X}\) is the Lie derivative along the direction of the vector field \(X\) and \(\lambda\) is a real constant. The soliton is said to be _shrinking_ if \(\lambda>0\), _steady_ if \(\lambda=0\) and _expanding_ if \(\lambda<0\).
Tashiro [17] proved a very important result for complete Einstein manifolds admitting Ricci solitons.
**Theorem 2.1**.: [17] _Let \((M,g)\) be a complete Riemannian \(n\)-manifold admitting a nontrivial function \(f\) such that \(\operatorname{Hess}f=\lambda g\), then \((M,g)\) is isometric to a complete warped product metric and must have one of the three forms:_
1. \(M=\mathbb{R}\times N,\ g=dr^{2}+\rho^{2}(r)g_{N}\)_,_
2. \(M=\mathbb{R}^{n},\ g=dr^{2}+\rho^{2}(r)ds_{n-1}^{2},\ r\geq 0\)_,_
3. \(M=S^{n},\ g=dr^{2}+\rho^{2}(r)ds_{n-1}^{2},\ r\in[a,b]\)_._
### Einstein solitons
The _Einstein solitons_ are a generalization of the Ricci solitons, first introduced by Catino and Mazzieri [4] in 2016. They are the solutions of the equation
\[\mathfrak{L}_{V}g+2S=(2\lambda+r)g, \tag{2}\]
where the Ricci tensor is \(S(X,Y)=g(X,QY)\) with \(Q\) the Ricci operator, \(r\) is the scalar curvature, \(\lambda\in\mathbb{R}\) is a constant, and \(V\) is known as the _potential vector field_.
Einstein solitons are the self-similar solutions of the Einstein flow,
\[\frac{\partial}{\partial t}g+2S=rg.\]
It is said to be _steady_ if \(\lambda=0\), _shrinking_ if \(\lambda>0\) and _expanding_ if \(\lambda<0\).
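As a quick added illustration of this definition (not taken from the original text), suppose \((M,g)\) is an Einstein manifold with \(S=\frac{r}{n}g\) and the potential vector field \(V\) is Killing, so that \(\mathfrak{L}_{V}g=0\). Then (2) reduces to
\[2\,\frac{r}{n}\,g=(2\lambda+r)\,g\qquad\Longrightarrow\qquad\lambda=\frac{r}{n}-\frac{r}{2},\]
so every Einstein metric with a Killing potential field is a (trivial) Einstein soliton, which is steady precisely when \(r=0\) (for \(n\neq 2\)).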
### Almost contact metric manifolds
In order to define contact metric manifolds, we need the concept of Reeb vector field.
**Reeb vector field**[3]**:** A global vector field \(\xi\) on a contact manifold \(M^{2n+1}\), equipped with a global 1-form \(\eta\), is called _Reeb vector field_ or _characteristic vector field_, if any vector field \(X\) satisfies \(\eta(\xi)=1\) and \(d\eta(X,\xi)=0\).
**Almost contact manifold**[3]**:** Let \(M\) be a Riemannian manifold of dimension \((2n+1)\), \(n\geq 1\). \(M^{2n+1}\) is said to have an _almost contact structure
\((\varphi,\xi,\eta)\), if there exists a \((1,1)\)-tensor \(\varphi\), a global vector field \(\xi\) and a \(1\)-form \(\eta\) such that
\[\varphi^{2}X=-X+\eta(X)\xi,\ \eta(\xi)=1, \tag{3}\]
for any vector field \(X\) on \(M\), where \(\xi\) is the _Reeb vector field_. The manifold \(M\) equipped with the structure \((\varphi,\xi,\eta)\) is called an _almost contact manifold_.
**Almost contact metric manifold [3]:** A Riemannian metric \(g\) is said to be _compatible_ with an almost contact structure \((\varphi,\xi,\eta)\), if
\[g(\varphi X,\varphi Y)=g(X,Y)-\eta(X)\eta(Y), \tag{4}\]
holds for any \(X,Y\in\chi(M)\) and \((M,\varphi,\xi,\eta,g)\) is called an _almost contact metric manifold_.
**Normal almost contact metric manifold [3]:** An almost contact metric manifold is said to be _normal_, if for any \(X,Y\in\chi(M)\) the tensor field \(N=[\varphi,\varphi]+2d\eta\otimes\xi\) vanishes everywhere on the manifold, where \([\varphi,\varphi]\) is the Nijenhuis tensor of \(\varphi\).
**Homogeneous almost contact metric manifold [10]:** An almost contact metric manifold \((M,\varphi,\xi,\eta,g)\) is said to be _homogeneous_, if there exists a connected Lie group \(G\) of isometries acting transitively on \(M\) leaving \(\eta\) invariant.
### Cosymplectic manifolds
A \((2n+1)\)-dimensional manifold is said to be a _cosymplectic manifold_[11], if it admits a closed, \(1\)-form \(\eta\) and \(2\)-form \(\Phi\) such that \(\eta\wedge\Phi^{n}\) is a volume element, where \(\Phi(X,Y)=g(\varphi X,Y)\) is a \(2\)-form on \(M^{2n+1}\).
**Almost cosymplectic manifold [11]:** If \(\eta\) and \(\Phi\) are not closed but \(\eta\wedge\Phi^{n}\) is a volume form, then the manifold is called _almost cosymplectic manifold_.
\(\alpha\)**-cosymplectic manifold [14]:** An almost cosymplectic manifold is said to be \(\alpha\)-cosymplectic if \(d\eta=0\) and \(d\Phi=2\alpha\eta\wedge\Phi\) for some constant \(\alpha\).
**Almost \(\alpha\)-cosymplectic manifold [11]:** An _almost \(\alpha\)-cosymplectic manifold_ is defined as an almost contact metric manifold with \(d\eta=0\) and \(d\Phi=2\alpha\eta\wedge\Phi\), for any constant \(\alpha\). In particular, the almost \(\alpha\)-cosymplectic manifold is
* _almost \(\alpha\)-Kenmotsu_ if \(\alpha\neq 0\),
* _almost cosymplectic_ if \(\alpha=0\),
* _almost Kenmotsu_ if \(\alpha=1\).
**Harmonic vector field [16]:** A characteristic vector field \(\xi\) on an almost \(\alpha\)-cosymplectic manifold is _harmonic_ if and only if \(\xi\) is an eigenvector field of the Ricci operator \(Q\).
### Almost \(\alpha\)-cosymplectic \(3\)-manifold
In this article, we will mainly focus on \(3\)-dimensional almost \(\alpha\)-cosymplectic manifold. In what follows, we will be using the following results.
**Theorem 2.2**.: [14] _An almost \(\alpha\)-cosymplectic \(3\)-manifold is \(\alpha\)-cosymplectic if and only if \(\mathfrak{L}_{\xi}h=0\), where \(h=\frac{1}{2}\mathfrak{L}_{\xi}\varphi\)._
Any almost \(\alpha\)-cosymplectic \(3\)-manifold satisfies important relationships between \(\Phi,\xi\) and \(h\).
**Lemma 2.3**.: [14] _Let \(M^{2n+1}\) be an almost \(\alpha\)-cosymplectic \(3\)-manifold, then we have,_
\[\nabla_{\xi}\varphi=0,\ \nabla\xi=0,\ h\varphi+\varphi h=0,\ h\xi=0, \tag{5}\]
_with_
\[\nabla_{X}\xi=-\alpha\varphi^{2}X-\varphi hX. \tag{6}\]
We will require some identities on the \(\varphi\)-bases [3] and the following table for the Levi-Civita connection.
**Proposition 2.4**.: [14] _On almost \(\alpha\)-cosymplectic \(3\)-manifold, there exists \(\varphi\)-bases satisfying_
\[he=\sigma e,\ h\varphi e=-\sigma\varphi e,\ h\xi=0,\]
_with \(\sigma\) a local smooth eigen-function of \(h\)._
**Theorem 2.5**.: [14] _The Levi-Civita connection on almost \(\alpha\)-cosymplectic \(3\)-manifold are given by,_
\[\begin{cases}&\nabla_{e}e=-a\varphi e-\alpha\xi,\ \nabla_{\varphi e}e=-b \varphi e+\sigma\xi,\ \nabla_{\xi}e=\mu\varphi e,\\ &\nabla_{e}\varphi e=ae+\sigma\xi,\ \nabla_{\varphi e}\varphi e=be-\alpha\xi, \ \nabla_{\xi}\varphi e=-\mu e,\\ &\nabla_{e}\xi=\alpha e-\sigma\varphi e,\ \nabla_{\varphi e}\xi=-\sigma e+ \alpha\varphi e,\ \nabla_{\xi}\xi=0,\end{cases} \tag{7}\]
_where \(a=g(\nabla_{e}\varphi e,e)\), \(b=-g(\nabla_{\varphi e}e,\varphi e)\) and \(\mu=g(\nabla_{\xi}e,\varphi e)\) are smooth functions._
The Ricci operator on almost \(\alpha\)-cosymplectic \(3\)-manifold is known explicitly [14].
**Proposition 2.6**.: [14] _The Ricci operator \(Q\) on almost \(\alpha\)-cosymplectic \(3\)-manifold is given by,_
\[\begin{cases}&Q\xi=-(2\alpha^{2}+\operatorname{tr}h^{2})\xi+(2b\sigma-e( \sigma))\varphi e-(2a\sigma+(\varphi e)(\sigma))e,\\ &Q\varphi e=(2b\sigma-e(\sigma))\xi+(\alpha^{2}+\frac{r}{2}+\frac{\operatorname {tr}h^{2}}{2}+2\sigma\mu)\varphi e+(\xi(\sigma)+2\alpha\sigma)e,\\ &Qe=-(2a\sigma+(\varphi e)(\sigma))\xi+(\xi(\sigma)+2\alpha\sigma)\varphi e+ (\alpha^{2}+\frac{r}{2}+\frac{\operatorname{tr}h^{2}}{2}-2\sigma\mu)e.\end{cases} \tag{8}\]
_Furthermore, the scalar curvature \(r=\operatorname{tr}Q\) is given by_
\[r=-6\alpha^{2}-\operatorname{tr}h^{2}-2(a^{2}+b^{2})-2(\varphi e)(a)+2e(b). \tag{9}\]
The structure of simply-connected, homogeneous almost \(\alpha\)-cosymplectic \(3\)-manifold, admitting a contact Ricci soliton, is very well known.
**Theorem 2.7**.: [10] _Let \(M\) be a simply-connected, homogeneous almost \(\alpha\)-cosymplectic \(3\)-manifold admitting a contact Ricci soliton. Then \(M\) is an unimodular semidirect product Lie group \(G\) of type \(G_{0b\overline{b}}=\mathbb{R}^{2}\rtimes_{A}\mathbb{R}\), where \(A=\left(\begin{array}{cc}0&b\\ -b&0\end{array}\right)\), equipped with a flat left invariant cosymplectic structure. Moreover, we have the following:_
1. _If_ \(A=0\)_, i.e.,_ \(b=0\)_,_ \(G\) _is the abelian Lie group_ \(\mathbb{R}^{3}\) _equipped with its flat left invariant cosymplectic structure._
2. _If_ \(A\neq 0\)_, i.e.,_ \(b\neq 0\)_,_ \(G\) _is the Lie group_ \(\tilde{E}^{2}\) _equipped with its flat left invariant cosymplectic structure._
### Harmonic manifolds
A complete Riemannian manifold \((M^{n},g)\) is said to be _harmonic_ if, for any \(p\in M\), the volume density \(\omega_{p}(q)=\sqrt{\det(g_{ij}(q))}\) in normal coordinates centered at \(p\) is a radial function [2]. Thus,
\[\Theta(r)=r^{n-1}\sqrt{\det(g_{ij}(q))},\]
the density of the geodesic sphere, is a radial function. It is known that harmonic manifolds are Einstein [2]. They are naturally classified according to the sign of the Ricci constant. Let \(r\) be the constant scalar curvature of \(M\).
* If \(r=0\), then \(M\) is flat, that is \((M,g)=(\mathbb{R}^{n},Can)\) (Lemma 4.5).
* If \(r>0\), then by Bonnet-Myer's theorem \(M\) is compact with finite fundamental group. They are compact rank one symmetric spaces by a well known result of Szabo (cf. [20]).
* If \(r<0\), then \(M\) is a non-compact harmonic manifold. Such manifolds are rank one symmetric spaces of non-compact type if the dimension of \(M\) is at most \(5\).
The main result in the theory of harmonic spaces is the Lichnerowicz Conjecture: _Any simply connected, complete harmonic manifold is either flat or a rank one symmetric space._ By the above classification, we see that the conjecture is resolved for compact harmonic manifolds and is open for non-compact harmonic manifolds of dimension \(6\). There are counterexamples to the conjecture when the dimension is at least \(7\), known as the Damek-Ricci spaces or NA spaces. See the references in [20] for more details.
In the category of non-compact harmonic manifolds, we will be considering simply connected, complete, non-compact harmonic manifolds. It follows that these spaces do not have conjugate points (cf. [20]). Hence, by the Cartan-Hadamard theorem,
\[\exp_{p}:T_{p}M\to M\]
is a diffeomorphism and every geodesic of \(M\) is a line. That is, if \(\gamma_{v}:\mathbb{R}\to M\) is a geodesic of \(M\) with \(v\in S_{p}M\), \(\gamma^{\prime}_{v}(0)=v\), then \(d(\gamma_{v}(t),\gamma_{v}(s))=|t-s|\).
**Busemann function:** Let \(\gamma_{v}\) be a geodesic line, then the two _Busemann functions_ associated to \(\gamma_{v}\) are defined as [17]:
\[b_{v}^{+}(x)=\lim_{t\to\infty}d(x,\gamma_{v}(t))-t,\]
\[b_{v}^{-}(x)=\lim_{t\to-\infty}d(x,\gamma_{v}(t))-t.\]
## 3 Einstein Solitons on Almost \(\alpha\)-Cosymplectic \(3\)-Manifolds
In this section, we examine the nature of a _contact Einstein soliton_ on almost \(\alpha\)-cosymplectic \(3\)-manifolds. We also show that the characteristic vector field \(\xi\) is harmonic on an almost \(\alpha\)-cosymplectic \(3\)-manifold admitting a contact Einstein soliton. Finally, we generalize Theorem 2.7 using these results.
**Contact Einstein soliton:** Let \((M^{2n+1},g)\) be a Riemannian manifold of dimension \(2n+1\) (\(n\geq 1\)). Consider the Einstein soliton (2), with potential vector field \(V\), on an almost contact metric manifold \((M,\varphi,\xi,\eta,g)\). Then the soliton is called a _contact Einstein soliton_ if \(V=\xi\), that is, if the potential vector field is the characteristic vector field.
The potential vector field \(V\) is called _transversal_, if it is orthogonal to the characteristic vector field, that is \(V\perp\xi\).
**Theorem 3.1**.: _Let \((M,\varphi,\xi,\eta,g)\) be an almost \(\alpha\)-cosymplectic \(3\)-manifold, admitting a contact Einstein soliton. Then the soliton is:_
1. _steady, if_ \(\alpha^{2}=\sigma^{2}-(a^{2}+b^{2})-(\varphi e)(a)+e(b)\)_,_
2. _shrinking, if_ \(\alpha^{2}>\sigma^{2}-(a^{2}+b^{2})-(\varphi e)(a)+e(b)\)_,_
3. _expanding, if_ \(\alpha^{2}<\sigma^{2}-(a^{2}+b^{2})-(\varphi e)(a)+e(b)\)_._
Proof.: If the soliton is contact Einstein soliton, using \(V=\xi\) in (2), we have
\[g(\nabla_{X}\xi,Y)+g(X,\nabla_{Y}\xi)+2g(X,QY)=(2\lambda+r)g(X,Y), \tag{10}\]
for any vector fields \(X,Y\) on \(M\).
Substituting \(X=Y=\xi\) in the above equation and using (8), we obtain
\[\lambda=-2\alpha^{2}-2\sigma^{2}-\frac{r}{2}. \tag{11}\]
From the expression of \(r\) (9), we get
\[\lambda=\alpha^{2}-\sigma^{2}+(a^{2}+b^{2})+(\varphi e)(a)-e(b), \tag{12}\]
from which we can conclude the proof.
**Theorem 3.2**.: _Let \((M,\varphi,\xi,\eta,g)\) be an almost \(\alpha\)-cosymplectic \(3\)-manifold, admitting a contact Einstein soliton. Then the characteristic vector field \(\xi\) is harmonic._
Proof.: From (10), we get for \(X=\xi\) and \(Y=e\),
\[(\varphi e)(\sigma)=-2a\sigma. \tag{13}\]
And for \(X=\xi\) and \(Y=\varphi e\), from (10) we have
\[e(\sigma)=2b\sigma. \tag{14}\]
Now, using (13) and (14) in the expression of \(Q\xi\) in (8), we obtain
\[Q\xi=-(2\alpha^{2}+2\sigma^{2})\xi,\]
which shows that \(\xi\) is an eigenvector field of the Ricci operator \(Q\) concluding the fact that \(\xi\) is harmonic.
We derive the identity involving the Lie derivative of the metric, Ricci operator, the potential vector field \(V\).
**Lemma 3.3**.: _Let \((M,g)\) be a Riemannian manifold of scalar curvature r, admitting an Einstein soliton (2). Then_
\[\left\|\mathfrak{L}_{V}g\right\|^{2}=2dr(V)+4\operatorname{div}\bigg{(} \bigg{(}\lambda+\frac{r}{2}\bigg{)}V-QV\bigg{)}, \tag{15}\]
_where \(Q\) is the Ricci operator._
Proof.: In local coordinate system, (2) leads to
\[\mathfrak{L}_{V}g^{ij}+S^{ij}=(2\lambda+r)g^{ij}.\]
Therefore,
\[\left\|\mathfrak{L}_{V}g\right\|^{2}= -S^{ij}\mathfrak{L}_{V}g_{ij}+(2\lambda+r)g^{ij}\mathfrak{L}_{V} g_{ij}.\] \[= -\mathfrak{L}_{V}r+g_{ij}\mathfrak{L}_{V}S^{ij}-(2\lambda+r)g_{ ij}\mathfrak{L}_{V}g^{ij}. \tag{16}\]
Now,
\[g_{ij}\mathfrak{L}_{V}S^{ij}= g_{ij}\nabla_{V}S^{ij}-g_{ij}\nabla_{\alpha}V_{i}S^{\alpha j}-g_{ij} \nabla_{\alpha}V_{j}S^{i\alpha}\] \[= 2dr(V)-2\operatorname{div}QV. \tag{17}\]
Observing that \(g_{ij}\mathfrak{L}_{V}g^{ij}=-2\operatorname{div}V\) and using (16) and (17), we get the required result.
Now we derive the main result of this section.
**Theorem 3.4**.: _Consider \(M\) to be an almost \(\alpha\)-cosymplectic \(3\)-manifold, admitting a contact Einstein soliton. Then the following hold._
1. _If_ \(\sigma\neq 0\)_, then_ \(\alpha=a^{2}+b^{2}-2\lambda^{2}+(\varphi e)(a)-e(b)\)_._
2. _If_ \(\sigma=0\)_, then_ \(M\) _is cosymplectic._
Proof.: Replacing \(X\) by \(e\) and \(Y\) by \(\varphi e\), from (10) we get
\[g(\nabla_{e}\xi,\varphi e)+g(e,\nabla_{\varphi e}\xi)+2g(e,Q\varphi e)=(2 \lambda+r)g(e,\varphi e).\]
Using (7) and (8), after simplification we acquire,
\[\xi(\sigma)=\sigma-2\alpha\sigma. \tag{18}\]
Now putting \(X=e=Y\) in (10) and using (7), (8), (9) and (12), we get
\[6\alpha^{2}+6\sigma^{2}-4\sigma\mu+2\alpha+r=0. \tag{19}\]
Similarly, putting \(X=\varphi e=Y\) in (10) and using (7), (8), (9) and (12), we also obtain
\[6\alpha^{2}+6\sigma^{2}+4\sigma\mu+2\alpha+r=0. \tag{20}\]
So comparing (19) and (20), we have \(\sigma\mu=0\). If \(\sigma\neq 0\), then from (20), we obtain the required result using (9).
Now suppose \(\sigma=0\), then \(M\) is \(\alpha\)-cosymplectic. From [14], recall that an almost \(\alpha\)-cosymplectic manifold \(M\) is \(\alpha\)-cosymplectic if and only if for any \(X\in\chi(M)\),
\[QX=\bigg{(}\alpha^{2}+\frac{r}{2}\bigg{)}X-\bigg{(}3\alpha^{2}+\frac{r}{2} \bigg{)}\eta(X)\xi. \tag{21}\]
Since \(\nabla\xi\) is symmetric, (10) becomes
\[g(\nabla_{X}\xi,Y)+g(X,QY)=\bigg{(}\lambda+\frac{r}{2}\bigg{)}g(X,Y). \tag{22}\]
Using (6) and (21), we have from (22), for any \(X,Y\in\chi(M)\),
\[(\alpha^{2}+\alpha-\lambda)g(X,Y)-\bigg{(}3\alpha^{2}+\alpha+\frac{r}{2} \bigg{)}\eta(X)\eta(Y)=0,\]
which implies \(\alpha^{2}+\alpha-\lambda=0\) and \(3\alpha^{2}+\alpha+\frac{r}{2}=0\).
That is \(\lambda=\alpha^{2}+\alpha\) and \(r=-6\alpha^{2}-2\alpha=\text{constant}\), so that, \(\lambda+\frac{r}{2}=-2\alpha^{2}\).
Also, from (21), we have \(Q\xi=-2\alpha^{2}\xi\) which implies \((\lambda+\frac{r}{2})\xi-Q\xi=0\). Therefore, using Lemma 3.3 (15), we can say that \(\xi\) is a Killing vector field, that is, \(\nabla\xi\) is skew-symmetric. But in our case \(\nabla\xi\) is symmetric, which implies \(\nabla\xi=0\), that is, \(\alpha=0\), proving the fact that \(M\) is cosymplectic.
**Corollary 3.5**.: _Consider \(M\) to be a simply-connected, homogeneous, almost \(\alpha\)-cosymplectic \(3\)-manifold, admitting a contact Einstein soliton with \(\sigma=0\). Then \(M\) is an unimodular semidirect product Lie group \(G\) of type \(G_{0\mu\overline{\mu}}=\mathbb{R}^{2}\rtimes_{A}\mathbb{R}\), where \(A=\left(\begin{array}{cc}0&\mu\\ -\mu&0\end{array}\right)\neq 0\), is a real matrix. Moreover, \(G\) is the Lie group \(\tilde{E}^{2}\) equipped with its flat left invariant cosymplectic structure._
Proof.: The proof follows from Theorem 2.7 and Theorem 3.4.
## 4 Ricci Solitons on Harmonic Manifolds
Recall that the Ricci solitons are solutions of (1). Clearly, if a manifold is Einstein of constant \(r\), then trivial solitons \(X=0\) and \(X\) a Killing vector field are solutions of (1) with \(\lambda=r\).
In this section, we study Ricci solitons on complete, simply connected, harmonic manifolds. We prove a _Lichnerowicz type result: a harmonic manifold admits a non-trivial Ricci soliton if and only if \(M\) is flat_. More precisely,
we show that compact harmonic manifolds and non-flat harmonic manifolds _do not_ admit non-trivial Ricci solitons. But flat harmonic manifolds _do admit_ non-trivial shrinking and expanding Ricci solitons.
In the sequel, harmonic manifold means complete, simply connected harmonic manifold.
The main theorem of this section is:
**Theorem 4.1**.: _Let \((M,g)\) be a harmonic manifold. Then \(M\) admits a non-trivial Ricci soliton if and only if \(M\) is flat. In this case, the steady Ricci soliton is trivial of Killing type given by \(X=\nabla{b_{v}}^{-};\) where \(b_{v}^{-}(x)=-\langle x,v\rangle\), the Busemann function, is the potential function on \(M\). In case, the Ricci soliton is shrinking or expanding, the potential function is given by \(f(x)=\lambda{d(x,p)}^{2}+f(p)\), for constant \(\lambda\neq 0\); and point \(p\) is the minimum or the maximum of \(f\) and \(X=\nabla f\) is the corresponding non-trivial Ricci soliton._
**Corollary 4.2**.: _There are no deformations of harmonic manifolds, and in particular, of rank one symmetric spaces under a Ricci soliton. In particular, we obtain a strengthening of [1], and also the stability of the compact rank one symmetric spaces under a Ricci soliton._
### Proof of Theorem 4.1
In this subsection we prove Theorem 4.1. We begin with the following important proposition.
**Proposition 4.3**.: _If a complete manifold admits a Ricci soliton, then it is a gradient soliton._
Proof.: This follows from Remark 3.2 of Perelman [13] (see also page 2 of [12]). Hence, in this case, if \(X\) is a Ricci soliton, then \(X=\nabla f\), for some smooth function \(f:M\to\mathbb{R}\).
_Remark 4.4_.: Here we are only concerned with simply connected and complete Riemannian manifold. In this case, clearly, we can write \(X=\nabla f\) by Poincare Lemma, for some \(f\in C^{\infty}(M)\).
**Lemma 4.5**.: _Ricci flat harmonic manifold is flat._
Proof.: It can be shown that any harmonic manifold \((M,g)\) is asymptotically harmonic [20]. That is there exists a constant \(h\geq 0\) such that
\[\Delta{b_{v}}^{\pm}=h.\]
Let \(L_{t}=\nabla^{2}{b_{v}}^{+}\) denote the second fundamental form of horosphere, \(b_{v}^{-1}(t)\). Then \(L_{t}\) satisfies the Riccati equation, that is for \(x_{t}\in\{\gamma^{\prime}(t)\}^{\perp}\),
\[{L_{t}}^{\prime}(x_{t})+{L_{t}}^{2}(x_{t})+R(x_{t},\gamma^{\prime}(t))\gamma^{ \prime}(t)=0.\]
Tracing the above equation, and using that \(\operatorname{tr}L_{t}=\Delta b_{v}^{+}=h\) is constant so that \((\operatorname{tr}L_{t})^{\prime}=0\), we obtain \(\operatorname{tr}{L_{t}}^{2}=0\), as \(\operatorname{Ricci}(\gamma^{\prime}(t),\gamma^{\prime}(t))=0\). But as \(L_{t}\) is a symmetric operator on \(\{\gamma^{\prime}(t)\}^{\perp}\), this forces \(L_{t}=0\). Consequently, \(R(x,v)v=0\) for any \(x\in v^{\perp}\) and for any \(v\in SM\). Thus \((M,g)\) is flat.
**Proposition 4.6**.: _If a harmonic manifold admits a Ricci soliton, then it admits a Gaussian._
Proof.: As in this case \((M,g)\) is Einstein, it follows that
\[\nabla^{2}f=2(\lambda-r)g, \tag{23}\]
where \(r\) is the constant scalar curvature of \(M\). Thus \(f\) is a Gaussian, that is, it satisfies (23).
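As a quick added illustration (not part of the original argument), on flat \(\mathbb{R}^{n}\), where \(r=0\), the radial function
\[f(x)=\lambda\,d(p,x)^{2}=\lambda\|x-p\|_{2}^{2}\qquad\text{satisfies}\qquad\nabla^{2}f=2\lambda\,g=2(\lambda-r)\,g,\]
so it is a Gaussian in the above sense; this is precisely the potential function recovered in Lemma 4.9 below.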
**Lemma 4.7**.: _Let \(X=\nabla f\) be a Killing vector field on compact harmonic manifold, then \(X\) is trivial. Trivial solitons of Killing type do not exist on non-compact, non-flat harmonic manifold. On flat harmonic manifold, Killing vector field is \(X=\nabla{b_{v}}^{-}\), where \(b_{v}^{-}(x)=-\langle x,v\rangle\) is a Busemann function on \(\mathbb{R}^{n}\)._
Proof.: Because \(X=\nabla f\) is a non-trivial Killing vector field, we have
\[\nabla^{2}f=0.\]
Therefore, \(\|\nabla f\|=\operatorname{constant}\neq 0\), consequently, \(f\) has no critical points. Any Killing vector field of constant norm satisfies (p. 164-167, [17]):
\[\|\nabla^{2}f\|^{2}=\operatorname{Ric}(\nabla f,\nabla f).\]
Therefore,
\[0=\|\nabla^{2}f\|^{2}=r\|\nabla f\|^{2}.\]
This implies that, for \(f\) non-constant, \(r=0\); therefore \(\operatorname{Ric}\equiv 0\), and hence the harmonic manifold must be flat (Lemma 4.5).
We have \(\|\nabla f\|=\operatorname{constant}\). We may assume that \(\|\nabla f\|=1\); therefore \(f\) is a distance function which is a harmonic function on \((\mathbb{R}^{n},Can)\). By Proposition 5.1 of [20], it follows that
\[f(x)=b_{v}^{-}(x)=-\langle x,v\rangle,\]
is a Busemann function on \(\mathbb{R}^{n}\)[17].
If \(M\) is compact, \(\nabla^{2}f=0\) implies that \(f\) is a harmonic function. Hence, \(f\) must be a constant function.
**Proposition 4.8**.: _A compact harmonic manifold \((M,g)\) does not admit a non-trivial Ricci soliton._
Proof.: We have,
\[\nabla^{2}f=2(\lambda-r)g.\]
Therefore, \(\Delta f=2(\lambda-r)n\) implies by the Bochner's formula that,
\[\frac{1}{2}\Delta(\|\nabla f\|^{2})=4(\lambda-r)^{2}n^{2}+r(\|\nabla f\|^{2}). \tag{24}\]
Therefore,
\[4(\lambda-r)^{2}n^{2}\operatorname{Vol}(M)=-r\int_{M}\|\nabla f\|^{2}\leq 0.\]
Since the left-hand side is non-negative and \(r>0\) for a compact harmonic manifold, both sides must vanish. This implies that \(\|\nabla f\|=0\), and therefore \(f\) is constant.
**Lemma 4.9**.: _A non-compact, harmonic manifold admits a non-trivial Ricci soliton if and only if it is flat. The flat harmonic manifold admits shrinking and expanding Ricci solitons with the corresponding potential function, \(f(x)=\lambda d(p,x)^{2}+f(p)\), for some \(p\in M\)._
Proof.: Suppose that a non-compact, harmonic manifold admits a non-trivial Ricci soliton. Therefore, it admits a Gaussian with \((\lambda-r)\neq 0\).
\[\nabla^{2}f=2(\lambda-r)g.\]
Therefore, \(f\) is either convex or concave function. Consequently, the only possible critical point of \(f\) is either maximum or minimum of \(f\). Suppose that \(p\) is a critical point of \(f\). Note that along any unit speed geodesic of \(M\) starting from \(p\),
\[f^{\prime\prime}(t)=2(\lambda-r). \tag{25}\]
Therefore, \(f^{\prime}(t)=2(\lambda-r)t+c\). Hence, there is exactly one critical point, and hence \(c=0\). Thus, \(f(t)=(\lambda-r)t^{2}+f(p)\), consequently \(f\) is a radial function. This implies that,
\[\Delta f=f^{\prime\prime}+\frac{\Theta^{\prime}}{\Theta}f^{\prime}=2(\lambda -r)n.\]
Therefore,
\[f^{\prime\prime}+\frac{\Theta^{\prime}}{\Theta}2(\lambda-r)t=2(\lambda-r)n.\]
Consequently by (25),
\[\frac{\Theta^{\prime}(t)}{\Theta(t)}=\frac{n-1}{t}.\]
Comparing with the series expansion (see (4.4) of [20]),
\[\frac{\Theta^{\prime}(t)}{\Theta(t)}=\frac{n-1}{t}-\frac{r}{3}+\cdots,\]
we obtain \(r=0\), hence \(M\) is flat. Finally, \(f(x)=\lambda d(p,x)^{2}+f(p)\) follows from section 1 of [5].
Finally we come to the proof of Theorem 4.1.
Proof.: A compact harmonic manifold cannot admit a non-trivial Ricci soliton (Proposition 4.8). If a non-compact harmonic manifold admits a trivial Ricci soliton of Killing type, then \((\lambda-r)=0\), which implies that \(r=0\). Therefore, \(M\) is flat and \(X=\nabla b_{v}{}^{-}\) (Lemma 4.7). If a non-compact harmonic manifold admits a non-trivial Ricci soliton, then \((\lambda-r)\neq 0\) again implies that \(r=0\) and \(M\) is flat. In this case \(X=\nabla f\), where \(f(x)=\lambda d(p,x)^{2}+f(p)\), for some \(p\in M\) (Lemma 4.9).
_Remark 4.10_.: We have shown that Theorem 4.1 confirms Theorem 2.1 in case of harmonic manifolds. Also Theorem 4.1 implies that there are no non-trivial deformation of non-flat harmonic manifolds. This indicates a result supporting the conjecture that, there are no non-trivial deformations of harmonic manifolds; and hence there should be only finitely many classes of harmonic manifolds.
## 5 Acknowledgements
Dr. Naeem Ahmad Pundeer would like to thank the U.G.C. for its Dr. D.S. Kothari Postdoctoral Fellowship. The corresponding author, Mr. Paritosh Ghosh, thanks the UGC for its Junior Research Fellowship of India. The authors would also like to thank Mr. Dipen Ganguly for his help in this research.
|
2303.03157 | Data-Driven Control with Inherent Lyapunov Stability | Recent advances in learning-based control leverage deep function
approximators, such as neural networks, to model the evolution of controlled
dynamical systems over time. However, the problem of learning a dynamics model
and a stabilizing controller persists, since the synthesis of a stabilizing
feedback law for known nonlinear systems is a difficult task, let alone for
complex parametric representations that must be fit to data. To this end, we
propose Control with Inherent Lyapunov Stability (CoILS), a method for jointly
learning parametric representations of a nonlinear dynamics model and a
stabilizing controller from data. To do this, our approach simultaneously
learns a parametric Lyapunov function which intrinsically constrains the
dynamics model to be stabilizable by the learned controller. In addition to the
stabilizability of the learned dynamics guaranteed by our novel construction,
we show that the learned controller stabilizes the true dynamics under certain
assumptions on the fidelity of the learned dynamics. Finally, we demonstrate
the efficacy of CoILS on a variety of simulated nonlinear dynamical systems. | Youngjae Min, Spencer M. Richards, Navid Azizan | 2023-03-06T14:21:42Z | http://arxiv.org/abs/2303.03157v2 | # Data-Driven Control with Inherent Lyapunov Stability
###### Abstract
Recent advances in learning-based control leverage deep function approximators, such as neural networks, to model the evolution of controlled dynamical systems over time. However, the problem of learning a dynamics model and a stabilizing controller persists, since the synthesis of a stabilizing feedback law for known nonlinear systems is a difficult task, let alone for complex parametric representations that must be fit to data. To this end, we propose _Control with Inherent Lyapunov Stability_ (CoILS), a method for jointly learning parametric representations of a nonlinear dynamics model and a stabilizing controller from data. To do this, our approach simultaneously learns a parametric Lyapunov function which intrinsically constrains the dynamics model to be stabilizable by the learned controller. In addition to the stabilizability of the learned dynamics guaranteed by our novel construction, we show that the learned controller stabilizes the true dynamics under certain assumptions on the fidelity of the learned dynamics. Finally, we demonstrate the efficacy of CoILS on a variety of simulated nonlinear dynamical systems.
## I Introduction
Data-driven approaches have shown notable successes in solving complex nonlinear control problems in robotics and autonomy, such as autonomous navigation, multi-agent control, and object grasping and manipulation [8, 11, 12]. In learning-based control problems, the task of controller synthesis is often compounded with a lack of knowledge, or uncertainty, about the system dynamics. To this end, neural networks have been widely used for system identification from data prior to controller synthesis [4, 7, 14, 15, 24]. However, when using such complex parametric representations to model dynamics, it can be challenging to provide any guarantees about the behavior of the learned system, particularly regarding the stability of the system under closed-loop feedback.
A traditional method for stabilizing nonlinear dynamical systems is linearizing the system dynamics around an equilibrium point and using the linear quadratic regulator (LQR) techniques to minimize deviation from that equilibrium. LQR methods can achieve closed-loop stability within a small region where the linear dynamics approximation is accurate, yet away from this region, they can fail spectacularly, particularly for highly nonlinear systems performing agile maneuvers [22]. Nonlinear controllers that are _certified_ to be globally stabilizing can be synthesized if a control Lyapunov function (CLF) for the system is known [20]. However, constructing a CLF even for known dynamics can be difficult to do exactly, and thus approximate approaches are popular. For example, polynomial approximations of the dynamics enable a search for sum-of-squares (SOS) polynomials as Lyapunov functions via semidefinite programming (SDP) [23]. However, polynomial approximation can be a significant restriction on the class of function approximators used.
### _Related Work_
To this end, there has been substantial growth in literature on _data-driven_ learning of stability certificates for dynamical systems, particularly those modeled with complex parametric representations. One of the earliest of such works fits a Lyapunov function for a known uncontrolled dynamical system by penalizing violations of the corresponding Lyapunov decrease condition [16]. Learning certificate functions such as Lyapunov, barrier, and contraction metric functions for dynamical systems with _sampled_ point penalties or constraints on stability violations is a ubiquitous theme in the literature [2, 5, 6, 10, 18]. However, when coupled with regression on unknown dynamics, such approaches do not even guarantee the learned model is stable. To resolve this issue, [13] recently proposed a method to jointly learn a stable uncontrolled dynamics model with a Lyapunov function. It guarantees stability of the learned dynamics model by restricting it to the stable halfspace described by the Lyapunov decrease condition. However, its naive extension to _controlled_ dynamics would restrict the learned model to be stabilizable by any control input, thereby hindering its ability to model general controlled nonlinear systems.
For controlled dynamics, the paradigm of sampled pointwise constraints largely persists in the literature. [19] jointly learn a dynamics model and certificate to regularize the dynamics model to perform well over long time horizons with sampled linear matrix inequality (LMI) constraints, yet they do not learn any specific controller. [21] assume the dynamics are known, and jointly learn a controller and a contraction metric certifying the stability of the closed-loop system with loss terms corresponding to sampled point violations of a stability inequality. For Lyapunov-based approaches, [3] jointly learn a controller and a Lyapunov function for known dynamics, while they actively add training samples that violate the Lyapunov decrease condition using a falsifier. [25] extend this method to unknown control systems with some stability guarantees, but their method requires certain knowledge on the system such as its Lipschitz constant and linearized model around the origin. Moreover, their learned dynamics do not guarantee the existence of the Lyapunov function. [9] directly apply the approach from [13] to the
controlled case, albeit only for control-affine systems when the actuator matrix is known. All in all, efficient simultaneous learning of Lyapunov functions and nonlinear controllers for systems with completely unknown dynamics has remained an open problem in control and robotics.
### _Contributions_
In this work, we tackle the difficult task of learning stabilizing feedback controllers with proven guarantees for unknown nonlinear dynamical systems. To this end, we propose _Control with Inherent Lyapunov Stability_ (CoILS), a new method for jointly learning a controlled dynamical systems model and a feedback controller from data, such that the model is _guaranteed by construction_ to be stabilized in closed-loop with the learned controller. CoILS does this by simultaneously learning a parametric Lyapunov function, which is used to constrain the open-loop dynamics onto the subspace of dynamics stabilizable in closed-loop by the learned controller. We further show that, under certain assumptions on the fidelity of our learned dynamics model, the learned controller is also guaranteed to stabilize the true dynamics. Finally, we demonstrate the performance of our joint learning method in a number of controlled nonlinear dynamical systems.
## II Problem Statement
In this paper, we are interested in controlling the unknown nonlinear dynamical system
\[\dot{x}(t)=f(x(t),u(t)) \tag{1}\]
with state \(x(t)\in\mathcal{X}\subset\mathbb{R}^{n}\) and control input \(u(t)\in\mathcal{U}\subset\mathbb{R}^{m}\) at time \(t\in\mathbb{R}\). While we do not know the dynamics \(f:\mathcal{X}\times\mathcal{U}\to\mathbb{R}^{n}\), we assume we have access to a finitely-sized dataset \(\mathcal{D}=\{(x_{i},u_{i},\dot{x_{i}})\}_{i=1}^{N}\) of input-output measurements of the system (1).
We want to _jointly_ learn the dynamics \(f\) and how to control them. Specifically, we want to _stabilize_ the system around an equilibrium point. The point \(x_{e}\in\mathcal{X}\) is an _equilibrium_ of the closed-loop system \(f_{u^{*}}(x)\coloneqq f(x,u^{*}(x))\) with feedback controller \(u^{*}:\mathcal{X}\to\mathcal{U}\) if
\[f_{u^{*}}(x_{e})=f(x_{e},u^{*}(x_{e}))=0. \tag{2}\]
There are many types of stability; we summarize the pertinent ones in the definition below for uncontrolled and closed-loop systems, i.e., where the dynamics are a function of the state \(x\) only.
**Definition II.1**.: The system \(\dot{x}=f(x)\) for \(x\in\mathcal{X}\) is _stable_ at its equilibrium point \(x_{e}\in\mathcal{X}\) if for any \(\epsilon>0\), there exists \(\delta_{e}>0\) such that \(\|x(0)-x_{e}\|_{2}<\delta_{e}\) implies \(\|x(t)-x_{e}\|_{2}<\epsilon\) for all \(t\geq 0\). The system is _asymptotically stable_ at \(x_{e}\) w.r.t. \(B\subset\mathcal{X}\) if it is stable at \(x_{e}\) and \(\lim_{t\to\infty}\|x(t)-x_{e}\|_{2}=0\) for all \(x(0)\in B\). The system is _exponentially stable_ at \(x_{e}\) w.r.t. \(B\subset\mathcal{X}\) if there exist \(m,\alpha>0\) such that \(\|x(t)-x_{e}\|_{2}\leq m\|x(0)-x_{e}\|_{2}e^{-\alpha t}\) for all \(x(0)\in B\).
For the controlled system (1), we assume a feedback controller \(u^{*}:\mathcal{X}\to\mathcal{U}\) exists such that the resulting closed-loop system \(\dot{x}=f_{u^{*}}(x)\) is exponentially stable at \(x_{e}\) w.r.t. \(\mathcal{X}\). Without loss of generality, we assume \(x_{e}=0\). Thus, our overall goal is to _jointly_ learn the dynamics \(f\) and an exponentially stabilizing feedback controller \(u^{*}\). As we discuss in Section III, our approach relies on encoding this stabilizability by \(u^{*}\) in \(f\) by construction. However, for this purpose, the stability definitions in Definition II.1 are cumbersome to work with directly, so, we appeal to Lyapunov stability theory to introduce scalar-value functions that summarize stability properties of dynamical systems [1, 17].
**Proposition II.2**.: _Consider the system \(\dot{x}=f(x)\) where \(x\in\mathcal{X}\). Suppose there exists a continuously differentiable function \(V:\mathcal{X}\to\mathbb{R}\) that is positive definite (i.e., \(V(x)>0\) for \(x\neq 0\) and \(V(0)=0\)), and satisfies_
\[\nabla_{f}V(x)\coloneqq\nabla V(x)^{\top}f(x)<0, \tag{3}\]
_for all \(x\in\mathcal{X}\setminus\{0\}\). Then the system is asymptotically stable at \(x=0\) w.r.t. \(\mathcal{X}\)._
Such a function \(V\) is termed a _Lyapunov function_. The key idea is that by condition (3), \(V\) is decreasing along any trajectories generated by \(f\) and eventually converges to 0, which implies \(x=0\). The existence of a Lyapunov function with additional properties is also a necessary and sufficient condition for exponential stability as follows.
**Proposition II.3**.: _Consider the system \(\dot{x}=f(x)\) where \(x\in\mathcal{X}\). This system is exponentially stable at \(x=0\) w.r.t. \(\mathcal{X}\) if and only if there exists a continuously differentiable function \(V:\mathcal{X}\to\mathbb{R}\) such that_
\[c_{1}\|x\|_{2}^{2}\leq V(x)\leq c_{2}\|x\|_{2}^{2},\quad\nabla_{f}V(x)\leq- \alpha V(x), \tag{4}\]
_for all \(x\in\mathcal{X}\setminus\{0\}\) and some constants \(\alpha,c_{1},c_{2}>0\)._
When Propositions II.2 and II.3 are restated for a closed-loop system \(\dot{x}=f_{u}(x)\) with controller \(u:\mathcal{X}\to\mathcal{U}\), the accompanying Lyapunov function \(V\) is also known as a _control Lyapunov function (CLF)_. In the next section, we use Proposition II.3 to constrain a model of \(f\) to be exponentially stabilizable by a parametric controller \(u^{*}\) as guaranteed by an accompanying parametric Lyapunov function \(V\).
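As a quick added sanity check of these conditions (an illustration, not from the original text), consider the uncontrolled linear system \(\dot{x}=-x\) on \(\mathcal{X}=\mathbb{R}^{n}\) with the candidate \(V(x)=\|x\|_{2}^{2}\):
\[\|x\|_{2}^{2}\leq V(x)\leq\|x\|_{2}^{2},\qquad\nabla_{f}V(x)=2x^{\top}(-x)=-2V(x),\]
so (4) holds with \(c_{1}=c_{2}=1\) and \(\alpha=2\), consistent with the explicit solution \(x(t)=x(0)e^{-t}\).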
## III Joint Learning of Dynamics, Controller and Lyapunov Function
In this paper, we propose a novel architecture shown in Figure 1 that satisfies the Lyapunov stability conditions by construction. Given the difficulty, if not infeasibility, of synthesizing a Lyapunov function for a separately learned dynamics model, we instead consider jointly learning them. Specifically, we propose a parameterization of the dynamics model that incorporates a given parametric Lyapunov function and controller. This connection allows us to jointly optimize them by applying automatic differentiation from a single loss function to fit the dataset.
More specifically, we construct the dynamics model \(f^{*}\) by projecting a parametric _nominal_ model \(\hat{f}:\mathcal{X}\times\mathcal{U}\to\mathbb{R}^{n}\) according to a parametric Lyapunov function \(V:\mathcal{X}\to\mathbb{R}\) and
a parametric feedback controller \(u^{*}:\mathcal{X}\rightarrow\mathcal{U}\). This projection constrains the dynamics model \(f^{*}\) to satisfy the Lyapunov stability condition
\[\nabla_{f^{*}_{u^{*}}}V(x)=\nabla V(x)^{\top}f^{*}(x,u^{*}(x))\leq-\alpha V(x) \tag{5}\]
for all \(x\in\mathcal{X}\setminus\{0\}\) with a given \(\alpha>0\). The construction of the dynamics model is as follows.
\[f^{*}(x,u)\!=\!\begin{cases}\hat{f}(x,u)&\text{if }x\!=\!0\\ \mathsf{Proj}_{u^{*}(x),V(x),\nabla V(x)}\left(\hat{f}(x,\cdot)\right)(u)& \text{o.w.}\end{cases} \tag{6}\]
where
\[\mathsf{Proj}_{u^{*}(x),V(x),\nabla V(x)}\left(\hat{f}(x,\cdot)\right)\]
denotes the projection of the nominal model \(\hat{f}(x,\cdot)\) onto the set of vector fields satisfying the decrease condition (5) at \(u=u^{*}(x)\). [The closed-form expression (7) of this projection, and the parametric constructions of \(\hat{f}\) in (8), \(u^{*}\) in (9), and \(V\) in (10) referenced below, are garbled in the extracted source.]
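Since the closed form of the projection is not recoverable here, the following is only a minimal sketch of one way to enforce (5) by construction (in the spirit of the halfspace projection of [13], and not the paper's equation (7)): the nominal field \(\hat{f}(x,\cdot)\) is shifted along \(-\nabla V(x)\) by exactly the amount needed for the decrease condition to hold at \(u=u^{*}(x)\). All function names below are placeholders.

```python
import numpy as np

def project_dynamics(f_hat, u_star, V, grad_V, alpha):
    """Build f*(x, u) satisfying grad V(x)^T f*(x, u*(x)) <= -alpha V(x) by construction.

    Sketch only: the whole field f_hat(x, .) is shifted by a u-independent
    correction along -grad V(x), in the spirit of the halfspace projection of [13].
    """
    def f_star(x, u):
        x = np.asarray(x, dtype=float)
        if np.allclose(x, 0.0):
            return f_hat(x, u)                       # the x = 0 case in (6)
        g = grad_V(x)                                # gradient of the Lyapunov candidate
        violation = g @ f_hat(x, u_star(x)) + alpha * V(x)
        correction = max(violation, 0.0) * g / (g @ g + 1e-12)
        return f_hat(x, u) - correction              # decrease condition holds at u = u*(x)
    return f_star
```

With this construction, \(\nabla V(x)^{\top}f^{*}(x,u^{*}(x))\leq-\alpha V(x)\) holds for every \(x\neq 0\), regardless of how well \(\hat{f}\), \(u^{*}\), and \(V\) are trained.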
### _Loss Function_
We jointly learn the three components by minimizing the following loss function:
\[\mathcal{L}(\theta)=\frac{1}{N}\sum_{(x,u,\dot{x})\in\mathcal{D}}k(u,u^{*}(x))\|\dot{x}-f^{*}(x,u)\|^{2}+\lambda\|\theta\|_{2}^{2} \tag{13}\]
where \(\theta\), \(k(\cdot,\cdot)\), and \(\lambda\) denote the parameters of the neural networks, a kernel function, and the regularization coefficient, respectively. The kernel function \(k:\mathcal{U}\times\mathcal{U}\rightarrow\mathbb{R}\) attains a larger value as its two arguments get closer. By weighting the \(\ell^{2}\) difference, we encourage \(u^{*}\) to learn a policy that generates trajectories with smaller model error. Conversely, we also encourage \(f^{*}\) to learn better around feasible trajectories, i.e., \((x,u^{*}(x))\). In this work, we use the following kernel function:
\[k(u,u^{\prime})=1+\exp(-\beta\|u-u^{\prime}\|_{2}^{2}). \tag{14}\]
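As an added illustration of how (13) and (14) fit together (a sketch only, not the authors' training code; the parameter norm \(\|\theta\|_{2}^{2}\) is passed in precomputed because the parameterization is framework-dependent):

```python
import numpy as np

def kernel(u, u_prime, beta):
    """Kernel (14): takes values in (1, 2], larger when u and u' are closer."""
    d = np.atleast_1d(np.asarray(u, dtype=float) - np.asarray(u_prime, dtype=float))
    return 1.0 + np.exp(-beta * float(d @ d))

def coils_loss(dataset, f_star, u_star, theta_sq_norm, beta, lam):
    """Empirical objective (13): kernel-weighted squared model error plus L2 regularization.

    dataset: a list of (x, u, xdot) tuples; theta_sq_norm: precomputed ||theta||_2^2.
    """
    total = 0.0
    for x, u, xdot in dataset:
        w = kernel(u, u_star(x), beta)                       # up-weight points near u*(x)
        err = np.asarray(xdot, dtype=float) - np.asarray(f_star(x, u), dtype=float)
        total += w * float(err @ err)
    return total / len(dataset) + lam * theta_sq_norm
```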
## IV Stability Guarantees
In this section, we rigorously analyze the stability properties of the learned models. We first establish the inherent stability of the constructed model \(f^{*}\) with the feedback controller \(u^{*}\) (Theorem IV.1). Then, we connect the inherent stability to the behavior of the true system \(f\) depending on the learning performance (Theorem IV.2); see Appendix B-C for proofs.
**Theorem IV.1** (Inherent Stability).: _Consider a closed-loop system \(f^{*}_{u^{*}}\) for a projected dynamics \(f^{*}\) defined by (6) with \(\hat{f}\) in (8), \(u^{*}\) in (9), and \(V\) in (10). For any \(r_{2}\geq r_{1}>0\), let \(B_{r_{1},r_{2}}:=\{x\in\mathcal{X}:r_{1}\leq\|x\|_{2}\leq r_{2}\}\). Then, 1) \(f^{*}_{u^{*}}\) is exponentially stable at the origin w.r.t. \(B_{r_{1},r_{2}}\). 2) \(f^{*}_{u^{*}}\) is exponentially stable at the origin w.r.t. \(\mathcal{X}\) if there exists \(c>0\) such that \(V(x)\leq c\|x\|_{2}^{2}\) for all \(x\in\mathcal{X}\)._
CoILS inherently guarantees the exponential stability of the learned models, as proven in Theorem IV.1. Note that this property holds for any initialization of the models. Next, we further argue about its connection to the true dynamics. We can expect this stability property to transfer to the true dynamics when satisfactory learning performance is achieved. We elaborate on this relationship in the next theorem.
**Theorem IV.2**.: _Consider a dynamical system \(f\) in (1) and its learned model \(f^{*}\) in (6) with \(\hat{f}\) in (8), \(u^{*}\) in (9), and \(V\) in (10). Assume \(f\) and \(f^{*}\) are Lipschitz continuous with Lipschitz constants \(L_{f}\) and \(L_{f^{*}}\), respectively. Let \(\delta\) and \(e\) denote how densely and accurately, respectively, \(f^{*}\) is learned by defining_
\[\delta :=\sup_{x\in\mathcal{X}}\min_{(y,v)\in\mathcal{D}}\|(x,u^{*}(x)) -(y,v)\|_{2}, \tag{15}\] \[e :=\max_{(y,v)\in\mathcal{D}}\|f(y,v)-f^{*}(y,v)\|_{2}.\]
_For any \(r>0\), let the neighborhood of the origin \(N_{r}:=\{x\in\mathcal{X}:\|x\|_{2}<r\}\) and \(M_{r}:=\sup_{x\in\mathcal{X}\setminus N_{r}}\|\nabla V(x)\|_{2}\). Then, the closed-loop system \(f_{u^{*}}\) always arrives at \(N_{r}\), i.e., for any trajectory \(x(t)\) generated by \(f_{u^{*}}\) from \(x(0)\in\mathcal{X}\), there exists \(T\geq 0\) such that \(x(T)\in N_{r}\), if \(\delta\) and \(e\) are small enough such that_
\[(L_{f}+L_{f^{*}})\delta+e<\frac{\alpha\epsilon_{\rm pd}r^{2}}{M_{r}}. \tag{16}\]
## V Experiments
In this section, we demonstrate the effectiveness of CoLLS in stabilizing unknown nonlinear control systems. We first verify the stability properties of the projected models achieved by our proposed architecture (Section V-A). Then, we demonstrate the performance and accuracy of the learned controller and dynamics, respectively, in three different control problems (Section V-B-V-D).
For each scenario, we train the models with \(N=10^{5}\) data tuples, uniformly sampled over \(\mathcal{X}=\{x\in\mathbb{R}^{n}:x_{lb}\leq x\leq x_{ub}\}\) and \(\mathcal{U}=\{u\in\mathbb{R}^{m}:-u_{\rm lim}\leq u\leq u_{\rm lim}\}\). The models are constructed with neural networks as described in Section III. We use 3-layer fully connected neural networks (FCNs) for \(g_{f},g_{u}\), and \(g_{V}\). They have 100, 50, and 50 hidden neurons, respectively, in each hidden layer. For \(g_{V}\), we add a tanh activation multiplied by 10 at the output layer to limit the scale of the Lyapunov function. The models are trained using a mini-batch gradient descent optimizer with gradient clipping and a learning rate of 0.0001. The other hyperparameters used in the experiments are presented in Table I.
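To make this setup concrete, the following is a minimal sketch of the three backbones \(g_{f}\), \(g_{u}\), and \(g_{V}\) (an added illustration; the hidden activation, the exact depth implied by '3-layer', and the input/output dimensions are assumptions not fixed by the text above).

```python
import torch.nn as nn

class Scale(nn.Module):
    """Multiplies its input by a fixed constant (used as 10 * tanh for g_V)."""
    def __init__(self, factor):
        super().__init__()
        self.factor = factor

    def forward(self, x):
        return self.factor * x

def fcn(in_dim, hidden, out_dim, tanh_scale=None):
    """Fully connected network with hidden layers of width `hidden`.

    The ReLU hidden activation and the use of two hidden layers are assumptions;
    the text only specifies a '3-layer FCN' with the given widths.
    """
    layers = [nn.Linear(in_dim, hidden), nn.ReLU(),
              nn.Linear(hidden, hidden), nn.ReLU(),
              nn.Linear(hidden, out_dim)]
    if tanh_scale is not None:                 # g_V: bounded output, 10 * tanh(.)
        layers += [nn.Tanh(), Scale(tanh_scale)]
    return nn.Sequential(*layers)

n, m = 2, 1                                    # state / input dimensions in the experiments
g_f = fcn(n + m, 100, n)                       # backbone of the nominal dynamics
g_u = fcn(n, 50, m)                            # backbone of the controller
g_V = fcn(n, 50, 1, tanh_scale=10.0)           # backbone of the Lyapunov function
```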
### _Random Networks_
In this experiment, we investigate the efficacy of the projection with randomly initialized models. As established in Theorem IV.1, we verify that the projected model \(f^{*}\) in (6) possesses the inherent stability in closed-loop with the feedback controller \(u^{*}\), even for random initializations of the models. We initialize the neural networks for \(\mathcal{X}\in\mathbb{R}^{2}\) and \(\mathcal{U}\in\mathbb{R}\). The results are shown in Figure 2. The projected model \(f^{*}\) is stabilized at the origin by the controller \(u^{*}\), while the nominal model \(\hat{f}\) is not. As induced by (8), the nominal model \(\hat{f}\), as well as the projected model \(f^{*}\), attains zero value at the origin in closed-loop with \(u^{*}\). This ensures that the origin is an equilibrium point of the closed-loop systems. Interestingly, we observe that the candidate Lyapunov function \(V\) does not have any critical points away from the origin, even though we use a generic FCN for \(g_{V}\) in (10), as shown in the figure. This favorable behavior is observed in most cases.
### _Van der Pol Oscillator_
We test our proposed method on three different control problems. We start with stabilizing the Van der Pol oscillator, a system whose uncontrolled trajectories remain bounded near the origin: the autonomous Van der Pol oscillator is a well-known nonlinear system that exhibits stable oscillations in the form of a limit cycle. By adding a control input \(u\in\mathbb{R}\), we consider the true dynamics as
\[\ddot{z}=u-z+\mu(1-z^{2})\dot{z} \tag{17}\]
for state \(x=[z,\dot{z}]\in\mathbb{R}^{2}\) with the parameter \(\mu{=}1\). The dataset is sampled with \(x_{lb}{=}[-1.3,-1.3],x_{ub}{=}[1.3,1.3],u_{\lim}{=}5\). The results are shown in Figure 3. The closed-loop dynamics for the true system \(f_{u^{*}}\) and the learned system \(f_{u^{*}}^{*}\) show similar behaviors. Due to some model errors, the trajectories are not perfectly aligned, but the learned controller successfully stabilizes both true and learned systems to the origin.
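For reproducibility, the following sketch rolls out the controlled Van der Pol dynamics (17) with a simple forward-Euler integrator; the placeholder zero controller stands in for the learned \(u^{*}(x)\), which is not reproduced here.
```python
# Minimal sketch: forward-Euler rollout of the controlled Van der Pol
# oscillator (17) with mu = 1; the controller is a placeholder for u*(x).
import numpy as np

mu, u_lim, dt, steps = 1.0, 5.0, 0.01, 2000

def dynamics(x, u):
    z, zdot = x
    return np.array([zdot, u - z + mu * (1.0 - z**2) * zdot])

def controller(x):
    return 0.0  # placeholder; the experiment uses the learned feedback u*(x)

x = np.array([1.0, 0.0])
for _ in range(steps):
    u = float(np.clip(controller(x), -u_lim, u_lim))
    x = x + dt * dynamics(x, u)
print(x)  # with zero input the state stays on the limit cycle rather than reaching the origin
```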
### _Inverted Pendulum_
Next, we demonstrate CoILS in a more challenging control problem, stabilizing an inverted pendulum. The inverted pendulum, shown in Figure 4, easily falls off the origin without proper control due to gravity. For angular position \(\theta\) from the inverted position and its angular velocity \(\dot{\theta}\), we consider states \(x=[\theta,\dot{\theta}]\in\mathbb{R}^{2}\). By providing a torque \(u\in\mathbb{R}\) at the pivot as a control input, we consider the true dynamics as
\[\ddot{\theta}=\frac{mgl\sin\theta+u-b\dot{\theta}}{ml^{2}}. \tag{18}\]
We set the parameters \(m=0.15,g=9.81,l=0.5,b=0.1\) with \(x_{lb}=[-4,-4],x_{ub}=[4,4],u_{\lim}=5\). The results are shown in Figure 5. The overall behaviors of the closed-loop dynamics are similar for the true system \(f_{u^{*}}\) and the learned system \(f_{u^{*}}^{*}\). However, the trajectories exhibit quite dissimilar \(\ell^{2}\) norm evolution. This difference arises from model errors in the small region around \(\dot{\theta}=0\). The small magnitude of \(f_{u^{*}}\) in this region slows down the movement of the state even though the direction of the movement is similar for both systems. These model errors could be reduced by exploiting the physical relationship between the state elements, i.e., that the second element is the derivative of the first.
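The claim that the upright equilibrium is unstable without control can be checked directly from (18): linearizing the uncontrolled dynamics at \((\theta,\dot{\theta})=(0,0)\) with the parameters above yields one eigenvalue with positive real part, as in the short sketch below.
```python
# Minimal sketch: linearization of the uncontrolled pendulum (18) at the
# upright equilibrium; one positive eigenvalue confirms instability.
import numpy as np

m, g, l, b = 0.15, 9.81, 0.5, 0.1
A = np.array([[0.0, 1.0],
              [g / l, -b / (m * l**2)]])  # Jacobian of (18) with u = 0 at the origin
print(np.linalg.eigvals(A))  # approximately [3.3, -6.0]: the origin is unstable
```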
### _Bicycle Path Following_
The previous examples are both control-affine systems, while CoILS is applicable to general control systems without such structure. In this experiment, we evaluate CoILS for a control system in which the control input has a nonlinear effect. We consider the problem of controlling a constant-speed bicycle to follow a unit circle path as shown in Figure 4. We aim to drive the distance error \(d_{e}\) and angular error \(\theta_{e}\) to be zero by controlling the steering angle \(u\in\mathbb{R}\). For the state \(x=[d_{e},\theta_{e}]\in\mathbb{R}^{2}\), the dynamics are given as
\[\dot{d_{e}} =v\sin\theta_{e}, \tag{19}\] \[\dot{\theta_{e}} =\frac{v\tan u}{L}-\frac{v\cos\theta_{e}}{1-d_{e}}.\]
We set the parameters \(v=6,L=1\) with \(x_{lb}=[-0.8,-0.8]\), \(x_{ub}=[0.8,0.8]\), \(u_{\lim}=0.4\pi\). The results are shown in Figure 6. The closed-loop dynamics for the true system \(f_{u^{*}}\) and the learned system \(f_{u^{*}}^{*}\) are similar to each other. Moreover, the learned controller generates similar trajectories for both systems and successfully stabilizes both of them to the origin.
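Because the emphasis of this example is that (19) is not control-affine, the sketch below simply implements the error dynamics with the quoted parameters, making the nonlinear dependence on the steering input through \(\tan u\) explicit.
```python
# Minimal sketch of the bicycle path-following error dynamics (19);
# note the non-affine dependence on the steering input u through tan(u).
import numpy as np

v, L = 6.0, 1.0
u_lim = 0.4 * np.pi  # steering limit, safely below pi/2 so tan(u) stays finite

def error_dynamics(x, u):
    d_e, theta_e = x
    u = float(np.clip(u, -u_lim, u_lim))
    d_e_dot = v * np.sin(theta_e)
    theta_e_dot = v * np.tan(u) / L - v * np.cos(theta_e) / (1.0 - d_e)
    return np.array([d_e_dot, theta_e_dot])

print(error_dynamics(np.array([0.1, 0.2]), 0.3))
```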
## VI Conclusion
We presented a new data-driven method CoILS to stabilize unknown controlled dynamical systems. We jointly learn the dynamics model and a feedback controller with a Lyapunov
Fig. 3: Comparison of simulation and the learned model for the Van der Pol oscillator. Top-left: Closed-loop dynamics for the true model \(f\) with the learned controller \(u^{*}\). Top-right: Closed-loop dynamics for the learned model \(f^{*}\) with the learned controller \(u^{*}\). Bottom-left: Comparison of 5 randomly initialized trajectories for the true and learned system. Bottom-right: Plot of the trajectories in the state space with the contour map of the learned Lyapunov function.
Fig. 2: Closed-loop dynamics and Lyapunov function for randomly initialized models. Top-left: Nominal model \(\dot{f}\) with controller \(u^{*}\). Top-right: Projected model \(f^{*}\) with controller \(u^{*}\). Bottom-left: Neural network \(g_{V}\). Bottom-right: Candidate Lyapunov function \(V\).
function, such that the projected model is _guaranteed by construction_ to be stabilized in closed-loop by the learned controller. We further showed that, under certain assumptions on the fidelity of our learned dynamics model, the learned controller is also guaranteed to stabilize the true dynamics. We demonstrated the performance of our method in the simulation of a number of controlled nonlinear dynamical systems.
There are several interesting avenues for future work. First, it is feasible to explore different loss functions, for instance, to include control-oriented metrics. This work focused on learning the dynamics model without explicitly optimizing the performance of the controller. Since there exist various controllers that can stabilize the system in practice, guiding the learning to find one with better performance would be an interesting direction. Also, exploring different hyperparameters for our method would be beneficial to further optimize our approach. Furthermore, one can think about extending our approach to partially observable systems. It would be challenging but worthwhile to find an output-feedback controller that stabilizes unknown dynamical systems.
|
2301.02356 | Graphical quantum Clifford-encoder compilers from the ZX calculus | We present a quantum compilation algorithm that maps Clifford encoders,
encoding maps for stabilizer quantum codes, to a unique graphical
representation in the ZX calculus. Specifically, we develop a canonical form in
the ZX calculus and prove canonicity as well as efficient reducibility of any
Clifford encoder into the canonical form. The diagrams produced by our compiler
visualize information propagation and entanglement structure of the encoder,
revealing properties that may be obscured in the circuit or stabilizer-tableau
representation. Consequently, our canonical representation may be an
informative technique for the design of new stabilizer quantum codes via graph
theory analysis. | Andrey Boris Khesin, Jonathan Z. Lu, Peter W. Shor | 2023-01-06T01:41:06Z | http://arxiv.org/abs/2301.02356v2 | # Graphical quantum Clifford-encoder compilers from the ZX calculus
###### Abstract
We present a quantum compilation algorithm that maps Clifford encoders, an equivalence class of quantum circuits that arise universally in quantum error correction, into a representation in the ZX calculus. In particular, we develop a canonical form in the ZX calculus and prove canonicity as well as efficient reducibility of any Clifford encoder into the canonical form. The diagrams produced by our compiler explicitly visualize information propagation and entanglement structure of the encoder, revealing properties that may be obscured in the circuit or stabilizer-tableau representation.
Quantum algorithms have been shown to solve a broad range of computational problems ranging from search to physical simulation [1; 2; 3; 4; 5; 6; 7; 8]. Recent progress in the experimental realization of scalable quantum computers has further generated excitement about the potential of quantum algorithms in practice [9; 10; 11; 12; 13]. Often, however, a quantum algorithm may not be in a representation that is most conducive to how one wants to study it. For example, an algorithm may be expressed in terms of higher-level unitary operations (e.g. a quantum Fourier transform operator, continuous rotation gates, etc.), while an actual quantum computer can only implement a finite (albeit hopefully universal) set of gates. This is akin to the classical motivation for compilers, which map high-level programming languages in which algorithms are expressed into a universal set of boolean-circuit gates. A similar compiler can be devised for the quantum realm, using the famous Solovay-Kitaev theorem, which has recently been improved to rely on fewer assumptions about the finite gate set itself [14]. Beyond this fundamental result, there has been a recent flurry of results related to quantum compilation from high-level operations to a finite gate set [15; 16; 17; 18; 19]. The finite gate set is typically chosen to be the Clifford set (Hadamard, phase, and controlled-NOT gates) plus the \(T\) gate, where \(T=\ket{0}\bra{0}+e^{i\pi/4}\ket{1}\bra{1}\), which is universal [8; 19].
Yet the idea of compilation can extend far beyond the realm of high-to-low-level transformations of quantum gates; any undesirable property of an algorithm's representation can motivate the construction of a compiler that maps the algorithm into a more useful representation for the task at hand. We are specifically motivated by the fact that the circuit representation of quantum operations can often blur the structure of the output's entanglement structure (in the intuitive sense of understanding how the circuit entangles the input qubits just by looking at it). The circuit diagram may also obscure the structure of information propagation, i.e. which qubits affect the values of which other qubits.
In this letter, we take a first step towards a compiler that resolves such issues by producing a representation that explicitly illustrates information propagation and entanglement structure. Specifically, we consider a restricted set of quantum operations--Clifford operations--and map them into visual graph diagrams that we design to have the above properties. Although Clifford operations are not universal, they are used widely in studies of fault-tolerant quantum computation [8; 20] and quantum coding theory [21; 22; 23; 24; 25; 8; 21; 25].
_Building Blocks --_ Our main tool is the ZX-calculus, a graphical language for vectors that has become of great interest in quantum information research [26; 27; 28; 29; 30]. The ZX-calculus produces visual diagrams that represent quantum states, circuits, and more. As with any formal logical system, the ZX calculus has a set of rules which may be iteratively applied to transform diagrams into equivalent diagrams. These rules are representations of identities in quantum circuits. The ZX-calculus has become of more interest than ever in fault-tolerant quantum computation and quantum compiler theory because it can explicitly visualize properties of circuits and entanglement in an intuitive manner. It has recently been applied to a host of quantum computation problems, including lattice surgery [31] and quantum optimization [32]. Importantly, the ZX-calculus is, for stabilizer tableaus, complete (equalities of tableaus can be derived from corresponding ZX diagrams), sound (vice versa), and universal (every quantum operation can be expressed in the ZX-calculus) [26; 33].
Consequently, there is great potential for the ZX-calculus to be not simply a tool for quantum compiler theory, but a quantum representation itself to which circuits and, in the stabilizer formalism of quantum error correction, stabilizer tableaus, can be compiled. A requirement for any well-defined
compiler whose output is a ZX-calculus diagram is uniqueness--any two equivalent input representations must map to the same ZX-calculus diagram. Such a diagram is known as a _ZX canonical form_ (ZXCF). Much room for exploration remains for the construction of such forms. Recently, Hu and Khesin [34] investigated the idea of ZXCFs on _states_, and showed the following.
**Theorem A** (Hu and Khesin [34]).: _There exists a ZXCF for stabilizer quantum states, which we call HK form._
Their analysis motivates our construction of a ZXCF for _quantum circuits_ (though we restrict ourselves to Clifford circuits). More precisely, we compile _Clifford encoders_, which are equivalence classes of circuits.
**Definition B**.: Two circuits are equivalent Clifford encoders if they have the same image over all possible input states. That is, two circuits \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are equivalent encoders if there exists a unitary \(U\) such that \(\mathcal{C}_{1}=\mathcal{C}_{2}U\) or \(\mathcal{C}_{2}=\mathcal{C}_{1}U\).
An encoder can always be--and is most naturally--represented by a Clifford circuit (as opposed to a non-Clifford circuit). Encoders moreover have the same expressive power as incomplete stabilizer tableaus--stabilizer matrices of size \(k\times n\) that do not fully specify a state. Incomplete stabilizer tableaus are the crux of the Gottesman-Knill theorem, which proves the efficient classical simulatability of Clifford circuits [35] and are also used to specify encodings (hence the name Clifford encoders) of the logical qubits in quantum error correction. In fact, incomplete stabilizer tableaus are themselves Clifford encoders, and are used widely in error-correcting protocols [24; 25; 36]. As such, we will consider these tableaus to be a possible form of the compiler input, in addition to encoders represented as a Clifford circuit.
We will first construct a ZXCF that illustrates information propagation and entanglement. We will then describe the full compilation algorithm and show that it is efficient. Lastly, we show that the compiler map is well-defined by proving the canonicity of the ZXCF.
_Canonical Form Construction_ -- We follow the standard notation of ZX-calculus graph diagrammatics, specified in Backens [33]; we refer the reader there for the basics of the ZX calculus and transformation rules. Green nodes are associated with \(Z\) operators, and red with \(X\). Each node may be associated with a local Clifford operator (which may be expressed as a phase which is a multiple of \(\pi/2\)), and each edge may have a Hadamard gate on it. We color an edge blue if it has a Hadamard gate on it (some notations draw a box on the edge instead). The circuit takes \(n-k\) input qubits to \(n\) output qubits, for \(k\geq 0\). We define our ZXCF in this paper to be in _encoder-respecting form_ if the diagram is structured as follows.
**Definition C**.: An encoder-respecting form \(\mathcal{D}\) has only green nodes, and is structured as a semi-bipartite graph (input-associated nodes and output-associated nodes). The input cluster \(\mathcal{I}\) has \(n-k\) nodes associated with the \(n-k\) input qubits of the corresponding encoder, and the output cluster \(\mathcal{O}\) has \(n\) nodes. Each input node has a free (not connected to any other nodes) edge--the input edge. Similarly, each output node has a free output edge. The output nodes are numbered from \(1\) to \(n\)--in order from left to right on an incomplete stabilizer tableau or top to bottom on Clifford circuit output wires. For convenience, we refer to the node \(v\) connected to an output edge numbered \(i\) as the output node numbered \(i\).
The design of the encoder-respecting form graphically illustrates how information propagates from input to output (which edges connect \(\mathcal{I}\) to \(\mathcal{O}\)) as well as the entanglement structure (which edges connect \(\mathcal{O}\) to \(\mathcal{O}\)) of the underlying encoder. Note that the structure of \(\mathcal{D}\) gives rise to a natural binary "partial adjacency matrix" \(M_{\mathcal{D}}\) of size \((n-k)\times n\), which describes the edges between \(\mathcal{I}\) and \(\mathcal{O}\) like a usual graph adjacency matrix representation.
Given a good structure, all that remains in a compiler is to ensure a canonical form, so that the map from encoder to ZX diagrams is well-defined. We constrain an encoder-respecting form into a ZXCF via four additional rules.
1. _Edge Rule_: The ZXCF must have exactly one \(Z\) node per free edge and every internal edge in our ZXCF must have a Hadamard gate on it.
2. _Hadamard Rule_: No output node \(v\) may both have a Hadamard gate on its output edge and have edges connecting \(v\) to lower-numbered nodes or input nodes.
3. _RREF Rule_: \(M_{\mathcal{D}}\) must be in reduced row-echelon form (RREF).
4. _Clifford Rule_: Let \(\mathcal{P}\subseteq\mathcal{O}\) be the nodes associated with the pivot columns of the RREF matrix \(M_{\mathcal{D}}\), so that \(|\mathcal{P}|=n-k\). There can be no local Clifford operations \(\{I,S,Z,SZ,H,HZ\}\)
on input or pivot nodes, or their free edges. There can be no input-input edges or pivot-pivot edges.
To provide some intuition on these 4 rules, Fig. 1 depicts a generic example of some possible violations to the rules.
In the graphical representation of the ZXCF, pivot nodes correspond to a subset of nodes in \(\mathcal{O}\) that connect to exactly one input; the subset is such that \(\mathcal{P}\) is in one-to-one correspondence with \(\mathcal{I}\). The Clifford rule extends the semi-bipartite constraint on edges by disallowing connections between pivots.
These four rules are sufficient to produce a ZXCF; we will show the following.
**Theorem D**.: _Any Clifford ZX encoder has a unique equivalent ZXCF satisfying the Edge, Hadamard, RREF, and Clifford rules. There is an efficient algorithm to canonicalize the ZX encoder._
We will first describe the transform to a ZXCF. First, we will apply the following lemma.
**Lemma E**.: _There exists an efficient transformation of a ZX encoder diagram with \(n-k\) input edges and \(n\) output edges into a corresponding ZX presentation of a stabilizer state in HK form, with \(n+(n-k)=2n-k\) output edges._
The proof of this lemma is relegated to the Appendix.
Application of Lemma E results in a HK form, which resembles the diagrammatics in Backens [33]. In particular, there is one \(Z\) node per vertex of the graph in the HK form, with any internal edge having a Hadamard gate on it. Free edges can only have Hadamard gates on them if their phase is a multiple of \(\pi\), corresponding to the local Clifford gates \(H\) and \(HZ\) in HK form, and if the associated node is not connected to any lower-numbered nodes. Nodes whose free edges have no Hadamard gate are free to have any multiple of \(\frac{\pi}{2}\) as a phase, corresponding to local Clifford operations \(I\), \(S\), \(Z\), or \(SZ\), respectively.
We are now equipped with a HK ZX diagram that represents a state. This diagram has only output edges. To return it into an encoder diagram, we turn the appropriate \(n-k\) output edges back into input edges. In the circuit representation, this is equivalent to turning bra's into ket's and vice versa to map between an operator and a state. For example, \(\ket{00}\bra{1}-\ket{11}\bra{0}\leftrightarrow\ket{001}-\ket{110}\). This operation gives us an equivalent encoder diagram in HK form.
Note that HK form by definition is in encoder-respecting form. It also satisfies the Edge rule and an analogous Hadamard rule--there are no input edges in HK form so the analogous rule is enforced only on lower-numbered nodes, but if we number the nodes from the beginning such that the input nodes are lower-numbered than all output nodes, then the transformed encoder in HK form will satisfy our ZXCF Hadamard rule. So, all that remains is to simplify the diagram to obey the RREF and Clifford rules. The former is given by the following theorem, which is proven in the Appendix.
**Theorem F**.: _An encoder diagram in HK form can be efficiently transformed to satisfy the RREF rule, while continuing to satisfy the Edge and Hadamard rules._
The only remaining task is to enforce the Clifford rule. First, if there are any edges between input nodes (those in \(\mathcal{I}\)), we can simply remove them. (They correspond to controlled-\(Z\) operations, and we can take off any unitary operation on the input by the encoder definition.) The same goes for local Cliffords on \(\mathcal{I}\).
All that is left to manipulate are the nodes in \(\mathcal{P}\). For any pivot node with a non-zero phase, we use Eq. (9) in Hu and Khesin [34] and apply it to the input node \(v_{\text{in}}\) associated with that pivot node. As shown there, this operation identically applies a local complementation about the input node, which notably does not change the entries of \(M_{\mathcal{D}}\). However, this operation also increases the phase of each neighbor of \(v_{\text{in}}\) by \(\frac{\pi}{2}\) (due to multiplication by \(S\)), so we repeat this process until the phase vanishes.
Lastly, if there are any edges between pivots \(p_{1}\) and \(p_{2}\), we can remove them by applying Eq. (10) from Hu and Khesin [34] to their pair of associated input nodes \(v_{1}\) and \(v_{2}\). This swaps the sets of neighbours of \(v_{1}\) and \(v_{2}\), denoted \(N_{1}\) and \(N_{2}\) (which we could always swap back at will). This will also toggle (i.e. if there were an edge, now there is not,
Figure 1: Example of an encoder-respecting form and some ways it might violate the 4 rules.
and vice versa) any edge connected to \(N_{1}\times N_{2}\), which will include exactly the one pivot-pivot edge we wish to flip. (We define an edge between a node \(v^{*}\in N_{1}\cap N_{2}\) and itself to be a \(Z\) gate, which adds a phase of \(\pi\) to \(v^{*}\).) Note that this operation does not violate the RREF rule because once this process is complete we can simply swap the neighbors back (without any toggling) via a row operation. It also does not violate the Hadamard rule as there cannot be Hadamard gates on the output edges of nodes in either \(N_{1}\) or \(N_{2}\); they were removed earlier via Eq. (10) of Hu and Khesin [34]. We repeat until no pivot-pivot edges are found.
With that step, the transformation to ZXCF is complete. Although this process may seem lengthy, all of these steps can be done systematically in an efficient manner, without having to go back to fix earlier rules. While the algorithm as stated is polynomial time, we believe it is possible to add optimizations that further improve the time complexity for practical implementation.
_Compilation Process_ -- With the canonical form in hand, we proceed to describe a method for compiling down the three representations we consider in this letter (ZX encoder diagram, circuit, and tableau) into a ZXCF.
In the previous sections we showed how to compile a ZX encoder diagram into a ZXCF. Moreover, in the Appendix proof of Lemma E, we showed that this compilation is actually done by first transforming into a circuit representation, so that step is covered as well. All that remains is the incomplete stabilizer tableau.
The most direct method of compilation is to transform the tableau into a Clifford circuit representation, and then apply the above. We will construct our circuit in reverse, going from outputs to inputs. If our tableau has \(n\) columns and \(k\) rows, we begin by drawing \(n\) output wires. We then apply the following procedure for each row of the tableau. Apply a Clifford operation \(U\) that turns the first stabilizer into \(Z_{1}=Z\otimes I\otimes\cdots\otimes I\), using the technique of the Gottesman-Knill theorem.
We multiply the remaining rows by the first until the entire first column of the tableau has only \(I\), with the exception of the first row. We then post-select on the \(+1\) result of a computational basis measurement by applying \(\langle 0|\). When reading the circuit in the forward direction, this is equivalent to initializing a work qubit in the \(|0\rangle\) state. The effect of \(U\) and the measurement is equivalent to applying \(\frac{I+P}{2}\), where \(P\) is the first row of the stabilizer tableau. This is equivalent to post-selecting on the \(+1\) measurement result of that stabilizer. Having measured the qubit, we remove its corresponding row and column from the tableau, and repeat on a tableau of \(n-1\) qubits and \(k-1\) rows, until there are no rows left, at which point the remaining \(n-k\) wires that have not been capped by measurements become our inputs.
_Proof of Canonicity_ -- We next prove that our proposed ZXCF is indeed canonical. In other words, two equivalent stabilizer tableaus--generators of the same subspace of the \(n\)-qubit Hilbert space--will map to the same ZXCF. We proceed by a counting argument. Since any incomplete stabilizer tableau can be turned into a ZXCF, it suffices to prove that there are an equal number of incomplete stabilizer tableaus as there are possible ZXCF diagrams.
**Lemma G**.: _The number of stabilizer tableaus on \(n\) qubits with \(k\) rows is equal to the number of ZXCF diagrams with \(n\) outputs and \(n-k\) inputs._
First, the number of incomplete stabilizer tableaus with \(k\) stabilizers on \(n\) qubits is
\[\frac{\prod\limits_{i=1}^{k}2\cdot 4^{n}/2^{i-1}-2\cdot 2^{i-1}}{\prod\limits_{i =1}^{k}2^{k}-2^{i-1}}=\prod\limits_{i=1}^{k}\frac{2^{2n-i+2}-2^{i}}{2^{k}-2^{ i-1}}. \tag{1}\]
For each row \(i\), there are \(2\cdot 4^{n}\) possible Pauli strings (including the sign), but the requirement that they commute with previous rows divides the count by \(2^{i-1}\). Of these \(2\cdot 4^{n}/2^{i-1}\) valid strings, \(2^{i-1}\) strings are linear combinations of previous strings, with a factor of \(2\) for strings that differ by a sign from previous strings.
We have overcounted, however, since any two tableaus that generate the same subspace are equivalent. Thus, we need to divide by the number of choices of \(k\) generators for a given subspace of characteristic \(2\), \(\prod\limits_{i=1}^{k}2^{k}-2^{i-1}\).
Next, the number of ZXCF diagrams for the same \(n\) and \(k\) is given by the following function evaluated at \(p=o=0\).
\[f(n,k,p,o)=\begin{cases}1&\text{if $n=k=0$}\\ A_{n,k,p,o}+B_{n,k,p,o}&\text{else}\end{cases} \tag{2}\]
where
\[A_{n,k,p,o}=2^{o}f(n-1,k,p+1,o) \tag{3}\]
if \(n\neq k\) and \(A_{n,k,p,o}=0\) if \(n=k\) and where
\[B_{n,k,p,o}=(2^{2p+o+2}+2)f(n-1,k-1,p,o+1) \tag{4}\]
if \(k\neq 0\) and \(B_{n,k,p,o}=0\) if \(k=0\).
This function is computed with a base case of an empty tableau when \(n=k=0\). To derive \(f\), imagine that, starting with two empty bins, we must assign the \(n-k\) output nodes to be pivots (case \(A\)) or non-pivots (case \(B\)), where pivots need to be matched with input nodes. The current number of pivots is tracked by \(p\) and the number of non-pivot outputs is tracked by \(o\).
Suppose we want the next output node to be a pivot (in \(\mathcal{P}\)). Since there is a one-to-one correspondence between pivots and inputs, we can add pivots only if \(n>k\). The matching between pivots and input nodes is fully constrained by the RREF rule, which sorts the inputs and pivots together. The Clifford rule says that no pivot nodes may have local Clifford operations, so we just need to choose the edges connecting them to nodes we have already assigned. Since there are no pivot-pivot edges and the pivot connects to only one input, we have exactly \(2^{o}\) possibilities. Having made an assignment, \(p\) increases by \(1\), and \(n\) decreases by \(1\) (both input and output are decremented since the pivot matches with an input).
Suppose instead we want the next output node to be a non-pivot (in \(\mathcal{O}\)). Then we have to choose the local Clifford operation as well as its edges to previously assigned vertices. If we choose to connect the node to none of the \(p\) assigned inputs, \(p\) assigned pivots, or \(o\) assigned vertices, then we are allowed to place any of the \(6\) local Cliffords on the node. If any of those edges are present, however, we cannot apply a Hadamard to the output edge due to the Hadamard rule. Hence, \(4\) choices remain for the local Clifford operation. This works out to a total of \(4\cdot(2^{2p+o}-1)+6=2^{2p+o+2}+2\) possibilities. To finish, we decrement the number of output qubits without changing the number of inputs.
We can check that the recursion is solved by
\[f(n,k,p,o)=\\ 2^{o(n-k)}\prod_{i=1}^{k}\frac{(2^{n+1}-2^{i})(2^{n-i+1+2p+o}+1 )}{2^{k}-2^{i-1}}. \tag{5}\]
One can verify by standard induction that \(f(n,k,0,0)\) gives the same expression as Eq. (1), proving that ZXCF is indeed canonical.
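The counting argument is straightforward to verify numerically. The sketch below implements the tableau count (1), the recursion (2)-(4), and the closed form (5), and checks that they agree for small \(n\) and \(k\) (for instance, \(f(1,1,0,0)=6\), the number of single-qubit stabilizer states, and \(f(2,2,0,0)=60\)).
```python
# Minimal sketch: numerically compare Eq. (1), the recursion (2)-(4),
# and the closed form (5) for small n and k.
from functools import lru_cache

def tableau_count(n, k):                      # Eq. (1)
    num = den = 1
    for i in range(1, k + 1):
        num *= 2**(2*n - i + 2) - 2**i
        den *= 2**k - 2**(i - 1)
    return num // den

@lru_cache(maxsize=None)
def f(n, k, p, o):                            # Eqs. (2)-(4)
    if n == 0 and k == 0:
        return 1
    A = 2**o * f(n - 1, k, p + 1, o) if n != k else 0
    B = (2**(2*p + o + 2) + 2) * f(n - 1, k - 1, p, o + 1) if k != 0 else 0
    return A + B

def closed_form(n, k, p, o):                  # Eq. (5)
    num, den = 2**(o*(n - k)), 1
    for i in range(1, k + 1):
        num *= (2**(n + 1) - 2**i) * (2**(n - i + 1 + 2*p + o) + 1)
        den *= 2**k - 2**(i - 1)
    return num // den

for n in range(1, 6):
    for k in range(n + 1):
        assert f(n, k, 0, 0) == tableau_count(n, k) == closed_form(n, k, 0, 0)
print(f(1, 1, 0, 0), f(2, 2, 0, 0))           # 6 60
```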
_Application to Quantum Codes_ -- Consider, as an example of a ZXCF, the nine-qubit error-correcting code due to Shor, which uses \(9\) physical qubits to encode \(1\) logical qubit. The Shor code may be represented by the stabilizer tableau
\[\begin{bmatrix}Z&Z&I&I&I&I&I&I&I\\ Z&I&Z&I&I&I&I&I&I\\ I&I&I&Z&Z&I&I&I&I\\ I&I&I&Z&I&Z&I&I&I\\ I&I&I&I&I&I&Z&Z&I\\ I&I&I&I&I&I&Z&I&Z\\ X&X&X&X&X&X&I&I&I\\ X&X&X&I&I&I&X&X&X\end{bmatrix}. \tag{6}\]
Application of our compilation method yields the ZXCF given in Fig. 2(a). There are three identical sectors of the outputs, two with Hadamarded outputs and one without. This resembles our expectations from examination of the un-normalized qubit representation of the Shor code, \((|000\rangle\pm|111\rangle)^{\otimes 3}\). We may compare this to a relatively simple but un-canonical form in Fig. 2(b) which one might heuristically construct. We created Fig. 2(b) by using a ZX rule wherein a \(Z\) node with two Hadamarded edges can be replaced with a single non-Hadamarded edge. This diagram has some similar visualization--and is in some senses simpler--but has no obvious association of nodes with the qubits, which blurs interpretations about information propagation or entanglement structure.
There is an additional simplification we can make to our ZXCF when the encoder in question is for error correction. In particular, when one transmits
Figure 2: A ZXCF of the nine-qubit code (a) versus a simpler (but not canonical) form (b).
encoded qubits, the set of errors that can be corrected does not change if we apply local Clifford operations to the encoding, since the space of \(p\)-qubit Pauli errors is preserved. As a result, we can disregard the Hadamarded output edges and the phases on the output nodes of error-correcting encoders. Any code thus has an equivalent representation up to local Cliffords whose ZXCF consists of nothing more than a semi-bipartite graph. We have therefore proven:
**Theorem H**.: _For any stabilizer code, there exists at least one equivalent code that has a presentation as a semi-bipartite graph with no local Clifford operations._
Further examples are provided in the Appendix.
_Conclusion and Outlook_ -- In this letter, we introduced a quantum compiler mapping Clifford encoders (or equivalently incomplete stabilizer tableaus) into a canonical ZX-calculus form. The representation of our canonical form as a semi-bipartite graph between input and output qubits explicitly visualizes information propagation from input to output, as well as entanglement structure of the output. We then proved the algorithmic efficiency and correctness of the compiler.
Our compilation technique takes a first step towards the use of ZX-calculi as representations for a useful (in terms of information and entanglement) visualization of quantum operations. Whether the ZXCF technique can be generalized beyond Clifford gates--as our construction relies crucially on them--is an important next step in the exploration of ZX-based quantum compilers. An interesting direction of further research is the possibility of applying our ZXCF to the decomposition of magic states into linear combinations of stabilizer states. Another possible direction is the analysis of the correspondence between the quality of a code and its properties in the ZXCF representation. A better understanding of such a correspondence may inform better ways to design new quantum codes.
_Acknowledgments_ -- ABK was supported by the National Science Foundation (NSF) under Grant No. CCF-1729369. PWS was supported by the NSF under Grant No. CCF-1729369, by the NSF Science and Technology Center for Science of Information under Grant No. CCF-0939370, by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704., and by NTT Research Award AGMT DTD 9.24.20.
|
2303.03631 | Empirical neutron star mass formula based on experimental observables | We derive the empirical formulae expressing the mass and gravitational
redshift of a neutron star, whose central density is less than threefold the
nuclear saturation density, as a function of the neutron-skin thickness or the
dipole polarizability of $ {}^{208} \mathrm{Pb} $ or $ {}^{132} \mathrm{Sn} $,
especially focusing on the 8 Skyrme-type effective interactions. The neutron
star mass and its gravitational redshift can be estimated within $ \approx 10
\, \% $ errors with our formulae, while the neutron star radius is also
expected within a few $\%$ errors by combining the derived formulae. Owing to
the resultant empirical formulae, we find that the neutron star mass and radius
are more sensitive to the neutron-skin thickness of $ {}^{208} \mathrm{Pb} $
than the dipole polarizability of $ {}^{208} \mathrm{Pb} $ or $ {}^{132}
\mathrm{Sn} $. | Hajime Sotani, Tomoya Naito | 2023-03-07T03:30:19Z | http://arxiv.org/abs/2303.03631v1 | # Empirical neutron star mass formula based on experimental observables
###### Abstract
We derive the empirical formulae expressing the mass and gravitational redshift of a neutron star, whose central density is less than threefold the nuclear saturation density, as a function of the neutron-skin thickness or the dipole polarizability of \({}^{208}\mathrm{Pb}\) or \({}^{132}\mathrm{Sn}\), especially focusing on the 8 Skyrme-type effective interactions. The neutron star mass and its gravitational redshift can be estimated within \(\approx 10\,\%\) errors with our formulae, while the neutron star radius is also expected within a few \(\%\) errors by combining the derived formulae. Owing to the resultant empirical formulae, we find that the neutron star mass and radius are more sensitive to the neutron-skin thickness of \({}^{208}\mathrm{Pb}\) than the dipole polarizability of \({}^{208}\mathrm{Pb}\) or \({}^{132}\mathrm{Sn}\).
pacs: 04.40.Dg, 26.60.+c, 21.65.Ef
## I Introduction
The death of a massive star is perhaps the brightest moment in its life; this event is known as a supernova explosion. Through such an explosion, a neutron star comes into the world as a massive remnant. The neutron star realizes extreme states that are almost impossible to reproduce on Earth [1]. The density inside the star easily exceeds the standard nuclear density, \(\rho_{0}\), and may reach several times \(\rho_{0}\), if not more, depending on the equation of state (EOS) for neutron star matter. The magnetic and gravitational fields inside and around the star also become much stronger than those observed in our solar system. So, observations of neutron stars and/or their phenomena can conversely tell us about such extreme states. In particular, observational constraints on the neutron star mass and radius are directly associated with the validity of the EOSs. For instance, the discoveries of massive neutron stars could exclude some soft EOSs, whose expected maximum mass is less than the observed mass [2; 3; 4; 5]. The gravitational wave observations from the binary neutron star mergers, GW170817 and GW190425, enable us to estimate the mass and radius of neutron stars before merger [6; 7]. The observation of GW170817 also gives us information on the dimensionless tidal deformability, which leads to a constraint on the radius of a \(1.4M_{\odot}\) neutron star, i.e., \(R_{1.4}\lesssim 13.6\,\mathrm{km}\)[8]. Furthermore, owing to the relativistic effect, i.e., the light bending due to the strong gravitational field induced by the neutron star, one can primarily constrain the neutron star compactness (the ratio of the mass to radius) by carefully observing pulsar light curves (e.g., [9; 10; 11; 12; 13; 14; 15]). In fact, the x-ray observations with the Neutron star Interior Composition ExploreR (NICER) succeeded in constraining the mass and radius of PSR J0030+0451 [16; 17] and PSR J0740+6620 [18; 19]. In general, astronomical observations tend to constrain the properties of neutron stars in a higher-density region.
On the other hand, nuclear experiments performed on Earth can provide information about the relatively lower-density region. Since any EOS model can be characterized by its own nuclear EOS parameters, constraints on such parameters from terrestrial experiments partially restrict the EOS for neutron star matter, which enables us to narrow the allowed region of neutron star mass and radius. For example, the observation of the ratio of positively charged pions to negatively charged ones in the decay process of \(\Delta\) isobars into nucleons with pions, using the isotope beams provided by the Radioactive Isotope Beam Factory (RIBF) at RIKEN in Japan, constrains the density dependence of the symmetry energy, \(L\), i.e., \(42\leq L\leq 117\,\mathrm{MeV}\) (S\(\pi\)RIT, e.g., [20]). The estimation of the neutron-skin thickness of \({}^{208}\mathrm{Pb}\), using the parity-violating asymmetry of the elastic electron scattering cross section measured at the Thomas Jefferson National Accelerator Facility in Virginia, also constrains \(L\) as \(L=106\pm 37\,\mathrm{MeV}\) (PREX-II [21]). We note that these two constraints on \(L\) are relatively large compared to the fiducial value of \(L\), i.e., \(L\simeq 60\pm 20\,\mathrm{MeV}\)[22; 23]. By using these constraints on the saturation parameters, one can discuss the expected neutron star mass and radius through the empirical formulae expressing the neutron star mass and its gravitational redshift as a function of a suitable combination of nuclear saturation parameters [24; 25] or those for asymmetric nuclear matter [26]. In this way, astronomical observations and experimental constraints are complementary approaches for fixing the EOS of neutron star matter [27].
Since the nuclear saturation parameters cannot be directly measured, one has to evaluate them using the experimental data strongly associated with the saturation parameters. Up to now, several strong correlations between the experimental observables
and saturation parameters have been found theoretically, which helps us estimate the saturation parameters from experiments. However, these correlations are still incomplete. Due to the theoretical uncertainties in the correlations, it is not always true that the constraints on the saturation parameters become more severe, even if the experimental accuracy is improved [28; 29; 30; 31; 32; 33]. For instance, the estimation of the \(L\) parameter using the parity asymmetry of the polarized electron scattering cross section of \({}^{208}\mathrm{Pb}\) is strongly model dependent; even based on the same experimental data, different groups estimate different values of the \(L\) parameter [21; 34]. So, in this study, we consider deriving empirical formulae expressing the neutron star mass and its gravitational redshift directly in terms of the experimental observables, instead of the nuclear saturation parameters as in Refs. [24; 25; 26]. For this purpose, we adopt the 8 EOS models with Skyrme energy density functionals, 5 of which are SLy models (see the next section for details). Thus, our discussion is confined to a relatively small parameter range, i.e., \(40\lesssim L\lesssim 80\,\mathrm{MeV}\). In addition, the empirical relations will be derived by using the neutron star models whose central density is less than threefold the saturation density. Since the corresponding neutron star masses are at most \(\approx 1M_{\odot}\), one may not be able to directly discuss the real observations of neutron star mass with our empirical relations.
This manuscript is organized as follows. In Sec. II, we briefly mention the EOSs considered in this study and the experimental observables estimated from each EOS model. In Sec. III, we derive the empirical formulas for the neutron star mass and its gravitational redshift, where we also show the relative accuracy in the estimations from our empirical formulae. Then, in Sec. IV, we discuss the neutron star mass and radius expected from the resultant empirical formulae, together with the constraints from the astronomical and experimental observations. Finally, we conclude this study in Sec. V. Unless otherwise mentioned, we adopt geometric units in the following, \(c=G=1\), where \(c\) and \(G\) denote the speed of light and the gravitational constant, respectively.
## II EOS for neutron star matter and experimental variables
In order to construct a cold neutron star model, one has to prepare the EOS for neutron star matter at zero temperature, which satisfies the charge-neutrality and beta-equilibrium conditions. Up to now, many EOS models have been proposed, but relatively few EOSs have their thermodynamical properties publicly available in a tabulated format (or as an analytic expression). In this study, we especially focus on the 8 EOS models constructed with the Skyrme-type effective interaction [42; 43], which are commonly accepted and reproduce nuclear structure reasonably well.
These Skyrme interactions contain \(10\) (or \(11\)) parameters, which are determined to reproduce experimental data of several nuclear properties, such as binding energies and charge radii, and/or nuclear matter properties. Different models, i.e., different Skyrme interactions, adopt different criteria to determine the parameters, which leads to different calculation results, especially for open-shell or exotic nuclei [44; 45; 46; 47; 48]. So far, there are no standard or ultimate criteria to optimize the parameters of the Skyrme interaction. Here, we adopt only the EOS models taking into account the one-body center-of-mass correction without the two-body one [49]. In addition, we select only the EOS models whose expected maximum mass exceeds (or is comparable to) the \(2M_{\odot}\) neutron star observations. The EOS data are taken from the public source CompStar Online Supernovae Equations of State (CompOSE [35]). The EOS models adopted in this study are listed in Table 1. Once the EOS is prepared, the neutron star model is determined by integrating the Tolman-Oppenheimer-Volkoff (TOV) equations.
For any EOS model, one can express the bulk energy per nucleon for the zero-temperature uniform nuclear matter as a function of the baryon number density, \(n_{\mathrm{b}}=n_{n}+n_{p}\), and an asymmetric parameter defined as \(\alpha=\left(n_{n}-n_{p}\right)/n_{\mathrm{b}}\) with neutron (proton) number density, \(n_{n}\) (\(n_{p}\)), in the vicinity of saturation density, \(n_{0}\simeq 0.16\,\mathrm{fm}^{-3}\), for a symmetric nuclear matter as
\[\frac{E}{A}\left(n_{\mathrm{b}},\alpha\right)=w_{0}+\frac{K_{0}}{2}u^{2}+ \mathcal{O}\left(u^{3}\right)+\left[S_{0}+Lu+\frac{K_{\mathrm{sym}}}{2}u^{2}+ \mathcal{O}\left(u^{3}\right)\right]\alpha^{2}+\mathcal{O}\left(\alpha^{3} \right), \tag{1}\]
\begin{table}
\begin{tabular}{l c c c c c c c c c} EOS & \(n_{0}\left(\mathrm{fm}^{-3}\right)\) & \(w_{0}\) (MeV) & \(K_{0}\) (MeV) & \(S_{0}\) (MeV) & \(L\) (MeV) & \(K_{\mathrm{sym}}\) (MeV) & \(\eta\) (MeV) & \(M_{\mathrm{max}}\) (\(M_{\odot}\)) & Ref. \\ \hline LNS5 & \(0.15992\) & \(-15.57\) & \(240.2\) & \(29.15\) & \(50.94\) & \(-119.1\) & \(85.43\) & \(1.97\) & [36] \\ SKa & \(0.15535\) & \(-15.99\) & \(263.1\) & \(32.91\) & \(74.62\) & \(-78.45\) & \(113.6\) & \(2.22\) & [37] \\ SkMp & \(0.15704\) & \(-15.56\) & \(230.9\) & \(29.89\) & \(70.31\) & \(-49.82\) & \(104.5\) & \(2.11\) & [38] \\ SLy2 & \(0.16053\) & \(-15.99\) & \(229.9\) & \(32.00\) & \(47.46\) & \(-115.1\) & \(80.30\) & \(2.06\) & [39] \\ SLy4 & \(0.15954\) & \(-15.97\) & \(229.9\) & \(32.00\) & \(45.96\) & \(-119.7\) & \(78.60\) & \(2.06\) & [40] \\ SLy5 & \(0.16034\) & \(-15.98\) & \(229.9\) & \(32.03\) & \(48.27\) & \(-112.3\) & \(81.22\) & \(2.02\) & [40] \\ SLy9 & \(0.15117\) & \(-15.79\) & \(229.8\) & \(31.98\) & \(54.86\) & \(-81.42\) & \(88.43\) & \(2.16\) & [39] \\ SLy230a & \(0.15997\) & \(-15.99\) & \(229.9\) & \(31.99\) & \(44.32\) & \(-98.22\) & \(76.72\) & \(2.11\) & [41] \\ \end{tabular}
\end{table}
Table 1: EOS parameters adopted in this study, \(n_{0}\), \(w_{0}\), \(K_{0}\), \(S_{0}\), \(L\), and \(K_{\mathrm{sym}}\), are listed, while \(\eta\) is a specific combination of them given by \(\eta=\left(K_{0}L^{2}\right)^{1/3}\). In addition, the maximum mass expected with each EOS is also listed [35].
where \(u=\left(n_{\rm b}-n_{0}\right)/\left(3n_{0}\right)\) and the coefficient of \(\alpha^{2}\) corresponds to the nuclear symmetry energy. The saturation parameters appearing in Eq. (1) for the EOS models adopted in this study are listed in Table 1. As mentioned below, in this study we focus on \(S_{0}\), which has been constrained well from the experiments, i.e., \(S_{0}\approx 31.6\pm 2.7\,{\rm MeV}\)[23].
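As a simple illustration of Eq. (1), the following sketch evaluates the bulk energy per nucleon near saturation with the SLy4 parameters from Table 1; the higher-order terms in \(u\) and \(\alpha\) are dropped, so this truncation is only meaningful in the vicinity of \(n_{0}\).
```python
# Minimal sketch: bulk energy per nucleon from Eq. (1), truncated at the
# orders shown, with the SLy4 saturation parameters of Table 1.
n0, w0, K0, S0, L, Ksym = 0.15954, -15.97, 229.9, 32.00, 45.96, -119.7

def energy_per_nucleon(nb, alpha):
    """E/A in MeV for baryon density nb (fm^-3) and asymmetry alpha."""
    u = (nb - n0) / (3.0 * n0)
    symmetric = w0 + 0.5 * K0 * u**2
    symmetry_energy = S0 + L * u + 0.5 * Ksym * u**2
    return symmetric + symmetry_energy * alpha**2

print(energy_per_nucleon(n0, 0.0))  # = w0 = -15.97 MeV for symmetric matter at saturation
print(energy_per_nucleon(n0, 1.0))  # = w0 + S0 for pure neutron matter at n0
```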
On the other hand, using each model, i.e., Skyrme interaction, one can estimate the experimental observables, such as the ground-state energy, \(E_{\rm gs}\), the neutron-skin thickness, \(\Delta R_{n}\), the dipole polarizability, \(\alpha_{\rm D}\), and the energy of isoscalar giant monopole resonance (ISGMR), \(E_{\rm ISGMR}\), for specific atomic nuclei. It is known that \(\Delta R_{n}\) and \(\alpha_{\rm D}\) are correlated with the \(L\) parameter [50; 51], while \(E_{\rm ISGMR}\) is correlated with the \(K_{0}\) parameter [52; 53; 54; 55]. They are calculated by using an open-source code for the spherical Hartree-Fock and the random phase approximation (RPA) calculation named skyrme_rpa[56]. Since \({}^{208}{\rm Pb}\) and \({}^{132}{\rm Sn}\) are doubly magic, we can safely calculate by assuming the spherical symmetry without the pairing correlation. Owing to the nature of the spherical symmetry, only the radial wave function is calculated, where we consider the box size of \(0<r<20\,{\rm fm}\) with a \(0.1\,{\rm fm}\) mesh. Starting from the Hartree-Fock ground state, the RPA calculation is performed, where the cut-off energy for unoccupied single-particle orbitals is \(60\,{\rm MeV}\) for \({}^{132}{\rm Sn}\) and \(80\,{\rm MeV}\) for \({}^{208}{\rm Pb}\). The obtained strength functions are smeared with the Lorentzian function with a \(1.0\,{\rm MeV}\) width. We note that \(\Delta R_{n}\) and \(\alpha_{\rm D}\) for \({}^{48}{\rm Ca}\) and \({}^{208}{\rm Pb}\) have been measured as \(\Delta R_{n}^{\rm Ca}=0.168^{+0.025}_{-0.028}\,{\rm fm}\)[57], \(\Delta R_{n}^{\rm Ca}=0.121\pm 0.026\)(exp)\(\pm 0.024\)(model) \({\rm fm}\)[58]; \(\alpha_{\rm D}^{\rm Ca}=2.07(22)\,{\rm fm}^{3}\)[59]; \(\Delta R_{n}^{\rm Pb}=0.211^{+0.054}_{-0.063}\,{\rm fm}\)[60], \(\Delta R_{n}^{\rm Pb}=0.283\pm 0.071\,{\rm fm}\)[21]; and \(\alpha_{\rm D}^{\rm Pb}=20.1(6)\,{\rm fm}^{3}\)[61; 62], while \(\Delta R_{n}\) and \(\alpha_{\rm D}\) for \({}^{132}{\rm Sn}\) are not known in experiments. However, it is discussed in Refs. [63; 64; 65; 66; 33; 67; 34; 68; 69; 70; 71; 72; 73; 74; 75; 76] that beyond-mean-field effects may be indispensable to calculate properties of \({}^{40}{\rm Ca}\) and \({}^{48}{\rm Ca}\) consistently, while the mean-field calculation is used in this paper. Hence, in this study, we use properties of \({}^{132}{\rm Sn}\) and \({}^{208}{\rm Pb}\) obtained by the RPA calculation.
The Hartree-Fock calculation provides the ground-state energy and density. Using the ground-state proton and neutron densities, the neutron-skin thickness can be calculated by \(\Delta R_{n}=R_{n}-R_{p}\), where \(R_{p}\) (\(R_{n}\)) is the proton (neutron) root-mean-square radius. The RPA calculation provides the strength function. The energy of ISGMR, \(E_{\rm ISGMR}\), corresponds to the peak of the strength function of the isoscalar monopole resonance. The dipole polarizability is related to the isovector dipole resonance, which can be calculated by [51]
\[\alpha_{\rm D}=\frac{8\pi e^{2}}{9}m_{-1}\left(E1\right), \tag{2}\]
where \(m_{-1}\left(E1\right)\) is the inverse energy-weighted sum rule of the isovector dipole resonance [67]. These results for \({}^{208}{\rm Pb}\) and \({}^{132}{\rm Sn}\) calculated with the Skyrme interactions considered in this study are listed in Tables 2 and 3.
Up to now, it has already been shown that the neutron-skin thickness and the dipole polarizability multiplied by \(S_{0}\) for \({}^{208}{\rm Pb}\)
\begin{table}
\begin{tabular}{l c c c c} EOS & \(E_{\rm gs}^{\rm Pb}\) (MeV) & \(\Delta R_{n}^{\rm Pb}\) (fm) & \(\alpha_{\rm D}^{\rm Pb}\) (fm\({}^{3}\)) & \(E_{\rm ISGMR}^{\rm Pb}\) (MeV) \\ \hline LNS5 & \(-1625.6\) & \(0.1577\) & \(21.47\) & \(13.97\) \\ SkA & \(-1636.5\) & \(0.2114\) & \(22.36\) & \(14.09\) \\ SkMp & \(-1636.9\) & \(0.1958\) & \(23.90\) & \(13.71\) \\ SLy2 & \(-1635.9\) & \(0.1637\) & \(19.62\) & \(13.51\) \\ SLy4 & \(-1636.0\) & \(0.1597\) & \(19.81\) & \(13.57\) \\ SLy5 & \(-1636.4\) & \(0.1624\) & \(19.92\) & \(13.61\) \\ SLy9 & \(-1630.3\) & \(0.1716\) & \(20.93\) & \(13.28\) \\ SLy230a & \(-1635.9\) & \(0.1525\) & \(19.20\) & \(13.55\) \\ \end{tabular}
\end{table}
Table 2: Estimation of ground-state energy, \(E_{\rm gs}\), neutron-skin thickness, \(\Delta R_{n}\), dipole polarizability, \(\alpha_{\rm D}\), and energy of ISGMR, \(E_{\rm ISGMR}\), for \({}^{208}{\rm Pb}\), using various Skyrme interactions.
are respectively associated with \(L\)[50; 51], such as
\[\Delta R_{n}^{\rm Pb}\ ({\rm fm}) =0.101+0.147L_{100}, \tag{3a}\] \[\alpha_{\rm D}^{\rm Pb}S_{0}\ \left({\rm MeV\,fm}^{3}\right) =480+330L_{100}, \tag{3b}\]
where \(L_{100}=L/\left(100\,{\rm MeV}\right)\). In a similar way, we can confirm that the same properties of \({}^{208}{\rm Pb}\) and \({}^{132}{\rm Sn}\) estimated with the EOS
Figure 1: Neutron-skin thickness, \(\Delta R_{n}\), expected with each EOS model is shown as a function of the corresponding value of \(L\). The top and bottom panels correspond to the result for \({}^{208}{\rm Pb}\) and \({}^{132}{\rm Sn}\). The fitting lines given by Eqs. (4a) and (4b) are shown with the solid lines, while the fitting line derived in Ref. [50] is also shown with the dotted line in the top panel.
Figure 2: Dipole polarizability multiplied with \(S_{0}\) for each EOS model is shown as a function of \(L\), where the top and bottom panels correspond to the result for \({}^{208}\mathrm{Pb}\) and \({}^{132}\mathrm{Sn}\). The fitting lines given by Eqs. (4c) and (4d) are shown with the solid lines, while the fitting line derived in Ref. [51] is also shown with the dotted line in the top panel. For reference, the experimental value of \(\alpha_{\mathrm{D}}^{\mathrm{Pb}}S_{0}\) is shown with the shaded region in the top panel, using the experimental value \(\alpha_{\mathrm{D}}^{\mathrm{Pb}}=20.1(6)\,\mathrm{fm}^{3}\)[62] together with \(S_{0}\approx 31.6\pm 2.7\,\mathrm{MeV}\)[23] as the fiducial value of \(S_{0}\).
models adopted in this study are also strongly associated with \(L\), i.e.,
\[\Delta R_{n}^{\rm Pb}\ ({\rm fm}) =0.0757+0.176L_{100}, \tag{4a}\] \[\Delta R_{n}^{\rm Sn}\ ({\rm fm}) =0.134+0.182L_{100},\] (4b) \[\alpha_{\rm D}^{\rm Pb}S_{0}\ \left({\rm MeV\,fm^{3}}\right) =449+382L_{100},\] (4c) \[\alpha_{\rm D}^{\rm Sn}S_{0}\ \left({\rm MeV\,fm^{3}}\right) =232+177L_{100}. \tag{4d}\]
These fitting lines are shown in Figs. 1 and 2 together with the concrete values estimated with each EOS model. From these figures, we observe that the neutron-skin thickness of \({}^{208}{\rm Pb}\) deviates somewhat from the empirical relation given by Eq. (3a), while the values of \(\alpha_{\rm D}S_{0}\) of \({}^{208}{\rm Pb}\) are more or less consistent with the prediction of Eq. (3b) for the EOS models adopted in this study. In the top panel of Fig. 2, we also show the experimental value of \(\alpha_{\rm D}^{\rm Pb}S_{0}\) with the shaded region, using the experimental value \(\alpha_{\rm D}^{\rm Pb}=20.1(6)\,{\rm fm}^{3}\)[62] together with \(S_{0}\approx 31.6\pm 2.7\,{\rm MeV}\)[23] as the fiducial value of \(S_{0}\).
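Relations (4a)-(4d) can be inverted to translate a measured or calculated observable into an estimate of \(L\). The sketch below does this for \({}^{208}\mathrm{Pb}\), using the PREX-II central value of \(\Delta R_{n}^{\mathrm{Pb}}\) and the experimental \(\alpha_{\mathrm{D}}^{\mathrm{Pb}}\) with the fiducial \(S_{0}\) quoted above; experimental and fit uncertainties are ignored here for simplicity.
```python
# Minimal sketch: invert the linear fits (4a) and (4c) to estimate L (MeV)
# from the 208Pb neutron-skin thickness or dipole polarizability.
def L_from_skin_Pb(dRn_fm):                 # inverse of Eq. (4a)
    return 100.0 * (dRn_fm - 0.0757) / 0.176

def L_from_alphaD_Pb(alphaD_fm3, S0_MeV):   # inverse of Eq. (4c)
    return 100.0 * (alphaD_fm3 * S0_MeV - 449.0) / 382.0

print(L_from_skin_Pb(0.283))                # PREX-II central value -> roughly 118 MeV
print(L_from_alphaD_Pb(20.1, 31.6))         # alpha_D = 20.1 fm^3, S0 = 31.6 MeV -> roughly 49 MeV
```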
## III Neutron star mass formula
We have already shown that the mass, \(M\), and gravitational redshift, \(z\), of a low-mass neutron star can be expressed well as a function of the normalized central density, \(u_{\rm c}=\rho_{\rm c}/\rho_{0}\) with the central density \(\rho_{\rm c}\), and a suitable combination of the saturation parameters [24; 25; 26]. Here, a low-mass neutron star means one whose central density is less than a few times the nuclear saturation density. Since the gravitational redshift is expressed with \(M\) and the stellar radius, \(R\), as \(z=\left(1-2M/R\right)^{-1/2}-1\), one can estimate the neutron star mass and radius from these empirical formulae for \(M\) and \(z\), once the saturation parameters are constrained. In this study, we consider similar possibilities for expressing \(M\) and \(z\) directly with the experimental observables, such as the neutron-skin thickness and the dipole polarizability, instead of the saturation parameters [68].
### Empirical relations with \(\Delta R_{n}\)
First, we consider the possibility of deriving the empirical formulae for \(M\) and \(z\) with the neutron-skin thickness, \(\Delta R_{n}^{\rm Pb}\) and \(\Delta R_{n}^{\rm Sn}\). We find that the mass of a neutron star constructed with a fixed central density correlates with the neutron-skin thickness, depending only weakly on the EOS model. In Fig. 3, we show this feature, where the left and right panels correspond to the results with \({}^{208}{\rm Pb}\) and \({}^{132}{\rm Sn}\) for the neutron star models with \(u_{\rm c}=1\), \(2\), and \(3\). In this figure, the solid lines denote the resultant linear fitting, given by
\[\frac{M}{M_{\odot}} =a_{0,{\rm Pb}}^{M}+a_{1,{\rm Pb}}^{M}\left(\frac{\Delta R_{n}^{ \rm Pb}}{0.2\,{\rm fm}}\right), \tag{5a}\] \[\frac{M}{M_{\odot}} =a_{0,{\rm Sn}}^{M}+a_{1,{\rm Sn}}^{M}\left(\frac{\Delta R_{n}^{ \rm Sn}}{0.2\,{\rm fm}}\right), \tag{5b}\]
where the adjusting parameters of \(a_{i,{\rm Pb}}^{M}\) and \(a_{i,{\rm Sn}}^{M}\) for \(i=0\) and \(1\) depend on the central density of the corresponding neutron star model, \(u_{\rm c}\). In Fig. 4, we show the dependence of these parameters on \(u_{\rm c}\), where the solid lines correspond to the fitting lines given by
\[a_{i,{\rm Pb}}^{M} =\sum_{j=0}^{4}a_{ij,{\rm Pb}}^{M}u_{\rm c}^{j}, \tag{6a}\] \[a_{i,{\rm Sn}}^{M} =\sum_{j=0}^{4}a_{ij,{\rm Sn}}^{M}u_{\rm c}^{j}. \tag{6b}\]
The concrete values of \(a_{ij,{\rm Pb}}^{M}\) and \(a_{ij,{\rm Sn}}^{M}\) are listed in Table 4.
In a similar way, we find that the gravitational redshift, \(z\), for the neutron star model with the fixed central density can be expressed as a function of \(\Delta R_{n}\), weakly depending on the EOS models as
\[z =a_{0,{\rm Pb}}^{z}+a_{1,{\rm Pb}}^{z}\left(\frac{\Delta R_{n}^{ \rm Pb}}{0.2\,{\rm fm}}\right), \tag{7a}\] \[z =a_{0,{\rm Sn}}^{z}+a_{1,{\rm Sn}}^{z}\left(\frac{\Delta R_{n}^{ \rm Sn}}{0.2\,{\rm fm}}\right), \tag{7b}\]
where the adjusting parameters of \(a_{i,\mathrm{Pb}}^{z}\) and \(a_{i,\mathrm{Sn}}^{z}\) for \(i=0\) and \(1\) depend on the central density of the neutron star. As an example, we show the results with \(u_{\mathrm{c}}=1\), \(2\), and \(3\) in Fig. 5, where the marks denote the value of \(z\) for the neutron stars constructed with each EOS model, while the solid lines are the fittings. Then, as shown in Fig. 4, we can derive the dependence of the adjusting parameters in Eqs. (7a) and (7b) on \(u_{\mathrm{c}}\), as
\[a_{i,\mathrm{Pb}}^{z} =\sum_{j=0}^{4}a_{ij,\mathrm{Pb}}^{z}u_{\mathrm{c}}^{j}, \tag{8a}\] \[a_{i,\mathrm{Sn}}^{z} =\sum_{j=0}^{4}a_{ij,\mathrm{Sn}}^{z}u_{\mathrm{c}}^{j}. \tag{8b}\]
The solid lines in Fig. 4 show the expected values with these fittings. The concrete values of \(a_{ij,\mathrm{Pb}}^{z}\) and \(a_{ij,\mathrm{Sn}}^{z}\) are listed in Table 4.
Now, we have derived the empirical formulae expressing \(M\) and \(z\) as a function of \(\left(u_{\mathrm{c}},\Delta R_{n}^{\mathrm{Pb}}\right)\) or \(\left(u_{\mathrm{c}},\Delta R_{n}^{\mathrm{Sn}}\right)\), respectively, as Eqs. (5) and (6) and Eqs. (7) and (8). To see the accuracy of the estimation with these empirical formulae, we calculate the relative deviation in the mass and its gravitational redshift estimated with the empirical formulae from those as a TOV solution, and show it in the top and middle panels of Fig. 6, where the left and right panels correspond to the results from the formulae with \(\Delta R_{n}^{\mathrm{Pb}}\) and \(\Delta R_{n}^{\mathrm{Sn}}\), respectively. From this figure, one can see that the mass and its gravitational redshift are estimated within \(\lesssim 10\,\%\) accuracy, using the empirical formula with \(\Delta R_{n}^{\mathrm{Pb}}\) or \(\Delta R_{n}^{\mathrm{Sn}}\). In addition, in the bottom panel of Fig. 6, we show the relative deviation of the neutron star radius estimated with the empirical formulae for \(M\) and \(z\) from that determined as a TOV solution. From this figure, we find that the stellar radius for the neutron star with \(u_{\mathrm{c}}=2\)-\(3\) can be estimated with the empirical formulae for \(M\) and \(z\) using \(\Delta R_{n}^{\mathrm{Pb}}\) or \(\Delta R_{n}^{\mathrm{Sn}}\) within a few \(\%\) accuracy. Since these empirical formulae are derived using several EOS models selected in this study, the formulae are applicable only in the range of \(\Delta R_{n}\) given by the adopted EOS models. That is, the empirical formulae are applicable in the range of \(0.8\lesssim u_{\mathrm{c}}\lesssim 3.0\) and \(0.153\lesssim\Delta R_{n}^{\mathrm{Pb}}\lesssim 0.211\,\mathrm{fm}\) or \(0.213\lesssim\Delta R_{n}^{\mathrm{Sn}}\lesssim 0.274\,\mathrm{fm}\) (e.g., see the horizontal axis in Fig. 3).
\begin{table}
\begin{tabular}{c c c c c c} \(j\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) \\ \hline \(a_{0j,\mathrm{Pb}}^{M}\) & \(0.8777\) & \(-2.0127\) & \(0.9596\) & \(-0.13770\) & \(0.0047513\) \\ \(a_{1j,\mathrm{Pb}}^{M}\) & \(-0.7924\) & \(1.7396\) & \(-0.3281\) & \(-0.04253\) & \(0.0117856\) \\ \hline \(a_{0j,\mathrm{Sn}}^{M}\) & \(1.0850\) & \(-2.4709\) & \(1.0488\) & \(-0.13010\) & \(0.0023109\) \\ \(a_{1j,\mathrm{Sn}}^{M}\) & \(-0.7604\) & \(1.6719\) & \(-0.3168\) & \(-0.03827\) & \(0.0108140\) \\ \hline \(a_{0j,\mathrm{Pb}}^{z}\) & \(0.076109\) & \(-0.2565\) & \(0.1586\) & \(-0.035961\) & \(0.0034092\) \\ \(a_{1j,\mathrm{Pb}}^{z}\) & \(-0.084495\) & \(0.2528\) & \(-0.1063\) & \(0.025645\) & \(-0.0027859\) \\ \hline \(a_{0j,\mathrm{Sn}}^{z}\) & \(0.099032\) & \(-0.3246\) & \(0.1884\) & \(-0.043693\) & \(0.0042857\) \\ \(a_{1j,\mathrm{Sn}}^{z}\) & \(-0.081280\) & \(0.2429\) & \(-0.1025\) & \(0.024997\) & \(-0.0027348\) \\ \end{tabular}
\end{table}
Table 4: The coefficients in Eqs. (6a), (6b), (8a), and (8b).
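To make the use of these fits concrete, the following minimal Python sketch evaluates the mass estimate of Eqs. (5a) and (6a) from the \({}^{208}\mathrm{Pb}\) neutron-skin thickness. It assumes that Eq. (5a) has the same linear form as Eqs. (7a) and (9a), i.e. \(M/M_{\odot}=a_{0,\mathrm{Pb}}^{M}+a_{1,\mathrm{Pb}}^{M}\left(\Delta R_{n}^{\mathrm{Pb}}/0.2\,\mathrm{fm}\right)\), and that the first two rows of Table 4 are \(a_{0j,\mathrm{Pb}}^{M}\) and \(a_{1j,\mathrm{Pb}}^{M}\); the function and variable names are ours, not from the paper.

```python
# Coefficients assumed to be the first two rows of Table 4: a^M_{0j,Pb} and a^M_{1j,Pb}, j = 0..4.
A0J_PB_M = [0.8777, -2.0127, 0.9596, -0.13770, 0.0047513]
A1J_PB_M = [-0.7924, 1.7396, -0.3281, -0.04253, 0.0117856]

def mass_from_skin_pb(u_c, delta_rn_pb_fm):
    """Neutron star mass M/M_sun from Eqs. (5a) and (6a), using Delta R_n of 208Pb.

    u_c            : central density in units of the saturation density (0.8 <~ u_c <~ 3.0)
    delta_rn_pb_fm : neutron-skin thickness of 208Pb in fm (0.153 <~ value <~ 0.211)
    """
    a0 = sum(c * u_c**j for j, c in enumerate(A0J_PB_M))  # Eq. (6a) with i = 0
    a1 = sum(c * u_c**j for j, c in enumerate(A1J_PB_M))  # Eq. (6a) with i = 1
    return a0 + a1 * (delta_rn_pb_fm / 0.2)               # Eq. (5a)

# Example: the test value of Sec. IV, Delta R_n^Pb / (0.2 fm) = 0.9, at u_c = 2
print(mass_from_skin_pb(2.0, 0.18))
```

The same pattern applies to the \({}^{132}\mathrm{Sn}\) formulae and to the redshift fits of Eqs. (7) and (8) with the corresponding rows of Table 4.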
Figure 3: The mass of the neutron stars constructed with each EOS model is shown as a function of \(\Delta R_{n}^{\mathrm{Pb}}\) in the left panel and \(\Delta R_{n}^{\mathrm{Sn}}\) in the right panel, where the neutron star mass for \(u_{\mathrm{c}}=1\), \(2\), and \(3\) are shown. The fitting lines are given by Eqs. (5a) and (5b).
Figure 4: The coefficients in Eqs. (5a), (5b), (7a), and (7b) are shown as a function of \(u_{\rm c}\), where the top and bottom panels correspond to the coefficients in the formulae with the data of \({}^{208}{\rm Pb}\) and \({}^{132}{\rm Sn}\), respectively, while the solid lines denote the fitting lines given by Eqs. (6a), (6b), (8a), and (8b).
### Empirical relations with \(\alpha_{\rm D}\)
Next, we consider the derivation of the empirical formulae for \(M\) and \(z\), using the dipole polarizability, \(\alpha_{\rm D}\), for \({}^{208}{\rm Pb}\) and \({}^{132}{\rm Sn}\). In a similar way to the case with \(\Delta R_{n}\), we find that the neutron star mass with the fixed central density is strongly correlated to \(\alpha_{\rm D}S_{0}\), weakly depending on the EOS models. In Fig. 7, we show the neutron star mass with \(u_{\rm c}=1\), \(2\), and \(3\) as a function of \(\alpha_{\rm D}^{\rm Pb}S_{0}\) in the left panel and \(\alpha_{\rm D}^{\rm Sn}S_{0}\) in the right panel, where the solid lines denote the fitting given by
\[\frac{M}{M_{\odot}} =b_{0,{\rm Pb}}^{M}+b_{1,{\rm Pb}}^{M}\left(\frac{S_{0}}{30\,{\rm MeV }}\frac{\alpha_{\rm D}^{\rm Pb}}{20\,{\rm fm}^{3}}\right), \tag{9a}\] \[\frac{M}{M_{\odot}} =b_{0,{\rm Sn}}^{M}+b_{1,{\rm Sn}}^{M}\left(\frac{S_{0}}{30\,{\rm MeV }}\frac{\alpha_{\rm D}^{\rm Sn}}{20\,{\rm fm}^{3}}\right). \tag{9b}\]
The coefficients in these fittings depend on the value of \(u_{\rm c}\), and we can derive their fitting as
\[b_{i,{\rm Pb}}^{M} =\sum_{j=0}^{4}b_{ij,{\rm Pb}}^{M}u_{\rm c}^{j}, \tag{10a}\] \[b_{i,{\rm Sn}}^{M} =\sum_{j=0}^{4}b_{ij,{\rm Sn}}^{M}u_{\rm c}^{j}, \tag{10b}\]
with which the expected values are shown in Fig. 8 with the solid lines. The concrete values of \(b_{ij,{\rm Pb}}^{M}\) and \(b_{ij,{\rm Sn}}^{M}\) for \(i=0\), \(1\) and \(j=0\)-\(4\) are listed in Table 5.
Moreover, we also find that the gravitational redshift of the neutron star with the fixed central density is strongly associated with \(\alpha_{\rm D}S_{0}\), weakly depending on the EOS models. In fact, as shown in Fig. 9, it can be expressed as a function of \(\alpha_{\rm D}S_{0}\) for
\begin{table}
\begin{tabular}{c c c c c c} \(j\) & 0 & 1 & 2 & 3 & 4 \\ \hline \(b_{0j,{\rm Pb}}^{M}\) & \(1.4048\) & \(-3.1188\) & \(1.0989\) & \(-0.08108\) & \(-0.006288\) \\ \(b_{1j,{\rm Pb}}^{M}\) & \(-1.1036\) & \(2.3765\) & \(-0.3867\) & \(-0.08429\) & \(0.019227\) \\ \hline \(b_{0j,{\rm Sn}}^{M}\) & \(1.3332\) & \(-2.9720\) & \(1.0432\) & \(-0.07696\) & \(-0.0057868\) \\ \(b_{1j,{\rm Sn}}^{M}\) & \(-2.0784\) & \(4.4887\) & \(-0.6693\) & \(-0.17770\) & \(0.037747\) \\ \hline \(b_{0j,{\rm Pb}}^{z}\) & \(0.1340\) & \(-0.4237\) & \(0.2239\) & \(-0.051039\) & \(0.0050455\) \\ \(b_{1j,{\rm Pb}}^{z}\) & \(-0.1191\) & \(0.3508\) & \(-0.1430\) & \(0.033876\) & \(-0.0036785\) \\ \hline \(b_{0j,{\rm Sn}}^{z}\) & \(0.1279\) & \(-0.4069\) & \(0.2173\) & \(-0.050428\) & \(0.0050662\) \\ \(b_{1j,{\rm Sn}}^{z}\) & \(-0.2264\) & \(0.6699\) & \(-0.2725\) & \(0.066119\) & \(-0.0073334\) \\ \end{tabular}
\end{table}
Table 5: The coefficients in Eqs. (10a), (10b), (12a), and (12b).
\({}^{208}\mathrm{Pb}\) and \({}^{132}\mathrm{Sn}\), such as
\[z =b_{0,\mathrm{Pb}}^{z}+b_{1,\mathrm{Pb}}^{z}\left(\frac{S_{0}}{30 \,\mathrm{MeV}}\frac{\alpha_{\mathrm{D}}^{\mathrm{Pb}}}{20\,\mathrm{fm}^{3}} \right), \tag{11a}\] \[z =b_{0,\mathrm{Sn}}^{z}+b_{1,\mathrm{Sn}}^{z}\left(\frac{S_{0}}{30 \,\mathrm{MeV}}\frac{\alpha_{\mathrm{D}}^{\mathrm{Sn}}}{20\,\mathrm{fm}^{3}} \right), \tag{11b}\]
where the coefficients \(b_{i,\mathrm{Pb}}^{z}\) and \(b_{i,\mathrm{Sn}}^{z}\) again depend on \(u_{\mathrm{c}}\). We show this dependence in Fig. 8, where the solid lines
Figure 6: Relative deviation of the stellar mass (\(M\)) and gravitational redshift (\(z\)) estimated with the empirical relations (Eqs. (5a)–(8b)) from those determined as the TOV solutions is shown as a function of \(u_{\mathrm{c}}\). The bottom panels correspond to the relative deviation of the stellar radius (\(R\)) predicted with the empirical formulae for \(M\) and \(z\) from that determined as the TOV solutions. The left and right panels correspond to the results obtained from the empirical formulae as a function of \(\Delta R_{n}^{\mathrm{Pb}}\) and \(\Delta R_{n}^{\mathrm{Sn}}\), respectively.
are the fittings given by
\[b_{i,\mathrm{Pb}}^{z} =\sum_{j=0}^{4}b_{ij,\mathrm{Pb}}^{z}u_{c}^{j}, \tag{12a}\] \[b_{i,\mathrm{Sn}}^{z} =\sum_{j=0}^{4}b_{ij,\mathrm{Sn}}^{z}u_{c}^{j}. \tag{12b}\]
The concrete values of \(b_{ij,\mathrm{Pb}}^{z}\) and \(b_{ij,\mathrm{Sn}}^{z}\) for \(i=0\), \(1\) and \(j=0\)-\(4\) are listed in Table 5.
Now, we have newly obtained the empirical formulae expressing \(M\) and \(z\) as a function of \(\left(u_{\mathrm{c}},\alpha_{\mathrm{D}}^{\mathrm{Pb}}S_{0}\right)\) or \(\left(u_{\mathrm{c}},\alpha_{\mathrm{D}}^{\mathrm{Sn}}S_{0}\right)\), respectively, as Eqs. (9) and (10) and Eqs. (11) and (12). In the top and middle panels of Fig. 10, we show the relative deviation of the estimation of mass and gravitational redshift with empirical formulae from those determined by integrating the TOV equations. From this figure, we find that the mass and gravitational redshift can be estimated within \(\approx 8\,\%\) accuracy from the empirical formulae with \(\alpha_{\mathrm{D}}^{\mathrm{Pb}}S_{0}\) or \(\alpha_{\mathrm{D}}^{\mathrm{Sn}}S_{0}\). Additionally, one can estimate the stellar radius by combining these formulae for \(M\) and \(z\), whose relative deviation from the stellar radius as the TOV solution is shown in the bottom panel of Fig. 10. From this figure, we find the stellar radius can be estimated within \(\approx 2\,\%\) error for the neutron star for \(u_{\mathrm{c}}=2\)-\(3\). These empirical formulae are applicable in the range of \(0.8\lesssim u_{\mathrm{c}}\lesssim 3.0\) and \(1.02\lesssim\alpha_{\mathrm{D}}^{\mathrm{Pb}}S_{0}/\left(600\,\mathrm{MeV}\, \mathrm{fm}^{3}\right)\lesssim 1.23\) or \(0.51\lesssim\alpha_{\mathrm{D}}^{\mathrm{Sn}}S_{0}/\left(600\,\mathrm{MeV}\, \mathrm{fm}^{3}\right)\lesssim 0.62\).
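Once \(M\) and \(z\) are estimated from the empirical formulae, the radius follows from the standard Schwarzschild surface-redshift relation \(1+z=\left[1-2GM/(Rc^{2})\right]^{-1/2}\); this inversion is generic general relativity and not a fit of this work. A short Python helper (the names are ours) reads:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def radius_from_mass_and_redshift(mass_msun, z):
    """Stellar radius in km from the mass and the surface gravitational redshift,
    inverting 1 + z = [1 - 2GM/(R c^2)]^(-1/2)."""
    r_schwarzschild = 2.0 * G * mass_msun * M_SUN / C**2   # 2GM/c^2 in meters
    compactness = 1.0 - 1.0 / (1.0 + z)**2                 # equals 2GM/(R c^2)
    return r_schwarzschild / compactness / 1.0e3

# Example: M = 1.4 M_sun with z = 0.25 gives R of about 11.5 km.
print(radius_from_mass_and_redshift(1.4, 0.25))
```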
## IV Neutron star mass and radius relation
Using the empirical formulae derived in this study, we see how the neutron star mass and radius depend on the experimental observables, i.e., \(\Delta R_{n}\) and \(\alpha_{\mathrm{D}}S_{0}\). For this purpose, we assume that \(\Delta R_{n}^{\mathrm{Pb}}/\left(0.2\,\mathrm{fm}\right)=0.9\), \(\Delta R_{n}^{\mathrm{Sn}}/\left(0.2\,\mathrm{fm}\right)=1.2\), \(\alpha_{\mathrm{D}}^{\mathrm{Pb}}S_{0}/\left(600\,\mathrm{MeV}\,\mathrm{fm}^{3}\right)=1.13\), and \(\alpha_{\mathrm{D}}^{\mathrm{Sn}}S_{0}/\left(600\,\mathrm{MeV}\,\mathrm{fm}^{3}\right)=0.56\) as their test values here. These values are around the centers of the ranges of the corresponding variables for the EOS models adopted in this study (see the horizontal axis in Figs. 3, 5, 7, and 9). Then, in Fig. 11, we show the neutron star mass and radius predicted from the empirical relations, adopting errors of \(\pm 5\,\%,\pm 10\,\%\), and \(\pm 15\,\%\) on the test values. In addition, the experimental value of \(\Delta R_{n}^{\mathrm{Pb}}\) is known via PREX-II, i.e., \(\Delta R_{n}^{\mathrm{Pb}}=0.283\pm 0.071\,\mathrm{fm}\)[21], but this constraint is out of the range in which our empirical relation is applicable. On the other hand, the experimental value of \(\alpha_{\mathrm{D}}^{\mathrm{Pb}}\) is \(\alpha_{\mathrm{D}}^{\mathrm{Pb}}=20.1(6)\,\mathrm{fm}^{3}\)[62], which leads to \(0.94\lesssim\alpha_{\mathrm{D}}^{\mathrm{Pb}}S_{0}/\left(600\,\mathrm{MeV}\,\mathrm{fm}^{3}\right)\lesssim 1.18\), assuming that \(S_{0}\approx 31.6\pm 2.7\,\mathrm{MeV}\)[23]. Since this is more or less inside the applicable range, we also show the region in the neutron star mass and radius predicted with this experimental value in the bottom-left panel of Fig. 11. From this figure, one can observe that the predicted neutron star mass and radius depend strongly on the adopted experimental observable, even if one assumes the same errors in \(\Delta R_{n}\) and \(\alpha_{\mathrm{D}}S_{0}\). In fact, it seems that one can predict the neutron star mass and radius well using the data of \(\Delta R_{n}\) for \({}^{208}\mathrm{Pb}\). For example, once the value of \(\Delta R_{n}\) for \({}^{208}\mathrm{Pb}\) is determined within a \(10\,\%\) error, the neutron star radius may be determined with a few \(\%\) accuracy.
To understand this situation, we see the EOS dependence of \(\Delta R_{n}^{\mathrm{Pb}}\), \(\Delta R_{n}^{\mathrm{Sn}}\), \(\alpha_{\mathrm{D}}^{\mathrm{Pb}}S_{0}\), and \(\alpha_{\mathrm{D}}^{\mathrm{Sn}}S_{0}\). From Figs. 3, 5, 7, and 9, one can observe the minimum and maximum values of \(\Delta R_{n}^{\mathrm{Pb}}\), \(\Delta R_{n}^{\mathrm{Sn}}\), \(\alpha_{\mathrm{D}}^{\mathrm{Pb}}S_{0}\), and \(\alpha_{\mathrm{D}}^{\mathrm{Sn}}S_{0}\) are given by SLy230a and SKa,
Figure 7: The mass of the neutron stars constructed with each EOS model is shown as a function of \(\alpha_{\mathrm{D}}^{\mathrm{Pb}}S_{0}\) in the left panel and \(\alpha_{\mathrm{D}}^{\mathrm{Sn}}S_{0}\) in the right panel, where we show the results for \(u_{\mathrm{c}}=1\), \(2\), and \(3\). The fitting lines are given by Eqs. (9a) and (9b).
respectively. Now, to see how strongly these variables depend on the EOS models, we calculate their relative range through
\[\delta B=\frac{2\left(B_{\rm{SKa}}-B_{\rm{SLy230a}}\right)}{B_{\rm{SKa}}+B_{\rm{ SLy230a}}}, \tag{13}\]
where \(B\) denotes one of the variables \(\Delta R_{n}^{\rm Pb}\), \(\Delta R_{n}^{\rm Sn}\), \(\alpha_{\rm D}^{\rm Pb}S_{0}\), and \(\alpha_{\rm D}^{\rm Sn}S_{0}\). One obtains \(\delta\left(\Delta R_{n}^{\rm Pb}\right)=0.324\), \(\delta\left(\Delta R_{n}^{\rm Sn}\right)=0.248\), \(\delta\left(\alpha_{\rm D}^{\rm Pb}S_{0}\right)=0.180\), and \(\delta\left(\alpha_{\rm D}^{\rm Sn}S_{0}\right)=0.186\). This means that the EOS dependence of \(\Delta R_{n}^{\rm Pb}\) is stronger than those of \(\alpha_{\rm D}^{\rm Pb}S_{0}\) or \(\alpha_{\rm D}^{\rm Sn}S_{0}\). That is, if one considers the same errors in the experimental observables, such as \(\pm 15\,\%\), \(\alpha_{\rm D}^{\rm Pb}S_{0}\) or \(\alpha_{\rm D}^{\rm Sn}S_{0}\) easily falls outside the applicable range. Actually, the values of \(\alpha_{\rm D}^{\rm Pb}S_{0}\) and \(\alpha_{\rm D}^{\rm Sn}S_{0}\) with \(\pm 15\,\%\) errors from the test values are outside the applicable range. This is the reason why the neutron star mass and radius are predicted relatively well with the empirical formula using \(\Delta R_{n}^{\rm Pb}\). Conversely, we may conclude that it is difficult to constrain the neutron star mass and radius from the dipole polarizability.
This tendency can be understood as follows: The neutron-skin thickness is basically determined by the isovector properties of the effective interaction, and accordingly, the neutron-skin thickness strongly correlates with the neutron excess \(\left(N-Z\right)/A\) (equivalent to the asymmetry parameter \(\alpha\)), where \(N\), \(Z\), and \(A\) denote the neutron number, proton number, and atomic mass number of each nucleus. Since the neutron excess of \({}^{208}\rm Pb\) is smaller than that of \({}^{132}\rm Sn\), the absolute value of \(\Delta R_{n}\) for \({}^{208}\rm Pb\) is smaller than that for \({}^{132}\rm Sn\). On the other hand, the spread of \(\Delta R_{n}\) among the models for \({}^{208}\rm Pb\), i.e., the \(y\)-axis of Fig. 1, is almost the same as that for \({}^{132}\rm Sn\). Accordingly, \(\delta\left(\Delta R_{n}^{\rm Pb}\right)\) becomes larger than \(\delta\left(\Delta R_{n}^{\rm Sn}\right)\), although the measurement accuracy of \(\Delta R_{n}^{\rm Pb}\) is expected to be better than that of \(\Delta R_{n}^{\rm Sn}\) since \({}^{132}\rm Sn\) is an exotic nucleus.
In contrast, \(\alpha_{\rm D}\) is associated with the isoscalar properties of the effective interaction; indeed, \(\alpha_{\rm D}S_{0}\) correlates with \(A\left\langle r^{2}\right\rangle\), where \(\left\langle r^{2}\right\rangle\) is the mean-square radius of the nucleus [51; 69; 70]. As a result, the spread of \(\alpha_{\rm D}S_{0}\) among the models, i.e., the \(y\)-axis of Fig. 2, scales with its absolute value. The isoscalar properties of the effective interaction are determined better than the isovector ones. Thus, \(\delta\left(\alpha_{\rm D}S_{0}\right)\) is smaller than \(\delta\left(\Delta R_{n}\right)\).
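Equation (13) is a simple symmetric relative difference, so it can be evaluated directly from the extreme model values; the small Python helper below uses, as an illustration, the rounded endpoints of the applicable \(\Delta R_{n}^{\rm Pb}\) range quoted in Sec. III, which reproduce a value close to, but not exactly, the quoted \(0.324\) obtained from the exact SKa and SLy230a values.

```python
def relative_range(b_ska, b_sly230a):
    """Relative range delta(B) of Eq. (13); SKa gives the maximum and SLy230a the minimum."""
    return 2.0 * (b_ska - b_sly230a) / (b_ska + b_sly230a)

# Rounded endpoints of the applicable range of Delta R_n^Pb (0.153-0.211 fm):
print(relative_range(0.211, 0.153))   # ~0.32, consistent with the quoted 0.324
```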
Finally, in Fig. 12, we compare the region of neutron star mass and radius expected from the resultant empirical formulae with \(\Delta R_{n}^{\rm Pb}\), i.e., Eqs. (5a), (6a), (7a), and (8a), assuming that \(\Delta R_{n}^{\rm Pb}=\left(0.9\pm 0.09\right)\times 0.2\,\rm fm\) (\(\pm 10\,\%\) deviation), with the other constraints. As shown in Fig. 11, for the neutron star models predicted with the empirical formulae using \(\Delta R_{n}^{\rm Pb}\), we only plot stellar models whose central density is up to three times the saturation density. In the same figure, we show the constraints on the neutron star mass and radius obtained from various astronomical observations; the gravitational-wave observation of the GW170817 event, i.e., that the \(1.4M_{\odot}\) neutron star radius is less than \(13.6\,\rm km\)[8], a constraint that may become more severe by combining with multimessenger observations and nuclear theory [71; 72]; the x-ray observations via NICER for PSR J0030+0451 [16; 17] and MSP J0740+6620 [18; 19]; the observations of x-ray bursts analyzed through theoretical models [73]; and the identification of the magnetar quasi-periodic oscillations observed in GRB 200415A with the crustal torsional oscillations [74]. As a theoretical constraint, the top-left region is excluded by causality [75]. We also show the mass and radius region predicted with the empirical formulae with the nuclear saturation parameters, i.e., \(\eta=\left(K_{0}L^{2}\right)^{1/3}\)[24], assuming \(L=60\pm 20\) and \(K_{0}=240\pm 20\,\rm MeV\) as their fiducial values, where the central density is considered to be less than twice the saturation density. Furthermore, for reference, the neutron star models constructed with some of the EOS models adopted in this study, such as the SKa, SkMp, and SLy4, are also shown with dotted lines.
## V Conclusion
The nuclear saturation parameters are important parameters characterizing the EOS models. However, they are usually constrained from experimental data through some theoretical model. To avoid such an indirect route and to directly connect the neutron star properties, such as the mass and radius, with the experimental data, we derive the empirical formulae expressing the neutron star mass and its gravitational redshift as a function of the normalized central density, \(u_{\rm c}=\rho_{\rm c}/\rho_{0}\), and the neutron-skin thickness or the dipole polarizability for \({}^{208}\rm Pb\) or \({}^{132}\rm Sn\). These formulae can predict the neutron star mass and its gravitational redshift within \(\approx 10\,\%\) accuracy, while the stellar radius is estimated within a few \(\%\) accuracy by combining the resultant empirical formulae. Then, using the empirical formulae, we see how the neutron star mass and radius depend on the experimental data, i.e., the neutron-skin thickness and dipole polarizability of \({}^{208}\rm Pb\) or \({}^{132}\rm Sn\). As a result, we find that the neutron star mass and radius are relatively more sensitive to the data of the neutron-skin thickness of \({}^{208}\rm Pb\), while they seem to be less sensitive to the dipole polarizability. As an example, we show that the neutron star radius could be determined within a few \(\%\) accuracy once the neutron-skin thickness of \({}^{208}\rm Pb\) is determined within a \(10\,\%\) error. In this study, we successfully derive the empirical formulae expressing the neutron star mass and its gravitational redshift, but the applicable range may not be so wide. This is because the number of EOS models in our sample is not large, owing to the limited availability of suitable EOS models. To extend the applicable range, we plan to update the empirical formulae by collecting as wide a variety of EOS models as possible.
## Acknowledgments
HS is grateful to Susumu Shimoura for his valuable comments. TN is grateful to Li-Gang Cao for his comments on the LNS functional. This work is supported in part by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Numbers JP19KK0354, JP21H01088, and JP22K20372, by Pioneering Program of RIKEN for Evolution of Matter in the Universe (r-EMU), by the RIKEN Special Postdoctoral Researchers Program, and by the Science and Technology Hub Collaborative Research Program from RIKEN Cluster for Science, Technology and Innovation Hub (RCSTI). The numerical calculations were partly performed on cluster computers at the RIKEN iTHEMS program.
|
2310.03709 | The Effect of Quadratic Base Change on Torsion | Let $K$ be a quadratic field and $E$ an elliptic curve defined over $K$ such
that $E(K)[2]\simeq C_2.$ In this paper, we study the effect of quadratic base
change on $E(K)_{\text{tor}}.$ In particular, we examine the growth of
$E(K)_{\text{tor}}$ upon quadratic base change when $K$ is any quadratic
cyclotomic field. In addition, for a given elliptic curve $E/K$ with prescribed
torsion group over $K,$ (no restriction on its $2$-torsion part) we describe an
algorithm to find all quadratic extensions $L/K$ in which $E(K)_{\text{tor}}
\subsetneq E(L)_{\text{tor}}$ and describe $E(L)_{\text{tor}}$ in each such
case. | Irmak Balçık, Burton Newman | 2023-10-05T17:35:55Z | http://arxiv.org/abs/2310.03709v1 | # The effect of quadratic base change on torsion
###### Abstract.
Let \(K\) be a quadratic field and \(E\) an elliptic curve defined over \(K\) such that \(E(K)[2]\simeq C_{2}.\) In this paper, we study the effect of quadratic base change on \(E(K)_{\rm tor}.\) In particular, we examine the growth of \(E(K)_{\rm tor}\) upon quadratic base change when \(K\) is any quadratic cyclotomic field. In addition, for a given elliptic curve \(E/K\) with prescribed torsion group over \(K,\) (no restriction on its \(2\)-torsion part) we describe an algorithm to find all quadratic extensions \(L/K\) in which \(E(K)_{\rm tor}\subsetneq E(L)_{\rm tor}\) and describe \(E(L)_{\rm tor}\) in each such case.
## 1. Introduction
The possible torsion groups of an elliptic curve over \(\mathbb{Q}\) are known by a celebrated theorem of Mazur [14]. These groups are
\[\begin{array}{ll}C_{n}&1\leq n\leq 12,\ n\neq 11\\ C_{2}\oplus C_{2n}&1\leq n\leq 4.\end{array} \tag{1}\]
Subsequently, the work of Mazur is generalized to quadratic fields.
**Theorem 1.1** ([11], [12]).: _Let \(K\) be any quadratic field and \(E\) any elliptic curve defined over \(K\). Then \(E(K)_{tor}\) is isomorphic to one of the following 26 groups:_
\[\begin{array}{ll}C_{n}&1\leq n\leq 18,\ n\neq 17\\ C_{2}\oplus C_{2n}&1\leq n\leq 6\\ C_{3}\oplus C_{3n}&1\leq n\leq 2\\ C_{4}\oplus C_{4}.\end{array} \tag{2}\]
The full description of torsion groups appearing over cubic fields has been settled in [2]. Over quartic fields we still do not have a complete classification, analogous to (1). We know however [10] which torsion groups occur infinitely often up to isomorphism and [3] that 17 is the largest prime dividing the order of a torsion group over quartic fields.
In this paper, we focus on understanding how torsion groups appearing in (2) with a unique point of order \(2\) grow upon quadratic base change. The case \(K=\mathbb{Q}\) has been completely studied upon base change in [6], [7], [5], [8], [9]. Compared to the \(K=\mathbb{Q}\) case, the core idea is to use extensively the Galois action on the group of \(n\)-torsion points of an elliptic curve, which establishes the first main result, Theorem 3.1, of this paper. In particular, we extend our methods to determine a classification of such growth when \(K\) is restricted to a quadratic cyclotomic field, i.e., \(K\) belongs to the set \(\mathcal{A}=\{\mathbb{Q}(\sqrt{-1}),\mathbb{Q}(\sqrt{-3})\}.\)
Two reasons motivate our focus on these fields. First, the classification of possible torsion groups for each quadratic cyclotomic field is complete due to [15], [16] (see Theorem 1.2 below). Second, torsion groups with a unique point of order \(2\) are the remaining open case in the second author's earlier results [18], [19].
**Theorem 1.2**.: _Let \(K\) be a quadratic cyclotomic field and let \(E\) be any elliptic curve defined over \(K.\)_
1. _If_ \(K=\mathbb{Q}(\sqrt{-1})\)_, then_ \(E(K)_{\text{tors}}\) _is either one of the groups from Mazur's list_ \((1)\) _or_ \(C_{4}\oplus C_{4}.\)__
2. _If_ \(K=\mathbb{Q}(\sqrt{-3})\)_, then_ \(E(K)_{\text{tors}}\) _is either one of the groups from Mazur's list_ \((1)\)_,_ \(C_{3}\oplus C_{3}\) _or_ \(C_{3}\oplus C_{6}.\)__
The groups \(C_{3}\oplus C_{3}\) and \(C_{3}\oplus C_{6}\) are only realized over \(\mathbb{Q}(\sqrt{-3})\), while the group \(C_{4}\oplus C_{4}\) is only realized over \(\mathbb{Q}(\sqrt{-1})\), since by the Weil pairing the field must contain a primitive root of unity of order \(3\) or \(4\), respectively.
The second main result of this paper is the following theorem.
**Theorem 1.3**.: _Let \(K\) be a quadratic cyclotomic field. Let \(E\) be any elliptic curve defined over \(K\) with \(E(K)[2]\simeq C_{2},\) and let \(L/K\) be any quadratic extension._
1. _If_ \(K=\mathbb{Q}(\sqrt{-1}),\) _then_ \(E(L)_{\text{tor}}\) _is isomorphic to one of the following groups:_ \[\begin{array}{ll}C_{2n}&1\leq n\leq 8,\ n\neq 7\\ C_{2}\oplus C_{2n}&1\leq n\leq 8,\ n\neq 7\\ C_{3}\oplus C_{3n}&1\leq n\leq 2.\end{array}\]
2. _If_ \(K=\mathbb{Q}(\sqrt{-3}),\) _then_ \(E(L)_{\text{tor}}\) _is isomorphic to one of the groups listed above or_ \(C_{4}\oplus C_{4}.\)__
See Table 1 for every possible situation. In addition, these groups are realized except possibly the group \(C_{2}\oplus C_{16}\) (See Tables 3, 4).
**Layout.** In the second section, we provide fundamental results used in the sequel. In Section 3, we prove our first result Theorem 3.1. The key method over the course of its proof is the study of \(\operatorname{Gal}(L/K)\)-action on the \(n\)-torsion group \(E(L)[n]\) for any elliptic curve \(E/K.\) The Galois action puts certain constraints on \(E(L)_{\text{tor}}\) depending only on the structure of \(E(K)_{\text{tor}}.\)
In Section 4, our main goal is to prove Theorem 1.3. For this purpose, we advance our methods by studying all \(K\)-rational points lying on the modular curves \(X_{0}(N)\) for \(N=20,24\) (see Proposition 4.1) and certain quartic points on \(X_{1}(M,N)\) for \((M,N)=(4,8),(6,6).\)
In Section 5, we study growth for an arbitrary elliptic curve \(E/K\) (no restrictions on its \(2\)-torsion part). By using division polynomials, we describe a general algorithm which determines all possible growths of any prescribed torsion group upon quadratic base change. More precisely, given an elliptic curve \(E/K,\) and a complete list of all possible torsion groups appearing as \(E(K)_{\text{tor}}\) (as in Theorem 1.2), the algorithm determines all quadratic extensions \(L/K\) in which \(E(K)_{\text{tor}}\) grows and describes \(E(L)_{\text{tor}}\) in each case. The code for such an implementation in Magma is available **here**. Applying the algorithm for each \(K\) in \(\mathcal{A}\) provides explicit examples of elliptic curves defined over \(K\) as shown in Table 3 and Table 4.
**Notation.** Given a number field \(K\), let \(E\) be any elliptic curve defined over \(K\). Let \(E[n]=\{P\in E(\overline{K}):nP=0\}\) denote the group of \(n\)-torsion points on \(E\) and \(E(K)[p^{\infty}]\) the Sylow \(p\)-subgroup of \(E(K)\) for any prime \(p.\) We denote by \(E^{d}\) a quadratic twist of \(E\) by a square-free \(d\in K.\)
**Acknowledgement.** We are indebted to Andreas Schweizer for allowing us to use the results from his ongoing project with the first author and a multitude of e-mail responses during the preparation of this manuscript.
## 2. Auxiliary Results
In order to set the background necessary to understand the rest of this paper, we begin with the generalization of Kwon's result which allows us to bound the quadratic growth of torsion over number fields.
Let \(E\) be defined by \(y^{2}=x^{3}+Ax+B\) with \(A,B\in K\) and let \(L=K(\sqrt{d})/K\) be any quadratic extension with \(d\in K\) square-free. Notice that the Galois group \(\operatorname{Gal}(L/K)\) with generator \(\sigma\) acts on \(E(L)\) in an obvious way. For \(P\in E(L)\), let \(Q=(x,y)\) be the point \(P-\sigma(P).\) If \(P\in E(K)\) then \(Q=0\); otherwise it follows from \((\sigma(x),\sigma(y))=\sigma(Q)=-Q=(x,-y)\) that \(x,y/\sqrt{d}\) are in \(K\), and hence the point \((x,y/\sqrt{d})\) lies on the quadratic twist \(E^{d}\) defined by \(dy^{2}=x^{3}+Ax+B\). The reader should be aware that \(P-\sigma(P)\) in \(E^{d}(K)\) is understood as \((x,y/\sqrt{d})\) in the following statement.
**Proposition 2.1**.: _([13]) Let \(K\) be a number field, \(L=K(\sqrt{d})\) for \(d\in K\) a square-free, and let \(\sigma\) denote the generator of \(Gal(L/K)\). There exists a homomorphism \(h\) defined by_
\[E(L)_{\text{tor}} \xrightarrow{h}E^{d}(K)_{\text{tor}}\] \[P \mapsto P-\sigma(P)\]
_with \(\text{ker}(h)=E(K)_{\text{tor}}\) and it induces an injection \(E(L)_{\text{tor}}/E(K)_{\text{tor}}\hookrightarrow E^{d}(K)_{\text{tor}}.\)_
Proof.: See the proof when \(K=\mathbb{Q}\) in [13, Proposition 1].
It is already known that the odd-order torsion part of \(E(L)_{\text{tor}}\) can be well-understood by only studying two torsion groups which occur over \(K.\)
**Lemma 2.2**.: _([6]) If n is an odd positive integer we have_
\[E(K(\sqrt{d}))[n]\simeq E(K)[n]\oplus E^{d}(K)[n].\]
The key feature of the next result is the Weil pairing which puts certain constraints on \(E(K)_{\text{tor}}\) depending on \(K.\)
**Theorem 2.1**.: _([18]) Let \(K\) be a number field, \(E/K\) an elliptic curve, \(L\) a quadratic extension of \(K\) and \(p\) an odd prime._
1. _If_ \(E(K)[2]\) _is trivial then_ \(E(L)[2]\) _is trivial._
2. _If_ \(d\in K\)_,_ \(d\neq 0\) _then_ \(E^{d}(K)[2]\simeq E(K)[2].\)__
3. _If_ \(E(K)[p]\) _is trivial and_ \(E(L)[p]=C_{p}\oplus C_{p}\) _then_ \(K\) _contains a primitive_ \(pth\) _root of unity._
4. _If_ \(E(K)[p]\simeq C_{p}\) _and_ \(E(L)[p^{\infty}]\neq E(K)[p^{\infty}]\) _then_ \(E(L)[p]\simeq C_{p}\oplus C_{p}.\)__
5. _If_ \(E(K)[p]\simeq C_{p}\) _and_ \(E(L)[p]\simeq C_{p}\oplus C_{p}\) _then_ \(K\) _does not contain a primitive_ \(pth\) _root of unity._
6. _If_ \(E(K)[p]\simeq C_{p}\oplus C_{p}\) _then_ \(E(L)[p^{\infty}]=E(K)[p^{\infty}].\)__
The next proposition is specialized for certain CM elliptic curves which will be useful in the study of \(K\)-rational points on the modular curves of interest.
**Proposition 2.3**.: _Let \(K\) be a quadratic field and let \(E/K\) be any elliptic curve. If \(j(E)=0\) and \(p>3\) is a prime then \(E(K)_{\text{tor}}\) has no element of order \(p\). If \(j(E)=1728\) and \(p>2\) is a prime, then \(E(K)_{\text{tor}}\) has no element of order \(p\)._
Proof.: Suppose \(j(E)=0\). Twisting by a square in \(\mathcal{O}_{K}\) if necessary, we may assume that \(E\) has a model of the form \(y^{2}=x^{3}+Ax+B\) with \(A,B\in\mathcal{O}_{K}\). Note that since \(\mathcal{O}_{K}\) is a Dedekind domain, the principal ideal \((\Delta(E))\) generated by the discriminant of \(E\), has only a finite number of prime ideal divisors, and hence \(\Delta(E)\) lies in only a finite number of prime ideals of \(\mathcal{O}_{K}\).
Let \(q>3\) be a prime in \(\mathbb{Z}\). Since \(q\neq 3\), by the Chinese remainder theorem there exists an integer \(n\) satisfying
\[n+1 \equiv 2\ (\text{mod}\ q)\] \[n \equiv 2\ (\text{mod}\ 3)\]
Furthermore, \(n+3qk\) satisfies the congruences above for every integer k, and \((n,3q)=1\) by the congruences above. Hence by Dirichlet's theorem on arithmetic progressions, there are infinitely many primes in this arithmetic progression. In particular, there is a prime \(p\) satisfying the congruences above such that \(E\) has good reduction modulo a prime ideal \(\beta\) above \(p\). As \([K:\mathbb{Q}]=2\), we have \(\mathcal{O}_{K}/\beta\simeq\mathbb{F}_{p}\) or \(\mathbb{F}_{p^{2}}.\) By the comments following [20, Proposition 3.1] we have an injection of \(E(K)[\overline{p}]\) into \(E(\mathbb{F}_{p})\) or \(E(\mathbb{F}_{p^{2}}).\) But we have by [21, chapter 4] that
\[|E(\mathbb{F}_{p})| =p+1\equiv 2\not\equiv 0\ (\text{mod}\ q)\] \[|E(\mathbb{F}_{p^{2}})| =(p+1)^{2}\equiv 4\not\equiv 0\ (\text{mod}\ q)\]
as \(q\neq 2\). Hence in either case (noting \(p\neq q\)), we conclude there is no point of order \(q\) in \(E(K)_{\text{tor}}\).
Now suppose \(j(E)=1728\). If \(q\) is an odd prime, then one can argue just as in the \(j(E)=0\) case that there is no point of order \(q\).
## 3. Restrictions on Growth of Cyclic Even-order Torsion
In this section, our focus is to understand the effect of quadratic base change on \(E(K)_{\text{tor}}\) when \(E\) varies over all elliptic curves defined over a quadratic number field \(K\) with \(E(K)[2]\simeq C_{2}.\) Before proceeding further, we prove a useful lemma to narrow down certain growth in the proof of Theorem 3.1.
**Lemma 3.1**.: _Let \(K\) be a number field and let \(E/K\) be any elliptic curve with \(E(K)[2]\simeq C_{2}.\) If \(E(L)\) contains a subgroup of the form \(C_{2}\oplus C_{4}\) where \(L=K(\sqrt{d})\) for a square-free \(d\in K,\) then both \(E\) and its quadratic twist \(E^{d}\) have a \(K\)-rational \(4\)-torsion point._
Proof.: Suppose \(C_{2}\oplus C_{4}\subseteq E(L)\) such that \(C_{2}\oplus C_{4}=\langle Q,P\rangle\) where \(Q\) is a point of order \(2\) and \(P\) is a point of order \(4.\) Let \(\sigma\) denote the non-trivial element of \(\text{Gal}(L/K)\).
If \(\sigma(P)=P\) or \(-P\), then \(\sigma(2P)=2P\), so \(2P\) is the unique \(K\)-rational \(2\)-torsion point. Then \(\sigma(Q)=Q+2P\) and so \(\sigma\) maps \(P+Q\) to its inverse or to itself depending on \(\sigma(P)=P\) or \(-P\) from which the statement follows.
If \(\sigma(P)\) is different from \(P\) and \(-P\), then \(\sigma(P)+P,\sigma(P)-P\) are both non-trivial. Note that \(\sigma(P)+P\in E(K)[4]\) and \(\sigma(P)-P\in E^{d}(K)[4].\) Our aim is to show that both have exactly order \(4.\) If they have order \(2\), then \(\sigma(P)+P\) and \(\sigma(P)-P\) are both the unique \(K\)-rational \(2\)-torsion point, and hence they must be equal, which gives the contradiction \(2P=0\). Now assume that one of them has order \(4.\) By arguing similarly, we may assume that \(\sigma(P)+P\) has order \(4\) but \(\sigma(P)-P\) has order \(2.\) Since \(\sigma(P)\neq P,-P,\) this leaves only \(\sigma(P)-P=Q\) or \(Q+2P.\) In both cases, \(\sigma(P)+P\) must have order \(2\) which is a contradiction by assumption.
We are ready to state our first theorem which lists various restrictions on growth of torsion in quadratic extensions.
**Theorem 3.1**.: _Let \(K\) be a quadratic field, \(E/K\) an elliptic curve, and let \(L/K\) be any quadratic extension with \(L=K(\sqrt{d})\) for a square-free \(d\in K.\)_
1. _If_ \(E(K)[2^{\infty}]\simeq C_{2}\)_, then_ \(C_{2}\oplus C_{4}\not\subseteq E(L).\)__
2. _If_ \(E(K)_{\text{tor}}\simeq C_{4}\) _then_ \(E(L)_{\text{tor}}\not\simeq C_{4}\oplus C_{8}.\)__
3. _If_ \(E(K)_{\text{tor}}\simeq C_{4}\) _then_ \(E(L)_{\text{tor}}\not\simeq C_{16}.\)__
4. _If_ \(E(K)_{\text{tor}}\simeq C_{8}\) _and_ \(E(L)_{\text{tor}}\simeq C_{2}\oplus C_{16}\) _then_ \(E^{d}(K)_{\text{tor}}\simeq C_{4}.\)__
5. _If_ \(E(K)[2]\simeq C_{2},\) _then_ \(C_{2}\oplus C_{20}\not\subseteq E(L).\)__
6. _If_ \(E(K)_{\text{tor}}\simeq C_{4}\)_, then_ \(E(L)_{\text{tor}}\not\simeq C_{2}\oplus C_{24}.\)__
7. _If_ \(C_{32}\subseteq E(L)\) _then either_ \(E(K)\) _or_ \(E^{d}(K)\) _has a point of order_ \(16.\)__
Proof.: \((i)\) An immediate corollary of Lemma 3.1.
\((ii)\) Suppose \(E(L)_{\text{tor}}\simeq C_{4}\oplus C_{8}.\) By Lemma 3.1\(E^{d}(K)_{\text{tor}}\simeq C_{4}\) or \(C_{8}.\) On the one hand, we have \(|E(L)_{\text{tor}}/E(K)_{\text{tor}}|=8\) by assumption. On the other hand, \(E(L)_{\text{tor}}/E(K)_{\text{tor}}\) is cyclic since it embeds into \(E^{d}(K)_{\text{tor}}\) by Proposition 2.1. This forces \(E^{d}(K)_{\text{tor}}\) to be isomorphic to \(C_{8}\). But we claim that \(E(L)_{\text{tor}}/E(K)_{\text{tor}}\) cannot have a point of order \(8\). In detail, it is equivalent to showing that its embedding \(h(E(L)_{\text{tor}}/E(K)_{\text{tor}})\subseteq E^{d}(K)_{\text{tor}}\) has no element of order \(8\). Let \(E(L)_{\text{tor}}=\langle Q,R\rangle\) where \(Q\) is of order \(4\) and \(R\) is of order \(8.\) Let \(\text{Gal}(L/K)=\langle\sigma\rangle\) and let \(P\in E(L)\) be any point of order \(8.\) Then \(P=a_{1}Q+b_{1}R\) and \(\sigma(P)=a_{2}Q+b_{2}R\) where both \(a_{1},a_{2}\) are odd. But this implies that \(P-\sigma(P)\) has order at most \(4\) which gives a contradiction.
\((iii)\) [6, Theorem 5(iv)].
\((iv)\) Assume the hypothesis. Fix a \(2\)-torsion point \(Q\) and a \(16\)-torsion point \(P\) such that \(E(L)_{\text{tor}}=\langle Q,P\rangle.\) Let \(\sigma\) be the non-trivial element of \(\text{Gal}(L/K).\)
_Case 1_ : \(2P\) is \(K\)-rational. Note that \(\sigma\) maps \(P\) to a \(16\)-torsion point. So, \(\sigma(P)=aP\) or \(aP+Q\) where \(a\) is odd. Since \(P+\sigma(P)\) is fixed by \(\sigma\), we have
\(\sigma(P)=aP\). Otherwise, \(Q\in E(L)[2]\) would be \(K\)-rational. Moreover, \(\sigma(2P)=2P\) which implies that \(a=1\) or \(a=9\). If \(a=1\), then \(P\in E(K)\), a contradiction. If \(a=9\), then \(P+Q\) is defined over \(K\) and has order \(16\), a contradiction.
_Case 2_ : \(2P\) is not \(K\)-rational. Then \(E(K)[8]=\{bP+Q:b\in\{2,6,10,14\}\}\). So \(4P\) and \(12P\) are the \(K\)-rational \(4\)-torsion points. Hence the \(K\)-rational \(2\)-torsion point again is \(8P\). Notice that \(\sigma(P)=aP\) or \(aP+Q\) where \(a\) is odd. So \(2P\) goes to \(2aP\), and on the other hand to
\[\sigma(2P)=\sigma(2P+Q)+\sigma(Q)=2P+Q+Q+8P=10P.\]
It follows that \(a=5\) or \(13\). If \(\sigma(P)=5P\) or \(13P\) then \(P+Q+\sigma(P+Q)=14P\) or \(6P\) is \(K\)-rational, respectively. But this is not possible since \(2P\) is not \(K\)-rational. This leaves only the possibilities: \(\sigma(P)=5P+Q\) and \(\sigma(P)=13P+Q\). For example, if \(\sigma(P)=13P+Q\), we simply take \(13P+Q\) as the new \(P\), and call it \(\tilde{P}\). Then \(\sigma(\tilde{P})=5\tilde{P}+Q\). So we may assume \(\sigma(P)=5P+Q\) and \(E(K)_{\rm tor}=\langle 2P+Q\rangle\).
By Theorem 1.1 and Lemma 3.1, the only possibilities for \(E^{d}(K)_{\rm tor}\) are \(C_{4},C_{8}\) and \(C_{16}.\) Note that there exists an exact sequence defined as follows
\[0\rightarrow\ker(\psi)\xrightarrow{i}E(L)\xrightarrow{\psi}E(K)\times E^{d} (K)\xrightarrow{\pi}{\rm coker}(\psi)\to 0 \tag{3}\]
where \(\psi\) maps \(R=(x,y)\) to
\[\psi(R)=(R+\sigma(R),\phi(R-\sigma(R),1/\sqrt{d}))\mbox{ with }\phi(R,a)=(x,ay).\]
Restricting to the torsion part, \(\ker(\psi)\) is either trivial, \(C_{2}\) or full \(2\)-torsion. One can observe from the action of \({\rm Gal}(L/K)\) that \(\ker(\psi)=\langle 8P\rangle.\) If \(E^{d}(K)_{\rm tor}\simeq C_{8}\) or \(C_{16}\), then \({\rm coker}(\psi)\) has exponent at least \(4.\) But this contradicts [6, Theorem 3], proving the assertion.
\((v)\) Applying Lemma 2.2 and Lemma 3.1, \(E(K)\) or \(E^{d}(K)\) would have a \(20\)-torsion point, which gives a contradiction by Theorem 1.1.
\((vi)\) Suppose \(E(K)_{\rm tor}\simeq C_{4}\) but \(E(L)_{\rm tor}\simeq C_{2}\oplus C_{24}.\) Fix an \(L\)-rational \(8\)-torsion point \(P\) and a \(2\)-torsion point \(Q\) different from \(4P\). Let \(\sigma\in{\rm Gal}(L/K)\) be the non-trivial automorphism. Then \(\sigma(P)\) is an \(L\)-rational point of order \(8\), so is equal to one of the eight points: \(aP\) or \(aP+Q\) where \(a\) is odd. This implies \(\sigma(4P)=4P\), and hence \(4P\) is the unique \(K\)-rational \(2\)-torsion point on \(E\). Consequently, \(\sigma(Q)=Q+4P\) from which one can observe that \(\sigma(P)\) cannot be \(aP+Q\); otherwise \(\sigma\) would be an automorphism of order \(4\).
This leaves the case \(\sigma(P)=aP\). If \(a=1\), then \(P\) is a \(K\)-rational \(8\)-torsion point, contradicting the assumption. If \(a=5\), then \(P+Q\) is fixed by \(\sigma\) and it has order \(8\), a contradiction. If \(a=7\), then \(E^{d}(K)\) contains an \(8\)-torsion point and also a \(3\)-torsion point by Lemma 2.2, so a \(24\)-torsion point, contradicting Theorem 1.1. If \(a=3\), then \(P+Q\) is mapped to its inverse and so we are back to the previous case, proving the assertion.
\((vii)\) Let \(C_{32}\) be a subgroup of \(E(L)\). It follows from the Weil Pairing that \(E(L)[32]\simeq C_{M}\oplus C_{32}\) where \(M\) divides \(8.\) Fix a generator \(P\) of \(C_{32}\) and a generator \(Q\) of \(C_{M}\). Let \({\rm Gal}(L/K)=\langle\sigma\rangle\). Then, \(\sigma(P)\) is a point of order \(32\), so it is of the form \(aP+bQ\) where \(a\) is odd and \(b\) could be \(0\).
If \(a\equiv 1\) mod \(4\), then \(\sigma(P)+P\) is a point of order \(16\) since \(a+1\) is only divisible once by \(2\) and the order of \(Q\) is at most \(8\). Note that \(\sigma(P)+P\) is fixed by \(\sigma\), so is in \(E(K).\) If \(a\equiv 3\) mod \(4\), then we consider the point \(\sigma(P)-P\in E^{d}(K)\) that has order \(16\) for the same reason above.
## 4. Quadratic Growth of Torsion for \(K\) in \(\mathcal{A}\)
Let \(E\) be an elliptic curve over a number field \(K.\) We say that \(C\subseteq E(\overline{K})\) is a _K-rational subgroup_ of \(E\) if it is invariant under the action of \(\operatorname{Gal}(\overline{K}/K).\) Note that \(C\) might be \(K\)-rational even though it contains no \(K\)-rational points. Magma provides the defining polynomial \(f_{C}\) whose roots determine the \(x\)-coordinates of the points in \(C.\) If \(C\) is pointwise defined over a quadratic extension \(L\) of \(K,\) then \(f_{C}\) must have irreducible factors of degree at most \(2\) over \(K.\)
Let \(X_{0}(N)\) denote the modular curve whose \(K\)-rational points classify the isomorphism classes \((E,C)_{K}\) where \(E/K\) is an elliptic curve, and \(C\) is a cyclic \(K\)-rational subgroup of \(E\) of order \(N.\)
Our first goal in this section is to prove the following.
**Proposition 4.1**.: _Let \(K=\mathbb{Q}(\sqrt{D})\) with \(D=-1,-3\), and let \(L\) be any quadratic extension of \(K\). There is no elliptic curve \(E\) defined over \(K\) such that \(E(L)\) contains a cyclic \(K\)-rational subgroup of order \(N\) for \(N=20,24.\)_
Proof.: The modular curve \(X_{0}(20)\) is an elliptic curve with the model
\[X_{0}(20):y^{2}=x^{3}+x^{2}+4x+4.\]
If \(D=-1,\) then \(X_{0}(20)(K)\) has rank \(0\) and torsion \(C_{2}\oplus C_{6}\) with \(6\) rational cusps. Using Magma, we compute that \(4\) of the non-cuspidal \(K\)-rational points correspond to isomorphism classes \((E,C)_{K}\) where \(C\) is pointwise defined over an extension of \(K\) of degree at least \(4\). The remaining \(2\) non-cuspidal \(K\)-rational points correspond to isomorphism classes \((E,C)_{K}\) where \(j(E)\) is equal to \(1728.\)
In the case \(j=1728,\) Magma is not able to describe the isomorphism class. Instead we argue by way of contradiction. If there exists an elliptic curve \(E/K\) with \(j(E)=1728\) and \(C\subseteq E(L)\) a \(K\)-rational cyclic subgroup of order \(20\) where \(L=K(\sqrt{d})\) for a square-free \(d\in K,\) then \(E\) or its quadratic twist \(E^{d}\) has a point of order \(5\) over \(K\) by Lemma 2.2. In either case this contradicts Proposition 2.3, since both curves have \(j\)-invariant \(1728.\)
If \(D=-3,\) then \(X_{0}(20)(K)\) has rank \(0\) and torsion \(C_{6}\), which consists entirely of cusps. Hence, there is no elliptic curve \(E\) over \(K\) containing a cyclic \(K\)-rational subgroup of order \(20.\)
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Point & \(j(E)\) & Short Weierstrass Model for E & \(f_{C}\) \\ \hline \((\mp 2i,0)\) & \(287496\) & \((\pm 264i+77)/625,(\pm 616i+1638)/15625\) & (1,1,2,2,4) \\ \hline \((\pm 2i-2,\mp 2i-4)\) & \(287496\) & \((\pm 264i+77)/625,(\pm 616i+1638)/15625\) & (1,1,2,2,4) \\ \hline \((\pm 2i-2,\pm 2i+4)\) & \(1728\) & See below & \\ \hline \end{tabular}
\end{table}
Table 2.
The modular curve \(X_{0}(24)\) is an elliptic curve defined with the model
\[X_{0}(24):y^{2}=x^{3}-x^{2}-4x+4.\]
Given any \(K\) in \(\mathcal{A}\), \(X_{0}(24)(K)\) has rank \(0\) and torsion \(C_{2}\oplus C_{4}\) with \(8\) rational cusps. Since the torsion group consists entirely of cusps, there is no elliptic curve \(E/K\) containing a cyclic \(K\)-rational subgroup of order \(24\).
Before further proceeding, we prove a useful proposition.
**Proposition 4.2**.: _Let \(K\) be in \(\mathcal{A}\) and \(E/K\) an elliptic curve with \(E(K)[2]\simeq C_{2}.\) If \(p\) is an odd prime dividing \(|E(L)_{tor}|\) where \(L=K(\sqrt{d})\) for a square-free \(d\in K,\) then \(p\leq 5\) and \(E(L)[p^{\infty}]\simeq C_{3},C_{5}\) or \(C_{3}\oplus C_{3}.\)_
Proof.: Suppose \(E(L)[p^{\infty}]\simeq C_{n}\) where \(n=p^{k}\) with \(p\) an odd prime and \(k\geq 1.\) Since the automorphism group \(\operatorname{Aut}(C_{n})\) is cyclic, the non-trivial element \(\sigma\) of \(\operatorname{Gal}(L/K)\) acts as identity or as multiplication by \(-1\) on \(E(L)[p^{\infty}].\) So \(E\) or its quadratic twist \(E^{d}\) has a \(K\)-rational point of order \(2n.\) This shows \(n\) is at most \(5,\) i.e. \(p\leq 5.\) Moreover, \(C_{n}\oplus C_{n}\subseteq E(L)[p^{\infty}]\) can only happen if \(L\) has a primitive \(n\)th root of unity \(\mu_{n}\). Let \(\varphi\) denote Euler's totient function. It follows that
\[[\mathbb{Q}(\mu_{n}):\mathbb{Q}]=\varphi(n)=(p-1)p^{k-1}\leq[L:\mathbb{Q}]=4.\]
Since \(p\) is odd, the inequality holds when \(n=3\) or \(5.\) If \(n=5\) then \(L=\mathbb{Q}(\mu_{5}).\) This means that \(\operatorname{Gal}(L/\mathbb{Q})\) is cyclic. Hence \(\mathbb{Q}(\sqrt{5})\) is the unique intermediate subfield of \(L\), a contradiction since \(K\subseteq L.\) Therefore, the largest value of \(n\) with \(C_{n}\oplus C_{n}\subseteq E(L)\) is \(n=3.\) It remains to show that \(C_{9}\not\subseteq E(L)[3^{\infty}].\) By way of contradiction, we assume that \(C_{9}\subseteq E(L)[3^{\infty}].\) It follows from Lemma 2.2 that \(E(K)\) or \(E^{d}(K)\) has a point of order \(18\) which is not possible by Theorem 1.2.
**Proposition 4.3**.: _Let \(K\) be in \(\mathcal{A},\)\(E/K\) any elliptic curve with \(E(K)[2]\simeq C_{2}\) and let \(L=K(\sqrt{d})\) be any quadratic extension of \(K\) for a square-free \(d\in K\)._
1. _If_ \(K=\mathbb{Q}(\sqrt{-3})\) _then_ \(C_{4}\oplus C_{8}\not\subseteq E(L).\)__
2. _If_ \(K=\mathbb{Q}(\sqrt{-1})\) _then_ \(C_{4}\oplus C_{4}\not\subseteq E(L).\)__
3. _If_ \(E(K)_{tor}\simeq C_{6}\) _then_ \(E(L)_{tor}\not\simeq C_{6}\oplus C_{6}.\)__
4. \(C_{M}\oplus C_{12}\not\subseteq E(L)\) _where_ \(M=3,4.\)__
5. \(C_{2}\oplus C_{24}\not\subseteq E(L).\)__
6. \(C_{30}\not\subseteq E(L).\)__
7. \(C_{32}\not\subseteq E(L).\)__
Proof.: _(1)_ Let \(K=\mathbb{Q}(\sqrt{-3})\) and suppose \(C_{4}\oplus C_{8}\subseteq E(L)\). In particular, full \(4\)-torsion \(E[4]\) is contained in \(E(L)\) and so \(L=K(\sqrt{-1})\) by the Weil Pairing. However, the modular curve \(X_{1}(4,8)\) is isomorphic (over \(\mathbb{Q}(\sqrt{-1})\)) to the elliptic curve with the Cremona label \(32\)a\(2\)[17, Lemma 13] and we compute
\[X_{1}(4,8)(L)=X_{1}(4,8)(\mathbb{Q}(\sqrt{-3},\sqrt{-1}))\simeq C_{2}\oplus C_ {4}\]
which consists entirely of cusps. Therefore, we get a contradiction.
_(2)_ See [4, Proposition 10.2].
_(3)_ Suppose \(E(K)_{\rm tor}\simeq C_{6}\) but, to the contrary \(E(L)_{\rm tor}\simeq C_{6}\oplus C_{6}.\) Then \(K=\mathbb{Q}(\sqrt{-1})\) by Theorem 2.1(v) and in particular \(L=\mathbb{Q}(\sqrt{-1},\sqrt{-3})\) by the Weil Pairing. Then \(E\) is induced by a non-cuspidal \(L\)-rational point lying on the modular curve \(X_{1}(6,6)\) that is defined over \(\mathbb{Q}(\sqrt{-3}).\) By [17, Lemma 14] it has an affine model over \(\mathbb{Q}\) such that
\[X_{1}(6,6):y^{2}=x^{3}+1\]
where its cusps satisfy: \(x(x-2)(x+1)(x^{2}-x+1)(x^{2}+2x+4)=0.\) We compute using Magma that
\[rk(X(L)) =rk(X(\mathbb{Q}(\sqrt{-3}))+rk(X^{(-1)}(\mathbb{Q}(\sqrt{-3}))\] \[=rk(X(\mathbb{Q}))+rk(X^{(-3)}(\mathbb{Q}))+rk(X^{(-1)}(\mathbb{Q }))+rk(X^{(3)}(\mathbb{Q}))\] \[=0.\]
and \(X(L)_{\rm tor}\simeq C_{2}\oplus C_{6}.\) However, all these torsion points are cusps, which gives a contradiction.
_(4)_ See [1, Theorem 8].
_(5)_ By the previous assertion _(4)_, it suffices to show that \(E(L)_{\rm tor}\) cannot be isomorphic to \(C_{2}\oplus C_{24}.\) On the contrary, let \(E(L)_{\rm tor}\simeq C_{2}\oplus C_{24}.\) By Theorem 3.1(i),(vi) and Lemma 2.2, we may assume that \(E(K)_{\rm tor}\simeq C_{8}\) with a \(K\)-rational 3-torsion point \(P\) on its quadratic twist \(E^{d}\). Then \(E(K)_{\rm tor}\oplus\langle P\rangle\) is isomorphic to a \(\operatorname{Gal}(L/K)\)-invariant cyclic subgroup \(C\) of \(E(L)\) with order \(24\). But, this is not possible by Proposition 4.1.
_(6)_ Suppose \(C_{30}\subseteq E(L).\) Replacing \(E\) with its quadratic twist \(E^{d}\) if necessary, we may assume that \(E(K)\) has a 3-torsion point \(Q\) with a \(K\)-rational 5-torsion point \(P\) on its quadratic twist \(E^{d}\). As argued in _(5)_ above, \(C:=\langle Q\rangle\oplus\langle P\rangle\) forms a \(\operatorname{Gal}(L/K)\)-invariant cyclic subgroup of \(E(L)_{\rm tor}\) with order \(15.\) But this gives a contradiction due to [18, Theorem 8].
_(7)_ It follows from Theorem 3.1(vii) and Theorem 1.2.
**Proof of Theorem 1.3.** The proof follows from Theorem 3.1 and Proposition 4.3 using Proposition 4.1 and Proposition 4.2.
## 5. Examples of Growth
Given an elliptic curve \(E\) defined over \(K\) and a list of all possible torsion structures over \(K\) (as in Theorem 1.2), one can describe all quadratic extensions \(L/K\) in which \(E(K)_{\rm tor}\) grows and describe \(E(L)_{\rm tor}\) in each such case. See the Magma **code** for its implementation. We give an informal description of the algorithm below.
Notation: \(\psi_{n}\) denotes the \(n\)th division polynomial of \(E\).
Input: \(d=-1\) or \(-3\) and an elliptic curve \(E/K\) where \(K=\mathbb{Q}(\sqrt{d})\).
Output: List of all quadratic extensions where torsion grows and the torsion structure in each case.
1. Let \(S:=\{2,3,5,7\}\) be the set of all primes \(p\) for which there is an elliptic curve \(E/K\) with \(p\) dividing \(|E(K)_{\rm tor}|\).
2. For each prime \(p\) in \(S\), determine the division polynomial \(f_{p}\) of smallest degree necessary to detect growth of the \(p\)-part of \(E(K)_{\rm tor}\) in a quadratic extension of \(K\):
    1. Find the primary decomposition of \(E(K)_{\rm tor}\).
    2. For each prime \(p\) in \(S\), count the number \(S_{p}\) of \(p\)-summands in the \(p\)-part of \(E(K)_{\rm tor}\).
        1. For \(p=2\):
            * If \(S_{p}=0\) then \(E(K)[2^{\infty}]\) does not grow in a quadratic extension by Theorem 2.1.
            * If \(S_{p}\neq 0\) and \(E(K)[2^{\infty}]\simeq[2^{a},2^{b}]\) with \(a<b\) then \(f_{p}=\psi_{2^{b}}\).
        2. For \(p=3\):
            * If \(S_{p}=0\) or \(1\) then if any growth occurs, by Lemma 2.2, \(E(K)[p]\) must grow by Theorem 2.1, so we let \(f_{p}:=\psi_{p}\).
            * If \(S_{p}=2\) then the \(p\)-part cannot grow in a quadratic extension by Lemma 2.2, so let \(f_{p}=1\).
        3. For \(p>3\):
            * If \(S_{p}=0\) then if any growth occurs, by Lemma 2.2, \(E(K)[p]\) must grow by Theorem 2.1, so we let \(f_{p}:=\psi_{p}\).
            * If \(S_{p}=1\) then the \(p\)-part cannot grow in a quadratic extension by Proposition 4.2, so let \(f_{p}=1\).
            * If \(S_{p}=2\) then the \(p\)-part cannot grow in a quadratic extension by Lemma 2.2, so we ignore it.
3. For each \(p\) in \(S\) (a sketch of the linear-factor branch of this step, for \(p=3\), is given after this list):
    1. Factor \(f_{p}\) over \(K\).
    2. For each factor \(g\) of \(f_{p}\) over \(K\):
        1. If \(\deg(g)=1\):
            * We may write \(g=x-c\). Compute \((c,d)\) on \(E\). If \(d\not\in K\), then the torsion grows in \(L=K(d)\) and Magma can compute \(E(L)_{\rm tor}\).
        2. If \(\deg(g)=2\):
            * Construct the splitting field \(L\) of \(g\) over \(K\), and let \(c\) be a root of \(g\) in \(L\). Compute \((c,d)\) on \(E\). If \(d\in L\), then the torsion grows in \(L\) and Magma can compute \(E(L)_{\rm tor}\).
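As a concrete illustration of the factoring step, the sketch below carries out the linear-factor branch of step 3 for \(p=3\) over \(K=\mathbb{Q}(i)\) in Python with SymPy rather than Magma; the closed form \(\psi_{3}=3x^{4}+6Ax^{2}+12Bx-A^{2}\) is the standard \(3\)-division polynomial of \(y^{2}=x^{3}+Ax+B\), and the example curve and all function names are illustrative choices of ours, not data from the paper.

```python
from sympy import I, Rational, degree, factor_list, simplify, solve, symbols

x = symbols('x')

def growth_candidates_p3(A, B):
    """Linear-factor branch of step 3 for p = 3 over K = Q(i):
    factor psi_3 over K and, for each root c in K, return c^3 + A*c + B.
    If that value is not a square in K, the 3-torsion point (c, y) becomes
    rational in the quadratic extension L = K(sqrt(c^3 + A*c + B))."""
    psi3 = 3*x**4 + 6*A*x**2 + 12*B*x - A**2        # psi_3 for y^2 = x^3 + A x + B
    _, factors = factor_list(psi3, x, extension=I)   # factorization over Q(i)
    candidates = []
    for g, _mult in factors:
        if degree(g, x) == 1:
            c = solve(g, x)[0]
            candidates.append(simplify(c**3 + A*c + B))
    return candidates

# Illustrative curve y^2 = x^3 - 2 over Q(i): the outputs -2 and 6 signal growth
# of 3-torsion in Q(i, sqrt(-2)) and Q(i, sqrt(6)), respectively.
print(growth_candidates_p3(Rational(0), Rational(-2)))
```

Quadratic factors of \(f_{p}\) (step 3(b)ii) and the primes \(p=2,5,7\) are handled analogously; the authors' Magma implementation covers the full algorithm.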
**Remark 5.1**.: _As an application of the algorithm, one can see the tables in Appendix for explicit examples of elliptic curves defined over each individual \(K\) in \(\mathcal{A}\)._ |
2307.00817 | $β$-decay half-lives as an indicator of shape-phase transition in
neutron-rich Zr isotopes with particle-vibration coupling effect | [Background] $\beta$-decay half-life is sensitive to the shell structure near
the Fermi levels. Nuclear deformation thus impacts the $\beta$-decay
properties. [Purpose] A first-order shape-phase transition in neutron-rich Zr
isotopes is predicted by some models. We investigate the $\beta$-decay
half-lives of neutron-rich nuclei around $^{110}$Zr, where the shape-phase
transition is predicted to occur, to see if the $\beta$-decay half-life can be
an indicator of the shape changes. [Method] The proton-neutron quasiparticle
random-phase approximation (RPA) is adopted to calculate the Gamow-Teller
transitions. In addition, we apply the quasiparticle phonon-vibrational
coupling (PVC) to consider the phonon couplings. [Results] The spherical and
oblate configurations give similar half-lives but shorter ones than the prolate
configuration at the RPA level. The PVC effect further reduces the half-lives
in general, but the effect is smaller for the deformed configuration than that
for the spherical one. As a result, it makes the shape change from the oblate
configuration to the spherical configuration visible. Therefore, a sudden
shortening of $\beta$-decay half-lives is always found at the nuclear shape
changes. [Conclusions] $\beta$-decay half-life is an indicator of the
shape-phase transition. The shape mixing and the roles of the triaxial
deformation are subject to study in the future. | Kenichi Yoshida, Yifei Niu, Futoshi Minato | 2023-07-03T07:57:14Z | http://arxiv.org/abs/2307.00817v1 | # \(\beta\)-decay half-lives as an indicator of shape-phase transition in neutron-rich Zr isotopes with particle-vibration coupling effect
###### Abstract
**Background**: \(\beta\)-decay half-life is sensitive to the shell structure near the Fermi levels. Nuclear deformation thus impacts the \(\beta\)-decay properties.
**Purpose**: A first-order shape-phase transition in neutron-rich Zr isotopes is predicted by some models. We investigate the \(\beta\)-decay half-lives of neutron-rich nuclei around \({}^{110}\)Zr, where the shape-phase transition is predicted to occur, to see if the \(\beta\)-decay half-life can be an indicator of the shape changes.
**Method**: The proton-neutron quasiparticle random-phase approximation (RPA) is adopted to calculate the Gamow-Teller transitions. In addition, we apply the quasiparticle phonon-vibrational coupling (PVC) to consider the phonon couplings.
**Results**: The spherical and oblate configurations give similar half-lives but shorter ones than the prolate configuration at the RPA level. The PVC effect further reduces the half-lives in general, but the effect is smaller for the deformed configuration than that for the spherical one. As a result, it makes the shape change from the oblate configuration to the spherical configuration visible. Therefore, a sudden shortening of \(\beta\)-decay half-lives is always found at the nuclear shape changes.
**Conclusions**: \(\beta\)-decay half-life is an indicator of the shape-phase transition. The shape mixing and the roles of the triaxial deformation are subject to study in the future.
## I Introduction
The physics of exotic nuclei has been one of the major subjects in the field of nuclear science with the upgrading and construction of radioactive-ion (RI) beam accelerator facilities around the world. Recent progress in the development of experimental techniques for spectroscopic studies has unveiled the nuclear structure of exotic nuclei [1], and much interest has been attracted to how the shape of a nucleus changes as a function of the number of neutrons and protons.
Empirical observables revealing the evolution of nuclear shape are the excitation energies of the \(2^{+}_{1}\) and \(4^{+}_{1}\) states and their ratio together with the \(E2\) transition strengths. To explore the evolution of nuclear shells and deformations, the SEASTAR project [2] has been undertaken at RIKEN RIBF, aiming at a systematic search for new \(2^{+}\) energies in the wide range of neutron-rich nuclei. Besides that, the two-neutron separation energies, the monopole transition strengths, and the isotope shifts also reflect the structural changes of neutron-rich nuclei [3]. Nuclear deformation also has a substantial impact on the high-frequency excitation modes, such as in the photoabsorption cross-sections [4; 5].
The Zr isotopes with \(A\simeq 100\) have been of theoretical and experimental interest in nuclear structure as a region of competition between various coexisting prolate, oblate, and spherical nuclear shapes [3]. The first-order phase transition occurs uniquely in this region, while we usually see a gradual change of deformation with an increase in the neutron/proton number in other regions such as in the rare-earth nuclei. The mean-field calculations rooted in nuclear density-functional theory (DFT) [6; 7], the macroscopic-microscopic calculation [8] as well as the recent shell model calculation [9] describe well the sudden change from the spherical to the deformed shape at \({}^{100}\)Zr. The deformed region has been confirmed up to \({}^{110}\)Zr by observing a low \(E(2^{+}_{1})\) value and the \(R_{4/2}\) value being greater than three [10]. Furthermore, the calculations [6; 11; 7; 12] predict the shape transition from the deformed to the spherical configuration around \(N=74\).
The \(\beta\)-decay half-life is one of the most experimentally accessible physical quantities at RI beam facilities and plays a decisive role in determining the time scale of the \(r\)-process nucleosynthesis [13]. Observed short half-lives around the \(A\simeq 110\) region speed up the \(r\)-matter flow [14]. There has been a considerable amount of work on the role of nuclear deformation in the Gamow-Teller (GT) strength distributions [15; 16; 17; 18; 19; 20; 21; 22], and it has been found that nuclear deformation plays an important role in the \(\beta\)-decay half-lives. These works are, however, based on the random-phase approximation (RPA) that considers coherent one-particle-one-hole excitations. To understand nuclear excitations quantitatively, one sometimes needs to take into account effects beyond the RPA, namely higher-order configurations such as phonon-coupling effects and coherent two-particle
two-hole excitations. Important roles of such beyond-RPA effects have been recognized also for the GT strengths, which have a crucial influence on \(\beta\)-decays. The PVC effect is essential for reproducing the width of GT resonances [23; 24; 25] and improving the \(\beta\)-decay half-lives [26; 27; 28]. We propose in this work the \(\beta\)-decay half-lives as an indicator of the shape-phase transition, which may have an impact on the \(r\)-process. We demonstrate this within the Skyrme Hartree-Fock-Bogoliubov (HFB) approach and the proton-neutron quasiparticle random-phase approximation (pnQRPA) under the condition of an axially deformed shape. We also show that one can confirm the shape-phase transition of neutron-rich Zr isotopes from the \(\beta\)-decay half-lives even in the presence of the phonon-coupling effect within the quasiparticle-vibration coupling (QPVC).
The paper is organized as follows. In Sec. II, we briefly explain the models for evaluating the \(\beta\)-decay half-lives. In Sec. III, we show the results and discuss the roles of nuclear deformation and the effects of phonon coupling. Section IV summarizes the paper.
## II Nuclear energy-density functional method for \(\beta\)-decay properties
### Skyrme Hartree-Fock-Bogoliubov approach for nuclear deformation
In the framework of the nuclear energy-density functional (EDF) method we employ, the ground state of a mother nucleus is described by solving the HFB equation [29]. The single-particle and pair Hamiltonians are given by the functional derivative of the EDF with respect to the particle density and the pair density, respectively. An explicit expression of the Hamiltonians is found in the Appendix of Ref. [30]. The average particle number is fixed at the desired value by adjusting the chemical potential. Assuming the system is axially symmetric, the HFB equation is block diagonalized according to the quantum number \(\Omega\), the \(z\)-component of the angular momentum.
### Proton-neutron quasiparticle random-phase approximation
Since the details of the formalism can be found in Refs. [31; 32], here we briefly recapitulate the basic equations relevant to the present study. The excited states \(|f\rangle\) in a daughter nucleus are described as one-phonon excitations built on the ground state \(|\)RPA\(\rangle\) of the mother nucleus as
\[|f\rangle = \hat{\Gamma}^{\dagger}_{f}|\text{RPA}\rangle, \tag{1}\] \[\hat{\Gamma}^{\dagger}_{f} = \sum_{\alpha\beta}\left\{X^{f}_{\alpha\beta}\hat{a}^{\dagger}_{ \alpha,\text{n}}\hat{a}^{\dagger}_{\beta,\text{p}}-Y^{f}_{\alpha\beta}\hat{a} _{\beta,\text{p}}\hat{a}_{\alpha,\text{n}}\right\}, \tag{2}\]
where \(\hat{a}^{\dagger}_{\text{n}}(\hat{a}^{\dagger}_{\text{p}})\) and \(\hat{a}_{\text{n}}(\hat{a}_{\text{p}})\) are the neutron (proton) quasiparticle (labeled by \(\alpha\) and \(\beta\)) creation and annihilation operators that are defined in terms of the solutions of the HFB equation with the Bogoliubov transformation. The phonon states, the amplitudes \(X^{f},Y^{f}\) and the vibrational frequency \(\omega_{f}\), are obtained in the pnQRPA with a cutoff at 60 MeV. The residual interactions entering into the pnQRPA equation are given by the EDF self-consistently except for the \(J^{2}\) term: the \(J^{2}\) term in the EDF is neglected in the HFB calculation but included in the pnQRPA calculation.
### Quasiparticle vibration coupling in spherical nuclei
The QPVC model includes correlations beyond the spherical pnQRPA model by taking into account the quasiparticle-phonon coupling. The self-energy of pnQRPA states is obtained by considering the coupling to doorway states consisting of a two-quasiparticle excitation coupled to a collective vibration. The properties of these collective vibrations, i.e., phonons \(|nL\rangle\), are obtained by computing the QRPA response for states of natural parity \(J^{\pi}=0^{+}\), \(1^{-}\), \(2^{+}\), \(3^{-}\), \(4^{+}\), \(5^{-}\), and \(6^{+}\); phonons with energies below 20 MeV that absorb more than 5% of the non-energy-weighted isoscalar or isovector sum-rule (NEWSR) strength are included in the model space. The self-energy of the pnQRPA state \(|f\rangle\) is given as
\[\Sigma_{f}(E)=\sum_{\alpha\beta\alpha^{\prime}\beta^{\prime}}\left[W^{\downarrow}_{\alpha\beta,\alpha^{\prime}\beta^{\prime}}(E)X^{f}_{\alpha\beta}X^{f}_{\alpha^{\prime}\beta^{\prime}}+W^{\downarrow*}_{\alpha\beta,\alpha^{\prime}\beta^{\prime}}(-E)Y^{f}_{\alpha\beta}Y^{f}_{\alpha^{\prime}\beta^{\prime}}\right], \tag{3}\]
where \(W^{\downarrow}_{\alpha\beta,\alpha^{\prime}\beta^{\prime}}(E)\) represents the spreading terms associated with the coupling of two-quasiparticle configurations with the doorway states, and the detailed expressions are given in Ref. [24]; \(X^{f}\) and \(Y^{f}\) are the forward and backward pnQRPA amplitudes, respectively, as defined in the last subsection but for the spherical case. To calculate the \(\beta\)-decay half-lives, we use Gaussian smearing to get the GT strength distribution,
\[S(E)=\sum_{n}\frac{1}{\sigma_{n}\sqrt{2\pi}}e^{-\frac{(E-E_{n}-\Delta E_{n})^{ 2}}{2\sigma_{n}^{2}}}B_{n}, \tag{4}\]
where \(\sigma_{n}=(\frac{\Gamma_{n}}{\alpha}+\eta)/\sqrt{2\text{ln}2}\), with \(\Delta E_{n}=\text{Re}\Sigma_{n}(E)\) and \(\Gamma_{n}=-2\text{Im}\Sigma_{n}(E)\), and \(B_{n}\) is the pnQRPA transition probability for state \(|n\rangle\). Here \(\eta\) is the averaging parameter in \(W^{\downarrow}\), introduced to avoid divergences and taken as 200 keV. Details of the QPVC formulas can be found in Refs. [24; 27].
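As an illustration of how Eq. (4) is evaluated in practice, the short Python sketch below sums the Gaussian contributions of the individual pnQRPA states. It is not part of the original calculation; the array names are placeholders, and the shifts and widths are assumed to have been precomputed from the self-energy as prescribed below Eq. (4).

```python
import numpy as np

def smeared_strength(E, E_n, B_n, dE_n, sigma_n):
    """Evaluate the Gaussian-smeared GT strength S(E) of Eq. (4).

    E       : energies at which S(E) is evaluated (MeV), 1-D array
    E_n     : pnQRPA eigen-energies of the GT states (MeV)
    B_n     : pnQRPA transition probabilities for each state
    dE_n    : energy shifts Delta E_n = Re Sigma_n(E)
    sigma_n : Gaussian widths built from Gamma_n = -2 Im Sigma_n(E) and the
              averaging parameter eta (= 200 keV)
    """
    E = np.atleast_1d(E)[:, None]                       # shape (nE, 1) for broadcasting
    centroid = np.asarray(E_n) + np.asarray(dE_n)       # shifted state energies
    sig = np.asarray(sigma_n)
    norm = 1.0 / (sig * np.sqrt(2.0 * np.pi))
    gauss = np.exp(-(E - centroid) ** 2 / (2.0 * sig ** 2))
    return np.sum(norm * gauss * np.asarray(B_n), axis=1)
```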
### Calculation of the \(\beta\)-decay half-lives
The \(\beta\)-decay half-life \(T_{1/2}\) can be calculated with Fermi's golden rule as [33],
\[\frac{1}{T_{1/2}} =\frac{\lambda_{\beta}}{\log 2}\] \[=\frac{(g_{A}/g_{V})_{\text{eff}}^{2}}{D}\sum_{E_{f}^{*}<Q_{\beta} }f(Z,Q_{\beta}-E_{f}^{*})|\langle f|\hat{F}|\text{RPA}\rangle|^{2}, \tag{5}\]
where \(D=6147.0\) s and we set \((g_{A}/g_{V})_{\text{eff}}=1\), rather than its actual value of 1.26, to account for the quenching of the spin matrix elements in nuclei. The transition matrix element for the GT operator \(\langle f|\hat{F}|\text{RPA}\rangle\) is evaluated in the quasi-boson approximation as \(\langle f|\hat{F}|\text{RPA}\rangle\simeq\langle 0|[\hat{\Gamma}_{f},\hat{F}]|0\rangle\), where \(|0\rangle\) denotes the HFB ground state. The Fermi integral \(f(Z,Q_{\beta}-E_{f}^{*})\) in Eq. (5), including screening and finite-size effects, is given by
\[f(Z,W_{0})=\int_{1}^{W_{0}}pW(W_{0}-W)^{2}\lambda(Z,W)dW, \tag{6}\]
with
\[\lambda(Z,W)=2(1+\gamma)(2pR)^{-2(1-\gamma)}e^{\pi\nu}\left|\frac{\Gamma( \gamma+\text{i}\nu)}{\Gamma(2\gamma+1)}\right|^{2}, \tag{7}\]
where \(\gamma=\sqrt{1-(\alpha Z)^{2}}\), \(\nu=\alpha ZW/p\), \(\alpha\) is the fine-structure constant, and \(R\) is the nuclear radius. \(W\) is the total energy of the \(\beta\) particle, \(W_{0}\) is the total energy available in \(m_{e}c^{2}\) units, and \(p=\sqrt{W^{2}-1}\) is the momentum in \(m_{e}c\) units [33]. Here, the energy released in the transition from the ground state of the target nucleus to an excited state in the daughter nucleus is given approximately by [34]
\[Q_{\beta}-E_{f}^{*}\simeq\lambda_{\nu}-\lambda_{\pi}+\Delta M_{n-H}-\omega_{f}. \tag{8}\]
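To make the half-life evaluation concrete, the following Python sketch implements Eqs. (5)-(7) by direct numerical integration. This is a minimal illustration rather than the code used in this work: the nuclear radius \(R\) must be supplied in units of the electron Compton wavelength, the Q-values and B(GT) values are assumed to be available as arrays, and screening corrections are omitted.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as cgamma   # accepts complex arguments

ALPHA = 1.0 / 137.036                        # fine-structure constant
MEC2 = 0.511                                 # electron rest energy (MeV)

def coulomb_lambda(Z, W, R):
    """lambda(Z, W) of Eq. (7); W and p in m_e c^2 / m_e c units, R in hbar/(m_e c)."""
    p = np.sqrt(W ** 2 - 1.0)
    gam = np.sqrt(1.0 - (ALPHA * Z) ** 2)
    nu = ALPHA * Z * W / p
    gratio = abs(cgamma(gam + 1j * nu) / cgamma(2.0 * gam + 1.0)) ** 2
    return (2.0 * (1.0 + gam) * (2.0 * p * R) ** (-2.0 * (1.0 - gam))
            * np.exp(np.pi * nu) * gratio)

def fermi_integral(Z, W0, R):
    """f(Z, W0) of Eq. (6), integrated over the total beta-particle energy W."""
    integrand = lambda W: np.sqrt(W ** 2 - 1.0) * W * (W0 - W) ** 2 * coulomb_lambda(Z, W, R)
    value, _ = quad(integrand, 1.0, W0)
    return value

def half_life(Z, Q_minus_Ef, BGT, R, gA_over_gV=1.0, D=6147.0):
    """T_1/2 from Eq. (5); Q_minus_Ef (MeV) and BGT are arrays over final states."""
    inv_T = 0.0
    for q, b in zip(Q_minus_Ef, BGT):
        if q > 0.0:                          # only states inside the Q window contribute
            W0 = 1.0 + q / MEC2              # total available energy in m_e c^2 units
            inv_T += (gA_over_gV ** 2 / D) * fermi_integral(Z, W0, R) * b
    return 1.0 / inv_T
```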
### EDF employed in the numerical calculations
We employ in the actual calculations a Skyrme-type EDF for the particle-hole channel. The SkM* functional [35] is mainly used for the present investigation, and the SLy4 functional [36] is used to supplement the discussion. The pairing is considered by using the mixed-type contact interaction
\[V_{\text{pp}}(\mathbf{r},\mathbf{r}^{\prime})=V_{0}\left[1-\frac{1}{2}\frac{\rho(\mathbf{ r})}{\rho_{0}}\right]\delta(\mathbf{r}-\mathbf{r}^{\prime}) \tag{9}\]
with \(V_{0}=-225\) MeV fm\({}^{3}\) and \(-290\) MeV fm\({}^{3}\) for the SkM* and SLy4 functionals, respectively, and \(\rho(\mathbf{r})\) and \(\rho_{0}\) being the isoscalar density and the saturation density \(0.16\) fm\({}^{-3}\). The pairing strengths in the deformed HFB calculation are determined to be consistent with the pairing energy of the spherical HFB calculation used for QPVC, where the strengths are adjusted to the experimental pairing gap of \({}^{114}\)Zr obtained from three-point formulas. In the pnQRPA calculations, we include the proton-neutron pairing interaction of the form of Eq. (9) with the same strength.
## III Results and discussion
Figure 1 shows the potential energy surfaces of the neutron-rich Zr isotopes with \(N=72\)-\(78\) calculated by using the SkM* functional, where the shape transition is predicted to occur with an increase in the neutron number [6; 7; 8; 11; 37]. The prolate and oblate configurations compete in energy in \({}^{112,114}\)Zr, while the spherical and oblately deformed configurations compete in energy in \({}^{116,118}\)Zr. We find a similar feature in the results obtained by employing the SLy4 functional, where the spherical and oblate configurations compete in energy. A standard probe of the shape change from the prolate to oblate deformations is a sign change of the spectroscopic quadrupole moment of the \(2_{1}^{+}\) state. However, it is challenging to measure the spectroscopic quadrupole moment for these neutron-rich nuclei [38].
The \(\beta\)-decay half-life is an experimentally accessible quantity even for very neutron-rich nuclei, and the calculated half-lives are shown in Fig. 2. The observed half-lives up to \(N=72\) are well reproduced by the calculation using the SkM* functional with the prolate configuration. The half-lives calculated assuming the prolate shape shorten monotonically beyond \(N=72\), whereas a sudden drop occurs at \(N=74\) when the nuclear shape changes to the oblate deformation. In the case of SLy4, the results underestimate the measurements. However, we see the half-lives for the
Figure 1: Potential energy surface (zero at the spherical configuration) of \({}^{112-118}\)Zr obtained by employing the SkM* functional.
oblate configuration are shorter than for the prolate configuration, as in the case of SkM*.
We now discuss the mechanism behind the shortening of the half-lives caused by the shape change from prolate to oblate deformation. Figure 3 shows the distributions of the partial decay rate associated with the GT transitions in \({}^{112,114}\)Zr. The GT states appear at low energies for the oblate configuration. In \({}^{114}\)Zr, the GT strengths for the oblate configuration are larger than those for the prolate configuration. The GT strengths for the spherical configuration appear at relatively higher energies but are much larger than those for the deformed configurations, leading to half-lives as short as those for the oblate configuration.
We show in Figs. 2 and 3 the results including the quasiparticle-phonon coupling for the spherical configuration, denoted as QPVC. Compared with the half-lives for the spherical configuration, those obtained with QPVC are shortened. This is because the GT states couple with other phonon states, shifting part of the strength distribution to lower energies. We will discuss the mechanism in more detail later on. The PVC effect is considered to be weaker in deformed nuclei than in spherical nuclei because the quadrupole correlation is mostly described by the deformed mean field [41].
Let us study the effect of PVC in the present case. The low-lying phonon excitations are shown in Fig. 4. The first-excited quadrupole state appears at 1.7 MeV with a strength of \(6.4\times 10^{3}\) fm\({}^{4}\) for the spherical configuration, while the \(K=2\) state is located around 0.6 MeV with a strength of \(1.3\times 10^{3}\) fm\({}^{4}\) for the prolate configuration. For the oblate configuration, the strength is \(\sim 6.0\times 10^{3}\) fm\({}^{4}\), which is much larger than that for the prolate configuration because these nuclei show softness against the triaxial deformation [42]. We also show in the right panel of Fig. 4 the octupole excitations, which are the major states coupling to the GT states. The calculated lowest-lying octupole state appears at 1.1 MeV in both the spherical and oblate configurations, while the strengths for the oblate and prolate configurations are more than one order of magnitude smaller than in the case of the spherical configuration. Since \({}^{114}\)Zr behaves differently for spherical and deformed shapes with respect to the quadrupole and octupole strengths, we need to investigate the interplay of quadrupole and octupole phonons in PVC.
To distinguish the roles of quadrupole and octupole phonons in PVC for \({}^{114}\)Zr, we consider a simple model. In this simple model, we include neither the pairing correlations nor the momentum-dependent interactions in the PVC vertex calculation, so that the transition densities of the phonons can be used directly in the PVC vertex calculation [43]. With this approximation, we can estimate the PVC effect for deformed configurations by using the phonon energies of the deformed configurations and rescaling the transition densities from the spherical configuration to the deformed one to adjust the transition
Figure 3: Distributions of the partial-decay rates in \({}^{112}\)Zr (upper) and \({}^{114}\)Zr (lower) as functions of the excitation energy with respect to the ground state of the mother nucleus. The RPA results for the prolate, oblate, and spherical configurations are shown together with the QPVC result for the spherical configuration. The RPA results are smeared by the Lorentzian function with a width of \(\Gamma=0.4\) MeV.
Figure 2: \(\beta\)-decay half-lives of the Zr isotopes obtained by employing the SkM* (upper) and SLy4 (lower) functionals. Filled symbols indicate the lowest energy configuration. The calculated half-lives are compared with the experimental data [39] and the FRDM+QRPA calculation [40].
strength. The corresponding results are shown in Tab. 1.
From the spherical to the oblate configuration, the lowest quadrupole-phonon energy is shifted downwards and the strength increases, which should give a stronger PVC effect for the oblate configuration. This is confirmed in the simple model by including only the lowest quadrupole phonon in the PVC calculation, which gives 3.9 ms for the spherical configuration and 2.6 ms for the oblate configuration, compared with 6.4 ms in the RPA calculation. However, the lowest octupole-phonon energy is nearly the same, while the strength is reduced by more than one order of magnitude, which should give a smaller PVC effect for the oblate configuration. This is confirmed by including only the lowest octupole phonon in the PVC calculation, which gives 3.9 ms for the spherical configuration and 5.4 ms for the oblate configuration. Thus, from the spherical to the oblate configuration, the quadrupole and octupole phonons play different roles. Including both phonons, we obtain 0.9 ms for the spherical configuration and 2.4 ms for the oblate configuration. It is clear that the PVC effect is much smaller for the oblate configuration than for the spherical one.
As for the prolate configuration, the energy and strength of the lowest octupole phonon are similar to those in the oblate configuration (Fig. 4(b)), while for the lowest quadrupole phonon the energy in the prolate configuration is higher, and the strength smaller, than in the oblate configuration (Fig. 4(a)). Thus, one can expect the PVC effect for the prolate configuration to be smaller than that for the oblate configuration. Therefore, after considering the PVC effect, the sudden change of half-lives from the prolate to the spherical configuration remains, and a sudden change from the oblate to the spherical configuration will also appear, which is not seen at the RPA level. For the SkM* functional, the shape change from the prolate to the oblate configuration occurs at \(N=74\), and the half-life is shortened already at the RPA level. With the further inclusion of the PVC effect, the change in half-life will be more apparent. For the SLy4 functional, the shape changes from oblate to spherical, and no significant shortening of the half-life is observed at the RPA level, but with the further inclusion of the PVC effect, a sudden shortening of the half-life will also manifest around \(N=74\).
We have ignored the triaxial deformation in the present study. The potential energy surfaces in the two dimensions of \(\beta\) and \(\gamma\) in Ref. [42] show that some nuclei are soft against the triaxial deformation. The \(\beta\)-decay half-lives therefore need to be investigated again after considering the triaxial degree of freedom as well as the shape-mixing effect.
## IV Summary
We have investigated the \(\beta\)-decay half-lives in the Zr isotopes with shape changes. The GT strength distributions were evaluated in the proton-neutron QRPA and QPVC approaches. The spherical and oblate configurations give similar half-lives, and the half-lives for the oblate configuration are shorter than those for the prolate one at the RPA level. The PVC effect can further reduce the half-lives; however, the effect would be smaller for a deformed configuration than for a spherical one. If a sudden drop of the half-lives around \(N=74\) is observed experimentally, it would be an indication of the shape transition. However, the present model does not take into account the triaxial shape and the shape mixing. Including these effects remains a challenge for future work.
###### Acknowledgements.
This work was supported by the National Key Research and Development (R&D) Program of China (Grant No. 2021YFA1601500), JSPS KAKENHI (Grants No. JP19K03824 and No. JP19K03872) and
Figure 4: Distributions of (a) quadrupole and (b) octupole strengths in \({}^{114}\)Zr calculated within the framework of Sect. \(II\,A\) and \(II\,B\).
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Model & deformation & \(T_{1/2}\) (ms) & deformation & \(T_{1/2}\) (ms) \\ \hline RPA & sph. & 6.4 & & \\ PVC \(2_{1}^{+}\) & sph. & 3.9 & obl. & 2.6 \\ PVC \(3_{1}^{-}\) & sph. & 1.1 & obl. & 5.4 \\ PVC \(2_{1}^{+},3_{1}^{-}\) & sph. & 0.9 & obl. & 2.4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Half-lives of \({}^{114}\)Zr calculated by spherical RPA model (SkM*) and simplified PVC model (see text) with only the first \(2^{+}\) phonon (“PVC \(2_{1}^{+}\) ”), only the first \(3^{-}\) phonon (“PVC \(3_{1}^{-}\)”), as well as both phonons (“PVC \(2_{1}^{+},3_{1}^{-}\) ”), taking the energies and transition strengths of these phonons at spherical and oblate deformation, respectively.
the JSPS/NRF/NSFC A3 Foresight Program "Nuclear Physics in the 21st Century", as well as the National Natural Science Foundation of China (Grant No. 12075104). The numerical calculations were performed on the computing facilities at the Yukawa Institute for Theoretical Physics, Kyoto University, and at the Research Center for Nuclear Physics, Osaka University.
|
2308.01354 | An Introduction to High Contrast Differential Imaging of Exoplanets and
Disks | This tutorial is an introduction to High-Contrast Imaging, a technique that
enables astronomers to isolate light from faint planets and/or circumstellar
disks that would otherwise be lost amidst the light of their host stars.
Although technically challenging, high-contrast imaging allows for direct
characterization of the properties of detected circumstellar sources. The
intent of the article is to provide newcomers to the field a general overview
of the terminology, observational considerations, data reduction strategies,
and analysis techniques high-contrast imagers employ to identify, vet, and
characterize planet and disk candidates. | Katherine B Follette | 2023-08-02T18:00:16Z | http://arxiv.org/abs/2308.01354v1 | # An Introduction to High Contrast Differential Imaging of Exoplanets and Disks
###### Abstract
This tutorial is an introduction to High-Contrast Imaging, a technique that enables astronomers to isolate light from faint planets and/or circumstellar disks that would otherwise be lost amidst the light of their host stars. Although technically challenging, high-contrast imaging allows for _direct_ characterization of the properties of detected circumstellar sources. The intent of the article is to provide newcomers to the field a general overview of the terminology, observational considerations, data reduction strategies, and analysis techniques high-contrast imagers employ to identify, vet, and characterize planet and disk candidates.
Exoplanet Direct Imaging, High-Contrast Imaging, Circumstellar Disks
## 1 Introduction
One of the breakthrough technologies of modern exoplanet astronomy is the technique of high-contrast imaging (HCI, often referred to more simply as "direct imaging"). HCI is a catchall term that encompasses the instrumental hardware, image processing techniques, and observing strategies that are employed to enable astronomers to image very faint sources (planets, circumstellar disks) in the vicinity of bright stars.
This article provides a basic introduction to the challenge of high contrast imaging in Section 1. It then defines and briefly describes the hardware involved in HCI in Section 2. In Section 3, it outlines how hardware and atmospheric aberration manifest in the anatomy of a HCI Point Spread Function (PSF). Section 4 introduces the range of "differential imaging" observational techniques that are employed to facilitate separation of starlight from disk or planet light in post-processing, and Section 5 outlines the algorithms used to do so. Section 7 describe analysis techniques commonly employed to extract the properties of imaged planets and disks from post-processed HCI images, and Section 8 describes potential sources of false positives. Technologies that complement HCI are covered briefly in Section 9. The article is accompanied by a python code tutorial containing sample implementations of each of the main differential imaging techniques, as well as exercises for the reader. It is available at [https://github.com/kfollette/PASP_HCIutorial](https://github.com/kfollette/PASP_HCIutorial).
Throughout this article, I include definitions of many terms and phrases peculiar to High-Contrast imaging, but also assume knowledge of some common astronomy and optics terms that readers just getting started in the field may not yet be familiar with. Furthermore, the references I've chosen to include in the main text are primarily to the foundational work(s) that developed a particular technique. They are intended merely as a starting point, and should not be interpreted as the "state of the art" in the field. I provide two living documents to accompany the tutorial that I hope will serve as references in both areas. The first (available at [https://bit.ly/HCljargon](https://bit.ly/HCljargon)) provides definitions of key astronomy and optics jargon used throughout this tutorial, which some readers may find useful when they encounter unfamiliar terms. The second (available at bit.ly/beginHCI) provides a recommended reading and viewing list for those who would like to delve deeper into the techniques discussed here.
### What is high-contrast imaging?
The High-Contrast Imaging (HCI) technique is a relative newcomer in the world of exoplanet detection techniques, with the first discoveries in 2004 and 2008 (Chauvin et al., 2004; Marois et al., 2008; Kalas et al., 2008). Although
the number of planet detections is to date lower for high-contrast imaging 1 than for the indirect (radial velocity, transit, and microlensing) techniques, directly imaged companions are arguably the best characterized exoplanets. HCI also provides the best prospects for current and future characterization of exoplanet atmospheres, particularly temperate ones conducive to life as we know it. The commitment of the community to this goal is evident in the first theme of _Pathways to Discovery in Astronomy and Astrophysics for the 2020s_ (also known as the Astro2020 Decadal Survey) - "Pathways to Habitable Worlds". It calls for a "step-by-step program to identify and characterize Earth-like extrasolar planets, with the ultimate goal of obtaining _imaging_ and _spectroscopy_ of potentially habitable worlds" (pg. 2 of National Academies of Sciences, Engineering, and Medicine 2021, emphasis mine). The gap between the modern directly imaged planet population and Earth-analogs is large in both mass and semi-major axis space (see Fig. 1). However, while indirect planet detection methods are currently more sensitive to terrestrial planets, the decadal survey goal of _imaging_ and _spectroscopy_ of exo-Earths cannot be achieved without direct detection.
Footnote 1: As of the writing of this tutorial, \(\sim\)50 companions have been imaged with estimated masses below the canonical “deuterium-burning” limit of \(<\)13M\({}_{J}\) (the formal boundary between “planet” and “brown dwarf”, though the utility of this boundary as a defining line between populations is debatable). However, this number more than doubles when considering all bound substellar (\(<\)70M\({}_{J}\)) companions to higher mass stars. Brown dwarf companions with masses less than \(\sim\)20M\({}_{Jup}\) are often referred to as “Planetary Mass Companions” (PMCs), and are likely part of the same underlying population as (i.e. formed similarly to) many of the objects currently classified as directly imaged “planets” (Wagner et al., 2019).
Although the current state of the art in HCI is imaging of \(>\)1M\({}_{J}\) planets at \(\sim\) tens of au separations, the future of the technique is bright (pun intended!), and vigorous ongoing technology development will push its sensitivities to lower mass and more tightly-separated planets.
### What is Contrast?
Figure 1: The population of known exoplanets discovered with high-contrast imaging (red) as compared to those found with indirect methods: transits (green), radial velocity (blue), and microlensing (orange) as of February, 2023 per the NASA Exoplanet Archive. Exoplanets are shown relative to solar system planets (yellow), highlighting the fact that detection techniques are not yet capable of detecting solar system analogs.
In the context of HCI, the term "contrast" refers to the brightness ratio between an astronomical source (planet, disk) and the star it orbits. "High" contrast images are those where the ratio \(\frac{F_{\rm source}}{F_{star}}\) is small, meaning the source is much fainter than the star - these detections are difficult. "Low" contrast images are therefore ones where the source-to-star ratio is larger, meaning the source is brighter relative to the star - these detections are less challenging.
Unlike stars, where absolute brightness is almost entirely a function of mass, for planets, brightness is a function of both mass and age. Planets begin their lives hot and bright and, lacking an internal source of energy sufficient to maintain that temperature, cool with time.
As they evolve, planetary spectra, and therefore contrast, also change drastically. Figure 2 shows contrast at a range of wavelengths for the same planet (Jupiter) when "young" (20Myr) and "old" (4.5Gyr, the age of our Solar System). It highlights the extreme variation in contrast as a function of wavelength as planets age.
In thinking about contrast for point sources, it is useful to keep several benchmark quantities in mind, namely:
* In the near-infrared (1-3\(\mu\)m), young (\(\sim\)few to few tens of Myr) giant planets generally have contrasts in the range \(\sim 10^{-5}\) to \(10^{-6}\) relative to their host stars. They radiate away much of their initial thermal energy over the course of the first tens of millions of years after formation, thus higher contrasts are required to detect them as they get older.
* At 3-5\(\mu\)m, the same young (\(\sim\)few Myr) planets have more moderate contrasts of \(\sim 10^{-3}\) to \(10^{-4}\). This is because, with temperatures of \(\sim\)500-1500K, their thermal emission peaks in this wavelength regime, and the brightness gap relative to the much brighter and hotter (peak emission bluer) star is narrowed. This remains the region of most favorable contrast even as planets age.
* In the optical, planets have undetectably low levels of direct thermal emission, and are seen instead in reflected light (stellar photons redirected/scattered by their atmospheres toward Earth). For mature planets (\(\gtrsim\)100Myr),
Figure 2: The predicted contrast ratios required to image Jupiter both as an “old” (4.5Gyr, blue) and “young” (20Myr, yellow) planet as a function of wavelength. Thermal and reflected light spectra were generated for both planets with PICASO (Batalha et al., 2019), binned to a spectral resolution of 300, and summed. The young Jupiter’s atmospheric properties were generated using the SONORA cloud-free atmospheric model grid (Marley et al., 2021) and divided by a simulated spectrum for a star with properties appropriate for the young Sun (T=4300K, logg=4.3, R=1.2R\({}_{\odot}\)Baraffe et al., 2015). The “old” Jupiter spectrum was generated for a 90% cloudy/10% cloud-free surface and divided by a solar spectrum.
this wavelength regime provides more moderate contrasts than the NIR. For example, at 4.5Gyr, Jupiter and Earth have contrasts of \(\sim 10^{-9}\) and \(10^{-10}\), respectively, at 0.5\(\mu\)m. Combined with resolution advantages inherent in shorter wavelength imaging (see Section 2 for details), optical wavelengths provide the best prospects for future detection of solar system analog planets.
A simple analogy will help drive home the near (but not wholly) intractable nature of the contrast problem. As shown in Figure 3, for thermal emission from hot young exojupiters, the contrasts outlined above are comparable to the ratio of light emitted by a firefly relative to a lighthouse. For true (4.5 Gyr) Jupiter analogs in optical reflected light, a more apt comparison is a single bioluminescent alga relative to a lighthouse. This highlights the tremendous technological barriers that the field must overcome in order to achieve direct characterization of mature, potentially-habitable exoplanets.
Precisely how hot a planet is at formation (and therefore how bright it appears) depends on how it was formed, and a range of formation modes are likely to overlap within the exoplanet population. In other words, planets (and brown dwarfs) of the same mass may have formed via different mechanisms.
Planets like those in our solar system most likely formed via a "cold start" mechanism involving the gradual assembly of solid material within a circumstellar disk. Their "cold" starts are only cold in comparison to so-called "hot start" planets, which also form in a circumstellar disk, but rapidly as a result of gravitational collapse. The high masses and wide separations of most directly imaged planets make them good candidates for hot start formation, but current and next-generation instruments are detecting lower mass, closer-in planets for which formation mechanism is more ambiguous, and could proceed under either path. The range of models and their predictions and assumptions is well-described in Spiegel & Burrows (2012). For our purposes, the most important takeaways are that directly imaged
Figure 3: A schematic illustration of the magnitude of the brightness differential between the sun and a hot, young exojupiter in the NIR and the sun and a reflected light Jupiter in the optical. The brightness differential for a young Jupiter analog is \(\sim\)10\({}^{-6}\), comparable to the brightness differential between a lighthouse and a firefly. Once a jupiter-like planet has radiated most of the energy of formation and no longer glows brightly in the infrared, this differential drops to 10\({}^{-9}\), akin to the brightness differential between a lighthouse and a single bioluminescent alga cell.
exoplanet brightnesses can only be translated to mass estimates under assumptions of: (a) stellar age, and (b) planetary formation pathway/initial entropy of the planet unless a direct measure of the planet's mass is available from another method, such as astrometry or radial velocity.
#### 1.2.1 What do we learn from HCI planet detections?
The simplest measurements made for individual directly-detected exoplanets are their **locations2** (astrometry) and **brightnesses** (photometry). Together with _evolutionary models_ for young giant planets (which assume a formation pathway, e.g., Baraffe et al., 2003), **photometric data** allow for inference of a planet's _mass_, provided the system has a well-constrained **distance3** and a moderately-constrained _age_.
Footnote 2: In this section, I will place observed properties in **bold** the first time I reference them, and inferred physical properties in _bolded italics_
Footnote 3: Nearly all HCI detections are for objects in the solar neighborhood, for which _Gaia_ distances are sufficiently robust to consider them directly measured, rather than inferred quantities. For non-parallax distance measurements, this is not necessarily true.
Given the difficulty of robustly estimating ages for young objects, the preferred targets for direct imaging surveys have been young moving group stars; age estimates for these coeval groups are better constrained by averaging across independent estimates for their many members. **Planetary luminosity** and age can also be compared to the predictions of various planet formation models (e.g. the so-called cold/warm/hot start models, Spiegel and Burrows, 2012) to inform the initial conditions under which planets are born.
The combination of **detection limits** of large HCI planet-finding campaigns and evolutionary models allows for constraints on the _occurrence rates_ of populations of exoplanets in various mass and separation ranges unique to direct imaging (currently \(\gtrsim\)1M\({}_{J}\) and \(\gtrsim\)10au). Population constraints, in turn, inform formation models. For a review of what was learned about planet populations from the first generation of HCI campaigns, see Bowler (2016).
**Orbital monitoring** of directly imaged planets also provides constraints on the dynamical evolution of young planetary systems. For example, _coplanarity_ and the prevalence of _orbital resonances_ in multi-planet systems inform planet formation and migration models (e.g. Konopacky and Barman, 2019). _Alignment_ (or misalignment) of planetary orbits with the stellar spin axis and/or the circumstellar disk plane informs the history of dynamical interactions within the system (e.g. Balmer et al., 2022; Brandt et al., 2021). Similarly, dynamical characterization of planets in systems with disk features hypothesized to be planet-induced provides a means to test disk-planet interaction models (e.g. Fehr et al., 2022). For a comprehensive review of planetary dynamical processes, see Davies et al. (2014) and Winn and Fabrycky (2015).
Finally, **spectroscopy** of imaged companions allows for direct characterization of _atmospheric properties_. To first order, low resolution spectra can inform the bulk _composition_ of the atmosphere in more detail than photometry alone. For instance, even a low-resolution infrared spectrum of a giant planet can inform whether its atmosphere is CH\({}_{4}\) or CO-dominated. Directly imaged planet spectra, in combination with detailed atmospheric models, can also inform the _temperature-pressure structure_ of the atmosphere, likely _condensate (cloud) species_, and even the prevalence of photo- and disequilibrium _chemical processes_. Constraints on _C/O ratios_ of planetary atmospheres are probes of their formation locations relative to various ice lines that determine whether these elements are found in the gas or solid phase.
The advent of medium resolution spectroscopy of directly imaged planets with instruments such as VLT GRAVITY (R\(\sim\)500 in medium resolution mode) is enabling stronger constraints on these properties, with upgrades planned at the VLT to improve resolutions even further. Very high-resolution spectra of directly imaged companions will be enabled by coupling focal-plane optical fibers to existing high-resolution (R\(\sim\)30,000) spectrographs (e.g. The Keck Planet Imager and Characterizer (KPIC), Mawet et al., 2016). Such work requires very precise knowledge of planet astrometry to enable fiber placement, but will enable very exciting science such as constraints on planetary _rotation rates_, which can be compared against the predictions of various formation models. For a review of spectroscopy of directly imaged planets, see Biller and Bonnefoy (2018) and Marley et al. (2007).
#### 1.2.2 What do we learn from HCI disk detections?
HCI's detection efficiency is significantly higher for circumstellar disk structures than for planets4, and many high-resolution high-contrast images of circumstellar material have been collected by exoplanet direct imaging surveys (e.g. Rich et al., 2022; Esposito et al., 2020; Avenhaus et al., 2018). Such observations provide direct constraints on the distribution and composition of planet-forming material. Symmetric morphological features (such as rings, gaps, and
cavities), inform the distribution of dust in planet-forming systems and, likely, the architectures of their planetary systems. Asymmetric features (such as warps and spiral arms) provide indirect evidence of embedded or undetected planetary perturbers and/or likely locations for future planet formation. These "signposts" of planet formation, though difficult to interpret, provide a wealth of information about planets and planet formation _at or near the epoch of formation_. For a comprehensive review of the state of high-contrast disk imaging, see Benisty et al. (2022).
NIR HCI disk images are also extremely powerful in combination with high-resolution millimeter imagery. In the millimeter and sub-millimeter, dust continuum emission traces large grains in the disk midplane, and millimeter line emission can be used to trace various gas-phase species as well. NIR high-contrast images trace an entirely different population, namely small micron-sized dust grains in the upper layers of the disk. Thus, the combination of NIR and mm high-resolution imagery yields a holistic picture of various disk components, a powerful combination for understanding the radial and vertical structure of disks.
Finally, multiwavelength NIR high-contrast imagery can be used to constrain grain properties such as size, porosity, and composition (e.g. Chen et al., 2020), as well as the water ice content of NIR-scattering grains (e.g. Betti et al., 2022). A good understanding of grain properties is essential to understanding the microphysics of the dust coagulation that will eventually form planets.
## 2 Enabling technologies for high-contrast imaging
HCI is built upon a foundation of enabling technologies, namely: adaptive optics, coronagraphy, wavefront sensing, and differential imaging techniques, each of which is introduced in this section. For a more comprehensive technical review of many of these technologies, see Guyon (2018).
### Adaptive Optics
Adaptive optics is perhaps the most critical HCI enabling technology for ground-based imaging campaigns. Without it, image resolutions are limited by astronomical seeing, or the size of coherent patches in the earth's atmosphere (approximated by the "Fried parameter" \(r_{0}\), which has a \(\lambda^{6/5}\) dependence). With adaptive optics, modern HCI instruments can approach the diffraction limit,
\[\theta=1.22\frac{\lambda}{D}\]
where \(\lambda\) is the wavelength and \(D\) the diameter of the telescope. Table 1 gives the diffraction-limited resolution of an 8m telescope at 0.55\(\mu\)m (V band), 1.65\(\mu\)m (H band) and 3.5\(\mu\)m (L band) in physical units, as compared to the seeing limit at an exceptional telescope site under good weather conditions (0\(\farcs\)25 at 0.55\(\mu\)m) at each wavelength.
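A quick way to build intuition for these numbers is to compute them directly. The short Python sketch below (an illustrative snippet, not drawn from the accompanying tutorial code) evaluates \(\theta=1.22\lambda/D\) for an 8m telescope and converts it to a projected separation in au, reproducing the diffraction-limited rows of Table 1.

```python
import numpy as np

RAD_TO_ARCSEC = 180.0 / np.pi * 3600.0       # ~206265 arcsec per radian

def diffraction_limit_arcsec(wavelength_m, diameter_m=8.0):
    """Diffraction-limited resolution theta = 1.22 lambda / D, in arcseconds."""
    return 1.22 * wavelength_m / diameter_m * RAD_TO_ARCSEC

def resolution_au(theta_arcsec, distance_pc):
    """Projected separation subtended by theta at a given distance.
    By definition of the parsec, 1 arcsec at 1 pc corresponds to 1 au."""
    return theta_arcsec * distance_pc

# Diffraction-limited rows of Table 1 (V, H, and L bands; 50 and 150 pc)
for lam_um in (0.55, 1.65, 3.5):
    theta = diffraction_limit_arcsec(lam_um * 1e-6)
    for d_pc in (50, 150):
        print(f"{lam_um:4.2f} um, {d_pc:3d} pc: {resolution_au(theta, d_pc):5.1f} au")
```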
In principle, the diffraction-limited Point Spread Function (PSF)5 of a circular telescope aperture is the "Airy pattern". In practical terms, the function describing this PSF places the majority of the incoming starlight into a "diffraction-limited core", with a radius of 1.22\(\lambda/D\) and a Full Width at Half Maximum (FWHM) of 1.03\(\lambda/D\). Extending from this central core are a characteristic set of "Airy" diffraction rings that decrease in amplitude outward and are spaced by roughly 1\(\lambda/D\) from one another with the first minimum at 1.22\(\lambda/D\). In a perfect diffraction-limited
\begin{table}
\begin{tabular}{c c c c}
**Distance** & \multicolumn{3}{c}{**Resolution (in au)**} \\ (pc) & @\(0.55\mu m\) & @\(1.65\mu m\) & @\(3.5\mu m\) \\ \hline \hline \multicolumn{4}{c}{Seeing-Limited Observations} \\ \hline
50 & 12.5 & 46.5 & 115 \\
150 & 37.5 & 140 & 345 \\ \hline \multicolumn{4}{c}{Diffraction-Limited Observations} \\ \hline
50 & 0.9 & 2.6 & 5.5 \\
150 & 2.6 & 7.8 & 16.5 \\ \hline \end{tabular}
\end{table}
Table 1: Seeing (\(r_{0}\)) and diffraction (\(\theta\))-limited resolutions at three common HCI wavelengths for an 8m telescope at an excellent astronomical site in good weather conditions (0\(\farcs\)25 seeing at V band). Values are given in astronomical units for objects at distances of 50pc (the volume limit of many HCI surveys) and 150pc (a typical distance to nearby star forming regions).
system, the central "Airy disk" contains 84% of the total light in the PSF, with the remainder of the light in the Airy rings.
In the case of a telescope with a circular aperture and a central obscuration (e.g. by a telescope secondary mirror) the Airy pattern has a functional form of:
\[I(u)=\frac{1}{(1-\epsilon^{2})^{2}}\Bigg{[}\frac{2J_{1}(u)}{u}-\epsilon^{2} \frac{2J_{1}(\epsilon u)}{\epsilon u}\Bigg{]}^{2}\]
where u is a dimensionless radial focal plane coordinate defined as:
\[u=\frac{\pi}{\lambda}D\theta\]
Figure 4: A simplified, schematic illustration of the process of adaptive optics. _“Stage 1”_ depicts the effect of the Earth’s atmosphere on incoming plane-parallel light. The wavefront is aberrated inside of locally coherent patches in the atmosphere, and enters the telescope aperture with corrugations of a characteristic size (\(r_{0}\)). In _“Stage 2_, the incoming light is passed through a beamsplitter or dichroic, which splits it, sending some to a wavefront sensor and the rest to a science camera. In this case, a Shack-Hartmann wavefront sensor (see Section 2.2) is depicted, wherein an array of lenslets is inserted into the focal plane. Each makes a spot whose location relative to the orientation of the lenslet is indicative of the slope of the incoming wavefront. The spot locations are converted to a “best guess” of the incoming wavefront shape and a corresponding control signal is sent to actuators under an (initially flat, generally tertiary) mirror. _“Stage 3”_ depicts the result of the deformed wavefront reflecting off of the deformed mirror, causing the reflected wavefront to be re-“flattened”, thus compensating for atmospheric aberration. The sensed wavefront is depicted here as an unrealistically perfect match to the true incoming wavefront. In reality, kHz-scale time variation in the incoming wavefront, unsensed or imperfectly estimated wavefront aberration, and the speed and nature of the control algorithm mean that no wavefront is perfectly sensed and corrected. Some residual corrugation will always remain in a real AO system.
and \(\theta\) is defined as the angle between the optical axis and the point of observation. The center of the PSF is at \(\theta\)=0 and therefore u=0, and I(u) is the PSF intensity at location u. The quantity \(\epsilon\) is a measure of the amount of central obscuration expressed as a fraction of the total aperture (which acts to decrease the effective aperture and thus the predicted peak intensity), and J\({}_{1}\) is the first order Bessel function of the first kind.
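For readers who want to evaluate or plot the obscured Airy pattern defined above, the Python sketch below computes I(u) using scipy's first-order Bessel function. The default obscuration fraction \(\epsilon\) is an arbitrary illustrative value, not a parameter quoted in the text.

```python
import numpy as np
from scipy.special import j1          # first-order Bessel function of the first kind

def obscured_airy(u, eps=0.14):
    """Normalized intensity I(u) for a circular aperture with a central
    obscuration of fractional radius eps (the expression given above)."""
    u = np.atleast_1d(u).astype(float)
    out = np.ones_like(u)                         # I(0) = 1 in the limit u -> 0
    nz = np.abs(u) > 1e-12
    uu = u[nz]
    amp = 2.0 * j1(uu) / uu
    if eps > 0.0:
        amp = amp - eps ** 2 * (2.0 * j1(eps * uu) / (eps * uu))
    out[nz] = (amp / (1.0 - eps ** 2)) ** 2
    return out

def u_coordinate(theta_rad, wavelength_m, diameter_m):
    """Dimensionless focal-plane coordinate u = (pi / lambda) * D * theta."""
    return np.pi * diameter_m * theta_rad / wavelength_m
```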
In practice, HCI PSFs tend to be dominated by Airy or Airy-like diffraction patterns with a few key deviations. First, no modern AO systems achieve perfectly diffraction-limited performance. The PSF of a modern adaptive optics system is often characterized by its so-called "Strehl Ratio" (SR), which is the ratio of a star's observed peak intensity relative to that of its theoretical diffraction-limited peak intensity. 6 Modern Extreme Adaptive Optics (ExAO) systems routinely achieve SRs of 80-95% in the Near Infrared, but only 10-30% in the optical at present.
Footnote 6: This theoretical PSF is not fully approximated by the relatively simple \(I_{u=0}\) described above for a given telescope aperture size (D), central obscuration \(\epsilon\), and wavelength \(\lambda\), because (a) it assumes no other obscurations in the aperture (e.g. secondary mirror supports, downstream optical elements), and (b) it computes the PSF for a single wavelength, which is not measurable in practice. Thus, real Strehl Ratio approximations require detailed instrumental PSF models that include all of the telescope and instrument system’s optical elements. The on-sky predicted PSFs are then normalized to the same total intensity and divided to approximate Strehl Ratio. For further discussion of the subtleties of Strehl Ratio determination, see Roberts et al. (2004).
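Following the estimation procedure sketched in the footnote (normalize the observed and model PSFs to the same total flux, then compare peaks), a minimal Strehl-ratio estimate can be written as below. This is only a schematic illustration; real Strehl measurements require careful treatment of centering, pixel sampling, and background subtraction.

```python
import numpy as np

def strehl_ratio(observed_psf, model_psf):
    """Approximate Strehl ratio: peak of the observed PSF divided by the peak
    of a diffraction-limited model PSF, after normalizing both arrays to the
    same total flux."""
    obs = np.asarray(observed_psf, dtype=float)
    mod = np.asarray(model_psf, dtype=float)
    return (obs.max() / obs.sum()) / (mod.max() / mod.sum())
```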
A proper treatment of the effect of the atmosphere on incoming starlight requires detailed atmospheric turbulence modeling (e.g. a Kolmogorov model). However, a decent first-order approximation of the effect of the Earth's atmosphere on incoming starlight, depicted in Figure 4, is to imagine a plane-parallel electromagnetic wave7 with some constant phase and amplitude encountering a layer in the Earth's atmosphere composed of coherent patches of size \(r_{0}\) (atmospheric "cells"). Inside these cells, the wavefront phase is aberrated such that it remains locally flat, however phase offsets occur between neighboring cells. Phase aberrations can take many forms and are often represented as an orthogonal basis set of polynomials with both radial and azimuthal dependencies (e.g. the Zernike polynomials). Low order aberrations have familiar names, and ones that you're likely to encounter in your annual eye exam, such as "astigmatism" and "coma". Higher order aberrations take more complex forms in phase space, but all are essentially disruptions in the intrinsic shape of the incoming PSF. For illustrative purposes, let's imagine only the simplest two low-order modes, the so-called "tip" and "tilt" modes, which preserve the shape of the PSF but modify the direction of the incoming wavefront relative to the original travel direction.
Footnote 7: Plane-parallel here means that if we were to draw a shape connecting equivalent phases of incoming electromagnetic waves from the same source, say the location where their electric field strengths are strongest, the shape of our equal-phase surface would be a plane perpendicular to the direction of travel. In other words, light from a distant source enters the Earth’s upper atmosphere in phase with neighboring light waves. This is an approximation because light exits a spherical object symmetrically, meaning that a surface of constant phase should always have some curvature; however, the distances to astronomical objects are vast compared to the sizes of the telescopes we use to intercept their light. This means that we intercept only a tiny area of a vast spherical shell of light from the star, a shell so vast that the tiny area we intercept can be treated as locally “flat”.
The effect of tip/tilt aberrations is that a wavefront exiting a layer of atmospheric cells is no longer plane-parallel. Instead, it is corrugated (the angle of arrival varies across the telescope aperture, see Figure 4's "distorted incoming wavefront") with some wavelength-dependent characteristic length scale (The Fried coherence length, \(r_{0}\sim\lambda^{6/5}\)). For an atmospheric layer at a certain height in the atmosphere, this characteristic length scale can also be represented as a characteristic angular scale called the "isoplanatic angle", \(\theta_{0}\). Note again that this is just a first-order approximation, albeit a useful one for building intuition, and that, in reality, there are a number of aberrating layers in the atmosphere with their own characteristic coherence lengths, heights, and wind speeds. The practical consequence when integrated over the telescope aperture is that the light of each coherent patch manifests as its own diffraction limited PSF at a different location in the image plane centered around the optical axis of the telescope. The instantaneous result is a number of superposed independent images of the star equal to the number of coherent atmospheric patches that the wavefront incident on the telescope passed through - i.e. the image is blurry.
Locally-coherent patches at a given layer in the atmosphere only remain so on timescales of hundredths to thousandths of a second (due to wind, temperature/pressure variation, etc.), which means Adaptive Optics systems must operate on these timescales in order to detect and correct these aberrations with Wavefront Sensors (WFS). Let's extend our toy example of an incoming plane-parallel wavefront that experiences pure tip/tilt aberrations at a single layer in the atmosphere. Imagine a series of corrugated wavefronts exiting this layer and being collected continuously by an astronomical detector over a realistic exposure time of several to several tens of seconds. The result will be a superposition of many hundreds or thousands of diffraction-limited PSFs (so-called "speckles") at various locations relative to the central optical axis; this superposition is a seeing-limited PSF, whose size/FWHM will vary according to various properties of the atmosphere, but will always be much larger than the diffraction limit. Modern AO systems are able to operate at 1-2kHz frequencies; however, they are not able to perfectly sense the wavefront nor to perfectly or completely correct it on the relevant timescales. Many advancements are being made in both the hardware and software of wavefront control, including the advent of algorithms that attempt to account for the time
delay between sensing and applying a wavefront correction by predicting the state of the wavefront into the future (so-called "predictive control" algorithms, e.g. Poyneer et al., 2007; Guyon & Males, 2017).
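The tip/tilt-only toy picture above can be visualized with a few lines of Python. The sketch below is illustrative only (all pixel-scale numbers are arbitrary): it approximates each instantaneous diffraction-limited speckle by a narrow Gaussian and sums many random realizations of the angle of arrival, producing the broad, seeing-limited long-exposure PSF described above.

```python
import numpy as np

def toy_long_exposure(n_real=2000, grid=129, fwhm_pix=4.0, jitter_pix=12.0, seed=0):
    """Sum many randomly displaced, diffraction-limited 'speckles' (here
    approximated by Gaussians of FWHM fwhm_pix) whose centers are drawn from a
    normal distribution of width jitter_pix, mimicking random tip/tilt errors."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[:grid, :grid] - grid // 2
    sigma = fwhm_pix / 2.355                    # convert FWHM to Gaussian sigma
    image = np.zeros((grid, grid))
    for dx, dy in rng.normal(scale=jitter_pix, size=(n_real, 2)):
        image += np.exp(-((x - dx) ** 2 + (y - dy) ** 2) / (2.0 * sigma ** 2))
    return image / n_real                       # long-exposure, seeing-limited PSF
```

Increasing `jitter_pix` relative to `fwhm_pix` broadens the resulting halo, mimicking poorer seeing, while setting it to zero recovers the (here Gaussian-approximated) diffraction-limited core.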
The consequence of a perfect AO system that could fully sense and correct wavefront aberration would be a perfect SR=100% diffraction-limited PSF. The reality is of course not perfect - a partially- or imperfectly-corrected wavefront results in the alignment of many but not all of these instantaneous PSFs. Some uncorrected, residual seeing-limited "halo" with a width of approximately \(\frac{\lambda}{r_{0}}\) is expected, and its amplitude should decrease as the performance of the AO system (Strehl Ratio) improves. Imperfect wavefront correction can also lead to certain persistent speckles, so-called "quasi-static speckles", that are stable on timescales of minutes to hours. These are particularly worrisome because they can mimic planets, but they have the advantage of being static in their location in the instrument frame. They also exhibit spectra that are identical to that of the central star. These properties make them amenable to removal by angular and spectral differential imaging (ADI/SDI, see Section 4).
NIR high-contrast images can have a dozen or more clear, detectable Airy rings in their unocculted AO PSFs. These Airy rings present a fundamental barrier to achieving high contrast in the environs of the central star, and additional optics are often employed to mitigate them. Because Airy rings are a consequence of diffraction at the edges of the entrance pupil, mitigating optics are generally pupil plane8 optics that block light near its edges. One example is the "Lyot stop".
Footnote 8: Complex modern instruments utilize optics in both the “image plane”, where light incident on the telescope is brought to a focus, and the “pupil plane”, where light is collimated. Estimates of the appearance of an object in a given plane can be accomplished by Fourier transform of its appearance in the other. While image plane images show the on-sky source (often manipulated by upstream optics such as coronagraphs), pupil plane images are essentially images of the entrance aperture (which you can prove to yourself with a simple ray-tracing diagram), containing e.g. the central obscuration from the secondary mirror, the spider arms suspending it, etc. HCI instruments, especially those that require precise placement of optical elements in the pupil plane, are often equipped with “pupil-viewing” cameras, which image this entrance aperture.
The Airy PSF is also predicated on the assumption of a circular entrance aperture, which no realistic telescope entrance pupil is able to achieve. The presence of various optics, especially the secondary mirror and its supports, induce deviations from a perfect Airy PSF. To simulate an HCI PSF, therefore, requires a model of the telescope entrance aperture and any additional optics in the telescope beam. Example PSFs for a range of modern high-contrast imaging instruments are provided in Figure 5.
### Wavefront Sensing and Control
In addition to deformable mirrors (DMs), adaptive optics systems require instrumentation that can sense atmospheric aberrations and convert them to DM control signals on kHz frequencies. From an observer's perspective, the most important features of this "Wavefront Sensor" (WFS) and its accompanying control algorithm are its wavelength, limiting magnitude, stability, and cadence.
WFS wavelength--WFS operate most often at optical wavelengths. Since most HCI is done in the NIR, such systems implement a dichroic that sends all optical light to the wavefront sensor and all NIR light to the science camera. Although this results in no loss of light at the science wavelength, it does introduce a difference in the scale of the wavefront aberrations that are sensed vs. detected (namely, \((\frac{\lambda_{sensed}}{\lambda_{detected}})^{6/5}\)). NIR wavefront sensing is an active area of development in HCI instrumentation for this reason. For a visible light HCI instrument, wavefront sensing in the optical generally requires a beamsplitter that results in a substantial loss of signal to the science camera (50% or more) as light at the science wavelength is diverted to the WFS.
WFS limiting magnitude--is a measure of the faintest targets for which the wavefront can be sensed, and is determined at the most basic level by the architecture of the WFS. Though there are many types of WFS, the most common are the Shack-Hartmann and Pyramid WFS. Tradeoffs in WFS qualities, such as sensitivity to wavefront errors of various scales and linearity between WFS measurements and DM commands, determine the choice of WFS architecture (for a full discussion, see Guyon (2018)). From the perspective of the observer, one practical consequence of WFS architecture is the range of magnitudes for which AO correction can be accomplished. A Shack-Hartmann WFS (SHWFS, see Figure 4 for a simple depiction) relies on a grid of lenslets placed in the pupil plane, each of which creates a spot on the WFS camera. The location of the spot created by each lenslet is controlled by the direction of the incoming wavefront, and this shape can then be applied to the DM to correct aberrations. The limiting magnitude of a SHWFS is a fixed quantity determined by the required brightness for an individual lenslet spot to be sensed. Because the lenslets are physical optics, this cannot be modified without swapping out the grid of lenslets. A pyramid WFS, on the other hand, modulates the incoming light beam around the tip of a four-faced glass pyramid, each facet of which creates an image of the telescope pupil on a WFS camera. These four pupil images can be analyzed to reconstruct
the incoming wavefront. A pyramid WFS camera's pixels can also be binned to achieve correction on fainter guide stars. Although wavefront information is lost in the binning process and the quality of the AO correction is therefore necessarily compromised, this does preserve the ability to apply (more modest) AO correction to fainter stars.
_WFS stability--_is effectively a measure of how long and under what conditions a WFS can provide continuous adaptive optics correction. When AO systems are operating in "closed loop" mode, meaning corrections are being applied in real time, the loop will "open" in order to protect the DM if the sensed wavefront deformations require corrections whose amplitudes are too great for the range of the DM. This is called a "breaking" of the AO control loop. One of the more critical aspects of a wavefront control algorithm is the "gain" applied to each sensed aberration. Gain can be thought of as a multiplicative factor applied to the sensed wavefront such that not all of the sensed aberration is corrected at once, but instead some proportion of it. This is to avoid overcorrecting an aberration and driving the mirror into an oscillation, but also to allow more wiggle room for unsensed or incorrectly-sensed aberrations to pass by without breaking the loop. Different sensed wavefront aberrations (e.g. 'low order' and 'high order' modes) can have different gains, and this is one of the principal quantities that can be adjusted in real time during AO observations. Gain, wavefront stability, WFS signal strength, and the nature of the control algorithm all conspire to determine the stability of the AO loop - basically its ability to remain closed during an observing sequence.
_WFS cadence--_is the timescale on which the wavefront is sensed, and is the final factor controlling the quality and stability of AO correction. In this case, the wavelength of observation and nature of the telescope site (seeing, wind speed, etc.) sets the timescale on which the incoming wavefronts change, and the AO system must run faster than this timescale in order to apply quality correction. Many/most current AO systems operate at 1-2kHz frequencies, with faster speeds being required at shorter wavelengths.
### Coronagraphy
Another enabling HCI technology is coronagraphy, which utilizes one or more physical optics inside the instrument system to suppress direct and diffracted starlight before it reaches the detector. This allows for the collection of deeper images of planetary systems, as longer integration times can be used before saturation of the primary star. Coronagraphy is distinct from external occulters ("starshades") and software algorithms ("wavefront control") that are designed to do similar things. Available coronagraphic architectures have been rapidly expanding in recent years, and I will not provide a comprehensive review here, but will instead focus on the practical effects of a coronagraph for image processing.
The purpose of a coronagraph is to redirect starlight away from the image plane by blocking or modulating it with one or more optical components, thus reducing the amount of light that must later be removed in post-processing in order to image faint companions.
Coronagraph optical components can modulate wavefront amplitude or phase or, in many cases, both. The most basic coronagraphic architecture is an opaque or reflecting image plane spot in the center of the field, which prevents on-axis light from the central star from reaching the detector. Other coronagraphic architectures utilize interferometric techniques (e.g. the "vortex" coronagraph) to accomplish the same goal. Additional optics are often placed in the pupil plane to mitigate diffraction around coronagraph edges and around the edges of the entrance aperture more generally, which effectively decreases the amplitude of the Airy rings and allows for higher contrast imaging.
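To build intuition for how an image-plane mask and pupil-plane (Lyot) stop work together, the following is a heavily simplified, monochromatic Fourier-optics sketch of a classical Lyot coronagraph. The array size, aperture size, mask radius, and Lyot stop undersizing are arbitrary illustrative choices, not a model of any real instrument.

```python
import numpy as np

N = 512
y, x = np.indices((N, N)) - N / 2
r = np.hypot(x, y)
pupil = (r < N / 8).astype(float)                      # circular entrance aperture

def to_focal(field):   # pupil plane -> focal plane
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))

def to_pupil(field):   # focal plane -> pupil plane
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(field)))

focal = to_focal(pupil)                                # on-axis starlight: an Airy pattern
mask = (r > 6).astype(float)                           # opaque spot blocks the PSF core
lyot_plane = to_pupil(focal * mask)                    # diffracted light piles up at the pupil edge
lyot_stop = (r < 0.9 * (N / 8)).astype(float)          # undersized stop removes that edge light
final = np.abs(to_focal(lyot_plane * lyot_stop))**2

# Fraction of the unocculted stellar peak that survives the coronagraph:
print(final.max() / (np.abs(focal)**2).max())
```

An off-axis source, by contrast, misses the focal-plane mask and passes through largely unattenuated, which is precisely the asymmetry the coronagraph exploits.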
## 3 The Anatomy of a High Contrast Image
Unlike many other fields of astronomy, raw HCI images rarely contain any readily apparent raw signal from the target sources, even under aggressive hardware suppression of the stellar PSF. Post-processing is generally required to achieve the required contrast, and is covered in detail in Section 5. Nevertheless, the anatomy of a raw high-contrast image is important to understand in order to develop intuition for the range of artifacts that might survive into post-processing so they can be recognized and rejected as non-disk or non-planet signals. This section lays out the anatomy of a "typical" coronagraphic high-contrast PSF, beginning with features at the center of the image and moving outward.
_Coronagraph and Spot of Arago_--First, the presence of a coronagraph in the beam results in a relative dearth of light at the center of the image. Generally the size of the coronagraphic mask can be discerned in raw images by the ring of bright diffracted starlight just beyond the outer edge of the coronagraph. Inside of this ring, the image is markedly darker, but there is often a single brighter spot at the center, the so-called "spot of Arago" or "Poisson spot", an artifact of Fresnel diffraction (and occasionally also Airy rings surrounding it). This spot is not sufficiently bright to
be used as a photometric or astrometric point of reference, however its detection and interpretation was central to our understanding of light as a wave and it thus has a very important role in the history of optics.
_Optical Aberrations_--The evolving atmosphere and the many optical elements of a high-contrast imaging instrument inevitably induce deviations in the PSF from the theoretical Airy pattern of a circular aperture. Many of these aberrations can be sensed and corrected by the Adaptive Optics system, but imperfectly, such that some will survive into the final PSF, causing its shape to deviate from an ideal Airy pattern and to vary from image to image.
_Speckles_--The residual, uncorrected starlight that dominates raw high-contrast images generally comes in two forms. First, atmospheric or instrumental aberrations undetected or not fully corrected by the adaptive optics system manifest as "speckles" (images, often aberrated, of the central star) at a range of locations in the PSF, but concentrated toward the optical axis. These evolve with the rapidly changing atmosphere, and blend into a diffuse halo of uncorrected starlight in most raw images (the so-called "seeing halo"). For very short exposures, such speckles can be individually distinguished more readily, but in such cases they evolve quickly among images and thus rarely masquerade as planets in final PSF subtracted images. So-called "quasi-static" speckles are likely created by optical aberrations in the instrument and evolve much more slowly, and thus appear stably across multiple images and are more problematic. Various forms of active control are being developed to remove these quasi-static speckles (e.g. "speckle nulling", Borde & Traub 2006; Martinache et al. 2014) and many differential imaging processing techniques are designed specifically to distinguish quasi-static speckles from planets (see Section 4).
Figure 5: A raw high-contrast image from the Gemini Planet Imager, with various features labeled. GPI’s square-shaped “dark hole” (region of AO correction) is marked in red. Satellite Spots injected intentionally into the images by the apodizer are shown with purple arrows, and serve as photometric and astrometric references. The central star is obscured by the coronagraph, the edge of which is depicted in blue. Diffraction does introduce some light to the region “underneath” the coronagraphic mask, including the “Spot of Arago” at the center of the image, marked in magenta. Examples of speckles, which are distributed throughout the image but are concentrated near the edge of the coronagraphic mask, are marked in green. Individual high-contrast imagers have various unique features, such as GPI’s “aliasing cross” (an optical effect caused by undersampling, see Poyner et al. 2016).
_Dark Hole/Control Region_--AO-corrected images also exhibit a boundary between the region of sensed wavefront aberration/AO correction and an uncorrected/unsensed region. This boundary defines the so-called "dark hole" or "control radius" of an AO system. The location of this boundary in the image plane is a direct consequence of the wavefront sensor's inability to perfectly sense all pupil plane wavefront aberrations. For example, there is a minimum spatial scale of wavefront aberrations that an AO system can detect and correct, set by the spacing of actuators, wavefront sensor optical component spacings (e.g. for a Shack-Hartmann WFS), and/or wavefront sensing camera pixel scales (e.g. for a Pyramid WFS). Any aberration on scales smaller than this limit (i.e. at higher spatial frequencies) cannot be corrected by the AO system, and this pupil plane limit maps to a particular location in the image plane. Thus, the image reverts to being seeing-limited outside the boundary of the dark hole, resulting generally in an increase in the intensity of the seeing halo at that boundary.
_Wind Artifacts_--Wind, particularly high altitude wind, drastically affects the speed at which the incoming wavefront changes in time. AO systems therefore have a harder time 'keeping up' with aberrations along one axis of the PSF (the wind direction) than others, and the AO correction is therefore poorer along this axis. In most modern HCI imagery, the wind direction can be inferred from an apparent elongation of the speckle pattern in the wind direction (i.e. there are more speckles in the halo along the wind direction, where the AO system is struggling to "keep up"). This additional uncorrected light introduces a difference in the achievable contrast in an image azimuthally, with planets/disks that align with wind artifacts more difficult to detect.
_Satellite Spots_--One practical consequence of coronagraphy is the loss of a direct measurement of the central star's astrometry and photometry. At the same time, photometric and astrometric characterization of substellar sources is dependent on these properties for the central star. For this reason, many modern HCI instruments inject reference "satellite" spots into images at known locations and with known brightness ratios relative to the central star, either through a pupil plane optic custom-designed to inject them at certain locations and brightnesses or using manipulations of the deformable mirror of the telescope to produce them. Once photometrically and astrometrically characterized (e.g. Wang et al., 2014), these spots are sufficiently stable to allow them to serve as proxies for direct measurements of the location and brightness of the central star.
_Instrument throughput--_is a measure of the fraction of light entering the telescope aperture at a certain wavelength that ultimately makes it onto the detector. It is determined in part by the number of reflecting and refracting elements in the optical path, each of which results in loss of a few percent of incoming light. The operating wavelength of the science camera and wavefront sensor is also a consideration. Generally wavefront sensors have operated at shorter, visible wavelengths and HCI cameras have operated in the NIR, enabling a dichroic to be used to separate these portions of the incoming light and minimize loss of light at the science wavelength. The advent of infrared wavefront sensors and visible light adaptive optics systems complicates this somewhat, to the extent that it can no longer be assumed generally that all light at the science wavelength is directed to the science camera, though clever combinations of filters and beamsplitters as well as usage of light that is otherwise discarded by the system (e.g. by the coronagraphic occulter) help to maximize throughput in these cases.
## 4 Differential Imaging Techniques
Ultimately, even the best HCI hardware can only suppress starlight by 3 or 4 orders of magnitude in brightness, still 2-3 orders of magnitude short of the contrast required to image a hot young exo-Jupiter. Modern high-contrast imaging instruments rely on a number of clever data collection methodologies - collectively referred to as "differential imaging" - to facilitate separation of starlight from planet/disk signal. When distilled to their essence, all differential imaging techniques are designed to leverage wavelengths, angular locations, other sources, or polarization states where companion light is faint or absent to estimate and subtract the PSF of the central star. These techniques are presented here in rough order of "aggressiveness" in estimating and removing the PSF of the central star.
### Polarized Differential Imaging (PDI)
Polarized Differential Imaging is the most common and successful technique for imaging circumstellar disk material in scattered light, and it is shown schematically in Figure 6. It relies on the fact that light emitted directly from the central star is (generally) unpolarized. Dust grains in the circumstellar environment, on the other hand, preferentially scatter starlight with a particular polarization geometry. Scattering is most efficient for light with an electric field vector aligned orthogonal to both: (a) the line of sight from the disk to Earth and (b) the vector connecting the dust
grain and the central star. In principle, this means that a disk scattered light signal should dominate PDI images, and (unpolarized) stellar emission should be absent in polarized light images.
PDI imaging leverages separation of incoming starlight according to the orientation of its electric field vector (i.e. its linear polarization). An optic called a Wollaston prism accomplishes this by passing incoming light through a material that has different indices of refraction for different linear polarization states. If a single Wollaston is used, the light is split into two beams with orthogonal polarizations (often called the "ordinary" and "extraordinary" beams), while a double Wollaston will yield four beams, adding redundancy that helps in removal of detector location-specific artifacts. The precise orientation of the orthogonal ordinary and extraordinary polarization vectors relative to the sky is manipulated to fully sample the polarized emission from the source by rotating an optic called a half- or quarter-wave plate, which modulates the orientation of the linear polarization state of incoming light for the two channels. This
Figure 6: A schematic representation of the Polarized Differential Imaging (PDI) technique. Light from a disk-bearing star (in this case the debris disk host HR4796 A with the Gemini Planet Imager at K band) is split into two orthogonal polarization states (polarization vectors are indicated in coral in the figure), and these two “Channels” (Column A’s “Channel 1” and “Channel 2”) are imaged simultaneously. A rotating Half-Waveplate (HWP) modulates the orientation of both polarization directions by rotating 22.5 degrees between images, for a total of 4 pairs of polarized images, at orientations of 0, 22.5, 45, and 67.5\({}^{\circ}\). The two simultaneously-obtained orthogonal polarization channels are subtracted from one another (Column B). The subtractions for half-waveplate orientations 0 and 45\({}^{\circ}\) probe the Stokes Q parameter and its reverse. The subtractions for half-waveplate orientations 22.5 and 67.5\({}^{\circ}\) probe the Stokes U parameter and its reverse, respectively. These independent probes of Stokes Q and U can be combined (Column C) to average over location-specific artifacts. The dual channels of Column A can also be combined across all 4 waveplate orientations to yield a Stokes I (total intensity) parameter image. This cycle of 4 waveplate orientations is repeated a number of times, often with Angular Differential Imaging (ADI) also employed (see Section 4.3), allowing for individual Q and U images to be combined across a sequence (Column D). The square root of the sum of the squared Q and U images is called the “Polarized Intensity” (PI) image (Column E). As can be seen in the figure, it easily isolates the (polarized) light of the disk from the (unpolarized) starlight, without the need for PSF subtraction. The combined total intensity image, on the other hand, is dominated by starlight.
modulation (generally sequences of 4 angles - 0, 22.5, 45, 67.5 degrees) allows the images to be combined to yield the Stokes polarization vectors I, Q, and U 9. Addition of images with orthogonal polarizations captures the _unpolarized_ intensity of the star, while subtractions yield either "Q" or "U" images, depending on the orientation of the waveplate. Q and U images are combined to isolate polarized light from the source via the equation \(PI=\sqrt{Q^{2}+U^{2}}\). Each sequence of waveplate angles thus yields four images - I, Q, U, and PI.
Footnote 9: The Stokes vectors (a/k/a “Stokes parameters”) are a mathematical formalism used to describe the polarization state of light, namely: its total intensity (I), its linear polarization state (Q and U), and its circular polarization state (V). HCI instruments are not generally sensitive to the fourth Stokes vector V, so I will not discuss it here
The angle of the polarization vector can also be extracted from these quantities as
\[\theta_{P}=0.5\arctan\left(\frac{U}{Q}\right)\]
These vectors, when overplotted on images of a scattered light disk, demonstrate a characteristic centrosymmetric pattern. This is because of the preferred geometry of the scattering process where, as a reminder, the most efficient scattering occurs when a photon's electric field orientation (\(\theta_{p}\)) is orthogonal to both the line of sight and the vector connecting the scattering dust grain and star.
Extraction of polarized signals is complicated somewhat by multiple scattering processes and the internal optics of the instrument. Internal reflections in the instrument result in depolarization effects that vary with wavelength, incident angle, and the thickness and index of refraction of the optical components. This induces so-called "instrumental polarization", which is typically estimated from observations of both unpolarized, disk-free stars and polarization standard stars.
The simple picture of polarization presented above also assumes that each photon received was scattered by only a single small dust grain in the disk on its journey from star to disk to Earth. This is a reasonable assumption in many cases, but multiple scattering certainly occurs, and results in deviations in the centrosymmetry of polarization vectors, as well as differences in the characteristic pattern of positive and negative signal in Q and U images (often called a "butterfly" pattern because the symmetric positive/negative lobes look a bit like butterfly wings). The inclination of the disk (i.e. whether emission is "forward" or "back" scattered) also impacts the efficiency of scattering, as do grain properties such as size, composition, and porosity.
The most common variation on the process described above is to compute the so-called "azimuthal" or "local" Stokes Q and U vectors, often denoted \(Q_{\phi}\) and \(U_{\phi}\)(e.g. de Boer et al., 2020; Monnier et al., 2019) and defined as:
\[Q_{\phi}=-\,Q\cos(2\phi)-U\sin(2\phi)\] \[U_{\phi}=+\,Q\sin(2\phi)-U\cos(2\phi)\]
where \(\phi\) is the azimuthal angle. This formulation has the advantage of concentrating signal with the expected polarization vector orientation into the \(Q_{\phi}\) image, while the \(U_{\phi}\) image becomes an estimate of the noise induced by multiple scattering and instrumental polarization.
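As a concrete (and much simplified) companion to the equations above, the sketch below combines one half-waveplate cycle of dual-channel images into Stokes I, Q, U, polarized intensity, polarization angle, and the azimuthal \(Q_{\phi}\)/\(U_{\phi}\) maps. Instrumental polarization correction is ignored, the dictionary keys and star-position argument are illustrative assumptions, and the azimuth zero point follows one common convention (conventions differ between authors).

```python
import numpy as np

def stokes_from_pdi(pairs, star_yx):
    """Combine a cycle of (ordinary, extraordinary) image pairs into Stokes maps.

    pairs   : dict mapping half-waveplate angle in degrees (0, 22.5, 45, 67.5)
              to an (ordinary, extraordinary) image pair
    star_yx : (y, x) pixel position of the occulted central star
    """
    I = np.mean([o + e for o, e in pairs.values()], axis=0)        # total intensity
    Q = 0.5 * ((pairs[0][0] - pairs[0][1]) - (pairs[45][0] - pairs[45][1]))
    U = 0.5 * ((pairs[22.5][0] - pairs[22.5][1]) - (pairs[67.5][0] - pairs[67.5][1]))

    PI = np.hypot(Q, U)                                            # polarized intensity
    theta_P = 0.5 * np.arctan2(U, Q)                               # polarization angle

    yy, xx = np.indices(Q.shape)
    phi = np.arctan2(yy - star_yx[0], xx - star_yx[1])             # azimuth about the star
    Q_phi = -Q * np.cos(2 * phi) - U * np.sin(2 * phi)
    U_phi = Q * np.sin(2 * phi) - U * np.cos(2 * phi)
    return I, Q, U, PI, theta_P, Q_phi, U_phi
```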
### Reference Differential Imaging (RDI)
The Reference Differential Imaging (RDI) technique utilizes images of stars other than the science target taken at other times to subtract starlight from a target image. It is an ideal approach when either (a) the PSF of a system is exceptionally stable, often the case for space-based observatories such as HST, or (b) the source being targeted has extended, symmetric features (e.g. a circumstellar disk) that might be subtracted by more aggressive algorithms that rely only on images of the target star of similar color10 for reference (see next several sections). Reference PSF libraries for RDI generally consist of images of many other stars taken at the target wavelength and in the same observing mode (e.g. same coronagraph) with the same instrument. In the case where a large library of reference images is available (e.g. a large HCI campaign, a well-established space telescope instrument), just a subset of the most highly correlated images may be chosen to construct a PSF.
Footnote 10: Similarity in color is important in RDI primarily because WFS and detector wavelength ranges are often different. Ideally, the reference star should be of similar (or slightly higher, Debes et al., 2019) brightness at _both_ wavelengths so that its total flux on the detector (at the science wavelength) and the performance of the AO system (set by the star’s brightness at the WFS wavelength) are similar.
Some HCI observers, particularly of disks, regularly conduct PSF reference star observations as part of their efforts to observe a science target. PSF references are often chosen to be similar in location on the sky (so they can be observed interspersed with or immediately before or after the science target, at similar airmass), of similar apparent
brightness at the wavelength of the WFS (so that the AO system performs similarly 11), and of similar color (so that the science image(s) have similar properties). Some modern HCI systems (SPHERE, MagAO-X) are equipped with "star-hopping" modes that allow the AO loop to be paused on one target (e.g. the science target) and then re-closed once the telescope is pointed at another nearby target (e.g. the PSF reference star). This ensures maximal similarity in their PSFs.
Footnote 11: One clever trick that some AO observers use is to “pause” the AO control loop, slew the telescope to the reference star, and re-close it with all the same WFS algorithmic parameters in order to maximize this similarity
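A minimal sketch of the simplest form of RDI is given below: rank the reference library by its correlation with the target image, combine the best-matched frames, rescale to the target's flux, and subtract. The function and argument names are illustrative, and real pipelines typically mask known disk regions before computing the correlations and build more sophisticated PSF models (Section 5).

```python
import numpy as np

def rdi_subtract(target, ref_library, n_best=10):
    """Subtract a reference-star PSF built from the most correlated library frames.

    target      : 2D science image (star plus any disk/planet signal)
    ref_library : (n_refs, ny, nx) stack of reference-star images, same observing mode
    n_best      : number of best-matched reference frames to combine
    """
    tvec = target.ravel()
    rvecs = ref_library.reshape(len(ref_library), -1)
    # Pearson correlation of each reference frame with the target image
    corr = np.array([np.corrcoef(tvec, rv)[0, 1] for rv in rvecs])
    best = ref_library[np.argsort(corr)[-n_best:]]
    psf_model = np.median(best, axis=0)
    psf_model *= target.sum() / psf_model.sum()       # scale the star-dominated flux to the target
    return target - psf_model
```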
### Angular Differential Imaging (ADI)
The Angular Differential Imaging (ADI) technique builds on the legacy of "roll-subtraction" pioneered with the Hubble Space Telescope (HST, e.g. Schneider et al., 2014). It leverages angular diversity to separate stable and quasi-stable PSF artifacts from true on-sky emission. It is predicated on the assumption that the instrumental PSF remains (relatively) stable in the frame of reference of the instrument throughout the image sequence, while true on-sky signal rotates with the sky. This allows the time series of PSF reference images to be leveraged for pattern matching or statistical combination to estimate the stellar PSF and remove it from each image. In practical terms, the quality of any ADI-based subtraction is generally a strong function of the amount of on-sky rotation of the source. For this reason, most direct imaging target observations are roughly centered around the time of that object's transit across the meridian, as this maximizes the amount of rotation achieved for a given amount of observing time. Rotation is essential to reduce a phenomenon called "self-subtraction", in which the signal of a source (disk or planet) is present in
Figure 7: A schematic representation of the process of Reference Differential Imaging (RDI), in this case using Gemini Planet Imager H-band images of the debris-disk host HR4796A collapsed across all \(\sim\)40 wavelength channels of GPI. RDI utilizes a library of images of stars _other than the science target_ (Column B) obtained in the same observing mode. Generally, stars without any known disk or planet signal are chosen as references. These reference images can be combined simply (e.g. median combined, Column C) or used to build a custom PSF for each target image in the sequence (see Section 5). This PSF estimate is subtracted (Column D) to remove starlight in the image. In the case where the images were obtained with the instrument rotator off (typical for ground-based observing, see Section 4.3), these subtracted images are rotated to a common on-sky orientation (Column E) and combined (Column F).
a different but nearby location in the PSF image being subtracted, resulting in characteristic negative lobes on either side of the source where it has been subtracted from itself (hence the name).
The simplest form of ADI, so-called "classical" ADI (cADI), constructs a single PSF for subtraction from the median combination of all images in a time series, subtracting this median PSF from each image and then rotating these subtracted images to a common on-sky orientation. These PSF subtracted and re-oriented images are then combined, further suppressing the residual speckle field, which varies from image to image.
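A minimal sketch of that classical ADI procedure is shown below, using scipy for the derotation. The parallactic-angle sign convention is instrument dependent and is an assumption here, as are the array and argument names.

```python
import numpy as np
from scipy.ndimage import rotate

def classical_adi(cube, parangs):
    """Classical ADI: median PSF, subtract, derotate, recombine.

    cube    : (n_frames, ny, nx) image sequence taken with the instrument rotator off
    parangs : parallactic angle (degrees) of each frame; real sources rotate by this amount
    """
    psf = np.median(cube, axis=0)                          # PSF estimate in the instrument frame
    residuals = cube - psf                                 # starlight-subtracted frames
    derotated = [rotate(res, -ang, reshape=False, order=1) # align real signal on the sky
                 for res, ang in zip(residuals, parangs)]
    return np.median(derotated, axis=0)                    # combine; residual speckles average down
```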
### Spectral Differential Imaging (SDI)
HCI observing programs often aim not just to _detect_ exoplanets and circumstellar disks, but also to _characterize_ them, for which multiwavelength information is invaluable. Due to the many challenges of absolute photometric calibration in HCI (see Section 7), characterization is best facilitated by obtaining simultaneous imagery at multiple wavelengths. Thus, many modern HCI instruments are so-called "Integral Field Spectrographs" (IFSes). IFS instruments are used throughout Astronomy with a range of architectures, but in the case of HCI, they are generally of a fairly similar lenslet-based design. In lenslet-based IFSes, a grid of lenslets is placed in the focal/image plane of the optical system (not
Figure 8: Illustration of the classical Angular Differential Imaging (cADI) technique using a sequence of 40 Gemini Planet Imager coronagraphic H-band (1.6\(\mu\)m) images of the planet host Beta Pictoris (\(t_{exp}\)=1min). Images (column A) are collected with the instrument rotator off, allowing the sky to rotate. The instrumental PSF (including any quasi-static speckles) remains relatively stable in the instrument frame, while real sources rotate with the sky relative to the instrument frame. The image sequence is median combined to create an instrumental PSF (column B), which is then subtracted from each image (column C), derotated to a common on sky orientation (column D), and median combined again (column E). In this case, the planet Beta Pictoris b (coral circle) is bright enough to be seen in individual exposures. It is not present in the PSF, as it rotates with the sky and is thus not in the same position throughout the image sequence. The median PSF is not a perfect PSF reference, and image-to-image variation can be seen in column C. However, derotating and median combining these imperfect subtracted images results in a very clear detection of the planet.
unlike the grid of lenslets placed in the pupil plane of a SHWFS, see Section 2.2), and then the lenslet spots are dispersed to produce a spectrum at each location. Each "spectral pixel", or "spaxel" (also referred to as a "microspectrum"), contains spectral information _at a particular location in the image plane_. Although it requires post-processing to do so, raw IFS images can be converted into a cube of resolved, multiwavelength images by using arclamps to connect locations along each microspectrum with specific wavelengths, fitting these locations photometrically by leveraging some knowledge of the instrumental PSF, and then placing that photometric value in an array at the appropriate spatial location relative to other values. A raw IFS HCI image of the planet-host Beta Pictoris is shown in Figure 9 and broken down schematically.
Spectral Differential Imaging takes advantage of differences in the spectral properties of planet and starlight. In particular, it leverages images at wavelengths where planets are generally dim (e.g. for methane dominated planetary atmospheres, at 1.5 and 1.7\(\mu\)m) to construct a PSF model that is largely uncontaminated by planet light, limiting self-subtraction. Because images are collected at multiple wavelengths contemporaneously, this circumvents some of the effects of a temporally-varying PSF. As a result, the library of reference images are often better matched to the target PSF.
Most high-contrast SDI imaging to date has been done with an Integral Field Spectrograph such as GPI or SPHERE. These instruments separate the focal plane into a grid of so-called "spaxels" by focusing light on a grid of lenslets then passing the separated lenslet spots through a wavelength dispersing element to achieve a grid of microspectra. A wavelength solution is derived for each microspectrum based on observations of an internal arc lamp in the system, which are generally taken close in time to science observations as the wavelength solutions can be highly dependent on the flexure of the instrument. The microspectra are used to extract the brightness at a given wavelength for each spatial location (spaxel) and are then combined to create a cube of contemporaneously-obtained multiwavelength images. This is shown schematically in Figure 9.
SDI can also be implemented without an IFS by simply splitting incoming light into two beams with a 50/50 beamsplitter, dichroic, or Wollaston prism 12, and passing each beam through a different narrowband filter. This is sometimes called _Simultaneous_ Differential Imaging (still SDI). The SDI filter pairs lie on- and off- of a spectral line of interest, and the most common lines used in today's high-contrast imaging campaigns are on- and off-methane in the NIR and on- and off- H\(\alpha\) in the optical. In the case of young moving group stars (ages 10-300Myr), it is expected that planets will be faint or undetectable in the methane band due to absorption in giant planet atmospheres likely dominated by this gas, and brighter outside of the methane bands (see Figure 10). H\(\alpha\) differential imaging, on the other hand, leverages the fact that many younger (generally \(<\)10Myr) systems show evidence of ongoing accretion onto their central stars. The accreting material originates from and is processed through the circumstellar disk, meaning that any planets embedded in that disk are also likely to be actively accreting. One principal escape route for the energy of infalling material is radiation in hydrogen emission lines, particularly H\(\alpha\), and we expect accreting protoplanets to be bright at this wavelength and faint or undetectable in the nearby continuum.
Footnote 12: A 50/50 beamsplitter splits light equally across a wide wavelength range. A dichroic is transmissive for some wavelengths and reflective for others, resulting in preservation of all of the intensity at a given wavelength. A Wollaston prism is similar to a 50/50 beamsplitter for the case of unpolarized input light – it does not split light by wavelength, but rather by polarization state.
In terms of its utility as a tool to separate star and planet light, SDI in its most generic form (what we might term "classical" SDI imaging, shown schematically in Figure 10) simply leverages the fact that the physical size of a stellar PSF on a detector is a function of wavelength. For simultaneously-acquired imagery at multiple wavelengths (i.e. a 3D cube of images with 2 spatial coordinates and 1 wavelength coordinate), this manifests as a magnifying effect as wavelength increases, and means that PSF features shift radially outward in detector coordinates, while true on-sky objects remain at the same position regardless of wavelength. Much like ADI angular rotation, the size of this effect is well known (having a \(\lambda\)/D dependence), and it can therefore be compensated for in post-processing. By rescaling (expanding shorter wavelength images or compressing longer wavelength ones) simultaneously-obtained images at multiple wavelengths to a common PSF scale, the wavelength-independent features of the PSF can be estimated. This rescaling alters the position of real objects in the images so that they are no longer in precisely the same location at all wavelengths, thus the rescaled images can be combined (e.g. via median or weighted-mean combination) to construct a relatively planet-free PSF reference. This reference can be subtracted from the rescaled images and then the rescaling can be reversed to restore true on-sky coordinates, effectively realigning the planetary signals across wavelengths. These images can be collapsed in wavelength space to provide a robust planetary signal, enabling
detection or astrometric characterization. More commonly, however, wavelengths are kept separate and combined across a sequence of multiple IFS images. This enables extraction of planet photometry at each wavelength to create a coarse spectrum, with a spectral resolution controlled by how many spectral channels can be extracted across the wavelength range of the IFS, generally a few dozen over a \(\lessapprox\)0.5\(\mu\)m wavelength range, for resolutions on the order of \(\sim\)25-100.
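The sketch below walks through that "classical" SDI cycle for a single IFS cube: rescale every slice to a common PSF scale, build a PSF from the rescaled cube, subtract it, undo the rescaling, and collapse in wavelength. The interpolation order, the choice of the longest wavelength as the reference scale, and the plain median PSF are all simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rescale_about_center(img, factor):
    """Magnify (factor > 1) or shrink (factor < 1) an image about its center, keeping its shape."""
    cy, cx = (np.array(img.shape) - 1) / 2
    yy, xx = np.indices(img.shape)
    coords = np.array([(yy - cy) / factor + cy, (xx - cx) / factor + cx])
    return map_coordinates(img, coords, order=1)

def classical_sdi(cube, wavelengths):
    """Classical SDI on one IFS cube: rescale, subtract a common PSF, undo the rescaling.

    cube        : (n_wave, ny, nx) simultaneous images at each wavelength
    wavelengths : wavelength of each slice (any consistent units)
    """
    ref = max(wavelengths)
    # 1. expand shorter-wavelength slices so the stellar PSF has a common spatial scale
    rescaled = np.array([rescale_about_center(img, ref / lam)
                         for img, lam in zip(cube, wavelengths)])
    # 2. real sources are now smeared radially across slices, so a median is (mostly) planet-free
    psf = np.median(rescaled, axis=0)
    residuals = rescaled - psf
    # 3. reverse the rescaling so real sources line up again, then collapse in wavelength
    restored = np.array([rescale_about_center(res, lam / ref)
                         for res, lam in zip(residuals, wavelengths)])
    return restored.mean(axis=0)
```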
SDI processing is rarely used in isolation, and is rarely executed in the simple "classical" sense described above. Instead, it almost invariably applies more sophisticated PSF estimation techniques to create custom PSFs for each image and wavelength within the image cube (i.e. using KLIP or another algorithm). Combination of SDI and ADI processing allows the user to leverage both angular and spectral diversity to identify reference images where planets might reasonably be expected to have moved enough to prevent their surviving into any combination (either through angular rotation or image rescaling).
In addition to taking advantage of the physical rescaling of the instrumental PSF to identify images taken at the same or similar times to use as references, SDI processing also often involves the application of one or more planetary spectral
Figure 9: Schematic representation of the process of extracting a multiwavelength image cube from a single raw Integral Field Spectrograph (IFS) image. In this case, the background image is a raw H-band image of the star Beta Pictoris collected with the Gemini Planet Imager (GPI). Beta Pictoris has a known companion, Beta Pictoris b, whose light can be seen even in raw GPI images as a region of excess brightness in the wings of the stellar PSF, indicated in orange here. IFS instruments place a grid of lenslets in the focal plane, the light from each of which is passed through a dispersing element before reaching the detector. This creates an array of microspectra on the detector, one of which is highlighted in magenta here. Each microspectrum can be wavelength calibrated using arc lamps and its brightness extracted to create a single spectral pixel, or “spaxel” for each wavelength (representative wavelengths of 1.55, 1.65, and 1.75\(\mu\)m indicated in cyan, yellow, and red on the microspectrum) and location in the image plane. These spaxels can be stitched together algorithmically to produce simultaneous images of the star at a number of wavelengths, creating a multiwavelength image cube rather than a single broadband image.
templates to expand the reference library. For example, if we expect a planet with a methane-dominated atmosphere, such as the planet 51 Eridani b, then there are certain H-band wavelengths where we might expect methane absorption to make fainter planetary signals undetectable. We might leverage such wavelengths then as references, regardless of their wavelength separation from the image for which we are constructing a PSF.
The size of a stellar PSF is a function of wavelength; it increases as the wavelength does. Raw SDI image cubes are therefore not initially good references for one another. Their spatial scales must first be adjusted to a common magnification in order to construct a PSF library. While this makes the instrumental PSFs of the multiwavelength images match, an effect of this rescaling is that the true on-sky spatial scale varies across the wavelength dimension of the reference images. This can result in **radial** self-subtraction of the planetary PSF when planet light at another (rescaled) wavelength makes it into the library of reference images.
A distinct advantage of SDI is the acquisition of spectral information, which allows for atmospheric characterization of directly-imaged companions and composition analyses of circumstellar disks. Although the mechanics of the technique are somewhat different and outside of the scope of this tutorial (relying on the placement of optical fibers on and off of the known location of a directly imaged companion), it's worth noting that medium- and high-resolution spectroscopy is increasingly being used to much more finely characterize the atmospheres of directly imaged companions.
## 5 Algorithms for High-Contrast Image Processing
Figure 10: A schematic representation of the process of “classical” Spectral Differential Imaging (SDI). Simultaneous images of a star are obtained at a range of wavelengths at once, in this case IFS images of the star Beta Pictoris obtained with the Gemini Planet Imager at H-band (1.5–1.75\(\mu\)m). A representative set of 5 of 37 total wavelengths from the 3D image cube (2 spatial, 1 wavelength dimension) are shown in Column A, spanning a majority of the wavelength range. Each image is rescaled to compensate for the magnification of the stellar PSF with wavelength (Column B), placing instrumental PSF features on the same spatial scale (such as the satellite spots, one of which is indicated in yellow throughout). This rescaling, however, shifts the position of any real on-sky signal (such as the light from the planetary companion Beta Pic b, indicated in pink throughout). Rescaled images can then be combined (Column C) to create a relatively planet-free PSF (in this case by taking the weighted mean of the first and last few images in the rescaled image cube, where the planet light is farthest apart), which can be subtracted from each rescaled image (Column D) to remove a majority of the stellar signal. The rescaling must then be repeated in reverse (Column E) to re-align true on-sky signals before combination. Images can be combined in wavelength space to achieve detections or astrometric measurements (Column F), or the separate wavelengths can be retained and combined across a sequence of IFS images (Column G). Photometry of the planet can then be extracted from each combined image to construct a spectrum (Column H).
In addition to applying hardware (see Section 2) techniques to suppress starlight and differential imaging (see Section 4) techniques to facilitate separation of star and planet signal, most modern HCI efforts require additional post-processing beyond the "classical" versions described in Section 4, and the most common techniques to enable this are described in this section.
### Filtering
A common form of preprocessing for high contrast images is the application of so-called "high-" or "low-pass" filters to the data. This terminology refers to the spatial frequencies14 that are the least suppressed by the application of the filtering algorithm - they "pass through" the process relatively unscathed while other spatial frequencies are suppressed. A highpass filter allows through high spatial frequency signals such as narrow disk features and planets. A low-pass filter suppresses these signals while preserving extended structures such as the stellar halo or broad disk features.
Footnote 14: This is a Fourier analysis term, and can be understood through the relation between pupil and image plane discussed previously. When an image undergoes Fourier transform, the intensity of the resulting 2D function can be related to the strength of various “spatial frequencies” in the image. These can be thought of as maps of the degree of symmetry and typical size scale of variations in the intensity of the image.
Highpass filters are applied to high-contrast imaging data before and/or after PSF subtraction. A simple example of a highpass filter is the so-called "unsharp masking" technique, wherein an image is convolved with a simple kernel (often a Gaussian), and then this smoothed image is subtracted from the original. High spatial frequency structures are drastically altered (spread across many more pixels than their original extent) by this convolution, while low spatial frequency structures remain largely unaltered. Thus, the subtraction removes the low-frequency signals while preserving the high-frequency structure. There are a range of additional algorithms/strategies used to achieve highpass filtering, many of which are applied to the Fourier transform of an image in the frequency domain, but all of which are designed to serve the same purpose.
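A minimal sketch of the unsharp-masking high-pass filter (and its low-pass complement) is shown below; the kernel width is an illustrative choice that in practice is tuned to the spatial scales one wants to keep or reject.

```python
from scipy.ndimage import gaussian_filter

def highpass_unsharp(image, width=10.0):
    """Unsharp mask: subtract a Gaussian-smoothed copy of the image, keeping compact
    structure (speckles, planets, narrow disk features) and suppressing the smooth halo."""
    return image - gaussian_filter(image, width)

def lowpass(image, width=10.0):
    """The complementary low-pass filter: keep only smooth, extended structure."""
    return gaussian_filter(image, width)
```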
### PSF Post-Processing
A number of post-processing algorithms extend the concept of "classical" differential imaging to construct _custom_ PSF models for _every_ image in a time series _individually_, rather than adopting a single representative PSF for the entire image sequence. The two families of algorithm used most often are outlined below. Like ADI, RDI, and SDI,
| Technique | Abbr. | Requirements | Best for | Advantages | Disadvantages |
|---|---|---|---|---|---|
| Polarized Differential Imaging | PDI | Wollaston prism, rotating half-wave plate | disk morphology and grain studies | does not require PSF subtraction; combined with total intensity imagery, probes disk grain properties | instrumental polarization and multiple scattering effects difficult to isolate and remove; forward/back-scattering can result in only one side of a disk being detectable |
| Reference Differential Imaging | RDI | library of reference-star images | detection and photometry of extended disks | allows for characterization of disks of any morphology, including face-on | difficult to achieve reference star observations with well-matched PSFs; PSF star observations require additional observing time |
| Angular Differential Imaging | ADI | on-sky rotation | detection and photometry of planets, narrow disk structures | lots of on-sky rotation can enable more effective PSF subtraction close to the star | post-processed PSFs show azimuthal self-subtraction |
| Spectral Differential Imaging | SDI | simultaneous multiwavelength imagery | spectral characterization of planets, narrow disk structures | recovers spectral information, enabling characterization; can leverage knowledge/assumptions of the spectrum to improve PSF subtraction | post-processed PSFs show radial self-subtraction; range of planet movement from PSF rescaling is narrower |

Table 2: Comparison of differential imaging techniques.
both rely on assembly of a library of reference images (often other images of the target itself taken in the same imaging sequence), and these images are used to construct the PSF model(s) for the target image. They rely on correlation between the target image and the other images in the reference library, weighting most heavily the images that are most closely correlated with the target image 15. In this way, these algorithms are able to capture the time varying nature of the PSF and quasi-static speckles in images rather than relying on a single PSF for the entire image sequence. PSFs can be constructed for an entire image, or for azimuthally and/or radially divided subsections of the image, and these algorithms can be applied for ADI, SDI, RDI, and occasionally even PDI image processing.
Footnote 15: In the case of RDI processing of a disk-hosting star, some portion of the image known to host disk signal may be excluded from consideration (masked) before computing these correlations. This ensures that regions of relatively pure stellar signal drive the choice of reference images for PSF model construction and minimizes oversubtraction of disk signal.
For these more advanced PSF-subtraction algorithms, restrictions are placed on which reference images are used to estimate the PSF for a given target image. The specific images in the sequence that are excluded and included in the reference library will change for each target image. Exclusion of images taken nearby in time or wavelength is done to limit the amount of planet light that survives into the PSF model. The consequence of planet signal appearing in the PSF models is azimuthal (ADI) and/or radial (SDI) self-subtraction. Their effects are shown in Figure 11.
#### 5.2.1 KLIP
Karhunen Loeve Image Processing, or KLIP, is a statistical image processing technique in which images are converted to 1D column vectors and cross correlated with all other images in a time sequence. This application of Principal Component Analysis (PCA) allows for identification of common patterns ("principal components") in the image cube.
PCA is used in a range of contexts inside and outside astronomy to reduce the dimensionality of data. A simple example of how it works is to imagine a 3D scatterplot with evident correlations among the x, y, and z axis quantities
Figure 11: Azimuthal (top row) and radial (bottom row) self-subtraction of the planet Beta Pictoris b in KLIP-processed Gemini Planet Imager data. Azimuthal self-subtraction occurs in Angular Differential Imaging (ADI) when reference images where the planet’s signal fully or partially overlaps its location in the target image are included in the PSF reference library. Radial self-subtraction occurs in Spectral Differential Imaging (SDI) when rescaled (to match the scale of the target image) PSFs at nearby wavelengths contain planet signal (shifted inward or outward in the rescaling) that overlaps that of the target image. KLIP includes a threshold for the amount of angular or physical motion that a planet at a given location must undergo (due to angular rotation for ADI and PSF rescaling for SDI) before another image in the sequence can be included in the reference library for PSF subtraction. This is a tunable parameter, and both top and bottom panels depict a sequence of very aggressive (no threshold) to less aggressive reductions. An aggressive threshold generally provides better PSF subtraction (most evident at the center of the images) because the PSF library includes the images taken closest in time to the target image, but it also results in the highest degree of self-subtraction, evident in the characteristic dark-bright-dark of the post-processed planetary PSF, where the dark regions on either side of the core are referred to as “self-subtraction” lobes. For the least aggressive reductions, self-subtraction is minimal (though the presence of the planet in the KL modes can be seen in the negative lobes extending azimuthally in the case of ADI (top row) and radially in the case of SDI (bottom row)), but PSF subtraction is also less effective. Fainter planets nearer the star may only be resolvable with more aggressive reductions.
(as shown in Figure 12). The x, y, and z coordinates are, in such a case, not particularly good descriptors of the overall data, in that it is only in combination that they can describe its variation. If we were to instead define a first principal component axis along the line of best fit, this single variable would capture the most distinct first order pattern in the data (it is the best single descriptor of the data's variance). If we were to add a second, perpendicular axis (in PCA each principal component is required to be orthogonal to all others), it would point in the direction of maximum scatter off the line of best fit, a good second order descriptor of the variance in the data.
It's difficult to extend this toy example conceptually into high numbers of dimensions, but the principle is the same - each additional vector must be orthogonal to all the others and is chosen to describe the maximum amount of additional variance in the data. Conceptually, in the case of PCA for HCI applications, this corresponds to patterns across many pixels that are present in the target image and that repeat frequently in the reference images. The first few principal components generally contain large scale PSF structures like core and halo, and the highest order principal components generally look like different realizations of the speckle pattern. Adding principal components to the model therefore increases its "aggressiveness". This makes the likelihood of a well-matched PSF model higher, but also increases the likelihood that planet light will be oversubtracted or self-subtracted.
KL modes are basically the principal components of a library of reference images that have been transformed into 1D arrays (albeit with some complexities that I will not cover in detail here). Once they are computed, an individual image is "projected" onto these KL modes, which in practice looks like a weighted linear combination of the principal components. KLIP algorithms lend themselves easily to returning models of varying complexity (different numbers of KL modes) simultaneously, so PSF subtractions can readily be generated with a range of aggressiveness and then compared. In such cases, low numbers of KL modes correspond to more conservative reductions, in that they (a) contain only the most widely varying PSF structures, and (b) result in relatively lower probability of any true circumstellar signals (disk, planet) being picked up as patterns that persist across images. The probability of these signals being picked up in the KL modes is much higher for spatially extended disks than for planetary point sources, so KLIP-ed
Figure 12: A simple visualization of Principal Component Analysis (PCA). To describe the position of any one data point in this dataset, one could specify three coordinates - its x, y, and z location along the depicted axes. However, one could also provide a good (albeit imperfect) estimate of a point’s location by simply providing a single coordinate - its coordinate along a single vector that describes as much of the variation in the data as possible - the so-called “first principal component” (depicted in blue here). If we also specified that point’s location along an additional vector defined to be both: (a) orthogonal to the first principal component, and (b) pointing along the (orthogonal) direction describing the next greatest amount of variance in the data, this “second principal component” (depicted here in yellow), together with the first, would provide an even better estimate of the point’s location with only two coordinates. In high-contrast imaging, these patterns of covariance among images (principal components) can be used to model an image’s Point Spread Function (PSF) using Karhunen-Loeve Image Processing (KLIP), which is a variant of Principal Component Analysis.
disk images often use only a low number of KL modes (e.g. \(<\)10), while point-source reductions frequently use dozens to hundreds of KL modes. A schematic illustration of the KLIP process is shown in Figure 13.
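A minimal sketch of the core KLIP computation for a single target image is given below. It uses a plain SVD of the mean-subtracted reference library in place of the covariance-matrix eigendecomposition of the original formulation, applies no masks or subregions, and performs no exclusion of references by rotation or wavelength; all names are illustrative.

```python
import numpy as np

def klip_subtract(target, refs, n_modes=10):
    """Project a target image onto the leading KL modes of a reference library
    and subtract the projection as the PSF model.

    target  : 2D science image
    refs    : (n_refs, ny, nx) reference library (e.g. other frames in an ADI/SDI/RDI sequence)
    n_modes : number of KL modes retained; more modes = more aggressive subtraction
    """
    shape = target.shape
    R = refs.reshape(len(refs), -1).astype(float)
    mean_psf = R.mean(axis=0)
    R -= mean_psf                                  # modes describe variations about the mean PSF
    _, _, vt = np.linalg.svd(R, full_matrices=False)
    modes = vt[:n_modes]                           # orthonormal eigen-images ("KL modes")

    t = target.ravel().astype(float) - mean_psf
    coeffs = modes @ t                             # projection of the target onto each mode
    psf_model = mean_psf + coeffs @ modes
    return (target.ravel() - psf_model).reshape(shape)
```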
#### 5.2.2 LOCI
The Locally Optimized Combinations of Images (LOCI, Lafreniere et al., 2007) technique constructs a PSF model by weighting and combining some number of images from the reference library as a PSF model for the target image. In its original form, the algorithm computes a least-squares fit to the target image using weighted linear combinations of the images in the reference library, with the goal of minimizing the residuals in the difference of the target image and the PSF model. Since it was originally developed, several enhancements have been made to the LOCI algorithm. A non-exhaustive list of these enhancements is provided below.
_Template LOCI_--(TLOCI, Marois et al., 2014) was specifically designed for SDI imaging and its aim is to maximize the SNR of planets with a specified spectral shape. The user specifies a planet spectrum (e.g. flat, methane-dominated, etc.) and sets a threshold for the amount by which the planet's flux is allowed to be reduced by self-subtraction (due to both azimuthal FOV rotation with time and radial PSF magnification with wavelength). Using simulated planets, the amount of self-subtraction in each reference image is quantified. Images with predicted self-subtraction above a certain threshold are excluded from the reference library before the least-squares fit is computed.
_Adaptive LOCI_--(ALOCI, Currie et al., 2012) implements an additional step of subtracting the radial profile of the star (the seeing halo) so that the speckle patterns among images can be readily compared. It also constructs a reference library from only the most correlated reference images (those above a certain user-defined correlation threshold).
Figure 13: Illustration of the Karhunen-Loeve Image Processing (KLIP) technique. This technique can be applied to ADI, SDI, and RDI imagery, but is shown for the ADI case here. Like Figure 8, this visualization utilizes a sequence of 40 Gemini Planet Imager coronagraphic H-band (1.6\(\mu\)m) images of the planet host Beta Pictoris. Images (column A) are collected with the instrument rotator off, allowing the sky to rotate. A collection of other images in the sequence (column B) are assembled for PSF modeling of **each** target image in the ADI sequence. Algorithmic controls determine the degree of “aggressiveness” in including or excluding reference images taken near in time to the target image, where planetary signal may overlap (excluded images shown with red x symbols in column C). Principal Component Analysis of the reference library and target image allows for construction of one or more PSF models of tunable complexity (number of principal components in the model, column D depicts N=5 components). As in cADI, these models are subtracted from the target image (column E), derotated to a common on sky orientation (column F), and combined (column G) to reveal the planet.
_The Signal to Noise Analysis Pipeline_--(SNAP, Thompson & Marois, 2021) directly optimizes the non-linear signal-to-noise equation for a planet at a given location by dividing the vicinity of a planetary signal into an annular "optimization region" and a smaller semi-annular "subtraction region". Forward-modeled planet photometry, a vector of coefficients for the linear combination, and an estimate of the noise derived from those coefficients are optimized to maximize signal-to-noise ratio.
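All of these LOCI variants share the same core operation: a least-squares fit of the target image to a weighted linear combination of reference images. A minimal, whole-image sketch of that step follows; in a real LOCI reduction the coefficients are solved for in local optimization regions and applied in smaller subtraction regions, and the reference-selection rules described above are applied first.

```python
import numpy as np

def loci_subtract(target, refs):
    """Subtract the least-squares combination of reference images that best matches the target.

    target : 2D science image
    refs   : (n_refs, ny, nx) reference library (frames where any planet has moved away)
    """
    A = refs.reshape(len(refs), -1).T              # (n_pixels, n_refs) design matrix
    b = target.ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None) # weights minimizing the residuals
    psf_model = (A @ coeffs).reshape(target.shape)
    return target - psf_model, coeffs
```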
## 6 Comparison of Techniques
Now that we've introduced both differential imaging techniques more generally and some of the processing algorithms that we use to extend them and isolate light from extremely faint circumstellar signals, we can compare the relative efficacy of and situations best suited to application of each technique. These considerations are summarized in Table 2. Another useful tool for comparing and contrasting techniques is examination of images generated with each technique for the same object. This is provided in Figure 14 using both a very faint planetary signal (that of 51 Eridani b, Macintosh et al., 2015) and a debris disk whose narrowness facilitates recovery under all of the algorithms (HR 4796A, Arriaga et al., 2020).
Both Table 2 and Figure 14 highlight the fact that choosing a technique requires consideration of many factors, including both the feasibility of the observations and the specific science aims. An important takeaway is that differential imaging techniques can be especially powerful in combination. For example, recovery of a disk signal in both PDI and RDI or ADI imaging allows for computation of the polarization fraction (P=PI/I), a sensitive probe of the disk's grain properties. For planets, recovery of signal via multiple processing techniques lends credence to its nature as a _bona fide_ planet. In other words, the various techniques neither compete with nor supersede one another - all are needed to construct a full picture.
The raw contrast limit, which is generally computed as 5 times the standard deviation of the noise at a given separation in the post-processed images, is not a true measure of the achieved sensitivity.
In order to make a more accurate calculation, the algorithmic "throughput" must be computed by injecting sources into the image at a range of separations and quantifying their recovered brightnesses. Throughput is defined as the ratio of an object's recovered to injected brightness (generally computed via the brightness of the peak pixel at the location of the source before and after PSF subtraction). Like contrast itself, it is a strong function of separation from the star. Throughput for most high-contrast imaging algorithms is low close to the star, meaning that source brightness is heavily suppressed in the PSF subtraction process, and approaches 1 at greater distances (meaning the planetary signal is relatively unaltered by PSF subtraction). The best estimate of recoverable planet brightness is therefore the 5\(\sigma\) noise level of the image _divided by_ the algorithmic throughput at each separation from the star. This is sometimes called the "throughput-corrected" contrast, but is most often just referred to as "the contrast".
When computing throughput, an important consideration is overlap/crosstalk between injected sources, which can result in incorrect estimates. As sources can overlap both azimuthally and radially, the general approach for point source detection limits has been to inject false planets in an outwardly spiraling pattern with appropriate separations radially and azimuthally. Computation of throughput also requires a choice of injected contrast for each false source. Generally, a low to moderate contrast is chosen and set uniformly throughout the injected planet spiral so that recovery is assured, however it is likely that injected object throughput is, at least to some extent, a function of brightness.
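Putting those pieces together, the sketch below assembles a throughput-corrected 5\(\sigma\) contrast curve from a post-processed image, the star's (unocculted or satellite-spot-derived) peak brightness, and the throughputs measured from injection-recovery tests. The annulus width and argument names are illustrative choices, and the small-sample correction discussed in the next subsection is omitted here.

```python
import numpy as np

def contrast_curve(residual_image, star_peak, center, seps, throughputs, dr=4):
    """Throughput-corrected 5-sigma contrast at a set of separations.

    residual_image : post-processed (PSF-subtracted) image
    star_peak      : peak brightness of the unocculted star (e.g. from satellite spots)
    center         : (y, x) pixel position of the star
    seps           : separations (pixels) at which to evaluate the curve
    throughputs    : recovered/injected flux ratio of fake planets at each separation
    dr             : annulus half-width in pixels
    """
    yy, xx = np.indices(residual_image.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    curve = []
    for sep, tp in zip(seps, throughputs):
        annulus = residual_image[(r > sep - dr) & (r < sep + dr)]
        noise = np.std(annulus)                        # raw noise estimate in this annulus
        curve.append(5 * noise / (tp * star_peak))     # 5-sigma limit, corrected for throughput
    return np.array(curve)
```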
### Limitations of Contrast as a Metric
Contrast curves have several limitations. First, they are sensitive to post-processing choices (e.g. KLIP parameters), therefore optimization can be computationally intensive. Second, they generally assume azimuthal symmetry in the sensitivity of post-processed images where in reality, stellar PSFs often have azimuthally dependent structure. One common example of this is the so-called "wind butterfly" effect wherein lobes of higher noise/lower contrast are apparent on either side of a star in the direction of the wind in high-contrast images. This means that neither noise nor algorithmic throughput is truly azimuthally symmetric. One way to mitigate this is to inject false planetary signals at various locations azimuthally and measure how well they are recovered at each orientation. For example, one might inject three spirals of false point sources with the spiral clocked by 120\({}^{\circ}\) each time in order to more fully sample the azimuthal variation in throughput.
Figure 15: A schematic diagram illustrating how to read a contrast curve. At a given contrast and separation, a planet is detectable when it lies _above_ the curve. Achieved contrast is a steep function of separation from the central star, with only the brightest planets detectable at tight separations.
Figure 16: Demonstrated (solid lines) and predicted (dashed lines) contrast performance of various current and future HCI instruments. Lines and points are color coded by wavelength of observation. Points indicate both detected (solid outline) and simulated (dashed outline) planets. (_code and data source: V. Bailey_)\({}^{\rm a}\)
A further complication is in the definition of the "noise" in post-processed images. The most standard metric is the standard deviation of the post-processed image computed in small concentric annuli extending outward from the star. The convention in high-contrast imaging is to consider sources whose peak recovered brightness is at least 5 times above the noise level to be robust detections, and objects in the 3-5\(\sigma\) range to be marginal. Many contrast curves reported in the literature are so-called "5\(\sigma\)" contrast curves, but 3 or even 1\(\sigma\) curves are also sometimes reported and one must be careful to understand and correct for any differences when comparing contrasts among surveys. To put it plainly, all contrast curves should be interpreted as relatively rough and fuzzy boundaries between detectable and undetectable planets.
One final consideration in computing and interpreting noise in a post-processed image is that the dominant noise source close to the star is stellar speckles. In this speckle-dominated regime, there is a strong correlation between flux in adjacent pixels, since the stellar PSF has a width of several to many pixels. This has led to a best practice of implementing t-distribution rather than Gaussian noise statistics at tight separations, accounting for the small number of independent samples close to the star. In practice, this means inflating the detection threshold: the computed standard deviation at a given separation is multiplied by the factor \(\sqrt{1+1/n_{2}}\), and the 5\(\sigma\) Gaussian threshold is replaced by the corresponding Student-t quantile (Mawet et al., 2014), where \(n_{2}\) is the number of independent noise realizations at that separation (\(\sim 2\pi r/FWHM\)).
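A compact way to apply this correction, assuming the false-positive fraction of a Gaussian 5\(\sigma\) test is held fixed as in the Mawet et al. (2014) prescription, might look like the following; the function name and the minimum-sample floor are illustrative choices.

```python
import numpy as np
from scipy import stats

def small_sample_threshold(sigma_hat, separation, fwhm, fpf=2.867e-7):
    """Detection threshold near the star following Mawet et al. (2014):
    keep the Gaussian 5-sigma false-positive fraction `fpf` fixed, draw the
    quantile from a Student-t distribution, and inflate the measured noise
    by sqrt(1 + 1/n2).  Far from the star this tends back to ~5*sigma_hat."""
    n2 = np.maximum(2 * np.pi * np.asarray(separation) / fwhm, 2.0)  # independent samples
    t = stats.t.isf(fpf, df=n2 - 1)                                  # t-quantile replaces "5"
    return t * np.asarray(sigma_hat) * np.sqrt(1.0 + 1.0 / n2)
```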
In summary, there are several important questions to ask oneself when studying a contrast curve.
1. Is it throughput corrected? If not, remember that the true limit is likely at lower contrast (a higher curve).
2. By what factor has the noise level been multiplied (1, 3, 5)? If less than five, recall that objects near the curve might be considered marginal or non-detections.
3. Has the noise level been corrected to reflect appropriate noise statistics near the star? If not, the true limit may be a steeper function of separation from the star than depicted.
4. How azimuthally symmetric is the post-processed image? If azimuthal structure is apparent, the curve should be interpreted as an average. In some parts of the image, objects below the curve may be detectable; in others, objects above the curve may be undetectable.
Furthermore, one must keep in mind when comparing contrast curves between studies and instruments that these choices may not be uniform among them and the curves may not be directly comparable. For these reasons, it is important when planning observations and interpreting detections (or non-detections) relative to contrast performance, to carefully read contrast curve descriptions and discern these important details. You may practice contrast curve comparison and parsing of these details by perusing Figure 16, which compares demonstrated and expected contrast of a range of current and future HCI instruments in several wavelength regimes.
#### 7.2.1 Aside: Contrast Curves for Disk Detections
Many of the points in the discussion above are altered or invalid for extended sources. Throughput, for example, is extremely difficult to compute for disks when their azimuthal and/or radial extent is large. Generally speaking, HCI disk detections utilize more conservative post-processing algorithms and observing techniques such as RDI for which throughput is much higher.
### Signal-to-Noise Calculation
Signal-to-noise maps are standard in all fields of astronomy. In the case of direct imaging of point sources through PSF subtraction, there are several subtleties in computing them. First, the post-processed planetary PSF has characteristic "self-subtraction lobes" on either side of the planetary core. These are caused by the presence of the planet at different azimuthal angles in the reference library, and therefore in the KL modes. The region containing the planetary core and self-subtraction lobes needs to be excluded in order to robustly estimate the noise at comparable radial separation from the star; this is typically done by masking this region and computing the standard deviation of the remaining pixels at a given radial separation, which yields a better estimate of the true noise level. The nature of the speckle-dominated region of the PSF also means that independent samples of the noise at a given radial separation are defined by the size of a speckle (the PSF FWHM), leaving relatively few independent noise samples at tight radial separations and requiring t-distribution noise statistics (Mawet et al., 2014).
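The sketch below illustrates this recipe for a single candidate: the noise is measured in the candidate's annulus with a patch around the candidate masked, and the small-sample penalty is applied. All names and the 2\(\times\)FWHM mask size are illustrative choices rather than conventions of a specific package.

```python
import numpy as np

def candidate_snr(image, star_xy, cand_xy, fwhm):
    """SNR of a point-source candidate: peak flux versus the standard deviation
    of same-separation pixels, with a patch around the candidate (core plus
    self-subtraction lobes) masked out of the noise estimate."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - star_xy[0], y - star_xy[1])
    r_cand = np.hypot(cand_xy[0] - star_xy[0], cand_xy[1] - star_xy[1])

    annulus = np.abs(r - r_cand) < fwhm / 2                    # same radial separation
    near_cand = np.hypot(x - cand_xy[0], y - cand_xy[1]) < 2 * fwhm
    noise_pix = image[annulus & ~near_cand]

    signal = np.nanmax(image[annulus & near_cand])             # peak of the candidate core
    n2 = max(2 * np.pi * r_cand / fwhm - 1, 2)                 # independent noise samples
    noise = np.nanstd(noise_pix) * np.sqrt(1 + 1 / n2)         # small-sample penalty
    return signal / noise
```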
### Astrometric, Photometric, and Spectral Extraction
PSF-subtraction techniques, while powerful for isolating faint signals, complicate the extraction of accurate astrometry, photometry, and spectra from a detected object. At the most basic level, this is because the process of PSF subtraction does not conserve the original planet signal.
A number of strategies are used to mitigate these complications and extract robust estimates of planetary photometric, astrometric, and spectral signals in HCI.
_False Planet Injection--_Injection and recovery of false planet signals in the image helps to quantify the amount of planetary signal lost during image processing (as described in Section 7.1). This is used in turn to correct photometry and estimate the true broadband intensity of planet light at a given wavelength. Similarly, false planets can be used to quantify astrometric and photometric uncertainties, often by injecting them into raw images at the same or similar radius as the planet candidate(s) and using the statistics of their recovered vs. injected locations and fluxes as the uncertainties on the companion's astrometry and photometry.
_Forward Modeling--_Injection of a model companion or disk into raw images and examination of its morphology, astrometry, and photometry in post-processed images is known as "forward modeling". The properties of these false planets (brightness, location, FWHM) or disks (extent, inclination, radial brightness distribution) are iterated upon and the forward models compared to post-processed data. This process is essential in interpreting post-processed images, which suffer from both self-subtraction (see "Signal-to-Noise Calculation" above) and so-called "over-subtraction", in which some of the planet or disk signal is flagged by the algorithm as noise and subtracted. Generally, forward models are tuned by attempting to minimize residuals in the difference of the PSF subtracted image and the forward modeled image. In many cases, models are injected not into the target image sequence, but into a reference image sequence or at a wavelength in the target sequence at which the signal is absent or minimized. Post-processed signals are dependent on their azimuthal and radial location, and on the precise PSF, which is wavelength dependent, so neither of these techniques provides a perfect match. However, Pueyo (2016) showed that a post-processed PSF can also be modeled for a particular location mathematically, without altering the original images, by propagating a perturbation to the covariance matrix forward through the algorithm (KLIP or LOCI). This removes the problem of mismatch by constructing a forward model at the same location and wavelength, and the authors demonstrated its ability to boost the accuracy of spectral extraction. Inferences made via forward-modeling are, however, limited by our ability to accurately model the true planet or disk signal, which is particularly difficult for complex off-axis or time-varying PSFs and non-axisymmetric disk structures. Nevertheless, post-processed PSFs, by virtue of our precise knowledge of their constructed photometry and astrometry, are powerful probes of the effects of PSF subtraction on the properties of real signals.
_Negative Planets--_Another robust technique for determining planetary flux and location is to inject **negative** false planets into the raw image sequence at the location of the planet candidate, effectively canceling its signal. The residuals following PSF-subtraction are then minimized to determine a best fit. Although this results in quite robust photometry and astrometry estimates, arguably better than using forward modeling, it is computationally intensive and uncertainties on this technique are harder to estimate. Often observers assign error bars "by eye" to capture the range of values that result in good subtractions. For example, an appropriate flux scaling should result in near-zero residuals and not clear over- or under-subtractions (i.e. clear residual planetary excess or a clear residual negative signal at the planet location).
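Schematically, the negative-planet fit is a low-dimensional optimization wrapped around the existing reduction. In the sketch below, `inject` and `reduce_fn` are hypothetical stand-ins for the pipeline's own injection and PSF-subtraction routines, and the aperture radius is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import minimize

def negative_planet_fit(raw_cube, pas, guess, inject, reduce_fn, star_xy, ap_radius=4):
    """Fit (separation, position angle, flux) of a candidate by injecting a
    negative planet into the raw frames and minimizing post-processed residuals.
    `inject(cube, pas, sep, pa, flux)` and `reduce_fn(cube, pas)` are hypothetical
    stand-ins for the pipeline's own injection and PSF-subtraction routines."""
    def cost(params):
        sep, pa, flux = params
        cube = inject(raw_cube, pas, sep, pa, -flux)            # cancel the candidate
        residual = reduce_fn(cube, pas)
        y, x = np.indices(residual.shape)
        px = star_xy[0] + sep * np.cos(np.deg2rad(pa))
        py = star_xy[1] + sep * np.sin(np.deg2rad(pa))
        ap = np.hypot(x - px, y - py) < ap_radius
        return np.nanstd(residual[ap])                          # RMS left at the planet site
    return minimize(cost, guess, method="Nelder-Mead")
```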
## 8 Potential Sources of False Positives
Direct imaging detections are intrinsically difficult, testing the limits of current technology, and there are a range of both astrophysical and instrumental false positive possibilities.
### Background Objects
One astrophysical false positive that can mimic a directly imaged companion signal is the coincidental alignment of a distant background source with a young star. This scenario is a possibility any time a faint point source is detected near a young star, thus it is among the first forms of vetting that all candidate planets are subjected to. For an initial single epoch detection, there are two important pieces of information that are used to assess the probability of a candidate being a background source - (1) the proximity of the target star to the galactic plane and (2) its spectrum.
Coincidental alignments are much more common in the galactic plane, so the probability of false positives is higher in this case. As the most common background objects masquerading as planet candidates are distant red giants, spectral information - either true spectra or NIR colors - is also crucial in assessing the probability that a faint apparent companion is truly a young planet or brown dwarf.
With a few notable exceptions (e.g. 51 Eri, Macintosh et al., 2015, an object whose methane-dominated spectrum made its planetary nature clear from the outset), planet candidates are rarely announced until they have undergone an additional form of vetting - that of common proper motion with their host stars. Because the targets of direct imaging campaigns are close (generally \(<\)50pc), a necessity in order to achieve the requisite contrasts at planetary separations, their proper motions are invariably higher than those of distant background objects. Thus, most planet candidates are only confirmed after obtaining a second epoch observation months or years after the initial detection to confirm that the candidate and host star exhibit the same proper motions over that time period, as shown schematically in Figure 17. Candidates are ruled out if they exhibit little to no proper motion between epochs.
In principle, establishment of common proper motion could be complicated by the additional motion of a true bound companion as it orbits its host star. In practice, however, most planet candidates are separated from their hosts by large enough physical separations that orbital motion is negligible compared to proper motion.
The most insidious form of false positive in establishing common proper motion is the coincidental alignment of an unbound foreground or background object with non-negligible proper motion and the target star. If the proper motion vectors of the two objects are in rough alignment and of similar magnitude, the time baseline needed to distinguish a comoving object is longer. Such was the case with the apparent planetary companion HD 131399Ab (Nielsen et al., 2017).
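The common proper motion test itself reduces to simple vector arithmetic: compare the candidate's measured epoch-2 offset from the star with the offset expected for a stationary background source, which appears displaced by minus the star's own sky motion in star-relative coordinates. The sketch below assumes offsets in milliarcseconds, position angles measured east of north, and a linearized treatment of parallax; the tolerance and the neglect of orbital motion are illustrative simplifications.

```python
import numpy as np

def proper_motion_test(sep1, pa1, sep2, pa2, star_motion, dt, tol_mas=10.0):
    """Compare the measured epoch-2 offset of a candidate (relative to the star)
    with the offsets expected for (a) a comoving companion, which barely moves,
    and (b) a stationary background source, which appears displaced by minus the
    star's own sky motion.  `star_motion` is the star's proper motion plus an
    approximate parallax drift in mas/yr; `dt` is the epoch baseline in years."""
    def to_xy(sep, pa):
        pa = np.deg2rad(pa)
        return np.array([sep * np.sin(pa), sep * np.cos(pa)])   # (east, north)

    measured = to_xy(sep2, pa2)
    comoving = to_xy(sep1, pa1)                                  # orbital motion neglected
    background = comoving - np.asarray(star_motion) * dt
    return {"comoving": np.linalg.norm(measured - comoving) < tol_mas,
            "background": np.linalg.norm(measured - background) < tol_mas}
```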
### Disk Features
Another form of astrophysical false positive results from the prevalence of circumstellar material around the young stars targeted for direct imaging. Upon PSF subtraction, disk features can masquerade as planets, especially in cases where they are narrow (surviving highpass filtering) and non-axisymmetric (not shared with many images in the reference library). This is especially problematic for younger systems (\(<\)10Myr), where such features are ubiquitous (e.g. Benisty et al., 2022).
In the case of older (\(>\)10Myr) objects, for which the initial protoplanetary disk has usually either been incorporated into companions or dissipated, we see primarily second generation dust generated by the grinding of asteroids and/or comets in belts akin to our own asteroid and Kuiper belts. These belts tend to be fairly symmetric and have limited
Figure 17: A schematic depiction of the process of determining common proper motion for a companion candidate (red circle) bound to a host star (yellow star). If the candidate is a true companion, then its motion over time (e.g. between epochs t\({}_{1}\) at left and t\({}_{2}\) at right) will closely follow the sky motion of the star (a combination of parallax and proper motion, shown as a dashed line). Companion host stars are generally close to Earth, with a higher degree of proper motion and parallax than more distant background stars, which move very little between epochs. The orbit of the bound companion around the host star (not depicted here) can complicate this somewhat, but orbital motion is generally slow for the widely-separated directly imaged companions detected to date. Importantly, color alone is rarely enough to determine whether a companion is bound or not, as background red giants share similar colors to directly imaged companions.
spatial extent, making them much less likely to be confused for planet candidates. In the case of known disk-bearing systems, candidate planets are vetted in several ways.
_Comparison with known disk features--_in both millimeter thermal emission and NIR scattered light (especially PDI-resolved features) informs the probability of confusion occurring at the location of a planet candidate. In cases where a candidate is well inside of a cleared cavity (e.g. PDS 70b, Keppler et al., 2018), the odds of confusion are minimal.
_Colors or spectra--_of companion candidate(s) can be compared to those of the star. In a case where the star and candidate spectra closely match, odds are good that the candidate has a substantial scattered light component. This could mean an envelope or disk around a planet, or a clump of disk material that has not yet formed a planet. In cases where a planet candidate exhibits a substantially different spectrum from that of the star, it is considered strong evidence for a planetary nature.
_Multiepoch information--_can be obtained to distinguish static disk features from orbiting companions. This is complicated in the case of disk features such as planet-induced spiral arms, which likely rotate with a pattern speed equal to the orbital speed of the companion inciting them. An important test is, therefore, whether apparent point sources that lie along spiral arms orbit with the speed of a companion at the point source's orbital separation. If they orbit faster (or slower), this is consistent with incitement by a different planet on a closer (or more distant) orbit.
_The robustness of the signal among post-processing techniques--_particularly those that vary somewhat in "aggressiveness". In the most insidious cases, the presence of an extended but narrow disk feature at different azimuths in the PSF reference library can lead it to appear point-like in post-processed images. Persistence of the feature across PSF subtraction algorithmic properties, and in particular its persistence across various HCI techniques, helps to distinguish this scenario from a true point-like source. In cases where the disk structures are well constrained (e.g. from PDI imaging), forward modeling can be used to understand the likely appearance of disk structures following PSF subtraction and compared against the images. RDI and cADI are considered the most conservative processing techniques, while LOCI-ADI and KLIP-ADI are more "aggressive" in that they tend to model smaller spatial scale PSF features, and model mismatch can therefore result in smaller spatial scale apparent substructures that mimic planetary signals. Tunable parameters in the algorithms, such as the degree of rotational masking, the size of the regions for which PSFs are constructed separately, and the complexity/number of modes applied to construct the model, can be altered to be more or less aggressive. For example, including images in the reference library that are close in rotational space (a small rotational mask), constructing custom PSFs for very small regions of images, and increasing the number of modes in the PSF model all represent more "aggressive" reductions that will effectively remove stellar signal, but will also increase the rate of false positives. These parameters can be relaxed or iterated over to probe the robustness of any apparent signals.
Various other optical artifacts, quasi-static speckles, cosmic rays, and speckle noise can in principle masquerade as planets in post-processed images. In general, the properties of such artifacts should not closely mimic those of true astrophysical sources (e.g. by demonstrating self-subtraction). Nevertheless, careful analysis of false alarm probabilities is important in conducting HCI, particularly for low SNR recoveries. The gold standard in candidate vetting remains multiepoch, multi-wavelength, multi-instrument observations of candidates demonstrating common proper motion with the host star and evidence of a non-stellar spectrum.
## 9 Other related technologies
Although this tutorial is focused specifically on ground-based, non-interferometric direct imaging techniques, there are several highly related or complementary techniques that are worth highlighting.
_Interferometric Techniques--_can be applied in HCI in several ways. First, the beams from multiple telescopes can be combined in the classic sense to both collect more light and achieve higher resolution than is achievable with a single telescope aperture (because the resolution of an interferometer is \(\lambda\)/2B, where B is the longest Baseline distance between telescopes). Even in the case where multiple telescopes are not available for use in the classical sense of an interferometer, a technique called "Non-Redundant Aperture Masking" (Nakajima et al., 1989) can be used to achieve higher resolution on a single telescope. NRM requires the application of a pupil mask that is mostly opaque but contains a number of holes, each pair of which has a different separation and therefore probes a different spatial frequency. The maximum resolution achievable under this technique is half of the classical diffraction limit (\(\lambda\)/2B), giving a distinct advantage at tight inner working angles for imaging companions. All interferometric imaging requires some degree of
image reconstruction and is innately model-dependent, but these techniques nevertheless open up additional discovery space at high spectral and/or spatial resolution.
_Space-Based HCI--_is another important complementary technique. It shares many features with ground-based HCI, including the need for wavefront sensing and control, image post-processing, and application of differential imaging techniques. Adaptive optics is in principle unnecessary in space, though some space-based HCI concepts use much lower cadence active mirror control to correct for slower (e.g. thermal) drifts in the shape of incoming wavefronts. Reference differential imaging is in many ways more powerful in space because of the innate stability of space-based instrumental PSFs, allowing in some cases for a reference library composed of images of tens to hundreds of sources in addition to the science target. Although space-based telescopes cannot leverage the rotation of the Earth to accomplish Angular Differential Imaging, they can apply a similar technique called "Roll Subtraction" by rotating the telescope around its optical axis during an imaging sequence. The amount of achievable rotation and the number of reference angles in such cases is small (e.g. 2 reference angles separated by \(\sim\)15deg), but has nevertheless proven effective at accomplishing differential imaging in space. Spectral Differential Imaging is more or less unchanged in the space-based imaging scenario, as is Polarized Differential Imaging in principle, though there are no plans to include PDI capabilities on any near-future space-based HCI missions.
_Sub-mm Interferometry--_is a fully unrelated technique to HCI, but is nevertheless highly complementary, particularly for understanding scattered light disk features and protoplanets. Interferometric sub-mm arrays, particularly ALMA, provide a key piece of the puzzle in that they probe thermal emission from large grains in the midplane of disks. Together with information from NIR HCI of the surface layers of the disk, as well as millimeter emission from molecular gas species, a holistic picture of a disk system can be formed that encompasses all three key components - large grains, small grains, and gas. Very high-resolution millimeter continuum imaging can even probe the presence of circum_planetary_ dust and gas, providing compelling additional evidence for the presence of protoplanets.
## 10 Conclusion
Over the past fifteen years, ground-based High-Contrast Imaging has proven to be a robust and versatile way to probe the properties of young exoplanets and circumstellar disks. Using adaptive optics and wavefront sensing/control algorithms, atmospheric scintillation can be sensed and corrected for, allowing large ground-based telescopes to achieve diffraction limited or nearly diffraction-limited imaging at optical and near infrared wavelengths. HCI instruments often utilize coronagraphy to apply first-order suppression of incoming starlight, allowing faint nearby signals to be detected. Differential imaging techniques are then applied to leverage polarimetric, spectroscopic, target object, and angular diversity in the data to identify and remove starlight. Post-processing algorithms with various degrees of complexity and aggressiveness are then applied to enable detection of signals that are several orders of magnitude fainter in contrast, as well as detailed spectroscopic, photometric, and astrometric characterization. Signals are vetted by demonstrating common proper motion with the host star, robustness to algorithmic parameters, consistency with forward models, diversity in polarimetric or spectral properties relative to their host stars, and/or persistence across epochs, wavelengths, and instruments. HCI instruments and reduction techniques are necessarily complex in order to overcome the tremendous contrast and angular resolution barriers required to directly isolate the light from exoplanets and circumstellar disks. Yet, these techniques provide the best future prospects for someday detecting and characterizing an exo-Earth.
This tutorial was designed as an introduction for beginners, and is not comprehensive in its technical details. My hope is that it will enable those just getting started in the field to access more technical HCI instrument manuals and published results. To learn more about the current state of the art in high-contrast imaging, please see bit.ly/beginHCI, which provides a "Reading/Viewing List for Beginning High-Contrast Imagers".
## 11 Acknowledgements
I would like to thank the wonderful undergraduate and graduate students in my Spring, 2023 research group for the many group meeting sessions of figure critiques that they engaged in - this article is much better for their feedback. They are: Sarah Betti, Jada Louison, Cat Sarosi, Cailin Plunkett, Alyssa Cordero, and Adrian Friedman. Thank you to Kim Ward-Duong and Cat Sarosi for their thorough reviews of the text of the article, and to Bruce Macintosh, Mark Marley, Max Millar-Blanchaer, Ewan Douglas, Christian Marois, and Rob de Rosa for consulting on various parts of it. Thank you to the anonymous reviewer for their extremely constructive feedback, which greatly improved
the article. Finally, a huge thank you to my team of "internal" student reviewers - Giselle Hoermann, Kinsey Cronin, Jessica Labossiere, and Jingyi Zhang.
|
2310.04066 | Theory for Planar Hall Effect in Organic Dirac Fermion System | In a recent experiment on the interlayer magnetoresistance in the
quasi-two-dimensional organic salt, $\alpha$-(BEDT-TTF)$_2$I$_3$, it has been
observed that at low temperatures, interlayer tunneling attains phase
coherence, leading to the emergence of a three-dimensional electronic
structure. Theoretically and experimentally it has been suggested that the
system exhibits characteristics of a three-dimensional Dirac semimetal as a
consequence of broken time-reversal symmetry and inversion symmetry. Here, we
perform a theoretical calculation of the magnetoconductivity under an in-plane
magnetic field and demonstrate that the system displays a planar Hall effect.
Our calculations are based on a realistic model for
$\alpha$-(BEDT-TTF)$_2$I$_3$ incorporating interlayer tunneling and the tilt of
the Dirac cone. Given that the planar Hall effect is anticipated as a
consequence of chiral anomaly, our findings provide support for the
classification of $\alpha$-(BEDT-TTF)$_2$I$_3$ as a three-dimensional Dirac
semimetal. | Yuki Nakamura, Takao Morinari | 2023-10-06T07:45:43Z | http://arxiv.org/abs/2310.04066v2 | # Theory for Planar Hall Effect in Organic Dirac Fermion System
###### Abstract
In a recent experiment on the interlayer magnetoresistance in the quasi-two-dimensional organic salt, \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\), it has been observed that at low temperatures, interlayer tunneling attains phase coherence, leading to the emergence of a three-dimensional electronic structure. Theoretically and experimentally it has been suggested that the system exhibits characteristics of a three-dimensional Dirac semimetal as a consequence of broken time-reversal symmetry and inversion symmetry. Here, we perform a theoretical calculation of the magnetoconductivity under an in-plane magnetic field and demonstrate that the system displays a planar Hall effect. Our calculations are based on a realistic model for \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\) incorporating interlayer tunneling and the tilt of the Dirac cone. Given that the planar Hall effect is anticipated as a consequence of chiral anomaly, our findings provide support for the classification of \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\) as a three-dimensional Dirac semimetal.
Massless Dirac and Weyl semimetals have been extensively studied recently because of their unique and intriguing electrical properties [1; 2; 3; 4; 5; 6; 7; 8; 9]. The energy spectrum in these systems is characterized by the touching of the valence band and conduction band at discrete momentum points. The key distinction between the Dirac/Weyl semimetal and the two-dimensional Dirac fermion system lies in the presence of broken time-reversal symmetry and/or inversion symmetry. To realize a Weyl semimetal, it is necessary to break either time-reversal symmetry or inversion symmetry, or both. On the other hand, a Dirac semimetal can be realized even when both time-reversal and inversion symmetries are preserved.
Organic charge-transfer salt, \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\), has been studied as a quasi-two-dimensional Dirac fermion system[10; 11; 12]. (Here, BEDT-TTF is bis(ethylenedithio)tetrathiafulvalene.) One of the present authors theoretically predicted[13; 14] that both time-reversal symmetry and inversion symmetry are broken, and, as a result, the system becomes a three-dimensional Dirac semimetal when the interlayer tunneling becomes phase coherent at low temperatures. The phase coherence in the interlayer tunneling is confirmed experimentally[15] by the observation of the peak structure in the interlayer magnetoresistance. Furthermore, the observation of the negative magnetoresistance and the planar Hall effect (PHE) has been reported recently[16] that is associated with chiral anomaly[17; 18; 19; 20; 21; 22; 23; 24; 25; 26] in a Dirac semimetal.
In this Letter, we consider a model that includes interlayer tunneling and the tilt of the Dirac cone that exists in \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\)[10; 11]. Based on the semiclassical Boltzmann equation, we compute the magnetoconductivity under in-plane magnetic fields. We show that the system exhibits a PHE using a set of realistic parameters for \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\).
In the absence of the interlayer tunneling, there are two Dirac cones in the \(k_{x}\)-\(k_{y}\) plane[10; 11]. Upon incorporating interlayer tunneling between both the same and different molecules, four Dirac cones emerge, as detailed below. In contrast to systems where spin degeneracy is lifted due to the breaking of time-reversal symmetry caused by magnetic correlations, the spin remains degenerate in \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\) because the time-reversal symmetry breaking is not associated with magnetic correlations[13]. For the sake of simplicity, we neglect the spin degrees of freedom in the following analysis.
The Hamiltonian for two of the four Dirac cones is given by
\[H(\mathbf{k})=\hbar vk_{x}\tau_{x}+\hbar vk_{y}\tau_{y}-2t_{2}\cos(ck_{z})\tau_{z}+\left[-2t_{1}\cos(ck_{z})+\hbar uk_{x}\right]\tau_{0}+\varepsilon_{\rm D}. \tag{1}\]
Here \(k_{x}\) and \(k_{y}\) are in-plane wave numbers measured from the Dirac point and \(k_{z}\) is the wave number perpendicular to the \(k_{x}\)-\(k_{y}\) plane. We note that the position of the Dirac point in the plane is irrelevant for the following calculation, though we need to include them to make clear the presence of the symmetry breaking. The parameter \(u\) describes the tilt of the Dirac cone along the \(k_{x}\) axis, and we neglect anisotropy in the Dirac cone in the plane. \(c\) is the lattice constant in the \(c\)-axis. \(\tau_{x},\tau_{y},\tau_{z}\) are the Pauli matrices and \(\tau_{0}\) is the \(2\times 2\) identity matrix. \(t_{1}\) and \(t_{2}\) are the parameters for the interlayer tunneling. \(t_{1}\) is for the tunneling between the same molecules, and \(t_{2}\) is for the tunneling between the adjacent molecules along the \(a\)-axis. When \(t_{1}\neq 0\) and \(t_{2}=0\), the Dirac points shift along lines that are parallel to the \(k_{z}\) axis[27]. If \(t_{2}\neq 0\), the Dirac fermions acquire mass, with the exception at points where \(ck_{z}=\pm\pi/2\). Consequently, four Dirac points emerge within the three-dimensional Brillouin zone. The Dirac cone is type-I in the \(k_{x}\)-\(k_{y}\) plane[28], so the range of the parameter \(u\) is \(-v<u<v\). The other two Dirac cones are described by Eq. (1) with \(k_{x}\rightarrow-k_{x}\). We may assume \(t_{1}>t_{2}\) from the crystal structure of \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\)[29]. In this case, the
Dirac cone is type-II[30] in the \(k_{z}\) direction. The parameter \(\varepsilon_{\rm D}\) denotes the energy of the Dirac point. We assign different values of \(\varepsilon_{\rm D}\) to the two Dirac cones in the \(k_{x}\)-\(k_{y}\) plane to incorporate the symmetry breaking.
The energy dispersion is given by \(E_{\bf k}^{(\pm)}=(\hbar v/a)\tilde{E}_{\bf k}^{(\pm)}\) where
\[\tilde{E}_{\bf k}^{(\pm)}=\pm\tilde{E}_{\bf k}-2\tilde{t_{1}}\cos(ck_{z})+ \eta ak_{x}+\tilde{\varepsilon}_{D}, \tag{2}\]
with \(\tilde{\varepsilon}_{D}=\varepsilon_{D}/(\hbar v/a)\) and
\[\tilde{E}_{\bf k}=\sqrt{a^{2}(k_{x}^{2}+k_{y}^{2})+4\tilde{t_{2}}^{2}\cos^{2}( ck_{z})}. \tag{3}\]
Here, \(a\) is the in-plane lattice constant. We take the same lattice constants for \(a\) and \(b\) axes for simplicity. We defined the following dimensionless parameters,
\[\tilde{t_{1}}=\frac{t_{1}}{\hbar v/a},\hskip 28.452756pt\tilde{t_{2}}=\frac{t_{2 }}{\hbar v/a},\hskip 28.452756pt\eta=\frac{u}{v}. \tag{4}\]
Taking \(a=1.0\times 10^{-9}\) m and \(v=5.0\times 10^{4}\) m/s, we find \(\hbar v/a=3.3\times 10^{-2}\) eV.
Figure 1(a) shows the energy dispersion in the plane and Fig. 1(b) shows that in the \(k_{z}\) direction. We see that the Dirac cone is type-I in the \(k_{x}\)-\(k_{y}\) plane and type-II in the \(k_{z}\) axis as stated above. Figure 1(c) shows the Fermi surface. If the Fermi energy is larger than \(t_{1}\) and \(t_{2}\), the Fermi surface is a warped cylinder[15]. For \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\), the Fermi energy is expected to be smaller than \(t_{1}\) and \(t_{2}\)[15]. In this case, the Fermi surface splits into a single electronic Fermi surface and two hole Fermi surfaces as shown in Fig. 1(c). Because of the tilt parameter \(\eta\), which is slightly lower than one[28], the Fermi surface is largely deformed.
We calculate the magnetoconductivity using the semiclassical Boltzmann equation employing the relaxation time approximation. The application of the Boltzmann equation is justified when \(\omega_{c}\tau<1\) with \(\omega_{c}\) being the cyclotron frequency and \(\tau\) being the scattering time. Therefore, our result is limited to the regime of relatively weak magnetic fields. In the presence of the electric field \({\bf E}\) and the magnetic field \({\bf B}\), the quasiclassical equation of motion is given by[31; 32]
\[\hbar\frac{d{\bf k}}{dt}=\frac{1}{1+\frac{e}{\hbar}{\bf B}\cdot{\bf\Omega}_{\bf k}}\left[-e{\bf v}_{\bf k}\times{\bf B}-e{\bf E}-\frac{e^{2}}{\hbar}\left({\bf B}\cdot{\bf E}\right){\bf\Omega}_{\bf k}\right], \tag{5}\]
\[\frac{d{\bf r}}{dt}=\frac{1}{1+\frac{e}{\hbar}{\bf B}\cdot{\bf\Omega}_{\bf k}}\left[{\bf v}_{\bf k}+\frac{e}{\hbar}\left({\bf\Omega}_{\bf k}\cdot{\bf v}_{\bf k}\right){\bf B}+\frac{e}{\hbar}{\bf E}\times{\bf\Omega}_{\bf k}\right], \tag{6}\]
where \({\bf\Omega_{k}}\) is the Berry curvature.
From the energy dispersion (2), the group velocity is given by
\[{\bf v}_{\bf k}^{(\pm)}=v\left(\pm\frac{ak_{x}}{\tilde{E}_{\bf k}}+\eta,\;\pm\frac{ak_{y}}{\tilde{E}_{\bf k}},\;\mp\frac{4\frac{c}{a}\tilde{t}_{2}^{2}\sin(ck_{z})\cos(ck_{z})}{\tilde{E}_{\bf k}}+2\frac{c}{a}\tilde{t}_{1}\sin(ck_{z})\right). \tag{7}\]
The Berry curvature[33] is given by
\[{\bf\Omega}_{\bf k}^{(\pm)}=\left(\mp\frac{a^{2}c\,\tilde{t}_{2}\,k_{x}\sin(ck_{z})}{\tilde{E}_{\bf k}^{3}},\;\mp\frac{a^{2}c\,\tilde{t}_{2}\,k_{y}\sin(ck_{z})}{\tilde{E}_{\bf k}^{3}},\;\pm\frac{a^{2}\tilde{t}_{2}\cos(ck_{z})}{\tilde{E}_{\bf k}^{3}}\right). \tag{8}\]
Here, \({\bf v}_{\bf k}^{(+)}\) and \({\bf\Omega_{k}^{(+)}}\) are for the positive energy state, \(\tilde{E}_{\bf k}^{(+)}\), and \({\bf v}_{\bf k}^{(-)}\) and \({\bf\Omega_{k}^{(-)}}\) are for the negative energy state, \(\tilde{E}_{\bf k}^{(-)}\).
Now we consider the contribution from the chiral anomaly and omit the term related to the anomalous Hall effect. From the Boltzmann equation, we obtain the
Figure 1: (Color online) (a) Energy dispersion of the model described by the Hamiltonian (1) in the \(k_{x}\)-\(k_{y}\) plane and (b) along the \(k_{z}\) axis. The tilt parameter in the plane is \(\eta=0.7\), and so the Dirac cone is type-I. The interlayer hopping parameters are \(\tilde{t_{1}}=0.10\) and \(\tilde{t_{2}}=0.05\), and so the Dirac cone is type-II in the \(k_{z}\) axis. (c) The Fermi surface around the two Dirac cones. The Fermi energy is set to be zero and we set \(\varepsilon_{\rm D}=0.03\). The Fermi surface consists of three portions: the middle one is the electron Fermi surface and the other two are the hole Fermi surfaces.
equations for the magnetoconductivities[20; 21]:
\[\sigma^{(\pm)}_{xx}=\frac{2e^{2}\tau}{(2\pi)^{3}}\int d^{3}\mathbf{k}\left[-f^{\prime}_{\rm eq}\left(E^{(\pm)}_{\mathbf{k}}\right)\right]\frac{1}{1+\frac{e}{\hbar}\mathbf{B}\cdot\mathbf{\Omega}^{(\pm)}_{\mathbf{k}}}\left[v^{(\pm)}_{x}+\frac{e}{\hbar}B_{x}\left(\mathbf{v}^{(\pm)}_{\mathbf{k}}\cdot\mathbf{\Omega}^{(\pm)}_{\mathbf{k}}\right)\right]^{2}, \tag{9}\]
\[\sigma^{(\pm)}_{xy}=\frac{2e^{2}\tau}{(2\pi)^{3}}\int d^{3}\mathbf{k}\left[-f^{\prime}_{\rm eq}\left(E^{(\pm)}_{\mathbf{k}}\right)\right]\frac{1}{1+\frac{e}{\hbar}\mathbf{B}\cdot\mathbf{\Omega}^{(\pm)}_{\mathbf{k}}}\left[v^{(\pm)}_{x}+\frac{e}{\hbar}B_{x}\left(\mathbf{v}^{(\pm)}_{\mathbf{k}}\cdot\mathbf{\Omega}^{(\pm)}_{\mathbf{k}}\right)\right]\left[v^{(\pm)}_{y}+\frac{e}{\hbar}B_{y}\left(\mathbf{v}^{(\pm)}_{\mathbf{k}}\cdot\mathbf{\Omega}^{(\pm)}_{\mathbf{k}}\right)\right], \tag{10}\]
where \(f_{\rm eq}\) is the equilibrium Fermi-Dirac distribution function. We compute the components of the positive energy state, denoted by superscript \((+)\) and the negative energy state, denoted by superscript \((-)\), separately. Here, the magnetic field is given by \(\mathbf{B}=(B_{x},B_{y},0)=(B\cos\phi,B\sin\phi,0)\). In order to obtain the total magnetoconductivity, we take the sum of \(\sigma^{(+)}_{xx}+\sigma^{(-)}_{xx}\) and \(\sigma^{(+)}_{xy}+\sigma^{(-)}_{xy}\). We also calculate the contribution from the other two Dirac cones. The splitting of each Dirac cone in the \(k_{z}\) direction results in a twofold multiplication factor.
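For readers who want to reproduce the qualitative behavior, a minimal numerical sketch of Eqs. (2)-(10) is given below. The value of \(c/a\), the temperature, the integration box, and the grid size are illustrative choices (only the parameters quoted for Fig. 2 come from the text), and the result is in arbitrary units; the full \(k_{z}\) period is integrated, so both Dirac points of each cone pair are included automatically.

```python
import numpy as np

# Dimensionless units: energies in hbar*v/a, momenta in 1/a, Berry curvature
# in a^2, field strength b = (a/l_B)^2.  c/a, kT and the grid are illustrative
# assumptions; t1, t2, eta and the Dirac-point energies follow the text.
t1, t2, eta, c_a, kT = 0.10, 0.05, 0.7, 1.7, 0.02
eps_pair = {+1: 0.5, -1: -0.4}            # Dirac-point energies of the two cone pairs

def band_data(kx, ky, kz, band, s):
    """Energy, velocity (units of v) and Berry curvature (units of a^2), Eqs. (2),(7),(8).
    band = +1/-1 selects the band; s = +1/-1 labels the cone pair (kx -> -kx
    flips both the tilt and the chirality)."""
    cz, sz = np.cos(c_a * kz), np.sin(c_a * kz)
    Et = np.sqrt(kx**2 + ky**2 + 4 * t2**2 * cz**2)
    E = band * Et - 2 * t1 * cz + s * eta * kx + eps_pair[s]
    v = [band * kx / Et + s * eta,
         band * ky / Et,
         -band * 4 * c_a * t2**2 * sz * cz / Et + 2 * c_a * t1 * sz]
    Om = [-s * band * c_a * t2 * kx * sz / Et**3,
          -s * band * c_a * t2 * ky * sz / Et**3,
           s * band * t2 * cz / Et**3]
    return E, v, Om

def sigma(phi, b, n=100):
    """sigma_xx and sigma_xy (arbitrary units) for an in-plane field at angle phi,
    from Eqs. (9)-(10), summed over bands and cone pairs."""
    k = np.linspace(-2.5, 2.5, n)
    kz = np.linspace(-np.pi / c_a, np.pi / c_a, n)
    KX, KY, KZ = np.meshgrid(k, k, kz, indexing="ij")
    bx, by = b * np.cos(phi), b * np.sin(phi)
    sxx = sxy = 0.0
    for band in (+1, -1):
        for s in (+1, -1):
            E, v, Om = band_data(KX, KY, KZ, band, s)
            mf = 0.25 / kT / np.cosh(E / (2 * kT))**2          # -df/dE at mu = 0
            D = 1.0 / (1.0 + bx * Om[0] + by * Om[1])          # 1/(1 + (e/hbar) B.Omega)
            vO = v[0] * Om[0] + v[1] * Om[1] + v[2] * Om[2]
            wx, wy = v[0] + bx * vO, v[1] + by * vO
            sxx += np.sum(mf * D * wx * wx)
            sxy += np.sum(mf * D * wx * wy)
    return sxx, sxy

# Angular sweep, e.g. at b ~ 6e-3 (roughly 4 T for a = 1 nm):
# phis = np.linspace(0, 2*np.pi, 37); curves = [sigma(p, 6e-3) for p in phis]
```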
The result is shown in Fig. 2. We subtract the constant value \(\sigma^{0}_{xx}\) from \(\sigma_{xx}\), and the oscillating component \(\sigma_{xx}-\sigma^{0}_{xx}\) is shown in Fig. 2(a). As for \(\sigma_{xy}\), we denote it as \(\sigma^{\rm PHE}_{xy}\) in Fig. 2(b) to explicitly indicate that its contribution originates from the planar Hall effect. They are plotted as the function of \(\phi\) for different values of \(b=\left(a/\ell_{B}\right)^{2}\) with \(\ell_{B}=\sqrt{\hbar/eB}\) the magnetic length. \(b\) is defined as the dimensionless magnetic field parameter. At \(B=1\) T, \(b=1.5\times 10^{-3}\). The unit of conductivity is \(\sigma_{0}=e^{2}\tau v/(2\pi^{3}\hbar ac)\). For the interlayer tunneling parameters, \(\tilde{t}_{1}\) and \(\tilde{t}_{2}\), we take \(\tilde{t}_{1}=0.10\) and \(\tilde{t}_{2}=0.05\). For the tilt parameter we take \(\eta=0.7\). This set of parameters is reasonable for \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\). We note that both \(\sigma_{xx}-\sigma^{0}_{xx}\) and \(\sigma^{\rm PHE}_{xy}\) exhibit the periodicity of \(\pi\). This oscillating behavior can be associated with the chiral anomaly[20; 21]. Qualitatively similar behavior is observed in a recent experiment[16].
We also examined the magnetic field parameter \(b\) dependence of the amplitude of \(\sigma_{xx}-\sigma^{0}_{xx}\) and \(\sigma^{\rm PHE}_{xy}\) as shown in Fig. 3(a). We find that the amplitude varies quadratically with the magnetic field. If an effect associated with the tilt of the Dirac cone remained, we might expect a linear dependence, but there is no such component. This is understood by complete cancellation between the contributions from the Dirac cones with opposite tilts and chiralities. We note that the amplitude of \(\sigma_{xx}-\sigma^{0}_{xx}\) is slightly larger than that of \(\sigma^{\rm PHE}_{xy}\). This behavior is qualitatively in agreement with experimental observations, where the amplitude of \(\sigma_{xx}-\sigma^{0}_{xx}\) is ten times larger than that of \(\sigma^{\rm PHE}_{xy}\) at 3 T[16]. The difference of the amplitudes is associated with the interplay between the tilt parameter dependence of the group velocity and the density of states. To make clear the tilt parameter dependence, we calculate the \(\eta\) dependence of \(\sigma_{xx}-\sigma^{0}_{xx}\) and \(\sigma^{\rm PHE}_{xy}\) as shown in Fig. 3(b). When \(\eta=0\), there is no difference in the amplitudes of \(\sigma_{xx}-\sigma^{0}_{xx}\) and \(\sigma^{\rm PHE}_{xy}\). Their difference increases as we increase \(\eta\). However, the result depends on the choice of the two values of \(\varepsilon_{\rm D}\). If we take a different set of values for \(\varepsilon_{\rm D}\), we obtain a different \(\eta\) dependence. The energy dispersion exhibits particle-hole symmetry; however, the integrands in Eqs. (9) and (10) do not. Consequently, the \(\eta\) dependence of \(\sigma_{xx}-\sigma^{0}_{xx}\) and \(\sigma^{\rm PHE}_{xy}\) is non-trivial.
To conclude, we have shown that the magnetoconductivity exhibits a PHE in a realistic model for \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\). Since \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\) does not show any indication of ferromagnetism[34], the presence of the PHE suggests a chiral anomaly effect that is associated with a three-dimensional Dirac semimetal. While our analysis is confined to a small magnetic field range due to the utilization of the semiclassical Boltzmann equation, we anticipate the occurrence of the PHE at high magnetic fields, provided there is no qualitative change between the low and high magnetic field regimes. This seems to be consistent with the recent experiment[16]. In conjunction with the experimental findings[15; 16], our results
Figure 2: (Color online) (a) Longitudinal conductivity and (b) planar Hall conductivity as the function of \(\phi\) for different values of \(b\). The unit of conductivity is \(\sigma_{0}=e^{2}\tau v/(2\pi^{3}\hbar ac)\). The constant component is subtracted from \(\sigma_{xx}\). The parameters are \(\tilde{t}_{1}=0.10\), \(\tilde{t}_{2}=0.05\), and \(\eta=0.7\). For the Dirac point energies, we set \(\varepsilon_{\rm D}/(\hbar v/a)=0.5\) for two Dirac cones along the \(k_{z}\) axis and \(\varepsilon_{\rm D}/(\hbar v/a)=-0.4\) for the other two Dirac cones.
provide strong support for the classification of \(\alpha\)-(BEDT-TTF)\({}_{2}\)I\({}_{3}\) as a three-dimensional Dirac semimetal under conditions of low temperatures and high pressures.
###### Acknowledgements.
We thank N. Tajima for helpful discussions and sharing experimental data. The research was supported by JSPS KAKENHI Grant Number 22K03533.
|
2306.16384 | Accelerating Sampling and Aggregation Operations in GNN Frameworks with
GPU Initiated Direct Storage Accesses | Graph Neural Networks (GNNs) are emerging as a powerful tool for learning
from graph-structured data and performing sophisticated inference tasks in
various application domains. Although GNNs have been shown to be effective on
modest-sized graphs, training them on large-scale graphs remains a significant
challenge due to lack of efficient data access and data movement methods.
Existing frameworks for training GNNs use CPUs for graph sampling and feature
aggregation, while the training and updating of model weights are executed on
GPUs. However, our in-depth profiling shows the CPUs cannot achieve the
throughput required to saturate GNN model training throughput, causing gross
under-utilization of expensive GPU resources. Furthermore, when the graph and
its embeddings do not fit in the CPU memory, the overhead introduced by the
operating system, say for handling page-faults, comes in the critical path of
execution.
To address these issues, we propose the GPU Initiated Direct Storage Access
(GIDS) dataloader, to enable GPU-oriented GNN training for large-scale graphs
while efficiently utilizing all hardware resources, such as CPU memory,
storage, and GPU memory with a hybrid data placement strategy. By enabling GPU
threads to fetch feature vectors directly from storage, GIDS dataloader solves
the memory capacity problem for GPU-oriented GNN training. Moreover, GIDS
dataloader leverages GPU parallelism to tolerate storage latency and eliminates
expensive page-fault overhead. Doing so enables us to design novel
optimizations for exploiting locality and increasing effective bandwidth for
GNN training. Our evaluation using a single GPU on terabyte-scale GNN datasets
shows that GIDS dataloader accelerates the overall DGL GNN training pipeline by
up to 392X when compared to the current, state-of-the-art DGL dataloader. | Jeongmin Brian Park, Vikram Sharma Mailthody, Zaid Qureshi, Wen-mei Hwu | 2023-06-28T17:22:15Z | http://arxiv.org/abs/2306.16384v2 | Accelerating Sampling and Aggregation Operations in GNN Frameworks with GPU Initiated Direct Storage Accesses
###### Abstract.
Graph Neural Networks (GNNs) are emerging as a powerful tool for learning from graph-structured data and performing sophisticated inference tasks in various application domains. Although GNNs have been shown to be effective on modest-sized graphs, training them on large-scale graphs remains a significant challenge due to lack of efficient data access and data movement methods. Existing frameworks for training GNNs use CPUs for graph sampling and feature aggregation, while the training and updating of model weights are executed on GPUs. However, our in-depth profiling shows the CPUs cannot achieve the throughput required to saturate GNN model training throughput, causing gross under-utilization of expensive GPU resources. Furthermore, when the graph and its embeddings do not fit in the CPU memory, the overhead introduced by the operating system, say for handling page-faults, comes in the critical path of execution.
To address these issues, we propose the GPU Initiated Direct Storage Access (GIDS12)ataloader, to enable GPU-oriented GNN training for large-scale graphs while efficiently utilizing all hardware resources, such as CPU memory, storage, and GPU memory with a hybrid data placement strategy. By enabling GPU threads to fetch feature vectors directly from storage, GIDS dataloader solves the memory capacity problem for GPU-oriented GNN training. Moreover, GIDS dataloader leverages GPU parallelism to tolerate storage latency and eliminates expensive page-fault overhead. Doing so enables us to design novel optimizations for exploiting locality and increasing effective bandwidth for GNN training. Our evaluation using a single GPU on terabyte-scale GNN datasets shows that GIDS dataloader accelerates the overall DGL GNN training pipeline by up to 392\(\times\) when compared to the current, state-of-the-art DGL dataloader.
Footnote 1: Under Review
Footnote 2: Source code: [http://github.com/jeongminpark417/GIDS](http://github.com/jeongminpark417/GIDS)
## 1. Introduction
Owing to their expressive power, Graph Neural Networks (GNNs) effectively capture the rich relational information embedded among input nodes and edges, leading to improved generalization performance over traditional machine learning techniques. As a result, GNNs have gained significant attention in recent years, due to their efficacy in various graph-based machine learning applications, such as node classification (GNN, 2017; 2018; 2019; 2018), recommendation (GNN, 2018; 2019), fraud detection (GNN, 2018; 2019; 2019; 2019), and link prediction (GNN, 2019; 2019; 2019).
To cater to this growing interest, new open-source frameworks such as PyTorch Geometric (PyG) (Garon et al., 2017), Spektral (Peters et al., 2018), and Deep Graph Library (DGL) (Shen et al., 2018) have been developed to provide optimized operators required by GNNs, such as message-passing for aggregating feature information across related graph nodes, and graph-specific neural network computation layers. Although GNN frameworks leverage GPUs for highly parallelized matrix computations, GNN training faces challenges beyond computation efficiency. While limited GPU memory capacity can be partially addressed with mini-batching and sampling for small to medium scale graph datasets, larger graphs do not fit into the GPU memory when it comes to graph sampling and feature aggregation. To address this problem, frameworks like DGL exploit Unified Virtual Addressing (UVA), pinning the graph dataset, including both the graph structure data and feature data, into the CPU memory to enable efficient subgraph extraction and feature aggregation using GPU kernels with zero-data copy transfer (Shen et al., 2018).
For large-scale graphs that do not fit into the CPU memory, the UVA approach is no longer sufficient. There are several solutions to support large-scale GNN training: (a) multi-node/multi-GPU, (b)
tiling, and (c) memory-mapped file. Leveraging multiple nodes or GPUs (Beng et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Wang et al., 2019) to partition the graph across the nodes to support large-scale GNN training is an expensive approach (Wang et al., 2019). Tiling (Tiling, 2018; Wang et al., 2019) can be used to support large-scale GNN training by leveraging graph partitioning to load tiled data and transfer it to the GPU. This approach shows poor performance due to the random access pattern and the additional cost for pre-processing the input data. Finally, the most convenient solution to train large-scale graph datasets on a single GPU is exploiting the memory-mapped file technique, which maps the graph data stored on disk to virtual memory, enabling access to the data without first loading the entire dataset into memory. Previous studies (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) extended the memory-mapped files approach and leveraged the in-memory caching mechanism to mitigate the storage overhead.
Despite its conceptual simplicity, the use of memory-mapped files in GNN training faces performance challenges due to the heavy software overhead in handling page faults and its inability to tolerate long latency incurred during data retrieval from storage. The storage latency, which is two to three orders of magnitude longer than the DRAM access latency, becomes a bottleneck in the GNN training process due to sparse and irregular graph data access patterns and the inability of the memory-mapped file approach to overlap the latencies of these accesses, resulting in poor overall performance. In Section 2.3, we show that when using memory-mapped files, the sampling and feature aggregation stages of the GNN training pipeline dominate the overall execution time and severely limit the overall GNN training performance.
In this paper, we propose a new approach called the GPU Initiated Direct Storage Access (GIDS) dataloader to tackle the challenges of GNN training on large-scale graphs by enabling fully GPU-oriented GNN training. We extend the key idea of the BaM system (Wang et al., 2019), which leverages GPU parallelism to hide storage latency, to address the memory capacity problem for GPU-accelerated GNN training.
We further propose a three-part hybrid strategy for the GIDS dataloader to efficiently utilize all hardware resources (CPU memory, storage, and GPU memory) to accelerate the GNN training process for large-scale graphs. First, GIDS dataloader stores the feature data of the graph in storage as the feature data typically accounts for the vast majority of the total graph dataset size for large-scale graphs (see Table 4 for examples). GIDS dataloader overcomes the long storage access latency by allowing GPU threads to directly fetch feature data, leveraging the massive GPU thread-level parallelism to overlap the latencies of many storage accesses. This direct access avoids CPU software bottlenecks, leading to full utilization of storage throughput. Second, GIDS pins the graph structure data, whose size is typically tiny compared to the feature data, in the CPU memory to enable GPU graph sampling via UVA zero-data copy transfer. Finally, GIDS dataloader allocates GPU memory for the GPU software-defined cache to store feature data for recently accessed nodes to minimize the storage accesses. Moreover, we designed window buffering technique to further improve GPU cache utilization.
We evaluate the effectiveness of our work by demonstrating its implementation with NVMe SSDs in the DGL framework. Our experiments show that GIDS accelerates the feature aggregation process by up to 160\(\times\) and the graph sampling by 25\(\times\) compared to the state-of-the-art DGL dataloader that uses the memory-mapping approach with only a single SSD. When scaled to four SSDs, the performance advantage of the GIDS dataloader increases to 627\(\times\) for aggregation.
Moreover, with four SSDs connected, the GIDS dataloader outperforms the state-of-the-art DGL dataloader that uses the UVA-based approach for modest-size graphs, which stores entire graph datasets in the CPU memory. GIDS dataloader achieves this by increasing the collective SSD bandwidth to saturate the PCIe Gen4 bandwidth of the GPU and amplifying the effective bandwidth with GPU software-defined cache. With GIDS and multiple SSDs, even the modest-sized graphs can simply be stored in SSDs and no longer need to take up space in the CPU memory.
We make the following key contributions in this paper.
* We analyze the limitations of the existing GNN frameworks while executing large graph datasets and show that the existing CPU-initiated approach cannot keep up with the demands of GPU-accelerated GNN training.
* We introduce a novel GPU-oriented GNN dataloader that enables direct storage accesses for GPU threads to enable and accelerate large-scale GNN training.
* We present an effective data placement strategy to efficiently leverage all available hardware resources: CPU memory, storage, and GPU memory for large-scale GNN training.
* We propose novel optimizations to improve GPU software-cache efficiency by exploiting locality in GNN training.
We demonstrate GIDS dataloader's effectiveness and flexibility by measuring performance using billion-scale datasets that do not fit in the CPU memory. The results based on the NVIDIA A100 GPUs and 512GB CPU memory capacity show that GIDS dataloader achieves up to 627\(\times\) speedup in data aggregation and 392\(\times\) speedup in overall training over a state-of-the-art GNN dataloader.
## 2. Background
In this section, we provide an overview of GNN models, followed by an introduction to mini-batching and sampling-based GNN training. We then explain the state-of-the-art framework for large-scale GNN training and its challenges.
### Graph Neural Networks (GNNs)
Graph Neural Networks (GNNs) have recently gained prominence in solving machine learning problems by incorporating graph structure information (Chen et al., 2016; Chen et al., 2016; Wang et al., 2019; Wang et al., 2019). These networks typically consist of multiple layers and operate through layer-wise message passing.
Given a graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), with vertex set \(\mathcal{V}\) and edge set \(\mathcal{E}\), the node feature vectors for each vertex \(v\in\mathcal{V}\) are represented as \(x_{v}\). The node embedding of vertex \(v\) at layer \(l\) is denoted as \(h_{v}^{(l)}\), with \(h_{v}^{(0)}\) initialized with the node feature vector. The GNN updates the node embeddings using the equation:
\[h_{v}^{(l+1)}=f\left(h_{v}^{(l)},\,\{h_{w}^{(l)}\}_{w\in\mathcal{N}(v)}\right), \tag{1}\]
where \(\mathcal{N}(v)\) defines the neighborhood set of \(v\), \(h_{w}^{(l)}\) denotes the node embedding of the neighbor node \(w\) at layer \(l\), and \(f\) is a parameterized update function.
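A minimal PyTorch sketch of the update in Eq. (1), assuming a mean aggregator over (sampled) neighbors and a single linear update function, is shown below; it is illustrative rather than an implementation taken from any particular GNN framework.

```python
import torch
import torch.nn as nn

class MeanAggregatorLayer(nn.Module):
    """One message-passing layer in the spirit of Eq. (1): the update f is a
    linear map on the concatenation of h_v and the mean of its (sampled)
    neighbors' embeddings, followed by a nonlinearity."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, neighbor_ids):
        # neighbor_ids: one LongTensor of neighbor indices per node
        agg = torch.stack([h[nb].mean(dim=0) if len(nb) > 0 else torch.zeros_like(h[0])
                           for nb in neighbor_ids])
        return torch.relu(self.lin(torch.cat([h, agg], dim=1)))
```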
Graph data consists of two components: graph structure data and node feature data. The graph structure data represents the
edges and nodes of the graph, while the node feature data represents the feature embeddings for each node. Sparse matrix formats such as Coordinate (COO) format and Compressed Sparse Column (CSC) format are commonly used to store the graph structure data, whereas the node features are typically stored in an \(N\times D\) matrix, where \(N\) is the total number of nodes in the graph, and \(D\) is the dimension of each node feature. The size of each node's feature can vary greatly but typically ranges from 512B to 4KB. For large-scale graphs with billions of nodes, the size of the node feature data can reach several tens of terabytes. As a result, managing the node feature data for large-scale GNN training with limited memory capacity is a challenging task.
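The following back-of-the-envelope sketch (with purely illustrative node counts, degree, and feature dimension) shows both why the feature matrix dominates the footprint and how an edge list in COO form is converted to CSC with SciPy.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Illustrative sizes only: a billion-node graph with 1 KB float32 features per node.
num_nodes, avg_degree, feat_dim = 1_000_000_000, 16, 256

struct_bytes = num_nodes * avg_degree * 8 + (num_nodes + 1) * 8   # CSC indices + indptr (int64)
feat_bytes = num_nodes * feat_dim * 4                             # N x D float32 matrix
print(f"structure ~{struct_bytes / 2**40:.2f} TiB, features ~{feat_bytes / 2**40:.2f} TiB")

# Converting a toy edge list (COO) to CSC, the layout used for fast per-node neighbor lookup:
src, dst = np.array([0, 0, 1, 2, 3]), np.array([1, 2, 2, 3, 0])
csc = coo_matrix((np.ones(len(src)), (src, dst)), shape=(4, 4)).tocsc()
print(csc.indptr, csc.indices)    # column pointers and row indices of the toy graph
```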
### GNN Training Pipeline
GNN training on large graph datasets involves mainly four stages: graph sampling, feature aggregation, data transfer, and model training. Mini-batch training is commonly used in these models for scalability and computational efficiency (Krizhevsky et al., 2012; Krizhevsky et al., 2012; Krizhevsky et al., 2012). In this section, we briefly describe the mini-batching technique and each key stage of the GNN training pipeline.
#### 2.2.1. Mini-batching
Mini-batching of GNN models involves splitting the graph into smaller sub-graphs and training the network on each of these sub-graphs. During each iteration of the training process, a batch of sub-graphs is loaded into GPU memory for computation. The batch size must be carefully chosen to prevent GPU memory overflow during training. Mini-batching also exposes more parallelism to the GPU training kernel, which significantly improves training speed and efficiency and makes it a popular approach for many GNN models. Previous studies have demonstrated that training neural networks with mini-batches can also lead to faster convergence and better optimization compared to training on the entire dataset (Krizhevsky et al., 2012; Krizhevsky et al., 2012; Krizhevsky et al., 2012).
#### 2.2.2. Node Sampling
Mini-batching alone cannot fully address the scalability limitations when working with large graphs. Even with small batch sizes, the training cost can still be substantial due to the exponential growth of memory footprint when collecting k-hop neighbors. GraphSAGE (Krizhevsky et al., 2012) introduced the concept of neighborhood sampling to tackle this problem. GraphSAGE reduces the computation and memory footprint by randomly sampling a fixed number of neighboring nodes rather than including all nodes in the graph. To ensure a sufficient level of randomness in the training process, GraphSAGE uses a uniformly random selection method for neighborhood sampling. Figure 1 illustrates an example of neighborhood sampling with a 2-hop computational graph. In this example, the sampling size is set to 3, meaning up to three neighboring nodes of the target node are selected. With two layers, the total mini-batch size is 11 (1 + 3 + 7).
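The following framework-free sketch illustrates this sampling procedure over a graph stored in compressed sparse form; production frameworks such as DGL and PyG provide equivalent, highly optimized samplers, so this is only meant to show the mechanics.

```python
import numpy as np

def sample_khop(indptr, indices, seeds, fanouts, rng=None):
    """Uniform neighborhood sampling over a graph in CSC/CSR form.
    fanouts=[3, 3] reproduces the 2-hop, up-to-3-neighbors example above."""
    rng = rng or np.random.default_rng()
    frontier = np.asarray(seeds)
    sampled = [frontier]
    for fanout in fanouts:
        picked = []
        for v in frontier:
            nbrs = indices[indptr[v]:indptr[v + 1]]      # neighbors of v
            if len(nbrs) > fanout:
                nbrs = rng.choice(nbrs, size=fanout, replace=False)
            picked.append(nbrs)
        frontier = np.unique(np.concatenate(picked)) if picked else frontier[:0]
        sampled.append(frontier)
    # All node IDs whose feature vectors must be gathered for this mini-batch.
    return np.unique(np.concatenate(sampled))
```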
#### 2.2.3. Node Feature Aggregation and Transfer
The features for each sampled node in the mini-batch must be gathered before the training on the mini-batch can start. In cases where node feature data is too large to fit into CPU memory, the current state-of-the-art approach (Krizhevsky et al., 2012; Krizhevsky et al., 2012) first transfers the features of the sampled nodes from storage to the CPU memory, and then from the CPU memory to the GPU memory via the PCIe interconnect. Afterward, the GPU model training kernels can consume the fetched node features.
### Limitation of Existing GNN Frameworks
State-of-the-art GNN frameworks, such as DGL (Krizhevsky et al., 2012) and PyG (Krizhevsky et al., 2012), have significantly improved GNN training performance by utilizing a hybrid CPU-GPU training system, where the CPU is responsible for data preparation, and the GPU handles the model training. To increase the effective memory capacity, DGL introduced the UVA-based GNN training technique (Krizhevsky et al., 2012), which pins the entire graph dataset (both graph and feature vectors) in the CPU memory and transfers data from the CPU to the GPU through zero-copy accesses, enabling the GPU to execute graph sampling and feature aggregation. While this approach helps to scale to larger graph datasets whose sizes exceed the GPU memory capacity, it cannot handle large-scale graphs whose sizes surpass the capacity of the CPU memory since all graph data must be pinned in the CPU memory for the UVA-based technique to work.
The existing GNN frameworks rely on the CPU for graph sampling and feature aggregation execution to support graph datasets that cannot fit into the CPU memory. The key idea is to provide a notion of infinite virtual memory by memory-mapping the node feature vector files into the CPU virtual address space and allow the CPU to page fault when the requested feature vector is unavailable in the CPU memory. This eliminates the need for loading the entire dataset into the CPU memory and employs the operating system
Figure 1. A subgraph generated by a uniformly random selection method for two-layer Neighborhood Sampling.
Figure 2. Illustration of the GNN training process with the memory-mapped DGL dataloader.
(OS) page fault handler to bring parts of the graph data stored in the disk to the application's address space in an on-demand manner. Figure 2 illustrates the GNN training process using the approach of the memory-mapped file in the DGL framework. During the node feature aggregation stage, the CPU accesses the node features mapped in its virtual memory space, and the OS page fault handler brings the pages that contain the accessed features from storage into the CPU memory when it misses from the OS page cache.
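A minimal sketch of this memory-mapped baseline is shown below; the file name and array shapes are placeholders, and the fancy-indexing gather is what triggers OS page faults for pages not yet resident in the page cache.

```python
import numpy as np
import torch

# Node features live in a binary file on disk; only pages actually touched by
# the gather are faulted into the OS page cache. Sizes are illustrative.
num_nodes, feat_dim = 100_000_000, 1024
feats = np.memmap("node_features.bin", dtype=np.float32, mode="r",
                  shape=(num_nodes, feat_dim))

def gather_to_gpu(sampled_ids):
    batch = np.asarray(feats[sampled_ids])        # CPU-side gather; may page-fault
    return torch.from_numpy(batch).to("cuda", non_blocking=True)
```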
Unfortunately, implementing node feature aggregation using a memory-mapped approach makes the node feature aggregation by far the main bottleneck of the overall training pipeline. Our profiling of each stage in the GNN training execution shows the iteration time is clearly dominated by the sampling and node aggregation stages, as shown in Figure 3. For example, the training stage is barely visible for the IGB-Full and IGBH-Full graphs, the largest two graphs used in our evaluations. This is because, for large-scale graphs, the additional cost of page faults exacerbates the gap between the data preparation throughput and model training throughput. _Thus, the key to improving the GNN training performance while training on large graphs is to drastically accelerate the sampling and feature aggregation stages_ (_i.e., the data preparation stages_).
Previous research [18; 25; 34; 48] has aimed to enhance the efficiency of node aggregation and sampling stages by using specific in-memory caching mechanisms to minimize redundant storage accesses and/or utilizing pipelining techniques to conceal graph sampling time. However, for these methods to be effective, the CPU-driven request initiation mechanism must generate requests at a sufficiently high rate to hide long storage access latency and prevent the GPU from waiting for data. In GNN training, the storage access requests are for feature vectors that must be aggregated before training can begin. To this end, we first investigate whether CPUs can generate requests at a sufficiently high rate during the sampling stage of GNN training iterations. To answer this question, we use the baseline DGL dataloader that loads the entire graph into the CPU memory, pins it, and utilizes CPU threads to perform the node sampling operation.
Figure 4 shows the request generation and consumption rate of CPU and GPU for two stages of the GNN training pipeline: sampling and training. After the node feature vectors are loaded into the GPU memory, the GPU-accelerated training kernels can consume them at a rate greater than 29 million requests per second. This implies that, to maximize effective GPU utilization and minimize GNN training time for large graphs, the effective request generation rate must match or exceed the consumption rate. However, with CPU-driven request generation in the sampling stage, the CPU cannot generate more than 4.1 million feature vector requests per second, even when using multiple threads (16 in this experiment, beyond which the rate plateaus). This is because the sampling computation involves repeatedly traversing the graph and accessing its edges and nodes, making it difficult for the CPU to keep up with the consumption rate of the GPU-accelerated training kernels. In contrast, the GPU can generate more than 77 million feature requests per second in the sampling stage, which is significantly higher than the consumption rate required by the training kernels.
Based on these two key insights, our proposed GIDS dataloader offloads the data preparation stages to the GPU to benefit from its faster request generation rate. We further adopt the recent technique of BaM [28] which enables direct storage device access by the GPU, eliminating the overhead of OS page faults during feature vector data access.
### The BaM System
The BaM system [28] aims to tackle the problem of storage latency in big data GPU applications. The key idea behind BaM is to allow GPU threads to have direct access to the storage, making use of the massive data-level parallelism that GPUs provide. As a massive number of GPU threads can initiate direct storage access without incurring CPU-GPU synchronization or CPU software overhead, the GPU can take full advantage of parallelism to hide long storage access latency, enabling it to achieve peak storage bandwidth.
By exploiting inexpensive dense storage, the GPU can drastically expand its memory capacity, which is extremely useful for applications that require computation-directed sparse access to massive datasets, such as GNN training.
Figure 4. Request generation rate of data preparation on CPU and GPU, and request consumption rate on GPU on IGB-small dataset. The CPU and GPU used in this measurement are listed in Table 1.
Figure 3. GNN training time breakdown for the baseline DGL dataloader for different graph datasets. The node feature data is accessed from memory-mapped files, while the graph structure data is stored in the CPU memory. The GraphSAGE model is used as the GNN training model. The graph properties are listed in Table 2.
## 3. System Design
To address the challenges associated with state-of-the-art large-scale GNN training, we design and implement the GIDS dataloader, which enables fully GPU-oriented GNN training for large graphs. This section describes the design of GIDS dataloader. First, we provide an overview of design goals and introduce a new dataloader for sampling-based GNN models, extended from the DGL dataloader3. We then describe our optimizations to further improve the performance.
Footnote 3: Although the discussion is based on the DGL framework, it can be easily extended to other GNN frameworks such as PyG (Dai et al., 2017) and AliGraph (Zhu et al., 2017).
### Data Placement Strategy and GNN Workflow
The GIDS dataloader is designed to improve the performance and scalability of GNNs by exploiting the GPU's parallelism to accelerate data preparation. As discussed in Section 2.3, the workflow of the CPU-oriented GNN training fails to generate requests at a sufficient rate to match the GPU training throughput, thus limiting the overall GNN training performance.
To address this deficiency in generating requests, the GIDS dataloader moves the data preparation process from the CPU to the GPU. As shown in Figure 4, the request generation rate of the sampling and aggregation stages running on the GPU exceeds the GPU training throughput. The next major bottleneck is the insufficient storage access throughput.
We tackle the storage access bottleneck by leveraging the BaM system (Kang et al., 2017) to allow GPU threads to directly access the storage, thus avoiding the CPU page-fault handling software overhead. The BaM system is integrated into our GIDS DGL dataloader with an interface that manages the metadata and the buffer pointers for the output mini-batch tensors. Instead of actually loading the data, we set up the mappings so that each access is translated into a BaM cache access which either finds the data in cache or generates a storage access request through the BaM cache miss handler.
Although BaM solves the CPU software overhead problem and fully hides storage latency by exploiting GPU data level parallelism, the available I/O bandwidth alone is still not enough to match the GPU training throughput. Thus, efficiently utilizing all available resources to further improve the data preparation process is key to achieving high-performance GNN training.
To achieve this goal, the GIDS dataloader utilizes a hybrid data management strategy that efficiently uses three hardware resources: the CPU memory, the GPU device memory, and the storage devices, based on the data access pattern and the access granularity. First, GPU memory is used as a software-defined cache to reduce redundant storage accesses and utilize the high bandwidth of the GPU memory. Under the hybrid data management strategy, data accessed with more irregular and finer-grained patterns is stored in the CPU memory, while data accessed at larger granularity is backed by storage. This is because fine-grained access patterns increase I/O amplification, resulting in lower effective bandwidth. Also, irregular data access patterns can pollute the software-defined GPU cache.
Based on this data management strategy, the GIDS dataloader stores the node feature data in the storage while the graph structure data is pinned in the CPU memory since the graph structure data (4-8B) accessed by the sampling process has a much finer granularity access pattern than the node feature data (512-4096B) that is accessed by the aggregation process. Although graph structure data is pinned in the memory, this does not result in any memory capacity issues because graph structure data accounts for as little as 5% of the total dataset size and the structure data fits comfortably in the CPU memory even for terabyte-scale graphs. (see Table 4).
Finally, the GIDS dataloader uses the GPU device memory as a software-defined cache to temporarily store the feature data of recently accessed nodes, reducing the number of storage accesses and improving feature aggregation performance. This is achieved by configuring the BaM software-defined cache and setting up a custom cache-line replacement policy. The utilization of the GPU software-defined cache is further optimized through GIDS-specific cache optimizations, such as window buffering.
Figure 5 illustrates the workflow of the GIDS dataloader based on this data placement strategy. The process begins with neighborhood sampling to generate a mini-batch with sampled nodes. Since the graph structure data is pinned in the CPU memory and this step is executed on the GPU, the GPU threads access it via zero-copy data transfer (Shen et al., 2017). The sampled sub-graphs are kept in the GPU memory. Once the sampling process is complete, GPU threads check the software cache in the GPU memory to determine if the feature data for the sampled nodes is stored in the cache. If not, they directly access the storage to fetch the data and store it in the cache for future use. After fetching all the feature data for the sampled nodes in the mini-batch, the GPU executes the training process and then updates the learning parameters for the model.
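The sketch below summarizes this per-iteration flow; `sampler` and `cache.fetch` are hypothetical stand-ins for the GPU-side neighborhood sampler and the BaM-backed software cache, not actual GIDS APIs.

```python
def train_one_iteration(sampler, cache, model, optimizer, seeds):
    """Hypothetical per-iteration flow mirroring Figure 5; names are illustrative."""
    subgraph, node_ids = sampler(seeds)    # GPU sampling over CPU-pinned structure (zero-copy)
    feats = cache.fetch(node_ids)          # GPU software-cache hit, else direct SSD read
    loss = model(subgraph, feats)          # training consumes data already in GPU memory
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```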
### Tolerating Storage Latency during Feature Aggregation
The GIDS dataloader takes advantage of the massive thread-level parallelism provided by GPUs to effectively handle storage latency during feature aggregation. To this end, it is essential to have a sufficient number of concurrent storage access requests during the feature aggregation stage to maximize storage throughput.
Figure 5. Illustration of the GNN training process with GIDS dataloader
Based on the reported results from BaM (Kumar et al., 2017), a single PCIe Gen4 server-grade SSD's read peak throughput can be achieved with 32,768 concurrent storage access requests. When the cache-line size is the same as the node embedding feature size, the number of storage accesses during feature aggregation is equivalent to the number of sampled nodes in the mini-batch. Therefore, the size of a mini-batch should be larger than 32,768 nodes to achieve peak SSD throughput.
The size of the mini-batch can be adjusted based on the computational resources and specific requirements of the task. However, for large-scale graph datasets that contain hundreds of millions to a billion nodes, the typical size of a mini-batch is between 1,024 and 4,096 subgraphs (Kumar et al., 2017). Since the size of a subgraph for a large-scale graph can easily exceed 50 sampled nodes, the number of sampled nodes in a single mini-batch easily exceeds 200K nodes. Thus, the requirement of having at least 32,768 storage access requests during feature aggregation does not limit the flexibility of the GNN models.
Another critical factor that determines the mini-batch size is the GPU device memory capacity. The mini-batch size must be smaller than the available GPU device memory as the mini-batch is transferred from CPU to GPU for model training. Practically, there should be enough GPU device memory space after allocating space for a mini-batch since GPU device memory is also utilized for training and other data structures.
For a cache-line size of 4KB and 32,768 storage access requests, the mini-batch occupies 0.125 GB, plus metadata and a few megabytes of model parameters. The NVIDIA A100 offers up to 80 GB of GPU memory, providing enough space for a mini-batch and model parameters. Thus, GPU memory capacity is not a limiting factor for the mini-batch size.
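The arithmetic behind these numbers is straightforward:

```python
# 32,768 in-flight 4KB reads per mini-batch, as quoted above.
requests, line_bytes = 32_768, 4 * 1024
feature_bytes = requests * line_bytes
print(feature_bytes / 2**30, "GiB of feature data per mini-batch")   # 0.125 GiB
```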
### GPU Software-defined Cache for Efficient Feature Aggregation
Although the GIDS dataloader can effectively handle storage latency and achieve peak storage bandwidth during feature aggregation, the achievable storage read bandwidth is orders of magnitude lower than the GPU memory bandwidth: the High Bandwidth Memory 2 (HBM2) of recent NVIDIA GPUs can provide 2TB/s of bandwidth (Beng et al., 2017), whereas storage reads are limited by the 32GB/s PCIe intake bandwidth of the A100. Therefore, efficient utilization of GPU memory is necessary to amplify the effective bandwidth and further accelerate the feature aggregation process.
To address this, the GIDS dataloader employs the GPU software-defined cache. Unlike the GPU hardware caches, which help to conserve DRAM bandwidth, the GIDS software-defined cache is used to help conserve storage bandwidth. The GPU software-defined cache in the GIDS dataloader temporarily stores the feature data of recently accessed nodes, reducing the need for frequent storage accesses. During initialization, the GPU software-defined cache is allocated a fixed-size space with a configurable cache-line size. For each storage access, the entire cache-line is fetched from storage and stored in the GPU cache. Thus, to avoid I/O amplification, the GIDS dataloader sets the cache-line size equal to or slightly larger than the node feature embedding size. The GPU software-defined cache also tracks the status of each cache-line, enabling the system to determine whether the cache-line is in the cache, in use by another thread, or available for eviction.
Figure 6 illustrates the process of accessing storage for node feature vectors through the GPU software-defined cache. When a thread attempts to access the cache, it checks the state of the corresponding cache-line from the meta-data. If the cache-line is not found in the cache, the thread locks the cache-line, selects a cache-line to evict, and requests the cache-line from storage. Upon completion of the request, the thread marks the cache-line state as valid and unlocks the cache-line. This allows threads to fetch data directly from the GPU cache when the cache-line is in a valid state, avoiding unnecessary I/O traffic. Additionally, each thread warp is assigned to access the feature embedding of the same node, reducing contention and coalescing memory accesses.
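The following single-threaded Python sketch captures the cache-line protocol described above; the actual GIDS cache is concurrent GPU code that manipulates per-line metadata with atomic operations, so this is only a functional outline under simplified assumptions.

```python
from enum import Enum

class LineState(Enum):
    INVALID = 0
    LOCKED = 1
    VALID = 2

class FeatureCache:
    """Simplified, single-threaded sketch of the cache-line states in Figure 6."""
    def __init__(self, capacity, read_line_from_ssd):
        self.capacity = capacity
        self.read_line_from_ssd = read_line_from_ssd   # miss handler (storage access)
        self.state, self.lines = {}, {}

    def fetch(self, node_id):
        if self.state.get(node_id) is LineState.VALID:
            return self.lines[node_id]                 # hit: no I/O traffic
        self.state[node_id] = LineState.LOCKED         # lock while the line is filled
        if len(self.lines) >= self.capacity:           # evict a line that is safe to evict
            victim = next(n for n, s in self.state.items() if s is LineState.VALID)
            self.lines.pop(victim)
            self.state.pop(victim)
        self.lines[node_id] = self.read_line_from_ssd(node_id)
        self.state[node_id] = LineState.VALID          # mark valid and unlock
        return self.lines[node_id]
```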
### Window Buffering Cache Optimization
When the graph dataset is much larger than the GPU cache, achieving high reusability of node feature data becomes challenging due to the random nature of the neighborhood sampling process. In such scenarios, GPU memory space becomes a valuable resource, and maximizing its utilization is crucial.
One effective method to increase GPU cache utilization is to provide the cache with information about the dataset, which can involve pinning, i.e. marking certain sections of node features as highly reusable to reduce the likelihood of cache-lines in these sections being evicted (Kumar et al., 2017; Kumar et al., 2017). However, this approach presents a significant challenge in the context of GNN feature aggregation. This is because the nodes to be reused in the next iteration are randomly selected, making it challenging to mark them before launching the kernel.
To overcome this challenge, the GIDS dataloader introduces a novel technique called window buffering. Unlike the traditional frameworks, GIDS leverages the BaM software-defined cache which supports the customization of cache-line eviction policies. The window buffering technique reduces cache thrashing by avoiding the eviction of reusable node feature vectors through mini-batch look-ahead. This is achieved by conducting a graph sampling operation for a configurable number of iterations to fill the window buffer with sampled node IDs and avoiding the eviction of feature vectors for reused nodes in the window buffer. Therefore, the dataloader can look-ahead to the list of the sampled nodes for the next iterations.
Figure 6. Example of GIDS GPU software-defined cache update process
Specifically, as illustrated in Figure 7, the window buffer in the GIDS dataloader is initially filled with the node IDs that will be sampled in the next few iterations. Once the window buffer is filled, the sampled node IDs in the current mini-batch are compared with the nodes in the window buffer. Then, the list of nodes that will be reused in the next iterations, along with their numbers of occurrences, is generated. This information is then used to update the GPU software-defined cache meta-data, which tracks the number of reuses in the next iterations for each node.
During the update stage, when the reuse counter value is shifted from 0 to any positive number, the state of the node in the GPU cache is changed from the "Safe to Evict" state to the "USE" state so that the corresponding cache-line will not be evicted. If the counter value is already a positive number, the state is kept marked as the "USE" state. The counter value is decreased each time the node is reused during the cache-line release stage. When the counter value is set back to 0, the state of the corresponding cache-line is then set back to the "Safe to Evict" state so that other threads can safely evict the cache-line. This approach effectively reduces cache thrashing and improves the performance of GNN feature aggregation on GPUs.
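A simplified sketch of this look-ahead bookkeeping is given below; the container types and function names are illustrative, not the actual GIDS implementation.

```python
from collections import Counter
from itertools import islice

def plan_window_pins(upcoming_node_batches, window_depth=8):
    """Count how often each node sampled for the next `window_depth` mini-batches
    will be reused, so its cache-line stays in the "USE" state until the count drains."""
    reuse = Counter()
    for batch in islice(upcoming_node_batches, window_depth):
        reuse.update(batch)                 # batch: iterable of sampled node IDs
    return reuse                            # reuse[n] > 0 => do not evict node n

def release_after_use(reuse, node_id):
    """Decrement on each reuse; at zero the line becomes 'Safe to Evict' again."""
    if reuse[node_id] > 0:
        reuse[node_id] -= 1
    return reuse[node_id] == 0
```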
### Leveraging CPU Memory for Graph Sampling
As shown in Figure 4, the graph sampling throughput is higher on GPU than on CPU despite graph sampling being a sequential process. This is because the graph sampling process is especially latency-critical for large-scale graphs. The fundamental approach to accelerate such a process is to exploit parallelism to hide the latency, which GPUs naturally provide. Figure 8 shows that GPU outperforms CPU for all three datasets, with a performance gain of over 3x for the medium dataset. However, storing graph structure data in storage incurs multiple problems.
Firstly, the graph sampling process has a smaller data access granularity than the feature aggregation process, resulting in significant I/O amplification. This is because the data accesses to the storage devices are handled in page granularity, such as 4KB, meaning even if only a small segment of data is requested, the entire cache-line is transferred from the storage to GPU memory. Secondly, the random data access pattern from the sampling process makes it challenging for the GPU cache to exploit data locality, which can degrade the performance of the feature aggregation process. This is because the GPU memory is a limited resource, and the random data access pattern can pollute the GPU software-defined cache.
To address these challenges, the GIDS dataloader employs zero-copy data transfer via Unified Virtual Addressing (UVA) for graph structure data. Instead of storing the entire graph data in storage devices, our dataloader allows users to store node feature data on storage while pinning graph structure data in the CPU memory. This makes it possible to execute the graph sampling process on either CPU or GPU. This is a practical approach because the graph structure data is small, even for terabyte-scale graphs that we expect to accommodate in the foreseeable future, as shown in Table 4.
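A minimal sketch of this placement is shown below; the file names are placeholders, and frameworks expose analogous primitives for pinning graph structure so that GPU sampling kernels can read it via zero-copy/UVA while the bulky feature matrix stays on the SSDs.

```python
import numpy as np
import torch

# Pin the compact CSC structure arrays in CPU memory for zero-copy GPU access;
# the node feature matrix remains in storage and is served by the GIDS cache.
indptr = torch.from_numpy(np.load("csc_indptr.npy")).pin_memory()
indices = torch.from_numpy(np.load("csc_indices.npy")).pin_memory()
```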
## 4. Evaluation
### Experimental Setup
**Environment.** Table 1 summarizes the system configuration for all evaluations. We compare GIDS and the state-of-the-art DGL baseline on an AMD EPYC high-end server-grade system equipped with a NVIDIA A100-40GB GPU and 1TB DDR4 CPU DRAM. 512GB of the CPU memory is locked for evaluation unless otherwise stated. All SSDs used for the evaluations are Intel Optane PCIe Gen4 NVMe SSDs.
**Datasets.** To assess the performance of GIDS dataloader on large-scale graph datasets, we conducted experiments using four real-world datasets: IGB-Full (Krizhevsky et al., 2012), IGBH-Full (Krizhevsky et al., 2012), ogbn-papers100M (Krizhevsky et al., 2012), and MAG240M (Krizhevsky et al., 2012). Table 2 presents the characteristics of these datasets, such as the number of nodes and edges, the dimension of the node feature data, and the type of graph. It is worth noting that
Figure 8. Graph sampling time of CPU and GPU graph sampling on the graphs with different sizes
Figure 7. Example of window buffering technique from GIDS dataloader
ogbn-papers100M and MAG240M datasets are small enough to fit into the CPU memory of our evaluation system.
For micro-benchmarking with smaller datasets, we used subgraphs extracted from the IGB-Full dataset, following a procedure that maintains characteristics consistent with the original graph. The four graphs are denoted as IGB-tiny, IGB-small, IGB-medium, and IGB-large, and their properties are shown in Table 3. The node feature dimension is 1024, the same as in the IGB-Full dataset.
**GIDS Implementation:** We extended DGL (Wang et al., 2017) to implement the GIDS dataloader. Our approach involves creating new extensions for storage-based feature gathering by leveraging BaM (Wang et al., 2017) to support user-level GPU-initiated direct storage access. We then extended the DGL dataloader class to incorporate GIDS functionalities. To use the GIDS dataloader, users only need to set the GIDS flag when initializing the DGL dataloader.
**Model:** We use 3-layer GraphSAGE for homogeneous graphs and HINSage for heterogeneous graphs. Both models have a hidden dimension of 128 and a sampling size of (10,5,5). By default, we set the mini-batch size to 4,096 subgraphs where each subgraph is sampled by Neighbor Sampling.
**GIDS Dataloader:** We allocate 10 GB GPU device memory as GPU software-defined cache. By default, we use one NVMe SSD for both the GIDS dataloader and the DGL baseline dataloader.
**Baseline:** We compared GIDS with the DGL dataloader extended to work with memory-mapped files. We used the memmap function from NumPy to create a memory-mapped array for the graph data.
**Measuring Execution Time:** When working with large graph datasets, the training process can be excessively long, especially for the baseline. Therefore, we conducted the evaluations by measuring the execution time for 100 iterations after a warm-up stage of 1,000 iterations. We used the listed model configuration, with a mini-batch size typically ranging from 1 GB to 3 GB. This setup favors the baseline because we exclude the storage latency overhead of the first 1,000 iterations, during which the CPU page cache is warmed up for the baseline. In contrast, the GIDS dataloader requires only about 10 iterations to warm up its GPU software-defined cache, and cache misses are more costly for the baseline because their storage latency is exposed.
### Impact of Exploiting GPU Parallelism on Storage Access Latency
We conducted an additional evaluation to compare the impact of storage latency on the feature aggregation performance of the GIDS dataloader and the baseline dataloader when fetching feature data for the IGB-Full dataset from storage consisting of a single SSD. For this evaluation, we measured the feature aggregation time for the first 20 iterations when both the GPU software-defined cache in GIDS dataloader and the CPU cache for the baseline were empty at iteration 1. Figure 9 shows that the feature aggregation bandwidth for GIDS dataloader was 5.6 GBps for iteration 1, while the baseline had a bandwidth of 0.05 GBps. As the peak SSD bandwidth for a 4KB cache-line is around 5.8 GBps, GIDS dataloader's feature aggregation throughput shows that it can fully hide the storage latency. However, the baseline dataloader fails to hide the latency and faces significant overhead when the CPU cache is empty.
| Dataset | Feature Data Size (%) | Graph Structure Data Size (%) | Total Size (GB) |
| --- | --- | --- | --- |
| ogbn-papers100M | 68.3 | 31.0 | 77.4 |
| IGB-Full | 94.7 | 5.1 | 1084 |
| MAG240M | 86.7 | 12.8 | 200 |
| IGBH-Full | 96.0 | 3.8 | 2773 |
Table 4. Datasize distribution for the real-world datasets.
| Dataset | Graph Type | Number of Nodes | Number of Edges | Feature Dimension |
| --- | --- | --- | --- | --- |
| IGB-tiny | Homogeneous | 100,000 | 547,416 | 1024 |
| IGB-small | Homogeneous | 1,000,000 | 12,070,502 | 1024 |
| IGB-medium | Homogeneous | 10,000,000 | 12,071,794 | 1024 |
| IGB-large | Homogeneous | 100,000,000 | 12,273,571,364 | 1024 |
Table 3. IGB datasets used for micro-benchmarks.
Figure 9. Feature aggregation throughput comparison between GIDS dataloader and the baseline dataloader for the first 20 iterations. One SSD is used for this measurement.
As the iterations progress, the feature aggregation bandwidth for both dataloaders increases as both the GPU cache for GIDS and the CPU cache for the baseline fill up with new data. GIDS dataloader reaches the saturation point after around iteration 10 since the GPU software-cache size is only 10GB. However, the CPU page cache capacity is 512GB and the page cache is still being warmed up after the first 20 iterations, so the feature aggregation process is not yet at the peak throughput at iteration 20 for the baseline. We also observed that when the page cache hit ratio is around 100%, the baseline dataloader can achieve around 5 GBps, which is still lower than the feature aggregation bandwidth of GIDS dataloader at iteration 1. This is because the CPU is over-utilized, and there are CPU software bottlenecks causing additional overhead (Kumar et al., 2017). Thus, GIDS dataloader outperforms the baseline dataloader even when the baseline dataloader is accessing a pre-loaded page cache.
### Impact of the Window Buffering Cache Optimization
In this section, we present an evaluation of the impact of GPU software-defined cache optimization on the feature aggregation process. To conduct this evaluation, we compared the performance of GIDS with a basic GPU software-defined cache against GIDS with window buffering optimization. To ensure a fair comparison, we used the IGB-full dataset with the same Neighbor Sampling parameters and mini-batch size, and the size of the GPU cache was fixed at 10 GB for all configurations.
To accurately measure the impact of the window buffering technique, we varied the depth of the window buffer from 0 to 4, and then to 8 while evaluating the feature aggregation time and the GPU software-defined cache hit ratio. When the window buffer depth is 0, the GPU software-defined cache follows the random eviction policy, which serves as the baseline. Figure 10 displays the results, which show that the window buffering technique can improve the cache hit ratio. A window size of 4 improves the cache hit ratio by only 1.2\(\times\) and the feature aggregation time by 1.04\(\times\).
Setting the window buffer depth too low, compared to the size of the GPU cache, can lead to a similar performance as random eviction. For instance, if the mini-batch size is 2 GB, and the GPU cache size is 10 GB, most of the node features from the previous four mini-batches still reside in the cache with a random eviction policy. Therefore, the optimal hit ratio with a window size of four is similar to random eviction, making it hard to achieve a meaningful performance gain.
When we increase the window buffer size to 8, the cache hit ratio improves by 2.19\(\times\) over not having any window buffering, and the aggregation time decreases by 1.13\(\times\). This is because the depth of the window buffer provides enough information about the cached node features that will be reused in future mini-batches to avoid evicting reusable cache-lines across mini-batches, which results in a substantial difference compared to random eviction. When the window buffer size is set to 8, the cached node features that the GPU cache can utilize are more than the node feature data that can fit into the GPU cache. Any further increase in the window buffer depth should be accompanied by an increased GPU cache.
Next, we compare the performance difference between window buffering and static cache-line pinning. For static cache-line pinning, GIDS dataloader pins 40% of the GPU cache with the feature data of the nodes with higher out-degree, as these nodes are more likely to be sampled and should be prioritized for pinning (Gil et al., 2017; Wang et al., 2018).
As shown in Figure 11, the window buffering technique can outperform the cache-line pinning technique by 1.08x. Unlike static cache-line pinning, there is no overhead from graph preprocessing to mark specific segments as highly reusable whereas window buffering dynamically pins the cache-line based on the list of the sampled nodes. Moreover, the effectiveness of static cache-line pinning is highly influenced by the graph properties, whereas window buffering is significantly more flexible. As window buffering can provide higher performance and flexibility, GIDS leverages it for the default eviction policy.
However, there is a trade-off to consider when increasing the window buffer depth. First, there needs to be enough memory space for the window buffer. As the number of node samples for each mini-batch is around 1M, the size of the list of sampled nodes for a mini-batch is several megabytes. Although this is not a significantly large amount, larger window sizes increase the GPU memory requirement as the list of sampled nodes in the window buffer must be kept in the GPU memory for subsequent iterations. Additionally, a larger window size means a larger portion of the GPU cache will be pinned for future reuse, increasing the contention on the available cache-lines in the GPU software cache. Therefore, it is essential to carefully choose the window buffer size to ensure that the benefit of
Figure 11. Feature aggregation performance comparison between window buffering and static cache-line pinning
Figure 10. Performance comparison of feature aggregation process on GIDS dataloader for different window buffering depths.
a higher cache hit ratio outweighs the overhead of a larger window buffer size. By default, the GIDS dataloader sets the depth of the window buffer to 8 based on the system environment. However, the window buffer depth is a tunable parameter that users can adjust based on the hardware environment, such as GPU memory size.
### Overall Performance
Figure 12 shows the execution time of each GNN training stage for the baseline and GIDS dataloader on homogeneous graphs, whereas their execution time on heterogeneous graphs is shown in Figure 13. Figure 14 shows the GIDS speedup compared to the baseline. For these measurements, one SSD is used for GIDS. We will scale the number of SSDs in Section 4.5. As shown in Figure 14, our GIDS dataloader achieves a 29.98\(\times\) and 1.38\(\times\) speedup for the feature aggregation process compared to the DGL baseline dataloader on the IGB-Full and ogbn-papers100M datasets, respectively. On heterogeneous graphs, our GIDS dataloader achieves a 160\(\times\) and 1.37\(\times\) speedup on IGBH-Full and MAG240M datasets, respectively.
These performance gains for the feature aggregation are attributed to the utilization of full storage bandwidth and the GPU software-defined cache. The performance gain for IGB-Full and IGBH-Full datasets is substantially larger than that for ogbn-papers100M and MAG240M because the sizes of the latter two graphs are smaller than the CPU memory capacity, and thus the baseline does not incur a significant number of page faults while training with these datasets. As a result, the performance gain for ogbn-papers100M and MAG240M mainly comes from the GPU software-defined cache. It is worth noting that the feature aggregation is bounded by the peak SSD bandwidth for this evaluation as a single SSD is used for both dataloaders. Section 4.5 and Section 4.7 show the overall GNN performance improvement and the time breakdown when GIDS leverages multiple SSDs to increase the storage bandwidth.
Our GIDS dataloader achieves a 22.98\(\times\) and 3.25\(\times\) speedup for the graph sampling process for IGB-Full and ogbn-papers100M. For heterogeneous graphs, our GIDS dataloader achieves a 53.55\(\times\) and 6.3\(\times\) speedup for the graph sampling process for IGBH-Full and MAG240M, respectively. This is because all graph structure data is pinned in the CPU memory so that the GPU can execute the graph sampling process via on-demand zero-copy data transfer. The transfer time is also reduced as the resulting mini-batch is kept in the GPU memory, so there is no need to transfer the feature data for the mini-batch from CPU to GPU with our GIDS dataloader. Finally, our GIDS dataloader does not modify any GNN training models so the training time remains constant between our approach and the baseline, aside from the variance caused by the random nature of the sampling process.
### Storage Bandwidth Scalability
As the data preparation process is shifted from CPU to GPU on GIDS dataloader, the data preparation throughput can be accelerated by increasing the bandwidth to transfer data from storage to GPU. However, the peak SSD bandwidth often falls behind the PCIe bandwidth, limiting the maximum achievable throughput. To overcome this limitation, GIDS utilizes one of the features of the BaM system (Zhu et al., 2017). This feature enables multiple SSDs to be connected to a single GPU and uniformly distributes I/O requests across all connected SSDs through round-robin scheduling. This amplifies the storage bandwidth and enables it to fully saturate the GPU's PCIe x16 bandwidth. For instance, if four Intel Optane SSDs are connected to a single GPU, the collective SSD bandwidth would reach ~24 GBps, which nearly saturates the PCIe bandwidth.
Figure 15 shows the end-to-end (E2E) GNN training time with the time breakdown for GIDS dataloader as we scale the numbers of SSDs. With more SSDs, the collective SSD bandwidth increases, resulting in higher feature aggregation throughput. These results demonstrate that the feature aggregation process is limited by the storage bandwidth when the number of connected SSDs is less than four, and by the PCIe bandwidth when four SSDs are connected. When the PCIe bandwidth is fully saturated, the throughput is theoretically maximized when the data is not stored in the GPU memory. Since the baseline dataloader cannot even achieve the peak throughput of one SSD, its throughput does not improve with the additional SSDs. Furthermore, the pressure on the storage bandwidth for smaller graphs is relatively low, as the GPU software cache utilization is higher. Therefore, the performance gain from leveraging multiple SSDs is higher for larger graphs and the performance advantage of the GIDS dataloader is magnified by about 4\(\times\) when using four SSDs for large graphs, as shown in Figure 15.
### Comparison with UVA-based Approach
This section presents a performance comparison between the GIDS dataloader and the DGL dataloader, which uses Unified Virtual Addressing (UVA) to enable GPU graph sampling and feature aggregation by pinning the graph data into the CPU memory. The IGB-Large dataset was used for evaluation since the UVA-based approach is limited to datasets smaller than the CPU memory capacity. The execution time of the UVA-based approach represents a lower bound for the GIDS approach without the software-defined cache: it incurs no storage access latency, enjoys ample CPU memory bandwidth, and can fully saturate the PCIe bandwidth by exploiting GPU memory-level parallelism during feature aggregation.
Figure 16 illustrates the normalized execution time of each GNN training stage for the two dataloaders. As shown in the figure, the feature aggregation time for GIDS dataloader is 0.72\(\times\) compared to the baseline, a speedup of 1.29\(\times\). Both dataloaders can fully saturate the PCIe bandwidth, but GIDS's performance gain comes from its GPU software-defined cache and window buffering. The effectiveness of the GPU cache becomes more critical to the end-to-end (E2E) performance as the system is bounded by the PCIe bandwidth. _Therefore, with GIDS and enough SSDs, there is no incentive for frameworks to try to keep the node features in the CPU memory even if they can fit_. This is an important insight as it significantly reduces the cost of large-scale GNN deployment in industry.
### GIDS Dataloader GNN Training Time-breakdown
In this section, we present the GNN training time breakdown for GIDS dataloader. To conduct this evaluation, we utilized four SSDs
to saturate the PCIe-bandwidth, and employed the window buffering technique for the GPU software-defined cache. As shown in Figure 17, the disparity between the overheads of the GNN training pipeline stages is much less with GIDS than the baseline dataloader (Figure 3). For medium-scale graph datasets, namely ogbn-papers100M and MAG240M, with the GIDS dataloader the training time accounts for 32.5% and 16.9% of the total execution time, respectively, while it accounts for only 9.8% and 2% of the execution time when using the baseline dataloader. For large-scale graph datasets, IGB-Full and IGBH-Full, the training time takes 11.7% and 16.1%, respectively, when using GIDS dataloader, whereas the baseline dataloader with a 512GB CPU memory capacity takes only 0.18% and 0.04%. The results demonstrate that our proposed GIDS dataloader accelerates the data preparation process and much more closely matches the GPU training throughput.
Figure 14. Speedup for each GNN training stage for GIDS dataloader with window buffering compared to the baseline dataloader.
Figure 12. GNN training time breakdown of the DGL dataloader and GIDS dataloader with window buffering on the homogeneous graphs. One SSD is used in these measurements.
Figure 13. GNN training time breakdown of the DGL dataloader and GIDS dataloader with window buffering on the heterogeneous graphs. One SSD is used in these measurements.
Figure 15. GNN training time breakdown of the DGL dataloader and GIDS dataloader with different numbers of SSDs.
## 5. Related Work
Several GNN-specific applications and optimizations have been proposed in the literature (Hanlon et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). ROC (Liu et al., 2017), NeuGraph (Liu et al., 2017), and DSP (Liu et al., 2017) propose multi-GPU training systems for large-scale GNN training. However, they require significant additional hardware resources and are not scalable solutions.
FeatGraph (Liu et al., 2017) and ZIPPER (Wang et al., 2019) propose tiling to mitigate the memory footprint during GNN training. FeatGraph reduces memory usage by utilizing graph partitioning and feature dimension tiling. Meanwhile, ZIPPER employs graph-native intermediate representation to optimize GNN, such as sparse graph tiling and redundant operation elimination. However, these approaches suffer from random accesses from GNN, leading to poor performance. Moreover, these solutions do not leverage GPU for the data preparation process.
AliGraph (Liu et al., 2017), PaGraph (Liu et al., 2017), and Ginex (Ginex, 2018) use in-memory caching to reduce data transfer overhead. AliGraph and PaGraph cache high out-degree vertices in GPU memory to minimize data transfer between CPU and GPU. Ginex uses Belady's algorithm with super-batch samples and pipelining techniques to hide the latency from specialized caching policies. However, these approaches rely on the CPU for the data preparation process and cannot fully hide storage latency.
Data Tiering (Wang et al., 2019) uses weighted reverse PageRank to estimate the frequency of accesses during node sampling, improving GPU memory utilization. However, it requires all graph data to be stored in either CPU or GPU memory for GNN training execution, so it is not applicable to large-scale GNN training.
## 6. Conclusion
Training Graph Neural Networks (GNNs) on large-scale graph datasets is a challenging task due to their size exceeding the CPU memory capacity. Although distributed training is a possible solution, it is not cost-effective or even practical for many users. In this paper, we propose the GIDS dataloader, a GPU-oriented GNN training system that enables the training of large-scale graph datasets on a single machine. GIDS dataloader enables GPU threads to directly access storage and fully tolerates the long storage latency by exploiting the massive data level parallelism provided by GPUs. Moreover, GIDS dataloader further improves performance with a hybrid data placement strategy and by utilizing GPU memory as a software-defined cache with window buffering and cache-line pinning. By reducing the I/O overhead, GIDS dataloader can scale GNN training to datasets whose sizes are more than an order of magnitude larger than a single machine's CPU memory capacity while achieving up to 392x speedups over the state-of-the-art data-loader for the overall execution of an end-to-end GNN training pipeline. Our measurements show that even after fully saturating the PCIe bandwidth and achieving a huge speedup over the baseline, the feature aggregation stage can still take significantly longer than the training stage for large-scale graphs, which indicates opportunities for further optimizations of the feature aggregation stage as potential future work.
|
2308.09708 | Training with Product Digital Twins for AutoRetail Checkout | Automating the checkout process is important in smart retail, where users
effortlessly pass products by hand through a camera, triggering automatic
product detection, tracking, and counting. In this emerging area, due to the
lack of annotated training data, we introduce a dataset comprised of product 3D
models, which allows for fast, flexible, and large-scale training data
generation through graphic engine rendering. Within this context, we discern an
intriguing facet, because of the user "hands-on" approach, bias in user
behavior leads to distinct patterns in the real checkout process. The existence
of such patterns would compromise training effectiveness if training data fail
to reflect the same. To address this user bias problem, we propose a training
data optimization framework, i.e., training with digital twins (DtTrain).
Specifically, we leverage the product 3D models and optimize their rendering
viewpoint and illumination to generate "digital twins" that visually resemble
representative user images. These digital twins, inherit product labels and,
when augmented, form the Digital Twin training set (DT set). Because the
digital twins individually mimic user bias, the resulting DT training set
better reflects the characteristics of the target scenario and allows us to
train more effective product detection and tracking models. In our experiment,
we show that DT set outperforms training sets created by existing dataset
synthesis methods in terms of counting accuracy. Moreover, by combining DT set
with pseudo-labeled real checkout data, further improvement is observed. The
code is available at https://github.com/yorkeyao/Automated-Retail-Checkout. | Yue Yao, Xinyu Tian, Zheng Tang, Sujit Biswas, Huan Lei, Tom Gedeon, Liang Zheng | 2023-08-18T17:58:10Z | http://arxiv.org/abs/2308.09708v1 | # Training with Product Digital Twins for AutoRetail Checkout
###### Abstract
Automating the checkout process is important in smart retail, where users effortlessly pass products by hand through a camera, triggering automatic product detection, tracking, and counting. In this emerging area, due to the lack of annotated training data, we introduce a dataset comprised of product 3D models, which allows for fast, flexible, and large-scale training data generation through graphic engine rendering. Within this context, we discern an intriguing facet: because of the user "hands-on" approach, bias in user behavior leads to distinct patterns in the real checkout process. The existence of such patterns would compromise training effectiveness if training data fail to reflect the same. To address this user bias problem, we propose a training data optimization framework, _i.e._, training with digital twins (DtTrain). Specifically, we leverage the product 3D models and optimize their rendering viewpoint and illumination to generate "digital twins" that visually resemble representative user images. These digital twins inherit product labels and, when augmented, form the Digital Twin training set (DT set). Because the digital twins individually mimic user bias, the resulting DT training set better reflects the characteristics of the target scenario and allows us to train more effective product detection and tracking models1. In our experiment, we show that DT set outperforms training sets created by existing dataset synthesis methods in terms of counting accuracy. Moreover, by combining DT set with pseudo-labeled real checkout data, further improvement is observed. The code is available at [https://github.com/yorkeyao/Automated-Retail-Checkout](https://github.com/yorkeyao/Automated-Retail-Checkout).
Footnote 1: Counting is an inherent outcome once all products are successfully detected and tracked.
## Introduction
In the rapidly evolving landscape of smart retail, the automation of processes has emerged as a pivotal endeavor, enhancing efficiency and user experience. One prominent facet of this transformation is the automation of the checkout process, wherein users seamlessly pass products through a camera-enabled environment, eliciting automated product detection, tracking, and counting. This innovative paradigm streamlines the shopping experience but also presents unique challenges, particularly in the realm of data acquisition and model training.
To achieve AutoRetail Checkout (ARC) with deep learning, the acquisition of labeled training data has become a significant bottleneck, owing to difficult data collection, expensive annotation, and privacy concerns [14]. For example, collecting real AutoRetail Checkout training data could be costly, as it typically involves manual product movement in front of a camera and subsequent time-consuming human labeling. In this paper, we avoid real data labeling and introduce a novel dataset that leverages product 3D models as a foundation for training data generation. As shown in Fig. 1, given 3D product models, a renderer allows rapid generation of rendered product images, producing a training set with thousands of images within minutes [14, 15, 16, 17, 18, 19].
Though data rendering using graphic engines offers significant advantages for training deep learning models, it introduces a distinctive challenge, _i.e._, the bias difference (termed as domain gap) between rendered and real data, which hampers its scalability. To explain, as users interact with the ARC in a "hands-on" manner, these individual biases
Figure 1: Problem definition. We focus on the problem of using 3D assets to train a 2D detection and tracking model for AutoRetail Checkout. Given 3D assets, we aim to render a 2D training set by setting up a filming scenario. To achieve this, we propose the DtTrain framework to improve the training set's specificity to the real checkout process.
manifest as discernible patterns in the resulting product images. For instance, many customers tend to place the labeled side of the product facing upwards, resulting in the camera being more likely to capture this biased viewpoint. In this case, if the training data fails to encapsulate these biases, the presence of such a domain gap introduces a hurdle to the efficacy of training, as models may not generalize effectively to real-world scenarios.
In the past, addressing domain gap and making rendered data more realistic required extensive human effort, involving complex filming scene arrangements. However, recent advancements in computer vision have brought about a revolutionary change in film scene arrangement techniques. These advancements enable training set optimization and reduce the necessity for extensive human involvement in the process [14, 15, 16, 17]. A prime example of these advancements is the attribute descent algorithm developed by Yao _et al_. This algorithm efficiently learns film attribute distributions, significantly enhancing the realism of rendered data and improving the specificity of the training set for a target validation/testing [17]. With these methods, the gap between rendered and real data is narrowed, making rendered datasets more valuable for training deep learning models.
In this paper, we present a novel pipeline for rendered training set creation by augmenting core digital twins. Firstly, our approach is motivated by the understanding that the target set bias can be effectively represented by its smaller core set. To create this core set, we carefully select the most representative samples from the target domain based on their similarity in the feature space. For representative images, we utilize the graphic engine to create their digital twins, which are virtual representations of products in the rendering environment that closely mimic real images. This is achieved through precise per-image attribute optimization with coordinate descent [13]. Subsequently, we apply attribute-guided data augmentation techniques to these digital twins, thereby substantially increasing the dataset size. This data generation process culminates in the formation of the Digital Twin training set (DT set), a training dataset that specifically encapsulates the "hands-on" user bias.
We conduct a comprehensive experiment to illustrate the superior efficacy of DtTrain over existing training set creation methods in terms of ARC counting accuracy. Furthermore, jointly training on the DT set and pseudo-labeled real checkout data yields demonstrable performance enhancements. Through this endeavor, the study offers a promising solution to the conundrum of user bias, ultimately advancing the frontier of automated checkout systems in smart retail.
## Method
### Automatic Retail Checkout Dataset
In this paper, the challenge lies in the absence of a labeled real-world training set for ARC. We address this by rendering different images from the given 3D retail models. The rendered data allows us to train a robust model for the 2D ARC task, even in an environment with no prior annotations for the validation/test set.
To accomplish this task, we introduce a novel dataset specifically designed for ARC. Fig. 1, Fig. 2 and Table 1 illustrate examples and statistics of this dataset. We have curated 116 3D scans of real-world retail objects sourced from supermarkets, represented as 3D models. The dataset encompasses various object classes, including daily necessities, food, toys, furniture, and household items, among others. The images are captured in a setup from Yao et al. (2022). As depicted in Fig. 1 left, we incorporate controllable attributes like object placement, camera pose, and lighting. As highlighted in Table 1, compared to Dress Retrieval [12], MEP-3M [13], M5Product [15], and RPC [14], our dataset stands out by providing 3D assets, which offer the potential to generate an extensive collection of images. Additionally, the rendered data enables us to provide accurate product labeling and further attribute labeling for real checkout scenarios. To promote collaboration and further research, we will make the 3D models and film scene (implemented by a Unity-Python interface) readily available to the community, allowing the creation of more rendered data if required.
In a real ARC scenario, shown in the bottom right corner of Fig. 1, the camera is mounted above the checkout counter and facing straight down, while a customer is enacting a checkout action by "scanning" objects in front of the counter in a natural manner. Several different customers participate, and each of them scan slightly differently. There is a shopping tray placed under the camera to indicate where
\begin{table}
\begin{tabular}{l|l|c|c|c|c} \hline \hline & Datasets & \#Cate. & \#Images & Modality & Attr. \\ \hline \multirow{4}{*}{\begin{tabular}{c} Multi-modal \\ Retrieval \\ \end{tabular}} & Dress Retrieval & 50 & 0.020M & I,T & ✗ \\ & Product1M & 458 & 1.182M & I,T & ✗ \\ & MEP-3M & 599 & 3.011M & I,T & ✗ \\ & M5Product & 6,232 & 6.313M & I,T,V,A,Tab & ✗ \\ \hline Retail & RPC & 200 & 0.368M & I & ✗ \\ Checkout & ARC (Ours) & 116 & \(\infty\) & V & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparing datasets related to retail objects. “Attr” denotes whether the dataset has attribute labels (_e.g.,_ orientation). In our ARC dataset, a category is a 3D model corresponding to a product. From each 3D model (category), we can render an unlimited number of images by varying environment and camera settings in Unity. Modalities are denoted as: Image (I), Text (T), Video (V), Audio (A), and Table (Tab).
Figure 2: 3D assets and real image examples in automatic retail checkout dataset. We have 3D assets for model training and 2D images for model validation and testing.
the AI model should focus. In summary, we obtain approximately \(22\) minutes of videos, and the videos are further split into target unlabeled _training_ and labeled _test_ sets such that _training_ and _test_ account for \(40\%\) and \(60\%\), respectively.
The presence of a noticeable domain gap between the rendered source and real target data is our major concern. Real-world datasets often exhibit distinct dataset biases, _e.g._, viewpoint bias. During a retail checkout process, customers typically view products from specific angles, resulting in an uneven distribution of viewpoints. For instance, plate-like products are usually viewed from the front or rear as people handle them manually. If our rendered training set lacks a similar bias in viewpoint distribution, it creates a discrepancy between the two domains. As a consequence, the model's performance may suffer from such a domain gap, leading to a drop in accuracy and effectiveness.
### Problem Definition
Formally, we denote the _target_ real ARC dataset as \(\mathcal{D}_{T}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{M}\) where \(M\) indicates the number of image-label pairs in the target. It follows the distribution \(p_{T}\), _i.e._, \(\mathcal{D}_{T}\sim p_{T}\). Let \(\mathcal{D}_{S}\) be the rendered _source_ set to be constructed, and \(\mathcal{D}_{S}=\{(\mathcal{R}(\mathbf{\psi_{i}}),y_{i})\}_{i=1}^{N}\). Here \(\mathbf{\psi_{i}}\) is an attribute vector of \(K\) components controlling the 3D rendering environment, _i.e._, \(\mathbf{\psi_{i}}=[\mu_{1},...,\mu_{K}]\in\mathbb{R}^{K}\). \(\mathcal{R}(\cdot)\) is the underlying rendering function that takes attribute vector \(\mathbf{\psi_{i}}\) as input and produces a rendered image. We input attribute \(\mathbf{\psi_{i}}\) to the renderer, which generates an image-label pair. With our renderer, we can potentially render an unlimited number of images. Here, \(N\) indicates the desired number of image-label pairs in the rendered dataset.
With these definitions, we aim to build \(\mathcal{D}_{S}{}^{*}\) such that the model \(h_{\mathcal{D}_{S}}\) trained on \(\mathcal{D}_{S}\) has minimal risk on \(\mathcal{D}_{T}\), _i.e._,
\[\mathcal{D}_{S}{}^{*}=\operatorname*{arg\,min}_{\mathcal{D}_{S}}\mathbb{E}_{ \mathbf{x},y\sim p_{T}}[\ell(h_{\mathcal{D}_{S}}(\mathbf{x}),y)]. \tag{1}\]
In this paper, since we do not actually have labels in \(\mathcal{D}_{T}\), the optimization objective defined in Eq. 1 is not directly tractable. Thus, we need to build \(\mathcal{D}_{S}{}^{*}\) without performing real training and testing. Instead, we aim for \(\mathcal{D}_{S}{}^{*}\) to train a model whose performance is similar to that of a model trained on \(\mathcal{D}_{T}\), the real target training set. Formally, we therefore transfer the objective as:
\[\mathcal{D}_{S}{}^{*}=\operatorname*{arg\,min}_{\mathcal{D}_{S}}\big{|}L(h_{ \mathcal{D}_{T}}(\mathbf{x}),y)-L(h_{\mathcal{D}_{S}}(\mathbf{x}),y)\big{|}, \tag{2}\]
where \(L(h_{\mathcal{D}_{T}}(\mathbf{x}),y)\) and \(L(h_{\mathcal{D}_{S}}(\mathbf{x}),y)\) are the respective risks of model \(h\) on the datasets \(\mathcal{D}_{T}\) and \(\mathcal{D}_{S}\). For explicitness, we define the risk of model \(h\) on an arbitrary dataset \(\mathbf{S}\) as
\[L(h_{\mathbf{S}}(\mathbf{x}),y)=\frac{1}{|\mathbf{S}|}\sum_{(\mathbf{x}_{i},y_{i})\in \mathbf{S}}\ell(h_{\mathbf{S}}(\mathbf{x}_{i}),y_{i}), \tag{3}\]
where \(\ell(h_{\mathbf{S}}(\mathbf{x}_{i}),y_{i})\) is the risk on individual samples as in Eq. 1.
We further split our objective into two parts, where we aim to minimize an upper bound of the error in Eq. 2, _i.e._,
\[\mathcal{D}_{S}{}^{*}=\operatorname*{arg\,min}_{\mathcal{D}_{S}} \underbrace{\big{|}L(h_{\mathcal{D}_{T}}(\mathbf{x}),y)-L(h_{\mathcal{D}_{C}}(\mathbf{x}),y)\big{|}}_{\text{Core Set Error}}+\] \[\underbrace{\big{|}L(h_{\mathcal{D}_{C}}(\mathbf{x}),y)-L(h_{\mathcal{D}_{S}}(\mathbf{x}),y)\big{|}}_{\text{Digital Twin Error}}. \tag{4}\]
### Coreset Selection
In the first part, to ensure similar performance between the model trained on \(\mathcal{D}_{C}\) and the model trained on \(\mathcal{D}_{T}\), we minimize the risk differences between them, _i.e._,
\[\mathcal{D}_{C}=\operatorname*{arg\,min}_{\mathcal{D}_{C}\in 2^{\mathcal{D}_{T}}}\big{|}L(h_{ \mathcal{D}_{T}}(\mathbf{x}),y)-L(h_{\mathcal{D}_{C}}(\mathbf{x}),y)\big{|}, \tag{5}\]
where \(\mathcal{D}_{C}\) is a subset of \(\mathcal{D}_{T}\) of size \(O\), _i.e._, \(\mathcal{D}_{C}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{O}\). Thus, we reduce the problem to a core set selection problem.
From the theory of core sets [10], if \(\mathcal{D}_{C}\) is a \(\delta\)-cover of the set \(\mathcal{D}_{T}\) and shares the same number of classes with \(\mathcal{D}_{T}\), the risk difference between models \(h_{\mathcal{D}_{T}}\) and \(h_{\mathcal{D}_{C}}\) (_i.e._, the core set error) is bounded by
\[\big{|}L(h_{\mathcal{D}_{T}}(\mathbf{x}),y)-L(h_{\mathcal{D}_{C}}(\mathbf{x}),y)\big{|} \leq \mathcal{O}(\delta)+\mathcal{O}(|\mathcal{D}_{T}|^{-\frac{1}{2}}). \tag{6}\]
\(\delta\) is the radius of the cover, and \(\mathcal{O}(\delta)\) is a polynomial function of \(\delta\). The problem can be reduced to a K-center problem [10] by minimizing \(\mathcal{O}(\delta)\). We apply a 2-approximation algorithm [11] to iteratively find optimal samples in \(\mathcal{D}_{T}\) and add them to \(\mathcal{D}_{C}\). Specifically, each optimal sample \(\mathbf{z}^{*}\) is computed as
\[\mathbf{z}^{*}=\operatorname*{arg\,max}_{\mathbf{x}_{i}\in\mathcal{D}_{T}}\min_{\mathbf{x}_{j}\in\mathcal{D}_{C}}\|f(\mathbf{x}_{i})-f(\mathbf{x}_{j})\|_{2}, \tag{7}\]
where \(\mathbf{z}=(\mathbf{x},y)\), and \(f(\mathbf{x})\) represents the feature extracted from an image \(\mathbf{x}\). This process is named the furthest point sampling (FPS) method [10], which enables the
Figure 3: The DtTrain framework. It is designed to construct bias-adapted rendered training data. The framework comprises three key components: (a) coreset selection, aiming at identifying the most representative samples from the target domain. In the figure, the target images with green dashed boxes are selected, and those with red dashed boxes are not selected. Following this, we (b) generate digital twins for each image within the core set, by optimizing attributes shown in Fig. 1. Ultimately, the training set is curated through (c) attribute-guided data augmentation based on the rendered core set.
most representative samples from a dataset to be selected iteratively until size \(O\).
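To make the selection step concrete, below is a minimal NumPy sketch of the greedy 2-approximation (FPS) described above. It assumes the features \(f(\mathbf{x}_{i})\) of the target set have already been extracted into a matrix; the function and variable names are illustrative rather than taken from our implementation.

```python
import numpy as np

def farthest_point_sampling(feats, O, seed=0):
    """Greedy 2-approximation to the K-center problem (FPS).

    feats: (M, d) array of features f(x_i) for the target set.
    O:     desired core-set size.
    Returns the indices of the O selected samples.
    """
    rng = np.random.default_rng(seed)
    M = feats.shape[0]
    selected = [int(rng.integers(M))]           # arbitrary first center
    # distance of every sample to its nearest selected center
    dists = np.linalg.norm(feats - feats[selected[0]], axis=1)
    while len(selected) < O:
        z_star = int(np.argmax(dists))          # farthest point from the current core set
        selected.append(z_star)
        new_d = np.linalg.norm(feats - feats[z_star], axis=1)
        dists = np.minimum(dists, new_d)        # update nearest-center distances
    return selected
```

Each round adds the sample that is farthest, in feature space, from the current core set, which is the greedy step corresponding to Eq. (7).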
### Digital Twin Creation
We then focus on the second part, where we aim to get
\[\mathcal{D}_{S}{}^{*}=\operatorname*{arg\,min}_{\mathcal{D}_{S}}\big{|}L(h_{ \mathcal{D}_{C}}(\mathbf{x}),y)-L(h_{\mathcal{D}_{S}}(\mathbf{x}),y)\big{|}. \tag{8}\]
Since \(\mathcal{D}_{C}\) can be relatively small in scale (it is a pruned subset of \(\mathcal{D}_{T}\)), we make the dataset \(\mathcal{D}_{S}{}^{*}\) match \(\mathcal{D}_{C}\) as closely as possible by forming digital twins, _i.e._, by minimizing the content difference between each pair of corresponding images. Formally, we have the rendered dataset \(\mathcal{D}_{S}{}^{*}=\{(\mathcal{R}(\mathbf{\psi_{i}}),y_{i})\}_{i=1}^{O}\) and coreset \(\mathcal{D}_{C}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{O}\). For each sample \(\mathbf{x}_{i}\) in coreset \(\mathcal{D}_{C}\), we aim to optimize \(\mathbf{\psi_{i}}\), _i.e._,
\[\mathbf{\psi_{i}}^{*}=\operatorname*{arg\,min}_{\mathbf{\psi_{i}}}\|f(\mathbf{x}_{i})-f( \mathcal{R}(\mathbf{\psi_{i}}))\|_{2}, \tag{9}\]
where \(f(\cdot)\) denotes the feature extraction function. In practice, we use LPIPS [13] to calculate image differences in the feature space.
To optimize \(\mathbf{\psi_{i}}\), we are inspired by attribute descent [15] and use an adapted version for obtaining digital twins, _i.e._, coordinate descent [14] for per-image optimization. Specifically, we achieve this goal iteratively. Initially, at epoch \(0\), we have
\[\mathbf{\psi_{i}}^{0}=[\mu_{1}^{0},\cdots,\mu_{K}^{0}]. \tag{10}\]
At epoch \(j\) and iteration \(k\), we iteratively optimize a single variable \(\mu_{k}^{j}\),
\[\mu_{k}^{j}=\operatorname*{arg\,min}_{z\in S_{k}}\|f(\mathbf{x}_{i})-f(\mathcal{R }(\mathbf{\psi_{i}^{j}}))\|_{2}, \tag{11}\]
where
\[\mathbf{\psi_{i}^{j}}=[\mu_{1}^{j},\cdots,\mu_{k-1}^{j},z,\mu_{k+1}^{j-1},\cdots, \mu_{K}^{j-1}], \tag{12}\]
and \(S_{k},k=1,...,K\) defines the search space for \(\mu_{k}\). For example, the search space for the azimuth is between \(0^{\circ}\) and \(330^{\circ}\) in \(30^{\circ}\) intervals.
In this paper, an iteration is defined as the duration for which a single attribute undergoes coordinate descent optimization. An epoch is the duration in which all attributes undergo one descent round. In this algorithm, each iteration performs a greedy search for the optimized value of an attribute while the values of the other attributes are fixed. Therefore, each iteration finds the value for a single attribute, and an epoch gives values for the entire attribute vector. In our experiments, the entire optimization process usually converges in 2 epochs.
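The per-image optimization of Eqs. (10)-(12) can be summarized by the schematic sketch below, assuming a black-box `render` function (the Unity-Python interface) and a perceptual distance `lpips_dist`; both callables, as well as the epoch count, are placeholders for the actual interfaces.

```python
def coordinate_descent(target_img, psi0, search_spaces, render, lpips_dist, epochs=2):
    """Greedily optimize one attribute at a time (Eqs. 10-12).

    psi0:          initial attribute vector [mu_1, ..., mu_K].
    search_spaces: list S_k of candidate values for each attribute.
    """
    psi = list(psi0)
    for _ in range(epochs):                     # one epoch = one pass over all attributes
        for k, S_k in enumerate(search_spaces):
            best_val, best_loss = psi[k], float("inf")
            for z in S_k:                       # greedy search over the k-th search space
                cand = psi[:k] + [z] + psi[k + 1:]
                loss = lpips_dist(target_img, render(cand))
                if loss < best_loss:
                    best_val, best_loss = z, loss
            psi[k] = best_val                   # fix mu_k before moving to the next attribute
    return psi
```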
### Attribute-guided Augmentation
From the previous step, we get a set of core digital twins \(\mathcal{D}_{S}{}^{*}=\{(\mathcal{R}(\mathbf{\psi_{i}}),y_{i})\}_{i=1}^{O}\). Though it can be used to train models directly, its size \(O\) can be small since we performed coreset selection. To increase the dataset size to a desired number \(N\), we perturb the optimized attribute values to introduce diversity into the training set. We randomly pick an optimized attribute vector \(\mathbf{\psi_{i}^{*}}\) and apply a multivariate Gaussian perturbation, denoted as \(\mathbf{\alpha_{j}}\sim\mathcal{N}(\mathbf{\psi_{i}^{*}},\Sigma)\), where \(\Sigma\) is a pre-defined diagonal covariance matrix, and \(i\) is sampled from a uniform distribution from 1 to \(O\), _i.e._, \(i\sim\mathcal{U}(1,O)\). To achieve a varied dataset resembling the digital twins, we need to strike a balance with the variance. Typically, we opt for a variance that keeps most of the values within a 15% deviation from the mean. Given such a process, we apply augmentation multiple times to sample our final training set \(\mathcal{S}\) (DT set) until it reaches size \(N\), _i.e._,
\[\mathcal{S}=\{(\mathcal{R}(\mathbf{\alpha_{j}}),y_{j})\}_{j=1}^{N}, \tag{13}\]
where \(\mathbf{\alpha_{j}}\sim\mathcal{N}(\mathbf{\psi_{i}^{*}},\Sigma)\), and \(i\sim\mathcal{U}(1,O)\).
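A minimal sketch of this augmentation step is shown below. The per-attribute standard deviation, chosen here as 5% of the attribute value so that roughly three standard deviations stay within the 15% band mentioned above, is an illustrative assumption.

```python
import numpy as np

def augment_attributes(psi_stars, N, rel_std=0.05, seed=0):
    """Sample N vectors alpha_j ~ N(psi_i*, Sigma) with i ~ U(1, O) (Eq. 13).

    psi_stars: (O, K) array of optimized core-set attribute vectors.
    rel_std:   relative std defining the diagonal covariance Sigma.
    """
    rng = np.random.default_rng(seed)
    O, _ = psi_stars.shape
    samples = []
    for _ in range(N):
        i = rng.integers(O)                      # pick a core digital twin uniformly
        sigma = rel_std * np.abs(psi_stars[i])   # diagonal covariance (illustrative choice)
        samples.append(rng.normal(psi_stars[i], sigma))
    return np.stack(samples)                     # each row is rendered into one training image
```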
## Experiment
### Experiment Details
**Task setting**. The videos are split such that 40% of the data is to be used for target training. We are tasked to create a rendered training set by adapting to the given unlabeled target training videos, train a model on the rendered data, and report the task test accuracy on the remaining 60% of the data, named as the target test set.
**Task model**. Once we get the optimized rendered data, we train the ARC model using the detection-tracking-counting framework described in Nguyen et al. (2022). The pseudo-labeling model is trained on data rendered with random attributes. More details are in the supplementary material.
**Evaluation metrics**. Our model evaluation entails aligning the dual outputs with the ground truth. A prediction is deemed accurate if and only if both the predicted _label_ and _its corresponding timeframe_ are correct. Specifically, we have precision, signifying the ratio of correct predictions to total predictions, and recall, reflecting the ratio of correct predictions to total ground truth. The culmination of these metrics is represented by an F1 score. Furthermore, we employ the Fréchet Inception Distance (FID) [17] to assess the domain gap between the generated rendered set and the target set.
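For clarity, the sketch below shows how precision, recall, and F1 can be computed from matched (label, time) predictions; the matching tolerance `time_tol` is an assumed value for illustration rather than the official evaluation constant.

```python
def arc_f1(preds, gts, time_tol=1.0):
    """preds/gts: lists of (label, time) pairs; time_tol is an assumed tolerance (seconds)."""
    matched = [False] * len(gts)
    correct = 0
    for label, t in preds:
        for j, (g_label, g_t) in enumerate(gts):
            # correct iff both the label and its timeframe match an unmatched ground truth
            if not matched[j] and g_label == label and abs(t - g_t) <= time_tol:
                matched[j] = True
                correct += 1
                break
    precision = correct / len(preds) if preds else 0.0
    recall = correct / len(gts) if gts else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```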
**Methods in comparison**. In this study, we conduct a comprehensive comparison between our proposed DtTrain
Figure 4: The pipeline to obtain the digital twins. Given a 2D real image, we build its digital twin by first (a) acquiring the 3D assets via pseudo labeling. (b) Upon that, we render the selected 3D asset in terms of a vector of attributes, which will be iteratively optimized by coordinate descent using the image-wise difference between the rendered image and the target real image.
framework and two established methods commonly employed for generating training sets through the graphic engine, namely LTS Ruiz, Schulter, and Chandraker (2019) and attribute descent Yao et al. (2022). These methods fall under the category of attribute distribution optimization, as they necessitate the prior definition of attribute distributions and subsequent parameter optimization.
Several approaches for acquiring digital twins exist. Within the DtTrain framework, we compare these approaches with the coordinate descent algorithm employed in our research. Specifically, digital twin acquisition can also be achieved through the utilization of the differentiable renderer, known as the soft rasterizer Liu et al. (2019). Additionally, we incorporate neural rendering techniques to produce digital twins, including InfoGAN Chen et al. (2016) and latent diffusion models (LDM) Rombach et al. (2022). For a comprehensive understanding of the comparative methods, we provide intricate details in the supplementary material.
### Main Results
**The superiority of DtTrain over random attributes, and existing dataset synthesis methods**. We compare the proposed DtTrain to random attributes under two settings. As shown in Fig. 5, in the first setting, we only use the rendered data to train an ARC model. In the second setting, we use the rendered data combined with the pseudo-label real data to train an ARC model. Under both settings, we observe a notable superiority of DtTrain over random attributes.
Table 2 displays evaluation results for various optimization methods, categorized by two training set creation pipelines. Notably, DtTrain outperforms existing attribute distribution optimization methods. For example, when creating digital twins with coordinate descent, the created training set surpasses the distribution optimization technique attribute descent by 2.6% F1 score.
Our understanding of the difference between attribute distribution optimization and our proposed pipeline aligns with the observed results. Digital twin creation involves image-to-image representations, with each product characterized by multiple augmented attribute vectors, yielding a more intricate distribution. This highlights a key distinction: the distributional assumption in attribute distribution optimization limits its simulation potential, while our approach with multiple digital twins fosters a richer distribution. From a more intuitive perspective, this image-to-image alignment eliminates interference from diverse backgrounds in digital twin creation, which is especially beneficial in object-centric tasks like ARC where unexpected perceptual noise can disrupt results.
**Enhanced accuracy through joint training with pseudo-labeled target data**. As depicted in Fig. 5, our findings
\begin{table}
\begin{tabular}{c|cc|cc|cc|cc} \hline \hline \multirow{2}{*}{Loss} & \multicolumn{2}{c|}{Bag} & \multicolumn{2}{c|}{Box} & \multicolumn{2}{c|}{Bottle} & \multicolumn{2}{c}{All} \\ \cline{2-9} & FID\(\downarrow\) & F1\(\uparrow\) & FID\(\downarrow\) & F1\(\uparrow\) & FID\(\downarrow\) & F1\(\uparrow\) & FID\(\downarrow\) & F1\(\uparrow\) \\ \hline SSIM & 172.30 & 47.62 & 134.32 & 41.00 & **115.42** & **51.43** & 133.51 & 42.21 \\ \hline StyleLoss & 182.24 & 35.71 & **123.66** & **44.44** & 134.35 & 30.30 & 135.23 & 42.76 \\ \hline LPIPS & **167.64** & **55.56** & 128.49 & 43.32 & 122.95 & 48.48 & **128.11** & **43.43** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of different loss functions. Notations and evaluation metrics are the same as in the previous table.
underscore the substantial advancements achieved via joint training in comparison to employing solely rendered data for training purposes. Notably, upon combining the DT set with pseudo-labeled real data, a 2.21% enhancement in F1 score is observed, signifying the pronounced efficacy of this joint approach over the exclusive use of the DT set.
**The superiority of coordinate descent over existing digital twin creation methods**. In Fig. 6, the risk curve spans 150 iterations, illustrating the convergence patterns of the optimization methods, including the soft rasterizer, InfoGAN-based and LDM-based neural rendering, and coordinate descent. Notably, coordinate descent exhibits stable convergence with a distinctive step-like descent, in contrast to gradient-based behaviors, which are usually not stable. This distinction stems from coordinate descent's search strategy. These observations align with our previous assessment in Table 2. Coordinate descent consistently outperforms the others, in terms of both domain gap and the final ARC accuracy.
**The superiority of FPS over random sampling**. Coreset selection in digital twin creation is crucial, capturing representative images from a large target image pool, thereby reducing the number needed for creating digital twins. To validate the effectiveness of our coreset selection method FPS, we compare it to random selection. In the latter, an equivalent number of images are randomly chosen from the target set. Results in Table 3 highlight FPS's clear superiority. It exhibits higher task accuracy and a smaller domain gap, outperforming random selection by 1.3% in F1 and 4.93 in FID, respectively.
**The superiority of LPIPS**. In our experiment, we utilize LPIPS as our loss function for digital twin creation. To gain deeper insights, we explore alternative losses: SSIM [23], and StyleLoss [14]. Results in Table 4 show LPIPS as the superior choice for domain dissimilarity and task accuracy.
**Impact of different attributes**. In our film scene, we group the attribute vector into three categories, the camera (distance, height), orientation (in-plane rotation, azimuth), and light (intensity). The ablation of each attribute can reveal their impact on task accuracy. In Fig. 7, notably, we observe camera location (distance, height) holds a dominant role in domain dissimilarity. Distant objects exist in object-centric tasks, leading to lower resolution and quality. Lighting and orientation exert similar influences, with a slight orientation advantage due to viewpoint distribution bias.
**Parameter study**. By default, we select 8 target images per product; thus the size of \({\mathcal{D}_{S}}^{*}\) is \(O=8\times 116\). Increasing the coreset size \(O\) enhances target distribution representation but escalates attribute optimization time.
Figure 8: The parameter study of the coreset size (**Left**) and training size (**Right**). It exhibits the task accuracy trend with the increment of the indicated parameter.
Figure 6: Convergence study of digital twin creation methods. We select 4 sample products randomly from the dataset, which include 2 boxes (tablets and toothbrush), 1 bottle (perfume), and 1 bag (detergent). We plot the loss trend for each product during the optimization process using different methods: soft rasterizer, coordinate descent, and InfoGAN-based and LDM-based neural rendering. Among these methods, coordinate descent stands out with its distinct optimization curve, characterized by a step-like descent trend. This unique behavior is attributed to coordinate descent being a search algorithm.
Figure 7: Ablation study on attributes. We divide the attribute vector into camera (height and distance), orientation (azimuth and in-plane rotation), and light (intensity). By separately analyzing the role of each attribute group in optimization and comparing it to the full optimization, we can assess their individual impact. The compromise of task accuracy (F1) serves as an indicator of the relative importance of the isolated attribute group.
Our experiment, depicted in the left of Fig. 8, reveals an evident trend, _i.e._, larger coreset sizes enhance task accuracy. However, larger coreset sizes also increase the time needed for building a training set. This highlights a trade-off, emphasizing the need to balance accuracy near saturation with operational efficiency when selecting an optimal coreset size. We observe that beyond a coreset size of 8, the accuracy improvement plateaus. Thus, by default, 8 target images per product are selected.
In the experiment, the default training set size is \(N=22{,}000\). We also test different training set sizes relative to this default. Results, shown on the right of Fig. 8, clearly indicate that increasing the training size enhances task accuracy. However, the accuracy improvement plateaus once the size reaches the default training set size.
**Numerically understanding real ARC bias.** The distribution of viewpoints is illustrated in Fig. 9, providing valuable insights into inherent biases. By establishing a correlation between viewpoint distribution bias and the underlying shape characteristics, we can intuitively elucidate the rationale behind these patterns. For instance, the unimodal distribution observed in box-like tablet products indicates a customer preference for holding the product from a specific angle. In comparison, the nearly uniform distribution of body wash can be attributed to the cylindrical symmetry of its bottle-like shape. A bimodal distribution emerges for bag-like objects such as chips, showing customers' viewpoints concentrated at the front and back of the bag.
## Conclusion
In conclusion, the automation of the checkout process in smart retail environments has garnered significant attention. However, the scarcity of annotated training data has posed a challenge. To overcome this limitation, we introduced a novel approach utilizing product 3D models for data generation through graphic engine rendering. This approach, termed DtTrain, automatically edits the rendered image content in a graphic engine to generate training data that closely resembles the real ARC scenario. In addition, using viewpoint as an example, we show that our method enables an understanding of the dataset (user) bias by computing the attribute distribution of given product categories. This article
Figure 9: Viewpoint distribution visualization for box, bottle, and bag retail products. Viewpoint distributions are learned by DtTrain. We select 4 products per class, where blue samples indicate in-plane rotation under \(30^{\circ}\) and orange above. Each sample, represented by a point surrounding the object, corresponds to attributes learned by DtTrain. Remarkably, the distribution exhibits a non-uniform pattern, resulting in noticeable biases within each class. Our proposed method aims to model these intricate attribute patterns to accurately simulate the biases such that the domain gap is minimized.
demonstrates the benefit of training data optimization, and establishes a promising pathway for advancing automated checkout systems in smart retail through robust and representative training data.
## Acknowledgement
This work was partially done while Yue was an intern at NVIDIA, with the support of NVIDIA computing resources. This work was also supported in part by the ARC Discovery Project (DP210102801), Oracle Cloud credits, and related resources provided by Oracle for Research.
## Appendix A Related Works
**AutoRetail checkout** has been advanced through multi-modality, initially relying on barcodes Sriram et al. (1996), a technology that remains widely used and popular today. Recent developments in deep learning have sparked a shift in ARC research, with a growing emphasis on computer vision approaches. Notably, researchers have explored the utilization of VGG-16 and Inception V3 layers as feature descriptors for image classification of various products Geng et al. (2018); Chong et al. (2016). Additionally, deep learning pipelines based on state-of-the-art object detectors have been proposed for product recognition Tonioni et al. (2018). In contrast, our focus diverges from previous work that primarily emphasized model design and tuning. Instead, we concentrate on rendered image creation and optimization, aligning with the downstream task and target domain.
**Training with rendered data.** Data rendering via graphic engine has emerged as a cost-effective alternative to real labeled data, which can often be expensive to obtain. Various studies have explored the integration of rendered data alongside real data in the training set to improve model accuracy Yao et al. (2020); Zheng et al. (2017). Additionally, some researchers have delved into training models exclusively on rendered data Kar et al. (2019). In the context of this paper, we concentrate on leveraging rendered data exclusively to develop automatic retail systems.
**Existing training set optimization methods.** Leveraging rendered data offers the advantages of increased label availability and enhanced flexibility in the training process. However, it also introduces challenges, most notably the domain gap that exists between the source and target domains, thereby reducing task accuracy. Many existing studies employ reinforcement learning (RL) to improve task accuracy by optimizing rendering attributes Kar et al. (2019); Devaranjan et al. (2020); Ruiz et al. (2019); Xue et al. (2021). For instance, Kar _et al._ use policy gradients to optimize scene layout. In comparison Yao _et al._ formulate attribute optimization as a search problem due to challenges in obtaining attribute gradients, proposing a pruned greedy search called attribute descent Yao et al. (2020). However, these existing methods still require the manual definition of attribute distributions, which involves significant human effort. In this paper, we introduce the DtTrain framework, which minimizes the need for human-designed attribute distributions while achieving higher task accuracy. With DtTrain, the burden of distribution design is reduced, providing a more efficient and effective approach to attribute optimization.
## Appendix B Film Scene
We consider that the content disparity in images arises from several primary factors. In this context, a rendered image is created through a renderer, which is conditioned on attributes including azimuth, in-plane rotation, lighting intensity, lighting angle, camera distance, and height. These attributes are defined in the work by Yao et al. (2022) and are explained in Fig. 10. This supplementary figure complements the information presented in Fig. 1, providing a more detailed understanding of the physical significance of these attributes.
We first constrain the rotation attributes, namely in-plane rotation, azimuth, and light direction, to fall within the range of \(0^{\circ}\) to \(360^{\circ}\). Subsequently, the light intensity is deliberately designed to vary between 0 and 100. Here, a value of 0 signifies complete darkness, while a value of 100 corresponds to full illumination. Camera height and distance are likewise allowed to vary between 0 and 100, without a more restrictive limit. Since coordinate descent is a search-based technique for obtaining digital twins, we define search spaces \(\mathbf{S}=[S_{1},\cdots,S_{K}]\) for each attribute in the attribute list \(\mathbf{\psi}=[\psi_{1},\cdots,\psi_{K}]\). For example, the search space of the rotation attributes covers the range of \(0^{\circ}\) to \(360^{\circ}\) in \(30^{\circ}\) intervals. For camera height and camera distance, the search space covers the range of \(0\) to \(100\) in intervals of \(10\). In the context of coordinate descent, a balance between interval size and search space emerges, _i.e._, small intervals correspond to expansive search spaces, while larger intervals narrow down exploration. To ensure experimental fairness, we maintain a consistency akin to Yao et al. (2022).
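As a concrete illustration, the search spaces described above can be written as follows; the step size used for light intensity is an assumption, since only its range is specified in the text.

```python
# Illustrative search spaces S_k for coordinate descent (values follow the ranges above).
search_spaces = {
    "azimuth":           list(range(0, 360, 30)),   # degrees
    "in_plane_rotation": list(range(0, 360, 30)),   # degrees
    "light_direction":   list(range(0, 360, 30)),   # degrees
    "light_intensity":   list(range(0, 101, 10)),   # 0 = darkness, 100 = full illumination (step assumed)
    "camera_height":     list(range(0, 101, 10)),
    "camera_distance":   list(range(0, 101, 10)),
}
```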
## Appendix C Existing Optimization Methods
We compare our methods with various existing optimization strategies from previous research in the experiment. As mentioned in our experiment part, we compare DtTrain with existing training set optimization methods, and we compare
Figure 10: Controllable attributes defined in our film scene. Inherited from the film scene proposed by Yao et al. (2022), we have the attributes that include in-plane rotation, azimuth, camera distance, camera height and lighting.
coordinate descent with existing digital twin creation methods.
### Existing Training Set Optimization Methods
We utilize two well-established techniques frequently employed for creating training datasets using the graphical engine. These techniques are known as LTS Ruiz et al. (2019) and attribute descent Yao et al. (2022). These approaches belong to the realm of attribute distribution optimization, as they require the initial specification of attribute distributions followed by subsequent optimization of parameters.
**Learning to Simulate (LTS)**Ruiz et al. (2019) is a typical distribution-based reinforcement learning approach, where the controllable attributes are optimized by maximizing the reward accumulated in the downstream task. Despite its simple design, the end-to-end learning architecture makes it difficult to combine with complicated downstream tasks. In our experiment, we regard it as a benchmark among conventional distribution optimization approaches.
**Attribute descent**Yao et al. (2022) is a gradient-free search strategy for obtaining an optimized training set. During the search stage, it can significantly reduce the search space of the distribution parameters. Furthermore, attribute descent is guaranteed to find a (possibly sub-optimal) solution by iterating over a single element while keeping the others frozen.
### Existing Digital Twin Creation Methods
For both differentiable rendering and neural rendering, we use the same loss as coordinate descent, LPIPS.
**Differentiable rendering** refers to a renderer that preserves gradients in the rendering function, shown in Fig. 11 top. In our experiment, we use a differentiable renderer called soft rasterizer Liu et al. (2019), which is implemented in Pytorch3D Ravi et al. (2020). In the soft rasterizer, the gradient can directly backpropagate through the renderer, thereby enabling the editing of attributes to create digital twins.
**Neural rendering** can be seen as a similar form of differentiable rendering, as it also enables gradient-based optimization. This approach performs backpropagation through a neural network that serves as an imitator of a non-differentiable renderer. It was originally proposed by Shi et al. (2019) to create digital twins for face images. Motivated by their method, we adapt it to create digital twins for product images. Specifically, as shown in the bottom of Fig. 11, the whole process involves two stages. In the first stage, we train the conditional generative neural network using the attributes and their corresponding rendered images. In the second stage, we freeze the network while backpropagating gradients through the neural renderer, thereby enabling the editing of attributes to create digital twins.
In the experiment, we utilize two typical conditional generative networks: InfoGAN Chen et al. (2016) and latent diffusion model (LDM) Rombach et al. (2022). By modifying specific tailored latent variables, the InfoGAN can generate product images with desired attributes. Likewise, LDM is widely popular due to its strong representation capabilities and high-quality outputs. In our experiment, we adopt the pre-trained stable diffusion model from Rombach et al. (2022) and fine-tune it on our created retail training set. As the original architecture only supports text prompts, we make adjustments by removing the CLIP encoder Radford et al. (2021), and replacing it with a lightweight, zero-initialized attribute encoder. For LDM training, we train the entire U-Net Ronneberger et al. (2015) along with the attribute encoder to adapt the model to our retail products.
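A minimal PyTorch sketch of the second stage is shown below, assuming a frozen conditional generator `G` (the neural imitator of the renderer) and a differentiable perceptual loss `lpips`; both are placeholders, and the step count and learning rate are illustrative.

```python
import torch

def optimize_attributes(target_img, G, lpips, psi_init, steps=150, lr=1e-2):
    """Stage 2: freeze the neural renderer G and optimize the attribute vector."""
    psi = psi_init.clone().detach().requires_grad_(True)
    for p in G.parameters():
        p.requires_grad_(False)                 # generator weights stay frozen
    opt = torch.optim.Adam([psi], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = lpips(G(psi.unsqueeze(0)), target_img.unsqueeze(0))
        loss.backward()                         # gradients flow through G into psi
        opt.step()
    return psi.detach()
```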
Our experimental results demonstrate that coordinate descent consistently outperforms both differentiable rendering and neural rendering. While both differentiable rendering and neural rendering involve the iterative update of attributes through gradient descent, these gradient-descent-based approaches, in contrast to coordinate descent, are susceptible to becoming trapped in local minima. To illustrate, consider a nearly symmetrical retail product, such as a box-shaped item, where viewpoint local minima exist on both the product's front and rear sides. Despite the global minimum residing exclusively on the product's front side, gradient descent-based methods can easily become "trapped" in the local minimum of the rear side. In comparison, coordinate descent can escape such entrapment due to its inherent search-based nature.
## Appendix D Task Model
We provide a comprehensive overview of the task model employed in our experiment for evaluating rendered retail data. The ARC pipeline, as encapsulated by Naphade et al. (2022), unfolds as three distinct stages: detection, tracking, and counting (DTC).
Initiating with the detection stage, our objective revolves around the identification and classification of target products within video content. To this end, we harness the power
Figure 11: Existing two methods for creating digital twins, differentiable rendering and neural rendering. For the differentiable renderer, the gradient can directly backpropagate through the renderer to optimize the attributes. For the neural renderer, we create a conditional neural network to emulate differentiable renderers.
of YOLOv5 [22], which is pretrained on COCO [14] and further fine-tuned using our rendered data. The outcome manifests as estimated bounding boxes, and the classification process culminates in an ensemble of three models: Res2Net [1], Swin-Transformer [15], and RepVGG [16].
Transitioning to the tracking stage, our focus lies in establishing continuity in tracking identical products across frames. This mitigates the risk of count duplication. We use the tracking algorithm ByteTrack [14], which aptly achieves track matching and effectively circumvents the challenge posed by object occlusion. Finally, in the counting stage, the goal is to predict each product uniquely, avoiding any redundancy. To this end, trajectory counting is employed, with each trajectory assigned a time prediction based on its most confident detection.
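As a schematic illustration of the counting stage, the sketch below reduces each trajectory to a single prediction using its most confident detection; the track data structure is assumed for illustration.

```python
def count_from_tracks(tracks):
    """Reduce each trajectory to one (label, time) prediction.

    tracks: dict mapping track_id -> list of (class_id, confidence, timestamp) detections.
    """
    predictions = []
    for track_id, dets in tracks.items():
        class_id, _, timestamp = max(dets, key=lambda d: d[1])  # most confident detection
        predictions.append({"track": track_id, "label": class_id, "time": timestamp})
    return predictions
```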
|
2301.05439 | Non-Hermitian physics of levitated nanoparticle array | The ability to control levitated nanoparticles allows one to explore various
fields of physics, including quantum optics, quantum metrology, and
nonequilibrium physics. It has been recently demonstrated that the arrangement
of two levitated nanoparticles naturally realizes the tunable nonreciprocal
dipole-dipole interaction. Motivated by this development, we here propose and
analyze an array of levitated nanoparticles as an ideal platform to study
non-Hermitian physics in a highly controlled manner. We employ the non-Bloch
band theory to determine the continuum bands of the proposed setup and
investigate the non-Hermitian skin effect therein. In particular, we point out
that the levitated nanoparticle array exhibits rich dynamical phases, including
the dynamically unstable phase and the unconventional critical phase where the
spectral singularity persists over a broad region of the controllable
parameters. We also show that the long-range nature of the dipole-dipole
interaction gives rise to the unique self-crossing point of the continuum band. | Kazuki Yokomizo, Yuto Ashida | 2023-01-13T08:47:52Z | http://arxiv.org/abs/2301.05439v2 | # Non-Hermitian Physics of Levitated Nanoparticle Array
###### Abstract
The ability to control levitated nanoparticles allows one to explore various fields of physics, including quantum optics, quantum metrology, and nonequilibrium physics. It has been recently demonstrated that the arrangement of two levitated nanoparticles naturally realizes the tunable nonreciprocal dipole-dipole interaction. Motivated by this development, we here propose and analyze an array of levitated nanoparticles as an ideal platform to study non-Hermitian physics in a highly controlled manner. We employ the non-Bloch band theory to determine the continuum bands of the proposed setup and investigate the non-Hermitian skin effect therein. In particular, we point out that the levitated nanoparticle array exhibits rich dynamical phases, including the dynamically unstable phase and the unconventional critical phase where the spectral singularity persists over a broad region of the controllable parameters. We also show that the long-range nature of the dipole-dipole interaction gives rise to the unique self-crossing point of the continuum band.
A levitated nanoparticle is a laser trapped nanoscale dielectric particle smaller than wavelength of light [1]. Recent experimental developments have allowed one to cool a levitated nanoparticle to ultracold temperatures [2; 3; 4; 5; 6; 7; 8] and offered unique opportunities to study quantum mechanics of mesoscopic objects [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Additionally, previous studies demonstrated the potential of a levitated nanoparticle to explore various fields of physics, such as nonequilibrium physics [21; 22; 23; 24; 25; 26; 27] and quantum sensing [28; 29; 30; 31; 32; 33; 34; 35; 36; 37]. Remarkably, recent experimental studies have shown the possibility of extending these systems to multi-nanoparticle setups [38; 39; 40; 41; 42; 43]. In particular, Ref. [43] has reported a realization of an on-demand assembly of levitated nanoparticles, in which optical tweezers are used to trap and arrange the nanoparticles one by one.
On another front, recent years have witnessed remarkable advances in our understandings of non-Hermitian systems, i.e., a class of nonequilibrium systems that can be effectively described by a non-Hermitian operator [44]. While non-Hermitian physics has been widely investigated in several fields of quantum science, such as ultracold atoms [45; 46; 47; 48; 49] and photonics [50; 51; 52; 53], its idea has also found numerous applications in classical systems realized in optics [54; 55; 56; 57], mechanics [58; 59; 60; 61], and electrical circuits [62; 63; 64; 65]. These previous studies uncovered rich non-Hermitian phenomena that have no counterparts to Hermitian systems. For instance, a one-dimensional (1D) tight-binding model with asymmetric hopping amplitudes exhibits the non-Hermitian skin effect [66; 67; 68], where the bulk eigenstates are localized at open boundaries of the system, leading to the extreme boundary sensitivity of the eigenvalue.
In this Letter, we propose and analyze a 1D levitated nanoparticle array as an ideal platform to study previously unexplored regimes of non-Hermitian physics in a highly controlled manner. A prominent feature here is that there exists the tunable nonreciprocal dipole-dipole interaction between levitated nanoparticles, which is induced by the nonreciprocal interference originating from phase difference between the trapping lasers [41]. The proposed system then realizes a 1D tight-binding model with arbitrarily tunable asymmetric hopping amplitudes that have possibly negative signs and long-range dependence. This high controllability allows one to explore the whole parameter region of non-Hermitian systems, thus opening the possibility to uncover the full potential of non-Hermitian phenomena. In this respect, the proposed setup should be contrasted to the previous non-Hermitian platforms, where studies were restricted to severely limited parameter regions of the models due to the difficulties of realizing tunable nonreciprocal interactions.
To determine the continuum bands and the dynamical phase diagram of the levitated nanoparticle array, we invoke the non-Bloch band theory [66; 69; 70; 71; 72; 73], a recently developed powerful tool to investigate models featuring the non-Hermitian skin effect. The non-Bloch band theory allows for calculating the asymptotic eigenvalues of the systems with open boundary conditions in the limit of a large system size. This makes contrast to the conventional Bloch band theory, where the energy band reproduces the eigenvalues under periodic boundary conditions.
On the basis of this theoretical framework, we find that the levitated nanoparticle array exhibits rich dynamical phases, including the unconventional critical phase and the dynamically unstable phase. In the former, a remarkable feature is that the non-Hermitian degeneracy of the continuum bands known as the spectral singularity appears without fine-tuning and persists over a broad region of the parameters. The key ingredients of the latter are negative interparticle couplings, which were difficult to realize in the existing non-Hermitian platforms. Moreover, the proposed system can naturally realize the long-range hopping amplitudes originating from the dipole-dipole interaction provided that the particle distance is
judiciously controlled. We show that this long-range nature leads to the unique self-crossing point of the continuum band, which corresponds to the singularity of the generalized Brillouin zone.
To be concrete, we consider a 1D array of the trapped levitated nanoparticles as shown in Fig. 1. The particles are equally spaced at the interval \(d_{0}\), and all the particles have the mass \(m\). Let \(\lambda\) and \(P\) denote the wavelength and the power of all the trapping lasers, respectively. Furthermore, we assume that the motion of the particles along the plane perpendicular to the optical axis is frozen.
The interaction between the two particles arises due to the interference between the scattered electromagnetic field and the trapping laser. Since the scattered field acquires the phase \(kd_{0}\) during the propagation, the phase difference between the trapping lasers at the positions of the particles leads to the constructive and destructive interference depending on the propagation direction of the scattered field. It is this spatial asymmetry that renders the interparticle coupling nonreciprocal. Due to the long-range nature of this nonreciprocal dipole-dipole interaction, it is in general necessary to incorporate the couplings that reach up to \(N\)th neighbor particles. Altogether, in the vicinity of the focal plane, the linearized equation of motion of the \(n\)th particle along the \(z\) axis is given by
\[m\ddot{z}_{n}+m\gamma\dot{z}_{n} =-\left(m\Omega^{2}+2\sum_{l=1}^{N}K_{l}\right)z_{n}\] \[+\sum_{l=1}^{N}\left[\left(K_{l}+\bar{K}_{l}\right)z_{n-l}+\left( K_{l}-\bar{K}_{l}\right)z_{n+l}\right]. \tag{1}\]
Here, \(\Omega\) is an intrinsic mechanical frequency of the particle proportional to \(\sqrt{P}\), \(\gamma\) is a friction coefficient, and \(K_{l}\) and \(\bar{K}_{l}\) are the coupling strengths given by
\[\left\{\begin{array}{l}K_{l}=\frac{G}{lk_{0}d_{0}}\cos\left(lk _{0}d_{0}\right)\cos\left(l\Delta\phi\right),\\ \bar{K}_{l}=\frac{G}{lk_{0}d_{0}}\sin\left(lk_{0}d_{0}\right)\sin \left(l\Delta\phi\right),\end{array}\right. \tag{2}\]
where \(G\) has the dimension of a spring constant and is proportional to \(P\), \(\Delta\phi\) is the optical phase difference between the neighbor trapping lasers in the focal plane [Fig. 1], and \(k_{0}\left(=2\pi/\lambda\right)\) is a wavenumber of the trapping laser. One can infer from Eq. (2) that the couplings are long-range because the dipole-dipole interaction is proportional to the inverse of the distance between the particles. We note that the sign of the coupling constants \(K_{l}\) and \(\bar{K}_{l}\) can be controlled by changing the phase difference. We provide the derivation of Eq. (1) in the Supplementary Material [74].
In general, continuum bands of non-Hermitian tight-binding models can be obtained by invoking the non-Bloch band theory, which reproduces the asymptotic eigenvalues under open boundary conditions in the thermodynamic limit. Specifically, the continuum band is calculated from the generalized Brillouin zone spanned by \(\beta\equiv e^{ik}\) for a complex Bloch wavenumber \(k\). We here apply the non-Bloch band theory to the levitated nanoparticle array; throughout this paper, we assume \(\left|K_{N}\right|\neq\left|\bar{K}_{N}\right|\). Substituting \(z_{n}=\psi_{n}e^{i\omega t}\) to Eq. (1), we have the real-space eigenequation as follows:
\[\frac{1}{m}\sum_{l=1}^{N}\left[\left(K_{l}-\bar{K}_{l}\right)\psi _{n+l}+\left(K_{l}+\bar{K}_{l}\right)\psi_{n-l}\right]\] \[+\left(\omega^{2}-i\gamma\omega-\Omega^{2}-\frac{2}{m}\sum_{l=1 }^{N}K_{l}\right)\psi_{n}=0. \tag{3}\]
Importantly, an ansatz of Eq. (3) can be taken as
\[\psi_{n}=\sum_{j=1}^{2N}\left(\beta_{j}\right)^{n}\phi^{(j)}, \tag{4}\]
where \(\beta_{j}\left(=\beta\right)\) is the solution of the characteristic equation given by
\[\frac{1}{m}\sum_{l=1}^{N}\left[\left(K_{l}-\bar{K}_{l}\right) \beta^{l}+\left(K_{l}+\bar{K}_{l}\right)\beta^{-l}\right]\] \[+\left(\omega^{2}-i\gamma\omega-\Omega^{2}-\frac{2}{m}\sum_{l=1 }^{N}K_{l}\right)=0. \tag{5}\]
We note that Eq. (5) is an algebraic equation for \(\beta\) of \(2N\)th degrees. The main result of the non-Bloch band theory is that the condition for the generalized Brillouin zone is obtained from the \(2N\) solutions, and it is given by
\[\left|\beta_{N}\right|=\left|\beta_{N+1}\right| \tag{6}\]
Figure 1: Schematic figure of the levitated nanoparticle array. The distance between the nearest-neighbor particles is \(d_{0}\), and the mass of all the particles is \(m\). All the trapping lasers have the power \(P\) and the wavelength \(\lambda\). We set the phase of the \(n\)th trapping laser in the focal plane to be \(\phi+n\Delta\phi\).
with \(|\beta_{1}|\leq\cdots\leq|\beta_{2N}|\). The trajectories of \(\beta_{N}\) and \(\beta_{N+1}\) form the generalized Brillouin zone on the complex plane, which reveals the essential features of non-Hermitian systems (see, e.g., Refs. [75, 76, 77, 78]). Then, we can calculate the continuum bands by combining Eq. (5) with the generalized Brillouin zone. We note that when \(\bar{K}_{l}=0\), the generalized Brillouin zone reduces to a unit circle, which means that the Bloch wavenumber becomes real.
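As a practical illustration, the condition of Eq. (6) can be checked numerically: for a trial frequency \(\omega\), one builds the degree-\(2N\) polynomial obtained by multiplying Eq. (5) by \(\beta^{N}\), sorts its roots by modulus, and tests whether the two middle moduli coincide. The following NumPy sketch implements this membership test; scanning \(\omega\) over a grid and retaining the values that pass it traces out the continuum bands (the tolerance is illustrative).

```python
import numpy as np

def on_continuum_band(omega, K, Kbar, m, Omega, gamma, tol=1e-3):
    """Test whether a complex frequency omega satisfies Eq. (6).

    K, Kbar: length-N sequences of the couplings K_l and Kbar_l.
    """
    N = len(K)
    coeffs = np.zeros(2 * N + 1, dtype=complex)     # coeffs[p] multiplies beta**p
    for l in range(1, N + 1):
        coeffs[N + l] = (K[l - 1] - Kbar[l - 1]) / m
        coeffs[N - l] = (K[l - 1] + Kbar[l - 1]) / m
    coeffs[N] = omega**2 - 1j * gamma * omega - Omega**2 - 2.0 * np.sum(K) / m
    roots = np.roots(coeffs[::-1])                  # np.roots expects highest degree first
    mods = np.sort(np.abs(roots))
    return abs(mods[N - 1] - mods[N]) < tol * max(mods[N], 1.0)  # |beta_N| = |beta_{N+1}|
```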
We start our analysis from the levitated nanoparticle array with the nearest-neighbor interaction, which corresponds to \(N=1\) in Eq. (1); in the following, we assume \(\gamma>2\Omega\) for the sake of concreteness. From Eq. (6), the generalized Brillouin zone can be given by the circle with the radius \(r=\sqrt{\left|\left(K_{1}+\bar{K}_{1}\right)/\left(K_{1}-\bar{K}_{1}\right) \right|}\). We then have the analytical form of the continuum bands as follows:
\[\omega_{\pm}=\frac{i}{2}\gamma\pm\sqrt{\Omega^{2}+\frac{2}{m}\left(K_{1}- \sqrt{K_{1}^{2}-\bar{K}_{1}^{2}}\cos\theta\right)-\frac{\gamma^{2}}{4}}, \tag{7}\]
where \(\theta\) is a real number. Since each eigenmode contributes to the dynamics through the factor \(e^{-\mathrm{Im}\left(\omega_{\pm}\right)t}e^{i\mathrm{Re}\left(\omega_{\pm}\right)t}\), we can show the dynamical phase diagram of the system depending on \(K_{1}/m\) and \(\bar{K}_{1}/m\) in Fig. 2(a). Figures 2(b)-(d) and (e)-(g) plot the evolutions of the continuum bands along the black and white arrows indicated in Fig. 2(a), respectively.
In the blue-shaded regions of Fig. 2(a), all the particles oscillate with the attenuation because \(\mathrm{Re}\left(\omega_{\pm}\right)\neq 0\) and \(\mathrm{Im}\left(\omega_{\pm}\right)>0\) [Fig. 2(b)]. In contrast, in the red-shaded regions, their motion monotonically vanishes without oscillations because \(\mathrm{Re}\left(\omega_{\pm}\right)=0\) and \(\mathrm{Im}\left(\omega_{\pm}\right)>0\) [Fig. 2(d)]. For these reasons, we term the former (latter) the dynamical phase as the underdamped (overdamped) phase.
Remarkably, we find the broad green-shaded region where the two branches \(\omega_{+}\) and \(\omega_{-}\) coalesce at \(\mathrm{Re}\left(\omega_{\pm}\right)=0\) [Fig. 2(c)], leading to the crossover dynamics between the above two dynamical phases. Such a degenerate point unique to non-Hermitian bands is known as the spectral singularity. We shall term this intermediate regime as the critical phase in the sense that the overdamped behavior eventually sets in after the initial underdamped oscillations. Importantly, the emergence of this critical phase is unique to the present setup with open boundaries because the spectral singularity disappears under periodic boundary conditions. Indeed, after replacing \(\beta\) by \(e^{ik}\)\((k\in\mathbb{R})\) in Eq. (5), one obtains the band, which reproduces the eigenvalues under periodic boundary conditions, and it is qualitatively different from Eq. (7). Thus, the transient phenomena discussed here is supported by the non-Hermitian degeneracy, and the non-Hermitian skin effect plays an essential role to realize the aforementioned critical phase. We note that, as shown in Fig. 2(f), the spectral singularity also appears along the green vertical lines in Fig. 2(a). However, one would need fine-tuning of the parameters in this case as indicated by Figs. 2(e)-(g), where the two continuum bands are recombined across the green line.
In addition to the above phases, we also find the dynamically
Figure 2: Dynamical phase diagram and continuum bands of the levitated nanoparticle array. (a) Dynamical phase diagram exhibiting the underdamped, critical, overdamped, and dynamically unstable phases shown in the blue, green, red, and gray-shaded regions, respectively. The spectral singularity (SS) appears in the green-shaded regions. On the black dashed lines, \(|K_{1}|=|\bar{K}_{1}|\) is satisfied. We set the parameters to be \(\Omega=1\) and \(\gamma=5\). (b)–(g) Evolutions of the continuum bands along the arrows in (a). The magenta and cyan express \(\omega_{+}\) and \(\omega_{-}\), respectively. The numerical values in each panel specify \(\left(K_{1},\bar{K}_{1}\right)\).
unstable phase, as indicated by the gray-shaded region in Fig. 2(a). There, either of the hopping amplitudes, \(K_{1}\pm\bar{K}_{1}\), becomes negative, and the oscillation amplitudes diverge in the long-time limit because the imaginary part of the eigenvalues can take negative values. Physically, this instability originates from the fact that the negative hopping amplitudes cause a force that increasingly pushes the particles away from their equilibrium positions. We emphasize that the dynamically unstable phase discussed here is rather difficult to realize in previous non-Hermitian systems due to the lack of the ability to implement tunable negative coupling strengths. It is worthwhile to mention that, in finite-size systems, the boundary between the overdamped and dynamically unstable phases can be slightly modified [74].
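As a rough numerical illustration of the phase diagram in Fig. 2(a), the sketch below evaluates Eq. (7) on a grid of \(\theta\) and applies a heuristic version of the classification used above (growing, purely damped, oscillatory, or mixed/crossover modes); the thresholds and labeling rule are illustrative simplifications rather than the exact phase boundaries.

```python
import numpy as np

def classify_phase(K1, K1bar, m=1.0, Omega=1.0, gamma=5.0, n_theta=721, eps=1e-9):
    """Heuristic dynamical-phase label for the nearest-neighbor (N = 1) band of Eq. (7)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    root = np.sqrt(complex(K1**2 - K1bar**2))            # may be imaginary if |K1| < |K1bar|
    arg = Omega**2 + (2.0 / m) * (K1 - root * np.cos(theta)) - gamma**2 / 4.0
    omega_plus = 0.5j * gamma + np.sqrt(arg)              # upper branch of Eq. (7)
    bands = np.concatenate([omega_plus, 1j * gamma - omega_plus])  # omega_- = i*gamma - omega_+
    if bands.imag.min() < -eps:
        return "dynamically unstable"                      # some mode grows in time
    oscillatory = np.abs(bands.real) > eps
    if oscillatory.all():
        return "underdamped"
    if not oscillatory.any():
        return "overdamped"
    return "critical (band contains the coalescence point Re(omega) = 0)"
```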
We next investigate how the long-range nature of the couplings can affect the continuum band and the corresponding generalized Brillouin zone of the levitated nanoparticle array; in the following, we neglect the friction for the sake of simplicity. To this end, we assume that the interaction reaches up to the next-nearest-neighbor particles, which corresponds to \(N=2\) in Eq. (1). In Fig. 3, we plot the continuum bands with the positive branch of the square root and the corresponding generalized Brillouin zone at different \(\Delta\phi\). We note that the black dashed curves in Figs. 3(d)-(f) indicate the conventional Brillouin zone formed by \(\beta\equiv e^{ik}\;\;(k\in\mathbb{R})\).
One can see from Figs. 3(d) and (f) that the generalized Brillouin zone with \(N=2\) forms a skewed closed curve with the cusps, at which it becomes indifferentiable, while the generalized Brillouin zone with \(N=1\) is merely a circle. Importantly, the cusps correspond to the self-crossing points of the continuum band [Fig. 3(a) and (c)] [79]. Thus, the long-range nature of the nonreciprocal interaction can lead to these unconventional band structures. Meanwhile, at \(\Delta\phi=\pi/2\), the generalized Brillouin zone becomes the unit circle independently of \(N\) as shown in Fig. 3(e), where the non-Hermitian skin effect disappears. Accordingly, there are no self-crossing points of the continuum band as shown in Fig. 3(b).
In summary, we propose and analyze the levitated nanoparticle array as an ideal platform to study new realms of non-Hermitian physics in a highly controlled manner. We show that the system exhibits the unconventional critical phase, where the spectral singularity originating from the non-Hermitian skin effect persists over a broad region of the controllable parameters. We also point out that the tunable dipole-dipole nonreciprocal interaction in the proposed setup allows for extremely nonreciprocal hopping amplitudes with possibly negative signs, which result in the dynamical instability. We finally reveal that the long-range nature of the nonreciprocal couplings further enriches the non-Hermitian band structures, leading to the cusps of the generalized Brillouin zone and the self-crossing points of the continuum band.
Several open questions remain for future studies. First, in a levitated nanoparticle array, the continuum bands can be experimentally observed by measuring the power spectral density. Hence, it should be possible to directly observe the spectral singularity and the correspondence between the cusps of the generalized Brillouin zone and the self-crossing points of the continuum band.
Second, besides the model discussed here, a levitated nanoparticle array allows one to realize various non-Hermitian tight-binding models with arbitrary parameters, thanks to its high controllability. For instance, it provides an ideal setup to realize the non-Hermitian Su-Schrieffer-Heeger model [66], where rich phenomena including a topological phase transition are expected to
Figure 3: Continuum bands and generalized Brillouin zones of the levitated nanoparticle array at different \(\Delta\phi\). (a)–(c) The continuum bands with the positive branch of the square root, and (d)–(f) the corresponding generalized Brillouin zones are shown. The red (blue) curves indicate the results for \(N=1\) (\(N=2\)). In (d)–(f), the black dashed curve expresses the conventional Brillouin zone spanned by \(\beta\equiv e^{ik}\;\;(k\in\mathbb{R})\). The system parameters are set to be \(\lambda=1.064\times 10^{-6}\;\mathrm{m},d_{0}=10^{-5}\;\mathrm{m},\Omega=10\; \mathrm{s}^{-1}\), and \(G/\left(\mathit{mkd}_{0}\right)=1\;\mathrm{s}^{-2}\).
be observed.
Third, it merits further study to examine nonlinear effects, which should play a crucial role in the dynamically unstable phase. While the asymmetric interaction can exponentially amplify the oscillation strengths in short-time regimes, we expect that this amplification is eventually balanced by the nonlinear suppression. Finally, it is interesting to extend the present analysis to quantum regimes of the levitated nanoparticle array, which should be within the experimental reach in view of recent developments of cooling levitated nanoparticles to ultracold temperatures. In particular, understanding a role of quantum correlation and coherence between levitated nanoparticles in the array remains as an intriguing open question.
K.Y. was supported by JSPS KAKENHI through Grant No. JP21J01409. Y.A. acknowledges support from the Japan Society for the Promotion of Science through Grant No. JP19K23424.
|
2310.05378 | Transcending the Attention Paradigm: Representation Learning from
Geospatial Social Media Data | While transformers have pioneered attention-driven architectures as a
cornerstone of language modeling, their dependence on explicitly contextual
information underscores limitations in their abilities to tacitly learn
overarching textual themes. This study challenges the heuristic paradigm of
performance benchmarking by investigating social media data as a source of
distributed patterns. In stark contrast to networks that rely on capturing
complex long-term dependencies, models of online data inherently lack structure
and are forced to detect latent structures in the aggregate. To properly
represent these abstract relationships, this research dissects empirical social
media corpora into their elemental components, analyzing over two billion
tweets across population-dense locations. We create Bag-of-Word embedding
specific to each city and compare their respective representations. This finds
that even amidst noisy data, geographic location has a considerable influence
on online communication, and that hidden insights can be uncovered without the
crutch of advanced algorithms. This evidence presents valuable geospatial
implications in social science and challenges the notion that intricate models
are prerequisites for pattern recognition in natural language. This aligns with
the evolving landscape that questions the embrace of absolute interpretability
over abstract understanding and bridges the divide between sophisticated
frameworks and intangible relationships. | Nick DiSanto, Anthony Corso, Benjamin Sanders, Gavin Harding | 2023-10-09T03:27:05Z | http://arxiv.org/abs/2310.05378v3 | # Transcending the Attention Paradigm: Representation Learning from Geospatial Social Media Data
###### Abstract
While transformers have pioneered attention-driven architectures as a cornerstone of research, their dependence on explicitly contextual information underscores limitations in their abilities to tacitly learn overarching textual themes. This study investigates social media data as a source of distributed patterns, challenging the heuristic paradigm of performance benchmarking. In stark contrast to networks that rely on capturing complex long-term dependencies, models of online data inherently lack structure and are forced to learn underlying patterns in the aggregate. To properly represent these abstract relationships, this research dissects empirical social media corpora into their elemental components and analyzes over two billion tweets across population-dense locations. Exploring the relationship between location and vernacular in Twitter data, we employ Bag-of-Words models specific to each city and evaluate their respective representation. This demonstrates that hidden insights can be uncovered without the crutch of advanced algorithms and demonstrates that even amidst noisy data, geographic location has a considerable influence on online communication. This evidence presents tangible insights regarding geospatial communication patterns and their implications in social science. It also challenges the notion that intricate models are prerequisites for pattern recognition in natural language, aligning with the evolving landscape that questions the embrace of absolute interpretability over abstract understanding. This study bridges the divide between sophisticated frameworks and intangible relationships, paving the way for systems that blend structured models with conjectural reasoning.
- Natural Language Processing, social media, geospatial correlation, representation learning, Bag-of-Words, cosine similarity
## 1 Introduction
The emergence of transformers has catalyzed a paradigm shift in the field of Natural Language Processing (NLP), ushering in an era of attention-driven frameworks. After sufficient pretraining and self-supervised learning on vast amounts of organized data, these Large Language Models (LLMs) excel in task-specific environments, even as few-shot or zero-shot reasoners [1, 2]. However, the effectiveness of LLMs relies on the availability of structured data, monotonous human feedback, and meticulous prompt engineering. While they demonstrate remarkable conversational aptitude, these training methods starkly contrast the tacit nature of human learning.
Natural intelligence emerges not from training on precise and composed corpora but from synthesizing abstract connections between underlying patterns. However, industry-oriented architectures are notorious for being benchmark-driven [3, 4] in order to justify funding and ensure apparent progress rather than adopting general-purpose learning practices. Advanced models that perform well on complex benchmarks often struggle to make simple logical jumps that are trivial to human intuition [5] or generalize to environmental changes [6]. The inability to discern intangible relationships is substantively attributable to statically structured training data, necessitating the exploration of data sources imbued with subtle contextual nuances.
This study seeks to transcend heuristic benchmarking by investigating the implicit context of social media's latent patterns. Online data is specifically targeted because its variability restricts its embodiment of consistent relationships. Tweets, for instance, are characterized by their brevity and unpredictability, presenting challenges in training deep models. As Mandal et al. [7] point out, without transfer learning from pre-trained models, mapping to unencoded social media token sets can be extremely messy. However, the high volume and empirical association of user-generated online data make it an ideal candidate for examining patterns in the aggregate. Social media is a fundamental representation of natural language, as it embodies human communication at its most raw and unfiltered state. Furthermore, language, by its very nature, serves as a robust empirical metric, as it primarily functions to communicate information about the world [8]. This investigation chose Twitter (now "X") content as its data source, aiming to gain tangible insights by deconstructing the data and examining geospatial correlations. More specifically, this analysis seeks to establish:
1. Whether Twitter correlation can be represented as a function of geographic location
2. How similarity trends are indicative of regional relationships
3. The extent to which low-level unigram models can embody empirical context
These conclusions will determine the extent to which meaning can be derived from text itself, independent of the capabilities of a sophisticated model. If patterns can be discerned, even despite Twitter's arbitrary nature, it could warrant a paradigm shift toward indirect learning methods that prioritize the underlying context of real-world data over the overly explicit information demanded by LLMs.
## 2 Related Works
Social media has become an increasingly common data source, with broad applications ranging from academic sentiment analysis [9] to evidence-based policy-making [10]. For example, Alotaibi et al. [11] leverage social media data as a healthcare tool to identify trends of prevalent diseases in Saudi Arabia, offering insight into the interplay between online engagement and healthcare applications. Similarly, Rodrigues et al. [12] pursue a broad temporal analysis of Twitter trends to gain insights into the efficiency of evaluating social media data in real-time. Social media data has also been used as a fine-tuning tool for LLMs, such as BERTweet [13], which achieves state-of-the-art performance on a variety of benchmarks. However, the specificity of this framework prompts a consideration of whether simpler models, when provided with sufficient data, can still find thematic patterns.
Extensive previous research has also used geographic location to explore the predictability of online data. For instance, Cheng et al. [14] built a model capable of accurately estimating the location of microbloggers by relying on local word identification. Unique approaches have also inferred locations at varying granularities, starting with time zones and slowly narrowing down to specific zip codes [15]. This allows a hierarchical classification process to avoid early overfitting, focusing instead on high-level communication patterns. Locational analysis can also be multidisciplinary, with one study demonstrating that subjects of drug use and HIV outbreaks can often be triangulated to specific population-dense regions [16]. Chandra et al. [17] also adopt a spatial reference framework, focusing exclusively on user interactions. They demonstrate that simple patterns and low-level features can still yield accurate models.
In order to establish the necessity--or lack thereof--of deep models as pragmatic predictors, the success tradeoff between powerful systems and contextual data must be evaluated. This contentious relationship is discussed by Halevy et al. [18], who argue that the lack of concise solvability in NLP necessitates potentially noisy data that can be used to build refined networks. This preference for a robust corpus is also evident in LLM hallucinations, which occur when a model is unable to properly comprehend complex data [19]. Meanwhile, Ellis [20] argues that human-level language modeling is entirely implicit and that the further models are abstracted, the more broadly beneficial they become. Strang et al. [21] address the overabundance of complex implementations and their respective benchmarks by comparatively analyzing simple linear classifiers and intricate non-linear models. They aimed to identify the necessity of state-of-the-art systems and found that, while non-linear models are often advantageous, there are many applications in which they are overkill. This evidence suggests that simple models possess the capacity to uncover fundamental textual patterns, necessitating an analysis to gauge their practical applicability.
## 3 Methodology
### 3.1 - Dataset
From 2016 to 2020, an extensive dataset of approximately 2.5 billion tweets was gathered from various geographic regions. These tweets were publicly accessible in JSON format and collected using the standard Twitter API. While the metadata of each individual tweet included a variety of data types, such as author information and timestamps, this analysis focused solely on the textual content. The objective of the data collection was twofold:
* To accumulate a vast array of tweets from diverse locations across the United States
* To optimize the data's empirical applications
Seventy specific data sites were identified, primarily centered around densely populated landmarks, such as sports stadiums and universities, and other generally populous regions. Each site was defined by a latitudinal and longitudinal boundary, with the dimensions varying based on the rate of change in population density.
The dataset of tweets was gathered from diverse areas to maximize its volume and to form a fully representative sample of the platform's evolving content. The data was collected automatically, guaranteeing the procurement of discrete tweets devoid of predetermined patterns. This method was designed to encompass a comprehensive and dynamic spectrum of user interactions, making it a valuable technique for unbiased data analysis.
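To make the site definitions concrete, the following is a minimal sketch of how tweets could be assigned to latitude/longitude bounding boxes; the class, the tweet field names ("lat", "lon"), and the example coordinates are illustrative assumptions, not the study's actual configuration.

```python
# Minimal sketch: assigning tweets to collection sites by bounding box.
# Site definitions and tweet field names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Site:
    name: str
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)


def assign_site(tweet: dict, sites: List[Site]) -> Optional[str]:
    """Return the name of the first site whose box contains the tweet."""
    lat, lon = tweet["lat"], tweet["lon"]
    for site in sites:
        if site.contains(lat, lon):
            return site.name
    return None


# Example: a rough (hypothetical) box around downtown Chicago.
sites = [Site("chicago", 41.6, 42.1, -87.9, -87.5)]
print(assign_site({"lat": 41.88, "lon": -87.63}, sites))  # -> "chicago"
```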
### 3.2 - Data Preparation and Tokenization
To break the dataset into its simplest terms, Bag-of-Words (BoW) models were constructed from each city's respective corpus, represented as maps of individual token frequencies. BoW modeling has historically served as a favored approach for categorizing experimental applications [22]. Namely, Molero et al. [23] illustrate its capability to effectively map
to social media data, showcasing its enduring relevance even in the age of non-linear models. Traditional data preparation techniques were applied to optimize the number of distinct tokens while retaining all pertinent information. These procedures were implemented using the Natural Language Toolkit [24] and included removing common stopwords and punctuation, standardizing case sensitivity, and stemming/lemmatizing individual tokens.
The analysis was also constrained to English tweets to prevent the model from categorizing cities based on external features, which would compromise the predictability of the text itself. To improve computational efficiency, each city's token map was then trimmed to its five thousand most frequent tokens. Finally, the corpora were proportionally scaled to the respective cities' populations to accommodate variations in population and subsequent social media usage.
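A minimal sketch of this preprocessing and Bag-of-Words construction is given below; it is illustrative rather than the study's exact code, it assumes the relevant NLTK resources (the stopword list and tokenizer models) have already been downloaded, and it uses a Porter stemmer as one possible normalizer for the stemming/lemmatization step.

```python
# Minimal sketch of per-city Bag-of-Words construction with NLTK.
import string
from collections import Counter

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

STOPWORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()
PUNCT_TABLE = str.maketrans("", "", string.punctuation)


def bag_of_words(tweets, vocab_size=5000):
    """Map one city's tweets to a token-frequency dictionary."""
    counts = Counter()
    for text in tweets:
        # lowercase and strip punctuation before tokenizing
        tokens = word_tokenize(text.lower().translate(PUNCT_TABLE))
        # drop stopwords, then stem what remains
        counts.update(STEMMER.stem(tok) for tok in tokens if tok not in STOPWORDS)
    # keep only the most frequent tokens for efficiency
    return dict(counts.most_common(vocab_size))
```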
[Listing: sample tweet metadata in JSON format, showing fields such as "created_at", "id", and "text"; the remainder of the listing did not survive extraction.]

## 4 Results
The examples in Figure 3 were selected from far-apart locations to establish how the overall distribution can be understood by region. For example, _f(Los Angeles)_ and _f(Miami)_ demonstrate convincingly linear results after 2,500 km, implying a consistent representational variance of cities past that distance. However, _f(Chicago)_ shows high variance in its distribution, likely due to its midwestern location, which is equidistant to dozens of cities of varying vernacular. Additionally, _f(Miami)_ shows a large cluster of unpredictable results around 2,000 km, likely due to the wide variety of cities at such a distance from Florida. This further emphasizes that the communication differences are particularly strong between cities of extreme distances. The regional implications of these findings are further discussed in the Discussion section.
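The per-city similarity functions can be sketched as follows: a minimal, illustrative implementation using cosine similarity over the Bag-of-Words maps built above, paired with a great-circle distance. The dictionary keys "bow" and "coord" and the use of the haversine formula are assumptions for illustration, not necessarily the study's exact method.

```python
# Minimal sketch: cosine similarity between two cities' Bag-of-Words maps,
# paired with their great-circle separation in kilometres.
import math


def cosine_similarity(bow_a: dict, bow_b: dict) -> float:
    dot = sum(bow_a[t] * bow_b[t] for t in bow_a.keys() & bow_b.keys())
    norm_a = math.sqrt(sum(v * v for v in bow_a.values()))
    norm_b = math.sqrt(sum(v * v for v in bow_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))


def similarity_vs_distance(city, others):
    """(distance, similarity) pairs for one city against all others."""
    return sorted(
        (haversine_km(*city["coord"], *other["coord"]),
         cosine_similarity(city["bow"], other["bow"]))
        for other in others
    )
```

Binning these (distance, similarity) pairs, for example every 1,000 km, yields the similarity-versus-distance curves analyzed in this section.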
After analyzing individual locations _f(c)_\(\in R\), a composite function \(|R|=\sum_{i}^{n}f(c_{i})\) was established as an aggregation of the individual functions. This was analyzed to form a comprehensive view of the country's overall results.
The high variance in correlation is to be expected since this function is an unweighted composite of the individual cities. Nevertheless, a negative trend is discernible across the locations. The proportionate drop-off in similarity every 1,000 km can be represented as \(\Delta\mathbf{S}=\frac{s[d-1000\,\mathrm{km}]-s[d]}{s[d-1000\,\mathrm{km}]}\). Given this, the overall similarity correlation is as follows:
| Distance | Similarity | \(\Delta\mathbf{S}\) | \(\sum\Delta\mathbf{S}\) |
|---|---|---|---|
| 0 km | 0.0201 | 0% | 0% |
| 1,000 km | 0.0193 | 4% | 4% |
| 2,000 km | 0.0186 | 4% | 8% |
| 3,000 km | 0.0176 | 5% | 13% |
| 4,000 km | 0.0157 | 11% | 24% |
| 5,000 km | 0.0132 | 16% | 40% |
The average cosine similarity between two cities within 500 km of one another is 0.0201 but drops below 0.014 when the cities are separated by over 4,000 km, a \(\sum\Delta\mathbf{S}\) value of up to 40%. Additionally, the rate of change of \(\Delta\mathbf{S}\) increases with each distance increment, hinting at a polynomial relationship. Overall, the similarity functions _f(c)_\(\in R\) as well as the aggregated \(|R|\) function establish that:
1. In the aggregate, the correlation of online communication methods decreases with distance
2. Changes in similarity are particularly evident in cities over 2,500 km apart, with equidistant central cities absorbing attributes from both counterparts
3. Simple linear models, such as Bag-of-Words, are capable of identifying empirical trends
## 5 Discussion
This analysis is a convincing indication that differing underlying language patterns exist between cities separated by considerable distances. It presents several points of discussion regarding the locational analysis of communication styles. One important consideration is that these distinctions are purely in a unigram context. Therefore, the results are not attributable to differences in sentence structure or other large-scale linguistic attributes. Rather, they are a synthesis of tokenized characteristics that add up to a large-scale distribution. Additionally, BoW distinctions are a strong suggestion of empirical influence, because if every city generally communicated similarly, a sufficiently large BoW would normalize locational distinctions through raw computation.
Practical takeaways from this study include the implications of the increasing rate of \(\Delta\mathbf{S}\) after the distance passes 2,000 km in both _f(c)_\(\in R\) and \(|R|\). While similarity often clusters before this point, as _f(Miami)_ demonstrates, results become much more compelling as distance surpasses this threshold. In fact, many individual similarity functions show apparent linear rates of \(\Delta\mathbf{S}\) at these distances, suggesting that their representations are unique enough to differ equally from several other diverse locations. This is specifically seen in coastal cities, implying that locations that are not landlocked by other influential areas are entirely distinctive from cities of considerable distance.
Additionally, as previously mentioned, midwestern cities like Chicago exhibit the most variance in their results. This is likely because they are roughly equidistant to most other comparative locations and exhibit linguistic patterns that mix various geographic styles. These discoveries support the results of Kamath et al. [27], which find that content similarity clusters at small distances but drops significantly after 3,000 km. Social science research has also affirmed these hypotheses, developing methods to segment specific regions of America and showing the unique distinctions of different geographic areas [28]. Overall, the results of this analysis emphasize a similar locational conclusion: natural language differs increasingly between
coastal cities of long distances, while southern and midwestern cities pick up subtle similarities between both communication styles.
It is also necessary to acknowledge the inevitable presence of noise within social media data. While most models rely on datasets that are curated to exclusively contain beneficial information, user-generated content is inherently erratic. Consequently, most tweets analyzed in this study lacked any discernible geographic association. This makes the results hold particular significance: they unearth intangible structures across the entire corpus that only represent a small subset. Although refining the corpora by removing nonsensical data would have undeniably enhanced the models' performances, it would also have sacrificed the study's primary objective. The emergence of distributions from arbitrary data strengthens the argument for the existence of inherent context, even when employing simple metrics.
These findings also underscore a critical facet of employing large-scale NLP systems: the tradeoff between model interpretability and performance. The abstract relationships within these Twitter datasets remain largely enigmatic due to this study's emphasis on identifying pragmatic correlations, rather than understanding the black-box nature of such patterns. This focus aided data analysis since explainable machine learning often requires reducing data complexity and potentially higher-level associations, but it provides only presumptions of the real-world implications. Future studies that pinpoint specific features inherently associated with regional trends may yield more applicable results for the field of communication science and empower researchers to build a model that is just as interpretable as it is high-performing.
## 6 Future Work
This research explores the viability of traditional models to find complex location patterns in noisy data. This has numerous implications on both the applicability of linear models and the development of advanced future archetypes. While BoW models and other low-level representational models can be contextually mapped to observable outcomes, additional work is necessary to establish the limitations of these findings. More specifically, it is vital to recognize the data preprocessing methods that were employed for optimization. While an important conclusion of this study is that unstructured, oftentimes meaningless data can show correlative results, truly tacit comprehension does not have this luxury; noise must be automatically filtered out at an extremely high level. Future implementations should further establish the performance cutoff due to increased noise, demonstrating when the model can no longer find pragmatic correlations.
An important consideration for future social media analysis is the consideration of hashtags, hyperlinks, and other nonalphanumeric-reliant text. This study opted for simplicity, removing all of the tweets' symbolic characters from the start. However, some of these symbols are undoubtedly practical, yielding the opportunity to further establish locational distinctions. For example, Gupta et al. [29] establish an automation process to derive semantically relevant hashtags and classify them based on empirical and domain-specific significance. Individual trendy hashtags have also been targeted to explore their respective political sentiments and demonstrate what members of the public web communicate in predictive ways [30]. Future work that accounts for specific symbol combinations could shed light on new methods for geospatial mapping and find additional relationships across online communities.
## 7 Conclusion
This study ventures beyond the conventional boundaries of language paradigms by exploring the nuanced relationship between geographic location and online communication. This has unveiled compelling evidence of distinct linguistic patterns emerging across diverse locations. The results of the study find that with distance, the similarity of communication methods consistently drops. Additionally, users from various general regions exhibit unique data representations, yielding fascinating empirical implications. This comprehensive examination provides a fresh perspective on how text can be considered not just expressions of thought, but also a reflection of context. Additionally, the simplicity of Bag-of-Words models and unstructured data underscores the potential for uncovering hidden correlations within the chaotic realm of social media.
Finding geospatial correlation in data with no apparent structure demonstrates that models of minimal complexity can still learn implicitly and find subtle patterns. As a result, this research advocates for recognizing text as a profound source of representational patterns and abstract empirical features. As modern frameworks continue to grow in complexity and computational power, simple representations should not be underestimated. Embracing a temporary respite to a more primitive approach challenges a reconsideration of the necessary criteria for general-purpose intelligence. This also prompts a consideration of the viability of indirect learning methods that prioritize contextual understanding over explicit information. The implicit patterns found within the text are a testament to the depth of language and the potential for future discovery within the ever-expanding world of data analysis.
One can only marvel at the possibilities if state-of-the-art models embrace an emphasis on intangible understanding over mere interpretability. Delving into the intricacies of how social media platforms can capture the nuances of human interaction promises to extend the frontiers of both communication science and NLP. Scaling architectures down to their core can make human intuition more computationally interpretable, yielding a distinction between the significance of non-linear models and the underlying context of rich empirical data. Striving for a harmonious blend between structured networks and amorphous relationships represents the ultimate objective in developing an agent capable of abstract reasoning. |
2308.07725 | On Quasiconvexity of Precompact-Subset Spaces | Let $X$ be a metric space and $BCl(X)$ the collection of nonempty bounded
closed subsets of $X$ as a metric space with respect to Hausdorff distance. We
study both characterization and representation of Lipschitz paths in $BCl(X)$
in terms of Lipschitz paths in $X$ and in the completion of $X$. We show that a
full characterization and representation is possible in any subspace
$\mathcal{J}\subset BCl(X)$ that (i) consists of precompact subsets of $X$,
(ii) contains the singletons $\{x\}$ for every $x\in X$, and (iii) satisfies
$BCl(C)\subset\mathcal{J}$ for every $C\in\mathcal{J}$. When $X$ is geodesic,
we investigate quasiconvexity of $\mathcal{J}$ for some instances of
$\mathcal{J}$, especially when $\mathcal{J}$ consists of finite subsets of $X$. | Earnest Akofor | 2023-08-15T12:00:49Z | http://arxiv.org/abs/2308.07725v5 | ###### Abstract
Let \(X\) be a metric space and \(BCl(X)\) the collection of nonempty bounded closed subsets of \(X\) as a metric space with respect to Hausdorff distance. We study both characterization and representation of rectifiable paths in \(BCl(X)\) in terms of rectifiable paths in \(X\). We show that a full characterization and representation is possible in any subspace \(\mathcal{J}\subset BCl(X)\) that (i) consists of precompact subsets of \(X\), (ii) contains \(X\) as a subspace, and (iii) satisfies \(BCl(C)\subset\mathcal{J}\) for every \(C\in\mathcal{J}\). When \(X\) is geodesic, we investigate quasiconvexity of \(\mathcal{J}\) for some instances of \(\mathcal{J}\), especially when \(\mathcal{J}\) consists of finite subsets of \(X\).
**On Quasiconvexity of Precompact-Subset Spaces**
Earnest Akofor
+
Footnote †: _Key words and phrases_. Metric space, subset space, stable covering subspace, quasiconvex, quasigeodesic, Lipschitz path.
###### Contents
* 1 Introduction
* 2 Review of rectifiable paths in metric spaces
* 3 The case of precompact-subset spaces
* 4 The case of finite-subset spaces
* 5 Some relevant questions
## 1. **Introduction**
To simplify subsequent discussions, we begin with conventions, definitions, and facts that will be used throughout. Some of these will be repeated later for convenience. The abbreviation "resp." stands for "respectively", and is used to display two or more separate statements in a simultaneous or parallel manner whenever it seems convenient to do so. Given a set \(S\), its **cardinality** and **powerset** are denoted by \(|S|\) and \(\mathcal{P}(S)\) respectively. In a (topological) space \(X\), the **closure** of a subset \(A\subset X\) is denoted by \(cl_{X}(A)\) or \(\overline{A}\), and a continuous map \(\gamma:[0,1]\to X\) is called a **path** in \(X\) from \(\gamma(0)\) to \(\gamma(1)\), or **connecting**\(\gamma(0)\) to \(\gamma(1)\). If \(X\) and \(Y\) are metric spaces and \(L\geq 0\), a map \(f:X\to Y\) is \(L\)**-Lipschitz** if \(d(f(x),f(x^{\prime}))\leq Ld(x,x^{\prime})\) for all \(x,x^{\prime}\in X\).
Throughout the rest of the introduction, let \(X=(X,d)\) be a metric space. The **diameter** of \(A\subset X\) is \(\operatorname{diam}(A)=\sup_{a,a^{\prime}\in A}d(a,a^{\prime})\). The **distance** between \(x\in X\) and \(A\subset X\) is
\[\operatorname{dist}(x,A)=\operatorname{dist}^{X}(x,A):=\inf_{a\in A}d(x,a),\]
and, between \(A\subset X\) and \(B\subset X\) is \(\operatorname{dist}(A,B):=\inf_{a\in A}\operatorname{dist}(a,B)\). For \(R>0\), the **open \(R\)-neighborhood** of \(A\subset X\) (resp., of \(x\in X\)) is
\[N_{R}(A)=N_{R}^{X}(A):=\{x\in X:\operatorname{dist}(x,A)<R\}\ \big{(}\text{resp.}\ N_{R}(x)=N_{R}^{X}(x):=N_{R}^{X}( \{x\})\big{)},\]
and the **closed \(R\)-neighborhood** of \(A\subset X\) (resp., of \(x\in X\)) is
\[\overline{N}_{R}(A)=\overline{N}_{R}^{X}(A):=\{x\in X:\operatorname{dist}(x,A )\leq R\}\ \big{(}\text{resp.}\ \overline{N}_{R}(x)=\overline{N}_{R}^{X}(x):=\overline{N}_{R}^{X}(\{x\})\big{)}.\]
The **Hausdorff distance** between \(A\subset X\) and \(B\subset X\) is
\[d_{H}(A,B):=\max\{\sup_{a\in A}\operatorname{dist}(a,B),\sup_{b\in B} \operatorname{dist}(b,A)\}=\inf\{r:A\cup B\subset\overline{N}_{r}(A)\cap\overline {N}_{r}(B)\}.\]
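For a simple illustration, take \(X=\mathbb{R}\) with the usual metric, \(A=[0,1]\), and \(B=[2,4]\): then \(\sup_{a\in A}\operatorname{dist}(a,B)=\operatorname{dist}(0,B)=2\) and \(\sup_{b\in B}\operatorname{dist}(b,A)=\operatorname{dist}(4,A)=3\), so that \(d_{H}(A,B)=3\), even though \(\operatorname{dist}(A,B)=1\).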
A set \(A\subset X\) is **bounded** if there exist \(x\in X\) and \(R>0\) such that \(A\subset N_{R}(x)\).
Let \(Cl(X)\) denote the collection of nonempty closed subsets of \(X\). In this paper, a **subset space** of \(X\) is any subcollection \(\mathcal{J}\subset Cl(X)\) on which \(d_{H}\) takes finite values, making \(\mathcal{J}=(\mathcal{J},d_{H})\) a metric space. Subset spaces that will be relevant to us include the following:
1. \(C\)**-regulated subset space** of \(X\) (for any \(C\in Cl(X)\)), \(\ \mathcal{H}(X;C):=\{A\in Cl(X):d_{H}(A,C)<\infty\}\), as introduced by Kovalev and Tyson in [9].
2. **Bounded-subset space** of \(X\), \(\ BCl(X):=\{A\in Cl(X):A\text{ is bounded}\}\), where \(\mathcal{H}(X;C)=BCl(X)\) if and only if \(C\in BCl(X)\). A subspace \(\mathcal{J}\subset BCl(X)\) is **stable** if \(BCl(A)\subset\mathcal{J}\) for each \(A\in\mathcal{J}\), and **covering** if \(X\subset\mathcal{J}\).
3. **Precompact-subset space** of \(X\), \(\ PCl(X):=\{A\in BCl(X):A\text{ is precompact}\}\), where \(A\subset X\) is **precompact** (or **totally bounded**) if for every \(\varepsilon>0\) there exists a finite set \(F\subset A\) such that \(A\subset N_{\varepsilon}(F)\). _Important fact_ (see Thomson and Bruckner [13, Theorem 13.96]): \(A\subset X\) is precompact \(\iff\) (i.e., if and only if) every sequence in \(A\) has a Cauchy subsequence.
4. **Compact-subset space** of \(X\), \(\ K(X):=\{A\in\ PCl(X):A\text{ is compact}\}\), where \(A\subset X\) is **compact** if every open cover of \(A\) has a finite subcover. _Important facts_ (see [13, Theorem 13.97] and [1, Appendices A.1.3 and A.1.6]): Since \(X\) is a metric space, a subspace \(A\subset X\) is compact \(\iff\) every sequence in \(A\) has a convergent subsequence \(\iff\) complete and precompact, where a **complete** metric space is one in which every Cauchy sequence converges. The **completion** of \(X\) (i.e., the smallest complete metric space containing \(X\) as a dense subspace) will be denoted by \(\widetilde{X}\), and if \(A\subset X\) then \(\widetilde{A}\subset\widetilde{X}\) will denote the closure of \(A\) in \(\widetilde{X}\). A metric space is **proper** if every bounded closed subset of the space is compact.
5. **Finite-subset space** of \(X\), \(\ FS(X):=\{A\in K(X):|A|<\infty\}\).
6. \(n\)**th Finite-subset space** (or \(n\)**th symmetric product**) of \(X\), \(\ FS_{n}(X):=\{A\in FS(X):|A|\leq n\}\), for any integer \(n\geq 1\).
7. \(n\)**th Upper finite-subset space** of \(X\), \(\ FS^{n}(X):=\{A\in FS(X):|A|\geq n\}=FS(X)\backslash FS_{n-1}(X)\). If \(\lambda\geq 1\), a path \(\gamma:[0,1]\to X\) is a \(\lambda\)**-quasigeodesic** (or a \(\lambda\)**-quasiconvex path**) if it is \(\lambda d(\gamma(0),\gamma(1))\)-Lipschitz, where a \(1\)-quasigeodesic is called a **geodesic**. Accordingly, \(X\) is a \(\lambda\)**-quasigeodesic space** (or a \(\lambda\)**-quasiconvex space**) if every two points of \(X\) are connected by a \(\lambda\)-quasigeodesic in \(X\). A \(1\)-quasiconvex space is naturally called a **geodesic space**. We will refer to quasigeodesics in \(BCl(X)\) as **Hausdorff quasigeodesics** (i.e., quasigeodesics in \(BCl(X)\) with respect to \(d_{H}\)).
Let \(A,B\in\mathcal{J}\subset BCl(X)\). By **expression**, or **representation**, of a path \(\gamma:[0,1]\to\mathcal{J}\) in terms of paths \(\{\gamma_{r}:[0,1]\to X\}_{r\in R}\) we mean a pointwise expression of the form \(\gamma(t)=\{\gamma_{r}(t):r\in R\}\) for all \(t\in[0,1]\). A detailed study of quasiconvexity properties of a subspace \(\mathcal{J}\subset BCl(X)\) is naturally expected to involve a search for general characterizations and representations of Hausdorff quasigeodesics (in \(\mathcal{J}\) between any two sets \(A,B\in\mathcal{J}\)) in terms of Lipschitz paths in \(X\). Moreover, it is easier to picture a path in \(BCl(X)\) in terms of paths in \(X\). To highlight our main results, we summarize our attempt to answer the following three questions.
**Question 1:** Let \(A,B\in\mathcal{J}\subset BCl(X)\) and \(\lambda\geq 1\). With \(X\) still an arbitrary metric space, is it possible to characterize (i.e., give necessary and sufficient conditions for) the existence of a \(\lambda\)-quasigeodesic \(\gamma:[0,1]\to\mathcal{J}\) between \(A\) and \(B\)?
When \(\mathcal{J}\) is a stable covering subspace of \(PCl(X)\), we give a positive answer to Question 1 by providing an existence criterion for individual quasigeodesics in \(\mathcal{J}\) through Theorem 3.20 - Corollary 3.21. Of course, this criterion for \(PCl(X)\) becomes a criterion for \(BCl(X)\) if the metric space
is compact or proper. For a compact metric space \(Z\), [10, Theorem 3, pages 6 and 38] gives a characterization of geodesics in \(BCl(Z)=K(Z)\). Our result in Theorem 3.20 - Corollary 3.21 therefore generalizes this characterization in \(K(X)\) to a characterization in \(PCl(X)\).
When \(X\) is geodesic or richer, the abundance of geodesics in \(X\) enables a straightforward construction of quasigeodesics in many subspaces of \(BCl(X)\), as it has been done by Kovalev and Tyson in [9, Theorem 2.1 and Corollary 2.2], by Memoli and Wan in [10, Theorem 3.6, page 14], and by Fox in [7, Theorem 3.4] when \(X\) is a normed space. In particular, it was observed in [9, Theorem 2.1] that Lipschitz paths in \(X\) are sufficient for quasigeodesics in \(BCl(X)\). Our description of quasigeodesics in Lemma 3.18 and Theorem 3.20 - Corollary 3.21 goes further to show that Lipschitz paths in \(X\) are necessary for quasigeodesics in \(PCl(X)\). The question of whether or not Lipschitz paths in \(X\) are also necessary for quasigeodesics in \(BCl(X)\) is open and posed as Question 5.1.
**Question 2:** Let \(A,B\in\mathcal{J}\subset BCl(X)\), \(\lambda\geq 1\), and suppose a \(\lambda\)-quasigeodesic \(\gamma:[0,1]\to\mathcal{J}\) exists between \(A\) and \(B\). With \(X\) still an arbitrary metric space, is it possible to express \(\gamma\) in terms of Lipschitz paths in \(X\) (that is, is \(\gamma(t)=\{\gamma_{r}(t):r\in R\}\) for a set \(R\) and Lipschitz paths \(\gamma_{r}:[0,1]\to X\))?
Once again, when \(\mathcal{J}\) is a stable covering subspace of \(PCl(X)\), we give a positive answer to Question 2 in Theorem 3.23. A similar result (for \(\lambda>1\) instead of \(\lambda\geq 1\)) in Proposition 3.25 seems to indicate that in Theorem 3.23 it might be possible to replace \(PCl(X)\) with a larger subspace of \(BCl(X)\). Our uncertainty here is expressed in Question 5.2.
**Question 3:** Let \(A,B\in\mathcal{J}\subset BCl(X)\) and \(\lambda\geq 1\). Suppose \(\mathcal{J}\) is a stable covering subspace of \(PCl(X)\), in which case the answer to Question 1 above is positive. If \(X\) is geodesic (i.e., 1-quasiconvex), does it follow that \(\mathcal{J}\) is \(\lambda\)-quasiconvex?
Question 3 has already been answered for some instances of \(\mathcal{J}\), which we consider important enough to be reviewed from a new perspective. Borovikova, Ibragimov, and Yousefi showed in [4, Theorem 4.1] that \(FS_{n}(\mathbb{R})\) is \(4^{n}\)-quasiconvex. Based on our related earlier work in [1, 2], we show that if \(X\) is geodesic then the following are true:
1. \(FS_{2}(X)\) is geodesic (Corollary 4.7).
2. \(FS_{n}(X)\) is not geodesic for \(n\geq 3\) (Corollary 4.5).
3. \(FS_{n}(X)\) is 2-quasiconvex (Theorem 4.8), improving the above \(4^{n}\)-quasiconvexity of \(FS_{n}(\mathbb{R})\).
4. \(FS^{n}(X)\) is geodesic (Corollary 4.7).
If \(X\) is geodesic, the results (ii)-(iii) above say that, for \(n\geq 3\), \(FS_{n}(X)\) is 2-quasiconvex but not geodesic. Similarly, if \(X\) is geodesic then \(BCl(X)\) is \(\lambda\)-quasiconvex for \(\lambda>1\) (Corollary 3.19) but need not be geodesic, as noted in [9, page 2] following the work of Bryant in [5].
The rest of the paper is organised as follows. In Section 2, we review rectifiable paths and quasigeodesics in metric spaces. In Section 3, we present our main results on quasiconvexity of precompact-subset spaces. This is followed in Section 4 by a concise review of our earlier work on quasiconvexity of finite-subset spaces. In Section 5, we ask a few questions concerning extendability of our characterization and representation criteria for Hausdorff quasigeodesics and concerning efficiency of such quasigeodesics.
## 2. **Review of rectifiable paths in metric spaces**
**Definition 2.1** (Path, Parametrization, Length, Rectifiable, Constant speed, Natural parametrization).: Let \(X\) be a space and \([a,b]\subset\mathbb{R}\) a compact interval (where we will mostly assume \([a,b]=[0,1]\) for simplicity). A continuous map \(\gamma:[a,b]\to X\) is called a **path** in \(X\) from \(\gamma(a)\) to \(\gamma(b)\), or **connecting**\(\gamma(a)\) to \(\gamma(b)\). Given paths \(\gamma,\eta:[a,b]\to X\), \(\eta\) is a **parametrization** of \(\gamma\), written \(\eta\sim\gamma\), if \(\eta(a)=\gamma(a)\), \(\eta(b)=\gamma(b)\), and \(\eta([a,b])=\gamma([a,b])\).
Let \(X\) be a metric space and \(\gamma:[a,b]\to X\) a path. The **length** of \(\gamma\) is
\[l(\gamma)\ :=\ \sup\big{\{}l_{P}(\gamma):P\subset[a,b]\text{ a finite partition}\big{\}},\]
where \(l_{P}(\gamma):=\sum_{i=1}^{k}d(\gamma(t_{i-1}),\gamma(t_{i}))\) is the length of \(\gamma\) over \(P=\{a=t_{0}<t_{1}<\cdots<t_{k}=b\}\). If \(l(\gamma)<\infty\), we say \(\gamma\) is **rectifiable**. A path \(\gamma:[a,b]\to X\) has **constant speed**\(c\geq 0\) if \(l(\gamma|_{[t,t^{\prime}]})=c|t-t^{\prime}|\) for all \(t,t^{\prime}\in[a,b]\). A path with constant speed \(c=1\) is called a **natural parametrization**.
In the above definition \(l(\gamma)\) does not depend on the way \(\gamma\) is parameterized, i.e., if two paths \(\gamma,\eta:[a,b]\to X\) satisfy \(\gamma\sim\eta\) then \(l(\gamma)=l(\eta)\). The converse is false, i.e., for arbitrary paths \(\gamma,\eta:[a,b]\to X\), \(l(\gamma)=l(\eta)\) does not imply \(\eta\sim\gamma\).
**Lemma 2.2** (Additivity of length).: _Given a rectifiable path \(\gamma:[a,b]\to X\) and \(a\leq c\leq b\), we have_
\[l(\gamma)=l(\gamma|_{[a,c]})+l(\gamma|_{[c,b]}).\]
Proof.: For any \(\varepsilon>0\), there exist finite partitions \(P,Q,R\subset[a,b]\) (which we may take with \(c\in P\), \(Q\subset[a,c]\), and \(R\subset[c,b]\), since refining a partition does not decrease the corresponding length sum) such that \(l(\gamma)<l_{P}(\gamma)+\varepsilon=l_{P}(\gamma|_{[a,c]})+l_{P}(\gamma|_{[c,b]})+\varepsilon\leq l(\gamma|_{[a,c]})+l(\gamma|_{[c,b]})+\varepsilon<[l_{Q}(\gamma|_{[a,c]})+\varepsilon]+[l_{R}(\gamma|_{[c,b]})+\varepsilon]+\varepsilon=l_{Q\cup R}(\gamma)+3\varepsilon\leq l(\gamma)+3\varepsilon\).
**Lemma 2.3** (Burago and Ivanov: [6, Proposition 2.5.9, page 46]).: _Let \(\gamma:[0,1]\to X\) be a rectifiable path and let \(l=l(\gamma)\). There exists a nondecreasing continuous map \(\varphi:[0,1]\to[0,l]\) and a natural parametrization \(\overline{\gamma}:[0,l]\to X\) such that \(\gamma=\overline{\gamma}\circ\varphi:[0,1]\stackrel{{\varphi}}{{ \longrightarrow}}[0,l]\stackrel{{\overline{\gamma}}}{{ \longrightarrow}}X\). (A function \(f:\mathbb{R}\to\mathbb{R}\) is **nondecreasing** iff \(t<t^{\prime}\) implies \(f(t)\leq f(t^{\prime})\).)_
Proof.: Consider the map \(\varphi:[0,1]\to[0,l],\ t\mapsto l\big{(}\gamma|_{[0,t]}\big{)}\) which is nondecreasing by additivity of length. To prove \(\varphi\) is continuous, observe that for any \(t,t^{\prime}\in[0,1]\), we have \(A(t,t^{\prime}):=l(\gamma|_{[t,t^{\prime}]})=\sup_{\mu(P)\downarrow 0}A_{P}(t,t^{\prime})\), where \(A_{P}(t,t^{\prime})=l_{P}(\gamma|_{[t,t^{\prime}]})\) and \(\mu(P)=\max_{i}|t_{i-1}-t_{i}|\) for a finite partition \(P=\{t=t_{0},t_{1},...,t_{|P|}=t^{\prime}\}\) that belongs in an inclusion-maximal chain of finite partitions of \([t,t^{\prime}]\).
So, if \(A(t):=\lim_{t^{\prime}\to t}A(t,t^{\prime})\) then, for any \(n\geq 0\) we can
(i) choose \(\delta_{n,t}>0\) such that for all \(t^{\prime}\), "\(|t-t^{\prime}|<\delta_{n,t}\ \Rightarrow\ |A(t,t^{\prime})-A(t)|<1/n\)",
(ii) further choose \(0<\delta_{n,t,t^{\prime}}<\delta_{n,t}\) such that for all \(P\), "\(\mu(P)<\delta_{n,t,t^{\prime}}\ \Rightarrow\ |A(t,t^{\prime})-A_{P}(t,t^{\prime})|<1/n\)",
(iii) finally choose \(0<\delta_{n}<\delta_{n,t,t^{\prime}}\) such that for all \(a,b\in[t,t^{\prime}]\), "\(|a-b|<\delta_{n}\ \Rightarrow\ d(\gamma(a),\gamma(b))<1/(|P|n)\)".
Then \(|A(t)|\leq|A(t)-A(t,t^{\prime})|+|A(t,t^{\prime})-A_{P}(t,t^{\prime})|+|A_{P}( t,t^{\prime})|<3/n\to 0\), i.e., \(A(t)=0\), for all \(t\).
Next, define \(\overline{\gamma}:[0,l]\to X\), \(s\mapsto\gamma(t)\in\gamma\left(\varphi^{-1}(s)\right)\), i.e., \(t\in\varphi^{-1}(s)\), or \(\varphi(t)=s\).
Then \(\overline{\gamma}\) is well defined because for any \(t,t^{\prime}\in\varphi^{-1}(s)\), \(t\leq t^{\prime}\), we have \(\gamma(t^{\prime})=\gamma(t)\), since
\[d(\gamma(t),\gamma(t^{\prime}))\leq l(\gamma|_{[t,t^{\prime}]})=l(\gamma|_{[0, t^{\prime}]})-l(\gamma|_{[0,t]})=\varphi(t^{\prime})-\varphi(t)=s-s=0.\]
Also, for any \(t\in[0,1]\), we have \(\gamma(t)=\overline{\gamma}(\varphi(t))\) since \(t\in\varphi^{-1}(\varphi(t))\). Finally, with \(t_{s}\in\varphi^{-1}(s)\), \(t_{s^{\prime}}\in\varphi^{-1}(s^{\prime})\),
\[l(\overline{\gamma}|_{[s,s^{\prime}]})=l(\gamma|_{[t_{s},t_{s^{\prime}}]})=\big{|}l(\gamma|_{[0,t_{s}]})-l(\gamma|_{[0,t_{s^{\prime}}]})\big{|}=\big{|}\varphi(t_{s})-\varphi(t_{s^{\prime}})\big{|}=|s-s^{\prime}|.\qed\]
**Corollary 2.4** (Constant speed parametrization of a rectifiable path).: _Let \(\gamma:[0,1]\to X\) be a rectifiable path. There exists a path \(\eta:[0,1]\to X\) such that \(\eta\sim\gamma\) and \(l(\eta|_{[t,t^{\prime}]})=l(\eta)|t-t^{\prime}|\) for all \(t,t^{\prime}\in[0,1]\)._
Proof.: Let \(l=l(\gamma)\). By Lemma 2.3, \(\gamma=\overline{\gamma}\circ\varphi:[0,1]\stackrel{{\varphi}}{{ \longrightarrow}}[0,l]\stackrel{{\overline{\gamma}}}{{ \longrightarrow}}X\) for a nondecreasing continuous map \(\varphi\) and a natural parametrization \(\overline{\gamma}\). Thus, with \(\psi:[0,1]\to[0,l]\), \(t\mapsto lt\) (where \(\psi\sim\varphi\)), we get the parametrization of \(\gamma\) given by \(\eta:=\overline{\gamma}\circ\psi:[0,1]\stackrel{{\psi}}{{ \longrightarrow}}[0,l]\stackrel{{\overline{\gamma}}}{{ \longrightarrow}}X\), which satisfies
\[l(\eta|_{[t,t^{\prime}]})=l(\overline{\gamma}|_{[lt,lt^{\prime}]})=|lt-lt^{\prime}|=l|t-t^{\prime}|,\ \text{for all }t,t^{\prime}\in[0,1].\qed\]
**Definition 2.5** (Quasigeodesic, Quasiconvex space, Geodesic, Geodesic space).: Let \(X\) be a metric space and \(\lambda\geq 1\). A path \(\gamma:[0,1]\to X\) is a \(\lambda\)-**quasigeodesic** (or a \(\lambda\)**-quasiconvex** path) if
\[l(\gamma|_{[t,t^{\prime}]})\leq\lambda d(\gamma(0),\gamma(1))|t-t^{\prime}|, \text{ for all }t,t^{\prime}\in[0,1],\]
which is the case if and only if \(\gamma\) is \(\lambda d(\gamma(0),\gamma(1))\)-Lipschitz (see Lemma 2.8). We say \(X\) is a \(\lambda\)**-quasigeodesic space** (or a \(\lambda\)**-quasiconvex space**) if every two points \(x,y\in X\) are connected by a \(\lambda\)-quasigeodesic in \(X\) (i.e., a \(\lambda\)-quasigeodesic \(\gamma:[0,1]\to X\) such that \(\gamma(0)=x\) and \(\gamma(1)=y\)). A \(1\)-quasigeodesic is called a **geodesic**, and similarly, a \(1\)-quasiconvex space is called a **geodesic space**.
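As a simple illustration (not drawn from the cited references), let \(X=\mathbb{R}^{2}\) with the Euclidean metric and let \(\gamma(t)=(2t,0)\) for \(t\in[0,1/2]\) and \(\gamma(t)=(1,2t-1)\) for \(t\in[1/2,1]\). Then \(\gamma\) has constant speed \(2\) and \(d(\gamma(0),\gamma(1))=\sqrt{2}\), so \(l(\gamma|_{[t,t^{\prime}]})=2|t-t^{\prime}|=\sqrt{2}\,d(\gamma(0),\gamma(1))|t-t^{\prime}|\); hence \(\gamma\) is a \(\sqrt{2}\)-quasigeodesic, but not a geodesic since \(l(\gamma)=2>\sqrt{2}\).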
Note that a \(\lambda\)-quasigeodesic is called a \(\lambda\)**-quasiconvex path** by Hakobyan and Herron in [8, page 205]. According to Tyson and Wu in [14, page 317], a quasigeodesic is differently defined to be a path that is a bi-Lipschitz embedding, while injectivity of the path is not required in our definition. An equivalent definition of a geodesic in terms of paths that are parameterized by arc length has been given by Papadopoulos in [3, Definition 2.2.1, page 56].
**Note 2.6**.: Henceforth we will assume for convenience that every rectifiable path is parameterized to have constant speed, even though some of the results might depend only partially on this assumption.
**Corollary 2.7**.: _A path \(\gamma:[0,1]\to X\) is a \(\lambda\)-quasigeodesic \(\iff l(\gamma)\leq\lambda d(\gamma(0),\gamma(1))\), and so \(\gamma\) is a geodesic \(\iff l(\gamma)=d(\gamma(0),\gamma(1))\)._
**Lemma 2.8** (Characterization and sufficient condition for quasigeodesics).: _Let \(X\) be a metric space, \(\gamma:[0,1]\to X\) a path, and \(\lambda,\lambda_{1},...,\lambda_{k}\geq 1\). The following are true:_
1. \(\gamma\) _is a_ \(\lambda\)_-quasigeodesic_ \(\iff d(\gamma(t),\gamma(t^{\prime}))\leq\lambda d(\gamma(0),\gamma(1))|t-t^{ \prime}|\)_, for all_ \(t,t^{\prime}\in[0,1]\)_._
2. _If_ \([0,1]=\bigcup_{i=1}^{k}[a_{i},b_{i}]\)_,_ \(\gamma|_{[a_{i},b_{i}]}\) _is a_ \(\lambda_{i}\)_-quasigeodesic (for all_ \(i=1,...,k\)_), and_ \(\lambda=\big{(}\max_{i}\lambda_{i}\frac{d(\gamma(a_{i}),\gamma(b_{i}))}{d( \gamma(0),\gamma(1))}\big{)}\)_, then_ \(\gamma\) _is a_ \(\lambda\)_-quasigeodesic._
Proof.: (i) If \(\gamma\) is a \(\lambda\)-quasigeodesic, then \(d(\gamma(t),\gamma(t^{\prime}))\leq l(\gamma|_{[t,t^{\prime}]})\leq\lambda d( \gamma(0),\gamma(1))|t-t^{\prime}|\) for all \(t,t^{\prime}\in[0,1]\). Conversely, if \(d(\gamma(t),\gamma(t^{\prime}))\leq\lambda d(\gamma(0),\gamma(1))|t-t^{\prime}|\) for all \(t,t^{\prime}\in[0,1]\), then
\[l(\gamma|_{[t,t^{\prime}]})=\sup\{l_{P}(\gamma)\ |\ \text{finite}\ P \subset[t,t^{\prime}]\}=\sup\{\sum_{i=1}^{k}d(\gamma(t_{i-1}),\gamma(t_{i}))\ |\ t_{i}\in[t,t^{\prime}]\}\] \[\leq\sup\{\sum_{i=1}^{k}\lambda d(\gamma(0),\gamma(1))|t_{i-1}-t_ {i}|\ |\ t_{i}\in[t,t^{\prime}]\}=\lambda d(\gamma(0),\gamma(1))|t-t^{\prime}|,\ \forall t,t^{\prime}.\]
(ii) \(l(\gamma|_{[t,t^{\prime}]})=\sum l(\gamma|_{[t,t^{\prime}]\cap[a_{j},b_{j}]}) \leq\sum\lambda_{j}d(\gamma(a_{j}),\gamma(b_{j}))|[t,t^{\prime}]\cap[a_{j},b_{ j}]|\leq\max_{j}\lambda_{j}d(\gamma(a_{j}),\gamma(b_{j}))|t-t^{\prime}|\).
**Lemma 2.9** (Characterization of geodesics: See also [3, Section 2.2, pages 56-60]).: _Let \(X\) be a metric space and \(\gamma:[0,1]\to X\) a path. Then (i) \(\gamma\) is a geodesic \(\iff\) (ii) \(d(\gamma(t),\gamma(t^{\prime}))\leq d(\gamma(0),\gamma(1))|t-t^{\prime}|\) for all \(t,t^{\prime}\in[0,1]\), \(\iff\) (iii) \(d(\gamma(t),\gamma(t^{\prime}))=d(\gamma(0),\gamma(1))|t-t^{\prime}|\) for all \(t,t^{\prime}\in[0,1]\)._
Proof.: **(i)\(\Rightarrow\)(ii)**: This is immediate by Lemma 2.8(i).
**(ii)\(\Rightarrow\)(iii)**: By (ii)\(d(\gamma(t),\gamma(t^{\prime}))\leq d(\gamma(0),\gamma(1))|t-t^{\prime}|\) for all \(t,t^{\prime}\in[0,1]\). So, by its definition, \(l(\gamma)=d(\gamma(0),\gamma(1))\). Consequently, \(d(\gamma(0),\gamma(1))\leq l_{P}(\gamma)\leq l(\gamma)=d(\gamma(0),\gamma(1))\) for any finite partition \(P\subset[0,1]\). That is, \(l_{P}(\gamma)=l_{P^{\prime}}(\gamma)\) for any two finite partitions \(P,P^{\prime}\subset[0,1]\). In particular, if \(P:=\{0,t,t^{\prime},1\}\), \(Q\subset[t,t^{\prime}]\) any finite partition, and \(P^{\prime}:=P\cup Q=\{0\}\cup Q\cup\{1\}\), then \(l_{P}(\gamma)=l_{P^{\prime}}(\gamma)\) implies
\[d(\gamma(t),\gamma(t^{\prime}))=l_{Q}(\gamma|_{[t,t^{\prime}]})=l(\gamma|_{[t,t ^{\prime}]})\stackrel{{(s)}}{{=}}d(\gamma(0),\gamma(1))|t-t^{ \prime}|,\text{ for all }t,t^{\prime}\in[0,1],\]
where step (s) is due to Note 2.6 and Lemma 2.4.
**(iii)\(\Rightarrow\)(ii)**: This is again immediate by Lemma 2.8(i).
**Definition 2.10** (Path of minimum length).: Let \(X\) be a metric space, \(x,y\in X\), and \(\mathcal{P}_{x,y}(X)\):=\(\{\)paths in \(X\) from \(x\) to \(y\}\subset\mathcal{C}\big{(}[0,1],X\big{)}\). A path \(\gamma:[0,1]\to X\) is said to **have minimum length** (or called a **path of minimum length**) if \(l(\gamma)=\inf\big{\{}l(\eta)\ |\ \eta\in\mathcal{P}_{\gamma(0),\gamma(1)}(X) \big{\}}\).
**Definition 2.11** (Length space, Rectifiably connected, Minimally connected).: A metric space \((X,d)\) is called a **length space** if \(d(x,y)=\inf\left\{l(\gamma)\ |\ \gamma\in\mathcal{P}_{x,y}(X)\right\}\) for all \(x,y\in X\). Let us call a metric space \(X\)**rectifiably connected** (resp., **minimally connected**) if every two points of \(X\) can be connected by a rectifiable path (resp., a path of minimum length).
**Remark 2.12**.: If \((X,d)\) is rectifiably connected (resp., minimally connected) then \((X,d_{0})\) is a length space (resp., a geodesic space), where \(\ d_{0}(x,y):=\inf\left\{l(\gamma):\gamma\in\mathcal{P}_{x,y}(X)\right\}\).
## 3. **The case of precompact-subset spaces**
**Definition 3.1** (Relation: Left-complete, Right-complete, Complete, Reduced, Reduced complete, Proximal).: Given sets \(A\) and \(B\), a **relation** between \(A\) and \(B\), which is a subset \(\ R\subset A\times B\), is called
1. **left-complete** if "for every \(a\in A\), \(\ |(\{a\}\times B)\cap R|\geq 1\)".
2. **right-complete** if "for every \(b\in B\), \(\ |(A\times\{b\})\cap R|\geq 1\)".
3. **complete** if it is both left-complete and right-complete.
4. **reduced** if "for every \((a,b)\in R\), \(\ |(\{a\}\times B)\cap R|\leq 1\) or \(|(A\times\{b\})\cap R|\leq 1\)".
5. **reduced complete** if complete and "\(\forall(a,b)\in R\), \(\ |(\{a\}\times B)\cap R|=1\) or \(|(A\times\{b\})\cap R|=1\)".
6. \(\lambda\)**-proximal** if "\(\sup_{(a,b)\in R}d(a,b)\leq\lambda d_{H}(A,B)\)", where \(\lambda\geq 1\).
7. **proximal** if it is 1-proximal.
Another relevant property, **dense completeness**, of a relation between subsets of a space is given in Definition 3.10.
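For instance, if \(A=\{a_{1},a_{2}\}\) and \(B=\{b_{1},b_{2}\}\), then \(R=\{(a_{1},b_{1}),(a_{2},b_{2})\}\) is reduced complete, since every \((a,b)\in R\) satisfies \(|(\{a\}\times B)\cap R|=1\), whereas \(R^{\prime}=A\times B\) is complete but not reduced, since \(|(\{a\}\times B)\cap R^{\prime}|=|(A\times\{b\})\cap R^{\prime}|=2\) for every \((a,b)\in R^{\prime}\).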
**Remark 3.2**.: Let \(X\) be a metric space and let \(x\in X\). (i) If \(K\subset X\) is compact, there exists \(k\in K\) such that \(d(x,k)=\operatorname{dist}(x,K)\).
Fix \(\varepsilon>0\) and let \(A\in PCl(X)\). Then \(\operatorname{dist}(x,A)=\operatorname{dist}(x,\widetilde{A})=d(x,a^{\prime})\) for some \(a^{\prime}\in\widetilde{A}\subset\widetilde{X}\) (where we know \(\widetilde{A}\) is compact). So: (ii) If \(A,B\in PCl(X)\) then for each \(a\in A\) there exists \(b^{\prime}=b^{\prime}_{a}\in\widetilde{B}\subset\widetilde{X}\) such that
\[d(a,b^{\prime})=\operatorname{dist}(a,\widetilde{B})=\operatorname{dist}(a,B) \leq d_{H}(A,B)=d_{H}(\widetilde{A},\widetilde{B}).\]
In general, if \(A,B\in BCl(X)\) then for any \(a\in A\) there exists \(b_{\varepsilon}=b_{\varepsilon,a}\in B\) such that
\[d(a,b_{\varepsilon})<\operatorname{dist}(a,B)+\varepsilon\leq d_{H}(A,B)+ \varepsilon\stackrel{{(s)}}{{=}}(1+\varepsilon_{1})d_{H}(A,B),\]
where step (s) assumes that \(d_{H}(A,B)\neq 0\) and \(\varepsilon_{1}=\varepsilon/d_{H}(A,B)\). In other words, if \(\lambda>1\) then for any \(a\in A\) there exists \(b_{\lambda}=b_{\lambda,a}\in B\) such that \(d(a,b_{\lambda})\leq\lambda d_{H}(A,B)\). So: (iii) If \(\lambda>1\) we get a complete relation
\[R_{\lambda}=\{(a,b)\in A\times B:d(a,b)\leq\lambda d_{H}(A,B)\}\subset A\times B.\]
**Definition 3.3**.: If \(Z\) is a space and \(\mathcal{A}\subset\mathcal{P}(Z)\), we denote by \(\bigcup\mathcal{A}\) the union \(\bigcup_{A\in\mathcal{A}}A\). We call a space (resp., metric space) \(X\)**sequentially compact** (resp., **sequentially precompact**) if every sequence in \(X\) has a convergent (resp., Cauchy) subsequence. If \(X\) is a metric space, \(A,B\subset X\), and \(\varepsilon>0\), then we call \(A\) an \(\varepsilon\)**-net** of \(B\) if \(B\subset N_{\varepsilon}(A)\). A metric space is **precompact** (or **totally bounded**) if it has a finite \(\varepsilon\)-net for every \(\varepsilon>0\).
For proofs of the following basic facts, see [13, Theorems 13.76 and 13.97] and [1, Appendices A.1.3 and A.1.6].
**Remark 3.4**.: (i) A metric space is compact (resp., precompact) \(\iff\) sequentially compact (resp., sequentially precompact) \(\iff\) complete and precompact.
(ii) Let \(X,Y\) be metric spaces and \(f:E\subset X\to Y\) a map. If \(f\) is uniformly continuous, \(E\) is dense in \(X\), and \(Y\) is complete, then \(f\) extends to a unique uniformly continuous map \(F:X\to Y\). This result remains true if "uniformly continuous" is replaced with "\(L\)-Lipschitz".
**Lemma 3.5** (Bounded union).: _Let \(X\) be a metric space and \(X\subset\mathcal{J}\subset BCl(X)\). If \(\mathcal{C}\subset\mathcal{J}\) is bounded then \(\bigcup\mathcal{C}\) is bounded in \(X\)._
Proof.: Fix \(x\in X\). Since \(\mathcal{C}\subset\mathcal{J}\) is bounded, there exists \(R>0\) such that \(\mathcal{C}\subset N_{R}^{\mathcal{J}}(\{x\})=\{C\in\mathcal{J}:d_{H}(\{x\},C) <R\}=\{C\in\mathcal{J}:C\subset N_{R}^{X}(x)\}=\mathcal{J}\cap BCl\big{(}N_{R }^{X}(x)\big{)}\), i.e., \(C\subset N_{R}^{X}(x)\) for all \(C\in\mathcal{C}\), and so \(\bigcup\mathcal{C}\subset N_{R}^{X}(x)\).
**Lemma 3.6** (Precompact union, Compact union).: _Let \(X\) be a metric space._
_(i) If_ \(\mathcal{C}\subset PCl(X)\) _is compact, then_ \(K=\bigcup\mathcal{C}\subset X\) _is precompact (hence_ \(cl_{X}(K)\in PCl(X)\)_)._
_(ii) If_ \(\mathcal{C}\subset K(X)\) _is compact, then_ \(K=\bigcup\mathcal{C}\subset X\) _is compact._
Proof.: Pick a sequence \(\{x_{k}\}\subset K\). Then each \(x_{k}\in C_{k}\) for some \(C_{k}\in\mathcal{C}\). Since \(\mathcal{C}\) is compact in \(PCl(X)\), \(\{C_{k}\}\) has a subsequence \(\{C_{f(k)}\}\) that converges in \(\mathcal{C}\). Let \(C_{f(k)}\to C_{0}\in\mathcal{C}\). Since \(C_{0}\) is precompact in \(X\), by Remark 3.2(i), there exist(s) \(c_{k}\in\widetilde{C_{0}}\subset\widetilde{X}\) such that \(d(x_{f(k)},c_{k})=\operatorname{dist}(x_{f(k)},C_{0})\leq d_{H}(C_{f(k)},C_{0})\to 0\).
_Proof of (i):_ Since \(\widetilde{C_{0}}\subset\widetilde{X}\) is compact, \(\{c_{k}\}\subset\widetilde{C_{0}}\) has a convergent (hence Cauchy) subsequence \(c_{g(k)}\) (i.e., \(d(c_{g(k)},c_{g(k^{\prime})})\to 0\)). Therefore \(\{x_{k}\}\subset K\) has a Cauchy subsequence \(\{x_{f\circ g(k)}\}\), since
\[d(x_{f\circ g(k)},x_{f\circ g(k^{\prime})})\leq d(x_{f\circ g(k)},c_{g(k)})+d( c_{g(k)},c_{g(k^{\prime})})+d(c_{g(k^{\prime})},x_{f\circ g(k^{\prime})})\to 0.\]
This shows that \(K\) is precompact in \(X\). Moreover, the closure of a precompact set is precompact.
_Proof of (ii):_ Since \(C_{0}\subset X\) is compact, \(\{c_{k}\}\subset\widetilde{C_{0}}=C_{0}\) has a convergent subsequence \(c_{g(k)}\), i.e., \(c_{g(k)}\to c_{0}\in C_{0}\). Therefore \(\{x_{k}\}\subset K\) has a convergent subsequence \(\{x_{f\circ g(k)}\}\) (hence \(K\) is compact in \(X\)), since \(d(x_{f\circ g(k)},c_{0})\leq d(x_{f\circ g(k)},c_{g(k)})+d(c_{g(k)},c_{0})\to 0\).
**Lemma 3.7** (Subsequence: pointwise Cauchy, pointwise convergent).: _Let \(T\) be a countable set._
_(i) If_ \(K\) _is a sequentially precompact space (e.g., a precompact metric space), then any given sequence of maps_ \(f_{k}:T\to K\) _has a pointwise Cauchy subsequence_ \(f_{s(k)}:T\to K\)_._
_(ii) If_ \(K\) _is a sequentially compact space (e.g., a compact metric space), then any given sequence of maps_ \(f_{k}:T\to K\) _has a pointwise convergent subsequence_ \(f_{s(k)}:T\to K\)_._
Proof.: (i) Since \(K\) is sequentially precompact, for each \(t\in T\), the sequence \(\{f_{k}(t)\}\) has a Cauchy subsequence. Consider an enumeration \(T=\{t_{1},t_{2},\cdots\}\). Then \(\{f_{k}(t_{1})\}\) has a Cauchy subsequence \(\big{\{}f_{s_{1}(k)}(t_{1})\big{\}}\), i.e., there exists a subsequence \(\big{\{}f_{s_{1}(k)}\big{\}}\subset\{f_{k}\}\) such that \(\big{\{}f_{s_{1}(k)}(t_{1})\big{\}}\) is Cauchy. Similarly, because \(\big{\{}f_{s_{1}(k)}(t_{2})\big{\}}\) has a Cauchy subsequence, there exists a further subsequence \(\big{\{}f_{s_{2}(k)}\big{\}}\subset\big{\{}f_{s_{1}(k)}\big{\}}\subset\{f_{k}\}\) such that \(f_{s_{2}(k)}(t_{2})\) is Cauchy. Continuing this way, we get subsequences \(S_{1}\supset S_{2}\supset S_{3}\supset\cdots\) of \(\{f_{k}\}\) which can be represented in an array as follows.
\[S_{1} =\big{\{}f_{s_{1}(k)}\big{\}}:\ f_{s_{1}(1)}\ f_{s_{1}(2)}\ f_{s_{ 1}(3)}\ f_{s_{1}(4)}\ \cdots\ \big{(}\text{Pointwise Cauchy on }\{t_{1}\}\big{)}\] \[S_{2} =\big{\{}f_{s_{2}(k)}\big{\}}:\ f_{s_{2}(1)}\ f_{s_{2}(2)}\ f_{s_{ 2}(3)}\ f_{s_{2}(4)}\ \cdots\ \big{(}\text{Pointwise Cauchy on }\{t_{1},t_{2}\}\big{)}\] \[S_{3} =\big{\{}f_{s_{3}(k)}\big{\}}:\ f_{s_{3}(1)}\ f_{s_{3}(2)}\ f_{s_{ 3}(3)}\ f_{s_{3}(4)}\ \cdots\ \big{(}\text{Pointwise Cauchy on }\{t_{1},t_{2},t_{3}\}\big{)}\] \[S_{4} =\big{\{}f_{s_{4}(k)}\big{\}}:\ f_{s_{4}(1)}\ f_{s_{4}(2)}\ f_{s_{ 4}(3)}\ f_{s_{4}(4)}\ \cdots\ \big{(}\text{Pointwise Cauchy on }\{t_{1},t_{2},t_{3},t_{4}\}\big{)}\] \[\ \ \ \ \ \vdots\ \ \vdots\ \ \vdots\ \ \vdots\]
Consider the diagonal sequence \(S=\big{\{}f_{s_{k}(k)}\big{\}}:\ f_{s_{1}(1)}\ f_{s_{2}(2)}\ f_{s_{3}(3)}\ f_{s_{4}(4)}\ \cdots\.\) Then for each \(i=1,2,\cdots\), the sequence \(S\cap S_{i}=S\setminus\big{\{}f_{s_{1}(1)},...,f_{s_{i-1}(i-1)}\big{\}}\ \subset\ S_{i}\) (and hence \(S\) also) is pointwise
Cauchy on \(\{t_{1},...,t_{i}\}\). Thus, taking \(i\to\infty\), we see that \(S=\bigcup_{i}(S\cap S_{i})\) is pointwise Cauchy on \(\{t_{1},t_{2},\cdots\}=T\). Define \(f_{s(k)}:=f_{s_{k}(k)}\).
(ii) In the proof of (i) above, replace "_Cauchy_" with "_convergent_" and replace "_is Cauchy_" with "_converges_".
**Theorem 3.8** (Components of a quasigeodesic/rectifiable in \(K(X)\)).: _Let \(X\) be a metric space, \(\lambda\geq 1\), and \(\mathcal{J}\subset K(X)\). If \(\gamma:[0,1]\to\mathcal{J}\) is a \(\lambda\)-quasigeodesic (resp., a rectifiable path) in \(\mathcal{J}\), then every \(a\in\gamma(0)\) is connected to some \(b\in\gamma(1)\) by a \(\lambda d_{H}(\gamma(0),\gamma(1))\)-Lipschitz (resp., \(\lambda l(\gamma)\)-Lipschitz) path \(\gamma_{(a,b)}:[0,1]\to X\) such that \(\gamma_{(a,b)}(t)\in\gamma(t)\) for all \(t\)._
Proof.: Let \(\gamma:[0,1]\to\mathcal{J}\) be a \(\lambda\)-quasigeodesic from \(A\) to \(B\), and let \(\rho:=d_{H}(A,B)\) (resp., \(\rho=l(\gamma)\)). Then we have \(\gamma(0)=A\), \(\gamma(1)=B\), \(\gamma(t)\in\mathcal{J}\) for all \(t\in[0,1]\), and
\[d_{H}(\gamma(t),\gamma(t^{\prime}))=\max\left\{\sup_{u\in\gamma(t)}\inf_{u^{ \prime}\in\gamma(t^{\prime})}d(u,u^{\prime}),\sup_{u^{\prime}\in\gamma(t^{ \prime})}\inf_{u\in\gamma(t)}d(u,u^{\prime})\right\}\leq\lambda\rho|t-t^{ \prime}|,\ \forall\ t,t^{\prime}\in[0,1].\]
For fixed \(t,t^{\prime}\in[0,1]\), this equation says for every \(u\in\gamma(t)\) there exists \(u^{\prime}\in\gamma(t^{\prime})\) such that
\[d(u,u^{\prime})\leq d_{H}(\gamma(t),\gamma(t^{\prime}))\leq\lambda\rho|t-t^{ \prime}|,\ \text{and vice versa}. \tag{1}\]
Let \(D_{k}:=\{0=t_{0}<t_{1}<\cdots<t_{k}=1\}\), \(k\geq 1\), be partitions of \([0,1]\) such that \(D_{k}\subset D_{k+1}\) and \(\bigcup D_{k}\) is dense in \([0,1]\) (e.g., \(D_{k}=\{l/2^{k}:0\leq l\leq 2^{k}\}\)). Fix \(k\geq 1\). Then for each \(a\in A\), we can define a map \(g_{k}:D_{k}\to X\) as follows. Let \(g_{k}(t_{0})=g_{k}(0):=a\in A=\gamma(0)\). Next, pick \(g_{k}(t_{1})\in\gamma(t_{1})\) such that \(d(g_{k}(t_{0}),g_{k}(t_{1}))\leq\lambda\rho|t_{0}-t_{1}|\). For the general step, given \(g_{k}(t_{i})\in\gamma(t_{i})\), pick \(g_{k}(t_{i+1})\in\gamma(t_{i+1})\) such that \(d(g_{k}(t_{i}),g_{k}(t_{i+1}))\leq\lambda\rho|t_{i}-t_{i+1}|\). This gives a map \(g_{k}:D_{k}\to X\) from \(a\in A\) to \(b_{k}=g_{k}(1)\in B\) satisfying
\[d\left(g_{k}(t),g_{k}(t^{\prime})\right)\leq\lambda\rho|t-t^{\prime}|,\ \text{for all}\ t,t^{\prime}\in D_{k}. \tag{2}\]
Consider the dense set \(D:=\bigcup_{k=1}^{\infty}D_{k}\) in \([0,1]\). For each \(k\), let \(f_{k}:D\to X\) be any extension of \(g_{k}:D_{k}\to X\) such that \(f_{k}(t)\in\gamma(t)\) for all \(t\in D\). Then \(f_{k}(D)\subset K:=\bigcup_{t\in[0,1]}\gamma(t)\). Since \(K\subset X\) is compact (Lemma 3.6(ii)) and \(D\) is countable, it follows by Lemma 3.7 that \(\{f_{k}\}\) has a pointwise convergent subsequence \(\big{\{}f_{s(k)}\big{\}}\), where we know \(f_{s(k)}\) is \(\lambda\rho\)-Lipschitz on \(D_{s(k)}\). Let \(f_{s(k)}\to f\) pointwise in \(X\). Then \(f:D\to X\) is \(\lambda\rho\)-Lipschitz and \(f(t)\in\gamma(t)\): Indeed, given \(t,t^{\prime}\in D\), we can choose \(N\) such that \(t,t^{\prime}\in D_{s(k)}\) for all \(k\geq N\), and so
\[\begin{split}& d\left(f_{s(k)}(t),f_{s(k)}(t^{\prime})\right)\leq \lambda\rho|t-t^{\prime}|,\ \text{for all}\ t,t^{\prime}\in D=\bigcup_{k}D_{s(k)},\ \text{and}\\ &\text{dist}(f(t),\gamma(t))\leq d\left(f(t),f_{s(k)}(t)\right)+ \text{dist}\left(f_{s(k)}(t),\gamma(t)\right)=d\left(f(t),f_{s(k)}(t)\right) \to 0,\ \text{for all}\ t\in D,\end{split} \tag{3}\]
\[\begin{split}&\Rightarrow\ f(t)\in\gamma(t),\ \text{for all}\ t\in D.\ \text{(Recall that each}\ \gamma(t)\ \text{is closed in}\ X.)\end{split} \tag{4}\]
Since \(f\) is \(\lambda\rho\)-Lipschitz, and \(D\) is dense in \([0,1]\), \(f\) extends (by Remark 3.4) to a \(\lambda\rho\)-Lipschitz map \(c:[0,1]\to X\). It remains to show that \(c(t)\in\gamma(t)\) for all \(t\in[0,1]\).
Fix \(t\in[0,1]\). Since \(D\) is dense in \([0,1]\), pick \(t_{j}\in D\) such that \(t_{j}\to t\). Then
\[\begin{split}&\text{dist}(c(t),\gamma(t))\leq d(c(t),c(t_{j}))+ \text{dist}(c(t_{j}),\gamma(t))=d(c(t),c(t_{j}))+\text{dist}(f(t_{j}),\gamma(t)) \\ &\leq d(c(t),c(t_{j}))+d_{H}\big{(}\gamma(t_{j}),\gamma(t)\big{)} \leq d(c(t),c(t_{j}))+\lambda\rho|t_{j}-t|\to 0,\\ &\Rightarrow\ c(t)\in\gamma(t).\ \text{(Recall that each}\ \gamma(t)\ \text{is closed in}\ X.)\end{split}\]
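The chaining step in the above proof is easy to experiment with numerically. The following Python sketch (a numerical illustration only, not part of the formal argument) applies it to a toy quasigeodesic of two-point subsets of the plane, where both points simply translate to the right; the particular path, the dyadic depth, and the tolerances are assumptions made for the example, and picking a nearest point of the next set is one valid way to make the selection required in the proof.

```python
import itertools
import math

def hausdorff(A, B):
    """Hausdorff distance between nonempty finite subsets of the plane."""
    d = math.dist
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

# A toy quasigeodesic of two-point sets: both points translate to the right.
def gamma(t):
    return [(t, 0.0), (3.0 + t, 1.0)]

rho = hausdorff(gamma(0.0), gamma(1.0))      # Hausdorff distance of the endpoints

# Chain over a dyadic partition D_k: from the current point, move to a nearest
# point of the next set (its distance is at most d_H of the consecutive sets).
k = 6
ts = [i / 2 ** k for i in range(2 ** k + 1)]
path = [gamma(0.0)[0]]                        # start at some a in gamma(0)
for t_prev, t_next in zip(ts, ts[1:]):
    cur = path[-1]
    nxt = min(gamma(t_next), key=lambda q: math.dist(cur, q))
    assert math.dist(cur, nxt) <= hausdorff(gamma(t_prev), gamma(t_next)) + 1e-12
    path.append(nxt)

# The chained points satisfy the Lipschitz-type bound of the construction.
ok = all(math.dist(path[i], path[j]) <= rho * abs(ts[i] - ts[j]) + 1e-9
         for i, j in itertools.combinations(range(len(ts)), 2))
print("endpoint d_H =", rho, "| Lipschitz bound holds on the sampled partition:", ok)
```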
**Remark 3.9**.: In Theorem 3.8 we have mentioned "components of a quasigeodesic/rectifiable in \(K(X)\)". To make this precise, we will show in Theorem 3.17 that the Lipschitz paths in \(X\) obtained using Theorem 3.8 can be used to give a pointwise expression for the quasigeodesic in a stable covering subspace \(\mathcal{J}\subset PCl(X)\).
**Definition 3.10** (Densely complete relation, Subset connectivity).: Let \(X\) be a space and \(A,B\subset X\). A relation \(R\subset A\times B\) is **densely complete** if \(A_{1}:=\{a\in A:|(\{a\}\times B)\cap R|\geq 1\}\) is dense in \(A\) and \(B_{1}:=\{b\in B:|(A\times\{b\})\cap R|\geq 1\}\) is dense in \(B\).
Let \(X\) be a metric space, \(\mathcal{J}\subset BCl(X)\), and \(A,B\in\mathcal{J}\). We say "\(A\) is \(L\)**-Lipschitz \(X\)-connected** (resp., **rectifiably \(X\)-connected**) to \(B\) in \(\mathcal{J}\)", written "\(A\sim_{(L,X)}B\) in \(\mathcal{J}\)", if: There exists (i) a densely complete relation \(R\subset A\times B\) and (ii) a collection of \(L\)-Lipschitz (resp., rectifiable) paths \(\big{\{}\gamma_{(a,b)}:(a,b)\in R\big{\}}\) in \(X\), with \(\gamma_{(a,b)}\) a path from \(a\) to \(b\), such that (iii) \(cl_{X}\big{\{}\gamma_{(a,b)}(t):(a,b)\in R\big{\}}\in\mathcal{J}\), for each \(t\in[0,1]\).
Note that "\(A\) is \(L\)**-Lipschitz \(X\)-connected** (resp., **rectifiably \(X\)-connected**) to \(B\) in \(\mathcal{J}\)" \(\iff\) there exist a relation \(R\subset A\times B\) and \(L\)-Lipschitz paths \(\{\gamma_{r}:r\in R\}\) in \(X\) giving a path from \(A\) to \(B\) by the map
\[\Gamma:[0,1]\to\mathcal{J},\ t\mapsto cl_{X}\{\gamma_{r}(t):r\in R\}.\]
**Note 3.11** (Closure of union of closures).: Let \(X\) be a space and \(\{A_{\alpha}\}\) a collection of subspaces of \(X\). Then
\[cl_{X}\big{(}\bigcup cl_{X}(A_{\alpha})\big{)}=cl_{X}\big{(}\bigcup A_{\alpha }\big{)},\]
since \(cl_{X}\big{(}\bigcup cl_{X}(A_{\alpha})\big{)}\subset cl_{X}\big{(}cl_{X} \big{(}\bigcup A_{\alpha}\big{)}\big{)}=cl_{X}(\bigcup A_{\alpha})\subset cl _{X}\big{(}\bigcup cl_{X}(A_{\alpha})\big{)}\), where the union is over \(\alpha\).
**Remark 3.12** (Approximation of quasigeodesics/rectifiables in \(BCl(X)\)).: If permissible, write \(A,B\in BCl(X)\) as (strictly) increasing unions \(A=\bigcup^{\uparrow}A_{n}\) and \(B=\bigcup^{\uparrow}B_{n}\) for \(A_{n},B_{n}\in PCl(X)\), in which case
\[d_{H}(A_{n},A)=\max\{\sup_{a^{\prime}\in A_{n}}\operatorname{dist}(a^{\prime},A),\sup_{a\in A}\operatorname{dist}(A_{n},a)\}=\sup_{a\in A}\operatorname{ dist}(A_{n},a)\to 0\]
as a (strictly) decreasing and lower-bounded real sequence. Then, when possible, consider a geodesic \(\gamma_{n}:[0,1]\to PCl(X)\subset BCl(X)\) between \(A_{n}\) and \(B_{n}\) to approximate a geodesic between \(A\) and \(B\). Note that \(A_{n}\cap B_{n^{\prime}}\subset A_{\max(n,n^{\prime})}\cap B_{\max(n,n^{ \prime})}\) and \(|d_{H}(A,B)-d_{H}(A_{n},B_{n})|\leq d_{H}(A,A_{n})+d_{H}(B,B_{n})\to 0\), and so
\[A\cap B=\bigcup_{n,n^{\prime}}A_{n}\cap B_{n^{\prime}}=\bigcup_{n}A_{n}\cap B _{n}\text{ and }d_{H}(A,B)=\lim_{n}d_{H}(A_{n},B_{n}).\]
**Definition 3.13** (Locally Lipschitz map, Lipschitz neighborhood of a point).: Fix \(c\geq 0\). A map \(f:X\to Y\) is **locally \(c\)-Lipschitz** if for each \(x\in X\) there exists a neighborhood \(N_{r_{x}}(x)\), \(r_{x}=r_{x,f}>0\), such that
\[d(f(x),f(z))\leq cd(x,z),\text{ for all }z\in N_{r_{x}}(x).\]
We will call \(N_{r_{x}}(x)\) a **Lipschitz neighborhood** of \(x\) with respect to \(f\).
**Lemma 3.14** (Gluing lemma).: _Fix \(\lambda\geq 1\). Let \(X\) be a \(\lambda\)-quasiconvex space. If a map \(f:X\to Y\) is locally \(c\)-Lipschitz, then it is \(\lambda c\)-Lipschitz. (Conversely, we know a \(\lambda c\)-Lipschitz map is locally \(\lambda c\)-Lipschitz.)_
Proof.: Fix \(x,x^{\prime}\in X\). Let \(\gamma:[0,1]\to X\) be a \(\lambda\)-quasigeodesic from \(x\) to \(x^{\prime}\). Then \(f\circ\gamma:[0,1]\stackrel{{\gamma}}{{\longrightarrow}}X\stackrel{{ f}}{{\longrightarrow}}Y\) is locally \(\lambda cd(x,x^{\prime})\)-Lipschitz on \([0,1]\). Indeed, for any \(t\in[0,1]\) there is \(N_{r_{\gamma(t)}}\big{(}\gamma(t)\big{)}\), \(r_{\gamma(t)}>0\), such that
\[d\big{(}f(\gamma(t)),f(z)\big{)}\leq cd\big{(}\gamma(t),z\big{)}\text{ for all }z\in N_{r_{\gamma(t)}}\big{(}\gamma(t)\big{)},\]
and so we get the neighborhood \(U=\gamma^{-1}\big{(}N_{r_{\gamma(t)}}\big{(}\gamma(t)\big{)}\big{)}\) of \(t\) in \([0,1]\) satisfying
\[d\big{(}f(\gamma(t)),f(\gamma(s))\big{)}\leq cd\big{(}\gamma(t),\gamma(s)\big{)} \leq\lambda cd(x,x^{\prime})|t-s|,\text{ for all }s\in U.\]
Since \([0,1]\) is compact, for any \(t,t^{\prime}\in[0,1]\) we can choose a partition \(P=\{t=t_{0}<t_{1}<\cdots<t_{k}=t^{\prime}\}\) such that Lipschitz neighborhoods of the \(t_{i}\)'s cover \([t,t^{\prime}]\). So, for each \(i\in\{1,...,k\}\) there is \(s_{i}\in[t_{i-1},t_{i}]\) satisfying \(d\big{(}f(\gamma(t_{i-1})),f(\gamma(s_{i}))\big{)}\leq\lambda cd(x,x^{\prime})|t_{i-1}-s_{i}|\) and \(d\big{(}f(\gamma(s_{i})),f(\gamma(t_{i}))\big{)}\leq\lambda cd(x,x^{\prime})|s_{i}-t_{i}|\). By the triangle inequality, \(d\big{(}f(\gamma(t_{i-1})),f(\gamma(t_{i}))\big{)}\leq\lambda cd(x,x^{\prime})(|t_{i-1}-s_{i}|+|s_{i}-t_{i}|)=\lambda cd(x,x^{\prime})|t_{i-1}-t_{i}|\), and so
\[d\big{(}f(\gamma(t)),f(\gamma(t^{\prime}))\big{)}\leq\sum_{i=1}^{k}d\big{(}f( \gamma(t_{i-1})),f(\gamma(t_{i}))\big{)}\leq\lambda cd(x,x^{\prime})\sum_{i=1} ^{k}|t_{i-1}-t_{i}|=\lambda cd(x,x^{\prime})|t-t^{\prime}|.\]
This shows \(f\circ\gamma\) is \(\lambda cd(x,x^{\prime})\)-Lipschitz. Hence, \(d\big{(}f(x),f(x^{\prime})\big{)}\leq\lambda cd(x,x^{\prime})\).
**Lemma 3.15**.: _For any \(A,B,C,D\in BCl(X)\), we have \(\ d_{H}(A\cup B,C\cup D)\leq\max\big{(}d_{H}(A,C),d_{H}(B,D)\big{)}\)._
Proof.: Let \(\rho:=\max\big{(}d_{H}(A,C),d_{H}(B,D)\big{)}\), and pick any \(\varepsilon^{\prime}>\rho\). Then because \(d_{H}(A,C)\leq\rho<\varepsilon^{\prime}\) and \(d_{H}(B,D)\leq\rho<\varepsilon^{\prime}\), we have the containments \(A\subset C_{\varepsilon^{\prime}}\), \(C\subset A_{\varepsilon^{\prime}}\), \(B\subset D_{\varepsilon^{\prime}}\), \(D\subset B_{\varepsilon^{\prime}}\), which imply \(A\cup B\subset C_{\varepsilon^{\prime}}\cup D_{\varepsilon^{\prime}}=(C\cup D )_{\varepsilon^{\prime}}\) and \(C\cup D\subset A_{\varepsilon^{\prime}}\cup B_{\varepsilon^{\prime}}=(A\cup B )_{\varepsilon^{\prime}}\). It follows that \(d_{H}(A\cup B,C\cup D):=\inf\left\{\varepsilon:A\cup B\subset\big{(}C\cup D \right)_{\varepsilon},\ C\cup D\subset(A\cup B)_{\varepsilon}\right\}\leq\varepsilon^ {\prime}\), for all \(\varepsilon^{\prime}>\rho\). Hence, \(d_{H}(A\cup B,C\cup D)\leq\rho\).
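The inequality of Lemma 3.15 is also easy to sanity-check numerically for finite point sets, where the Hausdorff distance is a finite max-min computation. The following Python sketch is only such a sanity check; the random planar point sets, the number of trials, and the tolerance are arbitrary choices.

```python
import math
import random

def hausdorff(A, B):
    """Hausdorff distance between nonempty finite subsets of the plane."""
    d = math.dist
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

random.seed(0)
def rand_set(k):
    return [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(k)]

# Randomized check of d_H(A u B, C u D) <= max(d_H(A, C), d_H(B, D)).
for _ in range(1000):
    A, B, C, D = (rand_set(random.randint(1, 5)) for _ in range(4))
    lhs = hausdorff(A + B, C + D)
    rhs = max(hausdorff(A, C), hausdorff(B, D))
    assert lhs <= rhs + 1e-12, (lhs, rhs)
print("Lemma 3.15 inequality verified on 1000 random instances.")
```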
**Lemma 3.16** (Zorn's lemma: See [11]).: _In a nonempty poset, if every chain has an upper bound then the poset contains a maximal element._
**Theorem 3.17** (Representation of quasigeodesics/rectifiables in K(X)).: _Fix \(\lambda\geq 1\). Let \(X\) be a metric space, \(\mathcal{J}\subset K(X)\) a stable covering subspace, and \(A,B\in\mathcal{J}\). Suppose \(\gamma:[0,1]\to\mathcal{J}\) is a \(\lambda\)-quasigeodesic (resp., rectifiable path) from \(A\) to \(B\) and let \(\rho=d_{H}(A,B)\) (resp., \(\rho=l(\gamma)\)). Then there exists a densely complete relation \(R\subset A\times B\) and a maximal collection of \(\lambda\rho\)-Lipschitz paths \(\{\gamma_{r}:[0,1]\to X\}_{r\in R}\) such that \(\ \gamma(t)=cl_{X}\{\gamma_{r}(t):r\in R\}\), for all \(t\)._
Proof.: Let \(\mathcal{S}{=}\{(\eta,R)\mid\eta:[0,1]\to\mathcal{J}\) is \(\lambda\rho\)-lipschitz, \(\eta(t)\subset\gamma(t)\), \(\eta(t)=cl_{X}\{\eta_{r}(t):r\in R\}\) for \(\lambda\rho\)-Lipschitz paths \(\eta_{r}\), \(R\subset\eta(0)\times\eta(1)\) is densely complete\(\}\) be the poset with ordering "\((\eta_{1},R_{1})\leq(\eta_{2},R_{2})\) if \(\eta_{1}(t)\subset\eta_{2}(t)\)\(\forall t\) and \(R_{1}\subset R_{2}\)". Then \(\mathcal{S}\) is nonempty by Theorem 3.8. Let \(\{(\eta_{\alpha},R_{\alpha})\}\) be a chain in \(\mathcal{S}\). With \(\eta:[0,1]\to\mathcal{J},\ t\mapsto cl_{X}\big{(}\bigcup\eta_{\alpha}(t)\big{)}\) and \(R=\bigcup R_{\alpha}\subset cl_{X}\big{(}\bigcup\eta_{\alpha}(0)\big{)}\times cl _{X}\big{(}\bigcup\eta_{\alpha}(1)\big{)}\), the pair \((\eta,R)\) is an upper bound of \(\{(\eta_{\alpha},R_{\alpha})\}\) in \(\mathcal{S}\). By Zorn's lemma, \(\mathcal{S}\) has a maximal element \((\eta^{\prime},R^{\prime})\).
Suppose there is \(s\in[0,1]\) such that \(\eta^{\prime}(s)\neq\gamma(s)\). Pick \(x_{s}\in\gamma(s)\backslash\eta^{\prime}(s)\). Using Theorem 3.8 and Lemma 3.14, construct a \(\lambda\rho\)-Lipschitz path \(\eta_{(a_{s},b_{s})}:[0,1]\to X\), \(\eta_{(a_{s},b_{s})}(t)\in\gamma(t)\), from some \(a_{s}\in A\) through \(x_{s}\) to some \(b_{s}\in B\). Let \(\eta^{\prime\prime}(t)=\eta^{\prime}(t)\cup\{\eta_{(a_{s},b_{s})}(t)\}\) and \(R^{\prime\prime}=R^{\prime}\cup\{(a_{s},b_{s})\}\). Then \((\eta^{\prime},R^{\prime})<(\eta^{\prime\prime},R^{\prime\prime})\in\mathcal{S}\), which is a contradiction.
**Lemma 3.18** (Sufficient condition for quasigeodesics in \(BCl(X)\)).: _Fix \(\lambda\geq 1\). Let \(X\) be a metric space, \(\mathcal{J}\subset BCl(X)\), and \(A,B\in\mathcal{J}\). Suppose \(A\) is \(\lambda d_{H}(A,B)\)-Lipschitz \(X\)-connected to \(B\) in \(\mathcal{J}\). Then there exists a \(\lambda\)-quasigeodesic from \(A\) to \(B\) in \(\mathcal{J}\)._
_(**Note**: This result also holds for \(\mathcal{J}\subset\mathcal{H}(X;C)\), \(C\in Cl(X)\), as in [9, Theorem 2.1 and Corollary 2.2].)_
Proof.: Consider a densely complete relation \(R\subset A\times B\) and a collection of \(\lambda d_{H}(A,B)\)-Lipschitz paths
\[\big{\{}\gamma_{(a,b)}:[0,1]\to X,\ \gamma_{(a,b)}(0)=a,\gamma_{(a,b)}(1)=b \big{\}}_{(a,b)\in R}\]
in \(X\) such that \(cl_{X}\big{\{}\gamma_{(a,b)}(t):(a,b)\in R\big{\}}\in\mathcal{J}\) for all \(t\in[0,1]\). Then the map \(\Gamma:[0,1]\to\mathcal{J}\) given by \(\Gamma(t):=cl_{X}\big{\{}\gamma_{(a,b)}(t):(a,b)\in R\big{\}}\) is a \(\lambda\)-quasigeodesic in \(\mathcal{J}\) from \(A\) to \(B\), since \(\Gamma(0)=A\), \(\Gamma(1)=B\), and
\[d_{H}\big{(}\Gamma(t),\Gamma(t^{\prime})\big{)}=\max\left\{\sup_{(a,b )\in R}\inf_{(a^{\prime},b^{\prime})\in R}d\left(\gamma_{(a,b)}(t),\gamma_{(a^{ \prime},b^{\prime})}(t^{\prime})\right),\sup_{(a^{\prime},b^{\prime})\in R} \inf_{(a,b)\in R}d\left(\gamma_{(a,b)}(t),\gamma_{(a^{\prime},b^{\prime})}(t^{ \prime})\right)\right\}\] \[\leq\sup_{(a,b)\in R}d\left(\gamma_{(a,b)}(t),\gamma_{(a,b)}(t^{ \prime})\right)\leq\lambda d_{H}(A,B)|t-t^{\prime}|,\ \text{for all }t,t^{\prime}\in[0,1].\qed\]
The following result (which uses Remark 3.2) is related to [9, Corollary 2.2] and [10, Theorem 3.5, page 14].
**Corollary 3.19**.: _(i) Fix \(\lambda\geq 1\). If \(X\) is a \(\lambda\)-quasiconvex space then \(FS(X)\) is a \(\lambda\)-quasiconvex space. (ii) Fix \(\lambda>1\). If \(X\) is a \(\lambda\)-quasiconvex space then \(BCl(X)\) is a \(\lambda^{2}\)-quasiconvex space. (iii) Fix \(\lambda\geq 1\). If \(X\) is a proper \(\lambda\)-quasiconvex space then \(BCl(X)=K(X)\) is a \(\lambda\)-quasiconvex space._
Proof.: (ii): Assume \(X\) is \(\lambda\)-quasiconvex and let \(A,B\in\mathcal{J}=BCl(X)\). By the definition of \(d_{H}\) (see Remark 3.2), the relation \(R=\{(a,b)\in A\times B:d(a,b)\leq\lambda d_{H}(A,B)\}\) is complete. Therefore, by \(\lambda\)-quasiconvexity of \(X\), we have a collection of \(\lambda^{2}d_{H}(A,B)\)-Lipschitz paths \(\{\gamma_{(a,b)}:(a,b)\in R\}\), with \(\gamma_{(a,b)}\) a path in \(X\) from \(a\) to \(b\). Since \(cl_{X}\{\gamma_{r}(t):r\in R\}\in\mathcal{J}\) for all \(t\), it follows that \(A\) is \(\lambda^{2}d_{H}(A,B)\)-Lipschitz \(X\)-connected to \(B\) in \(\mathcal{J}\).
(i) and (iii): If \(A,B\in\mathcal{J}=FS_{n}(X)\) or \(A,B\in\mathcal{J}=K(X)\) then the relation \(R=\{(a,b)\in A\times B:d(a,b)\leq d_{H}(A,B)\}\) is complete. Therefore, by \(\lambda\)-quasiconvexity of \(X\), we have a collection of \(\lambda d_{H}(A,B)\)-Lipschitz paths \(\{\gamma_{(a,b)}:(a,b)\in R\}\), with \(\gamma_{(a,b)}\) a path in \(X\) from \(a\) to \(b\). Since \(cl_{X}\{\gamma_{r}(t):r\in R\}\in\mathcal{J}\) for all \(t\), it follows that \(A\) is \(\lambda d_{H}(A,B)\)-Lipschitz \(X\)-connected to \(B\) in \(\mathcal{J}\).
**Theorem 3.20** (Criterion for quasigeodesics/rectifiables in \(PCl(X)\)).: _Fix \(\lambda\geq 1\). Let \(X\) be a metric space and \(\mathcal{J}\subset PCl(X)\) a stable covering subspace. Let \(\widetilde{\mathcal{J}}:=\big{\{}\{x\}:x\in\widetilde{X}\big{\}}\cup\bigcup_{ A\in\mathcal{J}}BCl(\widetilde{A})\subset K(\widetilde{X})\), which is also a stable covering subspace. For \(A,B\in\mathcal{J}\), the following statements are equivalent:_
1. _There exists a_ \(\lambda\)_-quasigeodesic (resp., rectifiable path)_ \(\gamma:[0,1]\to\mathcal{J}\) _from_ \(A\) _to_ \(B\)_._
2. _There exists a_ \(\lambda\)_-quasigeodesic (resp., rectifiable path)_ \(\widetilde{\gamma}:[0,1]\to\widetilde{\mathcal{J}}\) _from_ \(\widetilde{A}\) _to_ \(\widetilde{B}\)_, such that the map_ \(\gamma:[0,1]\to\mathcal{J},\ t\mapsto\widetilde{\gamma}(t)\cap X\) _is a_ \(\lambda\)_-quasigeodesic from_ \(A\) _to_ \(B\)_._
3. \(\widetilde{A}\) _is_ \(\lambda d_{H}(\widetilde{A},\widetilde{B})\)_-Lipschitz (resp.,_ \(\lambda l(\widetilde{\gamma})\)_-Lipschitz)_ \(\widetilde{X}\)_-connected to_ \(\widetilde{B}\) _in_ \(\widetilde{\mathcal{J}}\)_, such that restricting the connection to_ \(X\) _implies_ \(A\) _is_ \(\lambda d_{H}(A,B)\)_-Lipschitz (resp.,_ \(\lambda l(\gamma)\)_-Lipschitz)_ \(X\)_-connected to_ \(B\) _in_ \(\mathcal{J}\)_._
4. \(A\) _is_ \(\lambda d_{H}(A,B)\)_-Lipschitz (resp.,_ \(\lambda l(\gamma)\)_-Lipschitz)_ \(X\)_-connected to_ \(B\) _in_ \(\mathcal{J}\)_._
Proof.: Let \(A,B\in\mathcal{J}\) and set \(\rho=d_{H}(A,B)=d_{H}(\widetilde{A},\widetilde{B})\) (resp., \(\rho=l(\gamma)=l(\widetilde{\gamma})\)).
(i)\(\Rightarrow\) (ii): Let \(\gamma:[0,1]\to\mathcal{J}\) be a \(\lambda\)-quasigeodesic from \(A\) to \(B\). Then \(\widetilde{\gamma}:[0,1]\to\widetilde{\mathcal{J}},\ t\mapsto\widetilde{\gamma (t)}\) gives the desired \(\lambda\)-quasigeodesic from \(\widetilde{A}\) to \(\widetilde{B}\).
(ii)\(\Rightarrow\) (iii): Let \(\widetilde{\gamma}:[0,1]\to\widetilde{\mathcal{J}}\) be a \(\lambda\)-quasigeodesic from \(\widetilde{A}\) to \(\widetilde{B}\), such that the map \(\gamma:[0,1]\to\mathcal{J},\ t\mapsto\widetilde{\gamma}(t)\cap X\) is a \(\lambda\)-quasigeodesic from \(A\) to \(B\). By Theorem 3.8, for each \(\alpha\in\widetilde{A}\), we get a \(\lambda\rho\)-Lipschitz path \(c_{\alpha}:[0,1]\to\widetilde{X}\), \(c_{\alpha}(t)\in\widetilde{\gamma}(t)\), from \(\alpha=c_{\alpha}(0)\in\widetilde{A}\) to \(c_{\alpha}(1)\in\widetilde{B}\). Similarly, for each \(\beta\in\widetilde{B}\), we get a \(\lambda\rho\)-Lipschitz path \(c^{\beta}:[0,1]\to\widetilde{X}\), \(c^{\beta}(t)\in\widetilde{\gamma}(1-t)\), from \(\beta=c^{\beta}(0)\in\widetilde{B}\) to \(c^{\beta}(1)\in\widetilde{A}\).
We get the complete relation \(\widetilde{R}:=\Big{\{}\big{(}c_{\alpha}(0),c_{\alpha}(1)\big{)}:\alpha\in\widetilde{A}\Big{\}}\cup\Big{\{}\big{(}\overline{c}^{\beta}(0),\overline{c}^{\beta}(1)\big{)}:\beta\in\widetilde{B}\Big{\}}\subset\widetilde{A}\times\widetilde{B}\) and the collection of paths in \(\widetilde{X}\),
\[\Big{\{}\gamma_{(a,b)}\ |\ (a,b)\in\widetilde{R}\Big{\}}\ :=\ \Big{\{}\gamma_{(c_{\alpha}(0),c_{ \alpha}(1))}:=c_{\alpha}\ \Big{|}\ \alpha\in\widetilde{A}\Big{\}}\cup\Big{\{}\gamma_{( \overline{c}^{\beta}(0),\overline{c}^{\beta}(1))}:=\overline{c}^{\beta}\ \Big{|}\ \beta\in \widetilde{B}\Big{\}}\,,\]
where, given a path \(\sigma:[0,1]\to Y\), we define \(\overline{\sigma}:[0,1]\to Y\) by \(\overline{\sigma}(t):=\sigma(1-t)\).
Consider the \(\lambda\rho\)-Lipschitz path \(\widetilde{\Gamma}:[0,1]\to\widetilde{\mathcal{J}}\) given by
\[\widetilde{\Gamma}(t)\ :=\ cl_{X}\Big{\{}\gamma_{(a,b)}(t):(a,b)\in \widetilde{R}\Big{\}}=cl_{X}\big{(}\{c_{\alpha}(t):\alpha\in\widetilde{A} \}\cup\{c^{\beta}(1-t):\beta\in\widetilde{B}\}\big{)}\ \subset\ \widetilde{\gamma}(t). \tag{5}\]
By Theorem 3.17, we can choose a densely complete \(\widetilde{R}\subset\widetilde{A}\times\widetilde{B}\) and a maximal collection of \(\lambda\rho\)-Lipschitz paths \(\{\gamma_{r}:r\in\widetilde{R}\}\) in \(\widetilde{X}\) such that \(\widetilde{\Gamma}(t)=\widetilde{\gamma}(t)\). Restriction of the resulting \(\lambda\rho\)-Lipschitz \(\widetilde{X}\)-connection between \(\widetilde{A}\) and \(\widetilde{B}\) in \(\widetilde{\mathcal{J}}\) yields the desired \(\lambda\rho\)-Lipschitz \(X\)-connection between \(A\) and \(B\) in \(\mathcal{J}\).
(iii) \(\Rightarrow\) (iv): This is immediate by hypotheses.
(iv) \(\Rightarrow\) (i): If \(A\sim_{(\lambda d_{H}(A,B),X)}B\) then by Lemma 3.18 we get a \(\lambda\)-quasigeodesic from \(A\) to \(B\) in \(\mathcal{J}\).
**Corollary 3.21** (Criterion for quasigeodesics/rectifiables in \(PCl(X)\)).: _Fix \(\lambda\geq 1\). Let \(X\) be a metric space, \(\mathcal{J}\subset PCl(X)\) a stable covering subspace, and \(A,B\in\mathcal{J}\). There exists a \(\lambda\)-quasigeodesic (resp., rectifiable path) \(\gamma:[0,1]\to\mathcal{J}\) from \(A\) to \(B\iff\) there exists a densely complete relation \(R\subset A\times B\) and a collection \(\{\gamma_{(a,b)}:(a,b)\in R\}\) of \(\lambda d_{H}(A,B)\)-Lipschitz (resp., \(\lambda l(\gamma)\)-Lipschitz) paths in \(X\), \(\gamma_{(a,b)}\) a path from \(a\) to \(b\), such that \(\ cl_{X}\{\gamma_{(a,b)}(t):(a,b)\in R\}\in\mathcal{J}\), for all \(t\)._
**Remark 3.22**.: Let \(X\) be a metric space. As an immediate consequence of the above result, if \(X\) contains no nonconstant rectifiable paths (e.g., when \(X\) is a \(p\)-snowflake, detailed in [14]) then neither does \(PCl(X)\). A related result is this: By the proof of [1, Proposition 1.4.11 in Section 1.4], if \(X\) is a \(p\)-snowflake (see [14]) then so is \(BCl(X)\).
The following result generalizes Theorem 3.17.
**Theorem 3.23** (Representation of quasigeodesics/rectifiables in \(PCl(X)\)).: _Fix \(\lambda\geq 1\). Let \(X\) be a metric space, \(\mathcal{J}\subset PCl(X)\) a stable covering subspace, and \(A,B\in\mathcal{J}\). Suppose \(\gamma:[0,1]\to\mathcal{J}\) is a \(\lambda\)-quasigeodesic from \(A\) to \(B\) and let \(\rho=d_{H}(A,B)\) (resp., \(\rho=l(\gamma)\)). Then there exists a densely complete relation \(R\subset A\times B\) and a collection of \(\lambda\rho\)-Lipschitz paths \(\{\gamma_{r}:[0,1]\to X\}_{r\in R}\) such that \(\ \gamma(t)=cl_{X}\{\gamma_{r}(t):r\in R\}\), for all \(t\). Moreover, when \(\mathcal{J}\subset K(X)\subset PCl(X)\), the collection of paths \(\{\gamma_{r}:[0,1]\to X\}_{r\in R}\) can be chosen to be maximal._
Proof.: In the proof of Theorem 3.20, given a \(\lambda\)-quasigeodesic \(\gamma:[0,1]\to\mathcal{J}\) from \(A\) to \(B\), we get its representation in terms of \(\lambda\rho\)-Lipschitz paths through the equality \(\gamma(t)=\widetilde{\gamma(t)}\cap X=\widetilde{\Gamma}(t)\cap X\), for all \(t\), where \(\widetilde{\Gamma}:[0,1]\to\widetilde{\mathcal{J}}\) is the maximal choice (from Theorem 3.17) of the map given by Equation (5).
**Note 3.24**.: Fix \(\lambda\geq 1\). If \(A\in BCl(X)\), let \(A_{R}=\overline{N}_{R}(A)=\{x\in X:\operatorname{dist}(x,A)\leq R\}\). For any \(\lambda\)-quasigeodesic \(\gamma:[0,1]\to BCl(X)\), with \(\rho=d_{H}(\gamma(0),\gamma(1))\), the definition of Hausdorff distance \(d_{H}\) implies
\[\gamma(t)\subset\gamma(0)_{t\lambda\rho}\cap\gamma(1)_{(1-t)\lambda\rho},\ \text{for all}\ t,\]
since \(d_{H}(\gamma(0),\gamma(t))\leq\lambda\rho t\) and \(d_{H}(\gamma(t),\gamma(1))\leq\lambda\rho(1-t)\). Moreover, any \(\lambda\rho\)-Lipschitz path \(c:[0,1]\to X\) from any \(a\in\gamma(0)\) to any \(b\in\gamma(1)\) similarly satisfies
\[c(t)\in a_{t\lambda\rho}\cap b_{(1-t)\lambda\rho}\subset\gamma(0)_{t\lambda \rho}\cap\gamma(1)_{(1-t)\lambda\rho},\ \text{for all}\ t.\]
**Proposition 3.25** (Representation of some quasigeodesics in \(\operatorname{BCl(X)}\)).: _Fix \(\lambda>1\). Let \(X\) be a geodesic space and \(A,B\in BCl(X)\). Suppose the map \(\gamma:[0,1]\to BCl(X)\), \(\gamma(t)=A_{t\lambda\rho}\cap B_{(1-t)\lambda\rho}\), is a \(\lambda\)-quasigeodesic (e.g., when \(A,B\in K(X)\), as shown by Memoli and Wan in [10, Theorem 3.6, page 14] or by Serra in [12, Theorem 1]) (resp., a rectifiable path) and let \(\rho=d_{H}(A,B)\) (resp., \(\rho=l(\gamma)\))._
_Then there exists a densely complete relation \(R\subset A\times B\) and a maximal collection of \(\lambda\rho\)-Lipschitz paths \(\{\gamma_{r}:[0,1]\to X\}_{r\in R}\) such that \(\gamma(t)=cl_{X}\{\gamma_{r}(t):r\in R\}\), for all \(t\)._
Proof.: Let \(\mathcal{S}=\{(\eta,R)\ |\ \eta:[0,1]\to BCl(X)\ \text{is}\ \lambda\rho\text{-} \text{lipschitz},\ \eta(t)\subset\gamma(t),\,\eta(t)=\{\eta_{r}(t):r\in R\}\) for \(\lambda\rho\)-Lipschitz paths \(\eta_{r}\), \(R\subset\eta(0)\times\eta(1)\) is densely complete\(\}\) be the poset with ordering "\((\eta_{1},R_{1})\leq(\eta_{2},R_{2})\) if \(\eta_{1}(t)\subset\eta_{2}(t)\ \forall t\) and \(R_{1}\subset R_{2}\)". Then \(\mathcal{S}\) is nonempty by Remark 3.2, geodesy of \(X\), and Note 3.24. Let \(\{(\eta_{\alpha},R_{\alpha})\}\) be a chain in \(\mathcal{S}\). With \(\eta:[0,1]\to BCl(X)\), \(t\mapsto cl_{X}\big{(}\bigcup\eta_{\alpha}(t)\big{)}\) and \(R=\bigcup R_{\alpha}\subset cl_{X}\big{(}\bigcup\eta_{\alpha}(0)\big{)} \times cl_{X}\big{(}\bigcup\eta_{\alpha}(1)\big{)}\), the pair \((\eta,R)\) is an upper bound of \(\{(\eta_{\alpha},R_{\alpha})\}\) in \(\mathcal{S}\). By Zorn's lemma, \(\mathcal{S}\) has a maximal element \((\eta^{\prime},R^{\prime})\).
Suppose there is \(s\in[0,1]\) such that \(\eta^{\prime}(s)\neq\gamma(s)\). Pick \(x_{s}\in\gamma(s)\backslash\eta^{\prime}(s)\). Using Remark 3.2, geodesy of \(X\), Note 3.24, and Lemma 3.14, construct a \(\lambda\rho\)-Lipschitz path \(\eta_{(a_{s},b_{s})}:[0,1]\to X\)
\(\eta_{(a_{s},b_{s})}(t)\in\gamma(t)\), from some \(a_{s}\in A\) through \(x_{s}\) to some \(b_{s}\in B\). Let \(\eta^{\prime\prime}(t)=\eta^{\prime}(t)\cup\{\eta_{(a_{s},b_{s})}(t)\}\) and \(R^{\prime\prime}=R^{\prime}\cup\{(a_{s},b_{s})\}\). Then \((\eta^{\prime},R^{\prime})<(\eta^{\prime\prime},R^{\prime\prime})\in\mathcal{S}\), which is a contradiction.
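For a concrete feel for the canonical map \(\gamma(t)=A_{t\lambda\rho}\cap B_{(1-t)\lambda\rho}\) of Note 3.24 and Proposition 3.25, the following Python sketch works with closed intervals of the real line, where closed neighborhoods, intersections, and Hausdorff distances all have simple closed forms. The particular intervals, the value \(\lambda=1.5\), and the sampling grid are assumptions made for the illustration, and the script merely checks the \(\lambda\)-quasigeodesic bound on the sampled times.

```python
import itertools

# Closed intervals of the real line, represented as pairs (left, right).
def enlarge(I, r):                      # closed r-neighborhood I_r
    return (I[0] - r, I[1] + r)

def intersect(I, J):
    left, right = max(I[0], J[0]), min(I[1], J[1])
    assert left <= right, "empty intersection"
    return (left, right)

def d_H(I, J):                          # Hausdorff distance of two intervals
    return max(abs(I[0] - J[0]), abs(I[1] - J[1]))

A, B = (0.0, 1.0), (5.0, 7.0)
lam = 1.5
rho = d_H(A, B)

def gamma(t):                           # the canonical map of Proposition 3.25
    return intersect(enlarge(A, t * lam * rho), enlarge(B, (1 - t) * lam * rho))

ts = [i / 200 for i in range(201)]
assert gamma(0.0) == A and gamma(1.0) == B
ok = all(d_H(gamma(s), gamma(t)) <= lam * rho * abs(s - t) + 1e-12
         for s, t in itertools.combinations(ts, 2))
print("rho =", rho, "| quasigeodesic bound holds on sampled times:", ok)
```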
## 4. **The case of finite-subset spaces**
Since \(FS_{n}(X)\) and \(FS(X)\) are stable covering subspaces of \(PCl(X)\), the results of Section 3 already say a lot about finite-subset spaces (see Corollary 3.19 for example). Our aim here is to further consider a few of the related properties of finite-subset spaces that are not yet obvious from the results of Section 3.
For any space \(X\), \(FS_{n}(X)\) is a quotient space of \(X^{n}\) via the "unordering" map \(q:X^{n}\to FS_{n}(X)\), \((x_{1},...,x_{n})\)\(\mapsto\)\(\{x_{1},...,x_{n}\}\) as a quotient map (see for example [1, Chapter 1]). Consequently, we will switch notation and write an element \(x\in FS_{n}(X)\) in the form \(x=\{x_{1},...,x_{n}\}=q(x_{1},...,x_{n})\) for an element \((x_{1},...,x_{n})\in X^{n}\).
Since finite subsets are compact, we have the quasiconvexity constant \(\lambda\geq 1\) throughout this section.
**Definition 4.1** (Recall).: Let \(X\) be a metric space and \(A,B\subset X\). A relation \(R\subset A\times B\) is \(\lambda\)**-proximal** if
\[\sup_{(a,b)\in R}d(a,b)\leq\lambda d_{H}(A,B).\]
We refer to a \(1\)-proximal relation simply as a **proximal relation**.
The following is the statement of Corollary 3.21 (of Theorem 3.20) for the stable covering subspace \(\mathcal{J}=FS_{n}(X)\subset PCl(X)\).
**Theorem 4.2** (Criterion for quasigeodesics/rectifiables in \(FS_{n}(X)\)).: _Let \(X\) be a metric space and \(n\geq 1\). For any \(x,y\in FS_{n}(X)\), a \(\lambda\)-quasigeodesic (resp., a rectifiable path \(\gamma\)) exists from \(x\) to \(y\) in \(FS_{n}(X)\iff\) there exists (i) a complete relation \(R\subset x\times y\) and (ii) a collection of \(\lambda d_{H}(x,y)\)-Lipschitz (resp., \(\lambda l(\gamma)\)-Lipschitz) paths \(\big{\{}\gamma_{(a,b)}:(a,b)\in R\big{\}}\) in \(X\), \(\gamma_{(a,b)}\) a path from \(a\) to \(b\), such that \(|\{\gamma_{r}(t):r\in R\}|\leq n\), \(\forall t\in[0,1]\)._
Proof.: See the proof of Corollary 3.21 (of Theorem 3.20) for the case of \(\mathcal{J}=FS_{n}(X)\).
**Theorem 4.3** (Sufficient condition for geodesics in \(FS(X)\)).: _Let \(X\) be a \(\lambda\)-quasiconvex space, \(x,y\in FS_{n}(X)\), and \(m\geq n\). If there exists an \(\alpha\)-proximal complete relation \(R\subset x\times y\) such that \(|R|\leq m\), then there exists a \(\lambda\alpha\)-quasigeodesic between \(x\) and \(y\) in \(FS_{m}(X)\)._
Proof.: Assume some \(R\subset x\times y\) is an \(\alpha\)-proximal complete relation. Then the map \(\gamma:[0,1]\to X(m)\) given by \(\gamma(t):=\big{\{}\gamma_{(a,b)}(t):(a,b)\in R,\ \gamma_{(a,b)}\) a \(\lambda\)-quasigeodesic in \(X\) from \(a\) to \(b\big{\}}\) is a \(\lambda\alpha\)-quasigeodesic from \(x\) to \(y\), since \(\gamma(0)=x\), \(\gamma(1)=y\), and for all \(t,t^{\prime}\in[0,1]\) we have
\[d_{H}(\gamma(t),\gamma(t^{\prime}))=\max\Big{\{}\max_{(a,b)\in R }\min_{(c,d)\in R}d\left(\gamma_{(a,b)}(t),\gamma_{(c,d)}(t^{\prime})\right), \max_{(c,d)\in R}\min_{(a,b)\in R}d\left(\gamma_{(a,b)}(t),\gamma_{(c,d)}(t^{ \prime})\right)\Big{\}}\] \[\qquad\qquad\leq\max_{(a,b)\in R}d\left(\gamma_{(a,b)}(t),\gamma_{ (a,b)}(t^{\prime})\right)\leq\lambda\max_{(a,b)\in R}d(a,b)|t-t^{\prime}|\leq \lambda\alpha d_{H}(x,y)|t-t^{\prime}|.\]
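A minimal numerical sketch of this construction, with \(X=\mathbb{R}^{2}\) (so straight-line segments are geodesics and \(\lambda=1\)): the two point configurations, the complete relation \(R\), and the sampling grid below are illustrative assumptions only.

```python
import itertools
import math

def d_H(A, B):
    d = math.dist
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

def segment(a, b, t):                    # straight-line geodesic in the plane
    return ((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1])

# Two 3-point configurations and a complete relation R between them.
x = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
y = [(1.0, 0.0), (4.0, 1.0), (1.0, 3.0)]
R = list(zip(x, y))

alpha = max(math.dist(a, b) for a, b in R) / d_H(x, y)    # proximality constant

def gamma(t):                            # the quasigeodesic built in the proof
    return [segment(a, b, t) for a, b in R]    # at most |R| = 3 points, so FS_3

ts = [i / 100 for i in range(101)]
bound = alpha * d_H(x, y)                # lambda = 1 here
ok = all(d_H(gamma(s), gamma(t)) <= bound * abs(s - t) + 1e-12
         for s, t in itertools.combinations(ts, 2))
print(f"alpha = {alpha:.3f}, |R| = {len(R)}, quasigeodesic bound holds: {ok}")
```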
**Definition 4.4** (Midpoint, Opposite points, Spaced pair).: Let \(X\) be a geodesic space and \(x,y,z\in X\). We call \(z\) a **midpoint** between \(x\) and \(y\) if \(d(x,y)=d(x,z)+d(z,y)=2d(x,z)=2d(z,y)\). In this case, we also say \(x\) and \(y\) are **opposite** with respect to \(z\). In a metric space \(Z\), two points \(z_{1},z_{2}\in Z\) form a **spaced pair** if \(\overline{N}_{r}(z_{1})\cap\overline{N}_{r}(z_{2})=\emptyset\) for every \(0<r<d(z_{1},z_{2})\), or equivalently, if \(d(z_{1},z_{2})\leq\max\{d(z_{1},z),d(z,z_{2})\}\) for every \(z\in Z\).
**Corollary 4.5**.: _Let \(X\) be a geodesic space and \(n\geq 3\). (i) The quasiconvexity constant of \(FS_{n}(X)\) is at least \(2\). (ii) \(FS_{n}(X)\) is not a geodesic space._
Proof.: Fix \(\varepsilon>0\). Pick \(x=\{x_{1},...,x_{n}\}\) and \(y=\{y_{1},...,y_{n}\}\) in \(FS_{n}(X)\backslash FS_{n-1}(X)\). Arrange the points of \(x\cup y\) into sets \(A_{1},...,A_{k}\subset x\cup y\) such that \(|A_{i}|=3\) for \(1\leq i\leq s\), \(|A_{i}|=2\) for \(s+1\leq i\leq k\), and each \(A_{i}\) contains at least one member from \(x\) and at least one member from \(y\) (i.e., the \(2\) or \(3\) members of each \(A_{i}\) are mixed). Arrange the points in \(x\cup y\) such that \(\mathrm{dist}(A_{i},A_{j})>2\varepsilon\) for all \(i\neq j\), \(\mathrm{diam}(A_{1})=...=\mathrm{diam}(A_{s})=2\varepsilon\), \(\mathrm{diam}(A_{s+1})=...=\mathrm{diam}(A_{k})=\varepsilon\), and in each \(A_{i}\) with \(|A_{i}|=3\) the lone element from \(x\) (resp., \(y\)) is a midpoint of the two elements that both come from \(y\) (resp., \(x\)), which is possible as \(X\) is geodesic. Note that
\[2n=|x|+|y|=|A_{1}|+\cdots+|A_{k}|=3s+2(k-s)=s+2k.\]
We have \(d_{H}(x,y)=\varepsilon\) and the only complete relation \(R\subset x\times y\) that is proximal (as required by Theorem 4.2 for the existence of a geodesic between \(x\) and \(y\)) has cardinality
\[|R|=2s+(k-s)=s+k=2n-k\geq n+1\ (\mathrm{if}\ s\geq 2).\]
Fix \(s\geq 2\). Then we also have \(k\geq 2\) (since \(n\geq 3\)), and so \(|R|\leq 2n-2\), that is,
\[n+1\leq|R|\leq 2n-2\ (\mathrm{where}\ |R|=2n-2\ \mathrm{holds}\ \mathrm{only}\ \mathrm{when}\ k=s=2\ \mathrm{and}\ n=3).\]
(_Note_: The worst case equality \(|R|=2n-2\) above might actually hold for \(n>3\) if one instead rearranges \(x\cup y\), say, into two groups \(A=\{x_{1},...,x_{n-1},y_{n}\}\) and \(B=\{y_{1},...,y_{n-1},x_{n}\}\) such that \(\mathrm{dist}(A,B)>2\varepsilon\) and \(\mathrm{diam}(A)=\mathrm{diam}(B)\leq 2\varepsilon\). The only proximal complete relation here is \(R=\{(x_{i},y_{n}):i=1,...,n-1\}\cup\{(x_{n},y_{i}):i=1,...,n-1\}\), with \(|R|=2n-2\). Depending on the dimension/structure of \(X\), the elements in \(A\) and \(B\) could be further arranged to achieve certain desired results.)
_Proof of (i)_: The points \(x,y\) above form a spaced pair in \(FS_{n}(X)\), i.e., \(\overline{N}_{r}(x)\cap\overline{N}_{r}(y)=\emptyset\) whenever \(0<r<d_{H}(x,y)\), or equivalently, \(d_{H}(x,y)\leq\max\{d_{H}(x,z),d_{H}(y,z)\}\) for all \(z\in FS_{n}(X)\). To see this, observe that if \(z\in\overline{N}_{r}(x)\cap\overline{N}_{r}(y)\) for some \(0<r<d_{H}(x,y)\), then by the definition of Hausdorff distance, \(A_{1},...,A_{s}\) each neighbor at least two elements of \(z\) and \(A_{s+1},...,A_{k}\) each neighbor at least one element of \(z\), giving \(|z|\geq 2s+(k-s)=|R|\geq n+1\) (i.e., \(z\not\in FS_{n}(X)\)). So, given any \(\lambda\)-quasigeodesic \(\gamma:[0,1]\to FS_{n}(X)\) from \(x\) to \(y\), we have \(d_{H}(x,y)\leq\max\{d_{H}(x,\gamma(1/2)),d_{H}(\gamma(1/2),y)\}\leq(\lambda/2 )d_{H}(x,y)\), and so \(\lambda\geq 2\).
_Proof of (ii)_: This is an immediate consequence of (i). Alternatively, in the absence of the spaced pairs trick used to bound \(\lambda\) above, we can still directly give a proof of (ii) based on Theorem 4.2 as follows:
_Alternative proof of (ii)_: Consider the spaced pair \(x,y\in FS_{n}(X)\) constructed above (and the relation \(R\subset x\times y\)). We will show that, in view of Theorem 4.2, any associated collection of paths \(\left\{\gamma_{(a,b)}:(a,b)\in R\right\}\) in \(X\) violates the necessity requirement "\(|\{\gamma_{(a,b)}(t):(a,b)\in R\}|\leq n,\ \forall t\in[0,1]\)" of the theorem.
Let \(A\) denote any of \(A_{1},...,A_{s}\), and assume without loss of generality (due to symmetry between \(x\) and \(y\)) that \(|A\cap x|=2\) and \(|A\cap y|=1\), and let \(A=\{x_{i_{1}},y_{j},x_{i_{2}}\}\), where by hypotheses \(y_{j}\) is a midpoint between \(x_{i_{1}}\) and \(x_{i_{2}}\) (i.e., \(x_{i_{1}}\) and \(x_{i_{2}}\) are opposite wrt \(y_{j}\)). Then it is clear that these points satisfy \(d(y_{j},x_{i_{1}})=d(y_{j},x_{i_{2}})=d_{H}(x,y)=\varepsilon\). Therefore, the two possible component paths (of a geodesic in \(FS_{n}(X)\) between \(x\) and \(y\)), namely, \(\gamma_{1}=\gamma_{(x_{i_{1}},y_{j})}\) and \(\gamma_{2}=\gamma_{(x_{i_{2}},y_{j})}\) are necessarily geodesics in \(X\) that satisfy the following: For \(t,t^{\prime}\in[0,1]\),
\[d\left(\gamma_{1}(t),\gamma_{1}(t^{\prime})\right)=d_{H}(x,y)|t-t^{\prime}|\ \mathrm{and}\ d \left(\gamma_{2}(t),\gamma_{2}(t^{\prime})\right)=d_{H}(x,y)|t-t^{\prime}|,\]
and so also satisfy the inter-path distance bound
\[\left|d\left(\gamma_{1}(t),\gamma_{2}(t)\right)-d\left(\gamma_{1}(t^{\prime}), \gamma_{2}(t^{\prime})\right)\right|\leq 2d_{H}(x,y)|t-t^{\prime}|.\]
If \(\gamma_{1}\) and \(\gamma_{2}\) join up at some time \(t^{\prime}\in(0,1)\), then \(|d(\gamma_{1}(t),\gamma_{2}(t))-0|\leq 2d_{H}(x,y)|t-t^{\prime}|\), and so
\[d(\gamma_{1}(0),\gamma_{2}(0))\leq 2d_{H}(x,y)t^{\prime}\ \mathrm{and}\ d( \gamma_{1}(1),\gamma_{2}(1))\leq 2d_{H}(x,y)|1-t^{\prime}|,\]
which is a contradiction since the midpoint/opposite locations of the endpoints of the paths imply
\[d(\gamma_{1}(0),\gamma_{2}(0))>2d_{H}(x,y)t^{\prime}\text{ or }d(\gamma_{1}(1), \gamma_{2}(1))>2d_{H}(x,y)|1-t^{\prime}|,\text{ for any }t^{\prime}\in(0,1).\]
This shows it is impossible for \(\gamma_{1}\) and \(\gamma_{2}\) to join up into a single path. Hence, the necessity requirement "\(|\{\gamma_{(a,b)}(t):(a,b)\in R\}|\leq n,\ \forall t\in[0,1]\)" of Theorem 4.2 cannot be satisfied.
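A concrete instance of the above construction on the real line, with \(n=3\) and \(s=k=2\): the specific coordinates and the grid used below are assumptions made for the illustration, and the brute-force search only probes candidate sets \(z\) on a grid; it is meant to illustrate the spaced-pair phenomenon, not to replace the proof.

```python
import itertools

def d_H(A, B):                      # Hausdorff distance of finite subsets of R
    return max(max(min(abs(a - b) for b in B) for a in A),
               max(min(abs(a - b) for a in A) for b in B))

eps = 1.0
# Two mixed triples A_1 = {0, eps, 2*eps} and A_2 = {10, 11, 12}: in each,
# the lone point of one set is the midpoint of the two points of the other.
x = [0.0, 2 * eps, 11.0]
y = [eps, 10.0, 12.0]
print("d_H(x, y) =", d_H(x, y))     # equals eps

# Grid search for a set z with |z| <= 3 minimizing max(d_H(x,z), d_H(z,y)).
grid = [i * 0.25 for i in range(-4, 53)]          # covers [-1, 13]
best = min(max(d_H(x, list(z)), d_H(list(z), y))
           for z in itertools.combinations(grid, 3))
print("best 'midpoint quality' found with |z| <= 3:", best)
# On this grid the best value equals d_H(x, y) itself; as shown in the proof,
# no z in FS_3(R) does better, which forces lambda >= 2 for any quasigeodesic
# from x to y in FS_3(R).
```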
**Lemma 4.6**.: _Let \(X\) be a metric space. If \(x,y\in FS(X)\) then there exists a complete relation \(R\subset x\times y\) such that \(\ \max_{(u,v)\in R}d(u,v)\leq d_{H}(x,y)\) and \(\ |R|\leq|x|+|y|\). Moreover, if \(|x|\geq 2\) and \(|y|\geq 2\) then we can choose \(R\) such that \(\ |R|\leq|x|+|y|-2\)._
Proof.: Let \(x\in X(n)\backslash X(n-1)\), \(y\in X(m)\backslash X(m-1)\). By the definition of \(d_{H}(x,y)\), for all \(i\in\{1,...,n\}\) and \(j\in\{1,...,m\}\) there exist \(\alpha(i)\in\{1,...,m\}\) and \(\beta(j)\in\{1,...,n\}\) such that \(d(x_{i},y_{\alpha(i)})\leq d_{H}(x,y)\) and \(d(x_{\beta(j)},y_{j})\leq d_{H}(x,y)\). Let \(R=\{(x_{i},y_{\alpha(i)}):i=1,...,n\}\cup\{(x_{\beta(j)},y_{j}):j=1,...,m\}\). Then \(|R|\leq n+m\).
Moreover, if \(n\geq 2\) and \(m\geq 2\), then with \(d(x_{i_{0}},y_{\alpha(i_{0})})=d_{H}(x,y)\) and \(d(x_{\beta(j_{0})},y_{j_{0}})=d_{H}(x,y)\), we see that \(R\) contains both \((x_{i_{0}},y_{\alpha(i_{0})})\) and \((x_{\beta(\alpha(i_{0}))},y_{\alpha(i_{0})})\), and by a symmetric argument, \(R\) also contains both \((x_{\beta(j_{0})},y_{j_{0}})\) and \((x_{\beta(j_{0})},y_{\alpha(\beta(j_{0}))}))\). Removing the two redundant elements, we are left with another complete relation \(\widetilde{R}=R\backslash\{(x_{\beta(\alpha(i_{0}))},y_{\alpha(i_{0})}),(x_{ \beta(j_{0})},y_{\alpha(\beta(j_{0}))})\}\) satisfying \(|\widetilde{R}|=|R|-2\leq m+n-2.\)
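The nearest-partner construction in the proof of Lemma 4.6 can be checked mechanically on random examples. The following Python sketch builds the relation \(R\) from the maps \(\alpha\) and \(\beta\) and verifies completeness, \(1\)-proximality, and the bound \(|R|\leq|x|+|y|\); the random planar configurations and the number of trials are arbitrary choices, and the sharper bound \(|x|+|y|-2\) is not checked here.

```python
import math
import random

def d_H(A, B):
    d = math.dist
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

random.seed(1)
def pts(k):
    return [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(k)]

for _ in range(200):
    x, y = pts(random.randint(2, 6)), pts(random.randint(2, 6))
    # alpha(i) and beta(j) pick nearest partners, as in the proof.
    R = {(a, min(y, key=lambda b: math.dist(a, b))) for a in x} \
        | {(min(x, key=lambda a: math.dist(a, b)), b) for b in y}
    assert {a for a, _ in R} == set(x) and {b for _, b in R} == set(y)   # complete
    assert max(math.dist(a, b) for a, b in R) <= d_H(x, y) + 1e-12       # proximal
    assert len(R) <= len(x) + len(y)                                     # cardinality
print("Lemma 4.6 construction verified on 200 random instances.")
```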
**Corollary 4.7**.: _Let \(X\) be a geodesic space and \(n\geq 2\). (i) If \(m\geq 2n-2\) then any two points of \(FS_{n}(X)\) can be connected by a geodesic in \(FS_{m}(X)\). (ii) A metric space \(Y\) is geodesic if and only if \(FS_{2}(Y)\) is geodesic. (iii) If \(Z\) is a geodesic space then \(FS^{n}(Z)=FS(Z)\backslash FS_{n-1}(Z)\) is a geodesic space._
Proof.: (i): If \(m\geq 2n-2\) then for any \(x,y\in FS_{n}(X)\) there exists (by Lemma 4.6) a proximal complete relation \(R\subset x\times y\) such that \(|R|\leq|x|+|y|-2\leq 2n-2\leq m\), and so (by Theorem 4.3) \(x\) and \(y\) are connected by a geodesic in \(FS_{m}(X)\). (ii) and (iii) immediately follow from (i).
The proof of the following result uses Lemma 3.15 and the meaning of a _reduced complete relation_ from Definition 3.1.
**Theorem 4.8** (Quasiconvexity of \(FS_{n}(X)\)).: _If \(X\) is a geodesic space then \(FS_{n}(X)\) is \(2\)-quasiconvex._
_Moreover, if \(n\geq 3\) then by Corollary 4.5, the quasiconvexity constant \(2\) for \(FS_{n}(X)\) is the smallest possible._
Proof.: Let \(x,y\in FS_{n}(X)\). Let \(R\subset x\times y\) be a reduced proximal complete relation (which is possible because a proximal complete relation \(R\subset x\times y\) exists by the definition of \(d_{H}(x,y)\) and can be reduced by removing inessential elements \((a,b)\in R\), i.e., those that satisfy "\(|(\{a\}\times B)\cap R|\geq 2\) and \(|(A\times\{b\})\cap R|\geq 2\)").
Let \(x^{1}=\{a\in x:|(\{a\}\times y)\cap R|=1\}\) and \(y^{1}=\{b\in y:|(x\times\{b\})\cap R|=1\}\). Define a map \(f:x^{1}\to y\) by \((\{a\}\times y)\cap R=\{(a,f(a))\}\) and a map \(g:y^{1}\to x\) by \((x\times\{b\})\cap R=\{(g(b),b)\}\). Then
\[R=\{(a,f(a)):a\in x^{1}\}\cup\{(g(b),b):b\in y^{1}\},\]
where \(\{(a,f(a)):a\in x^{1}\}\cap\{(g(b),b):b\in y^{1}\}=\{(a,f(a)):a\in x^{0}\}=\{ (g(b),b):b\in y^{0}\}\), for subsets \(x^{0}\subset x^{1}\) and \(y^{0}\subset y^{1}\) such that \(f^{0}=f|_{x^{0}}:x^{0}\to y^{0}\) and \(g^{0}=g|_{y^{0}}:y^{0}\to x^{0}\) are mutually inverse bijections. So, with \(x^{\prime}=x^{1}\), \(y^{\prime}=y^{1}\backslash y^{0}\), \(x^{\prime\prime}=g(y^{\prime})\), and \(y^{\prime\prime}=f(x^{\prime})\), we get ("parallel" or "disjoint") surjective maps
\[f^{\prime}=f|_{x^{\prime}}:x^{\prime}\to y^{\prime\prime},\ g^{\prime}=g|_{y^{ \prime}}:y^{\prime}\to x^{\prime\prime}\]
and disjoint unions \(\ x=x^{\prime}\sqcup x^{\prime\prime}=x^{\prime}\sqcup g^{\prime}(y^{\prime}),\ y=y^{\prime}\sqcup y^{\prime\prime}=y^{\prime}\sqcup f^{\prime}(x^{\prime})\).
Let \(z=x^{\prime\prime}\cup y^{\prime\prime}\) (where \(z\in FS_{n}(X)\) by construction) and consider the \(d_{H}(x,y)/d_{H}(x,z)\)-proximal complete relation \(R_{1}\subset x\times z\) and the \(d_{H}(x,y)/d_{H}(z,y)\)-proximal complete relation \(R_{2}\subset y\times z\) given by
\[R_{1}=\{(a,f^{\prime}(a)):a\in x^{\prime}\}\cup\{(c,c):c\in x^{\prime\prime}\},\ R_{2}=\{(g^{\prime}(b),b):b\in y^{\prime}\}\cup\{(c,c):c\in y^{\prime\prime}\},\]
where the proximality claims are due to \(d_{H}(x,z)=d_{H}(x^{\prime}\cup x^{\prime\prime},x^{\prime\prime}\cup y^{ \prime\prime})\leq d_{H}(x^{\prime},y^{\prime\prime})\leq d_{H}(x,y)\) and similarly \(d_{H}(z,y)\leq d_{H}(x,y)\). Then by Theorem 4.3, we get a \(d_{H}(x,y)/d_{H}(x,z)\)-quasigeodesic \(\gamma_{1}:[0,1]\to FS_{n}(X)\) from \(x\) to \(z\) and a \(d_{H}(x,y)/d_{H}(z,y)\)-quasigeodesic \(\gamma_{2}:[0,1]\to FS_{n}(X)\) from \(z\) to \(y\). The path \(\gamma=\gamma_{1}\cdot\gamma_{2}:[0,1]\to FS_{n}(X)\) from \(x\) to \(y\) given by \(\gamma|_{[0,1/2]}(t)=\gamma_{1}(2t)\) and \(\gamma|_{[1/2,1]}(t)=\gamma_{2}(2t-1)\) satisfies
\[l(\gamma)=l(\gamma_{1})+l(\gamma_{2})\leq d_{H}(x,y)+d_{H}(x,y)=2d_{H}(x,y).\qed\]
**Corollary 4.9** ([1, Corollary 2.1.15]).: _If \(X\) is \(\lambda\)-quasiconvex then \(FS_{n}(X)\) is \(\alpha_{n}(\lambda)\)-quasiconvex with "\(\alpha_{1}(\lambda)=\alpha_{2}(\lambda)=\lambda\)" and "\(\max(2,\lambda)\leq\alpha_{n}(\lambda)\leq 2\lambda\) for \(n\geq 3\)" being the smallest possible constants._
## 5. **Some relevant questions**
According to Theorem 3.20 - Corollary 3.21, to have a geodesic in a stable covering subspace \(\mathcal{J}\subset PCl(X)\) it is necessary and sufficient to have both a densely complete relation and a set of Lipschitz paths with specific properties. It is clear from its proof (based on Lemma 3.18) that the sufficiency part holds for \(\mathcal{J}\subset BCl(X)\), and not just for \(\mathcal{J}\subset PCl(X)\). The necessity part (which depends on Theorem 3.8), however, is more involved. One is tempted to suspect that the necessity might still hold if we choose \(\mathcal{J}\subset BCl(X)\) such that \(\mathcal{J}\cap PCl(X)\) is dense in \(\mathcal{J}\).
**Question 5.1**.: Let \(X\) be an arbitrary metric space. Can \(PCl(X)\) be replaced with \(BCl(X)\) in Theorem 3.20 - Corollary 3.21? Two alternative ways to ask the same question are the following:
1. For the existence of quasigeodesics in a stable covering subspace \(\mathcal{J}\subset BCl(X)\), is the sufficient condition in Theorem 3.20 - Corollary 3.21 also necessary?
2. Does there exist a nontrivial quasigeodesic in a stable covering subspace \(\mathcal{J}\subset BCl(X)\) that violates the sufficient condition in Theorem 3.20 - Corollary 3.21?
If the answer to (1) above is negative (i.e., the answer to (2) above is positive) what is the largest possible subspace of \(BCl(X)\) for which the necessity part of Theorem 3.20 - Corollary 3.21 is valid?
The answer to Question 5.1 might require the answer to the following related question.
**Question 5.2**.: In Proposition 3.25, can the canonical map \(\gamma(t)=A_{t\lambda\rho}\cap B_{(1-t)\lambda\rho}\) in \(BCl(X)\) be replaced with an arbitrary \(\lambda\)-quasigeodesic in \(BCl(X)\)?
For application purposes, one can also ask questions concerning efficiency in practically constructing or realizing quasigeodesics in subset spaces. If \(X\) is a metric space then by Theorem 3.20 - Corollary 3.21, given a stable covering subspace \(\mathcal{J}\subset PCl(X)\) and \(A,B\in\mathcal{J}\), a \(\lambda\)-quasigeodesic exists between \(A\) and \(B\) in \(\mathcal{J}\iff\text{there exists a densely complete relation }R\subset A\times B\) and \(\lambda d_{H}(A,B)\)-Lipschitz paths \(\{\gamma_{(a,b)}:(a,b)\in R\}\) in \(X\), \(\gamma_{(a,b)}\) a path from \(a\) to \(b\), such that \(\Gamma(t)=cl_{X}\{\gamma_{r}(t):r\in R\}\in\mathcal{J}\), \(\forall t\in[0,1]\).
**Question 5.3**.: Among the possible densely complete relations \(R\subset A\times B\), which ones, and how many of them, have the smallest cardinality (in the sense that they are reduced densely complete relations)?
**Question 5.4**.: Among the possible (reduced) densely complete relations \(R\subset A\times B\), which ones, and how many of them, admit the shortest, simplest, or least (computationally) complex paths \(\{\gamma_{r}:r\in R\}\) in \(X\)? |
# A Quantum Algorithm for Shapley Value Estimation

Iain Burge, Michel Barbeau, Joaquin Garcia-Alfaro

arXiv:2301.04727v3 (2023-01-11), [http://arxiv.org/abs/2301.04727v3](http://arxiv.org/abs/2301.04727v3)
###### Abstract
The introduction of the European Union's (EU) set of comprehensive regulations relating to technology, the General Data Protection Regulation, grants EU citizens the right to explanations for automated decisions that have significant effects on their life. This poses a substantial challenge, as many of today's state-of-the-art algorithms are generally unexplainable black boxes. Simultaneously, we have seen an emergence of the fields of quantum computation and quantum AI. Due to the fickle nature of quantum information, the problem of explainability is amplified, as measuring a quantum system destroys the information. As a result, there is a need for post-hoc explanations for quantum AI algorithms. In the classical context, the cooperative game theory concept of the Shapley value has been adapted for post-hoc explanations. However, this approach does not translate to use in quantum computing trivially and can be exponentially difficult to implement if not handled with care. We propose a novel algorithm which reduces the problem of accurately estimating the Shapley values of a quantum algorithm into a far simpler problem of estimating the true average of a binomial distribution in polynomial time.
**Keywords:** Quantum Computing, Cooperative Game Theory, Explainable AI, Quantum AI
## 1 Introduction
### Background
With the introduction of the European Union's set of comprehensive regulations relating to technology, the General Data Protection Regulation (GDPR), there has been a massive shift in the world of AI. Specifically in our case, the GDPR has provided EU citizens a right to explanation [1]. This poses a substantial challenge, as many of today's state-of-the-art algorithms, such as deep learning models, are generally black boxes [2], meaning that even the developers of AI models usually have no way of actually understanding the decisions of their models. There are two paths one can take to address the new need for model explanations: either making models inherently interpretable, or coming up with post-hoc explanations for our black-box models. One more recent axiomatic strategy for post-hoc explainability is based on the game theory concept of the Shapley value, which is a powerful measure of contribution [3]. However, direct calculation of Shapley values is an NP-hard problem [4, 5], and outside of specific problem types, sampling is the only option for approximating Shapley values [6].
On the other hand, we have the emergence of quantum algorithms and quantum machine learning (QML) [7]. Quantum computers are at least naively resistant to explanation, as even measuring the internal state destroys most of the information within it. Combining this with techniques like deep reinforcement learning with variational quantum circuits [8] makes interpretability seem impossible.
### Problem Statement
Inherently interpretable models would likely be best [2], as an explanation of an interpretable model is guaranteed to be correct. However, much of the research and work in AI over the past couple of decades has gone into black-box models, and many of the benefits of QML may not be possible to implement in an interpretable fashion. Ideally, we do not want to throw away all of the previous black-box research, so there is value in implementing and improving post-hoc explanation methods.
Current solutions for post-hoc explanations would be unintuitive or unwieldy to apply in the context of quantum computers. We explore a native quantum solution to post-hoc explainability using Shapley value approximation, where the function itself is approximated.
### Results
We develop a flexible framework for global evaluation of input factors in quantum circuits which approximates the Shapley values of such factors. Our framework increases circuit complexity by an additional roughly \(O(n\log n)\) c-not gates, with a total increase in circuit depth of \(O(n)\), where \(n\) is the number of factors. The change in space complexity for global evaluations is an additional \(O(\log n)\) qubits over the circuit being evaluated. This is in stark contrast to the \(O(2^{n})\) assessments needed to directly assess the Shapley values in the general case.
### Paper Organization
Section 2 provides the background and preliminaries. Sections 3 and 4 present our quantum algorithm, including an analytic treatment. Section 5 provides some examples. Section 6 concludes the work.
## 2 Background
### Shapley Values
#### 2.1.1 Cooperative Game Theory
Cooperative game theory is the study of coalitional games.
**Definition 1**: A _coalitional game_ can be described as the tuple \(G=(F,V)\), wherein \(F=\{1,2,...,N\}\) is a set of \(N\) players. \(V\) is a value function with \(V(S)\in\mathbb{R}\) representing the value of a given coalition \(S\subseteq F\), with the restriction that \(V(\emptyset)=0\).
**Definition 2**: Given a game \(G=(F,V)\), \(F=\{1,2,\ldots,N\}\), a _payoff vector_ \(\Phi(G)\) is a vector of length \(N\), which describes the utility \(\Phi(G)_{i}\) of player \(i\). A payoff vector is determined by the value function, where player \(i\)'s payoff value \(\Phi(G)_{i}\) is determined by how \(V(S)\), \(S\subseteq F\), is affected by \(i\)'s inclusion or exclusion from \(S\) for any possible \(S\).
There are various solution concepts that construct these payoff vectors (or sets of payoff vectors) [9]. In this paper, we are most interested in Shapley values.
#### 2.1.2 Shapley Values
In the early 1950s, Shapley introduced a new solution concept for determining resource allocation in cooperative games, which we now call the Shapley value. It was novel in that it returned a single unique payoff vector, which was thought to be potentially untenable at the time [10].
The Shapley value can be derived from one of several sets of axioms; in our case we use the following four. Suppose we have games \(G=(F,V)\) and \(G^{\prime}=(F,V^{\prime})\), \(F=\{1,2,\ldots,N\}\), and a payoff vector \(\Phi(G)\). Then:
1. Efficiency: The sum of all utility is equal to the utility of the grand coalition \[\sum_{i=1}^{N}\Phi(G)_{i}=V(F)\]
2. Equal Treatment: Players \(i\), \(j\) are said to be symmetric if \(\forall_{S\subseteq F,\;i,j\notin S}[V(S\cup\{i\})=V(S\cup\{j\})]\). If \(i\) and \(j\) are symmetric in \(G\), then they are treated equally: \[\Phi(G)_{i}=\Phi(G)_{j}\]
3. Null Player: If player \(i\) satisfies \(\forall_{S\subseteq F,\;i\notin S}[V(S)=V(S\cup\{i\})]\), then \(i\) is a null player. If \(i\) is a null player then: \[\Phi(G)_{i}=0\]
4. Additivity: If a player is in two games, the Shapley values between the two games are additive: \[\Phi(G+G^{\prime})_{i}=\Phi(G)_{i}+\Phi(G^{\prime})_{i}\] where the game \(G+G^{\prime}\) is defined as \(G+G^{\prime}=(F,V+V^{\prime})\), with \((V+V^{\prime})(S)=V(S)+V^{\prime}(S)\), \(S\subseteq F\).
Amazingly, these axioms lead to a single unique and quite intuitive division of utility [10]. Even more, the Shapley value of \(i\) turns out to be the expected marginal contribution to a random coalition \(S\subseteq F\setminus\{i\}\), where marginal contribution \(=V(S\cup\{i\})-V(S)\). This can be interpreted as a _fair_ division of utility [11].
#### Direct Calculation
It can be shown that the following equation gives us the payoff vector for the Shapley value solution concept, which we will call Shapley values [12, 13].
**Definition 3**: Let \(G=(F,V)\), for simplicity sake, we will now write \(\Phi(G)_{i}\) as \(\Phi_{i}\). Then, the Shapley value of the \(i^{th}\) factor \(\Phi_{i}\) can be described as:
\[\Phi_{i}=\sum_{S\subseteq F\setminus\{i\}}\gamma(|F\setminus\{i\}|,|S|)\cdot( V(S\cup\{i\})-V(S))\]
Where,
\[\gamma(n,m)=\frac{1}{{n\choose m}(n+1)}=\frac{m!(n-m)!}{(n+1)!}\]
The Shapley value can be interpreted as a weighted average of contributions. The weights themselves have an intuitive interpretation: the factor \(\frac{1}{{n\choose m}}\) results in each possible size of \(S\) having an equal impact on the final value (since given \(|S|=m\), there would be \({n\choose m}\) summands contributing to the final value), while the factor \(\frac{1}{n+1}\) averages over the different sizes of \(S\).
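As a minimal illustration (and a baseline for what follows), the sketch below evaluates Definition 3 by brute force, which requires \(2^{|F\setminus\{i\}|}\) calls to \(V\) per player; the toy weighted-voting game is an assumption made only for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, V):
    """Direct evaluation of Definition 3 (exponential in the number of players)."""
    n = len(players) - 1                         # n = |F \ {i}|
    gamma = lambda m: factorial(m) * factorial(n - m) / factorial(n + 1)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        phi[i] = sum(gamma(len(S)) * (V(set(S) | {i}) - V(set(S)))
                     for r in range(len(others) + 1)
                     for S in combinations(others, r))
    return phi

# Toy 3-player weighted voting game: a coalition wins iff its weight exceeds 5.
weights = {1: 4, 2: 3, 3: 2}
V = lambda S: 1.0 if sum(weights[p] for p in S) > 5 else 0.0

phi = shapley_values([1, 2, 3], V)
print(phi)                                       # Phi = {1: 2/3, 2: 1/6, 3: 1/6}
print("efficiency holds:", abs(sum(phi.values()) - V({1, 2, 3})) < 1e-12)
```

Even in this tiny example, efficiency, the equal treatment of the symmetric players 2 and 3, and the exponential number of evaluations of \(V\) are all visible.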
#### Intractability
Unfortunately, in spite of all the desirable attributes of Shapley values, they have a major weakness: they can be incredibly costly to compute. With the above formulation one would need to assess \(V\) with \(2^{|F\setminus\{i\}|}\) different subsets. In general, except for very specific circumstances, there seem to be no clever solutions or reformulations either. Deterministically computing Shapley values in the context of weighted voting games has been shown to be NP-complete [4, 5]. Considering voting games are some of the simplest cooperative games, this result does not bode well for more complex scenarios. In the context of Shapley values for machine learning, it has also been shown that the calculation of Shapley values is not tractable even for regression models [14]. It was also proven that in an empirical setting finding Shapley values is exponential [15].
### Explainable AI
The importance of eXplainable AI (XAI) is multifaceted. On the one hand, actually understanding a model's reasoning allows for more robustness. This can be intuited with a simple thought experiment: imagine implementing a traditional program without being able to understand what the computer is doing, where it is literally impossible to debug. If by some miracle you were able to get it working, it certainly would not be robust to edge cases. This is more or less the situation data scientists and engineers are in while developing large black-box models: they are stuck with, at best, naive and heuristic strategies for debugging, without a good way of understanding what the model is doing or why. XAI, and in particular post-hoc explanation, can serve as a critical debugging tool [16, 17]. On the other hand, in important, potentially life-altering applications such as medicine, loan decisions, law, and various other critical fields, we cannot afford to rely on AI which we do not understand. This is not only because of the recent legislative shift with the GDPR [18], but also for the obvious moral and practical reasons.
## 3 A Quantum Algorithm for Shapley Value Estimation
Consider a game \(G=(F,V)\), \(F=\{0,1,\ldots,n\}\); note that this is a game of \(n+1\) players. The goal is to efficiently approximate the Shapley value of a given player \(i\). Suppose a quantum version of \(V\), \(U_{V}^{\pm}\), is given, such that:
\[U_{V}^{\pm}\left|x\right\rangle\left|0\right\rangle:=\left|x\right\rangle\left( \sqrt{\frac{1-\hat{V}^{\pm}(x)}{2}}\left|0\right\rangle+\sqrt{\frac{1+\hat{V} ^{\pm}(x)}{2}}\left|1\right\rangle\right)\]
where \(\left|x\right\rangle\) is a vector in the computational basis. Define \(V_{\max}\) to be an upper bound for the value function magnitude.
\[V_{\max}\geq\max_{S\subseteq F}\lvert V(S)\rvert\]
Consider the binary integer \(x=x_{0}x_{1}\ldots x_{i-1}x_{i+1}\ldots x_{n}\) and let \(S_{x}\) be the set of all players \(j\) such that \(x_{j}=1\). This binary subset encoding can represent every player coalition which excludes player \(i\). Next, define,
\[\hat{V}^{+}(x):=\frac{V\left(S_{x}\cup\{i\}\right)}{V_{\max}},\text{ and }\hat{V}^{-}(x):=\frac{V\left(S_{x}\right)}{V_{\max}}.\]
A critical step for the algorithm is to approximate the weights of the Shapley value function. As will be shown later, these Shapley weights correspond perfectly to a slightly modified beta function:
\[\gamma(n,m)=\int_{0}^{1}t^{m}(1-t)^{n-m}dt\]
The beta function and by extension the Shapley coefficients can be approximated with Darboux sums of \(t^{m}(1-t)^{n-m}\) over partitions of \([0,1]\). Additionally, as will become apparent, \(t^{m}(1-t)^{n-m}\) can be implemented efficiently on a quantum computer.
Consider the function,
\[t_{\ell}(k)=\sin^{2}\left(\frac{\pi}{2}\cdot\frac{k}{2^{\ell}}\right)\]
from which the partition \(P=\left(t_{\ell}(k)\right)_{k=0}^{2^{\ell}}\) of \([0,1]\) can be constructed. This partition has been chosen as it is computationally simple to implement. Finally, define \(w_{\ell}(k)\) to be the width of the \(k^{\text{th}}\) subinterval of P, \(w_{\ell}(k)=t_{\ell}(k+1)-t_{\ell}(k)\).
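As a purely classical sanity check (not part of the quantum circuit), one can verify that summing \(w_{\ell}(k)\,t_{\ell}(k)^{m}(1-t_{\ell}(k))^{n-m}\) over this partition approaches \(\gamma(n,m)\) as \(\ell\) grows; the values of \(n\), \(m\) and \(\ell\) below are arbitrary choices for illustration.

```python
from math import factorial, pi, sin

def t(l, k):
    """Partition point t_l(k) = sin^2((pi/2) * k / 2^l)."""
    return sin(pi / 2 * k / 2**l) ** 2

def gamma_exact(n, m):
    return factorial(m) * factorial(n - m) / factorial(n + 1)

def gamma_from_partition(n, m, l):
    """Darboux-style sum over P_l, sampling b_{n,m} at the left endpoint of each subinterval."""
    return sum((t(l, k + 1) - t(l, k)) * t(l, k)**m * (1 - t(l, k))**(n - m)
               for k in range(2**l))

n, m = 6, 2
for l in (2, 4, 8):
    print(l, gamma_from_partition(n, m, l), gamma_exact(n, m))  # converges to gamma(6, 2)
```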
With much of the context out of the way, let us describe the algorithm. Begin with the following state:
\[\left|\psi_{0}\right\rangle=\left|0\right\rangle_{\text{Pt}}\otimes\left|0 \right\rangle_{\text{Pl}}\otimes\left|0\right\rangle_{\text{Ut}},\]
where Pt denotes the partition register, Pl denotes the player register, and Ut denotes the Utility register. Suppose the number of qubits \(\ell\) in the partition register is \(O(\log n)\), then the partition register can be prepared to an arbitrary quantum state in \(O(n)\) time [19]. Prepare the partition register to be,
\[\sum_{k=0}^{2^{\ell}-1}\sqrt{w_{\ell}(k)}\left|k\right\rangle.\]
So that, the state of the quantum system is,
\[\left|\psi_{1}\right\rangle=\sum_{k=0}^{2^{\ell}-1}\sqrt{w_{\ell}(k)}\left|k \right\rangle_{\text{Pt}}\left|0\right\rangle_{\text{Pl}}\left|0\right\rangle_ {\text{Ut}}.\]
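A small classical check that these amplitudes define a valid state: the widths \(w_{\ell}(k)\) telescope to \(t_{\ell}(2^{\ell})-t_{\ell}(0)=1\), so the squared amplitudes sum to one (the actual state-preparation routine of [19] is not reproduced here; this only checks normalization).

```python
from math import pi, sin, sqrt

def t(l, k):
    return sin(pi / 2 * k / 2**l) ** 2

def partition_amplitudes(l):
    """Amplitudes sqrt(w_l(k)) loaded into the partition register."""
    return [sqrt(t(l, k + 1) - t(l, k)) for k in range(2**l)]

amps = partition_amplitudes(3)
print(sum(a * a for a in amps))  # 1.0 up to floating-point error
```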
Next we perform a series of controlled rotations \(R\) (circuit in Figure 1) of the form
\[R\ket{k}\ket{0}:=\ket{k}\left(\sqrt{1-t_{\ell}^{\prime}(k)}\ket{0}+\sqrt{t_{\ell} ^{\prime}(k)}\ket{1}\right),\]
where \(t_{\ell}^{\prime}(k)=t_{\ell+1}(2k+1)\) will be used to sample the height of the \(k^{\text{th}}\) subinterval in the Darboux sum. \(t_{\ell}^{\prime}(k)\) always lies in the subinterval \([t_{\ell}(k),t_{\ell}(k+1)]\), and, as discussed in the error analysis of Section 4, any sample point inside the subinterval works for the Darboux-sum argument; for brevity the states below are written in terms of \(t_{\ell}(k)\). \(R\) is then performed on each qubit in the player register, controlled by the partition register. In total, these applications of \(R\) can be performed with \(O(n\log n)\) gates in \(O(n)\) layers and yield the state:
\[\ket{\psi_{2}}=\sum_{k=0}^{2^{\ell}-1}\sqrt{w_{\ell}(k)}\ket{k}_{\text{Pt}} \left(\sqrt{1-t_{\ell}(k)}\ket{0}+\sqrt{t_{\ell}(k)}\ket{1}\right)^{\otimes n }\ket{0}_{\text{Ut}}.\]
Note that the player register is of size \(n\). Let \(H_{m}\) be the set of binary numbers of Hamming distance \(m\) from \(0\) in \(n\) bits; then we can rewrite \(\ket{\psi_{2}}\) as:
\[\ket{\psi_{2}}=\sum_{k=0}^{2^{\ell}-1}\sqrt{w_{\ell}(k)}\ket{k}_{\text{Pt}} \sum_{m=0}^{n}\sqrt{t_{\ell}(k)^{m}(1-t_{\ell}(k))^{n-m}}\sum_{h\in H_{m}}\ket{ h}_{\text{PI}}\ket{0}_{\text{Ut}}.\]
**Example 1**: For a concrete example of this change, consider \(n=2\), then
\[\left(\sqrt{1-t_{\ell}(k)}\ket{0}+\sqrt{t_{\ell}(k)}\ket{1}\right) ^{\otimes 2}= \sqrt{(1-t_{\ell}(k))^{2}}\ket{00}+\sqrt{t_{\ell}(k)(1-t_{\ell}(k ))}\ket{01}\] \[+\sqrt{t_{\ell}(k)(1-t_{\ell}(k))}\ket{10}+\sqrt{t_{\ell}(k)^{2}} \ket{11}\]
Note that \(\ket{00}\) is hamming distance \(0\) from \(\ket{00}\), \(\ket{01}\) and \(\ket{10}\) are hamming distance \(1\) from \(\ket{00}\), and \(\ket{11}\) is hamming distance \(2\) from \(\ket{00}\). With this knowledge in hand, we can rewrite our state as,
\[\sqrt{(1-t_{\ell}(k))^{2}}\sum_{h\in H_{0}}\ket{h}+\sqrt{t_{\ell}(k)(1-t_{\ell }(k))}\sum_{h\in H_{1}}\ket{h}+\sqrt{t_{\ell}(k)^{2}}\sum_{h\in H_{2}}\ket{h}\]
This can now be arranged into the desired form,
\[\sum_{m=0}^{n}\sqrt{t_{\ell}(k)^{m}(1-t_{\ell}(k))^{n-m}}\sum_{h\in H_{m}}\ket{h}\]
Figure 1: This circuit R is a controlled rotation
Rearranging \(\ket{\psi_{2}}\) gives,
\[\ket{\psi_{2}}=\sum_{m=0}^{n}\sum_{h\in H_{m}}\sum_{k=0}^{2^{\ell}-1}\sqrt{w_{\ell }(k)t_{\ell}(k)^{m}(1-t_{\ell}(k))^{n-m}}\ket{k}_{\text{Pt}}\ket{h}_{\text{Pl}} \ket{0}_{\text{Ut}}.\]
Next, we apply \(U_{V}^{-}\) or \(U_{V}^{+}\). Note that these are applied separately, on separate runs of the algorithm, and the statistics of the two runs are then compared. For convenience, let us for the moment write \(U_{V}^{\pm}\ket{h}\ket{0}=\ket{h}\ket{V^{\pm}(h)}\), where,
\[\ket{V^{\pm}(h)}=\sqrt{\frac{1-\hat{V}^{\pm}(h)}{2}}\ket{0}+\sqrt{\frac{1+\hat {V}^{\pm}(h)}{2}}\ket{1}\]
Applying \(U_{V}^{\pm}\) to \(\ket{h}_{\text{Pl}}\ket{0}_{\text{Ut}}\) gives \(\ket{\psi_{3}^{\pm}}\), equal to,
\[\sum_{m=0}^{n}\sum_{h\in H_{m}}\sum_{k=0}^{2^{\ell}-1}\sqrt{w_{\ell}(k)t_{\ell }(k)^{m}(1-t_{\ell}(k))^{n-m}}\ket{k}_{\text{Pt}}\ket{h}_{\text{Pl}}\ket{V^{\pm }(h)}_{\text{Ut}}.\]
This operation is wholly dependent on the game or algorithm being analyzed and its complexity. Assuming the algorithm is being implemented with a look up table, one could likely use qRAM [20]. This approach would have a time complexity of \(O(n)\) at the cost of requiring \(O(2^{n})\) qubits for storage. However, depending on the problem, there will often be far less resource intense methods of implementing \(U_{V}\), as will be seen with the voting game example in a later section.
This is the final quantum state. Let us now analyze this state through the lens of density matrices. Taking the partial trace with respect to the partition and player registers yields,
\[\operatorname{tr}_{\text{Pt},\text{Pl}}(\ket{\psi_{3}^{\pm}} \!\!\bra{\psi_{3}^{\pm}})= \sum_{m=0}^{n}\sum_{h\in H_{m}}\left(\sum_{k=0}^{2^{\ell}-1}w_{ \ell}(k)t_{\ell}(k)^{m}(1-t_{\ell}(k))^{n-m}\right)\] \[\left|V^{\pm}(h)\right\rangle_{\text{Ut}}\left\langle V^{\pm}(h) \right|_{\text{Ut}}.\]
It will be shown later in this work that,
\[\sum_{k=0}^{2^{\ell}-1}w_{\ell}(k)t_{\ell}(k)^{m}(1-t_{\ell}(k))^{n-m}\approx \gamma(n,m)\]
with an error inversely proportional to \(2^{\ell}\). Intuitively, this can be thought of as a Darboux sum approximation of a slightly modified beta function. This modified beta function happens to be exactly equal to \(\gamma(n,m)\). The specifics
of the error will be discussed in further detail and proved in later sections. As a result,
\[\mathrm{tr}_{\mathrm{Pt},\mathrm{Pl}}\big{(}\big{|}\psi_{3}^{\pm}\big{\rangle} \!\big{\langle}\psi_{3}^{\pm}\big{|}\big{)}\approx\sum_{m=0}^{n}\sum_{h\in H_{ m}}\gamma(n,m)\,\big{|}V^{\pm}(h)\big{\rangle}_{\mathrm{Ut}}\,\big{\langle}V^{ \pm}(h)\big{|}_{\mathrm{Ut}}\,.\]
Finally, measuring the qubit in the utility register in the computational basis yields the following expected value.
\[\sum_{m=0}^{n}\sum_{h\in H_{m}}\gamma(n,m)\frac{1+\hat{V}^{\pm}(h)}{2}\]
Which is equal to
\[\frac{1}{2}\sum_{m=0}^{n}\sum_{h\in H_{m}}\gamma(n,m)\hat{V}^{\pm}(h)+\frac{1} {2}\sum_{m=0}^{n}\sum_{h\in H_{m}}\gamma(n,m)\]
With the ability to craft these plus and minus states, we can now extract the information required to find a close approximation to the Shapley value. Note that it is necessary to extract the expected values of the utility register for both the plus and the minus states. This is achieved by repeatedly constructing the state \(\big{|}\psi_{3}^{+}\big{\rangle}\) in the case of the plus state and the state \(\big{|}\psi_{3}^{-}\big{\rangle}\) in the case of the minus state. One would then repeatedly measure the utility register of these states, creating binomial distributions of \(\left|0\right\rangle\) and \(\left|1\right\rangle\) measurements. From these binomial distributions it is possible to estimate the underlying probability and, in addition, to construct a confidence interval [21]. This estimation of the expected values takes up the bulk of the runtime of the algorithm.
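The statistical step can be sketched classically as follows: each run contributes one Bernoulli sample of the utility qubit, and a normal-approximation interval is built from the resulting binomial counts. The probability `p_true` below is a hypothetical stand-in for the unknown measurement probability of \(\ket{1}\); it is not derived from any particular game.

```python
import random
from math import sqrt

def estimate_probability(sample_once, shots=100_000, z=1.96):
    """Estimate P(|1>) from repeated single-shot measurements, with a simple Wald interval."""
    ones = sum(sample_once() for _ in range(shots))
    p = ones / shots
    half = z * sqrt(p * (1 - p) / shots)
    return p, (p - half, p + half)

p_true = 0.62  # hypothetical placeholder for the true expectation of the utility qubit
p_hat, ci = estimate_probability(lambda: random.random() < p_true)
print(p_hat, ci)
```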
Subtracting the expected value obtained from the minus state from that obtained from the plus state, then simplifying, we get:
\[\frac{1}{2}\sum_{m=0}^{n}\sum_{h\in H_{m}}\gamma(n,m)\left(\hat{V}^{+}(h)-\hat {V}^{-}(h)\right)\]
Plugging in the definition for \(\hat{V}^{\pm}(h)\) gives
\[\frac{1}{2\cdot V_{\mathrm{max}}}\sum_{m=0}^{n}\sum_{h\in H_{m}}\gamma(n,m) \left(V\left(S_{h}\cup\{i\}\right)-V\left(S_{h}\right)\right)\]
Notice that, in the \(S_{x}\) encoding, \(H_{m}\) represents each subset of \(F\setminus\{i\}\) of size \(m\). As a result, the equation is, in effect, summing over each subset of \(F\setminus\{i\}\).
Combining this observation with a final step of multiplying by \(2\cdot V_{\max}\) yields:
\[\sum_{S\subseteq F\setminus\{i\}}\gamma(|F\setminus\{i\}|,|S|)\cdot\left(V\left(S \cup\{i\}\right)-V\left(S\right)\right).\]
This is precisely the Shapley value \(\Phi_{i}\).
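Putting the pieces together, the following classical emulation reproduces the ideal (infinite-shot) output of the procedure: the partition-based approximation of \(\gamma(n,m)\), the difference of the plus and minus expectations, and the final rescaling by \(2\cdot V_{\max}\). The three-player game is the same illustrative one used earlier and is an assumption, not an example from the text.

```python
from itertools import combinations
from math import pi, sin

def t(l, k):
    return sin(pi / 2 * k / 2**l) ** 2

def shapley_emulated(players, value, i, l, v_max):
    """Ideal output of the procedure above, emulated classically (no sampling noise)."""
    others = [p for p in players if p != i]
    n = len(others)
    diff = 0.0
    for m in range(n + 1):
        # partition-based approximation of gamma(n, m)
        g = sum((t(l, k + 1) - t(l, k)) * t(l, k)**m * (1 - t(l, k))**(n - m)
                for k in range(2**l))
        for S in combinations(others, m):
            # (V-hat-plus minus V-hat-minus) / 2, i.e. the difference of the two expectations
            diff += g * (value(set(S) | {i}) - value(set(S))) / (2 * v_max)
    return 2 * v_max * diff

players = [1, 2, 3]
value = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley_emulated(players, value, 1, l=6, v_max=1.0))  # close to the exact 1/3
```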
## 4 Our Proposal
### Shapley Values and the Beta Function
In this subsection, the relationship between the beta function and Shapley values is explored.
Given a game with a set of \(F\) players, a subset \(S\subseteq F\), and a player \(i\), we have,
**Definition 4**: Let \(n=|F\setminus\{i\}|\), and \(m=|S|\). We denote the weights used to calculate the Shapley value in the weighted average as:
\[\gamma(n,m)=\frac{m!(n-m)!}{(n+1)!}=\frac{1}{\binom{n}{m}(n+1)}\]
**Definition 5**: Denote a function closely related to the beta function as:
\[B_{\alpha,\beta}=\int\limits_{0}^{1}x^{\beta}(1-x)^{\alpha-\beta}dx,\quad 0 \leq\beta\leq\alpha,\quad\alpha,\beta\in\mathbb{N}.\]
We will refer to this function as the special beta function. We also denote,
\[b_{\alpha,\beta}(x)=x^{\beta}(1-x)^{\alpha-\beta}\]
So that \(B_{\alpha,\beta}=\int_{0}^{1}b_{\alpha,\beta}(x)dx\).
**Lemma 1**: _We have the following recurrence relationship:_
\[B_{\alpha,\beta}=\frac{\beta}{\alpha-(\beta-1)}B_{\alpha,\beta-1}\]
\[B_{\alpha,0}=B_{\alpha,\alpha}=\frac{1}{\alpha+1}\]
_Proof_ Case 1, \(\beta=0\) or \(\alpha\):
\[B_{\alpha,0}=\int\limits_{0}^{1}(1-x)^{\alpha}dx=-\frac{(1-x)^{\alpha+1}}{ \alpha+1}\bigg{|}_{0}^{1}=\frac{1}{\alpha+1}\]
a nearly identical calculation can be used to show \(B_{\alpha,\alpha}=\frac{1}{\alpha+1}\).
Case 2, \(0<\beta<\alpha\):
\[B_{\alpha,\beta} =\int\limits_{0}^{1}x^{\beta}(1-x)^{\alpha-\beta}dx\] \[=\frac{x^{\beta}(1-x)^{\alpha-(\beta-1)}}{\alpha-(\beta-1)}\bigg{|} _{0}^{1}-\int\limits_{0}^{1}\frac{-\beta}{\alpha-(\beta-1)}x^{\beta-1}(1-x)^{ \alpha-(\beta-1)}dx\] \[=0+\frac{\beta}{\alpha-(\beta-1)}\int\limits_{0}^{1}x^{\beta-1}( 1-x)^{\alpha-(\beta-1)}dx\] \[=\frac{\beta}{\alpha-(\beta-1)}B_{\alpha,\beta-1}\]
**Theorem 2**: _The \(B\) function is equivalent to the Shapley weight function \(\gamma\):_
\[B_{n,m}=\gamma(n,m),\quad 0\leq m\leq n,\quad m,n\in\mathbb{N}\]
_Proof_ Fix \(n\), we proceed by induction.
Base case, \(m=0\): then \(B_{n,0}=\frac{1}{n+1}=\gamma(n,0)\), thus the base case holds.
Inductive step: suppose \(B_{n,k}=\gamma(n,k)\), \(k\in\mathbb{N}\); we need to show \(B_{n,k+1}=\gamma(n,k+1)\) for \(0\leq k<n\):
\[B_{n,k+1} =\frac{k+1}{n-k}B_{n,k}\] \[=\frac{k+1}{n-k}\gamma(n,k)\] \[=\frac{k+1}{n-k}\cdot\frac{k!(n-k)!}{(n+1)!}\] \[=\frac{(k+1)!(n-(k+1))!}{(n+1)!}\] \[=\gamma(n,k+1)\]
\(\Box\)
To summarize, we have shown that our formulation of the beta function is equivalent to the Shapley value weight function over our domain.
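The equivalence can also be checked numerically: computing \(B_{n,m}\) through the recurrence of Lemma 1 and comparing with the factorial formula for \(\gamma(n,m)\) agrees to machine precision (the ranges of \(n\) and \(m\) below are arbitrary illustrative choices).

```python
from math import factorial

def B(alpha, beta):
    """Special beta function computed via the recurrence of Lemma 1."""
    val = 1.0 / (alpha + 1)              # B_{alpha, 0}
    for b in range(1, beta + 1):
        val *= b / (alpha - (b - 1))     # B_{alpha, b} = b / (alpha - (b - 1)) * B_{alpha, b-1}
    return val

def gamma(n, m):
    return factorial(m) * factorial(n - m) / factorial(n + 1)

print(all(abs(B(n, m) - gamma(n, m)) < 1e-12
          for n in range(0, 12) for m in range(n + 1)))  # True
```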
### Approximating the Special Beta Function
In this section, we show that the special beta function can be approximated using a fairly unusual family of partitions. Though it may not be immediately obvious, our partition definition is extremely convenient for a quantum implementation. For the moment, it is sufficient to understand this section as pursuing a single goal: showing that our partition can be used to approximate the area under \(b_{n,m}\) over the range \([0,1]\), which is equal to the special beta function. In fact, we will show that we can estimate \(B_{n,m}\) with arbitrary accuracy.
For a visual representation of this goal, we will estimate the area under \(b_{n,m}\) over the range \([0,1]\) using this partition in a Darboux sum, as can be seen at various resolutions in Figure 2.
#### 4.2.1 Partition
To begin, we consider a simple function, which, as we will see in later parts, is extremely natural in the quantum context.
Remark 1: Consider the following function, mapping the range \(0\leq x\leq 1\) to the reals:
\[\sin^{2}\left(\frac{\pi}{2}x\right) \tag{1}\]
Figure 2: Visual representation of the special beta function being approximated using Darboux integrals and our novel partition.
As can be seen in Figure 3, \(\sin^{2}\left(\frac{\pi}{2}x\right)\) is clearly monotonic and bijective from the domain \([0,1]\) to the range \([0,1]\).
**Definition 6**: Let \(\ell\) be a non-negative integer, and let \(P_{\ell}=\left(t_{\ell}(0),t_{\ell}(1),\ldots,t_{\ell}(2^{\ell}-1),t_{\ell}(2^ {\ell})\right)\) be a partition of the interval \([0,1]\) where,
\[t_{\ell}(k)=\sin^{2}\left(\frac{\pi}{2}\cdot\frac{k}{2^{\ell}}\right)\]
Note that \(t_{\ell}(x)\) can be interpreted as a discretized version of the function in Remark 1, where instead of \(x\in[0,1]\), we have \(x\in\left\{\frac{k}{2^{\ell}}:k=0,\ldots,2^{\ell}\right\}\subset[0,1]\) for some fixed \(\ell\).
_Remark 2_: \(P_{\ell+1}\) is a refinement of \(P_{\ell}\).
_Example 2_: In Figures 4 and 5, we can see concrete examples of how \(t_{\ell}(k)\) partitions the interval \([0,1]\). Note that \(P_{3}\) has a point corresponding to each point in \(P_{2}\). Specifically,
\[t_{2}(i)=t_{3}(2i),\quad i\in\mathbb{N}\]
This behaviour is due to the aforementioned refinement relationship between \(P_{\ell}\) and \(P_{\ell+1}\). It is also worth noting how \(P_{3}\)'s intervals can be viewed as the intervals in \(P_{2}\) split in two.
**Lemma 3**: _We have the following properties for the function \(t_{\ell}(k)\):_
1. This represents a kind of symmetry with respect to \(k=2^{\ell-1}\): \[t_{\ell}(k)=1-t_{\ell}\left(2^{\ell}-k\right)\]
2. \[t_{\ell}(k)=t_{\ell+1}(2k)\]
Figure 4: Visualization of Partition \(P_{\ell}\), \(\ell=2\).
Proof: Property 1:
\[t_{\ell}(2^{\ell}-k) =\sin^{2}\left(\frac{\pi}{2}\cdot\left(\frac{2^{\ell}-k}{2^{\ell}} \right)\right)\] \[=\sin^{2}\left(\frac{\pi}{2}\cdot\left(1-\frac{k}{2^{\ell}} \right)\right)\] \[=\sin^{2}\left(\frac{\pi}{2}\cdot\left(-\frac{k}{2^{\ell}} \right)+\frac{\pi}{2}\right)\] \[=\cos^{2}\left(\frac{\pi}{2}\cdot\left(-\frac{k}{2^{\ell}} \right)\right) \left(\cos(x)=\sin\left(x+\frac{\pi}{2}\right)\right)\] \[=\cos^{2}\left(\frac{\pi}{2}\cdot\frac{k}{2^{\ell}}\right) \left(\cos(x)=\cos(-x)\right)\] \[=1-\sin^{2}\left(\frac{\pi}{2}\cdot\frac{k}{2^{\ell}}\right) \left(\cos^{2}(x)=1-\sin^{2}(x)\right)\] \[=1-t_{\ell}(k)\]
Property 2:
\[t_{\ell+1}(2k)=\sin^{2}\left(\frac{\pi}{2}\cdot\frac{2k}{2^{\ell+1}}\right)= \sin^{2}\left(\frac{\pi}{2}\cdot\frac{k}{2^{\ell}}\right)=t_{\ell}(k)\]
It will be useful to define a width for each sub-interval \([t_{\ell}(k),t_{\ell}(k+1)]\).
**Definition 7**: Denote the width of a sub-interval \([t_{\ell}(k),t_{\ell}(k+1)]\) in a partition \(P_{\ell}\) as:
\[w_{\ell}(k)=t_{\ell}(k+1)-t_{\ell}(k)\]
\(w_{\ell}(k)\) can be interpreted as a function from \(k\in\mathbb{N}\), \(0\leq k\leq 2^{\ell}-1\), to \(\mathbb{R}\). This width varies with respect to both \(k\) and \(\ell\): increasing \(\ell\) decreases \(w_{\ell}(k)\), and middling values of \(k\) maximize \(w_{\ell}(k)\), as is apparent in Figures 2 and 6.
**Definition 8**: The previous-to-new-left-interval-ratio \(\rho\) is defined as:
\[\rho_{\ell-1}(k)=\frac{w_{\ell}(2k)}{w_{\ell}(2k)+w_{\ell}(2k+1)}\]
\(\rho_{\ell-1}(k)\) can also be interpreted as a function with \(k\in\mathbb{N}\), \(0\leq k\leq 2^{\ell-1}-1\).
\(\rho_{\ell-1}(k)\) represents how the sizes of intervals are modified during a refinement from \(P_{\ell-1}\) to \(P_{\ell}\).
Remark 3: The previous-to-new-left-interval-ratio \(\rho\) can equivalently be represented as:
\[\rho_{\ell-1}(k)=\frac{t_{\ell}(2k+1)-t_{\ell}(2k)}{t_{\ell}(2k+2)-t_{\ell}(2k)}=\frac{w_{\ell}(2k)}{w_{\ell-1}(k)}\]
Example 4: Let us consider \(P_{1}\) equal to the partition \((0,0.5,1)\). When we refine to \(P_{2}\), the first interval, \(\mathrm{interval}_{\mathrm{old}}=[0,0.5]\) of \(P_{1}\) is split into two new intervals, \(\mathrm{interval}_{\mathrm{left}}=[0,0.15]\) and \(\mathrm{interval}_{\mathrm{right}}=[0.15,0.5]\) (note that \(\mathrm{interval}_{\mathrm{old}}=\mathrm{interval}_{\mathrm{left}}\cup \mathrm{interval}_{\mathrm{right}}\)). Consequently, the previous-to-new-left-interval-ratio \(\rho_{1}(0)\) is equal to \(\frac{w_{2}(0)}{w_{2}(0)+w_{2}(1)}\) which is approximately \(0.3\), see Figure 6. \(\rho_{1}(0)\) represents the relative size of the new left interval, \(\mathrm{interval}_{\mathrm{left}}=[0,0.15]\) compared to the old interval \(interval_{\mathrm{old}}=[0,0.5]\).
Corollary 3.1: _The first partition refinement \(P_{0}\to P_{1}\) splits the interval into two parts, which happen to be of equal size._
\[\rho_{0}(0)=\frac{1}{2}\]
_This can be verified easily through basic calculation._
**Lemma 4**: _As our partition approaches infinite density, when the leftmost interval \([0,b]\) is split into two pieces \([0,a]\) and \([a,b]\), \(a\) approaches \(\frac{b}{4}\). Equivalently,_
\[\lim_{\ell\to\infty}\rho_{\ell-1}(0)=\frac{1}{4}\]
Proof: \[\lim_{\ell\to\infty}\rho_{\ell-1}(0)=\lim_{\ell\to\infty}\frac{t_{\ell}(1)-t_ {\ell}(0)}{t_{\ell}(2)-t_{\ell}(0)}=\lim_{\ell\to\infty}\frac{\sin^{2}\left( \frac{\pi}{2}\cdot\frac{1}{2^{\ell}}\right)}{\sin^{2}\left(\frac{\pi}{2}\cdot \frac{2}{2^{\ell}}\right)}=\lim_{\ell\to\infty}\frac{\left(\frac{\pi}{2}\cdot \frac{1}{2^{\ell}}\right)^{2}}{\left(\frac{\pi}{2}\cdot\frac{2}{2^{\ell}} \right)^{2}}=\frac{1}{4}\]
Lemma 5: \(\rho_{\ell}(0)\) _monotonically decreases as \(\ell\) increases, for \(\ell\geq 0\)._
Proof: Let \(x=\frac{2}{\pi}2^{\ell}\). Then,
\[\rho_{\ell-1}(0)=\frac{t_{\ell}(1)-t_{\ell}(0)}{t_{\ell}(2)-t_{\ell}(0)}=\frac{\sin^{2}\left(\frac{\pi}{2}\cdot\frac{1}{2^{\ell}}\right)-\sin^{2}(0)}{\sin^{2}\left(\frac{\pi}{2}\cdot\frac{2}{2^{\ell}}\right)-\sin^{2}(0)}=\frac{\sin^{2}(\frac{1}{x})}{\sin^{2}(\frac{2}{x})}\]
Define \(h(x)=\frac{\sin^{2}\left(\frac{1}{x}\right)}{\sin^{2}\left(\frac{2}{x}\right)}\), \(x\in\left[\frac{2}{\pi},\infty\right)\); our result will hold if \(h(x)\) decreases monotonically as \(x\) increases over its domain. This is true when \(\frac{d}{dx}h(x)<0\). We have,
\[\frac{d}{dx}h(x)=-\frac{2\sin\left(\frac{1}{x}\right)\left(\cos\left(\frac{1}{ x}\right)\sin\left(\frac{2}{x}\right)-2\sin\left(\frac{1}{x}\right)\cos\left( \frac{2}{x}\right)\right)}{\sin^{3}\left(\frac{2}{x}\right)x^{2}}\]
Note that for \(x\geq\frac{2}{\pi}\):
\[\frac{2\sin\left(\frac{1}{x}\right)}{\sin^{3}\left(\frac{2}{x}\right)x^{2}}>0\]
Thus, \(\frac{d}{dx}h(x)<0\) when,
\[\cos\frac{1}{x}\sin\frac{2}{x}-2\sin\frac{1}{x}\cos\frac{2}{x}>0\]
We can continue as follows, using the double angle identities:
\[\cos\frac{1}{x}\sin\frac{2}{x}-2\sin\frac{1}{x}\cos\frac{2}{x} =\cos\frac{1}{x}\left(2\sin\frac{1}{x}\cos\frac{1}{x}\right)-2 \sin\frac{1}{x}\left(1-2\sin^{2}\frac{1}{x}\right)\] \[=2\sin\frac{1}{x}\left(1-\sin^{2}\frac{1}{x}\right)-2\sin\frac{1}{x}+4\sin^{3}\frac{1}{x}\] \[=2\sin^{3}\frac{1}{x}\] \[>0\]
This is trivially true for \(x\geq\frac{2}{\pi}\). Hence, \(\frac{d}{dx}h(x)<0\). As a result \(\rho_{\ell}(0)\) monotonically decreases when \(\ell\) increases.
**Corollary 5.1**: _By Corollary 3.1, \(\rho_{0}(0)=\frac{1}{2}\), and by Lemma 4, \(\lim_{\ell\rightarrow\infty}\rho_{\ell}(0)=\frac{1}{4}\). Thus, since \(\rho_{\ell}(0)\) is monotonically decreasing with respect to \(\ell\) (Lemma 5), it is clear that \(\rho_{\ell}(0)\in\left(\frac{1}{4},\frac{1}{2}\right]\)._
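Numerically, the behaviour described by Lemmas 4 and 5 and Corollary 5.1 is easy to observe: \(\rho_{\ell}(0)\) starts at \(1/2\) and decreases monotonically towards \(1/4\) as \(\ell\) grows (a small illustrative check, computing \(\rho_{\ell}(k)=w_{\ell+1}(2k)/w_{\ell}(k)\) as in Remark 3).

```python
from math import pi, sin

def t(l, k):
    return sin(pi / 2 * k / 2**l) ** 2

def rho(l, k):
    """previous-to-new-left-interval-ratio for the refinement P_l -> P_{l+1}"""
    return (t(l + 1, 2 * k + 1) - t(l + 1, 2 * k)) / (t(l, k + 1) - t(l, k))

print([round(rho(l, 0), 5) for l in range(8)])
# starts at 0.5 and decreases monotonically towards 0.25
```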
**Lemma 6**: _Similarly, we have \(\rho_{\ell}(2^{\ell}-1)\in\left[\frac{1}{2},\frac{3}{4}\right)\)._
Proof: Let us begin by showing:
\[\rho_{\ell}(2^{\ell}-1)=1-\rho_{\ell}(0)\]
Plugging in \(2^{\ell}-1\),
\[\rho_{\ell}(2^{\ell}-1) =\frac{t_{\ell}(2^{\ell}-1)-t_{\ell}(2^{ \ell}-2)}{t_{\ell}(2^{\ell})-t_{\ell}(2^{\ell}-2)}\] \[=\frac{(1-t_{\ell}(1))-(1-t_{\ell}(2))}{1-(1-t_{\ell}(2))}\] (Lemma 3, Property 1)
\[=\frac{t_{\ell}(2)-t_{\ell}(1)}{t_{\ell}(2)}\] \[=1-\frac{t_{\ell}(1)}{t_{\ell}(2)}\] \[=1-\rho_{\ell}(0)\]
As was shown in corollary 5.1, \(\rho_{\ell}(0)\in\left(\frac{1}{4},\frac{1}{2}\right]\), thus \(\rho_{\ell}(2^{\ell}-1)\in\left[\frac{1}{2},\frac{3}{4}\right)\). \(\Box\)
So we have figured out the bounds of \(\rho_{\ell}\) for the extreme values of its domain. The next step will be to show that \(\rho_{\ell}(k)<\rho_{\ell}(k+1)\) for all valid k. This will give us a chain of inequalities, which will bound all \(\rho\).
**Lemma 7**: _The following statement relates an inequality of weights to an inequality of previous-to-new-left-interval-ratios:_
\[\frac{w_{\ell}(2k)}{w_{\ell}(2k+1)}<\frac{w_{\ell}(2k+2)}{w_{\ell}(2k+3)}\iff \rho_{\ell-1}(k)<\rho_{\ell-1}(k+1)\]
_Proof_ Applying some basic algebra,
\[\frac{w_{\ell}(2k)}{w_{\ell}(2k+1)}<\frac{w_{\ell}(2k+2)}{w_{\ell}(2k+3)}\iff w _{\ell}(2k)w_{\ell}(2k+3)<w_{\ell}(2k+2)w_{\ell}(2k+1)\]
Adding \(w_{\ell}(2k)w_{\ell}(2k+2)\) (\(>0\) by remark 4) to both sides results in,
\[w_{\ell}(2k)w_{\ell}(2k+2)+w_{\ell}(2k)w_{\ell}(2k+3)<w_{\ell}(2k)w_{\ell}(2k+ 2)+w_{\ell}(2k+2)w_{\ell}(2k+1)\]
Which can be factored into,
\[w_{\ell}(2k)(w_{\ell}(2k+2)+w_{\ell}(2k+3))<w_{\ell}(2k+2)(w_{\ell}(2k)+w_{ \ell}(2k+1))\]
Rearranging gives,
\[\frac{w_{\ell}(2k)}{w_{\ell}(2k)+w_{\ell}(2k+1)}<\frac{w_{\ell}(2k+2)}{w_{ \ell}(2k+2)+w_{\ell}(2k+3)} \tag{2}\]
Finally, by definition of previous-to-new-left-interval-ratio \(\rho\),
\[\rho_{\ell-1}(k)<\rho_{\ell-1}(k+1)\]
\(\Box\)
This relation will be helpful in showing \(\rho_{\ell-1}(k)<\rho_{\ell-1}(k+1)\), as \(w_{\ell}\) is an easier function to manage.
**Definition 9**: Fix \(\ell\). Let us define the following real-number extensions \(T(x)\) of \(t_{\ell}\) and \(W(x)\) of \(w_{\ell}\), where \(T(x)\) and \(W(x)\) are real-valued functions of a real variable.
\[T(x)=\sin^{2}\left(\frac{\pi}{2}\cdot\frac{x}{2^{\ell}}\right)\]
and,
\[W(x)=T(x+1)-T(x)\]
where \(x\in[0,2^{\ell}-1]\) is real.
**Corollary 7.1**: _Fix \(\ell\). If \(k\in\mathbb{N}\), \(0\leq k\leq 2^{\ell}\), then \(T(k)=t_{\ell}(k)\), and \(W(k)=w_{\ell}(k)\)._
With \(w_{\ell}\) and \(t_{\ell}\) extended to the real numbers, it is possible to apply tools from calculus, making the next few proofs substantially easier.
Remark 4: Clearly, \(T(x)\) increases as \(x\) increases. As a result \(W(x)>0\) for \(x\in[0,2^{\ell}-1]\).
Remark 5: With the new function \(W\), it is possible to extend the expression seen in lemma 7,
\[\frac{W(x)}{W(x+1)}<\frac{W(x+2)}{W(x+3)},\quad x\in\left[0,2^{\ell}-1\right]\]
With corollary 7.1 in mind, it is easy to see that if this relation holds in this extended context, then it will also hold for the relation with \(w_{\ell}\).
Remark 6: The relation can be simplified further by considering,
\[\frac{W(x)}{W(x+1)}<\frac{W(x+1)}{W(x+2)}\]
Assuming this holds for all \(x\), it is very obvious that,
\[\frac{W(x)}{W(x+1)}<\frac{W(x+1)}{W(x+2)}\Rightarrow\frac{W(x)}{W(x+1)}<\frac {W(x+2)}{W(x+3)}\]
So proving \(\frac{W(x)}{W(x+1)}<\frac{W(x+1)}{W(x+2)}\), will be sufficient to show \(\rho_{\ell}(k)<\rho_{\ell}(k+1)\).
**Lemma 8**: _Fix \(\ell\). The derivative of W(x) is greater than the derivative of W(x+1):_
\[\frac{d}{dx}W(x)>\frac{d}{dx}W(x+1)\quad x\in\left[0,2^{\ell}-1\right]\]
Proof: We first find \(\frac{d}{dx}W(x+r)\), for an arbitrary \(r\in\mathbb{R}\).
\[\frac{d}{dx}T(x+r) =\frac{d}{dx}\sin^{2}\left(\frac{\pi}{2}\left(\frac{x+r}{2^{\ell }}\right)\right)\] \[=\frac{d}{dx}\sin^{2}\left(\frac{\alpha}{2}(x+r)\right) \left(\alpha=\frac{\pi}{2^{\ell}}\right)\] \[=\frac{\alpha}{2}\sin(\alpha(x+r))\]
Thus, we have \(\frac{d}{dx}W(x+r)\):
\[\frac{d}{dx}W(x+r) =\frac{d}{dx}T(x+r+1)-\frac{d}{dx}T(x+r)\] \[=\frac{\alpha}{2}\big{(}\sin(\alpha(x+r+1))-\sin(\alpha(x+r))\big{)}\]
Finally, we show:
\[\frac{d}{dx}W(x)>\frac{d}{dx}W(x+1) \iff\frac{\alpha}{2}\big{(}\sin(\alpha(x+1))-\sin(\alpha x)\big{)}\] \[>\frac{\alpha}{2}\big{(}\sin(\alpha(x+2))-\sin(\alpha(x+1))\big{)}\] \[\iff 2\sin(\alpha(x+1))-\sin(\alpha x)-\sin(\alpha(x+2))>0\]
Let \(u=\alpha x\), \(v=\alpha\) then,
\[2\sin(u+v)-\sin(u)-\sin(u+2v) =2(\sin u\cos v+\cos u\sin v)-\sin u\] \[\quad-(\sin u\cos 2v+\cos u\sin 2v)\] \[=2\sin u\cos v+2\cos u\sin v-\sin u\] \[\quad-\sin u(2\cos^{2}v-1)-\cos u(2\sin v\cos v)\] \[=2\sin u\cos v+2\cos u\sin v-\sin u\] \[\quad-2\sin u\cos^{2}v+\sin u-2\cos u\sin v\cos v\] \[=2\sin u\cos v(1-\cos v)+2\cos u\sin v(1-\cos v)\] \[=2(\sin u\cos v+\cos u\sin v)(1-\cos v)\] \[=2\sin(u+v)(1-\cos v)\] \[=2\sin(\alpha x+\alpha)(1-\cos\alpha)\] \[=2\sin\left(\pi\cdot\frac{x+1}{2^{\ell}}\right)\left(1-\cos\frac {\pi}{2^{\ell}}\right)\]
It is easily shown that \(1-\cos\frac{\pi}{2^{\ell}}>0\) for \(\ell\geq 0\), thus, the result holds when:
\[\sin\left(\pi\frac{x+1}{2^{\ell}}\right)>0\]
This is equivalent to when \(0<\frac{x+1}{2^{\ell}}<1\). Hence,
\[\frac{d}{dx}W(x)>\frac{d}{dx}W(x+1),\quad\text{if }x\in[0,2^{\ell}-1]\]
**Lemma 9**: _In the real domain, the ratio of the previous width to the current width increases monotonically, that is,_
\[\frac{W(x)}{W(x+1)}<\frac{W(x+1)}{W(x+2)},\quad x\in\left[0,2^{\ell}-1\right].\]
_Proof_ By lemma 8, the derivative of current width \(W^{\prime}(x+1)\) is lower than the derivative of the previous width \(W^{\prime}(x)\). This implies that,
\[\int_{x+1}^{x+2}W^{\prime}(z)dz<\int_{x}^{x+1}W^{\prime}(z)dz.\]
Let \(\Delta W(x)\) be equal to \(\int_{x}^{x+1}W^{\prime}(z)dz\). Since \(W(x)>0\) (see Remark 4), multiplying both sides by W(x), it is apparent that \(W(x)\Delta W(x+1)\) is less than \(W(x)\Delta W(x)\). Adding \(W(x)^{2}+W(x)\Delta W(x)\) to both sides and \(\Delta W(x)^{2}\) (which is \(>0\)) to the right hand side shows that \(W(x)^{2}+W(x)\Delta W(x)+W(x)\Delta W(x+1)\) is less than \(W(x)^{2}+2W(x)\Delta W(x)+\Delta W(x)^{2}\). Factoring gives,
\[W(x)\left[W(x)+\Delta W(x)+\Delta W(x+1)\right]<\left(W(x)+\Delta W(x)\right) ^{2}\]
Note that, by the fundamental theorem of calculus, \(\Delta W(x)=\int_{x}^{x+1}W^{\prime}(z)dz=W(x+1)-W(x)\), so \(W(x)+\Delta W(x)\) is equal to \(W(x+1)\). By the same argument, \(W(x)+\Delta W(x)+\Delta W(x+1)\) is equal to \(W(x+2)\). Hence, it follows that \(W(x)W(x+2)\) is less than \(W(x+1)^{2}\). Rearranging yields,
\[\frac{W(x)}{W(x+1)}<\frac{W(x+1)}{W(x+2)}.\]
**Corollary 9.1**: _The following relation holds, \(\rho_{\ell}(k)<\rho_{\ell}(k+1)\) for all \(k\in[0,2^{\ell}-2]\)._
_Proof_ This is a direct result of Lemmas 7 and 9. \(\Box\)
**Theorem 10**: _For all \(\ell,k\in\mathbb{N}\), \(0\leq k\leq 2^{\ell}-1\), \(\rho_{\ell}(k)\) is bounded with,_
\[\rho_{\ell}(k)\in\left(\frac{1}{4},\frac{3}{4}\right)\]
_Proof_ By Corollary 5.1 we have \(\rho_{\ell}(0)\in\left(\frac{1}{4},\frac{1}{2}\right]\), thus \(\rho_{\ell}(0)>\frac{1}{4}\). By Lemma 6 we have \(\rho_{\ell}(2^{\ell}-1)\in\left[\frac{1}{2},\frac{3}{4}\right)\), so \(\rho_{\ell}(2^{\ell}-1)<\frac{3}{4}\). Finally, by Corollary 9.1, we have \(\rho_{\ell}(k)<\rho_{\ell}(k+1)\) for \(k=0,\ldots,2^{\ell}-2\). This results in the following chain of inequalities:
\[\frac{1}{4}<\rho_{\ell}(0)<\rho_{\ell}(1)<\cdots<\rho_{\ell}\left(2^{\ell}-1 \right)<\frac{3}{4}\]
\(\Box\)
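A quick numerical confirmation of Theorem 10 and Corollary 9.1 (an illustrative check for small \(\ell\) only, not a proof):

```python
from math import pi, sin

def t(l, k):
    return sin(pi / 2 * k / 2**l) ** 2

def rho(l, k):
    return (t(l + 1, 2 * k + 1) - t(l + 1, 2 * k)) / (t(l, k + 1) - t(l, k))

for l in range(1, 8):
    vals = [rho(l, k) for k in range(2**l)]
    assert all(0.25 < v < 0.75 for v in vals)            # Theorem 10
    assert all(a < b for a, b in zip(vals, vals[1:]))    # Corollary 9.1
print("bounds and monotonicity hold for l = 1..7")
```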
#### Area Estimation
So far, the focus has been on establishing various important properties of the partitioning scheme \(P_{\ell}\). This section focuses on estimating the area under \(b_{n,m}\) given a partition \(P_{\ell}\), using concepts from Darboux integrals. The end goal is to show that the error can be made arbitrarily small given a large enough \(\ell\). This section also lays the foundations for a better, and more realistic, upper bound for the error in estimating Shapley values, which is covered in a later section.
**Definition 10**: The _supremum_ of a set \(S\subseteq\mathbb{R}\) is,
\[\sup(S)=\min\{x\in\mathbb{R}:x\geq s\ \ \forall s\in S\}.\]
The _infimum_ of \(S\) is,
\[\inf(S)=\max\{x\in\mathbb{R}:x\leq s\ \ \forall s\in S\}.\]
**Definition 11**: A Darboux sum takes a partition \(P=(z_{0},z_{1},\ldots,z_{n})\) of an interval \([a,b]\), where \(a=z_{0}<z_{1}<\cdots<z_{n}=b\), and a function \(f\) which maps \([a,b]\) to \(\mathbb{R}\). Each interval \([z_{i},z_{i+1}]\) is called a _subinterval_. Let
\[M_{i}=\sup_{x\in[z_{i},z_{i+1}]}f(x),\quad\text{and}\quad m_{i}=\inf_{x\in[z_ {i},z_{i+1}]}f(x),\quad i=0,\ldots,n-1\]
The upper and lower bounds of a sub interval's area are,
\[A_{U}(f,[z_{i},z_{i+1}])=(z_{i+1}-z_{i})M_{i},\quad and\quad A_{L}(f,[z_{i},z_{i+ 1}])=(z_{i+1}-z_{i})m_{i}\]
respectively. The _upper Darboux sum_ is:
\[U(f,P)=\sum_{i=0}^{n-1}A_{U}(f,[z_{i},z_{i+1}]),\]
and the _lower Darboux sum_ is:
\[L(f,P)=\sum_{i=0}^{n-1}A_{L}(f,[z_{i},z_{i+1}]),\]
There is a geometric interpretation of the Darboux sums. Each subinterval contributes a rectangle whose width is the subinterval width and whose height is either the supremum or the infimum of \(f(x)\) on that subinterval. The upper Darboux sum is the sum of the areas of these rectangles when their heights are taken to be the suprema.
_Remark 7_: Suppose instead of taking \(M_{i}\) or \(m_{i}\), we took arbitrary elements from each subinterval to represent the heights of the rectangles:
\[\sum_{i=0}^{n-1}(z_{i+1}-z_{i})f(x_{i})\quad x_{i}\in[z_{i},z_{i+1}]. \tag{3}\]
By the definitions of supremum and infimum (Definition 10), it is clear that for all \(i=0,\ldots,n-1\), \(m_{i}\leq f(x_{i})\leq M_{i}\). It follows that for all \(i\), the areas \(A_{L}(f,[z_{i},z_{i+1}])\) are less than or equal to the areas \((z_{i+1}-z_{i})f(x_{i})\), which are less than or equal to the areas \(A_{U}(f,[z_{i},z_{i+1}])\). This gives,
\[L(f,P)\leq\sum_{i=0}^{n-1}(z_{i+1}-z_{i})f(x_{i})\leq U(f,P).\]
As a result, there is a lot of freedom in choosing at which point of each subinterval to evaluate \(f(x)\).
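For the function of interest here, these sums are easy to compute explicitly. The sketch below brackets \(B_{n,m}=\gamma(n,m)\) between the lower and upper Darboux sums on \(P_{\ell}\), using the fact (noted in Remark 10 below) that \(b_{n,m}\) attains its maximum at \(x=m/n\) and is monotonic on either side of it; \(n\), \(m\) and \(\ell\) are arbitrary illustrative choices.

```python
from math import factorial, pi, sin

def b(n, m, x):
    return x**m * (1 - x)**(n - m)

def t(l, k):
    return sin(pi / 2 * k / 2**l) ** 2

def darboux_sums(n, m, l):
    """Lower and upper Darboux sums of b_{n,m} on the partition P_l."""
    peak = m / n
    lower = upper = 0.0
    for k in range(2**l):
        lo, hi = t(l, k), t(l, k + 1)
        vals = [b(n, m, lo), b(n, m, hi)]
        if lo <= peak <= hi:
            vals.append(b(n, m, peak))   # the one subinterval containing the maximum
        upper += (hi - lo) * max(vals)
        lower += (hi - lo) * min(vals)
    return lower, upper

n, m, l = 6, 2, 5
print(darboux_sums(n, m, l), factorial(m) * factorial(n - m) / factorial(n + 1))
```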
**Lemma 11**: _For any subinterval \([z_{i},z_{i+1}]\), and function \(f\) that maps elements of \([z_{i},z_{i+1}]\) to \(\mathbb{R}\),_
\[A_{L}(f,[z_{i},z_{i+1}])\leq\int\limits_{z_{i}}^{z_{i+1}}f(x)dx\leq A_{U}(f,[z _{i},z_{i+1}])\]
_Proof_ Consider \(A_{L}(f,[z_{i},z_{i+1}])\) which is equal to \((z_{i+1}-z_{i})m_{i}\) where \(m_{i}\) is equal to \(\inf_{x\in[z_{i},z_{i+1}]}f(x)\). Note that,
\[(z_{i+1}-z_{i})m_{i}=\int\limits_{z_{i}}^{z_{i+1}}m_{i}dx.\]
By Definition 10, for all \(x\in[z_{i},z_{i+1}]\), \(f(x)\) is greater than or equal to \(m_{i}\). Thus,
\[\int\limits_{z_{i}}^{z_{i+1}}f(x)dx\geq\int\limits_{z_{i}}^{z_{i+1}}m_{i}dx=A_{L} (f,[z_{i},z_{i+1}]).\]
A nearly identical argument can be used to show \(\int_{z_{i}}^{z_{i+1}}f(x)dx\leq A_{U}(f,[z_{i},z_{i+1}])\).
The next lemma bounds the error resulting from estimating the area under a subinterval by picking a random point on that subinterval and making a rectangle.
**Lemma 12**: _Take an arbitrary point \(y\) in the subinterval \([z_{i},z_{i+1}]\), then,_
\[\Bigg{|}\int\limits_{z_{i}}^{z_{i+1}}f(x)dx-(z_{i+1}-z_{i})f(y)\Bigg{|}\leq A _{U}(f,[z_{i},z_{i+1}])-A_{L}(f,[z_{i},z_{i+1}])\]
_Proof_ Recall that by Definition 11\(A_{L}(f,[z_{i},z_{i+1}])\) is equal to \((z_{i+1}-z_{i})m_{i}\), and \(A_{U}(f,[z_{i},z_{i+1}])\) is equal to \((z_{i+1}-z_{i})M_{i}\), where,
\[m_{i}=\inf\limits_{x\in[z_{i},z_{i+1}]}f(x),\quad and\] \[M_{i}=\sup\limits_{x\in[z_{i},z_{i+1}]}f(x).\]
By Definition 10, \(m_{i}\leq f(y)\leq M_{i}\), which implies \(A_{L}(f,[z_{i},z_{i+1}])\leq(z_{i+1}-z_{i})f(y)\leq A_{U}(f,[z_{i},z_{i+1}])\). Let us first assume that \((z_{i+1}-z_{i})f(y)\) is less than or equal to \(\int_{z_{i}}^{z_{i+1}}f(x)dx\), meaning \(\Bigg{|}\int\limits_{z_{i}}^{z_{i+1}}f(x)dx-(z_{i+1}-z_{i})f(y)\Bigg{|}\) is equal to \(\int\limits_{z_{i}}^{z_{i+1}}f(x)dx\ -\ (z_{i+1}-z_{i})f(y)\). Then, since Lemma 11 shows \(\int_{z_{i}}^{z_{i+1}}f(x)dx\leq A_{U}(f,[z_{i},z_{i+1}])\), it follows that
\[\int\limits_{z_{i}}^{z_{i+1}}f(x)dx-(z_{i+1}-z_{i})f(y)\leq A_{U}(f,[z_{i},z_ {i+1}])-(z_{i+1}-z_{i})f(y).\]
As shown above, it is also the case that \((z_{i+1}-z_{i})f(y)\) is greater than or equal to \(A_{L}(f,[z_{i},z_{i+1}])\). As a result,
\[A_{U}(f,[z_{i},z_{i+1}])-(z_{i+1}-z_{i})f(y)\leq A_{U}(f,[z_{i},z_{i+1}])-A_{L }(f,[z_{i},z_{i+1}]).\]
With similar argumentation, we can show this also holds when \((z_{i+1}-z_{i})f(y)\) is greater than or equal to \(\int_{z_{i}}^{z_{i+1}}f(x)dx\).
_Remark 8_ Consider the following approach to approximating the area under \(f\) on the interval \([z_{i},z_{i+1}]\): choose an arbitrary \(x\in[z_{i},z_{i+1}]\) and multiply \(f(x)\) by the width of the interval. Clearly, in this context, the error is equal to \(\Big{|}(z_{i+1}-z_{i})f(x)-\int_{z_{i}}^{z_{i+1}}f(x)dx\Big{|}\). By Lemma 12, it follows that \(A_{U}(f,[z_{i},z_{i+1}])-A_{L}(f,[z_{i},z_{i+1}])\) can be used as an upper bound for the error when using this approach.
**Corollary 12.1**: _Given a partition \(P_{\ell}=\left(t_{\ell}(0),\ldots,t_{\ell}\left(2^{\ell}\right)\right)\), the upper bound of error when approximating area under \(b_{n,m}(x)\) over the \(k^{th}\) subinterval is defined as:_
\[UE_{n,m}(\ell,k)=A_{U}(b_{n,m},[t_{\ell}(k),t_{\ell}(k{+}1)])-A_{L}(b_{n,m},[t_{\ell}(k),t_{\ell}(k{+}1)])\]
_Remark 9_ Note that \(b_{n,m}(x)\) has at most one local maximum for \(x\in[0,1]\), for all valid \(n,m\). As a result, given a partition \(P\) of \([0,1]\), on every subinterval of \(P\) (except when the subinterval contains the local maximum, which occurs in only one subinterval), \(b_{n,m}(x)\) is either monotonically increasing or decreasing. For each subinterval \([z_{i},z_{i+1}]\) of \(P\) on which \(b_{n,m}(x)\) is monotonically increasing, \(L(b_{n,m},[z_{i},z_{i+1}])\) is equal to \(b_{n,m}(z_{i})\) and \(U(b_{n,m},[z_{i},z_{i+1}])\) is equal to \(b_{n,m}(z_{i+1})\). When \(b_{n,m}(x)\) is decreasing over \([z_{i},z_{i+1}]\), \(L(b_{n,m},[z_{i},z_{i+1}])\) is equal to \(b_{n,m}(z_{i+1})\) and \(U(b_{n,m},[z_{i},z_{i+1}])\) is equal to \(b_{n,m}(z_{i})\).
**Corollary 12.2**: _Given a partition \(P_{\ell}=\left(t_{\ell}(0),\ldots,t_{\ell}\left(2^{\ell}\right)\right)\), the following monotonic-assumption upper bound of error when approximating the area under \(b_{n,m}(x)\) over the \(k^{th}\) subinterval is defined as:_
\[\overline{UE}_{n,m}(\ell,k)=(t_{\ell}(k+1)-t_{\ell}(k))\left|b_{n,m}(t_{\ell} (k+1))-b_{n,m}(t_{\ell}(k))\right|\]
This definition makes it easy to find an error upper bound for all subintervals except one: the subinterval containing the local maximum.
**Corollary 12.3**: _If \(b_{n,m}(x)\) is monotonic over \([t_{\ell}(k),t_{\ell}(k+1)]\), then \(UE_{n,m}(\ell,k)\) is equal to \(\overline{UE}_{n,m}(\ell,k)\)._
**Lemma 13**: _Let us suppose a partition \(P_{\ell-1}\) is refined to partition \(P_{\ell}\). Given a subinterval \([t_{\ell-1}(k),t_{\ell-1}(k+1)]\) of \(P_{\ell}\), the monotonic-assumption upper bound of error is reduced over that interval by a factor of at least \(3/4\), that is:_
\[\frac{3\cdot\overline{UE}_{n,m}(\ell-1,k)}{4}>\left(\overline{UE}_{n,m}(\ell, 2k)+\overline{UE}_{n,m}(\ell,2k+1)\right)\]
_Proof_ Let us consider Figure 7. Define the following quantities,
\[x=t_{\ell-1}(k+1)-t_{\ell-1}(k),\quad y=\left|b_{n,m}(t_{\ell-1}(k+1))-b_{n,m} (t_{\ell-1}(k))\right|,\]
where \(x\) and \(y\) represent the width and the change in height of the previous subinterval. Also define,
\[x_{1}=t_{\ell}(2k+1)-t_{\ell}(2k),\quad y_{1}=\left|b_{n,m}(t_{\ell}(2k+1))-b_ {n,m}(t_{\ell}(2k))\right|,\]
where \(x_{1}\) and \(y_{1}\) represent the width and change in height of the left part of split previous subinterval. It follows that the width and change in height of the right part of the split subinterval have values,
\[x_{2}=x-x_{1},\quad y_{2}=y-y_{1}.\]
It is concluded that,
\[\overline{UE}_{n,m}(\ell-1,k)=x\cdot y,\]
\[\overline{UE}_{n,m}(\ell,2k)=x_{1}\cdot y_{1},\text{ and }\] \[\overline{UE}_{n,m}(\ell,2k+1)=x_{2}\cdot y_{2}.\]
Finally, let us define \(\overline{x}=\frac{x_{1}}{x},\) and \(\overline{y}=\frac{y_{1}}{y}.\) Note that when considering \(\overline{UE}_{n,m}(\ell-1,k),\)
\[\overline{x}=\frac{x_{1}}{x}=\frac{t_{\ell}(2k+1)-t_{\ell}(2k)}{t_{\ell}(2k+2 )-t_{\ell}(2k)}=\rho_{\ell-1}(k),\]
by Remark 3. Thus by Theorem 10, \(\frac{1}{4}<\overline{x}<\frac{3}{4}.\) Simultaneously, \(0\leq\overline{y}\leq 1.\) We proceed by simplifying the following expression
\[\frac{\overline{UE}_{n,m}(\ell,2k)+\overline{UE}_{n,m}(\ell,2k+1)}{\overline{ UE}_{n,m}(\ell-1,k)}=\frac{x_{1}y_{1}+x_{2}y_{2}}{xy}\]
Plugging in the definition for \(x_{2}\) and \(y_{2},\) the above is equivalent to,
\[\frac{x_{1}y_{1}+(x-x_{1})(y-y_{1})}{xy}\]
Doing the product and applying the definitions for \(\overline{x}\) and \(\overline{y}\) yields
\[2\overline{x}\,\overline{y}-\overline{y}-\overline{x}+1.\]
Rearranging, it can be shown that the above is equal to
\[2\left(\overline{x}-\frac{1}{2}\right)\left(\overline{y}-\frac{1}{2}\right)+ \frac{1}{2}.\]
Let us assume that \((\overline{y}-1/2)\) is positive without loss of generality. Then assigning to \(\overline{x}\) and \(\overline{y}\) their respective maximum values, the following inequality is obtained,
\[\frac{\overline{UE}_{n,m}(\ell,2k)+\overline{UE}_{n,m}(\ell,2k+1)}{\overline{ UE}_{n,m}(\ell-1,k)}<2\left(\frac{3}{4}-\frac{1}{2}\right)\left(1-\frac{1}{2} \right)+\frac{1}{2}=\frac{3}{4}\]
A similar argument can be used for when \((\overline{y}-1/2)\) is negative, and the above is trivially correct for \((\overline{y}-1/2)\) equals zero. \(\Box\)
**Corollary 13.1**: _For all monotonic sub-intervals \([t_{\ell}(k),t_{\ell}(k+1)]\), the upper bound of error \(UE_{n,m}(\ell,k)\) is reduced by at least \(25\%\) when the partition is refined from \(P_{\ell}\) to \(P_{\ell+1}\)._
Figure 7: Example visualization of \(\overline{UE}_{n,m}\)’s change during refinement for an arbitrary function.
Proof: This follows as a direct result of Lemma 13.
Next, let us consider error over the whole of the approximation.
**Definition 12**: We denote the sum of upper bounds for error over all sub-intervals as:
\[SUE_{n,m}(\ell)=\sum_{k=0}^{2^{\ell}-1}UE_{n,m}(\ell,k),\]
and the sum of upper bounds for error over all sub-intervals with the monotonic assumption for all sub-intervals as:
\[\overline{SUE}_{n,m}(\ell)=\sum_{k=0}^{2^{\ell}-1}\overline{UE}_{n,m}(\ell,k).\]
To discuss how error evolves with respect to granularity of our partition, having an upper bound for initial error is critical.
**Definition 13**: We denote initial error as,
\[\sigma_{n,m}=SUE_{n,m}(0)\]
_Remark 10_: Using the first and second derivative tests, one can verify that \(b_{n,m}(m/n)\) is the supremum of \(b_{n,m}(x)\) for \(x\in[0,1]\). One can also easily show the infimum of \(b_{n,m}(x)\) is \(0\). Thus,
\[\sigma_{n,m}=SUE_{n,m}(0)=UE_{n,m}(0,0)=b_{n,m}\left(\frac{m}{n}\right)\]
**Lemma 14**: \[\overline{SUE}_{n,m}(\ell+1)\leq\frac{3}{4}\overline{SUE}_{n,m}(\ell)\]
Proof: By Definition 12,
\[\overline{SUE}_{n,m}(\ell+1)=\sum_{k=0}^{2^{\ell+1}-1}\overline{UE}_{n,m}( \ell+1,k).\]
Rearranging the values in the sum yields,
\[\overline{SUE}_{n,m}(\ell+1)=\sum_{k=0}^{2^{\ell}-1}\left[\overline{UE}_{n,m} (\ell+1,2k)+\overline{UE}_{n,m}(\ell+1,2k+1)\right].\]
Hence by Lemma 13,
\[\overline{SUE}_{n,m}(\ell+1)<\sum_{k=0}^{2^{\ell}-1}\left[\frac{3}{4} \overline{UE}_{n,m}(\ell,k)\right]\,=\frac{3}{4}\overline{SUE}_{n,m}(\ell).\]
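The contraction of Lemma 14 can also be observed numerically; the sketch below prints the ratio \(\overline{SUE}_{n,m}(\ell+1)/\overline{SUE}_{n,m}(\ell)\) for an arbitrary choice of \(n\) and \(m\) (starting from \(\ell=1\), since for this choice the \(\ell=0\) monotonic-assumption bound is degenerate).

```python
from math import pi, sin

def b(n, m, x):
    return x**m * (1 - x)**(n - m)

def t(l, k):
    return sin(pi / 2 * k / 2**l) ** 2

def sue_bar(n, m, l):
    """Monotonic-assumption upper bound on the total error, summed over P_l."""
    return sum((t(l, k + 1) - t(l, k)) * abs(b(n, m, t(l, k + 1)) - b(n, m, t(l, k)))
               for k in range(2**l))

n, m = 6, 2
prev = sue_bar(n, m, 1)
for l in range(2, 9):
    cur = sue_bar(n, m, l)
    print(l, round(cur / prev, 4))   # each ratio stays below 3/4 in this example
    prev = cur
```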
## 5 Example
### Games
#### 5.1.1 Voting Games
To begin, we define a weighted voting game [4] as follows:
**Definition 14**: Consider a game \(G=(F,V)\), \(F=\{1,2,\ldots,N\}\), where each player \(i\) has voting power \(w_{i}\geq 0\), and a coalition wins if its total votes sum to at least a threshold \(q\). Given a subset \(S\subseteq F\), we have:
\[V(S)=\begin{cases}1&\quad\text{if }\sum\limits_{i\in S}w_{i}\geq q\\ 0&\quad\text{otherwise}\end{cases}\]
We denote this game as a _weighted voting game_.
The circuit implementation for this game (cf. Figure 8) is quite intuitive so long as each weight \(w_{i}\) can be represented as a fixed point number, where \(\left|Votes\right\rangle,\left|Out\right\rangle\) are initialized to \(\left|0\right\rangle\), and \(\left|Voters_{S}\right\rangle=\left|x_{0}x_{1}\cdots x_{N}\right\rangle\) with,
\[x_{i}=\begin{cases}1&\quad\text{if }i\in S\\ 0&\quad\text{otherwise}\end{cases}\]
and
\[\geq(x,q)=\begin{cases}1&\quad\text{if }x\geq q\\ 0&\quad\text{otherwise}\end{cases}\]
For our particular example we will simplify this circuit even further. Suppose we have a three-player weighted voting game with \(w_{1}=3,w_{2}=2,w_{3}=1\) and \(q=4\). One benefit of this voting game is that, given 3 bits to store the votes, instead of using the \(\geq q\) circuit we can simply check the most significant qubit (as it will be flipped in exactly those situations where the threshold has been passed).
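Classically, the value function of this game is a one-liner; the sketch below mirrors Definition 14 and also illustrates the most-significant-bit observation (with weights summing to 6, the bit worth 4 is set exactly when the total reaches the threshold \(q=4\)).

```python
def make_voting_value(weights, q):
    """Value function of a weighted voting game (Definition 14)."""
    def value(S):
        return 1.0 if sum(weights[i] for i in S) >= q else 0.0
    return value

weights, q = {1: 3, 2: 2, 3: 1}, 4
value = make_voting_value(weights, q)
for S in [{1}, {2, 3}, {1, 2}, {1, 2, 3}]:
    total = sum(weights[i] for i in S)
    print(S, value(S), (total >> 2) & 1)  # value agrees with the bit worth 4, since total <= 6 < 8
```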
We can work out the Shapley values by hand:
\[V(\emptyset)=0 V(\{1,2\})=1\]
Figure 8: Circuit implementation for the weighted voting game.
\[V(\{1\}) =0 V(\{1,3\}) =1\] \[V(\{2\}) =0 V(\{2,3\}) =0\] \[V(\{3\}) =0 V(\{1,2,3\}) =1\]
From this we have:
\[\Phi_{1} =\sum_{S\subseteq F\setminus\{i\}}\gamma(|F\setminus\{i\}|,|S|) \cdot(V(S\cup\{i\})-V(S))\] \[=\gamma(2,0)\cdot(V(\{1\})-V(\emptyset))+\gamma(2,1)\cdot(V(\{1, 2\})-V(\{2\}))\] \[\quad+\gamma(2,1)\cdot(V(\{1,3\})-V(\{3\}))+\gamma(2,2)\cdot(V( \{1,2,3\})-V(\{2,3\}))\] \[=\gamma(2,0)\cdot(0-0)+\gamma(2,1)\cdot(1-0)+\gamma(2,1)\cdot(1 -0)+\gamma(2,2)\cdot(1-0)\] \[=2\cdot\gamma(2,1)+\gamma(2,2)\] \[=2\cdot\frac{1!(2-1)!}{(2+1)!}+\frac{2!(2-2)!}{(2+1)!}\] \[=2\cdot\frac{1}{6}+\frac{1}{3}\] \[=\frac{2}{3}\]
This can be repeated to get,
\[\Phi_{2}=\Phi_{3}=\frac{1}{6}\]
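The same numbers fall out of a brute-force computation using the direct formula from earlier (purely a classical cross-check of the hand calculation above):

```python
from itertools import combinations
from math import factorial

def shapley_value(players, value, i):
    others = [p for p in players if p != i]
    n = len(others)
    return sum(
        factorial(len(S)) * factorial(n - len(S)) / factorial(n + 1)
        * (value(set(S) | {i}) - value(set(S)))
        for m in range(n + 1) for S in combinations(others, m)
    )

weights, q = {1: 3, 2: 2, 3: 1}, 4
value = lambda S: 1.0 if sum(weights[i] for i in S) >= q else 0.0
print([shapley_value([1, 2, 3], value, p) for p in [1, 2, 3]])
# [0.666..., 0.166..., 0.166...] -- i.e. 2/3, 1/6, 1/6
```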
#### 5.1.2 Random Games
Though superadditivity is a common assumption in the literature, our framework also works in non-superadditive contexts. This is because the equation Shapley derived for Shapley values does not rely on the assumption of superadditivity [12].
## 6 Conclusion
We have addressed the context of quantum AI algorithms for supporting decision-making processes. In such a context, the problem of explainability is amplified, since measuring a quantum system destroys information. The classical concept of Shapley values for post-hoc explanations does not translate trivially to quantum computing. We have proposed a novel algorithm which reduces the problem of accurately estimating the Shapley values of a quantum algorithm to the far simpler problem of estimating the true mean of a binomial distribution, in polynomial time. We have assessed the efficacy of the algorithm through an analytical error analysis, in which we provide an upper bound on the error associated with the algorithm.
2308.08416 | Interplay between altermagnetism and nonsymmorphic symmetries generating
large anomalous Hall conductivity by semi-Dirac points induced anticrossings | We investigate the interplay between altermagnetic spin-splitting and
nonsymmorphic symmetries using the space group no. 62 as a testbed. Studying
different magnetic orders by means of first-principles calculations, we find
that the altermagnetism (AM) is present in the C-type magnetic configuration
while it is absent for the G-type and A-type configurations due to different
magnetic space group types. The nonsymmorphic symmetries constrain the system
to a four-fold degeneracy at the border of the Brillouin zone with semi-Dirac
dispersion. In the case of large hybridization as for transition metal
pnictides, the interplay between AM and nonsymmorphic symmetries generates an
intricate network of several crossings and anticrossings that we describe in
terms of semi-Dirac points and glide symmetries. When we add the spin-orbit
coupling (SOC), we find a Neel-vector dependent spin-orbit splitting at the
time-reversal invariant momenta points since the magnetic space groups depend
on the Neel vector. The magnetic space group type-I produces antiferromagnetic
hourglass electrons that disappear in the type-III. When the Neel vector is
along x, we observe a glide-protected crossing that could generate a nodal-line
in the altermagnetic phase. The SOC splits the remaining band crossings and
band anticrossings producing a large anomalous Hall effect in all directions
excluding the Neel-vector direction | Amar Fakhredine, Raghottam M. Sattigeri, Giuseppe Cuono, Carmine Autieri | 2023-08-16T15:02:13Z | http://arxiv.org/abs/2308.08416v2 | Interplay between altermagnetism and nonsymmorphic symmetries generating large anomalous Hall conductivity by semi-Dirac points induced anticrossings
###### Abstract
We investigate the interplay between altermagnetism spin-splitting and nonsymmorphic symmetries using the space group no. 62 as a testbed. Studying different magnetic orders by means of _first-principles_ calculations, we find that the altermagnetism (AM) is present in the C-type magnetic configuration while it is absent for the G-type and A-type configurations due to different magnetic space group types. The nonsymmorphic symmetries constrain the system to a four-fold degeneracy at the border of the Brillouin zone with semi-Dirac dispersion. In the case of large hybridization as for transition metal pnictides, the interplay between AM and nonsymmorphic symmetries generates an intricate network of several crossings and anticrossings that we describe in terms of semi-Dirac points and glide symmetries. When we add the spin-orbit coupling (SOC), we find a Neel-vector dependent spin-orbit splitting at the time-reversal invariant momenta points since the magnetic space groups depend on the Neel vector. The magnetic space group type-I produces antiferromagnetic hourglass electrons that disappear in the type-III. When the Neel vector is along x, we observe a glide-protected crossing that could generate a nodal-line in the altermagnetic phase. The SOC splits the remaining band crossings and band anticrossings producing a large anomalous Hall effect in all directions excluding the Neel-vector direction.
## I Introduction
Until a few years ago, the two collinear magnetic phases were known in condensed matter physics as ferromagnetism and antiferromagnetism displaying completely different properties. Very recently, a new variant of the collinear antiferromagnetism was discovered called altermagnetism (AM) or collinear antiferromagnets with non-interconvertible spin-structure motif pair [1; 2; 3; 4; 5; 6] which hosts both properties of the ferromagnets and usual antiferromagnets. Unlike ferromagnetism, where the crystal has a net magnetization and there is time-reversal symmetry (TRS) breaking, and unlike antiferromagnetism, where the total magnetization is zero, altermagnetism hosts systems in which the magnetization in the real space is zero but there is breaking of the spin degeneracy in the reciprocal space like in ferromagnetic compounds. The condition to observe the AM is the absence of a translation, an inversion or a combination of both that maps the spin-up charge to the spin-down charge. In this case, only a roto-translation or mirror can map the spin-up charge in the spin-down charge. From the point of view of group theory, the altermagnetic compounds must belong to type-I and type-III magnetic space groups (MSG). [7] The type-I MSGs are crystallographic space groups without any additional symmetry while the type-III are crystallographic space groups with additional antisymmetry versions of half of the symmetry operations. [8]
The altermagnetic systems exhibit nonrelativistic spin-splitting and may produce an anomalous Hall effect (AHE) [9; 10] once the relativistic effects are included. The AHE is enhanced by avoided crossings, also called anticrossings. While further investigations should be made in order to confirm the presence of the AHE, AM can produce an AHE along the direction of the Hall vector; however, only a very limited number of altermagnetic systems are metallic. Regarding technological applications, altermagnets could assume a leading role in realizing concepts in spincaloritronics [11]. They can also be used in Josephson junctions [12], in room-temperature magnetoresistance in an all-antiferromagnetic tunnel junction [13] and to generate neutral currents for spintronics [14].
One of the space groups in which the altermagnetic phase was established is the Pnma space group [7]. In a recent work, a new route to search altermagnetic states was introduced and it was taken in consideration the example of the Pnma perovskites [15]; it was shown that distinctive signatures on the band structure emerge from the angular variation of magnetization components in altermagnets. These signatures manifest as protected nodal lines along mirror planes of the crystal structure and pinch points on the Fermi surface, which act akin to type-II Weyl nodes [15]. The Pnma presents several nonsymmorphic symmetries [16; 17; 18], which are a composition of fractional lattice translations with point-group operations, like mirror reflection (glide plane) or rotation (screw axis). The glide symmetry or glide reflection symmetry is a symmetry operation that consists of a combination of a mirror reflection with respect to a plane and then a translation parallel to that plane. The eigenvalues of the glide symmetry are +1 and -1, and the eigenvectors are orthogonal, therefore, the bands produce a protected band crossing. In the absence of
magnetism and SOC, the nonsymmorphic symmetries force the electronic bands to be degenerate at the borders of the Brillouin zone with the presence of semi-Dirac points [16; 19], generating linear magnetoresistance [20], unconventional topological phases [21; 22; 23; 24; 25; 26; 27] with unique surface states, Fermi surfaces with reduced dimensionality [28] and topological nonsymmorphic crystalline superconductivity with \(\mathbb{Z}_{4}\) topological invariant [26; 27]. In the presence of SOC and magnetism, the system presents a partial or selective removal of the degeneracy [28]. The semi-Dirac points exhibit linear band dispersion in one direction (the high-symmetry direction at least in the space-group 62) and quadratic band dispersion in the orthogonal direction [29]. The existence of these semi-Dirac points in the space group 62 and its relation with the nonsymmorphic symmetries has been widely demonstrated in literature [20; 28; 30; 31; 32]. The connection between semi-Dirac points and glide symmetry will be described later in the text.
An example of a material with the Pnma crystal phase is the CrAs compound [33; 34] in the MnP-type phase, which is a rare itinerant antiferromagnet [33; 34] that could exhibit AHE due to AM. Additionally, the CrAs system is not ionic, therefore, \(p\)- and \(d\)-bands are strongly hybridizing with a large bandwidth of the order of 11 eV [35]. This grants us the possibility to observe induced AM in the \(p\)-bands. The magnetic ground state of the CrAs is a helimagnetic phase, with the components of the magnetic moment [33; 34; 36; 37] in the \(a\)-\(b\) plane and with a spin-orbit coupling mostly from the \(p\)-states of As [32; 35; 38]. Here we investigate the CrAs as a testbed exclusively in hypothetical collinear magnetic orders since we are mainly interested in the interplay between nonsymmorphic symmetries and AM. Further investigations are necessary to extend these results to the non-collinear phase and to understand how much AM will survive in the non-collinear phase [39]. CrAs will work as a prototype for other itinerant antiferromagnets of the same space group as MnPd\({}_{2}\), FeP, MnP or CuMnAs that may have a smaller AHE due to the lower spin-orbit coupling [20]. Different from most studied micromagnetic Pnma compounds until now [40; 41; 42] such as LaMnO\({}_{3}\), YVO\({}_{3}\) and CaCrO\({}_{3}\) that have their magnetic atoms in the Wyckoff positions 4b [43], the atoms of CrAs are only present in the 4c Wyckoff positions, therefore, the system belongs to a different magnetic space group that should be more symmetric and host unexplored properties.
In this paper, we study the interplay between the altermagnetic properties of the Pnma phase and the nonsymmorphic symmetries using first-principles calculations. The computational details are reported in Appendix A. Investigating different collinear magnetic configurations such as the A-type, G-type and C-type shown in Fig. 1(a-c), respectively, we find that the altermagnetic spin-splitting is present only in the C-type configuration while it is absent in the A-type and G-type ones. Even if the C-type is not the ground state [35], we are interested in the interplay between altermagnetism and nonsymmorphic symmetries, which is a general aspect that could appear in several other compounds. We find that the altermagnetic spin-splitting can be up to 0.5 eV in the \(d\)-bands close to the Fermi level while it gets reduced to 0.2 eV in the \(p\)-bands. Furthermore, we find that the interplay between AM and nonsymmorphic symmetries produces an intricate network of crossings and anticrossings in large areas of the Brillouin zone. As a result, the presence of nonsymmorphic symmetries generates a large anomalous Hall conductivity. The paper is organized as follows: in the next Section, we show the main results, while the last Section is devoted to conclusions.
## II Results
This Section is divided into two subsections with the results without SOC in the first one and the results with SOC in the second one. We report the electronic properties focusing on the interplay between AM and nonsymmorphic symmetries by considering different magnetic orders. The semi-Dirac points generate a large number of crossings and anticrossings that we describe. When we add the SOC in the second subsection, the calculation of the anomalous Hall conductivity (AHC) confirms a relatively large value due to the several crossings and anticrossings.
Figure 1: Crystal structure and collinear magnetic orders for CrAs as (a) A-type, (b) G-type and (c) C-type. The balls with colors blue and red represent the Cr atoms with the opposite spin moments. Green balls represent the As atoms. (d) Symmetries of the irreducible Brillouin zone of the orthorhombic primitive cell for the C-type magnetic order. The C-type is the only magnetic order that can host AM. The position of the high-symmetry \(k\)-points U\({}_{1}\), U\({}_{2}\), R\({}_{1}\) and R\({}_{2}\) are highlighted in green and yellow. The dashed magenta line indicates the high-symmetry path U\({}_{1}\)-\(\Gamma\)-U\({}_{2}\) which is one of the possible paths to show the AM in this magnetic space group.
### Electronic properties without SOC and nonsymmorphic symmetries
In the nonmagnetic phase, the bands are twofold degenerate at any \(k\)-vector of the Brillouin zone due to the presence of the combined inversion-time-reversal symmetry. This is the Kramers degeneracy, and it is robust against the spin-orbit coupling interaction. The nonsymmorphic symmetries produce additional degeneracy at the border of the Brillouin zone. The \(\Gamma\) and R points are time-reversal invariant momenta (TRIM) points. In the Pnma space group without magnetism, the \(\Gamma\) point has only the Kramers degeneracy; two additional nonsymmorphic symmetries along the SR line produce an eight-fold degeneracy at R and S, while X, Y, Z, U and T have degeneracy 4 due to one nonsymmorphic symmetry. When we consider the C-type magnetism, \(\Gamma\) is a TRIM point and it still presents a twofold degeneracy. In this magnetic configuration, the TRIM points R, S and T have fourfold degeneracy due to an additional nonsymmorphic symmetry, while all the other high-symmetry points have degeneracy 2. The fourfold degeneracy at the TRIM point R makes this space group one of the few where we can study the coexistence of AM and nonsymmorphic symmetries.
We find AM in the C-type magnetic order, but not in the other magnetic orders, for the following reasons. In the space group no. 62, there are 16 different magnetic space groups; 8 of these magnetic space groups can host AM, while the other 8 cannot [44]. In the C-type magnetic order, the magnetic space group, which depends on the Néel vector, belongs to type-I or type-III; both of these types support AM. Changing the magnetic configuration to G-type or A-type, the magnetic space group changes to type-II or type-IV; therefore, different magnetic configurations can have different altermagnetic properties.
Figure 4: Schematic representation of the network of crossings and anticrossings of altermagnetic semi-Dirac fermions without SOC. The spin-up channel is shown in blue, while the spin-down channel is shown in red. The solid line is used for the positive eigenvalue of the glide operator, while the dashed line is used for the negative eigenvalue of the glide operator. The black circles represent the band crossings between bands with opposite glide eigenvalues, while the green boxes represent the band anticrossings between bands with the same glide eigenvalues.
Figure 3: Nonrelativistic spin-splitting along the R\({}_{1}\)-\(\Gamma\)-R\({}_{2}\) for the first bands above the Fermi level at the \(\Gamma\) point. The spin-splitting reaches the value of 0.46 eV. For every couple of bands producing a finite spin-splitting at the TRIM point, another couple of bands will produce an opposite spin-splitting.
Figure 2: Band structure of the C-type magnetic order along the \(k\)-path R\({}_{1}\)-\(\Gamma\)-R\({}_{2}\). The spin-up channel is shown in blue, while the spin-down channel is shown in red. The grey area represents the non-relativistic spin-splitting. The band structure is plotted between -0.5 and +0.5 eV where the \(d\)-electrons dominate. The black circles represent the band crossings protected by glide symmetry.
In CrAs, both Cr and As occupy 4c Wyckoff positions; therefore, the altermagnetic properties of Pnma CrAs are different from those of the Pnma perovskites that have their magnetic atoms in 4b Wyckoff positions [41; 42]. A spin symmetry group analysis [45] would provide additional understanding of the system.
The BZ for the C-type magnetic order is reported in Fig. 1(d). With subscripts 1 and 2, we indicate the two points in the \(k\)-space that have opposite non-relativistic spin-splitting. In this case, the \(k\)-path connecting the \(\Gamma\) point with U and R shows altermagnetic spin-splitting. In this paper, we will take as an example for the discussion the altermagnetic properties along the path R\({}_{1}\)-\(\Gamma\)-R\({}_{2}\). Similar arguments will be valid for the altermagnetic properties along U\({}_{1}\)-\(\Gamma\)-U\({}_{2}\), however, the presence of multiple \(k\)-paths where the AM is present means that there is a large area of the k-space where the altermagnetic spin-splittings reside.
We stress the relevant role of the semi-Dirac points in the generation of the band crossings and anticrossings. We focus on the semi-Dirac points at the R point, but they are present at all borders of the Brillouin zone. From the nonsymmorphic symmetries in space group no. 62, we obtain that the energy spectrum at the point R=(\(\pi\),\(\pi\),\(\pi\)) is always a semi-Dirac point with the dispersion relations for spin-up and spin-down as:
\[E_{\uparrow}(\pi-\epsilon,\pi-\epsilon,\pi-\epsilon)=\varepsilon_{0}\pm v\epsilon \tag{1}\]
\[E_{\downarrow}(\pi-\epsilon,\pi-\epsilon,\pi-\epsilon)=\varepsilon_{0}\pm v\epsilon \tag{2}\]
with \(\varepsilon_{0}\) being a combination of the on-site energies and \(v\) a combination of the first-neighbor hopping parameters [28]. Since the Dirac velocity does not depend on the energy difference between majority and minority electrons, it is the same for the spin-up and spin-down channels. In the case of large hybridization between the \(p\)-\(d\) electrons, as happens in CrAs, the Dirac velocities are also large. The presence of multiple orbitals and semi-Dirac bands with large Dirac velocities, which can be positive and negative, favors the creation of several band crossings and band anticrossings, which are observable in the band structure presented in Fig. 2. What we show in Fig. 2 for the Cr-\(d\) bands close to the Fermi level also happens qualitatively for the As-\(p\) bands, as described in Appendix B. The bands with opposite Dirac velocities \(+v\) and \(-v\) also have opposite glide eigenvalues and orthogonal eigenvectors. Therefore, the bands with opposite glide eigenvalues are orthogonal and do not hybridize with each other [46; 16; 47]. This is exactly true for the bands with linear dispersion very close to the border of the Brillouin zone, while away from the border the bands tend to show mixing of the eigenvectors. In Fig. 2, we mark with black circles the band crossings protected by the glide symmetry.
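The statement that bands with opposite glide eigenvalues cannot hybridize follows from a standard one-line symmetry argument (our own sketch, not an equation of this paper): if the glide operator \(\hat{G}\) commutes with \(H(\mathbf{k})\) on the relevant glide-invariant plane and two Bloch states carry distinct unimodular glide eigenvalues \(g_{+}\neq g_{-}\), then

\[\langle\psi_{+}|H|\psi_{-}\rangle=\langle\psi_{+}|\hat{G}^{\dagger}H\hat{G}|\psi_{-}\rangle=g_{+}^{*}\,g_{-}\,\langle\psi_{+}|H|\psi_{-}\rangle\quad\Rightarrow\quad\langle\psi_{+}|H|\psi_{-}\rangle=0,\]

so any off-diagonal matrix element between the two glide sectors vanishes and the corresponding crossings marked in Fig. 2 are protected.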
Now, we move to the analysis of the nonrelativistic spin-splitting in the presence of nonsymmorphic symmetries. If we consider the Kramers pair of bands which at the \(\Gamma\) point lies 0.1 eV above the Fermi level and follow it, we can see that the spin-up and spin-down channels each undergo a glide-protected band crossing and then reach the R point at two different energies, surprisingly creating a finite nonrelativistic spin-splitting at a TRIM point, as shown in Fig. 3. The nonzero value of the spin-splitting at the point R apparently contradicts the concept of a TRIM point, but another pair of bands produces the opposite nonrelativistic spin-splitting, so that the total spin-splitting at the TRIM point remains zero. Due to these exceptional conditions, the nonrelativistic spin-splitting for those bands, reported in Fig. 3, reaches a maximum of 0.46 eV.
In Fig. 4, we plot a schematic figure with the spin channels in blue and red colors, with the solid (dashed) line used for the positive (negative) eigenvalue of the glide operator. We have the expected intrachannel avoided band crossings and the interchannel band crossings. Additionally, we have intrachannel band crossings, namely crossings between bands with opposite glide eigenvalues. In the next subsection, we discuss how these crossings evolve with SOC as a function of the Néel vector orientation.
Figure 5: Band structure along the R\({}_{1}\)-S-R\({}_{2}\) \(k\)-path for the C-type magnetic order including SOC with the Néel vector along the (a) x-axis, (b) y-axis and (c) z-axis. No AM is present along this \(k\)-path. In panel (b), we obtain the antiferromagnetic hourglass fermions with a magnetic space group of type-I. No hourglass fermions are found in the magnetic space groups of type-III.
### Electronic properties with SOC and Anomalous Hall conductivity
In the non-magnetic case, when we consider the SOC, we have shown in a previous work [28] a selective removal of the band degeneracy due to the nonsymmorphic symmetries at the TRIM points and at the borders of the Brillouin zone. In the C-type magnetic configuration, when we include the SOC interaction, we observe a selective removal of the spin degeneracy depending on the Néel vector direction and, consequently, on the magnetic space group. Indeed, varying the Néel vector direction changes the magnetic space group. We define the x-, y- and z-axis as parallel to the \(a\), \(b\) and \(c\) crystallographic axes, respectively, as defined in Fig. 1. When the Néel vector is along y, the magnetic space group is 62.441, which is type-I. When the Néel vector is along x or z, the magnetic space groups are 62.446 and 62.447, respectively, which are type-III. This selective removal of the degeneracy is valid along \(k\)-paths with and without AM. Looking at Figs. 5 and 6, we can see that the spin-orbit splitting is qualitatively and quantitatively different depending on the magnetic space group.
It was shown that, in the non-magnetic case, the SOC acts selectively at the TRIM points and at the Brillouin zone border due to the nonsymmorphic symmetries [32]. We will investigate the SOC effects in the magnetic phase as a function of the Néel vector, starting from the path R\({}_{1}\)-S-R\({}_{2}\) where no AM is present. Without SOC, the entire RS line, including the TRIM points, has degeneracy 4. The band structure with SOC along the R\({}_{1}\)-S-R\({}_{2}\) \(k\)-path is reported in Fig. 5. When the Néel vector is along x, the spin-orbit coupling splits the bands at R but not at S. When the Néel vector is along z, the spin-orbit coupling splits the bands at S but not at R. When the Néel vector is along y, the spin-orbit coupling splits the bands at both R and S, and therefore, in this latter case, we find antiferromagnetic hourglass fermions. These hourglass fermions are relativistic features already present along the RS line without magnetism [48; 21; 49]. Recently, ferromagnetic and antiferromagnetic hourglass fermions were investigated; however, the antiferromagnetic hourglass fermions that we find in the magnetic space group 62.441 were not reported before [50].
The spin-orbit coupling also acts selectively along the R\({}_{1}\)-\(\Gamma\)-R\({}_{2}\) path, where the nonrelativistic spin-splitting is present. The effects of the SOC for different Néel vector orientations are reported in Fig. 6; in the discussion we highlight the \(k\)-points where the SOC is not effective. When the Néel vector is along x, we have additional SOC splittings for all the crossing and anticrossing points except for the intrachannel crossing point at around -0.1 eV, which is protected by the glide operator as shown in Fig. 6(a). A magnification of Fig. 6(a-c) is reported in Fig. 7(a-c), respectively, to highlight the band crossings protected by the glide against SOC when the Néel vector is along x. Indeed, the band crossing in the black circle in Fig. 7(a) is not split by SOC, while the SOC splitting is evident in Fig. 7(b,c). It was shown that this kind of glide-protected band crossing can generate a non-isoenergetic nodal-line [46; 47]. Therefore, this case would correspond to a nodal-line in the altermagnetic phase; further investigations in this direction using model Hamiltonian calculations could be interesting for the topological matter community. When the Néel vector is along y, we do not have SOC splitting at the \(\Gamma\) point, as obtained in Fig. 6(b). When the Néel vector is along z, as shown in Fig. 6(c), we do not have SOC splitting at the R\({}_{1}\) and R\({}_{2}\) points, consistently with what is observed in Fig. 5(c).
The Hall vector in altermagnetic systems with space group no. 62 can, in principle, lie along any direction [9]. However, when the structural details are added, the Hall vector has a precise orientation. It was shown that in space group no. 62 we can expect the Hall vector in directions orthogonal to the Néel vector [42]. Several anticrossing points appear along the whole Brillouin zone boundary due to the presence of the semi-Dirac points. These anticrossing points generate large Berry curvature and consequently a large anomalous Hall effect, since, as is well known from the literature [51; 52; 53; 54; 55; 56; 57; 58; 59], the intrinsic anomalous Hall effect can be expressed in terms of the Berry curvature. The semi-Dirac points and the glides linked to the nonsymmorphic symmetries are key ingredients in the generation of these several crossings and anticrossings. When we add the SOC, avoided band crossings are obtained and a large AHC is expected. Therefore, we can claim that the interplay between AM and nonsymmorphic symmetries generates a large anomalous Hall conductivity. We calculate the AHC between -0.25 and +0.25 eV and report it in Fig. 8 for different Néel vector orientations along the principal axes. The three AHC components that we have calculated are \(\sigma_{yz}\), \(\sigma_{xz}\) and \(\sigma_{xy}\); the numerical details are reported in Appendix A. We obtain large values of the AHE in the two directions orthogonal to the Néel vector. When the Néel vector is along x, we obtain AHC values up to -400 S/cm for \(\sigma_{xy}\) and 300 S/cm for \(\sigma_{xz}\) (see Fig. 8(a)). When the Néel vector is along y, we obtain AHC values up to -1000 S/cm for \(\sigma_{yz}\) and +600 S/cm for \(\sigma_{xy}\) (see Fig. 8(b)); the spike that produces the large values at +0.14 eV was verified with a denser energy grid. Finally, when the Néel vector is along z, we obtain AHC values up to 450 S/cm for \(\sigma_{yz}\) and -200 S/cm for \(\sigma_{xz}\) (see Fig. 8(c)). We observe a strong change of all the AHC components when we switch the Néel vector from one axis to another. The compounds with space group no. 62 and magnetic atoms in the 4b Wyckoff position host the AHC in a single component [42]; however, compounds with the same space group but different Wyckoff positions host a Hall vector orthogonal to the Néel vector. These values are smaller but of the same order of magnitude as the AHC in other altermagnetic metallic compounds [9; 42] such as RuO\({}_{2}\) and CaCrO\({}_{3}\).
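For completeness, the standard expression behind the statement above reads (textbook form; sign and unit conventions vary across the literature cited here and are not specific to this work)

\[\sigma_{xy}=-\frac{e^{2}}{\hbar}\int_{\rm BZ}\frac{d^{3}k}{(2\pi)^{3}}\sum_{n}f_{n}(\mathbf{k})\,\Omega_{n}^{z}(\mathbf{k}),\qquad\Omega_{n}^{z}(\mathbf{k})=-2\,{\rm Im}\sum_{m\neq n}\frac{\langle n|\partial_{k_{x}}H|m\rangle\langle m|\partial_{k_{y}}H|n\rangle}{\left(\varepsilon_{m\mathbf{k}}-\varepsilon_{n\mathbf{k}}\right)^{2}},\]

where \(f_{n}(\mathbf{k})\) is the occupation of band \(n\); the Berry curvature concentrated at the SOC-gapped anticrossings is what drives the AHC values reported in Fig. 8.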
## III Conclusions
The presence of AM varies with the magnetic configuration, since the magnetic space group type strongly depends on the magnetic configuration. For the space group no. 62 with magnetic atoms in the 4c Wyckoff position, the non-relativistic spin-splitting is present for the C-type magnetic order while it is absent in the G-type and A-type magnetic orders. Following a pair of bands with opposite spin from the degenerate \(\Gamma\) point through the glide-protected crossing, we can end up with a finite spin-splitting at the TRIM points R\({}_{1}\) and R\({}_{2}\); however, another pair of bands produces the opposite spin-splitting, restoring the zero non-relativistic spin-splitting at the TRIM points. The magnetic space group is type-I when the Néel vector is along y and type-III when the Néel vector is along x or z. A selective removal of the spin degeneracy acts at the TRIM points as a function of the magnetic space group. For example, in the type-I magnetic space group, we find antiferromagnetic hourglass fermions that are not present when the Néel vector is along x or z. When the Néel vector is along x, we have a glide-protected crossing that could generate a nodal-line in the altermagnetic phase. Due to the semi-Dirac points and glide symmetries, the interplay between AM and nonsymmorphic symmetries produces several band crossings and avoided band crossings; once we include the SOC, these crossings generate a large AHC of the same order of magnitude as that found in the altermagnetic RuO\({}_{2}\) and CaCrO\({}_{3}\).
Figure 6: Band structure of the C-type magnetic order along the \(k\)-path R\({}_{1}\)-\(\Gamma\)-R\({}_{2}\) with Néel vector along the (a) x-axis, (b) y-axis and (c) z-axis, respectively. The spin-up channel is shown in blue, the spin-down channel is shown in red while the band structure with SOC is plotted in green. The band structure is plotted between -0.3 and +0.3 eV where the \(d\)-electrons dominate.
Figure 7: Magnification of the band structure in Fig. 6. Band structure of the C-type magnetic order along the \(k\)-path R\({}_{1}\)-\(\Gamma\) with the Néel vector along the (a) x-axis, (b) y-axis and (c) z-axis. The band structure is plotted between -0.11 and -0.02 eV, where the bands linear in \(k\) with opposite glide eigenvalues are located. The black circle in panel (a) marks the band crossing between bands with opposite glide eigenvalues that is not split by SOC when the Néel vector is along x.
## Acknowledgments
We thank Tomasz Dietl, Canio Noce and Rajibul Islam for useful discussions. The work is supported by the Foundation for Polish Science through the International Research Agendas program co-financed by the European Union within the Smart Growth Operational Programme (Grant No. MAB/2017/1). A.F. was supported by the Polish National Science Centre under Project No. 2020/37/B/ST5/02299. We acknowledge the access to the computing facilities of the Interdisciplinary Center of Modeling at the University of Warsaw, Grant g91-1418, g91-1419 and g91-1426 for the availability of high-performance computing resources and support. We acknowledge the CINECA award under the ISCRA initiative IsC99 "SILENTS", IsC105 "SILENTSG" and IsB26 "SHINY" grants for the availability of high-performance computing resources and support. We acknowledge the access to the computing facilities of the Poznan Supercomputing and Networking Center Grant No. 609.
## Appendix A Computational details
We performed density functional theory (DFT) calculations by using the VASP package [60; 61; 62]. We employed the local density approximation and the Perdew-Zunger [63] parametrization of the Ceperley-Alder data [64]. The same parameters of previous works [35], such as the lattice constants and atomic positions of Ref. [36], have been used. The lattice constants are a=5.60499 Å, b=3.58827 Å and c=6.13519 Å, while the Wyckoff positions, both 4c, are Cr: (0.0070, 1/4, 0.2049) and As: (0.2045, 1/4, 0.5836). The band structure plots were obtained with 130 \(k\)-points for every path. Close to the Fermi level we have the Cr \(d\)-orbitals sandwiched between As \(p\)-orbitals [35]. We performed the wannierization [65; 66] by using the WANNIER90 code [67] and we considered the Cr-3\(d\) and As-4\(p\) orbitals, as we have already done in previous papers [28; 29]. For the
calculation of the anomalous Hall conductivity, we used the WannierTools code [72] with a \(k\)-grid of 200 \(\times\) 200 \(\times\) 200. The AHC calculations were verified with a \(k\)-grid of 300 \(\times\) 300 \(\times\) 300, which reproduces the same results with negligible differences [73]. The information about the magnetic space groups was extracted from the Bilbao Crystallographic Server [44].
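As an illustration of the Berry-curvature integration that the Wannier-interpolation step performs, the short Python sketch below evaluates the Kubo-formula Berry curvature of a Bloch Hamiltonian on a k-grid and sums it over the occupied band. It uses a two-band toy model (the Qi-Wu-Zhang model) rather than the CrAs Wannier Hamiltonian, so the model, grid size and normalization are illustrative assumptions only; the converged result approximates the Chern number of the toy model (magnitude 1), not any CrAs quantity.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, m=-1.0):
    """Two-band Qi-Wu-Zhang toy Hamiltonian (illustrative stand-in for a Wannier H(k))."""
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def dH(kx, ky, axis, m=-1.0, eps=1e-6):
    """Numerical k-derivative of H along 'x' or 'y' (central differences)."""
    dk = (eps, 0.0) if axis == "x" else (0.0, eps)
    return (H(kx + dk[0], ky + dk[1], m) - H(kx - dk[0], ky - dk[1], m)) / (2 * eps)

def berry_curvature_occ(kx, ky, m=-1.0):
    """Kubo-formula Berry curvature of the occupied (lower) band at one k-point."""
    evals, evecs = np.linalg.eigh(H(kx, ky, m))
    vx, vy = dH(kx, ky, "x", m), dH(kx, ky, "y", m)
    n, mm = 0, 1  # occupied and empty band indices
    num = (evecs[:, n].conj() @ vx @ evecs[:, mm]) * (evecs[:, mm].conj() @ vy @ evecs[:, n])
    return -2.0 * np.imag(num) / (evals[mm] - evals[n]) ** 2

nk = 100
ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
total = sum(berry_curvature_occ(kx, ky) for kx in ks for ky in ks)
# Berry-curvature integral over the BZ divided by 2*pi approximates the Chern number
chern = total * (2 * np.pi / nk) ** 2 / (2 * np.pi)
print(f"Chern number of the occupied band: {chern:.3f}")  # magnitude ~1 for m=-1
```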
## Appendix B Induced altermagnetism in p-bands
As mentioned in the main text, the system hosts a strong \(p\)-\(d\) hybridization [28]; therefore, AM can be induced from the Cr \(d\)-bands onto the As \(p\)-bands. The \(d\)-bands dominate up to 1.5 eV above the Fermi level [32], while above 2 eV the bands are mainly composed of As-4\(p\) spectral weight. The altermagnetic spin-splitting is of the order of 0.5 eV in the magnetic \(d\)-bands close to the Fermi level and survives in the \(p\)-bands through the \(d\)-\(p\) hybridization, although it is reduced. The largest spin-splitting in the \(p\)-orbitals is around 0.2 eV, as we can see in Fig. 9. The Cr \(d\)-bands induce AM in the As \(p\)-bands; this is slightly different from the AM in EuIn\({}_{2}\)As\({}_{2}\), where the magnetic \(f\)-bands of Eu induce AM on the \(d\)-bands of the same Eu atoms [74].
|
2306.01820 | Concurrent Classifier Error Detection (CCED) in Large Scale Machine
Learning Systems | The complexity of Machine Learning (ML) systems increases each year, with
current implementations of large language models or text-to-image generators
having billions of parameters and requiring billions of arithmetic operations.
As these systems are widely utilized, ensuring their reliable operation is
becoming a design requirement. Traditional error detection mechanisms introduce
circuit or time redundancy that significantly impacts system performance. An
alternative is the use of Concurrent Error Detection (CED) schemes that operate
in parallel with the system and exploit their properties to detect errors. CED
is attractive for large ML systems because it can potentially reduce the cost
of error detection. In this paper, we introduce Concurrent Classifier Error
Detection (CCED), a scheme to implement CED in ML systems using a concurrent ML
classifier to detect errors. CCED identifies a set of check signals in the main
ML system and feeds them to the concurrent ML classifier that is trained to
detect errors. The proposed CCED scheme has been implemented and evaluated on
two widely used large-scale ML models: Contrastive Language Image Pretraining
(CLIP) used for image classification and Bidirectional Encoder Representations
from Transformers (BERT) used for natural language applications. The results
show that more than 95 percent of the errors are detected when using a simple
Random Forest classifier that is order of magnitude simpler than CLIP or BERT.
These results illustrate the potential of CCED to implement error detection in
large-scale ML models. | Pedro Reviriego, Ziheng Wang, Alvaro Alonso, Zhen Gao, Farzad Niknia, Shanshan Liu, Fabrizio Lombardi | 2023-06-02T12:36:05Z | http://arxiv.org/abs/2306.01820v1 | # Concurrent Classifier Error Detection (CCED) in Large Scale Machine Learning Systems
###### Abstract
The complexity of Machine Learning (ML) systems increases each year, with current implementations of large language models or text-to-image generators having billions of parameters and requiring billions of arithmetic operations. As these systems are widely utilized, ensuring their reliable operation is becoming a design requirement. Traditional error detection mechanisms introduce circuit or time redundancy that significantly impacts system performance. An alternative is the use of Concurrent Error Detection (CED) schemes that operate in parallel with the system and exploit their properties to detect errors. CED is attractive for large ML systems because it can potentially reduce the cost of error detection. In this paper, we introduce Concurrent Classifier Error Detection (CCED), a scheme to implement CED in ML systems using a concurrent ML classifier to detect errors. CCED identifies a set of check signals in the main ML system and feeds them to the concurrent ML classifier that is trained to detect errors. The proposed CCED scheme has been implemented and evaluated on two widely used large-scale ML models: Contrastive Language-Image Pre-training (CLIP) used for image classification and Bidirectional Encoder Representations from Transformers (BERT) used for natural language applications. The results show that more than 95% of the errors are detected when using a simple Random Forest classifier that is orders of magnitude simpler than CLIP or BERT. These results illustrate the potential of CCED to implement error detection in large-scale ML models.
Machine learning, soft errors, concurrent error detection, CLIP, BERT.
## I Introduction
The rapid development of Machine Learning (ML) in applications such as computer vision or natural language processing has led to large-scale models with billions of parameters [1, 2]. Text-to-image generators such as DALL-E, Stable Diffusion, and MidJourney, or large language models (such as GPT) are now used by millions of people every day and are being widely utilized in many application domains. In particular, large-scale ML systems are being considered for safety-critical applications [3, 4]. This trend is expected to continue making large-scale ML a fundamental part of safety-critical systems [5],[6],[7].
Dependability is essential when ML systems are used in safety-critical applications [8, 9]. This requirement has triggered significant efforts to define safety-related evaluation metrics for ML systems [10, 11, 12]. Dependability has also been pursued from the fault-tolerance perspective when implementing models, to handle circuit-level errors and faults that can corrupt data integrity [13]. Over the years, different error-tolerant techniques have been widely studied for popular ML models, ranging from low-complexity algorithms such as \(k\) Nearest Neighbors, Random Forests and Support Vector Machines [14, 15], to higher-complexity algorithms such as Neural Networks [16, 17, 18, 19]. Recently, due to their capability of performing advanced tasks, the error-tolerant design for Deep Neural Networks (DNNs) has also attracted substantial interest, allowing their use in safety-critical applications [20, 21, 22, 23].
Error detection is often targeted when designing error-tolerant schemes for advanced ML systems. This occurs because error correction is inherently more costly and systems usually have a default safety mode to handle abnormal situations [24]. Traditional error detection mechanisms mainly rely on circuit or time redundancy, which significantly impacts system performance [25]. For example, to detect transient errors, inference can be run twice on the same input data, and if the results are different, an error is detected. This, however, has a large performance impact because the speed of the system is halved and the energy consumption is doubled.
An alternative solution is the use of Concurrent Error Detection (CED) schemes. CED operates in parallel with the ML system and exploits its algorithmic properties to detect errors [26]. Specifically, CED is more attractive for large-scale ML systems because it can potentially reduce the overhead of error detection and thus, it reduces this burden on the entire system. However, the design of CED for large-scale ML systems faces challenges as many different algorithms are combined, and implementing CED for all of them can
be overly complex. A different approach would be to employ an auxiliary ML classifier that operates concurrently with the main ML system checking its outputs to detect errors, thus creating a concurrent classifier CED scheme. To the best of the authors' knowledge, such an approach has not been proposed in the literature.
In this paper, an efficient error-detection scheme for large-scale ML systems, namely Concurrent Classifier Error Detection (CCED), is presented and evaluated. The principle of CCED is to identify a set of internal check signals in the main ML system and feed them to a concurrent classifier for error detection. This works because errors that corrupt the output of the ML system tend to create distinctive patterns in the check signals that differ from those of normal operation. Since the size of the concurrent classifier is very small compared to the main model, CCED can potentially achieve error detection with a negligible overhead.
The rest of this paper is organized as follows. Section II covers the preliminaries on large-scale ML models using Contrastive Language-Image Pre-training (CLIP) and Bidirectional Encoder Representations from Transformers (BERT) as examples. Then, the challenges when implementing error detection and in particular, CED on these models are discussed. Section III presents the proposed CCED scheme describing how it can be applied to CLIP and BERT models. CCED is evaluated in section IV in terms of error detection effectiveness and the overhead introduced over the unprotected ML model. Finally, the paper ends with the conclusion in Section V.
## II Preliminaries
This section first briefly reviews two widely used large-scale machine learning models to illustrate their distinctive features when compared to classical models. Then, the challenges in implementing CED in these large models are discussed showing the limitations of traditional approaches that try to exploit the features of the algorithms to perform CED. Finally, the error model considered is briefly described.
### _Clip_
The first large-scale model that we consider is Contrastive Language-Image Pre-training (CLIP) [27] which combines an image encoder and a text encoder as shown in Figure 1. Both encoders are trained to map to the same embeddings so that images can be associated with the text. Training tries to minimize the distance of the embeddings for images that correspond to the text and to maximize it for images that do not correspond to the text. This enables for example zero-shot learning in which for a given image, the embeddings are computed, and then the distances to several texts, each corresponding to a type of object, are calculated. Finally, the text with the closest embeddings is selected as the classification result. CLIP is also widely used in text-to-image generation in which a pre-trained text encoder is fed with the text prompt to produce embeddings that are then used as input to guide the image generation model [28].
CLIP typically uses large neural networks for both text and image encoders. For example, deep neural networks like the 50-layer ResNet [29] or vision transformers [30] such as ViT-L/14. This leads to a very large number of parameters and a complex system.
In our evaluation, we use CLIP for zero-shot image classification where the embeddings of different texts corresponding to the classes are compared to the embeddings of the image to be classified. The one with the shortest cosine distance is selected as the class for the input image. This is illustrated in Figure 2.
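The zero-shot procedure just described can be sketched in a few lines of Python. The snippet below follows the pattern of the openai/CLIP reference implementation; the image file, prompt template and class names are placeholders rather than the exact setup used in this paper.

```python
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)  # or "ViT-L/14"

classes = ["airplane", "automobile", "bird", "cat", "dog"]  # placeholder label set
image = preprocess(Image.open("example.png")).unsqueeze(0).to(device)
text = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(text)

# Cosine similarity = dot product of L2-normalized embeddings
img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
probs = (100.0 * img_emb @ txt_emb.T).softmax(dim=-1)  # softmax values used for the final decision
print("predicted class:", classes[probs.argmax().item()])
```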
### _Bert_
The second model used to illustrate large-scale ML systems is Bidirectional Encoder Representations from Transformers (BERT) [31], which is used in Natural Language Processing (NLP) applications. As illustrated in Figure 3, the BERT model is composed of a stack of N Transformer encoders, and each encoder consists of a multi-head Self-Attention (SA) layer and a Feed-Forward Network (FFN). Each head of the SA module performs scaled dot-product attention for a query matrix, a key matrix, and a value matrix. The FFN is composed of two fully-connected layers and a non-linear activation operation between them. The standard BERT model includes 12 encoders (N = 12), and each multi-head SA layer includes 12 heads (H = 12), so it is a very large model with 110 million parameters.
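Each attention head computes the standard scaled dot-product attention (the usual Transformer expression, reproduced here for completeness):

\[\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V,\]

where \(d_{k}\) is the dimension of the key vectors; the multi-head output concatenates the H heads before a final linear projection.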
Fig. 1: Structure of the CLIP Model
Fig. 2: CLIP used for zero-shot learning
BERT is pre-trained using specific tasks such as masked language modeling and next-sentence prediction to extract embeddings of the text input, which can then be used for different NLP tasks, for example to infer the emotion of a text [32] or to find the answer to a question in a text. This is typically done by connecting a neural network to the outputs of BERT to perform the task at hand. In the case of emotion analysis, this can be a shallow neural network that is trained to classify emotions. This is illustrated in Figure 4 for both use cases.
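A minimal way to reproduce these two BERT-based setups is through the Hugging Face pipeline API; the model identifiers below are placeholders for the fine-tuned checkpoints referenced in the paper's footnotes, which are not reproduced here.

```python
from transformers import pipeline

# Emotion classification: BERT encoder + small classification head (one score per emotion)
emotion = pipeline("text-classification",
                   model="<bert-emotion-checkpoint>",  # placeholder model id
                   top_k=None)
print(emotion("I am thrilled with these results!"))

# Extractive question answering: BERT encoder + start/end span-prediction head
qa = pipeline("question-answering", model="<bert-squad-checkpoint>")  # placeholder model id
print(qa(question="What does CCED use to detect errors?",
         context="CCED feeds a set of check signals to a concurrent classifier."))
```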
### _Challenges for error detection in large-scale ML models_
Implementing error detection in large-scale ML systems poses new challenges when compared to simpler models. The first one is that the huge number of parameters and operations that are needed in those systems leave very little room to add redundancy. Therefore, error detection must be implemented with a very small fraction of the cost of the original system. This characteristic rules out the use of most traditional time and space redundancy-based schemes such as duplication with comparison or recomputation and comparison. Additionally, as the complexity of the ML systems increases in each new release or generation, the error detection approach should be scalable so that it can be applied to more complex models with a similar cost in relative terms. Finally, and for the same reason, the error detection scheme should be generic so that it can be integrated as part of the ML system design flow and does not require a complete redesign when the ML model changes. Therefore, ideally, error detection schemes should:
1. Have a small relative cost.
2. Be scalable.
3. Be independent of the details of the ML system model used.
The first requirement points to the use of CED, i.e., reusing the ML system properties to detect errors. However, given the complexity of large-scale ML systems (that use a variety of components and algorithms), the implementation of CED is not straightforward. One possibility could be to try to implement CED using existing techniques for each of the model components at some granularity. For example, implement CED for the Neural Network blocks, for the matrix multiplications, and so forth. This approach, however, leads to a complex design process in which different CED techniques have to be used for each block. Even worse, when the main ML model changes, which happens frequently as models are improving continuously, the CED must be redesigned. In summary, using CED at the block level does not meet the requirements of scalability and independence from the main ML model. Therefore, there is a need for new CED schemes that can meet the requirements of large-scale ML systems.
### _Error model_
The error model considered in this paper consists of transient errors that correspond, for example, to radiation-induced soft errors, a major issue in modern computing systems [33]. Therefore, we consider errors in the machine learning model parameters when they are used to perform inference. In many cases, the parameters cannot all be stored on-chip and are loaded from main memory or storage for each inference. Therefore, the errors are assumed to be soft errors that affect the computation of one inference but do not have a persistent effect; if we run the inference again, the parameters are loaded again and the error is no longer present. In this scenario, if CED is implemented, a detected error can be corrected by running the inference a second time.
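A single-bit flip in a float32 parameter tensor, of the kind assumed by this error model, can be injected as in the following sketch (our own illustration, not code from the paper):

```python
import numpy as np

def flip_random_bit(weights: np.ndarray, rng: np.random.Generator) -> tuple[int, int]:
    """Flip one random bit of one random float32 weight, in place.

    Returns the (flat index, bit position) that was corrupted so the
    experiment can be logged or reverted.
    """
    flat = weights.reshape(-1).view(np.uint32)  # reinterpret the float32 bit pattern
    idx = int(rng.integers(flat.size))
    bit = int(rng.integers(32))
    flat[idx] ^= np.uint32(1 << bit)
    return idx, bit

rng = np.random.default_rng(0)
w = rng.standard_normal(10).astype(np.float32)  # stand-in for a model parameter tensor
idx, bit = flip_random_bit(w, rng)
print(f"flipped bit {bit} of weight {idx}; new value: {w[idx]}")
```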
## III Concurrent Classifier Error Detection (CCED)
In this section, we present Concurrent Classifier Error Detection (CCED), a scheme to detect errors in large-scale ML systems that addresses the challenges discussed in section II-C, so providing low-cost, scalable, and generic CED for large-scale ML systems. Initially, the overall principle and approach are presented; then we consider the selection process of the inputs to the concurrent classifier and finally discuss the implementation of the proposed scheme.
Fig. 4: Stack of BERT model and task network for different NLP applications
Fig. 3: Structure of the BERT Model: structure (left), detail of the SA block (right).
### _Overall approach_
The proposed CED scheme is based on the following observations:
1. We are only interested in detecting errors that change the classification result. The rest of the errors do not have an impact on the outcome of the ML system and since they are not persistent, they can be ignored.
2. The errors that do change the classification result would be expected to introduce in most cases significant changes in the values of some of the nodes of the ML system; hence they take values that are different from those during normal operation.
3. A classifier can, from the previous observation, be trained with the values of the nodes in the error-free and error cases to detect errors. This classifier can then operate concurrently with the main ML system to perform error detection.
4. The cost of such a classifier should be negligible compared to a large-scale ML system because it is much simpler.
From those observations, a general scheme to implement CED in large-scale ML systems can be to use a small classifier that checks some internal nodes of the ML system to detect errors. Therefore, this is a Concurrent Classifier Error Detector (CCED). The overall principle is illustrated in Figure 5, the concurrent classifier takes as inputs the values of some of the nodes of the main ML system and uses them to detect errors. This approach addresses the three requirements discussed in section II-C. First, it introduces a very small overhead as the concurrent classifier can be simple, for example, a Random Forest, as discussed in section III-C, i.e., its cost would therefore be negligible when compared to the main ML model. For the same reasons it would be scalable as the complexity of the concurrent classifier does not depend on the size or complexity of the main model but only on the data patterns introduced by the errors on the nodes being monitored. The approach is also independent of the implementation details of the main ML system as it only needs to take the values from it as a black-box. Finally, from a design perspective, it integrates nicely with the main ML system design as now the CED is another (much simpler) ML problem which fits naturally in the design flow. Therefore, the proposed CCED has the potential to address the challenges of implementing error detection in large-scale ML systems. However, the first step is to check that the outputs of the system with and without errors have different patterns that can be separated by a simple classifier. This will be confirmed by the results presented in the evaluation section for two well-known ML models: CLIP and BERT.
Upon detecting an error, CCED re-runs the inference: if the error was due to a soft error, it will no longer be present and the second run will be correct. Instead, if the detection was due to a misclassification, the concurrent classifier will also flag an error in the second run, which is then ignored. The overall process is illustrated in Figure 6.
The performance of CCED depends on two variables: the number of false negatives and false positives. A false negative occurs when the concurrent classifier fails to identify an inference with an error. Instead, a false positive occurs when the concurrent classifier detects an error when there is none. False positives would trigger additional inferences to correct errors that do not exist while false negatives will leave the errors undetected. Therefore, their impact on performance is qualitatively different and the decision threshold used in the concurrent classifier can be used to make trade-offs between the two. For example, to maximize the error detection rate (minimize false negatives) subject to a given percentage of re-computations (false positives). This models a system that can afford a given fraction of redundant re-computations to detect errors, for example, 10%, and given that constraint wants to maximize error detection. This will be further discussed in the evaluation section.
### _Selection of the nodes to monitor_
In principle, any node in the main ML system can be monitored by the concurrent classifier. A simple approach would be to select the nodes just before the outputs. For example, in the case studies considered in this work, namely CLIP for zero-shot learning and BERT for emotion detection and question and answer, the softmax values used to determine the final class selected can be used for monitoring. This has several advantages, first, it makes the selection of the nodes straightforward for classification systems; second, by nature, the number of nodes is small which reduces the complexity of the concurrent classifier and third any error that has an effect on the system must affect these nodes and thus can potentially be detected using the values of these nodes. Therefore, next, these nodes are used for monitoring and are the inputs to the concurrent classifier.
The use of additional and/or alternative nodes is left for future work and can be used to improve the error detection rate or to reduce the false positive rate. The nodes can be selected for example by checking the impact of errors on the different nodes and choosing for monitoring the ones that have the largest differences. As this additional process can only improve the error detection rate, the results presented in the evaluation section are a lower bound on the detection rates that can be achieved by the proposed CCED technique.
### _Concurrent classifier implementation_
One of the assumptions behind our proposed CCED scheme is that errors can be detected by monitoring a small set of nodes and using a simple classifier. Therefore, the concurrent classifier can be implemented using classical machine learning algorithms such as Logistic Regression (LR), Support Vector Machines (SVM), or Random Forest (RF) which in simple
classification problems can achieve an accuracy like that of more complex models such as deep neural networks [34]. Therefore, in the proposed design, we consider such simple classifiers and leave the use of more complex ML models for the concurrent classifier for future work. As for the selection of the monitoring nodes, this puts us in a worst-case in terms of the error detection rate that can be achieved. Better error detection may be obtained by using more complex CCs at the cost of additional overheads.
## IV Evaluation
To assess the proposed CCED scheme over a wide range of scenarios, it has been evaluated using CLIP for zero-shot classification on several image datasets and with BERT for sentiment analysis and questions and answers. The following subsections describe the case studies and the methodology employed in the experiments and then results are presented.
### _Case studies_
#### Iv-A1 Clip
For CLIP, three widely used datasets: CIFAR10 [35], CIFAR100 [36] and mini-imagenet [37] are used as inputs. The first one has 10 classes and the second and third 100. The CLIP network is configured using two encoders with different complexity and performance: the simpler pre-trained model "RN50" (modified 50-layer ResNet) and the more advanced "ViT-L/14" (a vision transformer). Classification is performed using softmax and the label with the largest value is selected as the final prediction. The accuracy achieved by CLIP is summarized in Table I, it can be seen that the "ViT-L/14" model achieves significantly better performance. This comes at a cost because the two models have respectively 102 million parameters for RN50 and 427 million for "ViT-L/14". Among the datasets, the accuracy is higher for CIFAR10 because it is an easier data set with fewer classes.
#### Iv-A2 Bert
For BERT, sentence emotion classification and question answering are used as case studies. In the first one, BERT must select one of six emotions for each sentence. The BERT model is extended with a classification network (CN) with two fully-connected layers that generate six scores, one per emotion. The stacked model (pre-trained standard BERT plus CN) from Huggingface1 is used in our evaluation. In the second case, BERT is extended with an answer location network (AN) with two independent linear transformations, one to detect the start of the answer and the other to detect the end of the answer. Again, the stacked model (pre-trained standard BERT plus AN) from Huggingface2 is used in our evaluation. The accuracy achieved by BERT in both tasks is given in Table II.
Fig. 5: Proposed Concurrent Classifier Error Detector (CCED) scheme.
Fig. 6: Operation of the proposed CCED scheme.
### _Methodology_
#### Iv-B1 Dataset creation
The evaluation methodology starts by generating a data set to train the concurrent classifier. This is done by running inference with and without errors to produce samples of the signals that are used as inputs to the concurrent classifier. For CLIP, first, inference is run with no errors, and the values of the softmax output for the 10 (100) classes are stored for CIFAR10 (CIFAR100 and mini-imagenet). Then a random bit is flipped in one of the parameters of the CLIP model and inference is run again, if the classification result changes, then the softmax values are stored. Using this procedure, a balanced dataset with 10,000 samples with errors that changed the classification result and 10,000 error free samples is built for CIFAR10, CIFAR100, and mini-imagenet. These are the datasets used to evaluate the performance of the concurrent classifier.
For BERT and emotion analysis, a similar procedure is used by storing in this case the six softmax values for both error-free inferences on each element of the test set and then for runs with errors that lead to a change in the output of the classifier. In this case, the dataset has 4000 samples, half with an error and half without errors. This dataset is used to evaluate the concurrent classifier. For question answering, a similar procedure is used but this time, the probability for each word in the context being the start and end of the answer are saved. This means that the number of values is significantly larger than six, it is around 100 \(\sim\) 200. The size of the data set for question answering is also 4000 samples.
#### Iv-B2 Concurrent classifier training
The second step in the evaluation is to use the datasets generated in the first step to train a concurrent classifier. A simple Random Forest classifier has been used in the experiments. The rationale, as with the previous choice of the monitored nodes, is to obtain a lower bound on the performance of CCED that can be improved by using more complex classifiers or automatically generated ensembles [38]. For the same reason, the default hyper-parameter values of the library used (sklearn) are not modified to improve performance; our objective is to show that even a simple classifier with default parameters can detect most errors.
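This training step can be sketched as follows with scikit-learn defaults. The synthetic arrays below merely stand in for the real softmax/label datasets described above, so their shapes and distributions are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for the real dataset: rows are softmax check-signal vectors,
# labels are 1 if an injected error changed the classification and 0 otherwise.
rng = np.random.default_rng(0)
clean = rng.dirichlet(np.full(10, 0.1), size=2000)   # peaked softmax patterns (error-free)
noisy = rng.dirichlet(np.full(10, 1.0), size=2000)   # flatter, noisier patterns (with errors)
X = np.vstack([clean, noisy])
y = np.array([0] * 2000 + [1] * 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
cc = RandomForestClassifier(random_state=0)  # default hyper-parameters, as in the paper
cc.fit(X_train, y_train)
print(classification_report(y_test, cc.predict(X_test)))
```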
#### Iv-B3 Performance evaluation
To evaluate the performance of CCED we fix a percentage of false positives and adjust the decision threshold of the classifier to minimize false negatives given that constraint. As discussed, this models a system that can afford a given fraction of re-computations to detect errors. We consider percentages of 5, 10, 15, and 20% as reasonable overhead values in terms of re-computation effort (much lower than recomputing every inference to detect errors, as would be needed without CED).
The concurrent classifier is trained normally using a subset of the dataset. Then the rest of the dataset is used as the test set to measure the false positives and negatives and also get the classification scores. Finally, the decision threshold is shifted until the maximum number of allowed re-computations is used and the percentage of detected errors on the test set is computed for that threshold.
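Given a fitted concurrent classifier (for instance the random forest `cc` from the earlier sketch), the threshold shift described here can be implemented as below; the 10% budget is just an example value and the function is our own illustration of the procedure.

```python
import numpy as np

def calibrate_threshold(cc, X_val, y_val, recompute_budget=0.10):
    """Pick the decision threshold so that at most `recompute_budget` of the
    error-free validation samples are flagged (false positives), then report
    the fraction of true errors detected at that threshold."""
    scores = cc.predict_proba(X_val)[:, 1]          # probability of the "error" class
    clean_scores = scores[y_val == 0]
    threshold = np.quantile(clean_scores, 1.0 - recompute_budget)
    detected = np.mean(scores[y_val == 1] > threshold)
    return threshold, detected

# Example usage with the held-out split from the previous snippet:
# thr, det = calibrate_threshold(cc, X_test, y_test, recompute_budget=0.10)
# print(f"threshold = {thr:.3f}, errors detected = {100 * det:.1f}%")
```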
### _Detection and re-computation_
For CLIP, the results obtained for the three concurrent classifiers are summarized in Tables III, IV and V. The tables show the percentage of errors detected and the re-computations needed when using the default classifier threshold, and then the percentage of errors detected when the threshold is adjusted to have a given percentage of re-computations.
It can be observed that the random forest concurrent classifier provides good results in most cases. Therefore, it seems that random forest can be a good choice for the concurrent classifier3. Focusing on the results, the random forest classifier achieves high detection rates. More than 95% of the errors are detected for all three datasets when the ViT network is used as the encoder, with only 10% of recomputations, and for CIFAR10 the value is over 99%. Instead, when the simpler RN50 network is used, detection is above 90% only for the CIFAR10 dataset. The difference may be linked to the accuracy achieved by the main classifier, which is also better when using ViT than RN50. This suggests that the proposed CCED scheme works better when the main classifier is also performing well.
Footnote 3: Note that as discussed before we are taking the random forest classifier with its default settings so it may be possible to achieve better results by selecting other values of the hyper-parameters.
This can be explained as follows: when the main system has good performance, the check signals tend to have clean patterns, for example with one of the classes taking a large value and the remaining ones much smaller values. This makes it easier to separate those patterns from the ones induced when an error is inserted in the system, which tend to be noisy. Instead, when the main system has poor performance, the error-free patterns are less clean and thus harder to distinguish from the ones obtained when an error occurs. Therefore, the performance of the proposed CCED is better when the main ML system has good accuracy. This is illustrated in Figures 11 and 8, which show typical patterns of the softmax values (which are the inputs to the concurrent classifier) for CIFAR10 and CIFAR100, respectively, when using RN50 for the encoder. In the first case, the error-free pattern is better defined and thus it is easier to distinguish from the patterns obtained when there is an error in one of the system parameters. Figure 10 shows a typical pattern for CIFAR100 but when ViT is used for the encoder. In this case, the pattern is also cleaner, which can be related to the higher accuracy obtained by CLIP in this case and to the better performance of the proposed CCED approach.
For BERT, when used for emotion classification, most errors can be detected by the concurrent classifier even when the percentage of re-computations is low as shown in the
results in Table VI. For example, by allowing 10% of re-computations, over 99% of the errors are detected. The results for question answering are illustrated in Table VII and show similar trends; again, with 10% of re-computations the concurrent classifier is able to detect over 99% of the errors. Therefore, the proposed scheme can detect most errors in BERT with a low overhead. In fact, for BERT, the percentage of recomputations can be reduced to 5% and still more than 98% of the errors are detected.
These results confirm that the concurrent classifier introduces a very small overhead to the system and that the main cost of the proposed scheme is the re-computations induced by false positive error detections of the concurrent classifier.
## V Conclusion
This paper has presented Concurrent Classifier Error Detection (CCED) a scheme to detect errors in large-scale machine learning systems. The proposed method uses a simple classifier that takes as input the values of a small set of nodes of the main system and operates concurrently with it to detect errors. This enables scalable and low-cost error detection that is independent of the main machine learning model implementation and related details. Furthermore, the proposed CCED integrates naturally into the machine learning design flow as error detection is done also using machine learning. The proposed scheme has been evaluated on two widely used large-scale machine learning models. The results show that CCED can detect over 95% of the errors with only 10% of recomputations even when using a simple classifier such as a random forest. Therefore, CCED is a promising approach for implementing concurrent error detection in large-scale machine learning systems. Future work will explore the selection of the nodes used as inputs to the concurrent classifier and the optimization of the concurrent classifier itself to further increase the error detection rate and reduce the re-computations needed.
|
2303.15053 | Hyperparameter optimization, quantum-assisted model performance
prediction, and benchmarking of AI-based High Energy Physics workloads using
HPC | Training and Hyperparameter Optimization (HPO) of deep learning-based AI
models are often compute resource intensive and calls for the use of
large-scale distributed resources as well as scalable and resource efficient
hyperparameter search algorithms. This work studies the potential of using
model performance prediction to aid the HPO process carried out on High
Performance Computing systems. In addition, a quantum annealer is used to train
the performance predictor and a method is proposed to overcome some of the
problems derived from the current limitations in quantum systems as well as to
increase the stability of solutions. This allows for achieving results on a
quantum machine comparable to those obtained on a classical machine, showing
how quantum computers could be integrated within classical machine learning
tuning pipelines.
Furthermore, results are presented from the development of a containerized
benchmark based on an AI-model for collision event reconstruction that allows
us to compare and assess the suitability of different hardware accelerators for
training deep neural networks. | Eric Wulff, Maria Girone, David Southwick, Juan Pablo García Amboage, Eduard Cuba | 2023-03-27T09:55:33Z | http://arxiv.org/abs/2303.15053v1 | Hyperparameter optimization, quantum-assisted model performance prediction, and benchmarking of AI-based High Energy Physics workloads using HPC
###### Abstract
Training and Hyperparameter Optimization (HPO) of deep learning-based AI models are often compute-resource intensive and call for the use of large-scale distributed resources as well as scalable and resource-efficient hyperparameter search algorithms. This work studies the potential of using model performance prediction to aid the HPO process carried out on High Performance Computing systems. In addition, a quantum annealer is used to train the performance predictor and a method is proposed to overcome some of the problems derived from the current limitations in quantum systems as well as to increase the stability of solutions. This allows for achieving results on a quantum machine comparable to those obtained on a classical machine, showing how quantum computers could be integrated within classical ML tuning pipelines.
Furthermore, results are presented from the development of a containerized benchmark based on an AI-model for collision event reconstruction that allows us to compare and assess the suitability of different hardware accelerators for training deep neural networks.
## 1 Introduction
Current state-of-the-art hyperparameter (HP) search algorithms such as Hyperband [1], the Asynchronous Successive Halving Algorithm (ASHA) [2] and Bayesian Optimization Hyperband (BOHB) [3] rely on a method of early termination, where badly performing trials are automatically terminated to free up compute resources for new trials to be started.
Basing the stopping criterion on the relative ranking of trials according to a chosen metric, often accuracy or validation loss, can be problematic. Since the training process is non-linear, the ranking of trials at the decision point does not necessarily hold at the target point (see figure 1). A potential solution to this problem is to use a non-linear stopping criterion, e.g., using Support Vector Regression (SVR) to predict the final model performance from a partially trained model [4].
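To make the idea concrete, the sketch below fits a support vector regressor on the first part of each learning curve to predict the final validation loss. The synthetic curves, the cutoff `k`, and the SVR hyperparameters are illustrative assumptions rather than the setup used in this work.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Placeholder learning curves: 200 trials, 100 epochs of validation loss each.
curves = 2.0 - 0.01 * rng.random((200, 100)).cumsum(axis=1)

k = 25                 # epochs observed before the early-stopping decision
X = curves[:, :k]      # partial learning curve as the feature vector
y = curves[:, -1]      # final validation loss as the regression target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
reg = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, y_tr)
print("R^2 on held-out trials:", reg.score(X_te, y_te))
```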
Large-scale High Performance Computing (HPC) systems are especially suited to run Hyperparameter Optimization (HPO) algorithms such as ASHA due to the superlinear scaling that can be achieved, as demonstrated in tests performed on the JURECA-DC [5] system at the Julich Supercomputer Centre (JSC), presented in figure 2.
## 2 Quantum support vector regression for model performance prediction
The potential to speed up the HPO process via performance prediction as well as the use of a quantum annealer (QA) to train the performance predictor is investigated. The Quantum-SVR (QSVR) performance achieved is comparable to classical SVR, showing that quantum computers are good candidates to be integrated within classical ML tuning workflows in the future.
A Graph Neural Network (GNN)-based algorithm, developed for the task of machine learned particle flow (MLPF) reconstruction [6, 7] in High Energy Physics (HEP), acts as the base model for which studies are performed. A dataset consisting of learning curves and HP configurations was generated, see figure 3, by training 296 different configurations of MLPF on the publicly available Delphes dataset [8]. Configurations were drawn randomly from the HP space defined in table 1. The trainings were run in a distributed data-parallel mode on compute nodes with four NVIDIA A100 GPUs each. Training one model for 100 epochs on such a node required roughly 16 hours of wall time. To speed up the generation of learning curves, trainings were run in parallel on 24 compute nodes on the JURECA-DC-GPU system at JSC.
To establish a baseline, classical SVR was studied by fitting 1000 models on different train/test splits with the same constraint on dataset size as required by the QSVR. Due to the limited number of qubits on the QA, only 20 training samples could be used for fitting. The results vary depending on the training split, and the statistics are shown in figure 4 and table 2.
The quantum annealer is leveraged to train a QSVR model for model performance prediction. Due to the probabilistic nature of quantum processes, the annealing is run multiple times and returns multiple solutions, which can then be combined in different ways to create the final QSVR model. Different methods to combine the QSVR solutions are described in [9]. The predictions of the best performing QSVR are plotted against the true values in figure 5.
With the aim of stabilizing QSVR performance, different techniques were tested by combining several QSVRs trained on 20 samples each. The most successful approach found was to split the training set into disjoint subsets of 20 points and train a QSVR for each subset. The final ensemble prediction was calculated as the average of all the individual predictions.
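This combination scheme can be sketched classically as follows, with scikit-learn's SVR standing in for the annealer-trained QSVR. The subset size of 20 mirrors the qubit-imposed limit on training samples, while the data and kernel choice are placeholders.

```python
import numpy as np
from sklearn.svm import SVR

def ensemble_predict(X_train, y_train, X_test, subset_size=20):
    """Fit one regressor per disjoint 20-point subset and average the predictions."""
    n_subsets = len(X_train) // subset_size
    member_preds = []
    for i in range(n_subsets):
        sl = slice(i * subset_size, (i + 1) * subset_size)
        member = SVR(kernel="rbf").fit(X_train[sl], y_train[sl])
        member_preds.append(member.predict(X_test))
    # Simple unweighted average, as in the text; a weighted average is a possible refinement.
    return np.mean(member_preds, axis=0)
```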
To evaluate the QSVR combination technique, 80 training and 150 testing points were used. This experiment was repeated for 10 random train/test splits, and the results are presented in Figure 6. Note that each prediction is made by combining 4 QSVRs trained on 20 points each. Although this approach did not improve the maximum \(R^{2}\) score, it did produce more stable results by significantly improving the worst performing split and reducing the standard deviation of \(R^{2}\) scores between splits, as can be seen in table 2. This is notable, as one of the problems faced while using the QSVR has been the instability of the results. Further improvement could be made by modifying the weights in the weighted average that combines the QSVRs, as currently a simple average is used. A comparison between the statistics of the \(R^{2}\) scores for the different models described in this work is presented in table 2.
## 3 Containerized AI Benchmark
The promise of better accuracy and reconstruction scalability at inference time for AI-based algorithms is preceded by resource-intensive training. A self-contained, reproducible and containerized benchmark based on the training of deep learning models is proposed to explore the feasibility of deploying AI-driven HEP applications to different HPC environments. The metric shown in figure 7 is the training throughput, or training samples processed per second, on a subset of the publicly available Delphes dataset [8] in HEPscore [10] format.
| Model | Best | Worst | Mean | Std | Number of trainings |
| --- | --- | --- | --- | --- | --- |
| SVR | 0.959 | 0.318 | 0.889 | 0.050 | 1000 |
| Sim-QSVR | 0.949 | 0.383 | 0.901 | 0.045 | 100 |
| QSVR | 0.948 | 0.742 | 0.880 | 0.056 | 10 |
| QSVR Ensemble | 0.927 | 0.857 | 0.899 | 0.019 | 10 |

Table 2: \(R^{2}\) values and statistics for the different regression models. The SVR model is a classical SVR trained on a classical computer. The Sim-QSVR model is a QSVR model trained on a classical computer using simulated quantum annealing. The QSVR model is a QSVR trained on the QA, and the QSVR Ensemble is the combined predictions from 4 QSVRs trained on the QA using 20 training samples each.
Figure 6: Best QSVR ensemble model predictions vs true values after running the QSVR combination technique 10 times for 10 different train/test splits using 80 training samples.
Figure 7: Benchmark results.
## 4 Conclusions
HPC systems are essential for running large-scale HPO and distributed training and can significantly increase model performance as well as speed up the iteration of model development and training. It has been shown that superlinear scaling in speed-up can be achieved for HPO workflows, indicating the benefits of HPC for such use cases.
The strong potential of using performance prediction for HPO was demonstrated, encouraging the use of this technique in future HPO studies. It was also shown that, despite the current limitations of quantum computers, it is possible to train QSVR models on a QA while achieving prediction performance comparable to that obtained with a classical SVR. This encourages further studies in utilizing hybrid quantum/HPC workflows for HPO as well as in other use cases.
Finally, the development of a containerized benchmark application with an AI use case from HEP allows for quick and easy benchmarking of new hardware accelerators in the HEPScore format.
We thank our colleagues in CoE RAISE, in particular Andreas Lintermann and Marcel Aach for helpful discussions and feedback in the course of this work. We also thank our colleagues in the CMS Collaboration, especially Joosep Pata, Javier Duarte and Farouk Mokhtar for their collaboration on the MLPF studies.
Eric Wulff, David Southwick and Eduard Cuba were supported by CoE RAISE. The CoE RAISE project has received funding from the European Union's Horizon 2020 - Research and Innovation Framework Programme H2020-INFRAEDI-2019-1 under grant agreement no. 951733.
The authors gratefully acknowledge the computing time granted through JARA on the supercomputer JURECA [5] at Forschungszentrum Julich. The authors gratefully acknowledge the Julich Supercomputing Centre for funding this project by providing computing time through the Julich UNified Infrastructure for Quantum computing (JUNIQ) on the D-Wave Advantage(tm) System JUPSI.
|
2304.11292 | On the Identification of the Energy related Issues from the App Reviews | The energy inefficiency of the apps can be a major issue for the app users
which is discussed on App Stores extensively. Previous research has shown the
importance of investigating the energy related app reviews to identify the
major causes or categories of energy related user feedback. However, there is
no study that efficiently extracts the energy related app reviews
automatically. In this paper, we empirically study different techniques for
automatic extraction of the energy related user feedback. We compare the
accuracy, F1-score and run time of numerous machine-learning models with
relevant feature combinations and relatively modern Neural Network-based
models. In total, 60 machine learning models are compared to 30 models that we
build using six neural network architectures and three word embedding models.
We develop a visualization tool for this study through which a developer can
traverse through this large-scale result set. The results show that neural
networks outperform the other machine learning techniques and can achieve the
highest F1-score of 0.935. To replicate the research results, we have open
sourced the interactive visualization tool. After identifying the best results
and extracting the energy related reviews, we further compare various
techniques to help the developers automatically investigate the emerging issues
that might be responsible for energy inefficiency of the apps. We experiment
the previously used string matching with results obtained from applying two of
the state-of-the-art topic modeling algorithms, OBTM and AOLDA. Finally, we run
a qualitative study performed in collaboration with developers and students
from different institutions to determine their preferences for identifying
necessary topics from previously categorized reviews, which shows OBTM produces
the most helpful results. | Noshin Nawal | 2023-04-22T01:54:30Z | http://arxiv.org/abs/2304.11292v1 | # On the Identification of the Energy related Issues from the App Reviews
###### Abstract
The energy inefficiency of the apps can be a major issue for the app users which is discussed on App Stores extensively. Previous research has shown the importance of investigating the energy related app reviews to identify the major causes or categories of energy related user feedback. However, there is no study that efficiently extracts the energy related app reviews automatically.
In this paper, we empirically study different techniques for automatic extraction of the energy related user feedback. We compare the accuracy, F1-score and run time of numerous machine-learning models with relevant feature combinations and relatively modern Neural Network-based models. In total, 60 machine learning models are compared to 30 models that we build using six neural network architectures and three word embedding models. We develop a visualization tool for this study through which a developer can traverse through this large-scale result set. The results show that neural networks outperform the other machine learning techniques and can achieve the highest F1-score of 0.935. To replicate the research results, we have open sourced the interactive visualization tool.
After identifying the best results and extracting the energy related reviews, we further compare various techniques to help the developers automatically investigate the emerging issues that might be responsible for energy inefficiency of the apps. We compare the previously used _string matching_ with results obtained from applying two of the state-of-the-art topic modeling algorithms, OBTM and AOLDA. Finally, we run a qualitative study performed in collaboration with developers and students from different institutions to determine their preferences for identifying necessary topics from previously categorized reviews, which shows that OBTM produces the most helpful results.
app reviews, energy efficiency, machine learning approaches, neural networks, data visualization
## I Introduction
Energy efficiency in mobile applications is a crucial concern for app-developers, as most of the energy related issues are usually identified from users' feedback after the publication of the app [1]. Energy inefficiency, also referred to as energy consumption or battery consumption in mobile apps, was a main concern in 2013 [2] and is still a main topic that appears in user reviews in 2019 [3].
Energy-related user reviews can contain critical information about the energy inefficiency of the app. Consider an example that is written for app _Trucker Tool_: "GPS running on background uses alot of battery. It kills alot of power of my droid. had to uninstall." This review contains useful information about how a specific functionality of the app (Geo-location tracking) led to power related concerns. Investigation into these reviews will allow the developers to identify different aspects and features of the app responsible for the energy related issues [4].
As we will discuss in the paper, finding this insightful feedback among the hundreds or thousands of reviews submitted by the users for an app is not an easy task. Similar examples suggest that it is imperative for the developers to efficiently extract energy related reviews. However, to the best of our knowledge, there is no study that investigates the automatic extraction of energy consumption reviews.
In this paper, we empirically study 60 traditional machine learning models with various sets of feature combinations and 30 deep learning models (having six different architectures) with three pretrained word embeddings to identify the best feature-model combination for extracting energy related app reviews from the Google Play store, in a supervised text classification approach. The models are executed on the reviews scraped for 400 apps from 12 different categories. As the number of models and features is high, we provide a visualization tool that app developers can use to try different classes of these models and visualize and compare the results by three metrics: F1-score, model accuracy, and run time.
Further, we study approaches to investigate the energy related reviews in more detail. We compare the results of identifying energy consumption related issues using regular expressions and string matching with two state-of-the-art topic modeling algorithms, online Biterm Topic Modeling (oBTM) [5] and Adaptive Online Latent Dirichlet Allocation (AOLDA) [6]. The comparison shows the number of distinct issues that can be identified with each approach. The authors have worked in close collaboration with four developers in the identification and evaluation of the string matching and topic modeling results.
The results of this paper and the visualization tool can help the developers or researchers in model and feature selection when building automatic tools for investigating energy related issues reported by users.
The rest of this paper is organized as follows: In Section II, we explain the methodology and study design, followed by reporting the experiment results in Section III. We discuss the results and limitations of the study in Sections IV and V. The related works are summarized in Section VI, and we conclude in Section VII.
## II Methodology and Study Design
In this section, we discuss our research questions and explain the methodology, study design in details.
### _Research Questions and Preliminary Result_
The goal of this work is to identify the optimal, top-performing supervised approach to help developers extract energy efficiency related user feedback automatically. Furthermore, we want to investigate whether different topic modeling algorithms can help the developers identify unforeseen issues behind energy inefficiency. We therefore work our way through the following research questions:
1. Can we replace manual search for extracting energy related reviews with decent accuracy? _Ans: Some Deep Learning approaches equipped with text-enriching techniques performed quite well._
2. For smaller training dataset, how does traditional machine learning models perform with respect to comparatively modern NN based models? _Ans: With proper feature engineering some traditional ML models beat many of the Deep Learning approaches._
3. Is there any opportunity cost incurred by the better performing approaches? _Ans: Significantly higher run time is recorded for the better performing approaches._
4. To what extent can modern topic modeling algorithms help developers to discover recently emerged issues responsible for energy inefficiency of the application? _Ans: Modern topic modeling algorithms can quickly identify major energy related issues (if there is any); but developers can not rely on them to discover all the issues._
### _Methodology_
Figure 1 demonstrates the main steps of our methodology. We have divided the whole procedure of our experiment into smaller steps and assembled them into 4 different groups: _Data Collection_, _Text Classification_, _Topic Modeling_, and _Result Comparison_ (represented by 4 different columns). The 4 participating developers carefully followed each of these steps and reported back the empirical data and insights presented in the paper.
The first part, _Data Collection_, consists of the steps concerning how we collected the reviews by crawling the Google Play store and distributed them among the participating developers, who later annotated the reviews (described in II-C). In the _Text Classification_ part, we included the steps for evaluating the performance of different machine learning approaches (with engineered features) and different neural network architectures (with pretrained word embeddings) (described in III-A). Here, the steps of feature engineering, feature selection, and the use of traditional machine learning models for classifier design are discussed in detail. In the _Topic Modeling_ part, the steps involved in identifying energy related issues, such as text processing, algorithm selection, and annotating issues to generated topics, are included (III-B). Here, we evaluated the performance of different topic modeling algorithms to generate topics that could illuminate emerging issues discussed in the reviews. In the final part, _Result Comparison_, steps for reporting on the results of the classification approaches and topic modeling experiments are included (discussed in IV).
### _Study Design_
To involve the experts' opinions in the study and to reduce the bias in the results that might come from the authors' previous experience with automatic text classification of app reviews, we performed our study with the participation of four Android app developers. Initial guidelines and gold standards (for labeling the energy related reviews) were provided to the developers. After collecting and cleaning the data, the reviews were distributed among the developers according to Table I. Each of the developers performed all steps shown in Figure 1 using a different set of reviews from different apps. After the completion of the tasks from each column, the developers reported back their results in tabular form, and based on that data, we developed and provided a unique visualization tool to the developers for easy exploration of the result set.
**Data Distribution and Annotation:**
We have collected the 100,000 most recent English reviews from 12 different categories. 20 top-free and 20 top-paid applications were randomly selected to scrape the reviews. We have used an _open source Node.js module_1 to scrape application data from the Google Play store. It retrieves the full details of a specified application and the maximum number of reviews for that application allowed by Google.
Footnote 1: [https://github.com/facundoolano/google-play-scraper](https://github.com/facundoolano/google-play-scraper)
From the data collected, we randomly sampled 40,000 reviews. We have distributed the reviews among 4 participating developers in such a way so that no two developers have same reviews from the same app and the reviews in each developer's dataset is unique (not shared with others; although developers
Fig. 1: Overview of the study design
may discuss the reviews while annotating). Details of the review distribution among the developers are specified in Table I. The developers have each separately chosen 1,200 reviews, where 600 reviews were energy-related and the rest were not energy consumption related.
The reason behind choosing such a small review set for each developer is that manual analysis for investigating energy related reviews is most frequently done by a developer for a single application, and the accumulated review set is never very large.
The developers were instructed to label each review with exactly one class. In case of any confusion during labelling, a review could be openly discussed among the developers and annotated with the agreement of at least two, or three in case of a disagreement. The process of annotation was completely manual, and all the labels were thoroughly double-checked by 5 different undergraduate students (who were paid) in the following manner: we divided the 4,800 reviews into 48 batches of 100 reviews; from each batch, 5 reviews were selected at random and checked. If one erroneous label was found, that particular batch was returned to the respective developer to start its journey back from stage 1. Out of 48 batches, we found 2 errors, in a single batch only.
### _Replication Package_
To encourage reproducibility, we uploaded all scripts, visualization code and benchmark results in a _gitlab account_2 and will provide the annotated dataset upon request.
Footnote 2: [https://gitlab.com/Mashuk/acm-msr-2020-empirical-study-for-text-classification-and-modeling-for-energy-consumption-related-reviews-in-google-play-store.git](https://gitlab.com/Mashuk/acm-msr-2020-empirical-study-for-text-classification-and-modeling-for-energy-consumption-related-reviews-in-google-play-store.git)
## III Experiment
### _Text Classification_
For autonomous extraction of energy related reviews, we empirically studied 60 machine learning models with various sets of feature combinations and 30 deep learning models with six architectures using three pretrained word embeddings. Each developer split their review set into training and validation sets with a ratio of 5:1. For both the training and validation datasets, we maintained a 1:1 ratio of energy related to non energy related reviews.
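A minimal sketch of this split is shown below, assuming each developer's 1,200 labeled reviews are held in two parallel lists; the exact splitting code used by the developers is not given in the text.

```python
from sklearn.model_selection import train_test_split

# Placeholder data: each developer annotated 600 energy-related and 600 other reviews.
reviews = ["battery drains fast in background"] * 600 + ["love the new interface"] * 600
labels = [1] * 600 + [0] * 600

train_x, valid_x, train_y, valid_y = train_test_split(
    reviews, labels,
    test_size=1 / 6,      # 5:1 train-to-validation ratio
    stratify=labels,      # keep the 1:1 class balance in both splits
    random_state=0,
)
```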
#### Iii-1 **Traditional Machine Learning**
**A. Text Preprocessing**
To avoid ambiguity, we have converted all the letters of the reviews into lower case. We have applied this technique at the very beginning, as _lowercasing_ all the reviews (textual data) is one of the simplest and most effective forms of text pre-processing. Although commonly overlooked, it helps significantly when it comes to consistency of the expected output. If our dataset contains mixed-case occurrences of any given word (such as "Battery", "battery", "BATTERY"), which it does, it is likely to affect the features that we plan to extract from the text reviews if we do not apply _lowercasing_. It is also a great way to deal with sparsity issues when the dataset we are working with is rather small, like ours. An example of mapping words with different cases to the same lowercase form: the words "POWER", "power", and "Power", which appear in the raw review text, are all converted into "power" using _lowercasing_.
We have also masked account names, links, and hashtags using the following manner: whenever an account is addressed within the review using the "@" symbol, we took the adjacent word and replace the account name with "account". We followed the same procedure for hashtags: any word adjacent to "#" symbols was replaced by word "hashtag". For the links we have used a regular expression filter to find the links and replace it with the word "link".
We did not remove meaningless words such as "lol", "fugly" since many of them are really popular among mainstream users, yet can not be found in traditional dictionary. These abbreviations, acronyms, and word combinations in reviews appears to be very important as written communication in instant messaging, text messaging, chat, and other forms of electronic communication such as user feedback seemed to have generated a "new language" and contain great weight when it comes to sentiment. [7]
After that, we have removed punctuation and stop words from the reviews. Stop words are a set of the most frequently used words in a language. Examples of stop words that were removed from our reviews are "i", "me", "my", "myself", "we", "our", etc. These words carry little information about the text, so by removing them we enable our features to focus more on the important words. Let us consider an example in the context of one of our reviews: "It is frustrating... the app is always running in the background... draining my battery really fast"; we would want our features to focus on the words "frustrating app running background draining battery". If we let our feature extraction procedure analyze every word in the sentence, words with low information might throw the generated features off of their true goal, which is detecting energy related reviews. On top of that, preventing all stop words from being analyzed helps us minimize the number of features in consideration while keeping our models decently sized. Instead of replacing the stop words with a dummy character, we removed them entirely.
In the next step, we used _Lemmatization_, which means mapping common words into their base forms. The objective of _Lemmatization_ is to remove inflections and map a word to its root form. There is another approach for removing inflection, _stemming_, which also reduces a word to its root form, but the "root" in this case may not be a base word, just a canonical form of the original word. The difference is that _lemmatization_ does not just chop the prefix or suffix off; it actually transforms words to the actual root. That is why we have used only _lemmatization_ for our purpose. For instance, the words "good", "better", "best" in the reviews would map to "good" using a dictionary, namely WordNet, for the mappings.
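A minimal sketch of this preprocessing chain using NLTK is given below, assuming the stop-word and WordNet resources are installed; the exact masking rules and tooling used in the study may differ slightly.

```python
import re
import string
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

# Assumes the NLTK 'stopwords' and 'wordnet' resources have already been downloaded.
STOP_WORDS = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def preprocess(review: str) -> str:
    text = review.lower()                                   # lowercasing
    text = re.sub(r"@\w+", "account", text)                 # mask account names
    text = re.sub(r"#\w+", "hashtag", text)                 # mask hashtags
    text = re.sub(r"https?://\S+|www\.\S+", "link", text)   # mask links
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [LEMMATIZER.lemmatize(t) for t in text.split() if t not in STOP_WORDS]
    return " ".join(tokens)

print(preprocess("It is frustrating... the App keeps draining my battery!"))
```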
### _Feature Engineering_
Text representation is a fundamental process for information processing which includes the tasks of determining the index terms for documents and producing the numeric vectors corresponding to the documents. Thus, we have extracted flat feature vectors from text reviews. We implemented _Count Vectors, TF-IDF (Word level, N-Gram level, Character level) Vectors_ to obtain relevant features from our dataset.
_Count Vectors_ counts the number of occurrences of each word appearing in a review. It is a matrix notation of the dataset in which every row represents a review from the dataset, every column represents a word from the dataset, and every cell represents the frequency count of a particular word in a particular review.
On the other hand, the _TF-IDF_ score represents the relative importance of a word in the review and in the entire review set. The TF-IDF (Term Frequency-Inverse Document Frequency) score is composed of two terms: Term Frequency, which computes the normalized frequency of each word, and Inverse Document Frequency, which is computed as the logarithm of the number of reviews in the corpus divided by the number of reviews in which the specific term appears.

\(\mathrm{TF}(t)=\frac{\text{Number of times term }t\text{ appears in a review}}{\text{Total number of terms in the review}}\)

\(\mathrm{IDF}(t)=\ln\!\left(\frac{\text{Total number of reviews}}{\text{Number of reviews containing term }t}\right)\)
These TF-IDF Vectors can be generated at different levels of input tokens such as words, characters, and n-grams
* _Word Level TF-IDF_: Matrix representing tf-idf scores of every word in different reviews
* _N-gram Level TF-IDF_: N-grams are the combination of N number of words together. This Matrix is representing tf-idf scores of N-grams. _ngram_range_ is set as _(2,3)_ for our experiment.
* _Character Level TF-IDF_: Matrix representing tf-idf scores of character level n-grams in the review- set
We have adopted different variations of TF-IDF in our review classification based on the study that Zhang, Yoshida, and Tang presented in 2009 on the performance of TF*IDF, LSI, and multi-word features. They investigated the traditional indexing methods TF-IDF and LSI (latent semantic indexing) together with multi-word features (which contain more contextual semantics than individual words and possess favorable statistical characteristics). The performance of TF*IDF, LSI, and multi-word was examined on text classification tasks, much like ours. Their experimental results demonstrated that TF*IDF and multi-word are comparable when applied to text classification, while LSI was the poorest of them. Even the rescaling factor of LSI had an insignificant influence on its effectiveness for text classification. [8]
In Table II, III and IV, we have denoted Count Vector, WordLevel TF-IDF, N-Gram TF-IDF, and CharLevel TF-IDF as (i), (ii), (iii), (iv) respectively. We have combined
these features using horizontal stacking and denoted their combination with '+' sign in the tables.
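As a rough sketch of this feature construction, the snippet below builds the four vectorizers with scikit-learn and stacks one example combination, reusing the `train_x`/`valid_x` splits from the earlier sketch. The character-level n-gram range is an assumption, since the text does not specify it.

```python
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = train_x  # preprocessed review texts

count_vec   = CountVectorizer().fit(corpus)                                      # (i)
tfidf_word  = TfidfVectorizer(analyzer="word").fit(corpus)                       # (ii)
tfidf_ngram = TfidfVectorizer(analyzer="word", ngram_range=(2, 3)).fit(corpus)   # (iii)
tfidf_char  = TfidfVectorizer(analyzer="char", ngram_range=(2, 3)).fit(corpus)   # (iv)

# Example of a '+' combination, here (ii)+(iii): word-level plus n-gram-level TF-IDF.
X_train = hstack([tfidf_word.transform(corpus), tfidf_ngram.transform(corpus)])
X_valid = hstack([tfidf_word.transform(valid_x), tfidf_ngram.transform(valid_x)])
```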
**C. Experiment Configuration**
Each participating developer was instructed to train 3 traditional machine learning models, namely Naive Bayes, a Linear Classifier (Logistic Regression), and a Support Vector Machine, using their designated set of reviews. For each model, the developers tried the different feature combinations (15 in total) described in the previous section. We used a _max_iter_ of 4000 and the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm as the solver for Logistic Regression. For the SVM, gamma scaling was set to _true_. The Python library Keras [9] was used for composing, training, and evaluating the mentioned models.
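The scikit-learn equivalents of these three classifiers, with the settings stated above, can be sketched as follows; the feature matrices and labels are reused from the earlier sketches, and anything not stated in the text is left at the library defaults.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC

models = {
    "Naive Bayes": MultinomialNB(),
    "Logistic Regression": LogisticRegression(max_iter=4000, solver="lbfgs"),
    "SVM": SVC(gamma="scale"),
}

for name, model in models.items():
    model.fit(X_train, train_y)              # X_train: any of the 15 feature combinations
    pred = model.predict(X_valid)
    print(name, accuracy_score(valid_y, pred), f1_score(valid_y, pred))
```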
In these tables, we have reported the performance of 30 traditional machine learning models with engineered feature combinations using the mean of the following metrics: accuracy, F1-score, and run-time. Once the developers reported back their respective accuracy, F1-score, and run-time, we took the mean value for each case in the following manner. If reported accuracy, F1-score and run time for a specific model+feature is _x1, y1, z1 (for developer-1); x2, y2, z2 (for developer-2); x3, y3, z3 (for developer-3); and x4, y4, z4 (for developer-4);_ then mean or average accuracy = (x1+x2+x3+x4)/4, mean F1-score = (y1+y2+y3+y4)/4, and mean Runtime = (z1+z2+z3+z4)/4.
We investigated the deviation of each reported value from the average accuracy, F1-score, and run-time, and found that the maximum deviations were \(\pm 0.02\), \(\pm 0.021\), and \(\pm 1.4s\), respectively. For every traditional machine learning model, combinations of TF-IDF vectors prevailed, yielding the best performance.
Extreme Gradient BoostingIn traditional machine learning, ensemble methods are quite popular as they use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone [10, 11, 12]. A machine learning ensemble consists of only a concrete finite set of alternative models, but typically allows for much more flexible structure to exist among those alternatives.
Boosting is a popular ensemble method which involves incrementally building an ensemble by training each new model instance to emphasize the training instances that previous models misclassified. In most cases of text categorization (like ours), boosting has been shown to yield better accuracy than bagging, but it also tends to be more likely to over-fit the training data [13]. To include an ensemble method in our tool, we had our developers train an Extreme Gradient Boosting (XGB) model with all 15 feature combinations and presented the data for XGB in Table III.
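A minimal sketch of this baseline with the xgboost package is given below, reusing the stacked TF-IDF features and labels from the sketches above; the text does not specify the boosting hyperparameters, so they are illustrative here.

```python
from xgboost import XGBClassifier

xgb = XGBClassifier(n_estimators=200)   # hyperparameters are illustrative
xgb.fit(X_train, train_y)
print("XGB accuracy:", xgb.score(X_valid, valid_y))
```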
#### Iii-2 **Deep Learning**
We have tried better feature engineering to get improved performance out of the traditional machine learning models. For deep learning, we will instead try a text enrichment approach, namely the use of word embeddings.
### _Word Embedding_
We leveraged pre-trained word embeddings (an approach for representing words and documents using dense vector representations) to work with our Neural Network based models. Traditional bag-of-words encoding schemes were the most popular and widely used before word embeddings came to prominence. For a predefined vocabulary of fixed length (derived from a corpus of text such as our review set), word embedding methods learn a real-valued vector representation in which dense vectors represent words as projections into a continuous vector space. Within the vector space, the position of a word is determined from text (specifically from the words that surround the word when it is being used). A word's embedding refers to the position of the word in the learned vector space [14]. The three most popular examples of carefully designed methods for learning word embeddings from text include:
* fastText [15]
* GloVe [16], and
* Word2Vec [17].
We could train word embeddings using the input corpus itself. The two most popular ways to train our own word embedding are _Learn it Standalone_ and _Learn Jointly_. The first one is a good approach for using the same embedding in multiple models, where a model is trained to learn the embedding, saved, and later used as a part of another model. To use the embedding on one task only, the second approach may be used, where the embedding is learned as part of a larger task-specific model. But learning a word embedding from scratch for our problem would require a large amount of text data (millions of documents containing a large number of words) to ensure that useful embeddings are learned.
Word Embeddings could also be downloaded from previously trained models as it is common for researchers to make pre-trained word embeddings available (for free; often
under permissive license). For instance, FastText, Word2Vec and GloVe word embeddings are available for free download. These word embeddings (trained on the same corpus: 1M wiki-news documents) were used on our project instead of training our own embeddings from scratch.
There are two popular options, _Static_ and _Updated_, when it comes to using pre-trained embeddings. In the first option the embedding is kept static and used as a component of our model; in the second option the pre-trained embedding is used to seed the model and is then jointly updated during the training of our model. The second option suits our problem most, as we want to get the most out of our model and embedding to accomplish the task. [14]
A brief discussion of the models that have been used to train our word embeddings is given below.
_Word2Vec [17]_ takes a large corpus as its input and produces a vector space (typically of several hundred dimensions) with each unique word in the corpus being assigned a corresponding vector in the space. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located close to one another in the space. In 2013, after Mikolov, Chen, Corrado, and Dean published [18], the avalanche of word embeddings began. The proposed approach (Word2Vec) uses small neural networks to calculate word embeddings based on the context of the words. The authors of the paper published two approaches to implement this: Continuous Bag of Words (CBOW) and skip-gram. In the first approach, for the given context, the network tries to predict which word is most likely to occur/appear. The network predicts a similar probability for words that are equally likely to appear (which can be interpreted as having a shared dimension). For the second approach, the network works the other way around, that is, uses the target word to predict its context. The results of Word2Vec were unprecedented but also hard to explain from a theoretical point of view.
_GloVe [16]_ is an unsupervised learning algorithm which obtains vector representations for words by mapping words into a meaningful space where the distance between words is related to semantic similarity. One year after the publication of Word2Vec [18], researchers at Stanford published GloVe [19]. GloVe is deemed a slight variation of Word2Vec which attempts to benefit from a less obvious aspect of Word2Vec. Word2Vec learns embeddings by relating target words to their context, treating the frequent co-occurrence of context words as if it only helps create more training examples but carries no additional information. GloVe, on the other hand, emphasizes the frequency of co-occurrences and takes it as vital information rather than wasting it as mere additional training examples. So, GloVe focuses on building word embeddings in such a way that the probability of words' co-occurrence in the corpus relates directly to a combination of the corresponding word vectors. In brief, its embeddings can be interpreted as a lower-rank summary of the training corpus in dimensions that encode co-occurrences.
_fastText [15]_ is formally a library for learning word embeddings and text classification created by Facebook's AI Research (FAIR) lab [20, 21, 22, 23]. The model allows one to create an unsupervised or supervised learning algorithm for obtaining vector representations for words. Facebook makes pretrained models available for 294 languages [24]. _fastText_ uses a neural network for word embedding; it is an unsupervised approach to learn high dimensional vector representations for words from a large training corpus, where the vectors of words that occur in a similar context are close in this space [25]. _fastText_ keeps its base idea similar to Word2Vec, but instead of using whole words to build word embeddings, it goes one level deeper and takes characters and parts of words as the building blocks for the embedding, so that a word becomes its own context. Word embeddings generated by fastText and Word2Vec are quite similar, although fastText's are not calculated directly; instead, they are generated as a combination of lower-level embeddings. Two main advantages of this approach are _generalization_ and _the need for less training data_. Generalization comes into play when there are new words which share characters with known ones. Additionally, we need less training data, as much more information can be extracted from each piece of text.
For our study purpose, we downloaded three word embeddings; all three embeddings were generated by training on the same corpus of 1 million wiki-news documents with the three aforementioned models: fastText, GloVe, and Word2Vec. To make each of these word embeddings applicable to the Neural Network based models, we implemented the following procedure: first we loaded the pre-trained word-embedding vectors, then created a tokenizer, converted text to sequences of tokens, padded them to ensure equal-length vectors, and finally created the token-embedding mapping in the form of an _embedding matrix_.
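The procedure can be sketched with the Keras preprocessing utilities as below, reusing the preprocessed `corpus` from the earlier sketches; `embeddings_index` stands for the word-to-vector dictionary loaded from the downloaded fastText, GloVe, or Word2Vec file, and the embedding dimensionality of 300 is an assumption.

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

EMBED_DIM = 300
embeddings_index = {}   # word -> vector, assumed loaded from the pre-trained embedding file

tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
sequences = tokenizer.texts_to_sequences(corpus)
max_len = max(len(s) for s in sequences)            # length of the longest review
padded = pad_sequences(sequences, maxlen=max_len)

vocab_size = len(tokenizer.word_index) + 1
embedding_matrix = np.zeros((vocab_size, EMBED_DIM))
for word, idx in tokenizer.word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[idx] = vector              # token-embedding mapping
```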
### Neural Network Architectures
Traditional machine learning models depend on a data representation relying upon hand-crafted features, chosen by users or domain experts based on their usefulness for the classification problem at hand. On the other hand, neural networks (used in deep learning approaches) learn high-level feature representations automatically by using raw textual data as input. Researchers have previously applied Convolutional Neural Networks (CNN) successfully to natural language processing problems such as text classification [26]. Usually, deep learning approaches require a large amount of training data to outperform traditional machine learning approaches [27]. As our dataset is rather small for conventional deep learning approaches, we leverage pretrained word embeddings as discussed in the previous section.
Figure 2 shows a simple architecture of neural network which was used by Haring et al. [28] to identify user comments on online news sites that address three different classes: media house, journalist, or forum moderator. We are going to use this architecture and its' different variation for empirical study purpose.
The input layer for this architecture requires its textual inputs to be of fixed size; so, we searched for the longest review in our dataset and assigned its length as the fixed input size of that layer. To make the other reviews reach the required length for the input layer, we padded them. Right after the input layer, our network consists of an embedding layer which contains the pre-trained word embeddings: we assigned the embedding matrix (discussed in the previous section) as the weights of this embedding layer and set the _trainable_ parameter for this layer to _false_ so that the weights of the embedding layer stay frozen during training. After this embedding layer, our network consists of a 1D convolution layer, a 1D global max pooling layer, and a dense layer with a concluding output layer. We used _ReLU_ (Rectified Linear Unit) as the activation function for the convolutional layer and the _sigmoid_ function for the dense output layer.
In Table IV, the output of this architecture is denoted by CNN. We swapped the convolution and max pooling layers of the described architecture with layers of RNN-LSTM (Recurrent Neural Network with Long Short Term Memory), RNN-GRU (RNN with Gated Recurrent Units), Bidirectional RNN, and RCNN (Recurrent CNN) to try out different variants of this architecture. The participating developers trained the models with a batch size of 32 and 10 epochs. The developers experimented with the three mentioned word embeddings and reported the results. For comparison purposes, we also reported the performance of an S-NN (Shallow Neural Network) trained with the different feature combinations engineered for the traditional machine learning models.
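A Keras sketch of the CNN variant described above is shown below, reusing `vocab_size`, `EMBED_DIM`, `embedding_matrix`, `padded`, and `train_y` from the earlier sketches; the filter count, kernel size, and dense width are not given in the text and are illustrative, while the frozen embedding layer, batch size 32, and 10 epochs follow the description.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Embedding(vocab_size, EMBED_DIM,
                     embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
                     trainable=False),                  # frozen pre-trained embeddings
    layers.Conv1D(128, 5, activation="relu"),           # illustrative filter count / kernel size
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(padded, np.array(train_y), batch_size=32, epochs=10)
```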
The detailed evaluation is presented in Table IV. Similar as before, the average accuracy, F1-score and Run-time for 4 of our participating developer is reported for each case.
### Data Visualization
We developed an interactive dynamic tree where each node in the first, second, and third layers represents models, features, and result data (accuracy, F1-score, run time), respectively. Each node can be dragged, zoomed, and panned. This part of the tool was developed so that developers may traverse the resulting data set with ease. In figure 3, we have only shown up to the first level of model selection, where each node can expand into a different number of second-level nodes
Fig. 2: One of the Neural Network Architectures (CNN) used for text classification (yielded the best performance)
to allow the user to select different features and generate the results.
### _Runtime- Performance Trade off_
Figure 4 shows the average run time for all 90 cases described in the previous traditional Machine Learning and Deep Learning sections. On the x-axis, all the model-feature combinations are presented in ascending order (left-to-right) of their F1-score performance. Here, we observed that although Neural Network based models outperform traditional machine learning models accuracy-wise, these NN based models require significantly greater run-time than the traditional ML models. Run time can thus be perceived as an opportunity cost for accuracy or F1-score. When it comes to industrial implementation, developers may choose a model+feature combination that yields sub-optimal accuracy but runs faster than other choices.
### _Validity of the approach_
To ensure that the designed approaches and the result is valid and analogous to other binary classification cases, authors performed a similar experiment to compare the outcomes without developers involvement. We selected 600 security related application reviews and 600 non- security related reviews and applied the same approach developers used for automatic extraction of the energy related reviews. All the 90 instances (60 traditional machine learning approach, 30 Neural Network based approach) were observed but no significant difference in the accuracy, F1-score and run time. Maximum difference reported is \(\pm.03\) for both accuracy and F1-scores. For run-time maximum difference recorded was \(\pm 3.9s\). For brevity, we have reported 32 most prominent cases out of 90 in Table V.
### _Topic Modeling_
After the autonomous separation of the energy-related reviews, we need to investigate the reviews to discover issues that are causing energy inefficiency. Previously, developers have searched for suspected issues using manual inspection, i.e., "string matching". We want to see if we can find these issues using topic modeling. We have provided two different approaches for topic modeling: one is _Adaptive Online Latent Dirichlet Allocation_ and the other is _Online Biterm Topic Modeling_. We compare them with the previously used _manual inspection_ and present the results.
#### Iii-B1 **Different Approaches for New Issues Identification**
Three techniques used for identification of emerging issues from already separated energy related reviews are briefly discussed here.
**A. String Matching**
The most used technique to categorize user reviews is to check whether they contain certain keywords. Developers initially predict what issues might emerge and define a list of keywords to search the reviews using "LIKE" in an SQL query. The query in turn returns the reviews containing the defined keywords. We compiled the keywords from the literature [29] and used them for the string matching classifier. Some of the keywords are the following: _battery, energy, power, charge, affect, consume, hog, deplete, discharge, drain, kill, devour, hot, heat, slow, consume, leak_ and 24 more. This technique is widely used, but it cannot identify any energy related issues unforeseen by the developers.
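A minimal sketch of such a keyword baseline in Python is shown below; the keyword list is the subset quoted above, and the simple suffix matching (e.g. "drains", "draining") is an assumption about how the matching was implemented.

```python
import re

KEYWORDS = ["battery", "energy", "power", "charge", "affect", "consume", "hog",
            "deplete", "discharge", "drain", "kill", "devour", "hot", "heat",
            "slow", "leak"]
PATTERN = re.compile(r"\b(" + "|".join(KEYWORDS) + r")\w*", re.IGNORECASE)

def matches_keywords(review: str) -> bool:
    return PATTERN.search(review) is not None

# `reviews` is any list of review texts, e.g. from the earlier sketches.
flagged = [r for r in reviews if matches_keywords(r)]
```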
Fig. 3: Data Visualization for text classification result set
**B. Adaptive Online Latent Dirichlet Allocation** [6]
This is an online Variational Bayes (VB) algorithm developed for LDA that can be trained in small batches. While implementing online LDA, we only need to hold a very small subset of the dataset in memory at a given time [30]. According to [6], Adaptive Online LDA finds topic models better and faster than those found with batch VB. The batch VB algorithm assumes that the user (in our case, the developers) would have the entire training set at the start of training, rendering the approach ineffectual for our experiment, as app developers would want to train the model incrementally. AOLDA supports incremental training, so developers can train on a chunk of reviews and then resume training if they receive more reviews from the users (practical for our purpose) without having to retrain on the original chunk of reviews. As it also works well with short text data, we have integrated it into our experiment.
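The incremental-training workflow can be sketched with plain online LDA from gensim, as below; this is only a stand-in for AOLDA, whose adaptive re-weighting of topic distributions between versions is not reproduced here, and the review chunks are placeholders.

```python
from gensim import corpora, models

# Placeholder chunks of already-preprocessed energy-related reviews.
chunk1 = ["battery drain background gps location", "phone heat charging overnight battery"]
chunk2 = ["power save mode not working battery", "app kill battery fast update"]

tokenized = [r.split() for r in chunk1]
dictionary = corpora.Dictionary(tokenized)
bow = [dictionary.doc2bow(doc) for doc in tokenized]

lda = models.LdaModel(bow, id2word=dictionary, num_topics=4, passes=5)

# When a new batch of reviews arrives, update the model without revisiting the old chunk.
new_bow = [dictionary.doc2bow(r.split()) for r in chunk2]
lda.update(new_bow)

for topic_id, words in lda.print_topics(num_words=6):
    print(topic_id, words)
```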
**C. Online Biterm Topic Modeling Algorithm**
OBTM is inspired by the online LDA algorithm proposed in [30], which assumes documents are divided into time slices, and within these time slices the documents are exchangeable. OBTM fits a BTM model over the data in a time slice and uses the counts in the current time slice to adjust the hyperparameters for the next time slice. Although we could not use it with streaming data as conveniently as AOLDA, we have integrated it into our experiment because, according to [5], OBTM outperforms LDA, iLDA, and iBTM for topic modeling on a set of short text documents.
#### Iii-B2 **Comparison in outcomes**
For the topic modeling part, each developer worked on their unique dataset, but only on the energy related reviews (each developer originally had 600 exclusive energy related reviews). First, each of them defined some keywords based on their prediction of probable issues and used them for string matching (the process defined in the _String Matching_ section); the number of issues identified by this process is reported in the 3rd column of Table VI. In the second phase, we manually investigated the true number of issues appearing in the reviews; for each developer's review set, the number of _true issues_ discovered is specified in column 2. Suppose k issues were discovered for developer-A's review set. Then, in the third phase, developer-A deploys OBTM and AOLDA with an instruction to generate k topics. k is chosen as the threshold since we wanted to find out how many issues (out of the k actual issues) we can identify from the k topics generated by OBTM and AOLDA. After that, developer-A labels the generated topics and tries to match them with the _true issues_ identified in the second phase. The recorded numbers of matched issues are reported in the 4th and 5th columns of Table VI.
Here, Developer-1 had identified the following 4 issues: (i) User Interface, (ii) Extraneous work, (iii) Defective Task Allocation, (iv) App Dependency. String matching could discover only (ii); aoLDA discovered (i), (iii), (iv); and oBTM also discovered (i), (iii), (iv).
Developer-2 had identified the following 3 issues: (i) Power Save mode (ii) Internet Connection, (iii) Phone Heat up. String matching could discover only (iii); aoLDA discovered (i), (iii); and oBTM discovered (i), (ii).
Developer-3 had identified the following 5 issues: (i) Slow Animation, (ii) High resolution Videos, (iii) Defective Task Allocation, (iv) No power awareness, (v) Cache mismanagement. String matching could discover only (i), (ii); aoLDA discovered (i), (ii), (iv); and oBTM discovered (i), (ii), (iii), (iv).
Developer-4 had identified the following 2 issues: (i) Internet Connection, (ii) App dependency. String matching could discover only (i); aoLDA discovered (i); oBTM discovered all of them to some extent.
## IV Result
We have observed that, out of the 90 instances, the seven top-performing instances came from four distinct neural network
Fig. 4: Average Runtime of developers’ trials for efficient text-classification
-based architectures equipped with moderately large pretrained word embeddings. Here, the CNN based architecture supported by the word embedding previously trained with fastText performed best, with the highest F1-score (0.935). However, not all NN architectures could outperform traditional machine learning (ML) algorithms. We noticed that with proper feature engineering, traditional ML models can be on par with neural network based architectures (e.g. SVM with word-level+N-gram TF-IDF as features). Another point to take into account is the significantly larger run time of NN based architectures, which can be seen as an opportunity cost for the yielded performance. When frequent use by developers is taken into account, NN based architectures can be quite detrimental, while some traditional machine learning models such as SVM (or any other run-time friendly model-feature combination) emerge as the more expedient choice. For topic modeling, we observed that the topic modeling algorithms performed considerably better than the conventional _string matching_ method and are quite helpful for identifying a large number of unforeseen issues responsible for energy inefficiency. The developers' opinion is summarized as follows: "Both OBTM and AOLDA perform far better than the traditional approach. OBTM generates fewer trivial or non-relevant keywords than AOLDA. While AOLDA encompasses a lot of information in a limited number of topics (making it difficult to define the topics using the generated keywords), OBTM works better at isolating and exposing crucial issues in the same number of topics."
## V Study Limitation
For text classification, we excluded the result sets for less prominent models like _Decision Trees_ and _Random Forest_, as they yielded less promising results. Different variants of the NN based architectures could be introduced by adding layers such as _Hierarchical Attention Networks_ or _Bidirectional Recurrent Convolutional Neural Networks_. We excluded the NN architectures' performance after hyper-parameter tuning, as it did not contribute a significant improvement. We could generate various iterations of the results by letting developers use review sets of different sizes. The topic modeling algorithms were not run with a varying number of topics to see how they scale the distribution of issues across topics. In the topic modeling part, we focused solely on qualitative performance and did not take run-time into account.
## VI Related Work
Multiple studies investigated the energy consumption of mobile apps and software among developers, identified best practices, determined energy patterns, or mined energy related commits. These studies show the importance of the energy consumption of mobile apps both from the developers' and the users' perspectives [1, 3, 29]. Data mining and analysis of _tweets_ [31, 32], _Amazon product reviews_ [33], and _product reviews and descriptions_ [34] have focused on requirements engineering too. Stanik et al. [35] worked on classifying multilingual user feedback (with a single CNN architecture and a single word embedding), and before them Maalej et al. [36] tried to automatically classify app reviews but did not take Neural Network based architectures into account. Furthermore, automatically mining product opinions from the web and generating opinion-based summaries of user reviews were attempted in the following works [37, 38, 39, 40, 41, 42, 43, 44], but all of them worked with sub-optimal topic modeling algorithms.
## VII Conclusion
We empirically studied 60 machine learning model-feature combinations along with 30 neural network based models equipped with 3 word embeddings. We reported the comparison using accuracy, F1-score, and run time as metrics, where our focus was to automatically extract the energy related reviews. Our study also exposed an opportunity cost (run time) for achieving better performance using the mentioned approaches. We further compared the topic modeling algorithms OBTM and AOLDA for automatically investigating the emerging issues responsible for energy inefficiency of the apps. We compared the previously used _string matching_ with the results obtained from the applied techniques and presented the results of a qualitative study performed in collaboration with developers and students to determine their preferences.
|
2306.07289 | Multi-Interactive-Modality based Modeling for Myopia Pro-Gression of
Adolescent Student | Myopia is a common visual disorder that affects millions of people worldwide
and its prevalence has been increasing in recent years. Environmental factors,
such as reading time, viewing distance, and ambient lighting, have been
identified as potential factors in the development of myopia. In this study, we
investigated the relationship between three major factors and myopia in 120
adolescents. By collecting environmental images of the adolescents in the
learning state as well as retinal fundus images, we proposed an environmental
visual load (EVL) model to extract the potential information in these images.
Through experimental data analysis, we found that these three major factors are
closely related to the severity of myopia, and that the simultaneous
exacerbation of these factors sharply increases the myopia of the eye. Our
results suggest that interventions targeting these environmental factors may
help prevent and manage myopia. | Xiangyu Yan, Gongen Han, Can Fang, Xuan Jing | 2023-06-09T02:24:14Z | http://arxiv.org/abs/2306.07289v1 | # Multi-Interactive-Modality based Modeling for Myopia Pro-Gression of Adolescent Student
###### Abstract
Myopia is a common visual disorder that affects millions of people worldwide and its prevalence has been increasing in recent years. Environmental factors, such as reading time, viewing distance, and ambient lighting, have been identified as potential factors in the development of myopia. In this study, we investigated the relationship between three major factors and myopia in 120 adolescents. By collecting environmental images of the adolescents in the learning state as well as retinal fundus images, we proposed an environmental visual load (EVL) model to extract the potential information in these images. Through experimental data analysis, we found that these three major factors are closely related to the severity of myopia, and that the simultaneous exacerbation of these factors sharply increases the myopia of the eye. Our results suggest that interventions targeting these environmental factors may help prevent and manage myopia.
myopia, reading time, viewing distance, ambient lighting, and environmental visual load model
## 1 Introduction
Myopia, or nearsightedness, is a common refractive error that affects millions of people worldwide. In myopia, light entering the eye focuses in front of the retina instead of on it, causing distant objects to appear blurry. While myopia is generally correctable with glasses or contact lenses, high levels of myopia increase the risk of serious eye conditions such as retinal detachment, glaucoma, and cataracts(Grosvenor, 2007; Wang et al., 2023; Han et al., 2022).
The etiology of myopia is multifactorial, with both genetic and environmental factors playing a role. Studies have shown that environmental factors can significantly influence the development and progression of myopia(Oner et al., 2016). Among these environmental factors are reading duration, viewing distance, and ambient lighting, as shown in Figure 1.
One of the most consistent risk factors for myopia is prolonged near work, such as reading or computer use. This association has been observed in numerous studies across different populations and age groups. The exact mechanisms by which near work contributes to myopia are not fully understood, but it is thought to be related to the accommodative and convergence responses of the eye(Singh et al., 2019). The accommodative response is the process by which the eye adjusts the shape of its lens to focus on near objects. The convergence response is the process by which the eyes rotate inwards to maintain single
binocular vision (Huang et al., 2020). Therefore, the accommodative and convergence responses of the eyes are thought to play a role in the development of myopia due to prolonged near work activities.
The distance between the eyes and the object being viewed is another factor that has been linked to myopia. Studies have found that people who hold reading material closer to their eyes are more likely to be myopic than those who hold it further away (Pan et al., 2018). This relationship may also be related to the accommodative and convergence responses of the eye. The closer an object is to the eyes, the greater the demand on the accommodative and convergence systems (Li et al., 2016; Jiang et al., 2002). This increased demand may cause the responses to become unbalanced, leading to axial elongation of the eye and the development of myopia, as shown in Fig. 2.
Ambient lighting, or the level of light in the environment, has also been implicated in the development and progression of myopia. Studies have found that people who spend more time in low light environments, such as dark classrooms or offices, are more likely to be myopic than those who spend more time in brightly lit environments. The exact mechanisms by which ambient lighting affects myopia are not fully understood,
Figure 1: Environmental factors that cause myopia.
Figure 2: Example of accommodative and convergence reactions.
but it is thought to be related to the size of the pupil and the amount of aberrations in the eye. In low light environments, the pupil may dilate to let in more light, which can increase the number of aberrations in the eye and contribute to the development of myopia (Li et al., 2015; Smith et al., 2012).
In summary, reading duration, viewing distance, and ambient lighting are all factors that have been implicated in the development and progression of myopia (Vienne et al., 2014; Matsuo and Ohtsuki, 1992; Borsting et al., 2003). However, explanations for most of these factors are separate, unsystematic, and lack mathematical modeling. In order to gain more insight into the relationship between myopia and these three factors, in this paper, we systematically model the association between myopia and the three factors and propose an environmental visual load (EVL) model. Specifically, we first collected environmental images of adolescents in the learning state as well as retinal fundus images. Second, we analyzed the values of several key variables affecting myopia in terms of reading time, viewing distance, and environmental lighting. Furthermore, based on these variables as well as the image data, we proposed three different models, namely the integrated dual-focus model, the expanded hyperbolic model, and the lighting model, to explore the mathematical relationships between these factors and myopia. In order to be able to describe these relationships more intuitively, we further unify these three models and proposed the final EVL model.
The experimental results showed that reading time, viewing distance, and environmental lighting are all important environmental factors that affect the development of myopia. Changes in these three major factors can lead to a tendency for myopia to develop in children's vision. Our proposed environmental visual load model can counteract this trend by multi-factor modeling.
## 2 Methods
For the whole modeling process, as shown in Figure 3, we start with the extraction of key attributes. We consider that prolonged near work disrupts the accommodative and convergence responses of the eye, and it has been found that children with a greater accommodative response than convergence response are more likely to develop myopia. Both responses are therefore included in the overall set of key attributes. As for the effect of ambient light on myopia, the pupil may dilate in low-light environments to let in more light, which may increase the aberration of the eye and lead to the development of myopia. Therefore, we included the amount of aberration and pupil size as key elements.
Figure 3: The flow of the whole system.
After obtaining some key variables about myopia, we next model them to establish the intrinsic connections that have responded to the trends in visual acuity in adolescents. The integrated dual-focus model (IDFM) is the first mathematical model we propose that considers the accommodation and convergence responses of the eye during near work. This model assumes that the eye has a resting point of accommodation (RPA) and a resting point of convergence (RPV), which are the positions of the eye's lens and eye muscles, respectively, when the eye is at rest.
The model predicts the resting point of accommodation of the eye, \(A\), and the resting point of convergence, \(V\), based on the viewing distance, \(d\), and the refractive error of the eye, \(M\). The equations for the resting point of accommodation and convergence are as follows:
\[A=\frac{M}{1-d},\qquad V=\frac{M}{1+d} \tag{1}\]
The IDFM model suggests that when the demand on the accommodative and convergence systems is unbalanced, the eye will respond by elongating axially, which can lead to the development of myopia. Specifically, the model predicts that myopia will develop when the RPA is closer to the eye than the RPV, causing the accommodative response to be greater than the convergence response.
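As a concrete illustration of Eq. (1), the resting points can be computed directly; the short Python sketch below uses hypothetical input values of our own choosing, since the paper does not give a worked numerical example.

```python
def resting_points(M, d):
    """Integrated dual-focus model (Eq. 1): resting point of accommodation A and
    resting point of convergence V, given refractive error M and viewing distance d."""
    A = M / (1 - d)
    V = M / (1 + d)
    return A, V

# Hypothetical example: refractive error M = -1.5 and viewing distance d = 0.3.
A, V = resting_points(M=-1.5, d=0.3)
print(f"A = {A:.3f}, V = {V:.3f}, A/V = {A / V:.3f}")  # A/V = (1+d)/(1-d) in this model
```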
The expanded hyperbolic model (EHM) is another mathematical model we propose that considers the accommodative and convergence responses of the eye, as well as the effects of near work on axial elongation. This model assumes that the elongation of the eye is proportional to the resting point of accommodation, \(A\), and inversely proportional to the resting point of convergence, \(V\).
The model predicts the axial length of the eye, \(AL\), based on the resting point of accommodation and convergence, the initial axial length of the eye, \(AL_{0}\), and the duration of near work, \(t\). The equation for the axial length of the eye is as follows:
\[AL=AL_{0}+n\times t\times\frac{A}{V} \tag{2}\]
where \(n\) is a constant that depends on the individual and the type of near work being performed.
The EHM model suggests that myopia will develop when the demand on the accommodative and convergence systems is unbalanced, leading to increased axial elongation of the eye.
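Eq. (2) translates directly into a one-line helper; the constant \(n\) and the inputs below are placeholders of ours, since fitted per-individual values are not reported in this section.

```python
def axial_length(AL0, t, A, V, n=0.01):
    """Expanded hyperbolic model (Eq. 2): axial length after near work of duration t,
    given the initial axial length AL0 and the resting points A and V from Eq. (1)."""
    return AL0 + n * t * (A / V)

# Hypothetical example: AL0 = 24.0, two hours of near work (t = 2), A and V from Eq. (1).
print(axial_length(AL0=24.0, t=2.0, A=-2.143, V=-1.154))
```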
The lighting model is a mathematical model that considers the effects of ambient lighting on the size of the pupil and the number of aberrations in the eye. This model assumes that the number of aberrations in the eye, \(W\), is proportional to the size of the pupil, \(P\).
The model predicts the refractive error of the eye, \(M\), based on the number of aberrations in the eye, \(W\), the size of the pupil, \(P\), and the level of ambient lighting, \(L\). The equation for the refractive error of the eye is as follows:
\[M=M_{0}+n\times\frac{(W-W_{0})\times(P-P_{0})}{L} \tag{3}\]
where \(M_{0}\) is the initial refractive error of the eye, \(P_{0}\) is the initial size of the pupil, \(W_{0}\) is the initial number of aberrations, and \(n\) is a constant that depends on the individual and the type of near work being performed.
The lighting model suggests that myopia will develop when the level of ambient lighting is low, leading to increased pupil size and aberrations in the eye.
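The lighting model of Eq. (3) can be sketched the same way; the baseline values \(M_{0}\), \(W_{0}\), \(P_{0}\) and the constant \(n\) below are illustrative assumptions rather than fitted parameters, while the two lighting levels reuse the initialization values reported later in the experiments.

```python
def refractive_error(M0, W, W0, P, P0, L, n=1.0):
    """Lighting model (Eq. 3): refractive error given the number of aberrations W,
    pupil size P, and ambient lighting level L, relative to the baselines W0, P0, M0.
    A lower L (dimmer environment) inflates the correction term added to M0."""
    return M0 + n * (W - W0) * (P - P0) / L

# Hypothetical example: dim lighting (L = 189) versus bright lighting (L = 892).
for L in (189, 892):
    print(L, refractive_error(M0=-1.0, W=0.35, W0=0.30, P=5.5, P0=4.0, L=L))
```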
The above three equations can model each variable affecting myopia. Next, we will unify the three models into an environmental visual load (EVL) model.
In the previous analysis, we illustrated that two of the three main environmental factors affecting visual acuity, namely reading duration and viewing distance, both lead to an imbalance in the demands of the accommodative and convergence systems. Therefore, we believe that such an imbalance directly reflects the cause of myopia formation, and we also incorporate the effect of ambient light. The EVL model is calculated as follows:
\[O=\frac{AR}{VR} \tag{4}\]
where \(AR\) denotes the accommodative response, and \(VR\) denotes the convergence response, and their values change according to the environment of the human eye, so we use the axial length of the eye, \(AL\), and the viewing distance, \(d\), to correspond to their changes. They are calculated as follows:
\[AR=AL+M(1-d),\qquad VR=AL+M(1+d) \tag{5}\]
If \(O>1+\theta\) or \(O<1-\theta\), we consider that there is an imbalance between the accommodative and convergence responses, i.e., the accommodative response is reduced relative to the level of stimulus convergence. This means that the eyes have difficulty focusing on distant objects, leading to blurry vision. If \(1-\theta\leq O\leq 1+\theta\), then we consider that the accommodative and convergence responses are balanced and that there is not yet a clear tendency toward blurred vision. We set the value of \(\theta\) to 0.1.
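Combining Eqs. (4) and (5) with the threshold \(\theta=0.1\) gives the full EVL decision rule; the sketch below is ours, with hypothetical input values.

```python
def evl_state(AL, M, d, theta=0.1):
    """Environmental visual load model (Eqs. 4-5): ratio O of the accommodative
    response AR to the convergence response VR, and whether the two are balanced."""
    AR = AL + M * (1 - d)
    VR = AL + M * (1 + d)
    O = AR / VR
    balanced = (1 - theta) <= O <= (1 + theta)
    return O, balanced

# Hypothetical example: axial length AL = 24.0, refractive error M = -2.0, distance d = 0.25.
O, balanced = evl_state(AL=24.0, M=-2.0, d=0.25)
print(f"O = {O:.4f}, balanced = {balanced}")
```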
In order to be able to evaluate the above proposed model accurately, we divided the parameters involved in the model into two categories: computational quantities as well as statistical quantities, as shown in Table 1. The former is based on the calculation of the statistics, while the latter is the value obtained through experimental statistics in this paper. Therefore, in the subsequent experiments, we mainly focus on these statistics for the analysis.
## 3 Experiment
The experimental scenario is shown in Figure 4. The experimental environment was set up as two scenarios: the Environmental Awareness System (EAS) and the Eye Condition Awareness System (ECAS). The former
was mainly used to record data on the subject's environmental state, as well as the ambient light level to which the subject was exposed, and the duration of near work. The latter was used to photograph and measure the subject's eye condition and to obtain retinal fundus images, as well as to analyze parameters such as eye pupil size, refractive error of the eye, aberration size and viewing distance. As for the types of near work of the subjects being performed, they were divided into three types: reading, writing, and playing with cell phones, with corresponding values of 1, 1.5, and 2.
We conducted a cross-sectional study of 120 adolescents between the ages of 8 and 16, with an average of 15 of each age. Each participant underwent a comprehensive eye examination, including a measurement of spherical equivalent refraction (SER), which is a standard measure of myopia. We also use the EAS system and ECAS system to collect information on the ocular status of each participant in the experimental setting and fill in the corresponding statistics, respectively, as shown in Figure 5 and Table 2. A sample of our experimental environment configuration and the retinal fundus photograph collected are shown in Figure 5.
We divided the collected data into different age groups, with approximately 20 individuals in each age group. And we placed the adolescents of each age group in different experimental settings. Since they differ in their own eye status, they differ in the initialization values of the model. We set three different values for the initial level of ambient lighting \(L_{0}\), namely 189, 527 and 892. Table 2 below shows the experimental
Figure 4: Experimental Scenario.
Figure 5: Example of experiment. The image on the left is an environmental image, and the right is a retinal fundus photograph.
initialization values for different age groups of adolescents. The data in the table shows the initial axial length of the eye, the size of the pupil, the refractive error of the eye and the amount of aberration, all of which change with age. Axial length usually increases as the eye grows and develops, while the size of the pupil decreases with age, as has been verified in a number of studies (Bach et al., 2019; Kasthurirangan and Glasser, 2006).
The experimental data collected through the experimental system are shown in Table 3. We still divided the collected data into different age groups. And for the collected data, we divided them according to different types of experimental systems, e.g., the EAS system mainly collected the duration of near work \(t\) and the level of ambient lighting \(L\), while the ECAS system mainly collected the size of the pupil \(P\), viewing distance \(d\), the refractive error of the eye \(M\) and the number of aberrations \(W\).
After obtaining all the experimental data, we calculated and analyzed them in detail. In the 8-10 year old group, we mainly conducted experiments on the duration of near work. As work time increased, the accommodative and convergence responses became unbalanced, as reflected in the \(O\) value, and the SER value showed a downward trend. This indicates that our EVL model can capture the adverse effects of the duration of near work on visual acuity.
This phenomenon was also present for the ambient lighting and reading distance factors, which were examined in the 10-12 and 12-14 year old groups, respectively. The magnitude of the trend toward myopia gradually increased as ambient light became weaker and reading distance became shorter. In the 14-16 year old group, we found that increasing the values of the three main factors simultaneously led to an increase in the output values of the final EVL model, which increased the imbalance between the accommodative and convergence responses. In all groups, the SER indicator decreased as the factors changed, and the 14-16 group showed the most significant changes, indicating that simultaneous changes in reading time, ambient lighting and reading distance had the greatest effect on children's visual acuity.
## 4 Conclusions
Our findings suggest that reading time, viewing distance, and environmental lighting are all important environmental factors that influence the development of myopia. With these three main factors in mind, we analyzed the key ocular attributes that would be affected by them. By introducing the EAS and ECAS devices, we were able to analyze these key attributes using the acquired environmental images as well as retinal fundus images. Based on this, we proposed three different models: the integrated dual-focus model, the expanded hyperbolic model, and the lighting model, which were combined into the final environmental visual load (EVL) model. The EVL model was used to reflect the trend of myopia.
## Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## Author Contributions
XY contributed to conception and design of the study. GH organized the database. CF performed the statistical analysis. XJ wrote the first draft of the manuscript. XY wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
|
2310.08884 | Extending Multi-modal Contrastive Representations | Multi-modal contrastive representation (MCR) of more than three modalities is
critical in multi-modal learning. Although recent methods showcase impressive
achievements, the high dependence on large-scale, high-quality paired data and
the expensive training costs limit their further development. Inspired by
recent C-MCR, this paper proposes Extending Multimodal Contrastive
Representation (Ex-MCR), a training-efficient and paired-data-free method to
flexibly learn unified contrastive representation space for more than three
modalities by integrating the knowledge of existing MCR spaces. Specifically,
Ex-MCR aligns multiple existing MCRs into the same based MCR, which can
effectively preserve the original semantic alignment of the based MCR. Besides,
we comprehensively enhance the entire learning pipeline for aligning MCR spaces
from the perspectives of training data, architecture, and learning objectives.
With the preserved original modality alignment and the enhanced space
alignment, Ex-MCR shows superior representation learning performance and
excellent modality extensibility. To demonstrate the effectiveness of Ex-MCR,
we align the MCR spaces of CLAP (audio-text) and ULIP (3D-vision) into the CLIP
(vision-text), leveraging the overlapping text and image modality,
respectively. Remarkably, without using any paired data, Ex-MCR learns a
3D-image-text-audio unified contrastive representation, and it achieves
state-of-the-art performance on audio-visual, 3D-image, audio-text, visual-text
retrieval, and 3D object classification tasks. More importantly, extensive
qualitative results further demonstrate the emergent semantic alignment between
the extended modalities (e.g., audio and 3D), which highlights the great
potential of modality extensibility. | Zehan Wang, Ziang Zhang, Luping Liu, Yang Zhao, Haifeng Huang, Tao Jin, Zhou Zhao | 2023-10-13T06:34:23Z | http://arxiv.org/abs/2310.08884v1 | # Extending Multi-modal Contrastive Representations
###### Abstract
Multi-modal contrastive representation (MCR) of more than three modalities is critical in multi-modal learning. Although recent methods showcase impressive achievements, the high dependence on large-scale, high-quality paired data and the expensive training costs limit their further development. Inspired by recent C-MCR, this paper proposes **E**xtending **M**ultimodal **C**ontrastive **R**epresentation (Ex-MCR), a training-efficient and paired-data-free method to flexibly learn unified contrastive representation space for more than three modalities by integrating the knowledge of existing MCR spaces. Specifically, Ex-MCR aligns multiple existing MCRs into the same based MCR, which can effectively preserve the original semantic alignment of the based MCR. Besides, we comprehensively enhance the entire learning pipeline for aligning MCR spaces from the perspectives of training data, architecture, and learning objectives. With the preserved original modality alignment and the enhanced space alignment, Ex-MCR shows superior representation learning performance and excellent modality extensibility. To demonstrate the effectiveness of Ex-MCR, we align the MCR spaces of CLAP (audio-text) and ULIP (3D-vision) into the CLIP (vision-text), leveraging the overlapping text and image modality, respectively. Remarkably, without using any paired data, Ex-MCR learns a 3D-image-text-audio unified contrastive representation, and it achieves state-of-the-art performance on audio-visual, 3D-image, audio-text, visual-text retrieval, and 3D object classification tasks. More importantly, extensive qualitative results further demonstrate the emergent semantic alignment between the extended modalities (e.g., audio and 3D), which highlights the great potential of modality extensibility. Our code is available at [https://github.com/MCR-PEFT/Ex-MCR](https://github.com/MCR-PEFT/Ex-MCR).
## 1 Introduction
Multi-modal Contrastive Representation (MCR) learning endeavors to align inputs from diverse modalities within a shared representation space. Recently, the high-quality contrastive representations of more than three modalities attract increasing attention (Girdhar et al., 2023; Guzhov et al., 2022; Xue et al., 2023a,b; Liu et al., 2023b; Hegde et al., 2023; Guo et al., 2023), and play a fundamental role in many application scenarios of multi-modal understanding (Su et al., 2023; Zhang et al., 2023; Zhao et al., 2023; Wang et al., 2023a; Han et al., 2023) and generation (Tang et al., 2023; Liu et al., 2023a; Ramesh et al., 2022; Rombach et al., 2022; Gafni et al., 2022; Huang et al., 2023a). Despite the achievements of multi-modal contrastive learning, its broader and more flexible application is still constrained by the high dependence on large-scale, high-quality paired data and extremely costly training resources.
Recently, Wang et al. (2023b) introduces a novel training-efficient method, called C-MCR, for learning contrastive representations between modalities that lack paired data by mining knowledge from existing MCR spaces. It connects two pre-trained MCRs onto a new shared space via overlapping modalities. Since the modalities of each MCR are intrinsically aligned, the connection learned
from overlapping modalities can also be transferred to non-overlapping modalities. Experimentally, without using image-audio and 3D-text data pairs, C-MCR demonstrates advanced performance in image-audio and 3D-text downstream tasks.
Despite the remarkable flexibility and performance of C-MCR, its broader applications are hindered by a critical limitation: C-MCR mainly focuses on learning a new space for the two non-overlapping modalities, while the original modality alignments in powerful pre-trained MCRs are forgotten. As a result of the decline of original alignment, C-MCR faces challenges in concurrently establishing connections among three or more MCRs. Therefore, C-MCR can not be used to flexibly learn a shared contrastive representation space for more than three modalities.
This paper introduces **E**xtending **M**ulti-modal **C**ontrastive **R**epresentations (Ex-MCR), a novel training-efficient and paired-data-free unified representation learning method with excellent modality extensibility. Ex-MCR better preserves the alignment within the original pre-trained MCR space and enhances the overall learning pipeline to align different MCR spaces more robustly. Specifically, the two important designs of Ex-MCR are discussed in detail below:
Firstly, we extend one MCR space (called leaf-MCR) into another fixed MCR space (called base-MCR) rather than connecting two MCR spaces to a new space. Such a simple yet effective approach maximizes the preservation of modality alignment within the base MCR, demonstrating great potential for integrating multiple MCRs.
Secondly, we enhance the whole learning process to promote stronger alignment across different MCRs. Specifically: 1) From the training data perspective, we extract various modality-centric pseudo data pairs, aiming to alleviate the semantic bias of pseudo pairs in Wang et al. (2023b) and reflect MCR space more comprehensively. 2) From the architecture perspective, we propose a decoupled projector, which reduces interference among different optimization objectives. We further find that a simple linear mapping is more effective for learning to eliminate modality gaps within MCRs. 3) From the learning objective perspective, we employ a dense contrastive loss on pseudo-pairs between all possible modalities pairs, further enhancing the learned alignment's stability.
Utilizing Ex-MCR, we can flexibly align multiple pre-trained leaf-MCR spaces onto a common base-MCR space without any paired data and with exceptionally low training costs. To evaluate the effectiveness of our Ex-MCR, we try to extend the ULIP (3D-image) and CLAP (audio-text) onto CLIP (image-text) via the overlapping image and text modality, respectively, which derive unified and high-quality audio-image-text-3D representations. Without using any paired data, Ex-MCR attains state-of-the-art performance results across various zero-shot tasks, including audio-visual, 3D-image, audio-text, visual-text retrieval, and 3D object classification. More importantly, semantic alignment is also observed between extended modalities (e.g., audio-3D), which highlights the potential of Ex-MCR in modality extensibility.
Our contributions can be summarized as three-fold:
(1) We propose **E**xtending **M**ulti-modal **C**ontrastive **R**epresentations (Ex-MCR), a novel training-efficient and paired-data-free representation learning method for more than three modalities.
(2) We comprehensively enhance the entire learning pipeline for aligning MCR spaces from the perspectives of training data, architecture, and learning objectives. These novel designs offer valuable insights about effectively integrating knowledge within existing MCRs.
(3) We obtain high-quality unified audio-image-text-3D representations using Ex-MCR, which exhibits advanced performance on a series of tasks and excellent modality scalability. Besides, we also conduct detailed ablation studies to verify the effectiveness of each proposed component.
## 2 Related Works
### Multi-Modal Contrastive Representations
Multi-modal Contrastive Representations (MCR) learning aims to acquire semantically aligned cross-modal representations by pretraining the model on large-scale paired data. These aligned representations play a pivotal role in downstream comprehension and generation tasks. Inspired by the success of CLIP (Radford et al., 2021), many works try to learning contrastive representa
tions for two modalities (Radford et al., 2021; Li et al., 2022; Li et al., 2022; Gan et al., 2022; Xu et al., 2021). CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) learn shared vision-text representations from million-level image-text pairs. CLAP (Elizalde et al., 2023; Wu et al., 2023) learns the audio-text representation, and CAV-MAE (Gong et al., 2022) focus on acquiring shared audio-visual feature space. C-MCR (Wang et al., 2023b) focuses on learning new representation space by connecting the pre-trained spaces through overlapping modality.
Apart from aligning two modalities, shared representations for more than three modalities attract increasing attention. AudioCLIP (Guzhov et al., 2022) and WAV2CLIP (Wu et al., 2022) train an audio encoder aligned with CLIP using audio-text-image triplets data. ULIP (Xue et al., 2023a;b) and openshape (Liu et al., 2023b) construct 3D-image-text triplets data through rendering 3D mesh into 2D images and captioning images for textual description, thereby learning a corresponding 3D encoder for image-text MCR space. Furthermore, Imagebind (Han et al., 2023) exclusively utilizes data pairs between various modalities and images to expand CLIP with multiple modal alignment encoders.
However, these methods heavily rely on large-scale, high-quality paired data collected from the internet or generated automatically and exceptionally high computational resources. Due to the lack of high-quality paired data for more modal combinations, such as audio-visual and text-3D, the extensibility of representation learning is notably constrained. Furthermore, the exceedingly high computational costs also diminish the flexibility of MCR learning.
### Audio-Visual and 3D-Text Learning
Audio-vision and 3D-text learning have significant applications in multi-modal recognition (Gemmeke et al., 2017; Chen et al., 2020b; Chang et al., 2015; Dai et al., 2017), localization (Chen et al., 2020a; Achlioptas et al., 2020; Zhao et al., 2021; 2018; Mo & Morgado, 2022; Chen et al., 2021), question-answer (Wang et al., 2023a; Zhao et al., 2023; Azuma et al., 2022; Lin et al., 2023b), and generation (Ruan et al., 2023; Poole et al., 2022; Lin et al., 2023a). They also play important roles in robot-related tasks such as human-machine interaction and synthetical information obtaining in complex environments (Peng et al., 2023; Huang et al., 2023b).
However, audio-visual datasets (Gemmeke et al., 2017; Chen et al., 2020b) often suffer from substantial noise due to soundless objects and invisible sounds. Additionally, paired 3D-text data (Chang et al., 2015) is scarce and expensive to collect. The scarcity of large-scale datasets hampers the further advancement of 3D-text and audio-vision contrastive representations. Previous methods, such as AudioCLIP (Guzhov et al., 2022) and ULIP (Xue et al., 2023a;b), mainly focus on automatically collecting or generating more paired data, but they are still limited by the relatively low quality of the training datasets. Our approach overcomes the reliance on paired data, achieving superior performance in audio-vision and 3D-text retrieval without using any audio-vision or 3D-text data.
## 3 Extending Multi-modal Contrastive Learning
### Extending Rather Than Connecting
Given two pre-trained MCR spaces on modalities \((\mathcal{A},\mathcal{B})\) and \((\mathcal{B},\mathcal{C})\), C-MCR (Wang et al., 2023b) employs two projectors to map them into a new shared space, where the alignment of different MCRs can be learned from overlapping modality \(\mathcal{B}\). Since each pre-trained MCR intrinsically contains the alignment of \((\mathcal{A},\mathcal{B})\) and \((\mathcal{B},\mathcal{C})\), the alignment learned from overlapping modality theoretically can be transferred to the non-overlapping modalities. Specifically, the embeddings from overlapping modality \(\mathcal{B}\) but different MCR are aligned via an InfoNCE loss in the new space. Besides, C-MCR retrieves pseudo \((\mathcal{A},\mathcal{C})\) pairs using the same data of \(\mathcal{B}\) and these pseudo-pairs are also aligned for a more comprehensive inter-MCR alignment. Moreover, C-MCR employs L2 loss between the embeddings from the same MCR space but different modalities to close the modality gap (Liang et al., 2022), significantly enhancing the transferability of learned inter-MCR alignment. C-MCR has remarkable flexibility and versatility since learning a novel C-MCR space requires two learnable MLPs and unpaired unimodal data.
However, C-MCR mainly focuses on learning a new space for the two non-overlapping modalities (\(\mathcal{A}\), \(\mathcal{C}\)), while the original modality alignment (\(\mathcal{A}\), \(\mathcal{B}\)) and (\(\mathcal{B}\), \(\mathcal{C}\)) in powerful pre-trained MCRs are forgotten. As a result of the decline of original alignment, C-MCR faces challenges in concurrently establishing connections among three or more MCRs. Therefore, C-MCR can not be used to flexibly learn a shared contrastive representation space for more than three modalities.
To achieve a training-efficient and paired-data-free unified contrastive representation method, we propose to extend one MCR to another rather than connect the two MCRs to a new space. Considering the two MCR spaces on modalities \((\mathcal{A},\mathcal{B})\) and \((\mathcal{B},\mathcal{C})\), Ex-MCR chooses one as the base-MCR \((\mathcal{A},\mathcal{B})\), and the other as the leaf-MCR \((\mathcal{B},\mathcal{C})\). In the "Extended" scheme, the base-MCR space is frozen, and we only train one projector to map leaf-MCR to base-MCR utilizing the overlapping modalities \(\mathcal{B}\). Specifically, we employ the native pairs of \(\mathcal{B}\) and pseudo pairs generated by \(\mathcal{B}\) to learn aligning leaf-MCR to base-MCR via InfoNCE loss. Simultaneously, we employ the L2 loss to bridge the modality gap between \((\mathcal{B},\mathcal{C})\) modalities of leaf-MCR, thereby facilitating more transferable alignments between the MCR spaces.
In contrast to C-MCR, Ex-MCR can conveniently expand more MCR spaces. Benefiting from efficient training and no need to pair data, we can flexibly align multiple leaf-MCR spaces to the same base-MCR space. In addition to explicitly establishing modality alignment within leaf-MCR and base-MCR, semantic alignment also emerges among extended modalities. Ex-MCR leverages the pivotal role of the base-MCR, employing it as a bridge for achieving semantic alignment among modalities in multiple leaf-MCR spaces.
### Enhancing Alignment Learning Pipeline
Before delving into the details of our learning pipeline, we first clarify the necessary symbols and notations. We align the audio-text space of CLAP and the 3D-image space of ULIP (leaf-MCRs) to the image-text space of CLIP (base-MCR). The unimodal inputs of audio, text, image, and 3D point cloud are denoted as \(A\), \(T\), \(V\), and \(P\), respectively. The audio features, extracted from the CLAP audio encoder, are denoted as \(\mathbf{A}^{A}=\{\mathbf{a}_{1}^{A},\mathbf{a}_{2}^{A},...,\mathbf{a}_{N}^{A}\}\). Similarly, employing the encoders of CLAP, CLIP, and ULIP, we can extract corresponding sets of features \(\mathbf{T}^{A}\), \(\mathbf{T}^{I}\), \(\mathbf{V}^{I}\), \(\mathbf{V}^{U}\), and \(\mathbf{P}^{U}\), where the superscripts \(A\), \(I\), and \(U\) represent encoding by CLAP, CLIP, and ULIP, respectively.
In Ex-MCR, freezing base-MCR allows us to preserve the original alignment of base-MCR but also implies that the modality gap within base-MCR remains preserved. Consequently, it becomes necessary to map the features of leaf-MCR to more suitable positions within the base-MCR space. To this end, we enhance the entire alignment learning pipeline from the perspectives of training
Figure 1: **The pipeline of extending leaf-MCRs (CLAP, ULIP) to base-MCR (CLIP).** For aligning CLAP to CLIP, we take audio, text, and image as input and encode them individually with frozen encoders in CLAP and CLIP. As shown in the left subfigure, we iteratively take the three kinds of modalities as query to generate pseudo-pairs. For aligning ULIP to CLIP, we take a symmetrical approach. When inferencing, audio and 3D inputs are inputted to the CLAP audio encoder and ULIP 3D encoder, then mapped into the CLIP MCR space via the corresponding projectors. Texts and images are encoded by the CLIP text encoder and image encoder.
data, architecture, and learning objectives. This enhanced learning pipeline effectively improves alignment's comprehensiveness, stability, and accuracy. Below, we sequentially introduce the design behind each perspective and corresponding motivations.
#### 3.2.1 Various Modality-centric Data
C-MCR employs data from overlapping modalities to aggregate semantically consistent embeddings of non-overlapping modalities, thereby creating pseudo-pairs. This approach promotes a more comprehensive alignment. However, such single modality-centric data are often biased and noisy. Taking CLIP and CLAP as an example, the text encoders in CLIP and CLAP may introduce individual biases when encoding the same text, and texts cannot describe the entire diverse visual or audio world. Therefore, the semantics of the text-centric pseudo audio-image pairs are limited by the expressive power of the text modality, and the text-centric generated audio and image embeddings struggle to capture the full distribution of the audio and image representation spaces.
To tackle the problem of limited and biased training data, we propose aggregating various modality-centric data. As depicted in the left sub-figure of Fig. 1, we no longer only take the overlapping modality as the query. Instead, all modalities in the two MCR spaces are iteratively employed as queries to aggregate corresponding semantically consistent embeddings. Take aligning CLAP to CLIP as an example; the overlapping modality-centric (e.g., text-centric) consistent embeddings can be aggregated as follows:
\[\begin{split}\tilde{\mathbf{t}}_{i}^{A}=\mathbf{t}_{i}^{A};&\quad\tilde{\mathbf{a}}_{i}^{A}=\mathrm{softmax}((\tilde{\mathbf{t}}_{i}^{A}\cdot\mathbf{A}^{A})/\tau_{1})\cdot(\mathbf{A}^{A})^{T};\\ \tilde{\mathbf{t}}_{i}^{I}=\mathbf{t}_{i}^{I};&\quad\tilde{\mathbf{v}}_{i}^{I}=\mathrm{softmax}((\tilde{\mathbf{t}}_{i}^{I}\cdot\mathbf{V}^{I})/\tau_{1})\cdot(\mathbf{V}^{I})^{T}\end{split} \tag{1}\]
Where the \(\tau_{1}\) is the temperature parameter of softmax. The \(\tilde{\mathbf{t}}_{i}^{A}\) and \(\tilde{\mathbf{t}}_{i}^{I}\) are derived from the same text data, and their semantics are natively consistent. Benefiting from the modality semantic alignment within each MCR, the generated \(\tilde{\mathbf{a}}_{i}^{A}\) and \(\tilde{\mathbf{v}}_{i}^{I}\) are also semantically relevant to the \(\tilde{\mathbf{t}}_{i}^{A}\) and \(\tilde{\mathbf{t}}_{i}^{I}\).
To capture the representation space of non-overlapping modality more comprehensively, we further introduce non-overlapping modality-centric (e.g., audio-centric or image-centric) data. This process (take audio-centric as an example) can be expressed as:
\[\begin{split}\tilde{\mathbf{a}}_{i}^{A}=\mathbf{a}_{i}^{A};& \quad\tilde{\mathbf{t}}_{i}^{A}=\mathrm{softmax}((\mathbf{a}_{i}^{A}\cdot \mathbf{T}^{A})/\tau_{1})\cdot(\mathbf{T}^{A})^{T}\\ \tilde{\mathbf{t}}_{i}^{I}=\mathrm{softmax}((\mathbf{a}_{i}^{A} \cdot\mathbf{T}^{A})/\tau_{1})\cdot(\mathbf{T}^{I})^{T};&\quad \tilde{\mathbf{v}}_{i}^{I}=\mathrm{softmax}((\tilde{\mathbf{t}}_{i}^{I}\cdot \mathbf{V}^{I})/\tau_{1})\cdot(\mathbf{V}^{I})^{T}\end{split} \tag{2}\]
Since the embeddings of \(\mathbf{T}^{A}\) and \(\mathbf{T}^{I}\) of overlapping modality are one-to-one matched, the similarity weights between \(\mathbf{a}_{i}^{A}\) and \(\mathbf{T}^{A}\) can be naturally transferred for aggregating embeddings of \(\mathbf{T}^{I}\). These pseudo-embedding pairs derived from audio can better reflect the representation space of audio. Based on the aforementioned formulas, when extending CLAP to CLIP, we can acquire three kinds (e.g., audio-centric, text-centric and image-centric) of semantically consistent embeddings \(\{\tilde{\mathbf{a}}_{i}^{A},\tilde{\mathbf{t}}_{i}^{A},\tilde{\mathbf{t}}_{i}^ {I},\tilde{\mathbf{v}}_{i}^{I}\}\). These three kinds of data are combined and shuffled for training.
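A minimal PyTorch sketch of the aggregation in Eqs. (1)-(2) is given below; the tensor names, the use of in-batch memory banks, and the assumption that all embeddings are L2-normalized are ours.

```python
import torch
import torch.nn.functional as F

def aggregate(query, keys, values, tau=0.01):
    """Softmax-weighted aggregation: for each query embedding, average `values`
    with weights given by the query's similarity to `keys` (Eqs. 1-2)."""
    weights = F.softmax(query @ keys.t() / tau, dim=-1)  # (B, N)
    return weights @ values                               # (B, d_values)

# Text-centric pseudo-pairs: t_clap/t_clip embed the same texts in CLAP/CLIP;
# A_clap and V_clip are banks of CLAP audio and CLIP image embeddings.
# a_tilde = aggregate(t_clap, A_clap, A_clap)        # pseudo audio per text (Eq. 1)
# v_tilde = aggregate(t_clip, V_clip, V_clip)        # pseudo image per text (Eq. 1)
# Audio-centric pairs reuse the audio-to-text weights but read out CLIP text embeddings:
# t_tilde_clip = aggregate(a_clap, T_clap, T_clip)   # keys from CLAP, values from CLIP (Eq. 2)
```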
#### 3.2.2 Decoupled Projector
The main network structure of Ex-MCR is a projector, and it serves two purposes: 1) Learning the intra-MCR alignment to close the modality gaps within leaf-MCR and prompt more stable alignment between MCRs. 2) Learning the inter-MCR alignment for extending leaf-MCR to base-MCR. Considering these two different purposes, we propose a decoupled projector to alleviate the potential conflict between distinct optimization objectives while exploring a more reasonable mapping layer design for these two purposes. As shown in Fig. 1, the projector is decoupled into a linear layer \(f_{l}(\cdot)\) for intra-MCR alignment and a multi-layer perceptron (MLP) layer \(f_{m}(\cdot)\) for inter-MCR alignment. For the example of extending CLAP to CLIP, we first use \(f_{l}\) to align \(\tilde{\mathbf{a}}_{i}^{A}\) to \(\tilde{\mathbf{t}}_{i}^{A}\) via L2 loss, which can be formulated as:
\[L_{intra}=\frac{1}{2}\frac{1}{B}\sum_{i=1}^{B}\lVert f_{l}(\tilde{\mathbf{a}}_{ i}^{A})-\tilde{\mathbf{t}}_{i}^{A}\rVert_{2} \tag{3}\]
With this intra-MCR alignment loss, \(f_{l}(\cdot)\) learns the mapping between audio subspace and text subspace within the CLAP, thereby effectively closing the modality gap. Since the subspaces of different modalities in MCR space are actually very similar, linear mapping is enough to bridge the
modality gap. Moreover, our experiments even found that activation layers have a negative effect on bridging the modality gap.
After bridging the modality gap, the shared \(f_{m}(\cdot)\) are employed to map both audio and text embeddings of CLAP space to the CLIP space, which can be expressed as:
\[\hat{\mathbf{a}}_{i}^{A}=f_{m}(f_{l}(\hat{\mathbf{a}}_{i}^{A}));\ \ \hat{\mathbf{t}}_{i}^{A}=f_{m}(\mathbf{t}_{i}^{A}) \tag{4}\]
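A PyTorch sketch of the decoupled projector and the intra-MCR term of Eq. (3) follows; the embedding and hidden dimensions are illustrative, and the MLP width and activation are our assumptions beyond "a 2-layer MLP".

```python
import torch
import torch.nn as nn

class DecoupledProjector(nn.Module):
    """f_l: linear intra-MCR map closing the leaf-MCR modality gap (Eq. 3);
    f_m: 2-layer MLP mapping gap-closed leaf embeddings into the base-MCR space (Eq. 4)."""
    def __init__(self, leaf_dim=512, base_dim=512, hidden=1024):
        super().__init__()
        self.f_l = nn.Linear(leaf_dim, leaf_dim)
        self.f_m = nn.Sequential(nn.Linear(leaf_dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, base_dim))

    def forward(self, a_leaf, t_leaf):
        a_closed = self.f_l(a_leaf)                              # audio pulled toward text
        l_intra = 0.5 * (a_closed - t_leaf).norm(dim=-1).mean()  # Eq. (3)
        a_hat = self.f_m(a_closed)                               # Eq. (4): audio in base-MCR
        t_hat = self.f_m(t_leaf)                                 # Eq. (4): text in base-MCR
        return a_hat, t_hat, l_intra
```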
#### 3.2.3 Dense Alignment Objective
Since the modality gap within base-MCR is retained, a more robust learning objective is needed to map leaf-MCR to the appropriate position in the base-MCR space. To this end, we propose to learn the alignment densely among the quadruple semantic consistent pairs described in Sec. 3.2.1. In the case of the CLAP and CLIP, the dense inter-MCR alignment objectives are defined as:
\[L_{avc}=\mathrm{InfoNCE}(\hat{\mathbf{a}}^{A},\tilde{\mathbf{v}}^{I});\quad L_{tvc}=\mathrm{InfoNCE}(\hat{\mathbf{t}}^{A},\tilde{\mathbf{v}}^{I});\quad L_{atc}=\mathrm{InfoNCE}(\hat{\mathbf{a}}^{A},\tilde{\mathbf{t}}^{I});\quad L_{ttc}=\mathrm{InfoNCE}(\hat{\mathbf{t}}^{A},\tilde{\mathbf{t}}^{I}) \tag{5}\]
where the \(\mathrm{InfoNCE}(\cdot,\cdot)\) is the standard InfoNCE loss, which is defined as:
\[\mathrm{InfoNCE}(\mathbf{x},\mathbf{z})=-\frac{1}{2}\frac{1}{N}\sum_{i=1}^{N}\left[\log\frac{\exp(\mathrm{sim}(\mathbf{x}_{i},\mathbf{z}_{i})/\tau_{2})}{\sum_{j=1}^{N}\exp(\mathrm{sim}(\mathbf{x}_{i},\mathbf{z}_{j})/\tau_{2})}+\log\frac{\exp(\mathrm{sim}(\mathbf{z}_{i},\mathbf{x}_{i})/\tau_{2})}{\sum_{j=1}^{N}\exp(\mathrm{sim}(\mathbf{z}_{i},\mathbf{x}_{j})/\tau_{2})}\right] \tag{6}\]
where the \(\tau_{2}\) is the temperature parameter. The overall loss for extending CLAP to CLIP is defined as a weighted combination of the intra-MCR and inter-MCR losses:
\[L=\lambda L_{intra}+\frac{1}{4}(L_{avc}+L_{atc}+L_{tvc}+L_{ttc}) \tag{7}\]
where \(\lambda\) is the hyper-parameter to balance the two terms.
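The dense objective of Eqs. (5)-(7) reduces to four symmetric InfoNCE terms plus the intra-MCR loss; a possible PyTorch implementation, assuming L2-normalized inputs, is sketched below.

```python
import torch
import torch.nn.functional as F

def info_nce(x, z, tau=0.05):
    """Symmetric InfoNCE of Eq. (6) for L2-normalized embeddings x, z of shape (N, d)."""
    logits = x @ z.t() / tau
    labels = torch.arange(x.size(0), device=x.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def total_loss(a_hat, t_hat, t_tilde_clip, v_tilde_clip, l_intra, lam=0.1):
    """Eq. (7): weighted sum of the intra-MCR term and the four dense alignment terms."""
    dense = (info_nce(a_hat, v_tilde_clip) + info_nce(a_hat, t_tilde_clip) +
             info_nce(t_hat, v_tilde_clip) + info_nce(t_hat, t_tilde_clip)) / 4.0
    return lam * l_intra + dense
```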
For ULIP and CLIP, symmetric versions of the various modality-centric data (Sec. 3.2.1), the decoupled projector (Sec. 3.2.2), and the dense alignment loss (Sec. 3.2.3) are employed to extend the 3D-image space to the image-text space via the overlapping image modality.
Finally, we can use the unified representation space learned from existing MCRs for inference. Considering audio, text, image, and 3D point cloud inputs, we use CLAP's audio encoder, CLIP's text and image encoders, and ULIP's 3D encoder to extract the corresponding features \(\mathbf{a}_{i}^{A}\), \(\mathbf{t}_{i}^{I}\), \(\mathbf{v}_{i}^{I}\), \(\mathbf{p}_{i}^{U}\). Then \(\mathbf{t}_{i}^{I}\), \(\mathbf{v}_{i}^{I}\), \(f_{m}^{A}(f_{l}^{A}(\mathbf{a}_{i}^{A}))\) and \(f_{m}^{U}(f_{l}^{U}(\mathbf{p}_{i}^{U}))\) form the final audio-text-image-3D unified representation learned by Ex-MCR, where \(f_{m}^{A}(\cdot)\), \(f_{l}^{A}(\cdot)\) and \(f_{m}^{U}(\cdot)\), \(f_{l}^{U}(\cdot)\) are the learned projectors of CLAP and ULIP, respectively.
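At inference time the unified space is used like any CLIP-style space; a schematic retrieval helper is given below, with the encoder and projector arguments standing in for the frozen CLAP/ULIP encoders and the learned projectors (the names are placeholders of ours).

```python
import torch
import torch.nn.functional as F

def embed_audio(audio, clap_audio_encoder, f_l_A, f_m_A):
    """Map raw audio into the base-MCR (CLIP) space via the learned CLAP projectors."""
    a = clap_audio_encoder(audio)                 # frozen CLAP audio feature
    return F.normalize(f_m_A(f_l_A(a)), dim=-1)

def retrieve_topk(query_emb, gallery_emb, k=5):
    """Cosine-similarity retrieval between normalized query and gallery embeddings."""
    sims = query_emb @ gallery_emb.t()
    return sims.topk(k, dim=-1).indices
```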
## 4 Experiment
### Experimental Setting
**Datasets.** For a fair comparison, we use the same unimodal datasets as C-MCR (Wang et al., 2023) for training, totaling 2.3M texts, 1.3M images, 1.8M audio clips, and 0.8M 3D point clouds. More details about the training datasets are provided in the Appendix.
**Implementation Details.** We employ pre-trained frozen CLIP ViT-B/32 (Radford et al., 2021), CLAP (Wu et al., 2023), and ULIP v2 (PointBERT version) (Xue et al., 2023) models. The temperature \(\tau_{1}\) in Eqs. 1 and 2 for embedding aggregation is set to 0.01 following Wang et al. (2023), while \(\tau_{2}\) in Eq. 6 for the InfoNCE loss is set to 0.05. The hyper-parameter \(\lambda\) in Eq. 7 is set to 0.1. Following Wang et al. (2023), we also add Gaussian noise with a variance of 0.004 to the semantically consistent embeddings described in Sec. 3.2.1. The linear projector \(f_{l}(\cdot)\) is a simple linear layer, and the MLP projector \(f_{m}(\cdot)\) is a 2-layer MLP. We train our model with a batch size of 4096 for 36 epochs, using the AdamW optimizer with an initial learning rate of 1e-3 and a cosine learning rate decay strategy.
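For reference, the reported hyper-parameters can be collected in one place; the optimizer and scheduler construction below is a sketch around a stand-in module, since the full training loop is not shown in the paper.

```python
import torch
import torch.nn as nn

cfg = dict(tau_agg=0.01, tau_nce=0.05, lam=0.1, noise_var=0.004,
           batch_size=4096, epochs=36, lr=1e-3)

projector = nn.Linear(512, 512)  # stand-in for the trainable leaf-to-base projector
optimizer = torch.optim.AdamW(projector.parameters(), lr=cfg["lr"])
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=cfg["epochs"])
```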
### Audio-Visual-Text Experiment
**Downstream tasks.** We employ zero-shot audio-image, audio-text, and image-text retrieval tasks to evaluate the audio-image-text representations obtained by extending CLAP to CLIP with Ex-MCR. For audio-image retrieval, we conduct evaluations on the Flickr-SoundNet (Senocak et al., 2018), VGGSS (Chen et al., 2021), and AVE (Tian et al., 2018) datasets. Due to their small sizes, we utilize all their available data, comprising 5,000, 5,000, and 4,097 samples. For audio-text retrieval, we utilize the validation set of the AudioCaps (Kim et al., 2019) dataset, which includes 964 audio samples, and for each audio we choose one corresponding caption for retrieval. Regarding image-text retrieval, we employ the validation set of the COCO (Lin et al., 2014) dataset, consisting of 5,000 images, each accompanied by text captions; we randomly select one text annotation for each image as the ground truth. We calculate the cosine similarity between modalities in the representation space and use mAP and Top-5 metrics for performance comparison.
**Performance Comparison.** Tab. 1 compares Ex-MCR with WAV2CLIP, AudioCLIP, and C-MCR. Notably, even without using audio-image paired data, Ex-MCR achieves significantly better performance than WAV2CLIP and AudioCLIP, which illustrates that Ex-MCR is a more effective representation learning method when high-quality data pairs are limited. Furthermore, compared to C-MCR, Ex-MCR not only attains better audio-image alignment but also inherits more audio-text alignment from CLAP, while fully preserving the image-text alignment of CLIP, which demonstrates the overall superiority of Ex-MCR over C-MCR in establishing new spaces and maintaining original ones. In summary, extending CLAP to CLIP with our Ex-MCR method yields state-of-the-art audio-image-text unified representations.
### 3D-Visual-Text Results
**Downstream tasks.** To evaluate the 3D-image-text space learned by extending ULIP to CLIP, we conduct a zero-shot 3D object classification task to assess the alignment between 3D and text. We also perform zero-shot 3D-image and image-text retrieval tasks to evaluate the alignment between 3D and image, as well as between image and text. The zero-shot 3D object classification task is carried out on the ModelNet40 (Wu et al., 2015) validation set, comprising 2468 sample pairs across 40 different classes. We embed each class label into 64 prompt templates, then extract and average the features to obtain the corresponding text embeddings, following Xue et al. (2023b). Regarding the zero-shot 3D-image retrieval task, we use the Objaverse-LVIS dataset (Deitke et al., 2023), which includes 46,054 3D objects. For each 3D object, ULIP v2 pro
\begin{table}
\begin{tabular}{l|c c c c c|c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Audio-Image} & \multicolumn{4}{c|}{Audio-Text} & \multicolumn{2}{c}{Image-Text} \\ & \multicolumn{2}{c}{FlickrNet} & \multicolumn{2}{c}{AVE} & \multicolumn{2}{c|}{VGGSS} & \multicolumn{2}{c|}{AudioCaps} & \multicolumn{2}{c}{COCO} \\ \hline & mAP & R@5 & mAP & R@5 & mAP & R@5 & mAP & R@5 & mAP & R@5 \\ CLAP & - & - & - & - & - & - & - & - & - & - \\ AudioCLIP & 3.81 & 4.91 & 2.33 & 2.65 & 3.10 & 3.94 & 2.23 & 2.68 & 20.14 & 27.42 \\ WAV2CLIP & 2.77 & 3.41 & 3.48 & 4.23 & 7.42 & 10.47 & 0.88 & 0.99 & 44.57 & 57.62 \\ C-MCR & 4.74 & **5.97** & 4.21 & 4.91 & 5.95 & 7.69 & 9.50 & 13.62 & 24.56 & 33.83 \\ Ex-MCR & **4.94** & 5.95 & **4.46** & **4.93** & **6.39** & **8.12** & **11.19** & **16.65** & **44.57** & **57.62** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of audio-visual-text experiments. The best results are **bolded**.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{3D-Text} & \multicolumn{4}{c|}{3D-Image} & \multicolumn{4}{c}{Image-Text} \\ & \multicolumn{2}{c}{ModelNet40} & \multicolumn{4}{c|}{Objaverse-LVIS} & \multicolumn{4}{c}{COCO} \\ \hline & Acc@1 & Acc@3 & Acc@5 & mAP & R@1 & R@5 & mAP & R@1 & R@5 \\ CLIP & & & - & - & - & - & - & - & - \\ ULIP & 60.40 & 79.00 & 84.40 & 3.54 & 1.45 & 4.51 & 34.42 & 22.92 & 46.33 \\ ULIP v2 & **73.06** & 86.39 & 91.50 & **11.41** & **6.00** & **15.63** & 34.42 & 22.92 & 46.33 \\ C-MCR & 64.90 & 87.00 & 92.80 & 3.84 & 1.36 & 4.80 & 24.23 & 14.34 & 33.19 \\ Ex-MCR & 66.53 & **87.88** & **93.60** & 6.23 & 2.54 & 8.25 & **44.57** & **32.58** & **57.62** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of 3d-visual-text experiments.
vides 12 rendered images from different perspectives, and we randomly selected one as the paired image. Additionally, we continued to use the COCO dataset's validation set for zero-shot image-text retrieval.
**Performance Comparison.** From Tab. 2, we can draw the following key points. Firstly, even without using any 3D-text data, Ex-MCR still outperforms the advanced models (ULIP and ULIP v2) trained on 3D-text pairs in most performance metrics for 3D object classification. Secondly, the 3D-image retrieval accuracy of Ex-MCR is significantly higher than that of ULIP and C-MCR but lower than that of ULIP v2. Since the 3D-image space of ULIP v2 is treated as the leaf-MCR, it is reasonable that the 3D-image performance of Ex-MCR is slightly lower than that of ULIP v2. At the same time, the better 3D-image retrieval accuracy compared to ULIP and C-MCR shows that Ex-MCR effectively learns strong 3D-image alignment. Finally, Ex-MCR retains the best image-text retrieval accuracy compared to these previous state-of-the-art models. The leading performance on all these tasks further demonstrates the superiority of Ex-MCR in unified contrastive representation learning.
### Emergent 3D-Audio Alignment
In this section, we study whether semantic alignment also emerges between the extended modalities (e.g., audio and 3D). We mutually retrieve audio in AudioCaps and 3D objects in Objaverse. In Figs. 2 and 3, we provide visualizations of some top-5 retrieval results, where audios are described by their corresponding caption annotations. These cases effectively demonstrate the emergent audio-3D semantic alignment in the Ex-MCR space. For example, the sound of a flushing toilet and flowing water can retrieve 3D objects of toilets or sinks, while a sailboat 3D object can retrieve clips containing sounds of water vessels and wind. More results and the original audio files are provided in our supplementary material.
These exciting results demonstrate that extending ULIP and CLAP onto CLIP following our Ex-MCR methods derives a 3D-vision-text-audio unified contrastive representation space. In addition to the state-of-the-art performance on all possible tasks, Ex-MCR is an extremely training-efficient and paired-data-free representation learning method, which amplifies its application value in unified multi-modal representation learning.
Figure 3: Visualization of 3D to Audio retrieval.
Figure 2: Visualization of Audio to 3D retrieval.
### Ablation Studies
In this section, we analyze the main components of Ex-MCR. All experiments are conducted on extending CLAP to CLIP, and we reported the average mAP of audio-visual and audio-text retrieval on AVE and AudioCaps datasets, respectively. In addition, we also provide results on more datasets and evaluation metrics in the Appendix.
**Various modality-centric data.** As described in Sec. 3.2.1, we employ various modality-centric data to train our projectors. To investigate the effect of different modality-centric data, we ablate each kind of modality-centric data, and the results are reported in Tab. 3. Each kind of data is beneficial for audio-visual and audio-text alignment, and using all kinds of data simultaneously brings the best performance.
**Dense alignment objective.** To analyze the impact of different alignment objectives, we train the model with each alignment objective separately. From the results reported in Tab. 4, we find that aligning the overlapping modality (text) is most important. In terms of audio-visual alignment, directly aligning the pseudo audio-visual pairs shows sub-optimal performance, which indicates that the pseudo-data aggregation process is biased and noisy and that the pseudo audio-visual pairs are most severely affected by noise. On the other hand, for audio-text alignment, the model using the text-text alignment objective surpasses the one directly aligning pseudo audio-text pairs, which further demonstrates the importance of alignment learned from the overlapping modality.
**Structure of \(f_{l}(\cdot)\).** Tab. 5 demonstrates the impact of different structures of \(f_{l}(\cdot)\). The results support our hypothesis: the representation structures of different modalities within one MCR space are similar, and a simple linear layer is enough to bridge the modality gap. Moreover, an MLP with an activation layer introduces non-linearity, which may disrupt the representation's spatial structure, bringing sub-optimal performance.
**Structure of \(f_{m}(\cdot)\).** The impact of the structure of \(f_{m}(\cdot)\) is summarized in Tab. 6. For aligning to a distinct MCR space, the non-linear MLP structure is better than a simple linear layer. Besides, our experiments show that a 2-layer MLP is sufficient, and more layers do not bring further performance improvement.
## 5 Conclusion
This paper proposes **E**xtending **M**ulti-modal **C**ontrastive **R**epresentations (Ex-MCR), a novel training-efficient and paired-data-free unified contrastive representation learning method for more than three modalities. Ex-MCR effectively integrates the knowledge in pre-trained MCRs through the overlapping modalities between these MCRs. By extending ULIP and CLAP onto CLIP via the
overlapping image and text modality, respectively, we derive unified and high-quality audio-image-text-3D representations. Without using any paired data, Ex-MCR attains a series of state-of-the-art performance results across various tasks. More importantly, semantic alignment is also observed between extended modalities (e.g., audio-3D), which highlights the potential of Ex-MCR in modality extensibility.
|
2306.04780 | A nonequilibrium system on a restricted scale-free network | The nonequilibrium Ising model on a restricted scale-free network has been
studied with one- and two-spin flip competing dynamics employing Monte Carlo
simulations. The dynamics present in the system can be defined by the
probability $q$ in which the one-spin flip process simulate the contact with a
heat bath at a given temperature $T$, and with a probability ($1-q$) the
two-spin flip process mimics the system subjected to an external flux of energy
into it. The system network is described by a power-law degree distribution in
the form $P(k)\sim k^{-\alpha}$, and the restriction is made by fixing the
maximum, $k_{m}$, and minimum, $k_{0}$, degree on distribution for the whole
network size. This restriction keeps finite the second and fourth moment of
degree distribution, allowing us to obtain a finite critical point for any
value of $\alpha$. For these critical points, we have calculated the
thermodynamic quantities of the system, such as, the total ${m}_{N}^{F}$ and
staggered ${m}_{N}^{AF}$ magnetizations per spin, susceptibility $\chi_{N}$,
and reduced fourth-order Binder cumulant ${U}_{N}$, for several values of
lattice size $N$ and exponent $1\le\alpha\le5$. Therefore, the phase diagram
was built and a self-organization phenomena is observed from the transitions
between antiferromagnetic AF to paramagnetic P, and P to ferromagnetic F
phases. Using the finite-size scaling theory, we also obtained the critical
exponents for the system, and a mean-field critical behavior is observed,
exhibiting the same universality class of the system on the equilibrium and out
of it. | R. A. Dumer, M. Godoy | 2023-06-07T21:01:04Z | http://arxiv.org/abs/2306.04780v1 | # A nonequilibrium system on a restricted scale-free network
###### Abstract
The nonequilibrium Ising model on a restricted scale-free network has been studied with one- and two-spin flip competing dynamics employing Monte Carlo simulations. The dynamics present in the system can be defined by the probability \(q\) in which the one-spin flip process simulate the contact with a heat bath at a given temperature \(T\), and with a probability \((1-q)\) the two-spin flip process mimics the system subjected to an external flux of energy into it. The system network is described by a power-law degree distribution in the form \(P(k)\sim k^{-\alpha}\), and the restriction is made by fixing the maximum, \(k_{m}\), and minimum, \(k_{0}\), degree on distribution for the whole network size. This restriction keeps finite the second and fourth moment of degree distribution, allowing us to obtain a finite critical point for any value of \(\alpha\). For these critical points, we have calculated the thermodynamic quantities of the system, such as, the total \(\mathrm{m}_{\mathrm{N}}^{\mathrm{F}}\) and staggered \(\mathrm{m}_{\mathrm{N}}^{\mathrm{AF}}\) magnetizations per spin, susceptibility \(\chi_{\mathrm{N}}\), and reduced fourth-order Binder cumulant \(\mathrm{U}_{\mathrm{N}}\), for several values of lattice size \(N\) and exponent \(1\leq\alpha\leq 5\). Therefore, the phase diagram was built and a self-organization phenomena is observed from the transitions between antiferromagnetic \(AF\) to paramagnetic \(P\), and \(P\) to ferromagnetic \(F\) phases. Using the finite-size scaling theory, we also obtained the critical exponents for the system, and a mean-field critical behavior is observed, exhibiting the same universality class of the system on the equilibrium and out of it.
## I Introduction
The dynamic evolution of equilibrium systems is related to the fact that the transition rates of its states obey the principle of microscopic reversibility. Otherwise, without the advanced tooling as proposed by Gibbs in the equilibrium scene [1], nonequilibrium systems have aroused the interest of researchers in finding out phase transitions with the particularities of continuous phase transitions of reversible systems. One kind of the nonequilibrium system is those subjected to two dynamics in competition [2; 3]. These systems are described by a master equation that involves the sum of the operators on each present process and generally each of these processes separately obeys the principle of microscopic reversibility. However, the combination of these processes may not satisfy the detailed balance and the system will be forced out of equilibrium.
In the last decades, the computerization of data acquisition on large networks, make raised the possibility of understanding the dynamical and topological stability of its networks. From that databases, the result is that large networks that span fields as diverse as the World Wide Web (WWW) or actors that have acted in a movie together, self-organize into a scale-free state [4; 5]. This means that independent of the system and its constituents, the probability \(P(k)\) that a vertex interacts with \(k\) other vertices in the network, decay as a power law, i.e., \(P(k)\sim k^{-\alpha}\). Barabasi and Albert [5] incorporating growth and preferential attachment on its network model, were able to obtain this scale invariance, not present in the previous random [6] and small-world networks [7]. These models and their interesting ability to describe real networks instigated the curiosity of researchers to know what would be the behavior of physical systems in complex networks [8; 9; 10; 11; 12]. Among these, we can highlight the simple but powerful Ising model, comprising both exact [13; 14] and computational [15; 16; 17; 18] or approximate [19; 20; 21] results for the critical behavior on arbitrary networks.
In the same way, the study of nonequilibrium physical systems has been spreading and continuous phase transitions, characteristic of equilibrium systems is observed [22; 23]. Moreover, the same critical exponents have been obtained in reversible and irreversible systems, that is, they belong to the same universality class, acting as proof of what was conjectured by Grinstein _et al._[24], in which says that any nonequilibrium stochastic spin system with spin-flip dynamics and up-down symmetry belongs to the same universality class. The Ising model with complex networks is already being studied with competing dynamics, analytically in 1D [25], by Monte Carlo simulations in 2D [26], and by Gaussian model in 3D [27]. However, these studies were made only for small-world networks, and by Monte Carlo simulations a mean-field critical behavior is obtained, characteristic of equilibrium systems with random interactions and convergent fourth moment of its network degree distribution [13; 14; 15; 17]. Another interesting feature of that nonequilibrium systems is the self-organization phenomena between antiferromagnetic \(AF\) to paramagnetic \(P\), and \(P\) to ferromagnetic \(F\) phase transitions, as a function of competition parameter [2; 3; 22; 23].
With this in mind, in the present work, we have investigated the Ising model on a restricted scale-free network, where each site of the network is occupied by a spin variable that can assume values \(\pm 1\). Divided into two sublattices, the connections between them in the network are made by the site interactions, and the degree distribution
of the network obey a power-law distribution, with fixed values of minimum and maximum degree. The system is in a nonequilibrium regime by competing between two reagent dynamic processes that do not conserve the order parameter: with competition probability \(q\), the one-spin flip process simulates the system in contact with a heat bath at temperature \(T\), and with probability \(1-q\), the two-spin flip process mimics the system subjected to an external flux into it. Thus, here we have investigated the phase transitions of the system and verified if the phase diagrams present the same topology of systems with these same dynamics [23; 26], and in addition, the critical exponents carrying the universality class of the system, is compared with previous works at equilibrium system [16].
This article is organized as follows: In Section II, we describe the network used and the Hamiltonian model of the system. In Section III, we present the Monte Carlo simulation method, some details concerning the simulation procedures, and the thermodynamic quantities of the system, also necessary for the application of FSS analysis. The behavior of thermodynamic quantities, phase diagrams, and critical exponents are described in Section IV. Finally, in Section V, we present our conclusions.
## II Model
The Ising model studied in this work has \(N\) spins \(\sigma_{i}=\pm 1\) on a restricted scale-free network and ferromagnetic interaction of strength \(J_{ij}\). The degree distribution on the network follows the power-law \(P(k)\sim k^{-\alpha}\) and to distribute the connections between the sites, we have used the same procedures shown in the paper [16]. In order to construct a scale-free network with always convergent second and fourth moments on its degree distribution and arbitrary value of \(\alpha\). For that, we first define minimum \(k_{0}\) and maximum \(k_{m}\) degree, and the exponent \(\alpha\) of the distribution. The next procedure is to calculate the normalization constant of the distribution, \(A=\sum_{k=k_{0}}^{k_{m}}k^{\alpha}\), and found the smaller network size that we can use and guarantee the degree distribution, \(N_{0}=k_{m}^{\alpha}/A\). With these values, we create a set of site numbers, \(\{N_{k}\}\), and that will have the respective degrees \(k\), where \(N_{k}=AN/k^{\alpha}\). On that distribution of connections, we have divided the network into two sublattices, where one sublattice plays the role of central spins, while the other sublattice contains the spins in which the central spins can connect. Thus, starting with the lowest degree \(k_{0}\), connections of each \(N_{k_{0}}\) sites are randomly created connecting the two sublattices, and it was made until reach degree \(k_{m}\) and the whole set \(\{N_{k}\}\) will be visited. An example of that construction can be seen in Fig. 1 which was chosen \(\alpha=3\), \(k_{0}=2\), \(k_{m}=8\) and \(N=10^{2}\). In Fig. 1, the sites in the middle of the figure are the more connected, while the peripheral sites are the less connected, and sites from the blue sublattice are only connected with sites from the red sublattice.
Based on this construction, in the course of this work, we have selected the integer values of \(1\leq\alpha\leq 5\), \(k_{0}=4\), \(k_{m}=10\), and network size \((32)^{2}\leq N\leq(256)^{2}\) to study the nonequilibrium Ising model. The ferromagnetic Ising spin energy is described by the Hamiltonian on the form
\[\mathcal{H}=-\sum_{\langle i,j\rangle}J_{ij}\sigma_{i}\sigma_{j} \tag{1}\]
where the sum is over all pair of spins, and \(J_{ij}\) is the ferromagnetic interaction, assuming the value of unity if sites \(i\) and \(j\) interact between the sublattices.
In the nonequilibrium system presented here, let \(p(\{\sigma\},t)\) be the probability of finding the system in the state \(\{\sigma\}=\{\sigma_{1},...,\sigma_{i},...,\sigma_{j},...\sigma_{N}\}\) at time \(t\), the motion equation for the probability of states evolves in time according to the master equation
\[\frac{d}{dt}p(\{\sigma\},t)=qG+(1-q)D, \tag{2}\]
where \(qG\) represents the one-spin flip process, relaxing the spins in contact with a heat bath at temperature \(T\), favoring the lowest energy state of the system, and has probability \(q\) to occur. On the other hand, the \((1-q)D\) denotes the two-spin flip process, in which the energy of the system increases by one external flow of energy into it, and has a probability \((1-q)\) to occur. \(G\) and \(D\) are described as follows:
\[G= \sum_{i,\{\sigma^{\prime}\}}\left[W(\sigma_{i}\rightarrow\sigma^{ \prime}_{i})p(\{\sigma\},t)+\right. \tag{3}\] \[\left.-W(\sigma^{\prime}_{i}\rightarrow\sigma_{i})p(\{\sigma^{ \prime}\},t)\right]\qquad,\]
\[D= \sum_{i,j,\{\sigma^{\prime}\}}\left[W(\sigma_{i}\sigma_{j}\rightarrow \sigma^{\prime}_{i}\sigma^{\prime}_{j})p(\{\sigma\},t)+\right. \tag{4}\] \[\left.-W(\sigma^{\prime}_{i}\sigma^{\prime}_{j}\rightarrow\sigma_ {i}\sigma_{j})p(\{\sigma^{\prime}\},t)\right]\qquad,\]
Figure 1: Schematic representation of the restricted scale-free network. Red circles indicate the sites on one of the sublattices, blue circles are the sites on the other sublattice, and the black solid lines are the connections between the two sublattices. The size of the circles is proportional to the degree of sites, varying from \(k_{0}=2\) to \(k_{m}=8\) in the distribution with \(\alpha=3\), and \(N=10^{2}\).
where \(\{\sigma^{\prime}\}\) is the spin configuration after spin flipping, \(W(\sigma_{i}\rightarrow\sigma_{i}^{\prime})\) is the transition rate between the states in the one-spin flip process, and \(W(\sigma_{i}\sigma_{j}\rightarrow\sigma_{i}^{\prime}\sigma_{j}^{\prime})\) the transition rate between the states in the two-spin flip process.
## III Monte Carlo simulations
In the simulation of the system specified by the Hamiltonian in Eq. (1), we always have chosen the initial state of the system with all spin states at random, and a new configuration is generated by the following Markov process: for a given temperature \(T\), competition probability \(q\), distribution exponent \(\alpha\), network size \(N\), and minimum \(k_{0}\) and maximum \(k_{m}\) degree, we choose at random a spin \(\sigma_{i}\) in network, and generate a random number \(\xi\) between zero and one. If \(\xi\leq q\), we choose the one-spin flip process, in which the flipping probability is dependent of \(W(\sigma_{i}\rightarrow\sigma_{i}^{\prime})\) and given by the Metropolis prescription:
\[W(\sigma_{i}\rightarrow\sigma_{i}^{\prime})=\left\{\begin{array}{cc}e^{(- \Delta E_{i}/k_{B}T)}&\mbox{if}&\Delta E_{i}>0\\ 1&\mbox{if}&\Delta E_{i}\leq 0\end{array}\right., \tag{5}\]
where \(\Delta E_{i}\) is the change in energy after flipping the spin, \(\sigma_{i}\rightarrow\sigma_{i}^{\prime}\), \(k_{B}\) is the Boltzmann constant, and \(T\) the temperature of the system. Thus, the acceptance of a new state is guaranteed if \(\Delta E_{i}\leq 0\), but, in the case where \(\Delta E>0\) the acceptance is pondered by the probability \(\exp\left(-\Delta E_{i}/k_{B}T\right)\) and just guaranteed if by choosing a random number, \(0<\xi_{1}<1\), it is \(\xi_{1}\leq\exp\left(-\Delta E_{i}/k_{B}T\right)\). On the other hand, if none of these conditions are satisfied, we do not change the state of the system. Now, if \(\xi>q\) the two-spin flip process is chosen, and in addition to the spin \(\sigma_{i}\) we also randomly choose one of its neighbors \(\sigma_{j}\), and these two spins are flipping simultaneously according to transition rate \(W(\sigma_{i}\sigma_{j}\rightarrow\sigma_{i}^{\prime}\sigma_{j}^{\prime})\) given by
\[W(\sigma_{i}\sigma_{j}\rightarrow\sigma_{i}^{\prime}\sigma_{j}^{\prime})= \left\{\begin{array}{cc}0&\mbox{if}&\Delta E_{ij}\leq 0\\ 1&\mbox{if}&\Delta E_{ij}>0\end{array}\right., \tag{6}\]
where \(\Delta E_{ij}\) is the change in the energy after flipping the spins \(\sigma_{i}\) and \(\sigma_{j}\), and consequently, in this process, the new state is only accepted if \(\Delta E_{ij}>0\).
Repeating the Markov process \(N\) times, we have one Monte Carlo Step (MCS). In our simulations, we have waited for \(10^{4}\) MCS to the system reach the stationary state, in the whole the network sizes and adjustable parameters. In order to calculate the thermal averages of the interest quantities, we used more \(4\times 10^{4}\) MCS, and the average over samples was done using 10 independent samples for any configuration.
The measured thermodynamic quantities in our simulations are: magnetization per spin \(\rm m_{N}^{F}\), staggered magnetization per spin \(\rm m_{N}^{AF}\), magnetic susceptibility \(\rm\chi_{N}\) and reduced fourth-order Binder cumulant \(\rm U_{N}\):
\[\rm m_{N}^{F}=\frac{1}{N}\left[\left\langle\sum_{i=1}^{N}\sigma_{i}\right\rangle \right], \tag{7}\]
\[\rm m_{N}^{AF}=\frac{1}{N}\left[\left\langle\sum_{i=1}^{N}(-1)^{(r+c)}\sigma_ {i}\right\rangle\right], \tag{8}\]
\[\rm\chi_{N}=\frac{N}{k_{B}T}\left[\left\langle m^{2}\right\rangle-\left\langle m \right\rangle^{2}\right], \tag{9}\]
\[\rm U_{N}=1-\frac{\left[\left\langle m^{4}\right\rangle\right]}{3\left[\left \langle m^{2}\right\rangle^{2}\right]}, \tag{10}\]
where \([\ldots]\) representing the average over the samples, and \(\left\langle\ldots\right\rangle\) the thermal average over the MCS in the stationary state. To facilitate the calculation of \(\rm m_{N}^{AF}\), the sites on the network are labeled as if we had a square lattice, \(N=L^{2}\), in this way, \(r\) and \(c\) are the row and column of the site \(i\), respectively. In Eqs. (9) and (10), \(m\) can be used to represent \(\rm m_{N}^{F}\) or \(\rm m_{N}^{AF}\).
In the vicinity of the stationary critical point \(\lambda_{c}\), the Eqs. (7), (8), (9) and (10) obey the following finite-size scaling relations [28]:
\[\mathrm{m_{N}}=N^{-\beta/\nu}m_{0}(N^{1/\nu}\epsilon), \tag{11}\]
\[\chi_{\mathrm{N}}=N^{\gamma/\nu}\chi_{0}(N^{1/\nu}\epsilon), \tag{12}\]
\[\mathrm{U^{\prime}_{N}}=N^{1/\nu}\frac{U^{\prime}_{0}(N^{1/\nu}\epsilon)}{ \lambda_{c}}, \tag{13}\]
where \(\epsilon=(\lambda-\lambda_{c})/\lambda_{c}\) (\(\lambda\) and \(\lambda_{c}\) can be used \(T\) or \(q\)), and \(\beta\), \(\gamma\) and \(\nu\) are the critical exponents related the magnetization, susceptibility and length correlation, respectively. The functions \(m_{0}(N^{1/\nu}\epsilon)\), \(\chi_{0}(N^{1/\nu}\epsilon)\) and \(U_{0}(N^{1/\nu}\epsilon)\) are the scaling functions.
Using the data from simulations for the network sizes \((32)^{2}\leq N\leq(256)^{2}\) in the Eqs. (11), (12) and (13), we have obtained the critical exponents ratio \(\beta/\nu\), \(\gamma/\nu\), and \(\nu^{-1}\) from the slope of the straight lines in the log-log plot of \(\mathrm{m_{N}}(\lambda_{c})\), \(\chi_{\mathrm{N}}(\lambda_{c})\), and \(\mathrm{U^{\prime}_{N}}(\lambda_{c})\) (derivative of \(\mathrm{U_{N}}\)) as a function of \(N\). Besides that, we also used data collapse from scaling functions to estimate the critical exponent values.
## IV Results
In this section, we present and discuss the results of the nonequilibrium Ising model on a restricted scale-free network. For the two dynamic processes, we have an adjustable parameter \(q\) that controls the dynamic competition in the system. If \(0<q<1\), the two dynamic processes have a non-null probability to be chosen and acting in the system, making it irreversible with respect to the temporal evolution of its states. As these processes favor the states of higher and lower energy of the system, with the competition is possible to find stationary states in the \(AF\), \(F\), and \(P\) phases, based on the Hamiltonian of the system, Eq. (1). With this, it is worth noting that to obtain a self-organization phenomenon passing from a \(F\) to \(P\) and from \(P\) to \(AF\) phases, the division of the network into two sublattices is essential, once that for nonfrustrated antiparallelism we must to have well-defined who the central spins are, and to whom they can connect in the network.
Therefore, the first results can be seen in Fig. 2, where we have displayed the thermodynamic quantities obtained with Eqs. (7), (8), (9) and (10). These quantities were calculated as a function of the competition parameter \(q\), in which is verified that for lower values of \(q\) we found an \(AF\) phase, and for higher values of \(q\), an \(F\) phase is observed. These phases are easily explained when we look at the dynamics, once that for lower values of \(q\), the two-spin flip mechanism prevails and this favors the state of high energy in the system, which based on the ferromagnetic Ising model Hamiltonian is the one where the spin states are antiparallel, i. e., \(AF\) phase. This \(AF\) phase is made explicit in Fig. 2(a) with the \(\mathrm{m_{N}^{AF}}\) curves, and with this magnetization is calculated \(\mathrm{U_{N}^{AF}}\) present in Fig. 2(b), and its susceptibility \(\chi_{\mathrm{N}}^{\mathrm{AF}}\) in Fig. 2(c). On the other hand, for higher values of \(q\), the one-spin flip mechanism prevails, and as it favors the states of lower energy in the system, i.e., all spins in the same state, a ordered phase is also observed, \(F\) phase. The quantities
Figure 3: Phase diagrams in the plane temperature \(T\) versus \(q\) for some values of the exponent \(\alpha\) and fixed values of \(k_{0}=4\) and \(k_{m}=10\). On the top, we have the color bar of \(m_{L}\) where the left side represent the staggered magnetization \(\mathrm{m_{N}^{AF}}\) illustrating the \(AF-P\) transition and the right side the magnetization per spin \(\mathrm{m_{N}^{P}}\) illustrating the \(F-P\) transition. The magenta circles are the critical points estimated by the crossing of \(\mathrm{U_{L}}\) curves and the black solid lines are just a guide for the eyes indicating the phase transition lines.
related to this phase is specifically the magnetization \(\rm m_{N}^{F}\) curves in Fig. 2(d), and the \(\rm U_{N}^{F}\) and \(\rm\chi_{N}^{F}\) curves in Figs. 2(e) and 2(f), respectively.
We have used the curves of the fourth-order Binder cumulants for different network sizes to identify the critical points and order phase transition [29, 30, 31, 32]. The intersection point of the \(\rm U_{N}\) curves indicates the phase transition point on a second-order phase transition. With the critical point in hand for several values of adjustable parameters, a phase diagram was built, which can be seen in Fig. 3. Therefore, for these diagrams and later results, we will limit the values \(k_{0}=4\) and \(k_{m}=10\), once we can build all networks with sizes \((32)^{2}\leq N\leq(256)^{2}\), integer exponent \(1\leq\alpha\leq 5\), and compare with others equilibrium [13, 14, 16] and nonequilibrium [23, 26] Ising model results. Fig. 3 presents the phase diagrams of temperature \(T\) as a function of competition parameter \(q\) for some values of \(\alpha\), in which we can see the \(AF\), \(F\), and \(P\) phases.
In these diagrams (Fig. 3), we have illustrated the self-organization phenomena with the transitions between \(AF\) to \(P\) phases, and \(P\) to \(F\) phases. Since the scale is fixed in all the figures, we can also see that when we decrease the value of \(\alpha\), the region of ordered phases, \(F\) and \(AF\), increases. This change in the topology of the diagram is related to the degree distribution, once the lower values of exponent \(\alpha\) mean a high probability of having more connected sites on network, i.e., more sites with a degree \(k_{m}\). Consequently, knowing that more connected sites on the stationary ordered state require more energy to override its interactions, larger are the regions of the ordered phases. Another interesting observation that we can do, is regarding the shape of the regions in the ordered phases. The ferromagnetic phases are driven by the one-spin flip mechanism described by Metropolis prescription, which is very dependent on \(T\), and for high \(T\) we observe the disordered phase \(P\). On the other hand, \(AF\) phases is driven by the two-spin flip mecha
Figure 4: Data collapse of \(\rm m_{L}^{AF}\) (a), \(\rm\chi_{L}^{AF}\) (b), \(\rm m_{L}^{F}\) (c) and \(\rm\chi_{L}^{F}\) (d) for different network sizes as presented in the figures. In (a) and (c) from the left to the right side we have respectively \(\alpha=1,2,3,4\) and \(5\), and \(\rm m_{L}^{AF}\) and \(\rm m_{L}^{F}color\) bars as shown in the figures. In (b) and (d), with \(\epsilon=(q-q_{c})/q_{c}\) color bar and from the top to bottom we have the collapses with \(\alpha=1,2,3,4\) and \(5\), respectively. In these figures, we have changed the positions curves for \(1<\alpha\leq 5\) in order to compare all the collapses obtained. The critical exponents and critical points used here can be seen in Table 1 and Table 2 respectively. Here, we have fixed \(T=1\), \(k_{0}=4\), \(k_{m}=10\).
nism, in which is a simpler process and little influenced by temperature.
All systems belonging to a given universality class share the same set of critical exponents. The critical points can be used to describe the critical behavior in the sense of universality class with the set of critical exponents. Here, we have computed the exponents \(\beta\), \(\gamma\) and \(\nu\), by two methods. The first one is based on the data collapse, in which we use the scaling relations, Eqs. (11) and (12), to obtain the scaling functions of magnetization and susceptibility with its collapsed curves. This is possible because in the proximity of the critical points the scaling relations are independent of network size with the correct critical exponents and critical point of the system [29; 30; 31]. To obtain the critical exponents by this method and using the already estimated critical points, we have plotted the scaling functions \(m_{0}(N^{1/\nu}\epsilon)\) and \(\chi_{0}(N^{1/\nu}\epsilon)\) as a function of \(|\epsilon|N^{1/\nu}\) for different network sizes and in the proximity of critical points. Therefore, for \(\epsilon\to 0\) and adjusting the involved critical exponents, when the curves of different network sizes collapse better into a single curve, these exponents used are considered the critical exponents of the system.
Fig. 4 display the scaling functions \(m_{0}(N^{1/\nu}\epsilon)\) and \(\chi_{0}(N^{1/\nu}\epsilon)\) collapsed in the log-log plot to obtain its asymptotic behavior. In these figures, we have fixed \(T=1\) to obtain the critical exponents of the system both in the \(F-P\) transition and in the \(AF-P\) tran
Figure 6: Average static critical exponents \(\beta\), \(\gamma\), and \(\nu\), obtained form the slope of scaling functions and the data collapse of magnetization and susceptibility curves as a function of \(\alpha\). (a) For \(AF-F\) transition and (b) for the \(F-p\) transition. These exponents were obtained with fixed values of \(T=1\), \(k_{0}=4\) and \(k_{m}=10\).
sition, for all values of \(\alpha\). In Fig. 4(a), we can see the function \(m_{0}(N^{1/\nu}\epsilon)\) in the log-log plot, produced with the collapse of \(\rm m_{N}^{AF}\) curves, and with this was obtained the exponents \(\beta\) and \(\nu\). In the same way, in Fig. 4(b) the scaling function \(\chi_{0}(N^{1/\nu}\epsilon)\) is presented in the log-log plot with \(\chi_{\rm N}\) curves based on staggered magnetization, in which with the best data collapse we have obtained the exponent \(\gamma\), and another estimated value for the \(\nu\) exponent. On the other hand, in the \(F-P\) transition, Figs. 4(c) and 4(d), respectively, contain the log-log plot of the scaling functions based on \(\rm m_{N}^{F}\) and its susceptibility, \(\chi_{N}^{F}\). The asymptotic behavior, away from the critical point of these functions, is predicted to a slope \(\Theta\) related to the obtained critical exponents, once that for the magnetization curves starting from the ordered phase, below from the critical point \(\Theta=\beta\), and above it \(\Theta=\nu/2-\beta\), and for the susceptibility curves we only have \(\Theta=-\gamma\). The critical exponents obtained by this first method are presented in Table 1 and the critical point used for them can be seen in Table 2.
Now, let use a second method to calculate the critical exponents and also using the scaling relations, but, with the log-log plot of \(\rm m_{N}^{AF}\) and \(\rm m_{N}^{F}\) at its respective \(\chi_{N}\) and \(\rm U_{N}\) in the proximity of the critical point as a function of \(N\). The slope on this set of point returns us specific ratios between the critical exponents. Fig. 5(a) shown the points of \(\rm m_{N}^{AF}\) and \(\rm m_{N}^{F}\) in the vicinity of the critical point as a function of network sizes \(N\). With the best fit of these points and its slope based on the scaling relation of Eq. (11), gives us the estimate of the ratio \(-\beta/\nu\). In the same way, but for the susceptibility of these magnetizations, on the vicinity of the critical point as a function of \(N\) in the log-log plot, is presented in Fig. 5(b). The best fit with the points in this figure gives us the slope related to the ratio \(\gamma/\nu\) presented in the scaling relation of Eq. (12). The ratio between these critical exponents is interesting but does not reveal the correct value of the exponents separately. Thus, to solve this, we used the scaling relation in Eq. (13), in which the derivative of \(\rm U_{N}\) in the vicinity of the critical point and different network sizes gives us the ratio \(1/\nu\). This ratio is illustrated in Fig. 5(c) by its log-log plot. All the ratio between the critical exponents obtained on this method can be found in Table 2.
From these two used methods are obtained equivalent exponents. But, we have to pay attention that as we are dealing with random interactions on the network, we do not have a well-defined dimension, and consequently, it was necessary to use scaling relations dependent only of the system size, Eqs. (11), (12) and (13). Therefore, the expected mean-field finite-size scaling exponents due to these equations are \(\beta=1/2\), \(\gamma=1\), and \(\nu=2\)[28]. If compared to the usual Ising model mean-field exponents \(\beta=\widetilde{\nu}=1/2\), and \(\gamma=1\), the only exponent affected by the dimension of the system is the related to the correlation length, \(\nu\), but, can be derived by the relation \(\nu=d_{u}\widetilde{\nu}\), where \(d_{u}\) is the upper critical dimension, that in the Ising model is \(d_{u}=4\). With these information, we have computed \(\widetilde{\nu}\) based on the exponents \(\nu\) obtained here. The critical exponents of the system, obtained by the two methods are compiled in Fig. 6(a) for \(AF-P\) transitions, and in Fig. 6(b) for \(F-P\) transitions, both as a function of \(\alpha\). Comparing these obtained critical exponents with the mean-field critical exponents, we can see that for \(\alpha=1\) is obtained the more accurate mean-field critical exponents, however, as \(\alpha\) increase, the critical exponents are still of mean-field but with a little deviation. This deviation was explained in the work with the equilibrium Ising model on a restricted scale-free network [16], and is due to the increase of degree-degree correlations [33] with the decreasing of more connected sites.
For the sake of curiosity, our network was labeled as a square lattice, which we always use \(N=L^{2}\) sites. Thus, changing \(N\) in Eqs. (11), (12) and (13) by \(L\), we have the scaling relations depending on the dimension of the system, that in our case is two dimensions. With these new scaling relations, we have computed again the critical exponents of the system and we have obtained the same critical exponents of systems upper the Ising model critical dimension, by adding long-range interactions on a regular square lattice [15; 26]. It indicates that with our selected network sizes \(N=L^{2}\), we also could use the scaling relations depending on the dimension of the system to calculate the critical exponents. However, when dealing with complex networks, this dimension possibility is not always available, once that the objective is to model real networks [34; 12; 35]. In this case, Hong _et al._[28] proposed scaling relations for complex networks independent of system dimension, and from them, obtained the set of mean-field finite-size-scaling exponents.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \(\alpha\) & \(\beta_{F}\) & \(\gamma_{F}\) & \(\nu_{F}\,(\rm m_{L})\) & \(\nu_{F}\,(\chi_{L})\) & \(\beta_{AF}\) & \(\gamma_{AF}\) & \(\nu_{AF}\,(\rm m_{L})\) & \(\nu_{AF}\,(\chi_{L})\) \\ \hline \hline
1 & \(0.51\pm 0.04\) & \(0.98\pm 0.03\) & \(1.98\pm 0.03\) & \(2.02\pm 0.04\) & \(0.50\pm 0.03\) & \(1.00\pm 0.02\) & \(2.00\pm 0.04\) & \(2.00\pm 0.03\) \\ \hline
2 & \(0.54\pm 0.04\) & \(0.92\pm 0.03\) & \(2.00\pm 0.05\) & \(2.00\pm 0.04\) & \(0.51\pm 0.04\) & \(0.95\pm 0.03\) & \(2.00\pm 0.05\) & \(2.00\pm 0.03\) \\ \hline
3 & \(0.52\pm 0.04\) & \(0.93\pm 0.05\) & \(2.00\pm 0.02\) & \(1.96\pm 0.02\) & \(0.50\pm 0.04\) & \(1.00\pm 0.04\) & \(2.00\pm 0.03\) & \(2.02\pm 0.04\) \\ \hline
4 & \(0.56\pm 0.03\) & \(0.90\pm 0.03\) & \(1.96\pm 0.03\) & \(2.06\pm 0.04\) & \(0.56\pm 0.03\) & \(0.96\pm 0.04\) & \(2.02\pm 0.04\) & \(2.04\pm 0.03\) \\ \hline
5 & \(0.54\pm 0.04\) & \(0.95\pm 0.04\) & \(1.96\pm 0.04\) & \(2.06\pm 0.03\) & \(0.55\pm 0.04\) & \(0.95\pm 0.03\) & \(2.01\pm 0.02\) & \(1.92\pm 0.03\) \\ \hline \end{tabular}
\end{table}
Table 1: Critical exponents obtained by the data collapse method for several values \(\alpha\). The \(F-P\) transitions are denoted by \(F\) subscript and the \(AF-P\) transitions are denoted by \(AF\) subscript. In these transitions, the data collapse of magnetization and susceptibility curves returns us respectively \(\nu(\rm m_{L})\) and \(\nu(\chi_{L})\) estimates using the \(\nu\) exponent. For these exponents, we have fixed \(T=1\) and \(k_{0}=4\) and \(k_{m}=10\). The collapsed curves can be seen in Fig. 4.
## V Conclusions
Here, we have employed Monte Carlo simulations to study the thermodynamic quantities and the critical behavior of the nonequilibrium Ising model on a restricted scale-free network. By using one- and two-spin flip competing dynamics, we reach the stationary state of the system at the nonequilibrium regime. Fixing the maximum and minimum degree values for the whole network size and by using FSS analysis, we are able to always find a finite critical point even being in a network with power-law degree distribution, since we always have second and fourth convergent moments based on its distribution \(P(k)\). As a result, we have obtained the critical points from the second-order phase transitions based on the intersection of \(\mathrm{U_{N}}\) curves and built a phase diagram of temperature \(T\) as a function of the competition parameter \(q\). In these diagrams, we have verified the self-organization phenomena in the transitions from \(AF\) to \(P\) phases in lower values of \(q\), and from \(P\) to \(F\) phases in higher values of \(q\) and lower \(T\). Because we are dealing with a power-law degree distribution on the network, \(P(k)\sim k^{-\alpha}\), decreasing the value of \(\alpha\), increase the number of more connected sites, and as consequence, also increase the region of the ordered phases in the diagram. Topologies equivalent to these diagrams were also obtained in previous works with the same dynamics, but in different networks and models [23; 26]. Through FSS arguments, we calculated the critical exponents \(\beta\), \(\gamma\), and \(\nu\) for the system, and as a function of \(\alpha\), because we have a restricted scale-free network in which its second and fourth moments of degree distribution are convergent. In this case, we have always found the mean-field critical exponents and a slight deviation from them with the increasing degree-degree correlations. This mean-field behavior follows the predicted and observed critical behavior in other complex networks [13; 14; 15; 16; 17], in addition to being another agreement of what was conjectured by Grinstein _et al._[24], i.e., we obtained the same universality class of the Ising model on a restricted scale-free network both in the equilibrium regime [16] as out of it.
|
2310.19751 | Nanoscale electronic inhomogeneities in 1T-TaS$_2$ | We report a set of scanning tunneling microscopy (STM) and spectroscopy (STS)
experiments studying native defects in CVT grown 1T-TaS$_2$. Six different
sample surfaces from four bulk crystals were investigated. Wide area imaging
reveals a prevalence of nanometer-scale electronic inhomogeneities due to
native defects, with pristine regions interspersed. These inhomogeneities
appear in typical as-grown crystals and coexist with a well-formed commensurate
charge density wave of 1T-TaS$_2$ at low temperatures. Electronic
inhomogeneities show up both as variations in the apparent height in STM and in
the local density of states in STS; the bands can shift by 60 meV and the gap
varies by more than 100 meV across inhomogeneities. These inhomogeneities are
present in similar concentration across large-scale areas of all samples
studied, but do not influence the charge density wave formation on local or
global scales. The commensurate charge density wave exhibits long-range order
and remains locally intact in the presence of these inhomogeneities. | B. Campbell, J. V. Riffle, A. de la Torre, Q. Wang, K. W. Plumb, S. M. Hollen | 2023-10-30T17:19:53Z | http://arxiv.org/abs/2310.19751v1 | # Nanoscale electronic inhomogeneities in 1_t_-TaS\({}_{2}\)
###### Abstract
We report a set of scanning tunneling microscopy (STM) and spectroscopy (STS) experiments studying native defects in CVT grown 1_T_-TaS\({}_{2}\). Six different sample surfaces from four bulk crystals were investigated. Wide area imaging reveals a prevalence of nanometer-scale electronic inhomogeneities due to native defects, with pristine regions interspersed. These inhomogeneities appear in typical as-grown crystals and coexist with a well-formed commensurate charge density wave of 1_T_-TaS\({}_{2}\) at low temperatures. Electronic inhomogeneities show up both as variations in the apparent height in STM and in the local density of states in STS; the bands can shift by 60 meV and the gap varies by more than 100 meV across inhomogeneities. These inhomogeneities are present in similar concentration across large-scale areas of all samples studied, but do not influence the charge density wave formation on local or global scales. The commensurate charge density wave exhibits long-range order and remains locally intact in the presence of these inhomogeneities.
## I Introduction
Lattice defects can greatly affect the structural, optical, and electronic properties of materials and have an increased impact in the 2D limit, where much materials and device development is now focused. Of the dichalcogenides, 1_T_-TaS\({}_{2}\) is an exciting material because of its rich phase diagram, complex examples of unusual phenomena, and potential applications in memory and ultrafast switching devices [1; 2; 3; 4; 5; 6]. The many reports describing observations of and theoretical explanations for unusual behavior in 1_T_-TaS\({}_{2}\)-including Mott insulation, quantum spin liquid behavior[7], memristive switching[3; 6], hidden and metastable phases[8; 9], and topological and chiral charge density waves[10; 11]-demonstrate its complexity and the level of interest from the condensed matter community. An important but often overlooked point in this discussion is the impact of native defects on the properties of 1_T_-TaS\({}_{2}\). Induced defects were shown to suppress the commensurate CDW phase and insulating ground state and induce superconductivity at 2.5 K in one example [12], and in another, K dopants did not affect the CDW order, but did induce metallicity[13]. On the other hand, intrinsic defects have gained limited attention. A very recent study reported the local electronic structure of intrinsic defects by STM and characterized 5 distinct defects by local density of states measurements and made partial assignment of these defects using density functional theory calculations [14].
We report a set of low-temperature scanning tunneling microscopy (STM) experiments over large areas of 1_T_-TaS\({}_{2}\) that reveal the existence of nanometer-scale electronic inhomogeneities in addition to the commonly observed commensurate charge density wave (C-CDW). We surveyed 6 different sample surfaces from 4 bulk crystals, and investigated large areas of each sample. The inhomogeneities are observed as variations in the apparent height in STM topographs and as variations in the local density of states (LDOS) measured by scanning tunneling spectroscopy (STS). STS shows that the band edges shift by up to 60 mV across these features and the gap is suppressed near their center. Atomic resolution images support that lattice defects are a source of the electronic inhomogeneity. While the local CDW amplitude is affected by the electronic inhomogeneity, we find that the period and phase of the C-CDW are not modified by the defects or the associated electronic inhomogeneities. Since the bulk features of these samples, including the resistivity versus temperature and the observation of the C-CDW by XRD, are in line with broadly accepted results from the literature, the coexistence of nanoscale inhomogeneities with the C-CDW demonstrates the importance of real space images of 1_T_-TaS\({}_{2}\) and similar materials, and could potentially contribute to understanding their perplexing behavior.
## II Experimental details
Single crystals of 1_T_-TaS\({}_{2}\) were grown from stoichiometric amounts of elemental Ta and S by chemical vapor transport, using iodine as a transport agent. Starting materials were sealed in quartz tubes and heated in a three zone furnace under a 950C \(-\) 850C temperature gradient for 240h, and then quenched in ice water to stabilize the _1T_ phase.
To perform the STM experiments, a bulk TaS\({}_{2}\) crystal was mounted onto a stainless steel STM sample plate using conductive, UHV-safe epoxy, then introduced into the ultrahigh vacuum chamber. Using a cleaving screw and carbon tape, the surface of the TaS\({}_{2}\) crystal was cleaved at room temperature and \(\sim\)10\({}^{-10}\) torr. The sample was then studied using our RHK Technology PanScan Freedom, closed-cycle STM, with an operating temperature of 10 K. Pt-Ir cut tips were used in all the experiments reported here. Images were analyzed with WSxM[15] and
custom python codes available on github [16]. Determination of the band gap for each STS spectrum was done by identifying at least 6 contiguous dI/dV measurements that were within 0.25 pS of zero. The gap width was then defined as the voltage range of this contiguous set, and the gap center was defined as the average of the end points of this contiguous set. The tolerance of 0.25 pS was fine-tuned by visual inspection of the resulting gaps plotted over their respective LDOS curve.
## II Experimental results
We surveyed six crystal surfaces across four different bulk crystals of \(1\,T\)-TaS\({}_{2}\), all grown by a standard chemical vapor transport method (see methods) and cleaved in ultrahigh vacuum at room temperature. Bulk resistivity measurements of as-grown crystals match expectations from literature and temperature dependent x-ray diffraction reveals well formed charge-density waves exhibiting a nearly-commensurate to commensurate transition at 190 K and significant thermal hysteresis consistent with literature.[17; 18] When surveying large area regions, we found significant populations of defects (approximate density of 300 per million atoms) and signatures of electronic disorder, with pristine regions interspersed, using STM and STS at 10 K. Figure 1 presents images of two different samples from these experiments. Figure 1a shows an STM topographic image of a pristine region of a \(1\,T\)-TaS\({}_{2}\) sample with uniform contrast while Fig. 1c shows a similar image from a region with nanometer-scale inhomogeneities. Figure 1a is more similar to typically reported STM images. It shows bright features in a triangular lattice with a periodicity of 1.2 nm arising from the commensurate charge density wave (C-CDW), and it shows the atomic lattice of the surface S atoms (see overlay in inset). The C-CDW in STM topographs is primarily an electronic feature with an apparent height that corresponds to the integrated local density of states between the tip-sample bias and the Fermi energy. It is commensurate with the atomic lattice, forming a \(\sqrt{13}\times\sqrt{13}\) superstructure rotated \(13.9^{\circ}\) from the lattice vectors. [17; 18; 19] In order to extract the C-CDW wavevector, we compute the fast-Fourier transform (FFT) of the topographic image (Fig. 1b). By comparing the magnitude and direction of the \(q\) vector for the CDW (inner peaks, red arrow) and the atomic lattice (outer peaks, blue arrow), we extract a C-CDW wavevector that is consistent with the expected values for the C-CDW.
Figures 1c and 1d show the same type of data as Fig. 1 a and b, but for a region with inhomogeneities. The C-CDW and atomic lattice are both resolved and the FFT compares well with 1b, showing both the atomic lattice and the CDW. There are differences between these images that can be attributed to the tip termination. For example, the atomic lattice in Fig. 1c is clearly resolved, but the charge density wave is much more subtle than in 1a (see star of David in Fig. 1c inset). The background contrast in 1c is not uniform, and this nonuniformity is present for a wide range of imaging parameters. There is a dark region left of center, roughly 5 nm across, and another in the upper right corner. These features create a diffuse signal centered on zero in the FFT ( 1d). Supplemental Figure 1 shows a representative set of images that all show a clear C-CDW with an inhomogeneous background from 4 different surfaces from 3 different bulk crystals. Figure 1a, is a region of Supplemental Fig. 1a that is free of defects.
Large area images, like those in Fig. 2, show that inhomogeneities are distributed over the entire surface. These nanometer-scale inhomogeneities coexist with the C-CDW across the same spatial regions as clearly shown in the \(70\times 70\) nm\({}^{2}\) image in Fig. 2. The FFT of this region exhibits sharp peaks with a sixfold symmetry, indicating the triangular lattice of the C-CDW is well-ordered and single-phased. A linecut in Fig. 2c reveals both the 1.2 nm periodicity of the C-CDW and the non-periodic modulations in amplitude that are due to the 5-10 nm-diameter inhomogeneities in the apparent height. Fig. 2b shows the same area after applying a
Figure 1: STM topographs of the \(1\,T\)-TaS\({}_{2}\) surface at \(T=10\) K after cleaving in ultrahigh vacuum. **a, c)**. Topography showing the C-CDW and atomic resolution ((a) 500 mV, 150 pA; (b) 550 mV, 145 pA). Insets: magnified view showing resolution of S atoms with lattice overlay (red dots) and star-of-David C-CDW pattern (dashed white triangles). **b, d)** fast-Fourier transform (FFT) of images in a, c. Blue vector corresponds to \(q_{\text{lattice}}\) and red vector to \(q_{\text{CDW}}\). \(|q_{\text{CDW}}|=0.853\pm 0.03\) nm\({}^{-1}\) (\(0.27\pm 0.12\) rlu) for (b), \(|q_{\text{CDW}}|=0.912\pm 0.11\) nm\({}^{-1}\) (\(0.31\pm 0.25\) rlu) for (d).
cut-off filter to remove the C-CDW peaks and emphasize the spatial distribution and variations among the inhomogeneities. The inhomogeneities are centered around regions of low apparent height, which for these imaging parameters varies by 0.2 nm on average.
To gain more insight into the source of these inhomogeneities, we examine atomically resolved images, like those in Figure 3. Here we see the atomic lattice appears to be disrupted near several bright and dark features, as indicated by red arrows in the figure. Based on these and similar images, we assign the source of the inhomogeneities to lattice defects, most likely vacancies and substitutions. There are also regions in Fig. 3 that show contrast in the apparent height, but no obvious disruption to the surface atoms (dashed yellow circles). These features appear larger by a factor of \(\approx 1.5-2\) across and are less pronounced than those associated with surface defects. Larger scale images (Fig. 2) also show nanometer-scale features with a range of apparent heights and lateral sizes. These observations are all consistent with defects imaged in multiple crystalline layers below the surface, a common occurrence in STM imaging of semiconductors [20; 21; 22]. Thus, defects that cause the nanoscale electronic inhomogeneities are not limited to the surface layer (so are not a product of the cleave), and are likely evenly distributed throughout the bulk crystal.
The defects have a strong impact on the local electronic structure of the crystal, which we illuminate with measurements of the local density of states (LDOS). First, by comparing topography at negative bias (filled states, Fig. 4a) and positive bias (empty states, Fig. 4c) we observe a strong voltage dependence of the apparent height, which indicates significant variation in the LDOS. We directly probed the LDOS using both dI/dV spatial mapping at set biases (Fig. 4b, d) and location-dependent spectroscopy (Fig. 4e, f). The spatial features of the LDOS in Fig. 4b, d mirror those in the topography, Fig. 4a, c. At the chosen positive bias, the LDOS variations are weaker and more localized (Fig. 4d) than those at the chosen negative bias (Fig. 4b).
dI/dV point and line spectroscopy in Fig. 4e and f show the typical shape of the LDOS on \(1\,T\)-TaS\({}_{2}\) in which a \(\sim\)300 mV gap is bounded by two peaks, which are usually assigned to the upper and lower Hubbard bands of the Mott gap [23; 24]. Here, we observe spatial shifts in the energy of the peaks and the gap edges of up to \(\sim\) 60 mV. The relative contrast of the LDOS in Fig. 4b and d is also made clear since -300 mV (bottom dashed line in 4f) cuts across the strong variations in the lower Hubbard band while +300 mV corresponds to an energy deeper in the conduction band, beyond the upper Hubbard band. Notably, the gap is suppressed over the defect in a and c (Fig. 4e). The combination of LDOS maps in Figs. 4b, d and the line spectra in Fig. 4e and f show that defects in the TaS\({}_{2}\) create strong electronic inhomogeneities, causing \(\sim 60\) mV variations in the local doping and strong disruptions to the local electronic structure. This observation is reminiscent of behavior in doped Mott insulators, including iridates [25] and cuprates [26; 27]).
These strong local disruptions result in nanoscale inhomogeneities in the electronic structure over the entire crystal, as shown over a large region in Fig. 5. A \(16\times 16\) grid of dI/dV point spectroscopy indicate wide variations in both gap width and shifting of the gap center across a \(100\times 100\) nm\({}^{2}\) area. The mean dI/dV spectra is shown in 5b as the solid blue LDOS curve with the standard deviation indicated by the shaded region. The central red, vertical line shows the average gap center, and the red arrows indicate the average gap width. We chose to define the gap width using a threshold above \(dI/dV=0\) (see Methods) so that it is independent of the upper and lower Hubbard peak locations. It should be noted that
Figure 3: STM topographs showing the C-CDW and defective atomic lattice. **a)** -500 mV and 50 pA and **b)** 550 mV and 145 pA. Red arrows indicated regions where the atomic lattice appears disrupted. Dashed yellow circles highlight regions with diffuse apparent height differences, but no lattice disruption.
Figure 2: **a)** Large area STM topograph showing the C-CDW and apparent height variations (300 mV, 65 pA setpoint). Scale bar is 14 nm. Inset: fast-Fourier transform. FFT scale bar is 1.5 nm\({}^{-1}\). **b)** Topograph with C-CDW filtered out to just show the large-scale apparent height modulations. Color bar range is from 0 nm to 0.246 nm. **c)** Line cut over green line in (a) showing the 1.2 nm periodic modulations of the C-CDW and the additional large-scale structure and amplitude variations created by the larger-scale electronic disorder.
this choice does underestimate the gap width compared to the more typical choice that measures the separation of the Hubbard peaks. Fig. 5c shows the spatial variations in gap width over the region in a; the average gap width is 120 mV and the standard deviation of the set of gap widths is 50 mV. The gap center in Fig. 5d also shows large shifts, up to 60 mV positive and negative. Over a large scale, the shifts tend to cancel, leading to an average shift close to zero (as indicated in Fig. 5b).
## IV Discussion
Because of the combined importance of physical and electronic structure, we cannot directly identify the defect types from STM topographs. The defects imaged in Fig. 3 seem to be mostly of the same species, but it is likely that over large areas we are imaging multiple types, including vacancies, substitutions, and possibly intercalants. Recent work by Lutsyk _et al._[14] identified 5 distinct defect types in high resolution STM an STS data, one of which they identified as a S vacancy, which are common in dichalcogenides[28, 11]. In atomic-resolution images, we are most sensitive to the outer S atoms of TaS\({}_{2}\), so this assignment appears to be consistent with data in Fig. 3. However, Lutsyk _et al._ found the sulfur vacancy, identified by DFT, to have a very localized impact on the LDOS, extending no more than a single CDW site. Another defect, not identified in the DFT calculations but speculated to be a foreign atom substitution, was found to have electronic features extending several nanometers, which is more consistent with the large-area surveys we present here. Still this comparison is inconclusive since, in their study, the defect with nanometer-scale electronic imprint still exhibited a clear gap.
Finally, we note that we do not see any signs of strain in our large scale images of these cleaved bulk crystals. Since strain is known to induce CDW domains and even a metallic mosaic phase [29] and topological networks of CDW defects [10], lack of strain is consistent with our single-domain C-CDW observations. When combined with the LDOS measurements, this also makes clear that the nanoscale features are all electronic and do not correspond to bending or wrinkling of the surface. It is quite interesting that the C-CDW, which is known to be very sensitive to interlayer interactions(e.g. [30]) as well as slight lattice strain, is robust to a high density of lattice defects and their resulting electronic disorder. This insensitivity of the CDW to lattice defects is observed in large scale STM images presented here, but also in bulk characterization of 1T-TaS\({}_{2}\): the C-CDW observed by XRD and the insulating character of the low temper
Figure 4: **a, c)** STM topographs and **b, d)** dI/dV maps taken at -300 mV and 300 mV, respectively. Scale bar is 5 nm. **e)** dI/dV spectroscopy taken on and off the defect between the orange dashed lines in a and c. **f)** dI/dV point spectroscopy taken over the white line marked in a and c. Orange, dashed lines denote the boundary of the defect and the x-axis denotes distance along the cut, increasing in the direction of the arrow in a, c. The horizontal, white lines denote slices at \(\pm 300\) mV that correspond to the dI/dV maps in b, d.
ature transport. This tension in the sensitivity of the C-CDW to interlayer interactions and slight strain but not defects deserves further study and could contribute to theory of the impact of disorder on charge density waves-a topic of significant recent interest[31, 32, 33, 34]. These results also make clear that real space measurements are necessary to observe and study these structural and electronic inhomogeneities, which may play an important role in the observations of out-of-equilibrium states and potential applications like neuromorphic computing[6].
## Conclusions
The data presented here show the prevalence and importance of nanoscale inhomogeneities to the local electronic properties of 1 \(T\)-TaS\({}_{2}\) at 10 K. We show evidence that the inhomogeneities originate from lattice defects. These features shift the band edges by up to 60 mV and in some cases the gap is degraded directly over the defect sites. Overall these data contribute a broad picture of electronic inhomogeneities in 1 \(T\)-TaS\({}_{2}\) and CDW materials which will have an important impact on any electronic device applications and provides an example of a C-CDW robust to a high level of disorder.
Work performed at University of New Hampshire by B.C., J.V.R. and S.M.H. was supported by the National Science Foundation OIA 1921199, and a University of New Hampshire COVID recovery award. Work performed at Brown University by A.d.l.T., Q.W. and K.W.P. was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Award Number DOE-SC0021265.
|
2310.09542 | The Treewidth Boundedness Problem for an Inductive Separation Logic of
Relations | The treewidth boundedness problem for a logic asks for the existence of an
upper bound on the treewidth of the models of a given formula in that logic.
This problem is found to be undecidable for first order logic. We consider a
generalization of Separation Logic over relational signatures, interpreted over
standard relational structures, and describe an algorithm for the treewidth
boundedness problem in the context of this logic. | Marius Bozga, Lucas Bueri, Radu Iosif, Florian Zuleger | 2023-10-14T09:32:47Z | http://arxiv.org/abs/2310.09542v2 | # The Treewidth Boundedness Problem for an Inductive Separation Logic of Relations
###### Abstract
The treewidth boundedness problem for a logic asks for the existence of an upper bound on the treewidth of the models of a given formula in that logic. This problem is found to be undecidable for first order logic. We consider a generalization of Separation Logic over relational signatures, interpreted over standard relational structures, and describe an algorithm for the treewidth boundedness problem in the context of this logic.
## 1 Introduction
The treewidth of a graph is a positive integer measuring the distance between the graph and a tree. For instance, trees have treewidth one, series-parallel graphs (i.e., circuits with one input and one output that can be either cascaded or overlaid) have treewidth two, whereas \(n\times n\) square grids have treewidth \(n\), for any \(n\geq 1\). The treewidth is a cornerstone of algorithmic tractability. For instance, many \(\NP\)-complete graph problems such as Hamiltonicity and 3-Coloring become \(\P\), when restricted to inputs whose treewidth is bounded by a constant, see, e.g., [23, Chapter 11].
Structures are interpretations of relation symbols that define the standard semantics of first and second order logic [18]. They provide a unifying framework for reasoning about a multitude of graph types e.g., graphs with multiple edges, labeled graphs, colored graphs, hypergraphs, etc. The notion of treewidth is straightforwardly generalized from graphs to structures. In this context, bounding the treewidth by a constant sets the frontier between the decidability and undecidability of monadic second order (\(\MSO\)) logical theories. A result of Courcelle [16] proves that \(\MSO\) is decidable over bounded treewidth structures, by reduction to the emptiness problem of tree automata. A dual result of Seese [34] proves that each class of structures with a decidable \(\MSO\) theory necessarily has bounded treewidth. Since \(\MSO\) is the yardstick of graph specification logics [17], these results show that _treewidth bounded_ classes of structures are tantamount to the existence of decision procedures for important classes of properties, in those areas of computing where graphs are relevant such as, e.g., static analysis [28], databases [1] and concurrency [19].
This paper considers the _treewidth boundedness problem_ asking for the existence of an upper bound on the treewidths of the models of a given input formula. For first order logic (and implictly \(\MSO\)) the problem is already undecidable (Theorem 1), hence we focus on non-classical substructural logics1. We prove the decidability of this prob
lem for a generalization of Separation Logic to relational signatures, interpreted over structures (Theorem 2).
Separation Logic (SL) [27, 33, 14] is a first order substructural logic with a _separating conjunction_\(*\) that decomposes structures. For reasons related to its applications to the deductive verification of pointer-manipulating programs, the models of SL are finite partial functions, called _heaps_. In this context, the separating conjunction is interpreted as the union of heaps with disjoint domains. SL interpreted over heaps is a powerful tool for reasoning about low-level pointer updates. It allows to describe actions _locally_, i.e., only with respect to the resources (e.g., memory cells, network nodes) involved, while framing out the part of the state that is irrelevant for the action. This principle of describing mutations, known as _local reasoning_[12], is at the heart of scalable compositional proof techniques for pointer programs [11].
The _Separation Logic of Relations_ (SLR) is the generalization of SL to relational signatures, interpreted over structures. This logic was first considered for relational databases and object-oriented languages [30]. Here the separating conjunction splits the interpretation of each relation symbol from the signature into disjoint parts. For instance, the formula \(r(x_{1},\ldots,x_{n})\) describes a structure in which all relations are empty and \(r\) consists of a single tuple of values \(x_{1},\ldots,x_{n}\), whereas \(r(x_{1},\ldots,x_{n})*r(y_{1},\ldots,y_{n})\) says that \(r\) consists of two distinct tuples, i.e., the values of \(x_{i}\) and \(y_{i}\) differ for at least one index \(1\leq i\leq n\). Moreover, when encoding (hyper-)graphs by structures, SLR makes it possible to specify (hyper-)edges that have no connected vertices, isolated vertices, or both. The same style of composition is found in other spatial logics interpreted over graphs, such as the GL logic of Cardelli et al. [14].
Our motivation for studying the models of SLR arose from recent work on deductive verification of self-adapting distributed systems, where Hoare-style local reasoning is applied to write correctness proofs for systems with dynamically reconfigurable network architectures [2, 6, 7]. The assertion language of these proofs is SLR, with unary relation symbols used to model nodes (processes) of the network and relation symbols of arity two or more used to model links (communication channels) between nodes. Just as user-defined inductive predicates are used in SL to describe datastructures (lists, trees, etc.), SLR inductive predicates are used to describe common architectural styles (e.g., pipelines, rings, stars, etc.) that ensure correct and optimal behavior of many distributed applications.
A key ingredient of automated proof generation in Hoare logic is the availability of a decision procedure for the _entailment problem_ \(\llbracket\phi\rrbracket_{\Delta}\subseteq\llbracket\psi\rrbracket_{\Delta}\) asking if each model of a formula \(\phi\) is also a model of another formula \(\psi\), when the predicate symbols in \(\phi\) and \(\psi\) are interpreted by a set of inductive definitions \(\Delta\). In principle, the decidability of this problem depends on (1) \(\phi\) having only treewidth-bounded models, and (2) both \(\phi\) and \(\psi\) being \(\mathsf{MSO}\)-definable [25]. The decidability result from this paper (Theorem 2) defines precisely those formulae of SLR whose models form a treewidth-bounded set.
**Motivating examples** We introduce the reader to SLR and the treewidth boundedness problem by means of examples. Fig. 1 (a) shows a chain \(\mathsf{A}(x_{1},x_{2})\) starting at \(x_{1}\) and ending at \(x_{2}\), whose elements are labeled by a monadic relation symbol \(\mathsf{a}\) and linked by a binary relation \(\mathsf{r}\). Each unfolding of the inductive definition \(\mathsf{A}(x_{1},x_{2})\leftarrow\exists y\,.\ \mathsf{a}(x_{1})*\mathsf{r}(x_{1},y)*\mathsf{A}(y,x_{2})\) instantiates the existential quantifier to an element distinct from the existing ones. This is because every instantiation of an existential quantifier is placed into the '\(\mathsf{a}\)' set and the semantics of the separating conjunction requires that these sets must be disjoint in the models of \(\phi_{1}\) and \(\phi_{2}\) which compose into a model of \(\phi_{1}*\phi_{2}\). Then, any model of \(\exists x\exists y\,.\ \mathsf{A}(x,y)\) is a (possibly cyclic) chain, of treewidth at most two.
Fig. 1 (b) shows a family of models for a slightly modified definition of a chain, given by the recursive rule \(\mathsf{A}(x_{1},x_{2})\leftarrow\exists y\,.\ \mathsf{r}(x_{1},y)*\mathsf{A}(y,x_{2})\), where the instantiations of the existential quantifiers are not placed into a set. In this case, one can fold a sufficiently large chain onto itself, creating a square grid, by using the same element of the structure more than once to instantiate a quantifier. Then, the formula \(\exists x\exists y\,.\ \mathsf{A}(x,y)\) has an infinite set of models containing larger and larger square grid minors, thus having unbounded treewidth.
Since placing every quantifier instance into the same set guarantees treewidth boundedness, as in e.g., Fig. 1 (a), a natural question is what happens when these instances are placed into two (not necessarily disjoint) sets? The inductive definition of the predicate \(\mathsf{A}\) in Fig. 1 (c) creates an unbounded number of disconnected \(r\)-edges whose endpoints are arbitrarily labeled with a and b, respectively. In this case, one can instantiate an a-labeled (resp. b-labeled) variable with a new element or with a previously used b-labeled (resp. a-labeled) element, and build chains (or sets of disconnected chains) of treewidth at most two (see Footnote 4).
Footnote 4: For instance, a simple cycle with more than two elements has treewidth two.
Let us now consider three unary relation symbols \(\mathsf{a}\), \(\mathsf{b}\) and \(\mathsf{c}\) and three types of disconnected \(r\)-edges (according to the labels of their endpoints) created by three recursive definitions of Fig. 1 (d), namely \(\mathsf{a}\)-\(\mathsf{b}\), \(\mathsf{b}\)-\(\mathsf{c}\) and \(\mathsf{a}\)-\(\mathsf{c}\) edges. In this case, the formula \(\mathsf{A}()\), where \(\mathsf{A}\) is a predicate symbol of zero arity, has models with unboundedly large square grid minors, obtained by "glueing" these edges (i.e., instantiating several quantifiers with the same element from different sets). The glued pairs are connected with dotted lines in Fig. 1 (d). Consequently, the models of \(\mathsf{A}()\) form a set of unbounded treewidth.
Figure 1: Examples of Bounded and Unbounded Treewidth Models

These examples highlight the ideas behind an algorithm that decides the existence of a bound on the treewidths of the models of a given formula, with predicates interpreted by a set of inductive definitions. First, one needs to identify the definitions that can iterate any number of times, producing building blocks of unboundedly large grids (modulo edge contractions). Second, these structures must connect elements from different sets, e.g., a, b or c in Fig. 1. A complication is that such sets could be defined not only by monadic relation symbols, but also by \(n\)-ary relation atoms in which all but one variable take the same values in every occurrence. For instance, the variable \(x_{2}\) in Fig. 1 (a) has the same value in an arbitrarily long unfolding of \(\mathsf{A}(x_{1},x_{2})\), and we could have written \(\mathsf{r}(x_{1},x_{2})\) instead of \(\mathsf{a}(x_{1})\) in the first rule, with the same effect, while avoiding the use of 'a' altogether. Last, the interplay between the connectivity and the labeling of the building blocks is important. For instance, in Fig. 1 (d), the building blocks of the grid are structures consisting of six elements, that connect two 'a' with two 'b' elements.
For space reasons, additional technical material relative to Sections 2, 3, 4, 5 and 6 is given in Appendices A, B, C, D and E, respectively.
## 2 The Treewidth Boundedness Problem
This section defines formally the treewidth boundedness problem and introduces most of the technical definitions.
Let \(\mathbb{N}\) be the set of positive integers, zero included, and \(\mathbb{N}_{+}\stackrel{\text{\tiny def}}{=}\mathbb{N}\setminus\{0\}\). Given integers \(i\) and \(j\), we write \([i..j]\) for the set \(\{i,i+1,\ldots,j\}\), assumed to be empty if \(i>j\). For a set \(A\), we denote by \(\operatorname{pow}(A)\) its powerset. The cardinality of a finite set \(A\) is \(\operatorname{card}(A)\). By writing \(S=S_{1}\uplus S_{2}\), we mean that \(S_{1}\) and \(S_{2}\) partition \(S\), i.e., \(S=S_{1}\cup S_{2}\) and \(S_{1}\cap S_{2}=\emptyset\).
Multisets are denoted as \([a,b,\ldots]\) and all set operations are used with multisets as well. In particular, a binary operation involving a set and a multiset considers the set to be a multiset and yields a multiset. The multi-powerset (i.e., the set of multisets) of \(A\) is denoted as \(\operatorname{mpow}(A)\).
For a binary relation \(R\subseteq A\times A\), we denote by \(R^{*}\) its reflexive and transitive closure and by \(R^{=}\) the smallest equivalence relation that contains \(R\), i.e., the reflexive, symmetric and transitive closure of \(R\). For a set \(S\subseteq A\), we denote by \(R|_{S}\) the relation obtained by removing from \(R\) all pairs with an element not in \(S\). A binary relation \(R\subseteq A\times B\) is an _\(A\)-\(B\) matching_ iff \(\{a,b\}\cap\{a^{\prime},b^{\prime}\}=\emptyset\), for all distinct pairs \((a,b),(a^{\prime},b^{\prime})\in R\).
**Structures** Let \(\mathbb{R}\) be a finite and fixed set of _relation symbols_, where \(\#r\geq 1\) denotes the arity of \(r\), for \(r\in\mathbb{R}\). A relation of arity one (resp. two) is called _unary_ (resp. _binary_).
A _structure_ is a pair \(\mathsf{S}=(\mathsf{U},\sigma)\), where \(\mathsf{U}\) is an _infinite_ set called the _universe_ and \(\sigma\colon\mathbb{R}\to\operatorname{pow}(\mathsf{U}^{+})\) is an _interpretation_ mapping each relation symbol \(r\) into a _finite_ subset of \(\mathsf{U}^{\#r}\). We consider only structures with finite interpretations, because \(\mathsf{SLR}\) (defined below) can only describe such structures. The _support_ \(\operatorname{supp}(\sigma)\stackrel{\text{\tiny def}}{=}\{u_{i}\mid\langle u_{1},\ldots,u_{\#r}\rangle\in\sigma(r),\ r\in\mathbb{R},\ i\in[1..\#r]\}\) of an interpretation is the (necessarily finite) set of elements that occur in a tuple from the interpretation of a relation symbol. Two structures \((\mathsf{U}_{1},\sigma_{1})\) and \((\mathsf{U}_{2},\sigma_{2})\) are _locally disjoint_ iff \(\sigma_{1}(r)\cap\sigma_{2}(r)=\emptyset\), for all \(r\in\mathbb{R}\) and _disjoint_ iff \(\operatorname{supp}(\sigma_{1})\cap\operatorname{supp}(\sigma_{2})=\emptyset\). Two structures are _isomorphic_ iff they differ only by a renaming of their elements (a formal definition is given in [20, §A3]).
We consider several operations on structures. The first operation is _composition_, defined as pointwise disjoint union of the interpretations of relation symbols:
Definition 1: The composition of two locally disjoint structures \((\mathsf{U}_{1},\sigma_{1})\) and \((\mathsf{U}_{2},\sigma_{2})\) is \((\mathsf{U}_{1},\sigma_{1})\bullet(\mathsf{U}_{2},\sigma_{2})\stackrel{\text{\tiny def}}{=}(\mathsf{U}_{1}\cup\mathsf{U}_{2},\sigma_{1}\uplus\sigma_{2})\), where \((\sigma_{1}\uplus\sigma_{2})(r)\stackrel{\text{\tiny def}}{=}\sigma_{1}(r)\cup\sigma_{2}(r)\), for all \(r\in\mathbb{R}\).
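The following minimal sketch (ours, not part of the paper) models interpretations as Python dictionaries mapping relation symbols to sets of tuples; `locally_disjoint` and `compose` mirror the side condition and the pointwise union of Definition 1. All names are illustrative.

```python
# Interpretations as dicts: relation symbol -> set of tuples of elements.
from itertools import chain

def support(sigma):
    """Elements occurring in some tuple of some relation (supp in the text)."""
    return set(chain.from_iterable(chain.from_iterable(sigma.values())))

def locally_disjoint(s1, s2):
    """sigma1(r) and sigma2(r) share no tuple, for every relation symbol r."""
    return all(s1.get(r, set()).isdisjoint(s2.get(r, set()))
               for r in set(s1) | set(s2))

def compose(s1, s2):
    """Pointwise disjoint union of interpretations (Definition 1)."""
    assert locally_disjoint(s1, s2), "composition is only defined in this case"
    return {r: s1.get(r, set()) | s2.get(r, set()) for r in set(s1) | set(s2)}

# Two one-edge structures compose into a two-edge chain.
s1 = {"a": {("u1",)}, "r": {("u1", "u2")}}
s2 = {"a": {("u2",)}, "r": {("u2", "u3")}}
print(compose(s1, s2))  # contains both 'a' facts and both 'r' edges
```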
We define two fusion operations, that glue together elements from the same structure (_internal fusion_) or from distinct structures (_external fusion_). Fusion operations are formally defined via quotienting with respect to certain equivalence relations:
Definition 2: Let \(\mathsf{S}=(\mathsf{U},\sigma)\) be a structure and \(\approx\subseteq\mathsf{U}\times\mathsf{U}\) be an equivalence relation, where \([u]_{\approx}\) is the equivalence class of \(u\in\mathsf{U}\). The _quotient_\(\mathsf{S}_{/\approx}=(\mathsf{U}_{/\approx},\sigma_{/\approx})\) is \(\mathsf{U}_{/\approx}\stackrel{{\text{\tiny{\it def}}}}{{=}}\{[u] _{\approx}\mid u\in\mathsf{U}\}\) and \(\sigma_{/\approx}(r)\stackrel{{\text{\tiny{\it def}}}}{{=}}\{ \langle[u_{1}]_{\approx},\ldots,[u_{\#r}]_{\approx}\rangle\mid\langle u_{1}, \ldots,u_{\#r}\rangle\in\sigma(r)\}\), for all \(r\in\mathbb{R}\).
A fusion operation glues elements without losing tuples from the interpretation of a relation symbol. For this reason, we consider only equivalence relations that are _compatible_ with a given structure and define internal fusion as the following unary operation:
Definition 3: An equivalence relation \(\approx\subseteq\mathsf{U}\times\mathsf{U}\) is _compatible_ with a structure \(\mathsf{S}=(\mathsf{U},\sigma)\) iff for all \(r\in\mathbb{R}\) and any two distinct tuples \(\langle u_{1},\ldots,u_{\#r}\rangle,\langle v_{1},\ldots,v_{\#r}\rangle\in\sigma(r)\), there exists \(i\in[1..\#r]\) such that \(u_{i}\not\approx v_{i}\). An _internal fusion_ of \(\mathsf{S}\) is a structure isomorphic to \(\mathsf{S}_{/\approx}\), for an equivalence relation \(\approx\) compatible with \(\mathsf{S}\). Let \(\mathtt{IF}(\mathsf{S})\) be the set of internal fusions of \(\mathsf{S}\) and \(\mathtt{IF}(\mathcal{S})\stackrel{\text{\tiny def}}{=}\bigcup_{\mathsf{S}\in\mathcal{S}}\mathtt{IF}(\mathsf{S})\), for a set \(\mathcal{S}\) of structures.
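Continuing the sketch above (again ours, with illustrative names), quotienting and the compatibility check of Definition 3 can be phrased as follows; an internal fusion is then simply a quotient by a compatible equivalence, here represented by a map from elements to class representatives.

```python
def quotient(sigma, cls):
    """Definition 2: replace every element by (a representative of) its class."""
    return {r: {tuple(cls.get(u, u) for u in t) for t in tuples}
            for r, tuples in sigma.items()}

def compatible(sigma, cls):
    """Definition 3: no two distinct tuples of a relation collapse when quotienting."""
    return all(len(quotient({r: ts}, cls)[r]) == len(ts)
               for r, ts in sigma.items())

def internal_fusion(sigma, cls):
    assert compatible(sigma, cls)
    return quotient(sigma, cls)

# Gluing the endpoints u3 and u1 of a two-edge chain yields a two-element cycle.
s = {"r": {("u1", "u2"), ("u2", "u3")}}
print(internal_fusion(s, {"u3": "u1"}))  # {'r': {('u1', 'u2'), ('u2', 'u1')}}
```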
External fusion is a binary operation that glues elements taken from different structures:
Definition 4: An _external fusion_ of the structures \(\mathsf{S}_{1}=(\mathsf{U}_{1},\sigma_{1})\) and \(\mathsf{S}_{2}=(\mathsf{U}_{2},\sigma_{2})\) is a structure isomorphic to \((\mathsf{S}^{\prime}_{1}\bullet\mathsf{S}^{\prime}_{2})_{/\approx}\), where \(\mathsf{S}^{\prime}_{i}=(\mathsf{U}^{\prime}_{i},\sigma^{\prime}_{i})\) are disjoint isomorphic copies of \(\mathsf{S}_{i}\) and \(\approx\subseteq\mathsf{U}^{\prime}_{1}\times\mathsf{U}^{\prime}_{2}\) is the smallest equivalence relation containing a nonempty \(\operatorname{supp}(\sigma^{\prime}_{1})\text{-}\operatorname{supp}(\sigma^{ \prime}_{2})\) matching that is compatible with \(\mathsf{S}^{\prime}_{1}\bullet\mathsf{S}^{\prime}_{2}\). Let \(\mathtt{EF}(\mathsf{S}_{1},\mathsf{S}_{2})\) be the set of external fusions of \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\). For a set of structures \(\mathcal{S}\), let \(\mathtt{EF}^{*}(\mathcal{S})\) (resp. \(\mathtt{IEF}^{*}(\mathcal{S})\)) be the closure of \(\mathcal{S}\) under taking external (resp. both internal and external) fusions.
**Treewidth** A graph is a pair \(G=(\mathcal{N},\mathcal{E})\), such that \(\mathcal{N}\) is a finite set of _nodes_ and \(\mathcal{E}\subseteq\mathcal{N}\times\mathcal{N}\) is a set of _edges_. A (simple) _path_ in \(G\) is a sequence of (pairwise distinct) nodes \(v_{1},\ldots,v_{n}\), such that \((v_{i},v_{i+1})\in\mathcal{E}\), for all \(i\in[1..n-1]\). We say that \(v_{1},\ldots,v_{n}\) is an _undirected path_ if \(\{(v_{i},v_{i+1}),(v_{i+1},v_{i})\}\cap\mathcal{E}\neq\emptyset\) instead, for all \(i\in[1..n-1]\). A set of nodes \(S\subseteq\mathcal{N}\) is _connected in \(G\)_ iff there is an undirected path in \(G\) between any two nodes in \(S\). A graph \(G\) is _connected_ iff \(\mathcal{N}\) is connected in \(G\).
Given a set \(\Omega\) of labels, an \(\Omega\)-_labeled unranked tree_ is a tuple \(T=(\mathcal{N},\mathcal{E},r,\lambda)\), where \((\mathcal{N},\mathcal{E})\) is a graph, \(r\in\mathcal{N}\) is a designated node called the _root_, such that there exists a unique simple path from \(r\) to any other node \(n\in\mathcal{N}\setminus\{r\}\) and no path from \(r\) to \(r\) in \((\mathcal{N},\mathcal{E})\). The mapping \(\lambda:\mathcal{N}\rightarrow\Omega\) assigns to each node of the tree a label from \(\Omega\).
Definition 5: A _tree decomposition_ of a structure \(\mathsf{S}=(\mathsf{U},\sigma)\) is a _\(\operatorname{pow}(\mathsf{U})\)_-labeled unranked tree_\(T=(\mathcal{N},\mathcal{E},r,\lambda)\), such that the following hold:
1. for each relation symbol \(r\in\mathbb{R}\) and each tuple \(\langle u_{1},\ldots,u_{\#r}\rangle\in\sigma(r)\) there exists a node \(n\in\mathcal{N}\), such that \(\{u_{1},\ldots,u_{\#r}\}\subseteq\lambda(n)\), and
2. for each element \(u\in\operatorname{supp}(\sigma)\), the set of nodes \(\{n\in\mathcal{N}\mid u\in\lambda(n)\}\) is nonempty and connected in \((\mathcal{N},\mathcal{E})\).
The _width_ of the tree decomposition is \(\operatorname{wd}(T)\stackrel{{\text{\tiny{\it def}}}}{{=}}\max_{ n\in\mathcal{N}}\operatorname{card}(\lambda(n))-1\). The _treewidth_ of the structure \(\sigma\) is \(\operatorname{tw}(\sigma)\stackrel{{\text{\tiny{\it def}}}}{{=}}\min \{\operatorname{wd}(T)\mid T\text{ is a tree decomposition of }\sigma\}\).
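To illustrate Definition 5 on a small case of our own: let \(\mathsf{S}=(\mathsf{U},\sigma)\) with \(\sigma(\mathsf{r})=\{\langle u_{1},u_{2}\rangle,\langle u_{2},u_{3}\rangle,\langle u_{3},u_{1}\rangle\}\) and all other relation symbols interpreted by empty sets, i.e., a simple cycle with three elements. The single-node tree whose unique node is labeled by \(\{u_{1},u_{2},u_{3}\}\) is a tree decomposition: every tuple of \(\sigma(\mathsf{r})\) is covered by that node, and every element occurs in a nonempty, trivially connected, set of nodes. Its width is \(3-1=2\), hence \(\operatorname{tw}(\mathsf{S})\leq 2\), in accordance with the treewidth-two bound for simple cycles recalled in Footnote 4.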
Note that, since we consider only structures with finite support, tree decompositions are finite trees with finite sets as labels, hence the treewidth of a structure is a well-defined
integer. A set of structures is _treewidth-bounded_ iff the set of corresponding treewidths is finite and _treewidth-unbounded_ otherwise. We assume basic acquaintance with the notions of grid and minor. It is known that a set of structures having infinitely many minors isomorphic to some \(n\times n\) grid is treewidth-unbounded [5].
**Logics** Let \(\mathbb{V}=\{x,y,\ldots\}\) be a set of variables. _First order logic_ (\(\mathsf{FO}\)) is the set of formulae consisting of _equalities_ \(x=y\) and _relation atoms_ \(\mathsf{r}(x_{1},\ldots,x_{\#\mathsf{r}})\) connected by boolean conjunction, negation and existential quantification. A variable is _free_ if it does not occur within the scope of an existential quantifier and \(\mathsf{fv}(\phi)\) denotes the set of free variables of \(\phi\). A _sentence_ is a formula with no free variables. For a formula \(\phi\), we denote by \(\phi^{\exists}\) the sentence obtained by existentially quantifying its free variables. A formula without quantifiers is called _quantifier-free_. The semantics of first order logic is given by a satisfaction relation \((\mathsf{U},\sigma)\Vdash^{\mathsf{s}}\phi\) between structures and formulae, parameterized by a _store_ \(\mathsf{s}:\mathbb{V}\rightarrow\mathsf{U}\) such that \((\mathsf{U},\sigma)\Vdash^{\mathsf{s}}\mathsf{r}(x_{1},\ldots,x_{\#\mathsf{r}})\) iff \(\langle\mathsf{s}(x_{1}),\ldots,\mathsf{s}(x_{\#\mathsf{r}})\rangle\in\sigma(\mathsf{r})\). If \(\phi\) is a sentence, the store is not important, thus we omit the superscript and write \(\mathsf{S}\Vdash\phi\) instead. The set of _models_ of an \(\mathsf{FO}\) sentence \(\phi\) is denoted as \(\llbracket\phi\rrbracket\stackrel{\text{\tiny def}}{=}\{\mathsf{S}\mid\mathsf{S}\Vdash\phi\}\).
In \(\mathsf{SLR}\), the satisfaction relation \(\models^{\mathsf{s}}_{\Delta}\) is additionally parameterized by a _set of inductive definitions_ (SID) \(\Delta\) interpreting the predicate symbols; in particular, a relation atom \(\mathsf{r}(x_{1},\ldots,x_{\#\mathsf{r}})\) is satisfied precisely by the structure in which all relation symbols are interpreted by empty sets, except for \(r\), which contains the tuple of store values of \(x_{1},\ldots,x_{\#r}\) only. Moreover, every structure \((U,\sigma)\), such that \((U,\sigma)\models^{s}_{\Delta}\phi\), interprets each relation symbol as a finite set of tuples, defined by a finite least fixpoint iteration over the rules from \(\Delta\). The assumption that each structure has an infinite universe excludes the cases in which a formula becomes unsatisfiable because there are not enough elements to instantiate the quantifiers introduced by the unfolding of the rules, thus simplifying the definitions.
If \(\phi\) is a sentence (resp. a predicate-free formula), we omit the store \(\mathsf{s}\) (resp. the SID \(\Delta\)) from \(\mathsf{S}\models^{\mathsf{s}}_{\Delta}\phi\). For an \(\mathsf{SLR}\) sentence \(\phi\), we denote by \(\llbracket\phi\rrbracket_{\Delta}\stackrel{\text{\tiny def}}{=}\{\mathsf{S}\mid\mathsf{S}\models_{\Delta}\phi\}\) its set of models.
The above result allows us to assume, w.l.o.g., that \(\Delta\) is expandable for \(\mathsf{A}\).
In the second part of the proof we reduce the treewidth boundedness of \(\llbracket\mathsf{A}\rrbracket_{\Delta}\) to the treewidth boundedness of sets of structures obtained by applying both internal and external fusion to canonical models. The proof of the lemma below is given in §5:
Lemma 2: _Let \(\Delta\) be an expandable SID for a nullary predicate \(\mathsf{A}\). Then, (1) only if (2) only if (3), where:_
1. \(\mathsf{IEF}^{*}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{\mathrm{c}})\) _is treewidth-bounded,_
2. \(\llbracket\mathsf{A}\rrbracket_{\Delta}\) _is treewidth-bounded,_
3. \(\mathsf{EF}^{*}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{\mathrm{c}})\) _is treewidth-bounded._
In the third part of the proof we establish the equivalence of the points (1-3) of Lemma 2, by proving the missing direction (3) only if (1). This reduces the treewidth boundedness of \(\llbracket\mathsf{A}\rrbracket_{\Delta}\) to the treewidth boundedness of \(\mathsf{EF}^{*}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{\mathrm{c}})\). The proofs of the following two lemmas are given in §6:
Lemma 3: _Given a SID \(\Delta\) and a nullary predicate symbol \(\mathsf{A}\), \(\mathsf{EF}^{*}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{\mathrm{c}})\) is treewidth-bounded only if \(\mathsf{IEF}^{*}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{\mathrm{c}})\) is treewidth-bounded._
The above lemma is a consequence of the argument used to show the decidability of the treewidth boundedness problem for sets of the form \(\mathsf{EF}^{*}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{\mathrm{c}})\):
Lemma 4: _The following problem is decidable: given a SID \(\Delta\) and a nullary predicate \(\mathsf{A}\), is \(\mathsf{EF}^{*}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{\mathrm{c}})\) treewidth-bounded?_
Finally, the treewidth boundedness for \(\mathsf{EF}^{*}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{\mathrm{c}})\) is shown to be equivalent to the treewidth boundedness of a set generated by external fusion of a set \(\mathcal{S}\) of _connected_ structures, i.e., in which there is a path of tuples between any two elements from the support. We prove that (1) \(\mathsf{EF}^{*}(\mathcal{S})\) is treewidth-unbounded iff (2) \(\mathsf{EF}^{*}(\mathcal{S})\) contains infinitely many grid minors iff (3) there exist two disjoint structures \((\mathsf{U}_{i},\sigma_{i})\in\mathsf{EF}^{*}(\mathcal{S})\) and distinct elements \(u_{i},v_{i},w_{i}\in\mathrm{supp}(\sigma_{i})\) labeled with disjoint sets of relation symbols \(\mathcal{C}_{i}\), for \(i=1,2\). For the latter condition, Fig. 3 depicts the construction of a structure with an \(n\times n\) square grid minor, of treewidth at least \(n\), for any \(n\geq 1\). Intuitively, the condition \(\mathcal{C}_{1}\cap\mathcal{C}_{2}=\emptyset\) allows to glue the elements \(u_{1}\) with \(u_{2}\), \(v_{1}\) with \(v_{2}\) and \(w_{1}\) with \(w_{2}\), respectively.
The existence of structures satisfying condition (3) above is checked by computing an ascending Kleene sequence in a domain of multisets of relation symbols in which each symbol occurs at most three times. Since this domain is finite, the least fixpoint is attained in a finite number of steps, yielding an algorithm that decides the treewidth-boundedness of the set \(\llbracket\mathsf{A}\rrbracket_{\Delta}\).
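As a rough illustration of this saturation loop (our sketch: the actual transfer function induced by the rules of the SID is abstracted behind `step`, and the encoding of multisets as capped count vectors is ours), an ascending Kleene iteration over a finite domain necessarily stabilizes:

```python
# Ascending Kleene iteration: saturate a finite set of abstract values.
def kleene_lfp(initial, step):
    """Iterate `step` until no new abstract value appears (least fixpoint)."""
    reached = set(initial)
    frontier = set(initial)
    while frontier:
        new = {v for f in frontier for v in step(f, reached)} - reached
        reached |= new
        frontier = new
    return reached

def cap3(counts):
    """Cap every multiplicity at 3, so the domain of abstract values is finite."""
    return tuple(min(c, 3) for c in counts)

# Toy transfer function over multisets of two relation symbols (a, r):
# every step may add one more occurrence of r.
def step(v, _reached):
    return {cap3((v[0], v[1] + 1))}

print(sorted(kleene_lfp({(1, 0)}, step)))  # [(1, 0), (1, 1), (1, 2), (1, 3)]
```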
## 3 Expandable Sets of Inductive Definitions
This section introduces the formal definitions of canonical models and expandable SIDs, thus completing the overview of the proof of Theorem 2.
Let \(\phi\) and \(\psi\) be formulae and \(\Delta\) be a SID. We denote by \(\phi\Rightarrow_{\Delta}\psi\) the fact that \(\psi\) is obtained by replacing a predicate atom \(\mathsf{A}(y_{1},\ldots,y_{n})\) in \(\phi\) by a formula \(\rho[x_{1}/y_{1},\ldots,x_{n}/y_{n}]\), where \(\mathsf{A}(x_{1},\ldots,x_{n})\leftarrow\rho\) is a rule from \(\Delta\). A _\(\Delta\)-unfolding_ is a sequence of formulae such that \(\phi_{1}\Rightarrow_{\Delta}\ldots\Rightarrow_{\Delta}\phi_{n}\). The \(\Delta\)-unfolding is _complete_ iff \(\phi_{n}\) is a predicate-free formula. The following is a direct consequence of the semantics of \(\mathsf{SLR}\):
**Proposition 1**.: _Let \(\phi\) be a sentence, \(\Delta\) a SID and \(\mathsf{S}\) a structure. Then \(\mathsf{S}\in\llbracket\phi\rrbracket_{\Delta}\) iff \(\mathsf{S}\models^{\mathsf{s}}\psi\), for a store \(\mathsf{s}\) and a complete \(\Delta\)-unfolding \(\phi\Rightarrow^{*}_{\Delta}\exists x_{1}\ldots\exists x_{n}\,.\ \psi\), where \(\psi\) is a qpf formula._
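To illustrate on the chain predicate of Fig. 1 (a), suppose, purely for this example, that \(\Delta\) also contains the base rule \(\mathsf{A}(x_{1},x_{2})\leftarrow\mathsf{r}(x_{1},x_{2})\) (Fig. 1 only displays the recursive rule). Writing the quantifier introduced by the recursive rule as \(z\) and pulling it to the front, a complete \(\Delta\)-unfolding of the sentence \(\exists x\exists y\,.\ \mathsf{A}(x,y)\) is
\[\exists x\exists y\,.\ \mathsf{A}(x,y)\;\Rightarrow_{\Delta}\;\exists x\exists y\exists z\,.\ \mathsf{a}(x)*\mathsf{r}(x,z)*\mathsf{A}(z,y)\;\Rightarrow_{\Delta}\;\exists x\exists y\exists z\,.\ \mathsf{a}(x)*\mathsf{r}(x,z)*\mathsf{r}(z,y)\]
and, by Proposition 1, every structure satisfying the final quantifier- and predicate-free formula under some store is a model of \(\exists x\exists y\,.\ \mathsf{A}(x,y)\).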
Intuitively, a model of a sentence is _canonical_ if it can be defined using a store that matches only those variables that are equated in the result of the unfolding. For a qpf formula \(\phi\), we write \(x\approx_{\phi}y\) (resp. \(x\not\approx_{\phi}y\)) iff \(x=y\) is (resp. is not) a logical consequence of \(\phi\). A store \(\mathsf{s}\) is _canonical for \(\phi\)_ iff \(\mathsf{s}(x)=\mathsf{s}(y)\) only if \(x\approx_{\phi}y\), for all \(x,y\in\operatorname{fv}(\phi)\). Moreover, a _rich_ canonical model stores information about the disequalities introduced by the unfolding.
**Definition 8**.: _Let \(\Delta\) be a SID and \(\phi\) a sentence. A rich canonical \(\Delta\)-model of \(\phi\) is a pair \((\mathsf{S},\mathfrak{d})\), where \(\mathsf{S}=(\mathsf{U},\sigma)\) is a structure and \(\mathfrak{d}\subseteq\mathsf{U}\times\mathsf{U}\) is a symmetric relation, such that there exists a complete \(\Delta\)-unfolding \(\phi\Rightarrow^{*}_{\Delta}\exists x_{1}\ldots\exists x_{n}\,.\ \psi\), where \(\psi\) is qpf, and a store \(\mathsf{s}\) canonical for \(\psi\), such that \(\mathsf{S}\models^{\mathsf{s}}\psi\) and \(\mathfrak{d}(u,v)\) iff there exist variables \(x\in\mathsf{s}^{-1}(u)\) and \(y\in\mathsf{s}^{-1}(v)\) such that \(x\neq y\) occurs in \(\psi\). We denote by \(\llbracket\phi\rrbracket^{\mathsf{r}}_{\Delta}\) the set of rich canonical \(\Delta\)-models of \(\phi\) and by \(\llbracket\phi\rrbracket^{\mathsf{c}}_{\Delta}\stackrel{\text{\tiny def}}{=}\{\mathsf{S}\mid(\mathsf{S},\mathfrak{d})\in\llbracket\phi\rrbracket^{\mathsf{r}}_{\Delta}\text{, for some }\mathfrak{d}\}\) the set of canonical \(\Delta\)-models of \(\phi\)._
**Definition 10**.: _An equivalence relation \(\approx\subseteq\mathsf{U}\times\mathsf{U}\) is compatible with a rich canonical model \((\mathsf{S},\mathfrak{d})\) iff it is compatible with \(\mathsf{S}=(\mathsf{U},\sigma)\) and \(\mathfrak{d}(u,v)\) only if \(u\not\approx v\). We denote by \(\widetilde{\mathrm{IF}}(\mathsf{S},\mathfrak{d})\) the set of structures isomorphic to \(\mathsf{S}_{/\approx}\), where \(\approx\) is some equivalence relation compatible with \((\mathsf{S},\mathfrak{d})\)._
A store \(\mathfrak{s}\) is _injective_ over a set of variables \(x_{1},\ldots,x_{n}\) iff \(\mathfrak{s}(x_{i})=\mathfrak{s}(x_{j})\) implies \(i=j\), for all \(i,j\in[1..n]\). Note that the canonical \(\Delta\)-models of an equality free SID \(\Delta\) can be defined considering injective stores in Def. 8.
**Lemma 7**.: _Let \(\mathsf{A}\) be a nullary predicate. Then, \(\llbracket\mathsf{A}\rrbracket_{\Delta}=\widetilde{\mathrm{IF}}(\llbracket \mathsf{A}\rrbracket_{\Delta}^{r})\subseteq\mathrm{IF}(\llbracket\mathsf{A} \rrbracket_{\Delta}^{c})\)._
See proof on page 33.
A structure is a _substructure_ of another if the former is obtained from the latter by removing elements from its support:
**Definition 11**.: _Let \(\mathsf{S}_{i}=(\mathsf{U}_{i},\sigma_{i})\) be structures, for \(i=1,2\). \(\mathsf{S}_{1}\) is included in \(\mathsf{S}_{2}\) iff \(\mathsf{U}_{1}\subseteq\mathsf{U}_{2}\) and \(\sigma_{1}(r)\subseteq\sigma_{2}(r)\), for all \(r\in\mathbb{R}\). \(\mathsf{S}_{1}\) is a substructure of \(\mathsf{S}_{2}\), denoted \(\mathsf{S}_{1}\sqsubseteq\mathsf{S}_{2}\) iff \(\mathsf{S}_{1}\subseteq\mathsf{S}_{2}\) and \(\sigma_{1}(r)=\{\langle u_{1},\ldots,u_{\#r}\rangle\in\sigma_{2}(r)\mid u_{1},\ldots,u_{\#r}\in\mathrm{supp}(\sigma_{1})\}\), for all \(r\in\mathbb{R}\)._
A SID is _expandable_ if the structures in any finite set of canonical models of a sentence (equivalently, a nullary predicate) are all substructures of the same canonical model of that sentence, and can moreover be placed "sufficiently far away" from one another.
**Definition 12**.: _A SID \(\Gamma\) is expandable for a nullary predicate \(\mathsf{B}\) iff for each sequence of pairwise disjoint canonical models \(\mathsf{S}_{1}=(\mathsf{U}_{1},\sigma_{1}),\ldots,\mathsf{S}_{n}=(\mathsf{U} _{n},\sigma_{n})\in\llbracket\mathsf{B}\rrbracket_{\Gamma}^{c}\), there exists a rich canonical model \((\mathsf{S},\mathfrak{d})\in\llbracket\mathsf{B}\rrbracket_{\Gamma}^{r}\), where \(\mathsf{S}=(\mathsf{U},\sigma)\), such that:_
1. \(\mathsf{S}_{1}\bullet\ldots\bullet\mathsf{S}_{n}\sqsubseteq\mathsf{S}\)_,_
2. \(\mathfrak{d}(u,v)\) _holds for no_ \(u\in\mathrm{supp}(\sigma_{i})\) _and_ \(v\in\mathrm{supp}(\sigma_{j})\)_, where_ \(1\leq i<j\leq n\)_, and_
3. _for no relation symbol_ \(r\in\mathbb{R}\) _and tuples_ \(\langle u_{1},\ldots,u_{\#r}\rangle,\langle v_{1},\ldots,v_{\#r}\rangle\in \sigma(r)\) _there exist_ \(1\leq i<j\leq n\)_, such that_ \(\{u_{1},\ldots,u_{\#r}\}\cap\mathrm{supp}(\sigma_{i})\neq\emptyset\)_,_ \(\{v_{1},\ldots,v_{\#r}\}\cap\mathrm{supp}(\sigma_{j})\neq\emptyset\) _and_ \(\{u_{1},\ldots,u_{\#r}\}\cap\{v_{1},\ldots,v_{\#r}\}\neq\emptyset\)_._
The conditions (2) and (3) of Def. 12 ensure that the external fusion (Def. 4) of these substructures is not hindered by how they are placed inside the larger structure. This definition completes the formalization of the statements of Lemmas 1 and 2 (§5) on which the proof of Theorem 2 rests. We proceed with a proof of Lemma 2.
**Proof of Lemma 2** "\((1)\Rightarrow(2)\)" \(\mathrm{IF}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{c})\subseteq\mathrm{IEF}^{*}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{c})\) holds trivially, by Def. 4, leading to \(\llbracket\mathsf{A}\rrbracket_{\Delta}\subseteq\mathrm{IEF}^{*}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{c})\), by Lemma 7.

"\((2)\Rightarrow(3)\)" Let \(\mathsf{S}=(\mathsf{U},\sigma)\in\mathrm{EF}^{*}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{c})\) be a structure. It is sufficient to prove that \(\mathsf{S}\sqsubseteq\mathsf{S}^{\prime}\) for another structure \(\mathsf{S}^{\prime}\in\llbracket\mathsf{A}\rrbracket_{\Delta}\), because \(\mathrm{tw}(\mathsf{S})\leq\mathrm{tw}(\mathsf{S}^{\prime})\) in this case. Then there exist pairwise disjoint structures \(\mathsf{S}_{1}=(\mathsf{U}_{1},\sigma_{1}),\ldots,\mathsf{S}_{n}=(\mathsf{U}_{n},\sigma_{n})\in\llbracket\mathsf{A}\rrbracket_{\Delta}^{c}\) and an equivalence relation \(\approx\subseteq\big(\bigcup_{i=1}^{n}\mathsf{U}_{i}\big)\times\big(\bigcup_{i=1}^{n}\mathsf{U}_{i}\big)\), that is compatible with \(\mathsf{S}_{1}\bullet\ldots\bullet\mathsf{S}_{n}\), matches only elements from different structures and is not the identity, such that \(\mathsf{S}\) is isomorphic to \((\mathsf{S}_{1}\bullet\ldots\bullet\mathsf{S}_{n})_{/\approx}\). By Def. 12, there exists a rich canonical model \((\mathsf{S}^{\prime\prime},\mathfrak{d})\in\llbracket\mathsf{A}\rrbracket_{\Delta}^{r}\), such that (1) \(\mathsf{S}_{1}\bullet\ldots\bullet\mathsf{S}_{n}\sqsubseteq\mathsf{S}^{\prime\prime}\), (2) \(\mathfrak{d}(u,v)\) holds for no \(u\in\mathrm{supp}(\sigma_{i})\) and \(v\in\mathrm{supp}(\sigma_{j})\), where \(1\leq i<j\leq n\), and (3) for no relation symbol \(r\in\mathbb{R}\) and tuples \(\langle u_{1},\ldots,u_{\#r}\rangle,\langle v_{1},\ldots,v_{\#r}\rangle\in\sigma(r)\), there exist \(1\leq i<j\leq n\), such that \(\{u_{1},\ldots,u_{\#r}\}\cap\mathrm{supp}(\sigma_{i})\neq\emptyset\), \(\{v_{1},\ldots,v_{\#r}\}\cap\mathrm{supp}(\sigma_{j})\neq\emptyset\) and \(\{u_{1},\ldots,u_{\#r}\}\cap\{v_{1},\ldots,v_{\#r}\}\neq\emptyset\). By the last two conditions, \(\approx\) is compatible with \((\mathsf{S}^{\prime\prime},\mathfrak{d})\), leading to \(\mathsf{S}^{\prime\prime}_{/\approx}\in\widetilde{\mathrm{IF}}(\llbracket\mathsf{A}\rrbracket_{\Delta}^{r})=\llbracket\mathsf{A}\rrbracket_{\Delta}\) by Lemma 7. We conclude by taking \(\mathsf{S}^{\prime}=\mathsf{S}^{\prime\prime}_{/\approx}\).
## 4 Encoding Sets of Inductive Definitions by Tree Automata
For technical reasons, the construction of expandable SIDs with an equivalent treewidth boundedness problem (Lemma 1) uses a representation of the SID as a tree automaton. This representation allows one to distinguish the purely structural aspects, related to the dependencies between rules, from details related to the flow of parameters.
Let \(\mathbb{A}\) be a _ranked alphabet_, each symbol \(a\in\mathbb{A}\) having an associated integer \(\mathsf{p}(a)\geq 0\), called the _rank_ of \(a\). The elements of \(\mathbb{N}^{*}_{+}\) are finite sequences of strictly positive natural numbers, called _positions_. We write \(pq\) for the concatenation of \(p,q\in\mathbb{N}^{*}_{+}\) and \(q\cdot P\stackrel{\text{\tiny def}}{=}\{qp\mid p\in P\}\), for a set of positions \(P\subseteq\mathbb{N}^{*}_{+}\).
**Definition 15**.: _Let \(\Sigma\) be the set of apf formulae \(\alpha\) of rank \(\rho(\alpha)=\ell\), such that:_
1. \(\operatorname{fv}(\alpha)\subseteq\{x_{1}^{[\varepsilon]},\ldots,x_{n_{0}}^{[ \varepsilon]}\}\cup\{y_{1}^{[\varepsilon]},\ldots,y_{m}^{[\varepsilon]}\} \cup\bigcup_{i=1}^{\ell}\{x_{1}^{[i]},\ldots,x_{n_{i}}^{[i]}\}\)_, for some_ \(m,n_{0},\ldots,n_{\ell}\in\mathbb{N}\)_; a variable_ \(x_{j}^{[i]}\) _is called a_ \(i\)-variable_, for all_ \(i\in\{\varepsilon\}\cup[1..\ell]\)_,_
2. \(x_{j}^{[i]}\not\approx_{\alpha}x_{k}^{[i]}\)_, for all_ \(i\in[1..\ell]\) _and_ \(1\leq j<k\leq n_{i}\)_._
_The characteristic formula of a \(\Sigma\)-labeled tree \(t\) is the apf formula \(\Theta(t)\stackrel{\text{\tiny def}}{=}\mathop{*}_{p\in\operatorname{dom}(t)}t(p)^{[p]}\), where each formula \(t(p)^{[p]}\) is obtained from \(t(p)\in\Sigma\) by replacing the superscript \([i]\) of every variable by \([pi]\), for all \(i\in\{\varepsilon\}\cup[1..\rho(t(p))]\), with the convention \(p\varepsilon=p\), for all \(p\in\operatorname{dom}(t)\)._
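For a small illustrative example of our own, under this relabeling: if \(t\) is the two-node tree whose root \(\varepsilon\) is labeled by \(\mathsf{a}(x_{1}^{[\varepsilon]})*\mathsf{r}(x_{1}^{[\varepsilon]},x_{1}^{[1]})\) and whose child at position \(1\) is labeled by \(\mathsf{b}(x_{1}^{[\varepsilon]})\), then \(\Theta(t)=\mathsf{a}(x_{1}^{[\varepsilon]})*\mathsf{r}(x_{1}^{[\varepsilon]},x_{1}^{[1]})*\mathsf{b}(x_{1}^{[1]})\): the \(\varepsilon\)-variable of the child and the \(1\)-variable of the root become the same position-indexed variable \(x_{1}^{[1]}\), which is how the characteristic formula stitches together the labels along a run.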
Given a SID \(\Delta\) and a nullary predicate \(\mathsf{A}\), the rules of \(\Delta\) are represented by a \(\Sigma\)-labeled automaton \(\mathcal{A}_{\Delta,\mathsf{A}}\), whose runs encode the complete \(\Delta\)-unfoldings of \(\mathsf{A}\).
See proof on page 39.
**Persistent variables** The second ingredient of the construction of expandable SIDs (Lemma 1) are the _persistent_ variables introduced by \(1\)-transitions, whose values propagate via equalities throughout each run of the choice-free automaton. We introduce these variables formally using the notion of _profile_:
Definition 17: Let \(\mathcal{A}=(\Sigma,Q,\iota,\delta)\) be a choice-free automaton, where \(\delta=\delta^{1}\uplus\delta^{\infty}\) (Def. 14). A _positional function_ \(\mathfrak{P}:Q\to\operatorname{pow}(\mathbb{N})\) associates each state \(q\) with a set \(\mathfrak{P}(q)\subseteq[1..\#q]\). The _profile of \(\mathcal{A}\)_ is the pointwise largest positional function \(\mathfrak{P}_{\mathcal{A}}\) such that, for each transition \(q_{0}\xrightarrow{\alpha}(q_{1},\ldots,q_{\ell})\in\delta^{\infty}\), each \(k\in[1..\ell]\) and each \(r\in\mathfrak{P}_{\mathcal{A}}(q_{k})\), there exists \(s\in\mathfrak{P}_{\mathcal{A}}(q_{0})\), such that \(x_{s}^{[\varepsilon]}\approx_{\alpha}x_{r}^{[k]}\). A variable \(x_{j}^{[i]}\) that occurs within the label of a transition \(q_{0}\xrightarrow{\alpha}(q_{1},\ldots,q_{\ell})\in\delta\) is said to be _persistent_ iff \(j\in\mathfrak{P}_{\mathcal{A}}(q_{i})\), where \(q_{\varepsilon}\stackrel{\text{\tiny def}}{=}q_{0}\), for \(i\in[1..\ell]\cup\{\varepsilon\}\).
Intuitively, \(\mathfrak{P}_{\mathcal{A}}(q)\) is the set of indices of those variables, associated with a state, that will be equated, through a chain of equalities in the characteristic formula \(\Theta(t)\), to the same variable associated with the entry state in every run of \(\infty\)-transitions of \(\mathcal{A}\) over \(t\). Note that the profile is computable by a separate finite greatest fixpoint Kleene iteration over sets of SCCs in the automaton interconnected by \(\infty\)-transitions.
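To make the greatest-fixpoint computation concrete, the sketch below computes the profile for an automaton given as a list of \(\infty\)-transitions whose variable equalities are made explicit; the data layout (dictionaries keyed by state names) and the toy automaton are purely illustrative and not taken from the paper.

```python
def profile(arity, inf_transitions):
    """Greatest-fixpoint computation of the profile of Def. 17.

    arity           : dict mapping a state q to #q
    inf_transitions : list of (q0, eqs, children), where children = [q1, ..., ql]
                      and eqs contains ((k, r), s) whenever the transition label
                      equates x_r^[k] with x_s^[eps]
    returns         : dict mapping each state to its set of persistent indices
    """
    # start from the largest positional function and shrink until stable
    P = {q: set(range(1, n + 1)) for q, n in arity.items()}
    changed = True
    while changed:
        changed = False
        for q0, eqs, children in inf_transitions:
            for k, qk in enumerate(children, start=1):
                for r in list(P[qk]):
                    # r survives only if some s in P(q0) is equated to x_r^[k]
                    if not any(((k, r), s) in eqs for s in P[q0]):
                        P[qk].discard(r)
                        changed = True
    return P

# toy automaton: one state 'q' of arity 2 and a single looping infinity-transition
# whose label equates only the first variable of each child with x_1^[eps]
arity = {'q': 2}
trans = [('q', {((1, 1), 1), ((2, 1), 1)}, ['q', 'q'])]
print(profile(arity, trans))   # {'q': {1}}
```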
A _context_ \(\theta_{p\gets q}\) is a partial run over a tree \(t\) such that \(p\in\operatorname{fr}(\theta_{p\gets q})\), \(\theta_{p\gets q}(p)=q\) and \(\theta_{p\gets q}(r)\xrightarrow{t(r)}()\in\delta\), for all \(r\in\operatorname{fr}(\theta_{p\gets q})\setminus\{p\}\), i.e., the partial run has exactly one "open" frontier position \(p\) that is labeled with a state \(q\). A key property of automata is that equalities between non-persistent variables vanish in contexts consisting of \(\infty\)-transitions only:
Definition 18: A context \(\theta_{p\gets q}\in\mathcal{R}_{q}^{\infty}(\mathcal{A})\) over a tree \(t\) is a \(q\)-reset iff (1) \(x_{j}^{[\varepsilon]}\approx_{\Theta(t)}x_{j}^{[p]}\), for all \(j\in\mathfrak{P}_{\mathcal{A}}(q)\), and (2) \(x_{j}^{[\varepsilon]}\not\approx_{\Theta(t)}x_{k}^{[p]}\), for all \(j,k\in[1..\#q]\), such that \(k\not\in\mathfrak{P}_{\mathcal{A}}(q)\). The path between \(\varepsilon\) and \(p\) in \(\theta_{p\gets q}\) is a reset path.
Lemma 13: Let \(\mathcal{A}=(\Sigma,Q,\iota,\delta)\) be a trim automaton. Then, there exists a \(q\)-reset for (1) each pivot state \(q\in(\delta^{1})^{\bullet}\cap{}^{\bullet}(\delta^{\infty})\) of \(\mathcal{A}\) and (2) each state \(q\in{}^{\bullet}(\delta^{1})\cap{}^{\bullet}(\delta^{\infty})\), i.e., that is the origin of both a \(1\)-transition and an \(\infty\)-transition.
See proof on page 41.
Any sequence of partial runs consisting of \(\infty\)-transitions can be embedded in a complete run, such that each two such partial runs are separated by any number of resets:
Lemma 14: Let \(\mathcal{A}\) be a trim automaton. Given partial runs \(\theta_{1}\in\mathcal{R}_{q_{1}}^{\infty}(\mathcal{A}),\ldots,\theta_{n}\in \mathcal{R}_{q_{n}}^{\infty}(\mathcal{A})\) and an integer \(k\geq 1\), there exists an accepting run \(\theta\) of \(\mathcal{A}\) such that:
1. \(\theta_{i}\) is embedded in \(\theta\) at some position \(p_{i}\in\operatorname{dom}(\theta)\), for each \(i\in[1..n]\),
2. \(p_{i}\cdot\operatorname{dom}(\theta_{i})\cap p_{j}\cdot\operatorname{dom}( \theta_{j})=\emptyset\), for all \(1\leq i<j\leq n\),
3. the path between \(p_{i}\) and \(p_{j}\) in \(\theta\) traverses \(k\) times some reset path disjoint from \(\bigcup_{\ell=1}^{n}p_{\ell}\cdot\operatorname{dom}(\theta_{\ell})\), for all \(1\leq i<j\leq n\).
See proof on page 42.
## A Decomposition into Expandable Sets of Inductive Definitions
This section describes the technical development leading to the proof of Lemma 1. In the rest of this section, let \(\Delta\) be a given equality-free SID and \(A\) be a nullary predicate. The automaton \(\mathcal{A}_{\Delta,A}\) recognizes the set of \(\Delta\)-models of \(A\), by Lemma 12 (1). We shall build from \(\mathcal{A}_{\Delta,A}\) finitely many automata \(\mathcal{B}_{1},\ldots,\mathcal{B}_{m}\), such that \([\![\mathcal{A}_{\Delta,A}]\!]\) is treewidth-bounded iff \([\![\mathcal{B}_{i}]\!]\) is treewidth-bounded, for each \(i\in[1..m]\) and, moreover, each SID \(\Gamma_{i}\stackrel{\text{\tiny def}}{=}\Delta_{\mathcal{B}_{i}}\), i.e., obtained from \(\mathcal{B}_{i}\) using Lemma 12 (2), is shown to be expandable for a nullary predicate \(\mathsf{B}_{i}\), for all \(i\in[1..m]\). The construction of \(\mathcal{B}_{1},\ldots,\mathcal{B}_{m}\) proceeds in five steps, denoted (**I-V**) in what follows.
The automata built in the following will be _simulations_ and _refinements_ of \(\mathcal{A}_{\Delta,A}\):
Definition 19: Let \(\mathcal{A}=(\mathbb{A},Q_{\mathcal{A}},\iota_{\mathcal{A}},\delta_{\mathcal{A}})\) and \(\mathcal{B}=(\mathbb{A},Q_{\mathcal{B}},\iota_{\mathcal{B}},\delta_{\mathcal{B}})\) be automata. A mapping \(h:Q_{\mathcal{A}}\to Q_{\mathcal{B}}\) is a _simulation_ iff (1) \(h(\iota_{\mathcal{A}})=\iota_{\mathcal{B}}\) and (2) \(q_{0}\stackrel{{ a}}{{\to}}(q_{1},\ldots,q_{\ell})\in\delta_{\mathcal{A}}\) only if \(h(q_{0})\stackrel{{ a}}{{\to}}(h(q_{1}),\ldots,h(q_{\ell}))\in\delta_{\mathcal{B}}\), for all \(q_{0},\ldots,q_{\ell}\in Q_{\mathcal{A}}\). A _refinement_ \(h\) is a simulation such that, moreover, (3) \(h(q_{0})\stackrel{{ a}}{{\to}}(q_{1}^{\prime},\ldots,q_{\ell}^{\prime})\in\delta_{\mathcal{B}}\) only if there exist states \(q_{1}\in h^{-1}(q_{1}^{\prime}),\ldots,q_{\ell}\in h^{-1}(q_{\ell}^{\prime})\), such that \(q_{0}\stackrel{{ a}}{{\to}}(q_{1},\ldots,q_{\ell})\in\delta_{\mathcal{A}}\), for all \(q_{0}\in Q_{\mathcal{A}}\) and \(q_{1}^{\prime},\ldots,q_{\ell}^{\prime}\in Q_{\mathcal{B}}\). If a simulation (refinement) \(h:Q_{\mathcal{A}}\to Q_{\mathcal{B}}\) exists then \(\mathcal{A}\) _simulates_ (_refines_) \(\mathcal{B}\).
The key properties of simulations and refinements are stated and proved below:
Lemma 15: If \(\mathcal{A}\) simulates (resp. refines) \(\mathcal{B}\) then \(\mathcal{L}(\mathcal{A})\subseteq\mathcal{L}(\mathcal{B})\) (resp. \(\mathcal{L}(\mathcal{A})=\mathcal{L}(\mathcal{B})\)).
See proof on page 43.
We shall also make use of the following relations between qpf formulae and the upper bounds on the treewidth of their models:
Lemma 16: Let \(\phi\) be a qpf formula, \(x_{1},x_{2},\ldots,x_{k}\) variables and \(r\) a relation symbol of arity \(k\), such that \(\phi*x_{1}\neq x_{2}\) and \(\phi*r(x_{1},\ldots,x_{k})\) are satisfiable. Then, we have:
1. \(\operatorname{tw}([\![(\phi*x_{1}=x_{2})^{\exists}]\!])\leq\operatorname{tw}([\![ \phi^{\exists}]\!])\),
2. \(\operatorname{tw}([\![\phi^{\exists}]\!])-1\leq\operatorname{tw}([\![(\phi*x_{1}\neq x_{2})^{\exists}]\!])\leq\operatorname{tw}([\![\phi^{\exists}]\!])\),
3. \(\operatorname{tw}([\![\phi^{\exists}]\!])-1\leq\operatorname{tw}([\![(\phi*r(x_{1},\ldots,x_{k}))^{\exists}]\!])\leq\operatorname{tw}([\![\phi^{\exists}]\!])+k\)
See proof on page 44.
**I. Satisfiability** The first step is the construction of an automaton \(\mathcal{A}_{\Delta,A}^{I}=(\Sigma,Q_{\Delta}^{I},q_{\Delta},\delta_{\Delta}^{ I})\) recognizing the set of trees from the language of \(\mathcal{A}_{\Delta,A}=(\Sigma,Q_{\Delta},q_{\Delta},\delta_{\Delta})\) that have, moreover, a satisfiable characteristic formula. This construction uses an idea of Brotherston et al [9], that characterizes the satisfiability of a predicate by an abstraction consisting of tuples of parameters occurring in the interpretation of relation symbols. A similar abstraction has been used to check satisfiability of \(\mathsf{SLR}\) formulae [6].
The states of \(\mathcal{A}_{\Delta,A}^{I}\) are _base pairs_, defined below:
Definition 20: A _base pair \((\sigma^{\sharp},\pi)\) consists of a mapping \(\sigma^{\sharp}:\mathbb{R}\to\operatorname{mpow}(\mathbb{V}^{+})\) of relation symbols \(r\) into multisets of tuples of variables of length \(\#r\) each, and a conjunction of disequalities \(\pi\). A base pair is said to be _satisfiable_ iff \(\pi\) is satisfiable and the multiplicity of any tuple \(\langle x_{1},\ldots,x_{\#r}\rangle\in\sigma^{\sharp}(r)\) is one, for all \(r\in\mathbb{R}\). Given a set of variables \(X\subseteq\mathbb{V}\), let \(\mathsf{SatBase}(X)\) denote the set of satisfiable base pairs involving variables from \(X\) and let \(\mathsf{SatBase}\stackrel{{\raisebox{0.0pt}[0.0pt][0.0pt]{\tiny def}}}{{=}} \mathsf{SatBase}(\mathbb{V})\).
We consider three partial operations on \(\mathsf{SatBase}\). First, the _composition_ is \((\sigma_{1}^{\sharp},\pi_{1})\otimes(\sigma_{2}^{\sharp},\pi_{2})\stackrel{{ \mbox{\tiny{\it def}}}}{{=}}(\sigma_{1}^{\sharp}\cup\sigma_{2}^{\sharp},\pi_{1}* \pi_{2})\) if \((\sigma_{1}^{\sharp}\cup\sigma_{2}^{\sharp},\pi_{1}*\pi_{2})\) is satisfiable, and undefined, otherwise. Second, the _substitution_\((\sigma^{\sharp},\pi)[x_{1}/y_{1},\ldots,x_{n}/y_{n}]\) replaces simultaneously each occurrence of \(x_{j}\) by \(y_{j}\) in \(\sigma^{\sharp}\) and \(\pi\), for all \(j\in[1..n]\). Third, given a set \(X\subseteq\mathbb{V}\) of variables, the _projection_ is \((\sigma^{\sharp},\pi)|_{X}\stackrel{{\mbox{\tiny{\it def}}}}{{=} }(\lambda r\.\ \{\langle x_{1},\ldots,x_{n}\rangle\in\sigma^{\sharp}(r)\ |\ x_{1},\ldots,x_{n}\in X\},\pi|_{X})\) where, for a formula \(\phi\), the operation \(\phi|_{X}\) removes from \(\phi\) all atoms involving variables not from \(X\). Finally, for a qpf formula \(\phi=\psi*\pi\), where \(\psi\) is a separated conjunction of relation atoms and \(\pi\) is a pure formula, we define:
\[\mathsf{Base}(\phi)\stackrel{{\mbox{\tiny{\it def}}}}{{=}}( \lambda r\.\ \{\langle x_{1},\ldots,x_{n}\rangle\ |\ r(x_{1},\ldots,x_{n})\mbox{ occurs in }\psi\},\pi)\]
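As an illustration of how base pairs (Def. 20) are manipulated, the following sketch implements the composition and projection operations introduced above over a naive representation: relation symbols are mapped to `Counter`s of argument tuples and disequalities are two-element frozensets. The substitution operation is omitted and the example atoms are illustrative only.

```python
from collections import Counter

# a base pair (sig, pi) in the sense of Def. 20: sig maps a relation symbol to a
# Counter of argument tuples, pi is a set of frozensets {x, y} standing for x != y
def satisfiable(sig, pi):
    single_occurrence = all(m <= 1 for cnt in sig.values() for m in cnt.values())
    no_trivial_diseq = all(len(d) == 2 for d in pi)       # rules out x != x
    return single_occurrence and no_trivial_diseq

def compose(bp1, bp2):
    """Composition (x) of two base pairs; None when the result is unsatisfiable."""
    sig = {r: bp1[0].get(r, Counter()) + bp2[0].get(r, Counter())
           for r in set(bp1[0]) | set(bp2[0])}
    pi = bp1[1] | bp2[1]
    return (sig, pi) if satisfiable(sig, pi) else None

def project(bp, keep):
    """Projection of a base pair onto the set of variables `keep`."""
    sig = {r: Counter({t: m for t, m in cnt.items() if set(t) <= keep})
           for r, cnt in bp[0].items()}
    pi = {d for d in bp[1] if d <= keep}
    return (sig, pi)

# toy base pair for the formula r(x1, x2) * x1 != x2
bp = ({'r': Counter({('x1', 'x2'): 1})}, {frozenset({'x1', 'x2'})})
print(compose(bp, bp))       # None: the atom r(x1, x2) would occur twice
print(project(bp, {'x1'}))   # both the atom and the disequality are dropped
```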
We define the automaton \(\mathcal{A}_{\Delta,\mathsf{A}}^{I}=(\Sigma,Q_{\mathsf{A}}^{I},\iota_{ \mathsf{A}}^{I},\delta_{\Delta}^{I})\), where:
* \(Q_{\mathsf{A}}^{I}\stackrel{\text{\tiny def}}{=}\{(q,(\sigma^{\sharp},\pi))\ |\ q\in Q_{\mathsf{A}},\ (\sigma^{\sharp},\pi)\in\mathsf{SatBase}(x_{1}^{[\varepsilon]},\ldots,x_{\#q}^{[\varepsilon]})\}\),
* \(\iota_{\mathsf{A}}^{I}\stackrel{{\mbox{\tiny{\it def}}}}{{=}}\{( \iota_{\mathsf{A}},(\sigma_{\hat{\mathsf{0}}}^{\sharp},\mathsf{emp}))\}\), where \(\sigma_{\hat{\mathsf{0}}}^{\sharp}\) interprets each relation symbol as the empty set; recall that, since we assumed \(\#\iota_{\mathsf{A}}=0\), there are no tuples associated with \(\iota_{\mathsf{A}}\),
* \(\delta_{\Delta}^{I}\) is the set of transitions \((q_{0},(\sigma_{0}^{\sharp},\pi_{0}))\xrightarrow{\alpha}\langle(q_{1},(\sigma_{1}^{\sharp},\pi_{1})),\ldots,(q_{\ell},(\sigma_{\ell}^{\sharp},\pi_{\ell}))\rangle\), such that \(q_{0}\xrightarrow{\alpha}(q_{1},\ldots,q_{\ell})\in\delta_{\Delta}\) and the following condition holds: \[(\sigma_{0}^{\sharp},\pi_{0})=\Big{(}\mathsf{Base}(\alpha)\otimes\bigotimes_{i=1}^{\ell}(\sigma_{i}^{\sharp},\pi_{i})[x_{1}^{[\varepsilon]}/x_{1}^{[i]},\ldots,x_{\#q_{i}}^{[\varepsilon]}/x_{\#q_{i}}^{[i]}]\Big{)}\Big{|}_{\{x_{1}^{[\varepsilon]},\ldots,x_{\#q_{0}}^{[\varepsilon]}\}}\]
2. remove every equality involving a non-persistent variable \(x_{j}^{[i]}\), for \(i=\epsilon\) and \(j\in[1..\#q_{0}]\), or \(i\in[1..\ell]\) and \(j\in[1..\#q_{i}]\).
The idea is to remove the equalities that would be lost when adding resets before and after every \(1\)-transition, that is, to forget the equalities involving non-persistent variables (2) while keeping the equalities between persistent ones (1). The result is the choice-free automaton \(\mathcal{A}_{\Delta,\mathsf{A}}^{III}\), whose properties are stated and proved below:
**Lemma 19**.: _Let \(q_{0}\stackrel{{\alpha}}{{\to}}(q_{1},\ldots,q_{\ell})\) be a \(1\)-transition of \(\mathcal{A}_{\Delta,\mathsf{A}}^{III}\). Then, for each \(i\in[1..\ell]\) and each \(j\in\mathfrak{P}_{\mathcal{A}_{\Delta,\mathsf{A}}^{III}}(q_{i})\), there exists an \(\varepsilon\)-variable \(z\), such that \(x_{j}^{[i]}\approx_{\alpha}z\)._
See proof on page 48.
**Lemma 20**.: _Let \(\phi\) and \(\psi\) be qpf formulae, such that \(\phi*\psi\) is satisfiable and \(x\not\approx_{\phi}y\), for all \(x,y\in\mathrm{fv}(\phi)\cap\mathrm{fv}(\psi)\). Let \(\psi_{eq}=\bigstar\{x=y\mid x,y\in\mathrm{fv}(\phi)\cap\mathrm{fv}(\psi),\;x \approx_{\psi}y\}\). Then, \(\mathrm{tw}(\llbracket(\phi*\psi_{eq})^{\exists}\rrbracket)\leq\mathrm{tw}( \llbracket(\phi*\psi)^{\exists}\rrbracket)+\mathrm{card}(\mathrm{fv}(\phi) \cap\mathrm{fv}(\psi))\)._
See proof on page 48.
**Lemma 21**.: _(1) \(\mathcal{A}_{\Delta,\mathsf{A}}^{III}\) is all-satisfiable. (2) \(\llbracket\mathcal{A}_{\Delta,\mathsf{A}}^{III}\rrbracket\) is treewidth-bounded iff \(\llbracket\mathcal{A}_{\Delta,\mathsf{A}}^{II}\rrbracket\) is treewidth-bounded._
See proof on page 49. Again, \(\mathcal{A}_{\Delta,\mathsf{A}}^{III}\) is choice-free, because it is obtained by a structure-preserving re-labeling of the choice-free automaton \(\mathcal{A}_{\Delta,\mathsf{A}}^{II}\).
**IV. Removing persistent variables** We build from \(\mathcal{A}_{\Delta,\mathsf{A}}^{III}=(\Sigma,Q_{\mathsf{A}}^{III},\iota_{\mathsf{A}}^{III},\delta_{\Delta,\mathsf{A}}^{III})\) an automaton \(\mathcal{A}_{\Delta,\mathsf{A}}^{IV}\) having no persistent variables at all. We recall that, by Lemma 8, each \(1\)-transition of a choice-free automaton occurs exactly once in each accepting run over a \(\Sigma\)-labeled tree \(t\) and each such occurrence corresponds to one subformula \(t(p)^{[p]}\) of \(\Theta(t)\), for a position \(p\in\operatorname{dom}(t)\). Using a renaming, if necessary, we can assume that the \(\varepsilon\)-variables \(y_{i}^{[\varepsilon]}\), i.e., not associated with the states of the transition (Def. 15), have distinct names between the \(1\)-transitions of \(\mathcal{A}_{\Delta,\mathsf{A}}^{III}\) and let \(\mathcal{Y}\) denote the set of all \(\varepsilon\)-variables occurring within the labels of these \(1\)-transitions.
By Lemma 19, one of the cases above must hold, hence \(a_{k}\) is well defined.
* \(q_{0}\stackrel{{\alpha}}{{\to}}(q_{1},\ldots,q_{\ell})\in(\delta^{III}_{\Delta,\mathsf{A}})^{\infty}\) and, for all \(k\in[1..\ell]\), we have \(a_{k}\stackrel{\text{\tiny def}}{=}a_{0}\circ b_{k}\), where \(b_{k}(i)=j\iff x_{i}^{[k]}\approx_{\alpha}x_{j}^{[\varepsilon]}\), for all \(i\in\mathfrak{P}_{\mathcal{A}^{III}_{\Delta,\mathsf{A}}}(q_{k})\) and \(j\in\mathfrak{P}_{\mathcal{A}^{III}_{\Delta,\mathsf{A}}}(q_{0})\). Note that, by Def. 17, \(b_{k}\) is well defined. The goal of this transformation is to remove, from the transition label \(\alpha\), the persistent variables associated with one of the states \(q_{0},\ldots,q_{\ell}\). In order to preserve the naming conventions from Def. 15, we rename the remaining (non-persistent) variables using an injective mapping \(\eta:\operatorname{fv}(\alpha)\to\operatorname{fv}(\alpha)\), such that:
* \(\eta(\{x_{i}^{[\varepsilon]}\mid i\not\in\mathfrak{P}_{\mathcal{A}^{III}_{\Delta,\mathsf{A}}}(q_{0})\})=\{x_{1}^{[\varepsilon]},\ldots,x_{k_{0}}^{[\varepsilon]}\}\), where \(k_{0}\stackrel{\text{\tiny def}}{=}n_{0}-\operatorname{card}(\mathfrak{P}_{\mathcal{A}^{III}_{\Delta,\mathsf{A}}}(q_{0}))\),
* \(\eta(\{x_{i}^{[j]}\mid i\not\in\mathfrak{P}_{\mathcal{A}^{III}_{\Delta,\mathsf{A}}}(q_{j})\})=\{x_{1}^{[j]},\ldots,x_{k_{j}}^{[j]}\}\), where \(k_{j}\stackrel{\text{\tiny def}}{=}n_{j}-\operatorname{card}(\mathfrak{P}_{\mathcal{A}^{III}_{\Delta,\mathsf{A}}}(q_{j}))\), for \(j\in[1..\ell]\),
* \(\eta(y_{i}^{[\varepsilon]})=y_{i}^{[\varepsilon]}\), for \(i\in[1..m]\), where \(m,n_{0},\ldots,n_{\ell}\) are as in Def. 15. Note that, by the definition (1) of the transition labels of \(\mathcal{A}_{\Delta,\mathsf{A}}\), each relation atom from \(\alpha\) is of the form \(r(z_{1}^{[\varepsilon]},\ldots,z_{\#r}^{[\varepsilon]})\) (i.e., these atoms are not changed by the transformations (**I-III**), with the exception of (**II**), which removes relation atoms from the \(1\)-transitions). We distinguish two cases. If \(\alpha\) is the label of a \(1\)-transition of \(\mathcal{A}^{III}_{\Delta,\mathsf{A}}\), we define \(\overline{\alpha}\stackrel{\text{\tiny def}}{=}\mathsf{emp}\). Otherwise (\(\alpha\) labels an \(\infty\)-transition), \(\overline{\alpha}\) is obtained by replacing each relation atom \(r(z_{1}^{[\varepsilon]},\ldots,z_{\#r}^{[\varepsilon]})\) from \(\alpha\) with a relation atom \(r_{g}(\eta(z_{i_{1}}^{[\varepsilon]}),\ldots,\eta(z_{i_{k}}^{[\varepsilon]}))\), where:
* \(r_{g}\) is a new relation symbol of arity \(k\) and \(g:[1..\#r]\to[1..\mathcal{M}]\cup\{\bot\}\) is: \[g(i)\stackrel{\text{\tiny def}}{=}\left\{\begin{array}{ll}a_{0}(j)&\text{, if }z_{i}^{[\varepsilon]}\text{ and }x_{j}^{[\varepsilon]}\text{ are the same variable, such that }j\in\mathfrak{P}_{\mathcal{A}^{III}_{\Delta,\mathsf{A}}}(q_{0})\\ \bot&\text{, otherwise}\end{array}\right.\]
We prove that \(\overline{B}_{1},\ldots,\overline{B}_{m}\) is indeed a choice-free decomposition of \(\mathcal{A}^{IV}_{\Delta,\mathsf{A}}\). The proof relies on a stronger notion of automata refinement:
**Definition 21**.: _An automaton \(\mathcal{A}=(\mathbb{A},\)\(Q_{\mathcal{A}},\iota_{\mathcal{A}},\delta_{\mathcal{A}})\) is a strong refinement of \(\mathcal{B}=(\mathbb{A},\)\(Q_{\mathcal{B}},\iota_{\mathcal{B}},\delta_{\mathcal{B}})\) iff there exists a refinement \(h:Q_{\mathcal{A}}\to Q_{\mathcal{B}}\) such that the following hold:_
1. \(h^{-1}(S)\) _is an SCC of_ \(\mathcal{A}\)_, for each SCC_ \(S\) _of_ \(\mathcal{B}\)_._
2. _for each SCC_ \(S\) _of_ \(\mathcal{B}\) _and each transition_ \(q_{0}\stackrel{{ a}}{{\to}}(q_{1},\ldots,q_{\ell})\in S^{\bullet}\) _there exists exactly_ _one transition_ \(q_{0}^{\prime}\stackrel{{ a}}{{\to}}(q_{1}^{\prime},\ldots,q_{ \ell}^{\prime})\in\delta_{\mathcal{A}}\)_, such that_ \(q_{i}^{\prime}\in h^{-1}(q_{i})\)_, for all_ \(i\in[0..\ell]\)_._
_If a strong refinement \(h:Q_{\mathcal{A}}\to Q_{\mathcal{B}}\) exists then \(\mathcal{A}\) strongly refines \(\mathcal{B}\)._
**Lemma 23**.: _If \(\mathcal{A}\) strongly refines \(\mathcal{B}\) and \(\mathcal{B}\) is choice-free, then \(\mathcal{A}\) is choice-free._
See proof on page 52.
**Lemma 24**.: _Each \(\overline{\mathcal{B}}_{1},\ldots,\overline{\mathcal{B}}_{m}\) is all-satisfiable, choice-free and \(\mathcal{L}(\mathcal{A}^{IV}_{\Delta,\mathsf{A}})=\bigcup_{i=1}^{m}\mathcal{ L}(\overline{\mathcal{B}}_{i})\)._
See proof on page 54.
**V. Wrapping \(1\)-transitions into partial runs of \(\infty\)-transitions** At this point, we have finitely many all-satisfiable choice-free automata \(\overline{\mathcal{B}}_{1},\ldots,\overline{\mathcal{B}}_{m}\) without persistent variables. In order to obtain expandable SIDs from these automata, using Lemma 12 (2), any sequence of accepting runs of \(\overline{\mathcal{B}}_{i}\) must be embedded in an accepting run of the same automaton, for all \(i\in[1..m]\). Since all \(1\)-transitions of \(\overline{\mathcal{B}}_{i}\) must occur on any accepting run (Lemma 8), we need to "wrap" the labels of \(1\)-transitions into characteristic formulae of trees recognized by partial runs consisting of \(\infty\)-transitions only. This will enable the use of Lemma 14 to embed several runs consisting of \(\infty\)-transitions into one accepting run. The outcome of this transformation is denoted \(\mathcal{B}_{i}\), for all \(i\in[1..m]\).
Let \(\overline{\mathcal{B}}\) be any of \(\overline{\mathcal{B}}_{1},\ldots,\overline{\mathcal{B}}_{m}\). For a \(\Sigma\)-labeled tree \(t\), two positions \(p\) and \(s\), such that only \(p\in\operatorname{dom}(t)\) (i.e., nothing is required about \(s\)), and a sequence of variables \(x_{1},\ldots,x_{k}\), we define the formula:
\[\Omega_{t}^{p/s}(x_{1},\ldots,x_{k})\stackrel{{\mbox{\tiny{\tiny{ \tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{ \tiny{ \cdot}}}}}}}}}}}}}}}}\; \;\{r(x_{1}^{i},\ldots,x_{k})\stackrel{{\mbox{\tiny{\tiny{\tiny{ \tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\cdot
**Lemma 26**.: _For each structure \(\mathsf{S}=(\mathsf{U},\sigma)\in\llbracket\mathcal{B}\rrbracket\), ..._
Note that \(\operatorname{tw}(\mathcal{S})=\operatorname{tw}(\mathtt{split}(\mathcal{S}))\) for any set of structures \(\mathcal{S}\). The next lemma shows that both internal and external fusions preserve maximally connected substructures:
Lemma 28: _Given a set \(\mathcal{S}\) of structures, we have (1) \(\mathtt{split}(\mathtt{EF}^{*}(\mathcal{S}))=\mathtt{EF}^{*}(\mathtt{split}( \mathcal{S}))\), and (2) \(\mathtt{split}(\mathtt{IEF}^{*}(\mathcal{S}))=\mathtt{IEF}^{*}(\mathtt{split}( \mathcal{S}))\)._
See proof on page 4.
For a given set \(\mathbb{R}\) of relation symbols, we define the set of _colors_ as \(\mathbb{C}\stackrel{\text{\tiny def}}{=}\operatorname{pow}(\mathbb{R})\). The elements of a structure are labeled with colors as follows:
Definition 24: The _coloring_ of a structure \(\mathsf{S}=(\mathsf{U},\sigma)\) is the mapping \(\mathcal{C}_{\mathsf{S}}:\mathsf{U}\to\mathbb{C}\) that associates with each element \(u\) the set of relation symbols \(r\in\mathbb{R}\) such that \(u\) occurs in some tuple of \(\sigma(r)\). The _multiset color abstraction_ of \(\mathsf{S}\) is the multiset \(\mathsf{S}^{\sharp}\stackrel{\text{\tiny def}}{=}[\![\,\mathcal{C}_{\mathsf{S}}(u)\mid u\in\operatorname{supp}(\sigma)\,]\!]\) and, for an integer \(k\geq 1\), the _\(k\)-multiset color abstraction_ \(\mathsf{S}^{\sharp k}\) collects the sub-multisets of \(\mathsf{S}^{\sharp}\) of cardinality at most \(k\); both notations are lifted to sets of structures. An _RGB color scheme_ is a triple \((\mathbb{C}^{red},\mathbb{C}^{green},\mathbb{C}^{blue})\) of pairwise disjoint sets of colors.
**Definition 27**.: _A set \(\mathcal{S}\) of structures conforms to \((\mathbb{C}^{red},\mathbb{C}^{green},\mathbb{C}^{blue})\) if and only if:_
1. _for all structures_ \(\mathsf{S}=(\mathsf{U},\sigma)\in\mathcal{S}\)_, if_ \(\mathcal{C}_{\mathsf{S}}(u)\in\mathbb{C}^{red}\)_, for some element_ \(u\in\operatorname{supp}(\sigma)\)_, then_ \(\mathcal{C}_{\mathsf{S}}(u^{\prime})\in\mathbb{C}^{blue}\)_, for all other elements_ \(u^{\prime}\in\operatorname{supp}(\sigma)\setminus\{u\}\)_, and_
2. \(\mathsf{S}^{\sharp}\cap\mathbb{C}^{green}\subseteq[\![\,C\mid C\in\mathbb{C}^{green}\,]\!]\)_, for all structures_ \(\mathsf{S}\in\mathbb{E}\mathsf{F}^{*}(\mathcal{S})\)_._
_If \(\mathcal{S}\) conforms to \((\mathbb{C}^{red},\mathbb{C}^{green},\mathbb{C}^{blue})\), any structure \(\mathsf{S}\in\mathcal{S}\) is of type either:_
* \(\mathsf{R}\) _if_ \(\mathsf{S}^{\sharp}\in\operatorname{mpow}(\mathbb{C}^{blue}\cup\mathbb{C}^{ red})\) _and_ \(\operatorname{card}(\mathsf{S}^{\sharp}\cap\mathbb{C}^{red})=1\)_,_
* \(\mathsf{G}\) _if_ \(\mathsf{S}^{\sharp}\in\operatorname{mpow}(\mathbb{C}^{blue}\cup\mathbb{C}^{ green})\) _and_ \(\operatorname{card}(\mathsf{S}^{\sharp}\cap\mathbb{C}^{green})>0\)_, and_
* \(\mathsf{B}\) _if_ \(\mathsf{S}^{\sharp}\in\operatorname{mpow}(\mathbb{C}^{blue})\)_._
Conformance to an RGB color scheme is key to bounding the treewidth of structures obtained by external fusion of a treewidth-bounded set of connected structures:
**Lemma 30**.: _Let \(\mathcal{S}\) be a treewidth-bounded set of connected structures conforming to an RGB color scheme. Then, for any structure \(\mathsf{S}\in\mathbb{E}\mathsf{F}^{*}(\mathcal{S})\), the following hold:_
1. \(\mathsf{S}\) _is connected and of type either_ \(\mathsf{R}\)_,_ \(\mathsf{G}\) _or_ \(\mathsf{B}\)_,_
2. \(\operatorname{tw}(\mathsf{S})\leq\left\{\begin{array}{ll}\operatorname{tw} (\mathcal{S})&\mbox{, if $\mathsf{S}$ of type $\mathsf{R}$}\\ \max(\operatorname{tw}(\mathcal{S})+2\cdot\operatorname{card}(\mathbb{C}^{ green}),3\cdot\operatorname{card}(\mathbb{C}^{green}))&\mbox{, if $\mathsf{S}$ of type $\mathsf{G}$}\\ \max(\operatorname{tw}(\mathcal{S})+2\cdot\operatorname{card}(\mathbb{C}^{ green}),3\cdot\operatorname{card}(\mathbb{C}^{green}),\operatorname{tw}(\mathcal{S})+1)&\mbox{, if $\mathsf{S}$ is of type $\mathsf{B}$}\end{array}\right.\)__
See proof on page 60.
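The following sketch spells out the type classification of Def. 27 together with the treewidth bound of Lemma 30; representing colors as frozensets of relation symbols and the three components of the color scheme as Python sets is only one possible encoding, and the example structure at the end is illustrative.

```python
def structure_type(colors, red, green, blue):
    """Type (R, G or B) of a structure, given the multiset of colors of its
    elements and an RGB color scheme; None if no case of Def. 27 applies."""
    n_red = sum(c in red for c in colors)
    n_green = sum(c in green for c in colors)
    if all(c in blue | red for c in colors) and n_red == 1:
        return 'R'
    if all(c in blue | green for c in colors) and n_green > 0:
        return 'G'
    if all(c in blue for c in colors):
        return 'B'
    return None

def tw_bound(tw_s, n_green_colors, typ):
    """Upper bound on the treewidth of a fused structure, as in Lemma 30."""
    if typ == 'R':
        return tw_s
    if typ == 'G':
        return max(tw_s + 2 * n_green_colors, 3 * n_green_colors)
    return max(tw_s + 2 * n_green_colors, 3 * n_green_colors, tw_s + 1)  # type B

r, s, rs = frozenset({'r'}), frozenset({'s'}), frozenset({'r', 's'})
typ = structure_type((r, rs), red={r}, green={s}, blue={rs})
print(typ, tw_bound(2, 1, typ))   # R 2
```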
The core of the treewidth boundedness algorithm is a decidable equivalent condition for the treewidth boundedness of a set obtained by external fusion of a treewidth-bounded set of connected structures. Essentially, a set generated by external fusion is treewidth-bounded iff there is no way of connecting six elements \(u_{i}\), \(v_{i}\) and \(w_{i}\), labeled with \(\mathcal{C}_{i}\), for \(i=1,2\), respectively, where \(\mathcal{C}_{1}\cap\mathcal{C}_{2}=\emptyset\) (condition (2) of Lemma 32).
**Lemma 32**.: _The following are equivalent, for any treewidth-bounded set \(\mathcal{S}\) of connected structures:_
1. \(\mathbb{E}\mathsf{F}^{*}(\mathcal{S})\) _is treewidth bounded,_
2. \([\![\,C_{1},\,C_{1},\,C_{1}]\!],[\![\,C_{2},\,C_{2},\,C_{2}]\!]\in(\mathbb{E} \mathsf{F}^{*}(\mathtt{split}(\mathcal{S})))^{\sharp 3}\) _implies_ \(\mathcal{C}_{1}\cap\mathcal{C}_{2}\neq\emptyset\) _for all_ \(\mathcal{C}_{1},\,\mathcal{C}_{2}\)_,_
3. \(\mathtt{split}(\mathcal{S})\) _conforms to some RGB color scheme._
See proof on page 63. The algorithm checks if the set \((\mathbb{E}\mathsf{F}^{*}(\mathtt{split}(\mathcal{S})))^{\sharp 3}\) meets condition (2) above. The check is effective, provided that this set can be built in finitely many steps.
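Concretely, once the \(3\)-multiset abstraction has been computed, condition (2) can be checked as in the sketch below, where each \(3\)-multiset is given as a tuple of colors (frozensets of relation symbols); the example data is illustrative.

```python
from itertools import combinations_with_replacement

def condition2_holds(abstraction3):
    """Condition (2) of Lemma 32: any two monochromatic triples [C1,C1,C1] and
    [C2,C2,C2] in the 3-multiset abstraction must have intersecting colors."""
    uniform = [m[0] for m in abstraction3 if len(m) == 3 and len(set(m)) == 1]
    return all(c1 & c2 for c1, c2 in combinations_with_replacement(uniform, 2))

a, b = frozenset({'r'}), frozenset({'s'})
print(condition2_holds([(a, a, a), (a, a, b)]))   # True: only one uniform triple
print(condition2_holds([(a, a, a), (b, b, b)]))   # False: disjoint colors
```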
In order to decide whether \([\![\mathsf{A}]\!]_{\Delta}\) is treewidth-bounded, it therefore remains to compute this abstraction effectively; the construction is described in the following two subsections.
### Computing \(k\)-Multiset Color Abstraction for External Fusion
We describe now the effective construction of the \(k\)-multiset abstraction \((\mathtt{EF}^{*}(\mathcal{S}))^{\sharp k}\) from the abstraction \(\mathcal{S}^{\sharp k}\) of a set \(\mathcal{S}\) of structures, for a given integer \(k\geq 1\). First, since we are interested only in \(k\)-multiset color abstractions, we can restrict external fusion to bipartite equivalence relations generated by a single pair, without loss of precision.
Definition 28: The _single-pair external fusion_ of disjoint structures \(\mathsf{S}_{1}=(\mathsf{U}_{1},\sigma_{1})\) and \(\mathsf{S}_{2}=(\mathsf{U}_{2},\sigma_{2})\) is the external fusion (Def. 4) induced by equivalence relations \(\{(u_{1},u_{2})\}^{=}\), where \(u_{i}\in\operatorname{supp}(\sigma_{i})\), for \(i=1,2\). We denote by \(\mathtt{EF}_{1}(\mathsf{S}_{1},\mathsf{S}_{2})\) the set of structures obtained by single-pair external fusion of \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\). For a set of structures \(\mathcal{S}\), we denote by \(\mathtt{EF}_{1}^{*}(\mathcal{S})\) the closure of \(\mathcal{S}\) under single-pair external fusions.
In general, the single-pair external fusion is strictly less expressive than external fusion, yet it produces the same \(k\)-multiset color abstractions:
Lemma 33: \((\mathtt{EF}^{*}(\mathcal{S}))^{\sharp k}=(\mathtt{EF}_{1}^{*}(\mathcal{S})) ^{\sharp k}\) _for any set \(\mathcal{S}\) of structures and integer \(k\geq 1\)._
See proof on page 6.2. Second, the closure \((\mathtt{EF}_{1}^{*}(\mathcal{S}))^{\sharp k}\) can be computed by a least fixpoint iteration of an abstract operation on the domain of \(k\)-multiset color abstractions. As the later domain is finite, this fixpoint computation is guaranteed to terminate.
Definition 29: The _single-pair multiset fusion_ is defined below, for \(M_{1},M_{2}\in\operatorname{mpow}(\mathbb{C})\):
\[\mathtt{ef}_{1}^{\sharp}(M_{1},M_{2})\stackrel{\text{\tiny def}}{=}\big\{M\in\operatorname{mpow}(\mathbb{C})\mid\exists\mathcal{C}_{1}\in M_{1}.\ \exists\mathcal{C}_{2}\in M_{2}.\ \mathcal{C}_{1}\cap\mathcal{C}_{2}=\emptyset,\ M=[\mathcal{C}_{1}\cup\mathcal{C}_{2}]\cup\bigcup_{i=1,2}(M_{i}\setminus[\mathcal{C}_{i}])\big\}\]
Given an integer \(k\geq 1\), the _single-pair \(k\)-multiset fusion_ is defined for \(M_{1}\), \(M_{2}\in\operatorname{mpow}(\mathbb{C})\), such that \(\operatorname{card}(M_{1})\leq k\) and \(\operatorname{card}(M_{2})\leq k\):
\[\mathtt{ef}_{1}^{\sharp k}(M_{1},M_{2})\stackrel{\text{\tiny def}}{=}\{M\mid\exists M^{\prime}\in\mathtt{ef}_{1}^{\sharp}(M_{1},M_{2}).\ M\subseteq M^{\prime},\ \operatorname{card}(M)\leq k\}\]
For a set \(\mathcal{M}\) of multisets (resp. \(k\)-multisets) of colors, let \(\mathtt{ef}_{1}^{\sharp+}(\mathcal{M})\) (resp. \(\mathtt{ef}_{1}^{\sharp k+}(\mathcal{M})\)) be the closure of \(\mathcal{M}\) under taking single-pair fusion on multisets (resp. \(k\)-multisets).
Lemma 34: \((\mathtt{EF}_{1}^{*}(\mathcal{S}))^{\sharp k}=\mathtt{ef}_{1}^{\sharp k+}(\mathcal{S}^{\sharp k})\) _for any set \(\mathcal{S}\) of structures, for any integer \(k\geq 1\)._
See proof on page 6.2.
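A possible worklist implementation of the closure \(\mathtt{ef}_{1}^{\sharp k+}\) from Definition 29 (and hence, via Lemma 34, of \((\mathtt{EF}_{1}^{*}(\mathcal{S}))^{\sharp k}\)) is sketched below; multisets of colors are encoded as sorted tuples of frozensets, and the two-color example at the end is purely illustrative.

```python
from itertools import chain, combinations

def multiset(colors):
    """Canonical, hashable representation of a multiset of colors."""
    return tuple(sorted(colors, key=sorted))

def fuse1(m1, m2):
    """Single-pair multiset fusion ef_1#(m1, m2): glue one element of each side,
    which requires the two glued colors to be disjoint."""
    out = set()
    for i, c1 in enumerate(m1):
        for j, c2 in enumerate(m2):
            if not (c1 & c2):
                rest = list(m1[:i] + m1[i+1:] + m2[:j] + m2[j+1:])
                out.add(multiset([c1 | c2] + rest))
    return out

def sub_multisets(m, k):
    """All sub-multisets of m with at most k elements."""
    subs = chain.from_iterable(combinations(range(len(m)), n)
                               for n in range(min(k, len(m)) + 1))
    return {multiset([m[i] for i in s]) for s in subs}

def closure_k(abstraction, k):
    """Closure ef_1#k+ of a set of k-multisets under single-pair k-fusion."""
    todo, seen = set(abstraction), set(abstraction)
    while todo:
        m1 = todo.pop()
        for m2 in list(seen):
            for fused in fuse1(m1, m2):
                for m in sub_multisets(fused, k):
                    if m not in seen:
                        seen.add(m)
                        todo.add(m)
    return seen

r, s = frozenset({'r'}), frozenset({'s'})
start = {multiset([r, r]), multiset([s])}
print(sorted(len(m) for m in closure_k(start, 3)))   # the closure is finite
```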
### Computing \(k\)-Multiset Color Abstraction for SID Canonical Models
We shall apply Lemma 34 to compute the \(k\)-multiset color abstraction of the set \(\mathtt{EF}^{*}([\![\mathsf{A}]\!]_{\Delta}^{c})\), where \([\![\mathsf{A}]\!]_{\Delta}^{c}\) is the set of canonical models of a given nullary predicate \(\mathsf{A}\) and SID \(\Delta\). To this end, we must first compute the abstraction \(([\![\mathsf{A}]\!]_{\Delta}^{c})^{\sharp k}\). This is done by a least fixpoint computation in an abstract domain, defined directly from the rules in the SID, that tracks the colors of parameter values and the \(k\)-multiset color abstraction of the elements not referenced by parameters.
A _\(k\)-bounded color triple_ \(\langle X,c,M\rangle\) consists of a set of variables \(X\subseteq\mathbb{V}\), a mapping \(c:X\to\mathbb{C}\), and a multiset \(M\in\operatorname{mpow}(\mathbb{C})\), such that \(\operatorname{card}(M)\leq k\). Note that there exist \(\operatorname{card}(\mathbb{C})^{\operatorname{card}(X)+k}\) distinct color triples, for given \(X\) and \(k\). We consider the following operations on color triples, lifted to sets as usual:
\(k\)-composition: \(\langle X_{1},c_{1},M_{1}\rangle\bullet^{\sharp k}\langle X_{2},c_{2},M_{2}\rangle\stackrel{\text{\tiny def}}{=}\)
\[\{\langle X_{1}\cup X_{2},c_{12},M_{12}\rangle\mid c_{12}(x)=c_{1}(x)\uplus c _{2}(x)\text{, for all }x\in X_{1}\cap X_{2},\] \[c_{12}(x)=c_{i}(x)\text{ for all }x\in X_{i}\setminus X_{3-i} \text{, for all }i\in\{1,2\},\] \[M_{12}\subseteq M_{1}\cup M_{2},\text{ card}(M_{12})\leq k\}\]
This operation is undefined, if \(c_{1}(x)\cap c_{2}(x)\neq\emptyset\), for some \(x\in X_{1}\cap X_{2}\).
_substitution_: \(\langle X,c,M\rangle[s]\stackrel{\text{\tiny def}}{=}\langle Y,c\circ s,M\rangle\), for any bijection \(s:Y\to X\)
\(k\)-projection: \(\langle X,c,M\rangle\big|_{Y}^{\sharp k}\stackrel{\text{\tiny def}}{=}\)
\[\{\langle Y,c|_{Y},M^{\prime}\rangle\mid M^{\prime}\subseteq M\cup[c(x)\mid x \in X\setminus Y],\text{ card}(M^{\prime})\leq k\},\text{for any }Y\subseteq X\]
For a qpf formula \(\psi\), let \(\gamma(\psi)\stackrel{\text{\tiny def}}{=}\langle\mathrm{fv}(\psi),\ \lambda x\in\mathrm{fv}(\psi)\ .\ \{r\in\mathbb{R}\mid x\text{ occurs in a relation atom of }r\text{ in }\psi\},\ \emptyset\rangle\). Given a predicate \(\mathsf{B}\), we denote by \(\langle\!\langle\mathsf{B}\rangle\!\rangle_{\Delta}^{\sharp k}\) the least set of \(k\)-bounded color triples over the variables \(x_{1},\ldots,x_{\#\mathsf{B}}\) that satisfies the following constraints:
\[\langle\!\langle\mathsf{B}_{0}\rangle\!\rangle_{\Delta}^{\sharp k}\supseteq\Big(\gamma(\psi)\bullet^{\sharp k}\langle\!\langle\mathsf{B}_{1}\rangle\!\rangle_{\Delta}^{\sharp k}[x_{1}/z_{1,1},\ldots,x_{\#\mathsf{B}_{1}}/z_{1,\#\mathsf{B}_{1}}]\bullet^{\sharp k}\ldots\bullet^{\sharp k}\langle\!\langle\mathsf{B}_{\ell}\rangle\!\rangle_{\Delta}^{\sharp k}[x_{1}/z_{\ell,1},\ldots,x_{\#\mathsf{B}_{\ell}}/z_{\ell,\#\mathsf{B}_{\ell}}]\Big)\Big|_{\{x_{1},\ldots,x_{\#\mathsf{B}_{0}}\}}^{\sharp k} \tag{4}\]
one for each rule of \(\Delta\) of the form:
\[\mathsf{B}_{0}(x_{1},\ldots,x_{\#\mathsf{B}_{0}})\leftarrow\exists y_{1}\ldots\exists y_{m}\ .\ \psi\ast\mathop{\ast}_{i=1}^{\ell}\mathsf{B}_{i}(z_{i,1},\ldots,z_{i,\#\mathsf{B}_{i}}) \tag{5}\]
For a predicate \(\mathsf{B}\), let \(J=\{j_{1},\ldots,j_{p}\}\subseteq[1..\#\mathsf{B}]\) be a set of indices with \(j_{1}\leq\ldots\leq j_{p}\), let \(\xi\subseteq J\times J\) be an equivalence relation and let \(\mathsf{B}^{\xi}\) be a fresh predicate of arity \(p\). In particular, \(\#\mathsf{B}^{\xi}=0\) if \(\xi=\emptyset\) is the empty relation. We define the shorthands:
\[\begin{array}{ll}\mathrm{fv}_{J}(\mathsf{B}(y_{1},\ldots,y_{\#\mathsf{B}})) \stackrel{{\mbox{\tiny\tiny def}}}{{=}}\{y_{j}\mid j\in J\}& \xi(\mathsf{B}(y_{1},\ldots,y_{\#\mathsf{B}}))\stackrel{{ \mbox{\tiny\tiny def}}}{{=}}\{(y_{j},y_{k})\mid(j,k)\in\xi\}\\ &\mathsf{B}(y_{1},\ldots,y_{\#\mathsf{B}})_{/\xi}&\stackrel{{ \mbox{\tiny\tiny def}}}{{=}}\mathsf{B}^{\xi}(y_{j_{1}},\ldots,y_{j_{p}})\end{array}\]
Consider a rule of \(\Delta\) of the form (5), formulae \(\psi^{\prime}\), \(\psi^{\prime\prime}\), sets \(J_{i}\uplus\overline{J}_{i}=[1..\#\mathsf{B}_{i}]\), equivalence relations \(\xi_{i}\subseteq J_{i}\times J_{i}\), for all \(i\in[1..\ell]\), an equivalence relation \(\Xi\subseteq\big{(}\{x_{1},\ldots,x_{\#\mathsf{B}_{0}}\}\cup\{y_{1},\ldots,y_ {m}\}\big{)}\times\big{(}\{x_{1},\ldots,x_{\#\mathsf{B}_{0}}\}\cup\{y_{1}, \ldots,y_{m}\}\big{)}\), such that the following hold:
1. \(\psi=\psi^{\prime}*\psi^{\prime\prime}\) modulo a reordering of atoms and \(\mathrm{fv}(\psi^{\prime})\cap\mathrm{fv}(\psi^{\prime\prime})=\emptyset\),
2. \(\mathrm{fv}_{J_{i}}(\mathsf{B}_{i}(z_{i,1},\ldots,z_{i,\#\mathsf{B}_{i}}))\cap\mathrm{fv}(\psi^{\prime\prime})=\emptyset\) and \(\mathrm{fv}_{\overline{J}_{i}}(\mathsf{B}_{i}(z_{i,1},\ldots,z_{i,\#\mathsf{B}_{i}}))\cap\mathrm{fv}(\psi^{\prime})=\emptyset\), for all \(i\in[1..\ell]\),
3. \(\Xi=\big{(}\zeta(\psi^{\prime})\cup\bigcup_{i=1}^{\ell}\xi_{i}(\mathsf{B}_{i} (z_{i,1},\ldots,z_{i,\#\mathsf{B}_{i}}))\big{)}^{=}\).
We distinguish two cases. If (1) there exist sets \(J_{0}\uplus\overline{J}_{0}=[1..\#\mathsf{B}_{0}]\), \(J_{0}\neq\emptyset\) and an equivalence relation \(\xi_{0}\subseteq J_{0}\times J_{0}\), such that:
1. \(\mathrm{fv}_{J_{0}}(\mathsf{B}_{0}(x_{1},\ldots,x_{\#\mathsf{B}_{0}}))\cap\mathrm{fv}(\psi^{\prime\prime})=\emptyset\) and \(\mathrm{fv}_{\overline{J}_{0}}(\mathsf{B}_{0}(x_{1},\ldots,x_{\#\mathsf{B}_{0}}))\cap\mathrm{fv}(\psi^{\prime})=\emptyset\),
2. \(\mathrm{fv}_{J_{0}}(\mathsf{B}_{0}(x_{1},\ldots,x_{\#\mathsf{B}_{0}}))\cap\mathrm{fv}_{\overline{J}_{i}}(\mathsf{B}_{i}(z_{i,1},\ldots,z_{i,\#\mathsf{B}_{i}}))=\emptyset\) and \(\mathrm{fv}_{\overline{J}_{0}}(\mathsf{B}_{0}(x_{1},\ldots,x_{\#\mathsf{B}_{0}}))\cap\mathrm{fv}_{J_{i}}(\mathsf{B}_{i}(z_{i,1},\ldots,z_{i,\#\mathsf{B}_{i}}))=\emptyset\), for all \(i\in[1..\ell]\),
3. for all \(y\in\big{(}\mathrm{fv}(\psi^{\prime})\cup\bigcup_{i=1}^{\ell}\mathrm{fv}_{J_{ i}}(\mathsf{B}_{i}(z_{i,1},\ldots,z_{i,\#\mathsf{B}_{i}}))\big{)}\cap\{y_{1}, \ldots,y_{m}\}\) there exists \(x\in\mathrm{fv}_{J_{0}}(\mathsf{B}_{0}(x_{1},\ldots,x_{\#\mathsf{B}_{0}}))\), such that \((x,y)\in\Xi\),
4. \(\xi_{0}(\mathsf{B}_{0}(x_{1},\ldots,x_{\#\mathsf{B}_{0}}))=\Xi\cap\big(\mathrm{fv}_{J_{0}}(\mathsf{B}_{0}(x_{1},\ldots,x_{\#\mathsf{B}_{0}}))\times\mathrm{fv}_{J_{0}}(\mathsf{B}_{0}(x_{1},\ldots,x_{\#\mathsf{B}_{0}}))\big)\).
\(\texttt{IEF}^{*}(\texttt{split}(\llbracket\mathbb{A}\rrbracket^{c}_{\Delta}))= \texttt{split}(\texttt{IEF}^{*}(\llbracket\mathbb{A}\rrbracket^{c}_{\Delta}))\) is treewidth-bounded, by Lemma 31 and Lemma 28. Thus, \(\texttt{IEF}^{*}(\llbracket\mathbb{A}\rrbracket^{c}_{\Delta})\) is treewidth-bounded.
**Proof of Lemma 4** By Lemma 36 one builds a SID \(\Gamma\) and a nullary predicate \(\mathsf{B}\), such that \(\texttt{split}(\llbracket\mathbb{A}\rrbracket^{c}_{\Delta})=\llbracket\mathbb{B}\rrbracket^{c}_{\Gamma}\). By Lemma 35, one effectively computes \((\llbracket\mathbb{B}\rrbracket^{c}_{\Gamma})^{\sharp 3}\) and, by Lemmas 34 and 33, one effectively computes \((\texttt{EF}^{*}(\llbracket\mathbb{B}\rrbracket^{c}_{\Gamma}))^{\sharp 3}=\texttt{ef}_{1}^{\sharp 3+}((\llbracket\mathbb{B}\rrbracket^{c}_{\Gamma})^{\sharp 3})\). Therefore, \((\texttt{EF}^{*}(\texttt{split}(\llbracket\mathbb{A}\rrbracket^{c}_{\Delta})))^{\sharp 3}=\texttt{ef}_{1}^{\sharp 3+}((\llbracket\mathbb{B}\rrbracket^{c}_{\Gamma})^{\sharp 3})\) can be effectively computed. We decide the treewidth-boundedness of \(\texttt{EF}^{*}(\llbracket\mathbb{A}\rrbracket^{c}_{\Delta})\) by an application of Lemma 32 because, moreover, \(\llbracket\mathbb{A}\rrbracket^{c}_{\Delta}\) is a treewidth-bounded set of structures (Lemma 6). \(\texttt{EF}^{*}(\llbracket\mathbb{A}\rrbracket^{c}_{\Delta})\) is treewidth-bounded iff condition (2) of Lemma 32 holds for \((\texttt{EF}^{*}(\texttt{split}(\llbracket\mathbb{A}\rrbracket^{c}_{\Delta})))^{\sharp 3}\).
## 7 Conclusions and Future Work
We have presented a decision procedure for the treewidth boundedness problem in the context of \(\mathsf{SLR}\), a generalization of Separation Logic over relational signatures, interpreted over structures. This procedure makes it possible to define the precise fragment of \(\mathsf{SLR}\) in which every formula has a bound on the treewidth of its models. This fragment is the natural candidate for the definition of a fragment of \(\mathsf{SLR}\) with a decidable entailment problem. Another application is checking that each graph defined by a treewidth-bounded \(\mathsf{SLR}\) formula satisfies \(\mathsf{MSO}\)-definable properties, e.g., Hamiltonicity or \(3\)-colorability.
|
2302.06460 | QCD equation of state at finite chemical potential from unbiased
exponential resummation of the lattice QCD Taylor series | Exponential resummation of the QCD finite-density Taylor series has been
recently introduced as an alternative way of resumming the finite-density
lattice QCD Taylor series. Unfortunately the usual exponential resummation
formula suffers from stochastic bias which must be subtracted before
identifying genuine higher-order contributions. In this paper, we present a new
way of subtracting the stochastic bias at the level of each individual gauge
configuration, up to a certain order of either the Taylor series or the
cumulant expansion, by modifying the argument of the exponential. Retaining the
exponential form of the resummation allows us to also calculate the phase
factor of the fermion determinant on each gauge configuration. We present our
results for the excess pressure, number density, and the average phase factor
and show that the new results contain less stochastic bias and are in better
agreement with the QCD Taylor series compared to the previous exponential
resummation. | Sabarnya Mitra, Prasad Hegde | 2023-02-13T15:40:42Z | http://arxiv.org/abs/2302.06460v2 | # QCD equation of state at finite chemical potential from unbiased
###### Abstract
Exponential resummation of the QCD finite-density Taylor series has been recently introduced as an alternative way of resumming the lattice QCD Taylor series that yields better-converging and more reliable estimates of the QCD Equation of State (QEOS) and related observables at finite temperature and density. Unfortunately, the usual formula for exponential resummation of the lattice data suffers from stochastic bias due to the fact that the derivatives of the fermion matrix are calculated stochastically. It is necessary to subtract this bias in order to identify genuine higher-order contributions. In this paper we present an alternative method of subtracting the stochastic bias up to a certain order of either the Taylor series or the cumulant expansion by modifying the argument of the exponential. In this way, the exponential form of the resummation, and hence the knowledge of the phase factor, is retained. We provide results for the excess pressure, number density and the average phase factor and show that the new results contain much less stochastic bias and show better convergence compared to the usual exponential resummation of the QCD Taylor series.
## I Introduction
The phase diagram of strongly interacting matter as a function of the temperature \(T\) and baryochemical potential \(\mu_{B}\) is of interest to theorists and experimentalists alike [1; 2]. Since the system is non-perturbative except at very large temperatures and chemical potentials, a reliable non-perturbative approach is required for its study. At \(\mu_{B}=0\), such an approach is provided by lattice QCD. In recent years, lattice calculations have provided increasingly precise determinations of several properties of the quark-gluon plasma [3; 4; 5; 6; 8]. Unfortunately however, lattice QCD breaks down at \(\mu_{B}\neq 0\) due to the well-known sign problem [9; 10; 11; 12]. Despite recent progress [13; 14; 15; 16; 17; 18], currently the two most successful approaches in the QCD case are analytical continuation from imaginary to real \(\mu_{B}\)[19; 20] and Taylor expansion of the QCD partition function in the chemical potential \(\mu_{B}\)[4; 6]. Despite their successes however, both methods need to be supplemented in order to obtain reliable results beyond \(\hat{\mu}_{B}\equiv\mu_{B}/T\simeq 1\)-2 e.g. by combining the results at imaginary \(\mu_{B}\) with an alternative expansion scheme [21] or by resumming the QCD Taylor series through the use of Pade resummation [7; 8; 22; 23].
An alternative way of resumming the QCD Taylor series was recently proposed in Ref. [24]. The calculation of the Taylor coefficients requires the calculation of the \(n\)th \(\hat{\mu}_{B}\) derivative \(D_{n}^{B}\) of \(\ln\det\mathcal{M}\), where \(\hat{\mu}_{B}\equiv\mu_{B}/T\) and \(\det\mathcal{M}\) is the fermion matrix determinant. The contribution of \(D_{n}^{B}\) to all orders of the Taylor series can be shown to be \(\exp\left(D_{n}^{B}\hat{\mu}_{B}^{n}/n!\right)\). Resumming the first \(N\) derivatives in this way leads to an improved estimate for the QCD Equation of State (QEOS) which is equal to the \(N\)th order Taylor estimate plus all the higher order contributions coming from \(D_{1}^{B},\ldots,D_{N}^{B}\). It can be shown that the resummed QEOS converges more quickly than the original Taylor QEOS. Furthermore, since the odd (even) \(D_{n}^{B}\) are purely imaginary (real), the resummation procedure yields an estimate for the complex phase factor of the fermion determinant. The ensemble-averaged phase factor \(\left\langle e^{i\Theta(T,\mu_{B})}\right\rangle\) goes to zero as \(\mu_{B}\) is increased due to which the calculation of the resummed QEOS breaks down. This breakdown is physical and can be related to the presence of poles or branch cut singularities of the QCD partition function in the complex \(\mu_{B}\) plane. The resummation approach also makes it possible to calculate these singularities directly. Some of these advantages have been previously demonstrated through analytical calculations in a low-energy model of QCD [25].
Despite its advantages, one drawback of exponential resummation in the lattice QCD case is the presence of stochastic bias in the calculation of the exponential factor. Given \(N\) independent random estimates \(W_{1},\ldots,W_{N}\) of an observable \(\mathcal{W}\), the unbiased estimate of \(\mathcal{W}^{n}\) is given by
\[\mathrm{UE}\left[\mathcal{W}^{n}\right]=\sum_{i_{1}\neq i_{2}\neq\ldots\neq i_ {n}}\frac{W_{i_{1}}\cdots W_{i_{n}}}{N(N-1)\cdots(N-n+1)}. \tag{1}\]
That is, an unbiased estimate is formed by averaging over products of independent estimates. The contribution of products of the same estimate is the stochastic bias, as in the biased estimate of \(\mathcal{W}^{n}\) e.g.
\[\mathrm{BE}\left[\mathcal{W}^{n}\right]=\left[\frac{1}{N}\sum_{i=1}^{N}W_{i} \right]^{n}. \tag{2}\]
Although stochastic bias vanishes in the limit \(N\to\infty\), for a given finite value of \(N\) it can be comparable to the true value and hence can lead to a wrong estimate in some cases. We shall see in Sec. II that the usual formula for the exponential factor in exponential resummation contains stochastic bias. Subtracting this bias therefore becomes necessary, especially at higher orders and for large values of \(\hat{\mu}_{B}\).
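To make the distinction concrete, the short Python sketch below (purely illustrative: the observable, the noise level and the sample sizes are invented here) evaluates both Eq. (1) and Eq. (2) for a synthetic observable and shows that the biased power of the mean is systematically shifted at finite \(N\), while the unbiased product average is not.

```python
import math
from itertools import permutations

import numpy as np

rng = np.random.default_rng(0)

def biased_estimate(w, n):
    # Eq. (2): the n-th power of the sample mean.
    return np.mean(w) ** n

def unbiased_estimate(w, n):
    # Eq. (1): average of products of n *distinct* estimates (O(N^n); demo only).
    N = len(w)
    total = sum(math.prod(w[i] for i in idx) for idx in permutations(range(N), n))
    return total / math.perm(N, n)

# Synthetic observable with true value mu, estimated from N = 10 noisy samples.
mu, sigma, N, n = 2.0, 3.0, 10, 3
biased, unbiased = [], []
for _ in range(2000):                      # many pseudo-experiments
    w = rng.normal(mu, sigma, N)
    biased.append(biased_estimate(w, n))
    unbiased.append(unbiased_estimate(w, n))

print(f"true value       : {mu**n:.3f}")
print(f"biased   <W>^n   : {np.mean(biased):.3f}")    # shifted by O(1/N) terms
print(f"unbiased UE[W^n] : {np.mean(unbiased):.3f}")  # agrees with mu^n on average
```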
Unlike exponential resummation, stochastic bias is not a problem in the Taylor coefficient calculations because there exist efficient formulas for evaluating the unbiased product of \(n\) operators in \(\mathcal{O}(N)\), rather than \(\mathcal{O}(N^{n})\)
time. Therefore one way to avoid stochastic bias, while still going beyond the Taylor series approach, is to replace exponential resummation by a finite order cumulant expansion [26]. This approach corrects for stochastic bias but at the expense of all-orders resummation 1. Additionally, a knowledge of the phase factor is also lost. Lastly, knowledge of the analytic structure of the QCD partition function is also lost since the cumulant expansion is a finite polynomial and is hence analytic over the entire complex \(\mu_{B}\) plane.
Footnote 1: It is also possible to avoid stochastic bias by calculating the \(D_{n}^{B}\) exactly [27]. However straightforward diagonalization is expensive, even with the reduced matrix formalism, and one is therefore constrained to work with lattices having a smaller aspect ratio than the lattices considered here.
At present, we know of no way of obtaining a fully unbiased estimate of a transcendental function such as the exponential. Nevertheless, in this paper we will present a way of subtracting the stochastic bias to a finite order of either the Taylor or the cumulant expansion while also simultaneously retaining the exponential form of the resummation. The formalism presented here thus manages to preserve all-orders resummation. Moreover, depending upon the order of the calculation and the value of \(\hat{\mu}_{B}\), it may be sufficient if the bias is eliminated up to some finite order \(N\). In that case, our formalism yields results that are close to fully unbiased resummation.
Our paper is organized as follows: In Sec. II, we will outline the construction of the unbiased exponential. We will begin by discussing Taylor expansion, simple (biased) exponential resummation and the cumulant expansion. We will then show how to modify the argument of the exponential so that the stochastic bias is subtracted either to order \(N\) of the Taylor series expansion or to some order \(M\) of the cumulant expansion. The corresponding formulas are Eqs. (13) and (14) and Eqs. (15) and (16) respectively. However, we defer a proof of the unbiasedness of the former to Appendix A. After presenting the formalism, in Sec. III we will present results for the excess pressure and number density for both finite isospin as well as baryochemical potential up to fourth order in the Taylor, biased resummation and unbiased resummation approaches. We will also present results for the average phase factor calculated using biased as well as unbiased resummation. Finally, in Sec. IV, we will summarize our results and conclusions.
## II Unbiased exponential resummation
Consider lattice QCD with \(2+1\) flavors of rooted staggered quarks defined on an \(N_{\sigma}^{3}\times N_{\tau}\) lattice. The partition function \(\mathcal{Z}(T,\mu_{Y})\) at temperature \(T\) and finite chemical potential \(\mu_{Y}\) is given by
\[\mathcal{Z}(T,\mu_{Y})=\int\mathcal{D}Ue^{-S_{G}(T)}\,\det\mathcal{M}(T,\mu_{Y }), \tag{3}\]
where \(S_{G}(T)\) is the gauge action. The chemical potential \(\mu_{Y}\) corresponds to \(\mu_{B}\) for the finite baryochemical potential case (\(Y=B\)), and to \(\mu_{I}\) for the finite isospin chemical potential case (\(Y=I\)). \(\det\mathcal{M}(T,\mu_{Y})\) is the fermion determinant given by
\[\det\mathcal{M}(T,\mu_{Y})=\prod_{f=u,d,s}\big{[}\det\mathcal{M}_{f}(m_{f},T, \mu_{f})\big{]}^{1/4}, \tag{4}\]
with \(m_{u}=m_{d}\) and \(\mu_{u}=\mu_{d}=\mu_{s}=\mu_{B}/3\) for \(Y=B\) and \(\mu_{u}=-\mu_{d}=\mu_{I}\), \(\mu_{s}=0\) for \(Y=I\). The excess pressure \(\Delta P(T,\mu_{Y})\equiv P(T,\mu_{Y})-P(T,0)\) is given by
\[\frac{\Delta P(T,\mu_{Y})}{T^{4}}=\frac{1}{VT^{3}}\,\ln\left[\frac{\mathcal{Z} (T,\mu_{Y})}{\mathcal{Z}(T,0)}\right], \tag{5}\]
where \(V\) is the volume of the system. From the excess pressure, the net baryon or isospin density can be calculated as
\[\frac{\mathcal{N}(T,\mu_{Y})}{T^{3}}=\frac{\partial}{\partial(\mu_{Y}/T)} \left[\frac{\Delta P(T,\mu_{Y})}{T^{4}}\right]. \tag{6}\]
Owing to the sign problem of lattice QCD, it is only possible to evaluate Eq. (5) approximately e.g. by expanding the right hand side in a Taylor series in \(\mu_{Y}\) and retaining terms up to some (even) order \(N\) viz.
\[\frac{\Delta P_{N}^{T}(T,\mu_{Y})}{T^{4}}=\sum_{n=1}^{N/2}\frac{\chi_{2n}^{Y}( T)}{(2n)!}\left(\frac{\mu_{Y}}{T}\right)^{2n}. \tag{7}\]
This is the \(N\)th order Taylor estimate of \(\Delta P(T,\mu_{Y})\). Only even powers of \(\mu_{Y}\) appear in the expansion due to the particle-antiparticle symmetry of the system. The calculation of the Taylor coefficient \(\chi_{2n}^{Y}\) requires the calculation of terms such as \(\langle(D_{1}^{Y})^{a}(D_{2}^{Y})^{b}(D_{3}^{Y})^{c}\cdots\rangle\) where
\[D_{n}^{Y}(T)=\frac{\partial^{n}\ln\det\mathcal{M}(T,\mu_{Y})}{\partial(\mu_{Y} /T)^{n}}\,\bigg{|}_{\mu_{Y}=0}, \tag{8}\]
\(a+2b+3c+\cdots=2n\), and the angular brackets \(\langle\cdot\rangle\) denote the expectation value w.r.t. an ensemble of gauge configurations generated at the same temperature \(T\) but at \(\mu_{Y}=0\)[28; 29]:
\[\big{\langle}\mathcal{O}(T)\big{\rangle}=\frac{\int\mathcal{D}U\,\mathcal{O}( T)\,e^{-S_{G}(T)}\det\mathcal{M}(T,0)}{\int\mathcal{D}U\,e^{-S_{G}(T)}\det \mathcal{M}(T,0)}. \tag{9}\]
A typical lattice QCD calculation starts by calculating the first \(N\) derivatives \(D_{1}^{Y},\ldots,D_{N}^{Y}\) stochastically using \(N_{\rm rv}\sim\mathcal{O}(10^{2}\) - \(10^{3})\) random volume sources per gauge configuration. With these derivatives, it is possible to calculate all the Taylor coefficients up to \(\chi_{N}^{Y}\). The same derivatives however also contribute to higher-order Taylor coefficients through products such as \(D_{N}^{Y}D_{1}^{Y}\), \((D_{N}^{Y})^{2}\)
etc. In fact, as already mentioned in Sec. I, the contribution of \(D_{1}^{Y},\ldots,D_{N}^{Y}\) to all orders in \(\mu_{Y}\) can be resummed into an exponential factor. One can thus write a resummed estimate for \(\Delta P(T,\mu_{Y})\) as
\[\frac{\Delta P_{N}^{R}(T,\mu_{Y})}{T^{4}}=\frac{N_{\tau}^{3}}{N_{\sigma}^{3}}\ln \left[\text{Re}\left\langle\exp\left(\sum_{n=1}^{N}\frac{\overline{D_{n}^{Y}}( T)}{n!}\left(\frac{\mu_{Y}}{T}\right)^{n}\right)\right\rangle\right]. \tag{10}\]
The symbol Re in the above equation stands for the real part of a complex number. It can be proved that the \(D_{n}^{Y}\) are real (imaginary) for \(n\) even (\(n\) odd). Hence the exponential in Eq. (10) is a complex quantity. For real \(\mu_{Y}\), the partition function is real and the imaginary part vanishes when averaged over all gauge configurations. For finite ensembles, the imaginary part can be discarded provided that it is zero within error.
The overline over \(D_{n}^{Y}\) denotes the average of the \(N_{\text{rv}}\) stochastic estimates of \(D_{n}^{Y}\). As \(N_{\text{rv}}\to\infty\), \(\overline{D_{n}^{Y}}\to D_{n}^{Y}\) and Eq. (10) becomes exact. For finite \(N_{\text{rv}}\) however the exponential factor contains stochastic bias, which can be seen as follows: If we expand the exponential in a Taylor series, then we get terms such as \((\overline{D_{m}^{Y}})^{p}(\overline{D_{n}^{Y}})^{q}\cdots\) which contain products of estimates coming from the same random vector and are hence not truly independent estimates. Although stochastic bias can be shown to be suppressed by powers of \(N_{\text{rv}}^{-1}\), it can still be significant depending upon the observable and the value of \(\mu_{Y}/T\). It therefore needs to be subtracted in order to obtain a better estimate of \(\Delta P(T,\mu_{Y})\).
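To make the bookkeeping in Eq. (10) explicit, the following sketch shows one way the biased resummation could be evaluated from the source-averaged derivatives; the array layout, the default lattice sizes and the synthetic numbers in the usage example are our own assumptions and are not taken from the HotQCD data.

```python
import math
import numpy as np

def resummed_pressure_biased(Dbar, mu_hat, Ntau=8, Nsigma=32):
    """Biased exponential resummation, Eq. (10).

    Dbar   : complex array of shape (N, n_conf); Dbar[n-1, c] holds the
             source-averaged derivative on configuration c (real for even n,
             purely imaginary for odd n).
    mu_hat : value of mu_Y / T.
    """
    N = Dbar.shape[0]
    weights = np.array([mu_hat**n / math.factorial(n) for n in range(1, N + 1)])
    argument = weights @ Dbar   # sum_n Dbar_n mu^n / n!, one value per configuration
    expo = np.exp(argument)     # complex; the phase is kept explicitly
    return (Ntau**3 / Nsigma**3) * np.log(np.mean(expo).real)

# Toy usage with fabricated numbers (N = 4 derivatives, 1000 configurations).
rng = np.random.default_rng(1)
n_conf = 1000
Dbar = np.empty((4, n_conf), dtype=complex)
for n in range(1, 5):
    vals = rng.normal(0.5 / n, 0.2, n_conf)
    Dbar[n - 1] = vals if n % 2 == 0 else 1j * vals   # odd derivatives are imaginary
print(resummed_pressure_biased(Dbar, mu_hat=1.0))
```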
Stochastic bias is not an issue in the calculation of the Taylor coefficients, although such products also appear there, because there exist formulas for efficiently evaluating the unbiased estimate of **finite** products of the derivatives [26; 30]. Taking advantage of this, one way of avoiding stochastic bias is by expanding Eq. (10) in a cumulant expansion and retaining the first \(M\) terms viz.
\[\frac{\Delta P_{N,M}^{C}(T,\mu_{Y})}{T^{4}} =\frac{N_{\tau}^{3}}{N_{\sigma}^{3}}\sum_{m=1}^{M}\text{Re}\left[ \frac{\mathcal{K}_{m}\left(X_{N}^{Y}(T,\mu_{Y})\right)}{m!}\right],\] \[X_{N}^{Y}(T,\mu_{Y}) =\sum_{n=1}^{N}\frac{D_{n}^{Y}(T)}{n!}\left(\frac{\mu_{Y}}{T} \right)^{n}. \tag{11}\]
The first four cumulants are given by
\[\mathcal{K}_{1}(X_{N}^{Y}) =\langle X_{N}^{Y}\rangle,\] \[\mathcal{K}_{2}(X_{N}^{Y}) =\langle(X_{N}^{Y})^{2}\rangle-\langle X_{N}^{Y}\rangle^{2},\] \[\mathcal{K}_{3}(X_{N}^{Y}) =\langle(X_{N}^{Y})^{3}\rangle-3\,\langle(X_{N}^{Y})^{2}\rangle\langle X_{N}^{Y}\rangle+2\,\langle X_{N}^{Y}\rangle^{3},\] \[\mathcal{K}_{4}(X_{N}^{Y}) =\langle(X_{N}^{Y})^{4}\rangle-4\,\langle(X_{N}^{Y})^{3}\rangle\langle X_{N}^{Y}\rangle-3\,\langle(X_{N}^{Y})^{2}\rangle^{2}+12\,\langle(X_{N}^{Y})^{2}\rangle\langle X_{N}^{Y}\rangle^{2}-6\,\langle X_{N}^{Y}\rangle^{4}. \tag{12}\]
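For reference, a minimal sketch of how Eqs. (11) and (12) could be evaluated from per-configuration samples of \(X_{N}^{Y}\) is given below; the input array and the default lattice sizes are placeholders.

```python
import math
import numpy as np

def cumulants(x, M=4):
    """First M (<= 4) cumulants of Eq. (12), with x holding one (complex)
    sample of X_N^Y per gauge configuration."""
    m = [np.mean(x**p) for p in range(5)]              # raw moments <X^p>, p = 0..4
    k = [m[1],
         m[2] - m[1]**2,
         m[3] - 3*m[2]*m[1] + 2*m[1]**3,
         m[4] - 4*m[3]*m[1] - 3*m[2]**2 + 12*m[2]*m[1]**2 - 6*m[1]**4]
    return k[:M]

def pressure_cumulant(x, M=4, Ntau=8, Nsigma=32):
    """Cumulant-expansion estimate of the excess pressure, Eq. (11)."""
    return (Ntau**3 / Nsigma**3) * sum((km / math.factorial(j + 1)).real
                                       for j, km in enumerate(cumulants(x, M)))
```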
However, as we have already noted, with this approach both all-orders resummation as well as knowledge of the phase factor are lost. Therefore in this paper, instead of expanding the resummed pressure we propose to modify the argument of the exponential factor so that the stochastic bias is subtracted up to a certain order of either the Taylor or the cumulant expansion. Although the bias is subtracted on a configuration-by-configuration basis, the resulting expression for \(\Delta P(T,\mu_{Y})\) too can be shown to be free of stochastic bias up to the same order (Appendix A).
We begin with the Taylor series case first. The analog of Eq. (10), but with the exponential unbiased to \(\mathcal{O}(\mu_{Y}^{N})\), is achieved by replacing \(\overline{D_{n}^{Y}}(T)\) by \(\mathcal{C}_{n}^{Y}(T)\) i.e.
\[\frac{\Delta P_{N}^{R(\text{unb})}(T,\mu_{Y})}{T^{4}}=\frac{N_{\tau}^{3}}{N_{ \sigma}^{3}}\,\ln\bigg{[}\text{Re}\left\langle\exp\left(\sum_{n=1}^{N}\frac{ \mathcal{C}_{n}^{Y}(T)}{n!}\left(\frac{\mu_{Y}}{T}\right)^{n}\right)\right\rangle \bigg{]}, \tag{13}\]
where the \(\mathcal{C}_{n}^{Y}(T)\) for \(1\leq n\leq 4\) are given by
\[\mathcal{C}_{1}^{Y} =\overline{D_{1}^{Y}},\] \[\mathcal{C}_{2}^{Y} =\overline{D_{2}^{Y}}+\left(\overline{(D_{1}^{Y})^{2}}-\left(\overline{D_{1}^{Y}}\right)^{2}\right),\] \[\mathcal{C}_{3}^{Y} =\overline{D_{3}^{Y}}+3\left(\overline{D_{2}^{Y}D_{1}^{Y}}-\overline{D_{2}^{Y}}\,\overline{D_{1}^{Y}}\right)+\left(\overline{(D_{1}^{Y})^{3}}-3\,\overline{(D_{1}^{Y})^{2}}\,\overline{D_{1}^{Y}}+2\,\left(\overline{D_{1}^{Y}}\right)^{3}\right),\] \[\mathcal{C}_{4}^{Y} =\overline{D_{4}^{Y}}+3\left(\overline{(D_{2}^{Y})^{2}}-\left(\overline{D_{2}^{Y}}\right)^{2}\right)+4\left(\overline{D_{3}^{Y}D_{1}^{Y}}-\overline{D_{3}^{Y}}\,\overline{D_{1}^{Y}}\right)+6\left(\overline{D_{2}^{Y}(D_{1}^{Y})^{2}}-\overline{D_{2}^{Y}}\,\overline{(D_{1}^{Y})^{2}}\right)-3\left(\overline{(D_{1}^{Y})^{2}}\right)^{2}\] \[\quad-12\left(\overline{D_{2}^{Y}D_{1}^{Y}}\,\overline{D_{1}^{Y}}-\overline{D_{2}^{Y}}\left(\overline{D_{1}^{Y}}\right)^{2}\right)+\overline{(D_{1}^{Y})^{4}}-4\,\overline{(D_{1}^{Y})^{3}}\,\overline{D_{1}^{Y}}+12\,\overline{(D_{1}^{Y})^{2}}\left(\overline{D_{1}^{Y}}\right)^{2}-6\left(\overline{D_{1}^{Y}}\right)^{4},\quad\text{etc.} \tag{14}\]
The first term in each equation is just \(\overline{D_{n}^{Y}}\). The remaining terms are the "counterterms" that are added
to subtract the stochastic bias. A term such as \(\overline{D_{2}^{Y}D_{1}^{Y}}\) in the above equations stands for the unbiased product of \(D_{2}^{Y}\) and \(D_{1}^{Y}\). Similarly, \(\overline{(D_{1}^{Y})^{2}}\) represents the unbiased square of \(D_{1}^{Y}\). By contrast, a term such as \(\left(\overline{D_{1}^{Y}}\right)^{2}\) represents the biased square, i.e. the square of the average of \(D_{1}^{Y}\). The exponential constructed in this way is unbiased to \(\mathcal{O}(\mu_{Y}^{N})\). We will prove in Appendix A that both the Taylor expansion of the exponential as well as the excess pressure calculated from it (Eq. (13)) are free of stochastic bias up to the same order.
As already noted, the first term in each \(\mathcal{C}_{n}^{Y}\) is simply \(\overline{D_{n}^{Y}}\). In the limit \(N_{\rm rv}\to\infty\), this term approaches the correct value of \(D_{n}^{Y}\). The rest of the terms for each \(\mathcal{C}_{n}^{Y}\) also cancel each other out as \(N_{\rm rv}\to\infty\), since in that limit the distinction between biased and unbiased products vanishes. Thus \(\mathcal{C}_{n}^{Y}\to D_{n}^{Y}\) as \(N_{\rm rv}\to\infty\) and hence Eq. (13) too represents an all-orders resummation of the derivatives \(D_{1}^{Y},\ldots,D_{N}^{Y}\), the only difference this time being that the stochastic bias is eliminated to \(\mathcal{O}(\mu_{Y}^{N})\).
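The counterterms themselves are built from unbiased products of the per-source estimates. The sketch below is our own illustration (the variable names are not from any released code); it implements \(\mathcal{C}_{1}^{Y}\), \(\mathcal{C}_{2}^{Y}\) and \(\mathcal{C}_{3}^{Y}\) on a single gauge configuration, while \(\mathcal{C}_{4}^{Y}\) follows the same pattern with one more level of power sums.

```python
import numpy as np

def ub_prod2(a, b):
    """Unbiased product of two operators estimated with the same random sources:
    the average of a_i * b_j over i != j."""
    N = len(a)
    return (a.sum() * b.sum() - np.sum(a * b)) / (N * (N - 1))

def ub_pow3(a):
    """Unbiased cube: the average of a_i a_j a_k over distinct i, j, k."""
    N = len(a)
    s1, s2, s3 = a.sum(), np.sum(a**2), np.sum(a**3)
    return (s1**3 - 3*s1*s2 + 2*s3) / (N * (N - 1) * (N - 2))

def counterterms(d1, d2, d3):
    """C_1, C_2, C_3 of Eq. (14) on one gauge configuration; d1, d2, d3 are the
    arrays of N_rv stochastic estimates of D_1, D_2 and D_3 (complex allowed)."""
    m1, m2, m3 = d1.mean(), d2.mean(), d3.mean()
    C1 = m1
    C2 = m2 + (ub_prod2(d1, d1) - m1**2)
    C3 = (m3
          + 3 * (ub_prod2(d2, d1) - m2 * m1)
          + (ub_pow3(d1) - 3 * ub_prod2(d1, d1) * m1 + 2 * m1**3))
    return C1, C2, C3
```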
Although Eq. (13) is an improvement over Eq. (10), it is possible to do still better. In a typical lattice QCD calculation, each stochastic estimate of \(D_{1}^{Y},\ldots,D_{N}^{Y}\) is constructed using the same random source. Therefore, the different stochastic estimates can be actually thought of as different estimates of the operator \(X_{N}^{Y}(T,\mu_{Y})\), where \(X_{N}^{Y}(T,\mu_{Y})\) is as given in Eq. (11). It is possible to write a version of Eq. (10) in which the bias is eliminated up to a certain power of \(X_{N}^{Y}\) itself, by writing
\[\frac{\Delta P_{N,M}^{R(\rm unb)}(T,\mu_{Y})}{T^{4}}=\frac{N_{\tau}^{3}}{N_{\sigma}^{3}}\,\ln\left[{\rm Re}\left\langle\exp\left(\sum_{m=1}^{M}\frac{\mathcal{L}_{m}(X_{N}^{Y}(T,\mu_{Y}))}{m!}\right)\right\rangle\right], \tag{15}\]
where
\[\mathcal{L}_{1} = \overline{X_{N}^{Y}},\] \[\mathcal{L}_{2} = \overline{(X_{N}^{Y})^{2}}-\left(\overline{X_{N}^{Y}}\right)^{2},\] \[\mathcal{L}_{3} = \overline{(X_{N}^{Y})^{3}}-3\,\overline{\left(X_{N}^{Y}\right)} \,\left(\overline{(X_{N}^{Y})^{2}}\right)+2\,\overline{(X_{N}^{Y})}^{3},\] \[\mathcal{L}_{4} = \overline{(X_{N}^{Y})^{4}}-4\left(\overline{(X_{N}^{Y})^{3}} \right)\,\left(\overline{X_{N}^{Y}}\right)-3\left(\overline{(X_{N}^{Y})^{2}} \right)^{2}+12\left(\overline{X_{N}^{Y}}\right)^{2}\,\left(\overline{(X_{N}^{Y })^{2}}\right)-6\left(\overline{X_{N}^{Y}}\right)^{4},\quad\text{etc.} \tag{16}\]
We note that Eqs. (16) resemble the cumulant formulas Eqs. (12), but with two differences:
* The expansion is in the space of all random estimates for a single gauge configuration rather than in the space of all gauge configurations.
* The powers \((X_{N}^{Y})^{p}\) are replaced by their respective unbiased estimates \(\overline{(X_{N}^{Y})^{p}}\).
In the limit \(N_{\rm rv}\to\infty\), the difference between biased and unbiased estimates vanishes. Then the \(\mathcal{L}_{m}\) are just the cumulants of \(X_{N}^{Y}\) over the set of all random estimates for a single gauge configuration. In the double limit \(M\to\infty\) and \(N_{\rm rv}\to\infty\) therefore, the argument of the exponential in Eq. (15) is just the cumulant expansion of \(\ln\overline{e^{X_{N}^{Y}}}\). This observation helps to clarify the meaning of bias subtraction: It is the systematic (order-by-order) replacement of the incorrect (biased) estimate \(e^{\overline{X_{N}^{Y}}}\) of the exponential factor by the correct estimate \(\overline{e^{X_{N}^{Y}}}\).
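A corresponding sketch for Eq. (15), truncated at \(M=2\) for brevity, is given below; here x[c, r] would hold the r-th stochastic estimate of \(X_{N}^{Y}\) on configuration c, built as in Eq. (11), and the lattice sizes are again placeholders.

```python
import numpy as np

def unbiased_resummed_pressure(x, Ntau=8, Nsigma=32):
    """Eq. (15) with M = 2: x is a complex array of shape (n_conf, N_rv)."""
    n_rv = x.shape[1]
    m1 = x.mean(axis=1)                                # \bar{X} per configuration
    ub_x2 = (x.sum(axis=1)**2 - np.sum(x**2, axis=1)) / (n_rv * (n_rv - 1))
    L1, L2 = m1, ub_x2 - m1**2                         # first two terms of Eq. (16)
    expo = np.exp(L1 + L2 / 2.0)
    return (Ntau**3 / Nsigma**3) * np.log(np.mean(expo).real)
```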
In addition to the excess pressure and the number density, we have also presented results for the average phase factor. As already mentioned, the \(D_{n}^{Y}\) are real (imaginary) for even \(n\) (for odd \(n\)) and hence the exponential factor is complex even when \(\mu_{B}\) is real 2. Although its imaginary part vanishes, the real part still receives a contribution \(\cos\Theta(T,\mu_{B})\) at \(\mu_{B}\neq 0\) from the phase of the exponential. The average phase factor \(\langle\cos\Theta(T,\mu_{B})\rangle\) is a measure of the difficulty of the calculation at finite \(\mu_{B}\)3. As \(\mu_{B}\) is increased, \(\langle\cos\Theta(T,\mu_{B})\rangle\to 0\) and the rapid fluctuations of the phase factor cause the calculation to break down. This happens as \(\mu_{B}\to|\mu_{B}^{c}|\), where \(\mu_{B}^{c}\) is the nearest singularity to \(\mu_{B}=0\) of the QCD partition function in the complex \(\mu_{B}\) plane. Unlike a finite Taylor series therefore, the resummation calculation cannot be carried out to arbitrarily large \(\mu_{B}\).
Footnote 2: For finite isospin, the odd \(D_{n}^{Y}\) are identically zero and hence the exponential is real for both real and imaginary \(\mu_{I}\). For complex \(\mu_{I}\) however, the phase factor will also be complex for the isospin case.
Footnote 3: This is true not just for the baryochemical potential \(\mu_{B}\) but for any chemical potential for which there is a sign problem e.g. \(\mu_{S}\).
Similar to the \(D_{n}^{Y}\), it can be shown that the \(\mathcal{C}_{n}^{Y}\) (Eq. (13)) too are real (imaginary) for even (odd) \(n\). Similarly, the \(\mathcal{L}_{m}\) (Eq. (15)) too are real (imaginary) for even (odd) \(m\) when \(\mu_{Y}\) is real. Hence in each case we can define an average phase factor \(\langle\cos\Theta(T,\mu_{Y})\rangle\), where
\(\Theta(T,\mu_{Y})\) is defined as
\[\Theta_{N}^{R}(T,\mu_{Y}) =\text{Im}\left[\sum_{n=1}^{N}\frac{D_{n}^{Y}(T)}{n!}\left(\frac{ \mu_{Y}}{T}\right)^{n}\right], \tag{17a}\] \[\Theta_{N}^{R(\text{unb})}(T,\mu_{Y}) =\text{Im}\left[\sum_{n=1}^{N}\frac{\mathcal{C}_{n}^{Y}(T)}{n!} \left(\frac{\mu_{Y}}{T}\right)^{n}\right],\] (17b) \[\Theta_{N,M}^{R(\text{unb})}(T,\mu_{Y}) =\text{Im}\left[\sum_{n=1}^{M}\frac{\mathcal{L}_{n}(X_{N}^{Y}(T, \mu_{Y}))}{n!}\right]. \tag{17c}\]
The symbol Im stands for the imaginary part of the argument. For real \(\mu_{Y}\), the imaginary part is simply the sum over odd \(n\). However the above formulas are also valid for the more general case when \(\mu_{Y}\) is complex. Note that it is not possible to define a phase factor for the Taylor series. An approximation to the phase factor may be constructed by Taylor-expanding Eqs. (17) to a particular order. Unlike the resummation case however, this phase factor diverges to \(\pm\infty\) as \(\mu_{Y}\) is increased and hence it cannot be used to determine the breakdown of the calculation.
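Given the counterterms, the unbiased phase factor of Eq. (17b) takes only a few lines; the sketch below assumes real \(\mu_{Y}\) and the same array layout as in the sketches above.

```python
import math
import numpy as np

def average_phase_factor(C, mu_hat):
    """<cos Theta> from Eq. (17b); C[n-1, c] holds the counterterm C_n^Y on
    configuration c (complex), for n = 1 ... N and real mu_hat."""
    N = C.shape[0]
    weights = np.array([mu_hat**n / math.factorial(n) for n in range(1, N + 1)])
    theta = (weights @ C).imag          # per-configuration phase
    return np.mean(np.cos(theta))
```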
## III Results
To verify our formalism, we made use of the data generated by the HotQCD collaboration 4 for its ongoing Taylor expansion calculations of the finite density QEOS, chiral crossover temperature and conserved charge cumulants at finite density [5; 7; 8]. For these calculations, \(\mathcal{O}(10^{4}\) - \(10^{6})\) 2+1-flavor gauge configurations were generated in the temperature range \(135\) MeV \(\lesssim\ T\lesssim 176\) MeV using a Symanzik-improved gauge action and the Highly Improved Staggered Quark (HISQ) fermion action with \(N_{\tau}=8\), \(12\) and \(16\) and \(N_{\sigma}=4N_{\tau}\)[31; 32]. The temperature for each \(N_{\tau}\) was varied by varying the lattice spacing \(a\) through the gauge coupling \(\beta\), and for each lattice spacing the bare light and strange quark masses \(m_{l}(a)\) and \(m_{s}(a)\) were also tuned so that the pseudo-Goldstone pion and kaon masses were equal to the physical pion and kaon masses respectively. The scale was determined using both the Sommer parameter \(r_{1}\) and the kaon decay constant \(f_{K}\). The temperature values quoted in this paper are from the \(f_{K}\) scale.
Footnote 4: A complete description of the gauge ensembles and scale setting can be found in Ref. [6].
To calculate the Taylor coefficients, on each gauge configuration the first eight derivatives \(D_{1}^{f},\ldots,D_{8}^{f}\) for each quark flavor \(f\) were estimated stochastically using 2000 Gaussian random volume sources for \(D_{1}^{f}\) and 500 sources for the higher derivatives for both \(\mu_{B}\) and \(\mu_{I}\). The exponential-\(\mu\) formalism [35] was used to calculate the first four derivatives while the linear-\(\mu\) formalism [33; 34] was used to calculate the higher derivatives. Using this data, we calculated the excess pressure and number density for both real and imaginary baryon as well as isospin chemical potentials \(\mu_{B}\) and \(\mu_{I}\), in the range \(0\leqslant|\mu_{B,I}/T|\leqslant 2\), using 100k (20k) configurations per temperature for the baryon (isospin) case. Our results were obtained on \(N_{\tau}=8\) lattices for three temperatures viz. \(T\sim 157\), \(176\) and \(135\) MeV. These temperatures were chosen as being approximately equal to \(T_{\text{pc}}\) and \(T_{\text{pc}}\pm 20\) MeV, where \(T_{\text{pc}}=156.5(1.5)\) MeV is the chiral crossover temperature at \(\mu_{B}=0\)[5]. In this paper, we will present results for \(T=135\) and \(176\) MeV, while the \(T=157\) MeV results have been presented elsewhere.
### Results for Finite Isospin Chemical Potential
Before considering the finite \(\mu_{B}\) case, we shall first present our results for the simpler case of finite isospin chemical potential \(\mu_{I}\). For finite \(\mu_{I}\), the fermion determinant is real and there is no sign problem. Hence it is possible to calculate observables for much larger values of the chemical potential compared to the \(\mu_{B}\) case, and it is precisely for these values that bias can become significant. The QCD phase diagram in the \(T\)-\(\mu_{I}\) plane is also a topic of interest in its own right [36; 37; 38], and our formalism could prove useful in future lattice QCD studies based on the Taylor series approach.
We present our second order resummation results for \(\Delta P/T^{4}\) and \(\mathcal{N}/T^{3}\), obtained using both the biased (Eq. (10), red bands) as well as the unbiased estimators (Eq. (13) and Eq. (15), orange circles and black squares respectively), in the top two plots of Fig. 1. We also plot the second and fourth order Taylor expansion results (Eq. (7), blue and green bands) in both the plots for purposes of comparison.
We find that the fourth order Taylor results differ from the second order results for \(|\hat{\mu}_{I}^{2}|\gtrsim 1\). Turning next to the resummation results, we find that the biased resummation results agree well overall with the fourth order Taylor results for both real as well as imaginary chemical potentials. The resummation results were obtained by resumming the derivative \(D_{2}^{f}\) while the fourth order Taylor results also contain contributions from \(D_{4}^{I}\)5. The agreement between these two results would therefore suggest that the latter two derivatives do not contribute significantly for \(0\leqslant|\hat{\mu}_{I}^{2}|\leqslant 4\). Before arriving at this conclusion however, it is necessary to account for the stochastic bias that is present in the results of Eq. (10). In fact, the unbiased resummation results, obtained using either Eq. (13) or Eq. (15), lie in between the second and fourth order Taylor results. Moreover the results from Eq. (13) and Eq. (15) are practically identical, which means that it is sufficient to eliminate bias to \(\mathcal{O}(\mu_{I}^{2})\) for the range of chemical potentials considered here. We conclude that
the derivatives \(D_{3}^{I}\) and \(D_{4}^{I}\) do in fact contribute at fourth order, and that the biased resummation results will approach the unbiased results in the limit \(N_{\rm rv}\to\infty\).
Subtracting bias becomes important at higher orders because the lower order derivatives contribute through higher powers e.g. the derivatives \(D_{1}^{I}\) and \(D_{2}^{I}\) contribute at sixth order via \((D_{1}^{I})^{6}\) and \((D_{2}^{I})^{3}\) respectively. In the lower two plots of Fig. 1, we compare results from fourth order resummations with fourth and sixth order Taylor expansion results. The sixth order results (blue bands) only differ slightly from the fourth order results (green bands) for both \(\Delta P/T^{4}\) as well as \(\mathcal{N}/T^{3}\) over the entire range \(-4\leqslant\hat{\mu}_{I}^{2}\leqslant 4\). By contrast, the biased resummation results (red bands) differ significantly from both fourth and sixth order Taylor results and are in fact nonmonotonic for \(\mathcal{N}/T^{3}\) for imaginary \(\mu_{I}\). Subtracting the bias to \(\mathcal{O}(\mu_{I}^{4})\) (orange circles) yields results that are in very good agreement with the sixth order Taylor result. No further changes result from further subtraction of the bias up to fourth order of the cumulant expansion (black squares).
### Results for Finite Baryon Chemical Potential
The resummed results for the QEOS at finite baryochemical potential \(\mu_{B}\) have been previously presented in Ref. [24]. Those results were obtained using the biased formula Eq. (10), but by using the full set of 2000 independent random estimates for \(D_{1}^{B}\). The use of 2000 stochastic estimates instead of the usual 500 does decrease the stochastic bias, however it does not subtract the contribution to the bias coming from the higher order derivatives. By contrast, the unbiased exponential formulas treat all \(N\) derivatives on an equal footing and subtract all the contributions to the bias up to a certain order. The results we will present here will show that the unbiased exponential is able to achieve a greater reduction of the stochastic bias despite working with only \(N_{\rm rv}=500\) stochastic estimates of the derivatives \(D_{1}^{B},\ldots,D_{N}^{B}\).
We present our results for \(\Delta P(T,\mu_{B})\) and \(\mathcal{N}_{B}(T,\mu_{B})\) in Fig. 2. The upper two plots compare second order resummation results to second and fourth order Taylor expansions while the lower two plots compare fourth order resummation results to fourth and sixth order Taylor expansions. In all four cases, the resummation results were calculated using both the biased (Eq. (10)) as well as the unbiased exponential (Eqs. (13) and (15)).
Focusing on the upper two plots, we find that although the biased resummation results calculated using \(N_{\rm rv}=500\) random sources (red squares) agree with the second order Taylor results (magenta bands) for \(\Delta P(T,\mu_{B})\) for real \(\mu_{B}\), in all other cases they differ from the second and even from the fourth order Taylor results (orange bands). When the same biased results are recalculated using \(N_{\rm rv}=2000\) random estimates (blue triangles) for \(D_{1}^{B}\) this difference decreases, proving that the discrepancy is in fact due to stochastic bias.
Figure 1: \(\Delta P(T,\mu_{I})/T^{4}\) and \(\mathcal{N}(T,\mu_{I})/T^{3}\), calculated for \(T=157\) MeV using second and fourth order biased (red bands) and unbiased resummations. Unbiased resummation results in cumulant (chemical potential) bases are plotted as black squares (orange circles); different ordered Taylor expansion results are plotted in green and blue bands respectively.
In fact, even for \(\Delta P_{2}^{R}(T,\mu_{B})\) for real \(\mu_{B}\), the results recalculated this way move away from the second order results and instead agree with the fourth order Taylor results. By contrast the unbiased resummation results always agree with the fourth order Taylor expansion results, even though the resummation was only carried out for the derivative \(D_{2}^{B}\). Also, the agreement between the results of Eq. (13) (green diamonds) and Eq. (15) (black inverted triangles) proves that it is sufficient to eliminate bias to \(\mathcal{O}(\hat{\mu}_{B}^{2})\) for the two observables and for the range of chemical potentials considered here. It is also clear from the figures that the biased results will approach the unbiased results as \(N_{\rm rv}\) is increased. Note however that the latter were calculated using only \(N_{\rm rv}=500\) stochastic estimates. Hence the unbiased results clearly converge faster to the \(N_{\rm rv}\to\infty\) limit as compared to the biased results. Similar conclusions also obtain in the case of fourth order resummation, as is seen from the lower two plots of Fig. 2.
Although Eqs. (13) or (15) are more complicated to evaluate than Eq. (10), this calculational cost is small compared to the cost of calculating and storing 2000 random volume source estimates of \(D_{1}^{B}\) for each of \(10^{5}\) - \(10^{6}\) gauge configurations. Similarly, while it is also possible to avoid stochastic bias by computing the \(D_{n}^{B}\) exactly, the method is expensive and one is therefore constrained to work with lattices having a smaller aspect ratio than the lattices considered in this study [27].
Figure 3: \(\Delta P(T,\mu_{B})/T^{4}\) and \(\mathcal{N}(T,\mu_{B})/T^{3}\) calculated at fourth order in \(\mu_{B}\) for all the three working temperatures \(T=135\), \(157\) and \(176\) MeV presented in red, blue and black colors respectively.
Figure 2: \(\Delta P(T,\mu_{B})/T^{4}\) and \(\mathcal{N}(T,\mu_{B})/T^{3}\), calculated for \(T=157\) MeV using second and fourth order biased and unbiased resummations and second, fourth and sixth order Taylor expansions. The Taylor expansion results are plotted as purple and orange bands, whereas unbiased resummation results for cumulant (chemical potential) bases are presented as black inverted triangles (green diamonds). The biased results for 500 and 2000 random sources are shown as red squares and blue triangles respectively.
For these reasons, we believe that it is advisable to always use the unbiased exponential for exponential resummation of the Taylor series.
In Fig. 3, we plot the fourth order results for \(\Delta P/T^{4}\) and \(\mathcal{N}/T^{3}\) for the baryochemical case for all three temperatures viz. \(T=135\), \(157\) and \(176\) MeV. We see that for each temperature, the unbiased resummation results agree quite well with the Taylor series results up to around \(\hat{\mu}_{B}\lesssim 1.1\)-\(1.2\). Beyond that point however, the \(\mu_{B}\) resummation results break down at a value of \(\hat{\mu}_{B}\) that depends upon the temperature. By contrast, the Taylor series calculations can be extended to arbitrarily large chemical potentials. The breakdown of the resummation results occurs as \(\hat{\mu}_{B}\rightarrow|\hat{\mu}_{B}^{c}|\), which is the value of the chemical potential for which the average phase factor \(\langle\cos\Theta(T,\mu_{B})\rangle\) vanishes. Beyond \(|\hat{\mu}_{B}^{c}|\), the pressure results become indeterminate, while the baryon density results show deviations from the Taylor results as well as large fluctuations about the mean value. We confirm this correlation between the breakdown and the vanishing of the phase factor for \(T=157\) MeV in Fig. 4. We plot the fourth order phase factor calculated using each of the three definitions of \(\Theta(T,\mu_{B})\) in Eq. (17). We only plotted the fourth order results since our second order results were practically identical to the fourth order results for all three cases. On the other hand, there is a clear difference between the results obtained using the biased and the unbiased formulas, with the former going to zero around \(\hat{\mu}_{B}\sim 1.5\) while the latter goes to zero around \(\hat{\mu}_{B}\sim 1.2\)-\(1.3\). This difference was observed for all three temperatures that we studied i.e. in each case the unbiased phase factor vanished at a smaller value of \(\hat{\mu}_{B}\) than the biased phase factor. These results prove that it is necessary to first account for stochastic bias when studying e.g. the location of the closest singularity to \(\mu_{B}=0\) in the complex \(\mu_{B}\) plane.
## IV Discussion and outlook
In this paper, we have shown how the stochastic bias present in the estimate of the exponential factor can be subtracted up to a finite order in either the chemical potential or in the cumulant expansion by modifying the argument of the exponential. The stochastic bias is subtracted at the level of each individual configuration. The resulting formulas yield more accurate estimates of the QCD Equation of State especially at larger chemical potentials. Our formalism also allows us to calculate the average phase factor. From the vanishing of the phase factor, we also obtain an estimate of the distance to the nearest singularity of the QCD partition function in the complex \(\mu_{B}\) plane.
Exponential resummation provides a way to directly calculate the QCD partition function \(\mathcal{Z}(T,\mu_{B})\) itself. This makes it possible to calculate the singularities of \(\mathcal{Z}(T,\mu_{B})\) and hence determine the location of poles or branch singularities that could correspond to the location of the much sought after QCD critical point [40; 41; 42]. This has been done previously [24; 25], but we hope to repeat these calculations in the future using our new formalism in order to obtain more reliable estimates of these observables.
###### Acknowledgements.
We thank the members of the HotQCD collaboration for their inputs and for helpful discussions, as well as for the permission to use their data from the Taylor expansion calculations. The computations in this work were performed using the GPU cluster at Bielefeld University, Germany. We thank the Bielefeld HPC.NRW team for their help and support.
|
2308.13750 | Quantifying and Documenting Inequity in PhD-granting Mathematical
Sciences Departments in the United States | We provide an example of the application of quantitative techniques, tools,
and topics from mathematics and data science to analyze the mathematics
community itself in order to quantify and document inequity in our discipline.
This work is a contribution to the new and growing interdisciplinary field
recently termed "mathematics of Mathematics," or "MetaMath." Using data about
PhD-granting institutions in the United States and publicly available funding
data from the National Science Foundation, we highlight inequalities in
departments at U.S. institutions of higher education that produce PhDs in the
mathematical sciences. Specifically, we determine that a small fraction of
mathematical sciences departments receive a large majority of federal funding
awarded to support mathematics in the United States. Additionally, we identify
the extent to which women faculty members are underrepresented in mathematical
sciences PhD-granting institutions in the United States. We also show that this
underrepresentation of women faculty is even more pronounced in departments
that received more federal grant funding. | Ron Buckmire, Carrie Diaz Eaton, Joseph E. Hibdon, Jr., Jakini Kauba, Drew Lewis, Omayra Ortega, José L. Pabón, Rachel Roca, Andrés R. Vindas-Meléndez | 2023-08-26T03:24:05Z | http://arxiv.org/abs/2308.13750v3 | Quantifying Inequities and Documenting Elitism in PhD-granting Mathematical Sciences Departments in the United States
###### Abstract
In this paper we provide an example of the application of quantitative techniques, tools, and topics from mathematics and data science to analyze the mathematics community itself in order to quantify inequity and document elitism. This work is a contribution to the new and growing field recently termed "mathematics of Mathematics," or "MetaMath." Our goal is to rebut, rebuke, and refute the idea that the mathematical sciences in the United States is a meritocracy by using data science and quantitative analysis. Using research and data about PhD-granting institutions in the United States, we quantify, document, and highlight inequities in departments at U.S. institutions of higher education that produce PhDs in the mathematical sciences. Specifically, we determine that a small fraction of mathematical sciences departments receive a large majority of federal funding awarded to support mathematics in the United States and that women are dramatically underrepresented in these departments. Additionally, we quantify the extent to which women are underrepresented in almost all mathematical sciences PhD-granting institutions in the United States.
## 1 Introduction
The kinds of problems mathematics and data science can be used to solve are extremely varied, running the gamut from theoretical problems with no foreseen applications to those that are immediately applicable to important real-world phenomena like climate change, epidemiology, and social networks. There is a rapidly growing body of work using tools from the mathematical sciences to analyze the discipline of the mathematical sciences itself that has recently been described as the "mathematics of Mathematics" or "MetaMath" [5]. This term was chosen as an allusion to the broader field of "science of Science" [11] that uses mathematical tools to analyze science as a whole as well as its individual disciplines.
In this paper, we present a contribution to the mathematics of Mathematics to introduce readers to the kinds of questions that can be asked and the types of tools and techniques that can be used in this area. We are inspired by the work of Wapman, Zhang, Larremore, and Clauset [30] published in _Nature_ in 2022 that quantified hierarchy in faculty hiring and retention in a wide array of academic disciplines in the United States by analyzing a very large dataset of nearly 300,000 faculty members in over 10,000 departments at almost 400 PhD-granting institutions from 2011 to 2020. Additionally, using mathematical tools from
network science [19], Wapman et al. produced a "prestige ranking" for every department at every PhD-granting institution in their dataset. The research we present in this article builds on and leverages the ideas and analyses presented in Wapman et al. [30], but focuses on specific disciplines in the mathematical sciences, namely, mathematics, statistics, and operations research.
While Wapman et al. analyzed data to reveal the existence of hierarchies in faculty hiring in academia overall [30], we analyze their data and data from other sources to document elitism and quantify inequities in the mathematical sciences; that is, to highlight multiple ways that our discipline fails to be a meritocracy. In this paper, we use the definition of meritocracy as a system in which individuals obtain opportunities based solely on abilities.
The hypothesis that mathematics is a meritocracy is testable through an analysis of the data that we have access to about faculty at PhD-granting institutions and federal funding of the mathematical sciences. We assume that mathematical talent is distributed equally among all groups of people who do mathematics, in particular among all gender identities. If mathematics were a meritocracy, we would expect that the data would demonstrate that the gender distribution of faculty (and/or other identities) would not be correlated to the prestige status and to the funding amounts at mathematical sciences PhD-granting institutions in the United States. However, our analysis of the data will show that the opposite is true--in other words, gender and federal funding allocations _are not_ independent of the prestige status of departments. Therefore mathematics _is not_ a meritocracy. Since approximately 30% of PhDs in mathematics have been women over the last decade [3], it is reasonable to expect that if mathematics were indeed a meritocracy, on average, 30% of faculty at mathematical sciences PhD-granting institutions would be women. However, it is very rare for any institution, program, or activity in the mathematical sciences community to have one third of the participants be women.
In this paper we argue that merit is not the sole or even primary factor relating to prestige by quantifying the impacts of other factors such as (gender) identity. We also examine the allocation of grant funding, finding that there is a disproportionate concentration of grant funds at allegedly prestigious institutions. Thus, the grant allocation process serves to amass resources at departments employing fewer women.
## 2 Existing work on inequity and elitism in mathematics
In this section, we provide a short survey of selected recent work that uses tools, topics, and techniques from mathematics and data sciences to describe, document, and display inequities in mathematics and data science. We organize our discussion of the literature in this area into three topics: 1) analysis of the (lack of) diversity in the mathematical sciences; 2) the myth of meritocracy in mathematics; and 3) evidence of elitism in the mathematical sciences. For a survey of the areas discussed here, as well as the broader field of the mathematics of Mathematics, we refer the reader to the recent paper by Buckmire et al. [5].
### Diversity and Demographics of the Mathematics Community.
It is well-documented that women are underrepresented at all levels in the mathematics community, and that their representation declines monotonically as they progress through the academic system. Between 2013 to 2018, women made up approximately 42.6% of
recipients of bachelor's degrees in mathematics [3]. However, in 2016-2017 only 29% of doctorate recipients in mathematical sciences were women [17]. In 2019, 28% of hires in doctoral-granting mathematical sciences departments were women [18].
This underrepresentation of women in the mathematical sciences is even more pronounced when one considers their representation in areas generally regarded as prestigious. For example, the National Science Foundation reports that 22.2% of principal investigators (PIs) that submitted a NSF proposal to the Mathematics and Physical Sciences (MPS) directorate in fiscal year 2021 were women and 26.0% of the PIs who were awarded an NSF grant in the MPS directorate were women [15]. Topaz et al. [27] found that women comprised only 15% of tenure-stream faculty positions in doctoral-granting mathematical sciences departments in the United States.
Vitulli [28] examined the representation of women being hired by mathematics departments, based upon data from annual surveys conducted by the American Mathematical Society (AMS). Prior to 20121, the AMS reported this data by dividing departments into three Groups based on the reputational rankings in the 1995 (or previously, 1982) National Research Council report on doctoral departments [8, 9]. Group I contained the highest rated 25.9% of the departments. Group II was the next highest 30.3% while Group III contained the remaining departments. Vitulli found that from 1991-2011, 20.5% of the faculty hired by Group I departments were women, while 26.3% of the faculty hired by the remaining departments were women.
Footnote 1: The AMS changed how they report this data in 2012, as the newest National Research Council report no longer provided a total ordering of departments, instead reporting multiple measures for each department.
### The Myth of Meritocracy and Existence of Hierarchy in Mathematics and Science.
Examples of hierarchical structures [19] include the existence of institutions which have greater prestige [22] or are more likely to have students go on to obtain faculty positions at more prestigious institutions. Recently, researchers have used available data on faculty positions at institutions of higher education in the United States to document the existence of hierarchies in faculty hiring networks in academia. Clauset et al. [7] demonstrated the existence of hierarchy in faculty hiring in a study involving departments in computer science, business, and history. Wapman et al. [30] expanded this analysis to cover 295,089 faculty in 10,612 departments at 368 PhD-granting institutions and all academic disciplines for the years 2011-2020. FitzGerald et al. [10] built upon Wapman et al.'s research by using data from the Mathematics Genealogy Project (MGP) to restrict their analysis to mathematics faculty. These results demonstrate that hierarchies exist in faculty hiring networks.
Two other prominent societal hierarchical structures present in mathematics are gender and class. Researchers have analyzed data describing different aspects of academic activity and demonstrated ways gender can negatively mediate opportunity for advancement, participation, and achievement in science and mathematics [20, 24, 26]. A large study investigating class backgrounds in academia by Morgan et al. [21] found that faculty are much more likely than the general population to have a parent with a PhD, with the effect being even more pronounced at "elite" institutions.
### Evidence of Elitism in Mathematics.
There are multiple research articles using mathematical tools and techniques that provide examples of the pervasive elitism in the mathematics community. Topaz et al. [27] analyzed the editorial boards of 435 mathematical science journals and found that women accounted for a mere 8.9% of editorial positions. Brisbin and Whitcher [2] found that women are underrepresented in subfields of mathematics that are viewed as having more prestige by analyzing almost a million papers in the mathematical sciences uploaded to the arXiv preprint repository. Another way elitism is perpetuated in the mathematics community is via the selection of a self-perpetuating cadre of "elite" personnel for prestigious prizes [6]. Schlenker [25] notes that fields with applications to the social or physical sciences such as numerical analysis, mathematical modeling, or statistics seem to be viewed as having low status, and this lack of prestige accompanies the low representation of researchers in these fields among elite prizewinners.
## 3 Data and Methods
In this section, we will describe the data and explain the methodology used to obtain our results. We utilized two datasets, one sourced from Wapman et al. [29] and the second from awards made by the Division of Mathematical Sciences (DMS) at the United States National Science Foundation (NSF) between 2011 and 2020 [12].
The Wapman et al. dataset required some nuance to interpret. The dataset consisted of a census of tenured or tenure-track faculty employed at PhD-granting institutions in the United States from the years 2011-2020. Faculty were only included in this sample if they were employed in the majority of the years under review. This dataset is centered on departments, rather than on faculty, and these departments are each assigned to a field such as "Mathematics." In particular, a department may be accounted for in multiple fields; as an example, a "Department of Mathematics and Statistics" would have its faculty included twice, in the fields of "Mathematics" and "Statistics."
Because our goal is to document elitism and quantify hierarchy in the mathematics community as a whole, we choose to define the mathematical sciences as broadly as possible (see [4]). This choice means we reduce Wapman et al.'s original dataset of 295,089 faculty in all academic disciplines offering PhDs in the United States to the 9,814 faculty that are listed under the fields of Mathematics, Statistics, or Operations Research (Table 1). We adopt the convention of capitalizing these three terms when referring to the fields present in the data throughout the rest of this paper.
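In practice this selection can be done with a few lines of pandas; the sketch below is purely illustrative, and the file name and column names (field, department_id, person_id, gender) are assumptions rather than the actual labels in the released data.

```python
import pandas as pd

faculty = pd.read_csv("wapman_faculty_census.csv")      # hypothetical export of the census
fields = ["Mathematics", "Statistics", "Operations Research"]
math_sci = faculty[faculty["field"].isin(fields)]

# A joint department (e.g. "Mathematics and Statistics") appears once per field,
# so the counts below are per (field, department) rather than per unique person.
summary = (math_sci.groupby("field")
           .agg(departments=("department_id", "nunique"),
                faculty=("person_id", "nunique"),
                pct_women=("gender", lambda g: 100 * (g == "F").mean()))
           .round(1))
print(summary)
```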
In our analysis, we incorporate the department prestige rankings from Wapman et al. [29]. Because of the way these are computed (a department is prestigious if its graduates are hired by prestigious departments), some departments do not have a prestige ranking.
\begin{table}
\begin{tabular}{l c c c} Field & Departments & Faculty Members & Percentage of women \\ \hline Mathematics & 223 & 7328 & 16.8\% \\ Statistics & 122 & 2576 & 20.9\% \\ Operations Research & 51 & 1034 & 19.3\% \\ \end{tabular}
\end{table}
Table 1: Faculty present in the Wapman et al. dataset in the fields of Mathematics, Statistics, and Operations Research
This could happen if the department does not have a PhD program in one of the three fields (recall the dataset began with _institutions_ that grant PhDs), or if none of its graduates were hired by departments with prestige rankings. In Mathematics, for example, there are 223 departments listed, but only 161 of these have a prestige ranking. Departments without prestige rankings were not included in the data analyses involving prestige below, but were included in the analysis of NSF funding later in the paper.
We use this prestige ranking as a measure of the idea of elitism discussed above. Since the Group I departments--in the groupings used by the AMS historically to divide departments by perceived quality (see Section 2.1 above)--constituted about 25% of departments, in our analysis we consider the upper quartile (in terms of prestige rankings) as "elite" departments and compare this elite group to the remaining 75% of departments.
Wapman et al. include only binary gender in their dataset. This is self-reported for a small percentage (6%) of faculty in their initial dataset. They then attempted to infer the gender of the remaining faculty based on their names, ultimately ascribing a binary gender to a total 85% of the faculty listed in their dataset. We include only these faculty who were ascribed a binary gender in our analyses, eliminating roughly 15% of the total due to the inability to accurately ascribe a gender to these entries.
We separately obtained data from the NSF's publicly available data on awards made by the Division of Mathematical Sciences (DMS) from 2011-2020 [12]. These were aggregated by institution. Since the institution names in the Wapman et al. dataset typically did not match the formal organization names listed by the NSF, we manually adjusted these in order to compare the two datasets. On average, DMS awarded $235 million per year towards achieving its mission to support "a wide range of research in mathematics and statistics aimed at developing and exploring the properties and applications of mathematical structures" [13]. Of the awards to institutions, 80% were matched to the departments of interest in the following analysis. The NSF, and DMS in particular, is the primary funder of mathematics research in the United States. While some mathematicians (such as several authors of this paper) receive funding from other NSF divisions and directorates (e.g., the Division of Undergraduate Education and the Directorate for STEM Education) and other funders (e.g., the National Institutes of Health), we consider DMS funding a reasonable measure for overall financial support of Mathematics by the federal government, and in some sense a proxy indicator for "merit." Statisticians and operations researchers typically have more varied sources of funding, so we only considered the field of Mathematics in our analyses of funding below.
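A sketch of the aggregation and name matching is shown below; the file names, the column names, and the manually curated name map are assumptions made for illustration only.

```python
import pandas as pd

# Award-level DMS data downloaded from nsf.gov; column names are assumed here.
awards = pd.read_csv("nsf_dms_awards_2011_2020.csv")
per_institution = (awards.groupby("institution")["awarded_amount"]
                   .sum()
                   .div(10)                # ten fiscal years -> average annual funding
                   .rename("avg_annual_funding"))

# Manually curated map from NSF organization names to the census institution names.
name_map = pd.read_csv("nsf_to_census_name_map.csv")    # columns: nsf_name, census_name
funding = (per_institution.reset_index()
           .merge(name_map, left_on="institution", right_on="nsf_name")
           [["census_name", "avg_annual_funding"]])
```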
Some of the institutions from the NSF data with the largest amounts of funding were not included in our list of PhD-granting institutions because they were mathematics institutes. For example, the Mathematical Sciences Research Institute and the Institute for Advanced Study were awarded funds as separate entities from their associated universities, the University of California at Berkeley and Princeton University, respectively. We chose to treat them as separate, non-PhD-granting institutions for our analyses.
## 4 Results
In this section, we present the primary results of our research into the distribution of faculty and funding at mathematical sciences PhD-granting institutions in the United States. In Figure 1, the percentage of women in the fields of Mathematics, Statistics, and Operations Research from 2011 to 2020 is given. We note that the percentage of women in Mathematics
lags behind Operations Research and Statistics throughout the time period of the dataset. We further note the percentage of women in each of the three mathematical sciences fields included in our analysis lie far below the percentage of women in academia as a whole.
Next, we computed the percentages of faculty in each department inferred to be women, and plotted these according to prestige rank in Figure 2 in the fields of Mathematics, Statistics, and Operations Research. To compute this percentage, we used as a denominator the total number of faculty in a department for which a gender was inferred, in effect removing from our sample any faculty members whose gender could not be inferred.
Figure 1: Fraction of women in the fields Mathematics, Statistics, and Operations Research, as well as academia as a whole, over the ten year period of the dataset.
Figure 2: Percentage of women by department in the fields of Mathematics, Statistics, and Operations Research. Color distinguishes the upper quartile of prestige (blue) from the lower three quartiles (orange).
We then considered the elite (top quartile) institutions as a group and calculated the percentage of faculty at these institutions that are women (Table 2); in each case, we see that the percentage of women among elite institutions is lower than among non-elite institutions. A chi-squared test for each field was conducted, finding only the difference in Mathematics to be significant (\(p<0.001\)).
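The comparison in Table 2 amounts to a two-by-two contingency table (elite/non-elite by women/men) for each field; a sketch of the split and the chi-squared test, with assumed column names, is given below.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# One row per Mathematics department with (assumed) columns:
#   prestige_rank, n_women, n_men  -- faculty counts among those with an inferred gender.
depts = pd.read_csv("math_departments.csv").dropna(subset=["prestige_rank"])
cutoff = depts["prestige_rank"].quantile(0.25)           # smaller rank = more prestigious
depts["elite"] = depts["prestige_rank"] <= cutoff

counts = depts.groupby("elite")[["n_women", "n_men"]].sum()
print(100 * counts["n_women"] / counts.sum(axis=1))      # % women, non-elite (False) vs elite (True)

chi2, p, dof, _ = chi2_contingency(counts.values)
print(f"chi2 = {chi2:.2f}, p = {p:.2g}")
```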
We then explored the distribution of DMS funding to Mathematics departments from 2011-2020. The upper quartile of elite institutions were awarded in aggregate $119M per year in grant funding, while the non-elite institutions, of which there are three times as many, were awarded only $70M of NSF money per year in aggregate. We plotted this funding by department prestige in Figure 3. The top quartile (by prestige) of departments received 64.7% of funding in our dataset, compared to 35.3% for the lower 3 quartiles combined.
We also computed the Gini coefficient, a measure of inequality which characterizes wealth inequality on a scale from 0 to 1. A perfectly equal distribution yields a Gini coefficient of 0, while a distribution in which a single individual holds all the wealth yields a coefficient of 1. The Gini coefficient of NSF DMS funding for the Mathematics departments in our dataset is 0.63.
We also analyzed the full DMS-funded portfolio (excluding fellowships which are given to people instead of institutions) to avoid a possible sampling effect due to our focus on PhD-granting departments. In this more comprehensive dataset, the top 20% holds 86.1% of all DMS funding. The Gini coefficient of this data is 0.80.
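Both summary statistics are straightforward to compute from an array of per-department (or per-awardee) funding totals; the sketch below gives one standard implementation of the Gini coefficient together with the share held by the top fraction of recipients.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative array (0 = perfect equality, 1 = maximal inequality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Standard formula with x sorted in ascending order:
    #   G = 2 * sum_i i * x_i / (n * sum_i x_i) - (n + 1) / n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

def top_share(x, frac=0.25):
    """Fraction of the total held by the top `frac` of entries."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]
    k = max(1, int(round(frac * len(x))))
    return x[:k].sum() / x.sum()

# funding = np.array([...])   # average annual DMS funding per department
# print(gini(funding), top_share(funding, 0.25), top_share(funding, 0.20))
```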
We also plotted the annual grant funding received by Mathematics departments against the percentage of women in those departments in Figure 4.
\begin{table}
\begin{tabular}{l c c} Field & Percentage of women among elite institutions & Percentage of women among non-elite departments \\ \hline Mathematics & 12.5\% & 18.1\% \\ Statistics & 21.3\% & 21.6\% \\ Operations Research & 17.0\% & 18.7\% \\ \end{tabular}
\end{table}
Table 2: The percentage of faculty at elite and non-elite institutions who are women in each field
Figure 3: Average annual grant funding for Mathematics departments by the prestige rank of the department, displayed by total funding to the department (a), and on a per capita basis accounting for the varying number of faculty in each department (b).
the percentage of women in those departments in Figure 4. We note an interesting ceiling effect, where none of the 29 departments with at least 25% women received more than $1.1M in average annual funding from the DMS.
## 5 Discussion
The results presented above demonstrate that mathematics is not a meritocracy--in particular, stark inequities exist in faculty composition with respect to gender and the distribution of federal funding to mathematical sciences departments.
As demonstrated by the data presented above, almost all PhD-granting institutions have Mathematics departments which are composed of faculty that are disproportionately male. In fact, not a single Mathematics department represented in this dataset was majority women. We found that the underrepresentation of women is more pronounced among "elite" Mathematics departments (recall that we defined "elite" departments as those in the upper quartile of departments in the prestige ranking generated by Wapman et al.).
We believe in the fundamental principle that mathematical talent is distributed equally among all groups of people who do mathematics. In the context of this paper, we therefore assume an equal distribution of mathematical talent among men and women. As such, if the discipline of mathematics were a meritocracy, "elite" and "non-elite" departments would have equal representation of women. However, we have documented inequity and elitism in PhD-granting mathematical sciences departments in the United States. Therefore, we reject the hypothesis that mathematics is a meritocracy based on the (under)representation
Figure 4: Average annual grant funding for Mathematics departments by the percentage of women faculty in the department. The upper quartile (by prestige) departments are colored blue, while the remainder are colored orange.
of women among "elite" Mathematics departments.
Our analysis of NSF DMS funding found that it is not meritocratic and reinforces elitism. Pareto models, also popularized as the "20/80" economic model, predict that approximately 80 percent of assets are held, gained or earned by only 20 percent of the population being studied [23]. We found that the "elite" institutions, the top 25% in our dataset by prestige ranking, garnered 65% of the total funds given to the subset of PhD-granting institutions with a prestige ranking. When we examine all NSF DMS funding, the top 20% of awardees hoard 86% of all funds, with a Gini coefficient of 0.8. This result demonstrates even more pronounced inequality than the classic "20/80" proportion. To contextualize this result, we note Gini coefficients are commonly used to measure income inequality in nations, and the nation with the highest Gini coefficient in the world is South Africa with 0.63 in 2021 [1].
The procedures and policies that NSF uses to determine which institutions receive funding reinforce elitism and promote inequity in the mathematical sciences. Reviewers are asked to assess all proposals' intellectual merit and broader impacts under five elements [16]. Two of these elements in particular bolster hierarchies of prestige and elitism. First, reviewers are asked "How well qualified is the individual, team, or organization?" which skews reviewers towards considering institutional prestige. Second, reviewers are asked "Are there adequate resources available to the PI (either at the home organization or through collaborations) to carry out the proposed activities?" This second question specifically skews the reviewers to more positively rate proposals from well-resourced institutions, which contributes to a rich-get-richer phenomenon that could explain the Pareto distribution found in our data.
Even more troubling are the systemic effects that are propagated over time by this extreme inequity in the distribution of funding. The cliche "the rich-get-richer" is a colloquial distillation of how systems that disproportionately allocate resources then have an easier time justifying disproportionate funding. For example, elite universities have significant funds internally for pilot projects. They have teams of grant departments that assist in the writing and administration of grants, funded by high indirect cost rates. They also have research support teams devoted to data gathering and processing, as well as communications teams devoted to disseminating the results. We also note that expectations in obtaining grant funds vary widely between departments and institutions, and are often higher at the "elite" institutions in our dataset. This likely affects the number of proposals that are submitted by different institutions. In short, the effect we are seeing is likely the product of a complicated set of processes which reinforce and exacerbate a status quo that is a deeply inequitable system, despite often being falsely presented as simply a meritocracy.
Further, there are compounding and intersectional power structures at play when we examine the combination of gender and funding. Our results show that "elite" departments have smaller percentages of women faculty (see Figure 4). Our results also show that federal funding is disproportionately distributed, favoring "elite" Mathematics departments. The argument of meritocracy is that women are underrepresented in these departments because their research is less meritorious than men's, possibly measured through grant funds. The argument we make is that because women are not in these "elite" departments, they do not have access to the resources that facilitate submitting successful grant proposals. In other words, there can be no singular causal explanation for the observed representation of women and resource allocation. Thus, the claim that representation and resource allocation are simply due to merit (i.e., that mathematics is a meritocracy), is false.
We encourage a focus on dismantling the idea that "elitism" is an actual indicator of merit. Rather, "elitism" is just as much, if not more, simply an indicator of inequitable resource allocation and hierarchical hiring networks. To combat these pernicious ideas, we
advocate for the redefinition of "prestige" and "elitism" in mathematics in a way that better reflects equity for excellence. For example, institutions that might deserve our respect are institutions that "reflect the diversity of the US population" [14] among their faculty and doctoral graduates in mathematics. We present the ten most prestigious PhD-granting Mathematics departments based on representation of women in Table 3. We might also define "prestige" and "merit" in terms of the contribution that one makes not just to mathematics or academia, but in service to society. This is a cultural shift, but one with significant advantages in producing a vision for science and mathematics that improves all lives.
### Limitations.
There are a number of limitations that accompany the research presented in this paper that we want to highlight, primarily due to the lack of access to publicly available, comprehensive, self-reported demographic data. The primary limitation is that the data was part of a public dataset shared by Wapman et al. [29] after they had processed it, and the raw data was not made available to us. Their methodology of determining prestige means that only PhD-granting departments are represented in the prestige data; a large number of faculty in the mathematical sciences who are at Bachelor's-only and Master's-granting departments are not included.
Recall that the Wapman et al. data includes gender, but gender was known directly for only a small percentage of faculty; for the rest it was inferred from names. While we acknowledge this practice is common in this kind of work analyzing data involving people, there are multiple, potentially problematic issues with inferring individuals' gender based on names. There is some degree of selection bias in which names can be ascribed to a particular (binary) gender; we note, again, that in the dataset used here, 15% of entries were omitted due to an inability to assign them a gender with an acceptable level of accuracy. Furthermore, the reduction of gender to a binary erases the experiences of gender-diverse mathematical scientists from this work.
We also lament the lack of race/ethnicity in the data; there are important questions to be answered about the interaction of race and inequity in the mathematical sciences. The Wapman et al. dataset is limited to only tenured or tenure-track faculty. Without comprehensive demographic data, an intersectional analysis involving multiple identity characteristics is not possible. As discussed below, we hope other researchers will collect or generate additional data that can be used to address important outstanding questions about the mathematical
\begin{table}
\begin{tabular}{l c} Institution & Percentage of Women \\ \hline Bryn Mawr College & 50.0\% \\ Louisiana Tech University & 50.0\% \\ University of California Merced & 50.0\% \\ Teachers College Columbia & 45.8\% \\ University of Texas at Tyler & 40.0\% \\ University of New Hampshire & 38.9\% \\ Cleveland State University & 38.1\% \\ Drew University & 37.5\% \\ Illinois State University & 37.5\% \\ Case Western Reserve University & 37.5\% \\ \end{tabular}
\end{table}
Table 3: The top ten PhD-granting Mathematics departments by percentage of women.
sciences discipline.
## 6 Future Directions.
There are many other directions in which the research presented here could be extended. It is important to study the questions addressed here with respect to other dimensions of diversity, particularly marginalized social identities including race/ethnicity, sexual orientation, national origin, and disability status, among others. This future work should be done in a way that allows analysis using intersections of multiple identity characteristics.
The addition of geographic location to the analysis of the gender diversity of PhD-granting institutions as well as of the distribution of federal funding is another interesting direction of future research.
Another important direction is to expand this work to a wider range of institutions and faculty appointments. A study encompassing all types of institutions, and particularly community colleges, minority-serving institutions, and primarily undergraduate institutions, is necessary. Additionally, future work should investigate related questions about all types of faculty employed at these institutions, especially the increasing percentage of non-tenure track faculty.
We conclude by inviting interested researchers to join us in the ongoing MetaMath project to use mathematics and data science to promote social justice and improve equity within all fields of the mathematical sciences.
## Acknowledgements
This material is based upon work supported by the National Science Foundation under Grant No. DMS-1929284 while all authors were in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Data Science and Social Justice: Networks, Policy, and Education program. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Science Foundation. The authors would like to thank Phil Chodrow and Victor Piercey for fruitful conversations that pushed this work forward.
Buckmire acknowledges sabbatical support provided by the Office of the Dean of the College at Occidental College in Los Angeles, California.
Diaz Eaton was supported in part by the Bates Enhanced Sabbatical Fund and Faculty Professional Development Fund.
Hibdon was supported, in part, by the National Institutes of Health's National Cancer Institute, Grant Numbers U54CA202995, U54CA202997, and U54CA203000. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Kauba is partially supported by the National Science Foundation South Carolina LSAMP Bridge to Doctorate Fellowship HRD-2005030.
Pabon is partially supported by the National Science Foundation under Awards DMS-2108839 and DMS-1450182.
Vindas-Melendez is partially supported by the National Science Foundation under Award DMS-2102921.
Zhang is partially supported by the National Science Foundation Graduate Research Fellowship Award DGE-2040434. |
2306.14593 | Semënov Arithmetic, Affine VASS, and String Constraints | We study extensions of Sem\"enov arithmetic, the first-order theory of the
structure $(\mathbb{N}, +, 2^x)$. It is well-known that this theory becomes
undecidable when extended with regular predicates over tuples of number
strings, such as the B\"uchi $V_2$-predicate. We therefore restrict ourselves
to the existential theory of Sem\"enov arithmetic and show that this theory is
decidable in EXPSPACE when extended with arbitrary regular predicates over
tuples of number strings. Our approach relies on a reduction to the language
emptiness problem for a restricted class of affine vector addition systems with
states, which we show decidable in EXPSPACE. As an application of our results,
we settle an open problem from the literature and show decidability of a class
of string constraints involving length constraints. | Andrei Draghici, Christoph Haase, Florin Manea | 2023-06-26T11:05:37Z | http://arxiv.org/abs/2306.14593v1 | # Semenov Arithmetic, Affine VASS, and String Constraints
###### Abstract
We study extensions of Semenov arithmetic, the first-order theory of the structure \(\langle\mathbb{N},+,2^{x}\rangle\). It is well-known that this theory becomes undecidable when extended with regular predicates over tuples of number strings, such as the Buchi \(V_{2}\)-predicate. We therefore restrict ourselves to the existential theory of Semenov arithmetic and show that this theory is decidable in EXPSPACE when extended with arbitrary regular predicates over tuples of number strings. Our approach relies on a reduction to the language emptiness problem for a restricted class of affine vector addition systems with states, which we show decidable in EXPSPACE. As an application of our result, we settle an open problem from the literature and show decidability of a class of string constraints involving length constraints.
arithmetic theories, Buchi arithmetic, exponentiation, vector addition systems with states, string constraints
Our main contribution is to show that the existential fragment of _generalised Semenov arithmetic_, i.e., the existential theory of \(\langle\mathbb{N},0,1,+,2^{x},\{R_{i}\}_{i\geq 0}\rangle\), where \(R_{0},R_{1},\ldots\) is an enumeration of all regular languages over the alphabets \(\{0,1\}^{d}\), \(d\geq 1\), is decidable in EXPSPACE. Non-automaticity of Semenov arithmetic and undecidability of \(\langle\mathbb{N},0,1,+,2^{x},V_{2}\rangle\) rule out the possibility of approaching this existential theory via automatic structures based on finite-state automata or via quantifier-elimination _a la_ Cherlin and Point, since \(V_{2}\) is definable as a regular language over pairs of number strings. Instead, our decidability result is based on a reduction to the language emptiness problem of a special class of _affine vector addition systems with states (affine VASS)_.
A VASS comprises a finite-state controller with a finite number of counters ranging over the natural numbers. In an affine VASS, when taking a transition, every counter can be updated by applying an affine function \(x\mapsto ax+b\) to the current value, provided that the resulting counter is non-negative. While reachability in affine VASS is decidable for a single counter [8], already in the presence of two counters reachability becomes undecidable [13]. Our reduction consequently requires a restricted class of affine VASS to obtain decidability. We call this class _restricted labelled affine VASS (restricted la-VASS)_. A restricted la-VASS is an affine VASS with \(d\) pairs of counters and hence \(2d\) counters in total. For every pair, the first counter is initially left unchanged and, from some point on, gets incremented at every subsequent transition; the second counter is only updated via the affine functions \(x\mapsto 2x\) and \(x\mapsto 2x+1\). A configuration consisting of a control state and \(2d\) counter values is accepting whenever the control state is accepting and, for every pair of counters, the first counter has the same value as the second counter. We give an EXPSPACE procedure for deciding emptiness of restricted la-VASS whose correctness proof is based on a kind of counter elimination procedure in which we successively encode counters into a finite state space while preserving equi-non-emptiness. The tight syntactical restrictions on la-VASS are necessary in order to obtain a decidable class of affine VASS: relaxing those restrictions even only slightly leads to undecidability of the language emptiness problem, as we will later discuss.
The EXPSPACE upper bound for existential generalised Semenov arithmetic follows from a reduction to language non-emptiness of a restricted la-VASS whose language encodes all solutions of the given formula. Obtaining an elementary upper bound is difficult since it is easily seen that smallest solutions of an existential formula of Semenov arithmetic can be non-elementary in bit-length.
As an application of our EXPSPACE upper bound for existential generalised Semenov arithmetic, we show that a certain class of string constraints with length constraints is decidable in EXPSPACE. It allows existentially quantifying over bit-strings, and to assert that the value of a string variable lies in a regular language, as well as Presburger-definable constraints over the lengths of the bit-strings stored in string variables and the numerical values of those variables (when viewed as encoding a number in binary). Decidability of this class was left open in [3]. We settle this open problem by showing that it can be reduced to the existential fragment of generalised Semenov arithmetic. Formulas of this class of string constraints appear widely in practice--in fact, essentially all formulas in the extensive collection of standard real-world benchmark sets featured in [3, 4] lie in this class.
## 2 Preliminaries
### Basic notation
By \(\mathbb{Z}\) and \(\mathbb{N}\) we denote the integers and non-negative integers, respectively. Given an \(m\times n\) integer matrix \(A\), we denote by \(\|A\|_{1,\infty}\) the \((1,\infty)\)-norm of \(A\), which is the maximum over the sum of the absolute values of the coefficients of the rows in \(A\). For \(\boldsymbol{b}\in\mathbb{Z}^{m}\), \(\|\boldsymbol{b}\|_{\infty}\) is the
largest absolute value of the numbers occurring in \(\mathbf{b}\).
### Numbers as strings and strings as numbers
Here and below, let \(\Sigma=\{0,1\}\) be a binary alphabet. Any string from \(\Sigma^{*}\) has a natural interpretation as a binary encoding of a natural number, possibly with an arbitrary number of leading zeros. Conversely, any natural number in \(\mathbb{N}\) can be converted into its bit representation as a string in \(\Sigma^{*}\). Finally, by considering strings over \((\Sigma^{k})^{*}\) for \(k\geq 1\), we can represent \(k\)-tuples of natural numbers as strings over \(\Sigma^{k}\), and _vice versa_.
Formally, given \(u=\mathbf{u}_{n}\mathbf{u}_{n-1}\dots\mathbf{u}_{0}\in(\Sigma^{k})^{*}\), we define the tuple of natural numbers corresponding to \(u\) in _most-significant digit first (msd)_ notation as
\[\llbracket u\rrbracket:=\sum_{i=0}^{n}2^{i}\cdot\mathbf{u}_{i}\,.\]
Note that \(\llbracket\cdot\rrbracket\) is surjective but not injective. We lift the definition of \(\llbracket\cdot\rrbracket\) to sets in the natural way.
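As an illustration (ours, not part of the original text), the map \(\llbracket\cdot\rrbracket\) can be implemented directly from its definition; the following Python sketch decodes a word over \((\Sigma^{k})^{*}\), given as a list of bit tuples in msd-first order:

```python
def decode(word):
    """Map a word over ({0,1}^k)* to the k-tuple of numbers it encodes.

    `word` is a list of k-tuples of bits in msd-first order,
    e.g. [(1, 0), (0, 0), (0, 1)] encodes the pair (4, 1).
    """
    k = len(word[0]) if word else 0
    values = [0] * k
    for digits in word:            # msd first: shift, then add the current digit
        values = [2 * v + d for v, d in zip(values, digits)]
    return tuple(values)

print(decode([(1, 0), (0, 0), (0, 1)]))             # (4, 1)
print(decode([(0, 0), (1, 0), (0, 0), (0, 1)]))     # (4, 1) again: leading zeros
```

The second call illustrates non-injectivity: prepending zero columns does not change the encoded tuple.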
### Generalised Semenov arithmetic
For technical convenience, the structures we consider in this paper are relational. We refer to the first-order theory of \((\mathbb{N},0,1,+,2^{(\cdot)})\) as _Semenov arithmetic_, where \(+\) is the natural ternary addition relation, and \(2^{(\cdot)}\) is the power relation of base two, consisting of all tuples \((a,b)\in\mathbb{N}^{2}\) such that \(b=2^{a}\). Semenov arithmetic is an extension of Presburger arithmetic, which is the first-order theory of the structure \(\langle\mathbb{N},0,1,+\rangle\). It is known that Semenov arithmetic is decidable and admits quantifier elimination [2, 7, 14].
For presentational convenience, atomic formulas of Semenov arithmetic are one of the following:
* linear equations of the form \(a_{1}\cdot x_{1}+\dotsb+a_{d}\cdot x_{d}=b\), \(a_{i},b\in\mathbb{Z}\), and
* exponential equations of the form \(x=2^{y}\).
Here, \(x_{1},\dots,x_{d},y\) are arbitrary first-order variables. Clearly, richer atomic formulas such as \(x+2^{2^{y}}+y=z+5\) can be defined from these basic classes of atomic formulas, since, in this example, \(x+2^{2^{y}}+y=z+5\equiv\exists u\exists v\,u=2^{v}\wedge v=2^{y}\wedge x+u+y-z=5\). Moreover, since we are interpreting numbers over non-negative integers, we can define the order relation in existential Semenov arithmetic. This enables us to assume, without loss of generality, that existential formulas of Semenov arithmetic are positive, since \(\neg(x=y)\equiv x<y\lor y<x\) and \(\neg(x=2^{y})\equiv\exists z\,z=2^{y}\wedge\neg(x=z)\).
The main contribution of this paper is to show that the existential fragment of a generalisation of Semenov arithmetic is decidable. Subsequently, we write \(\mathbf{0}\) to denote a tuple of \(0\)s in any arbitrary but fixed dimension. _Generalised Semenov arithmetic_ additionally allows for non-negated atomic formulas \(R(x_{1},\dots,x_{k})\), where \(R=\mathbf{0}^{*}\cdot L\) for some regular language \(L\subseteq(\Sigma^{k})^{*}\). We interpret \(R\) as \(\llbracket R\rrbracket\subseteq\mathbb{N}^{k}\), and the additional leading zeros we require ensure that \(R=\llbracket\llbracket R\rrbracket\rrbracket^{-1}\). Subsequently, we call a language \(L\subseteq(\Sigma^{k})^{*}\) _zero closed_ if \(L=\mathbf{0}^{*}\cdot L\). Given a formula \(\Phi(x_{1},\dots,x_{n})\) of generalised Semenov arithmetic, we define \(\llbracket\Phi\rrbracket\subseteq\mathbb{N}^{n}\) as the set of all satisfying assignments of \(\Phi\).
The size of an atomic formula \(R(x_{1},\dots,x_{k})\) is defined as the number of states of the canonical minimal DFA defining \(R\). For all other atomic formulas \(\varphi\), we define their sizes \(|\varphi|\) as the number of symbols required to write down \(\varphi\), assuming binary encoding of numbers. The size \(|\Phi|\) of an arbitrary existential formula \(\Phi\) of generalised Semenov arithmetic is the sum of the sizes of all atomic formulas of \(\Phi\).
The full first-order theory of generalised Semenov arithmetic is known to be undecidable [12]. This follows from the undecidability of \(\langle\mathbb{N},0,1,+,2^{(\cdot)},V_{2}\rangle\), where \(V_{2}\) is the binary predicate such that \(V_{2}(x,y)\) holds if and only if \(x\) is the largest power of \(2\) dividing \(y\) without remainder. Note that \(V_{2}\) can be defined in terms of a regular language, cf. [6]. The central result of this paper is the following:
The existential fragment of generalised Semenov arithmetic is decidable in EXPSPACE.
### Affine vector addition systems with states
A technical tool for our decidability results is a tailor-made class of _labelled affine vector addition systems with states (la-VASS)_. Formally, an la-VASS is a tuple \(V=\langle Q,d,\Sigma,\Delta,\lambda,q_{0},F,\Phi\rangle\), where
* \(Q\) is a finite set of _control states_,
* \(d\geq 0\) is the _dimension of \(V\)_,
* \(\Sigma\) is a _finite alphabet_,
* \(\Delta\subseteq Q\times\mathcal{P}(\Sigma)\times Q\) is a finite set of _transitions_,
* \(\lambda\colon\Delta\to\mathit{Ops}^{d}\) is the _update function_, where \(\mathit{Ops}\subseteq\mathbb{Z}[x]\) is the set of all affine functions over a single variable,
* \(q_{0}\in Q\) is the _initial control state_,
* \(F\subseteq Q\) is the set of _final control states_, and
* \(\Phi\) is a quantifier-free formula of Presburger arithmetic \(\Phi(x_{1},\ldots,x_{d})\) that specifies a finite set \(\llbracket\Phi\rrbracket\subseteq\mathbb{N}^{d}\) of _final counter values_.
Note that when \(d=0\) then \(V\) is essentially a non-deterministic finite automaton.
The set of _configurations_ of \(V\) is \(C(V):=Q\times\mathbb{N}^{d}\). The _initial configuration_ of \(V\) is \(c_{0}=(q_{0},0,\ldots,0)\), and the set of _final configurations_ is
\[C_{f}(V):=\left\{(q_{f},\mathbf{v}):q_{f}\in F,\mathbf{v}\in\llbracket\Phi\rrbracket \right\}\,.\]
For an update function \(\lambda\colon\Delta\to\mathit{Ops}^{d}\), we define
\[\|\lambda\|:=\max\{|a|+|b|:\lambda(t)=(f_{1},\ldots,f_{d}),f_{i}=ax+b,1\leq i \leq d,t\in\Delta\}\,.\]
We define the size \(|V|\) of an la-VASS \(V=\langle Q,d,\Sigma,\Delta,\lambda,q_{0},F,\Phi\rangle\) as
\[|V|:=|Q|+|\Delta|\cdot(d+1)\cdot\log(\|\lambda\|+1)+|\Phi|\,.\]
An la-VASS induces an (infinite) labelled directed _configuration graph_\(G=(C(V),\to)\), where \(\to\subseteq C(V)\times\Sigma\times C(V)\) such that \(c\xrightarrow{a}c^{\prime}\) if and only if
* \(c=(q,m_{1},\ldots,m_{d})\) and \(c^{\prime}=(q^{\prime},m^{\prime}_{1},\ldots,m^{\prime}_{d})\),
* there is \(t=(q,A,q^{\prime})\in\Delta\) such that
* \(a\in A\),
* \(\lambda(t)=(f_{1},\ldots,f_{d})\), and
* \(m^{\prime}_{i}=f_{i}(m_{i})\) for all \(1\leq i\leq d\).
We lift the definition of \(\to\) to words \(w=a_{1}\cdots a_{n}\in\Sigma^{*}\) in the natural way, and thus write \(c\xrightarrow{w}c^{\prime}\) whenever \(c\xrightarrow{a_{1}}c_{1}\xrightarrow{a_{2}}\cdots c_{n-1}\xrightarrow{a_{n}}c ^{\prime}\) for some \(c_{1},\ldots,c_{n-1}\in C\). The _language_\(L(V)\subseteq\Sigma^{*}\) of \(V\) is defined as
\[L(V):=\left\{w\in\Sigma^{*}:c_{0}\xrightarrow{w}c_{f},c_{f}\in C_{f}(V) \right\}.\]
If we are interested in runs of la-VASS, we write \(\pi=c_{1}\xrightarrow{t_{1}}c_{2}\xrightarrow{t_{2}}\cdots\xrightarrow{t_{n-2}}c_{n-1}\xrightarrow{t_{n-1}}c_{n}\) to emphasise the sequence of configurations and transitions taken. For \(1\leq i\leq j\leq n\), we denote by \(\pi[i,j]\) the subsequence \(c_{i}\xrightarrow{t_{i}}c_{i+1}\xrightarrow{t_{i+1}}\cdots\xrightarrow{t_{j-1}}c_{j}\). We denote by \(\mathit{val}(\pi,x)\) the value of a counter \(x\) of \(V\) in the last configuration of \(\pi\), and by \(\mathit{val}(c,x)\) its value in a configuration \(c\).
The _emptiness problem_ for an la-VASS is to decide whether \(L(V)\neq\emptyset\). Affine VASS are a powerful class of infinite state systems, and even in the presence of only two counters and \(\Phi(x_{1},x_{2})\equiv x_{1}=x_{2}\), the emptiness problem is undecidable [13]. In Section 4, we identify a syntactic fragment of la-VASS for which emptiness can be decided in EXPSPACE.
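To make the one-step relation \(\to\) concrete, here is a small illustrative Python sketch (ours, not from the paper); affine functions are represented as coefficient pairs \((a,b)\) for \(x\mapsto ax+b\), and each transition is bundled with its counter updates:

```python
def step(config, transition, symbol):
    """Apply one la-VASS transition to a configuration, if it is enabled.

    config     : (state, counters) with counters a tuple of natural numbers
    transition : (source, symbols, target, updates), where `updates` bundles
                 the update function lambda(t) as (a, b) pairs for x -> a*x + b
    symbol     : the alphabet symbol being read

    Returns the successor configuration, or None if the transition does not
    apply (wrong state or symbol, or some counter would leave the naturals).
    """
    state, counters = config
    src, symbols, dst, updates = transition
    if state != src or symbol not in symbols:
        return None
    successors = tuple(a * m + b for m, (a, b) in zip(counters, updates))
    if any(m < 0 for m in successors):
        return None
    return (dst, successors)

# A transition reading symbol "a" that increments the first counter and
# doubles the second one.
t = ("q", {"a"}, "q'", ((1, 1), (2, 0)))
print(step(("q", (3, 5)), t, "a"))   # ("q'", (4, 10))
```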
### Closure properties of languages of la-VASS
We briefly discuss closure properties of la-VASS and show that they are closed under union and intersection, and restricted kinds of homomorphisms and inverse homomorphisms, using essentially the standard constructions known for finite-state automata. Let \(V_{i}=\langle Q_{i},d_{i},\Sigma,\Delta_{i},\lambda_{i},q_{0}^{(i)},F_{i},\Phi _{i}\rangle\), \(i\in\{1,2\}\), be two la-VASS.
The languages of la-VASS are closed under union and intersection. Moreover, for \(V\) such that \(L(V)=L(V_{1})\cap L(V_{2})\), we have \(|V|\leq|V_{1}|\cdot|V_{2}|\).
Proof.: This result can be obtained by generalising the standard constructions known from non-deterministic finite-state automata. The set of control states of the la-VASS \(V\) accepting the intersection of la-VASS \(V_{1}\) and \(V_{2}\) is \(Q_{1}\times Q_{2}\). The dimension of \(V\) is the sum of the dimensions of \(V_{1}\) and \(V_{2}\), and the counters of \(V_{1}\) and \(V_{2}\) get independently simulated in the counters of \(V\). Upon reading an alphabet symbol \(a\), the la-VASS \(V\) then simultaneously simulates the respective transitions of \(V_{1}\) and \(V_{2}\) for \(a\); further details are relegated to [2].
Note that since la-VASS languages contain regular languages, Proposition 2 in particular enables us to intersect la-VASS languages with regular languages.
Let \(\Sigma,\Gamma\) be two finite alphabets. Recall that a homomorphism \(h\colon\Gamma^{*}\to\Sigma^{*}\) is fully defined by specifying \(h(a)\) for all \(a\in\Gamma\). We call \(h\) a _projection_ if \(|h(a)|=1\) for all \(a\in\Gamma\).
The languages of la-VASS are closed under projections and inverses of projections.
Proof.: Let \(h\colon\Gamma^{*}\to\Sigma^{*}\) be a projection. Given an la-VASS \(V=\langle Q,d,\Sigma,\Delta,\lambda,q_{0},F,\Phi\rangle\), to obtain closure under projections replace any \(t=(q,A,q^{\prime})\in\Delta\) with \(t^{\prime}=(q,h(A),q^{\prime})\), and set \(\lambda(t^{\prime}):=\lambda(t)\). To obtain closure under inverse projections, replace any \(t=(q,A,q^{\prime})\in\Delta\) with \(t^{\prime}=(q,h^{-1}(A),q^{\prime})\) and set \(\lambda(t^{\prime}):=\lambda(t)\).
## 3 Reducing Semenov arithmetic to restricted la-VASS
Let \(\Sigma=\{0,1\}\). In this section, we show how given a quantifier-free formula \(\Phi(x_{1},\ldots,x_{d})\) of Semenov arithmetic, we can construct an la-VASS \(V\) over the alphabet \(\Sigma_{d}:=\{0,1\}^{d}\) such that \([\![L(V)]\!]=\{\mathbf{x}\in\mathbb{N}^{d}:\Phi(\mathbf{x})\}\). We will subsequently observe that the resulting la-VASS enjoy strong structural restrictions, giving rise to the fragment of restricted la-VASS that we then formally define. For our purposes, it will be sufficient to primarily focus on formulas \(\Phi\) of Semenov arithmetic which are conjunctions of atomic formulas.
Consider a positive conjunctive formula of Semenov arithmetic
\[\Phi(\mathbf{x})\equiv A\cdot\mathbf{x}=\mathbf{b}\wedge\bigwedge_{i\in I}x_{i}=2^{y_{i}},\]
_where \(A\in\mathbb{Z}^{m\times n}\), \(\mathbf{b}\in\mathbb{Z}^{m}\), \(I\) is a finite index set, and \(x_{i}\) and \(y_{i}\) are variables from \(\mathbf{x}\). There is an la-VASS \(V\) of dimension \(2|I|\) and of size \((\|A\|_{1,\infty}+\|\mathbf{b}\|_{\infty}+2)^{O(m+|I|)}\) such that \(\llbracket L(V)\rrbracket=\llbracket\Phi\rrbracket\)._
We derive this lemma in two parts. First, it is well known that the set of solutions of a system of linear equations \(A\cdot\mathbf{x}=\mathbf{b}\) can be represented by a regular language and is hence definable via an la-VASS.
[[15], see also [9, Eqn. (1)]] Given a system of equations \(\Phi\equiv A\cdot\mathbf{x}=\mathbf{b}\) with \(A\in\mathbb{Z}^{m\times d}\) and \(\mathbf{b}\in\mathbb{Z}^{m}\), there is a DFA \(V\) with at most \(2^{m}\cdot\max\{\|A\|_{1,\infty},\|\mathbf{b}\|_{\infty}\}^{m}\) states such that \(L(V)\) is zero-closed and \(\llbracket L(V)\rrbracket=\llbracket\Phi\rrbracket\).
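The following Python sketch (our illustration of the classical construction behind this lemma, not code from the paper) builds such an automaton on the fly: a state is the value of \(A\) applied to the prefix of digit columns read so far, and states that grow too large are pruned since they can never reach \(\mathbf{b}\) again.

```python
from itertools import product

def equation_dfa(A, b):
    """Msd-first automaton for A.x = b (sketch of the classical construction).

    A state is the integer vector A applied to the prefix of digit columns
    read so far; reading a column d moves from s to 2*s + A*d.  The accepting
    state is b.  A state whose j-th entry exceeds max(||A_j||_1, |b_j|) in
    absolute value can never reach b again, so it is pruned (missing
    transitions lead to an implicit rejecting sink).
    """
    m, n = len(A), len(A[0])
    bound = [max(sum(abs(a) for a in A[j]), abs(b[j])) for j in range(m)]
    alphabet = list(product((0, 1), repeat=n))
    start = tuple(0 for _ in range(m))
    states, frontier, delta = {start}, [start], {}
    while frontier:
        s = frontier.pop()
        for d in alphabet:
            t = tuple(2 * s[j] + sum(A[j][i] * d[i] for i in range(n))
                      for j in range(m))
            if any(abs(t[j]) > bound[j] for j in range(m)):
                continue                      # dead state, prune
            delta[(s, d)] = t
            if t not in states:
                states.add(t)
                frontier.append(t)
    return start, tuple(b), delta

# Example: x1 + x2 = 3.  The word (1,0)(0,1) encodes x1 = 2 and x2 = 1.
start, accept, delta = equation_dfa([[1, 1]], [3])
s = start
for column in [(1, 0), (0, 1)]:
    s = delta[(s, column)]
print(s == accept)   # True
```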
The crucial part, which requires the power of la-VASS, are exponential equations \(x=2^{y}\). An la-VASS \(V\) with two counters and \(\llbracket L(V)\rrbracket=\llbracket x=2^{y}\rrbracket\) is depicted in Figure 1. Control-states are depicted as circles and transitions as arrows between them. The vector before the colon is the alphabet symbol read. For instance, the transition from \(q_{0}\) to \(q_{1}\) reads the alphabet symbol \((1,0)\in\{0,1\}^{2}\). After a colon, we display the counter operations when reading the alphabet symbol, the operation on the first counter is displayed on the top and the operation on the second counter on the bottom. Here and subsequently, for presentational convenience, id is the identity \(x\mapsto x\), and x2 and x2+1 are the functions \(x\mapsto 2x\) and \(x\mapsto 2x+1\), respectively. Thus, the transition from \(q_{0}\) to \(q_{1}\) applies the identity function on the first counter, and the function \(x\mapsto 2x\) on the second counter.
The idea behind the gadget in Figure 1 is as follows. As an example, suppose that \(y=5\); then \(x=32\), and in binary the sequence of digits of \(x\) and \(y\) looks as follows:
\[\begin{bmatrix}x\\ y\end{bmatrix}=\begin{bmatrix}1\\ 0\end{bmatrix}\begin{bmatrix}0\\ 0\end{bmatrix}\begin{bmatrix}0\\ 0\end{bmatrix}\begin{bmatrix}0\\ 1\end{bmatrix}\begin{bmatrix}0\\ 0\end{bmatrix}\begin{bmatrix}0\\ 1\end{bmatrix}\]
Since \(x=2^{y}\), we have that \(x\in 0^{*}10^{*}\), and the number of trailing zeros of \(x\) is equal to the value of \(y\). Thus, once a \(1\) in the binary representation of \(x\) has been read, the first counter in the gadget of Figure 1 keeps incrementing and counts the number of trailing zeros of \(x\). At the same time, the second counter in the gadget of Figure 1 keeps the value \(0\) until it reads the first \(1\) of the binary expansion of \(y\), since \(2\cdot 0=0\). It then computes the value of \(y\) in binary on the second counter by multiplying the value of the second counter by \(2\) when reading a zero, and multiplying by two and adding one when reading a one. The la-VASS in Figure 1 only accepts when the first and the second counter have the same value, i.e., when the number of trailing zeros of the binary expansion of \(x\) equals the value of \(y\), as required.
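The behaviour of the gadget can be checked with a small Python sketch (ours; it simulates the deterministic effect of the gadget on a pair of equal-length msd-first bit strings rather than reproducing the literal transition table):

```python
def accepts_power(x_bits, y_bits):
    """Check whether the encoded pair satisfies x = 2^y, mimicking the gadget.

    Counter 1 counts the trailing zeros of x (it starts incrementing once the
    single 1 of x has been read); counter 2 accumulates the value of y via
    x -> 2x and x -> 2x+1.  The pair is accepted iff x has exactly one 1 and
    both counters agree at the end.
    """
    ones_seen = 0
    c1 = c2 = 0
    for xb, yb in zip(x_bits, y_bits):
        if xb == 1:
            ones_seen += 1
        elif ones_seen == 1:
            c1 += 1                    # trailing zero of x after its single 1
        c2 = 2 * c2 + yb               # build the value of y in binary
    return ones_seen == 1 and c1 == c2

# x = 100000 (32) and y = 000101 (5): accepted, since 32 = 2**5.
print(accepts_power([1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 1]))   # True
print(accepts_power([1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]))   # False (y = 4)
```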
There is a fixed la-VASS \(V\) of dimension two such that \(L(V)\) is zero closed and \(\llbracket L(V)\rrbracket=\llbracket x=2^{y}\rrbracket\).
Lemma 4 is now an easy consequence of Lemmas 5 and 6 together with the closure of la-VASS languages under intersection (Proposition 2) and inverse homomorphisms (Proposition 3).
Figure 1: Gadget with two counters for exponential equations \(x=2^{y}\), where \(b\in\{0,1\}\).
A closer look at the gadget constructed in Figure 1 reveals a number of important structural properties:
1. all operations performed on the first counter are either the identity map id or increments ++;
2. all operations performed on the second counter are affine updates x2 and x2+1;
3. once the first counter gets incremented on a run, it gets incremented at every subsequent transition; and
4. only counter configurations in which the value of the first counter equals the value of the second counter are accepted.
Those properties are crucial to obtain decidability of (generalised) existential Semenov arithmetic.
An la-VASS is _restricted_ if it has an even number \(2d\) of counters called \(x_{i},y_{i}\), \(1\leq i\leq d\), such that every counter pair \((x_{i},y_{i})\) adheres to the above Properties (i)-(iv), and the set of final counter values is defined by \(\Phi\equiv\bigwedge_{1\leq i\leq d}x_{i}=y_{i}\).
For convenience, when referring to the counters in a pair, we subsequently refer to the first counter as its _\(x\)-counter_ and to the second counter as its _\(y\)-counter_. We will usually write \(m\) for the value of the \(x\)-counter and \(n\) for the value of the \(y\)-counter. The following is immediate from the construction in Proposition 2:
The languages of restricted la-VASS are closed under union, intersection, projection and inverse projections.
Finally, subsequently, for technical convenience, we assume that for a restricted la-VASS, we have \(|Q|\geq 2\). This is with no loss of generality, since if \(|Q|=1\) then deciding emptiness is trivial (the only control state is accepting if and only if the restricted la-VASS has non-empty language).
The next section will be devoted to the proof of the main result of this paper on restricted la-VASS:
Language emptiness of a restricted la-VASS \(V\) with \(2d\) counters is decidable in \(\mathrm{NSPACE}(|V|\cdot 2^{O(d)})\).
Let us close this section with arguing how Theorem 1 follows from Proposition 9. Given a formula \(\Phi\) of generalised Semenov arithmetic, we can in space \(2^{O(|\Phi|)}\) construct the disjunctive normal form of \(\Phi\). Every disjunct can be assumed to be of the form
\[A\cdot\mathbf{x}=\mathbf{b}\wedge\bigwedge_{i\in I}x_{i}=2^{y_{i}}\wedge\bigwedge_{j\in J }R_{j}(\mathbf{x}),\]
where the \(R_{j}\) are predicates over regular languages. By Lemma 4, there is a restricted la-VASS for \(\Phi\) of dimension \(2|I|\) with a number of states bounded by \((\|A\|_{1,\infty}+\|\mathbf{b}\|_{\infty}+2)^{O(m+|I|)}=2^{p(|\Phi|)}\) for some polynomial \(p\) and whose language represents the set of solutions to \(A\cdot\mathbf{x}=\mathbf{b}\wedge\bigwedge_{i\in I}x_{i}=2^{y_{i}}\). Intersecting with the DFAs for the \(R_{j}\) results in a restricted la-VASS \(V\) with \(2|I|=O(|\Phi|)\) counters such that \(|V|=2^{p(|\Phi|)}\) for some polynomial \(p\). By Proposition 9, it follows that emptiness of \(V\) is decidable in non-deterministic space exponential in \(p(|\Phi|)\). We conclude the argument by recalling that NEXPSPACE=EXPSPACE by Savitch's theorem.
## 4 Emptiness certificate for restricted la-VASS
We now show that language emptiness for restricted la-VASS is decidable in exponential space. Clearly, this problem reduces to deciding whether a given restricted la-VASS has an accepting
run, but witnessing runs may be of non-elementary length. To overcome this problem, we define an abstraction for configurations of restricted la-VASS. Abstract configurations store residue classes of counter values, as well as some further information that is required to witness the existence of concrete accepting runs. Before giving the formal definition, we provide some high level intuition that leads to our definition of abstract configurations. Next, we introduce reachability certificates, which are abstract runs with certain further properties. We argue that the existence of _witnessing certificates_, which are special kinds of reachability certificates witnessing that the language of an la-VASS is non-empty, is decidable in EXPSPACE. The last two sections then establish that witnessing certificates actually witness non-emptiness of restricted la-VASS.
### Key observations
Given a _restricted_ la-VASS \(V\) in dimension \(d\), assuming that \(L(V)\neq\emptyset\), there is a run \(\pi\) from an initial configuration \(c\) to a final configuration \(c^{\prime}\). With no loss of generality, throughout this section, we assume that \(\mathit{val}(c^{\prime},x_{i})\geq\mathit{val}(c^{\prime},x_{i+1})>0\) for all \(1\leq i<d\). In particular, this implies that every counter gets incremented at least once along a path witnessing non-emptiness.
Our first observation is that if along \(\pi\) a counter \(y_{i}\) achieves a non-zero value for the first time by taking a transition labeled x2+1, the length of the remaining segment of \(\pi\) is bounded by \(O(\log(m_{i}+1))\), where \(m_{i}\) is the value of counter \(x_{i}\) before the transition is taken. The reason is that, once \(y_{i}\) has a non-zero value, its value at least doubles whenever a transition is taken. Hence if \(\pi\) is "long" then along \(\pi\) there will be loops incrementing a counter \(x_{i}\) before the corresponding \(y_{i}\) achieves a non-zero value.
In the latter scenario, we may actually, subject to some bookkeeping, discard concrete values of \(x_{i}\) and \(y_{i}\) and only store their residue classes modulo \(\ell_{i}\), where \(\ell_{i}\) is the length of the first loop incrementing \(x_{i}\) along \(\pi\). In particular, if we are given a non-accepting run \(\pi^{\prime}\) such that \(\mathit{val}(\pi^{\prime},x_{i})\equiv\mathit{val}(\pi^{\prime},y_{i})\bmod \ell_{i}\) and \(\mathit{val}(\pi^{\prime},x_{i})<\mathit{val}(\pi^{\prime},y_{i})\) then \(\pi^{\prime}\) can be turned into a run \(\pi^{\prime\prime}\) where \(\mathit{val}(\pi^{\prime\prime},x_{i})=\mathit{val}(\pi^{\prime\prime},y_{i})\) by iterating the loop of length \(\ell_{i}\).
There are, however, some further subtleties that need to be taken care of. Consider the segment \(\pi^{\prime}\) of \(\pi\) between the first transition labeled by ++ on \(x_{i}\) and the first transition labeled by ++ on \(x_{i+1}\). If \(\pi^{\prime}\) contains no loop then we are in a situation where the first loop incrementing \(x_{i}\) is also the first loop incrementing \(x_{i+1}\). This means that the values of \(x_{i}\) and \(x_{i+1}\) get paired together, and hence, for an accepting run, also the values of \(y_{i}\) and \(y_{i+1}\) are paired together. In our approach, we deal with such circumstances by introducing so-called _\(y\)-constraints_. A \(y\)-constraint of the form \(y_{i}-y_{i+1}=\delta_{i}\) for some constant \(\delta_{i}\in\mathbb{N}\) asserts that the counters \(y_{i+1}\) and \(y_{i}\) must eventually have constant difference \(\delta_{i}\) along a run.
Otherwise, if \(\pi^{\prime}\) above contains a loop, the difference between the values of \(x_{i}\) and \(x_{i+1}\) is not necessarily constant, but lower-bounded by the length \(\delta_{i}\) of the loop-free sub path of \(\pi^{\prime}\). Thus, in an accepting run, the difference between \(y_{i}\) and \(y_{i+1}\) must also be at least \(\delta_{i}\), which is asserted by a \(y\)-constraint of the form \(y_{i}-y_{i+1}\geq\delta_{i}\).
### An abstraction for restricted la-VASS
Our decision procedure for emptiness of restricted la-VASS is based on reducing this problem to a reachability problem in a carefully designed finite-state abstraction of the state-space of la-VASS. Throughout this section, let \(V=\langle Q,2d,\Sigma,\Delta,\lambda,q_{0},F,\Phi\rangle\) be a restricted la-VASS. We first define the state space of the abstracted la-VASS.
**Definition 10**.: _An abstract configuration is a tuple_
\[\alpha=(q,m_{1},n_{1},\ldots,m_{d},n_{d},u_{1},u_{2},\ldots,u_{d-1}, \ell_{1},\ldots,\ell_{d})\\ \in Q\times(\mathbb{N}\cup\{\bot\})^{2d}\times\mathbb{N}^{d-1} \times(\mathbb{N}\cup\{\top\})^{d},\]
_such that \(m_{i},n_{i}\in[0,2dM_{i}]\cup\{\bot\}\) and \(u_{i}\in[0,U_{i}]\) and \(\ell_{i}\in[0,M_{i}-1]\cup\{\top\}\), where_
* \(M_{i}:=\lfloor|Q|^{((1/8)\cdot 32^{i-1}+1)}\rfloor\)_; and_
* \(U_{i}:=|Q|^{(32^{i-1}+4)}\)_._
The idea is that \(m_{i},n_{i}\) store the residue classes modulo \(\ell_{i}\) of the counter pair \(x_{i},y_{i}\) respectively, where the value \(\top\) for \(\ell_{i}\) acts as an indicator that we are storing actual values and not residue classes. The value \(\bot\) for some \(x_{i}\) or \(y_{i}\) indicates that the counter has not yet been initialised. If for an update function \(f\), \(f=\)**++** or \(f=\)**x2+1** then \(f(\bot):=1\); otherwise \(f(\bot):=\bot\), and we stipulate that \(\bot\bmod n=\bot\). The value of \(u_{i}\) in an abstract configuration carries the current difference between the value of the counters \(y_{i}\) and \(y_{i+1}\). This difference is potentially unbounded; however, for our purposes it suffices to only store its value if it is less than \(U_{i}\), and to indicate the fact that it is at least \(U_{i}\) by the value \(u_{i}=U_{i}\).
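To illustrate how quickly the bounds \(M_{i}\) and \(U_{i}\) grow, the following Python sketch (ours) computes them directly from their definitions:

```python
from fractions import Fraction
import math

def M(q, i):
    """M_i = floor(|Q|^((1/8) * 32^(i-1) + 1)), as in Definition 10."""
    e = Fraction(1, 8) * 32 ** (i - 1) + 1
    if e.denominator == 1:
        return q ** e.numerator          # exact integer arithmetic
    return math.floor(q ** float(e))     # only i = 1 has a fractional exponent

def U(q, i):
    """U_i = |Q|^(32^(i-1) + 4), as in Definition 10."""
    return q ** (32 ** (i - 1) + 4)

# With |Q| = 2: already M_3 is astronomically large.
print(M(2, 1), M(2, 2), U(2, 1))   # 2 32 32
print(M(2, 3) > 2 ** 128)          # True
```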
We denote the (finite) set of all abstract configurations of \(V\) by \(A(V)\). Let us now define a transition relation \(\xrightarrow{\cdot}\subseteq A(V)\times\Delta\times A(V)\) such that \(\alpha\xrightarrow{t}\alpha^{\prime}\), \(t=(q,a,q^{\prime})\in\Delta\) if and only if:
* \(\alpha=(q,m_{1},n_{1},\ldots,u_{d-1},\ell_{1},\ldots,\ell_{d})\) and \(\alpha^{\prime}=(q^{\prime},m^{\prime}_{1},n^{\prime}_{1},\ldots,u^{\prime}_{ d-1},\ell_{1},\ldots,\ell_{d})\);
* \(\lambda(t)=(f_{x_{1}},f_{y_{1}},\ldots,f_{x_{d}},f_{y_{d}})\);
* \(f_{x_{i}}=\)**++** for all \(i\) such that \(m_{i}\neq\bot\);
* if \(\ell_{i}\neq\top\), \(m^{\prime}_{i}=f_{x_{i}}(m_{i})\bmod\ell_{i}\) and \(n^{\prime}_{i}=f_{y_{i}}(n_{i})\bmod\ell_{i}\);
* if \(\ell_{i}=\top\), \(m^{\prime}_{i}=f_{x_{i}}(m_{i})\) and \(n^{\prime}_{i}=f_{y_{i}}(n_{i})\); and
* for all \(i\in\{1,\ldots,d-1\}\), \[u^{\prime}_{i}=\begin{cases}min(2u_{i}+1,U_{i})&\text{if }f_{y_{i}}=\text{\bf x2+1},f_{y_{i+1}}=\text{\bf x2}\\ min(2u_{i},U_{i})&\text{if }f_{y_{i}}=f_{y_{i+1}}\\ min(2u_{i}-1,U_{i})&\text{if }f_{y_{i}}=\text{\bf x2},f_{y_{i+1}}=\text{\bf x2+1}. \end{cases}\]
Assuming that the value of \(y_{i}\) is at least the value of \(y_{i+1}\), which we will always ensure, the definition of how to update \(u_{i}\) ensures that it exactly stores the difference \(y_{i}-y_{i+1}\) unless the difference becomes too large, in which case it is levelled off at \(U_{i}\).
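The capped update of a single \(u_{i}\) can be written out directly; the following Python sketch (ours) mirrors the case distinction above:

```python
def update_u(u, f_yi, f_yi1, cap):
    """Update the capped difference u between counters y_i and y_{i+1}.

    f_yi and f_yi1 are the updates applied to y_i and y_{i+1}, each either
    "x2" or "x2+1"; `cap` is the bound U_i at which the difference is
    levelled off.
    """
    if f_yi == "x2+1" and f_yi1 == "x2":
        u = 2 * u + 1
    elif f_yi == f_yi1:
        u = 2 * u
    else:                       # f_yi == "x2" and f_yi1 == "x2+1"
        u = 2 * u - 1
    return min(u, cap)

# If y_i - y_{i+1} = 3 and both counters are doubled, the difference becomes 6.
print(update_u(3, "x2", "x2", cap=100))   # 6
```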
An abstract configuration path is a sequence of abstract configurations and transitions of the form \(R=\alpha_{1}\xrightarrow{t_{1}}\alpha_{2}\xrightarrow{t_{2}}\ldots \xrightarrow{t_{n-1}}\alpha_{n}\).
Given two consecutive \(y\)-counters \(y_{i},y_{i+1}\) and \(\delta_{i}\in\mathbb{N}\), we say that \(y_{i}-y_{i+1}=\delta_{i}\) and \(y_{i}-y_{i+1}\geq\delta_{i}\) are _\(y\)-constraints_. Let \(Y\) be a set of \(y\)-constraints, an abstract configuration \(\alpha=(q,m_{1},n_{1},\ldots,u_{1},\ldots,u_{d-1},\ell_{1},\ldots,\ell_{d})\)_respects_\(Y\) whenever
* \(u_{i}\geq\delta_{i}\) for all constraints of type \(y_{i}-y_{i+1}\geq\delta_{i}\) in \(Y\),
* and \(u_{i}<U_{i}\) and \(u_{i}=\delta_{i}\) for all constraints \(y_{i}-y_{i+1}=\delta_{i}\) in \(Y\).
We say that \(\alpha_{f}\) is a _final abstract configuration respecting_\(Y\) whenever \(q\in F\), \(m_{i}=n_{i}\) for all \(1\leq i\leq d\), and \(\alpha_{f}\) respects \(Y\).
### _Witnessing certificates_
While any concrete accepting run of an la-VASS gives rise to an abstract configuration path ending in an accepting abstract configuration, the converse does not hold. This motivates the introduction of reachability and witnessing certificates, which are special
abstract configuration paths that carry further information that eventually enables us to derive from a witnessing certificate a concrete accepting run of an la-VASS.
A _reachability certificate_ is a tuple \((R,X,Y,L)\) such that \(R=\alpha_{1}\xrightarrow{t_{1}}\alpha_{2}\xrightarrow{t_{2}}\cdots \xrightarrow{t_{n-1}}\alpha_{n}\) is an abstract configuration path, and \(X,Y,L\colon\{1,\ldots,d\}\to\{1,\ldots,n\}\). Here, \(X(i)\) and \(Y(i)\) indicate the position where the \(x_{i}\)-counter and \(y_{i}\)-counter obtain a value different from \(\bot\) for the first time. Moreover, \(L(i)\) is the position where a loop of length \(\ell_{i}\) can be found. Formally, \((R,X,Y,L)\) is required to have the following properties:
1. \(\alpha_{1}=(q_{0},\bot,\ldots,\bot,0,\ldots,0,\ell_{1},\ldots,\ell_{d})\) and if \(\ell_{i}=\top\) then \(\ell_{j}=\top\) for all \(i<j\leq d\);
2. \(\lambda(x_{i},t_{X(i)-1})=\texttt{\small{+}}\texttt{\small{+}}\) and \(\lambda(y_{i},t_{Y(i)-1})=\texttt{\small{x2+1}}\) for all \(1\leq i\leq d\);
3. \(\lambda(x_{i},t_{j})=\texttt{\small{id}}\) for all \(1\leq j<X(i)-1\);
4. \(\lambda(y_{i},t_{j})=\texttt{\small{x2}}\) for all \(1\leq j<Y(i)-1\);
5. \(X,Y,L\) are monotonic;
6. for all \(1\leq i\leq d\), if \(\ell_{i}\neq\top\) then * \(X(i)\leq L(i)<Y(i)\); and * there is a simple \(\alpha_{L(i)}\)-loop \(\alpha_{L(i)}\xrightarrow{t_{1}^{\prime}}\alpha_{2}^{\prime}\xrightarrow{t_{2}^{\prime}}\cdots\xrightarrow{t_{\ell_{i}-1}^{\prime}}\alpha_{\ell_{i}}^{\prime}\xrightarrow{t_{\ell_{i}}^{\prime}}\alpha_{L(i)}\) of length \(\ell_{i}\).
Those conditions can be interpreted as follows. Condition (a) asserts that the certificate starts in an initial abstract configuration. We require that \(\top\) monotonically propagates since the absence of a loop for counter \(x_{i}\) implies that the remainder of a path is short, hence we can afford to subsequently store actual counter values and not residue classes. Conditions (b), (c) and (d) assert that \(X(i)\) and \(Y(i)\) are the first positions where the counters \(x_{i},y_{i}\) hold a value different from \(\bot\). Condition (e) states that the counters \(x_{i+1}\), \(y_{i+1}\) do not carry a value different from \(\bot\) before the counters \(x_{i}\) and \(y_{i}\), respectively. Condition (f) implies that, if \(\ell_{i}\neq\top\) then between the first update for counter \(x_{i}\) and the first update for counter \(y_{i}\) there is a position \(L(i)\) where we can find a loop in the abstract configurations of length \(\ell_{i}\). Notice that if \(x_{j}=\bot\) or \(y_{j}=\bot\) in \(\alpha_{L(i)}\) then \(x_{j}\) and \(y_{j}\) continue to hold \(\bot\) along this loop, i.e., this loop does not update counters that have not been initialised already.
Given \(R\), the set of \(y\)-constraints induced by \(R\) is the smallest set containing
1. \(y_{i}-y_{i+1}\geq\delta_{i}\), where \(\delta_{i}:=X(i+1)-X(i)\) if there is a \(j\) such that \(X(i)\leq L(j)<X(i+1)\); and
2. otherwise \(y_{i}-y_{i+1}=\delta_{i}\), where \(\delta_{i}:=X(i+1)-X(i)\),
for all \(1\leq i<d\) such that \(\ell_{i}\neq\top\).
We introduce some further notation. Given a reachability certificate \(R\), we denote by \(\pi(R)\) the run corresponding to \(R\) in the configuration graph of \(V\), with the initial configuration \((q_{0},0,0,\ldots,0,0)\). Given indices \(1\leq i\leq j\leq n\), we denote by \(R[i,j]\) the segment \(\alpha_{i}\xrightarrow{t_{i}}\alpha_{i+1}\cdots\xrightarrow{t_{j-1}}\alpha_{j}\) of \(R\), and by \(R[i]:=\alpha_{i}\). We say that \(R\) is a _witnessing certificate_ if, for \(a\leq d\) being the largest index such that \(\ell_{a}\neq\top\):
1. \(R[1,Y(a)]\) is a simple path and \(n-Y(a)\leq 2dM_{d+1}\);
2. \(\alpha_{n}\) is a final abstract configuration respecting the set of induced \(y\)-constraints; and
3. \(\textit{val}(\pi(R),x_{a})\leq\textit{val}(\pi(R),y_{a})\).
Sometimes we will speak of witnessing certificates _restricted_ to a set of counters. By that we mean a witnessing certificate where the relevant Conditions (a)-(f) are only required for that set of counters.
Now we are ready to provide a proof for Proposition 9, that stated that language emptiness for restricted la-VASS can be decided in \(\textsc{NSPACE}(|V|\cdot 2^{O(d)})\).
Proof of Proposition 9.: Clearly, an abstract configuration can be stored in space \(|V|\cdot 2^{O(d)}\). An NEXPSPACE algorithm can hence non-deterministically choose an initial configuration and non-deterministically verify that it leads to a final abstract configuration along a path
that is a witnessing certificate. To this end, the algorithm computes the set of induced \(y\)-constraints on-the-fly while guessing the reachability certificate, and verifies them in the last configuration. Note that the \(y\)-constraints can be stored in space \(|V|\cdot 2^{O(d)}\). Finally, the requirement \(\mathit{val}(\pi(R),x_{a})\leq\mathit{val}(\pi(R),y_{a})\) can also be verified in exponential space since we require that \(R[1,Y(a)]\) is a simple path and \(n-Y(a)\leq 2dM_{d+1}\).
In the next section we argue the correctness of our algorithm by proving the following theorem:
The language of a restricted la-VASS \(V\) is non-empty if and only if there exists a witnessing certificate for \(V\).
## 5 Correctness proof of the certificate
In this section we prove the theorem stated at the end of the previous section. The proof is split into two directions. In Section 5.1 below, we show that the existence of a witnessing certificate for an la-VASS implies that the language of the la-VASS is non-empty. The converse direction is then shown in Section 5.2.
### Witnessing certificates imply language non-emptiness
This section proves the following proposition.
If there exists a witnessing certificate for a restricted la-VASS \(V\) then \(L(V)\neq\emptyset\).
The idea behind the proof of Proposition 4.1 is that we obtain from a witnessing certificate \((R,X,Y,L)\) of an la-VASS \(V\) a sequence of runs of \(V\) such that the final run in that sequence is an accepting run of \(V\). Initially, we obtain a run that ends in a configuration where the counters are in a congruence relation. We then carefully pump the simple loops pointed to by \(L\), beginning from the last counter working towards the first.
To formally prove Proposition 4.1, let \((R,X,Y,L)\) be a witnessing certificate, and let \(\pi(R)\) be the run in the configuration graph of \(V\) induced by \(R\). Let \(a\leq d\) be maximal such that \(\ell_{a}\neq\top\). We now define a sequence of runs \(\pi_{0},\ldots,\pi_{a}\) such that the following invariant holds. In the final configuration of \(\pi_{i}\),
* \(m_{j}\leq n_{j}\) and \(m_{j}\equiv n_{j}\bmod\ell_{j}\) for the \(j\)-th counter pair, \(1\leq j\leq a-i\); and
* \(m_{j}=n_{j}\) for the \(j\)-th counter pair, \(a-i<j\leq d\).
It is clear that \(\pi_{a}\) then witnesses \(L(V)\neq\emptyset\). We proceed by induction on \(i\).
_Base case \(i=0\):_ Let \(\pi_{0}=\pi(R)\). Since \(R\) is a witnessing certificate, \(\mathit{val}(\pi(R),x_{a})\leq\mathit{val}(\pi(R),y_{a})\), and hence \(m_{a}\leq n_{a}\) in the last configuration of \(\pi_{0}\). Moreover, \(R\) respects the set of induced \(y\)-constraints. Hence \(n_{a-1}-n_{a}\geq\delta_{a-1}\), where \(\delta_{a-1}\) is the length of the path from \(R[X(a-1)]\) to \(R[X(a)]\). Hence \(n_{a-1}-n_{a}\geq m_{a-1}-m_{a}\) and thus \(m_{a-1}\leq n_{a-1}\). Iterating this argument for the remaining counters, we get that (i) of the invariant is fulfilled for \(\pi_{0}\); (ii) trivially holds since \(R\) ends in an accepting abstract configuration.
_Induction step \(i>0\):_ Let \(\pi_{i-1}\) be the path that exists by the induction hypothesis. If \(m_{a-i}=n_{a-i}\) in the last configuration of \(\pi_{i-1}\) then we are done and take \(\pi_{i}=\pi_{i-1}\); otherwise \(m_{a-i}<n_{a-i}\) and \(m_{a-i}\equiv n_{a-i}\bmod\ell_{a-i}\). Hence, there is some \(k\in\mathbb{N}\) such that \(n_{a-i}-m_{a-i}=k\cdot\ell_{a-i}\). Since \(\ell_{a-i}\neq\top\), let \(\beta:=\alpha_{L(a-i)}\xrightarrow{t_{1}}\alpha_{2}\xrightarrow{t_{2}}\cdots\alpha_{\ell_{a-i}}\xrightarrow{t_{\ell_{a-i}}}\alpha_{L(a-i)}\) be the simple \(\alpha\)-loop at position \(L(a-i)\) that is guaranteed to exist since \(R\) is a witnessing certificate. We insert the transitions of \(\beta^{k}\) and the induced updated configurations into \(\pi_{i-1}\) at position \(L(a-i)\). Notice that \(L(a-i)<X(a-i+1)\). Otherwise, by the definition of
the induced \(y\)-constraints, \(y_{a-i}-y_{a-i+1}=\delta_{a-i}\) is in the set of induced \(y\)-constraints, where \(\delta_{i}=X(a-i+1)-X(a-i)\). Since the last abstract configuration of \(R\) respects the set of \(y\)-constraints, it must be the case that in the last configuration of \(\pi_{i-1}\), \(n_{a-i}-n_{a-i+1}=\delta_{a-i}\) and \(m_{a-i}-m_{a-i+1}=\delta_{a-i}\), so \(m_{a-i}=n_{a-i}\), because after the position \(X(a-i+1)-1\) in \(R\) and thus \(\pi_{i-1}\), the counters \(x_{a-i},x_{a-i+1}\) get incremented simultaneously. This contradicts our assumption that \(m_{a-i}\neq n_{a-i}\). Thus, the counters \(x_{a-i+1},y_{a-i+1},\ldots,x_{d},y_{d}\) remain unchanged by the insertion of \(\beta^{k}\), so (ii) and consequently (i) continues to hold in the last configuration of \(\pi_{i}\) for those counters. Moreover, due to the ordering conditions imposed on witnessing certificates, the value of \(y_{a-i}\) does not change either, and hence \(m_{a-i}=n_{a-i}\) in the last configuration of \(\pi_{i}\). Since \(\beta\) is a loop in the abstract configuration space, we have \(m_{j}\equiv n_{j}\bmod\ell_{j}\) for all \(1\leq j<a-i\) and the values of \(u_{j}\), for all \(1\leq j<a\) are preserved.
### Reachability yields witnessing certificates
We now turn towards the converse direction and show that we can obtain a witnessing certificate from a run witnessing non-emptiness.
If a restricted la-VASS \(V\) admits an accepting run then there exists a witnessing certificate for \(V\).
We begin with defining a function that turns a configuration from \(C(V)\) into an abstract configuration. This function is parameterised by \(\ell_{1},\ldots,\ell_{d}\in\mathbb{N}_{+}\cup\{\top\}\):
\[\begin{aligned}f_{V}((q,m_{1},n_{1},\ldots,m_{d},n_{d}),\ell_{1},\ldots,\ell_{d}):={}&(q,m_{1}\circ\ell_{1},n_{1}\circ\ell_{1},\ldots,m_{d}\circ\ell_{d},n_{d}\circ\ell_{d},\\&\ \min(n_{1}-n_{2},U_{1}),\ldots,\min(n_{d-1}-n_{d},U_{d-1}),\ell_{1},\ldots,\ell_{d})\,.\end{aligned}\]
Here, \(m\circ\ell:=\bot\) if \(m=0\); \(m\circ\ell:=m\bmod\ell\) if \(\ell\in\mathbb{N}_{+}\); and \(m\circ\ell:=m\) if \(\ell=\top\). We lift the definition of \(f_{V}\) to paths of concrete runs \(\pi\) in the natural way, and write \(f_{V}(\pi,\ell_{1},\ldots,\ell_{d})\) for the resulting sequence of abstract configurations. Let \(\pi=c_{1}\xrightarrow{t_{1}}c_{2}\cdots\xrightarrow{t_{n-1}}c_{n}\) be a run witnessing \(L(V)\neq\emptyset\). We show how to obtain a witnessing certificate \(R\) from \(\pi\). Without loss of generality, in \(c_{n}\) we have \(m_{1}\geq m_{2}\geq\ldots m_{d}>0\).
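To make the abstraction concrete, the following small Python sketch (the helper names and the \(\bot\)/\(\top\) tags are ours, purely for illustration) implements \(m\circ\ell\) and applies \(f_{V}\) to a single configuration.

```python
BOT, TOP = "bot", "top"   # stand-ins for the symbols ⊥ and ⊤

def circ(m, ell):
    """m ∘ ℓ: ⊥ if m = 0, m mod ℓ if ℓ ∈ N+, and m unchanged if ℓ = ⊤."""
    if m == 0:
        return BOT
    return m if ell == TOP else m % ell

def f_V(q, counters, ells, U):
    """counters = [(m_1, n_1), ..., (m_d, n_d)], ells = [ℓ_1, ..., ℓ_d], U = [U_1, ..., U_{d-1}]."""
    abstract = [q]
    for (m, n), ell in zip(counters, ells):
        abstract += [circ(m, ell), circ(n, ell)]
    for i in range(len(counters) - 1):      # capped differences of consecutive y-counters
        abstract.append(min(counters[i][1] - counters[i + 1][1], U[i]))
    abstract += list(ells)
    return tuple(abstract)

# Two counter pairs with ℓ_1 = 3, ℓ_2 = ⊤ and threshold U_1 = 5:
print(f_V("q0", [(7, 10), (0, 4)], [3, TOP], [5]))   # ('q0', 1, 1, 'bot', 4, 5, 3, 'top')
```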
To this end, we show how from the accepting run \(\pi\) we can iteratively define a sequence \(R_{0},R_{1},R_{2}\ldots,R_{d}\) of abstract runs and identify the required \(\ell_{1},\ldots,\ell_{d}\in\mathbb{N}_{+}\cup\{\top\}\) and \(X,Y,L\) such that \((R_{d},X,Y,L)\) is a reachability certificate. Let \(X(i):=j\) such that \(j\) is the first position in \(\pi\) where the value of counter \(x_{i}\) is non-zero; analogously define \(Y(i)\) to be the first position where the value of \(y_{i}\) is non-zero. Clearly, \(X,Y\) are monotonic and \(X(i)\leq Y(i)\), for all \(1\leq i\leq d\). Otherwise, if a counter \(y_{i}\) gets initialised before the counter \(x_{i}\) in \(\pi\), it must be the case that \(n_{i}>m_{i}\) in \(c_{n}\) and therefore \(c_{n}\) cannot be an accepting configuration.
Recall that \(\pi\) has length \(n\). In our proof, the subsequent technical lemma will allow us to conclude that, if for a counter pair \(x_{i},y_{i}\) the \(y_{i}\) counter gets updated shortly after the \(x_{i}\) counter then the run will end shortly after and counter pairs \(x_{j},y_{j}\) for \(j\geq i\) will consequently have small values.
If \(Y(i)-X(i)\leq dM_{i}\) for some \(1\leq i\leq d\) then \(n-Y(i)<dM_{i}\), so \(m_{j},n_{j}\leq 2dM_{i}\) in \(c_{n}\) for all \(i\leq j\leq d\).
Proof.: We have that \(Y(i)-X(i)\leq dM_{i}\) implies that \(val(\pi[1,Y(i)],x_{i})\leq dM_{i}+1\), and since \(val(\pi[1,Y(i)+k],y_{i})\geq 2^{k}\) we get that:
* \(val(\pi,y_{i})\geq 2^{n-Y(i)}\); and
* \(val(\pi,x_{i})\leq dM_{i}+n-Y(i)+1\).
Assume that \(n-Y(i)\geq dM_{i}\). Then, \(2^{n-Y(i)}-(dM_{i}+n-Y(i)+1)>2^{n-Y(i)}-(2n-2Y(i)+1)>0\), if \(n-Y(i)\geq 3\). However, \(\pi\) is an accepting path, so \(val(\pi,x_{i})=val(\pi,y_{i})\), and we get a contradiction. Thus, we must have that \(n-Y(i)<dM_{i}\) which implies that \(val(\pi,x_{i})\leq 2dM_{i}\), so \(m_{i}=n_{i}\leq 2dM_{i}\) and for any \(j\), \(i<j\leq d\), \(m_{j}\leq m_{i}\) and \(n_{j}\leq n_{i}\), so \(m_{j},n_{j}\leq 2dM_{i}\) in \(c_{n}\), for all \(i\leq j\leq d\) since \(\pi\) is an accepting path.
Let \(R_{0}:=f_{V}(\pi,1,1,\ldots,1)\). Note that \(R_{0}\) together with \(X\) and \(Y\) as defined above adheres to Conditions (a)-(e) of reachability certificates.
Suppose \(R_{i-1}\) and \(\ell_{1},\ldots,\ell_{i-1}\) have been constructed. If \(i>1\), and \(L(i-1)\geq X(i)\) or \(\ell_{i-1}=\top\) then we choose \(\ell_{i}:=\ell_{i-1}\), \(L(i)=L(i-1)\) and \(R_{i}:=f_{V}(\pi,\ell_{1},\ldots,\ell_{i},1,\ldots,1)\). Otherwise, we distinguish two cases.
* \(Y(i)-X(i)<dM_{i}\): we choose \(\ell_{i}:=\top\) and \(L(i):=X(i)\).
* \(Y(i)-X(i)\geq dM_{i}\): then there is a segment in \(R_{i-1}[X(i),Y(i)]\) of length greater than \(M_{i}\) on which no \(x\)-counter has its first ++ transition. Let \(N_{i}\) be the maximum number of different abstract configurations on this segment. Since \(\ell_{i-1}\neq\top\) we know that \(m_{j},n_{j}\) can take at most \(M_{j}\) different values for all \(1\leq j<i\), as they can either be \(\bot\) or a residue class modulo \(M_{j}\). Also, for all \(i\leq j\leq d\) the values of \(m_{j},n_{j}\) have a constant value, either \(0\) or \(\bot\), on this segment, and \(u_{i}=\cdots=u_{d}=0\) in all abstract configurations of this segment. So \[\begin{aligned}N_{i}&\leq|Q|\prod_{1\leq j<i}M_{j}^{2}\cdot U_{j}\\&\leq|Q|\prod_{1\leq j<i}|Q|^{(1/4)\cdot 32^{j-1}+2+32^{j-1}+4}\\&\leq|Q|^{(1/3968)(5\cdot 32^{i}-23968)+6i+1}\\&<|Q|^{(1/8)\cdot 32^{i-1}+1}\\&=M_{i}\end{aligned}\] By the pigeonhole principle, there is a smallest \(k\), \(X(i)\leq k<Y(i)\), \(\ell<M_{i}\), and a simple loop \(\alpha_{k}\xrightarrow{t_{k}}\cdots\xrightarrow{t_{k+\ell}}\alpha_{k+\ell+1}=\alpha_{k}\) in \(R_{i-1}\). We choose \(L(i):=k\), \(\ell_{i}:=\ell\) and let \(R_{i}:=f_{V}(\pi,\ell_{1},\ldots,\ell_{i},1,\ldots,1)\).

By construction, \((R_{d},X,Y,L)\) is a reachability certificate. It remains to turn it into a witnessing certificate. In particular, this requires removing loops from \(R_{d}\), ensuring that the final segment of \(R_{d}\) is short, and establishing that \(R_{d}\) is consistent with the induced \(y\)-constraints.
Let \(R:=R_{d}=f_{V}(\pi,\ell_{1},\ldots,\ell_{d})\) and \(a\) be the largest index such that \(\ell_{a}\neq\top\). In order to make \(R\) loop-free, we iterate the following process:
* identify the first simple loop \(\alpha_{k}\xrightarrow{t_{k}}\cdots\xrightarrow{t_{k+\ell}}\alpha_{k+\ell+1}\) in \(R[1,Y(a)]\) and replace it by \(\alpha_{k}\); observe that for \(I:=\{k+1,\ldots,k+\ell\}\), we have \(I\cap\{X(i),Y(i),L(i):1\leq i\leq d\}=\emptyset\) since \(\alpha_{X(i)-1}\xrightarrow{t_{X(i)-i}}\alpha_{X(i)}\) occurring in \(R_{d}\) means that \(x_{i}\) has value \(\bot\) in \(\alpha_{X(i)-1}\) and a value different from \(\bot\) in \(\alpha_{X(i)}\), and thus \(\alpha_{X(i)}\) cannot be part of a loop; the same argument applies to any \(Y(i)\). Finally, since \(L(i)\) was chosen as the index of the first configuration of the first cycle appearing after \(X(i)\), we have \(L(i)\not\in I\) for all \(1\leq i\leq d\) as well.
* update \(X,Y,L\) such that for all \(i\) with \(X(i)>k\), \(X(i):=X(i)-\ell\), and analogously \(Y(i):=Y(i)-\ell\) and \(L(i):=L(i)-\ell\) for the respective \(i\).

This process guarantees that \(R[1,Y(a)]\) is loop-free; a small sketch of the splice step is given below. It is easy to verify that \((R,X,Y,L)\) obtained in this way is a reachability certificate and that the last abstract configuration of \(R\) is accepting.
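A minimal Python sketch (a plain list-of-states view with our own helper names, not the paper's data structures) of one splice step: find the first position at which an abstract configuration repeats within the prefix and cut the enclosed simple loop out.

```python
def remove_first_loop(states, limit):
    """Splice out the first simple loop in states[0..limit]; return (shorter run, loop length)."""
    seen = {}
    for pos, st in enumerate(states[:limit + 1]):
        if st in seen:                      # states[seen[st]] .. states[pos] is the first simple loop
            k = seen[st]
            return states[:k + 1] + states[pos + 1:], pos - k
        seen[st] = pos
    return states, 0                        # the prefix is already loop-free

run = ["a", "b", "c", "b", "d", "e"]
shorter, ell = remove_first_loop(run, limit=4)
assert shorter == ["a", "b", "d", "e"] and ell == 2
# The positions X(i), Y(i), L(i) lying beyond the removed loop are then shifted down by ell.
```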
We now show that the \(y\)-constraints induced by \(R\) are valid in the final configuration of \(R\). To this end, we first show that for all \(1\leq i\leq d\) such that \(\ell_{i}\neq\top\), \(X(i+1)-X(i)<U_{i}\). Consider the simple path \(\alpha_{X(i)}\xrightarrow{t_{X(i)}}\alpha_{X(i)+1}\xrightarrow{t_{X(i)+1}} \cdots\xrightarrow{t_{X(i+1)-1}}\alpha_{X(i+1)}\). If \(Y(i)\geq X(i+1)\) then clearly \(X(i+1)-X(i)\leq N_{i}<U_{i}\), where \(N_{i}\) is defined as above. Otherwise, there is a \(k\in\mathbb{N}\) such that the path decomposes as
\[\alpha_{X(i)}\xrightarrow{t_{X(i)}}\cdots\alpha_{Y(i)}\xrightarrow{t_{Y(i)}} \cdots\alpha_{Y(i)+k}\xrightarrow{t_{Y(i)+k}}\cdots\xrightarrow{t_{X(i+1)-1}} \alpha_{X(i+1)}\]
and
* \(u_{i}=0\) in all abstract states \(\alpha_{j}\) with \(X(i)\leq j\leq Y(i)\);
* \(u_{i}=U_{i}\) in all abstract states \(\alpha_{j}\) with \(Y(i)+k\leq j\leq X(i+1)\); and
* \(k\leq\log U_{i}\).
Thus, the maximum length of \(R[X(i),X(i+1)]\) is bounded by:
\[\begin{aligned}N_{i}\cdot M_{i}+\log U_{i}+N_{i}\cdot 2M_{i}&\leq 2\cdot M_{i}^{3}+\log U_{i}\\&\leq|Q|^{(3/8)\cdot 32^{i-1}+4}+|Q|^{5(i-1)+1}+4|Q|\\&<|Q|^{32^{i-1}+4}\\&=U_{i}\end{aligned}\]
We can now show that \(R\) respects the induced \(y\)-constraints. Fix some \(1\leq i\leq d\) such that \(\ell_{i}\neq\top\). We distinguish two cases:
* There is no \(1\leq j\leq a\) such that \(X(i)\leq L(j)\leq X(i+1)\). Thus, we know that \(y_{i}-y_{i+1}=\delta_{i}\) is in the set of induced \(y\)-constraints. Also, \(val(\pi,y_{i})-val(\pi,y_{i+1})=val(\pi,x_{i})-val(\pi,x_{i+1})=X(i+1)-X(i)= \delta_{i}\) since we did not remove any abstract loops on the segment of \(R_{d}\) between the first ++ update for \(x_{i}\) and the first ++ update for \(x_{i+1}\). Finally, since \(\delta_{i}<U_{i}\) by the above argument, we conclude that \(u_{i}=\delta_{i}\) in the last abstract configuration \(R[n]\) of \(R\).
* Otherwise, \(X(i)\leq L(i)\leq Y(i)\), so \(y_{i}-y_{i+1}\geq\delta_{i}\) is in the set of induced \(y\)-constraints. However, \(val(\pi,y_{i})-val(\pi,y_{i+1})=val(\pi,x_{i})-val(\pi,x_{i+1})\geq X(i+1)-X(i )=\delta_{i}\) and again because \(\delta_{i}<U_{i}\) we can conclude that \(u_{i}\geq\delta_{i}\) in \(R[n]\).
This establishes that the \(y\)-constraints are satisfied. Let \(n\) be the index of the last abstract configuration of \(R\). For the final step, we now argue that \(val(\pi(R),x_{a})\leq val(\pi(R),y_{a})\) and \(n-Y(a)\leq 2dM_{d+1}\). We make a case distinction:
* \(a=d\): Note that \(val(\pi,x_{d})=val(\pi,y_{d})\). Since we only remove loops from \(R_{d}[1,Y(d)]\), we have that \(val(\pi(R),x_{d})\leq val(\pi(R),y_{d})\). If \(n-Y(d)\leq 2dM_{d+1}\) we are done with \((R,X,Y,L)\) as a witnessing certificate. Otherwise, assume \(n-Y(d)>2dM_{d+1}\). This implies that the path \(R[Y(d),n]\) must contain at least one simple loop. Consider iterating the following process:
* remove the first simple loop from \(R[Y(d),n]\) and update \(n:=n-\ell\), where \(\ell\) is the length of the simple loop that was removed; and
* stop if \(n-Y(d)\leq 2dM_{d+1}\). We argue that, \(n-Y(d)\geq M_{d}^{2}\). Let \(R^{\prime}\) and \(n^{\prime}\) be the previous values of \(R,n\) before the last iteration. It must be that \(n^{\prime}>2dM_{d+1}\) and since the length of any simple loop of \(R^{\prime}[Y(d)+1,n^{\prime}]\) is bounded by \(M_{d+1}\), we get that \(n-Y(a)\geq M_{d+1}\geq M_{d}^{2}\). Note that \(Y(d)-X(d)\leq M_{d}\cdot|Q|\cdot\prod_{1\leq j<d}M_{j}^{2}\cdot U_{j}\leq M_{d} ^{2}\), so \(val(\pi(R[1,Y(d)]),x_{d})\leq M_{d}^{2}\). It must be then the case that \(val(\pi(R),x_{d})\leq val(\pi(R),y_{d})\).
* \(a<d\): we know \(n-Y(a)\leq 2dM_{d+1}\) by Lemma 14. Moreover, we must have that \(val(\pi(R),x_{a})\leq val(\pi(R),y_{a})\) since \(val(\pi,x_{a})=val(\pi,y_{a})\) and we do not remove loops after the counter \(y_{a}\) is incremented.
## 6 A decidable fragment of string constraints
In this section, we show that a certain fragment of string constraints whose decidability status has been left open in the literature can be reduced in logarithmic space to generalised Semenov arithmetic, and is hence decidable in EXPSPACE. This demonstrates an important application of our results on generalised Semenov arithmetic, with deep connections to solving string constraints in practice, which has been one of the motivations for our work.
Let \(\Sigma=\{0,1\}\). The _theory of enriched string constraints_\(T_{\mathrm{inc}}\) is the first-order theory of the two-sorted structure
\[\langle\Sigma^{*},\mathbb{N};\{w\}_{w\in\Sigma^{*}},\cdot,\textit{len},\textit{ sn},\{R_{i}\}_{i\in\mathbb{N}},0,1,+\rangle,\]
where
* the binary function \(\cdot\) over \(\Sigma^{*}\) is the string concatenation operator,
* the unary function \(\textit{len}\colon\Sigma^{*}\to\mathbb{N}\) returns on input \(w\) the length \(|w|\) of \(w\),
* the unary function \(\textit{sn}\colon\Sigma^{*}\to\mathbb{N}\) on input \(u\) returns \(\llbracket u\rrbracket\), and
* \(R_{0},R_{1},\ldots\subseteq\Sigma^{*}\) is an enumeration of all regular languages.
The remaining predicates, constant and function symbols are defined in their standard semantics.
The above theory was introduced in [4], where an SMT solver addressing some fragments of this theory was defined, implemented, and compared to other state of the art solvers which can handle such string constraints. Extending [4], [3] presents in more details the motivation behind considering this theory and its fragments. More precisely, the authors of [3] analysed an extensive collection of standard real-world benchmarks of string constraints and extracted the functions and predicates occurring in them. The works [3, 4] focused on benchmarks that do not contain word equations, and the result of the aforementioned benchmark-analysis produced exactly the four functions and predicates mentioned above: _len_, _sn_, regular language membership, and concatenation of strings.
Complementing the practical results of [4], [3] showed a series of theoretical results regarding fragments of \(T_{\mathrm{inc}}\). In particular, the existential theory of \(T_{\mathrm{inc}}\) is shown to be undecidable. Moreover, [3] leaves as an open problem the question whether the existential theories of \(T_{\mathrm{REln}}\) and \(T_{\mathrm{REnc}}\), which drop the concatenation operator and length function, respectively, are decidable. From these two, the existential theories of \(T_{\mathrm{REln}}\) seems particularly interesting, as all instances from the benchmarks considered in the analysis [3] can be easily translated into a formula from this particular fragment of \(T_{\mathrm{REln}}\). Indeed, by the results reported in Table 1.b from [3], no instance contains both concatenation of strings and the _sn_ function; moreover, the concatenation of strings, which appears only in formulas involving regular membership predicates and, in some cases, length function, can be easily removed in all cases by a folklore technique called automata splitting (see, e.g., [1]). Therefore, showing that the existential fragment of \(T_{\mathrm{REln}}\) is decidable essentially shows that one can decide all the instances from the standard benchmarks analysed in [3].
In this paper, we solve this open problem. By a reduction to generalised Semenov arithmetic, we can settle the decidability status of \(T_{\mathrm{REln}}\):
The existential fragment of \(T_{\mathrm{REln}}\) is decidable in EXPSPACE.
Again, we treat \(T_{\mathrm{REln}}\) as a relational structure. Without loss of generality, we may assume that atomic formulas of \(T_{\mathrm{REln}}\) are one of the following:
* \(R(s)\) for some string variable \(s\) and a regular language \(R\);
* \(s=t\) for some string variables \(s\) and \(t\);
* \(\mathit{len}(s,x)\) or \(\mathit{sn}(s,x)\) for some string variable \(s\) and integer variable \(x\); or
* \(\boldsymbol{a}\cdot\boldsymbol{x}\geq b\) for a vector of integer variables \(\boldsymbol{x}\).
The size of a formula of \(T_{\mathrm{REln}}\) is defined in the standard way as the number of symbols required to write it down, assuming binary encoding of numbers, and where the size of some \(R\) is the size of the smallest DFA accepting \(R\). Furthermore, in a quantifier-free formula \(\varphi\) of \(T_{\mathrm{REln}}\), we may without loss of generality assume that all atomic formulas occur positive, except for atomic formulas \(s=t\).
We now describe the reduction to existential Semenov arithmetic. The idea underlying our proof is that we map a string \(s\) to the number \(\llbracket 1s\rrbracket\). Note that we cannot directly treat strings in \(\Sigma^{*}\) as natural numbers due to the possibility of leading zeros. This encoding enables us to treat strings as numbers and to implement the functions \(\mathit{sn}\) and \(\mathit{len}\) in generalised Semenov arithmetic. Given a quantifier-free formula \(\varphi\) of \(T_{\mathrm{REln}}\), we define by structural induction on \(\varphi\) a function \(\sigma\) that maps \(\varphi\) to an equi-satisfiable formula of generalised Semenov arithmetic:
* Case \(\varphi\equiv R(s)\): \(\sigma(\varphi):=(0^{*}1R)(s)\);
* Case \(\varphi\equiv s=t\) or \(\varphi\equiv\neg(s=t)\): \(\sigma(\varphi):=s=t\) or \(\sigma(\varphi):=\neg s=t\), respectively;
* Case \(\varphi\equiv\mathit{sn}(s,x)\): \(\sigma(\varphi):=\exists y.\,2^{y}\leq s\wedge s<2^{y+1}\wedge x=s-2^{y}\);
* Case \(\varphi\equiv\mathit{len}(s,x)\): \(\sigma(\varphi):=2^{x}\leq s\wedge s<2^{x+1}\);
* Case \(\varphi\equiv\boldsymbol{a}\cdot\boldsymbol{x}\geq b\): \(\sigma(\varphi):=\boldsymbol{a}\cdot\boldsymbol{x}\geq b\); and
* Case \(\varphi\equiv\varphi_{1}\sim\varphi_{2},\ \sim\ \in\{\land,\lor\}\): \(\sigma(\varphi):=\sigma(\varphi_{1})\sim\sigma(\varphi_{2})\).
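
As a sanity check on the two non-trivial cases of \(\sigma\), the following Python sketch (the helper names are ours) encodes a string \(s\) as \(\llbracket 1s\rrbracket\) and verifies that the arithmetic constraints produced for \(\mathit{len}\) and \(\mathit{sn}\) hold for this encoding.

```python
def encode(s: str) -> int:
    """Map s ∈ {0,1}* to ⟦1s⟧; the leading 1 protects leading zeros of s."""
    return int("1" + s, 2)

def sn(s: str) -> int:
    """The numeric value ⟦s⟧ of the string itself."""
    return int(s, 2) if s else 0

s = "0011"
v = encode(s)                      # ⟦1s⟧ = 19
n = len(s)                         # 4
assert 2**n <= v < 2**(n + 1)      # sigma(len(s, x)):  2^x <= s < 2^(x+1)
assert sn(s) == v - 2**n           # sigma(sn(s, x)):   exists y. 2^y <= s < 2^(y+1) and x = s - 2^y
```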
Let \(\varphi\) be a quantifier-free formula of \(T_{\mathrm{REln}}\) and \(S\) be the set of string variables occurring in \(S\). Then \(\varphi\) is satisfiable if and only if \(\sigma(\varphi)\land\bigwedge_{s\in S}s>0\) is satisfiable.
Proof.: Observe that the variables occurring in \(\varphi\) are the same variables as those occurring in \(\sigma(\varphi)\). Let \(S\) be the set of string variables in \(\varphi\) and \(X\) be the set of integer-valued variables in \(\varphi\). Given an assignment \(\mathcal{I}_{S}\colon S\to\{0,1\}^{*}\), we define \(\tilde{\mathcal{I}}_{S}:=S\to\mathbb{N}\) such that \(\tilde{\mathcal{I}}_{S}(s):=\llbracket 1\mathcal{I}_{S}(s)\rrbracket\). Subsequently, denote by \(\mathcal{I}_{X}\colon X\to\mathbb{N}\) an assignment to the integer-valued variables. We show by structural induction on \(\varphi\) that \((\mathcal{I}_{S},\mathcal{I}_{x})\models\varphi\) if and only if \((\tilde{\mathcal{I}}_{S},\mathcal{I}_{X})\models\sigma(\varphi)\land\bigwedge_{s \in S}s>0\):
* Case \(\varphi\equiv R(s)\): Let \(\mathcal{I}_{S}(s)=b_{n-1}\cdots b_{0}\), we have \(\mathcal{I}_{S}(s)\in R\) if and only if \(2^{n}+\sum_{i=0}^{n-1}2^{i}b_{i}\in\llbracket 0^{*}1R\rrbracket\), noting that \(2^{n}+\sum_{i=0}^{n-1}2^{i}b_{i}=\llbracket 1b_{n-1}\cdots b_{0}\rrbracket= \tilde{\mathcal{I}}_{S}(s)\).
* Case \(\varphi\equiv\mathit{sn}(s,x)\): Let \(\mathcal{I}_{S}(s)=b_{n-1}\cdots b_{0}\) and \(\mathcal{I}_{X}(x)=m\). We have that \(m=\sum_{i=0}^{n-1}2^{i}b_{i}\) if and only if \(m=\tilde{\mathcal{I}}_{S}(s)-2^{n}\) if and only if \((\tilde{\mathcal{I}}_{S},\mathcal{I}_{X})\models\sigma(\varphi)\land\bigwedge_{s \in S}s>0\).
* Case \(\varphi\equiv\mathit{len}(s,x)\): Let \(\mathcal{I}_{S}(s)=b_{n-1}\cdots b_{0}\) and \(\mathcal{I}_{X}(x)=m\). We have that \(m=n\) if and only if \(2^{m}\leq\llbracket 1b_{n-1}\cdots b_{0}\rrbracket<2^{m+1}\) if and only if \((\tilde{\mathcal{I}}_{S},\mathcal{I}_{X})\models\sigma(\varphi)\land\bigwedge_{s \in S}s>0\).
The remaining cases follow obviously.
## 7 Conclusion
The main result of this article has been to show that the existential theory of generalised Semenov arithmetic is decidable in EXPSPACE. As an application of this result, we showed that a highly relevant class of string constraints with length constraints is also decidable in EXPSPACE; the decidability of this class was the main problem left open in [3]. On a technical level, those results were obtained by showing that a restricted class of labelled affine VASS has an EXPSPACE-decidable language emptiness problem. The structural restrictions
imposed on those restricted la-VASS are rather strong, though necessary to obtain a decidable class of la-VASS.
An interesting aspect of our approach is that it establishes automaticity of the existential fragment of a logical theory that is different from traditional notions of automaticity, which are based on finite-state automata or tree automata over finite or infinite words and trees [5, 11], respectively. It would be interesting to better understand whether there are natural logical theories whose (existential) fragments are, say, Petri-net or visibly-pushdown automatic.
We have ignored algorithmic lower bounds throughout this article, but it would, of course, be interesting to see whether the upper bounds of the decision problems we considered in this article are tight. It is clear that generalised Semenov arithmetic is PSPACE-hard since it can readily express the DFA intersection non-emptiness problem, but this still leaves a considerable gap with respect to the EXPSPACE upper bound we established. In particular, the recent results of [2] showing an NEXP upper bound for the existential fragment of Semenov arithmetic suggest that, if an EXPSPACE lower bound for existential generalised Semenov arithmetic is possible, it will require the use of regular predicates.
|
2308.02885 | REED: Chiplet-Based Accelerator for Fully Homomorphic Encryption | Fully Homomorphic Encryption (FHE) enables privacy-preserving computation and
has many applications. However, its practical implementation faces massive
computation and memory overheads. To address this bottleneck, several
Application-Specific Integrated Circuit (ASIC) FHE accelerators have been
proposed. All these prior works put every component needed for FHE onto one
chip (monolithic), hence offering high performance. However, they suffer from
practical problems associated with large-scale chip design, such as
inflexibility, low yield, and high manufacturing cost.
In this paper, we present the first-of-its-kind multi-chiplet-based FHE
accelerator `REED' for overcoming the limitations of prior monolithic designs.
To utilize the advantages of multi-chiplet structures while matching the
performance of larger monolithic systems, we propose and implement several
novel strategies in the context of FHE. These include a scalable chiplet design
approach, an effective framework for workload distribution, a custom
inter-chiplet communication strategy, and advanced pipelined Number Theoretic
Transform and automorphism design to enhance performance.
Experimental results demonstrate that REED 2.5D microprocessor consumes 96.7
mm$^2$ chip area, 49.4 W average power in 7nm technology. It could achieve a
remarkable speedup of up to 2,991x compared to a CPU (24-core 2xIntel X5690)
and offer 1.9x better performance, along with a 50% reduction in development
costs when compared to state-of-the-art ASIC FHE accelerators. Furthermore, our
work presents the first instance of benchmarking an encrypted deep neural
network (DNN) training. Overall, the REED architecture design offers a highly
effective solution for accelerating FHE, thereby significantly advancing the
practicality and deployability of FHE in real-world applications. | Aikata Aikata, Ahmet Can Mert, Sunmin Kwon, Maxim Deryabin, Sujoy Sinha Roy | 2023-08-05T14:04:39Z | http://arxiv.org/abs/2308.02885v2 | # REED: Chiplet-Based Scalable Hardware Accelerator for Fully Homomorphic Encryption
###### Abstract
Fully Homomorphic Encryption (FHE) has emerged as a promising technology for processing encrypted data without the need for decryption. Despite its potential, its practical implementation has faced challenges due to substantial computational overhead. To address this issue, we propose the _first_ chiplet-based FHE accelerator design 'REED', which enables scalability and offers high throughput, thereby enhancing homomorphic encryption deployment in real-world scenarios. It accounts for the well-known wafer yield issues during fabrication, which significantly impact production costs. In contrast to state-of-the-art approaches, we also address data exchange overhead by proposing a non-blocking inter-chiplet communication strategy. We incorporate novel pipelined Number Theoretic Transform and automorphism techniques, leveraging parallelism and providing high throughput.
Experimental results demonstrate that the REED 2.5D integrated circuit consumes 177 mm\({}^{2}\) chip area and 82.5 W average power in 7nm technology, and achieves an impressive speedup of up to 5,982\(\times\) compared to a CPU (24-core 2\(\times\)Intel X5690), with 2\(\times\) better energy efficiency and 50% lower development cost than a state-of-the-art ASIC accelerator. To evaluate its practical impact, we are the _first_ to benchmark an encrypted deep neural network training. Overall, this work successfully enhances the practicality and deployability of fully homomorphic encryption in real-world scenarios.
## I Introduction
Data breaches compromising large cloud storage, and jeopardizing millions of private accounts, have become a daily threat [26, 37, 43]. The vulnerability stems from storing data in an unencrypted format, leaving it susceptible to attacks. Even if the server encrypts the data for storage, it needs to decrypt it for processing, exposing it to potential privacy breaches. This is where Fully Homomorphic Encryption (FHE) comes into play. FHE is a promising cryptographic technique that enables secure and privacy-preserving computation, communication, and storage. Servers can compute on homomorphically encrypted data and return encrypted outputs. This approach ensures that every client holds the key to his/her privacy. FHE's potential utility spans a wide range of applications, including cloud computing, data processing, and machine learning. The concept of Homomorphic Encryption was first presented in 1978 by Rivest, Adleman, and Dertouzos [54], and the first FHE scheme was introduced in 2009 by Craig Gentry [22]. Since then, numerous algorithmic proposals have emerged [6, 11, 12, 13, 17].
A common limitation shared by these schemes is the significant computation and memory overhead. This results in a performance degradation of 10,000\(\times\) to 100,000\(\times\)[30] compared to unencrypted computation. It can be attributed to the fact that plain data expands into large polynomials during homomorphic computation, and simple operations, like multiplication, translate into complex polynomial operations homomorphically. Consequently, this drawback hinders FHE deployment in real-life scenarios. To bridge this performance gap between plain and homomorphic computations, researchers have proposed acceleration techniques on various platforms, including CPU, GPU, FPGA, and ASIC [1, 4, 18, 19, 21, 29, 32, 33, 34, 40, 46, 50, 51, 52, 53, 55, 56, 57, 58, 59, 60, 61, 64, 67, 68, 69, 70].
Software implementations offer flexibility but suffer from poor performance. While attempts have been made to bridge this gap with GPU-based solutions [4, 29], they have yet to match the performance of FPGA-based works [1, 40, 53, 70], which not only exhibit superior performance but also provide re-programmability and real-time verifiability. Nonetheless, there is still a need to narrow the runtime gap between plain and homomorphic computations in FPGA-based solutions. Currently, the most notable acceleration results have been achieved through ASIC works. However, it is important to note that in pursuit of maximizing acceleration, several works have deviated from the critical requirement of ensuring real-life deployability. This is mainly due to the monolithic implementation approach adopted by all ASIC-based works in the literature.
In a monolithic design, all components are integrated into a single chip, which is relatively easy to design. However, it suffers from various limitations such as inflexibility, low yield, and higher manufacturing costs [23]. Works such as [32, 33, 34] propose chips with an approximate size of 400mm\({}^{2}\), resulting in a manufacturing yield of only 67% [39]. They also face high development costs (\(\approx\)\$25M [45]) and very long time-to-market (\(>\)3 years). In such cases, the architecture is excessively large, surpassing manufacturing capabilities and making pre-silicon verification on FPGAs infeasible. These limitations pose a significant obstacle to the fast implementation of homomorphic encryption techniques.
Additionally, these proposals overlook the crucial need for communication-computation parallelism. Off-chip to on-chip communication is considerably slower than the chip's computation speed. Therefore, while the proposed designs may demonstrate good performance on shallow benchmarks [25],
[59], they are likely to experience significant performance degradation for complex tasks like neural network training. Enhancing the overall efficiency of the design requires leveraging communication-computation parallelism.
Overall, we have observed that research on FHE acceleration has reached a saturation point with limited scalability. The general approach to address this is proposing larger chips. However, this has already reached a stage where manufacturing has become infeasible, making the proposed acceleration unattainable. To overcome this major challenge, we embrace chiplet-based design [72, 24, 39], a modular approach to building big architectures that offers numerous advantages, including scalability, high yield, low cost, quick pre-silicon verification, and less time-to-market [23, 7]. Since existing designs cannot be efficiently scaled down to chiplet-based designs, we introduce a scalable design methodology that can be easily scaled up or down depending on requirements and constraints. We hope this will open practical possibilities for privacy-preserving computation.
### _Our Contribution_
We unfold our major contribution across two dimensions of scalability: a configurable design methodology and a modular implementation approach. Our contributions are as follows:
* **Scalable hardware design- REED:** We propose a configuration-based (\(N_{1}\times N_{2}\)) design methodology while incorporating communication bandwidth of \(N_{2}\) coefficients. All the building blocks under this design methodology offer a throughput of \(f/N_{1}\) operations per second, where \(f\) is the design's operating frequency. By changing the configuration parameters (\(N_{1},N_{2}\)), the architecture can be adapted to the desired area and throughput requirements. Thus, it addresses the various constraints in real-world scenarios, improving utility.
* **Chiplet-based implementation- REED 2.5D:** We take a step back from existing implementation techniques and present a novel and cost-effective chiplet-based FHE implementation approach, which is inherently scalable. REED with 2.5D packaging surpasses state-of-the-art work SHARP\({}_{64}\)[32] with 2\(\times\) better energy efficiency and 2\(\times\) less development cost. To the extent of our knowledge, this is the _first chiplet architecture for accelerating FHE_. We further explore the potential of extending this work by leveraging the promising 3D Integrated Circuit (IC) technology [7].
* **High-performance computation:** We present innovative design techniques for the number-theoretic transform (NTT) and automorphism. Our approach introduces a Hybrid NTT unit that eliminates the need for the expensive transpose operation or scratchpad memory and an easily configurable automorphism unit. These building blocks leverage parallelism and pipelining for high throughput.
* **Communication-computation parallelism:** Chiplet-based architectures may suffer from slow inter-chiplet communication bottleneck. We address this by proposing the _first non-blocking ring-based inter-chiplet communication_ strategy in the context of FHE, ensuring computation-communication parallelism. This is made feasible due to our proposed interleaved data distribution technique, which reduces memory consumption. These optimizations further enhance the overall performance of the hardware acceleration design.
* **Application Benchmark:** We choose parameters to offer high precision and, at the same time, good performance. REED is the _first work to benchmark an encrypted deep neural network (DNN) training_, showcasing practical viability and real-world impact. While CPU (24-core, 2\(\times\)Intel Xeon CPU X5690 \(@\) 3.47GHz) requires 29 days to finish it, REED 2.5D takes only 7.7 minutes, a realistic time for an NN training. We also use DNN training to run accuracy/precision experiments and validate our parameter choice.
## II Background
Let \(\mathbb{Z}_{Q}\) represent the ring of integers in the range \([0,Q-1]\). \(\mathcal{R}_{Q,N}=\mathbb{Z}_{Q}[x]/(x^{N}+1)\) refers to the polynomial ring containing polynomials of degree at most \(N-1\) with coefficients in \(\mathbb{Z}_{Q}\). A polynomial is denoted as \(a\in\mathcal{R}_{Q,N}\). In Residue Number System (RNS) [20] representation, \(Q\) is a composite modulus comprising co-prime moduli, \(Q=\prod_{i=0}^{L-1}q_{i}\). Let \(\mathbf{a}\) be a vector of residue polynomials, and \(a^{i}\) be the \(i\)-th residue polynomial in the vector. We use the typewriter font, e.g., \(\mathtt{c}\) or \(\mathtt{sk}\), to represent ciphertexts or keys. Operators \(\cdot\) and \(\langle,\rangle\) denote the multiplication and dot-product between two ring elements. Noise is represented as \(e\) and is refreshed for every computation.
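For illustration, the following minimal Python sketch (with toy moduli, not the parameters of this work) shows the RNS idea: arithmetic modulo the composite \(Q\) splits into independent word-sized operations modulo each \(q_{i}\), and the result can be recovered by the Chinese Remainder Theorem.

```python
from math import prod

q = [17, 19, 23]                       # pairwise co-prime moduli q_0, q_1, q_2 (toy values)
Q = prod(q)                            # composite modulus Q

def to_rns(x):
    """Residues of x modulo each q_i; a polynomial is handled coefficient-wise."""
    return [x % qi for qi in q]

def from_rns(residues):
    """CRT reconstruction of x mod Q from its residues."""
    x = 0
    for r, qi in zip(residues, q):
        Qi = Q // qi
        x = (x + r * Qi * pow(Qi, -1, qi)) % Q
    return x

a, b = 1234, 5678
prod_rns = [(ra * rb) % qi for ra, rb, qi in zip(to_rns(a), to_rns(b), q)]
assert from_rns(prod_rns) == (a * b) % Q
```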
### _FHE schemes and HEAAN routines_
Multiple FHE schemes exist in literature, including BGV [6], TFHE [13], HEAAN [11, 12], among others. These schemes differ primarily in the types of data they can encode and the supported operations. For instance, BGV can handle integers, while HEAAN can work with fixed-point numbers and is widely adopted for benchmarking machine learning applications [31, 25]. Hence, our accelerator design is in the context of HEAAN [11]. Notably, most schemes also require polynomial computations similar to those in HEAAN. Consequently, our proposed design methodology can be extended to other schemes, like BGV.
We present the HEAAN [11] routines for ciphertexts at level \(l\) (multiplicative depth is \(l-1\).) where \(l<L\), \(Q_{l}=\prod_{i=0}^{l-1}q_{i}\), and \(L\) is the maximum level. Please refer to [11] for a detailed description. The first three procedures are computed by the client, and the remaining procedures are evaluated on ciphertexts by the cloud.
_1)_ HEAAN.KeyGen\((\,)\): This routine generates secret key \(\mathtt{sk}=(1,s)\), public key \(\mathtt{pk}=(-a\cdot s+e,a)\in\mathcal{R}_{Q_{L},N}^{2}\), and several key-switching keys \(\mathtt{ksk}_{i}=(-a\cdot s+e+P\cdot s^{\prime},a)\in\ \mathcal{R}_{PQ_{L},N}^{2}\) for \(i\in[0,L)\), where \(a\) is uniformly random and \(s^{\prime}\) is a secret. For relinearization, \(s^{\prime}=s^{2}\).
_2)_ HEAAN.Enc\((m,\mathtt{pk})\): It encrypts a message \(m\) using public key, and returns ciphertext \(\mathtt{c}=v\cdot\mathtt{pk}+(m+e,e)\in\mathcal{R}_{Q_{L},N}^{2}\)

_3)_ \(\mathtt{HEAAN.Dec}(\mathtt{c},\mathtt{sk})\):
The ciphertext \(\mathtt{c}\) is decrypted using the secret key \(\mathtt{sk}\) to return message \(m^{\prime}=\langle\mathtt{c},\mathtt{sk}\rangle\).

_4)_ \(\mathtt{HEAAN.Add}(\mathtt{c},\mathtt{c}^{\prime})\):
It takes two input ciphertexts \(\mathtt{c}=(\boldsymbol{c}_{0},\boldsymbol{c}_{1})\in\mathcal{R}_{Q_{l},N}^{2}\) and \(\mathtt{c}^{\prime}=(\boldsymbol{c}_{0}^{\prime},\boldsymbol{c}_{1}^{\prime })\in\mathcal{R}_{Q_{l},N}^{2}\) and computes \(\mathtt{cad}=(\boldsymbol{d}_{0},\boldsymbol{d}_{1})\) where \(\boldsymbol{d}_{0}=\boldsymbol{c}_{0}+\boldsymbol{c}_{0}^{\prime}\in\mathcal{ R}_{Q_{l},N}\) and \(\boldsymbol{d}_{1}=\boldsymbol{c}_{1}+\boldsymbol{c}_{1}^{\prime}\in\mathcal{ R}_{Q_{l},N}\).

_5)_ \(\mathtt{HEAAN.Mult}(\mathtt{c},\mathtt{c}^{\prime})\):
It multiplies two input ciphertexts \(\mathtt{c}=(\boldsymbol{c}_{0},\boldsymbol{c}_{1})\in\mathcal{R}_{Q_{l},N}^{2}\) and \(\mathtt{c}^{\prime}=(\boldsymbol{c}_{0}^{\prime},\boldsymbol{c}_{1}^{\prime })\in\mathcal{R}_{Q_{l},N}^{2}\), and computes \(\boldsymbol{d}_{0}=\boldsymbol{c}_{0}\cdot\boldsymbol{c}_{0}^{\prime}\in \mathcal{R}_{Q_{l},N}\), \(\boldsymbol{d}_{1}=\boldsymbol{c}_{0}\cdot\boldsymbol{c}_{1}^{\prime}+ \boldsymbol{c}_{1}\cdot\boldsymbol{c}_{0}^{\prime}\in R_{Q_{l},N}\), and \(\boldsymbol{d}_{2}=\boldsymbol{c}_{1}\cdot\boldsymbol{c}_{1}^{\prime}\in \mathcal{R}_{Q_{l},N}\). The output is the non-linear ciphertext \(\mathtt{d}=(\boldsymbol{d}_{0},\boldsymbol{d}_{1},\boldsymbol{d}_{2})\in \mathcal{R}_{Q_{l},N}^{3}\), which is then linearized using a 'key-switch' procedure, as described below.
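The sketch below (toy ring dimension and modulus, schoolbook negacyclic multiplication, and no rescaling or noise management) illustrates the three polynomial products that form the non-linear ciphertext \((\boldsymbol{d}_{0},\boldsymbol{d}_{1},\boldsymbol{d}_{2})\).

```python
N, q = 8, 97   # toy ring Z_q[x]/(x^N + 1); real parameters are far larger

def poly_mul(a, b):
    """Negacyclic product a*b mod (x^N + 1, q), schoolbook version."""
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] = (res[k] + ai * bj) % q
            else:                          # x^N = -1 folds back with a sign flip
                res[k - N] = (res[k - N] - ai * bj) % q
    return res

def poly_add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

def heaan_mult(c, cp):
    (c0, c1), (c0p, c1p) = c, cp
    d0 = poly_mul(c0, c0p)
    d1 = poly_add(poly_mul(c0, c1p), poly_mul(c1, c0p))
    d2 = poly_mul(c1, c1p)
    return d0, d1, d2                      # non-linear ciphertext, to be relinearized via key-switch

c, cp = ([1] + [0] * 7, [2] + [0] * 7), ([3] + [0] * 7, [4] + [0] * 7)
d0, d1, d2 = heaan_mult(c, cp)
assert (d0[0], d1[0], d2[0]) == (3, 10, 8)
```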

_6)_ \(\mathtt{HEAAN.Automorphism}(\mathtt{c},\mathtt{rot})\):
The input ciphertext \(\mathtt{c}=(\boldsymbol{c}_{0},\boldsymbol{c}_{1})\in\mathcal{R}_{Q_{l},N}^{2}\) is homomorphically rotated by \(rot\) using the Galois element (\(gle=5^{rot}\bmod 2N\)) to return ciphertext \(\mathtt{d}=(\rho_{\mathit{rot}}(\boldsymbol{c}_{0}),\rho_{\mathit{rot}}(\boldsymbol{c}_{1}))\in\mathcal{R}_{Q_{l},N}^{2}\) after key-switch. A conjugation is a special form of automorphism when \(gle=2N-1\).

_7)_ \(\mathtt{HEAAN.KeySwitch}(\mathtt{d},\mathtt{ksk})\):
It performs key-switch, using a relevant key \(\mathtt{ksk}\), after \(\mathtt{HEAAN.Mult}/\mathtt{Automorphism}\) so the resultant ciphertext is encrypted under the same secret key as the input ciphertext. It computes \(\mathtt{c}^{\prime\prime}=(\boldsymbol{c}_{0}^{\prime\prime},\boldsymbol{c}_{1}^{\prime\prime})\) where \(\boldsymbol{c}_{0}^{\prime\prime}=\sum_{i=0}^{l-1}a^{i}\cdot\mathit{ksk}_{0}^{i}\in\mathcal{R}_{PQ_{l},N}\) and \(\boldsymbol{c}_{1}^{\prime\prime}=\sum_{i=0}^{l-1}a^{i}\cdot\mathit{ksk}_{1}^{i}\in\mathcal{R}_{PQ_{l},N}\). This is followed by \(\mathtt{c}=\big{(}(\boldsymbol{d}_{0},\boldsymbol{d}_{1})+\mathtt{HEAAN.ModDown}(\mathtt{c}^{\prime\prime})\big{)}\in\mathcal{R}_{Q_{l},N}^{2}\). \(\mathtt{HEAAN.ModDown}()\) scales down the modulus from \(PQ_{l}\) to \(Q_{l}\).

_8)_ \(\mathtt{HEAAN.Bootstrap}\):
This routine is designed to refresh the multiplicative depth of ciphertexts. It involves evaluating the decryption operation homomorphically [10, 5, 9]. This is the most computationally expensive routine, and it is not a standalone procedure but rather a combination of the above routines. A certain amount of multiplicative depth (\(L_{\mathit{boot}}\)) is consumed during bootstrapping. As a result, the depth of the ciphertext after bootstrapping (\(L_{\mathit{eff}}\)) is always lower than the original multiplicative depth \(L\). We closely adhere to the implementation in OpenFHE [2] for benchmarking.
This works uses hardware acceleration to speed up the cloud-side homomorphic procedures. Table I lists the HEAAN parameters with the notation and values we use for our design targeting the 128-bit classical security (\(N=2^{16},\log PQ=1728\)) [5, 3]. The choice of word-size (\(w=54\)-bit) offers the best balance between performance and precision, as discussed in Section V-A.
### _Number Theoretic Transform (NTT)_
NTT is a discrete Fourier transform defined over the ring \(\mathbb{Z}_{q}\) as \(\hat{\boldsymbol{a}}_{i}=\sum_{j=0}^{N-1}\boldsymbol{a}_{j}\omega^{ij}\) for \(i\in[0,N)\), where \(\omega\) is an \(N\)-th primitive root of unity. It reduces the complexity of polynomial multiplication from \(\mathcal{O}(N^{2})\) to \(\mathcal{O}(N\log N)\) and is extensively utilized in FHE schemes. In a polynomial ring, _negative wrapped convolution_ (NWC) enables reduction-free polynomial multiplication. It requires polynomials to be multiplied with powers of the \(2N\)-th root of unity, \(\psi\) (pre-processing and post-processing). For more details, readers may refer to [62, 58].
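A reference Python sketch (toy parameters and quadratic-time transforms; real implementations use \(\mathcal{O}(N\log N)\) butterfly networks) of the NTT over \(\mathbb{Z}_{q}\) and of the NWC pre-/post-processing described above.

```python
N, q = 8, 17            # toy parameters with q ≡ 1 (mod 2N), so a 2N-th root of unity exists
psi = 3                 # 2N-th primitive root of unity mod 17 (3^8 ≡ -1, 3^16 ≡ 1)
omega = pow(psi, 2, q)  # N-th root of unity

def ntt(a):
    return [sum(a[j] * pow(omega, i * j, q) for j in range(N)) % q for i in range(N)]

def intt(A):
    inv_n = pow(N, -1, q)
    return [inv_n * sum(A[j] * pow(omega, -i * j, q) for j in range(N)) % q for i in range(N)]

def negacyclic_mul(a, b):
    """Multiply in Z_q[x]/(x^N + 1): pre-scale by psi^i, multiply point-wise, post-scale by psi^-i."""
    a_ = [x * pow(psi, i, q) % q for i, x in enumerate(a)]
    b_ = [x * pow(psi, i, q) % q for i, x in enumerate(b)]
    c_ = [x * y % q for x, y in zip(ntt(a_), ntt(b_))]
    return [x * pow(psi, -i, q) % q for i, x in enumerate(intt(c_))]

# (1 + x) * x^7 = x^7 + x^8 = -1 + x^7 in Z_q[x]/(x^8 + 1)
assert negacyclic_mul([1, 1, 0, 0, 0, 0, 0, 0], [0] * 7 + [1]) == [q - 1, 0, 0, 0, 0, 0, 0, 1]
```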
### _Monolithic vs Chiplet, and Chiplet packaging techniques_
In the context of large Integrated Circuits, authors in [23, 39, 72] discuss the advantages of chiplet-based designs over monolithic designs. The problem with monolithic designs stems from the fact that to keep up with the increasing demand for high performance and functionality, chips need to be scaled up, and advanced technology nodes must be utilized. Manufacturing such big chips reduces the wafer yield as more surface area is exposed to defects per chip and increases the developments cost. Such huge designs take a very long time-to-market, and it is impossible to test and verify them before manufacturing using FPGAs. More factors, such as size limitation and sub-optimal die performance due to overload, contribute to Moore's law's slowdown. Hence, there is a shift from these SoC (System on Chip) to SiP (System in Package) [23], as SiP directly addresses these challenges.
Migrating chiplets to advanced technology is easier than an entire monolithic design. In SiP, multiple heterogeneous smaller chiplets can be manufactured separately and later integrated together using various packaging techniques as long as they adhere to common interface standards. This promotes chiplet-reuse, further lowering the development costs, expediting the process, and resulting in a substantial profit margin. The chiplet-packaging techniques can be broadly classified into three main categories: 2D, 2.5D, and 3D [23, 39]. In 2D packaging, different dies are mounted on a substrate, commonly known as a multi-chip module (MCM). It has limitations due to the substrate, resulting in slow die-to-die communication and high power consumption.
To address these limitations, the most reliable technology for integrating chiplets is the silicon interposer, known as 2.5D integration. In this approach, an interposer is placed between the die and the substrate, enabling die-to-die connections on the interposer itself. The use of an interposer significantly enhances interconnectivity, leading to improved performance. Several studies in the literature, such as [49, 73], demonstrate the practicality of this approach. Taking the integration capabilities a step further, 3D packaging involves stacking different dies on top of each other, akin to a skyscraper. In 3D packaging, the dies are interconnected using through-silicon vias (TSVs). 3DIC is gaining significant popularity and serves as the foundation for advancements [65, 8, 48, 66]. A well-known example of 3D packaging is the High Bandwidth Memory (HBM/HBM2/HBM3), where multiple DRAM dies are stacked. This approach significantly reduces the critical path and area, resulting in higher performance, lower power consumption, and increased bandwidth. The slowdown of Moore's law finds hope in 2.5D and 3D IC.
## III The Scalable Architecture Design Methodology for REED
Our primary objective is to propose a scalable design methodology that can be utilized by chiplet-based accelerators to offer superior acceleration. A hardware accelerator design for FHE schemes has three fundamental computation units: Number Theoretic Transform (NTT/INTT), Multiply-and-Accumulate (MAC), and Automorphism. All homomorphic operations can be computed by utilizing these units. The computational building blocks are integrated with memory components to create a complete Processing Unit (PU). To achieve a transition from a single PU to multiple PUs and multiple chiplets, we also need to address PU-PU communication and data distribution. The overall design flow for our accelerator, REED, is illustrated in Fig. 1.
We will present our design methodology starting with the top of the hierarchy- Multi-Chiplet Design. To comprehend the decision-making process behind the middle modules, it is crucial to grasp the design principles employed for the bottom-most modules: NTT, MAC, and Automorphism. Hence, next, we will thoroughly discuss the scalable design of these modules. Subsequently, we will demonstrate how these modules are integrated to form a complete PU, ensuring optimal performance and efficiency. Lastly, we will showcase our efficient data distribution and PU-PU/C2C (chuplet-to-chuplet) communication strategies. They enable seamless data exchange, leading to better scalability and acceleration.
### _Multi-Chiplet Design_
We presented the advantages of disintegrated systems over monolithic designs in Section II-C. The transition from 2D monolithic packaging to 2.5D or 3D disintegrated chiplet systems represents both the present and future of architectural designs, as emphasized in [23, 24, 28, 39, 66, 72, 73]. In this context, we present REED 2.5D and RE3D.
#### III-A1 REED 2.5D
We first present a sample two-chiplet design, depicted in Fig. 2 (a). Here we connect two REED chiplets and establish connections between PU and HBMs via the interposer. Due to the proposed ring-based communication (Section III-E), scaling this design only increases the interconnects linearly. Hence, we can scale it to four chiplets, as shown in Fig. 2 (b). We ensure through our FHE design that no HBM-HBM communication is required. Hence, they are positioned on the outer side. Moreover, we avoid sharing a single HBM among multiple chiplets, ensuring that each HBM is located only in proximity to the one chiplet it serves. Given that die-to-die communication requires a simple ring-like communication pattern (discussed in Section III-E), the chiplets are placed in a relatively straightforward manner, as not all dies need to communicate with every other die. In [49, 73], authors propose general-purpose chiplet-based processors with an actual tapeout. Our placements strategies align with these, demonstrating practical viability. We acknowledge the potential latency issues arising from slow chiplet-to-chiplet communication, and this will be addressed in Section III-E.
#### III-A2 RE3D: REED's journey from 2.5D to 3D
After discussing the design for REED 2.5D, we present its extension to a complete 3D IC structure, which holds immense potential for future computing. To achieve this transition, we have two options: connecting the PU with the HBM controller via TSV (as shown in Fig. 3) or merging the PU unit with the lower HBM controller die. By adopting either of these approaches, we can significantly reduce the reliance on the Network-on-Chip (NoC), leading to a compact chip design with lower power consumption. Each chiplet is a full 3D IC package (PU and Memory) and needs a die-to-die link via interposer for connecting to other chiplets. The reduction in the area primarily comes from fewer HBM stacks on the lateral area and the integration of the REED-PU unit with the HBM controller. Additionally, the decrease in critical paths due to the reduced interconnects would enhance the design's performance. Thus, RE3D would further bridge the gap between speedup and privacy.

Fig. 1: Design hierarchy for chiplet-based HE accelerator.

Fig. 2: (a) Side view of two chiplet-based REED 2.5D, and (b) top view of four chiplet-based REED 2.5D.

Fig. 3: The side and top view of proposed RE3D. It has four REED 3D IC chiplets interconnected using the die-to-die link via silicon interposer.

#### III-A3 Disintegration Granularity
It is crucial to note that disintegrated systems face a trade-off between development cost and performance degradation, depending on the disintegration granularity. Existing works, such as [24, 39, 65, 66, 72, 73], show that disintegration improves yield, but it introduces challenges such as floorplanning and post-silicon testing overhead. Since this design is used for accelerating FHE, its full utilization in the long run also weighs in. Hence, we need to address the question: _How much disintegration is best for our design?_ Considering a complete die area of 800mm\({}^{2}\), dividing it into four chiplets offers a yield of \(\approx\)80%, while eight chiplets provide a yield of \(\approx\)90%. Although the eight-chiplet option seems promising, it comes at the cost of additional complexities in floorplanning/routing, testing, and power consumption. In the context of FHE, as the multiplicative depth decreases, we need fewer PUs (discussed in Section III-D). Hence, employing four larger chiplets offers longer utilization compared to eight smaller chiplets, assuming an equal number of PUs per chiplet.
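A back-of-the-envelope sketch of this yield argument, using the simple Poisson die-yield model \(Y=e^{-A\cdot D_{0}}\); the defect density \(D_{0}\) below is an assumed illustrative value, not a foundry figure.

```python
from math import exp

D0 = 0.001                      # assumed defects per mm^2 (illustrative only)
total_area = 800.0              # mm^2, as in the discussion above

def die_yield(area_mm2, d0=D0):
    return exp(-area_mm2 * d0)

for n_chiplets in (1, 4, 8):
    area = total_area / n_chiplets
    print(f"{n_chiplets} chiplet(s) of {area:.0f} mm^2: per-die yield ~ {die_yield(area):.0%}")
```

Under this assumed defect density, splitting the 800 mm\({}^{2}\) design into four or eight chiplets raises the per-die yield to roughly 80% and 90%, respectively, which matches the trend discussed above.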
In conclusion, instantiating four chiplets strikes the perfect balance between manufacturing cost and utilization. Next, we will discuss the design of building blocks for REED-PU and see how they help us achieve our acceleration goals.
### _The ingredients of REED Processing Unit_
The need for scalability and high throughput drives our design methodology. Before delving into the details of the building blocks, we introduce the REED-configuration (\(N_{1},N_{2}\)) for polynomial degree \(N\), where \(N_{1}\times N_{2}=N\), and both \(N_{1}\) and \(N_{2}\) are powers of two. This configuration provides a throughput of \(\frac{f}{N_{1}}\) operations per second and can process \(N_{2}\) coefficients simultaneously, where \(f\) is the design's operating frequency. Hence, it requires a memory read/write bandwidth of \(N_{2}\) coefficients per cycle. The proposed standard configuration brings forth advantages, such as improved throughput and efficient resource utilization. Mapping all the building blocks to this configuration enhances scalability, enabling easy scale-up or scale-down. Now, let us explore how we design the building blocks to match this configuration and fully exploit its potential.
#### III-B1 The Hybrid NTT/INTT (Frankenstein's approach)
This unit plays a vital role in converting polynomials from slot to coefficient representation and vice versa. It is computationally intensive and occupies over 50% of the architectural area. Therefore, designing an efficient NTT/INTT unit is crucial as it directly impacts the overall throughput and area consumption.
There are various approaches in the literature to implement NTT for large-degree polynomials, such as iterative [1, 40, 58], pipelined [71, 74] and hierarchical [19]. While these approaches can offer efficient designs for specific configurations or target platforms, they all suffer from implementation complexity and lack of scalability for large polynomial sizes. Additionally, these approaches rely on scratchpad-like memories, which can serve as prefetch units for other building blocks. The iterative approach enables using multiple processing elements to improve the performance of NTT; however, its implementation complexity increases significantly with the number of processing elements. The pipelined approach (also referred to as single-path delay feedback (SDF)) provides a bandwidth-efficient solution but a diminished performance.
The hierarchical approach (also referred to as four-step NTT), utilized in [19], treats a polynomial of size \(N\) as an \(N=N_{1}\times N_{2}\) matrix and divides a large NTT into smaller parts. It involves performing \(N_{1}\)-point NTTs on the \(N_{2}\) columns of the matrix, then multiplying each coefficient by \(\omega^{i\cdot j}\) (where \(i\) and \(j\) are matrix row and column indices), transposing the matrix, and finally performing \(N_{2}\)-point NTTs on the \(N_{1}\) columns. Transposing a matrix of size \(N_{1}\times N_{2}\) requires \(N_{1}\) separate memories and large data re-ordering units. For example, in [19], the transpose unit consumes 14% of the area per compute cluster. Moreover, in terms of time inefficiency, it will require additional \(N_{2}\) cycles for writing data to the transpose memory and \(N_{1}\) cycles for reading it.
Although the hierarchical approach simplifies the NTT implementation, it has the following limitations: \((i)\) it requires a costly transpose operation, \((ii)\)\(N_{1}\) and \(N_{2}\) are fixed to \(N_{1}=N_{2}\)[19, 21], hence offering limited flexibility, and \((iii)\) the reliance on scratchpad leads to large memory fan-in and fan-out, causing routing inefficiencies. We address these challenges by introducing a novel Hybrid NTT using Frankenstein's approach, which utilizes parts of hierarchical, iterative, pipelined, and plain unrolled NTTs.
```
Input: \(a\) (a matrix of size \(N_{1}\times N_{2}\) in row-major order)
Input: \(\omega\) (\(N\)-th root of unity), \(\psi\) (\(2N\)-th root of unity)
Output: \(a=\text{NTT}(a)\) (a matrix of size \(N_{1}\times N_{2}\) in column-major order)
1: for \((i=0;i<N_{1};i=i+1)\) do
2: for \((j=0;j<N_{2};j=j+1)\) do
3: \(a[i][j]\gets a[i][j]\cdot\psi^{i\cdot N_{2}+j}\pmod{q}\)\(\triangleright\) Pre-processing (PP)
4: endfor
5: endfor
6: Apply \(N_{1}\)-pt NTT to the columns of \(a\)\(\triangleright\) using SDF-NTT
7: for \((i=0;i<N_{1};i=i+1)\) do
8: for \((j=0;j<N_{2};j=j+1)\) do
9: \(a[i][j]\gets a[i][j]\cdot\omega^{i\cdot j}\pmod{q}\)\(\triangleright\) Hadamard product (HP)
10: endfor
11: endfor
12: Apply \(N_{2}\)-pt NTT to the rows of \(a\)\(\triangleright\) using Unrolled-NTT (U-NTT)
13: return \(a\)
```
**Algorithm 1** Hybrid NTT with NWC
The proposed NTT/INTT unit is fully pipelined, and its flow is shown in Algorithm 1 and Fig. 4. During the NTT operation, we first perform pre-processing (Step 3 of Algorithm 1) using \(N_{2}\) modular multipliers (PP). The resulting coefficients are sent to \(N_{2}\) pipelined NTT units (\(N_{1}\)-pt SDF-NTT) to perform Step 6. The output coefficients of the SDF-NTT units are processed via the Hadamard Product unit (HP) that multiplies the coefficients with powers of \(\omega\) (Step 9) using \(N_{2}\) modular multipliers. Finally, we employ an \(N_{2}\)-pt unrolled NTT (U-NTT) unit.
Fig. 4: The proposed novel Hybrid NTT/INTT design flow for \(N=N_{1}\times N_{2}\).

**Transpose elimination:** The Hybrid NTT eliminates the transpose by using two orthogonal NTT approaches: the pipelined (SDF) approach for \(N_{1}\)-sized NTTs and the unrolled (U-NTT) approach for \(N_{2}\)-sized NTTs. The input polynomials for the NTT/INTT operation are stored in \(N_{2}\) memories of depth \(N_{1}\). As shown in Figure 5 (a), the output coefficients of SDF-NTT are processed directly by U-NTT, providing a seamless, natural transpose operation. It also helps make our NTT unit bi-directional, as illustrated in Figure 5 (b).
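A functional Python sketch (toy sizes, quadratic-time sub-NTTs, and without the \(\psi\) pre-processing of the NWC) of the data flow in Algorithm 1: the row stage consumes the column-stage outputs directly, so no transpose buffer is ever materialized. It only illustrates the index arithmetic, not the hardware pipelining.

```python
q, N1, N2 = 17, 4, 2
N = N1 * N2
omega = 3 ** 2 % q                                   # N-th root of unity mod 17 (order 8)

def small_ntt(vec, w_root):
    n = len(vec)
    return [sum(v * pow(w_root, i * j, q) for j, v in enumerate(vec)) % q for i in range(n)]

def hybrid_ntt(a):                                   # a: flat list viewed as an N1 x N2 row-major matrix
    mat = [a[r * N2:(r + 1) * N2] for r in range(N1)]
    w1, w2 = pow(omega, N2, q), pow(omega, N1, q)    # N1-th and N2-th roots of unity
    cols = [small_ntt([mat[r][c] for r in range(N1)], w1) for c in range(N2)]   # column NTTs (SDF stage)
    out = [0] * N
    for r in range(N1):
        row = [cols[c][r] * pow(omega, r * c, q) % q for c in range(N2)]        # Hadamard twiddles
        row = small_ntt(row, w2)                                                # row NTT (U-NTT stage)
        for c in range(N2):
            out[c * N1 + r] = row[c]                 # column-major output, as in Algorithm 1
    return out

a = list(range(N))
direct = [sum(a[j] * pow(omega, i * j, q) for j in range(N)) % q for i in range(N)]
assert hybrid_ntt(a) == direct
```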
**Low-level optimizations:** For modular multiplication and reduction unit, we adopted the word-level Montgomery [41, 42] modular reduction algorithm and optimized it (Algorithm 2) for our special prime form, \(2^{w-1}+q_{H}\cdot 2^{m}+1\), where \(m\) is Montgomery reduction size, and \(\lceil\log_{2}q_{H}\rceil\) is small. For our design, we use \(w=54\), \(m=18\) and \(\lceil\log_{2}q_{H}\rceil=10\). To reduce the on-chip twiddle factor memory requirement, we employ on-the-fly twiddle factor generation using a small constant memory that stores a few initial constants. By utilizing this, we reduce the on-chip constant storage by up to 98.3%.
In summary, the proposed Hybrid NTT/INTT design offers a throughput of \(\frac{f}{N_{1}}\) operations per second and can be scaled for various area/performance trade-offs by adjusting the values of \(N_{1}\) and \(N_{2}\). It eliminates the expensive transpose operation, simplifies routing, and enhances pipelining.
```
Input: \(d=a\cdot b\), \(q=2^{w-1}+q_{H}\cdot 2^{m}+1\)
Input: \(m\) (Montgomery reduction size), \(L=\lceil\log_{2}(q)/m\rceil\) (number of reduction steps)
Output: \(c=a\cdot b\cdot R^{-1}\pmod{q}\), \(R=2^{mL}\)
1: \(T\gets d\)
2: for (\(i=0;i<L;i=i+1\)) do
3: \(T_{H},T_{L}\gets T\gg m,T\pmod{2^{m}}\)
4: \(T2\gets-T_{L}\pmod{2^{m}}\), \(cin\gets T2[m-1]\lor T_{L}[m-1]\)
5: \(T\gets(q_{H}\cdot T2)+T_{H}+cin+(T2\ll(w-1-m))\)
6: endfor
7: return \(c\leftarrow(T\geq q)\ ?\ T-q\ :\ T\)
```
**Algorithm 2** Word-level Montgomery Modular Reduction
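A bit-level Python sketch of Algorithm 2 for the prime shape \(2^{w-1}+q_{H}\cdot 2^{m}+1\), checked against plain modular arithmetic. The values of \(w\) and \(m\) follow the text; the specific \(q_{H}\) below is only an assumed illustrative value (a real design picks \(q_{H}\) so that \(q\) is an NTT-friendly prime).

```python
w, m = 54, 18
qH = 0x2AB                              # assumed 10-bit value, for illustration only
q = (1 << (w - 1)) + (qH << m) + 1
L = -(-w // m)                          # ceil(w/m) reduction steps, so R = 2^(m*L) >= q

def wl_montgomery(a, b):
    """Return a*b*R^(-1) mod q using only shifts, masks and one small multiplication per step."""
    T = a * b
    for _ in range(L):
        TH, TL = T >> m, T & ((1 << m) - 1)
        T2 = (-TL) & ((1 << m) - 1)     # since q ≡ 1 (mod 2^m), adding T2*q clears the low m bits
        cin = (T2 >> (m - 1)) | (TL >> (m - 1))
        T = qH * T2 + TH + cin + (T2 << (w - 1 - m))
    return T - q if T >= q else T

R = 1 << (m * L)
a, b = 0x123456789ABCD % q, 0xFEDCBA9876543 % q
assert wl_montgomery(a, b) == (a * b * pow(R, -1, q)) % q
```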
#### III-B2 Multiply-and-Accumulate (MAC)
MAC is a linear unit, and by instantiating \(N_{2}\) MACs for configuration (\(N_{1},N_{2}\)), we achieve the desired throughput \(\frac{f}{N_{1}}\). Our _triadic_ units are capable of performing multiplication and addition/subtraction simultaneously, which is advantageous for the key-switch operation (discussed in Section III-C). They employ the same modular multiplication unit utilized by the NTT/INTT unit.
#### III-B3 Automorphism/Conjugation
This unit permutes ciphertexts using the Galois element (\(gle\)) to achieve rotation or conjugation. We note three important properties of the automorphism: \((i)\) all \(N_{2}\) coefficients come from and go to \(N_{2}\) distinct memories, \((ii)\) they are read and written to the same address, and \((iii)\) the coefficients move in pairs. To understand this, let us take a brief look at how automorphism works.
A polynomial is stored as a matrix \(N_{1}\times N_{2}\) in \(N_{2}\) memories. When we load \(N_{2}\) coefficients from memory address \(l_{0}\) across all \(N_{2}\) memories, they are shuffled using \(\rho_{\text{rot}}\) and then written to address \(l_{1}\) across all \(N_{2}\) memories. Hence, even though the coefficient order is shuffled, they all go to the same address of \(N_{2}\) distinct memories. We utilize this property to permute all \(N_{2}\) coefficients in parallel. This out-of-place automorphism is presented in Algorithm 3. The in-place permutation techniques proposed in previous works [19, 57] increase routing complexity due to memory transposition requirements.
```
Input:  a[N_1][N_2],  gle
Output: â (the permuted polynomial)
1: a ← ρ(a)
2: index ← gle
3: for (l_0 = 0; l_0 < N_1; l_0 = l_0 + 1) do
4:    l_1 ← index (mod N_1)
5:    start ← index >> log(N_1)
6:    addr[j] ← (start + j · gle) (mod N_2)   ∀ j ∈ [0, N_2)
7:    â[l_1] ← shuffle(a[l_0], addr)
8: end for
```
**Algorithm 3** Out-of-place Automorphism

The per-cycle coefficient permutation itself is realized as a pipelined, tree-like shuffle. Note that the number of pipeline stages adjusts with \(N_{2}\), making the unit scalable and efficient for higher configurations. Moreover, the unit can handle any arbitrary rotation and provide a throughput of \(\frac{f}{N_{1}}\).
We now use these building blocks to construct a complete processing unit.
### _Packed REED Processing Unit (PU)_
We initiate the PU design (as shown in Fig. 7) by placing the NTT/INTT unit first and providing separate memories for both ends. This ensures straightforward routing and efficient PU-PU communication (Section III-C1). Although the NTT/INTT unit operates on one polynomial at a time, the result is multiplied with two polynomials (key-switching keys) and accumulated. Hence, we instantiate a pair of MAC units capable of simultaneously processing both key components. Similarly, we include two automorphism units as well. A PRNG is deployed to generate the public key component on-the-fly, following the approach in [40]. The design operates based on instructions, wherein a relatively small instruction controller manages the multiplexers and collects 'done' signals from these units. Our design choices ensure that the NTT/INTT and MAC/automorphism units can run concurrently in the pipeline.
Among all the routines, the key-switch is the most expensive operation. In this, we transform all \(L\) residue polynomials from slot to coefficient representation (INTT), and then each of these is transformed to \(L+1\) NTTs, multiplied with two key components, and accumulated. This requires \(L\) INTTs, \(L(L+1)\) NTTs, and \(2L(L+1)\) MACs, making the throughput of this operation \(\frac{f}{L(1+3(L+1))\cdot N_{1}}\). We utilize REED's parallel processing capability to perform all MAC operations concurrently with the NTT operations (shown in Fig. 8). We can also perform multiplication and accumulation simultaneously. Hence, we save \(2L(L+1)\) clock cycles and increase the throughput to \(\frac{f}{L(L+3)\cdot N_1}\), resulting in a 66.7% improvement.
#### III-C1 Prefetch-Memory
Previous works [19, 33, 34] refer to their on-chip memory as scratchpads due to the need for intermediate storage and prefetch functionality during NTT/INTT and automorphism operations [19]. This approach increases routing complexity as multiple modules access these scratchpads, requiring them to be in very close proximity.
Our proposed low-level building blocks mitigate the need for intermediate storage, allowing us to use memory units solely as prefetch units. Each memory unit exhibits balanced fan-in and fan-out, and among the five memory units depicted, only four are fed by off-chip memory. The small memory in Figure 7 is responsible for storing and communicating the INTT result to the other PUs (elaborated in Section III-E). Only two of the four memories communicating with off-chip memory need to write back the results, as illustrated by bi-directional arrows in Fig. 7. In total, three memories perform off-chip read/write communication. These memories are physically divided into two parts. When one is utilized for on-chip computation, the other performs off-chip prefetch. This results in a highly streamlined design.
After finalizing REED-PU, we next discuss the optimal data distribution and communication for a multi-PU setting, which lays the foundation for multi-chiplet design.
### _Data distribution and parallel processing for multiple PUs_
In a multi-PU setting, data distribution strategies include duplicating data across multiple PUs or utilizing shared memory. These approaches have severe drawbacks, as data duplication requires more storage, and shared memory introduces the risk of deadlocks and needs distributed communication protocols. In our multi-PU design, we focus on ensuring that each PU operates on independent data. Most of the previous works [32, 33, 34] propose a single monolithic PU and therefore do not require a dedicated study on data distribution. In a multi-PU work- Medha [40], the authors extensively discuss this and propose distributing computation across the RNS bases by employing one PU per RNS base. However, with this approach, as the multiplicative depth decreases, a significant number of PUs become idle, causing underutilization.
Nevertheless, distributing computation across RNS bases enables highly parallel computations. Therefore, we leverage this approach in an r-PU setting, where r is smaller than the number of RNS bases (r \(<L+1\)). For data distribution across r PUs, we utilize an interleaved approach, where the RNS bases of the ciphertexts and keys are distributed among the PUs in an interleaved manner (where \(\text{PU}_{i}\) stores \(\text{ct}_{r\cdot j+i}\) \(\forall\) \(0\leq i<r\), \(0\leq j<\frac{L+1}{r}\)), instead of grouping them in a sequential manner (where \(\text{PU}_{i}\) stores \(\text{ct}_{\frac{L+1}{r}\cdot i+j}\)). This ensures that all PUs are fully utilized in the long run, maximizing the benefits of parallel processing.
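The two policies can be summarized in a few lines of Python (variable names are ours): interleaving assigns limb \(\ell\) to \(\text{PU}_{\ell\bmod r}\), while sequential grouping assigns contiguous blocks, which is why interleaving keeps all PUs busy as the modulus chain shrinks.

```python
# Interleaved vs. sequential assignment of RNS limbs to r processing units (illustrative).
def interleaved(num_limbs, r):
    return {i: [l for l in range(num_limbs) if l % r == i] for i in range(r)}

def sequential(num_limbs, r):
    block = -(-num_limbs // r)                 # ceil(num_limbs / r)
    return {i: [l for l in range(num_limbs) if l // block == i] for i in range(r)}

def busy_pus(assignment, live_limbs):
    """PUs that still own at least one live limb once only `live_limbs` limbs remain."""
    return sum(1 for limbs in assignment.values() if any(l < live_limbs for l in limbs))

if __name__ == "__main__":
    L, r = 7, 4                                # L+1 = 8 limbs, 4 PUs
    inter, seq = interleaved(L + 1, r), sequential(L + 1, r)
    for live in range(L + 1, 0, -1):           # limbs remaining as depth is consumed
        print(live, busy_pus(inter, live), busy_pus(seq, live))
    # With 4 live limbs the interleaved layout still keeps all 4 PUs busy,
    # while the sequential layout leaves half of them idle.
```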
It is worth noting that the need for PU-PU communication is inevitable. Distributing data across RNS bases reduces this communication but does not eliminate it. In the following subsection, we will discuss how to handle this.
Fig. 8: Timeline demonstrating parallel and pipelined operation flow.
Fig. 7: The REED-PU design. Every data communication (memory to building blocks and off-chip to on-chip) here has a bandwidth of \(N_{2}w\).
### _Efficient non-blocking REED-REED communication_
Before delving into the solution, let us discuss the need for data exchange across PUs. This is required during the key-switch routine for relinearizing the non-linear ciphertext polynomial (\(d_{2}\)) obtained after multiplication. Here, an \(\mathcal{O}(L^{2})\) base conversion is performed to switch the modulus of each of the \(L\) residue polynomials of \(d_{2}\) (\(\text{NTT}(d_{2,q_{i}})_{q_{i}}\) \(\forall\) \(0\leq i<L\)) to \(L+1\) residue polynomials (\(\text{NTT}(d_{2,q_{i}})_{q_{j}}\) \(\forall\) \(0\leq j\leq L\)) (discussed in Section III-C). The data volume is substantial, and broadcasting each INTT result to all PUs would require a fully-connected communication network among PUs. As the number of PUs (\(\mathbf{r}\)) increases, this becomes quadratically complex and expensive. When multiple PUs are instantiated in a disintegrated SiP, slow C2C communication becomes a bottleneck.
Instead, we propose an alternative approach illustrated in Fig. 9. Here, we communicate the INTT results to the PUs in parallel with the NTT computations. This approach offers a long communication window for data send/receive, as depicted by the large rectangles between the REED PUs in the figure, which is only made possible due to the proposed interleaved data distribution. Therefore, in cases where C2C communication is slower than computation, this extended communication window prevents PUs/chiplets from experiencing data starvation. Consequently, non-blocking communication is achieved as data computation can proceed concurrently with relatively slower communication. Additionally, note that the communication shown in the figure is only uni-directional. For example, the REED\({}_{0}\) only needs to send data to REED\({}_{3}\) and receive data from the previous REED\({}_{1}\). This enables a simple _ring-based communication_ among REED chiplets.
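Functionally, this schedule is a ring all-gather: in every round each chiplet forwards the block it received in the previous round, so after \(\mathbf{r}-1\) hops every chiplet has seen every INTT result. The sketch below models only the data movement; the overlap with NTT computation from Fig. 9 and the exact ring direction are abstracted away.

```python
# Ring all-gather over r chiplets: one send port and one receive port per chiplet.
def ring_all_gather(local_data):
    r = len(local_data)
    received = [[local_data[i]] for i in range(r)]    # each chiplet starts with its own block
    buffer = list(local_data)                         # what each chiplet forwards next round
    for _ in range(r - 1):
        # Every chiplet i sends its current buffer to chiplet (i+1) % r.
        incoming = [buffer[(i - 1) % r] for i in range(r)]
        for i in range(r):
            received[i].append(incoming[i])
        buffer = incoming
    return received

if __name__ == "__main__":
    blocks = [f"INTT_result_from_PU{i}" for i in range(4)]
    gathered = ring_all_gather(blocks)
    assert all(sorted(g) == sorted(blocks) for g in gathered)
```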
In conclusion, our ring-based communication strategy requires only one read/write port per chiplet, as opposed to (\(\mathbf{r}-1\)) ports in a star-like communication network. Furthermore, we address the practical possibility of slower C2C communication by providing a prolonged communication window. Lastly, with our approach, there is no possibility of deadlocks.
Next, we present the area and performance results for 4-chiplet REED 2.5D with one PU per chiplet.
## IV REED's Implementation Results
We synthesize our chiplet-based design, REED 2.5D, for configurations 1024\(\times\)64 and 512\(\times\)128. For synthesis, we employ TSMC 28nm and ASAP7 [14] 7nm ASIC libraries, with Cadence Genus 2019.11, and use SRAMs for on-chip memories. Our primary objective is to achieve high performance while optimizing area and power consumption. To this end, we set our clock frequency target to 1.5 GHz, use High-vt cells (hvt) configuration for low leakage power, enable clock-gating, and set the optimization efforts to high. We set the input/output delays to 20% of the target clock period and leverage incremental synthesis optimization features.
As off-chip storage, we leverage the state-of-the-art HBM3 [27, 36, 48] memory. Owing to its improved performance and reduced power, it is already deployed in various GPUs and CPUs [15]. It is a 3D IC memory with the memory controller as the bottom layer, and DRAM dies stacked on top of it. HBM3 with 8/12 stacks of 32Gb DRAMs has 32/48 GB storage capacity [27, 44], which is sufficient to store all the key-switching keys. The ciphertexts provided by the client can be transferred to REED using 32 lanes PCIe5 offering a bandwidth of 128 GB/s [63]. The slow communication overhead can be easily masked with computations. In our work, we present results for HBM3 PHY and HBM3 NoC, based on [48, 8] and consider the minimum reported bandwidth of 896GB/s [48]. Some recent studies [15, 48] have reported significantly higher bandwidths of 1.15/3 TB/s. By leveraging these higher bandwidths, our area and power consumption will reduce due to fewer memory requirements.
Table II presents the area results for the REED 2.5D architecture, featuring a 4-chiplet configuration as illustrated in Fig. 10. Configuration 512\(\times\)128 requires twice the amount of HBM3 compared to the 1024\(\times\)64 configuration due to the doubled bandwidth requirement. We implement the inner REED-PU, NoC, and HBM3 (shown in Fig. 10) as one chiplet (similar to [49, 73]). In Table III, we present the performance of FHE routines for both configurations with the achieved target clock frequency of 1.5 GHz.
Moreover, we take a step further by prototyping the essential building blocks on Xilinx Alveo U250 to verify functional correctness. It is worth noting that the monolithic designs
Fig. 9: Non-blocking ring-based communication for four REED chiplets when \(L=7\). The blocks between chiplets represent the long communication window to make up for slow inter-chiplet (C2C) communication.
proposed in the literature have excessive area [19, 32, 33, 34, 57], rendering them unsuitable for pre-silicon functional verification on an FPGA. However, we successfully overcome this limitation by adopting a chiplet-based implementation strategy and fully leveraging its capabilities. The run-time and power consumption estimates are obtained using a cycle-accurate simulation model.
When we extend REED 2.5D to a 3D IC, one might wonder how to stack the second HBM3 on top of the REED die. In Fig. 10, we include two stacks to achieve the required bandwidth (1.8TB/s), constrained by the interposer and substrate technology. However, 3D IC technology enables direct TSV connections from the chip's surface to HBM3, enabling the construction of wider buses for higher bandwidth [66]. As per the results on 7nm technology for REED 2.5D (with one HBM3 stack), the REED-PU and NoC account for less than 50% of the area. Hence, by implementing the HBM3 controller on top of it, the lateral surface area would be reduced by \(\approx\)50%. Although this would not directly impact the chiplet manufacturing cost, as the monolithic 3D IC testing and integration costs tend to be higher, it would significantly reduce the cost of the underlying layers, such as the silicon interposer and substrate/package required for using this in a chiplet setting. Thus, the 3D IC integration of REED promises a huge reduction in overall chip area and power consumption. These findings validate our approach's efficacy, and we believe it will inspire further research in this direction.
### _What to expect from higher-throughput configurations?_
Until now, we have examined two configurations (1024\(\times\)64 and 512\(\times\)128) that only partially demonstrate the advantages of our proposed scalable design methodology. As we double the throughput (by doubling the value of \(N_{2}\)), the area of the PU only increases by approximately 1.5\(\times\). This trade-off arises because the chip area comprises two components: \((i)\) the computation logic area, which scales linearly with throughput, and \((ii)\) the on-chip storage, which remains fixed at a given number of polynomials. As we opt for a higher configuration, the polynomial size remains the same while the number of coefficients processed in parallel increases.
However, an important question remains: _what configuration strikes the best balance between throughput and manufacturing cost?_ To address this, we turn to [24], where the authors discuss that for 7nm technology, the optimal manufacturing size ranges from 40 to 80 mm\({}^{2}\), while for 40nm, it ranges from 50 to 150 mm\({}^{2}\). In Fig. 11, we present two sets of area consumption results for 28nm and 7nm technologies. The first set corresponds to four REED cores produced as a single monolithic chip, while the second set represents one REED chiplet. The optimal area ranges are highlighted in blue and pink. As we can see, for both 7nm and 28nm, the configuration 512\(\times\)128 falls within the most optimal development area range and offers high throughput. Monolithic designs, within the optimal range, offer 4 to 8 \(\times\) less throughput.
### _Comparison with related work_
The realization of privacy-preserving computation through FHE holds great potential for the entire community, leading to various research efforts in the literature. These endeavours span from efficient software implementation libraries [2, 16, 60] to ASIC chip proposals [19, 32, 33, 34, 35, 36]. Among these, the ASIC designs [19, 32, 33, 34, 57] have achieved the most promising acceleration results. However, their primary drawback lies in the high manufacturing costs associated with
Fig. 11: Demonstration of increase in area with REED configurations, put in the order of increasing throughput [24].
Fig. 10: The complete architecture diagram of the 4-chiplet REED 2.5D for the 512\(\times\)128 configuration.
the large monolithic chips, resulting in low yield and further exacerbating the cost. Therefore, our work focuses on reducing manufacturing costs while pursuing accelerated performance.
Table IV and Fig. 12 showcase the successful achievement of our goals. The table compares our design's area consumption, performance, and power consumption for the packed bootstrapping operation (OpenFHE [2]) with existing works: F1 [19], BTS [34], ARK [33], CraterLake (CLake) [57], and SHARP (SH) [32]. Note that all these works propose monolithic chips and suffer from the drawbacks of monolithic designs. We utilize the results obtained for the 4-chiplet REED 2.5D on 7nm technology, as these works also provide results for this technology. Several normalizing metrics exist in the literature for comparison, such as the amortized time \(\text{T}_{\text{A.S.}}\) [1, 33, 34], which divides the bootstrapping time by \(L_{\text{eff}}\) and the packing \(n\). However, this metric overlooks factors such as area, power, and precision. Hence, we use the EDAP (Energy-Delay-Area Product) metric [38] and modify it to accommodate the trade-off of the high precision necessary for large applications (discussed in Section V-A).
Higher precision necessitates a larger word size, \(w\). This has a linear impact on some components and a quadratic on others. Our first proposed metric, \(\text{EDAP}_{w}\) (Eq. 1), incorporates a linear increase due to word size. It is important to note that the area of the REED-PU increases quadratically with \(w\) due to the presence of multipliers. This is addressed in the second metric, \(\text{EDAP}_{w,w^{2}}\) (Eq. 2), with \(w=54\) as the baseline. Under this metric, we achieve 2\(\times\), 1.7\(\times\) better results compared to the state-of-the-art work SHARP\({}_{64}\), SHARP\({}_{36}\)[32].
\[\text{EDAP}_{w}=\frac{\text{E}\cdot\text{D}\cdot\text{Area}\cdot 54}{w} \tag{1}\]
\[\text{EDAP}_{w,w^{2}}=\frac{\text{E}\cdot\text{D}\cdot\text{Area}_{\text{mult}}\cdot 54^{2}}{w^{2}}+\frac{\text{E}\cdot\text{D}\cdot\text{Area}_{\text{rest}}\cdot 54}{w} \tag{2}\]
where \(\text{Area}_{\text{mult}}\) denotes the multiplier-dominated area that scales quadratically with \(w\), and \(\text{Area}_{\text{rest}}\) denotes the remaining area that scales linearly.
We also assess the yield and manufacturing cost, as depicted in Fig. 12. For this, we use the original area and not the word-size scaled area. We plot the relative yield [39] and manufacturing cost [24, 45], using our work on 7nm technology as the baseline. As observed, we achieve the highest yield and lowest manufacturing cost for 7nm, resulting in the least overall cost (manufacturing cost/yield), 50% less than state-of-the-art monolithic design SHARP\({}_{64}\). On 28nm technology, we achieve \(85\%\) cheaper design compared to SHARP\({}_{64}\).
In the next section, we will report the application benchmarks and discuss the importance of precision.
## V Application benchmarks
We benchmark three machine learning applications: linear regression, logistic regression, and a Deep Neural Network (DNN). The speedup results are presented in Table V. Each application is evaluated for _encrypted_ training and inference. In this setting, the server provides computational support without knowledge of the data or model parameters, ensuring completely blind computation. Most applications benchmarked in previous works [19] are partially blind; the server does not see the data but knows the model parameters used to evaluate it. To our knowledge, none of the previous works benchmark encrypted neural network training.
#### V-A1 Linear Regression
We employ the Kaggle Insurance dataset [59] to benchmark linear regression. The model uses batch sizes of 1204 and 1338 input feature vectors (each containing six features) for training and inference, respectively, and achieves an accuracy of 78.1% (the same as the plaintext model [59]), as it does not require any approximation and completes training in just two iterations (forward-backward\({}_{\times 2}\)-forward).
Fig. 12: Relative a) yield of existing monolithic designs versus the proposed 7nm chiplet-based architecture [39], b) development cost (including Interposer cost) [24, 40, 45], and c) cost of SiP development (cost/yield). RD refers to our work REED 2.5D.
#### V-A2 Logistic Regression
Logistic regression is a supervised machine learning model that reports the probability of an event using the logistic function, which is evaluated using function approximations in a homomorphic context. Its accuracy depends on the degree of the approximation expansion and on the precision. Existing works, such as [32, 57], utilize the HELR [25] application to benchmark encrypted training on MNIST [35] data with varying batch sizes (256, 1024). In Fig. 13, we illustrate the superior performance of REED 2.5D compared to these works.
We also evaluate logistic regression on the iDASH2017 cancer dataset (similar to [31]) employed to predict cancer probability. Utilizing the same expansion as [25], we achieve a training accuracy of 62% in just a single iteration. The competition winner [31] reports a slightly higher accuracy of 62.36%. This dataset comprises 18 features per input, with batch sizes of 1422 and 1579 used for training and inference.
#### V-A3 Deep Neural Network
The DNN serves as a powerful tool for Deep Learning, leveraging multiple network layers. In our study, we employ a DNN for the MNIST dataset [35], as illustrated in Fig. 14. We pack four pre-processed images per batch to prevent overflow during the 128\(\times\)64 (2\({}^{15}\)) matrix multiplication. DNN training requires 12,500 batches. Thus, all the existing works [32, 33, 34, 57] that do not provide computation-communication parallelism will suffer, as their on-chip memory is insufficient. The DNN is trained for \(\approx\)7000 iterations (\(\approx\)5.8 bootstrappings per iteration) and achieves 95.2% accuracy in 29 days using OpenFHE [2]; REED 2.5D could finish this in only 7.7 minutes. This is where our computation-communication parallelism shines, as a huge number of ciphertexts is required for such an application. None of the works in the literature offers this, and they are bound to suffer on any memory-intensive application.
### _Precision-loss experimental study_
Another facet of privacy-preserving computation is precision loss. Since the server cannot see the intermediate or final results, the best it can do is to ensure that the parameters it operates on support higher precision. To validate our parameter sets, we ran experiments for the DNN training. In Fig. 15, we can see how quickly the training accuracy drops as the word size is reduced. Thus, precision plays a vital role in providing privacy-preserving computation on the cloud. Our choice of 54-bit word size strikes the perfect balance between precision and performance. Works offering a smaller word-size [19, 32, 57] require in-depth study to mitigate the accuracy loss due to low-precision. Although we cannot prove that our parameters are the best, we ensure they can support most applications with a high precision guarantee. Our analysis will encourage readers to seek straightforward privacy-preserving solutions with maximal application coverage.
## VI Conclusion
FHE has garnered considerable interest due to its privacy-preserving computation capability. However, the major obstacle preventing its widespread deployment lies in its substantial computational overhead. Consequently, numerous efforts have been dedicated to accelerating fully homomorphic encryption in hardware; however, many of these attempts tend to focus excessively on acceleration at the expense of practicality. In this regard, our proposed accelerator design, REED, effectively addresses this limitation and achieves remarkable acceleration. Our approach utilizes a scalable design methodology that can be easily extended to larger configurations while also adapting to constrained environments.
We implement this methodology using a chiplet-based technique, which enables scalability. The experimental results highlight both the acceleration achieved and the practical implementation aspects of REED. Notably, our design is modular, paving the way for intriguing future prospects such as formal verification. Additionally, we plan to extend benchmarking to encompass larger network training scenarios to further demonstrate the utility of our parameters. Overall, the advancements presented in this work hold the promise of advancing privacy-preserving computations and promoting the wider adoption of fully homomorphic encryption.
Fig. 14: A DNN for MNIST [35] with two hidden and one output layers.
Fig. 13: Relative metrics comparison for the HELR [25] application with batch sizes 256 and 1024. Under these metrics, the lower the value, the better.
Fig. 15: Accuracy plot for different word sizes for the DNN. The lines are smoothed; the red dotted zig-zag line shows the original (unsmoothed) form.
## Acknowledgement
This work was supported in part by Samsung Electronics Co., Ltd., the Samsung Advanced Institute of Technology, and the State Government of Styria, Austria (Department Zukunftsfonds Steiermark). We also extend our gratitude to Ian Khodachenko for his assistance in conducting the application benchmarking process.
|
2303.11235 | FullFormer: Generating Shapes Inside Shapes | Implicit generative models have been widely employed to model 3D data and
have recently proven to be successful in encoding and generating high-quality
3D shapes. This work builds upon these models and alleviates current
limitations by presenting the first implicit generative model that facilitates
the generation of complex 3D shapes with rich internal geometric details. To
achieve this, our model uses unsigned distance fields to represent nested 3D
surfaces allowing learning from non-watertight mesh data. We propose a
transformer-based autoregressive model for 3D shape generation that leverages
context-rich tokens from vector quantized shape embeddings. The generated
tokens are decoded into an unsigned distance field which is rendered into a
novel 3D shape exhibiting a rich internal structure. We demonstrate that our
model achieves state-of-the-art point cloud generation results on popular
classes of 'Cars', 'Planes', and 'Chairs' of the ShapeNet dataset.
Additionally, we curate a dataset that exclusively comprises shapes with
realistic internal details from the `Cars' class of ShapeNet and demonstrate
our method's efficacy in generating these shapes with internal geometry. | Tejaswini Medi, Jawad Tayyub, Muhammad Sarmad, Frank Lindseth, Margret Keuper | 2023-03-20T16:19:23Z | http://arxiv.org/abs/2303.11235v1 | # FullFormer: Generating Shapes Inside Shapes
###### Abstract
Implicit generative models have been widely employed to model 3D data and have recently proven to be successful in encoding and generating high-quality 3D shapes. This work builds upon these models and alleviates current limitations by presenting the first implicit generative model that facilitates the generation of complex 3D shapes with rich internal geometric details. To achieve this, our model uses unsigned distance fields to represent nested 3D surfaces allowing learning from non-watertight mesh data. We propose a transformer-based autoregressive model for 3D shape generation that leverages context-rich tokens from vector quantized shape embeddings. The generated tokens are decoded into an unsigned distance field which is rendered into a novel 3D shape exhibiting a rich internal structure. We demonstrate that our model achieves state-of-the-art point cloud generation results on popular classes of 'Cars', 'Planes', and 'Chairs' of the ShapeNet dataset. Additionally, we curate a dataset that exclusively comprises shapes with realistic internal details from the 'Cars' class of ShapeNet and demonstrate our method's efficacy in generating these shapes with internal geometry.
## 1 Introduction
Continuous representations of data and signals in the form of implicit functions are impacting many research areas of computer vision and graphics. The idea of having a continuously learned function to represent 3D data implicitly is efficient since these functions can represent diverse topologies as well as being agnostic to resolution [10]. Neural networks have been successfully utilized to parameterize these implicit functions. Applications of neural implicit representation are widespread _e.g_. geometry representation [26, 1, 32], image super-resolution [9] and generative models [29, 43, 54] etc.
Implicit representations for 3D shapes can be categorized into two types. The first type represents the outer surface of a 3D shape as occupancy grids or distance fields. Occupancy networks [26] define the surface as a continuous decision boundary of a deep neural network classifier, whereas DeepSDF [32] represents a 3D surface using a signed distance field (SDF). A significant benefit of an SDF is the easy extraction of the surface as a point cloud using the marching cubes algorithm [24]. However, many implicit neural networks require 3D shapes to be watertight, which are often not readily available. Atzmon _et al_. [1] propose a sign-agnostic loss function to learn an SDF from non-watertight data; however, their model requires careful initialization of the neural network parameters and often misses thin structures. Another drawback of SDFs stems from their inherent nature, _i.e_., 3D shapes are modeled only as inside and outside. Therefore, multiple nested surfaces inside a 3D shape cannot be represented, since distances take only two states in an SDF, either positive or negative, and values are 0 or 1 in
Figure 1: This paper addresses generating 3D objects with rich internal geometric details.
occupancy networks.
Unsigned distance fields (UDFs) are another type of implicit representation whereby a 3D shape is delineated through a function that predicts the unsigned distance of a given point in space to the nearest surface of the 3D shape. This representation is capable of encoding multiple layers of internal 3D structures since distance values are not limited to only capturing inside or outside. However, extracting a surface from a UDF to a tractable datatype such as point clouds is non-trivial. The standard marching cube algorithm [24] cannot be used, as finding a zero-level set by detecting the flips between inside and outside is not possible with UDFs. Chibane et al. [11] provided algorithms to extract point clouds comprising internal geometries from UDFs. They have further demonstrated the use of UDFs on the task of shape reconstruction. However, shape completion/synthesis or novel shape generation with UDFs remains unexplored. In this paper, we present an approach, which leverages UDFs capability to represent nested 3D shapes to learn and generate rich internal structures, while ensuring the high quality and diversity of the 3D shapes.
Learning representations of complex shapes requires the encoding of distant shape contexts. This is especially true when shapes with internal structures are considered, _i.e_. local shape context is not sufficient to model long-range relationships for example between the overall height of a car and the shape or tilting of its seats. To facilitate the encoding of relationships at varying spatial distances, transformer-based models that leverage the self-attention mechanism are the method of choice [50, 12]. Transformers are proven to be effective in modeling data distributions and generating realistic samples in image generation and 3D shape completion tasks [14, 52]. Unfortunately, transformers can not directly learn from UDF representations since they rely on discrete token representations. Leveraging the advantages of transformers for shape generation with internal structure is therefore non-trivial.
In this paper, we present a way to properly learn to generate 3D shapes with internal details while modeling long-range shape dependencies. This effectively integrates transformer-based shape learning with UDFs. We thereby exploit the fact that fine surface details can be locally encoded whereas global structures depend on large spatial contexts. Thus, we first employ a convolutional neural network-based implicit function to encode the continuous latent representation of 3D shapes locally. We then make use of vector quantization to discretize the continuous locally encoded shape information into a sequence of discrete tokens. Finally, a latent transformer model is employed to learn the global long-range dependencies on the basis of these discrete tokens to generate novel discrete latent shape representations. This encoded 3D shape information is fed into the decoder to predict UDFs. 3D shapes are retrieved in the form of dense point clouds from the generated UDFs using the dense point cloud algorithm mentioned in [11].
In summary, our contributions are as follows:
* We propose an implicit neural network-based generative framework for generating 3D shapes with nested geometries, _i.e_. internal details.
* Our generative model can learn from both watertight and non-watertight 3D data.
* We carefully curate a new dataset of car shapes, with internal geometries, from ShapeNet and make it publicly available.
* We demonstrate that our method outperforms previous state-of-the-art and achieves superior qualitative and quantitative point cloud generation results.
## 2 Related Work
**Generative Adversarial Networks.** A standard generative model used in computer vision applications is the generative adversarial network (GAN) [15]. Recent works [9, 22] have shown 3D shape generation combining implicit neural networks and generative adversarial networks. However, the quality of output suffers from mode collapse and catastrophic forgetting due to the instability of GAN training [23, 47].
**Score-based Models.** Another family of generative models are denoising diffusion probabilistic models, also known as score matching models [20, 18, 45, 46]. The main principle of these models is that they model the gradient of the log probability density function with respect to the real sample, a process referred to as score matching. These models have achieved state-of-the-art results in many downstream tasks such as super-resolution and generation [41, 3, 6, 54]. However, they are slow at inference time, limiting their usage in real-time applications.
**Likelihood-based Models.** Variational auto-encoders (VAEs) and autoregressive models (ARs) are two commonly known likelihood-based models, and both aim to learn a probability distribution over the input data. While VAEs are computationally efficient and fast at inference time, their generation quality is often inferior to that of GANs [21, 40]. Conversely, autoregressive models can represent the data distribution with high fidelity but generate samples slowly [31, 39, 34, 5]. To overcome the limitations of these two models, hybrid models combining autoregressive transformer models and vector quantized VAEs have been proposed to generate high-resolution realistic images [14]. Our proposed method builds upon this hybrid model setup and focuses on generating 3D
shapes with internal structures. Our approach is related to ShapeFormer [52], which employs a latent transformer architecture to learn from compact and discretely encoded sequences that approximate 3D shapes, specifically for 3D shape completion utilizing signed distance functions (SDFs). However, they do not tackle the task of unconditional shape generation. Moreover, they employ a local pooled PointNet model [38] for feature extraction, which can limit the expressiveness of the feature embeddings. In contrast, we demonstrate that incorporating locality inductive biases, as in CNNs, in extracted features allows for tractable feature embeddings. Therefore, we opt for using an IF-Net-based [7] encoder. While their approach is restricted to performing shape completion exclusively on watertight 3D models, our method offers the ability to generate novel shapes with internal structures and is not constrained by watertight-only models.
**Implicit Neural Generative Models.** In recent years, neural implicit networks have gained significant attention for their efficacy in 3D representational learning [33, 27, 1, 37, 44, 42, 56, 19, 17]. While several models have explored implicit representation for 3D surface reconstruction, only a few have used it for 3D model generation [54, 17]. In general, this type of neural representation encapsulates a 3D surface by taking a spatial coordinate value as input and outputs a parameter: ones or zeros for points inside or outside the surface [27], or a signed distance from the surface [33]. However, as mentioned before, these representations do not preserve the internal geometry of 3D shapes. Recently, NDF [11] has demonstrated that UDFs are capable of representing inner details within 3D models. In this context, we propose a deep implicit generative framework that utilizes UDFs to generate high-quality 3D models with internal geometric structures. Our work highlights the potential of UDFs in generating rich 3D models. This has significant implications for various applications, such as product design, robotics, CAD designs, and medical imaging, whereby internal geometries are crucial for accurate modeling and simulation.
## 3 Method
The objective of this work is to leverage the representational power of unsigned distance fields (UDFs) in order to implicitly model 3D shapes whilst retaining their internal geometric details. To achieve this goal, we utilize the learning capabilities of transformers and incorporate UDF-based implicit function learning to develop an autoregressive generative model capable of generating 3D shapes with internal structures. Previous research works [14, 52] have demonstrated the expressive power of transformers in capturing long-range dependencies in the input data. However, their complexity increases considerably with sequence length [50]. This problem is exacerbated when the data representation is a dense 3D model. Therefore, instead of representing a 3D model as voxels, point clouds, or discrete patches directly, we learn a compact and discrete representation whereby a shape is encoded using a codebook of context-rich parts. This allows a transformer to capture long-range interactions between these contextual parts and effectively model the distributions over the full shapes. At inference time, generated latent codes are decoded into a UDF using an implicit decoder. UDFs are then rendered into point clouds using the dense point cloud algorithm provided by Chibane et al. [11]. Figure 2 details the complete framework of our approach.
Our method can be sectioned into two parts. First, we describe a form of an autoencoder, namely Vector Quantized Unsigned Distance Field (VQUDF), which learns a context-rich codebook, as detailed in Sec. 3.1. Then we present the latent transformer architecture as a generative model capable of producing novel shapes, as outlined in Sec. 3.2.
### Sequential Encoding with VQUDF
A 3D shape is represented as a point cloud input denoted by \(\mathbf{X}\in\mathbb{R}^{N\times 3}\). To harness the power of transformers in the generation, we encode \(\mathbf{X}\) into a discrete _sequence_ of tokens. This discrete _sequence_ must encapsulate the complete geometric information of the 3D shape. Inspired by ideas from [49, 14], we formalize the encoder, codebook, and decoder architecture for generating 3D shapes with internal geometry using UDFs.
**Encoder.** To generate 3D shapes with internal structures using transformers, we require a compact and discrete representation of the input shape that maintains high geometric resolution. The input to our encoder is a sparse voxelized point cloud defining a 3D shape. When dealing with voxel data representations, capturing local spatial context is essential, since the correlation between neighboring voxels significantly impacts the overall shape of the object. CNNs are well-suited for capturing the prior inductive bias of strong spatial locality in images [13]. By incorporating local priors from CNNs, we can effectively capture the spatial context of the input data and encode it into a compact feature grid, utilizing ideas from neural discrete representation learning [49]. To achieve this, the first step is to employ a CNN-based feature extractor \(\mathcal{E}\) called IF-Net [11]. IF-Net takes a sparse voxelized point cloud \(\mathbf{X}\) and maps it to a set of _multi-scale_ grids of deep features \(\mathbf{F}_{1},...,\mathbf{F}_{m}\) s.t. \(\mathbf{F}_{k}\in\mathcal{F}_{k}^{K^{3}}\) and \(\mathcal{F}_{k}\in\mathbb{R}^{c}\). Note that the resolution \(K\) reduces and the number of channels \(c\) increases as \(k\) increases. For tractability, we interpolate the feature grids \(\mathbf{F}_{1},...,\mathbf{F}_{m-1}\) to the scale of the final feature grid \(\mathbf{F}_{m}\) using trilinear interpolation. This provides a good trade-off between model complexity and shape details. A concatenation of
these feature grids along the channel dimension results in a compact feature grid \(\mathbf{Z}\in\mathbb{R}^{K^{3}\times C}\). Note that \(\mathbf{Z}\) is a continuous latent feature representation.
**Quantization.** A discrete description of the world can aid learning by compressing information in many domains, such as language or images [49, 28, 8]. We posit that 3D models are no exception and can greatly benefit from discrete representations. In addition, to utilize the generative transformer model, the input shape is preferably a discrete _sequence_. Therefore, we employ vector quantization to transform the continuous latent feature representation \(\mathbf{Z}\) into a sequence of tokens \(\mathcal{T}\) using a learned codebook \(\mathcal{B}\) of context-rich codes \(\mathcal{B}=\{\mathbf{b}_{i}\}_{i=1}^{V}\subset\mathbb{R}^{n_{z}}\), where \(n_{z}\) is the length \(K\times C\) of a code. Following a row-major ordering [14], each feature slice \(\mathbf{z}_{i}\in\mathbf{Z}\) is clamped to the nearest code in the codebook \(\mathcal{B}\) using Eq. (1) (see Fig. 2), which results in a quantized feature grid \(\hat{\mathbf{Z}}\).
\[t_{i}=\text{argmin}_{j\in\{1,..,V\}}\|\mathbf{z}_{i}-\mathbf{b}_{j}\| \tag{1}\]
A sequence of tokens \(\mathcal{T}\) is then defined as the ordered set of indices \((t_{i})\forall i\in\{1,..,|\mathcal{T}|\}\).
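A minimal PyTorch sketch of this lookup is given below; the shapes and names are illustrative rather than taken from the released implementation.

```python
import torch

def quantize(z, codebook):
    """z: (n, C) continuous feature slices; codebook: (V, C) learned codes.
    Returns token indices (n,) and the quantized features (n, C)."""
    # Pairwise squared distances ||z_i - b_j||^2 without explicit loops.
    d = (z.pow(2).sum(1, keepdim=True)
         - 2 * z @ codebook.t()
         + codebook.pow(2).sum(1))
    tokens = d.argmin(dim=1)                 # Eq. (1): index of the nearest code
    z_q = codebook[tokens]                   # quantized feature grid \hat{Z}
    return tokens, z_q

if __name__ == "__main__":
    torch.manual_seed(0)
    z = torch.randn(16 ** 3, 512)            # illustrative 16^3 grid with 512 channels
    codebook = torch.randn(8192, 512)        # V = 8192 codes of dimensionality 512
    tokens, z_q = quantize(z, codebook)
    print(tokens.shape, z_q.shape)           # torch.Size([4096]) torch.Size([4096, 512])
```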
**Decoder.** As stated earlier, we aspire to learn an implicit representation of shapes to benefit from the properties of such models, for example, no watertight shape restrictions, arbitrary resolution, and the ability to encode internal structures. To achieve this, we train a decoder to output an unsigned distance field \(\text{UDF}(\mathbf{p},\mathcal{S})=\text{min}_{\mathbf{q}\in\mathcal{S}}\|\mathbf{p}-\mathbf{q}\|\), a function that approximates the unsigned distances between the sample points \(\mathbf{p}\) and the surface of the shape \(\mathcal{S}\). Formally, the decoder is defined as a neural function \(\mathcal{D}(\hat{\mathbf{Z}},\mathbf{p}):\mathbb{R}^{K^{3}\times C}\times \mathbb{R}^{3}\mapsto\mathbb{R}^{+}\) that regresses the UDF at a set of points \(\mathbf{p}\) conditioned on the latent discrete feature grid \(\hat{\mathbf{Z}}\). The dense point cloud algorithm provided by Chibane et al. [10] is then used to convert the UDF to a final point cloud denoted by \(\hat{\mathbf{X}}\).
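The core of that extraction step can be sketched as a gradient-based projection: points sampled in the volume are repeatedly moved by the predicted distance along the negative UDF gradient, \(\mathbf{p}\leftarrow\mathbf{p}-\text{UDF}(\mathbf{p})\cdot\nabla_{\mathbf{p}}\text{UDF}(\mathbf{p})/\|\nabla_{\mathbf{p}}\text{UDF}(\mathbf{p})\|\). The PyTorch sketch below uses an analytic sphere UDF as a stand-in for the learned decoder.

```python
import torch

def project_to_surface(udf_fn, points, num_steps=5):
    """Move sample points onto the zero level set of a UDF via gradient projection."""
    p = points.clone().requires_grad_(True)
    for _ in range(num_steps):
        d = udf_fn(p)                                      # unsigned distances, shape (N,)
        (grad,) = torch.autograd.grad(d.sum(), p)          # dUDF/dp, shape (N, 3)
        p = p - d.unsqueeze(-1) * grad / grad.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        p = p.detach().requires_grad_(True)
    return p.detach()

if __name__ == "__main__":
    # Toy UDF of a unit sphere: projected points should end up at radius ~1.
    sphere_udf = lambda p: (p.norm(dim=-1) - 1.0).abs()
    pts = torch.rand(2048, 3) * 4 - 2                      # uniform samples in [-2, 2]^3
    surf = project_to_surface(sphere_udf, pts)
    print(surf.norm(dim=-1).mean())                        # ~1.0
```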
**Training VQUDF.** The training process involves learning the encoder \(\mathcal{E}\), the codebook \(\mathcal{B}\), and the decoder \(\mathcal{D}\) simultaneously. The overall loss function is given in Eq. (2).
\[\mathcal{L}_{\text{VQUDF}}(\mathcal{E},\mathcal{B},\mathcal{D})=\parallel\text{UDF}(\mathbf{p},\mathcal{S})-\text{UDF}_{gt}(\mathbf{p},\mathcal{S})\parallel_{2}^{2}+\mathcal{L}_{c} \tag{2}\]
The first term denotes the reconstruction loss, which is computed as the difference between predicted and ground truth UDFs. This method is different from the commonly utilized approach of computing loss between predicted and true point clouds. The second term \(\mathcal{L}_{c}\) denotes the commitment loss in equation (3).
\[\mathcal{L}_{c}=\parallel\text{sg}[\mathcal{E}(\mathbf{X})]-\hat{\mathbf{Z}} \parallel_{2}^{2}+\parallel\text{sg}[\hat{\mathbf{Z}}]-\mathcal{E}(\mathbf{X })\parallel_{2}^{2} \tag{3}\]
Different from vanilla NDF training, our pipeline has a non-differentiable quantization operation. Following previous
Figure 2: **Approach:** Key ingredients of our pipeline are vector quantized autoencoder, unsigned distance field (UDF), and latent transformer. The first stage is learning VQUDF which is a vector quantized autoencoder model that takes voxelized point clouds as input to a CNN-based encoder and utilizes an implicit decoder to output a UDF of the 3D shape. UDF ensures rich internal details are retained in a continuous data representation. Latent codes from the learned VQUDF are used to train an autoregressive transformer. This transformer learns to generate novel latent codes at test time. An implicit decoder then decodes generated latent codes to output a UDF. A 3D shape is then rendered from the UDF as a more tractable data format such as a point cloud.
works [2, 49], we utilize a straight-through gradient estimator to circumvent this problem. Under this approach, gradients are simply copied over from the decoder to the encoder. This method ensures joint training of the codebook, the encoder, and the decoder.
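A schematic PyTorch version of this training step is shown below; the encoder and decoder interfaces are assumed, and the code is a sketch of Eqs. (2)-(3) rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def vqudf_step(encoder, codebook, decoder, voxels, points, udf_gt):
    """One VQUDF training step (schematic only; module interfaces are assumed)."""
    z_e = encoder(voxels)                              # continuous features E(X), shape (n, C)
    dists = torch.cdist(z_e, codebook)                 # distances to all V codebook entries
    z_q = codebook[dists.argmin(dim=1)]                # nearest-code quantization (Eq. 1)
    z_st = z_e + (z_q - z_e).detach()                  # straight-through: gradients copied to the encoder
    udf_pred = decoder(z_st, points)                   # implicit decoder regresses the UDF
    recon = F.mse_loss(udf_pred, udf_gt)               # UDF regression term of Eq. (2)
    commit = (F.mse_loss(z_e.detach(), z_q)            # codebook term of Eq. (3)
              + F.mse_loss(z_q.detach(), z_e))         # encoder commitment term of Eq. (3)
    return recon + commit
```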
### Generating a Sequence of Latent Vectors
**Latent Transformer.** Transformers have shown tremendous performance in generating images by modeling them as a sequence of tokens and learning to generate such sequences [35, 30]. Transformers are unconstrained by the locality bias of CNNs, allowing them to capture long-range dependencies in images. 3D models with internal structures also exhibit long-range dependencies; for example, the number and shape of seats in a car depend on the body being either a sedan or a sports car. Previous works [55, 16, 51] have successfully demonstrated capturing such dependencies using transformers for 3D models. We represent 3D shapes as a sequence of tokens \(\mathcal{T}=(t_{1},...,t_{|\mathcal{T}|})\) resulting from our trained VQUDF framework. Recall that each token \(t_{i}\) is the index of the codebook embedding closest to the corresponding slice of the continuous latent feature grid. The generation of shapes is modeled as an autoregressive prediction of these indices: a transformer learns to predict the distribution of the next index given the prior ones. The likelihood of the complete sequence \(\mathcal{T}\) is \(p(\mathcal{T})=\prod_{i=1}^{|\mathcal{T}|}p(t_{i}|t_{1...i-1})\).
**Transformer Training.** The generation of latent codes as a sequence of tokens using transformers is highlighted in Fig. 2. The learned weights of the trained VQUDF autoencoder are frozen before training the transformer. VQUDF is first used to create a training dataset of 3D shape latent embeddings, which are then used to train the transformer. The training objective is to maximize the log-likelihood \(p(\mathcal{T})\) of the token sequences representing the 3D shapes:
\[\mathcal{L}_{\text{Transformer}}=\mathbb{E}_{x\sim p(x)}[-\text{log}\,p( \mathcal{T})] \tag{4}\]
After training, this model starts with the [START] token and predicts the next indices forming a complete sequence \(\mathcal{T}\) until a [END] token is predicted. By mapping indices in the sequence \(\mathcal{T}\) back to the corresponding codebook entries, a discrete latent feature grid \(\hat{\mathbf{Z}}\) is recovered. The 3D shape is then reconstructed using the implicit decoder \(\mathcal{D}\), which results in a UDF. We use the dense point cloud extraction algorithm proposed in [11] to extract the generated 3D shape \(\hat{\mathbf{X}}\) as a point cloud.
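A schematic of the sampling loop is given below, assuming a standard GPT-style interface where the model maps a token prefix to next-token logits.

```python
import torch

@torch.no_grad()
def sample_shape_tokens(model, max_len, start_token, end_token, temperature=1.0):
    """Autoregressively sample a token sequence T = (t_1, ..., t_|T|) from p(t_i | t_<i)."""
    tokens = torch.tensor([[start_token]])                   # (1, 1), begins with [START]
    for _ in range(max_len):
        logits = model(tokens)[:, -1, :] / temperature       # distribution over the next index
        probs = torch.softmax(logits, dim=-1)
        nxt = torch.multinomial(probs, num_samples=1)        # sample t_i
        tokens = torch.cat([tokens, nxt], dim=1)
        if nxt.item() == end_token:
            break
    return tokens.squeeze(0)

# After sampling, the indices are mapped back to codebook entries and decoded:
#   z_hat = codebook[tokens[1:-1]]          # discrete latent feature grid \hat{Z}
#   udf   = decoder(z_hat, query_points)    # implicit decoder -> UDF -> dense point cloud
```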
## 4 Experiments
This section thoroughly evaluates our proposed approach and demonstrates its effectiveness in generating high-quality shapes with internal structure and details. We compare our point cloud generation results against multiple SOTA point cloud generation baselines and present good qualitative and quantitative results for the shape generation task.
### Implementation Details
We train our models in two stages. First, we train the VQUDF module, followed by a latent transformer module. For training, we utilize stock hardware comprising one Nvidia RTX Quadro GPU with 48GB of VRAM. All code is written in PyTorch [36] whereby a portion is acquired from open repositories of [11, 14]. For training both modules, we use a batch size of 1 and the Adam optimizer. For VQUDF training, we employ a learning rate of 1e-6 and ReLU activation, whereas the transformer's training uses a learning rate of 4.5e-6. Furthermore, the transformer has 12 layers and 8 attention heads. The length of the input sequence to the transformer model is set as 7952; the codebook size is 8192, with each codebook having a dimensionality of 512. Additional training details, including architectures of networks, are presented in the supplementary material.
### Datasets
Since our approach focuses on generating internal structures, we sought a dataset of 3D shapes having internal geometric details. Such datasets are scarce which led us to curate a new dataset of cars with realistic internal geometry. We call this dataset 'Full Cars'. Other than this dataset, we utilize standard datasets from ShapeNetCore [4]. A detailed description of both datasets is provided in this section.
ShapeNetWe use the ShapeNetCore v2 dataset with three categories: _Airplanes_, _Chairs_, and _Cars_. The Cars object category of the ShapeNet dataset contains both open and closed shapes. However, some shapes are present with internal details. Most cars either include no internal structure or significantly degraded internal geometry, as shown in Fig. 3.
**Full Cars.** This dataset is a subset of the ShapeNetCore v2 dataset of the 'cars' category. The cars in this dataset exhibit rich internal details. We utilize Blender to filter out cars without internal structures, selecting cars that have a significant number of points inside the outer shell to facilitate learning of interiors. After curating, the final dataset contains 1602 models of full cars. These are
Figure 3: _ShapeNet:_ Most samples have no internal details.
split into training, validation, and test sets, with 320 and 160 shapes held out for the validation and test sets, respectively. Note that although this dataset comprises cars with internal structure, these models are non-watertight and therefore present a significant challenge for implicit representation and subsequent generation. A glimpse of the internal geometry in this dataset is shown in Fig. 4.
### VQUDF Reconstruction Performance
The input point cloud is sampled and voxelized before feeding into the VQUDF encoder. The number of points sampled from different datasets and voxel resolution during training of the VQUDF module are presented in Table 1. Recall that the input 3D shape is encoded into a feature grid \(\mathbf{\hat{Z}}\) where each channel comprises a feature block of dimension \(K^{3}\). The quality of encoded information and generation capability depends on the dimensionality \(K\) of the 3D latent feature grid \(\mathbf{\hat{Z}}\). Fig.5 shows reconstruction results of the VQUDF module on the Full Cars dataset with different values of \(K\) such that resolution of the 3D latent feature becomes \(\mathbf{\hat{Z}}\in\mathbb{R}^{64^{3}\times C}\), \(\mathbf{\hat{Z}}\in\mathbb{R}^{16^{3}\times C}\) and \(\mathbf{\hat{Z}}\in\mathbb{R}^{8^{3}\times C}\) respectively, where \(C\) is the number of channels. Note that the fidelity of internal geometries increases progressively with the dimensionality \(K\) of \(\mathbf{\hat{Z}}\). However, increased \(K\) results in a large quantized sequence length \(\tau\) making transformer training difficult. Hence, a good trade-off between geometrical fidelity and memory footprint is achieved by selecting \(\mathbf{\hat{Z}}\in\mathbb{R}^{16^{3}\times C}\) which is then processed into a tractable sequence of tokens to generate shapes with internal details.
### Baseline
We use the following baselines, which generate novel 3D point clouds, to compare with our point cloud generation. The first baseline is Graph Convolution GAN [48], which relies on standard GAN-based generation and employs localized operations in the form of graph convolutions to generate point clouds. Another baseline is the Diffusion Model of Luo et al. [25], which employs denoising diffusion probabilistic models for point cloud generation. Lastly, we also compare against PointFlow [53], which utilizes normalizing flows for point cloud generation. These models naturally carry the ability to learn internal details of 3D models, provided that they have been trained on datasets with internal structures. However, they do not utilize an implicit continuous representation to capture internal details. Therefore, these approaches are not only limited to generating a fixed number of points at a fixed resolution, but are also limited in their ability to model internal structures in the predicted 3D shapes.
### Metrics
For quantitative evaluation, we use three different metrics following previous works.
**MMD.** Minimum matching distance (MMD) indicates the faithfulness of generated samples with respect to real data. A lower MMD indicates that the generated samples are close to the ground truth samples.
**COV.** Diversity is an important aspect of generative models. A high coverage score (COV) indicates that the model does not suffer from mode collapse and has high sample diversity.
**JSD.** Jensen-Shannon divergence (JSD) computes the symmetric similarity between the distributions of generated samples and reference samples. A lower value of JSD is desirable. However, this metric is dependent on the selection of the reference set.
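For reference, the sketch below computes MMD and COV from a matrix of pairwise Chamfer distances between the generated and reference sets, following the standard definitions used in prior point cloud generation work (one common Chamfer convention is assumed; JSD additionally requires voxelized occupancy histograms and is omitted).

```python
import torch

def chamfer(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)                                   # (N, M) pairwise distances
    return (d.min(dim=1).values.pow(2).mean()
            + d.min(dim=0).values.pow(2).mean())

def mmd_cov(generated, reference):
    """generated/reference: lists of point clouds. Returns (MMD, COV) under Chamfer distance."""
    D = torch.tensor([[chamfer(g, r).item() for r in reference] for g in generated])
    mmd = D.min(dim=0).values.mean()                        # per-reference nearest generated shape
    cov = D.argmin(dim=1).unique().numel() / len(reference) # fraction of references matched
    return mmd.item(), cov

if __name__ == "__main__":
    gen = [torch.rand(2048, 3) for _ in range(8)]
    ref = [torch.rand(2048, 3) for _ in range(8)]
    print(mmd_cov(gen, ref))
```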
### Qualitative Results
In this section, we show the qualitative performance of our generative model on the considered datasets.
\begin{table}
\begin{tabular}{c c c} \hline Dataset & Points Sampled & Voxel resolution \\ \hline \hline ShapeNet _Cars_ & 10000 & 256\({}^{3}\) \\ \hline ShapeNet _Planes_ & 5000 & 32\({}^{3}\) \\ \hline ShapeNet _Chairs_ & 4000 & 32\({}^{3}\) \\ \hline Full Cars & 10000 & 256\({}^{3}\) \\ \hline \end{tabular}
\end{table}
Table 1: Number of points sampled and voxel resolution considered for VQUDF training for different datasets.
Figure 4: _Full Cars:_ We curate a dataset of cars from ShapeNet that contains rich internal details.
Figure 5: **Reconstruction Results:** Our model reconstruction results with different latent space resolutions \(64^{3}\), \(16^{3}\) and \(8^{3}\) respectively (top to bottom).
**ShapeNet.** Samples of point cloud generation results with 2048 points from our model and the baseline models for the classes _chairs_ and _airplanes_ are presented in Fig. 6. We highlight that our model does not rely on any priors in the form of preset tokens in the input sequence, so the results are fully unconditional generations. The performance of our method is apparent in its less noisy and more realistic shape generations. We further note the immense diversity of the generated shapes, whereby all generated samples in Fig. 6 are of distinct visual designs. High fidelity is also perceptible across the generated examples. More generated samples of _cars_ are provided in the supplementary material.
**Full Cars.** We use the Full Cars dataset to showcase our approach's key feature: generating high-fidelity outer shells with intricate internal geometric details. The qualitative results of randomly generated cars are presented in Fig. 7, demonstrating the efficacy of our model in generating samples with rich internal geometric structures. Additionally, the generated cars in Fig. 7 show a remarkable level of diversity, for example, varied genres of cars with different numbers of seats. Fig. 8 presents comparative results of randomly generated cars from Diffusion [25], PointFlow [53], and our FullFormer. Both comparative methods are inherently capable of encapsulating internal structures and are therefore directly comparable; we train both methods on the 'Full Cars' dataset. Our approach achieves a clear visual superiority over the comparative methods, which fail to generate any discernible internal structure. It is also important to note that shapes in the training data lack dense internal geometries of high fidelity. Despite this limitation, our method is able to learn a general model capable of generating shapes with internal structures from noisy real-world raw data.
### Quantitative Results
In this section, we present a quantitative evaluation of our model's performance in point cloud generation. The metrics discussed in section 4.5 are tabulated in Table 2. Our method achieves state-of-the-art performance on all the metrics for the 'Full Cars' dataset, validating the capability of FullFormer in generating complete shapes with rich insides. High coverage and low JSD further demonstrate that
| Dataset | GraphCNN-GAN [48] MMD\(\downarrow\) | COV\(\uparrow\) | JSD\(\downarrow\) | Diffusion [25] MMD\(\downarrow\) | COV\(\uparrow\) | JSD\(\downarrow\) | PointFlow [53] MMD\(\downarrow\) | COV\(\uparrow\) | JSD\(\downarrow\) | **Ours (FullFormer)** MMD\(\downarrow\) | COV\(\uparrow\) | JSD\(\downarrow\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ShapeNet _Cars_ | 3.18 | 16 | 4.67 | 1.4 | 17.7 | **2.21** | 1.28 | 29.67 | 3.16 | **1.13** | **29.72** | 2.29 |
| ShapeNet _Planes_ | 1.1 | 31.09 | 1.75 | 0.98 | 36.73 | **0.65** | 1.41 | 35.87 | 1.06 | **0.92** | **37.37** | 0.83 |
| ShapeNet _Chairs_ | 4.213 | 33.5 | 1.24 | **3.79** | 36.2 | **0.42** | 4.19 | 33.23 | 0.82 | **3.79** | **37** | 1.06 |
| Full Cars | 2.32 | 20 | 3.81 | 1.24 | 21.23 | 2.83 | 1.18 | 24.85 | 3.39 | **0.93** | **25.07** | **2.72** |
Table 2: We quantitatively compare the results of our method with GraphCNN-GAN [48], Diffusion [25] and PointFlow [53]. We report minimum matching distance (MMD), coverage score (COV), and Jenson and Shannon divergence (JSD) for comparison. We use Chamfer distance (CD) for MMD and COV calculations. MMD scores are multiplied by \(10^{3}\) and JSD are multiplied by \(10^{-1}\). Our proposed FullFormer improves consistently over all previous methods in terms of MMD and COV.
Figure 6: **Outer Hull Generation: Our models show high-quality point cloud generation results when trained on object categories of chairs, aeroplanes of ShapeNet dataset and visually improve over previous methods such as GraphCNN-GAN [48], Diffusion [25] or PointFlow [53].**
generated models exhibit high diversity which we also observe visually.
Moreover, we achieve the best performance in MMD and coverage across all classes of cars, chairs, and planes of the ShapeNet dataset compared with the other baselines. While FullFormer achieves higher (i.e., worse) JSD values than PointFlow [53] and Diffusion [25] on the ShapeNet dataset, the qualitative results continue to show high diversity on all the considered datasets. We therefore hypothesize that the weaker JSD scores on the ShapeNet dataset are an artifact of the reference set selection.
### Limitations
Unlike the high fidelity achieved on outer shells, the generated internal details exhibit lower quality. The sampling of the latent feature space limits the detail of the generated geometry. However, our approach presents the first effort towards generating internal details, which can be clearly seen in the presented qualitative results. Our evaluation is also constrained by the scarcity of available shape datasets with rich internal structures. Furthermore, we used off-the-shelf methods to mesh our dense point cloud results, which degraded the quality of our results, as there is no direct algorithm to mesh 3D shapes from unsigned distance fields. Especially for fine details and thin structures, the quality of the generated shapes is not easy to assess from point clouds.
## 5 Conclusion
In this work, we present FullFormer: a model to generate 3D objects with internal structures. Our approach employs a vector quantized autoencoder (VQUDF) to learn 3D shape geometry. The encoder consumes a voxelized point cloud as input whilst the decoder predicts an unsigned distance field (UDF) of the 3D shape. To generate discrete embeddings of the 3D shape, we employ a latent transformer model. This transformer is trained autoregressively on indices of quantized shape embeddings learned by the VQUDF, making it computationally efficient. The trained transformer is then able to generate latent codes unconditionally. Generated codes are decoded into a UDF as the output representation ensuring that generated shapes have rich internal structure and high-fidelity outer surface at arbitrary resolution. We demonstrate superior qualitative and quantitative results compared to previous state-of-the-art methods.
Figure 8: **Generation Comparison: Our model (with \(16^{3}\) latent space resolution) shows high-quality internal structure generation results compared to previous models. It is apparent that these models do not achieve discernable internal structure. All point clouds are sampled to 2048 points.**
Figure 7: **Generation: Diverse generation results from our FullFormer model on the Full Cars dataset with internal structures. The high degree of detail of generated shapes is clearly visible in the dense point clouds. Note that, not only seats specific to car type, but also minute details such as steering wheels are well generated. High point clouds quality even allows to compute surface meshes (bottom) of the non-watertight shapes with internal structures.** |
2302.09753 | A Simple and Fast Approach for Computing the Fusion Reactivities with
Arbitrary Ion Velocity Distributions | Calculating fusion reactivity involves a complex six-dimensional integral of
the fusion cross section and ion velocity distributions of two reactants. We
demonstrate a simple Monte Carlo approach that efficiently computes this
integral for arbitrary ion velocity distributions with a time complexity of
$O(N)$, where $N$ is the number of samples. This approach generates random
numbers that satisfy the reactant velocity distributions. In cases where these
numbers are not readily available, we propose using Gaussian random numbers
with weighted factors. For cases where only a small number of $N$ samples are
available, a $O(N^2)$ method can be used. We benchmarked this approach against
analytical results for drift bi-Maxwellian distributions and provided examples
of drift ring beam and slowing down distributions. Our results show that the
error can be less than 1\% with $N\sim10^4$ samples for our standard approach. | Huasheng Xie | 2023-02-20T04:20:32Z | http://arxiv.org/abs/2302.09753v2 | A Simple and Fast Approach for Computing the Fusion Reactivities with Arbitrary Ion Velocity Distributions
###### Abstract
Calculating fusion reactivity involves a complex six-dimensional integral of the fusion cross section and ion velocity distributions of two reactants. We demonstrate a simple Monte Carlo method that efficiently computes this integral for arbitrary ion velocity distributions with a time complexity of \(O(N)\), where \(N\) is the number of samples. Our approach generates random numbers that satisfy the reactant velocity distributions. In cases where these numbers are not readily available, we propose using Gaussian random numbers with weighted factors. Our approach is more time-efficient than the \(O(N^{2})\) method used in some particle simulation codes, while providing the same accuracy. We benchmark our method against analytical results for drift bi-Maxwellian distributions and provide examples of drift ring beam and slowing down distributions. Our results show that the error can be less than \(1\%\) with \(N\sim 10^{4}\) samples.
keywords: Fusion Reactivity, Monte-Carlo, Arbitrary Velocity Distributions +
Footnote †: journal: Journal of Computational Physics
## 1 Introduction
Fusion reactivity \(\langle\sigma v\rangle\) is the integral of fusion cross section and the reactants' velocity distribution functions
\[\langle\sigma v\rangle=\int\int d\mathbf{v}_{1}d\mathbf{v}_{2}\sigma(|\mathbf{v}_{1}\!-\! \mathbf{v}_{2}|)|\mathbf{v}_{1}\!-\!\mathbf{v}_{2}|f_{1}(\mathbf{v}_{1})f_{2}(\mathbf{v}_{2}), \tag{1}\]
where \(f_{1}\) and \(f_{2}\) are the normalized velocity distribution functions of two ions, i.e., \(\int f_{j}(\mathbf{v}_{j})d\mathbf{v}_{j}=1\) with \(j=1,2\), and \(d\mathbf{v}_{j}=dv_{xj}\,dv_{yj}\,dv_{zj}\). Here, \(\sigma=\sigma(E)\) or \(\sigma=\sigma(v)\) is the fusion cross section, with \(E\) being the energy in the center-of-mass frame
\[E=\frac{1}{2}m_{r}v^{2},\ \ v=|\mathbf{v}|=|\mathbf{v}_{1}-\mathbf{v}_{2}|,\ \ m_{r}=\frac{m_{1}m_{2}}{m_{1}+m_{2}}, \tag{2}\]
where \(m_{1}\) and \(m_{2}\) are the mass of the two reactants, and \(m_{r}\) is the reduced mass of the system.
Equation (1) is not only important for calculating the fusion yield in laboratory [1] or stellar [2] plasmas, but it is also useful for obtaining spectrum information of the distribution functions \(f_{1,2}\) from a diagnostic perspective [3]. However, calculating \(\langle\sigma v\rangle\) for arbitrary \(f_{1}\) and \(f_{2}\) is difficult since it involves a six-dimensional (6D) velocity integral, which is usually computed numerically using high-dimensional integral methods such as Monte Carlo methods [4] or orthogonal polynomial expansion methods [5]. Kolmes et al. [6] used a mix of quadrature and Monte Carlo algorithm to study the fusion yield of plasma with velocity-space anisotropy at constant energy. Nath et al. [7] reduced the 6D integral to a 3D integral for drift tri-Maxwellian distributions, which is numerically tractable. Several analytical 1D integral results are summarized in Ref. [8], with the drift bi-Maxwellian distribution being the most general one, which can be reduced to Maxwellian, bi-Maxwellian, and beam-Maxwellian cases.
The numerical integration of Eq.(1) using the Monte-Carlo approach can actually be quite simple. While the approaches used in particle simulation codes[9; 10] to calculate the fusion yield are valid for arbitrary velocity distributions and can be used to calculate Eq.(1), we have found that a more flexible approach can be developed when only interested in calculating the fusion reactivity integral, Eq.(1). In this work, we demonstrate a simple yet effective Monte-Carlo approach for this 6D integral. The time cost of the approach used in particle simulation codes is \(O(N^{2})\), where \(N\) is the number of samples, while the time cost of our approach is \(O(N)\).
Section 2 describes the approach used in this work.
In Section 3, we benchmark our results against analytical results for drift bi-Maxwellian distributions, and apply our approach to drift ring beam and slowing down distributions. Finally, in Section 4, we summarize our findings.
## 2 Monte-Carlo Approach
The fusion reaction rate per unit volume and per unit time can be calculated as [1; 2]
\[R_{12}=\frac{n_{1}n_{2}}{1+\delta_{12}}\langle\sigma v\rangle, \tag{3}\]
where \(n_{1}\) and \(n_{2}\) are the number densities of the two reactants, respectively, and \(\delta_{12}\) is equal to 0 for different reactants and 1 for the same reactants.
Eq.(3) implies a physical meaning, namely, that the fusion reactivity \(\langle\sigma v\rangle\) represents the probability of a fusion reaction occurring. Thus, we select one particle from species 1 and one particle from species 2, and calculate \(\sigma(|\mathbf{v}_{1}-\mathbf{v}_{2}|)|\mathbf{v}_{1}-\mathbf{v}_{2}|\) for these two particles. We repeat this process \(N\) times, and as \(N\) approaches infinity, the average value of \(\sigma(|\mathbf{v}_{1}-\mathbf{v}_{2}|)|\mathbf{v}_{1}-\mathbf{v}_{2}|\) will be the integral value of Eq.(1). This yields a simple Monte-Carlo approach (Method 1) to compute Eq.(1):
* Step 1: Generate a random particle with velocity \(\mathbf{v}_{1}=(v_{1x},v_{1y},v_{1z})\) that satisfies the velocity distribution \(f_{1}(\mathbf{v}_{1})\), and a random particle with velocity \(\mathbf{v}_{2}=(v_{2x},v_{2y},v_{2z})\) that satisfies the velocity distribution \(f_{2}(\mathbf{v}_{2})\).
* Step 2: Calculate \(\sigma(|\mathbf{v}_{1}-\mathbf{v}_{2}|)|\mathbf{v}_{1}-\mathbf{v}_{2}|\) for these two particles.
* Step 3: Repeat Steps 1 and 2 for \(N\) times.
* Step 4: Obtain the average value of each \(\sigma(|\mathbf{v}_{1}-\mathbf{v}_{2}|)|\mathbf{v}_{1}-\mathbf{v}_{2}|\), which is the integral value of Eq.(1).
This approach has a time complexity of \(O(N)\).
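As an illustration, the following Python sketch implements Method 1 for two drift Maxwellian reactants; the cross-section function `sigma` is assumed to be supplied by the user (e.g., a tabulated or fitted D-T cross section), and the thermal velocities are the per-component standard deviations of the drift Maxwellians.

```python
import numpy as np

def reactivity_method1(sigma, m1, m2, vt1, vt2, vd1, vd2, N=100_000, rng=None):
    """Method 1: O(N) Monte-Carlo estimate of <sigma*v> for two drift Maxwellian
    reactants. sigma(E) returns the cross section at center-of-mass energy E
    (vectorized); vt1, vt2 are per-component thermal spreads, vd1, vd2 drift velocities."""
    rng = np.random.default_rng() if rng is None else rng
    # Step 1: draw one velocity sample per species, repeated N times (vectorized).
    v1 = np.asarray(vd1) + np.asarray(vt1) * rng.standard_normal((N, 3))
    v2 = np.asarray(vd2) + np.asarray(vt2) * rng.standard_normal((N, 3))
    # Step 2: relative speed and center-of-mass energy for each sampled pair, Eq. (2).
    v_rel = np.linalg.norm(v1 - v2, axis=1)
    m_r = m1 * m2 / (m1 + m2)
    E = 0.5 * m_r * v_rel**2
    # Steps 3-4: the sample mean of sigma(E)*|v1-v2| estimates Eq. (1).
    return np.mean(sigma(E) * v_rel)
```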
The approach (Method 2) used in some particle simulation codes[9; 10] to compute Eq.(1) can be simplified as follows:
* Step 1: Generate \(N_{1}\) particles randomly with velocities \(\mathbf{v}_{1}=(v_{1x},v_{1y},v_{1z})\) that satisfy the velocity distribution \(f_{1}(\mathbf{v}_{1})\), and \(N_{2}\) particles with velocities \(\mathbf{v}_{2}=(v_{2x},v_{2y},v_{2z})\) that satisfy the velocity distribution \(f_{2}(\mathbf{v}_{2})\).
* Step 2: Calculate \(\sigma(|\mathbf{v}_{1}-\mathbf{v}_{2}|)|\mathbf{v}_{1}-\mathbf{v}_{2}|\) for each pair of particles, resulting in a total of \(N_{1}\times N_{2}\) pairs.
* Step 3: Obtain the average value of each \(\sigma(|\mathbf{v}_{1}-\mathbf{v}_{2}|)|\mathbf{v}_{1}-\mathbf{v}_{2}|\), which is the integral value of Eq.(1).
Usually, \(N=N_{1}\simeq N_{2}\). This approach has a time cost of \(O(N_{1}N_{2})\simeq O(N^{2})\).
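For comparison, a sketch of Method 2 with pre-drawn samples is given below; it evaluates all \(N_{1}\times N_{2}\) pairs at once, which is simple but memory- and time-intensive.

```python
import numpy as np

def reactivity_method2(sigma, m_r, v1_samples, v2_samples):
    """Method 2: O(N1*N2) estimate from all pairs of pre-drawn velocity samples
    (arrays of shape (N1, 3) and (N2, 3)), as done in some particle simulation codes."""
    # Pairwise relative speeds between every particle of species 1 and species 2.
    v_rel = np.linalg.norm(v1_samples[:, None, :] - v2_samples[None, :, :], axis=-1)
    E = 0.5 * m_r * v_rel**2
    return np.mean(sigma(E) * v_rel)
```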
Both Method 1 and Method 2 require generating random numbers that satisfy the reactant velocity distributions. In cases where these numbers are not readily available, we can modify Method 1 to obtain Method 3, which uses weighted factors and the following equation
\[\langle\sigma v\rangle=\int\int d\mathbf{v}_{1}d\mathbf{v}_{2}\sigma(|\mathbf{v}_{1}-\mathbf{ v}_{2}|)|\mathbf{v}_{1}-\mathbf{v}_{2}|w(\mathbf{v}_{1},\mathbf{v}_{2})f_{1g}(\mathbf{v}_{1})f_{2g}( \mathbf{v}_{2}). \tag{4}\]
Here, the weight function is defined as
\[w(\mathbf{v}_{1},\mathbf{v}_{2})=\frac{f_{1}(\mathbf{v}_{1})f_{2}(\mathbf{v}_{2})}{f_{1g}(\mathbf{ v}_{1})f_{2g}(\mathbf{v}_{2})}.\]
We can compute Eq. (4) using Method 3, which involves the following steps:
* Step 1: Generate a random particle with velocity \(\mathbf{v}_{1}=(v_{1x},v_{1y},v_{1z})\) that satisfies the velocity distribution \(f_{1g}(\mathbf{v}_{1})\), and another random particle with velocity \(\mathbf{v}_{2}=(v_{2x},v_{2y},v_{2z})\) that satisfies the velocity distribution \(f_{2g}(\mathbf{v}_{2})\).
* Step 2: Calculate \(\sigma(|\mathbf{v}_{1}-\mathbf{v}_{2}|)|\mathbf{v}_{1}-\mathbf{v}_{2}|w(\mathbf{v}_{1},\mathbf{v}_{2})\) for these two particles.
* Step 3: Repeat Steps 1 and 2 for \(N\) times.
* Step 4: Obtain the average value of each \(\sigma(|\mathbf{v}_{1}-\mathbf{v}_{2}|)|\mathbf{v}_{1}-\mathbf{v}_{2}|w(\mathbf{v}_{1},\mathbf{v}_{2})\), which is the integral value of Eq. (1).
Method 3 is actually an importance sampling Monte Carlo approach[4]. A good choice of \(f_{1g}\) and \(f_{2g}\) can reduce the required number of samples \(N\). In this work, we use Gaussian distributions for \(f_{1g}\) and \(f_{2g}\). The time cost of Method 3 is also \(O(N)\).
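A sketch of Method 3 with Gaussian proposal distributions is shown below; the target densities `f1` and `f2` are user-supplied callables, and the proposal means and spreads are free choices.

```python
import numpy as np

def reactivity_method3(sigma, m_r, f1, f2, vd_g1, vt_g1, vd_g2, vt_g2, N=1_000_000, rng=None):
    """Method 3: O(N) importance-sampling estimate of Eq. (4) with drift Gaussian
    proposals f_1g, f_2g (per-component std vt_g, mean vd_g)."""
    rng = np.random.default_rng() if rng is None else rng
    v1 = np.asarray(vd_g1) + np.asarray(vt_g1) * rng.standard_normal((N, 3))
    v2 = np.asarray(vd_g2) + np.asarray(vt_g2) * rng.standard_normal((N, 3))

    def gaussian_pdf(v, vd, vt):
        # product of three independent 1D Gaussians, one per velocity component
        z = (v - np.asarray(vd)) / np.asarray(vt)
        return np.prod(np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * np.asarray(vt)), axis=1)

    # weight w = f1*f2 / (f_1g*f_2g) corrects for sampling from the proposals
    w = f1(v1) * f2(v2) / (gaussian_pdf(v1, vd_g1, vt_g1) * gaussian_pdf(v2, vd_g2, vt_g2))
    v_rel = np.linalg.norm(v1 - v2, axis=1)
    E = 0.5 * m_r * v_rel**2
    return np.mean(sigma(E) * v_rel * w)
```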
Figure 1 provides sample code programs to demonstrate the above three Monte Carlo methods used to calculate the 6D fusion reactivity integral for drift tri-Maxwellian velocity distributions given by
\[f_{j}(\mathbf{v}_{j})=\Big{(}\frac{1}{2\pi}\Big{)}^{3/2}\frac{1}{v_{txj}v_{tyj}v_{tzj}}\exp\Big{[}-\frac{(v_{xj}-v_{dxj})^{2}}{2v_{txj}^{2}}-\frac{(v_{yj}-v_{dyj})^{2}}{2v_{tyj}^{2}}-\frac{(v_{zj}-v_{dzj})^{2}}{2v_{tzj}^{2}}\Big{]}. \tag{5}\]
Here, \(v_{txj}\), \(v_{tyj}\), and \(v_{tzj}\) are the thermal velocities in each direction, and \(v_{dxj}\), \(v_{dyj}\), and \(v_{dzj}\) are the drift velocities in each direction, with \(j=1,2\). These three simple codes can quickly compute all the results in Nath et al. [7], with Method 1 being the most effective (see Sec. 3).
## 3 Benchmarks and Applications
To demonstrate the methods presented in Section 2, we compare the results with analytical solutions for drift bi-Maxwellian distributions[8]. Additionally, we compare the three methods for drift ring beam[11; 12] and slowing down[12] distributions and use the D-T fusion reaction cross-section data from Ref.[13].
### Drift bi-Maxwellian distribution
The distribution functions are given by
\[f_{j}(\mathbf{v}_{j})=\frac{1}{T_{\parallel j}^{1/2}T_{\perp j}}\Big{(}\frac{m_{j}}{2\pi k_{B}}\Big{)}^{3/2}\exp\Big{[}-\frac{m_{j}v_{\perp j}^{2}}{2k_{B}T_{\perp j}}-\frac{m_{j}(v_{\parallel j}-v_{dj})^{2}}{2k_{B}T_{\parallel j}}\Big{]}, \tag{6}\]
where \(j=1,2\), and \(k_{B}\) is the Boltzmann constant. Here, \(\int f_{j}(\mathbf{v}_{j})d\mathbf{v}_{j}=1\), \(v_{\perp j}^{2}=v_{xj}^{2}+v_{yj}^{2}\), and \(v_{\parallel j}=v_{zj}\). The drift tri-Maxwellian distribution Eq.(5) reduces to the drift bi-Maxwellian distribution Eq.(6) by taking \(v_{txj}=v_{tyj}=\sqrt{k_{B}T_{\perp j}/m_{j}}\), \(v_{tzj}=\sqrt{k_{B}T_{\parallel j}/m_{j}}\), and \(v_{dxj}=v_{dyj}=0\) in Eq.(5). With this drift bi-Maxwellian distribution, the 6D integral Eq.(1) reduces to a 1D integral[8], which is a function of only \(T_{r}\), \(R_{t}\), and \(E_{d}\), where
\[T_{r}=\frac{(2T_{\perp r}+T_{\parallel r})}{3},\ R_{t}=\frac{T_{\perp r}}{T_{ \parallel r}},\ E_{d}=k_{B}T_{d}=\frac{m_{r}v_{d}^{2}}{2},\]
where \(v_{d}=v_{dz2}-v_{dz1}\). Additionally, we have
\[T_{\parallel r}=\frac{m_{1}T_{\parallel 2}+m_{2}T_{\parallel 1}}{m_{1}+m_{2}},\ \ T_{ \perp r}=\frac{m_{1}T_{\perp 2}+m_{2}T_{\perp 1}}{m_{1}+m_{2}}.\]
Figure 2 shows the benchmark results of the 6D Monte-Carlo approach against the analytical 1D integral[8] for drift bi-Maxwellian distributions, which exhibit good agreement for \(R_{t}=2\), \(E_{d}=20\)keV and \(N_{1}=10^{4}\). To estimate the error of each method, the calculations are repeated three times for each case, and the total computation time of each method is recorded. We observe that the total computational cost of the 6D Monte-Carlo results in Fig.2 using Method 1, for 20 values of \(T_{r}\) with \(N=10^{4}\) repeated 3 times, is 0.11 seconds, with an error of less than 1%. To achieve a similar level of accuracy, Methods 2 and 3 require around 50 times more computation time.
Figure 3 compares the computation time and error for different values of \(N\), using the 6D Monte-Carlo approach Method 1 for drift bi-Maxwellian distributions. We find that \(N=10^{6}\) is sufficient for these parameters (\(R_{t}=0.5\), \(E_{d}=20\)keV). The computation time does not scale exactly as \(O(N)\) because, for large \(N\), the vectorized implementation saves some computational cost.
Figure 1: Sample code demonstrating three Monte-Carlo methods for computing the 6D fusion reactivity integral for drift tri-Maxwellian velocity distributions.
Figure 3: Comparison of computation time and error using 6D Monte-Carlo approach Method 1 for drift bi-Maxwellian distributions with different values of \(N\).
Figure 2: Comparison between the results obtained using the 6D Monte-Carlo approach and the analytical 1D integral method[8] for drift bi-Maxwellian distributions.
Figure 4: Comparison of three Monte-Carlo methods for drift ring beam distributions, where \(v_{dj}=[v_{dxj},v_{dyj},v_{dzj},v_{drj}]\).
Figure 5: Comparison of the three Monte-Carlo methods for slowing down distributions, where \(v_{t}=\sqrt{2k_{B}T_{r}/m_{r}}\).
### Drift ring beam distribution
The drift ring beam distribution, which includes both parallel and perpendicular drifts as well as temperature anisotropy, is given by[11]
\[f_{j}(\mathbf{v}_{j})=f_{zj}\cdot f_{\perp j}=\frac{1}{\sqrt{\pi}v_{tzj}}\exp\Big{[}-\frac{(v_{zj}-v_{dzj})^{2}}{v_{tzj}^{2}}\Big{]}\cdot\frac{1}{\pi A_{j}v_{t\perp j}^{2}}\exp\Big{[}-\frac{\Big{(}\sqrt{(v_{xj}-v_{dxj})^{2}+(v_{yj}-v_{dyj})^{2}}-v_{drj}\Big{)}^{2}}{v_{t\perp j}^{2}}\Big{]}, \tag{7}\]
where \(A_{j}=\exp(-\frac{v_{drj}^{2}}{v_{t\perp j}^{2}})+\sqrt{\pi}(\frac{v_{drj}}{v_{t\perp j}})\text{erfc}(-\frac{v_{drj}}{v_{t\perp j}})\), and \(\int f_{j}(\mathbf{v}_{j})d\mathbf{v}_{j}=1\), for \(j=1,2\). The error function \(\text{erfc}(-x)=1+\text{erf}(x)\), and \(\text{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}dt\). The 1D analytical form of Eq.(1) for this distribution is not yet available. Note also that there exists a \(\sqrt{2}\) difference between the definition of the thermal velocity \(v_{t}\) here and in Eq.(5).
Appendix A provides instructions on how to generate random numbers with this distribution. Figure 4 compares the drift ring beam using the three methods in Section 2. Once again, we observe that Method 1 is the most efficient among them.
### Slowing down distribution
The isotropic slowing down distribution is given by [12]
\[f_{j}(\mathbf{v}_{j})=\frac{3}{4\pi\ln[1+v_{bj}^{3}/v_{cj}^{3}]}\frac{H(v_{bj}-v)}{ v^{3}+v_{cj}^{3}}, \tag{8}\]
where \(\int f_{j}(\mathbf{v}_{j})d\mathbf{v}_{j}=1\) for \(j=1,2\), and \(H(x)\) is the Heaviside function, defined as \(H(x<0)=0\), \(H(x>0)=1\), and \(H(0)=1/2\). The 1D analytical form of Eq.(1) for this distribution is not yet available.
Instructions for generating random numbers with this distribution are provided in Appendix A. Figure 5 compares the slowing down distribution using the three methods described in Sec. 2. Once again, we see that Method 1 is the most effective. For Method 3, we also compared two types of random numbers: Gaussian \(f_{1g,2g}\), and uniform \(f_{1g,2g}\) in \(v_{xj},v_{yj},v_{zj}\in[-v_{bj},v_{bj}]\) for \(j=1,2\). Both choices yielded similar results, indicating the robustness of this approach.
## 4 Summary and Discussion
We have developed a simple Monte-Carlo approach to compute the 6D fusion reactivity integral Eq.(1) for arbitrary ion velocity distributions. We compared three types of this approach for several typical distributions, such as drift bi-Maxwellian, drift ring beam, and slowing down distributions. Our results show that this approach is both robust and effective.
The second method is similar to that used in particle simulation codes, with a time cost of \(O(N^{2})\). The first method is found to be the most effective one among them, with a time cost of \(O(N)\). However, it still requires a routine to generate the corresponding random numbers of the given distributions, as in the second method. The third method uses a weight function to remove the requirement of generating corresponding random numbers, with a time cost of \(O(N)\). For these three methods, the typical requirement for \(N_{1,2,3}\) is \(N_{1}\simeq 10^{4}-10^{5}\), \(N_{2}\simeq 5\sqrt{N_{1}}\simeq 10^{3}\), and \(N_{3}\simeq 50N_{1}\simeq 10^{6}-10^{7}\).
Overall, our Monte-Carlo approach provides a practical and efficient tool for computing the fusion reactivity integral. Future work may involve further optimization of the algorithms and exploring new applications of this approach in related fields.
_Acknowledgments_ Discussions with Dong WU, Munzhi TAN, Ke LI and Feng WANG are acknowledged.
## Appendix A Random numbers for drift ring beam and slowing down distributions
To generate a velocity \(v\) with distribution \(f(v)\) from a uniform \(u\in[0,1)\) random number using a monotonic function transformation \(v=v(u)\), we use the relation
\[v(u+\Delta u)=v+\Delta v,\Delta u=f(v)\Delta v, \tag{9}\]
which can be written as
\[f(v)dv=du. \tag{10}\]
Solving for \(u\) gives
\[u=u(v)=\int f(v^{\prime})dv^{\prime}. \tag{11}\]
We can then calculate the transformation \(v=v(u)\) from the inverse function of \(u=u(v)\).
To model the distributions of drift ring beams, we use the product of two distributions: \(f(\mathbf{v})=f_{z}(v_{z})\cdot f_{\perp}(v_{x},v_{y})\), where \(f_{z}(v_{z})\) can be generated using a standard Gaussian random number function. The distribution \(f_{\perp}(v_{x},v_{y})\) is given by
\[f_{\perp}=\frac{1}{\pi Av_{r_{\perp}}^{2}}\exp\Big{[}-\frac{(\sqrt{(v_{x}-v_{dx })^{2}+(v_{y}-v_{dy})^{2}}-v_{dr})^{2}}{v_{r_{\perp}}^{2}}\Big{]},\]
where \(v_{\perp}=\sqrt{(v_{x}-v_{dx})^{2}+(v_{y}-v_{dy})^{2}}\in[0,\infty)\) and \(\phi\) is the angle between the \(x\)-axis and the velocity vector \(\mathbf{v}_{\perp}\) in the \(xy\)-plane. The quantity \(A\) is defined as
\(A=\exp(-v_{dr}^{2}/v_{r\perp}^{2})+\sqrt{\pi}(v_{dr}/v_{r\perp})\text{erfc}(-v_{dr}/ v_{r\perp})\), and \(\int f_{j}(\mathbf{v}_{j})d\mathbf{v}_{j}=1\). In the \((v_{\perp},\phi)\) space, we have \(f(v_{\perp},\phi)=f(v_{\perp})f(\phi)\), where
\[f(v_{\perp})=\frac{2v_{\perp}}{Av_{r\perp}^{2}}\exp\Big{[}-\frac {(v_{\perp}-v_{dr})^{2}}{v_{r\perp}^{2}}\Big{]},\ 0\leq v_{\perp}<\infty\] \[f(\phi)=\frac{1}{2\pi},\ 0\leq\phi<2\pi. \tag{11}\]
The coefficients are normalized such that \(\int\limits_{0}^{\infty}f(v_{\perp})dv_{\perp}=1\) and \(\int_{0}^{2\pi}f(\phi)d\phi=1\). To generate \(\phi\), we use a uniform random number \(u\in[0,1)\) and set \(\phi=2\pi u\).
The relationship between \(v_{\perp}\) and the uniform random number \(u\) is given by the following equation
\[u=\int f(v_{\perp})dv_{\perp}=\frac{1}{A}\Big{\{}\sqrt{\pi}\frac{v_{dr}}{v_{r\perp}}\Big{[}\text{erf}\Big{(}\frac{v_{dr}}{v_{r\perp}}\Big{)}-\text{erf}\Big{(}\frac{v_{dr}-v_{\perp}}{v_{r\perp}}\Big{)}\Big{]}+\exp\Big{(}-\frac{v_{dr}^{2}}{v_{r\perp}^{2}}\Big{)}-\exp\Big{(}-\frac{(v_{\perp}-v_{dr})^{2}}{v_{r\perp}^{2}}\Big{)}\Big{\}}, \tag{12}\]
which satisfies the requirements \(u(0)=0\) and \(u(\infty)=1\). In the case of a usual Maxwellian/Gaussian distribution with \(v_{dr}=0\) and \(A=1\), we have
\[u=-\exp(-v_{\perp}^{2}/v_{r\perp}^{2})+1,\]
so that
\[v_{\perp}=v_{r\perp}\sqrt{-\ln(1-u)},\]
which is one of the standard ways to generate a Gaussian random distribution. When \(v_{dr}\neq 0\), we can obtain the inverse function \(v_{\perp}(u)=u^{-1}(v_{\perp})\) numerically using 1D interpolation, since \(u(v_{\perp})\) is known and monotonically increasing. Then, we can obtain the velocity components \((v_{x},v_{y})\) using the following equations:
\[v_{x}=v_{\perp}\cos\phi+v_{dx},\ v_{y}=v_{\perp}\sin\phi+v_{dy}.\]
Note that \(\phi\) and \(v_{\perp}\) should use independent random numbers \(u\).
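As a sketch, the numerical inversion of Eq. (12) can be implemented with a grid and 1D interpolation as follows; the grid range and size are ad hoc choices made for illustration.

```python
import numpy as np
from scipy.special import erf, erfc

def sample_ring_beam_perp(v_dr, v_rperp, v_dx, v_dy, N, rng=None):
    """Draw (vx, vy) from the perpendicular ring-beam distribution by inverting
    u(v_perp) of Eq. (12) numerically via 1D interpolation."""
    rng = np.random.default_rng() if rng is None else rng
    A = np.exp(-v_dr**2 / v_rperp**2) + np.sqrt(np.pi) * (v_dr / v_rperp) * erfc(-v_dr / v_rperp)
    vp = np.linspace(0.0, v_dr + 8.0 * v_rperp, 4000)       # grid covering the bulk of f(v_perp)
    u_grid = (np.sqrt(np.pi) * v_dr / v_rperp
              * (erf(v_dr / v_rperp) - erf((v_dr - vp) / v_rperp))
              + np.exp(-v_dr**2 / v_rperp**2)
              - np.exp(-(vp - v_dr)**2 / v_rperp**2)) / A   # Eq. (12), monotonic in v_perp
    u = rng.random(N)
    v_perp = np.interp(u, u_grid, vp)                       # inverse transform by interpolation
    phi = 2.0 * np.pi * rng.random(N)                       # independent uniform angle
    return v_perp * np.cos(phi) + v_dx, v_perp * np.sin(phi) + v_dy
```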
Similarly, for the slowing-down distribution in \((v,\phi,\theta)\) space, we have
\[f(\mathbf{v})=f(v)f(\theta)f(\phi),\ \ f(\theta)=\frac{1}{\pi},\ \ f(\phi)=\frac{1}{2\pi},\] \[f(v)=\frac{3v^{2}}{\ln[1+v_{b}^{3}/v_{c}^{3}]}\frac{H(v_{b}-v)}{v^{3}+v_{c}^{3}}, \tag{13}\]
which means \(0\leq\theta<\pi\) and \(0\leq\phi<2\pi\) are uniformly distributed. We have
\[u=\int f(v)dv=\ln[1+v^{3}/v_{c}^{3}]/\ln[1+v_{b}^{3}/v_{c}^{3}],\]
with \(v\in[0,v_{b}),\ u\in[0,1)\), i.e.,
\[v=v_{c}\Big{[}\exp\Big{[}u\ln(1+v_{b}^{3}/v_{c}^{3})\Big{]}-1 \Big{]}^{1/3}.\]
After generating random numbers of \((v,\theta,\phi)\), we can obtain \((v_{x},v_{y},v_{z})\) via
\[v_{x}=v\sin\theta\cos\phi,\ \ v_{y}=v\sin\theta\sin\phi,\ \ v_{z}=v\cos\theta.\]
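A compact sketch of the whole slowing-down sampling procedure (analytic inversion for \(v\), uniform \(\theta\) and \(\phi\) as stated above, then conversion to Cartesian components) is:

```python
import numpy as np

def sample_slowing_down(v_b, v_c, N, rng=None):
    """Draw Cartesian velocity samples from the isotropic slowing-down distribution."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(N)
    v = v_c * (np.exp(u * np.log(1.0 + v_b**3 / v_c**3)) - 1.0) ** (1.0 / 3.0)
    theta = np.pi * rng.random(N)        # uniform in [0, pi), following the text
    phi = 2.0 * np.pi * rng.random(N)    # uniform in [0, 2*pi)
    vx = v * np.sin(theta) * np.cos(phi)
    vy = v * np.sin(theta) * np.sin(phi)
    vz = v * np.cos(theta)
    return vx, vy, vz
```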
For arbitrary distributions, generating random numbers is not always straightforward. However, there are numerical libraries available, such as UNURAN [14].
|
2310.06671 | Making Large Language Models Perform Better in Knowledge Graph
Completion | Large language model (LLM) based knowledge graph completion (KGC) aims to
predict the missing triples in the KGs with LLMs. However, research about
LLM-based KGC fails to sufficiently harness LLMs' inference proficiencies,
overlooking critical structural information integral to KGs. In this paper, we
explore methods to incorporate structural information into the LLMs, with the
overarching goal of facilitating structure-aware reasoning. We first discuss on
the existing LLM paradigms like in-context learning and instruction tuning,
proposing basic structural information injection approaches. Then we propose a
Knowledge Prefix Adapter (KoPA) to fulfill this stated goal. The KoPA uses a
structural pre-training phase to comprehend the intricate entities and
relations within KGs, representing them as structural embeddings. Then KoPA
communicates such cross-modal structural information understanding to the LLMs
through a knowledge prefix adapter which projects the structural embeddings
into the textual space and obtains virtual knowledge tokens positioned as a
prefix of the input prompt. We conduct comprehensive experiments and provide
incisive analysis concerning how the introduction of cross-modal structural
information would be better for LLM's factual knowledge reasoning ability. Our
code and data are available at https://github.com/zjukg/KoPA . | Yichi Zhang, Zhuo Chen, Lingbing Guo, Yajing Xu, Wen Zhang, Huajun Chen | 2023-10-10T14:47:09Z | http://arxiv.org/abs/2310.06671v2 | # Making Large Language Models Perform Better in Knowledge Graph Completion
###### Abstract.
Large language model (LLM) based knowledge graph completion (KGC) aims to predict the missing triples in the KGs with LLMs and enrich the KGs to become better web infrastructure, which can benefit a lot of web-based automatic services. However, research about LLM-based KGC is limited and lacks effective utilization of LLM's inference capabilities, which ignores the important structural information in KGs and prevents LLMs from acquiring accurate factual knowledge. In this paper, we discuss how to incorporate the helpful KG structural information into the LLMs, aiming to achieve structural-aware reasoning in the LLMs. We first transfer the existing LLM paradigms to structural-aware settings and further propose a **knowledge prefix** adapter (KoPA) to fulfill this stated goal. KoPA employs structural embedding pre-training to capture the structural information of entities and relations in the KG. Then KoPA informs the LLMs of the knowledge prefix adapter which projects the structural embeddings into the textual space and obtains virtual knowledge tokens as a prefix of the input prompt. We conduct comprehensive experiments on these structural-aware LLM-based KGC methods and provide an in-depth analysis comparing how the introduction of structural information would be better for LLM's knowledge reasoning ability. Our code is released at [https://github.com/zjukg/KoPA](https://github.com/zjukg/KoPA).
Knowledge Graphs, Knowledge Graph Completion, Triple Classification, Large Language Models, Instruction Tuning
Current LLM-based KGC methods like KGLLaMA (Kolmogorov, 1997) construct simplistic prompts and apply vanilla instruction tuning (IT) to fine-tune the LLMs. However, LLMs have inadequate memory for precise and nuanced factual knowledge, which often leads to hallucination (Kolmogorov, 1997). Besides, KGs possess intricate structural information such as subgraph structure, relational patterns, and relative entities/relations. Incorporating this structural information is highly advantageous for LLMs to develop a comprehensive understanding of the KG. However, this is neglected by vanilla IT because each input prompt only includes a single input triple, which cannot build awareness of the KG structure in the LLM and leads to a waste of structural information.
To address these issues, we take a further step in LLM-based KGC, aiming to explore how to incorporate KG structural information into LLMs and enable structural-aware reasoning. We begin by discussing how to transfer existing LLM paradigms such as in-context learning (ICL) (Kolmogorov, 1997) and instruction tuning (IT) (Kolmogorov, 1997) to a structural-aware setting. We propose a structural-aware ICL method and a structural-aware IT method as base models, focusing on integrating KG structural information into the LLM through text representations. Additionally, we propose a knowledge prefix adapter (KoPA) approach to make LLMs better structural-aware knowledge discriminators. KoPA leverages self-supervised structural embedding pre-training to capture the structural information in the KG. Then KoPA transforms the structural embeddings into the textual embedding space by a knowledge prefix adapter and obtains several virtual knowledge tokens. This mapping of structural embeddings to textual form provides auxiliary information for the input triples. The virtual knowledge tokens serve as a prefix in the input prompt sequence, guiding the instruction-tuning process. Besides, we conduct comprehensive analysis and experiments to demonstrate the promising performance and transferability of KoPA. In summary, our contribution is threefold:
* We are the first work to comprehensively explore the utilization of LLMs for KGC, specifically by incorporating KG structural information to enhance LLM inference. This involves transferring the existing LLM paradigms like ICL and IT to a structural-aware setting for the KGC task.
* We propose a knowledge prefix adapter (KoPA) which effectively integrates pre-trained KG structural embeddings with LLMs. KoPA enables full interaction of textual embeddings from LLM and structural embeddings from KG. The fine-tuned LLMs with KoPA are capable of making decisions that exhibit structural awareness for KGC.
* We conduct extensive experiments on three public benchmarks and evaluate the KGC performance of all the structural-aware methods proposed by us with adequate baseline comparison. We compare the effectiveness of different methods of introducing structural information into LLMs.
## 2. Related Works
### Knowledge Graph Completion
Knowledge graph completion (KGC) (Kolmogorov, 1997) is an important research area in the KG community, which aims to mine missing triples in a given incomplete KG. KGC contains several sub-tasks such as triple classification (Kolmogorov, 1997), entity prediction (Kolmogorov, 1997), and relation prediction (Kolmogorov, 1997). The common point among KGC tasks is to establish an effective mechanism to measure the plausibility of the triples. The mainstream KGC methods can be divided into two categories: embedding-based methods and PLM-based methods.
Embedding-based methods (Kolmogorov, 1997; Kolmogorov, 1997; Kolmogorov, 1997) are designed to embed the entities and relations of KGs into one or several continuous representation space. These approaches make full use of structural information from the KGs to model triple plausibility with a well-designed score function and learn the entity/relation embeddings in a self-supervised manner, where negative sampling (Kolmogorov, 1997) is applied. Besides, depending on the design of the scoring function, embedding-based methods can be divided into three sub-categories: (1) translation-based methods like TransE (Kolmogorov, 1997) and RotatE (Kolmogorov, 1997), (2) tensor decomposition methods like DistMult (Stoek et al., 2010) and ComplEx (Romick et al., 2010). (3) neural network-based methods like ConvE (Song et al., 2015). Embedding-based KGC methods learn the structural embeddings for triple discrimination but neglect the textual information in the KG.
Moreover, PLM-based methods consider KGC as text-based tasks by fine-tuning pre-trained language models like BERT (Kolmogorov, 1997). The short textual descriptions of entities and relations are organized as an input sequence and encoded by the PLMs. KG-BERT (Kolmogorov, 1997) is the first PLM-based method that models KGC as a binary text classification task. Subsequent works like MTL-KGC (Kolmogorov, 1997) and StAR (Star, 2007) have further improved KG-BERT by introducing more training tasks such as relation classification and triple ranking and more complex triple encoding strategy. PKGC (Star, 2007) utilizes manual prompt templates to capture the triple semantic. Other methods like KGT5 (Kolmogorov, 1997) and KG-S2S (Kolmogorov, 1997) make a step on the generative KGC (Kolmogorov, 1997) in a sequence-to-sequence paradigm with encoder-decoder PLMs like T5 (Kolmogorov, 1997). PLM-based methods leverage the power of PLM but make the training process into text-based learning, which is difficult to capture complex structure information in the KGs.
Figure 1. A simple case of LLM-based KGC. Useful structural information that describes the surrounding information about the entities can serve as auxiliary prompts and guide the LLM to make correct decisions.
### LLMs for KG research
In recent years, large language models (LLMs) (Kang et al., 2017; Wang et al., 2018; Wang et al., 2019) have made rapid progress and demonstrated powerful capabilities in a considerable number of text-related tasks (Wang et al., 2019). LLMs are usually pre-trained in an auto-regressive manner with next word prediction task (Wang et al., 2018) and demonstrate strong capability on text comprehension and generation. Some significant techniques such as instruction tuning (IT) (Kang et al., 2017) and human preference alignment (Wang et al., 2019) are further applied to guide the model to follow human instructions and generate responses that are consistent with human values and preferences.
Among the research topics of LLMs, integrating LLMs and KGs (Kang et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) is a popular and important one. On the one hand, hallucination (Wang et al., 2019; Wang et al., 2019) is widespread in LLMs, which means LLMs lack factual knowledge and are not interpretable. KGs, which store structured knowledge, can mitigate this phenomenon (Chen et al., 2016; Chen et al., 2016; Chen et al., 2017; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) by introducing factual knowledge into LLMs. On the other hand, LLMs can benefit KG-related tasks such as KGC (Wang et al., 2019; Wang et al., 2019), entity alignment (Wang et al., 2019), KGQA (Chen et al., 2016) and others (Chen et al., 2016; Chen et al., 2017; Wang et al., 2019; Wang et al., 2019) with their powerful generation capability. KGs for LLMs (KG4LLM) and LLMs for KGs (LLM4KG) are both important.
We focus on applying LLMs to the KGC task (LLM4KGC), which has not been carefully studied yet. KGLLaMA (Wang et al., 2019) made the first step with a vanilla instruction tuning approach, but it lacks in-depth research and systematic exploration of how to unleash the power of LLMs and of the KGs themselves to make structural-aware reasoning and achieve better KGC performance. In this paper, we dive into this problem from a more systematic perspective with the triple classification task.
### Incorporate Non-textual Modality Information into LLMs
As LLMs demonstrate generalizable capabilities on text generation, many other works attempt to incorporate non-textual modality such as images (Wang et al., 2019; Wang et al., 2019), audio (Xu et al., 2019), and video (Xu et al., 2019), which are also called multi-modal LLMs (Wang et al., 2019). These methods tend to encode non-textual information through the modality encoders and then process it as virtual text tokens. The non-textual tokens are aligned with the word tokens by instruction tuning on multi-modal datasets.
The multi-modal LLMs mentioned above usually exclude graphs, which are another important data modality. There are also some works discussing how to incorporate graph data into LLMs. Drug-Chat (Dong et al., 2019) proposes to encode drug molecule graphs with graph encoders and fine-tune the LLM to predict drug interactions. Other works (Kang et al., 2017; Wang et al., 2019; Wang et al., 2019) explore how to solve graph learning tasks like node classification and graph classification by converting the graph structure information into a form the LLM can consume.
Our research is relative to this topic as KGs also have complex graph structures on top of the text descriptions. In this paper, we will explore how to incorporate complex structural information in the KGs into the LLMs to achieve better reasoning capabilities on knowledge graph completion.
## 3. Basic Settings for LLM-based KGC
### Notations and Preliminaries
A KG can be denoted as \(\mathcal{G}=(\mathcal{E},\mathcal{R},\mathcal{T},\mathcal{D})\) where \(\mathcal{E},\mathcal{R}\) are the entity set, relation set respectively. \(\mathcal{T}=\{(h,r,t)\mid h,t\in\mathcal{E},r\in\mathcal{R}\}\) is the triple set and \(\mathcal{D}\) is the description set of each entity and relation. We denote \(\mathcal{D}(e),\mathcal{D}(r)\) as the short textual description of each entity \(e\in\mathcal{E}\) and each relation \(r\in\mathcal{R}\). For example, the text description of the entity '/m/0ctzf1' is \(\mathcal{D}\)('/m/0ctzf1') = 'The Transformers'.
When applying LLMs to KGC tasks, we denote a LLM as \(\mathcal{M}\) that serves as a text decoder. The input textual sequence \(\mathcal{S}\) of the model \(\mathcal{M}\) consists of several parts: the instruction prompt \(\mathcal{I}\), the triple prompt \(\mathcal{X}\), and the optional auxiliary demonstration prompt \(\mathcal{U}\). The instruction prompt \(\mathcal{I}\) is the manually prepared instruction to guide the LLM \(\mathcal{M}\) to execute the KGC task. The triple prompt \(\mathcal{X}\) contains the textual information about the triples that need to be processed, which can be denoted as:
\[\mathcal{X}(h,r,t)=\mathcal{D}(h)\oplus\mathcal{D}(r)\oplus\mathcal{D}(t) \tag{1}\]
where \((h,r,t)\in\mathcal{T}\) is a triple and \(\oplus\) denotes the textual token concatenation operation. In other words, the short descriptions of \(h,r,t\) would be applied as the input information. The auxiliary demonstration prompt \(\mathcal{U}\) is an optional prompt for different settings. In the following, we will follow this set of notations.
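For illustration, the assembly of such an input sequence could look like the following sketch; the instruction wording and data structures are placeholders, not the exact prompts used later.

```python
def triple_prompt(h, r, t, desc):
    # X(h, r, t): concatenation of the short textual descriptions D(h), D(r), D(t).
    return f"{desc[h]} {desc[r]} {desc[t]}"

def input_sequence(instruction, x, demonstration=None):
    # S = I (+ U) + X: instruction, optional demonstration prompt, then the triple prompt.
    parts = [instruction] + ([demonstration] if demonstration else []) + [x]
    return "\n".join(parts)
```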
Meanwhile, we use triple classification as an entry point to investigate how to utilize LLMs to accomplish the KGC task. Triple classification is a basic KGC task aiming to conduct binary classification on the given triples. In the LLM paradigm, however, all tasks are converted into the form of text generation. Therefore, we expect the model \(\mathcal{M}\) to answer true or false given the textual sequence input \(\mathcal{S}=\mathcal{I}\oplus\mathcal{U}\oplus\mathcal{X}\).
Starting from the above definition, this task can be modeled as a text classification task. However, triple classification differs from vanilla text classification because the entities and the relation in the triple prompt carry complex semantic information defined by the given KG. Without knowledge of this information, the model's answers are unreliable and unstable. Despite the vast amount of commonsense knowledge stored in LLMs (Wang et al., 2019), research has shown that large models are insensitive to fine-grained factual knowledge and prone to hallucination. Thus, incorporating KG information into the prompt to provide auxiliary information and guide the LLM towards structural-aware reasoning is the key to achieving excellent LLM-based KGC.
In the next few sections, we discuss how to incorporate KG information into the text-based prompt. We first discuss this problem for the two common LLM paradigms, training-free reasoning and instruction tuning. Though these are existing paradigms, the specificity of the KGC task mentioned above makes it necessary to reconsider how to incorporate KG information into them.
### Training-free Reasoning Approaches
Training-free reasoning is an efficient approach to employing a LLM to solve downstream tasks without extra training. We need to prepare a suitable prompt template to acquire the results generated by the model \(\mathcal{M}\). Mainstream training-free approaches consist of zero-shot reasoning and in-context learning (Kang et al., 2017). Existing methods like (Wang et al., 2019) have tried zero-shot reasoning to evaluate the link prediction ability of LLM. Besides, there are no ICL-based KGC methods yet. We will discuss each of them below more systematically and incorporate structural information into LLMs.
#### 3.2.1. Zero-shot Reasoning Approach
Zero-shot reasoning (ZSR) is a direct approach for LLMs to do the reasoning task without auxiliary information \(\mathcal{U}\). Thus, the input sequence of ZSR can be denoted as \(\mathcal{S}_{\textit{ZSR}}=\mathcal{I}_{\textit{ZSR}}\oplus\mathcal{X}\). The decoding process of the LLM \(\mathcal{M}\) can be formulated as:
\[\mathcal{A}_{\textit{ZSR}}=\arg\max_{\mathcal{A}}P_{\mathcal{M}}(\mathcal{A}| \mathcal{S}_{\textit{ZSR}})=\arg\max_{\mathcal{A}}P_{\mathcal{M}}(\mathcal{A}| \mathcal{I}_{\textit{ZSR}},\mathcal{X}) \tag{2}\]
where \(\mathcal{A}\) is the generated answer of the model \(\mathcal{M}\) and \(\mathcal{I}_{\textit{ZSR}}\) is the instruction template for ZSR. In the setting of ZSR, no KG information is added to the input sequence \(\mathcal{S}_{\textit{ZSR}}\).
The determinative information in the ZSR prompt is only the textual descriptions of the test triple. ZSR is unable to incorporate KG information due to its setting limitations, otherwise, it cannot be called zero-shot.
#### 3.2.2. In-context Learning Approach with Structural-aware Demonstration
As another training-free paradigm, in-context learning (ICL) (Hendry et al., 2017) allows the model \(\mathcal{M}\) to add auxiliary demonstration \(\mathcal{U}\) to the input \(\mathcal{S}\) and accomplish the task in the form of analogical reasoning, which can be denoted as:
\[\mathcal{A}_{\textit{icl}}=\arg\max_{\mathcal{A}}P_{\mathcal{M}}(\mathcal{A}| \mathcal{S}_{\textit{icl}})=\arg\max_{\mathcal{A}}P_{\mathcal{M}}(\mathcal{A}| \mathcal{I}_{\textit{icl}},\mathcal{U},\mathcal{X}) \tag{3}\]
As for the triple classification task, the demonstration \(\mathcal{U}\) should be some triples and their labels in the form of \(\{(\mathcal{X}_{i},y_{i}),1\leq i\leq k\}\), where \(\mathcal{X}_{i}\) is the demonstration triple and \(y_{i}\) is the label. We denote ICL with \(k\) demonstrations as \(k\)-shot ICL.
The demonstration triples can be randomly sampled from the existing training KG. However, to further incorporate the KG information relevant to the test triple \((h,r,t)\), we propose to sample triples that are in the local structure of \(h\) and \(t\), which means one of the entities in each sampled triple should be \(h\) or \(t\). Besides, as the existing KG only consists of positive triples, we employ negative sampling (Zhu et al., 2017) to sample negative triples for demonstration. The numbers of positive and negative triples are the same for balanced predictions. In the demonstration prompt, the positive triples are labeled as true and the negative triples are labeled as false.
By doing this, we incorporate the local structural information into the demonstration prompt \(\mathcal{U}\) with both positive and negative samples. Such a structural-aware demonstration could better enhance the analogical reasoning process of the model \(\mathcal{M}\).
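A sketch of this structural-aware demonstration sampling is given below; the data structures and the tail-corruption strategy for the negatives are illustrative assumptions.

```python
import random

def sample_demonstrations(test_triple, train_triples, entities, k=4, rng=random):
    """Sample k/2 positive triples from the local structure of the test head/tail
    and k/2 negatives obtained by corrupting their tails."""
    h, _, t = test_triple
    train_set = set(train_triples)
    neighbors = [tr for tr in train_triples if h in (tr[0], tr[2]) or t in (tr[0], tr[2])]
    positives = rng.sample(neighbors, min(k // 2, len(neighbors)))
    negatives = []
    for (ph, pr, pt) in positives:
        cand = rng.choice(entities)
        while (ph, pr, cand) in train_set:   # avoid sampling an existing (positive) triple
            cand = rng.choice(entities)
        negatives.append((ph, pr, cand))
    return [(tr, "true") for tr in positives] + [(tr, "false") for tr in negatives]
```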
### Instruction Tuning Approach
Instruction tuning (IT) aims to fine-tune the LLM to follow human instructions and accomplish the mentioned tasks in the instruction prompt. In this section, we will talk about how to incorporate the KG information into IT approaches.
#### 3.3.1. Vanilla Instruction Tuning
In the setting of vanilla IT, the instruction prompt \(\mathcal{I}_{\textit{it}}\) describes the details of completing the triple classification task and the triple prompt \(\mathcal{X}\) consists of the input triple. No other auxiliary demonstrations are included in the input template. To train the model \(\mathcal{M}\), the input sequence is organized as \(\mathcal{S}_{\textit{it}}=\mathcal{I}_{\textit{it}}\oplus\mathcal{X}\oplus\mathcal{A}_{\textit{it}}\), where \(\mathcal{A}_{\textit{it}}\) is the answer of the training data. The model \(\mathcal{M}\) is fine-tuned with the next word prediction task (Zhu et al., 2017), which is a universal approach to training LLMs. The training objective can be formulated as:
\[\mathcal{L}_{\textit{it}}=-\frac{1}{|\mathcal{S}_{\textit{it}}|}\sum_{i=1}^{| \mathcal{S}_{\textit{it}}|}\log P_{\mathcal{M}}(s_{i}|s_{<i}) \tag{4}\]
where \(s_{i}(i=1,2,\ldots,|\mathcal{S}_{\textit{it}}|)\) represents the textual tokens of the input sequence \(\mathcal{S}_{\textit{it}}\). In the inference stage, the model \(\mathcal{M}\) is employed to predict the answer \(\mathcal{A}_{\textit{it}}\) of the test data as in Equation 2. Besides, negative sampling (Zhu et al., 2017) is also applied, as the training KG only consists of positive triples.
Vanilla IT only fine-tunes the LLM to learn the knowledge in the single triple to discriminate. Such an approach makes it difficult to fully utilize the rich semantics present in a KG and the model performance is limited.
Figure 2. An overview of our knowledge prefix adapter (KoPA). KoPA is a two-stage LLM-based KGC framework. KoPA first pre-trains structural embeddings for the entities and relations in the given KG. Then KoPA employs instruction tuning to fine-tune the LLM. The structural embeddings of the given input triple are projected into the textual token space of the LLM by the adapter and serve as a prefix at the front of the input prompt sequence; these are also called virtual knowledge tokens. With the unidirectional attention mechanism of the decoder-only LLM, these virtual knowledge tokens are visible to the following textual tokens, which allows the LLM to decode the answer to the instruction in a structure-aware state.
#### 3.3.2. Structural-aware Instruction Tuning
As mentioned before, the structural information of KG plays a significant role in the KGC tasks (Kang et al., 2018). To incorporate such KG information during the fine-tuning stage, we achieve this goal by adding the neighborhood descriptions of the input triple. Specifically, we can sample the neighborhoods of the head \(h\) and tail \(t\) and put the textual descriptions of neighborhood triples in the demonstration prompt \(\mathcal{U}_{it}\). In this way, the input training sequence is enhanced as \(\mathcal{S}_{it}=T_{it}\oplus\mathcal{U}_{it}\oplus\mathcal{X}\oplus\mathcal{ A}_{it}\).
We name such an approach as structural-aware instruction tuning as the local structural information of the entities is added into the input sequence in the form of neighborhood triples.
## 4. Knowledge Prefix Adapter for Llm-Based Kgc
In Section 3, we provide a detailed discussion of how the existing LLM paradigms can be used for the triple classification task and introduce local structural information about KGs to further enhance model performance. Such approaches work to some extent, but they have obvious drawbacks. These fundamental approaches to incorporating KG structural information, mentioned in Section 3, focus on adding the neighborhood information to the input sequence in the form of text.
However, representing the KG structural information in the form of text is not a good choice, as it may bring invalid or redundant information into the prompt. It is neither scalable nor effective to increase the length of the prompt indefinitely, because a long context leads both to a decline in model capability and to high computational consumption. Besides, it is difficult to identify the structural information in the KG that is decisive for triple discrimination. These two problems put us in a dilemma.
To solve such issues, we introduce the **Knowledge Prefix Adapter** (KoPA for short) to incorporate KG structural information into the LLM for triple classification. Figure 2 presents an intuitive view of KoPA. As shown in Figure 2, the design of KoPA is divided into two parts. Firstly, we extract the structural information of entities and relations from the KG through structural embedding pre-training, and then we inject this structural information through a structural prefix adapter into the input sequence \(\mathcal{S}\). The LLM \(\mathcal{M}\) is further fine-tuned with the structure-injected sequence.
We will discuss the details in the next few sections and make a comprehensive comparison among KoPA and the methods mentioned in Section 3.
### Structural Embedding Pre-training
Instead of adding text about the neighborhood information into the input sequence, KoPA extracts the structural information of the entities and relations by self-supervised structural embedding pre-training. For each entity \(e\in\mathcal{E}\) and each relation \(r\in\mathcal{R}\), we learn a structural embedding \(e\in\mathbb{R}^{d_{e}},r\in\mathbb{R}^{d_{r}}\) respectively, where \(d_{e},d_{r}\) are the embedding dimensions. We encode the KG structural information in the embeddings and further adapt them into the textual representation space of LLMs.
Referring to the existing embedding-based KGC paradigm, we define a score function \(\mathcal{F}(h,r,t)\) to measure the plausibility of the triple \((h,r,t)\). We adopt the self-supervised pre-training objective by negative sampling (Bengio et al., 2017):
\[\begin{split}\mathcal{L}_{pre}&=\frac{1}{|\mathcal{ T}|}\sum_{(h,r,t)\in\mathcal{T}}\Big{(}-\log\sigma(\gamma-\mathcal{F}(h,r,t))\\ &-\sum_{i=1}^{K}p_{i}\log\sigma(\mathcal{F}(h^{\prime}_{i},r^{ \prime}_{i},t^{\prime}_{i})-\gamma)\Big{)}\end{split} \tag{5}\]
where \(\gamma\) is the margin, \(\sigma\) is the sigmoid activation function and \((h^{\prime}_{i},r^{\prime}_{i},t^{\prime}_{i})(i=1,2,\dots,K)\) are \(K\) negative samples (Bengio et al., 2017) of \((h,r,t)\). The weight \(p_{i}\) is the self-adversarial weight proposed in (Zhu et al., 2019).
By minimizing such a pre-training loss, the structural embeddings of each entity and relation are optimized to fit all its relative triples thus the KG structural information such as subgraph structure and relational patterns is captured in the embeddings. Such an approach has been proven effective in many embedding-based KGC methods (Bengio et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019), which is also called distributed representations (Kang et al., 2017) in the earliest days.
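A PyTorch sketch of this pre-training objective is shown below, assuming the score \(\mathcal{F}(h,r,t)\) is distance-like (lower means more plausible), as in RotatE-style models; the temperature `alpha` of the self-adversarial weights is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def structural_pretraining_loss(score_pos, score_neg, gamma=4.0, alpha=1.0):
    """Self-adversarial negative sampling loss of Eq. (5).
    score_pos: (B,) scores of positive triples; score_neg: (B, K) scores of negatives."""
    # positive part: -log sigmoid(gamma - F(h, r, t))
    pos_term = -F.logsigmoid(gamma - score_pos)
    # self-adversarial weights p_i over the K negatives (no gradient through them)
    p = torch.softmax(alpha * (gamma - score_neg), dim=-1).detach()
    # negative part: -sum_i p_i log sigmoid(F(h'_i, r'_i, t'_i) - gamma)
    neg_term = -(p * F.logsigmoid(score_neg - gamma)).sum(dim=-1)
    return (pos_term + neg_term).mean()
```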
### Knowledge Prefix Adapter
After structural embedding pre-training, we obtain the structural embeddings \((h,r,t)\) of a triple \((h,r,t)\) in which the KG structural information is encoded. However, the structural embeddings are learned in a representation space different from the textual token representation space of the LLM \(\mathcal{M}\), which means \(\mathcal{M}\) cannot directly understand these embeddings. Thus we apply a knowledge prefix adapter \(\mathcal{P}\) to project them into the textual token representation space of \(\mathcal{M}\). Specifically, the structural embeddings are converted to several virtual knowledge tokens \(\mathcal{K}\) by \(\mathcal{P}\):
\[\mathcal{K}=\mathcal{P}(h)\oplus\mathcal{P}(r)\oplus\mathcal{P}(t) \tag{6}\]
In practice, the adapter \(\mathcal{P}\) is a simple projection layer (Zhu et al., 2019). Then we put \(\mathcal{K}\) at the front of the original input sequence, serving as a prefix of the instruction and triple prompt: \(\mathcal{S}_{kpa}=\mathcal{K}\oplus\mathcal{I}_{it}\oplus\mathcal{X}\). This way, all the following text tokens can attend to the prefix \(\mathcal{K}\) thanks to the unidirectional attention in decoder-only LLMs, so the textual tokens pay unidirectional attention to the structural embeddings of the input triple. Such a structural-aware prompt is employed during both fine-tuning and inference. During training, we freeze the pre-trained structural embeddings. The adapter is optimized to learn the mapping from structural knowledge to textual representation and generalizes to new triples in the inference stage, which complements the textual description and provides the triple information from another perspective to make enhanced predictions.
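A minimal PyTorch sketch of the adapter is given below; the dimensions follow the experimental settings reported later (512-dimensional structural embeddings, 4096-dimensional LLM token embeddings), and the module and argument names are illustrative.

```python
import torch
import torch.nn as nn

class KnowledgePrefixAdapter(nn.Module):
    """Projects frozen structural embeddings of (h, r, t) into the LLM token space
    and prepends them to the text token embeddings as virtual knowledge tokens."""
    def __init__(self, struct_dim=512, llm_dim=4096):
        super().__init__()
        self.proj = nn.Linear(struct_dim, llm_dim)

    def forward(self, h_emb, r_emb, t_emb, text_embs):
        # K = P(h) + P(r) + P(t): one virtual token per element of the triple.
        prefix = torch.stack([self.proj(h_emb), self.proj(r_emb), self.proj(t_emb)], dim=1)
        # Prepend so that every following text token can attend to the prefix.
        return torch.cat([prefix, text_embs], dim=1)  # (B, 3 + L, llm_dim)
```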
### Complexity Analysis
After proposing KoPA, we make a comparison among LLM-based KGC methods to demonstrate the advantages of KoPA, which is shown in Table 1. Compared with the basic paradigms (ZSR/ICL/IT), KoPA incorporates the KG structural embeddings into LLM to combine the textual and structural information. Meanwhile, KoPA makes the length of the prompt more refined as the length of virtual tokens generated by the structural prefix adapter is fixed to 3 for head/relation / tail respectively. In contrast, the prompt length of structural-aware IT (enhanced IT in the table) is linearly related
to the number of neighborhood triples \(k\). KoPA can get better results with a more simplified prompt, and we will show this in the experimental section.
## 5. Experiments
### Datasets
In our experiments, we use three public KG benchmarks UMLS (Wang et al., 2017), CoDeX-S (Wang et al., 2017), and FB15K-237N (Wang et al., 2017) to evaluate the capability of the proposed LLM-based KGC methods.
UMLS (Wang et al., 2017) is a classic medical knowledge graph including general knowledge about medicine and health care. CoDeX-S (Wang et al., 2017) is an encyclopedic KG extracted from Wikidata (Wikidata, 2017). FB15K-237N proposed in (Wang et al., 2017) is modified from FB15K-237. Besides, CoDeX-S and FB15K-237N mine hard negative triples for a more challenging evaluation and avoid false negative samples in the validation/test dataset during dataset construction. We constructed negative samples for UMLS in the same method. The statistic information is shown in Table 2.
### Experimental Settings
#### 5.2.1. Baseline Methods
In our experiments, we provide a comprehensive comparison of our method with three broad classes of baseline models on the triple classification task, which is an important subtask of KGC. The KGC baselines can be divided into three parts: embedding-based methods, PLM-based methods, and LLM-based methods. The specific models used for these baselines are listed below:
(1). **Embedding-based KGC methods.** We select four traditional embedding-based KGC methods for comparisons, namely TransE (Chen et al., 2017), DistMult (Wang et al., 2017), ComplEx (Wang et al., 2017), and RotatE (Wang et al., 2017). These methods predict the triple plausibility by the learned structural embeddings and the score functions defined in the model.
(2). **PLM-based KGC methods**. We select KG-BERT (Wang et al., 2017) and PKGC (Wang et al., 2017) as PLM-based KGC baselines, which are classic methods focusing on the triple classification task. These methods treat triple classification as a binary text classification task.
(3). **LLM-based KGC methods**. LLM-based KGC research is still at an early stage, and KGLLaMA (Wang et al., 2017) is the only existing LLM-based KGC baseline. In addition to KGLLaMA, the methods proposed in Section 3, including ZSR, ICL, IT, and structural-aware IT (enhanced IT), also serve as baselines.
Besides, we further divide the LLM-based methods into two categories: training-free methods and fine-tuning methods. Training-free methods consist of ZSR and ICL, while the rest are all fine-tuning methods.
#### 5.2.2. Implementation and Detail Settings
We reproduce the baseline results and implement the proposed KoPA.
For embedding-based KGC methods, we reproduce the results with OpenKE. We set the embedding dimension \(d_{e}=d_{r}=512\) and sample \(K=32\) negative samples during training. The margin \(\gamma\) is tuned over \(\{0,4,6,8,12\}\). After training the KGC models, we search for the best classification score threshold on the validation set and apply it to the test data, following the traditional setting (Chen et al., 2017).
For PLM-based methods, the backbone model is BERT (He et al., 2017). We fine-tune KG-BERT according to the official code implementation. Since PKGC requires substantial manual work to annotate each relation with a prompt, we only report its results on FB15K-237N as given in the original paper.
For all LLM-based methods, we employ Alpaca-7B (Ross et al., 2017) as the LLM backbone. Alpaca is a well-known extension of the LLaMA (Wang et al., 2017) model fine-tuned on instruction-following data. We reproduce the triple classification results of KGLLaMA (Wang et al., 2017) over two backbones (LLaMA and Alpaca) to exclude the effect of the backbone choice on the results, and we name the two baseline models KGLLaMA and KGAlpaca respectively.
For zero-shot reasoning, in addition to evaluating the same Alpaca backbone, we also test the performance of _GPT-3.5-turbo_, which has 175B parameters. For the in-context learning method, we sample \(k\)-shot (\(k\)=1,2,4,8) structural-aware demonstrations. Besides, we sample 4 neighborhood triples for each triple to conduct structural-aware instruction tuning. For KoPA, we employ RotatE (Wang et al., 2017) as the score function for structural embedding pre-training; the embedding dimension is set to 512 and the adapter is a 512\(\times\)4096 linear projection layer. For all the fine-tuning methods (instruction tuning, structural-aware instruction tuning, and KoPA), we fine-tune Alpaca using LoRA (Ross et al., 2017) with rank 32. The number of epochs is searched in \(\{3,4,5\}\) and the learning rate is tuned in \(\{1e-4,3e-4,5e-4\}\). We use the AdamW optimizer (\(\beta_{1}=0.9,\beta_{2}=0.99\)) (Kingma et al., 2014) with a fixed batch size of 12. We conducted all the experiments with Nvidia A800 GPUs.
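Schematically, a LoRA fine-tuning setup of this kind could be configured as follows with the Hugging Face `peft` library; the model path, scaling factor, dropout, and target modules here are illustrative placeholders rather than the exact configuration used in this work.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("path/to/alpaca-7b")  # illustrative local path
lora_cfg = LoraConfig(
    r=32,                                  # LoRA rank used in our experiments
    lora_alpha=32,                         # illustrative scaling factor
    lora_dropout=0.05,                     # illustrative
    target_modules=["q_proj", "v_proj"],   # illustrative choice of projection layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)

# AdamW with the betas reported above; learning rate tuned in {1e-4, 3e-4, 5e-4}, batch size 12
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, betas=(0.9, 0.99))
```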
#### 5.2.3. Evaluation Protocol
We evaluate the methods with the triple classification task (Chen et al., 2017), which discriminates whether a triple \((h,r,t)\) is true or false and is essentially a binary classification task. Since all the test datasets are label-balanced, we use accuracy, precision, recall, and F1-score as the evaluation metrics.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Method**} & **Requires** & **Extra** & **Prompt** \\ & **Fine-tuning** & **KG Info** & **Length** \\ \hline ZSR & ✗ & ✗ & \(L_{l}+L_{T}\) \\ ICL & ✗ & ✓ & \(L_{l}+L_{T}+kL_{D}\) \\ Vanilla IT & ✓ & ✗ & \(L_{l}+L_{T}\) \\ Enhanced IT & ✓ & ✓ & \(L_{l}+L_{T}+kL_{D}\) \\ \hline KoPA & ✓ & ✓ & \(L_{l}+L_{T}+3\) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Comparison among LLM-based KGC methods along three dimensions. For the prompt length analysis, \(L_{l}\) and \(L_{T}\) denote the lengths of the instruction prompt and the triple prompt, \(L_{D}\) denotes the length of a demonstration, and \(k\) is the number of demonstrations. ZSR/ICL/IT refer to zero-shot reasoning, in-context learning, and instruction tuning respectively.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Dataset & \(|\mathcal{E}|\) & \(|\mathcal{R}|\) & \#Train & \#Valid(+/-) & \#Test(+/-) \\ \hline UMLS & 135 & 46 & 5216 & 652/652 & 661/661 \\ CoDeX-S & 2034 & 42 & 32888 & 1827/1827 & 1828/1828 \\ FB15K-237N & 13104 & 93 & 87282 & 7041/7041 & 8226/8226 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Statistical information of datasets. The positive (+) and negative (-) samples are 1:1 in the valid and test set.
### Main Results
The main experimental results for triple classification are shown in Table 3. Since precision and recall alone do not fully reflect a model's performance on this classification task, we focus on accuracy and F1-score; for completeness, we also report precision and recall in the table.
Overall, KoPA outperforms the 16 existing baseline models in accuracy and F1-score on all three datasets. Taking CoDeX-S as an example, KoPA achieves a 1.81% improvement in accuracy and a 1.85% improvement in F1. Since KoPA uses the pre-trained RotatE embeddings, it is notable that it significantly outperforms the original embedding-based RotatE method, especially on the larger and more challenging datasets CoDeX-S and FB15K-237N.
Meanwhile, comparing all the LLM-based approaches, we can see that LLMs cannot understand the KG structural information well without fine-tuning. The zero-shot LLMs perform very poorly on the triple classification task, even though GPT-3.5-turbo (175B parameters) has excellent general capability. Although the demonstrations provided by ICL incorporate some KG information, the performance gain is limited. Moreover, the predictions of the training-free methods are biased and tend to slip into the extremes of labelling everything true or everything false: their recall is either very high or very low, while their F1 scores remain relatively low.
In contrast, fine-tuning effectively introduces the KG information into the LLMs, as the overall performance improves markedly. Meanwhile, although structural-aware IT enhances the input prompt with the neighborhood information of triples, its performance is still limited compared with KoPA. This suggests that the structural embeddings carry richer semantic information than text-based auxiliary prompts, and that this information can be understood by the LLM through the prefix adapter. Combining the analysis in Section 4.3 with the experimental results, KoPA achieves better results with shorter prompts.
### Transferability Exploration
The results in the main experiments have shown the effectiveness of KoPA. To further validate the generality and transferability of KoPA, we conduct an additional transferability experiment. In this experiment, we demonstrate that the knowledge prefix adapter learns to transfer structural embeddings into textual token representations and provides semantically rich auxiliary information that enhances the decoding process of LLM inference.
We demonstrate this point by testing the influence of KoPA on entities that do not appear in the training phase, which is also called the inductive setting in other KGC works (Kang et al., 2017). We split the KG dataset into an inductive setting with a given inductive rate (IR), which refers to the ratio of entities unseen during training. For example, if IR=10%, we randomly select 10% of the entities as the inductive entity set. Any triple in the training set whose head or tail is in the inductive set is removed during training. The triples in the test set are then divided into two parts: the seen (S) part and the unseen (U) part; if the head or tail of a triple is in the inductive entity set, it is regarded as unseen. A sketch of this split procedure is given below.
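The following is a minimal sketch of how such an inductive split can be constructed; function and variable names are illustrative and do not correspond to our actual code.

```python
import random

def inductive_split(entities, train_triples, test_triples, ir=0.1, seed=0):
    """Hold out a fraction `ir` of entities; drop training triples that touch them
    and tag each test triple as seen (S) or unseen (U)."""
    rng = random.Random(seed)
    unseen = set(rng.sample(sorted(entities), int(ir * len(entities))))
    train_kept = [(h, r, t) for (h, r, t) in train_triples
                  if h not in unseen and t not in unseen]
    test_seen = [(h, r, t) for (h, r, t) in test_triples
                 if h not in unseen and t not in unseen]
    test_unseen = [(h, r, t) for (h, r, t) in test_triples
                   if h in unseen or t in unseen]
    return train_kept, test_seen, test_unseen
```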
\begin{table}
\begin{tabular}{c|c|c|c c c|c c c|c c c|c c} \hline \hline & \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**UMLS**} & \multicolumn{3}{c|}{**CoDeX-S**} & \multicolumn{3}{c}{**FB15K-237N**} \\ \cline{3-13} & & **Acc** & **P** & **R** & **F1** & **Acc** & **P** & **R** & **F1** & **Acc** & **P** & **R** & **F1** \\ \hline \multirow{4}{*}{Embedding-based} & TransE (Kang et al., 2017) & 84.49 & 86.53 & 81.69 & 84.04 & 72.07 & 71.91 & 72.42 & 72.17 & 69.71 & 70.80 & 67.11 & 68.91 \\ & DistMult (Song et al., 2017) & 86.38 & 87.06 & 86.53 & 86.79 & 66.79 & 69.67 & 59.46 & 64.16 & 58.66 & 58.98 & 56.84 & 57.90 \\ & ComplEx (Kang et al., 2017) & 90.77 & 89.92 & 91.83 & 90.87 & 67.64 & 67.84 & 67.06 & 67.45 & 65.70 & 66.46 & 63.38 & 64.88 \\ & RotatE (Kang et al., 2017) & 92.05 & 90.17 & 94.41 & 92.23 & 75.68 & 75.66 & 75.71 & 75.69 & 68.46 & 69.24 & 66.41 & 67.80 \\ \hline \multirow{2}{*}{PLM-based} & KG-BERT (Kang et al., 2017) & 77.30 & 70.96 & 92.43 & 80.28 & 77.30 & 70.96 & 92.43 & 80.28 & 56.02 & 53.47 & 97.62 & 67.84 \\ & PKGC (Kang et al., 2017) & - & - & - & - & - & - & - & - & 79.60 & - & - & 79.50 \\ \hline \multirow{4}{*}{\begin{tabular}{c} LLM-based \\ Training-free \\ \end{tabular} } & Zero-shot(Alpaca) & 52.64 & 51.55 & 87.69 & 64.91 & 50.62 & 50.31 & 99.83 & 66.91 & 56.06 & 53.32 & 97.37 & 68.91 \\ & Zero-shot(GPT-3.5) & 67.58 & 88.04 & 40.71 & 55.67 & 54.68 & 69.13 & 16.94 & 27.21 & 60.15 & 86.62 & 24.01 & 37.59 \\ & ICL(1-shot) & 50.37 & 50.25 & 75.34 & 60.29 & 49.86 & 49.86 & 50.59 & 50.17 & 54.54 & 53.67 & 66.35 & 59.34 \\ & ICL(2-shot) & 53.78 & 52.47 & 80.18 & 63.43 & 52.95 & 51.54 & 98.85 & 67.75 & 57.81 & 56.22 & 70.56 & 62.58 \\ & ICL(4-shot) & 53.18 & 52.26 & 73.22 & 60.99 & 51.14 & 50.58 & 99.83 & 67.14 & 59.29 & 57.49 & 71.37 & 63.68 \\ & ICL(8-shot) & 55.52 & 55.85 & 52.65 & 54.21 & 50.62 & 50.31 & 99.83 & 66.91 & 59.23 & 57.23 & 73.02 & 64.17 \\ \hline \multirow{4}{*}{
\begin{tabular}{c} LLM-based \\ Fine-tuning \\ \end{tabular} } & KG-LLMa (Kang et al., 2017) & 85.77 & 87.84 & 83.05 & 85.38 & 79.43 & 78.67 & 80.74 & 79.69 & 74.81 & 67.37 & 96.23 & 79.25 \\ & KG-Alpaca (Kang et al., 2017) & 86.01 & 94.91 & 76.10 & 84.46 & 80.25 & 79.38 & 81.73 & 80.54 & 69.91 & 62.71 & 98.28 & 76.56 \\ \cline{1-1} & Vanilla IT & 86.91 & 95.18 & 77.76 & 85.59 & 81.18 & 77.01 & 88.89 & 82.52 & 73.50 & 65.87 & 97.53 & 78.63 \\ \cline{1-1} & Structural-aware IT & 89.93 & 93.27 & 86.08 & 89.54 & 81.27 & 77.14 & 88.40 & 82.58 & 76.42 & 69.56 & 93.95 & 79.94 \\ \hline \multicolumn{2}{c|}{KoPA} & **92.58** & 90.85 & 94.70 & **92.70** & **82.74** & 77.91 & 91.41 & **84.11** & 77.65 & 70.81 & 94.09 & **80.81** \\ \hline \hline \end{tabular}
\end{table}
Table 3. The main experiment results of triple classification. We report the accuracy (Acc), precision (P), recall (R), and F1-score (F1) results for each method on the three datasets. “-” means the results are missing because the specific requirements of PKGC make it difficult to reproduce. The best Acc / F1 results among the baselines are marked with underline, and we highlight our results in bold when we achieve a new SOTA.
We fine-tune the LLM with only the remaining seen triples and test on both seen and unseen triples. In this setting, a set of entities does not participate in the training process and the LLM never sees their textual descriptions, which makes the test more challenging. We report the accuracy and F1-score for seen (S), unseen (U), and all (A) test triples, as shown in Figure 3 for three fine-tuning methods: KoPA, vanilla IT, and structural-aware IT (enhanced IT in the figure).
From the radar charts, we can observe that KoPA outperforms the other methods on unseen triples and shows less performance degradation as IR increases. The performance of structural-aware IT (enhanced IT), which provides neighborhood triples in textual form, is more unstable. These phenomena suggest that the knowledge prefix adapter learns a good mapping from the structural embeddings to the textual representation, which remains transferable even if the entities are unseen during training, and that the structural embeddings captured from the KG play a significant role in informing the LLM with useful structural information.
### Ablation Study
To verify the effectiveness of the KoPA design, we conduct a two-part ablation study: the first part verifies the effectiveness of the structural embeddings and the second the effectiveness of the prefix adapter. As shown in Table 4, removing the structural embeddings or replacing them with randomly initialized embeddings both lead to a performance decline. We also find that the model is compatible with different types of structural embeddings; however, the performance gain depends on how powerful the embedding originally is on the triple classification task. Referring to Table 3, TransE (Chen et al., 2017) and RotatE (Zhu et al., 2018) are stronger embedding-based KGC models than DistMult (Wang et al., 2019) and ComplEx (Wang et al., 2019). This demonstrates that semantically rich structural information is the key to the performance improvement and that KoPA takes full advantage of it.
Meanwhile, placing the virtual knowledge tokens generated by the adapter in the middle (infix) or at the end (suffix) of the input sequence also decreases performance. We believe the reason is that placing the tokens at the front of the sequence allows all the text to attend to them, since LLMs are usually decoder-only architectures with unidirectional self-attention; the LLM can then make a better decision based on structural embeddings that fully interact with the text. Combining the two parts of the ablation study, we conclude that the design of KoPA is effective and reasonable.
### Case Study
To give a more intuitive view of KoPA, we conduct a case study in this section from both macro and micro perspectives. From the macro perspective, we count the prediction overlap of several models and plot the Venn diagram shown in Figure 4.
From the diagram we find that a significant portion of KoPA's predictions do not intersect with those of the other models, which means that KoPA makes the right prediction on some test data that many other models predict incorrectly. This suggests that the structural information incorporated in KoPA plays a significant role in making correct predictions. As a micro example, the test triple (_John Landis, film director film, Coming to America_) is predicted as wrong by the RotatE model and by the vanilla instruction-tuned LLM. Even with the retrieved neighborhood triples (_Coming to America, locations, New York City_), (_John Landis, nationality, USA_), (_Coming to America, genre, romantic comedy_), (_Comedy, common netflix titles, Coming to America_), the structural-aware fine-tuned LLM still makes a wrong prediction, because the neighborhood information is of little use for this particular judgment even though the retrieved triples are factually correct. The structural embeddings applied in KoPA contain more
\begin{table}
\begin{tabular}{c|c c} \hline \hline Model & Acc & F1 \\ \hline KoPA(Prefix + RotatE) & 82.74 & 84.11 \\ \hline \multirow{5}{*}{Embedding} & w/o SE & 81.18 & 82.52 \\ & w/ TransE & 82.46 & 83.42 \\ & w/ DistMult & 80.71 & 81.27 \\ & w/ ComplEx & 81.21 & 82.12 \\ & w/ Random & 81.53 & 82.36 \\ \hline \multirow{2}{*}{Position} & Infix & 81.21 & 82.69 \\ & Suffix & 71.99 & 71.38 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Ablation study results on CoDeX-S. We replace the pre-trained structural embeddings with other components and change the insertion position of the virtual knowledge tokens to demonstrate the effectiveness of the knowledge prefix adapter.
Figure 3. The results of the transferability experiment. We report the results on the CoDeX-S dataset under different inductive rates (IR). The test data are split into seen (S) and unseen (U) parts based on whether the entity appeared during training, and we also aggregate the results over all (A) test data. Accuracy (Acc) and F1-score (F1) are reported in the radar charts.
information than structural knowledge expressed in textual form, and this information is easier to extract through the structural pre-training process. Thus, KoPA outperforms the other models in the triple classification task.
## 6. Conclusion
In this paper, we present KoPA, a knowledge prefix adapter designed for LLM-based KGC. KoPA incorporates the structural information of KGs into LLMs and enhances the input prompt sequence with virtual knowledge tokens, which guide the text decoding process toward reasonable predictions. KoPA is a two-stage method consisting of structural embedding pre-training and instruction tuning of the LLM. We conduct triple classification experiments, an important KGC task, to demonstrate the strong results achieved by KoPA. Our work still has limitations: so far, we have not generalized the method to all KGC tasks, such as entity prediction and relation prediction.
In the future, we plan to investigate LLM-based KGC more deeply and design a more unified framework that accomplishes all KGC tasks with the help of LLMs. We will also explore integrating KGs into LLM-based downstream applications to make LLMs more knowledgeable.
|
2305.13974 | Looking for Traces of Non-minimally Coupled Dark Matter in the X-COP
Galaxy Clusters Sample | We look for possible evidence of a non-minimal coupling (NMC) between dark
matter (DM) and gravity using data from the X-COP compilation of galaxy
clusters. We consider a theoretically motivated NMC that may dynamically arise
from the collective behavior of the coarse-grained DM field (e.g., via
Bose-Einstein condensation) with averaging/coherence length
$L_{\mathrm{nmc}}$. In the Newtonian limit, the NMC modifies the Poisson
equation by a term $L_{\mathrm{nmc}}^2 \nabla^2 \rho$ proportional to the
Laplacian of the DM density itself. We show that this term when acting as a
perturbation over the standard Navarro-Frenk-White (NFW) profile of cold DM
particles, can yield DM halo density profiles capable of correctly fitting
galaxy clusters' pressure profiles with an accuracy comparable and in some
cases even better than the standard cold DM NFW profile. We also show that the
observed relation between the non-minimal coupling length scale and the virial
mass found in Gandolfi et al., 2022 for Late Type Galaxies is consistent with
the relation we find in the current work, suggesting that the previously
determined power-law scaling law holds up to galaxy cluster mass scales. | Giovanni Gandolfi, Balakrishna Sandeep Haridasu, Stefano Liberati, Andrea Lapi | 2023-05-23T11:57:22Z | http://arxiv.org/abs/2305.13974v1 | # Looking for Traces of Non-minimally Coupled Dark Matter in the X-COP Galaxy Clusters Sample
###### Abstract
We look for possible evidence of a non-minimal coupling (NMC) between dark matter (DM) and gravity using data from the X-COP compilation of galaxy clusters. We consider a theoretically motivated NMC that may dynamically arise from the collective behavior of the coarse-grained DM field (e.g., via Bose-Einstein condensation) with averaging/coherence length \(L_{\rm NMC}\). In the Newtonian limit, the NMC modifies the Poisson equation by a term \(L_{\rm NMC}^{2}\nabla^{2}\rho\) proportional to the Laplacian of the DM density itself. We show that this term, when acting as a perturbation over the standard Navarro-Frenk-White (NFW) profile of cold DM particles, can yield DM halo density profiles capable of correctly fitting galaxy clusters' pressure profiles with an accuracy comparable to, and in some cases even better than, that of the standard cold DM NFW profile. We also show that the observed relation between the non-minimal coupling length scale and the virial mass found in Gandolfi et al. (2022) for Late Type Galaxies is consistent with the relation we find in the current work, suggesting that the previously determined power-law scaling holds up to galaxy cluster mass scales.
Cosmology (343) - Dark matter (353) - Non-standard theories of gravity (1118)
## 1 Introduction
Zwicky (1933) originally hypothesized the existence of an unseen matter component to explain the large velocity scatter of the Coma cluster. In the subsequent decades, astrophysicists became aware of a discrepancy between luminous matter and the amount of mass required to explain the kinematic properties of spiral galaxies (Rubin et al., 1978; Bosma, 1978). The astrophysical community traces back this missing mass to dark matter (DM); an unseen, cold (i.e. non-relativistic), and weakly interacting massive particle. This cold DM paradigm has been successful on cosmological scales, yet it struggles to fully reproduce the observed phenomenology on galactic scales, especially in DM-dominated dwarfs. This has motivated astrophysicists to consider several possible solutions, some of them radically departing from the standard cold DM paradigm. Some astrophysicists advocate a more realistic and complete inclusion of baryonic physics and feedback in models and simulations that could in principle alleviate some of the cold DM phenomenological issues at galactic scales (see e.g. Di Cintio et al., 2014; Pontzen and Governato, 2014; El-Zant et al., 2016; Navarro et al., 2017; Santos-Santos et al., 2016; Peirani et al., 2017; Desmond, 2017; Keller and Wadsley, 2017; Ludlow et al., 2017; Wheeler et al., 2019; Freundlich et al., 2020; Freundlich et al., 2020). Others point to alternative scenarios in which DM is composed of non-standard particle candidates (see the review by Salucci, 2019 and references therein). Another proposal was to abandon entirely the DM paradigm in favour of modifying the laws of gravity (such as in the Modified Newtonian Dynamics or MOND model, originally proposed in Milgrom, 1983).
In Gandolfi et al. (2021) and Gandolfi et al. (2022), we have explored a new possibility to solve the small-scale incompleteness of the cold DM paradigm, having reviewed and tested a model in which cold DM is non-minimally coupled with gravity. Many other works by our team and collaborators already conjectured this possibility (e.g., Bruneton et al., 2009; Bertolami and Paramos, 2010; Bettoni et al., 2011; Bettoni et al., 2014; Bettoni and Liberati, 2015; Ivanov and Liberati, 2020). As shown in Gandolfi et al. (2021) and Gandolfi et al. (2022), the introduction of this coupling extends in a simple fashion the cold DM paradigm while maintaining its successful phenomenology on large cosmological scales and improving its behaviour in galactic systems. The term "non-minimal" implies that the gradient
of the DM distribution directly couples to the Einstein tensor. Such a non-minimal coupling (NMC) is not necessarily a fundamental feature of the DM particles, but rather may dynamically develop when the averaging/coherence length \(L_{\rm NMC}\) associated with the fluid description of the DM collective behaviour is comparable to the local curvature scale. In the Newtonian limit, this NMC appears as a modification of the Poisson equation by a term \(L_{\rm NMC}^{2}\nabla^{2}\rho\) proportional to the Laplacian of the DM density \(\rho\) (see Bettoni et al., 2014). This simple modification impacts the internal dynamics of spiral galaxies, which are altered compared to a pure cold DM framework. In Gandolfi et al. (2021) and Gandolfi et al. (2022) we have shown that this NMC between DM and gravity can alleviate the so-called core-cusp controversy, i.e. the observed discrepancy between the cored inner shape of the observed galactic dark halo density profiles and the cuspier shape predicted by gravity-only DM simulations, which are best described by the so-called Navarro-Frenk-White (NFW) profile (Navarro et al., 1996; Lokas and Mamon, 2001; Boylan-Kolchin and Ma, 2004; Navarro, 2006; de Blok, 2010; Navarro et al., 2017). Gandolfi et al. (2022) also showed how such an NMC manages to reproduce, for a diverse sample of spiral galaxies, the tight empirical relationships linking the baryonic and dark components of galaxies. It is argued that the most general of such relations is the Radial Acceleration Relation (see Lelli et al., 2017; Chae et al., 2019; Li et al., 2018; Li et al., 2018; Di Paolo et al., 2019; Green and Moffat, 2019; Tian et al., 2020; Rodrigues and Marra, 2020), whose explanation is far from trivial in the cold DM framework (albeit some attempts have been made in this sense, see e.g. Di Cintio et al., 2014; Di Cintio and Lelli, 2016; Santos-Santos et al., 2016; Keller and Wadsley, 2017; Ludlow et al., 2017; Desmond, 2017; Navarro et al., 2017; Wheeler et al., 2019).
The aim of the present work is to test the NMC DM model on the scales of galaxy clusters, to assess its capability in fitting their pressure profiles, and to determine whether the scale relations predicted by this model are also satisfied in these regimes. For this purpose we use the XMM-Newton Cluster Outskirts Project (X-COP) data products (see Ghirardini et al., 2018; Ettori et al., 2019; Eckert et al., 2019; Ghirardini et al., 2019). This sample consists of 12 clusters with well-observed X-ray emission and a high signal-to-noise ratio in the Planck Sunyaev-Zel'dovich (SZ) survey (Planck Collaboration et al., 2016). The X-COP data provide information on the ICM temperature and pressure over a wide radial range, from 0.2 Mpc to 2 Mpc.
The paper is organized as follows: in Sec. 2 we briefly summarize the underlying theory behind the NMC DM model and present the X-COP data in more detail; in Sec. 3 we illustrate and comment on our results; and in Sec. 4 we summarize our work and outline future developments.
Throughout this work, we adopt the standard flat \(\Lambda\)CDM cosmology (Aghanim et al., 2020) with rounded parameter values: matter density \(\Omega_{M}=0.3\), dark energy density \(\Omega_{\Lambda}=0.7\), baryon density \(\Omega_{b}=0.05\), and Hubble constant \(H_{0}=100h\;{\rm km\;s^{-1}Mpc^{-1}}\) with \(h=0.7\). Unless otherwise specified, \(G\approx 6.67\times 10^{-8}\;{\rm cm^{3}\;g^{-1}\;s^{-2}}\) indicates the standard gravitational (Newton) constant.
## 2 NMC Modelling and X-COP Data
### A theoretical background for the NMC
Here we provide a short theoretical background for the NMC DM model, referring the reader to Gandolfi et al. (2021) and Gandolfi et al. (2022) for further information. A very basic NMC model can be built with the addition of a coupling term \(S_{\rm int}\) between DM and gravity in the total Einstein-Hilbert action (in the Jordan frame) with shape:
\[S_{\rm int}\;\left[\bar{g}_{\mu\nu},\varphi\right]=\epsilon L_{\rm NMC}^{2} \int{\rm d}^{4}x\,\sqrt{-\bar{g}}\,\widetilde{G}^{\mu\nu}\,\nabla_{\mu}\, \varphi\nabla_{\nu}\varphi\ ; \tag{1}\]
here \(\varphi\) is the (real) DM scalar field, \(\epsilon=\pm 1\) is the polarity of the coupling, \(\widetilde{G}^{\mu\nu}\) is the Einstein tensor, and \(L_{\rm NMC}\) is the NMC characteristic length-scale. From a purely theoretical perspective, such form of the NMC is allowed by the Einstein equivalence principle (e.g., Bekenstein, 1993; Di Casola et al., 2015). In our approach however, the length \(L_{\rm NMC}\) does not need to be a new fundamental constant of Nature, as it is indeed suggested by its virial mass-dependent scaling observed in Gandolfi et al. (2022). Instead, \(L_{\rm NMC}\) could emerge dynamically from some collective behavior of the coarse-grained DM field (e.g., Bose-Einstein condensation). We hence remark that our NMC model does not consist in a modified gravity theory, but simply in a formalization of an emergent behavior of cold DM inside halos. Furthermore, the bookkeeping parameter \(\epsilon\) will be set to \(\epsilon=-1\) (repulsive coupling) based on the findings of Gandolfi et al. (2021) and Gandolfi et al. (2022).
We also stress that the NMC DM model discussed here could in principle share features with other prospective DM models, such as self-interacting DM scenarios. Nonetheless, the NMC DM framework contemplates not only a
self-interaction term for DM in the action but also a scale-dependent geometric interaction term between the DM field and the baryonic component, which is sourced by the non-minimal coupling of the DM to gravity.
Adopting the fluid approximation for the field \(\varphi\) (as in Bettoni et al., 2012) and taking the Newtonian limit, the NMC translates into a simple modification of the Poisson equation (Bettoni et al., 2014)
\[\nabla^{2}\Phi=4\pi G\left[(\rho+\rho_{\rm bar})-\epsilon L^{2}\nabla^{2}\rho \right], \tag{2}\]
where \(\Phi\) is the Newtonian potential, and \(\rho_{\rm bar}\) and \(\rho\) are the baryonic and DM densities. In spherical symmetry, Eq. (2) implies that the total gravitational acceleration writes
\[g_{\rm tot}(r)=-\frac{G\,M(<r)}{r^{2}}+4\pi\,G\,\epsilon L^{2}\,\frac{{\rm d} \rho}{{\rm d}r}\;, \tag{3}\]
where \(M(<r)\) is the total mass enclosed in the radius \(r\); the first term is the usual Newtonian acceleration and the second term is the additional contribution from the NMC.
In Gandolfi et al. (2021) we highlighted that Eq. (2) gives rise to some interesting features for strongly DM-dominated systems in self-gravitating equilibria. First of all, the NMC can help to develop an inner core in the DM density profile. This enforces a shape for the density profile that closely follows the phenomenological Burkert profile (Burkert, 1995) out to several core scale radii. Moreover, DM-dominated halos with the NMC are consistent with the core-column density relation (see e.g. Salucci and Burkert, 2000; Donato et al., 2009; Burkert, 2015; Behroozi et al., 2013; Burkert, 2020), i.e. with the observed universality of the product between the core radius \(r_{0}\) and the core density \(\rho_{0}\). In Gandolfi et al. (2022) we tested the NMC hypothesis using a diverse sample of spiral galaxies. The NMC DM model yielded fits to the stacked rotation curves of such objects with a precision always superior to that of pure NFW fits and in several instances comparable to, or even better than, that of the Burkert model. Furthermore, we observed an interesting power-law scaling relation between the halo virial mass \(M_{200}\) and the non-minimal coupling length scale \(L_{\rm NMC}\) for the fitted galaxies. By assuming such a mass-dependent scaling of \(L_{\rm NMC}\), the NMC DM model was also able to reproduce the Radial Acceleration Relation up to the regime of dwarf spheroidal galaxies. The NMC DM model has yet to be tested on scales larger than galactic ones, however, and this is precisely the scope of the present work.
### Modeling cluster thermal profiles
The thermal pressure profiles1 of galaxy clusters are determined by the gravitational potential at play. In the framework of the NMC DM model, the pressure profile reads
Footnote 1: Here the gas density \(n_{\rm gas}(r)\approx 1.826\,n_{e}(r)\) is the sum of the electron and proton number densities, \(\mu\) is the mean molecular weight in a.m.u., and \(m_{p}\) is the proton mass.
\[P^{\rm th}(R)=P^{\rm th}(0)-1.8\mu m_{\rm p}\int_{0}^{R}n_{\rm e}(r)\left[ \frac{GM_{\rm DM}(r)}{r^{2}}-4\pi\,G\,\epsilon L_{\rm NMC}^{2}\,\frac{{\rm d} \rho}{{\rm d}r}\right]{\rm d}r, \tag{4}\]
where we model the electron density (ED) profile through the Vikhlinin profile (Vikhlinin et al., 2006),
\[\frac{n_{e}(r)}{n_{0}}=\frac{(r/r_{c})^{-\alpha/2}[1+(r/r_{s})^{7}]^{-\varepsilon /(2\gamma)}}{[1+(r/r_{c})^{2}]^{(3/2)\beta-\alpha/4}}. \tag{5}\]
To specify the dark mass distribution in Eq. (4) we adopt the same perturbative approach as in Gandolfi et al. (2022), considering the NMC as a small perturbation over the standard cold DM NFW profile
\[\rho_{\rm NFW}(r)=\frac{\delta_{c}\rho_{c}r_{s}^{3}}{r\left(r+r_{s}\right)^{ 2}}. \tag{6}\]
Here, \(r_{s}\) is a reference scale radius, \(\delta_{c}\) is the dimensionless characteristic overdensity of the halo and \(\rho_{\rm c}=3H_{0}^{2}/8\pi G\) is the local critical density. The NFW profile can also be written in terms of the halo virial mass \(M_{500}\) (i.e., the mass value at which the interior mean density is 500 times the critical density of the Universe) and the halo concentration \(c\equiv r_{500}/r_{s}\), with \(r_{500}\approx 260\left(M_{500}/10^{12}M_{\odot}\right)^{1/3}\,{\rm kpc}\) being the virial radius, and \(\delta_{c}\rho_{c}=M_{500}c^{3}g(c)/4\pi r_{500}^{3}\) with \(g(c)\equiv[\ln(1+c)-c/(1+c)]^{-1}\). The DM mass profile in Eq. (4) then coincides with the NFW mass distribution, and the term \({\rm d}\rho/{\rm d}r\) is the gradient of the NFW density profile. We remark that in this analysis the perturbative parameter is \(L_{\rm NMC}/r_{s}\), a quantity that is always small for the range of masses probed in our study, as we will show with our results.
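For concreteness, a minimal numerical sketch of this perturbative model is given below: it evaluates the NFW density, its analytic radial gradient, and the NMC-modified acceleration of Eq. (3), and integrates the hydrostatic-equilibrium relation of Eq. (4) between two radii. This is an illustrative implementation in cgs units, not the fitting code used in the analysis; the electron density \(n_e(r)\) is passed as a generic callable (e.g., the Vikhlinin form of Eq. (5)) and all parameter values are placeholders.

```python
import numpy as np
from scipy.integrate import quad

G = 6.67e-8        # cm^3 g^-1 s^-2
kpc = 3.086e21     # cm
mp = 1.673e-24     # g
mu = 0.6           # mean molecular weight (illustrative)

def nfw_density(r, rho_s, r_s):
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def nfw_density_gradient(r, rho_s, r_s):
    # analytic d(rho_NFW)/dr
    x = r / r_s
    return -rho_s * (1.0 + 3.0 * x) / (r_s * x ** 2 * (1.0 + x) ** 3)

def nfw_mass(r, rho_s, r_s):
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s ** 3 * (np.log(1.0 + x) - x / (1.0 + x))

def g_tot(r, rho_s, r_s, L_nmc, eps=-1.0):
    """Total acceleration of Eq. (3): Newtonian NFW term plus the NMC term."""
    return (-G * nfw_mass(r, rho_s, r_s) / r ** 2
            + 4.0 * np.pi * G * eps * L_nmc ** 2 * nfw_density_gradient(r, rho_s, r_s))

def thermal_pressure(R, r_ref, P_ref, n_e, rho_s, r_s, L_nmc):
    """Eq. (4) integrated between a reference radius r_ref (where P = P_ref) and R.
    1.8 * mu * mp * n_e approximates the gas mass density (cf. footnote 1)."""
    integrand = lambda r: 1.8 * mu * mp * n_e(r) * (-g_tot(r, rho_s, r_s, L_nmc))
    drop, _ = quad(integrand, r_ref, R)
    return P_ref - drop

# illustrative usage with placeholder values for n_e, rho_s, r_s and L_nmc
n_e = lambda r: 1e-3 * (1.0 + (r / (300.0 * kpc)) ** 2) ** -1.5   # cm^-3, toy beta-like profile
P = thermal_pressure(1000.0 * kpc, 100.0 * kpc, 1e-10, n_e, 1e-25, 400.0 * kpc, 50.0 * kpc)
```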
### The X-COP data
We test the aforementioned formalism for the NMC using the XMM-Newton Cluster Outskirts Project (X-COP)2 catalogue (Eckert et al., 2017) with joint X-ray temperature and Sunyaev-Zel'dovich (SZ) pressure observations. The methodology we adopt here is equivalent to the one earlier implemented in Haridasu et al. 2021 (please refer to it for further details). To constrain the characteristic length scale (\(L_{\rm NMC}\)) alongside the parameters of the mass profile (\(\Theta_{M}\)) and the electron density (\(\Theta_{e}\)), we write a joint likelihood \(\mathcal{L}\) as
Footnote 2: The datasets are publicly available at the following link: [https://dominiqueeckert.wixsite.com/xcop/about-x-cop](https://dominiqueeckert.wixsite.com/xcop/about-x-cop)
\[\mathcal{L}=\mathcal{L}_{\rm Px}+\mathcal{L}_{\rm P_{SZ}}+\mathcal{L}_{\rm ED}, \tag{7}\]
where the pressure is computed through Eq. (4) and the electron density is modelled as in Eq. (5). Here the first term is the likelihood of the X-ray pressure \(P_{\rm X}\) data (obtained from the X-ray temperature measurements), the second term is the likelihood of the co-varying SZ pressure data, and the last term in Eq. (7) accounts for the modelled electron density data.
Alongside these primary parameters of the model we also include an additional intrinsic scatter \(\Sigma_{\rm P,int}\), following the approach in Ghirardini et al. (2018); Ettori et al. (2019). We refer to Haridasu et al. (2021) for an elaborate discussion of the mild differences between our approach here and the analysis performed in Ettori et al. (2019).
We perform a Bayesian analysis through MCMC sampling using the publicly available emcee3 package (Foreman-Mackey et al., 2013; Hogg and Foreman-Mackey, 2018), which implements an affine-invariant ensemble sampler, and the GetDist4 package (Lewis, 2019) to analyse the chains and plot the contours. We utilise flat uniform priors on all the parameters \(\Theta_{e}=\{n_{0},\alpha,\beta,\varepsilon,r_{\rm c},r_{\rm s}\}\), \(\Theta_{\rm M}=\{M_{500},c\}\) and on the NMC characteristic length scale \(L_{\rm NMC}\) in the MCMC analysis. Note that we utilise the analytical form for the \(M(<r)\) of the cluster, expressed as a function of \(\Theta_{\rm M}\). Finally, we also perform a model comparison through the Bayesian evidence \(\mathcal{B}\) (Trotta, 2008, 2017; Heavens et al., 2017), using the MCEvidence package (Heavens et al., 2017)5. Comparing the Bayesian evidence, one can assess the preference for a given model \(\mathcal{M}_{1}(\Theta_{1})\) over the base model, i.e., the NFW model. The Bayesian evidence is contrasted on the Jeffreys scale (Jeffreys, 1961), where \(\Delta\log(\mathcal{B})<2.5\) and \(\Delta\log(\mathcal{B})>5\) imply a weak or a strong preference for the extended model, respectively.
Footnote 3: [http://dfm.io/emcee/current/](http://dfm.io/emcee/current/)
Footnote 4: [https://getdist.readthedocs.io/](https://getdist.readthedocs.io/)
Footnote 5: [https://github.com/yabebalFantaye/MCEvidence](https://github.com/yabebalFantaye/MCEvidence).
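Schematically, the sampling setup corresponds to a configuration like the one below; the log-likelihood is a placeholder standing in for the sum of the X-ray, SZ, and electron-density terms of Eq. (7), and `model_pressure`, `theta_start`, `data`, the parameter ordering, the prior ranges, and the chain settings are all illustrative rather than the exact choices adopted in the analysis.

```python
import numpy as np
import emcee

def log_prior(theta):
    # theta = (n0, alpha, beta, eps_v, r_c, r_s_e, M500, c, L_nmc); flat priors with illustrative ranges
    M500, c, L_nmc = theta[-3:]
    if not (1e13 < M500 < 5e15 and 1.0 < c < 15.0 and 0.0 <= L_nmc < 1.0):
        return -np.inf
    return 0.0

def log_likelihood(theta, data):
    # Placeholder: in practice this is the sum of the X-ray pressure, SZ pressure and
    # electron-density log-likelihoods of Eq. (7), with the model pressure from Eq. (4).
    model = model_pressure(theta, data["r"])          # hypothetical model-evaluation helper
    return -0.5 * np.sum(((data["P"] - model) / data["P_err"]) ** 2)

def log_prob(theta, data):
    lp = log_prior(theta)
    return lp + log_likelihood(theta, data) if np.isfinite(lp) else -np.inf

ndim, nwalkers = 9, 64
p0 = theta_start + 1e-3 * np.random.randn(nwalkers, ndim)   # theta_start: illustrative initial guess
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(data,))
sampler.run_mcmc(p0, 10000, progress=True)
chain = sampler.get_chain(discard=2000, thin=10, flat=True)
```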
## 3 Testing the NMC with X-COP Galaxy Clusters Data
### General results and example clusters
We report the results of our MCMC parameter estimation in Table (2) and the corresponding statistical comparison in Table (1). The reduced chi-squared (\(\chi^{2}_{\rm red}\)) values in Table (1) indicate that for the majority of the clusters the NMC DM model provides a description of the data comparable to, and often even better than, that of the NFW model. Nevertheless, we point out that the value of the NMC lengthscale \(L_{\rm NMC}\) is partially guided by the availability of data at the innermost radii, and the X-COP cluster pressure profiles are not well characterised in these regions. This lack of data at small radii relaxes the constraints on the upper end of the possible values of \(L_{\rm NMC}\), and it is ultimately responsible for the hole-like feature (corresponding to low or negative values of the pressure) observed in our analysis for a fraction of the cluster pressure profiles at inner radii. We anticipate, however, that these features could be erased simply by adding one or more data points at inner radii for the pressure profiles. Unfortunately, such data are not yet available for the X-COP cluster sample. In light of this, the reader should interpret the values of the NMC lengthscale \(L_{\rm NMC}\) obtained in this work for clusters exhibiting a hole in their pressure profiles as upper bounds on the real values of \(L_{\rm NMC}\). We also note that our NMC DM model does not modify the estimation of the pressure profiles in the outskirts of the cluster, implying that the results presented here are not degenerate with any additional physics that could affect the pressure profile estimation at outer radii, such as non-thermal pressure support, which for example could be important for cluster A2319 (Eckert et al., 2019). In the last column of Table (1) we show estimates of the Bayesian evidence \(\Delta_{\mathcal{B}}\) used to further compare the two models, assuming the standard NFW model as the base model. The NMC DM model is preferred for half of the clusters in the sample, and likewise it is mildly disfavored for the other half (up to the more striking case of RXC1825, for which \(\Delta_{\mathcal{B}}=-3.53\)).
In Table (2) we have reported the concentration \(c\) and virial mass \(M_{500}\) values from our MCMC analysis for the NFW and the NMC DM models. Estimates for these values from the two models are always compatible within the displayed
uncertainties, with the exception of cluster RXC1825's concentration (slightly larger in the NMC framework than in the NFW case) and \(M_{500}\) (conversely, slightly smaller in the NMC case). Despite this overall compatibility, we note that the NMC model predicts concentration values systematically larger than the NFW ones. Table (2) also features the MCMC estimates of the NMC lengthscale \(L_{\rm NMC}\). Overall, these values of \(L_{\rm NMC}\) exceed, on average by two orders of magnitude, those obtained for spiral galaxies in Gandolfi et al. (2022). This result is remarkably consistent with the increasing trend observed for spiral galaxies in Gandolfi et al. (2022) between the mass of dark matter halos and the \(L_{\rm NMC}\) associated with them, as we will show in more detail in Sect. (3.2).
In Fig. (1) and Fig. (2) we show two illustrative profiles (clusters A644 and A2142) obtained with our MCMC analysis, alongside the posterior contour plots for \(\{M_{500},c,L_{\rm NMC}\}\). As for the other clusters, both the NFW and the NMC DM models provide a good description of the general trend of the data. However, the NMC DM model is able to provide a better fit for the clusters whose data at the innermost radii trace a flattening in the shape of the pressure profiles. Such a flattening seems to arise right within the region in which the NMC effect is active (i.e., within a distance \(L_{\rm NMC}\) from the center of the dark halo, represented as a blue shaded area in both Fig. (1) and Fig. (2)). As mentioned above, this NMC effect should be interpreted with caution, given the limitations of the temperature data available in the innermost regions of the clusters.
Fig. (3) shows the one-dimensional posterior distribution of the \(L_{\rm NMC}\) parameter from our MCMC analysis for the X-COP cluster sample. Consistently with the galactic dark halos analyzed in Gandolfi et al. 2022, \(L_{\rm NMC}\) has different values in different halos, depending on their characteristics (in particular on their virial mass). Some halos (e.g. RXC1825 or A85) show a one-dimensional posterior converging towards \(L_{\rm NMC}=0\), suggesting that the dark matter density profile for these halos may have a cuspy shape, well reproduced by the NFW model. In other halos (e.g. A2319 and A2255) the NMC produces typical scale lengths capable of reaching fractions of Mpc. These values are likely to be slightly overestimated since, as previously discussed, some of these clusters exhibit an NMC DM pressure profile featuring a central hole. Despite this, the peak of such a one-dimensional posterior is clearly far from \(L_{\rm NMC}=0\), indicating that the shape of the density profile of these dark halos could be less cuspy and different from that of the
Figure 1: Left: Pressure profile and related contour plots for the A644 cluster. Data are displayed as red dots (Sunyaev-Zel’dovich effect data) and cyan dots (data from the temperature profile by X-ray measurements). The black, solid lines represent the Bayesian MCMC best fit for the NMC DM model, with the grey contour representing the 68 % confidence interval around the best fit line. The dashed blue line represents instead the NFW best fit. The blue shaded area in the profile represents the region of the dark halo within which the NMC is active, i.e. an area that extends from the centre of the halo up until \(L_{\rm NMC}\). Right: The green contours represent the NMC DM model, while the blue contours represent the NFW fit.
NFW profile. As can be seen in the right panel of Fig. (1), the non-zero values for \(L_{\rm NMC}\) are essentially accompanied by a mild positive correlation with \(M_{500}\) and subsequently a non-Gaussian degeneracy with the concentration \(c\). Also, for all the clusters that have a non-zero posterior for the \(L_{\rm NMC}\), we do not observe any such correlation with the \(M_{500}\) parameter, as in the case of A2142, shown in the right panel of Fig. (2). In this context, clusters A2255 and A2319 show a slightly larger value of the lengthscale \(L_{\rm NMC}\) in the posteriors. We also note that for the clusters A2255 and RXC1825, we find a strong bi-modal behavior, from which we select the maximum posterior region. As can be seen also from the corresponding Bayesian evidence in favor of the NMC DM model, the clusters A3158, A2319, and A2255 show a moderate preference (\(\Delta_{log(\mathcal{B})}\gtrsim 2\)), owing to the slightly larger values of \(L_{\rm NMC}\). As can be seen in Figure 6, this evidence in favor of the NMC DM in these three clusters is essentially driven by the improvement of the fit accounting for the innermost data point in the X-ray pressure observations. And on the contrary, the cluster RXC1825 shows a preference for the standard NFW scenario at a similar level of Bayesian evidence.
### \(L_{\rm NMC}\) vs. \(M_{500}\)
In this section, we investigate the relation between the NMC lengthscale \(L_{\rm NMC}\) and the dark halo virial mass \(M_{500}\) that emerges from our analysis. We remark that this relationship is an important feature of the NMC DM model which, as previously stated, is not to be considered a modified theory of gravity; therefore \(L_{\rm NMC}\) should not be thought of as a newly proposed fundamental constant of nature. The observed relationship between \(L_{\rm NMC}\) and \(M_{500}\) shows that \(L_{\rm NMC}\) does not have a universal value, and that it depends on at least one property of the dark haloes under consideration. The \(L_{\rm NMC}\) - \(M_{500}\) relationship was first observed to hold for galactic dark halos in Gandolfi et al. (2022). A remarkable result of that earlier analysis is that such a relationship can be described by a simple power law. In this work, we investigate the validity of this relation up to the virial mass range typical of galaxy clusters. The results of our analysis are shown in Fig. (4). Here, the virial masses of the spiral galaxies and their errors are rescaled from \(M_{200}\) to \(M_{500}\) to homogenize the results. Remarkably, the X-COP cluster data points derived from our MCMC analysis are in agreement with the power-law trend of the \(L_{\rm NMC}\) - \(M_{500}\) relationship observed in Gandolfi et al. (2022). We performed an MCMC fit using the model \(\log_{10}L_{\rm NMC}=a\log_{10}(bM_{500})\) to fit both the galactic and cluster data simultaneously, obtaining the parameter values \(a=0.542\pm 0.005\) and \(b=0.807\pm 0.005\). The slope \(a\) found in this analysis is compatible with the slope found by fitting a similar power law to galaxies only, as done in Gandolfi et al. 2022 (\(0.7\pm 0.2\)). The best-fit line of this work is shown in Fig. (4) as a solid black line, together with a grey shaded area representing the one-sigma confidence interval of the fit. In the same figure, we also show as a grey dotted line the relation \(L_{\rm NMC}=M_{200}^{0.8}\), utilized in Gandolfi et al. 2022 as a reference relation to study the
Figure 2: Same as Fig. (1) but for the A2142 cluster.
capacity of the NMC DM model in reproducing the Radial Acceleration Relation. In the galactic virial mass regime, the two power laws are consistent within a one-sigma confidence limit, and their slopes are compatible within the errors. The updated scaling law retrieved in this work translates into an average variation of the RAR with respect to the one computed in Gandolfi et al. 2022 by a mere 0.33%, with the average of such variation being taken for every radial acceleration bin in which the RAR of Gandolfi et al. 2022 is computed (spanning from a minimum variation of 0.004% to 1.4% among all the bins). We stress that such variation is well within the errors associated to the RAR computed in Gandolfi et al. 2022 for every single bin of radial acceleration. In fact, for the RAR of Gandolfi et al. 2022 the minimum and maximum percentage relative uncertainties are 0.67% and 3.27% respectively, and the average one is 1.85%. We thus conclude that the updated \(L_{\rm NMC}\) - \(M_{500}\) relation retrieved in this work, albeit different from the one considered in Gandolfi et al. 2022, is still able to reproduce the RAR in the galactic dark haloes mass regime. That being said, from Fig. (4) it is possible to appreciate the significant difference between the two power laws when approaching the cluster dark halo mass regime. This essentially constitutes an improvement over the previous analysis which utilized only the galaxies to assess the same relation. As previously mentioned, for some of the clusters the \(L_{\rm NMC}\) values could be slightly overestimated, and hence it is possible that the real best-fit power law could be even less steep than what is found in our analysis. Moreover, we expect that including a galaxy cluster dataset that probes the innermost regions of the halo could help reduce the scatter in the \(L_{\rm NMC}\) - \(M_{500}\) relation.
### Scatter in the \(M_{500}\) vs. \(c\)
In Fig. (5) we test the correlation between concentration \(c\) and \(M_{500}\) values inferred from our MCMC analysis against the relationship between \(c_{200}\) and \(M_{200}\) of dark halos found in Dutton and Maccio 2014, namely:
\[\log_{10}c_{200}=0.905-0.101\log_{10}\left(M_{200}/10^{12}h^{-1}M_{\odot} \right). \tag{8}\]
To make this comparison, we rescale the virial mass \(M_{500}\) of the clusters to \(M_{200}\), recalculating the corresponding concentrations accordingly. We then perform an MCMC fit to find the power law that best describes the data obtained with the NFW model and with the NMC DM model. In both cases, there is some visible difference between the two best-fit power laws and the relationship found in Dutton and Maccio 2014. This is true at least up to the cluster mass regime, where the best-fit power laws of both the NFW and the NMC DM model
Figure 3: The one-dimensional posterior distribution for the lengthscale parameter \(L_{\rm NMC}\) as retrieved in our Bayesian MCMC analysis.
intersect the relation of Dutton & Maccio 2014. Comparing the best-fit power laws with each other, we do not identify important differences between the two models, since the corresponding data have a rather similar scatter around the Dutton & Maccio 2014 relation. This is what we expected from the previous examination of the tabulated results of our MCMC analysis. Figure 5 can also provide interesting qualitative hints on the expected concentrations of sub-haloes in galaxy clusters within this framework. As shown in Meneghetti et al. 2020, the \(\Lambda\)CDM prediction is at variance with the observed density and compactness of the dark matter sub-haloes in galaxy clusters. From our analysis, the NMC DM model predicts galaxy-sized dark matter sub-structures in clusters featuring overall higher concentrations at lower halo mass values with respect to the standard CDM paradigm. However, we caveat that only future analyses relying on high-quality data and exploiting a larger sample of galaxy clusters can confirm this prediction. In this context, the observed tensions at galaxy cluster scales present a promising way to further test the NMC dark matter scenario and its phenomenology.
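The rescaling between overdensity conventions used above can be carried out with the NFW profile itself; the following is a minimal sketch of one standard way to convert \((M_{500},c_{500})\) into \((M_{200},c_{200})\), shown for illustration (our analysis may differ in its implementation details, and the example values are placeholders).

```python
import numpy as np
from scipy.optimize import brentq

def mu_nfw(c):
    """Dimensionless NFW mass function mu(c) = ln(1 + c) - c / (1 + c)."""
    return np.log(1.0 + c) - c / (1.0 + c)

def c500_to_c200(c500):
    # For a given NFW halo, Delta * c_Delta^3 / mu(c_Delta) is fixed by the
    # characteristic density, so it takes the same value at Delta = 500 and Delta = 200.
    k = 500.0 * c500 ** 3 / mu_nfw(c500)
    return brentq(lambda c200: 200.0 * c200 ** 3 / mu_nfw(c200) - k, 0.1, 100.0)

def m500_to_m200(M500, c500):
    c200 = c500_to_c200(c500)
    return M500 * mu_nfw(c200) / mu_nfw(c500), c200

# Example: a cluster with M500 = 6e14 Msun and c500 = 3 (illustrative values)
M200, c200 = m500_to_m200(6e14, 3.0)
```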
## 4 Summary
In this section, we summarize the main results of this work. We tested the NMC DM model against the pressure profiles of galaxy clusters belonging to the X-COP sample, finding that:
Figure 4: Virial mass (\(M_{500}\)) vs. \(L_{\rm NMC}\) relation. Blue triangles are the same spiral galaxies data utilized in Gandolfi et al. 2022, whereas the red circles represent the X-COP cluster measurements found in our Bayesian MCMC analysis. The best-fit power law is represented as a black solid line, whereas the shaded grey area represents a one-sigma confidence interval. The grey dashed line represents the \(M_{500}\) VS \(L_{\rm NMC}\) relation utilized in Gandolfi et al. 2022 to obtain the results therein. Note that the virial masses of spirals and their errors are rescaled to \(M_{500}\) (i.e., a mass at which the interior mean density is 500 times the critical density of the Universe) since they were originally computed as \(M_{200}\) (i.e., a mass at which the interior mean density is 200 times the critical density of the Universe).
* Our model, in which the NMC acts as a perturbation over a cold DM behavior, provides a good description of the cluster pressure profiles, with a fit accuracy comparable to, or in some cases even better than, the NFW model both in terms of the reduced \(\chi^{2}\) and of the Bayesian evidence;
* The \(M_{500}-L_{\rm NMC}\) relation is well described by a simple power law even beyond the mass regime of the spiral galaxies investigated in Gandolfi et al. 2022. However, extending this relationship to include galaxy clusters requires correcting its slope with respect to the value reported in Gandolfi et al. 2022, which was based only on LTGs.
One key issue in our analysis is the lack of data at smaller radii in the pressure profiles of the X-COP clusters, as this may have partially resulted in an overestimation of the \(L_{\rm NMC}\) values inferred in our analysis. Nevertheless, previous works based on X-COP cluster data (see e.g. Haridasu et al. 2021) highlighted that cored profiles seem to better describe the DM density distribution for a few clusters belonging to this sample. Therefore, even if the X-COP cluster profiles were better characterized at inner radii, the NMC DM model would probably still be preferred over the cuspier NFW model for all those clusters exhibiting cored profiles. Indeed, a possible future step to corroborate our analysis would be to use data from galaxy clusters well characterized at small radii (such as data from the CLASH
Figure 5: Concentration vs. virial mass relation. Grey triangles and grey circles are respectively spiral galaxies from Gandolfi et al. 2022 and X-COP clusters’ data obtained with the NFW model. Blue triangles and red circles represent data retrieved assuming the NMC DM model. The orange solid line represents the relation by Dutton and Maccio 2014, featuring a lognormal scatter of 0.11 dex represented by the orange area around the line. The purple dashed line and the pink dashed line represent the \(c_{200}\) vs. \(M_{200}\) relations found for the NFW model and for the NMC DM model, respectively. Note that the cluster virial masses (\(M_{500}\)) and their errors have been rescaled to \(M_{200}\) to make them comparable to the Dutton and Maccio 2014 relation.
collaboration, see e.g. Umetsu et al. 2014), probing the regions where the effect of the NMC is crucial. Another interesting extension of our work concerns the investigation of the mechanism originating the NMC between DM and gravity, and in particular how this mechanism gives rise to the observed power-law relationship between \(L_{\rm NMC}\) and the virial mass \(M_{500}\). For this purpose, we will consider implementing the NMC DM model in full N-body simulations to study the time-dependent conditions and the formation mechanisms of cosmic structures in this framework. In this context, colliding galaxy clusters would constitute promising systems to place constraints on the NMC DM model, in a similar fashion to what is done with self-interacting DM scenarios (see e.g. Robertson et al. 2017). Indeed, the effects of the NMC in colliding systems could be particularly significant in the regions where the DM density changes appreciably as a consequence of the merger of the DM haloes. We expect the repulsive nature of the NMC to manifest at the interface of the collision, with the overall effect of slowing down the merger process. Modelling this scenario is however challenging and calls for dedicated future work. We also stress that another interesting avenue to further characterize the phenomenology of this model is to test it against known tensions on galaxy cluster scales and beyond.
We warmly thank Dominique Eckert for sharing additional data with us, and we thank the anonymous referee for the helpful and constructive comments. AL is supported by the EU H2020-MSCA-ITN-2019 Project 860744 BiD4BESt: Big Data applications for black hole Evolution Studies, and by the PRIN MIUR 2017 prot. 20173ML3WW: Opening the ALMA window on the cosmic evolution of gas, stars and supermassive black holes. BSH is supported by the INFN INDARK grant. |